├── .gitignore ├── CONTRIBUTING.md ├── LICENSE ├── README.md ├── ansible.cfg ├── ansible.sh ├── aws.ini ├── inventory ├── ec2.ini └── ec2.py ├── playbook.sh ├── test.yml └── vars_plugins ├── aws.py └── aws.yml /.gitignore: -------------------------------------------------------------------------------- 1 | *.pyc 2 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Thanks for considering this project! 2 | 3 | Really, thanks for your valuable time. 4 | 5 | Contributions to this project are welcome. If you're unsure whether your contribution would be merged, please open a new issue explaining your idea. 6 | 7 | Unsolicited pull requests are also happily received. 8 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright 2018 Tom Bamford 2 | 3 | Licensed under the Apache License, Version 2.0 (the "License"); 4 | you may not use this file except in compliance with the License. 5 | You may obtain a copy of the License at 6 | 7 | http://www.apache.org/licenses/LICENSE-2.0 8 | 9 | Unless required by applicable law or agreed to in writing, software 10 | distributed under the License is distributed on an "AS IS" BASIS, 11 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | See the License for the specific language governing permissions and 13 | limitations under the License. 
14 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Ansible AWS Vars Plugin 2 | 3 | This is a drop-in plugin for Ansible 2.5+ which provides the following: 4 | 5 | * Searches one or more AWS accounts for VPC, subnet, security group and ELB target group details, 6 | * Matches tags of all these resources with a configured set of tag names, then 7 | * Builds a hierarchical dictionary of resources mapped by tag values, 8 | * All the above information is made available to all hosts managed by Ansible by means of native host variables 9 | * Bonus feature: brings native support for multiple AWS accounts with automatic account switching once per playbook based on extra vars passed at runtime. 10 | 11 | Read below for more detailed explanations. 12 | 13 | 14 | # How To Use 15 | 16 | This module is shipped with a skeleton structure with the intention that you can test it right out of this repository. An example AWS config file is given with `aws.ini`, and the wrapper scripts `ansible.sh`/`playbook.sh` set some useful environment variables for you. 17 | 18 | However, it's more likely that you already have an Ansible project, in which case all you need to do is copy the `vars_plugins/` directory into your project root (relative to your playbooks). The plugin should be automatically detected by Ansible. 19 | 20 | If you have a different vars_plugins directory configured in `ansible.cfg`, just drop `aws.py` and `aws.yml` into that directory instead. 21 | 22 | You'll want to change the settings in `vars_plugins/aws.yml` to match your environment. These settings are: 23 | 24 | `regions:` 25 | This is a list of regions where the plugin will look for resources. 26 | 27 | `use_cache: [yes|no]` 28 | Whether or not to cache resource details after retrieving them. Recommended. 
29 | 30 | `cache_max_age: 600` 31 | How long to cache resource details before retrieving them again from AWS. Defaults to 600 seconds (10 mins). 32 | 33 | `cache_env_vars:` 34 | A list of environment variables to inspect and save the values of when caching resource details. Should the values of any of these environment variables change, the cache will be invalidated. 35 | 36 | `aws_profiles:` 37 | Can be either a list of profile names, or a dictionary having profile names as keys, and each value being a dictionary of extra variables to inspect when selecting a default account for the current playbook (see below). When a list, no matching is performed and no credentials are set. 38 | 39 | `vpc_tags:` 40 | A list of tag keys to match when building a dictionary of VPC IDs. When specified, the global host variable `vpc_ids` contains a nested dictionary of resource IDs denominated by tag values. If not specified, no dictionary is set. 41 | 42 | `subnet_tags:` 43 | A list of tag keys to match when building a dictionary of subnet IDs. When specified, the global host variable `subnet_ids` contains a nested dictionary of resource IDs denominated by tag values. If not specified, no dictionary is set. 44 | 45 | `security_group_tags:` 46 | A list of tag keys to match when building a dictionary of security group IDs. When specified, the global host variable `security_group_ids` contains a nested dictionary of resource IDs denominated by tag values. If not specified, no dictionary is set. 47 | 48 | `elb_target_group_tags:` 49 | A list of tag keys to match when building a dictionary of ELB target group ARNs. When specified, the global host variable `elb_target_group_arns` contains a nested dictionary of resource ARNs denominated by tag values. If not specified, no dictionary is set. 
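Putting the settings above together, a minimal `vars_plugins/aws.yml` might look like the following sketch (the region names, profile names and tag keys are illustrative, not defaults):

```yaml
# Illustrative example only -- substitute your own regions, profiles and tag keys
regions:
  - us-east-1
  - eu-west-1

use_cache: yes
cache_max_age: 600
cache_env_vars:
  - AWS_PROFILE

# A plain list: resources are gathered from every profile,
# but no automatic profile selection is performed
aws_profiles:
  - staging
  - production

vpc_tags:
  - project
subnet_tags:
  - project
  - env
  - tier
security_group_tags:
  - project
  - env
  - tier
elb_target_group_tags:
  - project
  - env
```

Omitting any of the `*_tags` keys simply means the corresponding `*_ids`/`*_arns` dictionary is not built.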
50 | 51 | 52 | # Global Variables 53 | 54 | The following host variables are set by this plugin for every host Ansible attempts to manage, essentially these are global variables usable anywhere within your playbooks or roles. 55 | 56 | - `aws_account_ids` - a dictionary of AWS account IDs with profile name as keys and account ID as values. 57 | - `aws_profile` - the currently selected AWS profile when matched by extra vars 58 | - `elb_target_groups` - a dictionary of ELB target groups with resource ID as keys and dictionary of useful information as values 59 | - `elb_target_group_arns` - a complex nested dictionary of ELB target group ARNs denominated by matched tag values 60 | - `security_groups` - a dictionary of security groups with resource ID as keys and dictionary of useful information as values 61 | - `security_group_ids` - a complex nested dictionary of security group IDs denominated by matched tag values 62 | - `subnets` - a dictionary of subnets with resource ID as keys and dictionary of useful information as values 63 | - `subnet_ids` - a complex nested dictionary of subnet IDs denominated by matched tag values 64 | - `vpcs` - a dictionary of VPCs with resource ID as keys and dictionary of useful information as values 65 | - `vpc_ids` - a complex nested dictionary of VPC IDs denominated by matched tag values 66 | 67 | 68 | # AWS Resources 69 | 70 | In the configuration file for this plugin, you can specify a list of tag keys for each type of supported resource. When the plugin runs (right before playbook execution) it fetches resource descriptions from AWS and organizes them into hierarchical dictionaries based on the values of the configured tag keys. 
71 | 72 | For example, given this configuration: 73 | 74 | ```yaml 75 | subnet_tags: 76 | - project 77 | - env 78 | - tier 79 | ``` 80 | 81 | and having subnets in your AWS account(s) like: 82 | 83 | Subnet ID | Tag: project | Tag: env | Tag: tier 84 | --------------- | ------------ | -------- | --------- 85 | subnet-aabbcc12 | apollo | prod | app 86 | subnet-aabbcc13 | apollo | prod | app 87 | subnet-aabbcc14 | apollo | prod | data 88 | subnet-aabbcc15 | apollo | prod | data 89 | subnet-aabbcc16 | apollo | prod | lb 90 | subnet-aabbcc17 | apollo | prod | lb 91 | subnet-aabbcc18 | apollo | staging | app 92 | subnet-aabbcc19 | apollo | staging | app 93 | subnet-aabbcc20 | apollo | staging | data 94 | subnet-aabbcc21 | apollo | staging | data 95 | subnet-aabbcc22 | apollo | staging | lb 96 | subnet-aabbcc23 | apollo | staging | lb 97 | subnet-aabbcc24 | manhattan | prod | app 98 | subnet-aabbcc25 | manhattan | prod | app 99 | subnet-aabbcc26 | manhattan | prod | data 100 | subnet-aabbcc27 | manhattan | prod | data 101 | subnet-aabbcc28 | manhattan | prod | lb 102 | subnet-aabbcc29 | manhattan | prod | lb 103 | subnet-aabbcc30 | manhattan | staging | app 104 | subnet-aabbcc31 | manhattan | staging | app 105 | subnet-aabbcc32 | manhattan | staging | data 106 | subnet-aabbcc33 | manhattan | staging | data 107 | subnet-aabbcc34 | manhattan | staging | lb 108 | subnet-aabbcc35 | manhattan | staging | lb 109 | 110 | You'll end up with a global dictionary like: 111 | 112 | ```yaml 113 | subnet_ids: 114 | us-east-1: 115 | apollo: 116 | prod: 117 | app: 118 | - subnet-aabbcc12 119 | - subnet-aabbcc13 120 | data: 121 | - subnet-aabbcc14 122 | - subnet-aabbcc15 123 | lb: 124 | - subnet-aabbcc16 125 | - subnet-aabbcc17 126 | staging: 127 | app: 128 | - subnet-aabbcc18 129 | - subnet-aabbcc19 130 | data: 131 | - subnet-aabbcc20 132 | - subnet-aabbcc21 133 | lb: 134 | - subnet-aabbcc22 135 | - subnet-aabbcc23 136 | manhattan: 137 | prod: 138 | app: 139 | - subnet-aabbcc24 140 | - 
subnet-aabbcc25 141 | data: 142 | - subnet-aabbcc26 143 | - subnet-aabbcc27 144 | lb: 145 | - subnet-aabbcc28 146 | - subnet-aabbcc29 147 | staging: 148 | app: 149 | - subnet-aabbcc30 150 | - subnet-aabbcc31 151 | data: 152 | - subnet-aabbcc32 153 | - subnet-aabbcc33 154 | lb: 155 | - subnet-aabbcc34 156 | - subnet-aabbcc35 157 | ``` 158 | 159 | Which you can reference like this: 160 | 161 | ```yaml 162 | - hosts: localhost 163 | connection: local 164 | tasks: 165 | - ec2: 166 | instance_type: t2.micro 167 | vpc_subnet_id: "{{ subnet_ids['us-east-1']['manhattan']['staging']['app'] | random }}" 168 | state: present 169 | ``` 170 | 171 | The same pattern is implemented for VPCs, subnets, security groups and ELB target groups, allowing you to specify and target resources in your playbooks without hard coding resource IDs, without using `*_facts` modules everywhere, and without needing to know the exact names of every resource. It's important to note that these resources can reside in any number of AWS accounts, as long as they can be reached by your Ansible control host. 172 | 173 | 174 | # Multi Account Support 175 | 176 | The multi account support provided by this plugin comes in two flavors. 177 | 178 | 1. With multiple accounts configured as AWS profiles on your Ansible control host, the plugin will traverse all accounts to retrieve resource information. 179 | 2. Additionally, with rules configured in the configuration file, it will inspect any extra vars you specify when running your playbook and *automatically select* one of the profiles to use for your playbook execution. This is accomplished by requesting temporary credentials from STS and exporting them within Ansible as a set of `AWS_*` environment variables. These exported env vars are consumed by Boto2 and Boto3-based modules alike. 180 | 181 | Note that resources are still retrieved from all accounts, even when auto-selection is configured. 
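Because the selected profile and the resource dictionaries are exposed as ordinary host variables, a quick `debug` play is a handy way to confirm what the plugin discovered and which profile (if any) was matched. This is purely an inspection sketch using the variable names documented above:

```yaml
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    # Which profile was auto-selected from extra vars (if matching is configured)
    - debug:
        var: aws_profile

    # Map of profile name -> AWS account ID across all configured accounts
    - debug:
        var: aws_account_ids

    # The nested tag-value hierarchy of subnet IDs for one region
    - debug:
        var: subnet_ids['us-east-1']
```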
182 | 183 | For example, given this configuration: 184 | 185 | ```yaml 186 | aws_profiles: 187 | staging: 188 | env: 189 | - development 190 | - staging 191 | production: 192 | env: production 193 | ``` 194 | 195 | And the following playbook invocation: 196 | 197 | ``` 198 | $ ansible-playbook do-a-thing.yml -e env=staging 199 | ``` 200 | 201 | The plugin would select the `staging` profile, obtain temporary credentials using that profile, then export those credentials to be automatically used by any AWS modules/tasks in your playbook. 202 | 203 | Any number of extra vars can be specified for each profile, and all must match for a profile to be selected. For example, you might have something like this, where each project resides in its own AWS account, and development happens in a default account (possibly the developers' own accounts): 204 | 205 | ```yaml 206 | aws_profiles: 207 | default: 208 | env: development 209 | apollo-staging: 210 | env: staging 211 | project: apollo 212 | apollo-production: 213 | env: production 214 | project: apollo 215 | manhattan-staging: 216 | env: staging 217 | project: manhattan 218 | manhattan-production: 219 | env: production 220 | project: manhattan 221 | ops: 222 | env: ops 223 | ``` 224 | 225 | The primary limitation of this approach is that the AWS account is selected once, prior to playbook execution. However, it's possible to trivially use a different account for a given task by requesting credentials with the `sts_assume_role` module and specifying them explicitly for a task, which overrides the environment variables set by this plugin. 226 | 227 | # Putting It All Together 228 | 229 | By passing the `env`, `project` and `service` extra vars to `ansible-playbook`, you can invoke the multi-account support to auto-select the correct AWS account to use, and use those same extra vars to pick the right resources. 
In the example below, the instance will be launched in the desired account, with the appropriate security group and subnet for the service type. 230 | 231 | ``` 232 | $ ansible-playbook launch.yml -e env=staging -e project=manhattan -e service=app 233 | ``` 234 | ```yaml 235 | - hosts: localhost 236 | connection: local 237 | 238 | vars: 239 | region: us-east-1 240 | 241 | tasks: 242 | 243 | - name: Launch instance 244 | ec2: 245 | region: "{{ region }}" 246 | instance_tags: 247 | env: "{{ env }}" 248 | project: "{{ project }}" 249 | service: "{{ service }}" 250 | instance_type: t2.micro 251 | group_id: "{{ security_group_ids[region][project][env][service] }}" 252 | vpc_subnet_id: "{{ subnet_ids[region][project][env][service] | random }}" 253 | wait: yes 254 | register: result_ec2 255 | 256 | - name: Register in inventory 257 | add_host: 258 | name: "{{ item.private_ip }}" 259 | groups: launch 260 | ec2_id: "{{ item.id }}" 261 | when: item.state == 'running' 262 | with_flattened: 263 | - "{{ result_ec2.instances }}" 264 | 265 | - name: Wait for SSH 266 | wait_for: 267 | host: "{{ item }}" 268 | port: 22 269 | timeout: 420 270 | state: started 271 | with_items: "{{ groups.launch }}" 272 | 273 | 274 | - hosts: launch 275 | roles: 276 | - role: common 277 | ``` 278 | 279 | # Also Note 280 | 281 | ## Regions 282 | 283 | For maximum control, this plugin requires that regions be explicitly configured. The configuration setting `regions` is a list. All regions configured will be searched for resources, and the resulting dictionaries are nested by region. 284 | 285 | ## Resource Caching 286 | 287 | Just like Ansible's EC2 inventory script, this plugin caches all the resource information it finds, in order to speed up subsequent playbook executions. 
The cache can be disabled by setting `use_cache: no` in the configuration file, and the cache timeout (which defaults to 10 minutes) can be specified [in seconds] with the `cache_max_age` setting. 288 | 289 | It's possible that factors outside of Ansible could invalidate cached information, so you can also configure one or more environment variables whose values are saved alongside the cache; if any of them change between playbook runs, the cache is automatically invalidated. 290 | 291 | -------------------------------------------------------------------------------- /ansible.cfg: -------------------------------------------------------------------------------- 1 | # config file for ansible -- https://ansible.com/ 2 | # =============================================== 3 | 4 | # nearly all parameters can be overridden in ansible-playbook 5 | # or with command line flags. ansible will read ANSIBLE_CONFIG, 6 | # ansible.cfg in the current working directory, .ansible.cfg in 7 | # the home directory or /etc/ansible/ansible.cfg, whichever it 8 | # finds first 9 | 10 | [defaults] 11 | 12 | # some basic default values... 13 | 14 | #inventory = /etc/ansible/hosts 15 | #library = /usr/share/my_modules/ 16 | #module_utils = /usr/share/my_module_utils/ 17 | #remote_tmp = ~/.ansible/tmp 18 | #local_tmp = ~/.ansible/tmp 19 | #plugin_filters_cfg = /etc/ansible/plugin_filters.yml 20 | #forks = 5 21 | #poll_interval = 15 22 | #sudo_user = root 23 | #ask_sudo_pass = True 24 | #ask_pass = True 25 | #transport = smart 26 | #remote_port = 22 27 | #module_lang = C 28 | #module_set_locale = False 29 | 30 | # plays will gather facts by default, which contain information about 31 | # the remote system. 
32 | # 33 | # smart - gather by default, but don't regather if already gathered 34 | # implicit - gather by default, turn off with gather_facts: False 35 | # explicit - do not gather by default, must say gather_facts: True 36 | #gathering = implicit 37 | 38 | # This only affects the gathering done by a play's gather_facts directive, 39 | # by default gathering retrieves all facts subsets 40 | # all - gather all subsets 41 | # network - gather min and network facts 42 | # hardware - gather hardware facts (longest facts to retrieve) 43 | # virtual - gather min and virtual facts 44 | # facter - import facts from facter 45 | # ohai - import facts from ohai 46 | # You can combine them using comma (ex: network,virtual) 47 | # You can negate them using ! (ex: !hardware,!facter,!ohai) 48 | # A minimal set of facts is always gathered. 49 | #gather_subset = all 50 | 51 | # some hardware related facts are collected 52 | # with a maximum timeout of 10 seconds. This 53 | # option lets you increase or decrease that 54 | # timeout to something more suitable for the 55 | # environment. 56 | # gather_timeout = 10 57 | 58 | # additional paths to search for roles in, colon separated 59 | #roles_path = /etc/ansible/roles 60 | 61 | # uncomment this to disable SSH key host checking 62 | host_key_checking = False 63 | 64 | # change the default callback, you can only have one 'stdout' type enabled at a time. 65 | #stdout_callback = skippy 66 | 67 | 68 | ## Ansible ships with some plugins that require whitelisting, 69 | ## this is done to avoid running all of a type by default. 70 | ## These setting lists those that you want enabled for your system. 71 | ## Custom plugins should not need this unless plugin author specifies it. 72 | 73 | # enable callback plugins, they can output to stdout but cannot be 'stdout' type. 74 | #callback_whitelist = timer, mail 75 | callback_whitelist = timer 76 | 77 | # Determine whether includes in tasks and handlers are "static" by 78 | # default. 
As of 2.0, includes are dynamic by default. Setting these 79 | # values to True will make includes behave more like they did in the 80 | # 1.x versions. 81 | #task_includes_static = True 82 | #handler_includes_static = True 83 | 84 | # Controls if a missing handler for a notification event is an error or a warning 85 | #error_on_missing_handler = True 86 | 87 | # change this for alternative sudo implementations 88 | #sudo_exe = sudo 89 | 90 | # What flags to pass to sudo 91 | # WARNING: leaving out the defaults might create unexpected behaviours 92 | #sudo_flags = -H -S -n 93 | 94 | # SSH timeout 95 | timeout = 60 96 | 97 | # default user to use for playbooks if user is not specified 98 | # (/usr/bin/ansible will use current user as default) 99 | #remote_user = root 100 | 101 | # logging is off by default unless this path is defined 102 | # if so defined, consider logrotate 103 | #log_path = /var/log/ansible.log 104 | 105 | # default module name for /usr/bin/ansible 106 | #module_name = command 107 | 108 | # use this shell for commands executed under sudo 109 | # you may need to change this to bin/bash in rare instances 110 | # if sudo is constrained 111 | #executable = /bin/sh 112 | 113 | # if inventory variables overlap, does the higher precedence one win 114 | # or are hash values merged together? The default is 'replace' but 115 | # this can also be set to 'merge'. 116 | hash_behaviour = merge 117 | 118 | # by default, variables from roles will be visible in the global variable 119 | # scope. 
To prevent this, the following option can be enabled, and only 120 | # tasks and handlers within the role will see the variables there 121 | #private_role_vars = yes 122 | 123 | # list any Jinja2 extensions to enable here: 124 | #jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n 125 | 126 | # if set, always use this private key file for authentication, same as 127 | # if passing --private-key to ansible or ansible-playbook 128 | #private_key_file = /path/to/file 129 | 130 | # If set, configures the path to the Vault password file as an alternative to 131 | # specifying --vault-password-file on the command line. 132 | #vault_password_file = /path/to/vault_password_file 133 | 134 | # format of string {{ ansible_managed }} available within Jinja2 135 | # templates indicates to users editing templates files will be replaced. 136 | # replacing {file}, {host} and {uid} and strftime codes with proper values. 137 | #ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host} 138 | # {file}, {host}, {uid}, and the timestamp can all interfere with idempotence 139 | # in some situations so the default is a static string: 140 | ansible_managed = This file is managed by Ansible. Local changes may be lost. 141 | 142 | # by default, ansible-playbook will display "Skipping [host]" if it determines a task 143 | # should not be run on a host. Set this to "False" if you don't want to see these "Skipping" 144 | # messages. NOTE: the task header will still be shown regardless of whether or not the 145 | # task is skipped. 146 | #display_skipped_hosts = True 147 | 148 | # by default, if a task in a playbook does not include a name: field then 149 | # ansible-playbook will construct a header that includes the task's action but 150 | # not the task's args. This is a security feature because ansible cannot know 151 | # if the *module* considers an argument to be no_log at the time that the 152 | # header is printed. 
If your environment doesn't have a problem securing 153 | # stdout from ansible-playbook (or you have manually specified no_log in your 154 | # playbook on all of the tasks where you have secret information) then you can 155 | # safely set this to True to get more informative messages. 156 | #display_args_to_stdout = False 157 | 158 | # by default (as of 1.3), Ansible will raise errors when attempting to dereference 159 | # Jinja2 variables that are not set in templates or action lines. Uncomment this line 160 | # to revert the behavior to pre-1.3. 161 | #error_on_undefined_vars = False 162 | 163 | # by default (as of 1.6), Ansible may display warnings based on the configuration of the 164 | # system running ansible itself. This may include warnings about 3rd party packages or 165 | # other conditions that should be resolved if possible. 166 | # to disable these warnings, set the following value to False: 167 | #system_warnings = True 168 | 169 | # by default (as of 1.4), Ansible may display deprecation warnings for language 170 | # features that should no longer be used and will be removed in future versions. 171 | # to disable these warnings, set the following value to False: 172 | #deprecation_warnings = True 173 | 174 | # (as of 1.8), Ansible can optionally warn when usage of the shell and 175 | # command module appear to be simplified by using a default Ansible module 176 | # instead. These warnings can be silenced by adjusting the following 177 | # setting or adding warn=yes or warn=no to the end of the command line 178 | # parameter string. This will for example suggest using the git module 179 | # instead of shelling out to the git command. 
180 | # command_warnings = False 181 | 182 | 183 | # set plugin path directories here, separate with colons 184 | #action_plugins = /usr/share/ansible/plugins/action 185 | #cache_plugins = /usr/share/ansible/plugins/cache 186 | #callback_plugins = /usr/share/ansible/plugins/callback 187 | #connection_plugins = /usr/share/ansible/plugins/connection 188 | #lookup_plugins = /usr/share/ansible/plugins/lookup 189 | #inventory_plugins = /usr/share/ansible/plugins/inventory 190 | #vars_plugins = /usr/share/ansible/plugins/vars 191 | #filter_plugins = /usr/share/ansible/plugins/filter 192 | #test_plugins = /usr/share/ansible/plugins/test 193 | #terminal_plugins = /usr/share/ansible/plugins/terminal 194 | #strategy_plugins = /usr/share/ansible/plugins/strategy 195 | 196 | 197 | # by default, ansible will use the 'linear' strategy but you may want to try 198 | # another one 199 | #strategy = free 200 | 201 | # by default callbacks are not loaded for /bin/ansible, enable this if you 202 | # want, for example, a notification or logging callback to also apply to 203 | # /bin/ansible runs 204 | #bin_ansible_callbacks = False 205 | 206 | 207 | # don't like cows? that's unfortunate. 208 | # set to 1 if you don't want cowsay support or export ANSIBLE_NOCOWS=1 209 | #nocows = 1 210 | 211 | # set which cowsay stencil you'd like to use by default. When set to 'random', 212 | # a random stencil will be selected for each task. The selection will be filtered 213 | # against the `cow_whitelist` option below. 214 | #cow_selection = default 215 | #cow_selection = random 216 | 217 | # when using the 'random' option for cowsay, stencils will be restricted to this list. 218 | # it should be formatted as a comma-separated list with no spaces between names. 219 | # NOTE: line continuations here are for formatting purposes only, as the INI parser 220 | # in python does not support them. 
221 | #cow_whitelist=bud-frogs,bunny,cheese,daemon,default,dragon,elephant-in-snake,elephant,eyes,\ 222 | # hellokitty,kitty,luke-koala,meow,milk,moofasa,moose,ren,sheep,small,stegosaurus,\ 223 | # stimpy,supermilker,three-eyes,turkey,turtle,tux,udder,vader-koala,vader,www 224 | 225 | # don't like colors either? 226 | # set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1 227 | #nocolor = 1 228 | 229 | # if set to a persistent type (not 'memory', for example 'redis') fact values 230 | # from previous runs in Ansible will be stored. This may be useful when 231 | # wanting to use, for example, IP information from one group of servers 232 | # without having to talk to them in the same playbook run to get their 233 | # current IP information. 234 | #fact_caching = memory 235 | 236 | 237 | # retry files 238 | # When a playbook fails by default a .retry file will be created in ~/ 239 | # You can disable this feature by setting retry_files_enabled to False 240 | # and you can change the location of the files by setting retry_files_save_path 241 | 242 | retry_files_enabled = False 243 | #retry_files_save_path = ~/.ansible-retry 244 | 245 | # squash actions 246 | # Ansible can optimise actions that call modules with list parameters 247 | # when looping. Instead of calling the module once per with_ item, the 248 | # module is called once with all items at once. Currently this only works 249 | # under limited circumstances, and only with parameters named 'name'. 250 | #squash_actions = apk,apt,dnf,homebrew,pacman,pkgng,yum,zypper 251 | 252 | # prevents logging of task data, off by default 253 | #no_log = False 254 | 255 | # prevents logging of tasks, but only on the targets, data is still logged on the master/controller 256 | #no_target_syslog = False 257 | 258 | # controls whether Ansible will raise an error or warning if a task has no 259 | # choice but to create world readable temporary files to execute a module on 260 | # the remote machine. 
This option is False by default for security. Users may 261 | # turn this on to have behaviour more like Ansible prior to 2.1.x. See 262 | # https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user 263 | # for more secure ways to fix this than enabling this option. 264 | #allow_world_readable_tmpfiles = False 265 | 266 | # controls the compression level of variables sent to 267 | # worker processes. At the default of 0, no compression 268 | # is used. This value must be an integer from 0 to 9. 269 | #var_compression_level = 9 270 | 271 | # controls what compression method is used for new-style ansible modules when 272 | # they are sent to the remote system. The compression types depend on having 273 | # support compiled into both the controller's python and the client's python. 274 | # The names should match with the python Zipfile compression types: 275 | # * ZIP_STORED (no compression. available everywhere) 276 | # * ZIP_DEFLATED (uses zlib, the default) 277 | # These values may be set per host via the ansible_module_compression inventory 278 | # variable 279 | #module_compression = 'ZIP_DEFLATED' 280 | 281 | # This controls the cutoff point (in bytes) on --diff for files 282 | # set to 0 for unlimited (RAM may suffer!). 283 | #max_diff_size = 1048576 284 | 285 | # This controls how ansible handles multiple --tags and --skip-tags arguments 286 | # on the CLI. If this is True then multiple arguments are merged together. If 287 | # it is False, then the last specified argument is used and the others are ignored. 288 | # This option will be removed in 2.8. 
289 | #merge_multiple_cli_flags = True 290 | 291 | # Controls showing custom stats at the end, off by default 292 | #show_custom_stats = True 293 | 294 | # Controls which files to ignore when using a directory as inventory with 295 | # possibly multiple sources (both static and dynamic) 296 | #inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo 297 | 298 | # This family of modules use an alternative execution path optimized for network appliances 299 | # only update this setting if you know how this works, otherwise it can break module execution 300 | #network_group_modules=eos, nxos, ios, iosxr, junos, vyos 301 | 302 | # When enabled, this option allows lookups (via variables like {{lookup('foo')}} or when used as 303 | # a loop with `with_foo`) to return data that is not marked "unsafe". This means the data may contain 304 | # jinja2 templating language which will be run through the templating engine. 305 | # ENABLING THIS COULD BE A SECURITY RISK 306 | #allow_unsafe_lookups = False 307 | 308 | # set default errors for all plays 309 | #any_errors_fatal = False 310 | 311 | [inventory] 312 | # enable inventory plugins, default: 'host_list', 'script', 'yaml', 'ini' 313 | #enable_plugins = host_list, aws_ec2 314 | 315 | # ignore these extensions when parsing a directory as inventory source 316 | #ignore_extensions = .pyc, .pyo, .swp, .bak, ~, .rpm, .md, .txt, ~, .orig, .ini, .cfg, .retry 317 | 318 | # ignore files matching these patterns when parsing a directory as inventory source 319 | #ignore_patterns= 320 | 321 | # If 'true' unparsed inventory sources become fatal errors, they are warnings otherwise. 322 | #unparsed_is_failed=False 323 | 324 | [privilege_escalation] 325 | #become=True 326 | #become_method=sudo 327 | #become_user=root 328 | #become_ask_pass=False 329 | 330 | [paramiko_connection] 331 | 332 | # uncomment this line to cause the paramiko connection plugin to not record new host 333 | # keys encountered. 
Increases performance on new host additions. Setting works independently of the 334 | # host key checking setting above. 335 | record_host_keys=False 336 | 337 | # by default, Ansible requests a pseudo-terminal for commands executed under sudo. Uncomment this 338 | # line to disable this behaviour. 339 | #pty=False 340 | 341 | # paramiko will default to looking for SSH keys initially when trying to 342 | # authenticate to remote devices. This is a problem for some network devices 343 | # that close the connection after a key failure. Uncomment this line to 344 | # disable the Paramiko look for keys function 345 | #look_for_keys = False 346 | 347 | # When using persistent connections with Paramiko, the connection runs in a 348 | # background process. If the host doesn't already have a valid SSH key, by 349 | # default Ansible will prompt to add the host key. This will cause connections 350 | # running in background processes to fail. Uncomment this line to have 351 | # Paramiko automatically add host keys. 352 | #host_key_auto_add = True 353 | 354 | [ssh_connection] 355 | 356 | # ssh arguments to use 357 | # Leaving off ControlPersist will result in poor performance, so use 358 | # paramiko on older platforms rather than removing it, -C controls compression use 359 | #ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s 360 | 361 | # The base directory for the ControlPath sockets. 362 | # This is the "%(directory)s" in the control_path option 363 | # 364 | # Example: 365 | # control_path_dir = /tmp/.ansible/cp 366 | #control_path_dir = ~/.ansible/cp 367 | 368 | # The path to use for the ControlPath sockets. This defaults to a hashed string of the hostname, 369 | # port and username (empty string in the config). The hash mitigates a common problem users 370 | # found with long hostames and the conventional %(directory)s/ansible-ssh-%%h-%%p-%%r format. 371 | # In those cases, a "too long for Unix domain socket" ssh error would occur. 
372 | # 373 | # Example: 374 | # control_path = %(directory)s/%%h-%%r 375 | #control_path = 376 | 377 | # Enabling pipelining reduces the number of SSH operations required to 378 | # execute a module on the remote server. This can result in a significant 379 | # performance improvement when enabled; however, when using "sudo:" you must 380 | # first disable 'requiretty' in /etc/sudoers 381 | # 382 | # By default, this option is disabled to preserve compatibility with 383 | # sudoers configurations that have requiretty (the default on many distros). 384 | # 385 | pipelining = True 386 | 387 | # Control the mechanism for transferring files (old) 388 | # * smart = try sftp and then try scp [default] 389 | # * True = use scp only 390 | # * False = use sftp only 391 | #scp_if_ssh = smart 392 | 393 | # Control the mechanism for transferring files (new) 394 | # If set, this will override the scp_if_ssh option 395 | # * sftp = use sftp to transfer files 396 | # * scp = use scp to transfer files 397 | # * piped = use 'dd' over SSH to transfer files 398 | # * smart = try sftp, scp, and piped, in that order [default] 399 | #transfer_method = smart 400 | 401 | # if False, sftp will not use batch mode to transfer files. This may make some 402 | # types of file transfer failures impossible to catch, however, and should 403 | # only be disabled if your sftp version has problems with batch mode 404 | #sftp_batch_mode = False 405 | 406 | # The -tt argument is passed to ssh when pipelining is not enabled because sudo 407 | # requires a tty by default. 408 | #use_tty = True 409 | 410 | [persistent_connection] 411 | 412 | # Configures the persistent connection timeout value in seconds. This value is 413 | # how long the persistent connection will remain idle before it is destroyed. 414 | # If the connection doesn't receive a request before the timeout value 415 | # expires, the connection is shut down. The default value is 30 seconds. 
416 | #connect_timeout = 30 417 | 418 | # Configures the persistent connection retry timeout. This value configures 419 | # the amount of time that ansible-connection will wait to connect 420 | # to the local domain socket. This value must be larger than the 421 | # ssh timeout (timeout) and less than persistent connection idle timeout (connect_timeout). 422 | # The default value is 15 seconds. 423 | #connect_retry_timeout = 15 424 | 425 | # The command timeout value defines the amount of time to wait for a command 426 | # or RPC call before timing out. The value for the command timeout must 427 | # be less than the value of the persistent connection idle timeout (connect_timeout). 428 | # The default value is 10 seconds. 429 | #command_timeout = 10 430 | 431 | [accelerate] 432 | #accelerate_port = 5099 433 | #accelerate_timeout = 30 434 | #accelerate_connect_timeout = 5.0 435 | 436 | # The daemon timeout is measured in minutes. This time is measured 437 | # from the last activity to the accelerate daemon. 438 | #accelerate_daemon_timeout = 30 439 | 440 | # If set to yes, accelerate_multi_key will allow multiple 441 | # private keys to be uploaded to it, though each user must 442 | # have access to the system via SSH to add a new key. The default 443 | # is "no". 444 | #accelerate_multi_key = yes 445 | 446 | [selinux] 447 | # file systems that require special treatment when dealing with security context 448 | # the default behaviour that copies the existing context or uses the user default 449 | # needs to be changed to use the file system dependent context. 450 | #special_context_filesystems=nfs,vboxsf,fuse,ramfs,9p 451 | 452 | # Set this to yes to allow libvirt_lxc connections to work without SELinux. 
453 | #libvirt_lxc_noseclabel = yes 454 | 455 | [colors] 456 | #highlight = white 457 | #verbose = blue 458 | #warn = bright purple 459 | #error = red 460 | #debug = dark gray 461 | #deprecate = purple 462 | #skip = cyan 463 | #unreachable = red 464 | #ok = green 465 | #changed = yellow 466 | #diff_add = green 467 | #diff_remove = red 468 | #diff_lines = cyan 469 | 470 | 471 | [diff] 472 | # Always print diff when running ( same as always running with -D/--diff ) 473 | # always = no 474 | 475 | # Set how many context lines to show in diff 476 | # context = 3 477 | -------------------------------------------------------------------------------- /ansible.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | DIR="$(cd "$(dirname "$0")" && pwd)" 4 | 5 | ANSIBLE_VERSION="$(ansible --version | head -1 | awk '{print $2}')" 6 | [[ "${ANSIBLE_VERSION:0:3}" != "2.5" ]] && echo -e "Supported Ansible version: 2.5\nYou are using version: ${ANSIBLE_VERSION:0:3}\n\nPlease install the supported version" >&2 && exit 1 7 | 8 | [[ -z "${ANSIBLE_CONFIG}" ]] && export ANSIBLE_CONFIG="${DIR}/ansible.cfg" 9 | [[ -z "${ANSIBLE_INVENTORY}" ]] && export ANSIBLE_INVENTORY="${DIR}/inventory" 10 | [[ -z "${EC2_INI_PATH}" ]] && export EC2_INI_PATH="${DIR}/inventory/ec2.ini" 11 | [[ -z "${ANSIBLE_ROLES_PATH}" ]] && export ANSIBLE_ROLES_PATH="${DIR}/roles" 12 | [[ -z "${ANSIBLE_VARS_PLUGINS}" ]] && export ANSIBLE_VARS_PLUGINS="${DIR}/vars_plugins" 13 | [[ -z "${AWS_CONFIG_FILE}" ]] && export AWS_CONFIG_FILE="$(readlink -f "${DIR}/aws.ini")" 14 | 15 | echo Running from directory: ${DIR} 16 | echo AWS configuration: ${AWS_CONFIG_FILE} 17 | echo Using inventory from: ${ANSIBLE_INVENTORY} 18 | echo EC2 inventory config: ${EC2_INI_PATH} 19 | echo 20 | 21 | ansible "$@" 22 | exit $? 
23 | 24 | # vim: set ts=2 sts=2 sw=2 et: 25 | -------------------------------------------------------------------------------- /aws.ini: -------------------------------------------------------------------------------- 1 | [profile ops] 2 | region = us-east-1 3 | role_arn = 4 | credential_source = Ec2InstanceMetadata 5 | 6 | [profile staging] 7 | region = us-east-1 8 | role_arn = 9 | credential_source = Ec2InstanceMetadata 10 | 11 | [profile production] 12 | region = us-east-1 13 | role_arn = 14 | credential_source = Ec2InstanceMetadata 15 | -------------------------------------------------------------------------------- /inventory/ec2.ini: -------------------------------------------------------------------------------- 1 | --2018-04-01 18:02:44-- https://github.com/ansible/ansible/raw/devel/contrib/inventory/ec2.ini 2 | Resolving github.com (github.com)... 192.30.253.112, 192.30.253.113 3 | Connecting to github.com (github.com)|192.30.253.112|:443... connected. 4 | HTTP request sent, awaiting response... 302 Found 5 | Location: https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini [following] 6 | --2018-04-01 18:02:44-- https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini 7 | Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.32.133 8 | Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.32.133|:443... connected. 9 | HTTP request sent, awaiting response... 200 OK 10 | Length: 9529 (9.3K) [text/plain] 11 | Saving to: ‘ec2.ini’ 12 | 13 | 0K ......... 100% 142M=0s 14 | 15 | 2018-04-01 18:02:44 (142 MB/s) - ‘ec2.ini’ saved [9529/9529] 16 | 17 | -------------------------------------------------------------------------------- /inventory/ec2.py: -------------------------------------------------------------------------------- 1 | --2018-04-01 18:02:26-- https://github.com/ansible/ansible/raw/devel/contrib/inventory/ec2.py 2 | Resolving github.com (github.com)... 
192.30.253.113, 192.30.253.112 3 | Connecting to github.com (github.com)|192.30.253.113|:443... connected. 4 | HTTP request sent, awaiting response... 302 Found 5 | Location: https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py [following] 6 | --2018-04-01 18:02:26-- https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py 7 | Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.32.133 8 | Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.32.133|:443... connected. 9 | HTTP request sent, awaiting response... 200 OK 10 | Length: 72131 (70K) [text/plain] 11 | Saving to: ‘ec2.py’ 12 | 13 | 0K .......... .......... .......... .......... .......... 70% 36.4M 0s 14 | 50K .......... .......... 100% 55.4M=0.002s 15 | 16 | 2018-04-01 18:02:26 (40.4 MB/s) - ‘ec2.py’ saved [72131/72131] 17 | 18 | -------------------------------------------------------------------------------- /playbook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | DIR="$(cd "$(dirname "$0")" && pwd)" 4 | 5 | ANSIBLE_VERSION="$(ansible --version | head -1 | awk '{print $2}')" 6 | [[ "${ANSIBLE_VERSION:0:3}" != "2.5" ]] && echo -e "Supported Ansible version: 2.5\nYou are using version: ${ANSIBLE_VERSION:0:3}\n\nPlease install the supported version" >&2 && exit 1 7 | 8 | [[ -z "${ANSIBLE_CONFIG}" ]] && export ANSIBLE_CONFIG="${DIR}/ansible.cfg" 9 | [[ -z "${ANSIBLE_INVENTORY}" ]] && export ANSIBLE_INVENTORY="${DIR}/inventory" 10 | [[ -z "${EC2_INI_PATH}" ]] && export EC2_INI_PATH="${DIR}/inventory/ec2.ini" 11 | [[ -z "${ANSIBLE_ROLES_PATH}" ]] && export ANSIBLE_ROLES_PATH="${DIR}/roles" 12 | [[ -z "${ANSIBLE_VARS_PLUGINS}" ]] && export ANSIBLE_VARS_PLUGINS="${DIR}/vars_plugins" 13 | [[ -z "${AWS_CONFIG_FILE}" ]] && export AWS_CONFIG_FILE="$(readlink -f "${DIR}/aws.ini")" 14 | 15 | echo Running from directory: ${DIR} 16 | echo AWS configuration: 
${AWS_CONFIG_FILE} 17 | echo Using inventory from: ${ANSIBLE_INVENTORY} 18 | echo EC2 inventory config: ${EC2_INI_PATH} 19 | echo 20 | 21 | time ansible-playbook -vv "$@" 22 | exit $? 23 | 24 | # vim: set ts=2 sts=2 sw=2 et: 25 | -------------------------------------------------------------------------------- /test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - hosts: localhost 4 | connection: local 5 | 6 | tasks: 7 | 8 | - debug: 9 | var: aws_profile 10 | tags: aws_profile 11 | 12 | - debug: 13 | var: aws_account_ids 14 | tags: aws_account_ids 15 | 16 | - debug: 17 | var: vpcs 18 | tags: vpcs 19 | 20 | - debug: 21 | var: vpc_ids 22 | tags: vpc_ids 23 | 24 | - debug: 25 | var: subnets 26 | tags: subnets 27 | 28 | - debug: 29 | var: subnet_ids 30 | tags: subnet_ids 31 | 32 | - debug: 33 | var: security_groups 34 | tags: security_groups 35 | 36 | - debug: 37 | var: security_group_ids 38 | tags: security_group_ids 39 | 40 | - debug: 41 | var: elb_target_groups 42 | tags: elb_target_groups 43 | 44 | - debug: 45 | var: elb_target_group_arns 46 | tags: elb_target_group_arns 47 | 48 | 49 | # vim: set ft=ansible ts=2 sts=2 sw=2 et: 50 | -------------------------------------------------------------------------------- /vars_plugins/aws.py: -------------------------------------------------------------------------------- 1 | from __future__ import (absolute_import, division, print_function) 2 | __metaclass__ = type 3 | 4 | DOCUMENTATION = ''' 5 | vars: aws 6 | version_added: "2.5" 7 | short_description: Retrieves VPC, subnet, security group and load balancer information from AWS 8 | description: 9 | - Connects to AWS for each of the configured accounts and regions 10 | - Discovers VPC, subnet, security group and ELB target group IDs 11 | - Makes these IDs available as global variables for all hosts 12 | - Caches the above data locally for the configured period (default 5 minutes) 13 | - Optionally matches extra vars with 
configured rules to identify which account should be used for the current playbook, then establishes a temporary session and exports credentials as environment variables for consumption by AWS modules 14 | notes: 15 | - Requires boto3 16 | - Configuration should be placed in aws.yml in the same directory as this file 17 | ''' 18 | 19 | try: 20 | import boto3 21 | import botocore.exceptions 22 | HAS_BOTO3 = True 23 | except ImportError: 24 | HAS_BOTO3 = False 25 | 26 | import argparse 27 | import json, os, re, time, yaml 28 | from ansible.errors import AnsibleParserError 29 | from ansible.plugins.vars import BaseVarsPlugin 30 | 31 | DIR = os.path.dirname(os.path.realpath(__file__)) 32 | 33 | def parse_cli_args(): 34 | parser = argparse.ArgumentParser() 35 | parser.add_argument('-e', '--extra-vars', action='append') 36 | parser.add_argument('--flush-cache', action='store_true', default=False) 37 | opts, unknown = parser.parse_known_args() 38 | args = dict() 39 | if opts.extra_vars: 40 | args['extra_vars'] = dict(e.split('=') for e in opts.extra_vars if '=' in e) 41 | if opts.flush_cache: 42 | args['flush_cache'] = True 43 | return args 44 | 45 | 46 | def load_config(): 47 | ''' Test for configuration file and return configuration dictionary ''' 48 | 49 | with open(os.path.join(DIR, 'aws.yml'), 'r') as stream: 50 | try: 51 | config = yaml.safe_load(stream) 52 | return config 53 | except yaml.YAMLError as e: 54 | raise AnsibleParserError('Failed to read aws.yml: {0}'.format(e)) 55 | 56 | 57 | def append_leaf(d, l, v): 58 | i = l.pop(0) 59 | if len(l): 60 | if i not in d: 61 | d[i] = dict() 62 | d[i] = append_leaf(d[i], l, v) 63 | else: 64 | if i not in d: 65 | d[i] = [] 66 | d[i].append(v) 67 | return d 68 | 69 | 70 | class VarsModule(BaseVarsPlugin): 71 | 72 | CACHE_MAX_AGE = 600 73 | 74 | def __init__(self, *args): 75 | ''' Load configuration and determine cache path ''' 76 | 77 | super(VarsModule, self).__init__(*args) 78 | 79 | self.config = load_config() 80 | 
cli_args = parse_cli_args() 81 | self.extra_vars = cli_args.get('extra_vars', dict()) 82 | self.flush_cache = cli_args.get('flush_cache', False) 83 | 84 | self.cache_path = os.path.expanduser('~/.ansible/tmp/aws-vars.cache') 85 | self.env_cache_path = os.path.expanduser('~/.ansible/tmp/aws-vars.env') 86 | self.use_cache = self.config.get('use_cache', True) in [True, 'yes', 'y', 'true'] 87 | self.cache_env_vars = self.config.get('cache_env_vars', []) 88 | self._connect_profiles() 89 | self._export_credentials() 90 | 91 | 92 | def _connect_profiles(self): 93 | for profile in self._profiles(): 94 | self._init_session(profile) 95 | 96 | 97 | def _export_credentials(self): 98 | self.aws_profile = None 99 | profiles = self.config.get('aws_profiles', ['default']) 100 | 101 | if isinstance(profiles, dict): 102 | profiles_list = list(profiles.keys()) 103 | else: 104 | profiles_list = profiles 105 | 106 | credentials = {profile: self._credentials(profile) for profile in profiles_list} 107 | 108 | profile_override = os.environ.get('ANSIBLE_AWS_PROFILE') 109 | default_profile = None 110 | if profile_override: 111 | if profile_override in profiles: 112 | default_profile = profile_override 113 | elif isinstance(profiles, dict) and self.extra_vars: 114 | for profile, rules in profiles.items(): 115 | if isinstance(rules, dict): 116 | rule_matches = {var: False for var in rules.keys()} 117 | for var, vals in rules.items(): 118 | if isinstance(vals, str): 119 | vals = [vals] 120 | if var in self.extra_vars and self.extra_vars[var] in vals: 121 | rule_matches[var] = True 122 | if all(rule_matches.values()): 123 | default_profile = profile 124 | break 125 | 126 | if default_profile: 127 | self.aws_profile = default_profile 128 | os.environ['AWS_ACCESS_KEY_ID'] = credentials[default_profile].access_key 129 | os.environ['AWS_SECRET_ACCESS_KEY'] = credentials[default_profile].secret_key 130 | os.environ['AWS_SECURITY_TOKEN'] = 
credentials[default_profile].token 131 | os.environ['AWS_SESSION_TOKEN'] = credentials[default_profile].token 132 | 133 | cleaner = re.compile('[^a-zA-Z0-9_]') 134 | for profile, creds in credentials.items(): 135 | profile_clean = cleaner.sub('_', profile).upper() 136 | os.environ['{}_AWS_ACCESS_KEY_ID'.format(profile_clean)] = creds.access_key 137 | os.environ['{}_AWS_SECRET_ACCESS_KEY'.format(profile_clean)] = creds.secret_key 138 | os.environ['{}_AWS_SECURITY_TOKEN'.format(profile_clean)] = creds.token 139 | os.environ['{}_AWS_SESSION_TOKEN'.format(profile_clean)] = creds.token 140 | 141 | 142 | def _init_session(self, profile): 143 | if not hasattr(self, 'sessions'): 144 | self.sessions = dict() 145 | try: 146 | self.sessions[profile] = boto3.Session(profile_name=profile) 147 | except botocore.exceptions.ProfileNotFound: 148 | if profile == 'default': 149 | self.sessions[profile] = boto3.Session() 150 | else: 151 | raise 152 | 153 | 154 | def _session(self, profile): 155 | return self.sessions[profile] 156 | 157 | 158 | def _credentials(self, profile): 159 | return self.sessions[profile].get_credentials().get_frozen_credentials() 160 | 161 | 162 | def _profiles(self): 163 | profiles = self.config.get('aws_profiles', ['default']) 164 | if isinstance(profiles, dict): 165 | return list(profiles.keys()) 166 | else: 167 | return list(profiles) 168 | 169 | 170 | def _get_account_ids(self): 171 | ''' Retrieve AWS account IDs ''' 172 | self.account_ids = {p: self.sessions[p].client('sts').get_caller_identity()['Account'] for p in self._profiles()} 173 | 174 | 175 | def _get_vpc_ids(self): 176 | ''' Retrieve all VPC details from AWS API ''' 177 | 178 | self.vpcs = dict() 179 | self.vpc_ids = dict() 180 | tag_list = self.config.get('vpc_tags', []) 181 | for region in self.config.get('regions', []): 182 | for profile in self._profiles(): 183 | client = self._session(profile).client('ec2', region_name=region) 184 | vpcs_result = client.describe_vpcs() 185 | if 
vpcs_result and 'Vpcs' in vpcs_result and len(vpcs_result['Vpcs']): 186 | for vpc in vpcs_result['Vpcs']: 187 | self.vpcs[vpc['VpcId']] = dict( 188 | cidr_block=vpc['CidrBlock'], 189 | is_default=vpc['IsDefault'], 190 | instance_tenancy=vpc['InstanceTenancy'], 191 | profile=profile, 192 | region=region, 193 | state=vpc['State'], 194 | ) 195 | if 'Tags' in vpc: 196 | tags = dict((t['Key'], t['Value']) for t in vpc['Tags']) 197 | self.vpcs[vpc['VpcId']]['tags'] = tags 198 | if tag_list: 199 | ind = [tags[t] for t in tag_list if t in tags] 200 | if len(ind) == len(tag_list): 201 | self.vpc_ids = append_leaf(self.vpc_ids, [region] + ind, vpc['VpcId']) 202 | 203 | 204 | def _get_subnets(self): 205 | ''' Retrieve all subnet details from AWS API ''' 206 | 207 | self.subnets = dict() 208 | self.subnet_ids = dict() 209 | tag_list = self.config.get('subnet_tags', []) 210 | for region in self.config.get('regions', []): 211 | for profile in self._profiles(): 212 | client = self._session(profile).client('ec2', region_name=region) 213 | subnets_result = client.describe_subnets() 214 | if subnets_result and 'Subnets' in subnets_result and len(subnets_result['Subnets']): 215 | for subnet in subnets_result['Subnets']: 216 | self.subnets[subnet['SubnetId']] = dict( 217 | cidr=subnet['CidrBlock'], 218 | zone=subnet['AvailabilityZone'], 219 | profile=profile, 220 | region=region, 221 | vpc_id=subnet['VpcId'], 222 | ) 223 | if 'Tags' in subnet: 224 | tags = dict((t['Key'], t['Value']) for t in subnet['Tags']) 225 | self.subnets[subnet['SubnetId']]['tags'] = tags 226 | if tag_list: 227 | ind = [tags[t] for t in tag_list if t in tags] 228 | if len(ind) == len(tag_list): 229 | self.subnet_ids = append_leaf(self.subnet_ids, [region] + ind, subnet['SubnetId']) 230 | 231 | 232 | def _get_security_groups(self): 233 | ''' Retrieve all security group details from AWS API ''' 234 | self.security_groups = dict() 235 | self.security_group_ids = dict() 236 | tag_list = 
self.config.get('security_group_tags', []) 237 | for region in self.config.get('regions', []): 238 | for profile in self._profiles(): 239 | client = self._session(profile).client('ec2', region_name=region) 240 | groups_result = client.describe_security_groups() 241 | if groups_result and 'SecurityGroups' in groups_result and len(groups_result['SecurityGroups']): 242 | for group in groups_result['SecurityGroups']: 243 | self.security_groups[group['GroupId']] = dict( 244 | name=group['GroupName'], 245 | profile=profile, 246 | region=region, 247 | ) 248 | if 'VpcId' in group: 249 | self.security_groups[group['GroupId']]['type'] = 'vpc' 250 | self.security_groups[group['GroupId']]['vpc_id'] = group['VpcId'] 251 | else: 252 | self.security_groups[group['GroupId']]['type'] = 'classic' 253 | if 'Tags' in group: 254 | tags = dict((t['Key'], t['Value']) for t in group['Tags']) 255 | self.security_groups[group['GroupId']]['tags'] = tags 256 | if tag_list: 257 | ind = [tags[t] for t in tag_list if t in tags] 258 | if len(ind) == len(tag_list): 259 | self.security_group_ids = append_leaf(self.security_group_ids, [region] + ind, group['GroupId']) 260 | 261 | 262 | def _get_elb_target_groups(self): 263 | ''' Retrieve all LB target group details from AWS API ''' 264 | 265 | self.elb_target_groups = dict() 266 | self.elb_target_group_arns = dict() 267 | tag_list = self.config.get('elb_target_group_tags', []) 268 | for region in self.config.get('regions', []): 269 | for profile in self._profiles(): 270 | client = self._session(profile).client('elbv2', region_name=region) 271 | groups_result = client.describe_target_groups() 272 | if groups_result and 'TargetGroups' in groups_result and len(groups_result['TargetGroups']): 273 | groups = dict() 274 | for group in groups_result['TargetGroups']: 275 | groups[group['TargetGroupArn']] = dict( 276 | name=group['TargetGroupName'], 277 | protocol=group['Protocol'], 278 | port=group['Port'], 279 | load_balancer_arns=group['LoadBalancerArns'], 280 | 
profile=profile, 281 | region=region, 282 | target_type=group['TargetType'], 283 | vpc_id=group['VpcId'], 284 | ) 285 | self.elb_target_groups.update(groups) 286 | tags_result = client.describe_tags(ResourceArns=list(groups.keys())) 287 | if tags_result and 'TagDescriptions' in tags_result and len(tags_result['TagDescriptions']): 288 | for group in tags_result['TagDescriptions']: 289 | if 'Tags' in group: 290 | tags = dict((t['Key'], t['Value']) for t in group['Tags']) 291 | self.elb_target_groups[group['ResourceArn']]['tags'] = tags 292 | if tag_list: 293 | ind = [tags[t] for t in tag_list if t in tags] 294 | if len(ind) == len(tag_list): 295 | self.elb_target_group_arns = append_leaf(self.elb_target_group_arns, [region] + ind, group['ResourceArn']) 296 | 297 | 298 | def _get_vars_from_api(self): 299 | ''' Retrieve AWS resources from AWS API ''' 300 | 301 | self._get_account_ids() 302 | self._get_vpc_ids() 303 | self._get_security_groups() 304 | self._get_subnets() 305 | self._get_elb_target_groups() 306 | 307 | return dict( 308 | aws_account_ids=self.account_ids, 309 | elb_target_groups=self.elb_target_groups, 310 | elb_target_group_arns=self.elb_target_group_arns, 311 | security_groups=self.security_groups, 312 | security_group_ids=self.security_group_ids, 313 | subnets=self.subnets, 314 | subnet_ids=self.subnet_ids, 315 | vpcs=self.vpcs, 316 | vpc_ids=self.vpc_ids, 317 | ) 318 | 319 | 320 | def _get_vars_from_cache(self): 321 | ''' Load AWS resources from JSON cache file ''' 322 | 323 | with open(self.cache_path, 'r') as cache: 324 | aws_vars = json.load(cache) 325 | return aws_vars 326 | 327 | 328 | def _check_env_var_cache(self): 329 | ''' Check the environment variable cache to see if any values have changed ''' 330 | if not os.path.isfile(self.env_cache_path): 331 | return False 332 | 333 | env_cache = open(self.env_cache_path, 'r') 334 | env_data = json.load(env_cache) 335 | for v in self.cache_env_vars: 336 | if v not in env_data or env_data[v] != os.environ.get(v, 
''): 337 | return False 338 | return True 339 | 340 | 341 | def _is_cache_valid(self): 342 | ''' Determines if the cache files have expired, or if it is still valid ''' 343 | 344 | if self.use_cache and not self.flush_cache: 345 | if os.path.isfile(self.cache_path): 346 | mod_time = os.path.getmtime(self.cache_path) 347 | current_time = time.time() 348 | if (mod_time + self.config.get('cache_max_age', self.CACHE_MAX_AGE)) > current_time: 349 | if self.cache_env_vars: 350 | return self._check_env_var_cache() 351 | else: 352 | return True 353 | return False 354 | 355 | 356 | def _save_cache(self, data): 357 | ''' Write AWS vars in JSON format to cache file ''' 358 | 359 | if self.use_cache: 360 | cache = open(self.cache_path, 'w') 361 | json_data = json.dumps(data) 362 | cache.write(json_data) 363 | cache.close() 364 | 365 | if self.cache_env_vars: 366 | env_cache = open(self.env_cache_path, 'w') 367 | env_data = {v: os.environ.get(v, '') for v in self.cache_env_vars} 368 | json_env_data = json.dumps(env_data) 369 | env_cache.write(json_env_data) 370 | env_cache.close() 371 | 372 | 373 | def get_vars(self, loader, path, entities, cache=True): 374 | if not HAS_BOTO3: 375 | raise AnsibleParserError('AWS vars plugin requires boto3') 376 | 377 | #raise AnsibleParserError(self.extra_vars) 378 | 379 | super(VarsModule, self).get_vars(loader, path, entities) 380 | 381 | if self._is_cache_valid(): 382 | data = self._get_vars_from_cache() 383 | else: 384 | data = self._get_vars_from_api() 385 | self._save_cache(data) 386 | 387 | data['aws_profile'] = self.aws_profile 388 | return data 389 | 390 | 391 | # vim: set ft=python ts=4 sts=4 sw=4 et: 392 | -------------------------------------------------------------------------------- /vars_plugins/aws.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | # List of regions in which to look for AWS resources 4 | regions: 5 | - us-east-1 6 | 7 | # Whether or not to cache VPC, subnet, 
security group and LB target group information 8 | use_cache: yes 9 | 10 | # Maximum time in seconds to cache VPC, subnet and security group information (default: 600) 11 | #cache_max_age: 600 12 | 13 | # Environment variables to use when saving cache. If any of these variables change, the cache is invalidated. 14 | # Useful when using multiple AWS configurations or profiles outside of Ansible. 15 | cache_env_vars: 16 | - AWS_CONFIG_FILE 17 | - AWS_PROFILE 18 | 19 | # AWS profile matching 20 | # When a list, the profiles are used only for looking up resource IDs 21 | # When a dict, each profile's rules are also matched against extra vars to decide which profile's credentials to export 22 | aws_profiles: 23 | staging: 24 | env: 25 | - development 26 | - staging 27 | production: 28 | env: production 29 | ops: 30 | env: ops 31 | 32 | # Use tags to build a hierarchical dictionary of VPC IDs 33 | vpc_tags: 34 | - project 35 | - env 36 | 37 | # Use tags to build a hierarchical dictionary of subnet IDs 38 | subnet_tags: 39 | - project 40 | - env 41 | - tier 42 | 43 | # Use tags to build a hierarchical dictionary of security group IDs 44 | security_group_tags: 45 | - project 46 | - env 47 | - service 48 | 49 | # Use tags to build a hierarchical dictionary of ELB target group ARNs 50 | elb_target_group_tags: 51 | - project 52 | - env 53 | - service 54 | 55 | # vim: set ts=2 sts=2 sw=2 et: 56 | --------------------------------------------------------------------------------
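As a closing illustration of the hierarchy these tag lists produce: the plugin's `append_leaf` helper in `vars_plugins/aws.py` nests IDs under the region first, then under each configured tag value in order. A minimal standalone sketch, using hypothetical tag values and subnet IDs:

```python
# append_leaf, as defined in vars_plugins/aws.py: walks the key path l into
# dict d, creating nested dicts as needed, and appends v to a list at the leaf.
def append_leaf(d, l, v):
    i = l.pop(0)
    if len(l):
        if i not in d:
            d[i] = dict()
        d[i] = append_leaf(d[i], l, v)
    else:
        if i not in d:
            d[i] = []
        d[i].append(v)
    return d

# With subnet_tags [project, env, tier], the key path for each subnet is
# [region, project, env, tier]. Tag values and subnet IDs below are made up.
subnet_ids = {}
append_leaf(subnet_ids, ['us-east-1', 'myproject', 'staging', 'web'], 'subnet-aaaa1111')
append_leaf(subnet_ids, ['us-east-1', 'myproject', 'staging', 'web'], 'subnet-bbbb2222')

print(subnet_ids)
# {'us-east-1': {'myproject': {'staging': {'web': ['subnet-aaaa1111', 'subnet-bbbb2222']}}}}
```

In a playbook, the same structure surfaces as an ordinary host variable, e.g. `{{ subnet_ids['us-east-1']['myproject']['staging']['web'] }}`.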