├── Makefile
├── README.md
├── redhat
│   └── ansible-dynamic-inventory-ec2.spec
├── ec2.ini
├── LICENSE
└── ec2.py
/Makefile:
--------------------------------------------------------------------------------
1 | PYTHON_SITELIB_DIR ?= usr/lib/python2.7/site-packages
2 | 
3 | all: # nothing to build
4 | 
5 | install:
6 | 	mkdir -p $(DESTDIR)/$(PYTHON_SITELIB_DIR)/ansible/contrib/inventory
7 | 	cp -v ec2.py $(DESTDIR)/$(PYTHON_SITELIB_DIR)/ansible/contrib/inventory
8 | 	chmod +x $(DESTDIR)/$(PYTHON_SITELIB_DIR)/ansible/contrib/inventory/ec2.py
9 | 	cp -v ec2.ini $(DESTDIR)/$(PYTHON_SITELIB_DIR)/ansible/contrib/inventory
10 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # RPM package for Ansible dynamic inventory EC2
2 | 
3 | This package provides build files for an RPM package that includes the dynamic
4 | inventory script released with the Ansible 2.9 upstream branch.
5 | 
6 | ## Prerequisites
7 | 
8 | * RPM build environment, e.g. https://github.com/mhutter/docker-rpmbuild/
9 | 
10 | The script works with Ansible 2.9 (or newer).
11 | 
12 | ## Getting started
13 | 
14 | The package contains the script ec2.py and the config file ec2.ini from the upstream Ansible
15 | repository: https://github.com/ansible/ansible/tree/stable-2.9/contrib/inventory (more recent versions have moved to https://github.com/ansible-community/contrib-scripts/blob/main/inventory/)
16 | 
17 | If you are searching for a manual, you may find these resources helpful:
18 | 
19 | * http://docs.ansible.com/ansible/latest/intro_dynamic_inventory.html#example-aws-ec2-external-inventory-script
20 | * https://aws.amazon.com/blogs/apn/getting-started-with-ansible-and-dynamic-amazon-ec2-inventory-management/
21 | 
22 | ## Contributions
23 | 
24 | Each contribution is very welcome--be it an issue or a pull request. We're
25 | happy to accept pull requests so long as they meet the existing code quality
26 | and design.
27 | 
28 | 1. Fork the repository (https://github.com/appuio/ansible-dynamic-inventory-ec2/fork)
29 | 2. Create a feature branch (`git checkout -b my-new-feature`)
30 | 3. Commit your changes (`git commit -av`)
31 | 4. Push to the branch (`git push origin my-new-feature`)
32 | 5. Create a pull request
33 | 
34 | ## License
35 | 
36 | GPL 3.0
37 | 
38 | ## Author Information
39 | 
40 | APPUiO Team
41 | 
--------------------------------------------------------------------------------
/redhat/ansible-dynamic-inventory-ec2.spec:
--------------------------------------------------------------------------------
1 | Summary: Ansible dynamic inventory script and config for Amazon EC2
2 | Name: ansible-dynamic-inventory-ec2
3 | Version: 2.9
4 | Release: 1
5 | License: GPL-3.0
6 | Source: .
7 | URL: https://github.com/appuio/ansible-dynamic-inventory-ec2
8 | Vendor: VSHN AG
9 | Packager: Gabriel Mainberger
10 | Requires: ansible >= 2.9
11 | Requires: python-boto >= 2.24.0
12 | 
13 | %package config
14 | Summary: Config file for the Ansible dynamic inventory script
15 | Group: Development/Libraries
16 | 
17 | %description
18 | Dynamic inventory script for Amazon EC2, which is part of the Ansible source release but not included in the standard RPM
19 | 
20 | %description config
21 | Config file for the EC2 inventory script
22 | 
23 | %prep
24 | %setup -cT
25 | cp -v -R -a %SOURCE0/* .
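# Illustrative example (not part of the upstream spec): assuming the repository
# checkout is used as the RPM source directory, a local binary build could look
# like this:
#   rpmbuild -bb redhat/ansible-dynamic-inventory-ec2.spec --define "_sourcedir $(pwd)"
# Adjust the invocation to your own build environment (see the README prerequisites).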
26 | 
27 | %build
28 | make 'PYTHON_SITELIB_DIR=%{python_sitelib}'
29 | 
30 | %install
31 | %make_install 'PYTHON_SITELIB_DIR=%{python_sitelib}'
32 | 
33 | %files
34 | %{python_sitelib}/ansible/contrib/inventory/ec2.py
35 | %{python_sitelib}/ansible/contrib/inventory/ec2.ini
36 | 
37 | %exclude %{python_sitelib}/ansible/contrib/inventory/ec2.pyc
38 | %exclude %{python_sitelib}/ansible/contrib/inventory/ec2.pyo
39 | 
40 | %changelog
41 | * Thu Dec 31 2020 Simon Gerber 2.9-1
42 | - Update ec2.py to version shipped with Ansible 2.9
43 | - Ensure host ordering in groups is stable
44 | 
45 | * Wed Apr 19 2018 Gabriel Mainberger 2.4-3
46 | - Move ini to PYTHON_SITELIB_DIR directory
47 | 
48 | * Wed Mar 15 2018 Gabriel Mainberger 2.4-2
49 | - Fixed file execution permission on ec2.py
50 | 
51 | * Mon Mar 12 2018 Gabriel Mainberger 2.4-1
52 | - Initial release for Red Hat
53 | 
54 | # vim: set sw=2 sts=2 et :
55 | 
--------------------------------------------------------------------------------
/ec2.ini:
--------------------------------------------------------------------------------
1 | # Ansible EC2 external inventory script settings
2 | #
3 | 
4 | [ec2]
5 | 
6 | # To talk to a private eucalyptus instance, uncomment these lines
7 | # and edit eucalyptus_host to be the host name of your cloud controller
8 | #eucalyptus = True
9 | #eucalyptus_host = clc.cloud.domain.org
10 | 
11 | # AWS regions to make calls to. Set this to 'all' to make requests to all regions
12 | # in AWS and merge the results together. Alternatively, set this to a comma
13 | # separated list of regions. E.g. 'us-east-1,us-west-1,us-west-2' and do not
14 | # provide the 'regions_exclude' option. If this is set to 'auto', the AWS_REGION or
15 | # AWS_DEFAULT_REGION environment variable will be read to determine the region.
16 | regions = all
17 | regions_exclude = us-gov-west-1, cn-north-1
18 | 
19 | # When generating inventory, Ansible needs to know how to address a server.
20 | # Each EC2 instance has a lot of variables associated with it. Here is the list:
21 | # http://docs.pythonboto.org/en/latest/ref/ec2.html#module-boto.ec2.instance
22 | # Below are 2 variables that are used as the address of a server:
23 | # - destination_variable
24 | # - vpc_destination_variable
25 | 
26 | # This is the normal destination variable to use. If you are running Ansible
27 | # from outside EC2, then 'public_dns_name' makes the most sense. If you are
28 | # running Ansible from within EC2, then perhaps you want to use the internal
29 | # address, and should set this to 'private_dns_name'. The key of an EC2 tag
30 | # may optionally be used; however the boto instance variables hold precedence
31 | # in the event of a collision.
32 | destination_variable = public_dns_name
33 | 
34 | # This allows you to override the inventory_name with an ec2 variable, instead
35 | # of using the destination_variable above. Addressing (aka ansible_ssh_host)
36 | # will still use destination_variable. Tags should be written as 'tag_TAGNAME'.
37 | #hostname_variable = tag_Name
38 | 
39 | # For servers inside a VPC, using DNS names may not make sense. When an instance
40 | # has 'subnet_id' set, this variable is used. If the subnet is public, setting
41 | # this to 'ip_address' will return the public IP address. For instances in a
42 | # private subnet, this should be set to 'private_ip_address', and Ansible must
43 | # be run from within EC2. The key of an EC2 tag may optionally be used; however
44 | # the boto instance variables hold precedence in the event of a collision.
45 | # WARNING: instances that are in a private VPC _without_ a public IP address
46 | # will not be listed in the inventory until you set:
47 | # vpc_destination_variable = private_ip_address
48 | vpc_destination_variable = ip_address
49 | 
50 | # The following two settings allow flexible ansible host naming based on a
51 | # python format string and a comma-separated list of ec2 tags. Note that:
52 | #
53 | # 1) If the tags referenced are not present for some instances, empty strings
54 | # will be substituted in the format string.
55 | # 2) This overrides both destination_variable and vpc_destination_variable.
56 | #
57 | #destination_format = {0}.{1}.example.com
58 | #destination_format_tags = Name,environment
59 | 
60 | # To tag instances on EC2 with the resource records that point to them from
61 | # Route53, set 'route53' to True.
62 | route53 = False
63 | 
64 | # To use Route53 records as the inventory hostnames, uncomment and set
65 | # to equal the domain name you wish to use. You must also have 'route53' (above)
66 | # set to True.
67 | # route53_hostnames = .example.com
68 | 
69 | # To exclude RDS instances from the inventory, uncomment and set to False.
70 | #rds = False
71 | 
72 | # To exclude ElastiCache instances from the inventory, uncomment and set to False.
73 | #elasticache = False
74 | 
75 | # Additionally, you can specify a list of zones to exclude from lookups in
76 | # 'route53_excluded_zones' as a comma-separated list.
77 | # route53_excluded_zones = samplezone1.com, samplezone2.com
78 | 
79 | # By default, only EC2 instances in the 'running' state are returned. Set
80 | # 'all_instances' to True to return all instances regardless of state.
81 | all_instances = False
82 | 
83 | # By default, only EC2 instances in the 'running' state are returned. Specify
84 | # EC2 instance states to return as a comma-separated list. This
85 | # option is overridden when 'all_instances' is True.
86 | # instance_states = pending, running, shutting-down, terminated, stopping, stopped
87 | 
88 | # By default, only RDS instances in the 'available' state are returned. Set
89 | # 'all_rds_instances' to True to return all RDS instances regardless of state.
90 | all_rds_instances = False
91 | 
92 | # Include RDS cluster information (Aurora etc.)
93 | include_rds_clusters = False
94 | 
95 | # By default, only ElastiCache clusters and nodes in the 'available' state
96 | # are returned. Set 'all_elasticache_clusters' and/or 'all_elasticache_nodes'
97 | # to True to return all ElastiCache clusters and nodes, regardless of state.
98 | #
99 | # Note that all_elasticache_nodes only applies to listed clusters. That means
100 | # if you set all_elasticache_clusters to False, no nodes will be returned from
101 | # unavailable clusters, regardless of their state or of what you set for
102 | # all_elasticache_nodes.
103 | all_elasticache_replication_groups = False
104 | all_elasticache_clusters = False
105 | all_elasticache_nodes = False
106 | 
107 | # API calls to EC2 are slow. For this reason, we cache the results of an API
108 | # call. Set this to the path you want cache files to be written to. Two files
109 | # will be written to this directory:
110 | # - ansible-ec2.cache
111 | # - ansible-ec2.index
112 | cache_path = ~/.ansible/tmp
113 | 
114 | # The number of seconds a cache file is considered valid. After this many
115 | # seconds, a new API call will be made, and the cache file will be updated.
116 | # To disable the cache, set this value to 0
117 | cache_max_age = 300
118 | 
119 | # Organize groups into a nested hierarchy instead of a flat namespace.
120 | nested_groups = False
121 | 
122 | # Replace dashes ('-') when creating groups to avoid issues with Ansible
123 | replace_dash_in_groups = True
124 | 
125 | # If set to true, any tag of the form "a,b,c" is expanded into a list
126 | # and the results are used to create additional tag_* inventory groups.
127 | expand_csv_tags = False
128 | 
129 | # The EC2 inventory output can become very large. To manage its size,
130 | # configure which groups should be created.
131 | group_by_instance_id = True
132 | group_by_region = True
133 | group_by_availability_zone = True
134 | group_by_aws_account = False
135 | group_by_ami_id = True
136 | group_by_instance_type = True
137 | group_by_instance_state = False
138 | group_by_platform = True
139 | group_by_key_pair = True
140 | group_by_vpc_id = True
141 | group_by_security_group = True
142 | group_by_tag_keys = True
143 | group_by_tag_none = True
144 | group_by_route53_names = True
145 | group_by_rds_engine = True
146 | group_by_rds_parameter_group = True
147 | group_by_elasticache_engine = True
148 | group_by_elasticache_cluster = True
149 | group_by_elasticache_parameter_group = True
150 | group_by_elasticache_replication_group = True
151 | 
152 | # If you only want to include hosts that match a certain regular expression
153 | # pattern_include = staging-*
154 | 
155 | # If you want to exclude any hosts that match a certain regular expression
156 | # pattern_exclude = staging-*
157 | 
158 | # Instance filters can be used to control which instances are retrieved for
159 | # inventory. For the full list of possible filters, please read the EC2 API
160 | # docs: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeInstances.html#query-DescribeInstances-filters
161 | # Filters are key/value pairs separated by '='. To list multiple filters, use
162 | # a comma-separated list. See examples below.
163 | 
164 | # If you want to apply multiple filters simultaneously, set stack_filters to
165 | # True. Default behaviour is to combine the results of all filters. Stacking
166 | # allows the use of multiple conditions to filter down, for example by
167 | # environment and type of host.
168 | stack_filters = False
169 | 
170 | # Retrieve only instances with (key=value) env=staging tag
171 | # instance_filters = tag:env=staging
172 | 
173 | # Retrieve only instances with role=webservers OR role=dbservers tag
174 | # instance_filters = tag:role=webservers,tag:role=dbservers
175 | 
176 | # Retrieve only t1.micro instances OR instances with tag env=staging
177 | # instance_filters = instance-type=t1.micro,tag:env=staging
178 | 
179 | # You can also use wildcards in filter values. The filter below lists instances whose
180 | # tag Name value matches webservers1*
181 | # (e.g. webservers15, webservers1a, webservers123, etc.)
182 | # instance_filters = tag:Name=webservers1*
183 | 
184 | # An IAM role can be assumed, so all requests are run as that role.
185 | # This can be useful for connecting across different accounts, or to limit user
186 | # access.
187 | # iam_role = role-arn
188 | 
189 | # A boto configuration profile may be used to separate out credentials;
190 | # see http://boto.readthedocs.org/en/latest/boto_config_tut.html
191 | # boto_profile = some-boto-profile-name
192 | 
193 | 
194 | [credentials]
195 | 
196 | # The AWS credentials can optionally be specified here.
Credentials specified 197 | # here are ignored if the environment variable AWS_ACCESS_KEY_ID or 198 | # AWS_PROFILE is set, or if the boto_profile property above is set. 199 | # 200 | # Supplying AWS credentials here is not recommended, as it introduces 201 | # non-trivial security concerns. When going down this route, please make sure 202 | # to set access permissions for this file correctly, e.g. handle it the same 203 | # way as you would a private SSH key. 204 | # 205 | # Unlike the boto and AWS configure files, this section does not support 206 | # profiles. 207 | # 208 | # aws_access_key_id = AXXXXXXXXXXXXXX 209 | # aws_secret_access_key = XXXXXXXXXXXXXXXXXXX 210 | # aws_security_token = XXXXXXXXXXXXXXXXXXXXXXXXXXXX 211 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | Version 3, 29 June 2007 3 | 4 | Copyright (C) 2007 Free Software Foundation, Inc. 5 | Everyone is permitted to copy and distribute verbatim copies 6 | of this license document, but changing it is not allowed. 7 | 8 | Preamble 9 | 10 | The GNU General Public License is a free, copyleft license for 11 | software and other kinds of works. 12 | 13 | The licenses for most software and other practical works are designed 14 | to take away your freedom to share and change the works. By contrast, 15 | the GNU General Public License is intended to guarantee your freedom to 16 | share and change all versions of a program--to make sure it remains free 17 | software for all its users. We, the Free Software Foundation, use the 18 | GNU General Public License for most of our software; it applies also to 19 | any other work released this way by its authors. You can apply it to 20 | your programs, too. 21 | 22 | When we speak of free software, we are referring to freedom, not 23 | price. Our General Public Licenses are designed to make sure that you 24 | have the freedom to distribute copies of free software (and charge for 25 | them if you wish), that you receive source code or can get it if you 26 | want it, that you can change the software or use pieces of it in new 27 | free programs, and that you know you can do these things. 28 | 29 | To protect your rights, we need to prevent others from denying you 30 | these rights or asking you to surrender the rights. Therefore, you have 31 | certain responsibilities if you distribute copies of the software, or if 32 | you modify it: responsibilities to respect the freedom of others. 33 | 34 | For example, if you distribute copies of such a program, whether 35 | gratis or for a fee, you must pass on to the recipients the same 36 | freedoms that you received. You must make sure that they, too, receive 37 | or can get the source code. And you must show them these terms so they 38 | know their rights. 39 | 40 | Developers that use the GNU GPL protect your rights with two steps: 41 | (1) assert copyright on the software, and (2) offer you this License 42 | giving you legal permission to copy, distribute and/or modify it. 43 | 44 | For the developers' and authors' protection, the GPL clearly explains 45 | that there is no warranty for this free software. For both users' and 46 | authors' sake, the GPL requires that modified versions be marked as 47 | changed, so that their problems will not be attributed erroneously to 48 | authors of previous versions. 
49 | 50 | Some devices are designed to deny users access to install or run 51 | modified versions of the software inside them, although the manufacturer 52 | can do so. This is fundamentally incompatible with the aim of 53 | protecting users' freedom to change the software. The systematic 54 | pattern of such abuse occurs in the area of products for individuals to 55 | use, which is precisely where it is most unacceptable. Therefore, we 56 | have designed this version of the GPL to prohibit the practice for those 57 | products. If such problems arise substantially in other domains, we 58 | stand ready to extend this provision to those domains in future versions 59 | of the GPL, as needed to protect the freedom of users. 60 | 61 | Finally, every program is threatened constantly by software patents. 62 | States should not allow patents to restrict development and use of 63 | software on general-purpose computers, but in those that do, we wish to 64 | avoid the special danger that patents applied to a free program could 65 | make it effectively proprietary. To prevent this, the GPL assures that 66 | patents cannot be used to render the program non-free. 67 | 68 | The precise terms and conditions for copying, distribution and 69 | modification follow. 70 | 71 | TERMS AND CONDITIONS 72 | 73 | 0. Definitions. 74 | 75 | "This License" refers to version 3 of the GNU General Public License. 76 | 77 | "Copyright" also means copyright-like laws that apply to other kinds of 78 | works, such as semiconductor masks. 79 | 80 | "The Program" refers to any copyrightable work licensed under this 81 | License. Each licensee is addressed as "you". "Licensees" and 82 | "recipients" may be individuals or organizations. 83 | 84 | To "modify" a work means to copy from or adapt all or part of the work 85 | in a fashion requiring copyright permission, other than the making of an 86 | exact copy. The resulting work is called a "modified version" of the 87 | earlier work or a work "based on" the earlier work. 88 | 89 | A "covered work" means either the unmodified Program or a work based 90 | on the Program. 91 | 92 | To "propagate" a work means to do anything with it that, without 93 | permission, would make you directly or secondarily liable for 94 | infringement under applicable copyright law, except executing it on a 95 | computer or modifying a private copy. Propagation includes copying, 96 | distribution (with or without modification), making available to the 97 | public, and in some countries other activities as well. 98 | 99 | To "convey" a work means any kind of propagation that enables other 100 | parties to make or receive copies. Mere interaction with a user through 101 | a computer network, with no transfer of a copy, is not conveying. 102 | 103 | An interactive user interface displays "Appropriate Legal Notices" 104 | to the extent that it includes a convenient and prominently visible 105 | feature that (1) displays an appropriate copyright notice, and (2) 106 | tells the user that there is no warranty for the work (except to the 107 | extent that warranties are provided), that licensees may convey the 108 | work under this License, and how to view a copy of this License. If 109 | the interface presents a list of user commands or options, such as a 110 | menu, a prominent item in the list meets this criterion. 111 | 112 | 1. Source Code. 113 | 114 | The "source code" for a work means the preferred form of the work 115 | for making modifications to it. 
"Object code" means any non-source 116 | form of a work. 117 | 118 | A "Standard Interface" means an interface that either is an official 119 | standard defined by a recognized standards body, or, in the case of 120 | interfaces specified for a particular programming language, one that 121 | is widely used among developers working in that language. 122 | 123 | The "System Libraries" of an executable work include anything, other 124 | than the work as a whole, that (a) is included in the normal form of 125 | packaging a Major Component, but which is not part of that Major 126 | Component, and (b) serves only to enable use of the work with that 127 | Major Component, or to implement a Standard Interface for which an 128 | implementation is available to the public in source code form. A 129 | "Major Component", in this context, means a major essential component 130 | (kernel, window system, and so on) of the specific operating system 131 | (if any) on which the executable work runs, or a compiler used to 132 | produce the work, or an object code interpreter used to run it. 133 | 134 | The "Corresponding Source" for a work in object code form means all 135 | the source code needed to generate, install, and (for an executable 136 | work) run the object code and to modify the work, including scripts to 137 | control those activities. However, it does not include the work's 138 | System Libraries, or general-purpose tools or generally available free 139 | programs which are used unmodified in performing those activities but 140 | which are not part of the work. For example, Corresponding Source 141 | includes interface definition files associated with source files for 142 | the work, and the source code for shared libraries and dynamically 143 | linked subprograms that the work is specifically designed to require, 144 | such as by intimate data communication or control flow between those 145 | subprograms and other parts of the work. 146 | 147 | The Corresponding Source need not include anything that users 148 | can regenerate automatically from other parts of the Corresponding 149 | Source. 150 | 151 | The Corresponding Source for a work in source code form is that 152 | same work. 153 | 154 | 2. Basic Permissions. 155 | 156 | All rights granted under this License are granted for the term of 157 | copyright on the Program, and are irrevocable provided the stated 158 | conditions are met. This License explicitly affirms your unlimited 159 | permission to run the unmodified Program. The output from running a 160 | covered work is covered by this License only if the output, given its 161 | content, constitutes a covered work. This License acknowledges your 162 | rights of fair use or other equivalent, as provided by copyright law. 163 | 164 | You may make, run and propagate covered works that you do not 165 | convey, without conditions so long as your license otherwise remains 166 | in force. You may convey covered works to others for the sole purpose 167 | of having them make modifications exclusively for you, or provide you 168 | with facilities for running those works, provided that you comply with 169 | the terms of this License in conveying all material for which you do 170 | not control copyright. Those thus making or running the covered works 171 | for you must do so exclusively on your behalf, under your direction 172 | and control, on terms that prohibit them from making any copies of 173 | your copyrighted material outside their relationship with you. 
174 | 175 | Conveying under any other circumstances is permitted solely under 176 | the conditions stated below. Sublicensing is not allowed; section 10 177 | makes it unnecessary. 178 | 179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law. 180 | 181 | No covered work shall be deemed part of an effective technological 182 | measure under any applicable law fulfilling obligations under article 183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or 184 | similar laws prohibiting or restricting circumvention of such 185 | measures. 186 | 187 | When you convey a covered work, you waive any legal power to forbid 188 | circumvention of technological measures to the extent such circumvention 189 | is effected by exercising rights under this License with respect to 190 | the covered work, and you disclaim any intention to limit operation or 191 | modification of the work as a means of enforcing, against the work's 192 | users, your or third parties' legal rights to forbid circumvention of 193 | technological measures. 194 | 195 | 4. Conveying Verbatim Copies. 196 | 197 | You may convey verbatim copies of the Program's source code as you 198 | receive it, in any medium, provided that you conspicuously and 199 | appropriately publish on each copy an appropriate copyright notice; 200 | keep intact all notices stating that this License and any 201 | non-permissive terms added in accord with section 7 apply to the code; 202 | keep intact all notices of the absence of any warranty; and give all 203 | recipients a copy of this License along with the Program. 204 | 205 | You may charge any price or no price for each copy that you convey, 206 | and you may offer support or warranty protection for a fee. 207 | 208 | 5. Conveying Modified Source Versions. 209 | 210 | You may convey a work based on the Program, or the modifications to 211 | produce it from the Program, in the form of source code under the 212 | terms of section 4, provided that you also meet all of these conditions: 213 | 214 | a) The work must carry prominent notices stating that you modified 215 | it, and giving a relevant date. 216 | 217 | b) The work must carry prominent notices stating that it is 218 | released under this License and any conditions added under section 219 | 7. This requirement modifies the requirement in section 4 to 220 | "keep intact all notices". 221 | 222 | c) You must license the entire work, as a whole, under this 223 | License to anyone who comes into possession of a copy. This 224 | License will therefore apply, along with any applicable section 7 225 | additional terms, to the whole of the work, and all its parts, 226 | regardless of how they are packaged. This License gives no 227 | permission to license the work in any other way, but it does not 228 | invalidate such permission if you have separately received it. 229 | 230 | d) If the work has interactive user interfaces, each must display 231 | Appropriate Legal Notices; however, if the Program has interactive 232 | interfaces that do not display Appropriate Legal Notices, your 233 | work need not make them do so. 
234 | 235 | A compilation of a covered work with other separate and independent 236 | works, which are not by their nature extensions of the covered work, 237 | and which are not combined with it such as to form a larger program, 238 | in or on a volume of a storage or distribution medium, is called an 239 | "aggregate" if the compilation and its resulting copyright are not 240 | used to limit the access or legal rights of the compilation's users 241 | beyond what the individual works permit. Inclusion of a covered work 242 | in an aggregate does not cause this License to apply to the other 243 | parts of the aggregate. 244 | 245 | 6. Conveying Non-Source Forms. 246 | 247 | You may convey a covered work in object code form under the terms 248 | of sections 4 and 5, provided that you also convey the 249 | machine-readable Corresponding Source under the terms of this License, 250 | in one of these ways: 251 | 252 | a) Convey the object code in, or embodied in, a physical product 253 | (including a physical distribution medium), accompanied by the 254 | Corresponding Source fixed on a durable physical medium 255 | customarily used for software interchange. 256 | 257 | b) Convey the object code in, or embodied in, a physical product 258 | (including a physical distribution medium), accompanied by a 259 | written offer, valid for at least three years and valid for as 260 | long as you offer spare parts or customer support for that product 261 | model, to give anyone who possesses the object code either (1) a 262 | copy of the Corresponding Source for all the software in the 263 | product that is covered by this License, on a durable physical 264 | medium customarily used for software interchange, for a price no 265 | more than your reasonable cost of physically performing this 266 | conveying of source, or (2) access to copy the 267 | Corresponding Source from a network server at no charge. 268 | 269 | c) Convey individual copies of the object code with a copy of the 270 | written offer to provide the Corresponding Source. This 271 | alternative is allowed only occasionally and noncommercially, and 272 | only if you received the object code with such an offer, in accord 273 | with subsection 6b. 274 | 275 | d) Convey the object code by offering access from a designated 276 | place (gratis or for a charge), and offer equivalent access to the 277 | Corresponding Source in the same way through the same place at no 278 | further charge. You need not require recipients to copy the 279 | Corresponding Source along with the object code. If the place to 280 | copy the object code is a network server, the Corresponding Source 281 | may be on a different server (operated by you or a third party) 282 | that supports equivalent copying facilities, provided you maintain 283 | clear directions next to the object code saying where to find the 284 | Corresponding Source. Regardless of what server hosts the 285 | Corresponding Source, you remain obligated to ensure that it is 286 | available for as long as needed to satisfy these requirements. 287 | 288 | e) Convey the object code using peer-to-peer transmission, provided 289 | you inform other peers where the object code and Corresponding 290 | Source of the work are being offered to the general public at no 291 | charge under subsection 6d. 292 | 293 | A separable portion of the object code, whose source code is excluded 294 | from the Corresponding Source as a System Library, need not be 295 | included in conveying the object code work. 
296 | 297 | A "User Product" is either (1) a "consumer product", which means any 298 | tangible personal property which is normally used for personal, family, 299 | or household purposes, or (2) anything designed or sold for incorporation 300 | into a dwelling. In determining whether a product is a consumer product, 301 | doubtful cases shall be resolved in favor of coverage. For a particular 302 | product received by a particular user, "normally used" refers to a 303 | typical or common use of that class of product, regardless of the status 304 | of the particular user or of the way in which the particular user 305 | actually uses, or expects or is expected to use, the product. A product 306 | is a consumer product regardless of whether the product has substantial 307 | commercial, industrial or non-consumer uses, unless such uses represent 308 | the only significant mode of use of the product. 309 | 310 | "Installation Information" for a User Product means any methods, 311 | procedures, authorization keys, or other information required to install 312 | and execute modified versions of a covered work in that User Product from 313 | a modified version of its Corresponding Source. The information must 314 | suffice to ensure that the continued functioning of the modified object 315 | code is in no case prevented or interfered with solely because 316 | modification has been made. 317 | 318 | If you convey an object code work under this section in, or with, or 319 | specifically for use in, a User Product, and the conveying occurs as 320 | part of a transaction in which the right of possession and use of the 321 | User Product is transferred to the recipient in perpetuity or for a 322 | fixed term (regardless of how the transaction is characterized), the 323 | Corresponding Source conveyed under this section must be accompanied 324 | by the Installation Information. But this requirement does not apply 325 | if neither you nor any third party retains the ability to install 326 | modified object code on the User Product (for example, the work has 327 | been installed in ROM). 328 | 329 | The requirement to provide Installation Information does not include a 330 | requirement to continue to provide support service, warranty, or updates 331 | for a work that has been modified or installed by the recipient, or for 332 | the User Product in which it has been modified or installed. Access to a 333 | network may be denied when the modification itself materially and 334 | adversely affects the operation of the network or violates the rules and 335 | protocols for communication across the network. 336 | 337 | Corresponding Source conveyed, and Installation Information provided, 338 | in accord with this section must be in a format that is publicly 339 | documented (and with an implementation available to the public in 340 | source code form), and must require no special password or key for 341 | unpacking, reading or copying. 342 | 343 | 7. Additional Terms. 344 | 345 | "Additional permissions" are terms that supplement the terms of this 346 | License by making exceptions from one or more of its conditions. 347 | Additional permissions that are applicable to the entire Program shall 348 | be treated as though they were included in this License, to the extent 349 | that they are valid under applicable law. 
If additional permissions 350 | apply only to part of the Program, that part may be used separately 351 | under those permissions, but the entire Program remains governed by 352 | this License without regard to the additional permissions. 353 | 354 | When you convey a copy of a covered work, you may at your option 355 | remove any additional permissions from that copy, or from any part of 356 | it. (Additional permissions may be written to require their own 357 | removal in certain cases when you modify the work.) You may place 358 | additional permissions on material, added by you to a covered work, 359 | for which you have or can give appropriate copyright permission. 360 | 361 | Notwithstanding any other provision of this License, for material you 362 | add to a covered work, you may (if authorized by the copyright holders of 363 | that material) supplement the terms of this License with terms: 364 | 365 | a) Disclaiming warranty or limiting liability differently from the 366 | terms of sections 15 and 16 of this License; or 367 | 368 | b) Requiring preservation of specified reasonable legal notices or 369 | author attributions in that material or in the Appropriate Legal 370 | Notices displayed by works containing it; or 371 | 372 | c) Prohibiting misrepresentation of the origin of that material, or 373 | requiring that modified versions of such material be marked in 374 | reasonable ways as different from the original version; or 375 | 376 | d) Limiting the use for publicity purposes of names of licensors or 377 | authors of the material; or 378 | 379 | e) Declining to grant rights under trademark law for use of some 380 | trade names, trademarks, or service marks; or 381 | 382 | f) Requiring indemnification of licensors and authors of that 383 | material by anyone who conveys the material (or modified versions of 384 | it) with contractual assumptions of liability to the recipient, for 385 | any liability that these contractual assumptions directly impose on 386 | those licensors and authors. 387 | 388 | All other non-permissive additional terms are considered "further 389 | restrictions" within the meaning of section 10. If the Program as you 390 | received it, or any part of it, contains a notice stating that it is 391 | governed by this License along with a term that is a further 392 | restriction, you may remove that term. If a license document contains 393 | a further restriction but permits relicensing or conveying under this 394 | License, you may add to a covered work material governed by the terms 395 | of that license document, provided that the further restriction does 396 | not survive such relicensing or conveying. 397 | 398 | If you add terms to a covered work in accord with this section, you 399 | must place, in the relevant source files, a statement of the 400 | additional terms that apply to those files, or a notice indicating 401 | where to find the applicable terms. 402 | 403 | Additional terms, permissive or non-permissive, may be stated in the 404 | form of a separately written license, or stated as exceptions; 405 | the above requirements apply either way. 406 | 407 | 8. Termination. 408 | 409 | You may not propagate or modify a covered work except as expressly 410 | provided under this License. Any attempt otherwise to propagate or 411 | modify it is void, and will automatically terminate your rights under 412 | this License (including any patent licenses granted under the third 413 | paragraph of section 11). 
414 | 415 | However, if you cease all violation of this License, then your 416 | license from a particular copyright holder is reinstated (a) 417 | provisionally, unless and until the copyright holder explicitly and 418 | finally terminates your license, and (b) permanently, if the copyright 419 | holder fails to notify you of the violation by some reasonable means 420 | prior to 60 days after the cessation. 421 | 422 | Moreover, your license from a particular copyright holder is 423 | reinstated permanently if the copyright holder notifies you of the 424 | violation by some reasonable means, this is the first time you have 425 | received notice of violation of this License (for any work) from that 426 | copyright holder, and you cure the violation prior to 30 days after 427 | your receipt of the notice. 428 | 429 | Termination of your rights under this section does not terminate the 430 | licenses of parties who have received copies or rights from you under 431 | this License. If your rights have been terminated and not permanently 432 | reinstated, you do not qualify to receive new licenses for the same 433 | material under section 10. 434 | 435 | 9. Acceptance Not Required for Having Copies. 436 | 437 | You are not required to accept this License in order to receive or 438 | run a copy of the Program. Ancillary propagation of a covered work 439 | occurring solely as a consequence of using peer-to-peer transmission 440 | to receive a copy likewise does not require acceptance. However, 441 | nothing other than this License grants you permission to propagate or 442 | modify any covered work. These actions infringe copyright if you do 443 | not accept this License. Therefore, by modifying or propagating a 444 | covered work, you indicate your acceptance of this License to do so. 445 | 446 | 10. Automatic Licensing of Downstream Recipients. 447 | 448 | Each time you convey a covered work, the recipient automatically 449 | receives a license from the original licensors, to run, modify and 450 | propagate that work, subject to this License. You are not responsible 451 | for enforcing compliance by third parties with this License. 452 | 453 | An "entity transaction" is a transaction transferring control of an 454 | organization, or substantially all assets of one, or subdividing an 455 | organization, or merging organizations. If propagation of a covered 456 | work results from an entity transaction, each party to that 457 | transaction who receives a copy of the work also receives whatever 458 | licenses to the work the party's predecessor in interest had or could 459 | give under the previous paragraph, plus a right to possession of the 460 | Corresponding Source of the work from the predecessor in interest, if 461 | the predecessor has it or can get it with reasonable efforts. 462 | 463 | You may not impose any further restrictions on the exercise of the 464 | rights granted or affirmed under this License. For example, you may 465 | not impose a license fee, royalty, or other charge for exercise of 466 | rights granted under this License, and you may not initiate litigation 467 | (including a cross-claim or counterclaim in a lawsuit) alleging that 468 | any patent claim is infringed by making, using, selling, offering for 469 | sale, or importing the Program or any portion of it. 470 | 471 | 11. Patents. 472 | 473 | A "contributor" is a copyright holder who authorizes use under this 474 | License of the Program or a work on which the Program is based. 
The 475 | work thus licensed is called the contributor's "contributor version". 476 | 477 | A contributor's "essential patent claims" are all patent claims 478 | owned or controlled by the contributor, whether already acquired or 479 | hereafter acquired, that would be infringed by some manner, permitted 480 | by this License, of making, using, or selling its contributor version, 481 | but do not include claims that would be infringed only as a 482 | consequence of further modification of the contributor version. For 483 | purposes of this definition, "control" includes the right to grant 484 | patent sublicenses in a manner consistent with the requirements of 485 | this License. 486 | 487 | Each contributor grants you a non-exclusive, worldwide, royalty-free 488 | patent license under the contributor's essential patent claims, to 489 | make, use, sell, offer for sale, import and otherwise run, modify and 490 | propagate the contents of its contributor version. 491 | 492 | In the following three paragraphs, a "patent license" is any express 493 | agreement or commitment, however denominated, not to enforce a patent 494 | (such as an express permission to practice a patent or covenant not to 495 | sue for patent infringement). To "grant" such a patent license to a 496 | party means to make such an agreement or commitment not to enforce a 497 | patent against the party. 498 | 499 | If you convey a covered work, knowingly relying on a patent license, 500 | and the Corresponding Source of the work is not available for anyone 501 | to copy, free of charge and under the terms of this License, through a 502 | publicly available network server or other readily accessible means, 503 | then you must either (1) cause the Corresponding Source to be so 504 | available, or (2) arrange to deprive yourself of the benefit of the 505 | patent license for this particular work, or (3) arrange, in a manner 506 | consistent with the requirements of this License, to extend the patent 507 | license to downstream recipients. "Knowingly relying" means you have 508 | actual knowledge that, but for the patent license, your conveying the 509 | covered work in a country, or your recipient's use of the covered work 510 | in a country, would infringe one or more identifiable patents in that 511 | country that you have reason to believe are valid. 512 | 513 | If, pursuant to or in connection with a single transaction or 514 | arrangement, you convey, or propagate by procuring conveyance of, a 515 | covered work, and grant a patent license to some of the parties 516 | receiving the covered work authorizing them to use, propagate, modify 517 | or convey a specific copy of the covered work, then the patent license 518 | you grant is automatically extended to all recipients of the covered 519 | work and works based on it. 520 | 521 | A patent license is "discriminatory" if it does not include within 522 | the scope of its coverage, prohibits the exercise of, or is 523 | conditioned on the non-exercise of one or more of the rights that are 524 | specifically granted under this License. 
You may not convey a covered 525 | work if you are a party to an arrangement with a third party that is 526 | in the business of distributing software, under which you make payment 527 | to the third party based on the extent of your activity of conveying 528 | the work, and under which the third party grants, to any of the 529 | parties who would receive the covered work from you, a discriminatory 530 | patent license (a) in connection with copies of the covered work 531 | conveyed by you (or copies made from those copies), or (b) primarily 532 | for and in connection with specific products or compilations that 533 | contain the covered work, unless you entered into that arrangement, 534 | or that patent license was granted, prior to 28 March 2007. 535 | 536 | Nothing in this License shall be construed as excluding or limiting 537 | any implied license or other defenses to infringement that may 538 | otherwise be available to you under applicable patent law. 539 | 540 | 12. No Surrender of Others' Freedom. 541 | 542 | If conditions are imposed on you (whether by court order, agreement or 543 | otherwise) that contradict the conditions of this License, they do not 544 | excuse you from the conditions of this License. If you cannot convey a 545 | covered work so as to satisfy simultaneously your obligations under this 546 | License and any other pertinent obligations, then as a consequence you may 547 | not convey it at all. For example, if you agree to terms that obligate you 548 | to collect a royalty for further conveying from those to whom you convey 549 | the Program, the only way you could satisfy both those terms and this 550 | License would be to refrain entirely from conveying the Program. 551 | 552 | 13. Use with the GNU Affero General Public License. 553 | 554 | Notwithstanding any other provision of this License, you have 555 | permission to link or combine any covered work with a work licensed 556 | under version 3 of the GNU Affero General Public License into a single 557 | combined work, and to convey the resulting work. The terms of this 558 | License will continue to apply to the part which is the covered work, 559 | but the special requirements of the GNU Affero General Public License, 560 | section 13, concerning interaction through a network will apply to the 561 | combination as such. 562 | 563 | 14. Revised Versions of this License. 564 | 565 | The Free Software Foundation may publish revised and/or new versions of 566 | the GNU General Public License from time to time. Such new versions will 567 | be similar in spirit to the present version, but may differ in detail to 568 | address new problems or concerns. 569 | 570 | Each version is given a distinguishing version number. If the 571 | Program specifies that a certain numbered version of the GNU General 572 | Public License "or any later version" applies to it, you have the 573 | option of following the terms and conditions either of that numbered 574 | version or of any later version published by the Free Software 575 | Foundation. If the Program does not specify a version number of the 576 | GNU General Public License, you may choose any version ever published 577 | by the Free Software Foundation. 578 | 579 | If the Program specifies that a proxy can decide which future 580 | versions of the GNU General Public License can be used, that proxy's 581 | public statement of acceptance of a version permanently authorizes you 582 | to choose that version for the Program. 
583 | 584 | Later license versions may give you additional or different 585 | permissions. However, no additional obligations are imposed on any 586 | author or copyright holder as a result of your choosing to follow a 587 | later version. 588 | 589 | 15. Disclaimer of Warranty. 590 | 591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY 592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT 593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY 594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, 595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR 596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM 597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF 598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 599 | 600 | 16. Limitation of Liability. 601 | 602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS 604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY 605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE 606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF 607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD 608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), 609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF 610 | SUCH DAMAGES. 611 | 612 | 17. Interpretation of Sections 15 and 16. 613 | 614 | If the disclaimer of warranty and limitation of liability provided 615 | above cannot be given local legal effect according to their terms, 616 | reviewing courts shall apply local law that most closely approximates 617 | an absolute waiver of all civil liability in connection with the 618 | Program, unless a warranty or assumption of liability accompanies a 619 | copy of the Program in return for a fee. 620 | 621 | END OF TERMS AND CONDITIONS 622 | 623 | How to Apply These Terms to Your New Programs 624 | 625 | If you develop a new program, and you want it to be of the greatest 626 | possible use to the public, the best way to achieve this is to make it 627 | free software which everyone can redistribute and change under these terms. 628 | 629 | To do so, attach the following notices to the program. It is safest 630 | to attach them to the start of each source file to most effectively 631 | state the exclusion of warranty; and each file should have at least 632 | the "copyright" line and a pointer to where the full notice is found. 633 | 634 | 635 | Copyright (C) 636 | 637 | This program is free software: you can redistribute it and/or modify 638 | it under the terms of the GNU General Public License as published by 639 | the Free Software Foundation, either version 3 of the License, or 640 | (at your option) any later version. 641 | 642 | This program is distributed in the hope that it will be useful, 643 | but WITHOUT ANY WARRANTY; without even the implied warranty of 644 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 645 | GNU General Public License for more details. 646 | 647 | You should have received a copy of the GNU General Public License 648 | along with this program. If not, see . 649 | 650 | Also add information on how to contact you by electronic and paper mail. 
651 | 652 | If the program does terminal interaction, make it output a short 653 | notice like this when it starts in an interactive mode: 654 | 655 | Copyright (C) 656 | This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 657 | This is free software, and you are welcome to redistribute it 658 | under certain conditions; type `show c' for details. 659 | 660 | The hypothetical commands `show w' and `show c' should show the appropriate 661 | parts of the General Public License. Of course, your program's commands 662 | might be different; for a GUI interface, you would use an "about box". 663 | 664 | You should also get your employer (if you work as a programmer) or school, 665 | if any, to sign a "copyright disclaimer" for the program, if necessary. 666 | For more information on this, and how to apply and follow the GNU GPL, see 667 | . 668 | 669 | The GNU General Public License does not permit incorporating your program 670 | into proprietary programs. If your program is a subroutine library, you 671 | may consider it more useful to permit linking proprietary applications with 672 | the library. If this is what you want to do, use the GNU Lesser General 673 | Public License instead of this License. But first, please read 674 | . 675 | -------------------------------------------------------------------------------- /ec2.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | ''' 4 | EC2 external inventory script 5 | ================================= 6 | 7 | Generates inventory that Ansible can understand by making API request to 8 | AWS EC2 using the Boto library. 9 | 10 | NOTE: This script assumes Ansible is being executed where the environment 11 | variables needed for Boto have already been set: 12 | export AWS_ACCESS_KEY_ID='AK123' 13 | export AWS_SECRET_ACCESS_KEY='abc123' 14 | 15 | Optional region environment variable if region is 'auto' 16 | 17 | This script also assumes that there is an ec2.ini file alongside it. To specify a 18 | different path to ec2.ini, define the EC2_INI_PATH environment variable: 19 | 20 | export EC2_INI_PATH=/path/to/my_ec2.ini 21 | 22 | If you're using eucalyptus you need to set the above variables and 23 | you need to define: 24 | 25 | export EC2_URL=http://hostname_of_your_cc:port/services/Eucalyptus 26 | 27 | If you're using boto profiles (requires boto>=2.24.0) you can choose a profile 28 | using the --boto-profile command line argument (e.g. ec2.py --boto-profile prod) or using 29 | the AWS_PROFILE variable: 30 | 31 | AWS_PROFILE=prod ansible-playbook -i ec2.py myplaybook.yml 32 | 33 | For more details, see: http://docs.pythonboto.org/en/latest/boto_config_tut.html 34 | 35 | You can filter for specific EC2 instances by creating an environment variable 36 | named EC2_INSTANCE_FILTERS, which has the same format as the instance_filters 37 | entry documented in ec2.ini. 
For example, to find all hosts whose name begins 38 | with 'webserver', one might use: 39 | 40 | export EC2_INSTANCE_FILTERS='tag:Name=webserver*' 41 | 42 | When run against a specific host, this script returns the following variables: 43 | - ec2_ami_launch_index 44 | - ec2_architecture 45 | - ec2_association 46 | - ec2_attachTime 47 | - ec2_attachment 48 | - ec2_attachmentId 49 | - ec2_block_devices 50 | - ec2_client_token 51 | - ec2_deleteOnTermination 52 | - ec2_description 53 | - ec2_deviceIndex 54 | - ec2_dns_name 55 | - ec2_eventsSet 56 | - ec2_group_name 57 | - ec2_hypervisor 58 | - ec2_id 59 | - ec2_image_id 60 | - ec2_instanceState 61 | - ec2_instance_type 62 | - ec2_ipOwnerId 63 | - ec2_ip_address 64 | - ec2_item 65 | - ec2_kernel 66 | - ec2_key_name 67 | - ec2_launch_time 68 | - ec2_monitored 69 | - ec2_monitoring 70 | - ec2_networkInterfaceId 71 | - ec2_ownerId 72 | - ec2_persistent 73 | - ec2_placement 74 | - ec2_platform 75 | - ec2_previous_state 76 | - ec2_private_dns_name 77 | - ec2_private_ip_address 78 | - ec2_publicIp 79 | - ec2_public_dns_name 80 | - ec2_ramdisk 81 | - ec2_reason 82 | - ec2_region 83 | - ec2_requester_id 84 | - ec2_root_device_name 85 | - ec2_root_device_type 86 | - ec2_security_group_ids 87 | - ec2_security_group_names 88 | - ec2_shutdown_state 89 | - ec2_sourceDestCheck 90 | - ec2_spot_instance_request_id 91 | - ec2_state 92 | - ec2_state_code 93 | - ec2_state_reason 94 | - ec2_status 95 | - ec2_subnet_id 96 | - ec2_tenancy 97 | - ec2_virtualization_type 98 | - ec2_vpc_id 99 | 100 | These variables are pulled out of a boto.ec2.instance object. There is a lack of 101 | consistency with variable spellings (camelCase and underscores) since this 102 | just loops through all variables the object exposes. It is preferred to use the 103 | ones with underscores when multiple exist. 104 | 105 | In addition, if an instance has AWS tags associated with it, each tag is a new 106 | variable named: 107 | - ec2_tag_[Key] = [Value] 108 | 109 | Security groups are comma-separated in 'ec2_security_group_ids' and 110 | 'ec2_security_group_names'. 111 | 112 | When destination_format and destination_format_tags are specified 113 | the destination_format can be built from the instance tags and attributes. 114 | The behavior will first check the user defined tags, then proceed to 115 | check instance attributes, and finally if neither are found 'nil' will 116 | be used instead. 117 | 118 | 'my_instance': { 119 | 'region': 'us-east-1', # attribute 120 | 'availability_zone': 'us-east-1a', # attribute 121 | 'private_dns_name': '172.31.0.1', # attribute 122 | 'ec2_tag_deployment': 'blue', # tag 123 | 'ec2_tag_clusterid': 'ansible', # tag 124 | 'ec2_tag_Name': 'webserver', # tag 125 | ... 126 | } 127 | 128 | Inside of the ec2.ini file the following settings are specified: 129 | ... 130 | destination_format: {0}-{1}-{2}-{3} 131 | destination_format_tags: Name,clusterid,deployment,private_dns_name 132 | ... 133 | 134 | These settings would produce a destination_format as the following: 135 | 'webserver-ansible-blue-172.31.0.1' 136 | ''' 137 | 138 | # (c) 2012, Peter Sankauskas 139 | # 140 | # This file is part of Ansible, 141 | # 142 | # Ansible is free software: you can redistribute it and/or modify 143 | # it under the terms of the GNU General Public License as published by 144 | # the Free Software Foundation, either version 3 of the License, or 145 | # (at your option) any later version. 
146 | # 147 | # Ansible is distributed in the hope that it will be useful, 148 | # but WITHOUT ANY WARRANTY; without even the implied warranty of 149 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 150 | # GNU General Public License for more details. 151 | # 152 | # You should have received a copy of the GNU General Public License 153 | # along with Ansible. If not, see . 154 | 155 | ###################################################################### 156 | 157 | import sys 158 | import os 159 | import argparse 160 | import bisect 161 | import re 162 | from time import time 163 | from copy import deepcopy 164 | from datetime import date, datetime 165 | import boto 166 | from boto import ec2 167 | from boto import rds 168 | from boto import elasticache 169 | from boto import route53 170 | from boto import sts 171 | 172 | from ansible.module_utils import six 173 | from ansible.module_utils import ec2 as ec2_utils 174 | from ansible.module_utils.six.moves import configparser 175 | 176 | HAS_BOTO3 = False 177 | try: 178 | import boto3 # noqa 179 | HAS_BOTO3 = True 180 | except ImportError: 181 | pass 182 | 183 | from collections import defaultdict 184 | 185 | import json 186 | 187 | DEFAULTS = { 188 | 'all_elasticache_clusters': 'False', 189 | 'all_elasticache_nodes': 'False', 190 | 'all_elasticache_replication_groups': 'False', 191 | 'all_instances': 'False', 192 | 'all_rds_instances': 'False', 193 | 'aws_access_key_id': '', 194 | 'aws_secret_access_key': '', 195 | 'aws_security_token': '', 196 | 'boto_profile': '', 197 | 'cache_max_age': '300', 198 | 'cache_path': '~/.ansible/tmp', 199 | 'destination_variable': 'public_dns_name', 200 | 'elasticache': 'True', 201 | 'eucalyptus': 'False', 202 | 'eucalyptus_host': '', 203 | 'expand_csv_tags': 'False', 204 | 'group_by_ami_id': 'True', 205 | 'group_by_availability_zone': 'True', 206 | 'group_by_aws_account': 'False', 207 | 'group_by_elasticache_cluster': 'True', 208 | 'group_by_elasticache_engine': 'True', 209 | 'group_by_elasticache_parameter_group': 'True', 210 | 'group_by_elasticache_replication_group': 'True', 211 | 'group_by_instance_id': 'True', 212 | 'group_by_instance_state': 'False', 213 | 'group_by_instance_type': 'True', 214 | 'group_by_key_pair': 'True', 215 | 'group_by_platform': 'True', 216 | 'group_by_rds_engine': 'True', 217 | 'group_by_rds_parameter_group': 'True', 218 | 'group_by_region': 'True', 219 | 'group_by_route53_names': 'True', 220 | 'group_by_security_group': 'True', 221 | 'group_by_tag_keys': 'True', 222 | 'group_by_tag_none': 'True', 223 | 'group_by_vpc_id': 'True', 224 | 'hostname_variable': '', 225 | 'iam_role': '', 226 | 'include_rds_clusters': 'False', 227 | 'nested_groups': 'False', 228 | 'pattern_exclude': '', 229 | 'pattern_include': '', 230 | 'rds': 'False', 231 | 'regions': 'all', 232 | 'regions_exclude': 'us-gov-west-1, cn-north-1', 233 | 'replace_dash_in_groups': 'True', 234 | 'route53': 'False', 235 | 'route53_excluded_zones': '', 236 | 'route53_hostnames': '', 237 | 'stack_filters': 'False', 238 | 'vpc_destination_variable': 'ip_address' 239 | } 240 | 241 | 242 | class Ec2Inventory(object): 243 | 244 | def _empty_inventory(self): 245 | return {"_meta": {"hostvars": {}}} 246 | 247 | def _json_serial(self, obj): 248 | """JSON serializer for objects not serializable by default json code""" 249 | 250 | if isinstance(obj, (datetime, date)): 251 | return obj.isoformat() 252 | raise TypeError("Type %s not serializable" % type(obj)) 253 | 254 | def __init__(self): 255 | ''' Main execution path 
''' 256 | 257 | # Inventory grouped by instance IDs, tags, security groups, regions, 258 | # and availability zones 259 | self.inventory = self._empty_inventory() 260 | 261 | self.aws_account_id = None 262 | 263 | # Index of hostname (address) to instance ID 264 | self.index = {} 265 | 266 | # Boto profile to use (if any) 267 | self.boto_profile = None 268 | 269 | # AWS credentials. 270 | self.credentials = {} 271 | 272 | # Read settings and parse CLI arguments 273 | self.parse_cli_args() 274 | self.read_settings() 275 | 276 | # Make sure that profile_name is not passed at all if not set 277 | # as pre 2.24 boto will fall over otherwise 278 | if self.boto_profile: 279 | if not hasattr(boto.ec2.EC2Connection, 'profile_name'): 280 | self.fail_with_error("boto version must be >= 2.24 to use profile") 281 | 282 | # Cache 283 | if self.args.refresh_cache: 284 | self.do_api_calls_update_cache() 285 | elif not self.is_cache_valid(): 286 | self.do_api_calls_update_cache() 287 | 288 | # Data to print 289 | if self.args.host: 290 | data_to_print = self.get_host_info() 291 | 292 | elif self.args.list: 293 | # Display list of instances for inventory 294 | if self.inventory == self._empty_inventory(): 295 | data_to_print = self.get_inventory_from_cache() 296 | else: 297 | data_to_print = self.json_format_dict(self.inventory, True) 298 | 299 | print(data_to_print) 300 | 301 | def is_cache_valid(self): 302 | ''' Determines if the cache files have expired, or if it is still valid ''' 303 | 304 | if os.path.isfile(self.cache_path_cache): 305 | mod_time = os.path.getmtime(self.cache_path_cache) 306 | current_time = time() 307 | if (mod_time + self.cache_max_age) > current_time: 308 | if os.path.isfile(self.cache_path_index): 309 | return True 310 | 311 | return False 312 | 313 | def read_settings(self): 314 | ''' Reads the settings from the ec2.ini file ''' 315 | 316 | scriptbasename = __file__ 317 | scriptbasename = os.path.basename(scriptbasename) 318 | scriptbasename = scriptbasename.replace('.py', '') 319 | 320 | defaults = { 321 | 'ec2': { 322 | 'ini_fallback': os.path.join(os.path.dirname(__file__), 'ec2.ini'), 323 | 'ini_path': os.path.join(os.path.dirname(__file__), '%s.ini' % scriptbasename) 324 | } 325 | } 326 | 327 | if six.PY3: 328 | config = configparser.ConfigParser(DEFAULTS) 329 | else: 330 | config = configparser.SafeConfigParser(DEFAULTS) 331 | ec2_ini_path = os.environ.get('EC2_INI_PATH', defaults['ec2']['ini_path']) 332 | ec2_ini_path = os.path.expanduser(os.path.expandvars(ec2_ini_path)) 333 | 334 | if not os.path.isfile(ec2_ini_path): 335 | ec2_ini_path = os.path.expanduser(defaults['ec2']['ini_fallback']) 336 | 337 | if os.path.isfile(ec2_ini_path): 338 | config.read(ec2_ini_path) 339 | 340 | # Add empty sections if they don't exist 341 | try: 342 | config.add_section('ec2') 343 | except configparser.DuplicateSectionError: 344 | pass 345 | 346 | try: 347 | config.add_section('credentials') 348 | except configparser.DuplicateSectionError: 349 | pass 350 | 351 | # is eucalyptus? 
352 | self.eucalyptus = config.getboolean('ec2', 'eucalyptus') 353 | self.eucalyptus_host = config.get('ec2', 'eucalyptus_host') 354 | 355 | # Regions 356 | self.regions = [] 357 | config_regions = config.get('ec2', 'regions') 358 | if (config_regions == 'all'): 359 | if self.eucalyptus_host: 360 | self.regions.append(boto.connect_euca(host=self.eucalyptus_host).region.name, **self.credentials) 361 | else: 362 | config_regions_exclude = config.get('ec2', 'regions_exclude') 363 | 364 | for region_info in ec2.regions(): 365 | if region_info.name not in config_regions_exclude: 366 | self.regions.append(region_info.name) 367 | else: 368 | self.regions = config_regions.split(",") 369 | if 'auto' in self.regions: 370 | env_region = os.environ.get('AWS_REGION') 371 | if env_region is None: 372 | env_region = os.environ.get('AWS_DEFAULT_REGION') 373 | self.regions = [env_region] 374 | 375 | # Destination addresses 376 | self.destination_variable = config.get('ec2', 'destination_variable') 377 | self.vpc_destination_variable = config.get('ec2', 'vpc_destination_variable') 378 | self.hostname_variable = config.get('ec2', 'hostname_variable') 379 | 380 | if config.has_option('ec2', 'destination_format') and \ 381 | config.has_option('ec2', 'destination_format_tags'): 382 | self.destination_format = config.get('ec2', 'destination_format') 383 | self.destination_format_tags = config.get('ec2', 'destination_format_tags').split(',') 384 | else: 385 | self.destination_format = None 386 | self.destination_format_tags = None 387 | 388 | # Route53 389 | self.route53_enabled = config.getboolean('ec2', 'route53') 390 | self.route53_hostnames = config.get('ec2', 'route53_hostnames') 391 | 392 | self.route53_excluded_zones = [] 393 | self.route53_excluded_zones = [a for a in config.get('ec2', 'route53_excluded_zones').split(',') if a] 394 | 395 | # Include RDS instances? 396 | self.rds_enabled = config.getboolean('ec2', 'rds') 397 | 398 | # Include RDS cluster instances? 399 | self.include_rds_clusters = config.getboolean('ec2', 'include_rds_clusters') 400 | 401 | # Include ElastiCache instances? 402 | self.elasticache_enabled = config.getboolean('ec2', 'elasticache') 403 | 404 | # Return all EC2 instances? 405 | self.all_instances = config.getboolean('ec2', 'all_instances') 406 | 407 | # Instance states to be gathered in inventory. Default is 'running'. 408 | # Setting 'all_instances' to 'yes' overrides this option. 409 | ec2_valid_instance_states = [ 410 | 'pending', 411 | 'running', 412 | 'shutting-down', 413 | 'terminated', 414 | 'stopping', 415 | 'stopped' 416 | ] 417 | self.ec2_instance_states = [] 418 | if self.all_instances: 419 | self.ec2_instance_states = ec2_valid_instance_states 420 | elif config.has_option('ec2', 'instance_states'): 421 | for instance_state in config.get('ec2', 'instance_states').split(','): 422 | instance_state = instance_state.strip() 423 | if instance_state not in ec2_valid_instance_states: 424 | continue 425 | self.ec2_instance_states.append(instance_state) 426 | else: 427 | self.ec2_instance_states = ['running'] 428 | 429 | # Return all RDS instances? (if RDS is enabled) 430 | self.all_rds_instances = config.getboolean('ec2', 'all_rds_instances') 431 | 432 | # Return all ElastiCache replication groups? (if ElastiCache is enabled) 433 | self.all_elasticache_replication_groups = config.getboolean('ec2', 'all_elasticache_replication_groups') 434 | 435 | # Return all ElastiCache clusters? 
(if ElastiCache is enabled) 436 | self.all_elasticache_clusters = config.getboolean('ec2', 'all_elasticache_clusters') 437 | 438 | # Return all ElastiCache nodes? (if ElastiCache is enabled) 439 | self.all_elasticache_nodes = config.getboolean('ec2', 'all_elasticache_nodes') 440 | 441 | # boto configuration profile (prefer CLI argument then environment variables then config file) 442 | self.boto_profile = self.args.boto_profile or \ 443 | os.environ.get('AWS_PROFILE') or \ 444 | config.get('ec2', 'boto_profile') 445 | 446 | # AWS credentials (prefer environment variables) 447 | if not (self.boto_profile or os.environ.get('AWS_ACCESS_KEY_ID') or 448 | os.environ.get('AWS_PROFILE')): 449 | 450 | aws_access_key_id = config.get('credentials', 'aws_access_key_id') 451 | aws_secret_access_key = config.get('credentials', 'aws_secret_access_key') 452 | aws_security_token = config.get('credentials', 'aws_security_token') 453 | 454 | if aws_access_key_id: 455 | self.credentials = { 456 | 'aws_access_key_id': aws_access_key_id, 457 | 'aws_secret_access_key': aws_secret_access_key 458 | } 459 | if aws_security_token: 460 | self.credentials['security_token'] = aws_security_token 461 | 462 | # Cache related 463 | cache_dir = os.path.expanduser(config.get('ec2', 'cache_path')) 464 | if self.boto_profile: 465 | cache_dir = os.path.join(cache_dir, 'profile_' + self.boto_profile) 466 | if not os.path.exists(cache_dir): 467 | os.makedirs(cache_dir) 468 | 469 | cache_name = 'ansible-ec2' 470 | cache_id = self.boto_profile or os.environ.get('AWS_ACCESS_KEY_ID', self.credentials.get('aws_access_key_id')) 471 | if cache_id: 472 | cache_name = '%s-%s' % (cache_name, cache_id) 473 | cache_name += '-' + str(abs(hash(__file__)))[1:7] 474 | self.cache_path_cache = os.path.join(cache_dir, "%s.cache" % cache_name) 475 | self.cache_path_index = os.path.join(cache_dir, "%s.index" % cache_name) 476 | self.cache_max_age = config.getint('ec2', 'cache_max_age') 477 | 478 | self.expand_csv_tags = config.getboolean('ec2', 'expand_csv_tags') 479 | 480 | # Configure nested groups instead of flat namespace. 481 | self.nested_groups = config.getboolean('ec2', 'nested_groups') 482 | 483 | # Replace dash or not in group names 484 | self.replace_dash_in_groups = config.getboolean('ec2', 'replace_dash_in_groups') 485 | 486 | # IAM role to assume for connection 487 | self.iam_role = config.get('ec2', 'iam_role') 488 | 489 | # Configure which groups should be created. 490 | 491 | group_by_options = [a for a in DEFAULTS if a.startswith('group_by')] 492 | for option in group_by_options: 493 | setattr(self, option, config.getboolean('ec2', option)) 494 | 495 | # Do we need to just include hosts that match a pattern? 496 | self.pattern_include = config.get('ec2', 'pattern_include') 497 | if self.pattern_include: 498 | self.pattern_include = re.compile(self.pattern_include) 499 | 500 | # Do we need to exclude hosts that match a pattern? 501 | self.pattern_exclude = config.get('ec2', 'pattern_exclude') 502 | if self.pattern_exclude: 503 | self.pattern_exclude = re.compile(self.pattern_exclude) 504 | 505 | # Do we want to stack multiple filters? 506 | self.stack_filters = config.getboolean('ec2', 'stack_filters') 507 | 508 | # Instance filters (see boto and EC2 API docs). Ignore invalid filters. 
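        # Illustrative aside, not part of the upstream script: with the parsing
        # below, a value such as
        #   EC2_INSTANCE_FILTERS='tag:Name=web*&instance-state-name=running, tag:env=prod'
        # is split on ',' into filter sets and on '&' within each set, giving
        #   [{'tag:Name': 'web*', 'instance-state-name': 'running'}, {'tag:env': 'prod'}]
        # Each set is queried separately (effectively OR), while stack_filters merges
        # every set into one AND filter and therefore refuses '&' in the string.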
509 | self.ec2_instance_filters = [] 510 | 511 | if config.has_option('ec2', 'instance_filters') or 'EC2_INSTANCE_FILTERS' in os.environ: 512 | filters = os.getenv('EC2_INSTANCE_FILTERS', config.get('ec2', 'instance_filters') if config.has_option('ec2', 'instance_filters') else '') 513 | 514 | if self.stack_filters and '&' in filters: 515 | self.fail_with_error("AND filters along with stack_filter enabled is not supported.\n") 516 | 517 | filter_sets = [f for f in filters.split(',') if f] 518 | 519 | for filter_set in filter_sets: 520 | filters = {} 521 | filter_set = filter_set.strip() 522 | for instance_filter in filter_set.split("&"): 523 | instance_filter = instance_filter.strip() 524 | if not instance_filter or '=' not in instance_filter: 525 | continue 526 | filter_key, filter_value = [x.strip() for x in instance_filter.split('=', 1)] 527 | if not filter_key: 528 | continue 529 | filters[filter_key] = filter_value 530 | self.ec2_instance_filters.append(filters.copy()) 531 | 532 | def parse_cli_args(self): 533 | ''' Command line argument processing ''' 534 | 535 | parser = argparse.ArgumentParser(description='Produce an Ansible Inventory file based on EC2') 536 | parser.add_argument('--list', action='store_true', default=True, 537 | help='List instances (default: True)') 538 | parser.add_argument('--host', action='store', 539 | help='Get all the variables about a specific instance') 540 | parser.add_argument('--refresh-cache', action='store_true', default=False, 541 | help='Force refresh of cache by making API requests to EC2 (default: False - use cache files)') 542 | parser.add_argument('--profile', '--boto-profile', action='store', dest='boto_profile', 543 | help='Use boto profile for connections to EC2') 544 | self.args = parser.parse_args() 545 | 546 | def do_api_calls_update_cache(self): 547 | ''' Do API calls to each region, and save data in cache files ''' 548 | 549 | if self.route53_enabled: 550 | self.get_route53_records() 551 | 552 | for region in self.regions: 553 | self.get_instances_by_region(region) 554 | if self.rds_enabled: 555 | self.get_rds_instances_by_region(region) 556 | if self.elasticache_enabled: 557 | self.get_elasticache_clusters_by_region(region) 558 | self.get_elasticache_replication_groups_by_region(region) 559 | if self.include_rds_clusters: 560 | self.include_rds_clusters_by_region(region) 561 | 562 | self.write_to_cache(self.inventory, self.cache_path_cache) 563 | self.write_to_cache(self.index, self.cache_path_index) 564 | 565 | def connect(self, region): 566 | ''' create connection to api server''' 567 | if self.eucalyptus: 568 | conn = boto.connect_euca(host=self.eucalyptus_host, **self.credentials) 569 | conn.APIVersion = '2010-08-31' 570 | else: 571 | conn = self.connect_to_aws(ec2, region) 572 | return conn 573 | 574 | def boto_fix_security_token_in_profile(self, connect_args): 575 | ''' monkey patch for boto issue boto/boto#2100 ''' 576 | profile = 'profile ' + self.boto_profile 577 | if boto.config.has_option(profile, 'aws_security_token'): 578 | connect_args['security_token'] = boto.config.get(profile, 'aws_security_token') 579 | return connect_args 580 | 581 | def connect_to_aws(self, module, region): 582 | connect_args = deepcopy(self.credentials) 583 | 584 | # only pass the profile name if it's set (as it is not supported by older boto versions) 585 | if self.boto_profile: 586 | connect_args['profile_name'] = self.boto_profile 587 | self.boto_fix_security_token_in_profile(connect_args) 588 | elif os.environ.get('AWS_SESSION_TOKEN'): 589 | 
connect_args['security_token'] = os.environ.get('AWS_SESSION_TOKEN') 590 | 591 | if self.iam_role: 592 | sts_conn = sts.connect_to_region(region, **connect_args) 593 | role = sts_conn.assume_role(self.iam_role, 'ansible_dynamic_inventory') 594 | connect_args['aws_access_key_id'] = role.credentials.access_key 595 | connect_args['aws_secret_access_key'] = role.credentials.secret_key 596 | connect_args['security_token'] = role.credentials.session_token 597 | 598 | conn = module.connect_to_region(region, **connect_args) 599 | # connect_to_region will fail "silently" by returning None if the region name is wrong or not supported 600 | if conn is None: 601 | self.fail_with_error("region name: %s likely not supported, or AWS is down. connection to region failed." % region) 602 | return conn 603 | 604 | def get_instances_by_region(self, region): 605 | ''' Makes an AWS EC2 API call to the list of instances in a particular 606 | region ''' 607 | 608 | try: 609 | conn = self.connect(region) 610 | reservations = [] 611 | if self.ec2_instance_filters: 612 | if self.stack_filters: 613 | filters_dict = {} 614 | for filters in self.ec2_instance_filters: 615 | filters_dict.update(filters) 616 | reservations.extend(conn.get_all_instances(filters=filters_dict)) 617 | else: 618 | for filters in self.ec2_instance_filters: 619 | reservations.extend(conn.get_all_instances(filters=filters)) 620 | else: 621 | reservations = conn.get_all_instances() 622 | 623 | # Pull the tags back in a second step 624 | # AWS are on record as saying that the tags fetched in the first `get_all_instances` request are not 625 | # reliable and may be missing, and the only way to guarantee they are there is by calling `get_all_tags` 626 | instance_ids = [] 627 | for reservation in reservations: 628 | instance_ids.extend([instance.id for instance in reservation.instances]) 629 | 630 | max_filter_value = 199 631 | tags = [] 632 | for i in range(0, len(instance_ids), max_filter_value): 633 | tags.extend(conn.get_all_tags(filters={'resource-type': 'instance', 'resource-id': instance_ids[i:i + max_filter_value]})) 634 | 635 | tags_by_instance_id = defaultdict(dict) 636 | for tag in tags: 637 | tags_by_instance_id[tag.res_id][tag.name] = tag.value 638 | 639 | if (not self.aws_account_id) and reservations: 640 | self.aws_account_id = reservations[0].owner_id 641 | 642 | for reservation in reservations: 643 | for instance in reservation.instances: 644 | instance.tags = tags_by_instance_id[instance.id] 645 | self.add_instance(instance, region) 646 | 647 | except boto.exception.BotoServerError as e: 648 | if e.error_code == 'AuthFailure': 649 | error = self.get_auth_error_message() 650 | else: 651 | backend = 'Eucalyptus' if self.eucalyptus else 'AWS' 652 | error = "Error connecting to %s backend.\n%s" % (backend, e.message) 653 | self.fail_with_error(error, 'getting EC2 instances') 654 | 655 | def tags_match_filters(self, tags): 656 | ''' return True if given tags match configured filters ''' 657 | if not self.ec2_instance_filters: 658 | return True 659 | 660 | for filters in self.ec2_instance_filters: 661 | for filter_name, filter_value in filters.items(): 662 | if filter_name[:4] != 'tag:': 663 | continue 664 | filter_name = filter_name[4:] 665 | if filter_name not in tags: 666 | if self.stack_filters: 667 | return False 668 | continue 669 | if isinstance(filter_value, list): 670 | if self.stack_filters and tags[filter_name] not in filter_value: 671 | return False 672 | if not self.stack_filters and tags[filter_name] in filter_value: 673 | 
return True 674 | if isinstance(filter_value, six.string_types): 675 | if self.stack_filters and tags[filter_name] != filter_value: 676 | return False 677 | if not self.stack_filters and tags[filter_name] == filter_value: 678 | return True 679 | 680 | return self.stack_filters 681 | 682 | def get_rds_instances_by_region(self, region): 683 | ''' Makes an AWS API call to the list of RDS instances in a particular 684 | region ''' 685 | 686 | if not HAS_BOTO3: 687 | self.fail_with_error("Working with RDS instances requires boto3 - please install boto3 and try again", 688 | "getting RDS instances") 689 | 690 | client = ec2_utils.boto3_inventory_conn('client', 'rds', region, **self.credentials) 691 | db_instances = client.describe_db_instances() 692 | 693 | try: 694 | conn = self.connect_to_aws(rds, region) 695 | if conn: 696 | marker = None 697 | while True: 698 | instances = conn.get_all_dbinstances(marker=marker) 699 | marker = instances.marker 700 | for index, instance in enumerate(instances): 701 | # Add tags to instances. 702 | instance.arn = db_instances['DBInstances'][index]['DBInstanceArn'] 703 | tags = client.list_tags_for_resource(ResourceName=instance.arn)['TagList'] 704 | instance.tags = {} 705 | for tag in tags: 706 | instance.tags[tag['Key']] = tag['Value'] 707 | if self.tags_match_filters(instance.tags): 708 | self.add_rds_instance(instance, region) 709 | if not marker: 710 | break 711 | except boto.exception.BotoServerError as e: 712 | error = e.reason 713 | 714 | if e.error_code == 'AuthFailure': 715 | error = self.get_auth_error_message() 716 | elif e.error_code == "OptInRequired": 717 | error = "RDS hasn't been enabled for this account yet. " \ 718 | "You must either log in to the RDS service through the AWS console to enable it, " \ 719 | "or set 'rds = False' in ec2.ini" 720 | elif not e.reason == "Forbidden": 721 | error = "Looks like AWS RDS is down:\n%s" % e.message 722 | self.fail_with_error(error, 'getting RDS instances') 723 | 724 | def include_rds_clusters_by_region(self, region): 725 | if not HAS_BOTO3: 726 | self.fail_with_error("Working with RDS clusters requires boto3 - please install boto3 and try again", 727 | "getting RDS clusters") 728 | 729 | client = ec2_utils.boto3_inventory_conn('client', 'rds', region, **self.credentials) 730 | 731 | marker, clusters = '', [] 732 | while marker is not None: 733 | resp = client.describe_db_clusters(Marker=marker) 734 | clusters.extend(resp["DBClusters"]) 735 | marker = resp.get('Marker', None) 736 | 737 | account_id = boto.connect_iam().get_user().arn.split(':')[4] 738 | c_dict = {} 739 | for c in clusters: 740 | if not self.ec2_instance_filters: 741 | matches_filter = True 742 | else: 743 | matches_filter = False 744 | 745 | try: 746 | # arn:aws:rds:::: 747 | tags = client.list_tags_for_resource( 748 | ResourceName='arn:aws:rds:' + region + ':' + account_id + ':cluster:' + c['DBClusterIdentifier']) 749 | c['Tags'] = tags['TagList'] 750 | 751 | if self.ec2_instance_filters: 752 | for filters in self.ec2_instance_filters: 753 | for filter_key, filter_values in filters.items(): 754 | # get AWS tag key e.g. 
tag:env will be 'env' 755 | tag_name = filter_key.split(":", 1)[1] 756 | # Filter values is a list (if you put multiple values for the same tag name) 757 | matches_filter = any(d['Key'] == tag_name and d['Value'] in filter_values for d in c['Tags']) 758 | 759 | if matches_filter: 760 | # it matches a filter, so stop looking for further matches 761 | break 762 | 763 | if matches_filter: 764 | break 765 | 766 | except Exception as e: 767 | if e.message.find('DBInstanceNotFound') >= 0: 768 | # AWS RDS bug (2016-01-06) means deletion does not fully complete and leave an 'empty' cluster. 769 | # Ignore errors when trying to find tags for these 770 | pass 771 | 772 | # ignore empty clusters caused by AWS bug 773 | if len(c['DBClusterMembers']) == 0: 774 | continue 775 | elif matches_filter: 776 | c_dict[c['DBClusterIdentifier']] = c 777 | 778 | self.inventory['db_clusters'] = c_dict 779 | 780 | def get_elasticache_clusters_by_region(self, region): 781 | ''' Makes an AWS API call to the list of ElastiCache clusters (with 782 | nodes' info) in a particular region.''' 783 | 784 | # ElastiCache boto module doesn't provide a get_all_instances method, 785 | # that's why we need to call describe directly (it would be called by 786 | # the shorthand method anyway...) 787 | clusters = [] 788 | try: 789 | conn = self.connect_to_aws(elasticache, region) 790 | if conn: 791 | # show_cache_node_info = True 792 | # because we also want nodes' information 793 | _marker = 1 794 | while _marker: 795 | if _marker == 1: 796 | _marker = None 797 | response = conn.describe_cache_clusters(None, None, _marker, True) 798 | _marker = response['DescribeCacheClustersResponse']['DescribeCacheClustersResult']['Marker'] 799 | try: 800 | # Boto also doesn't provide wrapper classes to CacheClusters or 801 | # CacheNodes. Because of that we can't make use of the get_list 802 | # method in the AWSQueryConnection. Let's do the work manually 803 | clusters = clusters + response['DescribeCacheClustersResponse']['DescribeCacheClustersResult']['CacheClusters'] 804 | except KeyError as e: 805 | error = "ElastiCache query to AWS failed (unexpected format)." 806 | self.fail_with_error(error, 'getting ElastiCache clusters') 807 | except boto.exception.BotoServerError as e: 808 | error = e.reason 809 | 810 | if e.error_code == 'AuthFailure': 811 | error = self.get_auth_error_message() 812 | elif e.error_code == "OptInRequired": 813 | error = "ElastiCache hasn't been enabled for this account yet. " \ 814 | "You must either log in to the ElastiCache service through the AWS console to enable it, " \ 815 | "or set 'elasticache = False' in ec2.ini" 816 | elif not e.reason == "Forbidden": 817 | error = "Looks like AWS ElastiCache is down:\n%s" % e.message 818 | self.fail_with_error(error, 'getting ElastiCache clusters') 819 | 820 | for cluster in clusters: 821 | self.add_elasticache_cluster(cluster, region) 822 | 823 | def get_elasticache_replication_groups_by_region(self, region): 824 | ''' Makes an AWS API call to the list of ElastiCache replication groups 825 | in a particular region.''' 826 | 827 | # ElastiCache boto module doesn't provide a get_all_instances method, 828 | # that's why we need to call describe directly (it would be called by 829 | # the shorthand method anyway...) 
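        # Illustrative aside, not part of the upstream script: the raw response from
        # describe_replication_groups() comes back nested roughly as
        #   {'DescribeReplicationGroupsResponse':
        #       {'DescribeReplicationGroupsResult':
        #           {'ReplicationGroups': [...]}}}
        # which is why the list is dug out of the dict by hand a few lines below.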
830 | try: 831 | conn = self.connect_to_aws(elasticache, region) 832 | if conn: 833 | response = conn.describe_replication_groups() 834 | 835 | except boto.exception.BotoServerError as e: 836 | error = e.reason 837 | 838 | if e.error_code == 'AuthFailure': 839 | error = self.get_auth_error_message() 840 | if not e.reason == "Forbidden": 841 | error = "Looks like AWS ElastiCache [Replication Groups] is down:\n%s" % e.message 842 | self.fail_with_error(error, 'getting ElastiCache clusters') 843 | 844 | try: 845 | # Boto also doesn't provide wrapper classes to ReplicationGroups 846 | # Because of that we can't make use of the get_list method in the 847 | # AWSQueryConnection. Let's do the work manually 848 | replication_groups = response['DescribeReplicationGroupsResponse']['DescribeReplicationGroupsResult']['ReplicationGroups'] 849 | 850 | except KeyError as e: 851 | error = "ElastiCache [Replication Groups] query to AWS failed (unexpected format)." 852 | self.fail_with_error(error, 'getting ElastiCache clusters') 853 | 854 | for replication_group in replication_groups: 855 | self.add_elasticache_replication_group(replication_group, region) 856 | 857 | def get_auth_error_message(self): 858 | ''' create an informative error message if there is an issue authenticating''' 859 | errors = ["Authentication error retrieving ec2 inventory."] 860 | if None in [os.environ.get('AWS_ACCESS_KEY_ID'), os.environ.get('AWS_SECRET_ACCESS_KEY')]: 861 | errors.append(' - No AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY environment vars found') 862 | else: 863 | errors.append(' - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment vars found but may not be correct') 864 | 865 | boto_paths = ['/etc/boto.cfg', '~/.boto', '~/.aws/credentials'] 866 | boto_config_found = [p for p in boto_paths if os.path.isfile(os.path.expanduser(p))] 867 | if len(boto_config_found) > 0: 868 | errors.append(" - Boto configs found at '%s', but the credentials contained may not be correct" % ', '.join(boto_config_found)) 869 | else: 870 | errors.append(" - No Boto config found at any expected location '%s'" % ', '.join(boto_paths)) 871 | 872 | return '\n'.join(errors) 873 | 874 | def fail_with_error(self, err_msg, err_operation=None): 875 | '''log an error to std err for ansible-playbook to consume and exit''' 876 | if err_operation: 877 | err_msg = 'ERROR: "{err_msg}", while: {err_operation}'.format( 878 | err_msg=err_msg, err_operation=err_operation) 879 | sys.stderr.write(err_msg) 880 | sys.exit(1) 881 | 882 | def get_instance(self, region, instance_id): 883 | conn = self.connect(region) 884 | 885 | reservations = conn.get_all_instances([instance_id]) 886 | for reservation in reservations: 887 | for instance in reservation.instances: 888 | return instance 889 | 890 | def add_instance(self, instance, region): 891 | ''' Adds an instance to the inventory and index, as long as it is 892 | addressable ''' 893 | 894 | # Only return instances with desired instance states 895 | if instance.state not in self.ec2_instance_states: 896 | return 897 | 898 | # Select the best destination address 899 | # When destination_format and destination_format_tags are specified 900 | # the following code will attempt to find the instance tags first, 901 | # then the instance attributes next, and finally if neither are found 902 | # assign nil for the desired destination format attribute. 
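        # Illustrative aside, not part of the upstream script: with an ec2.ini such as
        #   destination_format = {0}-{1}
        #   destination_format_tags = Name,private_dns_name
        # an instance tagged Name=web01 whose private_dns_name attribute is
        # ip-10-0-0-1.ec2.internal gets dest = 'web01-ip-10-0-0-1.ec2.internal';
        # a tag or attribute that cannot be found contributes the literal 'nil'.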
903 | if self.destination_format and self.destination_format_tags: 904 | dest_vars = [] 905 | inst_tags = getattr(instance, 'tags') 906 | for tag in self.destination_format_tags: 907 | if tag in inst_tags: 908 | dest_vars.append(inst_tags[tag]) 909 | elif hasattr(instance, tag): 910 | dest_vars.append(getattr(instance, tag)) 911 | else: 912 | dest_vars.append('nil') 913 | 914 | dest = self.destination_format.format(*dest_vars) 915 | elif instance.subnet_id: 916 | dest = getattr(instance, self.vpc_destination_variable, None) 917 | if dest is None: 918 | dest = getattr(instance, 'tags').get(self.vpc_destination_variable, None) 919 | else: 920 | dest = getattr(instance, self.destination_variable, None) 921 | if dest is None: 922 | dest = getattr(instance, 'tags').get(self.destination_variable, None) 923 | 924 | if not dest: 925 | # Skip instances we cannot address (e.g. private VPC subnet) 926 | return 927 | 928 | # Set the inventory name 929 | hostname = None 930 | if self.hostname_variable: 931 | if self.hostname_variable.startswith('tag_'): 932 | hostname = instance.tags.get(self.hostname_variable[4:], None) 933 | else: 934 | hostname = getattr(instance, self.hostname_variable) 935 | 936 | # set the hostname from route53 937 | if self.route53_enabled and self.route53_hostnames: 938 | route53_names = self.get_instance_route53_names(instance) 939 | for name in route53_names: 940 | if name.endswith(self.route53_hostnames): 941 | hostname = name 942 | 943 | # If we can't get a nice hostname, use the destination address 944 | if not hostname: 945 | hostname = dest 946 | # to_safe strips hostname characters like dots, so don't strip route53 hostnames 947 | elif self.route53_enabled and self.route53_hostnames and hostname.endswith(self.route53_hostnames): 948 | hostname = hostname.lower() 949 | else: 950 | hostname = self.to_safe(hostname).lower() 951 | 952 | # if we only want to include hosts that match a pattern, skip those that don't 953 | if self.pattern_include and not self.pattern_include.match(hostname): 954 | return 955 | 956 | # if we need to exclude hosts that match a pattern, skip those 957 | if self.pattern_exclude and self.pattern_exclude.match(hostname): 958 | return 959 | 960 | # Add to index 961 | self.index[hostname] = [region, instance.id] 962 | 963 | # Inventory: Group by instance ID (always a group of 1) 964 | if self.group_by_instance_id: 965 | self.inventory[instance.id] = [hostname] 966 | if self.nested_groups: 967 | self.push_group(self.inventory, 'instances', instance.id) 968 | 969 | # Inventory: Group by region 970 | if self.group_by_region: 971 | self.push(self.inventory, region, hostname) 972 | if self.nested_groups: 973 | self.push_group(self.inventory, 'regions', region) 974 | 975 | # Inventory: Group by availability zone 976 | if self.group_by_availability_zone: 977 | self.push(self.inventory, instance.placement, hostname) 978 | if self.nested_groups: 979 | if self.group_by_region: 980 | self.push_group(self.inventory, region, instance.placement) 981 | self.push_group(self.inventory, 'zones', instance.placement) 982 | 983 | # Inventory: Group by Amazon Machine Image (AMI) ID 984 | if self.group_by_ami_id: 985 | ami_id = self.to_safe(instance.image_id) 986 | self.push(self.inventory, ami_id, hostname) 987 | if self.nested_groups: 988 | self.push_group(self.inventory, 'images', ami_id) 989 | 990 | # Inventory: Group by instance type 991 | if self.group_by_instance_type: 992 | type_name = self.to_safe('type_' + instance.instance_type) 993 | self.push(self.inventory, 
type_name, hostname) 994 | if self.nested_groups: 995 | self.push_group(self.inventory, 'types', type_name) 996 | 997 | # Inventory: Group by instance state 998 | if self.group_by_instance_state: 999 | state_name = self.to_safe('instance_state_' + instance.state) 1000 | self.push(self.inventory, state_name, hostname) 1001 | if self.nested_groups: 1002 | self.push_group(self.inventory, 'instance_states', state_name) 1003 | 1004 | # Inventory: Group by platform 1005 | if self.group_by_platform: 1006 | if instance.platform: 1007 | platform = self.to_safe('platform_' + instance.platform) 1008 | else: 1009 | platform = self.to_safe('platform_undefined') 1010 | self.push(self.inventory, platform, hostname) 1011 | if self.nested_groups: 1012 | self.push_group(self.inventory, 'platforms', platform) 1013 | 1014 | # Inventory: Group by key pair 1015 | if self.group_by_key_pair and instance.key_name: 1016 | key_name = self.to_safe('key_' + instance.key_name) 1017 | self.push(self.inventory, key_name, hostname) 1018 | if self.nested_groups: 1019 | self.push_group(self.inventory, 'keys', key_name) 1020 | 1021 | # Inventory: Group by VPC 1022 | if self.group_by_vpc_id and instance.vpc_id: 1023 | vpc_id_name = self.to_safe('vpc_id_' + instance.vpc_id) 1024 | self.push(self.inventory, vpc_id_name, hostname) 1025 | if self.nested_groups: 1026 | self.push_group(self.inventory, 'vpcs', vpc_id_name) 1027 | 1028 | # Inventory: Group by security group 1029 | if self.group_by_security_group: 1030 | try: 1031 | for group in instance.groups: 1032 | key = self.to_safe("security_group_" + group.name) 1033 | self.push(self.inventory, key, hostname) 1034 | if self.nested_groups: 1035 | self.push_group(self.inventory, 'security_groups', key) 1036 | except AttributeError: 1037 | self.fail_with_error('\n'.join(['Package boto seems a bit older.', 1038 | 'Please upgrade boto >= 2.3.0.'])) 1039 | 1040 | # Inventory: Group by AWS account ID 1041 | if self.group_by_aws_account: 1042 | self.push(self.inventory, self.aws_account_id, hostname) 1043 | if self.nested_groups: 1044 | self.push_group(self.inventory, 'accounts', self.aws_account_id) 1045 | 1046 | # Inventory: Group by tag keys 1047 | if self.group_by_tag_keys: 1048 | for k, v in instance.tags.items(): 1049 | if self.expand_csv_tags and v and ',' in v: 1050 | values = map(lambda x: x.strip(), v.split(',')) 1051 | else: 1052 | values = [v] 1053 | 1054 | for v in values: 1055 | if v: 1056 | key = self.to_safe("tag_" + k + "=" + v) 1057 | else: 1058 | key = self.to_safe("tag_" + k) 1059 | self.push(self.inventory, key, hostname) 1060 | if self.nested_groups: 1061 | self.push_group(self.inventory, 'tags', self.to_safe("tag_" + k)) 1062 | if v: 1063 | self.push_group(self.inventory, self.to_safe("tag_" + k), key) 1064 | 1065 | # Inventory: Group by Route53 domain names if enabled 1066 | if self.route53_enabled and self.group_by_route53_names: 1067 | route53_names = self.get_instance_route53_names(instance) 1068 | for name in route53_names: 1069 | self.push(self.inventory, name, hostname) 1070 | if self.nested_groups: 1071 | self.push_group(self.inventory, 'route53', name) 1072 | 1073 | # Global Tag: instances without tags 1074 | if self.group_by_tag_none and len(instance.tags) == 0: 1075 | self.push(self.inventory, 'tag_none', hostname) 1076 | if self.nested_groups: 1077 | self.push_group(self.inventory, 'tags', 'tag_none') 1078 | 1079 | # Global Tag: tag all EC2 instances 1080 | self.push(self.inventory, 'ec2', hostname) 1081 | 1082 | 
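        # Illustrative aside, not part of the upstream script: for a flat (non-nested)
        # inventory, the JSON printed by --list ends up shaped roughly like
        #   {"ec2": ["web01"], "us-east-1": ["web01"], "i-0abc12345": ["web01"],
        #    "_meta": {"hostvars": {"web01": {"ansible_host": "203.0.113.10"}}}}
        # with every group key mapping to an asciibetically sorted list of hostnames,
        # and the per-host ec2_* variables filled into hostvars just below.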
self.inventory["_meta"]["hostvars"][hostname] = self.get_host_info_dict_from_instance(instance) 1083 | self.inventory["_meta"]["hostvars"][hostname]['ansible_host'] = dest 1084 | 1085 | def add_rds_instance(self, instance, region): 1086 | ''' Adds an RDS instance to the inventory and index, as long as it is 1087 | addressable ''' 1088 | 1089 | # Only want available instances unless all_rds_instances is True 1090 | if not self.all_rds_instances and instance.status != 'available': 1091 | return 1092 | 1093 | # Select the best destination address 1094 | dest = instance.endpoint[0] 1095 | 1096 | if not dest: 1097 | # Skip instances we cannot address (e.g. private VPC subnet) 1098 | return 1099 | 1100 | # Set the inventory name 1101 | hostname = None 1102 | if self.hostname_variable: 1103 | if self.hostname_variable.startswith('tag_'): 1104 | hostname = instance.tags.get(self.hostname_variable[4:], None) 1105 | else: 1106 | hostname = getattr(instance, self.hostname_variable) 1107 | 1108 | # If we can't get a nice hostname, use the destination address 1109 | if not hostname: 1110 | hostname = dest 1111 | 1112 | hostname = self.to_safe(hostname).lower() 1113 | 1114 | # Add to index 1115 | self.index[hostname] = [region, instance.id] 1116 | 1117 | # Inventory: Group by instance ID (always a group of 1) 1118 | if self.group_by_instance_id: 1119 | self.inventory[instance.id] = [hostname] 1120 | if self.nested_groups: 1121 | self.push_group(self.inventory, 'instances', instance.id) 1122 | 1123 | # Inventory: Group by region 1124 | if self.group_by_region: 1125 | self.push(self.inventory, region, hostname) 1126 | if self.nested_groups: 1127 | self.push_group(self.inventory, 'regions', region) 1128 | 1129 | # Inventory: Group by availability zone 1130 | if self.group_by_availability_zone: 1131 | self.push(self.inventory, instance.availability_zone, hostname) 1132 | if self.nested_groups: 1133 | if self.group_by_region: 1134 | self.push_group(self.inventory, region, instance.availability_zone) 1135 | self.push_group(self.inventory, 'zones', instance.availability_zone) 1136 | 1137 | # Inventory: Group by instance type 1138 | if self.group_by_instance_type: 1139 | type_name = self.to_safe('type_' + instance.instance_class) 1140 | self.push(self.inventory, type_name, hostname) 1141 | if self.nested_groups: 1142 | self.push_group(self.inventory, 'types', type_name) 1143 | 1144 | # Inventory: Group by VPC 1145 | if self.group_by_vpc_id and instance.subnet_group and instance.subnet_group.vpc_id: 1146 | vpc_id_name = self.to_safe('vpc_id_' + instance.subnet_group.vpc_id) 1147 | self.push(self.inventory, vpc_id_name, hostname) 1148 | if self.nested_groups: 1149 | self.push_group(self.inventory, 'vpcs', vpc_id_name) 1150 | 1151 | # Inventory: Group by security group 1152 | if self.group_by_security_group: 1153 | try: 1154 | if instance.security_group: 1155 | key = self.to_safe("security_group_" + instance.security_group.name) 1156 | self.push(self.inventory, key, hostname) 1157 | if self.nested_groups: 1158 | self.push_group(self.inventory, 'security_groups', key) 1159 | 1160 | except AttributeError: 1161 | self.fail_with_error('\n'.join(['Package boto seems a bit older.', 1162 | 'Please upgrade boto >= 2.3.0.'])) 1163 | # Inventory: Group by tag keys 1164 | if self.group_by_tag_keys: 1165 | for k, v in instance.tags.items(): 1166 | if self.expand_csv_tags and v and ',' in v: 1167 | values = map(lambda x: x.strip(), v.split(',')) 1168 | else: 1169 | values = [v] 1170 | 1171 | for v in values: 1172 | if v: 1173 
| key = self.to_safe("tag_" + k + "=" + v) 1174 | else: 1175 | key = self.to_safe("tag_" + k) 1176 | self.push(self.inventory, key, hostname) 1177 | if self.nested_groups: 1178 | self.push_group(self.inventory, 'tags', self.to_safe("tag_" + k)) 1179 | if v: 1180 | self.push_group(self.inventory, self.to_safe("tag_" + k), key) 1181 | 1182 | # Inventory: Group by engine 1183 | if self.group_by_rds_engine: 1184 | self.push(self.inventory, self.to_safe("rds_" + instance.engine), hostname) 1185 | if self.nested_groups: 1186 | self.push_group(self.inventory, 'rds_engines', self.to_safe("rds_" + instance.engine)) 1187 | 1188 | # Inventory: Group by parameter group 1189 | if self.group_by_rds_parameter_group: 1190 | self.push(self.inventory, self.to_safe("rds_parameter_group_" + instance.parameter_group.name), hostname) 1191 | if self.nested_groups: 1192 | self.push_group(self.inventory, 'rds_parameter_groups', self.to_safe("rds_parameter_group_" + instance.parameter_group.name)) 1193 | 1194 | # Global Tag: instances without tags 1195 | if self.group_by_tag_none and len(instance.tags) == 0: 1196 | self.push(self.inventory, 'tag_none', hostname) 1197 | if self.nested_groups: 1198 | self.push_group(self.inventory, 'tags', 'tag_none') 1199 | 1200 | # Global Tag: all RDS instances 1201 | self.push(self.inventory, 'rds', hostname) 1202 | 1203 | self.inventory["_meta"]["hostvars"][hostname] = self.get_host_info_dict_from_instance(instance) 1204 | self.inventory["_meta"]["hostvars"][hostname]['ansible_host'] = dest 1205 | 1206 | def add_elasticache_cluster(self, cluster, region): 1207 | ''' Adds an ElastiCache cluster to the inventory and index, as long as 1208 | it's nodes are addressable ''' 1209 | 1210 | # Only want available clusters unless all_elasticache_clusters is True 1211 | if not self.all_elasticache_clusters and cluster['CacheClusterStatus'] != 'available': 1212 | return 1213 | 1214 | # Select the best destination address 1215 | if 'ConfigurationEndpoint' in cluster and cluster['ConfigurationEndpoint']: 1216 | # Memcached cluster 1217 | dest = cluster['ConfigurationEndpoint']['Address'] 1218 | is_redis = False 1219 | else: 1220 | # Redis sigle node cluster 1221 | # Because all Redis clusters are single nodes, we'll merge the 1222 | # info from the cluster with info about the node 1223 | dest = cluster['CacheNodes'][0]['Endpoint']['Address'] 1224 | is_redis = True 1225 | 1226 | if not dest: 1227 | # Skip clusters we cannot address (e.g. 
private VPC subnet) 1228 | return 1229 | 1230 | # Add to index 1231 | self.index[dest] = [region, cluster['CacheClusterId']] 1232 | 1233 | # Inventory: Group by instance ID (always a group of 1) 1234 | if self.group_by_instance_id: 1235 | self.inventory[cluster['CacheClusterId']] = [dest] 1236 | if self.nested_groups: 1237 | self.push_group(self.inventory, 'instances', cluster['CacheClusterId']) 1238 | 1239 | # Inventory: Group by region 1240 | if self.group_by_region and not is_redis: 1241 | self.push(self.inventory, region, dest) 1242 | if self.nested_groups: 1243 | self.push_group(self.inventory, 'regions', region) 1244 | 1245 | # Inventory: Group by availability zone 1246 | if self.group_by_availability_zone and not is_redis: 1247 | self.push(self.inventory, cluster['PreferredAvailabilityZone'], dest) 1248 | if self.nested_groups: 1249 | if self.group_by_region: 1250 | self.push_group(self.inventory, region, cluster['PreferredAvailabilityZone']) 1251 | self.push_group(self.inventory, 'zones', cluster['PreferredAvailabilityZone']) 1252 | 1253 | # Inventory: Group by node type 1254 | if self.group_by_instance_type and not is_redis: 1255 | type_name = self.to_safe('type_' + cluster['CacheNodeType']) 1256 | self.push(self.inventory, type_name, dest) 1257 | if self.nested_groups: 1258 | self.push_group(self.inventory, 'types', type_name) 1259 | 1260 | # Inventory: Group by VPC (information not available in the current 1261 | # AWS API version for ElastiCache) 1262 | 1263 | # Inventory: Group by security group 1264 | if self.group_by_security_group and not is_redis: 1265 | 1266 | # Check for the existence of the 'SecurityGroups' key and also if 1267 | # this key has some value. When the cluster is not placed in a SG 1268 | # the query can return None here and cause an error. 
1269 | if 'SecurityGroups' in cluster and cluster['SecurityGroups'] is not None: 1270 | for security_group in cluster['SecurityGroups']: 1271 | key = self.to_safe("security_group_" + security_group['SecurityGroupId']) 1272 | self.push(self.inventory, key, dest) 1273 | if self.nested_groups: 1274 | self.push_group(self.inventory, 'security_groups', key) 1275 | 1276 | # Inventory: Group by engine 1277 | if self.group_by_elasticache_engine and not is_redis: 1278 | self.push(self.inventory, self.to_safe("elasticache_" + cluster['Engine']), dest) 1279 | if self.nested_groups: 1280 | self.push_group(self.inventory, 'elasticache_engines', self.to_safe(cluster['Engine'])) 1281 | 1282 | # Inventory: Group by parameter group 1283 | if self.group_by_elasticache_parameter_group: 1284 | self.push(self.inventory, self.to_safe("elasticache_parameter_group_" + cluster['CacheParameterGroup']['CacheParameterGroupName']), dest) 1285 | if self.nested_groups: 1286 | self.push_group(self.inventory, 'elasticache_parameter_groups', self.to_safe(cluster['CacheParameterGroup']['CacheParameterGroupName'])) 1287 | 1288 | # Inventory: Group by replication group 1289 | if self.group_by_elasticache_replication_group and 'ReplicationGroupId' in cluster and cluster['ReplicationGroupId']: 1290 | self.push(self.inventory, self.to_safe("elasticache_replication_group_" + cluster['ReplicationGroupId']), dest) 1291 | if self.nested_groups: 1292 | self.push_group(self.inventory, 'elasticache_replication_groups', self.to_safe(cluster['ReplicationGroupId'])) 1293 | 1294 | # Global Tag: all ElastiCache clusters 1295 | self.push(self.inventory, 'elasticache_clusters', cluster['CacheClusterId']) 1296 | 1297 | host_info = self.get_host_info_dict_from_describe_dict(cluster) 1298 | 1299 | self.inventory["_meta"]["hostvars"][dest] = host_info 1300 | 1301 | # Add the nodes 1302 | for node in cluster['CacheNodes']: 1303 | self.add_elasticache_node(node, cluster, region) 1304 | 1305 | def add_elasticache_node(self, node, cluster, region): 1306 | ''' Adds an ElastiCache node to the inventory and index, as long as 1307 | it is addressable ''' 1308 | 1309 | # Only want available nodes unless all_elasticache_nodes is True 1310 | if not self.all_elasticache_nodes and node['CacheNodeStatus'] != 'available': 1311 | return 1312 | 1313 | # Select the best destination address 1314 | dest = node['Endpoint']['Address'] 1315 | 1316 | if not dest: 1317 | # Skip nodes we cannot address (e.g. 
private VPC subnet) 1318 | return 1319 | 1320 | node_id = self.to_safe(cluster['CacheClusterId'] + '_' + node['CacheNodeId']) 1321 | 1322 | # Add to index 1323 | self.index[dest] = [region, node_id] 1324 | 1325 | # Inventory: Group by node ID (always a group of 1) 1326 | if self.group_by_instance_id: 1327 | self.inventory[node_id] = [dest] 1328 | if self.nested_groups: 1329 | self.push_group(self.inventory, 'instances', node_id) 1330 | 1331 | # Inventory: Group by region 1332 | if self.group_by_region: 1333 | self.push(self.inventory, region, dest) 1334 | if self.nested_groups: 1335 | self.push_group(self.inventory, 'regions', region) 1336 | 1337 | # Inventory: Group by availability zone 1338 | if self.group_by_availability_zone: 1339 | self.push(self.inventory, cluster['PreferredAvailabilityZone'], dest) 1340 | if self.nested_groups: 1341 | if self.group_by_region: 1342 | self.push_group(self.inventory, region, cluster['PreferredAvailabilityZone']) 1343 | self.push_group(self.inventory, 'zones', cluster['PreferredAvailabilityZone']) 1344 | 1345 | # Inventory: Group by node type 1346 | if self.group_by_instance_type: 1347 | type_name = self.to_safe('type_' + cluster['CacheNodeType']) 1348 | self.push(self.inventory, type_name, dest) 1349 | if self.nested_groups: 1350 | self.push_group(self.inventory, 'types', type_name) 1351 | 1352 | # Inventory: Group by VPC (information not available in the current 1353 | # AWS API version for ElastiCache) 1354 | 1355 | # Inventory: Group by security group 1356 | if self.group_by_security_group: 1357 | 1358 | # Check for the existence of the 'SecurityGroups' key and also if 1359 | # this key has some value. When the cluster is not placed in a SG 1360 | # the query can return None here and cause an error. 1361 | if 'SecurityGroups' in cluster and cluster['SecurityGroups'] is not None: 1362 | for security_group in cluster['SecurityGroups']: 1363 | key = self.to_safe("security_group_" + security_group['SecurityGroupId']) 1364 | self.push(self.inventory, key, dest) 1365 | if self.nested_groups: 1366 | self.push_group(self.inventory, 'security_groups', key) 1367 | 1368 | # Inventory: Group by engine 1369 | if self.group_by_elasticache_engine: 1370 | self.push(self.inventory, self.to_safe("elasticache_" + cluster['Engine']), dest) 1371 | if self.nested_groups: 1372 | self.push_group(self.inventory, 'elasticache_engines', self.to_safe("elasticache_" + cluster['Engine'])) 1373 | 1374 | # Inventory: Group by parameter group (done at cluster level) 1375 | 1376 | # Inventory: Group by replication group (done at cluster level) 1377 | 1378 | # Inventory: Group by ElastiCache Cluster 1379 | if self.group_by_elasticache_cluster: 1380 | self.push(self.inventory, self.to_safe("elasticache_cluster_" + cluster['CacheClusterId']), dest) 1381 | 1382 | # Global Tag: all ElastiCache nodes 1383 | self.push(self.inventory, 'elasticache_nodes', dest) 1384 | 1385 | host_info = self.get_host_info_dict_from_describe_dict(node) 1386 | 1387 | if dest in self.inventory["_meta"]["hostvars"]: 1388 | self.inventory["_meta"]["hostvars"][dest].update(host_info) 1389 | else: 1390 | self.inventory["_meta"]["hostvars"][dest] = host_info 1391 | 1392 | def add_elasticache_replication_group(self, replication_group, region): 1393 | ''' Adds an ElastiCache replication group to the inventory and index ''' 1394 | 1395 | # Only want available clusters unless all_elasticache_replication_groups is True 1396 | if not self.all_elasticache_replication_groups and replication_group['Status'] != 'available': 
1397 | return 1398 | 1399 | # Skip clusters we cannot address (e.g. private VPC subnet or clustered redis) 1400 | if replication_group['NodeGroups'][0]['PrimaryEndpoint'] is None or \ 1401 | replication_group['NodeGroups'][0]['PrimaryEndpoint']['Address'] is None: 1402 | return 1403 | 1404 | # Select the best destination address (PrimaryEndpoint) 1405 | dest = replication_group['NodeGroups'][0]['PrimaryEndpoint']['Address'] 1406 | 1407 | # Add to index 1408 | self.index[dest] = [region, replication_group['ReplicationGroupId']] 1409 | 1410 | # Inventory: Group by ID (always a group of 1) 1411 | if self.group_by_instance_id: 1412 | self.inventory[replication_group['ReplicationGroupId']] = [dest] 1413 | if self.nested_groups: 1414 | self.push_group(self.inventory, 'instances', replication_group['ReplicationGroupId']) 1415 | 1416 | # Inventory: Group by region 1417 | if self.group_by_region: 1418 | self.push(self.inventory, region, dest) 1419 | if self.nested_groups: 1420 | self.push_group(self.inventory, 'regions', region) 1421 | 1422 | # Inventory: Group by availability zone (doesn't apply to replication groups) 1423 | 1424 | # Inventory: Group by node type (doesn't apply to replication groups) 1425 | 1426 | # Inventory: Group by VPC (information not available in the current 1427 | # AWS API version for replication groups 1428 | 1429 | # Inventory: Group by security group (doesn't apply to replication groups) 1430 | # Check this value in cluster level 1431 | 1432 | # Inventory: Group by engine (replication groups are always Redis) 1433 | if self.group_by_elasticache_engine: 1434 | self.push(self.inventory, 'elasticache_redis', dest) 1435 | if self.nested_groups: 1436 | self.push_group(self.inventory, 'elasticache_engines', 'redis') 1437 | 1438 | # Global Tag: all ElastiCache clusters 1439 | self.push(self.inventory, 'elasticache_replication_groups', replication_group['ReplicationGroupId']) 1440 | 1441 | host_info = self.get_host_info_dict_from_describe_dict(replication_group) 1442 | 1443 | self.inventory["_meta"]["hostvars"][dest] = host_info 1444 | 1445 | def get_route53_records(self): 1446 | ''' Get and store the map of resource records to domain names that 1447 | point to them. ''' 1448 | 1449 | if self.boto_profile: 1450 | r53_conn = route53.Route53Connection(profile_name=self.boto_profile) 1451 | else: 1452 | r53_conn = route53.Route53Connection() 1453 | all_zones = r53_conn.get_zones() 1454 | 1455 | route53_zones = [zone for zone in all_zones if zone.name[:-1] not in self.route53_excluded_zones] 1456 | 1457 | self.route53_records = {} 1458 | 1459 | for zone in route53_zones: 1460 | rrsets = r53_conn.get_all_rrsets(zone.id) 1461 | 1462 | for record_set in rrsets: 1463 | record_name = record_set.name 1464 | 1465 | if record_name.endswith('.'): 1466 | record_name = record_name[:-1] 1467 | 1468 | for resource in record_set.resource_records: 1469 | self.route53_records.setdefault(resource, set()) 1470 | self.route53_records[resource].add(record_name) 1471 | 1472 | def get_instance_route53_names(self, instance): 1473 | ''' Check if an instance is referenced in the records we have from 1474 | Route53. If it is, return the list of domain names pointing to said 1475 | instance. If nothing points to it, return an empty list. 
''' 1476 | 1477 | instance_attributes = ['public_dns_name', 'private_dns_name', 1478 | 'ip_address', 'private_ip_address'] 1479 | 1480 | name_list = set() 1481 | 1482 | for attrib in instance_attributes: 1483 | try: 1484 | value = getattr(instance, attrib) 1485 | except AttributeError: 1486 | continue 1487 | 1488 | if value in self.route53_records: 1489 | name_list.update(self.route53_records[value]) 1490 | 1491 | return list(name_list) 1492 | 1493 | def get_host_info_dict_from_instance(self, instance): 1494 | instance_vars = {} 1495 | for key in vars(instance): 1496 | value = getattr(instance, key) 1497 | key = self.to_safe('ec2_' + key) 1498 | 1499 | # Handle complex types 1500 | # state/previous_state changed to properties in boto in https://github.com/boto/boto/commit/a23c379837f698212252720d2af8dec0325c9518 1501 | if key == 'ec2__state': 1502 | instance_vars['ec2_state'] = instance.state or '' 1503 | instance_vars['ec2_state_code'] = instance.state_code 1504 | elif key == 'ec2__previous_state': 1505 | instance_vars['ec2_previous_state'] = instance.previous_state or '' 1506 | instance_vars['ec2_previous_state_code'] = instance.previous_state_code 1507 | elif isinstance(value, (int, bool)): 1508 | instance_vars[key] = value 1509 | elif isinstance(value, six.string_types): 1510 | instance_vars[key] = value.strip() 1511 | elif value is None: 1512 | instance_vars[key] = '' 1513 | elif key == 'ec2_region': 1514 | instance_vars[key] = value.name 1515 | elif key == 'ec2__placement': 1516 | instance_vars['ec2_placement'] = value.zone 1517 | elif key == 'ec2_tags': 1518 | for k, v in value.items(): 1519 | if self.expand_csv_tags and ',' in v: 1520 | v = list(map(lambda x: x.strip(), v.split(','))) 1521 | key = self.to_safe('ec2_tag_' + k) 1522 | instance_vars[key] = v 1523 | elif key == 'ec2_groups': 1524 | group_ids = [] 1525 | group_names = [] 1526 | for group in value: 1527 | group_ids.append(group.id) 1528 | group_names.append(group.name) 1529 | instance_vars["ec2_security_group_ids"] = ','.join([str(i) for i in group_ids]) 1530 | instance_vars["ec2_security_group_names"] = ','.join([str(i) for i in group_names]) 1531 | elif key == 'ec2_block_device_mapping': 1532 | instance_vars["ec2_block_devices"] = {} 1533 | for k, v in value.items(): 1534 | instance_vars["ec2_block_devices"][os.path.basename(k)] = v.volume_id 1535 | else: 1536 | pass 1537 | # TODO Product codes if someone finds them useful 1538 | # print key 1539 | # print type(value) 1540 | # print value 1541 | 1542 | instance_vars[self.to_safe('ec2_account_id')] = self.aws_account_id 1543 | 1544 | return instance_vars 1545 | 1546 | def get_host_info_dict_from_describe_dict(self, describe_dict): 1547 | ''' Parses the dictionary returned by the API call into a flat list 1548 | of parameters. This method should be used only when 'describe' is 1549 | used directly because Boto doesn't provide specific classes. ''' 1550 | 1551 | # I really don't agree with prefixing everything with 'ec2' 1552 | # because EC2, RDS and ElastiCache are different services. 1553 | # I'm just following the pattern used until now to not break any 1554 | # compatibility. 
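        # Illustrative aside, not part of the upstream script: each key is run through
        # uncammelize() and to_safe() with an 'ec2_' prefix, so a describe field such
        # as 'CacheNodeType' is expected to surface in hostvars as
        # 'ec2_cache_node_type', and 'Engine' as 'ec2_engine'.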
1555 |
1556 |         host_info = {}
1557 |         for key in describe_dict:
1558 |             value = describe_dict[key]
1559 |             key = self.to_safe('ec2_' + self.uncammelize(key))
1560 |
1561 |             # Handle complex types
1562 |
1563 |             # Target: Memcached Cache Clusters
1564 |             if key == 'ec2_configuration_endpoint' and value:
1565 |                 host_info['ec2_configuration_endpoint_address'] = value['Address']
1566 |                 host_info['ec2_configuration_endpoint_port'] = value['Port']
1567 |
1568 |             # Target: Cache Nodes and Redis Cache Clusters (single node)
1569 |             if key == 'ec2_endpoint' and value:
1570 |                 host_info['ec2_endpoint_address'] = value['Address']
1571 |                 host_info['ec2_endpoint_port'] = value['Port']
1572 |
1573 |             # Target: Redis Replication Groups
1574 |             if key == 'ec2_node_groups' and value:
1575 |                 host_info['ec2_endpoint_address'] = value[0]['PrimaryEndpoint']['Address']
1576 |                 host_info['ec2_endpoint_port'] = value[0]['PrimaryEndpoint']['Port']
1577 |                 replica_count = 0
1578 |                 for node in value[0]['NodeGroupMembers']:
1579 |                     if node['CurrentRole'] == 'primary':
1580 |                         host_info['ec2_primary_cluster_address'] = node['ReadEndpoint']['Address']
1581 |                         host_info['ec2_primary_cluster_port'] = node['ReadEndpoint']['Port']
1582 |                         host_info['ec2_primary_cluster_id'] = node['CacheClusterId']
1583 |                     elif node['CurrentRole'] == 'replica':
1584 |                         host_info['ec2_replica_cluster_address_' + str(replica_count)] = node['ReadEndpoint']['Address']
1585 |                         host_info['ec2_replica_cluster_port_' + str(replica_count)] = node['ReadEndpoint']['Port']
1586 |                         host_info['ec2_replica_cluster_id_' + str(replica_count)] = node['CacheClusterId']
1587 |                         replica_count += 1
1588 |
1589 |             # Target: Redis Replication Groups
1590 |             if key == 'ec2_member_clusters' and value:
1591 |                 host_info['ec2_member_clusters'] = ','.join([str(i) for i in value])
1592 |
1593 |             # Target: All Cache Clusters
1594 |             elif key == 'ec2_cache_parameter_group':
1595 |                 host_info["ec2_cache_node_ids_to_reboot"] = ','.join([str(i) for i in value['CacheNodeIdsToReboot']])
1596 |                 host_info['ec2_cache_parameter_group_name'] = value['CacheParameterGroupName']
1597 |                 host_info['ec2_cache_parameter_apply_status'] = value['ParameterApplyStatus']
1598 |
1599 |             # Target: Almost everything
1600 |             elif key == 'ec2_security_groups':
1601 |
1602 |                 # Skip if SecurityGroups is None
1603 |                 # (it is possible to have the key defined but no value in it).
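                # Note (illustrative): each entry typically looks like
                # {'SecurityGroupId': 'sg-0123456789abcdef0', 'Status': 'active'};
                # only the ids are kept, joined into a comma-separated string.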
1604 |                 if value is not None:
1605 |                     sg_ids = []
1606 |                     for sg in value:
1607 |                         sg_ids.append(sg['SecurityGroupId'])
1608 |                     host_info["ec2_security_group_ids"] = ','.join([str(i) for i in sg_ids])
1609 |
1610 |             # Target: Everything
1611 |             # Preserve booleans and integers
1612 |             elif isinstance(value, (int, bool)):
1613 |                 host_info[key] = value
1614 |
1615 |             # Target: Everything
1616 |             # Sanitize string values
1617 |             elif isinstance(value, six.string_types):
1618 |                 host_info[key] = value.strip()
1619 |
1620 |             # Target: Everything
1621 |             # Replace None by an empty string
1622 |             elif value is None:
1623 |                 host_info[key] = ''
1624 |
1625 |             else:
1626 |                 # Remove non-processed complex types
1627 |                 pass
1628 |
1629 |         return host_info
1630 |
1631 |     def get_host_info(self):
1632 |         ''' Get variables about a specific host '''
1633 |
1634 |         if len(self.index) == 0:
1635 |             # Need to load index from cache
1636 |             self.load_index_from_cache()
1637 |
1638 |         if self.args.host not in self.index:
1639 |             # try updating the cache
1640 |             self.do_api_calls_update_cache()
1641 |             if self.args.host not in self.index:
1642 |                 # host might not exist anymore
1643 |                 return self.json_format_dict({}, True)
1644 |
1645 |         (region, instance_id) = self.index[self.args.host]
1646 |
1647 |         instance = self.get_instance(region, instance_id)
1648 |         return self.json_format_dict(self.get_host_info_dict_from_instance(instance), True)
1649 |
1650 |     def push(self, my_dict, key, element):
1651 |         ''' Insert an element into an array that may not have been defined in
1652 |             the dict. The elements are inserted to preserve asciibetical ordering
1653 |             of the array '''
1654 |         group_info = my_dict.setdefault(key, [])
1655 |         if isinstance(group_info, dict):
1656 |             host_list = group_info.setdefault('hosts', [])
1657 |             bisect.insort(host_list, element)
1658 |         else:
1659 |             bisect.insort(group_info, element)
1660 |
1661 |     def push_group(self, my_dict, key, element):
1662 |         ''' Push a group as a child of another group. '''
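        # Note (illustrative): push_group(inventory, 'regions', 'us-east-1') leaves
        # inventory['regions'] == {'children': ['us-east-1']}; an existing plain host
        # list is first converted to {'hosts': [...]} before children are added.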
1663 |         parent_group = my_dict.setdefault(key, {})
1664 |         if not isinstance(parent_group, dict):
1665 |             parent_group = my_dict[key] = {'hosts': parent_group}
1666 |         child_groups = parent_group.setdefault('children', [])
1667 |         if element not in child_groups:
1668 |             child_groups.append(element)
1669 |
1670 |     def get_inventory_from_cache(self):
1671 |         ''' Reads the inventory from the cache file and returns it as a JSON
1672 |             object '''
1673 |
1674 |         with open(self.cache_path_cache, 'r') as f:
1675 |             json_inventory = f.read()
1676 |         return json_inventory
1677 |
1678 |     def load_index_from_cache(self):
1679 |         ''' Reads the index from the cache file and sets self.index '''
1680 |
1681 |         with open(self.cache_path_index, 'rb') as f:
1682 |             self.index = json.load(f)
1683 |
1684 |     def write_to_cache(self, data, filename):
1685 |         ''' Writes data in JSON format to a file '''
1686 |
1687 |         json_data = self.json_format_dict(data, True)
1688 |         with open(filename, 'w') as f:
1689 |             f.write(json_data)
1690 |
1691 |     def uncammelize(self, key):
1692 |         temp = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', key)
1693 |         return re.sub('([a-z0-9])([A-Z])', r'\1_\2', temp).lower()
1694 |
1695 |     def to_safe(self, word):
1696 |         ''' Converts 'bad' characters in a string to underscores so they can be used as Ansible groups '''
1697 |         regex = r"[^A-Za-z0-9\_"
1698 |         if not self.replace_dash_in_groups:
1699 |             regex += r"\-"
1700 |         return re.sub(regex + "]", "_", word)
1701 |
1702 |     def json_format_dict(self, data, pretty=False):
1703 |         ''' Converts a dict to a JSON object and dumps it as a formatted
1704 |             string '''
1705 |
1706 |         if pretty:
1707 |             return json.dumps(data, sort_keys=True, indent=2, default=self._json_serial)
1708 |         else:
1709 |             return json.dumps(data, default=self._json_serial)
1710 |
1711 |
1712 | if __name__ == '__main__':
1713 |     # Run the script
1714 |     Ec2Inventory()
1715 |
--------------------------------------------------------------------------------
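Note: the push() and push_group() helpers above determine the shape of the JSON inventory that json_format_dict() finally emits. The following standalone sketch (illustrative only, not part of the packaged files; it omits the script's _json_serial date handler and uses made-up host names) reproduces that logic to show the structure Ansible receives when the script is invoked with --list:

import bisect
import json

inventory = {'_meta': {'hostvars': {}}}

def push(my_dict, key, element):
    # Mirrors Ec2Inventory.push(): keep each group's host list sorted on insert.
    group_info = my_dict.setdefault(key, [])
    if isinstance(group_info, dict):
        bisect.insort(group_info.setdefault('hosts', []), element)
    else:
        bisect.insort(group_info, element)

def push_group(my_dict, key, element):
    # Mirrors Ec2Inventory.push_group(): record a child group under a parent group.
    parent_group = my_dict.setdefault(key, {})
    if not isinstance(parent_group, dict):
        parent_group = my_dict[key] = {'hosts': parent_group}
    children = parent_group.setdefault('children', [])
    if element not in children:
        children.append(element)

# Hypothetical hosts grouped by region, plus the nested 'regions' parent group
push(inventory, 'us-east-1', 'ec2-203-0-113-10.compute-1.amazonaws.com')
push(inventory, 'us-east-1', 'ec2-198-51-100-7.compute-1.amazonaws.com')
push_group(inventory, 'regions', 'us-east-1')

# Equivalent of json_format_dict(inventory, pretty=True), minus the date serializer
print(json.dumps(inventory, sort_keys=True, indent=2))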