├── .gitignore ├── .gitreview ├── .stestr.conf ├── .zuul.yaml ├── CONTRIBUTING.rst ├── LICENSE ├── MANIFEST.in ├── README.rst ├── bin ├── cfn-create-aws-symlinks ├── cfn-get-metadata ├── cfn-hup ├── cfn-init └── cfn-signal ├── doc ├── .gitignore ├── Makefile ├── README.rst ├── requirements.txt └── source │ ├── cfn-create-aws-symlinks.rst │ ├── cfn-get-metadata.rst │ ├── cfn-hup.rst │ ├── cfn-init.rst │ ├── cfn-push-stats.rst │ ├── cfn-signal.rst │ ├── conf.py │ ├── contributor │ └── contributing.rst │ └── index.rst ├── heat_cfntools ├── __init__.py ├── cfntools │ ├── __init__.py │ └── cfn_helper.py └── tests │ ├── __init__.py │ ├── test_cfn_helper.py │ └── test_cfn_hup.py ├── releasenotes └── notes │ └── remove-cfn-push-stats-fe0cb5de0d6077cc.yaml ├── requirements.txt ├── setup.cfg ├── setup.py ├── test-requirements.txt └── tox.ini /.gitignore: -------------------------------------------------------------------------------- 1 | *.pyc 2 | *.swp 3 | build 4 | dist 5 | heat_cfntools.egg-info/ 6 | .stestr/ 7 | subunit.log 8 | .tox 9 | .coverage 10 | .coverage.* 11 | AUTHORS 12 | ChangeLog 13 | -------------------------------------------------------------------------------- /.gitreview: -------------------------------------------------------------------------------- 1 | [gerrit] 2 | host=review.opendev.org 3 | port=29418 4 | project=openstack/heat-cfntools.git 5 | -------------------------------------------------------------------------------- /.stestr.conf: -------------------------------------------------------------------------------- 1 | [DEFAULT] 2 | test_path=./heat_cfntools/tests 3 | top_dir=./ 4 | -------------------------------------------------------------------------------- /.zuul.yaml: -------------------------------------------------------------------------------- 1 | - project: 2 | templates: 3 | - check-requirements 4 | - openstack-python3-jobs 5 | - publish-openstack-docs-pti 6 | 
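The `.stestr.conf` above tells the stestr test runner where to discover this package's tests. As a minimal sketch of what those settings mean, the snippet below parses an inline copy of the config with the standard library (the inline string simply mirrors the file's content; stestr itself does its own parsing):

```python
import configparser

# Inline copy of the repository's .stestr.conf so the sketch is
# self-contained; the real file lives at the repository root.
STESTR_CONF = """\
[DEFAULT]
test_path=./heat_cfntools/tests
top_dir=./
"""


def read_stestr_settings(text):
    """Parse an .stestr.conf-style INI string and return its DEFAULT options."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return dict(parser.defaults())


settings = read_stestr_settings(STESTR_CONF)
# stestr discovers tests under test_path, with paths resolved relative
# to top_dir.
```
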
-------------------------------------------------------------------------------- /CONTRIBUTING.rst: -------------------------------------------------------------------------------- 1 | The source repository for this project can be found at: 2 | 3 | https://opendev.org/openstack/heat-cfntools 4 | 5 | Pull requests submitted through GitHub are not monitored. 6 | 7 | To start contributing to OpenStack, follow the steps in the contribution guide 8 | to set up and use Gerrit: 9 | 10 | https://docs.openstack.org/contributors/code-and-documentation/quick-start.html 11 | 12 | Bugs should be filed on StoryBoard: 13 | 14 | https://storyboard.openstack.org/#!/project/openstack/heat-cfntools 15 | 16 | For more specific information about contributing to this repository, see the 17 | heat-cfntools contributor guide: 18 | 19 | https://docs.openstack.org/heat-cfntools/latest/contributor/contributing.html 20 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | 2 | Apache License 3 | Version 2.0, January 2004 4 | http://www.apache.org/licenses/ 5 | 6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 7 | 8 | 1. Definitions. 9 | 10 | "License" shall mean the terms and conditions for use, reproduction, 11 | and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by 14 | the copyright owner that is granting the License. 15 | 16 | "Legal Entity" shall mean the union of the acting entity and all 17 | other entities that control, are controlled by, or are under common 18 | control with that entity.
For the purposes of this definition, 19 | "control" means (i) the power, direct or indirect, to cause the 20 | direction or management of such entity, whether by contract or 21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 22 | outstanding shares, or (iii) beneficial ownership of such entity. 23 | 24 | "You" (or "Your") shall mean an individual or Legal Entity 25 | exercising permissions granted by this License. 26 | 27 | "Source" form shall mean the preferred form for making modifications, 28 | including but not limited to software source code, documentation 29 | source, and configuration files. 30 | 31 | "Object" form shall mean any form resulting from mechanical 32 | transformation or translation of a Source form, including but 33 | not limited to compiled object code, generated documentation, 34 | and conversions to other media types. 35 | 36 | "Work" shall mean the work of authorship, whether in Source or 37 | Object form, made available under the License, as indicated by a 38 | copyright notice that is included in or attached to the work 39 | (an example is provided in the Appendix below). 40 | 41 | "Derivative Works" shall mean any work, whether in Source or Object 42 | form, that is based on (or derived from) the Work and for which the 43 | editorial revisions, annotations, elaborations, or other modifications 44 | represent, as a whole, an original work of authorship. For the purposes 45 | of this License, Derivative Works shall not include works that remain 46 | separable from, or merely link (or bind by name) to the interfaces of, 47 | the Work and Derivative Works thereof. 
48 | 49 | "Contribution" shall mean any work of authorship, including 50 | the original version of the Work and any modifications or additions 51 | to that Work or Derivative Works thereof, that is intentionally 52 | submitted to Licensor for inclusion in the Work by the copyright owner 53 | or by an individual or Legal Entity authorized to submit on behalf of 54 | the copyright owner. For the purposes of this definition, "submitted" 55 | means any form of electronic, verbal, or written communication sent 56 | to the Licensor or its representatives, including but not limited to 57 | communication on electronic mailing lists, source code control systems, 58 | and issue tracking systems that are managed by, or on behalf of, the 59 | Licensor for the purpose of discussing and improving the Work, but 60 | excluding communication that is conspicuously marked or otherwise 61 | designated in writing by the copyright owner as "Not a Contribution." 62 | 63 | "Contributor" shall mean Licensor and any individual or Legal Entity 64 | on behalf of whom a Contribution has been received by Licensor and 65 | subsequently incorporated within the Work. 66 | 67 | 2. Grant of Copyright License. Subject to the terms and conditions of 68 | this License, each Contributor hereby grants to You a perpetual, 69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 70 | copyright license to reproduce, prepare Derivative Works of, 71 | publicly display, publicly perform, sublicense, and distribute the 72 | Work and such Derivative Works in Source or Object form. 73 | 74 | 3. Grant of Patent License. 
Subject to the terms and conditions of 75 | this License, each Contributor hereby grants to You a perpetual, 76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 77 | (except as stated in this section) patent license to make, have made, 78 | use, offer to sell, sell, import, and otherwise transfer the Work, 79 | where such license applies only to those patent claims licensable 80 | by such Contributor that are necessarily infringed by their 81 | Contribution(s) alone or by combination of their Contribution(s) 82 | with the Work to which such Contribution(s) was submitted. If You 83 | institute patent litigation against any entity (including a 84 | cross-claim or counterclaim in a lawsuit) alleging that the Work 85 | or a Contribution incorporated within the Work constitutes direct 86 | or contributory patent infringement, then any patent licenses 87 | granted to You under this License for that Work shall terminate 88 | as of the date such litigation is filed. 89 | 90 | 4. Redistribution. 
You may reproduce and distribute copies of the 91 | Work or Derivative Works thereof in any medium, with or without 92 | modifications, and in Source or Object form, provided that You 93 | meet the following conditions: 94 | 95 | (a) You must give any other recipients of the Work or 96 | Derivative Works a copy of this License; and 97 | 98 | (b) You must cause any modified files to carry prominent notices 99 | stating that You changed the files; and 100 | 101 | (c) You must retain, in the Source form of any Derivative Works 102 | that You distribute, all copyright, patent, trademark, and 103 | attribution notices from the Source form of the Work, 104 | excluding those notices that do not pertain to any part of 105 | the Derivative Works; and 106 | 107 | (d) If the Work includes a "NOTICE" text file as part of its 108 | distribution, then any Derivative Works that You distribute must 109 | include a readable copy of the attribution notices contained 110 | within such NOTICE file, excluding those notices that do not 111 | pertain to any part of the Derivative Works, in at least one 112 | of the following places: within a NOTICE text file distributed 113 | as part of the Derivative Works; within the Source form or 114 | documentation, if provided along with the Derivative Works; or, 115 | within a display generated by the Derivative Works, if and 116 | wherever such third-party notices normally appear. The contents 117 | of the NOTICE file are for informational purposes only and 118 | do not modify the License. You may add Your own attribution 119 | notices within Derivative Works that You distribute, alongside 120 | or as an addendum to the NOTICE text from the Work, provided 121 | that such additional attribution notices cannot be construed 122 | as modifying the License. 
123 | 124 | You may add Your own copyright statement to Your modifications and 125 | may provide additional or different license terms and conditions 126 | for use, reproduction, or distribution of Your modifications, or 127 | for any such Derivative Works as a whole, provided Your use, 128 | reproduction, and distribution of the Work otherwise complies with 129 | the conditions stated in this License. 130 | 131 | 5. Submission of Contributions. Unless You explicitly state otherwise, 132 | any Contribution intentionally submitted for inclusion in the Work 133 | by You to the Licensor shall be under the terms and conditions of 134 | this License, without any additional terms or conditions. 135 | Notwithstanding the above, nothing herein shall supersede or modify 136 | the terms of any separate license agreement you may have executed 137 | with Licensor regarding such Contributions. 138 | 139 | 6. Trademarks. This License does not grant permission to use the trade 140 | names, trademarks, service marks, or product names of the Licensor, 141 | except as required for reasonable and customary use in describing the 142 | origin of the Work and reproducing the content of the NOTICE file. 143 | 144 | 7. Disclaimer of Warranty. Unless required by applicable law or 145 | agreed to in writing, Licensor provides the Work (and each 146 | Contributor provides its Contributions) on an "AS IS" BASIS, 147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 148 | implied, including, without limitation, any warranties or conditions 149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 150 | PARTICULAR PURPOSE. You are solely responsible for determining the 151 | appropriateness of using or redistributing the Work and assume any 152 | risks associated with Your exercise of permissions under this License. 153 | 154 | 8. Limitation of Liability. 
In no event and under no legal theory, 155 | whether in tort (including negligence), contract, or otherwise, 156 | unless required by applicable law (such as deliberate and grossly 157 | negligent acts) or agreed to in writing, shall any Contributor be 158 | liable to You for damages, including any direct, indirect, special, 159 | incidental, or consequential damages of any character arising as a 160 | result of this License or out of the use or inability to use the 161 | Work (including but not limited to damages for loss of goodwill, 162 | work stoppage, computer failure or malfunction, or any and all 163 | other commercial damages or losses), even if such Contributor 164 | has been advised of the possibility of such damages. 165 | 166 | 9. Accepting Warranty or Additional Liability. While redistributing 167 | the Work or Derivative Works thereof, You may choose to offer, 168 | and charge a fee for, acceptance of support, warranty, indemnity, 169 | or other liability obligations and/or rights consistent with this 170 | License. However, in accepting such obligations, You may act only 171 | on Your own behalf and on Your sole responsibility, not on behalf 172 | of any other Contributor, and only if You agree to indemnify, 173 | defend, and hold each Contributor harmless for any liability 174 | incurred by, or claims asserted against, such Contributor by reason 175 | of your accepting any such warranty or additional liability. 
176 | 177 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include CONTRIBUTING.rst 2 | include MANIFEST.in 3 | include README.rst 4 | include AUTHORS LICENSE 5 | include ChangeLog 6 | graft doc 7 | graft tools 8 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | ======================== 2 | Team and repository tags 3 | ======================== 4 | 5 | .. image:: https://governance.openstack.org/tc/badges/heat-cfntools.svg 6 | :target: https://governance.openstack.org/tc/reference/tags/index.html 7 | 8 | .. Change things from this point on 9 | 10 | ========================= 11 | Heat CloudFormation Tools 12 | ========================= 13 | 14 | There are several bootstrap methods for CloudFormation: 15 | 16 | 1. Create an image with the application ready to go 17 | 2. Use cloud-init to run a startup script passed as userdata to the nova 18 | server create call 19 | 3. Use the CloudFormation instance helper scripts 20 | 21 | This package contains the files required for choice #3.
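For choice #3, the helper scripts are driven by resource metadata attached to the instance. As a rough illustration of the structure cfn-init consumes (the package and service names below are hypothetical examples, not taken from this repository):

```python
import json

# Hypothetical AWS::CloudFormation::Init metadata fragment of the kind
# cfn-init reads from the instance resource. "httpd" is an illustrative
# package/service name only.
INIT_METADATA = {
    "AWS::CloudFormation::Init": {
        "config": {
            "packages": {
                "yum": {"httpd": []}
            },
            "services": {
                "sysvinit": {
                    "httpd": {"enabled": "true", "ensureRunning": "true"}
                }
            }
        }
    }
}

# cfn-init walks this structure to install the listed packages and
# enable/start the listed services.
serialized = json.dumps(INIT_METADATA, indent=2)
```
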
22 | 23 | cfn-init - 24 | Reads the AWS::CloudFormation::Init metadata for the instance resource, 25 | installs packages, and starts services 26 | cfn-signal - 27 | Waits for an application to be ready before continuing, i.e. 28 | supporting the WaitCondition feature 29 | cfn-hup - 30 | Handles updates from the UpdateStack CloudFormation API call 31 | 32 | * Free software: Apache license 33 | * Source: https://opendev.org/openstack/heat-cfntools/ 34 | * Bugs: https://storyboard.openstack.org/#!/project/openstack/heat-cfntools 35 | 36 | Related projects 37 | ---------------- 38 | * https://wiki.openstack.org/Heat 39 | -------------------------------------------------------------------------------- /bin/cfn-create-aws-symlinks: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may 4 | # not use this file except in compliance with the License. You may obtain 5 | # a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT 11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the 12 | # License for the specific language governing permissions and limitations 13 | # under the License.
14 | 15 | """ 16 | Creates symlinks for the cfn-* scripts in this directory to /opt/aws/bin 17 | """ 18 | import argparse 19 | import glob 20 | import os 21 | import os.path 22 | 23 | 24 | def create_symlink(source_file, target_file, override=False): 25 | if os.path.exists(target_file): 26 | if (override): 27 | os.remove(target_file) 28 | else: 29 | print('%s already exists, will not replace with symlink' 30 | % target_file) 31 | return 32 | print('%s -> %s' % (source_file, target_file)) 33 | os.symlink(source_file, target_file) 34 | 35 | 36 | def check_dirs(source_dir, target_dir): 37 | print('%s -> %s' % (source_dir, target_dir)) 38 | 39 | if source_dir == target_dir: 40 | print('Source and target are the same %s' % target_dir) 41 | return False 42 | 43 | if not os.path.exists(target_dir): 44 | try: 45 | os.makedirs(target_dir) 46 | except OSError as exc: 47 | print('Could not create target directory %s: %s' 48 | % (target_dir, exc)) 49 | return False 50 | return True 51 | 52 | 53 | def create_symlinks(source_dir, target_dir, glob_pattern, override): 54 | source_files = glob.glob(os.path.join(source_dir, glob_pattern)) 55 | for source_file in source_files: 56 | target_file = os.path.join(target_dir, os.path.basename(source_file)) 57 | create_symlink(source_file, target_file, override=override) 58 | 59 | if __name__ == '__main__': 60 | description = 'Creates symlinks for the cfn-* scripts to /opt/aws/bin' 61 | parser = argparse.ArgumentParser(description=description) 62 | parser.add_argument( 63 | '-t', '--target', 64 | dest="target_dir", 65 | help="Target directory to create symlinks", 66 | default='/opt/aws/bin', 67 | required=False) 68 | parser.add_argument( 69 | '-s', '--source', 70 | dest="source_dir", 71 | help="Source directory to create symlinks from. 
" 72 | "Defaults to the directory where this script is", 73 | default='/usr/bin', 74 | required=False) 75 | parser.add_argument( 76 | '-f', '--force', 77 | dest="force", 78 | action='store_true', 79 | help="If specified, will create symlinks even if " 80 | "there is already a target file", 81 | required=False) 82 | args = parser.parse_args() 83 | 84 | if not check_dirs(args.source_dir, args.target_dir): 85 | exit(1) 86 | 87 | create_symlinks(args.source_dir, args.target_dir, 'cfn-*', args.force) 88 | -------------------------------------------------------------------------------- /bin/cfn-get-metadata: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may 4 | # not use this file except in compliance with the License. You may obtain 5 | # a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT 11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the 12 | # License for the specific language governing permissions and limitations 13 | # under the License. 
14 | 15 | """ 16 | Implements cfn-get-metadata CloudFormation functionality 17 | """ 18 | import argparse 19 | import logging 20 | 21 | 22 | from heat_cfntools.cfntools import cfn_helper 23 | 24 | description = " " 25 | parser = argparse.ArgumentParser(description=description) 26 | parser.add_argument('-s', '--stack', 27 | dest="stack_name", 28 | help="A Heat stack name", 29 | required=True) 30 | parser.add_argument('-r', '--resource', 31 | dest="logical_resource_id", 32 | help="A Heat logical resource ID", 33 | required=True) 34 | parser.add_argument('--access-key', 35 | dest="access_key", 36 | help="A Keystone access key", 37 | required=False) 38 | parser.add_argument('--secret-key', 39 | dest="secret_key", 40 | help="A Keystone secret key", 41 | required=False) 42 | parser.add_argument('--region', 43 | dest="region", 44 | help="Openstack region", 45 | required=False) 46 | parser.add_argument('--credential-file', 47 | dest="credential_file", 48 | help="credential-file", 49 | required=False) 50 | parser.add_argument('-u', '--url', 51 | dest="url", 52 | help="service url", 53 | required=False) 54 | parser.add_argument('-k', '--key', 55 | dest="key", 56 | help="key", 57 | required=False) 58 | args = parser.parse_args() 59 | 60 | if not args.stack_name: 61 | print('The Stack name must not be empty.') 62 | exit(1) 63 | 64 | if not args.logical_resource_id: 65 | print('The Resource ID must not be empty') 66 | exit(1) 67 | 68 | log_format = '%(levelname)s [%(asctime)s] %(message)s' 69 | logging.basicConfig(format=log_format, level=logging.DEBUG) 70 | 71 | LOG = logging.getLogger('cfntools') 72 | log_file_name = "/var/log/cfn-get-metadata.log" 73 | file_handler = logging.FileHandler(log_file_name) 74 | file_handler.setFormatter(logging.Formatter(log_format)) 75 | LOG.addHandler(file_handler) 76 | 77 | metadata = cfn_helper.Metadata(args.stack_name, 78 | args.logical_resource_id, 79 | access_key=args.access_key, 80 | secret_key=args.secret_key, 81 | region=args.region, 82 
| credentials_file=args.credential_file) 83 | metadata.retrieve() 84 | LOG.debug(str(metadata)) 85 | metadata.display(args.key) 86 | -------------------------------------------------------------------------------- /bin/cfn-hup: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may 4 | # not use this file except in compliance with the License. You may obtain 5 | # a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT 11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the 12 | # License for the specific language governing permissions and limitations 13 | # under the License. 14 | 15 | """ 16 | Implements cfn-hup CloudFormation functionality 17 | """ 18 | import argparse 19 | import logging 20 | import os 21 | import os.path 22 | 23 | 24 | from heat_cfntools.cfntools import cfn_helper 25 | 26 | description = " " 27 | parser = argparse.ArgumentParser(description=description) 28 | parser.add_argument('-c', '--config', 29 | dest="config_dir", 30 | help="Hook Config Directory", 31 | required=False, 32 | default='/etc/cfn/hooks.d') 33 | parser.add_argument('-f', '--no-daemon', 34 | dest="no_daemon", 35 | action="store_true", 36 | help="Do not run as a daemon", 37 | required=False) 38 | parser.add_argument('-v', '--verbose', 39 | action="store_true", 40 | dest="verbose", 41 | help="Verbose logging", 42 | required=False) 43 | args = parser.parse_args() 44 | 45 | # Setup logging 46 | log_format = '%(levelname)s [%(asctime)s] %(message)s' 47 | log_file_name = "/var/log/cfn-hup.log" 48 | log_level = logging.INFO 49 | if args.verbose: 50 | log_level = logging.DEBUG 51 | logging.basicConfig(filename=log_file_name, 52 | format=log_format, 
53 | level=log_level) 54 | 55 | LOG = logging.getLogger('cfntools') 56 | 57 | main_conf_path = '/etc/cfn/cfn-hup.conf' 58 | try: 59 | main_config_file = open(main_conf_path) 60 | except IOError as exc: 61 | LOG.error('Could not open main configuration at %s' % main_conf_path) 62 | exit(1) 63 | 64 | config_files = [] 65 | hooks_conf_path = '/etc/cfn/hooks.conf' 66 | if os.path.exists(hooks_conf_path): 67 | try: 68 | config_files.append(open(hooks_conf_path)) 69 | except IOError as exc: 70 | LOG.exception(exc) 71 | 72 | if args.config_dir and os.path.exists(args.config_dir): 73 | try: 74 | for f in os.listdir(args.config_dir): 75 | config_files.append(open(os.path.join(args.config_dir, f))) 76 | 77 | except OSError as exc: 78 | LOG.exception(exc) 79 | 80 | if not config_files: 81 | LOG.error('No hook files found at %s or %s' % (hooks_conf_path, 82 | args.config_dir)) 83 | exit(1) 84 | 85 | try: 86 | mainconfig = cfn_helper.HupConfig([main_config_file] + config_files) 87 | except Exception as ex: 88 | LOG.error('Cannot load configuration: %s' % str(ex)) 89 | exit(1) 90 | 91 | if not mainconfig.unique_resources_get(): 92 | LOG.error('No hooks were found. 
Add some to %s or %s' % (hooks_conf_path, 93 | args.config_dir)) 94 | exit(1) 95 | 96 | 97 | for r in mainconfig.unique_resources_get(): 98 | LOG.debug('Checking resource %s' % r) 99 | metadata = cfn_helper.Metadata(mainconfig.stack, 100 | r, 101 | credentials_file=mainconfig.credential_file, 102 | region=mainconfig.region) 103 | metadata.retrieve() 104 | try: 105 | metadata.cfn_hup(mainconfig.hooks) 106 | except Exception as e: 107 | LOG.exception("Error processing metadata") 108 | exit(1) 109 | -------------------------------------------------------------------------------- /bin/cfn-init: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may 4 | # not use this file except in compliance with the License. You may obtain 5 | # a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT 11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the 12 | # License for the specific language governing permissions and limitations 13 | # under the License. 
14 | 15 | """ 16 | Implements cfn-init CloudFormation functionality 17 | """ 18 | import argparse 19 | import logging 20 | 21 | 22 | from heat_cfntools.cfntools import cfn_helper 23 | 24 | description = " " 25 | parser = argparse.ArgumentParser(description=description) 26 | parser.add_argument('-s', '--stack', 27 | dest="stack_name", 28 | help="A Heat stack name", 29 | required=False) 30 | parser.add_argument('-r', '--resource', 31 | dest="logical_resource_id", 32 | help="A Heat logical resource ID", 33 | required=False) 34 | parser.add_argument('--access-key', 35 | dest="access_key", 36 | help="A Keystone access key", 37 | required=False) 38 | parser.add_argument('--secret-key', 39 | dest="secret_key", 40 | help="A Keystone secret key", 41 | required=False) 42 | parser.add_argument('--region', 43 | dest="region", 44 | help="Openstack region", 45 | required=False) 46 | parser.add_argument('-c', '--configsets', 47 | dest="configsets", 48 | help="An optional list of configSets (default: default)", 49 | required=False) 50 | args = parser.parse_args() 51 | 52 | log_format = '%(levelname)s [%(asctime)s] %(message)s' 53 | log_file_name = "/var/log/cfn-init.log" 54 | logging.basicConfig(filename=log_file_name, 55 | format=log_format, 56 | level=logging.DEBUG) 57 | 58 | LOG = logging.getLogger('cfntools') 59 | 60 | metadata = cfn_helper.Metadata(args.stack_name, 61 | args.logical_resource_id, 62 | access_key=args.access_key, 63 | secret_key=args.secret_key, 64 | region=args.region, 65 | configsets=args.configsets) 66 | metadata.retrieve() 67 | try: 68 | metadata.cfn_init() 69 | except Exception as e: 70 | LOG.exception("Error processing metadata") 71 | exit(1) 72 | -------------------------------------------------------------------------------- /bin/cfn-signal: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); you may 4 | # not use this file 
except in compliance with the License. You may obtain 5 | # a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT 11 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the 12 | # License for the specific language governing permissions and limitations 13 | # under the License. 14 | 15 | """ 16 | Implements cfn-signal CloudFormation functionality 17 | """ 18 | import argparse 19 | import logging 20 | import sys 21 | 22 | 23 | from heat_cfntools.cfntools import cfn_helper 24 | 25 | 26 | description = " " 27 | parser = argparse.ArgumentParser(description=description) 28 | parser.add_argument('-s', '--success', 29 | dest="success", 30 | help="signal status to report", 31 | default='true', 32 | required=False) 33 | parser.add_argument('-r', '--reason', 34 | dest="reason", 35 | help="The reason for the failure", 36 | default="Configuration Complete", 37 | required=False) 38 | parser.add_argument('-d', '--data', 39 | dest="data", 40 | default="Application has completed configuration.", 41 | help="The data to send", 42 | required=False) 43 | parser.add_argument('-i', '--id', 44 | dest="unique_id", 45 | help="the unique id to send back to the WaitCondition", 46 | default=None, 47 | required=False) 48 | parser.add_argument('-e', '--exit-code', 49 | dest="exit_code", 50 | help="The exit code from a process to interpret", 51 | default=None, 52 | required=False) 53 | parser.add_argument('--exit', 54 | dest="exit", 55 | help="DEPRECATED! 
Use -e or --exit-code instead.", 56 | default=None, 57 | required=False) 58 | parser.add_argument('url', 59 | help='the url to post to') 60 | parser.add_argument('-k', '--insecure', 61 | help="This will make insecure https request to cfn-api.", 62 | action='store_true') 63 | args = parser.parse_args() 64 | 65 | log_format = '%(levelname)s [%(asctime)s] %(message)s' 66 | log_file_name = "/var/log/cfn-signal.log" 67 | logging.basicConfig(filename=log_file_name, 68 | format=log_format, 69 | level=logging.DEBUG) 70 | 71 | LOG = logging.getLogger('cfntools') 72 | 73 | LOG.debug('cfn-signal called %s ' % (str(args))) 74 | if args.exit: 75 | LOG.warning('--exit DEPRECATED! Use -e or --exit-code instead.') 76 | status = 'FAILURE' 77 | exit_code = args.exit_code or args.exit 78 | if exit_code: 79 | # "exit_code" takes precedence over "success". 80 | if exit_code == '0': 81 | status = 'SUCCESS' 82 | else: 83 | if args.success == 'true': 84 | status = 'SUCCESS' 85 | 86 | unique_id = args.unique_id 87 | if unique_id is None: 88 | LOG.debug('No id passed from the command line') 89 | md = cfn_helper.Metadata('not-used', None) 90 | unique_id = md.get_instance_id() 91 | if unique_id is None: 92 | LOG.error('Could not get the instance id from metadata!') 93 | import socket 94 | unique_id = socket.getfqdn() 95 | LOG.debug('id: %s' % (unique_id)) 96 | 97 | body = { 98 | "Status": status, 99 | "Reason": args.reason, 100 | "UniqueId": unique_id, 101 | "Data": args.data 102 | } 103 | data = cfn_helper.json.dumps(body) 104 | 105 | cmd = ['curl'] 106 | if args.insecure: 107 | cmd.append('--insecure') 108 | cmd.extend([ 109 | '-X', 'PUT', 110 | '-H', 'Content-Type:', 111 | '--data-binary', data, 112 | args.url 113 | ]) 114 | 115 | command = cfn_helper.CommandRunner(cmd).run() 116 | if command.status != 0: 117 | LOG.error(command.stderr) 118 | sys.exit(command.status) 119 | -------------------------------------------------------------------------------- /doc/.gitignore: 
-------------------------------------------------------------------------------- 1 | target/ 2 | build/ 3 | -------------------------------------------------------------------------------- /doc/Makefile: -------------------------------------------------------------------------------- 1 | # Makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | PAPER = 8 | BUILDDIR = build 9 | 10 | # Internal variables. 11 | PAPEROPT_a4 = -D latex_paper_size=a4 12 | PAPEROPT_letter = -D latex_paper_size=letter 13 | ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source 14 | # the i18n builder cannot share the environment and doctrees with the others 15 | I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source 16 | 17 | .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext 18 | 19 | help: 20 | @echo "Please use \`make ' where is one of" 21 | @echo " html to make standalone HTML files" 22 | @echo " dirhtml to make HTML files named index.html in directories" 23 | @echo " singlehtml to make a single large HTML file" 24 | @echo " pickle to make pickle files" 25 | @echo " json to make JSON files" 26 | @echo " htmlhelp to make HTML files and a HTML help project" 27 | @echo " qthelp to make HTML files and a qthelp project" 28 | @echo " devhelp to make HTML files and a Devhelp project" 29 | @echo " epub to make an epub" 30 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" 31 | @echo " latexpdf to make LaTeX files and run them through pdflatex" 32 | @echo " text to make text files" 33 | @echo " man to make manual pages" 34 | @echo " texinfo to make Texinfo files" 35 | @echo " info to make Texinfo files and run them through makeinfo" 36 | @echo " gettext to make PO message catalogs" 37 | @echo " changes to make an overview of all changed/added/deprecated 
items" 38 | @echo " linkcheck to check all external links for integrity" 39 | @echo " doctest to run all doctests embedded in the documentation (if enabled)" 40 | 41 | clean: 42 | -rm -rf $(BUILDDIR)/* 43 | 44 | html: 45 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html 46 | @echo 47 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." 48 | 49 | dirhtml: 50 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml 51 | @echo 52 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." 53 | 54 | singlehtml: 55 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml 56 | @echo 57 | @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." 58 | 59 | pickle: 60 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle 61 | @echo 62 | @echo "Build finished; now you can process the pickle files." 63 | 64 | json: 65 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json 66 | @echo 67 | @echo "Build finished; now you can process the JSON files." 68 | 69 | htmlhelp: 70 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp 71 | @echo 72 | @echo "Build finished; now you can run HTML Help Workshop with the" \ 73 | ".hhp project file in $(BUILDDIR)/htmlhelp." 74 | 75 | qthelp: 76 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp 77 | @echo 78 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \ 79 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:" 80 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Heat.qhcp" 81 | @echo "To view the help file:" 82 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Heat.qhc" 83 | 84 | devhelp: 85 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp 86 | @echo 87 | @echo "Build finished." 
88 | @echo "To view the help file:" 89 | @echo "# mkdir -p $$HOME/.local/share/devhelp/Heat" 90 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Heat" 91 | @echo "# devhelp" 92 | 93 | epub: 94 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub 95 | @echo 96 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub." 97 | 98 | latex: 99 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 100 | @echo 101 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 102 | @echo "Run \`make' in that directory to run these through (pdf)latex" \ 103 | "(use \`make latexpdf' here to do that automatically)." 104 | 105 | latexpdf: 106 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 107 | @echo "Running LaTeX files through pdflatex..." 108 | $(MAKE) -C $(BUILDDIR)/latex all-pdf 109 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 110 | 111 | text: 112 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text 113 | @echo 114 | @echo "Build finished. The text files are in $(BUILDDIR)/text." 115 | 116 | man: 117 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man 118 | @echo 119 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man." 120 | 121 | texinfo: 122 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 123 | @echo 124 | @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." 125 | @echo "Run \`make' in that directory to run these through makeinfo" \ 126 | "(use \`make info' here to do that automatically)." 127 | 128 | info: 129 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 130 | @echo "Running Texinfo files through makeinfo..." 131 | make -C $(BUILDDIR)/texinfo info 132 | @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." 133 | 134 | gettext: 135 | $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale 136 | @echo 137 | @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." 
138 | 139 | changes: 140 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes 141 | @echo 142 | @echo "The overview file is in $(BUILDDIR)/changes." 143 | 144 | linkcheck: 145 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck 146 | @echo 147 | @echo "Link check complete; look for any errors in the above output " \ 148 | "or in $(BUILDDIR)/linkcheck/output.txt." 149 | 150 | doctest: 151 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest 152 | @echo "Testing of doctests in the sources finished, look at the " \ 153 | "results in $(BUILDDIR)/doctest/output.txt." 154 | -------------------------------------------------------------------------------- /doc/README.rst: -------------------------------------------------------------------------------- 1 | ====================== 2 | Building the man pages 3 | ====================== 4 | 5 | Dependencies 6 | ============ 7 | 8 | Sphinx_ 9 | You'll need Sphinx (the Python one) and if you are 10 | using the virtualenv you'll need to install it in the virtualenv 11 | specifically so that it can load the heat_cfntools modules. 12 | 13 | :: 14 | 15 | sudo yum install python-sphinx 16 | sudo pip-python install sphinxcontrib-httpdomain 17 | 18 | Use `make` 19 | ========== 20 | 21 | To build the man pages: 22 | 23 | make man 24 | -------------------------------------------------------------------------------- /doc/requirements.txt: -------------------------------------------------------------------------------- 1 | openstackdocstheme>=2.2.1 # Apache-2.0 2 | sphinx>=2.0.0,!=2.1.0 # BSD 3 | sphinxcontrib-httpdomain>=1.7.0 4 | -------------------------------------------------------------------------------- /doc/source/cfn-create-aws-symlinks.rst: -------------------------------------------------------------------------------- 1 | ======================= 2 | cfn-create-aws-symlinks 3 | ======================= 4 | 5 | ..
program:: cfn-create-aws-symlinks 6 | 7 | SYNOPSIS 8 | ======== 9 | 10 | ``cfn-create-aws-symlinks`` 11 | 12 | DESCRIPTION 13 | =========== 14 | Creates symlinks for the cfn-* scripts in this directory to /opt/aws/bin 15 | 16 | 17 | OPTIONS 18 | ======= 19 | .. cmdoption:: -t, --target 20 | 21 | Target directory to create symlinks, defaults to /opt/aws/bin 22 | 23 | .. cmdoption:: -s, --source 24 | 25 | Source directory to create symlinks from. Defaults to the directory where this script is 26 | 27 | .. cmdoption:: -f, --force 28 | 29 | If specified, will create symlinks even if there is already a target file 30 | 31 | 32 | BUGS 33 | ==== 34 | Heat bugs are managed through Launchpad -------------------------------------------------------------------------------- /doc/source/cfn-get-metadata.rst: -------------------------------------------------------------------------------- 1 | ================ 2 | cfn-get-metadata 3 | ================ 4 | 5 | .. program:: cfn-get-metadata 6 | 7 | SYNOPSIS 8 | ======== 9 | 10 | ``cfn-get-metadata`` 11 | 12 | DESCRIPTION 13 | =========== 14 | Implements cfn-get-metadata CloudFormation functionality 15 | 16 | 17 | OPTIONS 18 | ======= 19 | .. cmdoption:: -s --stack 20 | 21 | A Heat stack name 22 | 23 | .. cmdoption:: -r --resource 24 | 25 | A Heat logical resource ID 26 | 27 | .. cmdoption:: --access-key 28 | 29 | A Keystone access key 30 | 31 | .. cmdoption:: --secret-key 32 | 33 | A Keystone secret key 34 | 35 | .. cmdoption:: --region 36 | 37 | Openstack region 38 | 39 | .. cmdoption:: --credential-file 40 | 41 | credential-file 42 | 43 | .. cmdoption:: -u --url 44 | 45 | service url 46 | 47 | .. 
cmdoption:: -k --key 48 | 49 | key 50 | 51 | 52 | 53 | BUGS 54 | ==== 55 | Heat bugs are managed through Launchpad -------------------------------------------------------------------------------- /doc/source/cfn-hup.rst: -------------------------------------------------------------------------------- 1 | ======= 2 | cfn-hup 3 | ======= 4 | 5 | .. program:: cfn-hup 6 | 7 | SYNOPSIS 8 | ======== 9 | 10 | ``cfn-hup`` 11 | 12 | DESCRIPTION 13 | =========== 14 | Implements cfn-hup CloudFormation functionality 15 | 16 | 17 | OPTIONS 18 | ======= 19 | .. cmdoption:: -c, --config 20 | 21 | Hook Config Directory, defaults to /etc/cfn/hooks.d 22 | 23 | .. cmdoption:: -f, --no-daemon 24 | 25 | Do not run as a daemon 26 | 27 | .. cmdoption:: -v, --verbose 28 | 29 | Verbose logging 30 | 31 | 32 | BUGS 33 | ==== 34 | Heat bugs are managed through Launchpad 35 | -------------------------------------------------------------------------------- /doc/source/cfn-init.rst: -------------------------------------------------------------------------------- 1 | ======== 2 | cfn-init 3 | ======== 4 | 5 | .. program:: cfn-init 6 | 7 | SYNOPSIS 8 | ======== 9 | 10 | ``cfn-init`` 11 | 12 | DESCRIPTION 13 | =========== 14 | Implements cfn-init CloudFormation functionality 15 | 16 | 17 | OPTIONS 18 | ======= 19 | .. cmdoption:: -s, --stack 20 | 21 | A Heat stack name 22 | 23 | .. cmdoption:: -r, --resource 24 | 25 | A Heat logical resource ID 26 | 27 | .. cmdoption:: --access-key 28 | 29 | A Keystone access key 30 | 31 | .. cmdoption:: --secret-key 32 | 33 | A Keystone secret key 34 | 35 | .. cmdoption:: --region 36 | 37 | Openstack region 38 | 39 | .. 
cmdoption:: -c, --configsets 40 | 41 | An optional list of configSets (default: default) 42 | 43 | 44 | BUGS 45 | ==== 46 | Heat bugs are managed through Launchpad -------------------------------------------------------------------------------- /doc/source/cfn-push-stats.rst: -------------------------------------------------------------------------------- 1 | ============== 2 | cfn-push-stats 3 | ============== 4 | 5 | .. program:: cfn-push-stats 6 | 7 | SYNOPSIS 8 | ======== 9 | 10 | ``cfn-push-stats`` 11 | 12 | DESCRIPTION 13 | =========== 14 | Implements cfn-push-stats CloudFormation functionality 15 | 16 | 17 | OPTIONS 18 | ======= 19 | .. cmdoption:: -v, --verbose 20 | 21 | Verbose logging 22 | 23 | .. cmdoption:: --credential-file 24 | 25 | credential-file 26 | 27 | .. cmdoption:: --service-failure 28 | 29 | Reports a service failure. 30 | 31 | .. cmdoption:: --mem-util 32 | 33 | Reports memory utilization in percentages. 34 | 35 | .. cmdoption:: --mem-used 36 | 37 | Reports memory used (excluding cache and buffers) in megabytes. 38 | 39 | .. cmdoption:: --mem-avail 40 | 41 | Reports available memory (including cache and buffers) in megabytes. 42 | 43 | .. cmdoption:: --swap-util 44 | 45 | Reports swap utilization in percentages. 46 | 47 | .. cmdoption:: --swap-used 48 | 49 | Reports allocated swap space in megabytes. 50 | 51 | .. cmdoption:: --disk-space-util 52 | 53 | Reports disk space utilization in percentages. 54 | 55 | .. cmdoption:: --disk-space-used 56 | 57 | Reports allocated disk space in gigabytes. 58 | 59 | .. cmdoption:: --disk-space-avail 60 | 61 | Reports available disk space in gigabytes. 62 | 63 | .. cmdoption:: --memory-units 64 | 65 | Specifies units for memory metrics. 66 | 67 | .. cmdoption:: --disk-units 68 | 69 | Specifies units for disk metrics. 70 | 71 | .. cmdoption:: --disk-path 72 | 73 | Selects the disk by the path on which to report. 74 | 75 | .. cmdoption:: --cpu-util 76 | 77 | Reports CPU utilization in percentages.
78 | 79 | .. cmdoption:: --haproxy 80 | 81 | Reports HAProxy load balancer usage. 82 | 83 | .. cmdoption:: --haproxy-latency 84 | 85 | Reports HAProxy latency. 86 | 87 | .. cmdoption:: --heartbeat 88 | 89 | Sends a Heartbeat. 90 | 91 | .. cmdoption:: --watch 92 | 93 | The name of the watch to post to. 94 | 95 | 96 | BUGS 97 | ==== 98 | Heat bugs are managed through Launchpad -------------------------------------------------------------------------------- /doc/source/cfn-signal.rst: -------------------------------------------------------------------------------- 1 | ========== 2 | cfn-signal 3 | ========== 4 | 5 | .. program:: cfn-signal 6 | 7 | SYNOPSIS 8 | ======== 9 | 10 | ``cfn-signal`` 11 | 12 | DESCRIPTION 13 | =========== 14 | Implements cfn-signal CloudFormation functionality 15 | 16 | 17 | OPTIONS 18 | ======= 19 | .. cmdoption:: -s, --success 20 | 21 | Signal status to report 22 | 23 | .. cmdoption:: -r, --reason 24 | 25 | The reason for the failure 26 | 27 | .. cmdoption:: --data 28 | 29 | The data to send 30 | 31 | .. cmdoption:: -i, --id 32 | 33 | The unique id to send back to the WaitCondition 34 | 35 | .. cmdoption:: -e, --exit 36 | 37 | The exit code from a process to interpret 38 | 39 | 40 | BUGS 41 | ==== 42 | Heat bugs are managed through Launchpad -------------------------------------------------------------------------------- /doc/source/conf.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # Licensed under the Apache License, Version 2.0 (the "License"); you may 3 | # not use this file except in compliance with the License. You may obtain 4 | # a copy of the License at 5 | # 6 | # http://www.apache.org/licenses/LICENSE-2.0 7 | # 8 | # Unless required by applicable law or agreed to in writing, software 9 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT 10 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the 11 | # License for the specific language governing permissions and limitations 12 | # under the License. 13 | # 14 | # heat-cfntools documentation build configuration file, created by 15 | # sphinx-quickstart on Thu Jul 20 09:19:39 2017. 16 | # 17 | # This file is execfile()d with the current directory set to its 18 | # containing dir. 19 | # 20 | # Note that not all possible configuration values are present in this 21 | # autogenerated file. 22 | # 23 | # All configuration values have a default; values that are commented out 24 | # serve to show the default. 25 | 26 | # If extensions (or modules to document with autodoc) are in another directory, 27 | # add these directories to sys.path here. If the directory is relative to the 28 | # documentation root, use os.path.abspath to make it absolute, like shown here. 29 | # 30 | # import os 31 | # import sys 32 | # sys.path.insert(0, os.path.abspath('.')) 33 | 34 | 35 | # -- General configuration ------------------------------------------------ 36 | 37 | # If your documentation needs a minimal Sphinx version, state it here. 38 | # 39 | # needs_sphinx = '1.0' 40 | 41 | # Add any Sphinx extension module names here, as strings. They can be 42 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 43 | # ones. 44 | extensions = ['sphinx.ext.autodoc', 45 | 'openstackdocstheme'] 46 | 47 | # Add any paths that contain templates here, relative to this directory. 48 | templates_path = ['_templates'] 49 | 50 | # The suffix(es) of source filenames. 51 | # You can specify multiple suffix as a list of string: 52 | # 53 | # source_suffix = ['.rst', '.md'] 54 | source_suffix = '.rst' 55 | 56 | # The master toctree document. 57 | master_doc = 'index' 58 | 59 | # General information about the project. 60 | project = 'heat-cfntools' 61 | copyright = 'OpenStack Foundation' 62 | 63 | # The language for content autogenerated by Sphinx. Refer to documentation 64 | # for a list of supported languages. 
65 | # 66 | # This is also used if you do content translation via gettext catalogs. 67 | # Usually you set "language" from the command line for these cases. 68 | # language = None 69 | 70 | # List of patterns, relative to source directory, that match files and 71 | # directories to ignore when looking for source files. 72 | # This patterns also effect to html_static_path and html_extra_path 73 | exclude_patterns = [] 74 | 75 | # The name of the Pygments (syntax highlighting) style to use. 76 | pygments_style = 'native' 77 | 78 | # If true, `todo` and `todoList` produce output, else they produce nothing. 79 | # todo_include_todos = False 80 | 81 | 82 | # -- Options for HTML output ---------------------------------------------- 83 | 84 | # The theme to use for HTML and HTML Help pages. See the documentation for 85 | # a list of builtin themes. 86 | # 87 | html_theme = 'openstackdocs' 88 | 89 | # Theme options are theme-specific and customize the look and feel of a theme 90 | # further. For a list of options available for each theme, see the 91 | # documentation. 92 | # 93 | # html_theme_options = {} 94 | 95 | # Add any paths that contain custom static files (such as style sheets) here, 96 | # relative to this directory. They are copied after the builtin static files, 97 | # so a file named "default.css" will overwrite the builtin "default.css". 98 | # html_static_path = ['_static'] 99 | 100 | # Custom sidebar templates, must be a dictionary that maps document names 101 | # to template names. 
102 | # 103 | # This is required for the alabaster theme 104 | # refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars 105 | # html_sidebars = {} 106 | 107 | # -- Options for openstackdocstheme -------------------------------------- 108 | openstackdocs_repo_name = 'openstack/heat-cfntools' 109 | openstackdocs_auto_name = False 110 | openstackdocs_use_storyboard = True 111 | 112 | # -- Options for HTMLHelp output ------------------------------------------ 113 | 114 | # Output file base name for HTML help builder. 115 | htmlhelp_basename = 'heat-cfntoolsdoc' 116 | 117 | 118 | # -- Options for LaTeX output --------------------------------------------- 119 | 120 | latex_elements = { 121 | # The paper size ('letterpaper' or 'a4paper'). 122 | # 123 | # 'papersize': 'letterpaper', 124 | 125 | # The font size ('10pt', '11pt' or '12pt'). 126 | # 127 | # 'pointsize': '10pt', 128 | 129 | # Additional stuff for the LaTeX preamble. 130 | # 131 | # 'preamble': '', 132 | 133 | # Latex figure (float) alignment 134 | # 135 | # 'figure_align': 'htbp', 136 | } 137 | 138 | # Grouping the document tree into LaTeX files. List of tuples 139 | # (source start file, target name, title, 140 | # author, documentclass [howto, manual, or own class]). 141 | latex_documents = [ 142 | (master_doc, 'heat-cfntools.tex', 'heat-cfntools Documentation', 143 | 'OpenStack Foundation', 'manual'), 144 | ] 145 | 146 | 147 | # -- Options for manual page output --------------------------------------- 148 | 149 | # One entry per manual page. List of tuples 150 | # (source start file, name, description, authors, manual section). 
151 | man_pages = [ 152 | (master_doc, 'heat-cfntools', 'heat-cfntools Documentation', 153 | ['Heat Developers'], 1), 154 | ('cfn-create-aws-symlinks', 'cfn-create-aws-symlinks', 155 | 'Creates symlinks for the cfn-* scripts in this directory to /opt/aws/bin', 156 | ['Heat Developers'], 1), 157 | ('cfn-get-metadata', 'cfn-get-metadata', 158 | 'Implements cfn-get-metadata CloudFormation functionality', 159 | ['Heat Developers'], 1), 160 | ('cfn-hup', 'cfn-hup', 161 | 'Implements cfn-hup CloudFormation functionality', 162 | ['Heat Developers'], 1), 163 | ('cfn-init', 'cfn-init', 164 | 'Implements cfn-init CloudFormation functionality', 165 | ['Heat Developers'], 1), 166 | ('cfn-push-stats', 'cfn-push-stats', 167 | 'Implements cfn-push-stats CloudFormation functionality', 168 | ['Heat Developers'], 1), 169 | ('cfn-signal', 'cfn-signal', 170 | 'Implements cfn-signal CloudFormation functionality', 171 | ['Heat Developers'], 1), 172 | ] 173 | 174 | 175 | # -- Options for Texinfo output ------------------------------------------- 176 | 177 | # Grouping the document tree into Texinfo files. List of tuples 178 | # (source start file, target name, title, author, 179 | # dir menu entry, description, category) 180 | texinfo_documents = [ 181 | (master_doc, 'heat-cfntools', 'heat-cfntools Documentation', 182 | 'Heat Developers', 'heat-cfntools', 'One line description of project.', 183 | 'Miscellaneous'), 184 | ] 185 | -------------------------------------------------------------------------------- /doc/source/contributor/contributing.rst: -------------------------------------------------------------------------------- 1 | ============================ 2 | So You Want to Contribute... 3 | ============================ 4 | For general information on contributing to OpenStack, please check out the 5 | `contributor guide `_ to get started. 
6 | It covers all the basics that are common to all OpenStack projects: the accounts 7 | you need, the basics of interacting with our Gerrit review system, how we 8 | communicate as a community, etc. 9 | Below will cover the more project specific information you need to get started 10 | with heat-cfntools. 11 | 12 | Communication 13 | ~~~~~~~~~~~~~ 14 | * IRC channel #heat at OFTC 15 | * Mailing list (prefix subjects with ``[heat]`` for faster responses) 16 | http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss 17 | 18 | Contacting the Core Team 19 | ~~~~~~~~~~~~~~~~~~~~~~~~ 20 | Please refer the `heat-cfntools Core Team 21 | `_ contacts. 22 | 23 | New Feature Planning 24 | ~~~~~~~~~~~~~~~~~~~~ 25 | heat-cfntools features are tracked on `Storyboard `_. 26 | Please specify heat-cfntools in Storyboard. 27 | 28 | Task Tracking 29 | ~~~~~~~~~~~~~ 30 | We track our tasks in `Storyboard `_. 31 | Please specify heat-cfntools in Storyboard. 32 | 33 | Reporting a Bug 34 | ~~~~~~~~~~~~~~~ 35 | You found an issue and want to make sure we are aware of it? You can do so on 36 | `Storyboard `_. 37 | 38 | Getting Your Patch Merged 39 | ~~~~~~~~~~~~~~~~~~~~~~~~~ 40 | All changes proposed to the heat-cfntools project require one or two +2 votes 41 | from heat-cfntools core reviewers before one of the core reviewers can approve 42 | patch by giving ``Workflow +1`` vote. 43 | 44 | Project Team Lead Duties 45 | ~~~~~~~~~~~~~~~~~~~~~~~~ 46 | All common PTL duties are enumerated in the `PTL guide 47 | `_. 48 | -------------------------------------------------------------------------------- /doc/source/index.rst: -------------------------------------------------------------------------------- 1 | ===================================== 2 | Man pages for Heat cfntools utilities 3 | ===================================== 4 | 5 | ------------- 6 | Heat cfntools 7 | ------------- 8 | 9 | .. 
toctree:: 10 | :maxdepth: 1 11 | 12 | cfn-create-aws-symlinks 13 | cfn-get-metadata 14 | cfn-hup 15 | cfn-init 16 | cfn-push-stats 17 | cfn-signal 18 | 19 | For Contributors 20 | ---------------- 21 | 22 | * If you are a new contributor to Heat cfntools please refer: :doc:`contributor/contributing` 23 | 24 | .. toctree:: 25 | :hidden: 26 | 27 | contributor/contributing 28 | -------------------------------------------------------------------------------- /heat_cfntools/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openstack/heat-cfntools/8d2a16958bd4542382810630a22583a24f9ba2eb/heat_cfntools/__init__.py -------------------------------------------------------------------------------- /heat_cfntools/cfntools/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openstack/heat-cfntools/8d2a16958bd4542382810630a22583a24f9ba2eb/heat_cfntools/cfntools/__init__.py -------------------------------------------------------------------------------- /heat_cfntools/cfntools/cfn_helper.py: -------------------------------------------------------------------------------- 1 | 2 | # Licensed under the Apache License, Version 2.0 (the "License"); you may 3 | # not use this file except in compliance with the License. You may obtain 4 | # a copy of the License at 5 | # 6 | # http://www.apache.org/licenses/LICENSE-2.0 7 | # 8 | # Unless required by applicable law or agreed to in writing, software 9 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT 10 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the 11 | # License for the specific language governing permissions and limitations 12 | # under the License. 
13 | 14 | """ 15 | Implements cfn metadata handling 16 | 17 | Not implemented yet: 18 | * command line args 19 | - placeholders are ignored 20 | """ 21 | import atexit 22 | import configparser 23 | import contextlib 24 | import errno 25 | import functools 26 | import grp 27 | import json 28 | import logging 29 | import os 30 | import os.path 31 | import pwd 32 | try: 33 | import rpmUtils.miscutils as rpmutils 34 | import rpmUtils.updates as rpmupdates 35 | rpmutils_present = True 36 | except ImportError: 37 | rpmutils_present = False 38 | import re 39 | import shutil 40 | import subprocess 41 | import tempfile 42 | 43 | 44 | # Override BOTO_CONFIG, which makes boto look only at the specified 45 | # config file, instead of the default locations 46 | os.environ['BOTO_CONFIG'] = '/var/lib/heat-cfntools/cfn-boto-cfg' 47 | from boto import cloudformation # noqa 48 | 49 | 50 | LOG = logging.getLogger(__name__) 51 | 52 | 53 | def to_boolean(b): 54 | val = b.lower().strip() if isinstance(b, str) else b 55 | return val in [True, 'true', 'yes', '1', 1] 56 | 57 | 58 | def parse_creds_file(path='/etc/cfn/cfn-credentials'): 59 | '''Parse the cfn credentials file. 
60 | 61 | Default location is as specified, and it is expected to contain 62 | exactly two keys, "AWSAccessKeyId" and "AWSSecretKey". 63 | The two keys are returned as a dict (if found) 64 | ''' 65 | creds = {'AWSAccessKeyId': None, 'AWSSecretKey': None} 66 | for line in open(path): 67 | for key in creds: 68 | match = re.match("^%s *= *(.*)$" % key, line) 69 | if match: 70 | creds[key] = match.group(1) 71 | return creds 72 | 73 | 74 | class InvalidCredentialsException(Exception): 75 | def __init__(self, credential_file): 76 | super(Exception, self).__init__("invalid credentials file %s" % 77 | credential_file) 78 | 79 | 80 | class HupConfig(object): 81 | def __init__(self, fp_list): 82 | self.config = configparser.ConfigParser() 83 | for fp in fp_list: 84 | self.config.read_file(fp) 85 | 86 | self.load_main_section() 87 | 88 | self.hooks = [] 89 | for s in self.config.sections(): 90 | if s != 'main': 91 | self.hooks.append(Hook( 92 | s, 93 | self.config.get(s, 'triggers'), 94 | self.config.get(s, 'path'), 95 | self.config.get(s, 'runas'), 96 | self.config.get(s, 'action'))) 97 | 98 | def load_main_section(self): 99 | # required values 100 | self.stack = self.config.get('main', 'stack') 101 | self.credential_file = self.config.get('main', 'credential-file') 102 | try: 103 | with open(self.credential_file) as f: 104 | self.credentials = f.read() 105 | except Exception: 106 | raise InvalidCredentialsException(self.credential_file) 107 | 108 | # optional values 109 | try: 110 | self.region = self.config.get('main', 'region') 111 | except configparser.NoOptionError: 112 | self.region = 'nova' 113 | 114 | try: 115 | self.interval = self.config.getint('main', 'interval') 116 | except configparser.NoOptionError: 117 | self.interval = 10 118 | 119 | def __str__(self): 120 | return ('{stack: %s, credential_file: %s, region: %s, interval:%d}' % 121 | (self.stack, self.credential_file, self.region, self.interval)) 122 | 123 | def unique_resources_get(self): 124 | resources = [] 125
| for h in self.hooks: 126 | r = h.resource_name_get() 127 | if r not in resources: 128 | resources.append(h.resource_name_get()) 129 | return resources 130 | 131 | 132 | class Hook(object): 133 | def __init__(self, name, triggers, path, runas, action): 134 | self.name = name 135 | self.triggers = triggers 136 | self.path = path 137 | self.runas = runas 138 | self.action = action 139 | 140 | def resource_name_get(self): 141 | sp = self.path.split('.') 142 | return sp[1] 143 | 144 | def event(self, ev_name, ev_object, ev_resource): 145 | if (self.resource_name_get() == ev_resource and 146 | ev_name in self.triggers): 147 | CommandRunner(self.action, shell=True).run(user=self.runas) 148 | else: 149 | LOG.debug('event: {%s, %s, %s} did not match %s' % 150 | (ev_name, ev_object, ev_resource, self.__str__())) 151 | 152 | def __str__(self): 153 | return '{%s, %s, %s, %s, %s}' % (self.name, 154 | self.triggers, 155 | self.path, 156 | self.runas, 157 | self.action) 158 | 159 | 160 | class ControlledPrivilegesFailureException(Exception): 161 | pass 162 | 163 | 164 | @contextlib.contextmanager 165 | def controlled_privileges(user): 166 | orig_euid = None 167 | try: 168 | real = pwd.getpwnam(user) 169 | if os.geteuid() != real.pw_uid: 170 | orig_euid = os.geteuid() 171 | os.seteuid(real.pw_uid) 172 | LOG.debug("Privileges set for user %s" % user) 173 | except Exception as e: 174 | raise ControlledPrivilegesFailureException(e) 175 | 176 | try: 177 | yield 178 | finally: 179 | if orig_euid is not None: 180 | try: 181 | os.seteuid(orig_euid) 182 | LOG.debug("Original privileges restored.") 183 | except Exception as e: 184 | LOG.error("Error restoring privileges %s" % e) 185 | 186 | 187 | class CommandRunner(object): 188 | """Helper class to run a command and store the output.""" 189 | 190 | def __init__(self, command, shell=False, nextcommand=None): 191 | self._command = command 192 | self._shell = shell 193 | self._next = nextcommand 194 | self._stdout = None 195 | self._stderr 
= None 196 | self._status = None 197 | 198 | def __str__(self): 199 | s = "CommandRunner:" 200 | s += "\n\tcommand: %s" % self._command 201 | if self._status: 202 | s += "\n\tstatus: %s" % self.status 203 | if self._stdout: 204 | s += "\n\tstdout: %s" % self.stdout 205 | if self._stderr: 206 | s += "\n\tstderr: %s" % self.stderr 207 | return s 208 | 209 | def run(self, user='root', cwd=None, env=None): 210 | """Run the Command and return the output. 211 | 212 | Returns: 213 | self 214 | """ 215 | LOG.debug("Running command: %s" % self._command) 216 | 217 | cmd = self._command 218 | shell = self._shell 219 | 220 | # Ensure commands that are given as string are run on shell 221 | assert isinstance(cmd, str) is bool(shell) 222 | 223 | try: 224 | with controlled_privileges(user): 225 | subproc = subprocess.Popen(cmd, stdout=subprocess.PIPE, 226 | stderr=subprocess.PIPE, cwd=cwd, 227 | env=env, shell=shell) 228 | output = subproc.communicate() 229 | self._status = subproc.returncode 230 | self._stdout = output[0] 231 | self._stderr = output[1] 232 | except ControlledPrivilegesFailureException as e: 233 | LOG.error("Error setting privileges for user '%s': %s" 234 | % (user, e)) 235 | self._status = 126 236 | self._stderr = str(e) 237 | 238 | if self._status: 239 | LOG.debug("Return code of %d after executing: '%s'\n" 240 | "stdout: '%s'\n" 241 | "stderr: '%s'" % (self._status, cmd, self._stdout, 242 | self._stderr)) 243 | 244 | if self._next: 245 | self._next.run() 246 | return self 247 | 248 | @property 249 | def stdout(self): 250 | return self._stdout 251 | 252 | @property 253 | def stderr(self): 254 | return self._stderr 255 | 256 | @property 257 | def status(self): 258 | return self._status 259 | 260 | 261 | class RpmHelper(object): 262 | 263 | if rpmutils_present: 264 | _rpm_util = rpmupdates.Updates([], []) 265 | 266 | @classmethod 267 | def compare_rpm_versions(cls, v1, v2): 268 | """Compare two RPM version strings. 
269 | 270 | Arguments: 271 | v1 -- a version string 272 | v2 -- a version string 273 | 274 | Returns: 275 | 0 -- the versions are equal 276 | 1 -- v1 is greater 277 | -1 -- v2 is greater 278 | """ 279 | if v1 and v2: 280 | return rpmutils.compareVerOnly(v1, v2) 281 | elif v1: 282 | return 1 283 | elif v2: 284 | return -1 285 | else: 286 | return 0 287 | 288 | @classmethod 289 | def newest_rpm_version(cls, versions): 290 | """Returns the highest (newest) version from a list of versions. 291 | 292 | Arguments: 293 | versions -- A list of version strings 294 | e.g., ['2.0', '2.2', '2.2-1.fc16', '2.2.22-1.fc16'] 295 | """ 296 | if versions: 297 | if isinstance(versions, str): 298 | return versions 299 | versions = sorted(versions, rpmutils.compareVerOnly, 300 | reverse=True) 301 | return versions[0] 302 | else: 303 | return None 304 | 305 | @classmethod 306 | def rpm_package_version(cls, pkg): 307 | """Returns the version of an installed RPM. 308 | 309 | Arguments: 310 | pkg -- A package name 311 | """ 312 | cmd = "rpm -q --queryformat '%%{VERSION}-%%{RELEASE}' %s" % pkg 313 | command = CommandRunner(cmd).run() 314 | return command.stdout 315 | 316 | @classmethod 317 | def rpm_package_installed(cls, pkg): 318 | """Indicates whether pkg is in rpm database. 319 | 320 | Arguments: 321 | pkg -- A package name (with optional version and release spec). 322 | e.g., httpd 323 | e.g., httpd-2.2.22 324 | e.g., httpd-2.2.22-1.fc16 325 | """ 326 | cmd = ['rpm', '-q', pkg] 327 | command = CommandRunner(cmd).run() 328 | return command.status == 0 329 | 330 | @classmethod 331 | def yum_package_available(cls, pkg): 332 | """Indicates whether pkg is available via yum. 333 | 334 | Arguments: 335 | pkg -- A package name (with optional version and release spec). 
336 | e.g., httpd 337 | e.g., httpd-2.2.22 338 | e.g., httpd-2.2.22-1.fc16 339 | """ 340 | cmd = ['yum', '-y', '--showduplicates', 'list', 'available', pkg] 341 | command = CommandRunner(cmd).run() 342 | return command.status == 0 343 | 344 | @classmethod 345 | def dnf_package_available(cls, pkg): 346 | """Indicates whether pkg is available via dnf. 347 | 348 | Arguments: 349 | pkg -- A package name (with optional version and release spec). 350 | e.g., httpd 351 | e.g., httpd-2.2.22 352 | e.g., httpd-2.2.22-1.fc21 353 | """ 354 | cmd = ['dnf', '-y', '--showduplicates', 'list', 'available', pkg] 355 | command = CommandRunner(cmd).run() 356 | return command.status == 0 357 | 358 | @classmethod 359 | def zypper_package_available(cls, pkg): 360 | """Indicates whether pkg is available via zypper. 361 | 362 | Arguments: 363 | pkg -- A package name (with optional version and release spec). 364 | e.g., httpd 365 | e.g., httpd-2.2.22 366 | e.g., httpd-2.2.22-1.fc16 367 | """ 368 | cmd = ['zypper', '-n', '--no-refresh', 'search', pkg] 369 | command = CommandRunner(cmd).run() 370 | return command.status == 0 371 | 372 | @classmethod 373 | def install(cls, packages, rpms=True, zypper=False, dnf=False): 374 | """Installs (or upgrades) packages via RPM, yum, dnf, or zypper. 
375 | 376 | Arguments: 377 | packages -- a list of packages to install 378 | rpms -- if True: 379 | * use RPM to install the packages 380 | * packages must be a list of URLs to retrieve RPMs 381 | if False: 382 | * use Yum to install packages 383 | * packages is a list of: 384 | - pkg name (httpd), or 385 | - pkg name with version spec (httpd-2.2.22), or 386 | - pkg name with version-release spec 387 | (httpd-2.2.22-1.fc16) 388 | zypper -- if True: 389 | * overrides use of yum, use zypper instead 390 | dnf -- if True: 391 | * overrides use of yum, use dnf instead 392 | * packages must be in same format as yum pkg list 393 | """ 394 | if rpms: 395 | cmd = ['rpm', '-U', '--force', '--nosignature'] 396 | elif zypper: 397 | cmd = ['zypper', '-n', 'install'] 398 | elif dnf: 399 | # use dnf --best to upgrade outdated-but-installed packages 400 | cmd = ['dnf', '-y', '--best', 'install'] 401 | else: 402 | cmd = ['yum', '-y', 'install'] 403 | cmd.extend(packages) 404 | LOG.info("Installing packages: %s" % cmd) 405 | command = CommandRunner(cmd).run() 406 | if command.status: 407 | LOG.warning("Failed to install packages: %s" % cmd) 408 | 409 | @classmethod 410 | def downgrade(cls, packages, rpms=True, zypper=False, dnf=False): 411 | """Downgrades a set of packages via RPM, yum, dnf, or zypper. 
412 | 413 | Arguments: 414 | packages -- a list of packages to downgrade 415 | rpms -- if True: 416 | * use RPM to downgrade (replace) the packages 417 | * packages must be a list of URLs to retrieve the RPMs 418 | if False: 419 | * use Yum to downgrade packages 420 | * packages is a list of: 421 | - pkg name with version spec (httpd-2.2.22), or 422 | - pkg name with version-release spec 423 | (httpd-2.2.22-1.fc16) 424 | dnf -- if True: 425 | * Use dnf instead of RPM/yum 426 | """ 427 | if rpms: 428 | cls.install(packages) 429 | elif zypper: 430 | cmd = ['zypper', '-n', 'install', '--oldpackage'] 431 | cmd.extend(packages) 432 | LOG.info("Downgrading packages: %s", cmd) 433 | command = CommandRunner(cmd).run() 434 | if command.status: 435 | LOG.warning("Failed to downgrade packages: %s" % cmd) 436 | elif dnf: 437 | cmd = ['dnf', '-y', 'downgrade'] 438 | cmd.extend(packages) 439 | LOG.info("Downgrading packages: %s", cmd) 440 | command = CommandRunner(cmd).run() 441 | if command.status: 442 | LOG.warning("Failed to downgrade packages: %s" % cmd) 443 | else: 444 | cmd = ['yum', '-y', 'downgrade'] 445 | cmd.extend(packages) 446 | LOG.info("Downgrading packages: %s" % cmd) 447 | command = CommandRunner(cmd).run() 448 | if command.status: 449 | LOG.warning("Failed to downgrade packages: %s" % cmd) 450 | 451 | 452 | class PackagesHandler(object): 453 | _packages = {} 454 | 455 | _package_order = ["dpkg", "rpm", "apt", "yum", "dnf"] 456 | 457 | @staticmethod 458 | def _pkgsort(pkg1, pkg2): 459 | order = PackagesHandler._package_order 460 | p1_name = pkg1[0] 461 | p2_name = pkg2[0] 462 | if p1_name in order and p2_name in order: 463 | i1 = order.index(p1_name) 464 | i2 = order.index(p2_name) 465 | return (i1 > i2) - (i1 < i2) 466 | elif p1_name in order: 467 | return -1 468 | elif p2_name in order: 469 | return 1 470 | else: 471 | n1 = p1_name.lower() 472 | n2 = p2_name.lower() 473 | return (n1 > n2) - (n1 < n2) 474 | 475 | def __init__(self, packages): 476 | 
self._packages = packages 477 | 478 | def _handle_gem_packages(self, packages): 479 | """very basic support for gems.""" 480 | # TODO(asalkeld) support versions 481 | # -b == local & remote install 482 | # -y == install deps 483 | opts = ['-b', '-y'] 484 | for pkg_name, versions in packages.items(): 485 | if len(versions) > 0: 486 | cmd = ['gem', 'install'] + opts 487 | cmd.extend(['--version', versions[0], pkg_name]) 488 | CommandRunner(cmd).run() 489 | else: 490 | cmd = ['gem', 'install'] + opts 491 | cmd.append(pkg_name) 492 | CommandRunner(cmd).run() 493 | 494 | def _handle_python_packages(self, packages): 495 | """very basic support for easy_install.""" 496 | # TODO(asalkeld) support versions 497 | for pkg_name, versions in packages.items(): 498 | cmd = ['easy_install', pkg_name] 499 | CommandRunner(cmd).run() 500 | 501 | def _handle_zypper_packages(self, packages): 502 | """Handle installation, upgrade, or downgrade of packages via zypper. 503 | 504 | Arguments: 505 | packages -- a package entries map of the form: 506 | "pkg_name" : "version", 507 | "pkg_name" : ["v1", "v2"], 508 | "pkg_name" : [] 509 | 510 | For each package entry: 511 | * if no version is supplied and the package is already installed, do 512 | nothing 513 | * if no version is supplied and the package is _not_ already 514 | installed, install it 515 | * if a version string is supplied, and the package is already 516 | installed, determine whether to downgrade or upgrade (or do nothing 517 | if version matches installed package) 518 | * if a version array is supplied, choose the highest version from the 519 | array and follow same logic for version string above 520 | """ 521 | # collect pkgs for batch processing at end 522 | installs = [] 523 | downgrades = [] 524 | for pkg_name, versions in packages.items(): 525 | ver = RpmHelper.newest_rpm_version(versions) 526 | pkg = "%s-%s" % (pkg_name, ver) if ver else pkg_name 527 | if RpmHelper.rpm_package_installed(pkg): 528 | # FIXME:print non-error,
but skipping pkg 529 | pass 530 | elif not RpmHelper.zypper_package_available(pkg): 531 | LOG.warning( 532 | "Skipping package '%s' - unavailable via zypper", pkg) 533 | elif not ver: 534 | installs.append(pkg) 535 | else: 536 | current_ver = RpmHelper.rpm_package_version(pkg) 537 | rc = RpmHelper.compare_rpm_versions(current_ver, ver) 538 | if rc < 0: 539 | installs.append(pkg) 540 | elif rc > 0: 541 | downgrades.append(pkg) 542 | if installs: 543 | RpmHelper.install(installs, rpms=False, zypper=True) 544 | if downgrades: 545 | RpmHelper.downgrade(downgrades, zypper=True) 546 | 547 | def _handle_dnf_packages(self, packages): 548 | """Handle installation, upgrade, or downgrade of packages via dnf. 549 | 550 | Arguments: 551 | packages -- a package entries map of the form: 552 | "pkg_name" : "version", 553 | "pkg_name" : ["v1", "v2"], 554 | "pkg_name" : [] 555 | 556 | For each package entry: 557 | * if no version is supplied and the package is already installed, do 558 | nothing 559 | * if no version is supplied and the package is _not_ already 560 | installed, install it 561 | * if a version string is supplied, and the package is already 562 | installed, determine whether to downgrade or upgrade (or do nothing 563 | if version matches installed package) 564 | * if a version array is supplied, choose the highest version from the 565 | array and follow same logic for version string above 566 | """ 567 | # collect pkgs for batch processing at end 568 | installs = [] 569 | downgrades = [] 570 | for pkg_name, versions in packages.items(): 571 | ver = RpmHelper.newest_rpm_version(versions) 572 | pkg = "%s-%s" % (pkg_name, ver) if ver else pkg_name 573 | if RpmHelper.rpm_package_installed(pkg): 574 | # FIXME:print non-error, but skipping pkg 575 | pass 576 | elif not RpmHelper.dnf_package_available(pkg): 577 | LOG.warning( 578 | "Skipping package '%s'. 
Not available via dnf" % pkg) 579 | elif not ver: 580 | installs.append(pkg) 581 | else: 582 | current_ver = RpmHelper.rpm_package_version(pkg) 583 | rc = RpmHelper.compare_rpm_versions(current_ver, ver) 584 | if rc < 0: 585 | installs.append(pkg) 586 | elif rc > 0: 587 | downgrades.append(pkg) 588 | if installs: 589 | RpmHelper.install(installs, rpms=False, dnf=True) 590 | if downgrades: 591 | RpmHelper.downgrade(downgrades, rpms=False, dnf=True) 592 | 593 | def _handle_yum_packages(self, packages): 594 | """Handle installation, upgrade, or downgrade of packages via yum. 595 | 596 | Arguments: 597 | packages -- a package entries map of the form: 598 | "pkg_name" : "version", 599 | "pkg_name" : ["v1", "v2"], 600 | "pkg_name" : [] 601 | 602 | For each package entry: 603 | * if no version is supplied and the package is already installed, do 604 | nothing 605 | * if no version is supplied and the package is _not_ already 606 | installed, install it 607 | * if a version string is supplied, and the package is already 608 | installed, determine whether to downgrade or upgrade (or do nothing 609 | if version matches installed package) 610 | * if a version array is supplied, choose the highest version from the 611 | array and follow same logic for version string above 612 | """ 613 | 614 | cmd = CommandRunner(['which', 'yum']).run() 615 | if cmd.status == 1: 616 | # yum not available, use DNF if available 617 | self._handle_dnf_packages(packages) 618 | return 619 | elif cmd.status == 127: 620 | # `which` command not found 621 | LOG.info("`which` not found. 
Using yum without checking if dnf " 622 | "is available") 623 | 624 | # collect pkgs for batch processing at end 625 | installs = [] 626 | downgrades = [] 627 | for pkg_name, versions in packages.items(): 628 | ver = RpmHelper.newest_rpm_version(versions) 629 | pkg = "%s-%s" % (pkg_name, ver) if ver else pkg_name 630 | if RpmHelper.rpm_package_installed(pkg): 631 | # FIXME:print non-error, but skipping pkg 632 | pass 633 | elif not RpmHelper.yum_package_available(pkg): 634 | LOG.warning( 635 | "Skipping package '%s'. Not available via yum" % pkg) 636 | elif not ver: 637 | installs.append(pkg) 638 | else: 639 | current_ver = RpmHelper.rpm_package_version(pkg) 640 | rc = RpmHelper.compare_rpm_versions(current_ver, ver) 641 | if rc < 0: 642 | installs.append(pkg) 643 | elif rc > 0: 644 | downgrades.append(pkg) 645 | if installs: 646 | RpmHelper.install(installs, rpms=False) 647 | if downgrades: 648 | RpmHelper.downgrade(downgrades) 649 | 650 | def _handle_rpm_packages(self, packages): 651 | """Handle installation, upgrade, or downgrade of packages via rpm. 
652 | 653 | Arguments: 654 | packages -- a package entries map of the form: 655 | "pkg_name" : "url" 656 | 657 | For each package entry: 658 | * if the EXACT package is already installed, skip it 659 | * if a different version of the package is installed, overwrite it 660 | * if the package isn't installed, install it 661 | """ 662 | # FIXME(asalkeld): handle rpm installs 663 | pass 664 | 665 | def _handle_apt_packages(self, packages): 666 | """very basic support for apt.""" 667 | # TODO(asalkeld) support versions 668 | pkg_list = list(packages) 669 | 670 | env = {'DEBIAN_FRONTEND': 'noninteractive'} 671 | cmd = ['apt-get', '-y', 'install'] + pkg_list 672 | CommandRunner(cmd).run(env=env) 673 | 674 | # map of function pointers to handle different package managers 675 | _package_handlers = {"yum": _handle_yum_packages, 676 | "dnf": _handle_dnf_packages, 677 | "zypper": _handle_zypper_packages, 678 | "rpm": _handle_rpm_packages, 679 | "apt": _handle_apt_packages, 680 | "rubygems": _handle_gem_packages, 681 | "python": _handle_python_packages} 682 | 683 | def _package_handler(self, manager_name): 684 | handler = None 685 | if manager_name in self._package_handlers: 686 | handler = self._package_handlers[manager_name] 687 | return handler 688 | 689 | def apply_packages(self): 690 | """Install, upgrade, or downgrade packages listed. 
691 | 692 | Each package is a dict containing package name and a list of versions 693 | Install order: 694 | * dpkg 695 | * rpm 696 | * apt 697 | * yum 698 | * dnf 699 | """ 700 | if not self._packages: 701 | return 702 | try: 703 | packages = sorted( 704 | self._packages.items(), cmp=PackagesHandler._pkgsort) 705 | except TypeError: 706 | # On Python 3, we have to use key instead of cmp 707 | # This could also work on Python 2.7, but not on 2.6 708 | packages = sorted( 709 | self._packages.items(), 710 | key=functools.cmp_to_key(PackagesHandler._pkgsort)) 711 | 712 | for manager, package_entries in packages: 713 | handler = self._package_handler(manager) 714 | if not handler: 715 | LOG.warning("Skipping invalid package type: %s" % manager) 716 | else: 717 | handler(self, package_entries) 718 | 719 | 720 | class FilesHandler(object): 721 | def __init__(self, files): 722 | self._files = files 723 | 724 | def apply_files(self): 725 | if not self._files: 726 | return 727 | for fdest, meta in self._files.items(): 728 | dest = fdest.encode() 729 | try: 730 | os.makedirs(os.path.dirname(dest)) 731 | except OSError as e: 732 | if e.errno == errno.EEXIST: 733 | LOG.debug(str(e)) 734 | else: 735 | LOG.exception(e) 736 | 737 | if 'content' in meta: 738 | if isinstance(meta['content'], str): 739 | f = open(dest, 'w+') 740 | f.write(meta['content']) 741 | f.close() 742 | else: 743 | f = open(dest, 'w+') 744 | # the file is opened in text mode, so write str not bytes 745 | f.write(json.dumps(meta['content'], indent=4)) 746 | f.close() 747 | elif 'source' in meta: 748 | CommandRunner(['curl', '-o', dest, meta['source']]).run() 749 | else: 750 | LOG.error('File %s has no content or source: %s' % (dest, str(meta))) 751 | continue 752 | 753 | uid = -1 754 | gid = -1 755 | if 'owner' in meta: 756 | try: 757 | user_info = pwd.getpwnam(meta['owner']) 758 | uid = user_info[2] 759 | except KeyError: 760 | pass 761 | 762 | if 'group' in meta: 763 | try: 764 | group_info = grp.getgrnam(meta['group']) 765 | gid = group_info[2] 766 | except KeyError: 767 | pass
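As a hedged illustration of the mode and ownership handling in apply_files (the entry below is invented, not taken from a real template):

```python
import pwd

# Hypothetical "files" metadata entry of the shape FilesHandler consumes;
# the content, owner and mode values are illustrative only.
meta = {
    "content": "[main]\nenabled=1\n",
    "owner": "root",
    "mode": "000644",
}

# Modes arrive as octal strings; int(..., 8) converts them for os.chmod().
mode = int(meta["mode"], 8)
assert mode == 0o644

# Owner names resolve to numeric IDs via pwd; -1 means "leave unchanged".
try:
    uid = pwd.getpwnam(meta["owner"])[2]
except KeyError:
    uid = -1
```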
768 | 769 | os.chown(dest, uid, gid) 770 | if 'mode' in meta: 771 | os.chmod(dest, int(meta['mode'], 8)) 772 | 773 | 774 | class SourcesHandler(object): 775 | '''tar, tar+gzip,tar+bz2 and zip.''' 776 | _sources = {} 777 | 778 | def __init__(self, sources): 779 | self._sources = sources 780 | 781 | def _url_to_tmp_filename(self, url): 782 | tempdir = tempfile.mkdtemp() 783 | atexit.register(lambda: shutil.rmtree(tempdir, True)) 784 | name = os.path.basename(url) 785 | return os.path.join(tempdir, name) 786 | 787 | def _splitext(self, path): 788 | (r, ext) = os.path.splitext(path) 789 | return (r, ext.lower()) 790 | 791 | def _github_ball_type(self, url): 792 | ext = "" 793 | if url.endswith('/'): 794 | url = url[0:-1] 795 | sp = url.split('/') 796 | if len(sp) > 2: 797 | http = sp[0].startswith('http') 798 | github = sp[2].endswith('github.com') 799 | btype = sp[-2] 800 | if http and github: 801 | if 'zipball' == btype: 802 | ext = '.zip' 803 | elif 'tarball' == btype: 804 | ext = '.tgz' 805 | return ext 806 | 807 | def _source_type(self, url): 808 | (r, ext) = self._splitext(url) 809 | if ext == '.gz': 810 | (r, ext2) = self._splitext(r) 811 | if ext2 == '.tar': 812 | ext = '.tgz' 813 | elif ext == '.bz2': 814 | (r, ext2) = self._splitext(r) 815 | if ext2 == '.tar': 816 | ext = '.tbz2' 817 | elif ext == "": 818 | ext = self._github_ball_type(url) 819 | 820 | return ext 821 | 822 | def _apply_source_cmd(self, dest, url): 823 | cmd = "" 824 | basename = os.path.basename(url) 825 | stype = self._source_type(url) 826 | if stype == '.tgz': 827 | cmd = "curl -s '%s' | gunzip | tar -xvf -" % url 828 | elif stype == '.tbz2': 829 | cmd = "curl -s '%s' | bunzip2 | tar -xvf -" % url 830 | elif stype == '.zip': 831 | tmp = self._url_to_tmp_filename(url) 832 | cmd = "curl -s -o '%s' '%s' && unzip -o '%s'" % (tmp, url, tmp) 833 | elif stype == '.tar': 834 | cmd = "curl -s '%s' | tar -xvf -" % url 835 | elif stype == '.gz': 836 | (r, ext) = self._splitext(basename) 837 | cmd = 
"curl -s '%s' | gunzip > '%s'" % (url, r) 838 | elif stype == '.bz2': 839 | (r, ext) = self._splitext(basename) 840 | cmd = "curl -s '%s' | bunzip2 > '%s'" % (url, r) 841 | 842 | if cmd != '': 843 | cmd = "mkdir -p '%s'; cd '%s'; %s" % (dest, dest, cmd) 844 | 845 | return cmd 846 | 847 | def _apply_source(self, dest, url): 848 | cmd = self._apply_source_cmd(dest, url) 849 | # FIXME bug 1498298 850 | if cmd != '': 851 | runner = CommandRunner(cmd, shell=True) 852 | runner.run() 853 | 854 | def apply_sources(self): 855 | if not self._sources: 856 | return 857 | for dest, url in self._sources.items(): 858 | self._apply_source(dest, url) 859 | 860 | 861 | class ServicesHandler(object): 862 | _services = {} 863 | 864 | def __init__(self, services, resource=None, hooks=None): 865 | self._services = services 866 | self.resource = resource 867 | self.hooks = hooks 868 | 869 | def _handle_sysv_command(self, service, command): 870 | if os.path.exists("/bin/systemctl"): 871 | service_exe = "/bin/systemctl" 872 | service = '%s.service' % service 873 | service_start = [service_exe, 'start', service] 874 | service_status = [service_exe, 'status', service] 875 | service_stop = [service_exe, 'stop', service] 876 | elif os.path.exists("/sbin/service"): 877 | service_exe = "/sbin/service" 878 | service_start = [service_exe, service, 'start'] 879 | service_status = [service_exe, service, 'status'] 880 | service_stop = [service_exe, service, 'stop'] 881 | else: 882 | service_exe = "/usr/sbin/service" 883 | service_start = [service_exe, service, 'start'] 884 | service_status = [service_exe, service, 'status'] 885 | service_stop = [service_exe, service, 'stop'] 886 | 887 | if os.path.exists("/bin/systemctl"): 888 | enable_exe = "/bin/systemctl" 889 | enable_on = [enable_exe, 'enable', service] 890 | enable_off = [enable_exe, 'disable', service] 891 | elif os.path.exists("/sbin/chkconfig"): 892 | enable_exe = "/sbin/chkconfig" 893 | enable_on = [enable_exe, service, 'on'] 894 | 
enable_off = [enable_exe, service, 'off'] 895 | 896 | else: 897 | enable_exe = "/usr/sbin/update-rc.d" 898 | enable_on = [enable_exe, service, 'enable'] 899 | enable_off = [enable_exe, service, 'disable'] 900 | 901 | cmd = None 902 | if "enable" == command: 903 | cmd = enable_on 904 | elif "disable" == command: 905 | cmd = enable_off 906 | elif "start" == command: 907 | cmd = service_start 908 | elif "stop" == command: 909 | cmd = service_stop 910 | elif "status" == command: 911 | cmd = service_status 912 | 913 | if cmd is not None: 914 | command = CommandRunner(cmd) 915 | command.run() 916 | return command 917 | else: 918 | LOG.error("Unknown sysv command %s" % command) 919 | 920 | def _initialize_service(self, handler, service, properties): 921 | if "enabled" in properties: 922 | enable = to_boolean(properties["enabled"]) 923 | if enable: 924 | LOG.info("Enabling service %s" % service) 925 | handler(self, service, "enable") 926 | else: 927 | LOG.info("Disabling service %s" % service) 928 | handler(self, service, "disable") 929 | 930 | if "ensureRunning" in properties: 931 | ensure_running = to_boolean(properties["ensureRunning"]) 932 | command = handler(self, service, "status") 933 | running = command.status == 0 934 | if ensure_running and not running: 935 | LOG.info("Starting service %s" % service) 936 | handler(self, service, "start") 937 | elif not ensure_running and running: 938 | LOG.info("Stopping service %s" % service) 939 | handler(self, service, "stop") 940 | 941 | def _monitor_service(self, handler, service, properties): 942 | if "ensureRunning" in properties: 943 | ensure_running = to_boolean(properties["ensureRunning"]) 944 | command = handler(self, service, "status") 945 | running = command.status == 0 946 | if ensure_running and not running: 947 | LOG.warning("Restarting service %s" % service) 948 | start_cmd = handler(self, service, "start") 949 | if start_cmd.status != 0: 950 | LOG.warning('Service %s did not start. 
STDERR: %s' % 951 | (service, start_cmd.stderr)) 952 | for h in self.hooks: 953 | h.event('service.restarted', service, self.resource) 954 | 955 | def _monitor_services(self, handler, services): 956 | for service, properties in services.items(): 957 | self._monitor_service(handler, service, properties) 958 | 959 | def _initialize_services(self, handler, services): 960 | for service, properties in services.items(): 961 | self._initialize_service(handler, service, properties) 962 | 963 | # map of function pointers to various service handlers 964 | _service_handlers = { 965 | "sysvinit": _handle_sysv_command, 966 | "systemd": _handle_sysv_command 967 | } 968 | 969 | def _service_handler(self, manager_name): 970 | handler = None 971 | if manager_name in self._service_handlers: 972 | handler = self._service_handlers[manager_name] 973 | return handler 974 | 975 | def apply_services(self): 976 | """Starts, stops, enables, disables services.""" 977 | if not self._services: 978 | return 979 | for manager, service_entries in self._services.items(): 980 | handler = self._service_handler(manager) 981 | if not handler: 982 | LOG.warning("Skipping invalid service type: %s" % manager) 983 | else: 984 | self._initialize_services(handler, service_entries) 985 | 986 | def monitor_services(self): 987 | """Restarts failed services, and runs hooks.""" 988 | if not self._services: 989 | return 990 | for manager, service_entries in self._services.items(): 991 | handler = self._service_handler(manager) 992 | if not handler: 993 | LOG.warning("Skipping invalid service type: %s" % manager) 994 | else: 995 | self._monitor_services(handler, service_entries) 996 | 997 | 998 | class ConfigsetsHandler(object): 999 | 1000 | def __init__(self, configsets, selectedsets): 1001 | self.configsets = configsets 1002 | self.selectedsets = selectedsets 1003 | 1004 | def expand_sets(self, list, executionlist): 1005 | for elem in list: 1006 | if isinstance(elem, dict): 1007 | dictkeys = elem.keys() 1008 | 
if len(dictkeys) != 1 or next(iter(dictkeys)) != 'ConfigSet': 1009 | raise Exception('invalid ConfigSets metadata') 1010 | dictkey = next(iter(elem.values())) 1011 | try: 1012 | self.expand_sets(self.configsets[dictkey], executionlist) 1013 | except KeyError: 1014 | raise Exception("Undefined ConfigSet '%s' referenced" 1015 | % dictkey) 1016 | else: 1017 | executionlist.append(elem) 1018 | 1019 | def get_configsets(self): 1020 | """Returns a list of Configsets to execute in template.""" 1021 | if not self.configsets: 1022 | if self.selectedsets: 1023 | raise Exception('Template has no configSets') 1024 | return 1025 | if not self.selectedsets: 1026 | if 'default' not in self.configsets: 1027 | raise Exception('Template has no default configSet, one must' 1028 | ' be specified') 1029 | self.selectedsets = 'default' 1030 | 1031 | selectedlist = [x.strip() for x in self.selectedsets.split(',')] 1032 | executionlist = [] 1033 | for item in selectedlist: 1034 | if item not in self.configsets: 1035 | raise Exception("Requested configSet '%s' not in configSets" 1036 | " section" % item) 1037 | self.expand_sets(self.configsets[item], executionlist) 1038 | if not executionlist: 1039 | raise Exception( 1040 | "Requested configSet %s is empty" % self.selectedsets) 1041 | 1042 | return executionlist 1043 | 1044 | 1045 | def metadata_server_port( 1046 | datafile='/var/lib/heat-cfntools/cfn-metadata-server'): 1047 | """Return the metadata server port.
1048 | 1049 | Reads the :NNNN from the end of the URL in cfn-metadata-server 1050 | """ 1051 | try: 1052 | f = open(datafile) 1053 | server_url = f.read().strip() 1054 | f.close() 1055 | except IOError: 1056 | return None 1057 | 1058 | if len(server_url) < 1: 1059 | return None 1060 | 1061 | if server_url[-1] == '/': 1062 | server_url = server_url[:-1] 1063 | 1064 | try: 1065 | return int(server_url.split(':')[-1]) 1066 | except ValueError: 1067 | return None 1068 | 1069 | 1070 | class CommandsHandlerRunError(Exception): 1071 | pass 1072 | 1073 | 1074 | class CommandsHandler(object): 1075 | 1076 | def __init__(self, commands): 1077 | self.commands = commands 1078 | 1079 | def apply_commands(self): 1080 | """Execute commands on the instance in alphabetical order by name.""" 1081 | if not self.commands: 1082 | return 1083 | for command_label in sorted(self.commands): 1084 | LOG.debug("%s is being processed" % command_label) 1085 | self._initialize_command(command_label, 1086 | self.commands[command_label]) 1087 | 1088 | def _initialize_command(self, command_label, properties): 1089 | command_status = None 1090 | cwd = None 1091 | env = properties.get("env", None) 1092 | 1093 | if "cwd" in properties: 1094 | cwd = os.path.expanduser(properties["cwd"]) 1095 | if not os.path.exists(cwd): 1096 | LOG.error("%s has failed. 
" % command_label + 1097 | "%s path does not exist" % cwd) 1098 | return 1099 | 1100 | if "test" in properties: 1101 | test = CommandRunner(properties["test"], shell=True) 1102 | test_status = test.run('root', cwd, env).status 1103 | if test_status != 0: 1104 | LOG.info("%s test returns false, skipping command" 1105 | % command_label) 1106 | return 1107 | else: 1108 | LOG.debug("%s test returns true, proceeding" % command_label) 1109 | 1110 | if "command" in properties: 1111 | try: 1112 | command = properties["command"] 1113 | shell = isinstance(command, str) 1114 | command = CommandRunner(command, shell=shell) 1115 | command.run('root', cwd, env) 1116 | command_status = command.status 1117 | except OSError as e: 1118 | if e.errno == errno.EEXIST: 1119 | LOG.debug(str(e)) 1120 | else: 1121 | LOG.exception(e) 1122 | else: 1123 | LOG.error("%s has failed. " % command_label 1124 | + "'command' property missing") 1125 | return 1126 | 1127 | if command_status == 0: 1128 | LOG.info("%s has been successfully executed" % command_label) 1129 | else: 1130 | if ("ignoreErrors" in properties and 1131 | to_boolean(properties["ignoreErrors"])): 1132 | LOG.info("%s has failed (status=%d). Explicit ignoring" 1133 | % (command_label, command_status)) 1134 | else: 1135 | raise CommandsHandlerRunError("%s has failed." 
% command_label) 1136 | 1137 | 1138 | class GroupsHandler(object): 1139 | 1140 | def __init__(self, groups): 1141 | self.groups = groups 1142 | 1143 | def apply_groups(self): 1144 | """Create Linux/UNIX groups and assign group IDs.""" 1145 | if not self.groups: 1146 | return 1147 | for group, properties in self.groups.items(): 1148 | LOG.debug("%s group is being created" % group) 1149 | self._initialize_group(group, properties) 1150 | 1151 | def _initialize_group(self, group, properties): 1152 | gid = properties.get("gid", None) 1153 | cmd = ['groupadd', group] 1154 | if gid is not None: 1155 | cmd.extend(['--gid', str(gid)]) 1156 | 1157 | command = CommandRunner(cmd) 1158 | command.run() 1159 | command_status = command.status 1160 | 1161 | if command_status == 0: 1162 | LOG.info("%s has been successfully created" % group) 1163 | elif command_status == 9: 1164 | LOG.error("An error occurred creating %s group : " % 1165 | group + "group name not unique") 1166 | elif command_status == 4: 1167 | LOG.error("An error occurred creating %s group : " % 1168 | group + "GID not unique") 1169 | elif command_status == 3: 1170 | LOG.error("An error occurred creating %s group : " % 1171 | group + "GID not valid") 1172 | elif command_status == 2: 1173 | LOG.error("An error occurred creating %s group : " % 1174 | group + "Invalid syntax") 1175 | else: 1176 | LOG.error("An error occurred creating %s group" % group) 1177 | 1178 | 1179 | class UsersHandler(object): 1180 | 1181 | def __init__(self, users): 1182 | self.users = users 1183 | 1184 | def apply_users(self): 1185 | """Create Linux/UNIX users and assign user IDs, groups and homedir.""" 1186 | if not self.users: 1187 | return 1188 | for user, properties in self.users.items(): 1189 | LOG.debug("%s user is being created" % user) 1190 | self._initialize_user(user, properties) 1191 | 1192 | def _initialize_user(self, user, properties): 1193 | uid = properties.get("uid", None) 1194 | homeDir = properties.get("homeDir", None) 1195 | 
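For context, a sketch of the metadata shape GroupsHandler and UsersHandler consume; every name and ID below is hypothetical, not from a real template:

```python
# Hypothetical "groups"/"users" metadata, mirroring the
# AWS::CloudFormation::Init sections these handlers read.
config = {
    "groups": {
        "web": {"gid": "1001"},
    },
    "users": {
        "deploy": {
            "uid": "1001",
            "groups": ["web"],
            "homeDir": "/home/deploy",
        },
    },
}

# GroupsHandler(config["groups"]).apply_groups() would run:
#   groupadd web --gid 1001
# UsersHandler(config["users"]).apply_users() would run:
#   useradd deploy --uid 1001 --home /home/deploy \
#       --groups web --shell /sbin/nologin
assert list(config["users"]) == ["deploy"]
```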
1196 | cmd = ['useradd', user] 1197 | 1198 | if uid is not None: 1199 | cmd.extend(['--uid', str(uid)]) 1200 | 1201 | if homeDir is not None: 1202 | cmd.extend(['--home', str(homeDir)]) 1203 | 1204 | if "groups" in properties: 1205 | groups = ','.join(properties["groups"]) 1206 | cmd.extend(['--groups', groups]) 1207 | 1208 | # Users are created as non-interactive system users with a shell 1209 | # of /sbin/nologin. This is by design and cannot be modified. 1210 | cmd.extend(['--shell', '/sbin/nologin']) 1211 | 1212 | command = CommandRunner(cmd) 1213 | command.run() 1214 | command_status = command.status 1215 | 1216 | if command_status == 0: 1217 | LOG.info("%s has been successfully created" % user) 1218 | elif command_status == 9: 1219 | LOG.error("An error occurred creating %s user : " % 1220 | user + "user name not unique") 1221 | elif command_status == 6: 1222 | LOG.error("An error occurred creating %s user : " % 1223 | user + "group does not exist") 1224 | elif command_status == 4: 1225 | LOG.error("An error occurred creating %s user : " % 1226 | user + "UID not unique") 1227 | elif command_status == 3: 1228 | LOG.error("An error occurred creating %s user : " % 1229 | user + "Invalid argument") 1230 | elif command_status == 2: 1231 | LOG.error("An error occurred creating %s user : " % 1232 | user + "Invalid syntax") 1233 | else: 1234 | LOG.error("An error occurred creating %s user" % user) 1235 | 1236 | 1237 | class MetadataServerConnectionError(Exception): 1238 | pass 1239 | 1240 | 1241 | class Metadata(object): 1242 | _metadata = None 1243 | _init_key = "AWS::CloudFormation::Init" 1244 | DEFAULT_PORT = 8000 1245 | 1246 | def __init__(self, stack, resource, access_key=None, 1247 | secret_key=None, credentials_file=None, region=None, 1248 | configsets=None): 1249 | 1250 | self.stack = stack 1251 | self.resource = resource 1252 | self.access_key = access_key 1253 | self.secret_key = secret_key 1254 | self.region = region 1255 | self.credentials_file = 
credentials_file 1258 | self.configsets = configsets 1259 | 1260 | # TODO(asalkeld) is this metadata for the local resource? 1261 | self._is_local_metadata = True 1262 | self._metadata = None 1263 | self._has_changed = False 1264 | 1265 | def remote_metadata(self): 1266 | """Connect to the metadata server and retrieve the metadata.""" 1267 | 1268 | if self.credentials_file: 1269 | credentials = parse_creds_file(self.credentials_file) 1270 | access_key = credentials['AWSAccessKeyId'] 1271 | secret_key = credentials['AWSSecretKey'] 1272 | elif self.access_key and self.secret_key: 1273 | access_key = self.access_key 1274 | secret_key = self.secret_key 1275 | else: 1276 | raise MetadataServerConnectionError("No credentials!") 1277 | 1278 | port = metadata_server_port() or self.DEFAULT_PORT 1279 | 1280 | client = cloudformation.CloudFormationConnection( 1281 | aws_access_key_id=access_key, 1282 | aws_secret_access_key=secret_key, 1283 | is_secure=False, port=port, 1284 | path="/v1", debug=0) 1285 | 1286 | res = client.describe_stack_resource(self.stack, self.resource) 1287 | # Note pending upstream patch will make this response a 1288 | # boto.cloudformation.stack.StackResourceDetail object 1289 | # which aligns better with all the existing calls 1290 | # see https://github.com/boto/boto/pull/857 1291 | resource_detail = res['DescribeStackResourceResponse'][ 1292 | 'DescribeStackResourceResult']['StackResourceDetail'] 1293 | return resource_detail['Metadata'] 1294 | 1295 | def get_nova_meta(self, 1296 | cache_path='/var/lib/heat-cfntools/nova_meta.json'): 1297 | """Get nova's meta_data.json and cache it. 1298 | 1299 | Since this is called repeatedly, return the cached metadata 1300 | if we have it.
1301 | """ 1302 | 1303 | url = 'http://169.254.169.254/openstack/2012-08-10/meta_data.json' 1304 | if not os.path.exists(cache_path): 1305 | cmd = ['curl', '-o', cache_path, url] 1306 | CommandRunner(cmd).run() 1307 | try: 1308 | with open(cache_path) as fd: 1309 | try: 1310 | return json.load(fd) 1311 | except ValueError: 1312 | pass 1313 | except IOError: 1314 | pass 1315 | return None 1316 | 1317 | def get_instance_id(self): 1318 | """Get the unique identifier for this server.""" 1319 | instance_id = None 1320 | md = self.get_nova_meta() 1321 | if md is not None: 1322 | instance_id = md.get('uuid') 1323 | return instance_id 1324 | 1325 | def get_tags(self): 1326 | """Get the tags for this server.""" 1327 | tags = {} 1328 | md = self.get_nova_meta() 1329 | if md is not None: 1330 | tags.update(md.get('meta', {})) 1331 | tags['InstanceId'] = md['uuid'] 1332 | return tags 1333 | 1334 | def retrieve( 1335 | self, 1336 | meta_str=None, 1337 | default_path='/var/lib/heat-cfntools/cfn-init-data', 1338 | last_path='/var/cache/heat-cfntools/last_metadata'): 1339 | """Read the metadata from the given filename or from the remote server. 
1340 | 1341 | Returns: 1342 | True -- success 1343 | False -- error 1344 | """ 1345 | if self.resource is not None: 1346 | res_last_path = last_path + '_' + self.resource 1347 | else: 1348 | res_last_path = last_path 1349 | 1350 | if meta_str: 1351 | self._data = meta_str 1352 | else: 1353 | try: 1354 | self._data = self.remote_metadata() 1355 | except MetadataServerConnectionError as ex: 1356 | LOG.warning( 1357 | "Unable to retrieve remote metadata : %s" % str(ex)) 1358 | 1359 | # If reading remote metadata fails, we fall back on local files. 1360 | # To get the most up-to-date version, we try 1361 | # /var/cache/heat-cfntools/last_metadata, followed by 1362 | # /var/lib/heat-cfntools/cfn-init-data. 1363 | # This should allow us to do the right thing both during the 1364 | # first cfn-init run (when we only have cfn-init-data), and 1365 | # in the event of a temporary interruption to connectivity 1366 | # affecting cfn-hup, in which case we want to use the locally 1367 | # cached metadata; otherwise the logic below could re-run a stale 1368 | # cfn-init-data. 1369 | fd = None 1370 | for filepath in [res_last_path, last_path, default_path]: 1371 | try: 1372 | fd = open(filepath) 1373 | except IOError: 1374 | LOG.warning("Unable to open local metadata : %s" % 1375 | filepath) 1376 | continue 1377 | else: 1378 | LOG.info("Opened local metadata %s" % filepath) 1379 | break 1380 | 1381 | if fd: 1382 | self._data = fd.read() 1383 | fd.close() 1384 | else: 1385 | LOG.error("Unable to read any valid metadata!") 1386 | return False 1387 | 1388 | if isinstance(self._data, str): 1389 | self._metadata = json.loads(self._data) 1390 | else: 1391 | self._metadata = self._data 1392 | 1393 | last_data = "" 1394 | for metadata_file in [res_last_path, last_path]: 1395 | try: 1396 | with open(metadata_file) as lm: 1397 | try: 1398 | last_data = json.load(lm) 1399 | except ValueError: 1400 | pass 1402 | except IOError: 1403 | LOG.warning("Unable to open local metadata : 
%s" % 1404 | metadata_file) 1405 | continue 1406 | 1407 | if self._metadata != last_data: 1408 | self._has_changed = True 1409 | 1410 | # if the cache dir does not exist, try to create it 1411 | cache_dir = os.path.dirname(last_path) 1412 | if not os.path.isdir(cache_dir): 1413 | try: 1414 | os.makedirs(cache_dir, mode=0o700) 1415 | except IOError as e: 1416 | LOG.warning('could not create metadata cache dir %s [%s]' % 1417 | (cache_dir, e)) 1418 | return 1419 | # save current metadata to file 1420 | tmp_dir = os.path.dirname(last_path) 1421 | with tempfile.NamedTemporaryFile(dir=tmp_dir, 1422 | mode='wb', 1423 | delete=False) as cf: 1424 | os.chmod(cf.name, 0o600) 1425 | cf.write(json.dumps(self._metadata).encode('UTF-8')) 1426 | os.rename(cf.name, last_path) 1428 | if res_last_path != last_path: 1429 | shutil.copy(last_path, res_last_path) 1430 | 1431 | return True 1432 | 1433 | def __str__(self): 1434 | return json.dumps(self._metadata) 1435 | 1436 | def display(self, key=None): 1437 | """Print the metadata to the standard output stream. 1438 | 1439 | By default the full metadata is displayed, but the output can be 1440 | limited to a specific key with the argument. 1441 | 1442 | Arguments: 1443 | key -- the metadata key to display; nested keys can be specified 1444 | by separating them with the dot character.
1445 | e.g., "foo.bar" 1446 | If the key contains a dot, it should be surrounded by single 1447 | quotes 1448 | e.g., "foo.'bar.1'" 1449 | """ 1450 | if self._metadata is None: 1451 | return 1452 | 1453 | if key is None: 1454 | print(str(self)) 1455 | return 1456 | 1457 | value = None 1458 | md = self._metadata 1459 | while True: 1460 | key_match = re.match(r'^(?:(?:\'([^\']+)\')|([^\.]+))(?:\.|$)', 1461 | key) 1462 | if not key_match: 1463 | break 1464 | 1465 | k = key_match.group(1) or key_match.group(2) 1466 | if isinstance(md, dict) and k in md: 1467 | key = key.replace(key_match.group(), '') 1468 | value = md = md[k] 1469 | else: 1470 | break 1471 | 1472 | if key != '': 1473 | value = None 1474 | 1475 | if value is not None: 1476 | print(json.dumps(value)) 1477 | 1478 | return 1479 | 1480 | def _is_valid_metadata(self): 1481 | """Check for the AWS::CloudFormation::Init key in the metadata.""" 1482 | is_valid = (self._metadata and 1483 | self._init_key in self._metadata and 1484 | self._metadata[self._init_key]) 1485 | if is_valid: 1486 | self._metadata = self._metadata[self._init_key] 1487 | return is_valid 1488 | 1489 | def _process_config(self, config="config"): 1490 | """Parse and process a config section. 1491 | 1492 | * packages 1493 | * sources 1494 | * groups 1495 | * users 1496 | * files 1497 | * commands 1498 | * services 1499 | """ 1500 | 1501 | try: 1502 | self._config = self._metadata[config] 1503 | except KeyError: 1504 | raise Exception("Could not find '%s' set in template; you may need" 1505 | " to specify another set."
% config) 1506 | PackagesHandler(self._config.get("packages")).apply_packages() 1507 | SourcesHandler(self._config.get("sources")).apply_sources() 1508 | GroupsHandler(self._config.get("groups")).apply_groups() 1509 | UsersHandler(self._config.get("users")).apply_users() 1510 | FilesHandler(self._config.get("files")).apply_files() 1511 | CommandsHandler(self._config.get("commands")).apply_commands() 1512 | ServicesHandler(self._config.get("services")).apply_services() 1513 | 1514 | def cfn_init(self): 1515 | """Process the resource metadata.""" 1516 | if not self._is_valid_metadata(): 1517 | raise Exception("invalid metadata") 1518 | else: 1519 | executionlist = ConfigsetsHandler(self._metadata.get("configSets"), 1520 | self.configsets).get_configsets() 1521 | if not executionlist: 1522 | self._process_config() 1523 | else: 1524 | for item in executionlist: 1525 | self._process_config(item) 1526 | 1527 | def cfn_hup(self, hooks): 1528 | """Process the resource metadata.""" 1529 | if not self._is_valid_metadata(): 1530 | LOG.debug( 1531 | 'Metadata does not contain a %s section' % self._init_key) 1532 | 1533 | if self._is_local_metadata: 1534 | self._config = self._metadata.get("config", {}) 1535 | s = self._config.get("services") 1536 | sh = ServicesHandler(s, resource=self.resource, hooks=hooks) 1537 | sh.monitor_services() 1538 | 1539 | if self._has_changed: 1540 | for h in hooks: 1541 | h.event('post.update', self.resource, self.resource) 1542 | -------------------------------------------------------------------------------- /heat_cfntools/tests/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openstack/heat-cfntools/8d2a16958bd4542382810630a22583a24f9ba2eb/heat_cfntools/tests/__init__.py -------------------------------------------------------------------------------- /heat_cfntools/tests/test_cfn_helper.py: 
-------------------------------------------------------------------------------- 1 | # 2 | # Copyright 2013 Hewlett-Packard Development Company, L.P. 3 | # All Rights Reserved. 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); you may 6 | # not use this file except in compliance with the License. You may obtain 7 | # a copy of the License at 8 | # 9 | # http://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT 13 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the 14 | # License for the specific language governing permissions and limitations 15 | # under the License. 16 | 17 | import json 18 | import os 19 | import tempfile 20 | from unittest import mock 21 | 22 | import boto.cloudformation as cfn 23 | import fixtures 24 | import testtools 25 | import testtools.matchers as ttm 26 | 27 | from heat_cfntools.cfntools import cfn_helper 28 | 29 | 30 | def popen_root_calls(calls, shell=False): 31 | kwargs = {'env': None, 'cwd': None, 'stderr': -1, 'stdout': -1, 32 | 'shell': shell} 33 | return [ 34 | mock.call(call, **kwargs) 35 | for call in calls 36 | ] 37 | 38 | 39 | class FakePOpen(object): 40 | def __init__(self, stdout='', stderr='', returncode=0): 41 | self.returncode = returncode 42 | self.stdout = stdout 43 | self.stderr = stderr 44 | 45 | def communicate(self): 46 | return (self.stdout, self.stderr) 47 | 48 | def wait(self): 49 | pass 50 | 51 | 52 | @mock.patch.object(cfn_helper.pwd, 'getpwnam') 53 | @mock.patch.object(cfn_helper.os, 'seteuid') 54 | @mock.patch.object(cfn_helper.os, 'geteuid') 55 | class TestCommandRunner(testtools.TestCase): 56 | 57 | def test_command_runner(self, mock_geteuid, mock_seteuid, mock_getpwnam): 58 | def returns(*args, **kwargs): 59 | if args[0][0] == '/bin/command1': 60 | return FakePOpen('All good') 61 | elif args[0][0] == 
'/bin/command2': 62 | return FakePOpen('Doing something', 'error', -1) 63 | else: 64 | raise Exception('This should never happen') 65 | 66 | with mock.patch('subprocess.Popen') as mock_popen: 67 | mock_popen.side_effect = returns 68 | cmd2 = cfn_helper.CommandRunner(['/bin/command2']) 69 | cmd1 = cfn_helper.CommandRunner(['/bin/command1'], 70 | nextcommand=cmd2) 71 | cmd1.run('root') 72 | self.assertEqual( 73 | 'CommandRunner:\n\tcommand: [\'/bin/command1\']\n\tstdout: ' 74 | 'All good', 75 | str(cmd1)) 76 | self.assertEqual( 77 | 'CommandRunner:\n\tcommand: [\'/bin/command2\']\n\tstatus: ' 78 | '-1\n\tstdout: Doing something\n\tstderr: error', 79 | str(cmd2)) 80 | calls = popen_root_calls([['/bin/command1'], ['/bin/command2']]) 81 | mock_popen.assert_has_calls(calls) 82 | 83 | def test_privileges_are_lowered_for_non_root_user(self, mock_geteuid, 84 | mock_seteuid, 85 | mock_getpwnam): 86 | pw_entry = mock.Mock() 87 | pw_entry.pw_uid = 1001 88 | mock_getpwnam.return_value = pw_entry 89 | mock_geteuid.return_value = 0 90 | calls = [mock.call(1001), mock.call(0)] 91 | with mock.patch('subprocess.Popen') as mock_popen: 92 | command = ['/bin/command', '--option=value', 'arg1', 'arg2'] 93 | cmd = cfn_helper.CommandRunner(command) 94 | cmd.run(user='nonroot') 95 | self.assertTrue(mock_geteuid.called) 96 | mock_getpwnam.assert_called_once_with('nonroot') 97 | mock_seteuid.assert_has_calls(calls) 98 | self.assertTrue(mock_popen.called) 99 | 100 | def test_run_returns_when_cannot_set_privileges(self, mock_geteuid, 101 | mock_seteuid, 102 | mock_getpwnam): 103 | msg = '[Error 1] Permission Denied' 104 | mock_seteuid.side_effect = Exception(msg) 105 | with mock.patch('subprocess.Popen') as mock_popen: 106 | command = ['/bin/command2'] 107 | cmd = cfn_helper.CommandRunner(command) 108 | cmd.run(user='nonroot') 109 | self.assertTrue(mock_getpwnam.called) 110 | self.assertTrue(mock_seteuid.called) 111 | self.assertFalse(mock_popen.called) 112 | self.assertEqual(126, cmd.status) 
113 | self.assertEqual(msg, cmd.stderr) 114 | 115 | def test_privileges_are_restored_for_command_failure(self, mock_geteuid, 116 | mock_seteuid, 117 | mock_getpwnam): 118 | pw_entry = mock.Mock() 119 | pw_entry.pw_uid = 1001 120 | mock_getpwnam.return_value = pw_entry 121 | mock_geteuid.return_value = 0 122 | calls = [mock.call(1001), mock.call(0)] 123 | with mock.patch('subprocess.Popen') as mock_popen: 124 | mock_popen.side_effect = ValueError('Something wrong') 125 | command = ['/bin/command', '--option=value', 'arg1', 'arg2'] 126 | cmd = cfn_helper.CommandRunner(command) 127 | self.assertRaises(ValueError, cmd.run, user='nonroot') 128 | self.assertTrue(mock_geteuid.called) 129 | mock_getpwnam.assert_called_once_with('nonroot') 130 | mock_seteuid.assert_has_calls(calls) 131 | self.assertTrue(mock_popen.called) 132 | 133 | 134 | @mock.patch.object(cfn_helper, 'controlled_privileges') 135 | class TestPackages(testtools.TestCase): 136 | 137 | def test_yum_install(self, mock_cp): 138 | 139 | def returns(*args, **kwargs): 140 | if args[0][0] == 'rpm' and args[0][1] == '-q': 141 | return FakePOpen(returncode=1) 142 | else: 143 | return FakePOpen(returncode=0) 144 | 145 | calls = [['which', 'yum']] 146 | for pack in ('httpd', 'wordpress', 'mysql-server'): 147 | calls.append(['rpm', '-q', pack]) 148 | calls.append(['yum', '-y', '--showduplicates', 'list', 149 | 'available', pack]) 150 | calls = popen_root_calls(calls) 151 | 152 | packages = { 153 | "yum": { 154 | "mysql-server": [], 155 | "httpd": [], 156 | "wordpress": [] 157 | } 158 | } 159 | 160 | with mock.patch('subprocess.Popen') as mock_popen: 161 | mock_popen.side_effect = returns 162 | cfn_helper.PackagesHandler(packages).apply_packages() 163 | mock_popen.assert_has_calls(calls, any_order=True) 164 | 165 | def test_dnf_install_yum_unavailable(self, mock_cp): 166 | 167 | def returns(*args, **kwargs): 168 | if ((args[0][0] == 'rpm' and args[0][1] == '-q') 169 | or (args[0][0] == 'which' and args[0][1] == 'yum')): 
170 | return FakePOpen(returncode=1) 171 | else: 172 | return FakePOpen(returncode=0) 173 | 174 | calls = [['which', 'yum']] 175 | for pack in ('httpd', 'wordpress', 'mysql-server'): 176 | calls.append(['rpm', '-q', pack]) 177 | calls.append(['dnf', '-y', '--showduplicates', 'list', 178 | 'available', pack]) 179 | calls = popen_root_calls(calls) 180 | 181 | packages = { 182 | "yum": { 183 | "mysql-server": [], 184 | "httpd": [], 185 | "wordpress": [] 186 | } 187 | } 188 | 189 | with mock.patch('subprocess.Popen') as mock_popen: 190 | mock_popen.side_effect = returns 191 | cfn_helper.PackagesHandler(packages).apply_packages() 192 | mock_popen.assert_has_calls(calls, any_order=True) 193 | 194 | def test_dnf_install(self, mock_cp): 195 | 196 | def returns(*args, **kwargs): 197 | if args[0][0] == 'rpm' and args[0][1] == '-q': 198 | return FakePOpen(returncode=1) 199 | else: 200 | return FakePOpen(returncode=0) 201 | 202 | calls = [] 203 | for pack in ('httpd', 'wordpress', 'mysql-server'): 204 | calls.append(['rpm', '-q', pack]) 205 | calls.append(['dnf', '-y', '--showduplicates', 'list', 206 | 'available', pack]) 207 | calls = popen_root_calls(calls) 208 | 209 | packages = { 210 | "dnf": { 211 | "mysql-server": [], 212 | "httpd": [], 213 | "wordpress": [] 214 | } 215 | } 216 | 217 | with mock.patch('subprocess.Popen') as mock_popen: 218 | mock_popen.side_effect = returns 219 | cfn_helper.PackagesHandler(packages).apply_packages() 220 | mock_popen.assert_has_calls(calls, any_order=True) 221 | 222 | def test_zypper_install(self, mock_cp): 223 | 224 | def returns(*args, **kwargs): 225 | if args[0][0].startswith('rpm') and args[0][1].startswith('-q'): 226 | return FakePOpen(returncode=1) 227 | else: 228 | return FakePOpen(returncode=0) 229 | 230 | calls = [] 231 | for pack in ('httpd', 'wordpress', 'mysql-server'): 232 | calls.append(['rpm', '-q', pack]) 233 | calls.append(['zypper', '-n', '--no-refresh', 'search', pack]) 234 | calls = popen_root_calls(calls) 235 | 236 | 
packages = { 237 | "zypper": { 238 | "mysql-server": [], 239 | "httpd": [], 240 | "wordpress": [] 241 | } 242 | } 243 | 244 | with mock.patch('subprocess.Popen') as mock_popen: 245 | mock_popen.side_effect = returns 246 | cfn_helper.PackagesHandler(packages).apply_packages() 247 | mock_popen.assert_has_calls(calls, any_order=True) 248 | 249 | def test_apt_install(self, mock_cp): 250 | packages = { 251 | "apt": { 252 | "mysql-server": [], 253 | "httpd": [], 254 | "wordpress": [] 255 | } 256 | } 257 | 258 | with mock.patch('subprocess.Popen') as mock_popen: 259 | mock_popen.return_value = FakePOpen(returncode=0) 260 | cfn_helper.PackagesHandler(packages).apply_packages() 261 | self.assertTrue(mock_popen.called) 262 | 263 | 264 | @mock.patch.object(cfn_helper, 'controlled_privileges') 265 | class TestServicesHandler(testtools.TestCase): 266 | 267 | def test_services_handler_systemd(self, mock_cp): 268 | calls = [] 269 | returns = [] 270 | 271 | # apply_services 272 | calls.append(['/bin/systemctl', 'enable', 'httpd.service']) 273 | returns.append(FakePOpen()) 274 | calls.append(['/bin/systemctl', 'status', 'httpd.service']) 275 | returns.append(FakePOpen(returncode=-1)) 276 | calls.append(['/bin/systemctl', 'start', 'httpd.service']) 277 | returns.append(FakePOpen()) 278 | calls.append(['/bin/systemctl', 'enable', 'mysqld.service']) 279 | returns.append(FakePOpen()) 280 | calls.append(['/bin/systemctl', 'status', 'mysqld.service']) 281 | returns.append(FakePOpen(returncode=-1)) 282 | calls.append(['/bin/systemctl', 'start', 'mysqld.service']) 283 | returns.append(FakePOpen()) 284 | 285 | # monitor_services not running 286 | calls.append(['/bin/systemctl', 'status', 'httpd.service']) 287 | returns.append(FakePOpen(returncode=-1)) 288 | calls.append(['/bin/systemctl', 'start', 'httpd.service']) 289 | returns.append(FakePOpen()) 290 | 291 | calls = popen_root_calls(calls) 292 | 293 | calls.extend(popen_root_calls(['/bin/services_restarted'], shell=True)) 294 | 
returns.append(FakePOpen()) 295 | 296 | calls.extend(popen_root_calls([['/bin/systemctl', 'status', 297 | 'mysqld.service']])) 298 | returns.append(FakePOpen(returncode=-1)) 299 | calls.extend(popen_root_calls([['/bin/systemctl', 'start', 300 | 'mysqld.service']])) 301 | returns.append(FakePOpen()) 302 | 303 | calls.extend(popen_root_calls(['/bin/services_restarted'], shell=True)) 304 | returns.append(FakePOpen()) 305 | 306 | # monitor_services running 307 | calls.extend(popen_root_calls([['/bin/systemctl', 'status', 308 | 'httpd.service']])) 309 | returns.append(FakePOpen()) 310 | calls.extend(popen_root_calls([['/bin/systemctl', 'status', 311 | 'mysqld.service']])) 312 | returns.append(FakePOpen()) 313 | 314 | services = { 315 | "systemd": { 316 | "mysqld": {"enabled": "true", "ensureRunning": "true"}, 317 | "httpd": {"enabled": "true", "ensureRunning": "true"} 318 | } 319 | } 320 | hooks = [ 321 | cfn_helper.Hook( 322 | 'hook1', 323 | 'service.restarted', 324 | 'Resources.resource1.Metadata', 325 | 'root', 326 | '/bin/services_restarted') 327 | ] 328 | 329 | with mock.patch('os.path.exists') as mock_exists: 330 | mock_exists.return_value = True 331 | with mock.patch('subprocess.Popen') as mock_popen: 332 | mock_popen.side_effect = returns 333 | 334 | sh = cfn_helper.ServicesHandler(services, 'resource1', hooks) 335 | sh.apply_services() 336 | # services not running 337 | sh.monitor_services() 338 | 339 | # services running 340 | sh.monitor_services() 341 | mock_popen.assert_has_calls(calls, any_order=True) 342 | mock_exists.assert_called_with('/bin/systemctl') 343 | 344 | def test_services_handler_systemd_disabled(self, mock_cp): 345 | calls = [] 346 | 347 | # apply_services 348 | calls.append(['/bin/systemctl', 'disable', 'httpd.service']) 349 | calls.append(['/bin/systemctl', 'status', 'httpd.service']) 350 | calls.append(['/bin/systemctl', 'stop', 'httpd.service']) 351 | calls.append(['/bin/systemctl', 'disable', 'mysqld.service']) 352 | 
calls.append(['/bin/systemctl', 'status', 'mysqld.service']) 353 | calls.append(['/bin/systemctl', 'stop', 'mysqld.service']) 354 | calls = popen_root_calls(calls) 355 | 356 | services = { 357 | "systemd": { 358 | "mysqld": {"enabled": "false", "ensureRunning": "false"}, 359 | "httpd": {"enabled": "false", "ensureRunning": "false"} 360 | } 361 | } 362 | hooks = [ 363 | cfn_helper.Hook( 364 | 'hook1', 365 | 'service.restarted', 366 | 'Resources.resource1.Metadata', 367 | 'root', 368 | '/bin/services_restarted') 369 | ] 370 | with mock.patch('os.path.exists') as mock_exists: 371 | mock_exists.return_value = True 372 | with mock.patch('subprocess.Popen') as mock_popen: 373 | mock_popen.return_value = FakePOpen() 374 | sh = cfn_helper.ServicesHandler(services, 'resource1', hooks) 375 | sh.apply_services() 376 | mock_popen.assert_has_calls(calls, any_order=True) 377 | mock_exists.assert_called_with('/bin/systemctl') 378 | 379 | def test_services_handler_sysv_service_chkconfig(self, mock_cp): 380 | 381 | def exists(*args, **kwargs): 382 | return args[0] != '/bin/systemctl' 383 | 384 | calls = [] 385 | returns = [] 386 | 387 | # apply_services 388 | calls.append(['/sbin/chkconfig', 'httpd', 'on']) 389 | returns.append(FakePOpen()) 390 | calls.append(['/sbin/service', 'httpd', 'status']) 391 | returns.append(FakePOpen(returncode=-1)) 392 | calls.append(['/sbin/service', 'httpd', 'start']) 393 | returns.append(FakePOpen()) 394 | 395 | # monitor_services not running 396 | calls.append(['/sbin/service', 'httpd', 'status']) 397 | returns.append(FakePOpen(returncode=-1)) 398 | calls.append(['/sbin/service', 'httpd', 'start']) 399 | returns.append(FakePOpen()) 400 | 401 | calls = popen_root_calls(calls) 402 | 403 | calls.extend(popen_root_calls(['/bin/services_restarted'], shell=True)) 404 | returns.append(FakePOpen()) 405 | 406 | # monitor_services running 407 | calls.extend(popen_root_calls([['/sbin/service', 'httpd', 'status']])) 408 | returns.append(FakePOpen()) 409 | 410 | 
services = { 411 | "sysvinit": { 412 | "httpd": {"enabled": "true", "ensureRunning": "true"} 413 | } 414 | } 415 | hooks = [ 416 | cfn_helper.Hook( 417 | 'hook1', 418 | 'service.restarted', 419 | 'Resources.resource1.Metadata', 420 | 'root', 421 | '/bin/services_restarted') 422 | ] 423 | 424 | with mock.patch('os.path.exists') as mock_exists: 425 | mock_exists.side_effect = exists 426 | with mock.patch('subprocess.Popen') as mock_popen: 427 | mock_popen.side_effect = returns 428 | sh = cfn_helper.ServicesHandler(services, 'resource1', hooks) 429 | sh.apply_services() 430 | # services not running 431 | sh.monitor_services() 432 | 433 | # services running 434 | sh.monitor_services() 435 | mock_popen.assert_has_calls(calls) 436 | mock_exists.assert_any_call('/bin/systemctl') 437 | mock_exists.assert_any_call('/sbin/service') 438 | mock_exists.assert_any_call('/sbin/chkconfig') 439 | 440 | def test_services_handler_sysv_disabled_service_chkconfig(self, mock_cp): 441 | def exists(*args, **kwargs): 442 | return args[0] != '/bin/systemctl' 443 | 444 | calls = [] 445 | 446 | # apply_services 447 | calls.append(['/sbin/chkconfig', 'httpd', 'off']) 448 | calls.append(['/sbin/service', 'httpd', 'status']) 449 | calls.append(['/sbin/service', 'httpd', 'stop']) 450 | 451 | calls = popen_root_calls(calls) 452 | 453 | services = { 454 | "sysvinit": { 455 | "httpd": {"enabled": "false", "ensureRunning": "false"} 456 | } 457 | } 458 | hooks = [ 459 | cfn_helper.Hook( 460 | 'hook1', 461 | 'service.restarted', 462 | 'Resources.resource1.Metadata', 463 | 'root', 464 | '/bin/services_restarted') 465 | ] 466 | 467 | with mock.patch('os.path.exists') as mock_exists: 468 | mock_exists.side_effect = exists 469 | with mock.patch('subprocess.Popen') as mock_popen: 470 | mock_popen.return_value = FakePOpen() 471 | sh = cfn_helper.ServicesHandler(services, 'resource1', hooks) 472 | sh.apply_services() 473 | mock_popen.assert_has_calls(calls) 474 | mock_exists.assert_any_call('/bin/systemctl') 
475 | mock_exists.assert_any_call('/sbin/service') 476 | mock_exists.assert_any_call('/sbin/chkconfig') 477 | 478 | def test_services_handler_sysv_systemctl(self, mock_cp): 479 | calls = [] 480 | returns = [] 481 | 482 | # apply_services 483 | calls.append(['/bin/systemctl', 'enable', 'httpd.service']) 484 | returns.append(FakePOpen()) 485 | calls.append(['/bin/systemctl', 'status', 'httpd.service']) 486 | returns.append(FakePOpen(returncode=-1)) 487 | calls.append(['/bin/systemctl', 'start', 'httpd.service']) 488 | returns.append(FakePOpen()) 489 | 490 | # monitor_services not running 491 | calls.append(['/bin/systemctl', 'status', 'httpd.service']) 492 | returns.append(FakePOpen(returncode=-1)) 493 | calls.append(['/bin/systemctl', 'start', 'httpd.service']) 494 | returns.append(FakePOpen()) 495 | 496 | shell_calls = [] 497 | shell_calls.append('/bin/services_restarted') 498 | returns.append(FakePOpen()) 499 | 500 | calls = popen_root_calls(calls) 501 | calls.extend(popen_root_calls(shell_calls, shell=True)) 502 | 503 | # monitor_services running 504 | calls.extend(popen_root_calls([['/bin/systemctl', 'status', 505 | 'httpd.service']])) 506 | returns.append(FakePOpen()) 507 | 508 | services = { 509 | "sysvinit": { 510 | "httpd": {"enabled": "true", "ensureRunning": "true"} 511 | } 512 | } 513 | hooks = [ 514 | cfn_helper.Hook( 515 | 'hook1', 516 | 'service.restarted', 517 | 'Resources.resource1.Metadata', 518 | 'root', 519 | '/bin/services_restarted') 520 | ] 521 | 522 | with mock.patch('os.path.exists') as mock_exists: 523 | mock_exists.return_value = True 524 | with mock.patch('subprocess.Popen') as mock_popen: 525 | mock_popen.side_effect = returns 526 | sh = cfn_helper.ServicesHandler(services, 'resource1', hooks) 527 | sh.apply_services() 528 | # services not running 529 | sh.monitor_services() 530 | 531 | # services running 532 | sh.monitor_services() 533 | mock_popen.assert_has_calls(calls) 534 | mock_exists.assert_called_with('/bin/systemctl') 535 | 536 | 
def test_services_handler_sysv_disabled_systemctl(self, mock_cp): 537 | calls = [] 538 | 539 | # apply_services 540 | calls.append(['/bin/systemctl', 'disable', 'httpd.service']) 541 | calls.append(['/bin/systemctl', 'status', 'httpd.service']) 542 | calls.append(['/bin/systemctl', 'stop', 'httpd.service']) 543 | 544 | calls = popen_root_calls(calls) 545 | 546 | services = { 547 | "sysvinit": { 548 | "httpd": {"enabled": "false", "ensureRunning": "false"} 549 | } 550 | } 551 | hooks = [ 552 | cfn_helper.Hook( 553 | 'hook1', 554 | 'service.restarted', 555 | 'Resources.resource1.Metadata', 556 | 'root', 557 | '/bin/services_restarted') 558 | ] 559 | 560 | with mock.patch('os.path.exists') as mock_exists: 561 | mock_exists.return_value = True 562 | with mock.patch('subprocess.Popen') as mock_popen: 563 | mock_popen.return_value = FakePOpen() 564 | sh = cfn_helper.ServicesHandler(services, 'resource1', hooks) 565 | sh.apply_services() 566 | mock_popen.assert_has_calls(calls) 567 | mock_exists.assert_called_with('/bin/systemctl') 568 | 569 | def test_services_handler_sysv_service_updaterc(self, mock_cp): 570 | calls = [] 571 | returns = [] 572 | 573 | # apply_services 574 | calls.append(['/usr/sbin/update-rc.d', 'httpd', 'enable']) 575 | returns.append(FakePOpen()) 576 | calls.append(['/usr/sbin/service', 'httpd', 'status']) 577 | returns.append(FakePOpen(returncode=-1)) 578 | calls.append(['/usr/sbin/service', 'httpd', 'start']) 579 | returns.append(FakePOpen()) 580 | 581 | # monitor_services not running 582 | calls.append(['/usr/sbin/service', 'httpd', 'status']) 583 | returns.append(FakePOpen(returncode=-1)) 584 | calls.append(['/usr/sbin/service', 'httpd', 'start']) 585 | returns.append(FakePOpen()) 586 | 587 | shell_calls = [] 588 | shell_calls.append('/bin/services_restarted') 589 | returns.append(FakePOpen()) 590 | 591 | calls = popen_root_calls(calls) 592 | calls.extend(popen_root_calls(shell_calls, shell=True)) 593 | 594 | # monitor_services running 595 | 
calls.extend(popen_root_calls([['/usr/sbin/service', 'httpd', 596 | 'status']])) 597 | returns.append(FakePOpen()) 598 | 599 | services = { 600 | "sysvinit": { 601 | "httpd": {"enabled": "true", "ensureRunning": "true"} 602 | } 603 | } 604 | hooks = [ 605 | cfn_helper.Hook( 606 | 'hook1', 607 | 'service.restarted', 608 | 'Resources.resource1.Metadata', 609 | 'root', 610 | '/bin/services_restarted') 611 | ] 612 | 613 | with mock.patch('os.path.exists') as mock_exists: 614 | mock_exists.return_value = False 615 | with mock.patch('subprocess.Popen') as mock_popen: 616 | mock_popen.side_effect = returns 617 | sh = cfn_helper.ServicesHandler(services, 'resource1', hooks) 618 | sh.apply_services() 619 | # services not running 620 | sh.monitor_services() 621 | 622 | # services running 623 | sh.monitor_services() 624 | mock_popen.assert_has_calls(calls) 625 | mock_exists.assert_any_call('/bin/systemctl') 626 | mock_exists.assert_any_call('/sbin/service') 627 | mock_exists.assert_any_call('/sbin/chkconfig') 628 | 629 | def test_services_handler_sysv_disabled_service_updaterc(self, mock_cp): 630 | calls = [] 631 | returns = [] 632 | 633 | # apply_services 634 | calls.append(['/usr/sbin/update-rc.d', 'httpd', 'disable']) 635 | returns.append(FakePOpen()) 636 | calls.append(['/usr/sbin/service', 'httpd', 'status']) 637 | returns.append(FakePOpen()) 638 | calls.append(['/usr/sbin/service', 'httpd', 'stop']) 639 | returns.append(FakePOpen()) 640 | 641 | calls = popen_root_calls(calls) 642 | 643 | services = { 644 | "sysvinit": { 645 | "httpd": {"enabled": "false", "ensureRunning": "false"} 646 | } 647 | } 648 | hooks = [ 649 | cfn_helper.Hook( 650 | 'hook1', 651 | 'service.restarted', 652 | 'Resources.resource1.Metadata', 653 | 'root', 654 | '/bin/services_restarted') 655 | ] 656 | 657 | with mock.patch('os.path.exists') as mock_exists: 658 | mock_exists.return_value = False 659 | with mock.patch('subprocess.Popen') as mock_popen: 660 | mock_popen.side_effect = returns 661 | sh 
= cfn_helper.ServicesHandler(services, 'resource1', hooks) 662 | sh.apply_services() 663 | mock_popen.assert_has_calls(calls) 664 | mock_exists.assert_any_call('/bin/systemctl') 665 | mock_exists.assert_any_call('/sbin/service') 666 | mock_exists.assert_any_call('/sbin/chkconfig') 667 | 668 | 669 | class TestHupConfig(testtools.TestCase): 670 | 671 | def test_load_main_section(self): 672 | fcreds = tempfile.NamedTemporaryFile() 673 | fcreds.write('AWSAccessKeyId=foo\nAWSSecretKey=bar\n'.encode('UTF-8')) 674 | fcreds.flush() 675 | 676 | main_conf = tempfile.NamedTemporaryFile() 677 | main_conf.write(('''[main] 678 | stack=teststack 679 | credential-file=%s''' % fcreds.name).encode('UTF-8')) 680 | main_conf.flush() 681 | mainconfig = cfn_helper.HupConfig([open(main_conf.name)]) 682 | self.assertEqual( 683 | '{stack: teststack, credential_file: %s, ' 684 | 'region: nova, interval:10}' % fcreds.name, 685 | str(mainconfig)) 686 | main_conf.close() 687 | 688 | main_conf = tempfile.NamedTemporaryFile() 689 | main_conf.write(('''[main] 690 | stack=teststack 691 | region=region1 692 | credential-file=%s-invalid 693 | interval=120''' % fcreds.name).encode('UTF-8')) 694 | main_conf.flush() 695 | e = self.assertRaises(cfn_helper.InvalidCredentialsException, 696 | cfn_helper.HupConfig, 697 | [open(main_conf.name)]) 698 | self.assertIn('invalid credentials file', str(e)) 699 | fcreds.close() 700 | 701 | @mock.patch.object(cfn_helper, 'controlled_privileges') 702 | def test_hup_config(self, mock_cp): 703 | hooks_conf = tempfile.NamedTemporaryFile() 704 | 705 | def write_hook_conf(f, name, triggers, path, action): 706 | f.write(( 707 | '[%s]\ntriggers=%s\npath=%s\naction=%s\nrunas=root\n\n' % ( 708 | name, triggers, path, action)).encode('UTF-8')) 709 | 710 | write_hook_conf( 711 | hooks_conf, 712 | 'hook2', 713 | 'service2.restarted', 714 | 'Resources.resource2.Metadata', 715 | '/bin/hook2') 716 | write_hook_conf( 717 | hooks_conf, 718 | 'hook1', 719 | 'service1.restarted', 720 | 
'Resources.resource1.Metadata', 721 | '/bin/hook1') 722 | write_hook_conf( 723 | hooks_conf, 724 | 'hook3', 725 | 'service3.restarted', 726 | 'Resources.resource3.Metadata', 727 | '/bin/hook3') 728 | write_hook_conf( 729 | hooks_conf, 730 | 'cfn-http-restarted', 731 | 'service.restarted', 732 | 'Resources.resource.Metadata', 733 | '/bin/cfn-http-restarted') 734 | hooks_conf.flush() 735 | 736 | fcreds = tempfile.NamedTemporaryFile() 737 | fcreds.write('AWSAccessKeyId=foo\nAWSSecretKey=bar\n'.encode('UTF-8')) 738 | fcreds.flush() 739 | 740 | main_conf = tempfile.NamedTemporaryFile() 741 | main_conf.write(('''[main] 742 | stack=teststack 743 | credential-file=%s 744 | region=region1 745 | interval=120''' % fcreds.name).encode('UTF-8')) 746 | main_conf.flush() 747 | 748 | mainconfig = cfn_helper.HupConfig([ 749 | open(main_conf.name), 750 | open(hooks_conf.name)]) 751 | unique_resources = mainconfig.unique_resources_get() 752 | self.assertThat([ 753 | 'resource', 754 | 'resource1', 755 | 'resource2', 756 | 'resource3', 757 | ], ttm.Equals(sorted(unique_resources))) 758 | 759 | hooks = sorted(mainconfig.hooks, 760 | key=lambda hook: hook.resource_name_get()) 761 | self.assertEqual(len(hooks), 4) 762 | self.assertEqual( 763 | '{cfn-http-restarted, service.restarted,' 764 | ' Resources.resource.Metadata, root, /bin/cfn-http-restarted}', 765 | str(hooks[0])) 766 | self.assertEqual( 767 | '{hook1, service1.restarted, Resources.resource1.Metadata,' 768 | ' root, /bin/hook1}', str(hooks[1])) 769 | self.assertEqual( 770 | '{hook2, service2.restarted, Resources.resource2.Metadata,' 771 | ' root, /bin/hook2}', str(hooks[2])) 772 | self.assertEqual( 773 | '{hook3, service3.restarted, Resources.resource3.Metadata,' 774 | ' root, /bin/hook3}', str(hooks[3])) 775 | 776 | calls = [] 777 | calls.extend(popen_root_calls(['/bin/cfn-http-restarted'], shell=True)) 778 | calls.extend(popen_root_calls(['/bin/hook1'], shell=True)) 779 | calls.extend(popen_root_calls(['/bin/hook2'], 
shell=True)) 780 | calls.extend(popen_root_calls(['/bin/hook3'], shell=True)) 781 | 782 | with mock.patch('subprocess.Popen') as mock_popen: 783 | mock_popen.return_value = FakePOpen('All good') 784 | 785 | for hook in hooks: 786 | hook.event(hook.triggers, None, hook.resource_name_get()) 787 | 788 | hooks_conf.close() 789 | fcreds.close() 790 | main_conf.close() 791 | mock_popen.assert_has_calls(calls) 792 | 793 | 794 | class TestCfnHelper(testtools.TestCase): 795 | 796 | def _check_metadata_content(self, content, value): 797 | with tempfile.NamedTemporaryFile() as metadata_info: 798 | metadata_info.write(content.encode('UTF-8')) 799 | metadata_info.flush() 800 | port = cfn_helper.metadata_server_port(metadata_info.name) 801 | self.assertEqual(value, port) 802 | 803 | def test_metadata_server_port(self): 804 | self._check_metadata_content("http://172.20.42.42:8000\n", 8000) 805 | 806 | def test_metadata_server_port_https(self): 807 | self._check_metadata_content("https://abc.foo.bar:6969\n", 6969) 808 | 809 | def test_metadata_server_port_noport(self): 810 | self._check_metadata_content("http://172.20.42.42\n", None) 811 | 812 | def test_metadata_server_port_justip(self): 813 | self._check_metadata_content("172.20.42.42", None) 814 | 815 | def test_metadata_server_port_weird(self): 816 | self._check_metadata_content("::::", None) 817 | self._check_metadata_content("beforecolons:aftercolons", None) 818 | 819 | def test_metadata_server_port_emptyfile(self): 820 | self._check_metadata_content("\n", None) 821 | self._check_metadata_content("", None) 822 | 823 | def test_metadata_server_nofile(self): 824 | random_filename = self.getUniqueString() 825 | self.assertIsNone(cfn_helper.metadata_server_port(random_filename)) 826 | 827 | def test_to_boolean(self): 828 | self.assertTrue(cfn_helper.to_boolean(True)) 829 | self.assertTrue(cfn_helper.to_boolean('true')) 830 | self.assertTrue(cfn_helper.to_boolean('yes')) 831 | self.assertTrue(cfn_helper.to_boolean('1')) 832 | 
self.assertTrue(cfn_helper.to_boolean(1)) 833 | 834 | self.assertFalse(cfn_helper.to_boolean(False)) 835 | self.assertFalse(cfn_helper.to_boolean('false')) 836 | self.assertFalse(cfn_helper.to_boolean('no')) 837 | self.assertFalse(cfn_helper.to_boolean('0')) 838 | self.assertFalse(cfn_helper.to_boolean(0)) 839 | self.assertFalse(cfn_helper.to_boolean(None)) 840 | self.assertFalse(cfn_helper.to_boolean('fingle')) 841 | 842 | def test_parse_creds_file(self): 843 | def parse_creds_test(file_contents, creds_match): 844 | with tempfile.NamedTemporaryFile(mode='w') as fcreds: 845 | fcreds.write(file_contents) 846 | fcreds.flush() 847 | creds = cfn_helper.parse_creds_file(fcreds.name) 848 | self.assertThat(creds_match, ttm.Equals(creds)) 849 | parse_creds_test( 850 | 'AWSAccessKeyId=foo\nAWSSecretKey=bar\n', 851 | {'AWSAccessKeyId': 'foo', 'AWSSecretKey': 'bar'} 852 | ) 853 | parse_creds_test( 854 | 'AWSAccessKeyId =foo\nAWSSecretKey= bar\n', 855 | {'AWSAccessKeyId': 'foo', 'AWSSecretKey': 'bar'} 856 | ) 857 | parse_creds_test( 858 | 'AWSAccessKeyId = foo\nAWSSecretKey = bar\n', 859 | {'AWSAccessKeyId': 'foo', 'AWSSecretKey': 'bar'} 860 | ) 861 | 862 | 863 | class TestMetadataRetrieve(testtools.TestCase): 864 | 865 | def setUp(self): 866 | super(TestMetadataRetrieve, self).setUp() 867 | self.tdir = self.useFixture(fixtures.TempDir()) 868 | self.last_file = os.path.join(self.tdir.path, 'last_metadata') 869 | 870 | def test_metadata_retrieve_files(self): 871 | 872 | md_data = {"AWS::CloudFormation::Init": {"config": {"files": { 873 | "/tmp/foo": {"content": "bar"}}}}} 874 | md_str = json.dumps(md_data) 875 | 876 | md = cfn_helper.Metadata('teststack', None) 877 | 878 | with tempfile.NamedTemporaryFile(mode='w+') as default_file: 879 | default_file.write(md_str) 880 | default_file.flush() 881 | self.assertThat(default_file.name, ttm.FileContains(md_str)) 882 | 883 | self.assertTrue( 884 | md.retrieve(default_path=default_file.name, 885 | last_path=self.last_file)) 886 | 887 
| self.assertThat(self.last_file, ttm.FileContains(md_str)) 888 | self.assertThat(md_data, ttm.Equals(md._metadata)) 889 | 890 | md = cfn_helper.Metadata('teststack', None) 891 | self.assertTrue(md.retrieve(default_path=default_file.name, 892 | last_path=self.last_file)) 893 | self.assertThat(md_data, ttm.Equals(md._metadata)) 894 | 895 | def test_metadata_retrieve_none(self): 896 | 897 | md = cfn_helper.Metadata('teststack', None) 898 | default_file = os.path.join(self.tdir.path, 'default_file') 899 | 900 | self.assertFalse(md.retrieve(default_path=default_file, 901 | last_path=self.last_file)) 902 | self.assertIsNone(md._metadata) 903 | 904 | displayed = self.useFixture(fixtures.StringStream('stdout')) 905 | fake_stdout = displayed.stream 906 | self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout)) 907 | md.display() 908 | fake_stdout.flush() 909 | self.assertEqual(displayed.getDetails()['stdout'].as_text(), "") 910 | 911 | def test_metadata_retrieve_passed(self): 912 | 913 | md_data = {"AWS::CloudFormation::Init": {"config": {"files": { 914 | "/tmp/foo": {"content": "bar"}}}}} 915 | md_str = json.dumps(md_data) 916 | 917 | md = cfn_helper.Metadata('teststack', None) 918 | self.assertTrue(md.retrieve(meta_str=md_data, 919 | last_path=self.last_file)) 920 | self.assertThat(md_data, ttm.Equals(md._metadata)) 921 | self.assertEqual(md_str, str(md)) 922 | 923 | displayed = self.useFixture(fixtures.StringStream('stdout')) 924 | fake_stdout = displayed.stream 925 | self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout)) 926 | md.display() 927 | fake_stdout.flush() 928 | self.assertEqual(displayed.getDetails()['stdout'].as_text(), 929 | "{\"AWS::CloudFormation::Init\": {\"config\": {" 930 | "\"files\": {\"/tmp/foo\": {\"content\": \"bar\"}" 931 | "}}}}\n") 932 | 933 | def test_metadata_retrieve_by_key_passed(self): 934 | 935 | md_data = {"foo": {"bar": {"fred.1": "abcd"}}} 936 | md_str = json.dumps(md_data) 937 | 938 | md = 
cfn_helper.Metadata('teststack', None) 939 | self.assertTrue(md.retrieve(meta_str=md_data, 940 | last_path=self.last_file)) 941 | self.assertThat(md_data, ttm.Equals(md._metadata)) 942 | self.assertEqual(md_str, str(md)) 943 | 944 | displayed = self.useFixture(fixtures.StringStream('stdout')) 945 | fake_stdout = displayed.stream 946 | self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout)) 947 | md.display("foo") 948 | fake_stdout.flush() 949 | self.assertEqual(displayed.getDetails()['stdout'].as_text(), 950 | "{\"bar\": {\"fred.1\": \"abcd\"}}\n") 951 | 952 | def test_metadata_retrieve_by_nested_key_passed(self): 953 | 954 | md_data = {"foo": {"bar": {"fred.1": "abcd"}}} 955 | md_str = json.dumps(md_data) 956 | 957 | md = cfn_helper.Metadata('teststack', None) 958 | self.assertTrue(md.retrieve(meta_str=md_data, 959 | last_path=self.last_file)) 960 | self.assertThat(md_data, ttm.Equals(md._metadata)) 961 | self.assertEqual(md_str, str(md)) 962 | 963 | displayed = self.useFixture(fixtures.StringStream('stdout')) 964 | fake_stdout = displayed.stream 965 | self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout)) 966 | md.display("foo.bar.'fred.1'") 967 | fake_stdout.flush() 968 | self.assertEqual(displayed.getDetails()['stdout'].as_text(), 969 | '"abcd"\n') 970 | 971 | def test_metadata_retrieve_key_none(self): 972 | 973 | md_data = {"AWS::CloudFormation::Init": {"config": {"files": { 974 | "/tmp/foo": {"content": "bar"}}}}} 975 | md_str = json.dumps(md_data) 976 | 977 | md = cfn_helper.Metadata('teststack', None) 978 | self.assertTrue(md.retrieve(meta_str=md_data, 979 | last_path=self.last_file)) 980 | self.assertThat(md_data, ttm.Equals(md._metadata)) 981 | self.assertEqual(md_str, str(md)) 982 | 983 | displayed = self.useFixture(fixtures.StringStream('stdout')) 984 | fake_stdout = displayed.stream 985 | self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout)) 986 | md.display("no_key") 987 | fake_stdout.flush() 988 | 
self.assertEqual(displayed.getDetails()['stdout'].as_text(), "") 989 | 990 | def test_metadata_retrieve_by_nested_key_none(self): 991 | 992 | md_data = {"foo": {"bar": {"fred.1": "abcd"}}} 993 | md_str = json.dumps(md_data) 994 | 995 | md = cfn_helper.Metadata('teststack', None) 996 | self.assertTrue(md.retrieve(meta_str=md_data, 997 | last_path=self.last_file)) 998 | self.assertThat(md_data, ttm.Equals(md._metadata)) 999 | self.assertEqual(md_str, str(md)) 1000 | 1001 | displayed = self.useFixture(fixtures.StringStream('stdout')) 1002 | fake_stdout = displayed.stream 1003 | self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout)) 1004 | md.display("foo.fred") 1005 | fake_stdout.flush() 1006 | self.assertEqual(displayed.getDetails()['stdout'].as_text(), "") 1007 | 1008 | def test_metadata_retrieve_by_nested_key_none_with_matching_string(self): 1009 | 1010 | md_data = {"foo": "bar"} 1011 | md_str = json.dumps(md_data) 1012 | 1013 | md = cfn_helper.Metadata('teststack', None) 1014 | self.assertTrue(md.retrieve(meta_str=md_data, 1015 | last_path=self.last_file)) 1016 | self.assertThat(md_data, ttm.Equals(md._metadata)) 1017 | self.assertEqual(md_str, str(md)) 1018 | 1019 | displayed = self.useFixture(fixtures.StringStream('stdout')) 1020 | fake_stdout = displayed.stream 1021 | self.useFixture(fixtures.MonkeyPatch('sys.stdout', fake_stdout)) 1022 | md.display("foo.bar") 1023 | fake_stdout.flush() 1024 | self.assertEqual(displayed.getDetails()['stdout'].as_text(), "") 1025 | 1026 | def test_metadata_creates_cache(self): 1027 | temp_home = tempfile.mkdtemp() 1028 | 1029 | def cleanup_temp_home(thome): 1030 | os.unlink(os.path.join(thome, 'cache', 'last_metadata')) 1031 | os.rmdir(os.path.join(thome, 'cache')) 1032 | os.rmdir(os.path.join(thome)) 1033 | 1034 | self.addCleanup(cleanup_temp_home, temp_home) 1035 | 1036 | last_path = os.path.join(temp_home, 'cache', 'last_metadata') 1037 | md_data = {"AWS::CloudFormation::Init": {"config": {"files": { 1038 | 
"/tmp/foo": {"content": "bar"}}}}} 1039 | md_str = json.dumps(md_data) 1040 | md = cfn_helper.Metadata('teststack', None) 1041 | 1042 | self.assertFalse(os.path.exists(last_path), 1043 | "last_metadata file already exists") 1044 | self.assertTrue(md.retrieve(meta_str=md_str, last_path=last_path)) 1045 | self.assertTrue(os.path.exists(last_path), 1046 | "last_metadata file should exist") 1047 | # Ensure created dirs and file have right perms 1048 | self.assertTrue(os.stat(last_path).st_mode & 0o600 == 0o600) 1049 | self.assertTrue( 1050 | os.stat(os.path.dirname(last_path)).st_mode & 0o700 == 0o700) 1051 | 1052 | def test_is_valid_metadata(self): 1053 | md_data = {"AWS::CloudFormation::Init": {"config": {"files": { 1054 | "/tmp/foo": {"content": "bar"}}}}} 1055 | 1056 | md = cfn_helper.Metadata('teststack', None) 1057 | self.assertTrue( 1058 | md.retrieve(meta_str=md_data, last_path=self.last_file)) 1059 | 1060 | self.assertThat(md_data, ttm.Equals(md._metadata)) 1061 | self.assertTrue(md._is_valid_metadata()) 1062 | self.assertThat( 1063 | md_data['AWS::CloudFormation::Init'], ttm.Equals(md._metadata)) 1064 | 1065 | def test_remote_metadata(self): 1066 | md_data = {"AWS::CloudFormation::Init": {"config": {"files": { 1067 | "/tmp/foo": {"content": "bar"}}}}} 1068 | 1069 | with mock.patch.object( 1070 | cfn.CloudFormationConnection, 'describe_stack_resource' 1071 | ) as mock_dsr: 1072 | mock_dsr.return_value = { 1073 | 'DescribeStackResourceResponse': { 1074 | 'DescribeStackResourceResult': { 1075 | 'StackResourceDetail': {'Metadata': md_data}}}} 1076 | md = cfn_helper.Metadata( 1077 | 'teststack', 1078 | None, 1079 | access_key='foo', 1080 | secret_key='bar') 1081 | self.assertTrue(md.retrieve(last_path=self.last_file)) 1082 | self.assertThat(md_data, ttm.Equals(md._metadata)) 1083 | 1084 | with tempfile.NamedTemporaryFile(mode='w') as fcreds: 1085 | fcreds.write('AWSAccessKeyId=foo\nAWSSecretKey=bar\n') 1086 | fcreds.flush() 1087 | md = cfn_helper.Metadata( 1088 | 
'teststack', None, credentials_file=fcreds.name) 1089 | self.assertTrue(md.retrieve(last_path=self.last_file)) 1090 | self.assertThat(md_data, ttm.Equals(md._metadata)) 1091 | 1092 | def test_nova_meta_with_cache(self): 1093 | meta_in = {"uuid": "f9431d18-d971-434d-9044-5b38f5b4646f", 1094 | "availability_zone": "nova", 1095 | "hostname": "as-wikidatabase-4ykioj3lgi57.novalocal", 1096 | "launch_index": 0, 1097 | "meta": {}, 1098 | "public_keys": {"heat_key": "ssh-rsa etc...\n"}, 1099 | "name": "as-WikiDatabase-4ykioj3lgi57"} 1100 | md_str = json.dumps(meta_in) 1101 | 1102 | md = cfn_helper.Metadata('teststack', None) 1103 | with tempfile.NamedTemporaryFile(mode='w+') as default_file: 1104 | default_file.write(md_str) 1105 | default_file.flush() 1106 | self.assertThat(default_file.name, ttm.FileContains(md_str)) 1107 | meta_out = md.get_nova_meta(cache_path=default_file.name) 1108 | 1109 | self.assertEqual(meta_in, meta_out) 1110 | 1111 | @mock.patch.object(cfn_helper, 'controlled_privileges') 1112 | def test_nova_meta_curl(self, mock_cp): 1113 | url = 'http://169.254.169.254/openstack/2012-08-10/meta_data.json' 1114 | temp_home = tempfile.mkdtemp() 1115 | cache_path = os.path.join(temp_home, 'meta_data.json') 1116 | 1117 | def cleanup_temp_home(thome): 1118 | os.unlink(cache_path) 1119 | os.rmdir(thome) 1120 | 1121 | self.addCleanup(cleanup_temp_home, temp_home) 1122 | 1123 | meta_in = {"uuid": "f9431d18-d971-434d-9044-5b38f5b4646f", 1124 | "availability_zone": "nova", 1125 | "hostname": "as-wikidatabase-4ykioj3lgi57.novalocal", 1126 | "launch_index": 0, 1127 | "meta": {"freddy": "is hungry"}, 1128 | "public_keys": {"heat_key": "ssh-rsa etc...\n"}, 1129 | "name": "as-WikiDatabase-4ykioj3lgi57"} 1130 | md_str = json.dumps(meta_in) 1131 | 1132 | def write_cache_file(*params, **kwargs): 1133 | with open(cache_path, 'w+') as cache_file: 1134 | cache_file.write(md_str) 1135 | cache_file.flush() 1136 | self.assertThat(cache_file.name, ttm.FileContains(md_str)) 1137 | 
return FakePOpen('Downloaded', '', 0) 1138 | 1139 | with mock.patch('subprocess.Popen') as mock_popen: 1140 | mock_popen.side_effect = write_cache_file 1141 | md = cfn_helper.Metadata('teststack', None) 1142 | meta_out = md.get_nova_meta(cache_path=cache_path) 1143 | self.assertEqual(meta_in, meta_out) 1144 | mock_popen.assert_has_calls( 1145 | popen_root_calls([['curl', '-o', cache_path, url]])) 1146 | 1147 | @mock.patch.object(cfn_helper, 'controlled_privileges') 1148 | def test_nova_meta_curl_corrupt(self, mock_cp): 1149 | url = 'http://169.254.169.254/openstack/2012-08-10/meta_data.json' 1150 | temp_home = tempfile.mkdtemp() 1151 | cache_path = os.path.join(temp_home, 'meta_data.json') 1152 | 1153 | def cleanup_temp_home(thome): 1154 | os.unlink(cache_path) 1155 | os.rmdir(thome) 1156 | 1157 | self.addCleanup(cleanup_temp_home, temp_home) 1158 | 1159 | md_str = "this { is not really json" 1160 | 1161 | def write_cache_file(*params, **kwargs): 1162 | with open(cache_path, 'w+') as cache_file: 1163 | cache_file.write(md_str) 1164 | cache_file.flush() 1165 | self.assertThat(cache_file.name, ttm.FileContains(md_str)) 1166 | return FakePOpen('Downloaded', '', 0) 1167 | 1168 | with mock.patch('subprocess.Popen') as mock_popen: 1169 | mock_popen.side_effect = write_cache_file 1170 | md = cfn_helper.Metadata('teststack', None) 1171 | meta_out = md.get_nova_meta(cache_path=cache_path) 1172 | self.assertIsNone(meta_out) 1173 | mock_popen.assert_has_calls( 1174 | popen_root_calls([['curl', '-o', cache_path, url]])) 1175 | 1176 | @mock.patch.object(cfn_helper, 'controlled_privileges') 1177 | def test_nova_meta_curl_failed(self, mock_cp): 1178 | url = 'http://169.254.169.254/openstack/2012-08-10/meta_data.json' 1179 | temp_home = tempfile.mkdtemp() 1180 | cache_path = os.path.join(temp_home, 'meta_data.json') 1181 | 1182 | def cleanup_temp_home(thome): 1183 | os.rmdir(thome) 1184 | 1185 | self.addCleanup(cleanup_temp_home, temp_home) 1186 | 1187 | with 
mock.patch('subprocess.Popen') as mock_popen: 1188 | mock_popen.return_value = FakePOpen('Failed', '', 1) 1189 | md = cfn_helper.Metadata('teststack', None) 1190 | meta_out = md.get_nova_meta(cache_path=cache_path) 1191 | self.assertIsNone(meta_out) 1192 | mock_popen.assert_has_calls( 1193 | popen_root_calls([['curl', '-o', cache_path, url]])) 1194 | 1195 | def test_get_tags(self): 1196 | fake_tags = {'foo': 'fee', 1197 | 'apple': 'red'} 1198 | md_data = {"uuid": "f9431d18-d971-434d-9044-5b38f5b4646f", 1199 | "availability_zone": "nova", 1200 | "hostname": "as-wikidatabase-4ykioj3lgi57.novalocal", 1201 | "launch_index": 0, 1202 | "meta": fake_tags, 1203 | "public_keys": {"heat_key": "ssh-rsa etc...\n"}, 1204 | "name": "as-WikiDatabase-4ykioj3lgi57"} 1205 | tags_expect = fake_tags 1206 | tags_expect['InstanceId'] = md_data['uuid'] 1207 | 1208 | md = cfn_helper.Metadata('teststack', None) 1209 | 1210 | with mock.patch.object(md, 'get_nova_meta') as mock_method: 1211 | mock_method.return_value = md_data 1212 | tags = md.get_tags() 1213 | mock_method.assert_called_once_with() 1214 | 1215 | self.assertEqual(tags_expect, tags) 1216 | 1217 | def test_get_instance_id(self): 1218 | uuid = "f9431d18-d971-434d-9044-5b38f5b4646f" 1219 | md_data = {"uuid": uuid, 1220 | "availability_zone": "nova", 1221 | "hostname": "as-wikidatabase-4ykioj3lgi57.novalocal", 1222 | "launch_index": 0, 1223 | "public_keys": {"heat_key": "ssh-rsa etc...\n"}, 1224 | "name": "as-WikiDatabase-4ykioj3lgi57"} 1225 | 1226 | md = cfn_helper.Metadata('teststack', None) 1227 | 1228 | with mock.patch.object(md, 'get_nova_meta') as mock_method: 1229 | mock_method.return_value = md_data 1230 | self.assertEqual(md.get_instance_id(), uuid) 1231 | mock_method.assert_called_once_with() 1232 | 1233 | 1234 | class TestCfnInit(testtools.TestCase): 1235 | 1236 | def setUp(self): 1237 | super(TestCfnInit, self).setUp() 1238 | self.tdir = self.useFixture(fixtures.TempDir()) 1239 | self.last_file = 
os.path.join(self.tdir.path, 'last_metadata') 1240 | 1241 | def test_cfn_init(self): 1242 | 1243 | with tempfile.NamedTemporaryFile(mode='w+') as foo_file: 1244 | md_data = {"AWS::CloudFormation::Init": {"config": {"files": { 1245 | foo_file.name: {"content": "bar"}}}}} 1246 | 1247 | md = cfn_helper.Metadata('teststack', None) 1248 | self.assertTrue( 1249 | md.retrieve(meta_str=md_data, last_path=self.last_file)) 1250 | md.cfn_init() 1251 | self.assertThat(foo_file.name, ttm.FileContains('bar')) 1252 | 1253 | @mock.patch.object(cfn_helper, 'controlled_privileges') 1254 | def test_cfn_init_with_ignore_errors_false(self, mock_cp): 1255 | md_data = {"AWS::CloudFormation::Init": {"config": {"commands": { 1256 | "00_foo": {"command": "/bin/command1", 1257 | "ignoreErrors": "false"}}}}} 1258 | with mock.patch('subprocess.Popen') as mock_popen: 1259 | mock_popen.return_value = FakePOpen('Doing something', 'error', -1) 1260 | md = cfn_helper.Metadata('teststack', None) 1261 | self.assertTrue( 1262 | md.retrieve(meta_str=md_data, last_path=self.last_file)) 1263 | self.assertRaises(cfn_helper.CommandsHandlerRunError, md.cfn_init) 1264 | mock_popen.assert_has_calls(popen_root_calls(['/bin/command1'], 1265 | shell=True)) 1266 | 1267 | @mock.patch.object(cfn_helper, 'controlled_privileges') 1268 | def test_cfn_init_with_ignore_errors_true(self, mock_cp): 1269 | calls = [] 1270 | returns = [] 1271 | calls.extend(popen_root_calls(['/bin/command1'], shell=True)) 1272 | returns.append(FakePOpen('Doing something', 'error', -1)) 1273 | calls.extend(popen_root_calls(['/bin/command2'], shell=True)) 1274 | returns.append(FakePOpen('All good')) 1275 | 1276 | md_data = {"AWS::CloudFormation::Init": {"config": {"commands": { 1277 | "00_foo": {"command": "/bin/command1", 1278 | "ignoreErrors": "true"}, 1279 | "01_bar": {"command": "/bin/command2", 1280 | "ignoreErrors": "false"} 1281 | }}}} 1282 | 1283 | with mock.patch('subprocess.Popen') as mock_popen: 1284 | mock_popen.side_effect = 
returns 1285 | md = cfn_helper.Metadata('teststack', None) 1286 | self.assertTrue( 1287 | md.retrieve(meta_str=md_data, last_path=self.last_file)) 1288 | md.cfn_init() 1289 | mock_popen.assert_has_calls(calls) 1290 | 1291 | @mock.patch.object(cfn_helper, 'controlled_privileges') 1292 | def test_cfn_init_runs_list_commands_without_shell(self, mock_cp): 1293 | calls = [] 1294 | returns = [] 1295 | # command supplied as list shouldn't run on shell 1296 | calls.extend(popen_root_calls([['/bin/command1', 'arg']], shell=False)) 1297 | returns.append(FakePOpen('Doing something')) 1298 | # command supplied as string should run on shell 1299 | calls.extend(popen_root_calls(['/bin/command2'], shell=True)) 1300 | returns.append(FakePOpen('All good')) 1301 | 1302 | md_data = {"AWS::CloudFormation::Init": {"config": {"commands": { 1303 | "00_foo": {"command": ["/bin/command1", "arg"]}, 1304 | "01_bar": {"command": "/bin/command2"} 1305 | }}}} 1306 | 1307 | with mock.patch('subprocess.Popen') as mock_popen: 1308 | mock_popen.side_effect = returns 1309 | md = cfn_helper.Metadata('teststack', None) 1310 | self.assertTrue( 1311 | md.retrieve(meta_str=md_data, last_path=self.last_file)) 1312 | md.cfn_init() 1313 | mock_popen.assert_has_calls(calls) 1314 | 1315 | 1316 | class TestSourcesHandler(testtools.TestCase): 1317 | def test_apply_sources_empty(self): 1318 | sh = cfn_helper.SourcesHandler({}) 1319 | sh.apply_sources() 1320 | 1321 | def _test_apply_sources(self, url, end_file): 1322 | dest = tempfile.mkdtemp() 1323 | self.addCleanup(os.rmdir, dest) 1324 | sources = {dest: url} 1325 | td = os.path.dirname(end_file) 1326 | er = "mkdir -p '%s'; cd '%s'; curl -s '%s' | gunzip | tar -xvf -" 1327 | calls = popen_root_calls([er % (dest, dest, url)], shell=True) 1328 | 1329 | with mock.patch.object(tempfile, 'mkdtemp') as mock_mkdtemp: 1330 | mock_mkdtemp.return_value = td 1331 | with mock.patch('subprocess.Popen') as mock_popen: 1332 | mock_popen.return_value = FakePOpen('Curl good') 
1333 | sh = cfn_helper.SourcesHandler(sources) 1334 | sh.apply_sources() 1335 | mock_popen.assert_has_calls(calls) 1336 | mock_mkdtemp.assert_called_with() 1337 | 1338 | @mock.patch.object(cfn_helper, 'controlled_privileges') 1339 | def test_apply_sources_github(self, mock_cp): 1340 | url = "https://github.com/NoSuchProject/tarball/NoSuchTarball" 1341 | dest = tempfile.mkdtemp() 1342 | self.addCleanup(os.rmdir, dest) 1343 | sources = {dest: url} 1344 | er = "mkdir -p '%s'; cd '%s'; curl -s '%s' | gunzip | tar -xvf -" 1345 | calls = popen_root_calls([er % (dest, dest, url)], shell=True) 1346 | with mock.patch('subprocess.Popen') as mock_popen: 1347 | mock_popen.return_value = FakePOpen('Curl good') 1348 | sh = cfn_helper.SourcesHandler(sources) 1349 | sh.apply_sources() 1350 | mock_popen.assert_has_calls(calls) 1351 | 1352 | @mock.patch.object(cfn_helper, 'controlled_privileges') 1353 | def test_apply_sources_general(self, mock_cp): 1354 | url = "https://website.no.existe/a/b/c/file.tar.gz" 1355 | dest = tempfile.mkdtemp() 1356 | self.addCleanup(os.rmdir, dest) 1357 | sources = {dest: url} 1358 | er = "mkdir -p '%s'; cd '%s'; curl -s '%s' | gunzip | tar -xvf -" 1359 | calls = popen_root_calls([er % (dest, dest, url)], shell=True) 1360 | with mock.patch('subprocess.Popen') as mock_popen: 1361 | mock_popen.return_value = FakePOpen('Curl good') 1362 | sh = cfn_helper.SourcesHandler(sources) 1363 | sh.apply_sources() 1364 | mock_popen.assert_has_calls(calls) 1365 | 1366 | def test_apply_source_cmd(self): 1367 | sh = cfn_helper.SourcesHandler({}) 1368 | er = "mkdir -p '%s'; cd '%s'; curl -s '%s' | %s | tar -xvf -" 1369 | dest = '/tmp' 1370 | # test tgz 1371 | url = 'http://www.example.com/a.tgz' 1372 | cmd = sh._apply_source_cmd(dest, url) 1373 | self.assertEqual(er % (dest, dest, url, "gunzip"), cmd) 1374 | # test tar.gz 1375 | url = 'http://www.example.com/a.tar.gz' 1376 | cmd = sh._apply_source_cmd(dest, url) 1377 | self.assertEqual(er % (dest, dest, url, "gunzip"), 
cmd) 1378 | # test github - tarball 1 1379 | url = 'https://github.com/openstack/heat-cfntools/tarball/master' 1380 | cmd = sh._apply_source_cmd(dest, url) 1381 | self.assertEqual(er % (dest, dest, url, "gunzip"), cmd) 1382 | # test github - tarball 2 1383 | url = 'https://github.com/openstack/heat-cfntools/tarball/master/' 1384 | cmd = sh._apply_source_cmd(dest, url) 1385 | self.assertEqual(er % (dest, dest, url, "gunzip"), cmd) 1386 | # test tbz2 1387 | url = 'http://www.example.com/a.tbz2' 1388 | cmd = sh._apply_source_cmd(dest, url) 1389 | self.assertEqual(er % (dest, dest, url, "bunzip2"), cmd) 1390 | # test tar.bz2 1391 | url = 'http://www.example.com/a.tar.bz2' 1392 | cmd = sh._apply_source_cmd(dest, url) 1393 | self.assertEqual(er % (dest, dest, url, "bunzip2"), cmd) 1394 | # test zip 1395 | er = "mkdir -p '%s'; cd '%s'; curl -s -o '%s' '%s' && unzip -o '%s'" 1396 | url = 'http://www.example.com/a.zip' 1397 | d = "/tmp/tmp2I0yNK" 1398 | tmp = "%s/a.zip" % d 1399 | with mock.patch.object(tempfile, 'mkdtemp') as mock_mkdtemp: 1400 | mock_mkdtemp.return_value = d 1401 | 1402 | cmd = sh._apply_source_cmd(dest, url) 1403 | self.assertEqual(er % (dest, dest, tmp, url, tmp), cmd) 1404 | # test gz 1405 | er = "mkdir -p '%s'; cd '%s'; curl -s '%s' | %s > '%s'" 1406 | url = 'http://www.example.com/a.sh.gz' 1407 | cmd = sh._apply_source_cmd(dest, url) 1408 | self.assertEqual(er % (dest, dest, url, "gunzip", "a.sh"), cmd) 1409 | # test bz2 1410 | url = 'http://www.example.com/a.sh.bz2' 1411 | cmd = sh._apply_source_cmd(dest, url) 1412 | self.assertEqual(er % (dest, dest, url, "bunzip2", "a.sh"), cmd) 1413 | # test other 1414 | url = 'http://www.example.com/a.sh' 1415 | cmd = sh._apply_source_cmd(dest, url) 1416 | self.assertEqual("", cmd) 1417 | mock_mkdtemp.assert_called_with() 1418 | -------------------------------------------------------------------------------- /heat_cfntools/tests/test_cfn_hup.py: 
-------------------------------------------------------------------------------- 1 | # 2 | # Copyright 2013 Hewlett-Packard Development Company, L.P. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); you may 5 | # not use this file except in compliance with the License. You may obtain 6 | # a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT 12 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the 13 | # License for the specific language governing permissions and limitations 14 | # under the License. 15 | 16 | import tempfile 17 | from unittest import mock 18 | 19 | import fixtures 20 | import testtools 21 | 22 | from heat_cfntools.cfntools import cfn_helper 23 | 24 | 25 | class TestCfnHup(testtools.TestCase): 26 | 27 | def setUp(self): 28 | super(TestCfnHup, self).setUp() 29 | self.logger = self.useFixture(fixtures.FakeLogger()) 30 | self.stack_name = self.getUniqueString() 31 | self.resource = self.getUniqueString() 32 | self.region = self.getUniqueString() 33 | self.creds = tempfile.NamedTemporaryFile() 34 | self.metadata = cfn_helper.Metadata(self.stack_name, 35 | self.resource, 36 | credentials_file=self.creds.name, 37 | region=self.region) 38 | self.init_content = self.getUniqueString() 39 | self.init_temp = tempfile.NamedTemporaryFile() 40 | self.service_name = self.getUniqueString() 41 | self.init_section = {'AWS::CloudFormation::Init': { 42 | 'config': { 43 | 'services': { 44 | 'sysvinit': { 45 | self.service_name: { 46 | 'enabled': True, 47 | 'ensureRunning': True, 48 | } 49 | } 50 | }, 51 | 'files': { 52 | self.init_temp.name: { 53 | 'content': self.init_content 54 | } 55 | } 56 | } 57 | } 58 | } 59 | 60 | def _mock_retrieve_metadata(self, desired_metadata): 61 | with mock.patch.object( 62 | cfn_helper.Metadata, 
'remote_metadata') as mock_method: 63 | mock_method.return_value = desired_metadata 64 | with tempfile.NamedTemporaryFile() as last_md: 65 | self.metadata.retrieve(last_path=last_md.name) 66 | 67 | def _test_cfn_hup_metadata(self, metadata): 68 | 69 | self._mock_retrieve_metadata(metadata) 70 | FakeServicesHandler = mock.Mock() 71 | FakeServicesHandler.monitor_services.return_value = None 72 | self.useFixture( 73 | fixtures.MonkeyPatch( 74 | 'heat_cfntools.cfntools.cfn_helper.ServicesHandler', 75 | FakeServicesHandler)) 76 | 77 | section = self.getUniqueString() 78 | triggers = 'post.add,post.delete,post.update' 79 | path = 'Resources.%s.Metadata' % self.resource 80 | runas = 'root' 81 | action = '/bin/sh -c "true"' 82 | hook = cfn_helper.Hook(section, triggers, path, runas, action) 83 | 84 | with mock.patch.object(cfn_helper.Hook, 'event') as mock_method: 85 | mock_method.return_value = None 86 | self.metadata.cfn_hup([hook]) 87 | 88 | def test_cfn_hup_empty_metadata(self): 89 | self._test_cfn_hup_metadata({}) 90 | 91 | def test_cfn_hup_cfn_init_metadata(self): 92 | self._test_cfn_hup_metadata(self.init_section) 93 | -------------------------------------------------------------------------------- /releasenotes/notes/remove-cfn-push-stats-fe0cb5de0d6077cc.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | upgrade: 3 | - | 4 | The ``cfn-push-stats`` tool has been removed. This tool required 5 | the CloudWatch API of heat, which had already been removed.
6 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | pbr!=2.1.0,>=2.0.0 2 | boto>=2.32.1 3 | psutil>=1.1.1 4 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [metadata] 2 | name = heat-cfntools 3 | summary = Tools required to be installed on Heat provisioned cloud instances 4 | description_file = 5 | README.rst 6 | author = OpenStack 7 | author_email = openstack-discuss@lists.openstack.org 8 | home_page = https://docs.openstack.org/heat-cfntools/latest/ 9 | python_requires = >=3.8 10 | classifier = 11 | Environment :: OpenStack 12 | Intended Audience :: Information Technology 13 | Intended Audience :: System Administrators 14 | License :: OSI Approved :: Apache Software License 15 | Operating System :: POSIX :: Linux 16 | Programming Language :: Python 17 | Programming Language :: Python :: 3 18 | Programming Language :: Python :: 3.8 19 | Programming Language :: Python :: 3.9 20 | Programming Language :: Python :: 3.10 21 | Programming Language :: Python :: 3.11 22 | Programming Language :: Python :: 3 :: Only 23 | 24 | [files] 25 | packages = 26 | heat_cfntools 27 | scripts = 28 | bin/cfn-create-aws-symlinks 29 | bin/cfn-get-metadata 30 | bin/cfn-hup 31 | bin/cfn-init 32 | bin/cfn-signal 33 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # Copyright (c) 2013 Hewlett-Packard Development Company, L.P. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 
6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 13 | # implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | 17 | import setuptools 18 | 19 | setuptools.setup( 20 | setup_requires=['pbr>=2.0.0'], 21 | pbr=True) 22 | -------------------------------------------------------------------------------- /test-requirements.txt: -------------------------------------------------------------------------------- 1 | coverage!=4.4,>=4.0 # Apache-2.0 2 | hacking>=6.1.0,<6.2.0 # Apache-2.0 3 | stestr>=2.0.0 4 | testtools>=0.9.34 5 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | [tox] 2 | envlist = py3,pep8 3 | ignore_basepython_conflict = true 4 | 5 | [testenv] 6 | basepython = python3 7 | setenv = VIRTUAL_ENV={envdir} 8 | deps = 9 | -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} 10 | -r{toxinidir}/requirements.txt 11 | -r{toxinidir}/test-requirements.txt 12 | commands = stestr run --slowest {posargs} 13 | 14 | [testenv:pep8] 15 | commands = flake8 16 | flake8 --filename=cfn-* bin 17 | 18 | [testenv:cover] 19 | setenv = 20 | {[testenv]setenv} 21 | PYTHON=coverage run --source heat_cfntools --parallel-mode 22 | commands = 23 | coverage erase 24 | stestr run {posargs} 25 | coverage combine 26 | coverage html -d cover 27 | coverage xml -o cover/coverage.xml 28 | coverage report 29 | 30 | [testenv:venv] 31 | commands = {posargs} 32 | 33 | [flake8] 34 | show-source = true 35 | exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,tools 36 | 37 | [testenv:docs] 38 | deps = 39 | 
-c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} 40 | -r{toxinidir}/doc/requirements.txt 41 | commands = sphinx-build -W -b html doc/source doc/build/html 42 | --------------------------------------------------------------------------------