├── .DS_Store
├── .github
│   └── PULL_REQUEST_TEMPLATE.md
├── .gitignore
├── CHANGELOG.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE.txt
├── NOTICE.txt
├── README.md
├── deployment
│   ├── build-s3-dist.sh
│   ├── cost-optimization-ec2-right-sizing.template
│   └── run-unit-tests.sh
└── source
    ├── local-solution-helper
    │   ├── __init__.py
    │   ├── pycfn_custom_resource
    │   │   ├── __init__.py
    │   │   ├── lambda_backed.py
    │   │   └── util.py
    │   ├── setup.py
    │   └── solution-helper.py
    └── scripts
        ├── callgcw.py
        ├── deleteandterminate.py
        ├── getcloudwatchmetrics.py
        └── run-rightsizing-redshift.py

/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/amazon-archives/cost-optimization-ec2-right-sizing/844380205acbae24f02fa3f8e0589ccbe5877a67/.DS_Store
--------------------------------------------------------------------------------
/.github/PULL_REQUEST_TEMPLATE.md:
--------------------------------------------------------------------------------
1 | *Issue #, if available:*
2 | 
3 | *Description of changes:*
4 | 
5 | 
6 | By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
7 | 
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | **/dist
2 | **/.zip
3 | **/.DS_Store
4 | **/open-source
5 | *.orig
6 | source/local-solution-helper.zip
--------------------------------------------------------------------------------
/CHANGELOG.md:
--------------------------------------------------------------------------------
 1 | # Changelog
 2 | 
 3 | The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 4 | and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 5 | 
 6 | ## [2.4.1] - 2019-10-22
 7 | 
 8 | ### Added
 9 | * Now automatically selects the latest Amazon Linux HVM x86_64 AMI
10 | 
11 | ### Changed
12 | * Converted the CloudFormation template from JSON to YAML, as YAML is easier to read and supports comments
13 | * Upgraded the solution to work under Python 3.7
14 | * Replaced psycopg2 with aws-psycopg2, which statically links the PostgreSQL libs
15 | * Replaced Python 2 urllib with http.client
16 | * Updated Solution Helper to use the requests module
17 | * Solution Helper now runs under Python 3.7
18 | 
19 | ### Fixed
20 | * Removed multiple installs of Python from the EC2 deployment
21 | * Moved Python install to a separate ConfigSet for clarity
22 | * Eliminated duplication and extra Python modules
23 | 
24 | ### Removed
25 | * Lambda-backed custom resource for AMI selection
26 | * boto3.vendored module
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | ## Code of Conduct
2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
4 | opensource-codeofconduct@amazon.com with any additional questions or comments.
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
 1 | # Contributing Guidelines
 2 | 
 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional
 4 | documentation, we greatly value feedback and contributions from our community.
 5 | 
 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
 7 | information to effectively respond to your bug report or contribution.
 8 | 
 9 | 
10 | ## Reporting Bugs/Feature Requests
11 | 
12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features.
13 | 
14 | When filing an issue, please check [existing open](https://github.com/awslabs/cost-optimization-ec2-right-sizing/issues), or [recently closed](https://github.com/awslabs/cost-optimization-ec2-right-sizing/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20), issues to make sure somebody else hasn't already
15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:
16 | 
17 | * A reproducible test case or series of steps
18 | * The version of our code being used
19 | * Any modifications you've made relevant to the bug
20 | * Anything unusual about your environment or deployment
21 | 
22 | 
23 | ## Contributing via Pull Requests
24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:
25 | 
26 | 1. You are working against the latest source on the *master* branch.
27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted.
29 | 
30 | To send us a pull request, please:
31 | 
32 | 1. Fork the repository.
33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
34 | 3. Ensure local tests pass.
35 | 4. Commit to your fork using clear commit messages.
36 | 5. Send us a pull request, answering any default questions in the pull request interface.
37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.
38 | 
39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/).
41 | 
42 | 
43 | ## Finding contributions to work on
44 | Looking at the existing issues is a great way to find something to work on. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any ['help wanted'](https://github.com/awslabs/cost-optimization-ec2-right-sizing/labels/help%20wanted) issues is a great place to start.
45 | 
46 | 
47 | ## Code of Conduct
48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
50 | opensource-codeofconduct@amazon.com with any additional questions or comments.
51 | 
52 | 
53 | ## Security issue notifications
54 | If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue.
55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](https://github.com/awslabs/cost-optimization-ec2-right-sizing/blob/master/LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 60 | 61 | We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes. 62 | -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | 2 | Apache License 3 | Version 2.0, January 2004 4 | http://www.apache.org/licenses/ 5 | 6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 7 | 8 | 1. Definitions. 9 | 10 | "License" shall mean the terms and conditions for use, reproduction, 11 | and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by 14 | the copyright owner that is granting the License. 15 | 16 | "Legal Entity" shall mean the union of the acting entity and all 17 | other entities that control, are controlled by, or are under common 18 | control with that entity. For the purposes of this definition, 19 | "control" means (i) the power, direct or indirect, to cause the 20 | direction or management of such entity, whether by contract or 21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 22 | outstanding shares, or (iii) beneficial ownership of such entity. 23 | 24 | "You" (or "Your") shall mean an individual or Legal Entity 25 | exercising permissions granted by this License. 26 | 27 | "Source" form shall mean the preferred form for making modifications, 28 | including but not limited to software source code, documentation 29 | source, and configuration files. 30 | 31 | "Object" form shall mean any form resulting from mechanical 32 | transformation or translation of a Source form, including but 33 | not limited to compiled object code, generated documentation, 34 | and conversions to other media types. 35 | 36 | "Work" shall mean the work of authorship, whether in Source or 37 | Object form, made available under the License, as indicated by a 38 | copyright notice that is included in or attached to the work 39 | (an example is provided in the Appendix below). 40 | 41 | "Derivative Works" shall mean any work, whether in Source or Object 42 | form, that is based on (or derived from) the Work and for which the 43 | editorial revisions, annotations, elaborations, or other modifications 44 | represent, as a whole, an original work of authorship. For the purposes 45 | of this License, Derivative Works shall not include works that remain 46 | separable from, or merely link (or bind by name) to the interfaces of, 47 | the Work and Derivative Works thereof. 48 | 49 | "Contribution" shall mean any work of authorship, including 50 | the original version of the Work and any modifications or additions 51 | to that Work or Derivative Works thereof, that is intentionally 52 | submitted to Licensor for inclusion in the Work by the copyright owner 53 | or by an individual or Legal Entity authorized to submit on behalf of 54 | the copyright owner. 
For the purposes of this definition, "submitted" 55 | means any form of electronic, verbal, or written communication sent 56 | to the Licensor or its representatives, including but not limited to 57 | communication on electronic mailing lists, source code control systems, 58 | and issue tracking systems that are managed by, or on behalf of, the 59 | Licensor for the purpose of discussing and improving the Work, but 60 | excluding communication that is conspicuously marked or otherwise 61 | designated in writing by the copyright owner as "Not a Contribution." 62 | 63 | "Contributor" shall mean Licensor and any individual or Legal Entity 64 | on behalf of whom a Contribution has been received by Licensor and 65 | subsequently incorporated within the Work. 66 | 67 | 2. Grant of Copyright License. Subject to the terms and conditions of 68 | this License, each Contributor hereby grants to You a perpetual, 69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 70 | copyright license to reproduce, prepare Derivative Works of, 71 | publicly display, publicly perform, sublicense, and distribute the 72 | Work and such Derivative Works in Source or Object form. 73 | 74 | 3. Grant of Patent License. Subject to the terms and conditions of 75 | this License, each Contributor hereby grants to You a perpetual, 76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 77 | (except as stated in this section) patent license to make, have made, 78 | use, offer to sell, sell, import, and otherwise transfer the Work, 79 | where such license applies only to those patent claims licensable 80 | by such Contributor that are necessarily infringed by their 81 | Contribution(s) alone or by combination of their Contribution(s) 82 | with the Work to which such Contribution(s) was submitted. If You 83 | institute patent litigation against any entity (including a 84 | cross-claim or counterclaim in a lawsuit) alleging that the Work 85 | or a Contribution incorporated within the Work constitutes direct 86 | or contributory patent infringement, then any patent licenses 87 | granted to You under this License for that Work shall terminate 88 | as of the date such litigation is filed. 89 | 90 | 4. Redistribution. 
You may reproduce and distribute copies of the 91 | Work or Derivative Works thereof in any medium, with or without 92 | modifications, and in Source or Object form, provided that You 93 | meet the following conditions: 94 | 95 | (a) You must give any other recipients of the Work or 96 | Derivative Works a copy of this License; and 97 | 98 | (b) You must cause any modified files to carry prominent notices 99 | stating that You changed the files; and 100 | 101 | (c) You must retain, in the Source form of any Derivative Works 102 | that You distribute, all copyright, patent, trademark, and 103 | attribution notices from the Source form of the Work, 104 | excluding those notices that do not pertain to any part of 105 | the Derivative Works; and 106 | 107 | (d) If the Work includes a "NOTICE" text file as part of its 108 | distribution, then any Derivative Works that You distribute must 109 | include a readable copy of the attribution notices contained 110 | within such NOTICE file, excluding those notices that do not 111 | pertain to any part of the Derivative Works, in at least one 112 | of the following places: within a NOTICE text file distributed 113 | as part of the Derivative Works; within the Source form or 114 | documentation, if provided along with the Derivative Works; or, 115 | within a display generated by the Derivative Works, if and 116 | wherever such third-party notices normally appear. The contents 117 | of the NOTICE file are for informational purposes only and 118 | do not modify the License. You may add Your own attribution 119 | notices within Derivative Works that You distribute, alongside 120 | or as an addendum to the NOTICE text from the Work, provided 121 | that such additional attribution notices cannot be construed 122 | as modifying the License. 123 | 124 | You may add Your own copyright statement to Your modifications and 125 | may provide additional or different license terms and conditions 126 | for use, reproduction, or distribution of Your modifications, or 127 | for any such Derivative Works as a whole, provided Your use, 128 | reproduction, and distribution of the Work otherwise complies with 129 | the conditions stated in this License. 130 | 131 | 5. Submission of Contributions. Unless You explicitly state otherwise, 132 | any Contribution intentionally submitted for inclusion in the Work 133 | by You to the Licensor shall be under the terms and conditions of 134 | this License, without any additional terms or conditions. 135 | Notwithstanding the above, nothing herein shall supersede or modify 136 | the terms of any separate license agreement you may have executed 137 | with Licensor regarding such Contributions. 138 | 139 | 6. Trademarks. This License does not grant permission to use the trade 140 | names, trademarks, service marks, or product names of the Licensor, 141 | except as required for reasonable and customary use in describing the 142 | origin of the Work and reproducing the content of the NOTICE file. 143 | 144 | 7. Disclaimer of Warranty. Unless required by applicable law or 145 | agreed to in writing, Licensor provides the Work (and each 146 | Contributor provides its Contributions) on an "AS IS" BASIS, 147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 148 | implied, including, without limitation, any warranties or conditions 149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 150 | PARTICULAR PURPOSE. 
You are solely responsible for determining the
151 |       appropriateness of using or redistributing the Work and assume any
152 |       risks associated with Your exercise of permissions under this License.
153 | 
154 |    8. Limitation of Liability. In no event and under no legal theory,
155 |       whether in tort (including negligence), contract, or otherwise,
156 |       unless required by applicable law (such as deliberate and grossly
157 |       negligent acts) or agreed to in writing, shall any Contributor be
158 |       liable to You for damages, including any direct, indirect, special,
159 |       incidental, or consequential damages of any character arising as a
160 |       result of this License or out of the use or inability to use the
161 |       Work (including but not limited to damages for loss of goodwill,
162 |       work stoppage, computer failure or malfunction, or any and all
163 |       other commercial damages or losses), even if such Contributor
164 |       has been advised of the possibility of such damages.
165 | 
166 |    9. Accepting Warranty or Additional Liability. While redistributing
167 |       the Work or Derivative Works thereof, You may choose to offer,
168 |       and charge a fee for, acceptance of support, warranty, indemnity,
169 |       or other liability obligations and/or rights consistent with this
170 |       License. However, in accepting such obligations, You may act only
171 |       on Your own behalf and on Your sole responsibility, not on behalf
172 |       of any other Contributor, and only if You agree to indemnify,
173 |       defend, and hold each Contributor harmless for any liability
174 |       incurred by, or claims asserted against, such Contributor by reason
175 |       of your accepting any such warranty or additional liability.
176 | 
177 |    END OF TERMS AND CONDITIONS
--------------------------------------------------------------------------------
/NOTICE.txt:
--------------------------------------------------------------------------------
 1 | AWS Cost Optimization: EC2 Right Sizing
 2 | 
 3 | Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 4 | Licensed under the Apache License Version 2.0 (the "License"). You may not use this file except
 5 | in compliance with the License. A copy of the License is located at http://www.apache.org/licenses/
 6 | or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS,
 7 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the
 8 | specific language governing permissions and limitations under the License.
 9 | 
10 | **********************
11 | THIRD PARTY COMPONENTS
12 | **********************
13 | This software includes third party software subject to the following copyrights:
14 | 
15 | AWS SDK under the Apache License Version 2.0
16 | Requests under the Apache License Version 2.0
17 | aws-psycopg2 under the MIT License
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # AWS Cost Optimization: EC2 Right Sizing
2 | Source code for the AWS solution "Cost Optimization: EC2 Right Sizing". Please see the main solution page for [Cost Optimization: EC2 Right Sizing](https://aws.amazon.com/answers/account-management/cost-optimization-ec2-right-sizing/).
3 | 
4 | ## CloudFormation template
5 | /deployment
6 | - cost-optimization-ec2-right-sizing.template
7 | 
8 | You will need to replace %%BUCKET_NAME%% and %%VERSION%% in the template to point to the bucket where you put your own copies of the Python source code below.
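For example, a minimal way to substitute those two placeholders by hand (the bucket base name and version below are illustrative values, not defaults; `deployment/build-s3-dist.sh` performs the same substitutions as part of a full build):

```sh
# Replace the placeholders in the template with your own bucket base name
# and version tag (values here are examples only).
sed -i \
  -e 's/%%BUCKET_NAME%%/my-solutions-bucket/g' \
  -e 's/%%VERSION%%/v2.4.1/g' \
  deployment/cost-optimization-ec2-right-sizing.template
```

Note that the template also carries a %%SOLUTION_NAME%% placeholder, which the build script fills in from its second argument.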
 9 | 
10 | ## Python source code
11 | /source
12 | - callgcw.py
13 | - deleteandterminate.py
14 | - getcloudwatchmetrics.py
15 | - run-rightsizing-redshift.py
16 | 
17 | ## Troubleshooting
18 | Log files are exported to the CloudWatch Logs log group cost-optimization-ec2-right-sizing, log streams:
19 | {instance_id}/cfn-init.log
20 | {instance_id}/run-rightsizing-redshift.log
21 | {instance_id}/deleteandterminate.log
22 | 
23 | Log files are also located locally on the solution-created EC2 instance (if you chose not to terminate resources):
24 | /var/log/cfn-init.log
25 | /tmp/run-rightsizing-redshift.log
26 | /tmp/deleteandterminate.log
27 | 
28 | 
29 | 
30 | ***
31 | 
32 | Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
33 | 
34 | Licensed under the Apache License Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at
35 | 
36 | http://www.apache.org/licenses/
37 | 
38 | or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions and limitations under the License.
--------------------------------------------------------------------------------
/deployment/build-s3-dist.sh:
--------------------------------------------------------------------------------
 1 | #!/bin/bash
 2 | #
 3 | # This assumes all of the OS-level configuration has been completed and the git repo has already been cloned
 4 | #
 5 | # This script should be run from the repo's deployment directory
 6 | # cd deployment
 7 | # ./build-s3-dist.sh source-bucket-base-name trademarked-solution-name version-code
 8 | #
 9 | # Parameters:
10 | #  - source-bucket-base-name: Name for the S3 bucket location where the template will source the Lambda
11 | #    code from. The template will append '-[region_name]' to this bucket name.
12 | #    For example: ./build-s3-dist.sh solutions my-solution v1.0.0
13 | #    The template will then expect the source code to be located in the solutions-[region_name] bucket
14 | #
15 | #  - trademarked-solution-name: name of the solution for consistency
16 | #
17 | #  - version-code: version of the package
18 | 
19 | # Check to see if input has been provided:
20 | if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ]; then
21 |     echo "Please provide the base source bucket name, trademark approved solution name and version where the lambda code will eventually reside."
22 | echo "For example: ./build-s3-dist.sh solutions trademarked-solution-name v1.0.0" 23 | exit 1 24 | fi 25 | 26 | do_cmd () { 27 | echo "------ EXEC $*" 28 | $* 29 | } 30 | 31 | # On entry it is expected that we are in the deployment folder of the build 32 | 33 | # Get reference for all important folders 34 | template_dir="$PWD" 35 | template_dist_dir="$template_dir/global-s3-assets" 36 | build_dist_dir="$template_dir/regional-s3-assets" 37 | source_dir="../source" 38 | 39 | echo "------------------------------------------------------------------------------" 40 | echo "[Init] Clean old dist folders" 41 | echo "------------------------------------------------------------------------------" 42 | do_cmd rm -rf $template_dist_dir 43 | do_cmd mkdir -p $template_dist_dir 44 | do_cmd rm -rf $build_dist_dir 45 | do_cmd mkdir -p $build_dist_dir 46 | 47 | echo "------------------------------------------------------------------------------" 48 | echo "[Packing] Templates" 49 | echo "------------------------------------------------------------------------------" 50 | do_cmd cp -R $template_dir/*.template $template_dist_dir/ 51 | 52 | echo "Updating code source bucket in template with $1" 53 | replace="s/%%BUCKET_NAME%%/$1/g" 54 | do_cmd sed -i -e $replace $template_dist_dir/*.template 55 | 56 | replace="s/%%SOLUTION_NAME%%/$2/g" 57 | do_cmd sed -i -e $replace $template_dist_dir/*.template 58 | 59 | replace="s/%%VERSION%%/$3/g" 60 | do_cmd sed -i -e $replace $template_dist_dir/*.template 61 | 62 | replace="s/%%TEMPLATE_BUCKET_NAME%%/$4/g" 63 | do_cmd sed -i -e $replace $template_dist_dir/*.template 64 | 65 | echo "------------------------------------------------------------------------------" 66 | echo "[Packing] Scripts" 67 | echo "------------------------------------------------------------------------------" 68 | do_cmd cp $template_dir/../source/scripts/*.py $build_dist_dir 69 | 70 | echo "------------------------------------------------------------------------------" 71 | echo "[Packing] Solution Helper" 72 | echo "------------------------------------------------------------------------------" 73 | echo "------ Building local-solution-helper ZIP file" 74 | cd $build_dist_dir 75 | do_cmd virtualenv env 76 | do_cmd source env/bin/activate 77 | do_cmd pip install $template_dir/../source/local-solution-helper/. --target=$template_dir/dist/env/lib/python3.7/site-packages/ --upgrade --upgrade-strategy only-if-needed 78 | # do_cmd pip install requests --target=$template_dir/dist/env/lib/python3.7/site-packages/ --upgrade --upgrade-strategy only-if-needed 79 | 80 | # fail build if pip install fails 81 | instl_status=$? 82 | if [ ${instl_status} != '0' ]; then 83 | echo "------ FAILED pip install solution helper status: ${instl_status}" 84 | exit ${instl_status} 85 | fi 86 | echo "------ Solution Helper package built ------" 87 | cd $template_dir/dist/env/lib/python3.7/site-packages/ 88 | 89 | do_cmd zip -r9 $build_dist_dir/local-solution-helper.zip . 
90 | 
91 | echo "Clean up build material in $VIRTUAL_ENV"
92 | do_cmd rm -rf $VIRTUAL_ENV
93 | 
94 | echo "------------------------------------------------------------------------------"
95 | echo "Completed building distribution"
96 | echo "------------------------------------------------------------------------------"
97 | 
--------------------------------------------------------------------------------
/deployment/cost-optimization-ec2-right-sizing.template:
--------------------------------------------------------------------------------
 1 | # %%SOLUTION_NAME%%
 2 | #
 3 | # template for %%SOLUTION_NAME%%
 4 | # **DO NOT DELETE**
 5 | #
 6 | # author: aws-solutions-builder@
 7 | AWSTemplateFormatVersion: 2010-09-09
 8 | 
 9 | Description: >-
10 |   (SO0011) - The AWS CloudFormation Template for deployment of the %%SOLUTION_NAME%%.
11 |   Version %%VERSION%% - Master Template
12 |   **WARNING** This template creates AWS resources. You will be billed for the
13 |   AWS resources used if you create a stack from this template.
14 | 
15 | Parameters:
16 |   KeyName:
17 |     Description: Existing Amazon EC2 key pair for SSH access to the instances
18 |     Type: 'AWS::EC2::KeyPair::KeyName'
19 |     ConstraintDescription: must be the name of an existing EC2 KeyPair.
20 |   InboundTraffic:
21 |     Description: IP address range that can access the EC2 Instance
22 |     Type: String
23 |     MinLength: '9'
24 |     MaxLength: '18'
25 |     AllowedPattern: '(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})/(\d{1,2})'
26 |     ConstraintDescription: must be a valid CIDR range of the form x.x.x.x/x.
27 |   TerminateResources:
28 |     Description: >-
29 |       Choose Yes to automatically delete the Amazon Redshift cluster and
30 |       terminate the EC2 instance after results are returned
31 |     Type: String
32 |     Default: 'Yes'
33 |     AllowedValues:
34 |       - 'Yes'
35 |       - 'No'
36 |   SendAnonymousData:
37 |     Description: Send anonymous data to AWS
38 |     Type: String
39 |     Default: 'Yes'
40 |     AllowedValues:
41 |       - 'Yes'
42 |       - 'No'
43 |   LatestAMIId:
44 |     Description: >-
45 |       Automatically selects the latest Amazon Linux AMI. Do not change this
46 |       value
47 |     Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
48 |     Default: /aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-gp2
49 | 
50 | Mappings:
51 |   Function:
52 |     SolutionHelper:
53 |       S3Bucket: '%%BUCKET_NAME%%'
54 |       S3Key: '%%SOLUTION_NAME%%/%%VERSION%%/local-solution-helper.zip'
55 |       Name: 'local-solution-helper'
56 |       Handler: 'local_solution_helper/solution-helper.lambda_handler'
57 |       Description: 'Helper: This function is invoked for custom resources.'
58 | Runtime: 'python3.7' 59 | Timeout: 60 60 | MemorySize: 128 61 | 62 | Conditions: 63 | Terminate: !Equals 64 | - !Ref TerminateResources 65 | - 'Yes' 66 | SendData: !Equals 67 | - !Ref SendAnonymousData 68 | - 'Yes' 69 | 70 | Metadata: 71 | 'AWS::CloudFormation::Interface': 72 | ParameterGroups: 73 | - Label: 74 | default: Solution Configuration 75 | Parameters: 76 | - InboundTraffic 77 | - KeyName 78 | - TerminateResources 79 | - LatestAMIId 80 | - Label: 81 | default: Anonymous Metrics Request 82 | Parameters: 83 | - SendAnonymousData 84 | ParameterLabels: 85 | InboundTraffic: 86 | default: Access CIDR Block 87 | KeyName: 88 | default: SSH Key 89 | TerminateResources: 90 | default: Terminate Resources 91 | SendAnonymousData: 92 | default: Send Anonymous Usage Data 93 | LatestAMIId: 94 | default: Latest Amazon Linux HVM AMI 95 | 96 | Resources: 97 | 98 | S3Bucket: 99 | Type: 'AWS::S3::Bucket' 100 | # DeletionPolicy: 'Retain' 101 | Metadata: 102 | cfn_nag: 103 | rules_to_suppress: 104 | - id: W35 105 | reason: Bucket is used as temporary storage for output. Access logging not required. 106 | Properties: 107 | BucketEncryption: 108 | ServerSideEncryptionConfiguration: 109 | - ServerSideEncryptionByDefault: 110 | SSEAlgorithm: AES256 111 | Tags: 112 | - Key: Name 113 | Value: 'Cost Optimization: EC2 Right Sizing - S3 Bucket' 114 | PublicAccessBlockConfiguration: 115 | BlockPublicAcls: True 116 | BlockPublicPolicy: True 117 | IgnorePublicAcls: True 118 | RestrictPublicBuckets: True 119 | 120 | S3BucketPolicy: 121 | Type: 'AWS::S3::BucketPolicy' 122 | Properties: 123 | Bucket: !Ref S3Bucket 124 | PolicyDocument: 125 | Version: 2012-10-17 126 | Statement: 127 | - Action: 128 | - 's3:GetObject' 129 | - 's3:PutObject' 130 | - 's3:PutObjectAcl' 131 | Effect: Allow 132 | Resource: !Join 133 | - '' 134 | - - 'arn:aws:s3:::' 135 | - !Ref S3Bucket 136 | - /* 137 | Principal: 138 | AWS: 139 | - !Join 140 | - '' 141 | - - 'arn:aws:iam::' 142 | - !Ref 'AWS::AccountId' 143 | - ':root' 144 | - Effect: Allow 145 | Action: 146 | - 's3:GetBucketAcl' 147 | - 's3:GetBucketPolicy' 148 | Principal: 149 | AWS: '386209384616' 150 | Resource: !Join 151 | - '' 152 | - - 'arn:aws:s3:::' 153 | - !Ref S3Bucket 154 | - Effect: Allow 155 | Action: 's3:PutObject' 156 | Principal: 157 | AWS: '386209384616' 158 | Resource: !Join 159 | - '' 160 | - - 'arn:aws:s3:::' 161 | - !Ref S3Bucket 162 | - /* 163 | S3Role: 164 | Type: 'AWS::IAM::Role' 165 | Properties: 166 | AssumeRolePolicyDocument: 167 | Version: 2012-10-17 168 | Statement: 169 | - Effect: Allow 170 | Principal: 171 | Service: 172 | - ec2.amazonaws.com 173 | Action: 174 | - 'sts:AssumeRole' 175 | Path: / 176 | 177 | EC2InstanceRole: 178 | Type: 'AWS::IAM::Role' 179 | Metadata: 180 | cfn_nag: 181 | rules_to_suppress: 182 | - id: W11 183 | reason: Allows readonly access to CloudWatch metrics. Metrics do not have a more specific ARN. 
184 | Properties: 185 | AssumeRolePolicyDocument: 186 | Version: 2012-10-17 187 | Statement: 188 | - Effect: Allow 189 | Principal: 190 | Service: 191 | - ec2.amazonaws.com 192 | Action: 193 | - 'sts:AssumeRole' 194 | Path: / 195 | Policies: 196 | - PolicyName: EC2InstanceRole 197 | PolicyDocument: 198 | Version: 2012-10-17 199 | Statement: 200 | - Sid: LambdaLogging 201 | Effect: Allow 202 | Action: 203 | - 'logs:CreateLogGroup' 204 | - 'logs:CreateLogStream' 205 | - 'logs:PutLogEvents' 206 | Resource: !Join 207 | - '' 208 | - - 'arn:aws:logs:' 209 | - !Ref 'AWS::Region' 210 | - ':' 211 | - !Ref 'AWS::AccountId' 212 | - ':log-group:cost-optimization-ec2-right-sizing:*' 213 | - Sid: S3BucketAccess 214 | Effect: Allow 215 | Action: 216 | - 's3:PutObject' 217 | - 's3:PutObjectAcl' 218 | - 's3:GetObject' 219 | Resource: !Join 220 | - '' 221 | - - 'arn:aws:s3:::' 222 | - !Ref S3Bucket 223 | - /* 224 | - Sid: S3ListBucket 225 | Effect: Allow 226 | Action: 227 | - 's3:ListBucket' 228 | Resource: !Join 229 | - '' 230 | - - 'arn:aws:s3:::' 231 | - !Ref S3Bucket 232 | - Sid: ReadOnly 233 | Effect: Allow 234 | Action: 235 | - 'ec2:Describe*' 236 | - 'cloudwatch:GetMetricStatistics' 237 | Resource: '*' 238 | - Sid: StopEC2Instances 239 | Effect: Allow 240 | Action: 241 | - 'ec2:TerminateInstances' 242 | Resource: 'arn:aws:ec2:*:*:instance/*' 243 | Condition: 244 | StringEquals: 245 | 'ec2:ResourceTag/Name': 'Cost Optimization: EC2 Right Sizing - Instance' 246 | - Sid: StopRedShift 247 | Effect: Allow 248 | Action: 249 | - 'redshift:DeleteCluster' 250 | Resource: !Join 251 | - '' 252 | - - 'arn:aws:redshift:*:*:cluster:' 253 | - !Ref RedshiftCluster 254 | 255 | EC2InstanceInstanceProfile: 256 | Type: 'AWS::IAM::InstanceProfile' 257 | Properties: 258 | Path: / 259 | Roles: 260 | - !Ref EC2InstanceRole 261 | 262 | EC2InstanceSecurityGroup: 263 | Type: 'AWS::EC2::SecurityGroup' 264 | Metadata: 265 | cfn_nag: 266 | rules_to_suppress: 267 | - id: W5 268 | reason: Egress limits to protocols for DNS, Amazon APIs (80/443), and to RS cluster 269 | Properties: 270 | GroupDescription: 'Enable SSH access from the customer-provided network address' 271 | VpcId: !Ref VPC 272 | SecurityGroupIngress: 273 | - Description: 'Enable SSH access from the customer-provided network address' 274 | IpProtocol: tcp 275 | FromPort: 22 276 | ToPort: 22 277 | CidrIp: !Ref InboundTraffic 278 | SecurityGroupEgress: 279 | - Description: 'Allow egress for dns - tcp' 280 | IpProtocol: tcp 281 | FromPort: 53 282 | ToPort: 53 283 | CidrIp: 0.0.0.0/0 284 | - Description: 'Allow egress for dns - udp' 285 | IpProtocol: udp 286 | FromPort: 53 287 | ToPort: 53 288 | CidrIp: 0.0.0.0/0 289 | - Description: 'Allow egress for access to AWS services - http' 290 | IpProtocol: tcp 291 | FromPort: 80 292 | ToPort: 80 293 | CidrIp: 0.0.0.0/0 294 | - Description: 'Allow egress for access to AWS services - https' 295 | IpProtocol: tcp 296 | FromPort: 443 297 | ToPort: 443 298 | CidrIp: 0.0.0.0/0 299 | - Description: 'Allow local egress' 300 | IpProtocol: tcp 301 | FromPort: 5439 302 | ToPort: 5439 303 | CidrIp: 10.0.0.0/16 304 | Tags: 305 | - Key: Name 306 | Value: 'Cost Optimization: EC2 Right Sizing - Security Group' 307 | EC2Instance: 308 | Type: 'AWS::EC2::Instance' 309 | DependsOn: 310 | - PublicSubnetRouteTableAssociation 311 | Metadata: 312 | 'AWS::CloudFormation::Init': 313 | configSets: 314 | rightsizing_install: 315 | - install_python3 316 | - install_cfn 317 | - install_rightsizing 318 | - run_rightsizing 319 | install_python3: 320 | 
packages: 321 | yum: 322 | python36: [] 323 | commands: 324 | 0-boto3install: 325 | command: >- 326 | 'yum update -y; 327 | alternatives --set python /usr/bin/python3.6; 328 | pip install --upgrade pip; 329 | /usr/local/bin/pip install boto3 boto.utils; 330 | /usr/local/bin/pip install requests jmespath aws-psycopg2 ConfigParser' 331 | cwd: '~' 332 | runas: root 333 | install_cfn: 334 | packages: 335 | yum: 336 | awslogs: [] 337 | files: 338 | /etc/awslogs/awslogs.conf: 339 | content: !Join 340 | - '' 341 | - - | 342 | [general] 343 | - | 344 | state_file= /var/awslogs/agent-state 345 | - | 346 | [/var/log/cfn-init.log] 347 | - | 348 | file = /var/log/cfn-init.log 349 | - | 350 | log_group_name = cost-optimization-ec2-right-sizing 351 | - | 352 | log_stream_name = {instance_id}/cfn-init.log 353 | - | 354 | datetime_format = %d/%b/%Y:%H:%M:%S 355 | - | 356 | [/tmp/run-rightsizing-redshift.log] 357 | - | 358 | file = /tmp/run-rightsizing-redshift.log 359 | - | 360 | log_group_name = cost-optimization-ec2-right-sizing 361 | - | 362 | log_stream_name = {instance_id}/run-rightsizing-redshift.log 363 | - | 364 | datetime_format = %d/%b/%Y:%H:%M:%S 365 | - | 366 | [/tmp/] 367 | - | 368 | file = /tmp/deleteandterminate.log 369 | - | 370 | log_group_name = cost-optimization-ec2-right-sizing 371 | - | 372 | log_stream_name = {instance_id}/deleteandterminate.log 373 | - | 374 | datetime_format = %d/%b/%Y:%H:%M:%S 375 | - | 376 | [/var/log/messages] 377 | - | 378 | file = /var/log/messages 379 | - | 380 | log_group_name = cost-optimization-ec2-right-sizing 381 | - | 382 | log_stream_name = {instance_id}/messages 383 | - | 384 | datetime_format = %d/%b/%Y:%H:%M:%S 385 | mode: '000644' 386 | owner: root 387 | group: root 388 | /etc/awslogs/awscli.conf: 389 | content: !Join 390 | - '' 391 | - - | 392 | [plugins] 393 | - | 394 | cwlogs = cwlogs 395 | - | 396 | [default] 397 | - 'region = ' 398 | - !Ref 'AWS::Region' 399 | mode: '000644' 400 | owner: root 401 | group: root 402 | /var/awslogs/agent-state: 403 | content: !Join 404 | - '' 405 | - - ' ' 406 | mode: '000644' 407 | owner: root 408 | group: root 409 | /etc/cfn/cfn-hup.conf: 410 | content: !Join 411 | - '' 412 | - - | 413 | [main] 414 | - stack= 415 | - !Ref 'AWS::StackId' 416 | - |+ 417 | 418 | - region= 419 | - !Ref 'AWS::Region' 420 | - |+ 421 | 422 | mode: '000400' 423 | owner: root 424 | group: root 425 | /etc/cfn/hooks.d/cfn-auto-reloader.conf: 426 | content: !Join 427 | - '' 428 | - - | 429 | [cfn-auto-reloader-hook] 430 | - | 431 | triggers=post.update 432 | - > 433 | path=Resources.EC2Instance.Metadata.AWS::CloudFormation::Init 434 | - 'action=/opt/aws/bin/cfn-init -v ' 435 | - ' --stack ' 436 | - !Ref 'AWS::StackName' 437 | - ' --resource EC2Instance ' 438 | - ' --configsets rightsizing_install ' 439 | - ' --region ' 440 | - !Ref 'AWS::Region' 441 | - |+ 442 | 443 | mode: '000400' 444 | owner: root 445 | group: root 446 | services: 447 | sysvinit: 448 | awslogs: 449 | enabled: 'true' 450 | ensureRunning: 'true' 451 | cfn-hup: 452 | enabled: 'true' 453 | ensureRunning: 'true' 454 | files: 455 | - /etc/cfn/cfn-hup.conf 456 | - /etc/cfn/hooks.d/cfn-auto-reloader.conf 457 | install_rightsizing: 458 | files: 459 | /tmp/perform-rightsizing.sh: 460 | content: !Join 461 | - '' 462 | - - | 463 | #!/bin/bash 464 | - | 465 | cd /tmp 466 | - 'wget https://%%BUCKET_NAME%%-' 467 | - !Ref AWS::Region 468 | - '.s3.amazonaws.com/%%SOLUTION_NAME%%/%%VERSION%%/' 469 | - | 470 | 'run-rightsizing-redshift.py' 471 | - 'wget https://%%BUCKET_NAME%%-' 472 | - 
!Ref AWS::Region 473 | - '.s3.amazonaws.com/%%SOLUTION_NAME%%/%%VERSION%%/' 474 | - | 475 | 'callgcw.py' 476 | - 'wget https://%%BUCKET_NAME%%-' 477 | - !Ref AWS::Region 478 | - '.s3.amazonaws.com/%%SOLUTION_NAME%%/%%VERSION%%/' 479 | - | 480 | 'getcloudwatchmetrics.py' 481 | - 'wget https://%%BUCKET_NAME%%-' 482 | - !Ref AWS::Region 483 | - '.s3.amazonaws.com/%%SOLUTION_NAME%%/%%VERSION%%/' 484 | - | 485 | 'deleteandterminate.py' 486 | - sed -i 's/cfn_s3_bucket_name/ 487 | - !GetAtt 488 | - S3Bucket 489 | - DomainName 490 | - | 491 | /g' /tmp/run-rightsizing-redshift.py 492 | - > 493 | sed -i "s/.s3.amazonaws.com//" 494 | /tmp/run-rightsizing-redshift.py 495 | - sed -i 's/cfn_account/ 496 | - !Ref 'AWS::AccountId' 497 | - | 498 | /g' /tmp/run-rightsizing-redshift.py 499 | - > 500 | sed -i 's/cfn_db_user/redshiftuser/g' 501 | /tmp/run-rightsizing-redshift.py 502 | - sed -i 's/cfn_db_pass/ 503 | - !GetAtt 504 | - CreateRandomPassword 505 | - Password 506 | - | 507 | /g' /tmp/run-rightsizing-redshift.py 508 | - > 509 | sed -i 's/cfn_db_name/rightsizing/g' 510 | /tmp/run-rightsizing-redshift.py 511 | - sed -i 's/cfn_db_host/ 512 | - !GetAtt 513 | - RedshiftCluster 514 | - Endpoint.Address 515 | - | 516 | /g' /tmp/run-rightsizing-redshift.py 517 | - sed -i 's/cfn_db_port/ 518 | - !GetAtt 519 | - RedshiftCluster 520 | - Endpoint.Port 521 | - | 522 | /g' /tmp/run-rightsizing-redshift.py 523 | - sed -i 's/cfn_region/ 524 | - !Ref 'AWS::Region' 525 | - | 526 | /g' /tmp/run-rightsizing-redshift.py 527 | - sed -i 's/cfn_region/ 528 | - !Ref 'AWS::Region' 529 | - | 530 | /g' /tmp/callgcw.py 531 | - sed -i 's/cfn_region/ 532 | - !Ref 'AWS::Region' 533 | - | 534 | /g' /tmp/deleteandterminate.py 535 | - sed -i 's/cfn_db_clusteridentifier/ 536 | - !Ref RedshiftCluster 537 | - | 538 | /g' /tmp/deleteandterminate.py 539 | - > 540 | python /tmp/run-rightsizing-redshift.py 541 | &>/tmp/run-rightsizing-redshift.log 542 | - > 543 | sleep 30s 544 | - !If 545 | - Terminate 546 | - > 547 | python /tmp/deleteandterminate.py 548 | &>/tmp/deleteandterminate.log 549 | - !Ref 'AWS::NoValue' 550 | - |+ 551 | 552 | mode: '000700' 553 | owner: root 554 | group: root 555 | run_rightsizing: 556 | commands: 557 | 0-perform_rightsizing: 558 | command: /tmp/perform-rightsizing.sh 559 | cwd: /tmp 560 | Properties: 561 | ImageId: !Ref LatestAMIId 562 | InstanceType: m5.large 563 | IamInstanceProfile: !Ref EC2InstanceInstanceProfile 564 | NetworkInterfaces: 565 | - AssociatePublicIpAddress: True 566 | DeviceIndex: '0' 567 | GroupSet: 568 | - !Ref EC2InstanceSecurityGroup 569 | SubnetId: !Ref PublicSubnet 570 | KeyName: !Ref KeyName 571 | Tags: 572 | - Key: Name 573 | Value: 'Cost Optimization: EC2 Right Sizing - Instance' 574 | UserData: !Base64 575 | 'Fn::Join': 576 | - '' 577 | - - | 578 | #!/bin/bash -xe 579 | - | 580 | yum update -y aws-cfn-bootstrap 581 | - '/opt/aws/bin/cfn-init -v ' 582 | - ' --stack ' 583 | - !Ref 'AWS::StackName' 584 | - ' --resource EC2Instance ' 585 | - ' --configsets rightsizing_install ' 586 | - ' --region ' 587 | - !Ref 'AWS::Region' 588 | - |+ 589 | 590 | - '/opt/aws/bin/cfn-signal -e $? 
' 591 | - ' --stack ' 592 | - !Ref 'AWS::StackName' 593 | - ' --resource EC2Instance ' 594 | - ' --region ' 595 | - !Ref 'AWS::Region' 596 | - |+ 597 | 598 | CreationPolicy: 599 | ResourceSignal: 600 | Timeout: PT8H 601 | 602 | RedshiftCluster: 603 | Type: 'AWS::Redshift::Cluster' 604 | DependsOn: AttachGateway 605 | Properties: 606 | ClusterType: multi-node 607 | Tags: 608 | - Key: Name 609 | Value: 'Cost Optimization: EC2 Right Sizing - RS Cluster' 610 | AvailabilityZone: !Select 611 | - '0' 612 | - !GetAZs '' 613 | NumberOfNodes: 2 614 | NodeType: dc2.large 615 | DBName: rightsizing 616 | Encrypted: True 617 | MasterUsername: redshiftuser 618 | MasterUserPassword: !GetAtt 619 | - CreateRandomPassword 620 | - Password 621 | ClusterParameterGroupName: !Ref RedshiftClusterParameterGroup 622 | VpcSecurityGroupIds: 623 | - !Ref RedshiftSecurityGroup 624 | ClusterSubnetGroupName: !Ref RedshiftClusterSubnetGroup 625 | PubliclyAccessible: False 626 | Port: 5439 627 | 628 | RedshiftClusterParameterGroup: 629 | Type: 'AWS::Redshift::ClusterParameterGroup' 630 | Properties: 631 | Description: Cluster parameter group 632 | ParameterGroupFamily: redshift-1.0 633 | Parameters: 634 | - ParameterName: enable_user_activity_logging 635 | ParameterValue: 'true' 636 | 637 | RedshiftClusterSubnetGroup: 638 | Type: 'AWS::Redshift::ClusterSubnetGroup' 639 | Properties: 640 | Description: Cluster subnet group 641 | SubnetIds: 642 | - !Ref PublicSubnet 643 | Tags: 644 | - Key: Name 645 | Value: 'Cost Optimization: EC2 Right Sizing - RS Cluster Subnet Group' 646 | VPC: 647 | Type: 'AWS::EC2::VPC' 648 | Properties: 649 | CidrBlock: 10.0.0.0/16 650 | Tags: 651 | - Key: Name 652 | Value: 'Cost Optimization: EC2 Right Sizing - VPC' 653 | PublicSubnet: 654 | Type: 'AWS::EC2::Subnet' 655 | Properties: 656 | CidrBlock: 10.0.0.0/24 657 | AvailabilityZone: !Select 658 | - '0' 659 | - !GetAZs '' 660 | VpcId: !Ref VPC 661 | Tags: 662 | - Key: Name 663 | Value: 'Cost Optimization: EC2 Right Sizing - Subnet' 664 | 665 | RedshiftSecurityGroup: 666 | Type: 'AWS::EC2::SecurityGroup' 667 | Properties: 668 | GroupDescription: 'Allow Redshift ingress' 669 | SecurityGroupIngress: 670 | - Description: 'Allow Redshift ingress' 671 | SourceSecurityGroupId: !Ref EC2InstanceSecurityGroup 672 | FromPort: 5439 673 | ToPort: 5439 674 | IpProtocol: tcp 675 | SecurityGroupEgress: 676 | - Description: 'Disallow egress' 677 | IpProtocol: tcp 678 | FromPort: 0 679 | ToPort: 0 680 | CidrIp: 10.0.0.0/16 681 | Tags: 682 | - Key: Name 683 | Value: 'Cost Optimization: EC2 Right Sizing - Security Group' 684 | VpcId: !Ref VPC 685 | 686 | myInternetGateway: 687 | Type: 'AWS::EC2::InternetGateway' 688 | Properties: 689 | Tags: 690 | - Key: Name 691 | Value: 'Cost Optimization: EC2 Right Sizing - Internet Gateway' 692 | 693 | AttachGateway: 694 | Type: 'AWS::EC2::VPCGatewayAttachment' 695 | Properties: 696 | VpcId: !Ref VPC 697 | InternetGatewayId: !Ref myInternetGateway 698 | 699 | PublicRouteTable: 700 | Type: 'AWS::EC2::RouteTable' 701 | Properties: 702 | VpcId: !Ref VPC 703 | Tags: 704 | - Key: Name 705 | Value: 'Cost Optimization: EC2 Right Sizing - Route Table' 706 | PublicRoute: 707 | Type: 'AWS::EC2::Route' 708 | DependsOn: AttachGateway 709 | Properties: 710 | RouteTableId: !Ref PublicRouteTable 711 | DestinationCidrBlock: 0.0.0.0/0 712 | GatewayId: !Ref myInternetGateway 713 | 714 | PublicSubnetRouteTableAssociation: 715 | Type: 'AWS::EC2::SubnetRouteTableAssociation' 716 | Properties: 717 | SubnetId: !Ref PublicSubnet 718 | RouteTableId: !Ref 
PublicRouteTable 719 | 720 | SolutionHelperRole: 721 | Type: 'AWS::IAM::Role' 722 | Properties: 723 | AssumeRolePolicyDocument: 724 | Version: 2012-10-17 725 | Statement: 726 | - Effect: Allow 727 | Principal: 728 | Service: lambda.amazonaws.com 729 | Action: 'sts:AssumeRole' 730 | Path: / 731 | Policies: 732 | - PolicyName: Solution_Helper_Permissions 733 | PolicyDocument: 734 | Version: 2012-10-17 735 | Statement: 736 | - Effect: Allow 737 | Action: 738 | - 'logs:CreateLogGroup' 739 | - 'logs:CreateLogStream' 740 | - 'logs:PutLogEvents' 741 | Resource: !Join 742 | - '' 743 | - - 'arn:aws:logs:' 744 | - !Ref 'AWS::Region' 745 | - ':' 746 | - !Ref 'AWS::AccountId' 747 | - ':log-group:/aws/lambda/*' 748 | 749 | SolutionHelper: 750 | Type: 'AWS::Lambda::Function' 751 | DependsOn: SolutionHelperRole 752 | Properties: 753 | FunctionName: !Join 754 | - '-' 755 | - - !Ref 'AWS::StackName' 756 | - !FindInMap 757 | - Function 758 | - SolutionHelper 759 | - Name 760 | Handler: !FindInMap 761 | - Function 762 | - SolutionHelper 763 | - Handler 764 | Role: !GetAtt 765 | - SolutionHelperRole 766 | - Arn 767 | Description: !FindInMap 768 | - Function 769 | - SolutionHelper 770 | - Description 771 | Code: 772 | S3Bucket: !Join 773 | - '' 774 | - - !FindInMap 775 | - Function 776 | - SolutionHelper 777 | - S3Bucket 778 | - '-' 779 | - !Ref 'AWS::Region' 780 | S3Key: !FindInMap 781 | - Function 782 | - SolutionHelper 783 | - S3Key 784 | Runtime: !FindInMap 785 | - Function 786 | - SolutionHelper 787 | - Runtime 788 | MemorySize: !FindInMap 789 | - Function 790 | - SolutionHelper 791 | - MemorySize 792 | Timeout: !FindInMap 793 | - Function 794 | - SolutionHelper 795 | - Timeout 796 | 797 | CreateRandomPassword: 798 | Type: 'Custom::LoadLambda' 799 | Properties: 800 | ServiceToken: !GetAtt 801 | - SolutionHelper 802 | - Arn 803 | Region: !Ref 'AWS::Region' 804 | CreateRandomPassword: '16' 805 | RandomPasswordSpecialCharacters: 'False' 806 | 807 | CreateUniqueID: 808 | Type: 'Custom::LoadLambda' 809 | Properties: 810 | ServiceToken: !GetAtt 811 | - SolutionHelper 812 | - Arn 813 | Region: !Ref 'AWS::Region' 814 | CreateUniqueID: 'true' 815 | 816 | SendingAnonymousData: 817 | Type: 'Custom::LoadLambda' 818 | Condition: SendData 819 | Properties: 820 | ServiceToken: !GetAtt 821 | - SolutionHelper 822 | - Arn 823 | SendAnonymousData: !Join 824 | - '' 825 | - - '{ ''Solution'' : ''' 826 | - SO00011 827 | - ''', ' 828 | - '''UUID'' : ''' 829 | - !GetAtt 830 | - CreateUniqueID 831 | - UUID 832 | - ''', ' 833 | - '''Data'': {' 834 | - '''TerminateResources'': ''' 835 | - !Ref TerminateResources 836 | - '''' 837 | - '}' 838 | - '}' 839 | 840 | Outputs: 841 | ClusterEndpoint: 842 | Description: Cluster endpoint 843 | Value: !Join 844 | - ':' 845 | - - !GetAtt 846 | - RedshiftCluster 847 | - Endpoint.Address 848 | - !GetAtt 849 | - RedshiftCluster 850 | - Endpoint.Port 851 | ClusterName: 852 | Description: Name of cluster 853 | Value: !Ref RedshiftCluster 854 | BucketName: 855 | Description: Bucket for storing Detailed Billing Records 856 | Value: !Ref S3Bucket 857 | UUID: 858 | Description: Newly created random anonymous UUID. 
859 | Value: !GetAtt 860 | - CreateUniqueID 861 | - UUID 862 | -------------------------------------------------------------------------------- /deployment/run-unit-tests.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # This script should be run from the repo's deployment directory 4 | # cd deployment 5 | # ./run-unit-tests.sh 6 | 7 | # Run unit tests 8 | ORIGPWD=`pwd` 9 | echo "Running unit tests" 10 | echo "cd ../source" 11 | cd ../source 12 | echo "No unit tests to run, so sad ..." 13 | echo "Completed unit tests" 14 | # Return to where we came from 15 | cd $ORIGPWD 16 | -------------------------------------------------------------------------------- /source/local-solution-helper/__init__.py: -------------------------------------------------------------------------------- 1 | from pkg_resources import get_distribution 2 | 3 | __version__ = get_distribution('local-solution-helper').version 4 | __release_date__ = "Oct-2019" 5 | -------------------------------------------------------------------------------- /source/local-solution-helper/pycfn_custom_resource/__init__.py: -------------------------------------------------------------------------------- 1 | __all__ = ["lambda_backed"] 2 | 3 | __version__ = ["0.2"] 4 | -------------------------------------------------------------------------------- /source/local-solution-helper/pycfn_custom_resource/lambda_backed.py: -------------------------------------------------------------------------------- 1 | ###################################################################################################################### 2 | # Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. # 3 | # # 4 | # Licensed under the Apache License Version 2.0 (the "License"). You may not use this file except in compliance # 5 | # with the License. A copy of the License is located at # 6 | # # 7 | # http://www.apache.org/licenses/ # 8 | # # 9 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES # 10 | # OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions # 11 | # and limitations under the License. # 12 | ###################################################################################################################### 13 | 14 | from . 
import util 15 | import requests 16 | import json 17 | import uuid 18 | import sys 19 | import traceback 20 | 21 | import logging 22 | log = logging.getLogger() 23 | log.addHandler(logging.NullHandler()) 24 | log.setLevel(logging.DEBUG) 25 | 26 | 27 | _DEFAULT_CREATE_TIMEOUT = 30 * 60 28 | _DEFAULT_DELETE_TIMEOUT = 30 * 60 29 | _DEFAULT_UPDATE_TIMEOUT = 30 * 60 30 | 31 | 32 | class CustomResource(object): 33 | def __init__(self, event): 34 | self._event = event 35 | self._logicalresourceid = event.get("LogicalResourceId") 36 | self._physicalresourceid = event.get("PhysicalResourceId") 37 | self._requestid = event.get("RequestId") 38 | self._resourceproperties = event.get("ResourceProperties") 39 | self._resourcetype = event.get("ResourceType") 40 | self._responseurl = event.get("ResponseURL") 41 | self._requesttype = event.get("RequestType") 42 | self._servicetoken = event.get("ServiceToken") 43 | self._stackid = event.get("StackId") 44 | self._region = self._get_region() 45 | self.result_text = None 46 | self.result_attributes = None 47 | 48 | # Set timeout for actions 49 | self._create_timeout = _DEFAULT_CREATE_TIMEOUT 50 | self._delete_timeout = _DEFAULT_DELETE_TIMEOUT 51 | self._update_timeout = _DEFAULT_UPDATE_TIMEOUT 52 | 53 | @property 54 | def logicalresourceid(self): 55 | return self._logicalresourceid 56 | 57 | @property 58 | def physicalresourceid(self): 59 | return self._physicalresourceid 60 | 61 | @property 62 | def requestid(self): 63 | return self._requestid 64 | 65 | @property 66 | def resourceproperties(self): 67 | return self._resourceproperties 68 | 69 | @property 70 | def resourcetype(self): 71 | return self._resourcetype 72 | 73 | @property 74 | def responseurl(self): 75 | return self._responseurl 76 | 77 | @property 78 | def requesttype(self): 79 | return self._requesttype 80 | 81 | @property 82 | def servicetoken(self): 83 | return self._servicetoken 84 | 85 | @property 86 | def stackid(self): 87 | return self._stackid 88 | 89 | def create(self): 90 | return {} 91 | 92 | def delete(self): 93 | return {} 94 | 95 | def update(self): 96 | return {} 97 | 98 | def _get_region(self): 99 | if 'Region' in self._resourceproperties: 100 | return self._resourceproperties['Region'] 101 | else: 102 | return self._stackid.split(':')[3] 103 | 104 | def determine_event_timeout(self): 105 | if self.requesttype == "Create": 106 | timeout = self._create_timeout 107 | elif self.requesttype == "Delete": 108 | timeout = self._delete_timeout 109 | else: 110 | timeout = self._update_timeout 111 | 112 | return timeout 113 | 114 | def process_event(self): 115 | if self.requesttype == "Create": 116 | command = self.create 117 | elif self.requesttype == "Delete": 118 | command = self.delete 119 | else: 120 | command = self.update 121 | 122 | try: 123 | self.result_text = command() 124 | success = True 125 | if isinstance(self.result_text, dict): 126 | try: 127 | self.result_attributes = { "Data" : self.result_text } 128 | log.info(u"Command %s-%s succeeded", self.logicalresourceid, self.requesttype) 129 | log.debug(u"Command %s output: %s", self.logicalresourceid, self.result_text) 130 | except: 131 | log.error(u"Command %s-%s returned invalid data: %s", self.logicalresourceid, 132 | self.requesttype, self.result_text) 133 | success = False 134 | self.result_attributes = {} 135 | else: 136 | raise ValueError(u"Results must be a JSON object") 137 | except: 138 | e = sys.exc_info() 139 | log.error(u"Command %s-%s failed", self.logicalresourceid, self.requesttype) 140 | log.debug(u"Command %s 
output: %s", self.logicalresourceid, e[0]) 141 | log.debug(u"Command %s traceback: %s", self.logicalresourceid, traceback.print_tb(e[2])) 142 | success = False 143 | 144 | self.send_result(success, self.result_attributes) 145 | 146 | def send_result(self, success, attributes): 147 | attributes = attributes if attributes else {} 148 | source_attributes = { 149 | "Status": "SUCCESS" if success else "FAILED", 150 | "StackId": self.stackid, 151 | "RequestId": self.requestid, 152 | "LogicalResourceId": self.logicalresourceid 153 | } 154 | 155 | source_attributes['PhysicalResourceId'] = self.physicalresourceid 156 | if not source_attributes['PhysicalResourceId']: 157 | source_attributes['PhysicalResourceId'] = str(uuid.uuid4()) 158 | 159 | if not success: 160 | source_attributes["Reason"] = "Unknown Failure" 161 | 162 | source_attributes.update(attributes) 163 | log.debug(u"Sending result: %s", source_attributes) 164 | self._put_response(source_attributes) 165 | 166 | @util.retry_on_failure(max_tries=10) 167 | def __send(self, data): 168 | requests.put(self.responseurl, 169 | data=json.dumps(data), 170 | headers={"Content-Type": ""}, 171 | verify=True).raise_for_status() 172 | 173 | def _put_response(self, data): 174 | try: 175 | self.__send(data) 176 | log.info(u"CloudFormation successfully sent response %s", data["Status"]) 177 | except IOError: 178 | log.exception(u"Failed sending CloudFormation response") 179 | 180 | def __repr__(self): 181 | return str(self._event) 182 | -------------------------------------------------------------------------------- /source/local-solution-helper/pycfn_custom_resource/util.py: -------------------------------------------------------------------------------- 1 | ###################################################################################################################### 2 | # Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. # 3 | # # 4 | # Licensed under the Apache License Version 2.0 (the "License"). You may not use this file except in compliance # 5 | # with the License. A copy of the License is located at # 6 | # # 7 | # http://www.apache.org/licenses/ # 8 | # # 9 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES # 10 | # OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions # 11 | # and limitations under the License. # 12 | ###################################################################################################################### 13 | 14 | # Changes from original pycfn_custom_resource: 15 | # 16 | # Changed/updated imports. 17 | # 18 | # Updated exception syntax for Python 3. 
19 | 20 | from requests.exceptions import ConnectionError, HTTPError, Timeout, SSLError 21 | import logging 22 | import random 23 | import time 24 | 25 | log = logging.getLogger() 26 | 27 | 28 | class RemoteError(IOError): 29 | retry_modes = frozenset(['TERMINAL', 'RETRIABLE', 'RETRIABLE_FOREVER']) 30 | 31 | def __init__(self, code, msg, retry_mode='RETRIABLE'): 32 | super(RemoteError, self).__init__(code, msg) 33 | if not retry_mode in RemoteError.retry_modes: 34 | raise ValueError("Invalid retry mode: %s" % retry_mode) 35 | self.retry_mode = retry_mode 36 | 37 | 38 | def _extract_http_error(resp): 39 | if resp.status_code == 503: 40 | retry_mode = 'RETRIABLE_FOREVER' 41 | elif resp.status_code < 500 and resp.status_code not in (404, 408): 42 | retry_mode = 'TERMINAL' 43 | else: 44 | retry_mode = 'RETRIABLE' 45 | 46 | return RemoteError(resp.status_code, u"HTTP Error %s : %s" % (resp.status_code, resp.text), retry_mode) 47 | 48 | 49 | def exponential_backoff(max_tries, max_sleep=20): 50 | """ 51 | Returns a series of floating point numbers between 0 and min(max_sleep, 2^i-1) for i in 0 to max_tries 52 | """ 53 | return [random.random() * min(max_sleep, (2 ** i - 1)) for i in range(0, max_tries)] 54 | 55 | 56 | def extend_backoff(durations, max_sleep=20): 57 | """ 58 | Adds another exponential delay time to a list of delay times 59 | """ 60 | durations.append(random.random() * min(max_sleep, (2 ** len(durations) - 1))) 61 | 62 | 63 | def retry_on_failure(max_tries=5, http_error_extractor=_extract_http_error): 64 | def _decorate(f): 65 | def _retry(*args, **kwargs): 66 | durations = exponential_backoff(max_tries) 67 | for i in durations: 68 | if i > 0: 69 | log.debug(u"Sleeping for %f seconds before retrying", i) 70 | time.sleep(i) 71 | 72 | try: 73 | return f(*args, **kwargs) 74 | except SSLError as e: 75 | log.exception(u"SSLError") 76 | raise RemoteError(None, str(e), retry_mode='TERMINAL') 77 | except ConnectionError as e: 78 | log.exception(u"ConnectionError") 79 | last_error = RemoteError(None, str(e)) 80 | except HTTPError as e: 81 | last_error = http_error_extractor(e.response) 82 | if last_error.retry_mode == 'TERMINAL': 83 | raise last_error 84 | elif last_error.retry_mode == 'RETRIABLE_FOREVER': 85 | extend_backoff(durations) 86 | 87 | log.exception(last_error.strerror) 88 | except Timeout as e: 89 | log.exception(u"Timeout") 90 | last_error = RemoteError(None, str(e)) 91 | else: 92 | raise last_error 93 | return _retry 94 | return _decorate 95 | -------------------------------------------------------------------------------- /source/local-solution-helper/setup.py: -------------------------------------------------------------------------------- 1 | # coding: utf-8 2 | 3 | from setuptools import setup, find_packages 4 | # Solution Helper - 09/06/2018 - Pip version 5 | # pip version handling 6 | try: # for pip >= 10 7 | from pip._internal.req import parse_requirements 8 | except ImportError: # for pip <= 9.0.3 9 | from pip.req import parse_requirements 10 | 11 | setup( 12 | name='local_solution_helper', 13 | version='1.0', 14 | description='AWS Solution Helper Custom Resource', 15 | author='AWS Solutions Development', 16 | license='Apache 2.0', 17 | zip_safe=False, 18 | packages=['local_solution_helper', 'pycfn_custom_resource'], 19 | package_dir={'local_solution_helper': '.', 'pycfn_custom_resource' : './pycfn_custom_resource'}, 20 | install_requires=[ 21 | 'requests>=2.22.0' 22 | ], 23 | classifiers=[ 24 | 'Programming Language :: Python :: 3.7', 25 | ], 26 | ) 27 | 
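For reference, a minimal sketch (not part of the repository; the function and URL are hypothetical) of how the `retry_on_failure` decorator and `exponential_backoff` helper in `pycfn_custom_resource/util.py` above are intended to be used — `lambda_backed.py` applies the same decorator to its `__send` method:

```python
import requests
from pycfn_custom_resource.util import retry_on_failure

@retry_on_failure(max_tries=5)
def put_result(url, payload):
    # ConnectionError/Timeout and 5xx responses are retried with jittered
    # exponential backoff (a 503 extends the retry schedule); most 4xx
    # responses are raised immediately as terminal RemoteErrors.
    resp = requests.put(url, data=payload)
    resp.raise_for_status()
    return resp.status_code
```

If every attempt fails, the decorator re-raises the last `RemoteError` rather than returning silently.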
-------------------------------------------------------------------------------- /source/local-solution-helper/solution-helper.py: -------------------------------------------------------------------------------- 1 | ###################################################################################################################### 2 | # Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. # 3 | # # 4 | # Licensed under the Apache License Version 2.0 (the "License"). You may not use this file except in compliance # 5 | # with the License. A copy of the License is located at # 6 | # # 7 | # http://www.apache.org/licenses/ # 8 | # # 9 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES # 10 | # OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions # 11 | # and limitations under the License. # 12 | ###################################################################################################################### 13 | 14 | import os 15 | import string 16 | import ast 17 | import crypt 18 | import logging 19 | import uuid 20 | import json 21 | import datetime 22 | import re 23 | import urllib.request 24 | from pycfn_custom_resource.lambda_backed import CustomResource 25 | 26 | log = logging.getLogger() 27 | log.setLevel(logging.INFO) 28 | 29 | def SendAnonymousData(AnonymousData): 30 | log.info("Sending anonymous data") 31 | TimeNow = datetime.datetime.utcnow().isoformat() 32 | TimeStamp = str(TimeNow) 33 | AnonymousData['TimeStamp'] = TimeStamp 34 | data = json.dumps(AnonymousData) 35 | log.info("Data: %s", data) 36 | data_utf8 = data.encode('utf-8') 37 | url = 'https://metrics.awssolutionsbuilder.com/generic' 38 | headers = { 39 | 'content-type': 'application/json; charset=utf-8', 40 | 'content-length': len(data_utf8) 41 | } 42 | req = urllib.request.Request(url, data_utf8, headers) 43 | rsp = urllib.request.urlopen(req) 44 | rspcode = rsp.getcode() 45 | content = rsp.read() 46 | log.info("Response from APIGateway: %s, %s", rspcode, content) 47 | return data 48 | 49 | def createRandomPassword(pwdLength=13, specialChars="True"): 50 | log.info("Creating random password") 51 | if specialChars is None: 52 | specialChars = "True" 53 | # Generate new random password 54 | chars = string.ascii_letters + string.digits 55 | if specialChars == "True": 56 | chars += '#$%^&+=' 57 | p = re.compile('^(?=.{1,})(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])(?=.*[#$%^&+=]).*$') 58 | else: 59 | p = re.compile('^(?=.{1,})(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z]).*$') 60 | numTries = 0 61 | pwdFound = False 62 | while not pwdFound: 63 | password = '' 64 | numTries += 1 65 | for i in range(int(pwdLength)): 66 | password += chars[ord(os.urandom(1)) % len(chars)] 67 | m = p.match(password) 68 | if m is not None: 69 | pwdFound = True 70 | log.info("Password created after %s tries", numTries) 71 | log.debug("%s", password) 72 | return password 73 | 74 | 75 | def createUniqueID(): 76 | log.info("Creating Unique ID") 77 | # Generate new random Unique ID 78 | uniqueID = uuid.uuid4() 79 | log.debug("UUID: %s", uniqueID) 80 | return uniqueID 81 | 82 | 83 | def md5hash(value, salt): 84 | return crypt.crypt(value, '$1$' + salt) 85 | 86 | 87 | class myCustomResource(CustomResource): 88 | def __init__(self, event): 89 | super(myCustomResource, self).__init__(event) 90 | 91 | def create(self): 92 | try: 93 | FunctName = self._resourceproperties.get('FunctionName') 94 | FunctArn = 
self._resourceproperties.get('LambdaArn') 95 | CreateRandomPassword = self._resourceproperties.get('CreateRandomPassword') 96 | CreateUniqueID = self._resourceproperties.get('CreateUniqueID') 97 | SendData = self._resourceproperties.get('SendAnonymousData') 98 | response = None 99 | 100 | if SendData is not None: 101 | log.debug("Sending Data: %s", SendData) 102 | SendData = ast.literal_eval(SendData) 103 | SendData['Data'].update({'CFTemplate': 'Created'}) 104 | data = SendAnonymousData(SendData) 105 | response = {"Status": "SUCCESS", "Data": str(data)} 106 | log.debug("%s", response) 107 | 108 | if CreateRandomPassword is not None: 109 | # Expect value of CreateRandomPassword to be the desired password length 110 | password = createRandomPassword(CreateRandomPassword, self._resourceproperties.get('RandomPasswordSpecialCharacters')) 111 | response = {"Status": "SUCCESS", "Password": password} 112 | 113 | if CreateUniqueID is not None: 114 | # Value of CreateUniqueID does not matter 115 | newID = createUniqueID() 116 | response = {"Status": "SUCCESS", "UUID": str(newID)} 117 | log.debug("%s", response) 118 | 119 | if response is None: 120 | response = {"Status": "SUCCESS"} 121 | 122 | # Results dict referenced by GetAtt in template 123 | return response 124 | 125 | except Exception as e: 126 | log.error("Create exception: %s", e) 127 | return {"Status": "FAILED", "Reason": str(e)} 128 | 129 | def update(self): 130 | try: 131 | FunctName = self._resourceproperties.get('FunctionName') 132 | FunctArn = self._resourceproperties.get('LambdaArn') 133 | # S3Event = self._resourceproperties.get('S3Event') 134 | SendData = self._resourceproperties.get('SendAnonymousData') 135 | 136 | response = None 137 | 138 | if SendData is not None: 139 | log.debug("Sending Data: %s", SendData) 140 | SendData = ast.literal_eval(SendData) 141 | SendData['Data'].update({'CFTemplate': 'Updated'}) 142 | SendAnonymousData(SendData) 143 | response = {"Status": "SUCCESS", "Data": str(SendData)} 144 | log.debug("%s", response) 145 | 146 | if response is None: 147 | response = {"Status": "SUCCESS"} 148 | 149 | # Results dict referenced by GetAtt in template 150 | return response 151 | 152 | except Exception as e: 153 | log.error("Update exception: %s", e) 154 | return {"Status": "FAILED", "Reason": str(e)} 155 | 156 | def delete(self): 157 | try: 158 | FunctName = self._resourceproperties.get('FunctionName') 159 | # CreateSshKey = self._resourceproperties.get('CreateSshKey') 160 | SendData = self._resourceproperties.get('SendAnonymousData') 161 | 162 | log.info("Delete called, cleaning up") 163 | 164 | if SendData is not None: 165 | log.debug("Sending Data: %s", SendData) 166 | SendData = ast.literal_eval(SendData) 167 | SendData['Data'].update({'CFTemplate': 'Deleted'}) 168 | data = SendAnonymousData(SendData) 169 | response = {"Status": "SUCCESS", "Data": str(data)} 170 | log.debug("%s", response) 171 | 172 | return {"Status": "SUCCESS"} 173 | 174 | # Delete operations do not return result data 175 | except Exception as e: 176 | log.error("Delete exception: %s -- %s", FunctName, e) 177 | return {"Status": "FAILED", "Reason": str(e)} 178 | 179 | 180 | def lambda_handler(event, context): 181 | resource = myCustomResource(event) 182 | resource.process_event() 183 | return {'message': 'done'} 184 | -------------------------------------------------------------------------------- /source/scripts/callgcw.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 
###################################################################################################################### 3 | # Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. # 4 | # # 5 | # Licensed under the Apache License Version 2.0 (the "License"). You may not use this file except in compliance # 6 | # with the License. A copy of the License is located at # 7 | # # 8 | # http://www.apache.org/licenses/ # 9 | # # 10 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES # 11 | # OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions # 12 | # and limitations under the License. # 13 | ###################################################################################################################### 14 | import logging 15 | import os 16 | import sys 17 | import time 18 | import platform 19 | import codecs 20 | import boto3 21 | 22 | 23 | CW_REGION = "cfn_region" 24 | 25 | CURRENTOS = platform.system() 26 | 27 | # Main 28 | def call_gcw(p_region, p_account, p_mode, p_statistics, p_period, p_starttime, p_endtime, p_output): 29 | # ensure getcloudwatchmetrics.py (in this script's directory) is importable 30 | sys.path.append(os.path.dirname(os.path.abspath(__file__))) 31 | import getcloudwatchmetrics 32 | 33 | logging.basicConfig(level=logging.INFO, format='%(asctime)s %(levelname)-8s %(message)s') 34 | 35 | ec2 = boto3.client('ec2', region_name=CW_REGION) 36 | awsregions = ec2.describe_regions()['Regions'] 37 | logging.info("Regions: %s", awsregions) 38 | account = p_account 39 | mode = p_mode 40 | statistics = p_statistics 41 | period = int(p_period) * 60 42 | startTime = int(p_starttime) * 60 * 60 * 1000 43 | endTime = int(p_endtime) * 60 * 60 * 1000 44 | outputName = p_output 45 | 46 | ls_today = time.strftime('%Y-%m-%d', time.localtime(time.time())) 47 | ls_combined_csv = ( 48 | ls_today + "-before" + 49 | str(p_starttime) + "hour-with" + 50 | str(p_period) + "min.csv" 51 | ) 52 | outfile = codecs.open(ls_combined_csv, 'a', encoding='utf-8') 53 | outfile.write( 54 | u"\"{0}\",\"{1}\",\"{2}\",\"{3}\",\"{4}\",\"{5}\",\"{6}\",\"{7}\",\"{8}\",\"{9}\",\"{10}\",\"{11}\",\"{12}\",\"{13}\",\"{14}\",\"{15}\"\n".format( 55 | "humanReadableTimestamp", "timestamp", "accountId", "az", "instanceId", "instanceType", "instanceTags", 56 | "ebsBacked", "volumeIds", "instanceLaunchTime", "humanReadableInstanceLaunchTime", "CPUUtilization", 57 | "NetworkIn", "NetworkOut", "DiskReadOps", "DiskWriteOps")) 58 | outfile.close() 59 | # call getcloudwatchmetrics for one region at a time, appending each result to the combined CSV 60 | for i in awsregions: 61 | ls_single_region = i['RegionName'] 62 | logging.info("Processing region %s", ls_single_region) 63 | ls_region_array = [] 64 | ls_region_array.append(ls_single_region) 65 | ls_outputfile_name = outputName + "-in-" + ls_single_region + ".csv" 66 | getcloudwatchmetrics.download_metrics(ls_region_array, account, mode, statistics, period, startTime, endTime, ls_outputfile_name) 67 | 68 | if CURRENTOS == "Linux": 69 | os.system('cat ' + ls_outputfile_name + ' >> ' + ls_combined_csv) 70 | os.system('rm -f ' + ls_outputfile_name) 71 | elif CURRENTOS == "Windows": 72 | os.system('type ' + ls_outputfile_name + ' >> ' + ls_combined_csv) 73 | os.system('del ' + ls_outputfile_name) 74 | 75 | os.system('gzip -f ' + ls_combined_csv) 76 | return ls_combined_csv+".gz" 77 | 
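The cat/type and gzip shell calls in call_gcw() above are OS-specific; the same combine-and-compress step can be sketched portably with the standard library alone (append_and_gzip is an illustrative helper name, not part of the solution):

import gzip
import shutil

def append_and_gzip(region_files, combined_csv):
    # Append each per-region CSV to the combined file, then compress the
    # result with Python's gzip module instead of the external gzip binary.
    with open(combined_csv, 'ab') as dest:
        for name in region_files:
            with open(name, 'rb') as src:
                shutil.copyfileobj(src, dest)
    with open(combined_csv, 'rb') as src, gzip.open(combined_csv + '.gz', 'wb') as gz:
        shutil.copyfileobj(src, gz)
    return combined_csv + '.gz'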
-------------------------------------------------------------------------------- /source/scripts/deleteandterminate.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | ###################################################################################################################### 3 | # Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. # 4 | # # 5 | # Licensed under the Apache License Version 2.0 (the "License"). You may not use this file except in compliance # 6 | # with the License. A copy of the License is located at # 7 | # # 8 | # http://www.apache.org/licenses/ # 9 | # # 10 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES # 11 | # OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions # 12 | # and limitations under the License. # 13 | ###################################################################################################################### 14 | 15 | import boto.utils, boto3 16 | import logging 17 | 18 | CW_REGION = "cfn_region" 19 | DB_CLID = "cfn_db_clusteridentifier" 20 | # Main 21 | if __name__ == "__main__": 22 | logging.basicConfig(level=logging.INFO, format='%(asctime)s %(levelname)-8s %(message)s') 23 | 24 | logging.info("Deleting the redshift cluster") 25 | rs = boto3.client('redshift',CW_REGION) 26 | response = rs.delete_cluster(ClusterIdentifier=DB_CLID,SkipFinalClusterSnapshot=True) 27 | 28 | logging.info("Terminating the EC2 instance") 29 | ec2 = boto3.resource('ec2',CW_REGION) 30 | instanceid = boto.utils.get_instance_metadata()['instance-id'] 31 | instance = ec2.Instance(instanceid) 32 | response = instance.terminate(DryRun=False) 33 | -------------------------------------------------------------------------------- /source/scripts/getcloudwatchmetrics.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | ###################################################################################################################### 3 | # Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. # 4 | # # 5 | # Licensed under the Apache License Version 2.0 (the "License"). You may not use this file except in compliance # 6 | # with the License. A copy of the License is located at # 7 | # # 8 | # http://www.apache.org/licenses/ # 9 | # # 10 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES # 11 | # OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions # 12 | # and limitations under the License. 
# 13 | ###################################################################################################################### 14 | 15 | import boto3 16 | import json 17 | import re 18 | import os 19 | import sys 20 | import logging 21 | import multiprocessing 22 | import itertools 23 | from functools import partial 24 | from argparse import ArgumentParser 25 | import time 26 | import datetime 27 | import codecs 28 | 29 | account = "" 30 | instanceNum = {} 31 | 32 | def getInstances(region): 33 | ec2 = boto3.resource('ec2',region_name=region) 34 | json_result = [] 35 | runningInstances = [] 36 | #countInstance = [] 37 | try: 38 | json_result = ec2.meta.client.describe_instances() 39 | 40 | except Exception as e: 41 | print(e) 42 | 43 | if "Reservations" in json_result: 44 | for reservation in json_result["Reservations"]: 45 | for instance in reservation["Instances"]: 46 | if instance["State"]["Name"] == "running": 47 | instance["OwnerAccountId"] = account 48 | runningInstances.append(instance) 49 | #countInstance.append(instance["InstanceId"]) 50 | 51 | return runningInstances 52 | 53 | def getMetrics(intNow, startTime, endTime, period, statistics, unit, metrics, outputName, instance): 54 | global instanceNum 55 | instanceNum[instance["InstanceId"]] = 1 56 | res = "" 57 | output = {} 58 | for metric in metrics: 59 | args = { 60 | "dimensions": [{"Name": "InstanceId", "Value": instance["InstanceId"]}], 61 | "startTime": intNow - startTime, 62 | "endTime": intNow - endTime, 63 | "period": period, 64 | "statistics": [statistics], 65 | "metricName": metric, 66 | "namespace": "AWS/EC2", 67 | "unit": unit[metric] 68 | } 69 | logging.info("instance %s (%s)" % (instance["InstanceId"], len(instanceNum))) 70 | logging.info("metric %s" % (metric)) 71 | 72 | numRetries = 0 73 | gettingMetrics = True 74 | while gettingMetrics: 75 | try: 76 | session = boto3.session.Session(region_name=instance["Placement"]["AvailabilityZone"][:-1]) 77 | cloudwatch = session.resource('cloudwatch') 78 | json_result = cloudwatch.meta.client.get_metric_statistics( 79 | Dimensions=args['dimensions'], 80 | StartTime=datetime.datetime.fromtimestamp(args['startTime']/1e3).strftime("%Y-%m-%d %H:%M:%S"), 81 | EndTime=datetime.datetime.fromtimestamp(args['endTime']/1e3).strftime("%Y-%m-%d %H:%M:%S"), 82 | Period=args['period'], 83 | Statistics=args['statistics'], 84 | MetricName=args['metricName'], 85 | Namespace=args['namespace'], 86 | Unit=args['unit']) 87 | gettingMetrics = False 88 | except Exception as e: 89 | numRetries+=1 90 | logging.error("Getting CW metric %s try %s of 3" % (args['metricName'], numRetries)) 91 | logging.error("Exception: %s" % (e)) 92 | if numRetries > 3: 93 | gettingMetrics = False 94 | raise 95 | time.sleep(1) 96 | 97 | #logging.info("metric_stats %s" % (json_result)) 98 | for datapoint in json_result['Datapoints']: 99 | try: 100 | if(str(datapoint['Timestamp']) in output): 101 | output[str(datapoint["Timestamp"])][metric] = datapoint[statistics] 102 | else: 103 | readableTimeStamp = datapoint['Timestamp'] 104 | readableInstanceLaunchTime = instance["LaunchTime"] 105 | tagString = "" 106 | ebsString = "" 107 | 108 | if instance.get('Tags'): 109 | for tag in instance["Tags"]: 110 | tagString += re.sub('[^a-zA-Z0-9-_ *.]', '', tag["Key"].replace(",", " ")) + ":" + re.sub('[^a-zA-Z0-9-_ *.]', '', tag["Value"].replace(",", " ")) + " | " 111 | tagString = tagString[:-3] 112 | if instance["BlockDeviceMappings"]: 113 | for ebs in instance["BlockDeviceMappings"]: 114 | ebsString += ebs["Ebs"]["VolumeId"] + " | 
" 115 | ebsString = ebsString[:-3] 116 | 117 | output[str(datapoint['Timestamp'])] = { 118 | "humanReadableTimestamp": readableTimeStamp, 119 | "timestamp": datapoint['Timestamp'], 120 | "accountId": instance["OwnerAccountId"], 121 | "az": instance["Placement"]["AvailabilityZone"], 122 | "instanceId": instance["InstanceId"], 123 | "instanceType": instance["InstanceType"], 124 | "instanceTags": tagString, 125 | "ebsBacked": True if instance["RootDeviceType"] == "ebs" else "false", 126 | "volumeIds": ebsString, 127 | "instanceLaunchTime": instance["LaunchTime"], 128 | "humanReadableInstanceLaunchTime": readableInstanceLaunchTime, 129 | metric: datapoint[statistics] 130 | } 131 | except Exception as e: 132 | print(e) 133 | 134 | for row in output: 135 | res += u"\"{0}\",\"{1}\",\"{2}\",\"{3}\",\"{4}\",\"{5}\",\"{6}\",\"{7}\",\"{8}\",\"{9}\",\"{10}\",\"{11}\",\"{12}\",\"{13}\",\"{14}\",\"{15}\"\n".format(\ 136 | output[row].setdefault("humanReadableTimestamp",""),\ 137 | output[row].setdefault("timestamp",""),\ 138 | output[row].setdefault("accountId",""),\ 139 | output[row].setdefault("az",""),\ 140 | output[row].setdefault("instanceId",""),\ 141 | output[row].setdefault("instanceType",""),\ 142 | output[row].setdefault("instanceTags",""),\ 143 | output[row].setdefault("ebsBacked",""),\ 144 | output[row].setdefault("volumeIds",""),\ 145 | output[row].setdefault("instanceLaunchTime",""),\ 146 | output[row].setdefault("humanReadableInstanceLaunchTime",""),\ 147 | output[row].setdefault("CPUUtilization","0"),\ 148 | output[row].setdefault("NetworkIn","0"),\ 149 | output[row].setdefault("NetworkOut","0"),\ 150 | output[row].setdefault("DiskReadOps","0"),\ 151 | output[row].setdefault("DiskWriteOps","0")) 152 | return res 153 | 154 | # Main 155 | def download_metrics(p_region, p_account, p_mode, p_statistics, p_period, p_starttime, p_endtime, p_output): 156 | 157 | logging.basicConfig(level=logging.INFO, format='%(asctime)s %(levelname)-8s %(message)s') 158 | 159 | region = p_region 160 | account = p_account 161 | mode = p_mode 162 | statistics = p_statistics 163 | period = p_period 164 | startTime = p_starttime 165 | endTime = p_endtime 166 | outputName = p_output 167 | 168 | 169 | metrics = ['NetworkIn', 'NetworkOut', 'DiskReadOps', 'DiskWriteOps', 'CPUUtilization']; 170 | 171 | unit = { 172 | 'CPUUtilization': 'Percent', 173 | 'NetworkIn': 'Bytes', 174 | 'NetworkOut': 'Bytes', 175 | 'DiskReadOps': 'Count', 176 | 'DiskWriteOps': 'Count' 177 | } 178 | 179 | intNow = int(time.time()*1000) 180 | 181 | logging.info("region %s " % (region)) 182 | logging.info("account %s " % (account)) 183 | logging.info("mode %s " % (mode)) 184 | logging.info("statistics %s " % (statistics)) 185 | logging.info("period %s " % (period)) 186 | logging.info("time %s " % (startTime)) 187 | logging.info("endtime %s " % (endTime)) 188 | logging.info("metrics %s " % (metrics)) 189 | logging.info("unit %s " % (unit)) 190 | logging.info("now %s " % (intNow)) 191 | logging.info("startTime %s " % (intNow - startTime)) 192 | logging.info("endTime %s " % (intNow - endTime)) 193 | logging.info("output %s " % (outputName)) 194 | 195 | outfile = codecs.open(outputName, 'a', encoding='utf-8') 196 | 197 | if (p_mode == 'single'): 198 | accounts = p_account 199 | else: 200 | logging.error('Mode is not correct') 201 | 202 | p = multiprocessing.Pool(multiprocessing.cpu_count() - 1) 203 | func = partial(getMetrics, intNow, startTime, endTime, period, statistics, unit, metrics, outputName) 204 | response = p.map(func, 
getInstances(region[0])) 205 | p.close() 206 | p.join() 207 | 208 | for line in response: 209 | outfile.write(line) 210 | outfile.close() 211 | -------------------------------------------------------------------------------- /source/scripts/run-rightsizing-redshift.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | ###################################################################################################################### 3 | # Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. # 4 | # # 5 | # Licensed under the Apache License Version 2.0 (the "License"). You may not use this file except in compliance # 6 | # with the License. A copy of the License is located at # 7 | # # 8 | # http://www.apache.org/licenses/ # 9 | # # 10 | # or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES # 11 | # OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions # 12 | # and limitations under the License. # 13 | ###################################################################################################################### 14 | # 15 | # v1.0 initial version - AWS Solutions Builders 16 | # v1.1 fix f1.2xlarge zero price issue - AWS Solutions Builders 17 | # 18 | ###################################################################################################################### 19 | 20 | import math 21 | import string 22 | import os 23 | import sys 24 | import csv 25 | import random 26 | import platform 27 | import linecache 28 | import logging 29 | import http.client 30 | import boto3 31 | import psycopg2 32 | 33 | #Global variables 34 | CURRENTOS = platform.system() 35 | import configparser 36 | cf = configparser.ConfigParser() 37 | 38 | #=============================================================================== 39 | # cf.read("resize.conf") 40 | # 41 | # CW_REGION = cf.get("cwatch","region") 42 | # CW_ACCOUNT = cf.get("cwatch","account") 43 | # CW_MODE = cf.get("cwatch","mode") 44 | # CW_STATISTICS = cf.get("cwatch","statistics") 45 | # CW_PERIOD = cf.get("cwatch","period") 46 | # CW_STARTTIME = cf.get("cwatch","startTime") 47 | # CW_ENDTIME = cf.get("cwatch","endTime") 48 | # CW_OUTPUT = cf.get("cwatch","outputName") 49 | # CW_DATAFILE = cf.get("parameters","cw_datafile") 50 | # 51 | # ACCOUNT_ID = CW_ACCOUNT 52 | # REDSHIFT_IAM_ROLE = cf.get("parameters","redshift_iam_role") 53 | # S3_BUCKET = cf.get("parameters","s3_bucket_name") 54 | # 55 | # DB_HOST = cf.get("db", "db_host") 56 | # DB_PORT = cf.getint("db", "db_port") 57 | # DB_USER = cf.get("db", "db_user") 58 | # DB_PASS = cf.get("db", "db_pass") 59 | # DB_NAME = cf.get("db", "db_name") 60 | # IOSP_PER_SSD = int(cf.get("parameters","iops_per_ssd")) 61 | #=============================================================================== 62 | 63 | CW_REGION = "cfn_region" 64 | CW_ACCOUNT = "cfn_account" 65 | CW_MODE = "single" 66 | CW_STATISTICS = "Maximum" 67 | CW_PERIOD = "60" 68 | CW_STARTTIME = "336" 69 | CW_ENDTIME = "0" 70 | CW_OUTPUT = "result" 71 | CW_DATAFILE = "cfn_datafile" 72 | 73 | ACCOUNT_ID = CW_ACCOUNT 74 | REDSHIFT_IAM_ROLE = "redshift_iam_role" 75 | S3_BUCKET = "cfn_s3_bucket_name" 76 | 77 | DB_HOST = "cfn_db_host" 78 | DB_PORT = "cfn_db_port" 79 | DB_USER = "cfn_db_user" 80 | DB_PASS = "cfn_db_pass" 81 | DB_NAME = "cfn_db_name" 82 | IOSP_PER_SSD = int("6000") 83 | 84 | def db_conn(db_host, db_port, db_user, db_pass, 
db_name): 85 | try: 86 | ls_dsn = "dbname='" + db_name + "' user='" + db_user + "' host='" + db_host + "' password='" + db_pass + "' port=" + str(db_port) 87 | conn = psycopg2.connect(ls_dsn) 88 | return conn 89 | except psycopg2.Error as e: 90 | logging.error("Unable to connect to database %s on %s:%s -- %s" % (db_name, db_host, db_port, e)) 91 | sys.exit(1) 92 | 93 | def execute_dml_ddl(db_conn, sql_stat): 94 | cur_dml_ddl = db_conn.cursor() 95 | cur_dml_ddl.execute(sql_stat) 96 | db_conn.commit() 97 | cur_dml_ddl.close() 98 | 99 | def upload_s3(bucketname, keyname, file_upload): 100 | s3 = boto3.resource('s3',region_name=CW_REGION) 101 | s3.meta.client.upload_file(file_upload, bucketname, keyname) 102 | 103 | def copy_table(db_conn, tablename, bucketname, sourcefile, ignorerows, gzflag): 104 | #ls_rolesession_name = REDSHIFT_IAM_ROLE[REDSHIFT_IAM_ROLE.index("/")+1:] 105 | #client = boto3.client('sts') 106 | #assumedRoleObject = client.assume_role(RoleArn=REDSHIFT_IAM_ROLE, RoleSessionName=ls_rolesession_name) 107 | #credentials = assumedRoleObject['Credentials'] 108 | #credentials = client.get_session_token()['Credentials'] 109 | session = boto3.Session() 110 | credentials = session.get_credentials() 111 | ls_aws_access_key_id = credentials.access_key 112 | ls_aws_secret_access_key = credentials.secret_key 113 | ls_aws_session_token = credentials.token 114 | 115 | ls_import_pricelist_sql = ( 116 | "copy " + tablename + " from 's3://" + bucketname + "/" + sourcefile + "'" 117 | ) 118 | ls_import_pricelist_sql += " credentials 'aws_access_key_id=" + ls_aws_access_key_id + ";aws_secret_access_key="+ ls_aws_secret_access_key + ";token=" + ls_aws_session_token + "'" 119 | ls_import_pricelist_sql += " delimiter ',' QUOTE AS '" + '"' + "'" + " IGNOREHEADER " + str(ignorerows) 120 | if gzflag=="Y": 121 | ls_import_pricelist_sql += " gzip csv" 122 | else: 123 | ls_import_pricelist_sql += " csv" 124 | execute_dml_ddl(db_conn, ls_import_pricelist_sql) 125 | 126 | 127 | def import_cwdata(db_conn, sourcefile, ignorerows, gzflag): 128 | ls_temp_cw_table = "cwdata" + ''.join(random.sample(['z','y','x','w','v','u','t','s','r','q','p','o','n','m','l','k','j','i','h','g','f','e','d','c','b','a'], 8)).replace(' ','') 129 | ls_create_cwtab_sql = "create table " + ls_temp_cw_table + "( " 130 | ls_create_cwtab_sql += " humanReadableTimestamp varchar(300), " 131 | ls_create_cwtab_sql += " timestamp varchar(300), accountId varchar(300), " 132 | ls_create_cwtab_sql += " az varchar(300), instanceId varchar(300) distkey, " 133 | ls_create_cwtab_sql += " instanceType varchar(300), instanceTags varchar(max), " 134 | ls_create_cwtab_sql += " ebsBacked varchar(300), volumeIds varchar(1024), " 135 | ls_create_cwtab_sql += " instanceLaunchTime varchar(300), humanReadableInstanceLaunchTime varchar(300), " 136 | ls_create_cwtab_sql += " CPUUtilization varchar(300), NetworkIn varchar(300), " 137 | ls_create_cwtab_sql += " NetworkOut varchar(300), DiskReadOps varchar(300), DiskWriteOps varchar(300) ) " 138 | execute_dml_ddl(db_conn, ls_create_cwtab_sql) 139 | 140 | copy_table(db_conn, ls_temp_cw_table, S3_BUCKET, sourcefile, ignorerows, gzflag) 141 | return ls_temp_cw_table 142 | 143 | def download_ec2pricelist(): 144 | logging.basicConfig(level=logging.INFO, format='%(asctime)s %(levelname)-8s %(message)s') 145 | ls_pricelist_file = 'ec2pricelist.csv' 146 | if os.path.exists(ls_pricelist_file): 147 | if CURRENTOS == "Linux": 148 | os.system('rm -f ' + ls_pricelist_file) 149 | elif CURRENTOS == "Windows": 150 | os.system('del ' + ls_pricelist_file) 151 | 152 | try: 153 | conn 
= http.client.HTTPSConnection( 154 | 'pricing.us-east-1.amazonaws.com' 155 | ) 156 | conn.request( 157 | 'GET', 158 | '/offers/v1.0/aws/AmazonEC2/current/index.csv' 159 | ) 160 | resp = conn.getresponse() 161 | plist = resp.read().decode('utf-8') 162 | conn.close() # Be nice and close the connection when you're done 163 | 164 | plout = open(ls_pricelist_file, 'w') 165 | plout.write(plist) 166 | plout.close() 167 | 168 | except Exception as inst: 169 | print(inst) 170 | logging.error("Could not download the EC2 pricelist from https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonEC2/current/index.csv") 171 | sys.exit(1) 172 | 173 | ls_target_bucket = S3_BUCKET 174 | ls_source_file = "ec2pricelist.csv" 175 | logging.info("Uploading the EC2 pricelist file to S3 bucket %s " % (ls_target_bucket)) 176 | upload_s3(ls_target_bucket, ls_pricelist_file, ls_source_file) 177 | return ls_pricelist_file 178 | 179 | def import_ec2pricelist(db_conn, p_ec2pricelist_file): 180 | logging.basicConfig(level=logging.INFO, format='%(asctime)s %(levelname)-8s %(message)s') 181 | ls_pricelist_file = p_ec2pricelist_file 182 | 183 | ls_columns = linecache.getline(ls_pricelist_file, 6) 184 | logging.info("Generating the Redshift table structure") 185 | ls_columns_list = [str(x) for x in ls_columns.split(',')] 186 | ls_temp_price_table = ( 187 | "pricelist" + 188 | ''.join(random.sample( 189 | ['z','y','x','w','v','u','t','s','r','q','p','o','n','m','l','k','j','i','h','g','f','e','d','c','b','a'], 8) 190 | ).replace(' ','') 191 | ) 192 | 193 | logging.info("Importing the pricelist file to Redshift table: %s " % (ls_temp_price_table)) 194 | ls_create_table_sql = "create table " + ls_temp_price_table + "( " 195 | for col in ls_columns_list: 196 | ls_colname = col.replace(' ','').replace('/','').replace('-','').replace('"','') 197 | if ls_colname == "Group": 198 | ls_colname = "GroupId" 199 | ls_create_table_sql += " " + ls_colname + " varchar(300) ," 200 | 201 | ls_create_table_sql = ls_create_table_sql[:-1] 202 | ls_create_table_sql += " )" 203 | execute_dml_ddl(db_conn, ls_create_table_sql) 204 | 205 | copy_table(db_conn, ls_temp_price_table, S3_BUCKET, ls_pricelist_file, 6, "N") 206 | 207 | ls_alter_pricelist_sql = " alter table " + ls_temp_price_table + " add regionabbr varchar(300) " 208 | execute_dml_ddl(db_conn, ls_alter_pricelist_sql) 209 | ls_update_pricelist_sql = "update " + ls_temp_price_table + " set regionabbr=case " 210 | ls_update_pricelist_sql += " when location='US West (Oregon)' then 'USW2' " 211 | ls_update_pricelist_sql += " when location='US East (N. Virginia)' then 'USE1' " 212 | ls_update_pricelist_sql += " when location='US West (N. 
California)' then 'USW1' " 213 | ls_update_pricelist_sql += " when location='Asia Pacific (Seoul)' then 'APN2' " 214 | ls_update_pricelist_sql += " when location='Asia Pacific (Singapore)' then 'APS1' " 215 | ls_update_pricelist_sql += " when location='Asia Pacific (Sydney)' then 'APS2' " 216 | ls_update_pricelist_sql += " when location='Asia Pacific (Tokyo)' then 'APN1' " 217 | ls_update_pricelist_sql += " when location='EU (Frankfurt)' then 'EU' " 218 | ls_update_pricelist_sql += " when location='EU (Ireland)' then 'EUW1' " 219 | ls_update_pricelist_sql += " when location='South America (Sao Paulo)' then 'SAE1' " 220 | ls_update_pricelist_sql += " when location='Asia Pacific (Mumbai)' then 'APS1' " 221 | ls_update_pricelist_sql += " end " 222 | execute_dml_ddl(db_conn, ls_update_pricelist_sql) 223 | ls_delete_zero_entry_pricelist_sql = "delete from " + ls_temp_price_table + " where to_number(trim(both ' ' from priceperunit),'9999999D99999999') <= 0.00" 224 | execute_dml_ddl(db_conn, ls_delete_zero_entry_pricelist_sql) 225 | 226 | return ls_temp_price_table 227 | 228 | def determine_right_type(db_conn, sql_stat, s_temp_table, s_instanceid, iops_usage, ssd_size_usage, cpu_nbr_usage, network_level_usage, rate_usage, mem_size): 229 | ln_iops_usage = iops_usage 230 | ln_ssd_size_usage = ssd_size_usage 231 | ln_cpu_nbr = cpu_nbr_usage 232 | ln_network_level_usage = network_level_usage 233 | ln_rate = rate_usage 234 | ln_mem_size = mem_size 235 | ls_instanceid = s_instanceid 236 | ls_temp_table = s_temp_table 237 | cur_resize = db_conn.cursor() 238 | cur_resize.execute(sql_stat) 239 | row_newtypes = cur_resize.fetchall() 240 | 241 | for record in row_newtypes: 242 | ls_min_type = record[1] 243 | ls_min_storage = record[3] 244 | ln_min_cpu = int(record[4]) 245 | ln_min_network_level = int(record[5]) 246 | ls_min_network = record[6] 247 | ln_min_mem = float(record[7].split(' ')[0].replace(',','')) 248 | ln_min_rate = float(record[2]) 249 | if ls_min_storage.find('SSD')>0: 250 | if ls_min_storage.find('NVMe')>0: 251 | ls_min_storage1 = ls_min_storage[:(ls_min_storage.find('NVMe SSD')-1)] 252 | else: 253 | ls_min_storage1 = ls_min_storage[:(ls_min_storage.find('SSD')-1)] 254 | ln_min_ssd_nbr = int(ls_min_storage1[:ls_min_storage1.find('x')-1]) 255 | ln_min_ssd_size = float(ls_min_storage1[ls_min_storage1.find('x')+1:]) 256 | ln_min_ssd_total_size = ln_min_ssd_nbr * ln_min_ssd_size 257 | ln_min_ssd_total_iops = IOSP_PER_SSD * ln_min_ssd_nbr 258 | 259 | if ln_iops_usage > 3000 and ls_min_storage.find('SSD')>0: 260 | if ln_min_ssd_total_iops >= ln_iops_usage and ln_min_ssd_total_size>=ln_ssd_size_usage: 261 | if ln_min_mem >= ln_mem_size: 262 | if ln_min_cpu >= ln_cpu_nbr: 263 | if ln_min_network_level >= ln_network_level_usage: 264 | if ln_min_rate <= ln_rate: 265 | ls_update_type_sql = ( 266 | "update " + ls_temp_table + 267 | " set resizetype='" + ls_min_type + 268 | "', resizeprice='" + str(ln_min_rate) + "', " 269 | ) 270 | ls_update_type_sql += ( 271 | " newvcpu='" + str(ln_min_cpu) + 272 | "', newmemory='" + str(ln_min_mem) + 273 | " GiB" + "', newstorage='" + ls_min_storage + 274 | "', newnetwork='" + ls_min_network + "' " 275 | ) 276 | ls_update_type_sql += " where instanceid = '" + ls_instanceid + "'" 277 | execute_dml_ddl(db_conn, ls_update_type_sql) 278 | break 279 | else: 280 | if ln_min_cpu>=ln_cpu_nbr: 281 | if ln_min_mem>=ln_mem_size: 282 | if ln_min_network_level>=ln_network_level_usage: 283 | if ln_min_rate<=ln_rate: 284 | ls_update_type_sql = "update " + ls_temp_table + " set 
resizetype='" + ls_min_type + "', resizeprice='" + str(ln_min_rate) + "', " 285 | ls_update_type_sql += " newvcpu='" + str(ln_min_cpu) + "', newmemory='" + str(ln_min_mem) + " GiB" + "', newstorage='" + ls_min_storage + "', newnetwork='" + ls_min_network + "' " 286 | ls_update_type_sql += " where instanceid = '" + ls_instanceid + "'" 287 | execute_dml_ddl(db_conn, ls_update_type_sql) 288 | break 289 | cur_resize.close() 290 | 291 | 292 | def right_sizing(db_conn, pricelist_table, cw_tablename): 293 | ls_temp_table = ( 294 | "rightsizing" + 295 | ''.join( 296 | random.sample(['z','y','x','w','v','u','t','s','r','q','p','o','n','m','l','k','j','i','h','g','f','e','d','c','b','a'], 8) 297 | ).replace(' ','') 298 | ) 299 | 300 | ls_gen_list_sql = "create table " + ls_temp_table + " as " 301 | ls_gen_list_sql += " select upper(substring(a.az,1,2))||upper(substring(a.az,4,1))|| substring(substring(a.az, position('-' in a.az)+1),position('-' in substring(a.az, position('-' in a.az)+1))+1,1) as region, " 302 | ls_gen_list_sql += " a.instancetype, b.vcpu, b.memory, b.storage, b.networkperformance, b.priceperunit, a.instanceid, max(a.maxcpu) as maxcpu, max(a.maxiops) as maxiops, max(a.maxnetwork) as maxnetwork, a.instancetags " 303 | ls_gen_list_sql += " from (select instanceid, instancetags, instanceType, az, max(to_number(trim(both ' ' from CPUUtilization),'9999999D99999999')) as maxcpu, " 304 | #ls_gen_list_sql += " max(to_number(trim(both ' ' from diskreadops), '9999999999999D99999999')/60+to_number(trim(both ' ' from diskwriteops),'9999999999999D99999999')/60) as maxiops, " 305 | ls_gen_list_sql += " max(CAST(trim(both ' ' from diskreadops) as decimal(38,10))/60+CAST(trim(both ' ' from diskwriteops) as decimal(38,10))/60) as maxiops, " 306 | ls_gen_list_sql += " max((to_number(trim(both ' ' from networkin),'9999999999999D99999999')/60/1024/1024)*8+(to_number(trim(both ' ' from networkout),'9999999999999D99999999')/60/1024/1024)*8) as maxnetwork " 307 | ls_gen_list_sql += " from " + cw_tablename 308 | #ls_gen_list_sql += " where accountid like '%" + ACCOUNT_ID + "%' " 309 | ls_gen_list_sql += " where accountid not like '%accountId%' " 310 | ls_gen_list_sql += " group by instanceid, instancetags, instanceType, az) a, " + pricelist_table + " b " 311 | ls_gen_list_sql += " where a.instanceid in (select instanceid from (select instanceid,max(maxcpu) as topcpu from " 312 | ls_gen_list_sql += "(select instanceid, instancetags, instanceType, az, max(to_number(trim(both ' ' from CPUUtilization),'9999999D99999999')) as maxcpu, " 313 | #ls_gen_list_sql += " max(to_number(trim(both ' ' from diskreadops), '9999999999999D99999999')/60+to_number(trim(both ' ' from diskwriteops),'9999999999999D99999999')/60) as maxiops, " 314 | ls_gen_list_sql += " max(CAST(trim(both ' ' from diskreadops) as decimal(38,10))/60+CAST(trim(both ' ' from diskwriteops) as decimal(38,10))/60) as maxiops, " 315 | ls_gen_list_sql += " max((to_number(trim(both ' ' from networkin),'9999999999999D99999999')/60/1024/1024)*8+(to_number(trim(both ' ' from networkout),'9999999999999D99999999')/60/1024/1024)*8) as maxnetwork " 316 | #ls_gen_list_sql += " from " + cw_tablename + " where accountid like '%" + ACCOUNT_ID + "%' group by instanceid, instancetags, instanceType, az) group by instanceid) where topcpu<50) " 317 | ls_gen_list_sql += " from " + cw_tablename + " where accountid not like '%accountId%' group by instanceid, instancetags, instanceType, az) group by instanceid) where topcpu<50) " 318 | ls_gen_list_sql += " and 
a.instancetype=b.instancetype " 319 | ls_gen_list_sql += " and upper(substring(a.az,1,2))||upper(substring(a.az,4,1))|| substring(substring(a.az, position('-' in a.az)+1),position('-' in substring(a.az, position('-' in a.az)+1))+1,1)=b.regionabbr " 320 | ls_gen_list_sql += " and b.termtype='OnDemand' and b.location<>'AWS GovCloud (US)' and b.servicecode='AmazonEC2' " 321 | ls_gen_list_sql += " and b.tenancy='Shared' and b.processorarchitecture='64-bit' and b.operatingsystem='Linux' and b.preinstalledsw='NA'" 322 | ls_gen_list_sql += " group by upper(substring(a.az,1,2))||upper(substring(a.az,4,1))|| substring(substring(a.az, position('-' in a.az)+1),position('-' in substring(a.az, position('-' in a.az)+1))+1,1), " 323 | ls_gen_list_sql += " a.instancetype, b.vcpu, b.memory, b.storage, b.networkperformance, b.priceperunit, a.instanceid, a.instancetags" 324 | 325 | execute_dml_ddl(db_conn, ls_gen_list_sql) 326 | 327 | ls_alter_temp_table = "alter table " + ls_temp_table + " add resizetype varchar(300)" 328 | execute_dml_ddl(db_conn, ls_alter_temp_table) 329 | ls_alter_temp_table = "alter table " + ls_temp_table + " add newvcpu varchar(300)" 330 | execute_dml_ddl(db_conn, ls_alter_temp_table) 331 | ls_alter_temp_table = "alter table " + ls_temp_table + " add newmemory varchar(300)" 332 | execute_dml_ddl(db_conn, ls_alter_temp_table) 333 | ls_alter_temp_table = "alter table " + ls_temp_table + " add newnetwork varchar(300)" 334 | execute_dml_ddl(db_conn, ls_alter_temp_table) 335 | ls_alter_temp_table = "alter table " + ls_temp_table + " add resizeprice varchar(300)" 336 | execute_dml_ddl(db_conn, ls_alter_temp_table) 337 | ls_alter_temp_table = "alter table " + ls_temp_table + " add costsavedpermonth varchar(300)" 338 | execute_dml_ddl(db_conn, ls_alter_temp_table) 339 | ls_alter_temp_table = "alter table " + ls_temp_table + " add newstorage varchar(300)" 340 | execute_dml_ddl(db_conn, ls_alter_temp_table) 341 | 342 | ls_resizelist_sql = "select * from " + ls_temp_table 343 | cur = db_conn.cursor() 344 | cur.execute(ls_resizelist_sql) 345 | row_resizelists = cur.fetchall() 346 | ln_instance_nbr = cur.rowcount 347 | ln_curr_nbr = 0 348 | for row in row_resizelists: 349 | ln_curr_nbr += 1 350 | sys.stdout.write("\rComplete percent: " + str(round(float(ln_curr_nbr)/ln_instance_nbr * 100))+"%") 351 | sys.stdout.flush() 352 | ln_cpu = int(row[2]) 353 | ls_storage = row[4] 354 | ln_mem = float(row[3].split(' ')[0].replace(',','')) 355 | ln_ssd_type = '' 356 | 357 | if ls_storage.find('SSD')>0: 358 | if ls_storage.find('NVMe')>0: 359 | ln_ssd_type = 'NVMe' 360 | ls_storage1 = ls_storage[:(ls_storage.find('NVMe SSD')-1)] 361 | else: 362 | ln_ssd_type = 'SSD' 363 | ls_storage1 = ls_storage[:(ls_storage.find('SSD')-1)] 364 | ln_ssd_nbr = int(ls_storage1[:ls_storage1.find('x')-1]) 365 | ln_ssd_size = float(ls_storage1[ls_storage1.find('x')+1:]) 366 | ln_ssd_total_size = ln_ssd_nbr * ln_ssd_size 367 | ln_ssd_total_iops = IOSP_PER_SSD * ln_ssd_nbr 368 | else: 369 | ln_ssd_total_size = 0 370 | ln_ssd_total_iops = 0 371 | 372 | ln_rate = float(row[6]) 373 | ls_instanceid = row[7] 374 | ln_cpu_usage = math.ceil(row[8]) 375 | ln_cpu_nbr = math.ceil(float(ln_cpu_usage)/100 * ln_cpu) 376 | ln_iops_usage = math.ceil(row[9]) 377 | ls_networkperf = row[5] 378 | ln_network_usage = math.ceil(row[10]) 379 | 380 | if ls_networkperf == '10 Gigabit': 381 | ln_network_level_usage = 99 382 | else: 383 | if ln_network_usage<=300: 384 | ln_network_level_usage = 1 385 | elif ln_network_usage>300 and ln_network_usage<=1000: 
386 | ln_network_level_usage = 2 387 | else: 388 | ln_network_level_usage = 3 389 | 390 | ls_resizetype_sql = "select regionabbr, instancetype, priceperunit, storage, vcpu, " 391 | ls_resizetype_sql += " case when networkperformance='Low' then 1 when networkperformance='Moderate' then 2 when networkperformance='High' then 3 else 99 end as networkperformance, networkperformance as newnetwork, memory from " + pricelist_table 392 | ls_resizetype_sql += " where termtype='OnDemand' and location<>'AWS GovCloud (US)' and servicecode='AmazonEC2' " 393 | ls_resizetype_sql += " and tenancy='Shared' and processorarchitecture='64-bit' and operatingsystem='Linux' " 394 | ls_resizetype_sql += " and regionabbr = '" + row[0] + "' " 395 | if ln_network_level_usage == 99: 396 | ls_resizetype_sql += " and networkperformance = '10 Gigabit' " 397 | if ln_ssd_type == 'NVMe': 398 | ls_resizetype_sql += " and storage like '%NVMe%' " 399 | else: 400 | ls_resizetype_sql += " and storage not like '%NVMe%' " 401 | ls_resizetype_sql += " order by to_number(trim(both ' ' from priceperunit),'9999999D99999999')" 402 | 403 | determine_right_type(db_conn, ls_resizetype_sql, ls_temp_table, ls_instanceid, ln_iops_usage, ln_ssd_total_size, ln_cpu_nbr, ln_network_level_usage, ln_rate, ln_mem) 404 | 405 | print("\n") 406 | ls_update_costsaved = "update " + ls_temp_table + " set costsavedpermonth=(to_number(trim(both ' ' from priceperunit),'9999999D99999999') - to_number(trim(both ' ' from resizeprice),'9999999D99999999'))*30*24 " 407 | execute_dml_ddl(db_conn, ls_update_costsaved) 408 | ls_update_totalsaved = "insert into " + ls_temp_table + " (region,costsavedpermonth) select 'Total', sum(to_number(trim(both ' ' from costsavedpermonth),'9999999999D99999999')) from " + ls_temp_table 409 | execute_dml_ddl(db_conn, ls_update_totalsaved) 410 | ls_delete_sametype = "delete from " + ls_temp_table + " where instancetype=resizetype" 411 | execute_dml_ddl(db_conn, ls_delete_sametype) 412 | 413 | cur.close() 414 | return ls_temp_table 415 | 416 | def dump_results(db_conn, sql_stat, csv_filename): 417 | cur_csv = db_conn.cursor() 418 | cur_csv.execute(sql_stat) 419 | row_csv = cur_csv.fetchall() 420 | csvfile = open(csv_filename, 'w') 421 | writers = csv.writer(csvfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_ALL, lineterminator='\n') 422 | writers.writerow(['region', 'InstanceId', 'Old-InstanceType', 'Old-vCPU', 'Old-Memory', 'Old-Storage', 'Old-NetworkPerformance', 'Old-Rate', 'New-InstanceType', 'New-vCPU', 'New-Memory', 'New-Storage', 'New-Network', 'New-Rate', 'Cost-Saved-Per-Month', 'Max-CPU-Uti', 'Max-IOPS', 'Max-Network', 'Instance-Tag']) 423 | 424 | for line in row_csv: 425 | writers.writerow(line) 426 | 427 | csvfile.close() 428 | cur_csv.close() 429 | 430 | # Main 431 | if __name__ == "__main__": 432 | logging.basicConfig(level=logging.INFO, format='%(asctime)s %(levelname)-8s %(message)s') 433 | 434 | logging.info("Downloading the CloudWatch metrics") 435 | sys.path.append(os.path.dirname(os.path.abspath(__file__))) # make the sibling scripts importable 436 | import callgcw 437 | ls_cwfile = callgcw.call_gcw(CW_REGION, CW_ACCOUNT, CW_MODE, CW_STATISTICS, CW_PERIOD, CW_STARTTIME, CW_ENDTIME, CW_OUTPUT) 438 | #ls_cwfile = "result.20160825.csv.gz" 439 | logging.info("Finished downloading the CloudWatch metrics to file %s " % ls_cwfile) 440 | 441 | logging.info("Uploading the CloudWatch file to S3") 442 | upload_s3(S3_BUCKET, ls_cwfile, ls_cwfile) 443 | logging.info("Finished uploading the CloudWatch file to S3 bucket %s " % (S3_BUCKET)) 444 | 445 | logging.info("Downloading the EC2 pricelist file and uploading it to the S3 bucket") 446 | ls_ec2pricelist_filename = download_ec2pricelist() 447 | logging.info("Finished downloading the EC2 pricelist file and uploading it to S3: %s " % (ls_ec2pricelist_filename)) 448 | 449 | conn = db_conn(DB_HOST, DB_PORT, DB_USER, DB_PASS, DB_NAME) 450 | logging.info("Connected to the database") 451 | 452 | logging.info("Importing the CloudWatch file into Redshift") 453 | ls_cw_tabname = import_cwdata(conn, ls_cwfile, 0, "Y") 454 | logging.info("Finished importing the CloudWatch file into Redshift table: %s " % (ls_cw_tabname)) 455 | 456 | logging.info("Importing the EC2 pricelist to Redshift") 457 | ls_pricelist_tabname = import_ec2pricelist(conn, ls_ec2pricelist_filename) 458 | logging.info("Finished importing the EC2 pricelist into Redshift table: %s " % (ls_pricelist_tabname)) 459 | 460 | logging.info("Analyzing the instances that need to be resized") 461 | ls_temp_table = right_sizing(conn, ls_pricelist_tabname, ls_cw_tabname) 462 | logging.info("Finished the analysis; stored the candidate instances in table %s " % (ls_temp_table)) 463 | 464 | logging.info("Dumping the results into a CSV file") 465 | #ls_csv_sql = "select * from " + ls_temp_table + " order by to_number(trim(both ' ' from costsavedpermonth),'9999999999D99999999')" 466 | ls_csv_sql = " select region, instanceid, instancetype, vcpu, memory, storage, networkperformance, priceperunit, " 467 | ls_csv_sql += " resizetype, newvcpu, newmemory, newstorage, newnetwork, resizeprice, costsavedpermonth, maxcpu, maxiops, maxnetwork, instancetags " 468 | ls_csv_sql += " from " + ls_temp_table + " order by to_number(trim(both ' ' from costsavedpermonth),'9999999999D99999999')" 469 | ls_csvfile = "results_" + ls_temp_table + ".csv" 470 | dump_results(conn, ls_csv_sql, ls_csvfile) 471 | logging.info("Finished dumping to the CSV file %s " % (ls_csvfile)) 472 | 473 | logging.info("Uploading the rightsizing results file to S3 bucket %s " % (S3_BUCKET)) 474 | upload_s3(S3_BUCKET, ls_csvfile, ls_csvfile) 475 | 476 | #logging.info("Delete the temp table with EC2 pricelist %s " % (ls_pricelist_tabname)) 477 | #execute_dml_ddl(conn, "drop table "+ls_pricelist_tabname) 478 | #logging.info("Delete the temp table with instances need to be resized %s " % (ls_temp_table)) 479 | #execute_dml_ddl(conn, "drop table "+ls_temp_table) 480 | #if CURRENTOS == "Windows": 481 | # logging.info("Delete the temp table with CloudWatch data %s " % (ls_cw_tabname)) 482 | # execute_dml_ddl(conn, "drop table "+ls_cw_tabname) 483 | 484 | logging.info("Analysis complete.") 485 | conn.close() 486 | --------------------------------------------------------------------------------
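A final note on run-rightsizing-redshift.py: determine_right_type() assembles its UPDATE statements by string concatenation. A hedged sketch of the same update using psycopg2 parameter binding (update_resize_type and its arguments are illustrative names, not part of the solution; a table name cannot be a bind placeholder, so it is still interpolated and must remain an internally generated name like the temp tables above):

def update_resize_type(conn, temp_table, instance_id, new_type, new_rate,
                       new_vcpu, new_mem, new_storage, new_network):
    # Every value is passed through a psycopg2 %s placeholder; only the
    # internally generated temp-table name is interpolated into the SQL text.
    sql = ("update " + temp_table +
           " set resizetype=%s, resizeprice=%s, newvcpu=%s, newmemory=%s,"
           " newstorage=%s, newnetwork=%s where instanceid=%s")
    with conn.cursor() as cur:
        cur.execute(sql, (new_type, str(new_rate), str(new_vcpu),
                          str(new_mem) + " GiB", new_storage,
                          new_network, instance_id))
    conn.commit()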