├── .github └── PULL_REQUEST_TEMPLATE.md ├── .gitignore ├── CHANGELOG.md ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE.txt ├── NOTICE.txt ├── Readme.md ├── deployment ├── build-s3-dist.sh ├── crr-agent.template └── crr-monitor.template └── source ├── CRRHourlyMaint └── CRRHourlyMaint.py ├── CRRMonitor └── CRRMonitor.py ├── CRRMonitorHousekeeping └── CRRMonitorHousekeeping.py ├── CRRMonitorTrailAlarm ├── CRRMonitorTrailAlarm.py └── cfn_resource.py ├── CRRdeployagent ├── CRRdeployagent.py └── cfn_resource.py └── solution-helper └── solution-helper.py /.github/PULL_REQUEST_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | *Issue #, if available:* 2 | 3 | *Description of changes:* 4 | 5 | 6 | By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. 7 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | **/dist 2 | **/.zip 3 | **/.DS_Store 4 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | # Change Log 2 | All notable changes to this project will be documented in this file. 3 | 4 | The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), 5 | and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). 
6 | 7 | --- 8 | 9 | ## [2.0.1] - 2020-03-26 10 | ### Changed 11 | * Removed `botocore.vendored.request` dependency 12 | * Updated Lambda runtimes to Python 3.8 -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. 5 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check [existing open](https://github.com/awslabs/crr-monitor/issues), or [recently closed](https://github.com/awslabs/crr-monitor/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20), issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. 
Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *master* branch. 27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 38 | 39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute to. As our projects use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any ['help wanted'](https://github.com/awslabs/crr-monitor/labels/help%20wanted) issues is a great place to start.
45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments. 51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public github issue. 55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](https://github.com/awslabs/crr-monitor/blob/master/LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 60 | 61 | We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes. 62 | -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | 2 | Apache License 3 | Version 2.0, January 2004 4 | http://www.apache.org/licenses/ 5 | 6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 7 | 8 | 1. Definitions. 9 | 10 | "License" shall mean the terms and conditions for use, reproduction, 11 | and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by 14 | the copyright owner that is granting the License. 15 | 16 | "Legal Entity" shall mean the union of the acting entity and all 17 | other entities that control, are controlled by, or are under common 18 | control with that entity. 
For the purposes of this definition, 19 | "control" means (i) the power, direct or indirect, to cause the 20 | direction or management of such entity, whether by contract or 21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 22 | outstanding shares, or (iii) beneficial ownership of such entity. 23 | 24 | "You" (or "Your") shall mean an individual or Legal Entity 25 | exercising permissions granted by this License. 26 | 27 | "Source" form shall mean the preferred form for making modifications, 28 | including but not limited to software source code, documentation 29 | source, and configuration files. 30 | 31 | "Object" form shall mean any form resulting from mechanical 32 | transformation or translation of a Source form, including but 33 | not limited to compiled object code, generated documentation, 34 | and conversions to other media types. 35 | 36 | "Work" shall mean the work of authorship, whether in Source or 37 | Object form, made available under the License, as indicated by a 38 | copyright notice that is included in or attached to the work 39 | (an example is provided in the Appendix below). 40 | 41 | "Derivative Works" shall mean any work, whether in Source or Object 42 | form, that is based on (or derived from) the Work and for which the 43 | editorial revisions, annotations, elaborations, or other modifications 44 | represent, as a whole, an original work of authorship. For the purposes 45 | of this License, Derivative Works shall not include works that remain 46 | separable from, or merely link (or bind by name) to the interfaces of, 47 | the Work and Derivative Works thereof. 
48 | 49 | "Contribution" shall mean any work of authorship, including 50 | the original version of the Work and any modifications or additions 51 | to that Work or Derivative Works thereof, that is intentionally 52 | submitted to Licensor for inclusion in the Work by the copyright owner 53 | or by an individual or Legal Entity authorized to submit on behalf of 54 | the copyright owner. For the purposes of this definition, "submitted" 55 | means any form of electronic, verbal, or written communication sent 56 | to the Licensor or its representatives, including but not limited to 57 | communication on electronic mailing lists, source code control systems, 58 | and issue tracking systems that are managed by, or on behalf of, the 59 | Licensor for the purpose of discussing and improving the Work, but 60 | excluding communication that is conspicuously marked or otherwise 61 | designated in writing by the copyright owner as "Not a Contribution." 62 | 63 | "Contributor" shall mean Licensor and any individual or Legal Entity 64 | on behalf of whom a Contribution has been received by Licensor and 65 | subsequently incorporated within the Work. 66 | 67 | 2. Grant of Copyright License. Subject to the terms and conditions of 68 | this License, each Contributor hereby grants to You a perpetual, 69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 70 | copyright license to reproduce, prepare Derivative Works of, 71 | publicly display, publicly perform, sublicense, and distribute the 72 | Work and such Derivative Works in Source or Object form. 73 | 74 | 3. Grant of Patent License. 
Subject to the terms and conditions of 75 | this License, each Contributor hereby grants to You a perpetual, 76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 77 | (except as stated in this section) patent license to make, have made, 78 | use, offer to sell, sell, import, and otherwise transfer the Work, 79 | where such license applies only to those patent claims licensable 80 | by such Contributor that are necessarily infringed by their 81 | Contribution(s) alone or by combination of their Contribution(s) 82 | with the Work to which such Contribution(s) was submitted. If You 83 | institute patent litigation against any entity (including a 84 | cross-claim or counterclaim in a lawsuit) alleging that the Work 85 | or a Contribution incorporated within the Work constitutes direct 86 | or contributory patent infringement, then any patent licenses 87 | granted to You under this License for that Work shall terminate 88 | as of the date such litigation is filed. 89 | 90 | 4. Redistribution. 
You may reproduce and distribute copies of the 91 | Work or Derivative Works thereof in any medium, with or without 92 | modifications, and in Source or Object form, provided that You 93 | meet the following conditions: 94 | 95 | (a) You must give any other recipients of the Work or 96 | Derivative Works a copy of this License; and 97 | 98 | (b) You must cause any modified files to carry prominent notices 99 | stating that You changed the files; and 100 | 101 | (c) You must retain, in the Source form of any Derivative Works 102 | that You distribute, all copyright, patent, trademark, and 103 | attribution notices from the Source form of the Work, 104 | excluding those notices that do not pertain to any part of 105 | the Derivative Works; and 106 | 107 | (d) If the Work includes a "NOTICE" text file as part of its 108 | distribution, then any Derivative Works that You distribute must 109 | include a readable copy of the attribution notices contained 110 | within such NOTICE file, excluding those notices that do not 111 | pertain to any part of the Derivative Works, in at least one 112 | of the following places: within a NOTICE text file distributed 113 | as part of the Derivative Works; within the Source form or 114 | documentation, if provided along with the Derivative Works; or, 115 | within a display generated by the Derivative Works, if and 116 | wherever such third-party notices normally appear. The contents 117 | of the NOTICE file are for informational purposes only and 118 | do not modify the License. You may add Your own attribution 119 | notices within Derivative Works that You distribute, alongside 120 | or as an addendum to the NOTICE text from the Work, provided 121 | that such additional attribution notices cannot be construed 122 | as modifying the License. 
123 | 124 | You may add Your own copyright statement to Your modifications and 125 | may provide additional or different license terms and conditions 126 | for use, reproduction, or distribution of Your modifications, or 127 | for any such Derivative Works as a whole, provided Your use, 128 | reproduction, and distribution of the Work otherwise complies with 129 | the conditions stated in this License. 130 | 131 | 5. Submission of Contributions. Unless You explicitly state otherwise, 132 | any Contribution intentionally submitted for inclusion in the Work 133 | by You to the Licensor shall be under the terms and conditions of 134 | this License, without any additional terms or conditions. 135 | Notwithstanding the above, nothing herein shall supersede or modify 136 | the terms of any separate license agreement you may have executed 137 | with Licensor regarding such Contributions. 138 | 139 | 6. Trademarks. This License does not grant permission to use the trade 140 | names, trademarks, service marks, or product names of the Licensor, 141 | except as required for reasonable and customary use in describing the 142 | origin of the Work and reproducing the content of the NOTICE file. 143 | 144 | 7. Disclaimer of Warranty. Unless required by applicable law or 145 | agreed to in writing, Licensor provides the Work (and each 146 | Contributor provides its Contributions) on an "AS IS" BASIS, 147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 148 | implied, including, without limitation, any warranties or conditions 149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 150 | PARTICULAR PURPOSE. You are solely responsible for determining the 151 | appropriateness of using or redistributing the Work and assume any 152 | risks associated with Your exercise of permissions under this License. 153 | 154 | 8. Limitation of Liability. 
In no event and under no legal theory, 155 | whether in tort (including negligence), contract, or otherwise, 156 | unless required by applicable law (such as deliberate and grossly 157 | negligent acts) or agreed to in writing, shall any Contributor be 158 | liable to You for damages, including any direct, indirect, special, 159 | incidental, or consequential damages of any character arising as a 160 | result of this License or out of the use or inability to use the 161 | Work (including but not limited to damages for loss of goodwill, 162 | work stoppage, computer failure or malfunction, or any and all 163 | other commercial damages or losses), even if such Contributor 164 | has been advised of the possibility of such damages. 165 | 166 | 9. Accepting Warranty or Additional Liability. While redistributing 167 | the Work or Derivative Works thereof, You may choose to offer, 168 | and charge a fee for, acceptance of support, warranty, indemnity, 169 | or other liability obligations and/or rights consistent with this 170 | License. However, in accepting such obligations, You may act only 171 | on Your own behalf and on Your sole responsibility, not on behalf 172 | of any other Contributor, and only if You agree to indemnify, 173 | defend, and hold each Contributor harmless for any liability 174 | incurred by, or claims asserted against, such Contributor by reason 175 | of your accepting any such warranty or additional liability. -------------------------------------------------------------------------------- /NOTICE.txt: -------------------------------------------------------------------------------- 1 | Cross Region Replication Monitor 2 | Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 | 4 | Licensed under the Amazon Software License (the "License"). You may not use this file except 5 | in compliance with the License. A copy of the License is located at http://aws.amazon.com/asl/ 6 | or in the "license" file accompanying this file. 
This file is distributed on an "AS IS" BASIS, 7 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the 8 | specific language governing permissions and limitations under the License. 9 | 10 | ********************** 11 | THIRD PARTY COMPONENTS 12 | ********************** 13 | This software includes third party software subject to the following copyrights: 14 | 15 | Boto under the Apache License Version 2.0 16 | 17 | The licenses for these third party components are included in LICENSE.txt -------------------------------------------------------------------------------- /Readme.md: -------------------------------------------------------------------------------- 1 | # Deprecation Notice 2 | 3 | This AWS Solution has been archived and is no longer maintained by AWS. To discover other solutions, please visit the [AWS Solutions Library](https://aws.amazon.com/solutions/). 4 | 5 | # S3 Cross Region Replication Monitor 6 | 7 | A solution for monitoring Amazon S3 Cross-Region Replication. 8 | 9 | ## Overview 10 | Amazon Simple Storage Service (Amazon S3) offers cross-region replication, a bucket-level feature that enables automatic, asynchronous copying of objects across buckets in different AWS Regions. This feature can help companies minimize latency when accessing objects in different geographic regions, meet compliance requirements, and support operational needs. Amazon S3 encrypts all data in transit across AWS Regions using SSL, and objects in the destination bucket are exact replicas of objects in the source bucket. For more information on cross-region replication, see the Amazon S3 Developer Guide. AWS customers can retrieve the replication status of their objects manually or use an Amazon S3 inventory to generate metrics on a daily or weekly basis. 11 | 12 | To help customers more proactively monitor the replication status of their Amazon S3 objects, AWS offers the Cross-Region Replication Monitor (CRR Monitor) solution.
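The manual status check mentioned above can be sketched with a short Python snippet. This is a minimal illustration, not part of the solution's source: the bucket and key names are hypothetical, and the `NOT_CONFIGURED` label is our own convention, since S3 simply omits the `ReplicationStatus` field for objects in buckets without a replication configuration.

```python
def replication_status(head_object_response: dict) -> str:
    """Classify an object's replication state from an S3 HeadObject response.

    S3 returns a ReplicationStatus field (e.g. PENDING, COMPLETED, FAILED,
    or REPLICA) only for objects subject to a replication configuration;
    when the field is absent we report NOT_CONFIGURED.
    """
    return head_object_response.get("ReplicationStatus", "NOT_CONFIGURED")

# Typical use (hypothetical bucket and key; requires AWS credentials):
#   import boto3
#   resp = boto3.client("s3").head_object(Bucket="my-source-bucket",
#                                         Key="data/report.csv")
#   print(replication_status(resp))
```

CRR Monitor automates this lookup across buckets and Regions and turns the per-object status into CloudWatch metrics and SNS notifications.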
The CRR Monitor automatically checks the replication status of Amazon S3 objects across all AWS Regions in a customer’s account, providing near real-time metrics and failure notifications to help customers proactively identify failures and troubleshoot problems. The solution automatically provisions the necessary AWS services to monitor and view replication status, including AWS Lambda, Amazon CloudWatch, Amazon Simple Notification Service (Amazon SNS), AWS CloudTrail, and Amazon DynamoDB, and offers an option to use Amazon Kinesis Firehose to archive replication metadata in Amazon S3. 13 | 14 | ## CloudFormation Templates 15 | * crr-monitor.template 16 | * crr-agent.template 17 | 18 | ## Lambda Scripts 19 | * CRRdeployagent.py 20 | * CRRMonitor.py 21 | * CRRHourlyMaint.py 22 | * CRRMonitorHousekeeping.py 23 | * CRRMonitorTrailAlarm.py 24 | * solution-helper.py 25 | 26 | 27 | ## Collection of operational metrics 28 | This solution collects anonymous operational metrics to help AWS improve the quality and features of the solution. For more information, including how to disable this capability, please see the [implementation guide](https://docs.aws.amazon.com/solutions/latest/crr-monitor/appendix.html). 29 | 30 | 31 | *** 32 | 33 | Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. 34 | 35 | Licensed under the Amazon Software License (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at 36 | 37 | http://aws.amazon.com/asl/ 38 | 39 | or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions and limitations under the License.
40 | 41 | 42 | 43 | 44 | -------------------------------------------------------------------------------- /deployment/build-s3-dist.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # This assumes all of the OS-level configuration has been completed and the git repo has already been cloned 4 | # 5 | # This script should be run from the repo's deployment directory 6 | # cd deployment 7 | # ./build-s3-dist.sh source-bucket-base-name solution-name version-code 8 | # 9 | # Parameters: 10 | # - source-bucket-base-name: Name for the S3 bucket location where the template will source the Lambda 11 | # code from. The template will append '-[region_name]' to this bucket name. 12 | # For example: ./build-s3-dist.sh solutions my-solution v1.0.0 13 | # The template will then expect the source code to be located in the solutions-[region_name] bucket 14 | # 15 | # - solution-name: name of the solution for consistency 16 | # 17 | # - version-code: version of the package 18 | 19 | # Check to see if input has been provided: 20 | if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ]; then 21 | echo "Please provide the base source bucket name, trademark-approved solution name, and version where the lambda code will eventually reside."
22 | echo "For example: ./build-s3-dist.sh solutions trademarked-solution-name v1.0.0" 23 | exit 1 24 | fi 25 | 26 | # define main directories 27 | template_dir="$PWD" 28 | template_dist_dir="$template_dir/global-s3-assets" 29 | build_dist_dir="$template_dir/regional-s3-assets" 30 | source_dir="$template_dir/../source" 31 | 32 | # clean up old build files 33 | rm -rf $template_dist_dir 34 | mkdir -p $template_dist_dir 35 | rm -rf $build_dist_dir 36 | mkdir -p $build_dist_dir 37 | 38 | SUB1="s/CODE_BUCKET/$1/g" 39 | SUB2="s/SOLUTION_NAME/$2/g" 40 | SUB3="s/SOLUTION_VERSION/$3/g" 41 | 42 | for FULLNAME in ./*.template 43 | do 44 | TEMPLATE=`basename $FULLNAME` 45 | echo "Preparing $TEMPLATE" 46 | sed -e $SUB1 -e $SUB2 -e $SUB3 $template_dir/$TEMPLATE > $template_dist_dir/$TEMPLATE 47 | done 48 | 49 | for lambda_pkg in CRRdeployagent CRRHourlyMaint CRRMonitor CRRMonitorHousekeeping CRRMonitorTrailAlarm solution-helper; do 50 | echo "cd $source_dir/$lambda_pkg" 51 | cd $source_dir/$lambda_pkg 52 | echo "zip -q -r9 $build_dist_dir/$lambda_pkg.zip *" 53 | zip -q -r9 $build_dist_dir/$lambda_pkg.zip * 54 | done 55 | -------------------------------------------------------------------------------- /deployment/crr-agent.template: -------------------------------------------------------------------------------- 1 | { 2 | "AWSTemplateFormatVersion": "2010-09-09", 3 | "Description": "(SO0022) - CRR Monitor Remote Agent: Cross-Region Replication Monitor Remote Agent for S3. 
This template creates necessary IAM policies and CloudWatch Event configuration to allow the CRR Manager account to monitor S3 Cross Region Replication.", 4 | "Parameters": { 5 | "CRRMonitorAccount": { 6 | "Description": "AWS AccountId where the CRR Monitor manager component will run.", 7 | "AllowedPattern": "[0-9]{12}", 8 | "Type": "String", 9 | "ConstraintDescription": "Please enter a 12-digit AWS Account ID" 10 | } 11 | }, 12 | "Mappings": { 13 | "Function": { 14 | "CustomResourceDeployAgent": { 15 | "S3Bucket": "CODE_BUCKET", 16 | "S3Key": "SOLUTION_NAME/SOLUTION_VERSION/CRRdeployagent.zip", 17 | "Description": "crr: CloudFormation custom resource function invoked during CloudFormation create, update, and delete stack operations." 18 | }, 19 | "SolutionHelperFunction": { 20 | "S3Bucket": "CODE_BUCKET", 21 | "S3Key": "SOLUTION_NAME/SOLUTION_VERSION/solution-helper.zip", 22 | "Description": "crr: CloudFormation custom resource function for generating UUID." 23 | } 24 | } 25 | }, 26 | "Resources": { 27 | "CRRMonitorRole": { 28 | "Type": "AWS::IAM::Role", 29 | "Properties": { 30 | "RoleName": "CRRMonitorRole", 31 | "AssumeRolePolicyDocument": { 32 | "Version": "2012-10-17", 33 | "Statement": [ 34 | { 35 | "Effect": "Allow", 36 | "Principal": { 37 | "AWS": { "Ref": "CRRMonitorAccount" } 38 | }, 39 | "Action": [ 40 | "sts:AssumeRole" 41 | ] 42 | } 43 | ] 44 | }, 45 | "Path": "/", 46 | "Policies": [ 47 | { 48 | "PolicyName": "CRRMonitorPolicy", 49 | "PolicyDocument": { 50 | "Version": "2012-10-17", 51 | "Statement": [ 52 | { 53 | "Sid": "S3ListPerms", 54 | "Effect": "Allow", 55 | "Action": [ 56 | "s3:ListBucket", 57 | "s3:GetReplicationConfiguration", 58 | "s3:GetObject" 59 | ], 60 | "Resource": [ 61 | "arn:aws:s3:::*", 62 | "arn:aws:s3:::*/*" 63 | ] 64 | }, 65 | { 66 | "Sid": "S3Perms", 67 | "Effect": "Allow", 68 | "Action": [ 69 | "s3:ListAllMyBuckets", 70 | "s3:HeadBucket" 71 | ], 72 | "Resource": "*" 73 | } 74 | ] 75 | } 76 | } 77 | ] 78 | }, 79 | "Metadata": { 80 
| "cfn_nag": { 81 | "rules_to_suppress": [ 82 | { 83 | "id": "W11", 84 | "reason": "The cloudwatch:ListMetrics,s3:ListAllMyBuckets action requires the wildcard ('*') resource identifier to function properly.Supporting documentation available at (https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-identity-based-access-control-cw.html) and (https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html)" 85 | }, 86 | { 87 | "id": "W28", 88 | "reason": "CRRMonitor Role name has been specified to assume role" 89 | } 90 | ] 91 | } 92 | } 93 | }, 94 | "CRRMonitorRemoteDeployRole": { 95 | "Type": "AWS::IAM::Role", 96 | "Properties": { 97 | "AssumeRolePolicyDocument": { 98 | "Version": "2012-10-17", 99 | "Statement": [ 100 | { 101 | "Effect": "Allow", 102 | "Principal": { 103 | "Service": [ 104 | "lambda.amazonaws.com" 105 | ] 106 | }, 107 | "Action": [ 108 | "sts:AssumeRole" 109 | ] 110 | } 111 | ] 112 | }, 113 | "Path": "/", 114 | "Policies": [ 115 | { 116 | "PolicyName": "CRRMonitorDeployPolicy", 117 | "PolicyDocument": { 118 | "Version": "2012-10-17", 119 | "Statement": [ 120 | { 121 | "Sid": "CloudTrailLogPerms", 122 | "Effect": "Allow", 123 | "Action": [ 124 | "logs:CreateLogGroup", 125 | "logs:CreateLogStream", 126 | "logs:PutLogEvents", 127 | "logs:DescribeLogStreams" 128 | ], 129 | "Resource": [ 130 | { 131 | "Fn::Join": [ 132 | "", 133 | [ 134 | "arn:aws:logs:", 135 | { 136 | "Ref": "AWS::Region" 137 | }, 138 | ":", 139 | { 140 | "Ref": "AWS::AccountId" 141 | }, 142 | ":log-group:*" 143 | ] 144 | ] 145 | } 146 | ] 147 | }, 148 | { 149 | "Sid": "S3ListPerms", 150 | "Effect": "Allow", 151 | "Action": [ 152 | "s3:ListBucket", 153 | "s3:GetObjectVersionForReplication", 154 | "s3:GetReplicationConfiguration", 155 | "s3:GetObject" 156 | ], 157 | "Resource": [ 158 | "arn:aws:s3:::*", 159 | "arn:aws:s3:::*/*" 160 | ] 161 | }, 162 | { 163 | "Sid": "S3Perms", 164 | "Effect": "Allow", 165 | "Action": [ 166 | "s3:ListAllMyBuckets", 167 | 
"s3:HeadBucket" 168 | ], 169 | "Resource": "*" 170 | }, 171 | { 172 | "Sid": "DeployAgentEC2Perms", 173 | "Effect": "Allow", 174 | "Action": [ 175 | "ec2:DescribeRegions" 176 | ], 177 | "Resource": "*" 178 | }, 179 | { 180 | "Sid": "EventBusPerms", 181 | "Effect": "Allow", 182 | "Action": [ 183 | "events:PutTargets" 184 | ], 185 | "Resource": "*", 186 | "Condition": { 187 | "ArnLike": { 188 | "events:TargetArn": [ 189 | { "Fn::Join": [ 190 | "", 191 | [ 192 | "arn:aws:events:*:", 193 | { 194 | "Ref": "CRRMonitorAccount" 195 | }, 196 | ":event-bus/default" 197 | ] 198 | ] 199 | } 200 | ] 201 | } 202 | } 203 | }, 204 | { 205 | "Sid": "PutEventsPerms", 206 | "Effect": "Allow", 207 | "Action": [ 208 | "events:PutRule" 209 | ], 210 | "Resource": "*" 211 | }, 212 | { 213 | "Sid": "EventsPerms", 214 | "Effect": "Allow", 215 | "Action": [ 216 | "events:DeleteRule", 217 | "events:EnableRule", 218 | "events:RemoveTargets" 219 | ], 220 | "Resource": [ 221 | { "Fn::Join": [ 222 | "", 223 | [ 224 | "arn:aws:events:*:", 225 | { 226 | "Ref": "AWS::AccountId" 227 | }, 228 | ":rule/CRR*" 229 | ] 230 | ] } 231 | ] 232 | } 233 | ] 234 | } 235 | } 236 | ] 237 | }, 238 | "Metadata": { 239 | "cfn_nag": { 240 | "rules_to_suppress": [ 241 | { 242 | "id": "W11", 243 | "reason": "The s3:ListAllMyBuckets,ec2:DescribeRegions,events:PutTargets action requires the wildcard ('*') resource identifier to function properly.Supporting documentation available at (https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html),(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ExamplePolicies_EC2.html) and (https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/permissions-reference-cwe.html)" 244 | 245 | } 246 | ] 247 | } 248 | } 249 | }, 250 | "CRRDeployRemoteAgentLambda": { 251 | "Type": "AWS::Lambda::Function", 252 | "Properties": { 253 | "Code": { 254 | "S3Bucket": { 255 | "Fn::Join": [ 256 | "", 257 | [ 258 | { 259 | "Fn::FindInMap": [ 260 | "Function", 261 | 
"CustomResourceDeployAgent", 262 | "S3Bucket" 263 | ] 264 | }, 265 | "-", 266 | { 267 | "Ref": "AWS::Region" 268 | } 269 | ] 270 | ] 271 | }, 272 | "S3Key": { 273 | "Fn::FindInMap": [ 274 | "Function", 275 | "CustomResourceDeployAgent", 276 | "S3Key" 277 | ] 278 | } 279 | }, 280 | "Handler": "CRRdeployagent.handler", 281 | "FunctionName": "CRRDeployAgent", 282 | "Role": { "Fn::GetAtt": [ "CRRMonitorRemoteDeployRole", "Arn" ] }, 283 | "Runtime": "python3.8", 284 | "Timeout": 300 285 | } 286 | }, 287 | "CustomDeploy": { 288 | "Type": "Custom::DeployAgent", 289 | "Properties": { 290 | "ServiceToken": { "Fn::GetAtt": [ "CRRDeployRemoteAgentLambda", "Arn" ] }, 291 | "CRRMonitorAccount": { "Ref": "CRRMonitorAccount" } 292 | } 293 | }, 294 | "SolutionHelperRole": { 295 | "Type": "AWS::IAM::Role", 296 | "Properties": { 297 | "AssumeRolePolicyDocument": { 298 | "Version": "2012-10-17", 299 | "Statement": [ 300 | { 301 | "Effect": "Allow", 302 | "Principal": { 303 | "Service": "lambda.amazonaws.com" 304 | }, 305 | "Action": "sts:AssumeRole" 306 | } 307 | ] 308 | }, 309 | "Path": "/", 310 | "Policies": [ 311 | { 312 | "PolicyName": "Custom_Solution_Helper_Permissions", 313 | "PolicyDocument": { 314 | "Version": "2012-10-17", 315 | "Statement": [ 316 | { 317 | "Effect": "Allow", 318 | "Action": [ 319 | "logs:CreateLogGroup", 320 | "logs:CreateLogStream", 321 | "logs:PutLogEvents" 322 | ], 323 | "Resource": { 324 | "Fn::Join": [ 325 | "", 326 | [ 327 | "arn:aws:logs:", 328 | { 329 | "Ref": "AWS::Region" 330 | }, 331 | ":", 332 | { 333 | "Ref": "AWS::AccountId" 334 | }, 335 | ":log-group:/aws/lambda/*" 336 | ] 337 | ] 338 | } 339 | } 340 | ] 341 | } 342 | } 343 | ] 344 | }, 345 | "Metadata": { 346 | "cfn_nag": { 347 | "rules_to_suppress": [ 348 | { 349 | "id": "W11", 350 | "reason": "Requires log-group level access :log-group:/aws/lambda/*" 351 | } 352 | ] 353 | } 354 | } 355 | }, 356 | "SolutionHelper": { 357 | "Type": "AWS::Lambda::Function", 358 | "Properties": { 359 | 
"Handler": "solution-helper.lambda_handler", 360 | "Role": { 361 | "Fn::GetAtt": [ 362 | "SolutionHelperRole", 363 | "Arn" 364 | ] 365 | }, 366 | "Description": "This function creates a CloudFormation custom lambda resource that creates custom lambda functions by finding and replacing specific values from existing lambda function code.", 367 | "Code": { 368 | "S3Bucket": { 369 | "Fn::Join": [ 370 | "", 371 | [ 372 | { 373 | "Fn::FindInMap": [ 374 | "Function", 375 | "SolutionHelperFunction", 376 | "S3Bucket" 377 | ] 378 | }, 379 | "-", 380 | { 381 | "Ref": "AWS::Region" 382 | } 383 | ] 384 | ] 385 | }, 386 | "S3Key": { 387 | "Fn::FindInMap": [ 388 | "Function", 389 | "SolutionHelperFunction", 390 | "S3Key" 391 | ] 392 | } 393 | }, 394 | "Runtime": "python3.8", 395 | "Timeout": 300 396 | } 397 | }, 398 | "UUIDGenerator": { 399 | "Type": "Custom::UUIDGenerator", 400 | "Properties": { 401 | "ServiceToken": { 402 | "Fn::GetAtt": [ 403 | "SolutionHelper", 404 | "Arn" 405 | ] 406 | }, 407 | "Region": { "Ref": "AWS::Region" } 408 | } 409 | } 410 | }, 411 | "Outputs": { 412 | "UUID": { 413 | "Description": "Newly created random UUID.", 414 | "Value": { 415 | "Fn::GetAtt": [ 416 | "UUIDGenerator", 417 | "UUID" 418 | ] 419 | } 420 | } 421 | }, 422 | "Metadata": { 423 | "AWS::CloudFormation::Interface": { 424 | "ParameterGroups": [ 425 | { 426 | "Label": { 427 | "default": "Deployment" 428 | }, 429 | "Parameters": [ 430 | "CRRMonitorAccount" 431 | ] 432 | } 433 | ], 434 | "ParameterLabels" : { 435 | "CRRMonitorAccount" : { "default" : "CRR Monitor Account" } 436 | } 437 | } 438 | } 439 | } 440 | -------------------------------------------------------------------------------- /deployment/crr-monitor.template: -------------------------------------------------------------------------------- 1 | { 2 | "AWSTemplateFormatVersion": "2010-09-09", 3 | "Description": "(SO0022) - CRR Monitor: Cross-Region Replication monitor for S3. 
This template should be deployed in a single region per account, and requires Lambda, SNS, SQS and DynamoDB.", 4 | "Parameters": { 5 | "ArchiveToS3": { 6 | "Description": "Do you want to archive the DynamoDB data to S3? If so, deploy in a Kinesis Data Firehose-supported region.", 7 | "Type": "String", 8 | "Default": "No", 9 | "AllowedValues": [ 10 | "Yes", 11 | "No" 12 | ] 13 | }, 14 | "S3ArchiveBucket": { 15 | "Description": "The name of the S3 bucket that will contain the data archived from DynamoDB", 16 | "Type": "String" 17 | }, 18 | "remoteAccounts": { 19 | "Description": "A list of accounts that will be monitored. Each must have had crr-agent.template deployed.", 20 | "Type": "CommaDelimitedList" 21 | } 22 | }, 23 | "Mappings": { 24 | "Function": { 25 | "CustomResourceTrailAlarm": { 26 | "S3Bucket": "CODE_BUCKET", 27 | "S3Key": "SOLUTION_NAME/SOLUTION_VERSION/CRRMonitorTrailAlarm.zip", 28 | "Description": "crr: CloudFormation custom resource function invoked during CloudFormation create, update, and delete stack operations." 29 | }, 30 | "CustomResourceMonitor": { 31 | "S3Bucket": "CODE_BUCKET", 32 | "S3Key": "SOLUTION_NAME/SOLUTION_VERSION/CRRMonitor.zip", 33 | "Description": "crr: CloudFormation custom resource function invoked during CloudFormation create, update, and delete stack operations." 34 | }, 35 | "CustomResourceHousekeeping": { 36 | "S3Bucket": "CODE_BUCKET", 37 | "S3Key": "SOLUTION_NAME/SOLUTION_VERSION/CRRMonitorHousekeeping.zip", 38 | "Description": "crr: CloudFormation custom resource function invoked during CloudFormation create, update, and delete stack operations." 39 | }, 40 | "CustomResourceHourlyMaint": { 41 | "S3Bucket": "CODE_BUCKET", 42 | "S3Key": "SOLUTION_NAME/SOLUTION_VERSION/CRRHourlyMaint.zip", 43 | "Description": "crr: CloudFormation custom resource function invoked during CloudFormation create, update, and delete stack operations." 
44 | }, 45 | "CustomResourceDeployAgent": { 46 | "S3Bucket": "CODE_BUCKET", 47 | "S3Key": "SOLUTION_NAME/SOLUTION_VERSION/CRRdeployagent.zip", 48 | "Description": "crr: CloudFormation custom resource function invoked during CloudFormation create, update, and delete stack operations." 49 | }, 50 | "SolutionHelperFunction": { 51 | "S3Bucket": "CODE_BUCKET", 52 | "S3Key": "SOLUTION_NAME/SOLUTION_VERSION/solution-helper.zip", 53 | "Description": "crr: CloudFormation custom resource function for generating UUID." 54 | } 55 | }, 56 | "Send" : { 57 | "AnonymousUsage" : { "Data" : "Yes" } 58 | } 59 | }, 60 | "Conditions": { 61 | "SingleAccnt": { 62 | "Fn::Not": [ 63 | { 64 | "Fn::Equals": [ 65 | { 66 | "Fn::Join": [ 67 | "", 68 | { 69 | "Ref": "remoteAccounts" 70 | } 71 | ] 72 | }, 73 | "" 74 | ] 75 | } 76 | ] 77 | }, 78 | "StreamToKinesis": { 79 | "Fn::Equals": [ 80 | { 81 | "Ref": "ArchiveToS3" 82 | }, 83 | "Yes" 84 | ] 85 | } 86 | }, 87 | "Resources": { 88 | "CRRMonitorRole": { 89 | "Type": "AWS::IAM::Role", 90 | "Properties": { 91 | "RoleName": "CRRMonitorRole", 92 | "AssumeRolePolicyDocument": { 93 | "Version": "2012-10-17", 94 | "Statement": [ 95 | { 96 | "Effect": "Allow", 97 | "Principal": { 98 | "Service": [ 99 | "lambda.amazonaws.com" 100 | ] 101 | }, 102 | "Action": [ 103 | "sts:AssumeRole" 104 | ] 105 | } 106 | ] 107 | }, 108 | "Path": "/", 109 | "Policies": [ 110 | { 111 | "PolicyName": "CRRMonitorPolicy", 112 | "PolicyDocument": { 113 | "Version": "2012-10-17", 114 | "Statement": [ 115 | { 116 | "Sid": "DynamoDBPerms", 117 | "Effect": "Allow", 118 | "Action": [ 119 | "dynamodb:DescribeTable", 120 | "dynamodb:DeleteItem", 121 | "dynamodb:GetItem", 122 | "dynamodb:Scan", 123 | "dynamodb:UpdateItem" 124 | ], 125 | "Resource": [ { 126 | "Fn::Join": [ 127 | "", 128 | [ 129 | "arn:aws:dynamodb:", 130 | { 131 | "Ref": "AWS::Region" 132 | }, 133 | ":", 134 | { 135 | "Ref": "AWS::AccountId" 136 | }, 137 | ":table/CRRMonitor*" 138 | ] 139 | ] 140 | } ] 141 | }, 142 | { 
143 | "Sid": "CloudTrailLogPerms", 144 | "Effect": "Allow", 145 | "Action": [ 146 | "logs:CreateLogGroup", 147 | "logs:CreateLogStream", 148 | "logs:PutLogEvents", 149 | "logs:DescribeLogStreams" 150 | ], 151 | "Resource": { 152 | "Fn::Join": [ 153 | "", 154 | [ 155 | "arn:aws:logs:", 156 | { 157 | "Ref": "AWS::Region" 158 | }, 159 | ":", 160 | { 161 | "Ref": "AWS::AccountId" 162 | }, 163 | ":*" 164 | ] 165 | ] 166 | } 167 | }, 168 | { 169 | "Sid": "LambdaPerms", 170 | "Effect": "Allow", 171 | "Action": [ 172 | "lambda:InvokeFunction" 173 | ], 174 | "Resource": { 175 | "Fn::Join": [ 176 | "", 177 | [ 178 | "arn:aws:lambda:", 179 | { 180 | "Ref": "AWS::Region" 181 | }, 182 | ":", 183 | { 184 | "Ref": "AWS::AccountId" 185 | }, 186 | ":function:CRRMonitor" 187 | ] 188 | ] 189 | } 190 | }, 191 | { 192 | "Sid": "SQSQueuePerms", 193 | "Effect": "Allow", 194 | "Action": [ 195 | "sqs:ReceiveMessage", 196 | "sqs:DeleteMessage", 197 | "sqs:ChangeMessageVisibility", 198 | "sqs:DeleteMessageBatch", 199 | "sqs:GetQueueUrl", 200 | "sqs:GetQueueAttributes" 201 | ], 202 | "Resource": [ 203 | "arn:aws:sqs:*:*:CRRMonitor*" 204 | ] 205 | }, 206 | { 207 | "Sid": "S3ListPerms", 208 | "Effect": "Allow", 209 | "Action": [ 210 | "s3:ListBucket", 211 | "s3:GetReplicationConfiguration", 212 | "s3:GetObject" 213 | ], 214 | "Resource": [ 215 | "arn:aws:s3:::*", 216 | "arn:aws:s3:::*/*" 217 | ] 218 | }, 219 | { 220 | "Sid": "S3Perms", 221 | "Effect": "Allow", 222 | "Action": [ 223 | "s3:ListAllMyBuckets", 224 | "s3:HeadBucket" 225 | ], 226 | "Resource": "*" 227 | }, 228 | { 229 | "Effect": "Allow", 230 | "Action": "sts:AssumeRole", 231 | "Resource": "arn:aws:iam::*:role/CRRMonitorRole" 232 | }, 233 | { 234 | "Effect": "Allow", 235 | "Action": "kms:Decrypt", 236 | "Resource": { "Fn::GetAtt": ["CRRMonitorKey", "Arn"] } 237 | } 238 | ] 239 | } 240 | } 241 | ] 242 | }, 243 | "Metadata": { 244 | "cfn_nag": { 245 | "rules_to_suppress": [ 246 | { 247 | "id": "W11", 248 | "reason": "The 
cloudwatch:ListMetrics,s3:ListAllMyBuckets actions require the wildcard ('*') resource identifier to function properly. Supporting documentation available at (https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-identity-based-access-control-cw.html) and (https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html)" 249 | }, 250 | { 251 | "id": "W28", 252 | "reason": "The CRRMonitorRole name has been specified so that it can be assumed by name" 253 | } 254 | ] 255 | } 256 | } 257 | }, 258 | "CRRMonitorDeployRole": { 259 | "Type": "AWS::IAM::Role", 260 | "Properties": { 261 | "AssumeRolePolicyDocument": { 262 | "Version": "2012-10-17", 263 | "Statement": [ 264 | { 265 | "Effect": "Allow", 266 | "Principal": { 267 | "Service": [ 268 | "lambda.amazonaws.com" 269 | ] 270 | }, 271 | "Action": [ 272 | "sts:AssumeRole" 273 | ] 274 | } 275 | ] 276 | }, 277 | "Path": "/", 278 | "Policies": [ 279 | { 280 | "PolicyName": "CRRMonitorDeployPolicy", 281 | "PolicyDocument": { 282 | "Version": "2012-10-17", 283 | "Statement": [ 284 | { 285 | "Sid": "CloudTrailLogPerms", 286 | "Effect": "Allow", 287 | "Action": [ 288 | "logs:CreateLogGroup", 289 | "logs:CreateLogStream", 290 | "logs:PutLogEvents", 291 | "logs:DescribeLogStreams" 292 | ], 293 | "Resource": [ 294 | { 295 | "Fn::Join": [ 296 | "", 297 | [ 298 | "arn:aws:logs:", 299 | { 300 | "Ref": "AWS::Region" 301 | }, 302 | ":", 303 | { 304 | "Ref": "AWS::AccountId" 305 | }, 306 | ":log-group:*" 307 | ] 308 | ] 309 | } 310 | ] 311 | }, 312 | { 313 | "Sid": "DeploySNS", 314 | "Effect": "Allow", 315 | "Action": [ 316 | "sns:CreateTopic", 317 | "sns:DeleteTopic", 318 | "sns:Subscribe", 319 | "sns:Unsubscribe", 320 | "sns:SetTopicAttributes" 321 | ], 322 | "Resource": [ 323 | "arn:aws:sns:*:*:CRRMonitor*" 324 | ] 325 | }, 326 | { 327 | "Sid": "S3ListPerms", 328 | "Effect": "Allow", 329 | "Action": [ 330 | "s3:ListBucket", 331 | "s3:GetReplicationConfiguration" 332 | ], 333 | "Resource": [ 334 | "arn:aws:s3:::*", 335 | 
"arn:aws:s3:::*/*" 336 | ] 337 | }, 338 | { 339 | "Sid": "S3Perms", 340 | "Effect": "Allow", 341 | "Action": [ 342 | "s3:ListAllMyBuckets", 343 | "s3:HeadBucket" 344 | ], 345 | "Resource": "*" 346 | }, 347 | { 348 | "Sid": "DeployAgentEC2Perms", 349 | "Effect": "Allow", 350 | "Action": [ 351 | "ec2:DescribeRegions" 352 | ], 353 | "Resource": "*" 354 | }, 355 | { 356 | "Sid": "PutEventsPerms", 357 | "Effect": "Allow", 358 | "Action": [ 359 | "events:PutPermission" 360 | ], 361 | "Resource": "*" 362 | }, 363 | { 364 | "Sid": "EventsPerms", 365 | "Effect": "Allow", 366 | "Action": [ 367 | "events:PutRule", 368 | "events:PutTargets", 369 | "events:DeleteRule", 370 | "events:EnableRule", 371 | "events:RemoveTargets" 372 | ], 373 | "Resource": [ 374 | { "Fn::Join": [ 375 | "", 376 | [ 377 | "arn:aws:events:*:", 378 | { 379 | "Ref": "AWS::AccountId" 380 | }, 381 | ":rule/CRR*" 382 | ] 383 | ] } 384 | ] 385 | } 386 | ] 387 | } 388 | } 389 | ] 390 | }, 391 | "Metadata": { 392 | "cfn_nag": { 393 | "rules_to_suppress": [ 394 | { 395 | "id": "W11", 396 | "reason": "The s3:ListAllMyBuckets, ec2:DescribeRegions, and events:PutPermission actions require the wildcard ('*') resource identifier to function properly. Supporting documentation available at (https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html), (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ExamplePolicies_EC2.html) and (https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/permissions-reference-cwe.html)" 397 | 398 | } 399 | ] 400 | } 401 | } 402 | }, 403 | "CRRFirehoseRole": { 404 | "Type": "AWS::IAM::Role", 405 | "Condition": "StreamToKinesis", 406 | "Properties": { 407 | "AssumeRolePolicyDocument": { 408 | "Version": "2012-10-17", 409 | "Statement": [ 410 | { 411 | "Effect": "Allow", 412 | "Principal": { 413 | "Service": [ "firehose.amazonaws.com" , "lambda.amazonaws.com" ] 414 | }, 415 | "Action": "sts:AssumeRole", 416 | "Condition": { 417 | "StringEquals": { 418 | "sts:ExternalId": { 
"Ref" : "AWS::AccountId" } 419 | } 420 | } 421 | } 422 | ] 423 | }, 424 | "Policies": [ 425 | { 426 | "PolicyName": "CRRFirehosePolicy", 427 | "PolicyDocument": { 428 | "Version": "2012-10-17", 429 | "Statement": [ 430 | { 431 | "Sid": "S3Perms", 432 | "Effect": "Allow", 433 | "Action": [ 434 | "s3:AbortMultipartUpload", 435 | "s3:GetBucketLocation", 436 | "s3:GetObject", 437 | "s3:ListBucket", 438 | "s3:ListBucketMultipartUploads", 439 | "s3:PutObject" 440 | ], 441 | "Resource": [ 442 | { "Fn::Join" : [ "", [ "arn:aws:s3:::", { "Ref" : "S3ArchiveBucket" } ] ] }, 443 | { "Fn::Join" : [ "", [ "arn:aws:s3:::", { "Ref" : "S3ArchiveBucket" }, "/*"] ] } 444 | ] 445 | }, 446 | { 447 | "Sid": "LambdaPerms", 448 | "Effect": "Allow", 449 | "Action": [ 450 | "lambda:InvokeFunction", 451 | "lambda:GetFunctionConfiguration" 452 | ], 453 | "Resource": { 454 | "Fn::Join": [ 455 | "", 456 | [ 457 | "arn:aws:lambda:", 458 | { 459 | "Ref": "AWS::Region" 460 | }, 461 | ":", 462 | { 463 | "Ref": "AWS::AccountId" 464 | }, 465 | ":function:CRR*" 466 | ] 467 | ] 468 | } 469 | } 470 | ] 471 | } 472 | } 473 | ] 474 | }, 475 | "Metadata": { 476 | "cfn_nag": { 477 | "rules_to_suppress": [ 478 | { 479 | "id": "W11", 480 | "reason": "Requires the S3 Archive Bucket folder access" 481 | } 482 | ] 483 | } 484 | } 485 | }, 486 | "CRRMonitorHouseKeepingRole": { 487 | "Type": "AWS::IAM::Role", 488 | "Properties": { 489 | "AssumeRolePolicyDocument": { 490 | "Version": "2012-10-17", 491 | "Statement": [ 492 | { 493 | "Effect": "Allow", 494 | "Principal": { 495 | "Service": [ 496 | "lambda.amazonaws.com" 497 | ] 498 | }, 499 | "Action": [ 500 | "sts:AssumeRole" 501 | ] 502 | } 503 | ] 504 | }, 505 | "Path": "/", 506 | "Policies": [ 507 | { 508 | "PolicyName": "CRRMonitorHouseKeepingPolicy", 509 | "PolicyDocument": { 510 | "Version": "2012-10-17", 511 | "Statement": [ 512 | { 513 | "Sid": "DynamoDBPerms", 514 | "Effect": "Allow", 515 | "Action": [ 516 | "dynamodb:DescribeTable", 517 | 
"dynamodb:DeleteItem", 518 | "dynamodb:GetItem", 519 | "dynamodb:Scan", 520 | "dynamodb:UpdateItem" 521 | ], 522 | "Resource": [ { 523 | "Fn::Join": [ 524 | "", 525 | [ 526 | "arn:aws:dynamodb:", 527 | { 528 | "Ref": "AWS::Region" 529 | }, 530 | ":", 531 | { 532 | "Ref": "AWS::AccountId" 533 | }, 534 | ":table/CRRMonitor*" 535 | ] 536 | ] 537 | } ] 538 | }, 539 | { 540 | "Sid": "CloudTrailLogPerms", 541 | "Effect": "Allow", 542 | "Action": [ 543 | "logs:CreateLogGroup", 544 | "logs:CreateLogStream", 545 | "logs:PutLogEvents", 546 | "logs:DescribeLogStreams" 547 | ], 548 | "Resource": { 549 | "Fn::Join": [ 550 | "", 551 | [ 552 | "arn:aws:logs:", 553 | { 554 | "Ref": "AWS::Region" 555 | }, 556 | ":", 557 | { 558 | "Ref": "AWS::AccountId" 559 | }, 560 | ":log-group:*" 561 | ] 562 | ] 563 | } 564 | }, 565 | { 566 | "Sid": "CloudWatchMetrics", 567 | "Effect": "Allow", 568 | "Action": [ 569 | "cloudwatch:PutMetricAlarm", 570 | "cloudwatch:DeleteAlarms", 571 | "cloudwatch:EnableAlarmActions" 572 | ], 573 | "Resource": { 574 | "Fn::Join": [ 575 | "", 576 | [ 577 | "arn:aws:cloudwatch:", 578 | { 579 | "Ref": "AWS::Region" 580 | }, 581 | ":", 582 | { 583 | "Ref": "AWS::AccountId" 584 | }, 585 | ":*" 586 | ] 587 | ] 588 | } 589 | }, 590 | { 591 | "Sid": "CloudWatchMetricsPerms", 592 | "Effect": "Allow", 593 | "Action": [ 594 | "cloudwatch:PutMetricData", 595 | "cloudwatch:ListMetrics" 596 | ], 597 | "Resource": "*" 598 | }, 599 | { 600 | "Sid": "FirehosePerm", 601 | "Effect": "Allow", 602 | "Action": [ 603 | "firehose:PutRecord" 604 | ], 605 | "Resource": { 606 | "Fn::Join": [ 607 | "", 608 | [ 609 | "arn:aws:firehose:", 610 | { 611 | "Ref": "AWS::Region" 612 | }, 613 | ":", 614 | { 615 | "Ref": "AWS::AccountId" 616 | }, 617 | ":deliverystream/CRRMonitorDeliveryStream" 618 | ] 619 | ] 620 | } 621 | } 622 | ] 623 | } 624 | } 625 | ] 626 | }, 627 | "Metadata": { 628 | "cfn_nag": { 629 | "rules_to_suppress": [ 630 | { 631 | "id": "W11", 632 | "reason": "The 
cloudwatch:ListMetrics action requires the wildcard ('*') resource identifier to function properly. Supporting documentation available at (https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-identity-based-access-control-cw.html)" 633 | } 634 | ] 635 | } 636 | } 637 | }, 638 | "CRRHourlyMaintRole": { 639 | "Type": "AWS::IAM::Role", 640 | "Properties": { 641 | "AssumeRolePolicyDocument": { 642 | "Version": "2012-10-17", 643 | "Statement": [ 644 | { 645 | "Effect": "Allow", 646 | "Principal": { 647 | "Service": [ 648 | "lambda.amazonaws.com" 649 | ] 650 | }, 651 | "Action": [ 652 | "sts:AssumeRole" 653 | ] 654 | } 655 | ] 656 | }, 657 | "Path": "/", 658 | "Policies": [ 659 | { 660 | "PolicyName": "CRRHourlyMaintPolicy", 661 | "PolicyDocument": { 662 | "Version": "2012-10-17", 663 | "Statement": [ 664 | { 665 | "Sid": "DynamoDBPerms", 666 | "Effect": "Allow", 667 | "Action": [ 668 | "dynamodb:DescribeTable", 669 | "dynamodb:DeleteItem", 670 | "dynamodb:GetItem", 671 | "dynamodb:Scan", 672 | "dynamodb:UpdateItem" 673 | ], 674 | "Resource": [ { 675 | "Fn::Join": [ 676 | "", 677 | [ 678 | "arn:aws:dynamodb:", 679 | { 680 | "Ref": "AWS::Region" 681 | }, 682 | ":", 683 | { 684 | "Ref": "AWS::AccountId" 685 | }, 686 | ":table/CRRMonitor*" 687 | ] 688 | ] 689 | } 690 | ] 691 | }, 692 | { 693 | "Sid": "S3ListPerms", 694 | "Effect": "Allow", 695 | "Action": [ 696 | "s3:ListBucket", 697 | "s3:GetReplicationConfiguration" 698 | ], 699 | "Resource": [ 700 | "arn:aws:s3:::*", 701 | "arn:aws:s3:::*/*" 702 | ] 703 | }, 704 | { 705 | "Sid": "S3Perms", 706 | "Effect": "Allow", 707 | "Action": [ 708 | "s3:ListAllMyBuckets", 709 | "s3:HeadBucket" 710 | ], 711 | "Resource": "*" 712 | }, 713 | { 714 | "Sid": "LogPermissions", 715 | "Effect": "Allow", 716 | "Action": [ 717 | "logs:CreateLogGroup", 718 | "logs:CreateLogStream", 719 | "logs:PutLogEvents", 720 | "logs:DescribeLogStreams" 721 | ], 722 | "Resource": { 723 | "Fn::Join": [ 724 | "", 725 | [ 726 | "arn:aws:logs:", 
727 | { 728 | "Ref": "AWS::Region" 729 | }, 730 | ":", 731 | { 732 | "Ref": "AWS::AccountId" 733 | }, 734 | ":log-group:/aws/lambda/CRRHourlyMaint:*" 735 | ] 736 | ] 737 | } 738 | } 739 | ] 740 | } 741 | } 742 | ] 743 | }, 744 | "Metadata": { 745 | "cfn_nag": { 746 | "rules_to_suppress": [ 747 | { 748 | "id": "W11", 749 | "reason": "The s3:ListAllMyBuckets action requires the wildcard ('*') resource identifier to function properly. Supporting documentation available at (https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html)" 750 | 751 | } 752 | ] 753 | } 754 | } 755 | }, 756 | "CRRMonitorTrailRole": { 757 | "Type": "AWS::IAM::Role", 758 | "Properties": { 759 | "AssumeRolePolicyDocument": { 760 | "Version": "2012-10-17", 761 | "Statement": [ 762 | { 763 | "Effect": "Allow", 764 | "Principal": { 765 | "Service": [ 766 | "lambda.amazonaws.com" 767 | ] 768 | }, 769 | "Action": [ 770 | "sts:AssumeRole" 771 | ] 772 | } 773 | ] 774 | }, 775 | "Path": "/", 776 | "Policies": [ 777 | { 778 | "PolicyName": "CRRMonitorTrailPolicy", 779 | "PolicyDocument": { 780 | "Version": "2012-10-17", 781 | "Statement": [ 782 | { 783 | "Sid": "CloudTrailLogPerms", 784 | "Effect": "Allow", 785 | "Action": [ 786 | "logs:CreateLogGroup", 787 | "logs:CreateLogStream", 788 | "logs:PutLogEvents", 789 | "logs:DescribeLogStreams" 790 | ], 791 | "Resource": { 792 | "Fn::Join": [ 793 | "", 794 | [ 795 | "arn:aws:logs:", 796 | { 797 | "Ref": "AWS::Region" 798 | }, 799 | ":", 800 | { 801 | "Ref": "AWS::AccountId" 802 | }, 803 | ":*" 804 | ] 805 | ] 806 | } 807 | }, 808 | { 809 | "Sid": "LambdaPerms", 810 | "Effect": "Allow", 811 | "Action": [ 812 | "lambda:InvokeFunction" 813 | ], 814 | "Resource": { 815 | "Fn::Join": [ 816 | "", 817 | [ 818 | "arn:aws:lambda:", 819 | { 820 | "Ref": "AWS::Region" 821 | }, 822 | ":", 823 | { 824 | "Ref": 
"AWS::AccountId" 825 | }, 826 | ":function:CRRMonitor" 827 | ] 828 | ] 829 | } 830 | }, 831 | { 832 | "Sid": "CloudWatchMetrics", 833 | "Effect": "Allow", 834 | "Action": [ 835 | "cloudwatch:PutMetricAlarm", 836 | "cloudwatch:DeleteAlarms", 837 | "cloudwatch:EnableAlarmActions" 838 | ], 839 | "Resource": { 840 | "Fn::Join": [ 841 | "", 842 | [ 843 | "arn:aws:cloudwatch:", 844 | { 845 | "Ref": "AWS::Region" 846 | }, 847 | ":", 848 | { 849 | "Ref": "AWS::AccountId" 850 | }, 851 | ":*" 852 | ] 853 | ] 854 | } 855 | }, 856 | { 857 | "Sid": "CloudWatchMetricsPerms", 858 | "Effect": "Allow", 859 | "Action": [ 860 | "cloudwatch:PutMetricData", 861 | "cloudwatch:ListMetrics" 862 | ], 863 | "Resource": "*" 864 | }, 865 | { 866 | "Sid": "CloudTrailPerms", 867 | "Effect": "Allow", 868 | "Action": [ 869 | "cloudtrail:PutEventSelectors", 870 | "cloudtrail:StopLogging", 871 | "cloudtrail:StartLogging", 872 | "cloudtrail:DeleteTrail", 873 | "cloudtrail:UpdateTrail", 874 | "cloudtrail:CreateTrail" 875 | ], 876 | "Resource": { 877 | "Fn::Join": [ 878 | "", 879 | [ 880 | "arn:aws:cloudtrail:", 881 | "*", 882 | ":", 883 | { 884 | "Ref": "AWS::AccountId" 885 | }, 886 | ":*" 887 | ] 888 | ] 889 | } 890 | }, 891 | { 892 | "Sid": "S3ListPerms", 893 | "Effect": "Allow", 894 | "Action": [ 895 | "s3:ListBucket", 896 | "s3:GetReplicationConfiguration" 897 | ], 898 | "Resource": [ 899 | "arn:aws:s3:::*", 900 | "arn:aws:s3:::*/*" 901 | ] 902 | }, 903 | { 904 | "Sid": "S3Perms", 905 | "Effect": "Allow", 906 | "Action": [ 907 | "s3:ListAllMyBuckets", 908 | "s3:HeadBucket" 909 | ], 910 | "Resource": "*" 911 | } 912 | ] 913 | } 914 | } 915 | ] 916 | }, 917 | "Metadata": { 918 | "cfn_nag": { 919 | "rules_to_suppress": [ 920 | { 921 | "id": "W11", 922 | "reason": "The cloudwatch:ListMetrics action requires the wildcard ('*') resource identifier to function properly. Supporting documentation available at 
(https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-identity-based-access-control-cw.html)" 923 | 924 | } 925 | ] 926 | } 927 | } 928 | }, 929 | "CRRMonitorLambda": { 930 | "Type": "AWS::Lambda::Function", 931 | "Properties": { 932 | "Code": { 933 | "S3Bucket": { 934 | "Fn::Join": [ 935 | "", 936 | [ 937 | { 938 | "Fn::FindInMap": [ 939 | "Function", 940 | "CustomResourceMonitor", 941 | "S3Bucket" 942 | ] 943 | }, 944 | "-", 945 | { 946 | "Ref": "AWS::Region" 947 | } 948 | ] 949 | ] 950 | }, 951 | "S3Key": { 952 | "Fn::FindInMap": [ 953 | "Function", 954 | "CustomResourceMonitor", 955 | "S3Key" 956 | ] 957 | } 958 | }, 959 | "Handler": "CRRMonitor.queue_handler", 960 | "FunctionName": "CRRMonitor", 961 | "Environment" : { 962 | "Variables": { 963 | "maxtask" : "1800" , "maxspawn" : "20", 964 | "AnonymousUsage": { 965 | "Fn::FindInMap": [ 966 | "Send", 967 | "AnonymousUsage", 968 | "Data" 969 | ] 970 | }, 971 | "UUID": { 972 | "Fn::GetAtt": [ 973 | "UUIDGenerator", 974 | "UUID" 975 | ] 976 | }, 977 | "SolutionVersion": "SOLUTION_VERSION" 978 | } 979 | }, 980 | "Role": { "Fn::GetAtt": [ "CRRMonitorRole", "Arn" ] }, 981 | "Runtime": "python3.8", 982 | "Timeout": 300 983 | } 984 | }, 985 | "MonitorTimer": { 986 | "Type": "AWS::Events::Rule", 987 | "Properties": { 988 | "Description": "Runs CRRMonitor every 1 minute.", 989 | "ScheduleExpression": "rate(1 minute)", 990 | "State": "ENABLED", 991 | "Targets": [ 992 | { 993 | "Id": "CRRMonitor", 994 | "Arn": { "Fn::GetAtt": [ "CRRMonitorLambda", "Arn" ] } 995 | } 996 | ] 997 | } 998 | }, 999 | "MonitorTimerPermission": { 1000 | "Type": "AWS::Lambda::Permission", 1001 | "Properties": { 1002 | "Action": "lambda:invokeFunction", 1003 | "FunctionName": { 1004 | "Ref": "CRRMonitorLambda" 1005 | }, 1006 | "Principal": "events.amazonaws.com", 1007 | "SourceArn": { "Fn::GetAtt": [ "MonitorTimer", "Arn" ] } 1008 | }, 1009 | "Metadata": { 1010 | "cfn_nag": { 1011 | "rules_to_suppress": [ 1012 | { 1013 | "id": "W24", 
1014 | "reason": "Lambda permission requires InvokeFunction" 1015 | } 1016 | ] 1017 | } 1018 | } 1019 | }, 1020 | "HousekeepingLambda": { 1021 | "Type": "AWS::Lambda::Function", 1022 | "Properties": { 1023 | "Code": { 1024 | "S3Bucket": { 1025 | "Fn::Join": [ 1026 | "", 1027 | [ 1028 | { 1029 | "Fn::FindInMap": [ 1030 | "Function", 1031 | "CustomResourceHousekeeping", 1032 | "S3Bucket" 1033 | ] 1034 | }, 1035 | "-", 1036 | { 1037 | "Ref": "AWS::Region" 1038 | } 1039 | ] 1040 | ] 1041 | }, 1042 | "S3Key": { 1043 | "Fn::FindInMap": [ 1044 | "Function", 1045 | "CustomResourceHousekeeping", 1046 | "S3Key" 1047 | ] 1048 | } 1049 | }, 1050 | "Handler": "CRRMonitorHousekeeping.lambda_handler", 1051 | "Environment" : { 1052 | "Variables": { 1053 | "stream_to_kinesis": { "Ref" : "ArchiveToS3" }, 1054 | "kinesisfirestream": { 1055 | "Fn::If" : [ 1056 | "StreamToKinesis", 1057 | {"Ref" : "CRRMonitorKinesisDeliveryStream"}, 1058 | {"Ref" : "AWS::NoValue"} 1059 | ] 1060 | }, 1061 | "stack_name" : { "Ref" : "AWS::StackName"}, 1062 | "purge_thresh": "24" 1063 | } 1064 | }, 1065 | "FunctionName": "CRRMonitorHousekeeping", 1066 | "Role": { "Fn::GetAtt": [ "CRRMonitorHouseKeepingRole", "Arn" ] }, 1067 | "Runtime": "python3.8", 1068 | "Timeout": 300 1069 | }, 1070 | "DependsOn": [ 1071 | "CRRMonitorRole" 1072 | ] 1073 | }, 1074 | "HousekeepingTimer": { 1075 | "Type": "AWS::Events::Rule", 1076 | "Properties": { 1077 | "Description": "Runs the housekeeper every 5 minutes.", 1078 | "ScheduleExpression": "rate(5 minutes)", 1079 | "State": "ENABLED", 1080 | "Targets": [ 1081 | { 1082 | "Id": "Housekeeper", 1083 | "Arn": { "Fn::GetAtt": [ "HousekeepingLambda", "Arn" ] } 1084 | } 1085 | ] 1086 | }, 1087 | "Metadata": { 1088 | "cfn_nag": { 1089 | "rules_to_suppress": [ 1090 | { 1091 | "id": "W24", 1092 | "reason": "Lambda permission requires InvokeFunction" 1093 | } 1094 | ] 1095 | } 1096 | } 1097 | }, 1098 | "HousekeepingPermission": { 1099 | "Type": "AWS::Lambda::Permission", 1100 | 
"Properties": { 1101 | "Action": "lambda:invokeFunction", 1102 | "FunctionName": { 1103 | "Ref": "HousekeepingLambda" 1104 | }, 1105 | "Principal": "events.amazonaws.com", 1106 | "SourceArn": { "Fn::GetAtt": [ "HousekeepingTimer", "Arn" ] } 1107 | }, 1108 | "Metadata": { 1109 | "cfn_nag": { 1110 | "rules_to_suppress": [ 1111 | { 1112 | "id": "W24", 1113 | "reason": "Lambda permission requires InvokeFunction" 1114 | } 1115 | ] 1116 | } 1117 | } 1118 | }, 1119 | "MaintLambda": { 1120 | "Type": "AWS::Lambda::Function", 1121 | "Properties": { 1122 | "Code": { 1123 | "S3Bucket": { 1124 | "Fn::Join": [ 1125 | "", 1126 | [ 1127 | { 1128 | "Fn::FindInMap": [ 1129 | "Function", 1130 | "CustomResourceHourlyMaint", 1131 | "S3Bucket" 1132 | ] 1133 | }, 1134 | "-", 1135 | { 1136 | "Ref": "AWS::Region" 1137 | } 1138 | ] 1139 | ] 1140 | }, 1141 | "S3Key": { 1142 | "Fn::FindInMap": [ 1143 | "Function", 1144 | "CustomResourceHourlyMaint", 1145 | "S3Key" 1146 | ] 1147 | } 1148 | }, 1149 | "Handler": "CRRHourlyMaint.lambda_handler", 1150 | "FunctionName": "CRRHourlyMaint", 1151 | "Role": { "Fn::GetAtt": [ "CRRHourlyMaintRole", "Arn" ] }, 1152 | "Runtime": "python3.8", 1153 | "Timeout": 300 1154 | }, 1155 | "DependsOn": [ 1156 | "CRRMonitorRole" 1157 | ], 1158 | "Metadata": { 1159 | "cfn_nag": { 1160 | "rules_to_suppress": [ 1161 | { 1162 | "id": "W24", 1163 | "reason": "Lambda permission requires InvokeFunction" 1164 | } 1165 | ] 1166 | } 1167 | } 1168 | }, 1169 | "MaintTimer": { 1170 | "Type": "AWS::Events::Rule", 1171 | "Properties": { 1172 | "Description": "Runs the maintenance tasks every hour.", 1173 | "ScheduleExpression": "rate(1 hour)", 1174 | "State": "ENABLED", 1175 | "Targets": [ 1176 | { 1177 | "Id": "CRRMaint", 1178 | "Arn": { "Fn::GetAtt": [ "MaintLambda", "Arn" ] } 1179 | } 1180 | ] 1181 | } 1182 | }, 1183 | "MaintPermission": { 1184 | "Type": "AWS::Lambda::Permission", 1185 | "Properties": { 1186 | "Action": "lambda:invokeFunction", 1187 | "FunctionName": { 1188 | 
"Ref": "MaintLambda" 1189 | }, 1190 | "Principal": "events.amazonaws.com", 1191 | "SourceArn": { "Fn::GetAtt": [ "MaintTimer", "Arn" ] } 1192 | }, 1193 | "Metadata": { 1194 | "cfn_nag": { 1195 | "rules_to_suppress": [ 1196 | { 1197 | "id": "W24", 1198 | "reason": "Lambda permission requires InvokeFunction" 1199 | } 1200 | ] 1201 | } 1202 | } 1203 | }, 1204 | "CRRMonitorQueue": { 1205 | "Type": "AWS::SQS::Queue", 1206 | "Properties": { 1207 | "QueueName": { "Fn::Join": [ "", [ "CRRMonitor", "Queue" ] ] }, 1208 | "KmsMasterKeyId": { "Ref": "CRRMonitorKey" }, 1209 | "RedrivePolicy": { 1210 | "deadLetterTargetArn": {"Fn::GetAtt" : [ "CRRMonitorDeadLetterQueue" , "Arn" ]}, 1211 | "maxReceiveCount": 10 1212 | } 1213 | } 1214 | }, 1215 | "CRRMonitorQueuePolicy": { 1216 | "Type": "AWS::SQS::QueuePolicy", 1217 | "Properties": { 1218 | "PolicyDocument": { 1219 | "Id": "CRRMonitorQueuePolicy", 1220 | "Statement": [ 1221 | { 1222 | "Sid": "SQSQueuePermission", 1223 | "Effect": "Allow", 1224 | "Principal": "*", 1225 | "Action": [ 1226 | "SQS:SendMessage", 1227 | "SQS:ReceiveMessage", 1228 | "SQS:DeleteMessage" 1229 | ], 1230 | "Resource": [ 1231 | { "Fn::GetAtt": [ "CRRMonitorQueue", "Arn" ] } 1232 | ], 1233 | "Condition": { 1234 | "ArnEquals": { 1235 | "aws:SourceArn": { "Fn::Join" : [ ":", [ "arn:aws:sns", "*", { "Ref": "AWS::AccountId" } , "*" ] ] } 1236 | } 1237 | } 1238 | } 1239 | ] 1240 | }, 1241 | "Queues": [ 1242 | { "Ref": "CRRMonitorQueue" } 1243 | ] 1244 | }, 1245 | "Metadata": { 1246 | "cfn_nag": { 1247 | "rules_to_suppress": [ 1248 | { 1249 | "id": "F21", 1250 | "reason": "This is required for remote accounts to send messages to queue." 
1251 | } 1252 | ] 1253 | } 1254 | } 1255 | }, 1256 | "CRRMonitorDeadLetterQueue": { 1257 | "Type": "AWS::SQS::Queue", 1258 | "Properties": { 1259 | "KmsMasterKeyId": { "Ref": "CRRMonitorKey" } 1260 | } 1261 | }, 1262 | "CRRMonitorKinesisDeliveryStream": { 1263 | "Type" : "AWS::KinesisFirehose::DeliveryStream", 1264 | "Condition": "StreamToKinesis", 1265 | "Properties" : { 1266 | "DeliveryStreamName" : "CRRMonitorDeliveryStream", 1267 | "S3DestinationConfiguration" : { 1268 | "BucketARN" : { "Fn::Join" : [ ":", [ "arn:aws:s3::", { "Ref" : "S3ArchiveBucket" } ] ] }, 1269 | "BufferingHints" : { 1270 | "IntervalInSeconds" : 60, 1271 | "SizeInMBs" : 5 1272 | }, 1273 | "CompressionFormat" : "GZIP", 1274 | "Prefix" : "CRRMonitorArchive", 1275 | "RoleARN" : { "Fn::GetAtt": [ "CRRFirehoseRole", "Arn" ] }, 1276 | "CloudWatchLoggingOptions" : { 1277 | "Enabled" : true, 1278 | "LogGroupName" : "deliverystream", 1279 | "LogStreamName" : "s3Backup" 1280 | } 1281 | } 1282 | 1283 | } 1284 | }, 1285 | "CRRDeployAgentLambda": { 1286 | "Type": "AWS::Lambda::Function", 1287 | "Properties": { 1288 | "Code": { 1289 | "S3Bucket": { 1290 | "Fn::Join": [ 1291 | "", 1292 | [ 1293 | { 1294 | "Fn::FindInMap": [ 1295 | "Function", 1296 | "CustomResourceDeployAgent", 1297 | "S3Bucket" 1298 | ] 1299 | }, 1300 | "-", 1301 | { 1302 | "Ref": "AWS::Region" 1303 | } 1304 | ] 1305 | ] 1306 | }, 1307 | "S3Key": { 1308 | "Fn::FindInMap": [ 1309 | "Function", 1310 | "CustomResourceDeployAgent", 1311 | "S3Key" 1312 | ] 1313 | } 1314 | }, 1315 | "Handler": "CRRdeployagent.handler", 1316 | "FunctionName": "CRRDeployAgent", 1317 | "Role": { "Fn::GetAtt": [ "CRRMonitorDeployRole", "Arn" ] }, 1318 | "Runtime": "python3.8", 1319 | "Timeout": 300 1320 | }, 1321 | "DependsOn": [ 1322 | "CRRMonitorRole" 1323 | ] 1324 | }, 1325 | "CustomDeploy": { 1326 | "Type": "Custom::DeployAgent", 1327 | "Properties": { 1328 | "ServiceToken": { "Fn::GetAtt": [ "CRRDeployAgentLambda", "Arn" ] }, 1329 | "Topic": "CRRMonitor", 
1330 | "CRRQueueArn": { "Fn::GetAtt": [ "CRRMonitorQueue", "Arn" ] }, 1331 | "AgentAccounts": {"Fn::If": ["SingleAccnt", {"Ref": "remoteAccounts"}, [{"Ref": "AWS::AccountId"}]]} 1332 | }, 1333 | "DependsOn": [ 1334 | "CRRMonitorLambda" 1335 | ] 1336 | }, 1337 | "CRRMonitorDDB": { 1338 | "Type": "AWS::DynamoDB::Table", 1339 | "Properties": { 1340 | "AttributeDefinitions": [ 1341 | { 1342 | "AttributeName": "ETag", 1343 | "AttributeType": "S" 1344 | } 1345 | ], 1346 | "BillingMode": "PAY_PER_REQUEST", 1347 | "KeySchema": [ 1348 | { 1349 | "AttributeName": "ETag", 1350 | "KeyType": "HASH" 1351 | } 1352 | ], 1353 | "TableName": "CRRMonitor", 1354 | "TimeToLiveSpecification": { 1355 | "AttributeName": "TimeToLive", 1356 | "Enabled": true 1357 | } 1358 | }, 1359 | "Metadata": { 1360 | "cfn_nag": { 1361 | "rules_to_suppress": [ 1362 | { 1363 | "id": "W28", 1364 | "reason": "Requires the explicit table name CRRMonitor" 1365 | } 1366 | ] 1367 | } 1368 | } 1369 | }, 1370 | "CRRMonitorStatisticsDDB": { 1371 | "Type": "AWS::DynamoDB::Table", 1372 | "Properties": { 1373 | "AttributeDefinitions": [ 1374 | { 1375 | "AttributeName": "OriginReplicaBucket", 1376 | "AttributeType": "S" 1377 | } 1378 | ], 1379 | "BillingMode": "PAY_PER_REQUEST", 1380 | "KeySchema": [ 1381 | { 1382 | "AttributeName": "OriginReplicaBucket", 1383 | "KeyType": "HASH" 1384 | } 1385 | ], 1386 | "TableName": "CRRMonitorStatistics" 1387 | }, 1388 | "Metadata": { 1389 | "cfn_nag": { 1390 | "rules_to_suppress": [ 1391 | { 1392 | "id": "W28", 1393 | "reason": "Requires the explicit table name CRRMonitorStatistics" 1394 | } 1395 | ] 1396 | } 1397 | } 1398 | }, 1399 | "CloudMetricsNotificationTopic" : { 1400 | "Type" : "AWS::SNS::Topic", 1401 | "Properties" : { 1402 | "DisplayName" : "CRRMonitorMetricsTopic", 1403 | "TopicName" : "CRRMonitorMetricsTopic", 1404 | "KmsMasterKeyId": { "Fn::Sub": "arn:${AWS::Partition}:kms:${AWS::Region}:${AWS::AccountId}:alias/aws/sns" } 1405 | } 1406 | }, 1407 | 
"logBucket": { 1408 | "Type": "AWS::S3::Bucket", 1409 | "DeletionPolicy": "Retain", 1410 | "Properties" : { 1411 | "BucketEncryption" : { 1412 | "ServerSideEncryptionConfiguration" : [ { 1413 | "ServerSideEncryptionByDefault" : { 1414 | "SSEAlgorithm" : "AES256" 1415 | } 1416 | } 1417 | ] 1418 | }, 1419 | "PublicAccessBlockConfiguration" : { 1420 | "BlockPublicAcls" : true, 1421 | "BlockPublicPolicy" : true, 1422 | "IgnorePublicAcls" : true, 1423 | "RestrictPublicBuckets" : true 1424 | } 1425 | }, 1426 | "Metadata": { 1427 | "cfn_nag": { 1428 | "rules_to_suppress": [ 1429 | { 1430 | "id": "W35", 1431 | "reason": "Access logging is intentionally not configured on this logging bucket" 1432 | } 1433 | ] 1434 | } 1435 | } 1436 | }, 1437 | "logBucketPolicy" : { 1438 | "Type" : "AWS::S3::BucketPolicy", 1439 | "Properties" : { 1440 | "Bucket" : {"Ref" : "logBucket"}, 1441 | "PolicyDocument": { 1442 | "Version": "2012-10-17", 1443 | "Statement": [ 1444 | { 1445 | "Sid": "AWSCloudTrailAclCheck20150319", 1446 | "Effect": "Allow", 1447 | "Principal": { 1448 | "Service": "cloudtrail.amazonaws.com" 1449 | }, 1450 | "Action": "s3:GetBucketAcl", 1451 | "Resource": { "Fn::Join": [ "", [ "arn:aws:s3:::", {"Ref" : "logBucket" } ] ] } 1452 | }, 1453 | { 1454 | "Sid": "AWSCloudTrailWrite20150319", 1455 | "Effect": "Allow", 1456 | "Principal": { 1457 | "Service": "cloudtrail.amazonaws.com" 1458 | }, 1459 | "Action": "s3:PutObject", 1460 | "Resource": { "Fn::Join": [ "", [ "arn:aws:s3:::", {"Ref" : "logBucket" }, "/AWSLogs/", { "Ref" : "AWS::AccountId"} , "/*"] ] }, 1461 | "Condition": { 1462 | "StringEquals": { 1463 | "s3:x-amz-acl": "bucket-owner-full-control" 1464 | } 1465 | } 1466 | } 1467 | ] 1468 | } 1469 | } 1470 | }, 1471 | "CRRMonitorTrailAlarm": { 1472 | "Type": "AWS::Lambda::Function", 1473 | "Properties": { 1474 | "Code": { 1475 | "S3Bucket": { 1476 | "Fn::Join": [ 1477 | "", 1478 | [ 1479 | { 1480 | "Fn::FindInMap": [ 1481 | "Function", 1482 | "CustomResourceTrailAlarm", 1483 | "S3Bucket" 1484 | 
] 1485 | }, 1486 | "-", 1487 | { 1488 | "Ref": "AWS::Region" 1489 | } 1490 | ] 1491 | ] 1492 | }, 1493 | "S3Key": { 1494 | "Fn::FindInMap": [ 1495 | "Function", 1496 | "CustomResourceTrailAlarm", 1497 | "S3Key" 1498 | ] 1499 | } 1500 | }, 1501 | "Handler": "CRRMonitorTrailAlarm.handler", 1502 | "FunctionName": "CRRMonitorTrailAlarm", 1503 | "Role": { "Fn::GetAtt": [ "CRRMonitorTrailRole", "Arn" ] }, 1504 | "Runtime": "python3.8", 1505 | "Timeout": 300 1506 | }, 1507 | "DependsOn": [ 1508 | "CRRMonitorRole", 1509 | "CloudMetricsNotificationTopic" 1510 | ] 1511 | }, 1512 | "CustomTrailAlarm": { 1513 | "Type": "Custom::TrailAlarm", 1514 | "Properties": { 1515 | "ServiceToken": { "Fn::GetAtt": [ "CRRMonitorTrailAlarm", "Arn" ] }, 1516 | "trail_name" : "CRRMonitor_trail", 1517 | "trail_log_bucket" : { "Ref" : "logBucket"}, 1518 | "sns_topic_arn" : { "Ref" : "CloudMetricsNotificationTopic"} 1519 | }, 1520 | "DependsOn" : [ "logBucketPolicy" ] 1521 | }, 1522 | "SolutionHelperRole": { 1523 | "Type": "AWS::IAM::Role", 1524 | "Properties": { 1525 | "AssumeRolePolicyDocument": { 1526 | "Version": "2012-10-17", 1527 | "Statement": [ 1528 | { 1529 | "Effect": "Allow", 1530 | "Principal": { 1531 | "Service": "lambda.amazonaws.com" 1532 | }, 1533 | "Action": "sts:AssumeRole" 1534 | } 1535 | ] 1536 | }, 1537 | "Path": "/", 1538 | "Policies": [ 1539 | { 1540 | "PolicyName": "Custom_Solution_Helper_Permissions", 1541 | "PolicyDocument": { 1542 | "Version": "2012-10-17", 1543 | "Statement": [ 1544 | { 1545 | "Effect": "Allow", 1546 | "Action": [ 1547 | "logs:CreateLogGroup", 1548 | "logs:CreateLogStream", 1549 | "logs:PutLogEvents" 1550 | ], 1551 | "Resource": { 1552 | "Fn::Join": [ 1553 | "", 1554 | [ 1555 | "arn:aws:logs:", 1556 | { 1557 | "Ref": "AWS::Region" 1558 | }, 1559 | ":", 1560 | { 1561 | "Ref": "AWS::AccountId" 1562 | }, 1563 | ":log-group:/aws/lambda/*" 1564 | ] 1565 | ] 1566 | } 1567 | } 1568 | ] 1569 | } 1570 | } 1571 | ] 1572 | }, 1573 | "Metadata": { 1574 | 
"cfn_nag": { 1575 | "rules_to_suppress": [ 1576 | { 1577 | "id": "W11", 1578 | "reason": "Requires log-group level access :log-group:/aws/lambda/*" 1579 | } 1580 | ] 1581 | } 1582 | } 1583 | }, 1584 | "SolutionHelper": { 1585 | "Type": "AWS::Lambda::Function", 1586 | "Properties": { 1587 | "Handler": "solution-helper.lambda_handler", 1588 | "Role": { 1589 | "Fn::GetAtt": [ 1590 | "SolutionHelperRole", 1591 | "Arn" 1592 | ] 1593 | }, 1594 | "Description": "This function creates a CloudFormation custom lambda resource that creates custom lambda functions by finding and replacing specific values from existing lambda function code.", 1595 | "Code": { 1596 | "S3Bucket": { 1597 | "Fn::Join": [ 1598 | "", 1599 | [ 1600 | { 1601 | "Fn::FindInMap": [ 1602 | "Function", 1603 | "SolutionHelperFunction", 1604 | "S3Bucket" 1605 | ] 1606 | }, 1607 | "-", 1608 | { 1609 | "Ref": "AWS::Region" 1610 | } 1611 | ] 1612 | ] 1613 | }, 1614 | "S3Key": { 1615 | "Fn::FindInMap": [ 1616 | "Function", 1617 | "SolutionHelperFunction", 1618 | "S3Key" 1619 | ] 1620 | } 1621 | }, 1622 | "Runtime": "python3.8", 1623 | "Timeout": 300 1624 | } 1625 | }, 1626 | "UUIDGenerator": { 1627 | "Type": "Custom::UUIDGenerator", 1628 | "Properties": { 1629 | "ServiceToken": { 1630 | "Fn::GetAtt": [ 1631 | "SolutionHelper", 1632 | "Arn" 1633 | ] 1634 | }, 1635 | "Region": { "Ref": "AWS::Region" } 1636 | } 1637 | }, 1638 | "CRRMonitorKey": { 1639 | "Type" : "AWS::KMS::Key", 1640 | "Properties" : { 1641 | "Description" : "Key used to encrypt SNS/SQS messages for CRR Monitor solution", 1642 | "Enabled": true, 1643 | "EnableKeyRotation": true, 1644 | "KeyPolicy": { 1645 | "Statement": [ 1646 | { 1647 | "Sid": "Allow root user to administer the key", 1648 | "Effect": "Allow", 1649 | "Principal": { 1650 | "AWS": { 1651 | "Fn::Sub": "arn:${AWS::Partition}:iam::${AWS::AccountId}:root" 1652 | } 1653 | }, 1654 | "Action": [ "kms:*" ], 1655 | "Resource": [ "*" ] 1656 | }, 1657 | { 1658 | "Sid": "Allow Amazon SNS to use 
this key", 1659 | "Effect": "Allow", 1660 | "Principal": { 1661 | "Service": "sns.amazonaws.com" 1662 | }, 1663 | "Action": [ 1664 | "kms:Decrypt", 1665 | "kms:GenerateDataKey*" 1666 | ], 1667 | "Resource": "*" 1668 | } 1669 | ] 1670 | } 1671 | } 1672 | }, 1673 | "CRRMonitorKeyAlias": { 1674 | "Type": "AWS::KMS::Alias", 1675 | "Properties": { 1676 | "AliasName": "alias/crr-monitor-encryption-key", 1677 | "TargetKeyId": { 1678 | "Ref": "CRRMonitorKey" 1679 | } 1680 | } 1681 | } 1682 | }, 1683 | "Outputs": { 1684 | "QueueURL": { 1685 | "Description": "URL of newly created SQS Queue", 1686 | "Value": { "Ref": "CRRMonitorQueue" } 1687 | }, 1688 | "QueueARN": { 1689 | "Description": "ARN of newly created SQS Queue", 1690 | "Value": { "Fn::GetAtt": [ "CRRMonitorQueue", "Arn" ] } 1691 | }, 1692 | "QueueName": { 1693 | "Description": "Name of newly created SQS Queue", 1694 | "Value": { "Fn::GetAtt": [ "CRRMonitorQueue", "QueueName" ] } 1695 | }, 1696 | "MetricsNotificationTopic": { 1697 | "Description": "Cloud Metrics Notification Topic", 1698 | "Value": { "Fn::GetAtt": [ "CloudMetricsNotificationTopic", "TopicName" ] } 1699 | }, 1700 | "UUID": { 1701 | "Description": "Newly created random UUID.", 1702 | "Value": { 1703 | "Fn::GetAtt": [ 1704 | "UUIDGenerator", 1705 | "UUID" 1706 | ] 1707 | } 1708 | } 1709 | }, 1710 | "Metadata": { 1711 | "AWS::CloudFormation::Interface": { 1712 | "ParameterGroups": [ 1713 | { 1714 | "Label": { 1715 | "default": "DynamoDB" 1716 | }, 1717 | "Parameters": [ "ArchiveToS3", "S3ArchiveBucket" ] 1718 | }, 1719 | { 1720 | "Label": { 1721 | "default": "Accounts" 1722 | }, 1723 | "Parameters": [ "remoteAccounts" ] 1724 | } 1725 | ], 1726 | "ParameterLabels" : { 1727 | "ArchiveToS3" : { "default" : "Archive to S3"}, 1728 | "S3ArchiveBucket" : { "default" : "Archive Bucket" }, 1729 | "remoteAccounts" : { "default" : "Remote Accounts" } 1730 | } 1731 | } 1732 | } 1733 | } 1734 | 
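Throughout the template above, each Lambda's code location is assembled by joining a mapped base bucket name with the deployment region (`Fn::Join` over `Fn::FindInMap` and `Ref: AWS::Region`). A minimal Python sketch of how that composition resolves — the mapping values below are hypothetical placeholders, not the solution's actual bucket names or keys:

```python
# Illustrative sketch only: mirrors the template's
# { "Fn::Join": [ "", [ { "Fn::FindInMap": [ "Function", <name>, "S3Bucket" ] },
#                       "-", { "Ref": "AWS::Region" } ] ] }
# composition. The mapping values here are placeholders.
FUNCTION_MAP = {
    "CustomResourceTrailAlarm": {
        "S3Bucket": "example-solutions",                  # placeholder base name
        "S3Key": "crr-monitor/CRRMonitorTrailAlarm.zip",  # placeholder key
    },
}

def code_location(logical_name, region):
    """Return (bucket, key) the way CloudFormation would join them."""
    entry = FUNCTION_MAP[logical_name]
    return entry["S3Bucket"] + "-" + region, entry["S3Key"]

bucket, key = code_location("CustomResourceTrailAlarm", "us-east-1")
print(bucket)  # example-solutions-us-east-1
```

This regional suffixing is why the deployment buckets must exist in every region the stack is launched in.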
-------------------------------------------------------------------------------- /source/CRRHourlyMaint/CRRHourlyMaint.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | ###################################################################################################################### 5 | # Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. # 6 | # # 7 | # Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance # 8 | # with the License. A copy of the License is located at # 9 | # # 10 | # http://www.apache.org/licenses/LICENSE-2.0 # 11 | # # 12 | # or in the 'license' file accompanying this file. This file is distributed on an 'AS IS' BASIS, WITHOUT WARRANTIES # 13 | # OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions # 14 | # and limitations under the License. # 15 | ###################################################################################################################### 16 | 17 | from __future__ import print_function 18 | 19 | import boto3 20 | import os 21 | from datetime import datetime,timedelta 22 | 23 | def getparm (parmname, defaultval): 24 | try: 25 | myval = os.environ[parmname] 26 | if isinstance(defaultval, int): 27 | return int(myval) 28 | else: 29 | return myval 30 | except: 31 | print('Environmental variable \'' + parmname + '\' not found. Using default [' + str(defaultval) + ']') 32 | return defaultval 33 | ####################################################################### 34 | # Check for incomplete transfers that started more than an hour ago. 35 | # - If it no longer exists, discard it 36 | # - If it failed, increment the CloudWatch metric 37 | # - If it completed, update the DDB fields. Note that currently we 38 | # don't do anything about this - should never happen. 
Future enhancement 39 | ####################################################################### 40 | 41 | # 42 | # Define the DynamoDB table to be used to track replication status. 43 | # It must be in the same region as this Lambda and should already 44 | # exist. It is created by the CloudFormation template. 45 | # 46 | ddbtable = getparm('appname', 'CRRMonitor') 47 | stattable = ddbtable + 'Statistics' 48 | timefmt = '%Y-%m-%dT%H:%M:%SZ' 49 | roundTo = getparm('roundTo', 300) # 5 minute buckets for CW metrics 50 | purge_thresh = getparm('purge_thresh', 24) # threshold in hours 51 | client={ 52 | 's3': { 'service': 's3' }, 53 | 'ddb': { 'service': 'dynamodb'} 54 | } 55 | 56 | # ===================================================================== 57 | # connect_clients 58 | # --------------- 59 | # Connect to all the clients. We will do this once per instantiation of 60 | # the Lambda function (not per execution) 61 | # ===================================================================== 62 | def connect_clients(clients_to_connect): 63 | for c in clients_to_connect: 64 | try: 65 | if 'region' in clients_to_connect[c]: 66 | clients_to_connect[c]['handle']=boto3.client(clients_to_connect[c]['service'], region_name=clients_to_connect[c]['region']) 67 | else: 68 | clients_to_connect[c]['handle']=boto3.client(clients_to_connect[c]['service']) 69 | except Exception as e: 70 | print(e) 71 | print('Error connecting to ' + clients_to_connect[c]['service']) 72 | raise e 73 | return clients_to_connect 74 | 75 | # ===================================================================== 76 | # lambda_handler 77 | # -------------- 78 | # Look for failed replication and other anomalies. 
79 | # ===================================================================== 80 | def lambda_handler(event, context): 81 | # ----------------------------------------------------------------- 82 | # purge_item - removes old items 83 | # 84 | def purge_item(itemkey): 85 | print('Purge ETag: ' + itemkey) 86 | try: 87 | client['ddb']['handle'].delete_item( 88 | TableName=ddbtable, 89 | Key={ 90 | 'ETag': { 91 | 'S': itemkey 92 | } 93 | } 94 | ) 95 | except Exception as e: 96 | print(e) 97 | print('Error purging ' + itemkey + ' from ' + ddbtable) 98 | raise e 99 | # ----------------------------------------------------------------- 100 | # log_statistics 101 | # 102 | def log_statistics(Src,Dst,Tstamp,Size,ET,roundTo): 103 | # ------------------------------------------------------------- 104 | # Derive the statistic bucket from source/dest and time bucket 105 | # (5 minute rolling window) 106 | # 107 | statbucket=Src + ':' + Dst 108 | ts = datetime.strptime(Tstamp, timefmt) 109 | secs = (ts.replace(tzinfo=None) - ts.min).seconds 110 | rounding = (secs+roundTo/2) // roundTo * roundTo 111 | ts = ts + timedelta(0,rounding-secs,-ts.microsecond) 112 | statbucket += ':' + datetime.strftime(ts, timefmt) 113 | # ------------------------------------------------------------- 114 | # Init a dict to use to hold our attrs for DDB 115 | stat_exp_attrs = {} 116 | # ------------------------------------------------------------- 117 | # Build the DDB UpdateExpression 118 | stat_update_exp = 'SET timebucket = :t, source_bucket = :o, dest_bucket = :r ADD objects :a, size :c, elapsed :d' 119 | # ------------------------------------------------------------- 120 | # push the first attr: s3Object 121 | stat_exp_attrs[':a'] = { 'N': '1' } 122 | stat_exp_attrs[':c'] = { 'N': Size } 123 | stat_exp_attrs[':d'] = { 'N': ET } 124 | stat_exp_attrs[':t'] = { 'S': datetime.strftime(ts, timefmt) } 125 | stat_exp_attrs[':o'] = { 'S': Src } 126 | stat_exp_attrs[':r'] = { 'S': Dst } 127 | #print('s3Object: 
' + key) 128 | try: 129 | client['ddb']['handle'].update_item( 130 | TableName = stattable, 131 | Key = { 'OriginReplicaBucket': { 'S': statbucket } }, 132 | UpdateExpression = stat_update_exp, 133 | ExpressionAttributeValues = stat_exp_attrs) 134 | except Exception as e: 135 | print(e) 136 | print('Table ' + stattable + ' update failed') 137 | raise e 138 | 139 | # ----------------------------------------------------------------- 140 | # process_items - check each item returned by the scan 141 | # 142 | def process_items(items): 143 | for i in items: 144 | 145 | # Call head-object to check replication status 146 | try: 147 | response = client['s3']['handle'].head_object( 148 | Bucket=i['s3Origin']['S'], 149 | Key=i['s3Object']['S']) 150 | except Exception as e: 151 | print('Item no longer exists - purging: ' + i['ETag']['S']) 152 | purge_item(i['ETag']['S']) 153 | continue 154 | # Init a dict to use to hold our attrs for DDB 155 | ddb_exp_attrs = {} 156 | # Build the DDB UpdateExpression 157 | ddb_update_exp = 'set s3Object = :a' 158 | # push the first attr: s3Object 159 | ddb_exp_attrs[':a'] = { 'S': i['s3Object']['S'] } 160 | 161 | # Object still exists 162 | headers = response['ResponseMetadata']['HTTPHeaders'] 163 | 164 | lastmod = datetime.strftime(response['LastModified'], timefmt) 165 | 166 | if headers['x-amz-replication-status'] == 'COMPLETED': 167 | print('Completed transfer found: ' + i['ETag']['S']) 168 | ddb_update_exp += ', replication_status = :b' 169 | ddb_exp_attrs[':b'] = { 'S': 'COMPLETED' } 170 | #print(response) 171 | elif headers['x-amz-replication-status'] == 'FAILED': 172 | ddb_update_exp += ', replication_status = :b' 173 | ddb_exp_attrs[':b'] = { 'S': 'FAILED' } 174 | log_statistics(i['s3Origin']['S'],'FAILED',i['start_datetime']['S'],'0','1',300) 175 | 176 | # Update the record in the DDB table 177 | try: 178 | client['ddb']['handle'].update_item( 179 | TableName = ddbtable, 180 | Key = { 'ETag': i['ETag'] }, 181 | UpdateExpression = 
ddb_update_exp, 182 | ExpressionAttributeValues = ddb_exp_attrs) 183 | except Exception as e: 184 | print(e) 185 | print('Table ' + ddbtable + ' update failed') 186 | raise e 187 | 188 | # ----------------------------------------------------------------- 189 | # check_incompletes 190 | # 191 | print('Checking for incomplete transfers') 192 | check = datetime.utcnow() - timedelta(hours=1) # datetime object 193 | checkstr= check.strftime(timefmt) # string object 194 | # Set scan filter attrs 195 | eav = { 196 | ":check": { "S": checkstr }, 197 | ":completed": { "S": "COMPLETED" } 198 | } 199 | 200 | print('Reading from ' + ddbtable) 201 | try: 202 | response = client['ddb']['handle'].scan( 203 | TableName=ddbtable, 204 | ExpressionAttributeValues=eav, 205 | FilterExpression="replication_status <> :completed and start_datetime < :check", 206 | Limit=1000 207 | ) 208 | except Exception as e: 209 | print(e) 210 | print('Table ' + ddbtable + ' scan failed') 211 | raise e 212 | 213 | print('Checking for incomplete items from ' + ddbtable) 214 | process_items(response['Items']) 215 | 216 | 217 | while 'LastEvaluatedKey' in response: 218 | response = client['ddb']['handle'].scan( 219 | TableName=ddbtable, 220 | FilterExpression="replication_status <> :completed and start_datetime < :check", 221 | ExpressionAttributeValues=eav, 222 | ExclusiveStartKey=response['LastEvaluatedKey'], 223 | Limit=1000 224 | ) 225 | 226 | process_items(response['Items']) 227 | 228 | ###### M A I N ###### 229 | client = connect_clients(client) 230 | -------------------------------------------------------------------------------- /source/CRRMonitor/CRRMonitor.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | ###################################################################################################################### 5 | # Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. 
# 6 | # # 7 | # Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance # 8 | # with the License. A copy of the License is located at # 9 | # # 10 | # http://www.apache.org/licenses/LICENSE-2.0 # 11 | # # 12 | # or in the 'license' file accompanying this file. This file is distributed on an 'AS IS' BASIS, WITHOUT WARRANTIES # 13 | # OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions # 14 | # and limitations under the License. # 15 | ###################################################################################################################### 16 | 17 | from __future__ import print_function 18 | 19 | import json 20 | import boto3 21 | from botocore.exceptions import ClientError 22 | import os 23 | from datetime import datetime, timedelta 24 | import urllib.request 25 | 26 | def getparm(parmname, defaultval): 27 | try: 28 | myval = os.environ[parmname] 29 | print('Environmental variable \'' + parmname + '\' = ' + str(myval)) 30 | if isinstance(defaultval, int): 31 | return int(myval) 32 | else: 33 | return myval 34 | except: 35 | print('Environmental variable \'' + parmname + '\' not found. Using default [' + \ 36 | str(defaultval) + ']') 37 | return defaultval 38 | 39 | # ===================================================================== 40 | # Configuration 41 | # 42 | # appname: the names of AWS resources are derived from this. It is not 43 | # recommended that you change this from the default 'CRRMonitor' 44 | appname = getparm('appname', 'CRRMonitor') 45 | 46 | # maxtask: Tune this parameter to get the most effective use of a single 47 | # instance of your lambda. It should be roughly 300,000 divided by the average 48 | # time required to process a single SQS record (160ms). Example: if it takes an 49 | # average of 500ms to process a single SQS record you would set this to 50 | # 300 / 0.5 = 600. 
This parameter tells the lambda when to ask for help: 51 | # If the queue depth is > maxtask it will spawn a copy of itself. 52 | maxtask = getparm('maxtask', 1800) 53 | 54 | # maxspawn: This parameter limits how many copies of itself the lambda 55 | # can spawn. This should not allow you to exceed your maximum concurrent 56 | # lambda execution limit (default 100). By default the lambda is set 57 | # to execute every minute and time out after 5. With a default maxspawn 58 | # of 20 this will allow 100 concurrent lambdas to execute. This should 59 | # allow capacity of 200 events per second at an average processing time 60 | # of 500ms per event, or 100 CRR replications per second. Scale and 61 | # request limits accordingly. 62 | maxspawn = getparm('maxspawn', 20) 63 | 64 | # How long to keep records for completed transfers 65 | purge_thresh = getparm('purge_thresh', 24) 66 | 67 | # DEBUG 68 | DEBUG = getparm('debug', 0) 69 | 70 | # VERSION_ID: The version of this solution 71 | VERSION_ID = getparm('SolutionVersion', "").strip() 72 | 73 | # ANONYMOUS_SOLUTION_ID: An anonymous identifier for this instance of the solution 74 | ANONYMOUS_SOLUTION_ID = getparm('UUID', "").strip() 75 | 76 | # SEND_ANONYMOUS_USAGE_METRIC: A flag indicating whether the solution should 77 | # report anonymous usage metrics to AWS 78 | SEND_ANONYMOUS_USAGE_METRIC = (getparm('AnonymousUsage', 'No') == 'Yes') 79 | 80 | # Make sure the VERSION_ID and ANONYMOUS_SOLUTION_ID are valid 81 | if VERSION_ID is None or VERSION_ID == "": 82 | SEND_ANONYMOUS_USAGE_METRIC = False 83 | 84 | if ANONYMOUS_SOLUTION_ID is None or ANONYMOUS_SOLUTION_ID == "": 85 | SEND_ANONYMOUS_USAGE_METRIC = False 86 | 87 | # 88 | # ddbtable and stattable: name of the DynamoDB tables. The tables are 89 | # created in the CloudFormation stack and defaults to the value of appname. 90 | # Do not change this without changing the template. 
91 | ddbtable = appname 92 | stattable = ddbtable + 'Statistics' 93 | # queue: name of the SQS queue. Derived from the appname. The SQS queue 94 | # is created in the CloudFormation template. Do not change this without 95 | # changing the template 96 | queue = appname + 'Queue' 97 | # timefmt: used to format timestamps. Do not change. 98 | timefmt = '%Y-%m-%dT%H:%M:%SZ' 99 | # client: defines the api client connections to create 100 | client={ 101 | 'ddb': {'service': 'dynamodb'}, 102 | 'sqs': {'service': 'sqs'}, 103 | 'lbd': {'service': 'lambda'} 104 | } 105 | s3client = {} # will hold client handle for s3 per region 106 | initfail = {} # hash of source buckets to handle FAILED counter initialization 107 | 108 | # ===================================================================== 109 | # connect_clients 110 | # --------------- 111 | # Connect to all the clients. We will do this once per instantiation of 112 | # the Lambda function (not per execution) 113 | # ===================================================================== 114 | def connect_clients(clients_to_connect): 115 | for c in clients_to_connect: 116 | try: 117 | if 'region' in clients_to_connect[c]: 118 | clients_to_connect[c]['handle'] = boto3.client(clients_to_connect[c]['service'], region_name=clients_to_connect[c]['region']) 119 | else: 120 | clients_to_connect[c]['handle'] = boto3.client(clients_to_connect[c]['service']) 121 | except Exception as e: 122 | print(e) 123 | print('Error connecting to ' + clients_to_connect[c]['service']) 124 | raise e 125 | return clients_to_connect 126 | 127 | def message_handler(event): 128 | def log_statistics(Src, Dst, Tstamp, Size, ET, roundTo): 129 | # ------------------------------------------------------------- 130 | # Derive the statistic bucket from source/dest and time bucket 131 | # (5 minute rolling window) 132 | # 133 | statbucket = Src + ':' + Dst 134 | ts = datetime.strptime(Tstamp, timefmt) 135 | secs = (ts.replace(tzinfo=None) - ts.min).seconds 
136 | rounding = (secs+roundTo/2) // roundTo * roundTo 137 | ts = ts + timedelta(0, rounding-secs, -ts.microsecond) 138 | timebucket = datetime.strftime(ts, timefmt) 139 | statbucket += ':' + timebucket 140 | # ------------------------------------------------------------- 141 | # Init a dict to use to hold our attrs for DDB 142 | stat_exp_attrs = {} 143 | # ------------------------------------------------------------- 144 | # Build the DDB UpdateExpression 145 | stat_update_exp = 'SET timebucket = :t, source_bucket = :o, dest_bucket = :r ADD objects :a, size :c, elapsed :d' 146 | # ------------------------------------------------------------- 147 | # push the first attr: s3Object 148 | stat_exp_attrs[':a'] = {'N': '1'} 149 | stat_exp_attrs[':c'] = {'N': Size} 150 | stat_exp_attrs[':d'] = {'N': ET} 151 | stat_exp_attrs[':t'] = {'S': timebucket} 152 | stat_exp_attrs[':o'] = {'S': Src} 153 | stat_exp_attrs[':r'] = {'S': Dst} 154 | 155 | # Update the DDB table 156 | try: 157 | response = client['ddb']['handle'].update_item( 158 | TableName=stattable, 159 | Key={'OriginReplicaBucket': {'S': statbucket}}, 160 | UpdateExpression=stat_update_exp, 161 | ExpressionAttributeValues=stat_exp_attrs) 162 | except Exception as e: 163 | print(e) 164 | print('Table ' + stattable + ' update failed') 165 | raise e 166 | 167 | # Initialize a counter for failed replications for the source bucket 168 | if not Src in initfail: 169 | initfail[Src] = 'foo' 170 | if Dst != 'FAILED' and initfail[Src] != timebucket: 171 | print('Initializing FAILED bucket for ' + Src + ':' + timebucket) 172 | statbucket = Src + ':FAILED:' + timebucket 173 | stat_exp_attrs = {} 174 | # ------------------------------------------------------------- 175 | # Build the DDB UpdateExpression 176 | stat_update_exp = 'SET timebucket = :t, source_bucket = :o, dest_bucket = :r ADD objects :a, size :c, elapsed :d' 177 | # ------------------------------------------------------------- 178 | # push the first attr: s3Object 
179 | stat_exp_attrs[':a'] = {'N': '0'} 180 | stat_exp_attrs[':c'] = {'N': '1'} 181 | stat_exp_attrs[':d'] = {'N': '1'} 182 | stat_exp_attrs[':t'] = {'S': timebucket} 183 | stat_exp_attrs[':o'] = {'S': Src} 184 | stat_exp_attrs[':r'] = {'S': 'FAILED'} 185 | 186 | try: 187 | response = client['ddb']['handle'].update_item( 188 | TableName=stattable, 189 | Key={'OriginReplicaBucket': {'S': statbucket }}, 190 | UpdateExpression=stat_update_exp, 191 | ExpressionAttributeValues=stat_exp_attrs) 192 | initfail[Src] = timebucket 193 | except Exception as e: 194 | print(e) 195 | print('Table ' + stattable + ' update failed') 196 | raise e 197 | 198 | #print('Stats written to ' + statbucket) 199 | 200 | # So this will work with CloudWatch Events directly or via SNS, let's look 201 | # at the structure of the incoming JSON. Note that this has not been 202 | # tested with CloudWatch events directly, but should be a simple matter. 203 | # I kept the code here as it adds no overhead but is a solid flexible 204 | # example. 205 | # 206 | # A Cloudwatch Event looks like event[event json] 207 | # An SNS notification looks like event['Records'][0][event json] 208 | # print("Received raw event: " + json.dumps(event, indent=2)) 209 | 210 | # Create a reference in evdata that points to the correct element in the 211 | # event dictionary 212 | if 'detail-type' in event: 213 | evdata = event 214 | elif 'Records' in event: 215 | # An SNS notification will have another layer in the dict. Look for 216 | # EventSource = aws:sns. Otherwise generate an exception and get out. 217 | if event['Records'][0]['EventSource'] == 'aws:sns': 218 | #print('Message is ' + event['Records'][0]['Sns']['Message']) 219 | evdata = json.loads(event['Records'][0]['Sns']['Message']) 220 | #print("Message event: " + json.dumps(evdata, indent=2)) 221 | 222 | else: 223 | # Unrecognized event format: uncomment print statements to 224 | # identify the format and enhance this logic. 
At the end of 225 | # the day, evdata must contain the dict for the event record 226 | # of the Cloudwatch log event for the S3 update notification 227 | print('Error: unrecognized event format received') 228 | raise Exception('Unrecognized event format') 229 | 230 | elif 'MessageId' in event: 231 | evdata = json.loads(event['Message']) 232 | else: 233 | evdata = event 234 | 235 | if DEBUG > 1: 236 | print(json.dumps(evdata)) 237 | 238 | #----------------------------------------------------------------- 239 | # Quietly ignore all but PutObject 240 | # 241 | if evdata['detail']['eventName'] != 'PutObject': 242 | if DEBUG > 0: 243 | print('Ignoring ' + evdata['detail']['eventName'] + ' event') 244 | return 245 | 246 | #----------------------------------------------------------------- 247 | # 248 | # Collect the data we want for the DynamoDB table 249 | # 250 | region = evdata['region'] 251 | bucket = evdata['detail']['requestParameters']['bucketName'] 252 | key = evdata['detail']['requestParameters']['key'] 253 | 254 | # This timestamp is from the CW Event record and is most accurate 255 | now = evdata['detail']['eventTime'] 256 | 257 | # Init a dict to use to hold our attrs for DDB 258 | ddb_exp_attrs = {} 259 | # Build the DDB UpdateExpression 260 | ddb_update_exp = 'set s3Object = :a' 261 | # push the first attr: s3Object 262 | ddb_exp_attrs[':a'] = {'S': key} 263 | 264 | 265 | # establish s3 client per region, but only once. 266 | if not region in s3client: 267 | s3client[region] = boto3.client('s3', region) 268 | 269 | # ----------------------------------------------------------------- 270 | # Do a head_object. If the object no longer exists just return. 
271 | # 272 | try: 273 | response = s3client[region].head_object( 274 | Bucket=bucket, 275 | Key=key 276 | ) 277 | except ClientError as e: 278 | # { "Error": { 279 | # "Code": "403", 280 | # "Message": "Forbidden" 281 | # }, 282 | # "ResponseMetadata": { 283 | # "RequestId": "B7C8873E3C067128", 284 | # "HostId": "kYARs5PKMuah57ewyzYq6l5laO4xu9fcWFYVnEPLMHeqNSF4yLhrYIhbbUT0Tw7hp3f2PgCQO9E=", 285 | # "HTTPStatusCode": 403, 286 | # "HTTPHeaders": { 287 | # "x-amz-request-id": "B7C8873E3C067128", 288 | # "x-amz-id-2": "kYARs5PKMuah57ewyzYq6l5laO4xu9fcWFYVnEPLMHeqNSF4yLhrYIhbbUT0Tw7hp3f2PgCQO9E=", 289 | # "content-type": "application/xml", 290 | # "transfer-encoding": "chunked", 291 | # "date": "Tue, 25 Sep 2018 11:58:48 GMT", 292 | # "server": "AmazonS3" 293 | # }, 294 | # "RetryAttempts": 0 295 | # } 296 | # } 297 | 298 | if e.response['Error']['Code'] == '403': 299 | print('IGNORING: CRRMonitor does not have access to Object - ' + \ 300 | evdata['detail']['requestParameters']['bucketName'] + '/' + \ 301 | evdata['detail']['requestParameters']['key']) 302 | elif e.response['Error']['Code'] == '404': 303 | print('IGNORING: Object no longer exists - ' + \ 304 | evdata['detail']['requestParameters']['bucketName'] + '/' + \ 305 | evdata['detail']['requestParameters']['key']) 306 | 307 | else: 308 | # Need to improve this to recognize specifically a 404 309 | print('Unhandled ClientError ' + str(e)) 310 | print(json.dumps(e.response)) 311 | 312 | #print('Removing from queue / ignoring') 313 | return 314 | 315 | except Exception as e: 316 | # Need to improve this to recognize specifically a 404 317 | print('Unhandled Exception ' + str(e)) 318 | print('Removing from queue / ignoring') 319 | return 320 | 321 | 322 | # 2) check that the x-amz-replication-status header is present 323 | # response['ResponseMetadata']['HTTPHeaders']['x-amz-replication-status'] 324 | # 325 | # Note that this function is only called when an object is written. 
Assume that 326 | # the object was written and the x-amz-replication-status is a final status for 327 | # this object in this bucket. So, if it is the source it can be COMPLETED, PENDING, 328 | # or FAILED. If it is the replica it can only be REPLICA. 329 | # 330 | # That in mind, the update date/time for the REPLICA will always be definitive for 331 | # the end_datetime column 332 | # 333 | # Conversely, the source object is always definitive for the start_datetime. 334 | # 335 | # Code must not assume that the events (source and dest) are processed in the correct 336 | # order. Any process consuming the DynamoDB table should do their own Elapsed Time 337 | # calculation. 338 | # 339 | # Reference the dict we want for clarity in the code 340 | headers = response['ResponseMetadata']['HTTPHeaders'] 341 | 342 | # If this object has no x-amz-replication-status header then we can leave 343 | if 'x-amz-replication-status' not in headers: 344 | # This is not a replicated object - get out 345 | if DEBUG > 0: 346 | print('Not a replicated object') 347 | return() 348 | 349 | # repstatus is a pointer to the headers (for code clarity) 350 | repstatus = headers['x-amz-replication-status'] 351 | 352 | # ----------------------------------------------------------------- 353 | # Verify that the DynamoDB table exists. Note: we could create it 354 | # but that takes so long that the lambda function may time out. 
355 | # Better to create it in the CFn template and handle this as a 356 | # failure condition 357 | # 358 | try: 359 | response = client['ddb']['handle'].describe_table( 360 | TableName=ddbtable 361 | ) 362 | except Exception as e: 363 | print(e) 364 | print('Table ' + ddbtable + ' does not exist - need to create it') 365 | raise e 366 | 367 | # Update object size 368 | objsize = headers['content-length'] 369 | ddb_update_exp += ', ObjectSize = :s' 370 | ddb_exp_attrs[':s'] = {'N': objsize} 371 | 372 | ETag = {'S': headers['etag'][1:-1] + ':' + headers['x-amz-version-id'][1:-1]} 373 | 374 | # ----------------------------------------------------------------- 375 | # If the object already has a DDB record get it 376 | # 377 | ddbdata = client['ddb']['handle'].get_item( 378 | TableName=ddbtable, 379 | Key={'ETag': ETag}, 380 | ConsistentRead=True 381 | ) 382 | 383 | ddbitem = {} # reset the dict 384 | if 'Item' in ddbdata: 385 | ddbitem = ddbdata['Item'] 386 | if DEBUG > 4: 387 | print("DDB record: " + json.dumps(ddbitem, indent=2)) 388 | 389 | # 390 | # Is this a REPLICA? Use timestamp as completion time 391 | # 392 | # Note: replica only updates s3Replica, replication_status, and end_datetime. 393 | # 394 | # We do this so we don't have to handle conditional update of fields that might get 395 | # stepped on of the events are processed out of order. 
396 | # 397 | if repstatus == 'REPLICA': 398 | # print('Processing a REPLICA object: ' + ETag['S']) 399 | ddb_update_exp += ', s3Replica = :d' 400 | ddb_exp_attrs[':d'] = {'S': bucket} 401 | #print('s3Replica: ' + bucket) 402 | 403 | ddb_update_exp += ', end_datetime = :e' 404 | ddb_exp_attrs[':e'] = {'S': now} # 'now' is from the event data 405 | #print('end_datetime: ' + now) 406 | 407 | # Set the ttl 408 | purge = datetime.strptime(now, timefmt) - timedelta(hours=purge_thresh) # datetime object 409 | ttl = purge.strftime('%s') 410 | ddb_update_exp += ', itemttl = :p' 411 | ddb_exp_attrs[':p'] = {'N': ttl} 412 | 413 | # If this is a replica then status is COMPLETE 414 | ddb_update_exp += ', replication_status = :b' 415 | ddb_exp_attrs[':b'] = {'S': 'COMPLETED'} 416 | #print('replication_status: COMPLETED (implied)') 417 | 418 | if 'start_datetime' in ddbitem and 'crr_rate' not in ddbitem: 419 | etime = datetime.strptime(now, timefmt) - datetime.strptime(ddbitem['start_datetime']['S'], timefmt) 420 | etimesecs = (etime.days * 24 * 60 * 60) + etime.seconds 421 | #print("Calculate elapsed time in seconds") 422 | crr_rate = int(objsize) * 8 / (etimesecs + 1) # Add 1 to prevent /0 errors 423 | ddb_update_exp += ', crr_rate = :r' 424 | ddb_exp_attrs[':r'] = {'N': str(crr_rate)} 425 | #print('crr_rate: ', crr_rate) 426 | 427 | ddb_update_exp += ', elapsed = :t' 428 | ddb_exp_attrs[':t'] = {'N': str(etimesecs)} 429 | #print('elapsed: ', etimesecs) 430 | log_statistics( 431 | ddbitem['s3Origin']['S'], 432 | bucket, 433 | ddbitem['start_datetime']['S'], 434 | objsize, 435 | str(etimesecs), 436 | 300) 437 | # ----------------------------------------------------------------- 438 | # Or is this a SOURCE? Use timestamp as replication start time 439 | # 440 | else: 441 | 442 | ddb_update_exp += ', s3Origin = :f' 443 | ddb_exp_attrs[':f'] = {'S': bucket} 444 | 445 | # If this is not a replica then do not report status. 
It's not important and
446 | # makes the DynamoDB update much more complicated. Just get the start time
447 | #
448 | # We also do not care what the status is. If it has a FAILED status we could
449 | # write code to send a notification, but that's outside our scope.
450 | if repstatus == 'COMPLETED' or repstatus == 'FAILED' or repstatus == 'PENDING':
451 | # print('Processing an ORIGINAL object: ' + ETag['S'] + ' status: ' + repstatus)
452 | ddb_update_exp += ', start_datetime = :g'
453 | ddb_exp_attrs[':g'] = {'S': now}
454 | # ---------------------------------------------------------
455 | # If we already got the replica event...
456 | #
457 | if 'end_datetime' in ddbitem and 'crr_rate' not in ddbitem:
458 | etime = datetime.strptime(ddbitem['end_datetime']['S'], timefmt) - datetime.strptime(now, timefmt)
459 | etimesecs = (etime.days * 24 * 60 * 60) + etime.seconds
460 | #print("Calculate elapsed time in seconds")
461 | crr_rate = int(objsize) * 8 / (etimesecs + 1) # Add 1 to prevent /0 errors
462 | ddb_update_exp += ', crr_rate = :r'
463 | ddb_exp_attrs[':r'] = {'N': str(crr_rate)}
464 |
465 | # Set the ttl
466 | purge = datetime.strptime(ddbitem['end_datetime']['S'], timefmt) + timedelta(hours=purge_thresh) # datetime object
467 | ttl = purge.strftime('%s')
468 | ddb_update_exp += ', itemttl = :p'
469 | ddb_exp_attrs[':p'] = {'N': ttl}
470 |
471 | ddb_update_exp += ', elapsed = :t'
472 | ddb_exp_attrs[':t'] = {'N': str(etimesecs)}
473 |
474 | log_statistics(
475 | bucket, ddbitem['s3Replica']['S'],
476 | ddbitem['end_datetime']['S'],
477 | objsize,
478 | str(etimesecs), 300)
479 | # ---------------------------------------------------------
480 | # We did not yet get the replica event
481 | #
482 | else:
483 | if repstatus == 'FAILED':
484 | # If replication failed this is the only time we will see this object.
485 | # Update the status to FAILED 486 | ddb_update_exp += ', replication_status = :b' 487 | ddb_exp_attrs[':b'] = {'S': 'FAILED'} 488 | log_statistics( 489 | bucket, 490 | 'FAILED', 491 | now, 492 | '0', 493 | '1', 494 | 300) 495 | 496 | else: 497 | print('Unknown Replication Status: ' + repstatus) 498 | raise Exception('Unknown Replication Status') 499 | 500 | 501 | 502 | # Create a record in the DDB table 503 | try: 504 | response = client['ddb']['handle'].update_item( 505 | TableName=ddbtable, 506 | Key={'ETag': ETag}, 507 | UpdateExpression=ddb_update_exp, 508 | ExpressionAttributeValues=ddb_exp_attrs) 509 | except Exception as e: 510 | print(e) 511 | print('Table ' + ddbtable + ' update failed') 512 | raise e 513 | 514 | # ===================================================================== 515 | # queue_handler 516 | # ------------- 517 | # Main entry point 518 | # Count the SQS queue and manage scale. 519 | # Here's what my event looks like: 520 | # { 521 | # "account": "SAMPLE12345", 522 | # "region": "us-east-2", 523 | # "detail": {}, 524 | # "detail-type": "Scheduled Event", 525 | # "source": "aws.events", 526 | # "version": "0", 527 | # "time": "2017-02-09T13:56:03Z", 528 | # "id": "a8b4f046-06c5-4b3c-b543-90c3fdaaac14", 529 | # "resources": [ 530 | # "arn:aws:events:us-east-2:SAMPLE12345:rule/CRRMonitor-2" 531 | # ] 532 | # } 533 | # 534 | # When I spawn a child process I will change "detail-type" to "Spawned Event" 535 | # and add "child-number", where 0 is the top-level 536 | # ===================================================================== 537 | def queue_handler(event, context): 538 | cnum = 0 539 | if 'child-number' in event: 540 | cnum = int(event['child-number']) 541 | 542 | message_floor = cnum * maxtask 543 | 544 | # { 545 | # "Attributes": {"ApproximateNumberOfMessages": "1040"}, 546 | # "ResponseMetadata": { 547 | # "RetryAttempts": 0, 548 | # "HTTPStatusCode": 200, 549 | # "RequestId": "51c43b7e-9b05-59c8-b68e-6a68f3f3b999", 550 | # 
"HTTPHeaders": {
551 | # "x-amzn-requestid": "51c43b7e-9b05-59c8-b68e-6a68f3f3b999",
552 | # "content-length": "360",
553 | # "server": "Server",
554 | # "connection": "keep-alive",
555 | # "date": "Thu, 09 Feb 2017 12:55:18 GMT",
556 | # "content-type": "text/xml"
557 | # }
558 | # }
559 | # }
560 | response = client['sqs']['handle'].get_queue_attributes(
561 | QueueUrl=queue_endpoint,
562 | AttributeNames=['ApproximateNumberOfMessages']
563 | )
564 |
565 | if response['ResponseMetadata']['HTTPStatusCode'] != 200:
566 | print('Bad status from ' + queue + ': ' + str(response['ResponseMetadata']['HTTPStatusCode']))
567 | return
568 |
569 | queue_sz = int(response['Attributes']['ApproximateNumberOfMessages'])
570 | queue_backlog = queue_sz - message_floor
571 |
572 | print('INFO [CNUM-' + str(cnum) + '] Queue is ' + str(queue_sz) + \
573 | ' deep. Backlog is ' + str(queue_backlog))
574 |
575 | # We subtracted the number of messages for which processes are already
576 | # running. If the backlog is still too deep then first spawn another child,
577 | # updating child-number + 1
578 | if queue_backlog > maxtask:
579 | # increment child-number (or initialize to 1) in the event dict
580 | # spawn another lambda, passing the event and context dicts
581 | if cnum < maxspawn:
582 | event['child-number'] = cnum + 1
583 | try:
584 | client['lbd']['handle'].invoke(
585 | FunctionName=context.function_name,
586 | InvocationType='Event',
587 | Payload=json.dumps(event)
588 | )
589 | print('Spawning a child because there are ' + str(queue_sz) + ' messages in the queue. I am child ' + str(cnum) + ' with a max capacity of ' + str(maxtask) + '. Message floor is ' + str(message_floor))
590 |
591 | print('Reproduction successful - child ' + str(cnum+1) + ' spawned')
592 | except Exception as e:
593 | print(e)
594 | print('ERROR[CNUM-' + str(cnum) + '] Failed to reproduce')
595 | raise e
596 | else:
597 | print('WARNING: maxspawn(' + str(maxspawn) + ') exceeded.
Not spawning a helper.')
598 |
599 | # -----------------------------------------------------------------
600 | # Now we get to work. Process messages from the queue until empty
601 | # or we time out. This is the secret sauce to our horizontal scale
602 | print('INFO [CNUM-' + str(cnum) + '] Priming read from SQS...')
603 | msg_ctr = 0 # keep a count of messages processed
604 | sqs_msgs = client['sqs']['handle'].receive_message(
605 | QueueUrl=queue_endpoint,
606 | AttributeNames=['All'],
607 | MaxNumberOfMessages=10,
608 | VisibilityTimeout=60
609 | )
610 | sqs_delete = []
611 | while 'Messages' in sqs_msgs:
612 | print('INFO [CNUM-' + str(cnum) + '] Processing ' + str(len(sqs_msgs['Messages'])) + ' messages')
613 | for message in sqs_msgs['Messages']:
614 | rc = message_handler(json.loads(message['Body']))
615 | # If we did not get a 0 return code let the record time out
616 | # back into the queue
617 | if not rc:
618 | sqs_delete.append({'Id': message['MessageId'], 'ReceiptHandle': message['ReceiptHandle']})
619 | msg_ctr += 1 # keep a count of messages processed
620 |
621 | if len(sqs_delete) > 0:
622 | # Delete the messages we just processed
623 | response = client['sqs']['handle'].delete_message_batch(
624 | QueueUrl=queue_endpoint,
625 | Entries=sqs_delete
626 | )
627 | if len(response['Successful']) < len(sqs_delete):
628 | print('ERROR[CNUM-' + str(cnum) + ']: processed ' + str(len(sqs_delete)) + ' messages but only deleted ' + str(len(response['Successful'])) + ' messages')
629 |
630 | sqs_delete = [] # reset the list
631 |
632 | print('INFO [CNUM-' + str(cnum) + '] Reading from SQS...')
633 | sqs_msgs = client['sqs']['handle'].receive_message(
634 | QueueUrl=queue_endpoint,
635 | AttributeNames=['All'],
636 | MaxNumberOfMessages=10,
637 | VisibilityTimeout=60
638 | )
639 |
640 | print('INFO [CNUM-' + str(cnum) + '] Completed - ' + str(msg_ctr) + ' messages processed')
641 |
642 | if SEND_ANONYMOUS_USAGE_METRIC and msg_ctr > 0:
643 |
send_anonymous_usage_metric({
644 | "Action": f"Num messages processed by CRRMonitor: {msg_ctr}"
645 | })
646 |
647 | def send_anonymous_usage_metric(metric_data={}):
648 | try:
649 | if type(metric_data) is not dict or not metric_data:
650 | raise Exception('Invalid metric_data passed to send_anonymous_usage_metric')
651 |
652 | metric_endpoint = 'https://metrics.awssolutionsbuilder.com/generic'
653 | metric_payload = {
654 | "Solution": "SO0022",
655 | "UUID": ANONYMOUS_SOLUTION_ID,
656 | "Version": VERSION_ID,
657 | "Timestamp": str(datetime.utcnow()),
658 | "Data": metric_data
659 | }
660 | data = bytes(json.dumps(metric_payload), 'utf-8')
661 | headers = { "Content-Type": "application/json" }
662 |
663 | print(f"Sending anonymous usage metric: {str(metric_payload)}")
664 |
665 | req = urllib.request.Request(url=metric_endpoint, data=data, method='POST', headers=headers)
666 | with urllib.request.urlopen(req) as f:
667 | print(f"Anonymous usage metric send status: {f.status}")
668 | except Exception as e:
669 | # Log the exception but do not raise it again
670 | print(f'Exception while sending anonymous usage metric: {e}')
671 |
672 | ###### M A I N ######
673 | client = connect_clients(client)
674 | try:
675 | queue_endpoint = client['sqs']['handle'].get_queue_url(
676 | QueueName=queue
677 | )['QueueUrl']
678 | except Exception as e:
679 | print(e)
680 | print('Could not get the url for ' + queue)
681 | raise e
682 |
--------------------------------------------------------------------------------
/source/CRRMonitorHousekeeping/CRRMonitorHousekeeping.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | # -*- coding: utf-8 -*-
3 |
4 | ######################################################################################################################
5 | # Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. #
6 | # #
7 | # Licensed under the Apache License, Version 2.0 (the "License").
You may not use this file except in compliance # 8 | # with the License. A copy of the License is located at # 9 | # # 10 | # http://www.apache.org/licenses/LICENSE-2.0 # 11 | # # 12 | # or in the 'license' file accompanying this file. This file is distributed on an 'AS IS' BASIS, WITHOUT WARRANTIES # 13 | # OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions # 14 | # and limitations under the License. # 15 | ###################################################################################################################### 16 | 17 | from __future__ import print_function 18 | 19 | import json 20 | import boto3 21 | import os 22 | import logging 23 | from datetime import datetime,timedelta 24 | 25 | log = logging.getLogger() 26 | log.setLevel(logging.INFO) 27 | 28 | log.debug('Loading function') 29 | 30 | def getparm (parmname, defaultval): 31 | try: 32 | myval = os.environ[parmname] 33 | if isinstance(defaultval, int): 34 | return int(myval) 35 | else: 36 | return myval 37 | except: 38 | print('Environmental variable \'' + parmname + '\' not found. Using default [' + str(defaultval) + ']') 39 | return defaultval 40 | # 41 | # Define the DynamoDB table to be used to track replication status. 42 | # It must be in the same region as this Lambda and should already 43 | # exist. It is created by the CloudFormation template. 44 | # 45 | ddbtable = getparm('appname','CRRMonitor') 46 | stattable = ddbtable + 'Statistics' 47 | # Stream to kinesis? 
Must be YES or NO 48 | stream_to_kinesis = getparm('stream_to_kinesis','No') 49 | kinesisfirestream = getparm('kinesisfirestream', ddbtable + 'DeliveryStream') 50 | stack_name = getparm('stack_name','Nil') 51 | 52 | timefmt = '%Y-%m-%dT%H:%M:%SZ' 53 | roundTo = getparm('roundto', 300) # 5 minute buckets for CW metrics 54 | purge_thresh = getparm('purge_thresh', 24) # threshold in hours 55 | client={ 56 | 'cw': { 'service': 'cloudwatch' }, 57 | 'ddb': { 'service': 'dynamodb'} 58 | } 59 | # optionally include firehose 60 | if stream_to_kinesis != 'No': 61 | #kinesisfirestream = ddbtable + 'DeliveryStream' 62 | client['firehose'] = { 'service': 'firehose' } 63 | 64 | # ===================================================================== 65 | # connect_clients 66 | # --------------- 67 | # Connect to all the clients. We will do this once per instantiation of 68 | # the Lambda function (not per execution) 69 | # ===================================================================== 70 | def connect_clients(clients_to_connect): 71 | for c in clients_to_connect: 72 | try: 73 | if 'region' in clients_to_connect[c]: 74 | clients_to_connect[c]['handle']=boto3.client(clients_to_connect[c]['service'], region_name=clients_to_connect[c]['region']) 75 | else: 76 | clients_to_connect[c]['handle']=boto3.client(clients_to_connect[c]['service']) 77 | except Exception as e: 78 | print(e) 79 | print('Error connecting to ' + clients_to_connect[c]['service']) 80 | raise e 81 | return clients_to_connect 82 | 83 | def lambda_handler(event, context): 84 | # ----------------------------------------------------------------- 85 | # save items in S3 - save items 86 | # 87 | def save_item(item): 88 | print('Save Item' + json.dumps(item)) 89 | try: 90 | response = client['firehose']['handle'].put_record( 91 | DeliveryStreamName=kinesisfirestream, 92 | Record={ 93 | 'Data': json.dumps(item) + '\n' 94 | } 95 | ) 96 | except Exception as e: 97 | print(e) 98 | print('Error saving ' + 
item['ETag']['S'] + ' from ' + ddbtable) 99 | raise e 100 | 101 | # ----------------------------------------------------------------- 102 | 103 | # ----------------------------------------------------------------- 104 | # post_stats - post statistics to CloudWatch 105 | # 106 | def post_stats(item): 107 | print('Posting statistics to CloudWatch for ' + item['source_bucket']['S'] + ' time bucket ' + item['timebucket']['S']) 108 | 109 | ts=item['timebucket']['S'] 110 | 111 | # ------------------------------------------------------------- 112 | # Special Handling: Failed replicatons are reported in the 113 | # same data format. The destination bucket will be FAILED. 114 | # Pull these out separately to a different CW metric. 115 | if item['dest_bucket']['S'] == 'FAILED': 116 | try: 117 | client['cw']['handle'].put_metric_data( 118 | Namespace='CRRMonitor', 119 | MetricData=[ 120 | { 121 | 'MetricName': 'FailedReplications', 122 | 'Dimensions': [ 123 | { 124 | 'Name': 'SourceBucket', 125 | 'Value': item['source_bucket']['S'] 126 | } 127 | ], 128 | 'Timestamp': ts, 129 | 'Value': int(item['objects']['N']) 130 | }, 131 | ] 132 | ) 133 | except Exception as e: 134 | print(e) 135 | print('Error creating CloudWatch metric FailedReplications') 136 | raise e 137 | 138 | else: 139 | try: 140 | client['cw']['handle'].put_metric_data( 141 | Namespace='CRRMonitor', 142 | MetricData=[ 143 | { 144 | 'MetricName': 'ReplicationObjects', 145 | 'Dimensions': [ 146 | { 147 | 'Name': 'SourceBucket', 148 | 'Value': item['source_bucket']['S'] 149 | }, 150 | { 151 | 'Name': 'DestBucket', 152 | 'Value': item['dest_bucket']['S'] 153 | } 154 | ], 155 | 'Timestamp': ts, 156 | 'Value': int(item['objects']['N']) 157 | }, 158 | ] 159 | ) 160 | except Exception as e: 161 | print(e) 162 | print('Error creating CloudWatch metric') 163 | raise e 164 | 165 | try: 166 | client['cw']['handle'].put_metric_data( 167 | Namespace='CRRMonitor', 168 | MetricData=[ 169 | { 170 | 'MetricName': 
'ReplicationSpeed', 171 | 'Dimensions': [ 172 | { 173 | 'Name': 'SourceBucket', 174 | 'Value': item['source_bucket']['S'] 175 | }, 176 | { 177 | 'Name': 'DestBucket', 178 | 'Value': item['dest_bucket']['S'] 179 | } 180 | ], 181 | 'Timestamp': ts, 182 | 'Value': ((int(item['size']['N'])*8)/1024)/(int(item['elapsed']['N'])+1) 183 | }, 184 | ] 185 | ) 186 | except Exception as e: 187 | print(e) 188 | print('Error creating CloudWatch metric') 189 | raise e 190 | 191 | print ('Statistics posted to ' + ts) 192 | 193 | try: 194 | client['ddb']['handle'].delete_item( 195 | TableName=stattable, 196 | Key={ 197 | 'OriginReplicaBucket': { 198 | 'S': item['source_bucket']['S'] + ':' + item['dest_bucket']['S'] + ':' + ts 199 | } 200 | } 201 | ) 202 | print('Purged statistics date for ' + ts) 203 | except Exception as e: 204 | print(e) 205 | print('Error purging from ' + ts) 206 | raise e 207 | 208 | #======================== post_stats ============================== 209 | 210 | #================================================================== 211 | # firehose: retrieve all records completed in the last 5 minutes 212 | # Stream them to firehose 213 | def firehose(ts): 214 | begts=ts - timedelta(minutes=5) 215 | arch_beg = begts.strftime(timefmt) 216 | arch_end = ts.strftime(timefmt) 217 | # Set scan filter attrs 218 | eav = { 219 | ":archbeg": { "S": arch_beg }, 220 | ":archend": { "S": arch_end } 221 | } 222 | 223 | print('Reading from ' + ddbtable) 224 | try: 225 | response = client['ddb']['handle'].scan( 226 | TableName=ddbtable, 227 | ExpressionAttributeValues=eav, 228 | FilterExpression="end_datetime >= :archbeg and end_datetime < :archend", 229 | Limit=1000 230 | ) 231 | except Exception as e: 232 | print(e) 233 | print('Table ' + ddbtable + ' scan failed') 234 | raise e 235 | 236 | print('Archiving items from ' + ddbtable + ' beg>=' + arch_beg + ' end=' + arch_end) 237 | 238 | for i in response['Items']: 239 | save_item(i) 240 | 241 | while 'LastEvaluatedKey' in 
response: 242 | response = client['ddb']['handle'].scan( 243 | TableName=ddbtable, 244 | FilterExpression="end_datetime >= :archbeg and end_datetime < :archend", 245 | ExpressionAttributeValues=eav, 246 | ExclusiveStartKey=response['LastEvaluatedKey'], 247 | Limit=1000 248 | ) 249 | 250 | for i in response['Items']: 251 | print('Items LastEvaluated ' + i['ETag']['S']) 252 | save_item(i) 253 | #====================== firehose ================================== 254 | 255 | # What time is it? 256 | ts = datetime.utcnow() 257 | 258 | # CRRMonitor logs forward (rounds up). We want to read from the last bucket, 259 | # not the current on. So round down to the previous 5 min interval 260 | secs = (ts.replace(tzinfo=None) - ts.min).seconds 261 | rounding = (secs-roundTo/2) // roundTo * roundTo 262 | ts = ts + timedelta(0,rounding-secs,-ts.microsecond) 263 | 264 | # save the timestamp we created in a str 265 | statbucket = datetime.strftime(ts, timefmt) # We'll get stats from this bucket 266 | print('Logging from ' + statbucket) 267 | 268 | # ----------------------------------------------------------------- 269 | # Process Statistics 270 | # 271 | # Get the name of the 5 minute stat bucket that we just stopped 272 | # logging to, read the data, and delete the record. 
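The bucket-rounding arithmetic above (subtract half of `roundTo`, floor-divide to the bucket boundary, then adjust the timestamp and strip microseconds so it lands on the previously completed 5-minute bucket) can be checked with a small standalone sketch; `previous_bucket` is an illustrative name, not a function in the solution:

```python
from datetime import datetime, timedelta

def previous_bucket(ts, round_to=300):
    # Mirror of the rounding in lambda_handler: drop to the most recently
    # completed round_to-second bucket and strip microseconds.
    secs = (ts.replace(tzinfo=None) - ts.min).seconds  # seconds since midnight
    rounding = (secs - round_to / 2) // round_to * round_to
    return ts + timedelta(0, rounding - secs, -ts.microsecond)

# 12:03:14 falls inside the 12:00-12:05 bucket, so stats are read from 12:00:00
print(previous_bucket(datetime(2020, 3, 26, 12, 3, 14)))
```

Note that a timestamp exactly on a boundary (12:05:00) also rounds down to 12:00:00, which matches the comment's intent of never reading the bucket that is still being logged to.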
273 | #
274 | try:
275 | client['ddb']['handle'].describe_table(
276 | TableName = stattable
277 | )
278 | except Exception as e:
279 | print(e)
280 | print('Table ' + stattable + ' does not exist - need to create it')
281 | raise e
282 |
283 | eav = {
284 | ":stats": { "S": statbucket }
285 | }
286 |
287 | try:
288 | response = client['ddb']['handle'].scan(
289 | TableName=stattable,
290 | ExpressionAttributeValues=eav,
291 | FilterExpression="timebucket <= :stats",
292 | ConsistentRead=True
293 | )
294 | except Exception as e:
295 | print(e)
296 | print('Table ' + stattable + ' scan failed')
297 | raise e
298 |
299 | if len(response['Items']) == 0:
300 | print('WARNING: No stats bucket found for ' + statbucket)
301 |
302 | for i in response['Items']:
303 | post_stats(i)
304 |
305 | while 'LastEvaluatedKey' in response:
306 | try:
307 | response = client['ddb']['handle'].scan(
308 | TableName=stattable,
309 | FilterExpression="timebucket <= :stats",
310 | ExpressionAttributeValues=eav,
311 | ExclusiveStartKey=response['LastEvaluatedKey'],
312 | ConsistentRead=True
313 | )
314 | except Exception as e:
315 | print(e)
316 | print('Table ' + stattable + ' scan failed')
317 | raise e
318 |
319 | for i in response['Items']:
320 | post_stats(i)
321 |
322 | # Archive to firehose
323 | if stream_to_kinesis == 'Yes':
324 | firehose(ts)
325 |
326 | ######## M A I N ########
327 | client = connect_clients(client)
328 |
--------------------------------------------------------------------------------
/source/CRRMonitorTrailAlarm/CRRMonitorTrailAlarm.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | # -*- coding: utf-8 -*-
3 |
4 | ######################################################################################################################
5 | # Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. #
6 | # #
7 | # Licensed under the Apache License, Version 2.0 (the "License").
You may not use this file except in compliance # 8 | # with the License. A copy of the License is located at # 9 | # # 10 | # http://www.apache.org/licenses/LICENSE-2.0 # 11 | # # 12 | # or in the 'license' file accompanying this file. This file is distributed on an 'AS IS' BASIS, WITHOUT WARRANTIES # 13 | # OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions # 14 | # and limitations under the License. # 15 | ###################################################################################################################### 16 | 17 | from __future__ import print_function 18 | 19 | import boto3 20 | import json 21 | 22 | # Unable to import module? You need to zip CRRdeployagent.py with 23 | # cfn_resource.py!! 24 | import cfn_resource 25 | 26 | handler = cfn_resource.Resource() 27 | 28 | source_buckets = [] 29 | 30 | client = { 31 | 's3': { 'service': 's3' }, 32 | 'cloudtrail': { 'service': 'cloudtrail'}, 33 | 'cloudwatch': { 'service': 'cloudwatch'} 34 | } 35 | 36 | 37 | 38 | # ===================================================================== 39 | # connect_clients 40 | # --------------- 41 | # Connect to all the clients. 
We will do this once per instantiation of 42 | # the Lambda function (not per execution) 43 | # ===================================================================== 44 | def connect_clients(clients_to_connect): 45 | for c in clients_to_connect: 46 | try: 47 | if 'region' in clients_to_connect[c]: 48 | clients_to_connect[c]['handle'] = boto3.client(clients_to_connect[c]['service'], region_name=clients_to_connect[c]['region']) 49 | else: 50 | clients_to_connect[c]['handle'] = boto3.client(clients_to_connect[c]['service']) 51 | except Exception as e: 52 | print(e) 53 | print('Error connecting to ' + clients_to_connect[c]['service']) 54 | raise e 55 | return clients_to_connect 56 | 57 | def create_trail(trail_name, trail_log_bucket): 58 | print('Create Trail: ') 59 | try: 60 | response = client['cloudtrail']['handle'].create_trail( 61 | Name=trail_name, 62 | S3BucketName=trail_log_bucket, 63 | IncludeGlobalServiceEvents=True, 64 | IsMultiRegionTrail=True, 65 | EnableLogFileValidation=True 66 | ) 67 | 68 | ## Start Logging 69 | client['cloudtrail']['handle'].start_logging( 70 | Name=response['TrailARN'] 71 | ) 72 | except Exception as e: 73 | print(e) 74 | print('Create Trail') 75 | raise e 76 | 77 | 78 | def get_buckets(): 79 | print('List Buckets:') 80 | try: 81 | list_buckets = client['s3']['handle'].list_buckets()['Buckets'] 82 | crr_buckets = [] 83 | for i in list_buckets: 84 | bucket_response = get_bucket_replication(i['Name']) 85 | if 'ReplicationConfigurationError-' != bucket_response \ 86 | and bucket_response['ReplicationConfiguration']['Rules'][0]['Status'] != 'Disabled': 87 | source_buckets.append(i['Name']) 88 | crr_buckets.append(get_source_bucket_arn(i['Name'])) 89 | crr_buckets.append(get_replica_bucket_arn(bucket_response)) 90 | except Exception as e: 91 | print(e) 92 | raise e 93 | return crr_buckets 94 | 95 | def get_source_buckets(): 96 | print('List Buckets:') 97 | try: 98 | list_buckets = client['s3']['handle'].list_buckets()['Buckets'] 99 | 
source_bucket_list = [] 100 | for i in list_buckets: 101 | bucket_response = get_bucket_replication(i['Name']) 102 | if 'ReplicationConfigurationError-' != bucket_response \ 103 | and bucket_response['ReplicationConfiguration']['Rules'][0]['Status'] != 'Disabled': 104 | source_bucket_list.append(i['Name']) 105 | except Exception as e: 106 | print(e) 107 | raise e 108 | return source_bucket_list 109 | 110 | def get_bucket_replication(bucket_name): 111 | try: 112 | response = client['s3']['handle'].get_bucket_replication( 113 | Bucket=bucket_name 114 | ) 115 | except Exception as e: 116 | print(e) 117 | response = "ReplicationConfigurationError-" 118 | return response 119 | 120 | def get_source_bucket_arn(response): 121 | try: 122 | src_bucket = 'arn:aws:s3:::' + response + '/' 123 | except Exception as e: 124 | print(e) 125 | print('SourceBucket') 126 | raise e 127 | return src_bucket 128 | 129 | def get_replica_bucket_arn(response): 130 | print(json.dumps(response)) 131 | try: 132 | dest_bucket_arn = response['ReplicationConfiguration']['Rules'][0]['Destination']['Bucket'] 133 | dest_bucket_prefix = '' 134 | if 'Prefix' in response['ReplicationConfiguration']['Rules'][0]: 135 | dest_bucket_prefix = response['ReplicationConfiguration']['Rules'][0]['Prefix'] 136 | replica_bucket = dest_bucket_arn + '/' + dest_bucket_prefix 137 | except Exception as e: 138 | print(e) 139 | print('ReplicaBucket') 140 | raise e 141 | return replica_bucket 142 | 143 | def put_event_selectors(trail_name,crr_buckets): 144 | print('Data Events: ') 145 | try: 146 | 147 | client['cloudtrail']['handle'].put_event_selectors( 148 | TrailName=trail_name, 149 | EventSelectors=[ 150 | { 151 | 'ReadWriteType': 'WriteOnly', 152 | 'IncludeManagementEvents': True, 153 | 'DataResources': [ 154 | { 155 | 'Type': 'AWS::S3::Object', 156 | 'Values': crr_buckets 157 | }, 158 | ] 159 | }, 160 | ] 161 | ) 162 | except Exception as e: 163 | print(e) 164 | print('Data Events Trail') 165 | raise e 166 | 167 | def 
put_metric_alarm(sns_topic, src_buckets): 168 | print('Metric Alarms:') 169 | try: 170 | for bucket in src_buckets: 171 | client['cloudwatch']['handle'].put_metric_alarm( 172 | AlarmName='FailedReplicationAlarm-' + bucket, 173 | AlarmDescription='Trigger a alarm for Failed Replication Objects.', 174 | ActionsEnabled=True, 175 | AlarmActions=[ 176 | sns_topic, 177 | ], 178 | MetricName='FailedReplications', 179 | Namespace='CRRMonitor', 180 | Statistic='Sum', 181 | Dimensions=[ 182 | { 183 | 'Name': 'SourceBucket', 184 | 'Value': bucket 185 | }, 186 | ], 187 | Period=60, 188 | EvaluationPeriods=1, 189 | Threshold=0.0, 190 | ComparisonOperator='GreaterThanThreshold' 191 | 192 | ) 193 | except Exception as e: 194 | print(e) 195 | print('Data Events Trail') 196 | raise e 197 | 198 | def put_metric_data(src_buckets): 199 | print('Metric Data: ') 200 | try: 201 | for bucket in src_buckets: 202 | client['cloudwatch']['handle'].put_metric_data( 203 | Namespace='CRRMonitor', 204 | MetricData=[ 205 | { 206 | 'MetricName': 'FailedReplications', 207 | 'Dimensions': [ 208 | { 209 | 'Name': 'SourceBucket', 210 | 'Value': bucket 211 | }, 212 | ], 213 | 'Value': 0.0 214 | }, 215 | ] 216 | ) 217 | except Exception as e: 218 | print(e) 219 | print('Data Events Trail') 220 | raise e 221 | 222 | 223 | # ===================================================================== 224 | # CREATE 225 | # 226 | @handler.create 227 | def create_trail_alarm(event, context): 228 | 229 | trail_name = event["ResourceProperties"]["trail_name"] # Trail Name 230 | trail_log_bucket = event["ResourceProperties"]["trail_log_bucket"] # Trail Log 231 | sns_topic_arn = event["ResourceProperties"]["sns_topic_arn"] # SNS Topic 232 | 233 | ### Trail Creation 234 | 235 | create_trail(trail_name, trail_log_bucket) 236 | crr_buckets = get_buckets() 237 | put_event_selectors(trail_name, crr_buckets) 238 | 239 | ### Metric Alarm 240 | 241 | put_metric_data(source_buckets) #Source buckets are derived from 
get_buckets() call 242 | put_metric_alarm(sns_topic_arn, source_buckets) 243 | return { 'PhysicalResourceId': 'CRRMonitorTrailAlarm' } 244 | 245 | ###### M A I N ###### 246 | client = connect_clients(client) 247 | 248 | 249 | # ===================================================================== 250 | # UPDATE 251 | # 252 | @handler.update 253 | def update_trail_alarm(event, context): 254 | 255 | # No update action necessary 256 | return {} 257 | 258 | 259 | # ===================================================================== 260 | # DELETE 261 | # 262 | @handler.delete 263 | def delete_trail_alarm(event, context): 264 | 265 | trail_name = event["ResourceProperties"]["trail_name"] # Trail Name 266 | print("Delete TrailName:" + trail_name) 267 | # ----------------------------------------------------------------- 268 | # Create client connections 269 | # 270 | # events 271 | try: 272 | ctl = boto3.client('cloudtrail') 273 | cwe = boto3.client('cloudwatch') 274 | 275 | except Exception as e: 276 | print(e) 277 | print('Error creating Events client') 278 | raise e 279 | 280 | # ----------------------------------------------------------------- 281 | # Remove the Targets 282 | # 283 | ctl.delete_trail( 284 | Name=trail_name 285 | ) 286 | 287 | ###Fetching source bucket details 288 | source_bucket_list = get_source_buckets() 289 | for bucket in source_bucket_list: 290 | cwe.delete_alarms( 291 | AlarmNames=[ 292 | 'FailedReplicationAlarm-' + bucket, 293 | ] 294 | ) 295 | 296 | return {} 297 | -------------------------------------------------------------------------------- /source/CRRMonitorTrailAlarm/cfn_resource.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | ###################################################################################################################### 5 | # Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. 
# 6 | # # 7 | # Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance # 8 | # with the License. A copy of the License is located at # 9 | # # 10 | # http://www.apache.org/licenses/LICENSE-2.0 # 11 | # # 12 | # or in the 'license' file accompanying this file. This file is distributed on an 'AS IS' BASIS, WITHOUT WARRANTIES # 13 | # OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions # 14 | # and limitations under the License. # 15 | ###################################################################################################################### 16 | 17 | import json 18 | import logging 19 | import sys 20 | if sys.version_info.major == 3: 21 | from urllib.request import urlopen, Request, HTTPError, URLError 22 | from urllib.parse import urlencode 23 | else: 24 | from urllib2 import urlopen, Request, HTTPError, URLError 25 | 26 | logger = logging.getLogger() 27 | logger.setLevel(logging.INFO) 28 | 29 | SUCCESS = 'SUCCESS' 30 | FAILED = 'FAILED' 31 | 32 | """ 33 | Event example 34 | { 35 | "Status": SUCCESS | FAILED, 36 | "Reason: mandatory on failure 37 | "PhysicalResourceId": string, 38 | "StackId": event["StackId"], 39 | "RequestId": event["RequestId"], 40 | "LogicalResourceId": event["LogicalResourceId"], 41 | "Data": {} 42 | } 43 | """ 44 | 45 | def wrap_user_handler(func, base_response=None): 46 | def wrapper_func(event, context): 47 | response = { 48 | "StackId": event["StackId"], 49 | "RequestId": event["RequestId"], 50 | "LogicalResourceId": event["LogicalResourceId"], 51 | "Status": SUCCESS, 52 | } 53 | if event.get("PhysicalResourceId", False): 54 | response["PhysicalResourceId"] = event["PhysicalResourceId"] 55 | 56 | if base_response is not None: 57 | response.update(base_response) 58 | 59 | logger.debug("Received %s request with event: %s" % (event['RequestType'], json.dumps(event))) 60 | 61 | try: 62 | response.update(func(event, 
context)) 63 | except: 64 | logger.exception("Failed to execute resource function") 65 | response.update({ 66 | "Status": FAILED, 67 | "Reason": "Exception was raised while handling custom resource" 68 | }) 69 | 70 | serialized = json.dumps(response) 71 | logger.info("Responding to '%s' request with: %s" % ( 72 | event['RequestType'], serialized)) 73 | 74 | if sys.version_info.major == 3: 75 | req_data = serialized.encode('utf-8') 76 | else: 77 | req_data = serialized 78 | req = Request( 79 | event['ResponseURL'], data=req_data, 80 | headers={'Content-Length': len(req_data), 81 | 'Content-Type': ''} 82 | ) 83 | req.get_method = lambda: 'PUT' 84 | 85 | try: 86 | urlopen(req) 87 | logger.debug("Request to CFN API succeeded, nothing to do here") 88 | except HTTPError as e: 89 | logger.error("Callback to CFN API failed with status %d" % e.code) 90 | logger.error("Response: %s" % e.reason) 91 | except URLError as e: 92 | logger.error("Failed to reach the server - %s" % e.reason) 93 | 94 | return wrapper_func 95 | 96 | class Resource(object): 97 | _dispatch = None 98 | 99 | def __init__(self, wrapper=wrap_user_handler): 100 | self._dispatch = {} 101 | self._wrapper = wrapper 102 | 103 | def __call__(self, event, context): 104 | request = event['RequestType'] 105 | logger.debug("Received {} type event. 
Full parameters: {}".format(request, json.dumps(event))) 106 | return self._dispatch.get(request, self._succeed())(event, context) 107 | 108 | def _succeed(self): 109 | @self._wrapper 110 | def success(event, context): 111 | return { 112 | 'Status': SUCCESS, 113 | 'PhysicalResourceId': event.get('PhysicalResourceId', 'mock-resource-id'), 114 | 'Reason': 'Life is good, man', 115 | 'Data': {}, 116 | } 117 | return success 118 | 119 | def create(self, wraps): 120 | self._dispatch['Create'] = self._wrapper(wraps) 121 | return wraps 122 | 123 | def update(self, wraps): 124 | self._dispatch['Update'] = self._wrapper(wraps) 125 | return wraps 126 | 127 | def delete(self, wraps): 128 | self._dispatch['Delete'] = self._wrapper(wraps) 129 | return wraps 130 | -------------------------------------------------------------------------------- /source/CRRdeployagent/CRRdeployagent.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | ###################################################################################################################### 5 | # Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. # 6 | # # 7 | # Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance # 8 | # with the License. A copy of the License is located at # 9 | # # 10 | # http://www.apache.org/licenses/LICENSE-2.0 # 11 | # # 12 | # or in the 'license' file accompanying this file. This file is distributed on an 'AS IS' BASIS, WITHOUT WARRANTIES # 13 | # OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions # 14 | # and limitations under the License. # 15 | ###################################################################################################################### 16 | 17 | from __future__ import print_function 18 | 19 | import boto3 20 | # Unable to import module? 
You need to zip CRRdeployagent.py with 21 | # cfn_resource.py!! 22 | import cfn_resource 23 | import json 24 | 25 | handler = cfn_resource.Resource() 26 | 27 | source_buckets = [] 28 | 29 | try: 30 | sts = boto3.client('sts') 31 | ec2 = boto3.client('ec2') 32 | except Exception as e: 33 | print(e) 34 | print('Error creating sts and ec2 clients') 35 | raise e 36 | 37 | local_account = sts.get_caller_identity()['Account'] 38 | 39 | # Create a hash of regions 40 | REGIONSEL = {} 41 | 42 | response = ec2.describe_regions() 43 | 44 | for region in response['Regions']: 45 | 46 | REGIONSEL[region['RegionName']] = 0 47 | 48 | ###########Get Buckets and Agent Region ################# 49 | 50 | def get_replica_buckets(client): 51 | print('List Replica Buckets:') 52 | try: 53 | list_buckets = client.list_buckets()['Buckets'] 54 | replica_buckets = [] 55 | for i in list_buckets: 56 | bucket_response = get_bucket_replication(i['Name'], client) 57 | if 'ReplicationConfigurationError-' != bucket_response \ 58 | and bucket_response['ReplicationConfiguration']['Rules'][0]['Status'] != 'Disabled': 59 | source_buckets.append(i['Name']) 60 | dest_bucket_arn = bucket_response['ReplicationConfiguration']['Rules'][0]['Destination']['Bucket'] 61 | replica_buckets.append(dest_bucket_arn.split(':', 5)[5]) 62 | except Exception as e: 63 | print(e) 64 | raise e 65 | return replica_buckets 66 | 67 | def get_bucket_replication(bucket_name, client): 68 | try: 69 | response = client.get_bucket_replication( 70 | Bucket=bucket_name 71 | ) 72 | except Exception as e: 73 | print(e) 74 | response = "ReplicationConfigurationError-" 75 | return response 76 | 77 | #Gets the list of agent regions for Agent deployment 78 | 79 | def get_agent_regions(): 80 | try: 81 | client = boto3.client('s3') 82 | replica_buckets = get_replica_buckets(client) 83 | agent_set = set([]) 84 | for bucket in replica_buckets: 85 | try: 86 | response = client.head_bucket( 87 | Bucket=bucket 88 | ) 89 | region = 
response['ResponseMetadata']['HTTPHeaders']['x-amz-bucket-region'] 90 | if region is None: 91 | region = 'us-east-1' 92 | agent_set.add(region) 93 | except Exception as e: 94 | print('Unable to get region for bucket ' + bucket) 95 | print(e) 96 | 97 | for bucket in source_buckets: 98 | try: 99 | response = client.head_bucket( 100 | Bucket=bucket 101 | ) 102 | region = response['ResponseMetadata']['HTTPHeaders']['x-amz-bucket-region'] 103 | if region is None: 104 | region = 'us-east-1' 105 | agent_set.add(region) 106 | except Exception as e: 107 | print('Unable to get region for bucket ' + bucket) 108 | print(e) 109 | 110 | agent_regions = list(agent_set) 111 | print('get_agent_regions: agent_regions = ') 112 | print(*agent_regions, sep="\n") 113 | except Exception as e: 114 | print(e) 115 | raise e 116 | return agent_regions 117 | 118 | # ===================================================================== 119 | # CREATE 120 | # 121 | @handler.create 122 | 123 | def create_agent(event, context): 124 | # print(json.dumps(event)) 125 | # For manager/agent account you will receive: 126 | # - Topic 127 | # - CRRQueueArn 128 | # - MyAccountId 129 | # For agent-only (remote) account you will receive: 130 | # - MyAccountId 131 | # - CRRMonitorAccount 132 | # 133 | print(json.dumps(event)) 134 | monitor_account = '' 135 | topic_name = '' 136 | queue_arn = '' 137 | agent_accounts = [] 138 | 139 | 140 | if 'Topic' in event["ResourceProperties"]: 141 | topic_name = event["ResourceProperties"]["Topic"] # SNS topic 142 | 143 | if 'CRRQueueArn' in event["ResourceProperties"]: 144 | queue_arn = event["ResourceProperties"]["CRRQueueArn"] 145 | 146 | if 'CRRMonitorAccount' in event["ResourceProperties"]: 147 | monitor_account = event["ResourceProperties"]["CRRMonitorAccount"] 148 | 149 | if 'AgentAccounts' in event["ResourceProperties"]: 150 | agent_accounts = event["ResourceProperties"]["AgentAccounts"] 151 | 152 | 153 | # Default value for returning resourceid 154 |
physical_resource_id = {'PhysicalResourceId': 'CRRMonitorAgent-Deployed'} 155 | 156 | # Configure each region for monitoring based on what we found. 157 | for region in REGIONSEL: 158 | 159 | print('Deploying in ' + region) 160 | 161 | agent_creator(region, topic_name, queue_arn, monitor_account, agent_accounts) 162 | 163 | return physical_resource_id 164 | 165 | def agent_creator(agt_region, topic_name, queue_arn, monitor_account, agent_accounts): 166 | 167 | rule = 'CRRRemoteAgent' 168 | 169 | if not monitor_account: 170 | rule = 'CRRAgent' 171 | 172 | boto3.setup_default_session(region_name=agt_region) 173 | # ----------------------------------------------------------------- 174 | # Create client connections 175 | # 176 | try: 177 | cwe = boto3.client('events') 178 | sns = boto3.client('sns') 179 | except Exception as e: 180 | print(e) 181 | print('Error creating clients for ' + agt_region) 182 | raise e 183 | 184 | try: 185 | cwe.put_rule( 186 | Description='Fires CRRMonitor for S3 events that indicate an object has been stored.', 187 | Name=rule, 188 | EventPattern="{ \"detail-type\": [ \"AWS API Call via CloudTrail\" ], \"detail\": { \"eventSource\": [ \"s3.amazonaws.com\"], \"eventName\": [ \"PutObject\", \"CopyObject\", \"CompleteMultipartUpload\" ] } }", 189 | State='DISABLED' 190 | ) 191 | except Exception as e: 192 | print(e) 193 | print('Error creating CW Event rule') 194 | raise e 195 | 196 | if not monitor_account: 197 | print('Creating agent for a monitor/agent account in region ' + agt_region) 198 | topic = topic_name + "-" + agt_region 199 | 200 | # ----------------------------------------------------------------- 201 | # Note: duplication is not a concern - we will replace the rule and 202 | # topic if they already exist 203 | # 204 | # Create the CloudWatch Event rule in a disabled state. 
205 | # Create an SNS topic 206 | # Add a target to the rule to send to the new SNS topic 207 | # Enable the rule 208 | try: 209 | 210 | topicarn = sns.create_topic(Name=topic)['TopicArn'] 211 | sns.set_topic_attributes( 212 | TopicArn=topicarn, 213 | AttributeName='Policy', 214 | AttributeValue='{\ 215 | "Version": "2012-10-17",\ 216 | "Id": "CWEventPublishtoTopic",\ 217 | "Statement": [\ 218 | {\ 219 | "Sid": "CWEventPublishPolicy",\ 220 | "Action": [\ 221 | "SNS:Publish"\ 222 | ],\ 223 | "Effect": "Allow",\ 224 | "Resource": "' + topicarn + '",\ 225 | "Principal": {\ 226 | "Service": [\ 227 | "events.amazonaws.com"\ 228 | ]\ 229 | }\ 230 | }\ 231 | ]\ 232 | }\ 233 | ', 234 | 235 | ) 236 | cwe.put_targets( 237 | Rule=rule, 238 | Targets=[ 239 | { 240 | 'Id': 'CRRAgent-' + agt_region, 241 | 'Arn': topicarn 242 | } 243 | ] 244 | ) 245 | cwe.enable_rule(Name=rule) 246 | except Exception as e: 247 | print(e) 248 | print('Error creating SNS topic and CW Event rule: ' + topic) 249 | raise e 250 | 251 | # ----------------------------------------------------------------- 252 | # Create cross-region Queue subscription from the SNS end 253 | # Only when deployed from the Manager account 254 | # 255 | try: 256 | response = sns.subscribe( 257 | TopicArn=topicarn, 258 | Protocol='sqs', 259 | Endpoint=queue_arn 260 | ) 261 | 262 | except Exception as e: 263 | print(e) 264 | print('Error subscribing SNS topic ' + topic + ' to SQS Queue ' + queue_arn) 265 | raise e 266 | 267 | # Grant permissions to the default event bus 268 | for account in agent_accounts: 269 | try: 270 | cwe.put_permission( 271 | Action='events:PutEvents', 272 | Principal=account, 273 | StatementId=account 274 | ) 275 | except Exception as e: 276 | print(e) 277 | print('Error creating Event Bus permissions for ' + account) 278 | raise e 279 | 280 | return_data = { 281 | 'Data': { 'TopicArn': topicarn }, 282 | 'PhysicalResourceId': 'CRRMonitorAgent-' + agt_region 283 | } 284 | 285 | else: 286 | print('Creating 
agent for an agent-only account in region ' + agt_region) 287 | try: 288 | cwe.put_targets( 289 | Rule=rule, 290 | Targets=[ 291 | { 292 | 'Id': 'CRRRemoteAgent-' + agt_region, 293 | 'Arn': 'arn:aws:events:' + agt_region + ':' + monitor_account + ':event-bus/default' 294 | } 295 | ] 296 | ) 297 | cwe.enable_rule(Name=rule) 298 | except Exception as e: 299 | print(e) 300 | print('Error creating CW Event target') 301 | raise e 302 | 303 | return_data = {'PhysicalResourceId': 'CRRMonitorAgent-' + agt_region} 304 | 305 | return return_data 306 | 307 | # ===================================================================== 308 | # UPDATE 309 | # 310 | @handler.update 311 | def update_agent(event, context): 312 | # event["ResourceProperties"]["Topic"] is not needed on update (and is absent for agent-only accounts) 313 | 314 | # No update action necessary 315 | return {} 316 | 317 | # ===================================================================== 318 | # DELETE 319 | # 320 | @handler.delete 321 | def delete_agent(event, context): 322 | # For manager/agent account you will receive: 323 | # - Topic 324 | # - CRRQueueArn 325 | # For agent-only (remote) account you will receive: 326 | # - MyAccountId 327 | # 328 | monitor_account = '' 329 | topic_name = '' 330 | queue_arn = '' 331 | agent_accounts = [] 332 | 333 | if 'Topic' in event["ResourceProperties"]: 334 | topic_name = event["ResourceProperties"]["Topic"] # SNS topic 335 | 336 | if 'CRRQueueArn' in event["ResourceProperties"]: 337 | queue_arn = event["ResourceProperties"]["CRRQueueArn"] 338 | arnparts = queue_arn.split(':') 339 | monitor_account = arnparts[4] 340 | 341 | if 'CRRMonitorAccount' in event["ResourceProperties"]: 342 | monitor_account = event["ResourceProperties"]["CRRMonitorAccount"] 343 | 344 | if 'AgentAccounts' in event["ResourceProperties"]: 345 | agent_accounts = event["ResourceProperties"]["AgentAccounts"] 346 | 347 | # Get a list of regions where we have source or replica buckets 348 | agent_regions = get_agent_regions() 349 | 350 | for region in
agent_regions: 351 | agent_deleter(region, topic_name, queue_arn, monitor_account, agent_accounts) 352 | 353 | return {} 354 | 355 | 356 | def agent_deleter(agt_region, topic_name, queue_arn, monitor_account, agent_accounts): 357 | # 358 | # Deletion has to occur in a specific order 359 | # 360 | boto3.setup_default_session(region_name=agt_region) 361 | # ----------------------------------------------------------------- 362 | # Create client connections 363 | # 364 | try: 365 | cwe = boto3.client('events') 366 | if not monitor_account: 367 | sns = boto3.client('sns') 368 | except Exception as e: 369 | print(e) 370 | print('Error creating Events client for ' + agt_region) 371 | raise e 372 | 373 | #------------------------------------------------------------------ 374 | # Remove the CWE rule 375 | # 376 | # Rule name is different for Monitor/Agent vs Agent-only 377 | # 378 | rule = 'CRRRemoteAgent' 379 | 380 | if not monitor_account: 381 | rule = 'CRRAgent' 382 | # 383 | # Remove the Targets 384 | # 385 | try: 386 | cwe.remove_targets( 387 | Rule=rule, 388 | Ids=[ 389 | rule + '-' + agt_region, 390 | ] 391 | ) 392 | except Exception as e: 393 | print(e) 394 | print('Failed to remove target ' + rule + ' id ' + rule + '-' + agt_region) 395 | 396 | # For Manager/Agent account, remove the SNS topic 397 | if not monitor_account: 398 | topic = topic_name + "-" + agt_region 399 | print("Delete " + topic + " in " + agt_region) 400 | # ----------------------------------------------------------------- 401 | # RequestType Delete 402 | # 403 | sts = boto3.client('sts') 404 | myaccount = sts.get_caller_identity()['Account'] 405 | topicarn = 'arn:aws:sns:' + agt_region + ':' + myaccount + ':' + topic 406 | # Delete the SNS topic 407 | sns.delete_topic( 408 | TopicArn=topicarn 409 | ) 410 | 411 | # Delete the CW rule 412 | cwe.delete_rule( 413 | Name=rule 414 | ) 415 | 416 | if not monitor_account: 417 | # Remove permissions to the default event bus 418 | for account in 
agent_accounts: 419 | try: 420 | cwe.remove_permission( 421 | StatementId=account 422 | ) 423 | except Exception as e: 424 | print(e) 425 | print('Error removing Event Bus permissions for ' + account) 426 | 427 | return {} 428 | -------------------------------------------------------------------------------- /source/CRRdeployagent/cfn_resource.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | ###################################################################################################################### 5 | # Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. # 6 | # # 7 | # Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance # 8 | # with the License. A copy of the License is located at # 9 | # # 10 | # http://www.apache.org/licenses/LICENSE-2.0 # 11 | # # 12 | # or in the 'license' file accompanying this file. This file is distributed on an 'AS IS' BASIS, WITHOUT WARRANTIES # 13 | # OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions # 14 | # and limitations under the License. 
# 15 | ###################################################################################################################### 16 | 17 | import json 18 | import logging 19 | import sys 20 | if sys.version_info.major == 3: 21 | from urllib.request import urlopen, Request, HTTPError, URLError 22 | from urllib.parse import urlencode 23 | else: 24 | from urllib2 import urlopen, Request, HTTPError, URLError 25 | 26 | logger = logging.getLogger() 27 | logger.setLevel(logging.INFO) 28 | 29 | SUCCESS = 'SUCCESS' 30 | FAILED = 'FAILED' 31 | 32 | """ 33 | Event example 34 | { 35 | "Status": SUCCESS | FAILED, 36 | "Reason": "mandatory on failure", 37 | "PhysicalResourceId": string, 38 | "StackId": event["StackId"], 39 | "RequestId": event["RequestId"], 40 | "LogicalResourceId": event["LogicalResourceId"], 41 | "Data": {} 42 | } 43 | """ 44 | 45 | def wrap_user_handler(func, base_response=None): 46 | def wrapper_func(event, context): 47 | response = { 48 | "StackId": event["StackId"], 49 | "RequestId": event["RequestId"], 50 | "LogicalResourceId": event["LogicalResourceId"], 51 | "Status": SUCCESS, 52 | } 53 | if event.get("PhysicalResourceId", False): 54 | response["PhysicalResourceId"] = event["PhysicalResourceId"] 55 | 56 | if base_response is not None: 57 | response.update(base_response) 58 | 59 | logger.debug("Received %s request with event: %s" % (event['RequestType'], json.dumps(event))) 60 | 61 | try: 62 | response.update(func(event, context)) 63 | except Exception: 64 | logger.exception("Failed to execute resource function") 65 | response.update({ 66 | "Status": FAILED, 67 | "Reason": "Exception was raised while handling custom resource" 68 | }) 69 | 70 | serialized = json.dumps(response) 71 | logger.info("Responding to '%s' request with: %s" % ( 72 | event['RequestType'], serialized)) 73 | 74 | if sys.version_info.major == 3: 75 | req_data = serialized.encode('utf-8') 76 | else: 77 | req_data = serialized 78 | req = Request( 79 | event['ResponseURL'], data=req_data, 80 |
headers={'Content-Length': len(req_data), 81 | 'Content-Type': ''} 82 | ) 83 | req.get_method = lambda: 'PUT' 84 | 85 | try: 86 | urlopen(req) 87 | logger.debug("Request to CFN API succeeded, nothing to do here") 88 | except HTTPError as e: 89 | logger.error("Callback to CFN API failed with status %d" % e.code) 90 | logger.error("Response: %s" % e.reason) 91 | except URLError as e: 92 | logger.error("Failed to reach the server - %s" % e.reason) 93 | 94 | return wrapper_func 95 | 96 | class Resource(object): 97 | _dispatch = None 98 | 99 | def __init__(self, wrapper=wrap_user_handler): 100 | self._dispatch = {} 101 | self._wrapper = wrapper 102 | 103 | def __call__(self, event, context): 104 | request = event['RequestType'] 105 | logger.debug("Received {} type event. Full parameters: {}".format(request, json.dumps(event))) 106 | return self._dispatch.get(request, self._succeed())(event, context) 107 | 108 | def _succeed(self): 109 | @self._wrapper 110 | def success(event, context): 111 | return { 112 | 'Status': SUCCESS, 113 | 'PhysicalResourceId': event.get('PhysicalResourceId', 'mock-resource-id'), 114 | 'Reason': 'Life is good, man', 115 | 'Data': {}, 116 | } 117 | return success 118 | 119 | def create(self, wraps): 120 | self._dispatch['Create'] = self._wrapper(wraps) 121 | return wraps 122 | 123 | def update(self, wraps): 124 | self._dispatch['Update'] = self._wrapper(wraps) 125 | return wraps 126 | 127 | def delete(self, wraps): 128 | self._dispatch['Delete'] = self._wrapper(wraps) 129 | return wraps 130 | -------------------------------------------------------------------------------- /source/solution-helper/solution-helper.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | ###################################################################################################################### 5 | # Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. 
# 6 | # # 7 | # Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance # 8 | # with the License. A copy of the License is located at # 9 | # # 10 | # http://www.apache.org/licenses/LICENSE-2.0 # 11 | # # 12 | # or in the 'license' file accompanying this file. This file is distributed on an 'AS IS' BASIS, WITHOUT WARRANTIES # 13 | # OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions # 14 | # and limitations under the License. # 15 | ###################################################################################################################### 16 | 17 | # solution-helper.py 18 | # This code generates a random UUID using the uuid module 19 | import json 20 | import uuid 21 | import urllib.request 22 | 23 | def send(event, context, responseStatus, responseData, physicalResourceId=None, noEcho=False): 24 | try: 25 | responseUrl = event.get('ResponseURL') 26 | 27 | responseBody = { 28 | "Status":responseStatus, 29 | "Reason": "See the details in CloudWatch Log Stream: " + context.log_stream_name, 30 | "PhysicalResourceId": physicalResourceId or context.log_stream_name, 31 | "StackId": event.get('StackId'), 32 | "RequestId":event.get('RequestId'), 33 | "LogicalResourceId": event.get('LogicalResourceId'), 34 | "NoEcho": noEcho, 35 | "Data": responseData 36 | } 37 | 38 | data = bytes(json.dumps(responseBody), 'utf-8') 39 | 40 | headers = { 41 | 'content-type' : '', 42 | 'content-length' : str(len(data)) 43 | } 44 | 45 | req = urllib.request.Request(url=responseUrl, data=data, method='PUT', headers=headers) 46 | with urllib.request.urlopen(req) as f: 47 | print(f"CFN Status: {f.status}") 48 | except Exception as e: 49 | raise e 50 | 51 | def lambda_handler(event, context): 52 | try: 53 | request = event.get('RequestType') 54 | responseData = {} 55 | 56 | if request == 'Create': 57 | responseData = {'UUID':str(uuid.uuid4())} 58 | 59 | send(event, context,
'SUCCESS', responseData) 60 | except Exception as e: 61 | print('Exception: {}'.format(e)) 62 | send(event, context, 'FAILED', {}, context.log_stream_name) 63 | --------------------------------------------------------------------------------
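The `cfn_resource.py` module above dispatches CloudFormation custom-resource events by `RequestType` and wraps each user handler so that failures are reported back instead of hanging the stack. Below is a minimal, self-contained sketch of that dispatch pattern; the HTTP PUT to `event['ResponseURL']` is stubbed out (the wrapper returns the response dict instead), and the names are illustrative, not part of this repository:

```python
# Sketch of the RequestType dispatch used by cfn_resource.Resource.
# The real module serializes the response and PUTs it to event['ResponseURL'];
# here the wrapper returns the dict so the flow can be exercised locally.
SUCCESS = 'SUCCESS'
FAILED = 'FAILED'

def wrap_user_handler(func):
    def wrapper(event, context):
        # Echo back the correlation fields CloudFormation requires
        response = {
            "StackId": event["StackId"],
            "RequestId": event["RequestId"],
            "LogicalResourceId": event["LogicalResourceId"],
            "Status": SUCCESS,
        }
        try:
            response.update(func(event, context))
        except Exception:
            response.update({"Status": FAILED, "Reason": "handler raised"})
        return response  # stub for the PUT to ResponseURL
    return wrapper

class Resource(object):
    def __init__(self):
        self._dispatch = {}

    def __call__(self, event, context):
        # Unknown RequestTypes fall through to a no-op success handler
        default = wrap_user_handler(lambda e, c: {})
        return self._dispatch.get(event['RequestType'], default)(event, context)

    def create(self, func):
        # Used as a decorator, mirroring @handler.create in CRRdeployagent.py
        self._dispatch['Create'] = wrap_user_handler(func)
        return func

handler = Resource()

@handler.create
def create_agent(event, context):
    return {'PhysicalResourceId': 'CRRMonitorAgent-Deployed', 'Data': {}}

event = {
    'RequestType': 'Create',
    'StackId': 'stack-1', 'RequestId': 'req-1', 'LogicalResourceId': 'Agent',
}
print(handler(event, None)['Status'])
```

Because unregistered RequestTypes default to a success response, a stack update or delete never blocks on a missing handler, which is the same safety property the real module provides.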