├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE ├── README.md ├── app.py ├── cdk.json ├── images ├── Architecture.jpg └── Architecture.xml ├── lib ├── CloudtrailConstructs.py ├── __init__.py ├── glueConstructs.py ├── kmskey.py ├── lambdaConstructs.py ├── quickSightConstruct.py ├── resource_iac_tool_stack.py └── s3constructs.py ├── requirements.txt └── src ├── .DS_Store └── lambda_code └── lambda_function.py /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. 5 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. 
Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *main* branch. 27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 38 | 39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute to. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start. 45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 
49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments. 51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public github issue. 55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 60 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT No Attribution 2 | 3 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy of 6 | this software and associated documentation files (the "Software"), to deal in 7 | the Software without restriction, including without limitation the rights to 8 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of 9 | the Software, and to permit persons to whom the Software is furnished to do so. 10 | 11 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 12 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS 13 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR 14 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER 15 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 16 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
17 | 18 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | # aws-resource-assessment-iac-automation 3 | This pattern demonstrates an approach for setting up resource assessment capabilities in an automated way in AWS using a CDK application. The solution enables operations teams to get resource auditing details in an automated manner, gathering details of all resources deployed in an AWS account into a single dashboard. This is helpful for the following use cases: 1. Identify IaC tools and segregate resources created by different IaC solutions such as Terraform, CloudFormation, and CDK. 2. Fetch resource auditing information. 3. Monitor and clean up manually created resources. 4. Fetch information about manually created resources in order to automate them using an IaC solution. 8 | 9 | # Prerequisites 10 | - An active AWS account. 11 | - AWS Identity and Access Management (IAM) roles and permissions with access to provision resources. 12 | - A QuickSight account created with access to S3 and Athena. Please refer to this [page](https://docs.aws.amazon.com/quicksight/latest/user/signing-up.html) for more help. 
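As context for use case 1 above: CloudTrail event records carry a `userAgent` field (and sometimes `userIdentity.invokedBy`) that hints at the tool behind each API call. The sketch below is only a hand-written illustration of that idea — the matching substrings are assumptions for demonstration, not the exact logic of this repository's Lambda function.

```python
# Illustrative only: classify a CloudTrail event record by the tool that
# issued the API call, using its userAgent / invokedBy fields.
# The matching substrings below are demonstration assumptions.

def classify_iac_tool(event_record: dict) -> str:
    """Return a best-effort IaC-tool label for one CloudTrail record."""
    user_agent = (event_record.get("userAgent") or "").lower()
    invoked_by = (event_record.get("userIdentity", {}).get("invokedBy") or "").lower()
    if "cloudformation" in user_agent or "cloudformation" in invoked_by:
        return "cloudformation"  # includes stacks deployed through CloudFormation
    if "terraform" in user_agent:
        return "terraform"
    if "aws-cdk" in user_agent:
        return "cdk"  # e.g. CDK CLI calls such as asset uploads
    if "console.amazonaws.com" in user_agent:
        return "manual"  # created by hand in the AWS console
    return "other"

# Hypothetical, abridged CloudTrail record:
record = {"userAgent": "APN/1.0 HashiCorp/1.0 Terraform/1.5.0", "userIdentity": {}}
print(classify_iac_tool(record))  # prints "terraform"
```

The authoritative filtering logic lives in `src/lambda_code/lambda_function.py`, which writes its results to the output S3 bucket.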
13 | 14 | # Product Versions 15 | - [AWS CDK](https://aws.amazon.com/cdk/) (version 2.55.1 or higher) 16 | - [Python](https://www.python.org/downloads/release/python-390/) (version 3.9 or higher) 17 | 18 | # Target Technology Stack 19 | - [AWS CloudTrail](https://aws.amazon.com/cloudtrail/) 20 | - [Amazon S3](https://aws.amazon.com/s3/) 21 | - [AWS Lambda](https://aws.amazon.com/lambda/) 22 | - [Amazon Athena](https://aws.amazon.com/athena/) 23 | - [AWS Glue Crawler](https://docs.aws.amazon.com/glue/latest/dg/add-crawler.html) 24 | - [AWS Glue Catalog](https://docs.aws.amazon.com/glue/latest/dg/catalog-and-crawler.html) 25 | - [Amazon QuickSight](https://aws.amazon.com/quicksight/) 26 | 27 | # Target Architecture 28 | The AWS CDK code deploys all the resources needed to set up resource assessment capabilities in an AWS account. 29 | 30 | ![Architecture](./images/Architecture.jpg) 31 | 32 | # Automation and Scale 33 | This solution can be scaled from one AWS account to multiple AWS accounts, or to the organization level, to fetch resource auditing details for all the required resources. 34 | 35 | # Tools 36 | - [AWS CDK](https://aws.amazon.com/cdk/) - The AWS CDK lets you build reliable, scalable, cost-effective applications in the cloud with the considerable expressive power of a programming language. 37 | - [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) – AWS CloudFormation is an infrastructure as code (IaC) service that allows you to easily model, provision, and manage AWS and third-party resources. 38 | 39 | # Code 40 | The code repository contains the following files and folders: 41 | 42 | - [infrastructure-assessment-tool-iac](./) - Contains the AWS CDK code to deploy the resources required for the resource assessment tool. 43 | - [cdk.out/ResourceIaCStack.template.json]() - Contains the CloudFormation template used by the product. 
44 | - [lib](./lib/) - Contains all the CDK construct Python files used to create AWS resources. 45 | - [src/lambda_code](./src/lambda_code/) - Contains the Python code that is executed in the AWS Lambda function. 46 | - [requirements.txt](./requirements.txt) - Contains the list of Python dependencies that need to be installed. 47 | 48 | # Prerequisites 49 | Make sure you have the AWS CDK Toolkit installed in your terminal. To check, run the command below: 50 | ``` 51 | cdk --version 52 | ``` 53 | If the AWS CDK Toolkit is not installed, run: 54 | ``` 55 | npm install -g aws-cdk@2.55.1 56 | ``` 57 | If the AWS CDK Toolkit version is lower than 2.55.1, update it to 2.55.1 or higher: 58 | ``` 59 | npm install -g aws-cdk@2.55.1 --force 60 | ``` 61 | 62 | # Set up your environment 63 | 1. Clone the repository to your local machine by running: 64 | ```bash 65 | git clone https://gitlab.aws.dev/gcci-serverless/infrastructure-assessment-tool/infrastructure-assessment-tool-iac 66 | ``` 67 | This step creates a folder named `infrastructure-assessment-tool-iac`. 68 | 69 | 2. To set up a Python virtual environment and install the required dependencies, run: 70 | ```bash 71 | cd infrastructure-assessment-tool-iac 72 | python3 -m venv .venv 73 | source .venv/bin/activate 74 | pip install -r requirements.txt 75 | ``` 76 | 3. Run the commands below to bootstrap the AWS CDK environment and synthesize the CDK code. Learn more about the AWS CDK [here](https://docs.aws.amazon.com/cdk/v2/guide/hello_world.html). 77 | ```bash 78 | cdk bootstrap aws://ACCOUNT-NUMBER/REGION 79 | cdk synth 80 | ``` 81 | 82 | ## Set up AWS credentials in Terminal 83 | Export the following variables, which refer to the AWS account and region where the stack will be deployed: 84 | 85 | export CDK_DEFAULT_ACCOUNT=<12 Digit AWS Account Number> 86 | 87 | export CDK_DEFAULT_REGION=<AWS Region> 88 | 89 | AWS credentials for the CDK can be provided through environment variables. 
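As `app.py` later in this repository shows, the stack can read these variables with `os.getenv`. A small sanity check like the one below (illustrative only; not part of this repository) can fail fast before `cdk deploy` if either variable is missing:

```python
# Illustrative helper (not part of this repository): verify the CDK
# environment variables are exported before attempting a deployment.
import os

def resolved_cdk_env():
    """Return (account, region) for the CDK deployment, or raise if unset."""
    account = os.environ.get("CDK_DEFAULT_ACCOUNT")
    region = os.environ.get("CDK_DEFAULT_REGION")
    missing = [name for name, value in (("CDK_DEFAULT_ACCOUNT", account),
                                        ("CDK_DEFAULT_REGION", region)) if not value]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))
    return account, region

# Placeholder values for demonstration; export your real account and region.
os.environ.setdefault("CDK_DEFAULT_ACCOUNT", "123456789012")
os.environ.setdefault("CDK_DEFAULT_REGION", "us-east-1")
print(resolved_cdk_env())
```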
90 | 91 | ## Configure and Deploy the Resource Assessment Tool 92 | Deploy resources in the AWS account using the CDK. 93 | 94 | 1. In the root of this repository, provide the inputs required to run the CDK in the [cdk.json](./cdk.json) file for s3_context, ct_context, kms_context, lambda_context, glue_context, and qs_context. These values define resource configurations and nomenclature. Default values are set and can be changed if required. 95 | 96 | *Note: You might see an "S3 bucket already exists" error while running cdk deploy, so make sure to provide unique bucket names in s3_context under the "ct" and "output" sections.* 97 | 98 | To deploy the resources, use the CDK command below: 99 | 100 | ```bash 101 | cdk deploy 102 | ``` 103 | 104 | 2. The above command creates an AWS CloudTrail trail that delivers logs to an Amazon S3 bucket. The trail logs are processed by an AWS Lambda function, which stores the filtered results in the output S3 bucket, where they are ready to be consumed by the Amazon Athena and Amazon QuickSight services. 105 | 3. An [AWS Glue crawler](https://docs.aws.amazon.com/glue/latest/dg/add-crawler.html) is used to keep the data schema dynamic. It creates and updates partitions in the [AWS Glue catalog table](https://docs.aws.amazon.com/athena/latest/ug/querying-glue-catalog.html) by running periodically, as defined by the AWS Glue crawler schedule. Once the data is available in the output Amazon S3 bucket, follow the steps below to run the AWS Glue crawler. 106 | 107 | *Note: As per the configuration in the AWS CDK code, the AWS Glue crawler is scheduled to run at a particular time. It can also be run on demand.* 108 | 109 | To create the table schema for testing, you can run the crawler manually: 110 | - Go to the [AWS Glue console](https://console.aws.amazon.com/glue/home) 111 | - From the left pane, select `Crawlers` under the `Data Catalog` section. 112 | - Select the crawler named `iac-tool-qa-resource-iac-json-crawler`. 113 | - Run the crawler and wait for its execution. 
114 | - After successful execution, the crawler creates an AWS Glue catalog table, which will be used by Amazon QuickSight to visualize the data. 115 | 116 | 4. Deploy the QuickSight construct - 117 | *Note: Step 3 must be completed before proceeding with this step.* 118 | Uncomment the code between the comments `#QuickSight setup - start` and `#QuickSight setup - ends` (which creates the QuickSight `DataSource` and `DataSet` in the QuickSight account) in [resource_iac_tool_stack.py](./lib/resource_iac_tool_stack.py). After uncommenting, run the command below to deploy the changes. 119 | ```bash 120 | cdk deploy 121 | ``` 122 | 5. To create an Amazon QuickSight dashboard, follow the steps below: 123 | 124 | *Note: [Amazon QuickSight](https://aws.amazon.com/quicksight) is a paid service; please go through [Amazon QuickSight pricing](https://aws.amazon.com/quicksight/pricing/) before creating analyses and dashboards.* 125 | 126 | Refer to this [page](https://docs.aws.amazon.com/quicksight/latest/user/creating-an-analysis.html) to learn about QuickSight analysis creation. 127 | 128 | To create a sample analysis for this solution, follow these steps: 129 | - Go to the AWS QuickSight [console](https://quicksight.aws.amazon.com) and [select the region](https://docs.aws.amazon.com/quicksight/latest/user/customizing-quicksight.html) where the resources are deployed. 130 | - From the left pane, select `Datasets` and validate that a dataset named `ct-operations-iac-ds` has been created. If not, please revisit [Step 4](#configure-and-deploy-the-resource-assessment-tool#4). 131 | - Select the dataset `ct-operations-iac-ds` and click `USE IN ANALYSIS`. 132 | - Select the sheet with the default settings. 133 | - Select the required columns from the field list on the left side. 134 | - After selecting the required columns, choose an appropriate visual to view the data. 
135 | - Learn more about the different visual types supported by Amazon QuickSight by visiting this [page](https://docs.aws.amazon.com/quicksight/latest/user/working-with-visual-types.html). 136 | 137 | 138 | 139 | 140 | ## Clean up all AWS resources in the solution 141 | ```bash 142 | cdk destroy 143 | ``` 144 | *Note: Amazon S3 buckets need to be removed manually after deleting all S3 objects from the buckets. Please refer to this [page](https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-bucket.html) for more help.* 145 | 146 | ## Contributors 147 | - Sandeep Gawande 148 | - Naveen Suthar 149 | - Manish Garg 150 | 151 | 152 | ## Useful commands 153 | 154 | * `cdk ls` list all stacks in the app 155 | * `cdk synth` emits the synthesized CloudFormation template 156 | * `cdk deploy` deploy this stack to your default AWS account/region 157 | * `cdk diff` compare deployed stack with current state 158 | * `cdk docs` open CDK documentation 159 | -------------------------------------------------------------------------------- /app.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | import os 3 | 4 | import aws_cdk as cdk 5 | 6 | from lib.resource_iac_tool_stack import ResourceIaCStack 7 | 8 | 9 | app = cdk.App() 10 | ResourceIaCStack(app, "ResourceIaCStack", 11 | # If you don't specify 'env', this stack will be environment-agnostic. 12 | # Account/Region-dependent features and context lookups will not work, 13 | # but a single synthesized template can be deployed anywhere. 14 | 15 | # Uncomment the next line to specialize this stack for the AWS Account 16 | # and Region that are implied by the current CLI configuration. 17 | 18 | #env=cdk.Environment(account=os.getenv('CDK_DEFAULT_ACCOUNT'), region=os.getenv('CDK_DEFAULT_REGION')), 19 | 20 | # Uncomment the next line if you know exactly what Account and Region you 21 | # want to deploy the stack to. 
*/ 22 | 23 | #env=cdk.Environment(account='123456789012', region='us-east-1'), 24 | 25 | # For more information, see https://docs.aws.amazon.com/cdk/latest/guide/environments.html 26 | ) 27 | 28 | app.synth() 29 | -------------------------------------------------------------------------------- /cdk.json: -------------------------------------------------------------------------------- 1 | { 2 | "app": "python3 app.py", 3 | "watch": { 4 | "include": [ 5 | "**" 6 | ], 7 | "exclude": [ 8 | "README.md", 9 | "cdk*.json", 10 | "requirements*.txt", 11 | "source.bat", 12 | "**/__init__.py", 13 | "python/__pycache__", 14 | "tests" 15 | ] 16 | }, 17 | "context": { 18 | "s3_context": { 19 | "ct": { 20 | "name": "cdk-iac-tool-qa-ct-events" 21 | }, 22 | "output": { 23 | "name": "cdk-iac-tool-qa-filter-data" 24 | } 25 | }, 26 | "ct_context": { 27 | "ct": { 28 | "name": "iac-tool-qa-resourca-iac-monitor" 29 | } 30 | }, 31 | "kms_context": { 32 | "ct": { 33 | "name": "iac-tool-qa-kms-key" 34 | } 35 | }, 36 | 37 | "lambda_context": { 38 | "ct": { 39 | "name": "iac-tool-qa-lambdaToProcessCTLogs", 40 | "timeout": 60, 41 | "memory_size": 128, 42 | "ephemeral_storage_size": 512, 43 | "relative_code_path": "../src/lambda_code", 44 | "handler": "lambda_function.lambda_handler" 45 | } 46 | }, 47 | "glue_context": { 48 | "ct": { 49 | "name": "iactoolqaresourceiac", 50 | "crawler_name": "iac-tool-qa-resource-iac-json-crawler" 51 | } 52 | }, 53 | "qs_context": { 54 | "ct": { 55 | "name": "operations-iac", 56 | "ds_name": "operations-iac-ds", 57 | "db_name": "iactoolqaresourceiac", 58 | "table_name": "resource_iac_tool_qa", 59 | "group_name": "iac-tool-qa-devops", 60 | "group_region": "us-east-1", 61 | "qs_namespace": "default" 62 | } 63 | }, 64 | 65 | "@aws-cdk/aws-lambda:recognizeLayerVersion": true, 66 | "@aws-cdk/core:checkSecretUsage": true, 67 | "@aws-cdk/core:target-partitions": [ 68 | "aws", 69 | "aws-cn" 70 | ], 71 | "@aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver": 
true, 72 | "@aws-cdk/aws-ec2:uniqueImdsv2TemplateName": true, 73 | "@aws-cdk/aws-ecs:arnFormatIncludesClusterName": true, 74 | "@aws-cdk/aws-iam:minimizePolicies": true, 75 | "@aws-cdk/core:validateSnapshotRemovalPolicy": true, 76 | "@aws-cdk/aws-codepipeline:crossAccountKeyAliasStackSafeResourceName": true, 77 | "@aws-cdk/aws-s3:createDefaultLoggingPolicy": true, 78 | "@aws-cdk/aws-sns-subscriptions:restrictSqsDescryption": true, 79 | "@aws-cdk/aws-apigateway:disableCloudWatchRole": true, 80 | "@aws-cdk/core:enablePartitionLiterals": true, 81 | "@aws-cdk/aws-events:eventsTargetQueueSameAccount": true, 82 | "@aws-cdk/aws-iam:standardizedServicePrincipals": true, 83 | "@aws-cdk/aws-ecs:disableExplicitDeploymentControllerForCircuitBreaker": true 84 | } 85 | } 86 | -------------------------------------------------------------------------------- /images/Architecture.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/infrastructure-assessment-iac-automation/f2b3548bebdc6222f2e90de1bc8c5a1f11e6d55e/images/Architecture.jpg -------------------------------------------------------------------------------- /images/Architecture.xml: -------------------------------------------------------------------------------- 1 | 
(compressed draw.io diagram data omitted; see images/Architecture.jpg for the rendered architecture diagram) -------------------------------------------------------------------------------- /lib/CloudtrailConstructs.py: -------------------------------------------------------------------------------- 1 | import aws_cdk as cdk 2 | from aws_cdk import aws_cloudtrail as ct 3 | from aws_cdk import aws_kms as kms 4 | from aws_cdk import aws_ssm as ssm 5 | from aws_cdk import aws_s3 as s3 6 | from aws_cdk import aws_logs as logs 7 | 8 | from constructs import Construct as construct 9 | 10 | class Cloud_trail(construct): 11 | def __init__(self, scope: construct, id: str, identifier: str, bucket_name: s3.Bucket, kms_key: kms.Key, **kwargs): 12 | super().__init__(scope, id, **kwargs) 13 | ct_context = dict(self.node.try_get_context("ct_context")) 14 | ct_details = ct_context[identifier] 15 | trail_name=identifier+'-'+ct_details['name'] 16 | 17 | self.cloud_trail= ct.Trail(self, trail_name, 18 | send_to_cloud_watch_logs=False, 19 | bucket=bucket_name, 20 | trail_name=ct_details['name'], 21 | is_multi_region_trail=True, 22 | enable_file_validation=True, 23 | encryption_key=kms_key, 24 | management_events=ct.ReadWriteType.ALL, 25 | ) 26 | 27 | 28 | -------------------------------------------------------------------------------- /lib/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/infrastructure-assessment-iac-automation/f2b3548bebdc6222f2e90de1bc8c5a1f11e6d55e/lib/__init__.py -------------------------------------------------------------------------------- /lib/glueConstructs.py: -------------------------------------------------------------------------------- 1 | import aws_cdk as cdk 2 | from aws_cdk import aws_glue as glue 3 | from aws_cdk import aws_iam as iam 4 | 5 | from constructs import Construct as construct 6 | 7 | 
class Glue_DB(construct): 8 | def __init__(self, scope: construct, id: str, identifier: str, output_s3_bucket_folder_arn: str, output_s3_bucket_path: str, **kwargs): 9 | super().__init__(scope, id, **kwargs) 10 | 11 | glue_context = dict(self.node.try_get_context("glue_context")) 12 | glue_details = glue_context[identifier] 13 | glue_name = identifier+'-'+glue_details['name'] 14 | db_name = glue_details['name'] 15 | crawler_name = glue_details['crawler_name'] 16 | cr_logical_name = identifier+'-'+glue_details['crawler_name'] 17 | 18 | self.glue_role = iam.Role(self, identifier+'glue_role', 19 | description='Role for Glue services to access S3', 20 | assumed_by=iam.ServicePrincipal('glue.amazonaws.com'), 21 | inline_policies={'resource_iac_glue_policy': iam.PolicyDocument( 22 | statements=[iam.PolicyStatement( 23 | effect=iam.Effect.ALLOW, 24 | actions=['s3:GetObject', 's3:PutObject'], 25 | resources=[output_s3_bucket_folder_arn])])}, 26 | managed_policies=[ 27 | iam.ManagedPolicy.from_aws_managed_policy_name( 28 | 'service-role/AWSGlueServiceRole') 29 | ] 30 | ) 31 | 32 | self.glue_db = glue.CfnDatabase(self, glue_name, 33 | catalog_id=cdk.Aws.ACCOUNT_ID, 34 | database_input=glue.CfnDatabase.DatabaseInputProperty( 35 | name=db_name, 36 | description='Database to store resource event details.' 37 | ) 38 | ) 39 | glue_crawler=glue.CfnCrawler(self, cr_logical_name, 40 | name=crawler_name, 41 | database_name=db_name, 42 | table_prefix='resource_iac_data_', 43 | role=self.glue_role.role_arn, 44 | schedule=glue.CfnCrawler.ScheduleProperty( 45 | schedule_expression="cron(10 5 * * ? 
*)" 46 | ), 47 | targets=glue.CfnCrawler.TargetsProperty( 48 | s3_targets=[glue.CfnCrawler.S3TargetProperty( 49 | path=f's3://{output_s3_bucket_path}/' 50 | )])) 51 | -------------------------------------------------------------------------------- /lib/kmskey.py: -------------------------------------------------------------------------------- 1 | import aws_cdk as cdk 2 | from aws_cdk import aws_cloudtrail as ct 3 | from aws_cdk import aws_kms as kms 4 | from aws_cdk import aws_ssm as ssm 5 | from aws_cdk import aws_s3 as s3 6 | from aws_cdk import aws_iam as iam 7 | 8 | from constructs import Construct as construct 9 | 10 | class Kms_Key(construct): 11 | def __init__(self, scope: construct, id: str, identifier: str, **kwargs): 12 | super().__init__(scope, id, **kwargs) 13 | 14 | kms_context = dict(self.node.try_get_context("kms_context")) 15 | kms_details = kms_context[identifier] 16 | kms_name = identifier+'-'+kms_details['name'] 17 | 18 | self.kms_key = kms.Key(self, kms_name, 19 | description = "{}-key-ct".format(kms_name), 20 | enable_key_rotation=True, 21 | 22 | ) 23 | self.kms_key.add_alias( 24 | 25 | alias_name = 'alias/{}-key-ct'.format(kms_name) 26 | ) 27 | 28 | self.kms_key.grant_encrypt_decrypt(iam.ServicePrincipal('cloudtrail.amazonaws.com')) 29 | -------------------------------------------------------------------------------- /lib/lambdaConstructs.py: -------------------------------------------------------------------------------- 1 | from aws_cdk import ( 2 | Stack, 3 | aws_iam as iam, 4 | aws_cloudtrail as ct, 5 | aws_lambda as lm 6 | ) 7 | import aws_cdk as cdk 8 | import os 9 | from constructs import Construct as construct 10 | 11 | class lm_func(construct): 12 | def __init__(self, scope: construct, id: str, identifier: str, **kwargs): 13 | super().__init__(scope, id, **kwargs) 14 | 15 | lm_context = dict(self.node.try_get_context("lambda_context")) 16 | lm_details = lm_context[identifier] 17 | 18 | s3_context = 
dict(self.node.try_get_context("s3_context")) 19 | s3_details = s3_context['output'] 20 | 21 | lm_name = identifier+'-'+lm_details['name'] 22 | timeout = lm_details['timeout'] 23 | memory_size = lm_details['memory_size'] 24 | ephemeral_storage_size = lm_details['ephemeral_storage_size'] 25 | relative_code_path=lm_details['relative_code_path'] 26 | handler=lm_details['handler'] 27 | fun_name=lm_details['name'] 28 | env_variables = ({"OUTPUT_S3_BUCKET": s3_details['name']}) 29 | 30 | self.lambda_role = iam.Role(self, id=fun_name+'-role', 31 | assumed_by =iam.ServicePrincipal('lambda.amazonaws.com'), 32 | managed_policies=[ 33 | iam.ManagedPolicy.from_aws_managed_policy_name( 34 | 'service-role/AWSLambdaBasicExecutionRole') 35 | ] 36 | ) 37 | 38 | self.lam_func = lm.Function(self, lm_name, 39 | runtime=lm.Runtime.PYTHON_3_9, 40 | handler=handler, 41 | code=lm.Code.from_asset(os.path.join(os.path.dirname(__file__), relative_code_path)), 42 | timeout=cdk.Duration.seconds(timeout), 43 | memory_size=memory_size, 44 | ephemeral_storage_size=cdk.Size.mebibytes(ephemeral_storage_size), 45 | function_name=fun_name, 46 | role=self.lambda_role, 47 | environment=env_variables, 48 | reserved_concurrent_executions=20 49 | ) 50 | -------------------------------------------------------------------------------- /lib/quickSightConstruct.py: -------------------------------------------------------------------------------- 1 | import aws_cdk as cdk 2 | from aws_cdk import aws_quicksight as quicksight 3 | 4 | from constructs import Construct as construct 5 | 6 | class QuickSight(construct): 7 | def __init__(self, scope: construct, id: str, identifier: str, **kwargs): 8 | super().__init__(scope, id, **kwargs) 9 | qs_context = dict(self.node.try_get_context("qs_context")) 10 | qs_details = qs_context[identifier] 11 | qs_principal_arn = "arn:aws:quicksight:"+qs_details['group_region']+":"+cdk.Aws.ACCOUNT_ID+":group/"+qs_details['qs_namespace']+"/"+qs_details['group_name'] 12 | 
qs_data_source_permissions = [ 13 | quicksight.CfnDataSource.ResourcePermissionProperty( 14 | principal=qs_principal_arn, 15 | actions=[ 16 | "quicksight:DescribeDataSource", 17 | "quicksight:DescribeDataSourcePermissions", 18 | "quicksight:PassDataSource", 19 | "quicksight:UpdateDataSource", 20 | "quicksight:DeleteDataSource", 21 | "quicksight:UpdateDataSourcePermissions" 22 | ], 23 | ), 24 | ] 25 | 26 | qs_dataset_permissions = [ 27 | quicksight.CfnDataSet.ResourcePermissionProperty( 28 | principal=qs_principal_arn, 29 | actions=[ 30 | "quicksight:DescribeDataSet", 31 | "quicksight:DescribeDataSetPermissions", 32 | "quicksight:PassDataSet", 33 | "quicksight:DescribeIngestion", 34 | "quicksight:ListIngestions", 35 | "quicksight:UpdateDataSet", 36 | "quicksight:DeleteDataSet", 37 | "quicksight:CreateIngestion", 38 | "quicksight:CancelIngestion", 39 | "quicksight:UpdateDataSetPermissions" 40 | ], 41 | ) 42 | ] 43 | 44 | resource_dasboard_source = quicksight.CfnDataSource(self, identifier+'-'+qs_details['name'], 45 | aws_account_id=cdk.Aws.ACCOUNT_ID, 46 | data_source_id=identifier+'-'+qs_details['name']+'-datasource', 47 | type="ATHENA", 48 | name=identifier+'-'+qs_details['name']+'-datasource', 49 | permissions=qs_data_source_permissions 50 | ) 51 | 52 | qs_athena_dataset_resource_iac_physical_table = ( 53 | quicksight.CfnDataSet.PhysicalTableProperty( 54 | relational_table=quicksight.CfnDataSet.RelationalTableProperty( 55 | data_source_arn=resource_dasboard_source.attr_arn, 56 | input_columns=[ 57 | quicksight.CfnDataSet.InputColumnProperty( 58 | name="principal_id", type="STRING" 59 | ), 60 | quicksight.CfnDataSet.InputColumnProperty( 61 | name="account_id", type="INTEGER" 62 | ), 63 | quicksight.CfnDataSet.InputColumnProperty( 64 | name="invoked_by", type="STRING" 65 | ), 66 | quicksight.CfnDataSet.InputColumnProperty( 67 | name="session_user", type="STRING" 68 | ), 69 | quicksight.CfnDataSet.InputColumnProperty( 70 | name="event_time", type="DECIMAL" 71 | ), 72 
| quicksight.CfnDataSet.InputColumnProperty( 73 | name="event_source", type="STRING" 74 | ), 75 | quicksight.CfnDataSet.InputColumnProperty( 76 | name="event_name", type="STRING" 77 | ), 78 | quicksight.CfnDataSet.InputColumnProperty( 79 | name="user_agent", type="STRING" 80 | ), 81 | quicksight.CfnDataSet.InputColumnProperty( 82 | name="tool", type="STRING" 83 | ), 84 | quicksight.CfnDataSet.InputColumnProperty( 85 | name="resource", type="STRING" 86 | ), 87 | quicksight.CfnDataSet.InputColumnProperty( 88 | name="partition_0", type="STRING" 89 | ), 90 | quicksight.CfnDataSet.InputColumnProperty( 91 | name="partition_1", type="STRING" 92 | ), 93 | quicksight.CfnDataSet.InputColumnProperty( 94 | name="partition_2", type="STRING" 95 | ), 96 | quicksight.CfnDataSet.InputColumnProperty( 97 | name="partition_3", type="STRING" 98 | ), 99 | ], 100 | catalog="AWSDataCatalog", 101 | schema=qs_details['db_name'], 102 | name=qs_details['table_name'], 103 | ) 104 | ) 105 | ) 106 | 107 | resource_dataset = quicksight.CfnDataSet(self, identifier+'-'+qs_details['ds_name'], 108 | data_set_id=identifier+'-'+qs_details['ds_name'], 109 | name=identifier+'-'+qs_details['ds_name'], 110 | aws_account_id=cdk.Aws.ACCOUNT_ID, 111 | import_mode="DIRECT_QUERY", 112 | physical_table_map={ 113 | "resource-iac-table": qs_athena_dataset_resource_iac_physical_table 114 | }, 115 | permissions=qs_dataset_permissions 116 | ) 117 | -------------------------------------------------------------------------------- /lib/resource_iac_tool_stack.py: -------------------------------------------------------------------------------- 1 | from aws_cdk import ( 2 | Duration, 3 | Stack, 4 | aws_iam as iam, 5 | aws_cloudtrail as ct, 6 | aws_lambda_event_sources as lm_event_source, 7 | aws_s3 as s3 8 | # aws_sqs as sqs, 9 | ) 10 | import aws_cdk as cdk 11 | 12 | from constructs import Construct as construct 13 | 14 | from . import s3constructs 15 | from . 
import CloudtrailConstructs 16 | from .import kmskey 17 | from .import lambdaConstructs 18 | from .import glueConstructs 19 | from .import quickSightConstruct 20 | 21 | class ResourceIaCStack(Stack): 22 | 23 | def __init__(self, scope: construct, construct_id: str, **kwargs) -> None: 24 | super().__init__(scope, construct_id, **kwargs) 25 | 26 | ### Creating S3 Buckets ## 27 | 28 | trail_s3_bucket = s3constructs.S3Buckets(self, 'cloudtrail-s3-bucket', identifier='ct') 29 | output_s3_bucket = s3constructs.S3Buckets(self, 'output-s3-bucket', identifier='output') 30 | output_s3_bucket.s3_bucket.add_lifecycle_rule(transitions=[s3.Transition(storage_class=s3.StorageClass.GLACIER,transition_after=Duration.days(30))]) 31 | 32 | ### Creating KMS key for Cloud Trail ## 33 | kms_key = kmskey.Kms_Key(self, 'ct-kms_key', identifier='ct') 34 | 35 | 36 | ### Creating Lambda Function ## 37 | lambda_function = lambdaConstructs.lm_func(self, 'lambda-to-process-ct-logs', identifier='ct') 38 | 39 | ### Adding S3 event Trigger to Lambda Function ## 40 | lambda_function.lam_func.add_event_source(lm_event_source.S3EventSource(bucket=trail_s3_bucket.s3_bucket, events=[s3.EventType.OBJECT_CREATED],filters=[s3.NotificationKeyFilter(prefix=f"AWSLogs/{cdk.Aws.ACCOUNT_ID}/CloudTrail/")])) 41 | 42 | ## Add read policy for ct event bucket to lambda ## 43 | trail_bucket_arn=trail_s3_bucket.s3_bucket.bucket_arn 44 | trail_bucket_arn_path=trail_bucket_arn+'/*' 45 | lambda_function.lam_func.add_to_role_policy(iam.PolicyStatement( 46 | actions=['s3:Get*'], 47 | effect=iam.Effect.ALLOW, 48 | resources=[trail_bucket_arn_path] 49 | ) 50 | ) 51 | 52 | self.lambda_role_arn = lambda_function.lambda_role.role_arn 53 | kms_key.kms_key.add_to_resource_policy(iam.PolicyStatement(actions=["kms:Decrypt"],principals=[iam.ArnPrincipal(self.lambda_role_arn)],resources=["*"],effect=iam.Effect.ALLOW)) 54 | 55 | ## Add write policy for output filtered event bucket to lambda ## 56 | 
output_s3_bucket_arn=output_s3_bucket.s3_bucket.bucket_arn 57 | output_s3_bucket_arn_path=output_s3_bucket_arn+'/*' 58 | lambda_function.lam_func.add_to_role_policy(iam.PolicyStatement( 59 | actions=['s3:ListMultipartUploadParts', 's3:PutObject', 's3:GetObject', 's3:AbortMultipartUpload', 'kms:Decrypt'], 60 | effect=iam.Effect.ALLOW, 61 | resources=[output_s3_bucket_arn_path,kms_key.kms_key.key_arn] 62 | )) 63 | 64 | ### Creating Cloud Trail # 65 | trail = CloudtrailConstructs.Cloud_trail(self, 'cloud_trail', bucket_name=trail_s3_bucket.s3_bucket, kms_key=kms_key.kms_key, identifier='ct') 66 | 67 | ### Create Glue Database ## 68 | output_s3_bucket_folder_arn=output_s3_bucket_arn+'/AWSLogs/'+cdk.Aws.ACCOUNT_ID+'/CloudTrail/*' 69 | output_s3_bucket_path=output_s3_bucket.s3_bucket.bucket_name+'/AWSLogs/'+cdk.Aws.ACCOUNT_ID+'/CloudTrail' 70 | 71 | glue_db = glueConstructs.Glue_DB(self, 'Resource-iac-glue-db', identifier='ct', output_s3_bucket_folder_arn=output_s3_bucket_folder_arn, output_s3_bucket_path=output_s3_bucket_path) 72 | 73 | #QuickSight setup - start 74 | #quicksight_athena = quickSightConstruct.QuickSight(self, 'operations-iac-qs-athena', identifier='ct') 75 | #quicksight_athena.node.add_dependency(glue_db) 76 | #QuickSight setup - ends 77 | -------------------------------------------------------------------------------- /lib/s3constructs.py: -------------------------------------------------------------------------------- 1 | import aws_cdk as cdk 2 | from aws_cdk import aws_s3 as s3 3 | 4 | from constructs import Construct as construct 5 | 6 | class S3Buckets(construct): 7 | def __init__(self, scope: construct, id: str, identifier: str, **kwargs): 8 | super().__init__(scope, id, **kwargs) 9 | s3_context = dict(self.node.try_get_context("s3_context")) 10 | s3_details = s3_context[identifier] 11 | 12 | self.s3_bucket = s3.Bucket(self, identifier+'-'+s3_details['name'], 13 | bucket_name=s3_details['name'], 14 | encryption=s3.BucketEncryption.S3_MANAGED, 15 | 
server_access_logs_prefix="bucket-access-logs", 16 | enforce_ssl=True 17 | ) 18 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | aws-cdk-lib==2.55.1 2 | constructs>=10.0.0,<11.0.0 -------------------------------------------------------------------------------- /src/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/infrastructure-assessment-iac-automation/f2b3548bebdc6222f2e90de1bc8c5a1f11e6d55e/src/.DS_Store -------------------------------------------------------------------------------- /src/lambda_code/lambda_function.py: -------------------------------------------------------------------------------- 1 | import json 2 | import urllib.parse 3 | import boto3 4 | import os 5 | import gzip 6 | 7 | import io 8 | from botocore.exceptions import ClientError 9 | 10 | region = os.getenv('AWS_REGION','us-west-2') 11 | output_bucket = os.getenv('OUTPUT_S3_BUCKET','resource-output-bucket') 12 | s3_client = boto3.client('s3', region_name = region) 13 | 14 | DATA_DICT = { 'AWS-CDK': ['cdk'], 15 | 'Terraform': ['Terraform'], 16 | 'AWS-CLI': ['aws-cli'], 17 | 'AWS-Console': ['Console'], 18 | 'AWS SSM Agent': ['amazon-ssm-agent'], 19 | 'BOTO3 SDK': ['boto3'], 20 | 'Browser': ['Mozilla','Safari','Chrome'] 21 | } 22 | 23 | def get_trail_data(bucket,key): 24 | try: 25 | response = s3_client.get_object(Bucket=bucket, Key=key) 26 | content = response['Body'].read() 27 | with gzip.GzipFile(fileobj=io.BytesIO(content), mode='rb') as fh: 28 | return json.load(fh) 29 | except Exception as e: 30 | print(e) 31 | print('Error getting object {} from bucket {}. 
Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket)) 32 | raise e 33 | 34 | def extract_data(record): 35 | output = {} 36 | user_identity_record = record.get('userIdentity',{}) 37 | if user_identity_record: 38 | output['principal_id'] = user_identity_record.get('principalId','') 39 | output['account_id'] = user_identity_record.get('accountId','') 40 | output['invoked_by'] = user_identity_record.get('invokedBy','') 41 | session_context = user_identity_record.get('sessionContext',{}) 42 | if session_context: 43 | sessionIssuer = session_context.get('sessionIssuer',{}) 44 | if sessionIssuer: 45 | output['session_user'] = sessionIssuer.get('userName','') 46 | else: 47 | output['session_user'] = '' 48 | output['event_time'] = record.get('eventTime','') 49 | output['event_source'] = record.get('eventSource','') 50 | output['event_name'] = record.get('eventName','') 51 | output['user_agent'] = record.get('userAgent','') 52 | output['tool'] = get_iac_tool(output.get('invoked_by',''),output.get('session_user',''),output['user_agent']) 53 | if output['tool'] == output['user_agent']: 54 | print(f'Could not determine the provisioning tool for record: {record}') 55 | output['resource'] = json.dumps(record.get('requestParameters',{})) 56 | 57 | return output 58 | 59 | 60 | 61 | 62 | def get_iac_tool(invoked_by,session_user,user_agent): 63 | 64 | for k in DATA_DICT: 65 | for v in DATA_DICT[k]: 66 | if v.lower() in user_agent.lower(): 67 | return k 68 | 69 | if invoked_by == 'cloudformation.amazonaws.com': 70 | if 'cdk' in session_user: 71 | return 'AWS-CDK' 72 | return 'CloudFormation' 73 | elif 'aws-sdk-' in user_agent: 74 | return user_agent.split('/')[0] 75 | 76 | 77 | elif 'amazonaws.com' in user_agent: 78 | return f'AWS {user_agent.split(".")[0]}' 79 | else: 80 | return user_agent 81 | 82 | 83 | def lambda_handler(event, context): 84 | source_bucket = 
event['Records'][0]['s3']['bucket']['name'] 85 | source_key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8') 86 | output_key = source_key.replace('json.gz','json') 87 | trail_data = get_trail_data(source_bucket,source_key) 88 | output_temp_file = f'/tmp/{output_key.split("/")[-1]}' 89 | 90 | with open(output_temp_file, 'w') as f: 91 | for record in trail_data['Records']: 92 | if not record['readOnly']: 93 | if record['eventName'] != 'CreateLogStream': 94 | f.write(json.dumps(extract_data(record)) + '\n') 95 | 96 | try: 97 | s3_client.upload_file(output_temp_file, output_bucket, output_key) 98 | except ClientError as e: 99 | print(e) 100 | raise e --------------------------------------------------------------------------------
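The tool-attribution logic in src/lambda_code/lambda_function.py needs no AWS access and can be exercised locally. The sketch below restates that classification in condensed form; the sample user agents and role name are illustrative values, not taken from the repo:

```python
# Condensed, standalone restatement of the user-agent classification in
# src/lambda_code/lambda_function.py, for local experimentation.
DATA_DICT = {
    'AWS-CDK': ['cdk'],
    'Terraform': ['Terraform'],
    'AWS-CLI': ['aws-cli'],
    'AWS-Console': ['Console'],
    'AWS SSM Agent': ['amazon-ssm-agent'],
    'BOTO3 SDK': ['boto3'],
    'Browser': ['Mozilla', 'Safari', 'Chrome'],
}

def get_iac_tool(invoked_by, session_user, user_agent):
    # Known tools match case-insensitively anywhere in the user agent.
    for tool, markers in DATA_DICT.items():
        if any(m.lower() in user_agent.lower() for m in markers):
            return tool
    # CloudFormation-invoked calls are attributed to the CDK when the
    # session role name contains 'cdk' (e.g. a CDK bootstrap role).
    if invoked_by == 'cloudformation.amazonaws.com':
        return 'AWS-CDK' if 'cdk' in session_user else 'CloudFormation'
    # Language SDKs report user agents such as 'aws-sdk-go/1.44.0 ...'.
    if 'aws-sdk-' in user_agent:
        return user_agent.split('/')[0]
    # Service-to-service calls, e.g. a user agent of 'lambda.amazonaws.com'.
    if 'amazonaws.com' in user_agent:
        return f'AWS {user_agent.split(".")[0]}'
    # Fall through: return the raw user agent so unknown callers stay visible.
    return user_agent

if __name__ == '__main__':
    # Illustrative inputs only; real values come from CloudTrail records.
    print(get_iac_tool('', '', 'aws-cli/2.9.1 Python/3.9.11'))
    print(get_iac_tool('cloudformation.amazonaws.com',
                       'cdk-deploy-role', 'cloudformation.amazonaws.com'))
    print(get_iac_tool('', '', 'aws-sdk-go/1.44.0'))
```

Note that records whose user agent matches nothing are labelled with the raw user agent itself, which is what the `output['tool'] == output['user_agent']` check in `extract_data` relies on to log unclassified requests.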