├── .github └── PULL_REQUEST_TEMPLATE.md ├── .gitignore ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE ├── Pipfile ├── README.md ├── deploy_scripts ├── api_cfn_template.yaml ├── config-dev-env.sh ├── ddl_scripts │ ├── create_schema.py │ ├── create_schema.sh │ ├── table_ec2.txt │ ├── table_ec2_package.txt │ └── table_package.txt ├── deploy_api.sh ├── deploy_rds.sh ├── package_api.sh └── rds_cfn_template.yaml ├── docs ├── aurora-servereless-data-api-xray-no-batching-service-map.png ├── aurora-serverless-data-api-sam.pdf ├── aurora-serverless-data-api-xray-batching-segments.png ├── aurora-serverless-data-api-xray-batching-service-map.png ├── aurora-serverless-data-api-xray-no-batching-segments.png └── aurora-serverless-sam-architecture.png ├── examples ├── dataapi_examples.py └── table_package.txt ├── lambdas ├── add_ec2_info.py ├── get_ec2_info.py ├── helper │ ├── __init__.py │ ├── dal.py │ ├── lambdautils.py │ └── logger.py └── requirements.txt ├── local ├── AddEC2InfoLambda-event.json ├── GetEC2InfoLambda-event.json ├── env_variables.json └── run_local.sh └── tests ├── run_tests.sh └── test_api.py /.github/PULL_REQUEST_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | *Issue #, if available:* 2 | 3 | *Description of changes:* 4 | 5 | 6 | By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. 7 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | **/__pycache__/* 2 | /**/.DS_Store 3 | .vscode/settings.json 4 | .aws-sam/ 5 | **/.pytest_cache/ 6 | tmp 7 | deploy_scripts/is*.sh 8 | lambdas/test_dal.py 9 | lambdas/test_lambda.py 10 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. 5 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check [existing open](https://github.com/aws-samples/aws-aurora-serverless-data-api-sam/issues), or [recently closed](https://github.com/aws-samples/aws-aurora-serverless-data-api-sam/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20), issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. 
Details like these are incredibly useful:
16 | 
17 | * A reproducible test case or series of steps
18 | * The version of our code being used
19 | * Any modifications you've made relevant to the bug
20 | * Anything unusual about your environment or deployment
21 | 
22 | 
23 | ## Contributing via Pull Requests
24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:
25 | 
26 | 1. You are working against the latest source on the *master* branch.
27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted.
29 | 
30 | To send us a pull request, please:
31 | 
32 | 1. Fork the repository.
33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
34 | 3. Ensure local tests pass.
35 | 4. Commit to your fork using clear commit messages.
36 | 5. Send us a pull request, answering any default questions in the pull request interface.
37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.
38 | 
39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/).
41 | 
42 | 
43 | ## Finding contributions to work on
44 | Looking at the existing issues is a great way to find something to contribute to. Since this project uses the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), the ['help wanted'](https://github.com/aws-samples/aws-aurora-serverless-data-api-sam/labels/help%20wanted) issues are a great place to start.
45 | 
46 | 
47 | ## Code of Conduct
48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
49 | For more information, see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
50 | opensource-codeofconduct@amazon.com with any additional questions or comments.
51 | 
52 | 
53 | ## Security issue notifications
54 | If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue.
55 | 
56 | 
57 | ## Licensing
58 | 
59 | See the [LICENSE](https://github.com/aws-samples/aws-aurora-serverless-data-api-sam/blob/master/LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution.
60 | 
61 | We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes.
62 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT No Attribution 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of this 4 | software and associated documentation files (the "Software"), to deal in the Software 5 | without restriction, including without limitation the rights to use, copy, modify, 6 | merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 7 | permit persons to whom the Software is furnished to do so. 8 | 9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 10 | INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 11 | PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 12 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 13 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 14 | SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. -------------------------------------------------------------------------------- /Pipfile: -------------------------------------------------------------------------------- 1 | [[source]] 2 | url = "https://pypi.org/simple" 3 | verify_ssl = true 4 | name = "pypi" 5 | 6 | [dev-packages] 7 | aws-sam-cli = "*" 8 | pytest = "*" 9 | requests = "*" 10 | "boto3" = "*" 11 | pyyaml = ">=4.2b1" 12 | 13 | [packages] 14 | aws-xray-sdk = "*" 15 | pymysql = "*" 16 | 17 | [requires] 18 | python_version = "3.6" 19 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | This project shows how to build a fully Serverless application on AWS (including the SQL database) using [Amazon API Gateway](https://aws.amazon.com/api-gateway/), [AWS Lambda](https://aws.amazon.com/lambda/),[ Amazon Aurora Serverless](https://aws.amazon.com/rds/aurora/serverless/) (MySQL) and the new [Data API](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html). By using the Data API, our Lambda functions do not have to manage persistent database connections which greatly simplifies application logic. 4 | 5 | Cool, eh? 6 | 7 | If you are not familiar with the Amazon Aurora Serverless Data API, please have a quick look at my blog post [Using the Data API to interact with an Amazon Aurora Serverless MySQL database](https://aws.amazon.com/blogs/database/using-the-data-api-to-interact-with-an-amazon-aurora-serverless-mysql-database/). It shows how to provision an Amazon Aurora Serverless database (MySQL) using infrastructure-as-code and provides code samples for various Data API use cases. 8 | 9 | ## Context 10 | 11 | Imagine for a moment an organization where the majority of application workloads are deployed to virtual machines by leveraging Amazon's EC2 infrastructure. Your team was assigned the task of building up a solution to maintain an inventory of software packages (eg, AWS CLI v1.16.111) installed across the various EC2 fleets. The solution should be API-based (REST) and leverage AWS Serverless services as much as possible, including the database. Most importantly, the solution should not make use of persistent database connections but instead use an API to interact with the SQL database. 12 | 13 | ## Solution 14 | 15 | This is what this project is all about. 
It describes an **end-to-end API-based Serverless solution for a simple EC2 Package Inventory system leveraging [Amazon Aurora Serverless (MySQL)](https://aws.amazon.com/rds/aurora/serverless/) and the [Data API](https://aws.amazon.com/blogs/aws/new-data-api-for-amazon-aurora-serverless/) for access to the database**. 16 | 17 | ![Simple EC2 Package Inventory Serverless API Using Aurora Serverless and the Data API](docs/aurora-serverless-sam-architecture.png) 18 | 19 | The architecture diagram above shows the two REST APIs (see the _POST_ and _GET_ APIs) that we're going to build. The `POST:/ec2/aws_instance_id` REST API stores EC2- and package-related information on the SQL database for a given EC2 instance. API `GET:/ec2/aws_instance_id` retrieves the stored information from the database for a given EC2 instance. 20 | 21 | Client applications send REST requests to an [Amazon API Gateway](https://aws.amazon.com/api-gateway/) endpoint which then routes the request to the appropriate Lambda function. The [Lambda](https://aws.amazon.com/lambda/) functions implement the core API logic and make use of database credentials stored on AWS Secrets Manager to connect to the Data API Endpoint for the [Aurora serverless](https://aws.amazon.com/rds/aurora/serverless/) MySQL cluster. By leveraging the Data API, Lambda functions will not have to manage persistent database connections which greatly simplifies application logic. Instead, simple API calls will be performed via the Data API to issue SQL commands to the Aurora Serverless database. 22 | 23 | By using Aurora Serverless MySQL we can take advantage of the optional auto-pause feature which allows us to automatically and seamlessly shut down and restart the database when needed without any impact to application code. This makes sense as the EC2 Inventory database will only be updated sporadically when EC2 instances are launched or terminated. In the occasional event of a large number of EC2 instances being launched simultaneously, the Aurora Serverless database will automatically scale up to meet traffic demands. 24 | 25 | ## Required software 26 | 27 | You'll need to download and install the following software: 28 | 29 | * [Python 3.6](https://www.python.org/downloads/) 30 | * [Pipenv](https://pypi.org/project/pipenv/) 31 | * [AWS CLI](https://aws.amazon.com/cli/) 32 | * [AWS SAM CLI](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html) 33 | 34 | Make sure you have set up AWS credentials (typically placed under `~/.aws/credentials` or `~/.aws/config`). The credentials you're using should have "enough" privileges to provision all required services. 
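If you don't have a named profile set up yet, one way to create one is via the AWS CLI (the profile name below is a placeholder):

```bash
# Interactively stores credentials for a named profile under ~/.aws/credentials
aws configure --profile your-aws-profile
```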
You'll know the exact definition of "enough" when you get "permission denied" errors :)
35 | 
36 | Now, indicate which AWS profile should be used by the provided scripts, e.g.:
37 | 
38 | ```bash
39 | export AWS_PROFILE=[your-aws-profile]
40 | ```
41 | 
42 | ## Python environment
43 | 
44 | Create the Python virtual environment and install the dependencies:
45 | 
46 | ```bash
47 | # from the project's root directory
48 | pipenv --python 3.6 # creates Python 3.6 virtual environment
49 | pipenv shell # activate the virtual environment
50 | pipenv install # install dependencies
51 | ```
52 | 
53 | To find out where the virtual environment and its dependencies are installed, type:
54 | 
55 | ```bash
56 | pipenv --venv
57 | ```
58 | 
59 | ## Deploying the Solution
60 | 
61 | **Note:** At the time of this writing (July 2019), the Data API is publicly available in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions.
62 | 
63 | 
64 | ### Deploying the Database
65 | 
66 | The deployment script reads its values from the config file ```config-dev-env.sh``` (__important__: This file will be used everywhere! Make sure you edit it with the config values for your AWS account!).
67 | 
68 | Create (or reuse) an S3 bucket to store Lambda packages. Your AWS credentials must give you access to put objects in that bucket.
69 | 
70 | ```
71 | # Creating an S3 bucket (if needed)
72 | aws s3 mb s3://[your-s3-bucket-name]
73 | ```
74 | 
75 | Make sure you update file `config-dev-env.sh` with the S3 bucket name, otherwise the deployment will fail.
76 | 
77 | ```bash
78 | # Specifying the S3 bucket that will store Lambda package artifacts
79 | export s3_bucket_deployment_artifacts="[your-s3-bucket-name]"
80 | ```
81 | 
82 | Now deploy the database resources by invoking the deploy script and passing the config file as an input parameter (__important__: Notice that we only specify the prefix of the config file (eg, `config-dev`), not the full file name).
83 | 
84 | ```bash
85 | # from project's root directory
86 | ./deploy_scripts/deploy_rds.sh config-dev
87 | ```
88 | 
89 | ### Creating the Database entities (database and tables)
90 | 
91 | ```bash
92 | # from project's root directory
93 | cd deploy_scripts/ddl_scripts
94 | # run the script
95 | ./create_schema.sh config-dev
96 | ```
97 | 
98 | ### Deploying the API
99 | 
100 | ```bash
101 | # from the project's root directory
102 | ./deploy_scripts/package_api.sh config-dev && ./deploy_scripts/deploy_api.sh config-dev
103 | ```
104 | 
105 | Upon completion, the deploy script will print the output parameters produced by the deployed API stack. Take note of the ```ApiEndpoint``` output parameter value.
106 | 
107 | ## APIs
108 | 
109 | You can now use a REST API client such as [Postman](https://www.getpostman.com/downloads/) or the ```curl``` command to invoke the EC2 Inventory API, using the ```ApiEndpoint``` value you grabbed in the previous step.
110 | 
111 | ### Add EC2 info to inventory
112 | 
113 | Add a new EC2 instance to the inventory by specifying the EC2 instance id (```aws_instance_id```), AWS region, and AWS account, as well as the packages that have been deployed to the instance (```package_name``` and ```package_version```).
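For instance, a quick ```curl``` invocation of this API might look like the sketch below (the endpoint host and instance id are placeholders; substitute the ```ApiEndpoint``` value from your own deployment):

```bash
# Placeholder endpoint and instance id -- substitute your own values
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"aws_region": "us-east-1", "aws_account": "123456789012", "packages": [{"package_name": "package-1", "package_version": "v1"}]}' \
  "https://abc123.execute-api.us-east-1.amazonaws.com/dev/ec2/i-01aaae43feb712345"
```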
114 | 
115 | #### Request
116 | 
117 | ```POST: https://[ApiEndpoint]/ec2/{aws_instance_id}```
118 | 
119 | Example:
120 | 
121 | ```
122 | POST: /ec2/i-01aaae43feb712345
123 | {
124 |     "aws_region": "us-east-1",
125 |     "aws_account": "123456789012",
126 |     "packages": [
127 |         {"package_name": "package-1", "package_version": "v1"},
128 |         {"package_name": "package-1", "package_version": "v2"},
129 |         {"package_name": "package-2", "package_version": "v1"},
130 |         {"package_name": "package-3", "package_version": "v1"}
131 |     ]
132 | }
133 | ```
134 | 
135 | #### Responses
136 | 
137 | **Success - HttpCode: 200**
138 | 
139 | Example:
140 | 
141 | ```
142 | {
143 |     "new_record": {
144 |         "aws_account": "123456789012",
145 |         "aws_region": "us-east-1",
146 |         "packages": [
147 |             {
148 |                 "package_name": "package-1",
149 |                 "package_version": "v1"
150 |             },
151 |             {
152 |                 "package_name": "package-1",
153 |                 "package_version": "v2"
154 |             },
155 |             {
156 |                 "package_name": "package-2",
157 |                 "package_version": "v1"
158 |             }
159 |         ]
160 |     }
161 | }
162 | ```
163 | 
164 | **Error - HttpCode: 400**
165 | 
166 | Example:
167 | 
168 | ```
169 | {
170 |     "error_message": "An error occurred (BadRequestException) when calling the ExecuteSql operation: Duplicate entry 'instance-002' for key 'PRIMARY'"
171 | }
172 | ```
173 | 
174 | ### Get EC2 info from inventory (includes packages)
175 | 
176 | Get information about an EC2 instance from the inventory by specifying the EC2 instance id (```aws_instance_id```).
177 | 
178 | #### Request
179 | 
180 | ```
181 | GET: https://[ApiEndpoint]/ec2/{aws_instance_id}
182 | ```
183 | 
184 | Example:
185 | ```
186 | GET: /ec2/i-01aaae43feb712345
187 | ```
188 | 
189 | #### Response
190 | 
191 | **Success - HttpCode: 200 (EC2 instance found)**
192 | 
193 | Example:
194 | 
195 | ```
196 | {
197 |     "record": {
198 |         "aws_instance_id": "i-01aaae43feb712345",
199 |         "aws_region": "us-east-1",
200 |         "aws_account": "123456789012",
201 |         "creation_date_utc": "2019-03-06 02:45:32.0",
202 |         "packages": [
203 |             {
204 |                 "package_name": "package-2",
205 |                 "package_version": "v1"
206 |             },
207 |             {
208 |                 "package_name": "package-1",
209 |                 "package_version": "v2"
210 |             },
211 |             {
212 |                 "package_name": "package-1",
213 |                 "package_version": "v1"
214 |             }
215 |         ]
216 |     },
217 |     "record_found": true
218 | }
219 | ```
220 | 
221 | **Success - HttpCode: 200 (EC2 instance not found)**
222 | ```
223 | {
224 |     "record": {},
225 |     "record_found": false
226 | }
227 | ```
228 | **Error - HttpCode: 400**
229 | 
230 | Example:
231 | 
232 | ```
233 | {
234 |     "error_message": "Some error message"
235 | }
236 | ```
237 | 
238 | ## Observability
239 | 
240 | We enabled observability of this application via [AWS X-Ray](https://aws.amazon.com/xray/). Take a look at the data access layer source file ([dal.py](https://github.com/aws-samples/aws-aurora-serverless-data-api-sam/blob/master/lambdas/helper/dal.py#L67)) for details. Search for terms `x-ray` and `xray`.
241 | 
242 | ## Running Lambda Functions Locally
243 | 
244 | To run Lambda function ```GetEC2InfoLambda``` locally, using the environment variables defined in ```local/env_variables.json``` and the event input file ```GetEC2InfoLambda-event.json```, do the following:
245 | 
246 | ```
247 | # from the project's root directory
248 | local/run_local.sh config-dev GetEC2InfoLambda
249 | ```
250 | 
251 | Exercise: Create an event JSON file for the ```AddEC2InfoLambda``` Lambda function and invoke it locally.
252 | 
253 | ## Running Integration Tests
254 | 
255 | A few integration tests are available under directory ```tests/```. 
The tests use the ```pytest``` framework to make API calls against our deployed API. So, before running the tests, make sure the API is actually deployed to AWS. 256 | 257 | The API endpoint is discovered automatically from the test script based on the ```ApiEndpoint``` output parameter produced by the API CloudFormation stack. 258 | 259 | To run the integration tests locally do this: 260 | 261 | ```bash 262 | # from the project's root directory 263 | ./tests/run_tests.sh config-dev 264 | ``` 265 | 266 | ## License Summary 267 | 268 | This sample code is made available under a modified MIT license. See the LICENSE file for details. 269 | -------------------------------------------------------------------------------- /deploy_scripts/api_cfn_template.yaml: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: '2010-09-09' 2 | Description: Provisions AWS Serverless API resourcs for the Simple EC2 Inventory sample solution on AWS 3 | Transform: 'AWS::Serverless-2016-10-31' 4 | Parameters: 5 | AppName: 6 | Description: "Application Name" 7 | Type: String 8 | Default: simple-ec2-inventory 9 | EnvType: 10 | Description: "Environment type (eg, dev, qa, prod)" 11 | Type: String 12 | Default: dev 13 | DatabaseStackName: 14 | Description: "Name of the RDS CloudFormation stack to import parameters from" 15 | Type: String 16 | Default: ec2_inventory_db 17 | EC2TableName: 18 | Description: "EC2 Table name" 19 | Type: String 20 | Default: ec2 21 | PackageTableName: 22 | Description: "Package Table name" 23 | Type: String 24 | Default: package 25 | EC2PackageTableName: 26 | Description: "EC2-Package Table name" 27 | Type: String 28 | Default: ec2_package 29 | ApiStageName: 30 | Description: "API Stage Name" 31 | Type: String 32 | Default: dev 33 | LambdaLogLevel: 34 | Description: "Log verbosity level for Lambda functions" 35 | Type: String 36 | Default: info 37 | Globals: 38 | Function: 39 | Runtime: python3.6 40 | Timeout: 120 41 | MemorySize: 256 42 | AutoPublishAlias: live 43 | Environment: 44 | Variables: 45 | LOG_LEVEL: !Ref LambdaLogLevel 46 | EC2_TABLE_NAME: !Ref EC2TableName 47 | PACKAGE_TABLE_NAME: !Ref PackageTableName 48 | EC2_PACKAGE_RPM_TABLE_NAME: !Ref EC2PackageTableName 49 | DB_NAME: 50 | Fn::ImportValue: 51 | !Sub "${DatabaseStackName}-DatabaseName" 52 | DB_CLUSTER_ARN: 53 | Fn::ImportValue: 54 | !Sub "${DatabaseStackName}-DatabaseClusterArn" 55 | DB_CRED_SECRETS_STORE_ARN: 56 | Fn::ImportValue: 57 | !Sub "${DatabaseStackName}-DatabaseSecretArn" 58 | Api: 59 | Name: !Sub "${EnvType}-${AppName}-api" 60 | EndpointConfiguration: REGIONAL 61 | Resources: 62 | EC2InventoryAPI: 63 | Type: 'AWS::Serverless::Api' 64 | Properties: 65 | StageName: !Sub "${ApiStageName}" 66 | TracingEnabled: True 67 | AddEC2InfoLambda: 68 | Type: 'AWS::Serverless::Function' 69 | Properties: 70 | Description: Adds EC2 info to the inventory 71 | FunctionName: !Sub "${EnvType}-${AppName}-add-ec2-lambda" 72 | CodeUri: ../lambdas/ 73 | Handler: add_ec2_info.handler 74 | Tracing: Active 75 | Events: 76 | EC2PostEvent: 77 | Type: Api 78 | Properties: 79 | Path: '/ec2/{aws_instance_id}' 80 | Method: post 81 | RestApiId: !Ref EC2InventoryAPI 82 | Policies: 83 | - Version: '2012-10-17' # Policy Document 84 | Statement: 85 | - Effect: Allow 86 | Action: 87 | - rds-data:* 88 | Resource: 89 | Fn::ImportValue: 90 | !Sub "${DatabaseStackName}-DatabaseClusterArn" 91 | - Effect: Allow 92 | Action: 93 | - secretsmanager:GetSecretValue 94 | Resource: 95 | Fn::ImportValue: 96 | 
!Sub "${DatabaseStackName}-DatabaseSecretArn" 97 | - Effect: Allow 98 | Action: 99 | - xray:PutTraceSegments 100 | - xray:PutTelemetryRecords 101 | Resource: "*" 102 | GetEC2InfoLambda: 103 | Type: 'AWS::Serverless::Function' 104 | Properties: 105 | Description: Retrieves EC2 info from the inventory 106 | FunctionName: !Sub "${EnvType}-${AppName}-get-ec2-lambda" 107 | CodeUri: ../lambdas/ 108 | Handler: get_ec2_info.handler 109 | Tracing: Active 110 | Events: 111 | EC2GetEvent: 112 | Type: Api 113 | Properties: 114 | Path: '/ec2/{aws_instance_id}' 115 | Method: get 116 | RestApiId: !Ref EC2InventoryAPI 117 | Policies: 118 | - Version: '2012-10-17' # Policy Document 119 | Statement: 120 | - Effect: Allow 121 | Action: 122 | - rds-data:* 123 | Resource: 124 | Fn::ImportValue: 125 | !Sub "${DatabaseStackName}-DatabaseClusterArn" 126 | - Effect: Allow 127 | Action: 128 | - secretsmanager:GetSecretValue 129 | Resource: 130 | Fn::ImportValue: 131 | !Sub "${DatabaseStackName}-DatabaseSecretArn" 132 | - Effect: Allow 133 | Action: 134 | - xray:PutTraceSegments 135 | - xray:PutTelemetryRecords 136 | Resource: "*" 137 | Outputs: 138 | StackName: 139 | Description: API Stack Name 140 | Value: !Ref AWS::StackName 141 | Export: 142 | Name: !Sub ${AWS::StackName}-StackName 143 | 144 | ApiEndpoint: 145 | Description: API Stage Endpoint 146 | Value: !Sub "https://${EC2InventoryAPI}.execute-api.${AWS::Region}.amazonaws.com/${EnvType}" 147 | Export: 148 | Name: !Sub ${AWS::StackName}-ApiEndPoint -------------------------------------------------------------------------------- /deploy_scripts/config-dev-env.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | 4 | # Python virtual environment location for packaging 5 | if [ -z "$virtual_env_location" ]; then 6 | virtual_env_location=`pipenv --venv` 7 | fi 8 | 9 | # CHANGE THESE VALUES FOR YOUR AWS ACCOUNT -------------------- 10 | 11 | # All resources deployed (eg, API, Lambdas) will be prefix w/ the env type (eg, dev-register-ami-lambda) 12 | 13 | # ----- General Parameters ----- # 14 | 15 | # Prefix to use to name provisioned resources 16 | export env_type="dev" 17 | # S3 bucket to store packaged Lambdas 18 | export s3_bucket_deployment_artifacts="[your-s3-bucket]" 19 | 20 | # ----- RDS Stack ----- # 21 | # RDS database name (a-zA-Z0-9_) 22 | export db_name="ec2_inventory_db" 23 | # RDS Aurora Serverless Cluster Name (a-zA-Z0-9-) 24 | export db_cluster_name="${env_type}-aurora-ec2-inventory-cluster" 25 | # RDS Master Username 26 | export db_master_username="db_user" # password will be create on-the-fly and associtated w/ this user 27 | # RDS Aurora Serverless Cluster Subnets 28 | export db_subnet_1="[your-subnet-1]" 29 | export db_subnet_2="[your-subnet-2]" 30 | export db_subnet_3="[your-subnet-3]" 31 | 32 | # ----- API Stack ----- # 33 | export api_stage_name="dev" 34 | export log_level="DEBUG" # debug/info/error 35 | 36 | # --------------------------------------------------------------- 37 | 38 | # You probably don't need to change these values 39 | export app_name="ec2-inv" 40 | export rds_cfn_template="rds_cfn_template.yaml" 41 | export api_cfn_template="api_cfn_template.yaml" 42 | export gen_api_cfn_template="generated-${api_cfn_template}" 43 | export sam_build_dir=".aws-sam" 44 | export lambdas_dir="lambdas" 45 | export rds_stack_name="${env_type}-${app_name}-database-stack" 46 | export api_stack_name="${env_type}-${app_name}-api-stack" 
--------------------------------------------------------------------------------
/deploy_scripts/ddl_scripts/create_schema.py:
--------------------------------------------------------------------------------
1 | import boto3
2 | import json
3 | import os
4 | 
5 | def get_cfn_output(key, outputs):
6 |     result = [ v['OutputValue'] for v in outputs if v['OutputKey'] == key ]
7 |     return result[0] if len(result) > 0 else ''
8 | 
9 | # Retrieve required parameters from RDS stack exported output values
10 | rds_stack_name = os.getenv('rds_stack_name')
11 | cloudformation = boto3.resource('cloudformation')
12 | stack = cloudformation.Stack(rds_stack_name)
13 | database_name = get_cfn_output('DatabaseName', stack.outputs)
14 | db_cluster_arn = get_cfn_output('DatabaseClusterArn', stack.outputs)
15 | db_credentials_secrets_store_arn = get_cfn_output('DatabaseSecretArn', stack.outputs)
16 | print(f'Database info: [name={database_name}, cluster arn={db_cluster_arn}, secrets arn={db_credentials_secrets_store_arn}]')
17 | 
18 | # Run DDL commands idempotently to create database and tables
19 | rds_client = boto3.client('rds-data')
20 | 
21 | table_ddl_script_files = ['table_ec2.txt', 'table_package.txt', 'table_ec2_package.txt']
22 | 
23 | def execute_statement(sql):
24 |     print(f'Running SQL statement: {sql}')
25 |     response = rds_client.execute_statement(
26 |         secretArn=db_credentials_secrets_store_arn,
27 |         database=database_name,
28 |         resourceArn=db_cluster_arn,
29 |         sql=sql
30 |     )
31 |     return response
32 | 
33 | execute_statement(f'create database if not exists {database_name}')
34 | 
35 | for table_ddl_script_file in table_ddl_script_files:
36 |     print(f"Creating table from DDL file: {table_ddl_script_file}")
37 |     with open(table_ddl_script_file, 'r') as ddl_script:
38 |         ddl_script_content = ddl_script.read()
39 |         execute_statement(ddl_script_content)
40 | 
41 | response = execute_statement('show tables')
42 | print(response)
43 | 
--------------------------------------------------------------------------------
/deploy_scripts/ddl_scripts/create_schema.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | #======================================================================
4 | # Creates Database schema
5 | #======================================================================
6 | 
7 | set -e
8 | 
9 | function error() {
10 |     echo "Error: $1"
11 |     exit -1
12 | }
13 | [[ -n "$1" ]] || error "Missing environment name (eg, dev, qa, prod)"
14 | env_type=$1
15 | 
16 | . 
"../${env_type}-env.sh" 17 | 18 | python create_schema.py 19 | -------------------------------------------------------------------------------- /deploy_scripts/ddl_scripts/table_ec2.txt: -------------------------------------------------------------------------------- 1 | 2 | CREATE TABLE IF NOT EXISTS ec2 ( 3 | aws_instance_id VARCHAR(255) NOT NULL, 4 | aws_region VARCHAR(30) NOT NULL, 5 | aws_account VARCHAR(13) NOT NULL, 6 | creation_date_utc DATETIME DEFAULT CURRENT_TIMESTAMP, 7 | PRIMARY KEY (aws_instance_id), 8 | INDEX aws_region_idx (aws_region), 9 | INDEX aws_account_idx (aws_account), 10 | INDEX creation_date_utc_idx (creation_date_utc) 11 | ) -------------------------------------------------------------------------------- /deploy_scripts/ddl_scripts/table_ec2_package.txt: -------------------------------------------------------------------------------- 1 | 2 | CREATE TABLE IF NOT EXISTS ec2_package ( 3 | aws_instance_id VARCHAR(255) NOT NULL, 4 | package_name VARCHAR(100) NOT NULL, 5 | package_version VARCHAR(50) NOT NULL, 6 | FOREIGN KEY (aws_instance_id) 7 | REFERENCES ec2(aws_instance_id) 8 | ON DELETE CASCADE, 9 | FOREIGN KEY (package_name, package_version) 10 | REFERENCES package(package_name, package_version) 11 | ON DELETE CASCADE 12 | ) 13 | -------------------------------------------------------------------------------- /deploy_scripts/ddl_scripts/table_package.txt: -------------------------------------------------------------------------------- 1 | 2 | CREATE TABLE IF NOT EXISTS package ( 3 | package_name VARCHAR(100) NOT NULL, 4 | package_version VARCHAR(50) NOT NULL, 5 | PRIMARY KEY (package_name, package_version) 6 | ) 7 | -------------------------------------------------------------------------------- /deploy_scripts/deploy_api.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | #====================================================================== 4 | # Deploys API resources on AWS 5 | #====================================================================== 6 | 7 | set -e 8 | 9 | function error() { 10 | echo "Error: $1" 11 | exit -1 12 | } 13 | [[ -n "$1" ]] || error "Missing environment name (eg, dev, qa, prod)" 14 | env_type=$1 15 | 16 | . "./deploy_scripts/${env_type}-env.sh" 17 | 18 | sam deploy \ 19 | --template-file "${sam_build_dir}/${gen_api_cfn_template}" \ 20 | --stack-name $api_stack_name \ 21 | --parameter-overrides \ 22 | ProjectName="$app_name" \ 23 | EnvType="$env_type" \ 24 | DatabaseStackName="${rds_stack_name}" \ 25 | ApiStageName="${api_stage_name}" \ 26 | LambdaLogLevel="${log_level}" \ 27 | --capabilities \ 28 | CAPABILITY_IAM 29 | 30 | # Print the Stack Output 31 | sleep 5 && aws cloudformation describe-stacks --stack-name $api_stack_name --query 'Stacks[0].Outputs' -------------------------------------------------------------------------------- /deploy_scripts/deploy_rds.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | #====================================================================== 4 | # Deploys RDS Aurora Serverless and related resources 5 | #====================================================================== 6 | 7 | # Sample invoke: 8 | # ./deploy.sh dev 9 | 10 | set -e 11 | 12 | function error() { 13 | echo "Error: $1" 14 | exit -1 15 | } 16 | [[ -n "$1" ]] || error "Missing environment name (eg, dev, qa, prod)" 17 | env_type=$1 18 | 19 | . 
"./deploy_scripts/${env_type}-env.sh" 20 | 21 | aws cloudformation create-stack \ 22 | --template-body "file://deploy_scripts/${rds_cfn_template}" \ 23 | --stack-name $rds_stack_name \ 24 | --parameters \ 25 | ParameterKey="AppName",ParameterValue="$app_name" \ 26 | ParameterKey="EnvType",ParameterValue="$env_type" \ 27 | ParameterKey="DBClusterName",ParameterValue="$db_cluster_name" \ 28 | ParameterKey="DatabaseName",ParameterValue="$db_name" \ 29 | ParameterKey="DBMasterUserName",ParameterValue="$db_master_username" \ 30 | ParameterKey="DBSubnetList",ParameterValue="\"${db_subnet_1},${db_subnet_2},${db_subnet_3}\"" \ 31 | --capabilities \ 32 | CAPABILITY_IAM 33 | 34 | # TODO: wait stack creation/update completion 35 | sleep 180 36 | 37 | # Enable the Data API 38 | aws rds modify-db-cluster --db-cluster-identifier $db_cluster_name --enable-http-endpoint 39 | 40 | # TODO: we could trigger the schema creation under ddl_scripts/create_schema.py from here -------------------------------------------------------------------------------- /deploy_scripts/package_api.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | #====================================================================== 4 | # Package API resources for deployment 5 | #====================================================================== 6 | 7 | set -e 8 | 9 | function error() { 10 | echo "Error: $1" 11 | echo "Example: ./package.sh qa" 12 | exit -1 13 | } 14 | 15 | [[ -n "$1" ]] || error "Missing environment name (eg, dev, uat, prod)" 16 | env_type=$1 17 | 18 | . "./deploy_scripts/${env_type}-env.sh" 19 | 20 | if [ -z "$virtual_env_location" ]; then 21 | virtual_env_location=`pipenv --venv` 22 | fi 23 | 24 | # create or update requirements.txt 25 | # (cd lambdas/ && pipenv lock -r) 26 | 27 | # install dependencies from requirements.txt 28 | sam build \ 29 | -t deploy_scripts/${api_cfn_template} \ 30 | -s $lambdas_dir 31 | 32 | # package lambdas and dependencies in S3 33 | rm -f "${sam_build_dir}/${gen_api_cfn_template}" 34 | sam package \ 35 | --s3-bucket $s3_bucket_deployment_artifacts \ 36 | --output-template-file "${sam_build_dir}/${gen_api_cfn_template}" 37 | -------------------------------------------------------------------------------- /deploy_scripts/rds_cfn_template.yaml: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: '2010-09-09' 2 | Description: Provisions Aurora Serverless and related resources for the Simple EC2 Inventory sample solution on AWS 3 | Parameters: 4 | AppName: 5 | Description: "Application Name" 6 | Type: String 7 | Default: ec2-inv 8 | EnvType: 9 | Description: "Environment type (eg, dev, qa, prod)" 10 | Type: String 11 | Default: dev 12 | DBClusterName: 13 | Description: "Aurora RDS cluster name" 14 | Type: String 15 | Default: ec2-inv-rds-cluster 16 | DatabaseName: 17 | Description: "Aurora RDS database name" 18 | Type: String 19 | Default: ec2_inventory_db 20 | DBMasterUserName: 21 | AllowedPattern: "[a-zA-Z0-9_]+" 22 | ConstraintDescription: must be between 1 to 16 alphanumeric characters. 23 | Description: The database admin account user name, between 1 to 16 alphanumeric characters. 
24 |     MaxLength: '16'
25 |     MinLength: '1'
26 |     Type: String
27 |     Default: admin_user
28 |   DBSubnetList:
29 |     Description: "List of subnets for the Aurora RDS cluster"
30 |     Type: List<AWS::EC2::Subnet::Id>
31 |     ConstraintDescription: "must be a list of at least three existing subnets associated with at least three different availability zones."
32 | Resources:
33 |   DBSubnetGroup:
34 |     Type: AWS::RDS::DBSubnetGroup
35 |     Properties:
36 |       DBSubnetGroupDescription: CloudFormation managed DB subnet group.
37 |       SubnetIds:
38 |         - !Select [0, !Ref DBSubnetList ]
39 |         - !Select [1, !Ref DBSubnetList ]
40 |         - !Select [2, !Ref DBSubnetList ]
41 |   DBSecret:
42 |     Type: AWS::SecretsManager::Secret
43 |     Properties:
44 |       Name: !Sub "${EnvType}-AuroraUserSecret"
45 |       Description: RDS database auto-generated user password
46 |       GenerateSecretString:
47 |         SecretStringTemplate: !Sub '{"username": "${DBMasterUserName}"}'
48 |         GenerateStringKey: "password"
49 |         PasswordLength: 30
50 |         ExcludeCharacters: '"@/\'
51 |       Tags:
52 |         -
53 |           Key: AppName
54 |           Value: !Ref AppName
55 |   RDSCluster:
56 |     Type: AWS::RDS::DBCluster
57 |     Properties:
58 |       DBClusterIdentifier: !Ref DBClusterName
59 |       MasterUsername: !Join ['', ['{{resolve:secretsmanager:', !Ref DBSecret, ':SecretString:username}}' ]]
60 |       MasterUserPassword: !Join ['', ['{{resolve:secretsmanager:', !Ref DBSecret, ':SecretString:password}}' ]]
61 |       DatabaseName: !Ref DatabaseName
62 |       Engine: aurora
63 |       EngineMode: serverless
64 |       EngineVersion: 5.6.10a
65 |       ScalingConfiguration:
66 |         AutoPause: true
67 |         MaxCapacity: 4
68 |         MinCapacity: 1
69 |         SecondsUntilAutoPause: 900 # 15 min
70 |       DBSubnetGroupName:
71 |         Ref: DBSubnetGroup
72 | Outputs:
73 |   StackName:
74 |     Description: Aurora Stack Name
75 |     Value: !Ref AWS::StackName
76 |     Export:
77 |       Name: !Sub ${AWS::StackName}-StackName
78 | 
79 |   DatabaseName:
80 |     Description: Aurora Database Name
81 |     Value: !Ref DatabaseName
82 |     Export:
83 |       Name: !Sub ${AWS::StackName}-DatabaseName
84 | 
85 |   DatabaseClusterArn:
86 |     Description: Aurora Cluster ARN
87 |     Value: !Sub arn:aws:rds:${AWS::Region}:${AWS::AccountId}:cluster:${DBClusterName}
88 |     Export:
89 |       Name: !Sub ${AWS::StackName}-DatabaseClusterArn
90 | 
91 |   DatabaseSecretArn:
92 |     Description: Aurora Secret ARN
93 |     Value: !Ref DBSecret
94 |     Export:
95 |       Name: !Sub ${AWS::StackName}-DatabaseSecretArn
96 | 
97 |   DatabaseClusterID:
98 |     Description: Aurora Cluster ID
99 |     Value: !Ref RDSCluster
100 |     Export:
101 |       Name: !Sub ${AWS::StackName}-DatabaseClusterID
102 | 
103 |   AuroraDbURL:
104 |     Description: Aurora Database URL
105 |     Value: !GetAtt RDSCluster.Endpoint.Address
106 |     Export:
107 |       Name: !Sub ${AWS::StackName}-DatabaseURL
108 | 
109 |   DatabaseMasterUserName:
110 |     Description: Aurora Database User
111 |     Value: !Ref DBMasterUserName
112 |     Export:
113 |       Name: !Sub ${AWS::StackName}-DatabaseMasterUserName
--------------------------------------------------------------------------------
/docs/aurora-servereless-data-api-xray-no-batching-service-map.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-aurora-serverless-data-api-sam/9c9500b61ff79e4134126d38a75ffc2cada4115a/docs/aurora-servereless-data-api-xray-no-batching-service-map.png
--------------------------------------------------------------------------------
/docs/aurora-serverless-data-api-sam.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-aurora-serverless-data-api-sam/9c9500b61ff79e4134126d38a75ffc2cada4115a/docs/aurora-serverless-data-api-sam.pdf -------------------------------------------------------------------------------- /docs/aurora-serverless-data-api-xray-batching-segments.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/aws-aurora-serverless-data-api-sam/9c9500b61ff79e4134126d38a75ffc2cada4115a/docs/aurora-serverless-data-api-xray-batching-segments.png -------------------------------------------------------------------------------- /docs/aurora-serverless-data-api-xray-batching-service-map.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/aws-aurora-serverless-data-api-sam/9c9500b61ff79e4134126d38a75ffc2cada4115a/docs/aurora-serverless-data-api-xray-batching-service-map.png -------------------------------------------------------------------------------- /docs/aurora-serverless-data-api-xray-no-batching-segments.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/aws-aurora-serverless-data-api-sam/9c9500b61ff79e4134126d38a75ffc2cada4115a/docs/aurora-serverless-data-api-xray-no-batching-segments.png -------------------------------------------------------------------------------- /docs/aurora-serverless-sam-architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/aws-aurora-serverless-data-api-sam/9c9500b61ff79e4134126d38a75ffc2cada4115a/docs/aurora-serverless-sam-architecture.png -------------------------------------------------------------------------------- /examples/dataapi_examples.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | import json 3 | import os 4 | import time 5 | 6 | # Update these 3 parameters for your environment 7 | database_name = 'ec2_inventory_db' 8 | db_cluster_arn = 'arn:aws:rds:us-east-1:123456789012:cluster:dev-aurora-ec2-inventory-cluster' 9 | db_credentials_secrets_store_arn = 'arn:aws:secretsmanager:us-east-1:123456789012:secret:dev-AuroraUserSecret-DhpkOI' 10 | 11 | # This is the Data API client that will be used in our examples below 12 | rds_client = boto3.client('rds-data') 13 | 14 | #-------------------------------------------------------------------------------- 15 | # Helper Functions 16 | #-------------------------------------------------------------------------------- 17 | 18 | # Timing function executions 19 | def timeit(f): 20 | def timed(*args, **kw): 21 | ts = time.time() 22 | result = f(*args, **kw) 23 | te = time.time() 24 | print(f'Function: {f.__name__}') 25 | print(f'* args: {args}') 26 | print(f'* kw: {kw}') 27 | print(f'* execution time: {(te-ts)*1000:8.2f} ms') 28 | return result 29 | return timed 30 | 31 | @timeit 32 | def execute_statement(sql, sql_parameters=[]): 33 | response = rds_client.execute_statement( 34 | secretArn=db_credentials_secrets_store_arn, 35 | database=database_name, 36 | resourceArn=db_cluster_arn, 37 | sql=sql, 38 | parameters=sql_parameters 39 | ) 40 | return response 41 | 42 | #-------------------------------------------------------------------------------- 43 | # Various Examples of Using the Data API 44 | #-------------------------------------------------------------------------------- 45 | 46 | # Create the 
DB Schema (here just table package) 47 | @timeit 48 | def example_create_table(): 49 | print('===== Example - create schema from DDL file =====') 50 | execute_statement(f'create database if not exists {database_name}') 51 | table_ddl_script_file = 'table_package.txt' 52 | print(f"Creating table from DDL file: {table_ddl_script_file}") 53 | with open(table_ddl_script_file, 'r') as ddl_script: 54 | ddl_script_content=ddl_script.read() 55 | execute_statement(ddl_script_content) 56 | # populate table w/ some data for querying 57 | execute_statement('delete from package') 58 | for i in range(100,110): 59 | execute_statement(f'insert into package (package_name, package_version) values ("package-{i}", "version-1")') 60 | execute_statement(f'insert into package (package_name, package_version) values ("package-{i}", "version-2")') 61 | 62 | # Simple select example with no parameters 63 | @timeit 64 | def example_simple_query(): 65 | print('===== Example - Simple query =====') 66 | def execute_statement(sql): 67 | response = rds_client.execute_statement( 68 | secretArn=db_credentials_secrets_store_arn, 69 | database=database_name, 70 | resourceArn=db_cluster_arn, 71 | sql=sql 72 | ) 73 | return response 74 | 75 | response = execute_statement(f'select * from package') 76 | print(response['records']) 77 | 78 | # Simple select example with parameters 79 | # Talk about parameters as a means to prevent SQL injections 80 | @timeit 81 | def example_parameterized_query(): 82 | print('===== Example - Parameterized query =====') 83 | sql = 'select * from package where package_name=:package_name' 84 | package_name = 'package-100' 85 | sql_parameters = [{'name':'package_name', 'value':{'stringValue': f'{package_name}'}}] 86 | response = execute_statement(sql, sql_parameters) 87 | print(response['records']) 88 | 89 | # Fetch results 90 | # Order of parameters on select is relevant (eg, package_name, package_version) 91 | @timeit 92 | def example_format_query_results(): 93 | print('===== Example - Format query results =====') 94 | 95 | # Formatting query returned Field 96 | def formatField(field): 97 | return list(field.values())[0] 98 | 99 | # Formatting query returned Record 100 | def formatRecord(record): 101 | return [formatField(field) for field in record] 102 | 103 | # Formatting query returned Field 104 | def formatRecords(records): 105 | return [formatRecord(record) for record in records] 106 | 107 | sql = 'select package_name, package_version from package' 108 | response = execute_statement(sql) 109 | print(formatRecords(response['records'])) 110 | 111 | # Simple insert example 112 | @timeit 113 | def example_simple_parameterized_insert(): 114 | print('===== Example - Simple parameterized insert =====') 115 | sql = 'insert into package (package_name, package_version) values (:package_name, :package_version)' 116 | sql_parameters = [ 117 | {'name':'package_name', 'value':{'stringValue': 'package-1'}}, 118 | {'name':'package_version', 'value':{'stringValue': 'version-1'}} 119 | ] 120 | response = execute_statement(sql, sql_parameters) 121 | print(f'Number of records updated: {response["numberOfRecordsUpdated"]}') 122 | 123 | # Handling exceptions 124 | @timeit 125 | def example_exception_handling(): 126 | print('===== Example - Exception handling - Duplicate Primary Key =====') 127 | class DataAccessLayerException(Exception): 128 | pass 129 | def add_package(): 130 | try: 131 | sql = 'insert into package (package_name, package_version) values (:package_name, :package_version)' 132 | sql_parameters = [ 133 
| {'name':'package_name', 'value':{'stringValue': 'package-1'}}, 134 | {'name':'package_version', 'value':{'stringValue': 'version-1'}} 135 | ] 136 | response = execute_statement(sql, sql_parameters) 137 | print(f'Number of records updated: {response["numberOfRecordsUpdated"]}') 138 | except Exception as e: 139 | raise DataAccessLayerException(e) from e 140 | try: 141 | add_package() 142 | except DataAccessLayerException as e: 143 | print(e) 144 | 145 | # Introduce batch inserts 146 | @timeit 147 | def batch_execute_statement(sql, sql_parameter_sets): 148 | response = rds_client.batch_execute_statement( 149 | secretArn=db_credentials_secrets_store_arn, 150 | database=database_name, 151 | resourceArn=db_cluster_arn, 152 | sql=sql, 153 | parameterSets=sql_parameter_sets 154 | ) 155 | return response 156 | 157 | # Batch insert example 158 | # Ask Data API what's the max batch size! 159 | @timeit 160 | def example_batch_insert(): 161 | print('===== Example - Batch insert =====') 162 | sql = 'insert into package (package_name, package_version) values (:package_name, :package_version)' 163 | sql_parameter_sets = [] 164 | for i in range(10,20): 165 | entry = [ 166 | {'name':'package_name', 'value':{'stringValue': f'package{i}'}}, 167 | {'name':'package_version', 'value':{'stringValue': 'version-1'}} 168 | ] 169 | sql_parameter_sets.append(entry) 170 | response = batch_execute_statement(sql, sql_parameter_sets) 171 | print(f'Number of records updated: {len(response["updateResults"])}') 172 | 173 | # Transactions (commit and rollback) 174 | # Here we redefine functions execute_statement() and batch_execute_statement() to support transactions 175 | @timeit 176 | def example_handling_transactions(package_start_idx, package_end_idx): 177 | 178 | @timeit 179 | def execute_statement(sql, sql_parameters=[], transaction_id=None): 180 | parameters = { 181 | 'secretArn': db_credentials_secrets_store_arn, 182 | 'database': database_name, 183 | 'resourceArn': db_cluster_arn, 184 | 'sql': sql, 185 | 'parameters': sql_parameters 186 | } 187 | if transaction_id is not None: 188 | parameters['transactionId'] = transaction_id 189 | response = rds_client.execute_statement(**parameters) 190 | return response 191 | 192 | @timeit 193 | def batch_execute_statement(sql, sql_parameter_sets, transaction_id=None): 194 | parameters = { 195 | 'secretArn': db_credentials_secrets_store_arn, 196 | 'database': database_name, 197 | 'resourceArn': db_cluster_arn, 198 | 'sql': sql, 199 | 'parameterSets': sql_parameter_sets 200 | } 201 | if transaction_id is not None: 202 | parameters['transactionId'] = transaction_id 203 | response = rds_client.batch_execute_statement(**parameters) 204 | return response 205 | 206 | print('===== Example - Handling transactions (commit and rollback) =====') 207 | transaction = rds_client.begin_transaction( 208 | secretArn=db_credentials_secrets_store_arn, 209 | resourceArn=db_cluster_arn, 210 | database=database_name) 211 | try: 212 | sql = 'insert into package (package_name, package_version) values (:package_name, :package_version)' 213 | sql_parameter_sets = [] 214 | for i in range(package_start_idx,package_end_idx): 215 | entry = [ 216 | {'name':'package_name', 'value':{'stringValue': f'package-{i}'}}, 217 | {'name':'package_version', 'value':{'stringValue': 'version-1'}} 218 | ] 219 | sql_parameter_sets.append(entry) 220 | response = batch_execute_statement(sql, sql_parameter_sets, transaction['transactionId']) 221 | except Exception as e: 222 | print(f'Error: {e}') 223 | transaction_response = 
rds_client.rollback_transaction( 224 | secretArn=db_credentials_secrets_store_arn, 225 | resourceArn=db_cluster_arn, 226 | transactionId=transaction['transactionId']) 227 | else: 228 | transaction_response = rds_client.commit_transaction( 229 | secretArn=db_credentials_secrets_store_arn, 230 | resourceArn=db_cluster_arn, 231 | transactionId=transaction['transactionId']) 232 | print(f'Number of records updated: {len(response["updateResults"])}') 233 | print(f'Transaction Status: {transaction_response["transactionStatus"]}') 234 | 235 | 236 | # Running our examples in order 237 | example_create_table() 238 | print('\n') 239 | 240 | example_simple_query() 241 | print('\n') 242 | 243 | example_parameterized_query() 244 | print('\n') 245 | 246 | example_format_query_results() 247 | print('\n') 248 | 249 | example_simple_parameterized_insert() 250 | print('\n') 251 | 252 | example_exception_handling() 253 | print('\n') 254 | 255 | example_batch_insert() 256 | print('\n') 257 | 258 | # key 100 is a duplicate - transaction will rollback 259 | example_handling_transactions(91,101) 260 | print('\n') 261 | 262 | # transaction will be committed successfully 263 | example_handling_transactions(1000,1020) 264 | -------------------------------------------------------------------------------- /examples/table_package.txt: -------------------------------------------------------------------------------- 1 | 2 | CREATE TABLE IF NOT EXISTS package ( 3 | package_name VARCHAR(100) NOT NULL, 4 | package_version VARCHAR(50) NOT NULL, 5 | PRIMARY KEY (package_name, package_version) 6 | ) 7 | -------------------------------------------------------------------------------- /lambdas/add_ec2_info.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 | 4 | Permission is hereby granted, free of charge, to any person obtaining a copy of this 5 | software and associated documentation files (the "Software"), to deal in the Software 6 | without restriction, including without limitation the rights to use, copy, modify, 7 | merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 8 | permit persons to whom the Software is furnished to do so. 9 | 10 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 11 | INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 12 | PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 13 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 14 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 15 | SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
16 | """ 17 | 18 | import os 19 | from helper.dal import * 20 | from helper.lambdautils import * 21 | from helper.logger import get_logger 22 | 23 | logger = get_logger(__name__) 24 | 25 | database_name = os.getenv('DB_NAME') 26 | db_cluster_arn = os.getenv('DB_CLUSTER_ARN') 27 | db_credentials_secrets_store_arn = os.getenv('DB_CRED_SECRETS_STORE_ARN') 28 | 29 | dal = DataAccessLayer(database_name, db_cluster_arn, db_credentials_secrets_store_arn) 30 | 31 | ec2_valid_fields = ['aws_account', 'aws_region', 'packages'] 32 | 33 | #----------------------------------------------------------------------------------------------- 34 | # Input Validation 35 | #----------------------------------------------------------------------------------------------- 36 | def validate_ec2_path_parameters(event): 37 | if key_missing_or_empty_value(event, 'pathParameters'): 38 | raise ValueError('Invalid input - missing aws_instance_id as part of path parameters') 39 | if key_missing_or_empty_value(event['pathParameters'], 'aws_instance_id'): 40 | raise ValueError('Invalid input - missing aws_instance_id as part of path parameters') 41 | return event['pathParameters']['aws_instance_id'] 42 | 43 | def validate_ec2_input_parameters(input_fields): 44 | for field in input_fields: 45 | if field not in ec2_valid_fields: 46 | raise ValueError(f'Invalid EC2 input parameter: {field}') 47 | 48 | def validate_input(event): 49 | aws_instance_id = validate_ec2_path_parameters(event) 50 | if key_missing_or_empty_value(event, 'body'): 51 | raise ValueError('Invalid input - body must contain EC2 mandatory attributes') 52 | input_fields = json.loads(event['body']) 53 | validate_ec2_input_parameters(input_fields.keys()) 54 | return aws_instance_id, input_fields 55 | 56 | #----------------------------------------------------------------------------------------------- 57 | # Lambda Entrypoint 58 | #----------------------------------------------------------------------------------------------- 59 | def handler(event, context): 60 | try: 61 | logger.info(f'Event received: {event}') 62 | aws_instance_id, input_fields = validate_input(event) 63 | dal.save_ec2(aws_instance_id, input_fields) 64 | output = {'new_record': input_fields} 65 | logger.debug(f'Output: {output}') 66 | return success(output) 67 | except Exception as e: 68 | return handle_error(e) -------------------------------------------------------------------------------- /lambdas/get_ec2_info.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 | 4 | Permission is hereby granted, free of charge, to any person obtaining a copy of this 5 | software and associated documentation files (the "Software"), to deal in the Software 6 | without restriction, including without limitation the rights to use, copy, modify, 7 | merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 8 | permit persons to whom the Software is furnished to do so. 9 | 10 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 11 | INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 12 | PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 13 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 14 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 15 | SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
16 | """ 17 | 18 | from helper.dal import * 19 | from helper.lambdautils import * 20 | from helper.logger import get_logger 21 | 22 | logger = get_logger(__name__) 23 | 24 | database_name = os.getenv('DB_NAME') 25 | db_cluster_arn = os.getenv('DB_CLUSTER_ARN') 26 | db_credentials_secrets_store_arn = os.getenv('DB_CRED_SECRETS_STORE_ARN') 27 | 28 | dal = DataAccessLayer(database_name, db_cluster_arn, db_credentials_secrets_store_arn) 29 | 30 | #----------------------------------------------------------------------------------------------- 31 | # Input Validation 32 | #----------------------------------------------------------------------------------------------- 33 | def validate_path_parameters(event): 34 | if key_missing_or_empty_value(event, 'pathParameters'): 35 | raise ValueError('Invalid input - missing aws_instance_id as part of path parameters') 36 | if key_missing_or_empty_value(event['pathParameters'], 'aws_instance_id'): 37 | raise ValueError('Invalid input - missing aws_instance_id as part of path parameters') 38 | return event['pathParameters']['aws_instance_id'] 39 | 40 | #----------------------------------------------------------------------------------------------- 41 | # Lambda Entrypoint 42 | #----------------------------------------------------------------------------------------------- 43 | def handler(event, context): 44 | try: 45 | logger.info(f'Event received: {event}') 46 | aws_instance_id = validate_path_parameters(event) 47 | results = dal.find_ec2(aws_instance_id) 48 | output = { 49 | 'record': results, 50 | 'record_found': len(results) > 0 51 | } 52 | logger.debug(f'Output: {output}') 53 | return success(output) 54 | except Exception as e: 55 | return handle_error(e) -------------------------------------------------------------------------------- /lambdas/helper/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/aws-aurora-serverless-data-api-sam/9c9500b61ff79e4134126d38a75ffc2cada4115a/lambdas/helper/__init__.py -------------------------------------------------------------------------------- /lambdas/helper/dal.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 | 4 | Permission is hereby granted, free of charge, to any person obtaining a copy of this 5 | software and associated documentation files (the "Software"), to deal in the Software 6 | without restriction, including without limitation the rights to use, copy, modify, 7 | merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 8 | permit persons to whom the Software is furnished to do so. 9 | 10 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 11 | INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 12 | PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 13 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 14 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 15 | SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
16 | """ 17 | 18 | import json 19 | import os 20 | import boto3 21 | from .logger import get_logger 22 | 23 | logger = get_logger(__name__) 24 | 25 | is_lambda_environment = (os.getenv('AWS_LAMBDA_FUNCTION_NAME') != None) 26 | 27 | # AWS X-Ray support 28 | from aws_xray_sdk.core import xray_recorder, patch_all 29 | if is_lambda_environment: 30 | patch_all() 31 | 32 | ec2_table_name = os.getenv('EC2_TABLE_NAME', 'ec2') 33 | package_table_name = os.getenv('PACKAGE_TABLE_NAME', 'package') 34 | ec2_package_table_name = os.getenv('EC2_PACKAGE_TABLE_NAME', 'ec2_package') 35 | 36 | class DataAccessLayerException(Exception): 37 | 38 | def __init__(self, original_exception): 39 | self.original_exception = original_exception 40 | 41 | class DataAccessLayer: 42 | 43 | def __init__(self, database_name, db_cluster_arn, db_credentials_secrets_store_arn): 44 | self._rdsdata_client = boto3.client('rds-data') 45 | self._database_name = database_name 46 | self._db_cluster_arn = db_cluster_arn 47 | self._db_credentials_secrets_store_arn = db_credentials_secrets_store_arn 48 | 49 | @staticmethod 50 | def _xray_start(segment_name): 51 | if is_lambda_environment and xray_recorder: 52 | xray_recorder.begin_subsegment(segment_name) 53 | 54 | @staticmethod 55 | def _xray_stop(): 56 | if is_lambda_environment and xray_recorder: 57 | xray_recorder.end_subsegment() 58 | 59 | @staticmethod 60 | def _xray_add_metadata(name, value): 61 | if is_lambda_environment and xray_recorder and xray_recorder.current_subsegment(): 62 | return xray_recorder.current_subsegment().put_metadata(name, value) 63 | 64 | def execute_statement(self, sql_stmt, sql_params=[], transaction_id=None): 65 | parameters = f' with parameters: {sql_params}' if len(sql_params) > 0 else '' 66 | logger.debug(f'Running SQL statement: {sql_stmt}{parameters}') 67 | DataAccessLayer._xray_start('execute_statement') 68 | try: 69 | DataAccessLayer._xray_add_metadata('sql_statement', sql_stmt) 70 | parameters = { 71 | 'secretArn': self._db_credentials_secrets_store_arn, 72 | 'database': self._database_name, 73 | 'resourceArn': self._db_cluster_arn, 74 | 'sql': sql_stmt, 75 | 'parameters': sql_params 76 | } 77 | if transaction_id is not None: 78 | parameters['transactionId'] = transaction_id 79 | result = self._rdsdata_client.execute_statement(**parameters) 80 | except Exception as e: 81 | logger.debug(f'Error running SQL statement (error class: {e.__class__})') 82 | raise DataAccessLayerException(e) from e 83 | else: 84 | DataAccessLayer._xray_add_metadata('rdsdata_executesql_result', json.dumps(result)) 85 | return result 86 | finally: 87 | DataAccessLayer._xray_stop() 88 | 89 | def batch_execute_statement(self, sql_stmt, sql_param_sets, batch_size, transaction_id=None): 90 | parameters = f' with parameters: {sql_param_sets}' if len(sql_param_sets) > 0 else '' 91 | logger.debug(f'Running SQL statement: {sql_stmt}{parameters}') 92 | DataAccessLayer._xray_start('batch_execute_statement') 93 | try: 94 | array_length = len(sql_param_sets) 95 | num_batches = 1 + len(sql_param_sets)//batch_size 96 | results = [] 97 | for i in range(0, num_batches): 98 | start_idx = i*batch_size 99 | end_idx = min(start_idx + batch_size, array_length) 100 | batch_sql_param_sets = sql_param_sets[start_idx:end_idx] 101 | if len(batch_sql_param_sets) > 0: 102 | print(f'Running SQL statement: [batch #{i+1}/{num_batches}, batch size {batch_size}, SQL: {sql_stmt}]') 103 | DataAccessLayer._xray_add_metadata('sql_statement', sql_stmt) 104 | parameters = { 105 | 'secretArn': 
89 |     def batch_execute_statement(self, sql_stmt, sql_param_sets, batch_size, transaction_id=None):
90 |         parameters = f' with parameters: {sql_param_sets}' if len(sql_param_sets) > 0 else ''
91 |         logger.debug(f'Running SQL statement: {sql_stmt}{parameters}')
92 |         DataAccessLayer._xray_start('batch_execute_statement')
93 |         try:
94 |             array_length = len(sql_param_sets)
95 |             num_batches = (array_length + batch_size - 1)//batch_size  # ceiling division; the last batch may be smaller
96 |             results = []
97 |             for i in range(0, num_batches):
98 |                 start_idx = i*batch_size
99 |                 end_idx = min(start_idx + batch_size, array_length)
100 |                 batch_sql_param_sets = sql_param_sets[start_idx:end_idx]
101 |                 if len(batch_sql_param_sets) > 0:
102 |                     logger.debug(f'Running SQL statement: [batch #{i+1}/{num_batches}, batch size {batch_size}, SQL: {sql_stmt}]')
103 |                     DataAccessLayer._xray_add_metadata('sql_statement', sql_stmt)
104 |                     parameters = {
105 |                         'secretArn': self._db_credentials_secrets_store_arn,
106 |                         'database': self._database_name,
107 |                         'resourceArn': self._db_cluster_arn,
108 |                         'sql': sql_stmt,
109 |                         'parameterSets': batch_sql_param_sets
110 |                     }
111 |                     if transaction_id is not None:
112 |                         parameters['transactionId'] = transaction_id
113 |                     result = self._rdsdata_client.batch_execute_statement(**parameters)
114 |                     results.append(result)
115 |         except Exception as e:
116 |             logger.debug(f'Error running SQL statement (error class: {e.__class__})')
117 |             raise DataAccessLayerException(e) from e
118 |         else:
119 |             DataAccessLayer._xray_add_metadata('rdsdata_executesql_result', json.dumps(results))
120 |             return results
121 |         finally:
122 |             DataAccessLayer._xray_stop()
123 | 
124 |     #-----------------------------------------------------------------------------------------------
125 |     # Package Functions
126 |     #-----------------------------------------------------------------------------------------------
127 |     def find_package(self, package_name, package_version):
128 |         DataAccessLayer._xray_start('find_package')
129 |         try:
130 |             sql_parameters = [
131 |                 {'name':'package_name', 'value':{'stringValue': package_name}},
132 |                 {'name':'package_version', 'value':{'stringValue': package_version}},
133 |             ]
134 |             sql = f'select package_name, package_version' \
135 |                   f' from {package_table_name}' \
136 |                   f' where package_name=:package_name' \
137 |                   f' and package_version=:package_version'
138 |             response = self.execute_statement(sql, sql_parameters)
139 |             results = [
140 |                 {
141 |                     'package_name': record[0]['stringValue'],
142 |                     'package_version': record[1]['stringValue']
143 |                 }
144 |                 for record in response['records']
145 |             ]
146 |             return results
147 |         except DataAccessLayerException as de:
148 |             raise de
149 |         except Exception as e:
150 |             raise DataAccessLayerException(e) from e
151 |         finally:
152 |             DataAccessLayer._xray_stop()
153 | 
154 |     def _save_package(self, package_name, package_version, ignore_key_conflict=True):
155 |         DataAccessLayer._xray_start('save_package')
156 |         try:
157 |             ignore = 'ignore' if ignore_key_conflict else ''
158 |             sql_parameters = [
159 |                 {'name':'package_name', 'value':{'stringValue': package_name}},
160 |                 {'name':'package_version', 'value':{'stringValue': package_version}},
161 |             ]
162 |             sql = f'insert {ignore} into {package_table_name} ' \
163 |                   f' (package_name, package_version)' \
164 |                   f' values (:package_name,:package_version)'
165 |             response = self.execute_statement(sql, sql_parameters)
166 |             return response
167 |         finally:
168 |             DataAccessLayer._xray_stop()
169 | 
170 |     def _save_packages_batch(self, package_list, batch_size=200, ignore_key_conflict=True):
171 |         DataAccessLayer._xray_start('save_packages_batch')
172 |         try:
173 |             ignore = 'ignore' if ignore_key_conflict else ''  # MySQL 'insert ignore' makes duplicate rows a no-op
174 |             sql_parameter_sets = []
175 |             for package in package_list:
176 |                 sql_parameters = [
177 |                     {'name':'package_name', 'value':{'stringValue': package['package_name']}},
178 |                     {'name':'package_version', 'value':{'stringValue': package['package_version']}}
179 |                 ]
180 |                 sql_parameter_sets.append(sql_parameters)
181 |             sql = f'insert {ignore} into {package_table_name}' \
182 |                   f' (package_name, package_version)' \
183 |                   f' values (:package_name, :package_version)'
184 |             response = self.batch_execute_statement(sql, sql_parameter_sets, batch_size)
185 |             return response
186 |         finally:
187 |             DataAccessLayer._xray_stop()
188 | 
189 |     #-----------------------------------------------------------------------------------------------
190 |     # EC2-PACKAGE Functions
191 |     #-----------------------------------------------------------------------------------------------
192 |     def _find_ec2_package_relations(self, aws_instance_id):
193 |         DataAccessLayer._xray_start('find_ec2_package_relations')
194 |         try:
195 |             sql_parameters = [
196 |                 {'name':'aws_instance_id', 'value':{'stringValue': aws_instance_id}}
197 |             ]
198 |             sql = f'select aws_instance_id, package_name, package_version' \
199 |                   f' from {ec2_package_table_name}' \
200 |                   f' where aws_instance_id=:aws_instance_id'
201 |             response = self.execute_statement(sql, sql_parameters)
202 |             results = [
203 |                 {
204 |                     'aws_instance_id': record[0]['stringValue'],
205 |                     'package_name': record[1]['stringValue'],
206 |                     'package_version': record[2]['stringValue']
207 |                 }
208 |                 for record in response['records']
209 |             ]
210 |             return results
211 |         finally:
212 |             DataAccessLayer._xray_stop()
213 | 
214 |     def _save_ec2_package_relation(self, aws_instance_id, package_name, package_version):
215 |         DataAccessLayer._xray_start('save_ec2_package_relation')
216 |         try:
217 |             sql_parameters = [
218 |                 {'name':'aws_instance_id', 'value':{'stringValue': aws_instance_id}},
219 |                 {'name':'package_name', 'value':{'stringValue': package_name}},
220 |                 {'name':'package_version', 'value':{'stringValue': package_version}},
221 |             ]
222 |             sql = f'insert into {ec2_package_table_name}' \
223 |                   f' (aws_instance_id, package_name, package_version)' \
224 |                   f' values (:aws_instance_id, :package_name, :package_version)'
225 |             response = self.execute_statement(sql, sql_parameters)
226 |             return response
227 |         finally:
228 |             DataAccessLayer._xray_stop()
229 | 
230 |     def _save_ec2_package_relations_batch(self, aws_instance_id, package_list, batch_size=200, ignore_key_conflict=True):
231 |         DataAccessLayer._xray_start('save_ec2_package_relations_batch')
232 |         try:
233 |             ignore = 'ignore' if ignore_key_conflict else ''
234 |             sql_parameter_sets = []
235 |             for package in package_list:
236 |                 sql_parameters = [
237 |                     {'name':'aws_instance_id', 'value':{'stringValue': aws_instance_id}},
238 |                     {'name':'package_name', 'value':{'stringValue': package['package_name']}},
239 |                     {'name':'package_version', 'value':{'stringValue': package['package_version']}}
240 |                 ]
241 |                 sql_parameter_sets.append(sql_parameters)
242 |             sql = f'insert {ignore} into {ec2_package_table_name}' \
243 |                   f' (aws_instance_id, package_name, package_version)' \
244 |                   f' values (:aws_instance_id, :package_name, :package_version)'
245 |             response = self.batch_execute_statement(sql, sql_parameter_sets, batch_size)
246 |             return response
247 |         finally:
248 |             DataAccessLayer._xray_stop()
249 | 
250 |     #-----------------------------------------------------------------------------------------------
251 |     # EC2 Functions
252 |     #-----------------------------------------------------------------------------------------------
253 |     def find_ec2(self, aws_instance_id):
254 |         DataAccessLayer._xray_start('find_ec2')
255 |         try:
256 |             DataAccessLayer._xray_add_metadata('aws_instance_id', aws_instance_id)
257 |             sql_parameters = [
258 |                 {'name':'aws_instance_id', 'value':{'stringValue': aws_instance_id}}
259 |             ]
260 |             sql = f'select aws_instance_id, aws_region, aws_account' \
261 |                   f' from {ec2_table_name}' \
262 |                   f' where aws_instance_id=:aws_instance_id'
263 |             response = self.execute_statement(sql, sql_parameters)
264 |             record = dict()
265 |             returned_records = response['records']
266 |             if len(returned_records) > 0:
267 |                 record['instance_id'] = returned_records[0][0]['stringValue']
268 |                 record['aws_region'] = returned_records[0][1]['stringValue']
269 |                 record['aws_account'] = returned_records[0][2]['stringValue']
270 |                 # find ec2-package relations and add packages to the returned ec2 object
271 |                 ec2_package_relations = self._find_ec2_package_relations(aws_instance_id)
272 |                 record['packages'] = [ {'package_name': package['package_name'], 'package_version': package['package_version']} for package in ec2_package_relations]
273 |             return record
274 |         except DataAccessLayerException as de:
275 |             raise de
276 |         except Exception as e:
277 |             raise DataAccessLayerException(e) from e
278 |         finally:
279 |             DataAccessLayer._xray_stop()
280 | 
281 |     def save_ec2(self, aws_instance_id, input_fields, batch_size=200):
282 |         DataAccessLayer._xray_start('save_ec2')
283 |         try:
284 |             num_ec2_packages = len(input_fields['packages']) if 'packages' in input_fields else 0
285 |             DataAccessLayer._xray_add_metadata('aws_instance_id', aws_instance_id)
286 |             DataAccessLayer._xray_add_metadata('num_ec2_packages', num_ec2_packages)
287 |             # packages have their own table, so remove them to construct the ec2 record
288 |             ec2_fields = input_fields.copy()
289 |             ec2_fields.pop('packages', None)
290 |             sql_parameters = [
291 |                 {'name':'aws_instance_id', 'value':{'stringValue': aws_instance_id}},
292 |                 {'name':'aws_region', 'value':{'stringValue': ec2_fields['aws_region']}},
293 |                 {'name':'aws_account', 'value':{'stringValue': ec2_fields['aws_account']}},
294 |             ]
295 |             sql = f'insert into {ec2_table_name}' \
296 |                   f' (aws_instance_id, aws_region, aws_account)' \
297 |                   f' values (:aws_instance_id, :aws_region, :aws_account)'
298 |             response = self.execute_statement(sql, sql_parameters)
299 |             if 'packages' in input_fields:
300 |                 self._save_packages_batch(input_fields['packages'], batch_size)
301 |                 self._save_ec2_package_relations_batch(aws_instance_id, input_fields['packages'], batch_size)
302 |             return response
303 |         except DataAccessLayerException as de:
304 |             raise de
305 |         except Exception as e:
306 |             raise DataAccessLayerException(e) from e
307 |         finally:
308 |             DataAccessLayer._xray_stop()
309 | 
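
The data access layer above can also be exercised outside of Lambda, which is handy for smoke-testing the Data API wiring. A minimal sketch, assuming AWS credentials and a region are configured, it is run from the lambdas/ directory, and the ARNs (illustrative placeholders here) point at a real cluster and secret:

    from helper.dal import DataAccessLayer

    dal = DataAccessLayer(
        database_name='ec2_inventory_db',
        db_cluster_arn='arn:aws:rds:us-east-1:111111111111:cluster:my-cluster',
        db_credentials_secrets_store_arn='arn:aws:secretsmanager:us-east-1:111111111111:secret:my-secret')

    dal.save_ec2('i-0123456789abcdef0', {
        'aws_region': 'us-east-1',
        'aws_account': '123456789012',
        'packages': [{'package_name': 'package-1', 'package_version': 'v1'}]
    })
    print(dal.find_ec2('i-0123456789abcdef0'))

Note that save_ec2() issues three separate Data API calls (ec2 row, package rows, ec2-package rows). The execute methods accept a transaction_id (the RDS Data API also exposes begin_transaction/commit_transaction), but save_ec2() does not pass one, so a failure partway through can leave a partially written record.
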
16 | """ 17 | 18 | import json 19 | import uuid 20 | from .logger import get_logger 21 | from .dal import DataAccessLayerException 22 | 23 | logger = get_logger(__name__) 24 | 25 | def key_missing_or_empty_value(d, key): 26 | return not key in d or not d[key] 27 | 28 | def success(output): 29 | return { 30 | 'statusCode': 200, 31 | 'body': json.dumps(output) 32 | } 33 | 34 | def error(error_code, error): 35 | return { 36 | 'statusCode': error_code, 37 | 'body': json.dumps({ 38 | 'error_message': error 39 | }) 40 | } 41 | 42 | def handle_error(e): 43 | client_err_code = uuid.uuid4() 44 | client_error_msg = f'(error_code: {client_err_code})' 45 | if isinstance(e, ValueError): 46 | client_error_msg = f'{client_error_msg} - Error while validating input parameters: {e}' 47 | logger.error(f'[client error code: {client_err_code}, client error message: {client_error_msg}, internal error (ValueError)]') 48 | return error(400, client_error_msg) 49 | elif isinstance(e, DataAccessLayerException): 50 | client_error_msg = f'{client_error_msg} - Error while interacting with the database' 51 | logger.error(f'[client error code: {client_err_code}, client error message: {client_error_msg}, internal error (DataAccessLayerException): {e.original_exception}]') 52 | return error(400, client_error_msg) 53 | client_error_msg = f'{client_error_msg} - Unexpected error. Please contact the software vendor.' 54 | logger.error(f'[client error code: {client_err_code}, client error message: {client_error_msg}, internal error (Exception): {e}]') 55 | return error(400, client_error_msg) -------------------------------------------------------------------------------- /lambdas/helper/logger.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 | 4 | Permission is hereby granted, free of charge, to any person obtaining a copy of this 5 | software and associated documentation files (the "Software"), to deal in the Software 6 | without restriction, including without limitation the rights to use, copy, modify, 7 | merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 8 | permit persons to whom the Software is furnished to do so. 9 | 10 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 11 | INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 12 | PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 13 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 14 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 15 | SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
16 | """ 17 | 18 | import logging 19 | import os 20 | 21 | logging.getLogger().addHandler(logging.StreamHandler()) 22 | LOG_LEVEL = getattr(logging, os.getenv('LOG_LEVEL', 'INFO')) 23 | 24 | def get_logger(name): 25 | logger = logging.getLogger(name) 26 | logger.setLevel(LOG_LEVEL) 27 | return logger -------------------------------------------------------------------------------- /lambdas/requirements.txt: -------------------------------------------------------------------------------- 1 | -i https://pypi.org/simple/ 2 | aws-xray-sdk==2.4.2 3 | botocore==1.20.33 4 | future==0.18.3 5 | importlib-metadata==3.7.3 ; python_version < '3.8' 6 | jmespath==0.10.0 7 | jsonpickle==2.0.0 8 | pymysql==1.1.1 9 | python-dateutil==2.8.1 10 | six==1.15.0 11 | typing-extensions==3.7.4.3 ; python_version < '3.8' 12 | urllib3==1.26.19 13 | wrapt==1.12.1 14 | zipp==3.19.1 15 | -------------------------------------------------------------------------------- /local/AddEC2InfoLambda-event.json: -------------------------------------------------------------------------------- 1 | { 2 | "resource": "/ec2/{aws_instance_id}", 3 | "path": "/ec2/instance-12346", 4 | "httpMethod": "POST", 5 | "headers": { 6 | "Accept": "* /*", 7 | "accept-encoding": "gzip, deflate", 8 | "cache-control": "no-cache", 9 | "Content-Type": "application/json", 10 | "X-Amzn-Trace-Id": "Root=1-5ca1937f-ff43def8dcc20eb0a67779e8", 11 | }, 12 | "pathParameters": { 13 | "aws_instance_id": "instance-12346" 14 | }, 15 | "requestContext": { 16 | "resourceId": "o3xe3p", 17 | "resourcePath": "/ec2/{aws_instance_id}", 18 | "httpMethod": "POST", 19 | "extendedRequestId": "XcP75F__IAMFjIA=", 20 | "requestTime": "01/Apr/2019: 04: 28: 47 +0000", 21 | "path": "/dev/ec2/instance-12346", 22 | "accountId": "665243897136", 23 | "protocol": "HTTP/1.1", 24 | "stage": "dev", 25 | "domainPrefix": "p7sz7f51mf", 26 | "requestTimeEpoch": 1554092927347, 27 | "requestId": "a4545caa-5436-11e9-a130-af1393d42df1", 28 | "domainName": "p7sz7f51mf.execute-api.us-east-1.amazonaws.com", 29 | "apiId": "p7sz7f51mf" 30 | }, 31 | "body": "{\"aws_account\": \"123456789012\", \"aws_region\": \"us-east-1\", \"packages\": [ {\"package_name\": \"package-1\",\"package_version\": \"v1\"},{\"package_name\": \"package-1\",\"package_version\": \"v2\"}] }" 32 | } -------------------------------------------------------------------------------- /local/GetEC2InfoLambda-event.json: -------------------------------------------------------------------------------- 1 | { 2 | "resource": "/ec2/{aws_instance_id}", 3 | "path": "/ec2/i-0000000001", 4 | "httpMethod": "POST", 5 | "headers": { 6 | "Accept": "*/*", 7 | "accept-encoding": "gzip, deflate", 8 | "cache-control": "no-cache", 9 | "Content-Type": "application/json", 10 | "X-Amzn-Trace-Id": "Root=1-5c747f89-9713cbc075d265403910db80", 11 | }, 12 | "pathParameters": { 13 | "aws_instance_id": "i-0000000001" 14 | }, 15 | "requestContext": { 16 | "resourceId": "trf23c", 17 | "resourcePath": "/ami/i-0000000001", 18 | "httpMethod": "POST", 19 | "extendedRequestId": "VrjdiHUmoAMF41Q=", 20 | "requestTime": "25/Feb/2019:23:51:37 +0000", 21 | "path": "/Prod/ami/ami-00000001/us-east-1", 22 | "accountId": "665243897136", 23 | "protocol": "HTTP/1.1", 24 | "stage": "Prod", 25 | "domainPrefix": "9ru2poro88", 26 | "requestTimeEpoch": 1551138697815, 27 | "requestId": "4a4fa73d-3958-11e9-93b4-d1d6bd8c8dc1", 28 | "domainName": "9ru2poro88.execute-api.us-east-1.amazonaws.com", 29 | "apiId": "9ru2poro88" 30 | }, 31 | "body":"{}" 32 | } 
--------------------------------------------------------------------------------
/local/env_variables.json:
--------------------------------------------------------------------------------
1 | {
2 |     "GetEC2InfoLambda": {
3 |         "LOG_LEVEL": "DEBUG",
4 |         "DB_NAME": "ec2_inventory_db",
5 |         "DB_CLUSTER_ARN": "arn:aws:rds:us-east-1:665243897136:cluster:dev-aurora-ec2-inventory-cluster",
6 |         "DB_CRED_SECRETS_STORE_ARN": "arn:aws:secretsmanager:us-east-1:665243897136:secret:dev-AuroraUserSecret-DhpkOI"
7 |     },
8 |     "AddEC2InfoLambda": {
9 |         "LOG_LEVEL": "DEBUG",
10 |         "DB_NAME": "ec2_inventory_db",
11 |         "DB_CLUSTER_ARN": "arn:aws:rds:us-east-1:665243897136:cluster:dev-aurora-ec2-inventory-cluster",
12 |         "DB_CRED_SECRETS_STORE_ARN": "arn:aws:secretsmanager:us-east-1:665243897136:secret:dev-AuroraUserSecret-DhpkOI"
13 |     }
14 | }
--------------------------------------------------------------------------------
/local/run_local.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | #======================================================================
4 | # Builds the Lambda code and dependencies and runs a function locally
5 | #======================================================================
6 | 
7 | set -e
8 | 
9 | function error() {
10 |     echo "Error: $1"
11 |     exit 1
12 | }
13 | 
14 | [[ -n "$1" ]] || error "Missing environment name (eg, dev, uat, prod)"
15 | [[ -n "$2" ]] || error "Missing Lambda function name (eg, GetEC2InfoLambda)"
16 | env_type=$1
17 | lambda_function=$2
18 | 
19 | . "deploy_scripts/${env_type}-env.sh"
20 | 
21 | if [ -z "$virtual_env_location" ]; then
22 |     virtual_env_location=`pipenv --venv`
23 | fi
24 | 
25 | # create or update requirements.txt
26 | # (cd lambdas/ && pipenv lock -r)
27 | 
28 | # install dependencies from requirements.txt
29 | sam build \
30 |     -t deploy_scripts/${api_cfn_template} \
31 |     -s $lambdas_dir
32 | 
33 | # run locally
34 | env_variables_file="env_variables.json"
35 | echo "Running Lambda function locally: $lambda_function"
36 | sam local invoke "${lambda_function}" \
37 |     --event "local/${lambda_function}-event.json" \
38 |     --env-vars "local/${env_variables_file}"
--------------------------------------------------------------------------------
/tests/run_tests.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | # Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
4 | #
5 | # Permission is hereby granted, free of charge, to any person obtaining a copy of this
6 | # software and associated documentation files (the "Software"), to deal in the Software
7 | # without restriction, including without limitation the rights to use, copy, modify,
8 | # merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
9 | # permit persons to whom the Software is furnished to do so.
10 | #
11 | # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
12 | # INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
13 | # PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
14 | # HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
15 | # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
16 | # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
17 | 
18 | set -e
19 | 
20 | function error() {
21 |     echo "Error: $1"
22 |     exit 1
23 | }
24 | [[ -n "$1" ]] || error "Missing environment name (eg, dev, qa, prod)"
25 | env_type=$1
26 | 
27 | . "./deploy_scripts/${env_type}-env.sh"
28 | 
29 | (cd tests && pytest -s)
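
Typical usage of the two scripts above, assuming a deploy_scripts/dev-env.sh configuration file exists for the target environment (both scripts source deploy_scripts/<env>-env.sh, which among other things should export api_stack_name for the api_endpoint fixture below):

    ./local/run_local.sh dev GetEC2InfoLambda   # build and invoke one function against its recorded event
    ./tests/run_tests.sh dev                    # run the integration tests against the deployed API
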
"./deploy_scripts/${env_type}-env.sh" 28 | 29 | (cd tests && pytest -s) -------------------------------------------------------------------------------- /tests/test_api.py: -------------------------------------------------------------------------------- 1 | ''' 2 | * Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 | * 4 | * Permission is hereby granted, free of charge, to any person obtaining a copy of this 5 | * software and associated documentation files (the "Software"), to deal in the Software 6 | * without restriction, including without limitation the rights to use, copy, modify, 7 | * merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 8 | * permit persons to whom the Software is furnished to do so. 9 | * 10 | * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 11 | * INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 12 | * PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 13 | * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 14 | * OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 15 | * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 16 | ''' 17 | 18 | import os 19 | import requests 20 | import boto3 21 | import pytest 22 | import uuid 23 | from http import HTTPStatus 24 | 25 | def get_cfn_output(key, outputs): 26 | result = [ v['OutputValue'] for v in outputs if v['OutputKey'] == key ] 27 | return result[0] if len(result) > 0 else '' 28 | 29 | @pytest.fixture(scope="module") 30 | def api_endpoint(): 31 | cloudformation = boto3.resource('cloudformation') 32 | api_stack_name = os.getenv('api_stack_name') 33 | stack = cloudformation.Stack(api_stack_name) 34 | return get_cfn_output('ApiEndpoint', stack.outputs) 35 | 36 | @pytest.fixture() 37 | def ec2_input_data(): 38 | return { 39 | 'instance_id': uuid.uuid4(), 40 | 'input_data': { 41 | "aws_region": "us-east-1", 42 | "aws_account": "123456789012", 43 | "packages": [ 44 | {"package_name": "package-1", "package_version": "v1"}, 45 | {"package_name": "package-1", "package_version": "v2"}, 46 | {"package_name": "package-2", "package_version": "v1"} 47 | ] 48 | } 49 | } 50 | 51 | # TODO: add_ec2* tests have side effects (create DB record for test but does not delete it) 52 | # TODO: Warm up Aurora Serverless with an initial request + sleep 53 | 54 | def test_add_ec2_info_returns_expected_attributes(api_endpoint, ec2_input_data): 55 | r = requests.post(f'{api_endpoint}/ec2/{ec2_input_data["instance_id"]}', json = ec2_input_data['input_data']) 56 | assert HTTPStatus.OK == r.status_code 57 | response = r.json() 58 | assert 'new_record' in response 59 | assert ec2_input_data['input_data']['aws_region'] == response['new_record']['aws_region'] 60 | assert ec2_input_data['input_data']['aws_account'] == response['new_record']['aws_account'] 61 | assert ec2_input_data['input_data']['packages'] == response['new_record']['packages'] 62 | 63 | def test_add_ec2_info_error_duplicate(api_endpoint, ec2_input_data): 64 | r = requests.post(f'{api_endpoint}/ec2/{ec2_input_data["instance_id"]}', json = ec2_input_data['input_data']) 65 | assert HTTPStatus.OK == r.status_code 66 | 67 | r = requests.post(f'{api_endpoint}/ec2/{ec2_input_data["instance_id"]}', json = ec2_input_data['input_data']) 68 | response = r.json() 69 | assert HTTPStatus. 
70 | 
71 | def test_add_ec2_info_invalid_input_field(api_endpoint):
72 |     r = requests.post(f'{api_endpoint}/ec2/{uuid.uuid4()}', json = {'invalid_field_name': 'any-value'})
73 |     assert HTTPStatus.BAD_REQUEST == r.status_code
74 | 
75 | def test_get_ec2_info_record_found(api_endpoint, ec2_input_data):
76 |     r = requests.post(f'{api_endpoint}/ec2/{ec2_input_data["instance_id"]}', json = ec2_input_data['input_data'])
77 |     assert HTTPStatus.OK == r.status_code
78 | 
79 |     r = requests.get(f'{api_endpoint}/ec2/{ec2_input_data["instance_id"]}')
80 |     assert r.status_code == HTTPStatus.OK
81 |     response = r.json()
82 |     assert response['record_found'] is True
83 | 
84 | def test_get_ec2_info_record_not_found(api_endpoint):
85 |     instance_id = str(uuid.uuid4())
86 |     r = requests.get(f'{api_endpoint}/ec2/{instance_id}')
87 |     assert r.status_code == HTTPStatus.OK
88 |     response = r.json()
89 |     assert response['record_found'] is False
--------------------------------------------------------------------------------
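
A possible way to tackle the warm-up TODO in the tests above: a module-scoped, autouse fixture (a sketch only, not part of the repository) that fires one throwaway request and pauses so a paused Aurora Serverless cluster can resume before the real assertions run:

    import time
    import requests
    import pytest

    @pytest.fixture(scope='module', autouse=True)
    def warm_up_aurora(api_endpoint):
        try:
            # The first request wakes a paused cluster; the sleep is a crude resume buffer.
            requests.get(f'{api_endpoint}/ec2/warm-up-probe', timeout=60)
        except requests.RequestException:
            pass
        time.sleep(5)

Dropping this into tests/test_api.py, next to the api_endpoint fixture it depends on, would warm the database once per module run.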