├── src
│   ├── requirements.txt
│   ├── Dockerfile
│   └── runjob.py
├── Scheduling and running Amazon RDS jobs with AWS Batch and Amazon CloudWatch rules.png
├── CODE_OF_CONDUCT.md
├── LICENSE
├── CONTRIBUTING.md
├── CreateSampleDataAndSP.sql
├── README.md
└── batchenv-cf.yaml
/src/requirements.txt: -------------------------------------------------------------------------------- 1 | boto3 2 | botocore 3 | psycopg2 4 | -------------------------------------------------------------------------------- /Scheduling and running Amazon RDS jobs with AWS Batch and Amazon CloudWatch rules.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/aws-batch-rds-job-scheduling/HEAD/Scheduling and running Amazon RDS jobs with AWS Batch and Amazon CloudWatch rules.png -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. 
5 | -------------------------------------------------------------------------------- /src/Dockerfile: -------------------------------------------------------------------------------- 1 | # BASE PYTHON IMAGE 2 | FROM python:3.6 3 | 4 | # INSTALL OS TOOLS 5 | RUN apt-get update \ 6 | && apt-get -y install unzip \ 7 | && apt-get -y install libaio-dev \ 8 | && mkdir -p /opt/data/api 9 | 10 | COPY ./requirements.txt /opt/data/requirements.txt 11 | COPY ./runjob.py /opt/data/runjob.py 12 | 13 | WORKDIR /opt/data 14 | 15 | RUN pip install -r /opt/data/requirements.txt 16 | 17 | CMD ["python3","/opt/data/runjob.py"] 18 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of 4 | this software and associated documentation files (the "Software"), to deal in 5 | the Software without restriction, including without limitation the rights to 6 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of 7 | the Software, and to permit persons to whom the Software is furnished to do so. 8 | 9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 10 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS 11 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR 12 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER 13 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 14 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
15 | 16 | -------------------------------------------------------------------------------- /src/runjob.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | import json 3 | import psycopg2 4 | from botocore.exceptions import ClientError 5 | 6 | def job_exec(): 7 | 8 | secret_name = "batchjob-secret" 9 | region_name = "us-east-1" 10 | 11 | # Create a Secrets Manager client 12 | session = boto3.session.Session() 13 | client = session.client( 14 | service_name='secretsmanager', 15 | region_name=region_name 16 | ) 17 | 18 | try: 19 | response = client.get_secret_value( 20 | SecretId=secret_name 21 | ) 22 | except ClientError as error: 23 | print("Unable to retrieve secret {}: {}".format(secret_name, error)) 24 | raise 25 | 26 | secret = json.loads(response['SecretString']) 27 | 28 | print(secret['username']) 29 | print(secret['dbname']) 30 | print(secret['host']) 31 | 32 | db_host = secret['host'] 33 | db_user = secret['username'] 34 | db_pwd = secret['password'] 35 | db_name = secret['dbname'] 36 | 37 | conn = psycopg2.connect(database=db_name, user=db_user, password=db_pwd, host=db_host) 38 | cur = conn.cursor() 39 | 40 | # Call the stored function 41 | cur.callproc('low_high_salaries', (10,)) 42 | # Process and print the lowest and highest salary in dept 10 43 | row = cur.fetchone() 44 | while row is not None: 45 | print(row) 46 | row = cur.fetchone() 47 | 48 | # Close the communication with the PostgreSQL database server 49 | cur.close() 50 | conn.close() 51 | 52 | if __name__ == "__main__": 53 | job_exec() -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 
5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *master* branch. 27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 
38 | 39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute to. Because our projects use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start. 45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments. 51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue. 55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 60 | 61 | We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes. 
62 | -------------------------------------------------------------------------------- /CreateSampleDataAndSP.sql: -------------------------------------------------------------------------------- 1 | --drop table emp; 2 | --drop table dept; 3 | --drop function low_high_salaries; 4 | 5 | create table dept( 6 | deptno integer, 7 | dname varchar(20), 8 | loc varchar(20), 9 | constraint pk_dept primary key (deptno) 10 | ); 11 | 12 | create table emp( 13 | empno integer, 14 | ename varchar(10), 15 | job varchar(9), 16 | mgr integer, 17 | hiredate date, 18 | sal integer, 19 | comm integer, 20 | deptno integer, 21 | constraint pk_emp primary key (empno), 22 | constraint fk_deptno foreign key (deptno) references dept (deptno) 23 | ); 24 | 25 | insert into dept 26 | values(10, 'ACCOUNTING', 'NEW YORK') 27 | ; 28 | insert into dept 29 | values(20, 'RESEARCH', 'DALLAS') 30 | ; 31 | insert into dept 32 | values(30, 'SALES', 'CHICAGO') 33 | ; 34 | insert into dept 35 | values(40, 'OPERATIONS', 'BOSTON') 36 | ; 37 | commit; 38 | 39 | insert into emp 40 | values( 41 | 7839, 'KING', 'PRESIDENT', null, 42 | to_date('17-11-1981','dd-mm-yyyy'), 43 | 5000, null, 10 44 | ); 45 | insert into emp 46 | values( 47 | 7698, 'BLAKE', 'MANAGER', 7839, 48 | to_date('1-5-1981','dd-mm-yyyy'), 49 | 2850, null, 30 50 | ); 51 | insert into emp 52 | values( 53 | 7782, 'CLARK', 'MANAGER', 7839, 54 | to_date('9-6-1981','dd-mm-yyyy'), 55 | 2450, null, 10 56 | ); 57 | insert into emp 58 | values( 59 | 7566, 'JONES', 'MANAGER', 7839, 60 | to_date('2-4-1981','dd-mm-yyyy'), 61 | 2975, null, 20 62 | ); 63 | insert into emp 64 | values( 65 | 7788, 'SCOTT', 'ANALYST', 7566, 66 | to_date('13-7-1987','dd-mm-yyyy'), -- the Oracle-style 'rr' year format is not valid in PostgreSQL 67 | 3000, null, 20 68 | ); 69 | insert into emp 70 | values( 71 | 7902, 'FORD', 'ANALYST', 7566, 72 | to_date('3-12-1981','dd-mm-yyyy'), 73 | 3000, null, 20 74 | ); 75 | insert into emp 76 | values( 77 | 7369, 'SMITH', 'CLERK', 7902, 78 | to_date('17-12-1980','dd-mm-yyyy'), 79 | 800, null, 20 80 | ); 81 | 
insert into emp 82 | values( 83 | 7499, 'ALLEN', 'SALESMAN', 7698, 84 | to_date('20-2-1981','dd-mm-yyyy'), 85 | 1600, 300, 30 86 | ); 87 | insert into emp 88 | values( 89 | 7521, 'WARD', 'SALESMAN', 7698, 90 | to_date('22-2-1981','dd-mm-yyyy'), 91 | 1250, 500, 30 92 | ); 93 | insert into emp 94 | values( 95 | 7654, 'MARTIN', 'SALESMAN', 7698, 96 | to_date('28-9-1981','dd-mm-yyyy'), 97 | 1250, 1400, 30 98 | ); 99 | insert into emp 100 | values( 101 | 7844, 'TURNER', 'SALESMAN', 7698, 102 | to_date('8-9-1981','dd-mm-yyyy'), 103 | 1500, 0, 30 104 | ); 105 | insert into emp 106 | values( 107 | 7876, 'ADAMS', 'CLERK', 7788, 108 | to_date('13-7-1987','dd-mm-yyyy') - 51, 109 | 1100, null, 20 110 | ); 111 | insert into emp 112 | values( 113 | 7900, 'JAMES', 'CLERK', 7698, 114 | to_date('3-12-1981','dd-mm-yyyy'), 115 | 950, null, 30 116 | ); 117 | insert into emp 118 | values( 119 | 7934, 'MILLER', 'CLERK', 7782, 120 | to_date('23-1-1982','dd-mm-yyyy'), 121 | 1300, null, 10 122 | ); 123 | commit; 124 | 125 | 126 | CREATE OR REPLACE FUNCTION low_high_salaries (dept_id integer) 127 | RETURNS TABLE(low_sal INTEGER, high_sal INTEGER) AS 128 | $$ 129 | BEGIN 130 | RETURN QUERY 131 | 132 | select min(sal) as minsal, max(sal) as maxsal 133 | from emp 134 | where deptno = dept_id; 135 | 136 | END; $$ 137 | LANGUAGE plpgsql; 138 | 139 | SELECT * FROM emp; 140 | SELECT * FROM dept; 141 | select * from low_high_salaries(20); 142 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # How to schedule and execute Amazon RDS jobs using AWS Batch and CloudWatch rules 2 | 3 | ## Introduction 4 | Database administrators and developers traditionally schedule scripts to run against databases using the system cron on the host where the database is running. 
As a managed database service, Amazon RDS does not provide access to the underlying infrastructure, so if you migrate such workloads from on premises, you must move these jobs. This post provides an alternate way to schedule and run jobs centrally. 5 | 6 | AWS Batch is a managed service that abstracts the complexities of provisioning, managing, monitoring, and scaling your computing jobs, and enables you to easily and efficiently run jobs on AWS. Additionally, AWS Batch enables you to build jobs using the language of your choice and deploy them as Docker containers. 7 | 8 | This post demonstrates how to use the combination of AWS Batch and Amazon CloudWatch rules to dynamically provision resources and schedule and run functions or stored procedures on Amazon RDS. 9 | 10 | Please follow the blog post to schedule jobs and test them in your account. 11 | 12 | ![Alt text](Scheduling%20and%20running%20Amazon%20RDS%20jobs%20with%20AWS%20Batch%20and%20Amazon%20CloudWatch%20rules.png?raw=true "Title") 13 | 14 | 15 | ## Prerequisites 16 | Before you get started, complete the following prerequisites: 17 | 18 | • Install Docker Desktop on your machine. 19 | • Set up and configure the AWS CLI. For instructions, see Installing the AWS CLI. 20 | • Provide the comma-separated list of the default subnets and security groups as input parameters in the AWS CloudFormation template. 21 | 22 | 23 | ## Walkthrough 24 | The following steps provide a high-level overview of the walkthrough: 25 | 26 | 1. Clone the project from the AWS code samples repository 27 | 2. Deploy the CloudFormation template to create the required services 28 | 3. Go to the AWS CloudFormation console and make sure that the resources are created 29 | 4. Run database scripts and create the required tables and functions 30 | 5. Build, tag, and push the Docker image to Amazon ECR 31 | 6. 
Verify that AWS Batch is running the job successfully based on the CloudWatch rule 32 | 33 | This post also includes optional instructions to manage changes to the job and schedule with AWS CodeCommit and AWS CodeBuild. 34 | 35 | 36 | ## Clone source code from AWS samples 37 | Download the files required to set up the environment. See the following code: 38 | 39 | $ git clone https://github.com/aws-samples/aws-batch-rds-job-scheduling 40 | $ cd aws-batch-rds-job-scheduling 41 | 42 | 43 | ## Deploy the AWS CloudFormation template 44 | Run the CloudFormation template to provision the required services. See the following code: 45 | 46 | $ aws cloudformation create-stack --stack-name batchjob --template-body file://batchenv-cf.yaml --capabilities CAPABILITY_NAMED_IAM --region us-east-1 47 | { 48 | "StackId": "arn:aws:cloudformation:us-east-1:XXXXXXXXXXXXXX:stack/batchjob/73448940-63c5-11ea-918d-1208f0f76cbf" 49 | } 50 | 51 | The template creates the following: 52 | 53 | • Docker registry to store the Docker image 54 | • Job definition to define the Docker image, IAM role, and resource requirements for the job 55 | • Queue for jobs until they are ready to run in a compute environment 56 | • Compute environment in which AWS Batch manages the compute resources that jobs use 57 | • PostgreSQL instance 58 | • AWS Secrets Manager secret with the PostgreSQL database login credentials 59 | • CloudWatch rule to run the AWS Batch job based on the schedule 60 | • Roles with appropriate permissions 61 | 62 | The following are ancillary services, which are required only if you choose to manage changes to the job and schedule rule using CodeCommit and CodeBuild: 63 | 64 | • Repository to store buildspec.yml and src folder 65 | • A CodeBuild project to build, tag, and push Docker images to the registry 66 | 67 | This post includes these instructions after the main walkthrough. 68 | 69 | ## Testing 70 | 71 | Please follow the blog post to schedule the job and test its execution. 
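Instead of waiting for the scheduled CloudWatch rule to fire, you can also submit the job once by hand to confirm the queue and job definition work. The following is a minimal sketch, not part of the blog post: `batch_resource_names` mirrors the `Fn::Join` name patterns in batchenv-cf.yaml for the default stack name `batchjob`, `submit_test_job` assumes valid AWS credentials in the environment, and the job name `BatchDBJobManualTest` is made up for illustration.

```python
def batch_resource_names(stack_name):
    # Resource names follow the Fn::Join patterns in batchenv-cf.yaml.
    return {
        "queue": stack_name + "-BatchProcessingJobQueue",
        "job_definition": stack_name + "-BatchJobDefinition",
    }

def submit_test_job(stack_name="batchjob", region="us-east-1"):
    import boto3  # imported lazily so the name helper above works without boto3 installed
    names = batch_resource_names(stack_name)
    batch = boto3.client("batch", region_name=region)
    response = batch.submit_job(
        jobName="BatchDBJobManualTest",  # hypothetical name; any descriptive name works
        jobQueue=names["queue"],
        jobDefinition=names["job_definition"],
    )
    return response["jobId"]
```

After submitting, `aws batch describe-jobs --jobs <jobId>` (or the AWS Batch console) should show the job move from RUNNABLE to RUNNING to SUCCEEDED, with the container's output in the job's CloudWatch Logs stream.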
72 | 73 | 74 | ## Code Cleanup 75 | 76 | On the AWS Management Console, navigate to your CloudFormation stack batchjob and delete it. 77 | 78 | Alternatively, enter the following command in the AWS CLI: 79 | 80 | $ aws cloudformation delete-stack --stack-name batchjob 81 | 82 | 83 | 84 | ## License 85 | 86 | This library is licensed under the MIT-0 License. See the LICENSE file. -------------------------------------------------------------------------------- /batchenv-cf.yaml: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: '2010-09-09' 2 | Description: 'CloudFormation template for provisioning the services required to schedule and execute RDS jobs using AWS Batch' 3 | Parameters: 4 | Subnet: 5 | Type: String 6 | Default: "subnet-xxxxxxxx, subnet-xxxxxxxx, subnet-xxxxxxxx, subnet-xxxxxxxx, subnet-xxxxxxxx, subnet-xxxxxxxx" 7 | SecurityGroup: 8 | Type: String 9 | Default: "sg-xxxxxxxx" 10 | Resources: 11 | 12 | # PostgreSQL RDS instance 13 | DBInstance: 14 | Type: AWS::RDS::DBInstance 15 | Properties: 16 | AllocatedStorage: 20 17 | DBInstanceClass: db.m4.large 18 | DBName: "pgdb" 19 | Engine: "postgres" 20 | MasterUsername: !Join ['', ['{{resolve:secretsmanager:', !Ref RDSInstanceSecret, ':SecretString:username}}' ]] 21 | MasterUserPassword: !Join ['', ['{{resolve:secretsmanager:', !Ref RDSInstanceSecret, ':SecretString:password}}' ]] 22 | BackupRetentionPeriod: 0 23 | DBInstanceIdentifier: 24 | Fn::Join: 25 | - '' 26 | - - !Ref 'AWS::StackName' 27 | - '-dbinst' 28 | 29 | # AWS Secrets Manager secret to store the database password 30 | RDSInstanceSecret: 31 | Type: AWS::SecretsManager::Secret 32 | Properties: 33 | Name: 34 | Fn::Join: 35 | - '' 36 | - - !Ref 'AWS::StackName' 37 | - '-secret' 38 | Description: 'This is the secret for my RDS instance' 39 | GenerateSecretString: 40 | SecretStringTemplate: '{"username": "master"}' 41 | GenerateStringKey: 'password' 42 | PasswordLength: 16 43 | ExcludeCharacters: '"@/\' 44 | 
SecretRDSInstanceAttachment: 45 | Type: AWS::SecretsManager::SecretTargetAttachment 46 | Properties: 47 | SecretId: !Ref RDSInstanceSecret 48 | TargetId: !Ref DBInstance 49 | TargetType: AWS::RDS::DBInstance 50 | BatchServiceRole: 51 | Type: AWS::IAM::Role 52 | Properties: 53 | RoleName: 54 | Fn::Join: 55 | - '' 56 | - - !Ref 'AWS::StackName' 57 | - '-batchservice-role' 58 | AssumeRolePolicyDocument: 59 | Version: '2012-10-17' 60 | Statement: 61 | - Effect: Allow 62 | Principal: 63 | Service: [batch.amazonaws.com, events.amazonaws.com] # events.amazonaws.com lets the scheduled CloudWatch rule assume this role 64 | Action: sts:AssumeRole 65 | ManagedPolicyArns: [arn:aws:iam::aws:policy/service-role/AWSBatchServiceRole, arn:aws:iam::aws:policy/AWSBatchFullAccess] # AWSBatchFullAccess grants the batch:SubmitJob the rule target requires 66 | 67 | IamInstanceProfile: 68 | Type: AWS::IAM::InstanceProfile 69 | Properties: 70 | Roles: 71 | - Ref: EcsInstanceRole 72 | EcsInstanceRole: 73 | Type: AWS::IAM::Role 74 | Properties: 75 | AssumeRolePolicyDocument: 76 | Version: '2008-10-17' 77 | Statement: 78 | - Sid: '' 79 | Effect: Allow 80 | Principal: 81 | Service: ec2.amazonaws.com 82 | Action: sts:AssumeRole 83 | ManagedPolicyArns: 84 | - arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role 85 | - arn:aws:iam::aws:policy/SecretsManagerReadWrite 86 | 87 | # AWS Batch Job Definition 88 | BatchProcessingJobDefinition: 89 | Type: AWS::Batch::JobDefinition 90 | Properties: 91 | Type: container 92 | JobDefinitionName: 93 | Fn::Join: 94 | - '' 95 | - - !Ref 'AWS::StackName' 96 | - '-BatchJobDefinition' 97 | ContainerProperties: 98 | Image: 99 | Fn::Join: 100 | - '' 101 | - - Ref: AWS::AccountId 102 | - .dkr.ecr. 
103 | - Ref: AWS::Region 104 | - '.amazonaws.com/' 105 | - !Ref BatchProcessRepository 106 | - ':latest' 107 | Vcpus: 2 108 | Memory: 2000 109 | RetryStrategy: 110 | Attempts: 1 111 | 112 | # AWS Batch Job Queue 113 | BatchProcessingJobQueue: 114 | Type: AWS::Batch::JobQueue 115 | Properties: 116 | JobQueueName: 117 | Fn::Join: 118 | - '' 119 | - - !Ref 'AWS::StackName' 120 | - '-BatchProcessingJobQueue' 121 | Priority: 1 122 | ComputeEnvironmentOrder: 123 | - Order: 1 124 | ComputeEnvironment: 125 | Ref: ComputeEnvironment 126 | 127 | # AWS Batch Compute Environment 128 | ComputeEnvironment: 129 | Type: AWS::Batch::ComputeEnvironment 130 | Properties: 131 | Type: MANAGED 132 | ComputeResources: 133 | Type: EC2 134 | MinvCpus: 0 135 | DesiredvCpus: 0 136 | MaxvCpus: 16 137 | InstanceTypes: 138 | #- a1.medium 139 | - optimal 140 | Subnets: 141 | - Ref: Subnet 142 | SecurityGroupIds: 143 | - Ref: SecurityGroup 144 | InstanceRole: 145 | Ref: IamInstanceProfile 146 | ServiceRole: 147 | Ref: BatchServiceRole 148 | 149 | #Code Commit 150 | CodeCommitRepository: 151 | Type: AWS::CodeCommit::Repository 152 | Properties: 153 | RepositoryName: 154 | Fn::Join: 155 | - '' 156 | - - !Ref 'AWS::StackName' 157 | - '-codecommit' 158 | RepositoryDescription: Repository to maintain code related to the Batch Processing Jobs. 159 | 160 | #Code Build 161 | CodeBuildProject: 162 | Type: AWS::CodeBuild::Project 163 | Properties: 164 | Name: 165 | Fn::Join: 166 | - '' 167 | - - !Ref 'AWS::StackName' 168 | - '-codebuild' 169 | Description: Batch job CodeBuild project 170 | ServiceRole: !GetAtt CodeBuildRole.Arn 171 | Artifacts: 172 | Type: NO_ARTIFACTS 173 | Environment: 174 | Type: LINUX_CONTAINER 175 | ComputeType: BUILD_GENERAL1_SMALL 176 | Image: aws/codebuild/amazonlinux2-x86_64-standard:1.0 177 | PrivilegedMode: true 178 | EnvironmentVariables: 179 | - Name: REPOSITORY_URI 180 | Type: PLAINTEXT 181 | Value: 182 | Fn::Join: 183 | - '' 184 | - - Ref: AWS::AccountId 185 | - .dkr.ecr. 
186 | - Ref: AWS::Region 187 | - '.amazonaws.com/' 188 | - !Ref BatchProcessRepository 189 | - ':latest' 190 | 191 | - Name: AWS_DEFAULT_REGION 192 | Type: PLAINTEXT 193 | Value: 194 | Ref: AWS::Region 195 | Source: 196 | BuildSpec: buildspec.yml 197 | Location: 198 | Fn::Join: 199 | - '' 200 | - - 'https://git-codecommit.' 201 | - Ref: AWS::Region 202 | - '.amazonaws.com/v1/repos/' 203 | - !Ref 'AWS::StackName' 204 | - '-codecommit' 205 | Type: CODECOMMIT 206 | SourceVersion: refs/heads/master 207 | TimeoutInMinutes: 10 208 | CodeBuildRole: 209 | Type: AWS::IAM::Role 210 | Properties: 211 | RoleName: 212 | Fn::Join: 213 | - '' 214 | - - !Ref 'AWS::StackName' 215 | - '-codebuild-role' 216 | ManagedPolicyArns: 217 | - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess 218 | - arn:aws:iam::aws:policy/AWSCodeCommitFullAccess 219 | AssumeRolePolicyDocument: 220 | Statement: 221 | - Action: ['sts:AssumeRole'] 222 | Effect: Allow 223 | Principal: 224 | Service: [codebuild.amazonaws.com] 225 | Version: '2012-10-17' 226 | Path: / 227 | Policies: 228 | - PolicyName: CodeBuildAccess 229 | PolicyDocument: 230 | Version: '2012-10-17' 231 | Statement: 232 | - Action: 233 | - 'logs:*' 234 | - 'ec2:CreateNetworkInterface' 235 | - 'ec2:DescribeNetworkInterfaces' 236 | - 'ec2:DeleteNetworkInterface' 237 | - 'ec2:DescribeSubnets' 238 | - 'ec2:DescribeSecurityGroups' 239 | - 'ec2:DescribeDhcpOptions' 240 | - 'ec2:DescribeVpcs' 241 | - 'ec2:CreateNetworkInterfacePermission' 242 | Effect: Allow 243 | Resource: '*' 244 | 245 | # CloudWatch Events CodeBuild role 246 | CloudWatchEventsCodeBuildRole: 247 | Type: AWS::IAM::Role 248 | Properties: 249 | RoleName: 250 | Fn::Join: 251 | - '' 252 | - - !Ref 'AWS::StackName' 253 | - '-batch-processing-job-cw-events-codebuild-role' 254 | AssumeRolePolicyDocument: 255 | Version: 2012-10-17 256 | Statement: 257 | - 258 | Effect: Allow 259 | Principal: 260 | Service: 261 | - events.amazonaws.com 262 | Action: sts:AssumeRole 263 | Policies: 
264 | - PolicyName: aws-events-code-build 265 | PolicyDocument: 266 | Version: 2012-10-17 267 | Statement: 268 | - Effect: Allow 269 | Action: 270 | - 'codebuild:StartBuild' 271 | Resource: !GetAtt CodeBuildProject.Arn 272 | 273 | # CloudWatch Event Rule for CodeCommit build trigger 274 | CloudWatchEventCodeBuildEventRule: 275 | Type: AWS::Events::Rule 276 | Properties: 277 | Name: !Join 278 | - '-' 279 | - - !Ref 'AWS::StackName' 280 | - 'codecommit-rule' 281 | Description: "This event rule triggers the build on CodeCommit events" 282 | EventPattern: 283 | source: 284 | - "aws.codecommit" 285 | detail-type: 286 | - "CodeCommit Repository State Change" 287 | detail: 288 | event: 289 | - "referenceCreated" 290 | - "referenceUpdated" 291 | referenceType: 292 | - "branch" 293 | referenceName: 294 | - "master" 295 | State: "ENABLED" 296 | Targets: 297 | - 298 | Arn: {'Fn::GetAtt': [CodeBuildProject, Arn]} 299 | Id: cloudwatch-codebuild-eventrules 300 | RoleArn: !GetAtt CloudWatchEventsCodeBuildRole.Arn 301 | 302 | # Elastic Container Registry 303 | BatchProcessRepository: 304 | Type: AWS::ECR::Repository 305 | Properties: 306 | RepositoryName: 307 | Fn::Join: 308 | - '' 309 | - - !Ref 'AWS::StackName' 310 | - '-ecr' 311 | RepositoryPolicyText: 312 | Version: "2012-10-17" 313 | Statement: 314 | - 315 | Sid: AllowPushPull 316 | Effect: Allow 317 | Principal: 318 | AWS: 319 | - !Sub arn:aws:iam::${AWS::AccountId}:role/${EcsInstanceRole} 320 | Action: 321 | - "ecr:GetDownloadUrlForLayer" 322 | - "ecr:BatchGetImage" 323 | - "ecr:BatchCheckLayerAvailability" 324 | - "ecr:PutImage" 325 | - "ecr:InitiateLayerUpload" 326 | - "ecr:UploadLayerPart" 327 | - "ecr:CompleteLayerUpload" 328 | 329 | # CloudWatch rule to trigger AWS Batch jobs based on schedule 330 | BatchScheduledEvent: 331 | Type: AWS::Events::Rule 332 | Properties: 333 | Description: Rule for Amazon CloudWatch Events to trigger an AWS Batch job 334 | ScheduleExpression: "cron(0/15 * * * ? 
*)" 335 | Name: !Join 336 | - '-' 337 | - - !Ref 'AWS::StackName' 338 | - 'schedule-rule' 339 | State: ENABLED 340 | Targets: 341 | - 342 | Arn: !Ref BatchProcessingJobQueue 343 | BatchParameters: 344 | JobDefinition: !Ref BatchProcessingJobDefinition 345 | JobName: "BatchDBJobExecutedByCloudWatchRule" 346 | Id: cloudwatch-schedule-eventrule 347 | RoleArn: !GetAtt BatchServiceRole.Arn 348 | 349 | 350 | Outputs: 351 | ComputeEnvironmentArn: 352 | Value: 353 | Ref: ComputeEnvironment 354 | BatchProcessingJobQueueArn: 355 | Value: 356 | Ref: BatchProcessingJobQueue 357 | BatchProcessingJobDefinitionArn: 358 | Value: 359 | Ref: BatchProcessingJobDefinition 360 | CodeBuildProjectArn: 361 | Value: 362 | Ref: CodeBuildProject 363 | CodeCommitRepositoryArn: 364 | Value: 365 | Ref: CodeCommitRepository 366 | BatchProcessECRArn: 367 | Value: 368 | Ref: BatchProcessRepository 369 | CodeCommitRepo: 370 | Value: CodeCommitRepository 371 | --------------------------------------------------------------------------------