├── stepfunction.png
├── aws-ecs-stepfunctions.png
├── stepfunction.png.license
├── aws-ecs-stepfunctions.png.license
├── Dockerfile
├── src
│   ├── test
│   │   ├── resources
│   │   │   └── application.properties
│   │   └── java
│   │       └── com
│   │           └── example
│   │               └── S3ForwardHandlerTest.java
│   └── main
│       ├── resources
│       │   ├── application.properties
│       │   └── log4j.properties
│       └── java
│           └── com
│               └── example
│                   ├── Model
│                   │   └── Product.java
│                   ├── Utils
│                   │   ├── ConfigReader.java
│                   │   └── DataProcessor.java
│                   └── S3ForwardHandler.java
├── templates
│   ├── providers.tf
│   ├── environments.tf
│   ├── sns.tf
│   ├── ecr.tf
│   ├── configs.tf
│   ├── cloudwatch.tf
│   ├── s3.tf
│   ├── kinesis.tf
│   ├── stepfunction.tf
│   ├── fargate.tf
│   ├── network.tf
│   └── roles.tf
├── CODE_OF_CONDUCT.md
├── .gitignore
├── cleanup.sh
├── HELP.md
├── LICENSE
├── LICENSES
│   └── MIT-0.txt
├── exec.sh
├── CONTRIBUTING.md
├── pom.xml
├── mvnw.cmd
├── mvnw
└── README.md
/stepfunction.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-stepfunctions-ecs-fargate-process/HEAD/stepfunction.png
--------------------------------------------------------------------------------
/aws-ecs-stepfunctions.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-stepfunctions-ecs-fargate-process/HEAD/aws-ecs-stepfunctions.png
--------------------------------------------------------------------------------
/stepfunction.png.license:
--------------------------------------------------------------------------------
SPDX-FileCopyrightText: Copyright 2019 Amazon.com, Inc. or its affiliates

SPDX-License-Identifier: MIT-0
--------------------------------------------------------------------------------
/aws-ecs-stepfunctions.png.license:
--------------------------------------------------------------------------------
SPDX-FileCopyrightText: Copyright 2019 Amazon.com, Inc. or its affiliates

SPDX-License-Identifier: MIT-0
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
FROM openjdk:8-jdk-alpine
ARG JAR_FILE=target/ecsFargateService-1.0-SNAPSHOT.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
--------------------------------------------------------------------------------
/src/test/resources/application.properties:
--------------------------------------------------------------------------------
REGION=us-east-1
S3_BUCKET=my-stepfunction-ecs-app-dev-source-bucket--
STREAM_NAME=my-stepfunction-ecs-app-stream
--------------------------------------------------------------------------------
/src/main/resources/application.properties:
--------------------------------------------------------------------------------
REGION=us-east-1
S3_BUCKET=my-stepfunction-ecs-app-dev-source-bucket-
STREAM_NAME=my-stepfunction-ecs-app-stream
--------------------------------------------------------------------------------
/templates/providers.tf:
--------------------------------------------------------------------------------
## SPDX-FileCopyrightText: Copyright 2019 Amazon.com, Inc. or its affiliates
##
### SPDX-License-Identifier: MIT-0

provider "aws" {
  region = "us-east-1"
}
--------------------------------------------------------------------------------
/templates/environments.tf:
--------------------------------------------------------------------------------
## SPDX-FileCopyrightText: Copyright 2019 Amazon.com, Inc. or its affiliates
##
### SPDX-License-Identifier: MIT-0

data "aws_caller_identity" "current" { }
data "aws_region" "current" {}
--------------------------------------------------------------------------------
/templates/sns.tf:
--------------------------------------------------------------------------------
## SPDX-FileCopyrightText: Copyright 2019 Amazon.com, Inc. or its affiliates
##
### SPDX-License-Identifier: MIT-0

resource "aws_sns_topic" "stepfunction_ecs_sns" {
  name = "${var.app_prefix}-SNSTopic"
}
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
## Code of Conduct
This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
opensource-codeofconduct@amazon.com with any additional questions or comments.
--------------------------------------------------------------------------------
/templates/ecr.tf:
--------------------------------------------------------------------------------
## SPDX-FileCopyrightText: Copyright 2019 Amazon.com, Inc. or its affiliates
##
### SPDX-License-Identifier: MIT-0

resource "aws_ecr_repository" "stepfunction_ecs_ecr_repo" {
  name = "${var.app_prefix}-repo"

  tags = {
    Name = "${var.app_prefix}-ecr-repo"
  }
}
--------------------------------------------------------------------------------
/src/main/resources/log4j.properties:
--------------------------------------------------------------------------------

# Root logger option
log4j.rootLogger=INFO, stdout


# configuration to print on console
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
--------------------------------------------------------------------------------
/templates/configs.tf:
--------------------------------------------------------------------------------
## SPDX-FileCopyrightText: Copyright 2019 Amazon.com, Inc. or its affiliates
##
### SPDX-License-Identifier: MIT-0

variable "app_prefix" {
  description = "Application prefix for the AWS services that are built"
  default     = "my-stepfunction-ecs-app"
}

variable "stage_name" {
  default = "dev"
  type    = "string"
}

variable "java_source_zip_path" {
  description = "Java Springboot app"
  default     = "..//target//ecsFargateService-1.0-SNAPSHOT.jar"
}
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
# Compiled class file
*.class

# Log file
*.log

# BlueJ files
*.ctxt

# Mobile Tools for Java (J2ME)
.mtj.tmp/

# Package Files #
*.jar
*.war
*.nar
*.ear
*.zip
*.tar.gz
*.rar

# virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml
hs_err_pid*

templates/.terraform
target/*
templates/terraform.tfstate
templates/terraform.tfstate.backup
templates/.terraform.tfstate.lock.info
samples/*
.code/*
.idea/*
ecsFargateService.iml
*DS_Store*
--------------------------------------------------------------------------------
/cleanup.sh:
--------------------------------------------------------------------------------
ACCOUNT_NUMBER=

SOURCE_S3_BUCKET="my-stepfunction-ecs-app-dev-source-bucket"
TARGET_S3_BUCKET="my-stepfunction-ecs-app-dev-target-bucket"
ECR_REPO_NAME="my-stepfunction-ecs-app-repo"

aws ecr batch-delete-image --repository-name $ECR_REPO_NAME --image-ids imageTag=latest

aws ecr batch-delete-image --repository-name $ECR_REPO_NAME --image-ids imageTag=untagged

aws s3 rm s3://$SOURCE_S3_BUCKET-$ACCOUNT_NUMBER --recursive
aws s3 rm s3://$TARGET_S3_BUCKET-$ACCOUNT_NUMBER --recursive

cd templates
terraform destroy --auto-approve
cd ..


$SHELL
--------------------------------------------------------------------------------
/templates/cloudwatch.tf:
--------------------------------------------------------------------------------
## SPDX-FileCopyrightText: Copyright 2019 Amazon.com, Inc. or its affiliates
##
### SPDX-License-Identifier: MIT-0

resource "aws_cloudwatch_log_group" "stepfunction_ecs_container_cloudwatch_loggroup" {
  name = "${var.app_prefix}-cloudwatch-log-group"

  tags = {
    Name        = "${var.app_prefix}-cloudwatch-log-group"
    Environment = "${var.stage_name}"
  }
}

resource "aws_cloudwatch_log_stream" "stepfunction_ecs_container_cloudwatch_logstream" {
  name           = "${var.app_prefix}-cloudwatch-log-stream"
  log_group_name = "${aws_cloudwatch_log_group.stepfunction_ecs_container_cloudwatch_loggroup.name}"
}
--------------------------------------------------------------------------------
/HELP.md:
--------------------------------------------------------------------------------
# Getting Started

### Reference Documentation
For further reference, please consider the following sections:

* [Official Apache Maven documentation](https://maven.apache.org/guides/index.html)
* [Spring Boot Maven Plugin Reference Guide](https://docs.spring.io/spring-boot/docs/2.4.2/maven-plugin/reference/html/)
* [Create an OCI image](https://docs.spring.io/spring-boot/docs/2.4.2/maven-plugin/reference/html/#build-image)
* [Spring Web](https://docs.spring.io/spring-boot/docs/2.4.2/reference/htmlsingle/#boot-features-developing-web-applications)

### Guides
The following guides illustrate how to use some features concretely:

* [Building a RESTful Web Service](https://spring.io/guides/gs/rest-service/)
* [Serving Web Content with Spring MVC](https://spring.io/guides/gs/serving-web-content/)
* [Building REST services with Spring](https://spring.io/guides/tutorials/bookmarks/)

--------------------------------------------------------------------------------
/src/main/java/com/example/Model/Product.java:
--------------------------------------------------------------------------------
package com.example.Model;

/*
 * SPDX-FileCopyrightText: Copyright 2019 Amazon.com, Inc. or its affiliates
 *
 * SPDX-License-Identifier: MIT-0
 */
public class Product
{
    public String productId;
    public String productName;
    public String productVersion;

    public String getProductId(){
        return productId;
    }

    public String getProductName(){
        return productName;
    }
    public String getProductVersion(){
        return productVersion;
    }

    public void setProductId(String productId){
        this.productId = productId;
    }

    public void setProductName(String productName){
        this.productName = productName;
    }
    public void setProductVersion(String productVersion){
        this.productVersion = productVersion;
    }
}
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

--------------------------------------------------------------------------------
/src/main/java/com/example/Utils/ConfigReader.java:
--------------------------------------------------------------------------------
package com.example.Utils;

import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

/*
 * SPDX-FileCopyrightText: Copyright 2019 Amazon.com, Inc. or its affiliates
 *
 * SPDX-License-Identifier: MIT-0
 */
public class ConfigReader {
    private static Properties prop;

    static{
        InputStream inputStream = null;
        try {
            prop = new Properties();
            // load application.properties from the classpath
            inputStream = ConfigReader.class.getResourceAsStream("/application.properties");
            prop.load(inputStream);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static String getPropertyValue(String key){
        return prop.getProperty(key);
    }
}
--------------------------------------------------------------------------------
/LICENSES/MIT-0.txt:
--------------------------------------------------------------------------------
SPDX-FileCopyrightText: Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

--------------------------------------------------------------------------------
/templates/s3.tf:
--------------------------------------------------------------------------------
## SPDX-FileCopyrightText: Copyright 2019 Amazon.com, Inc. or its affiliates
##
### SPDX-License-Identifier: MIT-0

resource "aws_s3_bucket" "stepfunction_ecs_source_s3bucket" {
  bucket = "${var.app_prefix}-${var.stage_name}-source-bucket-${data.aws_caller_identity.current.account_id}"
  acl    = "private"

  tags = {
    Name        = "${var.app_prefix}-source-s3"
    Environment = "${var.stage_name}"
  }
}

resource "aws_s3_bucket" "stepfunction_ecs_target_s3bucket" {
  bucket = "${var.app_prefix}-${var.stage_name}-target-bucket-${data.aws_caller_identity.current.account_id}"
  acl    = "private"

  tags = {
    Name        = "${var.app_prefix}-target-s3"
    Environment = "${var.stage_name}"
  }
}

resource "aws_vpc_endpoint" "stepfunction_ecs_s3_vpc_endpoint" {
  vpc_id       = "${aws_vpc.stepfunction_ecs_vpc.id}"
  service_name = "com.amazonaws.${data.aws_region.current.name}.s3"

  tags = {
    Environment = "${var.app_prefix}-s3-vpc-endpoint"
  }
}
--------------------------------------------------------------------------------
/templates/kinesis.tf:
--------------------------------------------------------------------------------
## SPDX-FileCopyrightText: Copyright 2019 Amazon.com, Inc. or its affiliates
##
### SPDX-License-Identifier: MIT-0

resource "aws_kinesis_stream" "stepfunction_ecs_kinesis_stream" {
  name             = "${var.app_prefix}-stream"
  shard_count      = 1
  retention_period = 48

  shard_level_metrics = [
    "IncomingBytes",
    "OutgoingBytes",
  ]

  tags = {
    Name        = "${var.app_prefix}-stream"
    Environment = "${var.stage_name}"
  }
}

resource "aws_kinesis_firehose_delivery_stream" "stepfunction_ecs_kinesis_firehosedelivery_stream" {
  name        = "${var.app_prefix}-firehose-delivery-stream"
  destination = "s3"

  kinesis_source_configuration {
    role_arn           = "${aws_iam_role.ecs_firehose_delivery_role.arn}"
    kinesis_stream_arn = "${aws_kinesis_stream.stepfunction_ecs_kinesis_stream.arn}"
  }
  s3_configuration {
    role_arn   = "${aws_iam_role.ecs_firehose_delivery_role.arn}"
    bucket_arn = "${aws_s3_bucket.stepfunction_ecs_target_s3bucket.arn}"
    cloudwatch_logging_options {
      enabled         = true
      log_group_name  = "${aws_cloudwatch_log_group.stepfunction_ecs_container_cloudwatch_loggroup.name}"
      log_stream_name = "${aws_cloudwatch_log_stream.stepfunction_ecs_container_cloudwatch_logstream.name}"
    }
  }
}
--------------------------------------------------------------------------------
/exec.sh:
--------------------------------------------------------------------------------
ACCOUNT_NUMBER=
REGION=
INPUT_S3_BUCKET="my-stepfunction-ecs-app-dev-source-bucket"

APP_ECR_REPO_NAME=my-stepfunction-ecs-app-repo
APP_ECR_REPO_URL=$ACCOUNT_NUMBER.dkr.ecr.$REGION.amazonaws.com/$APP_ECR_REPO_NAME

# Build the Spring Boot jar
mvn clean package

# Terraform infrastructure apply
cd templates
terraform init
terraform apply --auto-approve

cd ..

docker build -t example/ecsfargateservice .
docker tag example/ecsfargateservice ${APP_ECR_REPO_URL}:latest

aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $ACCOUNT_NUMBER.dkr.ecr.$REGION.amazonaws.com
docker push ${APP_ECR_REPO_URL}:latest

#aws ecr list-images --repository-name ${APP_ECR_REPO_URL}


#######
### PUT SAMPLE S3 For the Input S3 bucket
#######

CURRYEAR=`date +"%Y"`
CURRMONTH=`date +"%m"`
CURRDATE=`date +"%d"`

echo $CURRYEAR-$CURRMONTH-$CURRDATE

echo "Creating sample files and will load to S3"
COUNTER=0
NUMBER_OF_FILES=10

EXTN=".txt"
S3_SUB_PATH=$CURRYEAR"/"$CURRMONTH"/"$CURRDATE
echo $S3_SUB_PATH

INPUT_S3_BUCKET_PATH="s3://$INPUT_S3_BUCKET-"$ACCOUNT_NUMBER

cd samples

while [ $COUNTER -lt $NUMBER_OF_FILES ]; do
    FILENAME="Product-"$COUNTER$EXTN

    echo "{\"productId\": $COUNTER , \"productName\": \"some Name\", \"productVersion\": \"v$COUNTER\"}" >> $FILENAME

    aws s3 --region $REGION cp $FILENAME $INPUT_S3_BUCKET_PATH/$S3_SUB_PATH/

    echo $FILENAME " samples uploaded into S3 sample bucket"
    let COUNTER=COUNTER+1
done

cd ..

$SHELL
--------------------------------------------------------------------------------
/templates/stepfunction.tf:
--------------------------------------------------------------------------------
## SPDX-FileCopyrightText: Copyright 2019 Amazon.com, Inc. or its affiliates
##
### SPDX-License-Identifier: MIT-0

##################################################
# AWS Step Functions - Start Fargate Task On success notify SNS
##################################################
resource "aws_sfn_state_machine" "stepfunction_ecs_state_machine" {
  name     = "${var.app_prefix}-ECSTaskStateMachine"
  role_arn = "${aws_iam_role.stepfunction_ecs_role.arn}"

  definition = < {

    String clientRegion = "";
    String bucketName = "";
    String kinesisStream = "";
    DataProcessor dataProcessor;

    public S3ForwardHandler() {
        this.clientRegion = System.getenv("REGION");
        this.bucketName = System.getenv("S3_BUCKET");
        this.kinesisStream = System.getenv("STREAM_NAME");
        if(StringUtils.isNullOrEmpty(this.clientRegion)){
            this.clientRegion = ConfigReader.getPropertyValue("REGION");
            this.bucketName = ConfigReader.getPropertyValue("S3_BUCKET");
            this.kinesisStream = ConfigReader.getPropertyValue("STREAM_NAME");
            System.out.println(String.format("Default Constructor ConfigReader - Client Region: %s, Bucket Name: %s, Stream Name: %s", this.clientRegion, this.bucketName, this.kinesisStream));
        }
        dataProcessor = new DataProcessor(this.clientRegion);
        System.out.println(String.format("Default Constructor - Client Region: %s, Bucket Name: %s, Stream Name: %s", this.clientRegion, this.bucketName, this.kinesisStream));
    }

    public S3ForwardHandler(String clientRegion, String bucketName, String kinesisStream, AmazonS3 s3Client, AmazonKinesis kinesisClient, Gson gson) {
        this.clientRegion = clientRegion;
        this.bucketName = bucketName;
        this.kinesisStream = kinesisStream;
        if(StringUtils.isNullOrEmpty(this.clientRegion)){
            this.clientRegion = ConfigReader.getPropertyValue("REGION");
            this.bucketName = ConfigReader.getPropertyValue("S3_BUCKET");
            this.kinesisStream = ConfigReader.getPropertyValue("STREAM_NAME");
            System.out.println(String.format("Default Constructor ConfigReader - Client Region: %s, Bucket Name: %s, Stream Name: %s", this.clientRegion, this.bucketName, this.kinesisStream));
        }
        dataProcessor = new DataProcessor(s3Client, kinesisClient, gson);
        System.out.println(String.format("Test Constructor - Client Region: %s, Bucket Name: %s, Stream Name: %s", this.clientRegion, this.bucketName, this.kinesisStream));
    }

    @Override
    public Object handleRequest(ScheduledEvent input, Context context) {

        String success_response = "Processing S3 Forward Handler ";
        System.out.println(success_response);

        try {
            dataProcessor.sendS3ContentsToKinesis(clientRegion, bucketName, kinesisStream);
        } catch (IOException e) {
            e.printStackTrace();
        }
        return null;

    }

    public static void main(String[] args) {
        new S3ForwardHandler().handleRequest(new ScheduledEvent(), null);
    }
}
--------------------------------------------------------------------------------
/src/test/java/com/example/S3ForwardHandlerTest.java:
--------------------------------------------------------------------------------
package com.example;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThat;
import static org.mockito.ArgumentMatchers.*;
import static org.mockito.Mockito.verify;
import static org.mockito.internal.verification.VerificationModeFactory.times;
import static org.powermock.api.mockito.PowerMockito.doReturn;
import static org.powermock.api.mockito.PowerMockito.mockStatic;
import static org.powermock.api.mockito.PowerMockito.when;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import org.junit.Before;
import org.junit.Test;


import com.example.Model.Product;
import com.example.Utils.DataProcessor;
import com.google.gson.Gson;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.MockitoAnnotations;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

/*
 * SPDX-FileCopyrightText: Copyright 2019 Amazon.com, Inc. or its affiliates
 *
 * SPDX-License-Identifier: MIT-0
 */
@RunWith(PowerMockRunner.class)
public class S3ForwardHandlerTest {
    AmazonS3 s3Client = Mockito.mock(AmazonS3.class);

    AmazonKinesis kinesisClient = Mockito.mock(AmazonKinesis.class);

    Gson gson;

    private Product inputProduct;
    private String TEST_OUTPUT = "{\"productId\":\"1\",\"productName\":\"iphone\",\"productVersion\":\"10R\"}";
    private List<Product> products = new ArrayList<>();

    S3ForwardHandler s3ForwardHandler;

    @Before
    public void setup() throws Exception{
        gson = new Gson();

        inputProduct = new Product();
        inputProduct.setProductId("1");

        Product product = new Product();
        product.setProductId("1");
        product.setProductName("iphone");
        product.setProductVersion("10R");
        products.add(product);

        ListObjectsV2Result results = new ListObjectsV2Result();
        List<S3ObjectSummary> objectSummaries = new ArrayList<S3ObjectSummary>();
        S3ObjectSummary s3ObjectSummary = new S3ObjectSummary();
        s3ObjectSummary.setKey("sample.txt");
        objectSummaries.add(s3ObjectSummary);
        S3Object object = new S3Object();
        String initialString = "text";
        InputStream inputStream = new ByteArrayInputStream(initialString.getBytes());
        //object.setObjectContent(inputStream);
        PowerMockito.when(s3Client.listObjectsV2(anyString())).thenReturn(results);
        PowerMockito.when(s3Client.getObject(anyString(), anyString())).thenReturn(object);
        s3ForwardHandler = new S3ForwardHandler("test-region", "test-bucket", "test-stream", s3Client, kinesisClient, gson);
    }

    @Test
    public void saveS3_validRequest_Success() throws IOException {

        s3ForwardHandler.handleRequest(null, null);
        verify(s3Client, times(0)).listObjectsV2(any(), any());
        verify(s3Client, times(0)).getObject(anyString(), anyString());

    }


}
--------------------------------------------------------------------------------
/src/main/java/com/example/Utils/DataProcessor.java:
--------------------------------------------------------------------------------
package com.example.Utils;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesis.model.PutRecordsRequest;
import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;
import com.amazonaws.services.kinesis.model.PutRecordsResult;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

import java.io.*;
21 | import java.nio.ByteBuffer; 22 | import java.util.List; 23 | import java.util.ArrayList; 24 | import java.nio.charset.StandardCharsets; 25 | import java.util.UUID; 26 | 27 | import com.example.Model.Product; 28 | import com.google.gson.Gson; 29 | import com.google.gson.GsonBuilder; 30 | 31 | import org.slf4j.Logger; 32 | import org.slf4j.LoggerFactory; 33 | 34 | 35 | /* 36 | * SPDX-FileCopyrightText: Copyright 2019 Amazon.com, Inc. or its affiliates 37 | * 38 | * SPDX-License-Identifier: MIT-0 39 | */ 40 | public class DataProcessor 41 | { 42 | AmazonS3 s3Client; 43 | AmazonKinesis kinesisClient; 44 | Gson gson; 45 | 46 | public DataProcessor(String clientRegion){ 47 | s3Client = AmazonS3ClientBuilder.standard() 48 | .withRegion(clientRegion) 49 | .build(); 50 | AmazonKinesisClientBuilder clientBuilder = AmazonKinesisClientBuilder.standard(); 51 | clientBuilder.setRegion(clientRegion); 52 | kinesisClient = clientBuilder.build(); 53 | gson = new GsonBuilder().create(); 54 | } 55 | public DataProcessor(AmazonS3 s3Client, AmazonKinesis kinesisClient, Gson gson){ 56 | this.s3Client = s3Client; 57 | this.kinesisClient = kinesisClient; 58 | this.gson = gson; 59 | } 60 | 61 | public Void sendS3ContentsToKinesis(String clientRegion, String bucketName, String streamName) throws IOException { 62 | 63 | String key = ""; 64 | if(clientRegion != null) { 65 | System.out.println("Fetching S3 file content."); 66 | ListObjectsV2Result result = s3Client.listObjectsV2(bucketName); 67 | List objects = result.getObjectSummaries(); 68 | 69 | for (S3ObjectSummary os : objects) { 70 | key = os.getKey(); 71 | if (key.endsWith(".txt")) { 72 | Product product = new Product(); 73 | System.out.println("S3 Object Key" + key); 74 | 75 | S3Object object = s3Client.getObject(bucketName, key); 76 | BufferedInputStream input = new BufferedInputStream(object.getObjectContent()); 77 | ByteArrayOutputStream output = new ByteArrayOutputStream(); 78 | 79 | byte[] b = new byte[1000 * 1024]; 80 | int 
len; 81 | while ((len = input.read(b)) != -1) { 82 | output.write(b, 0, len); 83 | } 84 | byte[] bytes = output.toByteArray(); 85 | String fileContent = new String(bytes, StandardCharsets.UTF_8); 86 | Product savedProduct = gson.fromJson(fileContent, Product.class); 87 | if(savedProduct != null && Integer.parseInt(savedProduct.getProductId()) > 0 ) { 88 | System.out.println("Processing Product ID - " + savedProduct.getProductId()); 89 | sendToKinesis(clientRegion, streamName, fileContent); 90 | } 91 | } 92 | } 93 | } 94 | return null; 95 | } 96 | 97 | private void sendToKinesis(String clientRegion, String streamName, String contents){ 98 | if(clientRegion != null) { 99 | System.out.println("Sending to Kinesis."); 100 | PutRecordsRequest putRecordsRequest = new PutRecordsRequest(); 101 | putRecordsRequest.setStreamName(streamName); 102 | List putRecordsRequestEntryList = new ArrayList<>(); 103 | PutRecordsRequestEntry putRecordsRequestEntry = new PutRecordsRequestEntry(); 104 | putRecordsRequestEntry.setData(ByteBuffer.wrap(contents.getBytes())); 105 | UUID uuid = UUID.randomUUID(); 106 | putRecordsRequestEntry.setPartitionKey(uuid.toString()); 107 | putRecordsRequestEntryList.add(putRecordsRequestEntry); 108 | 109 | putRecordsRequest.setRecords(putRecordsRequestEntryList); 110 | PutRecordsResult putRecordsResult = kinesisClient.putRecords(putRecordsRequest); 111 | System.out.println("Put Result" + putRecordsResult); 112 | } 113 | } 114 | } -------------------------------------------------------------------------------- /templates/network.tf: -------------------------------------------------------------------------------- 1 | ## SPDX-FileCopyrightText: Copyright 2019 Amazon.com, Inc. 
or its affiliates 2 | ## 3 | ### SPDX-License-Identifier: MIT-0 4 | 5 | ################################################## 6 | # AWS VPC Network - AWS VPC, IGW/NGW, EIP, 7 | # Public/Private Subnets, Route Tables and Table Association 8 | ################################################## 9 | resource "aws_vpc" "stepfunction_ecs_vpc" { 10 | cidr_block = "10.0.0.0/16" 11 | tags = { 12 | Name = "${var.app_prefix}-VPC" 13 | } 14 | } 15 | 16 | resource "aws_subnet" "stepfunction_ecs_public_subnet1" { 17 | vpc_id = "${aws_vpc.stepfunction_ecs_vpc.id}" 18 | cidr_block = "10.0.0.0/24" 19 | 20 | tags = { 21 | Name = "${var.app_prefix}-public-subnet1" 22 | } 23 | } 24 | 25 | resource "aws_subnet" "stepfunction_ecs_private_subnet1" { 26 | vpc_id = "${aws_vpc.stepfunction_ecs_vpc.id}" 27 | cidr_block = "10.0.1.0/24" 28 | 29 | tags = { 30 | Name = "${var.app_prefix}-private-subnet1" 31 | } 32 | } 33 | 34 | resource "aws_internet_gateway" "stepfunction_ecs_igw" { 35 | vpc_id = "${aws_vpc.stepfunction_ecs_vpc.id}" 36 | } 37 | 38 | resource "aws_route_table" "stepfunction_ecs_route_table" { 39 | vpc_id = "${aws_vpc.stepfunction_ecs_vpc.id}" 40 | } 41 | 42 | resource aws_route "stepfunction_ecs_public_route" { 43 | route_table_id = "${aws_route_table.stepfunction_ecs_route_table.id}" 44 | destination_cidr_block = "0.0.0.0/0" 45 | gateway_id = "${aws_internet_gateway.stepfunction_ecs_igw.id}" 46 | } 47 | 48 | resource "aws_route_table_association" "stepfunction_ecs_route_table_association1" { 49 | subnet_id = "${aws_subnet.stepfunction_ecs_public_subnet1.id}" 50 | route_table_id = "${aws_route_table.stepfunction_ecs_route_table.id}" 51 | } 52 | 53 | resource "aws_eip" "stepfunction_elastic_ip" { 54 | vpc = true 55 | 56 | tags = { 57 | Name = "${var.app_prefix}-elastic-ip" 58 | } 59 | } 60 | 61 | resource "aws_nat_gateway" "stepfunction_ecs_ngw" { 62 | allocation_id = "${aws_eip.stepfunction_elastic_ip.id}" 63 | subnet_id = "${aws_subnet.stepfunction_ecs_public_subnet1.id}" 64 | 65 
| tags = { 66 | "Name" = "${var.app_prefix}-NATGateway" 67 | } 68 | } 69 | 70 | resource "aws_route_table" "stepfunction_ngw_route_table" { 71 | vpc_id = "${aws_vpc.stepfunction_ecs_vpc.id}" 72 | } 73 | 74 | resource "aws_route" "stepfunction_ngw_route" { 75 | route_table_id = "${aws_route_table.stepfunction_ngw_route_table.id}" 76 | destination_cidr_block = "0.0.0.0/0" 77 | nat_gateway_id = "${aws_nat_gateway.stepfunction_ecs_ngw.id}" 78 | } 79 | 80 | resource "aws_route_table_association" "stepfunction_ngw_route_table_association1" { 81 | subnet_id = "${aws_subnet.stepfunction_ecs_private_subnet1.id}" 82 | route_table_id = "${aws_route_table.stepfunction_ngw_route_table.id}" 83 | } 84 | 85 | resource "aws_route_table" "stepfunction_vpce_route_table" { 86 | vpc_id = "${aws_vpc.stepfunction_ecs_vpc.id}" 87 | } 88 | 89 | resource "aws_route" "stepfunction_vpce_route" { 90 | route_table_id = "${aws_route_table.stepfunction_vpce_route_table.id}" 91 | destination_cidr_block = "0.0.0.0/0" 92 | nat_gateway_id = "${aws_nat_gateway.stepfunction_ecs_ngw.id}" 93 | } 94 | 95 | resource "aws_vpc_endpoint_route_table_association" "stepfunction_ngw_route_table_association2" { 96 | route_table_id = "${aws_route_table.stepfunction_vpce_route_table.id}" 97 | vpc_endpoint_id = "${aws_vpc_endpoint.stepfunction_ecs_s3_vpc_endpoint.id}" 98 | } 99 | 100 | resource "aws_security_group" "stepfunction_ecs_security_group" { 101 | name = "${var.app_prefix}-ECSSecurityGroup" 102 | description = "ECS Allowed Ports" 103 | vpc_id = "${aws_vpc.stepfunction_ecs_vpc.id}" 104 | } 105 | 106 | resource "aws_security_group_rule" "stepfunction_ecs_security_group_rule" { 107 | type = "egress" 108 | protocol = "-1" 109 | from_port = 0 110 | to_port = 0 111 | cidr_blocks = ["0.0.0.0/0"] 112 | security_group_id = "${aws_security_group.stepfunction_ecs_security_group.id}" 113 | } 114 | 115 | 116 | # resource "aws_vpc_endpoint" "stepfunction_ecs_service_endpoint_ecs" { 117 | # vpc_id = 
"${aws_vpc.stepfunction_ecs_vpc.id}" 118 | # service_name = "com.amazonaws.${data.aws_region.current.name}.ecs" 119 | # vpc_endpoint_type = "Interface" 120 | 121 | # security_group_ids = [ 122 | # "${aws_security_group.stepfunction_ecs_security_group.id}", 123 | # ] 124 | 125 | # subnet_ids = ["${aws_subnet.stepfunction_ecs_private_subnet1.id}"] 126 | # private_dns_enabled = false 127 | # } 128 | 129 | # resource "aws_vpc_endpoint" "stepfunction_ecs_service_endpoint_api" { 130 | # vpc_id = "${aws_vpc.stepfunction_ecs_vpc.id}" 131 | # service_name = "com.amazonaws.${data.aws_region.current.name}.ecr.api" 132 | # vpc_endpoint_type = "Interface" 133 | 134 | # security_group_ids = [ 135 | # "${aws_security_group.stepfunction_ecs_security_group.id}", 136 | # ] 137 | 138 | # subnet_ids = ["${aws_subnet.stepfunction_ecs_private_subnet1.id}"] 139 | # private_dns_enabled = false 140 | # } -------------------------------------------------------------------------------- /pom.xml: -------------------------------------------------------------------------------- 1 | 2 | 4 | 4.0.0 5 | com.example 6 | ecsFargateService 7 | jar 8 | 1.0-SNAPSHOT 9 | ecsFargateService 10 | 11 | 1.8 12 | 1.8 13 | 1.8 14 | 15 | 16 | 17 | com.amazonaws 18 | aws-java-sdk-core 19 | 1.11.938 20 | 21 | 22 | com.amazonaws 23 | aws-lambda-java-core 24 | 1.2.1 25 | 26 | 27 | com.amazonaws 28 | * 29 | 30 | 31 | 32 | 33 | com.amazonaws 34 | aws-lambda-java-events 35 | 3.7.0 36 | 37 | 38 | com.amazonaws 39 | * 40 | 41 | 42 | 43 | 44 | com.amazonaws 45 | aws-java-sdk-s3 46 | 1.12.261 47 | 48 | 49 | com.amazonaws 50 | * 51 | 52 | 53 | 54 | 55 | com.amazonaws 56 | aws-java-sdk-kinesis 57 | 1.11.938 58 | 59 | 60 | com.amazonaws 61 | * 62 | 63 | 64 | 65 | 66 | org.apache.commons 67 | commons-lang3 68 | 3.10 69 | 70 | 71 | com.google.code.gson 72 | gson 73 | 2.8.9 74 | 75 | 76 | org.slf4j 77 | slf4j-api 78 | 1.7.25 79 | 80 | 81 | org.slf4j 82 | slf4j-log4j12 83 | 1.7.30 84 | 85 | 86 | org.mockito 87 | mockito-core 88 | 
2.2.28 89 | test 90 | 91 | 92 | org.powermock 93 | powermock-api-mockito2 94 | 1.7.4 95 | test 96 | 97 | 98 | org.powermock 99 | powermock-module-junit4-rule 100 | 1.7.0 101 | test 102 | 103 | 104 | org.powermock 105 | powermock-module-junit4 106 | 1.7.0 107 | test 108 | 109 | 110 | org.powermock 111 | powermock-core 112 | 1.7.0 113 | test 114 | 115 | 116 | org.hamcrest 117 | hamcrest-all 118 | 1.3 119 | test 120 | 121 | 122 | 123 | 124 | 125 | org.apache.maven.plugins 126 | maven-shade-plugin 127 | 2.3 128 | 129 | false 130 | 131 | 132 | 133 | package 134 | 135 | shade 136 | 137 | 138 | 139 | 140 | 141 | org.apache.maven.plugins 142 | maven-jar-plugin 143 | 3.1.0 144 | 145 | 146 | 147 | true 148 | lib/ 149 | com.example.S3ForwardHandler 150 | 151 | 152 | 153 | 154 | 155 | 156 | -------------------------------------------------------------------------------- /mvnw.cmd: -------------------------------------------------------------------------------- 1 | @REM ---------------------------------------------------------------------------- 2 | @REM Licensed to the Apache Software Foundation (ASF) under one 3 | @REM or more contributor license agreements. See the NOTICE file 4 | @REM distributed with this work for additional information 5 | @REM regarding copyright ownership. The ASF licenses this file 6 | @REM to you under the Apache License, Version 2.0 (the 7 | @REM "License"); you may not use this file except in compliance 8 | @REM with the License. You may obtain a copy of the License at 9 | @REM 10 | @REM https://www.apache.org/licenses/LICENSE-2.0 11 | @REM 12 | @REM Unless required by applicable law or agreed to in writing, 13 | @REM software distributed under the License is distributed on an 14 | @REM "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | @REM KIND, either express or implied. See the License for the 16 | @REM specific language governing permissions and limitations 17 | @REM under the License. 
18 | @REM ---------------------------------------------------------------------------- 19 | 20 | @REM ---------------------------------------------------------------------------- 21 | @REM Maven Start Up Batch script 22 | @REM 23 | @REM Required ENV vars: 24 | @REM JAVA_HOME - location of a JDK home dir 25 | @REM 26 | @REM Optional ENV vars 27 | @REM M2_HOME - location of maven2's installed home dir 28 | @REM MAVEN_BATCH_ECHO - set to 'on' to enable the echoing of the batch commands 29 | @REM MAVEN_BATCH_PAUSE - set to 'on' to wait for a keystroke before ending 30 | @REM MAVEN_OPTS - parameters passed to the Java VM when running Maven 31 | @REM e.g. to debug Maven itself, use 32 | @REM set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000 33 | @REM MAVEN_SKIP_RC - flag to disable loading of mavenrc files 34 | @REM ---------------------------------------------------------------------------- 35 | 36 | @REM Begin all REM lines with '@' in case MAVEN_BATCH_ECHO is 'on' 37 | @echo off 38 | @REM set title of command window 39 | title %0 40 | @REM enable echoing by setting MAVEN_BATCH_ECHO to 'on' 41 | @if "%MAVEN_BATCH_ECHO%" == "on" echo %MAVEN_BATCH_ECHO% 42 | 43 | @REM set %HOME% to equivalent of $HOME 44 | if "%HOME%" == "" (set "HOME=%HOMEDRIVE%%HOMEPATH%") 45 | 46 | @REM Execute a user defined script before this one 47 | if not "%MAVEN_SKIP_RC%" == "" goto skipRcPre 48 | @REM check for pre script, once with legacy .bat ending and once with .cmd ending 49 | if exist "%HOME%\mavenrc_pre.bat" call "%HOME%\mavenrc_pre.bat" 50 | if exist "%HOME%\mavenrc_pre.cmd" call "%HOME%\mavenrc_pre.cmd" 51 | :skipRcPre 52 | 53 | @setlocal 54 | 55 | set ERROR_CODE=0 56 | 57 | @REM To isolate internal variables from possible post scripts, we use another setlocal 58 | @setlocal 59 | 60 | @REM ==== START VALIDATION ==== 61 | if not "%JAVA_HOME%" == "" goto OkJHome 62 | 63 | echo. 64 | echo Error: JAVA_HOME not found in your environment. 
>&2 65 | echo Please set the JAVA_HOME variable in your environment to match the >&2 66 | echo location of your Java installation. >&2 67 | echo. 68 | goto error 69 | 70 | :OkJHome 71 | if exist "%JAVA_HOME%\bin\java.exe" goto init 72 | 73 | echo. 74 | echo Error: JAVA_HOME is set to an invalid directory. >&2 75 | echo JAVA_HOME = "%JAVA_HOME%" >&2 76 | echo Please set the JAVA_HOME variable in your environment to match the >&2 77 | echo location of your Java installation. >&2 78 | echo. 79 | goto error 80 | 81 | @REM ==== END VALIDATION ==== 82 | 83 | :init 84 | 85 | @REM Find the project base dir, i.e. the directory that contains the folder ".mvn". 86 | @REM Fallback to current working directory if not found. 87 | 88 | set MAVEN_PROJECTBASEDIR=%MAVEN_BASEDIR% 89 | IF NOT "%MAVEN_PROJECTBASEDIR%"=="" goto endDetectBaseDir 90 | 91 | set EXEC_DIR=%CD% 92 | set WDIR=%EXEC_DIR% 93 | :findBaseDir 94 | IF EXIST "%WDIR%"\.mvn goto baseDirFound 95 | cd .. 96 | IF "%WDIR%"=="%CD%" goto baseDirNotFound 97 | set WDIR=%CD% 98 | goto findBaseDir 99 | 100 | :baseDirFound 101 | set MAVEN_PROJECTBASEDIR=%WDIR% 102 | cd "%EXEC_DIR%" 103 | goto endDetectBaseDir 104 | 105 | :baseDirNotFound 106 | set MAVEN_PROJECTBASEDIR=%EXEC_DIR% 107 | cd "%EXEC_DIR%" 108 | 109 | :endDetectBaseDir 110 | 111 | IF NOT EXIST "%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config" goto endReadAdditionalConfig 112 | 113 | @setlocal EnableExtensions EnableDelayedExpansion 114 | for /F "usebackq delims=" %%a in ("%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config") do set JVM_CONFIG_MAVEN_PROPS=!JVM_CONFIG_MAVEN_PROPS! 
%%a 115 | @endlocal & set JVM_CONFIG_MAVEN_PROPS=%JVM_CONFIG_MAVEN_PROPS% 116 | 117 | :endReadAdditionalConfig 118 | 119 | SET MAVEN_JAVA_EXE="%JAVA_HOME%\bin\java.exe" 120 | set WRAPPER_JAR="%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.jar" 121 | set WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain 122 | 123 | set DOWNLOAD_URL="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar" 124 | 125 | FOR /F "tokens=1,2 delims==" %%A IN ("%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.properties") DO ( 126 | IF "%%A"=="wrapperUrl" SET DOWNLOAD_URL=%%B 127 | ) 128 | 129 | @REM Extension to allow automatically downloading the maven-wrapper.jar from Maven-central 130 | @REM This allows using the maven wrapper in projects that prohibit checking in binary data. 131 | if exist %WRAPPER_JAR% ( 132 | if "%MVNW_VERBOSE%" == "true" ( 133 | echo Found %WRAPPER_JAR% 134 | ) 135 | ) else ( 136 | if not "%MVNW_REPOURL%" == "" ( 137 | SET DOWNLOAD_URL="%MVNW_REPOURL%/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar" 138 | ) 139 | if "%MVNW_VERBOSE%" == "true" ( 140 | echo Couldn't find %WRAPPER_JAR%, downloading it ... 141 | echo Downloading from: %DOWNLOAD_URL% 142 | ) 143 | 144 | powershell -Command "&{"^ 145 | "$webclient = new-object System.Net.WebClient;"^ 146 | "if (-not ([string]::IsNullOrEmpty('%MVNW_USERNAME%') -and [string]::IsNullOrEmpty('%MVNW_PASSWORD%'))) {"^ 147 | "$webclient.Credentials = new-object System.Net.NetworkCredential('%MVNW_USERNAME%', '%MVNW_PASSWORD%');"^ 148 | "}"^ 149 | "[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; $webclient.DownloadFile('%DOWNLOAD_URL%', '%WRAPPER_JAR%')"^ 150 | "}" 151 | if "%MVNW_VERBOSE%" == "true" ( 152 | echo Finished downloading %WRAPPER_JAR% 153 | ) 154 | ) 155 | @REM End of extension 156 | 157 | @REM Provide a "standardized" way to retrieve the CLI args that will 158 | @REM work with both Windows and non-Windows executions. 
159 | set MAVEN_CMD_LINE_ARGS=%* 160 | 161 | %MAVEN_JAVA_EXE% %JVM_CONFIG_MAVEN_PROPS% %MAVEN_OPTS% %MAVEN_DEBUG_OPTS% -classpath %WRAPPER_JAR% "-Dmaven.multiModuleProjectDirectory=%MAVEN_PROJECTBASEDIR%" %WRAPPER_LAUNCHER% %MAVEN_CONFIG% %* 162 | if ERRORLEVEL 1 goto error 163 | goto end 164 | 165 | :error 166 | set ERROR_CODE=1 167 | 168 | :end 169 | @endlocal & set ERROR_CODE=%ERROR_CODE% 170 | 171 | if not "%MAVEN_SKIP_RC%" == "" goto skipRcPost 172 | @REM check for post script, once with legacy .bat ending and once with .cmd ending 173 | if exist "%HOME%\mavenrc_post.bat" call "%HOME%\mavenrc_post.bat" 174 | if exist "%HOME%\mavenrc_post.cmd" call "%HOME%\mavenrc_post.cmd" 175 | :skipRcPost 176 | 177 | @REM pause the script if MAVEN_BATCH_PAUSE is set to 'on' 178 | if "%MAVEN_BATCH_PAUSE%" == "on" pause 179 | 180 | if "%MAVEN_TERMINATE_CMD%" == "on" exit %ERROR_CODE% 181 | 182 | exit /B %ERROR_CODE% 183 | -------------------------------------------------------------------------------- /templates/roles.tf: -------------------------------------------------------------------------------- 1 | ## SPDX-FileCopyrightText: Copyright 2019 Amazon.com, Inc. 
or its affiliates 2 | ## 3 | ### SPDX-License-Identifier: MIT-0 4 | 5 | locals { 6 | iam_role_name = "${var.app_prefix}-ECSRunTaskSyncExecutionRole" 7 | iam_policy_name = "FargateTaskNotificationAccessPolicy" 8 | iam_task_role_policy_name = "${var.app_prefix}-ECS-Task-Role-Policy" 9 | } 10 | 11 | resource "aws_iam_role" "stepfunction_ecs_role" { 12 | name = "${local.iam_role_name}" 13 | assume_role_policy = "${data.aws_iam_policy_document.stepfunction_ecs_policy_document.json}" 14 | } 15 | data "aws_iam_policy_document" "stepfunction_ecs_policy_document" { 16 | statement { 17 | actions = ["sts:AssumeRole"] 18 | principals { 19 | type = "Service" 20 | identifiers = ["states.amazonaws.com"] 21 | } 22 | } 23 | } 24 | resource "aws_iam_role_policy" "stepfunction_ecs_policy" { 25 | name = "${local.iam_policy_name}" 26 | role = "${aws_iam_role.stepfunction_ecs_role.id}" 27 | # Policy type: Inline policy 28 | # StepFunctionsGetEventsForECSTaskRule is AWS Managed Rule 29 | policy = < \(.*\)$'` 83 | if expr "$link" : '/.*' > /dev/null; then 84 | PRG="$link" 85 | else 86 | PRG="`dirname "$PRG"`/$link" 87 | fi 88 | done 89 | 90 | saveddir=`pwd` 91 | 92 | M2_HOME=`dirname "$PRG"`/.. 
93 | 94 | # make it fully qualified 95 | M2_HOME=`cd "$M2_HOME" && pwd` 96 | 97 | cd "$saveddir" 98 | # echo Using m2 at $M2_HOME 99 | fi 100 | 101 | # For Cygwin, ensure paths are in UNIX format before anything is touched 102 | if $cygwin ; then 103 | [ -n "$M2_HOME" ] && 104 | M2_HOME=`cygpath --unix "$M2_HOME"` 105 | [ -n "$JAVA_HOME" ] && 106 | JAVA_HOME=`cygpath --unix "$JAVA_HOME"` 107 | [ -n "$CLASSPATH" ] && 108 | CLASSPATH=`cygpath --path --unix "$CLASSPATH"` 109 | fi 110 | 111 | # For Mingw, ensure paths are in UNIX format before anything is touched 112 | if $mingw ; then 113 | [ -n "$M2_HOME" ] && 114 | M2_HOME="`(cd "$M2_HOME"; pwd)`" 115 | [ -n "$JAVA_HOME" ] && 116 | JAVA_HOME="`(cd "$JAVA_HOME"; pwd)`" 117 | fi 118 | 119 | if [ -z "$JAVA_HOME" ]; then 120 | javaExecutable="`which javac`" 121 | if [ -n "$javaExecutable" ] && ! [ "`expr \"$javaExecutable\" : '\([^ ]*\)'`" = "no" ]; then 122 | # readlink(1) is not available as standard on Solaris 10. 123 | readLink=`which readlink` 124 | if [ ! `expr "$readLink" : '\([^ ]*\)'` = "no" ]; then 125 | if $darwin ; then 126 | javaHome="`dirname \"$javaExecutable\"`" 127 | javaExecutable="`cd \"$javaHome\" && pwd -P`/javac" 128 | else 129 | javaExecutable="`readlink -f \"$javaExecutable\"`" 130 | fi 131 | javaHome="`dirname \"$javaExecutable\"`" 132 | javaHome=`expr "$javaHome" : '\(.*\)/bin'` 133 | JAVA_HOME="$javaHome" 134 | export JAVA_HOME 135 | fi 136 | fi 137 | fi 138 | 139 | if [ -z "$JAVACMD" ] ; then 140 | if [ -n "$JAVA_HOME" ] ; then 141 | if [ -x "$JAVA_HOME/jre/sh/java" ] ; then 142 | # IBM's JDK on AIX uses strange locations for the executables 143 | JAVACMD="$JAVA_HOME/jre/sh/java" 144 | else 145 | JAVACMD="$JAVA_HOME/bin/java" 146 | fi 147 | else 148 | JAVACMD="`which java`" 149 | fi 150 | fi 151 | 152 | if [ ! -x "$JAVACMD" ] ; then 153 | echo "Error: JAVA_HOME is not defined correctly." 
>&2 154 | echo " We cannot execute $JAVACMD" >&2 155 | exit 1 156 | fi 157 | 158 | if [ -z "$JAVA_HOME" ] ; then 159 | echo "Warning: JAVA_HOME environment variable is not set." 160 | fi 161 | 162 | CLASSWORLDS_LAUNCHER=org.codehaus.plexus.classworlds.launcher.Launcher 163 | 164 | # traverses directory structure from process work directory to filesystem root 165 | # first directory with .mvn subdirectory is considered project base directory 166 | find_maven_basedir() { 167 | 168 | if [ -z "$1" ] 169 | then 170 | echo "Path not specified to find_maven_basedir" 171 | return 1 172 | fi 173 | 174 | basedir="$1" 175 | wdir="$1" 176 | while [ "$wdir" != '/' ] ; do 177 | if [ -d "$wdir"/.mvn ] ; then 178 | basedir=$wdir 179 | break 180 | fi 181 | # workaround for JBEAP-8937 (on Solaris 10/Sparc) 182 | if [ -d "${wdir}" ]; then 183 | wdir=`cd "$wdir/.."; pwd` 184 | fi 185 | # end of workaround 186 | done 187 | echo "${basedir}" 188 | } 189 | 190 | # concatenates all lines of a file 191 | concat_lines() { 192 | if [ -f "$1" ]; then 193 | echo "$(tr -s '\n' ' ' < "$1")" 194 | fi 195 | } 196 | 197 | BASE_DIR=`find_maven_basedir "$(pwd)"` 198 | if [ -z "$BASE_DIR" ]; then 199 | exit 1; 200 | fi 201 | 202 | ########################################################################################## 203 | # Extension to allow automatically downloading the maven-wrapper.jar from Maven-central 204 | # This allows using the maven wrapper in projects that prohibit checking in binary data. 205 | ########################################################################################## 206 | if [ -r "$BASE_DIR/.mvn/wrapper/maven-wrapper.jar" ]; then 207 | if [ "$MVNW_VERBOSE" = true ]; then 208 | echo "Found .mvn/wrapper/maven-wrapper.jar" 209 | fi 210 | else 211 | if [ "$MVNW_VERBOSE" = true ]; then 212 | echo "Couldn't find .mvn/wrapper/maven-wrapper.jar, downloading it ..." 
213 | fi 214 | if [ -n "$MVNW_REPOURL" ]; then 215 | jarUrl="$MVNW_REPOURL/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar" 216 | else 217 | jarUrl="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar" 218 | fi 219 | while IFS="=" read key value; do 220 | case "$key" in (wrapperUrl) jarUrl="$value"; break ;; 221 | esac 222 | done < "$BASE_DIR/.mvn/wrapper/maven-wrapper.properties" 223 | if [ "$MVNW_VERBOSE" = true ]; then 224 | echo "Downloading from: $jarUrl" 225 | fi 226 | wrapperJarPath="$BASE_DIR/.mvn/wrapper/maven-wrapper.jar" 227 | if $cygwin; then 228 | wrapperJarPath=`cygpath --path --windows "$wrapperJarPath"` 229 | fi 230 | 231 | if command -v wget > /dev/null; then 232 | if [ "$MVNW_VERBOSE" = true ]; then 233 | echo "Found wget ... using wget" 234 | fi 235 | if [ -z "$MVNW_USERNAME" ] || [ -z "$MVNW_PASSWORD" ]; then 236 | wget "$jarUrl" -O "$wrapperJarPath" 237 | else 238 | wget --http-user=$MVNW_USERNAME --http-password=$MVNW_PASSWORD "$jarUrl" -O "$wrapperJarPath" 239 | fi 240 | elif command -v curl > /dev/null; then 241 | if [ "$MVNW_VERBOSE" = true ]; then 242 | echo "Found curl ... using curl" 243 | fi 244 | if [ -z "$MVNW_USERNAME" ] || [ -z "$MVNW_PASSWORD" ]; then 245 | curl -o "$wrapperJarPath" "$jarUrl" -f 246 | else 247 | curl --user $MVNW_USERNAME:$MVNW_PASSWORD -o "$wrapperJarPath" "$jarUrl" -f 248 | fi 249 | 250 | else 251 | if [ "$MVNW_VERBOSE" = true ]; then 252 | echo "Falling back to using Java to download" 253 | fi 254 | javaClass="$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.java" 255 | # For Cygwin, switch paths to Windows format before running javac 256 | if $cygwin; then 257 | javaClass=`cygpath --path --windows "$javaClass"` 258 | fi 259 | if [ -e "$javaClass" ]; then 260 | if [ ! -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then 261 | if [ "$MVNW_VERBOSE" = true ]; then 262 | echo " - Compiling MavenWrapperDownloader.java ..." 
263 | fi 264 | # Compiling the Java class 265 | ("$JAVA_HOME/bin/javac" "$javaClass") 266 | fi 267 | if [ -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then 268 | # Running the downloader 269 | if [ "$MVNW_VERBOSE" = true ]; then 270 | echo " - Running MavenWrapperDownloader.java ..." 271 | fi 272 | ("$JAVA_HOME/bin/java" -cp .mvn/wrapper MavenWrapperDownloader "$MAVEN_PROJECTBASEDIR") 273 | fi 274 | fi 275 | fi 276 | fi 277 | ########################################################################################## 278 | # End of extension 279 | ########################################################################################## 280 | 281 | export MAVEN_PROJECTBASEDIR=${MAVEN_BASEDIR:-"$BASE_DIR"} 282 | if [ "$MVNW_VERBOSE" = true ]; then 283 | echo $MAVEN_PROJECTBASEDIR 284 | fi 285 | MAVEN_OPTS="$(concat_lines "$MAVEN_PROJECTBASEDIR/.mvn/jvm.config") $MAVEN_OPTS" 286 | 287 | # For Cygwin, switch paths to Windows format before running java 288 | if $cygwin; then 289 | [ -n "$M2_HOME" ] && 290 | M2_HOME=`cygpath --path --windows "$M2_HOME"` 291 | [ -n "$JAVA_HOME" ] && 292 | JAVA_HOME=`cygpath --path --windows "$JAVA_HOME"` 293 | [ -n "$CLASSPATH" ] && 294 | CLASSPATH=`cygpath --path --windows "$CLASSPATH"` 295 | [ -n "$MAVEN_PROJECTBASEDIR" ] && 296 | MAVEN_PROJECTBASEDIR=`cygpath --path --windows "$MAVEN_PROJECTBASEDIR"` 297 | fi 298 | 299 | # Provide a "standardized" way to retrieve the CLI args that will 300 | # work with both Windows and non-Windows executions. 
301 | MAVEN_CMD_LINE_ARGS="$MAVEN_CONFIG $@" 302 | export MAVEN_CMD_LINE_ARGS 303 | 304 | WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain 305 | 306 | exec "$JAVACMD" \ 307 | $MAVEN_OPTS \ 308 | -classpath "$MAVEN_PROJECTBASEDIR/.mvn/wrapper/maven-wrapper.jar" \ 309 | "-Dmaven.home=${M2_HOME}" "-Dmaven.multiModuleProjectDirectory=${MAVEN_PROJECTBASEDIR}" \ 310 | ${WRAPPER_LAUNCHER} $MAVEN_CONFIG "$@" 311 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # **Provision AWS infrastructure using Terraform (By HashiCorp): an example of running Amazon ECS tasks on AWS Fargate** 2 | 3 | 4 | [AWS Fargate](https://aws.amazon.com/fargate) supports many common container use cases, like running micro-services architecture applications, batch processing, machine learning applications, and migrating on premise applications to the cloud without having to manage servers or clusters of Amazon EC2 instances.AWS customers have a choice of fully managed container services, including [Amazon Elastic Container Service](https://aws.amazon.com/ecs/) (Amazon ECS) and [Amazon Elastic Kubernetes Service](https://aws.amazon.com/ecs/) (Amazon EKS). Both services support a broad array of compute options, have deep integration with other AWS services, and provide the global scale and reliability you’ve come to expect from AWS. For more details to choose between ECS and EKS please refer this [blog](https://aws.amazon.com/blogs/containers/amazon-ecs-vs-amazon-eks-making-sense-of-aws-container-services/). 5 | 6 | With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. 
In this blog, we will use [Amazon Elastic Container Service (ECS)](https://aws.amazon.com/ecs), a highly scalable, high-performance container management service that supports Docker containers. Amazon ECS uses containers provisioned by Fargate to automatically scale, load balance, and manage scheduling of your containers for availability, providing an easier way to build and operate containerized applications. There are several ‘infrastructure as code’ frameworks available today to help customers define their infrastructure, such as [AWS CloudFormation](https://aws.amazon.com/cloudformation/), the [AWS CDK](https://aws.amazon.com/cdk/), or Terraform by [HashiCorp](https://www.hashicorp.com/). In this blog, we will walk you through a use case of running an Amazon ECS task on AWS Fargate that can be initiated using [AWS Step Functions](https://aws.amazon.com/step-functions). We will use Terraform to model the AWS infrastructure. 7 | 8 | [Terraform](https://www.terraform.io/intro/) by [HashiCorp](https://hashicorp.com/), an AWS Partner Network (APN) Advanced Technology Partner and member of the [AWS DevOps Competency](https://aws.amazon.com/solutions/partners/dev-ops/), is an *infrastructure as code* tool similar to AWS CloudFormation that allows you to create, update, and version your Amazon Web Services (AWS) infrastructure. Terraform provides a friendly syntax (similar to AWS CloudFormation) along with other features like planning (visibility into changes before they actually happen), graphing, and the ability to break configurations into smaller templates for better organization, maintenance, and reusability. We will leverage the capabilities and features of Terraform to build this ingestion process into AWS. Let’s get started! 9 | 10 | We will provide the Terraform infrastructure definition and the source code for a Java-based container application that will read and process the files in the input [AWS S3](https://aws.amazon.com/s3/) bucket. 
The files will be processed and pushed to an [Amazon Kinesis stream](https://aws.amazon.com/kinesis/). The stream is subscribed to by an [Amazon Data Firehose](https://aws.amazon.com/kinesis/data-firehose) delivery stream, which targets an output AWS S3 bucket. The Java application is containerized using a Dockerfile, and the ECS tasks are orchestrated using the [ECS task definition](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-taskdefinition.html), which is also built using Terraform. 11 | 12 | At a high level, we will go through the following: 13 | 14 | 1. Create a simple Java application that reads the contents of an Amazon S3 bucket folder and pushes them to an Amazon Kinesis stream. The application code is built using Maven. 15 | 2. Use HashiCorp Terraform to define the AWS infrastructure resources required for the application. 16 | 3. Use Terraform commands to plan, apply, and destroy (clean up) the infrastructure. 17 | 4. The infrastructure builds a new [Amazon VPC](https://aws.amazon.com/vpc/) where the required AWS resources are launched in a logically isolated virtual network that you define. The infrastructure spins up [Amazon SNS](https://aws.amazon.com/sns/), a [NAT Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html#nat-gateway-basics), an [S3 Gateway Endpoint](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-gateway.html), an [Elastic Network Interface](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html), [Amazon ECR](https://aws.amazon.com/ecr/), etc. 18 | 5. A provided script inserts the sample content files the application needs into the input S3 bucket. 19 | 6. In the AWS Console, navigate to AWS Step Functions and initiate the process. Validate the results in the logs and the output in the S3 bucket. 20 | 7. 
Run the cleanup script, which removes the Amazon ECR images and Amazon S3 input files and destroys the AWS resources created by Terraform. 21 | 22 | 23 | Creating the above infrastructure in your account will result in charges beyond the free tier. Please see the Pricing section below for each individual service’s specific details. Make sure to clean up the built infrastructure to avoid any recurring costs. 24 | 25 | ![Alt text](aws-ecs-stepfunctions.png?raw=true "ECS Fargate Step Functions") 26 | 27 | ## Overview of some of the AWS services used in this solution 28 | 29 | * [Amazon Elastic Container Service (ECS)](https://aws.amazon.com/ecs) is a highly scalable, high-performance container management service that supports Docker containers. 30 | * [AWS Fargate](https://aws.amazon.com/fargate) is a serverless compute engine for containers that works with both [Amazon Elastic Container Service (ECS)](https://aws.amazon.com/ecs/) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. 31 | * [Amazon Kinesis](https://aws.amazon.com/kinesis/) makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. 32 | * [Amazon Virtual Private Cloud (Amazon VPC)](https://aws.amazon.com/vpc) is a service that lets you launch AWS resources in a logically isolated virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. 33 | 34 | ## Prerequisites 35 | 36 | We will use Docker containers to deploy the Java application. The following are required to set up your development environment: 37 | 38 | 1. 
An AWS account. 39 | 2. Make sure to have Java installed and running on your machine. For instructions, see the [Java Development Kit](https://www.oracle.com/java/technologies/javase-downloads.html). 40 | 3. [Apache Maven](https://maven.apache.org/guides/getting-started/maven-in-five-minutes.html) – the Java application code is built using Maven and deployed into AWS using Terraform. 41 | 4. Set up Terraform. For steps, see [Terraform downloads](https://www.terraform.io/downloads.html). 42 | 5. [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration.html) - make sure to [configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) your AWS CLI. 43 | 6. [Docker](https://www.docker.com/) 44 | 1. [Install Docker](https://www.docker.com/products/docker-desktop) based on your OS. 45 | 2. Make sure the Docker daemon/service is running. We will build, tag, and push the application image to Amazon ECR using the provided Dockerfile. 46 | 47 | ## Walk-through of the Solution 48 | 49 | At a high level, here are the steps you will follow to get this solution up and running: 50 | 51 | 1. [Download](https://github.com/aws-samples/aws-stepfunctions-ecs-fargate-process) the code and run a Maven package build for the Java application code. 52 | 2. Run Terraform commands to spin up the infrastructure. 53 | 3. Once the code is downloaded, please take a moment to see how Terraform provides an implementation for spinning up the infrastructure similar to that of AWS CloudFormation. You may use [Visual Studio Code](https://aws.amazon.com/visualstudiocode/) or your favorite IDE to open the folder (aws-stepfunctions-ecs-fargate-process). 54 | 4. The repository contains these folders and files: 55 | 1. “templates” - Terraform templates to build the infrastructure 56 | 2. “src” - Java application source code. 57 | 3. Dockerfile 58 | 4. “exec.sh” - a shell script that builds the infrastructure and the Java code, and pushes the image to Amazon ECR. 
Make sure to have Docker running on your machine at this point. Also set the account number of the account where the application needs to be deployed and tested. 59 | 5. This step is needed only if you are running the steps manually and not using the provided “exec.sh” script. Put sample files in the input S3 bucket location - it will be named something like “my-stepfunction-ecs-app-dev-source-bucket-” 60 | 6. In the AWS Console, navigate to AWS Step Functions. Click “my-stepfunction-ecs-app-ECSTaskStateMachine”, then click the “Start Execution” button. 61 | 7. Once the Step Function completes, the output of the processed files can be found in “my-stepfunction-ecs-app-dev-target-bucket-” 62 | 63 | 64 | **Detailed steps are provided below** 65 | 66 | ### 1. Deploying the Terraform template to spin up the infrastructure 67 | 68 | Download the code from the [GitHub](https://github.com/aws-samples/aws-stepfunctions-ecs-fargate-process) location. 69 | 70 | `$ git clone https://github.com/aws-samples/aws-stepfunctions-ecs-fargate-process` 71 | 72 | Please take a moment to review the code structure as mentioned above in the walk-through of the solution. 73 | An “exec.sh” script/bash file is provided as part of the code base folder. Make sure to replace **, ** with your AWS account number (where you are trying to deploy/run this application) and **** with the AWS region. This creates the infrastructure and pushes the Java application image into Amazon ECR. The last section of the script also creates sample/dummy input files for the source S3 bucket. 74 | 75 | 76 | * `$ cd aws-stepfunctions-ecs-fargate-process` 77 | * `$ chmod +x exec.sh` 78 | * `$ ./exec.sh` 79 | 80 | 81 | 82 | ### 2. 
Manual Deployment (Only do if you did not do the above step) 83 | 84 | Do this only if you are not executing the above scripts and wanted to perform these steps manually 85 | 86 | **Step 1**: Build java application 87 | 88 | * `$ cd aws-stepfunctions-ecs-fargate-process` 89 | * `$ mvn clean package` 90 | 91 | 92 | **Step 2**: Deploy the infrastructure 93 | 94 | * `$ cd templates` 95 | * `$ terraform plan` 96 | * `$ terraform apply --auto-approve` 97 | 98 | 99 | **Step 3:** Steps to build and push Java application into ECR (my-stepfunction-ecs-app-repo ECR repository created as part of above infrastructure) 100 | 101 | 102 | * `$ docker build -t example/ecsfargateservice .` 103 | * `$ docker tag example/ecsfargateservice $ACCOUNT_NUMBER.dkr.ecr.$REGION.amazonaws.com/my-stepfunction-ecs-app-repo:latest` 104 | * `$ aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $ACCOUNT_NUMBER.dkr.ecr.$REGION.amazonaws.com` 105 | * `$ docker push $ACCOUNT_NUMBER.dkr.ecr.$REGION.amazonaws.com/my-stepfunction-ecs-app-repo:latest` 106 | 107 | 108 | Make sure to update your region and account number above 109 | 110 | **Step 4:** Sample S3 files generation to the input bucket 111 | 112 | 113 | * `$ echo "{\"productId\":"1" , \"productName\": \"some Name\", \"productVersion\": \"v1"}" >> "product_1.txt"` 114 | * `$ aws s3 --region $REGION cp "product_1.txt" my-stepfunction-ecs-app-dev-source-bucket-` 115 | 116 | 117 | Note: exec.sh script has logic to create multiple files to validate. Above provided will create 1 sample file 118 | 119 | ### 3. Stack Verification 120 | 121 | Once the preceding Terraform commands complete successfully, take a moment to identify the major components that are deployed in AWS. 
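One way to spot-check the stack before walking through the console is from the CLI. The sketch below derives expected resource names from the stack's naming prefix and, only when you opt in, queries two of them; the `RUN_AWS_CHECKS` guard and the exact suffix list are assumptions for illustration, not part of the repository's scripts.

```shell
# Derive expected resource names from the prefix used by the templates.
PREFIX="my-stepfunction-ecs-app"
for suffix in ECSTaskStateMachine ECSCluster stream repo; do
  echo "expect: ${PREFIX}-${suffix}"
done

# Guarded spot-checks: set RUN_AWS_CHECKS=1 only with AWS credentials configured.
if [ -n "${RUN_AWS_CHECKS:-}" ]; then
  aws stepfunctions list-state-machines \
    --query "stateMachines[?name=='${PREFIX}-ECSTaskStateMachine'].name" --output text
  aws ecr describe-repositories --repository-names "${PREFIX}-repo" \
    --query 'repositories[0].repositoryName' --output text
fi
```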
* Amazon VPC
    * VPC – my-stepfunction-ecs-app-VPC
    * Subnets
        * Public subnet – my-stepfunction-ecs-app-public-subnet1
        * Private subnet – my-stepfunction-ecs-app-private-subnet1
    * Internet gateway – my-stepfunction-ecs-app-VPC
    * NAT gateway – my-stepfunction-ecs-app-NATGateway
    * Elastic IP – my-stepfunction-ecs-app-elastic-ip
    * VPC endpoint
* AWS Step Functions
    * my-stepfunction-ecs-app-ECSTaskStateMachine
* Amazon ECS
    * Cluster – my-stepfunction-ecs-app-ECSCluster
    * Task definition – my-stepfunction-ecs-app-ECSTaskDefinition
* Amazon Kinesis
    * Data stream – my-stepfunction-ecs-app-stream
    * Delivery stream – my-stepfunction-ecs-app-firehose-delivery-stream. Notice the source (the Kinesis data stream) and the target output S3 bucket.
* Amazon S3
    * my-stepfunction-ecs-app-dev-source-bucket-
    * my-stepfunction-ecs-app-dev-target-bucket-
* Amazon ECR
    * my-stepfunction-ecs-app-repo – make sure to check that the repository has the code/image.
* Amazon SNS
    * my-stepfunction-ecs-app-SNSTopic – note this is not subscribed to any endpoint. You may subscribe with your email ID, text message, etc. using the [AWS Console, API, or CLI](https://docs.aws.amazon.com/sns/latest/dg/sns-create-subscribe-endpoint-to-topic.html).
* CloudWatch – log groups
    * my-stepfunction-ecs-app-cloudwatch-log-group
    * /aws/ecs/fargate-app/
    * /aws/ecs/fargate-app/

Let’s test our stack from AWS Console > Step Functions:

* Click on “my-stepfunction-ecs-app-ECSTaskStateMachine”.
* Click on “Start Execution”. The state machine will trigger the ECS Fargate task and complete as shown below.
* To see the process:
    * ECS:
        * Navigate to AWS Console > ECS > select your cluster.
        * Click the “Tasks” sub-tab, select the task, and see its status.
        * While the task runs, you may notice the status moving through the PROVISIONING, PENDING, RUNNING, and STOPPED states.
    * S3:
        * Navigate to the output S3 bucket – my-stepfunction-ecs-app-dev-target-bucket- – to see the output.
        * Note there could be a delay before the files are processed through Amazon Kinesis and Kinesis Data Firehose into S3.

![Alt text](stepfunction.png?raw=true "AWS Step Functions")

### Troubleshooting

* Java errors: make sure to have the JDK and Maven installed for compiling the application code.
* Check that the local Docker daemon is running.
* VPC: check the VPC [quotas/limits](https://docs.aws.amazon.com/vpc/latest/userguide/amazon-vpc-limits.html). The default limit is 5 VPCs per region.
* ECR deployment: AWS CLI v2 is used here. See aws [CLI v1](https://docs.aws.amazon.com/cli/latest/reference/ecr/get-login.html) vs. [CLI v2](https://docs.aws.amazon.com/cli/latest/reference/ecr/get-login-password.html) for login issues.
* Issues running the installation/shell script:
    * Windows users: shell scripts by default open in a new window and close once done. To see the execution, you can paste the script contents into a Windows CMD prompt and they will execute sequentially.
    * If you are deploying through the provided installation/cleanup scripts, make sure to run “chmod +x exec.sh” (or “chmod 777 exec.sh”) to elevate the execution permission of the scripts.
    * Linux users: permission issues can arise if you are not running as the root user; you may have to use “sudo su”.
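The Docker and script-permission checks above can be scripted. A minimal sketch, assuming it is run from the repository root; the `docker_check`/`script_check` helper names are hypothetical.

```shell
# Is the Docker daemon reachable?
docker_check() {
  if docker info >/dev/null 2>&1; then
    echo "docker: daemon reachable"
  else
    echo "docker: daemon not reachable (is Docker Desktop / dockerd running?)"
  fi
}

# Prints one status line per script name given.
script_check() {
  for script in "$@"; do
    if [ -x "$script" ]; then
      echo "$script: executable"
    else
      echo "$script: not executable yet (run: chmod +x $script)"
    fi
  done
}

docker_check
script_check exec.sh cleanup.sh
```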
    * If you are running the steps manually, refer to the “exec.sh” script for any difference in command execution.

### Pricing

* VPC, NAT Gateway pricing – https://aws.amazon.com/vpc/pricing/
* Amazon ECS – https://aws.amazon.com/ecs/pricing/
* VPC PrivateLink pricing – https://aws.amazon.com/privatelink/pricing/
* Amazon Kinesis Data Streams – https://aws.amazon.com/kinesis/data-streams/pricing/
* Amazon Kinesis Data Firehose – https://aws.amazon.com/kinesis/data-firehose/pricing/
* Amazon S3 – https://aws.amazon.com/s3/pricing/

### 4. Code Cleanup

The `terraform destroy` command will delete all the infrastructure that was planned and applied. Since the S3 buckets will contain both the sample input files and the processed files, make sure to delete those files before initiating the destroy command.
This can be done either in the AWS Console or using the AWS CLI; both options are shown below.

Using the cleanup script provided:

1. cleanup.sh
    1. Make sure to provide the required values.
    2. chmod +x cleanup.sh
    3. ./cleanup.sh

Manual cleanup – only do this if you did not run the script above:

1. Clean up resources from the AWS Console
    1. Open the AWS Console and select S3.
    2. Navigate to the buckets created as part of the stack.
    3. Delete the S3 buckets manually.
    4. Similarly, navigate to “ECR” and select the created repository – my-stepfunction-ecs-app-repo. You may have more than one image pushed to the repository depending on changes (if any) made to your Java code.
    5. Select all the images and delete them.
2. Clean up resources using the AWS CLI

`# CLI commands to delete the S3 buckets and ECR images`

* `$ aws s3 rb s3://my-stepfunction-ecs-app-dev-source-bucket- --force`
* `$ aws s3 rb s3://my-stepfunction-ecs-app-dev-target-bucket- --force`
* `$ aws ecr batch-delete-image --repository-name my-stepfunction-ecs-app-repo --image-ids imageTag=latest`
* `$ aws ecr batch-delete-image --repository-name my-stepfunction-ecs-app-repo --image-ids imageTag=untagged`
* `$ cd templates`
* `$ terraform destroy --auto-approve`

## Conclusion

You were able to launch an application process involving Amazon ECS and AWS Fargate integrated with various AWS services. The post walked through deploying application code written in Java and packaged with Maven. You may use any combination of applicable programming languages to build your application logic. The sample provided uses Java code that is packaged with a Dockerfile and pushed into Amazon ECR.

We encourage you to try this example and see for yourself how this overall application design works within AWS. Then, it will just be a matter of replacing your current applications, packaging them as Docker containers, and letting Amazon ECS manage them efficiently.

If you have any questions or feedback about this post, please provide your comments below!

## References

* [Amazon ECS FAQs](https://aws.amazon.com/ecs/faqs/)
* [AWS Fargate FAQs](https://aws.amazon.com/fargate/faqs/)
* [Amazon Kinesis](https://aws.amazon.com/kinesis/)
* [Docker Containers](https://www.docker.com/resources/what-container)
* [Terraform: Beyond the basics with AWS](https://aws.amazon.com/blogs/apn/terraform-beyond-the-basics-with-aws/)
* [VPC Endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html)