├── .gitignore
├── README.md
├── alb-external.yml
├── cdk
│   ├── README.md
│   ├── app.py
│   ├── cdk.json
│   ├── requirements.txt
│   └── stresstool_user_data.sh
├── cluster-ec2-private-vpc.yml
├── cluster-fargate-private-vpc-onsite.yml
├── cluster-fargate-private-vpc.yml
├── config.json.template
├── images
│   └── private-subnet-public-lb.png
└── requirements.txt
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | *env
2 | cdk/cdk.out
3 | cdk.context.json
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ## Bring up a Cloud9 IDE and run these prerequisite commands:
2 | ```
3 | # Choose your region, and store it in this environment variable
4 | export AWS_DEFAULT_REGION=ap-southeast-1
5 | echo "export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION" >> ~/.bashrc
6 |
7 | # Install software
8 | sudo yum -y install jq gettext
9 | sudo curl -so /usr/local/bin/ecs-cli https://s3.amazonaws.com/amazon-ecs-cli/ecs-cli-linux-amd64-latest
10 | sudo chmod +x /usr/local/bin/ecs-cli
11 | ```
12 | This installs some handy text parsing utilities, and the latest ecs-cli.
13 |
14 | ## Clone this demo repository:
15 | ```
16 | cd ~/environment
17 | git clone https://github.com/brentley/container-demo.git
18 | ```
19 |
20 | ## Clone our application microservice repositories:
21 | ```
22 | cd ~/environment
23 | git clone https://github.com/brentley/ecsdemo-frontend.git
24 | git clone https://github.com/brentley/ecsdemo-nodejs.git
25 | git clone https://github.com/brentley/ecsdemo-crystal.git
26 | ```
27 |
28 | ## Build a VPC, ECS Cluster, and ALB:
29 | ![infrastructure](images/private-subnet-public-lb.png)
30 | ```
31 | cd ~/environment/container-demo
32 |
33 | aws cloudformation deploy --stack-name container-demo --template-file cluster-fargate-private-vpc.yml --capabilities CAPABILITY_IAM
34 | aws cloudformation deploy --stack-name container-demo-alb --template-file alb-external.yml
35 | ```
36 | At a high level, we are building what you see in the diagram. We will have 3
37 | availability zones, each with a public and private subnet. The public subnets
38 | will hold service endpoints, and the private subnets will be where our workloads run.
39 | Where the image shows an instance, we will have containers on AWS Fargate.
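Before moving on, it can be worth confirming that both stacks actually finished. One way to check, using the same AWS CLI and stack names as above:
```
# Each command should print CREATE_COMPLETE (or UPDATE_COMPLETE on a re-run)
aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].StackStatus' --output text
aws cloudformation describe-stacks --stack-name container-demo-alb --query 'Stacks[0].StackStatus' --output text
```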
40 |
41 | ## Set environment variables from our build
42 | ```
43 |
44 | export clustername=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`ClusterName`].OutputValue' --output text)
45 | export target_group_arn=$(aws cloudformation describe-stack-resources --stack-name container-demo-alb | jq -r '.[][] | select(.ResourceType=="AWS::ElasticLoadBalancingV2::TargetGroup").PhysicalResourceId')
46 | export vpc=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`VpcId`].OutputValue' --output text)
47 | export ecsTaskExecutionRole=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`ECSTaskExecutionRole`].OutputValue' --output text)
48 | export subnet_1=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`PrivateSubnetOne`].OutputValue' --output text)
49 | export subnet_2=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`PrivateSubnetTwo`].OutputValue' --output text)
50 | export subnet_3=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`PrivateSubnetThree`].OutputValue' --output text)
51 | export security_group=$(aws cloudformation describe-stacks --stack-name container-demo --query 'Stacks[0].Outputs[?OutputKey==`ContainerSecurityGroup`].OutputValue' --output text)
52 |
53 | cd ~/environment
54 | ```
55 | This captures the outputs of our infrastructure build as environment variables
56 | we will use to automate deploys.
57 |
58 | ## Configure `ecs-cli` to talk to your cluster:
59 | ```
60 | ecs-cli configure --region $AWS_DEFAULT_REGION --cluster $clustername --default-launch-type FARGATE --config-name container-demo
61 | ```
62 | This stores our cluster name, region, and default launch type in an ecs-cli profile, so we don't have to repeat them on every command.
63 |
64 |
65 | ## Authorize traffic:
66 | ```
67 | aws ec2 authorize-security-group-ingress --group-id "$security_group" --protocol tcp --port 3000 --cidr 0.0.0.0/0
68 | ```
69 | We know that our containers talk on port 3000, so authorize that traffic on our security group.
70 |
71 | ## Deploy our frontend application:
72 | ```
73 | cd ~/environment/ecsdemo-frontend
74 | envsubst < ecs-params.yml.template >ecs-params.yml
75 |
76 | ecs-cli compose --project-name ecsdemo-frontend service up \
77 | --create-log-groups \
78 | --target-group-arn $target_group_arn \
79 | --private-dns-namespace service \
80 | --enable-service-discovery \
81 | --container-name ecsdemo-frontend \
82 | --container-port 3000 \
83 | --cluster-config container-demo \
84 | --vpc $vpc
85 |
86 | ```
87 | Here, we change directories into our frontend application code directory.
88 | The `envsubst` command templates our `ecs-params.yml` file with our current values.
89 | We then launch our frontend service on our ECS cluster (with a default launch type
90 | of Fargate).
91 |
92 | Note: ecs-cli will take care of creating our private DNS namespace for service discovery,
93 | and our log group in CloudWatch Logs.
94 |
95 | ## View running container:
96 | ```
97 | ecs-cli compose --project-name ecsdemo-frontend service ps \
98 | --cluster-config container-demo
99 | ```
100 | We should have one task registered.
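The `View logs` step below needs a task id. Rather than copying it out of the `ps` output by hand, you can look it up with the AWS CLI (a small sketch; the `task_arn` and `task_id` variable names are ours, and we assume the frontend service from above is running):
```
# Grab the first task ARN for the frontend service, then keep only the id portion
task_arn=$(aws ecs list-tasks --cluster $clustername --service-name ecsdemo-frontend --query 'taskArns[0]' --output text)
export task_id=${task_arn##*/}
echo $task_id
```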
101 |
102 | ## Check reachability (open url in your browser):
103 | ```
104 | alb_url=$(aws cloudformation describe-stacks --stack-name container-demo-alb --query 'Stacks[0].Outputs[?OutputKey==`ExternalUrl`].OutputValue' --output text)
105 | echo "Open $alb_url in your browser"
106 | ```
107 | This command looks up the URL for our ingress ALB, and outputs it. You should
108 | be able to click to open, or copy-paste into your browser.
109 |
110 | ## View logs:
111 | ```
112 | #substitute your task id from the ps command
113 | ecs-cli logs --task-id a06a6642-12c5-4006-b1d1-033994580605 \
114 | --follow --cluster-config container-demo
115 | ```
116 | To view logs, find the task id from the earlier `ps` command, and use it in this
117 | command. The `--follow` flag streams a task's logs as they arrive.
118 |
119 | ## Scale the tasks:
120 | ```
121 | ecs-cli compose --project-name ecsdemo-frontend service scale 3 \
122 | --cluster-config container-demo
123 | ecs-cli compose --project-name ecsdemo-frontend service ps \
124 | --cluster-config container-demo
125 | ```
126 | We can see that our containers have now been evenly distributed across all 3 of our
127 | availability zones.
128 |
129 | ## Bring up NodeJS backend api:
130 | ```
131 | cd ~/environment/ecsdemo-nodejs
132 | envsubst < ecs-params.yml.template >ecs-params.yml
133 | ecs-cli compose --project-name ecsdemo-nodejs service up \
134 | --create-log-groups \
135 | --private-dns-namespace service \
136 | --enable-service-discovery \
137 | --cluster-config container-demo \
138 | --vpc $vpc
139 |
140 | ```
141 | Just like earlier, we are now bringing up one of our backend API services.
142 | This service is not registered with any ALB, and instead is only reachable by
143 | private IP in the VPC, so we will use service discovery to talk to it.
144 |
145 | ## Scale the tasks:
146 | ```
147 | ecs-cli compose --project-name ecsdemo-nodejs service scale 3 \
148 | --cluster-config container-demo
149 |
150 | ```
151 | We can see that our containers have now been evenly distributed across all 3 of our
152 | availability zones.
153 |
154 | ## Bring up Crystal backend api:
155 | ```
156 | cd ~/environment/ecsdemo-crystal
157 | envsubst < ecs-params.yml.template >ecs-params.yml
158 | ecs-cli compose --project-name ecsdemo-crystal service up \
159 | --create-log-groups \
160 | --private-dns-namespace service \
161 | --enable-service-discovery \
162 | --cluster-config container-demo \
163 | --vpc $vpc
164 |
165 | ```
166 | Just like earlier, we are now bringing up one of our backend API services.
167 | This service is not registered with any ALB, and instead is only reachable by
168 | private IP in the VPC, so we will use service discovery to talk to it.
169 |
170 | ## Scale the tasks:
171 | ```
172 | ecs-cli compose --project-name ecsdemo-crystal service scale 3 \
173 | --cluster-config container-demo
174 |
175 | ```
176 | We can see that our containers have now been evenly distributed across all 3 of our
177 | availability zones.
178 |
179 | ## Conclusion:
180 | You should now have 3 services, each running 3 tasks, spread across 3 availability zones.
181 | Additionally, you should have zero instances to manage.
:) 182 | 183 | ## Cleanup: 184 | ``` 185 | cd ~/environment/ecsdemo-frontend 186 | ecs-cli compose --project-name ecsdemo-frontend service down --cluster-config container-demo 187 | cd ~/environment/ecsdemo-nodejs 188 | ecs-cli compose --project-name ecsdemo-nodejs service down --cluster-config container-demo 189 | cd ~/environment/ecsdemo-crystal 190 | ecs-cli compose --project-name ecsdemo-crystal service down --cluster-config container-demo 191 | 192 | ecs-cli down --force --cluster-config container-demo 193 | aws cloudformation delete-stack --stack-name container-demo-alb 194 | aws cloudformation wait stack-delete-complete --stack-name container-demo-alb 195 | aws cloudformation delete-stack --stack-name container-demo 196 | aws cloudformation delete-stack --stack-name amazon-ecs-cli-setup-private-dns-namespace-$clustername-ecsdemo-frontend 197 | ``` 198 | 199 | 200 | -------------------------------------------------------------------------------- /alb-external.yml: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: '2010-09-09' 2 | Description: External, public facing load balancer, for forwarding public traffic to containers 3 | Parameters: 4 | EnvironmentName: 5 | Type: String 6 | Default: container-demo 7 | Description: The name of the environment to add this load balancer to 8 | Resources: 9 | EcsSecurityGroupIngressFromPublicALB: 10 | Type: AWS::EC2::SecurityGroupIngress 11 | Properties: 12 | Description: Ingress from the public ALB 13 | GroupId: 14 | Fn::ImportValue: !Sub ${EnvironmentName}:ContainerSecurityGroup 15 | IpProtocol: -1 16 | SourceSecurityGroupId: !Ref 'PublicLoadBalancerSG' 17 | 18 | # Public load balancer, hosted in public subnets that is accessible 19 | # to the public, and is intended to route traffic to one or more public 20 | # facing services. This is used for accepting traffic from the public 21 | # internet and directing it to public facing microservices 22 | PublicLoadBalancerSG: 23 | Type: AWS::EC2::SecurityGroup 24 | Properties: 25 | GroupDescription: Access to the public facing load balancer 26 | VpcId: 27 | Fn::ImportValue: !Sub ${EnvironmentName}:VpcId 28 | SecurityGroupIngress: 29 | # Allow access to ALB from anywhere on the internet 30 | - CidrIp: 0.0.0.0/0 31 | IpProtocol: -1 32 | PublicLoadBalancer: 33 | Type: AWS::ElasticLoadBalancingV2::LoadBalancer 34 | Properties: 35 | Scheme: internet-facing 36 | LoadBalancerAttributes: 37 | - Key: idle_timeout.timeout_seconds 38 | Value: '30' 39 | Subnets: 40 | # The load balancer is placed into the public subnets, so that traffic 41 | # from the internet can reach the load balancer directly via the internet gateway 42 | - Fn::ImportValue: !Sub ${EnvironmentName}:PublicSubnetOne 43 | - Fn::ImportValue: !Sub ${EnvironmentName}:PublicSubnetTwo 44 | - Fn::ImportValue: !Sub ${EnvironmentName}:PublicSubnetThree 45 | SecurityGroups: [!Ref 'PublicLoadBalancerSG'] 46 | # A dummy target group is used to setup the ALB to just drop traffic 47 | # initially, before any real service target groups have been added. 
48 |   DummyTargetGroupPublic:
49 |     Type: AWS::ElasticLoadBalancingV2::TargetGroup
50 |     Properties:
51 |       TargetType: ip
52 |       HealthCheckIntervalSeconds: 10
53 |       HealthCheckPath: /health
54 |       HealthCheckProtocol: HTTP
55 |       HealthCheckTimeoutSeconds: 5
56 |       HealthyThresholdCount: 2
57 |       Matcher:
58 |         HttpCode: '200-299'
59 |       Port: 80
60 |       Protocol: HTTP
61 |       UnhealthyThresholdCount: 10
62 |       VpcId:
63 |         Fn::ImportValue: !Sub ${EnvironmentName}:VpcId
64 |       TargetGroupAttributes:
65 |         - Key: deregistration_delay.timeout_seconds
66 |           Value: '5'
67 |         - Key: slow_start.duration_seconds
68 |           Value: '60'
69 |   PublicLoadBalancerListener:
70 |     Type: AWS::ElasticLoadBalancingV2::Listener
71 |     DependsOn:
72 |       - PublicLoadBalancer
73 |     Properties:
74 |       DefaultActions:
75 |         - TargetGroupArn: !Ref 'DummyTargetGroupPublic'
76 |           Type: 'forward'
77 |       LoadBalancerArn: !Ref 'PublicLoadBalancer'
78 |       Port: 80
79 |       Protocol: HTTP
80 |
81 | Outputs:
82 |   PublicListener:
83 |     Description: The ARN of the public load balancer's Listener
84 |     Value: !Ref PublicLoadBalancerListener
85 |     Export:
86 |       Name: !Sub ${EnvironmentName}:PublicListener
87 |   ExternalUrl:
88 |     Description: The url of the external load balancer
89 |     Value: !Sub http://${PublicLoadBalancer.DNSName}
90 |     Export:
91 |       Name: !Sub ${EnvironmentName}:ExternalUrl
92 |
--------------------------------------------------------------------------------
/cdk/README.md:
--------------------------------------------------------------------------------
1 | ## Status: Beta
2 | ## Bring up a Cloud9 IDE and run these prerequisite commands:
3 | ```bash
4 | # Choose your region, and store it in this environment variable
5 |
6 | export AWS_DEFAULT_REGION= # Example region: us-west-2
7 | echo "export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION" >> ~/.bashrc
8 |
9 | # Install ecs-cli
10 | sudo curl -so /usr/local/bin/ecs-cli https://s3.amazonaws.com/amazon-ecs-cli/ecs-cli-linux-amd64-latest
11 | sudo chmod +x /usr/local/bin/ecs-cli
12 | ```
13 |
14 | AWS CDK pre-requisites:
15 |
16 | - [Node.js](https://nodejs.org/en/download) >= 8.11.x
17 |
18 | - Python >= 3.6
19 |
20 | or
21 |
22 | - Docker
23 |
24 | Not using Docker:
25 | ```bash
26 | CDK_VERSION=v0.36.0
27 | npm install -g aws-cdk@${CDK_VERSION}
28 | cdk --version
29 | virtualenv .env
30 | source .env/bin/activate
31 | pip install --upgrade -r requirements.txt
32 | ```
33 | Using Docker:
34 | ```bash
35 | CDK_VERSION=v0.36.0
36 | function _cdk { docker run -v $(pwd):/cdk -e AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN -e AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -it adam9098/aws-cdk:${CDK_VERSION} $@; }
37 | ```
38 |
39 | This installs the required libraries to run cdk. Choosing the Docker path takes away a lot of the pain of installing the libraries locally.
40 |
41 | ## Clone this demo repository:
42 | ```bash
43 | cd ~/environment
44 | git clone https://github.com/brentley/fargate-demo.git
45 | ```
46 |
47 | ## Clone our application microservice repositories. NOTE: This is not required, but simply an option to give you visibility into the services being deployed.
48 | ```bash
49 | cd ~/environment
50 | git clone https://github.com/brentley/ecsdemo-frontend.git
51 | git clone https://github.com/brentley/ecsdemo-nodejs.git
52 | git clone https://github.com/brentley/ecsdemo-crystal.git
53 | ```
54 |
55 | ## Build a base stack (VPC, ECS Cluster, Service Discovery Namespace, Base Security Group)
56 | ![infrastructure](images/private-subnet-public-lb.png)
57 | ```bash
58 | cd ~/environment/fargate-demo
59 | ```
60 | First, let's confirm that our code can properly synthesize and create the outputs.
61 | ```
62 | _cdk synth
63 | ```
64 |
65 | You should now see CloudFormation templates in the `cdk.out` directory, along with some other files related to cdk deployment.
66 |
67 | Now, let's build out the baseline environment:
68 | ```bash
69 | _cdk deploy fargate-demo-base
70 | ```
71 |
72 | At the end of the deploy, cdk will show you a list of the outputs and their values. Find the output that is the ECS cluster name, and export it as an environment variable; we will need it to communicate with ECS from the command line.
73 | Using ecs-cli, we will set up the CLI to communicate with the cluster.
74 | ```
75 | Outputs:
76 |
77 | fargate-demo-base.ExportsOutputRefECSCluster7D463CD47A8DFE2F = fargate-demo-base-ECSCluster7D463CD4-IH4IZATNP701
78 | ```
79 |
80 | ```bash
81 | export ECS_CLUSTER_NAME="fargate-demo-base-ECSCluster7D463CD4-IH4IZATNP701"
82 |
83 | ecs-cli configure -c ${ECS_CLUSTER_NAME} -r ${AWS_DEFAULT_REGION}
84 |
85 | ```
86 |
87 | At a high level, we are building what you see in the diagram. We will have 3
88 | availability zones, each with a public and private subnet. The public subnets
89 | will hold service endpoints, and the private subnets will be where our workloads run.
90 | Where the image shows an instance, we will have containers on AWS Fargate.
91 |
92 | What's nice about using the cdk is that you can rely on it to make opinionated decisions based on well-architected patterns. For example:
93 |
94 | ```python
95 | self.vpc = aws_ec2.Vpc(
96 |     self, "BaseVPC",
97 |     cidr='10.0.0.0/24',
98 |     enable_dns_support=True,
99 |     enable_dns_hostnames=True,
100 | )
101 | ```
102 |
103 | In the above code snippet, we simply defined the CIDR range for the VPC, and cdk will provision subnets spanning three availability zones, splitting the address space evenly between them. It will also create public and private subnets along with Internet/NAT gateways. Of course, if you want to define this on your own, cdk allows for that as well.
104 |
105 | ## Deploy our frontend application:
106 |
107 | Let's deploy the frontend application. This will consist of an Application Load Balancer and a container running on ECS Fargate. First, we will run a diff to see what is set to be deployed. If this is a first-time build, you should see all of the resources that are slated for creation. Once you review, deploy it!
108 | ```bash
109 | cd ~/environment
110 | _cdk diff fargate-demo-frontend
111 | _cdk deploy fargate-demo-frontend
112 | ```
113 |
114 | The deployment will take place and provide outputs as it progresses. When complete, copy the outputs to a file for further use.
115 |
116 | ```
117 | Outputs:
118 | fargate-demo-frontend.FrontendFargateLBServiceLoadBalancerDNSAFFB8F0B = farga-Front-17NF2P95ABONI-525673116.us-west-2.elb.amazonaws.com
119 | ```
120 |
121 | The output is a url for the load balancer that was just created.
Open the url in your browser and you should see it running your frontend service!
122 |
123 | Once again we are using an opinionated library within the cdk that will do the heavy lifting of creating the resources for the load balanced ecs fargate service. Let's take a look at the code:
124 |
125 | ```python
126 | # This will create an ALB with listener/target group, ecs task def, ecs fargate service, logging in cloudwatch
127 | # and security group from ALB to containers. This essentially condenses 95 lines of code into 15.
128 | self.fargate_load_balanced_service = aws_ecs_patterns.LoadBalancedFargateService(
129 |     self, "FrontendFargateLBService",
130 |     cluster=self.ecs_cluster,
131 |     image=aws_ecs.ContainerImage.from_registry("brentley/ecsdemo-frontend"),
132 |     container_port=3000,
133 |     cpu=256,
134 |     memory_limit_mi_b=512,
135 |     enable_logging=True,
136 |     desired_count=self.desired_service_count,
137 |     load_balancer_type=aws_ecs_patterns.LoadBalancerType('Application'),
138 |     public_load_balancer=True,
139 |     environment={
140 |         "CRYSTAL_URL": "http://ecsdemo-crystal.service:3000/crystal",
141 |         "NODEJS_URL": "http://ecsdemo-nodejs.service:3000"
142 |     },
143 | )
144 | ```
145 |
146 | We have defined our frontend application as its own stack in the cdk codebase. This provides a layer of isolation specific to the frontend application. This is an ideal way to split up your infrastructure deployments to ensure you limit blast radius to small functional groups.
147 |
148 | ## View running containers:
149 | ```
150 | ecs-cli ps
151 | ```
152 | Here is an example response:
153 | ```
154 | Name                                      State    Ports                      TaskDefinition                                                Health
155 | 391bb0ca-da0f-48b1-b982-acb75ec7d975/web  RUNNING  10.0.0.107:3000->3000/tcp  fargatedemofrontendFrontendFargateLBServiceTaskDefC747F090:1  UNKNOWN
156 | ```
157 |
158 | We should have one task registered and running; after the scaling step below, you will see three instances of it.
159 |
160 | ## View logs:
161 |
162 | Let's see the log output for one of the tasks. Run the following command to tail the logs live:
163 |
164 | ```
165 | #substitute your task id from the ps command
166 | ecs-cli logs -t --since 5 --task-id 391bb0ca-da0f-48b1-b982-acb75ec7d975
167 | ```
168 | To view logs, find the task id from the earlier `ps` command, and use it in this
169 | command. You can also add `--follow` to stream a task's logs as they arrive.
170 |
171 | ## Scale the tasks:
172 | Open `app.py` in an editor of your choice; we will modify the frontend stack to raise the desired count from 1 to 3. Simply comment out the line
173 | `desired_service_count=1`, and uncomment `desired_service_count=3`.
174 |
175 | ```python
176 | # Frontend service stack
177 | self.frontend_service = FrontendECSService(self, self.stack_name + "-frontend",
178 |     self.base_module.ecs_cluster, self.base_module.vpc,
179 |     self.base_module.services_3000_sec_group,
180 |     #desired_service_count=1)
181 |     desired_service_count=3)
182 | ```
183 |
184 | Let's run a diff to see what changes will be made, and then deploy!
185 |
186 | ```
187 | _cdk diff fargate-demo-frontend
188 | _cdk deploy fargate-demo-frontend
189 | ```
190 |
191 | Run the following again to see how many containers are running in the cluster:
192 |
193 | ```
194 | ecs-cli ps
195 | ```
196 |
197 | We can see that our containers have now been evenly distributed across all 3 of our
198 | availability zones.
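If you would like to verify the spread rather than take it on faith, you can ask ECS which Availability Zone each task was placed in. A small sketch using the AWS CLI (the `task_arns` variable name is ours; we assume the `ECS_CLUSTER_NAME` exported earlier):
```bash
# List the running tasks, then print the Availability Zone each one landed in
task_arns=$(aws ecs list-tasks --cluster ${ECS_CLUSTER_NAME} --query 'taskArns' --output text)
aws ecs describe-tasks --cluster ${ECS_CLUSTER_NAME} --tasks ${task_arns} \
  --query 'tasks[].availabilityZone' --output text
```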
199 |
200 | ## Bring up NodeJS backend api:
201 | ```bash
202 | cd ~/environment
203 | _cdk diff fargate-demo-node-backend
204 | _cdk deploy fargate-demo-node-backend
205 | ```
206 |
207 | Just like earlier, we are now bringing up one of our backend API services.
208 | This service is not registered with any ALB, and instead is only reachable by
209 | private IP in the VPC, so we will use service discovery to talk to it.
210 | The containers will automatically register with CloudMap on launch.
211 |
212 | ## Scale the tasks:
213 | Open `app.py` in an editor of your choice; we will modify the Node.js backend stack to raise the desired count from 1 to 3. Simply comment out the line
214 | `desired_service_count=1`, and uncomment `desired_service_count=3`.
215 |
216 | ```python
217 | # Backend Node.js service
218 | self.backend_node_service = BackendNodeECSService(self, self.stack_name + "-node-backend",
219 |     self.base_module.ecs_cluster, self.base_module.vpc,
220 |     self.base_module.services_3000_sec_group,
221 |     #desired_service_count=1)
222 |     desired_service_count=3)
223 | ```
224 |
225 | Let's run a diff to see what changes will be made, and then deploy!
226 |
227 | ```
228 | _cdk diff fargate-demo-node-backend
229 | _cdk deploy fargate-demo-node-backend
230 | ```
231 |
232 | ```
233 | ecs-cli ps
234 | ```
235 |
236 | ## Bring up Crystal backend api:
237 | ```bash
238 | cd ~/environment
239 | _cdk diff fargate-demo-crystal-backend
240 | _cdk deploy fargate-demo-crystal-backend
241 | ```
242 |
243 | Just like earlier, we are now bringing up one of our backend API services.
244 | This service is not registered with any ALB, and instead is only reachable by
245 | private IP in the VPC, so we will use service discovery to talk to it.
246 | The containers will automatically register with CloudMap on launch.
247 |
248 |
249 | ## Scale the tasks:
250 | Open `app.py` in an editor of your choice; we will modify the Crystal backend stack to raise the desired count from 1 to 3. Simply comment out the line
251 | `desired_service_count=1`, and uncomment `desired_service_count=3`.
252 |
253 | ```python
254 | # Backend Crystal service
255 | self.backend_crystal_service = BackendCrystalECSService(self, self.stack_name + "-crystal-backend",
256 |     self.base_module.ecs_cluster, self.base_module.vpc,
257 |     self.base_module.services_3000_sec_group,
258 |     #desired_service_count=1)
259 |     desired_service_count=3)
260 | ```
261 |
262 | Let's run a diff to see what changes will be made, and then deploy!
263 |
264 | ```
265 | _cdk diff fargate-demo-crystal-backend
266 | _cdk deploy fargate-demo-crystal-backend
267 | ```
268 |
269 | ```
270 | ecs-cli ps
271 | ```
272 |
273 | We can see that our containers have now been evenly distributed across all 3 of our
274 | availability zones.
275 |
276 | ## Conclusion:
277 | You should now have 3 services, each running 3 tasks, spread across 3 availability zones.
278 | Additionally, you should have zero instances to manage.
:) 279 | 280 | ## Cleanup: 281 | ``` 282 | _cdk destroy 283 | ``` 284 | 285 | 286 | -------------------------------------------------------------------------------- /cdk/app.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | # cdk: 1.41.0 4 | from aws_cdk import ( 5 | aws_ec2, 6 | aws_ecs, 7 | aws_iam, 8 | aws_ssm, 9 | aws_autoscaling, 10 | core 11 | ) 12 | 13 | from os import getenv 14 | 15 | 16 | class BaseVPCStack(core.Stack): 17 | 18 | def __init__(self, scope: core.Stack, id: str, **kwargs): 19 | super().__init__(scope, id, **kwargs) 20 | 21 | # This resource alone will create a private/public subnet in each AZ as well as nat/internet gateway(s) 22 | self.vpc = aws_ec2.Vpc( 23 | self, "BaseVPC", 24 | cidr='10.0.0.0/24', 25 | 26 | ) 27 | 28 | # Creating ECS Cluster in the VPC created above 29 | self.ecs_cluster = aws_ecs.Cluster( 30 | self, "ECSCluster", 31 | vpc=self.vpc, 32 | cluster_name="container-demo" 33 | ) 34 | 35 | # Adding service discovery namespace to cluster 36 | self.ecs_cluster.add_default_cloud_map_namespace( 37 | name="service", 38 | ) 39 | 40 | ###### CAPACITY PROVIDERS SECTION ##### 41 | # Adding EC2 capacity to the ECS Cluster 42 | #self.asg = self.ecs_cluster.add_capacity( 43 | # "ECSEC2Capacity", 44 | # instance_type=aws_ec2.InstanceType(instance_type_identifier='t3.small'), 45 | # min_capacity=0, 46 | # max_capacity=10 47 | #) 48 | 49 | #core.CfnOutput(self, "EC2AutoScalingGroupName", value=self.asg.auto_scaling_group_name, export_name="EC2ASGName") 50 | ##### END CAPACITY PROVIDER SECTION ##### 51 | 52 | ###### EC2 SPOT CAPACITY PROVIDER SECTION ###### 53 | 54 | ## As of today, AWS CDK doesn't support Launch Templates on the AutoScaling construct, hence it 55 | ## doesn't support Mixed Instances Policy to combine instance types on Auto Scaling and adhere to Spot best practices 56 | ## In the meantime, CfnLaunchTemplate and CfnAutoScalingGroup resources are used to configure Spot capacity 57 | ## https://github.com/aws/aws-cdk/issues/6734 58 | 59 | #self.ecs_spot_instance_role = aws_iam.Role( 60 | # self, "ECSSpotECSInstanceRole", 61 | # assumed_by=aws_iam.ServicePrincipal("ec2.amazonaws.com"), 62 | # managed_policies=[ 63 | # aws_iam.ManagedPolicy.from_aws_managed_policy_name("service-role/AmazonEC2ContainerServiceforEC2Role"), 64 | # aws_iam.ManagedPolicy.from_aws_managed_policy_name("service-role/AmazonEC2RoleforSSM") 65 | # ] 66 | #) 67 | # 68 | #self.ecs_spot_instance_profile = aws_iam.CfnInstanceProfile( 69 | # self, "ECSSpotInstanceProfile", 70 | # roles = [ 71 | # self.ecs_spot_instance_role.role_name 72 | # ] 73 | # ) 74 | # 75 | ## This creates a Launch Template for the Auto Scaling group 76 | #self.lt = aws_ec2.CfnLaunchTemplate( 77 | # self, "ECSEC2SpotCapacityLaunchTemplate", 78 | # launch_template_data={ 79 | # "instanceType": "m5.large", 80 | # "imageId": aws_ssm.StringParameter.value_for_string_parameter( 81 | # self, 82 | # "/aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id"), 83 | # "securityGroupIds": [ x.security_group_id for x in self.ecs_cluster.connections.security_groups ], 84 | # "iamInstanceProfile": {"arn": self.ecs_spot_instance_profile.attr_arn}, 85 | # 86 | # # Here we configure the ECS agent to drain Spot Instances upon catching a Spot Interruption notice from instance metadata 87 | # "userData": core.Fn.base64( 88 | # core.Fn.sub( 89 | # "#!/usr/bin/bash\n" 90 | # "echo ECS_CLUSTER=${cluster_name} >> /etc/ecs/ecs.config\n" 91 | # "sudo iptables 
--insert FORWARD 1 --in-interface docker+ --destination 169.254.169.254/32 --jump DROP\n" 92 | # "sudo service iptables save\n" 93 | # "echo ECS_ENABLE_SPOT_INSTANCE_DRAINING=true >> /etc/ecs/ecs.config\n" 94 | # "echo ECS_AWSVPC_BLOCK_IMDS=true >> /etc/ecs/ecs.config\n" 95 | # "cat /etc/ecs/ecs.config", 96 | # variables = { 97 | # "cluster_name":self.ecs_cluster.cluster_name 98 | # } 99 | # ) 100 | # ) 101 | # }, 102 | # launch_template_name="ECSEC2SpotCapacityLaunchTemplate") 103 | # 104 | #self.ecs_ec2_spot_mig_asg = aws_autoscaling.CfnAutoScalingGroup( 105 | # self, "ECSEC2SpotCapacity", 106 | # min_size = "0", 107 | # max_size = "10", 108 | # vpc_zone_identifier = [ x.subnet_id for x in self.vpc.private_subnets ], 109 | # mixed_instances_policy = { 110 | # "instancesDistribution": { 111 | # "onDemandAllocationStrategy": "prioritized", 112 | # "onDemandBaseCapacity": 0, 113 | # "onDemandPercentageAboveBaseCapacity": 0, 114 | # "spotAllocationStrategy": "capacity-optimized" 115 | # }, 116 | # "launchTemplate": { 117 | # "launchTemplateSpecification": { 118 | # "launchTemplateId": self.lt.ref, 119 | # "version": self.lt.attr_default_version_number 120 | # }, 121 | # "overrides": [ 122 | # {"instanceType": "m5.large"}, 123 | # {"instanceType": "m5d.large"}, 124 | # {"instanceType": "m5a.large"}, 125 | # {"instanceType": "m5ad.large"}, 126 | # {"instanceType": "m5n.large"}, 127 | # {"instanceType": "m5dn.large"}, 128 | # {"instanceType": "m3.large"}, 129 | # {"instanceType": "m4.large"}, 130 | # {"instanceType": "t3.large"}, 131 | # {"instanceType": "t2.large"} 132 | # ] 133 | # } 134 | # } 135 | #) 136 | # 137 | #core.Tag.add(self.ecs_ec2_spot_mig_asg, "Name", self.ecs_ec2_spot_mig_asg.node.path) 138 | #core.CfnOutput(self, "EC2SpotAutoScalingGroupName", value=self.ecs_ec2_spot_mig_asg.ref, export_name="EC2SpotASGName") 139 | # 140 | ##### END EC2 SPOT CAPACITY PROVIDER SECTION ##### 141 | 142 | # Namespace details as CFN output 143 | self.namespace_outputs = { 144 | 'ARN': self.ecs_cluster.default_cloud_map_namespace.private_dns_namespace_arn, 145 | 'NAME': self.ecs_cluster.default_cloud_map_namespace.private_dns_namespace_name, 146 | 'ID': self.ecs_cluster.default_cloud_map_namespace.private_dns_namespace_id, 147 | } 148 | 149 | # Cluster Attributes 150 | self.cluster_outputs = { 151 | 'NAME': self.ecs_cluster.cluster_name, 152 | 'SECGRPS': str(self.ecs_cluster.connections.security_groups) 153 | } 154 | 155 | # When enabling EC2, we need the security groups "registered" to the cluster for imports in other service stacks 156 | if self.ecs_cluster.connections.security_groups: 157 | self.cluster_outputs['SECGRPS'] = str([x.security_group_id for x in self.ecs_cluster.connections.security_groups][0]) 158 | 159 | # Frontend service to backend services on 3000 160 | self.services_3000_sec_group = aws_ec2.SecurityGroup( 161 | self, "FrontendToBackendSecurityGroup", 162 | allow_all_outbound=True, 163 | description="Security group for frontend service to talk to backend services", 164 | vpc=self.vpc 165 | ) 166 | 167 | # Allow inbound 3000 from ALB to Frontend Service 168 | self.sec_grp_ingress_self_3000 = aws_ec2.CfnSecurityGroupIngress( 169 | self, "InboundSecGrp3000", 170 | ip_protocol='TCP', 171 | source_security_group_id=self.services_3000_sec_group.security_group_id, 172 | from_port=3000, 173 | to_port=3000, 174 | group_id=self.services_3000_sec_group.security_group_id 175 | ) 176 | 177 | # Creating an EC2 bastion host to perform load test on private backend services 178 | amzn_linux = 
aws_ec2.MachineImage.latest_amazon_linux(
179 |             generation=aws_ec2.AmazonLinuxGeneration.AMAZON_LINUX_2,
180 |             edition=aws_ec2.AmazonLinuxEdition.STANDARD,
181 |             virtualization=aws_ec2.AmazonLinuxVirt.HVM,
182 |             storage=aws_ec2.AmazonLinuxStorage.GENERAL_PURPOSE
183 |         )
184 |
185 |         # Instance Role/profile that will be attached to the ec2 instance
186 |         # Enabling service role so the EC2 service can use ssm
187 |         role = aws_iam.Role(self, "InstanceSSM", assumed_by=aws_iam.ServicePrincipal("ec2.amazonaws.com"))
188 |
189 |         # Attaching the SSM policy to the role so we can use SSM Session Manager to connect to the ec2 instance
190 |         role.add_managed_policy(aws_iam.ManagedPolicy.from_aws_managed_policy_name("service-role/AmazonEC2RoleforSSM"))
191 |
192 |         # Reading user data, to install siege into the ec2 instance.
193 |         with open("stresstool_user_data.sh") as f:
194 |             user_data = f.read()
195 |
196 |         # Instance creation
197 |         self.instance = aws_ec2.Instance(self, "Instance",
198 |             instance_name="{}-stresstool".format(stack_name),
199 |             instance_type=aws_ec2.InstanceType("t3.medium"),
200 |             machine_image=amzn_linux,
201 |             vpc=self.vpc,
202 |             role=role,
203 |             user_data=aws_ec2.UserData.custom(user_data),
204 |             security_group=self.services_3000_sec_group
205 |         )
206 |
207 |         # All Outputs required for other stacks to build
208 |         core.CfnOutput(self, "NSArn", value=self.namespace_outputs['ARN'], export_name="NSARN")
209 |         core.CfnOutput(self, "NSName", value=self.namespace_outputs['NAME'], export_name="NSNAME")
210 |         core.CfnOutput(self, "NSId", value=self.namespace_outputs['ID'], export_name="NSID")
211 |         core.CfnOutput(self, "FE2BESecGrp", value=self.services_3000_sec_group.security_group_id, export_name="SecGrpId")
212 |         core.CfnOutput(self, "ECSClusterName", value=self.cluster_outputs['NAME'], export_name="ECSClusterName")
213 |         core.CfnOutput(self, "ECSClusterSecGrp", value=self.cluster_outputs['SECGRPS'], export_name="ECSSecGrpList")
214 |         core.CfnOutput(self, "ServicesSecGrp", value=self.services_3000_sec_group.security_group_id, export_name="ServicesSecGrp")
215 |         core.CfnOutput(self, "StressToolEc2Id", value=self.instance.instance_id)
216 |         core.CfnOutput(self, "StressToolEc2Ip", value=self.instance.instance_private_ip)
217 |
218 |
219 | _env = core.Environment(account=getenv('AWS_ACCOUNT_ID'), region=getenv('AWS_DEFAULT_REGION'))
220 | stack_name = "ecsworkshop-base"
221 | app = core.App()
222 | BaseVPCStack(app, stack_name, env=_env)
223 | app.synth()
--------------------------------------------------------------------------------
/cdk/cdk.json:
--------------------------------------------------------------------------------
1 | {
2 |   "app": "python3 app.py"
3 | }
--------------------------------------------------------------------------------
/cdk/requirements.txt:
--------------------------------------------------------------------------------
1 | aws-cdk.core
2 | aws-cdk.aws_ec2
3 | aws-cdk.aws_ecs
4 | aws-cdk.aws_iam
5 | aws-cdk.aws_ssm
6 | aws-cdk.aws_autoscaling
--------------------------------------------------------------------------------
/cdk/stresstool_user_data.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | #Install the tool
4 | sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
5 | sudo yum-config-manager --enable epel
6 | sudo yum install -y siege
--------------------------------------------------------------------------------
/cluster-ec2-private-vpc.yml:
--------------------------------------------------------------------------------
1 | AWSTemplateFormatVersion: '2010-09-09'
2 | Description: EC2 ECS cluster running containers in a private subnet. Supports
3 |              public facing load balancers, private internal load balancers, and
4 |              both internal and external service discovery namespaces.
5 | Parameters:
6 |   EnvironmentName:
7 |     Type: String
8 |     Default: container-demo
9 |     Description: "A friendly environment name that will be used for namespacing all cluster resources. Example: staging, qa, or production"
10 |   InstanceType:
11 |     Type: String
12 |     Default: c5.xlarge
13 |     Description: Class of EC2 instance used to host containers. Choose t2 for testing, m5 for general purpose, c5 for CPU intensive services, and r5 for memory intensive services
14 |     AllowedValues: [ t2.micro, t2.small, t2.medium, t2.large, t2.xlarge, t2.2xlarge,
15 |       m5.large, m5.xlarge, m5.2xlarge, m5.4xlarge, m5.12xlarge, m5.24xlarge,
16 |       c5.large, c5.xlarge, c5.2xlarge, c5.4xlarge, c5.9xlarge, c5.18xlarge,
17 |       r5.large, r5.xlarge, r5.2xlarge, r5.4xlarge, r5.12xlarge, r5.24xlarge ]
18 |     ConstraintDescription: Please choose a valid instance type.
19 |   DesiredCapacity:
20 |     Type: Number
21 |     Default: '3'
22 |     Description: Number of EC2 instances to launch in your ECS cluster.
23 |   MaxSize:
24 |     Type: Number
25 |     Default: '6'
26 |     Description: Maximum number of EC2 instances that can be launched in your ECS cluster.
27 |   ECSAMI:
28 |     Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
29 |     Default: /aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id
30 |     Description: The Amazon Machine Image ID used for the cluster; leave it as the default value to get the latest AMI
31 |
32 | Mappings:
33 |   # Hard values for the subnet masks. These masks define
34 |   # the range of internal IP addresses that can be assigned.
35 |   # The VPC can have all IPs from 10.0.0.0 to 10.0.255.255
36 |   # There are six subnets which cover the ranges:
37 |   #
38 |   # 10.0.0.0 - 10.0.0.255, 10.0.1.0 - 10.0.1.255,
39 |   # 10.0.2.0 - 10.0.2.255 (public)
40 |   # 10.0.100.0 - 10.0.100.255, 10.0.101.0 - 10.0.101.255,
41 |   # 10.0.102.0 - 10.0.102.255 (private)
42 |   #
43 |   # If you need more IP addresses (perhaps you have so many
44 |   # instances that you run out) then you can customize these
45 |   # ranges to add more
46 |   SubnetConfig:
47 |     VPC:
48 |       CIDR: '10.0.0.0/16'
49 |     PublicOne:
50 |       CIDR: '10.0.0.0/24'
51 |     PublicTwo:
52 |       CIDR: '10.0.1.0/24'
53 |     PublicThree:
54 |       CIDR: '10.0.2.0/24'
55 |     PrivateOne:
56 |       CIDR: '10.0.100.0/24'
57 |     PrivateTwo:
58 |       CIDR: '10.0.101.0/24'
59 |     PrivateThree:
60 |       CIDR: '10.0.102.0/24'
61 | Resources:
62 |   # VPC in which containers will be networked.
63 |   # It has three public subnets, and three private subnets.
64 |   # We distribute the subnets across the first three availability
65 |   # zones in the region, for high availability.
66 | VPC: 67 | Type: AWS::EC2::VPC 68 | Properties: 69 | EnableDnsSupport: true 70 | EnableDnsHostnames: true 71 | CidrBlock: !FindInMap ['SubnetConfig', 'VPC', 'CIDR'] 72 | 73 | # Three public subnets, where containers can have public IP addresses 74 | PublicSubnetOne: 75 | Type: AWS::EC2::Subnet 76 | Properties: 77 | AvailabilityZone: !Select 78 | - 0 79 | - Fn::GetAZs: !Ref 'AWS::Region' 80 | VpcId: !Ref 'VPC' 81 | CidrBlock: !FindInMap ['SubnetConfig', 'PublicOne', 'CIDR'] 82 | MapPublicIpOnLaunch: true 83 | PublicSubnetTwo: 84 | Type: AWS::EC2::Subnet 85 | Properties: 86 | AvailabilityZone: !Select 87 | - 1 88 | - Fn::GetAZs: !Ref 'AWS::Region' 89 | VpcId: !Ref 'VPC' 90 | CidrBlock: !FindInMap ['SubnetConfig', 'PublicTwo', 'CIDR'] 91 | MapPublicIpOnLaunch: true 92 | PublicSubnetThree: 93 | Type: AWS::EC2::Subnet 94 | Properties: 95 | AvailabilityZone: !Select 96 | - 2 97 | - Fn::GetAZs: !Ref 'AWS::Region' 98 | VpcId: !Ref 'VPC' 99 | CidrBlock: !FindInMap ['SubnetConfig', 'PublicThree', 'CIDR'] 100 | MapPublicIpOnLaunch: true 101 | 102 | # Three private subnets where containers will only have private 103 | # IP addresses, and will only be reachable by other members of the 104 | # VPC 105 | PrivateSubnetOne: 106 | Type: AWS::EC2::Subnet 107 | Properties: 108 | AvailabilityZone: !Select 109 | - 0 110 | - Fn::GetAZs: !Ref 'AWS::Region' 111 | VpcId: !Ref 'VPC' 112 | CidrBlock: !FindInMap ['SubnetConfig', 'PrivateOne', 'CIDR'] 113 | PrivateSubnetTwo: 114 | Type: AWS::EC2::Subnet 115 | Properties: 116 | AvailabilityZone: !Select 117 | - 1 118 | - Fn::GetAZs: !Ref 'AWS::Region' 119 | VpcId: !Ref 'VPC' 120 | CidrBlock: !FindInMap ['SubnetConfig', 'PrivateTwo', 'CIDR'] 121 | PrivateSubnetThree: 122 | Type: AWS::EC2::Subnet 123 | Properties: 124 | AvailabilityZone: !Select 125 | - 2 126 | - Fn::GetAZs: !Ref 'AWS::Region' 127 | VpcId: !Ref 'VPC' 128 | CidrBlock: !FindInMap ['SubnetConfig', 'PrivateThree', 'CIDR'] 129 | 130 | # Setup networking resources for the public subnets. Containers 131 | # in the public subnets have public IP addresses and the routing table 132 | # sends network traffic via the internet gateway. 133 | InternetGateway: 134 | Type: AWS::EC2::InternetGateway 135 | GatewayAttachment: 136 | Type: AWS::EC2::VPCGatewayAttachment 137 | Properties: 138 | VpcId: !Ref 'VPC' 139 | InternetGatewayId: !Ref 'InternetGateway' 140 | PublicRouteTable: 141 | Type: AWS::EC2::RouteTable 142 | Properties: 143 | VpcId: !Ref 'VPC' 144 | PublicRoute: 145 | Type: AWS::EC2::Route 146 | DependsOn: GatewayAttachment 147 | Properties: 148 | RouteTableId: !Ref 'PublicRouteTable' 149 | DestinationCidrBlock: '0.0.0.0/0' 150 | GatewayId: !Ref 'InternetGateway' 151 | PublicSubnetOneRouteTableAssociation: 152 | Type: AWS::EC2::SubnetRouteTableAssociation 153 | Properties: 154 | SubnetId: !Ref PublicSubnetOne 155 | RouteTableId: !Ref PublicRouteTable 156 | PublicSubnetTwoRouteTableAssociation: 157 | Type: AWS::EC2::SubnetRouteTableAssociation 158 | Properties: 159 | SubnetId: !Ref PublicSubnetTwo 160 | RouteTableId: !Ref PublicRouteTable 161 | PublicSubnetThreeRouteTableAssociation: 162 | Type: AWS::EC2::SubnetRouteTableAssociation 163 | Properties: 164 | SubnetId: !Ref PublicSubnetThree 165 | RouteTableId: !Ref PublicRouteTable 166 | 167 | # Setup networking resources for the private subnets. Containers 168 | # in these subnets have only private IP addresses, and must use a NAT 169 | # gateway to talk to the internet. We launch three NAT gateways, one for 170 | # each private subnet. 
171 | NatGatewayOneAttachment: 172 | Type: AWS::EC2::EIP 173 | DependsOn: GatewayAttachment 174 | Properties: 175 | Domain: vpc 176 | NatGatewayTwoAttachment: 177 | Type: AWS::EC2::EIP 178 | DependsOn: GatewayAttachment 179 | Properties: 180 | Domain: vpc 181 | NatGatewayThreeAttachment: 182 | Type: AWS::EC2::EIP 183 | DependsOn: GatewayAttachment 184 | Properties: 185 | Domain: vpc 186 | NatGatewayOne: 187 | Type: AWS::EC2::NatGateway 188 | Properties: 189 | AllocationId: !GetAtt NatGatewayOneAttachment.AllocationId 190 | SubnetId: !Ref PublicSubnetOne 191 | NatGatewayTwo: 192 | Type: AWS::EC2::NatGateway 193 | Properties: 194 | AllocationId: !GetAtt NatGatewayTwoAttachment.AllocationId 195 | SubnetId: !Ref PublicSubnetTwo 196 | NatGatewayThree: 197 | Type: AWS::EC2::NatGateway 198 | Properties: 199 | AllocationId: !GetAtt NatGatewayThreeAttachment.AllocationId 200 | SubnetId: !Ref PublicSubnetThree 201 | PrivateRouteTableOne: 202 | Type: AWS::EC2::RouteTable 203 | Properties: 204 | VpcId: !Ref 'VPC' 205 | PrivateRouteOne: 206 | Type: AWS::EC2::Route 207 | Properties: 208 | RouteTableId: !Ref PrivateRouteTableOne 209 | DestinationCidrBlock: 0.0.0.0/0 210 | NatGatewayId: !Ref NatGatewayOne 211 | PrivateRouteTableOneAssociation: 212 | Type: AWS::EC2::SubnetRouteTableAssociation 213 | Properties: 214 | RouteTableId: !Ref PrivateRouteTableOne 215 | SubnetId: !Ref PrivateSubnetOne 216 | PrivateRouteTableTwo: 217 | Type: AWS::EC2::RouteTable 218 | Properties: 219 | VpcId: !Ref 'VPC' 220 | PrivateRouteTwo: 221 | Type: AWS::EC2::Route 222 | Properties: 223 | RouteTableId: !Ref PrivateRouteTableTwo 224 | DestinationCidrBlock: 0.0.0.0/0 225 | NatGatewayId: !Ref NatGatewayTwo 226 | PrivateRouteTableTwoAssociation: 227 | Type: AWS::EC2::SubnetRouteTableAssociation 228 | Properties: 229 | RouteTableId: !Ref PrivateRouteTableTwo 230 | SubnetId: !Ref PrivateSubnetTwo 231 | PrivateRouteTableThree: 232 | Type: AWS::EC2::RouteTable 233 | Properties: 234 | VpcId: !Ref 'VPC' 235 | PrivateRouteThree: 236 | Type: AWS::EC2::Route 237 | Properties: 238 | RouteTableId: !Ref PrivateRouteTableThree 239 | DestinationCidrBlock: 0.0.0.0/0 240 | NatGatewayId: !Ref NatGatewayThree 241 | PrivateRouteTableThreeAssociation: 242 | Type: AWS::EC2::SubnetRouteTableAssociation 243 | Properties: 244 | RouteTableId: !Ref PrivateRouteTableThree 245 | SubnetId: !Ref PrivateSubnetThree 246 | 247 | 248 | # OPTIONAL: VPC Endpoint for DynamoDB 249 | # If a container needs to access DynamoDB this allows a container in the private subnet 250 | # to talk to DynamoDB directly without needing to go via the NAT gateway. This reduces 251 | # the amount of bandwidth through the gateway, meaning that the gateway is free to serve 252 | # your other traffic. 253 | DynamoDBEndpoint: 254 | Type: AWS::EC2::VPCEndpoint 255 | Properties: 256 | PolicyDocument: 257 | Version: "2012-10-17" 258 | Statement: 259 | - Effect: Allow 260 | Action: "*" 261 | Principal: "*" 262 | Resource: "*" 263 | RouteTableIds: 264 | - !Ref 'PrivateRouteTableOne' 265 | - !Ref 'PrivateRouteTableTwo' 266 | - !Ref 'PrivateRouteTableThree' 267 | ServiceName: !Sub com.amazonaws.${AWS::Region}.dynamodb 268 | VpcId: !Ref 'VPC' 269 | 270 | # ECS Resources 271 | ECSCluster: 272 | Type: AWS::ECS::Cluster 273 | Properties: 274 | ClusterName: !Ref EnvironmentName 275 | 276 | # A security group for the containers we will run in ECS. 277 | # Rules are added to this security group based on what ingress you 278 | # add for the cluster. 
279 | ContainerSecurityGroup: 280 | Type: AWS::EC2::SecurityGroup 281 | Properties: 282 | GroupDescription: Access to the ECS hosts that run containers 283 | VpcId: !Ref 'VPC' 284 | 285 | # Autoscaling group. This launches the actual EC2 instances that will register 286 | # themselves as members of the cluster, and run the docker containers. 287 | ECSAutoScalingGroup: 288 | Type: AWS::AutoScaling::AutoScalingGroup 289 | Properties: 290 | VPCZoneIdentifier: 291 | - !Ref PrivateSubnetOne 292 | - !Ref PrivateSubnetTwo 293 | - !Ref PrivateSubnetThree 294 | LaunchConfigurationName: !Ref 'ContainerInstances' 295 | MinSize: '1' 296 | MaxSize: !Ref 'MaxSize' 297 | DesiredCapacity: !Ref 'DesiredCapacity' 298 | CreationPolicy: 299 | ResourceSignal: 300 | Timeout: PT15M 301 | UpdatePolicy: 302 | AutoScalingReplacingUpdate: 303 | WillReplace: 'true' 304 | ContainerInstances: 305 | Type: AWS::AutoScaling::LaunchConfiguration 306 | Properties: 307 | ImageId: !Ref 'ECSAMI' 308 | SecurityGroups: [!Ref 'ContainerSecurityGroup'] 309 | InstanceType: !Ref 'InstanceType' 310 | IamInstanceProfile: !Ref 'EC2InstanceProfile' 311 | UserData: 312 | Fn::Base64: !Sub | 313 | #!/bin/bash -xe 314 | echo ECS_CLUSTER=${ECSCluster} >> /etc/ecs/ecs.config 315 | yum install -y aws-cfn-bootstrap 316 | /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource ECSAutoScalingGroup --region ${AWS::Region} 317 | EC2InstanceProfile: 318 | Type: AWS::IAM::InstanceProfile 319 | Properties: 320 | Path: / 321 | Roles: [!Ref 'EC2Role'] 322 | 323 | # A role used to allow AWS Autoscaling to inspect stats and adjust scalable targets 324 | # on your AWS account 325 | AutoscalingRole: 326 | Type: AWS::IAM::Role 327 | Properties: 328 | AssumeRolePolicyDocument: 329 | Statement: 330 | - Effect: Allow 331 | Principal: 332 | Service: [application-autoscaling.amazonaws.com] 333 | Action: ['sts:AssumeRole'] 334 | Path: / 335 | Policies: 336 | - PolicyName: service-autoscaling 337 | PolicyDocument: 338 | Statement: 339 | - Effect: Allow 340 | Action: 341 | - 'application-autoscaling:*' 342 | - 'cloudwatch:DescribeAlarms' 343 | - 'cloudwatch:PutMetricAlarm' 344 | - 'ecs:DescribeServices' 345 | - 'ecs:UpdateService' 346 | Resource: '*' 347 | 348 | # Role for the EC2 hosts. This allows the ECS agent on the EC2 hosts 349 | # to communicate with the ECS control plane, as well as download the docker 350 | # images from ECR to run on your host. 351 | EC2Role: 352 | Type: AWS::IAM::Role 353 | Properties: 354 | AssumeRolePolicyDocument: 355 | Statement: 356 | - Effect: Allow 357 | Principal: 358 | Service: [ec2.amazonaws.com] 359 | Action: ['sts:AssumeRole'] 360 | Path: / 361 | Policies: 362 | - PolicyName: ecs-service 363 | PolicyDocument: 364 | Statement: 365 | - Effect: Allow 366 | Action: 367 | - 'ecs:CreateCluster' 368 | - 'ecs:DeregisterContainerInstance' 369 | - 'ecs:DiscoverPollEndpoint' 370 | - 'ecs:Poll' 371 | - 'ecs:RegisterContainerInstance' 372 | - 'ecs:StartTelemetrySession' 373 | - 'ecs:Submit*' 374 | - 'logs:CreateLogStream' 375 | - 'logs:PutLogEvents' 376 | - 'ecr:GetAuthorizationToken' 377 | - 'ecr:BatchGetImage' 378 | - 'ecr:GetDownloadUrlForLayer' 379 | Resource: '*' 380 | 381 | # This is an IAM role which authorizes ECS to manage resources on your 382 | # account on your behalf, such as updating your load balancer with the 383 | # details of where your containers are, so that traffic can reach your 384 | # containers. 
385 |   ECSRole:
386 |     Type: AWS::IAM::Role
387 |     Properties:
388 |       AssumeRolePolicyDocument:
389 |         Statement:
390 |         - Effect: Allow
391 |           Principal:
392 |             Service: [ecs.amazonaws.com]
393 |           Action: ['sts:AssumeRole']
394 |       Path: /
395 |       Policies:
396 |       - PolicyName: ecs-service
397 |         PolicyDocument:
398 |           Statement:
399 |           - Effect: Allow
400 |             Action:
401 |               # Rules which allow ECS to attach network interfaces to instances
402 |               # on your behalf in order for awsvpc networking mode to work right
403 |               - 'ec2:AttachNetworkInterface'
404 |               - 'ec2:CreateNetworkInterface'
405 |               - 'ec2:CreateNetworkInterfacePermission'
406 |               - 'ec2:DeleteNetworkInterface'
407 |               - 'ec2:DeleteNetworkInterfacePermission'
408 |               - 'ec2:Describe*'
409 |               - 'ec2:DetachNetworkInterface'
410 |
411 |               # Rules which allow ECS to update load balancers on your behalf
412 |               # with the information about how to send traffic to your containers
413 |               - 'elasticloadbalancing:DeregisterInstancesFromLoadBalancer'
414 |               - 'elasticloadbalancing:DeregisterTargets'
415 |               - 'elasticloadbalancing:Describe*'
416 |               - 'elasticloadbalancing:RegisterInstancesWithLoadBalancer'
417 |               - 'elasticloadbalancing:RegisterTargets'
418 |             Resource: '*'
419 |
420 |   # This is a role which is used by the ECS tasks themselves.
421 |   ECSTaskExecutionRole:
422 |     Type: AWS::IAM::Role
423 |     Properties:
424 |       AssumeRolePolicyDocument:
425 |         Statement:
426 |         - Effect: Allow
427 |           Principal:
428 |             Service: [ecs-tasks.amazonaws.com]
429 |           Action: ['sts:AssumeRole']
430 |       Path: /
431 |       Policies:
432 |         - PolicyName: AmazonECSTaskExecutionRolePolicy
433 |           PolicyDocument:
434 |             Statement:
435 |             - Effect: Allow
436 |               Action:
437 |                 # Allow the ECS Tasks to download images from ECR
438 |                 - 'ecr:GetAuthorizationToken'
439 |                 - 'ecr:BatchCheckLayerAvailability'
440 |                 - 'ecr:GetDownloadUrlForLayer'
441 |                 - 'ecr:BatchGetImage'
442 |
443 |                 # Allow the ECS tasks to upload logs to CloudWatch
444 |                 - 'logs:CreateLogStream'
445 |                 - 'logs:PutLogEvents'
446 |               Resource: '*'
447 |
448 | # These are the values output by the CloudFormation template. Be careful
449 | # about changing any of them, because they are exported with specific
450 | # names so that the other task related CF templates can use them.
451 | Outputs:
452 |   ClusterName:
453 |     Description: The name of the ECS cluster
454 |     Value: !Ref 'ECSCluster'
455 |     Export:
456 |       Name: !Sub ${EnvironmentName}:ClusterName
457 |   AutoscalingRole:
458 |     Description: The ARN of the role used for autoscaling
459 |     Value: !GetAtt 'AutoscalingRole.Arn'
460 |     Export:
461 |       Name: !Sub ${EnvironmentName}:AutoscalingRole
462 |   ECSRole:
463 |     Description: The ARN of the ECS role
464 |     Value: !GetAtt 'ECSRole.Arn'
465 |     Export:
466 |       Name: !Sub ${EnvironmentName}:ECSRole
467 |   ECSTaskExecutionRole:
468 |     Description: The ARN of the ECS task execution role
469 |     Value: !GetAtt 'ECSTaskExecutionRole.Arn'
470 |     Export:
471 |       Name: !Sub ${EnvironmentName}:ECSTaskExecutionRole
472 |   VpcId:
473 |     Description: The ID of the VPC that this stack is deployed in
474 |     Value: !Ref 'VPC'
475 |     Export:
476 |       Name: !Sub ${EnvironmentName}:VpcId
477 |   PublicSubnetOne:
478 |     Description: Public subnet one
479 |     Value: !Ref 'PublicSubnetOne'
480 |     Export:
481 |       Name: !Sub ${EnvironmentName}:PublicSubnetOne
482 |   PublicSubnetTwo:
483 |     Description: Public subnet two
484 |     Value: !Ref 'PublicSubnetTwo'
485 |     Export:
486 |       Name: !Sub ${EnvironmentName}:PublicSubnetTwo
487 |   PublicSubnetThree:
488 |     Description: Public subnet three
489 |     Value: !Ref 'PublicSubnetThree'
490 |     Export:
491 |       Name: !Sub ${EnvironmentName}:PublicSubnetThree
492 |   PrivateSubnetOne:
493 |     Description: Private subnet one
494 |     Value: !Ref 'PrivateSubnetOne'
495 |     Export:
496 |       Name: !Sub ${EnvironmentName}:PrivateSubnetOne
497 |   PrivateSubnetTwo:
498 |     Description: Private subnet two
499 |     Value: !Ref 'PrivateSubnetTwo'
500 |     Export:
501 |       Name: !Sub ${EnvironmentName}:PrivateSubnetTwo
502 |   PrivateSubnetThree:
503 |     Description: Private subnet three
504 |     Value: !Ref 'PrivateSubnetThree'
505 |     Export:
506 |       Name: !Sub ${EnvironmentName}:PrivateSubnetThree
507 |   ContainerSecurityGroup:
508 |     Description: A security group used to allow containers to receive traffic
509 |     Value: !Ref 'ContainerSecurityGroup'
510 |     Export:
511 |       Name: !Sub ${EnvironmentName}:ContainerSecurityGroup
512 |
--------------------------------------------------------------------------------
/cluster-fargate-private-vpc-onsite.yml:
--------------------------------------------------------------------------------
1 | AWSTemplateFormatVersion: '2010-09-09'
2 | Description: AWS Fargate cluster that can span public and private subnets. Supports
3 |              public facing load balancers, private internal load balancers, and
4 |              both internal and external service discovery namespaces.
5 | Parameters:
6 |   EnvironmentName:
7 |     Type: String
8 |     Default: fargatedemo
9 |     Description: "A friendly environment name that will be used for namespacing all cluster resources. Example: staging, qa, or production"
10 | Mappings:
11 |   # Hard values for the subnet masks. These masks define
12 |   # the range of internal IP addresses that can be assigned.
13 |   # The VPC can have all IPs from 10.0.0.0 to 10.0.255.255
14 |   # There are six subnets which cover the ranges:
15 |   #
16 |   # 10.0.0.0 - 10.0.0.255, 10.0.1.0 - 10.0.1.255,
17 |   # 10.0.2.0 - 10.0.2.255 (public)
18 |   # 10.0.100.0 - 10.0.100.255, 10.0.101.0 - 10.0.101.255,
19 |   # 10.0.102.0 - 10.0.102.255 (private)
20 |   #
21 |   # If you need more IP addresses (perhaps you have so many
22 |   # instances that you run out) then you can customize these
23 |   # ranges to add more
24 |   SubnetConfig:
25 |     VPC:
26 |       CIDR: '10.0.0.0/16'
27 |     PublicOne:
28 |       CIDR: '10.0.0.0/24'
29 |     PublicTwo:
30 |       CIDR: '10.0.1.0/24'
31 |     PublicThree:
32 |       CIDR: '10.0.2.0/24'
33 |     PrivateOne:
34 |       CIDR: '10.0.100.0/24'
35 |     PrivateTwo:
36 |       CIDR: '10.0.101.0/24'
37 |     PrivateThree:
38 |       CIDR: '10.0.102.0/24'
39 | Resources:
40 |   # VPC in which containers will be networked.
41 |   # It has three public subnets, and three private subnets.
42 |   # We distribute the subnets across the first three availability
43 |   # zones in the region, for high availability.
44 |   VPC:
45 |     Type: AWS::EC2::VPC
46 |     Properties:
47 |       EnableDnsSupport: true
48 |       EnableDnsHostnames: true
49 |       CidrBlock: !FindInMap ['SubnetConfig', 'VPC', 'CIDR']
50 |
51 |   # Three public subnets, where containers can have public IP addresses
52 |   PublicSubnetOne:
53 |     Type: AWS::EC2::Subnet
54 |     Properties:
55 |       AvailabilityZone: !Select
56 |         - 0
57 |         - Fn::GetAZs: !Ref 'AWS::Region'
58 |       VpcId: !Ref 'VPC'
59 |       CidrBlock: !FindInMap ['SubnetConfig', 'PublicOne', 'CIDR']
60 |       MapPublicIpOnLaunch: true
61 |   PublicSubnetTwo:
62 |     Type: AWS::EC2::Subnet
63 |     Properties:
64 |       AvailabilityZone: !Select
65 |         - 1
66 |         - Fn::GetAZs: !Ref 'AWS::Region'
67 |       VpcId: !Ref 'VPC'
68 |       CidrBlock: !FindInMap ['SubnetConfig', 'PublicTwo', 'CIDR']
69 |       MapPublicIpOnLaunch: true
70 |   PublicSubnetThree:
71 |     Type: AWS::EC2::Subnet
72 |     Properties:
73 |       AvailabilityZone: !Select
74 |         - 2
75 |         - Fn::GetAZs: !Ref 'AWS::Region'
76 |       VpcId: !Ref 'VPC'
77 |       CidrBlock: !FindInMap ['SubnetConfig', 'PublicThree', 'CIDR']
78 |       MapPublicIpOnLaunch: true
79 |
80 |   # Three private subnets where containers will only have private
81 |   # IP addresses, and will only be reachable by other members of the
82 |   # VPC
83 |   PrivateSubnetOne:
84 |     Type: AWS::EC2::Subnet
85 |     Properties:
86 |       AvailabilityZone: !Select
87 |         - 0
88 |         - Fn::GetAZs: !Ref 'AWS::Region'
89 |       VpcId: !Ref 'VPC'
90 |       CidrBlock: !FindInMap ['SubnetConfig', 'PrivateOne', 'CIDR']
91 |   PrivateSubnetTwo:
92 |     Type: AWS::EC2::Subnet
93 |     Properties:
94 |       AvailabilityZone: !Select
95 |         - 1
96 |         - Fn::GetAZs: !Ref 'AWS::Region'
97 |       VpcId: !Ref 'VPC'
98 |       CidrBlock: !FindInMap ['SubnetConfig', 'PrivateTwo', 'CIDR']
99 |   PrivateSubnetThree:
100 |     Type: AWS::EC2::Subnet
101 |     Properties:
102 |       AvailabilityZone: !Select
103 |         - 2
104 |         - Fn::GetAZs: !Ref 'AWS::Region'
105 |       VpcId: !Ref 'VPC'
106 |       CidrBlock: !FindInMap ['SubnetConfig', 'PrivateThree', 'CIDR']
107 |
108 |   # Setup networking resources for the public subnets. Containers
109 |   # in the public subnets have public IP addresses and the routing table
110 |   # sends network traffic via the internet gateway.
111 |   InternetGateway:
112 |     Type: AWS::EC2::InternetGateway
113 |   GatewayAttachment:
114 |     Type: AWS::EC2::VPCGatewayAttachment
115 |     Properties:
116 |       VpcId: !Ref 'VPC'
117 |       InternetGatewayId: !Ref 'InternetGateway'
118 |   PublicRouteTable:
119 |     Type: AWS::EC2::RouteTable
120 |     Properties:
121 |       VpcId: !Ref 'VPC'
122 |   PublicRoute:
123 |     Type: AWS::EC2::Route
124 |     DependsOn: GatewayAttachment
125 |     Properties:
126 |       RouteTableId: !Ref 'PublicRouteTable'
127 |       DestinationCidrBlock: '0.0.0.0/0'
128 |       GatewayId: !Ref 'InternetGateway'
129 |   PublicSubnetOneRouteTableAssociation:
130 |     Type: AWS::EC2::SubnetRouteTableAssociation
131 |     Properties:
132 |       SubnetId: !Ref PublicSubnetOne
133 |       RouteTableId: !Ref PublicRouteTable
134 |   PublicSubnetTwoRouteTableAssociation:
135 |     Type: AWS::EC2::SubnetRouteTableAssociation
136 |     Properties:
137 |       SubnetId: !Ref PublicSubnetTwo
138 |       RouteTableId: !Ref PublicRouteTable
139 |   PublicSubnetThreeRouteTableAssociation:
140 |     Type: AWS::EC2::SubnetRouteTableAssociation
141 |     Properties:
142 |       SubnetId: !Ref PublicSubnetThree
143 |       RouteTableId: !Ref PublicRouteTable
144 |
145 |   # Set up networking resources for the private subnets. Containers
146 |   # in these subnets have only private IP addresses, and must use a NAT
147 |   # gateway to talk to the internet. We launch three NAT gateways, one for
148 |   # each private subnet.
149 |   NatGatewayOneAttachment:
150 |     Type: AWS::EC2::EIP
151 |     DependsOn: GatewayAttachment
152 |     Properties:
153 |       Domain: vpc
154 |   NatGatewayTwoAttachment:
155 |     Type: AWS::EC2::EIP
156 |     DependsOn: GatewayAttachment
157 |     Properties:
158 |       Domain: vpc
159 |   NatGatewayThreeAttachment:
160 |     Type: AWS::EC2::EIP
161 |     DependsOn: GatewayAttachment
162 |     Properties:
163 |       Domain: vpc
164 |   NatGatewayOne:
165 |     Type: AWS::EC2::NatGateway
166 |     Properties:
167 |       AllocationId: !GetAtt NatGatewayOneAttachment.AllocationId
168 |       SubnetId: !Ref PublicSubnetOne
169 |   NatGatewayTwo:
170 |     Type: AWS::EC2::NatGateway
171 |     Properties:
172 |       AllocationId: !GetAtt NatGatewayTwoAttachment.AllocationId
173 |       SubnetId: !Ref PublicSubnetTwo
174 |   NatGatewayThree:
175 |     Type: AWS::EC2::NatGateway
176 |     Properties:
177 |       AllocationId: !GetAtt NatGatewayThreeAttachment.AllocationId
178 |       SubnetId: !Ref PublicSubnetThree
179 |   PrivateRouteTableOne:
180 |     Type: AWS::EC2::RouteTable
181 |     Properties:
182 |       VpcId: !Ref 'VPC'
183 |   PrivateRouteOne:
184 |     Type: AWS::EC2::Route
185 |     Properties:
186 |       RouteTableId: !Ref PrivateRouteTableOne
187 |       DestinationCidrBlock: 0.0.0.0/0
188 |       NatGatewayId: !Ref NatGatewayOne
189 |   PrivateRouteTableOneAssociation:
190 |     Type: AWS::EC2::SubnetRouteTableAssociation
191 |     Properties:
192 |       RouteTableId: !Ref PrivateRouteTableOne
193 |       SubnetId: !Ref PrivateSubnetOne
194 |   PrivateRouteTableTwo:
195 |     Type: AWS::EC2::RouteTable
196 |     Properties:
197 |       VpcId: !Ref 'VPC'
198 |   PrivateRouteTwo:
199 |     Type: AWS::EC2::Route
200 |     Properties:
201 |       RouteTableId: !Ref PrivateRouteTableTwo
202 |       DestinationCidrBlock: 0.0.0.0/0
203 |       NatGatewayId: !Ref NatGatewayTwo
204 |   PrivateRouteTableTwoAssociation:
205 |     Type: AWS::EC2::SubnetRouteTableAssociation
206 |     Properties:
207 |       RouteTableId: !Ref PrivateRouteTableTwo
208 |       SubnetId: !Ref PrivateSubnetTwo
209 |   PrivateRouteTableThree:
210 |     Type: AWS::EC2::RouteTable
211 |     Properties:
212 |       VpcId: !Ref 'VPC'
213 |   PrivateRouteThree:
214 |     Type: AWS::EC2::Route
215 |     Properties:
216 |       RouteTableId: !Ref PrivateRouteTableThree
217 |       DestinationCidrBlock: 0.0.0.0/0
218 |       NatGatewayId: !Ref NatGatewayThree
219 |   PrivateRouteTableThreeAssociation:
220 |     Type: AWS::EC2::SubnetRouteTableAssociation
221 |     Properties:
222 |       RouteTableId: !Ref PrivateRouteTableThree
223 |       SubnetId: !Ref PrivateSubnetThree
224 |
225 |
226 |   # OPTIONAL: VPC Endpoint for DynamoDB
227 |   # If a container needs to access DynamoDB this allows a container in the private subnet
228 |   # to talk to DynamoDB directly without needing to go via the NAT gateway. This reduces
229 |   # the amount of bandwidth through the gateway, meaning that the gateway is free to serve
230 |   # your other traffic.
231 |   DynamoDBEndpoint:
232 |     Type: AWS::EC2::VPCEndpoint
233 |     Properties:
234 |       PolicyDocument:
235 |         Version: "2012-10-17"
236 |         Statement:
237 |           - Effect: Allow
238 |             Action: "*"
239 |             Principal: "*"
240 |             Resource: "*"
241 |       RouteTableIds:
242 |         - !Ref 'PrivateRouteTableOne'
243 |         - !Ref 'PrivateRouteTableTwo'
244 |         - !Ref 'PrivateRouteTableThree'
245 |       ServiceName: !Sub com.amazonaws.${AWS::Region}.dynamodb
246 |       VpcId: !Ref 'VPC'
247 |
248 |   # ECS Resources
249 |   ECSCluster:
250 |     Type: AWS::ECS::Cluster
251 |
252 |   # A security group for the containers we will run in Fargate.
253 |   # Rules are added to this security group based on what ingress you
254 |   # add for the cluster.
255 |   ContainerSecurityGroup:
256 |     Type: AWS::EC2::SecurityGroup
257 |     Properties:
258 |       GroupDescription: Access to the Fargate containers
259 |       VpcId: !Ref 'VPC'
260 |
261 | # These are the values output by the CloudFormation template. Be careful
262 | # about changing any of them, because all of them are exported with specific
263 | # names so that the other task-related CF templates can use them.
264 | Outputs:
265 |   ClusterName:
266 |     Description: The name of the ECS cluster
267 |     Value: !Ref 'ECSCluster'
268 |     Export:
269 |       Name: !Sub ${EnvironmentName}:ClusterName
270 |   VpcId:
271 |     Description: The ID of the VPC that this stack is deployed in
272 |     Value: !Ref 'VPC'
273 |     Export:
274 |       Name: !Sub ${EnvironmentName}:VpcId
275 |   PublicSubnetOne:
276 |     Description: Public subnet one
277 |     Value: !Ref 'PublicSubnetOne'
278 |     Export:
279 |       Name: !Sub ${EnvironmentName}:PublicSubnetOne
280 |   PublicSubnetTwo:
281 |     Description: Public subnet two
282 |     Value: !Ref 'PublicSubnetTwo'
283 |     Export:
284 |       Name: !Sub ${EnvironmentName}:PublicSubnetTwo
285 |   PublicSubnetThree:
286 |     Description: Public subnet three
287 |     Value: !Ref 'PublicSubnetThree'
288 |     Export:
289 |       Name: !Sub ${EnvironmentName}:PublicSubnetThree
290 |   PrivateSubnetOne:
291 |     Description: Private subnet one
292 |     Value: !Ref 'PrivateSubnetOne'
293 |     Export:
294 |       Name: !Sub ${EnvironmentName}:PrivateSubnetOne
295 |   PrivateSubnetTwo:
296 |     Description: Private subnet two
297 |     Value: !Ref 'PrivateSubnetTwo'
298 |     Export:
299 |       Name: !Sub ${EnvironmentName}:PrivateSubnetTwo
300 |   PrivateSubnetThree:
301 |     Description: Private subnet three
302 |     Value: !Ref 'PrivateSubnetThree'
303 |     Export:
304 |       Name: !Sub ${EnvironmentName}:PrivateSubnetThree
305 |   ContainerSecurityGroup:
306 |     Description: A security group used to allow Fargate containers to receive traffic
307 |     Value: !Ref 'ContainerSecurityGroup'
308 |     Export:
309 |       Name: !Sub ${EnvironmentName}:ContainerSecurityGroup
310 |
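Both Fargate templates provision the optional DynamoDB gateway endpoint, so DynamoDB traffic from the private subnets bypasses the NAT gateways entirely. After deploying, you can confirm the endpoint exists and is attached to all three private route tables with a query along these lines (a sketch that assumes `$vpc` holds the VPC ID, as in the setup steps earlier):
```
# Show the gateway endpoint in our VPC and the route tables it serves
aws ec2 describe-vpc-endpoints \
  --filters Name=vpc-id,Values=$vpc \
  --query 'VpcEndpoints[].[ServiceName,RouteTableIds]' \
  --output json
```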
--------------------------------------------------------------------------------
/cluster-fargate-private-vpc.yml:
--------------------------------------------------------------------------------
1 | AWSTemplateFormatVersion: '2010-09-09'
2 | Description: AWS Fargate cluster that can span public and private subnets. Supports
3 |              public facing load balancers, private internal load balancers, and
4 |              both internal and external service discovery namespaces.
5 | Parameters:
6 |   EnvironmentName:
7 |     Type: String
8 |     Default: container-demo
9 |     Description: "A friendly environment name that will be used for namespacing all cluster resources. Example: staging, qa, or production"
10 | Mappings:
11 |   # Hard-coded values for the subnet masks. These masks define
12 |   # the range of internal IP addresses that can be assigned.
13 |   # The VPC can have all IPs from 10.0.0.0 to 10.0.255.255
14 |   # There are six subnets which cover the ranges:
15 |   #
16 |   # Public:  10.0.0.0/24, 10.0.1.0/24, 10.0.2.0/24
17 |   # Private: 10.0.100.0/24, 10.0.101.0/24, 10.0.102.0/24
18 |   #
19 |   # (one public and one private subnet per availability zone)
20 |   #
21 |   # If you need more IP addresses (perhaps you have so many
22 |   # instances that you run out) then you can customize these
23 |   # ranges to add more
24 |   SubnetConfig:
25 |     VPC:
26 |       CIDR: '10.0.0.0/16'
27 |     PublicOne:
28 |       CIDR: '10.0.0.0/24'
29 |     PublicTwo:
30 |       CIDR: '10.0.1.0/24'
31 |     PublicThree:
32 |       CIDR: '10.0.2.0/24'
33 |     PrivateOne:
34 |       CIDR: '10.0.100.0/24'
35 |     PrivateTwo:
36 |       CIDR: '10.0.101.0/24'
37 |     PrivateThree:
38 |       CIDR: '10.0.102.0/24'
39 | Resources:
40 |   # VPC in which containers will be networked.
41 |   # It has three public subnets, and three private subnets.
42 |   # We distribute the subnets across the first three availability
43 |   # zones in the region, for high availability.
44 |   VPC:
45 |     Type: AWS::EC2::VPC
46 |     Properties:
47 |       EnableDnsSupport: true
48 |       EnableDnsHostnames: true
49 |       CidrBlock: !FindInMap ['SubnetConfig', 'VPC', 'CIDR']
50 |
51 |   # Three public subnets, where containers can have public IP addresses
52 |   PublicSubnetOne:
53 |     Type: AWS::EC2::Subnet
54 |     Properties:
55 |       AvailabilityZone: !Select
56 |         - 0
57 |         - Fn::GetAZs: !Ref 'AWS::Region'
58 |       VpcId: !Ref 'VPC'
59 |       CidrBlock: !FindInMap ['SubnetConfig', 'PublicOne', 'CIDR']
60 |       MapPublicIpOnLaunch: true
61 |   PublicSubnetTwo:
62 |     Type: AWS::EC2::Subnet
63 |     Properties:
64 |       AvailabilityZone: !Select
65 |         - 1
66 |         - Fn::GetAZs: !Ref 'AWS::Region'
67 |       VpcId: !Ref 'VPC'
68 |       CidrBlock: !FindInMap ['SubnetConfig', 'PublicTwo', 'CIDR']
69 |       MapPublicIpOnLaunch: true
70 |   PublicSubnetThree:
71 |     Type: AWS::EC2::Subnet
72 |     Properties:
73 |       AvailabilityZone: !Select
74 |         - 2
75 |         - Fn::GetAZs: !Ref 'AWS::Region'
76 |       VpcId: !Ref 'VPC'
77 |       CidrBlock: !FindInMap ['SubnetConfig', 'PublicThree', 'CIDR']
78 |       MapPublicIpOnLaunch: true
79 |
80 |   # Three private subnets where containers will only have private
81 |   # IP addresses, and will only be reachable by other members of the
82 |   # VPC
83 |   PrivateSubnetOne:
84 |     Type: AWS::EC2::Subnet
85 |     Properties:
86 |       AvailabilityZone: !Select
87 |         - 0
88 |         - Fn::GetAZs: !Ref 'AWS::Region'
89 |       VpcId: !Ref 'VPC'
90 |       CidrBlock: !FindInMap ['SubnetConfig', 'PrivateOne', 'CIDR']
91 |   PrivateSubnetTwo:
92 |     Type: AWS::EC2::Subnet
93 |     Properties:
94 |       AvailabilityZone: !Select
95 |         - 1
96 |         - Fn::GetAZs: !Ref 'AWS::Region'
97 |       VpcId: !Ref 'VPC'
98 |       CidrBlock: !FindInMap ['SubnetConfig', 'PrivateTwo', 'CIDR']
99 |   PrivateSubnetThree:
100 |     Type: AWS::EC2::Subnet
101 |     Properties:
102 |       AvailabilityZone: !Select
103 |         - 2
104 |         - Fn::GetAZs: !Ref 'AWS::Region'
105 |       VpcId: !Ref 'VPC'
106 |       CidrBlock: !FindInMap ['SubnetConfig', 'PrivateThree', 'CIDR']
107 |
108 |   # Set up networking resources for the public subnets. Containers
109 |   # in the public subnets have public IP addresses and the routing table
110 |   # sends network traffic via the internet gateway.
111 |   InternetGateway:
112 |     Type: AWS::EC2::InternetGateway
113 |   GatewayAttachment:
114 |     Type: AWS::EC2::VPCGatewayAttachment
115 |     Properties:
116 |       VpcId: !Ref 'VPC'
117 |       InternetGatewayId: !Ref 'InternetGateway'
118 |   PublicRouteTable:
119 |     Type: AWS::EC2::RouteTable
120 |     Properties:
121 |       VpcId: !Ref 'VPC'
122 |   PublicRoute:
123 |     Type: AWS::EC2::Route
124 |     DependsOn: GatewayAttachment
125 |     Properties:
126 |       RouteTableId: !Ref 'PublicRouteTable'
127 |       DestinationCidrBlock: '0.0.0.0/0'
128 |       GatewayId: !Ref 'InternetGateway'
129 |   PublicSubnetOneRouteTableAssociation:
130 |     Type: AWS::EC2::SubnetRouteTableAssociation
131 |     Properties:
132 |       SubnetId: !Ref PublicSubnetOne
133 |       RouteTableId: !Ref PublicRouteTable
134 |   PublicSubnetTwoRouteTableAssociation:
135 |     Type: AWS::EC2::SubnetRouteTableAssociation
136 |     Properties:
137 |       SubnetId: !Ref PublicSubnetTwo
138 |       RouteTableId: !Ref PublicRouteTable
139 |   PublicSubnetThreeRouteTableAssociation:
140 |     Type: AWS::EC2::SubnetRouteTableAssociation
141 |     Properties:
142 |       SubnetId: !Ref PublicSubnetThree
143 |       RouteTableId: !Ref PublicRouteTable
144 |
145 |   # Set up networking resources for the private subnets. Containers
146 |   # in these subnets have only private IP addresses, and must use a NAT
147 |   # gateway to talk to the internet. We launch three NAT gateways, one for
148 |   # each private subnet.
149 |   NatGatewayOneAttachment:
150 |     Type: AWS::EC2::EIP
151 |     DependsOn: GatewayAttachment
152 |     Properties:
153 |       Domain: vpc
154 |   NatGatewayTwoAttachment:
155 |     Type: AWS::EC2::EIP
156 |     DependsOn: GatewayAttachment
157 |     Properties:
158 |       Domain: vpc
159 |   NatGatewayThreeAttachment:
160 |     Type: AWS::EC2::EIP
161 |     DependsOn: GatewayAttachment
162 |     Properties:
163 |       Domain: vpc
164 |   NatGatewayOne:
165 |     Type: AWS::EC2::NatGateway
166 |     Properties:
167 |       AllocationId: !GetAtt NatGatewayOneAttachment.AllocationId
168 |       SubnetId: !Ref PublicSubnetOne
169 |   NatGatewayTwo:
170 |     Type: AWS::EC2::NatGateway
171 |     Properties:
172 |       AllocationId: !GetAtt NatGatewayTwoAttachment.AllocationId
173 |       SubnetId: !Ref PublicSubnetTwo
174 |   NatGatewayThree:
175 |     Type: AWS::EC2::NatGateway
176 |     Properties:
177 |       AllocationId: !GetAtt NatGatewayThreeAttachment.AllocationId
178 |       SubnetId: !Ref PublicSubnetThree
179 |   PrivateRouteTableOne:
180 |     Type: AWS::EC2::RouteTable
181 |     Properties:
182 |       VpcId: !Ref 'VPC'
183 |   PrivateRouteOne:
184 |     Type: AWS::EC2::Route
185 |     Properties:
186 |       RouteTableId: !Ref PrivateRouteTableOne
187 |       DestinationCidrBlock: 0.0.0.0/0
188 |       NatGatewayId: !Ref NatGatewayOne
189 |   PrivateRouteTableOneAssociation:
190 |     Type: AWS::EC2::SubnetRouteTableAssociation
191 |     Properties:
192 |       RouteTableId: !Ref PrivateRouteTableOne
193 |       SubnetId: !Ref PrivateSubnetOne
194 |   PrivateRouteTableTwo:
195 |     Type: AWS::EC2::RouteTable
196 |     Properties:
197 |       VpcId: !Ref 'VPC'
198 |   PrivateRouteTwo:
199 |     Type: AWS::EC2::Route
200 |     Properties:
201 |       RouteTableId: !Ref PrivateRouteTableTwo
202 |       DestinationCidrBlock: 0.0.0.0/0
203 |       NatGatewayId: !Ref NatGatewayTwo
204 |   PrivateRouteTableTwoAssociation:
205 |     Type: AWS::EC2::SubnetRouteTableAssociation
206 |     Properties:
207 |       RouteTableId: !Ref PrivateRouteTableTwo
208 |       SubnetId: !Ref PrivateSubnetTwo
209 |   PrivateRouteTableThree:
210 |     Type: AWS::EC2::RouteTable
211 |     Properties:
212 |       VpcId: !Ref 'VPC'
213 |   PrivateRouteThree:
214 |     Type: AWS::EC2::Route
215 |     Properties:
216 |       RouteTableId: !Ref PrivateRouteTableThree
217 |       DestinationCidrBlock: 0.0.0.0/0
218 |       NatGatewayId: !Ref NatGatewayThree
219 |   PrivateRouteTableThreeAssociation:
220 |     Type: AWS::EC2::SubnetRouteTableAssociation
221 |     Properties:
222 |       RouteTableId: !Ref PrivateRouteTableThree
223 |       SubnetId: !Ref PrivateSubnetThree
224 |
225 |
226 |   # OPTIONAL: VPC Endpoint for DynamoDB
227 |   # If a container needs to access DynamoDB this allows a container in the private subnet
228 |   # to talk to DynamoDB directly without needing to go via the NAT gateway. This reduces
229 |   # the amount of bandwidth through the gateway, meaning that the gateway is free to serve
230 |   # your other traffic.
231 |   DynamoDBEndpoint:
232 |     Type: AWS::EC2::VPCEndpoint
233 |     Properties:
234 |       PolicyDocument:
235 |         Version: "2012-10-17"
236 |         Statement:
237 |           - Effect: Allow
238 |             Action: "*"
239 |             Principal: "*"
240 |             Resource: "*"
241 |       RouteTableIds:
242 |         - !Ref 'PrivateRouteTableOne'
243 |         - !Ref 'PrivateRouteTableTwo'
244 |         - !Ref 'PrivateRouteTableThree'
245 |       ServiceName: !Sub com.amazonaws.${AWS::Region}.dynamodb
246 |       VpcId: !Ref 'VPC'
247 |
248 |   # ECS Resources
249 |   ECSCluster:
250 |     Type: AWS::ECS::Cluster
251 |
252 |   # A security group for the containers we will run in Fargate.
253 |   # Rules are added to this security group based on what ingress you
254 |   # add for the cluster.
255 |   ContainerSecurityGroup:
256 |     Type: AWS::EC2::SecurityGroup
257 |     Properties:
258 |       GroupDescription: Access to the Fargate containers
259 |       VpcId: !Ref 'VPC'
260 |
261 |   # A role used to allow AWS Autoscaling to inspect stats and adjust scalable targets
262 |   # on your AWS account
263 |   AutoscalingRole:
264 |     Type: AWS::IAM::Role
265 |     Properties:
266 |       AssumeRolePolicyDocument:
267 |         Statement:
268 |         - Effect: Allow
269 |           Principal:
270 |             Service: [application-autoscaling.amazonaws.com]
271 |           Action: ['sts:AssumeRole']
272 |       Path: /
273 |       Policies:
274 |       - PolicyName: service-autoscaling
275 |         PolicyDocument:
276 |           Statement:
277 |           - Effect: Allow
278 |             Action:
279 |               - 'application-autoscaling:*'
280 |               - 'cloudwatch:DescribeAlarms'
281 |               - 'cloudwatch:PutMetricAlarm'
282 |               - 'ecs:DescribeServices'
283 |               - 'ecs:UpdateService'
284 |             Resource: '*'
285 |
286 |   # This is an IAM role which authorizes ECS to manage resources on your
287 |   # account on your behalf, such as updating your load balancer with the
288 |   # details of where your containers are, so that traffic can reach your
289 |   # containers.
290 |   ECSRole:
291 |     Type: AWS::IAM::Role
292 |     Properties:
293 |       AssumeRolePolicyDocument:
294 |         Statement:
295 |         - Effect: Allow
296 |           Principal:
297 |             Service: [ecs.amazonaws.com]
298 |           Action: ['sts:AssumeRole']
299 |       Path: /
300 |       Policies:
301 |       - PolicyName: ecs-service
302 |         PolicyDocument:
303 |           Statement:
304 |           - Effect: Allow
305 |             Action:
306 |               # Rules which allow ECS to attach network interfaces to instances
307 |               # on your behalf in order for awsvpc networking mode to work correctly
308 |               - 'ec2:AttachNetworkInterface'
309 |               - 'ec2:CreateNetworkInterface'
310 |               - 'ec2:CreateNetworkInterfacePermission'
311 |               - 'ec2:DeleteNetworkInterface'
312 |               - 'ec2:DeleteNetworkInterfacePermission'
313 |               - 'ec2:Describe*'
314 |               - 'ec2:DetachNetworkInterface'
315 |
316 |               # Rules which allow ECS to update load balancers on your behalf
317 |               # with the information about how to send traffic to your containers
318 |               - 'elasticloadbalancing:DeregisterInstancesFromLoadBalancer'
319 |               - 'elasticloadbalancing:DeregisterTargets'
320 |               - 'elasticloadbalancing:Describe*'
321 |               - 'elasticloadbalancing:RegisterInstancesWithLoadBalancer'
322 |               - 'elasticloadbalancing:RegisterTargets'
323 |             Resource: '*'
324 |
325 |   # This is a role which is used by the ECS tasks themselves.
326 |   ECSTaskExecutionRole:
327 |     Type: AWS::IAM::Role
328 |     Properties:
329 |       AssumeRolePolicyDocument:
330 |         Statement:
331 |         - Effect: Allow
332 |           Principal:
333 |             Service: [ecs-tasks.amazonaws.com]
334 |           Action: ['sts:AssumeRole']
335 |       Path: /
336 |       Policies:
337 |         - PolicyName: AmazonECSTaskExecutionRolePolicy
338 |           PolicyDocument:
339 |             Statement:
340 |             - Effect: Allow
341 |               Action:
342 |                 # Allow the ECS Tasks to download images from ECR
343 |                 - 'ecr:GetAuthorizationToken'
344 |                 - 'ecr:BatchCheckLayerAvailability'
345 |                 - 'ecr:GetDownloadUrlForLayer'
346 |                 - 'ecr:BatchGetImage'
347 |
348 |                 # Allow the ECS tasks to upload logs to CloudWatch
349 |                 - 'logs:CreateLogStream'
350 |                 - 'logs:PutLogEvents'
351 |               Resource: '*'
352 |
353 | # These are the values output by the CloudFormation template. Be careful
354 | # about changing any of them, because all of them are exported with specific
355 | # names so that the other task-related CF templates can use them.
356 | Outputs:
357 |   ClusterName:
358 |     Description: The name of the ECS cluster
359 |     Value: !Ref 'ECSCluster'
360 |     Export:
361 |       Name: !Sub ${EnvironmentName}:ClusterName
362 |   AutoscalingRole:
363 |     Description: The ARN of the role used for autoscaling
364 |     Value: !GetAtt 'AutoscalingRole.Arn'
365 |     Export:
366 |       Name: !Sub ${EnvironmentName}:AutoscalingRole
367 |   ECSRole:
368 |     Description: The ARN of the ECS role
369 |     Value: !GetAtt 'ECSRole.Arn'
370 |     Export:
371 |       Name: !Sub ${EnvironmentName}:ECSRole
372 |   ECSTaskExecutionRole:
373 |     Description: The ARN of the ECS task execution role
374 |     Value: !GetAtt 'ECSTaskExecutionRole.Arn'
375 |     Export:
376 |       Name: !Sub ${EnvironmentName}:ECSTaskExecutionRole
377 |   VpcId:
378 |     Description: The ID of the VPC that this stack is deployed in
379 |     Value: !Ref 'VPC'
380 |     Export:
381 |       Name: !Sub ${EnvironmentName}:VpcId
382 |   PublicSubnetOne:
383 |     Description: Public subnet one
384 |     Value: !Ref 'PublicSubnetOne'
385 |     Export:
386 |       Name: !Sub ${EnvironmentName}:PublicSubnetOne
387 |   PublicSubnetTwo:
388 |     Description: Public subnet two
389 |     Value: !Ref 'PublicSubnetTwo'
390 |     Export:
391 |       Name: !Sub ${EnvironmentName}:PublicSubnetTwo
392 |   PublicSubnetThree:
393 |     Description: Public subnet three
394 |     Value: !Ref 'PublicSubnetThree'
395 |     Export:
396 |       Name: !Sub ${EnvironmentName}:PublicSubnetThree
397 |   PrivateSubnetOne:
398 |     Description: Private subnet one
399 |     Value: !Ref 'PrivateSubnetOne'
400 |     Export:
401 |       Name: !Sub ${EnvironmentName}:PrivateSubnetOne
402 |   PrivateSubnetTwo:
403 |     Description: Private subnet two
404 |     Value: !Ref 'PrivateSubnetTwo'
405 |     Export:
406 |       Name: !Sub ${EnvironmentName}:PrivateSubnetTwo
407 |   PrivateSubnetThree:
408 |     Description: Private subnet three
409 |     Value: !Ref 'PrivateSubnetThree'
410 |     Export:
411 |       Name: !Sub ${EnvironmentName}:PrivateSubnetThree
412 |   ContainerSecurityGroup:
413 |     Description: A security group used to allow Fargate containers to receive traffic
414 |     Value: !Ref 'ContainerSecurityGroup'
415 |     Export:
416 |       Name: !Sub ${EnvironmentName}:ContainerSecurityGroup
417 |
--------------------------------------------------------------------------------
/config.json.template:
--------------------------------------------------------------------------------
1 | {
2 |   "PolicyName": "TrackOPSperTarget",
3 |   "ScalableDimension": "ecs:service:DesiredCount",
4 |   "ResourceId": "service/$clustername/ecsdemo-frontend",
5 |   "PolicyType": "TargetTrackingScaling",
6 |   "TargetTrackingScalingPolicyConfiguration": {
7 |     "TargetValue": 1000,
8 |     "PredefinedMetricSpecification": {
9 |       "PredefinedMetricType": "ALBRequestCountPerTarget",
10 |       "ResourceLabel": "$resource_label"
11 |     }
12 |   },
13 |   "ServiceNamespace": "ecs"
14 | }
15 |
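`config.json.template` defines a target-tracking policy that scales the frontend service to keep `ALBRequestCountPerTarget` near 1000. The `$clustername` and `$resource_label` placeholders are meant to be filled in with `envsubst`, after which the rendered JSON can be handed straight to Application Auto Scaling. A sketch of that flow (the 1-10 task bounds are illustrative, and `$resource_label` is assumed to already hold the `app/<alb-name>/<alb-id>/targetgroup/<tg-name>/<tg-id>` suffix that `ALBRequestCountPerTarget` expects):
```
cd ~/environment/container-demo
envsubst < config.json.template >config.json

# Register the service as a scalable target (min/max bounds are illustrative)
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/$clustername/ecsdemo-frontend \
  --min-capacity 1 \
  --max-capacity 10

# Attach the target-tracking policy defined in the rendered config.json
aws application-autoscaling put-scaling-policy --cli-input-json file://config.json
```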
--------------------------------------------------------------------------------
/images/private-subnet-public-lb.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/brentley/container-demo/f637ecadecc18e05f580a3bdf3a24f44cc634676/images/private-subnet-public-lb.png
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | aws-cdk.cdk==0.35.0
2 | aws-cdk.aws_ec2==0.35.0
3 | aws-cdk.aws_ecs==0.35.0
4 | aws-cdk.aws_ecs_patterns==0.35.0
5 | aws-cdk.aws_elasticloadbalancingv2==0.35.0
6 | aws-cdk.aws_logs==0.35.0
7 | aws-cdk.aws_servicediscovery==0.35.0
8 |
--------------------------------------------------------------------------------
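These pins target the pre-1.0 AWS CDK preview releases, so the CDK app under `cdk/` should be run against exactly these versions. A minimal setup sketch (the virtualenv name `.env` is an assumption that happens to match the `*env` pattern in `.gitignore`):
```
cd ~/environment/container-demo
python3 -m venv .env        # virtualenv name is illustrative; *env is gitignored
source .env/bin/activate
pip install -r requirements.txt
```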