├── .gitignore ├── Lab-0 ├── README.md └── images │ ├── arch-starthere.png │ ├── cfn-create-complete-2.png │ ├── cfn-create-complete.png │ ├── cfn-createstack-1.png │ ├── cfn-createstack-2.png │ ├── cfn-iam-capabilities.png │ ├── cloud9-environment.png │ ├── cloud9-list.png │ ├── cloud9.png │ ├── deploy-to-aws.png │ ├── ecr-list-repos.png │ ├── ecs-taskdef-change-image.png │ ├── ecs-taskdef-describe.png │ └── ecs-taskdef-update-success.png ├── Lab-1 └── README.md ├── Lab-2 ├── README.md ├── hints │ ├── buildspec_dev.yml.draft │ └── hintspec_dev.yml └── images │ ├── arch-codebuild.png │ ├── cb-create-1-eht.png │ ├── cb-create-1.png │ ├── cb-create-project-1-2.png │ ├── cb-create-project-1.png │ ├── cb-create-project-2.png │ ├── cb-create-project-done.png │ ├── cb-create-project-envvar.png │ ├── cb-project-start.png │ ├── cb-success.png │ ├── ecr-get-like-commands.png │ └── ecr-new-image.png ├── Lab-3 ├── README.md ├── hints │ └── buildspec_prod.yml ├── images │ ├── cp-add-source.png │ ├── cp-create-cb-1.png │ ├── cp-create-cb-complete.png │ ├── cp-create-name.png │ ├── cp-create-source.png │ ├── cp-deploy-step.png │ └── cp-deploy-success.png └── mysfits_like_v2.py ├── Lab-4 ├── README.md ├── hints │ └── buildspec_clair.yml └── images │ ├── arch-codebuild.png │ ├── cb-create-1.png │ ├── cb-create-project-1-2.png │ ├── cb-create-project-1.png │ ├── cb-create-project-2.png │ ├── cb-create-project-envvar.png │ ├── cb-create-test-project-1.png │ ├── cb-create-test-project-2.png │ ├── cb-success.png │ ├── clair-action.png │ ├── cloud9.png │ ├── cp-create-action.png │ ├── cp-create-action2.jpeg │ ├── ecr-get-like-commands.png │ ├── ecr-new-image.png │ ├── edit-pipeline.png │ └── klar-logs.png ├── README.md ├── app ├── buildspec.yml ├── like-service │ ├── Dockerfile │ └── service │ │ ├── mysfits_like.py │ │ └── requirements.txt └── monolith-service │ ├── Dockerfile │ └── service │ ├── mysfitsTableClient.py │ ├── mythicalMysfitsService.py │ └── requirements.txt ├── core.yml ├── images └── mysfits-welcome.png ├── script ├── fetch-outputs ├── load-ddb ├── populate-dynamodb.json ├── setup ├── setup_ws1_end ├── upload-site ├── ws2 │ ├── bootstrap_ws2 │ ├── build-containers │ ├── create-fargate-services │ ├── service-template.json │ └── update-service-json └── ws3 │ ├── bootstrap_ws3 │ ├── clone │ └── populate ├── web ├── confirm.html ├── index.html ├── js │ ├── amazon-cognito-identity.min.js │ ├── aws-cognito-sdk.min.js │ └── aws-sdk-2.246.1.min.js └── register.html └── ws3-start ├── app ├── like-service │ ├── Dockerfile │ ├── buildspec_prod.yml │ └── service │ │ ├── mysfits_like.py │ │ └── requirements.txt └── monolith-service │ ├── Dockerfile │ └── service │ ├── mysfitsTableClient.py │ ├── mythicalMysfitsService.py │ └── requirements.txt └── core.yml /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | .AppleDouble 3 | .LSOverride 4 | ._* 5 | -------------------------------------------------------------------------------- /Lab-0/README.md: -------------------------------------------------------------------------------- 1 | # Mythical Mysfits: DevSecOps with Docker and AWS Fargate 2 | 3 | ## Lab 0 - Deploy Existing Mythical Stack 4 | 5 | In this lab, we are going to create the core infrastructure for the rest of the workshop and get familiar with the general environment. 
6 | 7 | ## Table of Contents 8 | 9 | Here's what you'll be doing: 10 | 11 | * [Deploy Mythical CloudFormation Stack](#deploy-mythical-cloudformation-stack) 12 | * [Familiarize Yourself with the Mythical Workshop Environment](#familiarize-yourself-with-the-workshop-environment) 13 | * [Configure Cloud 9 Mythical Working Environment](#configure-cloud9-working-environment) 14 | * [Choose Your Mythical Path](#stop-pay-attention-here-because-it-matters) 15 | * [Crash Course/Refresher of CON214](#crash-courserefresher-on-workshop-1-con214-monolith-to-microservice-with-docker-and-aws-fargate) 16 | 17 | ### Deploy Mythical CloudFormation Stack 18 | 19 | 1. Select an AWS Region 20 | 21 | Log into the AWS Management Console and select an [AWS region](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html). 22 | 23 | The region dropdown is in the upper right hand corner of the console to the left of the Support dropdown menu. For this workshop, choose either **US West (Oregon)**, **US East (Ohio)**, **EU (Ireland)** or **Asia Pacific (Singapore)**. Workshop administrators will typically indicate which region you should use. 24 | 25 | 2. Launch CloudFormation Stack to create core workshop infrastructure 26 | 27 | Click on one of the **Deploy to AWS** icons below to region to stand up the core workshop infrastructure. 28 | 29 | Region | Launch Template 30 | ------------ | ------------- 31 | **Oregon** (us-west-2) | [![Launch Mythical Mysfits Stack into Oregon with CloudFormation](images/deploy-to-aws.png)](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=mythical-mysfits-devsecops&templateURL=https://s3.amazonaws.com/mythical-mysfits-website/fargate-devsecops/core.yml) 32 | **Ohio** (us-east-2) | [![Launch Mythical Mysfits Stack into Ohio with CloudFormation](images/deploy-to-aws.png)](https://console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/new?stackName=mythical-mysfits-devsecops&templateURL=https://s3.amazonaws.com/mythical-mysfits-website/fargate-devsecops/core.yml) 33 | **Ireland** (eu-west-1) | [![Launch Mythical Mysfits Stack into Ireland with CloudFormation](images/deploy-to-aws.png)](https://console.aws.amazon.com/cloudformation/home?region=eu-west-1#/stacks/new?stackName=mythical-mysfits-devsecops&templateURL=https://s3.amazonaws.com/mythical-mysfits-website/fargate-devsecops/core.yml) 34 | **Singapore** (ap-southeast-1) | [![Launch Mythical Mysfits Stack into Singapore with CloudFormation](images/deploy-to-aws.png)](https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-1#/stacks/new?stackName=mythical-mysfits-devsecops&templateURL=https://s3.amazonaws.com/mythical-mysfits-website/fargate-devsecops/core.yml) 35 | 36 | The links above will bring you to the AWS CloudFormation console with the **Specify an Amazon S3 template URL** field populated and radio button selected. Just click **Next**. If you do not have this populated, please click the link above. 37 | 38 | 3. Specify stack details 39 | 40 | On the Create Stack page, the stack name should automatically be populated. If you're running multiple workshop environments in the same account, use a different stack name. 41 | 42 | 43 | 44 | 45 | 46 | For the parameter **ClairDBPassword** you need to follow the Postgres minimum password requirements: 47 | 48 | > Master Password must be at least eight characters long, as in "mypassword". Can be any printable ASCII character except "/", "", or "@". 49 | 50 | Click **Next** to continue. 
51 | 52 | ![CloudFormation Parameters](images/cfn-createstack-2.png) 53 | 54 | 4. Configure stack options 55 | 56 | No changes or inputs are required on the Configure stack options page. Click **Next** to move on to the Review page. 57 | 58 | 5. Review 59 | 60 | On the Review page, take a look at all the parameters and make sure they're accurate. Check the box next to **I acknowledge that AWS CloudFormation might create IAM resources with custom names.** If you do not check this box, the stack creation will fail. As part of the cleanup, CloudFormation will remove the IAM Roles for you. 61 | 62 | ![CloudFormation IAM Capabilities](images/cfn-iam-capabilities.png) 63 | 64 | Click **Create** to launch the CloudFormation stack. 65 | 66 | Here is what the templates are launching: 67 | 68 | ![CloudFormation Starting Stack](images/arch-starthere.png) 69 | 70 | The CloudFormation template will launch the following: 71 | * VPC with public subnets, routes and Internet Gateway 72 | * An ECS cluster with no EC2 resources because we're using Fargate 73 | * ECR repositories for your container images 74 | * Application Load Balancer to front all your services 75 | * Cloud9 Development Environment 76 | * A DynamoDB table to store your mysfits and their data 77 | 78 | ## Checkpoint: 79 | 80 | The CloudFormation stack will take a few minutes to launch. Periodically check on the stack creation process in the CloudFormation Dashboard. If you select box next to your stack and click on the **Events** tab, you can see what steps it's on. Wait roughly 5-10 minutes until you see "mythical-mysfits-devsecops" in the "Logical ID" row with status **CREATE\_COMPLETE**. 81 | 82 | ![CloudFormation CREATE_COMPLETE](images/cfn-create-complete.png) 83 | 84 | or until you see the overall Stack status in the **Stack Info** tab is **CREATE\_COMPLETE** 85 | 86 | ![CloudFormation CREATE_COMPLETE](images/cfn-create-complete-2.png) 87 | 88 | If there was an [error](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html#troubleshooting-errors) during the stack creation process, CloudFormation will rollback and terminate. You can investigate and troubleshoot by looking in the Events tab. Any errors encountered during stack creation will appear in the event stream as a failure. 89 | 90 | ### Familiarize yourself with the workshop environment 91 | 92 | 1. Access your AWS Cloud9 Development Environment 93 | 94 | In the AWS Management Console, go to the [Cloud9 Dashboard](https://console.aws.amazon.com/cloud9/home) and find your environment which should be prefixed with the name of the CloudFormation stack you created earlier, in our case mythical-mysfits-devsecops. You can also find the name of your environment in the CloudFormation outputs as Cloud9Env. Click **Open IDE**. 95 | 96 | ![Cloud9 Env](images/cloud9.png) 97 | 98 | 2. Familiarize yourself with the Cloud9 Environment 99 | 100 | On the left pane (Blue), any files downloaded to your environment will appear here in the file tree. In the middle (Red) pane, any documents you open will show up here. Test this out by double clicking on README.md in the left pane and edit the file by adding some arbitrary text. Then save it by clicking **File** and **Save**. Keyboard shortcuts will work as well. 101 | 102 | ![Cloud9 Editing](images/cloud9-environment.png) 103 | 104 | On the bottom, you will see a bash shell (Yellow). For the remainder of the lab, use this shell to enter all commands. 
You can also customize your Cloud9 environment by changing themes, moving panes around, etc. As an example, you can change the theme from light to dark by following the instructions [here](https://docs.aws.amazon.com/cloud9/latest/user-guide/settings-theme.html). 105 | 106 | ### Configure Cloud9 Working Environment 107 | 108 | 1. Configure Git credentials 109 | 110 | Since most of the labs are going to be using git, let's set up our permissions now. There are a number of ways to authenticate with git repositories, and specifically CodeCommit in this case, but for the sake of simplicity, we'll use the CodeCommit credential helper here. Enter the following commands to configure git to access CodeCommit. 111 | 112 |
113 |     $ git config --global credential.helper "cache --timeout=7200"
114 |     $ git config --global user.email "REPLACEWITHYOUREMAIL"
115 |     $ git config --global user.name "REPLACEWITHYOURNAME"
116 |     $ git config --global credential.helper '!aws codecommit credential-helper $@'
117 |     $ git config --global credential.UseHttpPath true
118 |     
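Before moving on, you can optionally confirm the configuration took effect; git will simply echo back the settings it is going to use. This check is a suggestion, not part of the original workshop steps:

```
$ git config --global --list | grep -E 'credential|user'
```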
119 | 120 | 2. Clone Workshop Repo 121 | 122 | There are a number of files and startup scripts we have pre-created for you. They're all in the main repo that you're using, so we'll clone that locally. Run this: 123 | 124 | ``` 125 | $ git clone https://github.com/aws-samples/amazon-ecs-mythicalmysfits-workshop.git 126 | ``` 127 | 128 | 3. Bootstrap 129 | 130 | There are a number of files that need to be created in order for your services to run later, so let's create them now. 131 | 132 | ``` 133 | $ cd ~/environment/amazon-ecs-mythicalmysfits-workshop/workshop-2/ 134 | $ script/setup 135 | ``` 136 | 137 | # STOP! Pay attention here because it matters! Choose Your Path. 138 | 139 |
140 | 141 | Click here if you have already attended CON214 or are familiar with Docker, Fargate, and AWS in general. We'll give you instructions on how to run the bootstrap script that will get you to the start of Lab 1. 142 | 143 |
144 | $ cd ~/environment/amazon-ecs-mythicalmysfits-workshop/workshop-2/
145 | $ script/setup_ws1_end
146 | 
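The script may take a minute or two. Once it finishes, you can optionally sanity-check the result from the CLI; the cluster name below is a placeholder (you can find yours with `aws ecs list-clusters`):

```
$ aws ecs list-services --cluster REPLACEME_CLUSTER_NAME
```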
147 | 148 | You should now have 2 Fargate services running in ECS - one for the Monolith service and one for the Like service. These are both sitting behind an ALB. 149 | 150 | One last thing before you move on. Go to the CloudFormation Outputs section of your stack and get the **S3WebsiteEndpoint**. It is an HTTP link. Copy and paste it into your browser window and bookmark it or put it in a note. It should already be working. If you see a bunch of Mysfits, it's working. Otherwise, it's not. 151 | 152 | # Checkpoint 153 | 154 | You made it to the end of Lab 0. You should now have two running services hooked into an ALB. If you visit the S3 static website bucket that was created as part of the bootstrap, it should be working already and you should see a bunch of Mythical Mysfits. Now you're ready to move on to Lab 1 to start your journey to DevSecOps! 155 | 156 | [Proceed to Lab 1](../Lab-1) 157 | 158 |
159 | 160 |
161 | 162 | Click here if you want a refresher or a quick crash course on Docker, Fargate, and AWS in general. You'll do a few of the steps from CON214 to get you to the start of Lab 1. 163 | 164 | 165 | ### Crash Course/Refresher on Workshop 1 (CON214: Monolith to Microservice with Docker and AWS Fargate) 166 | 167 | 1. Build the monolith docker image and test it 168 | 169 | In order for us to use a Docker image, we have to create it first. We'll do it manually here but don't worry, the whole point is to automate all this away. 170 | 171 |
172 | $ cd ~/environment/amazon-ecs-mythicalmysfits-workshop/workshop-2/app/monolith-service
173 | $ docker build -t monolith-service .
174 | 
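If you'd like to confirm the build worked before running anything, listing the local images is a quick optional check:

```
$ docker images monolith-service
```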
175 | 176 | Run the Docker container and test the adoption agency platform running as a container to make sure it responds. 177 | 178 | Use the [docker run](https://docs.docker.com/engine/reference/run/) command to run your image; the -p flag is used to map the host listening port to the container listening port. Note that "Table-REPLACEME_STACKNAME" will need to be updated; replace the ***REPLACEME_STACKNAME*** portion with the name you entered when you created the CloudFormation stack. 179 | 180 |
181 | $ docker run -p 8080:80 -e AWS_DEFAULT_REGION=REPLACEME_REGION -e DDB_TABLE_NAME=Table-REPLACEME_STACKNAME monolith-service
182 | 
183 | 184 | Following our naming conventions, my command would be: 185 |
186 | $ docker run -p 8080:80 -e AWS_DEFAULT_REGION=ap-southeast-1 -e DDB_TABLE_NAME=Table-mythical-mysfits-devsecops monolith-service
187 |  * Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
188 | 
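Optionally, while the container is running, you can hit it from a second terminal tab to confirm it responds. The path below assumes the monolith's /mysfits route; any JSON coming back (even an empty list) means the container is serving requests:

```
$ curl http://localhost:8080/mysfits
```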
189 | 190 | Press **Ctrl + C to exit** 191 | 192 | 2\. Push to the monolith-service ECR Repository 193 | 194 | In order to pull an image to use it, we have to put it somewhere. Similarly to how we use Git and centralized source control systems like GitHub, we'll use Amazon Elastic Container Registry (ECR) to store our images. Let's start by getting the ECR repository that we will be pushing to. Use the CLI to run `aws ecr describe-repositories` and note down both of the **repositoryUri** values for the ECR repositories that were created for you. The **repositoryName** should have the words mono or like in them. Don't worry that the name has a bunch of random characters in it. That's just CloudFormation making uniquely named resources for you. 195 | 196 |
197 | $ aws ecr describe-repositories
198 | {
199 |     "repositories": [
200 |         {
201 |             "registryId": "123456789012",
202 |             "repositoryName": "mythic-mono-ui2nkbotfxk2",
203 |             "repositoryArn": "arn:aws:ecr:eu-west-1:123456789012:repository/mythic-mono-ui2nkbotfxk2",
204 |             "createdAt": 1542995670.0,
205 |             "repositoryUri": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/mythic-mono-ui2nkbotfxk2"
206 |         },
207 |         {
208 |             "registryId": "123456789012",
209 |             "repositoryName": "mythic-like-qhe5ji30css2",
210 |             "repositoryArn": "arn:aws:ecr:eu-west-1:123456789012:repository/mythic-like-qhe5ji30css2",
211 |             "createdAt": 1542995670.0,
212 |             "repositoryUri": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/mythic-like-qhe5ji30css2"
213 |         }
214 |     ]
215 | }
216 | 
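If you'd rather not read through the JSON, the optional --query flag can pull out just the URIs; this is equivalent to noting them down from the output above:

```
$ aws ecr describe-repositories --query 'repositories[*].repositoryUri' --output text
```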
217 | 218 | Now that we have the repository URIs, we can tag and push the images up to ECR for later use. Here we are pushing the monolith-service image to the repository with the word `mono` in it that we noted above. 219 | 220 |
221 | $ $(aws ecr get-login --no-include-email --region ap-southeast-1)
222 | $ docker tag monolith-service:latest REPLACEME_ECR_REPOSITORY_URI_FOR_mythic-mono:latest
223 | $ docker push REPLACEME_ECR_REPOSITORY_URI_FOR_mythic-mono:latest
224 | 
225 | The push refers to repository [123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/mythic-mono-uji1fb14urlq]
226 | a09105a1d2ce: Pushed
227 | b0be10c9aaa2: Pushed
228 | 5a458948ccaa: Pushed
229 | 2fc1a26ddb10: Pushed
230 | 3178611d3d5f: Pushed
231 | 76c033092e10: Pushed
232 | 2146d867acf3: Pushed
233 | ae1f631f14b7: Pushed
234 | 102645f1cf72: Pushed
235 | latest: digest: sha256:5d985802219c5a92ea097d414858d962c125c1ff46cfc70edcdf7f05ac964f62 size: 2206
236 | 
237 | 238 | When you issue the push command, Docker pushes the layers up to ECR, and if you refresh the monolith-service ECR repository page, you'll see an image indicating the latest version. 239 | 240 | 3\. Build the like docker image and push to ECR. 241 | 242 | We already have the repository URIs so let's build the like-service: 243 | 244 |
245 | $ cd ~/environment/amazon-ecs-mythicalmysfits-workshop/workshop-2/app/like-service
246 | $ docker build -t like-service .
247 | 
248 | 249 | *Note: Did you notice that the build time was significantly shorter when building the like-service? That's because most of the layers were already cached* 250 | 251 |
252 | $ docker tag like-service:latest REPLACEME_ECR_REPOSITORY_URI_FOR_mythic-like:latest
253 | $ docker push REPLACEME_ECR_REPOSITORY_URI_FOR_mythic-like:latest
254 | 
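Before moving on, you can optionally confirm that the image landed in ECR; the repository name below is a placeholder, so substitute the like repository name from the describe-repositories output:

```
$ aws ecr list-images --repository-name REPLACEME_LIKE_REPOSITORY_NAME
```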
255 | 256 | 4\. Look at the task definition for the monolith-service 257 | 258 | Task definitions are an integral part of Fargate. A task definition tells Fargate what to run, from how much memory to allocate to which Docker image to use. 259 | 260 | As part of the core infrastructure stack, we've already created task definitions for you, but let's take a look at them to understand what gets updated on a deployment. In the AWS Management Console, navigate to [Task Definitions](https://console.aws.amazon.com/ecs/home#/taskDefinitions) in the ECS dashboard. Check the checkbox next to the monolith-service task definition. It should be named something like Mythical-Mysfits-Monolith-mythical-mysfits-devsecops. Then click on **Create New Revision**. 261 | 262 | ![ECS Create Task Definition Revision](images/ecs-taskdef-describe.png) 263 | 264 | Scroll down to Container Definitions, where you should see that we have pre-defined a monolith-service container. Click on **monolith-service** to see details. Normally, this is where you'd modify the container image to change what you want to deploy to Fargate. However, since we've already pre-populated this, you're all set. 265 | 266 | ![ECS Update Task Definition Image](images/ecs-taskdef-change-image.png) 267 | 268 | Cancel out of everything until you're back to the **Task Definition** page. 269 | 270 | 5\. Create Fargate services 271 | 272 | First, we get the task definition names that we want to use. You saw them in the console earlier, but let's get them from the CLI: 273 | 274 |
275 | $ aws ecs list-task-definitions
276 | {
277 |     "taskDefinitionArns": [
278 |         "arn:aws:ecs:eu-west-1:123456789012:task-definition/Mythical-Mysfits-Like-Service-mythical-mysfits-devsecops:1",
279 |         "arn:aws:ecs:eu-west-1:123456789012:task-definition/Mythical-Mysfits-Monolith-Service-mythical-mysfits-devsecops:1"
280 |     ]
281 | }
282 | 
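If your account has other task definitions and the list is noisy, you can optionally narrow it down by family prefix; the prefix below assumes the default naming used by the core stack:

```
$ aws ecs list-task-definitions --family-prefix Mythical-Mysfits
```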
283 | 284 | Next up we need to create the Fargate services for the monolith service and the like service. We're using AWS CLI skeletons that we've updated to include the output values from the CloudFormation stack. The only thing you have to do is pass in the task definitions you noted down earlier. Run the following commands from your Cloud9 IDE, substituting in the task definitions for the ones you just listed. Make sure to include the number at the very end. 285 | 286 |
287 | $ cd ~/environment/amazon-ecs-mythicalmysfits-workshop/workshop-2/Lab-0
288 | $ aws ecs create-service --cli-input-json file://monolith-service.json --task-definition REPLACE_ME_MONOLITH_TASK_DEFINITION
289 | $ aws ecs create-service --cli-input-json file://like-service.json --task-definition REPLACE_ME_LIKE_TASK_DEFINITION
290 | 
291 | 292 | In my case, things looked like this: 293 | 294 |
295 | aws ecs create-service --cli-input-json file://monolith-service.json --task-definition Mythical-Mysfits-Monolith-Service-mythical-mysfits-devsecops:1
296 | aws ecs create-service --cli-input-json file://like-service.json --task-definition Mythical-Mysfits-Like-Service-mythical-mysfits-devsecops:1
297 | 
298 | 299 | If successful, a large blob of JSON describing your new service will appear. 300 | 301 | 6\. Visit the Mythical Mysfits Homepage 302 | 303 | Finally, let's look at what you've set up. The Mythical Mysfits adoption homepage is where you will be able to view all sorts of information about the Mythical Mysfits. To find out how to get there, go to the CloudFormation outputs section for your CloudFormation stack. Look for an output named **S3WebsiteEndpoint**. It is an HTTP link. Copy and paste it into your browser window and bookmark it or put it in a note. It should already be working. If you see a bunch of Mysfits, it's working. Otherwise, it's not. 304 | 305 | # Checkpoint 306 | 307 | You made it to the end of Lab 0. In one way or another, you should now have two running services hooked into an ALB. If you visit the S3 static website bucket that was created as part of the bootstrap, it should be working already and you should see a bunch of Mythical Mysfits. Now you're ready to move on to Lab 1 to start your journey to DevSecOps! 308 | 309 | [Proceed to Lab 1](../Lab-1) 310 | 311 |
312 | 313 | # The End? 314 | 315 | This is the end of Lab 0, but if you're reading this, you may have gone too far. Make sure you click on the hidden dialogues right above this section. 316 | -------------------------------------------------------------------------------- /Lab-0/images/arch-starthere.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-0/images/arch-starthere.png -------------------------------------------------------------------------------- /Lab-0/images/cfn-create-complete-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-0/images/cfn-create-complete-2.png -------------------------------------------------------------------------------- /Lab-0/images/cfn-create-complete.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-0/images/cfn-create-complete.png -------------------------------------------------------------------------------- /Lab-0/images/cfn-createstack-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-0/images/cfn-createstack-1.png -------------------------------------------------------------------------------- /Lab-0/images/cfn-createstack-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-0/images/cfn-createstack-2.png -------------------------------------------------------------------------------- /Lab-0/images/cfn-iam-capabilities.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-0/images/cfn-iam-capabilities.png -------------------------------------------------------------------------------- /Lab-0/images/cloud9-environment.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-0/images/cloud9-environment.png -------------------------------------------------------------------------------- /Lab-0/images/cloud9-list.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-0/images/cloud9-list.png -------------------------------------------------------------------------------- /Lab-0/images/cloud9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-0/images/cloud9.png -------------------------------------------------------------------------------- /Lab-0/images/deploy-to-aws.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-0/images/deploy-to-aws.png 
-------------------------------------------------------------------------------- /Lab-0/images/ecr-list-repos.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-0/images/ecr-list-repos.png -------------------------------------------------------------------------------- /Lab-0/images/ecs-taskdef-change-image.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-0/images/ecs-taskdef-change-image.png -------------------------------------------------------------------------------- /Lab-0/images/ecs-taskdef-describe.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-0/images/ecs-taskdef-describe.png -------------------------------------------------------------------------------- /Lab-0/images/ecs-taskdef-update-success.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-0/images/ecs-taskdef-update-success.png -------------------------------------------------------------------------------- /Lab-1/README.md: -------------------------------------------------------------------------------- 1 | # Mythical Mysfits: DevSecOps with Docker and AWS Fargate 2 | 3 | ## Lab 1 - Starting the DevSecOps Journey 4 | 5 | In this lab, we are going to start building in DevSecOps. Security is everyone's responsibility and in today, you will ensure that you aren't checking in any AWS secrets like AWS Access and Secret Keys. 6 | 7 | Here's what you'll be doing: 8 | 9 | * [Set up repos](#set-up-repos) 10 | * [Build security right into git commits](#build-security-right-into-git-commits) 11 | * [Remediation](#rsemediation) 12 | 13 | ### Set up repos 14 | 15 | 16 | 1. Clone all repos 17 | 18 | Up until now, Mythical Mysfits hasn't really been doing anything with source repos, so let's start checking things into repos like we're supposed to. First, you'll have to clone the pre-created repositories. You can get the git clone URLs from either the console or CLI. We'll do it from the CLI today. 
19 | 20 | First, list your repositories: 21 | 22 | ``` 23 | $ aws codecommit list-repositories 24 | { 25 | "repositories": [ 26 | { 27 | "repositoryName": "mythical-mysfits-devsecops-like-service", 28 | "repositoryId": "54763f98-c295-4189-a91a-7830ea085aae" 29 | }, 30 | { 31 | "repositoryName": "mythical-mysfits-devsecops-monolith-service", 32 | "repositoryId": "c8aa761e-3ed1-4033-b830-4d9465b51087" 33 | } 34 | ] 35 | } 36 | ``` 37 | 38 | Next, use the batch-get-repositories command to get the clone URLs for both repositories, substituting the names you got from the previous CLI command: 39 | 40 | ``` 41 | $ aws codecommit batch-get-repositories --repository-names mythical-mysfits-devsecops-monolith-service mythical-mysfits-devsecops-like-service 42 | { 43 | "repositories": [ 44 | { 45 | "repositoryName": "mythical-mysfits-devsecops-monolith-service", 46 | "cloneUrlSsh": "ssh://git-codecommit.eu-west-1.amazonaws.com/v1/repos/mythical-mysfits-devsecops-monolith-service", 47 | "lastModifiedDate": 1542588318.447, 48 | "repositoryDescription": "Repository for the Mythical Mysfits monolith service", 49 | "cloneUrlHttp": "https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/mythical-mysfits-devsecops-monolith-service", 50 | "creationDate": 1542588318.447, 51 | "repositoryId": "c8aa761e-3ed1-4033-b830-4d9465b51087", 52 | "Arn": "arn:aws:codecommit:eu-west-1:123456789012:mythical-mysfits-devsecops-monolith-service", 53 | "accountId": "123456789012" 54 | }, 55 | { 56 | "repositoryName": "mythical-mysfits-devsecops-like-service", 57 | "cloneUrlSsh": "ssh://git-codecommit.eu-west-1.amazonaws.com/v1/repos/mythical-mysfits-devsecops-like-service", 58 | "lastModifiedDate": 1542500073.535, 59 | "repositoryDescription": "Repository for the Mythical Mysfits like service", 60 | "cloneUrlHttp": "https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/mythical-mysfits-devsecops-like-service", 61 | "creationDate": 1542500073.535, 62 | "repositoryId": "54763f98-c295-4189-a91a-7830ea085aae", 63 | "Arn": "arn:aws:codecommit:eu-west-1:123456789012:mythical-mysfits-devsecops-like-service", 64 | "accountId": "123456789012" 65 | } 66 | ], 67 | "repositoriesNotFound": [] 68 | } 69 | ``` 70 | 71 | 2. Clone repos and copy in app code 72 | 73 | Earlier in the workshop, we set up the CodeCommit credential helper, so for today, we'll use the HTTPS clone URLs instead of SSH. 74 | 75 |
 76 |     $ cd ~/environment/
 77 |     $ git clone REPLACEME_LIKE_REPOSITORY_cloneUrlHttp
 78 |     $ git clone REPLACEME_MONOLITH_REPOSITORY_cloneUrlHttp
 79 |     $ cp -R ~/environment/amazon-ecs-mythicalmysfits-workshop/workshop-2/app/like-service/* REPLACEME_LIKE_REPOSITORY_NAME
 80 |     $ cp -R ~/environment/amazon-ecs-mythicalmysfits-workshop/workshop-2/app/monolith-service/* REPLACEME_MONOLITH_REPOSITORY_NAME
 81 |     
82 | 83 | ### Build security right into git commits 84 | 85 | Now that we have our repos cloned and are ready to start checking in, let's stop to think about security. Exposed access and secret keys are often very costly for companies and that's what we're going to try and avoid. To achieve this, we're going to use a project called [git-secrets](https://github.com/awslabs/git-secrets). 86 | 87 | Git-Secrets scans commits, commit messages, and "--no-ff merges" to prevent adding secrets into your git repositories. If a commit, commit message, or any commit in a "--no-ff merge" history matches one of your configured prohibited regular expression patterns, then the commit is rejected. 88 | 89 | 1. Install git-secrets 90 | 91 | First thing's first. We have to install git-secrets and set it up. Clone the git-secrets repo: 92 | 93 | ``` 94 | $ cd ~/environment/ 95 | $ git clone https://github.com/awslabs/git-secrets.git 96 | ``` 97 | 98 | Install it as per the instructions on the [git-secrets GitHub page](https://github.com/awslabs/git-secrets#installing-git-secrets): 99 | 100 | ``` 101 | $ cd ~/environment/git-secrets/ 102 | $ sudo make install 103 | $ git secrets --install 104 | ✓ Installed commit-msg hook to .git/hooks/commit-msg 105 | ✓ Installed pre-commit hook to .git/hooks/pre-commit 106 | ✓ Installed prepare-commit-msg hook to .git/hooks/prepare-commit-msg 107 | ``` 108 | 109 | 2. Configure git-secrets 110 | 111 | Git-secrets uses hooks within git to catch whether or not you're committing something you're not supposed to. We will install it into both the repos we cloned: 112 | 113 |
114 |     $ git secrets --register-aws --global
115 |     OK
116 |     $ cd ~/environment/REPLACEME_MONOLITH_REPOSITORY_NAME
117 |     $ git secrets --install
118 |     $ cd ~/environment/REPLACEME_LIKE_REPOSITORY_NAME
119 |     $ git secrets --install
120 |     
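Beyond the stock AWS patterns, git-secrets can also guard against organization-specific secrets. This is optional, and the pattern below is purely an illustration, not something from this workshop:

```
$ git secrets --add 'MYCOMPANY_INTERNAL_TOKEN_[A-Za-z0-9]+'
```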
121 | 122 | 123 | 3. Check in code 124 |
125 |     $ cd ~/environment/REPLACEME_LIKE_REPOSITORY_NAME
126 |     $ git add -A
127 |     $ git commit -m "Initial Commit of like-service repo"
128 |     
129 | 130 | Did you run into any issues? You should! **If not, go back to Lab 1 and make sure git secrets is working.** 131 | 132 | Basically, `git-secrets` scans commits, commit messages, and `--no-ff` merges to prevent adding secrets into your git repositories. If a commit, commit message, or any commit in a `--no-ff merge` history matches one of your configured prohibited regular expression patterns, then the commit is rejected. 133 | 134 | ### Remediation 135 | 136 | 1. Stop following anti-patterns! 137 | 138 | Looks like someone put in some secrets to our application. We should never have any sort of secrets directly built into the application. We have to fix this. This is the output you should have seen: 139 | 140 | ``` 141 | service/mysfits_like.py:19: # Boy I hope someone finds me: AKIAIOSFODNN7EXAMPLS 142 | 143 | [ERROR] Matched one or more prohibited patterns 144 | 145 | Possible mitigations: 146 | - Mark false positives as allowed using: git config --add secrets.allowed ... 147 | - Mark false positives as allowed by adding regular expressions to .gitallowed at repository's root directory 148 | - List your configured patterns: git config --get-all secrets.patterns 149 | - List your configured allowed patterns: git config --get-all secrets.allowed 150 | - List your configured allowed patterns in .gitallowed at repository's root directory 151 | - Use --no-verify if this is a one-time false positive 152 | ``` 153 | 154 | If you see the above output, git-secrets is working. If not, go back to the [Build security right into git commits](#build-security-right-into-git-commits) section. 155 | 156 | In your Cloud9 console, open up the directory `~/environment/mythical-mysfits-devsecops-like-service/service`. You should find the file `mysfits_like.py`. Double-click, open up the file and remove the line. This time, someone just left a commented access key in there so it's not being used, but it could have been bad. 157 | 158 | 159 | 2. Check in the code again 160 | 161 | Now that we've fixed the issue, let's try again. 162 | ``` 163 | $ git add -A 164 | $ git commit -m "Initial Commit of like-service repo" 165 | $ git push origin master 166 | ``` 167 | 168 | 3. Check the rest of the repos for AWS Credentials 169 | 170 | Now let's make sure the rest of the repos don't have any access and secret keys checked in. 171 | 172 |
173 |     $ cd ~/environment/REPLACEME_MONOLITH_REPOSITORY_NAME
174 |     $ git secrets --scan
175 |     $ cd ~/environment/REPLACEME_LIKE_REPOSITORY_NAME
176 |     $ git secrets --scan
177 |     
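git-secrets can also sweep every commit that already exists, not just the files about to be committed. If you want to be extra thorough, run this optional scan inside each repository:

```
$ git secrets --scan-history
```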
178 | 179 | If there were no errors, looks like we're ok. 180 | 181 | # Checkpoint 182 | 183 | This short lab taught you how to start building in security right from the beginning before we even hit any sort of infrastructure. Now we can really get started. 184 | 185 | Proceed to [Lab 2](../Lab-2) 186 | -------------------------------------------------------------------------------- /Lab-2/README.md: -------------------------------------------------------------------------------- 1 | # Mythical Mysfits: DevSecOps with Docker and AWS Fargate 2 | 3 | ## Lab 2 - Offloading Builds to AWS CodeBuild 4 | 5 | In this lab, you will start the process of automating the entire software delivery process. The first step we're going to take is to automate the Docker container builds and push the container image into the Elastic Container Registry. This will allow you to develop and not have to worry too much about build resources. We will use AWS CodeCommit and AWS CodeBuild to automate this process. Then, we'll create a continuous delivery pipeline for our Like service in AWS Fargate. 6 | 7 | You may be thinking, why would I want to offload my builds when I could just do it on my local machine. Well, this is going to be part of your full production pipeline. We'll use the same build system process as you will for production deployments. In the event that something is different on your local machine as it is within the full dev/prod pipeline, this will catch the issue earlier. You can read more about this by looking into **[Shift Left](https://en.wikipedia.org/wiki/Shift_left_testing)**. 8 | 9 | Here's a reference architecture for what you'll be building: 10 | 11 | ![CodeBuild Create](images/arch-codebuild.png) 12 | 13 | Here's what you'll be doing: 14 | 15 | * [Create AWS CodeBuild Project](#create-aws-codebuild-project) 16 | * [Create BuildSpec File](#create-buildspec-file) 17 | * [Test your AWS CodeBuild Project](#test-your-aws-codebuild-project) 18 | 19 | ### Create AWS CodeBuild Project 20 | 21 | 1. Create and configure an AWS CodeBuild project. 22 | 23 | We will be using AWS CodeBuild to offload the builds from the local Cloud9 instance. Let's create the AWS CodeBuild project. In the AWS Management Console, navigate to the [AWS CodeBuild dashboard](https://console.aws.amazon.com/codebuild/home). Click on **Create build project**. 24 | 25 | On the **Create build project** page, enter in the following details: 26 | 27 | - Project Name: Enter `dev-like-service-build` 28 | - Source Provider: Select **AWS CodeCommit** 29 | - Repository: Choose the repo from the CloudFormation stack that looks like StackName-**like-service** 30 | 31 | 32 | **Environment:** 33 | 34 | - Environment Image: Select **Managed Image** - *There are two options. You can either use a predefined Docker container that is curated by CodeBuild, or you can upload your own if you want to customize dependencies etc. 
to speed up build time* 35 | - Operating System: Select **Ubuntu** - *This is the OS that will run your build* 36 | - Runtime: Select **Standard** 37 | - Runtime version: Select **aws/codebuild/standard:1.0** - *This will default to the latest* 38 | - Image version: **aws/codebuild/standard:1.0-1.8.0** 39 | - Privileged: **Checked** - *In order to run Docker inside a Docker container, you need to have elevated privileges* 40 | - Service role: **Existing service role** - *A service role was automatically created for you via CFN* 41 | - Role name: Choose **CFNStackName-CodeBuildServiceRole** - *Look for the service role that has the name of the CFN stack you created previously* 42 | - Uncheck **Allow AWS CodeBuild to modify this service role so it can be used with this build project** 43 | 44 | ![CodeBuild Create Project Part 1](images/cb-create-1-eht.png) 45 | 46 | Expand the **Additional Information** and enter the following in Environment Variables: 47 | 48 | - Name: `AWS_ACCOUNT_ID` - *Enter this string* 49 | - Value: ***`REPLACEME_YOUR_ACCOUNT_ID`*** - *This is YOUR account ID* Run this command to get your 12-digit Account ID ``aws sts get-caller-identity`` 50 | 51 | 52 | **Buildspec:** 53 | 54 | - Build Specification: Select **Use a buildspec file** - *We are going to provide CodeBuild with a buildspec file* 55 | - Buildspec name: Enter `buildspec_dev.yml` - *we'll be using the same repo, but different buildspecs* 56 | 57 | 58 | **Artifacts:** 59 | 60 | - Type: Select **No artifacts** *If there are any build outputs that need to be stored, you can choose to put them in S3.* 61 | 62 | Click **Create build project**. 63 | 64 | 65 | ![CodeBuild Create Project Part 2](images/cb-create-project-2.png) 66 | 67 | 2. Get login, tag, and push commands for ECR 68 | 69 | 70 | We now have the building blocks in place to start automating the builds of our Docker images. You should have previously found all the commands to push/pull from ECR, but if not, follow this. Otherwise, skip to Step 5 and create your Buildspec now. 71 | 72 | In the AWS Management Console, navigate to [Repositories](https://console.aws.amazon.com/ecs/home#/repositories) in the ECS dashboard. Click on the repository with "like" in the name. 73 | 74 | Click on "View Push Commands" and copy the login, build, tag, and push commands to use later. 75 | 76 | ![ECR Get Like Commands](images/ecr-get-like-commands.png) 77 | 78 | ### Create BuildSpec File 79 | 80 | 1. Create BuildSpec file 81 | 82 | AWS CodeBuild uses a definition file called a buildspec Yaml file. The contents of the buildspec will determine what AWS actions CodeBuild should perform. The key parts of the buildspec are Environment Variables, Phases, and Artifacts. See [Build Specification Reference for AWS CodeBuild](http://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html) for more details. 83 | 84 | **At Mythical Mysfits, we want to follow best practices, so there are 2 requirements:** 85 | 86 | 1. We don't use the ***latest*** tag for Docker images. We have decided to use the Commit ID from our source control instead as the tag so we know exactly what image was deployed. 87 | 88 | 2. We want to use multiple buildspec files. One for dev, one for test, one for prod. 89 | 90 | Another developer from the Mythical Mysfits team has started a buildspec_dev file for you, but never got to finishing it. Add the remaining instructions to the buildspec_dev.yml.draft file. The file should be in your like-service folder and already checked in. 
Let's create a dev branch and copy the draft to a buildspec_dev.yml file. 91 | 92 | 93 |
 94 |   $ cd ~/environment/REPLACEME_LIKE_REPO_NAME
 95 |   $ git checkout -b dev
 96 |   $ cp ~/environment/amazon-ecs-mythicalmysfits-workshop/workshop-2/Lab-2/hints/buildspec_dev.yml.draft buildspec_dev.yml
 97 |   
98 | 99 | Now that you have a copy of the draft as your buildspec, you can start editing it. The previous developer left comments indicating what commands you need to add (These comments look like - #[TODO]:). Add the remaining instructions to your buildspec_dev.yml. 100 | 101 | Here are links to documentation and hints to help along the way. If you get stuck, look at the [hintspec_dev.yml](hints/hintspec_dev.yml) file in the hints folder: 102 | 103 |
104 | #[TODO]: Command to log into ECR. Remember, it has to be executed $(maybe like this?)
105 | 
106 | - http://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html
107 | - https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html#sample-docker-files
108 | 
109 | #[TODO]: Build the actual image using the current commit ID as the tag...perhaps there's a CodeBuild environment variable we can use. Remember that we also added a custom environment variable into the CodeBuild project previously: AWS_ACCOUNT_ID. How can you use this?
110 | 
111 | - https://docs.docker.com/get-started/part2/#build-the-app
112 | - https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-env-vars.html
113 | 
114 | #[TODO]: Tag the newly built Docker image so that we can push the image to ECR. See the instructions in your ECR console to find out how to do this. Make sure you use the current commit ID as the tag!
115 | 
116 | #[TODO]: Push the Docker image up to ECR
117 | 
118 | - https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html
119 | - https://docs.docker.com/engine/reference/builder/#entrypoint
120 | 
121 | 122 |
123 | 124 | HINT: Click here for the completed `buildspec_dev.yml` file. 125 | 126 | There are many ways to achieve what we're looking for. In this case, the buildspec looks like this: 127 |
128 | version: 0.2
129 | 
130 | phases:
131 |   pre_build:
132 |     commands:
133 |       - echo Logging in to Amazon ECR...
134 |       - REPOSITORY_URI=REPLACEME_REPO_URI # This was started. Just replace REPLACEME_REPO_URI with your ECR Repo URI
135 |       - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION) # This is the login command from earlier
136 |   build:
137 |     commands:
138 |       - echo Build started on `date`
139 |       - echo Building the Docker image...          
140 |       - docker build -t $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION . # There are a number of variables that are available directly in the CodeBuild build environment. We specified AWS_ACCOUNT_ID earlier, but CODEBUILD_RESOLVED_SOURCE_VERSION is there by default.
141 |       - docker tag $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION # This is the tag command from earlier
142 |   post_build:
143 |     commands:
144 |       - echo Build completed on `date`
145 |       - echo Pushing the Docker image...
146 |       - docker push $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION # This is the push command from earlier
147 | 
148 | 149 | You can copy a pre-created one into your application directory. If you do, make sure you replace the REPOSITORY_URI with the one from your like-service ECR repository! You can get it from the command line with `aws ecr describe-repositories | jq '.repositories[].repositoryUri' | sed s/\"//g | grep like` 150 |
151 | $ cp ~/environment/amazon-ecs-mythicalmysfits-workshop/workshop-2/Lab-2/hints/hintspec_dev.yml buildspec_dev.yml
152 | 
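If you'd rather script the substitution than paste the URI by hand, something along these lines should work from inside your like repository; the JMESPath filter assumes the repository name contains the string "like", which matches the repositories created by the core stack:

```
$ cd ~/environment/REPLACEME_LIKE_REPO_NAME
$ REPO_URI=$(aws ecr describe-repositories --query "repositories[?contains(repositoryName, 'like')].repositoryUri" --output text)
$ sed -i "s|REPLACEME_REPO_URI|$REPO_URI|" buildspec_dev.yml
```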
153 | 154 |
155 | 156 | When we created the buildspec_dev.yml file, we used CODEBUILD_RESOLVED_SOURCE_VERSION. What is CODEBUILD_RESOLVED_SOURCE_VERSION and why didn't we just use CODEBUILD_SOURCE_VERSION? You can find out in the [Environment Variables for Build Environments](http://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-env-vars.html) documentation. 157 | 158 | 159 |
160 | 161 | HINT: Click here for a spoiler! 162 | 163 | For Amazon S3, the version ID associated with the input artifact. For AWS CodeCommit, the commit ID or branch name associated with the version of the source code to be built. For GitHub, the commit ID, branch name, or tag name associated with the version of the source code to be built. Since we will be triggering the build from CLI, the source version passed to CodeBuild will be 'dev', so that's what would show up if you use CODEBUILD_SOURCE_VERSION. Since we are using CODEBUILD_RESOLVED_SOURCE_VERSION, you will see the actual HEAD commit ID for our dev branch.
164 | 165 |
166 | 167 | ### Test Your AWS CodeBuild Project 168 | 169 | 1. Check in your new file into the AWS CodeCommit repository. 170 | 171 | Make sure the name of the file is buildspec_dev.yml and then run these commands (don't forget to git push!): 172 | 173 |
174 |     $ git add buildspec_dev.yml
175 |     $ git commit -m "Adding in support for AWS CodeBuild"
176 |     [dev 6755244] Adding in support for AWS CodeBuild
177 | 
178 | 
179 |     $ git push origin dev
180 | 
181 |     Counting objects: 8, done.
182 |     Compressing objects: 100% (7/7), done.
183 |     Writing objects: 100% (8/8), 1.07 KiB | 546.00 KiB/s, done.
184 |     Total 8 (delta 1), reused 0 (delta 0)
185 |     To https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/mythical-mysfits-devsecops-like-service
186 |      * [new branch]      dev -> dev
187 |     
188 | 189 | 2. Test your build. 190 |
191 |     $ aws codebuild start-build --project-name dev-like-service-build --source-version dev
192 |     {
193 |         "build": {
194 |             "environment": {
195 |                 "computeType": "BUILD_GENERAL1_SMALL",
196 |                 "privilegedMode": true,
197 |                 "image": "aws/codebuild/docker:17.09.0",
198 |                 "type": "LINUX_CONTAINER",
199 |                 "environmentVariables": [
200 |                     {
201 |                         "type": "PLAINTEXT",
202 |                         "name": "AWS_ACCOUNT_ID",
203 |                         "value": "123456789012"
204 |                     },
205 |                     {
206 |                         "type": "PLAINTEXT",
207 |                         "name": "IMAGE_REPO_NAME",
208 |                         "value": "mythical-mysfits-devsecops/like-service"
209 |                     }
210 |                 ]
211 |             },
212 |             "phases": [
213 |                 {
214 |                     "phaseStatus": "SUCCEEDED",
215 |                     "endTime": 1542597587.613,
216 |                     "phaseType": "SUBMITTED",
217 |                     "durationInSeconds": 0,
218 |                     "startTime": 1542597587.318
219 |                 },
220 |                 {
221 |                     "phaseType": "QUEUED",
222 |                     "startTime": 1542597587.613
223 |                 }
224 |             ],
225 |             "timeoutInMinutes": 60,
226 |             "buildComplete": false,
227 |             "logs": {
228 |                 "deepLink": "https://console.aws.amazon.com/cloudwatch/home?region=eu-west-1#logEvent:group=null;stream=null"
229 |             },
230 |             "serviceRole": "arn:aws:iam::123456789012:role/service-role/codebuild-dev-like-service-build-service-role",
231 |             "artifacts": {
232 |                 "location": ""
233 |             },
234 |             "projectName": "dev-like-service-build",
235 |             "cache": {
236 |                 "type": "NO_CACHE"
237 |             },
238 |             "initiator": "IsengardAdministrator/hubertc-Isengard",
239 |             "buildStatus": "IN_PROGRESS",
240 |             "sourceVersion": "6755244",
241 |             "source": {
242 |                 "buildspec": "buildspec_dev.yml",
243 |                 "gitCloneDepth": 1,
244 |                 "type": "CODECOMMIT",
245 |                 "location": "https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/mythical-mysfits-devsecops-like-service",
246 |                 "insecureSsl": false
247 |             },
248 |             "currentPhase": "QUEUED",
249 |             "startTime": 1542597587.318,
250 |             "id": "dev-like-service-build:53be8027-f831-4a5e-8888-41d757e26392",
251 |             "arn": "arn:aws:codebuild:eu-west-1:123456789012:build/dev-like-service-build:53be8027-f831-4a5e-8888-41d757e26392",
252 |             "encryptionKey": "arn:aws:kms:eu-west-1:123456789012:alias/aws/s3"
253 |         }
254 |     }
255 |     
256 | 257 | 3. Get status of build 258 | 259 | 260 | Within the return data, you should see an 'id' section. This is the build ID. In the previous example, it was dev-like-service-build:53be8027-f831-4a5e-8888-41d757e26392. You can either query this build ID using the CLI or visit the CodeBuild console. To find logs about what happened, visit the 'deepLink' URL, which will bring you directly to the CloudWatch Logs console where you can view the logs. 261 | 262 |
263 |     $ aws codebuild batch-get-builds --ids 'dev-like-service-build:53be8027-f831-4a5e-8888-41d757e26392'
264 |     ...
265 |                 "currentPhase": "COMPLETED",
266 |                 "startTime": 1542597587.318,
267 |                 "endTime": 1542597706.584,
268 |                 "id": "dev-like-service-build:53be8027-f831-4a5e-8888-41d757e26392",
269 |                 "arn": "arn:aws:codebuild:eu-west-1:123456789012:build/dev-like-service-build:53be8027-f831-4a5e-8888-41d757e26392",
270 |                 "encryptionKey": "arn:aws:kms:eu-west-1:123456789012:alias/aws/s3"
271 |             }
272 |         ]
273 |     }
274 |     
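If you only want the build status rather than the whole JSON document, an optional --query filter trims the output down to a single word; the build ID shown is the one from the example above, so substitute your own:

```
$ aws codebuild batch-get-builds --ids 'dev-like-service-build:53be8027-f831-4a5e-8888-41d757e26392' --query 'builds[0].buildStatus' --output text
```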
275 | 276 | If all goes well, you should see a lot of successes in the logs and your image in the ECR console. Inspect the **Build Log** if there were any failures. You'll also see these same logs in the CloudWatch Logs console. This will take a few minutes. 277 | 278 | ![Successes in CodeBuild](images/cb-success.png) 279 | 280 | What CodeBuild has done is follow the steps in your buildspec. If you refresh your ECR Repository, you should see a new image that was built, tagged and pushed by CodeBuild. 281 | 282 | ![New ECR Image w/ Commit ID as Tag](images/ecr-new-image.png) 283 | 284 | Now that you are sure that the image can be built in the same environment as production you can test the new image. 285 | 286 | # Checkpoint 287 | 288 | At this point, you have begun the CI/CD process by offloading your builds to make sure the actual production-esque build environment can handle whatever you're going to throw it it. That way, if something goes wrong, you're not trying to push to production or any other environment. You can catch those errors earlier. You've also started building in best practices by not using the :latest tag in Docker. It's very common for beginners to use the :latest tag, but the challenge is that when you do that you don't know exactly what you're deploying without comparing the SHA hash of your image with whatever you have locally. 289 | 290 | You're now ready to build in end to end deployments! 291 | 292 | Proceed to [Lab 3](../Lab-3)! 293 | -------------------------------------------------------------------------------- /Lab-2/hints/buildspec_dev.yml.draft: -------------------------------------------------------------------------------- 1 | version: 0.2 2 | 3 | phases: 4 | pre_build: 5 | commands: 6 | - echo Logging in to Amazon ECR... 7 | - REPOSITORY_URI=REPLACEME_REPO_URI # This was started. Just replace REPLACEME_REPO_URI with your ECR Repo URI 8 | - #[TODO]: Command to log into ECR. Remember, it has to be executed $(maybe like this?) 9 | build: 10 | commands: 11 | - echo Build started on `date` 12 | - echo Building the Docker image... 13 | - #[TODO]: Build the actual image using the current commit ID as the tag 14 | - #[TODO]: Tag the newly built Docker image so that we can push the image to ECR. See the instructions in your ECR console to find out how to do this. Make sure you use the current commit ID as the tag! 15 | post_build: 16 | commands: 17 | - echo Build completed on `date` 18 | - echo Pushing the Docker image... 19 | - #[TODO]: Push the Docker image up to ECR 20 | -------------------------------------------------------------------------------- /Lab-2/hints/hintspec_dev.yml: -------------------------------------------------------------------------------- 1 | version: 0.2 2 | 3 | phases: 4 | pre_build: 5 | commands: 6 | - echo Logging in to Amazon ECR... 7 | - REPOSITORY_URI=REPLACEME_REPO_URI # This was started. Just replace REPLACEME_REPO_URI with your ECR Repo URI 8 | - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION) 9 | build: 10 | commands: 11 | - echo Build started on `date` 12 | - echo Building the Docker image... 13 | - docker build -t $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION . 14 | - docker tag $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION 15 | post_build: 16 | commands: 17 | - echo Build completed on `date` 18 | - echo Pushing the Docker image... 
19 | - docker push $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION 20 | -------------------------------------------------------------------------------- /Lab-2/images/arch-codebuild.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-2/images/arch-codebuild.png -------------------------------------------------------------------------------- /Lab-2/images/cb-create-1-eht.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-2/images/cb-create-1-eht.png -------------------------------------------------------------------------------- /Lab-2/images/cb-create-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-2/images/cb-create-1.png -------------------------------------------------------------------------------- /Lab-2/images/cb-create-project-1-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-2/images/cb-create-project-1-2.png -------------------------------------------------------------------------------- /Lab-2/images/cb-create-project-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-2/images/cb-create-project-1.png -------------------------------------------------------------------------------- /Lab-2/images/cb-create-project-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-2/images/cb-create-project-2.png -------------------------------------------------------------------------------- /Lab-2/images/cb-create-project-done.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-2/images/cb-create-project-done.png -------------------------------------------------------------------------------- /Lab-2/images/cb-create-project-envvar.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-2/images/cb-create-project-envvar.png -------------------------------------------------------------------------------- /Lab-2/images/cb-project-start.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-2/images/cb-project-start.png -------------------------------------------------------------------------------- /Lab-2/images/cb-success.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-2/images/cb-success.png -------------------------------------------------------------------------------- 
/Lab-2/images/ecr-get-like-commands.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-2/images/ecr-get-like-commands.png -------------------------------------------------------------------------------- /Lab-2/images/ecr-new-image.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-2/images/ecr-new-image.png -------------------------------------------------------------------------------- /Lab-3/README.md: -------------------------------------------------------------------------------- 1 | # Mythical Mysfits: DevSecOps with Docker and AWS Fargate 2 | 3 | ## Lab 3 - Automating End to End Deployments for AWS Fargate 4 | 5 | In this lab, you will implement the end to end deployment and testing process for your like service. This lab is where it all comes together. By the end, you will be able to check in new code and have your application automatically updated. 6 | 7 | Here's what you'll be doing: 8 | 9 | * Create new buildspec and merge feature branch 10 | * Create Pipeline for deployments 11 | * Deploy new version of Project Cuddle 12 | 13 | ### Create new buildspec and merge feature branch 14 | 15 | 1\. Create the `buildspec_prod.yml` file 16 | 17 | In Lab 2, we created a buildspec for dev named `buildspec_dev.yml`. That was used when CodeBuild was run directly on source code in CodeCommit, but now for production we want to build a full pipeline that will automatically deploy our environment, so we'll use CodePipeline to orchestrate that. 18 | 19 | Ideally, we want to keep production and development branches as similar as possible, but want to control differences between dev and prod build stages. To start, we can basically copy over what we created for Lab 2, but there will be a few minor changes. We will name the new buildspec `buildspec_prod.yml` instead of `buildspec_dev.yml`. 20 | 21 | Make sure you're in the like repository folder, which should be named something like **CFNStackName-like-service**. 22 | 23 |
 24 | $ cd ~/environment/REPLACE_ME_LIKE_REPOSITORY_NAME
 25 | $ cp buildspec_dev.yml buildspec_prod.yml
 26 | 
27 | 28 | Next, in order for CodePipeline to deploy to Fargate, we need an `imagedefinitions.json` file that includes the name of the container we want to replace as well as the imageUri to replace it with. We also have to declare that file as a build artifact in an artifacts section so CodePipeline can hand it to the deploy stage. The end of your `buildspec_prod.yml` file will look like this: 29 | 30 |
 31 | ...
 32 |     post_build:
 33 |       commands:
 34 |         - echo Build completed on `date`
 35 |         - echo Pushing the Docker image...
 36 |         - docker push $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION
 37 |         - printf '[{"name":"REPLACEME_CONTAINERNAME","imageUri":"%s"}]' $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION > imagedefinitions.json
 38 | artifacts:
 39 |   files: imagedefinitions.json
 40 | 
41 | 42 | Replace the container name (`REPLACEME_CONTAINERNAME`) with the name of your service, which should be `like-service`. 43 | 44 |
45 | HINT: There's also a completed file in hints/buildspec_prod.yml. Click here to see how to copy it in. 46 | Make sure you change REPLACEME_REPO_URI to your ECR repository URI! 47 |
 48 |   $ cp ~/environment/amazon-ecs-mythicalmysfits-workshop/workshop-2/Lab-3/hints/buildspec_prod.yml ~/environment/REPLACEME_REPO_NAME/buildspec_prod.yml
 49 |   
50 |
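For reference, the `printf` line above is what produces `imagedefinitions.json`. With placeholder values filled in (the account ID, region, repository name, and commit ID below are made up; yours will differ), the generated file looks like this:

<pre>
$ printf '[{"name":"like-service","imageUri":"%s"}]' 111122223333.dkr.ecr.us-west-2.amazonaws.com/mythical-like:1a2b3c4d > imagedefinitions.json
$ cat imagedefinitions.json
[{"name":"like-service","imageUri":"111122223333.dkr.ecr.us-west-2.amazonaws.com/mythical-like:1a2b3c4d"}]
</pre>

The ECS deploy action in CodePipeline reads this file to figure out which container in the task definition to update ("name") and which image to point it at ("imageUri").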
51 | 52 | 2\. Check in and push to dev 53 | 54 | Add, commit, and push the new file to your repo. You can try to build the app again, but CodeBuild will just do the same thing because it's still looking at `buildspec_dev.yml`. 55 | 56 |
 57 |   $ git add buildspec_prod.yml
 58 |   $ git commit -m "Adding a buildspec for prod"
 59 |   $ git push origin dev
 60 | 
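If you'd like to confirm from the terminal that the dev project really is still pointed at `buildspec_dev.yml`, you can ask CodeBuild for the project's source configuration. The project name below is a placeholder; substitute whatever you named your CodeBuild project in Lab 2:

<pre>
$ aws codebuild batch-get-projects --names REPLACEME_DEV_BUILD_PROJECT \
    --query 'projects[0].source.buildspec' --output text
</pre>

This should print `buildspec_dev.yml`, which is why pushing `buildspec_prod.yml` to the dev branch doesn't change the dev build's behavior.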
61 | 62 | 3\. Merge into master branch 63 | 64 | Now that we're ready with all the code, let's merge it into our master branch. The master branch is what we're going to use to trigger all CodePipeline deployments. 65 | 66 | First, switch back to your master branch: 67 |
 68 |   $ git checkout master
 69 | 
70 | 71 | Next, merge in all your changes: 72 | 73 |
 74 |   $ git merge dev
 75 |   $ git push origin master
 76 | 
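Before wiring up the pipeline, it's worth a quick sanity check that the merge actually landed on master. The commit message and file listing below are illustrative; your output will differ:

<pre>
$ git log --oneline -1
# should show your "Adding a buildspec for prod" commit
$ ls buildspec_*.yml
buildspec_dev.yml  buildspec_prod.yml
</pre>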
77 | 78 | ### Create Pipeline for deployments 79 | 80 | 1\. Create an AWS CodePipeline Pipeline and set it up to listen to AWS CodeCommit. 81 | 82 | Now it's time to hook everything together. In the AWS Management Console, navigate to the [AWS CodePipeline](https://console.aws.amazon.com/codepipeline/home#/) dashboard. Click on **Create Pipeline**. 83 | 84 | On the following pages, enter the following details: 85 | 86 | **Choose pipeline settings:** 87 | 88 | - Pipeline name: `prod-like-service` - *This is a production pipeline, so we'll prefix with prod* 89 | - Service role: **Existing service role** - *A service role was automatically created for you via CFN* 90 | - Role name: Choose **CFNStackName-CodeBuildServiceRole** - *Look for the service role that has the name of the CFN stack you created previously* 91 | - Artifact store: Choose **Custom location** - *An artifact bucket was created for you via CFN* 92 | - Bucket: Choose **CFNStackName-mythicalartifactbucket** - *Look for the artifact bucket that has the name of the CFN stack you created previously. Note that there are two buckets that were created for you. Look for the one that says mythicalartifactbucket* 93 | 94 | Click **Next** 95 | 96 | ![CodePipeline Choose pipeline settings](images/cp-create-name.png) 97 | 98 | **Add source stage:** 99 | 100 | - Source provider: **AWS CodeCommit** - *We checked in our code to CodeCommit, so that's where we'll start. CodePipeline also supports a variety of different source providers. Try them out later!* 101 | - Repository name: **CFNStackName-like-service** - *Name of your CodeCommit repo for the like-service* 102 | - Branch name: **master** - *We want this to automatically trigger when we push to the master branch in our repo* 103 | - Change detection options: **Amazon CloudWatch Events (recommended)** - *You have the option of using CodePipeline to poll CodeCommit for changes every minute, but using CloudWatch Events will trigger CodePipeline executions based on events, so it's much faster* 104 | 105 | Click **Next**. 106 | 107 | ![CodePipeline Add source stage](images/cp-add-source.png) 108 | 109 | **Add build stage:** 110 | 111 | - Build provider: **AWS CodeBuild** 112 | - Project name: Click **Create project** button 113 | 114 | A new window should appear. The values here are almost identical to that of Lab-2 when you created your dev CodeBuild project, with the exception that the name is now prod-like-service-build and the buildspec will be buildspec_prod.yml. See Lab-2 instructions for detailed screenshots. 115 | 116 | **Create build project:** 117 | 118 | - Project name: `prod-like-service-build` 119 | - Environment Image: Select **Managed Image** - *There are two options. You can either use a predefined Docker container that is curated by CodeBuild, or you can upload your own if you want to customize dependencies etc. to speed up build time* 120 | - Operating System: Select **Ubuntu** - *This is the OS that will run your build* 121 | - Runtime: Select **Standard** 122 | - Runtime version: Select **aws/codebuild/standard:1.0** - *This will default to the latest* 123 | - Image version: **aws/codebuild/standard:1.0-1.8.0** 124 | - Privileged: **Checked** 125 | - Service role: **Existing service role** - *A service role was automatically created for you via CFN* 126 | - Role name: Choose **CFNStackName-CodeBuildServiceRole** - *Look for the service role that has the name of the CFN stack you created previously. 
It will be in the form of **CFNStackName**-CodeBuildServiceRole* 127 | 128 | - Uncheck **Allow AWS CodeBuild to modify this service role so it can be used with this build project** 129 | 130 | Expand the **Additional Information** and enter the following in Environment Variables: 131 | 132 | - Name: `AWS_ACCOUNT_ID` - *Enter this string* 133 | - Value: ***`REPLACEME_YOUR_ACCOUNT_ID`*** - *This is YOUR account ID.* Run this command to get your 12-digit account ID: ``aws sts get-caller-identity`` 134 | 135 | **Buildspec:** 136 | 137 | - Build Specification: Select **Use a buildspec file** - *We are going to provide CodeBuild with a buildspec file* 138 | - Buildspec name: Enter `buildspec_prod.yml` - *Using our new buildspec* 139 | 140 | Once confirmed, click **Continue to CodePipeline**. This should close out the popup and tell you that it **successfully created prod-like-service-build in CodeBuild.** 141 | 142 | Now on the **Add build stage** page, you will see that "Project name" has been set to the build project you just created in CodeBuild. 143 | 144 | - Project Name: **prod-like-service-build** 145 | 146 | ![CodePipeline Created Build Project](images/cp-create-cb-complete.png) 147 | 148 | Click **Next**. 149 | 150 | **Add deploy stage:** 151 | 152 | - Deployment provider: Select **Amazon ECS** - *This is the mechanism we're choosing to deploy with. CodePipeline also supports several other deployment options, but we're using ECS directly in this case.* 153 | - Cluster Name: Select your ECS Cluster. In my case, **Cluster-mythical-mysfits-devsecops** - *This is the cluster that CodePipeline will deploy into.* 154 | - Service Name: Enter `CFNStackName-Mythical-like-service` - *This is the ECS service that CodePipeline will deploy the new image to* 155 | - Image definitions file - *optional*: Enter `imagedefinitions.json` - *This is the file we created within the buildspec_prod.yml file earlier* 156 | 157 | ![CodePipeline ECS](images/cp-deploy-step.png) 158 | 159 | Click **Next**. 160 | 161 | Review your details and click **Create Pipeline**. 162 | 163 | 2\. Test your pipeline. 164 | 165 | By default, when you create your pipeline, CodePipeline will automatically run through and try to deploy your application. If you see it go through all three stages with all GREEN, you're good to go. Otherwise, click into the links it gives you and troubleshoot the deployment. 166 | 167 | ![CodePipeline ECS](images/cp-deploy-success.png) 168 | 169 | ## Deploy new version of Project Cuddle 170 | 171 | Now that you have your application deploying automatically, let's deploy a new version! We've upgraded the health check for our like service to make sure it can connect to the monolith service for the fulfillment method. 172 | 173 |
174 | $ cd ~/environment/REPLACEME_LIKE_REPO
175 | $ cp ~/environment/amazon-ecs-mythicalmysfits-workshop/workshop-2/Lab-3/mysfits_like_v2.py service/mysfits_like.py
176 | $ git add service/mysfits_like.py
177 | $ git commit -m "Cuddles v2"
178 | $ git push origin master
179 | 
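If you prefer watching from the terminal instead of the console, you can poll the pipeline and the ECS deployment with the CLI. The pipeline name matches what we created above; the cluster and service names are placeholders for the ones in your stack:

<pre>
$ aws codepipeline get-pipeline-state --name prod-like-service \
    --query 'stageStates[].{stage:stageName,status:latestExecution.status}'
$ aws ecs describe-services --cluster REPLACEME_CLUSTER_NAME \
    --services REPLACEME_LIKE_SERVICE_NAME \
    --query 'services[0].deployments[].{status:status,running:runningCount,desired:desiredCount}'
</pre>

Once the Deploy stage goes green and the new deployment's running count reaches its desired count, the updated like service is live.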
180 | 181 | Now sit back, relax, and watch the deployment. When it's done, congratulations! You've unlocked the automated build and deploy achievement! Next up, head on over to [Lab 4](../Lab-4) for the "Sec" in DevSecOps. 182 | -------------------------------------------------------------------------------- /Lab-3/hints/buildspec_prod.yml: -------------------------------------------------------------------------------- 1 | version: 0.2 2 | 3 | phases: 4 | pre_build: 5 | commands: 6 | - echo Logging in to Amazon ECR... 7 | - REPOSITORY_URI=REPLACEME_REPO_URI 8 | - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION) 9 | build: 10 | commands: 11 | - echo Build started on `date` 12 | - echo Building the Docker image... 13 | - docker build -t $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION . # Here, we are using the environment variable passed in via CodeBuild IMAGE_REPO_NAME 14 | - docker tag $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION 15 | post_build: 16 | commands: 17 | - echo Build completed on `date` 18 | - echo Pushing the Docker image... 19 | - docker push $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION 20 | - printf '[{"name":"like-service","imageUri":"%s"}]' $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION > imagedefinitions.json 21 | artifacts: 22 | files: imagedefinitions.json 23 | -------------------------------------------------------------------------------- /Lab-3/images/cp-add-source.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-3/images/cp-add-source.png -------------------------------------------------------------------------------- /Lab-3/images/cp-create-cb-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-3/images/cp-create-cb-1.png -------------------------------------------------------------------------------- /Lab-3/images/cp-create-cb-complete.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-3/images/cp-create-cb-complete.png -------------------------------------------------------------------------------- /Lab-3/images/cp-create-name.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-3/images/cp-create-name.png -------------------------------------------------------------------------------- /Lab-3/images/cp-create-source.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-3/images/cp-create-source.png -------------------------------------------------------------------------------- /Lab-3/images/cp-deploy-step.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-3/images/cp-deploy-step.png -------------------------------------------------------------------------------- /Lab-3/images/cp-deploy-success.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-3/images/cp-deploy-success.png -------------------------------------------------------------------------------- /Lab-3/mysfits_like_v2.py: -------------------------------------------------------------------------------- 1 | import os 2 | import requests 3 | from urlparse import urlparse 4 | from flask import Flask, jsonify, json, Response, request 5 | from flask_cors import CORS 6 | 7 | app = Flask(__name__) 8 | CORS(app) 9 | 10 | # The service basepath has a short response just to ensure that healthchecks 11 | # sent to the service root will receive a healthy response. 12 | @app.route("/") 13 | def health_check_response(): 14 | url = urlparse('http://{}/'.format(os.environ['MONOLITH_URL'])) 15 | response = requests.get(url=url.geturl()) 16 | 17 | flask_response = jsonify({"message" : "Health check, monolith service available."}) 18 | flask_response.status_code = response.status_code 19 | return flask_response 20 | 21 | # indicate that the provided mysfit should be marked as liked. 22 | def process_like_request(): 23 | print('Like processed.') 24 | 25 | def fulfill_like(mysfit_id): 26 | url = urlparse('http://{}/mysfits/{}/fulfill-like'.format(os.environ['MONOLITH_URL'], mysfit_id)) 27 | return requests.post(url=url.geturl()) 28 | 29 | 30 | @app.route("/mysfits/<mysfit_id>/like", methods=['POST']) 31 | def like_mysfit(mysfit_id): 32 | process_like_request() 33 | service_response = fulfill_like(mysfit_id) 34 | 35 | flask_response = Response(service_response) 36 | flask_response.headers["Content-Type"] = "application/json" 37 | 38 | return flask_response 39 | 40 | # Run the service on the local server it has been deployed to 41 | if __name__ == "__main__": 42 | app.run(host="0.0.0.0", port=80) 43 | -------------------------------------------------------------------------------- /Lab-4/README.md: -------------------------------------------------------------------------------- 1 | # Mythical Mysfits: DevSecOps with Docker and AWS Fargate 2 | 3 | ## Lab 4: Implement Container Image scanning 4 | 5 | In this lab, we will really put the Sec in DevSecOps by including Clair, a static vulnerability scanner for container images. Clair is a service that requires a Postgres DB and a running container. Because you already know how to stand up ECS services, we've already stood up the Clair service for you in the core.yml CloudFormation template. Now we're going to add another CodeBuild project to run Clair via a tool called Klar, add a test phase to the pipeline we created in Lab 3, then run our CodeBuild project in that test phase. Klar is a wrapper around the Clair APIs that makes it easier to use Clair in integration pipelines like this one. Check out the [Klar GitHub repo](https://github.com/optiopay/klar) for more info. 6 | 7 | Why would we use Clair? Injecting automated security into the pipeline gives you the same benefit as automated tests - the ability to move quickly with confidence that you haven't regressed on security. 8 | 9 | Here's what you'll be doing: 10 | 11 | * [Create AWS CodeBuild Project](#create-aws-codebuild-project) 12 | * [Create Buildspec and Test the Pipeline](#create-buildspec-and-test-the-pipeline) 13 | 14 | ### Create AWS CodeBuild Project 15 | 1\. Create the new CodeBuild project.
16 | 17 | In the AWS Management Console, navigate to the [AWS CodePipeline](https://console.aws.amazon.com/codepipeline/home#/) dashboard. Choose the pipeline you created in Lab 3, which should be named prod-like-service, then choose Edit in the upper right. 18 | 19 | ![Edit Pipeline](images/edit-pipeline.png) 20 | 21 | We're going to add a new stage, "Test", between the "Build" and "Deploy" stages. Scroll down and choose "Add Stage" between "Build" and "Deploy". Name the stage "Test" then choose "Add stage". Choose "Add action group" inside of our new stage. Name the new action "Clair" and choose CodeBuild from the dropdown. New fields will appear - choose "Create project" similar to what we did in Lab 3. 22 | 23 | ![Clair Action](images/clair-action.png) 24 | 25 | - Project Name: Enter `prod-like-service-test` 26 | 27 | **Environment:** 28 | 29 | - Environment Image: Select **Managed Image** - *There are two options. You can either use a predefined Docker container that is curated by CodeBuild, or you can upload your own if you want to customize dependencies etc. to speed up build time* 30 | - Operating System: Select **Ubuntu** - *This is the OS that will run your build* 31 | - Runtime: Select **Standard** 32 | - Runtime version: Select **aws/codebuild/standard:1.0** 33 | - Image version: **aws/codebuild/standard:1.0-1.8.0** 34 | - Privileged: **Checked** 35 | - Service role: **Existing service role** - *A service role was automatically created for you via CFN* 36 | - Role name: Choose **CFNStackName-CodeBuildServiceRole** - *Look for the service role that has the name of the CFN stack you created previously* 37 | 38 | - Uncheck **Allow AWS CodeBuild to modify this service role so it can be used with this build project** 39 | 40 | 41 | 42 | Expand the **Additional Information** and enter the following in Environment Variables: 43 | 44 | - Name: `CLAIR_URL` - *Enter this string* 45 | - Value: ***`REPLACE_ME_LoadBalancerDNS`*** - *This is an output from your CloudFormation stack - LoadBalancerDNS.* 46 | 47 | 48 | 49 | **Buildspec:** 50 | 51 | - Build Specification: Select **Use a buildspec file** - *We are going to provide CodeBuild with a buildspec file* 52 | - Buildspec name: Enter `buildspec_clair.yml` - *we'll be using the same repo, but different buildspecs* 53 | 54 | Choose **Continue to CodePipeline**. 55 | 56 | ![CodeBuild Create Project Part 1](images/cb-create-test-project-2.png) 57 | 58 | You will now be back at the CodePipeline console. In the Input Artifacts section we're going to do something different than before - we're going to use **multiple artifacts**. Select *SourceArtifact* from the dropdown, then choose add, then choose *BuildArtifact*. Another dropdown will appear above, *Primary Artifact*. Select *SourceArtifact* here. Finally leave your **output artifacts empty**. What we've done here is inject the source artifact from CodeCommit as well as the build artifact (imagedefinitions.json) from the build phase. The source artifact is our primary input artifact because it's where our buildspec is. 59 | 60 | ![CodePipeline Create Action](images/cp-create-action2.jpeg) 61 | 62 | 63 | Save the action, choose "Done" to finish editing the stage, then choose "Save" to save the entire pipeline. 64 | 65 | ### Create Buildspec and Test the Pipeline 66 | 67 | 1\. Create BuildSpec and Test the Pipeline 68 | 69 | Just like in Lab 2 and Lab 3, we're going to create a buildspec for running Clair. 
See [Build Specification Reference for AWS CodeBuild](http://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html) for more details on the buildspec format. 70 | 71 | Our intern from last summer got started with Clair but didn't finish before she went back to school. Her buildspec is ready to rock; it just needs to be copied into your source repository. 72 | 73 |
74 | $ cd ~/environment/REPLACEME_LIKE_REPO_NAME
75 | $ git checkout master
76 | $ cp ~/environment/amazon-ecs-mythicalmysfits-workshop/workshop-2/Lab-4/hints/buildspec_clair.yml buildspec_clair.yml
77 | 
78 | 79 | 2\. Check your new file into the AWS CodeCommit repository. 80 | 81 | Make sure the name of the file is buildspec_clair.yml and then run these commands: 82 | 83 |
84 | $ git add buildspec_clair.yml
85 | $ git commit -m "Adding in support for Clair."
86 | $ git push origin master
87 | 
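While the Test stage runs, you can also check on the Clair scan from the terminal. Assuming the test project is named `prod-like-service-test` as above (adjust if you picked a different name), the most recent build and its current phase can be fetched like this:

<pre>
$ aws codebuild list-builds-for-project --project-name prod-like-service-test --max-items 1
$ aws codebuild batch-get-builds --ids REPLACEME_BUILD_ID \
    --query 'builds[0].{status:buildStatus,phase:currentPhase}'
</pre>

Use the build ID returned by the first command as `REPLACEME_BUILD_ID` in the second.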
88 | 89 | If all goes well, you should see that Clair inspected your image and didn't find anything wrong. Choose "AWS CodeBuild" in the Clair action to go to the CodeBuild project. Inspect the **Build Log** to see if there was a failure. You'll also see these same logs in the CloudWatch Logs console. It may take a few minutes before anything shows up. 90 | 91 | ![Klar logs](images/klar-logs.png) 92 | 93 | From here, any run of the pipeline will include an automated security check to look for CVEs and other vulnerabilities. Congratulations! You've finished the second workshop in this track. If you're heading on to the last workshop in the series, feel free to leave everything as is. 94 | 95 | Whether you move on to the next workshop or wrap up here, make sure you [clean everything up](../README.md#workshop-cleanup) when you're done! 96 | -------------------------------------------------------------------------------- /Lab-4/hints/buildspec_clair.yml: -------------------------------------------------------------------------------- 1 | version: 0.2 2 | 3 | phases: 4 | pre_build: 5 | commands: 6 | # Grabbing Klar, our integration tool. 7 | - echo Grabbing Klar 8 | - wget https://github.com/optiopay/klar/releases/download/v2.3.0/klar-2.3.0-linux-amd64 9 | - chmod +x ./klar-2.3.0-linux-amd64 10 | - mv ./klar-2.3.0-linux-amd64 ./klar 11 | 12 | # Getting the image URI 13 | - cd $CODEBUILD_SRC_DIR_BuildArtifact 14 | - wget https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64 -O jq 15 | - chmod +x jq 16 | - IMAGE_URI=$(./jq -r '.[0] | .imageUri' imagedefinitions.json) 17 | - cd $CODEBUILD_SRC_DIR 18 | 19 | # Getting our Docker registry (ECR) login info for Klar and Clair to use 20 | - echo Getting Docker registry login info 21 | - DOCKER_LOGIN=`aws ecr get-login --region $AWS_DEFAULT_REGION` 22 | - PASSWORD=`echo $DOCKER_LOGIN | cut -d' ' -f6` 23 | build: 24 | commands: 25 | - echo Calling Clair with Klar 26 | - DOCKER_USER=AWS DOCKER_PASSWORD=${PASSWORD} CLAIR_ADDR=$CLAIR_URL ./klar $IMAGE_URI 27 | post_build: 28 | commands: 29 | - echo Finished 30 | -------------------------------------------------------------------------------- /Lab-4/images/arch-codebuild.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-4/images/arch-codebuild.png -------------------------------------------------------------------------------- /Lab-4/images/cb-create-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-4/images/cb-create-1.png -------------------------------------------------------------------------------- /Lab-4/images/cb-create-project-1-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-4/images/cb-create-project-1-2.png -------------------------------------------------------------------------------- /Lab-4/images/cb-create-project-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-4/images/cb-create-project-1.png --------------------------------------------------------------------------------
/Lab-4/images/cb-create-project-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-4/images/cb-create-project-2.png -------------------------------------------------------------------------------- /Lab-4/images/cb-create-project-envvar.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-4/images/cb-create-project-envvar.png -------------------------------------------------------------------------------- /Lab-4/images/cb-create-test-project-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-4/images/cb-create-test-project-1.png -------------------------------------------------------------------------------- /Lab-4/images/cb-create-test-project-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-4/images/cb-create-test-project-2.png -------------------------------------------------------------------------------- /Lab-4/images/cb-success.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-4/images/cb-success.png -------------------------------------------------------------------------------- /Lab-4/images/clair-action.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-4/images/clair-action.png -------------------------------------------------------------------------------- /Lab-4/images/cloud9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-4/images/cloud9.png -------------------------------------------------------------------------------- /Lab-4/images/cp-create-action.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-4/images/cp-create-action.png -------------------------------------------------------------------------------- /Lab-4/images/cp-create-action2.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-4/images/cp-create-action2.jpeg -------------------------------------------------------------------------------- /Lab-4/images/ecr-get-like-commands.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-4/images/ecr-get-like-commands.png -------------------------------------------------------------------------------- /Lab-4/images/ecr-new-image.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-4/images/ecr-new-image.png -------------------------------------------------------------------------------- /Lab-4/images/edit-pipeline.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-4/images/edit-pipeline.png -------------------------------------------------------------------------------- /Lab-4/images/klar-logs.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/Lab-4/images/klar-logs.png -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Mythical Mysfits: DevSecOps with Docker and AWS Fargate 2 | 3 | ## Recording: 4 | 5 | - Lab0 https://asciinema.org/a/ZfiNXylAJkNm4PhFb8PO9WO1P 6 | - Lab 1 https://asciinema.org/a/T417bRBumxkxMV2nT2Yt1WRTO 7 | - Lab 2 https://asciinema.org/a/4JYRT4kxm8RgBKR9zVEFinJ4t 8 | - Lab 3 https://asciinema.org/a/KqbKs0uYkqnPqsG01DueIVdIL 9 | 10 | ## Overview 11 | ![mysfits-welcome](/images/mysfits-welcome.png) 12 | 13 | **Mythical Mysfits** is a (fictional) pet adoption non-profit dedicated to helping abandoned, and often misunderstood, mythical creatures find a new forever family! Mythical Mysfits believes that all creatures deserve a second chance, even if they spent their first chance hiding under bridges and unapologetically robbing helpless travelers. 14 | 15 | Our business has been thriving with only a single Mysfits adoption center, located inside Devils Tower National Monument. Speak, friend, and enter should you ever come to visit. 16 | 17 | We've just had a surge of new mysfits arrive at our door with nowhere else to go! They're all pretty distraught after not only being driven from their homes... but an immensely grumpy ogre has also denied them all entry at a swamp they've used for refuge in the past. 18 | 19 | That's why we've hired you to be our first Full Stack Engineer. We need a more scalable way to show off our inventory of mysfits and let families adopt them. We'd like you to build the first Mythical Mysfits adoption website to help introduce these lovable, magical, often mischievous creatures to the world! 20 | 21 | We're growing, but we're struggling to keep up with our new mysfits mainly due to our legacy inventory platform. We heard about the benefits of containers, especially in the context of microservices and devsecops. We've already taken some steps in that direction, but can you help us take this to the next level? 22 | 23 | We've already moved to a microservice based model, but are still not able to develop quickly. We want to be able to deploy to our microservices as quickly as possible while maintaining a certain level of confidence that our code will work well. This is where you come in. 24 | 25 | If you are not familiar with DevOps, there are multiple facets to the the word. One focuses on organizational values, such as small, well rounded agile teams focusing on owning a particular service, whereas one focuses on automating the software delivery process as much as possible to shorten the time between code check in and customers testing and providing feedback. 
This allows us to shorten the feedback loop and iterate based on customer requirements at a much quicker rate. 26 | 27 | In this workshop, you will take our Mythical stack and apply concepts of CI/CD to their environment. To do this, you will create a pipeline to automate all deployments using AWS CodeCommit or GitHub, AWS CodeBuild, AWS CodePipeline, and AWS Fargate. Today, the Mythical stack runs on AWS Fargate following a microservice architecture, meaning that there are very strict API contracts that are in place. As part of the move to a more continuous delivery model, they would like to make sure these contracts are always maintained. 28 | 29 | The tools that we use in this workshop are part of the AWS Dev Tools stack, but are by no means an end all be all. What you should focus on is the idea of CI/CD and how you can apply it to your environments. 30 | 31 | ### Requirements: 32 | * AWS account - if you don't have one, it's easy and free to [create one](https://aws.amazon.com/) 33 | * AWS IAM account with elevated privileges allowing you to interact with CloudFormation, IAM, EC2, ECS, ECR, ALB, VPC, SNS, CloudWatch, AWS CodeCommit, AWS CodeBuild, AWS CodePipeline 34 | * Familiarity with Python, vim/emacs/nano, [Docker](https://www.docker.com/), [basic GIT commands](https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-basic-git.html), AWS and microservices - not required but a bonus 35 | 36 | ### What you'll do: 37 | 38 | These labs are designed to be completed in sequence, and the full set of instructions are documented below. Read and follow along to complete the labs. If you're at a live AWS event, the workshop attendants will give you a high level run down of the labs and help answer any questions. Don't worry if you get stuck, we provide hints along the way. 39 | 40 | * **[Lab 0](Lab-0):** Deploy Existing Mythical Stack 41 | * **[Lab 1](Lab-1):** Integrating Security Right from the Get Go 42 | * **[Lab 2](Lab-2):** Offloading Builds to AWS CodeBuild 43 | * **[Lab 3](Lab-3):** Automating End to End Deployments for AWS Fargate 44 | * **[Lab 4](Lab-4):** Moar Security! Implementing Container Image scanning 45 | * **Workshop Cleanup** [Cleanup working environment](#workshop-cleanup) 46 | 47 | ### Conventions: 48 | Throughout this workshop, we will provide commands for you to run in the terminal. These commands will look like this: 49 | 50 |
51 | $ ssh -i PRIVATE_KEY.PEM ec2-user@EC2_PUBLIC_DNS_NAME
52 | 
53 | 54 | The command starts after the $. Text that is ***UPPER_ITALIC_BOLD*** indicates a value that is unique to your environment. For example, the ***PRIVATE\_KEY.PEM*** refers to the private key of an SSH key pair that you've created, and the ***EC2\_PUBLIC\_DNS\_NAME*** is a value that is specific to an EC2 instance launched in your account. You can find these unique values either in the CloudFormation outputs or by going to the specific service dashboard in the [AWS management console](https://console.aws.amazon.com). 55 | 56 | If you are asked to enter a specific value in a text field, the value will look like `VALUE`. 57 | 58 | Hints are also provided along the way and will look like this: 59 | 60 |
61 | HINT 62 | 63 | **Nice work, you just revealed a hint!** 64 |
65 | 66 | 67 | *Click on the arrow to show the contents of the hint.* 68 | 69 | ### IMPORTANT: Workshop Cleanup 70 | 71 | You will be deploying infrastructure on AWS which will have an associated cost. If you're attending an AWS event, credits will be provided. When you're done with the workshop, [follow the steps at the very end of the instructions](#workshop-cleanup) to make sure everything is cleaned up and avoid unnecessary charges. 72 | 73 | * * * 74 | 75 | ## Let's Begin! 76 | 77 | [Go to Lab-0 to set up your environment](Lab-0) 78 | 79 | ### Workshop Cleanup 80 | 81 | This is really important because if you leave stuff running in your account, it will continue to generate charges. Certain things were created by CloudFormation and certain things were created manually throughout the workshop. Follow the steps below to make sure you clean up properly. 82 | 83 | 1. Delete any manually created resources throughout the labs, e.g. CodePipeline Pipelines and CodeBuild projects. Certain things like task definitions do not have a cost associated, so you don't have to worry about that. If you're not sure what has a cost, you can always look it up on our website. All of our pricing is publicly available, or feel free to ask one of the workshop attendants when you're done. 84 | 2. Go to the CodePipeline console and delete prod-like-service. Hit Edit and then Delete. 85 | 3. Delete any container images stored in ECR, delete CloudWatch logs groups, and empty/delete S3 buckets 86 | 4. In your ECS Cluster, edit all services to have 0 tasks and delete all services 87 | 5. Delete log groups in CloudWatch Logs 88 | 6. Finally, delete the CloudFormation stack launched at the beginning of the workshop to clean up the rest. If the stack deletion process encountered errors, look at the Events tab in the CloudFormation dashboard, and you'll see what steps failed. It might just be a case where you need to clean up a manually created asset that is tied to a resource goverened by CloudFormation. 89 | 90 | -------------------------------------------------------------------------------- /app/buildspec.yml: -------------------------------------------------------------------------------- 1 | # A buildspec.yml file informs AWS CodeBuild of all the actions that should be 2 | # taken during a build execution for our application. We are able to divide the 3 | # build execution in separate pre-defined phases for logical organization, and 4 | # list the commands that will be executed on the provisioned build server 5 | # performing a build execution job. 6 | version: 0.2 7 | 8 | phases: 9 | pre_build: 10 | commands: 11 | - echo Logging in to Amazon ECR... 12 | # Retrieves docker credentials so that the subsequent docker push command is 13 | # authorized. Authentication is performed automatically by the AWS CLI 14 | # using the AWS credentials associated with the IAM role assigned to the 15 | # instances in your AWS CodeBuild project. 16 | - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION) 17 | build: 18 | commands: 19 | - echo Build started on `date` 20 | - echo Building the Docker image... 21 | - docker build -t mythicalmysfits/service:latest . 22 | # Tag the built docker image using the appropriate Amazon ECR endpoint and relevant 23 | # repository for our service container. This ensures that when the docker push 24 | # command is executed later, it will be pushed to the appropriate repository. 
25 | - docker tag mythicalmysfits/service:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/mythicalmysfits/service:latest 26 | post_build: 27 | commands: 28 | - echo Build completed on `date` 29 | - echo Pushing the Docker image.. 30 | # Push the image to ECR. 31 | - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/mythicalmysfits/service:latest 32 | - echo Completed pushing Docker image. Deploying Docker image to AWS Fargate on `date` 33 | # Create a artifacts file that contains the name and location of the image 34 | # pushed to ECR. This will be used by AWS CodePipeline to automate 35 | # deployment of this specific container to Amazon ECS. 36 | - printf '[{"name":"MythicalMysfits-Service","imageUri":"%s"}]' $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/mythicalmysfits/service:latest > imagedefinitions.json 37 | artifacts: 38 | # Indicate that the created imagedefinitions.json file created on the previous 39 | # line is to be referenceable as an artifact of the build execution job. 40 | files: imagedefinitions.json 41 | -------------------------------------------------------------------------------- /app/like-service/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:latest 2 | RUN echo Updating existing packages, installing and upgrading python and pip. 3 | RUN apt-get update -y 4 | RUN apt-get install -y python-pip python-dev build-essential 5 | RUN pip install --upgrade pip 6 | RUN echo Copying the Mythical Mysfits Flask service into a service directory. 7 | COPY ./service /MythicalMysfitsService 8 | WORKDIR /MythicalMysfitsService 9 | RUN echo Installing Python packages listed in requirements.txt 10 | RUN pip install -r ./requirements.txt 11 | RUN echo Starting python and starting the Flask service... 12 | ENTRYPOINT ["python"] 13 | CMD ["mysfits_like.py"] 14 | -------------------------------------------------------------------------------- /app/like-service/service/mysfits_like.py: -------------------------------------------------------------------------------- 1 | import os 2 | import requests 3 | from urlparse import urlparse 4 | from flask import Flask, jsonify, json, Response, request 5 | from flask_cors import CORS 6 | 7 | app = Flask(__name__) 8 | CORS(app) 9 | 10 | # The service basepath has a short response just to ensure that healthchecks 11 | # sent to the service root will receive a healthy response. 12 | @app.route("/") 13 | def health_check_response(): 14 | return jsonify({"message" : "Nothing here, used for health check."}) 15 | # indicate that the provided mysfit should be marked as liked. 16 | 17 | def process_like_request(): 18 | print('Like processed.') 19 | # Boy I hope someone finds me: AKIAIOSFODNN7EXAMPLS 20 | 21 | def fulfill_like(mysfit_id): 22 | url = urlparse('http://{}/mysfits/{}/fulfill-like'.format(os.environ['MONOLITH_URL'], mysfit_id)) 23 | return requests.post(url=url.geturl()) 24 | 25 | 26 | @app.route("/mysfits//like", methods=['POST']) 27 | def like_mysfit(mysfit_id): 28 | process_like_request() 29 | service_response = fulfill_like(mysfit_id) 30 | 31 | flask_response = Response(service_response) 32 | flask_response.headers["Content-Type"] = "application/json" 33 | 34 | return flask_response 35 | 36 | # Run the service on the local server it has been deployed to, 37 | # listening on port 8080. 
38 | if __name__ == "__main__": 39 | app.run(host="0.0.0.0", port=80) 40 | -------------------------------------------------------------------------------- /app/like-service/service/requirements.txt: -------------------------------------------------------------------------------- 1 | Flask==0.12.2 2 | flask-cors==3.0.0 3 | requests==1.2.3 4 | -------------------------------------------------------------------------------- /app/monolith-service/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:latest 2 | RUN echo Updating existing packages, installing and upgrading python and pip. 3 | RUN apt-get update -y 4 | RUN apt-get install -y python-pip python-dev build-essential 5 | RUN pip install --upgrade pip 6 | RUN echo Copying the Mythical Mysfits Flask service into a service directory. 7 | COPY ./service /MythicalMysfitsService 8 | WORKDIR /MythicalMysfitsService 9 | RUN echo Installing Python packages listed in requirements.txt 10 | RUN pip install -r ./requirements.txt 11 | RUN echo Starting python and starting the Flask service... 12 | ENTRYPOINT ["python"] 13 | CMD ["mythicalMysfitsService.py"] 14 | -------------------------------------------------------------------------------- /app/monolith-service/service/mysfitsTableClient.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | import json 3 | import logging 4 | import os 5 | from collections import defaultdict 6 | 7 | # create a DynamoDB client using boto3. The boto3 library will automatically 8 | # use the credentials associated with our ECS task role to communicate with 9 | # DynamoDB, so no credentials need to be stored/managed at all by our code! 10 | client = boto3.client('dynamodb') 11 | table_name = os.environ['DDB_TABLE_NAME'] 12 | 13 | def getAllMysfits(): 14 | 15 | # Retrieve all Mysfits from DynamoDB using the DynamoDB scan operation. 16 | # Note: The scan API can be expensive in terms of latency when a DynamoDB 17 | # table contains a high number of records and filters are applied to the 18 | # operation that require a large amount of data to be scanned in the table 19 | # before a response is returned by DynamoDB. For high-volume tables that 20 | # receive many requests, it is common to store the result of frequent/common 21 | # scan operations in an in-memory cache. DynamoDB Accelerator (DAX) or 22 | # use of ElastiCache can provide these benefits. But, because out Mythical 23 | # Mysfits API is low traffic and the table is very small, the scan operation 24 | # will suit our needs for this workshop. 25 | response = client.scan( 26 | TableName=table_name 27 | ) 28 | 29 | logging.info(response["Items"]) 30 | 31 | # loop through the returned mysfits and add their attributes to a new dict 32 | # that matches the JSON response structure expected by the frontend. 
33 | mysfitList = defaultdict(list) 34 | for item in response["Items"]: 35 | mysfit = {} 36 | mysfit["mysfitId"] = item["MysfitId"]["S"] 37 | mysfit["name"] = item["Name"]["S"] 38 | mysfit["goodevil"] = item["GoodEvil"]["S"] 39 | mysfit["lawchaos"] = item["LawChaos"]["S"] 40 | mysfit["species"] = item["Species"]["S"] 41 | mysfit["thumbImageUri"] = item["ThumbImageUri"]["S"] 42 | mysfitList["mysfits"].append(mysfit) 43 | 44 | # convert the create list of dicts in to JSON 45 | return json.dumps(mysfitList) 46 | 47 | def queryMysfits(queryParam): 48 | 49 | logging.info(json.dumps(queryParam)) 50 | 51 | # Use the DynamoDB API Query to retrieve mysfits from the table that are 52 | # equal to the selected filter values. 53 | response = client.query( 54 | TableName=table_name, 55 | IndexName=queryParam['filter']+'Index', 56 | KeyConditions={ 57 | queryParam['filter']: { 58 | 'AttributeValueList': [ 59 | { 60 | 'S': queryParam['value'] 61 | } 62 | ], 63 | 'ComparisonOperator': "EQ" 64 | } 65 | } 66 | ) 67 | 68 | mysfitList = defaultdict(list) 69 | for item in response["Items"]: 70 | mysfit = {} 71 | mysfit["mysfitId"] = item["MysfitId"]["S"] 72 | mysfit["name"] = item["Name"]["S"] 73 | mysfit["goodevil"] = item["GoodEvil"]["S"] 74 | mysfit["lawchaos"] = item["LawChaos"]["S"] 75 | mysfit["species"] = item["Species"]["S"] 76 | mysfit["thumbImageUri"] = item["ThumbImageUri"]["S"] 77 | mysfitList["mysfits"].append(mysfit) 78 | 79 | return json.dumps(mysfitList) 80 | 81 | # Retrive a single mysfit from DynamoDB using their unique mysfitId 82 | def getMysfit(mysfitId): 83 | 84 | # use the DynamoDB API GetItem, which gives you the ability to retrieve 85 | # a single item from a DynamoDB table using its unique key with super 86 | # low latency. 87 | response = client.get_item( 88 | TableName=table_name, 89 | Key={ 90 | 'MysfitId': { 91 | 'S': mysfitId 92 | } 93 | } 94 | ) 95 | 96 | item = response["Item"] 97 | 98 | mysfit = {} 99 | mysfit["mysfitId"] = item["MysfitId"]["S"] 100 | mysfit["name"] = item["Name"]["S"] 101 | mysfit["age"] = int(item["Age"]["N"]) 102 | mysfit["goodevil"] = item["GoodEvil"]["S"] 103 | mysfit["lawchaos"] = item["LawChaos"]["S"] 104 | mysfit["species"] = item["Species"]["S"] 105 | mysfit["description"] = item["Description"]["S"] 106 | mysfit["thumbImageUri"] = item["ThumbImageUri"]["S"] 107 | mysfit["profileImageUri"] = item["ProfileImageUri"]["S"] 108 | mysfit["likes"] = item["Likes"]["N"] 109 | mysfit["adopted"] = item["Adopted"]["BOOL"] 110 | 111 | return json.dumps(mysfit) 112 | 113 | # increment the number of likes for a mysfit by 1 114 | def likeMysfit(mysfitId): 115 | 116 | # Use the DynamoDB API UpdateItem to increment the number of Likes 117 | # the mysfit has by 1 using an UpdateExpression. 118 | response = client.update_item( 119 | TableName=table_name, 120 | Key={ 121 | 'MysfitId': { 122 | 'S': mysfitId 123 | } 124 | }, 125 | UpdateExpression="SET Likes = Likes + :n", 126 | ExpressionAttributeValues={':n': {'N': '1'}} 127 | ) 128 | 129 | response = {} 130 | response["Update"] = "Success"; 131 | 132 | return json.dumps(response) 133 | 134 | # mark a mysfit as adopted 135 | def adoptMysfit(mysfitId): 136 | 137 | # Use the DynamoDB API UpdateItem to set the value of the mysfit's 138 | # Adopted attribute to True using an UpdateExpression. 
139 | response = client.update_item( 140 | TableName=table_name, 141 | Key={ 142 | 'MysfitId': { 143 | 'S': mysfitId 144 | } 145 | }, 146 | UpdateExpression="SET Adopted = :b", 147 | ExpressionAttributeValues={':b': {'BOOL': True}} 148 | ) 149 | 150 | response = {} 151 | response["Update"] = "Success"; 152 | 153 | return json.dumps(response) 154 | -------------------------------------------------------------------------------- /app/monolith-service/service/mythicalMysfitsService.py: -------------------------------------------------------------------------------- 1 | from flask import Flask, jsonify, json, Response, request 2 | from flask_cors import CORS 3 | import mysfitsTableClient 4 | import os 5 | 6 | app = Flask(__name__) 7 | CORS(app) 8 | 9 | # The service basepath has a short response just to ensure that healthchecks 10 | # sent to the service root will receive a healthy response. 11 | @app.route("/") 12 | def healthCheckResponse(): 13 | return jsonify({"message" : "Nothing here, used for health check. Try /mysfits instead."}) 14 | 15 | # Retrive mysfits from DynamoDB based on provided querystring params, or all 16 | # mysfits if no querystring is present. 17 | @app.route("/mysfits", methods=['GET']) 18 | def getMysfits(): 19 | 20 | filterCategory = request.args.get('filter') 21 | if filterCategory: 22 | filterValue = request.args.get('value') 23 | queryParam = { 24 | 'filter': filterCategory, 25 | 'value': filterValue 26 | } 27 | serviceResponse = mysfitsTableClient.queryMysfits(queryParam) 28 | else: 29 | serviceResponse = mysfitsTableClient.getAllMysfits() 30 | 31 | flaskResponse = Response(serviceResponse) 32 | flaskResponse.headers["Content-Type"] = "application/json" 33 | 34 | return flaskResponse 35 | 36 | def process_like_request(): 37 | print('Like processed.') 38 | 39 | # retrieve the full details for a specific mysfit with their provided path 40 | # parameter as their ID. 41 | @app.route("/mysfits/", methods=['GET']) 42 | def getMysfit(mysfit_id): 43 | serviceResponse = mysfitsTableClient.getMysfit(mysfit_id) 44 | 45 | flaskResponse = Response(serviceResponse) 46 | flaskResponse.headers["Content-Type"] = "application/json" 47 | 48 | return flaskResponse 49 | 50 | # increment the number of likes for the provided mysfit. 51 | @app.route("/mysfits//like", methods=['POST']) 52 | def likeMysfit(mysfit_id): 53 | serviceResponse = mysfitsTableClient.likeMysfit(mysfit_id) 54 | process_like_request() 55 | flaskResponse = Response(serviceResponse) 56 | flaskResponse.headers["Content-Type"] = "application/json" 57 | return flaskResponse 58 | 59 | # @app.route("/mysfits//fulfill-like", methods=['POST']) 60 | # def likeMysfit(mysfit_id): 61 | # serviceResponse = mysfitsTableClient.likeMysfit(mysfit_id) 62 | # flaskResponse = Response(serviceResponse) 63 | # flaskResponse.headers["Content-Type"] = "application/json" 64 | # return flaskResponse 65 | 66 | # indicate that the provided mysfit should be marked as adopted. 67 | @app.route("/mysfits//adopt", methods=['POST']) 68 | def adoptMysfit(mysfit_id): 69 | serviceResponse = mysfitsTableClient.adoptMysfit(mysfit_id) 70 | 71 | flaskResponse = Response(serviceResponse) 72 | flaskResponse.headers["Content-Type"] = "application/json" 73 | 74 | return flaskResponse 75 | 76 | # Run the service on the local server it has been deployed to, 77 | # listening on port 8080. 
78 | if __name__ == "__main__": 79 | app.run(host="0.0.0.0", port=80) 80 | -------------------------------------------------------------------------------- /app/monolith-service/service/requirements.txt: -------------------------------------------------------------------------------- 1 | Flask==0.12.2 2 | flask-cors==3.0.0 3 | boto3==1.7.16 4 | -------------------------------------------------------------------------------- /core.yml: -------------------------------------------------------------------------------- 1 | --- 2 | AWSTemplateFormatVersion: '2010-09-09' 3 | Description: This stack deploys the core network infrastructure and IAM resources 4 | to be used for a service hosted in Amazon ECS using AWS Fargate. 5 | 6 | Parameters: 7 | SkipBucket: 8 | Type: String 9 | Default: "false" 10 | AllowedValues: 11 | - "true" 12 | - "false" 13 | 14 | ClairDBPassword: 15 | Description: The initial Clair DB password in RDS. Minimum length 8, must only contain alphanumeric characteres 16 | NoEcho: true 17 | Type: String 18 | MinLength: 8 19 | MaxLength: 41 20 | AllowedPattern: "[a-zA-Z0-9]*" 21 | 22 | Conditions: 23 | MakeBucket: 24 | Fn::Equals: ["false", !Ref SkipBucket] 25 | 26 | Mappings: 27 | # Hard values for the subnet masks. These masks define 28 | # the range of internal IP addresses that can be assigned. 29 | # The VPC can have all IP's from 10.0.0.0 to 10.0.255.255 30 | # There are four subnets which cover the ranges: 31 | # 32 | # 10.0.0.0 - 10.0.0.255 33 | # 10.0.1.0 - 10.0.1.255 34 | # 10.0.2.0 - 10.0.2.255 35 | # 10.0.3.0 - 10.0.3.255 36 | # 37 | # If you need more IP addresses (perhaps you have so many 38 | # instances that you run out) then you can customize these 39 | # ranges to add more 40 | SubnetConfig: 41 | VPC: 42 | CIDR: '10.0.0.0/16' 43 | PublicOne: 44 | CIDR: '10.0.0.0/24' 45 | PublicTwo: 46 | CIDR: '10.0.1.0/24' 47 | PrivateOne: 48 | CIDR: '10.0.2.0/24' 49 | PrivateTwo: 50 | CIDR: '10.0.3.0/24' 51 | 52 | Resources: 53 | 54 | # ECSServiceLinkedRole: 55 | # Type: "AWS::IAM::ServiceLinkedRole" 56 | # Properties: 57 | # AWSServiceName: "ecs.amazonaws.com" 58 | # Description: "Role to enable Amazon ECS to manage your cluster." 
59 | 60 | MythicalArtifactBucket: 61 | Type: AWS::S3::Bucket 62 | DeletionPolicy: Delete 63 | 64 | MythicalBucket: 65 | Type: AWS::S3::Bucket 66 | Condition: MakeBucket 67 | Properties: 68 | AccessControl: PublicRead 69 | WebsiteConfiguration: 70 | IndexDocument: index.html 71 | ErrorDocument: error.html 72 | 73 | MythicalMonolithGitRepository: 74 | Type: AWS::CodeCommit::Repository 75 | Properties: 76 | RepositoryDescription: Repository for the Mythical Mysfits monolith service 77 | RepositoryName: !Sub ${AWS::StackName}-monolith-service 78 | 79 | MythicalLikeGitRepository: 80 | Type: AWS::CodeCommit::Repository 81 | Properties: 82 | RepositoryDescription: Repository for the Mythical Mysfits like service 83 | RepositoryName: !Sub ${AWS::StackName}-like-service 84 | 85 | Mono: 86 | Type: AWS::ECR::Repository 87 | 88 | Like: 89 | Type: AWS::ECR::Repository 90 | 91 | MythicalEcsCluster: 92 | Type: AWS::ECS::Cluster 93 | Properties: 94 | ClusterName: !Sub Cluster-${AWS::StackName} 95 | 96 | MythicalMonolithLogGroup: 97 | Type: AWS::Logs::LogGroup 98 | 99 | MythicalLikeLogGroup: 100 | Type: AWS::Logs::LogGroup 101 | 102 | MythicalMonolithTaskDefinition: 103 | Type: AWS::ECS::TaskDefinition 104 | Properties: 105 | Cpu: 256 106 | ExecutionRoleArn: !GetAtt EcsServiceRole.Arn 107 | Family: !Sub Mythical-Mysfits-Monolith-Service-${AWS::StackName} 108 | Memory: 512 109 | NetworkMode: awsvpc 110 | RequiresCompatibilities: 111 | - FARGATE 112 | TaskRoleArn: !GetAtt ECSTaskRole.Arn 113 | ContainerDefinitions: 114 | - Name: monolith-service 115 | Image: !Sub ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${Mono}:latest 116 | PortMappings: 117 | - ContainerPort: 80 118 | Protocol: http 119 | Environment: 120 | - Name: UPSTREAM_URL 121 | Value: !GetAtt MythicalLoadBalancer.DNSName 122 | - Name: DDB_TABLE_NAME 123 | Value: !Ref MythicalDynamoTable 124 | LogConfiguration: 125 | LogDriver: awslogs 126 | Options: 127 | awslogs-group: !Ref MythicalMonolithLogGroup 128 | awslogs-region: !Ref AWS::Region 129 | awslogs-stream-prefix: awslogs-mythicalmysfits-service 130 | Essential: true 131 | 132 | MythicalLikeTaskDefinition: 133 | Type: AWS::ECS::TaskDefinition 134 | Properties: 135 | Cpu: 256 136 | ExecutionRoleArn: !GetAtt EcsServiceRole.Arn 137 | Family: !Sub Mythical-Mysfits-Like-Service-${AWS::StackName} 138 | Memory: 512 139 | NetworkMode: awsvpc 140 | RequiresCompatibilities: 141 | - FARGATE 142 | TaskRoleArn: !GetAtt ECSTaskRole.Arn 143 | ContainerDefinitions: 144 | - Name: like-service 145 | Image: !Sub ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${Like}:latest 146 | PortMappings: 147 | - ContainerPort: 80 148 | Protocol: http 149 | Environment: 150 | - Name: MONOLITH_URL 151 | Value: !GetAtt MythicalLoadBalancer.DNSName 152 | LogConfiguration: 153 | LogDriver: awslogs 154 | Options: 155 | awslogs-group: !Ref MythicalLikeLogGroup 156 | awslogs-region: !Ref AWS::Region 157 | awslogs-stream-prefix: awslogs-mythicalmysfits-service 158 | Essential: true 159 | 160 | MythicalLoadBalancerSecurityGroup: 161 | Type: AWS::EC2::SecurityGroup 162 | Properties: 163 | GroupName: !Sub SecurityGroup-${AWS::StackName} 164 | GroupDescription: Access to the load balancer 165 | VpcId: !Ref 'VPC' 166 | SecurityGroupIngress: 167 | # Allow access to ALB from anywhere on the internet 168 | - IpProtocol: tcp 169 | FromPort: 80 170 | ToPort: 80 171 | CidrIp: 0.0.0.0/0 172 | - IpProtocol: tcp 173 | FromPort: 6060 174 | ToPort: 6061 175 | CidrIp: 0.0.0.0/0 176 | 177 | MythicalLoadBalancer: 178 | Type: 
AWS::ElasticLoadBalancingV2::LoadBalancer 179 | Properties: 180 | Name: !Sub alb-${AWS::StackName} 181 | Scheme: internet-facing 182 | Type: application 183 | SecurityGroups: 184 | - !Ref MythicalLoadBalancerSecurityGroup 185 | Subnets: 186 | - !Ref PublicSubnetOne 187 | - !Ref PublicSubnetTwo 188 | 189 | MythicalListener: 190 | Type: AWS::ElasticLoadBalancingV2::Listener 191 | Properties: 192 | DefaultActions: 193 | - TargetGroupArn: !Ref MythicalMonolithTargetGroup 194 | Type: forward 195 | LoadBalancerArn: !Ref MythicalLoadBalancer 196 | Port: 80 197 | Protocol: HTTP 198 | 199 | MythicalMonolithTargetGroup: 200 | Type: AWS::ElasticLoadBalancingV2::TargetGroup 201 | Properties: 202 | HealthCheckIntervalSeconds: 10 203 | HealthCheckPath: / 204 | HealthCheckProtocol: HTTP 205 | HealthyThresholdCount: 3 206 | UnhealthyThresholdCount: 3 207 | Port: 80 208 | Protocol: HTTP 209 | VpcId: !Ref VPC 210 | TargetType: ip 211 | 212 | MythicalLikeTargetGroup: 213 | Type: AWS::ElasticLoadBalancingV2::TargetGroup 214 | Properties: 215 | HealthCheckIntervalSeconds: 10 216 | HealthCheckPath: / 217 | HealthCheckProtocol: HTTP 218 | HealthyThresholdCount: 3 219 | UnhealthyThresholdCount: 3 220 | Port: 80 221 | Protocol: HTTP 222 | VpcId: !Ref VPC 223 | TargetType: ip 224 | 225 | MythicalLikeListenerRule: 226 | Type: AWS::ElasticLoadBalancingV2::ListenerRule 227 | Properties: 228 | Actions: 229 | - Type: forward 230 | TargetGroupArn: !Ref MythicalLikeTargetGroup 231 | Conditions: 232 | - Field: path-pattern 233 | Values: 234 | - "/mysfits/*/like/" 235 | ListenerArn: !Ref MythicalListener 236 | Priority: 1 237 | 238 | # VPC in which containers will be networked. 239 | # It has two public subnets, and two private subnets. 240 | # We distribute the subnets across the first two available subnets 241 | # for the region, for high availability. 242 | VPC: 243 | Type: AWS::EC2::VPC 244 | Properties: 245 | EnableDnsSupport: true 246 | EnableDnsHostnames: true 247 | CidrBlock: !FindInMap ['SubnetConfig', 'VPC', 'CIDR'] 248 | Tags: 249 | - Key: Name 250 | Value: !Sub Mysfits-VPC-${AWS::StackName} 251 | 252 | # Two public subnets, where a public load balancer will later be created. 
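The listener above sends all traffic to the monolith target group by default, while the listener rule forwards only paths matching `/mysfits/*/like/` to the like-service target group. Once Fargate services are actually registered behind these target groups in a later lab, you can exercise the rule with something like the sketch below; the `/mysfits` route, the POST verb, and the mysfit ID are assumptions about the application code, and the load balancer name assumes the default stack name.

```bash
# Look up the ALB DNS name (alb-<stack name> per the template above).
ALB_DNS=$(aws elbv2 describe-load-balancers \
  --names alb-mythical-mysfits-devsecops \
  --query 'LoadBalancers[0].DNSName' --output text)

# Unmatched paths fall through to the default action (monolith target group).
curl -s "http://${ALB_DNS}/mysfits" | head -c 200; echo

# Paths matching /mysfits/*/like/ hit the like-service target group.
curl -s -X POST "http://${ALB_DNS}/mysfits/abc123/like/"
```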
253 | PublicSubnetOne: 254 | Type: AWS::EC2::Subnet 255 | Properties: 256 | AvailabilityZone: 257 | Fn::Select: 258 | - 0 259 | - Fn::GetAZs: {Ref: 'AWS::Region'} 260 | VpcId: !Ref 'VPC' 261 | CidrBlock: !FindInMap ['SubnetConfig', 'PublicOne', 'CIDR'] 262 | MapPublicIpOnLaunch: true 263 | Tags: 264 | - Key: Name 265 | Value: !Sub MysfitsPublicOne-${AWS::StackName} 266 | PublicSubnetTwo: 267 | Type: AWS::EC2::Subnet 268 | Properties: 269 | AvailabilityZone: 270 | Fn::Select: 271 | - 1 272 | - Fn::GetAZs: {Ref: 'AWS::Region'} 273 | VpcId: !Ref 'VPC' 274 | CidrBlock: !FindInMap ['SubnetConfig', 'PublicTwo', 'CIDR'] 275 | MapPublicIpOnLaunch: true 276 | Tags: 277 | - Key: Name 278 | Value: !Sub MysfitsPublicTwo-${AWS::StackName} 279 | 280 | # Two private subnets where containers will only have private 281 | # IP addresses, and will only be reachable by other members of the 282 | # VPC 283 | PrivateSubnetOne: 284 | Type: AWS::EC2::Subnet 285 | Properties: 286 | AvailabilityZone: 287 | Fn::Select: 288 | - 0 289 | - Fn::GetAZs: {Ref: 'AWS::Region'} 290 | VpcId: !Ref 'VPC' 291 | CidrBlock: !FindInMap ['SubnetConfig', 'PrivateOne', 'CIDR'] 292 | Tags: 293 | - Key: Name 294 | Value: !Sub MysfitsPrivateOne-${AWS::StackName} 295 | PrivateSubnetTwo: 296 | Type: AWS::EC2::Subnet 297 | Properties: 298 | AvailabilityZone: 299 | Fn::Select: 300 | - 1 301 | - Fn::GetAZs: {Ref: 'AWS::Region'} 302 | VpcId: !Ref 'VPC' 303 | CidrBlock: !FindInMap ['SubnetConfig', 'PrivateTwo', 'CIDR'] 304 | Tags: 305 | - Key: Name 306 | Value: !Sub MysfitsPrivateTwo-${AWS::StackName} 307 | 308 | # Setup networking resources for the public subnets. 309 | InternetGateway: 310 | Type: AWS::EC2::InternetGateway 311 | GatewayAttachement: 312 | Type: AWS::EC2::VPCGatewayAttachment 313 | Properties: 314 | VpcId: !Ref 'VPC' 315 | InternetGatewayId: !Ref 'InternetGateway' 316 | PublicRouteTable: 317 | Type: AWS::EC2::RouteTable 318 | Properties: 319 | VpcId: !Ref 'VPC' 320 | PublicRoute: 321 | Type: AWS::EC2::Route 322 | DependsOn: GatewayAttachement 323 | Properties: 324 | RouteTableId: !Ref 'PublicRouteTable' 325 | DestinationCidrBlock: '0.0.0.0/0' 326 | GatewayId: !Ref 'InternetGateway' 327 | PublicSubnetOneRouteTableAssociation: 328 | Type: AWS::EC2::SubnetRouteTableAssociation 329 | Properties: 330 | SubnetId: !Ref PublicSubnetOne 331 | RouteTableId: !Ref PublicRouteTable 332 | PublicSubnetTwoRouteTableAssociation: 333 | Type: AWS::EC2::SubnetRouteTableAssociation 334 | Properties: 335 | SubnetId: !Ref PublicSubnetTwo 336 | RouteTableId: !Ref PublicRouteTable 337 | 338 | # Setup networking resources for the private subnets. Containers 339 | # in these subnets have only private IP addresses, and must use a NAT 340 | # gateway to talk to the internet. We launch two NAT gateways, one for 341 | # each private subnet. 
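The four subnets above split the 10.0.0.0/16 VPC into two public and two private /24 ranges spread across the first two Availability Zones. A quick, hedged way to confirm the layout after stack creation (the stack name is assumed to be the Lab 0 default):

```bash
# Resolve the VPC the stack created, then list its subnets.
VPC_ID=$(aws cloudformation describe-stack-resource \
  --stack-name mythical-mysfits-devsecops \
  --logical-resource-id VPC \
  --query 'StackResourceDetail.PhysicalResourceId' --output text)

aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=${VPC_ID}" \
  --query 'Subnets[].[CidrBlock,AvailabilityZone,MapPublicIpOnLaunch]' \
  --output table
```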
342 | NatGatewayOneAttachment: 343 | Type: AWS::EC2::EIP 344 | DependsOn: GatewayAttachement 345 | Properties: 346 | Domain: vpc 347 | NatGatewayTwoAttachment: 348 | Type: AWS::EC2::EIP 349 | DependsOn: GatewayAttachement 350 | Properties: 351 | Domain: vpc 352 | NatGatewayOne: 353 | Type: AWS::EC2::NatGateway 354 | Properties: 355 | AllocationId: !GetAtt NatGatewayOneAttachment.AllocationId 356 | SubnetId: !Ref PublicSubnetOne 357 | NatGatewayTwo: 358 | Type: AWS::EC2::NatGateway 359 | Properties: 360 | AllocationId: !GetAtt NatGatewayTwoAttachment.AllocationId 361 | SubnetId: !Ref PublicSubnetTwo 362 | PrivateRouteTableOne: 363 | Type: AWS::EC2::RouteTable 364 | Properties: 365 | VpcId: !Ref 'VPC' 366 | PrivateRouteOne: 367 | Type: AWS::EC2::Route 368 | Properties: 369 | RouteTableId: !Ref PrivateRouteTableOne 370 | DestinationCidrBlock: 0.0.0.0/0 371 | NatGatewayId: !Ref NatGatewayOne 372 | PrivateRouteTableOneAssociation: 373 | Type: AWS::EC2::SubnetRouteTableAssociation 374 | Properties: 375 | RouteTableId: !Ref PrivateRouteTableOne 376 | SubnetId: !Ref PrivateSubnetOne 377 | PrivateRouteTableTwo: 378 | Type: AWS::EC2::RouteTable 379 | Properties: 380 | VpcId: !Ref 'VPC' 381 | PrivateRouteTwo: 382 | Type: AWS::EC2::Route 383 | Properties: 384 | RouteTableId: !Ref PrivateRouteTableTwo 385 | DestinationCidrBlock: 0.0.0.0/0 386 | NatGatewayId: !Ref NatGatewayTwo 387 | PrivateRouteTableTwoAssociation: 388 | Type: AWS::EC2::SubnetRouteTableAssociation 389 | Properties: 390 | RouteTableId: !Ref PrivateRouteTableTwo 391 | SubnetId: !Ref PrivateSubnetTwo 392 | 393 | # VPC Endpoint for DynamoDB 394 | # If a container needs to access DynamoDB (coming in module 3) this 395 | # allows a container in the private subnet to talk to DynamoDB directly 396 | # without needing to go via the NAT gateway. 397 | DynamoDBEndpoint: 398 | Type: AWS::EC2::VPCEndpoint 399 | Properties: 400 | PolicyDocument: 401 | Version: "2012-10-17" 402 | Statement: 403 | - Effect: Allow 404 | Action: "*" 405 | Principal: "*" 406 | Resource: "*" 407 | RouteTableIds: 408 | - !Ref 'PrivateRouteTableOne' 409 | - !Ref 'PrivateRouteTableTwo' 410 | ServiceName: !Join [ "", [ "com.amazonaws.", { "Ref": "AWS::Region" }, ".dynamodb" ] ] 411 | VpcId: !Ref 'VPC' 412 | 413 | # The security group for our service containers to be hosted in Fargate. 414 | # Even though traffic from users will pass through a Network Load Balancer, 415 | # that traffic is purely TCP passthrough, without security group inspection. 416 | # Therefore, we will allow for traffic from the Internet to be accepted by our 417 | # containers. But, because the containers will only have Private IP addresses, 418 | # the only traffic that will reach the containers is traffic that is routed 419 | # to them by the public load balancer on the specific ports that we configure. 420 | FargateContainerSecurityGroup: 421 | Type: AWS::EC2::SecurityGroup 422 | Properties: 423 | GroupDescription: Access to the fargate containers from the Internet 424 | VpcId: !Ref 'VPC' 425 | SecurityGroupIngress: 426 | # Allow access to NLB from anywhere on the internet 427 | - CidrIp: !FindInMap ['SubnetConfig', 'VPC', 'CIDR'] 428 | IpProtocol: -1 429 | 430 | # This is an IAM role which authorizes ECS to manage resources on your 431 | # account on your behalf, such as updating your load balancer with the 432 | # details of where your containers are, so that traffic can reach your 433 | # containers. 
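As the comments describe, each private subnet egresses through its own NAT gateway, while DynamoDB traffic can bypass the NAT gateways entirely via the gateway VPC endpoint. A rough verification, reusing the `VPC_ID` lookup from the previous snippet:

```bash
# Private route tables should show 0.0.0.0/0 -> nat-..., the public one -> igw-...
aws ec2 describe-route-tables \
  --filters "Name=vpc-id,Values=${VPC_ID}" \
  --query "RouteTables[].Routes[?DestinationCidrBlock=='0.0.0.0/0'].[NatGatewayId,GatewayId]" \
  --output text

# The DynamoDB gateway endpoint should be attached to both private route tables.
aws ec2 describe-vpc-endpoints \
  --filters "Name=vpc-id,Values=${VPC_ID}" \
  --query 'VpcEndpoints[].[ServiceName,State,RouteTableIds]'
```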
434 | EcsServiceRole: 435 | Type: AWS::IAM::Role 436 | Properties: 437 | AssumeRolePolicyDocument: 438 | Statement: 439 | - Effect: Allow 440 | Principal: 441 | Service: 442 | - ecs.amazonaws.com 443 | - ecs-tasks.amazonaws.com 444 | Action: 445 | - sts:AssumeRole 446 | Path: / 447 | Policies: 448 | - PolicyName: ecs-service 449 | PolicyDocument: 450 | Statement: 451 | - Effect: Allow 452 | Action: 453 | # Rules which allow ECS to attach network interfaces to instances 454 | # on your behalf in order for awsvpc networking mode to work right 455 | - 'ec2:AttachNetworkInterface' 456 | - 'ec2:CreateNetworkInterface' 457 | - 'ec2:CreateNetworkInterfacePermission' 458 | - 'ec2:DeleteNetworkInterface' 459 | - 'ec2:DeleteNetworkInterfacePermission' 460 | - 'ec2:Describe*' 461 | - 'ec2:DetachNetworkInterface' 462 | 463 | # Rules which allow ECS to update load balancers on your behalf 464 | # with the information about how to send traffic to your containers 465 | - 'elasticloadbalancing:DeregisterInstancesFromLoadBalancer' 466 | - 'elasticloadbalancing:DeregisterTargets' 467 | - 'elasticloadbalancing:Describe*' 468 | - 'elasticloadbalancing:RegisterInstancesWithLoadBalancer' 469 | - 'elasticloadbalancing:RegisterTargets' 470 | 471 | # Rules which allow ECS to run tasks that have IAM roles assigned to them. 472 | - 'iam:PassRole' 473 | 474 | # Rules that let ECS interact with container images. 475 | - 'ecr:GetAuthorizationToken' 476 | - 'ecr:BatchCheckLayerAvailability' 477 | - 'ecr:GetDownloadUrlForLayer' 478 | - 'ecr:BatchGetImage' 479 | 480 | # Rules that let ECS create and push logs to CloudWatch. 481 | - 'logs:DescribeLogStreams' 482 | - 'logs:CreateLogStream' 483 | - 'logs:CreateLogGroup' 484 | - 'logs:PutLogEvents' 485 | 486 | Resource: '*' 487 | 488 | # This is a role which is used by the ECS tasks. Tasks in Amazon ECS define 489 | # the containers that should be deployed together and the resources they 490 | # require from a compute/memory perspective. So, the policies below will define 491 | # the IAM permissions that our Mythical Mysfits docker containers will have. 492 | # If you attempted to write any code for the Mythical Mysfits service that 493 | # interacted with different AWS service APIs, these roles would need to include 494 | # those as allowed actions.
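The role above is the task execution role referenced by `ExecutionRoleArn` in both task definitions: it lets the ECS agent manage ENIs, register targets, pull images from ECR, and ship logs, whereas the task role defined next governs what the application code itself may call. If you want to see exactly what was provisioned, one way (assuming the default stack name) is:

```bash
# The physical resource ID of an AWS::IAM::Role is the generated role name.
ROLE_NAME=$(aws cloudformation describe-stack-resource \
  --stack-name mythical-mysfits-devsecops \
  --logical-resource-id EcsServiceRole \
  --query 'StackResourceDetail.PhysicalResourceId' --output text)

# Dump the inline policy shown above as it exists in IAM.
aws iam get-role-policy --role-name "${ROLE_NAME}" --policy-name ecs-service
```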
495 | ECSTaskRole: 496 | Type: AWS::IAM::Role 497 | Properties: 498 | AssumeRolePolicyDocument: 499 | Statement: 500 | - Effect: Allow 501 | Principal: 502 | Service: 503 | - ecs-tasks.amazonaws.com 504 | # Also add EC2 for testing in Cloud9 505 | - ec2.amazonaws.com 506 | Action: ['sts:AssumeRole'] 507 | Path: / 508 | Policies: 509 | - PolicyName: AmazonECSTaskRolePolicy 510 | PolicyDocument: 511 | Statement: 512 | - Effect: Allow 513 | Action: 514 | # Allow the ECS Tasks to download images from ECR 515 | - 'ecr:GetAuthorizationToken' 516 | - 'ecr:BatchCheckLayerAvailability' 517 | - 'ecr:GetDownloadUrlForLayer' 518 | - 'ecr:BatchGetImage' 519 | 520 | # Allow the ECS tasks to upload logs to CloudWatch 521 | - 'logs:CreateLogStream' 522 | - 'logs:CreateLogGroup' 523 | - 'logs:PutLogEvents' 524 | Resource: '*' 525 | 526 | - Effect: Allow 527 | Action: 528 | # Allows the ECS tasks to interact with only the MysfitsTable 529 | # in DynamoDB 530 | - 'dynamodb:Scan' 531 | - 'dynamodb:Query' 532 | - 'dynamodb:UpdateItem' 533 | - 'dynamodb:GetItem' 534 | Resource: !GetAtt MythicalDynamoTable.Arn 535 | 536 | # CodeBuild requires access to push images to repos and this is the role that provides that access 537 | CodeBuildServiceRole: 538 | Type: AWS::IAM::Role 539 | Properties: 540 | ManagedPolicyArns: 541 | - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser 542 | RoleName: !Sub ${AWS::StackName}-CodeBuildServiceRole 543 | Path: / 544 | AssumeRolePolicyDocument: | 545 | { 546 | "Statement": [{ 547 | "Effect": "Allow", 548 | "Principal": { "Service": [ "codebuild.amazonaws.com" ]}, 549 | "Action": [ "sts:AssumeRole" ] 550 | }] 551 | } 552 | Policies: 553 | - PolicyName: root 554 | PolicyDocument: 555 | Version: 2012-10-17 556 | Statement: 557 | - Resource: "*" 558 | Effect: Allow 559 | Action: 560 | - logs:CreateLogGroup 561 | - logs:CreateLogStream 562 | - logs:PutLogEvents 563 | - ecr:GetAuthorizationToken 564 | - codecommit:GitPull 565 | - Resource: !Sub arn:aws:s3:::${MythicalArtifactBucket}/* 566 | Effect: Allow 567 | Action: 568 | - s3:GetObject 569 | - s3:PutObject 570 | - s3:GetObjectVersion 571 | - Resource: 572 | - !Sub arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/${MythicalLikeLogGroup}/codebuild/tests/ 573 | - !Sub arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/${MythicalLikeLogGroup}/codebuild/tests:* 574 | Effect: Allow 575 | Action: 576 | - logs:CreateLogGroup 577 | - logs:CreateLogStream 578 | - logs:PutLogEvents 579 | - Resource: 580 | - !Sub arn:aws:s3:::codepipeline-${AWS::Region}-* 581 | Effect: Allow 582 | Action: 583 | - s3:PutObject 584 | - s3:GetObject 585 | - s3:GetObjectVersion 586 | 587 | # Begin WS2 Bootstrapping 588 | # This role is used by CodePipeline to trigger deployments 589 | 590 | CodePipelineServiceRole: 591 | Type: AWS::IAM::Role 592 | Properties: 593 | Path: / 594 | AssumeRolePolicyDocument: 595 | Version: "2012-10-17" 596 | Statement: 597 | - Effect: "Allow" 598 | Principal: 599 | Service: [codepipeline.amazonaws.com] 600 | Action: ['sts:AssumeRole'] 601 | Policies: 602 | - PolicyName: root 603 | PolicyDocument: 604 | Version: 2012-10-17 605 | Statement: 606 | - Resource: "*" 607 | Effect: Allow 608 | Action: 609 | - s3:GetObject 610 | - s3:GetObjectVersion 611 | - s3:GetBucketVersioning 612 | - Resource: "arn:aws:s3:::*" 613 | Effect: Allow 614 | Action: 615 | - s3:PutObject 616 | - Resource: "*" 617 | Effect: Allow 618 | Action: 619 | - codecommit:* 620 | - codebuild:StartBuild 621 | - codebuild:BatchGetBuilds 622 | -
iam:PassRole 623 | - iam:CreateRole 624 | - iam:DetachRolePolicy 625 | - iam:AttachRolePolicy 626 | - iam:PassRole 627 | - iam:PutRolePolicy 628 | - Resource: "*" 629 | Effect: Allow 630 | Action: 631 | - ecs:* 632 | 633 | MythicalProfile: 634 | Type: AWS::IAM::InstanceProfile 635 | Properties: 636 | Roles: 637 | - !Ref ECSTaskRole 638 | MythicalDynamoTable: 639 | Type: AWS::DynamoDB::Table 640 | Properties: 641 | TableName: !Sub Table-${AWS::StackName} 642 | AttributeDefinitions: 643 | - AttributeName: MysfitId 644 | AttributeType: S 645 | - AttributeName: GoodEvil 646 | AttributeType: S 647 | - AttributeName: LawChaos 648 | AttributeType: S 649 | GlobalSecondaryIndexes: 650 | - IndexName: LawChaosIndex 651 | KeySchema: 652 | - AttributeName: LawChaos 653 | KeyType: HASH 654 | - AttributeName: MysfitId 655 | KeyType: RANGE 656 | Projection: 657 | ProjectionType: ALL 658 | ProvisionedThroughput: 659 | ReadCapacityUnits: 5 660 | WriteCapacityUnits: 5 661 | - IndexName: GoodEvilIndex 662 | KeySchema: 663 | - AttributeName: GoodEvil 664 | KeyType: HASH 665 | - AttributeName: MysfitId 666 | KeyType: RANGE 667 | Projection: 668 | ProjectionType: ALL 669 | ProvisionedThroughput: 670 | ReadCapacityUnits: 5 671 | WriteCapacityUnits: 5 672 | KeySchema: 673 | - AttributeName: MysfitId 674 | KeyType: HASH 675 | ProvisionedThroughput: 676 | ReadCapacityUnits: 5 677 | WriteCapacityUnits: 5 678 | 679 | MythicalEnvironment: 680 | Type: AWS::Cloud9::EnvironmentEC2 681 | Properties: 682 | AutomaticStopTimeMinutes: 30 683 | InstanceType: t2.small 684 | Name: !Sub Project-${AWS::StackName} 685 | SubnetId: !Ref PublicSubnetOne 686 | 687 | ClairDB: 688 | Type: AWS::RDS::DBInstance 689 | Properties: 690 | AllocatedStorage: 20 691 | DBInstanceClass: db.t2.micro 692 | DBName: postgres 693 | DBSubnetGroupName: !Ref DBSubnetGroup 694 | Engine: postgres 695 | EngineVersion: 9.6.8 696 | MasterUserPassword: !Ref ClairDBPassword 697 | MasterUsername: postgres 698 | MultiAZ: false 699 | StorageType: gp2 700 | VPCSecurityGroups: 701 | - !Ref ClairDBSecurityGroup 702 | 703 | ClairDBSecurityGroup: 704 | Type: AWS::EC2::SecurityGroup 705 | Properties: 706 | GroupDescription: Security group for Clair DB 707 | VpcId: !Ref VPC 708 | SecurityGroupIngress: 709 | FromPort: 5432 710 | ToPort: 5432 711 | IpProtocol: tcp 712 | SourceSecurityGroupId: !GetAtt ClairHostSecurityGroup.GroupId 713 | 714 | DBSubnetGroup: 715 | Type: AWS::RDS::DBSubnetGroup 716 | Properties: 717 | DBSubnetGroupDescription: Subnets available for the Clair DB 718 | SubnetIds: 719 | - !Ref PrivateSubnetOne 720 | - !Ref PrivateSubnetTwo 721 | 722 | ClairHostSecurityGroup: 723 | Type: AWS::EC2::SecurityGroup 724 | Properties: 725 | GroupDescription: Clair ECS Security Group 726 | VpcId: !Ref VPC 727 | SecurityGroupIngress: 728 | FromPort: 6060 729 | ToPort: 6061 730 | IpProtocol: tcp 731 | SourceSecurityGroupId: !GetAtt MythicalLoadBalancerSecurityGroup.GroupId 732 | 733 | ClairLogGroup: 734 | Type: AWS::Logs::LogGroup 735 | 736 | ClairService: 737 | Type: AWS::ECS::Service 738 | DependsOn: MythicalLoadBalancer 739 | Properties: 740 | Cluster: !Ref MythicalEcsCluster 741 | DesiredCount: 1 742 | LaunchType: FARGATE 743 | TaskDefinition: !Ref ClairTaskDefinition 744 | LoadBalancers: 745 | - ContainerName: clair 746 | ContainerPort: 6060 747 | TargetGroupArn: !Ref ClairTargetGroup 748 | NetworkConfiguration: 749 | AwsvpcConfiguration: 750 | SecurityGroups: 751 | - !Ref ClairHostSecurityGroup 752 | Subnets: 753 | - !Ref PrivateSubnetOne 754 | - !Ref PrivateSubnetTwo 
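The `CodeBuildServiceRole` defined earlier attaches `AmazonEC2ContainerRegistryPowerUser`, which is what allows the build projects created in the later labs to authenticate to ECR and push images. A rough sketch of the commands a buildspec typically runs under that role is below; the repository URI, account ID, and region are placeholders, and older AWS CLI versions use `$(aws ecr get-login --no-include-email)` instead of `get-login-password`.

```bash
# Placeholder URI; substitute the repository the stack created (Mono or Like).
REPO_URI=111122223333.dkr.ecr.us-west-2.amazonaws.com/example-repo

# Authenticate Docker to the ECR registry (everything before the first "/").
aws ecr get-login-password --region us-west-2 \
  | docker login --username AWS --password-stdin "${REPO_URI%%/*}"

# Build and push the image that the :latest tag in the task definitions expects.
docker build -t "${REPO_URI}:latest" .
docker push "${REPO_URI}:latest"
```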
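The DynamoDB table above is keyed on `MysfitId` and adds two global secondary indexes, `GoodEvilIndex` and `LawChaosIndex`, which the task role's `dynamodb:Query` permission covers. Once the table has been populated (the load scripts come later), a hedged example query against one of the indexes looks like this; the table name assumes the default stack name, and the attribute value depends on the data you load.

```bash
# Count mysfits whose GoodEvil attribute is "Good" using the GSI defined above.
aws dynamodb query \
  --table-name Table-mythical-mysfits-devsecops \
  --index-name GoodEvilIndex \
  --key-condition-expression "GoodEvil = :alignment" \
  --expression-attribute-values '{":alignment": {"S": "Good"}}' \
  --query 'Count'
```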
755 | 756 | ClairTaskDefinition: 757 | Type: AWS::ECS::TaskDefinition 758 | Properties: 759 | TaskRoleArn: !GetAtt ECSTaskRole.Arn 760 | ExecutionRoleArn: !GetAtt EcsServiceRole.Arn 761 | Cpu: 512 762 | Memory: 1GB 763 | NetworkMode: awsvpc 764 | RequiresCompatibilities: 765 | - FARGATE 766 | ContainerDefinitions: 767 | - Name: clair 768 | Image: jasonumiker/clair:latest 769 | Essential: true 770 | PortMappings: 771 | - ContainerPort: 6060 772 | - ContainerPort: 6061 773 | LogConfiguration: 774 | LogDriver: awslogs 775 | Options: 776 | awslogs-group: !Ref ClairLogGroup 777 | awslogs-region: !Ref AWS::Region 778 | awslogs-stream-prefix: clair 779 | Environment: 780 | - Name: DB_HOST 781 | Value: !GetAtt ClairDB.Endpoint.Address 782 | - Name: DB_PASSWORD 783 | Value: !Ref ClairDBPassword 784 | 785 | ClairTargetGroup: 786 | Type: AWS::ElasticLoadBalancingV2::TargetGroup 787 | Properties: 788 | HealthCheckIntervalSeconds: 30 789 | HealthCheckPath: /health 790 | HealthCheckProtocol: HTTP 791 | HealthCheckPort: 6061 792 | HealthyThresholdCount: 3 793 | HealthCheckTimeoutSeconds: 10 794 | UnhealthyThresholdCount: 4 795 | Port: 6060 796 | Protocol: HTTP 797 | VpcId: !Ref VPC 798 | TargetType: ip 799 | 800 | ClairListener: 801 | Type: AWS::ElasticLoadBalancingV2::Listener 802 | Properties: 803 | DefaultActions: 804 | - TargetGroupArn: !Ref ClairTargetGroup 805 | Type: forward 806 | LoadBalancerArn: !Ref MythicalLoadBalancer 807 | Port: 6060 808 | Protocol: HTTP 809 | 810 | Outputs: 811 | LoadBalancerDNS: 812 | Description: The DNS for the load balancer 813 | Value: !GetAtt MythicalLoadBalancer.DNSName 814 | DynamoTable: 815 | Value: !Ref MythicalDynamoTable 816 | SiteBucket: 817 | Value: !Ref MythicalBucket 818 | Condition: MakeBucket 819 | Cloud9Env: 820 | Value: !Ref MythicalEnvironment 821 | PrivateSubnetOne: 822 | Value: !Ref PrivateSubnetOne 823 | PrivateSubnetTwo: 824 | Value: !Ref PrivateSubnetTwo 825 | EcsClusterName: 826 | Value: !Ref MythicalEcsCluster 827 | StackName: 828 | Value: !Sub ${AWS::StackName} 829 | MonolithTaskDefinition: 830 | Value: !Ref MythicalMonolithTaskDefinition 831 | LikeTaskDefinition: 832 | Value: !Ref MythicalLikeTaskDefinition 833 | MonolithTargetGroupArn: 834 | Value: !Ref MythicalMonolithTargetGroup 835 | LikeTargetGroupArn: 836 | Value: !Ref MythicalLikeTargetGroup 837 | FargateContainerSecurityGroup: 838 | Value: !Ref FargateContainerSecurityGroup 839 | ProfileName: 840 | Value: !Ref MythicalProfile 841 | MonoEcrRepo: 842 | Value: !Sub ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${Mono} 843 | LikeEcrRepo: 844 | Value: !Sub ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${Like} 845 | S3WebsiteEndpoint: 846 | Value: !GetAtt MythicalBucket.WebsiteURL 847 | Condition: MakeBucket 848 | -------------------------------------------------------------------------------- /images/mysfits-welcome.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-asean-builders/devsecops/d1c6e000d0a5d341ec2ac021da45b169497e635e/images/mysfits-welcome.png -------------------------------------------------------------------------------- /script/fetch-outputs: -------------------------------------------------------------------------------- 1 | #! 
/bin/bash 2 | 3 | set -eu 4 | 5 | if [[ $# -eq 1 ]]; then 6 | STACK_NAME=$1 7 | else 8 | STACK_NAME="$(echo $C9_PROJECT | sed 's/^Project-//')" 9 | fi 10 | 11 | aws cloudformation describe-stacks --stack-name "$STACK_NAME" | jq -r '[.Stacks[0].Outputs[] | {key: .OutputKey, value: .OutputValue}] | from_entries' > cfn-output.json 12 | -------------------------------------------------------------------------------- /script/load-ddb: -------------------------------------------------------------------------------- 1 | #! /bin/bash 2 | 3 | set -eu 4 | 5 | TABLE_NAME="$(jq < cfn-output.json -r '.DynamoTable')" 6 | 7 | read -r -d '' items <