├── .gitignore ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE ├── README.md ├── assets └── images │ ├── aws-favicon.ico │ ├── aws_smile_logo.png │ └── second-nav-blue.png ├── code ├── Pipeline.yml ├── SecGuardRails.zip ├── SecGuardRails │ ├── cfn_validate_lambda.py │ └── stack_validate_lambda.py ├── cfn_encrypted_ebs.py ├── cfn_ftp_port.py ├── cfn_s3_versioning.py ├── cfn_secrets.py ├── codepipe-AWS-devsecops.zip └── codepipe-AWS-devsecops │ ├── prod-stack-configuration.json │ ├── resources.json │ └── test-stack-configuration.json ├── docs ├── 01-environment-setup.md ├── 02-first-pipeline-error.md ├── 03-No-AWS-Secrets.md ├── 04-find-aws-keys.md ├── 05-missing-s3-confg.md ├── 06-s3-versioning.md ├── 07-cleanup.md ├── CNAME ├── _config.yml ├── assets │ └── images │ │ ├── aws-favicon.ico │ │ ├── aws_smile_logo.png │ │ └── second-nav-blue.png ├── contribute.md ├── images │ ├── 01-pipeline.png │ ├── 02-ApprovalStage.png │ ├── 02-CFNParsingInfo.png │ ├── 02-firstpipelineerror.png │ ├── 03-Edit-Pipeline.png │ ├── 03-Source.png │ └── 04-aws-keys.png ├── index.md ├── license.md └── stylesheets │ └── custom.css ├── images ├── 01-pipeline.png ├── 02-ApprovalStage.png ├── 02-CFNParsingInfo.png ├── 03-Edit-Pipeline.png ├── 03-Source.png └── 04-aws-keys.png ├── mkdocs.yml └── presentations └── ReInvent2019-SEC302-R.pdf /.gitignore: -------------------------------------------------------------------------------- 1 | # macOS Internals 2 | .DS_Store 3 | 4 | # Files generated by build 5 | /site 6 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. 5 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *master* branch. 27 | 2. 
You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 38 | 39 | GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute on. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start. 45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments. 51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public github issue. 55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 60 | 61 | We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes. 62 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of 4 | this software and associated documentation files (the "Software"), to deal in 5 | the Software without restriction, including without limitation the rights to 6 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of 7 | the Software, and to permit persons to whom the Software is furnished to do so. 8 | 9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 10 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS 11 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR 12 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER 13 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 14 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
15 | 16 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## Adding Security to Pipelines in AWS 2 | 3 | This hands-on workshop will walk you through various common scenarios from both a developer's perspective and a security engineer's perspective. In order to quickly deliver applications to production, development and security teams work together to automate secure pipelines. 4 | 5 | ## Start the workshop: 6 | 7 | Visit the site to get started: [AWS DevSecOps](https://catalog.workshops.aws/devsecops) 8 | 9 | ## License 10 | 11 | This library is licensed under the MIT-0 License. See the LICENSE file. 12 | -------------------------------------------------------------------------------- /assets/images/aws-favicon.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/assets/images/aws-favicon.ico -------------------------------------------------------------------------------- /assets/images/aws_smile_logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/assets/images/aws_smile_logo.png -------------------------------------------------------------------------------- /assets/images/second-nav-blue.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/assets/images/second-nav-blue.png -------------------------------------------------------------------------------- /code/Pipeline.yml: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: 2010-09-09 2 | Description: | 3 | AWS DevSecOps Template. 4 | Parameters: 5 | PipelineName: 6 | Default: DevSecOps 7 | Description: A name for the pipeline 8 | Type: String 9 | S3Bucket: 10 | Default: 'InputBucketNameYouCreated' 11 | Description: >- 12 | The name of the S3 bucket that contains the source Lambda security guardrails, which must be 13 | in the same region as this stack 14 | Type: String 15 | SourceS3Key: 16 | Default: codepipe-AWS-devsecops.zip 17 | Description: 'The zip file for the application code source to be built by the pipeline.' 18 | Type: String 19 | LambdaS3Key: 20 | Default: SecGuardRails.zip 21 | Description: >- 22 | These are the source files for the Lambda code used to implement security guardrails in the various stages. 23 | Type: String 24 | TemplateFileName: 25 | Default: resources.json 26 | Description: The file name of the template that will build out the application. 27 | Type: String 28 | TemplateFilePath: 29 | Default: codepipe-AWS-devsecops/resources.json 30 | Description: The file path of the template that will build out the application. 31 | Type: String 32 | TestStackName: 33 | Default: AWS-devsecops-TestStack 34 | Description: A name for the test stack 35 | Type: String 36 | TestStackConfig: 37 | Default: codepipe-AWS-devsecops/test-stack-configuration.json 38 | Description: The configuration file name for the test stack. This will build the application in a test VPC.
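# A minimal sketch (illustrative, not part of this repo) of launching this
# template with the AWS CLI once the bucket referenced by S3Bucket and the two
# zip files above have been uploaded; the stack and bucket names here are
# placeholders:
#
#   aws cloudformation create-stack \
#     --stack-name DevSecOps-Pipeline \
#     --template-body file://code/Pipeline.yml \
#     --parameters ParameterKey=S3Bucket,ParameterValue=my-input-bucket \
#     --capabilities CAPABILITY_NAMED_IAM
#
# CAPABILITY_NAMED_IAM is required because the template creates IAM roles with
# explicit RoleName values.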
39 | Type: String 40 | ProdStackName: 41 | Default: AWS-devsecops-ProdStack 42 | Description: A name for the production stack 43 | Type: String 44 | ProdStackConfig: 45 | Default: codepipe-AWS-devsecops/prod-stack-configuration.json 46 | Description: The configuration file name for the production stack. This will build the application in a prod VPC. 47 | Type: String 48 | ChangeSetName: 49 | Default: UpdatePreview-MyResource-ProdChange 50 | Description: A name for the production stack change set 51 | Type: String 52 | Email: 53 | Default: noreply@noemailaddress.com 54 | Description: The email address where CodePipeline sends pipeline notifications (Optional for labs) 55 | Type: String 56 | DynamoDBTableName: 57 | Default: AWS-devsecops 58 | Description: A name for the DynamoDBTable used to store rules 59 | Type: String 60 | Metadata: 61 | 'AWS::CloudFormation::Interface': 62 | ParameterGroups: 63 | - Label: 64 | default: CodePipeline Settings 65 | Parameters: 66 | - PipelineName 67 | - S3Bucket 68 | - SourceS3Key 69 | - Email 70 | - Label: 71 | default: Test Stack Settings 72 | Parameters: 73 | - TestStackName 74 | - TemplateFileName 75 | - TestStackConfig 76 | - TemplateFilePath 77 | - Label: 78 | default: Production Stack Settings 79 | Parameters: 80 | - ChangeSetName 81 | - ProdStackName 82 | - ProdStackConfig 83 | Resources: 84 | devsecopsVPCProd: 85 | Type: 'AWS::EC2::VPC' 86 | Properties: 87 | CidrBlock: 10.0.0.0/16 88 | EnableDnsSupport: 'false' 89 | EnableDnsHostnames: 'false' 90 | Tags: 91 | - Key: AWS 92 | Value: devsecopsProdVPC 93 | devsecopsVPCTest: 94 | Type: 'AWS::EC2::VPC' 95 | Properties: 96 | CidrBlock: 10.1.0.0/16 97 | EnableDnsSupport: 'false' 98 | EnableDnsHostnames: 'false' 99 | Tags: 100 | - Key: AWS 101 | Value: devsecopsTestVPC 102 | ArtifactStoreBucket: 103 | Type: 'AWS::S3::Bucket' 104 | Properties: 105 | VersioningConfiguration: 106 | Status: Enabled 107 | CodePipelineSNSTopic: 108 | Type: 'AWS::SNS::Topic' 109 | Properties: 110 | Subscription: 111 | - Endpoint: !Ref Email 112 | Protocol: email 113 | TopicName: !Ref PipelineName 114 | CodePipelineLambdaRole: 115 | Type: 'AWS::IAM::Role' 116 | Properties: 117 | RoleName: module-secops1-PipelineRole-56457 118 | AssumeRolePolicyDocument: 119 | Version: 2012-10-17 120 | Statement: 121 | - Effect: Allow 122 | Principal: 123 | Service: 124 | - lambda.amazonaws.com 125 | Action: 126 | - 'sts:AssumeRole' 127 | Path: / 128 | LambdaCodePipelineExecutionPolicy: 129 | DependsOn: 130 | - CodePipelineLambdaRole 131 | Type: 'AWS::IAM::Policy' 132 | Properties: 133 | PolicyName: LambdaRolePolicy 134 | Roles: 135 | - !Ref CodePipelineLambdaRole 136 | PolicyDocument: 137 | Version: 2012-10-17 138 | Statement: 139 | - Effect: Allow 140 | Action: 141 | - 'logs:*' 142 | Resource: 143 | - 'arn:aws:logs:*:*:*' 144 | - Effect: Allow 145 | Action: 146 | - 'codepipeline:PutJobSuccessResult' 147 | - 'codepipeline:PutJobFailureResult' 148 | - 's3:*' 149 | - 'ec2:*' 150 | - 'cloudformation:*' 151 | - 'dynamodb:*' 152 | - 'sns:*' 153 | Resource: 154 | - '*' 155 | CFNValidateLambda: 156 | Type: 'AWS::Lambda::Function' 157 | DependsOn: 158 | - CodePipelineLambdaRole 159 | - LambdaCodePipelineExecutionPolicy 160 | Properties: 161 | Code: 162 | S3Bucket: !Ref S3Bucket 163 | S3Key: !Ref LambdaS3Key 164 | Role: !GetAtt 165 | - CodePipelineLambdaRole 166 | - Arn 167 | Description: Always return success 168 | Timeout: 60 169 | Handler: SecGuardRails.cfn_validate_lambda.lambda_handler 170 | Runtime: python3.9 171 | TestStackValidationLambda: 172 | 
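  # Validates the deployed test stack (CIS-style checks implemented in
  # SecGuardRails/stack_validate_lambda.py). As with CFNValidateLambda above,
  # a Handler of the form SecGuardRails.<module>.lambda_handler assumes the
  # zip referenced by LambdaS3Key contains a top-level SecGuardRails/
  # directory, matching code/SecGuardRails/ in this repository.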
Type: 'AWS::Lambda::Function' 173 | DependsOn: 174 | - CodePipelineLambdaRole 175 | - LambdaCodePipelineExecutionPolicy 176 | Properties: 177 | Code: 178 | S3Bucket: !Ref S3Bucket 179 | S3Key: !Ref LambdaS3Key 180 | Role: !GetAtt 181 | - CodePipelineLambdaRole 182 | - Arn 183 | Description: Always return success 184 | Timeout: 60 185 | Handler: SecGuardRails.stack_validate_lambda.lambda_handler 186 | Runtime: python3.9 187 | myDynamoDBTable: 188 | Type: 'AWS::DynamoDB::Table' 189 | Properties: 190 | AttributeDefinitions: 191 | - AttributeName: rule 192 | AttributeType: S 193 | KeySchema: 194 | - AttributeName: rule 195 | KeyType: HASH 196 | ProvisionedThroughput: 197 | ReadCapacityUnits: '5' 198 | WriteCapacityUnits: '5' 199 | TableName: !Ref DynamoDBTableName 200 | CFNRole: 201 | Type: 'AWS::IAM::Role' 202 | Properties: 203 | RoleName: module-nathacas-secops1-PipelineRole-09876 204 | AssumeRolePolicyDocument: 205 | Statement: 206 | - Action: 207 | - 'sts:AssumeRole' 208 | Effect: Allow 209 | Principal: 210 | Service: 211 | - cloudformation.amazonaws.com 212 | Version: 2012-10-17 213 | Path: / 214 | Policies: 215 | - PolicyName: CloudFormationRole 216 | PolicyDocument: 217 | Version: 2012-10-17 218 | Statement: 219 | - Action: 220 | - 'ec2:*' 221 | - 's3:*' 222 | - 'ssm:GetParameters' 223 | Effect: Allow 224 | Resource: '*' 225 | PipelineRole: 226 | Type: 'AWS::IAM::Role' 227 | Properties: 228 | RoleName: module-nathacas-secops1-PipelineRole-123123 229 | AssumeRolePolicyDocument: 230 | Statement: 231 | - Action: 232 | - 'sts:AssumeRole' 233 | Effect: Allow 234 | Principal: 235 | Service: 236 | - codepipeline.amazonaws.com 237 | Version: 2012-10-17 238 | Path: / 239 | Policies: 240 | - PolicyName: CodePipelineAccess 241 | PolicyDocument: 242 | Version: 2012-10-17 243 | Statement: 244 | - Action: 245 | - 's3:*' 246 | - 'ec2:*' 247 | - 'cloudformation:CreateStack' 248 | - 'cloudformation:DescribeStacks' 249 | - 'cloudformation:DeleteStack' 250 | - 'cloudformation:UpdateStack' 251 | - 'cloudformation:CreateChangeSet' 252 | - 'cloudformation:ExecuteChangeSet' 253 | - 'cloudformation:DeleteChangeSet' 254 | - 'cloudformation:DescribeChangeSet' 255 | - 'cloudformation:SetStackPolicy' 256 | - 'iam:PassRole' 257 | - 'sns:Publish' 258 | - 'lambda:*' 259 | Effect: Allow 260 | Resource: '*' 261 | Pipeline: 262 | Type: 'AWS::CodePipeline::Pipeline' 263 | Properties: 264 | ArtifactStore: 265 | Location: !Ref ArtifactStoreBucket 266 | Type: S3 267 | DisableInboundStageTransitions: [] 268 | Name: !Ref PipelineName 269 | RoleArn: !GetAtt 270 | - PipelineRole 271 | - Arn 272 | Stages: 273 | - Name: Commit 274 | Actions: 275 | - InputArtifacts: [] 276 | Name: TemplateSource 277 | ActionTypeId: 278 | Category: Source 279 | Owner: AWS 280 | Provider: S3 281 | Version: '1' 282 | Configuration: 283 | S3Bucket: !Ref ArtifactStoreBucket 284 | S3ObjectKey: !Ref SourceS3Key 285 | OutputArtifacts: 286 | - Name: TemplateSource 287 | RunOrder: '1' 288 | - Name: StaticCodeAnalysis 289 | Actions: 290 | - InputArtifacts: 291 | - Name: TemplateSource 292 | Name: CFNParsing 293 | ActionTypeId: 294 | Category: Invoke 295 | Owner: AWS 296 | Provider: Lambda 297 | Version: '1' 298 | Configuration: 299 | FunctionName: !Ref CFNValidateLambda 300 | UserParameters: !Sub 301 | - >- 302 | {"input": "TemplateSource", "file": 303 | "${TemplateFilePath}","output": "${S3BucketName}"} 304 | - S3BucketName: !Ref ArtifactStoreBucket 305 | OutputArtifacts: 306 | - Name: TemplateSource2 307 | RunOrder: '1' 308 | - Name: TestDeployment 309 | 
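          # The TestDeployment stage defined next runs four actions in order:
          # CreateStack deploys the test stack, StackValidation_Test invokes
          # the stack-validation Lambda against it, ApproveTestStack pauses
          # for a manual approval notified through the SNS topic, and
          # DeleteTestStack removes the test stack once approved.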
Actions: 310 | - InputArtifacts: 311 | - Name: TemplateSource 312 | Name: CreateStack 313 | ActionTypeId: 314 | Category: Deploy 315 | Owner: AWS 316 | Provider: CloudFormation 317 | Version: '1' 318 | OutputArtifacts: 319 | - Name: StackOutput 320 | Configuration: 321 | ActionMode: REPLACE_ON_FAILURE 322 | RoleArn: !GetAtt 323 | - CFNRole 324 | - Arn 325 | StackName: !Ref TestStackName 326 | TemplateConfiguration: !Sub 'TemplateSource::${TestStackConfig}' 327 | TemplatePath: !Sub 'TemplateSource::${TemplateFilePath}' 328 | RunOrder: '1' 329 | - InputArtifacts: 330 | - Name: TemplateSource 331 | - Name: StackOutput 332 | Name: StackValidation_Test 333 | ActionTypeId: 334 | Category: Invoke 335 | Owner: AWS 336 | Version: '1' 337 | Provider: Lambda 338 | OutputArtifacts: [] 339 | Configuration: 340 | FunctionName: !Ref TestStackValidationLambda 341 | UserParameters: !Ref TestStackName 342 | RunOrder: '2' 343 | - Name: ApproveTestStack 344 | ActionTypeId: 345 | Category: Approval 346 | Owner: AWS 347 | Provider: Manual 348 | Version: '1' 349 | Configuration: 350 | NotificationArn: !Ref CodePipelineSNSTopic 351 | CustomData: !Sub >- 352 | Do you want to create a change set against the production 353 | stack and delete the ${TestStackName} stack? 354 | RunOrder: '3' 355 | - Name: DeleteTestStack 356 | ActionTypeId: 357 | Category: Deploy 358 | Owner: AWS 359 | Provider: CloudFormation 360 | Version: '1' 361 | Configuration: 362 | ActionMode: DELETE_ONLY 363 | RoleArn: !GetAtt 364 | - CFNRole 365 | - Arn 366 | StackName: !Ref TestStackName 367 | RunOrder: '4' 368 | - Name: ProductionDeployment 369 | Actions: 370 | - InputArtifacts: 371 | - Name: TemplateSource 372 | Name: CreateChangeSet 373 | ActionTypeId: 374 | Category: Deploy 375 | Owner: AWS 376 | Provider: CloudFormation 377 | Version: '1' 378 | OutputArtifacts: [] 379 | Configuration: 380 | ActionMode: CHANGE_SET_REPLACE 381 | RoleArn: !GetAtt 382 | - CFNRole 383 | - Arn 384 | StackName: !Ref ProdStackName 385 | ChangeSetName: !Ref ChangeSetName 386 | TemplateConfiguration: !Sub 'TemplateSource::${ProdStackConfig}' 387 | TemplatePath: !Sub 'TemplateSource::${TemplateFilePath}' 388 | RunOrder: '1' 389 | - Name: ExecuteChangeSet 390 | ActionTypeId: 391 | Category: Deploy 392 | Owner: AWS 393 | Provider: CloudFormation 394 | Version: '1' 395 | Configuration: 396 | ActionMode: CHANGE_SET_EXECUTE 397 | RoleArn: !GetAtt 398 | - CFNRole 399 | - Arn 400 | ChangeSetName: !Ref ChangeSetName 401 | StackName: !Ref ProdStackName 402 | RunOrder: '2' 403 | 404 | ProdVPCID: 405 | Type: "AWS::SSM::Parameter" 406 | Properties: 407 | Name: ProdVPCIdParam 408 | Type: "String" 409 | Value: !Ref devsecopsVPCProd 410 | Description: "Storing ProdVPC ID." 411 | Tags: 412 | "Environment": "PROD" 413 | 414 | TestVPCID: 415 | Type: "AWS::SSM::Parameter" 416 | Properties: 417 | Name: TestVPCIdParam 418 | Type: "String" 419 | Value: !Ref devsecopsVPCTest 420 | Description: "Storing TestVPC ID." 
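      # The stored IDs can be read back later, e.g. (illustrative):
      #   aws ssm get-parameter --name TestVPCIdParam --query Parameter.Value
      # The ssm:GetParameters permission granted to CFNRole above is
      # presumably what lets templates deployed through the pipeline resolve
      # these values.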
421 | Tags: 422 | "Environment": "TEST" 423 | 424 | Outputs: 425 | LambdaFunction: 426 | Description: Lambda Function Name 427 | Value: !Ref CFNValidateLambda 428 | ProdVPCId: 429 | Description: DevSecOps Production VPC ID 430 | Value: !Ref devsecopsVPCProd 431 | TestVPCId: 432 | Description: DevSecOps Test VPC ID 433 | Value: !Ref devsecopsVPCTest 434 | -------------------------------------------------------------------------------- /code/SecGuardRails.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/code/SecGuardRails.zip -------------------------------------------------------------------------------- /code/SecGuardRails/cfn_validate_lambda.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | from boto3.session import Session 3 | 4 | import json 5 | import urllib 6 | import boto3 7 | import zipfile 8 | import tempfile 9 | import botocore 10 | import traceback 11 | import re 12 | import zipfile 13 | import time 14 | 15 | print('Loading function') 16 | 17 | cf = boto3.client('cloudformation') 18 | code_pipeline = boto3.client('codepipeline') 19 | 20 | def find_artifact(artifacts, name): 21 | """Finds the artifact 'name' among the 'artifacts' 22 | 23 | Args: 24 | artifacts: The list of artifacts available to the function 25 | name: The artifact we wish to use 26 | Returns: 27 | The artifact dictionary found 28 | Raises: 29 | Exception: If no matching artifact is found 30 | 31 | """ 32 | for artifact in artifacts: 33 | if artifact['name'] == name: 34 | return artifact 35 | 36 | raise Exception('Input artifact named "{0}" not found in event'.format(name)) 37 | 38 | 39 | def get_template(s3, artifact, file_in_zip): 40 | """Gets the template artifact 41 | 42 | Downloads the artifact from the S3 artifact store to a temporary file 43 | then extracts the zip and returns the file containing the CloudFormation 44 | template. 
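    Example (illustrative; the file path matches the pipeline's default
    TemplateFilePath parameter):

        template = get_template(s3, artifact, 'codepipe-AWS-devsecops/resources.json')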
45 | 46 | Args: 47 | artifact: The artifact to download 48 | file_in_zip: The path to the file within the zip containing the template 49 | 50 | Returns: 51 | The CloudFormation template as a string 52 | 53 | Raises: 54 | Exception: Any exception thrown while downloading the artifact or unzipping it 55 | 56 | """ 57 | tmp_file = tempfile.NamedTemporaryFile() 58 | bucket = artifact['location']['s3Location']['bucketName'] 59 | key = artifact['location']['s3Location']['objectKey'] 60 | 61 | with tempfile.NamedTemporaryFile() as tmp_file: 62 | print("Retrieving s3://" + bucket + "/" + key) 63 | s3.download_file(bucket, key, tmp_file.name) 64 | with zipfile.ZipFile(tmp_file.name, 'r') as zip: 65 | zip.printdir() 66 | return zip.read(file_in_zip) 67 | 68 | 69 | def put_job_success(job, message): 70 | """Notify CodePipeline of a successful job 71 | 72 | Args: 73 | job: The CodePipeline job ID 74 | message: A message to be logged relating to the job status 75 | 76 | Raises: 77 | Exception: Any exception thrown by .put_job_success_result() 78 | 79 | """ 80 | print('Putting job success') 81 | print(message) 82 | code_pipeline.put_job_success_result(jobId=job) 83 | 84 | def put_job_failure(job, message): 85 | """Notify CodePipeline of a failed job 86 | 87 | Args: 88 | job: The CodePipeline job ID 89 | message: A message to be logged relating to the job status 90 | 91 | Raises: 92 | Exception: Any exception thrown by .put_job_failure_result() 93 | 94 | """ 95 | print('Putting job failure') 96 | print(message) 97 | code_pipeline.put_job_failure_result(jobId=job, failureDetails={'message': message, 'type': 'JobFailed'}) 98 | 99 | def continue_job_later(job, message): 100 | """Notify CodePipeline of a continuing job 101 | 102 | This will cause CodePipeline to invoke the function again with the 103 | supplied continuation token. 104 | 105 | Args: 106 | job: The JobID 107 | message: A message to be logged relating to the job status 108 | continuation_token: The continuation token 109 | 110 | Raises: 111 | Exception: Any exception thrown by .put_job_success_result() 112 | 113 | """ 114 | 115 | # Use the continuation token to keep track of any job execution state 116 | # This data will be available when a new job is scheduled to continue the current execution 117 | continuation_token = json.dumps({'previous_job_id': job}) 118 | 119 | print('Putting job continuation') 120 | print(message) 121 | code_pipeline.put_job_success_result(jobId=job, continuationToken=continuation_token) 122 | 123 | def get_user_params(job_data): 124 | print(job_data) 125 | """Decodes the JSON user parameters and validates the required properties. 126 | 127 | Args: 128 | job_data: The job data structure containing the UserParameters string which should be a valid JSON structure 129 | 130 | Returns: 131 | The JSON parameters decoded as a dictionary. 132 | 133 | Raises: 134 | Exception: The JSON can't be decoded or a property is missing. 135 | 136 | """ 137 | try: 138 | # Get the user parameters which contain the artifact and file settings 139 | user_parameters = job_data['actionConfiguration']['configuration']['UserParameters'] 140 | decoded_parameters = json.loads(user_parameters) 141 | 142 | except Exception as e: 143 | # We're expecting the user parameters to be encoded as JSON 144 | # so we can pass multiple values. If the JSON can't be decoded 145 | # then fail the job with a helpful message. 
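        # For reference, the pipeline's CFNParsing action sends UserParameters
        # of roughly this shape (the bucket name varies per deployment):
        #   {"input": "TemplateSource",
        #    "file": "codepipe-AWS-devsecops/resources.json",
        #    "output": "<artifact-store-bucket>"}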
146 | raise Exception('UserParameters could not be decoded as JSON') 147 | 148 | if 'input' not in decoded_parameters: 149 | # Validate that the artifact name is provided, otherwise fail the job 150 | # with a helpful message. 151 | raise Exception('Your UserParameters JSON must include the artifact name') 152 | 153 | if 'file' not in decoded_parameters: 154 | # Validate that the template file is provided, otherwise fail the job 155 | # with a helpful message. 156 | raise Exception('Your UserParameters JSON must include the template file name') 157 | 158 | if 'output' not in decoded_parameters: 159 | # Validate that the template file is provided, otherwise fail the job 160 | # with a helpful message. 161 | raise Exception('Your UserParameters JSON must include the output bucket') 162 | 163 | return decoded_parameters 164 | 165 | def setup_s3_client(job_data): 166 | """Creates an S3 client 167 | 168 | Uses the credentials passed in the event by CodePipeline. These 169 | credentials can be used to access the artifact bucket. 170 | 171 | Args: 172 | job_data: The job data structure 173 | 174 | Returns: 175 | An S3 client with the appropriate credentials 176 | 177 | """ 178 | key_id = job_data['artifactCredentials']['accessKeyId'] 179 | key_secret = job_data['artifactCredentials']['secretAccessKey'] 180 | session_token = job_data['artifactCredentials']['sessionToken'] 181 | 182 | session = Session( 183 | aws_access_key_id=key_id, 184 | aws_secret_access_key=key_secret, 185 | aws_session_token=session_token) 186 | return session.client('s3', config=botocore.client.Config(signature_version='s3v4')) 187 | 188 | def get_rules(): 189 | # Find table 190 | client = boto3.client('dynamodb') 191 | resource = boto3.resource('dynamodb') 192 | response = client.list_tables() 193 | logTable = "" 194 | for i in range(len(response['TableNames'])): 195 | if "AWS-devsecops" in response['TableNames'][i]: 196 | logTable = response['TableNames'][i] 197 | 198 | # Verify that rules are created and if not, create them 199 | response = client.scan( 200 | TableName=logTable, 201 | AttributesToGet=[ 202 | 'rule', 203 | ] 204 | ) 205 | if len(response['Items']) == 0: 206 | add_rules(logTable) 207 | time.sleep(45) 208 | 209 | # Get all rules from DDB. 
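    # Each item comes back in low-level DynamoDB attribute form, roughly
    # (values taken from add_rules below):
    #   {'rule': {'S': 'SSHOpenToWorld'}, 'category': {'S': 'SecurityGroup'},
    #    'ruletype': {'S': 'regex'}, 'ruledata': {'S': '<regex>'},
    #    'riskvalue': {'N': '100'}, 'active': {'S': 'Y'}}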
210 | # Rules have rule, ruledata, type and weight 211 | rules = dict() 212 | sgRules = [] 213 | ec2Rules = [] 214 | volRules = [] 215 | 216 | for n in range(len(response['Items'])): 217 | rule = client.get_item( 218 | TableName=logTable, 219 | Key={ 220 | 'rule': {'S':response['Items'][n]['rule']['S']} 221 | }, 222 | ConsistentRead=True 223 | )['Item'] 224 | if rule['category']['S'] == "SecurityGroup": 225 | sgRules.append(rule) 226 | elif rule['category']['S'] == "EC2Instance": 227 | ec2Rules.append(rule) 228 | elif rule['category']['S'] == "Volume": 229 | volRules.append(rule) 230 | 231 | rules['sgRules'] = sgRules 232 | rules['ec2Rules'] = ec2Rules 233 | rules['volRules'] = volRules 234 | return rules 235 | 236 | 237 | def add_rules(logTable): 238 | client = boto3.client('dynamodb') 239 | client.put_item( 240 | TableName=logTable, 241 | Item={ 242 | 'rule' : {'S': "IngressOpenToWorld"}, 243 | 'category' : {'S': "SecurityGroup"}, 244 | 'ruletype' : {'S': "regex"}, 245 | 'ruledata' : {'S': "^.*Ingress.*((0\.){3}0\/0)"}, 246 | 'riskvalue' : {'N': "100"}, 247 | 'active' : {'S': "Y"} 248 | } 249 | ) 250 | client.put_item( 251 | TableName=logTable, 252 | Item={ 253 | 'rule' : {'S': "SSHOpenToWorld"}, 254 | 'category' : {'S': "SecurityGroup"}, 255 | 'ruletype' : {'S': "regex"}, 256 | 'ruledata' : {'S': "^.*Ingress.*(([fF]rom[pP]ort|[tT]o[pP]ort).\s*:\s*u?.(22).*[cC]idr[iI]p.\s*:\s*u?.((0\.){3}0\/0)|[cC]idr[iI]p.\s*:\s*u?.((0\.){3}0\/0).*([fF]rom[pP]ort|[tT]o[pP]ort).\s*:\s*u?.(22))"}, 257 | 'riskvalue' : {'N': "100"}, 258 | 'active' : {'S': "Y"} 259 | } 260 | ) 261 | client.put_item( 262 | TableName=logTable, 263 | Item={ 264 | 'rule' : {'S': "AllowHttp"}, 265 | 'category' : {'S': "SecurityGroup"}, 266 | 'ruletype' : {'S': "regex"}, 267 | 'ruledata' : {'S': "^.*Ingress.*[fF]rom[pP]ort.\s*:\s*u?.(80)"}, 268 | 'riskvalue' : {'N': "3"}, 269 | 'active' : {'S': "N"} 270 | } 271 | ) 272 | client.put_item( 273 | TableName=logTable, 274 | Item={ 275 | 'rule' : {'S': "ForbiddenAMIs"}, 276 | 'category' : {'S': "EC2Instance"}, 277 | 'ruletype' : {'S': "regex"}, 278 | 'ruledata' : {'S': "^.*ImageId.\s*:\s*u?.(ami-7a11e211|ami-08111162|ami-f6035893)"}, 279 | 'riskvalue' : {'N': "10"}, 280 | 'active' : {'S': "N"} 281 | } 282 | ) 283 | client.put_item( 284 | TableName=logTable, 285 | Item={ 286 | 'rule' : {'S': "VolumesNotEncrypted"}, 287 | 'category' : {'S': "Volume"}, 288 | 'ruletype' : {'S': "regex"}, 289 | 'ruledata' : {'S': "^.*Encrypted.?\s*:\s*u?.?false"}, 290 | 'riskvalue' : {'N': "90"}, 291 | 'active' : {'S': "Y"} 292 | } 293 | ) 294 | 295 | 296 | def evaluate_template(rules, template): 297 | # Validate rules and increase risk value 298 | risk = 0 299 | # Extract Security Group Resources 300 | sgResources = [] 301 | ec2Resources = [] 302 | volumeResources = [] 303 | failedRules = [] 304 | jsonTemplate = json.loads(template) 305 | print(json.dumps(jsonTemplate, sort_keys=True, indent=4, separators=(',', ': '))) 306 | print(rules) 307 | for key in jsonTemplate['Resources'].keys(): 308 | if "EC2::SecurityGroup" in jsonTemplate['Resources'][key]['Type']: 309 | sgResources.append(jsonTemplate['Resources'][key]) 310 | elif "EC2::Instance" in jsonTemplate['Resources'][key]['Type']: 311 | ec2Resources.append(jsonTemplate['Resources'][key]) 312 | elif "EC2::Volume" in jsonTemplate['Resources'][key]['Type']: 313 | volumeResources.append(jsonTemplate['Resources'][key]) 314 | 315 | for n in range(len(sgResources)): 316 | for m in range(len(rules['sgRules'])): 317 | if rules['sgRules'][m]['active']['S'] == "Y": 
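                # Only rules marked active are evaluated; each regex match
                # against the stringified resource adds that rule's riskvalue
                # to the running total and records the rule name for reporting.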
318 | if re.match(rules['sgRules'][m]['ruledata']['S'], str(sgResources[n])): 319 | risk = risk + int(rules['sgRules'][m]['riskvalue']['N']) 320 | failedRules.append(str(rules['sgRules'][m]['rule']['S'])) 321 | print("Matched rule: " + str(rules['sgRules'][m]['rule']['S'])) 322 | print("Resource: " + str(sgResources[n])) 323 | print("Riskvalue: " + rules['sgRules'][m]['riskvalue']['N']) 324 | print("") 325 | 326 | for n in range(len(ec2Resources)): 327 | for m in range(len(rules['ec2Rules'])): 328 | if rules['ec2Rules'][m]['active']['S'] == "Y": 329 | if re.match(rules['ec2Rules'][m]['ruledata']['S'], str(ec2Resources[n])): 330 | risk = risk + int(rules['ec2Rules'][m]['riskvalue']['N']) 331 | failedRules.append(str(rules['ec2Rules'][m]['rule']['S'])) 332 | print("Matched rule: " + str(rules['ec2Rules'][m]['rule']['S'])) 333 | print("Resource: " + str(ec2Resources[n])) 334 | print("Riskvalue: " + rules['ec2Rules'][m]['riskvalue']['N']) 335 | print("") 336 | 337 | for n in range(len(volumeResources)): 338 | for m in range(len(rules['volRules'])): 339 | if rules['volRules'][m]['active']['S'] == "Y": 340 | if re.match(rules['volRules'][m]['ruledata']['S'], str(volumeResources[n])): 341 | risk = risk + int(rules['volRules'][m]['riskvalue']['N']) 342 | failedRules.append(str(rules['volRules'][m]['rule']['S'])) 343 | print("Matched rule: " + str(rules['volRules'][m]['rule']['S'])) 344 | print("Resource: " + str(volumeResources[n])) 345 | print("Riskvalue: " + rules['volRules'][m]['riskvalue']['N']) 346 | print("") 347 | print("Risk value: " +str(risk)) 348 | return risk, failedRules 349 | 350 | def s3_next_step(s3, bucket, risk, failedRules, template, job_id): 351 | # Store data in temporary physical file 352 | s3Client = boto3.client('s3', config=botocore.client.Config(signature_version='s3v4')) 353 | tmp_file = tempfile.NamedTemporaryFile() 354 | tmp_zip = tempfile.NamedTemporaryFile() 355 | #for item in template: 356 | #tmp_file.write(item) 357 | tmp_file.write(template) 358 | 359 | 360 | tmp_file.flush() 361 | # Process file based on risk value 362 | if risk < 5: 363 | with zipfile.ZipFile(tmp_zip.name, 'w') as zip: 364 | zip.write(tmp_file.name, "valid.template.json") 365 | zip.close() 366 | s3Client.upload_file( # Add encryption support 367 | tmp_zip.name, 368 | bucket, 369 | 'valid.template.zip') 370 | tmp_file.close() 371 | put_job_success(job_id, 'Job succesful, minimal or no risk detected.') 372 | elif 5 <= risk < 50: 373 | with zipfile.ZipFile(tmp_zip.name, 'w') as zip: 374 | zip.write(tmp_file.name, "flagged.template.json") 375 | zip.close() 376 | s3Client.upload_file( # Add encryption support 377 | tmp_zip.name, 378 | bucket, 379 | 'flagged.template.zip') 380 | tmp_file.close() 381 | put_job_success(job_id, 'Job succesful, medium risk detected, manual approval needed.') 382 | elif risk >= 50: 383 | tmp_file.close() 384 | print("High risk file, fail pipeline") 385 | put_job_failure(job_id, 'Function exception: Failed filters ' + str(failedRules)) 386 | return 0 387 | 388 | 389 | def lambda_handler(event, context): 390 | """The Lambda function handler 391 | 392 | Validate input template for security vulnerables. Route as appropriate based on risk assesment. 
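    Downstream routing (see s3_next_step): a total risk below 5 is treated as
    clean, 5-49 is flagged for manual approval, and 50 or more fails the
    pipeline.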
393 | 394 | Args: 395 | event: The event passed by Lambda 396 | context: The context passed by Lambda 397 | 398 | """ 399 | try: 400 | # Print the entire event for tracking 401 | print("Received event: " + json.dumps(event, indent=2)) 402 | 403 | # Extract the Job ID 404 | job_id = event['CodePipeline.job']['id'] 405 | 406 | # Extract the Job Data 407 | job_data = event['CodePipeline.job']['data'] 408 | 409 | # Extract the params 410 | params = get_user_params(job_data) 411 | 412 | # Get the list of artifacts passed to the function 413 | input_artifacts = job_data['inputArtifacts'] 414 | 415 | input_artifact = params['input'] 416 | template_file = params['file'] 417 | output_bucket = params['output'] 418 | 419 | # Get the artifact details 420 | input_artifact_data = find_artifact(input_artifacts, input_artifact) 421 | 422 | # Get S3 client to access artifact with 423 | s3 = setup_s3_client(job_data) 424 | 425 | # Get the JSON template file out of the artifact 426 | template = get_template(s3, input_artifact_data, template_file) 427 | print("Template: " + template_file) 428 | 429 | # Get validation rules from DDB 430 | rules = get_rules() 431 | 432 | # Validate template from risk perspective. FailedRules can be used if you wish to expand the script to report failed items 433 | risk, failedRules = evaluate_template(rules, template) 434 | 435 | # Based on risk, store the template in the correct S3 bucket for future process 436 | s3_next_step(s3, output_bucket, risk, failedRules, template, job_id) 437 | 438 | except Exception as e: 439 | # If any other exceptions which we didn't expect are raised 440 | # then fail the job and log the exception message. 441 | print('Function failed due to exception.') 442 | print(e) 443 | traceback.print_exc() 444 | put_job_failure(job_id, 'Function exception: ' + str(e)) 445 | 446 | print('Function complete.') 447 | return "Complete." -------------------------------------------------------------------------------- /code/SecGuardRails/stack_validate_lambda.py: -------------------------------------------------------------------------------- 1 | """Summary 2 | Attributes: 3 | AWS_CIS_BENCHMARK_VERSION (str): Description 4 | CONFIG_RULE (bool): Description 5 | CONTROL_1_1_DAYS (int): Description 6 | REGIONS (list): Description 7 | SCRIPT_OUTPUT_JSON (bool): Description 8 | """ 9 | 10 | from __future__ import print_function 11 | import json 12 | import csv 13 | import time 14 | import sys 15 | import re 16 | import tempfile 17 | from datetime import datetime 18 | import boto3 19 | import botocore 20 | import traceback 21 | import zipfile 22 | 23 | # Would you like to print the results as JSON to output? 
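# Controls implemented below (loosely modelled on section 4 of the CIS AWS
# benchmark):
#   4.1 - no security group in the stack allows ingress to port 22 from
#         anywhere other than the approved 1.2.3.4/32 range
#   4.2 - the stack's S3 bucket is not publicly accessible via policy or ACL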
24 | SCRIPT_OUTPUT_JSON = True 25 | code_pipeline = boto3.client('codepipeline') 26 | EC2_CLIENT = boto3.client('ec2') 27 | cf = boto3.client('cloudformation') 28 | 29 | 30 | def put_job_success(job, message): 31 | """Notify CodePipeline of a successful job 32 | 33 | Args: 34 | job: The CodePipeline job ID 35 | message: A message to be logged relating to the job status 36 | 37 | Raises: 38 | Exception: Any exception thrown by .put_job_success_result() 39 | 40 | """ 41 | print('Putting job success') 42 | print(message) 43 | code_pipeline.put_job_success_result(jobId=job) 44 | 45 | 46 | def put_job_failure(job, message): 47 | """Notify CodePipeline of a failed job 48 | 49 | Args: 50 | job: The CodePipeline job ID 51 | message: A message to be logged relating to the job status 52 | 53 | Raises: 54 | Exception: Any exception thrown by .put_job_failure_result() 55 | 56 | """ 57 | print('Putting job failure') 58 | print(message) 59 | code_pipeline.put_job_failure_result(jobId=job, failureDetails={'message': message, 'type': 'JobFailed'}) 60 | 61 | 62 | def continue_job_later(job, message): 63 | """Notify CodePipeline of a continuing job 64 | 65 | This will cause CodePipeline to invoke the function again with the 66 | supplied continuation token. 67 | 68 | Args: 69 | job: The JobID 70 | message: A message to be logged relating to the job status 71 | continuation_token: The continuation token 72 | 73 | Raises: 74 | Exception: Any exception thrown by .put_job_success_result() 75 | 76 | """ 77 | 78 | # Use the continuation token to keep track of any job execution state 79 | # This data will be available when a new job is scheduled to continue the current execution 80 | continuation_token = json.dumps({'previous_job_id': job}) 81 | 82 | print('Putting job continuation') 83 | print(message) 84 | code_pipeline.put_job_success_result(jobId=job, continuationToken=continuation_token) 85 | 86 | 87 | def stack_exists(stack): 88 | """Check if a stack exists or not 89 | 90 | Args: 91 | stack: The stack to check 92 | 93 | Returns: 94 | True or False depending on whether the stack exists 95 | 96 | Raises: 97 | Any exceptions raised .describe_stacks() besides that 98 | the stack doesn't exist. 
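    Example (illustrative; the stack name is the pipeline's TestStackName):

        if stack_exists('AWS-devsecops-TestStack'):
            delete_stack('AWS-devsecops-TestStack')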
99 | 100 | """ 101 | try: 102 | cf.describe_stacks(StackName=stack) 103 | return True 104 | except botocore.exceptions.ClientError as e: 105 | if "does not exist" in e.response['Error']['Message']: 106 | return False 107 | else: 108 | raise e 109 | 110 | 111 | def delete_stack(stack): 112 | """CloudFormation stack deletion 113 | 114 | Args: 115 | stack: The stack to be created 116 | 117 | Throws: 118 | Exception: Any exception thrown by .create_stack() 119 | """ 120 | cf.delete_stack(StackName=stack) 121 | 122 | 123 | # --- Security Groups --- 124 | # 4.1 Ensure no security groups allow ingress from 0.0.0.0/0 to port 22 (Scored) 125 | def control_4_1_ensure_ssh_not_open_to_world(regions, stackName): 126 | """Summary 127 | 128 | Returns: 129 | TYPE: Description 130 | """ 131 | result = True 132 | failReason = "" 133 | offenders = [] 134 | control = "4.1" 135 | description = "Ensure that security groups allow ingress from approved CIDR range to port 22" 136 | scored = True 137 | for n in regions: 138 | client = boto3.client('ec2', region_name=n) 139 | response = client.describe_security_groups( 140 | Filters=[{'Name': 'tag:aws:cloudformation:stack-name', 'Values': [stackName]}]) 141 | for m in response['SecurityGroups']: 142 | if "1.2.3.4/32" not in str(m['IpPermissions']): 143 | for o in m['IpPermissions']: 144 | try: 145 | if int(o['FromPort']) <= 22 <= int(o['ToPort']): 146 | result = False 147 | failReason = "Found Security Group with port 22 open to the wrong source IP range. Allowed IP is: 1.2.3.4/32" 148 | offenders.append(str(m['GroupId'])) 149 | except: 150 | if str(o['IpProtocol']) == "-1": 151 | result = False 152 | failReason = "Found Security Group with port 22 open to the wrong source IP range. Allowed IP is: 1.2.3.4/32" 153 | offenders.append(str(n) + " : " + str(m['GroupId'])) 154 | return {'Result': result, 'failReason': failReason, 'Offenders': offenders, 'ScoredControl': scored, 155 | 'Description': description, 'ControlId': control} 156 | 157 | # --- S3 Access control --- 158 | # 4.2 Ensure S3 bucket is not publicly accessible 159 | def control_4_2_no_global_s3(stackName): 160 | """Summary 161 | Returns: 162 | TYPE: Description 163 | """ 164 | 165 | # Getting the s3 bucket name first from cloudformation 166 | cfn = boto3.client('cloudformation') 167 | cfnResourceBucketInfo = cfn.describe_stack_resource(StackName=stackName,LogicalResourceId='S3Bucket') 168 | s3BucketName = cfnResourceBucketInfo['StackResourceDetail']['PhysicalResourceId'] 169 | 170 | hasPassed = True 171 | failReason = "" 172 | offenders = [] 173 | control = "4.2" 174 | description = "Ensure that there are no S3 elements exposed to the public" 175 | scored = True 176 | client = boto3.client('s3') 177 | 178 | # First check bucket policy 179 | try: 180 | response = client.get_bucket_policy(Bucket=s3BucketName) 181 | policyJson = json.loads(response['Policy']) 182 | for statement in policyJson['Statement']: 183 | print(statement) 184 | if (statement['Principal'] and ('*' in statement['Principal'])) and (statement['Effect'] and ('Allow' in statement['Effect'])) and (statement['Action'] and ('*' in statement['Action'])): 185 | hasPassed = False 186 | failReason = 'Bucket [' + s3BucketName + '] has Allow policy for everyone.' 
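                # e.g. a policy statement such as {"Effect": "Allow",
                # "Principal": "*", "Action": "s3:*"} on the bucket is the
                # kind of wide-open grant this condition catches.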
187 | offenders.append(s3BucketName) 188 | except botocore.exceptions.ClientError as exp: 189 | if 'NoSuchBucketPolicy' in str(exp): 190 | # no policy is fine 191 | hasPassed = True 192 | 193 | if hasPassed: 194 | # check secondary ACL properties 195 | try: 196 | aclResponse = client.get_bucket_acl(Bucket=s3BucketName) 197 | for aGrant in aclResponse['Grants']: 198 | # contains definitions for all users then it should be invalid 199 | if (aGrant['Grantee']['Type'] == 'Group') and (aGrant['Grantee']['URI']) and ('groups/global/AllUsers' in aGrant['Grantee']['URI']): 200 | print ('Found information about Global All users. This is not permitted') 201 | hasPassed = False 202 | offenders.append(s3BucketName) 203 | failReason = s3BucketName + " contains ACL specifications for All Users. Update S3 AccessControl property" 204 | except botocore.exceptions.ClientError as expAcl: 205 | print('problems extracting ACL information') 206 | hasPassed = False 207 | offenders.append(s3BucketName) 208 | failReason = s3BucketName + " cannot read ACL information. Please check permissions on this lambda script" 209 | 210 | return {'Result': hasPassed, 'failReason': failReason, 'Offenders': offenders, 'ScoredControl': scored, 211 | 'Description': description, 'ControlId': control} 212 | 213 | def get_regions(): 214 | region_response = EC2_CLIENT.describe_regions() 215 | regions = [region['RegionName'] for region in region_response['Regions']] 216 | return regions 217 | 218 | def json_output(controlResult): 219 | """Summary 220 | 221 | Args: 222 | controlResult (TYPE): Description 223 | 224 | Returns: 225 | TYPE: Description 226 | """ 227 | inner = dict() 228 | outer = dict() 229 | for m in range(len(controlResult)): 230 | inner = dict() 231 | for n in range(len(controlResult[m])): 232 | x = int(controlResult[m][n]['ControlId'].split('.')[1]) 233 | inner[x] = controlResult[m][n] 234 | y = controlResult[m][0]['ControlId'].split('.')[0] 235 | outer[y] = inner 236 | print("JSON output:") 237 | print("-------------------------------------------------------") 238 | print(json.dumps(outer, sort_keys=True, indent=4, separators=(',', ': '))) 239 | print("-------------------------------------------------------") 240 | print("\n") 241 | print("Summary:") 242 | print(shortAnnotation(controlResult)) 243 | print("\n") 244 | return 0 245 | 246 | 247 | def shortAnnotation(controlResult): 248 | """Summary 249 | 250 | Args: 251 | controlResult (TYPE): Description 252 | 253 | Returns: 254 | TYPE: Description 255 | """ 256 | annotation = [] 257 | longAnnotation = False 258 | for m, _ in enumerate(controlResult): 259 | for n in range(len(controlResult[m])): 260 | if controlResult[m][n]['Result'] is False: 261 | if len(str(annotation)) < 220: 262 | annotation.append(controlResult[m][n]['ControlId']) 263 | else: 264 | longAnnotation = True 265 | if longAnnotation: 266 | annotation.append("etc") 267 | return "{\"Failed\":" + json.dumps(annotation) + "}" 268 | else: 269 | return "{\"Failed\":" + json.dumps(annotation) + "}" 270 | 271 | 272 | def lambda_handler(event, context): 273 | """Summary 274 | 275 | Args: 276 | event (TYPE): Description 277 | context (TYPE): Description 278 | 279 | Returns: 280 | TYPE: Description 281 | """ 282 | # Run all control validations. 
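    # The UserParameters for this action (set in Pipeline.yml) carry the name
    # of the test stack to inspect; if any control below fails, that stack is
    # deleted and the CodePipeline job is marked as failed.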
283 | # The control object is a dictionary with the value 284 | # result : Boolean - True/False 285 | # failReason : String - Failure description 286 | # scored : Boolean - True/False 287 | # Check if the script is initiade from AWS Config Rules 288 | # Print the entire event for tracking 289 | print("Received event: " + json.dumps(event, indent=2)) 290 | # Extract the Job ID 291 | job_id = event['CodePipeline.job']['id'] 292 | # Globally used resources 293 | region_list = get_regions() 294 | stackName = event['CodePipeline.job']['data']['actionConfiguration']['configuration']['UserParameters'] 295 | print("stackName: " + stackName) 296 | 297 | # Run individual controls. 298 | # Comment out unwanted controls 299 | control4 = [] 300 | control_4_1_result = control_4_1_ensure_ssh_not_open_to_world(region_list, stackName) 301 | print('control_4_1_result: ' + str(control_4_1_result['Result'])) 302 | control4.append(control_4_1_result) 303 | 304 | # Running 4.2 control for s3 protection 305 | control_4_2_result = control_4_2_no_global_s3(stackName) 306 | print('control_4_2_result: ' + str(control_4_2_result['Result'])) 307 | control4.append(control_4_2_result) 308 | 309 | # Join results 310 | controls = [] 311 | controls.append(control4) 312 | 313 | # Build JSON structure for console output if enabled 314 | if SCRIPT_OUTPUT_JSON: 315 | json_output(controls) 316 | 317 | # iterate through controls for error checks 318 | for control in controls: 319 | for controlspec in control: 320 | if controlspec['Result'] is False: 321 | print("\n") 322 | if stack_exists(stackName): 323 | delete_stack(stackName) 324 | put_job_failure(job_id, controlspec['failReason']) 325 | return 326 | 327 | # found nothing and is good to go 328 | print("\n") 329 | put_job_success(job_id, 'Job succesful, minimal or no risk detected.') -------------------------------------------------------------------------------- /code/cfn_encrypted_ebs.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | from boto3.session import Session 3 | 4 | import json 5 | import urllib 6 | import boto3 7 | import zipfile 8 | import tempfile 9 | import botocore 10 | import traceback 11 | import re 12 | import zipfile 13 | import time 14 | 15 | print('Loading function') 16 | print () 17 | cf = boto3.client('cloudformation') 18 | code_pipeline = boto3.client('codepipeline') 19 | 20 | def find_artifact(artifacts, name): 21 | """Finds the artifact 'name' among the 'artifacts' 22 | 23 | Args: 24 | artifacts: The list of artifacts available to the function 25 | name: The artifact we wish to use 26 | Returns: 27 | The artifact dictionary found 28 | Raises: 29 | Exception: If no matching artifact is found 30 | 31 | """ 32 | for artifact in artifacts: 33 | if artifact['name'] == name: 34 | return artifact 35 | 36 | raise Exception('Input artifact named "{0}" not found in event'.format(name)) 37 | 38 | def get_user_params(job_data): 39 | print(job_data) 40 | """Decodes the JSON user parameters and validates the required properties. 41 | 42 | Args: 43 | job_data: The job data structure containing the UserParameters string which should be a valid JSON structure 44 | 45 | Returns: 46 | The JSON parameters decoded as a dictionary. 47 | 48 | Raises: 49 | Exception: The JSON can't be decoded or a property is missing. 
50 | 51 | """ 52 | try: 53 | # Get the user parameters which contain the artifact and file settings 54 | user_parameters = job_data['actionConfiguration']['configuration']['UserParameters'] 55 | decoded_parameters = json.loads(user_parameters) 56 | 57 | except Exception as e: 58 | # We're expecting the user parameters to be encoded as JSON 59 | # so we can pass multiple values. If the JSON can't be decoded 60 | # then fail the job with a helpful message. 61 | raise Exception('UserParameters could not be decoded as JSON') 62 | 63 | if 'input' not in decoded_parameters: 64 | # Validate that the artifact name is provided, otherwise fail the job 65 | # with a helpful message. 66 | raise Exception('Your UserParameters JSON must include the artifact name') 67 | 68 | if 'file' not in decoded_parameters: 69 | # Validate that the template file is provided, otherwise fail the job 70 | # with a helpful message. 71 | raise Exception('Your UserParameters JSON must include the template file name') 72 | 73 | if 'output' not in decoded_parameters: 74 | # Validate that the template file is provided, otherwise fail the job 75 | # with a helpful message. 76 | raise Exception('Your UserParameters JSON must include the output bucket') 77 | 78 | return decoded_parameters 79 | 80 | def get_template(s3, artifact, file_in_zip): 81 | """Gets the template artifact 82 | 83 | Downloads the artifact from the S3 artifact store to a temporary file 84 | then extracts the zip and returns the file containing the CloudFormation 85 | template. 86 | 87 | Args: 88 | artifact: The artifact to download 89 | file_in_zip: The path to the file within the zip containing the template 90 | 91 | Returns: 92 | The CloudFormation template as a string 93 | 94 | Raises: 95 | Exception: Any exception thrown while downloading the artifact or unzipping it 96 | 97 | """ 98 | tmp_file = tempfile.NamedTemporaryFile() 99 | bucket = artifact['location']['s3Location']['bucketName'] 100 | key = artifact['location']['s3Location']['objectKey'] 101 | 102 | with tempfile.NamedTemporaryFile() as tmp_file: 103 | print("Retrieving s3://" + bucket + "/" + key) 104 | s3.download_file(bucket, key, tmp_file.name) 105 | with zipfile.ZipFile(tmp_file.name, 'r') as zip: 106 | zip.printdir() 107 | return zip.read(file_in_zip) 108 | 109 | 110 | def put_job_success(job, message): 111 | """Notify CodePipeline of a successful job 112 | 113 | Args: 114 | job: The CodePipeline job ID 115 | message: A message to be logged relating to the job status 116 | 117 | Raises: 118 | Exception: Any exception thrown by .put_job_success_result() 119 | 120 | """ 121 | print('Putting job success') 122 | print(message) 123 | code_pipeline.put_job_success_result(jobId=job) 124 | 125 | def put_job_failure(job, message): 126 | """Notify CodePipeline of a failed job 127 | 128 | Args: 129 | job: The CodePipeline job ID 130 | message: A message to be logged relating to the job status 131 | 132 | Raises: 133 | Exception: Any exception thrown by .put_job_failure_result() 134 | 135 | """ 136 | print('Putting job failure') 137 | print(message) 138 | code_pipeline.put_job_failure_result(jobId=job, failureDetails={'message': message, 'type': 'JobFailed'}) 139 | 140 | def continue_job_later(job, message): 141 | """Notify CodePipeline of a continuing job 142 | 143 | This will cause CodePipeline to invoke the function again with the 144 | supplied continuation token. 
145 | 146 | Args: 147 | job: The JobID 148 | message: A message to be logged relating to the job status 149 | continuation_token: The continuation token 150 | 151 | Raises: 152 | Exception: Any exception thrown by .put_job_success_result() 153 | 154 | """ 155 | 156 | # Use the continuation token to keep track of any job execution state 157 | # This data will be available when a new job is scheduled to continue the current execution 158 | continuation_token = json.dumps({'previous_job_id': job}) 159 | 160 | print('Putting job continuation') 161 | print(message) 162 | code_pipeline.put_job_success_result(jobId=job, continuationToken=continuation_token) 163 | 164 | def get_user_params(job_data): 165 | print(job_data) 166 | """Decodes the JSON user parameters and validates the required properties. 167 | 168 | Args: 169 | job_data: The job data structure containing the UserParameters string which should be a valid JSON structure 170 | 171 | Returns: 172 | The JSON parameters decoded as a dictionary. 173 | 174 | Raises: 175 | Exception: The JSON can't be decoded or a property is missing. 176 | 177 | """ 178 | try: 179 | # Get the user parameters which contain the artifact and file settings 180 | user_parameters = job_data['actionConfiguration']['configuration']['UserParameters'] 181 | decoded_parameters = json.loads(user_parameters) 182 | 183 | except Exception as e: 184 | # We're expecting the user parameters to be encoded as JSON 185 | # so we can pass multiple values. If the JSON can't be decoded 186 | # then fail the job with a helpful message. 187 | raise Exception('UserParameters could not be decoded as JSON') 188 | 189 | if 'input' not in decoded_parameters: 190 | # Validate that the artifact name is provided, otherwise fail the job 191 | # with a helpful message. 192 | raise Exception('Your UserParameters JSON must include the artifact name') 193 | 194 | if 'file' not in decoded_parameters: 195 | # Validate that the template file is provided, otherwise fail the job 196 | # with a helpful message. 197 | raise Exception('Your UserParameters JSON must include the template file name') 198 | 199 | if 'output' not in decoded_parameters: 200 | # Validate that the template file is provided, otherwise fail the job 201 | # with a helpful message. 202 | raise Exception('Your UserParameters JSON must include the output bucket') 203 | 204 | return decoded_parameters 205 | 206 | def setup_s3_client(job_data): 207 | """Creates an S3 client 208 | 209 | Uses the credentials passed in the event by CodePipeline. These 210 | credentials can be used to access the artifact bucket. 
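    Example (illustrative, mirroring how get_template uses the client):

        s3 = setup_s3_client(job_data)
        s3.download_file(bucket, key, tmp_file.name)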
211 | 212 | Args: 213 | job_data: The job data structure 214 | 215 | Returns: 216 | An S3 client with the appropriate credentials 217 | 218 | """ 219 | key_id = job_data['artifactCredentials']['accessKeyId'] 220 | key_secret = job_data['artifactCredentials']['secretAccessKey'] 221 | session_token = job_data['artifactCredentials']['sessionToken'] 222 | 223 | session = Session( 224 | aws_access_key_id=key_id, 225 | aws_secret_access_key=key_secret, 226 | aws_session_token=session_token) 227 | return session.client('s3', config=botocore.client.Config(signature_version='s3v4')) 228 | 229 | 230 | def evaluate_template(template, job_id): 231 | # Validate rules and increase risk value 232 | #print(template) 233 | risk = 0 234 | failedRules = [] 235 | print("----------------") 236 | template = json.loads(template) 237 | 238 | #print(template) 239 | 240 | for r in template['Resources']: 241 | #print(template['Resources'][r]) 242 | for s in template['Resources'][r]: 243 | #print(s) 244 | if template['Resources'][r][s] == template['Resources'][r]['Type']: 245 | #print(template['Resources'][r]['Type']) 246 | if template['Resources'][r]['Type'] == 'AWS::EC2::Volume': 247 | #print(template['Resources'][r]['Properties']) 248 | try: 249 | if template['Resources'][r]['Properties']['Encrypted'] == 'true': 250 | print('Found encrypted EBS volume') 251 | if template['Resources'][r]['Properties']['Encrypted'] == 'false': 252 | risk = risk + 100 253 | print("Risk value: " +str(risk)) 254 | failedRules.append("Found unencrypted EBS volume.") 255 | print("killing job") 256 | put_job_failure(job_id, "EBS Encryption is set to false") 257 | print('EBS encryption flag is neither true or false.') 258 | except: 259 | risk = risk + 100 260 | print("Risk value: " +str(risk)) 261 | failedRules.append("EBS volume with no encryption configured.") 262 | print("killing job") 263 | put_job_failure(job_id, "EBS Encryption is set to false") 264 | print("----------------") 265 | 266 | if risk > 10: 267 | print("good job") 268 | put_job_success(job_id, 'Job succesful, minimal or no risk detected.') 269 | 270 | return risk, failedRules 271 | 272 | def s3_next_step(s3, bucket, risk, failedRules, template, job_id): 273 | # Store data in temporary physical file 274 | s3Client = boto3.client('s3', config=botocore.client.Config(signature_version='s3v4')) 275 | tmp_file = tempfile.NamedTemporaryFile() 276 | tmp_zip = tempfile.NamedTemporaryFile() 277 | for item in template: 278 | tmp_file.write(item) 279 | tmp_file.flush() 280 | # Process file based on risk value 281 | if risk < 5: 282 | with zipfile.ZipFile(tmp_zip.name, 'w') as zip: 283 | zip.write(tmp_file.name, "valid.template.json") 284 | zip.close() 285 | s3Client.upload_file( # Add encryption support 286 | tmp_zip.name, 287 | bucket, 288 | 'valid.template.zip') 289 | tmp_file.close() 290 | put_job_success(job_id, 'Job succesful, minimal or no risk detected.') 291 | elif 5 <= risk < 50: 292 | with zipfile.ZipFile(tmp_zip.name, 'w') as zip: 293 | zip.write(tmp_file.name, "flagged.template.json") 294 | zip.close() 295 | s3Client.upload_file( # Add encryption support 296 | tmp_zip.name, 297 | bucket, 298 | 'flagged.template.zip') 299 | tmp_file.close() 300 | put_job_success(job_id, 'Job succesful, medium risk detected, manual approval needed.') 301 | elif risk >= 50: 302 | tmp_file.close() 303 | print("High risk file, fail pipeline") 304 | put_job_failure(job_id, 'Function exception: Failed filters ' + str(failedRules)) 305 | return 0 306 | 307 | 308 | def lambda_handler(event, 
context): 309 | """The Lambda function handler 310 | 311 | Validate input template for security vulnerables. Route as appropriate based on risk assesment. 312 | 313 | Args: 314 | event: The event passed by Lambda 315 | context: The context passed by Lambda 316 | 317 | """ 318 | try: 319 | # Print the entire event for tracking 320 | print("Received event: " + json.dumps(event, indent=2)) 321 | print(event) 322 | # Extract the Job ID 323 | job_id = event['CodePipeline.job']['id'] 324 | 325 | # Extract the Job Data 326 | job_data = event['CodePipeline.job']['data'] 327 | 328 | # Extract the params 329 | user_parameters = job_data['actionConfiguration']['configuration']['UserParameters'] 330 | decoded_parameters = json.loads(user_parameters) 331 | 332 | # Get the list of artifacts passed to the function 333 | input_artifacts = job_data['inputArtifacts'] 334 | 335 | params = get_user_params(job_data) 336 | 337 | input_artifact = params['input'] 338 | template_file = params['file'] 339 | output_bucket = params['output'] 340 | 341 | # Get the artifact details 342 | input_artifact_data = find_artifact(input_artifacts, input_artifact) 343 | 344 | # Get S3 client to access artifact with 345 | s3 = setup_s3_client(job_data) 346 | 347 | # Get the JSON template file out of the artifact 348 | template = get_template(s3, input_artifact_data, template_file) 349 | #print("Template: " + str(template)) 350 | 351 | 352 | # Validate template from risk perspective. FailedRules can be used if you wish to expand the script to report failed items 353 | risk, failedRules = evaluate_template(template, job_id) 354 | 355 | # Based on risk, store the template in the correct S3 bucket for future process 356 | s3_next_step(s3, output_bucket, risk, failedRules, template, job_id) 357 | 358 | except Exception as e: 359 | # If any other exceptions which we didn't expect are raised 360 | # then fail the job and log the exception message. 361 | print('Function failed due to exception.') 362 | print(e) 363 | traceback.print_exc() 364 | put_job_failure(job_id, 'Function exception: ' + str(e)) 365 | 366 | print('Function complete.') 367 | return "Complete." -------------------------------------------------------------------------------- /code/cfn_ftp_port.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | from boto3.session import Session 3 | 4 | import json 5 | import urllib 6 | import boto3 7 | import zipfile 8 | import tempfile 9 | import botocore 10 | import traceback 11 | import re 12 | import zipfile 13 | import time 14 | 15 | print('Loading function') 16 | print () 17 | cf = boto3.client('cloudformation') 18 | code_pipeline = boto3.client('codepipeline') 19 | 20 | def find_artifact(artifacts, name): 21 | """Finds the artifact 'name' among the 'artifacts' 22 | 23 | Args: 24 | artifacts: The list of artifacts available to the function 25 | name: The artifact we wish to use 26 | Returns: 27 | The artifact dictionary found 28 | Raises: 29 | Exception: If no matching artifact is found 30 | 31 | """ 32 | for artifact in artifacts: 33 | if artifact['name'] == name: 34 | return artifact 35 | 36 | raise Exception('Input artifact named "{0}" not found in event'.format(name)) 37 | 38 | def get_user_params(job_data): 39 | print(job_data) 40 | """Decodes the JSON user parameters and validates the required properties. 
41 | 42 | Args: 43 | job_data: The job data structure containing the UserParameters string which should be a valid JSON structure 44 | 45 | Returns: 46 | The JSON parameters decoded as a dictionary. 47 | 48 | Raises: 49 | Exception: The JSON can't be decoded or a property is missing. 50 | 51 | """ 52 | try: 53 | # Get the user parameters which contain the artifact and file settings 54 | user_parameters = job_data['actionConfiguration']['configuration']['UserParameters'] 55 | decoded_parameters = json.loads(user_parameters) 56 | 57 | except Exception as e: 58 | # We're expecting the user parameters to be encoded as JSON 59 | # so we can pass multiple values. If the JSON can't be decoded 60 | # then fail the job with a helpful message. 61 | raise Exception('UserParameters could not be decoded as JSON') 62 | 63 | if 'input' not in decoded_parameters: 64 | # Validate that the artifact name is provided, otherwise fail the job 65 | # with a helpful message. 66 | raise Exception('Your UserParameters JSON must include the artifact name') 67 | 68 | if 'file' not in decoded_parameters: 69 | # Validate that the template file is provided, otherwise fail the job 70 | # with a helpful message. 71 | raise Exception('Your UserParameters JSON must include the template file name') 72 | 73 | if 'output' not in decoded_parameters: 74 | # Validate that the template file is provided, otherwise fail the job 75 | # with a helpful message. 76 | raise Exception('Your UserParameters JSON must include the output bucket') 77 | 78 | return decoded_parameters 79 | 80 | def get_template(s3, artifact, file_in_zip): 81 | """Gets the template artifact 82 | 83 | Downloads the artifact from the S3 artifact store to a temporary file 84 | then extracts the zip and returns the file containing the CloudFormation 85 | template. 
86 | 87 | Args: 88 | artifact: The artifact to download 89 | file_in_zip: The path to the file within the zip containing the template 90 | 91 | Returns: 92 | The CloudFormation template as a string 93 | 94 | Raises: 95 | Exception: Any exception thrown while downloading the artifact or unzipping it 96 | 97 | """ 98 | tmp_file = tempfile.NamedTemporaryFile() 99 | bucket = artifact['location']['s3Location']['bucketName'] 100 | key = artifact['location']['s3Location']['objectKey'] 101 | 102 | with tempfile.NamedTemporaryFile() as tmp_file: 103 | print("Retrieving s3://" + bucket + "/" + key) 104 | s3.download_file(bucket, key, tmp_file.name) 105 | with zipfile.ZipFile(tmp_file.name, 'r') as zip: 106 | zip.printdir() 107 | return zip.read(file_in_zip) 108 | 109 | 110 | def put_job_success(job, message): 111 | """Notify CodePipeline of a successful job 112 | 113 | Args: 114 | job: The CodePipeline job ID 115 | message: A message to be logged relating to the job status 116 | 117 | Raises: 118 | Exception: Any exception thrown by .put_job_success_result() 119 | 120 | """ 121 | print('Putting job success') 122 | print(message) 123 | code_pipeline.put_job_success_result(jobId=job) 124 | 125 | def put_job_failure(job, message): 126 | """Notify CodePipeline of a failed job 127 | 128 | Args: 129 | job: The CodePipeline job ID 130 | message: A message to be logged relating to the job status 131 | 132 | Raises: 133 | Exception: Any exception thrown by .put_job_failure_result() 134 | 135 | """ 136 | print('Putting job failure') 137 | print(message) 138 | code_pipeline.put_job_failure_result(jobId=job, failureDetails={'message': message, 'type': 'JobFailed'}) 139 | 140 | def continue_job_later(job, message): 141 | """Notify CodePipeline of a continuing job 142 | 143 | This will cause CodePipeline to invoke the function again with the 144 | supplied continuation token. 145 | 146 | Args: 147 | job: The JobID 148 | message: A message to be logged relating to the job status 149 | continuation_token: The continuation token 150 | 151 | Raises: 152 | Exception: Any exception thrown by .put_job_success_result() 153 | 154 | """ 155 | 156 | # Use the continuation token to keep track of any job execution state 157 | # This data will be available when a new job is scheduled to continue the current execution 158 | continuation_token = json.dumps({'previous_job_id': job}) 159 | 160 | print('Putting job continuation') 161 | print(message) 162 | code_pipeline.put_job_success_result(jobId=job, continuationToken=continuation_token) 163 | 164 | def get_user_params(job_data): 165 | print(job_data) 166 | """Decodes the JSON user parameters and validates the required properties. 167 | 168 | Args: 169 | job_data: The job data structure containing the UserParameters string which should be a valid JSON structure 170 | 171 | Returns: 172 | The JSON parameters decoded as a dictionary. 173 | 174 | Raises: 175 | Exception: The JSON can't be decoded or a property is missing. 176 | 177 | """ 178 | try: 179 | # Get the user parameters which contain the artifact and file settings 180 | user_parameters = job_data['actionConfiguration']['configuration']['UserParameters'] 181 | decoded_parameters = json.loads(user_parameters) 182 | 183 | except Exception as e: 184 | # We're expecting the user parameters to be encoded as JSON 185 | # so we can pass multiple values. If the JSON can't be decoded 186 | # then fail the job with a helpful message. 
187 | raise Exception('UserParameters could not be decoded as JSON') 188 | 189 | if 'input' not in decoded_parameters: 190 | # Validate that the artifact name is provided, otherwise fail the job 191 | # with a helpful message. 192 | raise Exception('Your UserParameters JSON must include the artifact name') 193 | 194 | if 'file' not in decoded_parameters: 195 | # Validate that the template file is provided, otherwise fail the job 196 | # with a helpful message. 197 | raise Exception('Your UserParameters JSON must include the template file name') 198 | 199 | if 'output' not in decoded_parameters: 200 | # Validate that the template file is provided, otherwise fail the job 201 | # with a helpful message. 202 | raise Exception('Your UserParameters JSON must include the output bucket') 203 | 204 | return decoded_parameters 205 | 206 | def setup_s3_client(job_data): 207 | """Creates an S3 client 208 | 209 | Uses the credentials passed in the event by CodePipeline. These 210 | credentials can be used to access the artifact bucket. 211 | 212 | Args: 213 | job_data: The job data structure 214 | 215 | Returns: 216 | An S3 client with the appropriate credentials 217 | 218 | """ 219 | key_id = job_data['artifactCredentials']['accessKeyId'] 220 | key_secret = job_data['artifactCredentials']['secretAccessKey'] 221 | session_token = job_data['artifactCredentials']['sessionToken'] 222 | 223 | session = Session( 224 | aws_access_key_id=key_id, 225 | aws_secret_access_key=key_secret, 226 | aws_session_token=session_token) 227 | return session.client('s3', config=botocore.client.Config(signature_version='s3v4')) 228 | 229 | 230 | def evaluate_template(template, job_id): 231 | # Validate rules and increase risk value 232 | #print(template) 233 | risk = 0 234 | failedRules = [] 235 | print("----------------") 236 | template = json.loads(template) 237 | 238 | #print(template) 239 | 240 | for r in template['Resources']: 241 | #print(template['Resources'][r]) 242 | for s in template['Resources'][r]: 243 | #print(s) 244 | if template['Resources'][r][s] == template['Resources'][r]['Type']: 245 | #print(template['Resources'][r]['Type']) 246 | if template['Resources'][r]['Type'] == 'AWS::EC2::SecurityGroup': 247 | #print(template['Resources'][r]['Properties']) 248 | for sg in template['Resources'][r]['Properties']['SecurityGroupIngress']: 249 | if sg == 'FromPort': 250 | print(template['Resources'][r]['Properties']['SecurityGroupIngress']['FromPort']) 251 | if str(template['Resources'][r]['Properties']['SecurityGroupIngress']['FromPort']) == '21': 252 | risk = risk + 100 253 | print("Risk value: " +str(risk)) 254 | failedRules.append("Found FTP port.") 255 | print("killing job") 256 | put_job_failure(job_id, "Found FTP port.") 257 | print('Found FTP port.') 258 | print("----------------") 259 | 260 | if risk > 10: 261 | print("good job") 262 | put_job_success(job_id, 'Job succesful, minimal or no risk detected.') 263 | 264 | return risk, failedRules 265 | 266 | def s3_next_step(s3, bucket, risk, failedRules, template, job_id): 267 | # Store data in temporary physical file 268 | s3Client = boto3.client('s3', config=botocore.client.Config(signature_version='s3v4')) 269 | tmp_file = tempfile.NamedTemporaryFile() 270 | tmp_zip = tempfile.NamedTemporaryFile() 271 | for item in template: 272 | tmp_file.write(item) 273 | tmp_file.flush() 274 | # Process file based on risk value 275 | if risk < 5: 276 | with zipfile.ZipFile(tmp_zip.name, 'w') as zip: 277 | zip.write(tmp_file.name, "valid.template.json") 278 | zip.close() 279 
| s3Client.upload_file( # Add encryption support 280 | tmp_zip.name, 281 | bucket, 282 | 'valid.template.zip') 283 | tmp_file.close() 284 | put_job_success(job_id, 'Job succesful, minimal or no risk detected.') 285 | elif 5 <= risk < 50: 286 | with zipfile.ZipFile(tmp_zip.name, 'w') as zip: 287 | zip.write(tmp_file.name, "flagged.template.json") 288 | zip.close() 289 | s3Client.upload_file( # Add encryption support 290 | tmp_zip.name, 291 | bucket, 292 | 'flagged.template.zip') 293 | tmp_file.close() 294 | put_job_success(job_id, 'Job succesful, medium risk detected, manual approval needed.') 295 | elif risk >= 50: 296 | tmp_file.close() 297 | print("High risk file, fail pipeline") 298 | put_job_failure(job_id, 'Function exception: Failed filters ' + str(failedRules)) 299 | return 0 300 | 301 | 302 | def lambda_handler(event, context): 303 | """The Lambda function handler 304 | 305 | Validate input template for security vulnerables. Route as appropriate based on risk assesment. 306 | 307 | Args: 308 | event: The event passed by Lambda 309 | context: The context passed by Lambda 310 | 311 | """ 312 | try: 313 | # Print the entire event for tracking 314 | print("Received event: " + json.dumps(event, indent=2)) 315 | print(event) 316 | # Extract the Job ID 317 | job_id = event['CodePipeline.job']['id'] 318 | 319 | # Extract the Job Data 320 | job_data = event['CodePipeline.job']['data'] 321 | 322 | # Extract the params 323 | user_parameters = job_data['actionConfiguration']['configuration']['UserParameters'] 324 | decoded_parameters = json.loads(user_parameters) 325 | 326 | # Get the list of artifacts passed to the function 327 | input_artifacts = job_data['inputArtifacts'] 328 | 329 | params = get_user_params(job_data) 330 | 331 | input_artifact = params['input'] 332 | template_file = params['file'] 333 | output_bucket = params['output'] 334 | 335 | # Get the artifact details 336 | input_artifact_data = find_artifact(input_artifacts, input_artifact) 337 | 338 | # Get S3 client to access artifact with 339 | s3 = setup_s3_client(job_data) 340 | 341 | # Get the JSON template file out of the artifact 342 | template = get_template(s3, input_artifact_data, template_file) 343 | #print("Template: " + str(template)) 344 | 345 | 346 | # Validate template from risk perspective. FailedRules can be used if you wish to expand the script to report failed items 347 | risk, failedRules = evaluate_template(template, job_id) 348 | 349 | # Based on risk, store the template in the correct S3 bucket for future process 350 | s3_next_step(s3, output_bucket, risk, failedRules, template, job_id) 351 | 352 | except Exception as e: 353 | # If any other exceptions which we didn't expect are raised 354 | # then fail the job and log the exception message. 355 | print('Function failed due to exception.') 356 | print(e) 357 | traceback.print_exc() 358 | put_job_failure(job_id, 'Function exception: ' + str(e)) 359 | 360 | print('Function complete.') 361 | return "Complete." 
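For readers who want to exercise the port rule above outside of CodePipeline, here is a minimal local sketch of the same `FromPort` check. It is not part of the workshop code: it drops the CodePipeline job calls, treats the `BLOCKED_PORTS` set as an illustrative assumption, and accepts either a single ingress mapping (as in the workshop's resources.json) or a list of ingress rules.

```python
# Local sketch (not part of the workshop): the FromPort rule from
# cfn_ftp_port.py, separated from the CodePipeline plumbing so it can be
# exercised on a workstation. BLOCKED_PORTS is an illustrative assumption.
import json

BLOCKED_PORTS = {'21'}  # FTP, the port flagged by cfn_ftp_port.py

def scan_ingress_ports(template_json):
    """Return violation messages for security groups opening blocked ports."""
    findings = []
    template = json.loads(template_json)
    for name, resource in template.get('Resources', {}).items():
        if resource.get('Type') != 'AWS::EC2::SecurityGroup':
            continue
        ingress = resource.get('Properties', {}).get('SecurityGroupIngress', [])
        # Accept either a single ingress mapping or a list of ingress rules
        rules = ingress if isinstance(ingress, list) else [ingress]
        for rule in rules:
            if str(rule.get('FromPort')) in BLOCKED_PORTS:
                findings.append('%s opens blocked port %s' % (name, rule.get('FromPort')))
    return findings

# Example using the same ingress rule that appears in resources.json
sample = json.dumps({'Resources': {'SecurityGroup': {
    'Type': 'AWS::EC2::SecurityGroup',
    'Properties': {'SecurityGroupIngress': {
        'CidrIp': '1.2.3.4/32', 'FromPort': 21, 'ToPort': 21, 'IpProtocol': 'tcp'}}}}})
print(scan_ingress_ports(sample))  # ['SecurityGroup opens blocked port 21']
```

In the real pipeline, a finding like this corresponds to the `put_job_failure` path in `evaluate_template` above.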
-------------------------------------------------------------------------------- /code/cfn_s3_versioning.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | from boto3.session import Session 3 | 4 | import json 5 | import urllib 6 | import boto3 7 | import zipfile 8 | import tempfile 9 | import botocore 10 | import traceback 11 | import re 12 | import zipfile 13 | import time 14 | 15 | print('Loading function') 16 | print () 17 | cf = boto3.client('cloudformation') 18 | code_pipeline = boto3.client('codepipeline') 19 | 20 | def find_artifact(artifacts, name): 21 | """Finds the artifact 'name' among the 'artifacts' 22 | 23 | Args: 24 | artifacts: The list of artifacts available to the function 25 | name: The artifact we wish to use 26 | Returns: 27 | The artifact dictionary found 28 | Raises: 29 | Exception: If no matching artifact is found 30 | 31 | """ 32 | for artifact in artifacts: 33 | if artifact['name'] == name: 34 | return artifact 35 | 36 | raise Exception('Input artifact named "{0}" not found in event'.format(name)) 37 | 38 | def get_user_params(job_data): 39 | print(job_data) 40 | """Decodes the JSON user parameters and validates the required properties. 41 | 42 | Args: 43 | job_data: The job data structure containing the UserParameters string which should be a valid JSON structure 44 | 45 | Returns: 46 | The JSON parameters decoded as a dictionary. 47 | 48 | Raises: 49 | Exception: The JSON can't be decoded or a property is missing. 50 | 51 | """ 52 | try: 53 | # Get the user parameters which contain the artifact and file settings 54 | user_parameters = job_data['actionConfiguration']['configuration']['UserParameters'] 55 | decoded_parameters = json.loads(user_parameters) 56 | 57 | except Exception as e: 58 | # We're expecting the user parameters to be encoded as JSON 59 | # so we can pass multiple values. If the JSON can't be decoded 60 | # then fail the job with a helpful message. 61 | raise Exception('UserParameters could not be decoded as JSON') 62 | 63 | if 'input' not in decoded_parameters: 64 | # Validate that the artifact name is provided, otherwise fail the job 65 | # with a helpful message. 66 | raise Exception('Your UserParameters JSON must include the artifact name') 67 | 68 | if 'file' not in decoded_parameters: 69 | # Validate that the template file is provided, otherwise fail the job 70 | # with a helpful message. 71 | raise Exception('Your UserParameters JSON must include the template file name') 72 | 73 | if 'output' not in decoded_parameters: 74 | # Validate that the template file is provided, otherwise fail the job 75 | # with a helpful message. 76 | raise Exception('Your UserParameters JSON must include the output bucket') 77 | 78 | return decoded_parameters 79 | 80 | def get_template(s3, artifact, file_in_zip): 81 | """Gets the template artifact 82 | 83 | Downloads the artifact from the S3 artifact store to a temporary file 84 | then extracts the zip and returns the file containing the CloudFormation 85 | template. 
86 | 87 | Args: 88 | artifact: The artifact to download 89 | file_in_zip: The path to the file within the zip containing the template 90 | 91 | Returns: 92 | The CloudFormation template as a string 93 | 94 | Raises: 95 | Exception: Any exception thrown while downloading the artifact or unzipping it 96 | 97 | """ 98 | tmp_file = tempfile.NamedTemporaryFile() 99 | bucket = artifact['location']['s3Location']['bucketName'] 100 | key = artifact['location']['s3Location']['objectKey'] 101 | 102 | with tempfile.NamedTemporaryFile() as tmp_file: 103 | print("Retrieving s3://" + bucket + "/" + key) 104 | s3.download_file(bucket, key, tmp_file.name) 105 | with zipfile.ZipFile(tmp_file.name, 'r') as zip: 106 | zip.printdir() 107 | return zip.read(file_in_zip) 108 | 109 | 110 | def put_job_success(job, message): 111 | """Notify CodePipeline of a successful job 112 | 113 | Args: 114 | job: The CodePipeline job ID 115 | message: A message to be logged relating to the job status 116 | 117 | Raises: 118 | Exception: Any exception thrown by .put_job_success_result() 119 | 120 | """ 121 | print('Putting job success') 122 | print(message) 123 | code_pipeline.put_job_success_result(jobId=job) 124 | 125 | def put_job_failure(job, message): 126 | """Notify CodePipeline of a failed job 127 | 128 | Args: 129 | job: The CodePipeline job ID 130 | message: A message to be logged relating to the job status 131 | 132 | Raises: 133 | Exception: Any exception thrown by .put_job_failure_result() 134 | 135 | """ 136 | print('Putting job failure') 137 | print(message) 138 | code_pipeline.put_job_failure_result(jobId=job, failureDetails={'message': message, 'type': 'JobFailed'}) 139 | 140 | def continue_job_later(job, message): 141 | """Notify CodePipeline of a continuing job 142 | 143 | This will cause CodePipeline to invoke the function again with the 144 | supplied continuation token. 145 | 146 | Args: 147 | job: The JobID 148 | message: A message to be logged relating to the job status 149 | continuation_token: The continuation token 150 | 151 | Raises: 152 | Exception: Any exception thrown by .put_job_success_result() 153 | 154 | """ 155 | 156 | # Use the continuation token to keep track of any job execution state 157 | # This data will be available when a new job is scheduled to continue the current execution 158 | continuation_token = json.dumps({'previous_job_id': job}) 159 | 160 | print('Putting job continuation') 161 | print(message) 162 | code_pipeline.put_job_success_result(jobId=job, continuationToken=continuation_token) 163 | 164 | def get_user_params(job_data): 165 | print(job_data) 166 | """Decodes the JSON user parameters and validates the required properties. 167 | 168 | Args: 169 | job_data: The job data structure containing the UserParameters string which should be a valid JSON structure 170 | 171 | Returns: 172 | The JSON parameters decoded as a dictionary. 173 | 174 | Raises: 175 | Exception: The JSON can't be decoded or a property is missing. 176 | 177 | """ 178 | try: 179 | # Get the user parameters which contain the artifact and file settings 180 | user_parameters = job_data['actionConfiguration']['configuration']['UserParameters'] 181 | decoded_parameters = json.loads(user_parameters) 182 | 183 | except Exception as e: 184 | # We're expecting the user parameters to be encoded as JSON 185 | # so we can pass multiple values. If the JSON can't be decoded 186 | # then fail the job with a helpful message. 
187 | raise Exception('UserParameters could not be decoded as JSON') 188 | 189 | if 'input' not in decoded_parameters: 190 | # Validate that the artifact name is provided, otherwise fail the job 191 | # with a helpful message. 192 | raise Exception('Your UserParameters JSON must include the artifact name') 193 | 194 | if 'file' not in decoded_parameters: 195 | # Validate that the template file is provided, otherwise fail the job 196 | # with a helpful message. 197 | raise Exception('Your UserParameters JSON must include the template file name') 198 | 199 | if 'output' not in decoded_parameters: 200 | # Validate that the template file is provided, otherwise fail the job 201 | # with a helpful message. 202 | raise Exception('Your UserParameters JSON must include the output bucket') 203 | 204 | return decoded_parameters 205 | 206 | def setup_s3_client(job_data): 207 | """Creates an S3 client 208 | 209 | Uses the credentials passed in the event by CodePipeline. These 210 | credentials can be used to access the artifact bucket. 211 | 212 | Args: 213 | job_data: The job data structure 214 | 215 | Returns: 216 | An S3 client with the appropriate credentials 217 | 218 | """ 219 | key_id = job_data['artifactCredentials']['accessKeyId'] 220 | key_secret = job_data['artifactCredentials']['secretAccessKey'] 221 | session_token = job_data['artifactCredentials']['sessionToken'] 222 | 223 | session = Session( 224 | aws_access_key_id=key_id, 225 | aws_secret_access_key=key_secret, 226 | aws_session_token=session_token) 227 | return session.client('s3', config=botocore.client.Config(signature_version='s3v4')) 228 | 229 | 230 | def evaluate_template(template, job_id): 231 | # Validate rules and increase risk value 232 | #print(template) 233 | risk = 0 234 | failedRules = [] 235 | print("----------------") 236 | template = json.loads(template) 237 | 238 | #print(template) 239 | 240 | for r in template['Resources']: 241 | #print(template['Resources'][r]) 242 | for s in template['Resources'][r]: 243 | #print(s) 244 | if template['Resources'][r][s] == template['Resources'][r]['Type']: 245 | #print(template['Resources'][r]['Type']) 246 | if template['Resources'][r]['Type'] == 'AWS::S3::Bucket': 247 | #print(template['Resources'][r]['Properties']) 248 | try: 249 | if template['Resources'][r]['Properties']['VersioningConfiguration']['Status'] != 'Enabled': 250 | if template['Resources'][r]['Properties']['VersioningConfiguration']['Status'] == 'Disabled': 251 | print('Found s3 bucket with versioning disabled.') 252 | if template['Resources'][r]['Properties']['VersioningConfiguration']['Status'] == 'Suspended': 253 | print('Found s3 bucket with versioning suspended.') 254 | risk = risk + 100 255 | print("Risk value: " +str(risk)) 256 | failedRules.append("s3 versioning flag is neither Enabled or Suspended.") 257 | print("killing job") 258 | put_job_failure(job_id, "s3 versioning flag is neither Enabled or Suspended.") 259 | else: 260 | risk = 11 261 | 262 | except: 263 | risk = risk + 100 264 | print("Risk value: " +str(risk)) 265 | failedRules.append("s3 bucket does not have VersionConfiguration configured.") 266 | print("killing job") 267 | put_job_failure(job_id, "s3 bucket does not have VersionConfiguration configured.") 268 | print('s3 bucket does not have VersionConfiguration configured..') 269 | print("----------------") 270 | 271 | if risk > 10: 272 | print("good job") 273 | put_job_success(job_id, 'Job succesful, minimal or no risk detected.') 274 | 275 | return risk, failedRules 276 | 277 | def 
s3_next_step(s3, bucket, risk, failedRules, template, job_id): 278 | # Store data in temporary physical file 279 | s3Client = boto3.client('s3', config=botocore.client.Config(signature_version='s3v4')) 280 | tmp_file = tempfile.NamedTemporaryFile() 281 | tmp_zip = tempfile.NamedTemporaryFile() 282 | for item in template: 283 | tmp_file.write(item) 284 | tmp_file.flush() 285 | # Process file based on risk value 286 | if risk < 5: 287 | with zipfile.ZipFile(tmp_zip.name, 'w') as zip: 288 | zip.write(tmp_file.name, "valid.template.json") 289 | zip.close() 290 | s3Client.upload_file( # Add encryption support 291 | tmp_zip.name, 292 | bucket, 293 | 'valid.template.zip') 294 | tmp_file.close() 295 | put_job_success(job_id, 'Job succesful, minimal or no risk detected.') 296 | elif 5 <= risk < 50: 297 | with zipfile.ZipFile(tmp_zip.name, 'w') as zip: 298 | zip.write(tmp_file.name, "flagged.template.json") 299 | zip.close() 300 | s3Client.upload_file( # Add encryption support 301 | tmp_zip.name, 302 | bucket, 303 | 'flagged.template.zip') 304 | tmp_file.close() 305 | put_job_success(job_id, 'Job succesful, medium risk detected, manual approval needed.') 306 | elif risk >= 50: 307 | tmp_file.close() 308 | print("High risk file, fail pipeline") 309 | put_job_failure(job_id, 'Function exception: Failed filters ' + str(failedRules)) 310 | return 0 311 | 312 | 313 | def lambda_handler(event, context): 314 | """The Lambda function handler 315 | 316 | Validate input template for security vulnerables. Route as appropriate based on risk assesment. 317 | 318 | Args: 319 | event: The event passed by Lambda 320 | context: The context passed by Lambda 321 | 322 | """ 323 | try: 324 | # Print the entire event for tracking 325 | print("Received event: " + json.dumps(event, indent=2)) 326 | print(event) 327 | # Extract the Job ID 328 | job_id = event['CodePipeline.job']['id'] 329 | 330 | # Extract the Job Data 331 | job_data = event['CodePipeline.job']['data'] 332 | 333 | # Extract the params 334 | user_parameters = job_data['actionConfiguration']['configuration']['UserParameters'] 335 | decoded_parameters = json.loads(user_parameters) 336 | 337 | # Get the list of artifacts passed to the function 338 | input_artifacts = job_data['inputArtifacts'] 339 | 340 | params = get_user_params(job_data) 341 | 342 | input_artifact = params['input'] 343 | template_file = params['file'] 344 | output_bucket = params['output'] 345 | 346 | # Get the artifact details 347 | input_artifact_data = find_artifact(input_artifacts, input_artifact) 348 | 349 | # Get S3 client to access artifact with 350 | s3 = setup_s3_client(job_data) 351 | 352 | # Get the JSON template file out of the artifact 353 | template = get_template(s3, input_artifact_data, template_file) 354 | #print("Template: " + str(template)) 355 | 356 | 357 | # Validate template from risk perspective. FailedRules can be used if you wish to expand the script to report failed items 358 | risk, failedRules = evaluate_template(template, job_id) 359 | 360 | # Based on risk, store the template in the correct S3 bucket for future process 361 | s3_next_step(s3, output_bucket, risk, failedRules, template, job_id) 362 | 363 | except Exception as e: 364 | # If any other exceptions which we didn't expect are raised 365 | # then fail the job and log the exception message. 
366 | print('Function failed due to exception.') 367 | print(e) 368 | traceback.print_exc() 369 | put_job_failure(job_id, 'Function exception: ' + str(e)) 370 | 371 | print('Function complete.') 372 | return "Complete." -------------------------------------------------------------------------------- /code/cfn_secrets.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | from boto3.session import Session 3 | 4 | import json 5 | import urllib 6 | import boto3 7 | import zipfile 8 | import tempfile 9 | import botocore 10 | import traceback 11 | import re 12 | import zipfile 13 | import time 14 | 15 | print('Loading function') 16 | print () 17 | cf = boto3.client('cloudformation') 18 | code_pipeline = boto3.client('codepipeline') 19 | 20 | def find_artifact(artifacts, name): 21 | """Finds the artifact 'name' among the 'artifacts' 22 | 23 | Args: 24 | artifacts: The list of artifacts available to the function 25 | name: The artifact we wish to use 26 | Returns: 27 | The artifact dictionary found 28 | Raises: 29 | Exception: If no matching artifact is found 30 | 31 | """ 32 | for artifact in artifacts: 33 | if artifact['name'] == name: 34 | return artifact 35 | 36 | raise Exception('Input artifact named "{0}" not found in event'.format(name)) 37 | 38 | def get_user_params(job_data): 39 | print(job_data) 40 | """Decodes the JSON user parameters and validates the required properties. 41 | 42 | Args: 43 | job_data: The job data structure containing the UserParameters string which should be a valid JSON structure 44 | 45 | Returns: 46 | The JSON parameters decoded as a dictionary. 47 | 48 | Raises: 49 | Exception: The JSON can't be decoded or a property is missing. 50 | 51 | """ 52 | try: 53 | # Get the user parameters which contain the artifact and file settings 54 | user_parameters = job_data['actionConfiguration']['configuration']['UserParameters'] 55 | decoded_parameters = json.loads(user_parameters) 56 | 57 | except Exception as e: 58 | # We're expecting the user parameters to be encoded as JSON 59 | # so we can pass multiple values. If the JSON can't be decoded 60 | # then fail the job with a helpful message. 61 | raise Exception('UserParameters could not be decoded as JSON') 62 | 63 | if 'input' not in decoded_parameters: 64 | # Validate that the artifact name is provided, otherwise fail the job 65 | # with a helpful message. 66 | raise Exception('Your UserParameters JSON must include the artifact name') 67 | 68 | if 'file' not in decoded_parameters: 69 | # Validate that the template file is provided, otherwise fail the job 70 | # with a helpful message. 71 | raise Exception('Your UserParameters JSON must include the template file name') 72 | 73 | if 'output' not in decoded_parameters: 74 | # Validate that the template file is provided, otherwise fail the job 75 | # with a helpful message. 76 | raise Exception('Your UserParameters JSON must include the output bucket') 77 | 78 | return decoded_parameters 79 | 80 | def get_template(s3, artifact, file_in_zip): 81 | """Gets the template artifact 82 | 83 | Downloads the artifact from the S3 artifact store to a temporary file 84 | then extracts the zip and returns the file containing the CloudFormation 85 | template. 
86 | 87 | Args: 88 | artifact: The artifact to download 89 | file_in_zip: The path to the file within the zip containing the template 90 | 91 | Returns: 92 | The CloudFormation template as a string 93 | 94 | Raises: 95 | Exception: Any exception thrown while downloading the artifact or unzipping it 96 | 97 | """ 98 | tmp_file = tempfile.NamedTemporaryFile() 99 | bucket = artifact['location']['s3Location']['bucketName'] 100 | key = artifact['location']['s3Location']['objectKey'] 101 | 102 | with tempfile.NamedTemporaryFile() as tmp_file: 103 | print("Retrieving s3://" + bucket + "/" + key) 104 | s3.download_file(bucket, key, tmp_file.name) 105 | with zipfile.ZipFile(tmp_file.name, 'r') as zip: 106 | zip.printdir() 107 | return zip.read(file_in_zip) 108 | 109 | 110 | def put_job_success(job, message): 111 | """Notify CodePipeline of a successful job 112 | 113 | Args: 114 | job: The CodePipeline job ID 115 | message: A message to be logged relating to the job status 116 | 117 | Raises: 118 | Exception: Any exception thrown by .put_job_success_result() 119 | 120 | """ 121 | print('Putting job success') 122 | print(message) 123 | code_pipeline.put_job_success_result(jobId=job) 124 | 125 | def put_job_failure(job, message): 126 | """Notify CodePipeline of a failed job 127 | 128 | Args: 129 | job: The CodePipeline job ID 130 | message: A message to be logged relating to the job status 131 | 132 | Raises: 133 | Exception: Any exception thrown by .put_job_failure_result() 134 | 135 | """ 136 | print('Putting job failure') 137 | print(message) 138 | code_pipeline.put_job_failure_result(jobId=job, failureDetails={'message': message, 'type': 'JobFailed'}) 139 | 140 | def continue_job_later(job, message): 141 | """Notify CodePipeline of a continuing job 142 | 143 | This will cause CodePipeline to invoke the function again with the 144 | supplied continuation token. 145 | 146 | Args: 147 | job: The JobID 148 | message: A message to be logged relating to the job status 149 | continuation_token: The continuation token 150 | 151 | Raises: 152 | Exception: Any exception thrown by .put_job_success_result() 153 | 154 | """ 155 | 156 | # Use the continuation token to keep track of any job execution state 157 | # This data will be available when a new job is scheduled to continue the current execution 158 | continuation_token = json.dumps({'previous_job_id': job}) 159 | 160 | print('Putting job continuation') 161 | print(message) 162 | code_pipeline.put_job_success_result(jobId=job, continuationToken=continuation_token) 163 | 164 | def get_user_params(job_data): 165 | print(job_data) 166 | """Decodes the JSON user parameters and validates the required properties. 167 | 168 | Args: 169 | job_data: The job data structure containing the UserParameters string which should be a valid JSON structure 170 | 171 | Returns: 172 | The JSON parameters decoded as a dictionary. 173 | 174 | Raises: 175 | Exception: The JSON can't be decoded or a property is missing. 176 | 177 | """ 178 | try: 179 | # Get the user parameters which contain the artifact and file settings 180 | user_parameters = job_data['actionConfiguration']['configuration']['UserParameters'] 181 | decoded_parameters = json.loads(user_parameters) 182 | 183 | except Exception as e: 184 | # We're expecting the user parameters to be encoded as JSON 185 | # so we can pass multiple values. If the JSON can't be decoded 186 | # then fail the job with a helpful message. 
187 | raise Exception('UserParameters could not be decoded as JSON') 188 | 189 | if 'input' not in decoded_parameters: 190 | # Validate that the artifact name is provided, otherwise fail the job 191 | # with a helpful message. 192 | raise Exception('Your UserParameters JSON must include the artifact name') 193 | 194 | if 'file' not in decoded_parameters: 195 | # Validate that the template file is provided, otherwise fail the job 196 | # with a helpful message. 197 | raise Exception('Your UserParameters JSON must include the template file name') 198 | 199 | if 'output' not in decoded_parameters: 200 | # Validate that the template file is provided, otherwise fail the job 201 | # with a helpful message. 202 | raise Exception('Your UserParameters JSON must include the output bucket') 203 | 204 | return decoded_parameters 205 | 206 | def setup_s3_client(job_data): 207 | """Creates an S3 client 208 | 209 | Uses the credentials passed in the event by CodePipeline. These 210 | credentials can be used to access the artifact bucket. 211 | 212 | Args: 213 | job_data: The job data structure 214 | 215 | Returns: 216 | An S3 client with the appropriate credentials 217 | 218 | """ 219 | key_id = job_data['artifactCredentials']['accessKeyId'] 220 | key_secret = job_data['artifactCredentials']['secretAccessKey'] 221 | session_token = job_data['artifactCredentials']['sessionToken'] 222 | 223 | session = Session( 224 | aws_access_key_id=key_id, 225 | aws_secret_access_key=key_secret, 226 | aws_session_token=session_token) 227 | return session.client('s3', config=botocore.client.Config(signature_version='s3v4')) 228 | 229 | 230 | def evaluate_template(template, job_id): 231 | # Validate rules and increase risk value 232 | #print(template) 233 | risk = 0 234 | failedRules = [] 235 | print("----------------") 236 | 237 | template = str(json.loads(template)) 238 | 239 | smite = re.compile(r'A3T[A-Z0-9]|(AKIA|AGPA|AIDA|AROA|AIPA|ANPA|ANVA|ASIA)[A-Z0-9]{16}') 240 | matchobj = smite.search(template) 241 | if matchobj: # True if matchobj is not "nothing" 242 | print(matchobj.group()) 243 | risk = risk + 100 244 | print("Risk value: " +str(risk)) 245 | failedRules.append("AWS Key Found") 246 | print("killing job") 247 | put_job_failure(job_id, "AWS Key Found") 248 | else: 249 | print("No match!!") 250 | 251 | smite2 = re.compile(r'(^|[^A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=])') 252 | matchObj2 = smite2.search(template) 253 | if matchObj2: 254 | print(matchObj2.group()) 255 | risk = risk + 100 256 | print("Risk value: " +str(risk)) 257 | failedRules.append("AWS Secret Key Found") 258 | print("killing job") 259 | put_job_failure(job_id, "AWS Secret Key Found") 260 | else: 261 | print("No match!!") 262 | print("----------------") 263 | 264 | if risk > 10: 265 | print("good job") 266 | put_job_success(job_id, 'Job succesful, minimal or no risk detected.') 267 | 268 | return risk, failedRules 269 | 270 | def s3_next_step(s3, bucket, risk, failedRules, template, job_id): 271 | # Store data in temporary physical file 272 | s3Client = boto3.client('s3', config=botocore.client.Config(signature_version='s3v4')) 273 | tmp_file = tempfile.NamedTemporaryFile() 274 | tmp_zip = tempfile.NamedTemporaryFile() 275 | 276 | tmp_file.write(template) 277 | 278 | tmp_file.flush() 279 | # Process file based on risk value 280 | if risk < 5: 281 | with zipfile.ZipFile(tmp_zip.name, 'w') as zip: 282 | zip.write(tmp_file.name, "valid.template.json") 283 | zip.close() 284 | s3Client.upload_file( # Add encryption support 285 | 
tmp_zip.name, 286 | bucket, 287 | 'valid.template.zip') 288 | tmp_file.close() 289 | put_job_success(job_id, 'Job succesful, minimal or no risk detected.') 290 | elif 5 <= risk < 50: 291 | with zipfile.ZipFile(tmp_zip.name, 'w') as zip: 292 | zip.write(tmp_file.name, "flagged.template.json") 293 | zip.close() 294 | s3Client.upload_file( # Add encryption support 295 | tmp_zip.name, 296 | bucket, 297 | 'flagged.template.zip') 298 | tmp_file.close() 299 | put_job_success(job_id, 'Job succesful, medium risk detected, manual approval needed.') 300 | elif risk >= 50: 301 | tmp_file.close() 302 | print("High risk file, fail pipeline") 303 | put_job_failure(job_id, 'Function exception: Failed filters ' + str(failedRules)) 304 | return 0 305 | 306 | 307 | def lambda_handler(event, context): 308 | """The Lambda function handler 309 | 310 | Validate input template for security vulnerables. Route as appropriate based on risk assesment. 311 | 312 | Args: 313 | event: The event passed by Lambda 314 | context: The context passed by Lambda 315 | 316 | """ 317 | try: 318 | # Print the entire event for tracking 319 | print("Received event: " + json.dumps(event, indent=2)) 320 | print(event) 321 | # Extract the Job ID 322 | job_id = event['CodePipeline.job']['id'] 323 | 324 | # Extract the Job Data 325 | job_data = event['CodePipeline.job']['data'] 326 | 327 | # Extract the params 328 | user_parameters = job_data['actionConfiguration']['configuration']['UserParameters'] 329 | decoded_parameters = json.loads(user_parameters) 330 | 331 | # Get the list of artifacts passed to the function 332 | input_artifacts = job_data['inputArtifacts'] 333 | 334 | params = get_user_params(job_data) 335 | 336 | input_artifact = params['input'] 337 | template_file = params['file'] 338 | output_bucket = params['output'] 339 | 340 | # Get the artifact details 341 | input_artifact_data = find_artifact(input_artifacts, input_artifact) 342 | 343 | # Get S3 client to access artifact with 344 | s3 = setup_s3_client(job_data) 345 | 346 | # Get the JSON template file out of the artifact 347 | template = get_template(s3, input_artifact_data, template_file) 348 | #print("Template: " + str(template)) 349 | 350 | 351 | # Validate template from risk perspective. FailedRules can be used if you wish to expand the script to report failed items 352 | risk, failedRules = evaluate_template(template, job_id) 353 | 354 | # Based on risk, store the template in the correct S3 bucket for future process 355 | s3_next_step(s3, output_bucket, risk, failedRules, template, job_id) 356 | 357 | except Exception as e: 358 | # If any other exceptions which we didn't expect are raised 359 | # then fail the job and log the exception message. 360 | print('Function failed due to exception.') 361 | print(e) 362 | traceback.print_exc() 363 | put_job_failure(job_id, 'Function exception: ' + str(e)) 364 | 365 | print('Function complete.') 366 | return "Complete." 
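To see what the two regular expressions in `cfn_secrets.py` catch without running the pipeline, here is a small local sketch that applies the same patterns to an arbitrary string. The `scan_for_credentials` helper is illustrative only, and the sample text reuses the placeholder credentials already present in resources.json.

```python
# Local sketch (not part of the workshop): the credential-scanning patterns
# from cfn_secrets.py applied to arbitrary text, without the CodePipeline calls.
import re

ACCESS_KEY_RE = re.compile(r'A3T[A-Z0-9]|(AKIA|AGPA|AIDA|AROA|AIPA|ANPA|ANVA|ASIA)[A-Z0-9]{16}')
SECRET_KEY_RE = re.compile(r'(^|[^A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=])')

def scan_for_credentials(text):
    """Return the names of the rules that match the given text."""
    findings = []
    if ACCESS_KEY_RE.search(text):
        findings.append('AWS Key Found')
    if SECRET_KEY_RE.search(text):
        findings.append('AWS Secret Key Found')
    return findings

# The placeholder key pair from resources.json trips both rules
sample = '"AWSKey": "AKIA000000000000000Q", "AWSSecretKey": "xB00A0BBB78IfPiwxOrOaaa6A1vSXa4aaaVpIieC"'
print(scan_for_credentials(sample))  # ['AWS Key Found', 'AWS Secret Key Found']
```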
-------------------------------------------------------------------------------- /code/codepipe-AWS-devsecops.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/code/codepipe-AWS-devsecops.zip -------------------------------------------------------------------------------- /code/codepipe-AWS-devsecops/prod-stack-configuration.json: -------------------------------------------------------------------------------- 1 | { 2 | "Parameters": { 3 | "VPCName": "ProdVPCIdParam" 4 | } 5 | } -------------------------------------------------------------------------------- /code/codepipe-AWS-devsecops/resources.json: -------------------------------------------------------------------------------- 1 | { 2 | "AWSTemplateFormatVersion": "2010-09-09", 3 | "Description": "AWS CloudFormation Sample Template for Continuous Delievery:AWS DevSecOps", 4 | "Parameters": { 5 | "VPCName": { 6 | "Description": "vpc-03994a431fd6d8905", 7 | "Type": "AWS::SSM::Parameter::Value" 8 | }, 9 | "AWSKey":{ 10 | "Description": "AWSKey", 11 | "Type" : "String", 12 | "Default" : "AKIA000000000000000Q" 13 | }, 14 | "AWSSecretKey":{ 15 | "Description": "AWSSecretKey", 16 | "Type" : "String", 17 | "Default" : "xB00A0BBB78IfPiwxOrOaaa6A1vSXa4aaaVpIieC" 18 | } 19 | 20 | }, 21 | "Resources": { 22 | "SecurityGroup": { 23 | "Type" : "AWS::EC2::SecurityGroup", 24 | "Properties" : { 25 | "GroupDescription" : "SSH Security Group", 26 | "SecurityGroupIngress" : { 27 | "CidrIp": "1.2.3.4/32", 28 | "FromPort": 21, 29 | "ToPort": 21, 30 | "IpProtocol": "tcp" 31 | }, 32 | "Tags": [ 33 | {"Key": "Name", "Value": "AWS"}, 34 | {"Key": "LOB", "Value": "AWSSecurityJam"} 35 | ], 36 | "VpcId" : { "Ref" : "VPCName" } 37 | } 38 | }, 39 | "EC2Volume" : { 40 | "Type" : "AWS::EC2::Volume", 41 | "Properties" : { 42 | "Size" : "8", 43 | "Encrypted" : "false", 44 | "AvailabilityZone" : { 45 | "Fn::Select" : [ "0", { "Fn::GetAZs" : "" } ] 46 | }, 47 | "Tags" : [ 48 | {"Key": "Name", "Value": "AWS"}, 49 | {"Key": "LOB", "Value": "SecurityJam"} 50 | ] 51 | } 52 | }, 53 | "S3Bucket": { 54 | "Type": "AWS::S3::Bucket", 55 | "Properties": { 56 | "AccessControl" : "BucketOwnerFullControl", 57 | "Tags" : [ 58 | {"Key": "Name", "Value": "AWS"}, 59 | {"Key": "LOB", "Value": "DevSecOps"} 60 | ] 61 | } 62 | } 63 | }, 64 | "Outputs" : { 65 | "SecurityVolume" : { 66 | "Description" : "EBS Volume created by this Cloudformation template", 67 | "Value" : { "Ref" : "EC2Volume" } 68 | }, 69 | "SecurityBucket" : { 70 | "Description" : "S3 Bucket created by Finance team", 71 | "Value" : { "Ref" : "S3Bucket" } 72 | } 73 | } 74 | } 75 | -------------------------------------------------------------------------------- /code/codepipe-AWS-devsecops/test-stack-configuration.json: -------------------------------------------------------------------------------- 1 | { 2 | "Parameters": { 3 | "VPCName": "TestVPCIdParam" 4 | } 5 | } -------------------------------------------------------------------------------- /docs/01-environment-setup.md: -------------------------------------------------------------------------------- 1 | # Module 1: Environment build 2 | 3 | If you are doing this workshop at an event, you will be provided with either an AWS account or a hash key for event engine. Please raise your hand and flag down a workshop facilitator for assistance on any issues during the lab. 
4 | 
5 | 
6 | ## Are you using your own AWS account or given one at a sponsored event?
7 | 
8 | If you are at an AWS Sponsored event, skip down to the Build Phase.
9 | 
10 | If you are using your own AWS account, follow these steps:
11 | 
12 | !!! info "Download or Clone: https://github.com/aws-samples/secure-pipelines-in-aws-workshop"
13 | 
14 | 1. Log in with an Administrator-level account.
15 | 2. Create an S3 bucket in the us-west-2 (Oregon) region.
16 | 3. Upload the two zip files to that bucket.
17 | 4. Go to CloudFormation and create a stack from “Pipeline.yml”.
18 | 5. Change the bucket location parameter to the bucket you created earlier; this is how the stack references the zip files.
19 | 6. Deploy the stack.
20 | 7. Continue on to the Build Phase below.
21 | 
22 | 
23 | 
24 | ## Build Phase
25 | 
26 | 
27 | 
28 | 1. Browse to the CodePipeline console. You will find a new pipeline called DevSecOps.
29 | 2. Disable the transition between "StaticCodeAnalysis" and "TestDeployment".
30 | 3. Download the code for the workshop here: https://github.com/aws-samples/secure-pipelines-in-aws-workshop/tree/master/code
31 | 4. Once in the AWS Console, go to S3 buckets. Look for a bucket named “{CloudformationStackName}-artifactstorebucket-{randomstring}”.
32 | 5. Copy the two zip files into the S3 bucket (remember to copy the zip files, not the unzipped directories).
33 | 
34 | 
35 | 
36 | Your environment is now built, with a basic pipeline using S3 as the source repository.
37 | 
38 | ![Pipeline](./images/01-pipeline.png)
39 | 
40 | After you have successfully set up your environment, you can proceed to the next module.
41 | 
--------------------------------------------------------------------------------
/docs/02-first-pipeline-error.md:
--------------------------------------------------------------------------------
1 | # Module 2: Encrypt the EBS
2 | 
3 | Your pipeline is now built. As a developer, by uploading the zip files to S3 you committed code into the pipeline. This module shows what a developer experiences when code does not comply with security requirements. In this particular case, there is a security requirement that EBS volumes must be encrypted.
4 | 
5 | ## First Pipeline Error
6 | 
7 | **Click on "Release Change".**
8 | 
9 | It looks like the pipeline has failed at the "Static Code Analysis" stage.
10 | 
11 | ![FirstPipelineError](./images/02-firstpipelineerror.png)
12 | 
13 | 1. Click on the "Details" link and take a look at the error. Based on the error message, what needs to be changed?
14 | 2. Click on the "i" next to CFNParsing.
15 | 
16 | ![CFNParsing](./images/02-CFNParsingInfo.png)
17 | 
18 | 3. Note the location and file this is referencing.
19 | 4. Locate the file resources.json and open it with your favorite editor. (Certain editors, such as Windows Notepad or Mac TextEdit, may not work well when editing JSON files.)
20 | 5. Find the issue in the file and change the value.
21 | 6. Rezip the directory with the modified resources.json. The name of the zip file is important and must match the original name; the pipeline is looking for that filename specifically. If the Lambda takes a while to execute, look at the logs.
22 | 7. Upload and overwrite the existing zip file in “{CloudformationStackName}-artifactstorebucket-{randomstring}”.
23 | 8. Go back to your CodePipeline screen and watch the stages go through processing again. If you fixed the code correctly, it should go through to the next stage.
24 | 9. Re-enable the transition you disabled earlier.
25 | 10. 
When it gets to the ApproveTestStack stage, click on the Review button and then approve the deployment. (Normally you would get an email asking for manual approval, but for the purposes of this lab, do not wait.) Everything should then carry on until you have a pipeline full of stages which have succeeded.
26 | 
27 | ![ApprovalStage](./images/02-ApprovalStage.png)
28 | 
29 | (Optional) Feel free to click on some **Details** icons to look into what happened at each stage in more detail. CFNParsing is a Lambda function that runs a script performing various security compliance checks.
30 | 
31 | 
--------------------------------------------------------------------------------
/docs/03-No-AWS-Secrets.md:
--------------------------------------------------------------------------------
1 | 
4 | 
5 | 
6 | # Module 3: No AWS Keys Allowed!
7 | 
8 | In this module, as a security engineer, you will add a Lambda function that looks for AWS access and secret keys.
9 | 
10 | 
11 | ## Setting Lambda to Look for AWS Credentials
12 | 
13 | 1. Browse to the Lambda console, and create a new function from scratch.
14 | 2. Be sure to select the Python 2.7 runtime, and the module*PipelineLambdaRole IAM Role.
15 | 3. Name the function as you choose, then create the function.
16 | 4. Set the Lambda timeout to 1 minute.
17 | 5. `cfn_secrets.py` is provided in the workshop. Open this in your favorite editor.
18 | 6. Paste the contents of `cfn_secrets.py` into the source editor (the one in the Lambda console), overwriting the initial placeholder function.
19 | 7. Browse back to the CodePipeline console, and open your DevSecOps pipeline again.
20 | 8. Edit the pipeline, using the button at the top right.
21 | 
22 | ![Edit-Pipeline](./images/03-Edit-Pipeline.png)
23 | 
24 | 9. Use the `Edit Stage` button for the StaticCodeAnalysis stage.
25 | 10. Select the Edit icon for the CFNParsing function.
26 | 11. Copy the contents of “User Parameters (optional)” to your paste buffer. Close the Edit action pop-up.
27 | 
28 | ![Source](./images/03-Source.png)
29 | 
30 | 12. Add a new action group.
31 |     1. Select “Add action group”.
32 |     2. Create a name for your key-scanning action, and choose AWS Lambda from the Action provider drop-down.
33 |     3. In “Function name”, select the name you gave your Lambda function in Step 3 above.
34 |     4. Select TemplateSource in the “Input artifacts” drop-down.
35 |     5. Paste the contents of your paste buffer from above into “User Parameters (optional)”.
36 |     6. Select Save to save the newly-edited pipeline. You must check the “No resource updates needed for this source action change” option on the pipeline save pop-up window.
37 | 13. Your new Lambda function is now integrated into your pipeline.
38 | 
39 | **Proceed to the next module to test your Lambda function.**
40 | 
41 | 
42 | 
--------------------------------------------------------------------------------
/docs/04-find-aws-keys.md:
--------------------------------------------------------------------------------
1 | 
4 | # Module 4: Finding AWS Keys
5 | 
6 | Unfortunately, not all code committed into a source repository will be free of secrets. In this module, as a developer, you will see how security automation can help keep confidential information such as AWS keys out of source repositories.
7 | 
8 | 
9 | 1. Release the change to start the code building again on the updated pipeline. Oops – the code had some stray AWS credentials in it – but this time we've caught them and stopped the build!
10 | ![aws-keys](./images/04-aws-keys.png)
11 | 
12 | 
13 | 
14 | 2. Remove the credentials.
15 |     1. 
Edit resource.json and remove the offending credentials. 16 | 2. Rezip the “codepipe-AWS-devsecops.zip” (the exact name is important) 17 | 3. Upload the zip to s3. 18 | 3. Come back to the DevSecOps pipeline and watch it through the stages again. 19 | 20 | 21 | 22 | !!! question "Can you use IAM roles instead?" 23 | 24 | 25 | -------------------------------------------------------------------------------- /docs/05-missing-s3-confg.md: -------------------------------------------------------------------------------- 1 | 4 | 5 | 6 | # Module 5: Missing s3 Configuration 7 | 8 | In this module, as part of security governance, all s3 buckets must have s3 versioning configuration enabled. This policy helps organizations recover from deletion or alterations of data by keeping copies of previous versions. As a security engineer, you want to enable the pipeline to enforce enabling s3 bucket versioning configuration. 9 | 10 | 11 | ## Setting Lambda to scan AWS Cloudformation templates for s3 configuration settings 12 | 13 | 1. Browse to the Lambda console, and create a new function from scratch. 14 | 2. Be sure to select the Python 2.7 runtime, and the module*PipelineLambdaRole- IAM Role. 15 | 3. Name the function to your choosing. Create function. 16 | 3. Set the Lambda time out to 1 minute. 17 | 4. `cfn_s3_versioning.py` is provided in the workshop. Open this in your favorite editor. 18 | 5. Paste the contents of `cfn_s3_versioning.py` source editor (the one in the Lambda console), overwriting the initial placeholder function. 19 | 6. Browse back to the CodePipeline Console, and open your DevSecOps Pipeline again. 20 | 7. Edit the pipeline, using the button at the top right. 21 | 22 | ![Edit-Pipeline](./images/03-Edit-Pipeline.png) 23 | 24 | 7. Use the `Edit Stage` button for the StaticCodeAnalysis stage. 25 | 8. Select the Edit icon for the CFNParsing function. 26 | 9. Copy the contents of “User Parameters (optional)” to your paste buffer. Close the Edit action pop-up. 27 | 28 | ![Source](./images/03-Source.png) 29 | 30 | 10. Add a new action group. 31 | 1. Select “Add action group”. 32 | 2. Create a name for your key-scanning action, choose AWS Lambda from the Action provider drop-down. 33 | 3. In “Function name”, select the name you gave your Lambda function in Step 2 above. 34 | 4. TemplateSource in the “Input artifacts” drop-down. 35 | 5. Paste the contents of your paste buffer from above into “User Parameters (optional)” 36 | 6. Select Save the newly-edited pipeline. You must check the “No resource updates needed for this source action change” option on the pipeline save pop-up window. 37 | 11. Your new Lambda function is now integrated into your pipeline. 38 | 39 | **Proceed to the next module to test your Lambda function.** 40 | 41 | 42 | !!! question "What other things can you look for in an AWS Cloudformation template which you can create a security automation?" -------------------------------------------------------------------------------- /docs/06-s3-versioning.md: -------------------------------------------------------------------------------- 1 | 4 | # Module 6: Configuring S3 Security 5 | 6 | Many organizations utilize s3 extensively. It is important that the s3 buckets are configured to the organization's requirements to ensure data stored safely. 7 | 8 | 9 | 1. Now that you have added a lambda function to enforce enablement of S3 Versioning Configuration. Release the change to start the pipeline. 10 | 11 | !!! question "What is S3 Versioning? How can it help you secure data?" 
12 | 13 | 14 | 15 | 2. Add the appropriate configuration. 16 | 1. Edit resource.json and add the appropriate lines. 17 | 2. Rezip the “codepipe-AWS-devsecops.zip” (the exact name is important) 18 | 3. Upload the zip to s3. 19 | 3. Come back to the DevSecOps pipeline and watch it through the stages again. 20 | 21 | 22 | !!! info "Hint: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket.html" 23 | 24 | !!! question "Are there other s3 configurations you would want to enforce? Are there other AWS native ways to enforce some of those controls?" 25 | 26 | 27 | -------------------------------------------------------------------------------- /docs/07-cleanup.md: -------------------------------------------------------------------------------- 1 | # Module 7: Cleanup 2 | 3 | This last module helps with cleaning up the lab environment. This only applies if you are running this in your own account. 4 | 5 | ## Cleanup 6 | In order to prevent charges to your account we recommend cleaning up the infrastructure that was created. If you plan to keep things running so you can examine the workshop a bit more please remember to do the cleanup when you are done. It is very easy to leave things running in an AWS account, forget about it, and then accrue charges. 7 | 8 | !!! info "If you are using this in an instructor led session, with the AWS Event Engine you do not need to run the cleanup steps" 9 | 10 | !!! info "If you are running this in your own account. You will need to manually delete some resources before you delete the CloudFormation stacks so please do the following steps in order." 11 | 12 | 13 | 1. Delete s3 buckets. 14 | * Go to Amazon S3 console. 15 | * Go into the s3 bucket you created in module 1 and delete all of the contents. 16 | * After the bucket is empty, delete the bucket. 17 | * Look for *artifactstorebucket* and delete all of the contents. You will have to click the "Show" button to show all versions of files. The versions of files also have to be deleted before the bucket can be deleted. 18 | * After the bucket is empty, delete the bucket. 19 | 20 | 2. Delete the Cloudformation stack. 21 | * Go to AWS Cloudformation console. 22 | * Look for the stack deployed in module 1 and delete the stack. 23 | 24 | 3. Delete Lambdas. 25 | * Go to the AWS Lambda console. 26 | * Delete the lambda functions you created. 27 | 28 | 29 | ## Finished! 30 | 31 | Congratulations on completing this workshop! This is the workshop's permanent home, so feel free to revisit as often as you'd like. 
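If you are cleaning up in your own account and would rather script the bucket-emptying part of cleanup step 1 above than delete every object version by hand in the console, a short boto3 sketch along these lines should work; the bucket name shown is a placeholder that you would replace with your own artifact store bucket.

```python
# Hedged cleanup sketch (placeholder bucket name): remove every object version
# and delete marker so the versioned artifact bucket can then be deleted.
import boto3

def empty_versioned_bucket(bucket_name):
    bucket = boto3.resource('s3').Bucket(bucket_name)
    bucket.object_versions.delete()  # deletes all versions and delete markers
    # The now-empty bucket can then be removed with bucket.delete() or via the console

empty_versioned_bucket('mystack-artifactstorebucket-example')  # placeholder name
```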
32 | 33 | 34 | 35 | ## Continuing On 36 | 37 | The workshop is intended to give you an idea of how to start adding security automation into your own DevOps pipelines. 38 | 39 | -------------------------------------------------------------------------------- /docs/CNAME: -------------------------------------------------------------------------------- 1 | devops.awssecworkshops.com -------------------------------------------------------------------------------- /docs/_config.yml: -------------------------------------------------------------------------------- 1 | theme: jekyll-theme-architect -------------------------------------------------------------------------------- /docs/assets/images/aws-favicon.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/docs/assets/images/aws-favicon.ico -------------------------------------------------------------------------------- /docs/assets/images/aws_smile_logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/docs/assets/images/aws_smile_logo.png -------------------------------------------------------------------------------- /docs/assets/images/second-nav-blue.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/docs/assets/images/second-nav-blue.png -------------------------------------------------------------------------------- /docs/contribute.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check [existing open](https://github.com/aws-samples/secure-pipelines-in-aws-workshop/issues), or [recently closed](https://github.com/aws-samples/secure-pipelines-in-aws-workshop/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20), issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *master* branch. 27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted.
29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 38 | 39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute to. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any ['help wanted'](https://github.com/aws-samples/secure-pipelines-in-aws-workshop/labels/help%20wanted) issues is a great place to start. 45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments. 51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue. 55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](https://github.com/aws-samples/secure-pipelines-in-aws-workshop/blob/master/LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 60 | 61 | We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes.
62 | -------------------------------------------------------------------------------- /docs/images/01-pipeline.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/docs/images/01-pipeline.png -------------------------------------------------------------------------------- /docs/images/02-ApprovalStage.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/docs/images/02-ApprovalStage.png -------------------------------------------------------------------------------- /docs/images/02-CFNParsingInfo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/docs/images/02-CFNParsingInfo.png -------------------------------------------------------------------------------- /docs/images/02-firstpipelineerror.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/docs/images/02-firstpipelineerror.png -------------------------------------------------------------------------------- /docs/images/03-Edit-Pipeline.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/docs/images/03-Edit-Pipeline.png -------------------------------------------------------------------------------- /docs/images/03-Source.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/docs/images/03-Source.png -------------------------------------------------------------------------------- /docs/images/04-aws-keys.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/docs/images/04-aws-keys.png -------------------------------------------------------------------------------- /docs/index.md: -------------------------------------------------------------------------------- 1 | # Overview 2 | 3 | Welcome to the world of DevSecOps! You have heard of DevOps. Nowadays, DevSecOps is becoming more popular. How can Security organizations enable developers to build security into their products and services? In this workshop, you will experience how tooling and automation can create a security-conscious culture throughout the development lifecycle while scaling to the demands of the business. 4 | We will see an example of how to stop code that erroneously contains stray AWS credentials (we revoked them first, of course) from being deployed. 5 | 6 | 7 | * **Level**: Intermediate 8 | * **Duration**: 2 hours 9 | * **AWS CAF**: Security Perspective 10 | * **Prerequisites**: AWS Account, Admin IAM User 11 | 12 | ## Scenario 13 | 14 | For this workshop, you will build a pipeline using AWS CodePipeline and AWS Lambda. Amazon S3 will be your code repository.
Your developers want to deploy an AWS Cloudformation template. However, you need to ensure it is secure before it is released to production. 15 | 16 | 17 | ## Region 18 | Please use the **us-west-2 (Oregon)** region for this workshop. 19 | 20 | -------------------------------------------------------------------------------- /docs/license.md: -------------------------------------------------------------------------------- 1 | # License 2 | 3 | **MIT License** 4 | 5 | Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 6 | 7 | Permission is hereby granted, free of charge, to any person obtaining a copy of this 8 | software and associated documentation files (the "Software"), to deal in the Software 9 | without restriction, including without limitation the rights to use, copy, modify, 10 | merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 11 | permit persons to whom the Software is furnished to do so. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 14 | INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 15 | PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 16 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 17 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 18 | SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. -------------------------------------------------------------------------------- /docs/stylesheets/custom.css: -------------------------------------------------------------------------------- 1 | 2 | .md-header { 3 | background-color: #232f3e !important; 4 | border-bottom: 1px solid #1b2532 !important; 5 | } 6 | 7 | @media only screen and (max-width: 76.1875em) { 8 | 9 | html .md-nav--primary .md-nav__title--site { 10 | background-color: #232f3e !important; 11 | border-bottom: 1px solid #1b2532 !important; 12 | } 13 | 14 | .md-nav__source { 15 | background-color: #1f4e79 !important; 16 | } 17 | 18 | html .md-nav--primary .md-nav__title { 19 | background-color: #232f3e !important; 20 | border-bottom: 1px solid #1b2532 !important; 21 | color: #ffffff; 22 | } 23 | 24 | html .md-nav--primary .md-nav__title::before { 25 | color: #ffffff; 26 | } 27 | 28 | } 29 | 30 | .md-tabs__link { 31 | font-weight: bold !important; 32 | font-size: .8rem !important; 33 | } 34 | 35 | .md-header-nav__source { 36 | margin-left: 0; 37 | } 38 | 39 | /* Language Drop Down */ 40 | .md-header-nav__lang { 41 | display: block; 42 | width: 2rem; 43 | margin-left: .5rem; 44 | margin-right: .2rem; 45 | padding-right: .6rem; 46 | } 47 | 48 | .md-lang-drop { 49 | line-height: 2.6rem; 50 | padding: 0 0.5rem; 51 | font-size: 1.2rem; 52 | width: 1px 53 | } 54 | 55 | .md-lang-dropbtn { 56 | color: white; 57 | border: none; 58 | cursor: pointer; 59 | font-weight: bold; 60 | } 61 | 62 | .md-lang-dropbtn:hover, .md-lang-dropbtn:focus { 63 | color: #f8991d; 64 | } 65 | 66 | .md-dropdown-content { 67 | display: none; 68 | width: 100%; 69 | background-color: #f1f1f1; 70 | box-shadow: 0px 8px 16px 0px rgba(0,0,0,0.2); 71 | z-index: 1; 72 | overflow: auto; 73 | } 74 | 75 | .md-dropdown-content a { 76 | color: black; 77 | padding: 0px 10px; 78 | text-decoration: none; 79 | display: block; 80 | line-height: 3rem; 81 | font-weight: bold; 82 | font-size: 1.3rem; 83 | } 84 | 85 | .md-dropdown-content a:hover { 86 | color: #f8991d; 87 | } 88 | 89 | .show { 90 | display:block; 91 | } 92 | 93 | /* Second 
Navigation */ 94 | 95 | .md-tabs { 96 | background-image: url(../assets/images/second-nav-blue.png); 97 | background-color: #1f4e79 !important; 98 | } 99 | 100 | .md-tabs__item { 101 | margin-right: 15px; 102 | } 103 | 104 | .md-content__button { 105 | display: none; 106 | } 107 | 108 | .md-typeset table:not([class]) th { 109 | min-width: 15rem; 110 | background-color: #6c6c6c !important; 111 | font-size: medium; 112 | } 113 | 114 | .md-typeset table:not([class]) td { 115 | vertical-align: middle; 116 | } 117 | 118 | .table { 119 | font-size: medium; 120 | } 121 | 122 | .md-header-nav__button.md-logo img, .md-header-nav__button.md-logo svg { 123 | width: 40px 124 | } 125 | 126 | /* Footer */ 127 | 128 | .md-footer-nav { 129 | background-image: url(../assets/images/second-nav-blue.png); 130 | background-color: #1f4e79 !important; 131 | } 132 | 133 | .md-footer-meta { 134 | background-color: #232f3e !important; 135 | border-top: 1px solid #1b2532 !important; 136 | } 137 | -------------------------------------------------------------------------------- /images/01-pipeline.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/images/01-pipeline.png -------------------------------------------------------------------------------- /images/02-ApprovalStage.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/images/02-ApprovalStage.png -------------------------------------------------------------------------------- /images/02-CFNParsingInfo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/images/02-CFNParsingInfo.png -------------------------------------------------------------------------------- /images/03-Edit-Pipeline.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/images/03-Edit-Pipeline.png -------------------------------------------------------------------------------- /images/03-Source.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/images/03-Source.png -------------------------------------------------------------------------------- /images/04-aws-keys.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/images/04-aws-keys.png -------------------------------------------------------------------------------- /mkdocs.yml: -------------------------------------------------------------------------------- 1 | # Project information 2 | 3 | site_name: Adding Security into DevOps 4 | site_short_name: Security Inside Pipelines 5 | site_description: 'This hands-on workshop is where you will learn about a number of AWS services involved integrating security into CICD pipelines. 
At the end of this workshop, you will understand how code development and deployment can be managed securely and at scale through the use of AWS services including AWS CodePipeline, AWS CodeBuild, and AWS Lambda.' 6 | site_author: 'aws-security-workshops@amazon.com' 7 | site_url: 'https://devops.awssecworkshops.com/' 8 | 9 | # Repository 10 | repo_name: 'aws-samples/secure-pipelines-in-aws-workshop' 11 | repo_url: 'https://github.com/aws-samples/secure-pipelines-in-aws-workshop' 12 | 13 | # Copyright 14 | copyright: 'Privacy | Site terms | © 2020, Amazon Web Services, Inc. or its affiliates. All rights reserved.' 15 | 16 | # Configuration 17 | theme: 18 | name: 'material' 19 | logo: 'assets/images/aws_smile_logo.png' 20 | favicon: 'assets/images/aws-favicon.ico' 21 | font: false 22 | 23 | # Customization 24 | extra_css: 25 | - 'stylesheets/custom.css' 26 | extra: 27 | social: 28 | - icon: fontawesome/solid/home 29 | link: https://awssecworkshops.com 30 | - icon: fontawesome/solid/shield-alt 31 | link: https://aws.amazon.com/security/ 32 | - icon: fontawesome/brands/twitter 33 | link: https://twitter.com/awssecurityinfo?lang=en 34 | - icon: fontawesome/solid/rss-square 35 | link: https://aws.amazon.com/blogs/security/ 36 | 37 | # Extensions 38 | markdown_extensions: 39 | - admonition 40 | - codehilite 41 | - pymdownx.details 42 | 43 | # Navigation 44 | nav: 45 | - Overview: 'index.md' 46 | - 'Module 1: Environment Build': '01-environment-setup.md' 47 | - 'Module 2: Encrypt the EBS': '02-first-pipeline-error.md' 48 | - 'Module 3: No AWS Keys': '03-No-AWS-Secrets.md' 49 | - 'Module 4: Find AWS Keys': '04-find-aws-keys.md' 50 | - 'Module 5: Missing S3 Configuration': '05-missing-s3-confg.md' 51 | - 'Module 6: Configuring S3 Security': '06-s3-versioning.md' 52 | - 'Module 7: Cleaning Up': '07-cleanup.md' 53 | - Contributing: 'contribute.md' 54 | - License: 'license.md' 55 | -------------------------------------------------------------------------------- /presentations/ReInvent2019-SEC302-R.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/secure-pipelines-in-aws-workshop/f2c1c376039911dba231164deaf01b7653322093/presentations/ReInvent2019-SEC302-R.pdf --------------------------------------------------------------------------------