├── .flake8 ├── .gitignore ├── .gitmodules ├── .go-version ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE ├── README.md ├── app.py ├── assets ├── architecture.png ├── cloud9.png ├── cloud9_env.png ├── cloud9_outputs.png ├── dev.png ├── pipeline.png ├── prod.png ├── repository.png ├── sample_app.png └── validate.png ├── cdk.json ├── create_secret_helper.sh ├── devsecops_quickstart ├── __init__.py ├── cfn_nag │ ├── cfn_nag.py │ ├── layer.zip │ └── rules │ │ └── README.md ├── cloud9.py ├── opa_scan │ ├── go.mod │ ├── go.sum │ ├── lambda │ │ └── main.go │ ├── opascan.py │ ├── pkg │ │ ├── app │ │ │ ├── app.go │ │ │ ├── app_test.go │ │ │ └── lib │ │ │ │ ├── cloudformation_utils.rego │ │ │ │ ├── terraform_utils.rego │ │ │ │ └── utils.rego │ │ ├── filesystem │ │ │ ├── adapters │ │ │ │ ├── local.go │ │ │ │ └── s3.go │ │ │ └── filesystem.go │ │ ├── runners │ │ │ ├── cloudformation.go │ │ │ ├── runner.go │ │ │ └── terraform.go │ │ └── utils │ │ │ ├── json.go │ │ │ ├── maps.go │ │ │ ├── types.go │ │ │ └── utils.go │ └── rules │ │ └── cloudformation │ │ ├── s3_encryption.rego │ │ └── tag_all_resources.rego ├── pipeline.py └── sample_app │ └── sample_app.py ├── pyproject.toml ├── requirements.txt ├── setup.py └── source.bat /.flake8: -------------------------------------------------------------------------------- 1 | [flake8] 2 | max-line-length = 140 3 | exclude = .git,__pycache__,node_modules,cdk.out,test,.venv 4 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | *.swp 2 | package-lock.json 3 | __pycache__ 4 | .pytest_cache 5 | .env 6 | .venv 7 | *.egg-info 8 | 9 | # CDK asset staging directory 10 | .cdk.staging 11 | cdk.out 12 | .idea 13 | 14 | node_modules 15 | **/layer/ 16 | 17 | .DS_Store 18 | -------------------------------------------------------------------------------- /.gitmodules: -------------------------------------------------------------------------------- 1 | [submodule "devsecops_quickstart/cfn_nag/cfn-nag-pipeline"] 2 | path = devsecops_quickstart/cfn_nag/cfn-nag-pipeline 3 | url = https://github.com/stelligent/cfn-nag-pipeline.git 4 | -------------------------------------------------------------------------------- /.go-version: -------------------------------------------------------------------------------- 1 | 1.16.3 2 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. 5 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 
8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *main* branch. 27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 38 | 39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute to. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start. 45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments. 51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue. 55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 60 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of 4 | this software and associated documentation files (the "Software"), to deal in 5 | the Software without restriction, including without limitation the rights to 6 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of 7 | the Software, and to permit persons to whom the Software is furnished to do so. 8 | 9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 10 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS 11 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR 12 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER 13 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 14 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 15 | 16 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # DevSecOps Quick Start 2 | 3 | This artefact helps development teams quickly set up a ready-to-use environment integrated with a 4 | multi-account CI/CD pipeline following security and DevOps best practices. 5 | 6 | ![architecture](./assets/architecture.png) 7 | 8 | Upon successful deployment, you will have: 9 | 10 | - an AWS CodeCommit Git repository 11 | - an AWS Cloud9 development environment integrated with the code repository 12 | - a multi-stage, multi-account CI/CD pipeline integrated with the code repository 13 | - pipeline integration with [Bandit](https://github.com/PyCQA/bandit) for finding common security issues in Python code 14 | - pipeline integration with [Snyk](https://snyk.io/) for continuously monitoring for vulnerabilities in your dependencies 15 | - pipeline integration with [CFN NAG](https://github.com/stelligent/cfn_nag) to look for patterns in 16 | CloudFormation templates that may indicate insecure infrastructure 17 | - pipeline integration with [Open Policy Agent (OPA)](https://www.openpolicyagent.org/) that enables you to define and 18 | enforce policies on infrastructure resources at development time 19 | 20 | ## Prerequisites 21 | - An [AWS Account](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/) to be used as the `Toolchain` account 22 | - One or more [AWS Account(s)](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/) to be used as deployment target accounts (e.g. `Dev`, `QA`, `Prod`, etc.) 23 | 24 | > **_NOTE:_** Separating toolchain and deployment target accounts is technically not required, but is considered a best practice. We encourage you to start with a toolchain and a development account. You can easily add new target environments to the pipeline in the future, if and when needed. 25 | 26 | - [AWS CLI](https://aws.amazon.com/cli/) 27 | - [Install AWS CDK Toolkit v2](https://docs.aws.amazon.com/cdk/v2/guide/cli.html) 28 | - [Install Python v3.7 or higher](https://www.python.org/downloads/) 29 | - [Install Docker](https://docs.docker.com/get-docker/) 30 | - [Install Go](https://go.dev/doc/install) 31 | 32 | ## Set Up the Project 33 | 34 | ### 1. Clone GitHub repository 35 | This project's code repository contains `Git Submodules`.
If cloning for the first time, make sure to use the 36 | `--recurse-submodules` flag to automatically initialize and update submodules: 37 | 38 | ``` 39 | git clone --recurse-submodules https://github.com/aws-samples/devsecops-quickstart.git 40 | ``` 41 | 42 | If the repository is already cloned without the `--recurse-submodules` flag, 43 | initialize and update submodules by running the following command: 44 | 45 | ``` 46 | git submodule update --init --recursive 47 | ``` 48 | 49 | For more information on working with Git repositories with submodules, refer to the 50 | [Git-Tools-Submodules](https://git-scm.com/book/en/v2/Git-Tools-Submodules) documentation. 51 | 52 | ### 2. Configure accounts and regions 53 | Locate the `cdk.json` file in the project's root folder. This is the project's configuration file. 54 | Update `cdk.json` with the account number and region values to be used for the toolchain and deployment target accounts. 55 | The default setting has three deployment target accounts for Dev, QA, and Prod. This is just for demonstration purposes. 56 | Feel free to add/remove deployment targets according to your requirements. You can, for example, start with a 57 | single development account and keep adding more stages in the future as required. 58 | 59 | > **_NOTE:_** Make sure to commit your changes. 60 | 61 | ### 3. Set up Python environment 62 | This project is set up like a standard Python project. The initialization 63 | process also creates a virtualenv within this project, stored under the `.venv` 64 | directory. 65 | 66 | Use the following command to create a virtualenv in the project's root folder: 67 | 68 | > **_NOTE:_** Use `python` instead of `python3` on Windows. 69 | 70 | ``` 71 | python3 -m venv .venv 72 | ``` 73 | 74 | After the virtualenv is created, use the following command to activate it: 75 | 76 | ``` 77 | source .venv/bin/activate 78 | ``` 79 | 80 | If you are on a Windows platform, activate the virtualenv like this: 81 | 82 | ``` 83 | .venv\Scripts\activate.bat 84 | ``` 85 | 86 | Once the virtualenv is activated, install the required dependencies: 87 | 88 | ``` 89 | pip install -r requirements.txt 90 | ``` 91 | 92 | 93 | ## Bootstrap Account(s) 94 | This is a one-time setup known as [Bootstrapping](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html), 95 | which you need to do for the toolchain and each deployment target account separately. 96 | 97 | Do the following steps for the toolchain account and region that you configured in `cdk.json` in 98 | step **2. Configure accounts and regions**: 99 | 100 | ### 4. [Configure AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) to point to the account and region 101 | 102 | > **_NOTE:_** This can be quickly done as described in the [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) section. Alternatively, you can use 103 | [AWS CLI Profiles](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-profiles), which is more convenient when switching back and forth between multiple accounts. 104 | 105 | > **_NOTE:_** If you are using AWS CLI profiles, make sure to provide the `--profile <profile_name>` option for every command interacting with AWS accounts below. 106 | 107 | ### 5. Run CDK Bootstrap 108 | 109 | > **_NOTE:_** Make sure to replace the `<toolchain_account>` and `<toolchain_region>` placeholders.
110 | 111 | ``` 112 | cdk bootstrap \ 113 | --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \ 114 | aws://<toolchain_account>/<toolchain_region> 115 | ``` 116 | 117 | Repeat the following steps for each deployment target account and region that you configured in `cdk.json` in 118 | step **2. Configure accounts and regions**: 119 | 120 | ### 6. [Configure AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) to point to the account and region 121 | 122 | ### 7. Run CDK Bootstrap 123 | 124 | > **_NOTE:_** Make sure to replace the `<toolchain_account>`, `<stage_account>`, and `<stage_region>` placeholders. 125 | 126 | ``` 127 | cdk bootstrap \ 128 | --trust <toolchain_account> \ 129 | --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \ 130 | aws://<stage_account>/<stage_region> 131 | ``` 132 | 133 | ## Configure Snyk authentication token in toolchain account 134 | 135 | For Snyk integration, you need to provide an authentication token for a Snyk account. You can sign up for a 136 | free Snyk account [here](https://app.snyk.io/login?cta=sign-up&loc=body&page=try-snyk). After signing up, you can get 137 | your Auth Token from the Account Settings section in your profile. 138 | 139 | > **_NOTE:_** When signing up for a new account on Snyk, the wizard routes you to a final step to select a 140 | Git repository and provide integration for Snyk to monitor that repository. This step is not mandatory for creating 141 | Snyk accounts and is not required for our setup, either. Simply skip it by choosing `Select from other repositories`, 142 | which will bring you directly to your profile. 143 | 144 | With the Snyk authentication token retrieved, use the `create_secret_helper` utility to securely store 145 | the token in AWS Secrets Manager in the toolchain account. 146 | It will be automatically used by the pipeline to interact with Snyk during the pipeline execution. 147 | 148 | ### 8. [Configure AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) to point to the account and region 149 | 150 | > **_TODO:_** Pass profile to CLI commands 151 | 152 | ``` 153 | ./create_secret_helper.sh snyk-auth-token <snyk-auth-token-value> 154 | ``` 155 | 156 | ## Seed project's code to toolchain account 157 | 158 | ### 9. [Configure AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) to point to the account and region 159 | 160 | ### 10. Create CodeCommit repository 161 | 162 | Create a new Git repository in the toolchain account. This will become the main repository used by the pipeline 163 | for deployments, where you will be developing your application. 164 | 165 | ``` 166 | aws codecommit create-repository --repository-name devsecops-quickstart 167 | ``` 168 | 169 | Take note of the `cloneUrlHttp` value in the command output. 170 | 171 | ### 11. Set up Git Credential Helper for AWS CodeCommit 172 | 173 | If this is the first time you are using AWS CodeCommit, you need to configure a Git Credential Helper 174 | to be able to access CodeCommit repositories using HTTPS. Follow the AWS CodeCommit 175 | documentation to set up a credential helper on 176 | [Linux, macOS, or Unix](https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-https-unixes.html#setting-up-https-unixes-credential-helper) or 177 | [Windows](https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-https-windows.html#setting-up-https-windows-credential-helper). 178 | 179 | ### 12. Seed CodeCommit repository 180 | 181 | > **_NOTE:_** Make sure to replace the `<clone_url_http>` placeholder with the `cloneUrlHttp` value 182 | from step **10. Create CodeCommit repository**.
183 | 184 | ``` 185 | git remote add codecommit <clone_url_http> 186 | git checkout -b main 187 | git add . 188 | git commit -m "initial commit" 189 | git push --set-upstream codecommit main 190 | ``` 191 | 192 | ## Deploy CI/CD pipeline in toolchain account 193 | 194 | ### 13. [Configure AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) to point to the account and region 195 | ### 14. Deploy 196 | Run the CDK deploy command to build and deploy the CI/CD pipeline. 197 | 198 | ``` 199 | cdk deploy devsecops-quickstart-cicd 200 | ``` 201 | 202 | ## Verify deployment in toolchain account 203 | Log in to the toolchain account in the AWS Console. Select the same region you used for the deployment. 204 | ### AWS CodeCommit Repository 205 | 206 | Navigate to the AWS CodeCommit service, select Source/Repositories from the left navigation bar, and verify that a new repository has been created. 207 | 208 | ![validate](./assets/repository.png) 209 | 210 | ### AWS Cloud9 Environment 211 | Navigate to the AWS Cloud9 service, select Account Environments from the left navigation bar, and verify that a new Cloud9 environment has been created. 212 | 213 | ![cloud9](./assets/cloud9.png) 214 | 215 | Note that `cloud9_admin` is the owner of the environment and the Open IDE button is disabled. To access the 216 | Cloud9 environment, go to the CloudFormation service, open the `tooling-cloud9` stack, and select the Outputs section from the top navigation bar. 217 | 218 | ![validate](./assets/cloud9_outputs.png) 219 | 220 | Copy the `IDEURL` link and open it in a new __incognito__ browser tab. For Sign In, select IAM User. For the Account number, use the `IDEAccount` value from the stack outputs, and for the IAM user, use `AdminUserName`. The password needs to be retrieved from AWS Secrets Manager by following the link in `AdminPasswordSecretURL` and selecting Retrieve Secret Value under the Secret Value section. After successful login, you get access to the Cloud9 environment. 221 | 222 | ![validate](./assets/cloud9_env.png) 223 | 224 | ### AWS CodePipeline pipeline 225 | Navigate to the AWS CodePipeline service, select Pipeline/Pipelines from the left navigation bar, and verify that a new pipeline has been created. 226 | 227 | ![validate](./assets/pipeline.png) 228 | 229 | Open the pipeline and scroll down to the `validate` stage. This is the stage where all the validation steps will be run. 230 | 231 | ![validate](./assets/validate.png) 232 | 233 | After validation, you can see the deployment stage(s). The number of stages corresponds to the number of target accounts you have configured in the `cdk.json` file, as described in step **2. Configure accounts and regions**. 234 | 235 | ![dev](./assets/dev.png) 236 | 237 | If you have enabled manual approvals in the stage configuration in `cdk.json`, the corresponding pipeline stage will have a mandatory manual approval before deploying to that environment. 238 | 239 | ![prod](./assets/prod.png) 240 | 241 | ## Verify deployment in target account(s) 242 | For each target account, log in to the account in the AWS Console. Make sure you are in the same region as you used for the deployment. Navigate to the CloudFormation service and search for the application stack that is deployed by the pipeline. This pipeline comes with a sample application called `SampleApp` which is deployed for demonstration purposes. You can add your application stacks following the same approach and the pipeline will take care of deploying them into your target environments. 243 | 244 | ![prod](./assets/sample_app.png) 245 | 246 | 247 | ## What Now?
248 | 249 | The pipeline you just created is deploying a dummy application stack called `SampleApp`. The stack is defined in 250 | `devsecops_quickstart/sample_app/sample_app.py` and is integrated into the pipeline definition in 251 | `devsecops_quickstart/pipeline.py`. 252 | 253 | It is now your turn to roll up your sleeves and develop your application stack(s). Follow the same approach as in 254 | the `SampleApp` to define your application stage and integrate it into the pipeline. 255 | 256 | Have fun! 257 | ## FAQ 258 | ### Q: How to access the Cloud9 Environment? 259 | A: Check the CloudFormation Outputs section of the stack called `tooling-cloud9`. There you can find output parameters 260 | for the environment URL, the admin user, and the AWS Secrets Manager secret containing the admin password. 261 | 262 | ### Q: How to add a new deployment target to the pipeline? 263 | A: Deployment targets are configured in the `cdk.json` file under the `context.config.stage` variable. You can add as many stages 264 | to the pipeline as required, either now or in the future. Simply give your new stage a name and provide values for the account, region, and whether 265 | manual approval is required. Then commit and push the changes to trigger the pipeline. The pipeline is capable of mutating 266 | itself to dynamically adjust to your configuration. 267 | 268 | ## Security 269 | 270 | See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information. 271 | 272 | ## License 273 | 274 | This library is licensed under the MIT-0 License. See the LICENSE file. 275 | -------------------------------------------------------------------------------- /app.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | from aws_cdk import core as cdk 3 | from devsecops_quickstart.pipeline import CICDPipelineStack 4 | 5 | app = cdk.App() 6 | config = app.node.try_get_context("config") 7 | general_config = config["general"] 8 | 9 | developmentPipeline = CICDPipelineStack( 10 | app, 11 | id=f"{general_config['repository_name']}-cicd", 12 | general_config=general_config, 13 | stages_config=config["stage"], 14 | env=cdk.Environment( 15 | account=general_config["toolchain_account"], 16 | region=general_config["toolchain_region"], 17 | ), 18 | ) 19 | 20 | app.synth() 21 | -------------------------------------------------------------------------------- /assets/architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/devsecops-quickstart/31e0c62740085888b24aa024244b0c6c35a68b8e/assets/architecture.png -------------------------------------------------------------------------------- /assets/cloud9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/devsecops-quickstart/31e0c62740085888b24aa024244b0c6c35a68b8e/assets/cloud9.png -------------------------------------------------------------------------------- /assets/cloud9_env.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/devsecops-quickstart/31e0c62740085888b24aa024244b0c6c35a68b8e/assets/cloud9_env.png -------------------------------------------------------------------------------- /assets/cloud9_outputs.png: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/devsecops-quickstart/31e0c62740085888b24aa024244b0c6c35a68b8e/assets/cloud9_outputs.png -------------------------------------------------------------------------------- /assets/dev.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/devsecops-quickstart/31e0c62740085888b24aa024244b0c6c35a68b8e/assets/dev.png -------------------------------------------------------------------------------- /assets/pipeline.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/devsecops-quickstart/31e0c62740085888b24aa024244b0c6c35a68b8e/assets/pipeline.png -------------------------------------------------------------------------------- /assets/prod.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/devsecops-quickstart/31e0c62740085888b24aa024244b0c6c35a68b8e/assets/prod.png -------------------------------------------------------------------------------- /assets/repository.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/devsecops-quickstart/31e0c62740085888b24aa024244b0c6c35a68b8e/assets/repository.png -------------------------------------------------------------------------------- /assets/sample_app.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/devsecops-quickstart/31e0c62740085888b24aa024244b0c6c35a68b8e/assets/sample_app.png -------------------------------------------------------------------------------- /assets/validate.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/devsecops-quickstart/31e0c62740085888b24aa024244b0c6c35a68b8e/assets/validate.png -------------------------------------------------------------------------------- /cdk.json: -------------------------------------------------------------------------------- 1 | { 2 | "app": "python3 app.py", 3 | "context": { 4 | "@aws-cdk/aws-apigateway:usagePlanKeyOrderInsensitiveId": true, 5 | "@aws-cdk/core:enableStackNameDuplicates": "true", 6 | "aws-cdk:enableDiffNoFail": "true", 7 | "@aws-cdk/core:stackRelativeExports": "true", 8 | "@aws-cdk/aws-ecr-assets:dockerIgnoreSupport": true, 9 | "@aws-cdk/aws-secretsmanager:parseOwnedSecretName": true, 10 | "@aws-cdk/aws-kms:defaultKeyPolicies": true, 11 | "@aws-cdk/aws-s3:grantWriteWithoutAcl": true, 12 | "@aws-cdk/aws-ecs-patterns:removeDefaultDesiredCount": true, 13 | "@aws-cdk/aws-rds:lowercaseDbIdentifier": true, 14 | "@aws-cdk/aws-efs:defaultEncryptionAtRest": true, 15 | "@aws-cdk/aws-lambda:recognizeVersionProps": true, 16 | "@aws-cdk/core:newStyleStackSynthesis": true, 17 | "config": { 18 | "general": { 19 | "repository_name": "devsecops-quickstart", 20 | "repository_branch": "main", 21 | "toolchain_account": "", 22 | "toolchain_region": "", 23 | "secret_name": { 24 | "snyk": "snyk-auth-token" 25 | } 26 | }, 27 | "stage": { 28 | "dev": { 29 | "manual_approvals": false, 30 | "stage_account": "", 31 | "stage_region": "" 32 | }, 33 | "qa": { 34 | "manual_approvals": false, 35 | "stage_account": "", 36 | "stage_region": "" 37 | }, 38 | "prod": { 39 | "manual_approvals": true, 40 | "stage_account": "", 41 | "stage_region": "" 42 | } 43 | } 44 | } 45 | } 46 | } 47 | 
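The `stage` block in `cdk.json` above is what the pipeline reads to decide which deployment stages to create. As a minimal sketch of the "add a new deployment target" step described in the README FAQ (the stage name `staging` and the placeholder account/region values below are hypothetical, not part of the repository), a fourth environment would simply be one more entry under `context.config.stage`:

```json
"staging": {
    "manual_approvals": true,
    "stage_account": "<staging_account_number>",
    "stage_region": "<staging_region>"
}
```

After such a change is committed and pushed, the self-mutating pipeline picks up the new configuration and adds the corresponding deployment stage on its next run.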
-------------------------------------------------------------------------------- /create_secret_helper.sh: -------------------------------------------------------------------------------- 1 | SECRET_NAME=$1 2 | SECRET_VALUE=$2 3 | ACCOUNT=$(aws sts get-caller-identity --query Account) 4 | 5 | SECRET_ARN=$(aws secretsmanager create-secret --name "$SECRET_NAME" --secret-string "$SECRET_VALUE" | jq '."ARN"') 6 | 7 | SECRET_POLICY=$(cat << EOM 8 | { 9 | "Version": "2012-10-17", 10 | "Statement": [ 11 | { 12 | "Effect": "Allow", 13 | "Principal": { 14 | "AWS": $ACCOUNT 15 | }, 16 | "Action": [ 17 | "secretsmanager:GetResourcePolicy", 18 | "secretsmanager:GetSecretValue", 19 | "secretsmanager:DescribeSecret", 20 | "secretsmanager:ListSecretVersionIds" 21 | ], 22 | "Resource": $SECRET_ARN 23 | } 24 | ] 25 | } 26 | EOM 27 | ) 28 | 29 | aws secretsmanager put-resource-policy --secret-id "$SECRET_NAME" --resource-policy "$SECRET_POLICY" -------------------------------------------------------------------------------- /devsecops_quickstart/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/devsecops-quickstart/31e0c62740085888b24aa024244b0c6c35a68b8e/devsecops_quickstart/__init__.py -------------------------------------------------------------------------------- /devsecops_quickstart/cfn_nag/cfn_nag.py: -------------------------------------------------------------------------------- 1 | import aws_cdk.core as cdk 2 | import aws_cdk.aws_s3 as s3 3 | import aws_cdk.aws_s3_deployment as s3_deployment 4 | import aws_cdk.aws_lambda as lambda_ 5 | import aws_cdk.aws_iam as iam 6 | import aws_cdk.aws_kms as kms 7 | 8 | 9 | class CfnNagStack(cdk.Stack): 10 | def __init__(self, scope: cdk.Construct, general_config: dict, **kwargs): 11 | 12 | super().__init__(scope, id="CfnNag", **kwargs) 13 | 14 | lambda_role = iam.Role.from_role_arn( 15 | self, "cfn-nag-role", role_arn=f"arn:aws:iam::{general_config['toolchain_account']}:role/cfn-nag-role" 16 | ) 17 | 18 | lambda_role.add_managed_policy( 19 | iam.ManagedPolicy.from_managed_policy_arn( 20 | self, "lambda-service-basic-role", "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" 21 | ) 22 | ) 23 | 24 | lambda_policy = iam.Policy( 25 | self, 26 | "lambda-role-policy", 27 | statements=[ 28 | iam.PolicyStatement( 29 | effect=iam.Effect.ALLOW, 30 | actions=["codepipeline:PutJobSuccessResult", "codepipeline:PutJobFailureResult"], 31 | resources=["*"], 32 | ) 33 | ], 34 | ) 35 | 36 | cfn_policy = lambda_policy.node.default_child 37 | cfn_policy.cfn_options.metadata = { 38 | "cfn_nag": { 39 | "rules_to_suppress": [ 40 | {"id": "W12", "reason": "Circular dependency, pipeline is not deployed yet"}, 41 | ] 42 | } 43 | } 44 | 45 | lambda_policy.attach_to_role(lambda_role) 46 | 47 | encryption_key = kms.Key(self, "cfn-nag-rules-key", enable_key_rotation=True) 48 | encryption_key.add_to_resource_policy( 49 | iam.PolicyStatement( 50 | effect=iam.Effect.ALLOW, 51 | actions=["kms:Decrypt", "kms:DescribeKey"], 52 | resources=["*"], 53 | principals=[iam.ArnPrincipal(lambda_role.role_arn)], 54 | ) 55 | ) 56 | 57 | rules_bucket = s3.Bucket( 58 | self, 59 | id="cfn-nag-rules-bucket", 60 | bucket_name=f"cfn-nag-rules-{self.account}", 61 | removal_policy=cdk.RemovalPolicy.DESTROY, 62 | block_public_access=s3.BlockPublicAccess.BLOCK_ALL, 63 | encryption=s3.BucketEncryption.KMS, 64 | encryption_key=encryption_key, 65 | ) 66 | 67 | cdk.Tags.of(rules_bucket).add("resource-owner", 
"cfn-nag") 68 | 69 | s3_deployment.BucketDeployment( 70 | self, 71 | id="cfn-nag-rules-deployment", 72 | destination_bucket=rules_bucket, 73 | sources=[s3_deployment.Source.asset("./devsecops_quickstart/cfn_nag/rules")], 74 | memory_limit=128, 75 | ) 76 | 77 | rules_bucket.add_to_resource_policy( 78 | iam.PolicyStatement( 79 | actions=["s3:List*", "s3:GetObject*", "s3:GetBucket*"], 80 | resources=[ 81 | rules_bucket.bucket_arn, 82 | f"{rules_bucket.bucket_arn}/*", 83 | ], 84 | principals=[iam.ArnPrincipal(lambda_role.role_arn)], 85 | ) 86 | ) 87 | 88 | layer = lambda_.LayerVersion( 89 | self, 90 | "cfn-nag-layer", 91 | code=lambda_.Code.from_asset("devsecops_quickstart/cfn_nag/layer.zip"), 92 | # code=lambda_.Code.from_asset("devsecops_quickstart/cfn_nag/cfn-nag-pipeline/layer"), 93 | compatible_runtimes=[lambda_.Runtime.RUBY_2_7], 94 | description="Ruby gems required for cfn-nag lambda handler", 95 | ) 96 | 97 | # layer = lambda_.LayerVersion( 98 | # self, 99 | # "cfn-nag-layer", 100 | # code=lambda_.Code.from_asset( 101 | # path="devsecops_quickstart/cfn_nag/cfn-nag-pipeline", 102 | # bundling=cdk.BundlingOptions( 103 | # image=cdk.DockerImage("amazon/aws-sam-cli-build-image-ruby2.7"), 104 | # command=[ 105 | # 'echo "!!!!!! HELLO WORLD"', 106 | # # "bundle install --path=ruby/gems" 107 | # # "mv ruby/gems/ruby/* ruby/gems/", 108 | # # "rm -rf ruby/gems/2.7.0/cache", 109 | # # "rm -rf ruby/gems/ruby", 110 | # # "mkdir layer", 111 | # # "mv ruby layer", 112 | # # "cp layer /asset-output" 113 | # ] 114 | # ) 115 | # ), 116 | # compatible_runtimes=[lambda_.Runtime.RUBY_2_7], 117 | # description="Ruby gems required for cfn-nag lambda handler" 118 | # ) 119 | 120 | lambda_.Function( 121 | self, 122 | "cfn-nag-handler", 123 | function_name="cfn-nag", 124 | runtime=lambda_.Runtime.RUBY_2_7, 125 | memory_size=1024, 126 | timeout=cdk.Duration.seconds(300), 127 | handler="handler.handler", 128 | layers=[layer], 129 | role=lambda_role, 130 | code=lambda_.Code.from_asset("devsecops_quickstart/cfn_nag/cfn-nag-pipeline/lib"), 131 | environment={"RULE_BUCKET_NAME": rules_bucket.bucket_name, "RULE_BUCKET_PREFIX": ""}, 132 | ) 133 | -------------------------------------------------------------------------------- /devsecops_quickstart/cfn_nag/layer.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/devsecops-quickstart/31e0c62740085888b24aa024244b0c6c35a68b8e/devsecops_quickstart/cfn_nag/layer.zip -------------------------------------------------------------------------------- /devsecops_quickstart/cfn_nag/rules/README.md: -------------------------------------------------------------------------------- 1 | Add custom rules for cfn-nag here -------------------------------------------------------------------------------- /devsecops_quickstart/cloud9.py: -------------------------------------------------------------------------------- 1 | import aws_cdk.aws_cloud9 as cloud9 2 | import aws_cdk.aws_ec2 as ec2 3 | import aws_cdk.aws_iam as iam 4 | import aws_cdk.aws_secretsmanager as secretmanager 5 | import aws_cdk.core as cdk 6 | import aws_cdk.aws_codecommit as codecommit 7 | 8 | 9 | class Cloud9Stack(cdk.Stack): 10 | def __init__(self, scope: cdk.Construct, general_config: dict, **kwargs): 11 | 12 | super().__init__(scope, id="Cloud9", **kwargs) 13 | 14 | repository = codecommit.Repository.from_repository_name( 15 | self, 16 | id="Repository", 17 | repository_name=general_config["repository_name"], 18 | ) 19 | 20 | secret = 
secretmanager.Secret( # nosec 21 | self, 22 | id="cloud9_admin_password", 23 | secret_name="cloud9_admin_password", 24 | description="cloud9 admin password", 25 | ) 26 | 27 | cloud9_admin_user = iam.User( 28 | self, 29 | id="cloud9_admin", 30 | user_name="cloud9_admin", 31 | password=secret.secret_value, 32 | ) 33 | cloud9_admin_user.add_managed_policy( 34 | iam.ManagedPolicy.from_managed_policy_arn( 35 | self, 36 | id="AWSCloud9Administrator", 37 | managed_policy_arn="arn:aws:iam::aws:policy/AWSCloud9Administrator", 38 | ) 39 | ) 40 | cloud9_admin_user.add_managed_policy( 41 | iam.ManagedPolicy.from_managed_policy_arn( 42 | self, 43 | id="AWSCodeCommitFullAccess", 44 | managed_policy_arn="arn:aws:iam::aws:policy/AWSCodeCommitFullAccess", 45 | ) 46 | ) 47 | 48 | vpc = ec2.Vpc(self, "Cloud9-VPC", max_azs=3) 49 | 50 | cloud9_environment = cloud9.CfnEnvironmentEC2( 51 | self, 52 | general_config["repository_name"], 53 | name=general_config["repository_name"], 54 | instance_type="t2.micro", 55 | automatic_stop_time_minutes=30, 56 | owner_arn=cloud9_admin_user.user_arn, 57 | subnet_id=vpc.public_subnets[0].subnet_id, 58 | repositories=[ 59 | cloud9.CfnEnvironmentEC2.RepositoryProperty( 60 | repository_url=repository.repository_clone_url_http, 61 | path_component=f"/{general_config['repository_name']}", 62 | ) 63 | ], 64 | ) 65 | 66 | ide_url = "https://{region}.console.aws.amazon.com/cloud9/ide/{id}".format( 67 | region=general_config["toolchain_region"], id=cloud9_environment.ref 68 | ) 69 | 70 | secret_url = ( 71 | "https://{region}.console.aws.amazon.com/secretsmanager/home?region={region}#!/secret?name={secret}".format( 72 | region=general_config["toolchain_region"], secret=secret.secret_name 73 | ) 74 | ) 75 | 76 | cdk.CfnOutput(self, "IDE_Account", value=general_config["toolchain_account"]) 77 | cdk.CfnOutput(self, "IDE_URL", value=ide_url) 78 | cdk.CfnOutput(self, "Admin_User_Name", value=cloud9_admin_user.user_name) 79 | cdk.CfnOutput(self, "Admin_Password_Secret_URL", value=secret_url) 80 | 81 | cloud9_admin_user.node.default_child.cfn_options.metadata = { 82 | "cfn_nag": { 83 | "rules_to_suppress": [ 84 | {"id": "F2000", "reason": "Single user for Cloud9 admin"}, 85 | ] 86 | } 87 | } 88 | -------------------------------------------------------------------------------- /devsecops_quickstart/opa_scan/go.mod: -------------------------------------------------------------------------------- 1 | module devsecops-quickstart/opa-scan 2 | 3 | go 1.16 4 | 5 | require ( 6 | github.com/aws/aws-lambda-go v1.13.3 7 | github.com/aws/aws-sdk-go-v2 v1.3.2 8 | github.com/aws/aws-sdk-go-v2/config v1.1.4 9 | github.com/aws/aws-sdk-go-v2/service/codepipeline v1.2.2 10 | github.com/aws/aws-sdk-go-v2/service/s3 v1.4.0 11 | github.com/open-policy-agent/opa v0.43.1 12 | ) 13 | -------------------------------------------------------------------------------- /devsecops_quickstart/opa_scan/lambda/main.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "context" 5 | "devsecops-quickstart/opa-scan/pkg/app" 6 | "encoding/json" 7 | "fmt" 8 | 9 | "github.com/aws/aws-lambda-go/lambda" 10 | "github.com/aws/aws-sdk-go-v2/aws" 11 | "github.com/aws/aws-sdk-go-v2/config" 12 | "github.com/aws/aws-sdk-go-v2/service/codepipeline" 13 | "github.com/aws/aws-sdk-go-v2/service/codepipeline/types" 14 | ) 15 | 16 | type MyEvent struct { 17 | Parameters string `json:"parameters"` 18 | CodePipelineJob CodePipelineJob `json:"CodePipeline.job"` 19 | } 20 | 21 | type 
CodePipelineParameters struct { 22 | Rules []string `json:"rules"` 23 | Parameters string `json:"parameters"` 24 | } 25 | 26 | // CodePipelineJob represents a job from an AWS CodePipeline event 27 | type CodePipelineJob struct { 28 | ID string `json:"id"` 29 | AccountID string `json:"accountId"` 30 | Data CodePipelineData `json:"data"` 31 | } 32 | 33 | // CodePipelineData represents a job from an AWS CodePipeline event 34 | type CodePipelineData struct { 35 | ActionConfiguration CodePipelineActionConfiguration `json:"actionConfiguration"` 36 | InputArtifacts []CodePipelineInputArtifact `json:"inputArtifacts"` 37 | OutPutArtifacts []CodePipelineOutputArtifact `json:"outputArtifacts"` 38 | ArtifactCredentials CodePipelineArtifactCredentials `json:"artifactCredentials"` 39 | ContinuationToken string `json:"continuationToken"` 40 | } 41 | 42 | // CodePipelineActionConfiguration represents an Action Configuration 43 | type CodePipelineActionConfiguration struct { 44 | Configuration CodePipelineConfiguration `json:"configuration"` 45 | } 46 | 47 | // CodePipelineConfiguration represents a configuration for an Action Configuration 48 | type CodePipelineConfiguration struct { 49 | FunctionName string `json:"FunctionName"` 50 | UserParameters string `json:"UserParameters"` 51 | } 52 | 53 | // CodePipelineInputArtifact represents an input artifact 54 | type CodePipelineInputArtifact struct { 55 | Location CodePipelineInputLocation `json:"location"` 56 | Revision *string `json:"revision"` 57 | Name string `json:"name"` 58 | } 59 | 60 | // CodePipelineInputLocation represents a input location 61 | type CodePipelineInputLocation struct { 62 | S3Location CodePipelineS3Location `json:"s3Location"` 63 | LocationType string `json:"type"` 64 | } 65 | 66 | // CodePipelineS3Location represents an s3 input location 67 | type CodePipelineS3Location struct { 68 | BucketName string `json:"bucketName"` 69 | ObjectKey string `json:"objectKey"` 70 | } 71 | 72 | // CodePipelineOutputArtifact represents an output artifact 73 | type CodePipelineOutputArtifact struct { 74 | Location CodePipelineInputLocation `json:"location"` 75 | Revision *string `json:"revision"` 76 | Name string `json:"name"` 77 | } 78 | 79 | // CodePipelineOutputLocation represents a output location 80 | type CodePipelineOutputLocation struct { 81 | S3Location CodePipelineS3Location `json:"s3Location"` 82 | LocationType string `json:"type"` 83 | } 84 | 85 | // CodePipelineArtifactCredentials represents CodePipeline artifact credentials 86 | type CodePipelineArtifactCredentials struct { 87 | SecretAccessKey string `json:"secretAccessKey"` 88 | SessionToken string `json:"sessionToken"` 89 | AccessKeyID string `json:"accessKeyId"` 90 | } 91 | 92 | func HandleLambdaRequest(ctx context.Context, event MyEvent) (string, error) { 93 | fmt.Println("Handler started!") 94 | 95 | // WARNING: Uncomment only for testing/debugging purposes, as the event contains temporary credentials 96 | // passed from pipeline to lambda 97 | // eventJson, _ := json.Marshal(event) 98 | // fmt.Println("Event received: " + string(eventJson)) 99 | 100 | codePipelineJob := event.CodePipelineJob 101 | 102 | // WARNING: Uncomment only for testing/debugging purposes, as the event contains temporary credentials 103 | // passed from pipeline to lambda 104 | // jobJson, _ := json.Marshal(codePipelineJob) 105 | // fmt.Println("CodePipeline Job: " + string(jobJson)) 106 | 107 | userParameters := codePipelineJob.Data.ActionConfiguration.Configuration.UserParameters 108 | var parameters 
CodePipelineParameters 109 | err := json.Unmarshal([]byte(userParameters), ¶meters) 110 | 111 | if err != nil { 112 | return "", err 113 | } 114 | 115 | inputArtifact := codePipelineJob.Data.InputArtifacts[0].Location.S3Location 116 | 117 | c := app.New(parameters.Rules, event.Parameters) 118 | result, err := c.Eval(ctx, "s3://"+inputArtifact.BucketName+"/"+inputArtifact.ObjectKey) 119 | if err != nil { 120 | return "", err 121 | } 122 | 123 | resultJson, err := json.Marshal(result) 124 | if err != nil { 125 | return "", err 126 | } 127 | 128 | fmt.Println("Evaluation result: " + string(resultJson)) 129 | 130 | return PutCodePipelineResult(ctx, codePipelineJob, result) 131 | } 132 | 133 | func PutCodePipelineResult(ctx context.Context, job CodePipelineJob, result app.ColomboResult) (string, error) { 134 | cfg, err := config.LoadDefaultConfig(context.TODO()) 135 | if err != nil { 136 | return "", err 137 | } 138 | 139 | svc := codepipeline.NewFromConfig(cfg) 140 | 141 | var failedRules []app.RuleResult 142 | for _, ruleResult := range result.Rules { 143 | if ruleResult.Valid == "false" { 144 | failedRules = append(failedRules, ruleResult) 145 | } 146 | } 147 | 148 | if len(failedRules) > 0 { 149 | failedJson, err := json.Marshal(failedRules) 150 | if err != nil { 151 | return "", err 152 | } 153 | 154 | fmt.Println("Marking job as failure due to the non-compliant resources: " + string(failedJson)) 155 | 156 | _, err = svc.PutJobFailureResult(context.Background(), &codepipeline.PutJobFailureResultInput{ 157 | JobId: aws.String(job.ID), 158 | FailureDetails: &types.FailureDetails{ 159 | Message: aws.String(string(failedJson)), 160 | Type: types.FailureTypeJobFailed, 161 | }, 162 | }) 163 | 164 | if err != nil { 165 | return "", err 166 | } 167 | return "Marked job" + job.ID + " as failure.", nil 168 | } 169 | 170 | _, err = svc.PutJobSuccessResult(context.Background(), &codepipeline.PutJobSuccessResultInput{ 171 | JobId: aws.String(job.ID), 172 | }) 173 | 174 | if err != nil { 175 | return "", err 176 | } 177 | 178 | fmt.Println("Marking job as success, all resources are compliant.") 179 | return "Marked job" + job.ID + " as success.", nil 180 | } 181 | 182 | func main() { 183 | fmt.Println("Running OpaScan.....") 184 | lambda.Start(HandleLambdaRequest) 185 | } 186 | -------------------------------------------------------------------------------- /devsecops_quickstart/opa_scan/opascan.py: -------------------------------------------------------------------------------- 1 | import aws_cdk.core as cdk 2 | import aws_cdk.aws_lambda_go as lambda_go 3 | import aws_cdk.aws_lambda as lambda_ 4 | import aws_cdk.aws_iam as iam 5 | import aws_cdk.aws_s3 as s3 6 | import aws_cdk.aws_s3_deployment as s3_deployment 7 | import aws_cdk.aws_kms as kms 8 | 9 | 10 | class OPAScanStack(cdk.Stack): 11 | def __init__(self, scope: cdk.Construct, general_config: dict, **kwargs): 12 | 13 | super().__init__(scope, id="OPAScan", **kwargs) 14 | 15 | lambda_role = iam.Role.from_role_arn( 16 | self, 17 | "opa-scan-role", 18 | role_arn=f"arn:aws:iam::{general_config['toolchain_account']}:role/opa-scan-role", 19 | ) 20 | 21 | lambda_role.add_managed_policy( 22 | iam.ManagedPolicy.from_managed_policy_arn( 23 | self, "lambda-service-basic-role", "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" 24 | ) 25 | ) 26 | 27 | lambda_policy = iam.Policy( 28 | self, 29 | "lambda-role-policy", 30 | statements=[ 31 | iam.PolicyStatement( 32 | effect=iam.Effect.ALLOW, 33 | actions=["codepipeline:PutJobSuccessResult", 
"codepipeline:PutJobFailureResult"], 34 | resources=["*"], 35 | ) 36 | ], 37 | ) 38 | 39 | cfn_policy = lambda_policy.node.default_child 40 | cfn_policy.cfn_options.metadata = { 41 | "cfn_nag": { 42 | "rules_to_suppress": [ 43 | {"id": "W12", "reason": "Circular dependency, pipeline is not deployed yet"}, 44 | ] 45 | } 46 | } 47 | 48 | lambda_policy.attach_to_role(lambda_role) 49 | 50 | encryption_key = kms.Key(self, "opa-scan-rules-key", enable_key_rotation=True) 51 | encryption_key.add_to_resource_policy( 52 | iam.PolicyStatement( 53 | effect=iam.Effect.ALLOW, 54 | actions=["kms:Decrypt", "kms:DescribeKey"], 55 | resources=["*"], 56 | principals=[iam.ArnPrincipal(lambda_role.role_arn)], 57 | ) 58 | ) 59 | 60 | rules_bucket = s3.Bucket( 61 | self, 62 | id="opa-scan-rules-bucket", 63 | bucket_name=f"opa-scan-rules-{self.account}", 64 | removal_policy=cdk.RemovalPolicy.DESTROY, 65 | block_public_access=s3.BlockPublicAccess.BLOCK_ALL, 66 | encryption=s3.BucketEncryption.KMS, 67 | encryption_key=encryption_key, 68 | ) 69 | 70 | cdk.Tags.of(rules_bucket).add("resource-owner", "opa-scan") 71 | 72 | s3_deployment.BucketDeployment( 73 | self, 74 | id="opa-scan-rules-deployment", 75 | destination_bucket=rules_bucket, 76 | sources=[s3_deployment.Source.asset("./devsecops_quickstart/opa_scan/rules")], 77 | memory_limit=128, 78 | ) 79 | 80 | rules_bucket.add_to_resource_policy( 81 | iam.PolicyStatement( 82 | actions=["s3:List*", "s3:GetObject*", "s3:GetBucket*"], 83 | resources=[ 84 | rules_bucket.bucket_arn, 85 | f"{rules_bucket.bucket_arn}/*", 86 | ], 87 | principals=[iam.ArnPrincipal(lambda_role.role_arn)], 88 | ) 89 | ) 90 | 91 | lambda_go.GoFunction( 92 | self, 93 | "opa-scan", 94 | function_name="opa-scan", 95 | entry="devsecops_quickstart/opa_scan/lambda/main.go", 96 | role=lambda_role, 97 | environment={"RUN_ON_LAMBDA": "True"}, 98 | timeout=cdk.Duration.minutes(2), 99 | memory_size=256, 100 | runtime=lambda_.Runtime.GO_1_X, 101 | ) 102 | -------------------------------------------------------------------------------- /devsecops_quickstart/opa_scan/pkg/app/app.go: -------------------------------------------------------------------------------- 1 | package app 2 | 3 | import ( 4 | "context" 5 | "devsecops-quickstart/opa-scan/pkg/filesystem" 6 | "devsecops-quickstart/opa-scan/pkg/runners" 7 | "devsecops-quickstart/opa-scan/pkg/utils" 8 | "embed" 9 | "encoding/json" 10 | "errors" 11 | "fmt" 12 | "log" 13 | "reflect" 14 | "regexp" 15 | "strconv" 16 | "strings" 17 | "sync" 18 | 19 | "github.com/open-policy-agent/opa/ast" 20 | "github.com/open-policy-agent/opa/rego" 21 | ) 22 | 23 | // Colombo struct 24 | type Colombo struct { 25 | inputPath string 26 | rulesPath []string 27 | libsPath string 28 | parameters map[string]string 29 | } 30 | 31 | // ColomboResult struc 32 | type ColomboResult struct { 33 | Rules []RuleResult 34 | } 35 | 36 | // RuleResult struct definition 37 | type RuleResult struct { 38 | ID string `json:"id"` 39 | Message string `json:"message"` 40 | Type string `json:"type"` 41 | Valid string `json:"valid"` 42 | } 43 | 44 | var ( 45 | app *Colombo 46 | once sync.Once 47 | //go:embed lib/*.rego 48 | libs embed.FS 49 | ) 50 | 51 | func newResult() ColomboResult { 52 | return ColomboResult{} 53 | } 54 | 55 | // New Creates a new Colombo Object 56 | func New(rulesPath []string, parameters string) *Colombo { 57 | // method to ensure singleton behaviour 58 | once.Do(func() { 59 | app = &Colombo{ 60 | libsPath: "./lib", 61 | } 62 | err := app.setRulesPath(rulesPath) 63 | if err != nil { 
64 | log.Fatal(err) 65 | } 66 | app.SetParameters(parameters) 67 | }) 68 | 69 | return app 70 | } 71 | 72 | // SetParameters set parameters to replace in input file 73 | func (c *Colombo) SetParameters(parameters string) { 74 | c.parameters = map[string]string{} 75 | 76 | // Split paramters by "," allowing lists Example: foo1=hello,foo2=[sub1,sub2] 77 | r := regexp.MustCompile(`(?:\[.*?\]|[^,])+`) 78 | parametersSlice := r.FindAllString(parameters, -1) 79 | 80 | for _, param := range parametersSlice { 81 | keyValue := strings.Split(param, "=") 82 | key := keyValue[0] 83 | value := strings.Replace(keyValue[1], "\"", "", -1) 84 | 85 | if strings.HasPrefix(value, "[") { 86 | valueTemp := strings.Replace(value, "[", "", -1) 87 | valueTemp = strings.Replace(valueTemp, "]", "", -1) 88 | valueSlice := strings.Split(valueTemp, ",") 89 | for i, v := range valueSlice { 90 | valueSlice[i] = "\"" + v + "\"" 91 | } 92 | value = "[" + strings.Join(valueSlice, ",") + "]" 93 | } else if _, err := strconv.Atoi(value); err != nil { 94 | value = "\"" + value + "\"" 95 | } 96 | 97 | c.parameters[key] = value 98 | } 99 | } 100 | 101 | func (c *Colombo) setRulesPath(rulesPath []string) error { 102 | for i := 0; i < len(rulesPath); i++ { 103 | fs := filesystem.NewFilesystemByPath(rulesPath[i]) 104 | if ok, _ := fs.PathExists(rulesPath[i]); ok == false { 105 | return errors.New("Rules path (" + rulesPath[i] + ") does not exists") 106 | } 107 | } 108 | c.rulesPath = rulesPath 109 | return nil 110 | } 111 | 112 | func (c Colombo) getRules() (map[string]string, error) { 113 | rules := map[string]string{} 114 | 115 | for i := 0; i < len(c.rulesPath); i++ { 116 | fs := filesystem.NewFilesystemByPath(c.rulesPath[i]) 117 | rulesPath, err := fs.Read(c.rulesPath[i], "^.+\\.(rego)$", false) 118 | if err != nil { 119 | return rules, err 120 | } 121 | for _, v := range rulesPath { 122 | fmt.Println("Loaded rule: " + v.FilePath) 123 | rules[v.FilePath] = v.Data 124 | } 125 | } 126 | 127 | return rules, nil 128 | } 129 | 130 | // readInputContent Reads input content and returns a map with the path as key and file content as value. 131 | // Input can be a directory path or a file path. 
132 | func (c Colombo) readInputContent(path string) ([]utils.InputFile, error) { 133 | inputFilesJSON := []utils.InputFile{} 134 | 135 | fs := filesystem.NewFilesystemByPath(path) 136 | inputFiles, err := fs.Read(path, "^.+\\.(template).(json)$", true) 137 | if err != nil { 138 | return inputFilesJSON, err 139 | } 140 | 141 | for _, v := range inputFiles { 142 | fmt.Println("Loaded input: " + v.FilePath) 143 | var data map[string]interface{} 144 | json.Unmarshal([]byte(v.Data), &data) 145 | inputFilesJSON = append(inputFilesJSON, utils.InputFile{ 146 | Type: "JSON", 147 | FilePath: v.FilePath, 148 | Data: data, 149 | }) 150 | } 151 | 152 | return inputFilesJSON, nil 153 | } 154 | 155 | func (c Colombo) makeRunner(inputFiles []utils.InputFile) runners.RunnerInterface { 156 | var runner runners.RunnerInterface 157 | 158 | if len(inputFiles) == 1 { 159 | //Check if Terraform 160 | if _, ok := inputFiles[0].Data["terraform_version"]; ok { 161 | runner = runners.NewTerraform(c.parameters) 162 | } else { 163 | runner = runners.NewCloudFormation(c.parameters) 164 | } 165 | } else { 166 | runner = runners.NewCloudFormation(c.parameters) 167 | } 168 | 169 | return runner 170 | } 171 | 172 | func (c Colombo) prepareModules(runner runners.RunnerInterface) (map[string]string, error) { 173 | fmt.Println("Reading OpaScan Rules...") 174 | modules, err := c.getRules() 175 | if err != nil { 176 | return modules, err 177 | } 178 | 179 | fmt.Println("Loading Libraries....") 180 | for _, v := range runner.GetRegoLibraries() { 181 | f, err := libs.ReadFile("lib/" + v + ".rego") 182 | if err != nil { 183 | log.Fatal(err) 184 | } 185 | modules[v] = string(f) 186 | } 187 | 188 | return modules, nil 189 | } 190 | 191 | // Eval run evaluation 192 | func (c *Colombo) Eval(ctx context.Context, input string) (ColomboResult, error) { 193 | fmt.Println("Starting OpaScan Evaluation...") 194 | 195 | // Reads input content and returns a map with the path as key and file content as value 196 | fmt.Println("Reading Input: " + input) 197 | c.inputPath = input 198 | inputFiles, err := c.readInputContent(c.inputPath) 199 | if err != nil { 200 | log.Fatal(err) 201 | } 202 | 203 | fmt.Println("Creating final Input....") 204 | runner := c.makeRunner(inputFiles) 205 | finalInput := runner.GenerateInput(inputFiles) 206 | 207 | // load Rego lib and rules 208 | modules, err := c.prepareModules(runner) 209 | if err != nil { 210 | log.Fatal(err) 211 | } 212 | 213 | // Compile the module. The keys are used as identifiers in error messages. 214 | compiler, err := ast.CompileModules(modules) 215 | 216 | fmt.Println("Checking Compliance.....") 217 | rego := rego.New( 218 | rego.Query("data.rules[_].rule"), 219 | rego.Compiler(compiler), 220 | rego.Input(finalInput.Data)) 221 | 222 | //Run evaluation. 
223 | rs, err := rego.Eval(ctx) 224 | fmt.Println("Evaluation Finished.") 225 | if err != nil { 226 | return newResult(), err 227 | } 228 | fmt.Println(rs) 229 | res := c.parseScanResult(rs) 230 | 231 | return res, err 232 | } 233 | 234 | func (c Colombo) parseScanResult(rs rego.ResultSet) ColomboResult { 235 | result := newResult() 236 | 237 | if len(rs) == 0 { 238 | return result 239 | } 240 | 241 | for _, r := range rs { 242 | for _, exp := range r.Expressions { 243 | s := reflect.ValueOf(exp.Value) 244 | for i := 0; i < s.Len(); i++ { 245 | ret := s.Index(i).Interface() 246 | data, _ := json.Marshal(ret) 247 | rule := &RuleResult{} 248 | json.Unmarshal(data, rule) 249 | result.Rules = append(result.Rules, *rule) 250 | } 251 | } 252 | } 253 | 254 | return result 255 | } 256 | -------------------------------------------------------------------------------- /devsecops_quickstart/opa_scan/pkg/app/app_test.go: -------------------------------------------------------------------------------- 1 | package app 2 | 3 | import ( 4 | "errors" 5 | "reflect" 6 | "sort" 7 | "testing" 8 | ) 9 | 10 | //Test New app function 11 | func TestNewAppEmptyArgs(t *testing.T) { 12 | a := New([]string{}, "") 13 | 14 | if reflect.TypeOf(a).String() != "*app.Colombo" { 15 | t.Error("New() is not returning *Colombo instance") 16 | } 17 | } 18 | 19 | // Test assign parameters tu app 20 | func TestNewAppWithMixedParameters(t *testing.T) { 21 | parameters := "foo1=abc,foo2=225,foo3=[value1,value2,value3]" 22 | expectedResult := map[string]string{ 23 | "foo1": "\"abc\"", 24 | "foo2": "225", 25 | "foo3": "[\"value1\",\"value2\",\"value3\"]", 26 | } 27 | a := New([]string{}, "") 28 | 29 | a.SetParameters(parameters) 30 | 31 | if reflect.DeepEqual(a.parameters, expectedResult) == false { 32 | t.Error("App SetParameters() with string parameters are not mapped correctly") 33 | } 34 | } 35 | 36 | func TestReadLocalAppInputContentNonExistantFile(t *testing.T) { 37 | expectedError := errors.New("Input file tfplan.json does not exist.") 38 | inputPath := "tfplan.json" 39 | a := New([]string{}, "") 40 | 41 | _, err := a.readInputContent(inputPath) 42 | if err == nil || reflect.DeepEqual(err.Error(), expectedError.Error()) == false { 43 | t.Error("App readInputContent() must return an error") 44 | } 45 | } 46 | 47 | //Test New app function 48 | func TestReadLocalTerraformAppInputContent(t *testing.T) { 49 | inputPath := "../../data/tfplan.json" 50 | a := New([]string{}, "") 51 | 52 | foo, err := a.readInputContent(inputPath) 53 | if err != nil { 54 | t.Error(err) 55 | } 56 | 57 | if len(foo) == 0 { 58 | t.Error("App Failed to read input content data") 59 | } 60 | value := foo[0] 61 | 62 | if value.Data == nil { 63 | t.Error("App Failed to read input content data") 64 | } 65 | } 66 | 67 | func TestReadS3AppInputContentNonExistantFile(t *testing.T) { 68 | inputPath := "s3://opascan-codepipeline-output-bucket/opascan-test-pipelin/tf_plan/7DRj2I" 69 | expectedError := errors.New("S3 object " + inputPath + " does not exist.") 70 | a := New([]string{}, "") 71 | 72 | _, err := a.readInputContent(inputPath) 73 | if err == nil || reflect.DeepEqual(err.Error(), expectedError.Error()) == false { 74 | t.Error("App readInputContent() must return an error") 75 | } 76 | } 77 | 78 | //Test New app function 79 | func TestReadS3TerraformAppInputContent(t *testing.T) { 80 | inputPath := "s3://opascan-test/terraform-examples/tfplan.json" 81 | a := New([]string{}, "") 82 | 83 | foo, err := a.readInputContent(inputPath) 84 | if err != nil { 85 | 
t.Error(err) 86 | } 87 | 88 | value := foo[0] 89 | if value.Data == nil { 90 | t.Error("App Failed to read input content data") 91 | } 92 | } 93 | 94 | func TestReadS3CloudFormationFolderAppInputContent(t *testing.T) { 95 | inputPath := "s3://opascan-test/ecs-refarch-cloudformation" 96 | files := []string{ 97 | "s3://opascan-test/ecs-refarch-cloudformation/infrastructure/ecs-cluster.json", 98 | "s3://opascan-test/ecs-refarch-cloudformation/infrastructure/lifecyclehook.json", 99 | "s3://opascan-test/ecs-refarch-cloudformation/infrastructure/load-balancers.json", 100 | "s3://opascan-test/ecs-refarch-cloudformation/infrastructure/security-groups.json", 101 | "s3://opascan-test/ecs-refarch-cloudformation/infrastructure/vpc.json", 102 | "s3://opascan-test/ecs-refarch-cloudformation/master.json", 103 | } 104 | 105 | a := New([]string{}, "") 106 | 107 | foo, err := a.readInputContent(inputPath) 108 | if err != nil { 109 | t.Error(err) 110 | } 111 | 112 | keys := make([]string, 0, len(foo)) 113 | 114 | for _, v := range foo { 115 | keys = append(keys, v.FilePath) 116 | } 117 | 118 | sort.Strings(keys) 119 | sort.Strings(files) 120 | 121 | if reflect.DeepEqual(files, keys) == false { 122 | t.Error("readInputContent it's not reading all files for a CF project.") 123 | } 124 | } 125 | 126 | func TestSetS3AppRegoRules(t *testing.T) { 127 | a := New([]string{}, "") 128 | 129 | inputPath := []string{"s3://test-new-sec-bucket/rego-rules/terraform"} 130 | err := a.setRulesPath(inputPath) 131 | if err != nil { 132 | t.Error("App set S3 rules error: " + err.Error()) 133 | } 134 | 135 | rules, err := a.getRules() 136 | 137 | files := make([]string, 0, len(rules)) 138 | 139 | for k := range rules { 140 | files = append(files, k) 141 | } 142 | 143 | expectedFiles := []string{"s3://test-new-sec-bucket/rego-rules/terraform/sg_axa.rego"} 144 | if reflect.DeepEqual(expectedFiles, files) == false { 145 | t.Error("setRulesPath is not setting rego files correctly") 146 | } 147 | } 148 | -------------------------------------------------------------------------------- /devsecops_quickstart/opa_scan/pkg/app/lib/cloudformation_utils.rego: -------------------------------------------------------------------------------- 1 | package cf_utils 2 | 3 | resource_types = {rt | 4 | r = input.Resources[_] 5 | rt = r.type 6 | } 7 | 8 | resources_names = {r | 9 | input.Resources[r] 10 | } 11 | 12 | resources_by_type = {rt: rs | 13 | resource_types[rt] 14 | rs = {id: r | 15 | r = input.Resources[id] 16 | r.type = rt 17 | } 18 | } 19 | 20 | is_tagged(resource) { 21 | count(resource.tags) > 0 22 | } 23 | 24 | has_tag(resource, key) { 25 | some k 26 | resource.tags[k] 27 | k == key 28 | } -------------------------------------------------------------------------------- /devsecops_quickstart/opa_scan/pkg/app/lib/terraform_utils.rego: -------------------------------------------------------------------------------- 1 | package terraform_utils 2 | 3 | 4 | is_tagged(resource) { 5 | resource.values.tags[_] = _ 6 | } 7 | 8 | has_tag(resource, key) { 9 | some k 10 | resource.values.tags[k] 11 | k == key 12 | } 13 | 14 | resource_types = {rt | 15 | r = resources[_] 16 | rt = r.type 17 | } 18 | 19 | resources_by_type = {rt: rs | 20 | resource_types[rt] 21 | rs = {r | 22 | r = resources[ri] 23 | r.type == rt 24 | } 25 | } 26 | 27 | resources = { r | 28 | some path, value 29 | 30 | # Walk over the JSON tree and check if the node we are 31 | # currently on is a module (either root or child) resources 32 | # value. 
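    # walk/2 is an OPA built-in that visits every nested node of its first
    # argument and binds [path, value] for each one, which is what lets this
    # comprehension pick up resources from the root module as well as from any
    # child_modules entries.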
33 | walk(input.planned_values, [path, value]) 34 | 35 | # Look for resources in the current value based on path 36 | rs := module_resources(path, value) 37 | 38 | # Aggregate them into `resources` 39 | r := rs[_] 40 | } 41 | 42 | # Variant to match root_module resources 43 | module_resources(path, value) = rs { 44 | 45 | # Expect something like: 46 | # 47 | # { 48 | # "root_module": { 49 | # "resources": [...], 50 | # ... 51 | # } 52 | # ... 53 | # } 54 | # 55 | # Where the path is [..., "root_module", "resources"] 56 | 57 | reverse_index(path, 1) == "resources" 58 | reverse_index(path, 2) == "root_module" 59 | rs := value 60 | } 61 | 62 | # Variant to match child_modules resources 63 | module_resources(path, value) = rs { 64 | 65 | # Expect something like: 66 | # 67 | # { 68 | # ... 69 | # "child_modules": [ 70 | # { 71 | # "resources": [...], 72 | # ... 73 | # }, 74 | # ... 75 | # ] 76 | # ... 77 | # } 78 | # 79 | # Where the path is [..., "child_modules", 0, "resources"] 80 | # Note that there will always be an index int between `child_modules` 81 | # and `resources`. We know that walk will only visit each one once, 82 | # so we shouldn't need to keep track of what the index is. 83 | 84 | reverse_index(path, 1) == "resources" 85 | reverse_index(path, 3) == "child_modules" 86 | rs := value 87 | } 88 | 89 | reverse_index(path, idx) = value { 90 | value := path[count(path) - idx] 91 | } -------------------------------------------------------------------------------- /devsecops_quickstart/opa_scan/pkg/app/lib/utils.rego: -------------------------------------------------------------------------------- 1 | package utils 2 | 3 | array_contains(arr, elem) { 4 | arr[_] = elem 5 | } 6 | 7 | allow_resource(resource) = ret { 8 | ret = { 9 | "valid": "true", 10 | "id": resource.address, 11 | "message": "", 12 | "type": resource.type 13 | } 14 | } 15 | 16 | deny_resource_with_message(resource, message) = ret { 17 | ret = { 18 | "valid": "false", 19 | "id": resource.address, 20 | "message": message, 21 | "type": resource.type 22 | } 23 | } 24 | 25 | approval_resource_with_message(resource, message) = ret { 26 | ret = { 27 | "valid": "approval", 28 | "id": resource.address, 29 | "message": message, 30 | "type": resource.type 31 | } 32 | } -------------------------------------------------------------------------------- /devsecops_quickstart/opa_scan/pkg/filesystem/adapters/local.go: -------------------------------------------------------------------------------- 1 | package adapters 2 | 3 | import ( 4 | "devsecops-quickstart/opa-scan/pkg/utils" 5 | "errors" 6 | "io/ioutil" 7 | "log" 8 | "os" 9 | "path/filepath" 10 | "regexp" 11 | ) 12 | 13 | type AddapterInterface interface { 14 | PathExists(path string) (bool, error) 15 | Read(filePath string, experesion string, walk bool) ([]utils.File, error) 16 | Write(location map[string]string, content string) error 17 | } 18 | 19 | type LocalAddapter struct { 20 | AddapterInterface 21 | } 22 | 23 | func NewLocalAdapter() *LocalAddapter { 24 | return &LocalAddapter{} 25 | } 26 | 27 | func (a LocalAddapter) PathExists(path string) (bool, error) { 28 | _, err := os.Stat(path) 29 | return !os.IsNotExist(err), err 30 | } 31 | 32 | func (a LocalAddapter) Read(path string, experesion string, walk bool) ([]utils.File, error) { 33 | inputFiles := []utils.File{} 34 | info, err := os.Stat(path) 35 | if os.IsNotExist(err) { 36 | return inputFiles, errors.New("Input file " + path + " does not exist.") 37 | } 38 | 39 | // Check if is a directory 40 | if info.IsDir() { 41 | // 
Scan inputPath recursively and read all files 42 | filesPath, err := a.ScanFolderRecursively(path, experesion) 43 | if err != nil { 44 | return inputFiles, err 45 | } 46 | 47 | for i := 0; i < len(filesPath); i++ { 48 | file, err := a.getFileContent(filesPath[i]) 49 | if err != nil { 50 | return inputFiles, err 51 | } 52 | inputFiles = append(inputFiles, file) 53 | } 54 | } else { 55 | // If is not a dir is a file 56 | file, err := a.getFileContent(path) 57 | if err != nil { 58 | return inputFiles, errors.New("Input File is not a JSON documment") 59 | } 60 | inputFiles = append(inputFiles, file) 61 | } 62 | 63 | return inputFiles, nil 64 | } 65 | 66 | func (a LocalAddapter) getFileContent(filePath string) (utils.File, error) { 67 | content, err := ioutil.ReadFile(filePath) 68 | if err != nil { 69 | return utils.File{}, err 70 | } 71 | 72 | // Convert []byte to string and print to screen 73 | text := string(content) 74 | 75 | file := utils.File{ 76 | Type: "JSON", 77 | FilePath: filePath, 78 | Data: text, 79 | } 80 | 81 | return file, err 82 | } 83 | 84 | func (a LocalAddapter) ScanFolderRecursively(path string, experesion string) ([]string, error) { 85 | libRegEx, e := regexp.Compile(experesion) 86 | if e != nil { 87 | log.Fatal(e) 88 | } 89 | 90 | files := []string{} 91 | 92 | e = filepath.Walk(path, func(path string, info os.FileInfo, err error) error { 93 | if err == nil && libRegEx.MatchString(info.Name()) { 94 | files = append(files, path) 95 | } 96 | return nil 97 | }) 98 | 99 | return files, e 100 | } 101 | 102 | func (a LocalAddapter) Write(location map[string]string, content string) error { 103 | var err error 104 | return err 105 | } 106 | -------------------------------------------------------------------------------- /devsecops_quickstart/opa_scan/pkg/filesystem/adapters/s3.go: -------------------------------------------------------------------------------- 1 | package adapters 2 | 3 | import ( 4 | "archive/zip" 5 | "bytes" 6 | "context" 7 | "devsecops-quickstart/opa-scan/pkg/utils" 8 | "errors" 9 | "io/ioutil" 10 | "log" 11 | "net/url" 12 | "regexp" 13 | "strings" 14 | "sync" 15 | 16 | "github.com/aws/aws-sdk-go-v2/aws" 17 | "github.com/aws/aws-sdk-go-v2/config" 18 | "github.com/aws/aws-sdk-go-v2/service/s3" 19 | ) 20 | 21 | var ( 22 | s3Svc *S3Addapter 23 | once sync.Once 24 | ) 25 | 26 | type S3Addapter struct { 27 | AddapterInterface 28 | s3Client *s3.Client 29 | } 30 | 31 | // New Creates a new S3Addapter Object 32 | func NewS3Addapter() *S3Addapter { 33 | // method to ensure singleton behaviour 34 | once.Do(func() { 35 | cfg, err := config.LoadDefaultConfig(context.TODO()) 36 | if err != nil { 37 | log.Fatalf("unable to load SDK config, %v", err) 38 | } 39 | 40 | svc := s3.NewFromConfig(cfg) 41 | 42 | s3Svc = &S3Addapter{ 43 | s3Client: svc, 44 | } 45 | }) 46 | 47 | return s3Svc 48 | } 49 | func (a S3Addapter) validateS3Url(path string) (*url.URL, error) { 50 | u, err := url.Parse(path) 51 | if err != nil || u.Scheme != "s3" { 52 | return u, errors.New("Url " + path + " is not valid S3 url.") 53 | } 54 | 55 | u.Path = strings.Replace(u.Path, "/", "", 1) 56 | 57 | return u, nil 58 | } 59 | 60 | func (a S3Addapter) PathExists(path string) (bool, error) { 61 | // Validate S3 url 62 | u, err := a.validateS3Url(path) 63 | if err != nil { 64 | return false, err 65 | } 66 | 67 | resp, err := a.s3Client.ListObjectsV2(context.Background(), &s3.ListObjectsV2Input{ 68 | Bucket: aws.String(u.Host), 69 | Prefix: aws.String(u.Path), 70 | }) 71 | 72 | return len(resp.Contents) > 0, nil 
73 | } 74 | 75 | func (a S3Addapter) Read(filePath string, experesion string, walk bool) ([]utils.File, error) { 76 | inputFiles := []utils.File{} 77 | 78 | // Validate Regular expresion 79 | libRegEx, err := regexp.Compile(experesion) 80 | if err != nil { 81 | return inputFiles, err 82 | } 83 | 84 | // Validate S3 url 85 | u, err := a.validateS3Url(filePath) 86 | 87 | resp, err := a.s3Client.ListObjectsV2(context.Background(), &s3.ListObjectsV2Input{ 88 | Bucket: aws.String(u.Host), 89 | Prefix: aws.String(u.Path), 90 | //Delimiter: aws.String("/"), 91 | //StartAfter: aws.String(key), 92 | }) 93 | if err != nil { 94 | return inputFiles, err 95 | } 96 | 97 | for _, item := range resp.Contents { 98 | files, err := a.readContent(u.Host, *item.Key) 99 | if err != nil { 100 | return inputFiles, err 101 | } 102 | 103 | for _, file := range files { 104 | if libRegEx.MatchString(file.FilePath) { 105 | inputFiles = append(inputFiles, file) 106 | } 107 | } 108 | } 109 | 110 | if len(inputFiles) == 0 { 111 | return inputFiles, errors.New("S3 object " + filePath + " does not exist.") 112 | } 113 | 114 | return inputFiles, nil 115 | } 116 | 117 | func (a S3Addapter) readContent(bucket string, key string) ([]utils.File, error) { 118 | ctx := context.Background() 119 | req, err := a.s3Client.GetObject(ctx, &s3.GetObjectInput{ 120 | Bucket: aws.String(bucket), 121 | Key: aws.String(key), 122 | }) 123 | if err != nil { 124 | return []utils.File{}, err 125 | } 126 | 127 | s3objectBytes, err := ioutil.ReadAll(req.Body) 128 | if err != nil { 129 | return []utils.File{}, err 130 | } 131 | 132 | if *req.ContentType != "application/zip" { 133 | file := utils.File{ 134 | Type: "S3", 135 | FilePath: "s3://" + bucket + "/" + key, 136 | Data: string(s3objectBytes), 137 | } 138 | 139 | return []utils.File{file}, nil 140 | } 141 | 142 | zipReader, err := zip.NewReader(bytes.NewReader(s3objectBytes), int64(len(s3objectBytes))) 143 | if err != nil { 144 | return []utils.File{}, err 145 | } 146 | 147 | // Read all the files from zip archive 148 | files := []utils.File{} 149 | for _, zipFile := range zipReader.File { 150 | zipFileBytes, err := readZipFile(zipFile) 151 | if err != nil { 152 | return []utils.File{}, err 153 | } 154 | 155 | file := utils.File{ 156 | Type: "S3", 157 | FilePath: "s3://" + bucket + "/" + key + "/" + zipFile.FileHeader.Name, 158 | Data: string(zipFileBytes), 159 | } 160 | files = append(files, file) 161 | } 162 | 163 | return files, nil 164 | } 165 | 166 | func readZipFile(zf *zip.File) ([]byte, error) { 167 | f, err := zf.Open() 168 | if err != nil { 169 | return nil, err 170 | } 171 | defer f.Close() 172 | return ioutil.ReadAll(f) 173 | } 174 | 175 | func (a S3Addapter) Write(location map[string]string, content string) error { 176 | ctx := context.Background() 177 | 178 | _, err := a.s3Client.PutObject(ctx, &s3.PutObjectInput{ 179 | Bucket: aws.String(location["bucket"]), 180 | Key: aws.String(location["key"]), 181 | Body: strings.NewReader(content), // bytes.NewReader(buffer), 182 | ContentDisposition: aws.String("attachment"), 183 | // ContentLength: aws.Int64(int64(len(buffer))), 184 | // ContentType: aws.String(http.DetectContentType(buffer)), 185 | // ServerSideEncryption: aws.String("AES256"), 186 | }) 187 | 188 | return err 189 | } 190 | -------------------------------------------------------------------------------- /devsecops_quickstart/opa_scan/pkg/filesystem/filesystem.go: -------------------------------------------------------------------------------- 1 | package filesystem 2 | 3 
| import ( 4 | "devsecops-quickstart/opa-scan/pkg/filesystem/adapters" 5 | "devsecops-quickstart/opa-scan/pkg/utils" 6 | "net/url" 7 | ) 8 | 9 | // Filesystem struct 10 | type Filesystem struct { 11 | driver adapters.AddapterInterface 12 | } 13 | 14 | // NewFilesystem creates a new Filesystem instance 15 | func NewFilesystem(driver string) *Filesystem { 16 | var addapter adapters.AddapterInterface 17 | 18 | if driver == "s3" { 19 | addapter = adapters.NewS3Addapter() 20 | } else if driver == "local" { 21 | addapter = adapters.NewLocalAdapter() 22 | } 23 | 24 | return &Filesystem{ 25 | driver: addapter, 26 | } 27 | } 28 | 29 | func NewFilesystemByPath(path string) *Filesystem { 30 | addapter := "local" 31 | u, err := url.Parse(path) 32 | if err == nil && u.Scheme == "s3" { 33 | addapter = "s3" 34 | } 35 | 36 | return NewFilesystem(addapter) 37 | } 38 | 39 | func (f Filesystem) PathExists(path string) (bool, error) { 40 | return f.driver.PathExists(path) 41 | } 42 | 43 | func (f Filesystem) Read(path string, experesion string, walk bool) ([]utils.File, error) { 44 | return f.driver.Read(path, experesion, walk) 45 | } 46 | 47 | func (f Filesystem) Write(location map[string]string, content string) error { 48 | return f.driver.Write(location, content) 49 | } 50 | -------------------------------------------------------------------------------- /devsecops_quickstart/opa_scan/pkg/runners/cloudformation.go: -------------------------------------------------------------------------------- 1 | package runners 2 | 3 | import ( 4 | "devsecops-quickstart/opa-scan/pkg/utils" 5 | "encoding/json" 6 | "log" 7 | "strings" 8 | ) 9 | 10 | // CloudFormation Runner 11 | type CloudFormation struct { 12 | RunnerInterface 13 | language string 14 | libraries []string 15 | parameters map[string]string 16 | } 17 | 18 | // NewCloudFormation creates a new CloudFormation runner instance 19 | func NewCloudFormation(parameters map[string]string) *CloudFormation { 20 | c := &CloudFormation{ 21 | language: "cloudformation", 22 | libraries: []string{"cloudformation_utils", "utils"}, 23 | } 24 | 25 | c.SetParameters(parameters) 26 | 27 | return c 28 | } 29 | 30 | // GetLanguage returns runner language 31 | func (c CloudFormation) GetLanguage() string { 32 | return c.language 33 | } 34 | 35 | // GetRegoLibraries returns Rego libraries used by the runner 36 | func (c CloudFormation) GetRegoLibraries() []string { 37 | return c.libraries 38 | } 39 | 40 | // SetParameters sets runner parameters 41 | func (c *CloudFormation) SetParameters(p map[string]string) (err error) { 42 | c.parameters = map[string]string{} 43 | 44 | for k, v := range p { 45 | k := "{\"Ref\":\"" + strings.TrimSpace(k) + "\"}" 46 | c.parameters[k] = v 47 | } 48 | 49 | return nil 50 | } 51 | 52 | // GenerateInput returns CF template with parameters replaced. 
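// For example, given a hypothetical parameter map {"BucketName": "\"my-bucket\""},
// every {"Ref":"BucketName"} occurrence in each (whitespace-stripped) template is
// replaced with "my-bucket" before the files are merged and handed to the Rego rules.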
53 | func (c CloudFormation) GenerateInput(inputFiles []utils.InputFile) utils.InputFile { 54 | finalInput := make(map[string]interface{}) 55 | 56 | // Loop all CF files and replace parameters 57 | for _, inputFile := range inputFiles { 58 | inputFile.Data = c.Transform(inputFile.Data) 59 | inputFile, err := json.Marshal(inputFile.Data) 60 | if err != nil { 61 | log.Fatal(err) 62 | } 63 | 64 | inputFileString := c.ReplaceParameters(string(inputFile)) 65 | 66 | parseFile := make(map[string]interface{}) 67 | json.Unmarshal([]byte(inputFileString), &parseFile) 68 | 69 | finalInput = utils.Merge(finalInput, parseFile) 70 | } 71 | 72 | return utils.InputFile{ 73 | Type: "JSON", 74 | Data: finalInput, 75 | } 76 | } 77 | 78 | // ReplaceParameters Loop all parameters argument and Replace {"Ref":"parameterName"} 79 | func (c CloudFormation) ReplaceParameters(str string) string { 80 | // Replace all blanK spaces from the input file 81 | str = strings.Replace(str, " ", "", -1) 82 | 83 | for key, value := range c.parameters { 84 | str = strings.Replace(str, key, value, -1) 85 | } 86 | 87 | return str 88 | } 89 | 90 | // Transform provider specific template to opascan native format. 91 | func (c CloudFormation) Transform(template map[string]interface{}) map[string]interface{} { 92 | resources := template["Resources"] 93 | 94 | myMap := resources.(map[string]interface{}) 95 | for key, resource := range myMap { 96 | r := resource.(map[string]interface{}) 97 | 98 | r["address"] = r["Type"].(string) + "." + key 99 | r["type"] = r["Type"].(string) 100 | r["name"] = key 101 | 102 | tags := map[string]string{} 103 | if _, ok := r["Tags"]; ok == true { 104 | for _, v := range r["Tags"].([]interface{}) { 105 | foo := v.(map[string]interface{}) 106 | tags[foo["Key"].(string)] = foo["Value"].(string) 107 | } 108 | } 109 | r["tags"] = tags 110 | 111 | delete(r, "Type") 112 | delete(r, "Tags") 113 | myMap[key] = r 114 | } 115 | 116 | template["Resources"] = myMap 117 | 118 | return template 119 | } 120 | -------------------------------------------------------------------------------- /devsecops_quickstart/opa_scan/pkg/runners/runner.go: -------------------------------------------------------------------------------- 1 | package runners 2 | 3 | import "devsecops-quickstart/opa-scan/pkg/utils" 4 | 5 | // RunnerInterface contract definition 6 | type RunnerInterface interface { 7 | GetLanguage() string 8 | SetParameters(map[string]string) error 9 | GenerateInput(inputFiles []utils.InputFile) utils.InputFile 10 | GetRegoLibraries() []string 11 | } 12 | -------------------------------------------------------------------------------- /devsecops_quickstart/opa_scan/pkg/runners/terraform.go: -------------------------------------------------------------------------------- 1 | package runners 2 | 3 | import "devsecops-quickstart/opa-scan/pkg/utils" 4 | 5 | // Terraform Runner 6 | type Terraform struct { 7 | RunnerInterface 8 | language string 9 | libraries []string 10 | parameters map[string]string 11 | } 12 | 13 | // NewTerraform creates a new Terraform runner instance 14 | func NewTerraform(parameters map[string]string) *Terraform { 15 | t := &Terraform{ 16 | language: "terraform", 17 | libraries: []string{"terraform_utils", "utils"}, 18 | } 19 | 20 | t.SetParameters(parameters) 21 | 22 | return t 23 | } 24 | 25 | // GetLanguage returns runner language 26 | func (t Terraform) GetLanguage() string { 27 | return t.language 28 | } 29 | 30 | // GetRegoLibraries returns Rego libraries used by the runner 31 | func (t Terraform) 
GetRegoLibraries() []string { 32 | return t.libraries 33 | } 34 | 35 | // SetParameters sets runner parameters 36 | func (t *Terraform) SetParameters(p map[string]string) (err error) { 37 | t.parameters = p 38 | 39 | return nil 40 | } 41 | 42 | // GenerateInput returns terraform plan file content 43 | func (t Terraform) GenerateInput(inputFiles []utils.InputFile) utils.InputFile { 44 | for _, v := range inputFiles { 45 | return v 46 | } 47 | 48 | return utils.InputFile{} 49 | } 50 | -------------------------------------------------------------------------------- /devsecops_quickstart/opa_scan/pkg/utils/json.go: -------------------------------------------------------------------------------- 1 | package utils 2 | 3 | import ( 4 | "encoding/json" 5 | "fmt" 6 | "reflect" 7 | ) 8 | 9 | const ( 10 | // JSONDoc type for json files 11 | JSONDoc = "json" 12 | ) 13 | 14 | // AreEqualJSON validate if two json strings are equal 15 | func AreEqualJSON(s1, s2 string) (bool, error) { 16 | var o1 interface{} 17 | var o2 interface{} 18 | 19 | errmsg := "error json unmarshalling string: %s. error: %v" 20 | 21 | var err error 22 | err = json.Unmarshal([]byte(s1), &o1) 23 | if err != nil { 24 | return false, fmt.Errorf(errmsg, s1, err.Error()) 25 | } 26 | err = json.Unmarshal([]byte(s2), &o2) 27 | if err != nil { 28 | return false, fmt.Errorf(errmsg, s2, err.Error()) 29 | } 30 | 31 | return reflect.DeepEqual(o1, o2), nil 32 | } 33 | -------------------------------------------------------------------------------- /devsecops_quickstart/opa_scan/pkg/utils/maps.go: -------------------------------------------------------------------------------- 1 | package utils 2 | 3 | func GetMapValueByIndex(m map[string]interface{}, i int) interface{} { 4 | count := 0 5 | for _, v := range m { 6 | if count == i { 7 | return v 8 | } 9 | count = count + 1 10 | } 11 | 12 | return "" 13 | } 14 | -------------------------------------------------------------------------------- /devsecops_quickstart/opa_scan/pkg/utils/types.go: -------------------------------------------------------------------------------- 1 | package utils 2 | 3 | // IacDocument contains raw IaC file data and other metadata for a given file 4 | type File struct { 5 | Type string 6 | FilePath string 7 | Data string 8 | } 9 | 10 | type InputFile struct { 11 | Type string 12 | FilePath string 13 | Data map[string]interface{} 14 | } 15 | -------------------------------------------------------------------------------- /devsecops_quickstart/opa_scan/pkg/utils/utils.go: -------------------------------------------------------------------------------- 1 | package utils 2 | 3 | import ( 4 | "reflect" 5 | ) 6 | 7 | var ( 8 | MaxDepth = 32 9 | ) 10 | 11 | // Merge recursively merges the src and dst maps. Key conflicts are resolved by 12 | // preferring src, or recursively descending, if both src and dst are maps. 
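// Illustrative example: Merge({"Resources": {"BucketA": {…}}}, {"Resources": {"BucketB": {…}}})
// returns {"Resources": {"BucketA": {…}, "BucketB": {…}}}, which is how the
// CloudFormation runner folds the per-file templates into a single input document.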
13 | func Merge(dst, src map[string]interface{}) map[string]interface{} {
14 | 	return merge(dst, src, 0)
15 | }
16 | 
17 | func merge(dst, src map[string]interface{}, depth int) map[string]interface{} {
18 | 	if depth > MaxDepth {
19 | 		panic("too deep!")
20 | 	}
21 | 	for key, srcVal := range src {
22 | 		if dstVal, ok := dst[key]; ok {
23 | 			srcMap, srcMapOk := mapify(srcVal)
24 | 			dstMap, dstMapOk := mapify(dstVal)
25 | 			if srcMapOk && dstMapOk {
26 | 				srcVal = merge(dstMap, srcMap, depth+1)
27 | 			}
28 | 		}
29 | 		dst[key] = srcVal
30 | 	}
31 | 	return dst
32 | }
33 | 
34 | func mapify(i interface{}) (map[string]interface{}, bool) {
35 | 	value := reflect.ValueOf(i)
36 | 	if value.Kind() == reflect.Map {
37 | 		m := map[string]interface{}{}
38 | 		for _, k := range value.MapKeys() {
39 | 			m[k.String()] = value.MapIndex(k).Interface()
40 | 		}
41 | 		return m, true
42 | 	}
43 | 	return map[string]interface{}{}, false
44 | }
45 | 
--------------------------------------------------------------------------------
/devsecops_quickstart/opa_scan/rules/cloudformation/s3_encryption.rego:
--------------------------------------------------------------------------------
1 | package rules.s3_encryption
2 | 
3 | import data.cf_utils
4 | import data.utils
5 | 
6 | buckets[id] = resource {
7 |     resources = cf_utils.resources_by_type["AWS::S3::Bucket"]
8 |     resource = resources[id]
9 | }
10 | is_encrypted(resource) {
11 |     resource.Properties.BucketEncryption[_] = _
12 | }
13 | has_not_encryption(resource, key) = msg {
14 |     msg = sprintf("Bucket %s is not encrypted", [key])
15 |     not is_encrypted(resource)
16 | }
17 | 
18 | rule[r] {
19 |     resource = buckets[id]
20 |     msg = has_not_encryption(resource, id)
21 |     r = utils.deny_resource_with_message(resource, msg)
22 | } {
23 |     resource = buckets[id]
24 |     is_encrypted(resource)
25 |     r = utils.allow_resource(resource)
26 | }
27 | 
--------------------------------------------------------------------------------
/devsecops_quickstart/opa_scan/rules/cloudformation/tag_all_resources.rego:
--------------------------------------------------------------------------------
1 | # The following rule checks that "taggable" resource types
2 | # carry at least one tag.
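# Illustrative example: an AWS::S3::Bucket whose Properties include
# Tags: [{"Key": "resource-owner", "Value": "pipeline"}] is allowed, while the
# same bucket without a Tags property is denied with the message "No Tags Found".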
3 | package rules.tag_all_resources 4 | 5 | import data.cf_utils 6 | import data.utils 7 | 8 | taggable_resource_types = { 9 | "AWS::EC2::Instance", 10 | "AWS::AutoScaling::AutoScalingGroup", 11 | "AWS::S3::Bucket", 12 | "AWS::EC2::SecurityGroup", 13 | "AWS::EC2::Subnet", 14 | "AWS::EC2::VPC", 15 | "AWS::EC2::DHCPOptions", 16 | "AWS::EC2::VPNConnection", 17 | "AWS::EC2::VPNGateway", 18 | } 19 | 20 | 21 | taggable_resources[id] = resource { 22 | some resource_type 23 | taggable_resource_types[resource_type] 24 | resources = cf_utils.resources_by_type[resource_type] 25 | resource = resources[id] 26 | } 27 | 28 | is_tagged(resource) { 29 | resource.Properties.Tags[_] = _ 30 | } 31 | 32 | is_improperly_tagged(resource) = msg { 33 | not is_tagged(resource) 34 | msg = "No Tags Found" 35 | } 36 | 37 | rule[r] { 38 | resource = taggable_resources[key] 39 | msg = is_improperly_tagged(resource) 40 | r = utils.deny_resource_with_message(resource, msg) 41 | } { 42 | resource = taggable_resources[key] 43 | not is_improperly_tagged(resource) 44 | r = utils.allow_resource(resource) 45 | } 46 | -------------------------------------------------------------------------------- /devsecops_quickstart/pipeline.py: -------------------------------------------------------------------------------- 1 | import aws_cdk.aws_codebuild as codebuild 2 | import aws_cdk.core as cdk 3 | import aws_cdk.aws_codepipeline as codepipeline 4 | import aws_cdk.aws_codepipeline_actions as codepipeline_actions 5 | import aws_cdk.pipelines as pipelines 6 | import aws_cdk.aws_codecommit as codecommit 7 | import aws_cdk.aws_iam as iam 8 | 9 | import aws_cdk.aws_lambda as lambda_ 10 | 11 | import logging 12 | 13 | from devsecops_quickstart.cloud9 import Cloud9Stack 14 | from devsecops_quickstart.opa_scan.opascan import OPAScanStack 15 | from devsecops_quickstart.cfn_nag.cfn_nag import CfnNagStack 16 | from devsecops_quickstart.sample_app.sample_app import SampleAppStage 17 | 18 | logger = logging.getLogger() 19 | logger.setLevel(logging.INFO) 20 | 21 | 22 | class ToolingStage(cdk.Stage): 23 | def __init__(self, scope: cdk.Construct, general_config: dict, **kwargs): 24 | super().__init__(scope, id="tooling", **kwargs) 25 | 26 | Cloud9Stack(self, general_config=general_config, **kwargs) 27 | 28 | OPAScanStack(self, general_config=general_config) 29 | 30 | CfnNagStack(self, general_config=general_config) 31 | 32 | 33 | class CICDPipelineStack(cdk.Stack): 34 | def __init__( 35 | self, 36 | scope: cdk.Construct, 37 | id: str, 38 | general_config: dict, 39 | stages_config: dict, 40 | **kwargs, 41 | ) -> None: 42 | super().__init__(scope, id, stack_name=id, **kwargs) 43 | 44 | cfn_nag_role = iam.Role( 45 | self, "cfn-nag-role", role_name="cfn-nag-role", assumed_by=iam.ServicePrincipal("lambda.amazonaws.com") 46 | ) 47 | 48 | opa_scan_role = iam.Role( 49 | self, 50 | "opa-scan-role", 51 | role_name="opa-scan-role", 52 | assumed_by=iam.ServicePrincipal("lambda.amazonaws.com"), 53 | ) 54 | 55 | # repository = codecommit.Repository(self, "Repository", repository_name=general_config["repository_name"]) 56 | 57 | repository = codecommit.Repository.from_repository_name( 58 | self, 59 | id="Repository", 60 | repository_name=general_config["repository_name"], 61 | ) 62 | 63 | cdk.CfnOutput(self, "repository-url", value=repository.repository_clone_url_http) 64 | 65 | # Defines the artifact representing the sourcecode 66 | source_artifact = codepipeline.Artifact() 67 | 68 | # Defines the artifact representing the cloud assembly (cloudformation template + 
all other assets) 69 | cloud_assembly_artifact = codepipeline.Artifact() 70 | 71 | source_action = codepipeline_actions.CodeCommitSourceAction( 72 | repository=repository, 73 | branch=general_config["repository_branch"], 74 | output=source_artifact, 75 | action_name="Source", 76 | ) 77 | 78 | synth_action = pipelines.SimpleSynthAction( 79 | cloud_assembly_artifact=cloud_assembly_artifact, 80 | source_artifact=source_artifact, 81 | install_commands=[ 82 | "nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 " 83 | "--storage-driver=overlay2 &", 84 | 'timeout 15 sh -c "until docker info; do echo .; sleep 1; done"', 85 | ] 86 | + [ 87 | "npm install aws-cdk -g", 88 | "pip install -r requirements.txt", 89 | ] 90 | + [ 91 | "cd $HOME/.goenv && git pull --ff-only && cd -", 92 | "goenv install 1.16.3", 93 | "goenv local 1.16.3", 94 | "go version", 95 | ] 96 | + [ 97 | "git init", 98 | "cd devsecops_quickstart/cfn_nag", 99 | "git submodule add https://github.com/stelligent/cfn-nag-pipeline.git", 100 | "ls cfn-nag-pipeline", 101 | "cd -", 102 | ], 103 | synth_command="npx cdk synth", 104 | test_commands=[ 105 | "python -m flake8 .", 106 | "python -m black --check .", 107 | ], 108 | environment=codebuild.BuildEnvironment(privileged=True), 109 | role_policy_statements=[ 110 | iam.PolicyStatement(effect=iam.Effect.ALLOW, actions=["sts:assumeRole"], resources=["*"]) 111 | ], 112 | ) 113 | 114 | pipeline = pipelines.CdkPipeline( 115 | self, 116 | f"{id}-pipeline", 117 | cloud_assembly_artifact=cloud_assembly_artifact, 118 | pipeline_name=id, 119 | source_action=source_action, 120 | synth_action=synth_action, 121 | ) 122 | 123 | pipeline_artifact_bucket = pipeline.code_pipeline.artifact_bucket 124 | pipeline_artifact_bucket.grant_read(cfn_nag_role) 125 | pipeline_artifact_bucket.grant_read(opa_scan_role) 126 | cdk.CfnOutput(self, "pipeline-artifact-bucket", value=pipeline_artifact_bucket.bucket_name) 127 | 128 | cdk.Tags.of(pipeline_artifact_bucket).add("resource-owner", "pipeline") 129 | 130 | # FIXME 131 | pipeline_artifact_bucket.encryption_key.node.default_child.cfn_options.metadata = { 132 | "cfn_nag": { 133 | "rules_to_suppress": [ 134 | {"id": "F19", "reason": "CDK Generated Resource for Pipeline Artifacts"}, 135 | ] 136 | } 137 | } 138 | 139 | # git_submodules is currently not supported for CODEPIPELINE type 140 | # synth_action.project.node.default_child.source = codebuild.CfnProject.SourceProperty( 141 | # type="CODEPIPELINE", 142 | # git_submodules_config=codebuild.CfnProject.GitSubmodulesConfigProperty(fetch_submodules=True), 143 | # ) 144 | 145 | bandit_project = codebuild.PipelineProject( 146 | self, 147 | "Bandit", 148 | build_spec=codebuild.BuildSpec.from_object( 149 | { 150 | "version": "0.2", 151 | "phases": { 152 | "install": {"commands": ["pip install bandit"]}, 153 | "build": {"commands": ["python -m bandit -v -r devsecops_quickstart"]}, 154 | }, 155 | } 156 | ), 157 | ) 158 | snyk_project = codebuild.PipelineProject( 159 | self, 160 | "Snyk", 161 | role=iam.Role( 162 | self, 163 | "snyk-build-role", 164 | assumed_by=iam.ServicePrincipal("codebuild.amazonaws.com"), 165 | inline_policies={ 166 | "GetSecretValue": iam.PolicyDocument( 167 | statements=[ 168 | iam.PolicyStatement( 169 | effect=iam.Effect.ALLOW, 170 | actions=[ 171 | "secretsmanager:GetSecretValue", 172 | ], 173 | resources=["*"], 174 | ) 175 | ] 176 | ) 177 | }, 178 | ), 179 | build_spec=codebuild.BuildSpec.from_object( 180 | { 181 | "version": "0.2", 182 | "phases": { 183 | 
"install": { 184 | "commands": [ 185 | "n lts", 186 | "npm install -g snyk", 187 | "pip install awscli --upgrade", 188 | "pip install -r requirements.txt", 189 | ] 190 | }, 191 | "build": { 192 | "commands": [ 193 | ( 194 | "SNYK_TOKEN=$(aws secretsmanager get-secret-value " 195 | "--query SecretString --output text " 196 | f"--secret-id {general_config['secret_name']['snyk']} " 197 | f"--region {general_config['toolchain_region']})" 198 | ), 199 | "snyk test", 200 | "snyk monitor", 201 | ] 202 | }, 203 | }, 204 | } 205 | ), 206 | ) 207 | 208 | pipeline.add_application_stage( 209 | app_stage=ToolingStage(self, general_config=general_config), 210 | ) 211 | 212 | validate_stage = pipeline.add_stage("validate") 213 | validate_stage.add_actions( 214 | codepipeline_actions.CodeBuildAction( 215 | action_name="bandit", 216 | project=bandit_project, 217 | input=source_artifact, 218 | type=codepipeline_actions.CodeBuildActionType.TEST, 219 | ), 220 | codepipeline_actions.CodeBuildAction( 221 | action_name="snyk", 222 | project=snyk_project, 223 | input=source_artifact, 224 | type=codepipeline_actions.CodeBuildActionType.TEST, 225 | ), 226 | codepipeline_actions.LambdaInvokeAction( 227 | action_name="opa-scan", 228 | inputs=[cloud_assembly_artifact], 229 | lambda_=lambda_.Function.from_function_arn( 230 | self, "opa-scan-lambda", f"arn:aws:lambda:{self.region}:{self.account}:function:opa-scan" 231 | ), 232 | user_parameters={"Rules": [f"s3://opa-scan-rules-{self.account}/cloudformation"]}, 233 | ), 234 | codepipeline_actions.LambdaInvokeAction( 235 | action_name="cfn-nag", 236 | inputs=[cloud_assembly_artifact], 237 | lambda_=lambda_.Function.from_function_arn( 238 | self, "cfn-nag-lambda", f"arn:aws:lambda:{self.region}:{self.account}:function:cfn-nag" 239 | ), 240 | user_parameters_string="**/*.template.json", 241 | ), 242 | ) 243 | 244 | for stage_config_item in stages_config.items(): 245 | stage = stage_config_item[0] 246 | stage_config = stage_config_item[1] 247 | 248 | pipeline.add_application_stage( 249 | manual_approvals=stage_config["manual_approvals"], 250 | app_stage=SampleAppStage( 251 | self, 252 | stage=stage, 253 | general_config=general_config, 254 | stage_config=stage_config, 255 | env=cdk.Environment( 256 | account=stage_config["stage_account"], 257 | region=stage_config["stage_region"], 258 | ), 259 | ), 260 | ) 261 | -------------------------------------------------------------------------------- /devsecops_quickstart/sample_app/sample_app.py: -------------------------------------------------------------------------------- 1 | from aws_cdk import core as cdk 2 | 3 | # import aws_cdk.aws_ssm as ssm 4 | # import aws_cdk.aws_s3 as s3 5 | 6 | 7 | class SampleApp(cdk.Stack): 8 | def __init__(self, scope: cdk.Construct, stage: str, general_config: dict, stage_config: dict, **kwargs): 9 | 10 | super().__init__(scope, id="SampleApp", **kwargs) 11 | 12 | # The code that defines your stack goes here 13 | 14 | # opa_scan_lambda_arn = ssm.StringParameter.from_string_parameter_name( 15 | # self, "lambda-arn-ssm-param", "doesntexist" 16 | # ) 17 | # 18 | # rules_bucket = s3.Bucket( 19 | # self, 20 | # id="sample-bucket", 21 | # bucket_name=f"sample-bucket-{self.account}", 22 | # removal_policy=cdk.RemovalPolicy.DESTROY, 23 | # block_public_access=s3.BlockPublicAccess.BLOCK_ALL, 24 | # ) 25 | 26 | 27 | class SampleAppStage(cdk.Stage): 28 | def __init__(self, scope: cdk.Construct, stage: str, general_config: dict, stage_config: dict, **kwargs): 29 | super().__init__(scope, id=stage, **kwargs) 30 
| 31 | SampleApp(self, stage, general_config, stage_config, **kwargs) 32 | -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [tool.black] 2 | line-length = 120 3 | extend-exclude = 'node_modules' -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | aws-cdk.core==1.161.0 2 | aws-cdk.aws-cloud9==1.161.0 3 | aws-cdk.aws-codebuild==1.161.0 4 | aws-cdk.aws-codecommit==1.161.0 5 | aws-cdk.aws-codedeploy==1.161.0 6 | aws-cdk.aws-codepipeline==1.161.0 7 | aws-cdk.aws-codepipeline-actions==1.161.0 8 | aws-cdk.aws-lambda==1.161.0 9 | aws-cdk.aws-lambda-go==1.161.0 10 | aws-cdk.aws-s3==1.161.0 11 | aws-cdk.aws-s3-assets==1.161.0 12 | aws-cdk.aws-s3-deployment==1.161.0 13 | aws-cdk.pipelines==1.161.0 14 | aws-cdk.aws_ssm==1.161.0 15 | black==22.3.0 16 | boto3==1.18.34 17 | botocore==1.21.34 18 | flake8==3.9.2 19 | urllib3==1.26.5 20 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | import setuptools 2 | 3 | 4 | with open("README.md") as fp: 5 | long_description = fp.read() 6 | 7 | 8 | setuptools.setup( 9 | name="devsecops_quickstart", 10 | version="0.0.1", 11 | description="An empty CDK Python app", 12 | long_description=long_description, 13 | long_description_content_type="text/markdown", 14 | author="author", 15 | package_dir={"": "devsecops_quickstart"}, 16 | packages=setuptools.find_packages(where="devsecops_quickstart"), 17 | install_requires=[], 18 | python_requires=">=3.6", 19 | classifiers=[ 20 | "Development Status :: 4 - Beta", 21 | "Intended Audience :: Developers", 22 | "Programming Language :: JavaScript", 23 | "Programming Language :: Python :: 3 :: Only", 24 | "Programming Language :: Python :: 3.6", 25 | "Programming Language :: Python :: 3.7", 26 | "Programming Language :: Python :: 3.8", 27 | "Topic :: Software Development :: Code Generators", 28 | "Topic :: Utilities", 29 | "Typing :: Typed", 30 | ], 31 | ) 32 | -------------------------------------------------------------------------------- /source.bat: -------------------------------------------------------------------------------- 1 | @echo off 2 | 3 | rem The sole purpose of this script is to make the command 4 | rem 5 | rem source .venv/bin/activate 6 | rem 7 | rem (which activates a Python virtualenv on Linux or Mac OS X) work on Windows. 8 | rem On Windows, this command just runs this batch file (the argument is ignored). 9 | rem 10 | rem Now we don't need to document a Windows command for activating a virtualenv. 11 | 12 | echo Executing .venv\Scripts\activate.bat for you 13 | .venv\Scripts\activate.bat 14 | --------------------------------------------------------------------------------