├── tests ├── __init__.py ├── processor │ ├── __init__.py │ ├── comparison │ │ ├── __init__.py │ │ ├── comparisonantlr │ │ │ └── __init__.py │ │ └── test_comparison_functions.py │ ├── connector │ │ ├── __init__.py │ │ └── test_arn_parser.py │ ├── database │ │ └── __init__.py │ ├── helper │ │ ├── __init__.py │ │ ├── config │ │ │ ├── __init__.py │ │ │ └── test_rundata_utils.py │ │ ├── file │ │ │ ├── __init__.py │ │ │ └── test_file_utils.py │ │ ├── httpapi │ │ │ └── __init__.py │ │ ├── json │ │ │ └── __init__.py │ │ └── yaml │ │ │ └── __init__.py │ ├── logging │ │ ├── __init__.py │ │ └── test_log_handler.py │ ├── reporting │ │ ├── __init__.py │ │ └── test_json_output.py │ ├── templates │ │ ├── __init__.py │ │ └── aws │ │ │ ├── __init__.py │ │ │ ├── sample │ │ │ ├── ValidJsonInvalidTemplate.txt │ │ │ ├── InvalidTemplate.txt │ │ │ ├── SingleENIwithMultipleEIPs.json │ │ │ ├── SQS_With_CloudWatch_Alarms.txt │ │ │ └── SQS_With_CloudWatch_Alarms.template │ │ │ └── aws_parser.py │ └── template_processor │ │ ├── __init__.py │ │ ├── aws │ │ ├── __init__.py │ │ └── sample │ │ │ ├── parameters.json │ │ │ └── EC2InstanceWithSecurityGroupSample.yaml │ │ ├── azure │ │ ├── __init__.py │ │ └── sample │ │ │ └── keyvault.json │ │ ├── base │ │ └── __init__.py │ │ ├── google │ │ ├── __init__.py │ │ └── sample │ │ │ ├── cloudbuild.yaml │ │ │ └── cloudbuild.jinja │ │ └── terraform │ │ ├── __init__.py │ │ └── samples │ │ ├── sample_3 │ │ ├── modules │ │ │ ├── ebs_volume │ │ │ │ ├── output.tf │ │ │ │ ├── main.tf │ │ │ │ └── vars.tf │ │ │ └── ec2 │ │ │ │ └── output.tf │ │ └── ec2 │ │ │ ├── vars.tf │ │ │ ├── terraform.tfvars │ │ │ └── main.tf │ │ ├── sample_4 │ │ ├── modules │ │ │ ├── security_group │ │ │ │ ├── output.tf │ │ │ │ ├── main.tf │ │ │ │ └── vars.tf │ │ │ ├── iam_role │ │ │ │ ├── output.tf │ │ │ │ ├── main.tf │ │ │ │ └── vars.tf │ │ │ ├── vpc │ │ │ │ ├── main.tf │ │ │ │ ├── output.tf │ │ │ │ └── vars.tf │ │ │ └── subnet │ │ │ │ ├── main.tf │ │ │ │ ├── output.tf │ │ │ │ └── vars.tf │ │ └── lambda │ │ │ ├── vars.tf │ │ │ └── terraform.tfvars │ │ ├── sample_2 │ │ ├── vars.tf │ │ ├── terraform.tfvars │ │ └── main.tf │ │ └── sample_1 │ │ └── main.tf └── jsons │ ├── git_snapshot.json │ └── sample_snapshots.json ├── docs ├── .gitignore ├── docs │ ├── images │ │ ├── logo.png │ │ ├── jenkins-job.png │ │ ├── codebuild-ec2.png │ │ ├── codebuild-name.png │ │ ├── crawler-basic.jpg │ │ ├── token │ │ │ ├── token1.png │ │ │ ├── token2.png │ │ │ └── validate_access_token.png │ │ ├── travis-add-app.png │ │ ├── codebuild-navbar.png │ │ ├── codebuild-source.png │ │ ├── jenkins-pipeline.png │ │ ├── api │ │ │ ├── collection_list.png │ │ │ └── run_compliance.png │ │ ├── codebuild-buildspec.png │ │ ├── codebuild-sg-rules.png │ │ ├── high-level-process.png │ │ ├── jenkins-dashboard.png │ │ ├── travis-repositories.png │ │ ├── travis-search-app.png │ │ ├── codebuild-environment.png │ │ ├── jenkins-build-history.png │ │ ├── jenkins-build-sidebar.png │ │ ├── travis-build-options.png │ │ ├── travis-configure-project.png │ │ └── jenkins-build-history-fails.png │ ├── access.md │ ├── tests │ │ ├── tests-definition.md │ │ ├── master-test.md │ │ ├── outputs.md │ │ └── syntax.md │ ├── connectors │ │ ├── connector-definition.md │ │ ├── teams.md │ │ ├── kubernetes.md │ │ ├── slack.md │ │ ├── jira.md │ │ └── azboard.md │ ├── api │ │ ├── webhook.md │ │ ├── remediation.md │ │ └── api_overview.md │ ├── notification │ │ └── notification.md │ ├── snapshots │ │ ├── helm.md │ │ └── snapshot-definition.md │ ├── limitations │ │ └── 
aws-cloudformation-template-limitations.md │ ├── exclusions │ │ └── exclusion.md │ ├── extra.css │ ├── configuration │ │ └── basics.md │ ├── workflow.md │ └── index.md └── theme │ └── main.html ├── src └── processor │ ├── crawler │ └── __init__.py │ ├── database │ ├── readme.md │ └── __init__.py │ ├── helper │ ├── __init__.py │ ├── file │ │ ├── __init__.py │ │ └── file_utils.py │ ├── hcl │ │ ├── __init__.py │ │ ├── transformer.py │ │ ├── parser.py │ │ └── hcl_utils.py │ ├── jinja │ │ └── __init__.py │ ├── json │ │ └── __init__.py │ ├── utils │ │ ├── __init__.py │ │ ├── jinjatemplates │ │ │ ├── __init__.py │ │ │ ├── fs_connector.json │ │ │ ├── git_connector.json │ │ │ ├── mastertest.json │ │ │ └── mastersnapshot.json │ │ ├── cli_generate_azure_vault_key.py │ │ └── cli_terraform_to_json.py │ ├── xml │ │ ├── __init__.py │ │ └── xml_utils.py │ ├── yaml │ │ ├── __init__.py │ │ └── yaml_utils.py │ ├── config │ │ ├── __init__.py │ │ └── config.ini │ └── httpapi │ │ └── __init__.py │ ├── logging │ ├── __init__.py │ └── readme.md │ ├── reporting │ ├── readme.md │ └── __init__.py │ ├── comparison │ ├── __init__.py │ ├── readme.md │ ├── rules │ │ ├── __init__.py │ │ ├── arm │ │ │ └── __init__.py │ │ ├── common │ │ │ ├── __init__.py │ │ │ └── sensitive_extension.py │ │ ├── terraform │ │ │ └── __init__.py │ │ ├── cloudformation │ │ │ └── __init__.py │ │ └── deploymentmanager │ │ │ └── __init__.py │ ├── comparisonantlr │ │ ├── __init__.py │ │ ├── comparatorListener.py │ │ ├── test_comparator.py │ │ ├── compare_types.py │ │ └── input.txt │ └── comparison_functions.py │ ├── connector │ ├── __init__.py │ ├── git_connector │ │ └── __init__.py │ ├── special_crawler │ │ ├── __init__.py │ │ └── base_crawler.py │ ├── special_node_pull │ │ ├── __init__.py │ │ └── base_node_pull.py │ ├── special_compliance │ │ ├── __init__.py │ │ └── compliances.py │ ├── snapshot_exception.py │ ├── arn_parser.py │ └── snapshot_utils.py │ ├── templates │ ├── __init__.py │ ├── aws │ │ └── __init__.py │ ├── azure │ │ └── __init__.py │ ├── base │ │ └── __init__.py │ ├── google │ │ ├── __init__.py │ │ └── util.py │ ├── helm │ │ ├── __init__.py │ │ └── helm_parser.py │ ├── kubernetes │ │ ├── __init__.py │ │ └── kubernetes_parser.py │ └── terraform │ │ ├── __init__.py │ │ └── helper │ │ ├── __init__.py │ │ ├── expression │ │ ├── __init__.py │ │ ├── terraform_expressions.py │ │ └── base_expressions.py │ │ └── function │ │ ├── __init__.py │ │ ├── encoding_function.py │ │ ├── numeric_functions.py │ │ └── string_functions.py │ ├── collection_config │ └── __init__.py │ ├── template_processor │ ├── __init__.py │ ├── base │ │ ├── __init__.py │ │ └── base_template_constatns.py │ ├── ack_processor.py │ ├── aso_processor.py │ ├── kcc_processor.py │ ├── json_template_processor.py │ ├── yaml_template_processor.py │ ├── google_template_processor.py │ ├── helm_chart_template_processor.py │ └── kubernetes_template_processor.py │ └── __init__.py ├── log └── README ├── MANIFEST.in ├── utilities ├── json2md │ ├── requirements.txt │ ├── Makefile │ ├── templateKCC.md │ ├── templateAzureQuickstart.md │ └── json2md.py ├── validator.py ├── populate_json.py ├── terraform_to_json.py ├── mongo_install.txt └── curl_cmds.txt ├── .vscode ├── settings.json └── launch.json ├── realm ├── validation │ └── gitScenario │ │ ├── resource-pass.json │ │ ├── test.json │ │ └── snapshot.json ├── fsConnector.json ├── gitConnector.json ├── privHTTPSConnector.json ├── privSSHConnector.json ├── awsConnector.json ├── azureConnector.json └── googleStructure.json ├── dockerfiles ├── 
Dockerfile └── Dockerfile_remote ├── .github └── workflows │ ├── snyk_code_scanner.yml │ ├── snyk_docker_scanner.yml │ ├── snyk_dependencies_scanner.yml │ ├── test_master.yaml │ ├── test_development.yaml │ ├── documentation.yaml │ └── deploy_docker.yaml ├── requirements.txt ├── config.ini ├── setup.py └── .gitignore /tests/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /docs/.gitignore: -------------------------------------------------------------------------------- 1 | build -------------------------------------------------------------------------------- /tests/processor/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/crawler/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/database/readme.md: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/helper/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/logging/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/logging/readme.md: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/reporting/readme.md: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /log/README: -------------------------------------------------------------------------------- 1 | Add all the logs here. 
2 | -------------------------------------------------------------------------------- /src/processor/comparison/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/comparison/readme.md: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/connector/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/database/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/helper/file/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/helper/hcl/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/helper/jinja/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/helper/json/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/helper/utils/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/helper/xml/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/helper/yaml/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/reporting/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/templates/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/processor/comparison/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/processor/connector/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/processor/database/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/processor/helper/__init__.py: -------------------------------------------------------------------------------- 1 | 
-------------------------------------------------------------------------------- /tests/processor/logging/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/processor/reporting/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/processor/templates/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/collection_config/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/comparison/rules/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/helper/config/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/helper/httpapi/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/templates/aws/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/templates/azure/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/templates/base/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/templates/google/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/templates/helm/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/processor/helper/config/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/processor/helper/file/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/processor/helper/httpapi/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/processor/helper/json/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/processor/helper/yaml/__init__.py: -------------------------------------------------------------------------------- 1 | 
-------------------------------------------------------------------------------- /tests/processor/templates/aws/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/comparison/rules/arm/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/template_processor/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/templates/kubernetes/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/templates/terraform/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/processor/template_processor/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/comparison/comparisonantlr/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/comparison/rules/common/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/comparison/rules/terraform/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/connector/git_connector/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/connector/special_crawler/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/connector/special_node_pull/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/helper/utils/jinjatemplates/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/template_processor/base/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/templates/terraform/helper/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/processor/template_processor/aws/__init__.py: -------------------------------------------------------------------------------- 1 | 
-------------------------------------------------------------------------------- /tests/processor/template_processor/azure/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/processor/template_processor/base/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/processor/template_processor/google/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/comparison/rules/cloudformation/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/connector/special_compliance/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/processor/comparison/comparisonantlr/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/processor/template_processor/terraform/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/comparison/rules/deploymentmanager/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/templates/terraform/helper/expression/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /src/processor/templates/terraform/helper/function/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include src/processor/helper/utils/jinjatemplates/*.json 2 | -------------------------------------------------------------------------------- /utilities/json2md/requirements.txt: -------------------------------------------------------------------------------- 1 | pandas 2 | jinja2 3 | tabulate 4 | -------------------------------------------------------------------------------- /.vscode/settings.json: -------------------------------------------------------------------------------- 1 | { 2 | "python.pythonPath": "testenv/bin/python" 3 | } -------------------------------------------------------------------------------- /src/processor/__init__.py: -------------------------------------------------------------------------------- 1 | # Prancer Basic 2 | 3 | __version__ = '3.0.28' 4 | -------------------------------------------------------------------------------- /tests/processor/template_processor/terraform/samples/sample_3/modules/ebs_volume/output.tf: -------------------------------------------------------------------------------- 1 | 
-------------------------------------------------------------------------------- /realm/validation/gitScenario/resource-pass.json: -------------------------------------------------------------------------------- 1 | { 2 | "webserver": { 3 | "port": 80 4 | } 5 | } -------------------------------------------------------------------------------- /docs/docs/images/logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/logo.png -------------------------------------------------------------------------------- /tests/processor/templates/aws/sample/ValidJsonInvalidTemplate.txt: -------------------------------------------------------------------------------- 1 | { 2 | "field1": "value1", 3 | "field2": "value2" 4 | } 5 | -------------------------------------------------------------------------------- /docs/docs/images/jenkins-job.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/jenkins-job.png -------------------------------------------------------------------------------- /docs/docs/images/codebuild-ec2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/codebuild-ec2.png -------------------------------------------------------------------------------- /docs/docs/images/codebuild-name.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/codebuild-name.png -------------------------------------------------------------------------------- /docs/docs/images/crawler-basic.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/crawler-basic.jpg -------------------------------------------------------------------------------- /docs/docs/images/token/token1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/token/token1.png -------------------------------------------------------------------------------- /docs/docs/images/token/token2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/token/token2.png -------------------------------------------------------------------------------- /docs/docs/images/travis-add-app.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/travis-add-app.png -------------------------------------------------------------------------------- /docs/docs/images/codebuild-navbar.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/codebuild-navbar.png -------------------------------------------------------------------------------- /docs/docs/images/codebuild-source.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/codebuild-source.png -------------------------------------------------------------------------------- /docs/docs/images/jenkins-pipeline.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/jenkins-pipeline.png -------------------------------------------------------------------------------- /docs/docs/images/api/collection_list.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/api/collection_list.png -------------------------------------------------------------------------------- /docs/docs/images/api/run_compliance.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/api/run_compliance.png -------------------------------------------------------------------------------- /docs/docs/images/codebuild-buildspec.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/codebuild-buildspec.png -------------------------------------------------------------------------------- /docs/docs/images/codebuild-sg-rules.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/codebuild-sg-rules.png -------------------------------------------------------------------------------- /docs/docs/images/high-level-process.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/high-level-process.png -------------------------------------------------------------------------------- /docs/docs/images/jenkins-dashboard.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/jenkins-dashboard.png -------------------------------------------------------------------------------- /docs/docs/images/travis-repositories.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/travis-repositories.png -------------------------------------------------------------------------------- /docs/docs/images/travis-search-app.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/travis-search-app.png -------------------------------------------------------------------------------- /docs/docs/images/codebuild-environment.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/codebuild-environment.png -------------------------------------------------------------------------------- /docs/docs/images/jenkins-build-history.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/jenkins-build-history.png -------------------------------------------------------------------------------- /docs/docs/images/jenkins-build-sidebar.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/jenkins-build-sidebar.png -------------------------------------------------------------------------------- /docs/docs/images/travis-build-options.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/travis-build-options.png -------------------------------------------------------------------------------- /docs/docs/images/travis-configure-project.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/travis-configure-project.png -------------------------------------------------------------------------------- /tests/processor/template_processor/aws/sample/parameters.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "ParameterKey": "KeyName", 4 | "ParameterValue": "testkey" 5 | } 6 | ] -------------------------------------------------------------------------------- /docs/docs/images/jenkins-build-history-fails.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/jenkins-build-history-fails.png -------------------------------------------------------------------------------- /docs/docs/images/token/validate_access_token.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prancer-io/cloud-validation-framework/HEAD/docs/docs/images/token/validate_access_token.png -------------------------------------------------------------------------------- /realm/fsConnector.json: -------------------------------------------------------------------------------- 1 | { 2 | "fileType": "structure", 3 | "type": "filesystem", 4 | "companyName": "prancer-test", 5 | "folderPath": "" 6 | } -------------------------------------------------------------------------------- /tests/processor/template_processor/terraform/samples/sample_4/modules/security_group/output.tf: -------------------------------------------------------------------------------- 1 | output "id" { 2 | description = "The ID of the security group" 3 | value = aws_security_group.sgroup.id 4 | } 5 | -------------------------------------------------------------------------------- /tests/processor/template_processor/terraform/samples/sample_2/vars.tf: -------------------------------------------------------------------------------- 1 | variable "location" { 2 | default = "eastus2" 3 | } 4 | 5 | variable "account_tier" { 6 | description = "Account tier for the storage account" 7 | } -------------------------------------------------------------------------------- /src/processor/helper/utils/jinjatemplates/fs_connector.json: -------------------------------------------------------------------------------- 1 | { 2 | "fileType": "structure", 3 | "type": "filesystem", 4 | "username": "USER_1", 5 | "companyName": "prancer", 6 | "folderPath": "{{basedir}}" 7 | } 8 | 
-------------------------------------------------------------------------------- /tests/processor/template_processor/google/sample/cloudbuild.yaml: -------------------------------------------------------------------------------- 1 | imports: 2 | - path: cloudbuild.jinja 3 | 4 | resources: 5 | - name: build 6 | type: cloudbuild.jinja 7 | properties: 8 | resourceToList: deployments 9 | -------------------------------------------------------------------------------- /utilities/validator.py: -------------------------------------------------------------------------------- 1 | """ Driver to run the validator functions """ 2 | 3 | 4 | if __name__ == "__main__": 5 | import sys 6 | from processor.helper.utils.cli_validator import validator_main 7 | sys.exit(validator_main()) 8 | -------------------------------------------------------------------------------- /utilities/populate_json.py: -------------------------------------------------------------------------------- 1 | """ Driver file for populating json files to database """ 2 | 3 | 4 | if __name__ == "__main__": 5 | import sys 6 | from processor.helper.utils.cli_populate_json import populate_json_main 7 | sys.exit(populate_json_main()) 8 | -------------------------------------------------------------------------------- /realm/gitConnector.json: -------------------------------------------------------------------------------- 1 | { 2 | "fileType": "structure", 3 | "type":"filesystem", 4 | "companyName": "prancer-test", 5 | "gitProvider": "https://github.com/prancer-io/cloud-validation-framework", 6 | "branchName":"master", 7 | "private": false 8 | } -------------------------------------------------------------------------------- /dockerfiles/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM python:3.10.13-alpine 2 | ARG APP_VERSION 3 | ENV APP_VERSION=$APP_VERSION 4 | RUN apk update \ 5 | && apk upgrade \ 6 | && apk add git build-base libffi-dev openssl-dev 7 | RUN pip install ply 8 | RUN pip install prancer-basic==$APP_VERSION 9 | -------------------------------------------------------------------------------- /utilities/terraform_to_json.py: -------------------------------------------------------------------------------- 1 | """ Driver file to convert terraform to json files """ 2 | 3 | 4 | if __name__ == "__main__": 5 | import sys 6 | from processor.helper.utils.cli_terraform_to_json import terraform_to_json_main 7 | sys.exit(terraform_to_json_main()) 8 | -------------------------------------------------------------------------------- /src/processor/templates/terraform/helper/expression/terraform_expressions.py: -------------------------------------------------------------------------------- 1 | from processor.templates.terraform.helper.expression import base_expressions 2 | 3 | expression_list = [ 4 | { "expression" : "(^.*[?].*[:].*$)", "method" : base_expressions.conditional_expression }, 5 | ] 6 | -------------------------------------------------------------------------------- /tests/processor/logging/test_log_handler.py: -------------------------------------------------------------------------------- 1 | from processor.logging.log_handler import getlogger 2 | 3 | 4 | def test_getlogger(): 5 | logger = getlogger() 6 | assert logger is not None 7 | logger1 = getlogger() 8 | assert logger1 is not None 9 | assert id(logger) == id(logger1) -------------------------------------------------------------------------------- /src/processor/helper/utils/jinjatemplates/git_connector.json: 
--------------------------------------------------------------------------------
1 | {
2 |     "fileType" : "structure",
3 |     "type" : "filesystem",
4 |     "companyName" : "prancer",
5 |     "gitProvider" : "https://github.com/prancer-io/prancer-compliance-test.git",
6 |     "branchName" : "master",
7 |     "private" : false
8 | }
9 | 
--------------------------------------------------------------------------------
/docs/docs/access.md:
--------------------------------------------------------------------------------
1 | # Accessing the Prancer Platform
2 | There are three ways to communicate with the framework:
3 | 
4 | - Command Line Interface (CLI)
5 | - Application Programming Interface (API)
6 | - Web Interface
7 | 
8 | > Prancer Basic supports only the CLI method; Prancer Enterprise and Premium support all of the above methods.
--------------------------------------------------------------------------------
/tests/processor/template_processor/terraform/samples/sample_2/terraform.tfvars:
--------------------------------------------------------------------------------
1 | storage_name = "pranter-storage"
2 | storage_count = 1
3 | storage_rg_name = "storage-rg"
4 | replication_type = "LRS"
5 | enableSecureTransfer = false
6 | allow_blob_public_access = true
7 | tags = {}
8 | 
--------------------------------------------------------------------------------
/src/processor/helper/utils/jinjatemplates/mastertest.json:
--------------------------------------------------------------------------------
1 | {
2 |     "masterSnapshot" : "mastersnapshot",
3 |     "fileType" : "mastertest",
4 |     "testSet" : [ ],
5 |     "notification" : [ ],
6 |     "connector" : "{{connector}}",
7 |     "remoteFile" : "{{iacDir}}/iac/master-compliance-test.json",
8 |     "connectorUsers" : [ ]
9 | }
10 | 
--------------------------------------------------------------------------------
/src/processor/connector/snapshot_exception.py:
--------------------------------------------------------------------------------
1 | """
2 | Common snapshot exception.
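3 | 
4 | Example usage (illustrative):
5 | 
6 |     try:
7 |         raise SnapshotsException("No snapshots found for container")
8 |     except SnapshotsException as err:
9 |         print(err.message)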
3 | """ 4 | class SnapshotsException(Exception): 5 | """Exception raised for snapshots""" 6 | 7 | def __init__(self, message="Error in snapshots for container"): 8 | self.message = message 9 | super().__init__(self.message) 10 | -------------------------------------------------------------------------------- /tests/processor/template_processor/terraform/samples/sample_3/modules/ebs_volume/main.tf: -------------------------------------------------------------------------------- 1 | resource "aws_ebs_volume" "volume" { 2 | availability_zone = var.availability_zone 3 | encrypted = var.encrypted 4 | size = var.size 5 | tags = var.tags 6 | 7 | lifecycle { 8 | prevent_destroy = false 9 | } 10 | } 11 | -------------------------------------------------------------------------------- /realm/privHTTPSConnector.json: -------------------------------------------------------------------------------- 1 | { 2 | "fileType": "structure", 3 | "type":"filesystem", 4 | "companyName": "prancer-test", 5 | "gitProvider": "https://github.com/prancer-io/prancer-compliance-test.git", 6 | "branchName":"master", 7 | "httpsUser": "", 8 | "httpsPassword": "", 9 | "private": true 10 | } -------------------------------------------------------------------------------- /realm/privSSHConnector.json: -------------------------------------------------------------------------------- 1 | { 2 | "fileType": "structure", 3 | "type":"filesystem", 4 | "companyName": "prancer-test", 5 | "gitProvider": "git@github.com:prancer-io/prancer-compliance-test.git", 6 | "branchName":"master", 7 | "sshKeyfile": "", 8 | "sshUser": "git", 9 | "sshHost": "github.com", 10 | "private": true 11 | } -------------------------------------------------------------------------------- /src/processor/helper/hcl/transformer.py: -------------------------------------------------------------------------------- 1 | 2 | from typing import List 3 | from hcl2.transformer import DictTransformer 4 | 5 | class HClDictTransformer(DictTransformer): 6 | 7 | def full_splat(self, args: List) -> str: 8 | return ".".join(args) 9 | 10 | def full_splat_expr_term(self, args: List) -> str: 11 | return "%s[*].%s" % (args[0], args[1]) 12 | -------------------------------------------------------------------------------- /dockerfiles/Dockerfile_remote: -------------------------------------------------------------------------------- 1 | FROM python:3.9-alpine3.15 2 | ENV APP_VERSION=$version 3 | RUN apk update && apk upgrade && apk add git build-base libffi-dev openssl-dev 4 | COPY opadir/opa /usr/local/bin/opa 5 | RUN chmod +x /usr/local/bin/opa 6 | COPY helmdir/linux-amd64/helm /usr/local/bin/helm 7 | RUN chmod +x /usr/local/bin/helm 8 | RUN pip install ply 9 | RUN pip install prancer-basic==$version -------------------------------------------------------------------------------- /src/processor/connector/special_node_pull/base_node_pull.py: -------------------------------------------------------------------------------- 1 | 2 | class BaseNodePull: 3 | def __init__(self, resource, **kwargs): 4 | self.resource = resource 5 | self.resource_type = "" 6 | 7 | def check_for_node_pull(self, resource_type): 8 | """ 9 | check the resource type need to clone the child nodes or not 10 | """ 11 | return self.resource -------------------------------------------------------------------------------- /utilities/json2md/Makefile: -------------------------------------------------------------------------------- 1 | install: 2 | python3 -m venv "json2md_venv" 3 | . 
json2md_venv/bin/activate && pip3 install --upgrade pip && pip3 install -r requirements.txt 4 | chmod +x json2md.py 5 | zip: 6 | mkdir -p dist/json2md 7 | zip -r dist/json2md.zip dist/json2md && zip -g -r dist/json2md.zip requirements.txt json2md.py template.md Makefile README.md 8 | clean_env: 9 | rm -rf "json2md_venv" 10 | -------------------------------------------------------------------------------- /tests/processor/template_processor/terraform/samples/sample_4/modules/iam_role/output.tf: -------------------------------------------------------------------------------- 1 | output "arn" { 2 | description = "ARN of IAM role" 3 | value = aws_iam_role.iamrole.arn 4 | } 5 | 6 | output "name" { 7 | description = "Name of IAM role" 8 | value = aws_iam_role.iamrole.name 9 | } 10 | 11 | output "path" { 12 | description = "Path of IAM role" 13 | value = aws_iam_role.iamrole.path 14 | } 15 | -------------------------------------------------------------------------------- /tests/processor/template_processor/google/sample/cloudbuild.jinja: -------------------------------------------------------------------------------- 1 | resources: 2 | - name: build-something 3 | action: gcp-types/cloudbuild-v1:cloudbuild.projects.builds.create 4 | metadata: 5 | runtimePolicy: 6 | - UPDATE_ALWAYS 7 | properties: 8 | steps: 9 | - name: gcr.io/cloud-builders/gcloud 10 | args: 11 | - deployment-manager 12 | - {{ properties['resourceToList'] }} 13 | - list 14 | timeout: 120s -------------------------------------------------------------------------------- /src/processor/connector/special_crawler/base_crawler.py: -------------------------------------------------------------------------------- 1 | 2 | class BaseCrawler: 3 | def __init__(self, **kwargs): 4 | self.resource_types = {} 5 | 6 | def check_for_special_crawl(self, resource_type): 7 | """ 8 | check the resource type need special crawling or not. 
11 |         if it needs special crawling, the special crawler runs and the updated resources are returned
12 |         """
13 |         return self.resources
--------------------------------------------------------------------------------
/realm/validation/gitScenario/test.json:
--------------------------------------------------------------------------------
1 | {
2 |     "fileType": "test",
3 |     "snapshot": "snapshot",
4 |     "testSet": [
5 |         {
6 |             "testName": "Ensure configuration uses port 80",
7 |             "version": "0.1",
8 |             "cases": [
9 |                 {
10 |                     "testId": "1",
11 |                     "rule": "{1}.webserver.port=80"
12 |                 }
13 |             ]
14 |         }
15 |     ]
16 | }
17 | 
--------------------------------------------------------------------------------
/tests/processor/template_processor/terraform/samples/sample_4/modules/iam_role/main.tf:
--------------------------------------------------------------------------------
1 | resource "aws_iam_role" "iamrole" {
2 |     name = var.role_name
3 |     path = var.role_path
4 |     max_session_duration = var.max_session_duration
5 |     description = var.role_description
6 | 
7 |     force_detach_policies = var.force_detach_policies
8 |     permissions_boundary = var.role_permissions_boundary_arn
9 | 
10 |     assume_role_policy = var.assume_role_policy
11 | 
12 |     tags = var.tags
13 | }
14 | 
--------------------------------------------------------------------------------
/realm/validation/gitScenario/snapshot.json:
--------------------------------------------------------------------------------
1 | {
2 |     "fileType": "snapshot",
3 |     "snapshots": [
4 |         {
5 |             "source": "gitConnector",
6 |             "nodes": [
7 |                 {
8 |                     "snapshotId": "1",
9 |                     "type": "json",
10 |                     "collection": "webserver",
11 |                     "paths": [
12 |                         "realm/validation/gitScenario/resource-pass.json"
13 |                     ]
14 |                 }
15 |             ]
16 |         }
17 |     ]
18 | }
--------------------------------------------------------------------------------
/.github/workflows/snyk_code_scanner.yml:
--------------------------------------------------------------------------------
1 | name: Snyk code scan
2 | on:
3 |   #push:
4 |   workflow_dispatch:
5 | 
6 | jobs:
7 |   security:
8 |     runs-on: ubuntu-latest
9 |     steps:
10 |       - uses: actions/checkout@master
11 |       - name: Set up Node 14
12 |         uses: actions/setup-node@v3
13 |         with:
14 |           node-version: 14
15 |       - name: install Snyk CLI
16 |         run: npm install -g snyk
17 |       - name: run Snyk Code Test
18 |         run: snyk auth ${{ secrets.SNYK_TOKEN }} && snyk code test --severity-threshold=high
19 | 
20 | 
--------------------------------------------------------------------------------
/tests/processor/template_processor/terraform/samples/sample_4/modules/vpc/main.tf:
--------------------------------------------------------------------------------
1 | resource "aws_vpc" "vpc" {
2 |     cidr_block = var.cidr_block
3 |     instance_tenancy = var.instance_tenancy
4 |     enable_dns_hostnames = var.enable_dns_hostnames
5 |     enable_dns_support = var.enable_dns_support
6 |     enable_classiclink = var.enable_classiclink
7 |     enable_classiclink_dns_support = var.enable_classiclink_dns_support
8 |     assign_generated_ipv6_cidr_block = var.enable_ipv6
9 | 
10 |     tags = var.tags
11 | }
12 | 
--------------------------------------------------------------------------------
/docs/docs/tests/tests-definition.md:
--------------------------------------------------------------------------------
1 | # Introduction
2 | 
3 | The **Prancer** cloud validation framework can connect to various providers to capture monitored resources' states and run tests against them. To do that, we need test files containing test cases against the monitored resources.
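4 | 
5 | As an illustration, a minimal test file (mirroring `realm/validation/gitScenario/test.json` in this repository) pairs a snapshot reference with rule-based test cases:
6 | 
7 | ```json
8 | {
9 |     "fileType": "test",
10 |     "snapshot": "snapshot",
11 |     "testSet": [
12 |         {
13 |             "testName": "Ensure configuration uses port 80",
14 |             "cases": [
15 |                 { "testId": "1", "rule": "{1}.webserver.port=80" }
16 |             ]
17 |         }
18 |     ]
19 | }
20 | ```
21 | 
22 | Here `{1}` refers to the node with `snapshotId` "1" in the companion snapshot configuration file.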
23 | 
24 | ## Definitions
25 | 
26 | - **Master test file**: Test cases for different resource types rather than individual resources. A master test file works in tandem with the master snapshot configuration file.
27 | - **Test file**: Test cases for individual resources. A test file works in tandem with the snapshot configuration file.
28 | 
--------------------------------------------------------------------------------
/tests/processor/template_processor/terraform/samples/sample_3/modules/ebs_volume/vars.tf:
--------------------------------------------------------------------------------
1 | variable "availability_zone" {
2 |     description = "Availability zone for the volume (NOTE: the EC2 instance mounting the volume must reside in the same AZ as the volume created here)"
3 | }
4 | 
5 | variable "encrypted" {
6 |     description = "Encryption"
7 |     default = true
8 | }
9 | 
10 | variable "size" {
11 |     description = "The size of the drive in GiBs"
12 |     type = number
13 | }
14 | 
15 | variable "tags" {
16 |     description = "Mapping of tags to assign to resources"
17 |     type = map(string)
18 | }
19 | 
--------------------------------------------------------------------------------
/src/processor/helper/utils/jinjatemplates/mastersnapshot.json:
--------------------------------------------------------------------------------
1 | {
2 |     "fileType": "masterSnapshot",
3 |     "snapshots": [
4 |         {
5 |             "type": "filesystem",
6 |             "connectorUser": "{{user}}",
7 |             "nodes": [
8 |                 {
9 |                     "masterSnapshotId": "{{abbrev}}_TEMPLATE_SNAPSHOT",
10 |                     "type": "{{iacType}}",
11 |                     "collection": "{{iacType}}",
12 |                     "paths": [
13 |                         "{{container}}/data"
14 |                     ]
15 |                 }
16 |             ],
17 |             "testUser": "{{user}}",
18 |             "source": "{{connector}}"
19 |         }
20 |     ]
21 | }
22 | 
--------------------------------------------------------------------------------
/src/processor/helper/xml/xml_utils.py:
--------------------------------------------------------------------------------
1 | import xml.etree.ElementTree as ET
2 | 
3 | def parse_element(element):
4 |     """Recursively convert an XML element into a dict of name/text/attributes/children."""
5 |     parsed = {
6 |         "name": element.tag,
7 |         "text": element.text.strip() if element.text and element.text.strip() else None,
8 |         "attributes": element.attrib,
9 |         "children": []
10 |     }
11 | 
12 |     for child in element:
13 |         parsed["children"].append(parse_element(child))
14 | 
15 |     return parsed
16 | 
17 | def xml_to_json(xml_content):
18 |     """Parse an XML string and return the equivalent nested dict."""
19 |     root = ET.fromstring(xml_content)
20 |     parsed_xml = parse_element(root)
21 |     return parsed_xml
22 | 
--------------------------------------------------------------------------------
/tests/processor/template_processor/terraform/samples/sample_2/main.tf:
--------------------------------------------------------------------------------
1 | resource "azurerm_storage_account" "storageAccount" {
2 |     count = var.storage_count
3 |     name = var.storage_name
4 |     resource_group_name = var.storage_rg_name
5 |     location = var.location
6 |     account_tier = var.account_tier
7 |     account_replication_type = var.replication_type
8 |     enable_https_traffic_only = var.enableSecureTransfer
9 |     allow_blob_public_access = var.allow_blob_public_access
10 |     tags = var.tags
11 | }
--------------------------------------------------------------------------------
/tests/processor/template_processor/terraform/samples/sample_4/modules/subnet/main.tf:
--------------------------------------------------------------------------------
1 | resource "aws_subnet" "subnet" {
2 |     vpc_id = var.vpc_id
3 |     cidr_block = var.subnet_cidr_block
4 |     availability_zone = var.availability_zone
5 |     availability_zone_id = var.availability_zone_id
6 | 
map_public_ip_on_launch = var.map_public_ip_on_launch 7 | assign_ipv6_address_on_creation = var.assign_ipv6_address_on_creation 8 | ipv6_cidr_block = var.ipv6_cidr_block 9 | tags = var.tags 10 | } 11 | -------------------------------------------------------------------------------- /tests/processor/templates/aws/sample/InvalidTemplate.txt: -------------------------------------------------------------------------------- 1 | 2020-01-15 04:46:15,102(apicontroller: 590) - Remote: 127.0.0.1 2 | 2020-01-15 04:46:15,125(apicontroller: 371) - Host: portal.prancer.dev, Method: GET vault API 3 | 2020-01-15 04:46:15,147(apicontroller: 449) - Method: GET, ImmutableMultiDict([('key_name', 'prancer-web-config-DB-URL')]) 4 | 2020-01-15 04:47:45,148(http_utils: 75) - HTTP GET https://secrets-kv-whitekite.vault.azure.net/secrets/prancer-web-config-DB-URL-CUSTOMER?api-version=7.0 ....... 5 | 2020-01-15 04:47:45,277(http_utils: 45) - GET status: 200 6 | 2020-01-15 04:47:45,296(restapi_azure: 214) - Get Id status: 200 7 | -------------------------------------------------------------------------------- /realm/awsConnector.json: -------------------------------------------------------------------------------- 1 | { 2 | "organization": "Organization name", 3 | "type": "aws", 4 | "fileType": "structure", 5 | "name": "Unit/Department name", 6 | "accounts": [ 7 | { 8 | "account-name": "Account name", 9 | "account-description": "Description of account", 10 | "account-id": "", 11 | "users": [ 12 | { 13 | "name": "", 14 | "access-key": "", 15 | "secret-access": "" 16 | } 17 | ] 18 | } 19 | ] 20 | } -------------------------------------------------------------------------------- /tests/processor/template_processor/terraform/samples/sample_4/modules/subnet/output.tf: -------------------------------------------------------------------------------- 1 | output "id" { 2 | description = "List of IDs of private subnets" 3 | value = aws_subnet.subnet.id 4 | } 5 | 6 | output "arn" { 7 | description = "List of ARNs of private subnets" 8 | value = aws_subnet.subnet.arn 9 | } 10 | 11 | output "cidr_block" { 12 | description = "List of cidr_blocks of private subnets" 13 | value = aws_subnet.subnet.cidr_block 14 | } 15 | 16 | output "ipv6_cidr_block" { 17 | description = "List of IPv6 cidr_blocks of private subnets in an IPv6 enabled VPC" 18 | value = aws_subnet.subnet.ipv6_cidr_block 19 | } 20 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | antlr4-python3-runtime==4.13.0 2 | pyfiglet==0.8.post1 3 | termcolor==1.1.0 4 | boto3==1.17.16 5 | urllib3==1.26.5 6 | cfn-flip==1.2.3 7 | gitdb2==2.0.5 8 | GitPython==3.1.37 9 | pymongo==4.4.1 10 | attrs==23.1.0 11 | pytest==7.4.0 12 | pytest-cov==4.1.0 13 | pytest-mock==3.11.1 14 | python-dateutil==2.8.2 15 | requests==2.31.0 16 | ply==3.10 17 | pyhcl==0.4.4 18 | python-hcl2==3.0.4 19 | google-api-python-client==1.7.8 20 | google-auth==1.6.3 21 | google-auth-httplib2==0.0.3 22 | oauth2client==4.1.3 23 | pyyaml==6.0.1 24 | httplib2==0.19.0 25 | dnspython==2.4.1 26 | Jinja2==3.1.2 27 | ruamel.yaml==0.16.12 28 | kubernetes==12.0.1 29 | lark-parser==0.10.1 30 | MarkupSafe==2.1.3 -------------------------------------------------------------------------------- /.github/workflows/snyk_docker_scanner.yml: -------------------------------------------------------------------------------- 1 | name: Snyk docker scan 2 | on: 3 | #push: 4 | workflow_dispatch: 5 | 6 | jobs: 
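7 |   # Pulls the Dockerfile's base image locally, then authenticates with the
8 |   # SNYK_TOKEN repository secret and scans the image against the Dockerfile.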
9 |   security:
10 |     runs-on: ubuntu-latest
11 |     steps:
12 |       - uses: actions/checkout@master
13 |       - name: Set up Node 14
14 |         uses: actions/setup-node@v3
15 |         with:
16 |           node-version: 14
17 |       - name: install Snyk CLI
18 |         run: npm install -g snyk
19 |       - name: Pulling the docker image
20 |         run: docker pull python:3.10.13-alpine
21 |       - name: run Snyk Docker Test
22 |         run: snyk auth ${{ secrets.SNYK_TOKEN }} && snyk test --docker python:3.10.13-alpine --file=dockerfiles/Dockerfile --severity-threshold=high
23 | 
--------------------------------------------------------------------------------
/tests/processor/template_processor/terraform/samples/sample_4/modules/security_group/main.tf:
--------------------------------------------------------------------------------
1 | resource "aws_security_group" "sgroup" {
2 |     name = var.name
3 |     description = var.description
4 |     vpc_id = var.vpc_id
5 |     revoke_rules_on_delete = var.revoke_rules_on_delete
6 | 
7 |     dynamic "ingress" {
8 |         for_each = var.ingress_enabled ? [1] : []
9 | 
10 |         content {
11 |             description = var.ingress_description
12 |             from_port = var.ingress_from_port
13 |             to_port = var.ingress_to_port
14 |             protocol = var.ingress_protocol
15 |             cidr_blocks = var.ingress_cidr_blocks
16 |         }
17 |     }
18 | 
19 |     tags = var.tags
20 | }
21 | 
--------------------------------------------------------------------------------
/tests/processor/template_processor/terraform/samples/sample_1/main.tf:
--------------------------------------------------------------------------------
1 | 
2 | variable "vnet_name" {
3 |     default = "prancer-vnet"
4 | }
5 | 
6 | variable "vnet_rg" {
7 |     default = "prancer-test-rg"
8 | }
9 | 
10 | variable "address_space" {
11 |     default = "10.254.0.0/16"
12 | }
13 | 
14 | variable "location" {
15 |     default = "eastus2"
16 | }
17 | 
18 | variable "tags" {
19 |     default = {
20 |         environment = "Production",
21 |         project = "Prancer"
22 |     }
23 | }
24 | 
25 | resource "azurerm_virtual_network" "vnet" {
26 |     name = var.vnet_name
27 |     resource_group_name = var.vnet_rg
28 |     address_space = [var.address_space]
29 |     location = var.location
30 |     tags = var.tags
31 | }
--------------------------------------------------------------------------------
/src/processor/comparison/comparisonantlr/comparatorListener.py:
--------------------------------------------------------------------------------
1 | # Generated from comparator.g4 by ANTLR 4.13.0
2 | from antlr4 import *
3 | if "." in __name__:
4 |     from .comparatorParser import comparatorParser
5 | else:
6 |     from comparatorParser import comparatorParser
7 | 
8 | # This class defines a complete listener for a parse tree produced by comparatorParser.
9 | class comparatorListener(ParseTreeListener):
10 | 
11 |     # Enter a parse tree produced by comparatorParser#expression.
12 |     def enterExpression(self, ctx:comparatorParser.ExpressionContext):
13 |         pass
14 | 
15 |     # Exit a parse tree produced by comparatorParser#expression.
16 | def exitExpression(self, ctx:comparatorParser.ExpressionContext): 17 | pass 18 | 19 | 20 | 21 | del comparatorParser -------------------------------------------------------------------------------- /.github/workflows/snyk_dependencies_scanner.yml: -------------------------------------------------------------------------------- 1 | 2 | name: Snyk dependencies scan 3 | on: 4 | workflow_dispatch: 5 | 6 | jobs: 7 | security: 8 | runs-on: ubuntu-latest 9 | steps: 10 | - uses: actions/checkout@master 11 | - name: Set up Node 14 12 | uses: actions/setup-node@v3 13 | with: 14 | node-version: 14 15 | - name: Install Snyk CLI 16 | run: npm install -g snyk 17 | - name: Install dependencies from requirements.txt 18 | run: pip install -r requirements.txt 19 | - name: Install dependencies from utilities/json2md/requirements.txt 20 | run: pip install -r utilities/json2md/requirements.txt 21 | - name: Run Snyk dependencies test 22 | run: snyk auth ${{ secrets.SNYK_TOKEN }} && snyk test --severity-threshold=high --all-projects 23 | 24 | -------------------------------------------------------------------------------- /docs/docs/connectors/connector-definition.md: -------------------------------------------------------------------------------- 1 | ## What is a connector? 2 | The Prancer platform can connect to various API providers to get the data. To connect to those providers, we need a **connector configuration file**. This connector configuration file has information about connecting to that API provider and the credentials we need to do that. 3 | 4 | ## Supported providers 5 | Currently, the following providers are supported in the connector configuration file: 6 | 7 | - Filesystem 8 | - Azure 9 | - AWS 10 | - Google 11 | - Kubernetes Cluster 12 | - Azure Board 13 | - Jira 14 | - GitLab 15 | - Slack 16 | - Microsoft Teams 17 | - BitBucket 18 | 19 | -------------------------------------------------------------------------------- /src/processor/helper/utils/cli_generate_azure_vault_key.py: -------------------------------------------------------------------------------- 1 | import uuid 2 | 3 | from processor.connector.vault import get_vault_data, set_vault_data 4 | from processor_enterprise.controller.vaultcontroller import set_key_visbility, EDITABLE 5 | 6 | def generate_password(): 7 | return str(uuid.uuid4()) 8 | 9 | def generate_azure_vault_key(): 10 | 11 | key = input("Enter the key to add or update its password: ") 12 | 13 | is_key_exists = get_vault_data(secret_key=key) 14 | 15 | password = generate_password() 16 | 17 | is_created = set_vault_data(key_name=key, value=password) 18 | 19 | if is_key_exists and is_created: 20 | print("Regenerating password for key: ", key) 21 | elif is_created: 22 | set_key_visbility(key, EDITABLE) 23 | print("Creating and generating password for key: ", key) 24 | else: 25 | print("Error while generating key:", key) 26 | 27 | -------------------------------------------------------------------------------- /.vscode/launch.json: -------------------------------------------------------------------------------- 1 | { 2 | // Use IntelliSense to learn about possible attributes. 3 | // Hover to view descriptions of existing attributes.
4 | // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 5 | "version": "0.2.0", 6 | "configurations": [ 7 | { 8 | "env": { 9 | "BASEDIR": "${workspaceFolder}", 10 | "PYTHONPATH": "${workspaceFolder}/src", 11 | "FRAMEWORKDIR": "${workspaceFolder}" 12 | }, 13 | "name": "Python: Current File", 14 | "type": "python", 15 | "request": "launch", 16 | "program": "${file}", 17 | "console": "integratedTerminal", 18 | "python": "${command:python.interpreterPath}", 19 | "args": [ 20 | "gitScenario" 21 | ] 22 | } 23 | ] 24 | } -------------------------------------------------------------------------------- /src/processor/templates/terraform/helper/function/encoding_function.py: -------------------------------------------------------------------------------- 1 | """ 2 | Performs all built-in encoding functions supported by the terraform processor 3 | """ 4 | import json 5 | from processor.helper.json.json_utils import json_from_string 6 | from processor.logging.log_handler import getlogger 7 | 8 | logger = getlogger() 9 | 10 | def jsonencode(json_str): 11 | """ convert a JSON string representation to a JSON object """ 12 | if isinstance(json_str, dict): 13 | # NOTE: conversion from string to json is already done in the processor 14 | return json_str 15 | return json.loads(json_str) 16 | 17 | def jsondecode(json_str): 18 | """ convert a JSON string representation to a JSON object """ 19 | if isinstance(json_str, dict): 20 | # NOTE: conversion from string to json is already done in the processor 21 | return json_str 22 | return json.loads(json_str) -------------------------------------------------------------------------------- /realm/azureConnector.json: -------------------------------------------------------------------------------- 1 | { 2 | "filetype":"structure", 3 | "type":"azure", 4 | "companyName": "Company Name", 5 | "tenant_id": "", 6 | "accounts": [ 7 | { 8 | "department": "Unit/Department name", 9 | "subscription": [ 10 | { 11 | "subscription_name": "", 12 | "subscription_description": "Subscription (Account) description", 13 | "subscription_id": "", 14 | "users": [ 15 | { 16 | "name":"", 17 | "client_id": "", 18 | "client_secret": "" 19 | } 20 | ] 21 | } 22 | ] 23 | } 24 | ] 25 | } -------------------------------------------------------------------------------- /tests/processor/connector/test_arn_parser.py: -------------------------------------------------------------------------------- 1 | """ Tests for ARN Parser""" 2 | from processor.connector import arn_parser 3 | 4 | def test_arnparse(): 5 | arn_str = "arn:aws:s3:us-west-1::" 6 | arn_object = arn_parser.arnparse(arn_str) 7 | assert isinstance(arn_object, arn_parser.Arn) 8 | 9 | arn_str = "arn:aws:ec2:us-west-1::uniqueid" 10 | arn_object = arn_parser.arnparse(arn_str) 11 | assert isinstance(arn_object, arn_parser.Arn) 12 | 13 | arn_str = "arn:partition:service:region:account-id:resource-type/resource-id" 14 | arn_object = arn_parser.arnparse(arn_str) 15 | assert isinstance(arn_object, arn_parser.Arn) 16 | 17 | 18 | 19 | def test_exception_arnparser(): 20 | arn_str = "aws:s3:us-west-1::" 21 | try: 22 | arn_object = arn_parser.arnparse(arn_str) 23 | except Exception as e: 24 | arn_object = str(e) 25 | assert type(arn_object) is str 26 | 27 | -------------------------------------------------------------------------------- /tests/processor/template_processor/terraform/samples/sample_4/modules/vpc/output.tf: -------------------------------------------------------------------------------- 1 | output "vpc_id" { 2 | description =
"The ID of the VPC" 3 | value = aws_vpc.vpc.id 4 | } 5 | 6 | output "vpc_arn" { 7 | description = "The ARN of the VPC" 8 | value = aws_vpc.vpc.arn 9 | } 10 | 11 | output "vpc_cidr_block" { 12 | description = "The CIDR block of the VPC" 13 | value = aws_vpc.vpc.cidr_block 14 | } 15 | 16 | output "default_security_group_id" { 17 | description = "The ID of the security group created by default on VPC creation" 18 | value = aws_vpc.vpc.default_security_group_id 19 | } 20 | 21 | output "default_network_acl_id" { 22 | description = "The ID of the default network ACL" 23 | value = aws_vpc.vpc.default_network_acl_id 24 | } 25 | 26 | output "default_route_table_id" { 27 | description = "The ID of the default route table" 28 | value = aws_vpc.vpc.default_route_table_id 29 | } 30 | -------------------------------------------------------------------------------- /realm/googleStructure.json: -------------------------------------------------------------------------------- 1 | { 2 | "organization": "company1", 3 | "type": "google", 4 | "fileType": "structure", 5 | "auth_uri": "https://accounts.google.com/o/oauth2/auth", 6 | "token_uri": "https://oauth2.googleapis.com/token", 7 | "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", 8 | "projects": [ 9 | { 10 | "project-name": "", 11 | "project-id": "", 12 | "users": [ 13 | { 14 | "name": "", 15 | "type": "service_account", 16 | "private_key_id": "", 17 | "private_key": "", 18 | "client_email": "@.iam.gserviceaccount.com", 19 | "client_id": "", 20 | "client_x509_cert_url": "" 21 | } 22 | ] 23 | } 24 | ] 25 | } -------------------------------------------------------------------------------- /src/processor/comparison/rules/common/sensitive_extension.py: -------------------------------------------------------------------------------- 1 | from processor.logging.log_handler import getlogger 2 | logger = getlogger() 3 | 4 | def sensitive_extensions(generated_snapshot, kwargs={}): 5 | paths = kwargs.get("paths", []) 6 | sensitive_extension_list = [ 7 | ".pfx", ".p12", ".cer", ".crt", ".crl", ".csr", ".der", ".p7b", ".p7r", ".spc", ".pem" 8 | ] 9 | output = {} 10 | for path in paths: 11 | extension = "."+path.split(".")[-1] 12 | if sensitive_extension_list: 13 | if extension in sensitive_extension_list: 14 | output["issue"] = True 15 | output["skipped"] = False 16 | output["sensitive_extensions_err"] = "Sensitive files should not be checked into the git repo" 17 | else: 18 | output["issue"] = False 19 | output["skipped"] = False 20 | 21 | if not paths: 22 | output["issue"] = False 23 | output["skipped"] = False 24 | 25 | return output 26 | -------------------------------------------------------------------------------- /tests/processor/template_processor/terraform/samples/sample_3/ec2/vars.tf: -------------------------------------------------------------------------------- 1 | variable "instance_count" {} 2 | variable "name" {} 3 | variable "instance_type" {} 4 | variable "user_data" {} 5 | variable "user_data_base64" {} 6 | variable "key_name" {} 7 | variable "monitoring" {} 8 | variable "get_password_data" {} 9 | variable "vpc_security_group_ids" {} 10 | variable "subnet_id" {} 11 | variable "iam_instance_profile" {} 12 | variable "associate_public_ip_address" {} 13 | variable "ipv6_address_count" {} 14 | variable "ipv6_addresses" {} 15 | variable "ebs_optimized" {} 16 | variable "root_block_device" {} 17 | variable "ebs_block_device" {} 18 | variable "ephemeral_block_device" {} 19 | variable "network_interface" {} 20 | variable 
"disable_api_termination" {} 21 | variable "instance_initiated_shutdown_behavior" {} 22 | variable "placement_group" {} 23 | variable "tenancy" {} 24 | 25 | variable "availability_zone" {} 26 | variable "encrypted" {} 27 | variable "size" {} 28 | 29 | variable "tags" { 30 | type = map 31 | } 32 | -------------------------------------------------------------------------------- /src/processor/helper/hcl/parser.py: -------------------------------------------------------------------------------- 1 | import json 2 | from hcl.parser import HclParser, pickle_file 3 | from hcl.api import u, isHcl 4 | from hcl2.lark_parser import Lark_StandAlone 5 | from processor.helper.hcl import yacc 6 | from processor.helper.hcl.transformer import HClDictTransformer 7 | 8 | class TerraformHCLParer(HclParser): 9 | def __init__(self): 10 | self.yacc = yacc.yacc(module=self, debug=False, optimize=1, picklefile=pickle_file) 11 | 12 | def parse(self, s): 13 | return self.yacc.parse(s, lexer=yacc.TerraformLexer()) 14 | 15 | def loads(fp): 16 | ''' 17 | Deserializes a string and converts it to a dictionary. The contents 18 | of the string must either be JSON or HCL. 19 | 20 | :returns: Dictionary 21 | ''' 22 | s = fp.read() 23 | hcl2 = Lark_StandAlone(transformer=HClDictTransformer()) 24 | return hcl2.parse(s + "\n") 25 | # s = u(s) 26 | # if isHcl(s): 27 | # return TerraformHCLParer().parse(s) 28 | # else: 29 | # return json.loads(s) -------------------------------------------------------------------------------- /utilities/json2md/templateKCC.md: -------------------------------------------------------------------------------- 1 | # Automated Vulnerability Scan result and Static Code Analysis for Kubernetes Config Connector (KCC) files 2 | 3 | Source Repository: https://github.com/GoogleCloudPlatform/k8s-config-connector/ 4 | Compliance help: https://cloud.google.com/security-command-center/docs/concepts-vulnerabilities-findings 5 | 6 | ## Compliance run Meta Data 7 | {{ data.meta }} 8 | 9 | ## Results 10 | {% for item in data.results %} 11 | ### Test ID - {{ item.id }} 12 | Title: {{ item.title }}\ 13 | Test Result: **{{ item.result }}**\ 14 | Description : {{ item.description }}\ 15 | 16 | #### Test Details 17 | - eval: {{ item.eval }} 18 | - id : {{ item.id }} 19 | 20 | #### Snapshots 21 | {{ item.snapshots }} 22 | 23 | - masterTestId: {{ item.masterTestId }} 24 | - masterSnapshotId: {{ item.masterSnapshotId }} 25 | - type: {{ item.type }} 26 | - rule: {{ item.rule }} 27 | - severity: {{ item.severity }} 28 | 29 | tags 30 | {{ item.tags }} 31 | ---------------------------------------------------------------- 32 | 33 | {% endfor %} -------------------------------------------------------------------------------- /utilities/mongo_install.txt: -------------------------------------------------------------------------------- 1 | # Reference: https://www.linode.com/docs/databases/mongodb/install-mongodb-on-ubuntu-16-04/ 2 | # These are instructions for installation of mongodb 4.0 on Ubuntu 16.04(xenial) 3 | 4 | sudo apt-get update 5 | sudo apt-get upgrade 6 | # GPG signing key for mongodb 3.12 7 | sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927 8 | # GPG signing key for mongodb 4.0 9 | sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4 10 | # For Ubuntu 16.04 11 | echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list 12 | # For Ubuntu 18.04 13 | echo "deb 
http://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list 14 | sudo apt-get update 15 | 16 | sudo apt-get install -y mongodb-org 17 | 18 | sudo systemctl start mongod 19 | sudo systemctl status mongod 20 | sudo systemctl stop mongod 21 | sudo systemctl restart mongod 22 | -------------------------------------------------------------------------------- /src/processor/helper/config/config.ini: -------------------------------------------------------------------------------- 1 | [DEFAULT] 2 | subscription = configdata/subscription.json 3 | 4 | [AZURE] 5 | api = realm/azureApiVersions.json 6 | azureStructureFolder = realm/ 7 | 8 | [GIT] 9 | parameterStructureFolder = realm/ 10 | 11 | [TESTS] 12 | containerFolder = realm/validation/ 13 | 14 | [REPORTING] 15 | reportOutputFolder = realm/validation/ 16 | 17 | [LOGGING] 18 | level = INFO 19 | maxbytes = 10 20 | backupcount = 10 21 | propagate = true 22 | logFolder = log 23 | dbname = whitekite 24 | 25 | [MONGODB] 26 | dbname1 = mongodb://user:password@localhost:27017/validator 27 | dbname = validator 28 | COLLECTION = resources 29 | SNAPSHOT = snapshots 30 | TEST = tests 31 | STRUCTURE = structures 32 | OUTPUT = outputs 33 | 34 | 35 | [INDEXES] 36 | OUTPUT = name, container, timestamp 37 | 38 | 39 | [VAULT] 40 | type = azure 41 | tenant_id = f997f2f9-a48f-465a-9677-54e8d1d90e5d 42 | client_id = 6b0e2fda-269d-44b3-bf44-08cc16e9cf72 43 | keyvault = secrets-kv-whitekite 44 | 45 | 46 | [NOTIFICATION] 47 | enabled=False 48 | 49 | [RESULT] 50 | console_min_severity_error=Low -------------------------------------------------------------------------------- /src/processor/templates/helm/helm_parser.py: -------------------------------------------------------------------------------- 1 | from processor.logging.log_handler import getlogger 2 | from processor.templates.base.template_parser import TemplateParser 3 | from processor.helper.file.file_utils import exists_file,exists_dir 4 | 5 | 6 | logger = getlogger() 7 | 8 | 9 | class HelmTemplateParser(TemplateParser): 10 | def __init__(self, template_file, tosave=False, **kwargs): 11 | """ 12 | Base parser class for parsing helm charts 13 | """ 14 | super().__init__(template_file, tosave=False, **kwargs) 15 | self.type = {} 16 | 17 | def parse(self,file_path): 18 | return "" 19 | 20 | def validate(self,file_path): 21 | helm_source = file_path.rpartition("/")[0] 22 | check_file_path = "%s/Chart.yaml" % helm_source 23 | values_file_path = "%s/values.yaml" % helm_source 24 | template_dir_path = "%s/templates" % helm_source 25 | 26 | if all([exists_file(check_file_path),exists_file(values_file_path),exists_dir(template_dir_path)]): 27 | return True 28 | return False -------------------------------------------------------------------------------- /src/processor/helper/hcl/hcl_utils.py: -------------------------------------------------------------------------------- 1 | import codecs 2 | import hcl 3 | import hcl2 4 | from lark import tree 5 | from processor.helper.hcl import parser 6 | from processor.logging.log_handler import getlogger 7 | 8 | logger = getlogger() 9 | 10 | def hcl_to_json(file_path): 11 | """ 12 | Converts the hcl file to json data. 13 | """ 14 | json_data = {} 15 | try: 16 | with open(file_path, 'r', encoding="utf-8") as fp: 17 | json_data = parser.loads(fp) 18 | except Exception as e: 19 | try: 20 | with codecs.open(file_path, "r", encoding="utf-8-sig") as fp: 21 | json_data = parser.loads(fp) 22 | except Exception as e: 23 | error =
str(e) 24 | error = error.split("Expected one of")[0] 25 | logger.debug("Unsupported terraform file, error while parsing file: %s , error: %s", file_path, error) 26 | 27 | return json_data 28 | 29 | if __name__ == "__main__": 30 | json_data = hcl_to_json("/tmp/extrasg.tf") 31 | import json 32 | print(json.dumps(json_data, indent=2)) 33 | 34 | -------------------------------------------------------------------------------- /src/processor/templates/terraform/helper/expression/base_expressions.py: -------------------------------------------------------------------------------- 1 | """ 2 | Process the expression and return the processed values 3 | """ 4 | from processor.logging.log_handler import getlogger 5 | 6 | logger = getlogger() 7 | 8 | 9 | def conditional_expression(expression): 10 | """ 11 | Perform the conditional operation on the provided expression and return the result 12 | """ 13 | expression_list = expression.split(" ? ") 14 | condition = expression_list[0] 15 | true_value = expression_list[1].split(" : ")[0] 16 | false_value = expression_list[1].split(" : ")[1] 17 | try: 18 | eval(true_value) 19 | except: 20 | true_value = f'"{true_value}"' 21 | try: 22 | eval(false_value) 23 | except: 24 | false_value = f'"{false_value}"' 25 | new_expression = "%s if %s else %s" % (true_value, condition, false_value) 26 | try: 27 | response = eval(new_expression) 28 | return response, True 29 | except Exception as e: 30 | logger.error(expression) 31 | logger.error(e) 32 | return expression, False 33 | -------------------------------------------------------------------------------- /docs/docs/api/webhook.md: -------------------------------------------------------------------------------- 1 | **Webhook APIs** 2 | === 3 | 4 | **Webhook - save** 5 | --- 6 | - Enable or disable the GitHub webhook autofix feature for a collection 7 | 8 | **CURL Sample** 9 | ``` 10 | curl -X POST https://portal.prancer.io/prancer-customer1/api/webhook/github/create -H 'authorization: Bearer ' -d '{ "collection" : "azure_arm", "enable" : true }' 11 | ``` 12 | 13 | - **URL:** https://portal.prancer.io/prancer-customer1/api/webhook/github/create 14 | - **Method:** POST 15 | - **Header:** 16 | ``` 17 | - content-type: application/json 18 | - Authorization: Bearer 19 | ``` 20 | - **Param:** 21 | ``` 22 | { 23 | "collection" : "azure_arm", 24 | "enable" : true 25 | } 26 | ``` 27 | 28 | **Explanation** 29 | 30 | - collection: Name of the collection for which to configure the webhook 31 | - enable: Boolean value that enables or disables the webhook.
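For example, to turn autofix back off for the same collection, send the same request with `enable` set to false (a sketch; the bearer token below is a placeholder):

```
curl -X POST https://portal.prancer.io/prancer-customer1/api/webhook/github/create -H 'authorization: Bearer <JWT Bearer Token>' -H 'content-type: application/json' -d '{ "collection" : "azure_arm", "enable" : false }'
```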
32 | 33 | **Response:** 34 | ``` 35 | { 36 | "data": {}, 37 | "error": "", 38 | "error_list": [], 39 | "message": "Successfully updated configuration", 40 | "metadata": {}, 41 | "status": 200 42 | } 43 | ``` 44 | -------------------------------------------------------------------------------- /utilities/json2md/templateAzureQuickstart.md: -------------------------------------------------------------------------------- 1 | # Automated Vulnerability Scan result and Static Code Analysis for Azure Quickstart files 2 | 3 | ## Azure Kubernetes Services (AKS) 4 | 5 | Source Repository: https://github.com/Azure/azure-quickstart-templates 6 | 7 | Scan engine: **Prancer Framework** (https://www.prancer.io) 8 | 9 | Compliance Database: https://github.com/prancer-io/prancer-compliance-test/tree/master/azure/iac 10 | 11 | ## Compliance run Meta Data 12 | {{ data.meta }} 13 | 14 | ## Results 15 | {% for item in data.results %} 16 | ### Test ID - {{ item.id }} 17 | Title: {{ item.title }}\ 18 | Test Result: **{{ item.result }}**\ 19 | Description : {{ item.description }}\ 20 | 21 | #### Test Details 22 | - eval: {{ item.eval }} 23 | - id : {{ item.id }} 24 | 25 | #### Snapshots 26 | {{ item.snapshots }} 27 | 28 | - masterTestId: {{ item.masterTestId }} 29 | - masterSnapshotId: {{ item.masterSnapshotId }} 30 | - type: {{ item.type }} 31 | - rule: {{ item.rule }} 32 | - severity: {{ item.severity }} 33 | 34 | tags 35 | {{ item.tags }} 36 | ---------------------------------------------------------------- 37 | 38 | {% endfor %} -------------------------------------------------------------------------------- /utilities/curl_cmds.txt: -------------------------------------------------------------------------------- 1 | curl -H "Content-Type:application/json" -X GET "http://localhost:8000/whitekite/api/version/" 2 | curl -H "Content-Type:application/json" -X POST "http://localhost:8000/whitekite/api/tests/" -d'{"container": "container3"}' 3 | curl -H "Content-Type:application/json" -X GET "http://localhost:8000/whitekite/api/results/container3/" 4 | curl -H "Content-Type:application/json" -X GET "http://localhost:8000/whitekite/api/results/container3/?page=1&pagesize=2" 5 | curl -H "Content-Type:application/json" -X GET "http://localhost:8000/whitekite/api/results/container3/test1/" 6 | curl -H "Content-Type:application/json" -X GET "http://localhost:8000/whitekite/api/results/container3/test6/?page=2&pagesize=1" 7 | curl -H "Content-Type:application/json" -X GET "http://localhost:8000/whitekite/api/results/container3/test6/?all=false" 8 | curl -H "Content-Type:application/json" -X GET "http://localhost:8000/whitekite/api/execute/container3/test1/" 9 | curl -H "Content-Type:application/json" -X GET "http://localhost:8000/whitekite/api/execute/container3/" 10 | curl -H "Content-Type:application/json" -X GET "http://localhost:8000/whitekite/api/containers/" 11 | 12 | -------------------------------------------------------------------------------- /tests/processor/template_processor/terraform/samples/sample_3/modules/ec2/output.tf: -------------------------------------------------------------------------------- 1 | output "id" { 2 | description = "List of IDs of instances" 3 | value = aws_instance.ec2.*.id 4 | } 5 | 6 | output "arn" { 7 | description = "List of ARNs of instances" 8 | value = aws_instance.ec2.*.arn 9 | } 10 | 11 | output "availability_zone" { 12 | description = "List of availability zones of instances" 13 | value = aws_instance.ec2.*.availability_zone 14 | } 15 | 16 | output "public_ip" { 17 | 
description = "List of public IP addresses assigned to the instances, if applicable" 18 | value = aws_instance.ec2.*.public_ip 19 | } 20 | 21 | output "private_dns" { 22 | description = "List of private DNS names assigned to the instances. Can only be used inside the Amazon EC2, and only available if you've enabled DNS hostnames for your VPC" 23 | value = aws_instance.ec2.*.private_dns 24 | } 25 | 26 | output "private_ip" { 27 | description = "List of private IP addresses assigned to the instances" 28 | value = aws_instance.ec2.*.private_ip 29 | } 30 | 31 | output "security_groups" { 32 | description = "List of associated security groups of instances" 33 | value = aws_instance.ec2.*.security_groups 34 | } 35 | -------------------------------------------------------------------------------- /config.ini: -------------------------------------------------------------------------------- 1 | [AZURE] 2 | api = realm/azureApiVersions.json 3 | azureStructureFolder = realm/ 4 | azureCli = false 5 | 6 | [GOOGLE] 7 | params = realm/googleParams.json 8 | 9 | [GIT] 10 | parameterStructureFolder = realm/ 11 | 12 | [KUBERNETES] 13 | kubernetesStructureFolder = /realm 14 | 15 | [HELM] 16 | helmexe = $HELM_HOME/helm 17 | 18 | [TESTS] 19 | containerFolder = realm/validation/ 20 | database = NONE 21 | 22 | [OPA] 23 | opa = true 24 | opaexe = $OPA_HOME/opa 25 | 26 | [REPORTING] 27 | reportOutputFolder = realm/validation/ 28 | 29 | [LOGGING] 30 | level = INFO 31 | maxbytes = 10 32 | backupcount = 10 33 | propagate = true 34 | logFolder = log 35 | dbname = validator 36 | 37 | [MONGODB] 38 | dburl = mongodb://localhost:27017/validator 39 | dbname = validator 40 | COLLECTION = resources 41 | SNAPSHOT = snapshots 42 | TEST = tests 43 | STRUCTURE = structures 44 | MASTERSNAPSHOT = mastersnapshots 45 | MASTERTEST = mastertests 46 | OUTPUT = outputs 47 | NOTIFICATIONS = notifications 48 | 49 | [INDEXES] 50 | OUTPUT = name, container, timestamp 51 | 52 | [VAULT] 53 | type = azure 54 | tenant_id = 55 | client_id = 56 | keyvault = 57 | 58 | [NOTIFICATION] 59 | enabled=False 60 | 61 | [RESULT] 62 | console_min_severity_error=Low -------------------------------------------------------------------------------- /.github/workflows/test_master.yaml: -------------------------------------------------------------------------------- 1 | name: Unit testing and Integration testing 2 | on: 3 | pull_request: 4 | types: [opened, synchronize] 5 | branches: 6 | - 'master' 7 | 8 | workflow_dispatch: 9 | inputs: 10 | branch: 11 | required: true 12 | description: 'Branch' 13 | default: 'master' 14 | jobs: 15 | build: 16 | name: build and publish 17 | runs-on: ubuntu-latest 18 | 19 | steps: 20 | - name: Set up QEMU 21 | uses: docker/setup-qemu-action@v1 22 | 23 | - name: Set up Docker Build 24 | uses: docker/setup-buildx-action@v1 25 | 26 | - name: Set up Python 3.8 27 | uses: actions/setup-python@v2 28 | with: 29 | python-version: "3.8" 30 | 31 | - name: Checkout code 32 | uses: actions/checkout@v2 33 | with: 34 | repository: prancer-io/cloud-validation-framework 35 | ref: ${{ github.head_ref }} 36 | token: ${{ secrets.GIT_TOKEN }} 37 | 38 | - name: testing 39 | run: | 40 | # docker run --rm -v $(pwd):$(pwd) -w=$(pwd) python:3.6.8 sh dev-test.sh 41 | docker run --rm -v $(pwd):$(pwd) -w=$(pwd) python:3.8 sh dev-test.sh 42 | docker run --rm -v $(pwd):$(pwd) -w=$(pwd) python:3.9 sh dev-test.sh 43 | -------------------------------------------------------------------------------- /src/processor/helper/file/file_utils.py: 
-------------------------------------------------------------------------------- 1 | """Utility functions for file and directory""" 2 | 3 | import os 4 | from processor.logging.log_handler import getlogger 5 | 6 | logger = getlogger() 7 | 8 | def exists_dir(dirname): 9 | """Check if this path exists and is a directory""" 10 | if dirname and os.path.exists(dirname) and os.path.isdir(dirname): 11 | return True 12 | return False 13 | 14 | 15 | def exists_file(fname): 16 | """Check if path exists and is a file""" 17 | if fname and os.path.exists(fname) and os.path.isfile(fname): 18 | return True 19 | return False 20 | 21 | 22 | def remove_file(fname): 23 | """Remove the file.""" 24 | try: 25 | os.remove(fname) 26 | return True 27 | except: 28 | return False 29 | 30 | 31 | def mkdir_path(dirpath): 32 | """Make directories recursively.""" 33 | try: 34 | os.makedirs(dirpath) 35 | return exists_dir(dirpath) 36 | except: 37 | return False 38 | 39 | def save_file(file_path, content): 40 | """Write content to the file at the specified path.""" 41 | try: 42 | with open(file_path, "w") as f: 43 | f.write(content) 44 | return True 45 | except Exception as e: 46 | logger.error(e) 47 | return False 48 | -------------------------------------------------------------------------------- /docs/docs/notification/notification.md: -------------------------------------------------------------------------------- 1 | **Notification** 2 | === 3 | 4 | - The notification configuration file contains the details for sending notifications with the test results to the user. 5 | 6 | ``` 7 | { 8 | "container": "", 9 | "name": "", 10 | "json": { 11 | "fileType": "notifications", 12 | "type": "notifications", 13 | "notifications": [ 14 | { 15 | "notificationId": "", 16 | "type": "email", 17 | "level": "all", 18 | "user": "", 19 | "to": [ 20 | "", 21 | "" 22 | ] 23 | } 24 | ] 25 | } 26 | } 27 | ``` 28 | 29 | - **Explanation:** 30 | - **container:** Name of the collection for which you want notifications. 31 | - **name:** Any notification name for reference. 32 | - **notificationId:** A notification id to uniquely identify the notification. 33 | - **level:** Level of output results to include in the notification. It can be "passed", "failed", or "all". 34 | - **user:** The sender email address from which the notification email is sent. 35 | - **to:** A list of recipient email addresses to which the notification email is sent.
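A filled-in sample (the collection name and email addresses below are placeholders for illustration, not values shipped with the framework):

```
{
    "container": "azure_arm",
    "name": "notification-azure-arm",
    "json": {
        "fileType": "notifications",
        "type": "notifications",
        "notifications": [
            {
                "notificationId": "notification1",
                "type": "email",
                "level": "failed",
                "user": "sender@example.com",
                "to": [
                    "receiver1@example.com",
                    "receiver2@example.com"
                ]
            }
        ]
    }
}
```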
-------------------------------------------------------------------------------- /src/processor/templates/terraform/helper/function/numeric_functions.py: -------------------------------------------------------------------------------- 1 | """ 2 | performs all in built numeric functions which are supported by terraform processor 3 | """ 4 | from processor.logging.log_handler import getlogger 5 | import math 6 | 7 | def to_abs(num): 8 | """ return the absolute value of given number """ 9 | return abs(num) 10 | 11 | def ceil(num): 12 | """ return the smallest integer greater than or equal to given number """ 13 | return math.ceil(num) 14 | 15 | def floor(num): 16 | """ return the largest integer less than or equal to given number """ 17 | return math.floor(num) 18 | 19 | def log(num, base): 20 | """ returns the logarithm of a given number in a given base """ 21 | return math.log(num, base) 22 | 23 | def to_max(*args): 24 | """ returns the largest item from given list of items """ 25 | return max(*args) 26 | 27 | def to_min(*args): 28 | """ returns the smallest item from given list of items """ 29 | return min(*args) 30 | 31 | def pow(num, power): 32 | """ returns the number raised to the given power """ 33 | return math.pow(num, power) 34 | 35 | def signum(num): 36 | """ determines the sign of a number, returning a number between -1 and 1 """ 37 | if num > 0: 38 | return 1 39 | elif num < 0: 40 | return -1 41 | else: 42 | return 0 -------------------------------------------------------------------------------- /tests/processor/reporting/test_json_output.py: -------------------------------------------------------------------------------- 1 | import os 2 | from processor.reporting.json_output import dump_output_results, json_record 3 | 4 | def mock_config_value(key, default=None): 5 | return 'pytestdb' 6 | 7 | def mock_insert_one_document(doc, collection, dbname): 8 | pass 9 | 10 | def test_dump_output_results(monkeypatch, create_temp_dir): 11 | monkeypatch.setattr('processor.reporting.json_output.insert_one_document', mock_insert_one_document) 12 | monkeypatch.setattr('processor.reporting.json_output.config_value', mock_config_value) 13 | newpath = create_temp_dir() 14 | fname = '%s/test1.json' % newpath 15 | new_fname = '%s/output-a1.json' % newpath 16 | dump_output_results([], fname, 'test1', 'snapshot') 17 | file_exists = os.path.exists(new_fname) 18 | assert False == file_exists 19 | val = dump_output_results([], fname, 'test1', 'snapshot', False) 20 | assert val is None 21 | 22 | 23 | def test_json_record(monkeypatch): 24 | monkeypatch.setattr('processor.reporting.json_output.config_value', mock_config_value) 25 | val = json_record('abcd', 'test', 'a.json', json_data=None) 26 | assert val is not None 27 | val = json_record('abcd', 'test', 'a.json', json_data={'$schema': '1.9.0'}) 28 | assert val is not None 29 | exists = '$schema' not in val['json'] 30 | assert exists == True -------------------------------------------------------------------------------- /.github/workflows/test_development.yaml: -------------------------------------------------------------------------------- 1 | name: Unit testing and Integration testing [development] 2 | on: 3 | push: 4 | branches: 5 | - development 6 | 7 | pull_request: 8 | types: [opened, synchronize] 9 | 10 | workflow_dispatch: 11 | inputs: 12 | branch: 13 | required: true 14 | description: 'Branch' 15 | default: 'development' 16 | jobs: 17 | build: 18 | name: build and publish 19 | runs-on: ubuntu-latest 20 | 21 | steps: 22 | - name: Set up QEMU 23 
| uses: docker/setup-qemu-action@v1 24 | 25 | - name: Set up Docker Build 26 | uses: docker/setup-buildx-action@v1 27 | 28 | - name: Set up Python 3.8 29 | uses: actions/setup-python@v2 30 | with: 31 | python-version: "3.8" 32 | 33 | - name: Checkout code 34 | uses: actions/checkout@v2 35 | with: 36 | repository: prancer-io/cloud-validation-framework 37 | ref: ${{ github.head_ref }} 38 | token: ${{ secrets.GIT_TOKEN }} 39 | 40 | - name: testing 41 | run: | 42 | # docker run --rm -v $(pwd):$(pwd) -w=$(pwd) python:3.6.8 sh dev-test.sh 43 | docker run --rm -v $(pwd):$(pwd) -w=$(pwd) python:3.8 sh dev-test.sh 44 | docker run --rm -v $(pwd):$(pwd) -w=$(pwd) python:3.9 sh dev-test.sh 45 | -------------------------------------------------------------------------------- /tests/processor/template_processor/terraform/samples/sample_4/modules/iam_role/vars.tf: -------------------------------------------------------------------------------- 1 | variable "role_name" { 2 | description = "IAM role name" 3 | type = string 4 | default = "" 5 | } 6 | 7 | variable "role_path" { 8 | description = "Path of IAM role" 9 | type = string 10 | default = "/" 11 | } 12 | 13 | variable "max_session_duration" { 14 | description = "Maximum CLI/API session duration in seconds between 3600 and 43200" 15 | type = number 16 | default = 3600 17 | } 18 | 19 | variable "role_description" { 20 | description = "IAM Role description" 21 | type = string 22 | default = "" 23 | } 24 | 25 | variable "force_detach_policies" { 26 | description = "Whether policies should be detached from this role when destroying" 27 | type = bool 28 | default = false 29 | } 30 | 31 | variable "role_permissions_boundary_arn" { 32 | description = "Permissions boundary ARN to use for IAM role" 33 | type = string 34 | default = "" 35 | } 36 | 37 | variable "assume_role_policy" { 38 | description = "The policy that grants an entity permission to assume the role." 39 | type = string 40 | default = "" 41 | } 42 | 43 | variable "tags" { 44 | description = "A map of tags to add to IAM role resources" 45 | type = map(string) 46 | default = {} 47 | } 48 | -------------------------------------------------------------------------------- /tests/processor/template_processor/terraform/samples/sample_4/modules/security_group/vars.tf: -------------------------------------------------------------------------------- 1 | variable "vpc_id" { 2 | description = "ID of the VPC where to create security group" 3 | type = string 4 | } 5 | 6 | variable "name" { 7 | description = "Name of security group" 8 | type = string 9 | } 10 | 11 | variable "description" { 12 | description = "Description of security group" 13 | type = string 14 | default = "Security Group managed by Terraform" 15 | } 16 | 17 | variable "revoke_rules_on_delete" { 18 | description = "Instruct Terraform to revoke all of the Security Groups attached ingress and egress rules before deleting the rule itself. Enable for EMR." 
19 | type = bool 20 | default = false 21 | } 22 | 23 | variable "ingress_enabled" { 24 | type = bool 25 | default = false 26 | } 27 | 28 | variable "ingress_description" { 29 | default = "" 30 | } 31 | 32 | variable "ingress_from_port" { 33 | default = "" 34 | } 35 | 36 | variable "ingress_to_port" { 37 | default = "" 38 | } 39 | 40 | variable "ingress_protocol" { 41 | default = "" 42 | } 43 | 44 | variable "ingress_cidr_blocks" { 45 | type = list(string) 46 | default = [] 47 | } 48 | 49 | variable "tags" { 50 | description = "A mapping of tags to assign to security group" 51 | type = map(string) 52 | default = {} 53 | } 54 | -------------------------------------------------------------------------------- /tests/processor/template_processor/azure/sample/keyvault.json: -------------------------------------------------------------------------------- 1 | { 2 | "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", 3 | "contentVersion": "1.0.0.0", 4 | "parameters": {}, 5 | "resources": [ 6 | { 7 | "type": "Microsoft.KeyVault/vaults", 8 | "name": "[parameters('keyVaultSettings').settings[copyIndex('kvcopy')].name]", 9 | "condition": "[equals(resourceGroup().name, parameters('keyVaultSettings').settings[copyIndex('kvcopy')].resourceGroup)]", 10 | "copy": { 11 | "name": "kvcopy", 12 | "count": "[length(parameters('keyVaultSettings').settings)]" 13 | }, 14 | "tags": {}, 15 | "apiVersion": "2016-10-01", 16 | "location": "[resourceGroup().location]", 17 | "properties": { 18 | "enabledForDeployment": true, 19 | "enabledForDiskEncryption": true, 20 | "enabledForTemplateDeployment": true, 21 | "enableSoftDelete": true, 22 | "enablePurgeProtection": true, 23 | "tenantId": "[subscription().tenantId]", 24 | "accessPolicies": "[parameters('keyVaultSettings').settings[copyIndex('kvcopy')].accessPolicies]", 25 | "sku": { 26 | "name": "[parameters('keyVaultSettings').settings[copyIndex('kvcopy')].sku]", 27 | "family": "A" 28 | } 29 | } 30 | } 31 | ] 32 | } -------------------------------------------------------------------------------- /tests/processor/template_processor/terraform/samples/sample_4/modules/subnet/vars.tf: -------------------------------------------------------------------------------- 1 | variable "vpc_id" { 2 | description = "The VPC ID" 3 | type = string 4 | default = "0.0.0.0/0" 5 | } 6 | 7 | variable "subnet_cidr_block" { 8 | description = "The CIDR block for the subnet" 9 | type = string 10 | default = "0.0.0.0/0" 11 | } 12 | 13 | variable "availability_zone" { 14 | description = "The AZ of the subnet" 15 | type = string 16 | default = null 17 | } 18 | 19 | variable "availability_zone_id" { 20 | description = "The AZ ID of the subnet" 21 | type = string 22 | default = null 23 | } 24 | 25 | variable "map_public_ip_on_launch" { 26 | description = "Should be false if you do not want to auto-assign public IP on launch" 27 | type = bool 28 | default = true 29 | } 30 | 31 | variable "assign_ipv6_address_on_creation" { 32 | description = "Specify true to indicate that network interfaces created in the specified subnet should be assigned an IPv6 address" 33 | type = bool 34 | default = false 35 | } 36 | 37 | variable "ipv6_cidr_block" { 38 | description = "The IPv6 network range for the subnet, in CIDR notation." 
39 | type = string 40 | default = null 41 | } 42 | 43 | variable "tags" { 44 | description = "A map of tags to add to all resources" 45 | type = map(string) 46 | default = {} 47 | } 48 | -------------------------------------------------------------------------------- /tests/processor/template_processor/terraform/samples/sample_3/ec2/terraform.tfvars: -------------------------------------------------------------------------------- 1 | instance_count = 1 2 | name = "ec2" 3 | instance_type = "t3.micro" 4 | user_data = null 5 | user_data_base64 = null 6 | key_name = "" 7 | monitoring = false 8 | get_password_data = false 9 | vpc_security_group_ids = null 10 | iam_instance_profile = null 11 | subnet_id = "" 12 | associate_public_ip_address = null 13 | ipv6_address_count = null 14 | ipv6_addresses = null 15 | ebs_optimized = false 16 | root_block_device = [] 17 | ebs_block_device = [] 18 | ephemeral_block_device = [] 19 | network_interface = [] 20 | disable_api_termination = false 21 | instance_initiated_shutdown_behavior = "stop" 22 | placement_group = null 23 | tenancy = null 24 | 25 | availability_zone = "us-east-2a" 26 | encrypted = false 27 | size = 5 28 | 29 | tags = { 30 | Name = "prancer-ec2" 31 | Environment = "Production" 32 | Project = "Prancer" 33 | } 34 | -------------------------------------------------------------------------------- /.github/workflows/documentation.yaml: -------------------------------------------------------------------------------- 1 | name: Update Documentation 2 | 3 | on: 4 | push: 5 | branches: 6 | - master 7 | 8 | jobs: 9 | build: 10 | runs-on: ubuntu-latest 11 | 12 | steps: 13 | - name: Checkout repository 14 | uses: actions/checkout@v2 15 | 16 | - name: Set up Python 17 | uses: actions/setup-python@v2 18 | with: 19 | python-version: '3.x' 20 | 21 | - name: Install dependencies 22 | run: | 23 | python -m pip install --upgrade pip 24 | pip install mkdocs 25 | 26 | - name: Build documentation 27 | run: mkdocs build 28 | 29 | - name: Copy files to server 30 | uses: appleboy/scp-action@master 31 | with: 32 | host: ${{ secrets.DOC_SERVER_HOST }} 33 | username: ${{ secrets.DOC_SERVER_USERNAME }} 34 | key: ${{ secrets.DOC_SERVER_SSH_KEY }} 35 | source: "docs/build/" 36 | target: "/var/www/docs.prancer.io/public_html/" 37 | strip_components: 2 38 | 39 | - name: Run commands on server 40 | uses: appleboy/ssh-action@master 41 | with: 42 | host: ${{ secrets.DOC_SERVER_HOST }} 43 | username: ${{ secrets.DOC_SERVER_USERNAME }} 44 | key: ${{ secrets.DOC_SERVER_SSH_KEY }} 45 | script: | 46 | cd /var/www/docs.prancer.io/public_html 47 | # add the commands to update the files here 48 | find -user azureuser -exec chmod g+w {} \; 49 | -------------------------------------------------------------------------------- /tests/processor/template_processor/terraform/samples/sample_4/lambda/vars.tf: -------------------------------------------------------------------------------- 1 | variable "role_name" {} 2 | variable "role_path" {} 3 | variable "max_session_duration" {} 4 | variable "role_description" {} 5 | variable "force_detach_policies" {} 6 | variable "role_permissions_boundary_arn" {} 7 | variable "assume_role_policy" {} 8 | 9 | variable "cidr_block" {} 10 | variable "instance_tenancy" {} 11 | variable "enable_dns_hostnames" {} 12 | variable "enable_dns_support" {} 13 | variable "enable_classiclink" {} 14 | variable "enable_classiclink_dns_support" {} 15 | variable "enable_ipv6" {} 16 | 17 | variable "subnet_cidr_block" {} 18 | variable "availability_zone" {} 19 | variable 
"availability_zone_id" {} 20 | variable "map_public_ip_on_launch" {} 21 | variable "assign_ipv6_address_on_creation" {} 22 | variable "ipv6_cidr_block" {} 23 | 24 | variable "sgroup_name" {} 25 | variable "sgroup_description" {} 26 | variable "revoke_rules_on_delete" {} 27 | 28 | variable "description" {} 29 | variable "environment" {} 30 | variable "kms_key_arn" {} 31 | variable "filename" {} 32 | variable "function_name" {} 33 | variable "handler" {} 34 | variable "memory_size" {} 35 | variable "publish" {} 36 | variable "reserved_concurrent_executions" {} 37 | variable "runtime" {} 38 | variable "s3_bucket" {} 39 | variable "s3_key" {} 40 | variable "s3_object_version" {} 41 | variable "timeout" {} 42 | 43 | variable "tags" { 44 | type = map 45 | } 46 | variable "tracing_mode" {} 47 | -------------------------------------------------------------------------------- /src/processor/templates/kubernetes/kubernetes_parser.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | import hcl 4 | from yaml.loader import FullLoader 5 | from cfn_flip import flip, to_yaml, to_json 6 | from processor.logging.log_handler import getlogger 7 | from processor.helper.yaml.yaml_utils import yaml_from_file 8 | from processor.helper.json.json_utils import save_json_to_file,json_from_file 9 | from processor.templates.base.template_parser import TemplateParser 10 | 11 | logger = getlogger() 12 | 13 | class KubernetesTemplateParser(TemplateParser): 14 | """ 15 | Base Parser class for parse cloud templates 16 | """ 17 | 18 | def __init__(self, template_file, tosave=False, **kwargs): 19 | """ 20 | """ 21 | super().__init__(template_file, tosave=False, **kwargs) 22 | self.type = {} 23 | 24 | 25 | def parse(self,file_path): 26 | """ 27 | docstring 28 | """ 29 | template_json = None 30 | with open(file_path) as scanned_file: 31 | try: 32 | template_json = json.loads(to_json(scanned_file.read())) 33 | self.contentType = 'yaml' 34 | except: 35 | file_name = file_path.split("/")[-1] 36 | logger.error("\t\t ERROR: please check yaml file contains correct content: %s", file_name) 37 | return template_json 38 | 39 | def kind_detector(self): 40 | """ 41 | docstring 42 | """ 43 | return "simple" -------------------------------------------------------------------------------- /docs/docs/snapshots/helm.md: -------------------------------------------------------------------------------- 1 | ## HelmChart master snapshot configuration 2 | HelmChart is only available for master snapshot configuration file. because when helm binary process helm chart template it will go to generate one multiple yaml file which is support by **prancer**. 3 | 4 | **Prancer** will minify the generated multiple yaml file which created by helm binary to multiple single yaml file. 
5 | 6 | Here is the master snapshot configuration file template for helm chart : 7 | 8 | ```json 9 | { 10 | "fileType": "masterSnapshot", 11 | "snapshots": [ 12 | { 13 | "source": "", 14 | "nodes": [ 15 | { 16 | "masterSnapshotId": "", 17 | "type": "helmChart", 18 | "collection": "", 19 | "paths":[ 20 | "" 21 | ] 22 | } 23 | ] 24 | } 25 | ] 26 | } 27 | ``` 28 | 29 | sample file : 30 | 31 | ```json 32 | { 33 | "fileType": "masterSnapshot", 34 | "snapshots": [ 35 | { 36 | "source": "test-gitConnector", 37 | "nodes": [ 38 | { 39 | "masterSnapshotId": "helm_", 40 | "type": "helmChart", 41 | "collection": "multiple", 42 | "paths":[ 43 | "helm/" 44 | ] 45 | } 46 | ] 47 | } 48 | ] 49 | } 50 | ``` 51 | -------------------------------------------------------------------------------- /tests/processor/template_processor/terraform/samples/sample_4/modules/vpc/vars.tf: -------------------------------------------------------------------------------- 1 | variable "cidr_block" { 2 | description = "The CIDR block for the VPC" 3 | type = string 4 | default = "0.0.0.0/0" 5 | } 6 | 7 | variable "instance_tenancy" { 8 | description = "A tenancy option for instances launched into the VPC" 9 | type = string 10 | default = "default" 11 | } 12 | 13 | variable "enable_dns_hostnames" { 14 | description = "Should be true to enable DNS hostnames in the VPC" 15 | type = bool 16 | default = false 17 | } 18 | 19 | variable "enable_dns_support" { 20 | description = "Should be true to enable DNS support in the VPC" 21 | type = bool 22 | default = true 23 | } 24 | 25 | variable "enable_classiclink" { 26 | description = "Should be true to enable ClassicLink for the VPC. Only valid in regions and accounts that support EC2 Classic." 27 | type = bool 28 | default = null 29 | } 30 | 31 | variable "enable_classiclink_dns_support" { 32 | description = "Should be true to enable ClassicLink DNS Support for the VPC. Only valid in regions and accounts that support EC2 Classic." 33 | type = bool 34 | default = null 35 | } 36 | 37 | variable "enable_ipv6" { 38 | description = "Requests an Amazon-provided IPv6 CIDR block with a /56 prefix length for the VPC. You cannot specify the range of IP addresses, or the size of the CIDR block." 
39 | type = bool 40 | default = false 41 | } 42 | 43 | variable "tags" { 44 | description = "A map of tags to add to all resources" 45 | type = map(string) 46 | default = {} 47 | } 48 | -------------------------------------------------------------------------------- /tests/jsons/git_snapshot.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "_id" : "5ca482597456216a40c75cd0", 4 | "checksum" : "99914b932bd37a50b983c5e7c90ae93b", 5 | "collection" : "snapshots", 6 | "container" : "gitcontainer", 7 | "json" : { 8 | "contentVersion" : "1.0.0.0", 9 | "fileType" : "snapshot", 10 | "snapshots" : [ 11 | { 12 | "source": "gitConnector", 13 | "type": "filesystem", 14 | "testUser": "git", 15 | "nodes": [ 16 | { 17 | "snapshotId": "1", 18 | "type": "json", 19 | "collection": "security_groups", 20 | "path": "devops/cf/mytemplate.json" 21 | } 22 | ] 23 | } 24 | ] 25 | }, 26 | "name" : "snapshot", 27 | "timestamp" : "1554285145993", 28 | "type" : "snapshot" 29 | }, 30 | { 31 | "_id" : "5ca482597456216a40c75cd0", 32 | "checksum" : "99914b932bd37a50b983c5e7c90ae93b", 33 | "collection" : "snapshots", 34 | "container" : "gitcontainer", 35 | "json" : { 36 | "contentVersion" : "1.0.0.0", 37 | "fileType" : "snapshot", 38 | "snapshots" : [ 39 | { 40 | "source": "gitConnector", 41 | "type": "filesystem", 42 | "testUser": "git", 43 | "nodes": [ 44 | { 45 | "snapshotId": "2", 46 | "type": "json", 47 | "collection": "security_groups", 48 | "path": "devops/cf/mytemplate.json" 49 | } 50 | ] 51 | } 52 | ] 53 | }, 54 | "name" : "snapshot1.json", 55 | "timestamp" : "1554285145993", 56 | "type" : "snapshot" 57 | } 58 | ] 59 | -------------------------------------------------------------------------------- /docs/docs/api/remediation.md: -------------------------------------------------------------------------------- 1 | **Remediation APIs** 2 | === 3 | 4 | - Remediation is a feature that automatically fixes security-related issues in pre-deployment template files, or fixes the configuration of cloud resources post-deployment. 5 | 6 | **Remediation - Run** 7 | --- 8 | 9 | **CURL Sample** 10 | ``` 11 | curl -X POST https://portal.prancer.io/prancer-customer1/api/remediate/testcase/ -H 'authorization: Bearer ' -H 'content-type: application/json' -d '{ "output_id":"608d646f32e86e9c9453c665", "snapshot_id":"ARM_TEMPLATE_SNAPSHOT10", "remediation_id":"PR-AZR-0053-ARM" }' 12 | ``` 13 | 14 | - **URL:** https://portal.prancer.io/prancer-customer1/api/remediate/testcase/ 15 | - **Method:** POST 16 | - **Header:** 17 | ``` 18 | - content-type: application/json 19 | - Authorization: Bearer 20 | ``` 21 | - **Param:** 22 | ``` 23 | { 24 | output_id: "608d646f32e86e9c9453c665", 25 | remediation_id: "PR-AZR-0053-ARM", 26 | snapshot_id: "ARM_TEMPLATE_SNAPSHOT10" 27 | } 28 | ``` 29 | - **Explanation:** 30 | 31 | `Required Fields` 32 | 33 | - **output_id:** Object Id of the output collection for which you want to run remediation. 34 | - **snapshot_id:** A valid snapshotId that is contained in the output object. 35 | - **remediation_id:** A valid predefined remediation Id. The remediation is applied to the resource referenced by the provided snapshot Id.
36 | 37 | 38 | **Response:** 39 | ``` 40 | { 41 | "data": { 42 | "url": "https://github.com///pull/151" 43 | }, 44 | "error": "", 45 | "error_list": [], 46 | "message": "Remediation completed", 47 | "metadata": {}, 48 | "status": 200 49 | } 50 | ``` 51 | -------------------------------------------------------------------------------- /docs/docs/snapshots/snapshot-definition.md: -------------------------------------------------------------------------------- 1 | ## Introduction 2 | 3 | The **prancer** validation framework can connect to various providers to capture the states of monitored resources and run compliance tests against them. For that purpose, we need snapshot configuration files to specify those monitored resources, take snapshots, and store them. 4 | 5 | ## Definitions 6 | 7 | - **Master Snapshot Configuration File**: A json based configuration file in the **prancer** cloud validation framework which defines the **type of monitored resources** in a target environment. As an example, in a master snapshot configuration file we define different types of resources in Azure: Virtual Machine, Virtual Network, and Network Security Group. 8 | 9 | - **Snapshot Configuration File**: A json based configuration file in the **prancer** cloud validation framework which defines **individual monitored resources** in a target environment. As an example, in a snapshot configuration file we define individual resources in our Azure cloud: Virtual Machine 1, Virtual Machine 2, and Virtual Network A. 10 | 11 | - **Snapshot**: A json based file which contains the *state* of a monitored resource at a given time. 12 | 13 | > **crawler**, which is an enterprise edition feature of the **Prancer** cloud validation framework, leverages the **master snapshot configuration file** to examine the target environment and find new resources. It generates the snapshot configuration files automatically. For more information read the [crawler section](../crawler/crawler-definition.md).
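For illustration, here is a minimal snapshot configuration file that captures a single template file from a git repository (the connector name and path are placeholders; the shape mirrors the snapshot fixtures used in the framework's tests):

```
{
    "fileType": "snapshot",
    "contentVersion": "1.0.0.0",
    "snapshots": [
        {
            "source": "gitConnector",
            "type": "filesystem",
            "testUser": "git",
            "nodes": [
                {
                    "snapshotId": "1",
                    "type": "json",
                    "collection": "security_groups",
                    "path": "devops/cf/mytemplate.json"
                }
            ]
        }
    ]
}
```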
14 | 15 | 16 | -------------------------------------------------------------------------------- /tests/processor/templates/aws/sample/SingleENIwithMultipleEIPs.json: -------------------------------------------------------------------------------- 1 | { 2 | "AWSTemplateFormatVersion": "2010-09-09", 3 | "Description": "Template Creates a single EC2 instance with a single ENI which has multiple private and public IPs", 4 | "Parameters":{ 5 | "Subnet": { 6 | "Description": "ID of the Subnet the instance should be launched in, this will link the instance to the same VPC.", 7 | "Type": "List" 8 | 9 | } 10 | }, 11 | "Resources": { 12 | "EIP1": { 13 | "Type": "AWS::EC2::EIP", 14 | "Properties": { 15 | "Domain": "VPC" 16 | } 17 | }, 18 | "EIP2": { 19 | "Type": "AWS::EC2::EIP", 20 | "Properties": { 21 | "Domain": "VPC" 22 | } 23 | }, 24 | "Association1": 25 | { 26 | "Type": "AWS::EC2::EIPAssociation", 27 | "DependsOn" : ["ENI","EIP1"], 28 | "Properties": { 29 | "AllocationId": { "Fn::GetAtt" : [ "EIP1", "AllocationId" ]}, 30 | "NetworkInterfaceId": {"Ref":"ENI"}, 31 | "PrivateIpAddress": {"Fn::Select" : [ "0", {"Fn::GetAtt" : [ "ENI" , "SecondaryPrivateIpAddresses"]} ]} 32 | } 33 | }, 34 | "Association2": 35 | { 36 | "Type": "AWS::EC2::EIPAssociation", 37 | "DependsOn" : ["ENI","EIP2"], 38 | "Properties": { 39 | "AllocationId": { "Fn::GetAtt" : [ "EIP2", "AllocationId" ]}, 40 | "NetworkInterfaceId": {"Ref":"ENI"}, 41 | "PrivateIpAddress": {"Fn::Select" : [ "1", {"Fn::GetAtt" : [ "ENI" , "SecondaryPrivateIpAddresses"]} ]} 42 | } 43 | }, 44 | "ENI": 45 | { 46 | "Type" : "AWS::EC2::NetworkInterface", 47 | "Properties" : { 48 | "SecondaryPrivateIpAddressCount" : 2, 49 | "SourceDestCheck" : true, 50 | "SubnetId" : { "Fn::Select" : [ "0", {"Ref" : "Subnet"} ] } 51 | } 52 | 53 | 54 | 55 | }}} 56 | -------------------------------------------------------------------------------- /src/processor/connector/arn_parser.py: -------------------------------------------------------------------------------- 1 | class MalformedArnError(Exception): 2 | def __init__(self, arn_str): 3 | self.arn_str = arn_str 4 | 5 | def __str__(self): 6 | return 'arn_str: {arn_str}'.format(arn_str=self.arn_str) 7 | 8 | 9 | class Arn(object): 10 | def __init__(self, partition, service, region, account_id, resource_type, resource): 11 | self.partition = partition 12 | self.service = service 13 | self.region = region 14 | self.account_id = account_id 15 | self.resource_type = resource_type 16 | self.resource = resource 17 | 18 | 19 | def arnparse(arn_str): 20 | if not arn_str.startswith('arn:'): 21 | raise MalformedArnError(arn_str) 22 | 23 | elements = arn_str.split(':', 5) 24 | service = elements[2] 25 | resource = elements[5] 26 | 27 | if service in ['s3', 'sns', 'apigateway', 'execute-api', 'acm', 'rds']: 28 | resource_type = None 29 | else: 30 | resource_type, resource = _parse_resource(resource) 31 | 32 | return Arn( 33 | partition=elements[1], 34 | service=service, 35 | region=elements[3] if elements[3] != '' else None, 36 | account_id=elements[4] if elements[4] != '' else None, 37 | resource_type=resource_type, 38 | resource=resource, 39 | ) 40 | 41 | 42 | def _parse_resource(resource): 43 | first_separator_index = -1 44 | for idx, c in enumerate(resource): 45 | if c in (':', '/'): 46 | first_separator_index = idx 47 | break 48 | 49 | if first_separator_index != -1: 50 | resource_type = resource[:first_separator_index] 51 | resource = resource[first_separator_index + 1:] 52 | else: 53 | resource_type = None 54 | 55 | return 
resource_type, resource -------------------------------------------------------------------------------- /src/processor/template_processor/base/base_template_constatns.py: -------------------------------------------------------------------------------- 1 | from processor.template_processor.aws_template_processor import AWSTemplateProcessor 2 | from processor.template_processor.azure_template_processor import AzureTemplateProcessor 3 | from processor.template_processor.google_template_processor import GoogleTemplateProcessor 4 | from processor.template_processor.terraform_template_processor import TerraformTemplateProcessor 5 | from processor.template_processor.kubernetes_template_processor import KubernetesTemplateProcessor 6 | from processor.template_processor.yaml_template_processor import YamlTemplateProcessor 7 | from processor.template_processor.json_template_processor import JsonTemplateProcessor 8 | from processor.template_processor.helm_chart_template_processor import HelmChartTemplateProcessor 9 | from processor.template_processor.ack_processor import AckTemplateProcessor 10 | from processor.template_processor.aso_processor import AsoTemplateProcessor 11 | from processor.template_processor.kcc_processor import KccTemplateProcessor 12 | from processor.template_processor.base.base_template_processor import TemplateProcessor 13 | 14 | TEMPLATE_NODE_TYPES = { 15 | "cloudformation": AWSTemplateProcessor, 16 | "arm" : AzureTemplateProcessor, 17 | "deploymentmanager" : GoogleTemplateProcessor, 18 | "terraform" : TerraformTemplateProcessor, 19 | "kubernetesObjectFiles" : KubernetesTemplateProcessor, 20 | "yaml" : YamlTemplateProcessor, 21 | "json": JsonTemplateProcessor, 22 | "helmChart" : HelmChartTemplateProcessor, 23 | "ack" : AckTemplateProcessor, # AWS Controllers for Kubernetes 24 | "aso" : AsoTemplateProcessor, # Azure Service Operator 25 | "kcc" : KccTemplateProcessor, # GCP Kubernetes Config Connector 26 | "common" : TemplateProcessor # TemplateProcessor 27 | } -------------------------------------------------------------------------------- /src/processor/helper/utils/cli_terraform_to_json.py: -------------------------------------------------------------------------------- 1 | """ 2 | Common utility file to convert terraform to json files. 
3 | """ 4 | import argparse 5 | import sys 6 | import atexit 7 | from processor.logging.log_handler import getlogger 8 | from processor.helper.config.rundata_utils import init_currentdata, delete_currentdata 9 | from processor.helper.file.file_utils import exists_file 10 | from processor.helper.json.json_utils import save_json_to_file 11 | from processor.connector.snapshot_custom import convert_to_json 12 | 13 | 14 | 15 | 16 | def convert_terraform_to_json(terraform, output=None): 17 | if exists_file(terraform): 18 | if not output: 19 | parts = terraform.rsplit('.', -1) 20 | output = '%s.json' % parts[0] 21 | _, json_data = convert_to_json(terraform, 'terraform') 22 | if json_data: 23 | save_json_to_file(json_data, output) 24 | 25 | 26 | def terraform_to_json_main(arg_vals=None): 27 | """Main driver utility for converting terraform to json files.""" 28 | logger = getlogger() 29 | logger.info("Comand: '%s %s'", sys.executable.rsplit('/', 1)[-1], ' '.join(sys.argv)) 30 | cmd_parser = argparse.ArgumentParser("Convert terraform to json files") 31 | cmd_parser.add_argument('terraform', action='store', 32 | help='Full path of the terraform file.') 33 | cmd_parser.add_argument('--output', action='store', default=None, 34 | help='Path to store the file.') 35 | args = cmd_parser.parse_args(arg_vals) 36 | # Delete the rundata at the end of the script. 37 | atexit.register(delete_currentdata) 38 | logger.info(args) 39 | init_currentdata() 40 | convert_terraform_to_json(args.terraform, args.output) 41 | return 0 42 | -------------------------------------------------------------------------------- /docs/docs/connectors/teams.md: -------------------------------------------------------------------------------- 1 | # Teams structure file 2 | 3 | Integration of Prancer Web with **Teams** for notifications management based on Prancer CSPM or PAC findings. 4 | 5 | The integration with **Teams** is as follows: 6 | 7 | 1. Each collection in the collection pages(**Infra/PAC** Management) can be integrated with **Teams** 8 | 2. Choose the dropdown option from the collection and select `Third Party Integration`. 9 | 3. Select the `Teams` 10 | 11 | When the user clicks on the integration service, a new page/modal opens with pre-populated fields for the ticket. User can edit as per convenience. On submit, The notifications shall be enabled for the specified collection. 12 | 13 | 14 | Here is a sample of the **Teams** structure file: 15 | 16 | ```json 17 | { 18 | "fileType": "structure", 19 | "type": "teams", 20 | "webhook": "" 21 | } 22 | ``` 23 | 24 | | Key |Value Description | 25 | | ------------- |:-------------: | 26 | |webhook| Created webhook from teams to be pasted here| 27 | 28 | sample file: 29 | 30 | ```json 31 | { 32 | "fileType": "structure", 33 | "type": "teams", 34 | "webhook": "https://prancerenterprise.webhook.office.com/webhookb2/***" 35 | } 36 | ``` 37 | 38 | ## Generate Webhook URL from teams 39 | 40 | Once you have logged in to the **Teams** follow these steps to generate the AuthToken: 41 | 42 | 1. Go to the Teams 43 | 2. Select the `Teams` from the left-panel 44 | 3. Select the Channel that should receive the notifications 45 | 4. Click on the `More options`(i.e. 3 dots) 46 | 5. Select the `Connectors` 47 | 6. Select the `Incoming Webhook`, and click on `webhook URL` 48 | 49 | By following the steps you'll be able to copy the Webhook URL. This url will be available to copy anytime you want, unless you have revoked the webhook URL. 
50 | -------------------------------------------------------------------------------- /docs/docs/api/api_overview.md: -------------------------------------------------------------------------------- 1 | **How to use Prancer Enterprise APIs** 2 | === 3 | 4 | Here are the steps to generate an access token and access the Prancer Enterprise APIs. 5 | 6 | ## 1) Generate API Token: 7 | 8 | API tokens are used to connect to the Prancer tenant programmatically. You can generate multiple tokens for different purposes. 9 | The following use cases can be covered by generating tokens: 10 | 11 | - connecting to the Prancer API 12 | - running the VSCode extension 13 | - connecting from CI tools 14 | 15 | ### How to generate tokens 16 | Log in to the Prancer Portal. On the user menu (top right), open the drop-down and select `User Access Token`. 17 | 18 | ![../images/token/token1.png](../images/token/token1.png) 19 | 20 | Click on `New Token` to generate a new token. Make sure you keep the token somewhere safe; you cannot retrieve it after you close the page. 21 | 22 | ![../images/token/token2.png](../images/token/token2.png) 23 | 24 | ## 2) Call the authentication API 25 | 26 | Call the [Validate Access Token](authentication.md#validate-access-token) API to validate your token and get the authenticated JWT Bearer token to access the APIs. 27 | 28 | ![../images/token/validate_access_token.png](../images/token/validate_access_token.png) 29 | 30 | ## 3) Access the Prancer Enterprise API 31 | 32 | You now have the authenticated `JWT Bearer token` from the Validate Access Token API. You can use it to call the Prancer Enterprise APIs. For example: 33 | 34 | - Call the [Get Collection List](collection.md#collection-get) API to get the list of all collections. 35 | 36 | ![../images/api/collection_list.png](../images/api/collection_list.png) 37 | 38 | - Call the [Run compliance](compliance.md#compliance-run-compliance) API to run compliance on a collection, as sketched below.
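For illustration, the general call pattern with Python `requests` — the base URL and endpoint path below are placeholders, not the documented API routes; use the paths from the linked API pages and the screenshots that follow:

```python
import requests  # assumes the requests package is available

BASE_URL = "https://portal.prancer.io/api"  # hypothetical tenant API base URL
BEARER_TOKEN = "<JWT Bearer token from the Validate Access Token call>"

# Every Prancer Enterprise API call carries the JWT in the Authorization header.
resp = requests.get(
    BASE_URL + "/collection/list",  # illustrative path; see the API reference
    headers={"Authorization": "Bearer " + BEARER_TOKEN},
)
print(resp.status_code, resp.json())
```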
39 | 40 | ![../images/api/run_compliance.png](../images/api/run_compliance.png) -------------------------------------------------------------------------------- /src/processor/comparison/comparisonantlr/test_comparator.py: -------------------------------------------------------------------------------- 1 | import sys 2 | from antlr4 import InputStream 3 | from antlr4 import CommonTokenStream 4 | from antlr4.error.ErrorListener import ErrorListener, ConsoleErrorListener 5 | from processor.comparison.comparisonantlr.comparatorLexer import comparatorLexer 6 | from processor.comparison.comparisonantlr.comparatorParser import comparatorParser 7 | from processor.comparison.comparisonantlr.rule_interpreter import RuleInterpreter 8 | from processor.logging.log_handler import getlogger 9 | 10 | logger = getlogger() 11 | class MyConsoleErrorListener(ErrorListener): 12 | 13 | def syntaxError(self, recognizer, offendingSymbol, line, column, msg, e): 14 | logger.info("******line " + str(line) + ":" + str(column) + " " + msg) 15 | 16 | ConsoleErrorListener.INSTANCE = MyConsoleErrorListener() 17 | 18 | def main(argv): 19 | # input = FileStream(argv[1]) 20 | try: 21 | with open(argv[1]) as f: 22 | for line in f: 23 | code = line.rstrip() 24 | print('#' * 75) 25 | print('Actual Rule: ', code) 26 | inputStream = InputStream(code) 27 | lexer = comparatorLexer(inputStream) 28 | stream = CommonTokenStream(lexer) 29 | parser = comparatorParser(stream) 30 | tree = parser.expression() 31 | print(tree.toStringTree(recog=parser)) 32 | children = [] 33 | for child in tree.getChildren(): 34 | children.append((child.getText())) 35 | print('*' * 50) 36 | print("All the parsed tokens: ", children) 37 | r_i = RuleInterpreter(children) 38 | return True 39 | except Exception: 40 | return False 41 | 42 | 43 | if __name__ == '__main__': 44 | main(sys.argv) 45 | -------------------------------------------------------------------------------- /docs/docs/connectors/kubernetes.md: -------------------------------------------------------------------------------- 1 | # Kubernetes structure file 2 | 3 | The **Kubernetes** connector allows you to inspect your **Kubernetes** cluster using its API. The connector is a wrapper around the **Kubernetes** REST API.
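Conceptually, the connector authenticates with a service-account token and reads objects over the cluster's HTTPS endpoint. A rough sketch of a single read, assuming a reachable API server and a valid token (the namespace and resource here are only examples):

```python
import requests  # assumes the requests package is available

cluster_url = "https://prancer-kube:6443"  # clusterUrl from the structure file
token = "<service-account secret token>"   # secret from the structure file
namespace = "default"

# Standard Kubernetes REST path for listing pods in a namespace.
resp = requests.get(
    f"{cluster_url}/api/v1/namespaces/{namespace}/pods",
    headers={"Authorization": f"Bearer {token}"},
    verify=False,  # only for self-signed clusters; prefer passing a CA bundle
)
for item in resp.json().get("items", []):
    print(item["metadata"]["name"])
```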
4 | 5 | Here is a sample of the Kubernetes structure file: 6 | 7 | ```json 8 | { 9 | "filetype": "", 10 | "type": "", 11 | "companyName": "", 12 | "clusterName": "", 13 | "clusterUrl": "", 14 | "namespaces": [ 15 | { 16 | "namespace": "", 17 | "serviceAccounts": [ 18 | { 19 | "name": "", 20 | "secret":"" 21 | } 22 | ] 23 | } 24 | ] 25 | } 26 | ``` 27 | 28 | | Key |Value Description | 29 | | ------------- |:-------------: | 30 | |filetype |structure| 31 | |type|kubernetes| 32 | |companyName|your company name| 33 | |clusterName|your cluster name| 34 | |clusterUrl|a URL like `https://<cluster-ip>:6443`, or you can use a domain name| 35 | |namespace|the namespace to connect to for pulling data| 36 | |name|the service account name which has access to the namespace| 37 | |secret|the service account token (secret)| 38 | 39 | sample file: 40 | 41 | ```json 42 | { 43 | "filetype": "structure", 44 | "type": "kubernetes", 45 | "companyName": "Company Name", 46 | "clusterName": "prancer-prod-prod-eastus2-aksprod01", 47 | "clusterUrl": "https://prancer-kube:6443", 48 | "namespaces": [ 49 | { 50 | "namespace": "default", 51 | "serviceAccounts": [ 52 | { 53 | "name": "prancer_ro", 54 | "secret":"" 55 | } 56 | ] 57 | } 58 | ] 59 | } 60 | ``` 61 | -------------------------------------------------------------------------------- /src/processor/connector/special_compliance/compliances.py: -------------------------------------------------------------------------------- 1 | 2 | COMPLIANCES = [{ 3 | "masterTestId":"SENSITIVE_EXTENSION_TEST", 4 | "masterSnapshotId" : [ 5 | "ALL" 6 | ], 7 | "type":"python", 8 | "rule":"file(sensitive_extension.py)", 9 | "evals":[ 10 | { 11 | "id":"PR-COM-SEN-EXT-001", 12 | "eval":"data.rule.sensitive_extensions", 13 | "message":"data.rule.sensitive_extensions_err", 14 | "remediationDescription":"You need to add these extensions to your .gitignore file to prevent them from being checked in to your repository. These files should be moved to a vault, managed securely, and then referenced in your code", 15 | "remediationFunction":"" 16 | } 17 | ], 18 | "severity":"Medium", 19 | "title":"Sensitive files should not be checked into the git repo", 20 | "description":"Certain file types contain sensitive information and should not be checked into the git repositories. You need to move these files to a vault and reference them from your code. Prancer checks for the following file types to make sure they are not in the repo:<br>
*.PFX or *.P12 - Personal Information Exchange Format<br>
*.PEM - a Base64 encoded DER certificate<br>
*.CER or *.CRT - Base64-encoded or DER-encoded binary X.509 Certificate<br>
*.CRL - Certificate Revocation List<br>
*.CSR - Certificate Signing Request<br>
*.DER - DER-encoded binary X.509 Certificate<br>
*.P7B or *.P7R or *.SPC - Cryptographic Message Syntax Standard<br>
*.KEY - key files", 21 | "tags":[ 22 | { 23 | "cloud":"git", 24 | "compliance":[ 25 | "Best Practice" 26 | ], 27 | "service":[ 28 | "common" 29 | ] 30 | } 31 | ], 32 | "resourceTypes":[ 33 | "sensitive_extension" 34 | ], 35 | "status":"enable" 36 | }] -------------------------------------------------------------------------------- /.github/workflows/deploy_docker.yaml: -------------------------------------------------------------------------------- 1 | name: Docker build and publish 2 | on: 3 | workflow_dispatch: 4 | inputs: 5 | branch: 6 | required: true 7 | description: 'branch to build' 8 | default: 'master' 9 | 10 | jobs: 11 | build: 12 | name: build and publish 13 | runs-on: ubuntu-latest 14 | 15 | steps: 16 | - name: Set up QEMU 17 | uses: docker/setup-qemu-action@v1 18 | 19 | - name: Set up Docker Buildx 20 | uses: docker/setup-buildx-action@v1 21 | 22 | - name: Checkout code 23 | uses: actions/checkout@v2 24 | with: 25 | repository: prancer-io/cloud-validation-framework 26 | ref: ${{ github.event.inputs.branch }} 27 | token: ${{ secrets.GIT_TOKEN }} 28 | 29 | - name: read_version 30 | id: read_version 31 | run: | 32 | VERSION=$(cat setup.py | grep version=) 33 | echo $VERSION > output 34 | sed -i "s/'//g" output 35 | sed -i 's/"//g' output 36 | sed -i 's/,//g' output 37 | sed -i 's/version=//g' output 38 | sed -i 's/\n//g' output 39 | VERSION=$(cat output | tr -d '\n') 40 | echo Application version $VERSION 41 | echo ::set-output name=VERSION::$VERSION 42 | 43 | - name: Docker build 44 | id: docker_build 45 | run: | 46 | docker build -t prancer/prancer-basic:${{ steps.read_version.outputs.VERSION }}\ 47 | --build-arg APP_VERSION=${{ steps.read_version.outputs.VERSION }} -f dockerfiles/Dockerfile . 48 | 49 | - name: Docker push 50 | id: docker_push 51 | run: | 52 | # Wait X seconds for pypi to have the binary ready 53 | docker login -u ${{ secrets.DOCKER_USER }} -p '${{ secrets.DOCKER_PASSWORD }}' 54 | docker push prancer/prancer-basic:${{ steps.read_version.outputs.VERSION }} 55 | -------------------------------------------------------------------------------- /docs/docs/limitations/aws-cloudformation-template-limitations.md: -------------------------------------------------------------------------------- 1 | # AWS CloudFormation Unsupported Scenarios 2 | 3 | ### We apply our compliance rules to your YAML and JSON templates to find security threats; to do that, we have to process the parameters, functions, and attributes of your template. We are able to process most of them, but there are some functions we can't process because they contain values that only become available after a resource is created. Therefore, we put those attributes as-is in the generated snapshot. 4 | <br>
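For example (an illustrative snippet), a property that uses `Fn::GetAtt` cannot be resolved before the resource exists, so the generated snapshot keeps the function call verbatim instead of a concrete ARN:

```json
{
    "Endpoint": { "Fn::GetAtt": ["MyQueue", "Arn"] }
}
```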
5 | 6 | #### Here is the [link](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/rules-section-structure.html) to the AWS CloudFormation supported functions<br>

7 | 8 | **List of Unsupported Scenarios** 9 | | Function Name | Note | 10 | | ------------- | ---- | 11 | | Fn::EachMemberEquals | Some of the values can only be resolved at runtime | 12 | | Fn::EachMemberIn | Some of the values can only be resolved at runtime | 13 | | Fn::RefAll | Returns all values of a given parameter type, for example all AWS::EC2::VPC::Id values | 14 | | Fn::ValueOf | Returns an attribute value or list of values for a specific parameter and attribute | 15 | | Fn::ValueOfAll | Returns an attribute value or list of values for a specific parameter and attribute | 16 | | Fn::Transform | Specifies one or more macros that AWS CloudFormation uses to process your template | 17 | | Fn::GetAtt | Returns the value of an attribute only after the resource has been created | 18 | | Fn::GetAZs | Returns an array that lists the Availability Zones for a specified region | 19 | | Fn::ImportValue | Returns the value of an output exported by another stack | 20 | | Fn::Contains | Some resources depend on the output of another resource, which is only known at runtime | 21 | 22 | <br>
23 | 24 | **All Pseudo parameters reference** 25 | - AWS::AccountId 26 | - AWS::NotificationARNs 27 | - AWS::NoValue 28 | - AWS::Partition 29 | - AWS::Region 30 | - AWS::StackId 31 | - AWS::StackName 32 | - AWS::URLSuffix -------------------------------------------------------------------------------- /src/processor/template_processor/ack_processor.py: -------------------------------------------------------------------------------- 1 | from yaml.loader import FullLoader 2 | from processor.logging.log_handler import getlogger 3 | from processor.template_processor.base.base_template_processor import TemplateProcessor 4 | from processor.helper.yaml.yaml_utils import yaml_from_file 5 | 6 | logger = getlogger() 7 | 8 | class AckTemplateProcessor(TemplateProcessor): 9 | """ 10 | Template processor for ACK (AWS Controllers for Kubernetes) templates 11 | """ 12 | 13 | def __init__(self, node, **kwargs): 14 | super().__init__(node, tosave=False, **kwargs) 15 | 16 | def is_template_file(self, file_path): 17 | """ 18 | Check for a valid YAML template file. 19 | """ 20 | if len(file_path.split(".")) > 0 and file_path.split(".")[-1] == "yaml": 21 | json_data = yaml_from_file(file_path, loader=FullLoader) 22 | return True if (json_data) else False 23 | return False 24 | 25 | def process_template(self, paths): 26 | """ 27 | process the files stored at specified paths and returns the template 28 | """ 29 | template_json = None 30 | 31 | if paths and isinstance(paths, list): 32 | template_file_path = "" 33 | deployment_file_path = "" 34 | 35 | for path in paths: 36 | file_path = '%s/%s' % (self.dir_path, path) 37 | logger.info("Fetching data : %s ", path) 38 | if self.is_template_file(file_path): 39 | template_file_path = file_path 40 | 41 | self.template_file = template_file_path 42 | if template_file_path: 43 | template_json = yaml_from_file(template_file_path, loader=FullLoader) 44 | if template_json: 45 | self.contentType = 'yaml' 46 | if template_json.get("kind"): 47 | self.resource_types = [template_json.get("kind").lower()] 48 | return template_json -------------------------------------------------------------------------------- /src/processor/template_processor/aso_processor.py: -------------------------------------------------------------------------------- 1 | from yaml.loader import FullLoader 2 | from processor.logging.log_handler import getlogger 3 | from processor.template_processor.base.base_template_processor import TemplateProcessor 4 | from processor.helper.yaml.yaml_utils import yaml_from_file 5 | 6 | logger = getlogger() 7 | 8 | class AsoTemplateProcessor(TemplateProcessor): 9 | """ 10 | Template processor for ASO (Azure Service Operator) templates 11 | """ 12 | 13 | def __init__(self, node, **kwargs): 14 | super().__init__(node, tosave=False, **kwargs) 15 | 16 | def is_template_file(self, file_path): 17 | """ 18 | Check for a valid YAML template file. 19 | """ 20 | if len(file_path.split(".")) > 0 and file_path.split(".")[-1] == "yaml": 21 | json_data = yaml_from_file(file_path, loader=FullLoader) 22 | return True if (json_data) else False 23 | return False 24 | 25 | def process_template(self, paths): 26 | """ 27 | process the files stored at specified paths and returns the template 28 | """ 29 | template_json = None 30 | 31 | if paths and isinstance(paths, list): 32 | template_file_path = "" 33 | deployment_file_path = "" 34 | 35 | for path in paths: 36 | file_path = '%s/%s' % (self.dir_path, path) 37 | logger.info("Fetching data : %s ", path) 38 | if self.is_template_file(file_path): 39 |
template_file_path = file_path 40 | 41 | self.template_file = template_file_path 42 | if template_file_path: 43 | template_json = yaml_from_file(template_file_path, loader=FullLoader) 44 | if template_json: 45 | self.contentType = 'yaml' 46 | if template_json.get("kind"): 47 | self.resource_types = [template_json.get("kind").lower()] 48 | return template_json -------------------------------------------------------------------------------- /src/processor/template_processor/kcc_processor.py: -------------------------------------------------------------------------------- 1 | from yaml.loader import FullLoader 2 | from processor.logging.log_handler import getlogger 3 | from processor.template_processor.base.base_template_processor import TemplateProcessor 4 | from processor.helper.yaml.yaml_utils import yaml_from_file 5 | 6 | logger = getlogger() 7 | 8 | class KccTemplateProcessor(TemplateProcessor): 9 | """ 10 | Template processor for KCC (GCP Kubernetes Config Connector) templates 11 | """ 12 | 13 | def __init__(self, node, **kwargs): 14 | super().__init__(node, tosave=False, **kwargs) 15 | 16 | def is_template_file(self, file_path): 17 | """ 18 | Check for a valid YAML template file. 19 | """ 20 | if len(file_path.split(".")) > 0 and file_path.split(".")[-1] == "yaml": 21 | json_data = yaml_from_file(file_path, loader=FullLoader) 22 | return True if (json_data) else False 23 | return False 24 | 25 | def process_template(self, paths): 26 | """ 27 | process the files stored at specified paths and returns the template 28 | """ 29 | template_json = None 30 | 31 | if paths and isinstance(paths, list): 32 | template_file_path = "" 33 | deployment_file_path = "" 34 | 35 | for path in paths: 36 | file_path = '%s/%s' % (self.dir_path, path) 37 | logger.info("Fetching data : %s ", path) 38 | if self.is_template_file(file_path): 39 | template_file_path = file_path 40 | 41 | self.template_file = template_file_path 42 | if template_file_path: 43 | template_json = yaml_from_file(template_file_path, loader=FullLoader) 44 | if template_json: 45 | self.contentType = 'yaml' 46 | if template_json.get("kind"): 47 | self.resource_types = [template_json.get("kind").lower()] 48 | return template_json -------------------------------------------------------------------------------- /docs/docs/exclusions/exclusion.md: -------------------------------------------------------------------------------- 1 | In an exclusion file, we define the test cases that should be skipped, based on the resource path, test IDs, or both. 2 | 3 | There are three types of exclusions supported: 4 | 5 | - test exclusion: The `exclusionType` is set to `test` and the test to skip is set in `masterTestID` 6 | - resource exclusion: The `exclusionType` is set to `resource` and the resource paths to skip are set in the `paths` array 7 | - single exclusion: The `exclusionType` is set to `single` and both the `masterTestID` and `paths` fields must be present; the exclusion applies to that specific combination.
8 | 9 | ``` json 10 | { 11 | "companyName": "", 12 | "container": "<container>", 13 | "fileType": "Exclusion", 14 | "exclusions": [ 15 | { 16 | "exclusionType": "resource", 17 | "paths": [ 18 | "<path of the resource>" 19 | ] 20 | }, 21 | { 22 | "exclusionType": "single", 23 | "masterTestID": "<TEST_ID>", 24 | "paths": [ 25 | "<path of the resource>" 26 | ] 27 | }, 28 | { 29 | "exclusionType": "test", 30 | "masterTestID": "<TEST_ID>" 31 | } 32 | ] 33 | } 34 | ``` 35 | 36 | Remember to substitute all values in this file that look like a `<tag>`, such as: 37 | 38 | | Tag | Value Description | 39 | |-----|-------------------| 40 | | `<path of the resource>` | the path of the resource to exclude | 41 | | `<TEST_ID>` | the `masterTestID` of the test to exclude | 42 | 43 | Here is an example of that: 44 | 45 | ```json 46 | { 47 | "companyName": "", 48 | "container": "<container>", 49 | "fileType": "Exclusion", 50 | "exclusions": [ 51 | { 52 | "exclusionType": "resource", 53 | "paths": [ 54 | "/test-multi-yaml/multiple-yamls/multiple-helm-response_multiple_yaml_2.yaml" 55 | ] 56 | }, 57 | { 58 | "exclusionType": "single", 59 | "masterTestID": "TEST_POD_1", 60 | "paths": [ 61 | "/deployment/deployment-definition.yaml" 62 | ] 63 | }, 64 | { 65 | "exclusionType": "test", 66 | "masterTestID": "TEST_POD_4" 67 | } 68 | ] 69 | } 70 | ``` 71 | -------------------------------------------------------------------------------- /docs/docs/connectors/slack.md: -------------------------------------------------------------------------------- 1 | # Slack structure file 2 | 3 | Integration of Prancer Web with **Slack** for notification management based on Prancer CSPM or PAC findings. 4 | 5 | The integration with **Slack** is as follows: 6 | 7 | 1. Each collection on the collection pages (**Infra/PAC** Management) can be integrated with **Slack** 8 | 2. Choose the dropdown option from the collection and select `Third Party Integration`. 9 | 3. Select `Slack` 10 | 11 | When the user clicks on the integration service, a new page/modal opens with pre-populated fields. The user can edit these as needed. On submit, notifications are enabled for the specified collection. 12 | 13 | 14 | Here is a sample of the **Slack** structure file: 15 | 16 | ```json 17 | { 18 | "fileType": "structure", 19 | "type": "slack", 20 | "webhook": "" 21 | } 22 | ``` 23 | 24 | | Key |Value Description | 25 | | ------------- |:-------------: | 26 | |webhook| The webhook URL created in Slack, pasted here| 27 | 28 | sample file: 29 | 30 | ```json 31 | { 32 | "fileType": "structure", 33 | "type": "slack", 34 | "webhook": "https://hooks.slack.com/services/***" 35 | } 36 | ``` 37 | 38 | ## Generate the webhook URL from Slack 39 | 40 | Once you have logged in to **Slack**, follow these steps to generate the webhook URL: 41 | 42 | 1. Go to the Home page 43 | 2. Drop down using the option shown beside `<workspace name>`, select `Settings & administration`, and select `Manage apps` 44 | 3. Click on `Build` from the top-right corner. 45 | 4. Select `Create New App` if you have not created one. (To generate a webhook URL you must have an app.) 46 | 5. Click on the app that you have created 47 | 6. Select `Incoming Webhooks`, which is underneath `Features` 48 | 7. Click on `Add New Webhook to Workspace` if one is not generated already. 49 | 8. Select the channel that should receive the notifications 50 | 51 | By following these steps you'll be able to copy the webhook URL. The URL remains available to copy at any time until you revoke it.
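As with the Teams integration, you can sanity-check the webhook URL before adding it to the structure file. A minimal sketch using Python `requests` (substitute your own webhook URL):

```python
import requests  # assumes the requests package is available

# Slack incoming webhooks accept a JSON payload with a "text" field.
resp = requests.post(
    "https://hooks.slack.com/services/***",  # your incoming webhook URL
    json={"text": "Prancer webhook test"},
)
print(resp.status_code, resp.text)  # Slack replies 200 with body "ok" on success
```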
52 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | """A setup module for prancer-basic.""" 2 | 3 | 4 | # setuptools for distribution 5 | from setuptools import find_packages, setup 6 | import os 7 | from src import processor 8 | 9 | 10 | with open('requirements.txt') as f: 11 | required = f.read().splitlines() 12 | 13 | LONG_DESCRIPTION = """ 14 | Prancer Basic allows users to run cloud validation. 15 | The supported cloud frameworks are Azure, AWS and Git. 16 | """ 17 | 18 | setup( 19 | name='prancer-basic', 20 | # also update the version in processor.__init__.py file 21 | version='3.0.28', 22 | description='Prancer Basic, http://prancer.io/', 23 | long_description=LONG_DESCRIPTION, 24 | license = "BSD", 25 | # The project's main homepage. 26 | url='https://github.com/prancer-io/cloud-validation-framework', 27 | # Author(s) details 28 | author='Farshid M/Ajey Khanapuri', 29 | author_email='ajey.khanapuri@liquware.com', 30 | classifiers=[ 31 | "Development Status :: 3 - Alpha", 32 | 'Topic :: Software Development :: Libraries :: Application Frameworks', 33 | 'Topic :: Software Development :: Libraries :: Python Modules', 34 | "License :: OSI Approved :: BSD License", 35 | ], 36 | packages=find_packages(where="src", 37 | exclude=['log', 'rundata', 'utilities', 'tests']), 38 | include_package_data=True, 39 | package_dir={'': 'src'}, 40 | setup_requires=['ply==3.10'], 41 | install_requires=required, 42 | python_requires='>=3.0', 43 | entry_points={ 44 | 'console_scripts': [ 45 | 'validator = processor.helper.utils.cli_validator:validator_main', 46 | 'prancer = processor.helper.utils.cli_validator:validator_main', 47 | 'populate_json = processor.helper.utils.cli_populate_json:populate_json_main', 48 | 'terraform_to_json = processor.helper.utils.cli_terraform_to_json:terraform_to_json_main', 49 | 'register_key_in_azure_vault = processor.helper.utils.cli_generate_azure_vault_key:generate_azure_vault_key' 50 | ], 51 | } 52 | ) 53 | 54 | -------------------------------------------------------------------------------- /tests/jsons/sample_snapshots.json: -------------------------------------------------------------------------------- 1 | [{ 2 | "_id" : "5ccbb91174562101c3ef604e", 3 | "checksum" : "64c44dbce45593d36483a5f073c1743a", 4 | "collection" : "security_groups", 5 | "json" : { 6 | "Resources" : { 7 | "PrancerTutorialSecGroup" : { 8 | "Properties" : { 9 | "GroupDescription" : "Slightly more complex SG to show rule matching", 10 | "GroupName" : "prancer-tutorial-sg", 11 | "SecurityGroupIngress" : [ 12 | { 13 | "CidrIp" : "0.0.0.0/0", 14 | "Description" : "Allow anyone to access this port", 15 | "FromPort" : 80, 16 | "IpProtocol" : "tcp", 17 | "ToPort" : 80 18 | }, 19 | { 20 | "CidrIp" : "0.0.0.0/0", 21 | "Description" : "Allow anyone to access this port from outside", 22 | "FromPort" : 443, 23 | "IpProtocol" : "tcp", 24 | "ToPort" : 443 25 | }, 26 | { 27 | "CidrIp" : "172.16.0.0/16", 28 | "Description" : "Allow anyone from the VPC to access SSH ports", 29 | "FromPort" : 22, 30 | "IpProtocol" : "tcp", 31 | "ToPort" : 22 32 | } 33 | ], 34 | "VpcId" : { 35 | "Ref" : "PrancerTutorialVpc" 36 | } 37 | }, 38 | "Type" : "AWS::EC2::SecurityGroup" 39 | }, 40 | "PrancerTutorialVpc" : { 41 | "Properties" : { 42 | "CidrBlock" : "172.16.0.0/16", 43 | "EnableDnsHostnames" : true, 44 | "EnableDnsSupport" : true, 45 | "InstanceTenancy" : "default" 46 | }, 47 | "Type" :
"AWS::EC2::VPC" 48 | } 49 | } 50 | }, 51 | "node" : { 52 | "collection" : "security_groups", 53 | "path" : "devops/cf/mytemplate.json", 54 | "snapshotId" : "1", 55 | "type" : "json" 56 | }, 57 | "path" : "devops/cf/mytemplate.json", 58 | "queryuser" : "", 59 | "reference" : "master", 60 | "snapshotId" : "1", 61 | "source" : "gitConnector", 62 | "structure" : "git", 63 | "timestamp" : 1556855057031 64 | }] 65 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .vscode/ 11 | .Python 12 | build/ 13 | develop-eggs/ 14 | dist/ 15 | downloads/ 16 | eggs/ 17 | .eggs/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | var/ 23 | wheels/ 24 | *.egg-info/ 25 | .installed.cfg 26 | *.egg 27 | MANIFEST 28 | 29 | # PyInstaller 30 | # Usually these files are written by a python script from a template 31 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 32 | *.manifest 33 | *.spec 34 | 35 | # Installer logs 36 | pip-log.txt 37 | pip-delete-this-directory.txt 38 | 39 | # Unit test / coverage reports 40 | htmlcov/ 41 | .tox/ 42 | .coverage 43 | .coverage.* 44 | .cache 45 | nosetests.xml 46 | coverage.xml 47 | *.cover 48 | .hypothesis/ 49 | .pytest_cache/ 50 | rundata 51 | 52 | # Translations 53 | *.mo 54 | *.pot 55 | 56 | # Django stuff: 57 | *.log 58 | local_settings.py 59 | db.sqlite3 60 | 61 | # Flask stuff: 62 | instance/ 63 | .webassets-cache 64 | 65 | # Scrapy stuff: 66 | .scrapy 67 | 68 | # Sphinx documentation 69 | docs/_build/ 70 | 71 | # PyBuilder 72 | target/ 73 | 74 | # Jupyter Notebook 75 | .ipynb_checkpoints 76 | 77 | # pyenv 78 | .python-version 79 | 80 | # celery beat schedule file 81 | celerybeat-schedule 82 | 83 | # SageMath parsed files 84 | *.sage.py 85 | 86 | # Environments 87 | .env 88 | .venv 89 | env/ 90 | venv/ 91 | penv/* 92 | githubenv/* 93 | ENV/ 94 | env.bak/ 95 | venv.bak/ 96 | 97 | # Spyder project settings 98 | .spyderproject 99 | .spyproject 100 | 101 | # Rope project settings 102 | .ropeproject 103 | 104 | # mkdocs documentation 105 | /site 106 | 107 | # mypy 108 | .mypy_cache/ 109 | .idea 110 | .python-version 111 | tmp/* 112 | logs/*.log 113 | log/*.log 114 | junit/* 115 | rundata/rundata 116 | .coverage 117 | p3_requirements.txt 118 | python3env/* 119 | penv/* 120 | __pycache__ 121 | *.pyc 122 | *.interp 123 | *.tokens 124 | configdata/mysubscription.json 125 | -------------------------------------------------------------------------------- /tests/processor/helper/file/test_file_utils.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | import os.path 3 | import tempfile 4 | from processor.helper.file.file_utils import exists_dir 5 | from processor.helper.file.file_utils import exists_file 6 | from processor.helper.file.file_utils import remove_file 7 | from processor.helper.file.file_utils import mkdir_path 8 | 9 | 10 | def mock_dirs(): 11 | return ['/tmp', '~/tmp', '~/abc'] 12 | 13 | 14 | def mock_filenames(): 15 | return ['/tmp/a', '~/tmp/a.txt', '~/abc/b.ini'] 16 | 17 | 18 | def mock_exists_file_check(fname): 19 | fnames = mock_filenames() 20 | return True if fname in fnames else False 21 | 22 | 23 | def mock_is_file_check(fname): 24 | fnames = mock_filenames() 25 | return True if fname in fnames else 
False 26 | 27 | 28 | def mock_exists_dir_check(dirname): 29 | dirs = mock_dirs() 30 | return True if dirname in dirs else False 31 | 32 | 33 | def mock_is_dir_check(dirname): 34 | dirs = mock_dirs() 35 | return True if dirname in dirs else False 36 | 37 | 38 | def test_none_directory(): 39 | assert False == exists_dir(None) 40 | 41 | 42 | def test_exists_dir(monkeypatch): 43 | monkeypatch.setattr(os.path, 'exists', mock_exists_dir_check) 44 | monkeypatch.setattr(os.path, 'isdir', mock_is_dir_check) 45 | assert True == exists_dir('~/tmp') 46 | assert True == exists_dir('~/abc') 47 | assert False == exists_dir('/xyz') 48 | 49 | 50 | def test_none_file(): 51 | assert False == exists_file(None) 52 | 53 | 54 | def test_exists_file(monkeypatch): 55 | monkeypatch.setattr(os.path, 'exists', mock_exists_file_check) 56 | monkeypatch.setattr(os.path, 'isfile', mock_is_file_check) 57 | assert True == exists_file('/tmp/a') 58 | assert False == exists_file('/tmp/b') 59 | 60 | 61 | def test_remove_file(create_temp_file): 62 | fname = create_temp_file('a.txt') 63 | assert True == remove_file(fname) 64 | assert False == remove_file('/tmp/axzs') 65 | 66 | 67 | def ignoretest_mkdir_path(create_temp_dir): 68 | newpath = create_temp_dir() 69 | assert True == mkdir_path('%s/a/b/c' % newpath) 70 | assert False == mkdir_path('/a/b/c') -------------------------------------------------------------------------------- /docs/docs/extra.css: -------------------------------------------------------------------------------- 1 | .wy-side-nav-search { 2 | background-color: #eaeaea; 3 | } 4 | 5 | .wy-nav-top a { 6 | color: white !important; 7 | } 8 | 9 | div.logo { 10 | text-align: center; 11 | margin-bottom: 1em; 12 | } 13 | 14 | div.logo img { 15 | max-height: 5em; 16 | max-width: 75%; 17 | display: block; 18 | margin: auto; 19 | height: auto; 20 | width: auto; 21 | background-color: transparent; 22 | padding: 0; 23 | border-radius: 0; 24 | } 25 | 26 | p { 27 | color: #11337b; 28 | } 29 | 30 | li { 31 | color: #11337b; 32 | } 33 | 34 | h1 { 35 | color: #ea171a; 36 | } 37 | 38 | h2 { 39 | color: #11337b; 40 | } 41 | 42 | a { 43 | color: #9d0406; 44 | } 45 | 46 | a:active, a:hover, a:visited { 47 | color: #cc0000; 48 | } 49 | 50 | .toctree-l1 a { 51 | color: white; 52 | } 53 | 54 | .toctree-l1:hover { 55 | background: #08408e; 56 | color: #9d0406; 57 | } 58 | 59 | .wy-menu-vertical li.current { 60 | background: #e7f2fa !important; 61 | } 62 | 63 | .wy-side-nav-search input[type=text] { 64 | border-color: #838383; 65 | } 66 | 67 | .rst-content blockquote { 68 | padding: 12px 24px; 69 | margin-left: 0; 70 | font-style: italic; 71 | background-color: #f3f3f3; 72 | border: 1px solid #cccccc; 73 | } 74 | 75 | .rst-content blockquote p { 76 | position: relative; 77 | margin-bottom: 1em; 78 | } 79 | 80 | .rst-content blockquote p notetitle { 81 | font-weight: bold; 82 | display: inline-block; 83 | margin: -13px -25px; 84 | background-color: #999; 85 | position: absolute; 86 | top: 0; 87 | left: 0; 88 | right: 0; 89 | padding: 0.5em; 90 | color: white; 91 | } 92 | 93 | .rst-content blockquote p:first-of-type { 94 | margin-bottom: 2.5em; 95 | } 96 | 97 | .rst-content blockquote p:last-of-type { 98 | margin-bottom: 0; 99 | } 100 | 101 | .wy-nav-side { 102 | background: #08408e !important; 103 | } 104 | 105 | .wy-nav-content { 106 | max-width: 100% !important; 107 | } 108 | 109 | code, .rst-content tt, .rst-content code { 110 | white-space: pre; 111 | } -------------------------------------------------------------------------------- 
/src/processor/template_processor/json_template_processor.py: -------------------------------------------------------------------------------- 1 | import json 2 | import re 3 | import os 4 | from yaml.loader import FullLoader 5 | from processor.logging.log_handler import getlogger 6 | from processor.helper.json.json_utils import json_from_file, get_field_value 7 | from processor.template_processor.base.base_template_processor import TemplateProcessor 8 | from processor.templates.google.google_parser import GoogleTemplateParser 9 | from processor.helper.file.file_utils import exists_file 10 | from processor.helper.config.config_utils import get_test_json_dir, framework_dir 11 | from processor.helper.yaml.yaml_utils import yaml_from_file 12 | from cfn_flip import flip, to_yaml, to_json 13 | 14 | logger = getlogger() 15 | 16 | class JsonTemplateProcessor(TemplateProcessor): 17 | """ 18 | Template processor for generic JSON templates 19 | """ 20 | 21 | def __init__(self, node, **kwargs): 22 | super().__init__(node, tosave=False, **kwargs) 23 | 24 | def is_template_file(self, file_path): 25 | """ 26 | Check for a valid JSON template file. 27 | """ 28 | if len(file_path.split(".")) > 0 and file_path.split(".")[-1] == "json": 29 | json_data = json_from_file(file_path) 30 | return True if (json_data) else False 31 | return False 32 | 33 | def process_template(self, paths): 34 | """ 35 | process the files stored at specified paths and returns the template 36 | """ 37 | template_json = None 38 | 39 | if paths and isinstance(paths, list): 40 | template_file_path = "" 41 | deployment_file_path = "" 42 | 43 | for path in paths: 44 | file_path = '%s/%s' % (self.dir_path, path) 45 | logger.info("Fetching data : %s ", path) 46 | if self.is_template_file(file_path): 47 | template_file_path = file_path 48 | 49 | self.template_file = template_file_path 50 | if template_file_path: 51 | template_json = json_from_file(template_file_path) 52 | self.contentType = 'json' 53 | return template_json -------------------------------------------------------------------------------- /tests/processor/template_processor/terraform/samples/sample_4/lambda/terraform.tfvars: -------------------------------------------------------------------------------- 1 | role_name = "prancer-iam-role" 2 | role_path = "/" 3 | max_session_duration = 3600 4 | role_description = "" 5 | force_detach_policies = false 6 | role_permissions_boundary_arn = "" 7 | assume_role_policy = < -------------------------------------------------------------------------------- /src/processor/template_processor/yaml_template_processor.py: -------------------------------------------------------------------------------- 1 | import json 2 | import re 3 | import os 4 | from yaml.loader import FullLoader 5 | from processor.logging.log_handler import getlogger 6 | from processor.helper.json.json_utils import json_from_file, get_field_value 7 | from processor.template_processor.base.base_template_processor import TemplateProcessor 8 | from processor.templates.google.google_parser import GoogleTemplateParser 9 | from processor.helper.file.file_utils import exists_file 10 | from processor.helper.config.config_utils import get_test_json_dir, framework_dir 11 | from processor.helper.yaml.yaml_utils import yaml_from_file 12 | from cfn_flip import flip, to_yaml, to_json 13 | 14 | logger = getlogger() 15 | 16 | class YamlTemplateProcessor(TemplateProcessor): 17 | """ 18 | Template processor for generic YAML templates 19 | """ 20 | 21 | def __init__(self, node, **kwargs): 22 | super().__init__(node, tosave=False, **kwargs) 23 | 24 | def is_template_file(self, file_path): 25 | """ 26 | Check for a valid YAML template file. 27 | """ 28 | if len(file_path.split(".")) > 0 and file_path.split(".")[-1] == "yaml": 29 | json_data = yaml_from_file(file_path, loader=FullLoader) 30 | return True if (json_data) else False 31 | return False 32 | 33 | def process_template(self, paths): 34 | """ 35 | process the files stored at specified paths and returns the template 36 | """ 37 | template_json = None 38 | 39 | if paths and isinstance(paths, list): 40 | template_file_path = "" 41 | deployment_file_path = "" 42 | 43 | for path in paths: 44 | file_path = '%s/%s' % (self.dir_path, path) 45 | logger.info("Fetching data : %s ", path) 46 | if self.is_template_file(file_path): 47 | template_file_path = file_path 48 | 49 | self.template_file = template_file_path 50 | if template_file_path: 51 | template_json = yaml_from_file(template_file_path, loader=FullLoader) 52 | self.contentType = 'yaml' 53 | return template_json -------------------------------------------------------------------------------- /tests/processor/template_processor/terraform/samples/sample_3/ec2/main.tf: -------------------------------------------------------------------------------- 1 |
data "aws_ami" "ubuntu" { 2 | most_recent = true 3 | 4 | filter { 5 | name = "name" 6 | values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"] 7 | } 8 | 9 | filter { 10 | name = "virtualization-type" 11 | values = ["hvm"] 12 | } 13 | 14 | owners = ["099720109477"] # Canonical 15 | } 16 | 17 | module "ec2" { 18 | source = "../modules/ec2" 19 | instance_count = var.instance_count 20 | name = var.name 21 | ami = data.aws_ami.ubuntu.id 22 | instance_type = var.instance_type 23 | user_data = var.user_data 24 | user_data_base64 = var.user_data_base64 25 | key_name = var.key_name 26 | monitoring = var.monitoring 27 | get_password_data = var.get_password_data 28 | vpc_security_group_ids = var.vpc_security_group_ids 29 | subnet_id = var.subnet_id 30 | iam_instance_profile = var.iam_instance_profile 31 | associate_public_ip_address = var.associate_public_ip_address 32 | ipv6_address_count = var.ipv6_address_count 33 | ipv6_addresses = var.ipv6_addresses 34 | ebs_optimized = var.ebs_optimized 35 | root_block_device = var.root_block_device 36 | ebs_block_device = var.ebs_block_device 37 | ephemeral_block_device = var.ephemeral_block_device 38 | network_interface = var.network_interface 39 | disable_api_termination = var.disable_api_termination 40 | instance_initiated_shutdown_behavior = var.instance_initiated_shutdown_behavior 41 | placement_group = var.placement_group 42 | tenancy = var.tenancy 43 | tags = var.tags 44 | } 45 | 46 | module "ebs_volume" { 47 | source = "../modules/ebs_volume" 48 | availability_zone = var.availability_zone 49 | encrypted = var.encrypted 50 | size = var.size 51 | tags = var.tags 52 | } 53 | -------------------------------------------------------------------------------- /tests/processor/comparison/test_comparison_functions.py: -------------------------------------------------------------------------------- 1 | from processor.comparison.comparison_functions import apply_extras, equality,\ 2 | less_than, less_than_equal, greater_than, greater_than_equal, exists 3 | 4 | extras = ['len'] 5 | 6 | 7 | def test_apply_extras(): 8 | assert 0 == apply_extras('', extras) 9 | assert 0 == apply_extras(None, extras) 10 | assert 0 == apply_extras([], extras) 11 | assert 0 == apply_extras({}, extras) 12 | assert 4 == apply_extras('abcd', extras) 13 | assert 5 == apply_extras([1,2,3,4,5], extras) 14 | 15 | 16 | def test_equality(): 17 | data = {'a': 'b', 'c': 1, 'd': [1,2,3], 'e': {'h':1}} 18 | assert True == equality(data, 'a', 'b') 19 | assert True == equality(data, 'd', 3, extras=extras) 20 | assert True == equality(data, 'a', 'd', is_not=True, extras=extras) 21 | 22 | 23 | def test_less_than(): 24 | data = {'a': 'b', 'c': 4, 'd': [1,2,3], 'e': {'h':1}} 25 | assert False == less_than(data, 'c', 3) 26 | assert False == less_than(data, 'c', 4) 27 | assert True == less_than(data, 'c', 6) 28 | assert True == less_than(data, 'd', 10, extras=extras) 29 | assert True == less_than(data, 'd', 2, is_not=True, extras=extras) 30 | 31 | 32 | def test_less_than_equal(): 33 | data = {'a': 'b', 'c': 4, 'd': [1,2,3], 'e': {'h':1}} 34 | assert True == less_than_equal(data, 'c', 4) 35 | assert True == less_than_equal(data, 'd', 3, extras=extras) 36 | assert True == less_than_equal(data, 'd', 2, is_not=True, extras=extras) 37 | 38 | 39 | def test_greater_than(): 40 | data = {'a': 'b', 'c': 4, 'd': [1,2,3], 'e': {'h':1}} 41 | assert False == greater_than(data, 'c', 6) 42 | assert False == greater_than(data, 'c', 4) 43 | assert True == greater_than(data, 'c', 2) 44 | assert True == 
greater_than(data, 'd', 2, extras=extras) 45 | assert True == greater_than(data, 'd', 4, is_not=True, extras=extras) 46 | 47 | 48 | def test_greater_than_equal(): 49 | data = {'a': 'b', 'c': 4, 'd': [1,2,3], 'e': {'h':1}} 50 | assert True == greater_than_equal(data, 'c', 4) 51 | assert True == greater_than_equal(data, 'd', 3, extras=extras) 52 | assert True == greater_than_equal(data, 'd', 4, is_not=True, extras=extras) 53 | 54 | 55 | def test_exists(): 56 | data = {'a': 'b', 'c': 4, 'd': [1, 2, 3], 'e': {'h': 1}} 57 | assert True == exists(data, 'c', None) 58 | assert True == exists(data, 'f', None, is_not=True) -------------------------------------------------------------------------------- /src/processor/template_processor/google_template_processor.py: -------------------------------------------------------------------------------- 1 | import json 2 | import re 3 | import os 4 | from yaml.loader import FullLoader 5 | from processor.logging.log_handler import getlogger 6 | from processor.helper.json.json_utils import json_from_file, get_field_value 7 | from processor.template_processor.base.base_template_processor import TemplateProcessor 8 | from processor.templates.google.google_parser import GoogleTemplateParser 9 | from processor.helper.file.file_utils import exists_file 10 | from processor.helper.config.config_utils import get_test_json_dir, framework_dir 11 | from processor.helper.yaml.yaml_utils import yaml_from_file 12 | from cfn_flip import flip, to_yaml, to_json 13 | 14 | logger = getlogger() 15 | 16 | class GoogleTemplateProcessor(TemplateProcessor): 17 | """ 18 | Template processor for Google Deployment Manager templates 19 | """ 20 | 21 | def __init__(self, node, **kwargs): 22 | super().__init__(node, tosave=False, **kwargs) 23 | 24 | def is_template_file(self, file_path): 25 | """ 26 | Check for a valid Google Deployment Manager template file. 27 | """ 28 | if len(file_path.split(".")) > 0 and file_path.split(".")[-1] == "yaml": 29 | json_data = yaml_from_file(file_path, loader=FullLoader) 30 | return True if (json_data and "resources" in json_data) else False 31 | return False 32 | 33 | def process_template(self, paths): 34 | """ 35 | process the files stored at specified paths and returns the template 36 | """ 37 | template_json = None 38 | 39 | if paths and isinstance(paths, list): 40 | template_file_path = "" 41 | deployment_file_path = "" 42 | 43 | for path in paths: 44 | file_path = '%s/%s' % (self.dir_path, path) 45 | logger.info("Fetching data : %s ", path) 46 | if self.is_template_file(file_path): 47 | template_file_path = file_path 48 | 49 | self.template_files = [template_file_path] 50 | if template_file_path: 51 | google_template_parser = GoogleTemplateParser(template_file_path) 52 | template_json = google_template_parser.parse() 53 | self.contentType = google_template_parser.contentType 54 | self.resource_types = google_template_parser.resource_types 55 | return template_json -------------------------------------------------------------------------------- /src/processor/template_processor/helm_chart_template_processor.py: -------------------------------------------------------------------------------- 1 | from processor.template_processor.base.base_template_processor import TemplateProcessor 2 | from processor.helper.file.file_utils import exists_file,exists_dir 3 | from processor.helper.yaml.yaml_utils import yaml_from_file,HelmChartConvertionKey 4 | from yaml.loader import FullLoader 5 | from processor.logging.log_handler import getlogger 6 | from
processor.templates.helm.helm_parser import HelmTemplateParser 7 | 8 | logger = getlogger() 9 | 10 | class HelmChartTemplateProcessor(TemplateProcessor): 11 | """ 12 | Processes Helm chart templates 13 | """ 14 | def __init__(self, node, **kwargs): 15 | super().__init__(node, tosave=False, **kwargs) 16 | 17 | def is_template_file(self, file_path): 18 | """ 19 | Check for a valid Helm chart template file. 20 | """ 21 | file_type = file_path.split(".")[-1] 22 | file_name = file_path.split("/")[-1].split(".")[0] 23 | if (file_type == "yaml" and file_name == "Chart") or HelmChartConvertionKey in file_path: 24 | helm_source = file_path.rpartition("/")[0] 25 | helm_template = HelmTemplateParser(helm_source) 26 | if helm_template.validate(helm_source): 27 | return True 28 | # file_path.rpartition("/")[0] 29 | return True 30 | return False 31 | 32 | def process_template(self, paths): 33 | """ 34 | process the files stored at specified paths and returns the template 35 | """ 36 | template_json = None 37 | 38 | if paths and isinstance(paths, list): 39 | template_file_path = "" 40 | # paths[0] = paths. 41 | for path in paths: 42 | file_path = '%s/%s' % (self.dir_path, path) 43 | logger.info("Fetching data : %s ", path) 44 | if self.is_template_file(file_path): 45 | template_file_path = file_path 46 | 47 | self.template_file = template_file_path 48 | if template_file_path: 49 | template_json = yaml_from_file(template_file_path, loader=FullLoader) 50 | if template_json: 51 | self.contentType = 'yaml' 52 | if template_json.get("kind"): 53 | self.resource_types = [template_json.get("kind").lower()] 54 | self.contentType = 'yaml' 55 | return template_json 56 | 57 | 58 | -------------------------------------------------------------------------------- /src/processor/template_processor/kubernetes_template_processor.py: -------------------------------------------------------------------------------- 1 | from yaml.loader import FullLoader 2 | from processor.logging.log_handler import getlogger 3 | from processor.template_processor.base.base_template_processor import TemplateProcessor 4 | from processor.templates.kubernetes.kubernetes_parser import KubernetesTemplateParser 5 | from processor.helper.file.file_utils import exists_file 6 | from processor.helper.yaml.yaml_utils import yaml_from_file 7 | 8 | logger = getlogger() 9 | 10 | class KubernetesTemplateProcessor(TemplateProcessor): 11 | """ 12 | Template processor for Kubernetes object files 13 | """ 14 | 15 | def __init__(self, node, **kwargs): 16 | super().__init__(node, tosave=False, **kwargs) 17 | 18 | def is_template_file(self, file_path): 19 | """ 20 | Check for a valid Kubernetes YAML template file. 21 | """ 22 | if len(file_path.split(".")) > 0 and file_path.split(".")[-1] == "yaml": 23 | json_data = yaml_from_file(file_path, loader=FullLoader) 24 | kube_policy = ["apiVersion","kind","metadata","spec"] 25 | return True if json_data and any(elem in json_data for elem in kube_policy) else False 26 | return False 27 | 28 | def process_template(self, paths): 29 | """ 30 | process the files stored at specified paths and returns the template 31 | """ 32 | template_json = None 33 | 34 | if paths and isinstance(paths, list): 35 | template_file_path = "" 36 | 37 | for path in paths: 38 | file_path = '%s/%s' % (self.dir_path, path) 39 | logger.info("Fetching data : %s ", path) 40 | if self.is_template_file(file_path): 41 | template_file_path = file_path 42 | else: 43 | logger.info("\t\t WARN: %s contains invalid Kubernetes yaml", file_path) 44 | 45 | self.template_file =
template_file_path 46 | if template_file_path and exists_file(template_file_path): 47 | kubernetes_template_parser = KubernetesTemplateParser(template_file_path) 48 | template_json = kubernetes_template_parser.parse(template_file_path) 49 | if template_json: 50 | self.contentType = kubernetes_template_parser.contentType 51 | if template_json.get("kind"): 52 | self.resource_types = [template_json.get("kind").lower()] 53 | 54 | return template_json 55 | -------------------------------------------------------------------------------- /tests/processor/templates/aws/sample/SQS_With_CloudWatch_Alarms.txt: -------------------------------------------------------------------------------- 1 | { 2 | "AWSTemplateFormatVersion" : "2010-09-09", 3 | 4 | "Description" : "AWS CloudFormation Sample Template SQS_With_CloudWatch_Alarms: Sample template showing how to create an SQS queue with AWS CloudWatch alarms on queue depth. **WARNING** This template creates an Amazon SQS Queue and one or more Amazon CloudWatch alarms. You will be billed for the AWS resources used if you create a stack from this template.", 5 | 6 | "Parameters" : { 7 | "AlarmEMail": { 8 | "Description": "EMail address to notify if there are any operational issues", 9 | "Type": "String", 10 | "AllowedPattern": "([a-zA-Z0-9_\\-\\.]+)@((\\[[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.)|(([a-zA-Z0-9\\-]+\\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\\]?)", 11 | "ConstraintDescription": "must be a valid email address." 12 | } 13 | }, 14 | 15 | "Resources" : { 16 | "MyQueue" : { 17 | "Type" : "AWS::SQS::Queue", 18 | "Properties" : { 19 | } 20 | }, 21 | 22 | "AlarmTopic": { 23 | "Type": "AWS::SNS::Topic", 24 | "Properties": { 25 | "Subscription": [{ 26 | "Endpoint": { "Ref": "AlarmEMail" }, 27 | "Protocol": "email" 28 | }] 29 | } 30 | }, 31 | 32 | "QueueDepthAlarm": { 33 | "Type": "AWS::CloudWatch::Alarm", 34 | "Properties": { 35 | "AlarmDescription": "Alarm if queue depth grows beyond 10 messages", 36 | "Namespace": "AWS/SQS", 37 | "MetricName": "ApproximateNumberOfMessagesVisible", 38 | "Dimensions": [{ 39 | "Name": "QueueName", 40 | "Value" : { "Fn::GetAtt" : ["MyQueue", "QueueName"] } 41 | }], 42 | "Statistic": "Sum", 43 | "Period": "300", 44 | "EvaluationPeriods": "1", 45 | "Threshold": "10", 46 | "ComparisonOperator": "GreaterThanThreshold", 47 | "AlarmActions": [{ "Ref": "AlarmTopic" }], 48 | "InsufficientDataActions": [{ "Ref": "AlarmTopic" }] 49 | } 50 | } 51 | }, 52 | "Outputs" : { 53 | "QueueURL" : { 54 | "Description" : "URL of newly created SQS Queue", 55 | "Value" : { "Ref" : "MyQueue" } 56 | }, 57 | "QueueARN" : { 58 | "Description" : "ARN of newly created SQS Queue", 59 | "Value" : { "Fn::GetAtt" : ["MyQueue", "Arn"]} 60 | }, 61 | "QueueName" : { 62 | "Description" : "Name newly created SQS Queue", 63 | "Value" : { "Fn::GetAtt" : ["MyQueue", "QueueName"]} 64 | } 65 | } 66 | } 67 | -------------------------------------------------------------------------------- /tests/processor/templates/aws/sample/SQS_With_CloudWatch_Alarms.template: -------------------------------------------------------------------------------- 1 | { 2 | "AWSTemplateFormatVersion" : "2010-09-09", 3 | 4 | "Description" : "AWS CloudFormation Sample Template SQS_With_CloudWatch_Alarms: Sample template showing how to create an SQS queue with AWS CloudWatch alarms on queue depth. **WARNING** This template creates an Amazon SQS Queue and one or more Amazon CloudWatch alarms. 
You will be billed for the AWS resources used if you create a stack from this template.", 5 | 6 | "Parameters" : { 7 | "AlarmEMail": { 8 | "Description": "EMail address to notify if there are any operational issues", 9 | "Type": "String", 10 | "AllowedPattern": "([a-zA-Z0-9_\\-\\.]+)@((\\[[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.)|(([a-zA-Z0-9\\-]+\\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\\]?)", 11 | "ConstraintDescription": "must be a valid email address." 12 | } 13 | }, 14 | 15 | "Resources" : { 16 | "MyQueue" : { 17 | "Type" : "AWS::SQS::Queue", 18 | "Properties" : { 19 | } 20 | }, 21 | 22 | "AlarmTopic": { 23 | "Type": "AWS::SNS::Topic", 24 | "Properties": { 25 | "Subscription": [{ 26 | "Endpoint": { "Ref": "AlarmEMail" }, 27 | "Protocol": "email" 28 | }] 29 | } 30 | }, 31 | 32 | "QueueDepthAlarm": { 33 | "Type": "AWS::CloudWatch::Alarm", 34 | "Properties": { 35 | "AlarmDescription": "Alarm if queue depth grows beyond 10 messages", 36 | "Namespace": "AWS/SQS", 37 | "MetricName": "ApproximateNumberOfMessagesVisible", 38 | "Dimensions": [{ 39 | "Name": "QueueName", 40 | "Value" : { "Fn::GetAtt" : ["MyQueue", "QueueName"] } 41 | }], 42 | "Statistic": "Sum", 43 | "Period": "300", 44 | "EvaluationPeriods": "1", 45 | "Threshold": "10", 46 | "ComparisonOperator": "GreaterThanThreshold", 47 | "AlarmActions": [{ "Ref": "AlarmTopic" }], 48 | "InsufficientDataActions": [{ "Ref": "AlarmTopic" }] 49 | } 50 | } 51 | }, 52 | "Outputs" : { 53 | "QueueURL" : { 54 | "Description" : "URL of newly created SQS Queue", 55 | "Value" : { "Ref" : "MyQueue" } 56 | }, 57 | "QueueARN" : { 58 | "Description" : "ARN of newly created SQS Queue", 59 | "Value" : { "Fn::GetAtt" : ["MyQueue", "Arn"]} 60 | }, 61 | "QueueName" : { 62 | "Description" : "Name newly created SQS Queue", 63 | "Value" : { "Fn::GetAtt" : ["MyQueue", "QueueName"]} 64 | } 65 | } 66 | } 67 | -------------------------------------------------------------------------------- /src/processor/templates/terraform/helper/function/string_functions.py: -------------------------------------------------------------------------------- 1 | """ 2 | Performs all in built string functions which are supported by terraform processor 3 | """ 4 | from processor.logging.log_handler import getlogger 5 | import decimal 6 | 7 | logger = getlogger() 8 | 9 | def chomp(str_value): 10 | """ removes newline characters at the end of a string. """ 11 | return str_value.rstrip() 12 | 13 | # def format(str_value): 14 | # """ removes newline characters at the end of a string. 
""" 15 | # return str_value.rstrip() 16 | 17 | def join(concat_ele, ele_list): 18 | """ concat list items and returns a string """ 19 | return concat_ele.join(ele_list) 20 | 21 | def lower(str_value): 22 | """ convert string to lower case """ 23 | return str_value.lower() 24 | 25 | def replace(str_value, substring, replacement): 26 | """ searches for substring and replace the value of substring """ 27 | return str_value.replace(substring , str(replacement)) 28 | 29 | def split(separator, str_value): 30 | """ split the string by given separator and returns the list """ 31 | return str_value.split(separator) 32 | 33 | def trim(str_value, trim_string): 34 | """ trim the characters from the given string """ 35 | return str_value.strip(trim_string) 36 | 37 | def trimprefix(str_value, trim_string): 38 | """ trim the characters from the given string """ 39 | if str_value.startswith(trim_string): 40 | str_value = str_value[len(trim_string):] 41 | return trimprefix(str_value, trim_string) 42 | return str_value 43 | 44 | def trimsuffix(str_value, trim_string): 45 | """ trim the characters from the given string """ 46 | if str_value.endswith(trim_string): 47 | str_value = str_value[:-(len(trim_string))] 48 | return trimsuffix(str_value, trim_string) 49 | return str_value 50 | 51 | def trimspace(str_value): 52 | """ trim the space from the given string """ 53 | return str_value.strip() 54 | 55 | def upper(str_value): 56 | """ convert string to upper case """ 57 | return str_value.upper() 58 | 59 | def strrev(str_value): 60 | """ reverse the characters of given string """ 61 | return str_value[::-1] 62 | 63 | def substr(str_value, offset, length): 64 | """ return the subsctring of given string """ 65 | return str_value[offset:length] 66 | 67 | def title(str_value): 68 | """ converts the first character of each word of given string to uppercase. 
""" 69 | return str_value.title() 70 | 71 | def format(spec, *values): 72 | """ format the string """ 73 | return spec % values 74 | -------------------------------------------------------------------------------- /src/processor/comparison/comparisonantlr/compare_types.py: -------------------------------------------------------------------------------- 1 | """All comparison functions.""" 2 | 3 | import math 4 | 5 | EQ = '=' 6 | NEQ = '!=' 7 | GT = '>' 8 | GTE = '>=' 9 | LT = '<' 10 | LTE = '<=' 11 | 12 | 13 | int_funcs = { 14 | EQ: lambda lhs, rhs: lhs == rhs, 15 | NEQ: lambda lhs, rhs: lhs != rhs, 16 | GT: lambda lhs, rhs: lhs > rhs, 17 | GTE: lambda lhs, rhs: lhs >= rhs, 18 | LT: lambda lhs, rhs: lhs < rhs, 19 | LTE: lambda lhs, rhs: lhs <= rhs 20 | } 21 | 22 | float_funcs = { 23 | EQ: lambda lhs, rhs: math.isclose(lhs, rhs), 24 | NEQ: lambda lhs, rhs: not math.isclose(lhs, rhs), 25 | GT: lambda lhs, rhs: lhs > rhs, 26 | GTE: lambda lhs, rhs: lhs > rhs or math.isclose(lhs, rhs), 27 | LT: lambda lhs, rhs: lhs < rhs, 28 | LTE: lambda lhs, rhs: lhs < rhs or math.isclose(lhs, rhs) 29 | } 30 | 31 | 32 | def compare_none(loperand, roperand, op): 33 | if loperand is None and roperand is None: 34 | return True if op == EQ else False 35 | return False 36 | 37 | 38 | def compare_int(loperand, roperand, op): 39 | if type(loperand) is int and type(roperand) is int: 40 | if op in int_funcs: 41 | return int_funcs[op](loperand, roperand) 42 | return False 43 | 44 | 45 | def compare_float(loperand, roperand, op): 46 | if type(loperand) is float and type(roperand) is float: 47 | if op in float_funcs: 48 | return float_funcs[op](loperand, roperand) 49 | return False 50 | 51 | 52 | def compare_boolean(loperand, roperand, op): 53 | if type(loperand) is bool and type(roperand) is bool: 54 | if op == EQ: 55 | return True if loperand == roperand else False 56 | elif op == NEQ: 57 | return True if loperand != roperand else False 58 | return False 59 | 60 | 61 | def compare_str(loperand, roperand, op): 62 | if type(loperand) is str and type(roperand) is str: 63 | if op in int_funcs: 64 | return int_funcs[op](loperand, roperand) 65 | return False 66 | 67 | 68 | def compare_list(loperand, roperand, op): 69 | if type(loperand) is list and type(roperand) is list: 70 | if op in int_funcs: 71 | return int_funcs[op](loperand, roperand) 72 | return False 73 | 74 | def compare_in(loperand, roperand, op): 75 | if loperand and roperand: 76 | return roperand in loperand 77 | return False 78 | 79 | def compare_dict(loperand, roperand, op): 80 | if type(loperand) is dict and type(roperand) is dict: 81 | if op in int_funcs: 82 | return int_funcs[op](loperand, roperand) 83 | return False 84 | -------------------------------------------------------------------------------- /src/processor/connector/snapshot_utils.py: -------------------------------------------------------------------------------- 1 | """ 2 | Snapshot utils contains common functionality for all snapshots. 
3 | """ 4 | import time 5 | from datetime import datetime 6 | import hashlib 7 | from processor.database.database import COLLECTION, get_documents 8 | from processor.logging.log_handler import getlogger 9 | 10 | 11 | logger = getlogger() 12 | 13 | 14 | def validate_snapshot_nodes(snapshot_nodes): 15 | snapshot_data = {} 16 | valid_snapshotids = True 17 | if snapshot_nodes: 18 | for node in snapshot_nodes: 19 | if 'snapshotId' in node and node['snapshotId']: 20 | snapshot_data[node['snapshotId']] = False 21 | if not isinstance(node['snapshotId'], str): 22 | valid_snapshotids = False 23 | elif 'masterSnapshotId' in node and node['masterSnapshotId']: 24 | snapshot_data[node['masterSnapshotId']] = False 25 | if not isinstance(node['masterSnapshotId'], str): 26 | valid_snapshotids = False 27 | else: 28 | logger.error('All snapshot nodes should contain snapshotId or masterSnapshotId attribute with a string value') 29 | valid_snapshotids = False 30 | break 31 | # snapshot_data[node['snapshotId']] = False 32 | # if not isinstance(node['snapshotId'], str): 33 | # valid_snapshotids = False 34 | if not valid_snapshotids: 35 | logger.error('All snapshot Ids should be strings, even numerals should be quoted') 36 | return snapshot_data, valid_snapshotids 37 | 38 | 39 | def get_data_record(ref_name, node, user, snapshot_source, connector_type): 40 | """ The data node record, common function across connectors.""" 41 | collection = node['collection'] if 'collection' in node else COLLECTION 42 | parts = snapshot_source.split('.') 43 | return { 44 | "structure": connector_type, 45 | "reference": ref_name, 46 | "source": parts[0], 47 | "path": '', 48 | "timestamp": int(datetime.utcnow().timestamp() * 1000), 49 | "queryuser": user, 50 | "checksum": hashlib.md5("{}".encode('utf-8')).hexdigest(), 51 | "node": node, 52 | "snapshotId": node['snapshotId'] if 'snapshotId' in node else '', 53 | "mastersnapshot": False, 54 | "masterSnapshotId": node['masterSnapshotId'] if 'masterSnapshotId' in node else '', 55 | "collection": collection.replace('.', '').lower(), 56 | "json": {} # Refactor when node is absent it should None, when empty object put it as {} 57 | } 58 | -------------------------------------------------------------------------------- /docs/theme/main.html: -------------------------------------------------------------------------------- 1 | {% extends "base.html" %} 2 | 3 | {# 4 | The entry point for the ReadTheDocs Theme. 5 | 6 | Any theme customisations should override this file to redefine blocks defined in 7 | the various templates. The custom theme should only need to define a main.html 8 | which `{% extends "base.html" %}` and defines various blocks which will replace 9 | the blocks defined in base.html and its included child templates. 10 | #} 11 | 12 | {%- block site_name %} 13 | 16 | {%- endblock %} 17 | 18 | {%- block site_meta %} 19 | 20 | 21 | 22 | {% if page and page.is_homepage %}{% endif %} 23 | {% if config.site_author %}{% endif %} 24 | {% if page and page.canonical_url %}{% endif %} 25 | {% if config.site_favicon %} 26 | {% else %}{% endif %} 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | {%- endblock %} -------------------------------------------------------------------------------- /docs/docs/connectors/jira.md: -------------------------------------------------------------------------------- 1 | # Jira structure file 2 | 3 | Integration of Prancer Web with **Jira** for story management and file/view tickets based on Prancer CSPM or PAC findings. 
4 | 5 | The integration with **Jira** works as follows: 6 | 7 | 1. Each collection in the collection pages (**Infra/PAC** Management) can be integrated with **Jira** 8 | 2. Choose the dropdown option from the collection and select `Third Party Integration`. 9 | 3. Select `Jira`. 10 | 11 | When the user clicks on the integration service, a new page/modal opens with pre-populated fields for the ticket. The user can edit them as needed. On submit, the ticket is created on the **Jira** platform. 12 | 13 | In the reporting pages (Infra Findings and Application Findings), an option to create a **Jira** ticket is available when opening a single item. 14 | 15 | This creates an integration with the **Jira** ticketing system (the ticket is created automatically with the proper description for the collection). 16 | Here is a sample of the **Jira** structure file: 17 | 18 | ```json 19 | { 20 | "fileType": "structure", 21 | "type": "jira", 22 | "url": "", 23 | "username": "", 24 | "authtoken": "accesstoken-jira", 25 | "organisation": "Prancer", 26 | "project": "", 27 | "severity": "" 28 | } 29 | ``` 30 | 31 | | Key | Value Description | 32 | | ------------- |:-------------: | 33 | |url| URL to the Jira board| 34 | |username|your Jira user email| 35 | |authtoken|AuthToken for Jira| 36 | |project|Name of your project| 37 | |severity|Severity to assign to this particular task (options: High, Medium, Low)| 38 | 39 | sample file: 40 | 41 | ```json 42 | { 43 | "fileType": "structure", 44 | "type": "jira", 45 | "url": "https://testjiraprancer.atlassian.net", 46 | "username": "prancer-user@prancer.io", 47 | "authtoken": "prancer-user-prancer-io-customer120-accesstoken-jira", 48 | "organisation": "Prancer", 49 | "project": "SAM", 50 | "severity": "High" 51 | } 52 | ``` 53 | 54 | ## Generate AuthToken from Jira 55 | 56 | Once you have logged in to Jira, follow these steps to generate the AuthToken: 57 | 58 | 1. Go to the Home page 59 | 2. Click on `Settings`, and select `Atlassian account settings` 60 | 3. Click on `Security` 61 | 4. Underneath `API token`, click on `create and manage API tokens` 62 | 5. Click on `Create API token`, and add a label 63 | 64 | By following these steps you'll be able to copy the token. Make sure the token is saved and secured somewhere safe, as you won't be able to see it again. 65 | -------------------------------------------------------------------------------- /docs/docs/configuration/basics.md: -------------------------------------------------------------------------------- 1 | # Files and Folders Structure 2 | ### Prancer project directory 3 | 4 | The files to operate **Prancer** should be put into a project directory. It is recommended to have a source control mechanism for this folder to keep track of changes. All the **Prancer** files should be stored in a sub-directory of your choice; we recommend something like `tests/prancer/`. To create it, you can run something like: 5 | 6 | 7 | mkdir -p tests/prancer 8 | cd tests/prancer 9 | 10 | > Most of these folder names and locations are configurable in the `config.ini` file. 11 | 12 | ### Validation directory 13 | 14 | The `validation` directory is the internal working directory for **Prancer**. This is where snapshot config files and tests will be created. All of them will be written inside a sub-directory of `validation`. Create the `validation` directory using: 15 | 16 | mkdir -p validation 17 | 18 | Under the `validation` directory live the collection directories.
Each collection is a logical grouping for your test files and snapshot configuration files. Their sole purpose is to group and create a hierarchy to organize your tests better. Create a Collection directory like so: 19 | 20 | mkdir -p validation/container1 21 | 22 | You can create as many as needed, and there is no specific naming convention. The only prerequisite is to respect your filesystem's requirements. 23 | 24 | ### Project configuration file 25 | 26 | At the root of the **Prancer** project directory, you will be putting the `config.ini` file that tells **Prancer** how to behave for the current project. The configuration of **Prancer** is detailed in the next section. It consists of sections and key-value pairs, just like in any `INI` files. Here is an example: 27 | 28 | [section] 29 | key = value 30 | key = value 31 | 32 | [section2] 33 | key = value 34 | key = value 35 | 36 | Look at the following sections to understand what you can put in this file. 37 | 38 | ### Reporting folder 39 | **Prancer** requires you to specify a path where output files should be stored after running tests. You can use the same directory as your `TESTS`, but it will create a separate structure if you don't. Depending on your artifact-building approach, you might want to split them or keep them together. 40 | 41 | ### Collection folder 42 | **Prancer** requires you to specify where your snapshot configuration files and test files are when using the filesystem storage-based approach. This section of the configuration defines where to find those files. 43 | 44 | ### Snapshot folder 45 | When you are using Filesystem to store the result of snapshots, **Prancer** creates a folder inside the collection to store the snapshots. 46 | -------------------------------------------------------------------------------- /docs/docs/tests/master-test.md: -------------------------------------------------------------------------------- 1 | In a master test file, we are defining the test cases against the resource types rather than individual resources. it works in tandem with the master snapshot configuration file. 2 | 3 | ```json 4 | { 5 | "fileType": "mastertest", 6 | "notification": [], 7 | "masterSnapshot": "", 8 | "testSet": [ 9 | { 10 | "masterTestName": "", 11 | "version": "", 12 | "cases": [ 13 | { 14 | "masterTestId": "", 15 | "rule":"" 16 | } 17 | ] 18 | } 19 | ] 20 | } 21 | ``` 22 | 23 | Remember to substitute all values in this file that looks like a `` such as: 24 | 25 | | Tag | Value Description | 26 | |-----|-------------------| 27 | | notifications | the name of the notification file we want to use along with this test file | 28 | | master-Snapshot-name | the name of the master snapshot configuration file we want to use along with this test file | 29 | | master-Test-Name | the name of the master test name for this section | 30 | | version | The version of the rule engine. 
Current version is `0.1` | 31 | | master-Test-Id | the id of the master test case | 32 | | rule | the rule we want to examine | 33 | 34 | Here is an example of that: 35 | 36 | ```json 37 | { 38 | "fileType": "mastertest", 39 | "notification": [], 40 | "masterSnapshot": "snapshot3", 41 | "testSet": [ 42 | { 43 | "masterTestName": "test3", 44 | "version": "0.1", 45 | "cases": [ 46 | { 47 | "masterTestId": "1", 48 | "rule":"exist({12}.location)" 49 | }, 50 | { 51 | "masterTestId": "2", 52 | "rule":"{13}.location='eastus2'" 53 | }, 54 | { 55 | "masterTestId": "3", 56 | "rule": "exist({14}.properties.addressSpace.addressPrefixes[])" 57 | }, 58 | { 59 | "masterTestId": "4", 60 | "rule": "count({15}.properties.dhcpOptions.dnsServers[])=2" 61 | }, 62 | { 63 | "masterTestId": "5", 64 | "rule": "{16}.properties.subnets['name'='abc-nprod-dev-eastus2-Subnet1'].properties.addressPrefix='192.23.26.0/24'" 65 | }, 66 | { 67 | "masterTestId": "6", 68 | "rule": "{17}.tags.COST_LOCATION={18}.tags.COST_LOCATION" 69 | } 70 | ] 71 | } 72 | ] 73 | } 74 | ``` 75 | -------------------------------------------------------------------------------- /tests/processor/templates/aws/aws_parser.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | path = os.path.dirname(os.path.abspath(__file__)) 4 | 5 | def test_valid_yaml(monkeypatch): 6 | from processor.templates.aws.aws_parser import AWSTemplateParser 7 | parameter_file = None 8 | template_file = '%s/sample/EC2InstanceWithSecurityGroupSample.yaml' % path 9 | aws_template_parser = AWSTemplateParser(template_file, parameter_file=parameter_file) 10 | template_json = aws_template_parser.parse() 11 | assert template_json != None 12 | assert template_json["AWSTemplateFormatVersion"] == "2010-09-09" 13 | 14 | def test_valid_json(monkeypatch): 15 | from processor.templates.aws.aws_parser import AWSTemplateParser 16 | parameter_file = None 17 | template_file = '%s/sample/SingleENIwithMultipleEIPs.json' % path 18 | aws_template_parser = AWSTemplateParser(template_file, parameter_file=parameter_file) 19 | template_json = aws_template_parser.parse() 20 | assert template_json != None 21 | assert template_json["AWSTemplateFormatVersion"] == "2010-09-09" 22 | 23 | def test_valid_template_as_json(monkeypatch): 24 | from processor.templates.aws.aws_parser import AWSTemplateParser 25 | parameter_file = None 26 | template_file = '%s/sample/SQS_With_CloudWatch_Alarms.template' % path 27 | aws_template_parser = AWSTemplateParser(template_file, parameter_file=parameter_file) 28 | template_json = aws_template_parser.parse() 29 | assert template_json != None 30 | assert template_json["AWSTemplateFormatVersion"] == "2010-09-09" 31 | 32 | def test_valid_text_as_json(monkeypatch): 33 | from processor.templates.aws.aws_parser import AWSTemplateParser 34 | parameter_file = None 35 | template_file = '%s/sample/SQS_With_CloudWatch_Alarms.txt' % path 36 | aws_template_parser = AWSTemplateParser(template_file, parameter_file=parameter_file) 37 | template_json = aws_template_parser.parse() 38 | assert template_json != None 39 | assert template_json["AWSTemplateFormatVersion"] == "2010-09-09" 40 | 41 | def test_invalid_text(monkeypatch): 42 | from processor.templates.aws.aws_parser import AWSTemplateParser 43 | parameter_file = None 44 | template_file = '%s/sample/InvalidTemplate.txt' % path 45 | aws_template_parser = AWSTemplateParser(template_file, parameter_file=parameter_file) 46 | template_json = aws_template_parser.parse() 47 | assert 
template_json is None 48 | 49 | 50 | def test_valid_text_invalid_template(monkeypatch): 51 | from processor.templates.aws.aws_parser import AWSTemplateParser 52 | parameter_file = None 53 | template_file = '%s/sample/ValidJsonInvalidTemplate.txt' % path 54 | aws_template_parser = AWSTemplateParser(template_file, parameter_file=parameter_file) 55 | template_json = aws_template_parser.parse() 56 | assert template_json is None -------------------------------------------------------------------------------- /docs/docs/connectors/azboard.md: -------------------------------------------------------------------------------- 1 | # Azure Board integration 2 | 3 | Integration of Prancer Web with **Azure Board** helps you with ticket management, letting you file and view tickets based on Prancer CSPM or PAC findings. 4 | 5 | The integration with **Azure Board** works as follows: 6 | 7 | 1. Each collection in the collection pages (**Infra/PAC** Management) can be integrated with **Azure Board** 8 | 2. Choose the dropdown option from the collection and select `Third Party Integration`. 9 | 3. Select `Azure Board`. 10 | 11 | When the user clicks on the integration service, a new page/modal opens with pre-populated fields for the workitem. The user can edit them as needed. On submit, the workitem is created on the **Azure Board** platform. 12 | 13 | In the reporting pages (Infra Findings and Application Findings), an option to create an **Azure Board** ticket is available when opening a single item. 14 | 15 | This creates an integration with the **Azure Board** ticketing system (the workitem is created automatically with the proper description for the collection). 16 | 17 | Here is a sample of the Azure Board structure file: 18 | 19 | ```json 20 | { 21 | "fileType": "structure", 22 | "type": "azureboard", 23 | "url": "", 24 | "username": "", 25 | "authtoken": "azureboard-accesstoken", 26 | "organisation": "prancer", 27 | "project": "", 28 | "severity": "" 29 | } 30 | ``` 31 | 32 | | Key | Value Description | 33 | | ------------- |:-------------: | 34 | |url| URL to the Azure Board| 35 | |username|your Azure cloud user email| 36 | |authtoken|AuthToken for the Azure Board| 37 | |project|Name of your project| 38 | |severity|Severity to assign to this particular task (options: High, Medium, Low)| 39 | 40 | sample file: 41 | 42 | ```json 43 | { 44 | "fileType": "structure", 45 | "type": "azureboard", 46 | "url": "https://dev.azure.com/wildkloud", 47 | "username": "prancer-user@prancer.io", 48 | "authtoken": "prancer-io-customer-accesstoken-azureboard", 49 | "organisation": "prancer", 50 | "project": "NextGen Cloud", 51 | "severity": "high" 52 | } 53 | ``` 54 | 55 | ## Generate AuthToken from Azure Board 56 | 57 | Once you have logged in to Azure, follow these steps to generate the AuthToken: 58 | 59 | 1. Go to the Home Page 60 | 2. Select the `` 61 | 3. Click the `User Settings` and select `Personal access token` 62 | 4. Create a new token by clicking `New Token` 63 | 5. Provide a `Name` and, under `Work Items`, grant `Read & Write` access. 64 | 6. Click on `Create`. 65 | 66 | By following these steps you'll be able to copy the token. Make sure the token is saved and secured somewhere safe, as you won't be able to see it again.
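If you want to sanity-check a token before wiring it into the structure file, the Azure DevOps REST API accepts a PAT as the password of a basic-auth pair with an empty username. A minimal sketch in Python, assuming the `requests` library is installed and using placeholder organisation/token values:

```python
import requests

# Placeholder values; substitute your own organisation and PAT.
ORGANISATION = "prancer"
PAT = "azureboard-accesstoken"

# Listing projects is a cheap call that fails on a bad token.
response = requests.get(
    f"https://dev.azure.com/{ORGANISATION}/_apis/projects?api-version=6.0",
    auth=("", PAT),  # empty username, PAT as the password
)
print(response.status_code)  # 200 indicates the token is valid
```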
67 | -------------------------------------------------------------------------------- /src/processor/comparison/comparisonantlr/input.txt: -------------------------------------------------------------------------------- 1 | count({1}.firewall.rules[]+{2}.firewall.rules[])=13 2 | {1}.firewall.rules['name'='rule1'].port={2}.firewall.rules['name'='rule1'].port 3 | count({1}.firewall.rules[])=count({2}.firewall.rules[]) 4 | count(count({1}.firewall.rules[]) + count({1}.firewall.rules[]))=13 5 | exist({1}.location) 6 | exist({1}.firewall.location) 7 | exist({1}.firewall.rules[]) 8 | count({1}.firewall.rules[])!=13 9 | count({1}.firewall.rules[])=13 10 | {1}.firewall.port=443 11 | {1}.location='eastus2' 12 | exist({1}.location)=FAlSE 13 | {1}.firewall.port = 443 14 | {1}.firewall.rules['name'='rule1'].port=443 15 | {1}.firewall.rules[*].port=443 16 | {1}.firewall.port={2}.firewall.port 17 | {1}.firewall.rules[0].port={2}.firewall.port 18 | exist({1}[0].location) 19 | exist({1}['name'='abc']) 20 | {1}.firewall.rules['name'='abc'].port={2}.firewall.port 21 | {1}.firewall.rules['name'='abc'].ports[2].port={2}.firewall.port 22 | {1}.firewall.cost=443.25 23 | {13}.resources[].properties.securityRules[]=443 24 | {13}.resources[*].properties.securityRules[]=443 25 | {13}.resources[*].properties.securityRules[]=443 26 | {13}.resources[*].properties.securityRules[*]=443 27 | {13}.resources[0].properties.securityRules[].properties.access=Allow 28 | {13}.resources[0].properties.securityRules[*].properties.direction=Inbound 29 | {15}.resources[*].properties.securityRules[*].properties.direction=Inbound 30 | {13}.resources[0].properties.securityRules[0].properties.sourceAddressPrefix=443 31 | {13}.resources[0].properties.securityRules[1].properties=443 32 | {13}.resources[0].properties.securityRules["name" = "httpFromPublic"]=443 33 | {13}.resources[0].properties.securityRules['name'='httpFromPublic'].properties.destinationPortRange=443 34 | {13}.resources[0].properties.securityRules[*].direction=Inbound 35 | {abcd}.resources[*].properties.securityRules[*].properties.direction=Inbound 36 | {abcd}.resources[*].properties.securityRules[*].properties.destinationPortRange=443 37 | {15}.resources[1].properties.securityRules['name' = 'httpFromPublic'].properties.destinationPortRange=80 38 | {13}.resources[0].properties.securityRules[1].properties.destinationAddressPrefix=172.1.1.0/24 39 | contains({13}.resources[0].properties.securityRules[1].properties.destinationAddressPrefix)=172.1.1.0 40 | {AWS-SECURITY_GROUP1}.resources[*].properties.securityRules[*].properties.destinationPortRange=443 41 | {15}.resources[1].properties.securityRules['name' = 'httpFromPublic'].properties.destinationPortRange=80 42 | {AWS-SECURITY_GROUP1}.resources[*].properties.securityRules[*].properties.destinationPortRange=443 43 | {AZR_NSG-2}.resources[*].properties.securityRules[*].properties.destinationPortRange=443 44 | {1}.version=1.5.2 45 | {1}.version='Boolean flag to turn on and off of virtual machine scale sets' 46 | {1}.location='eastus-2' 47 | -------------------------------------------------------------------------------- /docs/docs/tests/outputs.md: -------------------------------------------------------------------------------- 1 | Once you run a test, it will generate an output file in the container's directory. This file is always called `output-xyz.json` where the `xyz.json` is the original test file name. 
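Because the output file is plain JSON, post-processing it in a pipeline is straightforward. A minimal sketch, using the fields documented below (the file name is illustrative):

```python
import json

# "output-test.json" stands in for whichever output file your run produced.
with open("output-test.json") as fh:
    output = json.load(fh)

# Every entry in "results" reports "passed" or "failed" in its "result" field.
failed = [r["testId"] for r in output["results"] if r["result"] == "failed"]
print("Failed test cases:", failed)
```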
2 | 3 | # Structure of output files 4 | 5 | The structure of an output test file always looks like this: 6 | ```json 7 | { 8 | "contentVersion": "1.0.0.0", 9 | "fileType": "output", 10 | "timestamp": 1555342894792, 11 | "snapshot": "snapshot", 12 | "container": "container1", 13 | "test": "test", 14 | "results": [] 15 | } 16 | ``` 17 | 18 | Here is a description of the different items you will find in this file: 19 | 20 | | Field | Description | 21 | |-----|-------------------| 22 | | contentVersion | The version of the rule engine used to parse the rule | 23 | | timestamp | Epoch timestamp when this file was generated | 24 | | snapshot | Name of the snapshot that was used | 25 | | container | Name of the container used in this test | 26 | | test | Name of the test that was used | 27 | | results | The results of all tests that ran; see below for more information | 28 | 29 | # Results 30 | 31 | The results section contains the result of every test case that was run, in one big list. Each result contains information about the test case so you can see the outcome and the information used to run the test. Here is an example with field-by-field explanations: 32 | 33 | ```json 34 | { 35 | "result": "failed", 36 | "snapshots": [], 37 | "testId": "1", 38 | "rule": "{1}.Vpcs[0].CidrBlock='172.31.0.0/16'" 39 | } 40 | ``` 41 | 42 | | Field | Description | 43 | |-----|-------------------| 44 | | result | Reports whether the test case passed or failed | 45 | | snapshots | An array of all snapshots that were used in the rule. See below for more information. | 46 | | testId | The name of the test case that generated this result | 47 | | rule | The rule that was used to run this test | 48 | 49 | # Snapshots 50 | 51 | The `snapshots` section of a test result contains all the information you would need to debug a failed test. Here is an example with an explanation of the fields: 52 | 53 | ```json 54 | { 55 | "id": "1", 56 | "path": "", 57 | "structure": "aws", 58 | "reference": "", 59 | "source": "awsStructure" 60 | } 61 | ``` 62 | 63 | | Field | Description | 64 | |-----|-------------------| 65 | | id | Name of the snapshot that was used as part of the rule | 66 | | path | The path that this snapshot refers to (`Azure` and `Git` only) | 67 | | structure | The type of snapshot | 68 | | reference | **TBD** | 69 | | source | The connector name that was used to retrieve the data| 70 | -------------------------------------------------------------------------------- /docs/docs/workflow.md: -------------------------------------------------------------------------------- 1 | # Prancer Workflow 2 | 3 | **Prancer** expects its configuration files to be available on your system to complete its workflow. 4 | We have two options to get up and running with the Prancer platform: 5 | - Easy way 6 | - Hard way 7 | 8 | # Prancer workflow: The Easy way 9 | The easiest (and recommended) way is to clone the `Hello World!` application and build your project around it. 10 | 11 | > You can find the details of the `Hello World` application [here.](https://github.com/prancer-io/prancer-hello-world) 12 | 13 | You can modify the files in the `Hello World` application and add your own files based on the project you are working on. This is the easiest and fastest way to get up and running with the Prancer platform. 14 | 15 | # Prancer workflow: The Hard way 16 | Based on your project structure, you can create the required files and folders and put the config files there, as sketched below.
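A minimal layout might look like this (the names are illustrative; most locations are configurable in `config.ini`):

    tests/prancer/
    ├── config.ini
    └── validation/
        └── container1/
            ├── snapshot.json
            └── test.json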
17 | 18 | 19 | ## Setup the framework 20 | 21 | Working with **Prancer** itself is a straightforward activity and needs only a few steps: 22 | 23 | ### Prancer Basic Workflow 24 | 1. Create a **Prancer** project directory in your application's project directory 25 | 2. Configure the connectors for each required provider 26 | 3. Create collections 27 | 4. Use existing `snapshot configuration` files 28 | 5. Use existing `test` files based on available compliance 29 | 6. Add optional exclusions to skip tests based on resources, tests, or a combination of both 30 | 7. Run the tests 31 | 32 | Some of these steps are more involved than others, but the general workflow is straightforward, keeping the learning curve as gentle as possible. 33 | 34 | # Running compliance tests 35 | 36 | Running a test and gathering results is kept to the simplest steps possible so that integration into an existing continuous improvement/continuous deployment pipeline stays as simple as possible. The last thing you want is a cumbersome tool: 37 | 38 | 1. Check out your application project 39 | 2. Go to the **Prancer** project directory 40 | 3. Run the Prancer platform and act on the return code 41 | 4. Save the outputs as artifacts for later viewing 42 | 43 | Integrating with a CI/CD pipeline can be as simple as running a **Bash** script in a folder with an `if` statement around it to catch potential failures. With the files written to disk, you can then dig into the results as you want by parsing the simple **JSON** files. 44 | 45 | # The validation workflow 46 | 47 | Each time the tool runs, the test suite is executed sequentially: 48 | 49 | 1. Configuration files are read (Project configuration, Connector configuration, Snapshot configuration, Tests) 50 | 2. Providers are communicated with, snapshots are built and then saved to the database 51 | 3. Tests run against the snapshots 52 | 4. Reports are produced 53 | 54 | ![High-Level process](images/high-level-process.png) 55 | -------------------------------------------------------------------------------- /utilities/json2md/json2md.py: -------------------------------------------------------------------------------- 1 | #! json2md_venv/bin/python3 2 | 3 | import argparse 4 | import json 5 | import os 6 | import re 7 | 8 | import pandas as pd 9 | from urllib.parse import urljoin 10 | from jinja2 import Environment, FileSystemLoader 11 | 12 | meta_key = 'meta' 13 | result_key = 'results' 14 | meta_keys = ['timestamp', 'snapshot', 'container', 'test'] 15 | table_keys = ['snapshots', 'tags'] 16 | 17 | path_prefix = os.environ.get('PATH_PREFIX') 18 | rego_prefix = os.environ.get('REGO_PREFIX') 19 | 20 | 21 | def valid_path(s): 22 | if not os.path.isfile(s): 23 | raise argparse.ArgumentTypeError("Path does not exist: {0}.".format(s)) 24 | return s 25 | 26 | 27 | def list_to_table(list_value): 28 | tb = pd.json_normalize(list_value) 29 | if path_prefix is not None and 'paths' in tb.columns: 30 | tb['paths'] = tb['paths'].apply(lambda x: [f'{path_prefix}{i}' for i in x]) 31 | 32 | tb = tb.T.reset_index() 33 | tb.columns = ['Title', 'Description'] 34 | return tb.to_markdown(index=False) 35 | 36 | 37 | def prepare_data(data): 38 | 39 | # form metadata 40 | meta_data = list_to_table({key: data.get(key) for key in meta_keys}) 41 | 42 | # form the message result 43 | if result_key not in data: 44 | raise ValueError(f"[ERROR] JSON data doesn't have key: {result_key}.
Check your json file or update the result_key in the script") 45 | # sys.exit(1) 46 | 47 | results = data[result_key] 48 | for i in range(len(results)): 49 | if rego_prefix is not None and 'rule' in results[i]: 50 | file_name = re.search(r"file\((.*)\)", results[i]['rule']).group(1) 51 | results[i]['rule'] = f"file({urljoin(rego_prefix, file_name)})" 52 | 53 | for key in table_keys: 54 | try: 55 | results[i][key] = list_to_table(results[i][key]) 56 | except KeyError: 57 | print(f"[WARNING] JSON data doesn't have key: {key}. Your output might not be complete as expected") 58 | 59 | # update data 60 | data[meta_key] = meta_data 61 | 62 | return data 63 | 64 | 65 | def main(): 66 | parser = argparse.ArgumentParser() 67 | parser.add_argument('--template', help="Template file", type=str, default="template.md") 68 | parser.add_argument('--input', help="Path to json data", type=valid_path, required=True) 69 | parser.add_argument('--output', help="Write to markdown file", type=str, default="output.md") 70 | 71 | args, _ = parser.parse_known_args() 72 | # print(args) 73 | 74 | env = Environment(loader=FileSystemLoader('.')) 75 | rtemplate = env.get_template(args.template) 76 | 77 | with open(args.input) as f: 78 | data = json.load(f) 79 | 80 | rtemplate.stream(data=prepare_data(data)).dump(args.output) 81 | print(f"Completed! Your MD file at: {args.output}") 82 | 83 | 84 | if __name__ == '__main__': 85 | main() 86 | -------------------------------------------------------------------------------- /tests/processor/template_processor/aws/sample/EC2InstanceWithSecurityGroupSample.yaml: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: '2010-09-09' 2 | Metadata: 3 | License: Apache-2.0 4 | Description: 'AWS CloudFormation Sample Template EC2InstanceWithSecurityGroupSample: 5 | Create an Amazon EC2 instance running the Amazon Linux AMI. The AMI is chosen based 6 | on the region in which the stack is run. This example creates an EC2 security group 7 | for the instance to give you SSH access. **WARNING** This template creates an Amazon 8 | EC2 instance. You will be billed for the AWS resources used if you create a stack 9 | from this template.' 10 | Parameters: 11 | KeyName: 12 | Description: Name of an existing EC2 KeyPair to enable SSH access to the instance 13 | Type: AWS::EC2::KeyPair::KeyName 14 | ConstraintDescription: must be the name of an existing EC2 KeyPair. 15 | InstanceType: 16 | Description: WebServer EC2 instance type 17 | Type: String 18 | Default: t3.small 19 | AllowedValues: [t2.nano, t2.micro, t2.small, t2.medium, t2.large, t2.xlarge, t2.2xlarge, 20 | t3.nano, t3.micro, t3.small, t3.medium, t3.large, t3.xlarge, t3.2xlarge, 21 | m4.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m4.10xlarge, 22 | m5.large, m5.xlarge, m5.2xlarge, m5.4xlarge, 23 | c5.large, c5.xlarge, c5.2xlarge, c5.4xlarge, c5.9xlarge, 24 | g3.8xlarge, 25 | r5.large, r5.xlarge, r5.2xlarge, r5.4xlarge, r3.12xlarge, 26 | i3.xlarge, i3.2xlarge, i3.4xlarge, i3.8xlarge, 27 | d2.xlarge, d2.2xlarge, d2.4xlarge, d2.8xlarge] 28 | ConstraintDescription: must be a valid EC2 instance type. 29 | SSHLocation: 30 | Description: The IP address range that can be used to SSH to the EC2 instances 31 | Type: String 32 | MinLength: 9 33 | MaxLength: 18 34 | Default: 0.0.0.0/0 35 | AllowedPattern: (\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})/(\d{1,2}) 36 | ConstraintDescription: must be a valid IP CIDR range of the form x.x.x.x/x.
37 | LatestAmiId: 38 | Type: 'AWS::SSM::Parameter::Value' 39 | Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2' 40 | Resources: 41 | EC2Instance: 42 | Type: AWS::EC2::Instance 43 | Properties: 44 | InstanceType: !Ref 'InstanceType' 45 | SecurityGroups: [] 46 | KeyName: !Ref 'KeyName' 47 | ImageId: !Ref 'LatestAmiId' 48 | Outputs: 49 | InstanceId: 50 | Description: InstanceId of the newly created EC2 instance 51 | Value: !Ref 'EC2Instance' 52 | AZ: 53 | Description: Availability Zone of the newly created EC2 instance 54 | Value: !GetAtt [EC2Instance, AvailabilityZone] 55 | PublicDNS: 56 | Description: Public DNSName of the newly created EC2 instance 57 | Value: !GetAtt [EC2Instance, PublicDnsName] 58 | PublicIP: 59 | Description: Public IP address of the newly created EC2 instance 60 | Value: !GetAtt [EC2Instance, PublicIp] -------------------------------------------------------------------------------- /docs/docs/index.md: -------------------------------------------------------------------------------- 1 | # Prancer Security Platform Docs 2 | 3 | Welcome to the **Prancer** cloud security platform documentation! 4 | 5 | **Prancer** is an end-to-end cloud security platform that promotes Shift Left strategies. Prancer provides tools for the pre-deployment and post-deployment multi-cloud validation at scale for your Infrastructure as Code (IaC) security and continuous compliance requirements in the cloud. Also, It provides Penetration as Code (PAC) Framework. 6 | > Note: PAC is not covered in this documentation. 7 | 8 | For the IaC Static Code Analysis (SCA) part, **Prancer** integrates with your DevOps pipeline to ensure that secure and quality code will reach the cloud. It prevents any security misconfiguration from being applied to the cloud via the IaC pipeline security validation gates. In case of finding any problem, Prancer can file a PR on behalf of the user to remediate the issue in the code. 9 | 10 | Moreover, **Prancer** validates your cloud resources based on available compliance frameworks and custom policy files. It scans the cloud environment continuously to find security misconfigurations. It alerts and reports on problems and provides an easy way to auto remediate them. 11 | 12 | # Editions of Prancer Platform 13 | 14 | Prancer Cloud Security Platform comes in 3 different editions: 15 | 16 | * **Basic edition** is the community edition of the platform. It is an open-source version of the framework available on GitHub. This feature-rich tool allows you to run the platform in its fundamental features via command-line interface(CLI). 17 | 18 | * **Enterprise edition** comes with many enhancements available for enterprise companies. It is a virtual appliance running inside the company's network. You can use the web interface, enterprise CLI or API calls to access Prancer features. 19 | 20 | * **Premium edition** is the subscription-based edition software as a service (SaaS) solution of the Prancer platform. It has all the enterprise edition's advanced features and is hosted on the prancer cloud. 21 | 22 | This documentation is focused on the Prancer Framework Basic Edition (Open Source). In some places where we need to refer to other platform editions, we specifically mention it in the docs. 
This will be highlighted in the text: 23 | 24 | If we target the Enterprise Edition 25 | > Target Platform : ** Enterprise Edition ** 26 | 27 | If we target the Premium Edition 28 | > Target Platform : ** Premium Edition ** 29 | 30 | # Overview of documentation 31 | 32 | The first few pages of the documentation are for the general workflow, installation, and terminology. After that, each section will focus on specific aspects of the platform and various configuration options available for you. 33 | 34 | 35 | -------------------------------------------------------------------------------- /src/processor/helper/yaml/yaml_utils.py: -------------------------------------------------------------------------------- 1 | """ Utility functions for yaml.""" 2 | 3 | import yaml 4 | from yaml.loader import FullLoader 5 | from collections import OrderedDict 6 | from processor.helper.file.file_utils import exists_file 7 | from processor.logging.log_handler import getlogger 8 | 9 | logger = getlogger() 10 | MultipleConvertionKey = "_multiple_yaml" 11 | HelmChartConvertionKey = "_prancer_helm_template" 12 | 13 | def save_yaml_to_file(indata, outfile, indent=None): 14 | """Save dict data to the file in yaml format""" 15 | if indata is not None: 16 | try: 17 | with open(outfile, 'w') as yamlfile: 18 | yaml.dump(indata, yamlfile, indent=indent) 19 | except: 20 | pass 21 | 22 | 23 | def yaml_from_string(yaml_str): 24 | """Get dict from the string in yaml format.""" 25 | try: 26 | yamldata = yaml.load(yaml_str, Loader=FullLoader) 27 | return yamldata 28 | except: 29 | print('Failed to load yaml data: %s' % yaml_str) 30 | return None 31 | 32 | 33 | def yaml_from_file(yamlfile, loader=None): 34 | """ Get yaml data from the file in a dict.""" 35 | yamldata = None 36 | try: 37 | if exists_file(yamlfile): 38 | with open(yamlfile) as infile: 39 | if loader: 40 | yamldata = yaml.load(infile, Loader=loader) 41 | else: 42 | yamldata = yaml.load(infile, Loader=FullLoader) 43 | except Exception as ex: 44 | print('Failed to load yaml from file: %s, exception: %s' % (yamlfile, ex)) 45 | return yamldata 46 | 47 | 48 | def valid_yaml(yaml_input): 49 | """ Checks validity of the yaml """ 50 | try: 51 | data = yaml.load(yaml_input, Loader=FullLoader) 52 | return isinstance(data, dict) 53 | except: 54 | print('Not a valid yaml: %s' % yaml_input) 55 | return False 56 | 57 | def multiple_yaml_from_file(yamlfile, loader=None): 58 | """ Get multiple yaml data from the file in a dict.""" 59 | yamldata = None 60 | try: 61 | if exists_file(yamlfile): 62 | with open(yamlfile) as infile: 63 | if loader: 64 | yamldata = list(yaml.load_all(infile, Loader=loader)) 65 | else: 66 | yamldata = list(yaml.load_all(infile)) 67 | except Exception as ex: 68 | return None 69 | return yamldata 70 | 71 | def is_multiple_yaml_file(file_path): 72 | try: 73 | if len (multiple_yaml_from_file(file_path,loader=FullLoader)) > 1: 74 | return True 75 | else: 76 | return False 77 | except Exception as ex: 78 | return False 79 | 80 | def is_multiple_yaml_convertion(file_path): 81 | return MultipleConvertionKey in file_path 82 | 83 | def is_helm_chart_convertion(file_path): 84 | return HelmChartConvertionKey in file_path 85 | -------------------------------------------------------------------------------- /tests/processor/helper/config/test_rundata_utils.py: -------------------------------------------------------------------------------- 1 | import os 2 | import shutil 3 | from processor.helper.config.rundata_utils import init_currentdata,\ 4 | 
put_in_currentdata, delete_from_currentdata, delete_currentdata 5 | from processor.helper.config.config_utils import framework_currentdata 6 | 7 | 8 | TESTSDIR = None 9 | 10 | 11 | def set_tests_dir(): 12 | global TESTSDIR 13 | if TESTSDIR: 14 | return TESTSDIR 15 | MYDIR = os.path.abspath(os.path.dirname(__file__)) 16 | TESTSDIR = os.getenv('FRAMEWORKDIR', os.path.join(MYDIR, '../../../../')) 17 | return TESTSDIR 18 | 19 | set_tests_dir() 20 | 21 | 22 | def test_init_config(): 23 | runcfg = framework_currentdata() 24 | rundir = os.path.dirname(runcfg) 25 | # if os.path.exists(rundir): 26 | # shutil.rmtree(rundir) 27 | # assert False == os.path.exists(rundir) 28 | assert True == os.path.exists(runcfg) 29 | init_currentdata() 30 | assert True == os.path.exists(rundir) 31 | assert True == os.path.exists(runcfg) 32 | os.remove(runcfg) 33 | assert True == os.path.exists(rundir) 34 | assert False == os.path.exists(runcfg) 35 | init_currentdata() 36 | assert True == os.path.exists(rundir) 37 | assert True == os.path.exists(runcfg) 38 | 39 | 40 | def test_add_to_run_config(load_json_file): 41 | runcfg = framework_currentdata() 42 | init_currentdata() 43 | assert True == os.path.exists(runcfg) 44 | put_in_currentdata('a', 'val1') 45 | runconfig = load_json_file(runcfg) 46 | result = True if runconfig and 'a' in runconfig and runconfig['a'] == 'val1' else False 47 | assert result == True 48 | put_in_currentdata('b', ['val1']) 49 | runconfig = load_json_file(runcfg) 50 | result = True if runconfig and 'b' in runconfig and runconfig['b'] == ['val1'] else False 51 | assert result == True 52 | put_in_currentdata('b', 'val2') 53 | runconfig = load_json_file(runcfg) 54 | result = True if runconfig and 'b' in runconfig and runconfig['b'] == ['val1', 'val2'] else False 55 | assert result == True 56 | 57 | 58 | def test_delete_from_run_config(load_json_file): 59 | runcfg = framework_currentdata() 60 | init_currentdata() 61 | assert True == os.path.exists(runcfg) 62 | put_in_currentdata('a', 'val1') 63 | runconfig = load_json_file(runcfg) 64 | result = True if runconfig and 'a' in runconfig and runconfig['a'] == 'val1' else False 65 | assert result == True 66 | delete_from_currentdata('a') 67 | runconfig = load_json_file(runcfg) 68 | result = False if runconfig and 'a' in runconfig else True 69 | assert result == True 70 | 71 | 72 | def test_delete_run_config(): 73 | runcfg = framework_currentdata() 74 | init_currentdata() 75 | assert True == os.path.exists(runcfg) 76 | put_in_currentdata('token', 'abcd') 77 | delete_currentdata() 78 | assert False == os.path.exists(runcfg) 79 | -------------------------------------------------------------------------------- /src/processor/comparison/comparison_functions.py: -------------------------------------------------------------------------------- 1 | """All comparison functions.""" 2 | from processor.helper.json.json_utils import check_field_exists, get_field_value 3 | 4 | 5 | def apply_extras(value, extras): 6 | """Apply any additional functionalities after evaluation.""" 7 | for extra in extras: 8 | if extra == 'len': 9 | value = len(value) if hasattr(value, 'len') or hasattr(value, '__len__') else 0 10 | return value 11 | 12 | 13 | def equality(data, loperand, roperand, is_not=False, extras=None): 14 | """ Compare and return value """ 15 | value = get_field_value(data, loperand) 16 | eql = False 17 | if value: 18 | if extras: 19 | value = apply_extras(value, extras) 20 | if type(value) == type(roperand) and value == roperand: 21 | eql = True 22 | if is_not: 23 | 
eql = not eql 24 | return eql 25 | 26 | 27 | def less_than(data, loperand, roperand, is_not=False, extras=None): 28 | """ Compare and return value """ 29 | value = get_field_value(data, loperand) 30 | lt = False 31 | if value: 32 | if extras: 33 | value = apply_extras(value, extras) 34 | if type(value) == type(roperand) and value < roperand: 35 | lt = True 36 | if is_not: 37 | lt = not lt 38 | return lt 39 | 40 | 41 | def less_than_equal(data, loperand, roperand, is_not=False, extras=None): 42 | """ Compare and return value """ 43 | value = get_field_value(data, loperand) 44 | lte = False 45 | if value: 46 | if extras: 47 | value = apply_extras(value, extras) 48 | if type(value) == type(roperand) and value <= roperand: 49 | lte = True 50 | if is_not: 51 | lte = not lte 52 | return lte 53 | 54 | 55 | def greater_than(data, loperand, roperand, is_not=False, extras=None): 56 | """ Compare and return value """ 57 | value = get_field_value(data, loperand) 58 | gt = False 59 | if value: 60 | if extras: 61 | value = apply_extras(value, extras) 62 | if type(value) == type(roperand) and value > roperand: 63 | gt = True 64 | if is_not: 65 | gt = not gt 66 | return gt 67 | 68 | 69 | def greater_than_equal(data, loperand, roperand, is_not=False, extras=None): 70 | """ Compare and return value """ 71 | value = get_field_value(data, loperand) 72 | gte = False 73 | if value: 74 | if extras: 75 | value = apply_extras(value, extras) 76 | if type(value) == type(roperand) and value >= roperand: 77 | gte = True 78 | if is_not: 79 | gte = not gte 80 | return gte 81 | 82 | 83 | def exists(data, loperand, _, is_not=False, extras=None): 84 | """ Compare and return value """ 85 | present = check_field_exists(data, loperand) 86 | if is_not: 87 | present = not present 88 | return present 89 | -------------------------------------------------------------------------------- /docs/docs/tests/syntax.md: -------------------------------------------------------------------------------- 1 | Rules are not limited to simple comparison statements. You have access to a few functions and a comprehensive number of operators. 2 | 3 | # Operators 4 | 5 | We have talked about the equality `=` comparison operator so far, but as you can see from the list below, you have access to all common operators from mathematical and boolean expressions. 
6 | 7 | **Arithmetic operators** 8 | 9 | | Operator | What it does | 10 | |:--------:|--------------| 11 | | `+` | Addition operator | 12 | | `-` | Subtraction operator | 13 | | `*` | Multiplication operator | 14 | | `/` | Division operator | 15 | | `%` | Modulo operator | 16 | | `^^` | Exponent operator | 17 | 18 | **Boolean operators** 19 | 20 | | Operator | What it does | 21 | |:--------:|--------------| 22 | | `||` | Or operator | 23 | | `&&` | And operator | 24 | | `!` | Not operator | 25 | 26 | **Comparison operators** 27 | 28 | | Operator | What it does | 29 | |:--------:|--------------| 30 | | `=` | Compares for equality | 31 | | `<` | Compares for less than | 32 | | `>` | Compares for greater than | 33 | | `<=` | Compares for less than or equal to | 34 | | `>=` | Compares for greater than or equal to | 35 | 36 | **Lookup operators** 37 | 38 | | Operator | What it does | 39 | |:--------:|--------------| 40 | | `{m}` | Retrieves the snapshot data for snapshot id `m` | 41 | 42 | **List operators** 43 | 44 | | Operator | What it does | 45 | |:--------:|--------------| 46 | | `m[]` | Retrieves the complete list from `m` | 47 | | `m[n]` | Retrieves the `n`'th item from the list `m` | 48 | | `m['n'='o']` | Retrieves the item that matches predicate `n` equals `o` from the list `m` | 49 | 50 | **Dictionary operators** 51 | 52 | | Operator | What it does | 53 | |:--------:|--------------| 54 | | `m.n` | Retrieves the property `n` from the dictionary `m` | 55 | 56 | # Functions 57 | 58 | Functions work just like in any other programming language: use the identifier and pass the parameters inside parentheses, and they will yield a value. 59 | 60 | **Data control** 61 | 62 | | Operator | What it does | 63 | |:--------:|--------------| 64 | | `exists(n)` | Ensures that `n` does not resolve to `None` | 65 | 66 | **Data aggregation** 67 | 68 | | Operator | What it does | 69 | |:--------:|--------------| 70 | | `count(n)` | Counts the number of items in `n` | 71 | 72 | # Mixing operators and functions 73 | 74 | You can create very complex expressions with what you have seen so far. There is almost no limit to what you can do with the rules engine. Here are a few complex examples: 75 | 76 | **Compare the dnsServers of two different snapshots** 77 | 78 | {2}.properties.dhcpOptions.dnsServers[] = {3}.properties.dhcpOptions.dnsServers[] 79 | 80 | **Ensure you have exactly 4 DNS servers defined across both snapshots** 81 | 82 | count({2}.properties.dhcpOptions.dnsServers[]) + count({3}.properties.dhcpOptions.dnsServers[]) = 4 83 | 84 | The only limit is your imagination and the data that you have on hand.
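As one more example, you can combine a function with a boolean operator to guard a count against a missing field (the snapshot id and field path are illustrative):

    exists({2}.properties.dhcpOptions.dnsServers[]) && count({2}.properties.dhcpOptions.dnsServers[]) >= 2

This passes only when the `dnsServers` list exists and holds at least two entries.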
-------------------------------------------------------------------------------- /src/processor/templates/google/util.py: -------------------------------------------------------------------------------- 1 | from jinja2 import Undefined 2 | # from jinja2._compat import implements_to_string, string_types 3 | from jinja2.utils import missing, object_type_repr 4 | from processor.logging.log_handler import getlogger 5 | 6 | logger = getlogger() 7 | 8 | class ResourceContext(object): 9 | 10 | def __init__(self, properties={}, **kwargs): 11 | self.properties = properties 12 | 13 | def __getattribute__(self, name): 14 | __dict__ = super(ResourceContext, self).__getattribute__('__dict__') 15 | value = {} 16 | if name in __dict__: 17 | value = object.__getattribute__(self, name) 18 | return value 19 | 20 | 21 | # @implements_to_string 22 | class SilentUndefined(Undefined): 23 | ''' 24 | handle undefined variables 25 | ''' 26 | def _fail_with_undefined_error(self, *args, **kwargs): 27 | if self._undefined_hint is None: 28 | if self._undefined_obj is missing: 29 | hint = '%r is undefined' % self._undefined_name 30 | # elif not isinstance(self._undefined_name, string_types): 31 | elif not isinstance(self._undefined_name, str): 32 | hint = '%s has no element %r' % ( 33 | self._undefined_obj, 34 | self._undefined_name 35 | ) 36 | else: 37 | hint = '%r has no attribute %r' % ( 38 | self._undefined_obj, 39 | self._undefined_name 40 | ) 41 | else: 42 | hint = self._undefined_hint 43 | # logger.error(hint) 44 | return '' 45 | 46 | __slots__ = () 47 | __iter__ = __str__ = __len__ = __nonzero__ = __eq__ = \ 48 | __ne__ = __bool__ = __hash__ = \ 49 | _fail_with_undefined_error 50 | 51 | def __getattr__(self, name): 52 | if name[:2] == '__': 53 | raise AttributeError(name) 54 | return self._fail_with_undefined_error() 55 | 56 | __add__ = __radd__ = __mul__ = __rmul__ = __div__ = __rdiv__ = \ 57 | __truediv__ = __rtruediv__ = __floordiv__ = __rfloordiv__ = \ 58 | __mod__ = __rmod__ = __pos__ = __neg__ = __call__ = \ 59 | __getitem__ = __lt__ = __le__ = __gt__ = __ge__ = __int__ = \ 60 | __float__ = __complex__ = __pow__ = __rpow__ = __sub__ = \ 61 | __rsub__ = _fail_with_undefined_error 62 | 63 | def __eq__(self, other): 64 | return type(self) is type(other) 65 | 66 | def __ne__(self, other): 67 | return not self.__eq__(other) 68 | 69 | def __hash__(self): 70 | return id(type(self)) 71 | 72 | def __str__(self): 73 | return u'' 74 | 75 | def __len__(self): 76 | return 0 77 | 78 | def __iter__(self): 79 | if 0: 80 | yield None 81 | 82 | def __nonzero__(self): 83 | return False 84 | __bool__ = __nonzero__ 85 | 86 | def __repr__(self): 87 | return 'Undefined' --------------------------------------------------------------------------------
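As a closing note on `SilentUndefined` above: a minimal sketch of how such a class plugs into Jinja2 (the template string is illustrative). Passing it as the `undefined` class makes missing variables render as empty strings instead of raising `UndefinedError`:

```python
from jinja2 import Environment

# Assuming SilentUndefined is importable from the module above.
from processor.templates.google.util import SilentUndefined

env = Environment(undefined=SilentUndefined)
template = env.from_string("zone: {{ properties['zone'] }}")
# "properties" is never supplied, yet rendering succeeds.
print(template.render())  # -> "zone: "
```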