├── .gitignore
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── README.md
├── app.py
├── assets
│   ├── containers
│   │   ├── prowler_list_check
│   │   │   ├── Dockerfile
│   │   │   └── app.py
│   │   └── prowler_scan_check
│   │       ├── Dockerfile
│   │       └── app.py
│   └── lambdas
│       ├── enable_import_prowler_findings
│       │   └── index.py
│       ├── enable_security_hub_resource
│       │   └── index.py
│       └── transform_findings
│           ├── index.py
│           └── requirements.txt
├── azure-pipelines.yml
├── cdk.json
├── custom_constructs
│   ├── prowler_scanner.py
│   ├── security_hub.py
│   └── ssm_stored_parameter.py
├── docs
│   └── Security_Hub_Analysis.png
├── requirements-dev.txt
├── requirements.txt
├── setup.py
├── source.bat
├── stacks
│   ├── __init__.py
│   ├── analytic_sink_stack.py
│   ├── security_hub_aggegation_stack.py
│   └── security_hub_collection_stack.py
├── test.py
└── tests
    └── transform_finding_test.py

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
*.swp
package-lock.json
__pycache__
.pytest_cache
.env
.venv
*.egg-info
.idea

# CDK asset staging directory
.cdk.staging
cdk.out

# Mac OS X
.DS_Store

--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
## Code of Conduct
This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
opensource-codeofconduct@amazon.com with any additional questions or comments.

--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
# Contributing Guidelines

Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional
documentation, we greatly value feedback and contributions from our community.

Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
information to effectively respond to your bug report or contribution.

## Reporting Bugs/Feature Requests

We welcome you to use the GitHub issue tracker to report bugs or suggest features.

When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already
reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:

* A reproducible test case or series of steps
* The version of our code being used
* Any modifications you've made relevant to the bug
* Anything unusual about your environment or deployment

## Contributing via Pull Requests

Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:

1. You are working against the latest source on the *main* branch.
2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
3. You open an issue to discuss any significant work - we would hate for your time to be wasted.

To send us a pull request, please:

1. Fork the repository.
2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
3. Ensure local tests pass.
4. Commit to your fork using clear commit messages.
5. Send us a pull request, answering any default questions in the pull request interface.
6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.

GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
[creating a pull request](https://help.github.com/articles/creating-a-pull-request/).

## Finding contributions to work on

Looking at the existing issues is a great way to find something to contribute to. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start.

## Code of Conduct

This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
opensource-codeofconduct@amazon.com with any additional questions or comments.


## Security issue notifications
If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue.


## Licensing

See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution.

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
## AWS Security Hub Analytic Pipeline

[AWS Security Hub](https://aws.amazon.com/security-hub/) is a service that gives you aggregated visibility into your security and compliance posture across multiple AWS accounts. By exporting findings to Athena, you can enrich the data with additional sources such as your configuration management database (CMDB) or IT service management database. Additionally, you can build analytic dimensions to find trends and patterns. This code is part of an upcoming blog post (link TBA) on visualizing Security Hub findings for senior leaders and decision-makers.
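As a taste of what becomes possible once findings land in Athena, here is a sketch of a query run with boto3. The database name matches the one this stack creates; the table name is illustrative, since the Glue crawler derives it from your bucket layout (using the `security-hub-crawled-` prefix defined in this stack) — check the Glue catalog for the exact name.

```python
import boto3

# Count findings by severity label across every account, product, and region.
athena = boto3.client('athena', region_name='us-east-1')

response = athena.start_query_execution(
    QueryString="""
        SELECT Severity_Label, COUNT(*) AS finding_count
        FROM "security_hub_database"."security-hub-crawled-findings"
        GROUP BY Severity_Label
        ORDER BY finding_count DESC
    """,
    QueryExecutionContext={'Database': 'security_hub_database'},
    ResultConfiguration={'OutputLocation': 's3://YOUR-QUERY-RESULT-BUCKET/'},
)
print(response['QueryExecutionId'])
```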

This repository contains a CDK stack that builds the following infrastructure:

![CDK Infrastructure](docs/Security_Hub_Analysis.png)

Essentially there are two CDK stacks (in one CDK application):
- A stack deployed per region that enables Security Hub and provisions the infrastructure needed to stream findings to S3
- A stack deployed once that contains all the infrastructure to transform raw findings and expose them through Athena.

The CDK application also includes a serverless Prowler scanner, designed to quickly generate real, actionable findings that help you walk through the Athena and QuickSight pieces.

## Building the stack

Create a virtual environment

```bash
$ python3 -m venv .venv
$ source .venv/bin/activate
```

Install CDK dependencies

```bash
$ pip install -r requirements.txt
```

## Deploy the stack

```bash
$ cdk deploy --all
```

## Tested Use Cases
- AWS native security services -- GuardDuty, Access Analyzer, Inspector
- Security Hub standards -- CIS Benchmark, PCI/DSS, AWS Security Best Practices
- Third-party integrations -- Prowler

## Security

See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.

## License

This library is licensed under the MIT-0 License. See the LICENSE file.

--------------------------------------------------------------------------------
/app.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python3
from aws_cdk import core as cdk

# For consistency with TypeScript code, `cdk` is the preferred import name for
# the CDK's core module. The following line also imports it as `core` for use
# with examples from the CDK Developer's Guide, which are in the process of
# being updated to use `cdk`. You may delete this import if you don't need it.
from aws_cdk import core

from stacks.security_hub_collection_stack import SecurityHubCollectionStack
from stacks.security_hub_aggegation_stack import SecurityHubAggregationStack
from stacks.analytic_sink_stack import AnalyticSinkStack

app = core.App()

# You can get a list of all regions by using these commands
# response = boto3.client('ec2').describe_regions()
# region_names = [r['RegionName'] for r in response['Regions'] if r['OptInStatus'] == 'opt-in-not-required']

analytic_sink_stack = AnalyticSinkStack(app, 'AnalyticSink',
                                        env=cdk.Environment(
                                            region='us-east-1'
                                        ))

regions = ['us-east-2', 'us-west-1']

for r in regions:
    stack = SecurityHubCollectionStack(app, f'SecurityHub-{r}',
                                       env=cdk.Environment(
                                           region=r
                                       ))
    stack.add_dependency(analytic_sink_stack)

stack = SecurityHubAggregationStack(app, 'Aggregation',
                                    env=cdk.Environment(
                                        region='us-east-1'
                                    ),
                                    sink_region='us-east-1')
stack.add_dependency(analytic_sink_stack)


app.synth()

--------------------------------------------------------------------------------
/assets/containers/prowler_list_check/Dockerfile:
--------------------------------------------------------------------------------
FROM toniblyx/prowler:latest as builder

FROM public.ecr.aws/lambda/python:3.8

COPY --from=builder /prowler /prowler
COPY app.py ./

RUN yum update -y && \
    yum install -y python3 bash curl jq file coreutils which && \
    pip3 install --upgrade pip && \
    pip install awscli boto3 detect-secrets

CMD ["app.handler"]

--------------------------------------------------------------------------------
/assets/containers/prowler_list_check/app.py:
--------------------------------------------------------------------------------
import os
import boto3
import logging
from typing import List
from time import sleep

logger = logging.getLogger()
logger.setLevel(logging.INFO)


class ProwlerListGroups:
    def __init__(self, topic_arn):
        self.topic = boto3.resource('sns').Topic(topic_arn)

    @staticmethod
    def __list_groups():
        stream = os.popen('/prowler/prowler -l -b')
        raw_out = stream.read()
        return raw_out.split('\n')

    @staticmethod
    def process_groups(raw_out: List[str]):
        ret = []

        # Strip the ANSI color codes prowler emits, then pull the group id
        # out of the first [bracketed] token on each line.
        new_lines = [st.replace('[0;39m', '').replace('[0;34m', '').replace('[0;35m', '').replace('[0;36m', '') for st in raw_out]

        for n in new_lines:
            if len(n.strip()) > 0:
                start = n.index('[')
                stop = n.index(']')
                ret.append(n[start + 1: stop])

        return ret

    def handler(self, event, context):
        groups = ProwlerListGroups.__list_groups()
        processed = ProwlerListGroups.process_groups(groups)
        logger.info(f'groups found {groups}')
        for p in processed:
            sleep(0.1)  # Adds a slight delay to help with throttling API calls during fanout
            self.topic.publish(Message=p)


def handler(event, context):
    ProwlerListGroups(topic_arn=os.environ.get('topic_arn')).handler(event, context)
    return 'Done'

--------------------------------------------------------------------------------
/assets/containers/prowler_scan_check/Dockerfile:
--------------------------------------------------------------------------------
FROM toniblyx/prowler:latest as builder

FROM public.ecr.aws/lambda/python:3.8

COPY --from=builder /prowler /prowler
RUN ln -s /tmp /prowler/output
RUN chmod a+wr /prowler
COPY app.py ./

RUN yum update -y && \
    yum install -y python3 bash curl jq file coreutils which && \
    pip3 install --upgrade pip && \
    pip install awscli boto3 detect-secrets

CMD ["app.handler"]

--------------------------------------------------------------------------------
/assets/containers/prowler_scan_check/app.py:
--------------------------------------------------------------------------------
import os
import boto3
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)


class ProwlerScanGroup:
    def __init__(self, topic_arn):
        self.__topic = boto3.resource('sns').Topic(topic_arn)
        self.__region = os.environ['AWS_REGION']
        logger.debug(f'topic_arn={topic_arn}')
        logger.debug(f'region={self.__region}')

    def __get_check(self, check_id: str) -> str:
        # Run a single prowler check group and send the findings straight
        # to Security Hub (-S) as ASFF JSON.
        command = f"/prowler/prowler -r {self.__region} -f {self.__region} -c '{check_id}' -M 'json-asff' -S"
        logger.debug('Executing ' + command)
        stream = os.popen(command)
        raw_out = stream.read()
        return raw_out

    def handle(self, event, context):
        logger.debug(event)
        records = event['Records']
        for r in records:
            group = r['Sns']['Message']
            logger.debug(self.__get_check(group))


def handler(event, context):
    ProwlerScanGroup(topic_arn=os.environ['topic_arn']).handle(event, context)
    return 'Done: python'
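The two `enable_*` lambdas that follow back CloudFormation custom resources built with the CDK provider framework. For orientation, a representative event their handlers receive looks roughly like this (abridged; values illustrative):

```python
# Shape of a provider-framework event as the handlers below consume it.
event = {
    'RequestType': 'Create',  # or 'Update' / 'Delete'
    'ResourceProperties': {
        'product_arn': 'arn:aws:securityhub:us-east-1::product/prowler/prowler',
    },
}
```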

--------------------------------------------------------------------------------
/assets/lambdas/enable_import_prowler_findings/index.py:
--------------------------------------------------------------------------------
import boto3
import logging
import os

logger = logging.getLogger()
logger.setLevel(logging.INFO)


class EnableImportFindingsResource:
    def __init__(self):
        self.client = boto3.client('securityhub')
        self.region = os.environ.get('AWS_REGION')

    def on_create(self, event):
        logger.debug('Event: %s' % event)
        properties = event['ResourceProperties']
        product_arn = properties['product_arn']
        logger.info('Enabling Security Hub Integration: %s' % product_arn)
        try:
            self.client.enable_import_findings_for_product(
                ProductArn=product_arn
            )
        except self.client.exceptions.ResourceConflictException:
            logger.info('Product already enabled')

        return {'PhysicalResourceId': product_arn}

    def on_update(self, event):
        properties = event['ResourceProperties']
        product_arn = properties['product_arn']
        logger.warning(
            'Attempt to update custom resource. Product is either enabled or disabled. See Event for details %s' % event)
        return {'PhysicalResourceId': product_arn}

    def on_delete(self, event):
        logger.debug('Event: %s' % event)
        properties = event['ResourceProperties']
        product_arn = properties['product_arn']
        logger.info('Disabling Security Hub Integration: %s' % product_arn)
        self.client.disable_import_findings_for_product(
            ProductArn=product_arn
        )
        return {'PhysicalResourceId': product_arn}

    def handle(self, event, context):
        request_type = event['RequestType']
        if request_type == 'Create':
            return self.on_create(event)
        elif request_type == 'Update':
            return self.on_update(event)
        elif request_type == 'Delete':
            return self.on_delete(event)
        else:
            raise Exception("Invalid request type: %s" % request_type)


fn = EnableImportFindingsResource()


def handler(event, context):
    return fn.handle(event, context)

--------------------------------------------------------------------------------
/assets/lambdas/enable_security_hub_resource/index.py:
--------------------------------------------------------------------------------
import boto3
from botocore.config import Config
import logging
import os

logger = logging.getLogger()
logger.setLevel(logging.INFO)


class EnableSecurityHubResource:
    def __init__(self):
        self.region = os.environ.get('AWS_REGION')
        self.client = boto3.client('securityhub')

    def on_create(self, event):
        logger.info('Enabling Security Hub')
        logger.debug('Event: %s' % event)
        try:
            self.client.enable_security_hub(
                EnableDefaultStandards=True
            )
        except self.client.exceptions.ResourceConflictException:
            logger.info('Security Hub already enabled')

        return {'PhysicalResourceId': self.region}

    def on_update(self, event):
        logger.warning(
            'Attempt to update custom resource. Security Hub is either enabled or disabled. See Event for details %s' % event)
        return {'PhysicalResourceId': self.region}

    def on_delete(self, event):
        logger.info('Disabling Security Hub')
        logger.debug('Event: %s' % event)
        self.client.disable_security_hub()
        return {'PhysicalResourceId': self.region}

    def handle(self, event, context):
        request_type = event['RequestType']
        props = event['ResourceProperties']
        if 'region' in props and props['region'] is not None:
            self.region = props['region']
            self.client = boto3.client('securityhub', config=Config(
                region_name=self.region
            ))

        if request_type == 'Create':
            return self.on_create(event)
        elif request_type == 'Update':
            return self.on_update(event)
        elif request_type == 'Delete':
            return self.on_delete(event)
        else:
            raise Exception("Invalid request type: %s" % request_type)


fn = EnableSecurityHubResource()


def handler(event, context):
    return fn.handle(event, context)

--------------------------------------------------------------------------------
/assets/lambdas/transform_findings/index.py:
--------------------------------------------------------------------------------
import boto3
import json
import logging
import re
from os import environ
from flatten_json import flatten

logger = logging.getLogger()
logger.setLevel(logging.INFO)


class TransformFindings:
    def __init__(self, bucket_name, destination_prefix='AWSLogs'):
        self.__s3_resource = boto3.resource('s3')
        self.__s3_client = boto3.client('s3')

        self.__bucket_name = bucket_name
        self.__bucket = self.__s3_resource.Bucket(self.__bucket_name)
        self.__destination_prefix = destination_prefix

    def fix_dictionary(self, finding: dict):
        # Athena cannot easily query keys containing non-word characters
        # (e.g. 'aws/securityhub/ProductName'), so replace them with
        # underscores, recursing into nested dictionaries.
        ret = {}
        for key, value in finding.items():
            new_key = str(key)
            if isinstance(key, str):
                new_key = re.sub(r'\W', '_', key)

            if isinstance(value, dict):
                ret[new_key] = self.fix_dictionary(value)
            else:
                ret[new_key] = str(value)

        return ret

    def __process_record(self, object_key):
        output = {}
        response = self.__s3_client.get_object(Bucket=self.__bucket_name, Key=object_key)
        # Firehose concatenates JSON documents back to back with no delimiter,
        # so rewrite the '}{' seams and wrap the result in a JSON array.
        raw_findings = '[' + response['Body'].read().decode('utf-8').replace('}{', '},\n{') + ']'
        raw_list = json.loads(raw_findings)
        for item in raw_list:
            account_id = item['detail']['findings'][0]['AwsAccountId']
            sp = item['resources'][0].split('/')
            if 'product/aws/securityhub' in item['resources'][0]:
                product_node = sp[3]
                product = product_node.split(':')
                product_name = product[2]
                region = product[3]
            else:
                product_name = sp[2]
                region = sp[0].split(':')[3]

            key = account_id + '/' + product_name + '/' + region
            findings = item['detail']['findings']
            for f in findings:
                logger.info(f'raw_finding={f}')
                flatten_finding = flatten(f)
                fixed = self.fix_dictionary(flatten_finding)
                logger.info(f'fixed_finding={fixed}')

                if key not in output:
                    output[key] = [fixed]
                else:
                    output[key].append(fixed)

        return output

    def __persist_record(self, output: dict, partition: str, object_name):
        for key in output:
            s3_path = f'{self.__destination_prefix}/{key}/{partition}/{object_name}.json'
            body = ''
            for version in output[key]:
                body += json.dumps(version) + '\n'
            self.__bucket.put_object(Key=s3_path, Body=body)

    def handle(self, event, context):
        records = event['Records']
        for r in records:
            object_key = r['s3']['object']['key']
            output = self.__process_record(object_key)

            partition = '/'.join(object_key.split('/')[2:-2])
            object_name = object_key.split('/')[-1]

            self.__persist_record(output, partition, object_name)


def handler(event, context):
    logger.info(event)
    return TransformFindings(bucket_name=environ['bucket_name'],
                             destination_prefix=environ['destination_prefix']).handle(event, context)

--------------------------------------------------------------------------------
/assets/lambdas/transform_findings/requirements.txt:
--------------------------------------------------------------------------------
flatten_json

--------------------------------------------------------------------------------
/azure-pipelines.yml:
--------------------------------------------------------------------------------
# Python package
# Create and test a Python package on multiple Python versions.
# Add steps that analyze code, save the dist with the build record, publish to a PyPI-compatible index, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/python

trigger:
- main

pool:
  vmImage: ubuntu-latest
strategy:
  matrix:
    # The CDK libraries and this project's setup.py require Python >= 3.6.
    Python36:
      python.version: '3.6'
    Python37:
      python.version: '3.7'
    Python38:
      python.version: '3.8'

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(python.version)'
  displayName: 'Use Python $(python.version)'

- script: |
    python -m pip install --upgrade pip
    pip install -r requirements-dev.txt
  displayName: 'Install dependencies'

- script: |
    pip install pytest pytest-azurepipelines
    pytest
  displayName: 'pytest'

--------------------------------------------------------------------------------
/cdk.json:
--------------------------------------------------------------------------------
{
  "app": "python3 app.py",
  "context": {
    "@aws-cdk/aws-apigateway:usagePlanKeyOrderInsensitiveId": true,
    "@aws-cdk/core:enableStackNameDuplicates": "true",
    "aws-cdk:enableDiffNoFail": "true",
    "@aws-cdk/core:stackRelativeExports": "true",
    "@aws-cdk/aws-ecr-assets:dockerIgnoreSupport": true,
    "@aws-cdk/aws-secretsmanager:parseOwnedSecretName": true,
    "@aws-cdk/aws-kms:defaultKeyPolicies": true,
    "@aws-cdk/aws-s3:grantWriteWithoutAcl": true,
    "@aws-cdk/aws-ecs-patterns:removeDefaultDesiredCount": true,
    "@aws-cdk/aws-rds:lowercaseDbIdentifier": true,
    "@aws-cdk/aws-efs:defaultEncryptionAtRest": true
  }
}

--------------------------------------------------------------------------------
/custom_constructs/prowler_scanner.py:
--------------------------------------------------------------------------------
from os import path
from aws_cdk import (
    core as cdk,
    aws_events as events,
    aws_events_targets as events_targets,
    aws_iam as iam,
    aws_lambda as lmb,
    aws_sns as sns,
    aws_sns_subscriptions as sns_subscriptions,
    aws_sqs as sqs
)

MAXIMUM_LAMBDA_TIME = cdk.Duration.minutes(15)


class ProwlerScanner(cdk.Construct):
"""This is a Scanner""" 21 | 22 | def __init__(self, scope: cdk.Construct, identifier: str, 23 | schedule: events.Schedule = events.Schedule.rate(cdk.Duration.hours(1)) 24 | ): 25 | super().__init__(scope, identifier) 26 | self.__this_dir = path.dirname(__file__) 27 | 28 | fanout_topic = sns.Topic(self, 'ProwlerFanoutTopic') 29 | 30 | list_checks = self.__create_list_function(fanout_topic) 31 | 32 | events.Rule(self, 'ProwlerScannerSchedule', 33 | description='This Rule triggers Prowler to scan at the rate specified in the schedule', 34 | schedule=schedule, 35 | targets=[events_targets.LambdaFunction(handler=list_checks)]) 36 | 37 | scanner = self.__create_scanner_function(fanout_topic) 38 | 39 | queue = sqs.Queue(self, 'ProwlerDeadLetter') 40 | 41 | fanout_topic.add_subscription( 42 | subscription=sns_subscriptions.LambdaSubscription( 43 | fn=scanner, 44 | dead_letter_queue=queue 45 | ) 46 | ) 47 | 48 | 49 | 50 | @property 51 | def security_hub_product_arn(self): 52 | return f'arn:aws:securityhub:{cdk.Aws.REGION}::product/prowler/prowler' 53 | 54 | def __create_list_function(self, fanout_topic: sns.Topic): 55 | ret = lmb.DockerImageFunction(self, 'ProwlerListChecks', 56 | code=lmb.DockerImageCode.from_image_asset( 57 | path.join(self.__this_dir, '../assets/containers/prowler_list_check')), 58 | environment={ 59 | 'topic_arn': fanout_topic.topic_arn 60 | }, 61 | timeout=MAXIMUM_LAMBDA_TIME) 62 | 63 | fanout_topic.grant_publish(ret) 64 | return ret 65 | 66 | def __create_scanner_function(self, fanout_topic: sns.Topic): 67 | ret = lmb.DockerImageFunction(self, 'ProwlerScan', 68 | code=lmb.DockerImageCode.from_image_asset( 69 | path.join(self.__this_dir, '../assets/containers/prowler_scan_check') 70 | ), 71 | environment={ 72 | 'topic_arn': fanout_topic.topic_arn 73 | }, 74 | timeout=MAXIMUM_LAMBDA_TIME) 75 | 76 | fanout_topic.grant_publish(ret) 77 | 78 | ret.add_to_role_policy(iam.PolicyStatement( 79 | actions=[ 80 | 'securityhub:BatchImportFindings', 81 | 'securityhub:GetFindings', 82 | 'dax:ListTables', 83 | 'ds:ListAuthorizedApplications', 84 | 'ds:DescribeRoles', 85 | 'ec2:GetEbsEncryptionByDefault', 86 | 'ecr:Describe*', 87 | 'support:Describe*', 88 | 'tag:GetTagKeys', 89 | 'glue:Get*', 90 | 'glue:SearchTables', 91 | 'glue:BatchGetDevEndpoints', 92 | ], 93 | resources=['*'], 94 | effect=iam.Effect.ALLOW 95 | )) 96 | 97 | ret.role.add_managed_policy( 98 | iam.ManagedPolicy.from_managed_policy_arn(self, 'ViewOnlyPolicy', 99 | managed_policy_arn='arn:aws:iam::aws:policy/job-function/ViewOnlyAccess' 100 | ) 101 | ) 102 | 103 | ret.role.add_managed_policy( 104 | iam.ManagedPolicy.from_managed_policy_arn(self, 'SecurityAuditPolicy', 105 | managed_policy_arn='arn:aws:iam::aws:policy/SecurityAudit' 106 | ) 107 | ) 108 | return ret 109 | -------------------------------------------------------------------------------- /custom_constructs/security_hub.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | from aws_cdk import ( 3 | core as cdk, 4 | aws_events as events, 5 | aws_events_targets as events_targets, 6 | aws_iam as iam, 7 | aws_lambda as lmb, 8 | aws_logs as logs, 9 | aws_s3 as s3, 10 | aws_kinesisfirehose as kinesisfirehose, 11 | aws_securityhub as securityhub, 12 | custom_resources 13 | ) 14 | from typing import Optional, List 15 | from os import path 16 | 17 | 18 | class SecurityHub(cdk.Construct): 19 | """Security Hub Contruct designed to act like an L2 CDK Construct""" 20 | 21 | def __init__(self, scope: cdk.Construct, identifier: 
        super().__init__(scope, identifier)

        self.this_dir = path.dirname(__file__)
        enable_disable_function = lmb.Function(self, 'EnableSHFunction',
                                               code=lmb.Code.from_asset(path.join(self.this_dir,
                                                                                  '../assets/lambdas/enable_security_hub_resource')),
                                               handler='index.handler',
                                               runtime=lmb.Runtime.PYTHON_3_8)

        enable_disable_function.add_to_role_policy(iam.PolicyStatement(
            effect=iam.Effect.ALLOW,
            actions=[
                'securityhub:EnableSecurityHub',
                'securityhub:DisableSecurityHub'
            ],
            resources=['*']
        ))

        enable_provider = custom_resources.Provider(self, 'EnableSHProvider',
                                                    on_event_handler=enable_disable_function,
                                                    log_retention=logs.RetentionDays.ONE_DAY)
        cdk.CustomResource(self, 'EnableSH',
                           service_token=enable_provider.service_token,
                           removal_policy=cdk.RemovalPolicy.RETAIN)
        self.__enabled = True

    @property
    def is_enabled(self):
        return self.__enabled

    def stream_raw_findings_to_s3(self,
                                  bucket_name: str,
                                  bucket_arn: str,
                                  bucket_region=None,
                                  raw_prefix='raw/firehose'):
        if bucket_region is None:
            bucket_region = cdk.Aws.REGION

        target_bucket = s3.Bucket.from_bucket_attributes(self, 'TargetBucket',
                                                         bucket_name=bucket_name,
                                                         bucket_arn=bucket_arn,
                                                         region=bucket_region
                                                         )

        role = iam.Role(self, 'DeliveryRole',
                        assumed_by=iam.ServicePrincipal('firehose.amazonaws.com'))

        target_bucket.grant_read_write(role)

        delivery_stream = kinesisfirehose.CfnDeliveryStream(self, 'SHDeliveryStream',
                                                            delivery_stream_type='DirectPut',
                                                            extended_s3_destination_configuration=kinesisfirehose.CfnDeliveryStream.ExtendedS3DestinationConfigurationProperty(
                                                                role_arn=role.role_arn,
                                                                bucket_arn=target_bucket.bucket_arn,
                                                                buffering_hints=kinesisfirehose.CfnDeliveryStream.BufferingHintsProperty(
                                                                    interval_in_seconds=900,
                                                                    size_in_m_bs=128
                                                                ),
                                                                compression_format='UNCOMPRESSED',
                                                                prefix=raw_prefix
                                                            ))

        stream_rule = events.Rule(self, 'StreamFromKinesisToS3',
                                  event_pattern=events.EventPattern(
                                      source=['aws.securityhub'],
                                      detail_type=['Security Hub Findings - Imported'],
                                  ))
        target = events_targets.KinesisFirehoseStream(
            stream=delivery_stream,
        )
        stream_rule.add_target(target)

    def enable_import_findings_for_product(self, product_arn):
        enable_disable_function = lmb.Function(self, 'EnableSHImportFunction',
                                               code=lmb.Code.from_asset(path.join(self.this_dir,
                                                                                  '../assets/lambdas/enable_import_prowler_findings')),
                                               handler='index.handler',
                                               runtime=lmb.Runtime.PYTHON_3_8)

        enable_disable_function.add_to_role_policy(iam.PolicyStatement(
            effect=iam.Effect.ALLOW,
            actions=['securityhub:EnableImportFindingsForProduct'],
            resources=['*']
        ))

        enable_provider = custom_resources.Provider(self, 'EnableSHImportProvider',
                                                    on_event_handler=enable_disable_function,
                                                    log_retention=logs.RetentionDays.ONE_DAY)

        cdk.CustomResource(self, 'EnableSHImport',
                           service_token=enable_provider.service_token,
                           properties={
                               'product_arn': product_arn
                           },
                           removal_policy=cdk.RemovalPolicy.RETAIN)

    # FIXME: If it is already created, then we need to check the region and possibly delete it.
    def enable_aggregation(self, regions: Optional[List[str]] = None):
        if regions is not None and len(regions) > 0:
            on_create = custom_resources.AwsSdkCall(
                action='createFindingAggregator',
                service='SecurityHub',
                parameters={
                    'RegionLinkingMode': 'SPECIFIED_REGIONS',
                    'Regions': regions
                },
                physical_resource_id=custom_resources.PhysicalResourceId.of(cdk.Aws.REGION)
            )
        else:
            on_create = custom_resources.AwsSdkCall(
                action='createFindingAggregator',
                service='SecurityHub',
                parameters={
                    'RegionLinkingMode': 'ALL_REGIONS',
                },
                physical_resource_id=custom_resources.PhysicalResourceId.of(cdk.Aws.REGION)
            )
        custom_resources.AwsCustomResource(self, 'SecurityHubAggregation',
                                           on_create=on_create,
                                           policy=custom_resources.AwsCustomResourcePolicy.from_sdk_calls(
                                               resources=custom_resources.AwsCustomResourcePolicy.ANY_RESOURCE
                                           ))
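The `enable_aggregation` method above drives the `CreateFindingAggregator` API through an `AwsCustomResource`. For experimenting outside CloudFormation, a rough boto3 equivalent (run in the intended home region, with Security Hub already enabled there) looks like this sketch:

```python
import boto3

securityhub = boto3.client('securityhub', region_name='us-east-1')

# Link every region's findings into this (home) region.
response = securityhub.create_finding_aggregator(RegionLinkingMode='ALL_REGIONS')
print(response['FindingAggregatorArn'])
```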

--------------------------------------------------------------------------------
/custom_constructs/ssm_stored_parameter.py:
--------------------------------------------------------------------------------
from aws_cdk import (
    core as cdk,
    custom_resources
)

from datetime import datetime


class SSMStoredParameter(custom_resources.AwsCustomResource):
    def __init__(self, scope: cdk.Construct, construct_id: str, parameter_name: str, region: str):
        super().__init__(scope, construct_id,
                         on_update=custom_resources.AwsSdkCall(
                             service='SSM',
                             action='getParameter',
                             parameters={
                                 'Name': parameter_name
                             },
                             region=region,
                             physical_resource_id=custom_resources.PhysicalResourceId.of(
                                 str(datetime.now().timestamp()))
                         ),
                         policy=custom_resources.AwsCustomResourcePolicy.from_sdk_calls(
                             resources=custom_resources.AwsCustomResourcePolicy.ANY_RESOURCE
                         )
                         )

    @property
    def value_as_a_string(self) -> str:
        return self.get_response_field_reference('Parameter.Value').to_string()

--------------------------------------------------------------------------------
/docs/Security_Hub_Analysis.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-security-hub-analytic-pipeline/8aacb34f2240bab5e8d2b6d45b8afd9594fc59b5/docs/Security_Hub_Analysis.png

--------------------------------------------------------------------------------
/requirements-dev.txt:
--------------------------------------------------------------------------------
-r requirements.txt
boto3
pytest
moto[all]
flatten_json

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
aws_cdk.core
aws_cdk.aws_events
aws_cdk.aws_events_targets
aws_cdk.aws_glue
aws_cdk.aws_iam
aws_cdk.aws_kinesisfirehose
aws_cdk.aws_lambda
aws_cdk.aws_lambda_python
aws_cdk.aws_logs
aws_cdk.aws_s3
aws_cdk.aws_s3_notifications
aws_cdk.aws_securityhub
aws_cdk.aws_sns
aws_cdk.aws_sns_subscriptions
aws_cdk.aws_sqs
aws_cdk.aws_ssm
aws_cdk.custom_resources
boto3

--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
import setuptools


with open("README.md") as fp:
    long_description = fp.read()


setuptools.setup(
    name="artifact_code",
    version="0.0.1",

    description="CDK app that builds an analytic pipeline for AWS Security Hub findings",
    long_description=long_description,
    long_description_content_type="text/markdown",

    author="author",

    package_dir={"": "artifact_code"},
    packages=setuptools.find_packages(where="artifact_code"),

    install_requires=[
        "aws-cdk.core==1.98.0",
    ],

    python_requires=">=3.6",

    classifiers=[
        "Development Status :: 4 - Beta",

        "Intended Audience :: Developers",

        "License :: OSI Approved :: MIT No Attribution License (MIT-0)",

        "Programming Language :: JavaScript",
        "Programming Language :: Python :: 3 :: Only",
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
        "Programming Language :: Python :: 3.8",

        "Topic :: Software Development :: Code Generators",
        "Topic :: Utilities",

        "Typing :: Typed",
    ],
)

--------------------------------------------------------------------------------
/source.bat:
--------------------------------------------------------------------------------
@echo off

rem The sole purpose of this script is to make the command
rem
rem     source .venv/bin/activate
rem
rem (which activates a Python virtualenv on Linux or Mac OS X) work on Windows.
rem On Windows, this command just runs this batch file (the argument is ignored).
rem
rem Now we don't need to document a Windows command for activating a virtualenv.

echo Executing .venv\Scripts\activate.bat for you
.venv\Scripts\activate.bat

--------------------------------------------------------------------------------
/stacks/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/aws-security-hub-analytic-pipeline/8aacb34f2240bab5e8d2b6d45b8afd9594fc59b5/stacks/__init__.py

--------------------------------------------------------------------------------
/stacks/analytic_sink_stack.py:
--------------------------------------------------------------------------------
from os import path
from aws_cdk import (
    core as cdk,
    aws_iam as iam,
    aws_glue as glue,
    aws_lambda as lmb,
    aws_lambda_python as lambda_python,
    aws_ssm as ssm,
    aws_s3 as s3,
    aws_s3_notifications as s3_notifications
)


class AnalyticSinkStack(cdk.Stack):
    def __init__(self, scope: cdk.Construct, construct_id: str, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        destination_prefix = 'Findings'
        this_dir = path.dirname(__file__)

        self.__bucket = s3.Bucket(self, 'Bucket',
                                  auto_delete_objects=True,
                                  removal_policy=cdk.RemovalPolicy.DESTROY,
                                  encryption=s3.BucketEncryption.S3_MANAGED
                                  )

        self.__query_result_bucket = s3.Bucket(self, 'QueryResultBucket',
                                               removal_policy=cdk.RemovalPolicy.RETAIN,
                                               encryption=s3.BucketEncryption.S3_MANAGED)

        # Make the bucket accessible to stacks in other regions
        ssm.StringParameter(self, 'BucketParameter',
                            parameter_name='/AnalyticSinkStack/BucketName',
                            string_value=self.__bucket.bucket_name)
        ssm.StringParameter(self, 'BucketArn',
                            parameter_name='/AnalyticSinkStack/BucketArn',
                            string_value=self.__bucket.bucket_arn)

        # Transforms findings so that keys are consumable by Athena
        transform_findings = lambda_python.PythonFunction(self, 'TransformFindings',
                                                          entry=path.join(this_dir,
                                                                          '../assets/lambdas/transform_findings'),
                                                          handler='handler',
                                                          runtime=lmb.Runtime.PYTHON_3_8,
                                                          environment={
                                                              'bucket_name': self.__bucket.bucket_name,
                                                              'destination_prefix': destination_prefix
                                                          })

        self.__bucket.grant_read_write(transform_findings)

        self.__bucket.add_object_created_notification(s3_notifications.LambdaDestination(transform_findings))

        role = iam.Role(self, 'CrawlerRole',
                        assumed_by=iam.ServicePrincipal('glue.amazonaws.com'))

        self.__bucket.grant_read(role)
        role.add_to_policy(iam.PolicyStatement(
            effect=iam.Effect.ALLOW,
            actions=[
                's3:GetBucketLocation',
                's3:ListBucket',
                's3:ListAllMyBuckets',
                's3:GetBucketAcl'
            ],
            resources=[f'{self.__bucket.bucket_arn}*']
        ))
        # Glue permissions
        role.add_to_policy(iam.PolicyStatement(
            effect=iam.Effect.ALLOW,
            actions=[
                'glue:*',
                'iam:ListRolePolicies',
                'iam:GetRole',
                'iam:GetRolePolicy'
            ],
            resources=['*']
        ))
        role.add_to_policy(iam.PolicyStatement(
            effect=iam.Effect.ALLOW,
            actions=[
                's3:GetObject'
            ],
            resources=[
                'arn:aws:s3:::crawler-public*',
                'arn:aws:s3:::aws-glue-*'
            ]
        ))
        role.add_to_policy(iam.PolicyStatement(
            effect=iam.Effect.ALLOW,
            actions=[
                'logs:CreateLogGroup',
                'logs:CreateLogStream',
                'logs:PutLogEvents'
            ],
            resources=['arn:aws:logs:*:*:/aws-glue/*']
        ))

        database = glue.Database(self, 'SecurityHubDatabase',
                                 database_name='security_hub_database')

        glue.CfnCrawler(self, 'SecurityHubCrawler',
                        role=role.role_arn,
                        database_name=database.database_name,
                        schedule=glue.CfnCrawler.ScheduleProperty(
                            schedule_expression='cron(0 0/1 * * ? *)'  # crawl hourly
                        ),
                        targets=glue.CfnCrawler.TargetsProperty(
                            s3_targets=[glue.CfnCrawler.S3TargetProperty(
                                path=f's3://{self.__bucket.bucket_name}/{destination_prefix}'
                            )]
                        ),
                        table_prefix='security-hub-crawled-',
                        name='SecurityHubCrawler')

    @property
    def target_bucket(self) -> s3.Bucket:
        return self.__bucket

    @property
    def query_result_bucket(self) -> s3.Bucket:
        return self.__query_result_bucket
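A note on the cross-region handshake used by the stacks below: `AnalyticSinkStack` above publishes the sink bucket's name and ARN as SSM parameters in its own region, and the aggregation stack reads them back from another region through the `SSMStoredParameter` construct. Stripped of the CDK custom-resource plumbing, that lookup amounts to this boto3 call:

```python
import boto3

# Read the bucket name that AnalyticSinkStack published in us-east-1.
ssm = boto3.client('ssm', region_name='us-east-1')
bucket_name = ssm.get_parameter(Name='/AnalyticSinkStack/BucketName')['Parameter']['Value']
print(bucket_name)
```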

--------------------------------------------------------------------------------
/stacks/security_hub_aggegation_stack.py:
--------------------------------------------------------------------------------
from aws_cdk import (
    core as cdk
)

from custom_constructs.security_hub import SecurityHub
from custom_constructs.prowler_scanner import ProwlerScanner
from custom_constructs.ssm_stored_parameter import SSMStoredParameter


class SecurityHubAggregationStack(cdk.Stack):
    def __init__(self, scope: cdk.Construct, construct_id: str, sink_region: str, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        security_hub = SecurityHub(self, 'SecurityHub')
        scanner = ProwlerScanner(self, 'ExampleScanner')

        sink_bucket_name = SSMStoredParameter(self, 'BucketName',
                                              parameter_name='/AnalyticSinkStack/BucketName',
                                              region=sink_region).value_as_a_string
        sink_bucket_arn = SSMStoredParameter(self, 'BucketArn',
                                             parameter_name='/AnalyticSinkStack/BucketArn',
                                             region=sink_region).value_as_a_string

        security_hub.enable_import_findings_for_product(scanner.security_hub_product_arn)
        security_hub.stream_raw_findings_to_s3(
            bucket_name=sink_bucket_name,
            bucket_arn=sink_bucket_arn,
            bucket_region=sink_region
        )
        security_hub.enable_aggregation()

--------------------------------------------------------------------------------
/stacks/security_hub_collection_stack.py:
--------------------------------------------------------------------------------
from aws_cdk import (
    core as cdk
)

from custom_constructs.security_hub import SecurityHub
from custom_constructs.prowler_scanner import ProwlerScanner


class SecurityHubCollectionStack(cdk.Stack):
    def __init__(self, scope: cdk.Construct, construct_id: str, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        security_hub = SecurityHub(self, 'SecurityHub')
        scanner = ProwlerScanner(self, 'ExampleScanner')
        security_hub.enable_import_findings_for_product(scanner.security_hub_product_arn)

--------------------------------------------------------------------------------
/test.py:
--------------------------------------------------------------------------------
from time import sleep

print('Test')
#sleep(0.1)
print('Test')

--------------------------------------------------------------------------------
/tests/transform_finding_test.py:
--------------------------------------------------------------------------------
from assets.lambdas.transform_findings.index import TransformFindings
import boto3
from moto import mock_s3


def __make_bucket(bucket_name: str):
    bucket = boto3.resource('s3').Bucket(bucket_name)
    bucket.create()
    return bucket


@mock_s3
def test_fix_dictionary():
    bucket = __make_bucket('tester')
    transform_findings = TransformFindings(bucket.name)

    finding = {
        'first/level/test': 'test',
        'ProductArn': 'arn:aws:securityhub:us-east-1::product/aws/securityhub',
        'Types': ['Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark'],
        'Description': 'Security groups provide stateful filtering of ingress/egress network traffic to AWS resources. It is recommended that no security group allows unrestricted ingress access to port 22.',
        'SchemaVersion': '2018-10-08',
        'Compliance': {'Status': 'PASSED'},
        'GeneratorId': 'arn:aws:securityhub:::ruleset/cis-aws-foundations-benchmark/v/1.2.0/rule/4.1',
        'FirstObservedAt': '2021-01-31T04:52:30.123Z',
        'CreatedAt': '2021-01-31T04:52:30.123Z',
        'RecordState': 'ACTIVE',
        'Title': '4.1 Ensure no security groups allow ingress from 0.0.0.0/0 to port 22',
        'Workflow': {'Status': 'RESOLVED'},
        'LastObservedAt': '2021-05-07T11:05:27.353Z',
        'Severity': {'Normalized': 0, 'Label': 'INFORMATIONAL', 'Product': 0, 'Original': 'INFORMATIONAL'},
        'UpdatedAt': '2021-05-07T11:05:25.775Z',
        'FindingProviderFields': {
            'Types': [
                'Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark'],
            'Severity': {'Normalized': 0, 'Label': 'INFORMATIONAL', 'Product': 0, 'Original': 'INFORMATIONAL'}
        },
        'WorkflowState': 'NEW',
        'ProductFields': {
            'StandardsGuideArn': 'arn:aws:securityhub:::ruleset/cis-aws-foundations-benchmark/v/1.2.0',
            'StandardsGuideSubscriptionArn': 'arn:aws:securityhub:us-east-1:0123456789:subscription/cis-aws-foundations-benchmark/v/1.2.0',
            'RuleId': '4.1',
            'RecommendationUrl': 'https://docs.aws.amazon.com/console/securityhub/standards-cis-4.1/remediation',
            'RelatedAWSResources:0/name': 'securityhub-restricted-ssh-38a80c22',
            'RelatedAWSResources:0/type': 'AWS::Config::ConfigRule',
            'StandardsControlArn': 'arn:aws:securityhub:us-east-1:0123456789:control/cis-aws-foundations-benchmark/v/1.2.0/4.1',
            'aws/securityhub/ProductName': 'Security Hub',
            'aws/securityhub/CompanyName': 'AWS',
            'aws/securityhub/FindingId': 'arn:aws:securityhub:us-east-1::product/aws/securityhub/arn:aws:securityhub:us-east-1:0123456789:subscription/cis-aws-foundations-benchmark/v/1.2.0/4.1/finding/2a55570b-74e9-4aa3-9f4e-66f515c7ff03'
        },
        'AwsAccountId': '0123456789',
        'Id': 'arn:aws:securityhub:us-east-1:0123456789:subscription/cis-aws-foundations-benchmark/v/1.2.0/4.1/finding/2a55570b-74e9-4aa3-9f4e-66f515c7ff03',
        'Remediation': {
            'Recommendation': {
                'Text': 'For directions on how to fix this issue, please consult the AWS Security Hub CIS documentation.',
                'Url': 'https://docs.aws.amazon.com/console/securityhub/standards-cis-4.1/remediation'}
        },
        'Resources': [{
            'Partition': 'aws',
            'Type': 'AwsEc2SecurityGroup',
            'Details': {
                'AwsEc2SecurityGroup': {
                    'GroupName': 'default',
                    'OwnerId': '0123456789',
                    'VpcId': 'vpc-0123456789',
                    'IpPermissions': [{'IpProtocol': '-1', 'UserIdGroupPairs': [
                        {'UserId': '0123456789', 'GroupId': 'sg-0123456789'}]}],
                    'IpPermissionsEgress': [{'IpProtocol': '-1', 'IpRanges': [{'CidrIp': '0.0.0.0/0'}]}],
                    'GroupId': 'sg-0123456789'}
            },
            'Region': 'us-east-1', 'Id': 'arn:aws:ec2:us-east-1:0123456789:security-group/sg-0123456789'
        }]
    }
    result = transform_findings.fix_dictionary(finding)

    assert isinstance(result, dict)
    assert 'first/level/test' not in result
    assert 'first_level_test' in result
    assert 'ProductFields' in result
    assert 'aws/securityhub/ProductName' not in result['ProductFields']
    assert 'aws_securityhub_ProductName' in result['ProductFields']
    assert 'aws/securityhub/CompanyName' not in result['ProductFields']
    assert 'aws_securityhub_CompanyName' in result['ProductFields']
    assert 'aws/securityhub/FindingId' not in result['ProductFields']
    assert 'aws_securityhub_FindingId' in result['ProductFields']
    assert 'RelatedAWSResources:0/name' not in result['ProductFields']
    assert 'RelatedAWSResources_0_name' in result['ProductFields']

--------------------------------------------------------------------------------
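As a closing illustration of the behavior the test above pins down, here is a self-contained sketch of the key-sanitization rule (`sanitize_keys` is a hypothetical stand-in for `TransformFindings.fix_dictionary` without the class plumbing):

```python
import re

def sanitize_keys(finding: dict) -> dict:
    # Replace every non-word character in string keys with '_',
    # recursing into nested dictionaries; leaf values become strings.
    out = {}
    for key, value in finding.items():
        new_key = re.sub(r'\W', '_', key) if isinstance(key, str) else str(key)
        out[new_key] = sanitize_keys(value) if isinstance(value, dict) else str(value)
    return out

print(sanitize_keys({'aws/securityhub/ProductName': 'Security Hub'}))
# {'aws_securityhub_ProductName': 'Security Hub'}
```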