├── .gitignore ├── .readthedocs.yaml ├── LICENSE ├── README.md ├── docs ├── Makefile ├── make.bat └── source │ ├── conf.py │ ├── images │ └── Invictus-Incident-Response.jpg │ ├── index.rst │ ├── installation │ └── start.rst │ └── use │ ├── example.rst │ ├── usage.rst │ └── work.rst ├── invictus-aws.py ├── requirements.txt └── source ├── files ├── example.ddl ├── policy.json └── queries.yaml ├── main ├── analysis.py ├── configuration.py ├── enumeration.py ├── ir.py └── logs.py └── utils ├── enum.py ├── strings.py └── utils.py /.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__/ 2 | .vscode 3 | results/ 4 | build/ 5 | -------------------------------------------------------------------------------- /.readthedocs.yaml: -------------------------------------------------------------------------------- 1 | # .readthedocs.yaml 2 | # Read the Docs configuration file 3 | # See https://docs.readthedocs.io/en/stable/config-file/v2.html for details 4 | 5 | # Required 6 | version: 2 7 | 8 | # Set the OS, Python version and other tools you might need 9 | build: 10 | os: ubuntu-22.04 11 | tools: 12 | python: "3.12" 13 | # You can also specify other tool versions: 14 | # nodejs: "19" 15 | # rust: "1.64" 16 | # golang: "1.19" 17 | 18 | # Build documentation in the "docs/" directory with Sphinx 19 | sphinx: 20 | configuration: docs/source/conf.py 21 | 22 | # Optionally build your docs in additional formats such as PDF and ePub 23 | # formats: 24 | # - pdf 25 | # - epub 26 | 27 | # Optional but recommended, declare the Python requirements required 28 | # to build your documentation 29 | # See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html 30 | python: 31 | install: 32 | - requirements: requirements.txt 33 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2022 Invictus Incident Response 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Invictus-AWS 2 | 3 | ![alt text](https://github.com/invictus-ir/Microsoft-Extractor-Suite/blob/main/docs/source/Images/Invictus-Incident-Response.jpg?raw=true) 4 | 5 | ## Getting started with Invictus-AWS 6 | To get started with Invictus-AWS, check out the [Invictus-AWS docs.](https://invictus-aws.readthedocs.io/en/latest/) 7 | 8 | ## Introduction 9 | Invictus-AWS is a python script that will help automatically enumerate and acquire relevant data from an AWS environment. 10 | The tool doesn't require any installation it can be run as a standalone script with minimal configuration required. 11 | The goal for Invictus-AWS is to allow incident responders or other security personnel to quickly get an insight into an AWS environment to answer the following questions: 12 | - What services are running in an AWS environment. 13 | - For each of the services what are the configuration details. 14 | - What logging is available for each of the services that might be relevant in an incident response scenario. 15 | - Is there any threat that I can find easily with the CloudTrail logs. 16 | 17 | Want to know more about this project? 18 | We did a talk at FIRST Amsterdam 2022 and the slides are available here: 19 | https://github.com/invictus-ir/talks/blob/main/FIRST_2022_TC_AMS_Presentation.pdf 20 | 21 | 22 | ## Get started 23 | 24 | To run the script you will have to use the AWS CLI. 25 | 26 | - Install the AWS CLI package, you can simply follow the instructions here (https://aws.amazon.com/cli/) 27 | - Install Python3 on your local system 28 | - Install the requirements with `$pip3 install -r requirements.txt` 29 | - An account with permissions to access the AWS environment you want to acquire data from 30 | - Configure AWS account with `$aws configure` 31 | 32 | Note: This requires the AWS Access Key ID for the account you use to run the script. 33 | 34 | The user running the script must have these 2 policies in order to have the necessary permissions : 35 | * The AWS managed - job function policy `ReadOnlyAccess` 36 | * The policy that you can find in `source/files/policy.json` 37 | 38 | ## How it works 39 | 40 | The tool is divided into 4 different steps : 41 | 1. The first step performs enumeration of activated AWS services and its details. 42 | 2. The second step retrieves configuration details about the activated services. 43 | 3. The third step extracts available logs for the activated services. 44 | 4. The fourth and last step analyze CloudTrail logs, and only CloudTrail logs, by running Athena queries against it. The queries are written in the file `source/files/queries/yaml`. There are already some queries, but you can remove or add your own. If you add you own queries, be careful to respect this style : `name-of-your-query: ... FROM DATABASE.TABLE ...` , don't specify the database and table. 45 | The logs used by this step can be CloudTrail logs extracted by step 3 or your own CloudTrail logs. But there are some requirements about what the logs look like. They need to be stored in a S3 bucket in the default format (one JSON file, with a single line containing the event). 46 | 47 | Each step can be run independently. There is no need to have completed step 1 to proceed with step 2. 48 | 49 | ## Usage 50 | 51 | The script runs with a few parameters : 52 | * `-h` to print out the help menu. 
53 | * `-p profile` or `--profile profile`. Specify your aws profile. Default is `default`. 54 | * `-w cloud` or `-w local`. 'cloud' option if you want the results to be stored in a S3 bucket (automatically created). 'local' option if you want the results to be written to local storage. The default option is 'cloud'. So if you want to use 'cloud' option, you can either write nothing, write only `-w` or write `-w cloud`. 55 | * `-r region` or `-A [region]`. Use the first option if you want the tool to analyze only the specified region. Use the second option if you want the tool to analyze all regions. You can also specify a region if you want to start with that one. 56 | * `-s [step,step]`. Provide a comma-separated list of the steps to be executed. 1 = Enumeration. 2 = Configuration. 3 = Logs Extraction. 4 = Logs Analysis. The default option is 1,2,3 as **step 4 has to be executed alone**. So if you want to run the three first steps, you can either write nothing, write only `-s` or write `-s 1,2,3`. If you want to run step 4, then write `-s 4`. 57 | * `-start YYYY-MM-DD`. Start date for the Cloudtrail logs collection. It is recommended to use it every time step 3 is executed as it will be extremely long to collect each logs. It has to be used with `-end` and must only be used with step 3. 58 | * `-end YYYY-MM-DD`. End date for the Cloudtrail logs collection. It is recommended to use it every time step 3 is executed as it will be extremely long to collect each logs. It has to be used with `-start` and must only be used with step 3. 59 | > **_NOTE:_** The next parameters only apply if you run step 4. You have to collect the logs with step 3 on another execution or by your own means. 60 | 61 | * `-b bucket`. Bucket containing the CloudTrail logs. Format is `bucket/subfolders/`. 62 | * `-o bucket`. Bucket where the results of the queries will be stored. Must look like `bucket/[subfolders]/`. 63 | * `-c catalog`. Catalog used by Athena. 64 | * `-d database`. Database used by Athena. You can either input an existing database or a new one that will be created. 65 | * `-t table`. Table used by Athena. You can either input an existing table, input a new one (that will have the same structure as the default one) or input a .ddl file giving details about your new table. An example.ddl is available for you, just add the structure, modify the name of the table and the location of your logs. 66 | * `-f file.yaml`. Your own file containing your queries for the analysis. If you don't want to use or modify the default file, you can use your own by specifying it with this option. The file has to already exist. 67 | * `-x timeframe`. Used by the queries to filter their results. The query part with the timeframe will automatically be added at the end of your queries if you specify a timeframe. You don't have to add it yourself to your queries. 
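If you want to sanity-check your AWS setup before launching the script, the short boto3 snippet below (boto3 is already listed in `requirements.txt`) verifies that the profile you would pass with `-p` resolves to valid credentials and that the region you would pass with `-r` or `-A` is enabled for the account. This is an illustrative pre-flight check, not part of invictus-aws.py, and the profile and region values are just examples.

```python
import boto3

PROFILE = "default"    # example: the profile configured with `aws configure`
REGION = "eu-west-3"   # example: the region you plan to pass with -r

session = boto3.session.Session(profile_name=PROFILE)
print("Available profiles:", session.available_profiles)

# Fails with an error if the credentials are missing, expired or invalid.
identity = session.client("sts").get_caller_identity()
print("Running as:", identity["Arn"])

# Same call the tool itself uses to list the enabled regions of the account.
regions = session.client("account").list_regions(
    RegionOptStatusContains=["ENABLED", "ENABLED_BY_DEFAULT"]
)["Regions"]
print("Region enabled:", any(r["RegionName"] == REGION for r in regions))
```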
68 | 
69 | Default Usage : `python3 invictus-aws.py` will get you into a walkthrough mode.
70 | Power User Usage : `$python3 invictus-aws.py [-h] [-p PROFILE] -w [{cloud,local}] (-r AWS_REGION | -A [ALL_REGIONS]) -s [STEP] [-start YYYY-MM-DD] [-end YYYY-MM-DD] [-b SOURCE_BUCKET] [-o OUTPUT_BUCKET] [-c CATALOG] [-d DATABASE] [-t TABLE] [-f QUERY_FILE] [-x TIMEFRAME]`
71 | 
72 | ### Examples
73 | 
74 | **Acquire data exclusively from the eu-west-3 region, excluding the Configuration step, and store the output locally.** :
75 | `$python3 invictus-aws.py -r eu-west-3 -s 1,3 -w local`
76 | *Note that the CloudTrail logs, if any exist, will be written both locally and to an S3 bucket, as the analysis step needs the logs to be in a bucket.*
77 | 
78 | **Acquire data from all regions, beginning with eu-west-3, with all the default steps (1,2,3) and with results written to an S3 bucket.** :
79 | `$python3 invictus-aws.py -A eu-west-3`
80 | 
81 | **Analyze CloudTrail logs using the tool's default database and table.** :
82 | `$python3 invictus-aws.py -r eu-west-3 -s 4 -b bucket/path-to-the-existing-logs/ -o bucket/path-to-existing-folder-to-store-the-results/`
83 | *In this example, the -b option is needed the first time, as the default database and table will be created. After that you don't need it anymore, as the table is already initialized.
84 | But don't forget that if you change your logs source and still want to use the default table, you need to delete the table first.*
85 | 
86 | **Analyze CloudTrail logs using the tool's default database and table, filter the results to the last 7 days and write the results locally.** :
87 | `$python3 invictus-aws.py -r eu-west-3 -w local -s 4 -x 7`
88 | *In this example, the -b option is omitted as explained above. The -o option is also omitted because no output bucket is needed: the results are written locally.*
89 | 
90 | **Analyze CloudTrail logs using a new database and/or table (with the same structure as the default one).** :
91 | `$python3 invictus-aws.py -r eu-west-3 -w -s 4 -b bucket/path-to-the-existing-logs/ -o bucket/path-to-existing-folder-to-store-the-results/ -c your-catalog -d your-database -t your-table`
92 | *In this example, the -b option is needed the first time, as the database and table will be created. After that you don't need it anymore, as the table is already initialized.
93 | But don't forget that if you change your logs source and still want to use the default table, you need to delete the table first.*
94 | 
95 | **Analyze CloudTrail logs using your existing database and table, with your own query file.** :
96 | `$python3 invictus-aws.py -r eu-west-3 -s 4 -c your-catalog -d your-database -t your-table -f path-to-existing-query-file`
97 | 
98 | **Analyze CloudTrail logs using a new table with your own structure.** :
99 | `$python3 invictus-aws.py -r eu-west-3 -s 4 -b bucket/path-to-the-existing-logs/ -o bucket/path-to-existing-folder-where-to-put-the-results/ -c your-catalog -d your-database -t your-creation-table-file.ddl`
100 | *You can find an example .ddl file in `source/files`. Just replace the table name with the one you want to create, set the location to the location of your CloudTrail logs and add the structure of your table.
The default table used by the tool is explained here : https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html .* 101 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Minimal makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line, and also 5 | # from the environment for the first two. 6 | SPHINXOPTS ?= 7 | SPHINXBUILD ?= sphinx-build 8 | SOURCEDIR = source 9 | BUILDDIR = build 10 | 11 | # Put it first so that "make" without argument is like "make help". 12 | help: 13 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 14 | 15 | .PHONY: help Makefile 16 | 17 | # Catch-all target: route all unknown targets to Sphinx using the new 18 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 19 | %: Makefile 20 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 21 | -------------------------------------------------------------------------------- /docs/make.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | 3 | pushd %~dp0 4 | 5 | REM Command file for Sphinx documentation 6 | 7 | if "%SPHINXBUILD%" == "" ( 8 | set SPHINXBUILD=sphinx-build 9 | ) 10 | set SOURCEDIR=source 11 | set BUILDDIR=build 12 | 13 | %SPHINXBUILD% >NUL 2>NUL 14 | if errorlevel 9009 ( 15 | echo. 16 | echo.The 'sphinx-build' command was not found. Make sure you have Sphinx 17 | echo.installed, then set the SPHINXBUILD environment variable to point 18 | echo.to the full path of the 'sphinx-build' executable. Alternatively you 19 | echo.may add the Sphinx directory to PATH. 20 | echo. 21 | echo.If you don't have Sphinx installed, grab it from 22 | echo.https://www.sphinx-doc.org/ 23 | exit /b 1 24 | ) 25 | 26 | if "%1" == "" goto help 27 | 28 | %SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% 29 | goto end 30 | 31 | :help 32 | %SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% 33 | 34 | :end 35 | popd 36 | -------------------------------------------------------------------------------- /docs/source/conf.py: -------------------------------------------------------------------------------- 1 | # Configuration file for the Sphinx documentation builder. 
2 | # 3 | # For the full list of built-in configuration values, see the documentation: 4 | # https://www.sphinx-doc.org/en/master/usage/configuration.html 5 | 6 | # -- Project information ----------------------------------------------------- 7 | # https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information 8 | 9 | import os 10 | import sys 11 | sys.path.insert(0, os.path.abspath('../..')) 12 | 13 | project = 'Invictus-AWS' 14 | copyright = '2023, Antonio Macovei, Rares Bratean & Benjamin Guillouzo' 15 | author = 'Antonio Macovei, Rares Bratean & Benjamin Guillouzo' 16 | 17 | # -- General configuration --------------------------------------------------- 18 | # https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration 19 | 20 | import sphinx_rtd_theme 21 | extensions = [ 22 | "sphinx.ext.autodoc", 23 | "sphinx.ext.viewcode", 24 | "sphinx.ext.todo", 25 | "sphinx.ext.napoleon", 26 | "sphinx_rtd_theme", 27 | "m2r2", 28 | ] 29 | 30 | source_suffix = [".rst", ".md"] 31 | 32 | templates_path = ['_templates'] 33 | exclude_patterns = [] 34 | 35 | 36 | 37 | # -- Options for HTML output ------------------------------------------------- 38 | # https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output 39 | 40 | html_theme = 'sphinx_rtd_theme' 41 | -------------------------------------------------------------------------------- /docs/source/images/Invictus-Incident-Response.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/invictus-ir/Invictus-AWS/2f0d1fef3dfec3ce49c70351884e7bf524258245/docs/source/images/Invictus-Incident-Response.jpg -------------------------------------------------------------------------------- /docs/source/index.rst: -------------------------------------------------------------------------------- 1 | .. image:: /images/Invictus-Incident-Response.jpg 2 | :alt: Invictus logo 3 | 4 | Invictus-AWS documentation! 5 | =================================== 6 | 7 | Invictus-AWS is a python script that will help automatically enumerate and acquire relevant data from an AWS environment. The tool doesn't require any installation it can be run as a standalone script with minimal configuration required. The goal for Invictus-AWS is to allow incident responders or other security personnel to quickly get an insight into an AWS environment to answer the following questions: 8 | 9 | * What services are running in an AWS environment. 10 | * For each of the services what are the configuration details. 11 | * What logging is available for each of the services that might be relevant in an incident response scenario. 12 | * Is there any threat that I can find easily with the CloudTrail logs. 13 | 14 | Want to know more about this project? We did a talk at FIRST Amsterdam 2022 and the slides are available here: https://github.com/invictus-ir/talks/blob/main/FIRST_2022_TC_AMS_Presentation.pdf 15 | 16 | .. note:: 17 | 18 | 🆘 Incident Response support reach out to cert@invictus-ir.com or go to https://www.invictus-ir.com/247 19 | 20 | Getting Help 21 | ------------ 22 | 23 | Have a bug report or feature request? Open an issue on the Github repository : https://github.com/invictus-ir/Invictus-AWS . 24 | 25 | .. toctree:: 26 | :maxdepth: 2 27 | :hidden: 28 | :caption: Installation 29 | 30 | installation/start 31 | 32 | .. 
toctree::
33 | :maxdepth: 2
34 | :hidden:
35 | :caption: Operation
36 | 
37 | use/work
38 | use/usage
39 | use/example
--------------------------------------------------------------------------------
/docs/source/installation/start.rst:
--------------------------------------------------------------------------------
1 | Get started
2 | ===========
3 | 
4 | To run the script you will have to use the AWS CLI.
5 | 
6 | * Install the AWS CLI package. You can simply follow the instructions here : https://aws.amazon.com/cli/.
7 | * Install Python3 on your local system
8 | * Install the requirements with :samp:`pip3 install -r requirements.txt`
9 | * Have an account with permissions to access the AWS environment you want to acquire data from
10 | * Configure your AWS account with :samp:`aws configure`
11 | 
12 | .. note::
13 | 
14 | This requires the AWS Access Key ID for the account you use to run the script.
15 | 
16 | The user running the script must have these two policies in order to have the necessary permissions :
17 | 
18 | * The AWS managed job function policy :samp:`ReadOnlyAccess`
19 | * The policy that you can find in :samp:`source/files/policy.json`
20 | 
--------------------------------------------------------------------------------
/docs/source/use/example.rst:
--------------------------------------------------------------------------------
1 | Examples
2 | ========
3 | 
4 | **Acquire data exclusively from the eu-west-3 region, excluding the Configuration step, and store the output locally.** :
5 | 
6 | ``$python3 invictus-aws.py -r eu-west-3 -s 1,3 -w local``
7 | 
8 | *Note that the CloudTrail logs, if any exist, will be written both locally and to an S3 bucket, as the analysis step needs the logs to be in a bucket.*
9 | 
10 | =========================
11 | 
12 | **Acquire data from all regions, beginning with eu-west-3, with all the default steps (1,2,3) and with results written to an S3 bucket.** :
13 | 
14 | ``$python3 invictus-aws.py -A eu-west-3``
15 | 
16 | =========================
17 | 
18 | **Analyze CloudTrail logs using the tool's default database and table.** :
19 | 
20 | 
21 | ``$python3 invictus-aws.py -r eu-west-3 -s 4 -b bucket/path-to-the-existing-logs/ -o bucket/path-to-existing-folder-to-store-the-results/``
22 | 
23 | *In this example, the -b option is needed the first time, as the default database and table will be created. After that you don't need it anymore, as the table is already initialized.
24 | But don't forget that if you change your logs source and still want to use the default table, you need to delete the table first.*
25 | 
26 | =========================
27 | 
28 | **Analyze CloudTrail logs using the tool's default database and table, filter the results to the last 7 days and write the results locally.** :
29 | 
30 | ``$python3 invictus-aws.py -r eu-west-3 -w local -s 4 -x 7``
31 | 
32 | *In this example, the -b option is omitted as explained above. The -o option is also omitted because no output bucket is needed: the results are written locally.*
33 | 
34 | =========================
35 | 
36 | **Analyze CloudTrail logs using a new database and/or table (with the same structure as the default one).** :
37 | 
38 | ``$python3 invictus-aws.py -r eu-west-3 -w -s 4 -b bucket/path-to-the-existing-logs/ -o bucket/path-to-existing-folder-to-store-the-results/ -c your-catalog -d your-database -t your-table``
39 | 
40 | *In this example, the -b option is needed the first time, as the database and table will be created. After that you don't need it anymore, as the table is already initialized.
41 | But don't forget that if you change your logs source and still want to use the default table, you need to delete the table first.*
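If you are not sure whether the default database and table have already been initialized by a previous run (i.e. whether you still need ``-b``), the boto3 sketch below checks for them. It is purely illustrative and not part of the tool; the default names ``AwsDataCatalog``, ``cloudtrailanalysis`` and ``logs`` are the ones used by ``invictus-aws.py``, and the region is an example.

.. code-block:: python

    import boto3

    athena = boto3.client("athena", region_name="eu-west-3")  # example region

    # Does the default database exist in the default catalog?
    dbs = athena.list_databases(CatalogName="AwsDataCatalog")["DatabaseList"]
    db_exists = any(db["Name"] == "cloudtrailanalysis" for db in dbs)

    # Does the default table exist inside that database?
    table_exists = False
    if db_exists:
        tables = athena.list_table_metadata(
            CatalogName="AwsDataCatalog", DatabaseName="cloudtrailanalysis"
        )["TableMetadataList"]
        table_exists = any(tb["Name"] == "logs" for tb in tables)

    # If both exist you can omit -b; otherwise pass -b so the table can be created.
    print("database exists:", db_exists, "table exists:", table_exists)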
42 | 
43 | =========================
44 | 
45 | **Analyze CloudTrail logs using your existing database and table, with your own query file.** :
46 | 
47 | ``$python3 invictus-aws.py -r eu-west-3 -s 4 -o bucket/path-to-existing-folder-where-to-put-the-results/ -c your-catalog -d your-database -t your-table -f path-to-existing-query-file``
48 | 
49 | =========================
50 | 
51 | **Analyze CloudTrail logs using a new table with your own structure.** :
52 | 
53 | ``$python3 invictus-aws.py -r eu-west-3 -s 4 -b bucket/path-to-the-existing-logs/ -o bucket/path-to-existing-folder-where-to-put-the-results/ -c your-catalog -d your-database -t your-creation-table-file.ddl``
54 | 
55 | *You can find an example .ddl file in `source/files`. Just replace the table name with the one you want to create, set the location to the location of your CloudTrail logs and add the structure of your table. The default table used by the tool is explained here :* https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html .
--------------------------------------------------------------------------------
/docs/source/use/usage.rst:
--------------------------------------------------------------------------------
1 | Usage
2 | =====
3 | 
4 | | Default Usage : ``python3 invictus-aws.py`` will get you into a walkthrough mode.
5 | | Power User Usage : ``python3 invictus-aws.py [-h] [-p profile] -w [{cloud,local}] (-r AWS_REGION | -A [ALL_REGIONS]) -s [STEP] [-start YYYY-MM-DD] [-end YYYY-MM-DD] [-b SOURCE_BUCKET] [-o OUTPUT_BUCKET] [-c CATALOG] [-d DATABASE] [-t TABLE] [-f QUERY_FILE] [-x TIMEFRAME]``
6 | 
7 | The script runs with a few parameters :
8 | 
9 | * ``-h`` to print out the help menu.
10 | * ``-p profile`` or ``--profile profile``. Specify your AWS profile. Default is ``default``.
11 | * ``-w cloud`` or ``-w local``. Use the 'cloud' option if you want the results to be stored in an S3 bucket (automatically created). Use the 'local' option if you want the results to be written to local storage. The default option is 'cloud'. So if you want to use the 'cloud' option, you can either write nothing, write only `-w` or write `-w cloud`.
12 | * ``-r region`` or ``-A [region]``. Use the first option if you want the tool to analyze only the specified region. Use the second option if you want the tool to analyze all regions. You can also specify a region if you want to start with that one.
13 | * ``-s [step,step]``. Provide a comma-separated list of the steps to be executed. 1 = Enumeration. 2 = Configuration. 3 = Logs Extraction. 4 = Logs Analysis. The default option is 1,2,3 as **step 4 has to be executed alone**. So if you want to run the first three steps, you can either write nothing, write only `-s` or write `-s 1,2,3`. If you want to run step 4, write `-s 4`.
14 | * ``-start YYYY-MM-DD``. Start date for the CloudTrail logs collection. It is recommended to use it every time step 3 is executed, as collecting all the logs can take an extremely long time. It has to be used with `-end` and must only be used with step 3.
15 | * ``-end YYYY-MM-DD``. End date for the CloudTrail logs collection. It is recommended to use it every time step 3 is executed, as collecting all the logs can take an extremely long time. It has to be used with `-start` and must only be used with step 3.
16 | 
17 | .. note::
18 | 
19 | The next parameters only apply if you run step 4.
The logs must already have been collected, either by running step 3 in a previous execution or by your own means.
20 | 
21 | * ``-b bucket``. Bucket containing the CloudTrail logs. Format is ``bucket/subfolders/``.
22 | * ``-o bucket``. Bucket where the results of the queries will be stored. Must look like ``bucket/[subfolders]/``.
23 | * ``-c catalog``. Catalog used by Athena.
24 | * ``-d database``. Database used by Athena. You can either input an existing database or a new one that will be created.
25 | * ``-t table``. Table used by Athena. You can either input an existing table, input a new one (that will have the same structure as the default one) or input a .ddl file giving details about your new table. An example.ddl file is available; just add the structure, and modify the table name and the location of your logs.
26 | * ``-f file.yaml``. Your own file containing your queries for the analysis. If you don't want to use or modify the default file, you can use your own by specifying it with this option. The file has to already exist.
27 | * ``-x timeframe``. Used by the queries to filter their results. The query part with the timeframe will automatically be added at the end of your queries if you specify a timeframe. You don't have to add it yourself to your queries.
28 | 
29 | 
--------------------------------------------------------------------------------
/docs/source/use/work.rst:
--------------------------------------------------------------------------------
1 | How it works
2 | ============
3 | 
4 | The tool is divided into 4 different steps :
5 | 
6 | #. The 1st step performs enumeration of activated AWS services and their details.
7 | #. The 2nd step retrieves configuration details about the activated services.
8 | #. The 3rd step extracts available logs for the activated services.
9 | #. The 4th step analyzes CloudTrail logs, and only CloudTrail logs, by running Athena queries against them.
10 | 
11 | .. note::
12 | 
13 | Step 4 :
14 | The queries are written in the file :samp:`source/files/queries.yaml`.
15 | There are already some queries, but you can remove them or add your own. If you add your own queries, be careful to respect this style : :samp:`name-of-your-query: ... FROM DATABASE.TABLE ...`, i.e. don't specify the actual database and table names (see the sketch at the end of this page).
16 | 
17 | Step 4 :
18 | The logs used by this step can be CloudTrail logs extracted by step 3 or your own CloudTrail logs. But there are some requirements about what the logs look like. They need to be stored in an S3 bucket in the default format (one JSON file, with a single line containing the event).
19 | 
20 | .. note::
21 | 
22 | Each step can be run independently. There is no need to have completed step 1 to proceed with step 2.
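To make the naming convention above concrete, here is a minimal, hypothetical sketch (not the tool's actual code) of how a query entry written with the literal ``DATABASE.TABLE`` placeholder can be loaded from a YAML file and pointed at a real Athena database and table. The query name and the database/table values are examples only.

.. code-block:: python

    import yaml  # pyyaml, already listed in requirements.txt

    # A query entry following the documented style: the literal string
    # DATABASE.TABLE is kept in the query instead of real names.
    raw = (
        "my_custom_query: SELECT eventname, sourceipaddress "
        "FROM DATABASE.TABLE WHERE errorcode = 'AccessDenied';"
    )

    queries = yaml.safe_load(raw)

    # Before the query is sent to Athena, the placeholder can be swapped
    # for the database and table actually in use (here: example names).
    database, table = "cloudtrailanalysis", "logs"
    for name, query in queries.items():
        print(name, "->", query.replace("DATABASE.TABLE", f"{database}.{table}"))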
23 | -------------------------------------------------------------------------------- /invictus-aws.py: -------------------------------------------------------------------------------- 1 | """Main file of the tool, used to run all the steps.""" 2 | 3 | import argparse, sys 4 | from os import path 5 | import datetime 6 | from re import match 7 | 8 | import source.utils.utils 9 | 10 | from source.main.ir import IR 11 | from source.utils.utils import * 12 | from source.utils.strings import * 13 | 14 | ACCOUNT_CLIENT = boto3.client('account') 15 | 16 | def set_args(): 17 | """Define the arguments used when calling the tool.""" 18 | parser = argparse.ArgumentParser(add_help=False) 19 | 20 | parser.add_argument( 21 | "-h", 22 | "--help", 23 | action="help", 24 | default=argparse.SUPPRESS, 25 | help="[+] Show this help message and exit.", 26 | ) 27 | 28 | parser.add_argument( 29 | "--menu", 30 | action="store_true", 31 | help="[+] Run in walkthrough mode") 32 | 33 | parser.add_argument( 34 | "-p", 35 | "--profile", 36 | nargs="?", 37 | type=str, 38 | default="default", 39 | const="default", 40 | help="[+] Specify your aws profile. Default profile is 'default'." 41 | ) 42 | 43 | parser.add_argument( 44 | "-w", 45 | "--write", 46 | nargs="?", 47 | type=str, 48 | default="cloud", 49 | const="cloud", 50 | choices=['cloud', 'local'], 51 | help="[+] 'cloud' option if you want the results to be stored in a S3 bucket (automatically created). 'local' option if you want the results to be written to local storage. The default option is 'cloud'. So if you want to use 'cloud' option, you can either write nothing, write only `-w` or write `-w cloud`." 52 | ) 53 | 54 | group1 = parser.add_mutually_exclusive_group() 55 | group1.add_argument( 56 | "-r", 57 | "--aws-region", 58 | help="[+] Only scan the specified region of the account. Can't be used with -a.", 59 | ) 60 | group1.add_argument( 61 | "-A", 62 | "--all-regions", 63 | nargs="?", 64 | type=str, 65 | const="us-east-1", 66 | default="not-all", 67 | help="[+] Scan all the enabled regions of the account. If you specify a region, it will begin by this region. Can't be used with -r.", 68 | ) 69 | 70 | parser.add_argument( 71 | "-s", 72 | "--step", 73 | nargs='?', 74 | type=str, 75 | default="1,2,3", 76 | const="1,2,3", 77 | help="[+] Provide a comma-separated list of the steps to be executed. 1 = Enumeration. 2 = Configuration. 3 = Logs Extraction. 4 = Logs Analysis. The default option is all steps. So if you want to run all the four steps, you can either write nothing, write only `-s` or write `-s 1,2,3,4`." 78 | ) 79 | 80 | parser.add_argument( 81 | "-start", 82 | "--start-time", 83 | help="[+] Start time of the Cloudtrail logs to be collected. Must only be used with step 3. Format is YYYY-MM-DD." 84 | ) 85 | 86 | parser.add_argument( 87 | "-end", 88 | "--end-time", 89 | help="[+] End time of the Cloudtrail logs to be collected. Mudt only be used with step 3. Format is YYYY-MM-DD." 90 | ) 91 | 92 | parser.add_argument( 93 | "-b", 94 | "--source-bucket", 95 | type=str, 96 | help="[+] Bucket containing the cloudtrail logs. Must look like bucket/subfolders/." 97 | ) 98 | 99 | parser.add_argument( 100 | "-o", 101 | "--output-bucket", 102 | type=str, 103 | help="[+] Bucket where the results of the queries will be stored. Must look like bucket/subfolders/." 104 | ) 105 | 106 | parser.add_argument( 107 | "-c", 108 | "--catalog", 109 | type=str, 110 | help="[+] Catalog used by Athena." 
111 | ) 112 | 113 | parser.add_argument( 114 | "-d", 115 | "--database", 116 | type=str, 117 | help="[+] Database used by Athena. You can either input an existing database or a new one that will be created. If so, don't forget to input a .ddl file for your table." 118 | ) 119 | 120 | parser.add_argument( 121 | "-t", 122 | "--table", 123 | type=str, 124 | help="[+] Table used by Athena. You can either input an existing table or input a .ddl file giving details about your new table. An example.ddl is available for you, just add the structure, modify the name of the table and the location of your logs." 125 | ) 126 | 127 | parser.add_argument( 128 | "-f", 129 | "--queryfile", 130 | nargs='?', 131 | default="source/files/queries.yaml", 132 | const="source/files/queries.yaml", 133 | type=str, 134 | help="[+] Your own file containing your queries for the analysis. If you don't want to use or modify the default file, you can use your own by specifying it with this option. The file has to already exist." 135 | ) 136 | 137 | parser.add_argument( 138 | "-x", 139 | "--timeframe", 140 | type=str, 141 | help="[+] Used by the queries to filter their results. The timeframe sequence will automatically be added at the end of your queries if you specify a timeframe. You don't have to add it yourself to your queries." 142 | ) 143 | 144 | return parser.parse_args() 145 | 146 | def run_steps(dl, region, first_region, steps, start, end, source, output, catalog, database, table, queryfile, exists, timeframe): 147 | 148 | """Run the steps of the tool (enum, config, logs extraction, logs analysis). 149 | 150 | Parameters 151 | ---------- 152 | dl : bool 153 | True if the user wants to download the results, False if he wants the results to be written in a s3 bucket 154 | region : str 155 | Region in which the tool is executed 156 | regionless : str 157 | "not-all" if the tool is used on only one region. 
First region to run the tool on otherwise 158 | steps : list of str 159 | Steps to run (1 for enum, 2 for config, 3 for logs extraction, 4 for analysis) 160 | start : str 161 | Start time for logs collection 162 | end : str 163 | End time for logs collection 164 | source : str 165 | Source bucket for the analysis part (4) 166 | output : str 167 | Output bucket for the analysis part (4) 168 | catalog : str 169 | Data catalog used with the database 170 | database : str 171 | Database containing the table for logs analytics 172 | table : str 173 | Contains the sql requirements to query the logs 174 | queryfile : str 175 | File containing the queries 176 | exists : tuple of bool 177 | If the input db and table already exists 178 | timeframe : str 179 | Time filter for default queries 180 | """ 181 | if dl: 182 | create_folder(ROOT_FOLDER + "/" + region) 183 | 184 | logs = "" 185 | 186 | if "4" in steps: 187 | ir = IR(region, dl, steps, source, output, catalog, database, table) 188 | else : 189 | ir = IR(region, dl, steps) 190 | 191 | if "4" in steps: 192 | try: 193 | ir.execute_analysis(queryfile, exists, timeframe) 194 | except Exception as e: 195 | print(str(e)) 196 | else: 197 | if "1" in steps: 198 | try: 199 | ir.execute_enumeration(first_region) 200 | except Exception as e: 201 | print(str(e)) 202 | 203 | if "2" in steps: 204 | try: 205 | ir.execute_configuration(first_region) 206 | except Exception as e: 207 | print(str(e)) 208 | 209 | if "3" in steps: 210 | try: 211 | logs = ir.execute_logs(first_region, start, end) 212 | if logs == "0": 213 | print(f"{ERROR} Be aware that no Cloudtrail logs were found.") 214 | except Exception as e: 215 | print(str(e)) 216 | 217 | def verify_all_regions(input_region): 218 | """Search for all enabled regions and verify that the given region exists (region that the tool will begin with). 219 | 220 | Parameters 221 | ---------- 222 | input_region : str 223 | If we're in this function, the used decided to run the tool on all enabled regions. This given region is the first one that the tool will analyze. 224 | """ 225 | response = try_except( 226 | ACCOUNT_CLIENT.list_regions, 227 | RegionOptStatusContains=["ENABLED", "ENABLED_BY_DEFAULT"], 228 | ) 229 | try: 230 | regions = response["Regions"] 231 | except KeyError as e: 232 | print(f"{ERROR} The security token included in the request is expired. Please set a new token before running the tool.") 233 | sys.exit(-1) 234 | 235 | region_names = [] 236 | 237 | for region in regions: 238 | region_names.append(region["RegionName"]) 239 | 240 | if input_region in region_names: 241 | region_names.remove(input_region) 242 | region_names.insert(0, input_region) 243 | 244 | return region_names 245 | elif input_region == "not-all": 246 | print(f"{ERROR} You have to specify a region option to run the tool : (-r) or (-A).") 247 | sys.exit(-1) 248 | else: 249 | print(f"{ERROR} The region you entered doesn't exist or is not enabled. Please enter a valid region.") 250 | sys.exit(-1) 251 | 252 | def verify_one_region(region): 253 | """Verify the region inputs and run the steps of the tool for one region. 
254 | 255 | Parameters 256 | ---------- 257 | region : str 258 | Region to run the tool in 259 | 260 | Returns 261 | ------- 262 | enabled : array 263 | Contains the region if it's enabled 264 | """ 265 | enabled = [] 266 | 267 | try: 268 | response = ACCOUNT_CLIENT.get_region_opt_status(RegionName=region) 269 | response.pop("ResponseMetadata", None) 270 | if ( 271 | response["RegionOptStatus"] == "ENABLED_BY_DEFAULT" 272 | or response["RegionOptStatus"] == "ENABLED" 273 | ): 274 | enabled.append(region) 275 | except ClientError as e: 276 | if e.response['Error']['Code'] == 'ExpiredTokenException': 277 | print(f"{ERROR} The security token included in the request is expired. Please set a new token before running the tool.") 278 | else: 279 | print(f"{ERROR} {e}") 280 | sys.exit(-1) 281 | 282 | return enabled 283 | 284 | def verify_steps(steps, source, output, catalog, database, table, region, dl): 285 | """Verify that the steps entered are correct. 286 | 287 | Parameters 288 | ---------- 289 | steps : list of str 290 | Steps to run (1 for enum, 2 for config, 3 for logs extraction, 4 for analysis) 291 | source : str 292 | Source bucket for the analysis part (4) 293 | output : str 294 | Output bucket for the analysis part (4) 295 | catalog : str 296 | Data catalog used with the database 297 | database : str 298 | Database containing the table for logs analytics 299 | table : str 300 | Contains the sql requirements to query the logs 301 | region : str 302 | Region in which the tool is executed 303 | dl : bool 304 | True if the user wants to download the results, False if he wants the results to be written in a s3 bucket 305 | 306 | Returns 307 | ------- 308 | steps : list of str 309 | Steps to run (1 for enum, 2 for config, 3 for logs extraction, 4 for analysis) 310 | source : str 311 | Source bucket for the analysis part (4) 312 | output : str 313 | Output bucket for the analysis part (4) 314 | database : str 315 | Database containing the table for logs analytics 316 | table : str 317 | Contains the sql requirements to query the logs 318 | exists : tuple of bool 319 | If the input db and table already exists 320 | """ 321 | #Verifying steps inputs 322 | 323 | verify_steps_input(steps) 324 | 325 | if "4" not in steps and (source != None or output != None or catalog != None or database != None or table != None): 326 | print( 327 | "invictus-aws.py: error: You can't use -b, -o, -c, -d, -t with another step than the 4th." 
328 | ) 329 | sys.exit(-1) 330 | 331 | #if db and table already exists (the basic ones if no input was provided) 332 | db_exists = False 333 | table_exists = False 334 | 335 | #Verifying Athena inputs 336 | 337 | if "4" in steps: 338 | 339 | athena = boto3.client("athena", region_name=region) 340 | 341 | if catalog is None and database is None and table is None: 342 | 343 | #we need to verify if cloudtrailanalysis db and clogs table already exists 344 | 345 | databases = athena.list_databases(CatalogName="AwsDataCatalog") 346 | for db in databases["DatabaseList"]: 347 | if db["Name"] == "cloudtrailanalysis": 348 | db_exists = True 349 | break 350 | 351 | if db_exists: 352 | tables = athena.list_table_metadata(CatalogName="AwsDataCatalog",DatabaseName="cloudtrailanalysis") 353 | for tb in tables["TableMetadataList"]: 354 | if tb["Name"] == "logs": 355 | table_exists = True 356 | break 357 | 358 | elif catalog is not None and database is not None and table is not None: 359 | 360 | #Verifying catalog exists 361 | catalogs = athena.list_data_catalogs() 362 | if not any(cat['CatalogName'] == catalog for cat in catalogs['DataCatalogsSummary']): 363 | print(f"{ERROR} The data catalog you entered doesn't exist.") 364 | sys.exit(-1) 365 | 366 | regex_pattern = r'^[a-zA-Z_]{1,255}$' 367 | 368 | if not match(regex_pattern, database): 369 | print(f"{ERROR} Wrong database name format. Database name can only contains letters and `_`, up to 255 characters.") 370 | sys.exit(-1) 371 | 372 | if not table.endswith(".ddl") and not match(regex_pattern, table): 373 | print(f"{ERROR} Wrong table name format. Table name can only contains letters and `_`, up to 255 characters.") 374 | sys.exit(-1) 375 | 376 | databases = athena.list_databases(CatalogName=catalog) 377 | for db in databases["DatabaseList"]: 378 | if db["Name"] == database: 379 | db_exists = True 380 | 381 | if db_exists: 382 | tables = athena.list_table_metadata(CatalogName=catalog,DatabaseName=database) 383 | if table.endswith(".ddl"): 384 | exists = path.isfile(table) 385 | if not exists: 386 | print(f"{ERROR} you have to input a valid .ddl file to create a new table.") 387 | sys.exit(-1) 388 | else: 389 | tmp_table = get_table(table, False)[0] 390 | else: 391 | tmp_table = table 392 | for tb in tables["TableMetadataList"]: 393 | if tb["Name"] == tmp_table: 394 | table_exists = True 395 | if tb["Name"] == tmp_table and table.endswith(".ddl"): 396 | print(f"[!] Warning : The table {database}.{tmp_table} already exists. 
Using the existing one..") 397 | 398 | else: 399 | print(f"{ERROR} All or none of these arguments are required: -c/--catalog, -d/--database, -t/--table.") 400 | sys.exit(-1) 401 | 402 | if output == None and dl == False: 403 | print(f"{ERROR} The following arguments are required: -o/--output-bucket.") 404 | sys.exit(-1) 405 | elif output != None and dl == True: 406 | print(f"{ERROR} The following arguments are not asked: -o/--output-bucket.") 407 | sys.exit(-1) 408 | 409 | if source != None and table.endswith(".ddl"): 410 | print(f"{ERROR} The following arguments are not asked: -b/--source-bucket.") 411 | sys.exit(-1) 412 | 413 | #if all athena args are none, the source is none but table doesn't exists 414 | if (catalog == None and database == None and table == None) and source == None and not table_exists: 415 | print(f"{ERROR} The following arguments are required: -b/--source-bucket.") 416 | sys.exit(-1) 417 | 418 | # if all athena args are set, db or table is not set and source is not set : we need to recreate a table so we need the source bucket (no need if .ddl file as the source bucket is hardcoded in) 419 | if (catalog != None and database != None and table != None) and (not db_exists or not table_exists) and source == None and not table.endswith(".ddl"): 420 | print(f"{ERROR} The following arguments are required: -b/--source-bucket.") 421 | sys.exit(-1) 422 | 423 | if (db_exists and table_exists) and source != None: 424 | print(f"{ERROR} The following arguments are not asked: -b/--source-bucket.\n[+] Don't forget to delete your table if you want to change the logs source.") 425 | sys.exit(-1) 426 | 427 | #Verify buckets inputs 428 | 429 | if source != None: 430 | source = verify_bucket(source, "source", False) 431 | 432 | 433 | if output != None: 434 | output = verify_bucket(output, "output", False) 435 | 436 | exists = [db_exists, table_exists] 437 | 438 | return steps, source, output, database, table, exists 439 | 440 | def verify_catalog_db(catalog, database, region, to_create): 441 | 442 | #if db already exists (the basic one if no input is provided) 443 | db_exists = False 444 | 445 | athena = boto3.client("athena", region_name=region) 446 | 447 | if catalog is None and database is None: 448 | 449 | #we need to verify if cloudtrailanalysis db already exists 450 | databases = athena.list_databases(CatalogName="AwsDataCatalog") 451 | for db in databases["DatabaseList"]: 452 | if db["Name"] == "cloudtrailanalysis": 453 | db_exists = True 454 | break 455 | 456 | elif catalog is not None and database is not None: 457 | #Verifying catalog exists 458 | catalogs = athena.list_data_catalogs() 459 | 460 | if not any(cat['CatalogName'] == catalog for cat in catalogs['DataCatalogsSummary']): 461 | print(f"{ERROR} The data catalog you entered doesn't exist.") 462 | sys.exit(-1) 463 | 464 | regex_pattern = r'^[a-zA-Z_]{1,255}$' 465 | 466 | if not match(regex_pattern, database): 467 | print(f"{ERROR} Wrong database name format. 
Database name can only contains letters and `_`, up to 255 characters.") 468 | sys.exit(-1) 469 | 470 | databases = athena.list_databases(CatalogName=catalog) 471 | for db in databases["DatabaseList"]: 472 | if db["Name"] == database: 473 | db_exists = True 474 | break 475 | 476 | else: 477 | print(f"{ERROR} You have to enter the 3 values : catalog, database and table") 478 | sys.exit(-1) 479 | 480 | if not db_exists and not to_create: 481 | print(f"{ERROR} The database you entered doesn't exists.") 482 | sys.exit(-1) 483 | 484 | if db_exists and to_create: 485 | print(f"{ERROR} The database you entered already exists.") 486 | sys.exit(-1) 487 | 488 | return db_exists 489 | 490 | def verify_table(catalog, database, table, region, to_create, db_exists): 491 | 492 | table_exists = False 493 | athena = boto3.client("athena", region_name=region) 494 | 495 | if catalog is None and database is None and table is None: 496 | catalog = "AwsDataCatalog" 497 | database = "cloudtrailanalysis" 498 | table = "logs" 499 | 500 | if db_exists: 501 | tables = athena.list_table_metadata(CatalogName=catalog,DatabaseName=database) 502 | for tb in tables["TableMetadataList"]: 503 | if tb["Name"] == "logs": 504 | table_exists = True 505 | 506 | else: 507 | 508 | regex_pattern = r'^[a-zA-Z_]{1,255}$' 509 | 510 | if not match(regex_pattern, table): 511 | print(f"{ERROR} Wrong table name format. Table name can only contains letters and `_`, up to 255 characters.") 512 | sys.exit(-1) 513 | 514 | if db_exists: 515 | tables = athena.list_table_metadata(CatalogName=catalog,DatabaseName=database) 516 | for tb in tables["TableMetadataList"]: 517 | if tb["Name"] == table: 518 | table_exists = True 519 | if not table_exists and not to_create: 520 | print(f"{ERROR} The table you entered doesn't exists.") 521 | sys.exit(-1) 522 | 523 | if table_exists and to_create: 524 | print(f"{ERROR} The table you entered already exists.") 525 | sys.exit(-1) 526 | 527 | return table_exists 528 | 529 | def verify_structure_input(catalog, database, structure, region, db_exists): 530 | 531 | athena = boto3.client("athena", region_name=region) 532 | table_exists = False 533 | 534 | if structure.endswith(".ddl"): 535 | exists = path.isfile(structure) 536 | if not exists: 537 | print(f"{ERROR} you have to input an existing .ddl file to create a new table.") 538 | sys.exit(-1) 539 | else: 540 | s3 = get_s3_in_ddl(structure) 541 | verify_bucket(s3, "output", True) 542 | tmp_table = get_table(structure, False)[0] 543 | else: 544 | print(f"{ERROR} you have to input an existing .ddl file to create a new table.") 545 | sys.exit(-1) 546 | 547 | if db_exists: 548 | tables = athena.list_table_metadata(CatalogName=catalog,DatabaseName=database) 549 | for tb in tables["TableMetadataList"]: 550 | if tb["Name"] == tmp_table: 551 | table_exists = True 552 | 553 | return table_exists 554 | 555 | def verify_queryfile_input(queryfile): 556 | 557 | exists = path.isfile(queryfile) 558 | if not exists: 559 | print(f"{ERROR} you have to input an existing query file.") 560 | sys.exit(-1) 561 | 562 | def verify_bucket(bucket, entry_type, is_from_ddl): 563 | """Verify that the user inputs regarding the buckets logs are correct. 
564 | 565 | Parameters 566 | ---------- 567 | bucket : str 568 | Bucket we verify it exists 569 | entry_type : str 570 | Source or output bucket (for log analysis) 571 | 572 | Returns 573 | ------- 574 | bucket : str 575 | Bucket we verify it exists 576 | """ 577 | 578 | 579 | if not bucket.endswith("/"): 580 | bucket = bucket+"/" 581 | 582 | if not bucket.startswith("s3://"): 583 | bucket = f"s3://{bucket}" 584 | 585 | if bucket.startswith("arn:aws:s3:::"): 586 | print(f"{ERROR} The {entry_type} bucket you entered is not written well. Please verify that the format is 's3://s3-name/[potential-folders]/'") 587 | sys.exit(-1) 588 | 589 | name, prefix = get_bucket_and_prefix(bucket) 590 | 591 | s3 = boto3.resource('s3') 592 | source_bucket = s3.Bucket(name) 593 | 594 | if not source_bucket.creation_date: 595 | if is_from_ddl: 596 | print(f"{ERROR} The {entry_type} bucket you entered as LOCATION in your .DDL file doesn't exists or is not written well. Please verify that the format is 's3://s3-name/[potential-folders]/'") 597 | else: 598 | print(f"{ERROR} The {entry_type} bucket you entered doesn't exists or is not written well. Please verify that the format is 's3://s3-name/[potential-folders]/'") 599 | sys.exit(-1) 600 | 601 | if prefix: 602 | response = source.utils.utils.S3_CLIENT.list_objects_v2(Bucket=name, Prefix=prefix) 603 | if 'Contents' not in response or len(response['Contents']) == 0: 604 | if is_from_ddl: 605 | print(f"{ERROR} The {entry_type} bucket you entered as LOCATION in your .DDL file doesn't exists or is not written well. Please verify that the format is 's3://s3-name/[potential-folders]/'") 606 | else: 607 | print(f"{ERROR} The {entry_type} bucket you entered doesn't exists or is not written well. Please verify that the format is 's3://s3-name/[potential-folders]/'") 608 | sys.exit(-1) 609 | 610 | return bucket 611 | 612 | def verify_dates(start, end, steps): 613 | """Verify if the date inputs are correct. 614 | 615 | Parameters 616 | ---------- 617 | start : str 618 | Start time 619 | end : str 620 | End time 621 | steps : list of str 622 | Steps to run 623 | """ 624 | if "3" not in steps and (start != None or end != None): 625 | print("[!] invictus-aws.py: error: Only input dates with step 3.") 626 | sys.exit(-1) 627 | 628 | elif "3" not in steps and (start == None or end == None): 629 | pass 630 | 631 | else: 632 | if start != None and end != None: 633 | 634 | present = datetime.datetime.now() 635 | 636 | try: 637 | start_date = datetime.datetime.strptime(start, "%Y-%m-%d") 638 | except ValueError: 639 | print("[!] invictus-aws.py: error: Start date in not in a valid format.") 640 | sys.exit(-1) 641 | 642 | try: 643 | end_date = datetime.datetime.strptime(end, "%Y-%m-%d") 644 | except ValueError: 645 | print("[!] invictus-aws.py: error: End date in not in a valid format.") 646 | sys.exit(-1) 647 | 648 | if start_date > present: 649 | print("[!] invictus-aws.py: error: Start date can not be in the future.") 650 | sys.exit(-1) 651 | elif end_date > present: 652 | print("[!] invictus-aws.py: error: End date can not be in the future.") 653 | sys.exit(-1) 654 | elif start_date >= end_date: 655 | print("[!] invictus-aws.py: error: Start date can not be equal to or more recent than End date.") 656 | sys.exit(-1) 657 | 658 | elif start == None and end != None: 659 | print("[!] invictus-aws.py: error: Start date in not defined.") 660 | sys.exit(-1) 661 | elif start != None and end == None: 662 | print("[!] 
invictus-aws.py: error: End date in not defined.") 663 | sys.exit(-1) 664 | elif start == None and end == None: 665 | print("[!] invictus-aws.py: error: You have to specify start and end time.") 666 | sys.exit(-1) 667 | 668 | def verify_file(queryfile, steps): 669 | """Verify if the query file input is correct. 670 | 671 | Parameters 672 | ---------- 673 | queryfile : str 674 | Yaml file containing the query 675 | steps : list of str 676 | Steps to run 677 | """ 678 | if "4" not in steps and queryfile != "source/files/queries.yaml": 679 | print("[!] invictus-aws.py: error: Only input queryfile with step 4.") 680 | sys.exit(-1) 681 | 682 | if not path.isfile(queryfile): 683 | print(f"invictus-aws.py: error: {queryfile} does not exist.") 684 | sys.exit(-1) 685 | elif not queryfile.endswith(".yaml") and not queryfile.endswith(".yml"): 686 | print(f"invictus-aws.py: error: Please provide a yaml file as your query file.") 687 | sys.exit(-1) 688 | 689 | def verify_timeframe(timeframe, steps): 690 | """Verify the input timeframe which is used to filter queries results. 691 | 692 | Parameters 693 | ---------- 694 | timeframe : str 695 | Input timeframe 696 | steps : list of str 697 | Steps to run 698 | """ 699 | if timeframe != None: 700 | 701 | if "4" not in steps: 702 | print("{ERROR} Only input timeframe with step 4.") 703 | sys.exit(-1) 704 | 705 | if not timeframe.isdigit() or int(timeframe) <= 0: 706 | print("{ERROR} Only input valid number > 0") 707 | sys.exit(-1) 708 | 709 | def verify_steps_input(steps): 710 | 711 | for step in steps: 712 | if step not in POSSIBLE_STEPS: 713 | print(f"{ERROR} The steps you entered are not allowed. Please only enter valid steps.") 714 | sys.exit(-1) 715 | 716 | if "4" in steps and ("3" in steps or "2" in steps or "1" in steps): 717 | print(f"{ERROR} Step 4 can only be executed alone.") 718 | sys.exit(-1) 719 | 720 | def verify_input(input_name, possible_inputs): 721 | if input_name not in possible_inputs: 722 | print("[!] invictus-aws.py: error: The input you entered are not allowed. Please only enter valid inputs.") 723 | sys.exit(-1) 724 | 725 | def verify_region_input(region_choice, region, steps): 726 | ##all_regions = array of all regions in all mode, array of one region otherwise 727 | 728 | first_region="not-all" 729 | 730 | try: 731 | if region_choice == "1": 732 | try: 733 | if region: 734 | all_regions = verify_all_regions(region) 735 | first_region = region 736 | else: 737 | all_regions = verify_all_regions("us-east-1") 738 | first_region = "us-east-1" 739 | except IndexError: 740 | all_regions = verify_all_regions("us-east-1") 741 | first_region = "us-east-1" 742 | else: 743 | all_regions = verify_one_region(region) 744 | except IndexError: 745 | print(f"{ERROR} Please only enter valid region and follow the pattern needed.") 746 | sys.exit(-1) 747 | 748 | return all_regions, first_region 749 | 750 | def verify_profile(profile): 751 | if profile not in boto3.session.Session().available_profiles: 752 | print("[!] invictus-aws.py: error: The profile you entered does not exist. 
Please only enter valid profile.") 753 | sys.exit(-1) 754 | 755 | 756 | def main(): 757 | """Get the arguments and run the appropriate functions.""" 758 | print(TOOL_NAME) 759 | 760 | profile = None 761 | dl = None 762 | region = None 763 | first_region = None 764 | steps = None 765 | start = None 766 | end = None 767 | source_bucket = None 768 | output_bucket = None 769 | catalog = None 770 | database = None 771 | table = None 772 | queryfile = None 773 | exists = None 774 | timeframe = None 775 | 776 | args = set_args() 777 | real_arg_count = len(sys.argv) - 1 778 | 779 | if args.menu or real_arg_count == 0 : 780 | if real_arg_count >= 2: 781 | print(f"{ERROR} --menu cannot be used with other arguments") 782 | sys.exit(1) 783 | 784 | ## Walkthough mode 785 | print(WALKTHROUGHT_ENTRY) 786 | 787 | print(PROFILE_PRESENTATION) 788 | profile_input = input(PROFILE_ACTION) 789 | verify_input(profile_input, ["1", "2"]) 790 | if profile_input == "1": 791 | profile = input(PROFILE) 792 | verify_profile(profile) 793 | boto3.setup_default_session(profile_name=profile) 794 | 795 | print(STEPS_PRESENTATION) 796 | steps = input(STEPS_ACTION) 797 | verify_steps_input(steps.split(",")) 798 | 799 | print(REGION_PRESENTATION) 800 | region_choice = input(REGION_ACTION) 801 | verify_input(region_choice, ["1", "2"]) 802 | if region_choice == "1" and "4" in steps: 803 | print(f"{ERROR} You cant run the tool on all regions with the Analysis step") 804 | sys.exit(-1) 805 | if region_choice == "2": 806 | region = input(REGION) 807 | else: 808 | region = input(ALL_REGION) 809 | regions, first_region = verify_region_input(region_choice, region, steps) 810 | 811 | set_clients(first_region) 812 | 813 | print(STORAGE_PRESENTATION) 814 | storage_input = input(STORAGE_ACTION) 815 | verify_input(storage_input, ["1", "2"]) 816 | if storage_input == "1": 817 | dl = True 818 | else: 819 | dl = False 820 | 821 | if "3" in steps: 822 | print(START_END_PRESENTATION) 823 | start = input(START) 824 | end = input(END) 825 | verify_dates(start, end, steps) 826 | 827 | if "4" in steps: 828 | 829 | print(DB_INITIALIZED_PRESENTATION) 830 | init_db_input = input(DB_INITIALIZED_ACTION) 831 | verify_input(init_db_input, ["1", "2"]) 832 | 833 | #[if the database exists, If the table exists] 834 | exists = [False, False] 835 | 836 | if init_db_input == "2": #aka db is not initialized yet 837 | 838 | print(DEFAULT_NAME_PRESENTATION) 839 | default_name_input = input(DEFAULT_NAME_ACTION) 840 | verify_input(default_name_input, ["1", "2"]) 841 | 842 | if default_name_input == "2": #aka the guy doesn't want the defaults names 843 | 844 | print(NEW_NAMES_PRESENTATION) 845 | catalog = input(CATALOG_ACTION) 846 | database = input(DB_ACTION) 847 | 848 | exists[0] = verify_catalog_db(catalog, database, regions[0], True) 849 | 850 | print(DEFAULT_STRUCTURE_PRESENTATION) 851 | default_structure_input = input(DEFAULT_STRUCTURE_ACTION) 852 | verify_input(default_structure_input, ["1", "2"]) 853 | 854 | exists = [False, False] 855 | if default_structure_input == "1": 856 | table = input(STRUCTULE_FILE) 857 | exists[1] = verify_structure_input(catalog, database, table, regions[0], exists[0]) 858 | 859 | else: 860 | 861 | print(TABLE_PRESENTATION) 862 | table = input(TABLE_ACTION) 863 | exists[1] = verify_table(catalog, database, table, regions[0], True, exists[0]) 864 | 865 | source_bucket = input(INPUT_BUCKET_ACTION) 866 | verify_bucket(source_bucket, "source", False) 867 | 868 | else: 869 | catalog = None 870 | database = None 871 | table = None 872 | 
exists[0] = verify_catalog_db(catalog, database, regions[0], True) 873 | exists[1] = verify_table(catalog, database, table, regions[0], True, exists[0]) 874 | 875 | source_bucket = input(INPUT_BUCKET_ACTION) 876 | verify_bucket(source_bucket, "source", False) 877 | 878 | if not dl and ((table and not table.endswith(".ddl")) or not table): 879 | output_bucket = input(OUTPUT_BUCKET_ACTION) 880 | verify_bucket(output_bucket, "output", False) 881 | 882 | else: 883 | print(NEW_NAMES_PRESENTATION) 884 | catalog = input(CATALOG_ACTION) 885 | database = input(DB_ACTION) 886 | exists[0] = verify_catalog_db(catalog, database, regions[0], False) 887 | table = input(TABLE_ACTION) 888 | exists[1] = verify_table(catalog, database, table, regions[0], False, exists[0]) 889 | 890 | print(DEFAULT_QUERY_PRESENTATION) 891 | queryfile = input(DEFAULT_QUERY_ACTION) 892 | verify_input(queryfile, ["1", "2"]) 893 | 894 | if queryfile == "1": 895 | queryfile = input(QUERY_FILE) 896 | verify_queryfile_input(queryfile) 897 | else: 898 | queryfile = "source/files/queries.yaml" 899 | 900 | print(TIMEFRAME_PRESENTATION) 901 | timeframe_input = input(TIMEFRAME_ACTION) 902 | verify_input(timeframe_input, ["1", "2"]) 903 | 904 | if timeframe_input == "1": 905 | timeframe = input(TIMEFRAME) 906 | verify_timeframe(timeframe, steps) 907 | 908 | else: 909 | 910 | steps = args.step.split(",") 911 | verify_steps_input(steps) 912 | 913 | profile = args.profile 914 | if profile != "default": 915 | verify_profile(profile) 916 | boto3.setup_default_session(profile_name=profile) 917 | 918 | region = args.aws_region 919 | all_regions= args.all_regions 920 | 921 | if region: 922 | regions, first_region = verify_region_input("2", region, steps) 923 | else: 924 | regions, first_region = verify_region_input("1", all_regions, steps) 925 | 926 | set_clients(first_region) 927 | 928 | dl = True if args.write == 'local' else False 929 | 930 | if "3" in steps: 931 | start = args.start_time 932 | end = args.end_time 933 | verify_dates(start, end, steps) 934 | 935 | source_bucket = args.source_bucket 936 | output_bucket = args.output_bucket 937 | 938 | catalog = args.catalog 939 | database = args.database 940 | table = args.table 941 | 942 | queryfile = args.queryfile 943 | verify_file(queryfile, steps) 944 | 945 | timeframe = args.timeframe 946 | verify_timeframe(timeframe, steps) 947 | 948 | for region in regions: 949 | steps, source_bucket, output_bucket, database, table, exists = verify_steps(steps, source_bucket, output_bucket, catalog, database, table, region, dl) 950 | 951 | for region in regions: 952 | run_steps(dl, region, first_region, steps, start, end, source_bucket, output_bucket, catalog, database, table, queryfile, exists, timeframe) 953 | 954 | if __name__ == "__main__": 955 | main() 956 | 957 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | boto3 2 | requests 3 | pyyaml 4 | pandas 5 | xlsxwriter 6 | click 7 | tqdm 8 | sphinx_rtd_theme 9 | sphinx 10 | m2r2 -------------------------------------------------------------------------------- /source/files/example.ddl: -------------------------------------------------------------------------------- 1 | CREATE EXTERNAL TABLE IF NOT EXISTS TABLE ( 2 | eventversion STRING, 3 | useridentity STRUCT< 4 | type:STRING, 5 | principalid:STRING, 6 | arn:STRING, 7 | accountid:STRING, 8 | invokedby:STRING, 9 | accesskeyid:STRING, 10 | userName:STRING, 11 | 
sessioncontext:STRUCT< 12 | attributes:STRUCT< 13 | mfaauthenticated:STRING, 14 | creationdate:STRING>, 15 | sessionissuer:STRUCT< 16 | type:STRING, 17 | principalId:STRING, 18 | arn:STRING, 19 | accountId:STRING, 20 | userName:STRING>, 21 | ec2RoleDelivery:string, 22 | webIdFederationData:map<string,string> 23 | > 24 | >, 25 | eventtime STRING, 26 | eventsource STRING, 27 | eventname STRING, 28 | awsregion STRING, 29 | sourceipaddress STRING, 30 | useragent STRING, 31 | errorcode STRING, 32 | errormessage STRING, 33 | requestparameters STRING, 34 | responseelements STRING, 35 | additionaleventdata STRING, 36 | requestid STRING, 37 | eventid STRING, 38 | resources ARRAY<STRUCT< 39 | arn:STRING, 40 | accountid:STRING, 41 | type:STRING>>, 42 | eventtype STRING, 43 | apiversion STRING, 44 | readonly STRING, 45 | recipientaccountid STRING, 46 | serviceeventdetails STRING, 47 | sharedeventid STRING, 48 | vpcendpointid STRING, 49 | tlsDetails struct< 50 | tlsVersion:string, 51 | cipherSuite:string, 52 | clientProvidedHostHeader:string> 53 | ) 54 | ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe' 55 | STORED AS INPUTFORMAT 'com.amazon.emr.cloudtrail.CloudTrailInputFormat' 56 | OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' 57 | LOCATION 's3://my_s3_bucket/my_logs_folder' -------------------------------------------------------------------------------- /source/files/policy.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Sid": "VisualEditor0", 6 | "Effect": "Allow", 7 | "Action": [ 8 | "athena:*", 9 | "glue:*" 10 | ], 11 | "Resource": "*" 12 | }, 13 | { 14 | "Sid": "VisualEditor1", 15 | "Effect": "Allow", 16 | "Action": [ 17 | "s3:PutObject", 18 | "s3:CreateBucket", 19 | "s3:DeleteObject", 20 | "s3:DeleteBucket" 21 | ], 22 | "Resource": "arn:aws:s3:::invictus-aws-*" 23 | } 24 | ] 25 | } -------------------------------------------------------------------------------- /source/files/queries.yaml: -------------------------------------------------------------------------------- 1 | aws_attached_malicious_lambda_layer: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'lambda.amazonaws.com' AND eventName LIKE 'UpdateFunctionConfiguration%' ESCAPE '\'); 2 | aws_cloudtrail_disable_logging: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'cloudtrail.amazonaws.com' AND eventName IN ('StopLogging', 'UpdateTrail', 'DeleteTrail')); 3 | aws_config_disable_recording: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'config.amazonaws.com' AND eventName IN ('DeleteDeliveryChannel', 'StopConfigurationRecorder')); 4 | aws_delete_identity: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'ses.amazonaws.com' AND eventName = 'DeleteIdentity'); 5 | aws_ec2_disable_encryption: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'ec2.amazonaws.com' AND eventName = 'DisableEbsEncryptionByDefault'); 6 | aws_ec2_startup_script_change: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'ec2.amazonaws.com' AND eventName = 'ModifyInstanceAttribute' AND requestParameters LIKE '%userData%'); 7 | aws_ec2_vm_export_failure: SELECT * FROM DATABASE.TABLE WHERE ((eventName = 'CreateInstanceExportTask' AND eventSource = 'ec2.amazonaws.com') AND NOT ((errorMessage LIKE '%%%' ESCAPE '\') OR (errorCode LIKE '%%%' ESCAPE '\') OR (responseElements LIKE '%Failure%' ESCAPE '\'))); 8 | aws_ecs_task_definition_cred_endpoint_query: SELECT * FROM ( SELECT * FROM DATABASE.TABLE WHERE requestparameters LIKE 
'%"command":"%$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI%"%'; 9 | aws_efs_fileshare_modified_or_deleted: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'elasticfilesystem.amazonaws.com' AND eventName = 'DeleteFileSystem'); 10 | aws_efs_fileshare_mount_modified_or_deleted: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'elasticfilesystem.amazonaws.com' AND eventName = 'DeleteMountTarget'); 11 | aws_eks_cluster_created_or_deleted: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'eks.amazonaws.com' AND eventName IN ('CreateCluster', 'DeleteCluster')); 12 | aws_elasticache_security_group_created: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'elasticache.amazonaws.com' AND eventName = 'CreateCacheSecurityGroup'); 13 | aws_elasticache_security_group_modified_or_deleted: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'elasticache.amazonaws.com' AND eventName IN ('DeleteCacheSecurityGroup', 'AuthorizeCacheSecurityGroupIngress', 'RevokeCacheSecurityGroupIngress', 'AuthorizeCacheSecurityGroupEgress', 'RevokeCacheSecurityGroupEgress')); 14 | aws_enum_buckets: SELECT * FROM DATABASE.TABLE WHERE ((eventSource = 's3.amazonaws.com' AND eventName = 'ListBuckets') AND NOT (useridentity.type = 'AssumedRole')); 15 | aws_guardduty_disruption: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'guardduty.amazonaws.com' AND eventName = 'CreateIPSet'); 16 | aws_iam_backdoor_users_keys: SELECT * FROM DATABASE.TABLE WHERE ((eventSource = 'iam.amazonaws.com' AND eventName = 'CreateAccessKey') AND NOT (userIdentity.arn LIKE '%userName%')) ; 17 | aws_passed_role_to_glue_development_endpoint: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'glue.amazonaws.com' AND eventName IN ('CreateDevEndpoint', 'DeleteDevEndpoint', 'UpdateDevEndpoint')); 18 | aws_rds_change_master_password: SELECT * FROM ( SELECT * FROM DATABASE.TABLE WHERE responseElements LIKE '%pendingModifiedValues%' ) WHERE responseElements LIKE '%"masterUserPassword":"%%%"' ESCAPE '\' AND eventName = 'ModifyDBInstance'; 19 | aws_rds_public_db_restore: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'rds.amazonaws.com' AND regexp_extract(responseElements, '"publiclyAccessible":"([^"]+)"', 1) LIKE 'true' AND eventName = 'RestoreDBInstanceFromDBSnapshot'); 20 | aws_root_account_usage: SELECT * FROM DATABASE.TABLE WHERE (useridentity.type = 'Root' AND NOT (eventType = 'AwsServiceEvent')); 21 | aws_route_53_domain_transferred_lock_disabled: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'route53.amazonaws.com' AND eventName = 'DisableDomainTransferLock'); 22 | aws_route_53_domain_transferred_to_another_account: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'route53.amazonaws.com' AND eventName = 'TransferDomainToAnotherAwsAccount'); 23 | aws_s3_data_management_tampering: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 's3.amazonaws.com' AND eventName IN ('PutBucketLogging', 'PutBucketWebsite', 'PutEncryptionConfiguration', 'PutLifecycleConfiguration', 'PutReplicationConfiguration', 'ReplicateObject', 'RestoreObject')); 24 | aws_securityhub_finding_evasion: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'securityhub.amazonaws.com' AND eventName IN ('BatchUpdateFindings', 'DeleteInsight', 'UpdateFindings', 'UpdateInsight')); 25 | aws_snapshot_backup_exfiltration: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'ec2.amazonaws.com' AND eventName = 'ModifySnapshotAttribute'); 26 | aws_sts_assumerole_misuse: SELECT * FROM DATABASE.TABLE WHERE (userIdentity.type = 'AssumedRole' AND userIdentity.sessionContext.sessionIssuer.type = 'Role'); 
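# Example of a custom query (illustrative addition, not one of the default detections): new queries follow the
# same one-line format as the entries above. Keep the literal DATABASE.TABLE placeholders, which invictus-aws
# substitutes at runtime, and end the statement with ';' so the optional timeframe filter can be appended to it.
# aws_security_group_ingress_opened: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'ec2.amazonaws.com' AND eventName = 'AuthorizeSecurityGroupIngress');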
27 | aws_sts_getsessiontoken_misuse: SELECT * FROM DATABASE.TABLE WHERE (eventSource = 'sts.amazonaws.com' AND eventName = 'GetSessionToken' AND userIdentity.type = 'IAMUser'); 28 | aws_update_login_profile: SELECT * FROM DATABASE.TABLE WHERE ((eventSource = 'iam.amazonaws.com' AND eventName = 'UpdateLoginProfile')); -------------------------------------------------------------------------------- /source/main/analysis.py: -------------------------------------------------------------------------------- 1 | """File used for the analysis.""" 2 | 3 | import yaml, datetime 4 | from source.utils.utils import athena_query, rename_file_s3, get_table, set_clients, date, get_bucket_and_prefix, ENDC, OKGREEN, ROOT_FOLDER, create_folder, create_tmp_bucket, get_random_chars 5 | import source.utils.utils 6 | from source.utils.enum import paginate 7 | import pandas as pd 8 | from os import remove, replace 9 | from time import sleep 10 | 11 | 12 | class Analysis: 13 | 14 | source_bucket = None 15 | output_bucket = None 16 | region = None 17 | results = None 18 | dl = None 19 | path = None 20 | time = None 21 | 22 | def __init__(self, region, dl): 23 | """Handle the constructor of the Analysis class. 24 | 25 | Parameters 26 | ---------- 27 | region : str 28 | Region in which to tool is executed 29 | dl : bool 30 | True if the user wants to download the results, False if he wants the results to be written in a s3 bucket 31 | """ 32 | self.region = region 33 | self.results = [] 34 | self.dl = dl 35 | 36 | #new folder for each run 37 | now = datetime.datetime.now() 38 | self.time = now.strftime("%H:%M:%S") 39 | 40 | if dl: 41 | self.path = ROOT_FOLDER + self.region + "/queries-results/" 42 | self.path = f"{self.path}{date}/{self.time}/" 43 | create_folder(self.path) 44 | 45 | def self_test(self): 46 | """Test function.""" 47 | print("[+] Logs Analysis test passed\n") 48 | 49 | def execute(self, source_bucket, output_bucket, catalog, db, table, queryfile, exists, timeframe): 50 | """Handle the main function of the class. 
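Set up the output location, create the Athena database and table if they do not exist yet, run every query of the query file and merge the per-query CSV results into a single xlsx file.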
51 | 52 | Parameters 53 | ---------- 54 | source_bucket : str 55 | Source bucket 56 | output_bucket : str 57 | Output bucket 58 | catalog : str 59 | Data catalog used with the database 60 | db : str 61 | Database containing the table for logs analytics 62 | table : str 63 | Contains the sql requirements to query the logs 64 | queryfile : str 65 | File containing the queries 66 | exists : tuple of bool 67 | If the input db and table already exists 68 | timeframe : str 69 | Time filter for default queries 70 | """ 71 | print(f"[+] Beginning Logs Analysis") 72 | 73 | set_clients(self.region) 74 | self.source_bucket = source_bucket 75 | 76 | print(source_bucket, output_bucket, catalog, db, table, queryfile, exists, timeframe) 77 | 78 | #this devil's code just get the location data of the .ddl file provided (the bucket where the source logs are stored) 79 | if not source_bucket and table and table.endswith(".ddl"): 80 | self.source_bucket = get_table(table, False)[1][1].split("LOCATION")[1].strip() 81 | 82 | if output_bucket == None: 83 | rand_str = get_random_chars(5) 84 | tmp_bucket = f"invictus-aws-tmp-results-{rand_str}" 85 | create_tmp_bucket(self.region, tmp_bucket) 86 | output_bucket = f"s3://{tmp_bucket}/" 87 | 88 | bucket, prefix = get_bucket_and_prefix(output_bucket) 89 | print(bucket, prefix) 90 | if not prefix: 91 | prefix = "queries-results/" 92 | self.output_bucket = f"{output_bucket}{prefix}{date}/{self.time}/" 93 | else: 94 | self.output_bucket = f"{output_bucket}{date}/{self.time}/" 95 | 96 | source.utils.utils.S3_CLIENT.put_object(Bucket=bucket, Key=(f"{prefix}{date}/{self.time}/")) 97 | 98 | #True if not using tool default db and table 99 | not_using_default_names = False 100 | if (catalog != None and db != None and table != None): 101 | not_using_default_names = True 102 | else: 103 | catalog = "AwsDataCatalog" 104 | db = "cloudtrailAnalysis" 105 | table = "logs" 106 | 107 | isTrail = self.is_trail_bucket(catalog, db, table) 108 | 109 | if not exists[0] or not exists[1]: 110 | self.init_athena(db, table, self.source_bucket, self.output_bucket, exists, isTrail) 111 | 112 | try: 113 | with open(queryfile) as f: 114 | queries = yaml.safe_load(f) 115 | print(f"[+] Using query file : {queryfile}") 116 | except Exception as e: 117 | print(f"[!] 
invictus-aws.py: error: {str(e)}") 118 | 119 | if not not_using_default_names: 120 | db = "cloudtrailAnalysis" 121 | elif table.endswith(".ddl"): 122 | table = get_table(table, False)[0] 123 | 124 | #Running all the queries 125 | 126 | for key, value in queries.items(): 127 | 128 | if timeframe != None: 129 | link = "AND" 130 | if "WHERE" not in value: 131 | link = "WHERE" 132 | if value[-1] == ";": 133 | value = value.replace(value[-1], f" {link} date_diff('day', from_iso8601_timestamp(eventtime), current_timestamp) <= {timeframe};") 134 | else: 135 | value = value + f" {link} date_diff('day', from_iso8601_timestamp(eventtime), current_timestamp) <= {timeframe};" 136 | 137 | print(f"[+] Running Query : {key}") 138 | #replacing DATABASE and TABLE in each query 139 | value = value.replace("DATABASE", db) 140 | value = value.replace("TABLE", table) 141 | 142 | result = athena_query(self.region, value, self.output_bucket) 143 | 144 | id = result["QueryExecution"]["QueryExecutionId"] 145 | self.results_query(id, key) 146 | 147 | bucket, folder = get_bucket_and_prefix(self.output_bucket) 148 | 149 | sleep(1) 150 | done = True 151 | 152 | try: 153 | rename_file_s3(bucket, folder, f"{key}-output.csv", f'{id}.csv') 154 | except Exception as e: 155 | done = False 156 | 157 | while not done : 158 | try: 159 | sleep(500/1000) 160 | rename_file_s3(bucket, folder, f"{key}-output.csv", f'{id}.csv') 161 | done = True 162 | except: 163 | done = False 164 | 165 | self.merge_results() 166 | self.clear_folder(self.dl) 167 | 168 | def init_athena(self, db, table, source_bucket, output_bucket, exists, isTrail): 169 | """Initiate athena database and table for further analysis. 170 | 171 | Parameters 172 | ---------- 173 | db : str 174 | Database used 175 | table : str 176 | Table used 177 | source_bucket : str 178 | Source bucket of the logs of the table 179 | output_bucket : str 180 | Bucket where to put the results of the queries 181 | exists : bool 182 | If the given db and table already exists 183 | isTrail : bool 184 | if the source bucket of the table is a bucket trail 185 | """ 186 | # if db doesn't exists 187 | if not exists[0]: 188 | query_db = f"CREATE DATABASE IF NOT EXISTS {db};" 189 | athena_query(self.region, query_db, self.output_bucket) 190 | print(f"[+] Database {db} created") 191 | 192 | #if table doesn't exists 193 | if not exists[1]: 194 | if table.endswith(".ddl"): 195 | tb = self.set_table(table, db) 196 | with open(table) as ddl: 197 | query_table = ddl.read() 198 | athena_query(self.region, query_table, self.output_bucket) 199 | table = tb 200 | elif not isTrail: 201 | query_table = f""" 202 | CREATE EXTERNAL TABLE IF NOT EXISTS {db}.{table} ( 203 | eventversion STRING, 204 | useridentity STRUCT< 205 | type:STRING, 206 | principalid:STRING, 207 | arn:STRING, 208 | accountid:STRING, 209 | invokedby:STRING, 210 | accesskeyid:STRING, 211 | userName:STRING, 212 | sessioncontext:STRUCT< 213 | attributes:STRUCT< 214 | mfaauthenticated:STRING, 215 | creationdate:STRING>, 216 | sessionissuer:STRUCT< 217 | type:STRING, 218 | principalId:STRING, 219 | arn:STRING, 220 | accountId:STRING, 221 | userName:STRING>, 222 | ec2RoleDelivery:string, 223 | webIdFederationData:map 224 | > 225 | >, 226 | eventtime STRING, 227 | eventsource STRING, 228 | eventname STRING, 229 | awsregion STRING, 230 | sourceipaddress STRING, 231 | useragent STRING, 232 | errorcode STRING, 233 | errormessage STRING, 234 | requestparameters STRING, 235 | responseelements STRING, 236 | additionaleventdata STRING, 237 | requestid 
STRING, 238 | eventid STRING, 239 | resources ARRAY>, 243 | eventtype STRING, 244 | apiversion STRING, 245 | readonly STRING, 246 | recipientaccountid STRING, 247 | serviceeventdetails STRING, 248 | sharedeventid STRING, 249 | vpcendpointid STRING, 250 | tlsDetails struct< 251 | tlsVersion:string, 252 | cipherSuite:string, 253 | clientProvidedHostHeader:string> 254 | ) 255 | ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe' 256 | LOCATION '{source_bucket}' 257 | """ 258 | else: 259 | query_table = f""" 260 | CREATE EXTERNAL TABLE IF NOT EXISTS {db}.{table} ( 261 | eventversion STRING, 262 | useridentity STRUCT< 263 | type:STRING, 264 | principalid:STRING, 265 | arn:STRING, 266 | accountid:STRING, 267 | invokedby:STRING, 268 | accesskeyid:STRING, 269 | userName:STRING, 270 | sessioncontext:STRUCT< 271 | attributes:STRUCT< 272 | mfaauthenticated:STRING, 273 | creationdate:STRING>, 274 | sessionissuer:STRUCT< 275 | type:STRING, 276 | principalId:STRING, 277 | arn:STRING, 278 | accountId:STRING, 279 | userName:STRING>, 280 | ec2RoleDelivery:string, 281 | webIdFederationData:map 282 | > 283 | >, 284 | eventtime STRING, 285 | eventsource STRING, 286 | eventname STRING, 287 | awsregion STRING, 288 | sourceipaddress STRING, 289 | useragent STRING, 290 | errorcode STRING, 291 | errormessage STRING, 292 | requestparameters STRING, 293 | responseelements STRING, 294 | additionaleventdata STRING, 295 | requestid STRING, 296 | eventid STRING, 297 | resources ARRAY>, 301 | eventtype STRING, 302 | apiversion STRING, 303 | readonly STRING, 304 | recipientaccountid STRING, 305 | serviceeventdetails STRING, 306 | sharedeventid STRING, 307 | vpcendpointid STRING, 308 | tlsDetails struct< 309 | tlsVersion:string, 310 | cipherSuite:string, 311 | clientProvidedHostHeader:string> 312 | ) 313 | ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe' 314 | STORED AS INPUTFORMAT 'com.amazon.emr.cloudtrail.CloudTrailInputFormat' 315 | OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' 316 | LOCATION '{source_bucket}' 317 | """ 318 | 319 | athena_query(self.region, query_table, output_bucket) 320 | print(f"[+] Table {db}.{table} created") 321 | 322 | def set_table(self, ddl, db): 323 | """Replace the table name of the ddl file by database.table 324 | 325 | Parameters 326 | ---------- 327 | ddl : str 328 | Ddl file 329 | db : str 330 | Name of the db 331 | 332 | Returns 333 | ------- 334 | table : str 335 | Name of the table 336 | """ 337 | table, data = get_table(ddl, True) 338 | if not "." in table: 339 | data = data[0].replace(table, f"{db}.{table}") + "(\n" + data[1] 340 | with open(ddl, "wt") as f: 341 | f.write(data) 342 | f.close() 343 | return table 344 | 345 | def results_query(self, id, query): 346 | """Print the results of the query and where they are written. 347 | 348 | Parameters 349 | ---------- 350 | id : str 351 | Id of the query 352 | query : str 353 | Query run 354 | """ 355 | number = len(source.utils.utils.ATHENA_CLIENT.get_query_results(QueryExecutionId=id)["ResultSet"]["Rows"]) 356 | if number == 2: 357 | print(f"[+] {OKGREEN}{number-1} hit !{ENDC}") 358 | self.results.append(f"{query}-output.csv") 359 | elif number > 999: 360 | print(f"[+] {OKGREEN}{number-1}+ hits !{ENDC}") 361 | self.results.append(f"{query}-output.csv") 362 | elif number > 2: 363 | print(f"[+] {OKGREEN}{number-1} hits !{ENDC}") 364 | self.results.append(f"{query}-output.csv") 365 | else: 366 | print(f"[+] {number-1} hit. 
You may have better luck next time my young padawan !") 367 | 368 | def merge_results(self): 369 | """Merge the results csv files into a single xlsx file.""" 370 | if self.results: 371 | 372 | bucket_name, prefix = get_bucket_and_prefix(self.output_bucket) 373 | 374 | name_writer = "merged_file.xlsx" 375 | writer = pd.ExcelWriter(name_writer, engine='xlsxwriter') 376 | 377 | for local_file_name in self.results: 378 | s3_file_name = prefix + local_file_name 379 | source.utils.utils.S3_CLIENT.download_file(bucket_name, s3_file_name, local_file_name) 380 | 381 | 382 | for i, file in enumerate(self.results): 383 | sheet = str(file)[:-4] 384 | if len(sheet) > 31: 385 | sheet = sheet[:24] + sheet[-7:] 386 | df = pd.read_csv(file, sep=",", dtype="string") 387 | df.to_excel(writer, sheet_name=sheet) 388 | 389 | writer.close() 390 | 391 | if not self.dl: 392 | 393 | source.utils.utils.S3_CLIENT.upload_file(name_writer, bucket_name, f'{prefix}{name_writer}') 394 | remove(name_writer) 395 | for local_file_name in self.results: 396 | remove(local_file_name) 397 | 398 | print(f"[+] Results stored in {self.output_bucket}") 399 | print(f"[+] Merged results stored into {self.output_bucket}{name_writer}") 400 | 401 | self.results.append(name_writer) 402 | 403 | else: 404 | for local_file_name in self.results: 405 | replace(local_file_name, f"{self.path}{local_file_name}") 406 | replace(name_writer, f"{self.path}{name_writer}") 407 | print(f"[+] Results stored in {self.path}") 408 | print(f"[+] Merged results stored into {self.path}{name_writer}") 409 | else: 410 | print("[+] No results were found") 411 | 412 | def clear_folder(self, dl): 413 | """If the results are written locally, delete the tmp bucket created for the analysis. If the results are written to a bucket, clear the bucket so the .metadata and .txt files are deleted. 414 | 415 | Parameters 416 | ---------- 417 | dl : bool 418 | True if the user wants to download the results, False if the results should be written to an S3 bucket 419 | """ 420 | bucket, prefix = get_bucket_and_prefix(self.output_bucket) 421 | 422 | if dl: 423 | res = paginate(source.utils.utils.S3_CLIENT, "list_objects_v2", "Contents", Bucket=bucket) 424 | 425 | if res: 426 | objects = [{'Key': obj['Key']} for obj in res] 427 | source.utils.utils.S3_CLIENT.delete_objects(Bucket=bucket, Delete={'Objects': objects}) 428 | 429 | try: 430 | source.utils.utils.S3_CLIENT.delete_bucket( 431 | Bucket=bucket 432 | ) 433 | except Exception as e: 434 | print(f"[!] invictus-aws.py: error: {str(e)}") 435 | 436 | else: 437 | 438 | res = paginate(source.utils.utils.S3_CLIENT, "list_objects_v2", "Contents", Bucket=bucket, Prefix=prefix) 439 | 440 | if res: 441 | for el in res: 442 | if el["Key"].split("/")[-1] not in self.results: 443 | source.utils.utils.S3_CLIENT.delete_object( 444 | Bucket=bucket, 445 | Key=f"{el['Key']}" 446 | ) 447 | 448 | def is_trail_bucket(self, catalog, db, table): 449 | """Verify if a table source bucket is a trail bucket. 
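A trail bucket is a bucket that CloudTrail delivers logs to directly; it can contain millions of objects, so a warning is printed because querying it in full can be slow.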
450 | 451 | Parameters 452 | ---------- 453 | catalog : str 454 | Catalog of the database and table 455 | db : str 456 | Database of the table 457 | table : str 458 | Table of the logs 459 | 460 | Returns 461 | ------- 462 | isTrail : bool 463 | if the trail has a specified bucket 464 | """ 465 | isTrail = False 466 | 467 | if self.source_bucket != None: 468 | 469 | response = source.utils.utils.CLOUDTRAIL_CLIENT.describe_trails() 470 | if response['trailList']: 471 | trails = response['trailList'] 472 | 473 | for trail in trails: 474 | logging_info = trail['S3BucketName'] 475 | 476 | if logging_info and logging_info in self.source_bucket: 477 | isTrail = True 478 | src = self.source_bucket 479 | print("[!] Warning : You are using a trail bucket as source. Be aware these buckets can have millions of logs and so the tool can take a lot of time to process it all. Use the most precise subfolder available to be more efficient.") 480 | break 481 | else: 482 | try: 483 | response = source.utils.utils.ATHENA_CLIENT.get_table_metadata( 484 | CatalogName=catalog, 485 | DatabaseName=db, 486 | TableName=table 487 | ) 488 | 489 | if response["TableMetadata"]["Parameters"]["inputformat"] == "com.amazon.emr.cloudtrail.CloudTrailInputFormat": 490 | isTrail = True 491 | except Exception as e: 492 | print(e) 493 | 494 | return isTrail 495 | 496 | -------------------------------------------------------------------------------- /source/main/configuration.py: -------------------------------------------------------------------------------- 1 | """File used for the configuration collection.""" 2 | 3 | from source.utils.utils import create_s3_if_not_exists, PREPARATION_BUCKET, ROOT_FOLDER, create_command, create_folder, write_file, write_s3, set_clients 4 | import source.utils.utils 5 | from source.utils.enum import * 6 | import json, boto3 7 | from time import sleep 8 | 9 | 10 | class Configuration: 11 | 12 | results = {} 13 | bucket = "" 14 | region = None 15 | services = {} 16 | dl = None 17 | 18 | def __init__(self, region, dl): 19 | """Handle the constructor of the Configuration class. 20 | 21 | Parameters 22 | ---------- 23 | region : str 24 | Region in which to tool is executed 25 | dl : bool 26 | True if the user wants to download the results, False if he wants the results to be written in a s3 bucket 27 | """ 28 | self.region = region 29 | self.dl = dl 30 | if not self.dl: 31 | self.bucket = create_s3_if_not_exists(self.region, PREPARATION_BUCKET) 32 | 33 | def self_test(self): 34 | """Test function.""" 35 | print("[+] Configuration test passed") 36 | 37 | def execute(self, services, regionless): 38 | """Handle the main function of the class. Run every configuration function and then write the results where asked. 39 | 40 | Parameters 41 | --------- 42 | services : list 43 | Array used to write the results of the different configuration functions 44 | regionless : str 45 | "not-all" if the tool is used on only one region. 
Otherwise, the first region the tool is run on 46 | """ 47 | print("[+] Beginning Configuration Extraction") 48 | 49 | set_clients(self.region) 50 | 51 | self.services = services 52 | 53 | if (regionless != "" and regionless == self.region) or regionless == "not-all": 54 | self.get_configuration_s3() 55 | self.get_configuration_iam() 56 | self.get_configuration_cloudtrail() 57 | self.get_configuration_route53() 58 | 59 | self.get_configuration_wafv2() 60 | self.get_configuration_lambda() 61 | self.get_configuration_vpc() 62 | self.get_configuration_elasticbeanstalk() 63 | 64 | self.get_configuration_ec2() 65 | self.get_configuration_dynamodb() 66 | self.get_configuration_rds() 67 | 68 | self.get_configuration_cloudwatch() 69 | self.get_configuration_guardduty() 70 | self.get_configuration_detective() 71 | self.get_configuration_inspector2() 72 | self.get_configuration_maciev2() 73 | 74 | if self.dl: 75 | confs = ROOT_FOLDER + self.region + "/configurations/" 76 | create_folder(confs) 77 | with tqdm(desc="[+] Writing results", leave=False, total = len(self.results)) as pbar: 78 | for el in self.results: 79 | write_file( 80 | confs + f"{el}.json", 81 | "w", 82 | json.dumps(self.results[el], indent=4, default=str), 83 | ) 84 | pbar.update() 85 | sleep(0.1) 86 | print(f"[+] Configuration results stored in the folder {confs}") 87 | else: 88 | with tqdm(desc="[+] Writing results", leave=False, total = len(self.results)) as pbar: 89 | for el in self.results: 90 | write_s3( 91 | self.bucket, 92 | f"{self.region}/configuration/{el}.json", 93 | json.dumps(self.results[el], indent=4, default=str), 94 | ) 95 | pbar.update() 96 | sleep(0.1) 97 | print(f"[+] Configuration results stored in the bucket {self.bucket}") 98 | 99 | def get_configuration_s3(self): 100 | """Retrieve multiple elements of the configuration of the existing s3 buckets.""" 101 | s3_list = self.services["s3"] 102 | 103 | ''' 104 | In the first part, we check whether the enumeration of the service has already been done. 105 | If it hasn't, we do it here. 106 | If it has, we check whether the service is in use or not. 107 | ''' 108 | 109 | if s3_list["count"] == -1: 110 | elements = s3_lookup() 111 | 112 | if len(elements) == 0: 113 | self.display_progress(0, "s3") 114 | return 115 | 116 | elif s3_list["count"] == 0: 117 | self.display_progress(0, "s3") 118 | return 119 | else: 120 | elements = s3_list["elements"] 121 | 122 | ''' 123 | In this part, we retrieve parts of the configuration of each element of the service (s3 in this case). 124 | Then all the results are added to the same json file. 125 | This way of working is used in every function of this class. 
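A count of -1 means the enumeration step (step 1) was not run for this service, so the lookup is done here. A count of 0 means the enumeration ran and found nothing, so the service is skipped. Any other value means the elements already collected during step 1 are reused.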
126 | ''' 127 | 128 | objects = {} 129 | buckets_logging = {} 130 | buckets_policy = {} 131 | buckets_acl = {} 132 | buckets_location = {} 133 | 134 | with tqdm(desc="[+] Getting S3 Configuration", leave=False, total = len(elements)) as pbar: 135 | for bucket in elements: 136 | bucket_name = bucket["Name"] 137 | objects[bucket_name] = [] 138 | 139 | # list_objects_v2 140 | 141 | objects[bucket_name] = simple_paginate(source.utils.utils.S3_CLIENT, "list_objects_v2", Bucket=bucket_name) 142 | 143 | # get_bucket_logging 144 | 145 | response = try_except(source.utils.utils.S3_CLIENT.get_bucket_logging, Bucket=bucket_name) 146 | if "LoggingEnabled" in response: 147 | buckets_logging[bucket_name] = response 148 | else: 149 | buckets_logging[bucket_name] = {"LoggingEnabled": False} 150 | 151 | # get_bucket_policy 152 | 153 | response = try_except(source.utils.utils.S3_CLIENT.get_bucket_policy, Bucket=bucket_name) 154 | buckets_policy[bucket_name] = json.loads(response.get("Policy", "{}")) 155 | 156 | # get_bucket_acl 157 | 158 | response = try_except(source.utils.utils.S3_CLIENT.get_bucket_acl, Bucket=bucket_name) 159 | response.pop("ResponseMetadata", None) 160 | response = fix_json(response) 161 | buckets_acl[bucket_name] = response 162 | 163 | # get_bucket_location 164 | 165 | response = try_except(source.utils.utils.S3_CLIENT.get_bucket_location, Bucket=bucket_name) 166 | response.pop("ResponseMetadata", None) 167 | response = fix_json(response) 168 | buckets_location[bucket_name] = response 169 | pbar.update() 170 | 171 | # Presenting the results properly 172 | results = [] 173 | results.append( 174 | create_command("aws s3api list-objects-v2 --bucket ", objects) 175 | ) 176 | results.append( 177 | create_command( 178 | "aws s3api get-bucket-logging --bucket ", buckets_logging 179 | ) 180 | ) 181 | results.append( 182 | create_command( 183 | "aws s3api get-bucket-policy --bucket ", buckets_policy 184 | ) 185 | ) 186 | results.append( 187 | create_command("aws s3api get-bucket-acl --bucket ", buckets_acl) 188 | ) 189 | results.append( 190 | create_command( 191 | "aws s3api get-bucket-location --bucket ", buckets_location 192 | ) 193 | ) 194 | self.results["s3"] = results 195 | self.display_progress(len(results), "s3") 196 | 197 | def get_configuration_wafv2(self): 198 | """Retrieve multiple elements of the configuration of the existing web acls.""" 199 | waf_list = self.services["wafv2"] 200 | 201 | if waf_list["count"] == -1: 202 | wafs = misc_lookup("WAF", source.utils.utils.WAF_CLIENT.list_web_acls, "NextMarker", "WebACLs", Scope="REGIONAL", Limit=100) 203 | 204 | identifiers = [] 205 | for el in wafs: 206 | identifiers.append(el["ARN"]) 207 | 208 | if len(identifiers) == 0: 209 | self.display_progress(0, "wafv2") 210 | return 211 | 212 | elif waf_list["count"] == 0: 213 | self.display_progress(0, "wafv2") 214 | return 215 | else: 216 | identifiers = waf_list["ids"] 217 | 218 | logging_config = {} 219 | rule_groups = {} 220 | managed_rule_sets = {} 221 | ip_sets = {} 222 | resources = {} 223 | 224 | with tqdm(desc="[+] Getting WAF configuration", leave=False, total = len(identifiers)) as pbar: 225 | for arn in identifiers: 226 | 227 | # get_logging_configuration 228 | 229 | response = try_except(source.utils.utils.WAF_CLIENT.get_logging_configuration, ResourceArn=arn) 230 | response.pop("ResponseMetadata", None) 231 | response = fix_json(response) 232 | if "WAFNonexistentItemException" in response["error"]: 233 | response["error"] = "[!] 
Error: The Logging feature is not enabled for the selected Amazon WAF Web Access Control List (Web ACL)." 234 | logging_config[arn] = response 235 | 236 | # list_rules_groups 237 | # Use of misc_lookup as not every results are listed at the first call if there are a lot 238 | 239 | rule_groups[arn] = simple_misc_lookup("WAF", source.utils.utils.WAF_CLIENT.list_rule_groups, "NextMarker", Scope="REGIONAL", Limit=100) 240 | 241 | 242 | # list_managed_rule_sets 243 | 244 | managed_rule_sets[arn] = simple_misc_lookup("WAF", source.utils.utils.WAF_CLIENT.list_managed_rule_sets, "NextMarker", Scope="REGIONAL", Limit=100) 245 | 246 | # list_ip_sets 247 | 248 | ip_sets[arn] = simple_misc_lookup("WAF", source.utils.utils.WAF_CLIENT.list_ip_sets, "NextMarker", Scope="REGIONAL", Limit=100) 249 | 250 | #list_resources_for_web_acl 251 | 252 | response = try_except("WAF", source.utils.utils.WAF_CLIENT.list_resources_for_web_acl, WebACLArn=arn) 253 | response.pop("ResponseMetadata", None) 254 | response = fix_json(response) 255 | resources[arn] = response 256 | pbar.update() 257 | 258 | results = [] 259 | results.append( 260 | create_command( 261 | "aws wafv2 get-logging-configuration --resource-arn ", 262 | logging_config, 263 | ) 264 | ) 265 | results.append( 266 | create_command("aws wafv2 list-rule-groups --scope REGIONAL", rule_groups) 267 | ) 268 | results.append( 269 | create_command( 270 | "aws wafv2 list-managed-rule-sets --scope REGIONAL", managed_rule_sets 271 | ) 272 | ) 273 | results.append( 274 | create_command("aws wafv2 list-ip-sets --scope REGIONAL", ip_sets) 275 | ) 276 | results.append( 277 | create_command( 278 | "aws wafv2 list-resources-for-web-acl --resource-arn ", resources 279 | ) 280 | ) 281 | 282 | self.results["wafv2"] = results 283 | self.display_progress(len(results), "wafv2") 284 | 285 | def get_configuration_lambda(self): 286 | """Retrieve multiple elements of the configuration of the existing lambdas.""" 287 | lambda_list = self.services["lambda"] 288 | 289 | if lambda_list["count"] == -1: 290 | functions = paginate(source.utils.utils.LAMBDA_CLIENT, "list_functions", "Functions") 291 | 292 | if len(functions) == 0: 293 | self.display_progress(0, "lambda") 294 | return 295 | 296 | identifiers = [] 297 | for function in functions: 298 | identifiers.append(function["FunctionName"]) 299 | 300 | elif lambda_list["count"] == 0: 301 | self.display_progress(0, "lambda") 302 | return 303 | else: 304 | identifiers = lambda_list["ids"] 305 | 306 | function_config = {} 307 | 308 | with tqdm(desc="[+] Getting LAMBDA configuration", leave=False, total = len(identifiers)) as pbar: 309 | for name in identifiers: 310 | if name == "": 311 | continue 312 | 313 | # get_function_configuration 314 | 315 | response = try_except( 316 | source.utils.utils.LAMBDA_CLIENT.get_function_configuration, FunctionName=name 317 | ) 318 | response.pop("ResponseMetadata", None) 319 | response = fix_json(response) 320 | function_config[name] = response 321 | pbar.update() 322 | 323 | # get_account_settings 324 | 325 | response = try_except(source.utils.utils.LAMBDA_CLIENT.get_account_settings) 326 | response.pop("ResponseMetadata", None) 327 | response = fix_json(response) 328 | account_settings = response 329 | 330 | # list_event_source_mappings 331 | 332 | event_source_mappings = simple_paginate(source.utils.utils.LAMBDA_CLIENT, "list_event_source_mappings") 333 | 334 | results = [] 335 | results.append( 336 | create_command( 337 | "aws lambda get-function-configuration --function-name ", 338 | 
function_config, 339 | ) 340 | ) 341 | results.append( 342 | create_command("aws lambda get-account-settings", account_settings) 343 | ) 344 | results.append( 345 | create_command( 346 | "aws lambda list-event-source-mappings", event_source_mappings 347 | ) 348 | ) 349 | 350 | self.results["lambda"] = results 351 | self.display_progress(len(results), "lambda") 352 | 353 | def get_configuration_vpc(self): 354 | """Retrieve multiple elements of the configuration of the existing vpcs.""" 355 | vpc_list = self.services["vpc"] 356 | 357 | if vpc_list["count"] == -1: 358 | 359 | vpcs = paginate(source.utils.utils.EC2_CLIENT, "describe_vpcs", "Vpcs") 360 | 361 | if len(vpcs) == 0: 362 | self.display_progress(0, "vpc") 363 | return 364 | 365 | identifiers = [] 366 | for vpc in vpcs: 367 | identifiers.append(vpc["VpcId"]) 368 | 369 | elif vpc_list["count"] == 0: 370 | self.display_progress(0, "vpc") 371 | return 372 | else: 373 | identifiers = vpc_list["ids"] 374 | 375 | dns_support = {} 376 | dns_hostnames = {} 377 | 378 | with tqdm(desc="[+] Getting VPC configuration", leave=False, total = len(identifiers)) as pbar: 379 | for id in identifiers: 380 | if id == "": 381 | continue 382 | 383 | # describe_vpc_attribute 384 | 385 | response = try_except( 386 | source.utils.utils.EC2_CLIENT.describe_vpc_attribute, 387 | VpcId=id, 388 | Attribute="enableDnsSupport", 389 | ) 390 | response.pop("ResponseMetadata", None) 391 | response = fix_json(response) 392 | dns_support[id] = response 393 | 394 | # describe_vpc_attribute 395 | 396 | response = try_except( 397 | source.utils.utils.EC2_CLIENT.describe_vpc_attribute, 398 | VpcId=id, 399 | Attribute="enableDnsHostnames", 400 | ) 401 | response.pop("ResponseMetadata", None) 402 | response = fix_json(response) 403 | dns_hostnames[id] = response 404 | pbar.update() 405 | 406 | # describe_flow_logs 407 | 408 | flow_logs = simple_paginate(source.utils.utils.EC2_CLIENT, "describe_flow_logs") 409 | 410 | # describe_vpc_peering_connections 411 | 412 | peering_connections = simple_paginate(source.utils.utils.EC2_CLIENT, "describe_vpc_peering_connections") 413 | 414 | # describe_vpc_endpoint_connections 415 | 416 | endpoint_connections = simple_paginate(source.utils.utils.EC2_CLIENT, "describe_vpc_endpoint_connections") 417 | 418 | # describe_vpc_endpoint_service_configurations 419 | 420 | endpoint_service_config = simple_paginate(source.utils.utils.EC2_CLIENT, "describe_vpc_endpoint_service_configurations") 421 | 422 | # describe_vpc_classic_link 423 | 424 | response = try_except(source.utils.utils.EC2_CLIENT.describe_vpc_classic_link) 425 | response.pop("ResponseMetadata", None) 426 | response = fix_json(response) 427 | classic_links = response 428 | 429 | # describe_vpc_endpoints 430 | 431 | endpoints = simple_paginate(source.utils.utils.EC2_CLIENT, "describe_vpc_endpoints") 432 | 433 | # describe_local_gateway_route_table_vpc_associations 434 | 435 | response = simple_paginate(source.utils.utils.EC2_CLIENT, "describe_local_gateway_route_table_vpc_associations") 436 | local_gateway_route_table = response 437 | 438 | results = [] 439 | results.append( 440 | create_command( 441 | "aws ec2 describe-vpc-attribute --vpc-id --attribute enableDnsSupport", 442 | dns_support, 443 | ) 444 | ) 445 | results.append( 446 | create_command( 447 | "aws ec2 describe-vpc-attribute --vpc-id --attribute enableDnsHostnames", 448 | dns_hostnames, 449 | ) 450 | ) 451 | results.append(create_command("aws ec2 describe-flow-logs", flow_logs)) 452 | results.append( 453 | create_command( 454 
| "aws ec2 describe-vpc-peering-connections", peering_connections 455 | ) 456 | ) 457 | results.append( 458 | create_command( 459 | "aws ec2 describe-vpc-endpoint-connections", endpoint_connections 460 | ) 461 | ) 462 | results.append( 463 | create_command( 464 | "aws ec2 describe-vpc-endpoint-service-configurations", 465 | endpoint_service_config, 466 | ) 467 | ) 468 | results.append( 469 | create_command("aws ec2 describe-vpc-classic-link", classic_links) 470 | ) 471 | results.append(create_command("aws ec2 describe-vpc-endpoints", endpoints)) 472 | results.append( 473 | create_command( 474 | "aws ec2 describe-local-gateway-route-table-vpc-associations", 475 | local_gateway_route_table, 476 | ) 477 | ) 478 | 479 | self.results["vpc"] = results 480 | self.display_progress(len(results), "vpc") 481 | 482 | def get_configuration_elasticbeanstalk(self): 483 | """Retrieve multiple elements of the configuration of the existing elasticbeanstalk environments.""" 484 | eb_list = self.services["elasticbeanstalk"] 485 | 486 | if eb_list["count"] == -1: 487 | environments = paginate(source.utils.utils.EB_CLIENT, "describe_environments", "Environments") 488 | 489 | if len(environments) == 0: 490 | self.display_progress(0, "elasticbeanstalk") 491 | return 492 | 493 | identifiers = [] 494 | for env in environments: 495 | identifiers.append(env["EnvironmentId"]) 496 | 497 | elif eb_list["count"] == 0: 498 | self.display_progress(0, "elasticbeanstalk") 499 | return 500 | else: 501 | identifiers = [] 502 | environments = eb_list["elements"] 503 | for el in environments: 504 | identifiers.append(el["EnvironmentId"]) 505 | 506 | resources = {} 507 | managed_actions = {} 508 | managed_action_history = {} 509 | instances_health = {} 510 | 511 | with tqdm(desc="[+] Getting ELASTICBEANSTALK configuration", leave=False, total = len(identifiers)) as pbar: 512 | for id in identifiers: 513 | if id == "": 514 | continue 515 | resources[id] = [] 516 | 517 | # describe_environment_resources 518 | 519 | response = try_except( 520 | source.utils.utils.EB_CLIENT.describe_environment_resources, EnvironmentId=id 521 | ) 522 | response.pop("ResponseMetadata", None) 523 | response = fix_json(response) 524 | resources[id] = response 525 | 526 | # describe_environment_managed_actions 527 | 528 | managed_actions[id] = [] 529 | response = try_except( 530 | source.utils.utils.EB_CLIENT.describe_environment_managed_actions, EnvironmentId=id 531 | ) 532 | response.pop("ResponseMetadata", None) 533 | response = fix_json(response) 534 | managed_actions[id] = response 535 | 536 | # describe_environment_managed_action_history 537 | 538 | managed_action_history[id] = [] 539 | managed_action_history[id] = simple_paginate(source.utils.utils.EB_CLIENT, "describe_environment_managed_action_history", EnvironmentId=id) 540 | 541 | # describe_instances_health 542 | 543 | instances_health[id] = [] 544 | instances_health[id] = simple_misc_lookup("ELASTICBEANSTALK", source.utils.utils.EB_CLIENT.describe_instances_health, "NextToken", EnvironmentId=id) 545 | pbar.update() 546 | 547 | # describe_applications 548 | 549 | response = try_except(source.utils.utils.EB_CLIENT.describe_applications) 550 | response.pop("ResponseMetadata", None) 551 | data = fix_json(response) 552 | applications = data 553 | 554 | # describe_account_attributes 555 | 556 | response = try_except(source.utils.utils.EB_CLIENT.describe_account_attributes) 557 | response.pop("ResponseMetadata", None) 558 | data = response 559 | account_attributes = data 560 | 561 | results = [] 
562 | results.append( 563 | create_command( 564 | "aws elasticbeanstalk describe-environment-resources --environment-id ", 565 | resources, 566 | ) 567 | ) 568 | results.append( 569 | create_command( 570 | "aws elasticbeanstalk describe-environment-managed-actions --environment-id ", 571 | managed_actions, 572 | ) 573 | ) 574 | results.append( 575 | create_command( 576 | "aws elasticbeanstalk describe-environment-managed-action-history --environment-id ", 577 | managed_action_history, 578 | ) 579 | ) 580 | results.append( 581 | create_command( 582 | "aws elasticbeanstalk describe-instances-health --environment-id ", 583 | instances_health, 584 | ) 585 | ) 586 | results.append( 587 | create_command("aws elasticbeanstalk describe-environments", environments) 588 | ) 589 | results.append( 590 | create_command("aws elasticbeanstalk describe-applications", applications) 591 | ) 592 | results.append( 593 | create_command( 594 | "aws elasticbeanstalk describe-account-attributes", account_attributes 595 | ) 596 | ) 597 | 598 | self.results["eb"] = results 599 | self.display_progress(len(results), "elasticbeanstalk") 600 | 601 | def get_configuration_route53(self): 602 | """Retrieve multiple elements of the configuration of the existing routes53 hosted zones.""" 603 | route53_list = self.services["route53"] 604 | 605 | if route53_list["count"] == -1: 606 | hosted_zones = paginate(source.utils.utils.ROUTE53_CLIENT, "list_hosted_zones", "HostedZones") 607 | 608 | if len(hosted_zones) == 0: 609 | self.display_progress(0, "route53") 610 | return 611 | 612 | identifiers = [] 613 | for zone in hosted_zones: 614 | identifiers.append(zone["Id"]) 615 | 616 | elif route53_list["count"] == 0: 617 | self.display_progress(0, "route53") 618 | return 619 | else: 620 | identifiers = route53_list["ids"] 621 | 622 | # list_traffic_policies 623 | 624 | get_traffic_policies = list_traffic_policies_lookup(source.utils.utils.ROUTE53_CLIENT.list_traffic_policies) 625 | 626 | # list_resolver_configs 627 | 628 | resolver_configs = simple_paginate(source.utils.utils.ROUTE53_RESOLVER_CLIENT, "list_resolver_configs") 629 | 630 | # list_firewall_configs 631 | 632 | resolver_firewall_config = simple_paginate(source.utils.utils.ROUTE53_RESOLVER_CLIENT, "list_firewall_configs") 633 | 634 | # list_resolver_query_log_configs 635 | 636 | resolver_log_configs = simple_paginate(source.utils.utils.ROUTE53_RESOLVER_CLIENT, "list_resolver_query_log_configs") 637 | 638 | get_zones = [] 639 | results = [] 640 | 641 | # get_hosted_zone 642 | 643 | with tqdm(desc="[+] Getting ROUTE53 configuration", leave=False, total = len(identifiers)) as pbar: 644 | for id in identifiers: 645 | response = try_except(source.utils.utils.ROUTE53_CLIENT.get_hosted_zone, Id=id) 646 | response.pop("ResponseMetadata", None) 647 | response = fix_json(response) 648 | get_zones.append(response) 649 | pbar.update() 650 | 651 | results.append( 652 | create_command("aws route53 list-traffic-policies", get_traffic_policies) 653 | ) 654 | results.append( 655 | create_command("aws route53 get-hosted-zone --id ", get_zones) 656 | ) 657 | results.append( 658 | create_command( 659 | "aws route53resolver list-resolver-configs", resolver_configs 660 | ) 661 | ) 662 | results.append( 663 | create_command( 664 | "aws route53resolver list-resolver-query-log-configs", 665 | resolver_log_configs, 666 | ) 667 | ) 668 | results.append( 669 | create_command( 670 | "aws route53resolver list-firewall-configs ", resolver_firewall_config 671 | ) 672 | ) 673 | 674 | 
self.results["route53"] = results 675 | self.display_progress(len(results), "route53") 676 | return 677 | 678 | def get_configuration_ec2(self): 679 | """Retrieve multiple elements of the configuration of the existing ec2 instances.""" 680 | ec2_list = self.services["ec2"] 681 | 682 | if ec2_list["count"] == -1: 683 | elements = ec2_lookup() 684 | 685 | if len(elements) == 0: 686 | self.display_progress(0, "ec2") 687 | return 688 | 689 | 690 | 691 | 692 | 693 | elif ec2_list["count"] == 0: 694 | self.display_progress(0, "ec2") 695 | return 696 | 697 | # describe_export_tasks 698 | 699 | response = try_except(source.utils.utils.EC2_CLIENT.describe_export_tasks) 700 | response.pop("ResponseMetadata", None) 701 | export = fix_json(response) 702 | 703 | # describe_fleets 704 | 705 | fleets = simple_paginate(source.utils.utils.EC2_CLIENT, "describe_fleets") 706 | 707 | # describe_hosts 708 | 709 | hosts = simple_paginate(source.utils.utils.EC2_CLIENT, "describe_hosts") 710 | 711 | # describe_key_pairs 712 | 713 | response = try_except(source.utils.utils.EC2_CLIENT.describe_key_pairs) 714 | response.pop("ResponseMetadata", None) 715 | key_pairs = fix_json(response) 716 | 717 | # describe_volumes 718 | 719 | volumes = simple_paginate(source.utils.utils.EC2_CLIENT, "describe_volumes") 720 | 721 | # describe_subnets 722 | 723 | subnets = simple_paginate(source.utils.utils.EC2_CLIENT, "describe_subnets") 724 | 725 | # describe_security_groups 726 | 727 | sec_groups = simple_paginate(source.utils.utils.EC2_CLIENT, "describe_security_groups") 728 | 729 | # describe_route_tables 730 | 731 | route_tables = simple_paginate(source.utils.utils.EC2_CLIENT, "describe_route_tables") 732 | 733 | # describe_snapshots 734 | 735 | snapshots = simple_paginate(source.utils.utils.EC2_CLIENT, "describe_snapshots") 736 | 737 | results = [] 738 | results.append(create_command("aws ec2 describe-export-tasks", export)) 739 | results.append(create_command("aws ec2 describe-fleets", fleets)) 740 | results.append(create_command("aws ec2 describe-hosts", hosts)) 741 | results.append(create_command("aws ec2 describe-key-pairs", key_pairs)) 742 | results.append(create_command("aws ec2 describe-volumes", volumes)) 743 | results.append(create_command("aws ec2 describe-subnets", subnets)) 744 | results.append(create_command("aws ec2 describe-security-groups", sec_groups)) 745 | results.append(create_command("aws ec2 describe-route-tables", route_tables)) 746 | results.append(create_command("aws ec2 describe-snapshots", snapshots)) 747 | 748 | self.results["ec2"] = results 749 | self.display_progress(len(results), "ec2") 750 | 751 | def get_configuration_iam(self): 752 | """Retrieve multiple elements of the configuration of the existing iam users.""" 753 | iam_list = self.services["iam"] 754 | 755 | if iam_list["count"] == -1: 756 | elements = paginate(source.utils.utils.IAM_CLIENT, "list_users", "Users") 757 | 758 | if len(elements) == 0: 759 | self.display_progress(0, "iam") 760 | return 761 | 762 | elif iam_list["count"] == 0: 763 | self.display_progress(0, "iam") 764 | return 765 | 766 | # get_account_summary 767 | 768 | response = try_except(source.utils.utils.IAM_CLIENT.get_account_summary) 769 | response.pop("ResponseMetadata", None) 770 | get_summary = fix_json(response) 771 | 772 | # get_account_authorization_details 773 | 774 | get_auth_details = simple_paginate(source.utils.utils.IAM_CLIENT, "get_account_authorization_details") 775 | 776 | # list_ssh_public_keys 777 | 778 | 
list_ssh_pub_keys = simple_paginate(source.utils.utils.IAM_CLIENT, "list_ssh_public_keys") 779 | 780 | # list_mfa_devices 781 | 782 | list_mfa_devices = simple_paginate(source.utils.utils.IAM_CLIENT, "list_mfa_devices") 783 | 784 | results = [] 785 | results.append(create_command("aws iam get-account-summary", get_summary)) 786 | results.append( 787 | create_command( 788 | "aws iam get-account-authorization-details", get_auth_details 789 | ) 790 | ) 791 | results.append( 792 | create_command("aws iam list-ssh-public-keys ", list_ssh_pub_keys) 793 | ) 794 | results.append(create_command("aws iam list-mfa-devices ", list_mfa_devices)) 795 | 796 | self.results["iam"] = results 797 | self.display_progress(len(results), "iam") 798 | return 799 | 800 | def get_configuration_dynamodb(self): 801 | """Retrieve multiple elements of the configuration of the existing dynamodb tables.""" 802 | dynamodb_list = self.services["dynamodb"] 803 | 804 | if dynamodb_list["count"] == -1: 805 | tables = paginate(source.utils.utils.DYNAMODB_CLIENT, "list_tables", "TableNames") 806 | 807 | if len(tables) == 0: 808 | self.display_progress(0, "dynamodb") 809 | return 810 | 811 | elif dynamodb_list["count"] == 0: 812 | self.display_progress(0, "dynamodb") 813 | return 814 | else: 815 | tables = dynamodb_list["elements"] 816 | 817 | tables_info = [] 818 | export_info = [] 819 | 820 | # list_backups 821 | 822 | backups = simple_paginate(source.utils.utils.DYNAMODB_CLIENT, "list_backups") 823 | 824 | # list_exports 825 | 826 | list_exports = misc_lookup("DYNAMODB", source.utils.utils.DYNAMODB_CLIENT.list_exports, "NextToken", "ExportSummaries", MaxResults=100) 827 | 828 | # describe_table 829 | 830 | with tqdm(desc="[+] Getting DYNAMODB configuration", leave=False, total = len(tables)) as pbar: 831 | for table in tables: 832 | response = try_except(source.utils.utils.DYNAMODB_CLIENT.describe_table, TableName=table) 833 | response.pop("ResponseMetadata", None) 834 | get_table = fix_json(response) 835 | tables_info.append(get_table) 836 | pbar.update() 837 | 838 | # describe_export 839 | 840 | for export in list_exports: 841 | response = try_except( 842 | source.utils.utils.DYNAMODB_CLIENT.describe_export, ExportArn=export.get("ExportArn", "") 843 | ) 844 | response.pop("ResponseMetadata", None) 845 | get_export = fix_json(response) 846 | export_info.append(get_export) 847 | 848 | results = [] 849 | results.append(create_command("aws dynamodb list-backups", backups)) 850 | results.append( 851 | create_command( 852 | "aws dynamodb describe-table --table-name ", tables_info 853 | ) 854 | ) 855 | results.append(create_command("aws dynamodb list-exports", list_exports)) 856 | results.append( 857 | create_command( 858 | "aws dynamodb describe-export --export-arn ", export_info 859 | ) 860 | ) 861 | 862 | self.results["dynamodb"] = results 863 | self.display_progress(len(results), "dynamodb") 864 | return 865 | 866 | def get_configuration_rds(self): 867 | """Retrieve multiple elements of the configuration of the existing rds instances.""" 868 | rds_list = self.services["rds"] 869 | 870 | if rds_list["count"] == -1: 871 | elements = paginate(source.utils.utils.RDS_CLIENT, "describe_db_instances", "DBInstances") 872 | 873 | if len(elements) == 0: 874 | self.display_progress(0, "rds") 875 | return 876 | 877 | elif rds_list["count"] == 0: 878 | self.display_progress(0, "rds") 879 | return 880 | 881 | # describe_db_clusters 882 | 883 | clusters = simple_paginate(source.utils.utils.RDS_CLIENT, "describe_db_clusters") 884 | 885 | # 
describe_db_snapshots 886 | 887 | snapshots = simple_paginate(source.utils.utils.RDS_CLIENT, "describe_db_snapshots") 888 | 889 | # describe_db_proxies 890 | 891 | proxies = simple_paginate(source.utils.utils.RDS_CLIENT, "describe_db_proxies") 892 | 893 | results = [] 894 | results.append(create_command("aws rds describe-db-clusters", clusters)) 895 | results.append(create_command("aws rds describe-db-snapshots", snapshots)) 896 | results.append(create_command("aws rds describe-db-proxies ", proxies)) 897 | 898 | self.results["rds"] = results 899 | self.display_progress(len(results), "rds") 900 | return 901 | 902 | def get_configuration_guardduty(self): 903 | """Retrieve multiple elements of the configuration of the existing guardduty detectors.""" 904 | guardduty_list = self.services["guardduty"] 905 | 906 | if guardduty_list["count"] == -1: 907 | detectors = paginate(source.utils.utils.GUARDDUTY_CLIENT, "list_detectors", "DetectorIds") 908 | 909 | if len(detectors) == 0: 910 | self.display_progress(0, "guardduty") 911 | return 912 | 913 | elif guardduty_list["count"] == 0: 914 | self.display_progress(0, "guardduty") 915 | return 916 | else: 917 | detectors = guardduty_list["ids"] 918 | 919 | detectors_data = {} 920 | filters = {} 921 | filter_data = {} 922 | publishing_destinations = {} 923 | threat_intel = {} 924 | ip_sets = {} 925 | 926 | with tqdm(desc="[+] Getting GUARDDUTY configuration", leave=False, total = len(detectors)) as pbar: 927 | for detector in detectors: 928 | 929 | # get_detector 930 | 931 | response = try_except(source.utils.utils.GUARDDUTY_CLIENT.get_detector, DetectorId=detector) 932 | response.pop("ResponseMetadata", None) 933 | detectors_data[detector] = response 934 | 935 | # list_filters 936 | 937 | filters[detector] = simple_paginate(source.utils.utils.GUARDDUTY_CLIENT, "list_filters", DetectorId=detector) 938 | 939 | filter_names = [] 940 | for el in filters[detector]: 941 | filter_names.extend(el["FilterNames"]) 942 | 943 | # get_filter 944 | 945 | filter_data[detector] = [] 946 | for filter_name in filter_names: 947 | response = try_except( 948 | source.utils.utils.GUARDDUTY_CLIENT.get_filter, 949 | DetectorId=detector, 950 | FilterName=filter_name, 951 | ) 952 | response.pop("ResponseMetadata", None) 953 | filter_data[detector].append(response) 954 | 955 | # list_publishing_destinations 956 | 957 | publishing_destinations[detector] = simple_misc_lookup( 958 | "GUARDDUTY", 959 | source.utils.utils.GUARDDUTY_CLIENT.list_publishing_destinations, 960 | "NextToken", 961 | DetectorId=detector, 962 | MaxResults=100 963 | ) 964 | 965 | # list_threat_intel_sets 966 | 967 | threat_intel[detector] = simple_paginate(source.utils.utils.GUARDDUTY_CLIENT, "list_threat_intel_sets", DetectorId=detector) 968 | 969 | # list_ip_sets 970 | 971 | ip_sets[detector] = simple_paginate(source.utils.utils.GUARDDUTY_CLIENT, "list_ip_sets", DetectorId=detector) 972 | 973 | pbar.update() 974 | 975 | 976 | results = [] 977 | results.append( 978 | create_command("guardduty get-detector --detector-id ", detectors_data) 979 | ) 980 | results.append( 981 | create_command("guardduty list-filters --detector-id ", filters) 982 | ) 983 | results.append( 984 | create_command( 985 | "guardduty get-filter --detector-id --filter-name ", 986 | filter_data, 987 | ) 988 | ) 989 | results.append( 990 | create_command( 991 | "guardduty list-publishing-destinations --detector-id ", 992 | publishing_destinations, 993 | ) 994 | ) 995 | results.append( 996 | create_command( 997 | "guardduty list-threat-intel-sets 
--detector-id ", threat_intel 998 | ) 999 | ) 1000 | results.append( 1001 | create_command("guardduty list-ip-sets --detector-id ", ip_sets) 1002 | ) 1003 | 1004 | self.results["guardduty"] = results 1005 | self.display_progress(len(results), "guardduty") 1006 | return 1007 | 1008 | def get_configuration_cloudwatch(self): 1009 | """Retrieve multiple elements of the configuration of the existing cloudwatch dashboards.""" 1010 | cloudwatch_list = self.services["cloudwatch"] 1011 | 1012 | if cloudwatch_list["count"] == -1: 1013 | dashboards = paginate(source.utils.utils.CLOUDWATCH_CLIENT, "list_dashboards", "DashboardEntries") 1014 | 1015 | if len(dashboards) == 0: 1016 | self.display_progress(0, "cloudwatch") 1017 | return 1018 | 1019 | elif cloudwatch_list["count"] == 0: 1020 | self.display_progress(0, "cloudwatch") 1021 | return 1022 | else: 1023 | dashboards = cloudwatch_list["elements"] 1024 | 1025 | dashboards_data = {} 1026 | with tqdm(desc="[+] Getting CLOUDWATCH configuration", leave=False, total = len(dashboards)) as pbar: 1027 | for dashboard in dashboards: 1028 | dashboard_name = dashboard["DashboardName"] 1029 | if dashboard_name == "": 1030 | continue 1031 | 1032 | # get_dashboard 1033 | 1034 | response = try_except( 1035 | source.utils.utils.CLOUDWATCH_CLIENT.get_dashboard, DashboardName=dashboard_name 1036 | ) 1037 | response.pop("ResponseMetadata", None) 1038 | dashboards_data[dashboard_name] = fix_json(response) 1039 | pbar.update() 1040 | 1041 | # list_metrics 1042 | 1043 | metrics = simple_paginate(source.utils.utils.CLOUDWATCH_CLIENT, "list_metrics") 1044 | 1045 | results = [] 1046 | results.append( 1047 | create_command( 1048 | "aws cloudwatch get-dashboard --name ", dashboards_data 1049 | ) 1050 | ) 1051 | results.append( 1052 | create_command("aws cloudwatch list-metrics --name ", metrics) 1053 | ) 1054 | 1055 | self.results["cloudwatch"] = results 1056 | self.display_progress(len(results), "cloudwatch") 1057 | return 1058 | 1059 | def get_configuration_maciev2(self): 1060 | """Retrieve multiple elements of the configuration of the existing macie buckets.""" 1061 | macie_list = self.services["macie"] 1062 | 1063 | if macie_list["count"] == -1: 1064 | elements = paginate(source.utils.utils.MACIE_CLIENT, "describe_buckets", "buckets") 1065 | 1066 | if len(elements) == 0: 1067 | self.display_progress(0, "macie") 1068 | return 1069 | elif macie_list["count"] == 0: 1070 | self.display_progress(0, "macie") 1071 | return 1072 | 1073 | # get_finding_statistics 1074 | 1075 | response = try_except(source.utils.utils.MACIE_CLIENT.get_finding_statistics, groupBy="type") 1076 | response.pop("ResponseMetadata", None) 1077 | statistics_severity = fix_json(response) 1078 | 1079 | # get_finding_statistics 1080 | 1081 | response = try_except( 1082 | source.utils.utils.MACIE_CLIENT.get_finding_statistics, groupBy="severity.description" 1083 | ) 1084 | response.pop("ResponseMetadata", None) 1085 | statistics_type = fix_json(response) 1086 | 1087 | results = [] 1088 | results.append( 1089 | create_command( 1090 | "aws macie2 get-finding-statistics --group-by severity.description", 1091 | statistics_severity, 1092 | ) 1093 | ) 1094 | results.append( 1095 | create_command( 1096 | "aws macie2 get-finding-statistics --group-by type", statistics_type 1097 | ) 1098 | ) 1099 | 1100 | self.results["macie"] = results 1101 | self.display_progress(len(results), "macie") 1102 | return 1103 | 1104 | def get_configuration_inspector2(self): 1105 | """Retrieve multiple elements of the configuration of the 
existing inspector coverages.""" 1106 | inspector_list = self.services["inspector"] 1107 | 1108 | if inspector_list["count"] == 0: 1109 | self.display_progress(0, "inspector") 1110 | return 1111 | 1112 | coverage = paginate(source.utils.utils.INSPECTOR_CLIENT, "list_coverage", "coveredResources") 1113 | 1114 | if len(coverage) == 0: 1115 | self.display_progress(0, "inspector") 1116 | return 1117 | 1118 | # list_usage_totals 1119 | 1120 | usage = simple_paginate(source.utils.utils.INSPECTOR_CLIENT, "list_usage_totals") 1121 | 1122 | # list_account_permissions 1123 | 1124 | permission = simple_paginate(source.utils.utils.INSPECTOR_CLIENT, "list_account_permissions") 1125 | 1126 | results = [] 1127 | results.append(create_command("aws inspector2 list-coverage", coverage)) 1128 | results.append(create_command("aws inspector2 list-usage-totals", usage)) 1129 | results.append( 1130 | create_command("aws inspector2 list-account-permissions", permission) 1131 | ) 1132 | 1133 | self.results["inspector"] = results 1134 | self.display_progress(len(results), "inspector") 1135 | 1136 | def get_configuration_detective(self): 1137 | """Retrieve multiple elements of the configuration of the existing detective graphs.""" 1138 | detective_list = self.services["detective"] 1139 | 1140 | if detective_list["count"] == -1: 1141 | graphs = misc_lookup("DETECTIVE", source.utils.utils.DETECTIVE_CLIENT.list_graphs, "NextToken", "GraphList", MaxResults=100) 1142 | 1143 | if len(graphs) == 0: 1144 | self.display_progress(0, "detective") 1145 | return 1146 | 1147 | elif detective_list["count"] == 0: 1148 | self.display_progress(0, "detective") 1149 | return 1150 | 1151 | results = [] 1152 | results.append(create_command("aws detective list-graphs ", graphs)) 1153 | 1154 | self.results["detective"] = results 1155 | self.display_progress(len(results), "detective") 1156 | print("finito detectivo") 1157 | 1158 | def get_configuration_cloudtrail(self): 1159 | """Retrieve multiple elements of the configuration of the existing cloudtrail trails.""" 1160 | cloudtrail_list = self.services["cloudtrail"] 1161 | 1162 | if cloudtrail_list["count"] == -1: 1163 | trails = paginate(source.utils.utils.CLOUDTRAIL_CLIENT, "list_trails", "Trails") 1164 | 1165 | if len(trails) == 0: 1166 | self.display_progress(0, "cloudtrail") 1167 | return 1168 | 1169 | elif cloudtrail_list["count"] == 0: 1170 | self.display_progress(0, "cloudtrail") 1171 | return 1172 | else: 1173 | trails = cloudtrail_list["elements"] 1174 | 1175 | trails_data = {} 1176 | with tqdm(desc="[+] Getting CLOUDTRAIL configuration", leave=False, total = len(trails)) as pbar: 1177 | for trail in trails: 1178 | trail_name = trail.get("Name", "") 1179 | if trail_name == "": 1180 | continue 1181 | 1182 | home_region = trail.get("HomeRegion") 1183 | source.utils.utils.CLOUDTRAIL_CLIENT = boto3.client("cloudtrail", region_name=home_region) 1184 | 1185 | response = try_except(source.utils.utils.CLOUDTRAIL_CLIENT.get_trail, Name=trail_name) 1186 | response.pop("ResponseMetadata", None) 1187 | trails_data[trail_name] = fix_json(response) 1188 | pbar.update() 1189 | 1190 | results = [] 1191 | results.append(create_command("aws cloudtrail list-trails", trails)) 1192 | results.append( 1193 | create_command("aws cloudtrail get-trail --name ", trails_data) 1194 | ) 1195 | 1196 | self.results["cloudtrail"] = results 1197 | self.display_progress(len(results), "cloudtrail") 1198 | 1199 | def display_progress(self, count, name): 1200 | """Display if the configuration of the given service 
worked. 1201 | 1202 | Parameters 1203 | ---------- 1204 | count : int 1205 | != 0 a configuration file was created. 0 otherwise 1206 | name : str 1207 | Name of the service 1208 | """ 1209 | if count != 0: 1210 | print( 1211 | "\t\u2705 " 1212 | + name.upper() 1213 | + "\033[1m" 1214 | + " - Configuration Extracted " 1215 | + "\033[0m" 1216 | ) 1217 | else: 1218 | print( 1219 | " \t\u274c " 1220 | + name.upper() 1221 | + "\033[1m" 1222 | + " - No Configuration" 1223 | + "\033[0m" 1224 | ) 1225 | -------------------------------------------------------------------------------- /source/main/enumeration.py: -------------------------------------------------------------------------------- 1 | """File used for the enumeration.""" 2 | 3 | from source.utils.enum import * 4 | from source.utils.utils import create_s3_if_not_exists, PREPARATION_BUCKET, ROOT_FOLDER, create_folder, set_clients, write_file, write_s3 5 | import source.utils.utils 6 | import json 7 | from time import sleep 8 | 9 | 10 | class Enumeration: 11 | 12 | services = {} 13 | bucket = "" 14 | region = None 15 | dl = None 16 | 17 | def __init__(self, region, dl): 18 | """Handle the constructor of the Enumeration class. 19 | 20 | Parameters 21 | ---------- 22 | region : str 23 | Region in which to tool is executed 24 | dl : bool 25 | True if the user wants to download the results, False if he wants the results to be written in a s3 bucket 26 | """ 27 | self.dl = dl 28 | self.region = region 29 | 30 | if not self.dl: 31 | self.bucket = create_s3_if_not_exists(self.region, PREPARATION_BUCKET) 32 | 33 | def self_test(self): 34 | """Test function.""" 35 | print("[+] Enumeration test passed") 36 | 37 | def execute(self, services, regionless): 38 | """Handle the main function of the class. Run every enumeration function and then write the results where asked. 39 | 40 | Parameters 41 | --------- 42 | services : list 43 | Array used to write the results of the different enumerations functions 44 | regionless : str 45 | "not-all" if the tool is used on only one region. 
First region to run the tool on otherwise 46 | 47 | Returns 48 | ------- 49 | self.services : object 50 | Object where the results of the functions are written 51 | """ 52 | print(f"[+] Beginning Enumeration of Services") 53 | 54 | set_clients(self.region) 55 | 56 | self.services = services 57 | 58 | if (regionless != "" and regionless == self.region) or regionless == "not-all": 59 | self.enumerate_s3() 60 | self.enumerate_iam() 61 | self.enumerate_cloudtrail_trails() 62 | self.enumerate_route53() 63 | 64 | self.enumerate_wafv2() 65 | self.enumerate_lambda() 66 | self.enumerate_vpc() 67 | self.enumerate_elasticbeanstalk() 68 | 69 | self.enumerate_ec2() 70 | self.enumerate_dynamodb() 71 | self.enumerate_rds() 72 | self.enumerate_eks() 73 | self.enumerate_elasticsearch() 74 | self.enumerate_secrets() 75 | self.enumerate_kinesis() 76 | 77 | self.enumerate_cloudwatch() 78 | self.enumerate_guardduty() 79 | self.enumerate_detective() 80 | self.enumerate_inspector2() 81 | self.enumerate_maciev2() 82 | 83 | 84 | if self.dl: 85 | confs = ROOT_FOLDER + self.region + "/enumeration/" 86 | create_folder(confs) 87 | with tqdm(desc="[+] Writing results", leave=False, total = len(self.services)) as pbar: 88 | for el in self.services: 89 | if self.services[el]["count"] > 0: 90 | write_file( 91 | confs + f"{el}.json", 92 | "w", 93 | json.dumps(self.services[el]["elements"], indent=4, default=str), 94 | ) 95 | pbar.update() 96 | sleep(0.1) 97 | print(f"[+] Enumeration results stored in the folder {ROOT_FOLDER}{self.region}/enumeration/") 98 | else: 99 | with tqdm(desc="[+] Writing results", leave=False, total = len(self.services)) as pbar: 100 | for key, value in self.services.items(): 101 | if value["count"] > 0: 102 | write_s3( 103 | self.bucket, 104 | f"{self.region}/enumeration/{key}.json", 105 | json.dumps(value["elements"], indent=4, default=str) 106 | ) 107 | pbar.update() 108 | sleep(0.1) 109 | print(f"[+] Enumeration results stored in the bucket {self.bucket}") 110 | 111 | return self.services 112 | 113 | def enumerate_s3(self): 114 | """Enumerate S3 buckets in use.""" 115 | elements = s3_lookup() 116 | 117 | self.services["s3"]["count"] = len(elements) 118 | self.services["s3"]["elements"] = elements 119 | 120 | identifiers = [] 121 | for el in elements: 122 | identifiers.append(el["Name"]) 123 | 124 | self.services["s3"]["ids"] = identifiers 125 | 126 | self.display_progress(self.services["s3"]["ids"], "s3", True) 127 | 128 | def enumerate_wafv2(self): 129 | """Enumerate waf web acls in use.""" 130 | elements = misc_lookup("WAF", source.utils.utils.WAF_CLIENT.list_web_acls, "NextMarker", "WebACLs", Scope="REGIONAL", Limit=100) 131 | 132 | self.services["wafv2"]["count"] = len(elements) 133 | self.services["wafv2"]["elements"] = elements 134 | 135 | identifiers = [] 136 | for el in elements: 137 | identifiers.append(el["ARN"]) 138 | 139 | self.services["wafv2"]["ids"] = identifiers 140 | 141 | self.display_progress(self.services["wafv2"]["ids"], "wafv2", True) 142 | 143 | def enumerate_lambda(self): 144 | """Enumerate lambdas in use.""" 145 | elements = paginate(source.utils.utils.LAMBDA_CLIENT, "list_functions", "Functions") 146 | 147 | self.services["lambda"]["count"] = len(elements) 148 | self.services["lambda"]["elements"] = elements 149 | 150 | identifiers = [] 151 | for el in elements: 152 | identifiers.append(el["FunctionName"]) 153 | 154 | self.services["lambda"]["ids"] = identifiers 155 | 156 | self.display_progress(self.services["lambda"]["ids"], "lambda", True) 157 | 158 | def 
enumerate_vpc(self): 159 | """Enumerate vpcs in use.""" 160 | elements = paginate(source.utils.utils.EC2_CLIENT, "describe_vpcs", "Vpcs") 161 | 162 | self.services["vpc"]["count"] = len(elements) 163 | self.services["vpc"]["elements"] = elements 164 | 165 | identifiers = [] 166 | for el in elements: 167 | identifiers.append(el["VpcId"]) 168 | 169 | self.services["vpc"]["ids"] = identifiers 170 | 171 | self.display_progress(self.services["vpc"]["ids"], "vpc", True) 172 | 173 | def enumerate_elasticbeanstalk(self): 174 | """Enumerate elasticbeanstalk environments in use.""" 175 | elements = paginate(source.utils.utils.EB_CLIENT, "describe_environments", "Environments") 176 | 177 | self.services["elasticbeanstalk"]["count"] = len(elements) 178 | self.services["elasticbeanstalk"]["elements"] = elements 179 | 180 | identifiers = [] 181 | for el in elements: 182 | identifiers.append(el["EnvironmentArn"]) 183 | 184 | self.services["elasticbeanstalk"]["ids"] = identifiers 185 | 186 | self.display_progress( 187 | self.services["elasticbeanstalk"]["ids"], "elasticbeanstalk", True 188 | ) 189 | 190 | def enumerate_route53(self): 191 | """Enumerate routes53 hosted zones in use.""" 192 | elements = paginate(source.utils.utils.ROUTE53_CLIENT, "list_hosted_zones", "HostedZones") 193 | 194 | self.services["route53"]["count"] = len(elements) 195 | self.services["route53"]["elements"] = elements 196 | 197 | identifiers = [] 198 | for el in elements: 199 | identifiers.append(el["Id"]) 200 | 201 | self.services["route53"]["ids"] = identifiers 202 | 203 | self.display_progress(self.services["route53"]["ids"], "route53", True) 204 | 205 | def enumerate_ec2(self): 206 | """Enumerate ec2 instances in use.""" 207 | elements = ec2_lookup() 208 | 209 | self.services["ec2"]["count"] = len(elements) 210 | self.services["ec2"]["elements"] = elements 211 | 212 | 213 | identifiers = [] 214 | for el in elements: 215 | identifiers.append(el["InstanceId"]) 216 | 217 | self.services["ec2"]["ids"] = identifiers 218 | 219 | self.display_progress(self.services["ec2"]["ids"], "ec2", True) 220 | 221 | def enumerate_iam(self): 222 | """Enumerate IAM users in use.""" 223 | elements = paginate(source.utils.utils.IAM_CLIENT, "list_users", "Users") 224 | 225 | self.services["iam"]["count"] = len(elements) 226 | self.services["iam"]["elements"] = elements 227 | 228 | identifiers = [] 229 | for el in elements: 230 | identifiers.append(el["Arn"]) 231 | 232 | self.services["iam"]["ids"] = identifiers 233 | 234 | self.display_progress(self.services["iam"]["ids"], "iam", True) 235 | 236 | def enumerate_dynamodb(self): 237 | """Enumerate dynamodb tables in use.""" 238 | elements = paginate(source.utils.utils.DYNAMODB_CLIENT, "list_tables", "TableNames") 239 | 240 | self.services["dynamodb"]["count"] = len(elements) 241 | self.services["dynamodb"]["elements"] = elements 242 | 243 | identifiers = elements 244 | 245 | self.services["dynamodb"]["ids"] = identifiers 246 | 247 | self.display_progress(self.services["dynamodb"]["ids"], "dynamodb", True) 248 | 249 | def enumerate_rds(self): 250 | """Enumerate rds instances in use.""" 251 | elements = paginate(source.utils.utils.RDS_CLIENT, "describe_db_instances", "DBInstances") 252 | 253 | self.services["rds"]["count"] = len(elements) 254 | self.services["rds"]["elements"] = elements 255 | 256 | identifiers = [] 257 | for el in elements: 258 | identifiers.append(el["DBInstanceArn"]) 259 | 260 | self.services["rds"]["ids"] = identifiers 261 | 262 | self.display_progress(self.services["rds"]["ids"], 
"rds", True) 263 | 264 | def enumerate_eks(self): 265 | """Enumerate eks clusters in use.""" 266 | elements = paginate(source.utils.utils.EKS_CLIENT, "list_clusters", "clusters") 267 | 268 | self.services["eks"]["count"] = len(elements) 269 | self.services["eks"]["elements"] = elements 270 | 271 | identifiers = [] 272 | for el in elements: 273 | identifiers.append(el) 274 | 275 | self.services["eks"]["ids"] = identifiers 276 | 277 | self.display_progress(self.services["eks"]["ids"], "eks", True) 278 | 279 | def enumerate_elasticsearch(self): 280 | """Enumerate elasticsearch domains in use.""" 281 | response = try_except(source.utils.utils.ELS_CLIENT.list_domain_names) 282 | response.pop("ResponseMetadata", None) 283 | response = fix_json(response) 284 | elements = response.get("DomainNames", []) 285 | 286 | self.services["els"]["count"] = len(elements) 287 | self.services["els"]["elements"] = elements 288 | 289 | identifiers = [] 290 | for el in elements: 291 | identifiers.append(el["DomainName"]) 292 | 293 | self.services["els"]["ids"] = identifiers 294 | 295 | self.display_progress(self.services["els"]["ids"], "els", True) 296 | 297 | def enumerate_secrets(self): 298 | """Enumerate secretsmanager secrets in use.""" 299 | elements = paginate(source.utils.utils.SECRETS_CLIENT, "list_secrets", "SecretList") 300 | 301 | self.services["secrets"]["count"] = len(elements) 302 | self.services["secrets"]["elements"] = elements 303 | 304 | identifiers = [] 305 | for el in elements: 306 | identifiers.append(el["ARN"]) 307 | 308 | self.services["secrets"]["ids"] = identifiers 309 | 310 | self.display_progress(self.services["secrets"]["ids"], "secrets", True) 311 | 312 | def enumerate_kinesis(self): 313 | """Enumerate kinesis streams in use.""" 314 | elements = paginate(source.utils.utils.KINESIS_CLIENT, "list_streams", "StreamNames") 315 | 316 | self.services["kinesis"]["count"] = len(elements) 317 | self.services["kinesis"]["elements"] = elements 318 | 319 | self.services["kinesis"]["ids"] = elements 320 | 321 | self.display_progress(self.services["kinesis"]["ids"], "kinesis", True) 322 | 323 | def enumerate_cloudwatch(self): 324 | """Enumerate cloudwatch dashboards in use.""" 325 | elements = paginate(source.utils.utils.CLOUDWATCH_CLIENT, "list_dashboards", "DashboardEntries") 326 | 327 | self.services["cloudwatch"]["count"] = len(elements) 328 | self.services["cloudwatch"]["elements"] = elements 329 | 330 | identifiers = [] 331 | for el in elements: 332 | identifiers.append(el["DashboardArn"]) 333 | 334 | self.services["cloudwatch"]["ids"] = identifiers 335 | 336 | self.display_progress(self.services["cloudwatch"]["ids"], "cloudwatch", True) 337 | 338 | def enumerate_cloudtrail_trails(self): 339 | """Enumerate cloudtrail trails in use.""" 340 | elements = paginate(source.utils.utils.CLOUDTRAIL_CLIENT, "list_trails", "Trails") 341 | 342 | self.services["cloudtrail"]["count"] = len(elements) 343 | self.services["cloudtrail"]["elements"] = elements 344 | 345 | identifiers = [] 346 | for el in elements: 347 | identifiers.append(el["Name"]) 348 | 349 | self.services["cloudtrail"]["ids"] = identifiers 350 | 351 | self.display_progress(self.services["cloudtrail"]["ids"], "cloudtrail", True) 352 | 353 | def enumerate_guardduty(self): 354 | """Enumerate guardduty detectors in use.""" 355 | elements = paginate(source.utils.utils.GUARDDUTY_CLIENT, "list_detectors", "DetectorIds") 356 | 357 | self.services["guardduty"]["count"] = len(elements) 358 | self.services["guardduty"]["elements"] = elements 359 | 360 
| identifiers = elements 361 | 362 | self.services["guardduty"]["ids"] = identifiers 363 | 364 | self.display_progress(self.services["guardduty"]["ids"], "guardduty", True) 365 | 366 | def enumerate_inspector2(self): 367 | """Enumerate inspector coverages in use.""" 368 | elements = paginate(source.utils.utils.INSPECTOR_CLIENT, "list_coverage", "coveredResources") 369 | 370 | self.services["inspector"]["count"] = len(elements) 371 | self.services["inspector"]["elements"] = elements 372 | 373 | identifiers = [] 374 | for el in elements: 375 | identifiers.append(el["resourceId"]) 376 | 377 | self.services["inspector"]["ids"] = identifiers 378 | 379 | self.display_progress(self.services["inspector"]["ids"], "inspector", False) 380 | 381 | def enumerate_detective(self): 382 | """Enumerate detective graphs in use.""" 383 | elements = misc_lookup("DETECTIVE", source.utils.utils.DETECTIVE_CLIENT.list_graphs, "NextToken", "GraphList", MaxResults=100) 384 | 385 | self.services["detective"]["count"] = len(elements) 386 | self.services["detective"]["elements"] = elements 387 | 388 | identifiers = [] 389 | for el in elements: 390 | identifiers.append(el["Arn"]) 391 | 392 | self.services["detective"]["ids"] = identifiers 393 | 394 | self.display_progress(self.services["detective"]["ids"], "detective", True) 395 | 396 | def enumerate_maciev2(self): 397 | """Enumerate macie buckets in use.""" 398 | elements = paginate(source.utils.utils.MACIE_CLIENT, "describe_buckets", "buckets") 399 | 400 | self.services["macie"]["count"] = len(elements) 401 | self.services["macie"]["elements"] = elements 402 | 403 | identifiers = [] 404 | for el in elements: 405 | identifiers.append(el["bucketArn"]) 406 | 407 | self.services["macie"]["ids"] = identifiers 408 | 409 | self.display_progress(self.services["macie"]["ids"], "macie", True) 410 | 411 | def display_progress(self, ids, name, no_list=False): 412 | """Display the progress and the content of the service. 413 | 414 | Parameters 415 | ---------- 416 | ids : list of str 417 | Identifiers of the elements of the service 418 | name : str 419 | Name of the service 420 | no_list : bool 421 | True if we don't want the name of each identifiers to be printed out. 
False otherwise 422 | """ 423 | if len(ids) != 0: 424 | if no_list: 425 | print("\t\u2705 " + name.upper() + "\033[1m" + " - In use") 426 | else: 427 | print( 428 | "\t\u2705 " 429 | + name.upper() 430 | + "\033[1m" 431 | + " - In use with a count of " 432 | + str(len(ids)) 433 | + "\033[0m" 434 | + " and with the following identifiers: " 435 | ) 436 | for identity in ids: 437 | print("\t\t\u2022 " + identity) 438 | else: 439 | print( 440 | "\t\u274c " 441 | + name.upper() 442 | + "\033[1m" 443 | + " - Not in use" 444 | + "\033[0m" 445 | ) 446 | -------------------------------------------------------------------------------- /source/main/ir.py: -------------------------------------------------------------------------------- 1 | """File used to run all the steps.""" 2 | 3 | from source.main.enumeration import Enumeration 4 | from source.main.configuration import Configuration 5 | from source.main.logs import Logs 6 | from source.main.analysis import Analysis 7 | from source.utils.utils import ENUMERATION_SERVICES, BOLD, ENDC 8 | 9 | class IR: 10 | 11 | services = None 12 | e = None 13 | c = None 14 | l = None 15 | a = None 16 | source = None 17 | output = None 18 | catalog = None 19 | database = None 20 | table = None 21 | 22 | def __init__(self, region, dl, steps, source=None, output=None, catalog=None, database=None, table=None): 23 | """Handle the constructor of the IR class. 24 | 25 | Parameters 26 | ---------- 27 | region : str 28 | Region in which to tool is executed 29 | dl : bool 30 | True if the user wants to download the results, False if he wants the results to be written in a s3 bucket 31 | steps : list of str 32 | Steps to run (1 for enum, 2 for config, 3 for logs extraction, 4 for analysis) 33 | source : str, optional 34 | Source bucket for the analysis part (4) 35 | output : str, optional 36 | Output bucket for the analysis part (4) 37 | catalog : str, optional 38 | Data catalog used with the database 39 | database : str , optional 40 | Database containing the table for logs analytics 41 | table : str, optional 42 | Contains the sql requirements to query the logs 43 | """ 44 | print(f"\n[+] Working on region {BOLD}{region}{ENDC}") 45 | 46 | if "4" in steps: 47 | self.a = Analysis(region, dl) 48 | if source != None: 49 | self.source = source 50 | if output != None: 51 | self.output = output 52 | if catalog != None: 53 | self.catalog = catalog 54 | if database != None: 55 | self.database = database 56 | if table != None: 57 | self.table = table 58 | else: 59 | self.services = ENUMERATION_SERVICES 60 | 61 | if "1" in steps: 62 | self.e = Enumeration(region, dl) 63 | if "2" in steps: 64 | self.c = Configuration(region, dl) 65 | if "3" in steps: 66 | self.l = Logs(region, dl) 67 | 68 | def execute_enumeration(self, regionless): 69 | """Run the enumeration main function. 70 | 71 | Parameters 72 | ---------- 73 | regionless : str 74 | 'not-all' if the tool is used on only one region. First region to run the tool on otherwise 75 | """ 76 | self.services = self.e.execute(self.services, regionless) 77 | 78 | def execute_configuration(self, regionless): 79 | """Run the configuration main function. 80 | 81 | Parameters 82 | ---------- 83 | regionless : str 84 | 'not-all' if the tool is used on only one region. First region to run the tool on otherwise 85 | """ 86 | self.c.execute(self.services, regionless) 87 | 88 | def execute_logs(self, regionless, start, end): 89 | """Run the logs extraction main function. 
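A minimal usage sketch of the IR class (illustrative only: the region, dates
and step numbers below are assumptions, not defaults shipped with the tool):

    ir = IR("us-east-1", dl=True, steps=["1", "3"])
    ir.execute_enumeration(regionless="not-all")
    ir.execute_logs(regionless="not-all", start="2024-01-01", end="2024-01-31")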
90 | 91 | Parameters 92 | ---------- 93 | regionless : str 94 | 'not-all' if the tool is used on only one region. First region to run the tool on otherwise 95 | start : str 96 | Start date of the logs collected 97 | end : str 98 | End date of the logs collected 99 | """ 100 | self.l.execute(self.services, regionless, start, end) 101 | 102 | def execute_analysis(self, queryfile, exists, timeframe): 103 | """Run the logs analysis main function. 104 | 105 | Parameters 106 | ---------- 107 | queryfile : str 108 | File containing the queries to run 109 | exists : tuple of bool 110 | Array containing information about if the db and table exists 111 | timeframe : str 112 | Timeframe used in the query to filter results 113 | """ 114 | self.a.execute(self.source, self.output, self.catalog, self.database, self.table, queryfile, exists, timeframe) -------------------------------------------------------------------------------- /source/main/logs.py: -------------------------------------------------------------------------------- 1 | """File used for the logs collection 2 | """ 3 | 4 | import datetime 5 | from sys import exit 6 | from json import loads, dumps 7 | from time import sleep 8 | from os import remove, rmdir 9 | from requests import get 10 | 11 | import source.utils.utils 12 | from source.utils.utils import write_file, create_folder, copy_or_write_s3, create_command, writefile_s3, LOGS_RESULTS, create_s3_if_not_exists, LOGS_BUCKET, ROOT_FOLDER, set_clients, write_or_dl, write_s3, athena_query 13 | from source.utils.enum import * 14 | 15 | 16 | class Logs: 17 | 18 | bucket = None 19 | region = None 20 | dl = None 21 | confs = None 22 | results = None 23 | 24 | def __init__(self, region, dl): 25 | """Constructor of the Logs Collection class 26 | 27 | Parameters 28 | ---------- 29 | region : str 30 | Region in which to tool is executed 31 | dl : bool 32 | True if the user wants to download the results, False if he wants the results to be written in a s3 bucket 33 | """ 34 | 35 | self.region = region 36 | self.results = LOGS_RESULTS 37 | self.dl = dl 38 | 39 | #Also created for cloudtrail-logs results 40 | self.confs = ROOT_FOLDER + self.region + "/logs" 41 | 42 | if not self.dl: 43 | self.bucket = create_s3_if_not_exists(self.region, LOGS_BUCKET) 44 | else: 45 | create_folder(self.confs) 46 | 47 | def self_test(self): 48 | """Test function 49 | """ 50 | 51 | print("[+] Logs Extraction test passed\n") 52 | 53 | def execute(self, services, regionless, start, end): 54 | """Main function of the class. Run every logs extraction function and then write the results where asked 55 | 56 | Parameters 57 | ---------- 58 | services : list 59 | Array used to write the results of the different configuration functions 60 | regionless : str 61 | "not-all" if the tool is used on only one region. 
First region to run the tool on otherwise 62 | start : str 63 | Start time for logs collection 64 | end : str 65 | End time for logs collection 66 | """ 67 | 68 | print(f"[+] Beginning Logs Extraction") 69 | 70 | set_clients(self.region) 71 | 72 | self.services = services 73 | 74 | if regionless == self.region or regionless == "not-all": 75 | self.get_logs_s3() 76 | self.get_logs_cloudtrail_logs(start, end) 77 | 78 | self.get_logs_wafv2() 79 | self.get_logs_vpc() 80 | self.get_logs_elasticbeanstalk() 81 | 82 | self.get_logs_route53() 83 | self.get_logs_rds() 84 | 85 | self.get_logs_cloudwatch() 86 | self.get_logs_guardduty() 87 | self.get_logs_inspector2() 88 | self.get_logs_maciev2() 89 | 90 | if self.dl: 91 | with tqdm(desc="[+] Writing results", leave=False, total = len(self.results)) as pbar: 92 | for key, value in self.results.items(): 93 | if value["results"] and key != "cloudtrail-logs": 94 | write_or_dl(key, value, self.confs) 95 | elif key == "cloudtrail-logs": 96 | for el in value["results"]: 97 | trail = el["CloudTrailEvent"] 98 | obj = loads(trail) 99 | dump = dumps(obj, default=str) 100 | create_folder(f"{self.confs}/cloudtrail-logs/") 101 | write_file( 102 | f"{self.confs}/cloudtrail-logs/{obj['eventID']}.json", 103 | "w", 104 | dump, 105 | ) 106 | pbar.update() 107 | sleep(0.1) 108 | print(f"[+] Logs extraction results stored in the folder {ROOT_FOLDER}{self.region}/logs_acquisition/") 109 | 110 | 111 | else: 112 | with tqdm(desc="[+] Writing results", leave=False, total = len(self.results)) as pbar: 113 | for key, value in self.results.items(): 114 | if value["results"] and key != "cloudtrail-logs": 115 | copy_or_write_s3(key, value, self.bucket, self.region) 116 | pbar.update() 117 | sleep(0.1) 118 | 119 | #doing the cloudtrail logs writing at the end because it can be very long 120 | if self.results["cloudtrail-logs"]["results"]: 121 | res = self.results["cloudtrail-logs"]["results"] 122 | 123 | with tqdm(desc="[+] Writing cloudtrail results", leave=False, total = len(self.results["cloudtrail-logs"]["results"])) as pbar: 124 | for el in res: 125 | 126 | trail = el["CloudTrailEvent"] 127 | obj = loads(trail) 128 | dump = dumps(obj, default=str) 129 | write_s3( 130 | self.bucket, 131 | f"{self.region}/logs/cloudtrail-logs/{obj['eventID']}.json", 132 | dump, 133 | ) 134 | pbar.update() 135 | sleep(0.1) 136 | 137 | print(f"[+] Logs extraction results stored in the bucket {self.bucket}") 138 | 139 | def get_logs_guardduty(self): 140 | """Retrieve the logs of the existing guardduty detectors 141 | """ 142 | 143 | guardduty_list = self.services["guardduty"] 144 | 145 | ''' 146 | In the first part, we verify that the enumeration of the service is already done. 147 | If it doesn't, we redo it. 148 | If it is, we verify if the service is available or not. 149 | ''' 150 | 151 | if guardduty_list["count"] == -1: 152 | detector_ids = paginate(source.utils.utils.GUARDDUTY_CLIENT, "list_detectors", "DetectorIds") 153 | 154 | if len(detector_ids) == 0: 155 | self.display_progress(0, "guardduty") 156 | return 157 | 158 | elif guardduty_list["count"] == 0: 159 | self.display_progress(0, "guardduty") 160 | return 161 | else: 162 | detector_ids = guardduty_list["ids"] 163 | 164 | ''' 165 | In this part, we get the logs of the service (if existing) 166 | Then all the results are added to a same json file. 
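For reference, the underlying boto3 calls have roughly this shape (standalone
sketch, not code from this tool; the region is an assumption and get_findings
accepts at most 50 finding IDs per call, which this sketch handles by slicing):

    import boto3

    guardduty = boto3.client("guardduty", region_name="us-east-1")
    for detector_id in guardduty.list_detectors()["DetectorIds"]:
        finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
        if finding_ids:
            findings = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids[:50])
            print(detector_id, len(findings["Findings"]))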
167 | ''' 168 | 169 | findings_data = {} 170 | with tqdm(desc="[+] Getting GUARDDUTY logs", leave=False, total = len(detector_ids)) as pbar: 171 | for detector in detector_ids: 172 | findings = paginate(source.utils.utils.GUARDDUTY_CLIENT, "list_findings", "FindingIds", DetectorId=detector) 173 | 174 | response = try_except( 175 | source.utils.utils.GUARDDUTY_CLIENT.get_findings, DetectorId=detector, FindingIds=findings 176 | ) 177 | response.pop("ResponseMetadata", None) 178 | response = fix_json(response) 179 | findings_data[detector] = response 180 | pbar.update() 181 | 182 | results = [] 183 | results.append( 184 | create_command( 185 | "guardduty get-findings --detector-id --findings-id ", 186 | findings_data, 187 | ) 188 | ) 189 | 190 | self.results["guardduty"]["action"] = 0 191 | self.results["guardduty"]["results"] = results 192 | 193 | self.display_progress(len(results), "guardduty") 194 | 195 | def get_logs_cloudtrail_logs(self, start, end): 196 | """Retrieve the cloudtrail logs 197 | 198 | Parameters 199 | ---------- 200 | start : str 201 | Start time for logs collection 202 | end : str 203 | End time for logs collection 204 | """ 205 | 206 | #trails_name = paginate(source.utils.utils.CLOUDTRAIL_CLIENT, "list_trails", "Trails") 207 | #if trails_name: 208 | # if len(trails_name) == 1: 209 | # response = source.utils.utils.CLOUDTRAIL_CLIENT.get_trail(Name=trails_name["TrailARN"]) 210 | # bucket = response["Trail"]["S3BucketName"] 211 | # 212 | # if "S3KeyPrefix" in response["Trail"]: 213 | # prefix = response["Trail"]["S3KeyPrefix"] 214 | # else: 215 | # prefix = "" 216 | # 217 | # all_bucket = f"{bucket}/{prefix}" 218 | # print(f"[+] You have an existing Cloudtrail trail. You can use the associated bucket {all_bucket} as source for the analysis. But don't forget to restrain the number of logs as much as possible by using the most precise subfolder.") 219 | # else: 220 | # buckets = [] 221 | # for trail in trails_name: 222 | # response = source.utils.utils.CLOUDTRAIL_CLIENT.get_trail(Name=trail["TrailARN"]) 223 | # bucket = response["Trail"]["S3BucketName"] 224 | # if "S3KeyPrefix" in response["Trail"]: 225 | # prefix = response["Trail"]["S3KeyPrefix"] 226 | # else: 227 | # prefix = "" 228 | # 229 | # all_bucket = f"{bucket}/{prefix}" 230 | # buckets.append(all_bucket) 231 | # 232 | # print(f"[+] You have multiple existing Cloudtrail trails. You can use the associated buckets listed below as source for the analysis.\n[!] 
Warning : If you do so, don't forget to restrain the number of logs as much as possible by using the most precise subfolder :") 233 | # for b in buckets: 234 | # print(f"\u2022 {b}") 235 | # 236 | ## 237 | #else: 238 | start_date = start.split("-") 239 | end_date = end.split("-") 240 | datetime_start = datetime.datetime(int(start_date[0]), int(start_date[1]), int(start_date[2])) 241 | datetime_end = datetime.datetime(int(end_date[0]), int(end_date[1]), int(end_date[2])) 242 | 243 | logs = paginate(source.utils.utils.CLOUDTRAIL_CLIENT, "lookup_events", "Events", StartTime=datetime_start, EndTime=datetime_end) 244 | if len(logs) == 0: 245 | self.display_progress(0, "cloudtrail") 246 | return 247 | 248 | self.results["cloudtrail-logs"]["action"] = 0 249 | self.results["cloudtrail-logs"]["results"] = logs 250 | self.display_progress(1, "cloudtrail-logs") 251 | 252 | def get_logs_wafv2(self): 253 | """Retrieve the logs of the existing waf web acls 254 | """ 255 | 256 | waf_list = self.services["wafv2"] 257 | 258 | if waf_list["count"] == -1: 259 | wafs = misc_lookup("WAF", source.utils.utils.WAF_CLIENT.list_web_acls, "NextMarker", "WebACLs", Scope="REGIONAL", Limit=100) 260 | 261 | if len(wafs) == 0: 262 | self.display_progress(0, "wafv2") 263 | return 264 | 265 | identifiers = [] 266 | for el in wafs: 267 | identifiers.append(el["ARN"]) 268 | 269 | elif waf_list["count"] == 0: 270 | self.display_progress(0, "wafv2") 271 | return 272 | else: 273 | identifiers = waf_list["ids"] 274 | return identifiers 275 | 276 | cnt = 0 277 | 278 | self.results["wafv2"]["action"] = 1 279 | 280 | with tqdm(desc="[+] Getting WAF logs", leave=False, total = len(identifiers)) as pbar: 281 | for arn in identifiers: 282 | logging = try_except(source.utils.utils.WAF_CLIENT.get_logging_configuration, ResourceArn=arn) 283 | if "LoggingConfiguration" in logging: 284 | destinations = logging["LoggingConfiguration"]["LogDestinationConfigs"] 285 | for destination in destinations: 286 | if "s3" in destination: 287 | bucket = destination.split(":")[-1] 288 | src_bucket = bucket.split("/")[0] 289 | 290 | self.results["wafv2"]["results"].append(src_bucket) 291 | 292 | cnt += 1 293 | pbar.update() 294 | 295 | self.display_progress(cnt, "wafv2") 296 | 297 | def get_logs_vpc(self): 298 | """Retrieve the logs of the existing vpcs 299 | """ 300 | 301 | vpc_list = self.services["vpc"] 302 | 303 | if vpc_list["count"] == -1: 304 | vpcs = paginate(source.utils.utils.EC2_CLIENT, "describe_vpcs", "Vpcs") 305 | 306 | if len(vpcs) == 0: 307 | self.display_progress(0, "vpc") 308 | return 309 | 310 | elif vpc_list["count"] == 0: 311 | self.display_progress(0, "vpc") 312 | return 313 | 314 | flow_logs = paginate(source.utils.utils.EC2_CLIENT, "describe_flow_logs", "FlowLogs") 315 | cnt = 0 316 | 317 | self.results["vpc"]["action"] = 1 318 | 319 | with tqdm(desc="[+] Getting VPC logs", leave=False, total = len(flow_logs)) as pbar: 320 | for flow_log in flow_logs: 321 | if "s3" in flow_log["LogDestinationType"]: 322 | bucket = flow_log["LogDestination"].split(":")[-1] 323 | src_bucket = bucket.split("/")[0] 324 | 325 | self.results["vpc"]["results"].append(src_bucket) 326 | cnt += 1 327 | pbar.update() 328 | self.display_progress(cnt, "vpc") 329 | 330 | def get_logs_elasticbeanstalk(self): 331 | """Retrieve the logs of the configuration of the existing elasticbeanstalk environments 332 | """ 333 | 334 | eb = source.utils.utils.EB_CLIENT 335 | 336 | eb_list = self.services["elasticbeanstalk"] 337 | 338 | if eb_list["count"] == -1: 339 | 340 | 
environments = paginate(source.utils.utils.EB_CLIENT, "describe_environments", "Environments") 341 | 342 | if len(environments) == 0: 343 | self.display_progress(0, "elasticbeanstalk") 344 | return 345 | 346 | elif eb_list["count"] == 0: 347 | self.display_progress(0, "elasticbeanstalk") 348 | return 349 | else: 350 | environments = eb_list["elements"] 351 | 352 | path = self.confs + "elasticbeanstalk/" 353 | create_folder(path) 354 | 355 | with tqdm(desc="[+] Getting ELASTICBEANSTALK logs", leave=False, total = len(environments)) as pbar: 356 | for environment in environments: 357 | name = environment.get("EnvironmentName", "") 358 | if name == "": 359 | continue 360 | 361 | response = try_except( 362 | eb.request_environment_info, EnvironmentName=name, InfoType="bundle" 363 | ) 364 | response.pop("ResponseMetadata", None) 365 | response = fix_json(response) 366 | sleep(60) 367 | 368 | response = try_except( 369 | eb.retrieve_environment_info, EnvironmentName=name, InfoType="bundle" 370 | ) 371 | response.pop("ResponseMetadata", None) 372 | response = fix_json(response) 373 | 374 | urls = response["EnvironmentInfo"] 375 | if len(urls) > 0: 376 | url = urls[-1] 377 | url = url["Message"] 378 | 379 | filename = path + name + ".zip" 380 | r = get(url) 381 | with open(filename, "wb") as f: 382 | f.write(r.content) 383 | 384 | if not self.dl: 385 | key = "eb/" + name + ".zip" 386 | writefile_s3(self.bucket, key, filename) 387 | remove(filename) 388 | rmdir(path) 389 | pbar.update() 390 | 391 | self.display_progress(len(environments), "elasticbeanstalk") 392 | 393 | def get_logs_cloudwatch(self): 394 | """Retrieve the logs of the configuration of the existing cloudwatch dashboards 395 | """ 396 | 397 | cloudwatch_list = self.services["cloudwatch"] 398 | 399 | if cloudwatch_list["count"] == -1: 400 | dashboards = paginate(source.utils.utils.CLOUDWATCH_CLIENT, "list_dashboards", "DashboardEntries") 401 | 402 | if len(dashboards) == 0: 403 | self.display_progress(0, "cloudwatch") 404 | return 405 | 406 | elif cloudwatch_list["count"] == 0: 407 | self.display_progress(0, "cloudwatch") 408 | return 409 | else: 410 | dashboards = cloudwatch_list["elements"] 411 | 412 | dashboards_data = {} 413 | 414 | with tqdm(desc="[+] Getting CLOUDWATCH logs", leave=False, total = len(dashboards)) as pbar: 415 | for dashboard in dashboards: 416 | dashboard_name = dashboard.get("DashboardName", "") 417 | if dashboard_name == "": 418 | continue 419 | response = try_except( 420 | source.utils.utils.CLOUDWATCH_CLIENT.get_dashboard, DashboardName=dashboard_name 421 | ) 422 | response.pop("ResponseMetadata", None) 423 | dashboards_data[dashboard_name] = fix_json(response) 424 | pbar.update() 425 | 426 | metrics = try_except(source.utils.utils.CLOUDWATCH_CLIENT, "list_metrics") 427 | 428 | alarms = simple_paginate(source.utils.utils.CLOUDWATCH_CLIENT, "describe_alarms") 429 | 430 | results = [] 431 | results.append( 432 | create_command("cloudwatch get-dashboard --name ", dashboards_data) 433 | ) 434 | results.append(create_command("cloudwatch list-metrics --name ", metrics)) 435 | results.append( 436 | create_command("cloudwatch describe-alarms --name ", alarms) 437 | ) 438 | 439 | self.results["cloudwatch"]["action"] = 0 440 | self.results["cloudwatch"]["results"] = results 441 | 442 | self.display_progress(len(results), "cloudwatch") 443 | 444 | def get_logs_s3(self): 445 | """Retrieve the logs of the configuration of the existing s3 buckets 446 | """ 447 | 448 | s3_list = self.services["s3"] 449 | 450 | if 
s3_list["count"] == -1: 451 | 452 | elements = s3_lookup() 453 | 454 | if len(elements) == 0: 455 | self.display_progress(0, "s3") 456 | return 457 | 458 | elif s3_list["count"] == 0: 459 | # if there is not bucket at all 460 | self.display_progress(0, "s3") 461 | return 462 | else: 463 | elements = s3_list["elements"] 464 | 465 | cnt = 0 466 | 467 | self.results["s3"]["action"] = 1 468 | self.results["s3"]["results"] = [] 469 | 470 | with tqdm(desc="[+] Getting S3 logs", leave=False, total = len(elements)) as pbar: 471 | 472 | # Simulez une action quelconque. 473 | for bucket in elements: 474 | 475 | name = bucket["Name"] 476 | 477 | logging = try_except(source.utils.utils.S3_CLIENT.get_bucket_logging, Bucket=name) 478 | 479 | if "LoggingEnabled" in logging: 480 | target = logging["LoggingEnabled"]["TargetBucket"] 481 | bucket = target.split(":")[-1] 482 | src_bucket = bucket.split("/")[0] 483 | 484 | if logging["LoggingEnabled"]["TargetPrefix"]: 485 | prefix = logging["LoggingEnabled"]["TargetPrefix"] 486 | src_bucket = f"{src_bucket}|{prefix}" 487 | 488 | self.results["s3"]["results"].append(src_bucket) 489 | 490 | cnt += 1 491 | pbar.update() 492 | 493 | self.display_progress(cnt, "s3") 494 | 495 | def get_logs_inspector2(self): 496 | """Retrieve the logs of the configuration of the existing inspector coverages 497 | """ 498 | 499 | inspector_list = self.services["inspector"] 500 | 501 | if inspector_list["count"] == -1: 502 | 503 | covered = paginate(source.utils.utils.INSPECTOR_CLIENT, "list_coverage", "coveredResources") 504 | 505 | if len(covered) == 0: 506 | self.display_progress(0, "inspector") 507 | return 508 | 509 | elif inspector_list["count"] == 0: 510 | self.display_progress(0, "inspector") 511 | return 512 | 513 | get_findings = simple_paginate(source.utils.utils.INSPECTOR_CLIENT, "list_findings") 514 | 515 | get_grouped_findings = simple_paginate( 516 | source.utils.utils.INSPECTOR_CLIENT, "list_finding_aggregations", aggregationType="TITLE" 517 | ) 518 | 519 | results = [] 520 | results.append(create_command("aws inspector2 list-findings", get_findings)) 521 | results.append( 522 | create_command( 523 | "aws inspector2 list-finding-aggregations --aggregation-type TITLE", 524 | get_grouped_findings, 525 | ) 526 | ) 527 | 528 | self.results["inspector"]["action"] = 0 529 | self.results["inspector"]["results"] = results 530 | 531 | self.display_progress(len(results), "inspector") 532 | 533 | def get_logs_maciev2(self): 534 | """Retrieve the logs of the configuration of the existing macie buckets 535 | """ 536 | 537 | macie_list = self.services["macie"] 538 | 539 | if macie_list["count"] == -1: 540 | 541 | elements = paginate(source.utils.utils.MACIE_CLIENT, "describe_buckets", "buckets") 542 | 543 | if len(elements) == 0: 544 | self.display_progress(0, "macie") 545 | return 546 | elif macie_list["count"] == 0: 547 | self.display_progress(0, "macie") 548 | return 549 | 550 | get_list_findings = simple_paginate(source.utils.utils.MACIE_CLIENT, "list_findings") 551 | 552 | response = try_except( 553 | source.utils.utils.MACIE_CLIENT.get_findings, 554 | findingIds=get_list_findings.get("findingIds", []), 555 | ) 556 | response.pop("ResponseMetadata", None) 557 | findings = fix_json(response) 558 | 559 | results = [] 560 | results.append(create_command("aws macie2 list-findings", get_list_findings)) 561 | results.append( 562 | create_command("aws macie2 get-findings --finding-ids ", findings) 563 | ) 564 | 565 | self.results["macie"]["action"] = 0 566 | 
self.results["macie"]["results"] = results 567 | 568 | self.display_progress(len(results), "macie") 569 | 570 | def download_rds(self, nameDB, rds, logname): 571 | """'Download' the rds logs 572 | 573 | Parameters 574 | ---------- 575 | nameDB : str 576 | name of the rds instance 577 | rds : object 578 | RDS client 579 | logname : str 580 | name of the logfile to get 581 | 582 | Returns 583 | ------- 584 | ret : dict 585 | Part of the response 586 | """ 587 | 588 | response = try_except( 589 | rds.download_db_log_file_portion, 590 | DBInstanceIdentifier=nameDB, 591 | LogFileName=logname, 592 | Marker="0", 593 | ) 594 | 595 | ret = response.get("LogFileData", "") 596 | return ret 597 | 598 | def get_logs_rds(self): 599 | """Retrieve the logs of the configuration of the existing rds instances 600 | """ 601 | 602 | rds_list = self.services["rds"] 603 | 604 | if rds_list["count"] == -1: 605 | 606 | list_of_dbs = paginate(source.utils.utils.RDS_CLIENT, "describe_db_instances", "DBInstances") 607 | 608 | if len(list_of_dbs) == 0: 609 | self.display_progress(0, "rds") 610 | return 611 | 612 | elif rds_list["count"] == 0: 613 | self.display_progress(0, "rds") 614 | return 615 | else: 616 | list_of_dbs = rds_list["elements"] 617 | 618 | total_logs = [] 619 | 620 | with tqdm(desc="[+] Getting RDS logs", leave=False, total = len(list_of_dbs)) as pbar: 621 | for db in list_of_dbs: 622 | total_logs.append( 623 | self.download_rds( 624 | db["DBInstanceIdentifier"], 625 | source.utils.utils.RDS_CLIENT, 626 | "external/mysql-external.log", 627 | ) 628 | ) 629 | total_logs.append( 630 | self.download_rds( 631 | db["DBInstanceIdentifier"], source.utils.utils.RDS_CLIENT, "error/mysql-error.log" 632 | ) 633 | ) 634 | pbar.update() 635 | 636 | self.results["rds"]["action"] = 0 637 | self.results["rds"]["results"] = total_logs 638 | 639 | self.display_progress(len(list_of_dbs), "rds") 640 | 641 | def get_logs_route53(self): 642 | """Retrieve the logs of the configuration of the existing routes53 hosted zones 643 | """ 644 | 645 | route53_list = self.services["route53"] 646 | 647 | if route53_list["count"] == -1: 648 | 649 | hosted_zones = paginate(source.utils.utils.ROUTE53_CLIENT, "list_hosted_zones", "HostedZones") 650 | 651 | if hosted_zones: 652 | self.display_progress(0, "route53") 653 | return 654 | 655 | elif route53_list["count"] == 0: 656 | self.display_progress(0, "route53") 657 | return 658 | 659 | resolver_log_configs = paginate(source.utils.utils.ROUTE53_RESOLVER_CLIENT, "list_resolver_query_log_configs", "ResolverQueryLogConfigs") 660 | cnt = 0 661 | 662 | self.results["route53"]["action"] = 1 663 | self.results["route53"]["results"] = [] 664 | 665 | with tqdm(desc="[+] Getting ROUTE53 logs", leave=False, total = len(resolver_log_configs)) as pbar: 666 | for bucket_location in resolver_log_configs: 667 | if "s3" in bucket_location["DestinationArn"]: 668 | bucket = bucket_location["DestinationArn"].split(":")[-1] 669 | 670 | if "/" in bucket: 671 | 672 | src_bucket = bucket.split("/")[0] 673 | prefix = bucket.split("/")[1] 674 | result = f"{src_bucket}|{prefix}" 675 | 676 | else : 677 | result = bucket 678 | 679 | self.results["route53"]["results"].append(result) 680 | 681 | cnt += 1 682 | pbar.update() 683 | 684 | self.display_progress(cnt, "route53") 685 | 686 | def display_progress(self, count, name): 687 | """Diplays if the configuration of the given service worked 688 | 689 | Parameters 690 | ---------- 691 | count : int 692 | != 0 a configuration file was created. 
709 | -------------------------------------------------------------------------------- /source/utils/enum.py: --------------------------------------------------------------------------------
1 | """File containing all the aws enumeration functions used to get data."""
2 | 
3 | from source.utils.utils import fix_json, try_except
4 | import source.utils.utils
5 | from tqdm import tqdm
6 | 
7 | def s3_lookup():
8 | """Return all existing buckets.
9 | 
10 | Returns
11 | -------
12 | elements : list
13 | List of the existing buckets
14 | """
15 | response = try_except(source.utils.utils.S3_CLIENT.list_buckets)
16 | buckets = fix_json(response)
17 | 
18 | elements = []
19 | elements = buckets.get("Buckets", [])
20 | 
21 | return elements
22 | 
23 | def ec2_lookup():
24 | """Return all ec2 instances.
25 | 
26 | Returns
27 | -------
28 | elements : list
29 | List of the existing ec2 instances
30 | 
31 | """
32 | elements = []
33 | paginator = source.utils.utils.EC2_CLIENT.get_paginator("describe_instances")
34 | 
35 | try:
36 | with tqdm(desc=f"[+] Getting EC2 data", leave=False) as pbar:
37 | for page in paginator.paginate():
38 | page.pop("ResponseMetadata", None)
39 | page = fix_json(page)
40 | for reservation in page.get("Reservations", []):
41 | elements.extend(reservation["Instances"])
42 | pbar.update()
43 | except Exception as e:
44 | print(f"[!] invictus-aws.py: error: {str(e)}")
45 | 
46 | return elements
47 | 
48 | def simple_paginate(client, command, **kwargs):
49 | """Return all the results of the command, no matter the number of results.
50 | 
51 | Parameters
52 | ----------
53 | client : object
54 | Boto3 client used to call the request (S3, LAMBDA, etc)
55 | command : str
56 | Command executed
57 | **kwargs : list, optional
58 | List of parameters to add to the command.
59 | 
60 | Returns
61 | -------
62 | elements : list
63 | List of the results of the command
64 | """
65 | elements = []
66 | 
67 | paginator = client.get_paginator(command)
68 | 
69 | try:
70 | with tqdm(desc=f"[+] Getting {client.meta.service_model.service_name.upper()} data", leave=False) as pbar:
71 | for page in paginator.paginate(**kwargs):
72 | page.pop("ResponseMetadata", None)
73 | page = fix_json(page)
74 | elements.append(page)
75 | pbar.update()
76 | except Exception as e:
77 | print(f"[!] invictus-aws.py: error: {str(e)}")
78 | 
79 | return elements
80 | 
81 | def paginate(client, command, array, **kwargs):
82 | """Do the same as the previous function, but we can then filter the results on a specific part of the response.
83 | 
84 | Parameters
85 | ----------
86 | client : object
87 | Boto3 client used to call the request (S3, LAMBDA, etc)
88 | command : str
89 | Command executed
90 | array : str
91 | Filter added to get a specific part of the results
92 | **kwargs : list, optional
93 | List of parameters to add to the command.
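For example, the Lambda enumeration in enumeration.py calls
paginate(source.utils.utils.LAMBDA_CLIENT, "list_functions", "Functions"),
which is roughly equivalent to this raw boto3 loop (illustrative sketch only):

    paginator = source.utils.utils.LAMBDA_CLIENT.get_paginator("list_functions")
    functions = []
    for page in paginator.paginate():
        functions.extend(page.get("Functions", []))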
94 | 95 | Returns 96 | ------- 97 | elements : list 98 | List of the results of the command 99 | """ 100 | elements = [] 101 | paginator = client.get_paginator(command) 102 | 103 | try: 104 | with tqdm(desc=f"[+] Getting {client.meta.service_model.service_name.upper()} data", leave=False) as pbar: 105 | for page in paginator.paginate(**kwargs): 106 | page.pop("ResponseMetadata", None) 107 | page = fix_json(page) 108 | elements.extend(page.get(array, [])) 109 | pbar.update() 110 | except Exception as e: 111 | if client != source.utils.utils.MACIE_CLIENT and "Macie is not enabled" not in str(e): 112 | print(f"[!] invictus-aws.py: error: {str(e)}") 113 | 114 | return elements 115 | 116 | def simple_misc_lookup(client, function, name_token, **kwargs): 117 | """Return all the results of the command, no matter the number of results. Used by functions not usable by paginate. 118 | 119 | Parameters 120 | ---------- 121 | client : str 122 | Name of the client (S3, LAMBDA, etc) only used for the progress bar 123 | function : str 124 | Concatenation of the client and the command (CLIENT.COMMAND) 125 | name_token : str 126 | Name of the token used by the command to get the other pages of results. 127 | **kwargs : list, optional 128 | List of parameters to add to the command. 129 | 130 | Returns 131 | ------- 132 | elements : list 133 | List of the results of the command 134 | """ 135 | tokens = [] 136 | 137 | elements = [] 138 | response = try_except(function, **kwargs) 139 | response.pop("ResponseMetadata", None) 140 | response = fix_json(response) 141 | elements = response 142 | 143 | token = "" 144 | if name_token in response: 145 | token = response.get(name_token) 146 | 147 | with tqdm(desc=f"[+] Getting {client} configuration", leave=False) as pbar: 148 | while token: 149 | response = try_except(function, **kwargs) 150 | response.pop("ResponseMetadata", None) 151 | response = fix_json(response) 152 | 153 | token = "" 154 | if name_token in response: 155 | token = response.get(name_token) 156 | if tokens[-1] == token: 157 | break 158 | else: 159 | elements.extend(response) 160 | tokens.append(token) 161 | pbar.update() 162 | 163 | return elements 164 | 165 | def misc_lookup(client, function, name_token, array, **kwargs): 166 | """Do the same as the previous function, but we can then filter the results on a specific part of the response. Used by functions not usable by paginate. 167 | 168 | Parameters 169 | ---------- 170 | client : str 171 | Name of the client (S3, LAMBDA, etc) only used for the progress bar 172 | function : str 173 | Concatenation of the client and the command (CLIENT.COMMAND) 174 | name_token : str 175 | Name of the token used by the command to get the other pages of results. 176 | array : str 177 | Filter added to get a specific part of the results 178 | **kwargs : list, optional 179 | List of parameters to add to the command. 
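For example, detective graphs are collected with
misc_lookup("DETECTIVE", source.utils.utils.DETECTIVE_CLIENT.list_graphs, "NextToken", "GraphList", MaxResults=100).
A generic manual-pagination loop for calls that have no boto3 paginator would
pass the returned token back on each request (illustrative sketch, not the
code of this function):

    def collect_all(call, token_name, array, **kwargs):
        """Call a boto3 API repeatedly, following its pagination token."""
        items, token = [], None
        while True:
            params = dict(kwargs, **({token_name: token} if token else {}))
            response = call(**params)
            items.extend(response.get(array, []))
            token = response.get(token_name)
            if not token:
                return items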
180 | 181 | Returns 182 | ------- 183 | elements : list 184 | List of the results of the command 185 | """ 186 | tokens = [] 187 | 188 | elements = [] 189 | response = try_except(function, **kwargs) 190 | response.pop("ResponseMetadata", None) 191 | response = fix_json(response) 192 | elements.extend(response.get(array, [])) 193 | 194 | token = "" 195 | if name_token in response: 196 | token = response.get(name_token) 197 | tokens.append(token) 198 | 199 | with tqdm(desc=f"[+] Getting {client} configuration", leave=False) as pbar: 200 | while token: 201 | response = try_except(function, **kwargs) 202 | response.pop("ResponseMetadata", None) 203 | response = fix_json(response) 204 | 205 | token = "" 206 | if name_token in response: 207 | token = response.get(name_token) 208 | if tokens[-1] == token: 209 | break 210 | else: 211 | elements.extend(response.get(array, [])) 212 | tokens.append(token) 213 | pbar.update() 214 | 215 | return elements 216 | 217 | def list_traffic_policies_lookup(function): 218 | """Get all the results of the list_traffic_policies command of the route53 client. 219 | 220 | Parameters 221 | ---------- 222 | function : str 223 | Concatenation of the client and the command 224 | 225 | Returns 226 | ------- 227 | elements : list 228 | List of the results of the command 229 | """ 230 | elements = [] 231 | response = try_except(function, MaxItems="100") 232 | response.pop("ResponseMetadata", None) 233 | elements = fix_json(response) 234 | 235 | token = "" 236 | if response["IsTruncated"] == True: 237 | token = response["TrafficPolicyIdMarker"] 238 | 239 | with tqdm(desc=f"[+] Getting ROUTE53 configuration", leave=False) as pbar: 240 | while token: 241 | response = try_except(function, MaxItems="100") 242 | response.pop("ResponseMetadata", None) 243 | response = fix_json(response) 244 | elements.extend(response) 245 | 246 | token = "" 247 | if response["IsTruncated"] == True: 248 | token = response["TrafficPolicyIdMarker"] 249 | pbar.update() 250 | 251 | return elements 252 | -------------------------------------------------------------------------------- /source/utils/strings.py: -------------------------------------------------------------------------------- 1 | ERROR="[!] Invictus-AWS: error:" 2 | 3 | ############# invictus-aws.py ############# 4 | 5 | TOOL_NAME= """ 6 | _ _ _ 7 | (_) (_) | | 8 | _ _ ____ ___ ___| |_ _ _ ___ ______ __ ___ _____ 9 | | | '_ \ \ / / |/ __| __| | | / __|______/ _` \ \ /\ / / __| 10 | | | | | \ V /| | (__| |_| |_| \__ \ | (_| |\ V V /\__ \ 11 | |_|_| |_|\_/ |_|\___|\__|\__,_|___/ \__,_| \_/\_/ |___/ 12 | 13 | 14 | Copyright (c) 2024 Invictus Incident Response 15 | Authors: Antonio Macovei, Rares Bratean & Benjamin Guillouzo 16 | """ 17 | 18 | 19 | WALKTHROUGHT_ENTRY="[+] Entering Walk–through mode..\n" 20 | 21 | PROFILE_PRESENTATION="[+] Possible profiles: \n [1] User-defined Profile\n [2] Default Profile" 22 | PROFILE_ACTION="[+] Select the profile you want to use: " 23 | PROFILE=" [+] Profile you want : " 24 | 25 | STEPS_PRESENTATION="\n[+] Possible actions :\n [1] Services Enumeration\n [2] Services Configuration Acquisition\n [3] Log Acquisition\n [4] Log Analysis" 26 | STEPS_ACTION="[+] Select the action you want to perform, specify multiple actions with a comma (e.g. 
1,2,3): " 27 | 28 | STORAGE_PRESENTATION="\n[+] Possible storage :\n [1] Local\n [2] Cloud" 29 | STORAGE_ACTION="[+] Press the number associated with the storage you want: " 30 | 31 | REGION_PRESENTATION="\n[+] Specify your region:\n [1] All regions, this option is not available with the Analysis step.\n [2] One specific region" 32 | REGION_ACTION="[+] Press the number associated with the operation: " 33 | REGION=" [+] Region that you want: " 34 | ALL_REGION=" [+] Region that you want (optional): " 35 | 36 | DB_INITIALIZED_PRESENTATION="\n[+] Database Initialization possibilities:\n [1] The database you want to use is already initialized\n [2] The database you want to use is not initialized yet" 37 | DB_INITIALIZED_ACTION="[+] Press the number associated with the option you want. If it's the first time you run the tool, press 2: " 38 | 39 | INPUT_BUCKET_ACTION="\n[+] Enter the S3 URI of the bucket containing the CloudTrail logs. Format is s3://s3_name/subfolders/ : " 40 | OUTPUT_BUCKET_ACTION="\n[+] Enter the S3 URI of where the results of the queries will be stored. Format is s3://s3_name/[subfolders]/ : " 41 | 42 | DEFAULT_NAME_PRESENTATION="\n[+] Name possibilities:\n [1] Creates new database and table, using the default names (cloudtrailanalysis & logs)\n [2] Creates new database and table, using your own names" 43 | DEFAULT_NAME_ACTION="[+] Press the number associated with the option you want: " 44 | 45 | NAMES_PRESENTATION="\n[+] You will now have to enter your existing catalog, database and table" 46 | NEW_NAMES_PRESENTATION="\n[+] You will now have to enter the catalog to use and the database name you want" 47 | TABLE_PRESENTATION="\n[+] Don't forget to enter the name of the table you want" 48 | CATALOG_ACTION=" [+] Catalog name : " 49 | DB_ACTION=" [+] Database name : " 50 | TABLE_ACTION=" [+] Table name : " 51 | 52 | DEFAULT_STRUCTURE_PRESENTATION="\n[+] Structure file possibilities:\n [1] Use your own structure file\n [2] Use the default structure" 53 | DEFAULT_STRUCTURE_ACTION="[+] Press the number associated with the option you want: " 54 | STRUCTULE_FILE=" [+] Enter the name of the structure file you want to use for your table : " 55 | 56 | DEFAULT_QUERY_PRESENTATION="\n[+] Query file possibilities:\n [1] Use your own file\n [2] Use the default query file" 57 | DEFAULT_QUERY_ACTION="[+] Press the number associated with the option you want: " 58 | QUERY_FILE=" [+] Enter the name of the query file you want to use : " 59 | 60 | TIMEFRAME_PRESENTATION="\n[+] Timeframe possibilities:\n [1] Use a timeframe to filter logs results\n [2] Don't use a timeframe" 61 | TIMEFRAME_ACTION="[+] Press the number associated with the option you want: " 62 | TIMEFRAME=" [+] Enter the number of last days to analyze : " 63 | 64 | START_END_PRESENTATION="\n[+] Enter the start and end dates of the logs you want to collect" 65 | START=" [+] Start date of the logs collection (YYYY-MM-DD): " 66 | END=" [+] End date of the logs collection (YYYY-MM-DD): " -------------------------------------------------------------------------------- /source/utils/utils.py: -------------------------------------------------------------------------------- 1 | """File containing all types of functions and variables, used everywhere in the tool.""" 2 | 3 | import boto3 4 | from botocore.exceptions import ClientError 5 | import datetime, os 6 | from sys import exit 7 | from random import choices 8 | from string import ascii_lowercase, digits 9 | from json import dumps 10 | from source.utils.strings import * 11 | 12 | 13 | def 
get_random_chars(n): 14 | """Generate random str. 15 | 16 | Parameters 17 | ---------- 18 | n : int 19 | Number of char to be generated 20 | 21 | Returns 22 | ------- 23 | ret : str 24 | Random str of n chars 25 | """ 26 | ret = "".join(choices(ascii_lowercase + digits, k=n)) 27 | return ret 28 | 29 | def is_list(list_data): 30 | """Verify if the given data are a list. 31 | 32 | list_data : list 33 | List to verify 34 | """ 35 | for data in list_data: 36 | if isinstance(data, datetime.datetime): 37 | data = str(data) 38 | if isinstance(data, list): 39 | is_list(data) 40 | if isinstance(data, dict): 41 | is_dict(data) 42 | 43 | def is_dict(data_dict): 44 | """Verify if the given data are a dictionary. 45 | 46 | Parameters 47 | ---------- 48 | data_dict : dict 49 | Dictionary to verify 50 | """ 51 | for data in data_dict: 52 | if isinstance(data_dict[data], datetime.datetime): 53 | data_dict[data] = str(data_dict[data]) 54 | if isinstance(data_dict[data], list): 55 | is_list(data_dict[data]) 56 | if isinstance(data_dict[data], dict): 57 | is_dict(data_dict[data]) 58 | 59 | def fix_json(response): 60 | """Correct json format. 61 | 62 | Parameters 63 | ---------- 64 | response : json 65 | Usually the response of a request (boto3, requests) 66 | 67 | Returns 68 | ------- 69 | response : json 70 | Fixed response 71 | """ 72 | if isinstance(response, dict): 73 | is_dict(response) 74 | 75 | return response 76 | 77 | def try_except(func, *args, **kwargs): 78 | """Try except function. 79 | 80 | Parameters 81 | ---------- 82 | func : str 83 | Function tried 84 | *args : list 85 | List of args, optional 86 | **kwargs : list 87 | List of key pairs args, optional 88 | 89 | Returns 90 | ------- 91 | ret : str 92 | Either the execution of the function, either an object 93 | """ 94 | try: 95 | ret = func(*args, **kwargs) 96 | except Exception as e: 97 | ret = {"count": 0, "error": str(e)} 98 | return ret 99 | 100 | def create_command(command, output): 101 | """Merge the command and its results. 102 | 103 | Parameters 104 | ---------- 105 | command : str 106 | Command made 107 | output : dict 108 | Output of the command 109 | 110 | Returns 111 | ------- 112 | command_output : dict 113 | Command and its results 114 | """ 115 | command_output = {} 116 | command_output["command"] = command 117 | command_output["output"] = output 118 | return command_output 119 | 120 | def writefile_s3(bucket, key, filename): 121 | """Write a local file to a s3 bucket. 122 | 123 | Parameters 124 | ---------- 125 | bucket : str 126 | Bucket in which the file is uploaded 127 | key : str 128 | Path where the file is uploaded 129 | filename : str 130 | File to be uploaded 131 | 132 | Returns 133 | ------- 134 | response : dict 135 | Response of the request made 136 | """ 137 | response = S3_CLIENT.meta.client.upload_file(filename, bucket, key) 138 | return response 139 | 140 | def create_s3_if_not_exists(region, bucket_name): 141 | """Create a s3 bucket if needed during the investigation. 
142 | 143 | Parameters 144 | ---------- 145 | region : str 146 | Region where to create the s3 147 | bucket_name : str 148 | Name of the new bucket 149 | 150 | Returns 151 | ------- 152 | bucket_name : str 153 | Name of the newly created bucket 154 | 155 | Note that for region=us-east-1, AWS necessitates that you leave LocationConstraint blank 156 | https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html#API_CreateBucket_RequestBody 157 | """ 158 | s3 = boto3.client("s3", region_name=region) 159 | response = s3.list_buckets() 160 | 161 | for bkt in response["Buckets"]: 162 | if bkt["Name"] == bucket_name: 163 | return bucket_name 164 | 165 | print(f"[+] Logs bucket does not exists, creating it now: {bucket_name}") 166 | 167 | bucket_config = dict() 168 | if region != "us-east-1": 169 | bucket_config["CreateBucketConfiguration"] = {"LocationConstraint": region} 170 | 171 | try: 172 | response = s3.create_bucket(Bucket=bucket_name, **bucket_config) 173 | except ClientError as e: 174 | print(e) 175 | exit(-1) 176 | return bucket_name 177 | 178 | def write_file(file, mode, content): 179 | """Write content to a new file. 180 | 181 | Parameters 182 | ---------- 183 | file : str 184 | File to be filled 185 | mode : str 186 | Opening mode of the file (w, a, etc) 187 | content : str 188 | Content to be written in the file 189 | """ 190 | with open(file, mode) as f: 191 | f.write(content) 192 | 193 | def create_folder(path): 194 | """Create a folder. 195 | 196 | Parameters 197 | ---------- 198 | path : str 199 | Path of the folder to be created 200 | """ 201 | os.makedirs(path, exist_ok=True) 202 | 203 | def run_s3_dl(bucket, path, prefix=""): 204 | """Handle the steps of the content's download of a s3 bucket. 205 | 206 | Parameters 207 | ---------- 208 | bucket : str 209 | Bucket being copied 210 | path : str 211 | Local path where to paste the content of the bucket 212 | prefix : str, optional 213 | Specific folder in the bucket to download 214 | """ 215 | paginator = S3_CLIENT.get_paginator('list_objects_v2') 216 | operation_parameters = {"Bucket": bucket, "Prefix": prefix} 217 | 218 | for page in paginator.paginate(**operation_parameters): 219 | if 'Contents' in page: 220 | for s3_object in page['Contents']: 221 | s3_key = s3_object['Key'] 222 | local_path = os.path.join(path, s3_key) 223 | 224 | local_directory = os.path.dirname(local_path) 225 | create_folder(local_directory) 226 | 227 | if not local_path.endswith("/"): 228 | S3_CLIENT.download_file(bucket, s3_key, local_path) 229 | 230 | def write_s3(bucket, key, content): 231 | """Write content to s3 bucket. 232 | 233 | Parameters 234 | ---------- 235 | bucket : str 236 | Name of the bucket in which we put data 237 | key : str 238 | Path in the bucket 239 | content : str 240 | Data to be put 241 | 242 | Returns 243 | ------- 244 | response : dict 245 | Results of the request made 246 | """ 247 | response = S3_CLIENT.put_object(Bucket=bucket, Key=key, Body=content) 248 | return response 249 | 250 | def copy_s3_bucket(src_bucket, dst_bucket, service, region, prefix=""): 251 | """Copy the content at a specific path of a s3 bucket to another. 
252 | 253 | Parameters 254 | ---------- 255 | src_bucket : str 256 | Bucket where all the logs of the corresponding service are stored 257 | dst_bucket : str 258 | Bucket used in incident response 259 | service : str 260 | Service of which the logs are copied (s3, ec2, etc) 261 | region : str 262 | Region where the service is scanned 263 | prefix : str, optional 264 | Path of the data to copy to reduce the amount of data 265 | """ 266 | s3res = boto3.resource("s3") 267 | 268 | paginator = S3_CLIENT.get_paginator('list_objects_v2') 269 | operation_parameters = {"Bucket": src_bucket, "Prefix": prefix} 270 | 271 | for page in paginator.paginate(**operation_parameters): 272 | if 'Contents' in page: 273 | for key in page['Contents']: 274 | copy_source = {"Bucket": src_bucket, "Key": key["Key"]} 275 | new_key = f"{region}/logs/{service}/{src_bucket}/{key['Key']}" 276 | try_except(s3res.meta.client.copy, copy_source, dst_bucket, new_key) 277 | 278 | def copy_or_write_s3(key, value, dst_bucket, region): 279 | """Depending on the action content of value (0 or 1), write the data to our s3 bucket, or copy the data from the source bucket to our bucket. 280 | 281 | Parameters 282 | ---------- 283 | key : str 284 | Name of the service 285 | value : dict 286 | Either logs of the service or the buckets where the logs are stored, based on the const LOGS_RESULTS 287 | dst_bucket : str 288 | Bucket where to put the data 289 | region : str 290 | Region where the service is scanned 291 | """ 292 | if value["action"] == 0: 293 | write_s3( 294 | dst_bucket, 295 | f"{region}/logs/{key}.json", 296 | dumps(value["results"], indent=4, default=str), 297 | ) 298 | else: 299 | for src_bucket in value["results"]: 300 | prefix = "" 301 | 302 | if "|" in src_bucket: 303 | split = src_bucket.split("|") 304 | bucket = split[0] 305 | prefix = split[1] 306 | else: 307 | bucket = src_bucket 308 | 309 | copy_s3_bucket(bucket, dst_bucket, key, region, prefix) 310 | 311 | def write_or_dl(key, value, conf): 312 | """Depending on the action content of value (0 or 1), write the data to a single json file, or download the content of a s3 bucket. 313 | 314 | Parameters 315 | ---------- 316 | key : str 317 | Name of the service 318 | value : dict 319 | Either logs of the service or the buckets where the logs are stored, based on the const LOGS_RESULTS 320 | conf : str 321 | Path to write the results 322 | """ 323 | if value["action"] == 0: 324 | write_file( 325 | conf + f"/{key}.json", 326 | "w", 327 | dumps(value["results"], indent=4, default=str), 328 | ) 329 | else: 330 | for bucket in value["results"]: 331 | path = f"{conf}/{key}" 332 | create_folder(path) 333 | prefix = "" 334 | 335 | if "|" in bucket: 336 | split = bucket.split("|") 337 | bucket = split[0] 338 | prefix = split[1] 339 | run_s3_dl(bucket, path, prefix) 340 | 341 | def athena_query(region, query, bucket): 342 | """Run an Athena query and verify it worked.
343 | 344 | region : str 345 | Region where the query is made 346 | query : str 347 | Query to be run 348 | bucket : str 349 | Bucket where the query results are written 350 | 351 | Returns 352 | ------- 353 | response : dict 354 | Response of the executed query 355 | """ 356 | athena = boto3.client("athena", region_name=region) 357 | 358 | #try: 359 | result = athena.start_query_execution( 360 | QueryString=query, 361 | ResultConfiguration={"OutputLocation": bucket} 362 | ) 363 | #except Exception: 364 | # print(f"{ERROR} Verify the output bucket you entered is valid and in the same region.") 365 | 366 | id = result["QueryExecutionId"] 367 | status = "QUEUED" 368 | 369 | while status != "SUCCEEDED": 370 | response = athena.get_query_execution(QueryExecutionId=id) 371 | status = response["QueryExecution"]["Status"]["State"] 372 | 373 | if status == "FAILED" or status == "CANCELLED": 374 | print(f'[!] invictus-aws.py: error: {response["QueryExecution"]["Status"]["AthenaError"]["ErrorMessage"]}') 375 | exit(-1) 376 | 377 | return response 378 | 379 | def rename_file_s3(bucket, folder, new_key, old_key): 380 | """Rename a s3 file (copy it under the new key, then delete the old one). 381 | 382 | Parameters 383 | ---------- 384 | bucket : str 385 | Bucket where the file to rename is 386 | folder : str 387 | Folder where the file to rename is 388 | new_key : str 389 | New name of the file 390 | old_key : str 391 | Old name of the file 392 | """ 393 | S3_CLIENT.copy_object( 394 | Bucket=bucket, 395 | Key=f'{folder}{new_key}', 396 | CopySource = {"Bucket": bucket, "Key": f"{folder}{old_key}"} 397 | ) 398 | 399 | S3_CLIENT.delete_object( 400 | Bucket=bucket, 401 | Key=f"{folder}{old_key}" 402 | ) 403 | 404 | def get_table(ddl, get_db): 405 | """Get the table name out of a ddl file. 406 | 407 | Parameters 408 | ---------- 409 | ddl : str 410 | Ddl file 411 | get_db : bool 412 | False if you only want the table name. True if you also want the db name that can be present just before the table name. 413 | 414 | Returns 415 | ------- 416 | table : str 417 | Name of the table 418 | table_content : list 419 | Content of the ddl file, split at the first opening parenthesis 420 | """ 421 | with open(ddl, "rt") as ddl: 422 | data = ddl.read() 423 | table_content = data.split("(", 1) 424 | table = table_content[0].strip().split(" ")[-1] 425 | if "." in table and not get_db: 426 | table = table.split(".")[1] 427 | return table, table_content 428 | 429 | def get_s3_in_ddl(ddl): 430 | """Get the s3 location out of a ddl file. 431 | 432 | Parameters 433 | ---------- 434 | ddl : str 435 | Ddl file defining the table, 436 | containing a LOCATION clause that points 437 | to the s3 bucket where the table data is stored 438 | 439 | Returns 440 | ------- 441 | s3 : str 442 | S3 location (URI) of the table data, 443 | extracted from the LOCATION clause 444 | of the ddl file 445 | """ 446 | with open(ddl, "rt") as ddl: 447 | data = ddl.read() 448 | location = data.split("LOCATION") 449 | s3 = location[1].strip().split("'")[1] 450 | return s3 451 | 452 | def get_bucket_and_prefix(bucket): 453 | """Split bucket name and prefix of the given bucket.
454 | 455 | Parameters 456 | ---------- 457 | bucket : str 458 | Bucket to split up 459 | 460 | Returns 461 | ------- 462 | bucket_name : str 463 | Name of the bucket 464 | prefix : str 465 | Path in the bucket 466 | """ 467 | if bucket.startswith("s3://"): 468 | bucket = bucket.replace("s3://", "") 469 | 470 | el = bucket.split("/", 1) 471 | bucket_name = el[0] 472 | 473 | if len(el) > 1 : 474 | prefix = el[1] 475 | else: 476 | prefix = None 477 | 478 | return bucket_name, prefix 479 | 480 | def create_tmp_bucket(region, bucket_name): 481 | """Create a tmp bucket (only when the local flag is present for the logs analysis). 482 | 483 | Parameters 484 | ---------- 485 | region : str 486 | Name of the region where the analysis is done 487 | bucket_name : 488 | Name of the tmp bucket 489 | 490 | Note that for region=us-east-1, AWS necessitates that you leave LocationConstraint blank 491 | https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html#API_CreateBucket_RequestBody 492 | """ 493 | s3 = boto3.client("s3", region_name=region) 494 | 495 | bucket_config = dict() 496 | if region != "us-east-1": 497 | bucket_config["CreateBucketConfiguration"] = {"LocationConstraint": region} 498 | 499 | try: 500 | s3.create_bucket(Bucket=bucket_name, **bucket_config) 501 | except ClientError as e: 502 | print(e) 503 | exit(-1) 504 | 505 | 506 | ##################### 507 | # RANDOM GENERATION # 508 | ##################### 509 | 510 | date = datetime.date.today().strftime("%Y-%m-%d") 511 | random_chars = get_random_chars(5) 512 | PREPARATION_BUCKET = "invictus-aws-" + date + "-" + random_chars 513 | LOGS_BUCKET = "invictus-aws-" + date + "-" + random_chars 514 | 515 | ######### 516 | # FILES # 517 | ######### 518 | 519 | ROOT_FOLDER = "./results/" 520 | 521 | ########## 522 | # COLORS # 523 | ########## 524 | 525 | OKGREEN = '\033[92m' 526 | WARNING = '\033[93m' 527 | FAIL = '\033[91m' 528 | ENDC = '\033[0m' 529 | BOLD = '\033[1m' 530 | UNDERLINE = '\033[4m' 531 | 532 | ########### 533 | # CLIENTS # 534 | ########### 535 | 536 | S3_CLIENT = None 537 | CLOUDWATCH_CLIENT = None 538 | CLOUDTRAIL_CLIENT = None 539 | ROUTE53_CLIENT = None 540 | IAM_CLIENT = None 541 | GUARDDUTY_CLIENT = None 542 | WAF_CLIENT = None 543 | LAMBDA_CLIENT = None 544 | EC2_CLIENT = None 545 | EB_CLIENT = None 546 | ROUTE53_RESOLVER_CLIENT = None 547 | DYNAMODB_CLIENT = None 548 | RDS_CLIENT = None 549 | EKS_CLIENT = None 550 | ELS_CLIENT = None 551 | SECRETS_CLIENT = None 552 | KINESIS_CLIENT = None 553 | INSPECTOR_CLIENT = None 554 | DETECTIVE_CLIENT = None 555 | MACIE_CLIENT = None 556 | SSM_CLIENT = None 557 | ATHENA_CLIENT = None 558 | 559 | def set_clients(region): 560 | """Set the clients to the given region. 
561 | 562 | Parameters 563 | ---------- 564 | region : str 565 | Region where the client will be used 566 | """ 567 | 568 | global S3_CLIENT 569 | global CLOUDWATCH_CLIENT 570 | global CLOUDTRAIL_CLIENT 571 | global ROUTE53_CLIENT 572 | global IAM_CLIENT 573 | 574 | global LAMBDA_CLIENT 575 | global WAF_CLIENT 576 | global EC2_CLIENT 577 | global EB_CLIENT 578 | global ROUTE53_RESOLVER_CLIENT 579 | global DYNAMODB_CLIENT 580 | global RDS_CLIENT 581 | global EKS_CLIENT 582 | global ELS_CLIENT 583 | global SECRETS_CLIENT 584 | global KINESIS_CLIENT 585 | global GUARDDUTY_CLIENT 586 | global INSPECTOR_CLIENT 587 | global DETECTIVE_CLIENT 588 | global MACIE_CLIENT 589 | global SSM_CLIENT 590 | global ATHENA_CLIENT 591 | 592 | S3_CLIENT = boto3.client("s3") 593 | CLOUDWATCH_CLIENT = boto3.client("cloudwatch") 594 | CLOUDTRAIL_CLIENT = boto3.client("cloudtrail") 595 | ROUTE53_CLIENT = boto3.client("route53") 596 | IAM_CLIENT = boto3.client("iam") 597 | 598 | WAF_CLIENT = boto3.client("wafv2", region_name=region) 599 | LAMBDA_CLIENT = boto3.client("lambda", region_name=region) 600 | EC2_CLIENT = boto3.client("ec2", region_name=region) 601 | EB_CLIENT = boto3.client("elasticbeanstalk", region_name=region) 602 | ROUTE53_RESOLVER_CLIENT = boto3.client("route53resolver", region_name=region) 603 | DYNAMODB_CLIENT = boto3.client("dynamodb", region_name=region) 604 | RDS_CLIENT = boto3.client("rds", region_name=region) 605 | EKS_CLIENT = boto3.client("eks", region_name=region) 606 | ELS_CLIENT = boto3.client("es", region_name=region) 607 | SECRETS_CLIENT = boto3.client("secretsmanager", region_name=region) 608 | KINESIS_CLIENT = boto3.client("kinesis", region_name=region) 609 | GUARDDUTY_CLIENT = boto3.client("guardduty", region_name=region) 610 | INSPECTOR_CLIENT = boto3.client("inspector2", region_name=region) 611 | DETECTIVE_CLIENT = boto3.client("detective", region_name=region) 612 | MACIE_CLIENT = boto3.client("macie2", region_name=region) 613 | SSM_CLIENT = boto3.client("ssm", region_name=region) 614 | ATHENA_CLIENT = boto3.client("athena", region_name=region) 615 | 616 | ######## 617 | # MISC # 618 | ######## 619 | 620 | POSSIBLE_STEPS = ["1", "2", "3", "4"] 621 | 622 | ''' 623 | -1 means we didn't enter in the enumerate function associated 624 | 0 means we ran the associated function but the service wasn't available 625 | ''' 626 | ENUMERATION_SERVICES = { 627 | "s3": {"count": -1, "elements": [], "ids": []}, 628 | "wafv2": {"count": -1, "elements": [], "ids": []}, 629 | "lambda": {"count": -1, "elements": [], "ids": []}, 630 | "vpc": {"count": -1, "elements": [], "ids": []}, 631 | "elasticbeanstalk": {"count": -1, "elements": [], "ids": []}, 632 | "route53": {"count": -1, "elements": [], "ids": []}, 633 | "ec2": {"count": -1, "elements": [], "ids": []}, 634 | "iam": {"count": -1, "elements": [], "ids": []}, 635 | "dynamodb": {"count": -1, "elements": [], "ids": []}, 636 | "rds": {"count": -1, "elements": [], "ids": []}, 637 | "eks": {"count": -1, "elements": [], "ids": []}, 638 | "els": {"count": -1, "elements": [], "ids": []}, 639 | "secrets": {"count": -1, "elements": [], "ids": []}, 640 | "kinesis": {"count": -1, "elements": [], "ids": []}, 641 | "cloudwatch": {"count": -1, "elements": [], "ids": []}, 642 | "guardduty": {"count": -1, "elements": [], "ids": []}, 643 | "detective": {"count": -1, "elements": [], "ids": []}, 644 | "inspector": {"count": -1, "elements": [], "ids": []}, 645 | "macie": {"count": -1, "elements": [], "ids": []}, 646 | "cloudtrail-logs": {"count": -1, "elements": [], 
"ids": []}, 647 | "cloudtrail": {"count": -1, "elements": [], "ids": []}, 648 | } 649 | 650 | LOGS_RESULTS = { 651 | "guardduty": {"action": -1,"results": []}, 652 | "cloudtrail-logs": {"action": -1,"results": []}, 653 | "wafv2": {"action": -1,"results": []}, 654 | "vpc": {"action": -1,"results": []}, 655 | "cloudwatch": {"action": -1,"results": []}, 656 | "s3": {"action": -1,"results": []}, 657 | "inspector": {"action": -1,"results": []}, 658 | "macie": {"action": -1,"results": []}, 659 | "rds": {"action": -1,"results": []}, 660 | "route53": {"action": -1,"results": []} 661 | } 662 | 663 | --------------------------------------------------------------------------------