├── LICENSE
├── README.md
├── Sample_Output
│   ├── aws_service_enum_sample_output.png
│   ├── azure_service_enum_sample_output.png
│   └── gcp_service_enum_sample_output.png
├── aws_service_enum
│   ├── README.md
│   ├── aws_enum_services.py
│   └── requirements.txt
├── azure_service_enum
│   ├── README.md
│   ├── azure_enum_services.py
│   └── requirements.txt
└── gcp_service_enum
    ├── README.md
    ├── gcp_enum_services.py
    └── requirements.txt
/LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner.
For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 
202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | For more information about the usage of these scripts, please have a look at the respective tool folders linked below: 2 | 3 | * [AWS_SERVICE_ENUM](/aws_service_enum) 4 | * [GCP_SERVICE_ENUM](/gcp_service_enum) 5 | * [AZURE_SERVICE_ENUM](/azure_service_enum) 6 | 7 | ## License 8 | 9 | Copyright 2023 NotSoSecure Global Services Limited 10 | 11 | Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at 12 | 13 | http://www.apache.org/licenses/LICENSE-2.0 14 | Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 15 | -------------------------------------------------------------------------------- /Sample_Output/aws_service_enum_sample_output.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/NotSoSecure/cloud-service-enum/896a84e7b36ecf227cfb512207468ecd6377f58a/Sample_Output/aws_service_enum_sample_output.png -------------------------------------------------------------------------------- /Sample_Output/azure_service_enum_sample_output.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/NotSoSecure/cloud-service-enum/896a84e7b36ecf227cfb512207468ecd6377f58a/Sample_Output/azure_service_enum_sample_output.png -------------------------------------------------------------------------------- /Sample_Output/gcp_service_enum_sample_output.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/NotSoSecure/cloud-service-enum/896a84e7b36ecf227cfb512207468ecd6377f58a/Sample_Output/gcp_service_enum_sample_output.png -------------------------------------------------------------------------------- /aws_service_enum/README.md: -------------------------------------------------------------------------------- 1 | This tool is helpful in scenarios where you have obtained AWS credentials (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`) through SSRF or any other vulnerability, but you are not sure whether those credentials have access to other services. Instead of just trying the top 10 AWS services (S3, EC2, etc.), you can run this tool and it will enumerate the non-intrusive features of each service (for example, it will only list buckets; it will not create or modify them). 2 | 3 | ## Requirements 4 | 5 | * `pip install -r requirements.txt` 6 | 7 | ## Usage 8 | 9 | - **aws_enum_services.py**: Without any options, the script uses the default AWS credentials stored in the credentials file. 10 | - **aws_enum_services.py --profile**: Users can specify a profile from the AWS credentials file to be used by the script. 11 | - **aws_enum_services.py --access-key ACCESS_KEY --secret-key SECRET_KEY --session-token SESSION_TOKEN**: With this option, users can manually input their AWS access key, secret key, and session token for authentication.
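As a quick illustration (a minimal sketch; the profile name, key values, and output filename below are placeholders), the three modes above map to the following invocations, using the flags documented in the usage output that follows:

~~~
# Use the default credentials from the AWS credentials file (~/.aws/credentials)
python3 aws_enum_services.py

# Use a named profile from the credentials file
python3 aws_enum_services.py --profile <profile-name>

# Supply credentials directly, limit the scan to a few services and one region,
# and write the results to a JSON file (all values are placeholders)
python3 aws_enum_services.py --access-key <ACCESS_KEY> --secret-key <SECRET_KEY> --session-token <SESSION_TOKEN> --services s3 ec2 iam --region us-east-1 -o output.json
~~~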
12 | 13 | ~~~ 14 | usage: aws_enum_services.py [-h] [--access-key ACCESS_KEY] [--secret-key SECRET_KEY] [--session-token SESSION_TOKEN] 15 | [--list-services] 16 | [--services {ec2,s3,rds,lambda,cloudfront,dynamodb,iam,sns,sqs,ecr,elasticbeanstalk,route53,cloudwatch,codepipeline,sagemaker,secretsmanager,glue,stepfunctions,eks,cloudtrail,kinesis,redshift,elasticache,apigateway,cloudformation,appsync,ssm,elastictranscoder,datapipeline,mediaconvert,storagegateway,workspaces,cloud9,lex-models,iot,medialive,datasync,emr,athena,pinpoint,efs,mediapackage,mq,organizations,detective,opsworks,codecommit,appmesh,backup,mediapackage-vod,mediastore} [{ec2,s3,rds,lambda,cloudfront,dynamodb,iam,sns,sqs,ecr,elasticbeanstalk,route53,cloudwatch,codepipeline,sagemaker,secretsmanager,glue,stepfunctions,eks,cloudtrail,kinesis,redshift,elasticache,apigateway,cloudformation,appsync,ssm,elastictranscoder,datapipeline,mediaconvert,storagegateway,workspaces,cloud9,lex-models,iot,medialive,datasync,emr,athena,pinpoint,efs,mediapackage,mq,organizations,detective,opsworks,codecommit,appmesh,backup,mediapackage-vod,mediastore} ...]] 17 | [--region REGION [REGION ...]] [--thread THREAD] [--output-file OUTPUT_FILE] 18 | 19 | options: 20 | -h, --help show this help message and exit 21 | --access-key ACCESS_KEY 22 | Provide Access key 23 | --secret-key SECRET_KEY 24 | Provide Secrect Key 25 | --session-token SESSION_TOKEN 26 | Provide session token if available 27 | --list-services Provide list of services 28 | --services {ec2,s3,rds,lambda,cloudfront,dynamodb,iam,sns,sqs,ecr,elasticbeanstalk,route53,cloudwatch,codepipeline,sagemaker,secretsmanager,glue,stepfunctions,eks,cloudtrail,kinesis,redshift,elasticache,apigateway,cloudformation,appsync,ssm,elastictranscoder,datapipeline,mediaconvert,storagegateway,workspaces,cloud9,lex-models,iot,medialive,datasync,emr,athena,pinpoint,efs,mediapackage,mq,organizations,detective,opsworks,codecommit,appmesh,backup,mediapackage-vod,mediastore} [{ec2,s3,rds,lambda,cloudfront,dynamodb,iam,sns,sqs,ecr,elasticbeanstalk,route53,cloudwatch,codepipeline,sagemaker,secretsmanager,glue,stepfunctions,eks,cloudtrail,kinesis,redshift,elasticache,apigateway,cloudformation,appsync,ssm,elastictranscoder,datapipeline,mediaconvert,storagegateway,workspaces,cloud9,lex-models,iot,medialive,datasync,emr,athena,pinpoint,efs,mediapackage,mq,organizations,detective,opsworks,codecommit,appmesh,backup,mediapackage-vod,mediastore} ...] 29 | Services that need to be enumerated 30 | --region REGION [REGION ...] 31 | Provide regions, eg --region us-east-1, eu-north-1 32 | --thread THREAD, -t THREAD 33 | Treading count 34 | --output-file OUTPUT_FILE, -o OUTPUT_FILE 35 | json output in file 36 | ~~~ 37 | 38 | Most of the options are pretty self-explanatory, however, I would like to draw your attention towards the following 3 options: 39 | 40 | `--region` this will allow you to specify a default region. If no region is selected it will enumerate over all regions. 
41 | 42 | `--output-file` saves the results in JSON format 43 | 44 | `--services` specify the services that you want to enumerate 45 | 46 | You can run `--list-services` to list all the available services that this tool can currently enumerate 47 | 48 | ## Sample Output 49 | 50 | ![](/Sample_Output/aws_service_enum_sample_output.png) 51 | 52 | ## Author 53 | 54 | * [Raunak Parmar](https://www.linkedin.com/in/trouble1raunak/) -------------------------------------------------------------------------------- /aws_service_enum/aws_enum_services.py: -------------------------------------------------------------------------------- 1 | import boto3, argparse, threading, json 2 | from tabulate import tabulate 3 | from botocore.exceptions import ClientError 4 | import crayons, warnings, concurrent.futures 5 | from concurrent.futures import ThreadPoolExecutor, as_completed 6 | from prettytable import PrettyTable 7 | 8 | 9 | warnings.filterwarnings("ignore", category=FutureWarning) 10 | 11 | choices = ['ec2','s3','rds','lambda','cloudfront','dynamodb','iam','sns','sqs','ecr','elasticbeanstalk','route53','cloudwatch','codepipeline','sagemaker','secretsmanager','glue','stepfunctions','eks','cloudtrail','kinesis','redshift','elasticache', 'ecs', 12 | 'apigateway','cloudformation','appsync','ssm','elastictranscoder','datapipeline','mediaconvert','storagegateway','workspaces','cloud9','lex-models','iot','medialive','datasync','emr','athena','pinpoint','efs','mediapackage','mq','organizations','detective','opsworks','codecommit','appmesh','backup','mediapackage-vod','mediastore', 'serverlessrepo'] 13 | 14 | 15 | access_key_profile = None 16 | secret_key_profile = None 17 | session_token_profile = None 18 | 19 | session = boto3.Session() 20 | credentials = session.get_credentials() 21 | 22 | if credentials: 23 | access_key_profile = credentials.access_key 24 | secret_key_profile = credentials.secret_key 25 | session_token_profile = credentials.token 26 | else: 27 | print("Error: Configure the credentials manually or use profile.
For more information, check help.") 28 | 29 | parser = argparse.ArgumentParser() 30 | parser.add_argument('--access-key', help='Provide Access key', default=access_key_profile, required=False) 31 | parser.add_argument('--secret-key', help='Provide Secrect Key', default=secret_key_profile, required=False) 32 | parser.add_argument('--session-token', help='Provide session token if available', default=session_token_profile, required=False) 33 | parser.add_argument('--profile', help='AWS profile name to use for credentials', required=False) 34 | parser.add_argument('--list-services', help='Provide list of services', required=False, action='store_true') 35 | parser.add_argument('--services', help='Services that need to be enumerated', nargs='+', required=False, choices=choices) 36 | parser.add_argument('--region', help='Provide regions, eg --region us-east-1, eu-north-1', required=False, nargs='+') 37 | parser.add_argument('--thread', "-t", help='Treading count', required=False) 38 | parser.add_argument('--output-file', "-o", help='json output in file', required=False) 39 | args = parser.parse_args() 40 | 41 | 42 | json_body = {} 43 | 44 | if args.thread: 45 | Thread_Count = int(args.thread) 46 | else: 47 | Thread_Count = 5 48 | 49 | if args.list_services: 50 | services = [ 51 | 'ec2', 's3', 'rds', 'lambda', 'cloudfront', 'dynamodb', 'iam', 'sns', 'sqs', 'ecr', 52 | 'elasticbeanstalk', 'route53', 'cloudwatch', 'codepipeline', 'sagemaker', 'secretsmanager', 53 | 'glue', 'stepfunctions', 'eks', 'cloudtrail', 'kinesis', 'redshift', 'elasticache', 54 | 'apigateway', 'cloudformation', 'appsync', 'ssm', 'elastictranscoder', 'datapipeline', 55 | 'mediaconvert', 'storagegateway', 'workspaces', 'cloud9', 'lex-models', 'iot', 'medialive', 56 | 'datasync', 'emr', 'athena', 'pinpoint', 'efs', 'mediapackage', 'mq', 'organizations', 57 | 'detective', 'opsworks', 'codecommit', 'appmesh', 'backup', 'mediapackage-vod', 'mediastore', 'serverlessrepo' 58 | ] 59 | 60 | table = PrettyTable() 61 | table.field_names = ['Services'] 62 | table.align['Services'] = 'l' # Align the text to the left side 63 | 64 | for service in services: 65 | table.add_row([service]) 66 | 67 | print(table) 68 | exit() 69 | if args.profile: 70 | session = boto3.Session(profile_name=args.profile) 71 | credentials = session.get_credentials() 72 | access_key = credentials.access_key 73 | secret_key = credentials.secret_key 74 | session_token = credentials.token 75 | else: 76 | access_key = args.access_key 77 | secret_key = args.secret_key 78 | session_token = args.session_token 79 | 80 | 81 | #access_key = args.access_key 82 | #secret_key = args.secret_key 83 | #session_token = args.session_token 84 | 85 | 86 | if args.region == None: 87 | regions = ["eu-north-1","ap-south-1","eu-west-3","eu-west-2","eu-west-1","ap-northeast-3", 88 | "ap-northeast-2","ap-northeast-1","sa-east-1","ca-central-1","ap-southeast-1", 89 | "ap-southeast-2","eu-central-1","us-east-1","us-east-2","us-west-1","us-west-2"] 90 | else: 91 | print(crayons.green("[+] Looking for specified region: ", bold=True), crayons.magenta(", ".join(args.region))) 92 | regions = args.region 93 | 94 | def get_client(service_name, region_name=""): 95 | 96 | # config = Config(connect_timeout=20, 97 | # read_timeout=20, 98 | # retries={'max_attempts': 10}, 99 | # max_pool_connections=MAX_POOL_CONNECTIONS * 2) 100 | 101 | 102 | client = boto3.client( 103 | service_name, 104 | aws_access_key_id=access_key, 105 | aws_secret_access_key=secret_key, 106 | aws_session_token=session_token, 107 | 
region_name=region_name 108 | ) 109 | 110 | return client 111 | 112 | def describe_ec2_instances(): 113 | started = "List EC2 instances:" 114 | instance_data = [] 115 | 116 | def describe_instances(region): 117 | 118 | ec2_client = get_client('ec2', region_name=region) 119 | response = ec2_client.describe_instances() 120 | 121 | instances = response['Reservations'] 122 | 123 | for reservation in instances: 124 | for instance in reservation['Instances']: 125 | instance_data.append([ 126 | instance['InstanceId'], 127 | instance['State']['Name'], 128 | instance['InstanceType'], 129 | instance['LaunchTime'], 130 | region 131 | ]) 132 | processes = [] 133 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 134 | for region in regions: 135 | processes.append(executor.submit(describe_instances, region)) 136 | 137 | json_body["ec2"] = instance_data 138 | 139 | if instance_data == []: 140 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 141 | return 142 | 143 | 144 | print(crayons.green("[+] " + started, bold=True), "\r\n" ,tabulate(instance_data, headers=['Instance ID', 'Instance State', 'Instance Type', 'Launch Time', 'Region'], tablefmt='psql')) 145 | 146 | def describe_vpcs(): 147 | started = "List VPCs:" 148 | vpc_data = [] 149 | 150 | def describe_vpcs_in_region(region): 151 | ec2_client = get_client('ec2', region_name=region) 152 | response = ec2_client.describe_vpcs() 153 | vpcs = response['Vpcs'] 154 | for vpc in vpcs: 155 | vpc_data.append([ 156 | vpc['VpcId'], 157 | vpc['CidrBlock'], 158 | vpc['State'], 159 | region 160 | ]) 161 | 162 | processes = [] 163 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 164 | for region in regions: 165 | processes.append(executor.submit(describe_vpcs_in_region, region)) 166 | 167 | if vpc_data == []: 168 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 169 | return 170 | 171 | print(crayons.green("[+] " + started, bold=True)) 172 | print(tabulate(vpc_data, headers=['VPC ID', 'CIDR Block', 'State', 'Region'], tablefmt='psql')) 173 | 174 | 175 | def list_s3_buckets(): 176 | started = "List S3 buckets:" 177 | s3_client = get_client('s3', region_name=None) 178 | response = s3_client.list_buckets() 179 | buckets = response['Buckets'] 180 | bucket_data = [] 181 | for bucket in buckets: 182 | bucket_data.append([ 183 | bucket['Name'], 184 | bucket['CreationDate'] 185 | ]) 186 | json_body["s3"] = bucket_data 187 | if bucket_data == []: 188 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 189 | return 190 | print(crayons.green("[+] " + started, bold=True), "\r\n" ,tabulate(bucket_data, headers=['Bucket Name', 'Creation Date'], tablefmt='psql')) 191 | 192 | def describe_rds_instances(): 193 | started = "List RDS instances:" 194 | instance_data = [] 195 | 196 | def describe_instances(region): 197 | rds_client = get_client('rds', region_name=region) 198 | response = rds_client.describe_db_instances() 199 | instances = response['DBInstances'] 200 | for instance in instances: 201 | instance_data.append([ 202 | instance['DBInstanceIdentifier'], 203 | instance['DBInstanceClass'], 204 | instance['Engine'], 205 | instance['DBInstanceStatus'], 206 | region 207 | ]) 208 | 209 | processes = [] 210 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 211 | for region in regions: 212 | processes.append(executor.submit(describe_instances, region)) 213 | 214 | json_body["rds"] = instance_data 215 | 216 | if not instance_data: 217 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 218 | return 219 | 220 | print(crayons.green("[+] " + started, bold=True), "\r\n", 221 | tabulate(instance_data, headers=['Instance Identifier', 'Instance Class', 'Engine', 'Instance Status', 'Region'], tablefmt='psql')) 222 | 223 | def list_lambda_functions(): 224 | started = "List Lambda functions:" 225 | function_data = [] 226 | 227 | def describe_lambda_functions(region): 228 | lambda_client = get_client('lambda', region_name=region) 229 | response = lambda_client.list_functions() 230 | functions = response['Functions'] 231 | for function in functions: 232 | function_data.append([ 233 | function['FunctionName'], 234 | function['Runtime'], 235 | function['LastModified'] 236 | ]) 237 | 238 | processes = [] 239 | ec2_client = boto3.client('ec2') 240 | regions = [region['RegionName'] for region in ec2_client.describe_regions()['Regions']] 241 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 242 | for region in regions: 243 | processes.append(executor.submit(describe_lambda_functions, region)) 244 | 245 | json_body["lambda"] = function_data 246 | 247 | if not function_data: 248 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 249 | return 250 | print(crayons.green("[+] " + started, bold=True), "\r\n", 251 | tabulate(function_data, headers=['Function Name', 'Runtime', 'Last Modified'], tablefmt='psql')) 252 | 253 | 254 | def list_cloudfront_distributions(): 255 | started = "List CloudFront distributions:" 256 | distribution_data = [] 257 | 258 | def list_distributions(region): 259 | cloudfront_client = get_client('cloudfront', region_name=region) 260 | response = cloudfront_client.list_distributions() 261 | 262 | if "items" in response['DistributionList']: 263 | distributions = response['DistributionList']['Items'] 264 | for distribution in distributions: 265 | distribution_data.append([ 266 | distribution['Id'], 267 | distribution['ARN'], 268 | distribution['Status'], 269 | region 270 | ]) 271 | 272 | processes = [] 273 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 274 | for region in regions: 275 | processes.append(executor.submit(list_distributions, region)) 276 | 277 | json_body["cloudfront"] = distribution_data 278 | 279 | if not distribution_data: 280 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 281 | return 282 | 283 | print(crayons.green("[+] " + started, bold=True), "\r\n", 284 | tabulate(distribution_data, headers=['Distribution ID', 'ARN', 'Status', 'Region'], tablefmt='psql')) 285 | 286 | def list_dynamodb_tables(): 287 | started = "List DynamoDB tables:" 288 | table_data = [] 289 | 290 | def list_tables(region): 291 | dynamodb_client = get_client('dynamodb', region_name=region) 292 | response = dynamodb_client.list_tables() 293 | 294 | tables = response['TableNames'] 295 | for table in tables: 296 | table_data.append([table, region]) 297 | 298 | processes = [] 299 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 300 | for region in regions: 301 | processes.append(executor.submit(list_tables, region)) 302 | json_body["dynamodb"] = table_data 303 | if not table_data: 304 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 305 | return 306 | 307 | print(crayons.green("[+] " + started, bold=True), "\r\n", 308 | tabulate(table_data, headers=['Table Name', 'Region'], tablefmt='psql')) 309 | 310 | 311 | def list_iam_users(): 312 | started = "List IAM users:" 313 | user_data = [] 314 | iam_client = get_client('iam') 315 | response = iam_client.list_users() 316 | groups = iam_client.list_groups() 317 | users = response['Users'] 318 | 319 | for user in users: 320 | policies = [] 321 | attached_policies = iam_client.list_attached_user_policies(UserName=user['UserName']) 322 | for policy in attached_policies['AttachedPolicies']: 323 | policies.append(policy['PolicyName']) 324 | user_data.append([ 325 | user['UserName'], 326 | user['UserId'], 327 | user['Arn'], 328 | policies # Attach policies here 329 | ]) 330 | 331 | json_body["iam"] = user_data 332 | 333 | if user_data == []: 334 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 335 | return 336 | print(crayons.green("[+] " + started, bold=True), "\r\n" ,tabulate(user_data, headers=['Username', 'User ID', 'ARN', 'Attached Policies'], tablefmt='psql')) 337 | 338 | 339 | def list_iam_users_group(): 340 | started = "List Group Name and Attached Policies:" 341 | iam_client = get_client('iam') 342 | response = iam_client.list_users() 343 | groups = iam_client.list_groups() 344 | roles = iam_client.list_roles()['Roles'] 345 | user_data = [] 346 | 347 | policy_roles = {} 348 | for role in roles: 349 | attached_policies = iam_client.list_attached_role_policies(RoleName=role['RoleName']) 350 | for policy in attached_policies['AttachedPolicies']: 351 | policy_roles[policy['PolicyArn']] = role['RoleName'] 352 | 353 | for group in groups['Groups']: 354 | attached_policies = iam_client.list_attached_group_policies(GroupName=group['GroupName']) 355 | for policy in attached_policies['AttachedPolicies']: 356 | user_data.append([group['GroupName'], policy['PolicyName']]) 357 | 358 | json_body["iam"] = user_data 359 | 360 | if not user_data: 361 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 362 | return 363 | 364 | print(crayons.green("[+] " + started, bold=True)) 365 | print(tabulate(user_data, headers=['Group Name', 'Attached Policies'], tablefmt='psql')) 366 | 367 | 368 | def list_iam_users_roles(): 369 | started = "List Roles and Attached Policies:" 370 | iam_client = get_client('iam') 371 | roles = iam_client.list_roles()['Roles'] 372 | user_data = [] 373 | 374 | policy_roles = {} 375 | for role in roles: 376 | attached_policies = iam_client.list_attached_role_policies(RoleName=role['RoleName']) 377 | for policy in attached_policies['AttachedPolicies']: 378 | user_data.append([role['RoleName'], policy['PolicyName']]) 379 | 380 | json_body["iam"] = user_data 381 | 382 | if not user_data: 383 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 384 | return 385 | 386 | print(crayons.green("[+] " + started, bold=True)) 387 | print(tabulate(user_data, headers=['Role Name', 'Attached Policies'], tablefmt='psql')) 388 | 389 | def list_customer_managed_policies(): 390 | started = "List Customer Managed Policies:" 391 | iam_client = get_client('iam') 392 | user_data = [] 393 | 394 | response = iam_client.list_policies(Scope='Local') 395 | policies = response['Policies'] 396 | 397 | for policy in policies: 398 | if policy['IsAttachable']: 399 | user_data.append([policy['PolicyName'], policy['Arn']]) 400 | 401 | json_body["customer_managed_policies"] = user_data 402 | 403 | if not user_data: 404 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 405 | return 406 | 407 | print(crayons.green("[+] " + started, bold=True), "\r\n", 408 | tabulate(user_data, headers=['Policy Name', 'ARN'], tablefmt='psql')) 409 | 410 | 411 | def list_sns_topics(): 412 | started = "List SNS topics:" 413 | topic_data = [] 414 | 415 | def list_topics(region): 416 | sns_client = get_client('sns', region_name=region) 417 | response = sns_client.list_topics() 418 | 419 | topics = response['Topics'] 420 | for topic in topics: 421 | topic_data.append([topic['TopicArn'], region]) 422 | 423 | processes = [] 424 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 425 | for region in regions: 426 | processes.append(executor.submit(list_topics, region)) 427 | 428 | json_body["sns"] = topic_data 429 | 430 | if not topic_data: 431 | 432 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 433 | return 434 | 435 | print(crayons.green("[+] " + started, bold=True), "\r\n", 436 | tabulate(topic_data, headers=['Topic ARN', 'Region'], tablefmt='psql')) 437 | 438 | 439 | def list_sqs_queues(): 440 | started = "List SQS queues:" 441 | queue_data = [] 442 | 443 | def list_queues(region): 444 | sqs_client = get_client('sqs', region_name=region) 445 | response = sqs_client.list_queues() 446 | if "QueueUrls" in response: 447 | queues = response['QueueUrls'] 448 | for queue in queues: 449 | queue_data.append([queue, region]) 450 | 451 | processes = [] 452 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 453 | for region in regions: 454 | processes.append(executor.submit(list_queues, region)) 455 | 456 | json_body["sqs"] = queue_data 457 | 458 | if not queue_data: 459 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 460 | return 461 | 462 | print(crayons.green("[+] " + started, bold=True), "\r\n", 463 | tabulate(queue_data, headers=['Queue URL', 'Region'], tablefmt='psql')) 464 | 465 | 466 | def describe_ecr_repositories(): 467 | started = "List ECR repositories:" 468 | repository_data = [] 469 | 470 | def describe_repositories(region): 471 | ecr_client = get_client('ecr', region_name=region) 472 | response = ecr_client.describe_repositories() 473 | 474 | repositories = response['repositories'] 475 | for repository in repositories: 476 | repository_data.append([repository['repositoryName'], region]) 477 | 478 | processes = [] 479 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 480 | for region in regions: 481 | processes.append(executor.submit(describe_repositories, region)) 482 | json_body["ecr"] = repository_data 483 | if not repository_data: 484 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 485 | return 486 | 487 | print(crayons.green("[+] " + started, bold=True), "\r\n", 488 | tabulate(repository_data, headers=['Repository Name', 'Region'], tablefmt='psql')) 489 | 490 | def describe_elasticbeanstalk_applications(): 491 | started = "List Elastic Beanstalk applications:" 492 | application_data = [] 493 | 494 | def describe_applications(region): 495 | elasticbeanstalk_client = get_client('elasticbeanstalk', region_name=region) 496 | response = elasticbeanstalk_client.describe_applications() 497 | applications = response['Applications'] 498 | for application in applications: 499 | application_name = application['ApplicationName'] 500 | date_created = application['DateCreated'] 501 | 502 | response = elasticbeanstalk_client.describe_environments(ApplicationName=application_name) 503 | environments = response['Environments'] 504 | 505 | for env in environments: 506 | environment_name = env['EnvironmentName'] 507 | environment_url = env.get('CNAME', 'N/A') 508 | application_data.append([application_name, date_created, region, environment_name, environment_url]) 509 | 510 | processes = [] 511 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 512 | for region in regions: 513 | processes.append(executor.submit(describe_applications, region)) 514 | json_body["elasticbeanstalk"] = application_data 515 | if not application_data: 516 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 517 | return 518 | 519 | print(crayons.green("[+] " + started, bold=True), "\r\n", 520 | tabulate(application_data, headers=['Application Name', 'Date Created', 'Region', 'Environment Name', 'Environment URL'], tablefmt='psql')) 521 | 522 | 523 | 524 | 525 | def list_route53_hosted_zones(): 526 | started = "List Route 53 hosted zones:" 527 | hosted_zone_data = [] 528 | 529 | def list_hosted_zones(region): 530 | route53_client = get_client('route53', region_name=region) 531 | response = route53_client.list_hosted_zones() 532 | hosted_zones = response['HostedZones'] 533 | for hosted_zone in hosted_zones: 534 | hosted_zone_data.append([hosted_zone['Name'], hosted_zone['Id'], region]) 535 | 536 | processes = [] 537 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 538 | for region in regions: 539 | processes.append(executor.submit(list_hosted_zones, region)) 540 | 541 | for process in processes: 542 | process.result() 543 | json_body["route53"] = hosted_zone_data 544 | if not hosted_zone_data: 545 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 546 | return 547 | 548 | print(crayons.green("[+] " + started, bold=True), "\r\n", 549 | tabulate(hosted_zone_data, headers=['Hosted Zone Name', 'Hosted Zone ID', 'Region'], tablefmt='psql')) 550 | 551 | 552 | def describe_cloudwatch_alarms(): 553 | started = "List CloudWatch alarms:" 554 | alarm_data = [] 555 | 556 | def describe_alarms(region): 557 | cloudwatch_client = get_client('cloudwatch', region_name=region) 558 | response = cloudwatch_client.describe_alarms() 559 | alarms = response['MetricAlarms'] 560 | for alarm in alarms: 561 | alarm_data.append([alarm['AlarmName'], alarm['StateValue'], region]) 562 | 563 | processes = [] 564 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 565 | for region in regions: 566 | processes.append(executor.submit(describe_alarms, region)) 567 | 568 | json_body["cloudwatch"] = alarm_data 569 | 570 | if not alarm_data: 571 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 572 | return 573 | 574 | print(crayons.green("[+] " + started, bold=True), "\r\n", 575 | tabulate(alarm_data, headers=['Alarm Name', 'State Value', 'Region'], tablefmt='psql')) 576 | 577 | 578 | def list_codepipeline_pipelines(): 579 | started = "List CodePipeline pipelines:" 580 | pipeline_data = [] 581 | 582 | def list_pipelines(region): 583 | codepipeline_client = get_client('codepipeline', region_name=region) 584 | if region == 'ap-northeast-3': 585 | return 586 | response = codepipeline_client.list_pipelines() 587 | pipelines = response['pipelines'] 588 | for pipeline in pipelines: 589 | pipeline_data.append([pipeline['name'], region]) 590 | 591 | processes = [] 592 | with ThreadPoolExecutor(max_workers=1) as executor: 593 | for region in regions: 594 | processes.append(executor.submit(list_pipelines, region)) 595 | 596 | for process in as_completed(processes): 597 | process.result() 598 | json_body["codepipeline"] = pipeline_data 599 | if not pipeline_data: 600 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 601 | return 602 | 603 | print(crayons.green("[+] " + started, bold=True), "\r\n", 604 | tabulate(pipeline_data, headers=['Pipeline Name', 'Region'], tablefmt='psql')) 605 | 606 | 607 | def list_sagemaker_notebooks(): 608 | started = "List Sagemaker notebooks:" 609 | notebook_data = [] 610 | 611 | def list_notebook_instances(region): 612 | sagemaker_client = get_client('sagemaker', region_name=region) 613 | response = sagemaker_client.list_notebook_instances() 614 | notebooks = response['NotebookInstances'] 615 | for notebook in notebooks: 616 | notebook_data.append([notebook['NotebookInstanceName'], notebook['NotebookInstanceStatus'], region]) 617 | 618 | processes = [] 619 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 620 | for region in regions: 621 | processes.append(executor.submit(list_notebook_instances, region)) 622 | json_body["sagemaker"] = notebook_data 623 | if not notebook_data: 624 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 625 | return 626 | 627 | print(crayons.green("[+] " + started, bold=True), "\r\n", 628 | tabulate(notebook_data, headers=['Notebook Instance Name', 'Notebook Instance Status', 'Region'], 629 | tablefmt='psql')) 630 | 631 | 632 | def list_secretsmanager_secrets(): 633 | started = "List Secrets Manager secrets:" 634 | secret_data = [] 635 | 636 | def list_secrets(region): 637 | secretsmanager_client = get_client('secretsmanager', region_name=region) 638 | response = secretsmanager_client.list_secrets() 639 | secrets = response['SecretList'] 640 | for secret in secrets: 641 | secret_data.append([secret['Name'], secret['LastChangedDate'], region]) 642 | 643 | processes = [] 644 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 645 | for region in regions: 646 | processes.append(executor.submit(list_secrets, region)) 647 | json_body["secretsmanager"] = secret_data 648 | if not secret_data: 649 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 650 | return 651 | 652 | print(crayons.green("[+] " + started, bold=True), "\r\n", 653 | tabulate(secret_data, headers=['Secret Name', 'Last Changed Date', 'Region'], tablefmt='psql')) 654 | 655 | 656 | def list_glue_data_catalogs(): 657 | started = "List Glue data catalogs:" 658 | catalog_data = [] 659 | 660 | def list_catalogs(region): 661 | glue_client = get_client('glue', region_name=region) 662 | response = glue_client.get_databases() 663 | catalogs = response['DatabaseList'] 664 | for catalog in catalogs: 665 | catalog_data.append([catalog['Name'], region]) 666 | 667 | processes = [] 668 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 669 | for region in regions: 670 | processes.append(executor.submit(list_catalogs, region)) 671 | json_body["glue"] = catalog_data 672 | if not catalog_data: 673 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 674 | return 675 | 676 | print(crayons.green("[+] " + started, bold=True), "\r\n", 677 | tabulate(catalog_data, headers=['Data Catalog Name', 'Region'], tablefmt='psql')) 678 | 679 | 680 | def list_stepfunctions_state_machines(): 681 | started = "List Step Functions state machines:" 682 | state_machine_data = [] 683 | 684 | def list_state_machines(region): 685 | stepfunctions_client = get_client('stepfunctions', region_name=region) 686 | response = stepfunctions_client.list_state_machines() 687 | state_machines = response['stateMachines'] 688 | for state_machine in state_machines: 689 | state_machine_data.append([state_machine['name'], state_machine['status'], region]) 690 | 691 | processes = [] 692 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 693 | for region in regions: 694 | processes.append(executor.submit(list_state_machines, region)) 695 | json_body["stepfunctions"] = state_machine_data 696 | if not state_machine_data: 697 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 698 | return 699 | 700 | print(crayons.green("[+] " + started, bold=True), "\r\n", 701 | tabulate(state_machine_data, headers=['State Machine Name', 'Status', 'Region'], tablefmt='psql')) 702 | 703 | 704 | def list_eks_clusters(): 705 | started = "List EKS clusters:" 706 | cluster_data = [] 707 | 708 | def list_clusters(region): 709 | eks_client = get_client('eks', region_name=region) 710 | response = eks_client.list_clusters() 711 | clusters = response['clusters'] 712 | for cluster in clusters: 713 | cluster_data.append([cluster, region]) 714 | 715 | processes = [] 716 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 717 | for region in regions: 718 | processes.append(executor.submit(list_clusters, region)) 719 | json_body["eks"] = cluster_data 720 | if not cluster_data: 721 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 722 | return 723 | 724 | print(crayons.green("[+] " + started, bold=True), "\r\n", 725 | tabulate(cluster_data, headers=['Cluster Name', 'Region'], tablefmt='psql')) 726 | 727 | def describe_cloudtrail_trails(): 728 | started = "List CloudTrail trails:" 729 | trail_data = [] 730 | 731 | cloudtrail_client = get_client('cloudtrail', region_name=None) 732 | response = cloudtrail_client.describe_trails() 733 | trails = response['trailList'] 734 | for trail in trails: 735 | trail_data.append([trail['Name'], trail['HomeRegion']]) 736 | 737 | json_body["cloudtrail"] = trail_data 738 | if not trail_data: 739 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 740 | return 741 | 742 | print(crayons.green("[+] " + started, bold=True), "\r\n", 743 | tabulate(trail_data, headers=['Trail Name', 'Home Region'], tablefmt='psql')) 744 | 745 | def list_kinesis_streams(): 746 | started = "List Kinesis data streams:" 747 | stream_data = [] 748 | 749 | def list_streams(region): 750 | kinesis_client = get_client('kinesis', region_name=region) 751 | response = kinesis_client.list_streams() 752 | streams = response['StreamNames'] 753 | for stream in streams: 754 | stream_data.append([stream, region]) 755 | 756 | processes = [] 757 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 758 | for region in regions: 759 | processes.append(executor.submit(list_streams, region)) 760 | 761 | json_body["kinesis"] = stream_data 762 | 763 | if not stream_data: 764 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 765 | return 766 | 767 | print(crayons.green("[+] " + started, bold=True), "\r\n", 768 | tabulate(stream_data, headers=['Stream Name', 'Region'], tablefmt='psql')) 769 | 770 | def describe_redshift_clusters(): 771 | started = "List Redshift clusters:" 772 | cluster_data = [] 773 | 774 | def describe_clusters(region): 775 | redshift_client = get_client('redshift', region_name=region) 776 | response = redshift_client.describe_clusters() 777 | clusters = response['Clusters'] 778 | for cluster in clusters: 779 | cluster_data.append([cluster['ClusterIdentifier'], cluster['NodeType'], cluster['ClusterStatus'], region]) 780 | 781 | processes = [] 782 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 783 | for region in regions: 784 | processes.append(executor.submit(describe_clusters, region)) 785 | json_body["redshift"] = cluster_data 786 | if not cluster_data: 787 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 788 | return 789 | 790 | print(crayons.green("[+] " + started, bold=True), "\r\n", 791 | tabulate(cluster_data, headers=['Cluster Identifier', 'Node Type', 'Cluster Status', 'Region'], tablefmt='psql')) 792 | 793 | def describe_elasticache_clusters(): 794 | started = "List Elasticache clusters:" 795 | cluster_data = [] 796 | 797 | def describe_clusters(region): 798 | elasticache_client = get_client('elasticache', region_name=region) 799 | response = elasticache_client.describe_cache_clusters() 800 | clusters = response['CacheClusters'] 801 | for cluster in clusters: 802 | cluster_data.append([cluster['CacheClusterId'], cluster['Engine'], cluster['CacheClusterStatus'], region]) 803 | 804 | processes = [] 805 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 806 | for region in regions: 807 | processes.append(executor.submit(describe_clusters, region)) 808 | 809 | json_body["elasticache"] = cluster_data 810 | 811 | if not cluster_data: 812 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 813 | return 814 | 815 | print(crayons.green("[+] " + started, bold=True), "\r\n", 816 | tabulate(cluster_data, headers=['Cache Cluster ID', 'Engine', 'Cluster Status', 'Region'], tablefmt='psql')) 817 | 818 | def list_apigateway_apis(): 819 | started = "List API Gateway APIs:" 820 | api_data = [] 821 | 822 | def get_rest_apis(region): 823 | apigateway_client = get_client('apigateway', region_name=region) 824 | response = apigateway_client.get_rest_apis() 825 | apis = response['items'] 826 | for api in apis: 827 | api_data.append([api['name'], region]) 828 | 829 | processes = [] 830 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 831 | for region in regions: 832 | processes.append(executor.submit(get_rest_apis, region)) 833 | 834 | json_body["apigateway"] = api_data 835 | 836 | if not api_data: 837 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 838 | return 839 | 840 | print(crayons.green("[+] " + started, bold=True), "\r\n", 841 | tabulate(api_data, headers=['API Name', 'Region'], tablefmt='psql')) 842 | 843 | 844 | 845 | def list_apigateway_apis123(): 846 | started = "List API Gateway APIs:" 847 | api_data = [] 848 | 849 | def get_rest_apis(region): 850 | apigateway_client = get_client('apigateway', region_name=region) 851 | response = apigateway_client.get_rest_apis() 852 | apis = response['items'] 853 | for api in apis: 854 | endpoint_configuration = apigateway_client.get_rest_api( 855 | restApiId=api['id'] 856 | )['endpointConfiguration'] 857 | api_type = "Private" if endpoint_configuration['types'] == ['PRIVATE'] else "Public" 858 | api_data.append([api['name'], api_type, region]) 859 | 860 | processes = [] 861 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 862 | for region in regions: 863 | processes.append(executor.submit(get_rest_apis, region)) 864 | 865 | json_body["apigateway"] = api_data 866 | 867 | if not api_data: 868 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 869 | return 870 | 871 | print(crayons.green("[+] " + started, bold=True), "\r\n", 872 | tabulate(api_data, headers=['API Name', 'Type', 'Region'], tablefmt='psql')) 873 | 874 | 875 | def list_ecs_clusters(): 876 | started = "List ECS Clusters:" 877 | ecs_data = [] 878 | 879 | def get_ecs_clusters(region): 880 | ecs_client = get_client('ecs', region_name=region) 881 | response = ecs_client.list_clusters() 882 | clusters = response['clusterArns'] 883 | if not clusters: 884 | return 885 | 886 | for cluster_arn in clusters: 887 | cluster_name = cluster_arn.split('/')[-1] 888 | ecs_data.append([cluster_name, region]) 889 | 890 | processes = [] 891 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 892 | for region in regions: 893 | processes.append(executor.submit(get_ecs_clusters, region)) 894 | 895 | json_body["ecs_clusters"] = ecs_data 896 | 897 | # Check if ecs_data is empty 898 | if not any(ecs_data): 899 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 900 | return 901 | 902 | print(crayons.green("[+] " + started, bold=True)) 903 | print(tabulate(ecs_data, headers=['Cluster Name', 'Region'], tablefmt='psql')) 904 | 905 | 906 | 907 | 908 | def list_cloudformation_stacks(): 909 | started = "List CloudFormation stacks:" 910 | stack_data = [] 911 | 912 | def get_stacks(region): 913 | cloudformation_client = get_client('cloudformation', region_name=region) 914 | response = cloudformation_client.list_stacks() 915 | stacks = response['StackSummaries'] 916 | for stack in stacks: 917 | stack_data.append([stack['StackName'], stack['StackStatus'], region]) 918 | 919 | processes = [] 920 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 921 | for region in regions: 922 | processes.append(executor.submit(get_stacks, region)) 923 | 924 | json_body["cloudformation"] = stack_data 925 | 926 | if stack_data == []: 927 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 928 | return 929 | 930 | print(crayons.green("[+] " + started, bold=True), "\r\n", 931 | tabulate(stack_data, headers=['Stack Name', 'Stack Status', 'Region'], tablefmt='psql')) 932 | 933 | def list_appsync_apis(): 934 | started = "List AppSync APIs:" 935 | api_data = [] 936 | 937 | def get_apis(region): 938 | appsync_client = get_client('appsync', region_name=region) 939 | response = appsync_client.list_graphql_apis() 940 | apis = response['graphqlApis'] 941 | for api in apis: 942 | api_data.append([api['name'], api['authenticationType'], region]) 943 | 944 | processes = [] 945 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 946 | for region in regions: 947 | processes.append(executor.submit(get_apis, region)) 948 | 949 | json_body["appsync"] = api_data 950 | 951 | if not api_data: 952 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 953 | return 954 | 955 | print(crayons.green("[+] " + started, bold=True), "\r\n", 956 | tabulate(api_data, headers=['API Name', 'Authentication Type', 'Region'], tablefmt='psql')) 957 | 958 | def list_ssm_documents(): 959 | started = "List Systems Manager documents:" 960 | document_data = [] 961 | 962 | 963 | ssm_client = get_client('ssm', region_name=None) 964 | response = ssm_client.list_documents() 965 | documents = response['DocumentIdentifiers'] 966 | for document in documents: 967 | document_data.append([document['Name'], document['DocumentType']]) 968 | 969 | json_body["ssm"] = document_data 970 | 971 | if not document_data: 972 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 973 | return 974 | 975 | print(crayons.green("[+] " + started, bold=True), "\r\n", 976 | tabulate(document_data, headers=['Document Name', 'Document Type'], tablefmt='psql')) 977 | 978 | def list_elastictranscoder_pipelines(): 979 | started = "List Elastic Transcoder pipelines:" 980 | pipeline_data = [] 981 | 982 | def get_pipelines(region): 983 | elastictranscoder_client = get_client('elastictranscoder', region_name=region) 984 | response = elastictranscoder_client.list_pipelines() 985 | pipelines = response['Pipelines'] 986 | for pipeline in pipelines: 987 | pipeline_data.append([pipeline['Name'], pipeline['Status'], region]) 988 | 989 | processes = [] 990 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 991 | for region in regions: 992 | processes.append(executor.submit(get_pipelines, region)) 993 | 994 | json_body["elastictranscoder"] = pipeline_data 995 | 996 | if not pipeline_data: 997 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 998 | return 999 | 1000 | print(crayons.green("[+] " + started, bold=True), "\r\n", 1001 | tabulate(pipeline_data, headers=['Pipeline Name', 'Status', 'Region'], tablefmt='psql')) 1002 | 1003 | def list_datapipeline_pipelines(): 1004 | started = "List Data Pipeline pipelines:" 1005 | pipeline_data = [] 1006 | 1007 | def get_pipelines(region): 1008 | datapipeline_client = get_client('datapipeline', region_name=region) 1009 | response = datapipeline_client.list_pipelines() 1010 | pipelines = response['pipelineIdList'] 1011 | for pipeline in pipelines: 1012 | pipeline_data.append([pipeline['name'], pipeline['status'], region]) 1013 | 1014 | processes = [] 1015 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1016 | for region in regions: 1017 | processes.append(executor.submit(get_pipelines, region)) 1018 | 1019 | json_body["datapipeline"] = pipeline_data 1020 | 1021 | if not pipeline_data: 1022 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1023 | return 1024 | 1025 | print(crayons.green("[+] " + started, bold=True), "\r\n", 1026 | tabulate(pipeline_data, headers=['Pipeline Name', 'Status', 'Region'], tablefmt='psql')) 1027 | 1028 | def list_mediaconvert_jobs(): 1029 | started = "List MediaConvert jobs:" 1030 | 1031 | try: 1032 | job_data = [] 1033 | for region in regions: 1034 | if region == "ap-northeast-3": 1035 | continue 1036 | mediaconvert_client = get_client('mediaconvert', region_name=region) 1037 | response = mediaconvert_client.list_jobs() 1038 | 1039 | jobs = response['Jobs'] 1040 | 1041 | for job in jobs: 1042 | job_data.append([job['Id'], job['Status'],region]) 1043 | 1044 | json_body["mediaconvert"] = job_data 1045 | 1046 | if job_data == []: 1047 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1048 | return 1049 | 1050 | print(crayons.green("[+] " + started, bold=True), "\r\n" ,tabulate(job_data, headers=['Job ID', 'Status', 'Region'], tablefmt='psql')) 1051 | except ClientError as e: 1052 | error_code = e.response['Error']['Code'] 1053 | if error_code == 'ForbiddenException': 1054 | print(crayons.red("Error: You must use the subscription API to subscribe your account to the service before using this operation.", bold=True)) 1055 | else: 1056 | print(crayons.red("An error occurred: "+ str(e), bold=True)) 1057 | 1058 | def list_storagegateway_gateways(): 1059 | started = "List Storage Gateway gateways:" 1060 | gateway_data = [] 1061 | 1062 | def list_gateways(region): 1063 | storagegateway_client = get_client('storagegateway', region_name=region) 1064 | response = storagegateway_client.list_gateways() 1065 | gateways = response['Gateways'] 1066 | for gateway in gateways: 1067 | gateway_data.append([gateway['GatewayId'], gateway['GatewayType'], gateway['GatewayOperationalState'], region]) 1068 | 1069 | processes = [] 1070 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1071 | for region in regions: 1072 | processes.append(executor.submit(list_gateways, region)) 1073 | json_body["storagegateway"] = gateway_data 1074 | if gateway_data == []: 1075 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 1076 | return 1077 | 1078 | print(crayons.green("[+] " + started, bold=True), "\r\n" ,tabulate(gateway_data, headers=['Gateway ID', 'Gateway Type', 'Operational State', 'Region'], tablefmt='psql')) 1079 | 1080 | 1081 | def describe_workspaces(): 1082 | started = "List WorkSpaces:" 1083 | workspace_data = [] 1084 | 1085 | def describe_workspaces_in_region(region): 1086 | workspaces_client = get_client('workspaces', region_name=region) 1087 | response = workspaces_client.describe_workspaces() 1088 | workspaces = response['Workspaces'] 1089 | for workspace in workspaces: 1090 | workspace_data.append([workspace['WorkspaceId'], workspace['UserName'], workspace['State'], region]) 1091 | 1092 | processes = [] 1093 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1094 | for region in regions: 1095 | processes.append(executor.submit(describe_workspaces_in_region, region)) 1096 | 1097 | json_body["workspaces"] = workspace_data 1098 | 1099 | if workspace_data == []: 1100 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1101 | return 1102 | 1103 | print(crayons.green("[+] " + started, bold=True), "\r\n", tabulate(workspace_data, headers=['Workspace ID', 'User Name', 'State', 'Region'], tablefmt='psql')) 1104 | 1105 | 1106 | def list_cloud9_environments(): 1107 | started = "List Cloud9 environments:" 1108 | environment_data = [] 1109 | 1110 | def list_environments_in_region(region): 1111 | cloud9_client = get_client('cloud9', region_name=region) 1112 | response = cloud9_client.list_environments() 1113 | environments = response['environmentIds'] 1114 | for environment in environments: 1115 | environment_data.append([environment, region]) 1116 | 1117 | processes = [] 1118 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1119 | for region in regions: 1120 | processes.append(executor.submit(list_environments_in_region, region)) 1121 | 1122 | json_body["cloud9"] = environment_data 1123 | 1124 | if environment_data == []: 1125 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1126 | return 1127 | 1128 | print(crayons.green("[+] " + started, bold=True), "\r\n", tabulate(environment_data, headers=['Environment ID', 'Region'], tablefmt='psql')) 1129 | 1130 | def list_lex_bots(): 1131 | started = "List Lex bots:" 1132 | bot_data = [] 1133 | 1134 | def list_bots_in_region(region): 1135 | lex_client = get_client('lex-models', region_name=region) 1136 | response = lex_client.get_bots() 1137 | bots = response['bots'] 1138 | for bot in bots: 1139 | bot_data.append([bot['name'], bot['status'], region]) 1140 | 1141 | processes = [] 1142 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1143 | for region in regions: 1144 | processes.append(executor.submit(list_bots_in_region, region)) 1145 | 1146 | json_body["lex"] = bot_data 1147 | 1148 | if bot_data == []: 1149 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 1150 | return 1151 | 1152 | print(crayons.green("[+] " + started, bold=True), "\r\n", tabulate(bot_data, headers=['Bot Name', 'Status', 'Region'], tablefmt='psql')) 1153 | 1154 | def list_iot_things(): 1155 | started = "List IoT things:" 1156 | thing_data = [] 1157 | 1158 | def list_things_in_region(region): 1159 | iot_client = get_client('iot', region_name=region) 1160 | response = iot_client.list_things() 1161 | things = response['things'] 1162 | for thing in things: 1163 | thing_data.append([thing['thingName'], thing['thingTypeName'], region]) 1164 | 1165 | processes = [] 1166 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1167 | for region in regions: 1168 | processes.append(executor.submit(list_things_in_region, region)) 1169 | 1170 | json_body["iot"] = thing_data 1171 | 1172 | if thing_data == []: 1173 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1174 | return 1175 | 1176 | print(crayons.green("[+] " + started, bold=True), "\r\n", tabulate(thing_data, headers=['Thing Name', 'Thing Type', 'Region'], tablefmt='psql')) 1177 | 1178 | def list_medialive_channels(): 1179 | started = "List MediaLive channels:" 1180 | channel_data = [] 1181 | 1182 | def list_channels_in_region(region): 1183 | medialive_client = get_client('medialive', region_name=region) 1184 | response = medialive_client.list_channels() 1185 | channels = response['Channels'] 1186 | 1187 | for channel in channels: 1188 | channel_data.append([channel['ChannelName'], channel['State'], region]) 1189 | 1190 | processes = [] 1191 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1192 | for region in regions: 1193 | processes.append(executor.submit(list_channels_in_region, region)) 1194 | 1195 | json_body["medialive"] = channel_data 1196 | 1197 | if channel_data == []: 1198 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1199 | return 1200 | 1201 | print(crayons.green("[+] " + started, bold=True), "\r\n" ,tabulate(channel_data, headers=['Channel Name', 'State', 'Region'], tablefmt='psql')) 1202 | 1203 | def list_datasync_tasks(): 1204 | started = "List DataSync tasks:" 1205 | task_data = [] 1206 | 1207 | def list_tasks_in_region(region): 1208 | datasync_client = get_client('datasync', region_name=region) 1209 | response = datasync_client.list_tasks() 1210 | tasks = response['Tasks'] 1211 | 1212 | for task in tasks: 1213 | task_data.append([task['TaskArn'], task['Status'], region]) 1214 | 1215 | processes = [] 1216 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1217 | for region in regions: 1218 | processes.append(executor.submit(list_tasks_in_region, region)) 1219 | 1220 | json_body["datasync"] = task_data 1221 | 1222 | if task_data == []: 1223 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 1224 | return 1225 | 1226 | print(crayons.green("[+] " + started, bold=True), "\r\n" ,tabulate(task_data, headers=['Task ARN', 'Status', 'Region'], tablefmt='psql')) 1227 | 1228 | def list_emr_clusters(): 1229 | started = "List Elastic MapReduce (EMR) clusters:" 1230 | cluster_data = [] 1231 | 1232 | def list_clusters_in_region(region): 1233 | emr_client = get_client('emr', region_name=region) 1234 | response = emr_client.list_clusters() 1235 | clusters = response['Clusters'] 1236 | 1237 | for cluster in clusters: 1238 | cluster_data.append([cluster['Id'], cluster['Name'], cluster['Status']['State'], region]) 1239 | 1240 | processes = [] 1241 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1242 | for region in regions: 1243 | processes.append(executor.submit(list_clusters_in_region, region)) 1244 | 1245 | json_body["emr"] = cluster_data 1246 | 1247 | if cluster_data == []: 1248 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1249 | return 1250 | 1251 | print(crayons.green("[+] " + started, bold=True), "\r\n" ,tabulate(cluster_data, headers=['Cluster ID', 'Name', 'Status', 'Region'], tablefmt='psql')) 1252 | 1253 | def list_athena_workgroups(): 1254 | started = "List Athena workgroups:" 1255 | workgroup_data = [] 1256 | 1257 | def list_workgroups_in_region(region): 1258 | athena_client = get_client('athena', region_name=region) 1259 | response = athena_client.list_work_groups() 1260 | workgroups = response['WorkGroups'] 1261 | 1262 | for workgroup in workgroups: 1263 | workgroup_data.append([workgroup['Name'], workgroup['State'], region]) 1264 | 1265 | processes = [] 1266 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1267 | for region in regions: 1268 | processes.append(executor.submit(list_workgroups_in_region, region)) 1269 | 1270 | json_body["athena"] = workgroup_data 1271 | 1272 | if workgroup_data == []: 1273 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1274 | return 1275 | 1276 | print(crayons.green("[+] " + started, bold=True), "\r\n" ,tabulate(workgroup_data, headers=['WorkGroup Name', 'State', 'Region'], tablefmt='psql')) 1277 | 1278 | def list_pinpoint_applications(): 1279 | started = "List Pinpoint applications:" 1280 | application_data = [] 1281 | 1282 | def list_applications_in_region(region): 1283 | pinpoint_client = get_client('pinpoint', region_name=region) 1284 | response = pinpoint_client.get_apps() 1285 | applications = response['ApplicationsResponse']["Item"] 1286 | 1287 | for application in applications: 1288 | application_data.append([application['Id'], application['Name'], region]) 1289 | 1290 | processes = [] 1291 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1292 | for region in regions: 1293 | processes.append(executor.submit(list_applications_in_region, region)) 1294 | 1295 | json_body["pinpoint"] = application_data 1296 | 1297 | if application_data == []: 1298 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 1299 | return 1300 | 1301 | print(crayons.green("[+] " + started, bold=True), "\r\n" ,tabulate(application_data, headers=['Application ID', 'Name', 'Region'], tablefmt='psql')) 1302 | 1303 | def list_efs_file_systems(): 1304 | started = "List Elastic File System (EFS) file systems:" 1305 | file_system_data = [] 1306 | 1307 | def describe_file_systems_in_region(region): 1308 | efs_client = get_client('efs', region_name=region) 1309 | response = efs_client.describe_file_systems() 1310 | file_systems = response['FileSystems'] 1311 | 1312 | for fs in file_systems: 1313 | file_system_data.append([fs['FileSystemId'], fs['CreationTime'], fs['SizeInBytes']['Value'], region]) 1314 | 1315 | processes = [] 1316 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1317 | for region in regions: 1318 | processes.append(executor.submit(describe_file_systems_in_region, region)) 1319 | 1320 | json_body["efs"] = file_system_data 1321 | 1322 | if file_system_data == []: 1323 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1324 | return 1325 | 1326 | print(crayons.green("[+] " + started, bold=True), "\r\n" ,tabulate(file_system_data, headers=['File System ID', 'Creation Time', 'Size (Bytes)', 'Region'], tablefmt='psql')) 1327 | 1328 | def list_glue_crawlers(): 1329 | started = "List Glue crawlers:" 1330 | crawler_data = [] 1331 | 1332 | def describe_crawlers_in_region(region): 1333 | glue_client = get_client('glue', region_name=region) 1334 | response = glue_client.get_crawlers() 1335 | crawlers = response['Crawlers'] 1336 | 1337 | for crawler in crawlers: 1338 | crawler_data.append([crawler['Name'], crawler['State'], region]) 1339 | 1340 | processes = [] 1341 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1342 | for region in regions: 1343 | processes.append(executor.submit(describe_crawlers_in_region, region)) 1344 | 1345 | json_body["glue"] = crawler_data 1346 | 1347 | if crawler_data == []: 1348 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1349 | return 1350 | 1351 | print(crayons.green("[+] " + started, bold=True), "\r\n" ,tabulate(crawler_data, headers=['Crawler Name', 'State', 'Region'], tablefmt='psql')) 1352 | 1353 | 1354 | def list_datasync_locations(): 1355 | started = "List DataSync locations:" 1356 | location_data = [] 1357 | 1358 | def describe_locations_in_region(region): 1359 | datasync_client = get_client('datasync', region_name=region) 1360 | response = datasync_client.list_locations() 1361 | locations = response['Locations'] 1362 | 1363 | for location in locations: 1364 | location_data.append([location['LocationArn'], location['LocationUri'], location['LocationType'], region]) 1365 | 1366 | processes = [] 1367 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1368 | for region in regions: 1369 | processes.append(executor.submit(describe_locations_in_region, region)) 1370 | 1371 | json_body["datasync"] = location_data 1372 | 1373 | if location_data == []: 1374 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 1375 | return 1376 | 1377 | print(crayons.green("[+] " + started, bold=True), "\r\n" ,tabulate(location_data, headers=['Location ARN', 'Location URI', 'Location Type', 'Region'], tablefmt='psql')) 1378 | 1379 | def list_mediapackage_channels(): 1380 | started = "List MediaPackage channels:" 1381 | channel_data = [] 1382 | for region in regions: 1383 | try: 1384 | mediapackage_client = get_client('mediapackage', region_name=region) 1385 | response = mediapackage_client.list_channels() 1386 | except: 1387 | continue 1388 | channels = response['Channels'] 1389 | 1390 | for channel in channels: 1391 | channel_data.append([channel['Id'], channel['Description'], channel['Status'],region]) 1392 | 1393 | json_body["mediapackage"] = channel_data 1394 | 1395 | if channel_data == []: 1396 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1397 | return 1398 | print(crayons.green("[+] " + started, bold=True), "\r\n" ,tabulate(channel_data, headers=['Channel ID', 'Description', 'Status', 'Region'], tablefmt='psql')) 1399 | 1400 | def list_mq_brokers(): 1401 | started = "List MQ brokers (Amazon MQ):" 1402 | broker_data = [] 1403 | for region in regions: 1404 | mq_client = get_client('mq', region_name=region) 1405 | response = mq_client.list_brokers() 1406 | 1407 | brokers = response['BrokerSummaries'] 1408 | 1409 | for broker in brokers: 1410 | broker_data.append([broker['BrokerId'], broker['BrokerName'], broker['BrokerState'],region]) 1411 | 1412 | json_body["mq"] = broker_data 1413 | 1414 | if broker_data == []: 1415 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1416 | return 1417 | print(crayons.green("[+] " + started, bold=True), "\r\n" ,tabulate(broker_data, headers=['Broker ID', 'Broker Name', 'Broker State', 'Region'], tablefmt='psql')) 1418 | 1419 | def list_organizations_accounts(): 1420 | started = "List Organizations:" 1421 | account_data = [] 1422 | 1423 | def process_region(region): 1424 | organizations_client = get_client('organizations', region_name=region) 1425 | response = organizations_client.list_accounts() 1426 | accounts = response['Accounts'] 1427 | for account in accounts: 1428 | account_data.append([account['Id'], account['Name'], account['Status'], region]) 1429 | 1430 | processes = [] 1431 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1432 | for region in regions: 1433 | processes.append(executor.submit(process_region, region)) 1434 | 1435 | json_body["organizations"] = account_data 1436 | 1437 | if account_data == []: 1438 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 1439 | return 1440 | print(crayons.green("[+] " + started, bold=True), "\r\n" ,tabulate(account_data, headers=['Account ID', 'Account Name', 'Status', 'Region'], tablefmt='psql')) 1441 | 1442 | def list_detective_graphs(): 1443 | started = "List Detective graphs:" 1444 | graph_data = [] 1445 | 1446 | def process_region(region): 1447 | detective_client = get_client('detective', region_name=region) 1448 | response = detective_client.list_graphs() 1449 | graphs = response['GraphList'] 1450 | for graph in graphs: 1451 | graph_data.append([graph['Arn'], graph['CreatedTime'], graph['Status'], region]) 1452 | 1453 | processes = [] 1454 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1455 | for region in regions: 1456 | processes.append(executor.submit(process_region, region)) 1457 | 1458 | json_body["detective"] = graph_data 1459 | 1460 | if graph_data == []: 1461 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1462 | return 1463 | print(crayons.green("[+] " + started, bold=True), "\r\n", tabulate(graph_data, headers=['Graph ARN', 'Created Time', 'Status', 'Region'], tablefmt='psql')) 1464 | 1465 | 1466 | def list_opsworks_stacks(): 1467 | started = "List OpsWorks stacks:" 1468 | stack_data = [] 1469 | 1470 | def process_region(region): 1471 | opsworks_client = get_client('opsworks', region_name=region) 1472 | response = opsworks_client.describe_stacks() 1473 | stacks = response['Stacks'] 1474 | for stack in stacks: 1475 | stack_data.append([stack['StackId'], stack['Name'], stack['Status'], region]) 1476 | 1477 | processes = [] 1478 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1479 | for region in regions: 1480 | processes.append(executor.submit(process_region, region)) 1481 | 1482 | json_body["opsworks"] = stack_data 1483 | 1484 | if stack_data == []: 1485 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1486 | return 1487 | print(crayons.green("[+] " + started, bold=True), "\r\n", tabulate(stack_data, headers=['Stack ID', 'Name', 'Status', 'Region'], tablefmt='psql')) 1488 | 1489 | def list_codecommit_repositories(): 1490 | started = "List CodeCommit repositories:" 1491 | repository_data = [] 1492 | 1493 | def process_region(region): 1494 | codecommit_client = get_client('codecommit', region_name=region) 1495 | response = codecommit_client.list_repositories() 1496 | repositories = response['repositories'] 1497 | for repository in repositories: 1498 | repository_data.append([repository['repositoryId'], repository['repositoryName'], repository['repositoryDescription'], region]) 1499 | 1500 | processes = [] 1501 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1502 | for region in regions: 1503 | processes.append(executor.submit(process_region, region)) 1504 | json_body["codecommit"] = repository_data 1505 | if repository_data == []: 1506 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 1507 | return 1508 | print(crayons.green("[+] " + started, bold=True), "\r\n", tabulate(repository_data, headers=['Repository ID', 'Name', 'Description', 'Region'], tablefmt='psql')) 1509 | 1510 | def list_cloudformation_change_sets(): 1511 | started = "List CloudFormation change sets:" 1512 | def list_change_sets(stack_name, region): 1513 | cloudformation_client = get_client('cloudformation', region_name=region) 1514 | try: 1515 | response = cloudformation_client.list_change_sets(StackName=stack_name) 1516 | except ClientError as e: 1517 | #print(e) 1518 | error_message = e.response['Error']['Message'] 1519 | #print(crayons.red(f"Error retrieving change sets: {error_message} ({region})", bold=True)) 1520 | return 1521 | 1522 | 1523 | change_sets = response['Summaries'] 1524 | change_set_data = [] 1525 | for change_set in change_sets: 1526 | change_set_data.append([change_set['ChangeSetName'], change_set['StackName'], change_set['Status'],region]) 1527 | 1528 | json_body["cloudformation"].append(change_set_data) 1529 | 1530 | if change_set_data == []: 1531 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1532 | return 1533 | 1534 | print(crayons.green("[+] " + started, bold=True), "\r\n" ,tabulate(change_set_data, headers=['Change Set Name', 'Stack Name', 'Status', 'Region'], tablefmt='psql')) 1535 | json_body["cloudformation"] = [] 1536 | for region in regions: 1537 | cloudformation_client = get_client('cloudformation', region_name=region) 1538 | response = cloudformation_client.list_stacks() 1539 | 1540 | stacks = response['StackSummaries'] 1541 | stack_names = [stack['StackName'] for stack in stacks] 1542 | 1543 | threads = [] 1544 | for stack_name in stack_names: 1545 | t = threading.Thread(target=list_change_sets, args=(stack_name, region)) 1546 | t.start() 1547 | threads.append(t) 1548 | 1549 | for t in threads: 1550 | t.join() 1551 | 1552 | def list_appmesh_meshes(): 1553 | started = "List App Mesh meshes:" 1554 | mesh_data = [] 1555 | 1556 | def process_region(region): 1557 | appmesh_client = get_client('appmesh', region_name=region) 1558 | response = appmesh_client.list_meshes() 1559 | meshes = response['meshes'] 1560 | for mesh in meshes: 1561 | mesh_data.append([mesh['MeshName'], mesh['CreatedTime'], mesh['Status'], region]) 1562 | 1563 | processes = [] 1564 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1565 | for region in regions: 1566 | processes.append(executor.submit(process_region, region)) 1567 | 1568 | json_body["appmesh"] = mesh_data 1569 | 1570 | if mesh_data == []: 1571 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 1572 | return 1573 | print(crayons.green("[+] " + started, bold=True), "\r\n", tabulate(mesh_data, headers=['Mesh Name', 'Created Time', 'Status', 'Region'], tablefmt='psql')) 1574 | 1575 | 1576 | def list_backup_plans(): 1577 | started = "List AWS Backup plans:" 1578 | plan_data = [] 1579 | 1580 | def process_region(region): 1581 | backup_client = get_client('backup', region_name=region) 1582 | response = backup_client.list_backup_plans() 1583 | plans = response['BackupPlansList'] 1584 | for plan in plans: 1585 | plan_data.append([plan['BackupPlanId'], plan['BackupPlanName'], plan['CreationDate'], region]) 1586 | 1587 | processes = [] 1588 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1589 | for region in regions: 1590 | processes.append(executor.submit(process_region, region)) 1591 | 1592 | json_body["backup"] = plan_data 1593 | 1594 | if plan_data == []: 1595 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1596 | return 1597 | print(crayons.green("[+] " + started, bold=True), "\r\n", tabulate(plan_data, headers=['Plan ID', 'Plan Name', 'Creation Date', 'Region'], tablefmt='psql')) 1598 | 1599 | def list_mediapackage_vod_assets(): 1600 | started = "List MediaPackage VOD assets:" 1601 | asset_data = [] 1602 | 1603 | def process_region(region): 1604 | mediapackage_vod_client = get_client('mediapackage-vod', region_name=region) 1605 | response = mediapackage_vod_client.list_assets() 1606 | assets = response['Assets'] 1607 | for asset in assets: 1608 | asset_data.append([asset['Id'], asset['Arn'], asset['CreatedAt'], region]) 1609 | 1610 | processes = [] 1611 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1612 | for region in regions: 1613 | processes.append(executor.submit(process_region, region)) 1614 | 1615 | json_body["mediapackage-vod"] = asset_data 1616 | 1617 | if asset_data == []: 1618 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1619 | return 1620 | print(crayons.green("[+] " + started, bold=True), "\r\n", tabulate(asset_data, headers=['Asset ID', 'ARN', 'Created At', 'Region'], tablefmt='psql')) 1621 | 1622 | def list_mediastore_containers(): 1623 | started = "List Elemental MediaStore containers:" 1624 | container_data = [] 1625 | 1626 | def process_region(region): 1627 | mediastore_client = get_client('mediastore', region_name=region) 1628 | response = mediastore_client.list_containers() 1629 | containers = response['Containers'] 1630 | for container in containers: 1631 | container_data.append([container['Name'], container['Status'], container['CreationTime'], region]) 1632 | 1633 | processes = [] 1634 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1635 | for region in regions: 1636 | processes.append(executor.submit(process_region, region)) 1637 | json_body["mediastore"] = container_data 1638 | if container_data == []: 1639 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 1640 | return 1641 | print(crayons.green("[+] " + started, bold=True), "\r\n", tabulate(container_data, headers=['Container Name', 'Status', 'Creation Time', 'Region'], tablefmt='psql')) 1642 | 1643 | def describe_snapshots(): 1644 | started = "List EBS Snapshots:" 1645 | snapshot_data = [] 1646 | 1647 | def describe_snapshots_in_region(region): 1648 | ec2_client = get_client('ec2', region_name=region) 1649 | response = ec2_client.describe_snapshots(OwnerIds=['self']) 1650 | snapshots = response['Snapshots'] 1651 | for snapshot in snapshots: 1652 | snapshot_data.append([ 1653 | snapshot['SnapshotId'], 1654 | snapshot['VolumeId'], 1655 | snapshot['StartTime'], 1656 | snapshot['State'], 1657 | 'Public' if snapshot.get('Encrypted') is False else 'Private', # Check if snapshot is encrypted 1658 | region 1659 | ]) 1660 | 1661 | processes = [] 1662 | ec2_client = boto3.client('ec2') 1663 | regions = [region['RegionName'] for region in ec2_client.describe_regions()['Regions']] 1664 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1665 | for region in regions: 1666 | processes.append(executor.submit(describe_snapshots_in_region, region)) 1667 | 1668 | if snapshot_data == []: 1669 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1670 | return 1671 | 1672 | print(crayons.green("[+] " + started, bold=True)) 1673 | print(tabulate(snapshot_data, headers=['Snapshot ID', 'Volume ID', 'Start Time', 'State', 'Public/Private', 'Region'], tablefmt='psql')) 1674 | 1675 | 1676 | def describe_serverless_apps(): 1677 | app_data = [] 1678 | started = "List Serverless Application:" 1679 | # Initialize SAR client 1680 | sar_client = boto3.client('serverlessrepo', region_name="us-east-1") 1681 | 1682 | # List applications 1683 | response = sar_client.list_applications() 1684 | 1685 | # Extract application data 1686 | for app in response['Applications']: 1687 | app_data.append([ 1688 | app['Name'], 1689 | app['Author'], 1690 | app['Description'], 1691 | app['CreationTime'] 1692 | ]) 1693 | 1694 | # Display application data in table format 1695 | headers = ["Name", "Author", "Description", "Creation Time"] 1696 | print(crayons.green("[+] " + started, bold=True)) 1697 | if not app_data: 1698 | print(crayons.yelow("[!] " + started + " (Empty!)", bold=True)) 1699 | else: 1700 | print(tabulate(app_data, headers=headers, tablefmt='psql')) 1701 | 1702 | 1703 | 1704 | 1705 | 1706 | def describe_subnets(): 1707 | started = "List Subnets:" 1708 | subnet_data = [] 1709 | 1710 | def describe_subnets_in_region(region): 1711 | ec2_client = get_client('ec2', region_name=region) 1712 | response = ec2_client.describe_subnets() 1713 | subnets = response['Subnets'] 1714 | 1715 | for subnet in subnets: 1716 | subnet_data.append([ 1717 | subnet['SubnetId'], 1718 | subnet['VpcId'], 1719 | subnet['CidrBlock'], 1720 | subnet['AvailabilityZone'], 1721 | region 1722 | ]) 1723 | 1724 | processes = [] 1725 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1726 | for region in regions: 1727 | processes.append(executor.submit(describe_subnets_in_region, region)) 1728 | 1729 | if subnet_data == []: 1730 | print(crayons.yelow("[!] 
" + started + " (Empty!)", bold=True)) 1731 | return 1732 | 1733 | headers = ['Subnet ID', 'VPC ID', 'CIDR Block', 'Availability Zone', 'Region'] 1734 | print(crayons.green("[+] " + started, bold=True)) 1735 | print(tabulate(subnet_data, headers=headers, tablefmt='psql')) 1736 | 1737 | 1738 | def describe_volumes(): 1739 | started = "List EBS Volumes:" 1740 | volume_data = [] 1741 | 1742 | def describe_volumes_in_region(region): 1743 | ec2_client = boto3.client('ec2', region_name=region) 1744 | response = ec2_client.describe_volumes() 1745 | volumes = response['Volumes'] 1746 | 1747 | for volume in volumes: 1748 | volume_data.append([ 1749 | volume['VolumeId'], 1750 | volume['Size'], 1751 | volume['AvailabilityZone'], 1752 | volume['State'], 1753 | region 1754 | ]) 1755 | processes = [] 1756 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1757 | for region in regions: 1758 | processes.append(executor.submit(describe_volumes_in_region, region)) 1759 | 1760 | if volume_data == []: 1761 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1762 | return 1763 | 1764 | headers = ['Volume ID', 'Size (GiB)', 'Availability Zone', 'State', 'Region'] 1765 | print(crayons.green("[+] " + started, bold=True)) 1766 | print(tabulate(volume_data, headers=headers, tablefmt='psql')) 1767 | 1768 | def describe_amis(): 1769 | started = "List AMIs:" 1770 | ami_data = [] 1771 | 1772 | def describe_amis_in_region(region): 1773 | 1774 | ec2_client = get_client('ec2', region_name=region) 1775 | response = ec2_client.describe_images(Owners=['self']) 1776 | amis = response['Images'] 1777 | 1778 | for ami in amis: 1779 | ami_data.append([ 1780 | ami['ImageId'], 1781 | ami['Name'], 1782 | ami['CreationDate'], 1783 | region 1784 | ]) 1785 | 1786 | processes = [] 1787 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1788 | for region in regions: 1789 | processes.append(executor.submit(describe_amis_in_region, region)) 1790 | 1791 | if ami_data == []: 1792 | print(crayons.yellow("[!] " + started + " (Empty!)", bold=True)) 1793 | return 1794 | 1795 | headers = ['AMI ID', 'Name', 'Creation Date', 'Region'] 1796 | print(crayons.green("[+] " + started, bold=True)) 1797 | print(tabulate(ami_data, headers=headers, tablefmt='psql')) 1798 | 1799 | def describe_security_groups(): 1800 | started = "List Security Groups:" 1801 | group_data = [] 1802 | 1803 | def describe_security_groups_in_region(region): 1804 | ec2_client = get_client('ec2', region_name=region) 1805 | response = ec2_client.describe_security_groups() 1806 | security_groups = response['SecurityGroups'] 1807 | 1808 | for group in security_groups: 1809 | group_data.append([ 1810 | group['GroupId'], 1811 | group['GroupName'], 1812 | group['Description'], 1813 | region 1814 | ]) 1815 | 1816 | processes = [] 1817 | with ThreadPoolExecutor(max_workers=Thread_Count) as executor: 1818 | for region in regions: 1819 | processes.append(executor.submit(describe_security_groups_in_region, region)) 1820 | 1821 | if group_data == []: 1822 | print(crayons.yellow("[!] 
" + started + " (Empty!)", bold=True)) 1823 | return 1824 | 1825 | headers = ['Group ID', 'Group Name', 'Description', 'Region'] 1826 | print(crayons.green("[+] " + started, bold=True)) 1827 | print(tabulate(group_data, headers=headers, tablefmt='psql')) 1828 | 1829 | 1830 | services_list = { 1831 | "ec2": "describe_ec2_instances","vpc":"describe_vpcs","s3": "list_s3_buckets","rds": "describe_rds_instances","lambda": "list_lambda_functions","cloudfront": "list_cloudfront_distributions","dynamodb": "list_dynamodb_tables","iam": "list_iam_users","iam": "list_iam_users_group", "iam": "list_iam_users_roles","iam": "list_customer_managed_policies","sns": "list_sns_topics", 1832 | "sqs": "list_sqs_queues","ecr": "describe_ecr_repositories","elasticbeanstalk": "describe_elasticbeanstalk_applications","route53": "list_route53_hosted_zones","cloudwatch": "describe_cloudwatch_alarms","codepipeline": "list_codepipeline_pipelines","sagemaker": "list_sagemaker_notebooks", 1833 | "secretsmanager": "list_secretsmanager_secrets","glue": "list_glue_data_catalogs","stepfunctions": "list_stepfunctions_state_machines","eks": "list_eks_clusters","cloudtrail": "describe_cloudtrail_trails","kinesis": "list_kinesis_streams","redshift": "describe_redshift_clusters", 1834 | "elasticache": "describe_elasticache_clusters","apigateway": "list_apigateway_apis", "ecs": "list_ecs_clusters","cloudformation": "list_cloudformation_stacks","appsync": "list_appsync_apis","ssm": "list_ssm_documents","elastictranscoder": "list_elastictranscoder_pipelines","datapipeline": "list_datapipeline_pipelines", 1835 | "mediaconvert": "list_mediaconvert_jobs","storagegateway": "list_storagegateway_gateways","workspaces": "describe_workspaces","cloud9": "list_cloud9_environments","lex-models": "list_lex_bots","iot": "list_iot_things","medialive": "list_medialive_channels","datasync": "list_datasync_tasks", 1836 | "emr": "list_emr_clusters","athena": "list_athena_workgroups","pinpoint": "list_pinpoint_applications","efs": "list_efs_file_systems","mediapackage": "list_mediapackage_channels","mq": "list_mq_brokers","organizations": "list_organizations_accounts","detective": "list_detective_graphs", 1837 | "opsworks": "list_opsworks_stacks","codecommit": "list_codecommit_repositories","appmesh": "list_appmesh_meshes","backup": "list_backup_plans","mediapackage-vod": "list_mediapackage_vod_assets","mediastore": "list_mediastore_containers","Snapshots":"describe_snapshots","Subnet":"describe_subnets", 1838 | "Volumes":"describe_volumes","ami":"describe_amis","SecurityGroups":"describe_security_groups", "serverlessrepo": "describe_serverless_apps" 1839 | } 1840 | 1841 | 1842 | functions = [ 1843 | describe_ec2_instances,describe_vpcs,list_s3_buckets,describe_rds_instances,list_lambda_functions,list_cloudfront_distributions,list_dynamodb_tables,list_iam_users,list_iam_users_group,list_iam_users_roles,list_customer_managed_policies,list_sns_topics,list_sqs_queues,describe_ecr_repositories,describe_elasticbeanstalk_applications,list_route53_hosted_zones, 1844 | describe_cloudwatch_alarms,list_codepipeline_pipelines,list_sagemaker_notebooks,list_secretsmanager_secrets,list_glue_data_catalogs,list_stepfunctions_state_machines,list_eks_clusters,describe_cloudtrail_trails,list_kinesis_streams,describe_redshift_clusters, 1845 | describe_elasticache_clusters,list_apigateway_apis, list_ecs_clusters, 
list_cloudformation_stacks,list_appsync_apis,list_ssm_documents,list_elastictranscoder_pipelines,list_datapipeline_pipelines,list_mediaconvert_jobs,list_storagegateway_gateways,describe_workspaces,list_cloud9_environments, 1846 | list_lex_bots,list_iot_things,list_medialive_channels,list_datasync_tasks,list_emr_clusters,list_athena_workgroups,list_pinpoint_applications,list_efs_file_systems,list_glue_crawlers,list_datasync_locations,list_mediapackage_channels,list_mq_brokers,list_organizations_accounts, 1847 | list_detective_graphs,list_opsworks_stacks,list_codecommit_repositories,list_cloudformation_change_sets,list_appmesh_meshes,list_backup_plans,list_mediapackage_vod_assets,list_mediastore_containers,describe_snapshots, 1848 | describe_subnets,describe_volumes,describe_amis,describe_security_groups, describe_serverless_apps 1849 | ] 1850 | 1851 | 1852 | 1853 | 1854 | def get_profile(): 1855 | profile = get_client("sts", region_name=None) 1856 | try: 1857 | response = profile.get_caller_identity() 1858 | userId = response["UserId"] 1859 | account = response["Account"] 1860 | arn = response["Arn"] 1861 | print() 1862 | print(crayons.magenta("[+] User Profile", bold=True)) 1863 | print(crayons.yellow("UserId: " + userId, bold=True)) 1864 | print(crayons.yellow("Account: " + account, bold=True)) 1865 | print(crayons.yellow("Arn: " + arn, bold=True)) 1866 | print("-------------------------------------------") 1867 | print() 1868 | except: 1869 | print(crayons.red("[!] Access token is not valid!")) 1870 | exit() 1871 | 1872 | if args.access_key and args.secret_key: 1873 | get_profile() 1874 | # Define the number of threads to run concurrently 1875 | num_threads = 5 1876 | 1877 | if args.services is not None: 1878 | specific_args = [] 1879 | for srv in args.services: 1880 | specific_args.append(services_list[srv]) 1881 | # Filter the functions based on the specified services 1882 | filtered_functions = [func for func in functions if func.__name__ in specific_args] 1883 | else: 1884 | # Execute all functions 1885 | filtered_functions = functions 1886 | 1887 | with concurrent.futures.ThreadPoolExecutor() as executor: 1888 | # Submit the filtered functions to the executor 1889 | futures = [executor.submit(f) for f in filtered_functions] 1890 | 1891 | # Wait for the first num_threads futures to complete 1892 | for future in concurrent.futures.as_completed(futures[:num_threads]): 1893 | pass 1894 | 1895 | else: 1896 | print(crayons.red("[-] Please provide --access-key and --secret-key!", bold=True)) 1897 | 1898 | 1899 | if args.output_file: 1900 | 1901 | with open(args.output_file, 'w') as file: 1902 | json.dump(json_body, file, indent=4, sort_keys=True, default=str) 1903 | print() 1904 | print(crayons.green(f'AWS data saved to {args.output_file}', bold=True)) 1905 | print() 1906 | -------------------------------------------------------------------------------- /aws_service_enum/requirements.txt: -------------------------------------------------------------------------------- 1 | boto3==1.34.135 2 | botocore==1.34.135 3 | colorama==0.4.6 4 | crayons==0.4.0 5 | jmespath==1.0.1 6 | prettytable==3.10.0 7 | python-dateutil==2.9.0.post0 8 | s3transfer==0.10.2 9 | six==1.16.0 10 | tabulate==0.9.0 11 | urllib3==2.2.2 12 | wcwidth==0.2.13 13 | -------------------------------------------------------------------------------- /azure_service_enum/README.md: -------------------------------------------------------------------------------- 1 | `azure_enum_services.py` lets you discover Azure services that are reachable with a given 
management access token. 2 | This tool is helpful in scenarios where you got “ARM access token” through command injection or any other vulnerability, but you are not sure if given credentials have access to other services or not. Instead of just trying for top few azure services (buckets, compute VM, etc), you can run this tool and it will let you enumerate through each non-intrusive(For example, only listing storage Accounts, this tool won't be creating/modifying bucket) features of each service. 3 | 4 | ## Requirements 5 | * Management access token needs to be provided as arguments. 6 | * `pip install -r requirements.txt` will install all necessary python packages. 7 | 8 | ## Usage 9 | 10 | ~~~ 11 | usage: azure_enum_services.py [-h] --access-token ACCESS_TOKEN [--output-file OUTPUT_FILE] 12 | 13 | options: 14 | -h, --help show this help message and exit 15 | --access-token ACCESS_TOKEN 16 | Provide Azure Management Access token 17 | --output-file OUTPUT_FILE 18 | Provide output file path (Optional) 19 | ~~~ 20 | 21 | ## Sample Output 22 | 23 | ![](/Sample_Output/azure_service_enum_sample_output.png) 24 | 25 | ## Author 26 | 27 | * [Raunak Parmar](https://www.linkedin.com/in/trouble1raunak/) -------------------------------------------------------------------------------- /azure_service_enum/azure_enum_services.py: -------------------------------------------------------------------------------- 1 | import requests, json 2 | from argparse import ArgumentParser 3 | from tabulate import tabulate 4 | from crayons import blue, yellow, red, green 5 | from requests.adapters import HTTPAdapter 6 | from urllib3.util.retry import Retry 7 | 8 | parser = ArgumentParser() 9 | parser.add_argument('--access-token', help='Provide Azure Management Access token', required=True) 10 | parser.add_argument('--output-file', help='Provide output file path (Optional)', required=False) 11 | args = parser.parse_args() 12 | 13 | # Create a session with retry mechanism 14 | session = requests.Session() 15 | retry_strategy = Retry(total=10, backoff_factor=0.5) 16 | adapter = HTTPAdapter(max_retries=retry_strategy) 17 | session.mount('http://', adapter) 18 | session.mount('https://', adapter) 19 | 20 | jsonOutputs = [] 21 | 22 | def http_request(url, headers): 23 | response = session.get(url, headers=headers) 24 | response.raise_for_status() 25 | return response.json() 26 | 27 | 28 | def list_azure_data(access_token): 29 | headers = { 30 | 'Authorization': 'Bearer ' + access_token, 31 | 'Content-Type': 'application/json' 32 | } 33 | 34 | subscriptions_url = 'https://management.azure.com/subscriptions?api-version=2020-01-01' 35 | try: 36 | subscriptions_data = http_request(subscriptions_url, headers) 37 | except Exception as e: 38 | print(red("[!] 
Error Found: Access Token expired or invalid.", bold=True)) 39 | exit() 40 | 41 | if 'value' in subscriptions_data: 42 | subscriptions = subscriptions_data['value'] 43 | subscription_table = [] 44 | 45 | for subscription in subscriptions: 46 | subscription_id = subscription['subscriptionId'] 47 | subscription_display_name = subscription['displayName'] 48 | subscription_table.append([subscription_id, subscription_display_name]) 49 | resource_groups_url = f'https://management.azure.com/subscriptions/{subscription_id}/resources?api-version=2020-06-01' 50 | services_data = http_request(resource_groups_url, headers) 51 | 52 | if 'value' in services_data: 53 | services = services_data['value'] 54 | services_table = [] 55 | 56 | for service in services: 57 | service_Id = service['id'] 58 | service_name = service['name'] 59 | service_type = service['type'] 60 | service_resource_group = service_Id.split("/")[4] 61 | services_table.append([service_name, service_type, service_resource_group]) 62 | 63 | if services_table: 64 | print(yellow(f"Subscription: {subscription_display_name} ({subscription_id})", bold=True)) 65 | print(tabulate(services_table, headers=[blue('Service Name', bold=True), blue('Service Type', bold=True), blue('Resource Group', bold=True)], tablefmt='psql')) 66 | print() 67 | json_data = { 68 | "subscriptionID":subscription_id, 69 | "subscriptionDisplayName":subscription_display_name, 70 | "resources":services_data['value'], 71 | } 72 | jsonOutputs.append(json_data) 73 | 74 | 75 | if subscription_table: 76 | print(yellow("Listing Subscriptions:", bold=True)) 77 | print(tabulate(subscription_table, headers=[blue('Subscription ID', bold=True), blue('Subscription Name', bold=True)], tablefmt='psql')) 78 | print() 79 | else: 80 | print(red("No subscriptions found.", bold=True)) 81 | else: 82 | print(red("No subscriptions found (unexpected response format).", bold=True)) 83 | 84 | # Save JSON object to file 85 | if args.output_file: 86 | with open(args.output_file, 'w') as file: 87 | json.dump(jsonOutputs, file, indent=4) 88 | print() 89 | print(green(f'Azure data saved to {args.output_file}', bold=True)) 90 | print() 91 | 92 | 93 | list_azure_data(args.access_token) 94 | -------------------------------------------------------------------------------- /azure_service_enum/requirements.txt: -------------------------------------------------------------------------------- 1 | requests 2 | tabulate 3 | crayons -------------------------------------------------------------------------------- /gcp_service_enum/README.md: -------------------------------------------------------------------------------- 1 | `gcp_enum_services.py` lets you discover GCP services by providing a service account key file 2 | 3 | ## Requirements 4 | 5 | * A service account key file needs to be provided as an argument (see the example below); in some cases the projectId and projectNo are also required 6 | * `pip install -r requirements.txt` will install all necessary Python packages. 
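For example, assuming you have exported a key for an existing service account as `sa-key.json` (the key file name, service account, and project below are placeholders), key creation and a typical run might look like:

~~~
gcloud iam service-accounts keys create sa-key.json --iam-account=SA_NAME@PROJECT_ID.iam.gserviceaccount.com
python3 gcp_enum_services.py -f sa-key.json --output-file gcp_output.json
~~~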
7 | 8 | ## Usage 9 | ~~~ 10 | usage: gcp_enum_services.py [-h] -f F [--output-file OUTPUT_FILE] 11 | 12 | options: 13 | -h, --help show this help message and exit 14 | -f F Provide service account key JSON file 15 | --output-file OUTPUT_FILE 16 | Provide output file path (Optional) 17 | ~~~ 18 | 19 | ## Sample Output 20 | 21 | ![](/Sample_Output/gcp_service_enum_sample_output.png) 22 | 23 | ## Author 24 | 25 | * [Raunak Parmar](https://www.linkedin.com/in/trouble1raunak/) -------------------------------------------------------------------------------- /gcp_service_enum/gcp_enum_services.py: -------------------------------------------------------------------------------- 1 | from crayons import red, green, yellow, blue 2 | from tabulate import tabulate 3 | from google.oauth2 import service_account 4 | from google.cloud import resource_manager, compute_v1, storage, functions_v2 5 | import argparse, json 6 | 7 | parser = argparse.ArgumentParser() 8 | parser.add_argument('-f', help='Provide service account key JSON file', required=True) 9 | parser.add_argument('--output-file', help='Provide output file path (Optional)', required=False) 10 | 11 | args = parser.parse_args() 12 | 13 | json_body = {} 14 | 15 | def list_gcp_data(service_account_file): 16 | # Load service account credentials 17 | try: 18 | credentials = service_account.Credentials.from_service_account_file(service_account_file) 19 | except Exception as e: 20 | print(red("[-] Invalid file!")) 21 | exit() 22 | 23 | # Initialize Resource Manager client 24 | client = resource_manager.Client(credentials=credentials) 25 | 26 | # List projects 27 | print(blue("Listing Projects:", bold=True)) 28 | projects = list(client.list_projects()) 29 | project_table = [] 30 | for project in projects: 31 | project_id = project.project_id 32 | project_name = project.name 33 | project_table.append([project_id, project_name]) 34 | print_table(project_table, headers=['Project ID', 'Project Name']) 35 | print() 36 | 37 | # List resources for each project 38 | for project in projects: 39 | project_id = project.project_id 40 | project_name = project.name 41 | print(blue(f"Project: {project_name} ({project_id})", bold=True)) 42 | print(yellow("Resources:", bold=True)) 43 | 44 | # List resource types 45 | resource_types = ['cloud_storage','instances', 'disks', 'networks', 'buckets', 'firewalls', 'cloudfunctions', 'cloudsql'] # note: 'cloud_storage' and 'buckets' both enumerate storage buckets; 'cloudsql' has no handler in list_resources() yet 46 | for resource_type in resource_types: 47 | resources = list_resources(project_id, resource_type, credentials) 48 | resource_table = [] 49 | for resource in resources: 50 | 51 | if resource_type == 'instances': 52 | resource_name = resource.name 53 | elif resource_type == 'disks': 54 | resource_name = resource.name 55 | else: 56 | resource_name = resource 57 | resource_table.append([resource_name]) 58 | 59 | # Group discovered resources by resource type in the JSON output 60 | if resource_type not in json_body: 61 | json_body[resource_type] = [] 62 | json_body[resource_type].append(resource_name) 63 | 64 | if resource_table: 65 | print(green(f"\n{resource_type.capitalize()}:", bold=True)) 66 | print_table(resource_table, headers=[resource_type.capitalize()]) 67 | 68 | print() 69 | 70 | def list_resources(project_id, resource_type, credentials): 71 | if resource_type == 'instances': 72 | client = compute_v1.InstancesClient(credentials=credentials) 73 | response = client.list(request={"project": project_id, "zone": "us-central1-a"}) 74 | return response.items 75 | elif resource_type == 'disks': 76 | client = compute_v1.DisksClient(credentials=credentials) 77 | response 
= client.list(request={"project": project_id, "zone": "us-central1-a"}) 78 | return response.items 79 | elif resource_type == 'networks': 80 | client = compute_v1.NetworksClient(credentials=credentials) 81 | response = client.list(request={"project": project_id}) 82 | return [network.name for network in response.items] 83 | elif resource_type == 'buckets': 84 | client = storage.Client(credentials=credentials) 85 | buckets = client.list_buckets(project=project_id) 86 | return [bucket.name for bucket in buckets] 87 | elif resource_type == 'firewalls': 88 | client = compute_v1.FirewallsClient(credentials=credentials) 89 | response = client.list(request={"project": project_id}) 90 | return [firewall.name for firewall in response.items] 91 | elif resource_type == 'cloud_storage': 92 | client = storage.Client(credentials=credentials) 93 | buckets = client.list_buckets(project=project_id) 94 | return [bucket.name for bucket in buckets] 95 | elif resource_type == 'cloudfunctions': 96 | client = functions_v2.FunctionServiceClient(credentials=credentials) 97 | # List functions 98 | parent = f"projects/{project_id}/locations/-" 99 | functions = client.list_functions(parent=parent) 100 | return [function.name for function in functions] 101 | else: 102 | return [] 103 | 104 | def print_table(data, headers): 105 | # Print data in table format 106 | if data: 107 | print(tabulate(data, headers=headers, tablefmt='psql')) 108 | else: 109 | print("\tNo resources found.") 110 | 111 | # Example usage 112 | service_account_file = 'gcp-audit.json' 113 | list_gcp_data(args.f) 114 | 115 | 116 | if args.output_file: 117 | with open(args.output_file, 'w') as file: 118 | json.dump(json_body, file, indent=4) 119 | 120 | print() 121 | print(green(f'GCP data saved to {args.output_file}', bold=True)) 122 | print() -------------------------------------------------------------------------------- /gcp_service_enum/requirements.txt: -------------------------------------------------------------------------------- 1 | crayons 2 | tabulate 3 | google-cloud-resource-manager==0.30.5 4 | google-cloud-compute 5 | google-cloud-functions 6 | google-cloud-storage 7 | --------------------------------------------------------------------------------