├── .gitignore
├── LICENSE
├── README.md
├── aws_tf
│   ├── README.md
│   ├── action_run.tpl
│   ├── main.tf
│   ├── outputs.tf
│   ├── variables.tf
│   └── versions.tf
└── gcp_tf
    ├── README.md
    ├── action_run.tpl
    ├── main.tf
    ├── outputs.tf
    ├── variables.tf
    └── versions.tf

/.gitignore:
--------------------------------------------------------------------------------
# Local .terraform directories
**/.terraform/*

# .tfstate files
*.tfstate
*.tfstate.*

# .tfvars files
*.tfvars

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2019 JP

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# OffensiveCloudDistribution
Have you ever needed to scan 3 million hosts with masscan? What about running EyeWitness against 5k servers? Without sacrificing accuracy, those jobs will take quite a while!
What if you could stand up 50 EC2 instances that each take a small part of the work, have each instance write its results to an S3 bucket, and then spin the instances down, all while staying within the AWS Free Tier? This Terraform module lets you do exactly that!

[@ok_bye_now](https://twitter.com/ok_bye_now)

[@thesubtlety](https://twitter.com/thesubtlety)

## What do I need to get started?
- An AWS or GCP account
- Terraform

Yes, that's it! The scripts contained here configure the EC2 instances, kick the actions off, and drop the results into an S3 bucket for you.

### Getting Started

#### AWS Instructions

1. Download and install Terraform for your platform. https://www.vasos-koupparis.com/terraform-getting-started-install/
2. Create an AWS account if you don't already have one.
3. Retrieve your AWS access and secret keys.
4. `git clone https://github.com/jordanpotti/OffensiveCloudDistribution`
5. `cd OffensiveCloudDistribution/aws_tf`
6. `terraform init`
7. `terraform apply`; you will be prompted for a few values: the number of instances, the host name, the IP/CIDR you want to SSH into the instances from, and the file name of a line-delimited list of IPs to scan.
8. The output will give you the instance IPs as well as the private SSH key. Copy this key into a `.pem` file to SSH into the servers.
9. The results of the scan (or of any custom action you specify) will end up in a randomly named S3 bucket. Download the files placed there before you run `terraform destroy`, since that destroys the S3 bucket as well.

#### For GCP instructions, check out the `README` located in the `gcp_tf` directory

## Note

To bypass the module asking for variables, simply add a `terraform.tfvars` file in the `aws_tf` or `gcp_tf` directory with the values, e.g.:

```
secret_key = ""
access_key = ""
scan_list = ""
instance_count = ""
allow_ingress = ""
host_name = ""
```
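
For instance, a populated `terraform.tfvars` for `aws_tf` might look like the following (every value here is an illustrative placeholder - substitute your own keys, scan file, and source address):

```
access_key     = "AKIAXXXXXXXXXXXXXXXX"
secret_key     = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
scan_list      = "targets.txt"
instance_count = "50"
allow_ingress  = "203.0.113.4/32"
host_name      = "scanner"
```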

## Other Platforms
Currently, the Terraform modules here cover AWS and GCP; PRs for other platforms are welcome :)

## Disclaimer:
Please be aware of the AWS and GCP Free Tier rules. With instances that qualify for the free tier, you can use 750 instance-hours per month. If you modify certain pieces of the Terraform module (like changing the instance size) or don't destroy resources after your job is done, you will likely incur hefty charges.

* https://aws.amazon.com/free/terms/
* https://cloud.google.com/free/docs/gcp-free-tier

--------------------------------------------------------------------------------
/aws_tf/README.md:
--------------------------------------------------------------------------------
## Default Actions

The default action here kicks off a masscan of the scan list you specify. Take a look at `action_run.tpl` to see exactly what the script is doing.

Masscan has a shard option, which makes it trivial to split a scan across the servers Terraform spins up. Other tools don't have that feature, so a tool like [splitter](https://github.com/jordanpotti/splitter) lets you split a file into chunks and, using the `count` template variable, tell each instance which chunk to scan.

A template to get you started splitting lists of hosts to scan (the `git clone`, `cd`, and tool invocation lines are intentionally left blank for you to fill in):

```
/snap/bin/aws s3 cp s3://${s3_bucket}/${scan_list} .

curl https://github.com/jordanpotti/splitter/releases/download/Linux/splitter -L --output splitter

chmod +x splitter

./splitter -target ${scan_list} -numb ${total}

git clone

cd

./ --target ${count} --out ${count}.out

/snap/bin/aws s3 cp ${count}.out s3://${s3_bucket}/${count}.out
```
### Troubleshooting
To verify that a scan kicked off, or to troubleshoot an action, SSH into one of your servers and run `tail -f /var/log/cloud-init-output.log`; that will also let you track the progress of your scans.
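
Once the instances finish, you can pull the binary results down and convert them locally. A sketch, assuming the AWS CLI is configured on your workstation, you have built masscan locally as well, and you substitute the bucket name from the `s3_bucket` Terraform output (file names here are illustrative):

```
# Pull down everything the instances uploaded
aws s3 sync s3://<s3_bucket>/ ./results/

# Convert masscan's binary (-oB) output into a readable list
masscan --readscan ./results/results-1.masscan.bin -oL results-1.txt
```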
--------------------------------------------------------------------------------
/aws_tf/action_run.tpl:
--------------------------------------------------------------------------------
#!/bin/bash
# This builds and runs masscan; adapt it for whatever you would like, though.

apt update -y
snap install aws-cli --classic

# Build dependencies for masscan (-y so the non-interactive cloud-init run never prompts)
apt-get install -y git gcc make build-essential libpcap-dev
git clone https://github.com/robertdavidgraham/masscan
cd masscan
make -j

# Fetch the shared scan list from the S3 bucket Terraform created
/snap/bin/aws s3 cp s3://${s3_bucket}/${scan_list} .

# Scan this instance's shard of the target list
sudo bin/masscan --top-ports 50 -iL ${scan_list} --rate 500 --excludefile data/exclude.conf -oB results-${count}.masscan.bin --shard ${count}/${total}

# Upload this instance's results
/snap/bin/aws s3 cp results-${count}.masscan.bin s3://${s3_bucket}/results-${count}.masscan.bin

--------------------------------------------------------------------------------
/aws_tf/main.tf:
--------------------------------------------------------------------------------
provider "aws" {
  region     = "${var.aws_region}"
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
}

resource "tls_private_key" "temp_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "generated_key" {
  key_name   = "temp_key"
  public_key = "${tls_private_key.temp_key.public_key_openssh}"
}

data "template_file" "init" {
  count    = "${var.instance_count}"
  template = "${file("action_run.tpl")}"
  vars = {
    # masscan shards are numbered 1..total, so offset the zero-based index
    count     = "${count.index + 1}"
    total     = "${var.instance_count}"
    s3_bucket = "${random_id.s3.hex}"
    scan_list = "${var.scan_list}"
  }
}
resource "aws_iam_role" "temp_role" {
  name = "temp_role"

  assume_role_policy = <<EOF

--------------------------------------------------------------------------------
/aws_tf/outputs.tf:
--------------------------------------------------------------------------------
output "instance_public_ip_addresses" {
  value = {
    for instance in aws_instance.vm-ubuntu:
    instance.id => instance.public_ip
  }
}
output "instance_private_ip_addresses" {
  value = {
    for instance in aws_instance.vm-ubuntu:
    instance.id => instance.private_ip
  }
}
output "Private_SSH_Key_Value" {
  value = "${tls_private_key.temp_key.private_key_pem}"
}

output "s3_bucket" {
  value = "${random_id.s3.hex}"
}

output "Username" {
  value = "ubuntu"
}

--------------------------------------------------------------------------------
/aws_tf/variables.tf:
--------------------------------------------------------------------------------
variable "aws_region" {
  description = "Region for the VPC"
  default     = "us-east-1"
}

variable "host_name" {
  description = "Host name to give the servers"
}

variable "secret_key" {}
variable "access_key" {}

variable "allow_ingress" {
  description = "CIDR that will be allowed to SSH into the Ubuntu hosts, e.g. x.x.x.x/x"
}

variable "instance_count" {
}

variable "scan_list" {
  description = "File with the list of IPs to scan (enter the file name here)"
}

--------------------------------------------------------------------------------
/aws_tf/versions.tf:
--------------------------------------------------------------------------------

terraform {
  required_version = ">= 0.12"
}

--------------------------------------------------------------------------------
/gcp_tf/README.md:
--------------------------------------------------------------------------------
## Getting Started

Ensure your creds are downloaded from the Cloud Console and the environment variables are set:
`export GOOGLE_CLOUD_KEYFILE_JSON={{path}}`
or, if you're using a service account:
`export GOOGLE_APPLICATION_CREDENTIALS={{path}}`

Make any modifications to the `variables.tf` and `action_run.tpl` files. No changes are required, but note that the examples here assume the `us-central1-a` zone, that by default there are no restrictions on which IPs can reach port 22 on the created instances, and that instances are created without OAuth scope restrictions.
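
As with the AWS module, you can drop a `terraform.tfvars` file in this directory to skip the interactive prompts; for example (all values are illustrative):

```
project_id     = "my-scan-project"
gcp_region     = "us-central1"
gcp_zone       = "us-central1-a"
host_name      = "scanner"
scan_list      = "targets.txt"
instance_count = "10"
allow_ingress  = "203.0.113.4/32"
```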

Run `terraform init && terraform apply`.

To destroy, run `terraform destroy`.

Terraform Reference: https://www.terraform.io/docs/providers/google/provider_reference.html

### Accessing your instances
Assuming you have the Google [Cloud SDK CLI](https://cloud.google.com/sdk/docs/#deb) installed:

The SSH keys are per instance and created automatically. There are other SSH access options with GCP, such as OS Login - feel free to submit a PR.
`gcloud beta compute --project "your-project-name" ssh --zone "us-central1-a" "your-instance-id"`
`gcloud compute ssh --project [PROJECT_ID] --zone [ZONE] [INSTANCE_NAME]`

Reference:
* https://cloud.google.com/compute/docs/instances/connecting-to-instance
* https://cloud.google.com/compute/docs/instances/managing-instance-access

### Viewing output
You can watch the serial port console output in the Cloud Console or use the command below. The default masscan action can take a few minutes to update packages and compile; after that, timing depends on whatever your action is.
```
gcloud compute instances get-serial-port-output instance-name \
    --port port \
    --start start \
    --zone zone
```
Having issues? Try here - https://cloud.google.com/compute/docs/troubleshooting/

### Storage
Interact with Google Cloud Storage:
`gsutil ls`

Note that 50 GB of storage may cost you a dollar.
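
To pull the scan results down once the instances finish, something like the following works (a sketch; substitute the bucket name from the `bucket_name` Terraform output):

```
gsutil cp "gs://<bucket_name>/results-*.masscan.bin" .
```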

### Pricing
https://cloud.google.com/free/docs/gcp-free-tier#always-free-usage-limits

--------------------------------------------------------------------------------
/gcp_tf/action_run.tpl:
--------------------------------------------------------------------------------
#!/bin/bash
# This installs and runs masscan; adapt it for whatever you would like, though.

apt -y update
apt -y install build-essential git gcc make libpcap-dev

cd /root
git clone https://github.com/robertdavidgraham/masscan
cd masscan
make

# Fetch the shared scan list from the bucket Terraform created
gsutil cp gs://${bucket}/${scan_list} .

# Scan this instance's shard of the target list
bin/masscan --top-ports 50 -iL ${scan_list} --rate 500 --excludefile data/exclude.conf -oB results-${count}.masscan.bin --shard ${count}/${total}

# Upload this instance's results
gsutil cp results-${count}.masscan.bin gs://${bucket}/results-${count}.masscan.bin

--------------------------------------------------------------------------------
/gcp_tf/main.tf:
--------------------------------------------------------------------------------
provider "google" {
  project = "${var.project_id}"
  region  = "${var.gcp_region}"
  zone    = "${var.gcp_zone}"
}

# Set up the Google Compute instances
resource "random_id" "instance_name" {
  byte_length = 8
}
resource "google_compute_instance" "vm_instance" {
  name         = "${var.host_name}-${random_id.instance_name.hex}-${count.index}"
  count        = "${var.instance_count}"
  machine_type = "f1-micro"
  service_account {
    scopes = ["cloud-platform"]
  }
  metadata_startup_script = "${element(data.template_file.init.*.rendered, count.index)}"
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }
  network_interface {
    network = "${google_compute_network.vpc_network.self_link}"
    access_config {
    }
  }
  lifecycle {
    ignore_changes = [
      machine_type,
    ]
  }
}
resource "google_compute_network" "vpc_network" {
  name                    = "temp-terraform-network"
  auto_create_subnetworks = "true"
}
resource "google_compute_firewall" "default" {
  name    = "temp-terraform-firewall"
  network = "${google_compute_network.vpc_network.self_link}"
  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
  source_ranges = [var.allow_ingress]
}


# Set up the storage config, private by default
resource "random_id" "bucket_name" {
  byte_length = 8
}
resource "google_storage_bucket" "action_storage" {
  name          = "${random_id.bucket_name.hex}"
  force_destroy = true
}
resource "google_storage_bucket_object" "object" {
  bucket     = "${random_id.bucket_name.hex}"
  name       = "${var.scan_list}"
  source     = "${var.scan_list}"
  depends_on = [google_storage_bucket.action_storage]
}


# Configure the scan distribution; these vars are used in the template
data "template_file" "init" {
  count    = "${var.instance_count}"
  template = "${file("action_run.tpl")}"
  vars = {
    # masscan shards are numbered 1..total, so offset the zero-based index
    count     = "${count.index + 1}"
    total     = "${var.instance_count}"
    bucket    = "${random_id.bucket_name.hex}"
    scan_list = "${var.scan_list}"
  }
}

--------------------------------------------------------------------------------
/gcp_tf/outputs.tf:
--------------------------------------------------------------------------------
output "instance_public_ip_addresses" {
  value = {
    for instance in google_compute_instance.vm_instance:
    instance.id => instance.network_interface.0.access_config.0.nat_ip
  }
}

output "instance_private_ip_addresses" {
  value = {
    for instance in google_compute_instance.vm_instance:
    instance.id => instance.network_interface.0.network_ip
  }
}

output "bucket_name" {
  value = google_storage_bucket.action_storage.name
}

output "project_id" {
  value = google_compute_instance.vm_instance.0.project
}

output "project_zone" {
  value = google_compute_instance.vm_instance.0.zone
}

output "ssh_command" {
  value = "gcloud beta compute ssh --project [project-id] --zone [gcp-zone] [instance-id]"
}
--------------------------------------------------------------------------------
/gcp_tf/variables.tf:
--------------------------------------------------------------------------------
variable "gcp_region" {
  description = "Region for the VPC (e.g. us-central1)"
}

variable "gcp_zone" {
  description = "Zone for the VPC (e.g. us-central1-a)"
}

variable "project_id" {
  description = "Your GCP project ID"
}

variable "host_name" {
  description = "Host name to give the servers"
}

variable "allow_ingress" {
  description = "CIDR/IP that will be allowed to access the hosts (default 0.0.0.0/0)"
  default     = "0.0.0.0/0"
}

variable "scan_list" {
  description = "File with the list of IPs to scan"
}

variable "instance_count" {
}

--------------------------------------------------------------------------------
/gcp_tf/versions.tf:
--------------------------------------------------------------------------------

terraform {
  required_version = ">= 0.12"
}

--------------------------------------------------------------------------------