├── LICENSE
├── README.md
├── artifacts.tf
├── assets
│   └── infrastructure.png
├── bucket.tf
├── config
│   ├── nuclei-config.yaml
│   └── report-config.yaml
├── dynamodb.tf
├── glue.tf
├── main.tf
├── output.tf
├── src
│   ├── go.mod
│   ├── go.sum
│   └── main.go
├── static
│   └── infrastructure.png
├── variables.tf
└── versions.tf
/LICENSE:
--------------------------------------------------------------------------------
1 | The MIT License (MIT)
2 |
3 | Copyright (c) 2023, Jonathan Walker
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
6 |
7 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
8 |
9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
10 |
11 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Nuclei Runner
2 |
3 |
4 |
5 | This Terraform module lets you execute [Nuclei](https://github.com/projectdiscovery/nuclei) inside an AWS [Lambda function](https://aws.amazon.com/lambda/). It is designed as the backend for [Nuclear Pond](https://github.com/DevSecOpsDocs/Nuclear-Pond); if you have not seen that repository yet, start there. The goal is to let you run automated scans against your infrastructure and parse the results in any way you choose.
6 |
7 | Nuclei can help you identify technologies running within your infrastructure, misconfigurations, exploitable vulnerabilities, network protocols, default credentials, exposed panels, takeovers, and much more. Continuously monitoring your network for such issues can provide a crucial last line of defense against vulnerabilities hidden within your cloud infrastructure.
8 |
9 | > :warning: **This is vulnerable to Remote Code Execution**: Be careful where you deploy this; inputs are deliberately not sanitized, for flexibility. Since it runs in Lambda the risk is generally low, but attaching a network interface to the function could add significant risk.
10 |
11 | ## Engineering Decisions
12 |
13 | As with any engineering project, design decisions were made based on the project's requirements. Those decisions come with the following limitations:
14 |
15 | - Arguments are passed directly when invoking the lambda function so that you can specify any nuclei flags you like; since the sink is `exec.Command`, this is vulnerable to remote code execution by design
16 | - Never pass the `-u`, `-l`, `-json`, or `-o` flags to this lambda function; the function sets those itself, but you can pass any other nuclei arguments
17 | - Nuclei insists on writing to `$HOME/.config`, and Lambda's filesystem is read-only outside `/tmp`, so `HOME` is set to `/tmp`; warm starts share that filesystem, which may poison future configurations
18 | - Lambda function in golang is rebuilt on every apply for ease of development
19 | - When configuration files are updated, you might have to destroy and recreate the infrastructure
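The args-to-`exec.Command` flow called out above can be sketched in a few lines of Go. This is a minimal, runnable illustration, not the module's code; `echo` stands in for the nuclei binary so nothing is actually scanned:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run mirrors runNuclei in src/main.go: caller-supplied args are passed
// straight into exec.Command. No shell is involved, but every flag goes
// through unfiltered, which is why untrusted input is dangerous here.
func run(binary string, args []string) (string, error) {
	out, err := exec.Command(binary, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// "echo" stands in for /opt/nuclei so this sketch runs anywhere.
	out, err := run("echo", []string{"-u", "https://devsecopsdocs.com"})
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```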
20 |
21 | ### Event Json
22 |
23 | This is what must be passed to the lambda function. `Targets` can be a list of one or many; the lambda function passes the `-u` or `-l` flag accordingly. `Args` can be any valid nuclei flags. `Output` selects the result format: the raw command-line output (`cmd`), the JSON findings (`json`), or the S3 key where the results are uploaded (`s3`).
24 |
25 | ```json
26 | {
27 |   "Targets": [
28 |     "https://devsecopsdocs.com"
29 |   ],
30 |   "Args": [
31 |     "-t",
32 |     "dns"
33 |   ],
34 |   "Output": "json"
35 | }
36 | ```
37 |
38 |
39 | ## Requirements
40 |
41 | | Name | Version |
42 | |------|---------|
43 | | [terraform](#requirement\_terraform) | >= 1.0 |
44 | | [archive](#requirement\_archive) | 2.2.0 |
45 | | [aws](#requirement\_aws) | 4.50.0 |
46 | | [github](#requirement\_github) | 5.14.0 |
47 | | [null](#requirement\_null) | 3.2.1 |
48 |
49 | ## Providers
50 |
51 | | Name | Version |
52 | |------|---------|
53 | | [archive](#provider\_archive) | 2.2.0 |
54 | | [aws](#provider\_aws) | 4.50.0 |
55 | | [github](#provider\_github) | 5.14.0 |
56 | | [null](#provider\_null) | 3.2.1 |
57 |
58 | ## Modules
59 |
60 | No modules.
61 |
62 | ## Resources
63 |
64 | | Name | Type |
65 | |------|------|
66 | | [aws_cloudwatch_log_group.log_group](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/resources/cloudwatch_log_group) | resource |
67 | | [aws_dynamodb_table.scan_state_table](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/resources/dynamodb_table) | resource |
68 | | [aws_glue_catalog_database.database](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/resources/glue_catalog_database) | resource |
69 | | [aws_glue_catalog_table.table](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/resources/glue_catalog_table) | resource |
70 | | [aws_iam_policy.policy](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/resources/iam_policy) | resource |
71 | | [aws_iam_role.lambda_role](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/resources/iam_role) | resource |
72 | | [aws_iam_role_policy_attachment.policy](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/resources/iam_role_policy_attachment) | resource |
73 | | [aws_lambda_alias.alias](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/resources/lambda_alias) | resource |
74 | | [aws_lambda_function.function](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/resources/lambda_function) | resource |
75 | | [aws_lambda_layer_version.configs_layer](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/resources/lambda_layer_version) | resource |
76 | | [aws_lambda_layer_version.layer](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/resources/lambda_layer_version) | resource |
77 | | [aws_lambda_layer_version.templates_layer](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/resources/lambda_layer_version) | resource |
78 | | [aws_s3_bucket.bucket](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/resources/s3_bucket) | resource |
79 | | [aws_s3_bucket_public_access_block.block](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/resources/s3_bucket_public_access_block) | resource |
80 | | [aws_s3_bucket_server_side_encryption_configuration.encryption](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/resources/s3_bucket_server_side_encryption_configuration) | resource |
81 | | [aws_s3_object.upload_config](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/resources/s3_object) | resource |
82 | | [aws_s3_object.upload_nuclei](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/resources/s3_object) | resource |
83 | | [aws_s3_object.upload_templates](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/resources/s3_object) | resource |
84 | | [null_resource.build](https://registry.terraform.io/providers/hashicorp/null/3.2.1/docs/resources/resource) | resource |
85 | | [null_resource.download_nuclei](https://registry.terraform.io/providers/hashicorp/null/3.2.1/docs/resources/resource) | resource |
86 | | [null_resource.download_templates](https://registry.terraform.io/providers/hashicorp/null/3.2.1/docs/resources/resource) | resource |
87 | | [archive_file.nuclei_config](https://registry.terraform.io/providers/hashicorp/archive/2.2.0/docs/data-sources/file) | data source |
88 | | [archive_file.zip](https://registry.terraform.io/providers/hashicorp/archive/2.2.0/docs/data-sources/file) | data source |
89 | | [aws_iam_policy_document.policy](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/data-sources/iam_policy_document) | data source |
90 | | [aws_iam_policy_document.trust](https://registry.terraform.io/providers/hashicorp/aws/4.50.0/docs/data-sources/iam_policy_document) | data source |
91 | | [github_release.templates](https://registry.terraform.io/providers/hashicorp/github/5.14.0/docs/data-sources/release) | data source |
92 |
93 | ## Inputs
94 |
95 | | Name | Description | Type | Default | Required |
96 | |------|-------------|------|---------|:--------:|
97 | | [github\_owner](#input\_github\_owner) | Github owner to use for templates | `string` | `"projectdiscovery"` | no |
98 | | [github\_repository](#input\_github\_repository) | Github repository to use for templates | `string` | `"nuclei-templates"` | no |
99 | | [github\_token](#input\_github\_token) | Github token to use for private templates, leave empty if you don't need private templates | `string` | `""` | no |
100 | | [memory\_size](#input\_memory\_size) | n/a | `number` | `512` | no |
101 | | [nuclei\_arch](#input\_nuclei\_arch) | Nuclei architecture to use | `string` | `"linux_amd64"` | no |
102 | | [nuclei\_timeout](#input\_nuclei\_timeout) | Lambda function timeout | `number` | `900` | no |
103 | | [nuclei\_version](#input\_nuclei\_version) | Nuclei version to use | `string` | `"2.8.7"` | no |
104 | | [project\_name](#input\_project\_name) | Name of the project to create; must be unique, since S3 bucket names are global | `any` | n/a | yes |
105 | | [release\_tag](#input\_release\_tag) | Github release tag to use for templates | `string` | `"v9.3.4"` | no |
106 | | [tags](#input\_tags) | n/a | `map(string)` | `{ "Name": "nuclei-scanner" }` | no |
107 |
108 | ## Outputs
109 |
110 | | Name | Description |
111 | |------|-------------|
112 | | [dynamodb\_state\_table](#output\_dynamodb\_state\_table) | n/a |
113 | | [function\_name](#output\_function\_name) | n/a |
114 |
--------------------------------------------------------------------------------
/artifacts.tf:
--------------------------------------------------------------------------------
1 | provider "github" {
2 | token = var.github_token
3 | }
4 |
5 | # Download nuclei binary and templates
6 | resource "null_resource" "download_nuclei" {
7 | triggers = {
8 | version = var.nuclei_version
9 | }
10 |
11 | provisioner "local-exec" {
12 | command = "curl -o ${path.module}/src/nuclei.zip -L https://github.com/projectdiscovery/nuclei/releases/download/v${var.nuclei_version}/nuclei_${var.nuclei_version}_${var.nuclei_arch}.zip"
13 | }
14 | }
15 |
16 | # Private templates download from github
17 | data "github_release" "templates" {
18 | repository = var.github_repository
19 | owner = var.github_owner
20 | retrieve_by = "tag"
21 | release_tag = var.release_tag
22 | }
23 |
24 | resource "null_resource" "download_templates" {
25 | triggers = {
26 | version = var.release_tag
27 | }
28 |
29 | provisioner "local-exec" {
30 | command = "curl -o ${path.module}/src/nuclei-templates.zip -L ${data.github_release.templates.zipball_url}"
31 | }
32 | }
33 |
34 | # Upload them to s3
35 | resource "aws_s3_object" "upload_nuclei" {
36 | depends_on = [null_resource.download_nuclei]
37 |
38 | bucket = aws_s3_bucket.bucket.id
39 | key = "nuclei.zip"
40 | source = "${path.module}/src/nuclei.zip"
41 | }
42 |
43 | resource "aws_s3_object" "upload_templates" {
44 | depends_on = [null_resource.download_templates]
45 |
46 | bucket = aws_s3_bucket.bucket.id
47 | key = "nuclei-templates.zip"
48 | source = "${path.module}/src/nuclei-templates.zip"
49 | }
50 |
51 | # Nuclei configuration files
52 | data "archive_file" "nuclei_config" {
53 | type = "zip"
54 | source_dir = "${path.module}/config"
55 | output_path = "${path.module}/nuclei-configs.zip"
56 | }
57 |
58 | resource "aws_s3_object" "upload_config" {
59 | depends_on = [data.archive_file.nuclei_config]
60 | bucket = aws_s3_bucket.bucket.id
61 | key = "nuclei-configs.zip"
62 | source = "${path.module}/nuclei-configs.zip"
63 | }
64 |
65 | # Build the lambda function to execute binary
66 | resource "null_resource" "build" {
67 | triggers = {
68 | always = timestamp()
69 | }
70 |
71 | provisioner "local-exec" {
72 | command = "cd ${path.module}/src && GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o main"
73 | }
74 | }
75 |
76 | data "archive_file" "zip" {
77 | depends_on = [null_resource.build]
78 | type = "zip"
79 | source_file = "${path.module}/src/main"
80 | output_path = "lambda.zip"
81 | }
--------------------------------------------------------------------------------
/assets/infrastructure.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DevSecOpsDocs/terraform-nuclear-pond/28faf3bd100ad44605c369be6f3acb157d36ca5a/assets/infrastructure.png
--------------------------------------------------------------------------------
/bucket.tf:
--------------------------------------------------------------------------------
1 | #tfsec:ignore:aws-s3-enable-bucket-logging tfsec:ignore:aws-s3-enable-versioning
2 | resource "aws_s3_bucket" "bucket" {
3 | bucket = "${var.project_name}-artifacts"
4 | tags = var.tags
5 |
6 | # Delete all objects in the bucket before deleting the bucket
7 | force_destroy = true
8 | }
9 |
10 | #tfsec:ignore:aws-s3-encryption-customer-key
11 | resource "aws_s3_bucket_server_side_encryption_configuration" "encryption" {
12 | bucket = aws_s3_bucket.bucket.id
13 | rule {
14 | apply_server_side_encryption_by_default {
15 | sse_algorithm = "AES256"
16 | }
17 | }
18 | }
19 |
20 | resource "aws_s3_bucket_public_access_block" "block" {
21 | bucket = aws_s3_bucket.bucket.id
22 |
23 | block_public_acls = true
24 | block_public_policy = true
25 | ignore_public_acls = true
26 | restrict_public_buckets = true
27 | }
28 |
--------------------------------------------------------------------------------
/config/nuclei-config.yaml:
--------------------------------------------------------------------------------
1 | # Headers to include with all HTTP request
2 | header:
3 | - 'X-BugBounty-Hacker: github/nuclearpond'
4 |
5 | # Directory based template execution
6 | templates:
7 | - dns/
8 |
9 | # Tags based template execution
10 | # tags: exposures,cve
11 |
12 | # Template Filters
13 | # tags: exposures,cve
14 | # author: geeknik,pikpikcu,dhiyaneshdk
15 | # severity: critical,high,medium
16 |
17 | # Template Allowlist
18 | # include-tags: dos,fuzz # Tag based inclusion (allows overwriting nuclei-ignore list)
19 | # include-templates: # Template based inclusion (allows overwriting nuclei-ignore list)
20 | # - vulnerabilities/xxx
21 | # - misconfiguration/xxxx
22 |
23 | # Template Denylist
24 | # exclude-tags: info # Tag based exclusion
25 | # exclude-templates: # Template based exclusion
26 | # - vulnerabilities/xxx
27 | # - misconfiguration/xxxx
28 |
29 | # Rate Limit configuration
30 | rate-limit: 500
31 | bulk-size: 50
32 | concurrency: 50
--------------------------------------------------------------------------------
/config/report-config.yaml:
--------------------------------------------------------------------------------
1 | allow-list:
2 | severity: []
3 | tags: []
4 | deny-list:
5 | severity: []
6 | tags: []
7 | github:
8 | base-url: ""
9 | username: ""
10 | owner: ""
11 | token: ""
12 | project-name: ""
13 | issue-label: ""
14 | severity-as-label: false
15 | gitlab:
16 | base-url: ""
17 | username: ""
18 | token: ""
19 | project-name: ""
20 | issue-label: ""
21 | severity-as-label: false
22 | jira:
23 | cloud: false
24 | update-existing: false
25 | url: ""
26 | account-id: ""
27 | email: ""
28 | token: ""
29 | project-name: ""
30 | issue-type: ""
31 | severity-as-label: false
32 | markdown:
33 | directory: ""
34 | sarif:
35 | file: ""
36 | elasticsearch:
37 | host: ""
38 | ip: ""
39 | port: 0
40 | ssl: false
41 | ssl-verification: false
42 | username: ""
43 | password: ""
44 | index-name: ""
45 |
--------------------------------------------------------------------------------
/dynamodb.tf:
--------------------------------------------------------------------------------
1 | resource "aws_dynamodb_table" "scan_state_table" {
2 | name = "${var.project_name}-server-state"
3 | billing_mode = "PAY_PER_REQUEST"
4 | hash_key = "scan_id"
5 | attribute {
6 | name = "scan_id"
7 | type = "S"
8 | }
9 |
10 | ttl {
11 | attribute_name = "ttl"
12 | enabled = true
13 | }
14 |
15 | tags = var.tags
16 | }
--------------------------------------------------------------------------------
/glue.tf:
--------------------------------------------------------------------------------
1 | resource "aws_glue_catalog_database" "database" {
2 | name = "nuclei_db"
3 | description = "Database for nuclei findings"
4 | }
5 |
6 | resource "aws_glue_catalog_table" "table" {
7 | name = "findings_db"
8 | database_name = aws_glue_catalog_database.database.name
9 | table_type = "EXTERNAL_TABLE"
10 |
11 | storage_descriptor {
12 | location = "s3://${aws_s3_bucket.bucket.id}/findings/"
13 |
14 | input_format = "org.apache.hadoop.mapred.TextInputFormat"
15 | output_format = "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
16 |
17 | columns {
18 | name = "extracted-results"
19 | type = "array"
20 | }
21 | columns {
22 | name = "host"
23 | type = "string"
24 | }
25 | columns {
26 | name = "info"
27 | type = "struct<author:array<string>,classification:string,description:string,name:string,reference:array<string>,severity:string,tags:array<string>>"
28 | }
29 | columns {
30 | name = "metadata"
31 | type = "struct"
32 | }
33 | columns {
34 | name = "matched-at"
35 | type = "string"
36 | }
37 | columns {
38 | name = "matched-line"
39 | type = "string"
40 | }
41 | columns {
42 | name = "matcher-status"
43 | type = "string"
44 | }
45 | columns {
46 | name = "template-id"
47 | type = "string"
48 | }
49 | columns {
50 | name = "timestamp"
51 | type = "string"
52 | }
53 | columns {
54 | name = "type"
55 | type = "string"
56 | }
57 | columns {
58 | name = "matcher-name"
59 | type = "string"
60 | }
61 | columns {
62 | name = "curl-command"
63 | type = "string"
64 | }
65 | columns {
66 | name = "ip"
67 | type = "string"
68 | }
69 | columns {
70 | name = "template-url"
71 | type = "string"
72 | }
73 | columns {
74 | name = "template"
75 | type = "string"
76 | }
77 |
78 | ser_de_info {
79 | serialization_library = "org.openx.data.jsonserde.JsonSerDe"
80 | parameters = {
81 | "serialization.format" = "1"
82 | "serialization.null.format" = "null"
83 | }
84 | }
85 | }
86 |
87 | partition_keys {
88 | name = "dt"
89 | type = "string"
90 | }
91 |
92 | parameters = {
93 | "EXTERNAL" = "TRUE"
94 | "compressionType" = "none"
95 | "classification" = "json"
96 | "typeOfData" = "file"
97 | "projection.enabled" = "true"
98 | "projection.dt.type" = "date"
99 | "projection.dt.format" = "yyyy/MM/dd/HH"
100 | "projection.dt.interval" = "1"
101 | "projection.dt.interval.unit" = "HOURS"
102 | "projection.dt.range" = "NOW-3MONTHS,NOW"
103 | "storage.location.template" = "s3://${aws_s3_bucket.bucket.id}/findings/$${dt}"
104 | }
105 | }
--------------------------------------------------------------------------------
/main.tf:
--------------------------------------------------------------------------------
1 | # tfsec:ignore:aws-lambda-enable-tracing
2 | resource "aws_lambda_function" "function" {
3 | depends_on = [aws_lambda_layer_version.layer, aws_lambda_layer_version.templates_layer, aws_lambda_layer_version.configs_layer]
4 | filename = "lambda.zip"
5 | function_name = "${var.project_name}-function"
6 |
7 | role = aws_iam_role.lambda_role.arn
8 | layers = [aws_lambda_layer_version.layer.arn, aws_lambda_layer_version.templates_layer.arn, aws_lambda_layer_version.configs_layer.arn]
9 |
10 | handler = "main"
11 | runtime = "go1.x"
12 | timeout = var.nuclei_timeout
13 | memory_size = var.memory_size
14 |
15 | source_code_hash = data.archive_file.zip.output_base64sha256
16 |
17 | environment {
18 | variables = {
19 | "BUCKET_NAME" = aws_s3_bucket.bucket.id
20 | }
21 | }
22 |
23 | tags = var.tags
24 | }
25 |
26 | resource "aws_lambda_alias" "alias" {
27 | name = var.project_name
28 | description = "Nuclei scanner lambda function"
29 | function_name = aws_lambda_function.function.arn
30 | function_version = "$LATEST"
31 | }
32 |
33 | # Layer to run nuclei in lambda
34 | resource "aws_lambda_layer_version" "layer" {
35 | depends_on = [aws_s3_object.upload_nuclei]
36 | layer_name = "${var.project_name}-nuclei-layer"
37 | s3_bucket = aws_s3_bucket.bucket.id
38 | s3_key = "nuclei.zip"
39 | compatible_runtimes = ["go1.x"]
40 | }
41 |
42 | # Layer to have nuclei templates
43 | resource "aws_lambda_layer_version" "templates_layer" {
44 | depends_on = [aws_s3_object.upload_templates]
45 | layer_name = "${var.project_name}-nuclei-templates-layer"
46 | s3_bucket = aws_s3_bucket.bucket.id
47 | s3_key = "nuclei-templates.zip"
48 | compatible_runtimes = ["go1.x"]
49 | }
50 |
51 | # Layer for nuclei configs
52 | resource "aws_lambda_layer_version" "configs_layer" {
53 | depends_on = [aws_s3_object.upload_config]
54 | layer_name = "${var.project_name}-nuclei-config-layer"
55 | s3_bucket = aws_s3_bucket.bucket.id
56 | s3_key = "nuclei-configs.zip"
57 | compatible_runtimes = ["go1.x"]
58 | }
59 |
60 | # tfsec:ignore:aws-cloudwatch-log-group-customer-key
61 | resource "aws_cloudwatch_log_group" "log_group" {
62 | name = "/aws/lambda/${var.project_name}-function"
63 |
64 | retention_in_days = 90
65 |
66 | tags = var.tags
67 | }
68 |
69 | ###
70 | # IAM
71 | ###
72 | resource "aws_iam_role" "lambda_role" {
73 | name = "${var.project_name}-role"
74 |
75 | assume_role_policy = data.aws_iam_policy_document.trust.json
76 |
77 | tags = var.tags
78 | }
79 |
80 | data "aws_iam_policy_document" "trust" {
81 | statement {
82 | actions = ["sts:AssumeRole"]
83 |
84 | principals {
85 | type = "Service"
86 | identifiers = ["lambda.amazonaws.com"]
87 | }
88 | }
89 | }
90 |
91 | # attach policy to role
92 | resource "aws_iam_role_policy_attachment" "policy" {
93 | role = aws_iam_role.lambda_role.name
94 | policy_arn = aws_iam_policy.policy.arn
95 | }
96 |
97 | # IAM policy for lambda
98 | resource "aws_iam_policy" "policy" {
99 | name = "${var.project_name}-policy"
100 | description = "Policy for lambda"
101 |
102 | policy = data.aws_iam_policy_document.policy.json
103 | }
104 |
105 | # tfsec:ignore:aws-iam-no-policy-wildcards
106 | data "aws_iam_policy_document" "policy" {
107 | statement {
108 | sid = "AllowCloudWatchLogs"
109 | actions = [
110 | "logs:CreateLogGroup",
111 | "logs:CreateLogStream",
112 | "logs:PutLogEvents"
113 | ]
114 |
115 | resources = ["arn:aws:logs:*:*:*"]
116 | }
117 |
118 | statement {
119 | sid = "AllowS3Upload"
120 | effect = "Allow"
121 | actions = [
122 | "s3:PutObject"
123 | ]
124 | resources = [
125 | "arn:aws:s3:::${aws_s3_bucket.bucket.id}/findings/*"
126 | ]
127 | }
128 | }
--------------------------------------------------------------------------------
/output.tf:
--------------------------------------------------------------------------------
1 | output "function_name" {
2 | value = aws_lambda_function.function.arn
3 | }
4 |
5 | output "dynamodb_state_table" {
6 | value = aws_dynamodb_table.scan_state_table.arn
7 | }
--------------------------------------------------------------------------------
/src/go.mod:
--------------------------------------------------------------------------------
1 | module main
2 |
3 | go 1.19
4 |
5 | require (
6 | github.com/aws/aws-lambda-go v1.36.1
7 | github.com/aws/aws-sdk-go v1.44.170
8 | github.com/google/uuid v1.3.0
9 | )
10 |
11 | require github.com/jmespath/go-jmespath v0.4.0 // indirect
12 |
--------------------------------------------------------------------------------
/src/go.sum:
--------------------------------------------------------------------------------
1 | github.com/aws/aws-lambda-go v1.36.1 h1:CJxGkL9uKszIASRDxzcOcLX6juzTLoTKtCIgUGcTjTU=
2 | github.com/aws/aws-lambda-go v1.36.1/go.mod h1:jwFe2KmMsHmffA1X2R09hH6lFzJQxzI8qK17ewzbQMM=
3 | github.com/aws/aws-sdk-go v1.44.170 h1:9dGTB7XRHzDB8+1NOIg/QS/DhCWgIM/iMC1dlZv42CE=
4 | github.com/aws/aws-sdk-go v1.44.170/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
5 | github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
6 | github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
7 | github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
8 | github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
9 | github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
10 | github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
11 | github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
12 | github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
13 | github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
14 | github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
15 | github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
16 | github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
17 | github.com/stretchr/testify v1.7.2 h1:4jaiDzPyXQvSd7D0EjG45355tLlV3VOECpq10pLC+8s=
18 | github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
19 | golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
20 | golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
21 | golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
22 | golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
23 | golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
24 | golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
25 | golang.org/x/net v0.1.0 h1:hZ/3BUoy5aId7sCpA/Tc5lt8DkFgdVS2onTpJsZ/fl0=
26 | golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
27 | golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
28 | golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
29 | golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
30 | golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
31 | golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
32 | golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
33 | golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
34 | golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
35 | golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
36 | golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
37 | golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
38 | golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
39 | golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
40 | golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
41 | golang.org/x/text v0.4.0 h1:BrVqGRd7+k1DiOgtnFvAkoQEWQvBc25ouMJM6429SFg=
42 | golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
43 | golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
44 | golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
45 | golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
46 | golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
47 | gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
48 | gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10=
49 | gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
50 | gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
51 |
--------------------------------------------------------------------------------
/src/main.go:
--------------------------------------------------------------------------------
1 | package main
2 |
3 | import (
4 | "bufio"
5 | "context"
6 | "encoding/base64"
7 | "encoding/json"
8 | "fmt"
9 | "os"
10 | "os/exec"
11 | "time"
12 |
13 | "github.com/aws/aws-lambda-go/lambda"
14 | "github.com/aws/aws-sdk-go/aws"
15 | "github.com/aws/aws-sdk-go/aws/session"
16 | "github.com/aws/aws-sdk-go/service/s3/s3manager"
17 | "github.com/google/uuid"
18 | )
19 |
20 | // Event is the input event for the Lambda function.
21 | type Event struct {
22 | Targets []string `json:"targets"`
23 | Args []string `json:"args"`
24 | Output string `json:"output"`
25 | }
26 |
27 | // Response is the output response for the Lambda function.
28 | type Response struct {
29 | Output string `json:"output"`
30 | Error string `json:"error"`
31 | }
32 |
33 | // Variables for the nuclei binary, filesystem location, and temporary files
34 | var (
35 | nucleiBinary = "/opt/nuclei"
36 | fileSystem = "/tmp/"
37 | targetsFile = fileSystem + "targets.txt"
38 | scanOutput = fileSystem + "output.json"
39 | )
40 |
41 | func handler(ctx context.Context, event Event) (Response, error) {
42 | // Set the $HOME environment so nuclei can write inside of lambda
43 | os.Setenv("HOME", fileSystem)
44 |
45 | // Check that Targets, Args, and Output are all present in the event
46 | if len(event.Targets) == 0 || len(event.Args) == 0 || event.Output == "" {
47 | return Response{
48 | Error: "Nuclei requires targets, args, and an output type to run. Please specify them in the event.",
49 | }, nil
50 | }
51 |
52 | // Check to see if it is a single target or multiple
53 | if len(event.Targets) == 1 {
54 | // If it's a single target it prepends -u target to the args
55 | event.Args = append([]string{"-u", event.Targets[0]}, event.Args...)
56 | } else {
57 | // If it's a list of targets write them to a file and prepends -l targets.txt to the args
58 | targetsFile, err := writeTargets(event.Targets)
59 | if err != nil {
60 | return Response{
61 | Error: err.Error(),
62 | }, nil
63 | }
64 | event.Args = append([]string{"-l", targetsFile}, event.Args...)
65 | }
66 |
67 | // If the output is json or s3 then output as json
68 | if event.Output == "json" || event.Output == "s3" {
69 | event.Args = append(event.Args, "-json", "-o", scanOutput, "-silent")
70 | }
71 |
72 | // Run the nuclei binary with the command and args
73 | output, err := runNuclei(event.Args)
74 | base64output := base64.StdEncoding.EncodeToString([]byte(output))
75 | if err != nil {
76 | // Return output as base64 to display in the console
77 | return Response{
78 | Output: string(base64output),
79 | Error: err.Error(),
80 | }, nil
81 | }
82 |
83 | // Send the scan results to the sink
84 | if event.Output == "json" {
85 | findings, err := jsonOutputFindings(scanOutput)
86 | if err != nil { return Response{Output: output, Error: err.Error()}, nil } // surface the read error before marshalling
87 | jsonFindings, err := json.Marshal(findings) // convert the findings to json
88 | if err != nil {
89 | return Response{
90 | Output: output,
91 | Error: err.Error(),
92 | }, nil
93 | }
94 | return Response{
95 | Output: string(jsonFindings),
96 | }, nil
97 | } else if event.Output == "cmd" {
98 | return Response{
99 | Output: string(base64output),
100 | }, nil
101 | } else if event.Output == "s3" {
102 | // Read the findings as []interface{}
103 | findings, err := jsonOutputFindings(scanOutput)
104 | if err != nil {
105 | return Response{
106 | Output: output,
107 | Error: err.Error(),
108 | }, nil
109 | }
110 |
111 | if len(findings) == 0 {
112 | return Response{
113 | Output: "No findings, better luck next time!",
114 | }, nil
115 | }
116 |
117 | // Write the findings to a file and upload to s3
118 | s3Key, err := writeAndUploadFindings(findings)
119 | if err != nil {
120 | return Response{
121 | Output: output,
122 | Error: err.Error(),
123 | }, nil
124 | }
125 |
126 | if s3Key == "No findings" {
127 | return Response{
128 | Output: "No findings, better luck next time!",
129 | }, nil
130 | }
131 |
132 | // Return the s3 key
133 | return Response{
134 | Output: s3Key,
135 | }, nil
136 | } else {
137 | return Response{
138 | Output: output,
139 | Error: "Output type not supported. Please specify json, cmd, or s3.",
140 | }, nil
141 | }
142 | }
143 |
144 | // Run Nuclei with the command and args
145 | func runNuclei(args []string) (string, error) {
146 | // Run the nuclei binary with the command and args
147 | cmd := exec.Command(nucleiBinary, args...)
148 | output, err := cmd.CombinedOutput()
149 | if err != nil {
150 | return string(output), err
151 | }
152 | return string(output), nil
153 | }
154 |
155 | // Write targets to a file on disk and return filename
156 | func writeTargets(targets []string) (string, error) {
157 | // Check if the targets file exists, if it does delete it
158 | if _, err := os.Stat(targetsFile); err == nil {
159 | os.Remove(targetsFile)
160 | }
161 |
162 | // Create a file
163 | file, err := os.Create(targetsFile)
164 | if err != nil {
165 | return "", err
166 | }
167 | defer file.Close()
168 |
169 | // Write the list to the file.
170 | for _, target := range targets {
171 | _, err := file.WriteString(target + "\n")
172 | if err != nil {
173 | return "", err
174 | }
175 | }
176 |
177 | // Return the filename
178 | return targetsFile, nil
179 | }
180 |
181 | // jsonOutputFindings reads the scan output file line by line and returns the findings
182 | func jsonOutputFindings(scanOutputFile string) ([]interface{}, error) {
183 | file, err := os.Open(scanOutputFile)
184 | if err != nil {
185 | return nil, err
186 | }
187 | defer file.Close()
188 |
189 | // Create a scanner to read the file line by line
190 | scanner := bufio.NewScanner(file)
191 |
192 | // Iterate through the file and append the findings to the findings array
193 | var findings []interface{}
194 | for scanner.Scan() {
195 | var data interface{}
196 | if err := json.Unmarshal(scanner.Bytes(), &data); err != nil {
197 | return nil, err
198 | }
199 | findings = append(findings, data)
200 | }
201 |
202 | // Check for errors while reading the file
203 | if err := scanner.Err(); err != nil {
204 | return nil, err
205 | }
206 |
207 | // Return the findings
208 | return findings, nil
209 | }
210 |
211 | // Takes in []interface{}, iterates through it, writes it to a file based on the date, and uploads it to S3
212 | func writeAndUploadFindings(findings []interface{}) (string, error) {
213 | // Bucket and region
214 | region := os.Getenv("AWS_REGION")
215 | bucket := os.Getenv("BUCKET_NAME")
216 | // Iterate through the interface and convert to a slice of strings for writing to a file
217 | var s3Findings []string
218 | for _, finding := range findings {
219 | jsonFinding, err := json.Marshal(finding)
220 | if err != nil {
221 | return "Failed to marshal finding to json", err
222 | }
223 | s3Findings = append(s3Findings, string(jsonFinding))
224 | }
225 |
226 | if len(s3Findings) == 0 {
227 | return "No findings", nil
228 | }
229 |
230 | // Two variables for filename, must be unique on execution, and s3 key partitioned with findings/year/month/day/hour/nuclei-findings-.json
231 | t := time.Now()
232 | uuid := uuid.New().String()
233 | s3Key := fmt.Sprintf("findings/%d/%02d/%02d/%02d/nuclei-findings-%s.json", t.Year(), t.Month(), t.Day(), t.Hour(), uuid)
234 | filename := fmt.Sprintf("nuclei-findings-%s.json", uuid)
235 |
236 | // Write the findings to a file
237 | file, err := os.Create(fileSystem + filename)
238 | if err != nil {
239 | return "Failed to write to filesystem", err
240 | }
241 | defer file.Close()
242 |
243 | // Write the list to the file.
244 | for _, finding := range s3Findings {
245 | _, err := file.WriteString(finding + "\n")
246 | if err != nil {
247 | return "Failed to write json to file", err
248 | }
249 | }
250 |
251 | // Upload the file to S3
252 | sess, err := session.NewSession(&aws.Config{
253 | Region: aws.String(region)},
254 | )
255 | if err != nil {
256 | return "Failed to create session", err
257 | }
258 |
259 | // Create an uploader with the session and default options
260 | uploader := s3manager.NewUploader(sess)
261 |
262 | findingsFile, err := os.Open(fileSystem + filename)
263 | if err != nil {
264 | return "Failed to open file", err
265 | }
266 | defer findingsFile.Close()
267 | // Upload the file to S3.
268 | _, err = uploader.Upload(&s3manager.UploadInput{
269 | Bucket: aws.String(bucket),
270 | Key: aws.String(s3Key),
271 | Body: findingsFile,
272 | })
273 | if err != nil {
274 | return "Failed to upload file", err
275 | }
276 |
277 | // S3 path for the file
278 | s3uri := fmt.Sprintf("s3://%s/%s", bucket, s3Key)
279 |
280 | // Return the s3 uri after uploading
281 | return s3uri, nil
282 | }
283 |
284 | // Contains checks to see if a string is in a slice of strings
285 | func contains(elems []string, v string) bool {
286 | for _, s := range elems {
287 | if v == s {
288 | return true
289 | }
290 | }
291 | return false
292 | }
293 |
294 | func main() {
295 | lambda.Start(handler)
296 | }
297 |
--------------------------------------------------------------------------------
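The handler above dispatches on the event's `Targets`, `Args`, and `Output` fields. As a sketch only (not part of the repository — the `Event` struct's JSON tags sit earlier in `src/main.go` and are assumed here to be the exported field names), a sample invocation payload can be built like this:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Event mirrors the Lambda input consumed by the handler.
// NOTE: the JSON tags are an assumption; check the struct
// definition in src/main.go for the authoritative names.
type Event struct {
	Targets []string `json:"Targets"`
	Args    []string `json:"Args"`
	Output  string   `json:"Output"`
}

// buildExamplePayload renders a sample invocation event: two targets,
// a hypothetical template filter in Args, and "s3" output so findings
// land in the configured bucket.
func buildExamplePayload() string {
	event := Event{
		Targets: []string{"https://example.com", "https://example.org"},
		Args:    []string{"-t", "dns/"},
		Output:  "s3",
	}
	payload, err := json.MarshalIndent(event, "", "  ")
	if err != nil {
		return ""
	}
	return string(payload)
}

func main() {
	fmt.Println(buildExamplePayload())
}
```

The resulting JSON can then be passed to the function with `aws lambda invoke --function-name <name> --payload file://event.json out.json`.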
/static/infrastructure.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DevSecOpsDocs/terraform-nuclear-pond/28faf3bd100ad44605c369be6f3acb157d36ca5a/static/infrastructure.png
--------------------------------------------------------------------------------
/variables.tf:
--------------------------------------------------------------------------------
1 | variable "project_name" {
2 | description = "Name of the project to create and must be unique as S3 bucket names are global"
3 | }
4 |
5 | # Nuclei binary configuration
6 | variable "nuclei_version" {
7 | description = "Nuclei version to use"
8 | default = "2.8.7"
9 | }
10 |
11 | variable "nuclei_arch" {
12 | description = "Nuclei architecture to use"
13 | default = "linux_amd64"
14 | }
15 |
16 | # Private Templates
17 | variable "github_repository" {
18 | description = "Github repository to use for templates"
19 | default = "nuclei-templates"
20 | }
21 |
22 | variable "github_owner" {
23 | description = "Github owner to use for templates"
24 | default = "projectdiscovery"
25 | }
26 |
27 | variable "release_tag" {
28 | description = "Github release tag to use for templates"
29 | default = "v9.3.4"
30 | }
31 |
32 | variable "github_token" {
33 | description = "Github token to use for private templates, leave empty if you don't need private templates"
34 | default = ""
35 | sensitive = true
36 | }
37 |
38 | variable "nuclei_timeout" {
39 | type = number
40 | description = "Lambda function timeout"
41 | default = 900
42 | }
43 |
44 | variable "memory_size" {
45 | type = number
46 | default = 512
47 | }
48 |
49 | variable "tags" {
50 | type = map(string)
51 | default = {
52 | "Name" = "nuclei-scanner"
53 | }
54 | }
--------------------------------------------------------------------------------
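For reference, a hypothetical root module consuming the variables above might look like the following; the `source` path and `project_name` are placeholders, and any variable not set falls back to the defaults declared in `variables.tf`:

```hcl
module "nuclei_scanner" {
  # Placeholder source; pin to a tagged ref of this repository in practice
  source = "github.com/DevSecOpsDocs/terraform-nuclear-pond"

  # Required; used in the S3 bucket name, so it must be globally unique
  project_name = "example-nuclei-scanner"

  # Optional overrides (defaults shown in variables.tf)
  nuclei_version = "2.8.7"
  memory_size    = 512

  tags = {
    Name = "nuclei-scanner"
  }
}
```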
/versions.tf:
--------------------------------------------------------------------------------
1 | terraform {
2 | required_version = ">= 1.0"
3 | required_providers {
4 | aws = {
5 | source = "hashicorp/aws"
6 | version = "4.50.0"
7 | }
8 | null = {
9 | source = "hashicorp/null"
10 | version = "3.2.1"
11 | }
12 | archive = {
13 | source = "hashicorp/archive"
14 | version = "2.2.0"
15 | }
16 | github = {
17 | source = "hashicorp/github"
18 | version = "5.14.0"
19 | }
20 | }
21 | }
--------------------------------------------------------------------------------