├── .circleci
│   └── config.yml
├── .gitignore
├── README.md
└── terraform
    ├── aws.tf
    ├── cloudwatch.tf
    ├── elasticsearch.tf
    ├── iam.tf
    ├── kinesis.tf
    ├── kms.tf
    ├── outputs.tf
    ├── policies
    │   ├── ec2_assume_role.json
    │   ├── elasticsearch_policy.json
    │   └── firehose_assume_role.json
    ├── s3.tf
    ├── test
    │   ├── aws.tf
    │   ├── backend.tfvars.example
    │   ├── ec2-test.tf
    │   ├── files
    │   │   └── ec2-test.tpl
    │   ├── network.tf
    │   ├── outputs.tf
    │   ├── terraform.tfvars.example
    │   ├── variables.tf
    │   └── vpc-test.tf
    └── variables.tf
--------------------------------------------------------------------------------
/.circleci/config.yml:
--------------------------------------------------------------------------------
1 | version: 2
2 | jobs:
3 |   terraform:
4 |     docker:
5 |       - image: hashicorp/terraform
6 |     environment:
7 |       AWS_DEFAULT_REGION: us-east-1
8 |     steps:
9 |       - checkout
10 |       - run:
11 |           name: EKK Stack - Set up Terraform
12 |           command: cd terraform/test && terraform init -backend=false
13 |       - run:
14 |           name: EKK Stack - Validate Terraform
15 |           command: cd terraform/test && terraform validate -check-variables=false
16 | 
17 | workflows:
18 |   version: 2
19 | 
20 |   validate:
21 |     jobs:
22 |       - terraform
23 | 
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | ## Terraform ##
2 | #
3 | # Ignore private variables
4 | *.tfvars
5 | #
6 | # Compiled files
7 | *.tfstate*
8 | #
9 | # Module directory
10 | .terraform/
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # devsecops-ekk-stack
2 | 
3 | Terraform that builds an EKK (Amazon Elasticsearch, Kinesis, and Kibana) logging stack.
4 | 
5 | This stack is based on [this CloudFormation example](https://us-west-2.console.aws.amazon.com/cloudformation/designer/home?region=us-west-2&templateUrl=https://s3.amazonaws.com/scriptdepot/es.template).
6 | 
7 | The stack also creates a small EC2 instance (defined in ec2-test.tf) that will be configured with a Kinesis agent to test writing into the stream. If you do not wish to deploy this instance, move the file out of the terraform directory or change its extension.
8 | 
9 | ## Usage
10 | 
11 | This stack is meant to be consumed as a module in your existing Terraform stack. You can consume it with code similar to this:
12 | 
13 | ```hcl
14 | module "ekk_stack" {
15 |   source                  = "github.com/GSA/devsecops-ekk-stack//terraform"
16 |   s3_logging_bucket_name  = "${var.s3_logging_bucket_name}"
17 |   kinesis_delivery_stream = "${var.kinesis_delivery_stream}"
18 |   ekk_kinesis_stream_name = "${var.ekk_kinesis_stream_name}"
19 | }
20 | ```
21 | 
22 | ...where the variables referenced above are defined in your terraform.tfvars file. "var.s3_logging_bucket_name" should be set to the name of a bucket (which the stack will create) that will hold copies of the Kinesis Firehose logs. "var.kinesis_delivery_stream" should be set to the name of the Firehose delivery stream you wish to use; the EKK stack will create the delivery stream with the name you provide. "var.ekk_kinesis_stream_name" sets the name of the Kinesis stream itself.
23 | 
24 | The Kinesis stream will send to Elasticsearch and S3.
25 | 
26 | ## Test Deployment
27 | 
28 | Use these steps to deploy the test.
29 | 
30 | 1. Create an S3 bucket for the terraform state.
31 | 1. Run the following commands:
32 | 
33 |    ````sh
34 |    cd terraform/test
35 |    cp backend.tfvars.example backend.tfvars
36 |    cp terraform.tfvars.example terraform.tfvars
37 |    ````
38 | 
39 | 1. Fill out backend.tfvars with the name of the S3 bucket you just created.
40 | 1. Fill out terraform.tfvars with the required values.
41 | 1. Run the init:
42 | 
43 |    ````sh
44 |    terraform init -backend-config=backend.tfvars
45 |    ````
46 | 
47 | 1. Run a plan to make sure everything is ready to go:
48 | 
49 |    ````sh
50 |    terraform plan
51 |    ````
52 | 
53 | 1. If there are no issues, apply the stack:
54 | 
55 |    ````sh
56 |    terraform apply
57 |    ````
58 | Following the steps above will emulate the intended behavior of the stack. You must execute it from the test directory just below the terraform directory. The test consumes the stack as a module and deploys it, then sets up an EC2 instance that installs the aws-kinesis-agent and configures it to stream to the Kinesis Firehose delivery stream.
59 | 
60 | The EC2 instance also configures itself with a cron job that performs a curl against its local apache2 daemon 5900 times every minute. This generates logs for the Kinesis agent to capture. To verify that it is working properly, you can log in to the EC2 instance and tail the aws-kinesis-agent log (/var/log/aws-kinesis/aws-kinesis-agent.log), or check the CloudWatch metrics for the Firehose delivery stream in the web console.
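As a reference for the Usage section above, a minimal terraform.tfvars might look like the following sketch. The bucket name is a hypothetical placeholder (S3 bucket names must be globally unique); the stream names shown are the defaults used by the test configuration in this repository:

```hcl
# terraform.tfvars (example values; bucket name is a placeholder)
s3_logging_bucket_name  = "my-unique-ekk-logging-bucket"
kinesis_delivery_stream = "DevSecOpsKinesisDeliveryStream"
ekk_kinesis_stream_name = "DevSecOpsKinesisStream"
```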
61 | 
--------------------------------------------------------------------------------
/terraform/aws.tf:
--------------------------------------------------------------------------------
1 | provider "aws" {
2 |   version = "~> 1.8"
3 | }
4 | 
--------------------------------------------------------------------------------
/terraform/cloudwatch.tf:
--------------------------------------------------------------------------------
1 | resource "aws_cloudwatch_log_group" "es_log_group" {
2 |   name              = "${var.es_log_group_name}"
3 |   retention_in_days = "${var.es_log_retention_in_days}"
4 | }
5 | 
6 | resource "aws_cloudwatch_log_group" "s3_log_group" {
7 |   name              = "${var.s3_log_group_name}"
8 |   retention_in_days = "${var.s3_log_retention_in_days}"
9 | }
10 | 
11 | resource "aws_cloudwatch_log_stream" "es_log_stream" {
12 |   name           = "${var.es_log_stream_name}"
13 |   log_group_name = "${aws_cloudwatch_log_group.es_log_group.name}"
14 | }
15 | 
16 | resource "aws_cloudwatch_log_stream" "s3_log_stream" {
17 |   name           = "${var.s3_log_stream_name}"
18 |   log_group_name = "${aws_cloudwatch_log_group.s3_log_group.name}"
19 | }
--------------------------------------------------------------------------------
/terraform/elasticsearch.tf:
--------------------------------------------------------------------------------
1 | resource "aws_elasticsearch_domain" "elasticsearch" {
2 |   domain_name           = "${var.es_domain_name}"
3 |   elasticsearch_version = "${var.es_version}"
4 | 
5 |   cluster_config {
6 |     dedicated_master_enabled = "${var.es_dedicated_master_enabled}"
7 |     instance_type            = "${var.es_instance_type}"
8 |     instance_count           = "${var.es_instance_count}"
9 |     zone_awareness_enabled   = "${var.es_zone_awareness_enabled}"
10 |     dedicated_master_type    = "${var.es_dedicated_master_instance_type}"
11 |     dedicated_master_count   = "${var.es_dedicated_master_count}"
12 |   }
13 | 
14 |   advanced_options {
15 |     "rest.action.multi.allow_explicit_index" = "${var.es_advanced_allow_explicit_index}"
16 |   }
17 | 
18 |   ebs_options {
19 |     ebs_enabled = "${var.es_ebs_enabled}"
20 |     iops        = "${var.es_ebs_iops}"
21 |     volume_size = "${var.es_ebs_volume_size}"
22 |     volume_type = "${var.es_ebs_volume_type}"
23 |   }
24 | 
25 |   encrypt_at_rest {
26 |     enabled    = "true"
27 |     kms_key_id = "${var.es_kms_key_id != "" ? var.es_kms_key_id : aws_kms_key.es_kms_key.key_id}"
28 |   }
29 | 
30 |   snapshot_options {
31 |     automated_snapshot_start_hour = "${var.es_snapshot_start_hour}"
32 |   }
33 | 
34 |   access_policies = </etc/aws-kinesis/agent.json
9 | {
10 |   "cloudwatch.emitMetrics": true,
11 |   "firehose.endpoint": "firehose.us-east-1.amazonaws.com",
12 |   "flows": [
13 |     {
14 |       "filePattern": "/var/log/httpd/access_log",
15 |       "kinesisStream": "${kinesisstream}",
16 |       "dataProcessingOptions": [
17 |         {
18 |           "optionName": "LOGTOJSON",
19 |           "logFormat": "COMMONAPACHELOG"
20 |         }
21 |       ]
22 |     }
23 |   ]
24 | }
25 | EOF
26 | 
27 | cat <<EOF >/root/test.sh
28 | #!/bin/bash
29 | # Test connectivity to localhost port
30 | for i in {1..5900}; do curl -s -w "%{time_total}\n" -o /dev/null http://localhost/; done
31 | EOF
32 | 
33 | # Set cron to run the tests
34 | chmod +x /root/test.sh
35 | echo "* * * * * /root/test.sh" >> /root/mycron
36 | crontab /root/mycron
37 | rm /root/mycron
38 | 
39 | # Set permissions on httpd directory
40 | service httpd start
41 | chown -R aws-kinesis-agent-user:root /var/log/httpd
42 | chmod -R 764 /var/log/httpd
43 | 
44 | # Make sure kinesis agent is using the correct agent.json above
45 | service aws-kinesis-agent start
--------------------------------------------------------------------------------
/terraform/test/network.tf:
--------------------------------------------------------------------------------
1 | resource "aws_security_group" "main" {
2 |   vpc_id = "${module.test_vpc.vpc_id}"
3 | 
4 |   # SSH access from anywhere
5 |   ingress {
6 |     from_port   = 22
7 |     to_port     = 22
8 |     protocol    = "tcp"
9 |     cidr_blocks = ["0.0.0.0/0"]
10 |   }
11 |   ingress {
12 |     from_port   = 80
13 |     to_port     = 80
14 |     protocol    = "tcp"
15 |     cidr_blocks = ["0.0.0.0/0"]
16 |   }
17 |   egress {
18 |     from_port   = 0
19 |     to_port     = 0
20 |     protocol    = "-1"
21 |     cidr_blocks = ["0.0.0.0/0"]
22 |   }
23 | }
--------------------------------------------------------------------------------
/terraform/test/outputs.tf:
--------------------------------------------------------------------------------
1 | output "test_instance_ip" {
2 |   value = "${aws_instance.stream_tester.public_ip}"
3 | }
--------------------------------------------------------------------------------
/terraform/test/terraform.tfvars.example:
--------------------------------------------------------------------------------
1 | # List your logging bucket name here. Make sure it doesn't have a naming collision with someone else's bucket.
2 | s3_logging_bucket_name = ""
3 | ec_test_instance_key_name = ""
--------------------------------------------------------------------------------
/terraform/test/variables.tf:
--------------------------------------------------------------------------------
1 | variable "s3_logging_bucket_name" {
2 |   type = "string"
3 | }
4 | variable "ec_test_instance_key_name" {
5 |   type = "string"
6 | }
7 | variable "kinesis_delivery_stream" {
8 |   default = "DevSecOpsKinesisDeliveryStream"
9 | }
10 | variable "ekk_kinesis_stream_name" {
11 |   default = "DevSecOpsKinesisStream"
12 | }
13 | variable "es_instance_type" {
14 |   default = "t2.micro.elasticsearch"
15 | }
16 | variable "es_dedicated_master_instance_type" {
17 |   default = "t2.micro.elasticsearch"
18 | }
--------------------------------------------------------------------------------
/terraform/test/vpc-test.tf:
--------------------------------------------------------------------------------
1 | module "test_vpc" {
2 |   source = "github.com/terraform-aws-modules/terraform-aws-vpc"
3 | 
4 |   name                 = "ec2-test"
5 |   cidr                 = "10.0.0.0/16"
6 |   public_subnets       = ["10.0.1.0/24"]
7 |   enable_nat_gateway   = "false"
8 |   enable_dns_hostnames = "true"
9 |   enable_dns_support   = "true"
10 |   azs                  = ["us-east-1c",
                          "us-east-1d"]
11 | }
--------------------------------------------------------------------------------
/terraform/variables.tf:
--------------------------------------------------------------------------------
1 | variable "s3_logging_bucket_name" {
2 |   type = "string"
3 | }
4 | 
5 | variable "kinesis_delivery_stream" {
6 |   type = "string"
7 | }
8 | 
9 | variable "ekk_kinesis_stream_name" {
10 |   type = "string"
11 | }
12 | 
13 | variable "s3_kms_key_arn" {
14 |   type        = "string"
15 |   description = "KMS Key ARN used to encrypt data within S3 bucket. The key must already exist within the account."
16 |   default     = ""
17 | }
18 | 
19 | variable "ekk_kinesis_stream_kms_key_id" {
20 |   description = "This is the GUID of a KMS key, not the ARN!"
21 |   default     = ""
22 | }
23 | 
24 | variable "ekk_kinesis_stream_kms_key_arn" {
25 |   description = "This is the ARN of a KMS key, not the GUID!"
26 |   default     = ""
27 | }
28 | 
29 | variable "ekk_kinesis_stream_shard_count" {
30 |   default = "1"
31 | }
32 | 
33 | variable "ekk_kinesis_stream_retention_period" {
34 |   default = "24"
35 | }
36 | 
37 | variable "ekk_kinesis_stream_shard_metrics" {
38 |   default = ["IncomingBytes", "OutgoingBytes"]
39 | }
40 | 
41 | variable "aws_region" {
42 |   default = "us-east-1"
43 | }
44 | 
45 | variable "es_domain_name" {
46 |   default = "devsecops-ekk-stack"
47 | }
48 | 
49 | variable "es_version" {
50 |   default = "5.5"
51 | }
52 | 
53 | variable "es_instance_type" {
54 |   default = "m4.large.elasticsearch"
55 | }
56 | 
57 | variable "es_instance_count" {
58 |   default = "2"
59 | }
60 | 
61 | variable "es_dedicated_master_instance_type" {
62 |   default = "m4.large.elasticsearch"
63 | }
64 | 
65 | variable "es_kms_key_id" {
66 |   type        = "string"
67 |   description = "KMS Key ID (NOT the ARN!) used to encrypt data within elasticsearch instances. The key must already exist within the account."
68 |   default     = ""
69 | }
70 | 
71 | variable "es_dedicated_master_count" {
72 |   default = "2"
73 | }
74 | 
75 | variable "ekk_role_name" {
76 |   default = "EKKRole"
77 | }
78 | 
79 | variable "ekk_role_policy_name" {
80 |   default = "EKKRolePolicy"
81 | }
82 | 
83 | variable "s3_delivery_role_name" {
84 |   default = "EKKS3DeliveryRole"
85 | }
86 | 
87 | variable "s3_role_log_bucket_access_policy" {
88 |   default = "S3RoleBucketAccessPolicy"
89 | }
90 | 
91 | variable "es_delivery_role_name" {
92 |   default = "ESDeliveryRole"
93 | }
94 | 
95 | variable "es_log_group_name" {
96 |   default = "ElasticSearchDeliveryLogGroup"
97 | }
98 | 
99 | variable "es_log_retention_in_days" {
100 |   default = "7"
101 | }
102 | 
103 | variable "es_log_stream_name" {
104 |   default = "ElasticSearchDelivery"
105 | }
106 | 
107 | variable "s3_log_group_name" {
108 |   default = "S3DeliveryLogGroup"
109 | }
110 | 
111 | variable "s3_log_retention_in_days" {
112 |   default = "7"
113 | }
114 | 
115 | variable "s3_log_stream_name" {
116 |   default = "S3Delivery"
117 | }
118 | 
119 | variable "es_dedicated_master_enabled" {
120 |   default = "true"
121 | }
122 | 
123 | variable "es_zone_awareness_enabled" {
124 |   default = "true"
125 | }
126 | 
127 | variable "es_advanced_allow_explicit_index" {
128 |   default = "true"
129 | }
130 | 
131 | variable "es_ebs_enabled" {
132 |   default = "true"
133 | }
134 | 
135 | variable "es_ebs_iops" {
136 |   default = "0"
137 | }
138 | 
139 | variable "es_ebs_volume_size" {
140 |   default = "20"
141 | }
142 | 
143 | variable "es_ebs_volume_type" {
144 |   default = "gp2"
145 | }
146 | 
147 | variable "es_snapshot_start_hour" {
148 |   default = "0"
149 | }
150 | 
151 | variable "es_buffering_interval" {
152 |   default = "60"
153 | }
154 | 
155 | variable "es_buffering_size" {
156 |   default = "50"
157 | }
158 | 
159 | variable "es_cloudwatch_logging_enabled" {
160 |   default = "true"
161 | }
162 | 
163 | variable "es_index_name" {
164 |   default = "logmonitor"
165 | }
166 | 
167 | variable "es_type_name" {
168 |   default = "log"
169 | }
170 | 
171 | variable "es_index_rotation_period" {
172 |   default = "NoRotation"
173 | }
174 | 
175 | variable "es_retry_duration" {
176 |   default = "60"
177 | }
178 | 
179 | variable "es_s3_backup_mode" {
180 |   default = "AllDocuments"
181 | }
182 | 
183 | variable "s3_buffer_size" {
184 |   default = "10"
185 | }
186 | 
187 | variable "s3_buffer_interval" {
188 |   default = "300"
189 | }
190 | 
191 | variable "s3_compression_format" {
192 |   default = "UNCOMPRESSED"
193 | }
194 | 
195 | variable "s3_prefix" {
196 |   default = "firehose/"
197 | }
198 | 
199 | variable "s3_cloudwatch_logging_enabled" {
200 |   default = "true"
201 | }
202 | 
--------------------------------------------------------------------------------