├── outputs.tf ├── OSSMETADATA ├── .github ├── CODEOWNERS ├── PULL_REQUEST_TEMPLATE.md ├── ISSUE_TEMPLATE │ ├── question-discussion.md │ ├── security-vulnerability-report.md │ ├── feature_request.md │ └── bug_report.md ├── workflows │ ├── add-to-project-v2.yml │ ├── terraform-lint.yml │ ├── test-terraform-module.yml │ ├── stale.yml │ ├── release-drafter.yml │ └── validate-pr-title.yml ├── release.yml └── release-drafter.yml ├── .terraform-docs.yml ├── examples ├── README.md ├── cloudwatch-logs │ ├── variables.tf │ ├── main.tf │ └── README.md ├── rds-logs │ ├── mysql │ │ ├── variables.tf │ │ ├── README.md │ │ └── main.tf │ ├── postgresql │ │ ├── variables.tf │ │ ├── README.md │ │ └── main.tf │ └── README.md ├── cloudwatch-metrics │ ├── variables.tf │ ├── main.tf │ └── README.md ├── complete │ ├── variables.tf │ ├── main.tf │ └── README.md └── s3-logfile │ ├── variables.tf │ ├── main.tf │ └── README.md ├── docs ├── overview.png ├── rds-logs-overview.png ├── s3-logfile-overview.png ├── cloudwatch-logs-overview.png └── cloudwatch-metrics-overview.png ├── CONTRIBUTORS ├── .gitignore ├── SUPPORT.md ├── versions.tf ├── modules ├── rds-logs │ ├── versions.tf │ ├── README.md │ ├── main.tf │ ├── USAGE.md │ └── variables.tf ├── s3-logfile │ ├── versions.tf │ ├── main.tf │ ├── README.md │ ├── variables.tf │ └── USAGE.md ├── cloudwatch-logs │ ├── versions.tf │ ├── outputs.tf │ ├── iam.tf │ ├── main.tf │ ├── README.md │ ├── variables.tf │ └── USAGE.md ├── cloudwatch-metrics │ ├── versions.tf │ ├── outputs.tf │ ├── iam.tf │ ├── main.tf │ ├── README.md │ ├── variables.tf │ └── USAGE.md └── kinesis-firehose-honeycomb │ ├── versions.tf │ ├── outputs.tf │ ├── variables.tf │ ├── USAGE.md │ └── main.tf ├── Makefile ├── CODE_OF_CONDUCT.md ├── tests ├── versions.tf ├── cloudwatch-logs.tf ├── setup-honeycomb-aws-integrations.tf ├── kinesis-firehose.tf ├── rds.tf ├── cloudwatch-metrics.tf └── s3-logfile.tf ├── scripts ├── terraform-format.sh ├── docs.sh └── update-changelog.sh ├── LICENSE ├── NOTICE ├── RELEASING.md ├── SECURITY.md ├── main.tf ├── README.md ├── USAGE.md ├── variables.tf └── CHANGELOG.md /outputs.tf: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /OSSMETADATA: -------------------------------------------------------------------------------- 1 | osslifecycle=active 2 | -------------------------------------------------------------------------------- /.github/CODEOWNERS: -------------------------------------------------------------------------------- 1 | * @honeycombio/pipeline-team 2 | -------------------------------------------------------------------------------- /.terraform-docs.yml: -------------------------------------------------------------------------------- 1 | recursive: 2 | enabled: true 3 | path: modules 4 | -------------------------------------------------------------------------------- /examples/README.md: -------------------------------------------------------------------------------- 1 | # Examples 2 | 3 | Some examples of making use of the various modules. 
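Each example follows the same basic shape: a `module` block sourcing one of this repo's submodules, a sensitive `honeycomb_api_key` variable, and (for the Firehose-based integrations) an S3 bucket ARN for delivery failures. A minimal sketch of that pattern, assuming an illustrative log group and placeholder variable names:

```hcl
module "cloudwatch_logs" {
  source = "honeycombio/integrations/aws//modules/cloudwatch-logs"

  name                  = "cwlogs-example"
  cloudwatch_log_groups = ["/aws/lambda/my-function"] # illustrative log group

  honeycomb_api_key      = var.honeycomb_api_key
  honeycomb_dataset_name = "cloudwatch-logs"

  # Kinesis Firehose delivery failures land in this bucket for troubleshooting
  s3_failure_bucket_arn = var.s3_failure_bucket_arn # placeholder variable
}
```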
4 | -------------------------------------------------------------------------------- /docs/overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/honeycombio/terraform-aws-integrations/HEAD/docs/overview.png -------------------------------------------------------------------------------- /CONTRIBUTORS: -------------------------------------------------------------------------------- 1 | honeycombio/terraform-aws-integrations contributors: 2 | 3 | [currently empty - PRs welcome!] 4 | -------------------------------------------------------------------------------- /docs/rds-logs-overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/honeycombio/terraform-aws-integrations/HEAD/docs/rds-logs-overview.png -------------------------------------------------------------------------------- /docs/s3-logfile-overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/honeycombio/terraform-aws-integrations/HEAD/docs/s3-logfile-overview.png -------------------------------------------------------------------------------- /docs/cloudwatch-logs-overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/honeycombio/terraform-aws-integrations/HEAD/docs/cloudwatch-logs-overview.png -------------------------------------------------------------------------------- /docs/cloudwatch-metrics-overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/honeycombio/terraform-aws-integrations/HEAD/docs/cloudwatch-metrics-overview.png -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .idea 2 | .terraform 3 | .terraform.* 4 | *.tfstate 5 | *.tfstate.backup 6 | terraform.tfvars 7 | 8 | # Temporary output directory 9 | tempouts/ 10 | -------------------------------------------------------------------------------- /SUPPORT.md: -------------------------------------------------------------------------------- 1 | # How to Get Help 2 | 3 | This project uses GitHub issues to track bugs, feature requests, and questions about using the project. Please search for existing issues before filing a new one. 
4 | -------------------------------------------------------------------------------- /versions.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = ">= 1.0" 3 | 4 | required_providers { 5 | aws = { 6 | source = "hashicorp/aws" 7 | version = "~> 6.0" 8 | } 9 | } 10 | } 11 | -------------------------------------------------------------------------------- /modules/rds-logs/versions.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = ">= 1.0" 3 | 4 | required_providers { 5 | aws = { 6 | source = "hashicorp/aws" 7 | version = "~> 6.0" 8 | } 9 | } 10 | } 11 | -------------------------------------------------------------------------------- /modules/s3-logfile/versions.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = ">= 1.0" 3 | 4 | required_providers { 5 | aws = { 6 | source = "hashicorp/aws" 7 | version = "~> 6.0" 8 | } 9 | } 10 | } 11 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | default: terraform-format 2 | 3 | update-changelog: 4 | ./scripts/update-changelog.sh 5 | 6 | generate-docs: 7 | ./scripts/docs.sh 8 | 9 | terraform-format: 10 | ./scripts/terraform-format.sh $(validate) 11 | -------------------------------------------------------------------------------- /modules/cloudwatch-logs/versions.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = ">= 1.0" 3 | 4 | required_providers { 5 | aws = { 6 | source = "hashicorp/aws" 7 | version = "~> 6.0" 8 | } 9 | } 10 | } 11 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Code of Conduct 2 | 3 | This project has adopted the Honeycomb User Community Code of Conduct to clarify expected behavior in our community. 4 | 5 | https://www.honeycomb.io/honeycomb-user-community-code-of-conduct/ -------------------------------------------------------------------------------- /modules/cloudwatch-metrics/versions.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = ">= 1.0" 3 | 4 | required_providers { 5 | aws = { 6 | source = "hashicorp/aws" 7 | version = "~> 6.0" 8 | } 9 | } 10 | } 11 | -------------------------------------------------------------------------------- /modules/kinesis-firehose-honeycomb/versions.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = ">= 1.1" 3 | 4 | required_providers { 5 | aws = { 6 | source = "hashicorp/aws" 7 | version = "~> 6.0" 8 | } 9 | } 10 | } 11 | -------------------------------------------------------------------------------- /modules/cloudwatch-logs/outputs.tf: -------------------------------------------------------------------------------- 1 | output "cloudwatch_log_subscription_filters" { 2 | value = var.use_order_independent_filter_resource_naming ? 
[for filter in aws_cloudwatch_log_subscription_filter.filters : filter.name] : aws_cloudwatch_log_subscription_filter.this[*].name 3 | } 4 | -------------------------------------------------------------------------------- /tests/versions.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = ">= 1.1" 3 | 4 | required_providers { 5 | aws = { 6 | source = "hashicorp/aws" 7 | version = "~> 6.0" 8 | } 9 | 10 | random = { 11 | source = "hashicorp/random" 12 | version = "~> 3.0" 13 | } 14 | } 15 | } 16 | -------------------------------------------------------------------------------- /examples/cloudwatch-logs/variables.tf: -------------------------------------------------------------------------------- 1 | variable "honeycomb_api_key" { 2 | type = string 3 | description = "Your Honeycomb team's API key." 4 | sensitive = true 5 | } 6 | 7 | variable "honeycomb_api_host" { 8 | type = string 9 | default = "https://api.honeycomb.io" 10 | description = "If you use a Secure Tenancy or other proxy, put its scheme://host[:port] here." 11 | } 12 | -------------------------------------------------------------------------------- /examples/rds-logs/mysql/variables.tf: -------------------------------------------------------------------------------- 1 | variable "honeycomb_api_key" { 2 | type = string 3 | description = "Your Honeycomb team's API key." 4 | sensitive = true 5 | } 6 | 7 | variable "honeycomb_api_host" { 8 | type = string 9 | default = "https://api.honeycomb.io" 10 | description = "If you use a Secure Tenancy or other proxy, put its scheme://host[:port] here." 11 | } 12 | -------------------------------------------------------------------------------- /examples/cloudwatch-metrics/variables.tf: -------------------------------------------------------------------------------- 1 | variable "honeycomb_api_key" { 2 | type = string 3 | description = "Your Honeycomb team's API key." 4 | sensitive = true 5 | } 6 | 7 | variable "honeycomb_api_host" { 8 | type = string 9 | default = "https://api.honeycomb.io" 10 | description = "If you use a Secure Tenancy or other proxy, put its scheme://host[:port] here." 11 | } 12 | -------------------------------------------------------------------------------- /examples/rds-logs/postgresql/variables.tf: -------------------------------------------------------------------------------- 1 | variable "honeycomb_api_key" { 2 | type = string 3 | description = "Your Honeycomb team's API key." 4 | sensitive = true 5 | } 6 | 7 | variable "honeycomb_api_host" { 8 | type = string 9 | default = "https://api.honeycomb.io" 10 | description = "If you use a Secure Tenancy or other proxy, put its scheme://host[:port] here." 
11 | } 12 | -------------------------------------------------------------------------------- /scripts/terraform-format.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | if [ -n "$1" ] && [ "$1" == true ]; then 4 | echo "Running terraform format validation" 5 | git grep --cached -Il '' -- '*.tf' '*.tfvars' | xargs -L1 terraform fmt -check=true -diff=true -list=true 6 | else 7 | echo "Running terraform format validation and auto update" 8 | git grep --cached -Il '' -- '*.tf' '*.tfvars' | xargs -L1 terraform fmt -write=true -diff=true -list=true 9 | fi 10 | -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | 5 | 6 | ## Which problem is this PR solving? 7 | 8 | - Closes # 9 | 10 | ## Short description of the changes 11 | 12 | ## How to verify that this has the expected result 13 | -------------------------------------------------------------------------------- /scripts/docs.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -euo pipefail 3 | 4 | # This script generates the docs that .github/workflows/terraform-docs.yml 5 | # would, but cannot push due to repo ACLs. 6 | 7 | if ! (command -v terraform-docs >/dev/null); then 8 | echo "You need terraform-docs:" 9 | echo " go install github.com/terraform-docs/terraform-docs@v0.16.0" 10 | exit 1 11 | fi 12 | 13 | terraform-docs markdown table --indent 2 --output-file USAGE.md . 14 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/question-discussion.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Question/Discussion 3 | about: General question about how things work or a discussion 4 | title: '' 5 | labels: 'type: discussion' 6 | assignees: '' 7 | 8 | --- 9 | 10 | 15 | -------------------------------------------------------------------------------- /.github/workflows/add-to-project-v2.yml: -------------------------------------------------------------------------------- 1 | name: Add to project 2 | on: 3 | issues: 4 | types: [opened] 5 | pull_request_target: 6 | types: [opened] 7 | jobs: 8 | add-to-project: 9 | runs-on: ubuntu-latest 10 | name: Add issues and PRs to project 11 | steps: 12 | - uses: actions/add-to-project@main 13 | with: 14 | project-url: https://github.com/orgs/honeycombio/projects/27 15 | github-token: ${{ secrets.GHPROJECTS_TOKEN }} 16 | -------------------------------------------------------------------------------- /examples/complete/variables.tf: -------------------------------------------------------------------------------- 1 | variable "honeycomb_api_key" { 2 | type = string 3 | description = "Your Honeycomb team's API key." 4 | sensitive = true 5 | } 6 | 7 | variable "s3_bucket_arn" { 8 | type = string 9 | description = "ARN for bucket containing ALB logs" 10 | } 11 | 12 | variable "honeycomb_api_host" { 13 | type = string 14 | default = "https://api.honeycomb.io" 15 | description = "If you use a Secure Tenancy or other proxy, put its scheme://host[:port] here." 
16 | } 17 | -------------------------------------------------------------------------------- /examples/s3-logfile/variables.tf: -------------------------------------------------------------------------------- 1 | variable "honeycomb_api_key" { 2 | type = string 3 | description = "Your Honeycomb team's API key." 4 | sensitive = true 5 | } 6 | 7 | variable "s3_bucket_arn" { 8 | type = string 9 | description = "ARN for bucket containing ALB logs" 10 | } 11 | 12 | variable "honeycomb_api_host" { 13 | type = string 14 | default = "https://api.honeycomb.io" 15 | description = "If you use a Secure Tenancy or other proxy, put its scheme://host[:port] here." 16 | } 17 | 18 | -------------------------------------------------------------------------------- /.github/workflows/terraform-lint.yml: -------------------------------------------------------------------------------- 1 | name: Lints and formatting 2 | on: 3 | workflow_dispatch: 4 | push: 5 | paths-ignore: 6 | - README.md 7 | - CHANGELOG.md 8 | 9 | jobs: 10 | terraform-fmt: 11 | runs-on: ubuntu-latest 12 | steps: 13 | - uses: actions/checkout@v2 14 | - uses: hashicorp/setup-terraform@v1 15 | 16 | - name: Terraform fmt 17 | id: fmt 18 | run: make terraform-format validate=true 19 | 20 | - name: GitHub Action linting Terraform files 21 | uses: devops-infra/action-tflint@0.3 22 | -------------------------------------------------------------------------------- /examples/s3-logfile/main.tf: -------------------------------------------------------------------------------- 1 | module "alb_logs" { 2 | source = "honeycombio/integrations/aws//modules/s3-logfile" 3 | 4 | name = "tf-integrations-alb-${random_pet.this.id}" 5 | parser_type = "alb" # valid types are alb, elb, cloudfront, vpc-flow-log, s3-access, json, and keyval 6 | honeycomb_api_key = var.honeycomb_api_key 7 | honeycomb_api_host = var.honeycomb_api_host 8 | # bucket with alb access logs 9 | s3_bucket_arn = var.s3_bucket_arn 10 | } 11 | 12 | # dependencies 13 | 14 | resource "random_pet" "this" { 15 | length = 2 16 | } 17 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/security-vulnerability-report.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Security vulnerability report 3 | about: Let us know if you discover a security vulnerability 4 | title: '' 5 | labels: 'type: security' 6 | assignees: '' 7 | 8 | --- 9 | 10 | 15 | **Versions** 16 | 17 | **Description** 18 | 19 | (Please include any relevant CVE advisory links) 20 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright 2022 Hound Technology, Inc. 2 | 3 | Licensed under the Apache License, Version 2.0 (the "License"); 4 | you may not use this file except in compliance with the License. 5 | You may obtain a copy of the License at 6 | 7 | http://www.apache.org/licenses/LICENSE-2.0 8 | 9 | Unless required by applicable law or agreed to in writing, software 10 | distributed under the License is distributed on an "AS IS" BASIS, 11 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | See the License for the specific language governing permissions and 13 | limitations under the License. 
14 | -------------------------------------------------------------------------------- /.github/release.yml: -------------------------------------------------------------------------------- 1 | # .github/release.yml 2 | 3 | changelog: 4 | exclude: 5 | labels: 6 | - no-changelog 7 | categories: 8 | - title: 💥 Breaking Changes 💥 9 | labels: 10 | - "version: bump major" 11 | - breaking-change 12 | - title: 💡 Enhancements 13 | labels: 14 | - "type: enhancement" 15 | - title: 🐛 Fixes 16 | labels: 17 | - "type: bug" 18 | - title: 🛠 Maintenance 19 | labels: 20 | - "type: maintenance" 21 | - "type: dependencies" 22 | - "type: documentation" 23 | - title: 🤷 Other Changes 24 | labels: 25 | - "*" 26 | -------------------------------------------------------------------------------- /NOTICE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2016-Present Honeycomb, Hound Technology, Inc. All Rights Reserved. 2 | 3 | Licensed under the Apache License, Version 2.0 (the "License"); 4 | you may not use this file except in compliance with the License. 5 | You may obtain a copy of the License at 6 | 7 | http://www.apache.org/licenses/LICENSE-2.0 8 | 9 | Unless required by applicable law or agreed to in writing, software 10 | distributed under the License is distributed on an "AS IS" BASIS, 11 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | See the License for the specific language governing permissions and 13 | limitations under the License. 14 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/feature_request.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Feature request 3 | about: Suggest an idea for this project 4 | title: '' 5 | labels: 'type: enhancement' 6 | assignees: '' 7 | 8 | --- 9 | 10 | 15 | 16 | **Is your feature request related to a problem? Please describe.** 17 | 18 | 19 | **Describe the solution you'd like** 20 | 21 | 22 | **Describe alternatives you've considered** 23 | 24 | 25 | **Additional context** 26 | -------------------------------------------------------------------------------- /tests/cloudwatch-logs.tf: -------------------------------------------------------------------------------- 1 | 2 | module "cloudwatch_logs" { 3 | source = "../modules/cloudwatch-logs" 4 | 5 | name = "cwlogs-${random_pet.this.id}" 6 | cloudwatch_log_groups = [module.log_group.cloudwatch_log_group_name] 7 | 8 | honeycomb_api_host = var.honeycomb_api_host 9 | honeycomb_api_key = var.honeycomb_api_key 10 | honeycomb_dataset_name = "cloudwatch-logs" 11 | 12 | s3_failure_bucket_arn = module.firehose_failure_bucket.s3_bucket_arn 13 | } 14 | 15 | module "log_group" { 16 | source = "terraform-aws-modules/cloudwatch/aws//modules/log-group" 17 | version = "~> 3.0" 18 | 19 | name = "tf-integrations-${random_pet.this.id}" 20 | retention_in_days = 1 21 | } 22 | -------------------------------------------------------------------------------- /scripts/update-changelog.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | if [ -z "${GITHUB_TOKEN+x}" ]; then 4 | echo "Updating the CHANGELOG requires the environment variable GITHUB_TOKEN." 
5 | echo "Please set this variable to a GITHUB Personal Access Token scoped to 'repo'" 6 | echo "How-to details: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token" 7 | exit 255 8 | fi 9 | 10 | if ! [ -x "$(command -v changelog-from-release)" ]; then 11 | echo '`changelog-from-release` is not installed. Installing it now.' >&2 12 | go install github.com/rhysd/changelog-from-release/v3@latest 13 | fi 14 | 15 | 16 | changelog-from-release > CHANGELOG.md 17 | echo 'CHANGELOG.md has been updated.' >&2 18 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/bug_report.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Bug report 3 | about: Let us know if something is not working as expected 4 | title: '' 5 | labels: 'type: bug' 6 | assignees: '' 7 | 8 | --- 9 | 10 | 17 | 18 | **Versions** 19 | 20 | 21 | **Steps to reproduce** 22 | 23 | 1. 24 | 25 | **Additional context** 26 | -------------------------------------------------------------------------------- /modules/kinesis-firehose-honeycomb/outputs.tf: -------------------------------------------------------------------------------- 1 | output "kinesis_firehose_delivery_stream_arn" { 2 | value = aws_kinesis_firehose_delivery_stream.stream.arn 3 | } 4 | 5 | output "otel_collector_service_url" { 6 | value = length(local.destinations) > 1 ? nonsensitive(aws_apprunner_service.otel_collector[0].service_url) : null 7 | description = "The URL of the OpenTelemetry collector App Runner service (only available when using multiple destinations)" 8 | } 9 | 10 | output "otel_collector_service_arn" { 11 | value = length(local.destinations) > 1 ? nonsensitive(aws_apprunner_service.otel_collector[0].arn) : null 12 | description = "The ARN of the OpenTelemetry collector App Runner service (only available when using multiple destinations)" 13 | } 14 | -------------------------------------------------------------------------------- /.github/release-drafter.yml: -------------------------------------------------------------------------------- 1 | name-template: 'v$RESOLVED_VERSION 🌈' 2 | tag-template: 'v$RESOLVED_VERSION' 3 | categories: 4 | - title: '🚀 Features' 5 | labels: 6 | - 'feature' 7 | - 'enhancement' 8 | - title: '🐛 Bug Fixes' 9 | labels: 10 | - 'fix' 11 | - 'bugfix' 12 | - 'bug' 13 | - title: '🧰 Maintenance' 14 | labels: 15 | - 'chore' 16 | change-template: '- $TITLE @$AUTHOR (#$NUMBER)' 17 | change-title-escapes: '\<*_&' # You can add # and @ to disable mentions, and add ` to disable code blocks. 18 | exclude-labels: 19 | - 'skip-changelog' 20 | version-resolver: 21 | major: 22 | labels: 23 | - 'major' 24 | minor: 25 | labels: 26 | - 'minor' 27 | patch: 28 | labels: 29 | - 'patch' 30 | default: patch 31 | template: | 32 | ## Changes 33 | 34 | $CHANGES 35 | -------------------------------------------------------------------------------- /modules/cloudwatch-metrics/outputs.tf: -------------------------------------------------------------------------------- 1 | output "cloudwatch_metric_stream_arn" { 2 | value = aws_cloudwatch_metric_stream.metric-stream.arn 3 | } 4 | 5 | output "cloudwatch_metric_stream_name" { 6 | value = aws_cloudwatch_metric_stream.metric-stream.name 7 | } 8 | 9 | output "otel_collector_service_url" { 10 | value = length(var.additional_destinations) > 0 ? 
nonsensitive(module.kfh.otel_collector_service_url) : null 11 | description = "The URL of the OpenTelemetry collector App Runner service (only available when using multiple destinations)" 12 | } 13 | 14 | output "otel_collector_service_arn" { 15 | value = length(var.additional_destinations) > 0 ? nonsensitive(module.kfh.otel_collector_service_arn) : null 16 | description = "The ARN of the OpenTelemetry collector App Runner service (only available when using multiple destinations)" 17 | } 18 | -------------------------------------------------------------------------------- /tests/setup-honeycomb-aws-integrations.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | region = "us-east-2" 3 | } 4 | 5 | data "aws_vpc" "default" { 6 | default = true 7 | } 8 | 9 | resource "random_pet" "this" { 10 | length = 2 11 | } 12 | 13 | data "aws_subnets" "default" { 14 | filter { 15 | name = "vpc-id" 16 | values = [data.aws_vpc.default.id] 17 | } 18 | } 19 | 20 | data "aws_security_group" "default" { 21 | vpc_id = data.aws_vpc.default.id 22 | filter { 23 | name = "group-name" 24 | values = ["default"] 25 | } 26 | } 27 | 28 | variable "honeycomb_api_host" { 29 | type = string 30 | default = "https://api.honeycomb.io" 31 | } 32 | 33 | variable "honeycomb_api_key" { 34 | type = string 35 | } 36 | 37 | // shared s3 bucket for cloudwatch-logs and cloudwatch-metrics 38 | // kinesis failure messages 39 | module "firehose_failure_bucket" { 40 | source = "terraform-aws-modules/s3-bucket/aws" 41 | version = "~> 3.0" 42 | 43 | bucket = "honeycomb-tf-integrations-failures-${random_pet.this.id}" 44 | force_destroy = true 45 | } 46 | -------------------------------------------------------------------------------- /modules/cloudwatch-metrics/iam.tf: -------------------------------------------------------------------------------- 1 | # https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-metric-streams-trustpolicy.html 2 | resource "aws_iam_role" "this" { 3 | name_prefix = var.name 4 | assume_role_policy = data.aws_iam_policy_document.assume_role.json 5 | tags = var.tags 6 | } 7 | 8 | data "aws_iam_policy_document" "assume_role" { 9 | statement { 10 | actions = ["sts:AssumeRole"] 11 | principals { 12 | type = "Service" 13 | identifiers = ["streams.metrics.cloudwatch.amazonaws.com"] 14 | } 15 | } 16 | } 17 | 18 | # https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-metric-streams-trustpolicy.html 19 | resource "aws_iam_role_policy" "this" { 20 | name_prefix = var.name 21 | role = aws_iam_role.this.id 22 | policy = data.aws_iam_policy_document.this.json 23 | } 24 | 25 | data "aws_iam_policy_document" "this" { 26 | statement { 27 | actions = ["firehose:PutRecord", "firehose:PutRecordBatch"] 28 | resources = [module.kfh.kinesis_firehose_delivery_stream_arn] 29 | } 30 | } 31 | 32 | -------------------------------------------------------------------------------- /RELEASING.md: -------------------------------------------------------------------------------- 1 | # RELEASING 2 | 3 | ## VERSIONING 4 | 5 | This follows the [recommendation from the HashiCorp team](https://developer.hashicorp.com/terraform/registry/modules/publish#releasing-new-versions): each tag and release has a semantic version optionally prefixed with a `v` (e.g. v1.0.1 or 0.9.4). 
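These tags are also what the Terraform Registry resolves module version constraints against, so consumers can pin a release. A minimal sketch, with an illustrative version constraint and placeholder inputs:

```hcl
module "cloudwatch_metrics" {
  source  = "honeycombio/integrations/aws//modules/cloudwatch-metrics"
  version = "~> 1.0" # resolved against the semver tags described above (illustrative)

  name                   = "cloudwatch-metrics"
  honeycomb_api_key      = var.honeycomb_api_key
  honeycomb_dataset_name = "cloudwatch-metrics"
  s3_failure_bucket_arn  = var.s3_failure_bucket_arn # placeholder variable
}
```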
6 | 7 | ## MAKING A NEW RELEASE 8 | 9 | Follow these steps to create a new release: 10 | 11 | * Open the [Releases Page](https://github.com/honeycombio/terraform-aws-integrations/releases). 12 | 13 | 14 | * If there are new changes since the last release, you will see a draft release with notes and a recommended tag / name - this is created by [release-drafter](.github/workflows/release-drafter.yml). 15 | 16 | 17 | * Review and make changes to the content or version number. 18 | 19 | 20 | * Save. DO NOT PUBLISH YET. 21 | 22 | 23 | * Locally, run `make update-changelog`. We do this before publishing so our new release includes the commit with the latest version controlled CHANGELOG.md file. 24 | 25 | 26 | * Open a PR and merge in once approved. 27 | 28 | 29 | * PUBLISH the Release. 🎉 30 | 31 | 32 | * Validate & Celebrate - https://registry.terraform.io/modules/honeycombio/integrations/aws/latest. 33 | -------------------------------------------------------------------------------- /examples/cloudwatch-logs/main.tf: -------------------------------------------------------------------------------- 1 | module "cloudwatch_logs" { 2 | source = "honeycombio/integrations/aws//modules/cloudwatch-logs" 3 | 4 | name = "cwlogs-${random_pet.this.id}" 5 | cloudwatch_log_groups = [module.log_group.cloudwatch_log_group_name, "/aws/rds/instance/tf-integrations-rds-mysql-enabled-blowfish/slowquery"] 6 | 7 | honeycomb_api_key = var.honeycomb_api_key 8 | honeycomb_dataset_name = "cloudwatch-logs" 9 | # Users generally don't need to set this unless they're using Secure Tenancy 10 | honeycomb_api_host = var.honeycomb_api_host 11 | 12 | # firehose failure logs can be found here for troubleshooting 13 | s3_failure_bucket_arn = module.firehose_failure_bucket.s3_bucket_arn 14 | } 15 | 16 | # dependencies 17 | 18 | resource "random_pet" "this" { 19 | length = 2 20 | } 21 | 22 | module "log_group" { 23 | source = "terraform-aws-modules/cloudwatch/aws//modules/log-group" 24 | version = "~> 3.0" 25 | 26 | name = "tf-integrations-${random_pet.this.id}" 27 | retention_in_days = 1 28 | } 29 | 30 | module "firehose_failure_bucket" { 31 | source = "terraform-aws-modules/s3-bucket/aws" 32 | version = "~> 3.0" 33 | 34 | bucket = "honeycomb-tf-integrations-failures-${random_pet.this.id}" 35 | force_destroy = true 36 | } 37 | -------------------------------------------------------------------------------- /modules/cloudwatch-logs/iam.tf: -------------------------------------------------------------------------------- 1 | data "aws_caller_identity" "current" {} 2 | data "aws_region" "current" {} 3 | 4 | locals { 5 | account_id = data.aws_caller_identity.current.account_id 6 | region = data.aws_region.current.region 7 | } 8 | 9 | resource "aws_iam_role" "this" { 10 | name_prefix = var.name 11 | assume_role_policy = data.aws_iam_policy_document.assume_role.json 12 | tags = var.tags 13 | } 14 | 15 | data "aws_iam_policy_document" "assume_role" { 16 | statement { 17 | actions = ["sts:AssumeRole"] 18 | 19 | principals { 20 | type = "Service" 21 | identifiers = ["logs.${local.region}.amazonaws.com"] 22 | } 23 | 24 | condition { 25 | test = "StringLike" 26 | variable = "aws:SourceArn" 27 | 28 | values = [ 29 | "arn:aws:logs:${local.region}:${local.account_id}:*" 30 | ] 31 | } 32 | } 33 | } 34 | 35 | resource "aws_iam_role_policy" "cwl_policy" { 36 | name_prefix = var.name 37 | role = aws_iam_role.this.id 38 | policy = data.aws_iam_policy_document.this.json 39 | } 40 | 41 | 42 | data "aws_iam_policy_document" "this" { 43 | statement { 44 
| actions = ["firehose:PutRecord", "firehose:PutRecordBatch"] 45 | resources = [module.kfh.kinesis_firehose_delivery_stream_arn] 46 | } 47 | } 48 | -------------------------------------------------------------------------------- /.github/workflows/test-terraform-module.yml: -------------------------------------------------------------------------------- 1 | name: Test Terraform Modules 2 | 3 | on: 4 | workflow_dispatch: 5 | push: 6 | paths: 7 | - '**/*.tf' 8 | - '!examples/**' 9 | env: 10 | TF_IN_AUTOMATION: 1 11 | TF_VAR_honeycomb_api_key: ${{ secrets.HONEYCOMB_API_KEY }} 12 | AWS_DEFAULT_REGION: us-east-1 13 | 14 | jobs: 15 | tf-apply: 16 | name: Test terraform-aws-integrations 17 | runs-on: ubuntu-latest 18 | # These permissions are needed to interact with GitHub's OIDC Token endpoint. 19 | permissions: 20 | id-token: write 21 | contents: read 22 | defaults: 23 | run: 24 | working-directory: ./tests 25 | steps: 26 | - uses: actions/checkout@v3 27 | 28 | - name: Configure AWS Credentials 29 | uses: aws-actions/configure-aws-credentials@v2 30 | with: 31 | aws-region: us-east-1 32 | role-to-assume: ${{ secrets.AWS_ASSUME_ROLE_ARN }} 33 | role-duration-seconds: 1800 # 30min 34 | 35 | - uses: hashicorp/setup-terraform@v2 36 | 37 | - name: Terraform Init 38 | run: terraform init -upgrade 39 | 40 | - name: Terraform Validate 41 | run: terraform validate -no-color 42 | 43 | - name: Terraform Apply 44 | run: terraform apply -auto-approve -no-color -input=false 45 | 46 | - name: Terraform Destroy 47 | run: terraform destroy -auto-approve -no-color -input=false 48 | if: 'always()' 49 | -------------------------------------------------------------------------------- /.github/workflows/stale.yml: -------------------------------------------------------------------------------- 1 | name: 'Close stale issues and PRs' 2 | on: 3 | schedule: 4 | - cron: '30 1 * * *' 5 | 6 | jobs: 7 | stale: 8 | name: 'Close stale issues and PRs' 9 | runs-on: ubuntu-latest 10 | permissions: 11 | issues: write 12 | pull-requests: write 13 | 14 | steps: 15 | - uses: actions/stale@v4 16 | with: 17 | start-date: '2021-09-01T00:00:00Z' 18 | stale-issue-message: 'Marking this issue as stale because it has been open 14 days with no activity. Please add a comment if this is still an ongoing issue; otherwise this issue will be automatically closed in 7 days.' 19 | stale-pr-message: 'Marking this PR as stale because it has been open 30 days with no activity. Please add a comment if this PR is still relevant; otherwise this PR will be automatically closed in 7 days.' 20 | close-issue-message: 'Closing this issue due to inactivity. Please see our [Honeycomb OSS Lifecycle and Practices](https://github.com/honeycombio/home/blob/main/honeycomb-oss-lifecycle-and-practices.md).' 21 | close-pr-message: 'Closing this PR due to inactivity. Please see our [Honeycomb OSS Lifecycle and Practices](https://github.com/honeycombio/home/blob/main/honeycomb-oss-lifecycle-and-practices.md).' 22 | days-before-issue-stale: 14 23 | days-before-pr-stale: 30 24 | days-before-issue-close: 7 25 | days-before-pr-close: 7 26 | any-of-labels: 'status: info needed,status: revision needed' 27 | -------------------------------------------------------------------------------- /SECURITY.md: -------------------------------------------------------------------------------- 1 | # Security Policy 2 | 3 | This security policy applies to public projects under the [honeycombio organization][gh-organization] on GitHub. 
4 | For security reports involving the services provided at `(ui|ui-eu|api|api-eu).honeycomb.io`, refer to the [Honeycomb Bug Bounty Program][bugbounty] for scope, expectations, and reporting procedures. 5 | 6 | ## Security/Bugfix Versions 7 | 8 | Security and bug fixes are generally provided only for the last minor version. 9 | Fixes are released either as part of the next minor version or as an on-demand patch version. 10 | 11 | Security fixes are given priority and might be enough to cause a new version to be released. 12 | 13 | ## Reporting a Vulnerability 14 | 15 | We encourage responsible disclosure of security vulnerabilities. 16 | If you find something suspicious, we encourage and appreciate your report! 17 | 18 | ### Ways to report 19 | 20 | In order for the vulnerability reports to reach maintainers as soon as possible, the preferred way is to use the "Report a vulnerability" button under the "Security" tab of the associated GitHub project. 21 | This creates a private communication channel between the reporter and the maintainers. 22 | 23 | If you are absolutely unable to or have strong reasons not to use GitHub's vulnerability reporting workflow, please reach out to the Honeycomb security team at [security@honeycomb.io](mailto:security@honeycomb.io). 24 | 25 | [gh-organization]: https://github.com/honeycombio 26 | [bugbounty]: https://www.honeycomb.io/bugbountyprogram 27 | -------------------------------------------------------------------------------- /.github/workflows/release-drafter.yml: -------------------------------------------------------------------------------- 1 | name: Release Drafter 2 | 3 | on: 4 | push: 5 | # branches to consider in the event; optional, defaults to all 6 | branches: 7 | - main 8 | # pull_request event is required only for autolabeler 9 | pull_request: 10 | # Only following types are handled by the action, but one can default to all as well 11 | types: [opened, reopened, synchronize] 12 | # pull_request_target event is required for autolabeler to support PRs from forks 13 | # pull_request_target: 14 | # types: [opened, reopened, synchronize] 15 | 16 | permissions: 17 | contents: read 18 | 19 | jobs: 20 | update_release_draft: 21 | permissions: 22 | # write permission is required to create a github release 23 | contents: write 24 | # write permission is required for autolabeler 25 | # otherwise, read permission is required at least 26 | pull-requests: write 27 | runs-on: ubuntu-latest 28 | steps: 29 | # (Optional) GitHub Enterprise requires GHE_HOST variable set 30 | #- name: Set GHE_HOST 31 | # run: | 32 | # echo "GHE_HOST=${GITHUB_SERVER_URL##https:\/\/}" >> $GITHUB_ENV 33 | 34 | # Drafts your next Release notes as Pull Requests are merged into "master" 35 | - uses: release-drafter/release-drafter@v5 36 | # (Optional) specify config name to use, relative to .github/. 
Default: release-drafter.yml 37 | # with: 38 | # config-name: my-config.yml 39 | # disable-autolabeler: true 40 | env: 41 | GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} 42 | -------------------------------------------------------------------------------- /examples/cloudwatch-metrics/main.tf: -------------------------------------------------------------------------------- 1 | module "cloudwatch_metric_stream" { 2 | source = "honeycombio/integrations/aws//modules/cloudwatch-metrics" 3 | 4 | name = "cloudwatch-metrics-complete" 5 | 6 | honeycomb_dataset_name = "cloudwatch-metrics" 7 | honeycomb_api_key = var.honeycomb_api_key 8 | # Users generally don't need to set this unless they're using Secure Tenancy 9 | honeycomb_api_host = var.honeycomb_api_host 10 | 11 | # firehose failure logs can be found here for troubleshooting 12 | s3_failure_bucket_arn = module.firehose_failure_bucket.s3_bucket_arn 13 | 14 | # include and exclude cannot be used together; they are mutually exclusive 15 | # ONLY send these namespaces and metrics 16 | include_filters = [ 17 | { 18 | namespace = "AWS/EC2" 19 | metric_names = [ 20 | "CPUUtilization", 21 | "CPUCreditBalance", 22 | "NetworkIn", 23 | "NetworkOut", 24 | ] 25 | }, 26 | { 27 | namespace = "AWS/ELB" 28 | metric_names = [] # include all metrics 29 | } 30 | ] 31 | 32 | # send all namespaces and metrics EXCEPT these 33 | # exclude_filters = [ 34 | # { 35 | # namespace = "AWS/Lambda" 36 | # metric_names = [] # exclude all metrics 37 | # } 38 | # ] 39 | 40 | tags = { 41 | Environment = "sandbox" 42 | } 43 | } 44 | 45 | resource "random_pet" "this" { 46 | length = 2 47 | } 48 | 49 | module "firehose_failure_bucket" { 50 | source = "terraform-aws-modules/s3-bucket/aws" 51 | version = "~> 3.0" 52 | 53 | bucket = "honeycomb-tf-integrations-failures-${random_pet.this.id}" 54 | force_destroy = true 55 | } 56 | -------------------------------------------------------------------------------- /modules/cloudwatch-logs/main.tf: -------------------------------------------------------------------------------- 1 | module "kfh" { 2 | source = "../kinesis-firehose-honeycomb" 3 | 4 | name = var.name 5 | 6 | honeycomb_api_host = var.honeycomb_api_host 7 | honeycomb_api_key = var.honeycomb_api_key 8 | honeycomb_dataset_name = var.honeycomb_dataset_name 9 | additional_destinations = var.additional_destinations 10 | 11 | enable_lambda_transform = var.enable_lambda_transform 12 | lambda_transform_arn = var.lambda_transform_arn 13 | 14 | http_buffering_size = var.http_buffering_size 15 | http_buffering_interval = var.http_buffering_interval 16 | 17 | s3_failure_bucket_arn = var.s3_failure_bucket_arn 18 | s3_backup_mode = var.s3_backup_mode 19 | s3_buffer_size = var.s3_buffer_size 20 | s3_buffer_interval = var.s3_buffer_interval 21 | s3_compression_format = var.s3_compression_format 22 | 23 | tags = var.tags 24 | } 25 | 26 | resource "aws_cloudwatch_log_subscription_filter" "this" { 27 | count = !var.use_order_independent_filter_resource_naming ? length(var.cloudwatch_log_groups) : 0 28 | name = "${var.cloudwatch_log_groups[count.index]}-logs_subscription_filter" 29 | role_arn = aws_iam_role.this.arn 30 | log_group_name = var.cloudwatch_log_groups[count.index] 31 | filter_pattern = var.log_subscription_filter_pattern 32 | destination_arn = module.kfh.kinesis_firehose_delivery_stream_arn 33 | } 34 | 35 | resource "aws_cloudwatch_log_subscription_filter" "filters" { 36 | for_each = var.use_order_independent_filter_resource_naming ? 
toset(var.cloudwatch_log_groups) : [] 37 | 38 | name = "${each.key}-logs_subscription_filter" 39 | role_arn = aws_iam_role.this.arn 40 | log_group_name = each.key 41 | filter_pattern = var.log_subscription_filter_pattern 42 | destination_arn = module.kfh.kinesis_firehose_delivery_stream_arn 43 | } 44 | -------------------------------------------------------------------------------- /tests/kinesis-firehose.tf: -------------------------------------------------------------------------------- 1 | # Test single destination (direct delivery) 2 | module "kinesis_firehose_single" { 3 | source = "../modules/kinesis-firehose-honeycomb" 4 | 5 | name = "kfh-single-${random_pet.this.id}" 6 | 7 | honeycomb_api_host = var.honeycomb_api_host 8 | honeycomb_api_key = var.honeycomb_api_key 9 | honeycomb_dataset_name = "kinesis-single" 10 | 11 | s3_failure_bucket_arn = module.firehose_failure_bucket.s3_bucket_arn 12 | } 13 | 14 | # Test multiple destinations (App Runner multiplexing) 15 | module "kinesis_firehose_multi" { 16 | source = "../modules/kinesis-firehose-honeycomb" 17 | 18 | name = "kfh-multi-${random_pet.this.id}" 19 | 20 | honeycomb_api_host = var.honeycomb_api_host 21 | honeycomb_api_key = var.honeycomb_api_key 22 | honeycomb_dataset_name = "kinesis-primary" 23 | 24 | additional_destinations = [ 25 | { 26 | honeycomb_dataset_name = "kinesis-secondary" 27 | honeycomb_api_host = var.honeycomb_api_host 28 | honeycomb_api_key = var.honeycomb_api_key 29 | }, 30 | { 31 | honeycomb_dataset_name = "kinesis-tertiary" 32 | honeycomb_api_host = var.honeycomb_api_host 33 | honeycomb_api_key = var.honeycomb_api_key 34 | } 35 | ] 36 | 37 | s3_failure_bucket_arn = module.firehose_failure_bucket.s3_bucket_arn 38 | } 39 | 40 | # Output to verify App Runner service is created for multi-destination 41 | output "single_destination_stream_arn" { 42 | value = nonsensitive(module.kinesis_firehose_single.kinesis_firehose_delivery_stream_arn) 43 | } 44 | 45 | output "multi_destination_stream_arn" { 46 | value = nonsensitive(module.kinesis_firehose_multi.kinesis_firehose_delivery_stream_arn) 47 | } 48 | 49 | output "otel_collector_service_url" { 50 | value = nonsensitive(module.kinesis_firehose_multi.otel_collector_service_url) 51 | description = "Should be non-null for multi-destination scenario" 52 | } 53 | 54 | output "otel_collector_service_arn" { 55 | value = nonsensitive(module.kinesis_firehose_multi.otel_collector_service_arn) 56 | description = "Should be non-null for multi-destination scenario" 57 | } -------------------------------------------------------------------------------- /modules/cloudwatch-metrics/main.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | // until we remove the "namespace_" variables, we'll reconcile the inputs here 3 | // by converting the deprecated input into a similar looking object 4 | include_filters = length(var.include_filters) > 0 ? var.include_filters : [ 5 | for f in var.namespace_include_filters : { namespace = f, metric_names = [] } 6 | ] 7 | exclude_filters = length(var.exclude_filters) > 0 ? 
var.exclude_filters : [ 8 | for f in var.namespace_exclude_filters : { namespace = f, metric_names = [] } 9 | ] 10 | } 11 | module "kfh" { 12 | source = "../kinesis-firehose-honeycomb" 13 | 14 | name = var.name 15 | 16 | honeycomb_api_host = var.honeycomb_api_host 17 | honeycomb_api_key = var.honeycomb_api_key 18 | honeycomb_dataset_name = var.honeycomb_dataset_name 19 | additional_destinations = var.additional_destinations 20 | 21 | http_buffering_size = var.http_buffering_size 22 | http_buffering_interval = var.http_buffering_interval 23 | 24 | s3_failure_bucket_arn = var.s3_failure_bucket_arn 25 | s3_backup_mode = var.s3_backup_mode 26 | s3_buffer_size = var.s3_buffer_size 27 | s3_buffer_interval = var.s3_buffer_interval 28 | s3_compression_format = var.s3_compression_format 29 | 30 | tags = var.tags 31 | } 32 | 33 | resource "aws_cloudwatch_metric_stream" "metric-stream" { 34 | name = var.name 35 | role_arn = aws_iam_role.this.arn 36 | firehose_arn = module.kfh.kinesis_firehose_delivery_stream_arn 37 | output_format = var.output_format 38 | 39 | include_linked_accounts_metrics = var.include_linked_accounts_metrics 40 | 41 | # NOTE: include and exclude filters are _mutually exclusive_; you may not have 42 | # both (though this is difficult to enforce in variable validation). 43 | dynamic "include_filter" { 44 | for_each = local.include_filters 45 | 46 | content { 47 | namespace = include_filter.value.namespace 48 | metric_names = include_filter.value.metric_names 49 | } 50 | } 51 | 52 | dynamic "exclude_filter" { 53 | for_each = local.exclude_filters 54 | 55 | content { 56 | namespace = exclude_filter.value.namespace 57 | metric_names = exclude_filter.value.metric_names 58 | } 59 | } 60 | 61 | tags = var.tags 62 | } 63 | -------------------------------------------------------------------------------- /.github/workflows/validate-pr-title.yml: -------------------------------------------------------------------------------- 1 | name: "Validate PR Title" 2 | 3 | on: 4 | pull_request: 5 | types: 6 | - opened 7 | - edited 8 | - synchronize 9 | 10 | jobs: 11 | main: 12 | name: Validate PR title 13 | runs-on: ubuntu-latest 14 | steps: 15 | - uses: amannn/action-semantic-pull-request@v5 16 | id: lint_pr_title 17 | name: "🤖 Check PR title follows conventional commit spec" 18 | env: 19 | GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} 20 | with: 21 | # Have to specify all types because `maint` and `rel` aren't defaults 22 | types: | 23 | maint 24 | rel 25 | fix 26 | feat 27 | chore 28 | ci 29 | docs 30 | style 31 | refactor 32 | perf 33 | test 34 | ignoreLabels: | 35 | "type: dependencies" 36 | # When the previous step fails, the workflow would stop. By adding this 37 | # condition you can continue the execution with the populated error message. 38 | - if: always() && (steps.lint_pr_title.outputs.error_message != null) 39 | name: "📝 Add PR comment about using conventional commit spec" 40 | uses: marocchino/sticky-pull-request-comment@v2 41 | with: 42 | header: pr-title-lint-error 43 | message: | 44 | Thank you for contributing to the project! 🎉 45 | 46 | We require pull request titles to follow the [Conventional Commits specification](https://www.conventionalcommits.org/en/v1.0.0/) and it looks like your proposed title needs to be adjusted. 47 | 48 | Make sure to prepend with `feat:`, `fix:`, or another option in the list below. 49 | 50 | Once you update the title, this workflow will re-run automatically and validate the updated title. 
51 | 52 | Details: 53 | 54 | ``` 55 | ${{ steps.lint_pr_title.outputs.error_message }} 56 | ``` 57 | 58 | # Delete a previous comment when the issue has been resolved 59 | - if: ${{ steps.lint_pr_title.outputs.error_message == null }} 60 | name: "❌ Delete PR comment after title has been updated" 61 | uses: marocchino/sticky-pull-request-comment@v2 62 | with: 63 | header: pr-title-lint-error 64 | delete: true 65 | -------------------------------------------------------------------------------- /tests/rds.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | rds_mysql_db_name = "tf-integrations-rds-mysql-${random_pet.this.id}" 3 | } 4 | 5 | module "rds_mysql_logs" { 6 | source = "../modules/rds-logs" 7 | depends_on = [ 8 | module.rds_mysql 9 | ] 10 | name = "rds-logs-${random_pet.this.id}" 11 | db_engine = "mysql" 12 | db_name = local.rds_mysql_db_name 13 | db_log_types = ["slowquery"] 14 | honeycomb_api_host = var.honeycomb_api_host 15 | honeycomb_api_key = var.honeycomb_api_key 16 | honeycomb_dataset_name = "rds-mysql-logs" 17 | 18 | s3_failure_bucket_arn = module.firehose_failure_bucket.s3_bucket_arn 19 | } 20 | 21 | /*** RDS ***/ 22 | 23 | 24 | data "aws_rds_engine_version" "rds_mysql" { 25 | engine = "mysql" 26 | default_only = true 27 | } 28 | module "rds_mysql" { 29 | source = "terraform-aws-modules/rds/aws" 30 | 31 | identifier = local.rds_mysql_db_name 32 | 33 | # All available versions: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html#MySQL.Concepts.VersionMgmt 34 | engine = "mysql" 35 | engine_version = data.aws_rds_engine_version.rds_mysql.version 36 | family = "mysql8.0" 37 | major_engine_version = "8.0" 38 | instance_class = "db.t3.micro" 39 | 40 | 41 | allocated_storage = 20 42 | max_allocated_storage = 100 43 | 44 | db_name = replace(local.rds_mysql_db_name, "-", "") 45 | username = "tfuser" 46 | port = 3306 47 | 48 | multi_az = false 49 | subnet_ids = data.aws_subnets.default.ids 50 | vpc_security_group_ids = [data.aws_security_group.default.id] 51 | 52 | maintenance_window = "Mon:00:00-Mon:03:00" 53 | backup_window = "03:00-06:00" 54 | enabled_cloudwatch_logs_exports = ["slowquery"] 55 | create_cloudwatch_log_group = true 56 | 57 | backup_retention_period = 0 58 | skip_final_snapshot = true 59 | deletion_protection = false 60 | 61 | performance_insights_enabled = false 62 | performance_insights_retention_period = 7 63 | 64 | parameters = [ 65 | { 66 | name = "character_set_client" 67 | value = "utf8mb4" 68 | }, 69 | { 70 | name = "character_set_server" 71 | value = "utf8mb4" 72 | }, 73 | { 74 | name = "slow_query_log" 75 | value = "1" 76 | }, 77 | { 78 | name = "long_query_time" 79 | value = "0" 80 | }, 81 | { 82 | name = "log_output" 83 | value = "FILE" 84 | } 85 | ] 86 | } 87 | -------------------------------------------------------------------------------- /examples/rds-logs/README.md: -------------------------------------------------------------------------------- 1 | # TERRAFORM AWS RDS LOGS INTEGRATION 2 | 3 | This repo contains a module for resources in [AWS](https://aws.amazon.com/) using [Terraform](https://www.terraform.io/) to send logs from AWS RDS to [Honeycomb](https://www.honeycomb.io/). 4 | 5 | ## How does this work? 
6 | 7 | ![AWS RDS Integration overview](https://github.com/honeycombio/terraform-aws-integrations/blob/main/docs/rds-logs-overview.png?raw=true) 8 | 9 | All required resources to set up an integration pipeline to take RDS logs from CloudWatch log groups and send them to Honeycomb can be created and managed via this module. 10 | 11 | ## Use 12 | 13 | First, add the minimal Terraform configuration, which includes the required fields: 14 | 15 | ```hcl 16 | module "honeycomb-aws-rds-logs-integration" { 17 | source = "honeycombio/integrations/aws//modules/rds-logs" 18 | 19 | name = "rds-logs-integration" 20 | db_engine = "mysql" 21 | db_name = "mysql-db-name" 22 | db_log_types = ["slowquery"] 23 | honeycomb_api_key = var.honeycomb_api_key // Your Honeycomb team's API key 24 | honeycomb_dataset_name = "rds-mysql-logs" 25 | 26 | s3_failure_bucket_arn = var.s3_bucket_arn // The full ARN of the bucket storing Kinesis Firehose failure logs. 27 | } 28 | ``` 29 | 30 | Then, set the Honeycomb API key for Terraform to use by exporting the `TF_VAR_honeycomb_api_key` environment variable. 31 | 32 | ```bash 33 | export TF_VAR_honeycomb_api_key=$HONEYCOMB_API_KEY 34 | ``` 35 | 36 | Then, set up AWS credentials for the intended AWS account where the resources will be created and managed. 37 | Please refer to [Terraform documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration) for more details and options. 38 | 39 | ```bash 40 | export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID 41 | export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY 42 | export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION 43 | ``` 44 | 45 | Now you can run `terraform plan/apply` in sequence. 46 | 47 | For more configuration options, refer to [USAGE.md](https://github.com/honeycombio/terraform-aws-integrations/blob/main/USAGE.md). 48 | 49 | ## Examples 50 | 51 | Examples that use this module can be found in [`examples/`](https://github.com/honeycombio/terraform-aws-integrations/tree/main/examples/rds-logs). 52 | 53 | ## Development 54 | 55 | Refer to our [development documentation](https://github.com/honeycombio/terraform-aws-integrations#development) for details. 56 | 57 | ## Contributions 58 | 59 | Features, bug fixes and other changes to this module are gladly accepted. 60 | Please open issues or a pull request with your change. 61 | 62 | All contributions will be released under the Apache License 2.0. 63 | -------------------------------------------------------------------------------- /modules/rds-logs/README.md: -------------------------------------------------------------------------------- 1 | # TERRAFORM AWS RDS LOGS INTEGRATION 2 | 3 | This repo contains a module for resources in [AWS](https://aws.amazon.com/) using [Terraform](https://www.terraform.io/) to send logs from AWS RDS to [Honeycomb](https://www.honeycomb.io/). 4 | 5 | ## How does this work? 6 | 7 | ![AWS RDS Integration overview](https://github.com/honeycombio/terraform-aws-integrations/blob/main/docs/rds-logs-overview.png?raw=true) 8 | 9 | All required resources to set up an integration pipeline to take RDS logs from CloudWatch log groups and send them to Honeycomb can be created and managed via this module. 
10 | 11 | ## Use 12 | 13 | First, add the minimal Terraform configuration, which includes the required fields: 14 | 15 | ```hcl 16 | module "honeycomb-aws-rds-logs-integration" { 17 | source = "honeycombio/integrations/aws//modules/rds-logs" 18 | 19 | name = "rds-logs-integration" 20 | db_engine = "mysql" 21 | db_name = "mysql-db-name" 22 | db_log_types = ["slowquery"] 23 | honeycomb_api_key = var.honeycomb_api_key // Your Honeycomb team's API key 24 | honeycomb_dataset_name = "rds-mysql-logs" 25 | 26 | s3_failure_bucket_arn = var.s3_bucket_arn // The full ARN of the bucket storing Kinesis Firehose failure logs. 27 | } 28 | ``` 29 | 30 | Then, set the Honeycomb API key for Terraform to use by exporting the `TF_VAR_honeycomb_api_key` environment variable. 31 | 32 | ```bash 33 | export TF_VAR_honeycomb_api_key=$HONEYCOMB_API_KEY 34 | ``` 35 | 36 | Then, set up AWS credentials for the intended AWS account where the resources will be created and managed. 37 | Please refer to [Terraform documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration) for more details and options. 38 | 39 | ```bash 40 | export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID 41 | export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY 42 | export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION 43 | ``` 44 | 45 | Now you can run `terraform plan/apply` in sequence. 46 | 47 | For more configuration options, refer to [USAGE.md](https://github.com/honeycombio/terraform-aws-integrations/blob/main/USAGE.md). 48 | 49 | ## Examples 50 | 51 | Examples that use this module can be found in [`examples/`](https://github.com/honeycombio/terraform-aws-integrations/tree/main/examples/rds-logs). 52 | 53 | ## Development 54 | 55 | Refer to our [development documentation](https://github.com/honeycombio/terraform-aws-integrations#development) for details. 56 | 57 | ## Contributions 58 | 59 | Features, bug fixes and other changes to this module are gladly accepted. 60 | Please open issues or a pull request with your change. 61 | 62 | All contributions will be released under the Apache License 2.0. 63 | -------------------------------------------------------------------------------- /examples/rds-logs/mysql/README.md: -------------------------------------------------------------------------------- 1 | # TERRAFORM AWS RDS LOGS INTEGRATION 2 | 3 | This repo contains a module for resources in [AWS](https://aws.amazon.com/) using [Terraform](https://www.terraform.io/) to send logs from AWS RDS to [Honeycomb](https://www.honeycomb.io/). 4 | 5 | ## How does this work? 6 | 7 | ![AWS RDS Integration overview](https://github.com/honeycombio/terraform-aws-integrations/blob/main/docs/rds-logs-overview.png?raw=true) 8 | 9 | All required resources to set up an integration pipeline to take RDS logs from CloudWatch log groups and send them to Honeycomb can be created and managed via this module. 10 | 11 | ## Use 12 | 13 | First, add the minimal Terraform configuration, which includes the required fields: 14 | 15 | ```hcl 16 | module "honeycomb-aws-rds-logs-integration" { 17 | source = "honeycombio/integrations/aws//modules/rds-logs" 18 | 19 | name = "rds-logs-integration" 20 | db_engine = "mysql" 21 | db_name = "mysql-db-name" 22 | db_log_types = ["slowquery"] 23 | honeycomb_api_key = var.honeycomb_api_key // Your Honeycomb team's API key 24 | honeycomb_dataset_name = "rds-mysql-logs" 25 | 26 | s3_failure_bucket_arn = var.s3_bucket_arn // The full ARN of the bucket storing Kinesis Firehose failure logs. 
27 | } 28 | ``` 29 | 30 | Then, set the Honeycomb API key for Terraform to use by setting the `HONEYCOMB_API_KEY` environment variable. 31 | 32 | ```bash 33 | export TF_VAR_HONEYCOMB_API_KEY=$HONEYCOMB_API_KEY 34 | ``` 35 | 36 | Then, set up AWS credentials for the intended AWS account where the resources will be created and managed. 37 | Please refer to [Terraform documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration) for more details and options. 38 | 39 | ```bash 40 | export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID 41 | export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY 42 | export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION 43 | ``` 44 | 45 | Now you can run `terraform plan` and `terraform apply` in sequence. 46 | 47 | For more configuration options, refer to [USAGE.md](https://github.com/honeycombio/terraform-aws-integrations/blob/main/USAGE.md). 48 | 49 | ## Examples 50 | 51 | Examples that use this module can be found in [`examples/`](https://github.com/honeycombio/terraform-aws-integrations/tree/main/examples/rds-logs). 52 | 53 | ## Development 54 | 55 | Refer to our [development documentation](https://github.com/honeycombio/terraform-aws-integrations#development) for details. 56 | 57 | ## Contributions 58 | 59 | Features, bug fixes and other changes to this module are gladly accepted. 60 | Please open issues or a pull request with your change. 61 | 62 | All contributions will be released under the Apache License 2.0. 63 | -------------------------------------------------------------------------------- /examples/rds-logs/postgresql/README.md: -------------------------------------------------------------------------------- 1 | # TERRAFORM AWS RDS LOGS INTEGRATION 2 | 3 | This repo contains a module for resources in [AWS](https://aws.amazon.com/) using [Terraform](https://www.terraform.io/) to send logs from AWS RDS to [Honeycomb](https://www.honeycomb.io/). 4 | 5 | ## How does this work? 6 | 7 | ![AWS RDS Integration overview](https://github.com/honeycombio/terraform-aws-integrations/blob/main/docs/rds-logs-overview.png?raw=true) 8 | 9 | All required resources to set up an integration pipeline to take RDS logs from CloudWatch log groups and send them to Honeycomb can be created and managed via this module. 10 | 11 | ## Use 12 | 13 | First, add the minimal Terraform configuration, which includes the required fields: 14 | 15 | ```hcl 16 | module "honeycomb-aws-rds-logs-integration" { 17 | source = "honeycombio/integrations/aws//modules/rds-logs" 18 | 19 | name = "rds-logs-integration" 20 | db_engine = "postgresql" 21 | db_name = "postgresql-db-name" 22 | db_log_types = ["postgresql"] 23 | honeycomb_api_key = var.honeycomb_api_key // Your Honeycomb team's API key 24 | honeycomb_dataset_name = "rds-postgresql-logs" 25 | 26 | s3_failure_bucket_arn = var.s3_bucket_arn // The full ARN of the bucket storing Kinesis Firehose failure logs. 27 | } 28 | ``` 29 | 30 | Then, set the Honeycomb API key for Terraform to use by setting the `HONEYCOMB_API_KEY` environment variable. 31 | 32 | ```bash 33 | export TF_VAR_HONEYCOMB_API_KEY=$HONEYCOMB_API_KEY 34 | ``` 35 | 36 | Then, set up AWS credentials for the intended AWS account where the resources will be created and managed. 37 | Please refer to [Terraform documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration) for more details and options.
38 | 39 | ```bash 40 | export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID 41 | export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY 42 | export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION 43 | ``` 44 | 45 | Now you can run `terraform plan` and `terraform apply` in sequence. 46 | 47 | For more configuration options, refer to [USAGE.md](https://github.com/honeycombio/terraform-aws-integrations/blob/main/USAGE.md). 48 | 49 | ## Examples 50 | 51 | Examples that use this module can be found in [`examples/`](https://github.com/honeycombio/terraform-aws-integrations/tree/main/examples/rds-logs). 52 | 53 | ## Development 54 | 55 | Refer to our [development documentation](https://github.com/honeycombio/terraform-aws-integrations#development) for details. 56 | 57 | ## Contributions 58 | 59 | Features, bug fixes and other changes to this module are gladly accepted. 60 | Please open issues or a pull request with your change. 61 | 62 | All contributions will be released under the Apache License 2.0. 63 | -------------------------------------------------------------------------------- /examples/s3-logfile/README.md: -------------------------------------------------------------------------------- 1 | # TERRAFORM AWS S3 LOGS IN A BUCKET 2 | 3 | This repo contains a module for resources in [AWS](https://aws.amazon.com/) using [Terraform](https://www.terraform.io/) to send logs from an AWS S3 Bucket to [Honeycomb](https://www.honeycomb.io/). 4 | 5 | ## How does this work? 6 | 7 | ![AWS Logs from a bucket Integration overview](https://github.com/honeycombio/terraform-aws-integrations/blob/main/docs/s3-logfile-overview.png?raw=true) 8 | 9 | All required resources to set up an integration pipeline to take logs from an S3 bucket and send them to 10 | Honeycomb can be created and managed via this module. 11 | 12 | ## Use 13 | 14 | First, add the minimal Terraform configuration, which includes the required fields: 15 | 16 | ```hcl 17 | module "logs_from_a_bucket_integrations" { 18 | source = "honeycombio/integrations/aws//modules/s3-logfile" 19 | name = var.logs_integration_name 20 | 21 | parser_type = var.parser_type // valid types are alb, elb, cloudfront, vpc-flow-log, s3-access, json, and keyval 22 | s3_bucket_arn = var.s3_bucket_arn // The full ARN of the bucket storing the logs. 23 | 24 | 25 | honeycomb_api_key = var.honeycomb_api_key // Your Honeycomb team's API key. 26 | honeycomb_dataset_name = "alb-logs" // Your Honeycomb dataset name that will receive the logs. 27 | } 28 | ``` 29 | 30 | Then, set the Honeycomb API key for Terraform to use by setting the `HONEYCOMB_API_KEY` environment variable. 31 | 32 | ```bash 33 | export TF_VAR_HONEYCOMB_API_KEY=$HONEYCOMB_API_KEY 34 | ``` 35 | 36 | Then, set up AWS credentials for the intended AWS account where the resources will be created and managed. 37 | Please refer to [Terraform documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration) for more details and options. 38 | 39 | ```bash 40 | export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID 41 | export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY 42 | export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION 43 | ``` 44 | 45 | Now you can run `terraform plan` and `terraform apply` in sequence. 46 | 47 | For more configuration options, refer to [USAGE.md](https://github.com/honeycombio/terraform-aws-integrations/blob/main/USAGE.md). 48 | 49 | ## Examples 50 | 51 | Examples that use this module can be found in [`examples/`](https://github.com/honeycombio/terraform-aws-integrations/tree/main/examples/s3-logfile).
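If the bucket holds more than just the target logs, the module's optional inputs can scope the S3 event notification that triggers the parser Lambda. A short sketch; the prefix, suffix, and rate values here are placeholders, not recommendations:

```hcl
module "logs_from_a_bucket_integrations" {
  source = "honeycombio/integrations/aws//modules/s3-logfile"
  name   = var.logs_integration_name

  parser_type   = "alb"
  s3_bucket_arn = var.s3_bucket_arn

  # Only objects matching both the prefix and the suffix fire the notification.
  s3_filter_prefix = "alb-logs/"
  s3_filter_suffix = ".log.gz"

  # Optional sampling to reduce event volume.
  sample_rate = 10

  honeycomb_api_key      = var.honeycomb_api_key
  honeycomb_dataset_name = "alb-logs"
}
```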
52 | 53 | ## Development 54 | 55 | Refer to our [development documentation](https://github.com/honeycombio/terraform-aws-integrations#development) for details. 56 | 57 | ## Contributions 58 | 59 | Features, bug fixes and other changes to this module are gladly accepted. 60 | Please open issues or a pull request with your change. 61 | 62 | All contributions will be released under the Apache License 2.0. 63 | -------------------------------------------------------------------------------- /examples/cloudwatch-metrics/README.md: -------------------------------------------------------------------------------- 1 | # TERRAFORM AWS CLOUDWATCH METRICS INTEGRATION 2 | 3 | This repo contains a module for resources in [AWS](https://aws.amazon.com/) using [Terraform](https://www.terraform.io/) to send CloudWatch Metrics to [Honeycomb](https://www.honeycomb.io/). 4 | 5 | ## How does this work? 6 | 7 | ![AWS CloudWatch metrics Integration overview](https://github.com/honeycombio/terraform-aws-integrations/blob/main/docs/cloudwatch-metrics-overview.png?raw=true) 8 | 9 | All required resources to set up an integration pipeline to take metrics from CloudWatch and send them to Honeycomb can be created and managed via this module. 10 | 11 | ## Use 12 | 13 | First, add the minimal Terraform configuration, which includes the required fields: 14 | 15 | ```hcl 16 | module "honeycomb-aws-cloudwatch-metrics-integration" { 17 | source = "honeycombio/integrations/aws//modules/cloudwatch-metrics" 18 | 19 | name = var.cloudwatch_metrics_integration_name // A name for the Integration. 20 | 21 | honeycomb_api_key = var.HONEYCOMB_API_KEY // Honeycomb API key. 22 | honeycomb_dataset_name = "cloudwatch-metrics" // Your Honeycomb dataset name that will receive the metrics. 23 | 24 | s3_failure_bucket_arn = var.s3_bucket_arn // An S3 bucket that will store any metrics that failed to be sent to Honeycomb. 25 | } 26 | ``` 27 | 28 | Then, set the Honeycomb API key for Terraform to use by setting the `HONEYCOMB_API_KEY` environment variable. 29 | 30 | ```bash 31 | export TF_VAR_HONEYCOMB_API_KEY=$HONEYCOMB_API_KEY 32 | ``` 33 | 34 | Then, set up AWS credentials for the intended AWS account where the resources will be created and managed. 35 | Please refer to [Terraform documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration) for more details and options. 36 | 37 | ```bash 38 | export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID 39 | export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY 40 | export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION 41 | ``` 42 | 43 | Now you can run `terraform plan` and `terraform apply` in sequence. 44 | 45 | For more configuration options, refer to [USAGE.md](https://github.com/honeycombio/terraform-aws-integrations/blob/main/USAGE.md). 46 | 47 | ## Examples 48 | 49 | Examples that use this module can be found in [`examples/`](https://github.com/honeycombio/terraform-aws-integrations/tree/main/examples/cloudwatch-metrics). 50 | 51 | ## Development 52 | 53 | Refer to our [development documentation](https://github.com/honeycombio/terraform-aws-integrations#development) for details. 54 | 55 | ## Contributions 56 | 57 | Features, bug fixes and other changes to this module are gladly accepted. 58 | Please open issues or a pull request with your change. 59 | 60 | All contributions will be released under the Apache License 2.0.
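In addition to the minimal configuration above, the module accepts filters so that only selected namespaces and metrics are streamed. A sketch based on the `include_filters` shape used in this repo's tests; the namespaces and metric names are examples only:

```hcl
module "honeycomb-aws-cloudwatch-metrics-integration" {
  source = "honeycombio/integrations/aws//modules/cloudwatch-metrics"

  name                   = var.cloudwatch_metrics_integration_name
  honeycomb_api_key      = var.HONEYCOMB_API_KEY
  honeycomb_dataset_name = "cloudwatch-metrics"
  s3_failure_bucket_arn  = var.s3_bucket_arn

  include_filters = [
    {
      namespace    = "AWS/RDS"
      metric_names = [] # an empty list includes all metrics in the namespace
    },
    {
      namespace    = "AWS/EC2"
      metric_names = ["CPUUtilization", "NetworkIn", "NetworkOut"]
    }
  ]
}
```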
61 | -------------------------------------------------------------------------------- /modules/cloudwatch-metrics/README.md: -------------------------------------------------------------------------------- 1 | # TERRAFORM AWS CLOUDWATCH METRICS INTEGRATION 2 | 3 | This repo contains a module for resources in [AWS](https://aws.amazon.com/) using [Terraform](https://www.terraform.io/) to send CloudWatch Metrics to [Honeycomb](https://www.honeycomb.io/). 4 | 5 | ## How does this work? 6 | 7 | ![AWS CloudWatch metrics Integration overview](https://github.com/honeycombio/terraform-aws-integrations/blob/main/docs/cloudwatch-metrics-overview.png?raw=true) 8 | 9 | All required resources to set up an integration pipeline to take metrics from CloudWatch and send them to Honeycomb can be created and managed via this module. 10 | 11 | ## Use 12 | 13 | First, add the minimal Terraform configuration, which includes the required fields: 14 | 15 | ```hcl 16 | module "honeycomb-aws-cloudwatch-metrics-integration" { 17 | source = "honeycombio/integrations/aws//modules/cloudwatch-metrics" 18 | 19 | name = var.cloudwatch_metrics_integration_name // A name for the Integration. 20 | 21 | honeycomb_api_key = var.HONEYCOMB_API_KEY // Honeycomb API key. 22 | honeycomb_dataset_name = "cloudwatch-metrics" // Your Honeycomb dataset name that will receive the metrics. 23 | 24 | s3_failure_bucket_arn = var.s3_bucket_arn // An S3 bucket that will store any metrics that failed to be sent to Honeycomb. 25 | } 26 | ``` 27 | 28 | Then, set the Honeycomb API key for Terraform to use by setting the `HONEYCOMB_API_KEY` environment variable. 29 | 30 | ```bash 31 | export TF_VAR_HONEYCOMB_API_KEY=$HONEYCOMB_API_KEY 32 | ``` 33 | 34 | Then, set up AWS credentials for the intended AWS account where the resources will be created and managed. 35 | Please refer to [Terraform documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration) for more details and options. 36 | 37 | ```bash 38 | export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID 39 | export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY 40 | export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION 41 | ``` 42 | 43 | Now you can run `terraform plan` and `terraform apply` in sequence. 44 | 45 | For more configuration options, refer to [USAGE.md](https://github.com/honeycombio/terraform-aws-integrations/blob/main/USAGE.md). 46 | 47 | ## Examples 48 | 49 | Examples that use this module can be found in [`examples/`](https://github.com/honeycombio/terraform-aws-integrations/tree/main/examples/cloudwatch-metrics). 50 | 51 | ## Development 52 | 53 | Refer to our [development documentation](https://github.com/honeycombio/terraform-aws-integrations#development) for details. 54 | 55 | ## Contributions 56 | 57 | Features, bug fixes and other changes to this module are gladly accepted. 58 | Please open issues or a pull request with your change. 59 | 60 | All contributions will be released under the Apache License 2.0. 61 | -------------------------------------------------------------------------------- /examples/cloudwatch-logs/README.md: -------------------------------------------------------------------------------- 1 | # TERRAFORM AWS CLOUDWATCH LOGS INTEGRATION 2 | 3 | This repo contains a module for resources in [AWS](https://aws.amazon.com/) using [Terraform](https://www.terraform.io/) to send CloudWatch Logs to [Honeycomb](https://www.honeycomb.io/). 4 | 5 | ## How does this work?
6 | 7 | ![AWS CloudWatch Logs Integration overview](https://github.com/honeycombio/terraform-aws-integrations/blob/main/docs/cloudwatch-logs-overview.png?raw=true) 8 | 9 | All required resources to set up an integration pipeline to take logs from a CloudWatch Log group and send them to Honeycomb can be created and managed via this module. 10 | 11 | ## Use 12 | 13 | First, add the minimal Terraform configuration, which includes the required fields: 14 | 15 | ```hcl 16 | module "honeycomb-aws-cloudwatch-logs-integration" { 17 | source = "honeycombio/integrations/aws//modules/cloudwatch-logs" 18 | 19 | name = var.cloudwatch_logs_integration_name // A name for the Integration. 20 | 21 | #aws cloudwatch integration 22 | cloudwatch_log_groups = ["/aws/lambda/S3LambdaHandler-test"] // CloudWatch Log Group names to stream to Honeycomb. 23 | s3_failure_bucket_arn = var.s3_bucket_name 24 | // S3 bucket ARN that will store any logs that failed to be sent to Honeycomb. 25 | 26 | #honeycomb 27 | honeycomb_api_key = var.HONEYCOMB_API_KEY // Honeycomb API key. 28 | honeycomb_dataset_name = "cloudwatch-logs" // Your Honeycomb dataset name that will receive the logs. 29 | } 30 | ``` 31 | 32 | Then, set the Honeycomb API key for Terraform to use by setting the `HONEYCOMB_API_KEY` environment variable. 33 | 34 | ```bash 35 | export TF_VAR_HONEYCOMB_API_KEY=$HONEYCOMB_API_KEY 36 | ``` 37 | 38 | Then, set up AWS credentials for the intended AWS account where the resources will be created and managed. 39 | Please refer to [Terraform documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration) for more details and options. 40 | 41 | ```bash 42 | export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID 43 | export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY 44 | export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION 45 | ``` 46 | 47 | Now you can run `terraform plan` and `terraform apply` in sequence. 48 | 49 | For more configuration options, refer to [USAGE.md](https://github.com/honeycombio/terraform-aws-integrations/blob/main/USAGE.md). 50 | 51 | ## Examples 52 | 53 | Examples that use this module can be found in [`examples/`](https://github.com/honeycombio/terraform-aws-integrations/tree/main/examples/cloudwatch-logs). 54 | 55 | ## Development 56 | 57 | Refer to our [development documentation](https://github.com/honeycombio/terraform-aws-integrations#development) for details. 58 | 59 | ## Contributions 60 | 61 | Features, bug fixes and other changes to this module are gladly accepted. 62 | Please open issues or a pull request with your change. 63 | 64 | All contributions will be released under the Apache License 2.0. 65 | -------------------------------------------------------------------------------- /modules/cloudwatch-logs/README.md: -------------------------------------------------------------------------------- 1 | # TERRAFORM AWS CLOUDWATCH LOGS INTEGRATION 2 | 3 | This repo contains a module for resources in [AWS](https://aws.amazon.com/) using [Terraform](https://www.terraform.io/) to send CloudWatch Logs to [Honeycomb](https://www.honeycomb.io/). 4 | 5 | ## How does this work? 6 | 7 | ![AWS CloudWatch Logs Integration overview](https://github.com/honeycombio/terraform-aws-integrations/blob/main/docs/cloudwatch-logs-overview.png?raw=true) 8 | 9 | All required resources to set up an integration pipeline to take logs from a CloudWatch Log group and send them to Honeycomb can be created and managed via this module.
10 | 11 | ## Use 12 | 13 | First, add the minimal Terraform configuration, which includes the required fields: 14 | 15 | ```hcl 16 | module "honeycomb-aws-cloudwatch-logs-integration" { 17 | source = "honeycombio/integrations/aws//modules/cloudwatch-logs" 18 | 19 | name = var.cloudwatch_logs_integration_name // A name for the Integration. 20 | 21 | #aws cloudwatch integration 22 | cloudwatch_log_groups = ["/aws/lambda/S3LambdaHandler-test"] // CloudWatch Log Group names to stream to Honeycomb. 23 | s3_failure_bucket_arn = var.s3_bucket_name 24 | // S3 bucket ARN that will store any logs that failed to be sent to Honeycomb. 25 | 26 | #honeycomb 27 | honeycomb_api_key = var.HONEYCOMB_API_KEY // Honeycomb API key. 28 | honeycomb_dataset_name = "cloudwatch-logs" // Your Honeycomb dataset name that will receive the logs. 29 | } 30 | ``` 31 | 32 | Then, set the Honeycomb API key for Terraform to use by setting the `HONEYCOMB_API_KEY` environment variable. 33 | 34 | ```bash 35 | export TF_VAR_HONEYCOMB_API_KEY=$HONEYCOMB_API_KEY 36 | ``` 37 | 38 | Then, set up AWS credentials for the intended AWS account where the resources will be created and managed. 39 | Please refer to [Terraform documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration) for more details and options. 40 | 41 | ```bash 42 | export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID 43 | export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY 44 | export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION 45 | ``` 46 | 47 | Now you can run `terraform plan` and `terraform apply` in sequence. 48 | 49 | For more configuration options, refer to [USAGE.md](https://github.com/honeycombio/terraform-aws-integrations/blob/main/USAGE.md). 50 | 51 | ## Examples 52 | 53 | Examples that use this module can be found in [`examples/`](https://github.com/honeycombio/terraform-aws-integrations/tree/main/examples/cloudwatch-logs). 54 | 55 | ## Development 56 | 57 | Refer to our [development documentation](https://github.com/honeycombio/terraform-aws-integrations#development) for details. 58 | 59 | ## Contributions 60 | 61 | Features, bug fixes and other changes to this module are gladly accepted. 62 | Please open issues or a pull request with your change. 63 | 64 | All contributions will be released under the Apache License 2.0.
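To stream only a subset of events from each log group, the module also exposes `log_subscription_filter_pattern`, which defaults to an empty string (match everything). A short sketch; the pattern shown is just an example of standard CloudWatch Logs filter syntax:

```hcl
module "honeycomb-aws-cloudwatch-logs-integration" {
  source = "honeycombio/integrations/aws//modules/cloudwatch-logs"

  name                  = "cloudwatch-logs-errors-only"
  cloudwatch_log_groups = ["/aws/lambda/S3LambdaHandler-test"]
  s3_failure_bucket_arn = var.s3_bucket_arn

  # Only events matching the pattern are sent on to the Firehose stream.
  log_subscription_filter_pattern = "?ERROR ?WARN"

  honeycomb_api_key      = var.HONEYCOMB_API_KEY
  honeycomb_dataset_name = "cloudwatch-logs"
}
```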
65 | -------------------------------------------------------------------------------- /examples/rds-logs/mysql/main.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | db_name = "tf-integrations-rds-mysql-${random_pet.this.id}" 3 | } 4 | 5 | module "mysql_logs" { 6 | source = "honeycombio/integrations/aws//modules/rds-logs" 7 | 8 | name = "rds-logs-${random_pet.this.id}" 9 | db_engine = "mysql" 10 | db_name = local.db_name 11 | db_log_types = ["slowquery"] # valid types for mysql include general, error, slowquery (audit logs not supported) 12 | honeycomb_api_host = var.honeycomb_api_host 13 | honeycomb_api_key = var.honeycomb_api_key 14 | honeycomb_dataset_name = "rds-mysql-logs" 15 | # firehose failure logs can be found here for troubleshooting 16 | s3_failure_bucket_arn = module.firehose_failure_bucket.s3_bucket_arn 17 | } 18 | 19 | # dependencies 20 | 21 | resource "random_pet" "this" { 22 | length = 2 23 | } 24 | 25 | data "aws_vpc" "default" { 26 | default = true 27 | } 28 | 29 | data "aws_subnets" "default" { 30 | filter { 31 | name = "vpc-id" 32 | values = [data.aws_vpc.default.id] 33 | } 34 | } 35 | 36 | data "aws_security_group" "default" { 37 | vpc_id = data.aws_vpc.default.id 38 | filter { 39 | name = "group-name" 40 | values = ["default"] 41 | } 42 | } 43 | 44 | module "rds_mysql" { 45 | source = "terraform-aws-modules/rds/aws" 46 | 47 | identifier = local.db_name 48 | 49 | # All available versions: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html#MySQL.Concepts.VersionMgmt 50 | engine = "mysql" 51 | engine_version = "8.0.27" 52 | family = "mysql8.0" 53 | major_engine_version = "8.0" 54 | instance_class = "db.t3.micro" 55 | 56 | 57 | allocated_storage = 20 58 | max_allocated_storage = 100 59 | 60 | db_name = replace(local.db_name, "-", "") 61 | username = "tfuser" 62 | port = 3306 63 | 64 | multi_az = false 65 | subnet_ids = data.aws_subnets.default.ids 66 | vpc_security_group_ids = ["${data.aws_security_group.default.id}"] 67 | 68 | maintenance_window = "Mon:00:00-Mon:03:00" 69 | backup_window = "03:00-06:00" 70 | enabled_cloudwatch_logs_exports = ["slowquery"] 71 | create_cloudwatch_log_group = true 72 | 73 | backup_retention_period = 0 74 | skip_final_snapshot = true 75 | deletion_protection = false 76 | 77 | performance_insights_enabled = false 78 | performance_insights_retention_period = 7 79 | 80 | parameters = [ 81 | { 82 | name = "character_set_client" 83 | value = "utf8mb4" 84 | }, 85 | { 86 | name = "character_set_server" 87 | value = "utf8mb4" 88 | }, 89 | { 90 | name = "slow_query_log" 91 | value = "1" 92 | }, 93 | { 94 | name = "long_query_time" 95 | value = "0" # define what you consider to be a long query here, in seconds 96 | }, 97 | { 98 | name = "log_output" 99 | value = "FILE" 100 | } 101 | ] 102 | } 103 | 104 | module "firehose_failure_bucket" { 105 | source = "terraform-aws-modules/s3-bucket/aws" 106 | version = "~> 3.0" 107 | 108 | bucket = "honeycomb-tf-integrations-failures-${random_pet.this.id}" 109 | force_destroy = true 110 | } 111 | -------------------------------------------------------------------------------- /modules/rds-logs/main.tf: -------------------------------------------------------------------------------- 1 | data "aws_caller_identity" "current" {} 2 | data "aws_region" "current" {} 3 | 4 | locals { 5 | account_id = data.aws_caller_identity.current.account_id 6 | region = data.aws_region.current.region 7 | is_aurora = trimprefix(var.db_engine, "aurora") != 
var.db_engine // startswith() only available in TF >=1.3 8 | log_group_prefix = local.is_aurora ? "aws/rds/cluster" : "aws/rds/instance" 9 | log_groups = [for log_type in var.db_log_types : "/${local.log_group_prefix}/${var.db_name}/${log_type}"] 10 | enable_lambda_transform = var.db_engine == "mysql" || var.db_engine == "postgresql" 11 | tags = merge(var.tags, { 12 | "Honeycomb Agentless" = true, 13 | "Terraform" = true, 14 | }) 15 | } 16 | 17 | data "aws_iam_policy_document" "lambda" { 18 | statement { 19 | actions = ["firehose:PutRecordBatch"] 20 | resources = ["arn:aws:firehose:${local.region}:${local.account_id}:deliverystream/${var.name}"] 21 | } 22 | } 23 | 24 | resource "aws_iam_policy" "lambda" { 25 | count = local.enable_lambda_transform ? 1 : 0 26 | description = "Honeycomb RDS Lambda Transform" 27 | policy = data.aws_iam_policy_document.lambda.json 28 | } 29 | 30 | module "rds_lambda_transform" { 31 | source = "terraform-aws-modules/lambda/aws" 32 | version = "4.18.0" 33 | 34 | count = local.enable_lambda_transform ? 1 : 0 35 | 36 | function_name = "${var.name}-honeycomb-rds-${var.db_engine}-log-parser" 37 | description = "Parses RDS logs coming off of Kinesis Firehose, sending them back to the Firehose as structured JSON events." 38 | handler = "rds-${var.db_engine}-kfh-transform" 39 | runtime = "provided.al2" 40 | architectures = var.lambda_function_architecture == "amd64" ? ["x86_64"] : ["arm64"] 41 | memory_size = var.lambda_function_memory 42 | timeout = var.lambda_function_timeout 43 | 44 | create_package = false 45 | s3_existing_package = { 46 | bucket = coalesce(var.lambda_package_bucket, "honeycomb-integrations-${data.aws_region.current.region}") 47 | key = coalesce(var.lambda_package_key, "agentless-integrations-for-aws/${var.agentless_integrations_version}/rds-${var.db_engine}-kfh-transform-${var.lambda_function_architecture}.zip") 48 | } 49 | 50 | attach_policy = true 51 | policy = aws_iam_policy.lambda[count.index].arn 52 | 53 | tags = local.tags 54 | } 55 | 56 | module "cloudwatch_logs" { 57 | source = "../cloudwatch-logs" 58 | name = var.name 59 | 60 | cloudwatch_log_groups = local.log_groups 61 | honeycomb_api_key = var.honeycomb_api_key 62 | honeycomb_dataset_name = var.honeycomb_dataset_name 63 | honeycomb_api_host = var.honeycomb_api_host 64 | s3_failure_bucket_arn = var.s3_failure_bucket_arn 65 | enable_lambda_transform = local.enable_lambda_transform 66 | lambda_transform_arn = local.enable_lambda_transform ? module.rds_lambda_transform[0].lambda_function_arn : "" 67 | s3_buffer_size = var.s3_buffer_size 68 | s3_buffer_interval = var.s3_buffer_interval 69 | s3_backup_mode = var.s3_backup_mode 70 | s3_compression_format = var.s3_compression_format 71 | s3_force_destroy = var.s3_force_destroy 72 | http_buffering_size = var.http_buffering_size 73 | http_buffering_interval = var.http_buffering_interval 74 | tags = var.tags 75 | } 76 | -------------------------------------------------------------------------------- /modules/s3-logfile/main.tf: -------------------------------------------------------------------------------- 1 | data "aws_arn" "s3_bucket" { 2 | arn = var.s3_bucket_arn 3 | } 4 | 5 | data "aws_arn" "kms_key" { 6 | arn = var.kms_key_arn 7 | count = var.kms_key_arn == "" ? 
0 : 1 8 | } 9 | 10 | data "aws_region" "current" {} 11 | 12 | locals { 13 | tags = merge(var.tags, { 14 | "Honeycomb Agentless" = true, 15 | "Terraform" = true, 16 | }) 17 | } 18 | 19 | data "aws_iam_policy_document" "lambda" { 20 | statement { 21 | actions = ["s3:GetObject"] 22 | resources = ["${var.s3_bucket_arn}/*"] 23 | } 24 | dynamic "statement" { 25 | for_each = var.kms_key_arn != "" ? ["allow_kms"] : [] 26 | content { 27 | actions = ["kms:Decrypt"] 28 | resources = [trimprefix(var.kms_key_arn, "key/")] 29 | } 30 | } 31 | } 32 | 33 | resource "aws_iam_policy" "lambda" { 34 | description = "Honeycomb Agentless Lambda" 35 | policy = data.aws_iam_policy_document.lambda.json 36 | } 37 | 38 | module "s3_processor" { 39 | source = "terraform-aws-modules/lambda/aws" 40 | version = "4.18.0" 41 | 42 | function_name = var.name 43 | description = "Parses LB access logs from S3, sending them to Honeycomb as structured events" 44 | handler = "s3-handler" 45 | runtime = "provided.al2" 46 | architectures = var.lambda_function_architecture == "amd64" ? ["x86_64"] : ["arm64"] 47 | memory_size = var.lambda_function_memory 48 | timeout = var.lambda_function_timeout 49 | 50 | create_package = false 51 | s3_existing_package = { 52 | bucket = coalesce(var.lambda_package_bucket, "honeycomb-integrations-${data.aws_region.current.region}") 53 | key = coalesce(var.lambda_package_key, "agentless-integrations-for-aws/${var.agentless_integrations_version}/s3-handler-${var.lambda_function_architecture}.zip") 54 | } 55 | 56 | 57 | environment_variables = { 58 | PARSER_TYPE = var.parser_type 59 | FORCE_GUNZIP = true 60 | ENVIRONMENT = var.environment 61 | HONEYCOMB_WRITE_KEY = var.honeycomb_api_key 62 | KMS_KEY_ID = (var.kms_key_arn != "" ? data.aws_arn.kms_key[0].resource : "") 63 | API_HOST = var.honeycomb_api_host 64 | DATASET = var.honeycomb_dataset 65 | SAMPLE_RATE = var.sample_rate 66 | SAMPLE_RATE_RULES = jsonencode(var.sample_rate_rules) 67 | FILTER_FIELDS = join(",", var.filter_fields) 68 | RENAME_FIELDS = join(",", [for k, v in var.rename_fields : "${k}=${v}"]) 69 | LINE_FILTER_RULES = jsonencode(var.line_filter_rules) 70 | } 71 | 72 | attach_policy = true 73 | policy = aws_iam_policy.lambda.arn 74 | 75 | attach_network_policy = var.vpc_subnet_ids != null ? true : false 76 | vpc_subnet_ids = var.vpc_subnet_ids != null ? var.vpc_subnet_ids : null 77 | vpc_security_group_ids = var.vpc_security_group_ids != null ? 
var.vpc_security_group_ids : null 78 | 79 | tags = local.tags 80 | } 81 | 82 | module "log_bucket" { 83 | source = "terraform-aws-modules/s3-bucket/aws//modules/notification" 84 | version = "3.15.2" 85 | 86 | bucket = data.aws_arn.s3_bucket.resource 87 | 88 | lambda_notifications = { 89 | processor = { 90 | function_arn = module.s3_processor.lambda_function_arn 91 | function_name = module.s3_processor.lambda_function_name 92 | events = ["s3:ObjectCreated:*"] 93 | filter_prefix = var.s3_filter_prefix 94 | filter_suffix = var.s3_filter_suffix 95 | } 96 | } 97 | } 98 | -------------------------------------------------------------------------------- /examples/rds-logs/postgresql/main.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | db_name = "tf-integrations-rds-postgresql-${random_pet.this.id}" 3 | } 4 | 5 | module "postgresql_logs" { 6 | source = "honeycombio/integrations/aws//modules/rds-logs" 7 | 8 | name = "rds-logs-${random_pet.this.id}" 9 | db_engine = "postgresql" 10 | db_name = local.db_name 11 | db_log_types = ["postgresql"] # valid types for postgresql include postgresql and upgrade (only slow query logs will be structured) 12 | honeycomb_api_host = var.honeycomb_api_host 13 | honeycomb_api_key = var.honeycomb_api_key 14 | honeycomb_dataset_name = "rds-postgresql-logs" 15 | # firehose failure logs can be found here for troubleshooting 16 | s3_failure_bucket_arn = module.firehose_failure_bucket.s3_bucket_arn 17 | } 18 | 19 | # dependencies 20 | 21 | data "aws_vpc" "default" { 22 | default = true 23 | } 24 | 25 | data "aws_subnets" "default" { 26 | filter { 27 | name = "vpc-id" 28 | values = [data.aws_vpc.default.id] 29 | } 30 | } 31 | 32 | data "aws_security_group" "default" { 33 | vpc_id = data.aws_vpc.default.id 34 | filter { 35 | name = "group-name" 36 | values = ["default"] 37 | } 38 | } 39 | 40 | resource "random_pet" "this" { 41 | length = 2 42 | } 43 | 44 | module "rds_postgres" { 45 | source = "terraform-aws-modules/rds/aws" 46 | 47 | identifier = local.db_name 48 | 49 | # All available versions: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts 50 | engine = "postgres" 51 | engine_version = "14.1" 52 | family = "postgres14" # DB parameter group 53 | major_engine_version = "14" # DB option group 54 | instance_class = "db.t3.micro" 55 | 56 | allocated_storage = 20 57 | max_allocated_storage = 100 58 | 59 | db_name = replace(local.db_name, "-", "") 60 | username = "tfuser" 61 | port = 5432 62 | 63 | multi_az = false 64 | subnet_ids = data.aws_subnets.default.ids 65 | vpc_security_group_ids = ["${data.aws_security_group.default.id}"] 66 | 67 | maintenance_window = "Mon:00:00-Mon:03:00" 68 | backup_window = "03:00-06:00" 69 | enabled_cloudwatch_logs_exports = ["postgresql"] 70 | create_cloudwatch_log_group = true 71 | 72 | backup_retention_period = 1 73 | skip_final_snapshot = true 74 | deletion_protection = false 75 | 76 | performance_insights_enabled = true 77 | performance_insights_retention_period = 7 78 | create_monitoring_role = true 79 | monitoring_interval = 60 80 | monitoring_role_name = "example-monitoring-role-name" 81 | monitoring_role_use_name_prefix = true 82 | monitoring_role_description = "Description for monitoring role" 83 | 84 | parameters = [ 85 | { 86 | name = "autovacuum" 87 | value = 1 88 | }, 89 | { 90 | name = "client_encoding" 91 | value = "utf8" 92 | }, 93 | { 94 | name = "log_statement" 95 | value = "none" 96 | }, 97 | { 98 | name = 
"log_min_duration_statement" 99 | value = "0" # define what you consider to be a long query here, in milliseconds 100 | } 101 | ] 102 | } 103 | 104 | module "firehose_failure_bucket" { 105 | source = "terraform-aws-modules/s3-bucket/aws" 106 | version = "~> 3.0" 107 | 108 | bucket = "honeycomb-tf-integrations-failures-${random_pet.this.id}" 109 | force_destroy = true 110 | } 111 | -------------------------------------------------------------------------------- /main.tf: -------------------------------------------------------------------------------- 1 | # --------------------------------------------------------------------------------------------------------------------- 2 | # SETUP ALL SUPPORTED AWS TO HONEYCOMB INTEGRATIONS 3 | # These templates show an example of how to use the aws-honeycomb-integrations module to send observability data from 4 | # various AWS services to Honeycomb. We deploy two Auto 5 | # Note - the templates below reflect how to use the submodules available to setup integrations from Cloudwatch, S3 etc. 6 | # Required variables need to be provided. Please see docs for details on what is expected. 7 | # --------------------------------------------------------------------------------------------------------------------- 8 | 9 | data "aws_region" "current" {} 10 | 11 | locals { 12 | failure_bucket = replace(var.delivery_failure_s3_bucket_name, "{REGION}", data.aws_region.current.region) 13 | } 14 | 15 | module "failure_bucket" { 16 | source = "terraform-aws-modules/s3-bucket/aws" 17 | version = "~> 3.0" 18 | 19 | bucket = local.failure_bucket 20 | 21 | control_object_ownership = true 22 | object_ownership = "BucketOwnerEnforced" 23 | block_public_policy = true 24 | block_public_acls = true 25 | ignore_public_acls = true 26 | restrict_public_buckets = true 27 | } 28 | 29 | 30 | module "cloudwatch_logs" { 31 | source = "./modules/cloudwatch-logs" 32 | name = var.cloudwatch_logs_name 33 | 34 | count = length(var.cloudwatch_log_groups) > 0 ? 1 : 0 35 | cloudwatch_log_groups = var.cloudwatch_log_groups 36 | 37 | honeycomb_api_host = var.honeycomb_api_host 38 | honeycomb_api_key = var.honeycomb_api_key 39 | honeycomb_dataset_name = "cloudwatch-logs" 40 | 41 | s3_failure_bucket_arn = module.failure_bucket.s3_bucket_arn 42 | 43 | tags = var.tags 44 | } 45 | 46 | module "rds_logs" { 47 | source = "./modules/rds-logs" 48 | name = var.rds_logs_name 49 | 50 | count = var.enable_rds_logs ? 1 : 0 51 | 52 | db_name = var.rds_db_name 53 | db_engine = var.rds_db_engine 54 | db_log_types = var.rds_db_log_types 55 | honeycomb_api_host = var.honeycomb_api_host 56 | honeycomb_api_key = var.honeycomb_api_key 57 | honeycomb_dataset_name = "rds-${var.rds_db_engine}-logs" 58 | 59 | s3_failure_bucket_arn = module.failure_bucket.s3_bucket_arn 60 | 61 | tags = var.tags 62 | } 63 | 64 | module "cloudwatch_metrics" { 65 | source = "./modules/cloudwatch-metrics" 66 | name = var.cloudwatch_metrics_name 67 | 68 | count = var.enable_cloudwatch_metrics ? 1 : 0 69 | 70 | honeycomb_api_host = var.honeycomb_api_host 71 | honeycomb_api_key = var.honeycomb_api_key 72 | honeycomb_dataset_name = "cloudwatch-metrics" 73 | 74 | include_filters = var.cloudwatch_metrics_include_filters 75 | exclude_filters = var.cloudwatch_metrics_exclude_filters 76 | 77 | s3_failure_bucket_arn = module.failure_bucket.s3_bucket_arn 78 | 79 | tags = var.tags 80 | } 81 | 82 | module "s3_logfile" { 83 | source = "./modules/s3-logfile" 84 | name = var.s3_logfile_name 85 | 86 | count = var.s3_bucket_arn != "" ? 
1 : 0 87 | 88 | honeycomb_api_key = var.honeycomb_api_key 89 | honeycomb_api_host = var.honeycomb_api_host 90 | 91 | parser_type = var.s3_parser_type 92 | s3_bucket_arn = var.s3_bucket_arn 93 | s3_filter_prefix = var.s3_filter_prefix 94 | s3_filter_suffix = var.s3_filter_suffix 95 | sample_rate = var.sample_rate 96 | 97 | vpc_subnet_ids = var.vpc_subnet_ids != null ? var.vpc_subnet_ids : null 98 | vpc_security_group_ids = var.vpc_security_group_ids != null ? var.vpc_security_group_ids : null 99 | 100 | tags = var.tags 101 | } 102 | -------------------------------------------------------------------------------- /modules/s3-logfile/README.md: -------------------------------------------------------------------------------- 1 | # TERRAFORM AWS S3 LOGS IN A BUCKET 2 | 3 | This repo contains a module for resources in [AWS](https://aws.amazon.com/) using [Terraform](https://www.terraform.io/) to send logs from an AWS S3 Bucket to [Honeycomb](https://www.honeycomb.io/). 4 | 5 | ## How does this work? 6 | 7 | ![AWS Logs from a bucket Integration overview](https://github.com/honeycombio/terraform-aws-integrations/blob/main/docs/s3-logfile-overview.png?raw=true) 8 | 9 | All required resources to set up an integration pipeline to take logs from an S3 bucket and send them to 10 | Honeycomb can be created and managed via this module. 11 | 12 | ## Use 13 | 14 | First, add the minimal Terraform configuration, which includes the required fields: 15 | 16 | ```hcl 17 | module "logs_from_a_bucket_integrations" { 18 | source = "honeycombio/integrations/aws//modules/s3-logfile" 19 | name = var.logs_integration_name 20 | 21 | parser_type = var.parser_type // valid types are alb, elb, cloudfront, vpc-flow-log, s3-access, json, and keyval 22 | s3_bucket_arn = var.s3_bucket_arn // The full ARN of the bucket storing the logs. 23 | 24 | 25 | honeycomb_api_key = var.honeycomb_api_key // Your Honeycomb team's API key. 26 | honeycomb_dataset_name = "alb-logs" // Your Honeycomb dataset name that will receive the logs. 27 | } 28 | ``` 29 | 30 | Then, set the Honeycomb API key for Terraform to use by setting the `HONEYCOMB_API_KEY` environment variable. 31 | 32 | ```bash 33 | export TF_VAR_HONEYCOMB_API_KEY=$HONEYCOMB_API_KEY 34 | ``` 35 | 36 | Then, set up AWS credentials for the intended AWS account where the resources will be created and managed. 37 | Please refer to [Terraform documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration) for more details and options. 38 | 39 | ```bash 40 | export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID 41 | export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY 42 | export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION 43 | ``` 44 | 45 | Now you can run `terraform plan` and `terraform apply` in sequence. 46 | 47 | For more configuration options, refer to [USAGE.md](https://github.com/honeycombio/terraform-aws-integrations/blob/main/USAGE.md). 48 | 49 | ## Examples 50 | 51 | Examples that use this module can be found in [`examples/`](https://github.com/honeycombio/terraform-aws-integrations/tree/main/examples/s3-logfile). 52 | 53 | ### Line Filter Rules 54 | 55 | The Line Filter Rules configuration is handled via a JSON object that is passed into the Lambda.
56 | Here's an example of what that should look like: 57 | 58 | ```hcl 59 | line_filter_rules = [{ 60 | Prefix = "alb-log-bucket-prefix", 61 | MatchLinePatterns = ["target_status_code=500", "error"] 62 | FilterLinePatterns = [".*/health.*"] 63 | },{ 64 | Prefix = "alb-log-other-prefix", 65 | MatchLinePatterns = [".*"], 66 | FilterLinePatterns = [".*GET.*"] 67 | }] 68 | ``` 69 | 70 | Once a prefix is matched, the match and filter patterns will be applied to the contents of the S3 object. 71 | The MatchLinePatterns and FilterLinePatterns objects are arrays, so you can supply multiple match patterns and multiple filter patterns. 72 | These are applied in order: as soon as a line matches a MatchLinePattern, the line is kept. 73 | As soon as a line matches a FilterLinePattern, the line is dropped and processing moves on. 74 | 75 | When the Lambda finishes, it emits a log to stdout reporting the number of processed lines and the number of kept lines. 76 | 77 | ## Development 78 | 79 | Refer to our [development documentation](https://github.com/honeycombio/terraform-aws-integrations#development) for details. 80 | 81 | ## Contributions 82 | 83 | Features, bug fixes and other changes to this module are gladly accepted. 84 | Please open issues or a pull request with your change. 85 | 86 | All contributions will be released under the Apache License 2.0. 87 | -------------------------------------------------------------------------------- /tests/cloudwatch-metrics.tf: -------------------------------------------------------------------------------- 1 | module "cloudwatch_metrics_defaults" { 2 | source = "../modules/cloudwatch-metrics" 3 | 4 | name = "cwm-default-${random_pet.this.id}" 5 | 6 | honeycomb_api_host = var.honeycomb_api_host 7 | honeycomb_api_key = var.honeycomb_api_key 8 | honeycomb_dataset_name = "cloudwatch-metrics" 9 | 10 | s3_failure_bucket_arn = module.firehose_failure_bucket.s3_bucket_arn 11 | } 12 | 13 | module "cloudwatch_metrics_legacy" { 14 | source = "../modules/cloudwatch-metrics" 15 | 16 | name = "cwm-legacy-${random_pet.this.id}" 17 | 18 | honeycomb_api_host = var.honeycomb_api_host 19 | honeycomb_api_key = var.honeycomb_api_key 20 | honeycomb_dataset_name = "cloudwatch-metrics" 21 | 22 | s3_failure_bucket_arn = module.firehose_failure_bucket.s3_bucket_arn 23 | 24 | namespace_include_filters = ["AWS/RDS", "AWS/EC2"] 25 | } 26 | 27 | module "cloudwatch_metrics" { 28 | source = "../modules/cloudwatch-metrics" 29 | 30 | name = "cwm-${random_pet.this.id}" 31 | 32 | honeycomb_api_host = var.honeycomb_api_host 33 | honeycomb_api_key = var.honeycomb_api_key 34 | honeycomb_dataset_name = "cloudwatch-metrics" 35 | additional_destinations = [{ 36 | honeycomb_dataset_name = "cloudwatch-metrics-2" 37 | honeycomb_api_host = var.honeycomb_api_host 38 | honeycomb_api_key = var.honeycomb_api_key 39 | }] 40 | 41 | s3_failure_bucket_arn = module.firehose_failure_bucket.s3_bucket_arn 42 | 43 | include_filters = [ 44 | { 45 | namespace = "AWS/RDS" 46 | metric_names = [] 47 | }, 48 | { 49 | namespace = "AWS/EC2" 50 | metric_names = [ 51 | "CPUUtilization", 52 | "CPUCreditBalance", 53 | "NetworkIn", 54 | "NetworkOut", 55 | ] 56 | } 57 | ] 58 | } 59 | 60 | # Test with three destinations to verify App Runner multiplexing 61 | module "cloudwatch_metrics_multi" { 62 | source = "../modules/cloudwatch-metrics" 63 | 64 | name = "cwm-multi-${random_pet.this.id}" 65 | 66 | honeycomb_api_host = var.honeycomb_api_host 67 | honeycomb_api_key = var.honeycomb_api_key 68 | honeycomb_dataset_name = "cloudwatch-metrics-primary" 69 | 
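  # Configuring more than one destination below causes the module to stand up
  # an OpenTelemetry collector on App Runner to fan the metric stream out; the
  # otel_collector_service_* outputs at the bottom of this file verify this.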
additional_destinations = [ 70 | { 71 | honeycomb_dataset_name = "cloudwatch-metrics-secondary" 72 | honeycomb_api_host = var.honeycomb_api_host 73 | honeycomb_api_key = var.honeycomb_api_key 74 | }, 75 | { 76 | honeycomb_dataset_name = "cloudwatch-metrics-tertiary" 77 | honeycomb_api_host = var.honeycomb_api_host 78 | honeycomb_api_key = var.honeycomb_api_key 79 | } 80 | ] 81 | 82 | s3_failure_bucket_arn = module.firehose_failure_bucket.s3_bucket_arn 83 | 84 | include_filters = [ 85 | { 86 | namespace = "AWS/RDS" 87 | metric_names = [] 88 | } 89 | ] 90 | } 91 | 92 | # Outputs to verify App Runner service creation 93 | output "cwm_multi_otel_collector_url" { 94 | value = nonsensitive(module.cloudwatch_metrics_multi.otel_collector_service_url) 95 | description = "Should be non-null when using multiple destinations" 96 | } 97 | 98 | output "cwm_multi_otel_collector_arn" { 99 | value = nonsensitive(module.cloudwatch_metrics_multi.otel_collector_service_arn) 100 | description = "Should be non-null when using multiple destinations" 101 | } 102 | 103 | # Test outputs for single destination (should be null) 104 | output "cwm_default_otel_collector_url" { 105 | value = nonsensitive(module.cloudwatch_metrics_defaults.otel_collector_service_url) 106 | description = "Should be null when using single destination" 107 | } 108 | 109 | output "cwm_default_otel_collector_arn" { 110 | value = nonsensitive(module.cloudwatch_metrics_defaults.otel_collector_service_arn) 111 | description = "Should be null when using single destination" 112 | } 113 | -------------------------------------------------------------------------------- /examples/complete/main.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | db_name = "tf-integrations-rds-mysql-${random_pet.this.id}" 3 | } 4 | 5 | module "honeycomb-aws-integrations" { 6 | source = "honeycombio/integrations/aws" 7 | 8 | # aws cloudwatch logs integration 9 | cloudwatch_log_groups = [module.log_group.cloudwatch_log_group_name] // CloudWatch Log Group names to stream to Honeycomb. 10 | 11 | # aws rds logs integration 12 | enable_rds_logs = true 13 | rds_db_name = local.db_name 14 | rds_db_engine = "mysql" 15 | rds_db_log_types = ["slowquery"] # valid types include general, slowquery, error, and audit (audit will be unstructured) 16 | 17 | # aws metrics integration 18 | # enable_cloudwatch_metrics = true 19 | # Only stream specific EC2 metrics and all ELB metrics to Honeycomb 20 | # cloudwatch_metrics_include_filters = [ 21 | # { 22 | # namespace = "AWS/EC2" 23 | # metric_names = [ 24 | # "CPUUtilization", 25 | # "DiskWriteOps", 26 | # "NetworkIn", 27 | # "NetworkOut" 28 | # ] 29 | # }, 30 | # { 31 | # namespace = "AWS/ELB" 32 | # metric_names = [] # include all metrics for this namespace 33 | # } 34 | # ] 35 | 36 | # s3 logfile - alb access logs 37 | s3_bucket_arn = var.s3_bucket_arn 38 | s3_parser_type = "alb" # valid types are alb, elb, cloudfront, vpc-flow-log, s3-access, json, and keyval 39 | 40 | #honeycomb 41 | honeycomb_api_key = var.honeycomb_api_key // Honeycomb API key. 42 | honeycomb_dataset = "terraform-aws-integrations-test" // Your Honeycomb dataset name that will receive the logs. 
43 | # Users generally don't need to set this unless they're using Secure Tenancy 44 | honeycomb_api_host = var.honeycomb_api_host 45 | } 46 | 47 | # dependencies 48 | 49 | resource "random_pet" "this" { 50 | length = 2 51 | } 52 | 53 | data "aws_vpc" "default" { 54 | default = true 55 | } 56 | 57 | data "aws_subnets" "default" { 58 | filter { 59 | name = "vpc-id" 60 | values = [data.aws_vpc.default.id] 61 | } 62 | } 63 | 64 | data "aws_security_group" "default" { 65 | vpc_id = data.aws_vpc.default.id 66 | filter { 67 | name = "group-name" 68 | values = ["default"] 69 | } 70 | } 71 | 72 | ### cloudwatch logs 73 | 74 | module "log_group" { 75 | source = "terraform-aws-modules/cloudwatch/aws//modules/log-group" 76 | version = "~> 3.0" 77 | 78 | name = "tf-integrations-${random_pet.this.id}" 79 | retention_in_days = 1 80 | } 81 | 82 | ### rds 83 | 84 | module "rds_mysql" { 85 | source = "terraform-aws-modules/rds/aws" 86 | 87 | identifier = local.db_name 88 | 89 | # All available versions: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html#MySQL.Concepts.VersionMgmt 90 | engine = "mysql" 91 | engine_version = "8.0.27" 92 | family = "mysql8.0" 93 | major_engine_version = "8.0" 94 | instance_class = "db.t3.micro" 95 | 96 | 97 | allocated_storage = 20 98 | max_allocated_storage = 100 99 | 100 | db_name = replace(local.db_name, "-", "") 101 | username = "tfuser" 102 | port = 3306 103 | 104 | multi_az = false 105 | subnet_ids = data.aws_subnets.default.ids 106 | vpc_security_group_ids = ["${data.aws_security_group.default.id}"] 107 | 108 | maintenance_window = "Mon:00:00-Mon:03:00" 109 | backup_window = "03:00-06:00" 110 | enabled_cloudwatch_logs_exports = ["slowquery"] 111 | create_cloudwatch_log_group = true 112 | 113 | backup_retention_period = 0 114 | skip_final_snapshot = true 115 | deletion_protection = false 116 | 117 | performance_insights_enabled = false 118 | performance_insights_retention_period = 7 119 | 120 | parameters = [ 121 | { 122 | name = "character_set_client" 123 | value = "utf8mb4" 124 | }, 125 | { 126 | name = "character_set_server" 127 | value = "utf8mb4" 128 | }, 129 | { 130 | name = "slow_query_log" 131 | value = "1" 132 | }, 133 | { 134 | name = "log_output" 135 | value = "FILE" 136 | }, 137 | { 138 | name = "long_query_time" 139 | value = "0" # define what you consider to be a long query here, in seconds 140 | } 141 | ] 142 | } 143 | -------------------------------------------------------------------------------- /tests/s3-logfile.tf: -------------------------------------------------------------------------------- 1 | module "alb_logs" { 2 | source = "../modules/s3-logfile" 3 | 4 | name = "tf-integrations-alb-${random_pet.this.id}" 5 | parser_type = "alb" 6 | honeycomb_api_key = var.honeycomb_api_key 7 | honeycomb_api_host = var.honeycomb_api_host 8 | 9 | s3_bucket_arn = data.aws_s3_bucket.log_bucket.arn 10 | 11 | sample_rate_rules = [{ 12 | Prefix = "sampled-2", 13 | SampleRate = 2 14 | }] 15 | } 16 | 17 | module "elb_logs" { 18 | source = "../modules/s3-logfile" 19 | 20 | parser_type = "elb" 21 | name = "tf-integrations-elb-${random_pet.this.id}" 22 | honeycomb_api_key = var.honeycomb_api_key 23 | honeycomb_api_host = var.honeycomb_api_host 24 | 25 | s3_bucket_arn = data.aws_s3_bucket.log_bucket.arn 26 | } 27 | 28 | // ideally this would be dynamically created, but using a random_pet in the bucket name 29 | // and the feeding it into the ALB module results in a TF provider error: 30 | // When expanding the plan for module.alb.aws_lb.this[0] to 
include new values 31 | // learned so far during apply, provider "registry.terraform.io/hashicorp/aws" 32 | // produced an invalid new value for .access_logs[0].bucket: was 33 | // cty.StringVal(""), but now 34 | // cty.StringVal("honeycomb-tf-integrations-logs-decent-sunbird"). 35 | 36 | data "aws_s3_bucket" "log_bucket" { 37 | bucket = "honeycomb-tf-integrations-logs" 38 | } 39 | 40 | /*** ALB ***/ 41 | 42 | module "alb" { 43 | source = "terraform-aws-modules/alb/aws" 44 | version = "~> 7.0" 45 | 46 | name = "${random_pet.this.id}-sampled-2" 47 | load_balancer_type = "application" 48 | 49 | vpc_id = data.aws_vpc.default.id 50 | subnets = toset(data.aws_subnets.default.ids) 51 | security_groups = [aws_security_group.allow_http.id] 52 | 53 | access_logs = { 54 | bucket = data.aws_s3_bucket.log_bucket.id 55 | prefix = "sampled-2" 56 | } 57 | 58 | target_groups = [ 59 | { 60 | name_prefix = "defaul" 61 | backend_protocol = "HTTP" 62 | backend_port = 80 63 | target_type = "instance" 64 | } 65 | ] 66 | 67 | http_tcp_listeners = [{ 68 | port = 80 69 | protocol = "HTTP" 70 | http_listener_rules = [{ 71 | actions = [{ 72 | type = "fixed-response" 73 | content_type = "text/plain" 74 | status_code = 200 75 | message_body = "Hello" 76 | }] 77 | }] 78 | }] 79 | } 80 | 81 | resource "aws_security_group" "allow_http" { 82 | name = "allow_http-${random_pet.this.id}" 83 | description = "Allow HTTP inbound traffic" 84 | vpc_id = data.aws_vpc.default.id 85 | 86 | ingress { 87 | description = "HTTP from public" 88 | from_port = 80 89 | to_port = 80 90 | protocol = "tcp" 91 | cidr_blocks = ["0.0.0.0/0"] 92 | ipv6_cidr_blocks = ["::/0"] 93 | } 94 | 95 | egress { 96 | from_port = 0 97 | to_port = 0 98 | protocol = "-1" 99 | cidr_blocks = ["0.0.0.0/0"] 100 | ipv6_cidr_blocks = ["::/0"] 101 | } 102 | } 103 | 104 | /*** ELB ***/ 105 | 106 | module "elb" { 107 | source = "terraform-aws-modules/elb/aws" 108 | version = "~> 3.0" 109 | 110 | name = random_pet.this.id 111 | 112 | subnets = toset(data.aws_subnets.default.ids) 113 | security_groups = [aws_security_group.allow_http.id] 114 | internal = false 115 | 116 | number_of_instances = 1 117 | instances = [module.ec2_instances.id] 118 | 119 | health_check = { 120 | target = "HTTP:80/" 121 | interval = 30 122 | healthy_threshold = 2 123 | unhealthy_threshold = 2 124 | timeout = 5 125 | } 126 | 127 | access_logs = { 128 | bucket = data.aws_s3_bucket.log_bucket.id 129 | } 130 | 131 | listener = [ 132 | { 133 | instance_port = "80" 134 | instance_protocol = "http" 135 | lb_port = "80" 136 | lb_protocol = "http" 137 | }, 138 | ] 139 | } 140 | 141 | data "aws_ami" "latest" { 142 | owners = ["amazon"] 143 | most_recent = true 144 | 145 | filter { 146 | name = "name" 147 | values = ["amzn2-ami-*-x86_64-ebs"] 148 | } 149 | } 150 | 151 | module "ec2_instances" { 152 | source = "terraform-aws-modules/ec2-instance/aws" 153 | version = "~> 6.0" 154 | 155 | ami = data.aws_ami.latest.id 156 | instance_type = "t2.micro" 157 | vpc_security_group_ids = [aws_security_group.allow_http.id] 158 | subnet_id = element(data.aws_subnets.default.ids, 0) 159 | associate_public_ip_address = true 160 | 161 | tags = { 162 | Name = random_pet.this.id 163 | } 164 | } 165 | -------------------------------------------------------------------------------- /modules/kinesis-firehose-honeycomb/variables.tf: -------------------------------------------------------------------------------- 1 | variable "name" { 2 | type = string 3 | description = "A name for this CloudWatch Kinesis Firehose Stream." 
4 | 5 | validation { 6 | condition = length(var.name) >= 1 && length(var.name) <= 32 7 | error_message = "We use var.name as a name_prefix, so it must be 1-32 characters in length." 8 | } 9 | } 10 | 11 | variable "honeycomb_dataset_name" { 12 | type = string 13 | description = "Your Honeycomb dataset name." 14 | } 15 | 16 | variable "honeycomb_api_key" { 17 | type = string 18 | description = "Your Honeycomb team's API key." 19 | sensitive = true 20 | } 21 | 22 | # Optional variables for customer configuration 23 | variable "honeycomb_api_host" { 24 | type = string 25 | default = "https://api.honeycomb.io" 26 | description = "If you use a Secure Tenancy or other proxy, put its schema://host[:port] here." 27 | } 28 | 29 | # Additional sinks can be configured here. 30 | variable "additional_destinations" { 31 | type = list(object({ 32 | honeycomb_dataset_name = string, 33 | honeycomb_api_key = string, 34 | honeycomb_api_host = string, 35 | })) 36 | sensitive = true 37 | default = [] 38 | } 39 | 40 | variable "enable_lambda_transform" { 41 | type = bool 42 | default = false 43 | description = "Enable a Lambda transform on the Kinesis Firehose to preprocess and structure the logs" 44 | } 45 | 46 | variable "lambda_transform_arn" { 47 | type = string 48 | default = "" 49 | description = "If enable_lambda_transform is set to true, specify a valid arn" 50 | } 51 | 52 | variable "lambda_processor_parameters" { 53 | type = map(string) 54 | default = {} 55 | description = <<EOF Values passed as the Lambda processing_configuration.processors.parameters, as detailed in the aws_kinesis_firehose_delivery_stream resource documentation. EOF } variable "http_buffering_size" { type = number default = 15 description = "Kinesis Firehose http buffer size, in MiB." } variable "http_buffering_interval" { type = number default = 60 description = "Kinesis Firehose http buffer interval, in seconds." } variable "s3_buffer_size" { type = number default = 10 description = "In MiB. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html" validation { condition = var.s3_buffer_size >= 1 && var.s3_buffer_size <= 128 87 | error_message = "The s3_buffer_size must be 1-128 MiBs." 88 | } 89 | } 90 | 91 | variable "s3_buffer_interval" { 92 | type = number 93 | default = 400 94 | description = "In seconds. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html" 95 | 96 | validation { 97 | condition = var.s3_buffer_interval >= 60 && var.s3_buffer_interval <= 900 98 | error_message = "The s3_buffer_interval must be 60-900 seconds." 99 | } 100 | } 101 | 102 | variable "s3_compression_format" { 103 | type = string 104 | default = "GZIP" 105 | description = "May be GZIP, Snappy, Zip, or Hadoop-Compatible Snappy. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html" 106 | 107 | validation { 108 | condition = contains(["GZIP", 109 | "Snappy", 110 | "Zip", 111 | "Hadoop-Compatible Snappy"], 112 | var.s3_compression_format) 113 | error_message = "Not an allowed compression format." 114 | } 115 | } 116 | 117 | variable "s3_backup_mode" { 118 | type = string 119 | default = "FailedDataOnly" 120 | description = "Should we only back up to S3 data that failed delivery, or all data?" 121 | 122 | validation { 123 | condition = contains(["FailedDataOnly", "AllData"], 124 | var.s3_backup_mode) 125 | error_message = "Not an allowed s3_backup_mode." 126 | } 127 | } 128 | 129 | variable "tags" { 130 | type = map(string) 131 | default = {} 132 | description = "A map of tags to apply to resources created by this module." 133 | } 134 | 135 | variable "otel_access_key" { 136 | type = string 137 | description = "Access key for OpenTelemetry collector awsfirehose receiver authentication. Generated randomly if not provided." 138 | default = "" 139 | sensitive = true 140 | } 141 | 142 | variable "otel_collector_version" { 143 | type = string 144 | description = "The version tag of the Honeycomb OpenTelemetry collector image to use."
145 | default = "v0.0.19" 146 | } 147 | -------------------------------------------------------------------------------- /modules/cloudwatch-logs/variables.tf: -------------------------------------------------------------------------------- 1 | variable "name" { 2 | type = string 3 | description = "A name for this CloudWatch Kinesis Firehose Stream." 4 | default = "honeycomb-cloudwatch-logs" 5 | 6 | validation { 7 | condition = length(var.name) >= 1 && length(var.name) <= 32 8 | error_message = "We use var.name as a name_prefix, so it must be 1-32 characters in length." 9 | } 10 | } 11 | 12 | variable "s3_failure_bucket_arn" { 13 | type = string 14 | description = "ARN of the S3 bucket that will store any logs that failed to be sent to Honeycomb." 15 | } 16 | 17 | variable "cloudwatch_log_groups" { 18 | type = list(string) 19 | description = "CloudWatch Log Group names to stream to Honeycomb" 20 | } 21 | 22 | variable "honeycomb_dataset_name" { 23 | type = string 24 | description = "Your Honeycomb dataset name." 25 | } 26 | 27 | variable "honeycomb_api_key" { 28 | type = string 29 | description = "Your Honeycomb team's API key." 30 | sensitive = true 31 | } 32 | 33 | # Additional sinks can be configured here. 34 | variable "additional_destinations" { 35 | type = list(object({ 36 | honeycomb_dataset_name = string, 37 | honeycomb_api_key = string, 38 | honeycomb_api_host = string, 39 | })) 40 | sensitive = true 41 | default = [] 42 | } 43 | 44 | # Optional variables for customer configuration 45 | variable "enable_lambda_transform" { 46 | type = bool 47 | description = "Enable a Lambda transform on the Kinesis Firehose to preprocess and structure the logs" 48 | default = false 49 | } 50 | 51 | variable "lambda_transform_arn" { 52 | type = string 53 | description = "If enable_lambda_transform is set to true, specify a valid arn" 54 | default = "" 55 | } 56 | variable "log_subscription_filter_pattern" { 57 | type = string 58 | description = "A valid CloudWatch Logs filter pattern for subscribing to a filtered stream of log events. Defaults to empty string to match everything. For more information, see the Amazon CloudWatch Logs User Guide." 59 | default = "" 60 | } 61 | 62 | variable "honeycomb_api_host" { 63 | type = string 64 | default = "https://api.honeycomb.io" 65 | description = "If you use a Secure Tenancy or other proxy, put its schema://host[:port] here." 66 | } 67 | 68 | variable "s3_buffer_size" { 69 | type = number 70 | default = 10 71 | description = "In MiB. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html" 72 | 73 | validation { 74 | condition = var.s3_buffer_size >= 1 && var.s3_buffer_size <= 128 75 | error_message = "The s3_buffer_size must be 1-128 MiBs." 76 | } 77 | } 78 | 79 | variable "s3_buffer_interval" { 80 | type = number 81 | default = 400 82 | description = "In seconds. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html" 83 | 84 | validation { 85 | condition = var.s3_buffer_interval >= 60 && var.s3_buffer_interval <= 900 86 | error_message = "The s3_buffer_interval must be 60-900 seconds." 87 | } 88 | } 89 | 90 | variable "s3_compression_format" { 91 | type = string 92 | default = "GZIP" 93 | description = "May be GZIP, Snappy, Zip, or Hadoop-Compatible Snappy. 
See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html" 94 | 95 | validation { 96 | condition = contains(["GZIP", 97 | "Snappy", 98 | "Zip", 99 | "Hadoop-Compatible Snappy"], 100 | var.s3_compression_format) 101 | error_message = "Not an allowed compression format." 102 | } 103 | } 104 | 105 | variable "s3_backup_mode" { 106 | type = string 107 | default = "FailedDataOnly" 108 | description = "Should we only backup to S3 data that failed delivery, or all data?" 109 | 110 | validation { 111 | condition = contains(["FailedDataOnly", "AllData"], 112 | var.s3_backup_mode) 113 | error_message = "Not an allowed s3_backup_mode." 114 | } 115 | } 116 | 117 | variable "s3_force_destroy" { 118 | type = bool 119 | default = true 120 | description = <<EOT 121 | By default, AWS will decline to delete S3 buckets that are not empty: 122 | `BucketNotEmpty: The bucket you tried to delete is not empty`. These buckets 123 | are used for backup if delivery or processing fail. 124 | 125 | To allow this module's resources to be removed, we've set force_destroy = 126 | true, allowing non-empty buckets to be deleted. If you want to block this and 127 | preserve those failed deliveries, you can set this value to false, though that 128 | will leave terraform unable to cleanly destroy the module. 129 | EOT 130 | } 131 | 132 | variable "tags" { 133 | type = map(string) 134 | default = {} 135 | description = "A map of tags to apply to resources created by this module." 136 | } 137 | 138 | variable "use_order_independent_filter_resource_naming" { 139 | type = bool 140 | default = false 141 | description = "Use order-independent naming for log group subscription filter resources." 142 | } 143 | 144 | variable "http_buffering_size" { 145 | type = number 146 | default = 15 147 | description = "Kinesis Firehose http buffer size, in MiB." 148 | } 149 | 150 | variable "http_buffering_interval" { 151 | type = number 152 | default = 60 153 | description = "Kinesis Firehose http buffer interval, in seconds." 154 | } 155 | -------------------------------------------------------------------------------- /modules/kinesis-firehose-honeycomb/USAGE.md: -------------------------------------------------------------------------------- 1 | 2 | ## Requirements 3 | 4 | | Name | Version | 5 | |------|---------| 6 | | [terraform](#requirement\_terraform) | >= 1.1 | 7 | | [aws](#requirement\_aws) | >= 5.0 | 8 | 9 | ## Providers 10 | 11 | | Name | Version | 12 | |------|---------| 13 | | [aws](#provider\_aws) | >= 5.0 | 14 | 15 | ## Modules 16 | 17 | No modules. 18 | 19 | ## Resources 20 | 21 | | Name | Type | 22 | |------|------| 23 | | [aws_iam_role.firehose_s3_role](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) | resource | 24 | | [aws_iam_role_policy.firehose_lambda_policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy) | resource | 25 | | [aws_iam_role_policy.firehose_s3_policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy) | resource | 26 | | [aws_kinesis_firehose_delivery_stream.stream](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/kinesis_firehose_delivery_stream) | resource | 27 | | [aws_iam_policy_document.firehose-assume-role-policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source | 28 | | [aws_iam_policy_document.firehose_lambda_policy_document](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source | 29 | | [aws_iam_policy_document.firehose_s3_policy_document](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source | 30 | | [aws_region.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/region) | data source | 31 | 32 | ## Inputs 33 | 34 | | Name | Description | Type | Default | Required | 35 | |------|-------------|------|---------|:--------:| 36 | | [additional\_destinations](#input\_additional\_destinations) | Additional sinks can be configured here. |
list(object({
honeycomb_dataset_name = string,
honeycomb_api_key = string,
honeycomb_api_host = string,
}))
| `[]` | no | 37 | | [enable\_lambda\_transform](#input\_enable\_lambda\_transform) | Enable a Lambda transform on the Kinesis Firehose to preprocess and structure the logs | `bool` | `false` | no | 38 | | [honeycomb\_api\_host](#input\_honeycomb\_api\_host) | If you use a Secure Tenancy or other proxy, put its schema://host[:port] here. | `string` | `"https://api.honeycomb.io"` | no | 39 | | [honeycomb\_api\_key](#input\_honeycomb\_api\_key) | Your Honeycomb team's API key. | `string` | n/a | yes | 40 | | [honeycomb\_dataset\_name](#input\_honeycomb\_dataset\_name) | Your Honeycomb dataset name. | `string` | n/a | yes | 41 | | [http\_buffering\_interval](#input\_http\_buffering\_interval) | Kinesis Firehose http buffer interval, in seconds. | `number` | `60` | no | 42 | | [http\_buffering\_size](#input\_http\_buffering\_size) | Kinesis Firehose http buffer size, in MiB. | `number` | `15` | no | 43 | | [lambda\_processor\_parameters](#input\_lambda\_processor\_parameters) | Values passed as the Lambda processing\_configuration.processors.parameters, as detailed
at https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/kinesis_firehose_delivery_stream#processors.

Do not use the default values for BufferSizeInMBs (3) and BufferIntervalInSeconds (60) or you will trigger a provider bug (https://github.com/hashicorp/terraform-provider-aws/issues/9827) resulting in infinite diffs. | `map(string)` | `{}` | no | 44 | | [lambda\_transform\_arn](#input\_lambda\_transform\_arn) | If enable\_lambda\_transform is set to true, specify a valid arn | `string` | `""` | no | 45 | | [name](#input\_name) | A name for this CloudWatch Kinesis Firehose Stream. | `string` | n/a | yes | 46 | | [s3\_backup\_mode](#input\_s3\_backup\_mode) | Should we only backup to S3 data that failed delivery, or all data? | `string` | `"FailedDataOnly"` | no | 47 | | [s3\_buffer\_interval](#input\_s3\_buffer\_interval) | In seconds. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html | `number` | `400` | no | 48 | | [s3\_buffer\_size](#input\_s3\_buffer\_size) | In MiB. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html | `number` | `10` | no | 49 | | [s3\_compression\_format](#input\_s3\_compression\_format) | May be GZIP, Snappy, Zip, or Hadoop-Compatible Snappy. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html | `string` | `"GZIP"` | no | 50 | | [s3\_failure\_bucket\_arn](#input\_s3\_failure\_bucket\_arn) | ARN of the S3 bucket that will store any logs that failed to be sent to Honeycomb. | `string` | n/a | yes | 51 | | [tags](#input\_tags) | A map of tags to apply to resources created by this module. | `map(string)` | `{}` | no | 52 | 53 | ## Outputs 54 | 55 | | Name | Description | 56 | |------|-------------| 57 | | [kinesis\_firehose\_delivery\_stream\_arn](#output\_kinesis\_firehose\_delivery\_stream\_arn) | n/a | 58 | -------------------------------------------------------------------------------- /modules/cloudwatch-metrics/variables.tf: -------------------------------------------------------------------------------- 1 | # Required variables 2 | variable "name" { 3 | type = string 4 | description = "A unique name for this CloudWatch Metric Stream." 5 | default = "honeycomb-cloudwatch-metrics" 6 | 7 | validation { 8 | condition = length(var.name) >= 1 && length(var.name) <= 32 9 | error_message = "We use var.name as a name_prefix, so it must be 1-32 characters in length." 10 | } 11 | } 12 | 13 | variable "honeycomb_dataset_name" { 14 | type = string 15 | description = "Your Honeycomb dataset name." 16 | } 17 | 18 | variable "honeycomb_api_key" { 19 | type = string 20 | description = "Your Honeycomb team's API key." 21 | sensitive = true 22 | } 23 | 24 | # Additional sinks can be configured here. 25 | variable "additional_destinations" { 26 | type = list(object({ 27 | honeycomb_dataset_name = string, 28 | honeycomb_api_key = string, 29 | honeycomb_api_host = string, 30 | })) 31 | sensitive = true 32 | default = [] 33 | } 34 | 35 | # Optional variables for customer configuration 36 | variable "honeycomb_api_host" { 37 | type = string 38 | default = "https://api.honeycomb.io" 39 | description = "If you use a Secure Tenancy or other proxy, put its schema://host[:port] here." 40 | } 41 | 42 | variable "tags" { 43 | type = map(string) 44 | default = {} 45 | description = "A map of tags to apply to resources created by this module." 46 | } 47 | 48 | variable "include_filters" { 49 | type = list(object({ 50 | namespace = string 51 | metric_names = list(string) 52 | })) 53 | default = [] 54 | description = <<EOF 55 | An optional list of inclusive CloudWatch Metric filters. If set, we'll only stream metrics matching these namespace and metric names. 56 | Pass an empty list (`[]`) to `metric_names` to include all metrics for the namespace. 57 | Mutually exclusive with `exclude_filters`. 58 | EOF 59 | } 60 | 61 | variable "exclude_filters" { 62 | type = list(object({ 63 | namespace = string 64 | metric_names = list(string) 65 | })) 66 | default = [] 67 | description = <<EOF 68 | An optional list of exclusive CloudWatch Metric filters. If set, we'll only stream metrics that do not match these namespace and metric names. 69 | Pass an empty list (`[]`) to `metric_names` to exclude all metrics for the namespace. 70 | Mutually exclusive with `include_filters`. 71 | EOF 72 | } 73 | 74 | variable "namespace_include_filters" { 75 | type = list(string) 76 | default = [] 77 | description = <<EOF 78 | DEPRECATED: use `include_filters` instead. 79 | 80 | An optional list of CloudWatch Metric namespaces to include. If set, we'll only stream metrics from these namespaces. 81 | Mutually exclusive with `namespace_exclude_filters`. 82 | EOF 83 | } 84 | 85 | variable "namespace_exclude_filters" { 86 | type = list(string) 87 | default = [] 88 | description = <<EOF 89 | DEPRECATED: use `exclude_filters` instead. 90 | 91 | An optional list of CloudWatch Metric namespaces to exclude. If set, we'll only stream metrics that are not in these namespaces. 92 | Mutually exclusive with `namespace_include_filters`. 93 | EOF 94 | } 95 | 96 | variable "include_linked_accounts_metrics" { 97 | type = bool 98 | default = false 99 | description = "If you are creating a metric stream in a monitoring account, specify true to include metrics from source accounts that are linked to this monitoring account, in the metric stream." 100 | } 101 | 102 | variable "s3_failure_bucket_arn" { 103 | type = string 104 | description = "ARN of the S3 bucket that will store any logs that failed to be sent to Honeycomb." 105 | } 106 | 107 | variable "s3_buffer_size" { 108 | type = number 109 | default = 10 110 | description = "In MiB. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html" 111 | 112 | validation { 113 | condition = var.s3_buffer_size >= 1 && var.s3_buffer_size <= 128 114 | error_message = "The s3_buffer_size must be 1-128 MiBs." 115 | } 116 | } 117 | 
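# Editor's illustrative sketch (not part of the module source): how a caller
# might set the filter inputs defined above - an empty metric_names list
# includes every metric in that namespace, mirroring the root README example:
#
#   include_filters = [
#     { namespace = "AWS/EC2", metric_names = ["CPUUtilization", "NetworkIn"] },
#     { namespace = "AWS/ELB", metric_names = [] }
#   ]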
118 | variable "s3_buffer_interval" { 119 | type = number 120 | default = 400 121 | description = "In seconds. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html" 122 | 123 | validation { 124 | condition = var.s3_buffer_interval >= 60 && var.s3_buffer_interval <= 900 125 | error_message = "The s3_buffer_interval must be 60-900 seconds." 126 | } 127 | } 128 | 129 | variable "s3_compression_format" { 130 | type = string 131 | default = "GZIP" 132 | description = "May be GZIP, Snappy, Zip, or Hadoop-Compatible Snappy. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html" 133 | 134 | validation { 135 | condition = contains(["GZIP", 136 | "Snappy", 137 | "Zip", 138 | "Hadoop-Compatible Snappy"], 139 | var.s3_compression_format) 140 | error_message = "Not an allowed compression format." 141 | } 142 | } 143 | 144 | variable "s3_backup_mode" { 145 | type = string 146 | default = "FailedDataOnly" 147 | description = "Should we only backup to S3 data that failed delivery, or all data?" 148 | 149 | validation { 150 | condition = contains(["FailedDataOnly", "AllData"], 151 | var.s3_backup_mode) 152 | error_message = "Not an allowed s3_backup_mode." 153 | } 154 | } 155 | 156 | variable "output_format" { 157 | type = string 158 | default = "opentelemetry1.0" 159 | description = "Output format of metrics. You should probably not modify this value; the default format is supported, but others may not be." 160 | 161 | validation { 162 | condition = contains(["json", "opentelemetry1.0"], var.output_format) 163 | error_message = "Not an allowed output format." 164 | } 165 | } 166 | 167 | variable "http_buffering_size" { 168 | type = number 169 | default = 15 170 | description = "Kinesis Firehose http buffer size, in MiB." 171 | } 172 | 173 | variable "http_buffering_interval" { 174 | type = number 175 | default = 60 176 | description = "Kinesis Firehose http buffer interval, in seconds." 177 | } 178 | -------------------------------------------------------------------------------- /examples/complete/README.md: -------------------------------------------------------------------------------- 1 | # Honeycomb Terraform AWS Integrations 2 | 3 | This repo contains a set of modules in the [modules folder](https://github.com/honeycombio/terraform-aws-integrations/tree/main/modules) for resources in [AWS](https://aws.amazon.com/) using [Terraform](https://www.terraform.io/) to send observability data to [Honeycomb](https://www.honeycomb.io/). 4 | 5 | ## How does this work? 6 | 7 | ![AWS Integrations architecture](https://github.com/honeycombio/terraform-aws-integrations/blob/main/docs/overview.png?raw=true) 8 | 9 | ## Supported Integrations 10 | 11 | Available supported integrations include: 12 | 13 | * [CloudWatch Logs](https://github.com/honeycombio/terraform-aws-integrations/tree/main/modules/cloudwatch-logs) 14 | * [CloudWatch Metrics](https://github.com/honeycombio/terraform-aws-integrations/tree/main/modules/cloudwatch-metrics) 15 | * [RDS Logs](https://github.com/honeycombio/terraform-aws-integrations/tree/main/modules/rds-logs) 16 | * [Logs from a S3 Bucket](https://github.com/honeycombio/terraform-aws-integrations/tree/main/modules/s3-logfile) 17 | 18 | To use an individual Terraform integration, refer to the integration's README. 19 | Otherwise, refer to the [configuration instructions](#usage) below to configure all supported integrations. 
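As a minimal sketch (the log group, dataset name, and failure bucket below are placeholders, not values from this repo), a single integration can also be wired up directly from its submodule:

```hcl
module "honeycomb_cloudwatch_logs" {
  source = "honeycombio/integrations/aws//modules/cloudwatch-logs"

  # Placeholder values - adjust these to your environment
  cloudwatch_log_groups  = ["/aws/lambda/my-service"]
  honeycomb_api_key      = var.honeycomb_api_key
  honeycomb_dataset_name = "my-service-logs"
  s3_failure_bucket_arn  = aws_s3_bucket.firehose_failures.arn
}
```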
20 | 21 | ## Usage 22 | 23 | First, add the minimal Terraform configuration, which includes the required fields for all [supported Terraform integrations](#supported-integrations): 24 | 25 | ```hcl 26 | module "honeycomb-aws-integrations" { 27 | source = "honeycombio/integrations/aws" 28 | 29 | # aws cloudwatch logs integration 30 | cloudwatch_log_groups = [module.log_group.cloudwatch_log_group_name] // CloudWatch Log Group names to stream to Honeycomb. 31 | 32 | # aws rds logs integration 33 | enable_rds_logs = true 34 | rds_db_name = var.db_name 35 | rds_db_engine = "mysql" 36 | rds_db_log_types = ["slowquery"] // valid types include general, slowquery, error, and audit (audit will be unstructured) 37 | 38 | # aws metrics integration 39 | # enable_cloudwatch_metrics = true 40 | # Only stream specific EC2 metrics and all ELB metrics to Honeycomb 41 | # cloudwatch_metrics_include_filters = [ 42 | # { 43 | # namespace = "AWS/EC2" 44 | # metric_names = [ 45 | # "CPUUtilization", 46 | # "DiskWriteOps", 47 | # "NetworkIn", 48 | # "NetworkOut" 49 | # ] 50 | # }, 51 | # { 52 | # namespace = "AWS/ELB" 53 | # metric_names = [] # include all metrics for this namespace 54 | # } 55 | # ] 56 | 57 | # s3 logfile - alb access logs 58 | s3_bucket_arn = var.s3_bucket_arn 59 | s3_parser_type = "alb" // valid types are alb, elb, cloudfront, vpc-flow-log, s3-access, json, and keyval 60 | 61 | #honeycomb 62 | honeycomb_api_key = var.honeycomb_api_key // Honeycomb API key. 63 | honeycomb_dataset = "terraform-aws-integrations-test" // Your Honeycomb dataset name that will receive the logs. 64 | 65 | # Users generally don't need to set this unless they're using Secure Tenancy 66 | honeycomb_api_host = var.honeycomb_api_host 67 | } 68 | ``` 69 | 70 | Then, set the Honeycomb API key for Terraform to use, by setting the `HONEYCOMB_API_KEY` environment variable. 71 | 72 | ```bash 73 | export TF_VAR_HONEYCOMB_API_KEY=$HONEYCOMB_API_KEY 74 | ``` 75 | 76 | Then, set up AWS credentials for the intended AWS account where the resources will be created and managed. 77 | Please refer to [Terraform documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration) for more details and options. 78 | 79 | ```bash 80 | export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID 81 | export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY 82 | export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION 83 | ``` 84 | 85 | Now you can run `terraform plan/apply` in sequence. 86 | 87 | For more configuration options, refer to [USAGE.md](https://github.com/honeycombio/terraform-aws-integrations/blob/main/USAGE.md). 88 | 89 | ## Examples 90 | 91 | Examples that use this module can be found in [`examples/`](https://github.com/honeycombio/terraform-aws-integrations/tree/main/examples/complete). 92 | 93 | ## Development 94 | 95 | ### Tests 96 | 97 | Test cases that run against local code are in [`tests/`](https://github.com/honeycombio/terraform-aws-integrations/tree/main/tests). 98 | To set up: 99 | 100 | 1. Set the API key used by Terraform setting the `TF_VAR_HONEYCOMB_API_KEY` environment variable. 101 | 102 | ```bash 103 | export TF_VAR_HONEYCOMB_API_KEY=$HONEYCOMB_API_KEY 104 | ``` 105 | 106 | 2. Set up AWS credentials. 107 | Please see [Terraform documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration) for more details and options. 
108 | 109 | ```bash 110 | export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID 111 | export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY 112 | export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION 113 | ``` 114 | 115 | 3. Run `terraform init` to initialize the working directory. 116 | 117 | 4. `terraform plan` and `terraform apply` will now work as expected, as will 118 | `terraform destroy`. 119 | 120 | 5. Test cases also run as part of the pipeline. 121 | See [test-terraform-module.yml](https://github.com/honeycombio/terraform-aws-integrations/blob/main/.github/workflows/test-terraform-module.yml) 122 | 123 | ### Docs 124 | 125 | Configuration option documentation is autogenerated by running `make generate-docs`, and put in each module's [USAGE.md](https://github.com/honeycombio/terraform-aws-integrations/blob/main/USAGE.md). 126 | 127 | Please regenerate and commit before merging any changes to this repository. 128 | 129 | ### Linters 130 | 131 | We use [tflint](https://github.com/terraform-linters/tflint) and `terraform fmt`, and enforce this linting with a [GitHub Action](.github/workflows/terraform-lint.yml). 132 | 133 | You can run `make terraform-format` to automatically fix formatting issues. 134 | 135 | ## Contributions 136 | 137 | Features, bug fixes and other changes to this module are gladly accepted. 138 | Please open issues or a pull request with your change. 139 | 140 | All contributions will be released under the Apache License 2.0. 141 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Honeycomb Terraform AWS Integrations 2 | 3 | [![OSS Lifecycle](https://img.shields.io/osslifecycle/honeycombio/terraform-aws-integrations)](https://github.com/honeycombio/home/blob/main/honeycomb-oss-lifecycle-and-practices.md) 4 | [![Terraform Registry](https://img.shields.io/github/v/release/honeycombio/terraform-aws-integrations?color=5e4fe3&label=Terraform%20Registry&logo=terraform&sort=semver)](https://registry.terraform.io/modules/honeycombio/integrations/aws/latest) 5 | 6 | This repo contains a set of modules in the [modules folder](https://github.com/honeycombio/terraform-aws-integrations/tree/main/modules) for resources in [AWS](https://aws.amazon.com/) using [Terraform](https://www.terraform.io/) to send observability data to [Honeycomb](https://www.honeycomb.io/). 7 | 8 | ## 📣 Adopting version 1.0.0 9 | 10 | As of `v1.0.0` of this module, version 5 of the AWS Provider is required. 11 | If you still need support for version 4 of the AWS Provider, continue to use `v0.5.0`. 12 | 13 | ## How does this work? 14 | 15 | ![AWS Integrations architecture](https://github.com/honeycombio/terraform-aws-integrations/blob/main/docs/overview.png?raw=true) 16 | 17 | ## Supported Integrations 18 | 19 | Available supported integrations include: 20 | 21 | * [CloudWatch Logs](https://github.com/honeycombio/terraform-aws-integrations/tree/main/modules/cloudwatch-logs) 22 | * [CloudWatch Metrics](https://github.com/honeycombio/terraform-aws-integrations/tree/main/modules/cloudwatch-metrics) 23 | * [RDS Logs](https://github.com/honeycombio/terraform-aws-integrations/tree/main/modules/rds-logs) 24 | * [Logs from a S3 Bucket](https://github.com/honeycombio/terraform-aws-integrations/tree/main/modules/s3-logfile) 25 | 26 | To use an individual Terraform integration, refer to the integration's README. 
27 | Otherwise, refer to the [configuration instructions](#usage) below to configure all supported integrations. 28 | 29 | ## Usage 30 | 31 | First, add the minimal Terraform configuration, which includes the required fields for all [supported Terraform integrations](#supported-integrations): 32 | 33 | ```hcl 34 | module "honeycomb-aws-integrations" { 35 | source = "honeycombio/integrations/aws" 36 | 37 | # aws cloudwatch logs integration 38 | cloudwatch_log_groups = [module.log_group.cloudwatch_log_group_name] // CloudWatch Log Group names to stream to Honeycomb. 39 | 40 | # aws rds logs integration 41 | enable_rds_logs = true 42 | rds_db_name = var.db_name 43 | rds_db_engine = "mysql" 44 | rds_db_log_types = ["slowquery"] // valid types include general, slowquery, error, and audit (audit will be unstructured) 45 | 46 | # aws metrics integration 47 | # enable_cloudwatch_metrics = true 48 | 49 | # s3 logfile - alb access logs 50 | s3_bucket_arn = var.s3_bucket_arn 51 | s3_parser_type = "alb" // valid types are alb, elb, cloudfront, vpc-flow-log, s3-access, json, and keyval 52 | 53 | #honeycomb 54 | honeycomb_api_key = var.honeycomb_api_key // Honeycomb API key. 55 | honeycomb_dataset = "terraform-aws-integrations-test" // Your Honeycomb dataset name that will receive the logs. 56 | 57 | # Users generally don't need to set this unless they're using Secure Tenancy 58 | honeycomb_api_host = var.honeycomb_api_host 59 | } 60 | ``` 61 | 62 | Then, set the Honeycomb API key for Terraform to use, by setting the `HONEYCOMB_API_KEY` environment variable. 63 | 64 | ```bash 65 | export TF_VAR_HONEYCOMB_API_KEY=$HONEYCOMB_API_KEY 66 | ``` 67 | 68 | Then, set up AWS credentials for the intended AWS account where the resources will be created and managed. 69 | Please refer to [Terraform documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration) for more details and options. 70 | 71 | ```bash 72 | export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID 73 | export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY 74 | export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION 75 | ``` 76 | 77 | Now you can run `terraform plan/apply` in sequence. 78 | 79 | For more configuration options, refer to [USAGE.md](https://github.com/honeycombio/terraform-aws-integrations/blob/main/USAGE.md). 80 | 81 | ## Examples 82 | 83 | Examples that use this module can be found in [`examples/`](https://github.com/honeycombio/terraform-aws-integrations/tree/main/examples/complete). 84 | 85 | ## Development 86 | 87 | ### Tests 88 | 89 | Test cases that run against local code are in [`tests/`](https://github.com/honeycombio/terraform-aws-integrations/tree/main/tests). 90 | To set up: 91 | 92 | 1. Set the API key used by Terraform setting the `TF_VAR_HONEYCOMB_API_KEY` environment variable. 93 | 94 | ```bash 95 | export TF_VAR_HONEYCOMB_API_KEY=$HONEYCOMB_API_KEY 96 | ``` 97 | 98 | 2. Set up AWS credentials. 99 | Please see [Terraform documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration) for more details and options. 100 | 101 | ```bash 102 | export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID 103 | export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY 104 | export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION 105 | ``` 106 | 107 | 3. Run `terraform init` to initialize the working directory. 108 | 109 | 4. `terraform plan` and `terraform apply` will now work as expected, as will 110 | `terraform destroy`. 111 | 112 | 5. 
Test cases also run as part of the pipeline. 113 | See [test-terraform-module.yml](https://github.com/honeycombio/terraform-aws-integrations/blob/main/.github/workflows/test-terraform-module.yml) 114 | 115 | ### Docs 116 | 117 | Configuration option documentation is autogenerated by running `make generate-docs`, and put in each module's [USAGE.md](https://github.com/honeycombio/terraform-aws-integrations/blob/main/USAGE.md). 118 | 119 | Please regenerate and commit before merging any changes to this repository. 120 | 121 | ### Linters 122 | 123 | We use [tflint](https://github.com/terraform-linters/tflint) and `terraform fmt`, and enforce this linting with a [GitHub Action](.github/workflows/terraform-lint.yml). 124 | 125 | You can run `make terraform-format` to automatically fix formatting issues. 126 | 127 | ## Contributions 128 | 129 | Features, bug fixes and other changes to this module are gladly accepted. 130 | Please open issues or a pull request with your change. 131 | 132 | All contributions will be released under the Apache License 2.0. 133 | -------------------------------------------------------------------------------- /modules/s3-logfile/USAGE.md: -------------------------------------------------------------------------------- 1 | 2 | ## Requirements 3 | 4 | | Name | Version | 5 | |------|---------| 6 | | [terraform](#requirement\_terraform) | >= 1.0 | 7 | | [aws](#requirement\_aws) | >= 5.0 | 8 | 9 | ## Providers 10 | 11 | | Name | Version | 12 | |------|---------| 13 | | [aws](#provider\_aws) | >= 5.0 | 14 | 15 | ## Modules 16 | 17 | | Name | Source | Version | 18 | |------|--------|---------| 19 | | [log\_bucket](#module\_log\_bucket) | terraform-aws-modules/s3-bucket/aws//modules/notification | 3.15.2 | 20 | | [s3\_processor](#module\_s3\_processor) | terraform-aws-modules/lambda/aws | 4.18.0 | 21 | 22 | ## Resources 23 | 24 | | Name | Type | 25 | |------|------| 26 | | [aws_iam_policy.lambda](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource | 27 | | [aws_arn.kms_key](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/arn) | data source | 28 | | [aws_arn.s3_bucket](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/arn) | data source | 29 | | [aws_iam_policy_document.lambda](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source | 30 | | [aws_region.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/region) | data source | 31 | 32 | ## Inputs 33 | 34 | | Name | Description | Type | Default | Required | 35 | |------|-------------|------|---------|:--------:| 36 | | [agentless\_integrations\_version](#input\_agentless\_integrations\_version) | Version of https://github.com/honeycombio/agentless-integrations-for-aws to use. Default is LATEST, but note that specifying this does not automatically update the lambda to use the newest versions as they are released. | `string` | `"LATEST"` | no | 37 | | [environment](#input\_environment) | The environment this code is running in. If set, will be added as 'env' to each event. | `string` | `""` | no | 38 | | [filter\_fields](#input\_filter\_fields) | Strings to specify which field names to remove from events. | `list(string)` | `[]` | no | 39 | | [honeycomb\_api\_host](#input\_honeycomb\_api\_host) | Internal. Alternative Honeycomb API host. 
| `string` | `"https://api.honeycomb.io"` | no | 40 | | [honeycomb\_api\_key](#input\_honeycomb\_api\_key) | Honeycomb API Key | `string` | n/a | yes | 41 | | [honeycomb\_dataset](#input\_honeycomb\_dataset) | Honeycomb Dataset where events will be sent. | `string` | `"lb-access-logs"` | no | 42 | | [kms\_key\_arn](#input\_kms\_key\_arn) | KMS Key ARN of key used to encript var.honeycomb\_api\_key. | `string` | `""` | no | 43 | | [lambda\_function\_architecture](#input\_lambda\_function\_architecture) | Instruction set architecture for your Lambda function. | `string` | `"amd64"` | no | 44 | | [lambda\_function\_memory](#input\_lambda\_function\_memory) | Memory allocated to the Lambda function in MB. Must be between 128 and 10,240 (10GB), in 64MB increments. | `number` | `192` | no | 45 | | [lambda\_function\_timeout](#input\_lambda\_function\_timeout) | Timeout in seconds for lambda function. | `number` | `600` | no | 46 | | [lambda\_package\_bucket](#input\_lambda\_package\_bucket) | Internal. Override S3 bucket where lambda function zip is located. | `string` | `""` | no | 47 | | [lambda\_package\_key](#input\_lambda\_package\_key) | Internal. Override S3 key where lambda function zip is located. | `string` | `""` | no | 48 | | [line\_filter\_rules](#input\_line\_filter\_rules) | Rules for filtering lines. MatchLinePatterns will keep lines based on their content. FilterLinePatterns will drop lines based on their content. |
list(object({
Prefix : string,
MatchLinePatterns : list(string),
FilterLinePatterns : list(string),
}))
| `[]` | no | 49 | | [name](#input\_name) | A name for this integration.
Used for the lambda name, so should be unique within your AWS account. | `string` | n/a | yes | 50 | | [parser\_type](#input\_parser\_type) | The type of logfile to parse. | `string` | n/a | yes | 51 | | [rename\_fields](#input\_rename\_fields) | Map of fields to rename, old -> new. | `map(string)` | `{}` | no | 52 | | [s3\_bucket\_arn](#input\_s3\_bucket\_arn) | The full ARN of the bucket storing load balancer access logs. | `string` | n/a | yes | 53 | | [s3\_filter\_prefix](#input\_s3\_filter\_prefix) | Prefix within logs bucket to restrict processing. | `string` | `""` | no | 54 | | [s3\_filter\_suffix](#input\_s3\_filter\_suffix) | Suffix of files that should be processed. | `string` | `".gz"` | no | 55 | | [sample\_rate](#input\_sample\_rate) | Sample rate. See https://honeycomb.io/docs/guides/sampling/. | `number` | `1` | no | 56 | | [sample\_rate\_rules](#input\_sample\_rate\_rules) | Extra rules for determining sample rates. Prefix will match objects based on their prefix. Order matters - first matching rule wins. |
list(object({
Prefix : string,
SampleRate : number,
}))
| `[]` | no | 57 | | [tags](#input\_tags) | Tags to add to resources created by this module. | `map(string)` | `null` | no | 58 | | [vpc\_security\_group\_ids](#input\_vpc\_security\_group\_ids) | List of security group ids when Lambda Function should run in the VPC. | `list(string)` | `null` | no | 59 | | [vpc\_subnet\_ids](#input\_vpc\_subnet\_ids) | List of subnet ids when Lambda Function should run in the VPC. Usually private or intra subnets. | `list(string)` | `null` | no | 60 | 61 | ## Outputs 62 | 63 | No outputs. 64 | 65 | -------------------------------------------------------------------------------- /modules/rds-logs/USAGE.md: -------------------------------------------------------------------------------- 1 | 2 | ## Requirements 3 | 4 | | Name | Version | 5 | |------|---------| 6 | | [terraform](#requirement\_terraform) | >= 1.0 | 7 | | [aws](#requirement\_aws) | >= 5.0 | 8 | 9 | ## Providers 10 | 11 | | Name | Version | 12 | |------|---------| 13 | | [aws](#provider\_aws) | >= 5.0 | 14 | 15 | ## Modules 16 | 17 | | Name | Source | Version | 18 | |------|--------|---------| 19 | | [cloudwatch\_logs](#module\_cloudwatch\_logs) | ../cloudwatch-logs | n/a | 20 | | [rds\_lambda\_transform](#module\_rds\_lambda\_transform) | terraform-aws-modules/lambda/aws | 4.18.0 | 21 | 22 | ## Resources 23 | 24 | | Name | Type | 25 | |------|------| 26 | | [aws_iam_policy.lambda](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource | 27 | | [aws_caller_identity.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) | data source | 28 | | [aws_iam_policy_document.lambda](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source | 29 | | [aws_region.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/region) | data source | 30 | 31 | ## Inputs 32 | 33 | | Name | Description | Type | Default | Required | 34 | |------|-------------|------|---------|:--------:| 35 | | [agentless\_integrations\_version](#input\_agentless\_integrations\_version) | Version of https://github.com/honeycombio/agentless-integrations-for-aws to use. Default is LATEST, but note that specifying this does not automatically update the lambda to use the newest versions as they are released. | `string` | `"LATEST"` | no | 36 | | [db\_engine](#input\_db\_engine) | Engine type on your RDS database | `string` | n/a | yes | 37 | | [db\_log\_types](#input\_db\_log\_types) | n/a | `list(string)` | n/a | yes | 38 | | [db\_name](#input\_db\_name) | Name of your RDS database. | `string` | n/a | yes | 39 | | [honeycomb\_api\_host](#input\_honeycomb\_api\_host) | If you use a Secure Tenancy or other proxy, put its schema://host[:port] here. | `string` | `"https://api.honeycomb.io"` | no | 40 | | [honeycomb\_api\_key](#input\_honeycomb\_api\_key) | Your Honeycomb team's API key. | `string` | n/a | yes | 41 | | [honeycomb\_dataset\_name](#input\_honeycomb\_dataset\_name) | Your Honeycomb dataset name. | `string` | n/a | yes | 42 | | [http\_buffering\_interval](#input\_http\_buffering\_interval) | Kinesis Firehose http buffer interval, in seconds. | `number` | `60` | no | 43 | | [http\_buffering\_size](#input\_http\_buffering\_size) | Kinesis Firehose http buffer size, in MiB. | `number` | `15` | no | 44 | | [lambda\_function\_architecture](#input\_lambda\_function\_architecture) | Instruction set architecture for your Lambda function. 
| `string` | `"amd64"` | no | 45 | | [lambda\_function\_memory](#input\_lambda\_function\_memory) | Memory allocated to the Lambda function in MB. Must be between 128 and 10,240 (10GB), in 64MB increments. | `number` | `192` | no | 46 | | [lambda\_function\_timeout](#input\_lambda\_function\_timeout) | Timeout in seconds for lambda function. | `number` | `600` | no | 47 | | [lambda\_package\_bucket](#input\_lambda\_package\_bucket) | Internal. Override S3 bucket where lambda function zip is located. | `string` | `""` | no | 48 | | [lambda\_package\_key](#input\_lambda\_package\_key) | Internal. Override S3 key where lambda function zip is located. | `string` | `""` | no | 49 | | [log\_subscription\_filter\_pattern](#input\_log\_subscription\_filter\_pattern) | A valid CloudWatch Logs filter pattern for subscribing to a filtered stream of log events. Defaults to empty string to match everything. For more information, see the Amazon CloudWatch Logs User Guide. | `string` | `""` | no | 50 | | [name](#input\_name) | A name for this CloudWatch Kinesis Firehose Stream. | `string` | n/a | yes | 51 | | [s3\_backup\_mode](#input\_s3\_backup\_mode) | Should we only backup to S3 data that failed delivery, or all data? | `string` | `"FailedDataOnly"` | no | 52 | | [s3\_buffer\_interval](#input\_s3\_buffer\_interval) | In seconds. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html | `number` | `400` | no | 53 | | [s3\_buffer\_size](#input\_s3\_buffer\_size) | In MiB. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html | `number` | `10` | no | 54 | | [s3\_compression\_format](#input\_s3\_compression\_format) | May be GZIP, Snappy, Zip, or Hadoop-Compatiable Snappy. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html | `string` | `"GZIP"` | no | 55 | | [s3\_failure\_bucket\_arn](#input\_s3\_failure\_bucket\_arn) | ARN of the S3 bucket that will store any logs that failed to be sent to Honeycomb. | `string` | n/a | yes | 56 | | [s3\_force\_destroy](#input\_s3\_force\_destroy) | By default, AWS will decline to delete S3 buckets that are not empty:
`BucketNotEmpty: The bucket you tried to delete is not empty`. These buckets
are used for backup if delivery or processing fail.
To allow this module's resources to be removed, we've set force\_destroy =
true, allowing non-empty buckets to be deleted. If you want to block this and
preserve those failed deliveries, you can set this value to false, though that
will leave terraform unable to cleanly destroy the module. | `bool` | `true` | no | 57 | | [tags](#input\_tags) | Tags to add to resources created by this module. | `map(string)` | `null` | no | 58 | 59 | ## Outputs 60 | 61 | No outputs. 62 | 63 | -------------------------------------------------------------------------------- /modules/rds-logs/variables.tf: -------------------------------------------------------------------------------- 1 | variable "name" { 2 | type = string 3 | description = "A name for this CloudWatch Kinesis Firehose Stream." 4 | 5 | validation { 6 | condition = length(var.name) >= 1 && length(var.name) <= 32 7 | error_message = "We use var.name as a name_prefix, so it must be 1-32 characters in length." 8 | } 9 | } 10 | 11 | variable "db_name" { 12 | type = string 13 | description = "Name of your RDS database." 14 | } 15 | 16 | variable "db_engine" { 17 | type = string 18 | description = "Engine type on your RDS database" 19 | validation { 20 | condition = contains(["aurora-mysql", "aurora-postgresql", "mariadb", "sqlserver", "mysql", "oracle", "postgresql"], 21 | var.db_engine) 22 | error_message = "Not a valid database engine." 23 | } 24 | } 25 | 26 | variable "db_log_types" { 27 | type = list(string) 28 | } 29 | 30 | variable "honeycomb_dataset_name" { 31 | type = string 32 | description = "Your Honeycomb dataset name." 33 | } 34 | 35 | variable "honeycomb_api_key" { 36 | type = string 37 | description = "Your Honeycomb team's API key." 38 | sensitive = true 39 | } 40 | 41 | variable "s3_failure_bucket_arn" { 42 | type = string 43 | description = "ARN of the S3 bucket that will store any logs that failed to be sent to Honeycomb." 44 | } 45 | 46 | # Optional variables for customer configuration 47 | variable "log_subscription_filter_pattern" { 48 | type = string 49 | description = "A valid CloudWatch Logs filter pattern for subscribing to a filtered stream of log events. Defaults to empty string to match everything. For more information, see the Amazon CloudWatch Logs User Guide." 50 | default = "" 51 | } 52 | 53 | variable "honeycomb_api_host" { 54 | type = string 55 | default = "https://api.honeycomb.io" 56 | description = "If you use a Secure Tenancy or other proxy, put its schema://host[:port] here." 57 | } 58 | 59 | variable "s3_buffer_size" { 60 | type = number 61 | default = 10 62 | description = "In MiB. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html" 63 | 64 | validation { 65 | condition = var.s3_buffer_size >= 1 && var.s3_buffer_size <= 128 66 | error_message = "The s3_buffer_size must be 1-128 MiBs." 67 | } 68 | } 69 | 70 | variable "s3_buffer_interval" { 71 | type = number 72 | default = 400 73 | description = "In seconds. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html" 74 | 75 | validation { 76 | condition = var.s3_buffer_interval >= 60 && var.s3_buffer_interval <= 900 77 | error_message = "The s3_buffer_interval must be 60-900 seconds." 78 | } 79 | } 80 | 81 | variable "s3_compression_format" { 82 | type = string 83 | default = "GZIP" 84 | description = "May be GZIP, Snappy, Zip, or Hadoop-Compatible Snappy. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html" 85 | 86 | validation { 87 | condition = contains(["GZIP", 88 | "Snappy", 89 | "Zip", 90 | "Hadoop-Compatible Snappy"], 91 | var.s3_compression_format) 92 | error_message = "Not an allowed compression format." 93 | } 94 | } 95 | 
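# Editor's illustrative sketch (not part of the module source): a minimal,
# hypothetical invocation of this module for MySQL slow-query logs, mirroring
# the root README example (the database name and bucket variable are placeholders):
#
#   module "rds_logs" {
#     source                 = "honeycombio/integrations/aws//modules/rds-logs"
#     name                   = "honeycomb-rds-logs"
#     db_name                = "my-database"
#     db_engine              = "mysql"
#     db_log_types           = ["slowquery"]
#     honeycomb_dataset_name = "rds-logs"
#     honeycomb_api_key      = var.honeycomb_api_key
#     s3_failure_bucket_arn  = var.s3_failure_bucket_arn
#   }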
96 | variable "s3_backup_mode" { 97 | type = string 98 | default = "FailedDataOnly" 99 | description = "Should we only backup to S3 data that failed delivery, or all data?" 100 | 101 | validation { 102 | condition = contains(["FailedDataOnly", "AllData"], 103 | var.s3_backup_mode) 104 | error_message = "Not an allowed s3_backup_mode." 105 | } 106 | } 107 | 108 | variable "s3_force_destroy" { 109 | type = bool 110 | default = true 111 | description = <<EOT 112 | By default, AWS will decline to delete S3 buckets that are not empty: 113 | `BucketNotEmpty: The bucket you tried to delete is not empty`. These buckets 114 | are used for backup if delivery or processing fail. 115 | 116 | To allow this module's resources to be removed, we've set force_destroy = 117 | true, allowing non-empty buckets to be deleted. If you want to block this and 118 | preserve those failed deliveries, you can set this value to false, though that 119 | will leave terraform unable to cleanly destroy the module. 120 | EOT 121 | } 122 | 123 | variable "tags" { 124 | type = map(string) 125 | default = null 126 | description = "Tags to add to resources created by this module." 127 | } 128 | 129 | variable "http_buffering_size" { 130 | type = number 131 | default = 15 132 | description = "Kinesis Firehose http buffer size, in MiB." 133 | } 134 | 135 | variable "http_buffering_interval" { 136 | type = number 137 | default = 60 138 | description = "Kinesis Firehose http buffer interval, in seconds." 139 | } 140 | 141 | variable "agentless_integrations_version" { 142 | type = string 143 | default = "LATEST" 144 | description = "Version of https://github.com/honeycombio/agentless-integrations-for-aws to use. Default is LATEST, but note that specifying this does not automatically update the lambda to use the newest versions as they are released." 145 | } 146 | 147 | variable "lambda_function_architecture" { 148 | type = string 149 | default = "amd64" 150 | description = "Instruction set architecture for your Lambda function." 151 | } 152 | 153 | variable "lambda_function_memory" { 154 | type = number 155 | default = 192 156 | description = "Memory allocated to the Lambda function in MB. Must be between 128 and 10,240 (10GB), in 64MB increments." 157 | } 158 | 159 | variable "lambda_function_timeout" { 160 | type = number 161 | default = 600 162 | description = "Timeout in seconds for lambda function." 163 | } 164 | 165 | variable "lambda_package_bucket" { 166 | type = string 167 | default = "" 168 | description = "Internal. Override S3 bucket where lambda function zip is located." 169 | } 170 | 171 | variable "lambda_package_key" { 172 | type = string 173 | default = "" 174 | description = "Internal. Override S3 key where lambda function zip is located." 175 | } 176 | -------------------------------------------------------------------------------- /modules/cloudwatch-logs/USAGE.md: -------------------------------------------------------------------------------- 1 | 2 | ## Requirements 3 | 4 | | Name | Version | 5 | |------|---------| 6 | | [terraform](#requirement\_terraform) | >= 1.0 | 7 | | [aws](#requirement\_aws) | >= 5.0 | 8 | 9 | ## Providers 10 | 11 | | Name | Version | 12 | |------|---------| 13 | | [aws](#provider\_aws) | >= 5.0 | 14 | 15 | ## Modules 16 | 17 | | Name | Source | Version | 18 | |------|--------|---------| 19 | | [kfh](#module\_kfh) | ../kinesis-firehose-honeycomb | n/a | 20 | 21 | ## Resources 22 | 23 | | Name | Type | 24 | |------|------| 25 | | [aws_cloudwatch_log_subscription_filter.filters](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_log_subscription_filter) | resource | 26 | | [aws_cloudwatch_log_subscription_filter.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_log_subscription_filter) | resource | 27 | | [aws_iam_role.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) | resource | 28 | | [aws_iam_role_policy.cwl_policy](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy) | resource | 29 | | [aws_caller_identity.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) | data source | 30 | | [aws_iam_policy_document.assume_role](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source | 31 | | [aws_iam_policy_document.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source | 32 | | [aws_region.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/region) | data source | 33 | 34 | ## Inputs 35 | 36 | | Name | Description | Type | Default | Required | 37 | |------|-------------|------|---------|:--------:| 38 | | [additional\_destinations](#input\_additional\_destinations) | Additional sinks can be configured here. |
list(object({
honeycomb_dataset_name = string,
honeycomb_api_key = string,
honeycomb_api_host = string,
}))
| `[]` | no | 39 | | [cloudwatch\_log\_groups](#input\_cloudwatch\_log\_groups) | CloudWatch Log Group names to stream to Honeycomb | `list(string)` | n/a | yes | 40 | | [enable\_lambda\_transform](#input\_enable\_lambda\_transform) | Enable a Lambda transform on the Kinesis Firehose to preprocess and structure the logs | `bool` | `false` | no | 41 | | [honeycomb\_api\_host](#input\_honeycomb\_api\_host) | If you use a Secure Tenancy or other proxy, put its schema://host[:port] here. | `string` | `"https://api.honeycomb.io"` | no | 42 | | [honeycomb\_api\_key](#input\_honeycomb\_api\_key) | Your Honeycomb team's API key. | `string` | n/a | yes | 43 | | [honeycomb\_dataset\_name](#input\_honeycomb\_dataset\_name) | Your Honeycomb dataset name. | `string` | n/a | yes | 44 | | [http\_buffering\_interval](#input\_http\_buffering\_interval) | Kinesis Firehose http buffer interval, in seconds. | `number` | `60` | no | 45 | | [http\_buffering\_size](#input\_http\_buffering\_size) | Kinesis Firehose http buffer size, in MiB. | `number` | `15` | no | 46 | | [lambda\_transform\_arn](#input\_lambda\_transform\_arn) | If enable\_lambda\_transform is set to true, specify a valid arn | `string` | `""` | no | 47 | | [log\_subscription\_filter\_pattern](#input\_log\_subscription\_filter\_pattern) | A valid CloudWatch Logs filter pattern for subscribing to a filtered stream of log events. Defaults to empty string to match everything. For more information, see the Amazon CloudWatch Logs User Guide. | `string` | `""` | no | 48 | | [name](#input\_name) | A name for this CloudWatch Kinesis Firehose Stream. | `string` | `"honeycomb-cloudwatch-logs"` | no | 49 | | [s3\_backup\_mode](#input\_s3\_backup\_mode) | Should we only backup to S3 data that failed delivery, or all data? | `string` | `"FailedDataOnly"` | no | 50 | | [s3\_buffer\_interval](#input\_s3\_buffer\_interval) | In seconds. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html | `number` | `400` | no | 51 | | [s3\_buffer\_size](#input\_s3\_buffer\_size) | In MiB. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html | `number` | `10` | no | 52 | | [s3\_compression\_format](#input\_s3\_compression\_format) | May be GZIP, Snappy, Zip, or Hadoop-Compatible Snappy. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html | `string` | `"GZIP"` | no | 53 | | [s3\_failure\_bucket\_arn](#input\_s3\_failure\_bucket\_arn) | ARN of the S3 bucket that will store any logs that failed to be sent to Honeycomb. | `string` | n/a | yes | 54 | | [s3\_force\_destroy](#input\_s3\_force\_destroy) | By default, AWS will decline to delete S3 buckets that are not empty:
`BucketNotEmpty: The bucket you tried to delete is not empty`. These buckets
are used for backup if delivery or processing fail.
To allow this module's resources to be removed, we've set force\_destroy =
true, allowing non-empty buckets to be deleted. If you want to block this and
preserve those failed deliveries, you can set this value to false, though that
will leave terraform unable to cleanly destroy the module. | `bool` | `true` | no | 55 | | [tags](#input\_tags) | A map of tags to apply to resources created by this module. | `map(string)` | `{}` | no | 56 | | [use\_order\_independent\_filter\_resource\_naming](#input\_use\_order\_independent\_filter\_resource\_naming) | Use order-independent naming for log group subscription filter resources. | `bool` | `false` | no | 57 | 58 | ## Outputs 59 | 60 | | Name | Description | 61 | |------|-------------| 62 | | [cloudwatch\_log\_subscription\_filters](#output\_cloudwatch\_log\_subscription\_filters) | n/a | 63 | -------------------------------------------------------------------------------- /modules/cloudwatch-metrics/USAGE.md: -------------------------------------------------------------------------------- 1 | 2 | ## Requirements 3 | 4 | | Name | Version | 5 | |------|---------| 6 | | [terraform](#requirement\_terraform) | >= 1.0 | 7 | | [aws](#requirement\_aws) | >= 5.0 | 8 | 9 | ## Providers 10 | 11 | | Name | Version | 12 | |------|---------| 13 | | [aws](#provider\_aws) | >= 5.0 | 14 | 15 | ## Modules 16 | 17 | | Name | Source | Version | 18 | |------|--------|---------| 19 | | [kfh](#module\_kfh) | ../kinesis-firehose-honeycomb | n/a | 20 | 21 | ## Resources 22 | 23 | | Name | Type | 24 | |------|------| 25 | | [aws_cloudwatch_metric_stream.metric-stream](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_metric_stream) | resource | 26 | | [aws_iam_role.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) | resource | 27 | | [aws_iam_role_policy.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy) | resource | 28 | | [aws_iam_policy_document.assume_role](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source | 29 | | [aws_iam_policy_document.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source | 30 | 31 | ## Inputs 32 | 33 | | Name | Description | Type | Default | Required | 34 | |------|-------------|------|---------|:--------:| 35 | | [additional\_destinations](#input\_additional\_destinations) | Additional sinks can be configured here. |
list(object({
honeycomb_dataset_name = string,
honeycomb_api_key = string,
honeycomb_api_host = string,
}))
| `[]` | no | 36 | | [exclude\_filters](#input\_exclude\_filters) | An optional list of exclusive CloudWatch Metric filters. If set, we'll only stream metrics that do not match these namespace and metric names.
Pass an empty list (`[]`) to `metric_names` to exclude all metrics for the namespace.
Mutually exclusive with `include_filters`. |
list(object({
namespace = string
metric_names = list(string)
}))
| `[]` | no | 37 | | [honeycomb\_api\_host](#input\_honeycomb\_api\_host) | If you use a Secure Tenancy or other proxy, put its schema://host[:port] here. | `string` | `"https://api.honeycomb.io"` | no | 38 | | [honeycomb\_api\_key](#input\_honeycomb\_api\_key) | Your Honeycomb team's API key. | `string` | n/a | yes | 39 | | [honeycomb\_dataset\_name](#input\_honeycomb\_dataset\_name) | Your Honeycomb dataset name. | `string` | n/a | yes | 40 | | [http\_buffering\_interval](#input\_http\_buffering\_interval) | Kinesis Firehose http buffer interval, in seconds. | `number` | `60` | no | 41 | | [http\_buffering\_size](#input\_http\_buffering\_size) | Kinesis Firehose http buffer size, in MiB. | `number` | `15` | no | 42 | | [include\_filters](#input\_include\_filters) | An optional list of inclusive CloudWatch Metric filters. If set, we'll only stream metrics matching these namespace and metric names.
Pass an empty list (`[]`) to `metric_names` to include all metrics for the namespace.
Mutually exclusive with `exclude_filters`. |
list(object({
namespace = string
metric_names = list(string)
}))
| `[]` | no | 43 | | [include\_linked\_accounts\_metrics](#input\_include\_linked\_accounts\_metrics) | If you are creating a metric stream in a monitoring account, specify true to include metrics from source accounts that are linked to this monitoring account, in the metric stream. | `bool` | `false` | no | 44 | | [name](#input\_name) | A unique name for this CloudWatch Metric Stream. | `string` | `"honeycomb-cloudwatch-metrics"` | no | 45 | | [namespace\_exclude\_filters](#input\_namespace\_exclude\_filters) | DEPRECATED: use `exclude_filters` instead.

An optional list of CloudWatch Metric namespaces to exclude. If set, we'll only stream metrics that are not in these namespaces.
Mutually exclusive with `namespace_include_filters`. | `list(string)` | `[]` | no | 46 | | [namespace\_include\_filters](#input\_namespace\_include\_filters) | DEPRECATED: use `include_filters` instead.

An optional list of CloudWatch Metric namespaces to include. If set, we'll only stream metrics from these namespaces.
Mutually exclusive with `namespace_exclude_filters`. | `list(string)` | `[]` | no | 47 | | [output\_format](#input\_output\_format) | Output format of metrics. You should probably not modify this value; the default format is supported, but others may not be. | `string` | `"opentelemetry1.0"` | no | 48 | | [s3\_backup\_mode](#input\_s3\_backup\_mode) | Should we only backup to S3 data that failed delivery, or all data? | `string` | `"FailedDataOnly"` | no | 49 | | [s3\_buffer\_interval](#input\_s3\_buffer\_interval) | In seconds. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html | `number` | `400` | no | 50 | | [s3\_buffer\_size](#input\_s3\_buffer\_size) | In MiB. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html | `number` | `10` | no | 51 | | [s3\_compression\_format](#input\_s3\_compression\_format) | May be GZIP, Snappy, Zip, or Hadoop-Compatible Snappy. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html | `string` | `"GZIP"` | no | 52 | | [s3\_failure\_bucket\_arn](#input\_s3\_failure\_bucket\_arn) | ARN of the S3 bucket that will store any logs that failed to be sent to Honeycomb. | `string` | n/a | yes | 53 | | [tags](#input\_tags) | A map of tags to apply to resources created by this module. | `map(string)` | `{}` | no | 54 | 55 | ## Outputs 56 | 57 | | Name | Description | 58 | |------|-------------| 59 | | [cloudwatch\_metric\_stream\_arn](#output\_cloudwatch\_metric\_stream\_arn) | n/a | 60 | | [cloudwatch\_metric\_stream\_name](#output\_cloudwatch\_metric\_stream\_name) | n/a | 61 | 62 | -------------------------------------------------------------------------------- /USAGE.md: -------------------------------------------------------------------------------- 1 | 2 | ## Requirements 3 | 4 | | Name | Version | 5 | |------|---------| 6 | | [terraform](#requirement\_terraform) | >= 1.0 | 7 | | [aws](#requirement\_aws) | >= 5.0 | 8 | 9 | ## Providers 10 | 11 | | Name | Version | 12 | |------|---------| 13 | | [aws](#provider\_aws) | >= 5.0 | 14 | 15 | ## Modules 16 | 17 | | Name | Source | Version | 18 | |------|--------|---------| 19 | | [cloudwatch\_logs](#module\_cloudwatch\_logs) | ./modules/cloudwatch-logs | n/a | 20 | | [cloudwatch\_metrics](#module\_cloudwatch\_metrics) | ./modules/cloudwatch-metrics | n/a | 21 | | [failure\_bucket](#module\_failure\_bucket) | terraform-aws-modules/s3-bucket/aws | ~> 3.0 | 22 | | [rds\_logs](#module\_rds\_logs) | ./modules/rds-logs | n/a | 23 | | [s3\_logfile](#module\_s3\_logfile) | ./modules/s3-logfile | n/a | 24 | 25 | ## Resources 26 | 27 | | Name | Type | 28 | |------|------| 29 | | [aws_region.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/region) | data source | 30 | 31 | ## Inputs 32 | 33 | | Name | Description | Type | Default | Required | 34 | |------|-------------|------|---------|:--------:| 35 | | [cloudwatch\_log\_groups](#input\_cloudwatch\_log\_groups) | CloudWatch Log Group names to stream to Honeycomb | `list(string)` | `[]` | no | 36 | | [cloudwatch\_logs\_name](#input\_cloudwatch\_logs\_name) | Name for the CloudWatch Logs integration resources | `string` | `"honeycomb-cloudwatch-logs"` | no | 37 | | [cloudwatch\_metrics\_name](#input\_cloudwatch\_metrics\_name) | Name for the CloudWatch Metrics integration resources | `string` | `"honeycomb-cloudwatch-metrics"` | no | 38 | | [delivery\_failure\_s3\_bucket\_name](#input\_delivery\_failure\_s3\_bucket\_name) | Name for S3 bucket that will be created to hold 
Kinesis Firehose delivery failures. | `string` | `"honeycomb-firehose-failures-{REGION}"` | no | 39 | | [enable\_cloudwatch\_metrics](#input\_enable\_cloudwatch\_metrics) | n/a | `bool` | `false` | no | 40 | | [enable\_rds\_logs](#input\_enable\_rds\_logs) | n/a | `bool` | `false` | no | 41 | | [environment](#input\_environment) | The environment this code is running in. If set, will be added as 'env' to each event. | `string` | `""` | no | 42 | | [honeycomb\_api\_host](#input\_honeycomb\_api\_host) | If you use a Secure Tenancy or other proxy, put its schema://host[:port] here. | `string` | `"https://api.honeycomb.io"` | no | 43 | | [honeycomb\_api\_key](#input\_honeycomb\_api\_key) | Your Honeycomb team's API key. | `string` | n/a | yes | 44 | | [honeycomb\_dataset](#input\_honeycomb\_dataset) | Honeycomb Dataset where events will be sent. | `string` | `"lb-access-logs"` | no | 45 | | [http\_buffering\_interval](#input\_http\_buffering\_interval) | Kinesis Firehose http buffer interval, in seconds. | `number` | `60` | no | 46 | | [http\_buffering\_size](#input\_http\_buffering\_size) | Kinesis Firehose http buffer size, in MiB. | `number` | `15` | no | 47 | | [rds\_db\_engine](#input\_rds\_db\_engine) | n/a | `string` | `""` | no | 48 | | [rds\_db\_log\_types](#input\_rds\_db\_log\_types) | n/a | `list(string)` | `[]` | no | 49 | | [rds\_db\_name](#input\_rds\_db\_name) | n/a | `string` | `""` | no | 50 | | [rds\_logs\_name](#input\_rds\_logs\_name) | Name for the RDS Logs integration resources | `string` | `"honeycomb-rds-cloudwatch-logs"` | no | 51 | | [s3\_backup\_mode](#input\_s3\_backup\_mode) | Should we only backup to S3 data that failed delivery, or all data? | `string` | `"FailedDataOnly"` | no | 52 | | [s3\_bucket\_arn](#input\_s3\_bucket\_arn) | The full ARN of the bucket storing logs - must pass s3\_parser\_type with this | `string` | `""` | no | 53 | | [s3\_buffer\_interval](#input\_s3\_buffer\_interval) | The Firehose S3 buffer interval (in seconds). See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html | `number` | `400` | no | 54 | | [s3\_buffer\_size](#input\_s3\_buffer\_size) | The size of the Firehose S3 buffer (in MiB). See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html | `number` | `10` | no | 55 | | [s3\_compression\_format](#input\_s3\_compression\_format) | The Firehose S3 compression format. May be GZIP, Snappy, Zip, or Hadoop-Compatible Snappy. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html | `string` | `"GZIP"` | no | 56 | | [s3\_filter\_prefix](#input\_s3\_filter\_prefix) | Prefix within logs bucket to restrict processing. | `string` | `""` | no | 57 | | [s3\_filter\_suffix](#input\_s3\_filter\_suffix) | Suffix of files that should be processed. | `string` | `".gz"` | no | 58 | | [s3\_force\_destroy](#input\_s3\_force\_destroy) | By default, AWS will decline to delete S3 buckets that are not empty:
`BucketNotEmpty: The bucket you tried to delete is not empty`. These buckets
are used for backup if delivery or processing fail.
To allow this module's resources to be removed, we've set force\_destroy =
true, allowing non-empty buckets to be deleted. If you want to block this and
preserve those failed deliveries, you can set this value to false, though that
will leave terraform unable to cleanly destroy the module. | `bool` | `true` | no | 59 | | [s3\_logfile\_name](#input\_s3\_logfile\_name) | Name for the S3 Logfile integration resources | `string` | `"honeycomb-s3-logfile"` | no | 60 | | [s3\_parser\_type](#input\_s3\_parser\_type) | The type of logfile to parse. | `string` | `""` | no | 61 | | [sample\_rate](#input\_sample\_rate) | Sample rate - used for S3 logfiles only. See https://honeycomb.io/docs/guides/sampling/. | `number` | `1` | no | 62 | | [tags](#input\_tags) | Tags to add to resources created by this module. | `map(string)` | `null` | no | 63 | | [vpc\_security\_group\_ids](#input\_vpc\_security\_group\_ids) | List of security group ids when Lambda Function should run in the VPC. | `list(string)` | `null` | no | 64 | | [vpc\_subnet\_ids](#input\_vpc\_subnet\_ids) | List of subnet ids when Lambda Function should run in the VPC. Usually private or intra subnets. | `list(string)` | `null` | no | 65 | 66 | ## Outputs 67 | 68 | No outputs. 69 | 70 | -------------------------------------------------------------------------------- /modules/kinesis-firehose-honeycomb/main.tf: -------------------------------------------------------------------------------- 1 | data "aws_region" "current" {} 2 | 3 | # Generate access key for OpenTelemetry collector if not provided 4 | resource "random_password" "otel_access_key" { 5 | count = length(local.destinations) > 1 && var.otel_access_key == "" ? 1 : 0 6 | length = 32 7 | special = false 8 | } 9 | 10 | locals { 11 | region = data.aws_region.current.region 12 | actual_otel_access_key = var.otel_access_key != "" ? var.otel_access_key : (length(local.destinations) > 1 ? random_password.otel_access_key[0].result : "") 13 | default_lambda_parameters = [{ "name" = "BufferSizeInMBs", "value" = 2 }, { "name" = "BufferIntervalInSeconds", "value" = 61 }] 14 | user_lambda_parameters = [for k, v in var.lambda_processor_parameters : { "name" = k, "value" = v }] 15 | 16 | lambda_parameters = concat( 17 | [{ "name" = "LambdaArn", "value" = "${var.lambda_transform_arn}:$LATEST" }], 18 | local.default_lambda_parameters, 19 | local.user_lambda_parameters 20 | ) 21 | 22 | destinations = concat([{ 23 | honeycomb_dataset_name = var.honeycomb_dataset_name 24 | honeycomb_api_key = var.honeycomb_api_key 25 | honeycomb_api_host = var.honeycomb_api_host 26 | }], 27 | var.additional_destinations 28 | ) 29 | 30 | # Create OpenTelemetry collector configuration as a structured object 31 | otel_config = { 32 | receivers = { 33 | awsfirehose = { 34 | endpoint = "0.0.0.0:4433" 35 | record_type = "otlp_v1" 36 | access_key = local.actual_otel_access_key 37 | } 38 | } 39 | 40 | exporters = { 41 | for idx, dest in local.destinations : "otlphttp/${idx}" => { 42 | endpoint = "${dest.honeycomb_api_host}/v1/metrics" 43 | headers = { 44 | "x-honeycomb-team" = dest.honeycomb_api_key 45 | "x-honeycomb-dataset" = dest.honeycomb_dataset_name 46 | } 47 | } 48 | } 49 | 50 | processors = { 51 | batch = { 52 | timeout = "300s" 53 | send_batch_size = 100000 54 | } 55 | } 56 | 57 | service = { 58 | pipelines = { 59 | metrics = { 60 | receivers = ["awsfirehose"] 61 | processors = ["batch"] 62 | exporters = [for idx, dest in local.destinations : "otlphttp/${idx}"] 63 | } 64 | } 65 | } 66 | } 67 | } 68 | 69 | moved { 70 | from = aws_kinesis_firehose_delivery_stream.http_stream 71 | to = aws_kinesis_firehose_delivery_stream.stream 72 | } 73 | 74 | resource "aws_kinesis_firehose_delivery_stream" "stream" { 75 | name = length(local.destinations) == 1 ? 
var.name : "${var.name}-collector" 76 | destination = "http_endpoint" 77 | 78 | http_endpoint_configuration { 79 | # Single destination: direct to Honeycomb, Multiple destinations: via collector 80 | url = length(local.destinations) == 1 ? ( 81 | "${local.destinations[0].honeycomb_api_host}/1/kinesis_events/${local.destinations[0].honeycomb_dataset_name}" 82 | ) : ( 83 | "https://${aws_apprunner_service.otel_collector[0].service_url}/" 84 | ) 85 | 86 | name = length(local.destinations) == 1 ? "honeycomb" : "otel-collector" 87 | 88 | access_key = length(local.destinations) == 1 ? ( 89 | local.destinations[0].honeycomb_api_key 90 | ) : ( 91 | local.actual_otel_access_key 92 | ) 93 | 94 | role_arn = aws_iam_role.firehose_s3_role.arn 95 | s3_backup_mode = var.s3_backup_mode 96 | buffering_size = var.http_buffering_size 97 | buffering_interval = var.http_buffering_interval 98 | 99 | s3_configuration { 100 | role_arn = aws_iam_role.firehose_s3_role.arn 101 | bucket_arn = var.s3_failure_bucket_arn 102 | 103 | buffering_size = var.s3_buffer_size 104 | buffering_interval = var.s3_buffer_interval 105 | compression_format = var.s3_compression_format 106 | } 107 | 108 | request_configuration { 109 | content_encoding = "GZIP" 110 | } 111 | 112 | dynamic "processing_configuration" { 113 | for_each = var.lambda_transform_arn != "" ? ["allow_transform"] : [] 114 | content { 115 | enabled = var.enable_lambda_transform 116 | 117 | processors { 118 | type = "Lambda" 119 | 120 | dynamic "parameters" { 121 | for_each = local.lambda_parameters 122 | content { 123 | parameter_name = parameters.value.name 124 | parameter_value = parameters.value.value 125 | } 126 | } 127 | } 128 | } 129 | } 130 | } 131 | } 132 | 133 | resource "aws_apprunner_service" "otel_collector" { 134 | count = length(local.destinations) > 1 ? 
1 : 0 135 | service_name = "${var.name}-otel-collector" 136 | 137 | source_configuration { 138 | auto_deployments_enabled = false 139 | image_repository { 140 | image_configuration { 141 | port = "4433" 142 | runtime_environment_variables = { 143 | OTEL_CONFIG = jsonencode(local.otel_config) 144 | } 145 | start_command = "--config env:OTEL_CONFIG" 146 | } 147 | image_identifier = "public.ecr.aws/honeycombio/honeycomb-opentelemetry-collector:${var.otel_collector_version}" 148 | image_repository_type = "ECR_PUBLIC" 149 | } 150 | } 151 | 152 | instance_configuration { 153 | cpu = "0.25 vCPU" 154 | memory = "0.5 GB" 155 | } 156 | 157 | network_configuration { 158 | ip_address_type = "IPV4" 159 | ingress_configuration { 160 | is_publicly_accessible = true 161 | } 162 | } 163 | 164 | observability_configuration { 165 | observability_enabled = false 166 | } 167 | 168 | tags = var.tags 169 | } 170 | 171 | 172 | data "aws_iam_policy_document" "firehose-assume-role-policy" { 173 | statement { 174 | actions = ["sts:AssumeRole"] 175 | 176 | principals { 177 | type = "Service" 178 | identifiers = ["firehose.amazonaws.com"] 179 | } 180 | } 181 | } 182 | 183 | resource "aws_iam_role" "firehose_s3_role" { 184 | name_prefix = var.name 185 | assume_role_policy = data.aws_iam_policy_document.firehose-assume-role-policy.json 186 | } 187 | 188 | data "aws_iam_policy_document" "firehose_s3_policy_document" { 189 | statement { 190 | actions = [ 191 | "s3:AbortMultipartUpload", 192 | "s3:GetBucketLocation", 193 | "s3:GetObject", 194 | "s3:ListBucket", 195 | "s3:ListBucketMultipartUploads", 196 | "s3:PutObject" 197 | ] 198 | resources = [ 199 | var.s3_failure_bucket_arn, 200 | "${var.s3_failure_bucket_arn}/*" 201 | ] 202 | } 203 | } 204 | 205 | resource "aws_iam_role_policy" "firehose_s3_policy" { 206 | name = "firehose_s3_policy_${local.region}" 207 | role = aws_iam_role.firehose_s3_role.id 208 | policy = data.aws_iam_policy_document.firehose_s3_policy_document.json 209 | } 210 | 211 | data "aws_iam_policy_document" "firehose_lambda_policy_document" { 212 | statement { 213 | actions = [ 214 | "lambda:InvokeFunction", 215 | "lambda:GetFunctionConfiguration" 216 | ] 217 | resources = [ 218 | "${var.lambda_transform_arn}:*" 219 | ] 220 | } 221 | } 222 | 223 | resource "aws_iam_role_policy" "firehose_lambda_policy" { 224 | count = var.enable_lambda_transform ? 1 : 0 225 | name = "firehose_lambda_policy_${local.region}" 226 | role = aws_iam_role.firehose_s3_role.id 227 | policy = data.aws_iam_policy_document.firehose_lambda_policy_document.json 228 | } 229 | -------------------------------------------------------------------------------- /variables.tf: -------------------------------------------------------------------------------- 1 | variable "honeycomb_api_key" { 2 | type = string 3 | description = "Your Honeycomb team's API key." 4 | sensitive = true 5 | } 6 | 7 | variable "cloudwatch_log_groups" { 8 | type = list(string) 9 | description = "CloudWatch Log Group names to stream to Honeycomb" 10 | default = [] 11 | } 12 | 13 | variable "delivery_failure_s3_bucket_name" { 14 | type = string 15 | description = "Name for S3 bucket that will be created to hold Kinesis Firehose delivery failures." 16 | default = "honeycomb-firehose-failures-{REGION}" 17 | } 18 | 19 | variable "honeycomb_api_host" { 20 | type = string 21 | default = "https://api.honeycomb.io" 22 | description = "If you use a Secure Tenancy or other proxy, put its scheme://host[:port] here."
23 | } 24 | 25 | variable "s3_buffer_size" { 26 | type = number 27 | default = 10 28 | description = "The size of the Firehose S3 buffer (in MiB). See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html" 29 | 30 | validation { 31 | condition = var.s3_buffer_size >= 1 && var.s3_buffer_size <= 128 32 | error_message = "The s3_buffer_size must be 1-128 MiBs." 33 | } 34 | } 35 | 36 | variable "s3_buffer_interval" { 37 | type = number 38 | default = 400 39 | description = "The Firehose S3 buffer interval (in seconds). See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html" 40 | 41 | validation { 42 | condition = var.s3_buffer_interval >= 60 && var.s3_buffer_interval <= 900 43 | error_message = "The s3_buffer_interval must be 60-900 seconds." 44 | } 45 | } 46 | 47 | variable "s3_compression_format" { 48 | type = string 49 | default = "GZIP" 50 | description = "The Firehose S3 compression format. May be GZIP, Snappy, Zip, or Hadoop-Compatible Snappy. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html" 51 | 52 | validation { 53 | condition = contains(["GZIP", 54 | "Snappy", 55 | "Zip", 56 | "Hadoop-Compatible Snappy"], 57 | var.s3_compression_format) 58 | error_message = "Not an allowed compression format." 59 | } 60 | } 61 | 62 | variable "s3_backup_mode" { 63 | type = string 64 | default = "FailedDataOnly" 65 | description = "Should we only back up to S3 data that failed delivery, or all data?" 66 | 67 | validation { 68 | condition = contains(["FailedDataOnly", "AllData"], 69 | var.s3_backup_mode) 70 | error_message = "Not an allowed s3_backup_mode." 71 | } 72 | } 73 | 74 | variable "s3_force_destroy" { 75 | type = bool 76 | default = true 77 | description = < -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | 2 | # [v2.1.0 🛠️](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v2.1.0) - 2025-11-06 3 | 4 | ## Changes 5 | 6 | - feat: Add cloudwatch metrics filters variable to top level module (#96) | @mterhar 7 | 8 | 9 | # [v2.0.1 🛠️](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v2.0.1) - 2025-10-22 10 | 11 | ## Changes 12 | 13 | - fix: Wrap nonsensitive in a condition so it doesn't error if null (#103) | @kentquirk 14 | 15 | 16 | # [v2.0.0 🛠️](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v2.0.0) - 2025-10-13 17 | 18 | ⚠️ 19 | This release updates the hashicorp/aws dependency from ~> v5.0 to ~> v6.0, which introduces breaking changes to the AWS provider's API. 20 | There are no changes to the API of the Honeycomb module itself. 21 | Upgrade to this release of the Honeycomb module when you are ready to [migrate to v6 of the AWS provider](https://registry.terraform.io/providers/hashicorp/aws/6.16.0/docs/guides/version-6-upgrade).
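In `required_providers` terms, the bump amounts to a constraint change like the following — a generic Terraform sketch, not copied from this module's `versions.tf`:

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Previously "~> 5.0"; review the linked v6 upgrade guide before bumping.
      version = "~> 6.0"
    }
  }
}
```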
22 | 23 | ## Changes 24 | 25 | - maint(deps): Update hashicorp/AWS to v6 (#101) | @MikeGoldsmith 26 | 27 | [Changes][v2.0.0] 28 | 29 | 30 | # [v1.5.2 🛠️](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v1.5.2) - 2025-10-13 31 | 32 | ## Changes 33 | 34 | - maint(deps): revert upgrade to hashicorp/AWS module v6 (#99) | @MikeGoldsmith 35 | 36 | [Changes][v1.5.2] 37 | 38 | 39 | # [v1.5.1 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v1.5.1) - 2025-10-06 40 | 41 | ## Changes 42 | 43 | - maint: remove upper bound on hashicorp/aws provider version constraint (#94) | @robbkidd 44 | - feat(kinesis): add App Runner OpenTelemetry collector for multiplexing (#95) | @lizthegrey 45 | 46 | [Changes][v1.5.1] 47 | 48 | 49 | # [v1.5.0 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v1.5.0) - 2025-07-03 50 | 51 | ## Changes 52 | 53 | - feat(kinesis): allow multiple sinks for one firehose (#89) | @lizthegrey 54 | - feat: implements terraform to set LINE_FILTER_RULES (#87) | @mterhar 55 | - maint: Limit hashicorp/aws provider to 5.x versions (#91) | @MikeGoldsmith 56 | - maint: Update generated usage docs (#90) | @MikeGoldsmith 57 | 58 | [Changes][v1.5.0] 59 | 60 | 61 | # [v1.4.0 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v1.4.0) - 2025-03-18 62 | 63 | ## Changes 64 | 65 | NOTE: the cloudwatch-metrics output format `opentelemetry0.7` has been removed. 66 | If you previously used this output format value, you will need to update to the newer `opentelemetry1.0` format. 67 | 68 | - maint: Remove OpenTelemetry 0.7 data format [@MikeGoldsmith](https://github.com/MikeGoldsmith) ([#83](https://github.com/honeycombio/terraform-aws-integrations/issues/83)) 69 | 70 | ## 🚀 Features 71 | 72 | - feat: Allow resource names to be overridden when installing top-level module [@yotamat](https://github.com/yotamat) ([#85](https://github.com/honeycombio/terraform-aws-integrations/issues/85)) 73 | 74 | 75 | [Changes][v1.4.0] 76 | 77 | 78 | 79 | # [v1.3.1 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v1.3.1) - 2024-12-19 80 | 81 | ## Changes 82 | 83 | - maint: Fix formatting in cloudwatch logs [@MikeGoldsmith](https://github.com/MikeGoldsmith) ([#79](https://github.com/honeycombio/terraform-aws-integrations/issues/79)) 84 | - fix: Add opt-in config to create order independent log filters [@aburgel](https://github.com/aburgel) ([#78](https://github.com/honeycombio/terraform-aws-integrations/issues/78)) 85 | - fix: pin dependent modules to reduce churn [@jharley](https://github.com/jharley) ([#76](https://github.com/honeycombio/terraform-aws-integrations/issues/76)) 86 | - docs: update vulnerability reporting process [@robbkidd](https://github.com/robbkidd) ([#75](https://github.com/honeycombio/terraform-aws-integrations/issues/75)) 87 | 88 | 89 | [Changes][v1.3.1] 90 | 91 | 92 | 93 | # [v1.3.0 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v1.3.0) - 2024-04-18 94 | 95 | ## Changes 96 | 97 | NOTE: the default output format of the cloudwatch-metrics stream has been updated to `opentelemetry1.0`. 98 | If you had overridden this value, consider making the same change to avoid losing metric stream events in the future.
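For illustration, pinning the format explicitly might look like the sketch below; the `output_format` input name, the registry source path, and the resource references are assumptions — check `modules/cloudwatch-metrics/USAGE.md` for the module's actual interface:

```hcl
# Hypothetical pin of the metric stream output format; `output_format` and
# the module source path are assumed here, not confirmed by this changelog.
module "cloudwatch_metrics" {
  source = "honeycombio/integrations/aws//modules/cloudwatch-metrics"

  honeycomb_api_key      = var.honeycomb_api_key
  honeycomb_dataset_name = "cloudwatch-metrics"
  s3_failure_bucket_arn  = aws_s3_bucket.firehose_failures.arn # placeholder

  output_format = "opentelemetry1.0" # match the new default explicitly
}
```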
99 | 100 | ## 🚀 Features 101 | 102 | - cloudwatch-metrics - stream opentelemetry1.0 [@jharley](https://github.com/jharley) ([#70](https://github.com/honeycombio/terraform-aws-integrations/issues/70)) 103 | 104 | 105 | [Changes][v1.3.0] 106 | 107 | 108 | 109 | # [v1.2.1 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v1.2.1) - 2023-11-28 110 | 111 | ## Changes 112 | 113 | - bug: give lambda parameters a fixed order [@RainofTerra](https://github.com/RainofTerra) ([#64](https://github.com/honeycombio/terraform-aws-integrations/issues/64)) 114 | 115 | 116 | [Changes][v1.2.1] 117 | 118 | 119 | 120 | # [v1.2.0 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v1.2.0) - 2023-10-17 121 | 122 | ## Changes 123 | 124 | - maint: upgrade from go1.x to provided.al2 [@brookesargent](https://github.com/brookesargent) ([#65](https://github.com/honeycombio/terraform-aws-integrations/issues/65)) 125 | 126 | 127 | [Changes][v1.2.0] 128 | 129 | 130 | 131 | # [v1.1.0 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v1.1.0) - 2023-07-05 132 | 133 | ## Changes 134 | 135 | - feature: CloudWatch Metrics metric-level filter support [@jharley](https://github.com/jharley) ([#58](https://github.com/honeycombio/terraform-aws-integrations/issues/58)) 136 | 137 | [Changes][v1.1.0] 138 | 139 | 140 | 141 | # [v1.0.0 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v1.0.0) - 2023-06-23 142 | 143 | ## Changes 144 | 145 | - feature: support AWS Provider 5.x [@jharley](https://github.com/jharley) ([#53](https://github.com/honeycombio/terraform-aws-integrations/issues/53)) 146 | - feature: Moving `s3_configuration {}` from root block to `http_endpoint_configuration` [@ryan-keswick](https://github.com/ryan-keswick) ([#49](https://github.com/honeycombio/terraform-aws-integrations/issues/49)) 147 | 148 | 📣 This drops support for AWS Provider 4.x. If you are still on v4 of the AWS provider, please continue to use `v0.5.0` of this module. 149 | 150 | [Changes][v1.0.0] 151 | 152 | 153 | 154 | # [v0.5.0 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v0.5.0) - 2023-06-22 155 | 156 | ## Changes 157 | 158 | - feat: Provide interface to the sample rate rules to this module [@NLincoln](https://github.com/NLincoln) ([#47](https://github.com/honeycombio/terraform-aws-integrations/issues/47)) 159 | 160 | 161 | [Changes][v0.5.0] 162 | 163 | 164 | 165 | # [v0.4.0 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v0.4.0) - 2023-05-11 166 | 167 | ## Changes 168 | 169 | - fix: be explicit about S3 access policies [@dstrelau](https://github.com/dstrelau) ([#45](https://github.com/honeycombio/terraform-aws-integrations/issues/45)) 170 | - This is technically a breaking change, in that it changes the exact resources created by the root module, but 171 | AWS changed the defaults of S3 buckets such that the old version does not work. The new resources match the new AWS defaults and are functionally equivalent to the old ones (ensuring the S3 bucket is completely private). Only users of the root module should see a diff. Individual sub-modules are not affected.
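For context, "explicit" here means resources along these lines — an illustrative sketch with invented resource names, showing a failure bucket made private by explicit configuration rather than by account defaults, in the spirit of #45:

```hcl
# Illustrative only: an S3 bucket locked down explicitly; these resource
# names are placeholders, not the names the root module actually creates.
resource "aws_s3_bucket" "firehose_failures" {
  bucket_prefix = "honeycomb-firehose-failures-"
}

resource "aws_s3_bucket_public_access_block" "firehose_failures" {
  bucket                  = aws_s3_bucket.firehose_failures.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```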
172 | 173 | [Changes][v0.4.0] 174 | 175 | 176 | 177 | # [v0.3.0 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v0.3.0) - 2023-03-16 178 | 179 | ## Changes 180 | 181 | - Supporting running lambda in a vpc [@ryan-keswick](https://github.com/ryan-keswick) ([#41](https://github.com/honeycombio/terraform-aws-integrations/issues/41)) 182 | - [build] add repo name and repo link to asana task [@brookesargent](https://github.com/brookesargent) ([#39](https://github.com/honeycombio/terraform-aws-integrations/issues/39)) 183 | 184 | 185 | [Changes][v0.3.0] 186 | 187 | 188 | 189 | # [v0.2.8 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v0.2.8) - 2022-12-15 190 | 191 | ## Changes 192 | 193 | - Remove 'pro/enterprise only' disclaimer for metrics [@jharley](https://github.com/jharley) ([#37](https://github.com/honeycombio/terraform-aws-integrations/issues/37)) 194 | - [ci] update workflow to work for pull\_request\_target [@brookesargent](https://github.com/brookesargent) ([#36](https://github.com/honeycombio/terraform-aws-integrations/issues/36)) 195 | - [ci] send Github issues and PRs to Asana [@brookesargent](https://github.com/brookesargent) ([#35](https://github.com/honeycombio/terraform-aws-integrations/issues/35)) 196 | 197 | 198 | [Changes][v0.2.8] 199 | 200 | 201 | 202 | # [v0.2.7 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v0.2.7) - 2022-11-23 203 | 204 | ## Changes 205 | 206 | - README Improvements [@mjingle](https://github.com/mjingle) ([#32](https://github.com/honeycombio/terraform-aws-integrations/issues/32)) 207 | 208 | 209 | [Changes][v0.2.7] 210 | 211 | 212 | 213 | # [v0.2.6 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v0.2.6) - 2022-11-22 214 | 215 | ## Changes 216 | 217 | - Include readmes for the examples and OSS metadata [@mjayaram](https://github.com/mjayaram) ([#30](https://github.com/honeycombio/terraform-aws-integrations/issues/30)) 218 | - fix postgresql example [@brookesargent](https://github.com/brookesargent) ([#29](https://github.com/honeycombio/terraform-aws-integrations/issues/29)) 219 | 220 | 221 | [Changes][v0.2.6] 222 | 223 | 224 | 225 | # [v0.2.5 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v0.2.5) - 2022-11-21 226 | 227 | ## Changes 228 | 229 | - Fix links to images in the README [@mjayaram](https://github.com/mjayaram) ([#27](https://github.com/honeycombio/terraform-aws-integrations/issues/27)) 230 | 231 | 232 | [Changes][v0.2.5] 233 | 234 | 235 | 236 | # [v0.2.4 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v0.2.4) - 2022-11-21 237 | 238 | ## Changes 239 | 240 | - update tf docs [@mjayaram](https://github.com/mjayaram) ([#25](https://github.com/honeycombio/terraform-aws-integrations/issues/25)) 241 | - make cloudwatch logs optional at root [@brookesargent](https://github.com/brookesargent) ([#24](https://github.com/honeycombio/terraform-aws-integrations/issues/24)) 242 | - update loadbalancer logs readme to be generic [@mjayaram](https://github.com/mjayaram) ([#21](https://github.com/honeycombio/terraform-aws-integrations/issues/21)) 243 | - update all readme [@brookesargent](https://github.com/brookesargent) ([#23](https://github.com/honeycombio/terraform-aws-integrations/issues/23)) 244 | - add logs in a bucket to main module [@brookesargent](https://github.com/brookesargent) ([#22](https://github.com/honeycombio/terraform-aws-integrations/issues/22)) 245 | - add to /examples 
[@brookesargent](https://github.com/brookesargent) ([#17](https://github.com/honeycombio/terraform-aws-integrations/issues/17)) 246 | 247 | 248 | [Changes][v0.2.4] 249 | 250 | 251 | 252 | # [v0.2.3 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v0.2.3) - 2022-11-18 253 | 254 | ## Changes 255 | 256 | - [s3-logfile] allow cloudfront parser [@dstrelau](https://github.com/dstrelau) ([#20](https://github.com/honeycombio/terraform-aws-integrations/issues/20)) 257 | - Clarify s3-logfile var.name [@dstrelau](https://github.com/dstrelau) ([#18](https://github.com/honeycombio/terraform-aws-integrations/issues/18)) 258 | - Relax and variable-ize s3-logfile filter\_suffix [@dstrelau](https://github.com/dstrelau) ([#19](https://github.com/honeycombio/terraform-aws-integrations/issues/19)) 259 | - try to avoid infinite diffs [@dstrelau](https://github.com/dstrelau) ([#16](https://github.com/honeycombio/terraform-aws-integrations/issues/16)) 260 | - [docs] RDS readme [@brookesargent](https://github.com/brookesargent) ([#15](https://github.com/honeycombio/terraform-aws-integrations/issues/15)) 261 | 262 | 263 | [Changes][v0.2.3] 264 | 265 | 266 | 267 | # [v0.2.2 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v0.2.2) - 2022-11-14 268 | 269 | ## Fixes 270 | 271 | - Correctly pass honeycomb\_api\_host through to CW Logs module [@dstrelau](https://github.com/dstrelau) ([#14](https://github.com/honeycombio/terraform-aws-integrations/issues/14)) 272 | 273 | 274 | [Changes][v0.2.2] 275 | 276 | 277 | 278 | # [v0.2.1 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v0.2.1) - 2022-11-14 279 | 280 | ## Changes 281 | 282 | - remove `startswith()` usage to support older TFs [@dstrelau](https://github.com/dstrelau) ([#13](https://github.com/honeycombio/terraform-aws-integrations/issues/13)) 283 | 284 | 285 | [Changes][v0.2.1] 286 | 287 | 288 | 289 | # [v0.2.0 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v0.2.0) - 2022-11-11 290 | 291 | ## Changes 292 | 293 | - RDS Logs Lambda Transform [@brookesargent](https://github.com/brookesargent) ([#7](https://github.com/honeycombio/terraform-aws-integrations/issues/7)) 294 | 295 | 296 | [Changes][v0.2.0] 297 | 298 | 299 | 300 | # [v0.1.0 🌈](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v0.1.0) - 2022-11-08 301 | 302 | ## Changes 303 | 304 | - first draft of module readmes [@mjayaram](https://github.com/mjayaram) ([#10](https://github.com/honeycombio/terraform-aws-integrations/issues/10)) 305 | - Rename lb-logs -> s3-logfile [@dstrelau](https://github.com/dstrelau) ([#11](https://github.com/honeycombio/terraform-aws-integrations/issues/11)) 306 | - add cloudwatch metrics + KFH submodules [@dstrelau](https://github.com/dstrelau) ([#6](https://github.com/honeycombio/terraform-aws-integrations/issues/6)) 307 | - Include docs generation config and USAGE docs for submodules [@mjayaram](https://github.com/mjayaram) ([#8](https://github.com/honeycombio/terraform-aws-integrations/issues/8)) 308 | - Setup path to prod for modules [@mjayaram](https://github.com/mjayaram) ([#5](https://github.com/honeycombio/terraform-aws-integrations/issues/5)) 309 | - Adds expected nested module structure to allow for publish/use of submodules [@mjayaram](https://github.com/mjayaram) ([#4](https://github.com/honeycombio/terraform-aws-integrations/issues/4)) 310 | - Extending CW Logs to Honeycomb Module to take a list of log groups [@mjayaram](https://github.com/mjayaram) 
([#3](https://github.com/honeycombio/terraform-aws-integrations/issues/3)) 311 | - Sets up module to send logs from ONE log group to Honeycomb via Kinesis [@mjayaram](https://github.com/mjayaram) ([#2](https://github.com/honeycombio/terraform-aws-integrations/issues/2)) 312 | - [lb] Add agentless LB module [@dstrelau](https://github.com/dstrelau) ([#1](https://github.com/honeycombio/terraform-aws-integrations/issues/1)) 313 | 314 | 315 | [Changes][v0.1.0] 316 | 317 | 318 | 319 | # [v0.0.1](https://github.com/honeycombio/terraform-aws-integrations/releases/tag/v0.0.1) - 2022-11-02 320 | 321 | First release 🎉 322 | 323 | [Changes][v0.0.1] 324 | 325 | 326 | [v2.0.0]: https://github.com/honeycombio/terraform-aws-integrations/compare/v1.5.2...v2.0.0 327 | [v1.5.2]: https://github.com/honeycombio/terraform-aws-integrations/compare/v1.5.1...v1.5.2 328 | [v1.5.1]: https://github.com/honeycombio/terraform-aws-integrations/compare/v1.5.0...v1.5.1 329 | [v1.5.0]: https://github.com/honeycombio/terraform-aws-integrations/compare/v1.4.0...v1.5.0 330 | [v1.4.0]: https://github.com/honeycombio/terraform-aws-integrations/compare/v1.3.1...v1.4.0 331 | [v1.3.1]: https://github.com/honeycombio/terraform-aws-integrations/compare/v1.3.0...v1.3.1 332 | [v1.3.0]: https://github.com/honeycombio/terraform-aws-integrations/compare/v1.2.1...v1.3.0 333 | [v1.2.1]: https://github.com/honeycombio/terraform-aws-integrations/compare/v1.2.0...v1.2.1 334 | [v1.2.0]: https://github.com/honeycombio/terraform-aws-integrations/compare/v1.1.0...v1.2.0 335 | [v1.1.0]: https://github.com/honeycombio/terraform-aws-integrations/compare/v1.0.0...v1.1.0 336 | [v1.0.0]: https://github.com/honeycombio/terraform-aws-integrations/compare/v0.5.0...v1.0.0 337 | [v0.5.0]: https://github.com/honeycombio/terraform-aws-integrations/compare/v0.4.0...v0.5.0 338 | [v0.4.0]: https://github.com/honeycombio/terraform-aws-integrations/compare/v0.3.0...v0.4.0 339 | [v0.3.0]: https://github.com/honeycombio/terraform-aws-integrations/compare/v0.2.8...v0.3.0 340 | [v0.2.8]: https://github.com/honeycombio/terraform-aws-integrations/compare/v0.2.7...v0.2.8 341 | [v0.2.7]: https://github.com/honeycombio/terraform-aws-integrations/compare/v0.2.6...v0.2.7 342 | [v0.2.6]: https://github.com/honeycombio/terraform-aws-integrations/compare/v0.2.5...v0.2.6 343 | [v0.2.5]: https://github.com/honeycombio/terraform-aws-integrations/compare/v0.2.4...v0.2.5 344 | [v0.2.4]: https://github.com/honeycombio/terraform-aws-integrations/compare/v0.2.3...v0.2.4 345 | [v0.2.3]: https://github.com/honeycombio/terraform-aws-integrations/compare/v0.2.2...v0.2.3 346 | [v0.2.2]: https://github.com/honeycombio/terraform-aws-integrations/compare/v0.2.1...v0.2.2 347 | [v0.2.1]: https://github.com/honeycombio/terraform-aws-integrations/compare/v0.2.0...v0.2.1 348 | [v0.2.0]: https://github.com/honeycombio/terraform-aws-integrations/compare/v0.1.0...v0.2.0 349 | [v0.1.0]: https://github.com/honeycombio/terraform-aws-integrations/compare/v0.0.1...v0.1.0 350 | [v0.0.1]: https://github.com/honeycombio/terraform-aws-integrations/tree/v0.0.1 351 | 352 | 353 | --------------------------------------------------------------------------------