├── .gitignore
├── sample.csv
├── main.tf
└── readme.md

/.gitignore:
--------------------------------------------------------------------------------
# Compiled files
*.tfstate
*.tfstate.backup

# Module directory
.terraform/
--------------------------------------------------------------------------------
/sample.csv:
--------------------------------------------------------------------------------
name,value,description,tag
China-1.0.1.0-24,1.0.1.0/22,geolocation,maxmind
China-1.0.2.0-24,1.0.2.0/23,geolocation,maxmind
--------------------------------------------------------------------------------
/main.tf:
--------------------------------------------------------------------------------
# Creating terraform resources from a CSV file with an external data source
data "external" "csv_file" {
  program = ["jq", "--slurp", "--raw-input", "--raw-output", "split(\"\n\") | .[1:] | map(select(length > 0) | split(\",\")) | map({\"name\": .[0], \"value\": .[1], \"description\": .[2], \"tag\": .[3]}) | {\"names\": map(.name) | join(\",\"), \"values\": map(.value) | join(\",\"), \"description\": map(.description) | join(\",\"), \"tag\": map(.tag) | join(\",\")}", "${path.module}/sample.csv"]
}

resource "null_resource" "csv_external_data_source_method" {
  count = "${length(split(",", data.external.csv_file.result.names))}"
  triggers = {
    name = "${element(split(",", data.external.csv_file.result.names), count.index)}"
    value = "${element(split(",", data.external.csv_file.result.values), count.index)}"
    description = "${element(split(",", data.external.csv_file.result.description), count.index)}"
    tag = "${element(split(",", data.external.csv_file.result.tag), count.index)}"
  }
}

# Creating terraform resources from a CSV file using interpolation
data "null_data_source" "csv_file" {
  inputs = {
    file_data = "${chomp(file("${path.module}/sample.csv"))}"
  }
}

resource "null_resource" "csv_interpolation_method" {
  count = "${length(slice(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")), 1, length(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")))))}"

  triggers = {
    name = "${element(split(",", element(slice(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")), 1, length(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")))), count.index)), 0)}"
    value = "${element(split(",", element(slice(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")), 1, length(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")))), count.index)), 1)}"
    description = "${element(split(",", element(slice(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")), 1, length(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")))), count.index)), 2)}"
    tag = "${element(split(",", element(slice(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")), 1, length(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")))), count.index)), 3)}"
  }
}
--------------------------------------------------------------------------------
/readme.md:
--------------------------------------------------------------------------------
# Update 2020-04-02

Do not use this. [Terraform 0.12 introduced `csvdecode()`](https://www.terraform.io/docs/configuration/functions/csvdecode.html), which you should use instead.
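
For a sense of scale, here's a minimal sketch of that approach against this repo's `sample.csv` (assumptions: Terraform 0.12.6 or later for resource `for_each`, and `null_resource` standing in for whatever you actually want to create):

```hcl
locals {
  # csvdecode() parses the raw file into a list of maps keyed by the header row
  rows = csvdecode(file("${path.module}/sample.csv"))
}

resource "null_resource" "csv_rows" {
  # Key each instance by the row's unique "name" column
  for_each = { for row in local.rows : row.name => row }

  triggers = {
    name        = each.value.name
    value       = each.value.value
    description = each.value.description
    tag         = each.value.tag
  }
}
```

One data source, no jq, and each instance gets a stable address like `null_resource.csv_rows["China-1.0.1.0-24"]` instead of a positional index.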

This technique is unnecessary and was never really recommended. It was mostly an advanced walkthrough of just how far you could push Terraform's interpolation functions to make up for missing features.

The original text follows.

--------

This originally spawned from [a reddit thread](https://www.reddit.com/r/Terraform/comments/8h7k9v/how_to_create_large_number_of_resoucres_in/) asking how to create resources from a CSV file.

This repo contains a slightly edited version of my answer, along with the code I used, for reference.

# How do I create Terraform resources from a CSV file?

There's no simple answer here. I will warn you that **the solution is very obtuse and I do not recommend using it.**

I took a sample CSV and realized that to get it into Terraform, the only two routes are to either convert it to a single JSON object and then read it in with the [external data source](https://www.terraform.io/docs/providers/external/data_source.html), or to build a CSV parser out of Terraform interpolation syntax. We'll try both.

- [How do I create Terraform resources from a CSV file?](#how-do-i-create-terraform-resources-from-a-csv-file)
  - [Using an External Data Source](#using-an-external-data-source)
  - [Using raw Terraform Interpolation](#using-raw-terraform-interpolation)
  - [Explanation](#explanation)
    - [How the count works](#how-the-count-works)
    - [How the values work](#how-the-values-work)
  - [Conclusion](#conclusion)

## Using an External Data Source

First, let's look at using the external data source.
This one front-loads the insanity by [using jq to format the data](https://gist.github.com/RulerOf/0c95c1f6344479f9c064079fc6070b85) for consumption by the external provider:

```hcl
data "external" "csv_file" {
  program = ["jq", "--slurp", "--raw-input", "--raw-output", "split(\"\n\") | .[1:] | map(select(length > 0) | split(\",\")) | map({\"name\": .[0], \"value\": .[1], \"description\": .[2], \"tag\": .[3]}) | {\"names\": map(.name) | join(\",\"), \"values\": map(.value) | join(\",\"), \"description\": map(.description) | join(\",\"), \"tag\": map(.tag) | join(\",\")}", "${path.module}/sample.csv"]
}

resource "null_resource" "csv_external_data_source_method" {
  count = "${length(split(",", data.external.csv_file.result.names))}"
  triggers = {
    name = "${element(split(",", data.external.csv_file.result.names), count.index)}"
    value = "${element(split(",", data.external.csv_file.result.values), count.index)}"
    description = "${element(split(",", data.external.csv_file.result.description), count.index)}"
    tag = "${element(split(",", data.external.csv_file.result.tag), count.index)}"
  }
}
```

That single resource gives us a `terraform plan` output that will count out to the end of the CSV and generate a resource for each CSV entry:

```
+ null_resource.csv_external_data_source_method[0]
    id:
    triggers.%:           "4"
    triggers.description: "geolocation"
    triggers.name:        "China-1.0.1.0-24"
    triggers.tag:         "maxmind"
    triggers.value:       "1.0.1.0/22"

+ null_resource.csv_external_data_source_method[1]
    id:
    triggers.%:           "4"
    triggers.description: "geolocation"
    triggers.name:        "China-1.0.2.0-24"
    triggers.tag:         "maxmind"
    triggers.value:       "1.0.2.0/23"
```

You'll obviously need to have jq installed.

It's worth mentioning that this could probably be made less crazy by using a script to parse the data and feed something a little more dynamic to jq, but jq and I don't have quite that kind of relationship ;)
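
For reference, the jq program above should hand Terraform a flat JSON object like the following for `sample.csv`. The external data source only accepts a single JSON object whose values are all strings, which is why every column gets squashed into one comma-joined string:

```json
{
  "names": "China-1.0.1.0-24,China-1.0.2.0-24",
  "values": "1.0.1.0/22,1.0.2.0/23",
  "description": "geolocation,geolocation",
  "tag": "maxmind,maxmind"
}
```

The `split(",", ...)` calls in the resource are just undoing those joins.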

## Using raw Terraform Interpolation

This strategy has us performing lots of repetitive interpolation, thanks to a few limitations in the [null data source](https://www.terraform.io/docs/providers/null/data_source.html). In this example, the data source is pretty short, but the resource where we use the data is incomprehensibly long, with each key in the resource using no fewer than eight interpolation functions:

```hcl
data "null_data_source" "csv_file" {
  inputs = {
    file_data = "${chomp(file("${path.module}/sample.csv"))}"
  }
}

resource "null_resource" "csv_interpolation_method" {
  count = "${length(slice(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")), 1, length(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")))))}"

  triggers = {
    name = "${element(split(",", element(slice(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")), 1, length(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")))), count.index)), 0)}"
    value = "${element(split(",", element(slice(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")), 1, length(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")))), count.index)), 1)}"
    description = "${element(split(",", element(slice(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")), 1, length(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")))), count.index)), 2)}"
    tag = "${element(split(",", element(slice(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")), 1, length(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")))), count.index)), 3)}"
  }
}
```

Nonetheless, we end up with identical `terraform plan` output:

```
+ null_resource.csv_interpolation_method[0]
    id:
    triggers.%:           "4"
    triggers.description: "geolocation"
    triggers.name:        "China-1.0.1.0-24"
    triggers.tag:         "maxmind"
    triggers.value:       "1.0.1.0/22"

+ null_resource.csv_interpolation_method[1]
    id:
    triggers.%:           "4"
    triggers.description: "geolocation"
    triggers.name:        "China-1.0.2.0-24"
    triggers.tag:         "maxmind"
    triggers.value:       "1.0.2.0/23"
```

## Explanation

Of the two, I actually consider the interpolation technique to be the better choice. It's pure Terraform and somehow less obtuse. Somehow. Now, let's walk through some of the interpolation:

### How the count works

```hcl
split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data"))
```

* This gives us a list where each element is one line of our CSV file.

```hcl
slice(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")), 1, length(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data"))))
```

* We need to remove the CSV header, so we `slice()` the list starting at index 1 (the second element, skipping the header at index 0) and going to the `length()` of the list itself.

```hcl
count = "${length(slice(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")), 1, length(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")))))}"
```

* The `length()` of the now header-free list is how many times we want to `count` the resource we're creating from the CSV data, as traced below.
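
Traced against `sample.csv` (one header line plus two data lines, with the trailing newline already removed by `chomp()`), those three steps evaluate like this:

```hcl
# split("\n", file_data)    => ["name,value,description,tag",
#                               "China-1.0.1.0-24,1.0.1.0/22,geolocation,maxmind",
#                               "China-1.0.2.0-24,1.0.2.0/23,geolocation,maxmind"]
# slice(<that list>, 1, 3)  => just the two data lines (slice() excludes the upper bound)
# length(<sliced list>)     => 2, so count = 2
```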

### How the values work

```hcl
element(slice(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")), 1, length(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")))), count.index)
```

* We take our primitive from above and use `element()` to choose the item from the list that matches the current `count.index`. At this point, it's a raw CSV line like `China-1.0.1.0-24,1.0.1.0/22,geolocation,maxmind`.

```hcl
split(",", element(slice(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")), 1, length(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")))), count.index))
```

* We take that line and `split()` it on every comma to get a list where each element is one column of the CSV line.

```hcl
name = "${element(split(",", element(slice(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")), 1, length(split("\n", lookup(data.null_data_source.csv_file.outputs, "file_data")))), count.index)), 0)}"
```

* Finally, we use `element()` to extract the contents of a specific column as the value for each of the keys in our resource.

## Conclusion

You can do it, but the lack of any real answer elsewhere on the internet... turns out that's for a reason. There's special code in each of these techniques to ensure that the first line of the CSV isn't processed, so if you see fit to use this, your CSV _must_ have a header just like the example one does.

I'm only writing this up because it's a topic I couldn't find covered anywhere else on the Internet, and since it's _possible_, I decided I had to figure out just how it would have to work. Perhaps solving it here will lead to a better implementation by someone else, but I'm not sure about that. Terraform could probably stand to have some proper CSV/JSON support instead. Also, this technically doesn't even follow [the spec](https://tools.ietf.org/html/rfc4180), because it doesn't handle properly-escaped commas, which CAN exist in a CSV file.

Cheers.
--------------------------------------------------------------------------------