├── .gitignore ├── README.md ├── Vagrantfile └── ansible ├── ansible.cfg ├── filter_plugins └── server_group_filter.py ├── hosts.yaml ├── inventory.py ├── provision_bind9_servers.yaml ├── provision_consul_client_servers.yaml ├── provision_consul_servers.yaml ├── provision_consul_template_servers.yaml ├── provision_hostsfile.yaml ├── provision_prometheus_alertmanager_servers.yaml ├── provision_prometheus_client_servers.yaml ├── provision_prometheus_consul_exporter_servers.yaml ├── provision_prometheus_node_exporter_servers.yaml ├── provision_prometheus_servers.yaml ├── provision_resolv_conf.yaml ├── roles ├── bind9 │ ├── defaults │ │ └── main.yml │ ├── tasks │ │ └── main.yml │ └── templates │ │ └── etc │ │ └── bind │ │ ├── named.conf.local │ │ ├── named.conf.options │ │ └── zones │ │ └── master │ │ ├── db.1.136.172 │ │ ├── db.2.136.172 │ │ ├── db.3.136.172 │ │ ├── db.4.136.172 │ │ └── db.lan ├── consul │ ├── defaults │ │ └── main.yml │ ├── handlers │ │ └── main.yml │ ├── tasks │ │ ├── main.yml │ │ ├── trusty.yml │ │ └── xenial.yml │ └── templates │ │ ├── Makefile │ │ ├── consul_client.conf │ │ ├── consul_client.json │ │ ├── consul_client.service.j2 │ │ ├── consul_server.conf │ │ ├── consul_server.json │ │ └── consul_server.service.j2 ├── consul_template │ ├── defaults │ │ └── main.yml │ ├── handlers │ │ └── main.yml │ ├── tasks │ │ ├── main.yml │ │ ├── trusty.yml │ │ └── xenial.yml │ └── templates │ │ ├── consul-template-service.conf.j2 │ │ ├── consul-template-service.service.j2 │ │ ├── consul_template_config.hql │ │ ├── consul_template_config.json │ │ ├── haproxy.cfg.ctmpl │ │ └── icinga.cfg.ctmpl ├── grafana │ ├── defaults │ │ └── main.yml │ ├── handlers │ │ └── main.yml │ ├── tasks │ │ └── main.yml │ └── templates │ │ ├── grafana.conf.j2 │ │ └── grafana.ini ├── haproxy │ ├── defaults │ │ └── main.yml │ └── tasks │ │ └── main.yml ├── hostsfile │ ├── defaults │ │ └── main.yml │ └── tasks │ │ └── main.yml ├── nagios_checks │ ├── defaults │ │ └── main.yml │ ├── tasks │ │ └── main.yml │ └── templates │ │ └── check_bind.j2 ├── nomad │ ├── defaults │ │ └── main.yml │ ├── tasks │ │ └── main.yml │ └── templates │ │ ├── Makefile │ │ ├── nomad_client.conf │ │ ├── nomad_client.json │ │ ├── nomad_server.conf │ │ └── nomad_server.json ├── openssl │ ├── defaults │ │ └── main.yml │ ├── tasks │ │ └── main.yml │ └── templates │ │ └── ssl_cert_generator ├── prometheus_alertmanager │ ├── defaults │ │ └── main.yml │ ├── handlers │ │ └── main.yml │ ├── tasks │ │ ├── main.yml │ │ ├── trusty.yml │ │ └── xenial.yml │ └── templates │ │ ├── alertmanager.conf.j2 │ │ └── alertmanager.service.j2 ├── prometheus_consul_exporter │ ├── defaults │ │ └── main.yml │ ├── handlers │ │ └── main.yml │ ├── tasks │ │ ├── main.yml │ │ ├── trusty.yml │ │ └── xenial.yml │ └── templates │ │ ├── consul_exporter.conf.j2 │ │ └── consul_exporter.service.j2 ├── prometheus_node_exporter │ ├── defaults │ │ └── main.yml │ ├── handlers │ │ └── main.yml │ ├── tasks │ │ ├── main.yml │ │ ├── trusty.yml │ │ └── xenial.yml │ └── templates │ │ ├── node_exporter.conf.j2 │ │ └── node_exporter.service.j2 ├── prometheus_server │ ├── defaults │ │ └── main.yml │ ├── handlers │ │ └── main.yml │ ├── tasks │ │ ├── main.yml │ │ ├── trusty.yml │ │ └── xenial.yml │ └── templates │ │ ├── etc │ │ └── prometheus │ │ │ └── rules │ │ │ └── downtime.rules │ │ ├── prometheus.conf.j2 │ │ └── prometheus.service.j2 └── user_role │ ├── defaults │ └── main.yml │ ├── tasks │ └── main.yml │ └── templates │ └── keys │ └── consul.pub ├── run_playbooks.sh ├── static_inventory 
└── vars ├── keys ├── id_rsa ├── id_rsa.copy └── id_rsa.pub └── makevault.yml /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | .vagrant/ 3 | ansible/*.retry 4 | ansible/install.out 5 | *.pyc 6 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Consul Service Discovery and Prometheus Monitoring Stack 2 | 3 | ### Goals: 4 | * Create [Bind9](https://wiki.debian.org/Bind9) DNS environment 5 | * Create [Consul](https://github.com/hashicorp/consul) cluster used for service discovery 6 | * Create [Prometheus](https://github.com/prometheus) Server cluster 7 | * Create [Prometheus AlertManager](https://github.com/prometheus/alertmanager) service on defined nodes 8 | * Create [Prometheus NodeExporter](https://github.com/prometheus/node_exporter) service on defined nodes 9 | * Create Prometheus [Rules](https://prometheus.io/docs/querying/rules/)/[Alerts](https://prometheus.io/docs/alerting/rules/) 10 | * Create [Third Party Exporter](https://github.com/prometheus/consul_exporter) 11 | * Create [Grafana](https://github.com/grafana/grafana) service 12 | * Create [Haproxy](http://www.haproxy.org/) load balancer 13 | * Create [Consul Template](https://github.com/hashicorp/consul-template) service 14 | 15 | ## Network Environment 16 | **GOAL:** Install a Bind9 DNS master in the private network to resolve hostname A and PTR records and to forward Consul DNS requests. 17 | 18 | ## Consul Cluster 19 | **GOAL:** Install a 3-node Consul cluster for service discovery and DNS resolution. 20 | 21 | ## HA Prometheus Cluster 22 | **GOAL:** Install an HA Prometheus cluster for monitoring, metrics, and alerting, and integrate third-party exporters such as the Consul exporter. 23 | 24 | ## Haproxy load balancing 25 | **GOAL:** Install Haproxy to verify Consul Template functionality. **TODO:** update Haproxy version from 1.5 to 1.7 26 | 27 | 28 | ### Vagrantfile 29 | 30 | The Vagrantfile reads ```./ansible/hosts.yaml``` to create the machines; the same file serves as the inventory for the plays listed below. You might want to use the excellent [Vagrant HostsUpdater](https://github.com/cogitatio/vagrant-hostsupdater) plugin to update your host machine's hosts file: 31 | ``` 32 | vagrant plugin install vagrant-hostsupdater 33 | ``` 34 | 35 | Spin up the infrastructure: 36 | ``` 37 | vagrant up 38 | ``` 39 | 40 | This will, by default, create: 41 | * core1.lan - Bind9 DNS server, Consul Client 42 | * prometheus1.lan - Prometheus server, Prometheus AlertManager, Grafana server 43 | * prometheus2.lan - Prometheus server, Prometheus AlertManager, Grafana server 44 | * consul1.lan - Consul Server, Consul Client, Prometheus node exporter 45 | * consul2.lan - Consul Server, Consul Client, Prometheus node exporter 46 | * consul3.lan - Consul Server, Consul Client, Prometheus node exporter 47 | * client1.lan - Consul Client, Prometheus node exporter, Consul Template service with Haproxy cfg template 48 | 49 | ### Ansible playbooks 50 | 51 | The Ansible playbooks provision the nodes listed above. The roles work on both Ubuntu 14.04 and Ubuntu 16.04 nodes; note that the two releases use different network interface names and different Vagrant boxes. 52 | 53 | Check out the ```ubuntu16``` branch to use Ubuntu Xenial hosts.
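Before provisioning, you can sanity-check what Ansible will see by dumping the dynamic inventory by hand. A quick check (run it from the ```ansible``` directory, since ```inventory.py``` opens ```hosts.yaml``` relative to the current directory):
```
cd ansible
python inventory.py --list | python -m json.tool
```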
54 | 55 | Run all playbooks in order: 56 | ``` 57 | cd ansible 58 | ./run_playbooks.sh 59 | ``` 60 | 61 | Or run the playbooks manually, in order: 62 | ``` 63 | cd ansible 64 | ansible-playbook provision_bind9_servers.yaml -i inventory.py -u vagrant -k -b 65 | ansible-playbook provision_prometheus_servers.yaml -i inventory.py -u vagrant -k -b 66 | ansible-playbook provision_prometheus_alertmanager_servers.yaml -i inventory.py -u vagrant -k -b 67 | ansible-playbook provision_prometheus_node_exporter_servers.yaml -i inventory.py -u vagrant -k -b 68 | ansible-playbook provision_prometheus_consul_exporter_servers.yaml -i inventory.py -u vagrant -k -b 69 | ansible-playbook provision_consul_servers.yaml -i inventory.py -u vagrant -k -b 70 | ansible-playbook provision_consul_client_servers.yaml -i inventory.py -u vagrant -k -b 71 | ansible-playbook provision_consul_template_servers.yaml -i inventory.py -u vagrant -k -b 72 | ``` 73 | 74 | ### Verify Consul DNS: 75 | 76 | **Check Forward DNS with nslookup:** 77 | 78 | DNS lookup the prometheus node-exporter service: 79 | ``` 80 | root@client1:~# nslookup prometheus.service.consul 81 | Server: 172.136.1.11 82 | Address: 172.136.1.11#53 83 | 84 | Non-authoritative answer: 85 | Name: prometheus.service.consul 86 | Address: 172.136.2.12 87 | Name: prometheus.service.consul 88 | Address: 172.136.2.13 89 | Name: prometheus.service.consul 90 | Address: 172.136.2.11 91 | 92 | ``` 93 | DNS lookup the bind service: 94 | ``` 95 | root@client1:~# nslookup bind.service.consul 96 | Server: 172.136.1.11 97 | Address: 172.136.1.11#53 98 | 99 | Non-authoritative answer: 100 | Name: bind.service.consul 101 | Address: 172.136.1.11 102 | ``` 103 | 104 | DNS lookup the grafana service: 105 | ``` 106 | root@client1:~# nslookup grafana.service.consul 107 | Server: 172.136.1.11 108 | Address: 172.136.1.11#53 109 | 110 | Non-authoritative answer: 111 | Name: grafana.service.consul 112 | Address: 172.136.4.11 113 | Name: grafana.service.consul 114 | Address: 172.136.4.12 115 | ``` 116 | 117 | 118 | **Check reverse DNS with dig** 119 | 120 | Reverse DNS lookup of a node IP address: 121 | ``` 122 | root@client1:~# dig -x 172.136.2.11 123 | 124 | ; <<>> DiG 9.9.5-3ubuntu0.14-Ubuntu <<>> -x 172.136.2.11 125 | ;; global options: +cmd 126 | ;; Got answer: 127 | ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17545 128 | ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 3 129 | 130 | ;; OPT PSEUDOSECTION: 131 | ; EDNS: version: 0, flags:; udp: 4096 132 | ;; QUESTION SECTION: 133 | ;11.2.136.172.in-addr.arpa. IN PTR 134 | 135 | ;; ANSWER SECTION: 136 | 11.2.136.172.in-addr.arpa. 604800 IN PTR consul1.lan. 137 | 138 | ;; AUTHORITY SECTION: 139 | 2.136.172.in-addr.arpa. 604800 IN NS core2.lan. 140 | 2.136.172.in-addr.arpa. 604800 IN NS core1.lan. 141 | 142 | ;; ADDITIONAL SECTION: 143 | core1.lan. 604800 IN A 172.136.1.11 144 | core2.lan. 604800 IN A 172.136.1.12 145 | 146 | ;; Query time: 4 msec 147 | ;; SERVER: 172.136.1.11#53(172.136.1.11) 148 | ;; WHEN: Tue Sep 12 03:17:20 UTC 2017 149 | ;; MSG SIZE rcvd: 151 150 | ``` 151 | 152 | ### Notable UIs 153 | 154 | **Prometheus Server UIs:** 155 | * [Prometheus UI on prometheus1.lan](http://prometheus1.lan:9090) 156 | * [Prometheus UI on prometheus2.lan](http://prometheus2.lan:9090) 157 | * [Load Balanced Prometheus UI](http://client1.lan:9090) 158 | 159 | **Grafana UIs:** 160 | 161 | Additionally, create a new Prometheus datasource in Grafana.
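The datasource step can also be scripted against Grafana's HTTP API. A minimal sketch, assuming the default ```admin:admin``` credentials listed below and one of the stack's Prometheus servers as the backend URL:
```
curl -s -X POST http://admin:admin@prometheus1.lan:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{"name": "prometheus", "type": "prometheus", "url": "http://prometheus1.lan:9090", "access": "proxy"}'
```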
162 | **TODO** The load balanced Grafana endpoint needs session stickiness to handle authentication persistence 163 | 164 | * [Grafana UI on prometheus1.lan](http://prometheus1.lan:3000) admin:admin 165 | * [Grafana UI on prometheus2.lan](http://prometheus2.lan:3000) admin:admin 166 | * [Load Balanced Grafana UI](http://client1.lan:3000) 167 | 168 | **Consul UIs:** 169 | * [Consul 1 Admin UI](http://consul1.lan:8500/ui/#) 170 | * [Consul 2 Admin UI](http://consul2.lan:8500/ui/#) 171 | * [Consul 3 Admin UI](http://consul3.lan:8500/ui/#) 172 | 173 | Several Consul services are registered, with TCP health checks and a simple script check on the Bind DNS service. 174 | 175 | **Haproxy Admin:** 176 | 177 | To illustrate the use of Consul Template, the Grafana and Prometheus UIs are load balanced: 178 | 179 | * [Haproxy Admin UI](http://client1.lan:8888) admin:adminpw 180 | -------------------------------------------------------------------------------- /Vagrantfile: -------------------------------------------------------------------------------- 1 | require 'yaml' 2 | 3 | VAGRANTFILE_API_VERSION = "2" 4 | 5 | base_dir = File.expand_path(File.dirname(__FILE__)) 6 | inventory_file = base_dir + "/ansible/hosts.yaml" 7 | 8 | servers = YAML.load_file(inventory_file) 9 | meta = servers["_meta"] 10 | 11 | Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| 12 | 13 | config.vm.box = "bento/ubuntu-16.04" 14 | config.ssh.insert_key = false 15 | 16 | if Vagrant.has_plugin?("vagrant-vbguest") 17 | config.vbguest.auto_update = false 18 | end 19 | 20 | servers.each do |ansible_key,vars| 21 | vars["hosts"].each_with_index do |host,i| 22 | 23 | ip = meta["hostvars"][host]["vagrant_ip"] ? meta["hostvars"][host]["vagrant_ip"] : "10.0.0.#{i + 10}" 24 | cpu = meta["hostvars"][host]["vagrant_cpu"] ? meta["hostvars"][host]["vagrant_cpu"] : 1 25 | mem = meta["hostvars"][host]["vagrant_mem"] ? meta["hostvars"][host]["vagrant_mem"] : 2048 26 | box = meta["hostvars"][host]["vagrant_box"] ?
meta["hostvars"][host]["vagrant_box"] : "bento/ubuntu-16.04" 27 | 28 | config.vm.define "#{host}" do |node| 29 | node.vm.network "private_network", ip: "#{ip}" 30 | #node.vm.network :public_network, ip: "#{ip}", bridge: "br0" 31 | node.vm.hostname = "#{host}" 32 | config.vm.provider "virtualbox" do |v, override| 33 | override.vm.box = "#{box}" 34 | v.memory = "#{mem}" 35 | v.cpus = "#{cpu}" 36 | end 37 | end 38 | 39 | end if vars["hosts"] 40 | end 41 | 42 | end 43 | -------------------------------------------------------------------------------- /ansible/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | host_key_checking = no 3 | record_host_keys = no -------------------------------------------------------------------------------- /ansible/filter_plugins/server_group_filter.py: -------------------------------------------------------------------------------- 1 | def hash_to_tuples(h): 2 | return h.items() 3 | 4 | def hash_keys(h): 5 | return h.keys() 6 | 7 | def hash_values(h): 8 | return h.values() 9 | 10 | def zipped_hash(h, key, item): 11 | ret = [] 12 | for el in h: 13 | if h[el].has_key(item): 14 | for subel in h[el][item]: 15 | ret.append({"key" : h[el][key], "value" : subel }) 16 | return ret 17 | 18 | def server_group_filter(d,l,i): 19 | result = [] 20 | if len(l) > 0: 21 | for item in l: 22 | if item != i: 23 | try: 24 | ip_addr = d[item]['ansible_eth1']['ipv4']['address'] 25 | if ip_addr: 26 | result.append(ip_addr) 27 | except: 28 | pass 29 | return result 30 | 31 | def server_group_filter_v2(d,l,i,iface='ansible_eth1'): 32 | result = [] 33 | if len(l) > 0: 34 | for item in l: 35 | if item != i: 36 | try: 37 | ip_addr = d[item][iface]['ipv4']['address'] 38 | if ip_addr: 39 | result.append(ip_addr) 40 | except: 41 | pass 42 | return result 43 | 44 | def server_group_ip_map(d,l): 45 | result = [] 46 | if len(l) > 0: 47 | for item in l: 48 | try: 49 | ip_addr = d[item]['ansible_eth1']['ipv4']['address'] 50 | if ip_addr: 51 | result.append(ip_addr) 52 | except: 53 | pass 54 | return result 55 | 56 | def server_group_ip_map_v2(d,l,iface='ansible_eth1'): 57 | result = [] 58 | if len(l) > 0: 59 | for item in l: 60 | try: 61 | ip_addr = d[item][iface]['ipv4']['address'] 62 | if ip_addr: 63 | result.append(ip_addr) 64 | except: 65 | pass 66 | return result 67 | 68 | def user_at_ip_map_v2(d,l,iface='ansible_eth1',user='root'): 69 | result = [] 70 | if len(l) > 0: 71 | for item in l: 72 | try: 73 | ip_addr = d[item][iface]['ipv4']['address'] 74 | if ip_addr: 75 | user_at_str = "%s@%s" % (user, ip_addr) 76 | result.append(user_at_str) 77 | except: 78 | pass 79 | return result 80 | 81 | def server_group_ip_port_map_v2(d,l,iface,port): 82 | result = [] 83 | if len(l) > 0: 84 | for item in l: 85 | try: 86 | ip_addr = d[item][iface]['ipv4']['address'] 87 | if ip_addr: 88 | ip_port_string = ip_addr + " port " + port 89 | result.append(ip_port_string) 90 | except: 91 | pass 92 | return result 93 | 94 | 95 | def etcd_inventory_list(d,name,iface,port,local="no"): 96 | result = [] 97 | if d[name]: 98 | if local != "no": 99 | local_str = "http://" + local + ":" + port 100 | result.append(local_str) 101 | try: 102 | ip_addr = d[name][iface]['ipv4']['address'] 103 | if ip_addr: 104 | ip_port_string = "http://" + ip_addr + ":" + port 105 | result.append(ip_port_string) 106 | except: 107 | pass 108 | return result 109 | 110 | def etcd_masters_list(d,l,iface,port1): 111 | result = [] 112 | if len(l) > 0: 113 | for item in l: 114 | try: 115 | 
ip_addr = d[item][iface]['ipv4']['address'] 116 | 117 | if ip_addr: 118 | string1 = "http://" + ip_addr + ":" + port1 119 | result.append(string1) 120 | except: 121 | pass 122 | return result 123 | 124 | 125 | def etcd_cluster_list(d,l,iface,port1): 126 | result = [] 127 | if len(l) > 0: 128 | for item in l: 129 | try: 130 | ip_addr = d[item][iface]['ipv4']['address'] 131 | 132 | if ip_addr: 133 | string1 = item + "=http://" + ip_addr + ":" + port1 134 | result.append(string1) 135 | except: 136 | pass 137 | return result 138 | 139 | def server_group_consul_map_v2(d,l,iface='eth1'): 140 | result = [] 141 | if len(l) > 0: 142 | for item in l: 143 | try: 144 | ip_addr = d[item]['ansible_'+iface]['ipv4']['address'] 145 | if ip_addr: 146 | new_ip_port = ip_addr + ":4647" 147 | result.append(new_ip_port) 148 | except: 149 | pass 150 | return result 151 | 152 | def init_list(name): 153 | newlist = [] 154 | newlist.append(name) 155 | return newlist 156 | 157 | def convert_to_ip(host): 158 | ip_addr = host['ansible_eth2']['ipv4']['address'] 159 | if ip_addr: 160 | return ip_addr 161 | 162 | class FilterModule(object): 163 | ''' utility filters for operating on hashes ''' 164 | 165 | def filters(self): 166 | return { 167 | 'hash_to_tuples' : hash_to_tuples 168 | ,'hash_keys' : hash_keys 169 | ,'hash_values' : hash_values 170 | ,'zipped_hash' : zipped_hash 171 | ,'server_group_filter' : server_group_filter 172 | ,'server_group_ip_map' : server_group_ip_map 173 | ,'server_group_filter_v2' : server_group_filter_v2 174 | ,'server_group_ip_map_v2' : server_group_ip_map_v2 175 | ,'server_group_consul_map_v2' : server_group_consul_map_v2 176 | ,'server_group_ip_port_map_v2' : server_group_ip_port_map_v2 177 | ,'user_at_ip_map_v2' : user_at_ip_map_v2 178 | ,'etcd_inventory_list' : etcd_inventory_list 179 | ,'etcd_masters_list' : etcd_masters_list 180 | ,'etcd_cluster_list' : etcd_cluster_list 181 | ,'init_list' : init_list 182 | } 183 | -------------------------------------------------------------------------------- /ansible/hosts.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | core_servers: 3 | hosts: [ core1.lan ] 4 | vars: 5 | ansible_ssh_user: vagrant 6 | ansible_ssh_private_key_file: "~/.vagrant.d/insecure_private_key" 7 | prometheus_servers: 8 | hosts: [ prometheus1.lan, prometheus2.lan ] 9 | vars: 10 | ansible_ssh_user: vagrant 11 | ansible_ssh_private_key_file: "~/.vagrant.d/insecure_private_key" 12 | prometheus_client_servers: 13 | hosts: [ client1.lan ] 14 | vars: 15 | ansible_ssh_user: vagrant 16 | ansible_ssh_private_key_file: "~/.vagrant.d/insecure_private_key" 17 | consul_servers: 18 | hosts: [ consul1.lan, consul2.lan, consul3.lan ] 19 | vars: 20 | ansible_ssh_user: vagrant 21 | ansible_ssh_private_key_file: "~/.vagrant.d/insecure_private_key" 22 | consul_client_servers: 23 | children: [ prometheus_client_servers,prometheus_servers ] 24 | consul_template_servers: 25 | hosts: [ client1.lan ] 26 | vars: 27 | ansible_ssh_user: vagrant 28 | ansible_ssh_private_key_file: "~/.vagrant.d/insecure_private_key" 29 | prometheus_node_exporter_servers: 30 | children: [ prometheus_servers, prometheus_client_servers, consul_servers, core_servers ] 31 | prometheus_consul_exporter_servers: 32 | children: [ consul_servers ] 33 | prometheus_alertmanager_servers: 34 | children: [ prometheus_servers ] 35 | all: 36 | children: [ prometheus_servers, prometheus_client_servers, consul_servers, consul_client_servers ] 37 | _meta: 38 | hostvars: 39 | core1.lan: 40
| vagrant_ip: "172.136.1.11" 41 | vagrant_box: "ubuntu/trusty64" 42 | prometheus1.lan: 43 | vagrant_ip: "172.136.4.11" 44 | vagrant_box: "ubuntu/trusty64" 45 | prometheus2.lan: 46 | vagrant_ip: "172.136.4.12" 47 | vagrant_box: "ubuntu/trusty64" 48 | client1.lan: 49 | vagrant_ip: "172.136.3.11" 50 | vagrant_box: "ubuntu/trusty64" 51 | consul1.lan: 52 | vagrant_ip: "172.136.2.11" 53 | vagrant_box: "ubuntu/trusty64" 54 | consul2.lan: 55 | vagrant_ip: "172.136.2.12" 56 | vagrant_box: "ubuntu/trusty64" 57 | consul3.lan: 58 | vagrant_ip: "172.136.2.13" 59 | vagrant_box: "ubuntu/trusty64" 60 | -------------------------------------------------------------------------------- /ansible/inventory.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | ''' 4 | Example custom dynamic inventory script for Ansible. 5 | ''' 6 | 7 | import os 8 | import sys 9 | import argparse 10 | import yaml 11 | 12 | try: 13 | import json 14 | except ImportError: 15 | import simplejson as json 16 | 17 | class ExampleInventory(object): 18 | 19 | def __init__(self): 20 | self.inventory = {} 21 | self.read_cli_args() 22 | 23 | # Called with `--list`. 24 | if self.args.list: 25 | self.inventory = self.example_inventory() 26 | # Called with `--host [hostname]`. 27 | elif self.args.host: 28 | # Not implemented, since we return _meta info `--list`. 29 | self.inventory = self.empty_inventory() 30 | # If no groups or vars are present, return empty inventory. 31 | else: 32 | self.inventory = self.empty_inventory() 33 | 34 | print json.dumps(self.inventory); 35 | 36 | # Example inventory for testing. 37 | def example_inventory(self): 38 | f = open('hosts.yaml') 39 | hosts = yaml.safe_load(f) 40 | f.close() 41 | return hosts 42 | 43 | # Empty inventory for testing. 44 | def empty_inventory(self): 45 | return {'_meta': {'hostvars': {}}} 46 | 47 | # Read the command line args passed to the script. 48 | def read_cli_args(self): 49 | parser = argparse.ArgumentParser() 50 | parser.add_argument('--list', action = 'store_true') 51 | parser.add_argument('--host', action = 'store') 52 | self.args = parser.parse_args() 53 | 54 | # Get the inventory. 
55 | ExampleInventory() 56 | -------------------------------------------------------------------------------- /ansible/provision_bind9_servers.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Set global facts on hosts 4 | hosts: all 5 | gather_facts: yes 6 | vars_files: 7 | - vars/makevault.yml 8 | become: yes 9 | become_user: root 10 | pre_tasks: 11 | - name: Set facts for playbook 12 | set_fact: 13 | fact_bind9_allowed_subnets: 14 | - "172.136.1.0/24" 15 | - "172.136.2.0/24" 16 | - "172.136.3.0/24" 17 | - "172.136.4.0/24" 18 | - "172.136.5.0/24" 19 | - "172.136.6.0/24" 20 | 21 | - name: Install Bind9 22 | hosts: core1.lan 23 | become: yes 24 | become_user: root 25 | vars_files: 26 | - vars/makevault.yml 27 | roles: 28 | - role: bind9 29 | bind9_type: master 30 | bind9_listen: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}" 31 | bind9_named_allow_transfer: "any" 32 | bind9_named_forward_zones: 33 | - name: "lan" 34 | path: "/etc/bind/zones/master" 35 | file: "db.lan" 36 | hosts: 37 | - 172.136.1.12 38 | bind9_named_reverse_zones: 39 | - name: "1.136.172.in-addr.arpa" 40 | path: "/etc/bind/zones/master" 41 | file: "db.1.136.172" 42 | hosts: 43 | - 172.136.1.12 44 | - name: "2.136.172.in-addr.arpa" 45 | path: "/etc/bind/zones/master" 46 | file: "db.2.136.172" 47 | hosts: 48 | - 172.136.1.12 49 | - name: "3.136.172.in-addr.arpa" 50 | path: "/etc/bind/zones/master" 51 | file: "db.3.136.172" 52 | hosts: 53 | - 172.136.1.12 54 | - name: "4.136.172.in-addr.arpa" 55 | path: "/etc/bind/zones/master" 56 | file: "db.4.136.172" 57 | hosts: 58 | - 172.136.1.12 59 | bind9_named_forwarder_zones: 60 | - name: "consul" 61 | hosts: "{{ hostvars | server_group_ip_port_map_v2(groups['consul_servers'], 'ansible_eth1', '8600') }}" 62 | bind9_named_acls: 63 | - name: trusted 64 | allowed: "{{ fact_bind9_allowed_subnets }}" 65 | -------------------------------------------------------------------------------- /ansible/provision_consul_client_servers.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Install Prometheus clients with Consul 4 | hosts: prometheus_client_servers 5 | become: yes 6 | become_user: root 7 | gather_facts: yes 8 | vars_files: 9 | - vars/makevault.yml 10 | roles: 11 | - role: consul 12 | consul_type: "client" 13 | consul_version: "0.9.1" 14 | consul_log_level: "DEBUG" 15 | consul_advertise_iface: 'eth1' 16 | consul_advertise_addr: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}" 17 | consul_client_addr: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}" 18 | consul_server_list: ["172.136.2.11", "172.136.2.12", "172.136.2.13"] 19 | # Can this be hostnames? 
20 | #consul_server_list: "{{ hostvars | server_group_ip_map_v2(groups['consul_servers'], 'ansible_eth1') }}" 21 | consul_services_hash: 22 | services: 23 | - name: prometheusnode 24 | tags: 25 | - prometheusnode 26 | - node-exporter 27 | address: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}" 28 | port: 9100 29 | checks: 30 | - id: prometheusnode_check_1 31 | name: Prometheus node-exporter TCP on port 9100 32 | tcp: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}:9100" 33 | interval: 10s 34 | timeout: 1s 35 | 36 | - name: Install Bind9 clients with Consul 37 | hosts: core_servers 38 | become: yes 39 | become_user: root 40 | gather_facts: yes 41 | vars_files: 42 | - vars/makevault.yml 43 | roles: 44 | - role: nagios_checks 45 | nagios_checks_templates_list: 46 | - src: "check_bind.j2" 47 | dest: "/usr/lib/nagios/plugins/custom/check_bind" 48 | - role: consul 49 | consul_type: "client" 50 | consul_version: "0.9.1" 51 | consul_log_level: "DEBUG" 52 | consul_advertise_iface: 'eth1' 53 | consul_advertise_addr: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}" 54 | consul_client_addr: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}" 55 | consul_server_list: ["172.136.2.11", "172.136.2.12", "172.136.2.13"] 56 | # Can this be hostnames? 57 | #consul_server_list: "{{ hostvars | server_group_ip_map_v2(groups['consul_servers'], 'ansible_eth1') }}" 58 | consul_services_hash: 59 | services: 60 | - name: bind 61 | tags: 62 | - dns 63 | checks: 64 | - id: bind_dns_server_check 65 | name: Check bind DNS script 66 | script: "/usr/lib/nagios/plugins/custom/check_bind" 67 | interval: 10s 68 | timeout: 1s 69 | 70 | - name: Install Prometheus servers with Consul clients for clustered service discovery 71 | hosts: prometheus_servers 72 | become: yes 73 | become_user: root 74 | gather_facts: yes 75 | vars_files: 76 | - vars/makevault.yml 77 | roles: 78 | - role: consul 79 | consul_type: "client" 80 | consul_version: "0.9.1" 81 | consul_log_level: "DEBUG" 82 | consul_advertise_iface: 'eth1' 83 | consul_advertise_addr: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}" 84 | consul_client_addr: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}" 85 | consul_server_list: ["172.136.2.11", "172.136.2.12", "172.136.2.13"] 86 | # Can this be hostnames? 
87 | #consul_server_list: "{{ hostvars | server_group_ip_map_v2(groups['consul_servers'], 'ansible_eth1') }}" 88 | consul_services_hash: 89 | services: 90 | - name: grafana 91 | tags: 92 | - grafana 93 | address: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}" 94 | port: 3000 95 | checks: 96 | - id: grafana_check_prometheus_servers 97 | name: Grafana TCP on port 3000 98 | tcp: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}:3000" 99 | interval: 10s 100 | timeout: 1s 101 | - name: prometheusserver 102 | tags: 103 | - prometheusserver 104 | address: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}" 105 | port: 9090 106 | checks: 107 | - id: prometheus_check_prometheus_servers 108 | name: Prometheus Server TCP on port 9090 109 | tcp: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}:9090" 110 | interval: 10s 111 | timeout: 1s 112 | -------------------------------------------------------------------------------- /ansible/provision_consul_servers.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Install Consul servers 4 | hosts: consul_servers 5 | become: yes 6 | become_user: root 7 | vars_files: 8 | - vars/makevault.yml 9 | roles: 10 | - role: consul 11 | consul_type: "server" 12 | consul_version: "0.9.1" 13 | consul_log_level: "DEBUG" 14 | consul_advertise_iface: "eth1" 15 | consul_server_list_lookup_interface: 'ansible_eth1' 16 | consul_advertise_addr: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}" 17 | consul_http_addr: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}" 18 | consul_https_addr: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}" 19 | consul_dns_addr: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}" 20 | consul_server_list: "{{ groups['consul_servers'] }}" 21 | consul_services_hash: 22 | services: 23 | - name: prometheus 24 | tags: 25 | - prometheus 26 | - node-exporter 27 | address: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}" 28 | port: 9100 29 | checks: 30 | - id: prometheus_check_3 31 | name: Prometheus node-exporter TCP on port 9100 32 | tcp: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}:9100" 33 | interval: 10s 34 | timeout: 1s 35 | -------------------------------------------------------------------------------- /ansible/provision_consul_template_servers.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Install Consul Template servers 4 | hosts: consul_template_servers 5 | become: yes 6 | become_user: root 7 | roles: 8 | - role: haproxy 9 | haproxy_apt_repo: True 10 | haproxy_version: "1.5" 11 | - role: consul_template 12 | consul_template_consul_address: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}:8500" 13 | consul_template_template_list: 14 | - template: "haproxy.cfg.ctmpl" 15 | source: "/var/tmp/haproxy.cfg.ctmpl" 16 | destination: "/etc/haproxy/haproxy.cfg" 17 | command: "/etc/init.d/haproxy reload" 18 | command_timeout: "60s" 19 | perms: "0644" 20 | backup: "true" 21 | wait: 22 | min: "2s" 23 | max: "10s" 24 | -------------------------------------------------------------------------------- /ansible/provision_hostsfile.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Fix hostsfile 4 | hosts: all 5 | gather_facts: no 6 | become: yes 
7 | become_user: root 8 | roles: 9 | - role: hostsfile 10 | hostsfile_iface: "ansible_eth1" 11 | -------------------------------------------------------------------------------- /ansible/provision_prometheus_alertmanager_servers.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Provision Prometheus Alert Manager servers 4 | hosts: prometheus_alertmanager_servers 5 | become: yes 6 | become_user: root 7 | roles: 8 | - role: openssl 9 | openssl_generator_complexity: 4096 10 | - role: prometheus_alertmanager -------------------------------------------------------------------------------- /ansible/provision_prometheus_client_servers.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Provision Prometheus client servers 4 | hosts: prometheus_client_servers 5 | become: yes 6 | become_user: root 7 | roles: 8 | - role: prometheus_node_exporter -------------------------------------------------------------------------------- /ansible/provision_prometheus_consul_exporter_servers.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Provision Prometheus consul exporter servers 4 | hosts: prometheus_consul_exporter_servers 5 | become: yes 6 | become_user: root 7 | roles: 8 | - role: prometheus_consul_exporter -------------------------------------------------------------------------------- /ansible/provision_prometheus_node_exporter_servers.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Provision Prometheus node exporter servers 4 | hosts: prometheus_node_exporter_servers 5 | become: yes 6 | become_user: root 7 | roles: 8 | - role: prometheus_node_exporter -------------------------------------------------------------------------------- /ansible/provision_prometheus_servers.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Provision Prometheus servers 4 | hosts: prometheus_servers 5 | become: yes 6 | become_user: root 7 | roles: 8 | - role: prometheus_server 9 | - role: grafana -------------------------------------------------------------------------------- /ansible/provision_resolv_conf.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Update resolv.conf 4 | hosts: all 5 | become: yes 6 | become_user: root 7 | tasks: 8 | - name: Replace nameserver line 9 | lineinfile: 10 | dest: "/etc/resolv.conf" 11 | regexp: "^nameserver.*" 12 | line: "nameserver 172.136.1.11" 13 | - name: Replace search line 14 | lineinfile: 15 | dest: "/etc/resolv.conf" 16 | regexp: "^search.*" 17 | line: "search lan" -------------------------------------------------------------------------------- /ansible/roles/bind9/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | bind9_type: "master" 4 | bind9_serial: "3" 5 | bind9_rev_serial: "3" 6 | bind9_listen: "0.0.0.0" 7 | 8 | bind9_named_allow_transfer: "none" 9 | bind9_statistics: 10 | port: 8888 11 | address: "0.0.0.0" 12 | 13 | bind9_named_conf_templates: 14 | - src: "etc/bind/named.conf.options" 15 | dest: "/etc/bind/named.conf.options" 16 | - src: "etc/bind/named.conf.local" 17 | dest: "/etc/bind/named.conf.local" 18 | 19 | bind9_named_forward_zones: [] 20 | #bind9_named_forward_zones: 21 | # - name: "lan" 22 | # path: "/etc/bind/zones/master" 23 | # file: 
"db.lan" 24 | # transfer_hosts: [] 25 | 26 | bind9_named_reverse_zones: [] 27 | #bind9_named_reverse_zones: 28 | # - name: "136.17.in-addr.arpa" 29 | # path: "/etc/bind/zones/master" 30 | # file: "db.136.17.in-addr.arpa" 31 | # transfer_hosts: [] 32 | 33 | bind9_named_forwarder_zones: [] 34 | #bind9_named_forwarder_zones: 35 | # - name: "consul" 36 | # hosts: [] 37 | 38 | bind9_named_acls: [] 39 | 40 | bind9_named_forwarders: 41 | - "8.8.8.8" 42 | - "8.8.4.4" 43 | -------------------------------------------------------------------------------- /ansible/roles/bind9/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Ensure apt dependencies installed 4 | apt: 5 | name: "{{ item.name }}" 6 | update_cache: "{{ item.update_cache | default('yes') }}" 7 | cache_valid_time: "{{ item.cache_valid_time | default('3600') }}" 8 | with_items: 9 | - name: bind9 10 | - name: bind9utils 11 | - name: bind9-doc 12 | 13 | - name: Ensure /etc/default/bind9 has IPV4 option set 14 | lineinfile: 15 | dest: "/etc/default/bind9" 16 | regexp: "^OPTIONS=.*" 17 | line: 'OPTIONS="-4 -u bind"' 18 | 19 | - name: Ensure bind9 forward directories are present 20 | shell: "mkdir -p {{ item.path }}" 21 | with_items: "{{ bind9_named_forward_zones }}" 22 | 23 | - name: Ensure bind9 reverse directories are present 24 | shell: "mkdir -p {{ item.path }}" 25 | with_items: "{{ bind9_named_reverse_zones }}" 26 | 27 | - name: Ensure bind9 named conf templates in place 28 | template: 29 | src: "{{ item.src }}" 30 | dest: "{{ item.dest }}" 31 | backup: yes 32 | register: tmpl 33 | with_items: "{{ bind9_named_conf_templates }}" 34 | 35 | - name: Ensure forward zone templates in place on master 36 | when: bind9_type == "master" 37 | template: 38 | src: "etc/bind/zones/master/{{ item.file }}" 39 | dest: "{{ item.path }}/{{ item.file }}" 40 | backup: yes 41 | register: fwd_zone_tmpl 42 | with_items: "{{ bind9_named_forward_zones }}" 43 | 44 | - name: Ensure reverse zone templates in place on master 45 | when: bind9_type == "master" 46 | template: 47 | src: "etc/bind/zones/master/{{ item.file }}" 48 | dest: "{{ item.path }}/{{ item.file }}" 49 | backup: yes 50 | register: rev_zone_tmpl 51 | with_items: "{{ bind9_named_reverse_zones }}" 52 | 53 | - name: Test named-checkzone forward records when master 54 | when: bind9_type == "master" 55 | shell: "named-checkzone {{ item.name }} {{ item.path }}/{{ item.file }}" 56 | with_items: "{{ bind9_named_forward_zones }}" 57 | 58 | - name: Test named-checkzone reverse records when master 59 | when: bind9_type == "master" 60 | shell: "named-checkzone {{ item.name }} {{ item.path }}/{{ item.file }}" 61 | with_items: "{{ bind9_named_reverse_zones }}" 62 | 63 | - name: Test named-checkconf 64 | shell: "named-checkconf" 65 | register: named_check_conf 66 | 67 | - name: Restart bind9 when checks pass 68 | when: named_check_conf.rc == 0 69 | service: 70 | name: bind9 71 | state: restarted -------------------------------------------------------------------------------- /ansible/roles/bind9/templates/etc/bind/named.conf.local: -------------------------------------------------------------------------------- 1 | {% if bind9_named_forward_zones is defined and bind9_named_forward_zones|length > 0 %} 2 | {% for zone in bind9_named_forward_zones %} 3 | zone "{{ zone.name }}" { 4 | {% if bind9_type is defined %} 5 | type {{ bind9_type }}; 6 | file "{{ zone.path }}/{{ zone.file }}"; 7 | {% if bind9_type == 'master' %}allow-transfer{% else %}masters{% 
endif %} { {{ zone.hosts | join("; ") }}; }; 8 | {% endif %} 9 | }; 10 | {% endfor %} 11 | {% endif %} 12 | 13 | {% if bind9_named_reverse_zones is defined and bind9_named_reverse_zones|length > 0 %} 14 | {% for zone in bind9_named_reverse_zones %} 15 | zone "{{ zone.name }}" { 16 | {% if bind9_type is defined %} 17 | type {{ bind9_type }}; 18 | file "{{ zone.path }}/{{ zone.file }}"; 19 | {% if bind9_type == 'master' %}allow-transfer{% else %}masters{% endif %} { {{ zone.hosts | join("; ") }}; }; 20 | {% endif %} 21 | }; 22 | {% endfor %} 23 | {% endif %} 24 | 25 | {% if bind9_named_forwarder_zones is defined and bind9_named_forwarder_zones|length > 0 %} 26 | {% for zone in bind9_named_forwarder_zones %} 27 | zone "{{ zone.name }}" { 28 | type forward; 29 | forward only; 30 | forwarders { {{ zone.hosts | join("; ") }}; }; 31 | }; 32 | {% endfor %} 33 | {% endif %} 34 | -------------------------------------------------------------------------------- /ansible/roles/bind9/templates/etc/bind/named.conf.options: -------------------------------------------------------------------------------- 1 | {% if bind9_named_acls is defined and bind9_named_acls | length > 0 %} 2 | {% for acl in bind9_named_acls %} 3 | acl "{{ acl.name }}" { 4 | {% if acl.allowed is defined and acl.allowed|length > 0 %} 5 | {% for allow in acl.allowed %} 6 | {{ allow }}; 7 | {% endfor %} 8 | {% endif %} 9 | }; 10 | {% endfor %} 11 | {% endif %} 12 | 13 | {% if bind9_statistics is defined and bind9_statistics.address is defined and bind9_statistics.port is defined %} 14 | statistics-channels { 15 | inet {{ bind9_statistics.address }} port {{ bind9_statistics.port }} allow { any; }; 16 | inet ::1 port {{ bind9_statistics.port }} allow { any; }; 17 | }; 18 | {% endif %} 19 | 20 | options { 21 | directory "/var/cache/bind"; 22 | 23 | // If there is a firewall between you and nameservers you want 24 | // to talk to, you may need to fix the firewall to allow multiple 25 | // ports to talk. See http://www.kb.cert.org/vuls/id/800113 26 | 27 | // If your ISP provided one or more IP addresses for stable 28 | // nameservers, you probably want to use them as forwarders. 29 | // Uncomment the following block, and insert the addresses replacing 30 | // the all-0's placeholder. 31 | 32 | // forwarders { 33 | // 0.0.0.0; 34 | // }; 35 | 36 | recursion yes; 37 | {% if bind9_named_acls is defined and bind9_named_acls|length > 0 %} 38 | allow-recursion { {% for acl in bind9_named_acls %}{{ acl.name }}; {% endfor %} }; 39 | {% else %} 40 | allow-recursion { any; }; 41 | {% endif %} 42 | listen-on { {{ bind9_listen }}; }; # private IP 43 | allow-transfer { {{ bind9_named_allow_transfer }}; }; # disable zone transfers by default 44 | 45 | {% if bind9_named_forwarders is defined and bind9_named_forwarders|length > 0 %} 46 | forwarders { 47 | {% for fwd in bind9_named_forwarders %} 48 | {{ fwd }}; 49 | {% endfor %} 50 | }; 51 | {% endif %} 52 | 53 | //======================================================================== 54 | // If BIND logs error messages about the root key being expired, 55 | // you will need to update your keys. 
See https://www.isc.org/bind-keys 56 | //======================================================================== 57 | //dnssec-validation auto; 58 | 59 | //auth-nxdomain no; # conform to RFC1035 60 | //listen-on-v6 { any; }; 61 | 62 | // override for consul 63 | dnssec-validation no; 64 | dnssec-enable no; 65 | 66 | //turn off ipv6 and authoritative answer 67 | auth-nxdomain no; # conform to RFC1035 68 | listen-on-v6 { none; }; 69 | 70 | }; 71 | -------------------------------------------------------------------------------- /ansible/roles/bind9/templates/etc/bind/zones/master/db.1.136.172: -------------------------------------------------------------------------------- 1 | ; 2 | ; BIND reverse data file for local loopback interface 3 | ; 4 | $TTL 604800 5 | @ IN SOA {{ ansible_fqdn }}. admin.{{ ansible_fqdn }}. ( 6 | {{ bind9_rev_serial }} ; Serial 7 | 604800 ; Refresh 8 | 86400 ; Retry 9 | 2419200 ; Expire 10 | 604800 ) ; Negative Cache TTL 11 | 12 | ; name servers - NS records 13 | IN NS core1.lan. 14 | IN NS core2.lan. 15 | 16 | ; PTR Records 17 | 11 IN PTR core1.lan. ; 172.136.1.11 18 | -------------------------------------------------------------------------------- /ansible/roles/bind9/templates/etc/bind/zones/master/db.2.136.172: -------------------------------------------------------------------------------- 1 | ; 2 | ; BIND reverse data file for local loopback interface 3 | ; 4 | $TTL 604800 5 | @ IN SOA {{ ansible_fqdn }}. admin.{{ ansible_fqdn }}. ( 6 | {{ bind9_rev_serial }} ; Serial 7 | 604800 ; Refresh 8 | 86400 ; Retry 9 | 2419200 ; Expire 10 | 604800 ) ; Negative Cache TTL 11 | 12 | ; name servers - NS records 13 | IN NS core1.lan. 14 | IN NS core2.lan. 15 | 16 | ; PTR Records 17 | 11 IN PTR consul1.lan. ; 172.136.2.11 18 | 12 IN PTR consul2.lan. ; 172.136.2.12 19 | 13 IN PTR consul3.lan. ; 172.136.2.13 20 | -------------------------------------------------------------------------------- /ansible/roles/bind9/templates/etc/bind/zones/master/db.3.136.172: -------------------------------------------------------------------------------- 1 | ; 2 | ; BIND reverse data file for local loopback interface 3 | ; 4 | $TTL 604800 5 | @ IN SOA {{ ansible_fqdn }}. admin.{{ ansible_fqdn }}. ( 6 | {{ bind9_rev_serial }} ; Serial 7 | 604800 ; Refresh 8 | 86400 ; Retry 9 | 2419200 ; Expire 10 | 604800 ) ; Negative Cache TTL 11 | 12 | ; name servers - NS records 13 | IN NS core1.lan. 14 | IN NS core2.lan. 15 | 16 | ; PTR Records 17 | 11 IN PTR client1.lan. ; 172.136.3.11 18 | 12 IN PTR client2.lan. ; 172.136.3.12 19 | -------------------------------------------------------------------------------- /ansible/roles/bind9/templates/etc/bind/zones/master/db.4.136.172: -------------------------------------------------------------------------------- 1 | ; 2 | ; BIND reverse data file for local loopback interface 3 | ; 4 | $TTL 604800 5 | @ IN SOA {{ ansible_fqdn }}. admin.{{ ansible_fqdn }}. ( 6 | {{ bind9_rev_serial }} ; Serial 7 | 604800 ; Refresh 8 | 86400 ; Retry 9 | 2419200 ; Expire 10 | 604800 ) ; Negative Cache TTL 11 | 12 | ; name servers - NS records 13 | IN NS core1.lan. 14 | IN NS core2.lan. 15 | 16 | 11 IN PTR prometheus1.lan. ; 172.136.4.11 17 | 12 IN PTR prometheus2.lan. ; 172.136.4.12 18 | 13 IN PTR prometheus3.lan. 
; 172.136.4.13 19 | -------------------------------------------------------------------------------- /ansible/roles/bind9/templates/etc/bind/zones/master/db.lan: -------------------------------------------------------------------------------- 1 | ; 2 | ; BIND data file for local loopback interface 3 | ; 4 | $TTL 604800 5 | @ IN SOA {{ ansible_fqdn }}. admin.{{ ansible_fqdn }}. ( 6 | {{ bind9_serial }} ; Serial 7 | 604800 ; Refresh 8 | 86400 ; Retry 9 | 2419200 ; Expire 10 | 604800 ) ; Negative Cache TTL 11 | ; 12 | ; name servers - NS records 13 | IN NS core1.lan. 14 | IN NS core2.lan. 15 | 16 | ; name servers - A records 17 | core1.lan. IN A 172.136.1.11 18 | core2.lan. IN A 172.136.1.12 19 | 20 | consul1.lan. IN A 172.136.2.11 21 | consul2.lan. IN A 172.136.2.12 22 | consul3.lan. IN A 172.136.2.13 23 | 24 | client1.lan. IN A 172.136.3.11 25 | 26 | prometheus1.lan. IN A 172.136.4.11 27 | prometheus2.lan. IN A 172.136.4.12 28 | prometheus3.lan. IN A 172.136.4.13 29 | 30 | 31 | 32 | -------------------------------------------------------------------------------- /ansible/roles/consul/defaults/main.yml: -------------------------------------------------------------------------------- 1 | consul_user: "root" 2 | consul_group: "root" 3 | 4 | consul_version: "0.9.3" 5 | consul_url: "https://releases.hashicorp.com/consul/{{ consul_version }}/consul_{{ consul_version }}_linux_amd64.zip" 6 | 7 | consul_apt_deps: 8 | - name: "unzip" 9 | 10 | consul_type: "server" 11 | consul_bin_path: "/usr/bin" 12 | consul_executable: "{{ consul_bin_path }}/consul" 13 | 14 | consul_init_path: "/etc/init" 15 | consul_conf_path: "/etc/consul" 16 | 17 | consul_desired_state: "started" 18 | 19 | consul_server_list: [] 20 | 21 | consul_advertise_iface: "eth0" 22 | consul_advertise_addr: "0.0.0.0" 23 | consul_http_addr: "0.0.0.0" 24 | consul_https_addr: "0.0.0.0" 25 | consul_dns_addr: "0.0.0.0" 26 | consul_client_addr: "0.0.0.0" 27 | consul_datadir: "/var/lib/consul" 28 | consul_datacenter: "lan1" 29 | consul_enable_syslog: "true" 30 | consul_log_level: "DEBUG" 31 | consul_encrypt: "KnqifJT6qkF0X+Zx9saajg==" 32 | 33 | consul_server_list_lookup_interface: "eth0" 34 | 35 | consul_services_hash: 36 | services: [] 37 | # - name: prometheus 38 | # tags: 39 | # - prometheus 40 | # - node-exporter 41 | # address: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}" 42 | # port: 9100 43 | # checks: 44 | # - id: ssh_check 45 | # name: Prometheus node-exporter TCP on port 9100 46 | # tcp: "{{ hostvars[inventory_hostname]['ansible_eth1']['ipv4']['address'] }}:9100" 47 | # interval: 10s 48 | # timeout: 1s 49 | -------------------------------------------------------------------------------- /ansible/roles/consul/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: restart_consul_server_14.04 4 | service: 5 | name: consul_server 6 | state: restarted 7 | 8 | - name: restart_consul_server_16.04 9 | systemd: 10 | name: consul_server 11 | state: restarted 12 | 13 | - name: restart_consul_client_14.04 14 | service: 15 | name: consul_client 16 | state: restarted 17 | 18 | - name: restart_consul_client_16.04 19 | systemd: 20 | name: consul_client 21 | state: restarted 22 | -------------------------------------------------------------------------------- /ansible/roles/consul/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Ensure Consul conf paths exist 4 | file: 5 | dest: "{{ 
item.dest }}" 6 | state: "{{ item.state | default('directory') }}" 7 | recurse: "{{ item.recurse | default('yes') }}" 8 | with_items: 9 | - dest: "{{ consul_conf_path }}" 10 | - dest: "{{ consul_datadir }}" 11 | 12 | - name: Ensure Consul apt dependencies are met 13 | apt: 14 | name: "{{ item.name }}" 15 | update_cache: "{{ item.update_cache | default('yes') }}" 16 | cache_valid_time: "{{ item.cache_valid_time | default('3600') }}" 17 | with_items: "{{ consul_apt_deps }}" 18 | 19 | - name: Download Consul and unzip to {{ consul_executable }} 20 | unarchive: 21 | src: "{{ item.src }}" 22 | dest: "{{ item.dest }}" 23 | copy: "no" 24 | mode: "0775" 25 | creates: "{{ item.creates }}" 26 | with_items: 27 | - src: "{{ consul_url }}" 28 | dest: "{{ consul_bin_path }}" 29 | creates: "{{ consul_executable }}" 30 | 31 | - name: Add consul services when specified 32 | when: consul_services_hash['services']|length > 0 33 | copy: 34 | content: "{{ consul_services_hash | to_nice_json(indent=2) }}" 35 | dest: "{{ consul_conf_path }}/services.json" 36 | notify: 37 | - "restart_consul_{{ consul_type }}_{{ ansible_distribution_version }}" 38 | 39 | - name: Include xenial when distro greater than 14.04 40 | when: ansible_distribution_version|version_compare('14.04','>') 41 | include: xenial.yml 42 | 43 | - name: Include trusty when distro less than 16.04 44 | when: ansible_distribution_version|version_compare('16.04','<') 45 | include: trusty.yml 46 | -------------------------------------------------------------------------------- /ansible/roles/consul/tasks/trusty.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Template Consul server files when server 4 | when: consul_type == 'server' 5 | template: 6 | src: "{{ item.src }}" 7 | dest: "{{ item.dest }}" 8 | mode: "{{ item.mode | default('0644') }}" 9 | owner: "{{ item.owner | default('root') }}" 10 | group: "{{ item.group | default('root') }}" 11 | with_items: 12 | - src: "consul_server.json" 13 | dest: "/etc/consul/consul_server.json" 14 | - src: "consul_server.conf" 15 | dest: "/etc/init/consul_server.conf" 16 | notify: 17 | - "restart_consul_server_{{ ansible_distribution_version }}" 18 | 19 | - name: Template Consul client files when client 20 | when: consul_type == 'client' 21 | template: 22 | src: "{{ item.src }}" 23 | dest: "{{ item.dest }}" 24 | mode: "{{ item.mode | default('0644') }}" 25 | owner: "{{ item.owner | default('root') }}" 26 | group: "{{ item.group | default('root') }}" 27 | with_items: 28 | - src: "consul_client.json" 29 | dest: "/etc/consul/consul_client.json" 30 | - src: "consul_client.conf" 31 | dest: "/etc/init/consul_client.conf" 32 | notify: 33 | - "restart_consul_client_{{ ansible_distribution_version }}" 34 | 35 | - name: Ensure Consul in desired state and enabled 36 | service: 37 | name: "consul_{{ consul_type }}" 38 | state: "{{ consul_desired_state }}" 39 | enabled: yes 40 | -------------------------------------------------------------------------------- /ansible/roles/consul/tasks/xenial.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Template Consul server files when server 4 | when: consul_type == 'server' 5 | template: 6 | src: "{{ item.src }}" 7 | dest: "{{ item.dest }}" 8 | mode: "{{ item.mode | default('0644') }}" 9 | owner: "{{ item.owner | default('root') }}" 10 | group: "{{ item.group | default('root') }}" 11 | with_items: 12 | - src: "consul_server.service.j2" 13 | dest: 
"/etc/systemd/system/consul_server.service" 14 | - src: "consul_server.json" 15 | dest: "/etc/consul/consul_server.json" 16 | notify: 17 | - "restart_consul_server_{{ ansible_distribution_version }}" 18 | 19 | - name: Template Consul client files when client 20 | when: consul_type == 'client' 21 | template: 22 | src: "{{ item.src }}" 23 | dest: "{{ item.dest }}" 24 | mode: "{{ item.mode | default('0644') }}" 25 | owner: "{{ item.owner | default('root') }}" 26 | group: "{{ item.group | default('root') }}" 27 | with_items: 28 | - src: "consul_client.service.j2" 29 | dest: "/etc/systemd/system/consul_client.service" 30 | - src: "consul_client.json" 31 | dest: "/etc/consul/consul_client.json" 32 | notify: 33 | - "restart_consul_server_{{ ansible_distribution_version }}" 34 | 35 | - name: Ensure Consul in desired state 36 | systemd: 37 | name: "consul_{{ consul_type }}" 38 | state: "{{ consul_desired_state }}" 39 | -------------------------------------------------------------------------------- /ansible/roles/consul/templates/Makefile: -------------------------------------------------------------------------------- 1 | deps: 2 | cd /vagrant/consul 3 | wget -q -nc https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip 4 | wget -q -nc https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_web_ui.zip 5 | wget -q -nc https://github.com/cbednarski/hostess/releases/download/v0.1.0/hostess_linux_amd64 6 | apt-get update -qq 7 | apt-get install -y unzip 8 | unzip -o consul_0.6.4_linux_amd64.zip 9 | unzip -o consul_0.6.4_web_ui.zip -d dist 10 | 11 | install: 12 | cd /vagrant/consul 13 | # Install hostess so we can edit the hosts file 14 | cp hostess_linux_amd64 /usr/local/bin/hostess 15 | chmod +x /usr/local/bin/hostess 16 | 17 | # Get IP address of the current machine from eth1 18 | ifconfig eth1 | grep "inet addr" | awk 'BEGIN { FS = "[ :]+" }{print $$4}' > /tmp/self.ip 19 | #hostess add consul `cat /tmp/self.ip` 20 | 21 | # Install consul binaries 22 | stop consul || true 23 | cp /vagrant/consul/consul /usr/local/bin/consul 24 | 25 | # Install consul UI 26 | mkdir -p /opt/consul-ui 27 | cp -R /vagrant/consul/dist/* /opt/consul-ui 28 | 29 | # Add consul configuration 30 | mkdir -p /etc/consul 31 | mkdir -p /var/lib/consul 32 | 33 | install-server: 34 | cd /vagrant/consul 35 | sed 's/ADDR/'`cat /tmp/self.ip`'/g' /vagrant/consul/consul-server.json > /etc/consul/consul.json 36 | 37 | # Setup consul service and start it 38 | cp /vagrant/consul/consul.upstart.conf /etc/init/consul.conf 39 | restart consul || start consul 40 | sleep 1 41 | 42 | install-client: 43 | cd /vagrant/consul 44 | sed 's/ADDR/'`cat /tmp/self.ip`'/g' /vagrant/consul/consul-client.json > /etc/consul/consul.json 45 | 46 | # Setup consul service and start it 47 | cp /vagrant/consul/consul.upstart.conf /etc/init/consul.conf 48 | restart consul || start consul 49 | sleep 1 50 | -------------------------------------------------------------------------------- /ansible/roles/consul/templates/consul_client.conf: -------------------------------------------------------------------------------- 1 | description "Consul client process" 2 | 3 | start on (local-filesystems and net-device-up IFACE!={{ consul_advertise_iface }}) 4 | stop on runlevel [!12345] 5 | 6 | respawn 7 | 8 | setuid {{ consul_user }} 9 | setgid {{ consul_group }} 10 | 11 | exec {{ consul_executable }} agent -enable-script-checks=true -config-dir {{ consul_conf_path }} 12 | -------------------------------------------------------------------------------- 
/ansible/roles/consul/templates/consul_client.json: -------------------------------------------------------------------------------- 1 | { 2 | "server": false, 3 | "advertise_addr": "{{ consul_advertise_addr }}", 4 | "client_addr": "{{ consul_client_addr }}", 5 | "data_dir": "{{ consul_datadir }}", 6 | "datacenter": "{{ consul_datacenter }}", 7 | "enable_syslog": {{ consul_enable_syslog }}, 8 | "log_level": "{{ consul_log_level }}", 9 | "encrypt": "{{ consul_encrypt }}", 10 | "start_join" : {{ consul_server_list | to_json }} 11 | } 12 | -------------------------------------------------------------------------------- /ansible/roles/consul/templates/consul_client.service.j2: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Consul client 3 | After=network.target 4 | 5 | [Service] 6 | User={{ consul_user }} 7 | Group={{ consul_group }} 8 | Environment="GOMAXPROCS=2" 9 | ExecStart={{ consul_executable }} agent -enable-script-checks=true -config-dir {{ consul_conf_path }} 10 | ExecReload=/bin/kill -9 $MAINPID 11 | KillSignal=SIGINT 12 | Restart=on-failure 13 | 14 | [Install] 15 | WantedBy=multi-user.target 16 | -------------------------------------------------------------------------------- /ansible/roles/consul/templates/consul_server.conf: -------------------------------------------------------------------------------- 1 | description "Consul server process" 2 | 3 | start on (local-filesystems and net-device-up IFACE!={{ consul_advertise_iface }}) 4 | stop on runlevel [!12345] 5 | 6 | respawn 7 | 8 | setuid {{ consul_user }} 9 | setgid {{ consul_group }} 10 | 11 | exec {{ consul_executable }} agent -enable-script-checks=true -ui -config-dir {{ consul_conf_path }} 12 | -------------------------------------------------------------------------------- /ansible/roles/consul/templates/consul_server.json: -------------------------------------------------------------------------------- 1 | { 2 | "server": true, 3 | "bootstrap_expect": {{ consul_server_list|length }}, 4 | "advertise_addr": "{{ consul_advertise_addr }}", 5 | "datacenter": "{{ consul_datacenter }}", 6 | "data_dir": "{{ consul_datadir }}", 7 | "addresses": { 8 | "http": "{{ consul_http_addr }}", 9 | "https": "{{ consul_https_addr }}", 10 | "dns": "{{ consul_dns_addr }}" 11 | }, 12 | "enable_syslog": {{ consul_enable_syslog }}, 13 | "log_level": "{{ consul_log_level }}", 14 | "encrypt": "{{ consul_encrypt }}", 15 | "retry_join" : {{ hostvars | server_group_filter_v2(consul_server_list, inventory_hostname, consul_server_list_lookup_interface) | to_json }} 16 | } 17 | -------------------------------------------------------------------------------- /ansible/roles/consul/templates/consul_server.service.j2: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Consul server 3 | After=network.target 4 | 5 | [Service] 6 | User={{ consul_user }} 7 | Group={{ consul_group }} 8 | Environment="GOMAXPROCS=2" 9 | ExecStart={{ consul_executable }} agent -enable-script-checks=true -ui -config-dir {{ consul_conf_path }} 10 | ExecReload=/bin/kill -9 $MAINPID 11 | KillSignal=SIGINT 12 | Restart=on-failure 13 | 14 | [Install] 15 | WantedBy=multi-user.target 16 | -------------------------------------------------------------------------------- /ansible/roles/consul_template/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | consul_template_version: "0.19.3" 4 | 
consul_template_service_name: "consul-template" 5 | consul_template_url: "https://releases.hashicorp.com/{{ consul_template_service_name }}/{{ consul_template_version }}/{{ consul_template_service_name }}_{{ consul_template_version }}_linux_amd64.tgz" 6 | consul_template_parent_dir: "/opt" 7 | consul_template_exec_dir: "/usr/local/bin" 8 | consul_template_template_list: [] 9 | #consul_template_template_list: 10 | # - template: "haproxy.cfg.ctmpl" 11 | # source: "/tmp/haproxy.cfg.ctmpl" 12 | # destination: "/etc/haproxy/haproxy.cfg" 13 | # command: "/etc/init.d/haproxy reload" 14 | # command_timeout: "60s" 15 | # perms: "0644" 16 | # backup: "true" 17 | # wait: 18 | # min: "2s" 19 | # max: "10s" 20 | 21 | consul_template_config_dest: "/etc/consul" 22 | consul_template_consul_address: "localhost:8500" 23 | -------------------------------------------------------------------------------- /ansible/roles/consul_template/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: reload systemd-daemon 4 | shell: systemctl daemon-reload 5 | 6 | - name: restart consul_template 7 | when: consul_template_template_list is defined and consul_template_template_list|length > 0 8 | service: 9 | name: consul-template-service 10 | state: restarted 11 | 12 | - name: restart_consul_template_14.04 13 | when: consul_template_template_list is defined and consul_template_template_list|length > 0 14 | service: 15 | name: consul-template-service 16 | state: restarted 17 | 18 | - name: restart_consul_template_16.04 19 | when: consul_template_template_list is defined and consul_template_template_list|length > 0 20 | systemd: 21 | name: consul-template-service 22 | state: restarted 23 | -------------------------------------------------------------------------------- /ansible/roles/consul_template/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Ensure Consul template dirs exists 4 | file: 5 | dest: "{{ item.dest }}" 6 | state: "{{ item.state | default('directory') }}" 7 | with_items: 8 | - dest: "{{ consul_template_parent_dir }}/{{ consul_template_service_name }}-{{ consul_template_version }}" 9 | - dest: "{{ consul_template_config_dest }}" 10 | 11 | - name: Ensure consul template downloaded 12 | unarchive: 13 | src: "{{ consul_template_url }}" 14 | dest: "{{ consul_template_parent_dir }}/{{ consul_template_service_name }}-{{ consul_template_version }}/" 15 | creates: "{{ consul_template_parent_dir }}/{{ consul_template_service_name }}-{{ consul_template_version }}/{{ consul_template_service_name }}" 16 | remote_src: yes 17 | 18 | - name: Ensure 775 permissions on Consul Template binary 19 | file: 20 | dest: "{{ consul_template_parent_dir }}/{{ consul_template_service_name }}-{{ consul_template_version }}/{{ consul_template_service_name }}" 21 | mode: "0775" 22 | 23 | - name: Symlink Consul Template binary to exec dir 24 | file: 25 | src: "{{ consul_template_parent_dir }}/{{ consul_template_service_name }}-{{ consul_template_version }}/{{ consul_template_service_name }}" 26 | dest: "{{ consul_template_exec_dir }}/{{ consul_template_service_name }}" 27 | state: link 28 | 29 | - name: Ensure Consul template config template written 30 | template: 31 | src: "consul_template_config.json" 32 | dest: "{{ consul_template_config_dest }}/consul_template_config.json" 33 | notify: 34 | - restart_consul_template_{{ ansible_distribution_version }} 35 | 36 | - name: Ensure Consul templates created 
37 |   when: consul_template_template_list is defined and consul_template_template_list|length > 0
38 |   template:
39 |     src: "{{ item.template }}"
40 |     dest: "{{ item.source }}"
41 |   with_items: "{{ consul_template_template_list }}"
42 |   notify:
43 |     - "restart_consul_template_{{ ansible_distribution_version }}"
44 | 
45 | - name: Include xenial when distro greater than 14.04
46 |   when: ansible_distribution_version|version_compare('14.04','>')
47 |   include: xenial.yml
48 | 
49 | - name: Include trusty when distro less than 16.04
50 |   when: ansible_distribution_version|version_compare('16.04','<')
51 |   include: trusty.yml
52 | 
--------------------------------------------------------------------------------
/ansible/roles/consul_template/tasks/trusty.yml:
--------------------------------------------------------------------------------
1 | ---
2 | 
3 | - name: Create Consul Template upstart directory
4 |   file:
5 |     dest: "/etc/init"
6 |     state: directory
7 |     recurse: yes
8 | 
9 | - name: Template Consul Template upstart service file
10 |   register: upstart_service_template
11 |   template:
12 |     src: "consul-template-service.conf.j2"
13 |     dest: "/etc/init/consul-template-service.conf"
14 |   notify:
15 |     - "restart_consul_template_14.04"
16 | 
17 | - name: Ensure consul_template service started and enabled
18 |   when: consul_template_template_list is defined and consul_template_template_list|length > 0
19 |   service:
20 |     name: consul-template-service
21 |     enabled: yes
22 |     state: started
23 | 
--------------------------------------------------------------------------------
/ansible/roles/consul_template/tasks/xenial.yml:
--------------------------------------------------------------------------------
1 | ---
2 | 
3 | - name: Create Consul Template systemd directory
4 |   file:
5 |     dest: "/etc/systemd/system"
6 |     state: directory
7 |     recurse: yes
8 | 
9 | - name: Template Consul Template systemd service file
10 |   register: systemd_service_template
11 |   template:
12 |     src: "consul-template-service.service.j2"
13 |     dest: "/etc/systemd/system/consul-template-service.service"
14 |   notify:
15 |     - "reload systemd-daemon"
16 |     - "restart_consul_template_16.04"
17 | 
18 | - name: Ensure consul_template service started
19 |   when: consul_template_template_list is defined and consul_template_template_list|length > 0
20 |   systemd:
21 |     name: consul-template-service
22 |     state: started
23 | 
--------------------------------------------------------------------------------
/ansible/roles/consul_template/templates/consul-template-service.conf.j2:
--------------------------------------------------------------------------------
1 | description "Consul Template Service"
2 | 
3 | start on (local-filesystems)
4 | stop on runlevel [!12345]
5 | 
6 | respawn
7 | 
8 | exec {{ consul_template_exec_dir }}/{{ consul_template_service_name }} \
9 |   -config="{{ consul_template_config_dest }}/consul_template_config.json"
10 | 
--------------------------------------------------------------------------------
/ansible/roles/consul_template/templates/consul-template-service.service.j2:
--------------------------------------------------------------------------------
1 | [Unit]
2 | Description=Consul Template Service
3 | Documentation=https://github.com/hashicorp/consul-template
4 | 
5 | [Service]
6 | ExecStart={{ consul_template_exec_dir }}/{{ consul_template_service_name }} \
7 |   -config="{{ consul_template_config_dest }}/consul_template_config.json"
8 | Restart=on-failure
9 | RestartSec=5
10 | 
11 | [Install]
12 | WantedBy=multi-user.target
13 | 
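--------------------------------------------------------------------------------
The consul_template role renders templates, and restarts the service, only when consul_template_template_list is non-empty. A minimal group_vars sketch that would activate the bundled haproxy.cfg.ctmpl, mirroring the commented example in the role's defaults/main.yml; the reload command and wait timings are placeholders to adapt:

# group_vars sketch (illustrative values; the keys match the role's defaults)
consul_template_template_list:
  - template: "haproxy.cfg.ctmpl"            # role template copied to the host
    source: "/tmp/haproxy.cfg.ctmpl"         # input file consul-template watches
    destination: "/etc/haproxy/haproxy.cfg"  # rendered output
    command: "/etc/init.d/haproxy reload"    # run whenever the output changes
    command_timeout: "60s"
    perms: "0644"
    backup: "true"
    wait:
      min: "2s"
      max: "10s"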
-------------------------------------------------------------------------------- /ansible/roles/consul_template/templates/consul_template_config.hql: -------------------------------------------------------------------------------- 1 | # This denotes the start of the configuration section for Consul. All values 2 | # contained in this section pertain to Consul. 3 | consul { 4 | # This block specifies the basic authentication information to pass with the 5 | # request. For more information on authentication, please see the Consul 6 | # documentation. 7 | auth { 8 | enabled = true 9 | username = "test" 10 | password = "test" 11 | } 12 | 13 | # This is the address of the Consul agent. By default, this is 14 | # 127.0.0.1:8500, which is the default bind and port for a local Consul 15 | # agent. It is not recommended that you communicate directly with a Consul 16 | # server, and instead communicate with the local Consul agent. There are many 17 | # reasons for this, most importantly the Consul agent is able to multiplex 18 | # connections to the Consul server and reduce the number of open HTTP 19 | # connections. Additionally, it provides a "well-known" IP address for which 20 | # clients can connect. 21 | address = "127.0.0.1:8500" 22 | 23 | # This is the ACL token to use when connecting to Consul. If you did not 24 | # enable ACLs on your Consul cluster, you do not need to set this option. 25 | # 26 | # This option is also available via the environment variable CONSUL_TOKEN. 27 | token = "abcd1234" 28 | 29 | # This controls the retry behavior when an error is returned from Consul. 30 | # Consul Template is highly fault tolerant, meaning it does not exit in the 31 | # face of failure. Instead, it uses exponential back-off and retry functions 32 | # to wait for the cluster to become available, as is customary in distributed 33 | # systems. 34 | retry { 35 | # This enabled retries. Retries are enabled by default, so this is 36 | # redundant. 37 | enabled = true 38 | 39 | # This specifies the number of attempts to make before giving up. Each 40 | # attempt adds the exponential backoff sleep time. Setting this to 41 | # zero will implement an unlimited number of retries. 42 | attempts = 12 43 | 44 | # This is the base amount of time to sleep between retry attempts. Each 45 | # retry sleeps for an exponent of 2 longer than this base. For 5 retries, 46 | # the sleep times would be: 250ms, 500ms, 1s, 2s, then 4s. 47 | backoff = "250ms" 48 | 49 | # This is the maximum amount of time to sleep between retry attempts. 50 | # When max_backoff is set to zero, there is no upper limit to the 51 | # exponential sleep between retry attempts. 52 | # If max_backoff is set to 10s and backoff is set to 1s, sleep times 53 | # would be: 1s, 2s, 4s, 8s, 10s, 10s, ... 54 | max_backoff = "1m" 55 | } 56 | 57 | # This block configures the SSL options for connecting to the Consul server. 58 | ssl { 59 | # This enables SSL. Specifying any option for SSL will also enable it. 60 | enabled = true 61 | 62 | # This enables SSL peer verification. The default value is "true", which 63 | # will check the global CA chain to make sure the given certificates are 64 | # valid. If you are using a self-signed certificate that you have not added 65 | # to the CA chain, you may want to disable SSL verification. However, please 66 | # understand this is a potential security vulnerability. 67 | verify = false 68 | 69 | # This is the path to the certificate to use to authenticate. 
If just a 70 | # certificate is provided, it is assumed to contain both the certificate and 71 | # the key to convert to an X509 certificate. If both the certificate and 72 | # key are specified, Consul Template will automatically combine them into an 73 | # X509 certificate for you. 74 | cert = "/path/to/client/cert" 75 | key = "/path/to/client/key" 76 | 77 | # This is the path to the certificate authority to use as a CA. This is 78 | # useful for self-signed certificates or for organizations using their own 79 | # internal certificate authority. 80 | ca_cert = "/path/to/ca" 81 | 82 | # This is the path to a directory of PEM-encoded CA cert files. If both 83 | # `ca_cert` and `ca_path` is specified, `ca_cert` is preferred. 84 | ca_path = "path/to/certs/" 85 | 86 | # This sets the SNI server name to use for validation. 87 | server_name = "my-server.com" 88 | } 89 | } 90 | 91 | # This is the signal to listen for to trigger a reload event. The default 92 | # value is shown below. Setting this value to the empty string will cause CT 93 | # to not listen for any reload signals. 94 | reload_signal = "SIGHUP" 95 | 96 | # This is the signal to listen for to trigger a graceful stop. The default 97 | # value is shown below. Setting this value to the empty string will cause CT 98 | # to not listen for any graceful stop signals. 99 | kill_signal = "SIGINT" 100 | 101 | # This is the maximum interval to allow "stale" data. By default, only the 102 | # Consul leader will respond to queries; any requests to a follower will 103 | # forward to the leader. In large clusters with many requests, this is not as 104 | # scalable, so this option allows any follower to respond to a query, so long 105 | # as the last-replicated data is within these bounds. Higher values result in 106 | # less cluster load, but are more likely to have outdated data. 107 | max_stale = "10m" 108 | 109 | # This is the log level. If you find a bug in Consul Template, please enable 110 | # debug logs so we can help identify the issue. This is also available as a 111 | # command line flag. 112 | log_level = "warn" 113 | 114 | # This is the path to store a PID file which will contain the process ID of the 115 | # Consul Template process. This is useful if you plan to send custom signals 116 | # to the process. 117 | pid_file = "/path/to/pid" 118 | 119 | # This is the quiescence timers; it defines the minimum and maximum amount of 120 | # time to wait for the cluster to reach a consistent state before rendering a 121 | # template. This is useful to enable in systems that have a lot of flapping, 122 | # because it will reduce the the number of times a template is rendered. 123 | wait { 124 | min = "5s" 125 | max = "10s" 126 | } 127 | 128 | # This denotes the start of the configuration section for Vault. All values 129 | # contained in this section pertain to Vault. 130 | vault { 131 | # This is the address of the Vault leader. The protocol (http(s)) portion 132 | # of the address is required. 133 | address = "https://vault.service.consul:8200" 134 | 135 | # This is the grace period between lease renewal of periodic secrets and secret 136 | # re-acquisition. When renewing a secret, if the remaining lease is less than or 137 | # equal to the configured grace, Consul Template will request a new credential. 138 | # This prevents Vault from revoking the credential at expiration and Consul 139 | # Template having a stale credential. 
140 | # 141 | # Note: If you set this to a value that is higher than your default TTL or 142 | # max TTL, Consul Template will always read a new secret! 143 | grace = "5m" 144 | 145 | # This is the token to use when communicating with the Vault server. 146 | # Like other tools that integrate with Vault, Consul Template makes the 147 | # assumption that you provide it with a Vault token; it does not have the 148 | # incorporated logic to generate tokens via Vault's auth methods. 149 | # 150 | # This value can also be specified via the environment variable VAULT_TOKEN. 151 | token = "abcd1234" 152 | 153 | # This tells Consul Template that the provided token is actually a wrapped 154 | # token that should be unwrapped using Vault's cubbyhole response wrapping 155 | # before being used. Please see Vault's cubbyhole response wrapping 156 | # documentation for more information. 157 | unwrap_token = true 158 | 159 | # This option tells Consul Template to automatically renew the Vault token 160 | # given. If you are unfamiliar with Vault's architecture, Vault requires 161 | # tokens be renewed at some regular interval or they will be revoked. Consul 162 | # Template will automatically renew the token at half the lease duration of 163 | # the token. The default value is true, but this option can be disabled if 164 | # you want to renew the Vault token using an out-of-band process. 165 | # 166 | # Note that secrets specified in a template (using {{secret}} for example) 167 | # are always renewed, even if this option is set to false. This option only 168 | # applies to the top-level Vault token itself. 169 | renew_token = true 170 | 171 | # This section details the retry options for connecting to Vault. Please see 172 | # the retry options in the Consul section for more information (they are the 173 | # same). 174 | retry { 175 | # ... 176 | } 177 | 178 | # This section details the SSL options for connecting to the Vault server. 179 | # Please see the SSL options in the Consul section for more information (they 180 | # are the same). 181 | ssl { 182 | # ... 183 | } 184 | } 185 | 186 | # This block defines the configuration for connecting to a syslog server for 187 | # logging. 188 | syslog { 189 | # This enables syslog logging. Specifying any other option also enables 190 | # syslog logging. 191 | enabled = true 192 | 193 | # This is the name of the syslog facility to log to. 194 | facility = "LOCAL5" 195 | } 196 | 197 | # This block defines the configuration for de-duplication mode. Please see the 198 | # de-duplication mode documentation later in the README for more information 199 | # on how de-duplication mode operates. 200 | deduplicate { 201 | # This enables de-duplication mode. Specifying any other options also enables 202 | # de-duplication mode. 203 | enabled = true 204 | 205 | # This is the prefix to the path in Consul's KV store where de-duplication 206 | # templates will be pre-rendered and stored. 207 | prefix = "consul-template/dedup/" 208 | } 209 | 210 | # This block defines the configuration for exec mode. Please see the exec mode 211 | # documentation at the bottom of this README for more information on how exec 212 | # mode operates and the caveats of this mode. 213 | exec { 214 | # This is the command to exec as a child process. There can be only one 215 | # command per Consul Template process. 216 | command = "/usr/bin/app" 217 | 218 | # This is a random splay to wait before killing the command. 
The default 219 | # value is 0 (no wait), but large clusters should consider setting a splay 220 | # value to prevent all child processes from reloading at the same time when 221 | # data changes occur. When this value is set to non-zero, Consul Template 222 | # will wait a random period of time up to the splay value before reloading 223 | # or killing the child process. This can be used to prevent the thundering 224 | # herd problem on applications that do not gracefully reload. 225 | splay = "5s" 226 | 227 | env { 228 | # This specifies if the child process should not inherit the parent 229 | # process's environment. By default, the child will have full access to the 230 | # environment variables of the parent. Setting this to true will send only 231 | # the values specified in `custom_env` to the child process. 232 | pristine = false 233 | 234 | # This specifies additional custom environment variables in the form shown 235 | # below to inject into the child's runtime environment. If a custom 236 | # environment variable shares its name with a system environment variable, 237 | # the custom environment variable takes precedence. Even if pristine, 238 | # whitelist, or blacklist is specified, all values in this option 239 | # are given to the child process. 240 | custom = ["PATH=$PATH:/etc/myapp/bin"] 241 | 242 | # This specifies a list of environment variables to exclusively include in 243 | # the list of environment variables exposed to the child process. If 244 | # specified, only those environment variables matching the given patterns 245 | # are exposed to the child process. These strings are matched using Go's 246 | # glob function, so wildcards are permitted. 247 | whitelist = ["CONSUL_*"] 248 | 249 | # This specifies a list of environment variables to exclusively prohibit in 250 | # the list of environment variables exposed to the child process. If 251 | # specified, any environment variables matching the given patterns will not 252 | # be exposed to the child process, even if they are whitelisted. The values 253 | # in this option take precedence over the values in the whitelist. 254 | # These strings are matched using Go's glob function, so wildcards are 255 | # permitted. 256 | blacklist = ["VAULT_*"] 257 | } 258 | 259 | # This defines the signal that will be sent to the child process when a 260 | # change occurs in a watched template. The signal will only be sent after the 261 | # process is started, and the process will only be started after all 262 | # dependent templates have been rendered at least once. The default value is 263 | # nil, which tells Consul Template to stop the child process and spawn a new 264 | # one instead of sending it a signal. This is useful for legacy applications 265 | # or applications that cannot properly reload their configuration without a 266 | # full reload. 267 | reload_signal = "" 268 | 269 | # This defines the signal sent to the child process when Consul Template is 270 | # gracefully shutting down. The application should begin a graceful cleanup. 271 | # If the application does not terminate before the `kill_timeout`, it will 272 | # be terminated (effectively "kill -9"). The default value is "SIGTERM". 273 | kill_signal = "SIGINT" 274 | 275 | # This defines the amount of time to wait for the child process to gracefully 276 | # terminate when Consul Template exits. After this specified time, the child 277 | # process will be force-killed (effectively "kill -9"). The default value is 278 | # "30s". 
279 |   kill_timeout = "2s"
280 | }
281 | 
282 | # This block defines the configuration for a template. Unlike other blocks,
283 | # this block may be specified multiple times to configure multiple templates.
284 | # It is also possible to configure templates via the CLI directly.
285 | template {
286 |   # This is the source file on disk to use as the input template. This is often
287 |   # called the "Consul Template template". This option is required if not using
288 |   # the `contents` option.
289 |   source = "/path/on/disk/to/template.ctmpl"
290 | 
291 |   # This is the destination path on disk where the source template will render.
292 |   # If the parent directories do not exist, Consul Template will attempt to
293 |   # create them.
294 |   destination = "/path/on/disk/where/template/will/render.txt"
295 | 
296 |   # This option allows embedding the contents of a template in the configuration
297 |   # file rather than supplying the `source` path to the template file. This is
298 |   # useful for short templates. This option is mutually exclusive with the
299 |   # `source` option.
300 |   contents = "{{ keyOrDefault \"service/redis/maxconns@east-aws\" \"5\" }}"
301 | 
302 |   # This is the optional command to run when the template is rendered. The
303 |   # command will only run if the resulting template changes. The command must
304 |   # return within 30s (configurable), and it must have a successful exit code.
305 |   # Consul Template is not a replacement for a process monitor or init system.
306 |   command = "restart service foo"
307 | 
308 |   # This is the maximum amount of time to wait for the optional command to
309 |   # return. Default is 30s.
310 |   command_timeout = "60s"
311 | 
312 |   # Exit with an error when accessing a struct or map field/key that does not
313 |   # exist. The default behavior will print "<no value>" when accessing a field
314 |   # that does not exist. It is highly recommended you set this to "true" when
315 |   # retrieving secrets from Vault.
316 |   error_on_missing_key = false
317 | 
318 |   # This is the permission to render the file. If this option is left
319 |   # unspecified, Consul Template will attempt to match the permissions of the
320 |   # file that already exists at the destination path. If no file exists at that
321 |   # path, the permissions are 0644.
322 |   perms = 0600
323 | 
324 |   # This option backs up the previously rendered template at the destination
325 |   # path before writing a new one. It keeps exactly one backup. This option is
326 |   # useful for preventing accidental changes to the data without having a
327 |   # rollback strategy.
328 |   backup = true
329 | 
330 |   # These are the delimiters to use in the template. The default is "{{" and
331 |   # "}}", but for some templates, it may be easier to use a different delimiter
332 |   # that does not conflict with the output file itself.
333 |   left_delimiter = "{{"
334 |   right_delimiter = "}}"
335 | 
336 |   # This is the `minimum(:maximum)` to wait before rendering a new template to
337 |   # disk and triggering a command, separated by a colon (`:`). If the optional
338 |   # maximum value is omitted, it is assumed to be 4x the required minimum value.
339 |   # This is a numeric time with a unit suffix ("5s"). There is no default value.
340 |   # The wait value for a template takes precedence over any globally-configured
341 |   # wait.
342 |   wait {
343 |     min = "2s"
344 |     max = "10s"
345 |   }
346 | }
347 | 
--------------------------------------------------------------------------------
/ansible/roles/consul_template/templates/consul_template_config.json:
--------------------------------------------------------------------------------
1 | {
2 |   "consul": {
3 |     "address": "{{ consul_template_consul_address }}"
4 |   }
5 | {% if consul_template_template_list is defined and consul_template_template_list|length > 0 %},
6 |   "template": [
7 | {% for tmpl in consul_template_template_list %}
8 |     {
9 | {% if tmpl.command is defined %}
10 |       "command": "{{ tmpl.command }}",
11 | {% endif %}
12 | {% if tmpl.command_timeout is defined %}
13 |       "command_timeout": "{{ tmpl.command_timeout }}",
14 | {% endif %}
15 | {% if tmpl.perms is defined %}
16 |       "perms": "{{ tmpl.perms }}",
17 | {% endif %}
18 | {% if tmpl.backup is defined %}
19 |       "backup": {{ tmpl.backup }},
20 | {% endif %}
21 | {% if tmpl.wait is defined and tmpl.wait.min is defined and tmpl.wait.max is defined %}
22 |       "wait": {
23 |         "min": "{{ tmpl.wait.min }}",
24 |         "max": "{{ tmpl.wait.max }}"
25 |       },
26 | {% endif %}
27 |       "source": "{{ tmpl.source }}",
28 |       "destination": "{{ tmpl.destination }}"
29 |     }{% if not loop.last %},{% endif %}
30 | {% endfor %}
31 |   ]
32 | {% endif %}
33 | }
34 | 
--------------------------------------------------------------------------------
/ansible/roles/consul_template/templates/haproxy.cfg.ctmpl:
--------------------------------------------------------------------------------
1 | {% raw %}global
2 |   description load balancer
3 |   stats socket /run/haproxy/admin.sock mode 660 level admin
4 |   stats timeout 30s
5 |   log /dev/log daemon notice
6 |   chroot /var/lib/haproxy
7 |   user haproxy
8 |   group haproxy
9 |   daemon
10 |   stats maxconn 128
11 |   maxconn 65536
12 |   spread-checks 5
13 | 
14 |   # Default SSL material locations
15 |   ca-base /etc/ssl/certs
16 |   crt-base /etc/ssl/private
17 | 
18 |   # Default ciphers to use on SSL-enabled listening sockets.
19 |   # For more information, see ciphers(1SSL).
20 |   ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL
21 | 
22 | defaults
23 |   log global
24 |   mode http
25 |   maxconn 16384
26 |   default-server inter 2s maxconn 8192
27 |   option dontlognull
28 |   # option httpchk /heartbeat
29 |   option redispatch
30 |   timeout connect 5s
31 |   timeout client 180s
32 |   timeout server 180s
33 | 
34 | listen stats
35 |   bind *:8888
36 |   stats enable
37 |   stats hide-version
38 |   stats realm Haproxy\ Statistics
39 |   stats uri /
40 |   stats auth admin:adminpw
41 | 
42 | listen prometheusserver
43 |   option httplog
44 |   option httpchk GET /status
45 |   http-check expect string Version
46 |   bind *:9090{{range service "prometheusserver"}}
47 |   server {{.Node}} {{.Address}}:{{.Port}} check{{end}}
48 | 
49 | listen prometheusnode
50 |   option httplog
51 |   option httpchk GET /metrics
52 |   http-check expect string TYPE
53 |   bind *:9010{{range service "prometheusnode"}}
54 |   server {{.Node}} {{.Address}}:{{.Port}} check{{end}}
55 | 
56 | listen grafana
57 |   option httplog
58 |   option httpchk GET /login
59 |   http-check expect string Grafana
60 |   bind *:3000{{range service "grafana"}}
61 |   server {{.Node}} {{.Address}}:{{.Port}} check{{end}}
62 | {% endraw %}
63 | 
--------------------------------------------------------------------------------
/ansible/roles/consul_template/templates/icinga.cfg.ctmpl:
--------------------------------------------------------------------------------
1 | ### Host Definitions ###
2 | {{range nodes}}
3 | define host {
4 |   use        linux-host
5 |   host_name  {{.Node}}
6 |   alias      {{.Node}}
7 |   address    {{.Address}}
8 | }
9 | {{end}}
10 | ### End host Definitions ###
11 | 
12 | ### Service Definitions ###
13 | {{range services}}{{range service .Name}}
14 | define service {
15 |   service_description  {{.Node}}_{{.Name}}_{{.Port}}
16 |   host_name            {{.Node}}
17 |   check_command        check_tcp!"-p {{.Port}}"
18 | }
19 | {{end}}{{end}}
20 | ### End Service Definitions ###
--------------------------------------------------------------------------------
/ansible/roles/grafana/defaults/main.yml:
--------------------------------------------------------------------------------
1 | ---
2 | 
3 | grafana_version: 4.4.3
4 | grafana_deb_name: grafana
5 | grafana_unarchive_prefix: /tmp
6 | grafana_conf_dir: /etc/grafana
7 | grafana_service: grafana-server
8 | 
9 | grafana_ini_vars:
10 |   - name: "paths"
11 |   - name: "server"
12 |   - name: "database"
13 |   - name: "session"
14 |   - name: "analytics"
15 |   - name: "security"
16 |   - name: "users"
17 |   - name: "auth.anonymous"
18 |   - name: "auth.github"
19 |   - name: "auth.google"
20 |   - name: "auth.proxy"
21 |   - name: "auth.basic"
22 |   - name: "auth.ldap"
23 |   - name: "smtp"
24 |   - name: "emails"
25 |   - name: "log"
26 |   - name: "log.file"
27 |   - name: "dashboards.json"
28 |     vars:
29 |       - enabled: "false"
30 |       - path: "/var/lib/grafana/dashboards"
--------------------------------------------------------------------------------
/ansible/roles/grafana/handlers/main.yml:
--------------------------------------------------------------------------------
1 | ---
2 | 
3 | - name: Restart grafana-server
4 |   service:
5 |     name: grafana-server
6 |     state: restarted
--------------------------------------------------------------------------------
/ansible/roles/grafana/tasks/main.yml:
--------------------------------------------------------------------------------
1 | ---
2 | 
3 | - name: Ensure deb downloaded and installed
4 |   apt:
5 |     deb:
"https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana_4.4.3_amd64.deb" 6 | #allow_unauthenticated: yes 7 | state: present 8 | 9 | - name: Template config files for grafana 10 | template: 11 | src: "{{ item.src }}" 12 | dest: "{{ item.dest }}" 13 | with_items: 14 | - src: grafana.conf.j2 15 | dest: "{{ grafana_conf_dir }}/grafana.ini" 16 | notify: 17 | - Restart grafana-server 18 | 19 | #- name: Ensure grafana starts on boot 20 | # command: "update-rc.d {{ grafana_service }} defaults 95 10" 21 | 22 | - name: Ensure grafana started and enabled 23 | service: 24 | name: "{{ grafana_service }}" 25 | state: started 26 | enabled: yes 27 | -------------------------------------------------------------------------------- /ansible/roles/grafana/templates/grafana.conf.j2: -------------------------------------------------------------------------------- 1 | #jinja2: trim_blocks: True, lstrip_blocks: True 2 | # {{ ansible_managed }} 3 | {% if grafana_ini_vars is defined and grafana_ini_vars|length > 0 %} 4 | {% for block in grafana_ini_vars %} 5 | {% if block['name'] is defined %} 6 | [{{block['name']}}] 7 | {% endif %} 8 | {% for k,v in block.items() %} 9 | {% if k == 'vars' %} 10 | {% for confitem in v %} 11 | {% for key,val in confitem.items() %} 12 | {{key}} = {{val}} 13 | {% endfor %} 14 | {% endfor %} 15 | 16 | {% endif %} 17 | {% endfor %} 18 | {% endfor %} 19 | {% endif %} 20 | -------------------------------------------------------------------------------- /ansible/roles/grafana/templates/grafana.ini: -------------------------------------------------------------------------------- 1 | ##################### Grafana Configuration Example ##################### 2 | # 3 | # Everything has defaults so you only need to uncomment things you want to 4 | # change 5 | 6 | ; app_mode = production 7 | 8 | #################################### Paths #################################### 9 | [paths] 10 | # Path to where grafana can store temp files, sessions, and the sqlite3 db (if that is used) 11 | # 12 | ;data = /var/lib/grafana 13 | # 14 | # Directory where grafana can store logs 15 | # 16 | ;logs = /var/log/grafana 17 | 18 | #################################### Server #################################### 19 | [server] 20 | # Protocol (http or https) 21 | ;protocol = http 22 | 23 | # The ip address to bind to, empty will bind to all interfaces 24 | ;http_addr = 25 | 26 | # The http port to use 27 | ;http_port = 3000 28 | 29 | # The public facing domain name used to access grafana from a browser 30 | ;domain = localhost 31 | 32 | # Redirect to correct domain if host header does not match domain 33 | # Prevents DNS rebinding attacks 34 | ;enforce_domain = false 35 | 36 | # The full public facing url 37 | ;root_url = %(protocol)s://%(domain)s:%(http_port)s/ 38 | 39 | # Log web requests 40 | ;router_logging = false 41 | 42 | # the path relative working path 43 | ;static_root_path = public 44 | 45 | # enable gzip 46 | ;enable_gzip = false 47 | 48 | # https certs & key file 49 | ;cert_file = 50 | ;cert_key = 51 | 52 | #################################### Database #################################### 53 | [database] 54 | # Either "mysql", "postgres" or "sqlite3", it's your choice 55 | ;type = sqlite3 56 | ;host = 127.0.0.1:3306 57 | ;name = grafana 58 | ;user = root 59 | ;password = 60 | 61 | # For "postgres" only, either "disable", "require" or "verify-full" 62 | ;ssl_mode = disable 63 | 64 | # For "sqlite3" only, path relative to data_path setting 65 | ;path = grafana.db 66 | 67 | 
#################################### Session #################################### 68 | [session] 69 | # Either "memory", "file", "redis", "mysql", "postgresql", default is "file" 70 | ;provider = file 71 | 72 | # Provider config options 73 | # memory: not have any config yet 74 | # file: session dir path, is relative to grafana data_path 75 | # redis: config like redis server e.g. `addr=127.0.0.1:6379,pool_size=100,db=grafana` 76 | # mysql: go-sql-driver/mysql dsn config string, e.g. `user:password@tcp(127.0.0.1:3306)/database_name` 77 | # postgres: user=a password=b host=localhost port=5432 dbname=c sslmode=disable 78 | ;provider_config = sessions 79 | 80 | # Session cookie name 81 | ;cookie_name = grafana_sess 82 | 83 | # If you use session in https only, default is false 84 | ;cookie_secure = false 85 | 86 | # Session life time, default is 86400 87 | ;session_life_time = 86400 88 | 89 | #################################### Analytics #################################### 90 | [analytics] 91 | # Server reporting, sends usage counters to stats.grafana.org every 24 hours. 92 | # No ip addresses are being tracked, only simple counters to track 93 | # running instances, dashboard and error counts. It is very helpful to us. 94 | # Change this option to false to disable reporting. 95 | ;reporting_enabled = true 96 | 97 | # Google Analytics universal tracking code, only enabled if you specify an id here 98 | ;google_analytics_ua_id = 99 | 100 | #################################### Security #################################### 101 | [security] 102 | # default admin user, created on startup 103 | ;admin_user = admin 104 | 105 | # default admin password, can be changed before first start of grafana, or in profile settings 106 | ;admin_password = admin 107 | 108 | # used for signing 109 | ;secret_key = SW2YcwTIb9zpOOhoPsMm 110 | 111 | # Auto-login remember days 112 | ;login_remember_days = 7 113 | ;cookie_username = grafana_user 114 | ;cookie_remember_name = grafana_remember 115 | 116 | # disable gravatar profile images 117 | ;disable_gravatar = false 118 | 119 | #################################### Users #################################### 120 | [users] 121 | # disable user signup / registration 122 | ;allow_sign_up = true 123 | 124 | # Allow non admin users to create organizations 125 | ;allow_org_create = true 126 | 127 | # Set to true to automatically assign new users to the default organization (id 1) 128 | ;auto_assign_org = true 129 | 130 | # Default role new users will be automatically assigned (if disabled above is set to true) 131 | ;auto_assign_org_role = Viewer 132 | 133 | #################################### Anonymous Auth ########################## 134 | [auth.anonymous] 135 | # enable anonymous access 136 | ;enabled = false 137 | 138 | # specify organization name that should be used for unauthenticated users 139 | ;org_name = Main Org. 
140 | 141 | # specify role for unauthenticated users 142 | ;org_role = Viewer 143 | 144 | #################################### Github Auth ########################## 145 | [auth.github] 146 | ;enabled = false 147 | ;allow_sign_up = false 148 | ;client_id = some_id 149 | ;client_secret = some_secret 150 | ;scopes = user:email,read:org 151 | ;auth_url = https://github.com/login/oauth/authorize 152 | ;token_url = https://github.com/login/oauth/access_token 153 | ;api_url = https://api.github.com/user 154 | ;team_ids = 155 | ;allowed_domains = 156 | ;allowed_organizations = 157 | 158 | #################################### Google Auth ########################## 159 | [auth.google] 160 | ;enabled = false 161 | ;allow_sign_up = false 162 | ;client_id = some_client_id 163 | ;client_secret = some_client_secret 164 | ;scopes = https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/userinfo.email 165 | ;auth_url = https://accounts.google.com/o/oauth2/auth 166 | ;token_url = https://accounts.google.com/o/oauth2/token 167 | ;api_url = https://www.googleapis.com/oauth2/v1/userinfo 168 | ;allowed_domains = 169 | 170 | #################################### Auth Proxy ########################## 171 | [auth.proxy] 172 | ;enabled = false 173 | ;header_name = X-WEBAUTH-USER 174 | ;header_property = username 175 | ;auto_sign_up = true 176 | 177 | #################################### Basic Auth ########################## 178 | [auth.basic] 179 | ;enabled = true 180 | 181 | #################################### Auth LDAP ########################## 182 | [auth.ldap] 183 | ;enabled = false 184 | ;config_file = /etc/grafana/ldap.toml 185 | 186 | #################################### SMTP / Emailing ########################## 187 | [smtp] 188 | ;enabled = false 189 | ;host = localhost:25 190 | ;user = 191 | ;password = 192 | ;cert_file = 193 | ;key_file = 194 | ;skip_verify = false 195 | ;from_address = admin@grafana.localhost 196 | 197 | [emails] 198 | ;welcome_email_on_sign_up = false 199 | 200 | #################################### Logging ########################## 201 | [log] 202 | # Either "console", "file", default is "console" 203 | # Use comma to separate multiple modes, e.g. "console, file" 204 | ;mode = console, file 205 | 206 | # Buffer length of channel, keep it as it is if you don't know what it is. 
207 | ;buffer_len = 10000
208 | 
209 | # Either "Trace", "Debug", "Info", "Warn", "Error", "Critical", default is "Trace"
210 | ;level = Info
211 | 
212 | # For "console" mode only
213 | [log.console]
214 | ;level =
215 | 
216 | # For "file" mode only
217 | [log.file]
218 | ;level =
219 | # This enables automated log rotate(switch of following options), default is true
220 | ;log_rotate = true
221 | 
222 | # Max line number of single file, default is 1000000
223 | ;max_lines = 1000000
224 | 
225 | # Max size shift of single file, default is 28 means 1 << 28, 256MB
226 | ;max_lines_shift = 28
227 | 
228 | # Segment log daily, default is true
229 | ;daily_rotate = true
230 | 
231 | # Expired days of log file(delete after max days), default is 7
232 | ;max_days = 7
233 | 
234 | #################################### AMPQ Event Publisher ##########################
235 | [event_publisher]
236 | ;enabled = false
237 | ;rabbitmq_url = amqp://localhost/
238 | ;exchange = grafana_events
239 | 
240 | ;#################################### Dashboard JSON files ##########################
241 | [dashboards.json]
242 | ;enabled = false
243 | ;path = /var/lib/grafana/dashboards
--------------------------------------------------------------------------------
/ansible/roles/haproxy/defaults/main.yml:
--------------------------------------------------------------------------------
1 | ---
2 | 
3 | haproxy_version: "1.5"
4 | haproxy_apt_repo: False
5 | haproxy_apt_ppa: "ppa:vbernat/haproxy-{{ haproxy_version }}"
6 | 
--------------------------------------------------------------------------------
/ansible/roles/haproxy/tasks/main.yml:
--------------------------------------------------------------------------------
1 | ---
2 | 
3 | - name: Ensure apt repo exists when specified
4 |   when: haproxy_apt_repo
5 |   apt_repository:
6 |     repo: "{{ haproxy_apt_ppa }}"
7 |     update_cache: yes
8 | 
9 | - name: Install haproxy
10 |   apt:
11 |     name: "haproxy"
12 |     update_cache: yes
13 |     cache_valid_time: 3600
14 | 
--------------------------------------------------------------------------------
/ansible/roles/hostsfile/defaults/main.yml:
--------------------------------------------------------------------------------
1 | ---
2 | 
3 | hostsfile_iface: "ansible_eth1"
4 | 
--------------------------------------------------------------------------------
/ansible/roles/hostsfile/tasks/main.yml:
--------------------------------------------------------------------------------
1 | ---
2 | 
3 | - name: Ensure hostname is set to inventory hostname
4 |   hostname:
5 |     name: "{{ inventory_hostname }}"
6 | 
7 | - name: Ensure ubuntu 127.0.1.1 loopback is absent
8 |   lineinfile:
9 |     regexp: "^127\\.0\\.1\\.1\\s+.*"
10 |     state: absent
11 |     dest: "/etc/hosts"
12 | 
13 | - name: Ensure 127.0.0.1 loopback entry for hostname doesn't exist
14 |   lineinfile:
15 |     regexp: "^127\\.0\\.0\\.1\\s+{{ inventory_hostname }}.*"
16 |     state: absent
17 |     dest: "/etc/hosts"
18 | 
19 | - name: Ensure ubuntu 127.0.0.1 loopback only contains localhost
20 |   lineinfile:
21 |     line: "127.0.0.1 localhost"
22 |     regexp: "^127\\.0\\.0\\.1\\s+localhost$"
23 |     dest: "/etc/hosts"
24 | 
--------------------------------------------------------------------------------
/ansible/roles/nagios_checks/defaults/main.yml:
--------------------------------------------------------------------------------
1 | ---
2 | 
3 | nagios_checks_templates_dir: "/usr/lib/nagios/plugins/custom"
4 | nagios_checks_templates_list: []
5 | #nagios_checks_templates_list:
6 | #  - src: "check_bind.j2"
7 | #    dest:
"/usr/lib/nagios/plugins/custom/check_bind" 8 | -------------------------------------------------------------------------------- /ansible/roles/nagios_checks/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Ensure directory exists 4 | file: 5 | dest: "{{ nagios_checks_templates_dir }}" 6 | recurse: yes 7 | state: directory 8 | 9 | - name: Template specified files 10 | template: 11 | src: "{{ item.src }}" 12 | dest: "{{ item.dest }}" 13 | mode: 0777 14 | with_items: "{{ nagios_checks_templates_list }}" 15 | -------------------------------------------------------------------------------- /ansible/roles/nagios_checks/templates/check_bind.j2: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | # This program is free software; you can redistribute it and/or modify 4 | # it under the terms of the GNU General Public License as published by 5 | # the Free Software Foundation; either version 2 of the License, or 6 | # (at your option) any later version. 7 | # 8 | # This program is distributed in the hope that it will be useful, 9 | # but WITHOUT ANY WARRANTY; without even the implied warranty of 10 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 | # GNU General Public License for more details. 12 | # 13 | # You should have received a copy of the GNU General Public License 14 | # along with this program; if not, write to the Free Software 15 | # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA 16 | 17 | PROGNAME=`basename $0` 18 | VERSION="Version 1.3," 19 | AUTHOR="2009, Mike Adolphs (http://www.matejunkie.com/)" 20 | 21 | ST_OK=0 22 | ST_WR=1 23 | ST_CR=2 24 | ST_UK=3 25 | path_pid="/var/run/named" 26 | name_pid="named.pid" 27 | path_rndc="/usr/sbin" 28 | path_stats="/var/bind" 29 | path_tmp="/tmp" 30 | version=9.4 31 | pid_check=1 32 | 33 | print_version() { 34 | echo "$VERSION $AUTHOR" 35 | } 36 | 37 | print_help() { 38 | print_version $PROGNAME $VERSION 39 | echo "" 40 | echo "$PROGNAME is a Nagios plugin to check the bind daemon whether it's" 41 | echo "running via its pid file and then gets the statistics via rndc stats." 42 | echo "The user that run the script needs the ability to 'sudo rndc stats'!" 43 | echo "The timeframe in which the rndc stats output is updated is controlled" 44 | echo "by the check interval. The output shows amount of requests of various" 45 | echo "types occured during the last check interval." 46 | echo "The script itself is written sh-compliant and free software under the" 47 | echo "terms of the GPLv2 (or later)." 48 | echo "" 49 | echo "$PROGNAME -p/--path_pid /var/run/named -n/--name_pid named.pid" 50 | echo " -r/--path-rndc /usr/sbin -s/--path-stats /var/bind" 51 | echo " -V/--bind-version 9.3/9.4/9.5 -N" 52 | echo "" 53 | echo "Options:" 54 | echo " -p/--path-pid)" 55 | echo " Path where the pid file for bind is stored. You might need to" 56 | echo " alter this to your distribution's way of dealing with pid files." 57 | echo " Default is: /var/run/named" 58 | echo " -n/--name_pid)" 59 | echo " Name of the pid file. Default is: named.pid" 60 | echo " -r/--path-rndc)" 61 | echo " Path where the rndc binary is located. Default is: /usr/sbin" 62 | echo " -s/--path-stats)" 63 | echo " Path where the named.stats file is stored. Default is:" 64 | echo " /var/bind" 65 | echo " -t/--path-tmp)" 66 | echo " Path where the temporary named.stats excerpt is being stored." 
67 | echo " Default is: /tmp" 68 | echo " -V/--bind-version)" 69 | echo " Specifies the bind version you're running. Currently there's" 70 | echo " BIND 9.3, 9.4 and 9.5 supported. Default is: 9.4" 71 | echo " -N/--no-pid-check)" 72 | echo " If you don't want that the script checks for the pid file," 73 | echo " use this option. Default is: off" 74 | echo " -C/--chroot)" 75 | echo " If you're running BIND in a chroot environment, use this" 76 | echo " option to define a path to be chrooted. Please adjust also" 77 | echo " your sudo configuration to enable the Nagios user to chroot!" 78 | echo " Default is: /opt/chroot-bind" 79 | exit $ST_UK 80 | } 81 | 82 | while test -n "$1"; do 83 | case "$1" in 84 | -help|-h) 85 | print_help 86 | exit $ST_UK 87 | ;; 88 | --version|-v) 89 | print_version $PROGNAME $VERSION 90 | exit $ST_UK 91 | ;; 92 | --path-pid|-p) 93 | path_pid=$2 94 | shift 95 | ;; 96 | --name-pid|-n) 97 | name_pid=$2 98 | shift 99 | ;; 100 | --path-rndc|-r) 101 | path_rndc=$2 102 | shift 103 | ;; 104 | --path-stats|-s) 105 | path_stats=$2 106 | shift 107 | ;; 108 | --path-tmp|-t) 109 | path_tmp=$2 110 | shift 111 | ;; 112 | --bind-version|-V) 113 | version=$2 114 | shift 115 | ;; 116 | --no-check-pid|-N) 117 | pid_check=0 118 | ;; 119 | --chroot|-C) 120 | path_chroot=$2 121 | shift 122 | ;; 123 | *) 124 | echo "Unknown argument: $1" 125 | print_help 126 | exit $ST_UK 127 | ;; 128 | esac 129 | shift 130 | done 131 | 132 | check_pid() { 133 | if [ -f "$path_pid/$name_pid" ] 134 | then 135 | retval=0 136 | else 137 | retval=1 138 | fi 139 | } 140 | 141 | trigger_stats() { 142 | if [ -n "$path_chroot" ] 143 | then 144 | sudo chroot $path_chroot $path_rndc/rndc stats 145 | else 146 | sudo $path_rndc/rndc stats 147 | fi 148 | } 149 | 150 | copy_to_tmp() { 151 | tac $path_stats/named.stats | awk '/--- \([0-9]*\)/{p=1} p{print} /\+\+\+ \([0-9]*\)/{p=0;if (count++==1) exit}' > $path_tmp/named.stats.tmp 152 | } 153 | 154 | get_vals() { 155 | case ${version} in 156 | 9.3) 157 | succ_1st=`cat $path_tmp/named.stats.tmp | grep -m1 '^success [0-9]*' | sort -n | awk '{print $2}'` 158 | succ_2nd=`cat $path_tmp/named.stats.tmp | grep -o '^success [0-9]*' | sort -n | grep -m1 '^success [0-9]*' | awk '{print $2}'` 159 | ref_1st=`cat $path_tmp/named.stats.tmp | grep -m1 '^referral [0-9]*' | sort -n | awk '{print $2}'` 160 | ref_2nd=`cat $path_tmp/named.stats.tmp | grep -o '^referral [0-9]*' | sort -n | grep -m1 '^referral [0-9]*' | awk '{print $2}'` 161 | nxrr_1st=`cat $path_tmp/named.stats.tmp | grep -m1 '^nxrrset [0-9]*' | sort -n | awk '{print $2}'` 162 | nxrr_2nd=`cat $path_tmp/named.stats.tmp | grep -o '^nxrrset [0-9]*' | sort -n | grep -m1 '^nxrrset [0-9]*' | awk '{print $2}'` 163 | nxdom_1st=`cat $path_tmp/named.stats.tmp | grep -m1 '^nxdomain [0-9]*' | sort -n | awk '{print $2}'` 164 | nxdom_2nd=`cat $path_tmp/named.stats.tmp | grep -o '^nxdomain [0-9]*' | sort -n | grep -m1 '^nxdomain [0-9]*' | awk '{print $2}'` 165 | rec_1st=`cat $path_tmp/named.stats.tmp | grep -m1 '^recursion [0-9]*' | sort -n | awk '{print $2}'` 166 | rec_2nd=`cat $path_tmp/named.stats.tmp | grep -o '^recursion [0-9]*' | sort -n | grep -m1 '^recursion [0-9]*' | awk '{print $2}'` 167 | fail_1st=`cat $path_tmp/named.stats.tmp | grep -m1 '^failure [0-9]*' | sort -n | awk '{print $2}'` 168 | fail_2nd=`cat $path_tmp/named.stats.tmp | grep -o '^failure [0-9]*' | sort -n | grep -m1 '^failure [0-9]*' | awk '{print $2}'` 169 | ;; 170 | 9.4) 171 | succ_1st=`cat $path_tmp/named.stats.tmp | grep -m1 '^success [0-9]*' | sort -n | awk 
'{print $2}'` 172 | succ_2nd=`cat $path_tmp/named.stats.tmp | grep -o '^success [0-9]*' | sort -n | grep -m1 '^success [0-9]*' | awk '{print $2}'` 173 | ref_1st=`cat $path_tmp/named.stats.tmp | grep -m1 '^referral [0-9]*' | sort -n | awk '{print $2}'` 174 | ref_2nd=`cat $path_tmp/named.stats.tmp | grep -o '^referral [0-9]*' | sort -n | grep -m1 '^referral [0-9]*' | awk '{print $2}'` 175 | nxrr_1st=`cat $path_tmp/named.stats.tmp | grep -m1 '^nxrrset [0-9]*' | sort -n | awk '{print $2}'` 176 | nxrr_2nd=`cat $path_tmp/named.stats.tmp | grep -o '^nxrrset [0-9]*' | sort -n | grep -m1 '^nxrrset [0-9]*' | awk '{print $2}'` 177 | nxdom_1st=`cat $path_tmp/named.stats.tmp | grep -m1 '^nxdomain [0-9]*' | sort -n | awk '{print $2}'` 178 | nxdom_2nd=`cat $path_tmp/named.stats.tmp | grep -o '^nxdomain [0-9]*' | sort -n | grep -m1 '^nxdomain [0-9]*' | awk '{print $2}'` 179 | rec_1st=`cat $path_tmp/named.stats.tmp | grep -m1 '^recursion [0-9]*' | sort -n | awk '{print $2}'` 180 | rec_2nd=`cat $path_tmp/named.stats.tmp | grep -o '^recursion [0-9]*' | sort -n | grep -m1 '^recursion [0-9]*' | awk '{print $2}'` 181 | fail_1st=`cat $path_tmp/named.stats.tmp | grep -m1 '^failure [0-9]*' | sort -n | awk '{print $2}'` 182 | fail_2nd=`cat $path_tmp/named.stats.tmp | grep -o '^failure [0-9]*' | sort -n | grep -m1 '^failure [0-9]*' | awk '{print $2}'` 183 | dup_1st=`cat $path_tmp/named.stats.tmp | grep -m1 '^duplicate [0-9]*' | sort -n | awk '{print $2}'` 184 | dup_2nd=`cat $path_tmp/named.stats.tmp | grep -o '^duplicate [0-9]*' | sort -n | grep -m1 '^duplicate [0-9]*' | awk '{print $2}'` 185 | drop_1st=`cat $path_tmp/named.stats.tmp | grep -m1 '^dropped [0-9]*' | sort -n | awk '{print $2}'` 186 | drop_2nd=`cat $path_tmp/named.stats.tmp | grep -o '^dropped [0-9]*' | sort -n | grep -m1 '^dropped [0-9]*' | awk '{print $2}'` 187 | ;; 188 | 9.5) 189 | succ_1st=`grep 'resulted in successful answer' $path_tmp/named.stats.tmp | awk '{ print $1 }' | grep -m1 ''` 190 | succ_2nd=`grep 'resulted in successful answer' $path_tmp/named.stats.tmp | awk '{ print $1 }' | sort -n | grep -m1 ''` 191 | ref_1st=`grep 'resulted in referral' $path_tmp/named.stats.tmp | awk '{ print $1 }' | grep -m1 ''` 192 | ref_2nd=`grep 'resulted in referral' $path_tmp/named.stats.tmp | awk '{ print $1 }' | sort -n | grep -m1 ''` 193 | nxrr_1st=`grep 'resulted in nxrrset' $path_tmp/named.stats.tmp | awk '{ print $1 }' | grep -m1 ''` 194 | nxrr_2nd=`grep 'resulted in nxrrset' $path_tmp/named.stats.tmp | awk '{ print $1 }' | sort -n | grep -m1 ''` 195 | nxdom_1st=`grep 'resulted in NXDOMAIN' $path_tmp/named.stats.tmp | awk '{ print $1 }' | grep -m1 ''` 196 | nxdom_2nd=`grep 'resulted in NXDOMAIN' $path_tmp/named.stats.tmp | awk '{ print $1 }' | sort -n | grep -m1 ''` 197 | rec_1st=`grep 'caused recursion' $path_tmp/named.stats.tmp | awk '{ print $1 }' | grep -m1 ''` 198 | rec_2nd=`grep 'caused recursion' $path_tmp/named.stats.tmp | awk '{ print $1 }' | sort -n | grep -m1 ''` 199 | fail_1st=`grep 'resulted in SERVFAIL' $path_tmp/named.stats.tmp | awk '{ print $1 }' | grep -m1 ''` 200 | fail_2nd=`grep 'resulted in SERVFAIL' $path_tmp/named.stats.tmp | awk '{ print $1 }' | sort -n | grep -m1 ''` 201 | dup_1st=`grep 'duplicate queries received' $path_tmp/named.stats.tmp | awk '{ print $1 }' | grep -m1 ''` 202 | dup_2nd=`grep 'duplicate queries received' $path_tmp/named.stats.tmp | awk '{ print $1 }' | sort -n | grep -m1 ''` 203 | ;; 204 | esac 205 | 206 | if [ "$succ_1st" == '' ] 207 | then 208 | success=0 209 | else 210 | success=`expr $succ_1st - $succ_2nd` 
211 | fi 212 | if [ "$ref_1st" == '' ] 213 | then 214 | referral=0 215 | else 216 | referral=`expr $ref_1st - $ref_2nd` 217 | fi 218 | if [ "$nxrr_1st" == '' ] 219 | then 220 | nxrrset=0 221 | else 222 | nxrrset=`expr $nxrr_1st - $nxrr_2nd` 223 | fi 224 | if [ "$nxdom_1st" == '' ] 225 | then 226 | nxdomain=0 227 | else 228 | nxdomain=`expr $nxdom_1st - $nxdom_2nd` 229 | fi 230 | if [ "$rec_1st" == '' ] 231 | then 232 | recursion=0 233 | else 234 | recursion=`expr $rec_1st - $rec_2nd` 235 | fi 236 | if [ "$fail_1st" == '' ] 237 | then 238 | failure=0 239 | else 240 | failure=`expr $fail_1st - $fail_2nd` 241 | fi 242 | if [ "$dup_1st" == '' ] 243 | then 244 | duplicate=0 245 | else 246 | duplicate=`expr $dup_1st - $dup_2nd` 247 | fi 248 | if [ "$drop_1st" == '' ] 249 | then 250 | dropped=0 251 | else 252 | dropped=`expr $drop_1st - $drop_2nd` 253 | fi 254 | } 255 | 256 | get_perfdata() { 257 | case ${version} in 258 | 9.3) 259 | perfdata=`echo "'success'=$success 'referral'=$referral 'nxrrset'=$nxrrset 'nxdomain'=$nxdomain 'recursion'=$recursion 'failure'=$failure"` 260 | ;; 261 | *) 262 | perfdata=`echo "'success'=$success 'referral'=$referral 'nxrrset'=$nxrrset 'nxdomain'=$nxdomain 'recursion'=$recursion 'failure'=$failure 'duplicate'=$duplicate 'dropped'=$dropped"` 263 | ;; 264 | esac 265 | } 266 | 267 | if [ ${pid_check} == 1 ] 268 | then 269 | check_pid 270 | if [ "$retval" = 1 ] 271 | then 272 | echo "There's no pid file for bind9. Is it actually running?" 273 | exit $ST_CR 274 | fi 275 | fi 276 | 277 | trigger_stats 278 | copy_to_tmp 279 | get_vals 280 | get_perfdata 281 | 282 | echo "Bind9 is running. $success successfull requests, $referral referrals, $nxdomain nxdomains since last check. | $perfdata" 283 | exit $ST_OK 284 | -------------------------------------------------------------------------------- /ansible/roles/nomad/defaults/main.yml: -------------------------------------------------------------------------------- 1 | nomad_user: "root" 2 | nomad_group: "root" 3 | 4 | nomad_version: "0.3.2" 5 | nomad_url: "https://releases.hashicorp.com/nomad/{{ nomad_version }}/nomad_{{ nomad_version }}_linux_amd64.zip" 6 | 7 | nomad_apt_deps: 8 | - name: "unzip" 9 | 10 | nomad_type: "server" 11 | nomad_bin_path: "/usr/bin" 12 | nomad_executable: "{{ nomad_bin_path }}/nomad" 13 | 14 | nomad_init_path: "/etc/init" 15 | nomad_conf_path: "/etc/nomad" 16 | 17 | nomad_desired_state: "started" 18 | 19 | nomad_server_list: [] 20 | nomad_server_templates: 21 | - src: "nomad_server.json" 22 | dest: "/etc/nomad/nomad_server.json" 23 | - src: "nomad_server.conf" 24 | dest: "/etc/init/nomad_server.conf" 25 | 26 | nomad_client_templates: 27 | - src: "nomad_client.json" 28 | dest: "/etc/nomad/nomad_client.json" 29 | - src: "nomad_client.conf" 30 | dest: "/etc/init/nomad_client.conf" 31 | 32 | nomad_network_iface: "eth0" 33 | nomad_advertise_addr: "0.0.0.0" 34 | nomad_bind_address: "0.0.0.0" 35 | nomad_client_bind_address: "0.0.0.0" 36 | nomad_client_addr: "0.0.0.0" 37 | nomad_datadir: "/var/lib/nomad" 38 | nomad_datacenter: "int1" 39 | nomad_enable_syslog: "true" 40 | nomad_log_level: "INFO" 41 | nomad_encrypt: "KnqifJT6qkF0X+Zx9spnjg==" 42 | -------------------------------------------------------------------------------- /ansible/roles/nomad/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Ensure Nomad conf paths exist 4 | file: 5 | dest: "{{ item.dest }}" 6 | state: "{{ item.state | default('directory') }}" 7 | recurse: "{{ 
item.recurse | default('yes') }}"
8 |   with_items:
9 |     - dest: "{{ nomad_conf_path }}"
10 |     - dest: "{{ nomad_datadir }}"
11 | 
12 | - name: Ensure Nomad apt dependencies are met
13 |   apt:
14 |     name: "{{ item.name }}"
15 |     update_cache: "{{ item.update_cache | default('yes') }}"
16 |     cache_valid_time: "{{ item.cache_valid_time | default('3600') }}"
17 |   with_items: "{{ nomad_apt_deps }}"
18 | 
19 | - name: Download Nomad and unzip to {{ nomad_executable }}
20 |   unarchive:
21 |     src: "{{ item.src }}"
22 |     dest: "{{ item.dest }}"
23 |     copy: "no"
24 |     mode: "0775"
25 |     creates: "{{ item.creates }}"
26 |   with_items:
27 |     - src: "{{ nomad_url }}"
28 |       dest: "{{ nomad_bin_path }}"
29 |       creates: "{{ nomad_executable }}"
30 | 
31 | - name: Template Nomad server files when server
32 |   when: nomad_type == 'server'
33 |   template:
34 |     src: "{{ item.src }}"
35 |     dest: "{{ item.dest }}"
36 |     mode: "{{ item.mode | default('0644') }}"
37 |     owner: "{{ item.owner | default('root') }}"
38 |     group: "{{ item.group | default('root') }}"
39 |   with_items: "{{ nomad_server_templates }}"
40 | 
41 | - name: Template Nomad client files when client
42 |   when: nomad_type == 'client'
43 |   template:
44 |     src: "{{ item.src }}"
45 |     dest: "{{ item.dest }}"
46 |     mode: "{{ item.mode | default('0644') }}"
47 |     owner: "{{ item.owner | default('root') }}"
48 |     group: "{{ item.group | default('root') }}"
49 |   with_items: "{{ nomad_client_templates }}"
50 | 
51 | - name: Ensure Nomad in desired state
52 |   service:
53 |     name: "nomad_{{ nomad_type }}"
54 |     state: "{{ nomad_desired_state }}"
55 |     pattern: "{{ nomad_init_path }}/nomad_{{ nomad_type }}"
--------------------------------------------------------------------------------
/ansible/roles/nomad/templates/Makefile:
--------------------------------------------------------------------------------
1 | # Each recipe line runs in its own shell, so a bare `cd` on its own line has no
2 | # effect on the lines that follow; chain it with `&&` or use absolute paths.
3 | deps:
4 | 	cd /vagrant/nomad && \
5 | 	wget -q -nc https://releases.hashicorp.com/nomad/0.3.1/nomad_0.3.1_linux_amd64.zip && \
6 | 	wget -q -nc https://github.com/cbednarski/hostess/releases/download/v0.1.0/hostess_linux_amd64 && \
7 | 	apt-get update -qq && \
8 | 	apt-get install -y unzip && \
9 | 	unzip -o nomad_0.3.1_linux_amd64.zip
10 | 
11 | install:
12 | 	# Install hostess so we can edit the hosts file
13 | 	cp /vagrant/nomad/hostess_linux_amd64 /usr/local/bin/hostess
14 | 	chmod +x /usr/local/bin/hostess
15 | 
16 | 	# Get IP address of the current machine from eth1
17 | 	ifconfig eth1 | grep "inet addr" | awk 'BEGIN { FS = "[ :]+" }{print $$4}' > /tmp/self.ip
18 | 	#hostess add consul `cat /tmp/self.ip`
19 | 
20 | 	# Install nomad binaries
21 | 	stop nomad || true
22 | 	cp /vagrant/nomad/nomad /usr/local/bin/nomad
23 | 
24 | 	# Add nomad configuration
25 | 	mkdir -p /etc/nomad
26 | 
27 | install-server:
28 | 	sed 's/ADDR/'`cat /tmp/self.ip`'/g' /vagrant/nomad/nomad-server.json > /etc/nomad/nomad.json # we also change bind addr due to https://github.com/hashicorp/nomad/issues/186
29 | 	# Setup nomad service and start it
30 | 	cp /vagrant/nomad/nomad.upstart.conf /etc/init/nomad.conf
31 | 	restart nomad || start nomad
32 | 	sleep 1
33 | 
34 | install-client:
35 | 	sed 's/ADDR/'`cat /tmp/self.ip`'/g' /vagrant/nomad/nomad-client.json > /etc/nomad/nomad.json
36 | 	# Setup nomad service and start it
37 | 	cp /vagrant/nomad/nomad.upstart.conf /etc/init/nomad.conf
38 | 	restart nomad || start nomad
39 | 	sleep 1
40 | 
--------------------------------------------------------------------------------
/ansible/roles/nomad/templates/nomad_client.conf:
-------------------------------------------------------------------------------- 1 | description "Nomad client process" 2 | 3 | start on (local-filesystems and net-device-up IFACE!={{ nomad_network_iface }}) 4 | stop on runlevel [!12345] 5 | 6 | respawn 7 | 8 | setuid {{ nomad_user }} 9 | setgid {{ nomad_group }} 10 | 11 | exec {{ nomad_executable }} agent -config {{ nomad_conf_path }} 12 | -------------------------------------------------------------------------------- /ansible/roles/nomad/templates/nomad_client.json: -------------------------------------------------------------------------------- 1 | { 2 | "bind_addr": "{{ nomad_bind_address }}", 3 | "data_dir": "{{ nomad_datadir }}", 4 | "datacenter": "{{ nomad_datacenter }}", 5 | "enable_syslog": {{ nomad_enable_syslog }}, 6 | "syslog_facility": "LOCAL0", 7 | "log_level": "{{ nomad_log_level }}", 8 | "client": { 9 | "enabled": true, 10 | "network_interface": "{{ nomad_network_iface }}", 11 | "servers": {{ nomad_server_list | to_json }} 12 | } 13 | } 14 | -------------------------------------------------------------------------------- /ansible/roles/nomad/templates/nomad_server.conf: -------------------------------------------------------------------------------- 1 | description "Nomad server process" 2 | 3 | start on (local-filesystems and net-device-up IFACE!={{ nomad_network_iface }}) 4 | stop on runlevel [!12345] 5 | 6 | respawn 7 | 8 | setuid {{ nomad_user }} 9 | setgid {{ nomad_group }} 10 | 11 | exec {{ nomad_executable }} agent -config {{ nomad_conf_path }} 12 | -------------------------------------------------------------------------------- /ansible/roles/nomad/templates/nomad_server.json: -------------------------------------------------------------------------------- 1 | { 2 | "bind_addr": "{{ nomad_bind_address }}", 3 | "advertise": { 4 | "rpc": "{{ nomad_advertise_addr }}:4647", 5 | "serf": "{{ nomad_advertise_addr }}:4648" 6 | }, 7 | "data_dir": "{{ nomad_datadir }}", 8 | "datacenter": "{{ nomad_datacenter }}", 9 | "enable_syslog": {{ nomad_enable_syslog }}, 10 | "syslog_facility": "LOCAL0", 11 | "log_level": "{{ nomad_log_level }}", 12 | "server": { 13 | "bootstrap_expect": {{ nomad_server_list | length }}, 14 | "enabled": true, 15 | "start_join": {{ nomad_server_list | to_json }} 16 | } 17 | } 18 | -------------------------------------------------------------------------------- /ansible/roles/openssl/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | openssl_apt_package_name: "openssl" 4 | openssl_base_dir: "/etc/ssl" 5 | openssl_cert_ca_base_dir: "{{ openssl_base_dir }}/certs" 6 | openssl_cert_private_base: "{{ openssl_base_dir }}/private" 7 | 8 | #openssl_generator_cert_name: "mycert" 9 | #openssl_generator_cert_domain: "*.dev.lan" 10 | openssl_generator_complexity: "2048" 11 | openssl_generator_cert_name: "{{ inventory_hostname }}" 12 | openssl_generator_cert_domain: "{{ inventory_hostname }}" 13 | openssl_generator_passphrase: "" 14 | openssl_generator_country: "US" 15 | openssl_generator_state: "Colorado" 16 | openssl_generator_locality: "Denver" 17 | openssl_generator_org: "" 18 | openssl_generator_org_unit: "" 19 | openssl_generator_email_address: "" 20 | 21 | openssl_generator_script_dir: "/usr/bin" 22 | openssl_generator_script_name: "ssl_cert_generator" 23 | 24 | openssl_create_pem: "yes" 25 | openssl_cert_dir: "{{ openssl_base_dir }}/{{ openssl_generator_cert_name }}" 26 | openssl_pem_name: "{{ openssl_cert_dir }}/{{ openssl_generator_cert_name }}" 27
| -------------------------------------------------------------------------------- /ansible/roles/openssl/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Install openssl 4 | apt: 5 | name: "{{ openssl_apt_package_name }}" 6 | state: present 7 | update_cache: yes 8 | cache_valid_time: 3600 9 | 10 | - name: Ensure certificate directories are present 11 | file: 12 | dest: "{{ item.dest }}" 13 | state: "{{ item.state | default('directory') }}" 14 | recurse: "{{ item.recurse | default('yes') }}" 15 | owner: "{{ item.owner | default('root') }}" 16 | group: "{{ item.group | default('root') }}" 17 | with_items: 18 | - dest: "{{ openssl_base_dir }}" 19 | - dest: "{{ openssl_cert_dir }}" 20 | - dest: "{{ openssl_cert_ca_base_dir }}" 21 | - dest: "{{ openssl_cert_private_base }}" 22 | 23 | - name: Ensure pem dir is present if specified 24 | file: 25 | dest: "{{ item.dest }}" 26 | state: "{{ item.state | default('directory') }}" 27 | recurse: "{{ item.recurse | default('yes') }}" 28 | owner: "{{ item.owner | default('root') }}" 29 | group: "{{ item.group | default('root') }}" 30 | with_items: 31 | - dest: "{{ openssl_cert_dir }}" 32 | when: openssl_create_pem is defined and openssl_create_pem == 'yes' 33 | 34 | - name: Ensure ssl cert generator is created 35 | template: 36 | src: "{{ openssl_generator_script_name }}" 37 | dest: "{{ openssl_generator_script_dir }}/{{ openssl_generator_script_name }}" 38 | mode: 0755 39 | 40 | - name: Ensure ssl cert created 41 | shell: "{{ openssl_generator_script_dir }}/{{ openssl_generator_script_name }}" 42 | args: 43 | creates: "{{ openssl_cert_dir }}/{{ openssl_generator_cert_name }}.crt" 44 | 45 | - name: Ensure pem created if specified 46 | shell: "cat {{ openssl_cert_dir }}/{{ openssl_generator_cert_name }}.crt {{ openssl_cert_dir }}/{{ openssl_generator_cert_name }}.key | tee {{ openssl_cert_dir }}/{{ openssl_generator_cert_name }}.pem" 47 | args: 48 | creates: "{{ openssl_cert_dir }}/{{ openssl_generator_cert_name }}.pem" 49 | when: openssl_create_pem is defined and openssl_create_pem == 'yes' 50 | -------------------------------------------------------------------------------- /ansible/roles/openssl/templates/ssl_cert_generator: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # Specify where we will install 4 | # the certificate 5 | SSL_DIR="{{ openssl_base_dir }}/{{ openssl_generator_cert_name }}" 6 | 7 | # Set the wildcarded domain 8 | # we want to use 9 | DOMAIN="{{ openssl_generator_cert_domain }}" 10 | 11 | # A blank passphrase 12 | PASSPHRASE="{{ openssl_generator_passphrase }}" 13 | 14 | # Set our CSR variables 15 | SUBJ=" 16 | C={{ openssl_generator_country }} 17 | ST={{ openssl_generator_state }} 18 | O={{ openssl_generator_org }} 19 | localityName={{ openssl_generator_locality }} 20 | commonName=$DOMAIN 21 | organizationalUnitName={{ openssl_generator_org_unit }} 22 | emailAddress={{ openssl_generator_email_address }} 23 | " 24 | 25 | # Create our SSL directory 26 | # in case it doesn't exist 27 | sudo mkdir -p "$SSL_DIR" 28 | 29 | # Generate our Private Key, CSR and Certificate 30 | sudo openssl genrsa -out "$SSL_DIR/{{ openssl_generator_cert_name }}.key" {{ openssl_generator_complexity }} 31 | sudo openssl req -new -subj "$(echo -n "$SUBJ" | tr "\n" "/")" -key "$SSL_DIR/{{ openssl_generator_cert_name }}.key" -out "$SSL_DIR/{{ openssl_generator_cert_name }}.csr" -passin "pass:$PASSPHRASE" 32 | sudo openssl x509 -req -days 365 -in
"$SSL_DIR/{{ openssl_generator_cert_name }}.csr" -signkey "$SSL_DIR/{{ openssl_generator_cert_name }}.key" -out "$SSL_DIR/{{ openssl_generator_cert_name }}.crt" -------------------------------------------------------------------------------- /ansible/roles/prometheus_alertmanager/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | prometheus_alertmanager_version: "0.8.0" 4 | prometheus_alertmanager_bin_dir: "/usr/local/bin" 5 | prometheus_alertmanager_config_dir: "/etc/prometheus_alertmanager" 6 | prometheus_alertmanager_config_file: "{{ prometheus_alertmanager_config_dir }}/config.yml" 7 | prometheus_alertmanager_config_hash: 8 | global: 9 | smtp_smarthost: '{{ inventory_hostname }}:587' 10 | smtp_from: 'alertmanager@{{ inventory_hostname }}' 11 | route: 12 | receiver: 'team-X-mails' 13 | group_by: ['alertname', 'cluster'] 14 | group_wait: 30s 15 | group_interval: 5m 16 | repeat_interval: 3h 17 | routes: 18 | - match_re: 19 | service: ^(prometheus|foo2|baz)$ 20 | receiver: team-X-mails 21 | receivers: 22 | - name: 'team-X-mails' 23 | email_configs: 24 | - to: 'team-X+cjalinsky@altitudedigital.com' 25 | require_tls: false 26 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_alertmanager/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: reload systemd-daemon 4 | shell: systemctl daemon-reload 5 | 6 | - name: restart_alertmanager_14.04 7 | service: 8 | name: alertmanager 9 | state: restarted 10 | 11 | - name: restart_alertmanager_16.04 12 | systemd: 13 | name: alertmanager 14 | state: restarted 15 | daemon_reload: yes 16 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_alertmanager/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Create Prometheus Alert Manager release directory 4 | file: 5 | dest: "{{ item }}" 6 | state: directory 7 | recurse: yes 8 | with_items: 9 | - "/opt/prometheus_alertmanager_v{{ prometheus_alertmanager_version }}" 10 | - "{{ prometheus_alertmanager_config_dir }}" 11 | 12 | - name: Install dependencies thru apt 13 | apt: 14 | name: "{{ item }}" 15 | update_cache: yes 16 | cache_valid_time: 3600 17 | with_items: 18 | - "build-essential" 19 | - "libc6-dev" 20 | 21 | - name: Download Prometheus Alert Manager release tarball 22 | get_url: 23 | url: "https://github.com/prometheus/alertmanager/releases/download/v{{ prometheus_alertmanager_version }}/alertmanager-{{ prometheus_alertmanager_version }}.linux-amd64.tar.gz" 24 | dest: "/opt/prometheus_alertmanager_v{{ prometheus_alertmanager_version }}/alertmanager-{{ prometheus_alertmanager_version }}.linux-amd64.tar.gz" 25 | 26 | - name: Unarchive Prometheus tarball 27 | unarchive: 28 | src: "/opt/prometheus_alertmanager_v{{ prometheus_alertmanager_version }}/alertmanager-{{ prometheus_alertmanager_version }}.linux-amd64.tar.gz" 29 | dest: "/opt/prometheus_alertmanager_v{{ prometheus_alertmanager_version }}/" 30 | remote_src: yes 31 | creates: "/opt/prometheus_alertmanager_v{{ prometheus_alertmanager_version }}/alertmanager-{{ prometheus_alertmanager_version }}.linux-amd64/alertmanager" 32 | 33 | - name: Ensure 775 permissions on Prometheus Alert Manager binary 34 | file: 35 | dest: "/opt/prometheus_alertmanager_v{{ prometheus_alertmanager_version }}/alertmanager-{{ prometheus_alertmanager_version 
}}.linux-amd64/alertmanager" 36 | mode: "0775" 37 | 38 | - name: Symlink Prometheus Alert Manager binary to exec dir 39 | file: 40 | src: "/opt/prometheus_alertmanager_v{{ prometheus_alertmanager_version }}/alertmanager-{{ prometheus_alertmanager_version }}.linux-amd64/alertmanager" 41 | dest: "/usr/local/bin/alertmanager" 42 | state: link 43 | 44 | - name: Fail when expected and actual alertmanager versions do not match 45 | shell: alertmanager -version 46 | register: alertmanager_version_output 47 | failed_when: prometheus_alertmanager_version not in alertmanager_version_output.stdout 48 | 49 | - name: Template config file from prometheus_alertmanager_config_hash variable 50 | copy: 51 | content: "{{ prometheus_alertmanager_config_hash | to_nice_yaml }}" 52 | dest: "{{ prometheus_alertmanager_config_file }}" 53 | 54 | - name: Include xenial when distro greater than 14.04 55 | when: ansible_distribution_version|version_compare('14.04','>') 56 | include: xenial.yml 57 | 58 | - name: Include trusty when distro less than 16.04 59 | when: ansible_distribution_version|version_compare('16.04','<') 60 | include: trusty.yml 61 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_alertmanager/tasks/trusty.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Create Alert Manager upstart directory 4 | file: 5 | dest: "/etc/init" 6 | state: directory 7 | recurse: yes 8 | 9 | - name: Template Alert Manager upstart service file 10 | register: upstart_service_template 11 | template: 12 | src: "alertmanager.conf.j2" 13 | dest: "/etc/init/alertmanager.conf" 14 | notify: 15 | - "restart_alertmanager_{{ ansible_distribution_version }}" 16 | 17 | - name: Ensure alertmanager service started and enabled 18 | service: 19 | name: alertmanager 20 | enabled: yes 21 | state: started 22 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_alertmanager/tasks/xenial.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Create Alert Manager systemd directory 4 | file: 5 | dest: "/etc/systemd/system" 6 | state: directory 7 | recurse: yes 8 | 9 | - name: Template Alert Manager systemd service file 10 | register: systemd_service_template 11 | template: 12 | src: "alertmanager.service.j2" 13 | dest: "/etc/systemd/system/alertmanager.service" 14 | notify: 15 | - "restart_alertmanager_{{ ansible_distribution_version }}" 16 | 17 | - name: Ensure alertmanager service started and enabled 18 | systemd: 19 | name: alertmanager 20 | state: started 21 | enabled: yes 22 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_alertmanager/templates/alertmanager.conf.j2: -------------------------------------------------------------------------------- 1 | description "Prometheus Alert Manager process" 2 | 3 | start on (local-filesystems) 4 | stop on runlevel [!12345] 5 | 6 | respawn 7 | 8 | exec {{ prometheus_alertmanager_bin_dir }}/alertmanager -config.file={{ prometheus_alertmanager_config_file }} -------------------------------------------------------------------------------- /ansible/roles/prometheus_alertmanager/templates/alertmanager.service.j2: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Alert Manager 3 | Documentation=https://github.com/prometheus/alertmanager 4 | 5 | [Service] 6 | ExecStart={{
prometheus_alertmanager_bin_dir }}/alertmanager \ 7 | -config.file={{ prometheus_alertmanager_config_file }} 8 | Restart=on-failure 9 | RestartSec=5 10 | 11 | [Install] 12 | WantedBy=multi-user.target -------------------------------------------------------------------------------- /ansible/roles/prometheus_consul_exporter/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | prometheus_consul_exporter_version: "0.3.0" 4 | prometheus_consul_exporter_bin_dir: "/usr/local/bin" 5 | prometheus_consul_exporter_consul_server: "http://{{ inventory_hostname }}:8500" 6 | prometheus_consul_exporter_kv_prefix: "/" 7 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_consul_exporter/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: restart_consul_exporter_14.04 4 | service: 5 | name: consul_exporter 6 | state: restarted 7 | 8 | - name: restart_consul_exporter_16.04 9 | systemd: 10 | name: consul_exporter 11 | state: restarted 12 | daemon_reload: yes 13 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_consul_exporter/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Unarchive Prometheus Consul Exporter release tarball 4 | unarchive: 5 | src: "https://github.com/prometheus/consul_exporter/releases/download/v{{ prometheus_consul_exporter_version }}/consul_exporter-{{ prometheus_consul_exporter_version }}.linux-amd64.tar.gz" 6 | dest: "/opt" 7 | remote_src: yes 8 | creates: "/opt/consul_exporter-{{ prometheus_consul_exporter_version }}.linux-amd64/consul_exporter" 9 | 10 | - name: Ensure 775 permissions on Prometheus Consul Exporter binary 11 | file: 12 | dest: "/opt/consul_exporter-{{ prometheus_consul_exporter_version }}.linux-amd64/consul_exporter" 13 | mode: "0775" 14 | 15 | - name: Symlink Prometheus Consul Exporter binary to exec dir 16 | file: 17 | src: "/opt/consul_exporter-{{ prometheus_consul_exporter_version }}.linux-amd64/consul_exporter" 18 | dest: "/usr/local/bin/consul_exporter" 19 | state: link 20 | 21 | - name: Include xenial when distro greater than 14.04 22 | when: ansible_distribution_version|version_compare('14.04','>') 23 | include: xenial.yml 24 | 25 | - name: Include trusty when distro less than 16.04 26 | when: ansible_distribution_version|version_compare('16.04','<') 27 | include: trusty.yml 28 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_consul_exporter/tasks/trusty.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Create Consul Exporter upstart directory 4 | file: 5 | dest: "/etc/init" 6 | state: directory 7 | recurse: yes 8 | 9 | - name: Template Consul Exporter upstart service file 10 | register: upstart_service_template 11 | template: 12 | src: "consul_exporter.conf.j2" 13 | dest: "/etc/init/consul_exporter.conf" 14 | notify: 15 | - "restart_consul_exporter_{{ ansible_distribution_version }}" 16 | 17 | - name: Ensure consul_exporter service started and enabled 18 | service: 19 | name: consul_exporter 20 | enabled: yes 21 | state: started 22 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_consul_exporter/tasks/xenial.yml:
-------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Create Consul Exporter systemd directory 4 | file: 5 | dest: "/etc/systemd/system" 6 | state: directory 7 | recurse: yes 8 | 9 | - name: Template Consul Exporter systemd service file 10 | register: systemd_service_template 11 | template: 12 | src: "consul_exporter.service.j2" 13 | dest: "/etc/systemd/system/consul_exporter.service" 14 | notify: 15 | - "restart_consul_exporter_{{ ansible_distribution_version }}" 16 | 17 | - name: Ensure consul_exporter service started and enabled 18 | systemd: 19 | name: consul_exporter 20 | state: started 21 | enabled: yes 22 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_consul_exporter/templates/consul_exporter.conf.j2: -------------------------------------------------------------------------------- 1 | description "Prometheus Consul Exporter process" 2 | 3 | start on (local-filesystems) 4 | stop on runlevel [!12345] 5 | 6 | respawn 7 | 8 | exec {{ prometheus_consul_exporter_bin_dir }}/consul_exporter -consul.server={{ prometheus_consul_exporter_consul_server }} -kv.prefix={{ prometheus_consul_exporter_kv_prefix }} -------------------------------------------------------------------------------- /ansible/roles/prometheus_consul_exporter/templates/consul_exporter.service.j2: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Consul Exporter 3 | Documentation=https://github.com/prometheus/consul_exporter 4 | 5 | [Service] 6 | ExecStart={{ prometheus_consul_exporter_bin_dir }}/consul_exporter -consul.server={{ prometheus_consul_exporter_consul_server }} -kv.prefix={{ prometheus_consul_exporter_kv_prefix }} 7 | Restart=on-failure 8 | RestartSec=5 9 | 10 | [Install] 11 | WantedBy=multi-user.target -------------------------------------------------------------------------------- /ansible/roles/prometheus_node_exporter/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | prometheus_node_exporter_version: "0.14.0" 4 | prometheus_node_exporter_bin_dir: "/usr/local/bin" 5 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_node_exporter/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: restart_node_exporter_14.04 4 | service: 5 | name: node_exporter 6 | state: restarted 7 | 8 | - name: restart_node_exporter_16.04 9 | systemd: 10 | name: node_exporter 11 | state: restarted 12 | daemon_reload: yes 13 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_node_exporter/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Create Prometheus Node Exporter release directory 4 | file: 5 | dest: "{{ item }}" 6 | state: directory 7 | recurse: yes 8 | with_items: 9 | - "/opt/prometheus_node_exporter_{{ prometheus_node_exporter_version }}" 10 | 11 | - name: Download Prometheus Node Exporter release tarball 12 | get_url: 13 | url: "https://github.com/prometheus/node_exporter/releases/download/v{{ prometheus_node_exporter_version }}/node_exporter-{{ prometheus_node_exporter_version }}.linux-amd64.tar.gz" 14 | dest: "/opt/prometheus_node_exporter_{{ prometheus_node_exporter_version }}/node_exporter-{{ prometheus_node_exporter_version }}.linux-amd64.tar.gz" 15 | 16 | - name: Unarchive Prometheus Node Exporter tarball 17 | unarchive: 18 | src: "/opt/prometheus_node_exporter_{{
prometheus_node_exporter_version }}/node_exporter-{{ prometheus_node_exporter_version }}.linux-amd64.tar.gz" 19 | dest: "/opt/prometheus_node_exporter_{{ prometheus_node_exporter_version }}/" 20 | remote_src: yes 21 | creates: "/opt/prometheus_node_exporter_{{ prometheus_node_exporter_version }}/node_exporter-{{ prometheus_node_exporter_version }}.linux-amd64/node_exporter" 22 | 23 | - name: Ensure 775 permissions on Prometheus Node Exporter binary 24 | file: 25 | dest: "/opt/prometheus_node_exporter_{{ prometheus_node_exporter_version }}/node_exporter-{{ prometheus_node_exporter_version }}.linux-amd64/node_exporter" 26 | mode: "0775" 27 | 28 | - name: Symlink Prometheus Node Exporter binary to exec dir 29 | file: 30 | src: "/opt/prometheus_node_exporter_{{ prometheus_node_exporter_version }}/node_exporter-{{ prometheus_node_exporter_version }}.linux-amd64/node_exporter" 31 | dest: "/usr/local/bin/node_exporter" 32 | state: link 33 | 34 | - name: Fail when expected and actual node_exporter versions do not match 35 | shell: node_exporter -version 36 | register: node_exporter_version_output 37 | failed_when: prometheus_node_exporter_version not in node_exporter_version_output.stdout 38 | 39 | - name: Include xenial when distro greater than 14.04 40 | when: ansible_distribution_version|version_compare('14.04','>') 41 | include: xenial.yml 42 | 43 | - name: Include trusty when distro less than 16.04 44 | when: ansible_distribution_version|version_compare('16.04','<') 45 | include: trusty.yml 46 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_node_exporter/tasks/trusty.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Create Node Exporter upstart directory 4 | file: 5 | dest: "/etc/init" 6 | state: directory 7 | recurse: yes 8 | 9 | - name: Template Node Exporter upstart service file 10 | register: upstart_service_template 11 | template: 12 | src: "node_exporter.conf.j2" 13 | dest: "/etc/init/node_exporter.conf" 14 | notify: 15 | - "restart_node_exporter_{{ ansible_distribution_version }}" 16 | 17 | - name: Ensure node_exporter service started and enabled 18 | service: 19 | name: node_exporter 20 | enabled: yes 21 | state: started 22 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_node_exporter/tasks/xenial.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Create Node Exporter systemd directory 4 | file: 5 | dest: "/etc/systemd/system" 6 | state: directory 7 | recurse: yes 8 | 9 | - name: Template Node Exporter systemd service file 10 | register: systemd_service_template 11 | template: 12 | src: "node_exporter.service.j2" 13 | dest: "/etc/systemd/system/node_exporter.service" 14 | notify: 15 | - "restart_node_exporter_{{ ansible_distribution_version }}" 16 | 17 | - name: Ensure node_exporter service started 18 | systemd: 19 | name: node_exporter 20 | state: started 21 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_node_exporter/templates/node_exporter.conf.j2: -------------------------------------------------------------------------------- 1 | description "Prometheus Node Exporter process" 2 | 3 | start on (local-filesystems) 4 | stop on runlevel [!12345] 5 | 6 | respawn 7 | 8 | exec {{ prometheus_node_exporter_bin_dir }}/node_exporter
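9 | # Sanity-check comment (an addition, not in the upstream role; assumes node_exporter's
10 | # default listen port 9100, the same port the Prometheus scrape config in this repo targets):
11 | # after 'start node_exporter', 'curl -s http://localhost:9100/metrics | head' should print metrics.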
-------------------------------------------------------------------------------- /ansible/roles/prometheus_node_exporter/templates/node_exporter.service.j2: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Node Exporter 3 | Documentation=https://github.com/prometheus/node_exporter 4 | 5 | [Service] 6 | ExecStart={{ prometheus_node_exporter_bin_dir }}/node_exporter 7 | Restart=on-failure 8 | RestartSec=5 9 | 10 | [Install] 11 | WantedBy=multi-user.target -------------------------------------------------------------------------------- /ansible/roles/prometheus_server/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | prometheus_server_version: "1.7.1" 4 | prometheus_server_bin_dir: "/usr/local/bin" 5 | prometheus_server_config_dir: "/etc/prometheus" 6 | prometheus_server_logs_dir: "/etc/prometheus/logs" 7 | prometheus_server_config_dest: "/etc/prometheus/config.yml" 8 | prometheus_alertmanager_url: "http://{{ inventory_hostname }}:9093" 9 | prometheus_rules_dir: "/etc/prometheus/rules" 10 | prometheus_server_config_file: 11 | scrape_configs: 12 | - job_name: "node" 13 | scrape_interval: "15s" 14 | static_configs: 15 | - targets: ["172.136.3.11:9100", "consul2.lan:9107"] 16 | - job_name: "consul" 17 | scrape_interval: "15s" 18 | consul_sd_configs: 19 | - server: "consul1.lan:8500" 20 | services: ['prometheus'] 21 | datacenter: "lan1" 22 | token: "KnqifJT6qkF0X+Zx9spnjg==" 23 | relabel_configs: 24 | - source_labels: ['__meta_consul_service'] 25 | regex: '(.*)' 26 | target_label: 'job' 27 | replacement: '$1' 28 | rule_files: 29 | - "/etc/prometheus/rules/downtime.rules" 30 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_server/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: restart_prometheus_14.04 4 | service: 5 | name: prometheus 6 | state: restarted 7 | 8 | - name: restart_prometheus_16.04 9 | systemd: 10 | name: prometheus 11 | state: restarted 12 | daemon_reload: yes 13 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_server/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Create Prometheus release directory 4 | file: 5 | dest: "{{ item }}" 6 | state: directory 7 | recurse: yes 8 | with_items: 9 | - "/opt/prometheus_v{{ prometheus_server_version }}" 10 | - "{{ prometheus_server_config_dir }}" 11 | - "{{ prometheus_server_logs_dir }}" 12 | - "{{ prometheus_rules_dir }}" 13 | 14 | - name: Download Prometheus v{{ prometheus_server_version }} release tarball 15 | get_url: 16 | url: "https://github.com/prometheus/prometheus/releases/download/v{{ prometheus_server_version }}/prometheus-{{ prometheus_server_version }}.linux-amd64.tar.gz" 17 | dest: "/opt/prometheus_v{{ prometheus_server_version }}/prometheus-{{ prometheus_server_version }}.linux-amd64.tar.gz" 18 | 19 | - name: Unarchive Prometheus v{{ prometheus_server_version }} tarball 20 | unarchive: 21 | src: "/opt/prometheus_v{{ prometheus_server_version }}/prometheus-{{ prometheus_server_version }}.linux-amd64.tar.gz" 22 | dest: "/opt/prometheus_v{{ prometheus_server_version }}/" 23 | remote_src: yes 24 | creates: "/opt/prometheus_v{{ prometheus_server_version }}/prometheus-{{ prometheus_server_version }}.linux-amd64/prometheus" 25 | 26 | - name: Ensure 
775 permissions on prometheus binary 27 | file: 28 | dest: "/opt/prometheus_v{{ prometheus_server_version }}/prometheus-{{ prometheus_server_version }}.linux-amd64/prometheus" 29 | mode: "0775" 30 | 31 | - name: Symlink Prometheus binary to exec dir 32 | file: 33 | src: "/opt/prometheus_v{{ prometheus_server_version }}/prometheus-{{ prometheus_server_version }}.linux-amd64/prometheus" 34 | dest: "/usr/local/bin/prometheus" 35 | state: link 36 | 37 | - name: Fail when expected and actual prometheus versions do not match 38 | shell: prometheus -version 39 | register: prometheus_version_output 40 | changed_when: False 41 | failed_when: prometheus_server_version not in prometheus_version_output.stdout 42 | 43 | - name: Create rules files when specified in prometheus_server_config_file 44 | when: prometheus_server_config_file is defined and prometheus_server_config_file['rule_files'] is defined and prometheus_server_config_file['rule_files']|length > 0 45 | template: 46 | src: "../templates/{{ item }}" 47 | dest: "{{ item }}" 48 | with_items: "{{ prometheus_server_config_file['rule_files'] }}" 49 | 50 | - name: Template config file 51 | copy: 52 | content: "{{ prometheus_server_config_file|to_nice_yaml }}" 53 | dest: "{{ prometheus_server_config_dest }}" 54 | notify: 55 | - "restart_prometheus_{{ ansible_distribution_version }}" 56 | 57 | - name: Include xenial when distro greater than 14.04 58 | when: ansible_distribution_version|version_compare('14.04','>') 59 | include: xenial.yml 60 | 61 | - name: Include trusty when distro less than 16.04 62 | when: ansible_distribution_version|version_compare('16.04','<') 63 | include: trusty.yml 64 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_server/tasks/trusty.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Create Prometheus upstart directory 4 | file: 5 | dest: "/etc/init" 6 | state: directory 7 | recurse: yes 8 | 9 | - name: Template Prometheus upstart service file 10 | register: upstart_service_template 11 | template: 12 | src: "prometheus.conf.j2" 13 | dest: "/etc/init/prometheus.conf" 14 | notify: 15 | - "restart_prometheus_{{ ansible_distribution_version }}" 16 | 17 | - name: Ensure prometheus service started and enabled 18 | service: 19 | name: prometheus 20 | enabled: yes 21 | state: started 22 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_server/tasks/xenial.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Create Prometheus systemd directory 4 | file: 5 | dest: "/etc/systemd/system" 6 | state: directory 7 | recurse: yes 8 | 9 | - name: Template prometheus systemd service file 10 | register: systemd_service_template 11 | template: 12 | src: "prometheus.service.j2" 13 | dest: "/etc/systemd/system/prometheus.service" 14 | notify: 15 | - "restart_prometheus_{{ ansible_distribution_version }}" 16 | 17 | - name: Ensure prometheus service started and enabled 18 | systemd: 19 | name: prometheus 20 | state: started 21 | enabled: yes 22 | -------------------------------------------------------------------------------- /ansible/roles/prometheus_server/templates/etc/prometheus/rules/downtime.rules: -------------------------------------------------------------------------------- 1 | {% raw %} 2 | # Alert for any instance that is unreachable for >5 minutes.
3 | ALERT InstanceDown 4 | IF up == 0 5 | FOR 5m 6 | LABELS { severity = "page" } 7 | ANNOTATIONS { 8 | summary = "Instance {{ $labels.instance }} down", 9 | description = "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 5 minutes.", 10 | } 11 | {% endraw %} -------------------------------------------------------------------------------- /ansible/roles/prometheus_server/templates/prometheus.conf.j2: -------------------------------------------------------------------------------- 1 | description "Prometheus Server process" 2 | 3 | start on (local-filesystems) 4 | stop on runlevel [!12345] 5 | 6 | respawn 7 | 8 | exec {{ prometheus_server_bin_dir }}/prometheus -config.file={{ prometheus_server_config_dest }}{% if prometheus_alertmanager_url is defined and prometheus_alertmanager_url != "" %} -alertmanager.url={{ prometheus_alertmanager_url }}{% endif %} -------------------------------------------------------------------------------- /ansible/roles/prometheus_server/templates/prometheus.service.j2: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Prometheus Server 3 | Documentation=https://prometheus.io/docs/ 4 | 5 | [Service] 6 | ExecStart={{ prometheus_server_bin_dir }}/prometheus \ 7 | -config.file={{ prometheus_server_config_dest }} \ 8 | -alertmanager.url={{ prometheus_alertmanager_url }} 9 | Restart=on-failure 10 | RestartSec=5 11 | 12 | [Install] 13 | WantedBy=multi-user.target 14 | -------------------------------------------------------------------------------- /ansible/roles/user_role/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | user_role_users: 4 | - name: "user_role_user" 5 | shell: "/bin/bash" 6 | groups: 7 | - "user_role_user_group" 8 | append: "yes" 9 | generate_ssh_key: "yes" 10 | state: "present" 11 | remove: "no" 12 | ssh_key_bits: "2048" 13 | authorized: 14 | - "keys/consul.pub" 15 | #password: "" 16 | system: "no" -------------------------------------------------------------------------------- /ansible/roles/user_role/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Create Groups 4 | group: 5 | name: "{{ item.1 | default(omit) }}" 6 | with_subelements: 7 | - "{{ user_role_users }}" 8 | - "groups" 9 | - flags: 10 | skip_missing: True 11 | 12 | - name: Create User 13 | user: 14 | name: "{{ item.0.name }}" 15 | shell: "{{ item.0.shell | default('/bin/false') }}" 16 | groups: "{{ item.1 | default('') }}" 17 | append: "{{ item.0.append | default('yes') }}" 18 | generate_ssh_key: "{{ item.0.generate_ssh_key | default('no') }}" 19 | createhome: "{{ item.0.createhome | default('yes') }}" 20 | state: "{{ item.0.state | default('present') }}" 21 | remove: "{{ item.0.remove | default('no') }}" 22 | ssh_key_bits: "{{ item.0.ssh_key_bits | default('2048') }}" 23 | system: "{{ item.0.system | default('no') }}" 24 | with_subelements: 25 | - "{{ user_role_users }}" 26 | - "groups" 27 | - flags: 28 | skip_missing: True 29 | 30 | - name: Set password for user 31 | shell: "echo {{ item.name }}:{{ item.password }} | sudo chpasswd" 32 | #no_log: True 33 | register: passwd_set 34 | with_items: "{{ user_role_users }}" 35 | ignore_errors: yes 36 | 37 | - name: Add user to sudoers 38 | shell: "echo '{{ item.name }} ALL = (root) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/{{ item.name }}" 39 | with_items: "{{ user_role_users }}" 40 | 41 | - name: Ensure 0440 Permissions on
/etc/sudoers.d/ file 42 | shell: "chmod 0440 /etc/sudoers.d/{{ item.name }}" 43 | with_items: "{{ user_role_users }}" 44 | 45 | #- name: Create users groups 46 | # user: 47 | # name: "{{ item.name }}" 48 | # groups: "{{ item.groups | join(',') | default('') }}" 49 | # append: "{{ item.append | default('yes') }}" 50 | # with_items: "{{ user_role_users }}" 51 | 52 | - name: Add Authorized key 53 | authorized_key: 54 | user: "{{ item.0.name }}" 55 | key: "{{ lookup('file', item.1) }}" 56 | with_subelements: 57 | - "{{ user_role_users }}" 58 | - "authorized" 59 | - flags: 60 | skip_missing: True 61 | 62 | - name: Copy private keys 63 | copy: 64 | src: "{{ item.1.src }}" 65 | dest: "{{ item.1.dest }}" 66 | mode: "0600" 67 | owner: "{{ item.0.name }}" 68 | with_subelements: 69 | - "{{ user_role_users }}" 70 | - "private_keys" 71 | - flags: 72 | skip_missing: True -------------------------------------------------------------------------------- /ansible/roles/user_role/templates/keys/consul.pub: -------------------------------------------------------------------------------- 1 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC/PMJ7B/EiWn51N0jIGpUcVxg+GuWCMnudK+/sySNguhtQYXHOMGXVP3I7RkyjMD+y18K27YABAgrQMtexowlZKFI/dTdPAZCob7m2pr4RyW6UySp6bPNzldHhN0iyHqt2/gnfXLX2OCwGe8yGsCsWEIb19xaYWJkL/0rmKaq82/vqqMOmBko3JOom1UPExr9TXc8kQCBXVVL1U9WQyOJnZewPlrbzj8xIdtUX5Vv5lmU2VsLl+vvC7fNnLJeCocWjdfGf1j0eTJtAYgSc2FujrjNhfU0+YTRs9hextq4SQlgHPHA4KNIfZdxT9/mOIET1I5L0+dNf0VawZJD0Gpf/ -------------------------------------------------------------------------------- /ansible/run_playbooks.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # $1 is playbook to run string 4 | # $2 is inventory string 5 | # $3 is filename string 6 | 7 | function run_playbook () { 8 | echo "Running: ansible-playbook $1 -i $2 at $(date)" 9 | ansible-playbook "$1" -i "$2" >> "$3" 2>&1; rc=$? # capture the exit code before the test below resets $? 10 | if [ $rc -eq 0 ]; then 11 | echo "Success." 12 | else 13 | echo "Unsuccessful playbook. Error Code $rc" 14 | exit 1 15 | fi 16 | } 17 | 18 | echo "Beginning Installation at $(date)." 19 | 20 | run_playbook provision_hostsfile.yaml inventory.py install.out 21 | run_playbook provision_bind9_servers.yaml inventory.py install.out 22 | run_playbook provision_resolv_conf.yaml inventory.py install.out 23 | run_playbook provision_prometheus_servers.yaml inventory.py install.out 24 | run_playbook provision_prometheus_alertmanager_servers.yaml inventory.py install.out 25 | run_playbook provision_prometheus_node_exporter_servers.yaml inventory.py install.out 26 | run_playbook provision_prometheus_consul_exporter_servers.yaml inventory.py install.out 27 | run_playbook provision_consul_servers.yaml inventory.py install.out 28 | run_playbook provision_consul_client_servers.yaml inventory.py install.out 29 | run_playbook provision_consul_template_servers.yaml inventory.py install.out 30 | 31 | echo "Ending Installation at $(date)."
32 | -------------------------------------------------------------------------------- /ansible/static_inventory: -------------------------------------------------------------------------------- 1 | [core_servers] 2 | core1.lan 3 | 4 | [prometheus_servers] 5 | prometheus[1:2].lan 6 | 7 | [prometheus_client_servers] 8 | client1.lan 9 | 10 | [consul_servers] 11 | consul[1:3].lan 12 | 13 | [consul_client_servers:children] 14 | prometheus_servers 15 | prometheus_client_servers 16 | 17 | [consul_template_servers] 18 | client1.lan 19 | 20 | [prometheus_node_exporter_servers:children] 21 | prometheus_servers 22 | prometheus_client_servers 23 | consul_servers 24 | core_servers 25 | 26 | [prometheus_consul_exporter_servers:children] 27 | consul_servers 28 | 29 | [prometheus_alertmanager_servers:children] 30 | prometheus_servers 31 | -------------------------------------------------------------------------------- /ansible/vars/keys/id_rsa: -------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | MIIJKQIBAAKCAgEAxVjjakDRkUGaEv2+aO/nvmbsAdIku3oMP5adP0lNZ1GdjaW9 3 | LqKLEhlMLsrWCFP0YmCmmDOM2ThoGs/oE5n7ugFHVACr6nsEw4dNaxt4m7f10Re+ 4 | HhChiz+dPkyM8+7AQ1u4QkJiO01+crtsJ0hazxOcCk71dMu32bA4LOGNVOVMI4FY 5 | DEzdmQT7CmJCXoQqoNwPXte8/0BEiE+gqyo52yuvd35e5Fe3ZW2+ZrDo5224CFVb 6 | v2iFK9uThaUy2/Yf5YKX7SIryOPaiLyS2MNRvuoBrSZTnfM+yP2BOwCEI8ddpp7A 7 | CgqGiPrqqrz7n6NyJwFy3FGOETnttmSciVNk4sAgCoTnUJKJPYRxG5QQwE4+gD2n 8 | Lps0CwjBm5pOdG0tle7jTw94Oq+PM3Kz4Von38Kia0NrcLGdRb3BLI9ncrFcawue 9 | aP2XnX++mpPaK/8B8uTgSfKP7YTzDRBN6f0st267MmwXMDVu03dTPg5OYZ3ZMynR 10 | dHq3Uhu//rPpKVfVjGEGs9nNnNa60h0F8MruaGGPxfOVvttJYGNWn3tH+XzENGBd 11 | 8g8hvLpZvAuezDNgIz1dWwHBW2boCobCX9A4dqhm6WNnGVJSdea8H1tluMhNJY/c 12 | 5oig1yw4slMuqjR7KjxOy1xu7Bc7dDUnCBNuSXqB/trsZ1UifqyOQOqlq0kCAwEA 13 | AQKCAgEAi2h3fh+76zjEdpX5ZHfU5ZQfNEoW2Lkm3eSvwH0vyMTG38EtGzyHsYld 14 | X9C50bg88yIDjZEc9nX6VLZ54x7rzEIK+TRgPYwZmWaDMh593MLfS43SJ7rn405U 15 | 7iV1jPwjpXGtgGEUs3/Sy4nKpRVf0NixajbM/mzKu2TYyhurk0VHu+0rHcVO5UG3 16 | dWZkLBCBbJEH6p/MSyRDRWe6wC9cH4+ErI0Ay52LWINl9XUTkijX0Es8PMLBEHJU 17 | jBxfCTOc5KyUVlfFRmxvfBh6Q9X2cT8tX/urJfEncSRjniKHNh8n8QRD9NPqpela 18 | zLIjrmnvwc4sLacPDOw3GJmKWxgmMyqFD8LyJHpAOtJUJqGqyZnA8qBSp9A5y5MV 19 | YKnI2wAa8dRWi2jXcQlXxgKyatmapfsDVIETbktqlglMd+M/UjzyKBHowoeQgWmt 20 | UNrORNPbLdmBRqUZ2hAeT4asYadJsBi5I957akuJqjz3LyiL1Jbp2PUotYmZdkOk 21 | i5ukSHn9ifeU2MpcrgH0mM0FIhNhspKLde4TQTTm2/1r/elJ8B95FHsprxgc1Usv 22 | 4GAy4Wd0jZNwnWmWeYi+w0DsCJ3pBCDhLtdbTPk3zPKCZcwsr3iRHAKVuEUn5yvc 23 | dmKXKoIMIiSaKI/gPC15yYMnipT12LSEnZqOR8dyq0RLc0KI9YECggEBAOc1dIBP 24 | g7pXEky1LAjpA9lCyo89pctYzVkF1nzb/bPb/rJlqmLrlVuRE+fXzch1eNT5l5rb 25 | AJjHY2FWsDW6nDoczt81n3bIBM6/7y7bigo6JWFD05M5q31Eh7jfeIlqOCFtptx1 26 | d0J6PKz3qAgiYVt9Tck4XU6yc4AYYo/saOn4IcOE2DZzEIaBFKgbls/ofz7ePj6U 27 | oPRsLP37J+FYi/H9wZSOtuCDdIgDcTFKCqVaGVKSgZ7TnTS1Qe9sF/t0g+opD23F 28 | Oq7wpgOs1DwTxxLPE/42Iy3y7OkmB5iyDD1gdzTPNcrRwwR7H8VZo/YAAvfRg9lq 29 | 4rbznPEDEhrBUv0CggEBANqB8+OU4jsg2BUDgYm7VlUppdZflJAlZ9FCiibRJh5e 30 | GWLA+TboQBv76jkbcACc8wpGDE1CXOeM7eKpYmJ7klEuJyTXFwc24nd+XjM7l+cR 31 | I3DcB76TnpAmq7Y1uruVEbaf46Ohfzm2fhHK88OUG1Ev4X1ublm4sa3daAc72u/M 32 | 2ZPqFBbIQQeHlQPbXDCHcrvPE+4bnmjgYYLztnosgvHt7iZfec/XOr+aCz1Qz2NY 33 | mf9UEWDfTe07Kp9Nu/yogowix6RyBoPb31AblvPdlJrnGzyHmrqKR8IlXeTV0Kd8 34 | LLiwN2vJa1ph46Yp8pPlMpc67krUXHaT0Nb6TxJyCT0CggEAGLsNBRR8QJhDqtTO 35 | Kv3RnZGZXQz5CPEstdL8CEr3rh9f9PEO8pnEVhEEkg3xQvndvNodr0CxpaXeKXE+ 36 | U+DsCQM/1XxE3E20xfCdwuNUtpdVnDvHC75la3bDFbFBkWE9M8YozPf7k67DgrTS 37 | 
0lvrEgArK9TNljdcR11UXZSp/CzAX8t73Qcdw2VXMkwRK5fw1GatPcBkHwDbJyeQ 38 | MnhoL7A23o8NXKJy+oFzwBPtY5Zpkv18w+JatGKW9OcZAZqK6acOsW0hQ8kV2NrJ 39 | sMtkxcxElTJxPN75HE8nDNwulvoUhh+L32/kbjxFP7TA5vVFXACbG8xlB7CzHsx3 40 | SUlYeQKCAQBJymEGlhzgQvM2BX0VekJpeCsc/iI5m6g3rvTgrvjkgHqqMuj2dfBQ 41 | gv5SUsW8L0SUT5uP671cJOehB/rFr112NNPbSRfpw8su92nwe0uNKOiDEIs8/LUr 42 | kSOj4Pkyp+DXf59sfvlsEAj7EULI5ZdkD3ti1IxHmGvIf2DGMJx7Jv7DSOWH3BC5 43 | 7juQld/IM7W3XvMoEgOo/z1u78bL84Q1F1i5dLy6R8D7O5ZYGpYuwpke9ND5me/D 44 | EZwPOD0n0fBkSsqI6kjw9quvbeHIcSfiN/Gmu1SxcklRczUGNn7EUNb5d+j/aYHr 45 | f+tVDg1xjJFNKQWYF6cyFDEOqHFTlVcxAoIBAQDGeL6ougai1gk0yttcLeMLfzZz 46 | 9S6ScBlNxZdEG+MepxNq5Ef+8FLBWOVOVVGdCDQITsVeSaVxpeYA64OwuNhOUfrc 47 | GhEW50+KLrhUTu1Vyj21g5f9toON/MeS0TeP5qwBWSOXKhf5s5jr4yCBrg3foa+i 48 | BgvJC80k+a7kIITthy17Jh3PulfCIQ71Juwep82hPUw5zYaCvktdIeNBUd9IAwhT 49 | G+k0503+UVUGiab7ChMNm62ZdmJPrrudwDTegV/Ul2Fli3Ig4Z+wao6JM9HmZ5XE 50 | bHKtZXoITECpF4ZohRvDD7+1F0xvR9tesmHViz+Rly88aGZbhC5hpisNwGsB 51 | -----END RSA PRIVATE KEY----- 52 | -------------------------------------------------------------------------------- /ansible/vars/keys/id_rsa.copy: -------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | MIIJKQIBAAKCAgEAxVjjakDRkUGaEv2+aO/nvmbsAdIku3oMP5adP0lNZ1GdjaW9 3 | LqKLEhlMLsrWCFP0YmCmmDOM2ThoGs/oE5n7ugFHVACr6nsEw4dNaxt4m7f10Re+ 4 | HhChiz+dPkyM8+7AQ1u4QkJiO01+crtsJ0hazxOcCk71dMu32bA4LOGNVOVMI4FY 5 | DEzdmQT7CmJCXoQqoNwPXte8/0BEiE+gqyo52yuvd35e5Fe3ZW2+ZrDo5224CFVb 6 | v2iFK9uThaUy2/Yf5YKX7SIryOPaiLyS2MNRvuoBrSZTnfM+yP2BOwCEI8ddpp7A 7 | CgqGiPrqqrz7n6NyJwFy3FGOETnttmSciVNk4sAgCoTnUJKJPYRxG5QQwE4+gD2n 8 | Lps0CwjBm5pOdG0tle7jTw94Oq+PM3Kz4Von38Kia0NrcLGdRb3BLI9ncrFcawue 9 | aP2XnX++mpPaK/8B8uTgSfKP7YTzDRBN6f0st267MmwXMDVu03dTPg5OYZ3ZMynR 10 | dHq3Uhu//rPpKVfVjGEGs9nNnNa60h0F8MruaGGPxfOVvttJYGNWn3tH+XzENGBd 11 | 8g8hvLpZvAuezDNgIz1dWwHBW2boCobCX9A4dqhm6WNnGVJSdea8H1tluMhNJY/c 12 | 5oig1yw4slMuqjR7KjxOy1xu7Bc7dDUnCBNuSXqB/trsZ1UifqyOQOqlq0kCAwEA 13 | AQKCAgEAi2h3fh+76zjEdpX5ZHfU5ZQfNEoW2Lkm3eSvwH0vyMTG38EtGzyHsYld 14 | X9C50bg88yIDjZEc9nX6VLZ54x7rzEIK+TRgPYwZmWaDMh593MLfS43SJ7rn405U 15 | 7iV1jPwjpXGtgGEUs3/Sy4nKpRVf0NixajbM/mzKu2TYyhurk0VHu+0rHcVO5UG3 16 | dWZkLBCBbJEH6p/MSyRDRWe6wC9cH4+ErI0Ay52LWINl9XUTkijX0Es8PMLBEHJU 17 | jBxfCTOc5KyUVlfFRmxvfBh6Q9X2cT8tX/urJfEncSRjniKHNh8n8QRD9NPqpela 18 | zLIjrmnvwc4sLacPDOw3GJmKWxgmMyqFD8LyJHpAOtJUJqGqyZnA8qBSp9A5y5MV 19 | YKnI2wAa8dRWi2jXcQlXxgKyatmapfsDVIETbktqlglMd+M/UjzyKBHowoeQgWmt 20 | UNrORNPbLdmBRqUZ2hAeT4asYadJsBi5I957akuJqjz3LyiL1Jbp2PUotYmZdkOk 21 | i5ukSHn9ifeU2MpcrgH0mM0FIhNhspKLde4TQTTm2/1r/elJ8B95FHsprxgc1Usv 22 | 4GAy4Wd0jZNwnWmWeYi+w0DsCJ3pBCDhLtdbTPk3zPKCZcwsr3iRHAKVuEUn5yvc 23 | dmKXKoIMIiSaKI/gPC15yYMnipT12LSEnZqOR8dyq0RLc0KI9YECggEBAOc1dIBP 24 | g7pXEky1LAjpA9lCyo89pctYzVkF1nzb/bPb/rJlqmLrlVuRE+fXzch1eNT5l5rb 25 | AJjHY2FWsDW6nDoczt81n3bIBM6/7y7bigo6JWFD05M5q31Eh7jfeIlqOCFtptx1 26 | d0J6PKz3qAgiYVt9Tck4XU6yc4AYYo/saOn4IcOE2DZzEIaBFKgbls/ofz7ePj6U 27 | oPRsLP37J+FYi/H9wZSOtuCDdIgDcTFKCqVaGVKSgZ7TnTS1Qe9sF/t0g+opD23F 28 | Oq7wpgOs1DwTxxLPE/42Iy3y7OkmB5iyDD1gdzTPNcrRwwR7H8VZo/YAAvfRg9lq 29 | 4rbznPEDEhrBUv0CggEBANqB8+OU4jsg2BUDgYm7VlUppdZflJAlZ9FCiibRJh5e 30 | GWLA+TboQBv76jkbcACc8wpGDE1CXOeM7eKpYmJ7klEuJyTXFwc24nd+XjM7l+cR 31 | I3DcB76TnpAmq7Y1uruVEbaf46Ohfzm2fhHK88OUG1Ev4X1ublm4sa3daAc72u/M 32 | 2ZPqFBbIQQeHlQPbXDCHcrvPE+4bnmjgYYLztnosgvHt7iZfec/XOr+aCz1Qz2NY 33 | mf9UEWDfTe07Kp9Nu/yogowix6RyBoPb31AblvPdlJrnGzyHmrqKR8IlXeTV0Kd8 34 | LLiwN2vJa1ph46Yp8pPlMpc67krUXHaT0Nb6TxJyCT0CggEAGLsNBRR8QJhDqtTO 35 | 
Kv3RnZGZXQz5CPEstdL8CEr3rh9f9PEO8pnEVhEEkg3xQvndvNodr0CxpaXeKXE+ 36 | U+DsCQM/1XxE3E20xfCdwuNUtpdVnDvHC75la3bDFbFBkWE9M8YozPf7k67DgrTS 37 | 0lvrEgArK9TNljdcR11UXZSp/CzAX8t73Qcdw2VXMkwRK5fw1GatPcBkHwDbJyeQ 38 | MnhoL7A23o8NXKJy+oFzwBPtY5Zpkv18w+JatGKW9OcZAZqK6acOsW0hQ8kV2NrJ 39 | sMtkxcxElTJxPN75HE8nDNwulvoUhh+L32/kbjxFP7TA5vVFXACbG8xlB7CzHsx3 40 | SUlYeQKCAQBJymEGlhzgQvM2BX0VekJpeCsc/iI5m6g3rvTgrvjkgHqqMuj2dfBQ 41 | gv5SUsW8L0SUT5uP671cJOehB/rFr112NNPbSRfpw8su92nwe0uNKOiDEIs8/LUr 42 | kSOj4Pkyp+DXf59sfvlsEAj7EULI5ZdkD3ti1IxHmGvIf2DGMJx7Jv7DSOWH3BC5 43 | 7juQld/IM7W3XvMoEgOo/z1u78bL84Q1F1i5dLy6R8D7O5ZYGpYuwpke9ND5me/D 44 | EZwPOD0n0fBkSsqI6kjw9quvbeHIcSfiN/Gmu1SxcklRczUGNn7EUNb5d+j/aYHr 45 | f+tVDg1xjJFNKQWYF6cyFDEOqHFTlVcxAoIBAQDGeL6ougai1gk0yttcLeMLfzZz 46 | 9S6ScBlNxZdEG+MepxNq5Ef+8FLBWOVOVVGdCDQITsVeSaVxpeYA64OwuNhOUfrc 47 | GhEW50+KLrhUTu1Vyj21g5f9toON/MeS0TeP5qwBWSOXKhf5s5jr4yCBrg3foa+i 48 | BgvJC80k+a7kIITthy17Jh3PulfCIQ71Juwep82hPUw5zYaCvktdIeNBUd9IAwhT 49 | G+k0503+UVUGiab7ChMNm62ZdmJPrrudwDTegV/Ul2Fli3Ig4Z+wao6JM9HmZ5XE 50 | bHKtZXoITECpF4ZohRvDD7+1F0xvR9tesmHViz+Rly88aGZbhC5hpisNwGsB 51 | -----END RSA PRIVATE KEY----- 52 | -------------------------------------------------------------------------------- /ansible/vars/keys/id_rsa.pub: -------------------------------------------------------------------------------- 1 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDFWONqQNGRQZoS/b5o7+e+ZuwB0iS7egw/lp0/SU1nUZ2Npb0uoosSGUwuytYIU/RiYKaYM4zZOGgaz+gTmfu6AUdUAKvqewTDh01rG3ibt/XRF74eEKGLP50+TIzz7sBDW7hCQmI7TX5yu2wnSFrPE5wKTvV0y7fZsDgs4Y1U5UwjgVgMTN2ZBPsKYkJehCqg3A9e17z/QESIT6CrKjnbK693fl7kV7dlbb5msOjnbbgIVVu/aIUr25OFpTLb9h/lgpftIivI49qIvJLYw1G+6gGtJlOd8z7I/YE7AIQjx12mnsAKCoaI+uqqvPufo3InAXLcUY4ROe22ZJyJU2TiwCAKhOdQkok9hHEblBDATj6APacumzQLCMGbmk50bS2V7uNPD3g6r48zcrPhWiffwqJrQ2twsZ1FvcEsj2dysVxrC55o/Zedf76ak9or/wHy5OBJ8o/thPMNEE3p/Sy3brsybBcwNW7Td1M+Dk5hndkzKdF0erdSG7/+s+kpV9WMYQaz2c2c1rrSHQXwyu5oYY/F85W+20lgY1afe0f5fMQ0YF3yDyG8ulm8C57MM2AjPV1bAcFbZugKhsJf0Dh2qGbpY2cZUlJ15rwfW2W4yE0lj9zmiKDXLDiyUy6qNHsqPE7LXG7sFzt0NScIE25JeoH+2uxnVSJ+rI5A6qWrSQ== 2 | -------------------------------------------------------------------------------- /ansible/vars/makevault.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | # This should be converted to a vault 4 | authorized_registry_users: 5 | - user: tester 6 | password: pw 7 | - user: testuser 8 | password: pass 9 | 10 | _users: 11 | - name: "consul" 12 | shell: "/bin/bash" 13 | groups: 14 | - "consul" 15 | append: "yes" 16 | generate_ssh_key: "no" 17 | ssh_key_bits: "4096" 18 | authorized: 19 | - "vars/keys/id_rsa.pub" 20 | private_keys: 21 | - src: "vars/keys/id_rsa" 22 | dest: "/home/consul/.ssh/id_rsa" 23 | password: 12345 24 | system: "no" --------------------------------------------------------------------------------