├── prometheus ├── Dockerfile └── prometheus.yml ├── README.md └── Makefile /prometheus/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM prom/prometheus 2 | COPY prometheus.yml /etc/prometheus/prometheus.yml 3 | -------------------------------------------------------------------------------- /prometheus/prometheus.yml: -------------------------------------------------------------------------------- 1 | ### prometheus.yml ### 2 | 3 | global: 4 | scrape_interval: 15s # By default, scrape targets every 15 seconds. 5 | evaluation_interval: 15s # By default, scrape targets every 15 seconds. 6 | # scrape_timeout is set to the global default (10s). 7 | 8 | # Attach these extra labels to all time-series collected by this Prometheus instance. 9 | external_labels: 10 | monitor: 'swarm-monitor' 11 | 12 | #rule_files: 13 | # - '/etc/prometheus/alert.rules' 14 | 15 | # A scrape configuration containing exactly one endpoint to scrape: 16 | scrape_configs: 17 | - job_name: 'dns-service-discovery' 18 | 19 | dns_sd_configs: 20 | - names: 21 | - 'monitoring.service.dc1.consul' 22 | type: 'A' 23 | port: 8080 24 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Docker swarm on Digital Ocean 2 | 3 | 4 | ## Architecture overview 5 | ```bash 6 | infra swarm-master swarm-node-1 swarm-node-2 7 | +--------------+ +---------------+ +----------------+ +----------------+ 8 | | consul | | registrator | | registartor | | registartor | service discovery 9 | +------------------------------------------------------------------------------------------------- 10 | | minilogbox | | | | | | | 11 | | kibanabox | | logspout | | logspout | | logspout | logs aggregation 12 | +------------------------------------------------------------------------------------------------- 13 | | | | grafana | | | | | 14 | | | | prometheus | | | | | 15 | | | | cadvisor | | cadvisor | | cadvisor | monitoring 16 | +------------------------------------------------------------------------------------------------- 17 | 18 | 19 | ``` 20 | 21 | ## Requierements 22 | * docker-machine https://docs.docker.com/machine/ 23 | * GNU make 24 | * a Digital Ocean account 25 | 26 | 27 | ## Used components 28 | The following components will be installed 29 | * consul (progrium/consul) 30 | * cadvisor (google/cadvisor) 31 | * registrator (sourcestream/registrator) modified version to support the docker 1.9 network overlay 32 | * logspout (progrium/logspout) 33 | * kibanabox (sirile/kibanabox) 34 | * sirile (sirile/minilogbox) 35 | * prometheus (sourcestream/prometheus-sd) modified version of prometheus to use service discovery with DNS through consul 36 | * grafana (grafana/grafana) 37 | 38 | ## Usage 39 | ```bash 40 | export digitalocean_token=[YOUR DIGITAL OCEAN TOKEN] 41 | make all 42 | ``` 43 | 44 | After the script is done you will get a list with all admin urls: 45 | ```bash 46 | consul http://10.131.156.22:8500/ 47 | kibana http://10.131.156.22:5601/ 48 | prometheus http://10.131.180.190:9090/ 49 | grafana http://10.131.180.190:3000/ 50 | to map those ports to your localhost run 51 | make ssh-tunnels 52 | ``` 53 | However this are IPs from the private network of Digital Ocean and can not be accessed from outside. I use a reverse proxy with password protection, but you can also use a ssh tunnel for testing. You can automatically create a ssh tunnel to the UIs above using 54 | 55 | ```bash 56 | make ssh-tunnels 57 | ``` 58 | 59 | This command will map all the ports on your localhost and print an output: 60 | 61 | ```bash 62 | consul http://localhost:8500/ 63 | kibana http://localhost:5601/ 64 | prometheus http://localhost:9090/ 65 | grafana http://localhost:3000/ 66 | ``` 67 | Now you can access the UIs in a secure way. 68 | ## Connect to the swarm master 69 | ```bash 70 | eval $(docker-machine env --swarm dev-swarm-master) 71 | docker info 72 | ``` 73 | the output should look like 74 | ```bash 75 | Containers: 11 76 | Images: 10 77 | Role: primary 78 | Strategy: spread 79 | Filters: health, port, dependency, affinity, constraint 80 | Nodes: 3 81 | dev-swarm-master: 46.102.118.254:2376 82 | └ Status: Healthy 83 | └ Containers: 5 84 | └ Reserved CPUs: 0 / 1 85 | └ Reserved Memory: 0 B / 1.026 GiB 86 | └ Labels: com.docker.network.driver.overlay.bind_interface=eth1, environment=dev, executiondriver=native-0.2, kernelversion=3.16.0-4-amd64, node_name=dev-swarm-master, operatingsystem=Debian GNU/Linux 8 (jessie), provider=digitalocean, storagedriver=aufs 87 | dev-swarm-node-1: 46.102.255.101:2376 88 | └ Status: Healthy 89 | └ Containers: 3 90 | └ Reserved CPUs: 0 / 1 91 | └ Reserved Memory: 0 B / 1.026 GiB 92 | └ Labels: com.docker.network.driver.overlay.bind_interface=eth1, environment=dev, executiondriver=native-0.2, kernelversion=3.16.0-4-amd64, node_name=dev-swarm-node-1, operatingsystem=Debian GNU/Linux 8 (jessie), provider=digitalocean, storagedriver=aufs 93 | dev-swarm-node-2: 46.102.119.136:2376 94 | └ Status: Healthy 95 | └ Containers: 3 96 | └ Reserved CPUs: 0 / 1 97 | └ Reserved Memory: 0 B / 1.026 GiB 98 | └ Labels: com.docker.network.driver.overlay.bind_interface=eth1, environment=dev, executiondriver=native-0.2, kernelversion=3.16.0-4-amd64, node_name=dev-swarm-node-2, operatingsystem=Debian GNU/Linux 8 (jessie), provider=digitalocean, storagedriver=aufs 99 | CPUs: 3 100 | Total Memory: 3.078 GiB 101 | Name: dev-swarm-master 102 | ``` 103 | 104 | ## Make sure all the services are running correctly 105 | ```bash 106 | docker ps 107 | 108 | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 109 | 6f6e2817f3b9 google/cadvisor:latest "/usr/bin/cadvisor" About an hour ago Up About an hour 8080/tcp dev-swarm-master/cadvisor-dev-swarm-master 110 | 86d232d76994 google/cadvisor:latest "/usr/bin/cadvisor" About an hour ago Up About an hour 8080/tcp dev-swarm-node-2/cadvisor-dev-swarm-node-2 111 | 8806ba459470 google/cadvisor:latest "/usr/bin/cadvisor" About an hour ago Up About an hour 8080/tcp dev-swarm-node-1/cadvisor-dev-swarm-node-1 112 | aa55eb734e81 grafana/grafana "/usr/sbin/grafana-se" About an hour ago Up About an hour 10.135.180.190:3000->3000/tcp, 127.0.0.1:3000->3000/tcp dev-swarm-master/grafana 113 | e9cf7a4901c8 sourcestream/prometheus-sd "/bin/prometheus -con" About an hour ago Up About an hour 127.0.0.1:9090->9090/tcp, 10.135.180.190:9090->9090/tcp dev-swarm-master/prometheus 114 | 0cf24ced7533 progrium/logspout "/bin/logspout syslog" About an hour ago Up About an hour 8000/tcp dev-swarm-master/logspout-dev-swarm-master 115 | c10c54fa77d9 progrium/logspout "/bin/logspout syslog" About an hour ago Up About an hour 8000/tcp dev-swarm-node-2/logspout-dev-swarm-node-2 116 | afb99afe86c0 progrium/logspout "/bin/logspout syslog" About an hour ago Up About an hour 8000/tcp dev-swarm-node-1/logspout-dev-swarm-node-1 117 | 2a0df7e903db sourcestream/registrator "/bin/registrator -in" About an hour ago Up About an hour dev-swarm-master/registrator-dev-swarm-master 118 | 8418bf9985f7 sourcestream/registrator "/bin/registrator -in" About an hour ago Up About an hour dev-swarm-node-2/registrator-dev-swarm-node-2 119 | 6989ead0d4b2 sourcestream/registrator "/bin/registrator -in" About an hour ago Up About an hour dev-swarm-node-1/registrator-dev-swarm-node-1 120 | 121 | ``` 122 | 123 | ## TODO 124 | * remove the infra node and move the consul, kibanabox and minilogbox inside of the swarm 125 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | digitalocean_image=debian-8-x64 2 | 3 | driver_ops = --driver=digitalocean \ 4 | --digitalocean-region=fra1 \ 5 | --digitalocean-image=$(digitalocean_image) \ 6 | --digitalocean-access-token=$(digitalocean_token) \ 7 | --digitalocean-size=1gb \ 8 | --digitalocean-private-networking=true 9 | 10 | environment=dev 11 | infra_node_name=$(environment)-infra 12 | swarm_master_name=$(environment)-swarm-master 13 | swarm_node_name=$(environment)-swarm-node 14 | monitoring_overlay_network_name=service-monitoring 15 | logging_overlay_network_name=service-logging 16 | 17 | init-vars: 18 | @$(eval infra_node_ip_private := $(shell docker-machine ssh $(infra_node_name) ip addr list eth1 |grep "inet " |cut -d' ' -f6|cut -d/ -f1)) 19 | @# echo infra_node_ip_private: $(infra_node_ip_private) 20 | @$(eval infra_docker_config := $(shell docker-machine config $(infra_node_name))) 21 | @# echo infra_docker_config: $(infra_docker_config) 22 | 23 | init-swarm-master-vars: 24 | $(eval swarm_master_config := $(shell docker-machine config --swarm $(swarm_master_name))) 25 | $(eval smarm_master_ip_private := $(shell docker-machine ssh $(swarm_master_name) ip addr list eth1 |grep "inet " |cut -d' ' -f6|cut -d/ -f1)) 26 | @echo smarm_master_ip_private: $(smarm_master_ip_private) 27 | 28 | init-nodes-vars: 29 | $(eval smarm_nodes := $(shell docker-machine ls -q | grep $(swarm_node_name))) 30 | 31 | create-infra: 32 | @printf "\e[33m*** \e create $(infra_node_name)... \e[33m***\e[0m\n" 33 | docker-machine create \ 34 | $(driver_ops) \ 35 | $(infra_node_name) 36 | 37 | destroy-infra: 38 | docker-machine rm $(infra_node_name) 39 | 40 | install-consul: init-vars 41 | @printf "\e[33m*** \e install service discovery - consul on $(infra_node_name)... \e[33m***\e[0m\n" 42 | docker $(infra_docker_config) run -d \ 43 | -p 127.0.0.1:8500:8500 \ 44 | -p $(infra_node_ip_private):8500:8500 \ 45 | -p $(infra_node_ip_private):53:53 \ 46 | -p $(infra_node_ip_private):53:53/udp \ 47 | -p $(infra_node_ip_private):8301:8301 \ 48 | -p $(infra_node_ip_private):8301:8301/udp \ 49 | -p $(infra_node_ip_private):8302:8302 \ 50 | -p $(infra_node_ip_private):8302:8302/udp \ 51 | -p $(infra_node_ip_private):8400:8400 \ 52 | -h consul \ 53 | --name consul \ 54 | --restart always \ 55 | progrium/consul -server -bootstrap -ui-dir /ui 56 | 57 | create-master: init-vars install-consul 58 | @printf "\e[33m*** \e create swarm-master $(swarm_master_name)... \e[33m***\e[0m\n" 59 | docker-machine create \ 60 | $(driver_ops) \ 61 | --swarm \ 62 | --swarm-master \ 63 | --engine-label="environment=$(environment)" \ 64 | --engine-label="com.docker.network.driver.overlay.bind_interface=eth1" \ 65 | --engine-label="node_name=$(swarm_master_name)" \ 66 | --swarm-discovery="consul://$(infra_node_ip_private):8500" \ 67 | --engine-opt="cluster-store=consul://$(infra_node_ip_private):8500" \ 68 | --engine-opt="cluster-advertise=eth1:2376" \ 69 | $(swarm_master_name) 70 | 71 | destroy-master: 72 | docker-machine rm $(swarm_master_name) 73 | 74 | create-nodes: init-vars install-consul 75 | @for number in 1 2 ; do \ 76 | printf "\e[33m*** \e create smarm node $(swarm_node_name)-$$number... \e[33m***\e[0m\n"; \ 77 | docker-machine create \ 78 | $(driver_ops) \ 79 | --swarm \ 80 | --engine-label="node_name=$(swarm_node_name)-$$number" \ 81 | --engine-label="environment=$(environment)" \ 82 | --engine-label="com.docker.network.driver.overlay.bind_interface=eth1" \ 83 | --swarm-discovery="consul://$(infra_node_ip_private):8500" \ 84 | --engine-opt="cluster-store=consul://$(infra_node_ip_private):8500" \ 85 | --engine-opt="cluster-advertise=eth1:2376" \ 86 | $(swarm_node_name)-$$number; \ 87 | done 88 | 89 | destroy-nodes: 90 | @for node_name in $(smarm_nodes); do \ 91 | printf "### destorying $(node_name)..." \ 92 | docker-machine rm $$(docker-machine config $$node_name); \ 93 | done 94 | 95 | create-overlay-network-logging: init-swarm-master-vars 96 | @printf "\e[33m*** \e create $(logging_overlay_network_name) overlay network \e[33m***\e[0m\n" 97 | docker $(swarm_master_config) network create -d overlay $(logging_overlay_network_name) 98 | 99 | install-logging: init-nodes-vars init-swarm-master-vars create-overlay-network-logging 100 | @printf "\e[33m*** \e installing logstash and elasticsearch @ $(infra_node_name)... \e[33m***\e[0m\n" 101 | docker $(infra_docker_config) run -d \ 102 | --name logbox \ 103 | -h logbox \ 104 | -p $(infra_node_ip_private):5000:5000/udp \ 105 | -p $(infra_node_ip_private):9200:9200 \ 106 | --restart=always \ 107 | sirile/minilogbox 108 | @printf "\e[33m*** \e installing kibana @ $(infra_node_name)... \e[33m***\e[0m\n" 109 | docker $(infra_docker_config) run -d \ 110 | -p $(infra_node_ip_private):5601:5601 \ 111 | -p 127.0.0.1:5601:5601 \ 112 | -h kibanabox \ 113 | --name kibanabox \ 114 | --restart=always \ 115 | sirile/kibanabox http://$(infra_node_ip_private):9200 116 | @for node_name in $(smarm_nodes) $(swarm_master_name); do \ 117 | printf "\e[33m*** \e installing logspout @ $$node_name ... \e[33m***\e[0m\n"; \ 118 | docker $(swarm_master_config) run -d \ 119 | --name logspout-$$node_name \ 120 | --restart=always \ 121 | -h logspout \ 122 | --net=$(logging_overlay_network_name) \ 123 | -e constraint:node_name==$$node_name \ 124 | -v /var/run/docker.sock:/tmp/docker.sock \ 125 | progrium/logspout syslog://$(infra_node_ip_private):5000; \ 126 | done 127 | 128 | create-overlay-network-monitoring: init-swarm-master-vars 129 | @printf "\e[33m*** \e create $(monitoring_overlay_network_name) overlay network \e[33m***\e[0m\n" 130 | docker $(swarm_master_config) network create -d overlay $(monitoring_overlay_network_name) 131 | 132 | install-monitoring: init-nodes-vars init-swarm-master-vars create-overlay-network-monitoring install-prometheus install-grafana 133 | @for node_name in $(smarm_nodes) $(swarm_master_name); do \ 134 | printf "\e[33m*** \e installing cadvisor @ $$node_name ... \e[33m***\e[0m\n"; \ 135 | docker $(swarm_master_config) run -d \ 136 | --name cadvisor-$$node_name \ 137 | --restart=always \ 138 | --volume=/:/rootfs:ro \ 139 | --volume=/var/run:/var/run:rw \ 140 | --volume=/sys:/sys:ro \ 141 | --volume=/var/lib/docker/:/var/lib/docker:ro \ 142 | --detach=true \ 143 | -e SERVICE_NAME=monitoring \ 144 | --net=$(monitoring_overlay_network_name) \ 145 | -e constraint:node_name==$$node_name \ 146 | google/cadvisor:latest; \ 147 | done 148 | 149 | install-registrator: init-vars init-nodes-vars init-swarm-master-vars 150 | @for node_name in $(smarm_nodes) $(swarm_master_name); do \ 151 | printf "\e[33m*** \e installing registrator @ $$node_name ... \e[33m***\e[0m\n"; \ 152 | docker $(swarm_master_config) run -d \ 153 | -v /var/run/docker.sock:/tmp/docker.sock \ 154 | --restart=always \ 155 | --name registrator-$$node_name \ 156 | --net=host \ 157 | -e constraint:node_name==$$node_name \ 158 | sourcestream/registrator -internal consul://$(infra_node_ip_private):8500; \ 159 | done 160 | 161 | install-prometheus: init-vars init-swarm-master-vars 162 | @printf "\e[33m*** \e installing prometheus @ $(swarm_master_name) ... \e[33m***\e[0m\n"; \ 163 | docker $(swarm_master_config) run -d \ 164 | -p $(smarm_master_ip_private):9090:9090 \ 165 | -p 127.0.0.1:9090:9090 \ 166 | --name prometheus \ 167 | --restart=always \ 168 | --net=$(monitoring_overlay_network_name) \ 169 | -e SERVICE_NAME=prometheus \ 170 | --dns $(infra_node_ip_private) \ 171 | -e constraint:node_name==$(swarm_master_name) \ 172 | sourcestream/prometheus-sd 173 | 174 | remove-prometheus: init-vars init-swarm-master-vars 175 | docker rm -f prometheus 176 | docker rmi sourcestream/prometheus-sd 177 | 178 | all: create-infra create-master create-nodes install-registrator install-logging install-monitoring display-admin-urls 179 | 180 | display-admin-urls: init-vars init-swarm-master-vars 181 | @printf "\e[33m" 182 | @printf "consul http://$(infra_node_ip_private):8500/ \n" 183 | @printf "kibana http://$(infra_node_ip_private):5601/ \n" 184 | @printf "prometheus http://$(smarm_master_ip_private):9090/ \n" 185 | @printf "grafana http://$(smarm_master_ip_private):3000/ \n" 186 | @printf "to map those ports to your localhost run \n" 187 | @printf "make ssh-tunnels" 188 | @printf " \e[0m\n" 189 | 190 | destroy-all: init-nodes-vars 191 | docker-machine rm $(smarm_nodes) $(infra_node_name) $(swarm_master_name) 192 | 193 | install-grafana: init-swarm-master-vars 194 | docker $(swarm_master_config) run -d \ 195 | -p $(smarm_master_ip_private):3000:3000 \ 196 | -p 127.0.0.1:3000:3000 \ 197 | --net=$(monitoring_overlay_network_name) \ 198 | --name grafana \ 199 | -e SERVICE_NAME=grafana \ 200 | -e constraint:node_name==$(swarm_master_name) \ 201 | grafana/grafana 202 | 203 | ssh-tunnels: init-vars init-swarm-master-vars 204 | @# consul 205 | docker-machine ssh $(infra_node_name) -N -L 8500:localhost:8500 & 206 | @printf "\x1B[35mconsul http://localhost:8500/ \e[0m\n" 207 | @# kibana 208 | docker-machine ssh $(infra_node_name) -N -L 5601:localhost:5601 & 209 | @printf "\x1B[35mkibana http://localhost:5601/ \e[0m\n" 210 | @# prometheus 211 | docker-machine ssh $(swarm_master_name) -N -L 9090:localhost:9090 & 212 | @printf "\x1B[35mprometheus http://localhost:9090/ \e[0m\n" 213 | @# grafana 214 | docker-machine ssh $(swarm_master_name) -N -L 3000:localhost:3000 & 215 | @printf "\x1B[35mgrafana http://localhost:3000/ \e[0m\n" 216 | --------------------------------------------------------------------------------