├── README.md ├── ansible ├── .ansible.cfg ├── docker.yml ├── kafka.yml ├── monitoring.yml └── requirements.yml ├── app ├── Dockerfile └── main.go ├── grafana └── api-metrics-dashboard.json ├── load ├── ammo.txt ├── data ├── load.yaml ├── makeammo.py └── token.txt └── terraform ├── .gitignore ├── dns.tf ├── instance-group.tf ├── instances.tf ├── loadbalancer.tf ├── main.tf ├── meta.txt ├── network.tf ├── registry.tf ├── service-accounts.tf ├── spec.yml └── vars.auto.tfvars /README.md: -------------------------------------------------------------------------------- 1 | # Events API Yandex Cloud 2 | 3 | A collection of tools used at rebrainme's webinar with yandex cloud. 4 | 5 | # Tools! 6 | 7 | - **terraform** - terraform files for the container registry, service accounts, instances, instance group, load balancer and cloudflare's dns 8 | - **ansible** - ansible playbooks for docker, kafka, grafana, prometheus 9 | - **app** - a small application server that receives json events over http and pushes them to kafka (should be rewritten with fasthttp and some optimizations). 
10 | - **grafana** - dashboards used at the webinar 11 | - **load** - yandex tank configuration for load testing 12 | 13 | # Tested versions 14 | - terraform: 15 | ```sh 16 | $ terraform version 17 | Terraform v0.12.18 18 | ``` 19 | 20 | - ansible: 21 | ```sh 22 | $ ansible --version 23 | ansible 2.9.4 24 | ``` 25 | 26 | - golang: 1.13.8 from docker golang:latest 27 | 28 | - yandex tank: 29 | ```sh 30 | # docker run -v /home/cloud-user/load/:/var/loadtest/ --net host -it direvius/yandex-tank --version 31 | No handlers could be found for logger "netort.resource" 32 | YandexTank/1.12.1 33 | ``` 34 | 35 | # Terraform 36 | To run terraform you should create a **private.auto.tfvars** file with the following content: 37 | ```ini 38 | yc_token = "YANDEX CLOUD OAUTH TOKEN" yc_region = "YANDEX CLOUD REGION (i.e. ru-central1-a)" 39 | yc_cloud_id = "YANDEX CLOUD ID" 40 | yc_folder_id = "YANDEX CLOUD FOLDER ID" 41 | cf_email = "CLOUDFLARE EMAIL" 42 | cf_token = "CLOUDFLARE TOKEN" 43 | cf_zone_id = "CLOUDFLARE ZONE ID" 44 | ``` 45 | 46 | Then just run: 47 | ```sh 48 | $ terraform apply 49 | ``` 50 | 51 | # Ansible 52 | In fact, ansible is not used directly - it is run by terraform to provision the kafka, build and monitoring hosts. Anyway, you have to download all the roles from ansible galaxy first: 53 | ```sh 54 | $ ansible-galaxy install -r requirements.yml 55 | ``` 56 | 57 | Also, you can use the provided .ansible.cfg file. 58 | 59 | # Load 60 | First of all, sign up at [yandex overload](http://overload.yandex.net) if you want to get some visualisation on the web. Once you're in, obtain your api key and write it to the **token.txt** file. 
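Each entry in **ammo.txt** uses the phantom length-prefixed format: the byte length of the raw HTTP request, a space, a case tag, a newline, then the request itself. A minimal sketch of how one entry is built, mirroring what **makeammo.py** does with the sample payload from the **data** file (the function name here is illustrative):

```python
def make_ammo_entry(method, url, host, case, body):
    """Build one phantom ammo entry: '<request length> <case>\\n<raw HTTP request>'."""
    request = (
        "%s %s HTTP/1.1\r\n"
        "Host: %s\r\n"
        "User-Agent: tank\r\n"
        "Accept: */*\r\n"
        "Connection: Close\r\n"
        "Content-Length: %d\r\n"
        "\r\n"
        "%s\r\n"
    ) % (method, url, host, len(body), body)
    # Prefix with the byte length of the request and the case tag
    return "%d %s\n%s" % (len(request), case, request)

print(make_ammo_entry("POST", "/post", "events.kis.im", "json", '{"key1":"data1"}'))
# first line of the output: 132 json
```

In the repo this is done in bulk: `./makeammo.py < data > ammo.txt` reads the `||`-separated lines from **data** and writes ready-to-use ammo to stdout.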
61 | 62 | After that you can easily run a load test using docker: 63 | ```sh 64 | docker run -v $(pwd):/var/loadtest/ --net host -it direvius/yandex-tank -c load.yaml ammo.txt 65 | ``` 66 | 67 | **ammo.txt** can be generated with ammo generators - [Official doc](https://yandextank.readthedocs.io/en/latest/ammo_generators.html) 68 | 69 | # Grafana 70 | Just import the dashboard into your grafana instance and select the proper prometheus data source - that's it! 71 | 72 | # Application 73 | Use docker to build the container: 74 | ```sh 75 | $ docker build -t app . 76 | ``` 77 | 78 | Then you can run it with: 79 | ```sh 80 | $ docker run -d --name api -p 8080:8080 app 81 | ``` 82 | 83 | Some flags are used by the app: 84 | - **-addr** - address to listen on. By default - :8080 85 | - **-kafka** - comma-separated list of kafka brokers to use (i.e. 10.0.0.1:9092,10.0.0.2:9092). By default - 127.0.0.1:9092 86 | 87 | License 88 | ---- 89 | MIT 90 | 91 | Copyright (c) 2020 Vasiliy I Ozerov, Rebrain 92 | 93 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 94 | 95 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 96 | 97 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 98 | 99 | -------------------------------------------------------------------------------- /ansible/.ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | timeout=60 3 | host_key_checking = False 4 | allow_world_readable_tmpfiles=true 5 | [ssh_connection] 6 | pipelining = True 7 | ssh_args = -o ControlMaster=auto -o ControlPersist=300s -o ForwardAgent=yes 8 | control_path = %(directory)s/%%h-%%p-%%r 9 | [diff] 10 | always = true 11 | -------------------------------------------------------------------------------- /ansible/docker.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: all 3 | become: true 4 | roles: 5 | - name: geerlingguy.docker 6 | -------------------------------------------------------------------------------- /ansible/kafka.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: all 3 | 4 | pre_tasks: 5 | - name: Update apt 6 | become: yes 7 | apt: 8 | cache_valid_time: 1800 9 | update_cache: yes 10 | 11 | roles: 12 | - name: sansible.zookeeper 13 | - name: sansible.kafka 14 | 15 | vars: 16 | sansible_kafka_server_properties: 17 | broker.id: 1 18 | listeners: "PLAINTEXT://0.0.0.0:{{ sansible_kafka_port }}" 19 | advertised.listeners: "PLAINTEXT://{{ ansible_default_ipv4['address'] }}:{{ sansible_kafka_port }}" 20 | log.dirs: /home/kafka/data 21 | -------------------------------------------------------------------------------- /ansible/monitoring.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: all 3 | become: true 4 | roles: 5 | - cloudalchemy-prometheus 6 | - 
cloudalchemy-grafana 7 | vars: 8 | prometheus_global: 9 | scrape_interval: 2s 10 | scrape_timeout: 2s 11 | evaluation_interval: 10s 12 | grafana_security: 13 | admin_user: admin 14 | admin_password: "Password" 15 | 16 | 17 | -------------------------------------------------------------------------------- /ansible/requirements.yml: -------------------------------------------------------------------------------- 1 | - src: https://github.com/cloudalchemy/ansible-prometheus.git 2 | version: master 3 | scm: git 4 | name: cloudalchemy-prometheus 5 | 6 | - src: https://github.com/cloudalchemy/ansible-grafana.git 7 | version: master 8 | scm: git 9 | name: cloudalchemy-grafana 10 | 11 | - src: https://github.com/sansible/kafka.git 12 | version: master 13 | scm: git 14 | name: sansible.kafka 15 | 16 | - src: https://github.com/sansible/zookeeper.git 17 | version: master 18 | scm: git 19 | name: sansible.zookeeper 20 | 21 | - src: https://github.com/geerlingguy/ansible-role-docker.git 22 | version: master 23 | scm: git 24 | name: geerlingguy.docker 25 | -------------------------------------------------------------------------------- /app/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM golang:latest AS build 2 | 3 | WORKDIR /go/src/app 4 | COPY . . 
5 | 6 | ENV CGO_ENABLED=0 7 | RUN go get && go build -o app main.go 8 | 9 | FROM alpine:latest 10 | 11 | WORKDIR /app 12 | COPY --from=build /go/src/app/app /app/ 13 | 14 | CMD ["/app/app"] 15 | -------------------------------------------------------------------------------- /app/main.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "encoding/json" 5 | "flag" 6 | "io" 7 | "io/ioutil" 8 | "log" 9 | "net/http" 10 | "strings" 11 | 12 | "github.com/Shopify/sarama" 13 | "github.com/prometheus/client_golang/prometheus" 14 | "github.com/prometheus/client_golang/prometheus/promhttp" 15 | ) 16 | 17 | var ( 18 | // Config options 19 | addr = flag.String("addr", ":8080", "TCP address to listen to") 20 | kafka = flag.String("kafka", "127.0.0.1:9092", "Kafka endpoints") 21 | producer sarama.SyncProducer 22 | 23 | // Declaring prometheus metrics 24 | apiDurations = prometheus.NewSummary( 25 | prometheus.SummaryOpts{ 26 | Name: "api_durations_seconds", 27 | Help: "API duration seconds", 28 | Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001}, 29 | }, 30 | ) 31 | 32 | apiRequestsCount = prometheus.NewCounter( 33 | prometheus.CounterOpts{ 34 | Name: "api_requests_count", 35 | Help: "API requests count", 36 | }) 37 | 38 | apiErrorsCount = prometheus.NewCounterVec( 39 | prometheus.CounterOpts{ 40 | Name: "api_errors_count", 41 | Help: "API errors count", 42 | }, 43 | []string{"type"}, 44 | ) 45 | ) 46 | 47 | func init() { 48 | // Registering prometheus metrics 49 | prometheus.MustRegister(apiDurations) 50 | prometheus.MustRegister(apiRequestsCount) 51 | prometheus.MustRegister(apiErrorsCount) 52 | prometheus.MustRegister(prometheus.NewBuildInfoCollector()) 53 | } 54 | 55 | func main() { 56 | var err error 57 | flag.Parse() 58 | 59 | 60 | log.Printf("Got kafka addr: %s\n", *kafka) 61 | // Setup producer 62 | config := sarama.NewConfig() 63 | config.Producer.RequiredAcks = sarama.WaitForLocal 64 | 
config.Producer.Compression = sarama.CompressionSnappy 65 | config.Producer.Return.Successes = true 66 | brokers := strings.Split(*kafka, ",") 67 | producer, err = sarama.NewSyncProducer(brokers, config) 68 | if err != nil { 69 | panic(err) 70 | } 71 | defer producer.Close() 72 | 73 | http.HandleFunc("/status", statusHandlerFunc) 74 | http.HandleFunc("/post", postHandlerFunc) 75 | http.Handle("/metrics", promhttp.Handler()) 76 | 77 | if err := http.ListenAndServe(*addr, nil); err != nil { 78 | log.Fatalf("Error in ListenAndServe: %s", err) 79 | } 80 | } 81 | 82 | // STATUS method 83 | func statusHandlerFunc(w http.ResponseWriter, r *http.Request) { 84 | io.WriteString(w, "ok\n") 85 | } 86 | 87 | func isJSON(s []byte) bool { 88 | var js map[string]interface{} 89 | return json.Unmarshal(s, &js) == nil 90 | } 91 | 92 | // POST Method 93 | func postHandlerFunc(w http.ResponseWriter, r *http.Request) { 94 | // Incrementing requests count metric 95 | apiRequestsCount.Inc() 96 | 97 | // Observing request time 98 | timer := prometheus.NewTimer(apiDurations) 99 | defer timer.ObserveDuration() 100 | 101 | // Reading post body 102 | body, err := ioutil.ReadAll(r.Body) 103 | if err != nil { 104 | apiErrorsCount.WithLabelValues("body").Inc() 105 | log.Printf("Error reading body: %v", err) 106 | http.Error(w, "Can't read body", http.StatusBadRequest) 107 | return 108 | } 109 | 110 | // Checking if json is correct 111 | if !isJSON(body) { 112 | apiErrorsCount.WithLabelValues("json").Inc() 113 | log.Printf("Invalid json provided") 114 | http.Error(w, "Can't parse json", http.StatusBadRequest) 115 | return 116 | } 117 | 118 | // Posting data to kafka 119 | msg := &sarama.ProducerMessage{Topic: "loader", Value: sarama.ByteEncoder(body)} 120 | partition, offset, err := producer.SendMessage(msg) 121 | if err != nil { 122 | apiErrorsCount.WithLabelValues("kafka").Inc() 123 | log.Printf("Kafka Error: %s\n", err) 124 | http.Error(w, "Can't post to kafka", http.StatusInternalServerError) 
125 | return 126 | } 127 | 128 | // Writing response 129 | response := struct { 130 | Status string `json:"status"` 131 | Partition int32 `json:"partition"` 132 | Offset int64 `json:"offset"` 133 | }{"ok", partition, offset} 134 | w.Header().Set("Content-Type", "application/json") 135 | json.NewEncoder(w).Encode(response) 136 | } 137 | -------------------------------------------------------------------------------- /grafana/api-metrics-dashboard.json: -------------------------------------------------------------------------------- 1 | { 2 | "__inputs": [ 3 | { 4 | "name": "DS_PROMETHEUS", 5 | "label": "Prometheus", 6 | "description": "", 7 | "type": "datasource", 8 | "pluginId": "prometheus", 9 | "pluginName": "Prometheus" 10 | } 11 | ], 12 | "__requires": [ 13 | { 14 | "type": "grafana", 15 | "id": "grafana", 16 | "name": "Grafana", 17 | "version": "6.6.2" 18 | }, 19 | { 20 | "type": "panel", 21 | "id": "graph", 22 | "name": "Graph", 23 | "version": "" 24 | }, 25 | { 26 | "type": "datasource", 27 | "id": "prometheus", 28 | "name": "Prometheus", 29 | "version": "1.0.0" 30 | } 31 | ], 32 | "annotations": { 33 | "list": [ 34 | { 35 | "builtIn": 1, 36 | "datasource": "-- Grafana --", 37 | "enable": true, 38 | "hide": true, 39 | "iconColor": "rgba(0, 211, 255, 1)", 40 | "name": "Annotations & Alerts", 41 | "type": "dashboard" 42 | } 43 | ] 44 | }, 45 | "editable": true, 46 | "gnetId": null, 47 | "graphTooltip": 0, 48 | "id": null, 49 | "links": [], 50 | "panels": [ 51 | { 52 | "aliasColors": {}, 53 | "bars": false, 54 | "dashLength": 10, 55 | "dashes": false, 56 | "datasource": "${DS_PROMETHEUS}", 57 | "fill": 1, 58 | "fillGradient": 0, 59 | "gridPos": { 60 | "h": 9, 61 | "w": 12, 62 | "x": 0, 63 | "y": 0 64 | }, 65 | "hiddenSeries": false, 66 | "id": 5, 67 | "legend": { 68 | "avg": false, 69 | "current": false, 70 | "max": false, 71 | "min": false, 72 | "show": true, 73 | "total": false, 74 | "values": false 75 | }, 76 | "lines": true, 77 | "linewidth": 1, 78 | 
"nullPointMode": "null", 79 | "options": { 80 | "dataLinks": [] 81 | }, 82 | "percentage": false, 83 | "pointradius": 2, 84 | "points": false, 85 | "renderer": "flot", 86 | "seriesOverrides": [], 87 | "spaceLength": 10, 88 | "stack": false, 89 | "steppedLine": false, 90 | "targets": [ 91 | { 92 | "expr": "api_durations_seconds", 93 | "legendFormat": "{{ instance }} {{ quantile }}", 94 | "refId": "A" 95 | } 96 | ], 97 | "thresholds": [], 98 | "timeFrom": null, 99 | "timeRegions": [], 100 | "timeShift": null, 101 | "title": "API Durations", 102 | "tooltip": { 103 | "shared": true, 104 | "sort": 0, 105 | "value_type": "individual" 106 | }, 107 | "type": "graph", 108 | "xaxis": { 109 | "buckets": null, 110 | "mode": "time", 111 | "name": null, 112 | "show": true, 113 | "values": [] 114 | }, 115 | "yaxes": [ 116 | { 117 | "format": "short", 118 | "label": null, 119 | "logBase": 1, 120 | "max": null, 121 | "min": null, 122 | "show": true 123 | }, 124 | { 125 | "format": "short", 126 | "label": null, 127 | "logBase": 1, 128 | "max": null, 129 | "min": null, 130 | "show": true 131 | } 132 | ], 133 | "yaxis": { 134 | "align": false, 135 | "alignLevel": null 136 | } 137 | }, 138 | { 139 | "aliasColors": {}, 140 | "bars": false, 141 | "dashLength": 10, 142 | "dashes": false, 143 | "datasource": "${DS_PROMETHEUS}", 144 | "fill": 1, 145 | "fillGradient": 0, 146 | "gridPos": { 147 | "h": 9, 148 | "w": 12, 149 | "x": 12, 150 | "y": 0 151 | }, 152 | "hiddenSeries": false, 153 | "id": 3, 154 | "legend": { 155 | "avg": false, 156 | "current": false, 157 | "max": false, 158 | "min": false, 159 | "show": true, 160 | "total": false, 161 | "values": false 162 | }, 163 | "lines": true, 164 | "linewidth": 1, 165 | "nullPointMode": "null", 166 | "options": { 167 | "dataLinks": [] 168 | }, 169 | "percentage": false, 170 | "pointradius": 2, 171 | "points": false, 172 | "renderer": "flot", 173 | "seriesOverrides": [], 174 | "spaceLength": 10, 175 | "stack": false, 176 | "steppedLine": false, 
177 | "targets": [ 178 | { 179 | "expr": "irate(api_errors_count[1m])", 180 | "legendFormat": "{{ instance }} {{ type }}", 181 | "refId": "A" 182 | } 183 | ], 184 | "thresholds": [], 185 | "timeFrom": null, 186 | "timeRegions": [], 187 | "timeShift": null, 188 | "title": "Errors Count", 189 | "tooltip": { 190 | "shared": true, 191 | "sort": 0, 192 | "value_type": "individual" 193 | }, 194 | "type": "graph", 195 | "xaxis": { 196 | "buckets": null, 197 | "mode": "time", 198 | "name": null, 199 | "show": true, 200 | "values": [] 201 | }, 202 | "yaxes": [ 203 | { 204 | "format": "short", 205 | "label": null, 206 | "logBase": 1, 207 | "max": null, 208 | "min": null, 209 | "show": true 210 | }, 211 | { 212 | "format": "short", 213 | "label": null, 214 | "logBase": 1, 215 | "max": null, 216 | "min": null, 217 | "show": true 218 | } 219 | ], 220 | "yaxis": { 221 | "align": false, 222 | "alignLevel": null 223 | } 224 | }, 225 | { 226 | "aliasColors": {}, 227 | "bars": false, 228 | "dashLength": 10, 229 | "dashes": false, 230 | "datasource": "${DS_PROMETHEUS}", 231 | "fill": 1, 232 | "fillGradient": 0, 233 | "gridPos": { 234 | "h": 8, 235 | "w": 24, 236 | "x": 0, 237 | "y": 9 238 | }, 239 | "hiddenSeries": false, 240 | "id": 2, 241 | "legend": { 242 | "avg": false, 243 | "current": false, 244 | "max": false, 245 | "min": false, 246 | "show": true, 247 | "total": false, 248 | "values": false 249 | }, 250 | "lines": true, 251 | "linewidth": 1, 252 | "nullPointMode": "null", 253 | "options": { 254 | "dataLinks": [] 255 | }, 256 | "percentage": false, 257 | "pointradius": 2, 258 | "points": false, 259 | "renderer": "flot", 260 | "seriesOverrides": [], 261 | "spaceLength": 10, 262 | "stack": false, 263 | "steppedLine": false, 264 | "targets": [ 265 | { 266 | "expr": "irate(api_requests_count[1m])", 267 | "legendFormat": "{{ instance }}", 268 | "refId": "A" 269 | } 270 | ], 271 | "thresholds": [], 272 | "timeFrom": null, 273 | "timeRegions": [], 274 | "timeShift": null, 275 | 
"title": "Requests Count", 276 | "tooltip": { 277 | "shared": true, 278 | "sort": 0, 279 | "value_type": "individual" 280 | }, 281 | "type": "graph", 282 | "xaxis": { 283 | "buckets": null, 284 | "mode": "time", 285 | "name": null, 286 | "show": true, 287 | "values": [] 288 | }, 289 | "yaxes": [ 290 | { 291 | "format": "short", 292 | "label": null, 293 | "logBase": 1, 294 | "max": null, 295 | "min": null, 296 | "show": true 297 | }, 298 | { 299 | "format": "short", 300 | "label": null, 301 | "logBase": 1, 302 | "max": null, 303 | "min": null, 304 | "show": true 305 | } 306 | ], 307 | "yaxis": { 308 | "align": false, 309 | "alignLevel": null 310 | } 311 | } 312 | ], 313 | "schemaVersion": 22, 314 | "style": "dark", 315 | "tags": [], 316 | "templating": { 317 | "list": [] 318 | }, 319 | "time": { 320 | "from": "now-15m", 321 | "to": "now" 322 | }, 323 | "timepicker": { 324 | "refresh_intervals": [ 325 | "5s", 326 | "10s", 327 | "30s", 328 | "1m", 329 | "5m", 330 | "15m", 331 | "30m", 332 | "1h", 333 | "2h", 334 | "1d" 335 | ] 336 | }, 337 | "timezone": "", 338 | "title": "API Metrics", 339 | "uid": "Z9lsGjwZk", 340 | "version": 2 341 | } -------------------------------------------------------------------------------- /load/ammo.txt: -------------------------------------------------------------------------------- 1 | 132 json 2 | POST /post HTTP/1.1 3 | Host: events.kis.im 4 | User-Agent: tank 5 | Accept: */* 6 | Connection: Close 7 | Content-Length: 16 8 | 9 | {"key1":"data1"} 10 | -------------------------------------------------------------------------------- /load/data: -------------------------------------------------------------------------------- 1 | POST||/post||json||{"key1":"data1"} 2 | -------------------------------------------------------------------------------- /load/load.yaml: -------------------------------------------------------------------------------- 1 | overload: 2 | enabled: true 3 | package: yandextank.plugins.DataUploader 4 | token_file: 
"token.txt" 5 | phantom: 6 | address: events.kis.im 7 | load_profile: 8 | load_type: rps 9 | schedule: const(15000, 3m) 10 | console: 11 | enabled: true 12 | telegraf: 13 | enabled: false 14 | -------------------------------------------------------------------------------- /load/makeammo.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | import sys 5 | 6 | def make_ammo(method, url, headers, case, body): 7 | """ makes phantom ammo """ 8 | #http request w/o entity body template 9 | req_template = ( 10 | "%s %s HTTP/1.1\r\n" 11 | "%s\r\n" 12 | "\r\n" 13 | ) 14 | 15 | #http request with entity body template 16 | req_template_w_entity_body = ( 17 | "%s %s HTTP/1.1\r\n" 18 | "%s\r\n" 19 | "Content-Length: %d\r\n" 20 | "\r\n" 21 | "%s\r\n" 22 | ) 23 | 24 | if not body: 25 | req = req_template % (method, url, headers) 26 | else: 27 | req = req_template_w_entity_body % (method, url, headers, len(body), body) 28 | 29 | #phantom ammo template 30 | ammo_template = ( 31 | "%d %s\n" 32 | "%s" 33 | ) 34 | 35 | return ammo_template % (len(req), case, req) 36 | 37 | def main(): 38 | for stdin_line in sys.stdin: 39 | try: 40 | method, url, case, body = stdin_line.split("||") 41 | body = body.strip() 42 | except ValueError: 43 | method, url, case = stdin_line.split("||") 44 | body = None 45 | 46 | method, url, case = method.strip(), url.strip(), case.strip() 47 | 48 | headers = "Host: events.kis.im\r\n" + \ 49 | "User-Agent: tank\r\n" + \ 50 | "Accept: */*\r\n" + \ 51 | "Connection: Close" 52 | 53 | sys.stdout.write(make_ammo(method, url, headers, case, body)) 54 | 55 | if __name__ == "__main__": 56 | main() 57 | 58 | -------------------------------------------------------------------------------- /load/token.txt: -------------------------------------------------------------------------------- 1 | OVERLOAD API TOKEN HERE 2 | 
-------------------------------------------------------------------------------- /terraform/.gitignore: -------------------------------------------------------------------------------- 1 | *swp 2 | private.auto.tfvars 3 | .terraform 4 | terraform.tfstate 5 | terraform.tfstate.backup 6 | -------------------------------------------------------------------------------- /terraform/dns.tf: -------------------------------------------------------------------------------- 1 | resource "cloudflare_record" "eye" { 2 | zone_id = var.cf_zone_id 3 | name = "eye" 4 | value = yandex_compute_instance.monitoring.network_interface.0.nat_ip_address 5 | type = "A" 6 | ttl = 1 7 | proxied = true 8 | } 9 | 10 | resource "cloudflare_record" "build" { 11 | zone_id = var.cf_zone_id 12 | name = "build" 13 | value = yandex_compute_instance.build.network_interface.0.nat_ip_address 14 | type = "A" 15 | ttl = 300 16 | proxied = false 17 | } 18 | 19 | resource "cloudflare_record" "kafka" { 20 | zone_id = var.cf_zone_id 21 | name = "kafka" 22 | value = yandex_compute_instance.kafka.network_interface.0.nat_ip_address 23 | type = "A" 24 | ttl = 300 25 | proxied = false 26 | } 27 | 28 | resource "cloudflare_record" "events" { 29 | zone_id = var.cf_zone_id 30 | name = "events" 31 | value = [ 32 | for v in yandex_lb_network_load_balancer.events_api_lb.listener: 33 | v.external_address_spec.0.address if v.name == "events-api-listener" 34 | ][0] 35 | type = "A" 36 | ttl = 1 37 | proxied = true 38 | } 39 | 40 | -------------------------------------------------------------------------------- /terraform/instance-group.tf: -------------------------------------------------------------------------------- 1 | data "yandex_compute_image" "container-optimized-image" { 2 | family = "container-optimized-image" 3 | } 4 | 5 | resource "yandex_compute_instance_group" "events_api_ig" { 6 | name = "events-api-ig" 7 | service_account_id = yandex_iam_service_account.instances.id 8 | 9 | instance_template { 10 | 
platform_id = "standard-v2" 11 | resources { 12 | memory = 2 13 | cores = 2 14 | } 15 | boot_disk { 16 | mode = "READ_WRITE" 17 | initialize_params { 18 | image_id = data.yandex_compute_image.container-optimized-image.id 19 | size = 10 20 | } 21 | } 22 | network_interface { 23 | network_id = yandex_vpc_network.internal.id 24 | subnet_ids = [yandex_vpc_subnet.internal-a.id, yandex_vpc_subnet.internal-b.id, yandex_vpc_subnet.internal-c.id] 25 | nat = true 26 | } 27 | 28 | metadata = { 29 | docker-container-declaration = file("spec.yml") 30 | ssh-keys = "ubuntu:${file("~/.ssh/id_rsa.pub")}" 31 | } 32 | service_account_id = yandex_iam_service_account.docker.id 33 | } 34 | 35 | scale_policy { 36 | auto_scale { 37 | initial_size = 3 38 | measurement_duration = 60 39 | cpu_utilization_target = 60 40 | min_zone_size = 1 41 | max_size = 6 42 | warmup_duration = 60 43 | stabilization_duration = 180 44 | } 45 | } 46 | 47 | allocation_policy { 48 | zones = ["ru-central1-a", "ru-central1-b", "ru-central1-c"] 49 | } 50 | 51 | deploy_policy { 52 | max_unavailable = 1 53 | max_creating = 1 54 | max_expansion = 1 55 | max_deleting = 1 56 | } 57 | 58 | load_balancer { 59 | target_group_name = "events-api-tg" 60 | } 61 | } 62 | -------------------------------------------------------------------------------- /terraform/instances.tf: -------------------------------------------------------------------------------- 1 | data "yandex_compute_image" "ubuntu_image" { 2 | family = "ubuntu-1804-lts" 3 | } 4 | 5 | resource "yandex_compute_instance" "build" { 6 | name = "build" 7 | zone = "ru-central1-a" 8 | hostname = "build" 9 | platform_id = "standard-v1" 10 | 11 | resources { 12 | cores = 1 13 | memory = 2 14 | } 15 | 16 | boot_disk { 17 | initialize_params { 18 | image_id = data.yandex_compute_image.ubuntu_image.id 19 | size = 10 20 | } 21 | } 22 | 23 | network_interface { 24 | subnet_id = yandex_vpc_subnet.internal-a.id 25 | nat = true 26 | } 27 | 28 | metadata = { 29 | ssh-keys = 
"ubuntu:${file("~/.ssh/id_rsa.pub")}" 30 | } 31 | 32 | # remote-exec will wait for ssh up and running, after that local-exec will come into play 33 | # XXX By default requires SSH agent to be running 34 | provisioner "remote-exec" { 35 | inline = ["# Connected!"] 36 | connection { 37 | host = self.network_interface.0.nat_ip_address 38 | user = "ubuntu" 39 | } 40 | } 41 | 42 | provisioner "local-exec" { 43 | working_dir = var.ansible_workdir 44 | environment = { 45 | ANSIBLE_HOST_KEY_CHECKING = "False" 46 | } 47 | command = "ansible-playbook -u ubuntu -i '${self.network_interface.0.nat_ip_address},' docker.yml" 48 | } 49 | } 50 | 51 | resource "yandex_compute_instance" "monitoring" { 52 | name = "monitoring" 53 | zone = "ru-central1-a" 54 | hostname = "monitoring" 55 | platform_id = "standard-v1" 56 | 57 | resources { 58 | cores = 1 59 | memory = 2 60 | } 61 | 62 | boot_disk { 63 | initialize_params { 64 | image_id = data.yandex_compute_image.ubuntu_image.id 65 | size = 10 66 | } 67 | } 68 | 69 | network_interface { 70 | subnet_id = yandex_vpc_subnet.internal-a.id 71 | nat = true 72 | } 73 | 74 | metadata = { 75 | ssh-keys = "ubuntu:${file("~/.ssh/id_rsa.pub")}" 76 | } 77 | 78 | # remote-exec will wait for ssh up and running, after that local-exec will come into play 79 | # XXX By default requires SSH agent to be running 80 | provisioner "remote-exec" { 81 | inline = ["# Connected!"] 82 | connection { 83 | host = self.network_interface.0.nat_ip_address 84 | user = "ubuntu" 85 | } 86 | } 87 | 88 | provisioner "local-exec" { 89 | working_dir = var.ansible_workdir 90 | environment = { 91 | ANSIBLE_HOST_KEY_CHECKING = "False" 92 | } 93 | command = "ansible-playbook -u ubuntu -i '${self.network_interface.0.nat_ip_address},' monitoring.yml" 94 | } 95 | } 96 | 97 | resource "yandex_compute_instance" "kafka" { 98 | name = "kafka" 99 | zone = "ru-central1-a" 100 | hostname = "kafka" 101 | platform_id = "standard-v2" 102 | 103 | resources { 104 | cores = 2 105 | memory = 8 
106 | } 107 | 108 | boot_disk { 109 | initialize_params { 110 | image_id = data.yandex_compute_image.ubuntu_image.id 111 | size = 50 112 | } 113 | } 114 | 115 | network_interface { 116 | subnet_id = yandex_vpc_subnet.internal-a.id 117 | nat = true 118 | } 119 | 120 | metadata = { 121 | ssh-keys = "ubuntu:${file("~/.ssh/id_rsa.pub")}" 122 | } 123 | 124 | # remote-exec will wait for ssh up and running, after that local-exec will come into play 125 | # XXX By default requires SSH agent to be running 126 | provisioner "remote-exec" { 127 | inline = ["# Connected!"] 128 | connection { 129 | host = self.network_interface.0.nat_ip_address 130 | user = "ubuntu" 131 | } 132 | } 133 | 134 | provisioner "local-exec" { 135 | working_dir = var.ansible_workdir 136 | environment = { 137 | ANSIBLE_HOST_KEY_CHECKING = "False" 138 | } 139 | command = "ansible-playbook -u ubuntu -i '${self.network_interface.0.nat_ip_address},' kafka.yml" 140 | } 141 | } 142 | -------------------------------------------------------------------------------- /terraform/loadbalancer.tf: -------------------------------------------------------------------------------- 1 | data "yandex_lb_target_group" "events_api_tg" { 2 | name = yandex_compute_instance_group.events_api_ig.load_balancer[0].target_group_name 3 | } 4 | 5 | resource "yandex_lb_network_load_balancer" "events_api_lb" { 6 | name = "events-api-lb" 7 | 8 | listener { 9 | name = "events-api-listener" 10 | port = 80 11 | target_port = 8080 12 | external_address_spec { 13 | ip_version = "ipv4" 14 | } 15 | } 16 | 17 | attached_target_group { 18 | target_group_id = data.yandex_lb_target_group.events_api_tg.id 19 | 20 | healthcheck { 21 | name = "http" 22 | http_options { 23 | port = 8080 24 | path = "/status" 25 | } 26 | } 27 | } 28 | } 29 | 30 | output "ls" { 31 | value = [ 32 | for v in yandex_lb_network_load_balancer.events_api_lb.listener: 33 | v.external_address_spec.0.address if v.name == "events-api-listener" 34 | ][0] 35 | } 36 | 
-------------------------------------------------------------------------------- /terraform/main.tf: -------------------------------------------------------------------------------- 1 | # Variables 2 | variable "ansible_workdir" { 3 | type = string 4 | description = "Path to Ansible workdir where provisioner tasks are located (i.e. ../ansible)" 5 | } 6 | 7 | ## Yandex Cloud 8 | variable "yc_token" { 9 | type = string 10 | description = "Yandex Cloud API key" 11 | } 12 | variable "yc_region" { 13 | type = string 14 | description = "Yandex Cloud Region (i.e. ru-central1-a)" 15 | } 16 | variable "yc_cloud_id" { 17 | type = string 18 | description = "Yandex Cloud id" 19 | } 20 | variable "yc_folder_id" { 21 | type = string 22 | description = "Yandex Cloud folder id" 23 | } 24 | 25 | variable "cf_email" { 26 | type = string 27 | description = "Cloudflare email" 28 | } 29 | 30 | variable "cf_token" { 31 | type = string 32 | description = "Cloudflare api token" 33 | } 34 | 35 | variable "cf_zone_id" { 36 | type = string 37 | description = "CF zone id" 38 | } 39 | 40 | #----- 41 | 42 | # Provider 43 | provider "yandex" { 44 | token = var.yc_token 45 | cloud_id = var.yc_cloud_id 46 | folder_id = var.yc_folder_id 47 | zone = var.yc_region 48 | } 49 | 50 | provider "cloudflare" { 51 | email = var.cf_email 52 | api_key = var.cf_token 53 | } 54 | -------------------------------------------------------------------------------- /terraform/meta.txt: -------------------------------------------------------------------------------- 1 | users: 2 | - name: vozerov 3 | groups: sudo 4 | shell: /bin/bash 5 | sudo: ['ALL=(ALL) NOPASSWD:ALL'] 6 | ssh-authorized-keys: 7 | - ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQCtCGJinXDwKlenUFSsGWisxaF0DcNiV8utVEmfka3se0CW2yNIr1vDCeJxPdAlxGG0O/28WLWuTArLiDsTJjYqN+VchXLAfQmCn+5A313RjZlJK0KSxgL8IP6Q68diUzuCKdisjhKIKISTCzfL7uTKE0JEsUsF7bfSYvOEBtYFLWD1XH4qmi6HeFpSsnr7EPVC7FZzFdOgpZ6Vm6fwS7ygdVLapm6SH1lkNm0Xnt3WjaPkZaWRcAmUxC0RWT36QNwbBKTVLaFbfuF4FBeuBm7cwt2a/nnQzpbk9lVkeZYhFxH/5Zi8Uv8LYYJwTBvYzXaceDdH2zSxYwltiCTaKiYl vozerov@mba 8 | -------------------------------------------------------------------------------- /terraform/network.tf: -------------------------------------------------------------------------------- 1 | resource "yandex_vpc_network" "internal" { 2 | name = "internal" 3 | } 4 | 5 | resource "yandex_vpc_subnet" "internal-a" { 6 | name = "internal-a" 7 | zone = "ru-central1-a" 8 | network_id = yandex_vpc_network.internal.id 9 | v4_cidr_blocks = ["172.16.1.0/24"] 10 | } 11 | 12 | resource "yandex_vpc_subnet" "internal-b" { 13 | name = "internal-b" 14 | zone = "ru-central1-b" 15 | network_id = yandex_vpc_network.internal.id 16 | v4_cidr_blocks = ["172.16.2.0/24"] 17 | } 18 | 19 | resource "yandex_vpc_subnet" "internal-c" { 20 | name = "internal-c" 21 | zone = "ru-central1-c" 22 | network_id = yandex_vpc_network.internal.id 23 | v4_cidr_blocks = ["172.16.3.0/24"] 24 | } 25 | -------------------------------------------------------------------------------- /terraform/registry.tf: -------------------------------------------------------------------------------- 1 | resource "yandex_container_registry" "events" { 2 | name = "events" 3 | } 4 | 5 | -------------------------------------------------------------------------------- /terraform/service-accounts.tf: -------------------------------------------------------------------------------- 1 | resource "yandex_iam_service_account" "docker" { 2 | name = "docker" 3 | description = "service account to use container registry" 4 | } 5 | 6 | resource "yandex_iam_service_account" "instances" { 7 | name = "instances" 8 | description = "service account to manage VMs" 9 
| } 10 | 11 | resource "yandex_resourcemanager_folder_iam_binding" "editor" { 12 | folder_id = var.yc_folder_id 13 | 14 | role = "editor" 15 | 16 | members = [ 17 | "serviceAccount:${yandex_iam_service_account.instances.id}", 18 | ] 19 | } 20 | 21 | resource "yandex_resourcemanager_folder_iam_binding" "puller" { 22 | folder_id = var.yc_folder_id 23 | 24 | role = "container-registry.images.puller" 25 | 26 | members = [ 27 | "serviceAccount:${yandex_iam_service_account.docker.id}", 28 | ] 29 | } 30 | 31 | resource "yandex_resourcemanager_folder_iam_binding" "pusher" { 32 | folder_id = var.yc_folder_id 33 | 34 | role = "container-registry.images.pusher" 35 | 36 | members = [ 37 | "serviceAccount:${yandex_iam_service_account.docker.id}", 38 | ] 39 | } 40 | -------------------------------------------------------------------------------- /terraform/spec.yml: -------------------------------------------------------------------------------- 1 | spec: 2 | containers: 3 | - name: api 4 | image: vozerov/events-api:v1 5 | command: 6 | - /app/app 7 | args: 8 | - -kafka=kafka.ru-central1.internal:9092 9 | securityContext: 10 | privileged: false 11 | tty: false 12 | stdin: false 13 | restartPolicy: Always 14 | -------------------------------------------------------------------------------- /terraform/vars.auto.tfvars: -------------------------------------------------------------------------------- 1 | ansible_workdir = "../ansible" 2 | --------------------------------------------------------------------------------