├── .gitignore ├── Readme.md ├── backup ├── backup-container-data.sh ├── backup-images.sh ├── backup-volumes.sh └── restore-volumes.sh ├── docs ├── middleware.png ├── portainer.png ├── structure.png ├── traefikui.png └── whoami.png ├── local_server ├── Makefile ├── Readme.md ├── docker-compose-another.yml ├── docker-compose.yml ├── traefik_certs │ └── .gitkeep └── traefik_conf │ └── certificates.toml ├── perks ├── airflow │ ├── .env │ ├── Makefile │ └── docker-compose.yml ├── awx │ ├── SECRET_KEY │ ├── credentials.py │ ├── docker-compose.yml │ ├── environment.sh │ ├── nginx.conf │ └── redis.conf ├── borgbackup │ └── docker-compose.yml ├── cachetio │ ├── cachet-patch │ │ ├── Dockerfile │ │ ├── Makefile │ │ └── entrypoint.sh │ └── docker-compose.yml ├── calibre │ └── docker-compose.yml ├── clickhouse-tabix │ └── docker-compose.yml ├── docker-compose.yml ├── docker-registry-pull-through │ ├── README.md │ └── docker-compose.yml ├── docker-registry │ ├── docker-compose.yml │ └── registry │ │ └── config.yml ├── freshrss │ └── docker-compose.yml ├── grafana-loki │ ├── Makefile │ ├── docker-compose.yml │ ├── loki │ │ └── loki-config.yaml │ └── promtail │ │ └── promtail-config.yaml ├── heimdall │ └── docker-compose.yml ├── ivre │ └── docker-compose.yml ├── jaeger │ ├── Readme.md │ └── docker-compose.yml ├── konga │ ├── Makefile │ ├── docker-compose.yml │ └── konga │ │ ├── connections.js │ │ └── users.js ├── logseq-renderers │ └── docker-compose.yml ├── loki-cloud │ ├── docker-compose.yml │ └── promtail │ │ └── config.yml ├── metabase │ ├── Readme.md │ └── docker-compose.yml ├── monitoring │ ├── alertmanager │ │ ├── config.yml │ │ └── template │ │ │ └── notifications.tmpl │ ├── blackboxexporter │ │ ├── config.full.yml │ │ ├── config.small.yml │ │ └── config.yml │ ├── config.monitoring │ ├── docker-compose.yml │ ├── grafana │ │ ├── dashboards │ │ │ ├── http-ping.json │ │ │ └── system-monitor.json │ │ └── provisioning │ │ │ ├── dashboards │ │ │ └── all.yml │ │ │ └── datasources │ │ │ └── all.yml │ └── prometheus │ │ ├── Readme.md │ │ ├── alert.rules │ │ └── prometheus.yml ├── next-cloud │ ├── docker-compose.yml │ └── postgres_init │ │ └── initdb.sh ├── nexus │ ├── README.md │ ├── docker-compose.yml │ ├── docs │ │ ├── create-docker-proxy.json │ │ ├── docker-registry-nexus-01.png │ │ ├── docker-registry-nexus-02.png │ │ ├── dockerinfo.png │ │ ├── groupconfiguration.webp │ │ ├── proxyprivateconfiguration.webp │ │ └── showinginnexus.png │ └── nexus_registry.sh ├── ollama-intel-arc │ ├── Dockerfile │ ├── README.md │ ├── docker-compose.yml │ └── scripts │ │ └── serve.sh ├── openldap │ ├── README.md │ └── docker-compose.yml ├── openzipking │ └── docker-compose.yml ├── photoprism │ └── docker-compose.yml ├── prefecthq │ ├── Makefile │ ├── README.md │ └── docker-compose.yml ├── sonarcube │ ├── Makefile │ ├── README.md │ ├── docker-compose.yml │ └── unrestricted_client_body_size.conf ├── spark │ └── docker-compose.yml ├── teedy │ └── docker-compose.yml ├── tubearchivist │ └── docker-compose.yml ├── ubooquity │ └── docker-compose.yml ├── vorta │ └── docker-compose.yml └── warden │ ├── README.md │ └── docker-compose.yml └── public_server ├── Makefile ├── docker-compose.yml ├── letsencrypt └── .gitkeep ├── traefik_certs └── .gitkeep └── traefik_conf └── certificates.toml /.gitignore: -------------------------------------------------------------------------------- 1 | perks/airflow/config 2 | perks/airflow/dags 3 | perks/airflow/logs 4 | perks/airflow/plugins 5 | 
-------------------------------------------------------------------------------- /Readme.md: --------------------------------------------------------------------------------
1 | # Unobtrusive local development with traefik2, docker and letsencrypt
2 | 
3 | Developing sites that use subdomains can be a tedious task to set up locally, because localhost is not a fully qualified domain name. You cannot simply go to example.localhost; you either keep editing /etc/hosts or set up
4 | dnsmasq on your local network.
5 | 
6 | The solutions I used most often so far were http://lvh.me - a free service that resolves itself, along with all subdomains, to localhost -
7 | and the similar service nip.io, which encodes an ip address into a domain name such as 192-168-1-250.nip.io and thus allows you to
8 | generate a letsencrypt certificate for that domain.
9 | 
10 | This worked well for half a decade, but as more and more of my clients moved to pull request reviews and, in parallel, chose docker or
11 | kubernetes as a platform, I found myself increasingly limited by the classic solutions above.
12 | 
13 | What I wanted was a reproducible way to set up my own, branded local development environment that I can easily replicate
14 | on a remote server for pull request review scenarios, with some reasonable perks such as green seal certificates, troubleshooting capabilities and so on.
15 | 
16 | Let's take a look at the approach I now use for clients running dockerized applications (standalone or swarm).
17 | 
18 | ## Choosing components for the solution
19 | 
20 | Cooking components:
21 | 
22 | a) docker - classic or in swarm mode. (On my work notebook I run docker in swarm mode; that does not stop it from being used as a standalone docker environment too.)
23 | 
24 | b) traefik2 and its official docker image. Traefik is an open-source edge router that makes publishing your services a fun and easy experience. It receives requests on behalf of your system and finds out which components are responsible for handling them. Of particular interest to me are two of the supported backends - docker and kubernetes.
25 | 
26 | c) A development subdomain. For the solution to work nicely we need to dedicate a subdomain to development, for example: `*.lvh.voronenko.net` - resolving to localhost, or `*.preview.project.net` - a wildcard domain pointing to your staging server. If you want brand-neutral naming, you can use `*.fiks.im` as well.
27 | 
28 | d) A wildcard certificate from letsencrypt to get the green seal. Although on a remote box traefik can take care of obtaining a green seal certificate for each exposed fqdn for you, if you have the possibility to pregenerate a wildcard certificate - why not?
29 | 
30 | For a domain resolving to localhost, like the mentioned `*.lvh.voronenko.net`, you need to take care of the certificate on your own.
31 | New! If you want pre-generated green seal certificates for local development, feel free to use the `*.fiks.im` domain that resolves to localhost. Pregenerated certificates are published in the Voronenko/fiks.im releases (see `local_server/Makefile` for how they are fetched).
32 | 
33 | For generating letsencrypt certificates my current tool of choice is acme.sh - a zero-dependency shell tool. It supports a number of dns providers, and generating a wildcard certificate can be as simple as running a short shell command.
34 | 
35 | ```sh
36 | 
37 | acme.sh --issue --dns dns_gd -d lvh.voronenko.net -d "*.lvh.voronenko.net"
38 | ```
39 | 
40 | Note that the tool also takes care of renewing the certificate when necessary.
41 | Installing the certificates into the target folder is also as simple as executing a shell command:
42 | 
43 | ```sh
44 | ~/.acme.sh/acme.sh --install-cert -d "${DOMAIN_NAME}" \
45 |   --cert-file $TARGET/cert.pem \
46 |   --key-file $TARGET/privkey.pem \
47 |   --fullchain-file $TARGET/fullchain.pem
48 | ```
49 | 
50 | In the worst case (if you have not told acme.sh what command to run on certificate renewal) you would need to copy the new set of certs yourself roughly every 90 days (letsencrypt certificates are valid for 90 days).
51 | 
52 | e) Some tool to inspect what is happening with docker. There are many good console tools, but if you prefer one with a web ui, portainer is definitely one of the best.
53 | 
54 | Now that we have discussed the components, let's combine them:
55 | 
56 | ## Local setup - instance wide traefik setup for local development
57 | 
58 | ![alt](https://github.com/Voronenko/traefik2-compose-template/raw/master/docs/structure.png)
59 | 
60 | UPD: Now with the brand-neutral domain - fiks.im (guaranteed until 2033) - pre-generated green seal certificates are available. See `local_server/Makefile` for how to fetch them.
61 | 
62 | We need:
63 | 
64 | `traefik_certs` folder, where we store the pregenerated wildcard certificate (you might have more than one); traefik also stores information about its own certificates here.
65 | 
66 | `traefik_conf` folder, where we store the parts of the traefik configuration we want to keep out of docker-compose.
67 | 
68 | By default, `certificates.toml` tells traefik that we have one pregenerated certificate, which can be found under the specified path in the certs folder.
69 | 
70 | ```toml
71 | [[tls.certificates]] #first certificate
72 |   certFile = "/certs/fullchain.pem"
73 |   keyFile = "/certs/privkey.pem"
74 | 
75 | #[[tls.certificates]] #second certificate
76 | #  certFile = "/path/to/other.cert"
77 | #  keyFile = "/path/to/other.key"
78 | ```
79 | 
80 | A controlling `Makefile`, which helps you create the shared public docker network used by traefik to look for exposed services, plus the up/down routines:
81 | 
82 | ```makefile
83 | 
84 | create-traefik-network-once:
85 | 	docker network create --attachable traefik-public
86 | up:
87 | 	docker-compose up -d
88 | down:
89 | 	docker-compose down
90 | ```
91 | 
92 | And the heart of the construction: the main traefik docker-compose file. To keep the story short, I comment on the most important parts of the configuration inline.
93 | 
94 | ```yaml
95 | 
96 | version: '3.4'
97 | services:
98 |   traefik:
99 |     image: traefik:v2.10.4
100 |     # on my notebook traefik serves on the default http/https ports;
101 |     # this allows natural urls like https://app.fiks.im/
102 |     ports:
103 |       - 80:80
104 |       - 443:443
105 |     # the setup can easily be transformed into a swarm deploy
106 |     deploy:
107 |       replicas: 1
108 |       placement:
109 |         constraints:
110 |           - node.role == manager
111 |         preferences:
112 |           - spread: node.id
113 |       labels: []
114 |     # /certs and /conf are dedicated, well-known directories in the
115 |     # official traefik image.
116 |     # to allow traefik to discover the docker backend we
117 |     # map the docker socket inside.
118 |     volumes:
119 |       - /var/run/docker.sock:/var/run/docker.sock:ro
120 |       - ./traefik_certs:/certs
121 |       - ./traefik_conf:/conf
122 |     restart: always
123 |     # if traefik is supposed to serve a real swarm cluster, the swarm mode param should be added:
124 |     # --providers.docker.swarmmode=true
125 |     command:
126 |       - "--api.insecure=true"
127 |       - "--providers.docker=true"
128 |       - "--providers.docker.watch=true"
129 |       - "--providers.docker.exposedbydefault=false"
130 |       - "--providers.file.directory=/conf/"
131 |       - "--providers.file.watch=true"
132 |       - "--entrypoints.web.address=:80"
133 |       - "--entrypoints.websecure.address=:443"
134 |       - "--log.level=DEBUG"
135 |       - "--accessLog"
136 |       - "--api"
137 |       - "--metrics"
138 |       - "--metrics.prometheus"
139 |       - "--providers.docker.network=traefik-public"
140 |     networks:
141 |       - default
142 |       - traefik-public
143 |     # traefik configuration via labels:
144 |     # expose the traefik dashboard under https://traefik.fiks.im
145 |     labels:
146 |       - "traefik.enable=true"
147 |       - "traefik.http.routers.traefik.rule=Host(`traefik.fiks.im`)"
148 |       - "traefik.http.routers.traefik.entrypoints=websecure"
149 |       - "traefik.http.routers.traefik.tls.certresolver=letsencryptresolver"
150 |       - "traefik.http.routers.traefik.service=api@internal"
151 |       - "traefik.http.routers.traefik.tls=true"
152 |   # docker management UI to be exposed under
153 |   # https://docker.fiks.im
154 |   portainer:
155 |     image: portainer/portainer
156 |     restart: always
157 |     volumes:
158 |       - /var/run/docker.sock:/var/run/docker.sock
159 |       - portainer_data:/data
160 |     labels:
161 |       - "traefik.enable=true"
162 |       - "traefik.http.routers.portainer.rule=Host(`docker.fiks.im`)"
163 |       - "traefik.http.routers.portainer.entrypoints=web"
164 |       - "traefik.http.routers.portainer-secure.rule=Host(`docker.fiks.im`)"
165 |       - "traefik.http.routers.portainer-secure.entrypoints=websecure"
166 |       - "traefik.http.routers.portainer-secure.tls=true"
167 | volumes:
168 |   portainer_data:
169 | networks:
170 |   traefik-public:
171 |     external: true
172 | ```
173 | 
174 | At this point, if you start the setup, i.e. `docker-compose up -d`, you already get:
175 | 
176 | a) the traefik UI at https://traefik.fiks.im/ - not much you can change there, but a nice bird's eye view
177 | of the currently detected services. Note that the page is served with the nice green seal certificate you configured.
178 | 
179 | ![alt](https://github.com/Voronenko/traefik2-compose-template/raw/master/docs/traefikui.png)
180 | 
181 | b) the Portainer UI at https://docker.fiks.im/
182 | ![alt](https://github.com/Voronenko/traefik2-compose-template/raw/master/docs/portainer.png) which provides detailed insights
183 | into the docker engine running on your machine: containers, services, et cetera.
184 | 
185 | 
186 | Note that the traefik docker-compose file is a fully independent component of your system, with no direct relation to any project you are currently working on. This means it can run on your host permanently, like any other web server.
187 | 
188 | The moment you need to expose a docker project you are working on through traefik, you need to:
189 | 
190 | a) attach the externally exposed service to the `traefik-public` network.
191 | 
192 | b) specify discovery rules, like the fqdn for the service - "Host(`whoami.fiks.im`)"
193 | 
194 | c) provide hints to traefik for any additional configuration,
195 | like redirects, authorization/password protection, etc.
196 | 
197 | Once you launch such a docker-compose file in your project folder, the service immediately gets exposed via traefik.
Moreover, all your teammates working on the project get exactly the same experience, on their own notebooks.
198 | 
199 | 
200 | ```yaml
201 | 
202 |   whoami:
203 |     image: "containous/whoami"
204 |     container_name: "simple-service"
205 |     networks:
206 |       - app
207 |       - traefik-public
208 |     labels:
209 |       - "traefik.enable=true"
210 |       - "traefik.http.routers.whoami.rule=Host(`whoami.fiks.im`)"
211 |       - "traefik.http.routers.whoami.entrypoints=web"
212 |       # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD"
213 |       # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
214 |       # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
215 |       - "traefik.http.routers.traefik-secure.entrypoints=websecure"
216 |       - "traefik.http.routers.traefik-secure.rule=Host(`whoami.fiks.im`)"
217 |       # - "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
218 |       - "traefik.http.routers.traefik-secure.tls=true"
219 | networks:
220 |   traefik-public:
221 |     external: true
222 | 
223 | ```
224 | 
225 | Your services get exposed under dedicated names and served with green seal certificates - an almost identical copy of your production environment.
226 | 
227 | ![alt](https://github.com/Voronenko/traefik2-compose-template/raw/master/docs/whoami.png)
228 | 
229 | 
230 | ## Installing the setup on a public server
231 | 
232 | An instance wide traefik setup for remote development (branch deploys, etc.) is done in a similar way, but you need to protect your intellectual property better (say, you do not want just anyone on the web to preview your branch deploys), and to protect the portainer and traefik dashboards with credentials or even remove them from public access entirely.
233 | 
234 | ![alt](https://github.com/Voronenko/traefik2-compose-template/raw/master/docs/middleware.png)
235 | 
236 | Traefik has multiple middlewares to choose from: https://docs.traefik.io/middlewares/overview/
237 | 
238 | The most reasonable middlewares to consider would be (a minimal BasicAuth sketch follows at the end of this section):
239 | 
240 | BasicAuth https://docs.traefik.io/middlewares/basicauth/
241 | 
242 | DigestAuth https://docs.traefik.io/middlewares/digestauth/
243 | 
244 | IPWhiteList https://docs.traefik.io/middlewares/ipwhitelist/
245 | 
246 | RateLimit https://docs.traefik.io/middlewares/ratelimit/
247 | 
248 | Also, on a public server traefik will take care of generating green seal certificates for you if you have not provided a pregenerated wildcard certificate.
249 | 
250 | The changes compared to the `local_server` setup amount to activating the letsencrypt certificate resolver.
251 | We introduce a newly mapped letsencrypt folder to store the automatically retrieved certificates.
252 | 
253 | ```yaml
254 | 
255 |     volumes:
256 |       - /var/run/docker.sock:/var/run/docker.sock:ro
257 |       - ./traefik_certs:/certs
258 |       - ./traefik_conf:/conf
259 |       - ./letsencrypt:/letsencrypt
260 | ```
261 | 
262 | and we additionally have to activate the letsencrypt acme resolver in the traefik command-line arguments:
263 | 
264 | ```yaml
265 |     command:
266 |       - "--certificatesResolvers.letsencrypt.acme.email="
267 |       - "--certificatesResolvers.letsencrypt.acme.tlsChallenge=true"
268 |       - "--certificatesResolvers.letsencrypt.acme.httpChallenge=true"
269 |       - "--certificatesResolvers.letsencrypt.acme.httpChallenge.entryPoint=web"
270 |       - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
271 | ```
272 | 
273 | However, at least for the host serving branch-deploy previews, I recommend pregenerating a wildcard certificate: it is quite wasteful to obtain a new certificate for every deployed branch (branchname.preview.voronenko.net).
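
Before moving on to per-service certificates, here is a minimal sketch of wiring up the BasicAuth middleware mentioned in the list above, for example to protect the preview host or the dashboards. It assumes `htpasswd` (from apache2-utils) is available; the user name, password and the middleware name `preview-auth` are placeholders for illustration, not part of the repository.

```sh
# generate a htpasswd entry; in docker-compose labels every $ in the hash must be doubled ($$)
htpasswd -nb reviewer S0mePassw0rd | sed -e 's/\$/\$\$/g'
```

```yaml
    labels:
      - "traefik.http.middlewares.preview-auth.basicauth.users=reviewer:$$apr1$$...generated hash..."
      - "traefik.http.routers.traefik-secure.middlewares=preview-auth"
```

The same pair of labels can be attached to the portainer and traefik dashboard routers to keep them away from anonymous visitors.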
275 | 
276 | For services exposed through traefik that require an automatic certificate from letsencrypt, you need to instruct traefik to use letsencrypt for that service:
277 | 
278 | ```yaml
279 |     labels:
280 |       - traefik.http.routers.whoami.tls.certresolver=letsencrypt
281 | ```
282 | 
283 | Full example for an exposed service:
284 | 
285 | ```yaml
286 | 
287 |   whoami:
288 |     image: "containous/whoami"
289 |     container_name: "simple-service"
290 |     labels:
291 |       - "traefik.enable=true"
292 |       - "traefik.http.routers.whoami.rule=Host(`whoami.preview.voronenko.net`)"
293 |       - "traefik.http.routers.whoami.entrypoints=web"
294 |       # - "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
295 |       # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD"
296 |       # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
297 |       # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
298 |       - "traefik.http.routers.traefik-secure.entrypoints=websecure"
299 |       - "traefik.http.routers.traefik-secure.rule=Host(`whoami.preview.voronenko.net`)"
300 |       - "traefik.http.routers.traefik-secure.tls=true"
301 |       - traefik.http.middlewares.whoami.compress=true
302 |       - traefik.http.routers.whoami.tls.certresolver=letsencrypt
303 |     networks:
304 |       - traefik-public
305 | ```
306 | 
307 | 
308 | ## Summary
309 | 
310 | With the described approach you can provide unobtrusive local development with traefik2, docker and letsencrypt for yourself as well as for all your teammates. Startup owners can enforce a "branded" development environment, like `app.lvh.mystartup.domain`.
311 | 
312 | The solution is universal and works nicely with multiple projects, including standalone tools distributed as docker containers.
313 | 
314 | You can easily extend the approach to a public server, implementing a "preview server" with the same components. Traefik and docker also make it possible to introduce pull request previews in a reasonable time (about a day).
315 | 
316 | The examples mentioned in the article can be tried at https://github.com/Voronenko/traefik2-compose-template
317 | 
318 | `local_server` is an example of the development environment on your localhost, i.e. https://someapp.fiks.im/
319 | 
320 | `public_server` is an example of the environment that can be deployed to a public server.
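
As a quick smoke test of the whole chain (a sketch, assuming the `local_server` stack is up, `*.fiks.im` resolves to 127.0.0.1, and the hostnames from the examples above are used):

```sh
# routing + certificate for an exposed service
curl -I https://whoami.fiks.im/
# traefik dashboard behind the websecure entrypoint
curl -I https://traefik.fiks.im/dashboard/
```

If both requests return an HTTP response without certificate warnings, traefik, the wildcard certificate and the docker provider are wired together correctly.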
321 | 322 | -------------------------------------------------------------------------------- /backup/backup-container-data.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | export backup_path=${BACKUP_PATH:-"$PWD/data/"} 4 | ignore_list=("*devops-gitlab-runner-compose*" "volume2") 5 | 6 | echo "Backing up container data (inspection output)" 7 | echo "--------------------------------------------" 8 | 9 | for i in $(docker ps -q | xargs docker inspect --format='{{.Name}}' | cut -f2 -d/) 10 | do container_name=$i 11 | echo -n "$container_name - " 12 | container_data=$(docker inspect "$container_name") 13 | mkdir -p "$backup_path/$container_name" 14 | echo "$container_data" > "$backup_path/$container_name/$container_name-data.txt" 15 | echo "OK" 16 | done 17 | 18 | echo "" 19 | -------------------------------------------------------------------------------- /backup/backup-images.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo "Backing up container images" 4 | echo "---------------------------" 5 | 6 | export backup_path=${BACKUP_PATH:-"$PWD/data/"} 7 | ignore_list=("*devops-gitlab-runner-compose*" "volume2") 8 | 9 | 10 | for i in $(docker ps -q | xargs docker inspect --format='{{.Name}}' | cut -f2 -d/) 11 | do container_name=$i 12 | echo -n "$container_name - " 13 | container_image=$(docker inspect --format='{{.Config.Image}}' "$container_name") 14 | mkdir -p "$backup_path"/"$container_name" 15 | save_file="$backup_path"/"$container_name"/"$container_name-image.tar" 16 | docker save -o "$save_file" "$container_image" 17 | echo "OK" 18 | done 19 | 20 | echo "DONE" 21 | -------------------------------------------------------------------------------- /backup/backup-volumes.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | echo "Backing up volumes" 3 | echo "------------------" 4 | 5 | export backup_path=${BACKUP_PATH:-"$PWD/data/"} 6 | 7 | ignore_list=("*devops-gitlab-runner-compose*" "volume2") 8 | 9 | echo $backuo_path 10 | 11 | # Create the backup directory 12 | mkdir -p "$backup_path"/volumes 13 | 14 | 15 | for volume in $(docker volume ls -q | xargs docker volume inspect -f '"{{.Name}}""{{.Mountpoint}}"') 16 | do 17 | # Get the volume name and path 18 | volume_name=$(echo "$volume" | cut -f2 -d\") 19 | volume_path=$(echo "$volume" | cut -f4 -d\") 20 | 21 | skip_volume=false 22 | for pattern in "${ignore_list[@]}" 23 | do 24 | if [[ $volume_name == $pattern ]]; then 25 | skip_volume=true 26 | break 27 | fi 28 | done 29 | 30 | if $skip_volume; then 31 | echo "Skipping backup for volume: $volume_name (matched pattern: $pattern)" 32 | continue 33 | fi 34 | 35 | 36 | # Backup the volume 37 | echo -n "$volume_name - " 38 | docker run --rm -v "$volume_path":/volume -v "$backup_path"/volumes:/backup busybox tar -cvzf /backup/"$volume_name".tar.gz /volume >/dev/null 2>&1 39 | echo "OK" 40 | done 41 | 42 | echo "" 43 | -------------------------------------------------------------------------------- /backup/restore-volumes.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Path: restore-volumes.sh 4 | # Restore all volumes from the backup 5 | # Inside the backup directory, there is a volumes directory, which contains all the volumes. 
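# Usage sketch (assumption, not part of the original script): this script reads $backup_path,
# so export it or source the backup scripts that set it before running, e.g.
#   export backup_path="$PWD/data" && ./restore-volumes.sh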
6 | 7 | # destructive operation, requires sourcing the files 8 | 9 | echo "Volumes restorations" 10 | echo "------------------" 11 | 12 | # Check if backup path exists 13 | if [ ! -d "$backup_path" ] 14 | then 15 | echo "Error: backup path does not exist" 16 | exit 1 17 | fi 18 | 19 | volumes=$(ls "$backup_path/volumes") 20 | 21 | for volume in $volumes 22 | do 23 | # Get volume name from the file name 24 | volume=$(echo "$volume" | cut -f1 -d.) 25 | 26 | echo -n "$volume - " 27 | docker run --rm -v "$volume":/volume -v "$backup_path"/volumes:/backup busybox sh -c "cd /volume && tar -xvf /backup/$volume.tar.gz --strip 1" >/dev/null 2>&1 28 | echo "OK" 29 | done 30 | -------------------------------------------------------------------------------- /docs/middleware.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Voronenko/traefik2-compose-template/f9da9afa757a29a43e4fdc32de8921a9ae5e2639/docs/middleware.png -------------------------------------------------------------------------------- /docs/portainer.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Voronenko/traefik2-compose-template/f9da9afa757a29a43e4fdc32de8921a9ae5e2639/docs/portainer.png -------------------------------------------------------------------------------- /docs/structure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Voronenko/traefik2-compose-template/f9da9afa757a29a43e4fdc32de8921a9ae5e2639/docs/structure.png -------------------------------------------------------------------------------- /docs/traefikui.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Voronenko/traefik2-compose-template/f9da9afa757a29a43e4fdc32de8921a9ae5e2639/docs/traefikui.png -------------------------------------------------------------------------------- /docs/whoami.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Voronenko/traefik2-compose-template/f9da9afa757a29a43e4fdc32de8921a9ae5e2639/docs/whoami.png -------------------------------------------------------------------------------- /local_server/Makefile: -------------------------------------------------------------------------------- 1 | update-certs: 2 | curl -sLo ./traefik_certs/fiksim_privkey.pem https://github.com/Voronenko/fiks.im/releases/download/$(shell curl -s https://api.github.com/repos/Voronenko/fiks.im/releases/latest | grep tag_name | cut -d '"' -f 4)/fiksim_privkey.pem 3 | curl -sLo ./traefik_certs/fiksim_cert.pem https://github.com/Voronenko/fiks.im/releases/download/$(shell curl -s https://api.github.com/repos/Voronenko/fiks.im/releases/latest | grep tag_name | cut -d '"' -f 4)/fiksim_cert.pem 4 | curl -sLo ./traefik_certs/fiksim_fullchain.pem https://github.com/Voronenko/fiks.im/releases/download/$(shell curl -s https://api.github.com/repos/Voronenko/fiks.im/releases/latest | grep tag_name | cut -d '"' -f 4)/fiksim_fullchain.pem 5 | 6 | create-traefik-network-once: 7 | docker network create --attachable traefik-public 8 | up: 9 | docker-compose up -d 10 | down: 11 | docker-compose down 12 | 13 | 14 | swarm-create-traefik-network-once: 15 | docker network create --driver=overlay --attachable traefik-public 16 | 17 | swarm-up: 18 | docker stack deploy --compose-file=docker-compose.yml traefik 19 | swarm-down: 20 | docker stack 
rm traefik 21 | -------------------------------------------------------------------------------- /local_server/Readme.md: -------------------------------------------------------------------------------- 1 | # Short notes 2 | 3 | to run `docker-compose up -d` 4 | 5 | once executed, on http://traefik.fiks.im:8880/ you can find dashboard. 6 | 7 | Take a look on example of 8 | 9 | (http endpoint on port 880) or < (usual http port on 80) 10 | and 11 | (https endpoint on port 8443) or (usual https port on 443) 12 | 13 | 14 | Traefik serves only containers that share `traefik-public` docker network. 15 | 16 | That introduces some isolation to the application into "public" and "private" parts when needed 17 | 18 | for ideas to enable your service as https and http 19 | 20 | 21 | # Integrating external services 22 | 23 | 24 | ```yaml 25 | 26 | version: '3.4' 27 | services: 28 | whoami2: 29 | image: "containous/whoami" 30 | # container_name: "simple-service2" 31 | networks: 32 | - traefik-public 33 | labels: 34 | - "traefik.enable=true" 35 | # - "traefik.http.middlewares.whoami2-https.redirectscheme.scheme=https" 36 | - "traefik.http.routers.whoami2-http.entrypoints=web" 37 | - "traefik.http.routers.whoami2-http.rule=Host(`whoami2.fiks.im`)" 38 | # - "traefik.http.routers.whoami2-http.middlewares=whoami2-https@docker" 39 | - "traefik.http.routers.whoami2.entrypoints=websecure" 40 | - "traefik.http.routers.whoami2.rule=Host(`whoami2.fiks.im`)" 41 | - "traefik.http.routers.whoami2.tls=true" 42 | - "traefik.http.routers.whoami2.tls.certresolver=default" 43 | networks: 44 | traefik-public: 45 | external: true 46 | ``` 47 | 48 | 49 | # Configuration via files and commandline 50 | 51 | 2BD 52 | 53 | ``` 54 | [log] 55 | level = "ERROR" 56 | 57 | [providers.docker] 58 | exposedByDefault = false 59 | endpoint = "tcp://dockerproxy:2375" 60 | network = "traefik" 61 | 62 | [api] 63 | dashboard = true 64 | debug = true 65 | 66 | [entryPoints] 67 | [entryPoints.web] 68 | address = ":80" 69 | [entryPoints.web-secure] 70 | address = ":443" 71 | [entryPoints.dashboard] 72 | address = ":8080" 73 | 74 | [certificatesResolvers] 75 | [certificatesResolvers.default.acme] 76 | email = "contact@myemail.com" 77 | storage = "acme.json" 78 | [certificatesResolvers.default.acme.tlsChallenge] 79 | ``` 80 | 81 | 82 | -------------------------------------------------------------------------------- /local_server/docker-compose-another.yml: -------------------------------------------------------------------------------- 1 | version: '3.4' 2 | services: 3 | whoami2: 4 | image: "containous/whoami" 5 | # container_name: "simple-service2" 6 | networks: 7 | - traefik-public 8 | labels: 9 | - "traefik.enable=true" 10 | - "traefik.http.middlewares.whoami2-https.redirectscheme.scheme=https" 11 | - "traefik.http.routers.whoami2-http.entrypoints=web" 12 | - "traefik.http.routers.whoami2-http.rule=Host(`whoami2.fiks.im`)" 13 | - "traefik.http.routers.whoami2-http.middlewares=whoami2-https@docker" 14 | - "traefik.http.routers.whoami2.entrypoints=websecure" 15 | - "traefik.http.routers.whoami2.rule=Host(`whoami2.fiks.im`)" 16 | - "traefik.http.routers.whoami2.tls=true" 17 | - "traefik.http.routers.whoami2.tls.certresolver=default" 18 | networks: 19 | traefik-public: 20 | external: true 21 | -------------------------------------------------------------------------------- /local_server/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.4' 2 | services: 3 | traefik: 4 | image: 
traefik:v2.11.20 5 | healthcheck: 6 | test: ["CMD-SHELL", "wget --spider --quiet http://127.0.0.1/ping || exit 1"] 7 | interval: 30s 8 | timeout: 10s 9 | retries: 3 10 | ports: 11 | - 80:80 12 | - 443:443 13 | # - 8080:8080 14 | # - 5432:5432 15 | # - 27017:27017 16 | deploy: 17 | replicas: 1 18 | placement: 19 | constraints: 20 | - node.role == manager 21 | preferences: 22 | - spread: node.id 23 | labels: [] 24 | volumes: 25 | - /var/run/docker.sock:/var/run/docker.sock:ro 26 | - ./traefik_certs:/certs 27 | - ./traefik_conf:/conf 28 | restart: always 29 | # --docker.swarmmode 30 | command: 31 | - "--api.insecure=true" 32 | - "--providers.docker=true" 33 | - "--providers.docker.watch=true" 34 | - "--providers.docker.exposedbydefault=false" 35 | - "--providers.file.directory=/conf/" 36 | - "--providers.file.watch=true" 37 | - "--entrypoints.web.address=:80" 38 | - "--entrypoints.websecure.address=:443" 39 | - "--entrypoints.mongo.address=:27017" 40 | - "--entrypoints.postgres.address=:5432" 41 | - "--log.level=DEBUG" 42 | - "--accessLog" 43 | - "--api" 44 | - "--metrics" 45 | - "--metrics.prometheus" 46 | - "--providers.docker.network=traefik-public" 47 | networks: 48 | - default 49 | - traefik-public 50 | labels: 51 | - "traefik.enable=true" 52 | - "traefik.http.routers.traefik.rule=Host(`traefik.lvh.fiks.im`)" 53 | - "traefik.http.routers.traefik.entrypoints=websecure" 54 | - "traefik.http.routers.traefik.tls.certresolver=letsencryptresolver" 55 | - "traefik.http.routers.traefik.service=api@internal" 56 | - "traefik.http.routers.traefik.tls=true" 57 | # - "traefik.http.routers.traefik.tls.certresolver=letsencryptresolver" 58 | # - "traefik.http.routers.traefik.middlewares=traefik-auth" 59 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=admin:$$apr1$$8EVjn/nj$$GiLUZqcbueT 60 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 61 | # monitoring 62 | - "--ping=true" 63 | - "--ping.entrypoint=web" 64 | whoami: 65 | image: "containous/whoami" 66 | container_name: "simple-service" 67 | labels: 68 | - "traefik.enable=true" 69 | - "traefik.http.routers.whoami.rule=Host(`whoami.fiks.im`)" 70 | - "traefik.http.routers.whoami.entrypoints=web" 71 | 72 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 73 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 74 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 75 | 76 | - "traefik.http.routers.traefik-secure.entrypoints=websecure" 77 | - "traefik.http.routers.traefik-secure.rule=Host(`whoami.fiks.im`)" 78 | # - "traefik.http.routers.traefik-secure.middlewares=traefik-auth" 79 | - "traefik.http.routers.traefik-secure.tls=true" 80 | portainer: 81 | image: portainer/portainer-ce:2.0.0 82 | restart: always 83 | volumes: 84 | - /var/run/docker.sock:/var/run/docker.sock 85 | - portainer_data:/data 86 | # ports: 87 | # - 8000:8000 88 | # - 9000:9000 89 | labels: 90 | - "traefik.enable=true" 91 | - "traefik.http.services.portainer-secure.loadbalancer.server.port=9000" 92 | - "traefik.http.routers.portainer.rule=Host(`docker.fiks.im`)" 93 | - "traefik.http.routers.portainer.entrypoints=web" 94 | - "traefik.http.routers.portainer-secure.rule=Host(`docker.fiks.im`)" 95 | - "traefik.http.routers.portainer-secure.entrypoints=websecure" 96 | - "traefik.http.routers.portainer-secure.tls=true" 97 | # Uncomment this label if your Traefik instance is using a specific Docker network and replace the network name 98 | # This is not necessary if you set your 
default network in Traefik config/parameters 99 | # - "traefik.docker.network=your-traefik-network-name" 100 | volumes: 101 | portainer_data: 102 | networks: 103 | traefik-public: 104 | external: true 105 | -------------------------------------------------------------------------------- /local_server/traefik_certs/.gitkeep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Voronenko/traefik2-compose-template/f9da9afa757a29a43e4fdc32de8921a9ae5e2639/local_server/traefik_certs/.gitkeep -------------------------------------------------------------------------------- /local_server/traefik_conf/certificates.toml: -------------------------------------------------------------------------------- 1 | # Uncomment block if you have placed your certs under /certs 2 | #[[tls.certificates]] #first certificate 3 | # certFile = "/certs/fullchain.pem" 4 | # keyFile = "/certs/privkey.pem" 5 | 6 | #[[tls.certificates]] #second certificate 7 | # certFile = "/path/to/other.cert" 8 | # keyFile = "/path/to/other.key" 9 | -------------------------------------------------------------------------------- /perks/airflow/.env: -------------------------------------------------------------------------------- 1 | AIRFLOW_UID=1000 2 | -------------------------------------------------------------------------------- /perks/airflow/Makefile: -------------------------------------------------------------------------------- 1 | up: 2 | docker-compose up 3 | -------------------------------------------------------------------------------- /perks/airflow/docker-compose.yml: -------------------------------------------------------------------------------- 1 | # Licensed to the Apache Software Foundation (ASF) under one 2 | # or more contributor license agreements. See the NOTICE file 3 | # distributed with this work for additional information 4 | # regarding copyright ownership. The ASF licenses this file 5 | # to you under the Apache License, Version 2.0 (the 6 | # "License"); you may not use this file except in compliance 7 | # with the License. You may obtain a copy of the License at 8 | # 9 | # http://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, 12 | # software distributed under the License is distributed on an 13 | # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | # KIND, either express or implied. See the License for the 15 | # specific language governing permissions and limitations 16 | # under the License. 17 | # 18 | 19 | # Basic Airflow cluster configuration for CeleryExecutor with Redis and PostgreSQL. 20 | # 21 | # WARNING: This configuration is for local development. Do not use it in a production deployment. 22 | # 23 | # This configuration supports basic configuration using environment variables or an .env file 24 | # The following variables are supported: 25 | # 26 | # AIRFLOW_IMAGE_NAME - Docker image name used to run Airflow. 27 | # Default: apache/airflow:2.9.0 28 | # AIRFLOW_UID - User ID in Airflow containers 29 | # Default: 50000 30 | # AIRFLOW_PROJ_DIR - Base path to which all the files will be volumed. 31 | # Default: . 32 | # Those configurations are useful mostly in case of standalone testing/running Airflow in test/try-out mode 33 | # 34 | # _AIRFLOW_WWW_USER_USERNAME - Username for the administrator account (if requested). 35 | # Default: airflow 36 | # _AIRFLOW_WWW_USER_PASSWORD - Password for the administrator account (if requested). 
37 | # Default: airflow 38 | # _PIP_ADDITIONAL_REQUIREMENTS - Additional PIP requirements to add when starting all containers. 39 | # Use this option ONLY for quick checks. Installing requirements at container 40 | # startup is done EVERY TIME the service is started. 41 | # A better way is to build a custom image or extend the official image 42 | # as described in https://airflow.apache.org/docs/docker-stack/build.html. 43 | # Default: '' 44 | # 45 | # Feel free to modify this file to suit your needs. 46 | --- 47 | x-airflow-common: 48 | &airflow-common 49 | # In order to add custom dependencies or upgrade provider packages you can use your extended image. 50 | # Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml 51 | # and uncomment the "build" line below, Then run `docker-compose build` to build the images. 52 | image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.9.0} 53 | # build: . 54 | environment: 55 | &airflow-common-env 56 | AIRFLOW__CORE__EXECUTOR: CeleryExecutor 57 | AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow 58 | AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow 59 | AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0 60 | AIRFLOW__CORE__FERNET_KEY: '' 61 | AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true' 62 | AIRFLOW__CORE__LOAD_EXAMPLES: 'true' 63 | AIRFLOW__API__AUTH_BACKENDS: 'airflow.api.auth.backend.basic_auth,airflow.api.auth.backend.session' 64 | # yamllint disable rule:line-length 65 | # Use simple http server on scheduler for health checks 66 | # See https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/logging-monitoring/check-health.html#scheduler-health-check-server 67 | # yamllint enable rule:line-length 68 | AIRFLOW__SCHEDULER__ENABLE_HEALTH_CHECK: 'true' 69 | # WARNING: Use _PIP_ADDITIONAL_REQUIREMENTS option ONLY for a quick checks 70 | # for other purpose (development, test and especially production usage) build/extend Airflow image. 
71 | _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-} 72 | # The following line can be used to set a custom config file, stored in the local config folder 73 | # If you want to use it, outcomment it and replace airflow.cfg with the name of your config file 74 | # AIRFLOW_CONFIG: '/opt/airflow/config/airflow.cfg' 75 | volumes: 76 | - ${AIRFLOW_PROJ_DIR:-.}/dags:/opt/airflow/dags 77 | - ${AIRFLOW_PROJ_DIR:-.}/logs:/opt/airflow/logs 78 | - ${AIRFLOW_PROJ_DIR:-.}/config:/opt/airflow/config 79 | - ${AIRFLOW_PROJ_DIR:-.}/plugins:/opt/airflow/plugins 80 | user: "${AIRFLOW_UID:-50000}:0" 81 | depends_on: 82 | &airflow-common-depends-on 83 | redis: 84 | condition: service_healthy 85 | postgres: 86 | condition: service_healthy 87 | 88 | services: 89 | postgres: 90 | image: postgres:13 91 | environment: 92 | POSTGRES_USER: airflow 93 | POSTGRES_PASSWORD: airflow 94 | POSTGRES_DB: airflow 95 | volumes: 96 | - postgres-db-volume:/var/lib/postgresql/data 97 | networks: 98 | - airflow 99 | healthcheck: 100 | test: ["CMD", "pg_isready", "-U", "airflow"] 101 | interval: 10s 102 | retries: 5 103 | start_period: 5s 104 | restart: always 105 | 106 | redis: 107 | image: redis:latest 108 | expose: 109 | - 6379 110 | networks: 111 | - airflow 112 | healthcheck: 113 | test: ["CMD", "redis-cli", "ping"] 114 | interval: 10s 115 | timeout: 30s 116 | retries: 50 117 | start_period: 30s 118 | restart: always 119 | 120 | airflow-webserver: 121 | <<: *airflow-common 122 | command: webserver 123 | ports: 124 | - "8080:8080" 125 | healthcheck: 126 | test: ["CMD", "curl", "--fail", "http://localhost:8080/health"] 127 | interval: 30s 128 | timeout: 10s 129 | retries: 5 130 | start_period: 30s 131 | networks: 132 | - airflow 133 | - traefik-public 134 | labels: 135 | - "traefik.enable=true" 136 | - "traefik.http.routers.airflow-web.rule=Host(`airflow.fiks.im`)" 137 | - "traefik.http.routers.airflow-web.entrypoints=web" 138 | 139 | - "traefik.http.services.airflow-web.loadbalancer.server.port=8080" 140 | 141 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 142 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 143 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 144 | 145 | - "traefik.http.routers.airflow-web-secure.entrypoints=websecure" 146 | - "traefik.http.routers.airflow-web-secure.rule=Host(`airflow.fiks.im`)" 147 | # - "traefik.http.routers.airflow-web-secure.middlewares=traefik-auth" 148 | - "traefik.http.routers.airflow-web-secure.tls=true" 149 | 150 | restart: always 151 | depends_on: 152 | <<: *airflow-common-depends-on 153 | airflow-init: 154 | condition: service_completed_successfully 155 | 156 | airflow-scheduler: 157 | <<: *airflow-common 158 | command: scheduler 159 | healthcheck: 160 | test: ["CMD", "curl", "--fail", "http://localhost:8974/health"] 161 | interval: 30s 162 | timeout: 10s 163 | retries: 5 164 | start_period: 30s 165 | restart: always 166 | networks: 167 | - airflow 168 | depends_on: 169 | <<: *airflow-common-depends-on 170 | airflow-init: 171 | condition: service_completed_successfully 172 | 173 | airflow-worker: 174 | <<: *airflow-common 175 | command: celery worker 176 | healthcheck: 177 | # yamllint disable rule:line-length 178 | test: 179 | - "CMD-SHELL" 180 | - 'celery --app airflow.providers.celery.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}" || celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"' 181 | interval: 30s 182 | timeout: 
10s 183 | retries: 5 184 | start_period: 30s 185 | networks: 186 | - airflow 187 | environment: 188 | <<: *airflow-common-env 189 | # Required to handle warm shutdown of the celery workers properly 190 | # See https://airflow.apache.org/docs/docker-stack/entrypoint.html#signal-propagation 191 | DUMB_INIT_SETSID: "0" 192 | restart: always 193 | depends_on: 194 | <<: *airflow-common-depends-on 195 | airflow-init: 196 | condition: service_completed_successfully 197 | 198 | airflow-triggerer: 199 | <<: *airflow-common 200 | command: triggerer 201 | healthcheck: 202 | test: ["CMD-SHELL", 'airflow jobs check --job-type TriggererJob --hostname "$${HOSTNAME}"'] 203 | interval: 30s 204 | timeout: 10s 205 | retries: 5 206 | start_period: 30s 207 | networks: 208 | - airflow 209 | restart: always 210 | depends_on: 211 | <<: *airflow-common-depends-on 212 | airflow-init: 213 | condition: service_completed_successfully 214 | 215 | airflow-init: 216 | <<: *airflow-common 217 | entrypoint: /bin/bash 218 | networks: 219 | - airflow 220 | # yamllint disable rule:line-length 221 | command: 222 | - -c 223 | - | 224 | if [[ -z "${AIRFLOW_UID}" ]]; then 225 | echo 226 | echo -e "\033[1;33mWARNING!!!: AIRFLOW_UID not set!\e[0m" 227 | echo "If you are on Linux, you SHOULD follow the instructions below to set " 228 | echo "AIRFLOW_UID environment variable, otherwise files will be owned by root." 229 | echo "For other operating systems you can get rid of the warning with manually created .env file:" 230 | echo " See: https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#setting-the-right-airflow-user" 231 | echo 232 | fi 233 | one_meg=1048576 234 | mem_available=$$(($$(getconf _PHYS_PAGES) * $$(getconf PAGE_SIZE) / one_meg)) 235 | cpus_available=$$(grep -cE 'cpu[0-9]+' /proc/stat) 236 | disk_available=$$(df / | tail -1 | awk '{print $$4}') 237 | warning_resources="false" 238 | if (( mem_available < 4000 )) ; then 239 | echo 240 | echo -e "\033[1;33mWARNING!!!: Not enough memory available for Docker.\e[0m" 241 | echo "At least 4GB of memory required. You have $$(numfmt --to iec $$((mem_available * one_meg)))" 242 | echo 243 | warning_resources="true" 244 | fi 245 | if (( cpus_available < 2 )); then 246 | echo 247 | echo -e "\033[1;33mWARNING!!!: Not enough CPUS available for Docker.\e[0m" 248 | echo "At least 2 CPUs recommended. You have $${cpus_available}" 249 | echo 250 | warning_resources="true" 251 | fi 252 | if (( disk_available < one_meg * 10 )); then 253 | echo 254 | echo -e "\033[1;33mWARNING!!!: Not enough Disk space available for Docker.\e[0m" 255 | echo "At least 10 GBs recommended. 
You have $$(numfmt --to iec $$((disk_available * 1024 )))" 256 | echo 257 | warning_resources="true" 258 | fi 259 | if [[ $${warning_resources} == "true" ]]; then 260 | echo 261 | echo -e "\033[1;33mWARNING!!!: You have not enough resources to run Airflow (see above)!\e[0m" 262 | echo "Please follow the instructions to increase amount of resources available:" 263 | echo " https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#before-you-begin" 264 | echo 265 | fi 266 | mkdir -p /sources/logs /sources/dags /sources/plugins 267 | chown -R "${AIRFLOW_UID}:0" /sources/{logs,dags,plugins} 268 | exec /entrypoint airflow version 269 | # yamllint enable rule:line-length 270 | environment: 271 | <<: *airflow-common-env 272 | _AIRFLOW_DB_MIGRATE: 'true' 273 | _AIRFLOW_WWW_USER_CREATE: 'true' 274 | _AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow} 275 | _AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow} 276 | _PIP_ADDITIONAL_REQUIREMENTS: '' 277 | user: "0:0" 278 | volumes: 279 | - ${AIRFLOW_PROJ_DIR:-.}:/sources 280 | 281 | airflow-cli: 282 | <<: *airflow-common 283 | profiles: 284 | - debug 285 | networks: 286 | - airflow 287 | environment: 288 | <<: *airflow-common-env 289 | CONNECTION_CHECK_MAX_COUNT: "0" 290 | # Workaround for entrypoint issue. See: https://github.com/apache/airflow/issues/16252 291 | command: 292 | - bash 293 | - -c 294 | - airflow 295 | 296 | # You can enable flower by adding "--profile flower" option e.g. docker-compose --profile flower up 297 | # or by explicitly targeted on the command line e.g. docker-compose up flower. 298 | # See: https://docs.docker.com/compose/profiles/ 299 | flower: 300 | <<: *airflow-common 301 | command: celery flower 302 | profiles: 303 | - flower 304 | networks: 305 | - airflow 306 | - traefik-public 307 | ports: 308 | - "5555:5555" 309 | healthcheck: 310 | test: ["CMD", "curl", "--fail", "http://localhost:5555/"] 311 | interval: 30s 312 | timeout: 10s 313 | retries: 5 314 | start_period: 30s 315 | restart: always 316 | labels: 317 | - "traefik.enable=true" 318 | - "traefik.http.routers.airflow-web.rule=Host(`airflow-flower.fiks.im`)" 319 | - "traefik.http.routers.airflow-web.entrypoints=web" 320 | 321 | - "traefik.http.services.airflow-web.loadbalancer.server.port=5555" 322 | 323 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 324 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 325 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 326 | 327 | - "traefik.http.routers.airflow-web-secure.entrypoints=websecure" 328 | - "traefik.http.routers.airflow-web-secure.rule=Host(`airflow-flower.fiks.im`)" 329 | # - "traefik.http.routers.airflow-web-secure.middlewares=traefik-auth" 330 | - "traefik.http.routers.airflow-web-secure.tls=true" 331 | depends_on: 332 | <<: *airflow-common-depends-on 333 | airflow-init: 334 | condition: service_completed_successfully 335 | 336 | volumes: 337 | postgres-db-volume: 338 | 339 | networks: 340 | airflow: 341 | traefik-public: 342 | external: true 343 | -------------------------------------------------------------------------------- /perks/awx/SECRET_KEY: -------------------------------------------------------------------------------- 1 | awxsecret -------------------------------------------------------------------------------- /perks/awx/credentials.py: -------------------------------------------------------------------------------- 1 | DATABASES = { 2 | 'default': { 3 | 
'ATOMIC_REQUESTS': True, 4 | 'ENGINE': 'django.db.backends.postgresql', 5 | 'NAME': "awx", 6 | 'USER': "awx", 7 | 'PASSWORD': "awxpass", 8 | 'HOST': "postgres", 9 | 'PORT': "5432", 10 | } 11 | } 12 | 13 | BROADCAST_WEBSOCKET_SECRET = "aGxpRTFPS1dZOVJVSG1KeVJqc2Q=" 14 | -------------------------------------------------------------------------------- /perks/awx/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '2' 2 | services: 3 | 4 | web: 5 | image: ansible/awx_web:11.1.0 6 | container_name: awx_web 7 | depends_on: 8 | - redis 9 | - memcached 10 | - postgres 11 | ports: 12 | - "8052:8052" 13 | hostname: awxweb 14 | user: root 15 | restart: unless-stopped 16 | volumes: 17 | - /home/slavko/awx/supervisor-socket:/var/run/supervisor 18 | - /home/slavko/awx/rsyslog-socket:/var/run/awx-rsyslog/ 19 | - /home/slavko/awx/rsyslog-config:/var/lib/awx/rsyslog/ 20 | - "$PWD/SECRET_KEY:/etc/tower/SECRET_KEY" 21 | - "$PWD/environment.sh:/etc/tower/conf.d/environment.sh" 22 | - "$PWD/credentials.py:/etc/tower/conf.d/credentials.py" 23 | - "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" 24 | - "/home/slavko/awx/redis_socket:/var/run/redis/:rw" 25 | - "/home/slavko/awx/memcached_socket:/var/run/memcached/:rw" 26 | - "/home/slavko/awx/projects:/var/lib/awx/projects:rw" 27 | environment: 28 | http_proxy: 29 | https_proxy: 30 | no_proxy: 31 | 32 | task: 33 | image: ansible/awx_task:11.1.0 34 | container_name: awx_task 35 | depends_on: 36 | - redis 37 | - memcached 38 | - web 39 | - postgres 40 | hostname: awx 41 | user: root 42 | restart: unless-stopped 43 | volumes: 44 | - /home/slavko/awx/supervisor-socket:/var/run/supervisor 45 | - /home/slavko/awx/rsyslog-socket:/var/run/awx-rsyslog/ 46 | - /home/slavko/awx/rsyslog-config:/var/lib/awx/rsyslog/ 47 | - "$PWD/SECRET_KEY:/etc/tower/SECRET_KEY" 48 | - "$PWD/environment.sh:/etc/tower/conf.d/environment.sh" 49 | - "$PWD/credentials.py:/etc/tower/conf.d/credentials.py" 50 | - "/home/slavko/awx/redis_socket:/var/run/redis/:rw" 51 | - "/home/slavko/awx/memcached_socket:/var/run/memcached/:rw" 52 | - "/home/slavko/awx/projects:/var/lib/awx/projects:rw" 53 | environment: 54 | http_proxy: 55 | https_proxy: 56 | no_proxy: 57 | SUPERVISOR_WEB_CONFIG_PATH: '/supervisor.conf' 58 | 59 | redis: 60 | image: redis 61 | container_name: awx_redis 62 | restart: unless-stopped 63 | environment: 64 | http_proxy: 65 | https_proxy: 66 | no_proxy: 67 | command: ["/usr/local/etc/redis/redis.conf"] 68 | volumes: 69 | - "$PWD/redis.conf:/usr/local/etc/redis/redis.conf:ro" 70 | - "/home/slavko/awx/redis_socket:/var/run/redis/:rw" 71 | - "/home/slavko/awx/memcached_socket:/var/run/memcached/:rw" 72 | 73 | memcached: 74 | image: "memcached:alpine" 75 | container_name: awx_memcached 76 | command: ["-s", "/var/run/memcached/memcached.sock", "-a", "0666"] 77 | restart: unless-stopped 78 | environment: 79 | http_proxy: 80 | https_proxy: 81 | no_proxy: 82 | volumes: 83 | - "/home/slavko/awx/memcached_socket:/var/run/memcached/:rw" 84 | 85 | postgres: 86 | image: postgres:10 87 | container_name: awx_postgres 88 | restart: unless-stopped 89 | volumes: 90 | - /home/slavko/awx/data/10/data/:/var/lib/postgresql/data/pgdata:Z 91 | environment: 92 | POSTGRES_USER: awx 93 | POSTGRES_PASSWORD: awxpass 94 | POSTGRES_DB: awx 95 | PGDATA: /var/lib/postgresql/data/pgdata 96 | http_proxy: 97 | https_proxy: 98 | no_proxy: 99 | volumes: 100 | supervisor-socket: 101 | rsyslog-socket: 102 | rsyslog-config: 103 | 
-------------------------------------------------------------------------------- /perks/awx/environment.sh: -------------------------------------------------------------------------------- 1 | DATABASE_USER=awx 2 | DATABASE_NAME=awx 3 | DATABASE_HOST=postgres 4 | DATABASE_PORT=5432 5 | DATABASE_PASSWORD=awxpass 6 | AWX_ADMIN_USER=admin 7 | AWX_ADMIN_PASSWORD=password 8 | -------------------------------------------------------------------------------- /perks/awx/nginx.conf: -------------------------------------------------------------------------------- 1 | #user awx; 2 | 3 | worker_processes 1; 4 | 5 | pid /tmp/nginx.pid; 6 | 7 | events { 8 | worker_connections 1024; 9 | } 10 | 11 | http { 12 | include /etc/nginx/mime.types; 13 | default_type application/octet-stream; 14 | server_tokens off; 15 | 16 | log_format main '$remote_addr - $remote_user [$time_local] "$request" ' 17 | '$status $body_bytes_sent "$http_referer" ' 18 | '"$http_user_agent" "$http_x_forwarded_for"'; 19 | 20 | access_log /dev/stdout main; 21 | 22 | map $http_upgrade $connection_upgrade { 23 | default upgrade; 24 | '' close; 25 | } 26 | 27 | sendfile on; 28 | #tcp_nopush on; 29 | #gzip on; 30 | 31 | upstream uwsgi { 32 | server 127.0.0.1:8050; 33 | } 34 | 35 | upstream daphne { 36 | server 127.0.0.1:8051; 37 | } 38 | 39 | 40 | server { 41 | listen 8052 default_server; 42 | 43 | # If you have a domain name, this is where to add it 44 | server_name _; 45 | keepalive_timeout 65; 46 | 47 | # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months) 48 | add_header Strict-Transport-Security max-age=15768000; 49 | add_header Content-Security-Policy "default-src 'self'; connect-src 'self' ws: wss:; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline' *.pendo.io; img-src 'self' *.pendo.io data:; report-uri /csp-violation/"; 50 | add_header X-Content-Security-Policy "default-src 'self'; connect-src 'self' ws: wss:; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline' *.pendo.io; img-src 'self' *.pendo.io data:; report-uri /csp-violation/"; 51 | 52 | # Protect against click-jacking https://www.owasp.org/index.php/Testing_for_Clickjacking_(OTG-CLIENT-009) 53 | add_header X-Frame-Options "DENY"; 54 | 55 | location /nginx_status { 56 | stub_status on; 57 | access_log off; 58 | allow 127.0.0.1; 59 | deny all; 60 | } 61 | 62 | location /static/ { 63 | alias /var/lib/awx/public/static/; 64 | } 65 | 66 | location /favicon.ico { alias /var/lib/awx/public/static/favicon.ico; } 67 | 68 | location /websocket { 69 | # Pass request to the upstream alias 70 | proxy_pass http://daphne; 71 | # Require http version 1.1 to allow for upgrade requests 72 | proxy_http_version 1.1; 73 | # We want proxy_buffering off for proxying to websockets. 
74 | proxy_buffering off; 75 | # http://en.wikipedia.org/wiki/X-Forwarded-For 76 | proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; 77 | # enable this if you use HTTPS: 78 | proxy_set_header X-Forwarded-Proto https; 79 | # pass the Host: header from the client for the sake of redirects 80 | proxy_set_header Host $http_host; 81 | # We've set the Host header, so we don't need Nginx to muddle 82 | # about with redirects 83 | proxy_redirect off; 84 | # Depending on the request value, set the Upgrade and 85 | # connection headers 86 | proxy_set_header Upgrade $http_upgrade; 87 | proxy_set_header Connection $connection_upgrade; 88 | } 89 | 90 | location / { 91 | # Add trailing / if missing 92 | rewrite ^(.*)$http_host(.*[^/])$ $1$http_host$2/ permanent; 93 | uwsgi_read_timeout 120s; 94 | uwsgi_pass uwsgi; 95 | include /etc/nginx/uwsgi_params; proxy_set_header X-Forwarded-Port 443; 96 | uwsgi_param HTTP_X_FORWARDED_PORT 443; 97 | } 98 | } 99 | } 100 | -------------------------------------------------------------------------------- /perks/awx/redis.conf: -------------------------------------------------------------------------------- 1 | unixsocket /var/run/redis/redis.sock 2 | unixsocketperm 777 3 | port 0 4 | bind 127.0.0.1 5 | -------------------------------------------------------------------------------- /perks/borgbackup/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '2.7' 2 | services: 3 | borgsshd: 4 | image: voronenko/nas-backup:latest 5 | # network_mode: host 6 | restart: always 7 | read_only: false 8 | volumes: 9 | - type: volume 10 | source: ssh_host_keys 11 | target: /etc/ssh/host_keys_initial 12 | read_only: false 13 | - type: volume 14 | source: repository 15 | target: /repository 16 | read_only: false 17 | - type: tmpfs 18 | target: /home/borg/.ssh # authorized_keys 19 | tmpfs: 20 | # nosuid,nodev,noexec added by default 21 | # mode: '1777' 22 | size: 16k 23 | - type: tmpfs 24 | # > FileNotFoundError: [Errno 2] No usable temporary directory found [...] 25 | target: /tmp 26 | tmpfs: 27 | # mode: '1777' 28 | size: 1M 29 | ports: 30 | - 2200:2200 31 | # cap_drop: [ALL] 32 | # security_opt: [no-new-privileges] 33 | cpus: 0.8 34 | mem_limit: 128M 35 | 36 | volumes: 37 | ssh_host_keys: 38 | repository: 39 | driver: local 40 | driver_opts: 41 | o: bind 42 | type: none 43 | device: /share/appsdata/borgbackup 44 | ssh_host_keys: 45 | driver_opts: 46 | o: bind 47 | type: none 48 | device: /share/appsdata/borgbackup/config 49 | -------------------------------------------------------------------------------- /perks/cachetio/cachet-patch/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM cachethq/docker:2.3.18 2 | #USER root 3 | #RUN sed -i 's/cachet:install/app:install/g' /sbin/entrypoint.sh 4 | ADD entrypoint.sh /sbin/entrypoint.sh 5 | #USER 1001 6 | -------------------------------------------------------------------------------- /perks/cachetio/cachet-patch/Makefile: -------------------------------------------------------------------------------- 1 | build: 2 | docker build -t voronenko/cachethq:2.3.18 . 
3 | push: 4 | docker push voronenko/cachethq:2.3.18 5 | -------------------------------------------------------------------------------- /perks/cachetio/cachet-patch/entrypoint.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -o errexit -o nounset -o pipefail 3 | 4 | [ "${DEBUG:-false}" == true ] && set -x 5 | 6 | check_database_connection() { 7 | echo "Attempting to connect to database ..." 8 | case "${DB_DRIVER}" in 9 | mysql) 10 | prog="mysqladmin -h ${DB_HOST} -u ${DB_USERNAME} ${DB_PASSWORD:+-p$DB_PASSWORD} -P ${DB_PORT} status" 11 | ;; 12 | pgsql) 13 | prog="/usr/bin/pg_isready" 14 | prog="${prog} -h ${DB_HOST} -p ${DB_PORT} -U ${DB_USERNAME} -d ${DB_DATABASE} -t 1" 15 | ;; 16 | sqlite) 17 | prog="touch /var/www/html/database/database.sqlite" 18 | esac 19 | timeout=60 20 | while ! ${prog} >/dev/null 2>&1 21 | do 22 | timeout=$(( timeout - 1 )) 23 | if [[ "$timeout" -eq 0 ]]; then 24 | echo 25 | echo "Could not connect to database server! Aborting..." 26 | exit 1 27 | fi 28 | echo -n "." 29 | sleep 1 30 | done 31 | echo 32 | } 33 | 34 | checkdbinitmysql() { 35 | table=sessions 36 | if [[ "$(mysql -N -s -h "${DB_HOST}" -u "${DB_USERNAME}" "${DB_PASSWORD:+-p$DB_PASSWORD}" "${DB_DATABASE}" -P "${DB_PORT}" -e \ 37 | "select count(*) from information_schema.tables where \ 38 | table_schema='${DB_DATABASE}' and table_name='${DB_PREFIX}${table}';")" -eq 1 ]]; then 39 | echo "Table ${DB_PREFIX}${table} exists! ..." 40 | else 41 | echo "Table ${DB_PREFIX}${table} does not exist! ..." 42 | init_db 43 | fi 44 | 45 | } 46 | 47 | checkdbinitpsql() { 48 | table=sessions 49 | export PGPASSWORD=${DB_PASSWORD} 50 | if [[ "$(psql -h "${DB_HOST}" -p "${DB_PORT}" -U "${DB_USERNAME}" -d "${DB_DATABASE}" -c "SELECT to_regclass('${DB_PREFIX}${table}');" | grep -c "${DB_PREFIX}${table}")" -eq 1 ]]; then 51 | echo "Table ${DB_PREFIX}${table} exists! ..." 52 | else 53 | echo "Table ${DB_PREFIX}${table} does not exist! ..." 54 | init_db 55 | fi 56 | 57 | } 58 | 59 | check_configured() { 60 | case "${DB_DRIVER}" in 61 | mysql) 62 | checkdbinitmysql 63 | ;; 64 | pgsql) 65 | checkdbinitpsql 66 | ;; 67 | esac 68 | } 69 | 70 | check_sendmail() { 71 | if [[ "${MAIL_DRIVER:-}" == "sendmail" ]]; then 72 | sudo /usr/sbin/postfix start 73 | fi 74 | } 75 | 76 | initialize_system() { 77 | echo "Initializing Cachet container ..." 
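    # The assignments below only provide defaults for variables that were not passed in
    # (the ":-" parameter expansions); the sed calls further down then substitute the
    # {{PLACEHOLDER}} tokens in the shipped /var/www/html/.env template, so values set via
    # docker-compose "environment:" always take precedence over these defaults.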
78 | 79 | APP_KEY=${APP_KEY:-} 80 | APP_ENV=${APP_ENV:-development} 81 | APP_DEBUG=${APP_DEBUG:-true} 82 | APP_URL=${APP_URL:-http://localhost} 83 | APP_LOG=${APP_LOG:-errorlog} 84 | 85 | DB_DRIVER=${DB_DRIVER:-pgsql} 86 | DB_HOST=${DB_HOST:-postgres} 87 | DB_DATABASE=${DB_DATABASE:-cachet} 88 | DB_PREFIX=${DB_PREFIX:-} 89 | DB_USERNAME=${DB_USERNAME:-postgres} 90 | DB_PASSWORD=${DB_PASSWORD:-postgres} 91 | DB_PORT=${DB_PORT:-} 92 | DB_PREFIX=${DB_PREFIX:-} 93 | 94 | if [[ "${DB_DRIVER}" = "pgsql" ]]; then 95 | DB_PORT=${DB_PORT:-5432} 96 | fi 97 | 98 | if [[ "${DB_DRIVER}" = "mysql" ]]; then 99 | DB_PORT=${DB_PORT:-3306} 100 | fi 101 | 102 | if [[ "${DB_DRIVER}" = "sqlite" ]]; then 103 | DB_DATABASE="" 104 | DB_HOST="" 105 | DB_PORT="" 106 | DB_USERNAME="" 107 | DB_PASSWORD="" 108 | fi 109 | 110 | CACHE_DRIVER=${CACHE_DRIVER:-apc} 111 | 112 | SESSION_DRIVER=${SESSION_DRIVER:-apc} 113 | SESSION_DOMAIN=${SESSION_DOMAIN:-} 114 | SESSION_SECURE_COOKIE=${SESSION_SECURE_COOKIE:-} 115 | 116 | QUEUE_DRIVER=${QUEUE_DRIVER:-database} 117 | CACHET_EMOJI=${CACHET_EMOJI:-false} 118 | CACHET_BEACON=${CACHET_BEACON:-true} 119 | CACHET_AUTO_TWITTER=${CACHET_AUTO_TWITTER:-true} 120 | 121 | MAIL_DRIVER=${MAIL_DRIVER:-smtp} 122 | MAIL_HOST=${MAIL_HOST:-localhost} 123 | MAIL_PORT=${MAIL_PORT:-25} 124 | MAIL_USERNAME=${MAIL_USERNAME:-} 125 | MAIL_PASSWORD=${MAIL_PASSWORD:-} 126 | MAIL_ADDRESS=${MAIL_ADDRESS:-} 127 | MAIL_NAME=${MAIL_NAME:-} 128 | MAIL_ENCRYPTION=${MAIL_ENCRYPTION:-} 129 | 130 | REDIS_HOST=${REDIS_HOST:-} 131 | REDIS_DATABASE=${REDIS_DATABASE:-} 132 | REDIS_PORT=${REDIS_PORT:-} 133 | REDIS_PASSWORD=${REDIS_PASSWORD:-} 134 | 135 | GITHUB_TOKEN=${GITHUB_TOKEN:-} 136 | 137 | NEXMO_KEY=${NEXMO_KEY:-} 138 | NEXMO_SECRET=${NEXMO_SECRET:-} 139 | NEXMO_SMS_FROM=${NEXMO_SMS_FROM:-Cachet} 140 | 141 | PHP_MAX_CHILDREN=${PHP_MAX_CHILDREN:-5} 142 | 143 | TRUSTED_PROXIES=${TRUSTED_PROXIES:-} 144 | 145 | # configure env file 146 | 147 | sed 's,{{APP_ENV}},'"${APP_ENV}"',g' -i /var/www/html/.env 148 | sed 's,{{APP_DEBUG}},'"${APP_DEBUG}"',g' -i /var/www/html/.env 149 | sed 's,{{APP_URL}},'"${APP_URL}"',g' -i /var/www/html/.env 150 | sed 's,{{APP_LOG}},'"${APP_LOG}"',g' -i /var/www/html/.env 151 | 152 | sed 's,{{DB_DRIVER}},'"${DB_DRIVER}"',g' -i /var/www/html/.env 153 | sed 's,{{DB_HOST}},'"${DB_HOST}"',g' -i /var/www/html/.env 154 | sed 's,{{DB_DATABASE}},'"${DB_DATABASE}"',g' -i /var/www/html/.env 155 | sed 's,{{DB_PREFIX}},'"${DB_PREFIX}"',g' -i /var/www/html/.env 156 | sed 's,{{DB_USERNAME}},'"${DB_USERNAME}"',g' -i /var/www/html/.env 157 | sed 's,{{DB_PASSWORD}},'"${DB_PASSWORD}"',g' -i /var/www/html/.env 158 | sed 's,{{DB_PORT}},'"${DB_PORT}"',g' -i /var/www/html/.env 159 | sed 's,{{DB_PREFIX}},'"${DB_PREFIX}"',g' -i /var/www/html/.env 160 | 161 | sed 's,{{CACHE_DRIVER}},'"${CACHE_DRIVER}"',g' -i /var/www/html/.env 162 | 163 | sed 's,{{SESSION_DRIVER}},'"${SESSION_DRIVER}"',g' -i /var/www/html/.env 164 | sed 's,{{SESSION_DOMAIN}},'"${SESSION_DOMAIN}"',g' -i /var/www/html/.env 165 | sed 's,{{SESSION_SECURE_COOKIE}},'"${SESSION_SECURE_COOKIE}"',g' -i /var/www/html/.env 166 | 167 | sed 's,{{QUEUE_DRIVER}},'"${QUEUE_DRIVER}"',g' -i /var/www/html/.env 168 | sed 's,{{CACHET_EMOJI}},'"${CACHET_EMOJI}"',g' -i /var/www/html/.env 169 | sed 's,{{CACHET_BEACON}},'"${CACHET_BEACON}"',g' -i /var/www/html/.env 170 | sed 's,{{CACHET_AUTO_TWITTER}},'"${CACHET_AUTO_TWITTER}"',g' -i /var/www/html/.env 171 | 172 | sed 's,{{MAIL_DRIVER}},'"${MAIL_DRIVER}"',g' -i /var/www/html/.env 173 | sed 
's,{{MAIL_HOST}},'"${MAIL_HOST}"',g' -i /var/www/html/.env 174 | sed 's,{{MAIL_PORT}},'"${MAIL_PORT}"',g' -i /var/www/html/.env 175 | sed 's,{{MAIL_USERNAME}},'"${MAIL_USERNAME}"',g' -i /var/www/html/.env 176 | sed 's,{{MAIL_PASSWORD}},'"${MAIL_PASSWORD}"',g' -i /var/www/html/.env 177 | sed 's,{{MAIL_ADDRESS}},'"${MAIL_ADDRESS}"',g' -i /var/www/html/.env 178 | sed 's,{{MAIL_NAME}},'"${MAIL_NAME}"',g' -i /var/www/html/.env 179 | sed 's,{{MAIL_ENCRYPTION}},'"${MAIL_ENCRYPTION}"',g' -i /var/www/html/.env 180 | 181 | sed 's,{{REDIS_HOST}},'"${REDIS_HOST}"',g' -i /var/www/html/.env 182 | sed 's,{{REDIS_DATABASE}},'"${REDIS_DATABASE}"',g' -i /var/www/html/.env 183 | sed 's,{{REDIS_PORT}},'"${REDIS_PORT}"',g' -i /var/www/html/.env 184 | sed 's,{{REDIS_PASSWORD}},'"${REDIS_PASSWORD}"',g' -i /var/www/html/.env 185 | 186 | sed 's,{{GITHUB_TOKEN}},'"${GITHUB_TOKEN}"',g' -i /var/www/html/.env 187 | 188 | sed 's,{{NEXMO_KEY}},'"${NEXMO_KEY}"',g' -i /var/www/html/.env 189 | sed 's,{{NEXMO_SECRET}},'"${NEXMO_SECRET}"',g' -i /var/www/html/.env 190 | sed 's,{{NEXMO_SMS_FROM}},'"${NEXMO_SMS_FROM}"',g' -i /var/www/html/.env 191 | 192 | sed 's,{{PHP_MAX_CHILDREN}},'"${PHP_MAX_CHILDREN}"',g' -i /etc/php7/php-fpm.d/www.conf 193 | 194 | sed 's,{{TRUSTED_PROXIES}},'"${TRUSTED_PROXIES}"',g' -i /var/www/html/.env 195 | 196 | if [[ -z "${APP_KEY}" || "${APP_KEY}" = "null" ]]; then 197 | keygen="$(php artisan key:generate --show)" 198 | APP_KEY=$(echo "${keygen}") 199 | echo "ERROR: Please set the 'APP_KEY=${APP_KEY}' environment variable at runtime or in docker-compose.yml and re-launch" 200 | exit 0 201 | fi 202 | 203 | sed "s,{{APP_KEY}},$APP_KEY,g" -i /var/www/html/.env 204 | 205 | # remove empty lines 206 | sed '/^.*=""$/d' -i /var/www/html/.env 207 | 208 | rm -rf bootstrap/cache/* 209 | } 210 | 211 | init_db() { 212 | echo "Initializing Cachet database ..." 213 | php artisan app:install --no-interaction 214 | check_configured 215 | } 216 | 217 | migrate_db() { 218 | force="" 219 | if [[ "${FORCE_MIGRATION:-false}" == true ]]; then 220 | force="--force" 221 | fi 222 | php artisan migrate ${force} 223 | } 224 | 225 | seed_db() { 226 | php artisan db:seed 227 | } 228 | 229 | start_system() { 230 | initialize_system 231 | check_database_connection 232 | check_configured 233 | migrate_db 234 | seed_db 235 | echo "Starting Cachet! ..." 
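    # The caches are rebuilt only after the .env substitution and migrations above have run,
    # otherwise Laravel would cache stale settings; supervisord -n then keeps the container
    # process in the foreground.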
236 | php artisan config:cache 237 | # BEGIN of the hack to fix PHP 7.2, as suggested in: https://github.com/CachetHQ/Cachet/issues/4132 238 | php artisan route:cache 239 | echo "if(version_compare(PHP_VERSION, '7.2.0', '>=')) { error_reporting(E_ALL ^ E_NOTICE ^ E_WARNING); }" >> bootstrap/cache/routes.php 240 | # END of the hack 241 | /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf 242 | } 243 | 244 | check_sendmail 245 | start_system 246 | 247 | exit 0 248 | -------------------------------------------------------------------------------- /perks/cachetio/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3.7" 2 | 3 | volumes: 4 | cachet_data: {} 5 | pg_data: {} 6 | 7 | services: 8 | db: 9 | image: "postgres:13" 10 | restart: unless-stopped 11 | volumes: 12 | - pg_data:/var/lib/postgresql/data 13 | environment: 14 | - POSTGRES_USER=statuspage 15 | - POSTGRES_PASSWORD=statuspage 16 | - POSTGRES_DB=statuspage 17 | networks: 18 | - cachet 19 | cachet: 20 | # image: cachethq/docker:2.3.18 21 | image: voronenko/cachethq:2.3.18 22 | restart: unless-stopped 23 | networks: 24 | - cachet 25 | - traefik-public 26 | environment: 27 | - DB_DRIVER=pgsql 28 | - DB_HOST=db 29 | - DB_PORT=5432 30 | - DB_DATABASE=statuspage 31 | - APP_DEBUG=true 32 | - APP_KEY="base64:sXwj91YqHuc4vATNN0/9hMgCfWv8lfjRE3eMd7gtYFw=" 33 | - DB_PASSWORD=statuspage 34 | - DB_USERNAME=statuspage 35 | # - MAIL_USERNAME= 36 | # - MAIL_PASSWORD= 37 | # - MAIL_DRIVER= 38 | # - MAIL_HOST= 39 | # - MAIL_PORT= 40 | # - MAIL_ADDRESS= 41 | # - MAIL_NAME= 42 | # - MAIL_ENCRYPTION= 43 | labels: 44 | - "traefik.enable=true" 45 | - "traefik.http.routers.cachet.rule=Host(`cachet.fiks.im`)" 46 | - "traefik.http.routers.cachet.entrypoints=web" 47 | 48 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 49 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 50 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 51 | 52 | - "traefik.http.services.cachet-secure.loadbalancer.server.port=8000" 53 | - "traefik.http.routers.cachet-secure.entrypoints=websecure" 54 | - "traefik.http.routers.cachet-secure.rule=Host(`cachet.fiks.im`)" 55 | # - "traefik.http.routers.loki-secure.middlewares=traefik-auth" 56 | - "traefik.http.routers.cachet-secure.tls=true" 57 | 58 | networks: 59 | cachet: 60 | traefik-public: 61 | external: true 62 | -------------------------------------------------------------------------------- /perks/calibre/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3.7" 2 | services: 3 | calibre: 4 | restart: always 5 | image: lscr.io/linuxserver/calibre:6.4.0 6 | environment: 7 | - PUID=1000 8 | - PGID=1000 9 | - TZ=Europe/Warsaw 10 | volumes: 11 | - /share/data/books/calibre:/config 12 | - /share/Public/BOOKS_INBOX/:/config/inbox 13 | ports: 14 | - 4080:8080 15 | - 4081:8081 16 | restart: always 17 | cops: 18 | image: lscr.io/linuxserver/cops:latest 19 | container_name: cops 20 | environment: 21 | - PUID=1000 22 | - PGID=1000 23 | - TZ=Europe/Warsaw 24 | volumes: 25 | - ./config:/config 26 | - /share/data/books/calibre/CALIBRE:/books 27 | restart: always 28 | networks: 29 | traefik-public: 30 | labels: 31 | - "traefik.enable=true" 32 | - "traefik.http.routers.cops.rule=Host(`calibre.fiks.im`)" 33 | - "traefik.http.routers.cops.entrypoints=web" 34 | - "traefik.http.services.cops.loadbalancer.server.port=80" 35 | 36 | # - 
"traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 37 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 38 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 39 | 40 | - "traefik.http.routers.cops-secure.entrypoints=websecure" 41 | - "traefik.http.routers.cops-secure.rule=Host(`calibre.fiks.im`)" 42 | # - "traefik.http.routers.cops-secure.middlewares=traefik-auth" 43 | - "traefik.http.routers.cops-secure.tls=true" 44 | 45 | networks: 46 | traefik-public: 47 | external: true 48 | -------------------------------------------------------------------------------- /perks/clickhouse-tabix/docker-compose.yml: -------------------------------------------------------------------------------- 1 | --- 2 | version: "3.7" 3 | services: 4 | tabix: 5 | image: spoonest/clickhouse-tabix-web-client 6 | container_name: tabix 7 | networks: 8 | traefik-public: 9 | extra_hosts: 10 | docker.lvh.voronenko.net: 172.19.0.1 11 | lvh.voronenko.net: 172.19.0.1 12 | clickhouse: 172.19.0.1 13 | labels: 14 | - "traefik.enable=true" 15 | - "traefik.http.routers.tabix.rule=Host(`tabix.lvh.voronenko.net`)" 16 | - "traefik.http.routers.tabix.entrypoints=web" 17 | - "traefik.http.services.tabix.loadbalancer.server.port=80" 18 | 19 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 20 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 21 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 22 | 23 | - "traefik.http.routers.tabix-secure.entrypoints=websecure" 24 | - "traefik.http.routers.tabix-secure.rule=Host(`tabix.lvh.voronenko.net`)" 25 | # - "traefik.http.routers.heim-secure.middlewares=traefik-auth" 26 | - "traefik.http.routers.tabix-secure.tls=true" 27 | restart: unless-stopped 28 | networks: 29 | traefik-public: 30 | external: true 31 | 32 | -------------------------------------------------------------------------------- /perks/docker-compose.yml: -------------------------------------------------------------------------------- 1 | services: 2 | homepage: 3 | image: ghcr.io/gethomepage/homepage:latest 4 | container_name: homepage 5 | environment: 6 | PUID: 1000 # optional, your user id 7 | PGID: 1000 # optional, your group id 8 | #ports: 9 | # - 3000:3000 10 | volumes: 11 | - ./config:/app/config # Make sure your local config directory exists 12 | - /var/run/docker.sock:/var/run/docker.sock 13 | restart: unless-stopped 14 | networks: 15 | traefik-public: 16 | extra_hosts: 17 | docker.lvh.voronenko.net: 172.25.0.1 18 | lvh.voronenko.net: 172.25.0.1 19 | # healthcheck: 20 | # test: ["CMD", "curl", "-f", "http://localhost"] 21 | # interval: 1m 22 | # timeout: 10s 23 | # retries: 3 24 | # start_period: 20s 25 | deploy: 26 | replicas: 1 27 | placement: 28 | constraints: 29 | - node.role == manager 30 | preferences: 31 | - spread: node.id 32 | labels: 33 | - "traefik.enable=true" 34 | - "traefik.http.routers.hpage.rule=Host(`home.lvh.voronenko.net`,`home.me.voronenko.net`,`home.fiks.im`)" 35 | - "traefik.http.routers.hpage.entrypoints=web" 36 | - "traefik.http.services.hpage.loadbalancer.server.port=3000" 37 | 38 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 39 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 40 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 41 | 42 | - "traefik.http.routers.hpage-secure.entrypoints=websecure" 43 | - 
"traefik.http.routers.hpage-secure.rule=Host(`home.lvh.voronenko.net`,`home.me.voronenko.net`,`home.fiks.im`)" 44 | # - "traefik.http.routers.hpage-secure.middlewares=traefik-auth" 45 | - "traefik.http.routers.hpage-secure.tls=true" 46 | labels: 47 | - "traefik.enable=true" 48 | - "traefik.http.routers.hpage.rule=Host(`home.lvh.voronenko.net`,`home.me.voronenko.net`,`home.fiks.im`)" 49 | - "traefik.http.routers.hpage.entrypoints=web" 50 | - "traefik.http.services.hpage.loadbalancer.server.port=3000" 51 | 52 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 53 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 54 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 55 | 56 | - "traefik.http.routers.hpage-secure.entrypoints=websecure" 57 | - "traefik.http.routers.hpage-secure.rule=Host(`home.lvh.voronenko.net`,`home.me.voronenko.net`,`home.fiks.im`)" 58 | # - "traefik.http.routers.hpage-secure.middlewares=traefik-auth" 59 | - "traefik.http.routers.hpage-secure.tls=true" 60 | networks: 61 | traefik-public: 62 | external: true 63 | -------------------------------------------------------------------------------- /perks/docker-registry-pull-through/README.md: -------------------------------------------------------------------------------- 1 | # Idea of setup 2 | 3 | https://circleci.com/docs/docker-hub-pull-through-mirror 4 | 5 | Set up an independent Linux server that has Docker installed. 6 | We set up this registry as an independent server (i.e. outside of cdci boxes) to avoid load on this cache registry affecting other cdci server services. 7 | Assuming that the IP address for this server is 192.0.2.1, the URL for the registry to set up is http://192.0.2.1:5000 This URL will be needed later to arm Nomad clients and the VM Service 8 | 9 | ``` 10 | sudo docker run \ 11 | -d \ 12 | -p 80:5000 \ 13 | --restart=always \ 14 | --name=through-cache \ 15 | -e REGISTRY_PROXY_REMOTEURL="https://registry-1.docker.io" \ 16 | -e REGISTRY_PROXY_USERNAME=DOCKER_HUB_USERNAME \ 17 | -e REGISTRY_PROXY_PASSWORD=DOCKER_HUB_ACCESS_TOKEN \ 18 | registry 19 | ``` 20 | 21 | Finally, make sure that the TCP port is open and accessible. For better security, we recommend that you only open the port to Nomad clients and VMs for machine executors and remote docker engines. 22 | 23 | Similar command for running secure registry would be 24 | 25 | ``` 26 | sudo docker run \ 27 | -d \ 28 | -p 443:5000 \ 29 | --restart=always \ 30 | --name=through-cache-secure \ 31 | -v /root/tls:/data/tls \ 32 | -e REGISTRY_PROXY_REMOTEURL="https://registry-1.docker.io" \ 33 | -e REGISTRY_PROXY_USERNAME=DOCKER_HUB_USERNAME \ 34 | -e REGISTRY_PROXY_PASSWORD=DOCKER_HUB_ACCESS_TOKEN \ 35 | -e REGISTRY_HTTP_TLS_CERTIFICATE=/data/tls/fullchain.pem \ 36 | -e REGISTRY_HTTP_TLS_KEY=/data/tls/privkey.pem \ 37 | registry 38 | ``` 39 | 40 | 41 | 42 | ## CORS setup 43 | 44 | http: 45 | headers: 46 | Access-Control-Allow-Origin: ['http://10.x.x.x'] 47 | 48 | or 49 | 50 | -e REGISTRY_HTTP_HEADERS_Access-Control-Allow-Origin="['*']" 51 | 52 | 53 | ## Registry mirror setup 54 | 55 | Create a daemon.json configuration for the docker daemon. 
For example inside of /tmp/daemon.json on the host that is running docker (usually the same host that is running gitlab-runner); 56 | in a permanent install it could also be /etc/docker/daemon.json 57 | 58 | ```json 59 | { 60 | "registry-mirrors": [ 61 | "https://mirror.gcr.io" 62 | ] 63 | } 64 | ``` 65 | 66 | 67 | To verify that the cache is correctly configured, run: 68 | 69 | 70 | docker system info 71 | The output should include Registry Mirrors, and should look similar to the following: 72 | 73 | 74 | Containers: 2 75 | Running: 0 76 | Paused: 0 77 | Stopped: 2 78 | Images: 2 79 | Server Version: 17.03.1-ce 80 | Storage Driver: overlay2 81 | Backing Filesystem: extfs 82 | Supports d_type: true 83 | Native Overlay Diff: true 84 | Logging Driver: json-file 85 | ... 86 | Registry Mirrors: 87 | https://mirror.gcr.io 88 | -------------------------------------------------------------------------------- /perks/docker-registry-pull-through/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.7' 2 | 3 | services: 4 | registry-srv: 5 | image: registry:2.8 6 | restart: unless-stopped 7 | ports: 8 | - 5000:5000 9 | networks: 10 | - joxit-app 11 | - traefik-public 12 | environment: 13 | - REGISTRY_PROXY_REMOTEURL="https://registry-1.docker.io" 14 | - REGISTRY_PROXY_USERNAME=SPECIFY 15 | # https://hub.docker.com/settings/security 16 | - REGISTRY_PROXY_PASSWORD=SPECIFY 17 | # - REGISTRY_HTTP_TLS_CERTIFICATE=/data/tls/fullchain.pem 18 | # - REGISTRY_HTTP_TLS_KEY=/data/tls/privkey.pem 19 | volumes: 20 | - ./registry/storage:/var/lib/registry 21 | - ./registry/tls:/data/tls 22 | - ./registry/config.yml:/etc/docker/registry/config.yml:ro 23 | deploy: 24 | mode: 'global' 25 | placement: 26 | constraints: 27 | - 'node.hostname == docker-registry' 28 | labels: 29 | - "traefik.port=5000" 30 | - "traefik.enable=true" 31 | - "traefik.http.routers.registry.rule=Host(`registry.lvh.voronenko.net`)" 32 | - "traefik.http.routers.registry.entrypoints=web" 33 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 34 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 35 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 36 | - "traefik.http.services.registry-secure.loadbalancer.server.port=80" 37 | - "traefik.http.routers.registry-secure.entrypoints=websecure" 38 | - "traefik.http.routers.registry-secure.rule=Host(`registry.lvh.voronenko.net`)" 39 | # - "traefik.http.routers.registry-secure.middlewares=traefik-auth" 40 | - "traefik.http.routers.registry-secure.tls=true" 41 | labels: 42 | - "traefik.port=5000" 43 | - "traefik.enable=true" 44 | - "traefik.http.routers.registry.rule=Host(`registry.lvh.voronenko.net`)" 45 | - "traefik.http.routers.registry.entrypoints=web" 46 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 47 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 48 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 49 | - "traefik.http.services.registry-secure.loadbalancer.server.port=80" 50 | - "traefik.http.routers.registry-secure.entrypoints=websecure" 51 | - "traefik.http.routers.registry-secure.rule=Host(`registry.lvh.voronenko.net`)" 52 | # - "traefik.http.routers.registry-secure.middlewares=traefik-auth" 53 | - "traefik.http.routers.registry-secure.tls=true" 54 | 55 | registry-ui: 56 | image: joxit/docker-registry-ui:2.2.1 57 | restart: unless-stopped 58 | # ports: 59 | # - 8080:80 60
| environment: 61 | - REGISTRY_URL=https://registry.lvh.voronenko.net 62 | - REGISTRY_TITLE=Private Docker Registry 63 | - NGINX_PROXY_PASS_URL=https://registry.lvh.voronenko.net 64 | - DELETE_IMAGES=true 65 | - SINGLE_REGISTRY=true 66 | networks: 67 | - joxit-app 68 | - traefik-public 69 | labels: 70 | - "traefik.enable=true" 71 | - "traefik.http.routers.joxit.rule=Host(`registry-ui.lvh.voronenko.net`)" 72 | - "traefik.http.routers.joxit.entrypoints=web" 73 | 74 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 75 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 76 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 77 | 78 | - "traefik.http.services.joxit-secure.loadbalancer.server.port=80" 79 | - "traefik.http.routers.joxit-secure.entrypoints=websecure" 80 | - "traefik.http.routers.joxit-secure.rule=Host(`registry-ui.lvh.voronenko.net`)" 81 | # - "traefik.http.routers.joxit-secure.middlewares=traefik-auth" 82 | - "traefik.http.routers.joxit-secure.tls=true" 83 | networks: 84 | joxit-app: 85 | traefik-public: 86 | external: true 87 | -------------------------------------------------------------------------------- /perks/docker-registry/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.7' 2 | 3 | services: 4 | registry-srv: 5 | image: registry:latest 6 | restart: always 7 | ports: 8 | - 5000:5000 9 | networks: 10 | - joxit-app 11 | - traefik-public 12 | volumes: 13 | - ./registry/storage:/var/lib/registry 14 | - ./registry/config.yml:/etc/docker/registry/config.yml:ro 15 | registry-ui: 16 | image: joxit/docker-registry-ui:static 17 | restart: always 18 | # ports: 19 | # - 8080:80 20 | environment: 21 | - REGISTRY_URL=http://registry-srv:5000 22 | - DELETE_IMAGES=true 23 | networks: 24 | - joxit-app 25 | - traefik-public 26 | labels: 27 | - "traefik.enable=true" 28 | - "traefik.http.routers.joxit.rule=Host(`joxit.k.voronenko.net`)" 29 | - "traefik.http.routers.joxit.entrypoints=web" 30 | 31 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 32 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 33 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 34 | 35 | - "traefik.http.services.joxit-secure.loadbalancer.server.port=80" 36 | - "traefik.http.routers.joxit-secure.entrypoints=websecure" 37 | - "traefik.http.routers.joxit-secure.rule=Host(`joxit.k.voronenko.net`)" 38 | # - "traefik.http.routers.joxit-secure.middlewares=traefik-auth" 39 | - "traefik.http.routers.joxit-secure.tls=true" 40 | networks: 41 | joxit-app: 42 | traefik-public: 43 | external: true 44 | -------------------------------------------------------------------------------- /perks/docker-registry/registry/config.yml: -------------------------------------------------------------------------------- 1 | version: 0.1 2 | log: 3 | fields: 4 | service: registry 5 | storage: 6 | cache: 7 | blobdescriptor: inmemory 8 | filesystem: 9 | rootdirectory: /var/lib/registry 10 | delete: 11 | enabled: true 12 | http: 13 | addr: :5000 14 | headers: 15 | X-Content-Type-Options: [nosniff] 16 | health: 17 | storagedriver: 18 | enabled: true 19 | interval: 10s 20 | threshold: 3 21 | -------------------------------------------------------------------------------- /perks/freshrss/docker-compose.yml: -------------------------------------------------------------------------------- 1 | services: 2 | freshrss: 3 | image: 
freshrss/freshrss:latest 4 | # # Optional build section if you want to build the image locally: 5 | # build: 6 | # # Pick #latest (slow releases) or #edge (rolling release) or a specific release like #1.21.0 7 | # context: https://github.com/FreshRSS/FreshRSS.git#latest 8 | # dockerfile: Docker/Dockerfile-Alpine 9 | container_name: freshrss 10 | hostname: freshrss 11 | restart: unless-stopped 12 | volumes: 13 | - ./data:/var/www/FreshRSS/data 14 | - ./extensions:/var/www/FreshRSS/extensions 15 | environment: 16 | PUID: 1000 17 | PGID: 1000 18 | TZ: Europe/Kyiv 19 | CRON_MIN: '3,33' 20 | TRUSTED_PROXY: 172.16.0.1/12 192.168.0.1/16 192.168.0.2/16 192.168.0.3/16 21 | FRESHRSS_DEFAULT_USER: admin 22 | FRESHRSS_DEFAULT_PASSWORD: freshrss 23 | FRESHRSS_API_ENABLED: "true" 24 | FRESHRSS_SALT: "18dd21ac5472c670bbfc3532733cd479d42c3c48" 25 | networks: 26 | traefik-public: 27 | healthcheck: 28 | test: ["CMD", "php", "-r", "$$headers = get_headers('http://localhost/api/fever.php', 1); echo (strpos($$headers[0], '200') !== false) ? exit(0) : exit(1);"] 29 | interval: 1m 30 | timeout: 10s 31 | retries: 3 32 | labels: 33 | - "traefik.enable=true" 34 | - "traefik.http.routers.freshrss.rule=Host(`rss.fiks.im`)" 35 | - "traefik.http.routers.freshrss.entrypoints=web" 36 | - "traefik.http.services.freshrss.loadbalancer.server.port=80" 37 | 38 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 39 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 40 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 41 | 42 | - "traefik.http.routers.freshrss-secure.entrypoints=websecure" 43 | - "traefik.http.routers.freshrss-secure.rule=Host(`rss.fiks.im`)" 44 | # - "traefik.http.routers.freshrss-secure.middlewares=traefik-auth" 45 | - "traefik.http.routers.freshrss-secure.tls=true" 46 | - "traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https" 47 | 48 | networks: 49 | traefik-public: 50 | external: true 51 | -------------------------------------------------------------------------------- /perks/grafana-loki/Makefile: -------------------------------------------------------------------------------- 1 | update-configs: 2 | wget https://raw.githubusercontent.com/grafana/loki/v1.5.0/cmd/loki/loki-local-config.yaml -O loki/loki-config.yaml 3 | wget https://raw.githubusercontent.com/grafana/loki/v1.5.0/cmd/promtail/promtail-docker-config.yaml -O promtail/promtail-config.yaml 4 | -------------------------------------------------------------------------------- /perks/grafana-loki/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3.7" 2 | 3 | volumes: 4 | loki_data: {} 5 | 6 | services: 7 | loki: 8 | image: grafana/loki:1.5.0 9 | ports: 10 | - "3100:3100" 11 | volumes: 12 | - ./loki/:/etc/loki/ 13 | - loki_data:/tmp/loki/ 14 | restart: always 15 | command: -config.file=/etc/loki/loki-config.yaml 16 | networks: 17 | - loki 18 | - traefik-public 19 | labels: 20 | - "traefik.enable=true" 21 | - "traefik.http.routers.loki.rule=Host(`loki.k.voronenko.net`)" 22 | - "traefik.http.routers.loki.entrypoints=web" 23 | 24 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 25 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 26 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 27 | 28 | - "traefik.http.services.loki-secure.loadbalancer.server.port=3100" 29 | - 
"traefik.http.routers.loki-secure.entrypoints=websecure" 30 | - "traefik.http.routers.loki-secure.rule=Host(`loki.k.voronenko.net`)" 31 | # - "traefik.http.routers.loki-secure.middlewares=traefik-auth" 32 | - "traefik.http.routers.loki-secure.tls=true" 33 | 34 | 35 | # promtail: 36 | # image: grafana/promtail:1.5.0 37 | # volumes: 38 | # - /var/log:/var/log 39 | # - ./promtail/:/etc/promtail/ 40 | # 41 | # command: -config.file=/etc/promtail/promtail-config.yaml 42 | # networks: 43 | # - loki 44 | 45 | networks: 46 | loki: 47 | traefik-public: 48 | external: true 49 | 50 | -------------------------------------------------------------------------------- /perks/grafana-loki/loki/loki-config.yaml: -------------------------------------------------------------------------------- 1 | auth_enabled: false 2 | 3 | server: 4 | http_listen_port: 3100 5 | 6 | ingester: 7 | lifecycler: 8 | address: 127.0.0.1 9 | ring: 10 | kvstore: 11 | store: inmemory 12 | replication_factor: 1 13 | final_sleep: 0s 14 | chunk_idle_period: 5m 15 | chunk_retain_period: 30s 16 | max_transfer_retries: 0 17 | 18 | schema_config: 19 | configs: 20 | - from: 2018-04-15 21 | store: boltdb 22 | object_store: filesystem 23 | schema: v11 24 | index: 25 | prefix: index_ 26 | period: 168h 27 | 28 | storage_config: 29 | boltdb: 30 | directory: /tmp/loki/index 31 | 32 | filesystem: 33 | directory: /tmp/loki/chunks 34 | 35 | limits_config: 36 | enforce_metric_name: false 37 | reject_old_samples: true 38 | reject_old_samples_max_age: 168h 39 | 40 | chunk_store_config: 41 | max_look_back_period: 0s 42 | 43 | table_manager: 44 | retention_deletes_enabled: false 45 | retention_period: 0s 46 | -------------------------------------------------------------------------------- /perks/grafana-loki/promtail/promtail-config.yaml: -------------------------------------------------------------------------------- 1 | server: 2 | http_listen_port: 9080 3 | grpc_listen_port: 0 4 | 5 | positions: 6 | filename: /tmp/positions.yaml 7 | 8 | clients: 9 | - url: http://loki.k.voronenko.net:3100/loki/api/v1/push 10 | 11 | scrape_configs: 12 | - job_name: system 13 | static_configs: 14 | - targets: 15 | - localhost 16 | labels: 17 | job: varlogs 18 | __path__: /var/log/*log 19 | -------------------------------------------------------------------------------- /perks/heimdall/docker-compose.yml: -------------------------------------------------------------------------------- 1 | --- 2 | version: "3.7" 3 | services: 4 | heimdall: 5 | image: ghcr.io/linuxserver/heimdall 6 | container_name: heimdall 7 | environment: 8 | # - PUID=1000 9 | # - PGID=1000 10 | - TZ=Europe/Kyiv 11 | volumes: 12 | - ./config:/config 13 | networks: 14 | traefik-public: 15 | labels: 16 | - "traefik.enable=true" 17 | - "traefik.http.routers.heim.rule=Host(`nas.fiks.im`)" 18 | - "traefik.http.routers.heim.entrypoints=web" 19 | - "traefik.http.services.heim.loadbalancer.server.port=80" 20 | 21 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 22 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 23 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 24 | 25 | - "traefik.http.routers.heim-secure.entrypoints=websecure" 26 | - "traefik.http.routers.heim-secure.rule=Host(`nas.fiks.im`)" 27 | # - "traefik.http.routers.heim-secure.middlewares=traefik-auth" 28 | - "traefik.http.routers.heim-secure.tls=true" 29 | restart: always 30 | networks: 31 | traefik-public: 32 | external: true 33 | 34 | 
-------------------------------------------------------------------------------- /perks/ivre/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3' 2 | services: 3 | ivredb: 4 | image: ivre/db 5 | container_name: ivredb 6 | networks: 7 | - ivre 8 | volumes: 9 | - ./var_lib_mongodb:/var/lib/mongodb 10 | - ./var_log_mongodb:/var/log/mongodb 11 | restart: unless-stopped 12 | ivreweb: 13 | image: ivre/web 14 | container_name: ivreweb 15 | restart: unless-stopped 16 | networks: 17 | - ivre 18 | - traefik-public 19 | labels: 20 | - "traefik.enable=true" 21 | - "traefik.http.routers.ivre.rule=Host(`sec.lvh.voronenko.net`)" 22 | - "traefik.http.routers.ivre.entrypoints=web" 23 | 24 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 25 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 26 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 27 | 28 | - "traefik.http.routers.ivre-secure.entrypoints=websecure" 29 | - "traefik.http.routers.ivre-secure.rule=Host(`sec.lvh.voronenko.net`)" 30 | # - "traefik.http.routers.grafana-secure.middlewares=traefik-auth" 31 | - "traefik.http.routers.ivre-secure.tls=true" 32 | 33 | depends_on: 34 | - ivredb 35 | ivreclient: 36 | image: ivre/client 37 | container_name: ivreclient 38 | networks: 39 | - ivre 40 | volumes: 41 | - ./ivre-share:/ivre-share 42 | depends_on: 43 | - ivredb 44 | stdin_open: true 45 | tty: true 46 | networks: 47 | ivre: 48 | traefik-public: 49 | external: true 50 | 51 | -------------------------------------------------------------------------------- /perks/jaeger/Readme.md: -------------------------------------------------------------------------------- 1 | Port Protocol Component Function 2 | 5775 UDP agent accept zipkin.thrift over compact thrift protocol (deprecated, used by legacy clients only) 3 | 6831 UDP agent accept jaeger.thrift over compact thrift protocol 4 | 6832 UDP agent accept jaeger.thrift over binary thrift protocol 5 | 5778 HTTP agent serve configs 6 | 16686 HTTP query serve frontend 7 | 14268 HTTP collector accept jaeger.thrift directly from clients 8 | 14250 HTTP collector accept model.proto 9 | 9411 HTTP collector Zipkin compatible endpoint (optional) 10 | 11 | -------------------------------------------------------------------------------- /perks/jaeger/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.7' 2 | services: 3 | jaeger: 4 | image: jaegertracing/all-in-one:latest 5 | ports: 6 | # 5775 UDP agent accept zipkin.thrift over compact thrift protocol (deprecated, used by legacy clients only) 7 | - "5775:5775/udp" 8 | # 6831 UDP agent accept jaeger.thrift over compact thrift protocol 9 | - "6831:6831/udp" 10 | # 6832 UDP agent accept jaeger.thrift over binary thrift protocol 11 | - "6832:6832/udp" 12 | # 5778 HTTP agent serve configs 13 | - "5778:5778" 14 | # 16686 HTTP query serve frontend 15 | - "16686:16686" 16 | # 14268 HTTP collector accept jaeger.thrift directly from clients 17 | - "14268:14268" 18 | # 14250 HTTP collector accept model.proto 19 | - "14250:14250" 20 | # 9411 HTTP collector Zipkin compatible endpoint (optional) 21 | - "9411:9411" 22 | environment: 23 | - COLLECTOR_ZIPKIN_HTTP_PORT=9411 24 | networks: 25 | - jaeger-app 26 | - traefik-public 27 | labels: 28 | - "traefik.enable=true" 29 | - "traefik.http.routers.jaeger.rule=Host(`jaeger.k.voronenko.net`)" 30 | - 
"traefik.http.routers.jaeger.entrypoints=web" 31 | 32 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 33 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 34 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 35 | 36 | - "traefik.http.services.jaeger-secure.loadbalancer.server.port=16686" 37 | - "traefik.http.routers.jaeger-secure.entrypoints=websecure" 38 | - "traefik.http.routers.jaeger-secure.rule=Host(`jaeger.k.voronenko.net`)" 39 | # - "traefik.http.routers.jaeger-secure.middlewares=traefik-auth" 40 | - "traefik.http.routers.jaeger-secure.tls=true" 41 | 42 | networks: 43 | jaeger-app: 44 | traefik-public: 45 | external: true 46 | 47 | -------------------------------------------------------------------------------- /perks/konga/Makefile: -------------------------------------------------------------------------------- 1 | up: 2 | docker-compose up -d 3 | down: 4 | docker-compose down --volumes --remove-orphans 5 | -------------------------------------------------------------------------------- /perks/konga/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.4' 2 | services: 3 | db: 4 | image: postgres:9.5 5 | environment: 6 | POSTGRES_DB: ${KONG_PG_DATABASE:-kong} 7 | POSTGRES_PASSWORD: ${KONG_PG_PASSWORD:-kong} 8 | POSTGRES_USER: ${KONG_PG_USER:-kong} 9 | healthcheck: 10 | test: ["CMD", "pg_isready", "-U", "${KONG_PG_USER:-kong}"] 11 | interval: 30s 12 | timeout: 30s 13 | retries: 3 14 | restart: on-failure 15 | stdin_open: true 16 | tty: true 17 | networks: 18 | - kong-net 19 | volumes: 20 | - keyfiles_kong_data:/var/lib/postgresql/data 21 | 22 | ####################################### 23 | # Konga database prepare 24 | ####################################### 25 | konga-prepare: 26 | image: pantsel/konga:latest 27 | command: "-c prepare -a postgres -u postgresql://${KONG_PG_USER:-kong}:${KONG_PG_PASSWORD:-kong}@db:5432/${KONG_PG_DATABASE:-kong} && tail -f /dev/null" 28 | environment: 29 | DB_ADAPTER: postgres 30 | DB_HOST: db 31 | DB_USER: ${KONG_PG_USER:-kong} 32 | DB_PASSWORD: ${KONG_PG_PASSWORD:-kong} 33 | DB_DATABASE: ${KONG_PG_DATABASE:-kong} 34 | NODE_ENV: production 35 | KONGA_SEED_USER_DATA_SOURCE_FILE: /tmp/konga-bootstrsap/users.js 36 | KONGA_SEED_KONG_NODE_DATA_SOURCE_FILE: /tmp/konga-bootstrsap/connections.js 37 | volumes: 38 | - ./konga:/tmp/konga-bootstrsap:ro 39 | networks: 40 | - kong-net 41 | restart: on-failure 42 | ####################################### 43 | # Konga: Kong GUI 44 | ####################################### 45 | konga: 46 | image: pantsel/konga:latest 47 | restart: on-failure 48 | networks: 49 | - kong-net 50 | - traefik-public 51 | environment: 52 | DB_ADAPTER: postgres 53 | DB_HOST: db 54 | DB_USER: ${KONG_PG_USER:-kong} 55 | DB_PASSWORD: ${KONG_PG_PASSWORD:-kong} 56 | DB_DATABASE: ${KONG_PG_DATABASE:-kong} 57 | NODE_ENV: production 58 | KONGA_SEED_USER_DATA_SOURCE_FILE: /tmp/konga-bootstrap/users.js 59 | KONGA_SEED_KONG_NODE_DATA_SOURCE_FILE: /tmp/konga-bootstrsap/connections.js 60 | volumes: 61 | - ./konga:/tmp/konga-bootstrsap:ro 62 | # ports: 63 | # - "1337:1337" 64 | labels: 65 | - "traefik.enable=true" 66 | - "traefik.http.routers.konga.rule=Host(`konga.lvh.voronenko.net`)" 67 | - "traefik.http.routers.konga.entrypoints=web" 68 | 69 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 70 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 71 | # 
- "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 72 | 73 | - "traefik.http.routers.konga-secure.entrypoints=websecure" 74 | - "traefik.http.routers.konga-secure.rule=Host(`konga.lvh.voronenko.net`)" 75 | # - "traefik.http.routers.konga-secure.middlewares=traefik-auth" 76 | - "traefik.http.routers.konga-secure.tls=true" 77 | 78 | volumes: 79 | keyfiles_kong_data: {} 80 | 81 | networks: 82 | kong-net: 83 | traefik-public: 84 | external: true 85 | -------------------------------------------------------------------------------- /perks/konga/konga/connections.js: -------------------------------------------------------------------------------- 1 | module.exports = [ 2 | { 3 | "name": "10.22.1.219 KONGSIM", 4 | "type": "default", 5 | "kong_admin_url": "http://10.10.10.10:8001", 6 | "health_checks": false, 7 | } 8 | ] 9 | -------------------------------------------------------------------------------- /perks/konga/konga/users.js: -------------------------------------------------------------------------------- 1 | module.exports = [ 2 | { 3 | "username": "admin", 4 | "email": "admin@some.domain", 5 | "firstName": "SomeAdministrator", 6 | "lastName": "For the Konga", 7 | "node_id": "http://kong:8001", 8 | "admin": true, 9 | "active" : true, 10 | "password": "admin" 11 | } 12 | ] 13 | -------------------------------------------------------------------------------- /perks/logseq-renderers/docker-compose.yml: -------------------------------------------------------------------------------- 1 | services: 2 | logseq_drawio: 3 | image: jgraph/drawio 4 | container_name: logseqdrawio 5 | restart: always 6 | networks: 7 | - traefik-public 8 | labels: 9 | - "traefik.enable=true" 10 | - "traefik.http.routers.drawio.rule=Host(`drawio.fiks.im`)" 11 | - "traefik.http.routers.drawio.entrypoints=web" 12 | - "traefik.http.routers.drawio_secure.rule=Host(`drawio.fiks.im`)" 13 | - "traefik.http.routers.drawio_secure.entrypoints=websecure" 14 | - "traefik.http.routers.drawio_secure.tls=true" 15 | - "traefik.http.services.logseq_drawio.loadbalancer.server.port=8080" 16 | logseq_plantuml: 17 | image: plantuml/plantuml-server:tomcat 18 | container_name: logseqplantuml 19 | restart: always 20 | networks: 21 | - traefik-public 22 | labels: 23 | - "traefik.enable=true" 24 | - "traefik.http.routers.plantuml.rule=Host(`plantuml.fiks.im`)" 25 | - "traefik.http.routers.plantuml.entrypoints=web" 26 | - "traefik.http.routers.plantuml_secure.rule=Host(`plantuml.fiks.im`)" 27 | - "traefik.http.routers.plantuml_secure.entrypoints=websecure" 28 | - "traefik.http.routers.plantuml_secure.tls=true" 29 | - "traefik.http.services.logseq_plantuml.loadbalancer.server.port=8080" 30 | 31 | 32 | networks: 33 | traefik-public: 34 | external: true 35 | -------------------------------------------------------------------------------- /perks/loki-cloud/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.7' 2 | services: 3 | loki-cloud: 4 | image: grafana/promtail:main 5 | # ports: 6 | # - 3100:3100 7 | volumes: 8 | - /var/log:/var/log:ro 9 | - ./promtail:/etc/promtail:rw 10 | restart: unless-stopped 11 | networks: 12 | traefik-public: 13 | external: true 14 | -------------------------------------------------------------------------------- /perks/loki-cloud/promtail/config.yml: -------------------------------------------------------------------------------- 1 | server: 2 | http_listen_port: 0 3 | grpc_listen_port: 0 4 | 5 | positions: 6 | filename: /tmp/positions.yaml 
7 | 8 | #client: 9 | # url: https://1111:eyJrIjoiY 10 | 11 | scrape_configs: 12 | - job_name: system 13 | static_configs: 14 | - targets: 15 | - localhost 16 | labels: 17 | job: testlog 18 | __path__: /var/log/ufw.log 19 | -------------------------------------------------------------------------------- /perks/metabase/Readme.md: -------------------------------------------------------------------------------- 1 | Zxcvbnmnbvcxz1! 2 | -------------------------------------------------------------------------------- /perks/metabase/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3.7" 2 | services: 3 | metabase-db: 4 | image: mysql:5.7 5 | volumes: 6 | - metabase_mysql_data:/var/lib/mysql 7 | restart: unless-stopped 8 | # ports: 9 | # - "3307:3306" 10 | networks: 11 | - metabase 12 | environment: 13 | MYSQL_ROOT_PASSWORD: metabase 14 | MYSQL_DATABASE: metabase 15 | MYSQL_USER: metabase 16 | MYSQL_PASSWORD: metabase 17 | metabase-app: 18 | image: softasap/metabase:202309141500 19 | restart: unless-stopped 20 | # ports: 21 | # - 3001:3000 22 | volumes: 23 | # declare your mount volume /host/dir:/container/dir 24 | - metabase_data:/metabase-data 25 | - ./csv:/csv 26 | networks: 27 | - metabase 28 | - traefik-public 29 | restart: unless-stopped 30 | environment: 31 | MB_DB_TYPE: mysql 32 | MB_DB_DBNAME: metabase 33 | MB_DB_PORT: 3306 34 | MB_DB_USER: metabase 35 | MB_DB_PASS: metabase 36 | MB_DB_HOST: metabase-db 37 | depends_on: 38 | - metabase-db 39 | links: 40 | - metabase-db 41 | labels: 42 | - "traefik.enable=true" 43 | - "traefik.http.routers.metabase.rule=Host(`mb.lvh.voronenko.net`)" 44 | - "traefik.http.routers.metabase.entrypoints=web" 45 | 46 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 47 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 48 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 49 | 50 | - "traefik.http.routers.metabase-secure.entrypoints=websecure" 51 | - "traefik.http.routers.metabase-secure.rule=Host(`mb.lvh.voronenko.net`)" 52 | # - "traefik.http.routers.metabase-secure.middlewares=traefik-auth" 53 | - "traefik.http.routers.metabase-secure.tls=true" 54 | 55 | volumes: 56 | metabase_mysql_data: {} 57 | metabase_data: {} 58 | 59 | networks: 60 | metabase: 61 | traefik-public: 62 | external: true 63 | 64 | -------------------------------------------------------------------------------- /perks/monitoring/alertmanager/config.yml: -------------------------------------------------------------------------------- 1 | global: 2 | # The smarthost and SMTP sender used for mail notifications. 3 | smtp_smarthost: 'smtp.gmail.com:587' 4 | # smtp_from: 'emailaddress@' 5 | # smtp_auth_username: 'emailaddress@' 6 | # smtp_auth_password: 'password' 7 | # The auth token for Hipchat. 8 | # hipchat_auth_token: '1234556789' 9 | # Alternative host for Hipchat. 10 | # hipchat_api_url: 'https://hipchat.foobar.org/' 11 | 12 | templates: 13 | - '/etc/prometheus_alertmanager/template/*.tmpl' 14 | 15 | # The root route on which each incoming alert enters. 16 | route: 17 | # The labels by which incoming alerts are grouped together. For example, 18 | # multiple alerts coming in for cluster=A and alertname=LatencyHigh would 19 | # be batched into a single group. 20 | group_by: ['alertname', 'cluster', 'service'] 21 | 22 | # When a new group of alerts is created by an incoming alert, wait at 23 | # least 'group_wait' to send the initial notification. 
24 | # This way ensures that you get multiple alerts for the same group that start 25 | # firing shortly after another are batched together on the first 26 | # notification. 27 | group_wait: 30s 28 | 29 | # When the first notification was sent, wait 'group_interval' to send a batch 30 | # of new alerts that started firing for that group. 31 | group_interval: 5m 32 | 33 | # If an alert has successfully been sent, wait 'repeat_interval' to 34 | # resend them. 35 | repeat_interval: 3h 36 | 37 | # A default receiver 38 | receiver: slack-channel 39 | 40 | # All the above attributes are inherited by all child routes and can 41 | # overwritten on each. 42 | 43 | # The child route trees. [20/177] 44 | routes: 45 | # This routes performs a regular expression match on alert labels to 46 | # catch alerts that are related to a list of services. 47 | - match_re: 48 | service: ^(foo1|foo2|baz)$ 49 | receiver: slack-channel 50 | # The service has a sub-route for critical alerts, any alerts 51 | # that do not match, i.e. severity != critical, fall-back to the 52 | # parent node and are sent to 'team-X-mails' 53 | routes: 54 | - match: 55 | severity: critical 56 | receiver: slack-channel 57 | - match: 58 | service: files 59 | receiver: slack-channel 60 | 61 | routes: 62 | - match: 63 | severity: critical 64 | receiver: slack-channel 65 | 66 | # This route handles all alerts coming from a database service. If there's 67 | # no team to handle it, it defaults to the DB team. 68 | - match: 69 | service: database 70 | receiver: slack-channel 71 | # Also group alerts by affected database. 72 | group_by: [alertname, cluster, database] 73 | routes: 74 | - match: 75 | owner: team-X 76 | receiver: slack-channel 77 | - match: 78 | owner: team-Y 79 | receiver: slack-channel 80 | 81 | 82 | # Inhibition rules allow to mute a set of alerts given that another alert is 83 | # firing. 84 | # We use this to mute any warning-level notifications if the same alert is 85 | # already critical. 86 | inhibit_rules: 87 | - source_match: 88 | severity: 'critical' 89 | target_match: 90 | severity: 'warning' 91 | # Apply inhibition if the alertname is the same. 92 | equal: ['alertname', 'cluster', 'service'] 93 | 94 | 95 | receivers: 96 | - name: slack-channel 97 | # email_configs: 98 | # - to: voronenko@gmail.com 99 | # - to: youremail 100 | slack_configs: 101 | - channel: #notifications 102 | send_resolved: true 103 | api_url: https://hooks.slack.com/services/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 104 | icon_url: https://avatars3.githubusercontent.com/u/4029521 105 | title: '{{ template "custom_title" . }}' 106 | text: '{{ template "custom_slack_message" . }}' 107 | -------------------------------------------------------------------------------- /perks/monitoring/alertmanager/template/notifications.tmpl: -------------------------------------------------------------------------------- 1 | {{ define "__single_message_title" }}{{ range .Alerts.Firing }}{{ .Labels.alertname }} @ {{ .Annotations.identifier }}{{ end }}{{ range .Alerts.Resolved }}{{ .Labels.alertname }} @ {{ .Annotations.identifier }}{{ end }}{{ end }} 2 | {{ define "custom_title" }}[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] {{ if or (and (eq (len .Alerts.Firing) 1) (eq (len .Alerts.Resolved) 0)) (and (eq (len .Alerts.Firing) 0) (eq (len .Alerts.Resolved) 1)) }}{{ template "__single_message_title" . 
}}{{ end }}{{ end }} 3 | {{ define "custom_slack_message" }} 4 | {{ if or (and (eq (len .Alerts.Firing) 1) (eq (len .Alerts.Resolved) 0)) (and (eq (len .Alerts.Firing) 0) (eq (len .Alerts.Resolved) 1)) }} 5 | {{ range .Alerts.Firing }}{{ .Annotations.description }}{{ end }}{{ range .Alerts.Resolved }}{{ .Annotations.description }}{{ end }} 6 | {{ else }} 7 | {{ if gt (len .Alerts.Firing) 0 }} 8 | *Alerts Firing:* 9 | {{ range .Alerts.Firing }}- {{ .Annotations.identifier }}: {{ .Annotations.description }} 10 | *Details:* 11 | {{ range .Labels.SortedPairs }} *{{ .Name }}:* `{{ .Value }}`{{ end }} 12 | {{ end }}{{ end }} 13 | {{ if gt (len .Alerts.Resolved) 0 }} 14 | *Alerts Resolved:* 15 | {{ range .Alerts.Resolved }}- {{ .Annotations.identifier }}: {{ .Annotations.description }} 16 | *Details:* 17 | {{ range .Labels.SortedPairs }} *{{ .Name }}:* `{{ .Value }}`{{ end }} 18 | {{ end }}{{ end }} 19 | {{ end }} 20 | {{ end }} 21 | -------------------------------------------------------------------------------- /perks/monitoring/blackboxexporter/config.full.yml: -------------------------------------------------------------------------------- 1 | 2 | modules: 3 | http_2xx_example: 4 | prober: http 5 | timeout: 5s 6 | http: 7 | valid_http_versions: ["HTTP/1.1", "HTTP/2.0"] 8 | valid_status_codes: [] # Defaults to 2xx 9 | method: GET 10 | headers: 11 | Host: www.softasap.com 12 | Accept-Language: en-US 13 | Origin: example.com 14 | no_follow_redirects: false 15 | fail_if_ssl: false 16 | fail_if_not_ssl: false 17 | fail_if_body_matches_regexp: 18 | - "Could not connect to database" 19 | fail_if_body_not_matches_regexp: 20 | - "Download the latest version here" 21 | fail_if_header_matches: # Verifies that no cookies are set 22 | - header: Set-Cookie 23 | allow_missing: true 24 | regexp: '.*' 25 | fail_if_header_not_matches: 26 | - header: Access-Control-Allow-Origin 27 | regexp: '(\*|example\.com)' 28 | tls_config: 29 | insecure_skip_verify: false 30 | preferred_ip_protocol: "ip4" # defaults to "ip6" 31 | ip_protocol_fallback: false # no fallback to "ip6" 32 | http_post_2xx: 33 | prober: http 34 | timeout: 5s 35 | http: 36 | method: POST 37 | headers: 38 | Content-Type: application/json 39 | body: '{}' 40 | http_basic_auth_example: 41 | prober: http 42 | timeout: 5s 43 | http: 44 | method: POST 45 | headers: 46 | Host: "login.example.com" 47 | basic_auth: 48 | username: "username" 49 | password: "mysecret" 50 | http_custom_ca_example: 51 | prober: http 52 | http: 53 | method: GET 54 | tls_config: 55 | ca_file: "/certs/my_cert.crt" 56 | tls_connect: 57 | prober: tcp 58 | timeout: 5s 59 | tcp: 60 | tls: true 61 | tcp_connect_example: 62 | prober: tcp 63 | timeout: 5s 64 | imap_starttls: 65 | prober: tcp 66 | timeout: 5s 67 | tcp: 68 | query_response: 69 | - expect: "OK.*STARTTLS" 70 | - send: ". STARTTLS" 71 | - expect: "OK" 72 | - starttls: true 73 | - send: ". 
capability" 74 | - expect: "CAPABILITY IMAP4rev1" 75 | smtp_starttls: 76 | prober: tcp 77 | timeout: 5s 78 | tcp: 79 | query_response: 80 | - expect: "^220 ([^ ]+) ESMTP (.+)$" 81 | - send: "EHLO prober" 82 | - expect: "^250-STARTTLS" 83 | - send: "STARTTLS" 84 | - expect: "^220" 85 | - starttls: true 86 | - send: "EHLO prober" 87 | - expect: "^250-AUTH" 88 | - send: "QUIT" 89 | irc_banner_example: 90 | prober: tcp 91 | timeout: 5s 92 | tcp: 93 | query_response: 94 | - send: "NICK prober" 95 | - send: "USER prober prober prober :prober" 96 | - expect: "PING :([^ ]+)" 97 | send: "PONG ${1}" 98 | - expect: "^:[^ ]+ 001" 99 | icmp_example: 100 | prober: icmp 101 | timeout: 5s 102 | icmp: 103 | preferred_ip_protocol: "ip4" 104 | source_ip_address: "127.0.0.1" 105 | dns_udp_example: 106 | prober: dns 107 | timeout: 5s 108 | dns: 109 | query_name: "www.prometheus.io" 110 | query_type: "A" 111 | valid_rcodes: 112 | - NOERROR 113 | validate_answer_rrs: 114 | fail_if_matches_regexp: 115 | - ".*127.0.0.1" 116 | fail_if_all_match_regexp: 117 | - ".*127.0.0.1" 118 | fail_if_not_matches_regexp: 119 | - "www.prometheus.io.\t300\tIN\tA\t127.0.0.1" 120 | fail_if_none_matches_regexp: 121 | - "127.0.0.1" 122 | validate_authority_rrs: 123 | fail_if_matches_regexp: 124 | - ".*127.0.0.1" 125 | validate_additional_rrs: 126 | fail_if_matches_regexp: 127 | - ".*127.0.0.1" 128 | dns_soa: 129 | prober: dns 130 | dns: 131 | query_name: "prometheus.io" 132 | query_type: "SOA" 133 | dns_tcp_example: 134 | prober: dns 135 | dns: 136 | transport_protocol: "tcp" # defaults to "udp" 137 | preferred_ip_protocol: "ip4" # defaults to "ip6" 138 | query_name: "www.prometheus.io" 139 | -------------------------------------------------------------------------------- /perks/monitoring/blackboxexporter/config.small.yml: -------------------------------------------------------------------------------- 1 | modules: 2 | http_2xx: 3 | prober: http 4 | timeout: 5s 5 | http: 6 | valid_http_versions: ["HTTP/1.1", "HTTP/2"] 7 | valid_status_codes: [] # Defaults to 2xx 8 | method: GET 9 | no_follow_redirects: false 10 | fail_if_ssl: false 11 | fail_if_not_ssl: false 12 | fail_if_matches_regexp: 13 | - "Could not connect to database" 14 | tls_config: 15 | insecure_skip_verify: false 16 | preferred_ip_protocol: "ip4" # defaults to "ip6" 17 | 18 | -------------------------------------------------------------------------------- /perks/monitoring/blackboxexporter/config.yml: -------------------------------------------------------------------------------- 1 | modules: 2 | http_2xx_example: 3 | prober: http 4 | timeout: 5s 5 | http: 6 | valid_http_versions: ["HTTP/1.1", "HTTP/2.0"] 7 | valid_status_codes: [] # Defaults to 2xx 8 | method: GET 9 | no_follow_redirects: false 10 | fail_if_ssl: false 11 | fail_if_not_ssl: false 12 | fail_if_body_matches_regexp: 13 | - "Could not connect to database" 14 | fail_if_body_not_matches_regexp: 15 | - "Download the latest version here" 16 | fail_if_header_matches: # Verifies that no cookies are set 17 | - header: Set-Cookie 18 | allow_missing: true 19 | regexp: '.*' 20 | # fail_if_header_not_matches: 21 | # - header: Access-Control-Allow-Origin 22 | # regexp: '(\*|example\.com)' 23 | tls_config: 24 | insecure_skip_verify: false 25 | preferred_ip_protocol: "ip4" # defaults to "ip6" 26 | ip_protocol_fallback: false # no fallback to "ip6" 27 | -------------------------------------------------------------------------------- /perks/monitoring/config.monitoring: 
-------------------------------------------------------------------------------- 1 | GF_SECURITY_ADMIN_PASSWORD=q 2 | -------------------------------------------------------------------------------- /perks/monitoring/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.7' 2 | 3 | volumes: 4 | prometheus_data: {} 5 | grafana_data: {} 6 | 7 | services: 8 | 9 | prometheus: 10 | image: prom/prometheus:v2.19.1 11 | volumes: 12 | - ./prometheus/:/etc/prometheus/ 13 | - prometheus_data:/prometheus 14 | command: 15 | - '--config.file=/etc/prometheus/prometheus.yml' 16 | - '--storage.tsdb.path=/prometheus' 17 | - '--web.console.libraries=/usr/share/prometheus/console_libraries' 18 | - '--web.console.templates=/usr/share/prometheus/consoles' 19 | - '--web.enable-lifecycle' 20 | ports: 21 | - 9090:9090 22 | networks: 23 | - monitoring-stack 24 | - traefik-public 25 | restart: always 26 | labels: 27 | - "traefik.enable=true" 28 | - "traefik.http.routers.prometheus.rule=Host(`prometheus.k.voronenko.net`)" 29 | - "traefik.http.routers.prometheus.entrypoints=web" 30 | 31 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 32 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 33 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 34 | 35 | - "traefik.http.services.prometheus-secure.loadbalancer.server.port=9090" 36 | - "traefik.http.routers.prometheus-secure.entrypoints=websecure" 37 | - "traefik.http.routers.prometheus-secure.rule=Host(`prometheus.k.voronenko.net`)" 38 | # - "traefik.http.routers.prometheus-secure.middlewares=traefik-auth" 39 | - "traefik.http.routers.prometheus-secure.tls=true" 40 | 41 | 42 | node-exporter: 43 | image: prom/node-exporter 44 | volumes: 45 | - /proc:/host/proc:ro 46 | - /sys:/host/sys:ro 47 | - /:/rootfs:ro 48 | command: 49 | - '--path.procfs=/host/proc' 50 | - '--path.sysfs=/host/sys' 51 | - --collector.filesystem.ignored-mount-points 52 | - "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)" 53 | ports: 54 | - 9100:9100 55 | networks: 56 | - monitoring-stack 57 | restart: always 58 | 59 | alertmanager: 60 | image: prom/alertmanager:v0.21.0 61 | ports: 62 | - 9093:9093 63 | volumes: 64 | - ./alertmanager/:/etc/alertmanager/ 65 | restart: always 66 | command: 67 | - '--config.file=/etc/alertmanager/config.yml' 68 | - '--storage.path=/alertmanager' 69 | networks: 70 | - monitoring-stack 71 | 72 | cadvisor: 73 | image: google/cadvisor 74 | volumes: 75 | - /:/rootfs:ro 76 | - /var/run:/var/run:rw 77 | - /sys:/sys:ro 78 | - /var/lib/docker/:/var/lib/docker:ro 79 | ports: 80 | - 8080:8080 81 | networks: 82 | - monitoring-stack 83 | restart: always 84 | deploy: 85 | mode: global 86 | labels: 87 | - "traefik.enable=true" 88 | - "traefik.http.routers.cadvisor.rule=Host(`cadvisor.k.voronenko.net`)" 89 | - "traefik.http.routers.cadvisor.entrypoints=web" 90 | 91 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 92 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 93 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 94 | 95 | - "traefik.http.services.cadvisor-secure.loadbalancer.server.port=8080" 96 | - "traefik.http.routers.cadvisor-secure.entrypoints=websecure" 97 | - "traefik.http.routers.cadvisor-secure.rule=Host(`cadvisor.k.voronenko.net`)" 98 | # - 
"traefik.http.routers.prometheus-secure.middlewares=traefik-auth" 99 | - "traefik.http.routers.cadvisor-secure.tls=true" 100 | 101 | 102 | grafana: 103 | image: grafana/grafana:7.0.3-ubuntu 104 | ports: 105 | - 3000:3000 106 | volumes: 107 | - grafana_data:/var/lib/grafana 108 | - ./grafana/dashboards:/var/lib/grafana/dashboards 109 | - ./grafana/provisioning:/etc/grafana/provisioning 110 | networks: 111 | - monitoring-stack 112 | - traefik-public 113 | env_file: 114 | - config.monitoring 115 | labels: 116 | - "traefik.enable=true" 117 | - "traefik.http.routers.grafana.rule=Host(`grafana.k.voronenko.net`)" 118 | - "traefik.http.routers.grafana.entrypoints=web" 119 | 120 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 121 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 122 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 123 | 124 | # - "traefik.http.services.grafana-secure.loadbalancer.server.port=3000" 125 | - "traefik.http.routers.grafana-secure.entrypoints=websecure" 126 | - "traefik.http.routers.grafana-secure.rule=Host(`grafana.k.voronenko.net`)" 127 | # - "traefik.http.routers.prometheus-secure.middlewares=traefik-auth" 128 | - "traefik.http.routers.grafana-secure.tls=true" 129 | 130 | 131 | blackbox_exporter: 132 | image: prom/blackbox-exporter:v0.16.0 133 | ports: 134 | - 9115:9115 135 | volumes: 136 | - ./blackboxexporter/:/etc/blackboxexporter/ 137 | command: 138 | - '--config.file=/etc/blackboxexporter/config.yml' 139 | restart: always 140 | networks: 141 | monitoring-stack: 142 | traefik-public: 143 | external: true 144 | -------------------------------------------------------------------------------- /perks/monitoring/grafana/dashboards/http-ping.json: -------------------------------------------------------------------------------- 1 | { 2 | "annotations": { 3 | "list": [ 4 | { 5 | "builtIn": 1, 6 | "datasource": "-- Grafana --", 7 | "enable": true, 8 | "hide": true, 9 | "iconColor": "rgba(0, 211, 255, 1)", 10 | "name": "Annotations & Alerts", 11 | "type": "dashboard" 12 | } 13 | ] 14 | }, 15 | "description": "HTTP Ping based Blackbox Exporter Metric Dashboard.", 16 | "editable": true, 17 | "gnetId": null, 18 | "graphTooltip": 0, 19 | "hideControls": false, 20 | "id": 2, 21 | "links": [], 22 | "refresh": "1m", 23 | "rows": [ 24 | { 25 | "collapse": false, 26 | "height": 250, 27 | "panels": [ 28 | { 29 | "aliasColors": {}, 30 | "bars": false, 31 | "dashLength": 10, 32 | "dashes": false, 33 | "datasource": "Prometheus", 34 | "fill": 0, 35 | "id": 4, 36 | "legend": { 37 | "alignAsTable": true, 38 | "avg": true, 39 | "current": true, 40 | "max": true, 41 | "min": true, 42 | "rightSide": false, 43 | "show": true, 44 | "total": false, 45 | "values": true 46 | }, 47 | "lines": true, 48 | "linewidth": 1, 49 | "links": [], 50 | "nullPointMode": "null", 51 | "percentage": false, 52 | "pointradius": 1, 53 | "points": true, 54 | "renderer": "flot", 55 | "seriesOverrides": [], 56 | "spaceLength": 10, 57 | "span": 12, 58 | "stack": false, 59 | "steppedLine": false, 60 | "targets": [ 61 | { 62 | "expr": "probe_duration_seconds", 63 | "format": "time_series", 64 | "intervalFactor": 2, 65 | "refId": "A" 66 | } 67 | ], 68 | "thresholds": [], 69 | "timeFrom": null, 70 | "timeShift": null, 71 | "title": "HTTP Ping (Simple)", 72 | "tooltip": { 73 | "shared": true, 74 | "sort": 0, 75 | "value_type": "individual" 76 | }, 77 | "type": "graph", 78 | "xaxis": { 79 | "buckets": null, 80 | "mode": "time", 81 | "name": 
null, 82 | "show": true, 83 | "values": [] 84 | }, 85 | "yaxes": [ 86 | { 87 | "format": "short", 88 | "label": null, 89 | "logBase": 1, 90 | "max": null, 91 | "min": null, 92 | "show": true 93 | }, 94 | { 95 | "format": "short", 96 | "label": null, 97 | "logBase": 1, 98 | "max": null, 99 | "min": null, 100 | "show": true 101 | } 102 | ] 103 | } 104 | ], 105 | "repeat": null, 106 | "repeatIteration": null, 107 | "repeatRowId": null, 108 | "showTitle": false, 109 | "title": "Dashboard Row", 110 | "titleSize": "h6" 111 | }, 112 | { 113 | "collapse": false, 114 | "height": "500px", 115 | "panels": [ 116 | { 117 | "aliasColors": { 118 | "Standard Deviation": "#DEDAF7" 119 | }, 120 | "bars": false, 121 | "dashLength": 10, 122 | "dashes": false, 123 | "datasource": "Prometheus", 124 | "decimals": 3, 125 | "editable": true, 126 | "error": false, 127 | "fill": 1, 128 | "grid": {}, 129 | "height": "", 130 | "id": 1, 131 | "legend": { 132 | "alignAsTable": true, 133 | "avg": false, 134 | "current": false, 135 | "hideEmpty": true, 136 | "max": false, 137 | "min": true, 138 | "rightSide": false, 139 | "show": true, 140 | "sideWidth": null, 141 | "total": false, 142 | "values": true 143 | }, 144 | "lines": true, 145 | "linewidth": 1, 146 | "links": [], 147 | "minSpan": 2, 148 | "nullPointMode": "connected", 149 | "percentage": false, 150 | "pointradius": 5, 151 | "points": false, 152 | "renderer": "flot", 153 | "repeat": "region", 154 | "scopedVars": { 155 | "region": { 156 | "selected": false, 157 | "text": "local", 158 | "value": "local" 159 | }, 160 | "site": { 161 | "selected": false, 162 | "text": "https://pagertree.com", 163 | "value": "https://pagertree.com" 164 | } 165 | }, 166 | "seriesOverrides": [ 167 | { 168 | "alias": "down", 169 | "bars": true, 170 | "color": "#BF1B00", 171 | "legend": false, 172 | "stack": false, 173 | "yaxis": 2 174 | }, 175 | { 176 | "alias": "resolve", 177 | "color": "#2F575E", 178 | "zindex": -3 179 | }, 180 | { 181 | "alias": "connect", 182 | "color": "#EAB839", 183 | "zindex": -2 184 | }, 185 | { 186 | "alias": "tls", 187 | "color": "#6D1F62", 188 | "zindex": -1 189 | }, 190 | { 191 | "alias": "processing", 192 | "color": "#629E51", 193 | "zindex": 1 194 | }, 195 | { 196 | "alias": "transfer", 197 | "color": "#64B0C8", 198 | "zindex": 2 199 | } 200 | ], 201 | "spaceLength": 10, 202 | "span": 12, 203 | "stack": true, 204 | "steppedLine": true, 205 | "targets": [ 206 | { 207 | "expr": "sum by (phase) (probe_http_duration_seconds{instance=~\"$site\",region=~\"$region\"})\n", 208 | "format": "time_series", 209 | "hide": false, 210 | "interval": "", 211 | "intervalFactor": 2, 212 | "legendFormat": "{{phase}}", 213 | "refId": "A", 214 | "step": 4 215 | }, 216 | { 217 | "expr": "(1 - probe_success{instance=~\"$site\",region=~\"$region\"} > 0)", 218 | "format": "time_series", 219 | "interval": "", 220 | "intervalFactor": 2, 221 | "legendFormat": "down", 222 | "refId": "B", 223 | "step": 4 224 | } 225 | ], 226 | "thresholds": [], 227 | "timeFrom": null, 228 | "timeShift": null, 229 | "title": "$region", 230 | "tooltip": { 231 | "msResolution": false, 232 | "shared": true, 233 | "sort": 0, 234 | "value_type": "individual" 235 | }, 236 | "type": "graph", 237 | "xaxis": { 238 | "buckets": null, 239 | "mode": "time", 240 | "name": null, 241 | "show": true, 242 | "values": [] 243 | }, 244 | "yaxes": [ 245 | { 246 | "format": "s", 247 | "label": null, 248 | "logBase": 1, 249 | "max": null, 250 | "min": null, 251 | "show": true 252 | }, 253 | { 254 | "format": "none", 255 | 
"label": "", 256 | "logBase": 1, 257 | "max": "1", 258 | "min": 0, 259 | "show": false 260 | } 261 | ] 262 | } 263 | ], 264 | "repeat": "site", 265 | "repeatIteration": null, 266 | "repeatRowId": null, 267 | "showTitle": true, 268 | "title": "$site", 269 | "titleSize": "h6" 270 | }, 271 | { 272 | "collapse": false, 273 | "height": "500px", 274 | "panels": [ 275 | { 276 | "aliasColors": { 277 | "Standard Deviation": "#DEDAF7" 278 | }, 279 | "bars": false, 280 | "dashLength": 10, 281 | "dashes": false, 282 | "datasource": "Prometheus", 283 | "decimals": 3, 284 | "editable": true, 285 | "error": false, 286 | "fill": 1, 287 | "grid": {}, 288 | "height": "", 289 | "id": 2, 290 | "legend": { 291 | "alignAsTable": true, 292 | "avg": false, 293 | "current": false, 294 | "hideEmpty": true, 295 | "max": false, 296 | "min": true, 297 | "rightSide": false, 298 | "show": true, 299 | "sideWidth": null, 300 | "total": false, 301 | "values": true 302 | }, 303 | "lines": true, 304 | "linewidth": 1, 305 | "links": [], 306 | "minSpan": 2, 307 | "nullPointMode": "connected", 308 | "percentage": false, 309 | "pointradius": 5, 310 | "points": false, 311 | "renderer": "flot", 312 | "repeat": "region", 313 | "scopedVars": { 314 | "region": { 315 | "selected": false, 316 | "text": "local", 317 | "value": "local" 318 | }, 319 | "site": { 320 | "selected": false, 321 | "text": "https://prometheus.io", 322 | "value": "https://prometheus.io" 323 | } 324 | }, 325 | "seriesOverrides": [ 326 | { 327 | "alias": "down", 328 | "bars": true, 329 | "color": "#BF1B00", 330 | "legend": false, 331 | "stack": false, 332 | "yaxis": 2 333 | }, 334 | { 335 | "alias": "resolve", 336 | "color": "#2F575E", 337 | "zindex": -3 338 | }, 339 | { 340 | "alias": "connect", 341 | "color": "#EAB839", 342 | "zindex": -2 343 | }, 344 | { 345 | "alias": "tls", 346 | "color": "#6D1F62", 347 | "zindex": -1 348 | }, 349 | { 350 | "alias": "processing", 351 | "color": "#629E51", 352 | "zindex": 1 353 | }, 354 | { 355 | "alias": "transfer", 356 | "color": "#64B0C8", 357 | "zindex": 2 358 | } 359 | ], 360 | "spaceLength": 10, 361 | "span": 12, 362 | "stack": true, 363 | "steppedLine": true, 364 | "targets": [ 365 | { 366 | "expr": "sum by (phase) (probe_http_duration_seconds{instance=~\"$site\",region=~\"$region\"})\n", 367 | "format": "time_series", 368 | "hide": false, 369 | "interval": "", 370 | "intervalFactor": 2, 371 | "legendFormat": "{{phase}}", 372 | "refId": "A", 373 | "step": 4 374 | }, 375 | { 376 | "expr": "(1 - probe_success{instance=~\"$site\",region=~\"$region\"} > 0)", 377 | "format": "time_series", 378 | "interval": "", 379 | "intervalFactor": 2, 380 | "legendFormat": "down", 381 | "refId": "B", 382 | "step": 4 383 | } 384 | ], 385 | "thresholds": [], 386 | "timeFrom": null, 387 | "timeShift": null, 388 | "title": "$region", 389 | "tooltip": { 390 | "msResolution": false, 391 | "shared": true, 392 | "sort": 0, 393 | "value_type": "individual" 394 | }, 395 | "type": "graph", 396 | "xaxis": { 397 | "buckets": null, 398 | "mode": "time", 399 | "name": null, 400 | "show": true, 401 | "values": [] 402 | }, 403 | "yaxes": [ 404 | { 405 | "format": "s", 406 | "label": null, 407 | "logBase": 1, 408 | "max": null, 409 | "min": null, 410 | "show": true 411 | }, 412 | { 413 | "format": "none", 414 | "label": "", 415 | "logBase": 1, 416 | "max": "1", 417 | "min": 0, 418 | "show": false 419 | } 420 | ] 421 | } 422 | ], 423 | "repeat": null, 424 | "repeatIteration": 1512073262759, 425 | "repeatRowId": 1, 426 | "showTitle": true, 427 | "title": 
"$site", 428 | "titleSize": "h6" 429 | }, 430 | { 431 | "collapse": false, 432 | "height": "500px", 433 | "panels": [ 434 | { 435 | "aliasColors": { 436 | "Standard Deviation": "#DEDAF7" 437 | }, 438 | "bars": false, 439 | "dashLength": 10, 440 | "dashes": false, 441 | "datasource": "Prometheus", 442 | "decimals": 3, 443 | "editable": true, 444 | "error": false, 445 | "fill": 1, 446 | "grid": {}, 447 | "height": "", 448 | "id": 3, 449 | "legend": { 450 | "alignAsTable": true, 451 | "avg": false, 452 | "current": false, 453 | "hideEmpty": true, 454 | "max": false, 455 | "min": true, 456 | "rightSide": false, 457 | "show": true, 458 | "sideWidth": null, 459 | "total": false, 460 | "values": true 461 | }, 462 | "lines": true, 463 | "linewidth": 1, 464 | "links": [], 465 | "minSpan": 2, 466 | "nullPointMode": "connected", 467 | "percentage": false, 468 | "pointradius": 5, 469 | "points": false, 470 | "renderer": "flot", 471 | "repeat": "region", 472 | "scopedVars": { 473 | "region": { 474 | "selected": false, 475 | "text": "local", 476 | "value": "local" 477 | }, 478 | "site": { 479 | "selected": false, 480 | "text": "https://www.google.com", 481 | "value": "https://www.google.com" 482 | } 483 | }, 484 | "seriesOverrides": [ 485 | { 486 | "alias": "down", 487 | "bars": true, 488 | "color": "#BF1B00", 489 | "legend": false, 490 | "stack": false, 491 | "yaxis": 2 492 | }, 493 | { 494 | "alias": "resolve", 495 | "color": "#2F575E", 496 | "zindex": -3 497 | }, 498 | { 499 | "alias": "connect", 500 | "color": "#EAB839", 501 | "zindex": -2 502 | }, 503 | { 504 | "alias": "tls", 505 | "color": "#6D1F62", 506 | "zindex": -1 507 | }, 508 | { 509 | "alias": "processing", 510 | "color": "#629E51", 511 | "zindex": 1 512 | }, 513 | { 514 | "alias": "transfer", 515 | "color": "#64B0C8", 516 | "zindex": 2 517 | } 518 | ], 519 | "spaceLength": 10, 520 | "span": 12, 521 | "stack": true, 522 | "steppedLine": true, 523 | "targets": [ 524 | { 525 | "expr": "sum by (phase) (probe_http_duration_seconds{instance=~\"$site\",region=~\"$region\"})\n", 526 | "format": "time_series", 527 | "hide": false, 528 | "interval": "", 529 | "intervalFactor": 2, 530 | "legendFormat": "{{phase}}", 531 | "refId": "A", 532 | "step": 4 533 | }, 534 | { 535 | "expr": "(1 - probe_success{instance=~\"$site\",region=~\"$region\"} > 0)", 536 | "format": "time_series", 537 | "interval": "", 538 | "intervalFactor": 2, 539 | "legendFormat": "down", 540 | "refId": "B", 541 | "step": 4 542 | } 543 | ], 544 | "thresholds": [], 545 | "timeFrom": null, 546 | "timeShift": null, 547 | "title": "$region", 548 | "tooltip": { 549 | "msResolution": false, 550 | "shared": true, 551 | "sort": 0, 552 | "value_type": "individual" 553 | }, 554 | "type": "graph", 555 | "xaxis": { 556 | "buckets": null, 557 | "mode": "time", 558 | "name": null, 559 | "show": true, 560 | "values": [] 561 | }, 562 | "yaxes": [ 563 | { 564 | "format": "s", 565 | "label": null, 566 | "logBase": 1, 567 | "max": null, 568 | "min": null, 569 | "show": true 570 | }, 571 | { 572 | "format": "none", 573 | "label": "", 574 | "logBase": 1, 575 | "max": "1", 576 | "min": 0, 577 | "show": false 578 | } 579 | ] 580 | } 581 | ], 582 | "repeat": null, 583 | "repeatIteration": 1512073262759, 584 | "repeatRowId": 1, 585 | "showTitle": true, 586 | "title": "$site", 587 | "titleSize": "h6" 588 | } 589 | ], 590 | "schemaVersion": 14, 591 | "style": "dark", 592 | "tags": [], 593 | "templating": { 594 | "list": [ 595 | { 596 | "allValue": null, 597 | "current": { 598 | "text": "All", 599 | "value": 
"$__all" 600 | }, 601 | "datasource": "Prometheus", 602 | "hide": 0, 603 | "includeAll": true, 604 | "label": "Site", 605 | "multi": true, 606 | "name": "site", 607 | "options": [], 608 | "query": "label_values(probe_http_status_code, instance)", 609 | "refresh": 1, 610 | "regex": "", 611 | "sort": 0, 612 | "tagValuesQuery": "", 613 | "tags": [], 614 | "tagsQuery": "", 615 | "type": "query", 616 | "useTags": false 617 | }, 618 | { 619 | "allValue": ".+", 620 | "current": { 621 | "text": "All", 622 | "value": "$__all" 623 | }, 624 | "datasource": "Prometheus", 625 | "hide": 0, 626 | "includeAll": true, 627 | "label": "Region", 628 | "multi": true, 629 | "name": "region", 630 | "options": [], 631 | "query": "label_values(probe_http_status_code{region!=\"\"}, region)", 632 | "refresh": 1, 633 | "regex": "", 634 | "sort": 0, 635 | "tagValuesQuery": "", 636 | "tags": [], 637 | "tagsQuery": "", 638 | "type": "query", 639 | "useTags": false 640 | }, 641 | { 642 | "allValue": null, 643 | "current": { 644 | "text": "All", 645 | "value": "$__all" 646 | }, 647 | "datasource": "Prometheus", 648 | "hide": 2, 649 | "includeAll": true, 650 | "label": null, 651 | "multi": true, 652 | "name": "ssl_expires", 653 | "options": [], 654 | "query": "query_result(bottomk(5, min by (instance) (probe_ssl_earliest_cert_expiry) - time()))", 655 | "refresh": 1, 656 | "regex": "/instance=\"([^\"]*)\"/", 657 | "sort": 0, 658 | "tagValuesQuery": "", 659 | "tags": [], 660 | "tagsQuery": "", 661 | "type": "query", 662 | "useTags": false 663 | } 664 | ] 665 | }, 666 | "time": { 667 | "from": "now-15m", 668 | "to": "now" 669 | }, 670 | "timepicker": { 671 | "refresh_intervals": [ 672 | "5s", 673 | "10s", 674 | "30s", 675 | "1m", 676 | "5m", 677 | "15m", 678 | "30m", 679 | "1h", 680 | "2h", 681 | "1d" 682 | ], 683 | "time_options": [ 684 | "5m", 685 | "15m", 686 | "1h", 687 | "6h", 688 | "12h", 689 | "24h", 690 | "2d", 691 | "7d", 692 | "30d" 693 | ] 694 | }, 695 | "timezone": "browser", 696 | "title": "HTTP Ping", 697 | "id": 1, 698 | "version": 1 699 | } 700 | -------------------------------------------------------------------------------- /perks/monitoring/grafana/provisioning/dashboards/all.yml: -------------------------------------------------------------------------------- 1 | apiVersion: 1 2 | 3 | providers: 4 | - name: 'default' 5 | orgId: 1 6 | folder: '' 7 | type: file 8 | disableDeletion: false 9 | updateIntervalSeconds: 10 #how often Grafana will scan for changed dashboards 10 | options: 11 | path: /var/lib/grafana/dashboards 12 | -------------------------------------------------------------------------------- /perks/monitoring/grafana/provisioning/datasources/all.yml: -------------------------------------------------------------------------------- 1 | # config file version 2 | apiVersion: 1 3 | 4 | # list of datasources that should be deleted from the database 5 | deleteDatasources: 6 | - name: Prometheus 7 | orgId: 1 8 | 9 | # list of datasources to insert/update depending 10 | # whats available in the database 11 | datasources: 12 | # name of the datasource. 
Required 13 | - name: 'Prometheus' 14 | type: 'prometheus' 15 | access: 'proxy' 16 | org_id: 1 17 | url: 'http://prometheus:9090' 18 | is_default: true 19 | version: 1 20 | editable: true 21 | - name: 'Loki' 22 | type: 'loki' 23 | access: 'proxy' 24 | org_id: 1 25 | url: http://loki.k.voronenko.net:3100 26 | editable: true 27 | version: 1 28 | -------------------------------------------------------------------------------- /perks/monitoring/prometheus/Readme.md: -------------------------------------------------------------------------------- 1 | Get ideas from https://github.com/Voronenko/awesome-prometheus-alerts 2 | -------------------------------------------------------------------------------- /perks/monitoring/prometheus/alert.rules: -------------------------------------------------------------------------------- 1 | groups: 2 | - name: example 3 | rules: 4 | 5 | - alert: service_down 6 | expr: up == 0 7 | for: 30s 8 | labels: 9 | severity: page 10 | annotations: 11 | summary: "Instance {{ $labels.instance }} down" 12 | description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 30 seconds." 13 | 14 | - alert: high_load 15 | expr: node_load1 > 0.8 16 | for: 30s 17 | labels: 18 | severity: page 19 | annotations: 20 | summary: "Instance {{ $labels.instance }} under high load" 21 | description: "{{ $labels.instance }} of job {{ $labels.job }} is under high load." 22 | 23 | - alert: site_down 24 | expr: probe_success < 1 25 | for: 30s 26 | labels: 27 | severity: page 28 | annotations: 29 | summary: "Site Down: {{$labels.instance}}" 30 | description: "Site Down: {{$labels.instance}} for more than 30 seconds" 31 | -------------------------------------------------------------------------------- /perks/monitoring/prometheus/prometheus.yml: -------------------------------------------------------------------------------- 1 | # my global config 2 | global: 3 | scrape_interval: 15s # By default, scrape targets every 15 seconds. 4 | evaluation_interval: 15s # By default, scrape targets every 15 seconds. 5 | # scrape_timeout is set to the global default (10s). 6 | 7 | # Attach these labels to any time series or alerts when communicating with 8 | # external systems (federation, remote storage, Alertmanager). 9 | external_labels: 10 | monitor: 'my-project' 11 | 12 | # Load and evaluate rules in this file every 'evaluation_interval' seconds. 13 | rule_files: 14 | - 'alert.rules' 15 | # - "first.rules" 16 | # - "second.rules" 17 | 18 | # alert 19 | alerting: 20 | alertmanagers: 21 | - scheme: http 22 | static_configs: 23 | - targets: 24 | - "alertmanager:9093" 25 | 26 | # A scrape configuration containing exactly one endpoint to scrape: 27 | # Here it's Prometheus itself. 28 | scrape_configs: 29 | # The job name is added as a label `job=` to any timeseries scraped from this config. 30 | - job_name: 'prometheus' 31 | - job_name: 'node' 32 | 33 | # Override the global default and scrape targets from this job every 5 seconds. 34 | scrape_interval: 5s 35 | 36 | # metrics_path defaults to '/metrics' 37 | # scheme defaults to 'http'. 38 | 39 | static_configs: 40 | - targets: 41 | - localhost:9090 42 | - cadvisor:8080 43 | - node-exporter:9100 44 | - job_name: 'blackbox' 45 | metrics_path: /probe 46 | params: 47 | module: [http_2xx] # Look for a HTTP 200 response. 
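    # Sanity-check sketch: the relabel_configs that follow hand each target below to the
    # exporter as a URL parameter, so a module can be probed by hand before real targets
    # are wired in (assumes the blackbox_exporter published on host port 9115; the target
    # URL here is only an example):
    #   curl 'http://localhost:9115/probe?module=http_2xx&target=https://example.com'
    # A reachable target reports probe_success 1 in the returned metrics.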
48 | static_configs: 49 | - targets: 50 | # - https://www.voronenko.info 51 | # - https://www.softasap.com 52 | relabel_configs: 53 | - source_labels: [__address__] 54 | target_label: __param_target 55 | - source_labels: [__param_target] 56 | target_label: instance 57 | - target_label: __address__ 58 | replacement: "blackbox_exporter:9115" # Blackbox exporter. 59 | - target_label: region 60 | replacement: "local" 61 | -------------------------------------------------------------------------------- /perks/next-cloud/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3.7" 2 | services: 3 | db: 4 | restart: always 5 | image: postgres:13.10-bullseye 6 | networks: 7 | - nextcloud 8 | ports: 9 | - 5432:5432 10 | environment: 11 | - POSTGRES_USER=postgres 12 | - POSTGRES_PASSWORD=postgres 13 | - POSTGRES_DB=postgres 14 | # - POSTGRES_MULTIPLE_DATABASES=nextcloud 15 | volumes: 16 | - ./postgres:/var/lib/postgresql/data 17 | - ./postgres_init/initdb.sh:/docker-entrypoint-initdb.d/initdb.sh 18 | nextcloud_backup: 19 | container_name: nextcloud_db_backup 20 | image: tiredofit/db-backup 21 | depends_on: 22 | - db 23 | volumes: 24 | - ./db_backups/:/backup 25 | #- ./post-script.sh:/assets/custom-scripts/post-script.sh 26 | networks: 27 | nextcloud: 28 | environment: 29 | - DEBUG_MODE=TRUE 30 | - DB_TYPE=postgres 31 | - DB_HOST=db 32 | - DB_NAME=nextcloud 33 | - DB_USER=nextcloud 34 | - DB_PASS=nextcloud 35 | - DB_DUMP_FREQ=360 # backup 12 hours 12 * 60 36 | # - DB_DUMP_BEGIN="+10" # backup starts immediately 37 | - DB_CLEANUP_TIME=72000 # clean backups that are older than 72000 minutes 38 | - CHECKSUM=SHA1 39 | - COMPRESSION=GZ 40 | - SPLIT_DB=true 41 | - CONTAINER_ENABLE_MONITORING=FALSE 42 | restart: always 43 | 44 | redis: 45 | image: redis:6.0.16 46 | restart: always 47 | networks: 48 | - nextcloud 49 | volumes: 50 | - ./redis:/var/lib/redis 51 | nextcloud: 52 | image: nextcloud:27.0.2 53 | restart: always 54 | networks: 55 | - nextcloud 56 | - traefik-public 57 | depends_on: 58 | - redis 59 | - db 60 | ports: 61 | - 8888:80 62 | labels: 63 | - "traefik.enable=true" 64 | # - "traefik.http.routers.heim.rule=Host(`nextcloud.fiks.im`)" 65 | # - "traefik.http.routers.heim.entrypoints=web" 66 | - "traefik.http.services.nextcloud.loadbalancer.server.port=80" 67 | 68 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 69 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 70 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 71 | 72 | - "traefik.http.routers.nextcloud-secure.entrypoints=websecure" 73 | - "traefik.http.routers.nextcloud-secure.rule=Host(`nextcloud.fiks.im`)" 74 | # - "traefik.http.routers.nextcloud-secure.middlewares=traefik-auth" 75 | - "traefik.http.routers.nextcloud-secure.tls=true" 76 | environment: 77 | - POSTGRES_DB=nextcloud 78 | - POSTGRES_USER=nextcloud 79 | - POSTGRES_PASSWORD=nextcloud 80 | - POSTGRES_HOST=db 81 | - NEXTCLOUD_ADMIN_USER=admin 82 | - NEXTCLOUD_ADMIN_PASSWORD=admin 83 | - REDIS_HOST=redis 84 | - NEXTCLOUD_TRUSTED_DOMAINS=nextcloud.fiks.im 85 | - TRUSTED_PROXIES=172.29.8.0/8 86 | volumes: 87 | - ./nextcloud:/var/www/html 88 | # - ./nextcloud/documents_storage:/var/www/html/data 89 | networks: 90 | nextcloud: 91 | traefik-public: 92 | external: true 93 | -------------------------------------------------------------------------------- /perks/next-cloud/postgres_init/initdb.sh: 
-------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | psql "postgres://$POSTGRES_USER:$POSTGRES_PASSWORD@$POSTGRES_HOST/$POSTGRES_DB?sslmode=disable" <<-EOSQL 3 | 4 | create database nextcloud; 5 | 6 | create user nextcloud with password 'nextcloud'; 7 | 8 | grant all privileges on database nextcloud to nextcloud; 9 | 10 | EOSQL 11 | -------------------------------------------------------------------------------- /perks/nexus/README.md: -------------------------------------------------------------------------------- 1 | # Enabling scripting 2 | docker exec -it nexus bash 3 | echo "nexus.scripts.allowCreation=true" >> /nexus-data/etc/nexus.properties 4 | exit 5 | docker-compose restart 6 | 7 | ## configuring mirror 8 | 9 | As user root, make or edit the following file: /etc/docker/daemon.json 10 | 11 | { 12 | "insecure-registries": ["nexus.fiks.im:5000"], 13 | "registry-mirrors": ["https://nexus.fiks.im:5000"] 14 | } 15 | -------------------------------------------------------------------------------- /perks/nexus/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.4' 2 | services: 3 | nexus: 4 | container_name: nexus 5 | image: sonatype/nexus3 6 | restart: unless-stopped 7 | environment: 8 | - REGISTRY_HTTP_RELATIVEURLS=true 9 | - TZ=Europe/Warsaw 10 | # ports: 11 | # - 8081:8081 12 | # - 5000:5000 13 | networks: 14 | traefik-public: 15 | volumes: 16 | - /etc/localtime:/etc/localtime:ro 17 | - nexus-data:/nexus-data 18 | labels: 19 | - "traefik.enable=true" 20 | - "traefik.http.services.nexus-secure.loadbalancer.server.port=8081" 21 | - "traefik.http.routers.nexus-secure.service=nexus-secure" 22 | - "traefik.http.routers.nexus-secure.entrypoints=websecure" 23 | - "traefik.http.routers.nexus-secure.rule=Host(`nexus.fiks.im`)" 24 | - "traefik.http.routers.nexus-secure.tls=true" 25 | # Registry Endpoint 26 | - "traefik.http.services.nexusregistry-secure.loadbalancer.server.port=5000" 27 | - "traefik.http.routers.nexusregistry-secure.service=nexusregistry-secure" 28 | - "traefik.http.routers.nexusregistry-secure.entrypoints=websecure" 29 | - "traefik.http.routers.nexusregistry-secure.rule=Host(`registry.fiks.im`)" 30 | - "traefik.http.routers.nexusregistry-secure.tls=true" 31 | 32 | - "traefik.http.services.nexusregistry.loadbalancer.server.port=5000" 33 | - "traefik.http.routers.nexusregistry.service=nexusregistry" 34 | - "traefik.http.routers.nexusregistry.entrypoints=web" 35 | - "traefik.http.routers.nexusregistry.rule=Host(`registry.fiks.im`)" 36 | 37 | 38 | volumes: 39 | nexus-data: 40 | # external: true 41 | 42 | # nexus-data: 43 | # driver: local 44 | # driver_opts: 45 | # type: nfs 46 | # o: addr=192.168.86.12,rw 47 | # device: ":/volume1/Docker/NexusData" 48 | 49 | networks: 50 | traefik-public: 51 | external: true 52 | -------------------------------------------------------------------------------- /perks/nexus/docs/create-docker-proxy.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "CreateDockerProxy", 3 | "type": "groovy", 4 | "content": "repository.createDockerProxy('docker-proxy', 'https://registry-1.docker.io', 'HUB', null, 5000, null)" 5 | } 6 | -------------------------------------------------------------------------------- /perks/nexus/docs/docker-registry-nexus-01.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/Voronenko/traefik2-compose-template/f9da9afa757a29a43e4fdc32de8921a9ae5e2639/perks/nexus/docs/docker-registry-nexus-01.png -------------------------------------------------------------------------------- /perks/nexus/docs/docker-registry-nexus-02.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Voronenko/traefik2-compose-template/f9da9afa757a29a43e4fdc32de8921a9ae5e2639/perks/nexus/docs/docker-registry-nexus-02.png -------------------------------------------------------------------------------- /perks/nexus/docs/dockerinfo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Voronenko/traefik2-compose-template/f9da9afa757a29a43e4fdc32de8921a9ae5e2639/perks/nexus/docs/dockerinfo.png -------------------------------------------------------------------------------- /perks/nexus/docs/groupconfiguration.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Voronenko/traefik2-compose-template/f9da9afa757a29a43e4fdc32de8921a9ae5e2639/perks/nexus/docs/groupconfiguration.webp -------------------------------------------------------------------------------- /perks/nexus/docs/proxyprivateconfiguration.webp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Voronenko/traefik2-compose-template/f9da9afa757a29a43e4fdc32de8921a9ae5e2639/perks/nexus/docs/proxyprivateconfiguration.webp -------------------------------------------------------------------------------- /perks/nexus/docs/showinginnexus.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Voronenko/traefik2-compose-template/f9da9afa757a29a43e4fdc32de8921a9ae5e2639/perks/nexus/docs/showinginnexus.png -------------------------------------------------------------------------------- /perks/nexus/nexus_registry.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | export NEXUS_ADDRESS=https://nexus.fiks.im 4 | export ADMIN_PASS=admin 5 | 6 | until curl --fail --insecure $NEXUS_ADDRESS; do 7 | sleep 1 8 | done 9 | 10 | #echo curl -v -u admin:$ADMIN_PASS --insecure --header 'Content-Type: application/json' "${NEXUS_ADDRESS}/service/rest/v1/script" -d @./docs/create-docker-proxy.json 11 | 12 | curl -v -u admin:$ADMIN_PASS --insecure --header 'Content-Type: application/json' "${NEXUS_ADDRESS}/service/rest/v1/script" -d @./docs/create-docker-proxy.json 13 | curl -v -X POST -u admin:$ADMIN_PASS --insecure --header 'Content-Type: text/plain' "${NEXUS_ADDRESS}/service/rest/v1/script/CreateDockerProxy/run" 14 | -------------------------------------------------------------------------------- /perks/ollama-intel-arc/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM intelanalytics/ipex-llm-inference-cpp-xpu:latest 2 | 3 | ENV DEBIAN_FRONTEND=noninteractive 4 | ENV OLLAMA_HOST=0.0.0.0:11434 5 | 6 | COPY ./scripts/serve.sh /usr/share/lib/serve.sh 7 | 8 | ENTRYPOINT ["/bin/bash", "/usr/share/lib/serve.sh"] 9 | -------------------------------------------------------------------------------- /perks/ollama-intel-arc/README.md: -------------------------------------------------------------------------------- 1 | # Run Ollama using your Intel Arc GPU 2 | 3 | A Docker-based setup for running Ollama as a 
backend and Open WebUI as a frontend, leveraging Intel Arc Series GPUs on Linux systems. 4 | 5 | ## Overview 6 | This repository provides a convenient way to run Ollama as a backend and Open WebUI as a frontend, allowing you to interact with Large Language Models (LLM) using an Intel Arc Series GPU on your Linux system. 7 | 8 | ![screenshot](resources/open-webui.png) 9 | 10 | ## Services 11 | 1. Ollama 12 | * Runs llama.cpp and Ollama with IPEX-LLM on your Linux computer with Intel Arc GPU. 13 | * Built following the guidelines from [Intel](https://github.com/intel/ipex-llm/blob/main/docs/mddocs/DockerGuides/README.md). 14 | * Uses the official [Intel ipex-llm docker image](https://hub.docker.com/r/intelanalytics/ipex-llm-inference-cpp-xpu) as the base container. 15 | * Uses the latest versions of required packages, prioritizing cutting-edge features over stability. 16 | * Exposes port `11434` for connecting other tools to your Ollama service. 17 | 18 | 2. Open WebUI 19 | * The official distribution of Open WebUI. 20 | * `WEBUI_AUTH` is turned off for authentication-free usage. 21 | * `ENABLE_OPENAI_API` and `ENABLE_OLLAMA_API` flags are set to off and on, respectively, allowing interactions via Ollama only. 22 | 23 | ## Setup 24 | Run the following commands to start your Ollama instance 25 | ```bash 26 | $ git clone https://github.com/eleiton/ollama-intel-arc.git 27 | $ cd ollama-intel-arc 28 | $ podman compose up 29 | ``` 30 | 31 | ## Validate 32 | Run the following command to verify your Ollama instance is up and running 33 | ```bash 34 | $ curl http://localhost:11434/ 35 | Ollama is running 36 | ``` 37 | When using Open WebUI, you should see this partial output in your console, indicating your arc gpu was detected 38 | ```bash 39 | [ollama-intel-arc] | Found 1 SYCL devices: 40 | [ollama-intel-arc] | | | | | |Max | |Max |Global | | 41 | [ollama-intel-arc] | | | | | |compute|Max work|sub |mem | | 42 | [ollama-intel-arc] | |ID| Device Type| Name|Version|units |group |group|size | Driver version| 43 | [ollama-intel-arc] | |--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------| 44 | [ollama-intel-arc] | | 0| [level_zero:gpu:0]| Intel Arc Graphics| 12.71| 128| 1024| 32| 62400M| 1.6.32224+14| 45 | ``` 46 | 47 | ## Usage 48 | * Open your web browser to http://localhost:3000 to access the Open WebUI web page. 49 | * For more information on using Open WebUI, refer to the official documentation at https://docs.openwebui.com/ . 50 | 51 | ## Updating the images 52 | Before any updates, be sure to stop your containers 53 | ```bash 54 | $ podman compose down 55 | ``` 56 | 57 | ### ollama-intel-arc Image 58 | If there are new updates in the [ipex-llm docker image](https://hub.docker.com/r/intelanalytics/ipex-llm-inference-cpp-xpu), you may want to update the Ollama image and containers, to stay updated. 
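The next few commands walk through exactly that; condensed into one sequence, using the container and image names this compose file produces (a sketch, adjust if yours differ), it looks like:

```bash
podman compose down
podman rm ollama-intel-arc                     # container_name from docker-compose.yml, if still present
podman rmi localhost/ollama-intel-arc:latest   # locally built image, as shown by `podman image list`
podman compose up                              # rebuilds the image from the Dockerfile
```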
59 | 60 | First check any containers running the docker image, and remove them 61 | ```bash 62 | $ podman ps -a 63 | CONTAINER ID IMAGE 64 | 111479fde20f localhost/ollama-intel-arc:latest 65 | 66 | $ podman rm 67 | ``` 68 | 69 | The go ahead and remove the docker image: 70 | ```bash 71 | $ podman image list 72 | REPOSITORY TAG 73 | localhost/ollama-intel-arc latest 74 | 75 | $ podman rmi 76 | ``` 77 | After that, you can run compose up, to rebuild the image from scratch 78 | ```bash 79 | $ podman compose up 80 | ``` 81 | ### open-webui Image 82 | If there are new updates in Open WebUI, just do a pull and the new changes will be retrieved automatically. 83 | ```bash 84 | $ podman compose pull 85 | ``` 86 | 87 | ## Manually connecting to your Ollama container 88 | You can connect directly to your Ollama container by running these commands: 89 | 90 | ```bash 91 | $ podman exec -it ollama-intel-arc /bin/bash 92 | $ /llm/ollama/ollama -v 93 | ``` 94 | 95 | ## My development environment: 96 | * Core Ultra 7 155H 97 | * Intel® Arc™ Graphics (Meteor Lake-P) 98 | * Fedora 41 99 | 100 | ## References 101 | * [Open WebUI documentation](https://docs.openwebui.com/) 102 | * [Intel - ipex-llm](https://github.com/intel/ipex-llm/blob/main/docs/mddocs/DockerGuides/docker_cpp_xpu_quickstart.md) 103 | -------------------------------------------------------------------------------- /perks/ollama-intel-arc/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3' 2 | services: 3 | ollama-intel-arc: 4 | build: . 5 | #image: ollama-intel-arc:latest 6 | container_name: ollama-intel-arc 7 | networks: 8 | ollama: 9 | restart: unless-stopped 10 | devices: 11 | - /dev/dri:/dev/dri 12 | volumes: 13 | - ollama-volume:/root/.ollama 14 | ports: 15 | - 11434:11434 16 | open-webui: 17 | image: ghcr.io/open-webui/open-webui:latest 18 | container_name: open-webui 19 | volumes: 20 | - open-webui-volume:/app/backend/data 21 | depends_on: 22 | - ollama-intel-arc 23 | networks: 24 | traefik-public: 25 | ollama: 26 | environment: 27 | - WEBUI_AUTH=False 28 | - ENABLE_OPENAI_API=True 29 | - ENABLE_OLLAMA_API=True 30 | extra_hosts: 31 | - host.docker.internal:host-gateway 32 | labels: 33 | - "traefik.enable=true" 34 | - "traefik.http.routers.ollama.rule=Host(`llmd.voronenko.net`,`llmd.voronenko.net`,`llmd.fiks.im`)" 35 | - "traefik.http.routers.llm.entrypoints=web" 36 | - "traefik.http.services.llm.loadbalancer.server.port=8080" 37 | 38 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 39 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 40 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 41 | 42 | - "traefik.http.routers.llm-secure.entrypoints=websecure" 43 | - "traefik.http.routers.llm-secure.rule=Host(`llmd.voronenko.net`,`llmd.voronenko.net`,`llmd.fiks.im`)" 44 | # - "traefik.http.routers.llm-secure.middlewares=traefik-auth" 45 | - "traefik.http.routers.llm-secure.tls=true" 46 | restart: unless-stopped 47 | 48 | volumes: 49 | ollama-volume: {} 50 | open-webui-volume: {} 51 | networks: 52 | traefik-public: 53 | external: true 54 | ollama: 55 | -------------------------------------------------------------------------------- /perks/ollama-intel-arc/scripts/serve.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | cd /llm/scripts/ 4 | source ipex-llm-init --gpu --device Arc 5 | 6 | mkdir -p /llm/ollama 7 | cd /llm/ollama 8 | init-ollama 
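# Note: init-ollama ships with the ipex-llm base image; it is expected to link the bundled
# ollama binary into this working directory so the ./ollama call below can run it with the
# Intel GPU environment initialised above.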
9 | ./ollama serve 10 | -------------------------------------------------------------------------------- /perks/openldap/README.md: -------------------------------------------------------------------------------- 1 | https://www.techrepublic.com/article/how-to-populate-an-ldap-server-with-users-and-groups-via-phpldapadmin/ 2 | -------------------------------------------------------------------------------- /perks/openldap/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.7' 2 | services: 3 | ldap-server: 4 | image: osixia/openldap:1.5.0 5 | container_name: openldap 6 | environment: 7 | LDAP_LOG_LEVEL: "256" 8 | LDAP_ORGANISATION: "Example Inc." 9 | LDAP_DOMAIN: "example.org" 10 | LDAP_BASE_DN: "" 11 | LDAP_ADMIN_PASSWORD: "admin" 12 | LDAP_CONFIG_PASSWORD: "config" 13 | LDAP_READONLY_USER: "true" 14 | LDAP_READONLY_USER_USERNAME: "readonly" 15 | LDAP_READONLY_USER_PASSWORD: "readonly" 16 | LDAP_RFC2307BIS_SCHEMA: "false" 17 | LDAP_BACKEND: "mdb" 18 | LDAP_TLS: "true" 19 | LDAP_TLS_CRT_FILENAME: "ldap.crt" 20 | LDAP_TLS_KEY_FILENAME: "ldap.key" 21 | LDAP_TLS_DH_PARAM_FILENAME: "dhparam.pem" 22 | LDAP_TLS_CA_CRT_FILENAME: "ca.crt" 23 | LDAP_TLS_ENFORCE: "false" 24 | LDAP_TLS_CIPHER_SUITE: "SECURE256:-VERS-SSL3.0" 25 | LDAP_TLS_VERIFY_CLIENT: "demand" 26 | LDAP_REPLICATION: "false" 27 | #LDAP_REPLICATION_CONFIG_SYNCPROV: 'binddn="cn=admin,cn=config" bindmethod=simple credentials="$$LDAP_CONFIG_PASSWORD" searchbase="cn=config" type=refreshAndPersist retry="60 +" timeout=1 starttls=critical' 28 | #LDAP_REPLICATION_DB_SYNCPROV: 'binddn="cn=admin,$$LDAP_BASE_DN" bindmethod=simple credentials="$$LDAP_ADMIN_PASSWORD" searchbase="$$LDAP_BASE_DN" type=refreshAndPersist interval=00:00:00:10 retry="60 +" timeout=1 starttls=critical' 29 | #LDAP_REPLICATION_HOSTS: "#PYTHON2BASH:['ldap://ldap.example.org','ldap://ldap2.example.org']" 30 | KEEP_EXISTING_CONFIG: "false" 31 | LDAP_REMOVE_CONFIG_AFTER_SETUP: "true" 32 | LDAP_SSL_HELPER_PREFIX: "ldap" 33 | tty: true 34 | stdin_open: true 35 | networks: 36 | - openldap 37 | volumes: 38 | - openldap_var_lib:/var/lib/ldap 39 | - openldap_slap_d:/etc/ldap/slapd.d 40 | - openldap_certs:/container/service/slapd/assets/certs/ 41 | ports: 42 | - "389:389" 43 | - "636:636" 44 | # For replication to work correctly, domainname and hostname must be 45 | # set correctly so that "hostname"."domainname" equates to the 46 | # fully-qualified domain name for the host. 
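      # Sanity-check sketch once the container is up (assumes the image derives the base DN
      # dc=example,dc=org from LDAP_DOMAIN above and uses LDAP_ADMIN_PASSWORD for cn=admin):
      #   ldapsearch -x -H ldap://localhost:389 -D "cn=admin,dc=example,dc=org" -w admin -b "dc=example,dc=org"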
47 | domainname: "example.org" 48 | hostname: "ldap-server" 49 | ldap-server-admin: 50 | image: osixia/phpldapadmin:latest 51 | container_name: phpldapadmin 52 | networks: 53 | - openldap 54 | - traefik-public 55 | environment: 56 | PHPLDAPADMIN_LDAP_HOSTS: "openldap" 57 | PHPLDAPADMIN_HTTPS: "false" 58 | labels: 59 | - "traefik.enable=true" 60 | - "traefik.http.routers.openldap.rule=Host(`ldap.fiks.im`)" 61 | - "traefik.http.routers.openldap.entrypoints=web" 62 | 63 | - "traefik.http.services.openldap.loadbalancer.server.port=80" 64 | 65 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 66 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 67 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 68 | 69 | - "traefik.http.routers.openldap-secure.entrypoints=websecure" 70 | - "traefik.http.routers.openldap-secure.rule=Host(`ldap.fiks.im`)" 71 | # - "traefik.http.routers.openldap-secure.middlewares=traefik-auth" 72 | - "traefik.http.routers.openldap-secure.tls=true" 73 | #ports: 74 | # - "8080:80" 75 | depends_on: 76 | - openldap 77 | 78 | volumes: 79 | openldap_var_lib: 80 | openldap_slap_d: 81 | openldap_certs: 82 | 83 | networks: 84 | openldap: 85 | traefik-public: 86 | external: true 87 | -------------------------------------------------------------------------------- /perks/openzipking/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.7' 2 | 3 | volumes: 4 | mysql_data: {} 5 | 6 | services: 7 | mysql: 8 | image: openzipkin/zipkin-mysql 9 | # Uncomment to expose the storage port for testing 10 | # ports: 11 | # - 3306:3306 12 | networks: 13 | - zipkin-app 14 | 15 | # The zipkin process services the UI, and also exposes a POST endpoint that 16 | # instrumentation can send trace data to. Scribe is disabled by default. 
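  # That POST endpoint can be smoke-tested by hand once the stack is up (sketch; assumes the
  # published port 9411 below is reachable from the host, and the trace/span IDs are made up):
  #   curl -X POST http://localhost:9411/api/v2/spans -H 'Content-Type: application/json' \
  #     -d '[{"traceId":"5af7183fb1d4cf5f","id":"352bff9a74ca9ad2","name":"manual-test","timestamp":1556604172355737,"duration":1000,"localEndpoint":{"serviceName":"curl-check"}}]'
  # The span should then be searchable in the UI at http://localhost:9411/zipkin/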
17 | zipkin: 18 | image: openzipkin/zipkin 19 | # Environment settings are defined here https://github.com/openzipkin/zipkin/blob/master/zipkin-server/README.md#environment-variables 20 | environment: 21 | - STORAGE_TYPE=mysql 22 | # Point the zipkin at the storage backend 23 | - MYSQL_HOST=mysql 24 | # Uncomment to enable scribe 25 | # - SCRIBE_ENABLED=true 26 | # Uncomment to enable self-tracing 27 | # - SELF_TRACING_ENABLED=true 28 | # Uncomment to enable debug logging 29 | # - JAVA_OPTS=-Dlogging.level.zipkin2=DEBUG 30 | ports: 31 | # Port used for the Zipkin UI and HTTP Api 32 | - 9411:9411 33 | # Uncomment if you set SCRIBE_ENABLED=true 34 | # - 9410:9410 35 | networks: 36 | - zipkin-app 37 | - traefik-public 38 | labels: 39 | - "traefik.enable=true" 40 | - "traefik.http.routers.zipkin.rule=Host(`zipkin.k.voronenko.net`)" 41 | - "traefik.http.routers.zipkin.entrypoints=web" 42 | 43 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 44 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 45 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 46 | 47 | - "traefik.http.services.zipkin-secure.loadbalancer.server.port=9411" 48 | - "traefik.http.routers.zipkin-secure.entrypoints=websecure" 49 | - "traefik.http.routers.zipkin-secure.rule=Host(`zipkin.k.voronenko.net`)" 50 | # - "traefik.http.routers.zipkin-secure.middlewares=traefik-auth" 51 | - "traefik.http.routers.zipkin-secure.tls=true" 52 | 53 | # Adds a cron to process spans since midnight every hour, and all spans each day 54 | # This data is served by http://192.168.99.100:8080/dependency 55 | # 56 | # For more details, see https://github.com/openzipkin/docker-zipkin-dependencies 57 | dependencies: 58 | image: openzipkin/zipkin-dependencies 59 | container_name: dependencies 60 | entrypoint: crond -f 61 | environment: 62 | - STORAGE_TYPE=mysql 63 | - MYSQL_HOST=mysql 64 | # Add the baked-in username and password for the zipkin-mysql image 65 | - MYSQL_USER=zipkin 66 | - MYSQL_PASS=zipkin 67 | # Uncomment to see dependency processing logs 68 | # - ZIPKIN_LOG_LEVEL=DEBUG 69 | # Uncomment to adjust memory used by the dependencies job 70 | # - JAVA_OPTS=-verbose:gc -Xms1G -Xmx1G 71 | networks: 72 | - zipkin-app 73 | 74 | networks: 75 | zipkin-app: 76 | traefik-public: 77 | external: true 78 | -------------------------------------------------------------------------------- /perks/photoprism/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.7' 2 | services: 3 | photoprism: 4 | ## Use photoprism/photoprism:preview for testing preview builds: 5 | restart: always 6 | image: photoprism/photoprism:latest 7 | ## Don't enable automatic restarts until PhotoPrism has been properly configured and tested! 8 | ## If the service gets stuck in a restart loop, this points to a memory, filesystem, network, or database issue: 9 | ## https://docs.photoprism.app/getting-started/troubleshooting/#fatal-server-errors 10 | # restart: unless-stopped 11 | security_opt: 12 | - seccomp:unconfined 13 | - apparmor:unconfined 14 | # ports: 15 | # - "2342:2342" # HTTP port (host:container) 16 | environment: 17 | PHOTOPRISM_ADMIN_PASSWORD: "photoprism" # !!! PLEASE CHANGE YOUR INITIAL "admin" PASSWORD !!! 
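      # If the instance was already initialised with this default, the admin password can also be
      # rotated later from inside the container (sketch): docker compose exec photoprism photoprism passwd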
18 | PHOTOPRISM_SITE_URL: "https://photo.fiks.im/" # public server URL incl http:// or https:// and /path, :port is optional 19 | PHOTOPRISM_ORIGINALS_LIMIT: 30000 # file size limit for originals in MB (increase for high-res video) 20 | PHOTOPRISM_HTTP_COMPRESSION: "gzip" # improves transfer speed and bandwidth utilization (none or gzip) 21 | PHOTOPRISM_DEBUG: "false" # run in debug mode (shows additional log messages) 22 | PHOTOPRISM_PUBLIC: "false" # no authentication required (disables password protection) 23 | PHOTOPRISM_READONLY: "false" # don't modify originals directory (reduced functionality) 24 | PHOTOPRISM_EXPERIMENTAL: "false" # enables experimental features 25 | PHOTOPRISM_DISABLE_CHOWN: "false" # disables storage permission updates on startup 26 | PHOTOPRISM_DISABLE_WEBDAV: "false" # disables built-in WebDAV server 27 | PHOTOPRISM_DISABLE_SETTINGS: "false" # disables Settings in Web UI 28 | PHOTOPRISM_DISABLE_TENSORFLOW: "false" # disables all features depending on TensorFlow 29 | PHOTOPRISM_DISABLE_FACES: "false" # disables facial recognition 30 | PHOTOPRISM_DISABLE_CLASSIFICATION: "false" # disables image classification 31 | PHOTOPRISM_DARKTABLE_PRESETS: "false" # enables Darktable presets and disables concurrent RAW conversion 32 | PHOTOPRISM_DETECT_NSFW: "false" # flag photos as private that MAY be offensive (requires TensorFlow) 33 | PHOTOPRISM_UPLOAD_NSFW: "true" # allows uploads that MAY be offensive 34 | # PHOTOPRISM_DATABASE_DRIVER: "sqlite" # SQLite is an embedded database that doesn't require a server 35 | PHOTOPRISM_DATABASE_DRIVER: "mysql" # use MariaDB 10.5+ or MySQL 8+ instead of SQLite for improved performance 36 | PHOTOPRISM_DATABASE_SERVER: "mariadb:3306" # MariaDB or MySQL database server (hostname:port) 37 | PHOTOPRISM_DATABASE_NAME: "photoprism" # MariaDB or MySQL database schema name 38 | PHOTOPRISM_DATABASE_USER: "photoprism" # MariaDB or MySQL database user name 39 | PHOTOPRISM_DATABASE_PASSWORD: "photoprism" # MariaDB or MySQL database user password 40 | PHOTOPRISM_SITE_TITLE: "PhotoPrism" 41 | PHOTOPRISM_SITE_CAPTION: "Browse Your Life" 42 | PHOTOPRISM_SITE_DESCRIPTION: "" 43 | PHOTOPRISM_SITE_AUTHOR: "" 44 | ## Hardware video transcoding config (optional) 45 | # PHOTOPRISM_FFMPEG_BUFFERS: "64" # FFmpeg capture buffers (default: 32) 46 | # PHOTOPRISM_FFMPEG_BITRATE: "32" # FFmpeg encoding bitrate limit in Mbit/s (default: 50) 47 | # PHOTOPRISM_FFMPEG_ENCODER: "h264_v4l2m2m" # use Video4Linux for AVC transcoding (default: libx264) 48 | # PHOTOPRISM_FFMPEG_ENCODER: "h264_qsv" # use Intel Quick Sync Video for AVC transcoding (default: libx264) 49 | # PHOTOPRISM_INIT: "intel-graphics tensorflow-amd64-avx2" # enable TensorFlow AVX2 & Intel Graphics support 50 | ## Enable TensorFlow AVX2 support for modern Intel CPUs (requires starting the container as root) 51 | # PHOTOPRISM_INIT: "tensorflow-amd64-avx2" 52 | ## Run as a specific user, group, or with a custom umask (does not work together with "user:") 53 | # PHOTOPRISM_UID: 1000 54 | # PHOTOPRISM_GID: 1000 55 | # PHOTOPRISM_UMASK: 0000 56 | HOME: "/photoprism" 57 | ## Start as a non-root user (see https://docs.docker.com/engine/reference/run/#user) 58 | # user: "1000:1000" 59 | ## Hardware devices for video transcoding and machine learning (optional) 60 | devices: 61 | # - "/dev/video11:/dev/video11" # Video4Linux (h264_v4l2m2m) 62 | - "/dev/dri/renderD128:/dev/dri/renderD128" # Intel GPU 63 | # - "/dev/dri/card0:/dev/dri/card0" 64 | working_dir: "/photoprism" 65 | ## Storage Folders: "~" is a shortcut 
for your home directory, "." for the current directory 66 | volumes: 67 | # "/host/folder:/photoprism/folder" # example 68 | - "/share/data/photoprism/photos/:/photoprism/originals" # original media files (photos and videos) 69 | # - "/example/family:/photoprism/originals/family" # *additional* media folders can be mounted like this 70 | - "/share/Public/PHOTOS_INBOX:/photoprism/import" # *optional* base folder from which files can be imported to originals 71 | - "./storage:/photoprism/storage" # *writable* storage folder for cache, database, and sidecar files (never remove) 72 | labels: 73 | - "traefik.enable=true" 74 | - "traefik.http.services.photoprism.loadbalancer.server.port=2342" 75 | - "traefik.http.routers.photoprism.entrypoints=websecure" 76 | - "traefik.http.routers.photoprism.rule=Host(`photo.fiks.im`)" 77 | - "traefik.http.routers.photoprism.tls=true" 78 | networks: 79 | traefik-public: 80 | photoprism: 81 | mariadb: 82 | ## If MariaDB gets stuck in a restart loop, this points to a memory or filesystem issue: 83 | ## https://docs.photoprism.app/getting-started/troubleshooting/#fatal-server-errors 84 | restart: always 85 | image: mariadb:10.8 86 | security_opt: # see https://github.com/MariaDB/mariadb-docker/issues/434#issuecomment-1136151239 87 | - seccomp:unconfined 88 | - apparmor:unconfined 89 | command: mysqld --innodb-buffer-pool-size=512M --transaction-isolation=READ-COMMITTED --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --max-connections=512 --innodb-rollback-on-timeout=OFF --innodb-lock-wait-timeout=120 90 | ## Never store database files on an unreliable device such as a USB flash drive, an SD card, or a shared network folder: 91 | volumes: 92 | - "./database:/var/lib/mysql" # DO NOT REMOVE 93 | networks: 94 | photoprism: 95 | environment: 96 | MARIADB_AUTO_UPGRADE: "1" 97 | MARIADB_INITDB_SKIP_TZINFO: "1" 98 | MARIADB_DATABASE: "photoprism" 99 | MARIADB_USER: "photoprism" 100 | MARIADB_PASSWORD: "photoprism" 101 | MARIADB_ROOT_PASSWORD: "photoprism" 102 | photoprism_backup: 103 | container_name: photoprism_db_backup 104 | image: tiredofit/db-backup 105 | depends_on: 106 | - mariadb 107 | volumes: 108 | - ./db_backups/:/backup 109 | #- ./post-script.sh:/assets/custom-scripts/post-script.sh 110 | networks: 111 | photoprism: 112 | environment: 113 | - DEBUG_MODE=TRUE 114 | - DB_TYPE=mysql 115 | - DB_HOST=mariadb 116 | - DB_NAME=photoprism 117 | - DB_USER=photoprism 118 | - DB_PASS=photoprism 119 | - DB_DUMP_FREQ=360 # backup 12 hours 12 * 60 120 | # - DB_DUMP_BEGIN="+10" # backup starts immediately 121 | - DB_CLEANUP_TIME=72000 # clean backups that are older than 72000 minutes 122 | - CHECKSUM=SHA1 123 | - COMPRESSION=GZ 124 | - SPLIT_DB=true 125 | - CONTAINER_ENABLE_MONITORING=FALSE 126 | restart: always 127 | 128 | networks: 129 | traefik-public: 130 | external: true 131 | photoprism: 132 | -------------------------------------------------------------------------------- /perks/prefecthq/Makefile: -------------------------------------------------------------------------------- 1 | run-server: 2 | docker-compose --profile server up 3 | cli-run: 4 | docker-compose run cli 5 | -------------------------------------------------------------------------------- /perks/prefecthq/README.md: -------------------------------------------------------------------------------- 1 | # Prefect 2 with Docker Compose 2 | 3 | This repository contains everything you need to run Prefect Server, a Prefect agent, or the Prefect CLI using Docker Compose. 
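At a glance, the profile-driven workflow described in the sections below comes down to a handful of Compose invocations (condensed here as a sketch; the profile names come from the `docker-compose.yml` in this folder):

```
docker-compose --profile server up                    # Prefect Server API and UI on port 4200
docker-compose run cli                                # interactive shell with ./flows mounted
docker-compose --profile agent up                     # agent polling its work queue
docker compose --profile server --profile minio up    # server plus MinIO flow storage
```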
4 | 5 | ## Credits 6 | 7 | Thank you to Ryan Peden for his boilerplate. 8 | 9 | Together with Ryan Peden, thanks to Paco Ibañez for his excellent work in [this repository](https://github.com/fraibacas/prefect-orion), which helped me get started. 10 | 11 | 12 | ## Prerequisites 13 | 14 | * A Linux, MacOS, or Windows computer or VM with Docker installed. If you are running Windows, you must have Docker set up to use Linux containers, not Windows containers. 15 | 16 | ## Limitations 17 | 18 | * If you run a Prefect agent in Docker, it will not be able to run `DockerContainer` deployments unless you share the host's Docker socket with the agent container because Docker-in-Docker is not supported. 19 | 20 | ## Getting Started 21 | 22 | 23 | The `docker-compose.yml` file contains five services: 24 | 25 | * `prefectdatabase` - Postgres database for Prefect Server 26 | * `prefectminio` - MinIO S3-compatible object store, useful for experimenting with remote file storage without needing a cloud storage account. 27 | * `prefectserver` - Prefect Server API and UI 28 | * `prefectagent` - Prefect Agent 29 | * `prefectcli` - A container that mounts this repository's `flows` directory and offers an ideal environment for building and applying deployments and running flows. 30 | 31 | ### Prefect Server 32 | 33 | To run Prefect Server, open a terminal, navigate to the directory where you cloned this repository, and run: 34 | 35 | ``` 36 | docker-compose --profile server up 37 | ``` 38 | 39 | This will start PostgreSQL and Prefect Server. When the server is ready, you will see a line that looks like: 40 | 41 | ``` 42 | server_1 | INFO: Uvicorn running on http://0.0.0.0:4200 (Press CTRL+C to quit) 43 | ``` 44 | 45 | The Prefect Server container shares port 4200 with the host machine, so if you open a web browser and navigate to `http://localhost:4200` you will see the Prefect UI. 46 | 47 | ### Prefect CLI 48 | 49 | Next, open another terminal in the same directory and run: 50 | 51 | ``` 52 | docker-compose run cli 53 | ``` 54 | 55 | This runs an interactive Bash session in a container that shares a Docker network with the server you just started. If you run `ls`, you will see that the container shares the `flows` subdirectory of the repository on the host machine: 56 | 57 | ``` 58 | flow.py 59 | root@fb032110b1c1:~/flows# 60 | ``` 61 | 62 | To demonstrate the container is connected to the Prefect Server instance you launched earlier, run: 63 | 64 | ``` 65 | python flow.py 66 | ``` 67 | 68 | Then, in a web browser on your host machine, navigate to `http://localhost:4200/runs` and you will see the flow you just ran in your CLI container. 69 | 70 | If you'd like to use the CLI container to interact with Prefect Cloud instead of a local Prefect Server instance, update `docker-compose.yml` and change the agent service's `PREFECT_API_URL` environment variable to match your Prefect Cloud API URL. Then, uncomment the `PREFECT_API_KEY` environment variable and replace `YOUR_API_KEY` with your own API key.
If you'd prefer not to put your API key in a Docker Compose file, you can also store it in an environment variable on your host machine and pass it through to Docker Compose like so: 71 | 72 | ``` 73 | - PREFECT_API_KEY=${PREFECT_API_KEY} 74 | ``` 75 | 76 | ### Prefect Agent 77 | 78 | You can run a Prefect Agent by updating `docker-compose.yml` and changing `YOUR_WORK_QUEUE_NAME` to match the name of the Prefect work queue you would like to connect to, and then running the following command: 79 | 80 | ``` 81 | docker-compose --profile agent up 82 | ``` 83 | 84 | This will run a Prefect agent and connect to the work queue you provided. 85 | 86 | As with the CLI, you can also use Docker Compose to run an agent that connects to Prefect Cloud by updating the agent's `PREFECT_API_URL` and `PREFECT_API_KEY` settings in `docker-compose.yml`. 87 | 88 | ### MinIO Storage 89 | 90 | MinIO is an S3-compatible object store that works perfectly as remote storage for Prefect deployments. You can run it inside your corporate network and use it as a private, secure object store, or just run it locally in Docker Compose and use it for testing and experimenting with Prefect deployments. 91 | 92 | If you'd like to use MinIO with Prefect in Docker compose, start them both at once by running: 93 | 94 | ``` 95 | docker compose --profile server --profile minio up 96 | ``` 97 | 98 | Although Prefect Server won't need to talk to MinIO, Prefect agents and the Prefect CLI will need to talk to both MinIO _and_ Prefect Server to create and run depoyments, so it's best to start them simultaneously. 99 | 100 | After the MinIO container starts, you can load the MinIO UI in your web browser by navigating to `http://localhost:9000`. Sign in by entering `minioadmin` as both the username and password. 101 | 102 | Create a bucket named `prefect-flows` to store your Prefect flows, and then click **Identity->Service Accounts** to create a service account. This will give you an access key and a secret you can enter in a Prefect block to let the Prefect CLI and agents write to and read from your MinIO storage bucket. 103 | 104 | After you create a MinIO service account, open the Prefect UI at `http://localhost:4200`. Click **Blocks**, then add a **Remote File System** block. Give the block any name you'd like, but remember what name you choose because you will need it when creating a deployment. 105 | 106 | In the _Basepath_ field, enter `s3://prefect-flows`. 107 | 108 | Finally, the _Settings_ JSON field should look like this: 109 | 110 | ``` 111 | { 112 | "key": "YOUR_MINIO_KEY", 113 | "secret": "YOUR_MINIO_SECRET", 114 | "client_kwargs": { 115 | "endpoint_url": "http://minio:9000" 116 | } 117 | } 118 | ``` 119 | 120 | Replace the placeholders with the key and secret MinIO generated when you created the service account. You are now ready to deploy a flow to a MinIO storage bucket! If you want to try it, open a new terminal and run: 121 | 122 | ``` 123 | docker compose run cli 124 | ``` 125 | 126 | Then, when the CLI container starts and gives you a Bash prompt, run: 127 | 128 | ``` 129 | prefect deployment build -sb "remote-file-system/your-storage-block-name" -n "Awesome MinIO deployment" -q "awesome" "flow.py:greetings" 130 | ``` 131 | 132 | Now, if you open `http://localhost:9001/buckets/prefect-flows/browse` in a web browser, you will see that flow.py has been copied into your MinIO bucket. 133 | 134 | ### Next Steps 135 | 136 | You can run as many profiles as once as you'd like. 
For example, if you have created a deployment and want to start an agent for it, but don't want to open separate terminals to run Prefect Server, an agent, _and_ MinIO, you can start them all at once by running: 137 | 138 | ``` 139 | docker compose --profile server --profile minio --profile agent up 140 | ``` 141 | 142 | What if you want to start two separate agents that pull from different work queues? No problem! Just duplicate the agent service, give it a different name, and set its work queue name. For example: 143 | 144 | ``` 145 | agent_two: 146 | image: prefecthq/prefect:2.3.0-python3.10 147 | restart: always 148 | entrypoint: ["prefect", "agent", "start", "-q", "YOUR_OTHER_WORK_QUEUE_NAME"] 149 | environment: 150 | - PREFECT_API_URL=http://prefectserver:4200/api 151 | profiles: ["agent"] 152 | ``` 153 | 154 | Now, when you run `docker-compose --profile agent up`, both agents will start, connect to the Prefect Server API, and begin polling their work queues. 155 | -------------------------------------------------------------------------------- /perks/prefecthq/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3.9" 2 | services: 3 | 4 | ### Prefect Database 5 | prefectdatabase: 6 | image: postgres:15.2-alpine 7 | restart: always 8 | environment: 9 | - POSTGRES_USER=postgres 10 | - POSTGRES_PASSWORD=postgres 11 | - POSTGRES_DB=prefect 12 | networks: 13 | - prefect 14 | expose: 15 | - 5432 16 | volumes: 17 | - prefectdb:/var/lib/postgresql/data 18 | profiles: ["server"] 19 | 20 | ### MinIO for flow storage 21 | prefectminio: 22 | image: minio/minio:latest 23 | entrypoint: ["minio", "server", "--address", "0.0.0.0:9000", "--console-address", "0.0.0.0:9001", "/data"] 24 | networks: 25 | - prefect 26 | volumes: 27 | - "prefectminio:/data" 28 | ports: 29 | - 9000:9000 30 | - 9001:9001 31 | profiles: ["minio"] 32 | 33 | ### Prefect Server API and UI 34 | prefectserver: 35 | image: prefecthq/prefect:2.18-python3.12-conda 36 | restart: always 37 | volumes: 38 | - prefectserver:/root/.prefect 39 | entrypoint: ["/opt/prefect/entrypoint.sh", "prefect", "server", "start"] 40 | networks: 41 | - prefect 42 | - traefik-public 43 | environment: 44 | #- PREFECT_UI_URL=http://127.0.0.1:4200/api 45 | #- PREFECT_API_URL=http://127.0.0.1:4200/api 46 | # If you want to access Prefect Server UI from anywhere other than the Docker host machine, you will need to change 47 | # PREFECT_UI_URL and PREFECT_API_URL to match the external hostname/IP of the host machine.
For example: 48 | - PREFECT_UI_URL=http://prefect.fiks.im/api 49 | - PREFECT_API_URL=http://prefect.fiks.im/api 50 | - PREFECT_SERVER_API_HOST=0.0.0.0 51 | - PREFECT_API_DATABASE_CONNECTION_URL=postgresql+asyncpg://postgres:postgres@prefectdatabase:5432/prefect 52 | # Uncomment the following line if you want to use the 'S3 Bucket' storage block instead of the older 'S3' storage 53 | # - EXTRA_PIP_PACKAGES=prefect-aws 54 | labels: 55 | - "traefik.enable=true" 56 | - "traefik.http.routers.prefecthq.rule=Host(`prefect.fiks.im`)" 57 | - "traefik.http.routers.prefecthq.entrypoints=web" 58 | 59 | - "traefik.http.services.prefecthq.loadbalancer.server.port=4200" 60 | 61 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 62 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 63 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 64 | 65 | - "traefik.http.routers.prefecthq-secure.entrypoints=websecure" 66 | - "traefik.http.routers.prefecthq-secure.rule=Host(`prefect.fiks.im`)" 67 | # - "traefik.http.routers.prefecthq-secure.middlewares=traefik-auth" 68 | - "traefik.http.routers.prefecthq-secure.tls=true" 69 | ports: 70 | - 4200:4200 71 | depends_on: 72 | - prefectdatabase 73 | profiles: ["server"] 74 | 75 | ## Prefect Agent 76 | agent: 77 | image: prefecthq/prefect:2.18-python3.12-conda 78 | restart: always 79 | entrypoint: ["/opt/prefect/entrypoint.sh", "prefect", "agent", "start", "-q", "YOUR_WORK_QUEUE_NAME"] 80 | networks: 81 | - prefect 82 | environment: 83 | - PREFECT_API_URL=http://prefectserver:4200/api 84 | # Use PREFECT_API_KEY if connecting the agent to Prefect Cloud 85 | # - PREFECT_API_KEY=YOUR_API_KEY 86 | profiles: ["agent"] 87 | 88 | ### Prefect CLI 89 | cli: 90 | image: prefecthq/prefect:2.11.5-python3.11 91 | entrypoint: "bash" 92 | working_dir: "/root/flows" 93 | volumes: 94 | - "./flows:/root/flows" 95 | networks: 96 | - prefect 97 | environment: 98 | - PREFECT_API_URL=http://prefectserver:4200/api 99 | # Use PREFECT_API_KEY to use the CLI to interact with Prefect Cloud 100 | # - PREFECT_API_KEY=YOUR_API_KEY 101 | profiles: ["cli"] 102 | volumes: 103 | prefectserver: 104 | prefectdb: 105 | prefectminio: 106 | networks: 107 | prefect: 108 | traefik-public: 109 | external: true 110 | 111 | -------------------------------------------------------------------------------- /perks/sonarcube/Makefile: -------------------------------------------------------------------------------- 1 | init_mem: 2 | sudo sysctl -w vm.max_map_count=262144 3 | -------------------------------------------------------------------------------- /perks/sonarcube/README.md: -------------------------------------------------------------------------------- 1 | https://github.com/SonarSource/docker-sonarqube/blob/master/example-compose-files/sq-with-postgres/docker-compose.yml 2 | https://raw.githubusercontent.com/SonarSource/docker-sonarqube/master/example-compose-files/sq-with-postgres/docker-compose.yml 3 | 4 | 5 | https://sonar.lvh.voronenko.net/ 6 | admin/admin 7 | 8 | 9 | 10 | -------------------------------------------------------------------------------- /perks/sonarcube/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3.7" 2 | x-sonarqube-labels: &sonarqube-labels 3 | labels: 4 | - "traefik.enable=true" 5 | - "traefik.http.services.sonar.loadbalancer.server.port=9000" 6 | - "traefik.http.routers.sonar.entrypoints=websecure" 7 | -
"traefik.http.routers.sonar.rule=Host(`sonar.lvh.voronenko.net`)" 8 | - "traefik.http.routers.sonar.tls=true" 9 | # Reuse example https://yaml.org/type/merge.html 10 | # <<: *sonarqube-labels 11 | # ? "traefik.enable=false" 12 | services: 13 | sonarqube: 14 | image: sonarqube:community 15 | hostname: sonarqube 16 | container_name: sonarqube 17 | networks: 18 | traefik-public: 19 | sonar-network: 20 | depends_on: 21 | - db 22 | # - search-1 23 | # - search-2 24 | # - search-3 25 | environment: 26 | SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonar 27 | SONAR_JDBC_USERNAME: sonar 28 | SONAR_JDBC_PASSWORD: sonar 29 | SONAR_WEB_PORT: 9000 30 | # SONAR_CLUSTER_SEARCH_HOSTS: "search-1,search-2,search-3" 31 | # SONAR_CLUSTER_HOSTS: "sonarqube" 32 | # SONAR_AUTH_JWTBASE64HS256SECRET: "dZ0EB0KxnF++nr5+4vfTCaun/eWbv6gOoXodiAMqcFo=" 33 | # VIRTUAL_HOST: sonarqube.dev.local 34 | VIRTUAL_PORT: 9000 35 | <<: *sonarqube-labels 36 | deploy: 37 | <<: *sonarqube-labels 38 | volumes: 39 | - sonarqube_data:/opt/sonarqube/data 40 | - sonarqube_extensions:/opt/sonarqube/extensions 41 | - sonarqube_logs:/opt/sonarqube/logs 42 | # ports: 43 | # - "9000:9000" 44 | db: 45 | image: postgres:13 46 | hostname: postgresql 47 | container_name: postgresql 48 | networks: 49 | sonar-network: 50 | environment: 51 | POSTGRES_USER: sonar 52 | POSTGRES_PASSWORD: sonar 53 | POSTGRES_DB: sonar 54 | volumes: 55 | - postgresql:/var/lib/postgresql 56 | - postgresql_data:/var/lib/postgresql/data 57 | # search-1: 58 | # image: sonarqube:datacenter-search 59 | # hostname: "search-1" 60 | # cpus: 0.5 61 | # mem_limit: 4096M 62 | # mem_reservation: 1024M 63 | # depends_on: 64 | # - db 65 | # networks: 66 | # - sonar-network 67 | # environment: 68 | # SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonar 69 | # SONAR_JDBC_USERNAME: sonar 70 | # SONAR_JDBC_PASSWORD: sonar 71 | # SONAR_CLUSTER_ES_HOSTS: "search-1,search-2,search-3" 72 | # SONAR_CLUSTER_NODE_NAME: "search-1" 73 | # volumes: 74 | # - search-data-1:/opt/sonarqube/data 75 | # search-2: 76 | # image: sonarqube:datacenter-search 77 | # hostname: "search-2" 78 | # cpus: 0.5 79 | # mem_limit: 4096M 80 | # mem_reservation: 1024M 81 | # depends_on: 82 | # - db 83 | # networks: 84 | # - sonar-network 85 | # environment: 86 | # SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonar 87 | # SONAR_JDBC_USERNAME: sonar 88 | # SONAR_JDBC_PASSWORD: sonar 89 | # SONAR_CLUSTER_ES_HOSTS: "search-1,search-2,search-3" 90 | # SONAR_CLUSTER_NODE_NAME: "search-2" 91 | # volumes: 92 | # - search-data-2:/opt/sonarqube/data 93 | # search-3: 94 | # image: sonarqube:datacenter-search 95 | # hostname: "search-3" 96 | # cpus: 0.5 97 | # mem_limit: 4096M 98 | # mem_reservation: 1024M 99 | # depends_on: 100 | # - db 101 | # networks: 102 | # - sonar-network 103 | # environment: 104 | # SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonar 105 | # SONAR_JDBC_USERNAME: sonar 106 | # SONAR_JDBC_PASSWORD: sonar 107 | # SONAR_CLUSTER_ES_HOSTS: "search-1,search-2,search-3" 108 | # SONAR_CLUSTER_NODE_NAME: "search-3" 109 | # volumes: 110 | # - search-data-3:/opt/sonarqube/data 111 | 112 | volumes: 113 | sonarqube_data: 114 | sonarqube_extensions: 115 | sonarqube_logs: 116 | postgresql: 117 | postgresql_data: 118 | search-data-1: 119 | search-data-2: 120 | search-data-3: 121 | 122 | networks: 123 | traefik-public: 124 | external: true 125 | sonar-network: 126 | ipam: 127 | driver: default 128 | config: 129 | - subnet: 172.28.2.0/24 130 | -------------------------------------------------------------------------------- 
/perks/sonarcube/unrestricted_client_body_size.conf: -------------------------------------------------------------------------------- 1 | client_max_body_size 0; 2 | -------------------------------------------------------------------------------- /perks/spark/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3' 2 | services: 3 | spark-master: 4 | image: bde2020/spark-master:3.3.0-hadoop3.3 5 | container_name: spark-master 6 | ports: 7 | - "8080:8080" 8 | - "7077:7077" 9 | environment: 10 | - INIT_DAEMON_STEP=setup_spark 11 | spark-worker-1: 12 | image: bde2020/spark-worker:3.3.0-hadoop3.3 13 | container_name: spark-worker-1 14 | depends_on: 15 | - spark-master 16 | ports: 17 | - "8081:8081" 18 | environment: 19 | - "SPARK_MASTER=spark://spark-master:7077" 20 | -------------------------------------------------------------------------------- /perks/teedy/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.7' 2 | services: 3 | # Teedy Application 4 | teedy-server: 5 | image: sismics/docs:v1.11 6 | # ports: 7 | # # Map internal port to host 8 | # - 8080:8080 9 | environment: 10 | # Base URL to be used 11 | DOCS_BASE_URL: "https://docs.fiks.im" 12 | # Set the admin email 13 | DOCS_ADMIN_EMAIL_INIT: "youremail@domain" 14 | # Set the admin password (compatible with htpasswd) 15 | DOCS_ADMIN_PASSWORD_INIT: "$$2a..." 16 | # Set up the database connection. "teedy-db" is the hostname 17 | # and "teedy" is the name of the database the application 18 | # will connect to. 19 | DATABASE_URL: "jdbc:postgresql://teedy-db:5432/teedy" 20 | DATABASE_USER: "teedy" 21 | DATABASE_PASSWORD: "teedy" 22 | networks: 23 | traefik-public: 24 | teedy: 25 | labels: 26 | - "traefik.enable=true" 27 | - "traefik.http.routers.teedy.rule=Host(`docs.fiks.im`)" 28 | - "traefik.http.routers.teedy.entrypoints=web" 29 | - "traefik.http.services.teedy.loadbalancer.server.port=8080" 30 | 31 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 32 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 33 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 34 | 35 | - "traefik.http.routers.teedy-secure.entrypoints=websecure" 36 | - "traefik.http.routers.teedy-secure.rule=Host(`docs.fiks.im`)" 37 | # - "traefik.http.routers.teedy-secure.middlewares=traefik-auth" 38 | - "traefik.http.routers.teedy-secure.tls=true" 39 | restart: always 40 | volumes: 41 | - ./docs/data:/data 42 | depends_on: 43 | - teedy-db 44 | 45 | # DB for Teedy 46 | teedy-db: 47 | image: postgres:13.1-alpine 48 | restart: always 49 | expose: 50 | - 5432 51 | environment: 52 | POSTGRES_USER: "teedy" 53 | POSTGRES_PASSWORD: "teedy" 54 | POSTGRES_DB: "teedy" 55 | networks: 56 | teedy: 57 | volumes: 58 | - ./docs/db:/var/lib/postgresql/data 59 | 60 | teedy_backup: 61 | container_name: teedy_db_backup 62 | image: tiredofit/db-backup 63 | depends_on: 64 | - teedy-db 65 | volumes: 66 | - ./docs/db_backups/:/backup 67 | #- ./post-script.sh:/assets/custom-scripts/post-script.sh 68 | networks: 69 | teedy: 70 | environment: 71 | - DEBUG_MODE=TRUE 72 | - DB_TYPE=postgres 73 | - DB_HOST=teedy-db 74 | - DB_NAME=teedy 75 | - DB_USER=teedy 76 | - DB_PASS=teedy 77 | - DB_DUMP_FREQ=360 # backup every 6 hours (360 minutes) 78 | # - DB_DUMP_BEGIN="+10" # backup starts immediately 79 | - DB_CLEANUP_TIME=72000 # clean backups that are older than 72000 minutes 80 | - CHECKSUM=SHA1 81 | - COMPRESSION=GZ
82 | - SPLIT_DB=true 83 | - CONTAINER_ENABLE_MONITORING=FALSE 84 | restart: always 85 | networks: 86 | traefik-public: 87 | external: true 88 | teedy: 89 | 90 | -------------------------------------------------------------------------------- /perks/tubearchivist/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.3' 2 | 3 | services: 4 | tubearchivist: 5 | container_name: tubearchivist 6 | restart: always 7 | image: bbilly1/tubearchivist 8 | networks: 9 | - traefik-public 10 | - tube 11 | # ports: 12 | # - 8000:8000 13 | volumes: 14 | - ./media:/youtube 15 | - ./cache:/cache 16 | environment: 17 | - ES_URL=http://archivist-es:9200 # needs protocol e.g. http and port 18 | - REDIS_HOST=archivist-redis # don't add protocol 19 | - HOST_UID=1000 20 | - HOST_GID=1000 21 | - TA_HOST=tube.fiks.im # set your host name 22 | - TA_USERNAME=admin # your initial TA credentials 23 | - TA_PASSWORD=admin # your initial TA credentials 24 | - ELASTIC_PASSWORD=verysecret # set password for Elasticsearch 25 | - TZ=Europe/Kyiv # set your time zone 26 | depends_on: 27 | - archivist-es 28 | - archivist-redis 29 | labels: 30 | - "traefik.enable=true" 31 | - "traefik.http.routers.tube.rule=Host(`tube.fiks.im`)" 32 | - "traefik.http.routers.tube.entrypoints=web" 33 | 34 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 35 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 36 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 37 | 38 | - "traefik.http.routers.tube-secure.entrypoints=websecure" 39 | - "traefik.http.routers.tube-secure.rule=Host(`tube.fiks.im`)" 40 | # - "traefik.http.routers.whoami-secure.middlewares=traefik-auth" 41 | - "traefik.http.routers.tube-secure.tls=true" 42 | - "traefik.http.routers.tube-secure.tls.certresolver=le" 43 | 44 | archivist-redis: 45 | image: redis/redis-stack-server 46 | container_name: archivist-redis 47 | networks: 48 | - tube 49 | restart: always 50 | expose: 51 | - "6379" 52 | volumes: 53 | - ./redis:/data 54 | depends_on: 55 | - archivist-es 56 | archivist-es: 57 | image: bbilly1/tubearchivist-es # only for amd64, or use official es 8.7.0 58 | container_name: archivist-es 59 | restart: always 60 | networks: 61 | - tube 62 | environment: 63 | - "ELASTIC_PASSWORD=verysecret" # matching Elasticsearch password 64 | - "ES_JAVA_OPTS=-Xms512m -Xmx512m" 65 | - "xpack.security.enabled=true" 66 | - "discovery.type=single-node" 67 | - "path.repo=/usr/share/elasticsearch/data/snapshot" 68 | ulimits: 69 | memlock: 70 | soft: -1 71 | hard: -1 72 | volumes: 73 | - es:/usr/share/elasticsearch/data # check for permission error when using bind mount, see readme 74 | expose: 75 | - "9200" 76 | 77 | volumes: 78 | media: 79 | cache: 80 | redis: 81 | es: 82 | networks: 83 | traefik-public: 84 | external: true 85 | tube: 86 | -------------------------------------------------------------------------------- /perks/ubooquity/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.7' 2 | 3 | services: 4 | ubooquity: 5 | image: linuxserver/ubooquity 6 | container_name: ubooquity 7 | restart: always 8 | ports: 9 | - 2202:2202 10 | - 2203:2203 11 | volumes: 12 | - ./config:/config 13 | - /share/data/books/calibre/CALIBRE:/books 14 | - /share/data/books/comics:/comics 15 | environment: 16 | PUID: 1000 17 | PGID: 1000 18 | TZ: Europe/Kyiv 19 | MAXMEM: 2048 20 | networks: 21 | traefik-public: 22 | labels: 
23 | - "traefik.enable=true" 24 | - "traefik.http.routers.ubooquity.rule=Host(`books.fiks.im`)" 25 | - "traefik.http.routers.ubooquity.entrypoints=web" 26 | - "traefik.http.services.ubooquity.loadbalancer.server.port=2202" 27 | 28 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 29 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 30 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 31 | 32 | - "traefik.http.routers.ubooquity-secure.entrypoints=websecure" 33 | - "traefik.http.routers.ubooquity-secure.rule=Host(`books.fiks.im`)" 34 | # - "traefik.http.routers.ubooquity-secure.middlewares=traefik-auth" 35 | - "traefik.http.routers.ubooquity-secure.tls=true" 36 | 37 | networks: 38 | traefik-public: 39 | external: true 40 | -------------------------------------------------------------------------------- /perks/vorta/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3.4" 2 | services: 3 | vorta: 4 | image: voronenko/vorta:latest 5 | container_name: vorta 6 | cap_add: 7 | - SYS_ADMIN 8 | security_opt: 9 | - apparmor:unconfined 10 | privileged: true 11 | environment: 12 | - TZ=Europe/Warsaw 13 | - USER_ID=1000 14 | - GROUP_ID=100 15 | volumes: 16 | - ./config:/config 17 | - /share/data:/home/app/locations/data:ro 18 | - /share/Public:/home/app/locations/Public:ro 19 | ports: 20 | - 5800:5800 21 | restart: unless-stopped 22 | -------------------------------------------------------------------------------- /perks/warden/README.md: -------------------------------------------------------------------------------- 1 | https://upgrade.yubico.com/getapikey/ 2 | 3 | 4 | Client ID: 72057 5 | Secret key: 39bnXP67fE5eJwm1Ofv7ftYrh+k= 6 | 7 | -------------------------------------------------------------------------------- /perks/warden/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3.7" 2 | 3 | services: 4 | bitwardenrs: 5 | image: vaultwarden/server 6 | container_name: vaultwarden 7 | restart: unless-stopped 8 | volumes: 9 | - ./data:/data 10 | environment: 11 | - WEBSOCKET_ENABLED=true 12 | - WEB_VAULT_ENABLED=true 13 | - SIGNUPS_ALLOWED=true 14 | # Comment admin token to disable admin interface 15 | - ADMIN_TOKEN=admin 16 | - YUBICO_CLIENT_ID=1111111 17 | - YUBICO_SECRET_KEY=22222222 18 | networks: 19 | traefik-public: 20 | labels: 21 | - "traefik.enable=true" 22 | - "traefik.http.routers.bitwarden.rule=Host(`vault.fiks.im`)" 23 | - "traefik.http.routers.bitwarden.entrypoints=web" 24 | - "traefik.http.services.bitwarden.loadbalancer.server.port=80" 25 | 26 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 27 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 28 | # - "traefik.http.routers.traefik.middlewares=traefik-https-redirect" 29 | 30 | - "traefik.http.routers.bitwarden-secure.entrypoints=websecure" 31 | - "traefik.http.routers.bitwarden-secure.rule=Host(`vault.fiks.im`)" 32 | # - "traefik.http.routers.bitwarden-secure.middlewares=traefik-auth" 33 | - "traefik.http.routers.bitwarden-secure.tls=true" 34 | - "traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https" 35 | 36 | # - "com.centurylinklabs.watchtower.enable=true" 37 | # # Ip filtering 38 | # - "traefik.http.routers.bitwarden.middlewares=whitelist@file" 39 | logging: 40 | driver: "syslog" 41 | options: 42 | tag: "Bitwarden" 43 | networks: 44 | 
traefik-public: 45 | external: true 46 | -------------------------------------------------------------------------------- /public_server/Makefile: -------------------------------------------------------------------------------- 1 | create-traefik-network-once: 2 | docker network create traefik-public 3 | -------------------------------------------------------------------------------- /public_server/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.4' 2 | # Variables to fill in: 3 | # - your mail address for contact with Let's Encrypt 4 | # - MD5 hash of your password (use http://www.htaccesstools.com/htpasswd-generator/) 5 | # Replace preview.voronenko.net with your domain 6 | 7 | services: 8 | traefik: 9 | image: traefik:v2.11.20 10 | healthcheck: 11 | test: ["CMD-SHELL", "wget --spider --quiet http://127.0.0.1/ping || exit 1"] 12 | interval: 30s 13 | timeout: 10s 14 | retries: 3 15 | ports: 16 | - 80:80 17 | - 443:443 18 | - 8080:8080 19 | labels: 20 | - "traefik.http.routers.traefik.rule=(Host(``) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`)))" 21 | - "traefik.http.routers.traefik.service=api@internal" 22 | - "traefik.http.routers.traefik.tls.certresolver=letsencrypt" 23 | - "traefik.http.routers.traefik.entrypoints=websecure" 24 | - "traefik.http.routers.traefik.middlewares=dashboardauth" 25 | - "traefik.http.middlewares.dashboardauth.basicauth.users=admin:" 26 | # Redirect all HTTP to HTTPS permanently 27 | - traefik.http.routers.http_catchall.rule=HostRegexp(`{any:.+}`) 28 | - traefik.http.routers.http_catchall.entrypoints=web 29 | - traefik.http.routers.http_catchall.middlewares=https_redirect 30 | - traefik.http.middlewares.https_redirect.redirectscheme.scheme=https 31 | - traefik.http.middlewares.https_redirect.redirectscheme.permanent=true 32 | volumes: 33 | - /var/run/docker.sock:/var/run/docker.sock:ro 34 | - ./traefik_certs:/certs 35 | - ./traefik_conf:/conf 36 | - ./letsencrypt:/letsencrypt 37 | restart: always 38 | # --docker.swarmmode 39 | command: 40 | - "--api.insecure=true" 41 | - "--providers.docker=true" 42 | - "--providers.docker.watch=true" 43 | - "--providers.docker.exposedbydefault=false" 44 | - "--providers.file.directory=/conf/" 45 | - "--providers.file.watch=true" 46 | - "--entrypoints.web.address=:80" 47 | - "--entrypoints.websecure.address=:443" 48 | - "--log.level=DEBUG" 49 | - "--accessLog" 50 | - "--api" 51 | - "--metrics" 52 | - "--metrics.prometheus" 53 | - "--providers.docker.network=traefik-public" 54 | - "--certificatesResolvers.letsencrypt.acme.email=" 55 | - "--certificatesResolvers.letsencrypt.acme.tlsChallenge=true" 56 | - "--certificatesResolvers.letsencrypt.acme.httpChallenge=true" 57 | - "--certificatesResolvers.letsencrypt.acme.httpChallenge.entryPoint=web" 58 | - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json" 59 | # monitoring 60 | - "--ping=true" 61 | - "--ping.entrypoint=web" 62 | networks: 63 | - default 64 | - traefik-public 65 | whoami: 66 | image: "containous/whoami" 67 | container_name: "simple-service" 68 | labels: 69 | - "traefik.enable=true" 70 | - "traefik.http.routers.whoami.rule=Host(`whoami.preview.voronenko.net`)" 71 | - "traefik.http.routers.whoami.entrypoints=web" 72 | # - "traefik.http.routers.traefik-secure.middlewares=traefik-auth" 73 | # - "traefik.http.middlewares.traefik-auth.basicauth.users=USER:PASSWORD" 74 | # - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https" 75 | # - 
"traefik.http.routers.traefik.middlewares=traefik-https-redirect" 76 | - "traefik.http.routers.traefik-secure.entrypoints=websecure" 77 | - "traefik.http.routers.traefik-secure.rule=Host(`whoami.preview.voronenko.net`)" 78 | - "traefik.http.routers.traefik-secure.tls=true" 79 | - traefik.http.middlewares.whoami.compress=true 80 | - traefik.http.routers.whoami.tls.certresolver=letsencrypt 81 | networks: 82 | traefik-public: 83 | external: true 84 | -------------------------------------------------------------------------------- /public_server/letsencrypt/.gitkeep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Voronenko/traefik2-compose-template/f9da9afa757a29a43e4fdc32de8921a9ae5e2639/public_server/letsencrypt/.gitkeep -------------------------------------------------------------------------------- /public_server/traefik_certs/.gitkeep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Voronenko/traefik2-compose-template/f9da9afa757a29a43e4fdc32de8921a9ae5e2639/public_server/traefik_certs/.gitkeep -------------------------------------------------------------------------------- /public_server/traefik_conf/certificates.toml: -------------------------------------------------------------------------------- 1 | [[tls.certificates]] #first certificate 2 | certFile = "/certs/fullchain.pem" 3 | keyFile = "/certs/privkey.pem" 4 | 5 | #[[tls.certificates]] #second certificate 6 | # certFile = "/path/to/other.cert" 7 | # keyFile = "/path/to/other.key" 8 | --------------------------------------------------------------------------------