├── .gitignore ├── README.md ├── deploy.yml ├── env.yml.example └── roles ├── bazarr ├── README.md ├── defaults │ └── main.yml ├── handlers │ └── main.yml ├── meta │ └── main.yml ├── tasks │ └── main.yml ├── templates │ └── docker-compose.yml.j2 ├── tests │ ├── inventory │ └── test.yml └── vars │ └── main.yml ├── docker-install ├── README.md ├── defaults │ └── main.yml ├── files │ └── docker-daemon.json ├── handlers │ └── main.yml ├── meta │ └── main.yml ├── tasks │ └── main.yml ├── tests │ ├── inventory │ └── test.yml └── vars │ └── main.yml ├── emby ├── README.md ├── defaults │ └── main.yml ├── handlers │ └── main.yml ├── meta │ └── main.yml ├── tasks │ └── main.yml ├── templates │ └── docker-compose.yml.j2 ├── tests │ ├── inventory │ └── test.yml └── vars │ └── main.yml ├── fireflyiii ├── README.md ├── defaults │ └── main.yml ├── handlers │ └── main.yml ├── meta │ └── main.yml ├── tasks │ └── main.yml ├── templates │ └── docker-compose.yml.j2 ├── tests │ ├── inventory │ └── test.yml └── vars │ └── main.yml ├── gitea ├── README.md ├── defaults │ └── main.yml ├── handlers │ └── main.yml ├── meta │ └── main.yml ├── tasks │ └── main.yml ├── templates │ ├── docker-compose.yml.j2 │ └── gitea ├── tests │ ├── inventory │ └── test.yml └── vars │ └── main.yml ├── homeassistant ├── README.md ├── defaults │ └── main.yml ├── handlers │ └── main.yml ├── meta │ └── main.yml ├── tasks │ └── main.yml ├── templates │ ├── configuration.yaml.j2 │ ├── docker-compose.yml.j2 │ └── docker-wol.service.j2 ├── tests │ ├── inventory │ └── test.yml └── vars │ └── main.yml ├── jellyfin ├── README.md ├── defaults │ └── main.yml ├── handlers │ └── main.yml ├── meta │ └── main.yml ├── tasks │ └── main.yml ├── templates │ └── docker-compose.yml.j2 ├── tests │ ├── inventory │ └── test.yml └── vars │ └── main.yml ├── piper ├── README.md ├── defaults │ └── main.yml ├── handlers │ └── main.yml ├── meta │ └── main.yml ├── tasks │ └── main.yml ├── templates │ └── docker-compose.yml.j2 ├── tests │ ├── inventory │ └── test.yml └── vars │ └── main.yml ├── radarr ├── README.md ├── defaults │ └── main.yml ├── handlers │ └── main.yml ├── meta │ └── main.yml ├── tasks │ └── main.yml ├── templates │ └── docker-compose.yml.j2 ├── tests │ ├── inventory │ └── test.yml └── vars │ └── main.yml ├── readarr ├── README.md ├── defaults │ └── main.yml ├── handlers │ └── main.yml ├── meta │ └── main.yml ├── tasks │ └── main.yml ├── templates │ └── docker-compose.yml.j2 ├── tests │ ├── inventory │ └── test.yml └── vars │ └── main.yml ├── sabnzbd ├── README.md ├── defaults │ └── main.yml ├── handlers │ └── main.yml ├── meta │ └── main.yml ├── tasks │ └── main.yml ├── templates │ └── docker-compose.yml.j2 ├── tests │ ├── inventory │ └── test.yml └── vars │ └── main.yml ├── slskd ├── README.md ├── defaults │ └── main.yml ├── handlers │ └── main.yml ├── meta │ └── main.yml ├── tasks │ └── main.yml ├── templates │ └── docker-compose.yml.j2 ├── tests │ ├── inventory │ └── test.yml └── vars │ └── main.yml ├── sonarr ├── README.md ├── defaults │ └── main.yml ├── handlers │ └── main.yml ├── meta │ └── main.yml ├── tasks │ └── main.yml ├── templates │ └── docker-compose.yml.j2 ├── tests │ ├── inventory │ └── test.yml └── vars │ └── main.yml ├── sqlite-backup ├── README.md ├── defaults │ └── main.yml ├── handlers │ └── main.yml ├── meta │ └── main.yml ├── tasks │ └── main.yml ├── templates │ └── docker-compose.yml.j2 ├── tests │ ├── inventory │ └── test.yml └── vars │ └── main.yml ├── traefik ├── README.md ├── defaults │ └── main.yml ├── handlers │ └── 
main.yml ├── meta │ └── main.yml ├── tasks │ └── main.yml ├── templates │ ├── docker-compose.yml.j2 │ └── traefik_dynamic.toml.j2 ├── tests │ ├── inventory │ └── test.yml └── vars │ └── main.yml ├── wg-easy ├── README.md ├── defaults │ └── main.yml ├── handlers │ └── main.yml ├── meta │ └── main.yml ├── tasks │ └── main.yml ├── templates │ └── docker-compose.yml.j2 ├── tests │ ├── inventory │ └── test.yml └── vars │ └── main.yml ├── whisper ├── README.md ├── defaults │ └── main.yml ├── handlers │ └── main.yml ├── meta │ └── main.yml ├── tasks │ └── main.yml ├── templates │ └── docker-compose.yml.j2 ├── tests │ ├── inventory │ └── test.yml └── vars │ └── main.yml └── zabbix ├── README.md ├── defaults └── main.yml ├── handlers └── main.yml ├── meta └── main.yml ├── tasks └── main.yml ├── templates └── docker-compose.yml.j2 ├── tests ├── inventory └── test.yml └── vars └── main.yml
/.gitignore:
--------------------------------------------------------------------------------
1 | env.yml
2 | .vscode
3 | .ansible
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # docker-collection
2 | Collection of Docker images deployed with Ansible
3 | 
4 | Services currently supported:
5 | 
6 | * traefik
7 | * sabnzbd
8 | * sonarr
9 | * radarr
10 | * bazarr
11 | * emby
12 | * jellyfin
13 | * home assistant
14 | * wg-easy
15 | * slskd
16 | 
17 | ## How to
18 | 
19 | 1. Install Ansible, see: [Ansible install](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-and-upgrading-ansible-with-pip)
20 | 2. Open env.yml.example and adjust as needed (explanations of all vars are in the example file)
21 | 3. Save the file as env.yml
22 | 4. Run the deploy.yml playbook with ansible-playbook:
23 | ```ansible-playbook deploy.yml```
24 | 
25 | Every service will use a DNS record of the following form:
26 | 
27 | ```$SERVICE.example.com```
28 | 
29 | This results in sabnzbd.example.com, sonarr.example.com, and so on.
30 | 
31 | ## NOTES
32 | 
33 | ### OS
34 | 
35 | I have only tested this on Linux and that is the only platform I will test; it might work on WSL as well, but this has not been tested. This playbook will also install Docker,
36 | but only if the detected OS is Debian or RedHat based.
37 | 
38 | ### DNS
39 | 
40 | You will obviously have to create the correct DNS records for all these services.
41 | 
42 | ### Firewall
43 | 
44 | You might still need to adjust your firewall to allow specific traffic.
45 | 
46 | ## Plans
47 | 
48 | ### Add docker swarm compatibility (now experimental)
49 | 
50 | The biggest problem here is that I need USB devices attached to Home Assistant, and Docker Swarm does not support passing devices through.
51 | I've tried attaching the container to the overlay networks, but for a reason I've yet to figure out, I couldn't get that to work.
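
## Example env.yml

For reference, here is a minimal sketch of an `env.yml` that deploys only Traefik and Sonarr. All of the values below are placeholders, and every variable (plus many optional ones) is explained in `env.yml.example`. Note that `deploy.yml` evaluates a toggle for every service, so services you don't want still need an explicit `false`.

```
# Minimal env.yml sketch (placeholder values): Traefik + Sonarr only
workdir: "/docker"
domain: "example.com"
swarm: false

# Traefik
traefik: true
application: "proxy"
provider: "cloudflare"
CF_API_EMAIL: "someone@example.com"
CF_API_KEY: "cf-api-token"
CF_DNS_API_TOKEN: "cf-dns-api-token"
email_address: "someone@example.com"

# Services
sonarr: true
download_dir: "/downloads"

# Everything else off (deploy.yml still checks each of these toggles)
sabnzbd: false
radarr: false
bazarr: false
readarr: false
homeassistant: false
emby: false
jellyfin: false
wireguard: false
slskd: false
fireflyiii: false
whisper: false
piper: false
sqlite_backup: false
gitea: false
zabbix: false
```

The playbook targets `localhost` and runs with `become: true`, so run it on the machine you are deploying to and add `-K` (`--ask-become-pass`) if your user needs a sudo password prompt: ```ansible-playbook deploy.yml -K```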
--------------------------------------------------------------------------------
/deploy.yml:
--------------------------------------------------------------------------------
1 | ---
2 | - name: Install docker, deploy traefik and then other services
3 |   hosts: localhost
4 |   gather_facts: true
5 |   become: true
6 | 
7 |   pre_tasks:
8 | 
9 |     - name: Include vars file
10 |       ansible.builtin.include_vars: env.yml
11 | 
12 |   roles:
13 |     - role: docker-install
14 |     - role: traefik
15 |       when: traefik == true
16 |     - role: sabnzbd
17 |       when: sabnzbd == true
18 |     - role: sonarr
19 |       when: sonarr == true
20 |     - role: radarr
21 |       when: radarr == true
22 |     - role: bazarr
23 |       when: bazarr == true
24 |     - role: readarr
25 |       vars:
26 |         service: "ebooks"
27 |       when: readarr == true
28 |     - role: readarr
29 |       vars:
30 |         service: "audiobooks"
31 |       when: readarr == true
32 |     - role: homeassistant
33 |       when: homeassistant == true and swarm == false
34 |     - role: emby
35 |       when: emby == true
36 |     - role: jellyfin
37 |       when: jellyfin == true
38 |     - role: wg-easy
39 |       when: wireguard == true
40 |     - role: slskd
41 |       when: slskd == true
42 |     - role: fireflyiii
43 |       when: fireflyiii == true
44 |     - role: whisper
45 |       when: whisper == true
46 |     - role: piper
47 |       when: piper == true
48 |     - role: sqlite-backup
49 |       when: sqlite_backup == true
50 |     - role: gitea
51 |       when: gitea == true
52 |     - role: zabbix
53 |       when: zabbix == true
54 | 
--------------------------------------------------------------------------------
/env.yml.example:
--------------------------------------------------------------------------------
1 | # Environment file
2 | 
3 | # Use this file to quickly deploy without all the prompts
4 | 
5 | # Global vars
6 | 
7 | # workdir: This is the location where all the docker files and data of the containers will reside
8 | workdir: "/docker"
9 | 
10 | # domain: The domain every service will be used with as subdomains (e.g. sabnzbd.example.com, homeassistant.example.com, etc)
11 | domain: "example.com"
12 | 
13 | # swarm: Experimental, enables docker swarm and tries to deploy everything in separate stacks
14 | swarm: false
15 | 
16 | ## Traefik
17 | 
18 | # This variable determines if Traefik will be deployed or not
19 | # NOTE: All other services assume Traefik will be up and running; set this to false only when deploying services into an already running Traefik environment
20 | traefik: true
21 | 
22 | # application: This is the name of the network, e.g. 'proxy' will become 'proxy-network'
23 | application: "proxy"
24 | 
25 | # provider: Traefik supports DNS challenges for Letsencrypt; this playbook supports the 'azuredns' and 'cloudflare' providers. You can also set it to 'none'
26 | # and fill out the cert_key and cert_pem variables
27 | provider: "cloudflare"
28 | 
29 | # cert_key and cert_pem: If you want to use your own certificates, paste their content in these variables.
30 | # cert_key holds the private key, cert_pem the certificate
31 | # cert_key:
32 | # cert_pem:
33 | 
34 | # Azure provider credentials, use when provider is 'azuredns'
35 | # AZURE_CLIENT_ID:
36 | # AZURE_SECRET:
37 | # AZURE_SUBSCRIPTION_ID:
38 | # AZURE_TENANT:
39 | 
40 | # Cloudflare provider credentials, use when provider is 'cloudflare'
41 | CF_API_EMAIL: "someone@example.com"
42 | CF_API_KEY: "cf-api-token"
43 | CF_DNS_API_TOKEN: "cf-dns-api-token"
44 | 
45 | # Email address to use for Letsencrypt
46 | email_address: "someone@example.com"
47 | 
48 | # local_hosts: A map of external services you want to link to Traefik. E.g.
a service running on the host (192.168.1.250) on port 10000 49 | # local_hosts: 50 | # service: 51 | # name: service 52 | # address: 192.168.1.250 53 | # port: 10000 54 | # scheme: http 55 | 56 | ## Sabnzbd 57 | 58 | # This variable determines if Sabnzbd will be deployed or not 59 | sabnzbd: true 60 | 61 | # download_dir: Point this to the directory where you keep your downloads. This same location will be mounted in any *arr container as well. 62 | # It is recommended to mount the downloads and media folder in the same local dir and then mount that into the container 63 | # (e.g. local_dir/downloads and local_dir/movies and then mount /local_dir) 64 | download_dir: "/downloads" 65 | 66 | ## Sonarr 67 | 68 | # This variable determines if Sonarr will be deployed or not 69 | sonarr: true 70 | 71 | ## Radarr 72 | 73 | # This variable determines if Radarr will be deployed or not 74 | radarr: true 75 | 76 | ## Bazarr 77 | 78 | # This variable determines if Bazarr will be deployed or not 79 | bazarr: true 80 | 81 | ## Readarr 82 | 83 | # This variable determines if Readarr will be deployed or not 84 | 85 | # See the deploy playbook for an example on how to deploy 2 instances, 1 for ebooks and another for audiobooks 86 | readarr: true 87 | 88 | ## Home Assistant 89 | 90 | # This variable determines if Home Assistant will be deployed or not 91 | homeassistant: true 92 | 93 | # tty: Map of tty devices to mount (will be mounted under the same name inside container). 94 | # Example: 95 | # tty: 96 | # device1: 97 | # path: "/dev/ttyUSB0" 98 | tty: 99 | rfxcom: 100 | path: "/dev/tty0" 101 | 102 | # wolpipe: A map of device and mac addresses that we can turn on using Wake on LAN. A named pipe will be created on the host 103 | # and a systemd service will be running. This allows us to avoid using Host networking while still being able to use WOL. 
104 | # Example:
105 | # wolpipe:
106 | #   pc1:
107 | #     mac: "xx:yy:zz:aa:bb:cc"
108 | 
109 | # hacs: Run the HACS download script: https://hacs.xyz/docs/setup/download
110 | hacs: true
111 | 
112 | ## Emby
113 | 
114 | # This variable determines if Emby will be deployed or not
115 | emby: true
116 | 
117 | ## Jellyfin
118 | 
119 | # This variable determines if Jellyfin will be deployed or not
120 | jellyfin: true
121 | 
122 | ## wg-easy
123 | 
124 | # This variable determines if wg-easy will be deployed or not
125 | wireguard: true
126 | 
127 | # wg_host: Set to the public address or hostname of your instance
128 | wg_host: "example.com"
129 | 
130 | # wg_pass: Password to use for the web frontend
131 | wg_pass: "mypassword"
132 | 
133 | # wg_default_address: IP range of the wireguard clients
134 | wg_default_address: "10.8.0.x"
135 | 
136 | # wg_default_dns: DNS server to use
137 | wg_default_dns: "1.1.1.1"
138 | 
139 | # wg_allowed_ips: IP ranges clients are allowed to access
140 | wg_allowed_ips: "10.8.0.0/24, 192.168.1.0/24"
141 | 
142 | ## slskd
143 | 
144 | # This variable determines if slskd will be deployed or not
145 | slskd: true
146 | 
147 | ## fireflyiii
148 | 
149 | # This variable determines if fireflyiii will be deployed or not
150 | fireflyiii: false
151 | 
152 | # emailaddress: Email address notifications are sent from (also used as the Firefly III site owner)
153 | emailaddress: "your-email@domain.com"
154 | 
155 | # emailaddress_destination: Email address notifications are sent to
156 | emailaddress_destination: "your-email@domain.com"
157 | 
158 | # database_password: Password to use for the database
159 | database_password: "your-secure-password"
160 | 
161 | # mail_host: Host to use for mail notifications
162 | mail_host: "smtp.provider.com"
163 | 
164 | # mail_port: Port to use for mail notifications
165 | mail_port: 587
166 | 
167 | # mail_username: Username to use for mail notifications
168 | mail_username: "your-email@domain.com"
169 | 
170 | # mail_password: Password to use for mail notifications
171 | mail_password: "your-secure-password"
172 | 
173 | # mail_encryption: Encryption to use for mail notifications
174 | mail_encryption: "tls"
175 | 
176 | # nordigen_id: ID to use for nordigen
177 | # nordigen_id: "your-nordigen-id"
178 | 
179 | # nordigen_key: Key to use for nordigen
180 | # nordigen_key: "your-nordigen-key"
181 | 
182 | # spectre_app_id: App ID to use for spectre
183 | # spectre_app_id: "your-spectre-id"
184 | 
185 | # spectre_secret: Secret to use for spectre
186 | # spectre_secret: "your-spectre-secret"
187 | 
188 | ## whisper
189 | 
190 | # This variable determines if whisper will be deployed or not
191 | whisper: true
192 | 
193 | # whisper_language: Language to use for whisper
194 | whisper_language: "nl"
195 | 
196 | # whisper_model: Model to use for whisper
197 | whisper_model: "medium"
198 | 
199 | ## piper
200 | 
201 | # This variable determines if piper will be deployed or not
202 | piper: true
203 | 
204 | # Configuration options for the piper text-to-speech service:
205 | # - piper_voice: The voice to use for text-to-speech
206 | # - piper_length: The length of the generated audio
207 | # - piper_noise: The amount of noise to add to the generated audio
208 | # - piper_noisew: The amount of noise weighting to apply
209 | # - piper_speaker: The speaker ID to use for the text-to-speech
210 | # - piper_procs: The number of parallel processes to use for text-to-speech
211 | 
212 | piper_voice: "en_US-lessac-medium"
213 | piper_length: "1.0"
214 | piper_noise: "0.667"
215 | piper_noisew: "0.333"
216 | piper_speaker: "0"
217 | piper_procs: "1" 218 | 219 | # sqlite-backup is a small utility I made to backup my databases. See: https://github.com/bpmb82/sqlite-backup 220 | sqlite_backup: true 221 | 222 | # backupdir: Directory to store backups in 223 | backupdir: "/sql-backup" 224 | # backup_interval: Interval to backup in hours 225 | interval: "24" 226 | 227 | ## gitea 228 | 229 | # This variable determines if gitea will be deployed or not 230 | gitea: true 231 | 232 | # This determines if we will use SSH passthrough to the container using user 'git' or if we will listen on port 2222 233 | ssh_passthrough: true 234 | 235 | ## zabbix 236 | 237 | # This variable determines if zabbix will be deployed or not 238 | zabbix: true 239 | -------------------------------------------------------------------------------- /roles/bazarr/README.md: -------------------------------------------------------------------------------- 1 | Role Name 2 | ========= 3 | 4 | A brief description of the role goes here. 5 | 6 | Requirements 7 | ------------ 8 | 9 | Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required. 10 | 11 | Role Variables 12 | -------------- 13 | 14 | A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well. 15 | 16 | Dependencies 17 | ------------ 18 | 19 | A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles. 20 | 21 | Example Playbook 22 | ---------------- 23 | 24 | Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too: 25 | 26 | - hosts: servers 27 | roles: 28 | - { role: username.rolename, x: 42 } 29 | 30 | License 31 | ------- 32 | 33 | BSD 34 | 35 | Author Information 36 | ------------------ 37 | 38 | An optional section for the role authors to include contact information, or a website (HTML is not allowed). 
39 | -------------------------------------------------------------------------------- /roles/bazarr/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # defaults file for bazarr 3 | workdir: "/docker" 4 | download_dir: "/downloads" 5 | timezone: Etc/UTC 6 | loadbalancer: false 7 | swarm: false 8 | 9 | # service defaults 10 | service: "bazarr" 11 | service_version: "latest" 12 | service_port: "6767" 13 | service_image: "lscr.io/linuxserver/bazarr" 14 | 15 | -------------------------------------------------------------------------------- /roles/bazarr/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers file for bazarr 3 | -------------------------------------------------------------------------------- /roles/bazarr/meta/main.yml: -------------------------------------------------------------------------------- 1 | galaxy_info: 2 | author: your name 3 | description: your role description 4 | company: your company (optional) 5 | 6 | # If the issue tracker for your role is not on github, uncomment the 7 | # next line and provide a value 8 | # issue_tracker_url: http://example.com/issue/tracker 9 | 10 | # Choose a valid license ID from https://spdx.org - some suggested licenses: 11 | # - BSD-3-Clause (default) 12 | # - MIT 13 | # - GPL-2.0-or-later 14 | # - GPL-3.0-only 15 | # - Apache-2.0 16 | # - CC-BY-4.0 17 | license: license (GPL-2.0-or-later, MIT, etc) 18 | 19 | min_ansible_version: 2.1 20 | 21 | # If this a Container Enabled role, provide the minimum Ansible Container version. 22 | # min_ansible_container_version: 23 | 24 | # 25 | # Provide a list of supported platforms, and for each platform a list of versions. 26 | # If you don't wish to enumerate all versions for a particular platform, use 'all'. 27 | # To view available platforms and versions (or releases), visit: 28 | # https://galaxy.ansible.com/api/v1/platforms/ 29 | # 30 | # platforms: 31 | # - name: Fedora 32 | # versions: 33 | # - all 34 | # - 25 35 | # - name: SomePlatform 36 | # versions: 37 | # - all 38 | # - 1.0 39 | # - 7 40 | # - 99.99 41 | 42 | galaxy_tags: [] 43 | # List tags for your role here, one per line. A tag is a keyword that describes 44 | # and categorizes the role. Users find roles by searching for tags. Be sure to 45 | # remove the '[]' above, if you add tags to this list. 46 | # 47 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 48 | # Maximum 20 tags per role. 49 | 50 | dependencies: [] 51 | # List your role dependencies here, one per line. Be sure to remove the '[]' above, 52 | # if you add dependencies to this list. 
53 | -------------------------------------------------------------------------------- /roles/bazarr/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for bazarr 3 | - name: Create folders 4 | ansible.builtin.file: 5 | path: "{{ workdir }}/{{ service }}/config" 6 | recurse: true 7 | state: directory 8 | mode: 0755 9 | 10 | - name: Copy {{ service }} docker-compose jinja template 11 | ansible.builtin.template: 12 | src: templates/docker-compose.yml.j2 13 | dest: "{{ workdir }}/{{ service }}/docker-compose.yml" 14 | mode: 0644 15 | 16 | - name: Deploy {{ service }} if swarm mode is enabled 17 | block: 18 | 19 | - name: Deploy {{ service }} 20 | ansible.builtin.shell: docker stack deploy -c {{ workdir }}/{{ service }}/docker-compose.yml {{ service }} 21 | 22 | - name: Wait until {{ service }} is running 23 | ansible.builtin.shell: docker stack services --filter name={{ service }}_{{ service }} {{ service }} | grep -c 1/1 || /bin/true 24 | register: service_running 25 | until: service_running.stdout == "1" 26 | retries: 40 27 | delay: 20 28 | 29 | when: swarm == true 30 | 31 | - name: Deploy {{ service }} 32 | block: 33 | 34 | - name: Deploy {{ service }} 35 | ansible.builtin.shell: docker compose -f {{ workdir }}/{{ service }}/docker-compose.yml up -d 36 | 37 | - name: Wait until {{ service }} is running 38 | ansible.builtin.shell: docker inspect {{ service }}-{{service }}-1 | jq '.[] | .State.Status' 39 | register: service_running 40 | until: service_running.stdout == '"running"' 41 | retries: 40 42 | delay: 20 43 | 44 | when: swarm == false -------------------------------------------------------------------------------- /roles/bazarr/templates/docker-compose.yml.j2: -------------------------------------------------------------------------------- 1 | # This file is managed by ansible, do not modify 2 | # 3 | version: "3.3" 4 | 5 | services: 6 | 7 | {{ service }}: 8 | image: {{ service_image }}:{{ service_version }} 9 | networks: 10 | - {{ application }}-network 11 | environment: 12 | - PUID=1000 13 | - PGID=1000 14 | - TZ={{ timezone }} 15 | volumes: 16 | - "{{ workdir }}/{{ service }}/config:/config" 17 | - "{{ download_dir }}:/downloads" 18 | logging: 19 | driver: "json-file" 20 | options: 21 | max-size: "1024k" 22 | max-file: "4" 23 | {% if not swarm %} 24 | labels: 25 | - APP={{ service }} 26 | - traefik.enable=true 27 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 28 | - traefik.http.routers.{{ service }}.entrypoints=websecure 29 | - traefik.http.routers.{{ service }}.tls=true 30 | {% if letsencrypt %} 31 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 32 | {% endif %} 33 | - traefik.http.routers.{{ service }}.service={{ service }} 34 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 35 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 36 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 37 | {% if loadbalancer %} 38 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 39 | {% endif %} 40 | - traefik.docker.network={{ application }}-network 41 | restart: unless-stopped 42 | {% endif %} 43 | {% if swarm %} 44 | deploy: 45 | labels: 46 | - APP={{ service }} 47 | - traefik.enable=true 48 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 49 | - traefik.http.routers.{{ service }}.entrypoints=websecure 50 | - traefik.http.routers.{{ service 
}}.tls=true 51 | {% if letsencrypt %} 52 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 53 | {% endif %} 54 | - traefik.http.routers.{{ service }}.service={{ service }} 55 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 56 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 57 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 58 | {% if loadbalancer %} 59 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 60 | {% endif %} 61 | - traefik.docker.network={{ application }}-network 62 | # service restart policy 63 | restart_policy: 64 | condition: on-failure 65 | delay: 5s 66 | max_attempts: 10 67 | window: 120s 68 | # service update configuration 69 | update_config: 70 | parallelism: 1 71 | delay: 10s 72 | failure_action: continue 73 | monitor: 60s 74 | max_failure_ratio: 0.3 75 | {% endif %} 76 | 77 | networks: 78 | {{ application }}-network: 79 | external: true -------------------------------------------------------------------------------- /roles/bazarr/tests/inventory: -------------------------------------------------------------------------------- 1 | localhost 2 | 3 | -------------------------------------------------------------------------------- /roles/bazarr/tests/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | remote_user: root 4 | roles: 5 | - bazarr 6 | -------------------------------------------------------------------------------- /roles/bazarr/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # vars file for bazarr 3 | -------------------------------------------------------------------------------- /roles/docker-install/README.md: -------------------------------------------------------------------------------- 1 | # docker-install 2 | ------------- 3 | 4 | Ansible role to deploy Docker on Debian or CentOS based hosts. This repo is meant to be used as a Git submodule. 
5 | 
6 | ## Requirements
7 | ------------
8 | 
9 | - Server running Ubuntu 20/22 or CentOS 7 or higher
10 | 
11 | ## Use as Git submodule
12 | 
13 | Needs to be in a 'roles' folder:
14 | ```
15 | /roles/docker-install/$CONTENTS_OF_REPO
16 | /playbook.yml
17 | ```
18 | 
19 | Usage:
20 | ```
21 | cd roles/
22 | git submodule add git@bitbucket.org:vislink-engineering/docker-install.git
23 | ```
24 | 
25 | ## Role Variables
26 | -------------
27 | 
28 | `swarm`
29 | 
30 | Optional, determines whether to enable Docker Swarm mode (default = `false`)
31 | *Swarm mode also allows us to use Docker Secrets*
32 | 
33 | `single_host`
34 | 
35 | Optional, determines if we need to install on a single host or multiple (default = `true`)
36 | 
37 | `advertise_address`
38 | 
39 | Optional, address the docker swarm will advertise and listen on
40 | 
41 | `primary_manager`
42 | 
43 | Required for multi-host installs, the inventory hostname to use as the primary swarm manager
44 | 
45 | ## Example Playbook
46 | -------------
47 | 
48 | ```
49 | - hosts: servers
50 |   roles:
51 |     - role: docker-install
52 |       vars:
53 |         advertise_address: 192.168.1.100
54 | 
55 | ```
--------------------------------------------------------------------------------
/roles/docker-install/defaults/main.yml:
--------------------------------------------------------------------------------
1 | ---
2 | # defaults file for docker
3 | workdir: "/docker"
4 | single_host: true
5 | swarm: false
6 | advertise_address: 127.0.0.1
--------------------------------------------------------------------------------
/roles/docker-install/files/docker-daemon.json:
--------------------------------------------------------------------------------
1 | {
2 |   "userns-remap": "default"
3 | }
4 | 
--------------------------------------------------------------------------------
/roles/docker-install/handlers/main.yml:
--------------------------------------------------------------------------------
1 | ---
2 | # handlers file for docker
3 | 
--------------------------------------------------------------------------------
/roles/docker-install/meta/main.yml:
--------------------------------------------------------------------------------
1 | galaxy_info:
2 |   author: your name
3 |   description: your role description
4 |   company: your company (optional)
5 | 
6 |   # If the issue tracker for your role is not on github, uncomment the
7 |   # next line and provide a value
8 |   # issue_tracker_url: http://example.com/issue/tracker
9 | 
10 |   # Choose a valid license ID from https://spdx.org - some suggested licenses:
11 |   # - BSD-3-Clause (default)
12 |   # - MIT
13 |   # - GPL-2.0-or-later
14 |   # - GPL-3.0-only
15 |   # - Apache-2.0
16 |   # - CC-BY-4.0
17 |   license: license (GPL-2.0-or-later, MIT, etc)
18 | 
19 |   min_ansible_version: 2.1
20 | 
21 |   # If this a Container Enabled role, provide the minimum Ansible Container version.
22 |   # min_ansible_container_version:
23 | 
24 |   #
25 |   # Provide a list of supported platforms, and for each platform a list of versions.
26 |   # If you don't wish to enumerate all versions for a particular platform, use 'all'.
27 |   # To view available platforms and versions (or releases), visit:
28 |   # https://galaxy.ansible.com/api/v1/platforms/
29 |   #
30 |   # platforms:
31 |   # - name: Fedora
32 |   #   versions:
33 |   #   - all
34 |   #   - 25
35 |   # - name: SomePlatform
36 |   #   versions:
37 |   #   - all
38 |   #   - 1.0
39 |   #   - 7
40 |   #   - 99.99
41 | 
42 |   galaxy_tags: []
43 |   # List tags for your role here, one per line. A tag is a keyword that describes
44 |   # and categorizes the role. Users find roles by searching for tags.
Be sure to 45 | # remove the '[]' above, if you add tags to this list. 46 | # 47 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 48 | # Maximum 20 tags per role. 49 | 50 | dependencies: [] 51 | # List your role dependencies here, one per line. Be sure to remove the '[]' above, 52 | # if you add dependencies to this list. 53 | -------------------------------------------------------------------------------- /roles/docker-install/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for docker 3 | - name: Checking if docker is already installed 4 | ansible.builtin.shell: docker --version 5 | ignore_errors: true 6 | register: docker_check 7 | 8 | - name: Install docker if OS is debian or redhat based 9 | block: 10 | - name: Fail if OS is not debian or redhat based 11 | ansible.builtin.fail: 12 | msg: "This is not a Debian or RedHat based OS, please install Docker manually" 13 | when: ansible_facts['os_family'] != "Debian" and ansible_facts['os_family'] != "RedHat" 14 | 15 | - name: Install docker if OS is Debian based 16 | block: 17 | - name: Install required system packages for docker if OS is Debian based 18 | ansible.builtin.apt: name={{ item }} state=latest update_cache=yes 19 | loop: [ 'apt-transport-https', 'ca-certificates', 'curl', 'gnupg', 'software-properties-common', 'python3-pip', 'python3-venv', 'python3-setuptools', 'wget' ] 20 | when: ansible_facts['os_family'] == "Debian" 21 | 22 | - name: Install keyrings 23 | ansible.builtin.shell: install -m 0755 -d /etc/apt/keyrings 24 | when: ansible_facts['os_family'] == "Debian" 25 | 26 | - name: Install GPG key 27 | ansible.builtin.shell: curl -fsSL https://download.docker.com/linux/{{ ansible_facts['distribution'] | lower }}/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg && chmod a+r /etc/apt/keyrings/docker.gpg 28 | when: ansible_facts['os_family'] == "Debian" 29 | 30 | - name: Add docker repository 31 | ansible.builtin.shell: echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/{{ ansible_facts['distribution'] | lower }} $(. 
/etc/os-release && echo "$VERSION_CODENAME") stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null 32 | when: ansible_facts['os_family'] == "Debian" 33 | 34 | - name: Install docker.io if OS is Debian 35 | ansible.builtin.apt: name={{ item }} state=latest update_cache=yes 36 | loop: [ 'docker-ce', 'docker-ce-cli', 'containerd.io', 'docker-buildx-plugin', 'docker-compose-plugin' ] 37 | when: ansible_facts['os_family'] == "Debian" 38 | when: ansible_facts['os_family'] == "Debian" 39 | 40 | - name: Install docker if OS is RedHat 41 | block: 42 | - name: Install dnf python library 43 | ansible.builtin.shell: dnf install -y python3-libdnf5 44 | 45 | - name: Install required system packages for docker if OS is RedHat based 46 | ansible.builtin.yum: name={{ item }} state=latest 47 | loop: [ 'ca-certificates', 'curl', 'python3-pip', 'python3-setuptools', 'wget', 'dnf-plugins-core' ] 48 | 49 | - name: Add docker repo to yum 50 | ansible.builtin.shell: dnf config-manager --add-repo https://download.docker.com/linux/{{ ansible_facts['distribution']|lower }}/docker-ce.repo 51 | 52 | - name: Install docker engine if OS is RedHat 53 | ansible.builtin.yum: 54 | name: "{{ item }}" 55 | state: present 56 | loop: [ 'docker-ce', 'docker-ce-cli', 'containerd.io', 'docker-compose-plugin' ] 57 | 58 | when: ansible_facts['os_family'] == "RedHat" 59 | 60 | - name: Stop docker service 61 | service: name=docker state=stopped 62 | when: usernamespaces is defined 63 | 64 | - name: Copy default docker daemon configuration 65 | ansible.builtin.template: 66 | src: files/docker-daemon.json 67 | dest: "/etc/docker/daemon.json" 68 | mode: 0644 69 | when: usernamespaces is defined 70 | 71 | - name: Creating docker compose location 72 | ansible.builtin.file: 73 | path: "{{ workdir }}" 74 | state: directory 75 | 76 | - name: Start docker daemon 77 | ansible.builtin.service: name=docker state=started enabled=true 78 | when: usernamespaces is defined 79 | 80 | when: docker_check.failed 81 | 82 | - name: Debug 83 | debug: 84 | var: swarm 85 | 86 | - name: Debug2 87 | debug: 88 | var: single_host 89 | 90 | - name: Enable swarm on single host if swarm is set and single_host == true 91 | block: 92 | 93 | - name: Initialize docker swarm with advertise address 94 | ansible.builtin.shell: docker swarm init # --advertise-addr {{ advertise_address }} --listen-addr 0.0.0.0 95 | 96 | when: swarm == true and single_host == true 97 | 98 | - name: Enable swarm on multi hosts if swarm is set and single_host == false 99 | block: 100 | 101 | - name: Initialize docker swarm with advertise address on first node 102 | ansible.builtin.shell: docker swarm init --advertise-addr {{ advertise_address }} --listen-addr 0.0.0.0 103 | when: advertise_address is defined and inventory_hostname == primary_manager 104 | 105 | - name: Initialize docker swarm init without advertise address 106 | ansible.builtin.shell: docker swarm init 107 | when: advertise_address is not defined and inventory_hostname == primary_manager 108 | 109 | - name: Get the manager join token so we can let other hosts join 110 | ansible.builtin.shell: docker swarm join-token -q manager 111 | register: manager_join_token 112 | when: inventory_hostname == primary_manager 113 | 114 | - name: Save the advertise address if advertise_address was not set 115 | ansible.builtin.shell: docker info | awk '/Node Address/{print $3}' 116 | register: advertise_address_new 117 | when: advertise_address is not defined and inventory_hostname == primary_manager 118 | 119 | - name: Save the join 
token and advertise_address vars
120 |       ansible.builtin.set_fact:
121 |         advertise_address: "{{ advertise_address }}"
122 |         manager_join_token: "{{ manager_join_token.stdout }}"
123 |       when: advertise_address is defined and inventory_hostname == primary_manager
124 | 
125 |     - name: Save the join token and advertise_address vars (with auto address)
126 |       ansible.builtin.set_fact:
127 |         advertise_address: "{{ advertise_address_new.stdout }}"
128 |         manager_join_token: "{{ manager_join_token.stdout }}"
129 |       when: advertise_address is not defined and inventory_hostname == primary_manager
130 | 
131 |     - name: Join other hosts to the swarm, we add them as managers as well (using advertise_address)
132 |       ansible.builtin.shell: docker swarm join --token {{ hostvars[primary_manager]['manager_join_token'] }} {{ hostvars[primary_manager]['advertise_address'] }}
133 |       when: inventory_hostname != primary_manager
134 | 
135 |   when: swarm == true and single_host == false
136 | 
137 | - name: Stop docker service
138 |   service: name=docker state=stopped
139 |   when: docker_check.failed
140 | 
141 | - name: Start docker daemon
142 |   ansible.builtin.service: name=docker state=started enabled=true
143 |   when: docker_check.failed
144 | 
145 | - name: Rerun gather_facts
146 |   ansible.builtin.gather_facts:
--------------------------------------------------------------------------------
/roles/docker-install/tests/inventory:
--------------------------------------------------------------------------------
1 | localhost
2 | 
3 | 
--------------------------------------------------------------------------------
/roles/docker-install/tests/test.yml:
--------------------------------------------------------------------------------
1 | ---
2 | - hosts: localhost
3 |   remote_user: root
4 |   roles:
5 |     - docker-install
6 | 
--------------------------------------------------------------------------------
/roles/docker-install/vars/main.yml:
--------------------------------------------------------------------------------
1 | ---
2 | # vars file for docker
3 | 
--------------------------------------------------------------------------------
/roles/emby/README.md:
--------------------------------------------------------------------------------
1 | Role Name
2 | =========
3 | 
4 | A brief description of the role goes here.
5 | 
6 | Requirements
7 | ------------
8 | 
9 | Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.
10 | 
11 | Role Variables
12 | --------------
13 | 
14 | A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well.
15 | 
16 | Dependencies
17 | ------------
18 | 
19 | A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles.
20 | 21 | Example Playbook 22 | ---------------- 23 | 24 | Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too: 25 | 26 | - hosts: servers 27 | roles: 28 | - { role: username.rolename, x: 42 } 29 | 30 | License 31 | ------- 32 | 33 | BSD 34 | 35 | Author Information 36 | ------------------ 37 | 38 | An optional section for the role authors to include contact information, or a website (HTML is not allowed). 39 | -------------------------------------------------------------------------------- /roles/emby/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # defaults file for emby 3 | workdir: "/docker" 4 | download_dir: "/downloads" 5 | timezone: "UTC" 6 | loadbalancer: false 7 | swarm: false 8 | transcoding: false 9 | nvidia: false 10 | raspberry: false 11 | 12 | # service defaults 13 | service: "emby" 14 | service_version: "latest" 15 | service_port: "8096" 16 | service_image: "emby/embyserver" -------------------------------------------------------------------------------- /roles/emby/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers file for emby 3 | -------------------------------------------------------------------------------- /roles/emby/meta/main.yml: -------------------------------------------------------------------------------- 1 | galaxy_info: 2 | author: your name 3 | description: your role description 4 | company: your company (optional) 5 | 6 | # If the issue tracker for your role is not on github, uncomment the 7 | # next line and provide a value 8 | # issue_tracker_url: http://example.com/issue/tracker 9 | 10 | # Choose a valid license ID from https://spdx.org - some suggested licenses: 11 | # - BSD-3-Clause (default) 12 | # - MIT 13 | # - GPL-2.0-or-later 14 | # - GPL-3.0-only 15 | # - Apache-2.0 16 | # - CC-BY-4.0 17 | license: license (GPL-2.0-or-later, MIT, etc) 18 | 19 | min_ansible_version: 2.1 20 | 21 | # If this a Container Enabled role, provide the minimum Ansible Container version. 22 | # min_ansible_container_version: 23 | 24 | # 25 | # Provide a list of supported platforms, and for each platform a list of versions. 26 | # If you don't wish to enumerate all versions for a particular platform, use 'all'. 27 | # To view available platforms and versions (or releases), visit: 28 | # https://galaxy.ansible.com/api/v1/platforms/ 29 | # 30 | # platforms: 31 | # - name: Fedora 32 | # versions: 33 | # - all 34 | # - 25 35 | # - name: SomePlatform 36 | # versions: 37 | # - all 38 | # - 1.0 39 | # - 7 40 | # - 99.99 41 | 42 | galaxy_tags: [] 43 | # List tags for your role here, one per line. A tag is a keyword that describes 44 | # and categorizes the role. Users find roles by searching for tags. Be sure to 45 | # remove the '[]' above, if you add tags to this list. 46 | # 47 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 48 | # Maximum 20 tags per role. 49 | 50 | dependencies: [] 51 | # List your role dependencies here, one per line. Be sure to remove the '[]' above, 52 | # if you add dependencies to this list. 
53 | -------------------------------------------------------------------------------- /roles/emby/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for emby 3 | - name: Create folders 4 | ansible.builtin.file: 5 | path: "{{ workdir }}/{{ service }}/config" 6 | recurse: true 7 | state: directory 8 | mode: 0755 9 | 10 | - name: Copy {{ service }} docker-compose jinja template 11 | ansible.builtin.template: 12 | src: templates/docker-compose.yml.j2 13 | dest: "{{ workdir }}/{{ service }}/docker-compose.yml" 14 | mode: 0644 15 | 16 | - name: Deploy {{ service }} if swarm mode is enabled 17 | block: 18 | 19 | - name: Deploy {{ service }} 20 | ansible.builtin.shell: docker stack deploy -c {{ workdir }}/{{ service }}/docker-compose.yml {{ service }} 21 | 22 | - name: Wait until {{ service }} is running 23 | ansible.builtin.shell: docker stack services --filter name={{ service }}_{{ service }} {{ service }} | grep -c 1/1 || /bin/true 24 | register: service_running 25 | until: service_running.stdout == "1" 26 | retries: 40 27 | delay: 20 28 | 29 | when: swarm == true 30 | 31 | - name: Deploy {{ service }} 32 | block: 33 | 34 | - name: Get video group ID if transcoding is enabled 35 | ansible.builtin.shell: "getent group video | cut -d: -f3" 36 | register: videogroup 37 | when: transcoding == true 38 | 39 | - name: Get render group ID if transcoding is enabled 40 | ansible.builtin.shell: "getent group render | cut -d: -f3" 41 | register: rendergroup 42 | when: transcoding == true 43 | 44 | - name: Deploy {{ service }} 45 | ansible.builtin.shell: docker compose -f {{ workdir }}/{{ service }}/docker-compose.yml up -d 46 | 47 | - name: Wait until {{ service }} is running 48 | ansible.builtin.shell: docker inspect {{ service }}-{{service }}-1 | jq '.[] | .State.Status' 49 | register: service_running 50 | until: service_running.stdout == '"running"' 51 | retries: 40 52 | delay: 20 53 | 54 | when: swarm == false -------------------------------------------------------------------------------- /roles/emby/templates/docker-compose.yml.j2: -------------------------------------------------------------------------------- 1 | # This file is managed by ansible, do not modify 2 | # 3 | version: "3.3" 4 | 5 | services: 6 | 7 | {{ service }}: 8 | image: {{ service_image }}:{{ service_version }} 9 | networks: 10 | - {{ application }}-network 11 | environment: 12 | - PUID=1000 13 | - PGID=1000 14 | - TZ="{{ timezone }}" 15 | {% if (transcoding) and (not swarm) %} 16 | - GIDLIST={{ videogroup.stdout }},{{ rendergroup.stdout }} 17 | {% endif %} 18 | volumes: 19 | - "{{ workdir }}/{{ service }}/config:/config" 20 | - "{{ download_dir }}:/downloads" 21 | logging: 22 | driver: "json-file" 23 | options: 24 | max-size: "1024k" 25 | max-file: "4" 26 | {% if not swarm %} 27 | labels: 28 | - APP={{ service }} 29 | - traefik.enable=true 30 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 31 | - traefik.http.routers.{{ service }}.entrypoints=websecure 32 | - traefik.http.routers.{{ service }}.tls=true 33 | {% if letsencrypt %} 34 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 35 | {% endif %} 36 | - traefik.http.routers.{{ service }}.service={{ service }} 37 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 38 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 39 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 40 | {% if 
loadbalancer %} 41 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 42 | {% endif %} 43 | - traefik.docker.network={{ application }}-network 44 | {% if nvidia %} 45 | runtime: nvidia 46 | {% endif %} 47 | {% if transcoding %} 48 | devices: 49 | - /dev/dri:/dev/dri 50 | {% endif %} 51 | {% if raspberry %} 52 | devices: 53 | - /dev/vchiq:/dev/vchiq 54 | {% endif %} 55 | restart: unless-stopped 56 | {% endif %} 57 | {% if swarm %} 58 | deploy: 59 | labels: 60 | - APP={{ service }} 61 | - traefik.enable=true 62 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 63 | - traefik.http.routers.{{ service }}.entrypoints=websecure 64 | - traefik.http.routers.{{ service }}.tls=true 65 | {% if letsencrypt %} 66 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 67 | {% endif %} 68 | - traefik.http.routers.{{ service }}.service={{ service }} 69 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 70 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 71 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 72 | {% if loadbalancer %} 73 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 74 | {% endif %} 75 | - traefik.docker.network={{ application }}-network 76 | # service restart policy 77 | restart_policy: 78 | condition: on-failure 79 | delay: 5s 80 | max_attempts: 10 81 | window: 120s 82 | # service update configuration 83 | update_config: 84 | parallelism: 1 85 | delay: 10s 86 | failure_action: continue 87 | monitor: 60s 88 | max_failure_ratio: 0.3 89 | {% endif %} 90 | 91 | networks: 92 | {{ application }}-network: 93 | external: true -------------------------------------------------------------------------------- /roles/emby/tests/inventory: -------------------------------------------------------------------------------- 1 | localhost 2 | 3 | -------------------------------------------------------------------------------- /roles/emby/tests/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | remote_user: root 4 | roles: 5 | - emby 6 | -------------------------------------------------------------------------------- /roles/emby/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # vars file for emby 3 | -------------------------------------------------------------------------------- /roles/fireflyiii/README.md: -------------------------------------------------------------------------------- 1 | Role Name 2 | ========= 3 | 4 | A brief description of the role goes here. 5 | 6 | Requirements 7 | ------------ 8 | 9 | Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required. 10 | 11 | Role Variables 12 | -------------- 13 | 14 | A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well. 
15 | 16 | Dependencies 17 | ------------ 18 | 19 | A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles. 20 | 21 | Example Playbook 22 | ---------------- 23 | 24 | Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too: 25 | 26 | - hosts: servers 27 | roles: 28 | - { role: username.rolename, x: 42 } 29 | 30 | License 31 | ------- 32 | 33 | BSD 34 | 35 | Author Information 36 | ------------------ 37 | 38 | An optional section for the role authors to include contact information, or a website (HTML is not allowed). 39 | -------------------------------------------------------------------------------- /roles/fireflyiii/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # defaults file for fireflyiii 3 | workdir: "/docker" 4 | download_dir: "/downloads" 5 | timezone: "UTC" 6 | loadbalancer: false 7 | swarm: false 8 | 9 | # service defaults 10 | service: "firefly" 11 | service_version: "latest" 12 | service_port: "8080" 13 | service_image: "fireflyiii/core" 14 | 15 | service_importer_version: "latest" 16 | service_importer_port: "8080" 17 | service_importer_image: "fireflyiii/data-importer" 18 | 19 | service_db_version: "lts" 20 | service_db_image: "mariadb" 21 | 22 | # fireflyiii defaults 23 | emailaddress: "someone@example.com" 24 | 25 | # database defaults 26 | database_name: "firefly" 27 | database_username: "firefly" 28 | database_password: "firefly" 29 | -------------------------------------------------------------------------------- /roles/fireflyiii/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers file for fireflyiii 3 | -------------------------------------------------------------------------------- /roles/fireflyiii/meta/main.yml: -------------------------------------------------------------------------------- 1 | galaxy_info: 2 | author: your name 3 | description: your role description 4 | company: your company (optional) 5 | 6 | # If the issue tracker for your role is not on github, uncomment the 7 | # next line and provide a value 8 | # issue_tracker_url: http://example.com/issue/tracker 9 | 10 | # Choose a valid license ID from https://spdx.org - some suggested licenses: 11 | # - BSD-3-Clause (default) 12 | # - MIT 13 | # - GPL-2.0-or-later 14 | # - GPL-3.0-only 15 | # - Apache-2.0 16 | # - CC-BY-4.0 17 | license: license (GPL-2.0-or-later, MIT, etc) 18 | 19 | min_ansible_version: 2.1 20 | 21 | # If this a Container Enabled role, provide the minimum Ansible Container version. 22 | # min_ansible_container_version: 23 | 24 | galaxy_tags: [] 25 | # List tags for your role here, one per line. A tag is a keyword that describes 26 | # and categorizes the role. Users find roles by searching for tags. Be sure to 27 | # remove the '[]' above, if you add tags to this list. 28 | # 29 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 30 | # Maximum 20 tags per role. 
31 | 32 | dependencies: [] 33 | -------------------------------------------------------------------------------- /roles/fireflyiii/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for fireflyiii 3 | - name: Save a random string so we can have multiple servers at the same time 4 | set_fact: 5 | app_key: "{{ lookup('password', '/dev/null chars=ascii_lowercase,ascii_uppercase,digits length=32') }}" 6 | auto_import_secret: "{{ lookup('password', '/dev/null chars=ascii_lowercase,ascii_uppercase,digits length=32') }}" 7 | static_cron_token: "{{ lookup('password', '/dev/null chars=ascii_lowercase,ascii_uppercase,digits length=32') }}" 8 | 9 | - name: Create folders 10 | ansible.builtin.file: 11 | path: "{{ workdir }}/{{ service }}/{{ item }}" 12 | recurse: true 13 | state: directory 14 | mode: 0755 15 | loop: 16 | - config 17 | - upload 18 | - db 19 | 20 | - name: Copy {{ service }} docker-compose jinja template 21 | ansible.builtin.template: 22 | src: templates/docker-compose.yml.j2 23 | dest: "{{ workdir }}/{{ service }}/docker-compose.yml" 24 | mode: 0644 25 | 26 | - name: Deploy {{ service }} if swarm mode is enabled 27 | block: 28 | 29 | - name: Deploy {{ service }} 30 | ansible.builtin.shell: docker stack deploy -c {{ workdir }}/{{ service }}/docker-compose.yml {{ service }} 31 | 32 | - name: Wait until {{ service }} is running 33 | ansible.builtin.shell: docker stack services --filter name={{ service }}_{{ service }} {{ service }} | grep -c 1/1 || /bin/true 34 | register: service_running 35 | until: service_running.stdout == "3 / 3" 36 | retries: 40 37 | delay: 20 38 | 39 | when: swarm == true 40 | 41 | - name: Deploy {{ service }} 42 | block: 43 | 44 | - name: Deploy {{ service }} 45 | ansible.builtin.shell: docker compose -f {{ workdir }}/{{ service }}/docker-compose.yml up -d 46 | 47 | - name: Wait until {{ service }} is running 48 | ansible.builtin.shell: docker inspect {{ service }}-{{service }}-1 | jq '.[] | .State.Status' 49 | register: service_running 50 | until: service_running.stdout == '"running"' 51 | retries: 40 52 | delay: 20 53 | 54 | when: swarm == false -------------------------------------------------------------------------------- /roles/fireflyiii/templates/docker-compose.yml.j2: -------------------------------------------------------------------------------- 1 | # This file is managed by ansible, do not modify 2 | # 3 | services: 4 | 5 | {{ service }}: 6 | image: {{ service_image }}:{{ service_version }} 7 | networks: 8 | - {{ application }}-network 9 | environment: 10 | - PUID=1000 11 | - PGID=1000 12 | - TZ="{{ timezone }}" 13 | - APP_ENV=production 14 | - APP_DEBUG=false 15 | - SITE_OWNER={{ emailaddress }} 16 | - APP_KEY={{ app_key }} 17 | - DEFAULT_LANGUAGE=en_US 18 | - DEFAULT_LOCALE=equal 19 | - TRUSTED_PROXIES=** 20 | - LOG_CHANNEL=stack 21 | - APP_LOG_LEVEL=notice 22 | - AUDIT_LOG_LEVEL=emergency 23 | - AUDIT_LOG_CHANNEL= 24 | - PAPERTRAIL_HOST= 25 | - PAPERTRAIL_PORT= 26 | - DB_CONNECTION=mysql 27 | - DB_HOST={{ service }}-db 28 | - DB_PORT=3306 29 | - DB_DATABASE={{ database_name }} 30 | - DB_USERNAME={{ database_username }} 31 | - DB_PASSWORD={{ database_password }} 32 | - DB_SOCKET= 33 | - MYSQL_USE_SSL=false 34 | - MYSQL_SSL_VERIFY_SERVER_CERT=true 35 | - MYSQL_SSL_CAPATH=/etc/ssl/certs/ 36 | - MYSQL_SSL_CA= 37 | - MYSQL_SSL_CERT= 38 | - MYSQL_SSL_KEY= 39 | - MYSQL_SSL_CIPHER= 40 | - CACHE_DRIVER=file 41 | - SESSION_DRIVER=file 42 | - REDIS_SCHEME=tcp 43 | - REDIS_PATH= 44 | - 
REDIS_HOST=127.0.0.1 45 | - REDIS_PORT=6379 46 | - REDIS_USERNAME= 47 | - REDIS_PASSWORD= 48 | - REDIS_DB="0" 49 | - REDIS_CACHE_DB="1" 50 | - COOKIE_PATH="/" 51 | - COOKIE_DOMAIN= 52 | - COOKIE_SECURE=false 53 | - COOKIE_SAMESITE=lax 54 | - MAIL_MAILER=log 55 | - MAIL_HOST={{ mail_host|default("null", true)}} 56 | - MAIL_PORT={{ mail_port|default(2525, true) }} 57 | - MAIL_FROM={{ emailaddress }} 58 | - MAIL_USERNAME={{ mail_username|default("null", true) }} 59 | - MAIL_PASSWORD={{ mail_password|default("null", true) }} 60 | - MAIL_ENCRYPTION={{ mail_encryption|default("null", true) }} 61 | - MAIL_SENDMAIL_COMMAND= 62 | - MAILGUN_DOMAIN= 63 | - MAILGUN_SECRET= 64 | - MAILGUN_ENDPOINT=api.mailgun.net 65 | - MANDRILL_SECRET= 66 | - SPARKPOST_SECRET= 67 | - MAILERSEND_API_KEY= 68 | - SEND_ERROR_MESSAGE=true 69 | - SEND_REPORT_JOURNALS=true 70 | - ENABLE_EXTERNAL_MAP=false 71 | - ENABLE_EXCHANGE_RATES=false 72 | - ENABLE_EXTERNAL_RATES=false 73 | - MAP_DEFAULT_LAT=51.983333 74 | - MAP_DEFAULT_LONG=5.916667 75 | - MAP_DEFAULT_ZOOM=6 76 | - VALID_URL_PROTOCOLS= 77 | - AUTHENTICATION_GUARD=web 78 | - AUTHENTICATION_GUARD_HEADER=REMOTE_USER 79 | - AUTHENTICATION_GUARD_EMAIL= 80 | - PASSPORT_PRIVATE_KEY= 81 | - PASSPORT_PUBLIC_KEY= 82 | - CUSTOM_LOGOUT_URL= 83 | - DISABLE_FRAME_HEADER=false 84 | - DISABLE_CSP_HEADER=false 85 | - TRACKER_SITE_ID= 86 | - TRACKER_URL= 87 | - ALLOW_WEBHOOKS=false 88 | - STATIC_CRON_TOKEN={{ static_cron_token }} 89 | - DKR_BUILD_LOCALE=false 90 | - DKR_CHECK_SQLITE=true 91 | - APP_NAME=FireflyIII 92 | - BROADCAST_DRIVER=log 93 | - QUEUE_DRIVER=sync 94 | - CACHE_PREFIX=firefly 95 | - PUSHER_KEY= 96 | - IPINFO_TOKEN= 97 | - PUSHER_SECRET= 98 | - PUSHER_ID= 99 | - DEMO_USERNAME= 100 | - DEMO_PASSWORD= 101 | - USE_RUNNING_BALANCE=false 102 | - FIREFLY_III_LAYOUT=v1 103 | - APP_URL=https://{{ service }}.{{ domain }} 104 | volumes: 105 | - "{{ workdir }}/{{ service }}/upload:/var/www/html/storage/upload" 106 | logging: 107 | driver: "json-file" 108 | options: 109 | max-size: "1024k" 110 | max-file: "4" 111 | {% if not swarm %} 112 | labels: 113 | - APP={{ service }} 114 | - traefik.enable=true 115 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 116 | - traefik.http.routers.{{ service }}.entrypoints=websecure 117 | - traefik.http.routers.{{ service }}.tls=true 118 | {% if letsencrypt %} 119 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 120 | {% endif %} 121 | - traefik.http.routers.{{ service }}.service={{ service }} 122 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 123 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 124 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 125 | {% if loadbalancer %} 126 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 127 | {% endif %} 128 | - traefik.docker.network={{ application }}-network 129 | restart: unless-stopped 130 | {% endif %} 131 | {% if swarm %} 132 | deploy: 133 | labels: 134 | - APP={{ service }} 135 | - traefik.enable=true 136 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 137 | - traefik.http.routers.{{ service }}.entrypoints=websecure 138 | - traefik.http.routers.{{ service }}.tls=true 139 | {% if letsencrypt %} 140 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 141 | {% endif %} 142 | - traefik.http.routers.{{ service }}.service={{ service }} 143 | - traefik.http.services.{{ service 
}}.loadbalancer.server.port={{ service_port }} 144 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 145 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 146 | {% if loadbalancer %} 147 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 148 | {% endif %} 149 | - traefik.docker.network={{ application }}-network 150 | # service restart policy 151 | restart_policy: 152 | condition: on-failure 153 | delay: 5s 154 | max_attempts: 10 155 | window: 120s 156 | # service update configuration 157 | update_config: 158 | parallelism: 1 159 | delay: 10s 160 | failure_action: continue 161 | monitor: 60s 162 | max_failure_ratio: 0.3 163 | {% endif %} 164 | 165 | {{ service }}-importer: 166 | image: {{ service_importer_image }}:{{ service_importer_version }} 167 | networks: 168 | - {{ application }}-network 169 | environment: 170 | - PUID=1000 171 | - PGID=1000 172 | - TZ="{{ timezone }}" 173 | - FIREFLY_III_URL=http://{{ service }}:8080 174 | - VANITY_URL=https://{{ service }}.{{ domain }} 175 | - FIREFLY_III_ACCESS_TOKEN={{ firefly_access_token|default("", true) }} 176 | - FIREFLY_III_CLIENT_ID={{ firefly_client_id|default("", true) }} 177 | - NORDIGEN_ID={{ nordigen_id|default("", true) }} 178 | - NORDIGEN_KEY={{ nordigen_key|default("", true) }} 179 | - NORDIGEN_SANDBOX=false 180 | - RESPOND_TO_GOCARDLESS_LIMIT=wait 181 | - GOCARDLESS_GET_ACCOUNT_DETAILS=false 182 | - GOCARDLESS_GET_BALANCE_DETAILS=false 183 | - SPECTRE_APP_ID={{ spectre_app_id|default("", true) }} 184 | - SPECTRE_SECRET={{ spectre_secret|default("", true) }} 185 | - USE_CACHE=false 186 | - IGNORE_DUPLICATE_ERRORS=false 187 | - IGNORE_NOT_FOUND_TRANSACTIONS=false 188 | - AUTO_IMPORT_SECRET={{ auto_import_secret }} 189 | - CAN_POST_AUTOIMPORT=true 190 | - CAN_POST_FILES=true 191 | - IMPORT_DIR_ALLOWLIST= 192 | - FALLBACK_IN_DIR=false 193 | - VERIFY_TLS_SECURITY=true 194 | - JSON_CONFIGURATION_DIR= 195 | - CONNECTION_TIMEOUT=31.41 196 | - APP_ENV=local 197 | - APP_DEBUG=false 198 | - LOG_CHANNEL=stack 199 | - LOG_RETURN_JSON=false 200 | - LOG_LEVEL=debug 201 | - TRUSTED_PROXIES=** 202 | - ENABLE_MAIL_REPORT=false 203 | - EXPECT_SECURE_URL=false 204 | - MAIL_MAILER= 205 | - MAIL_DESTINATION={{ emailaddress_destination }} 206 | - MAIL_FROM={{ emailaddress }} 207 | - MAIL_HOST={{ mail_host|default("null", true)}} 208 | - MAIL_PORT={{ mail_port|default(2525, true) }} 209 | - MAIL_USERNAME={{ mail_username|default("null", true) }} 210 | - MAIL_PASSWORD={{ mail_password|default("null", true) }} 211 | - MAIL_ENCRYPTION={{ mail_encryption|default("null", true) }} 212 | - MAILGUN_DOMAIN= 213 | - MAILGUN_SECRET= 214 | - MAILGUN_ENDPOINT= 215 | - POSTMARK_TOKEN= 216 | - BROADCAST_DRIVER=log 217 | - CACHE_DRIVER=file 218 | - QUEUE_CONNECTION=sync 219 | - SESSION_DRIVER=file 220 | - SESSION_LIFETIME=120 221 | - IS_EXTERNAL=false 222 | - REDIS_HOST=127.0.0.1 223 | - REDIS_PASSWORD="null" 224 | - REDIS_PORT=6379 225 | - REDIS_DB="0" 226 | - REDIS_CACHE_DB="1" 227 | - ASSET_URL= 228 | - TRACKER_SITE_ID= 229 | - TRACKER_URL= 230 | - APP_NAME=DataImporter 231 | - APP_URL=http://localhost 232 | logging: 233 | driver: "json-file" 234 | options: 235 | max-size: "1024k" 236 | max-file: "4" 237 | {% if not swarm %} 238 | labels: 239 | - APP={{ service }}-importer 240 | - traefik.enable=true 241 | - traefik.http.routers.{{ service }}-import.rule=Host(`{{ service }}-import.{{ domain }}`) 242 | - traefik.http.routers.{{ service }}-import.entrypoints=websecure 243 | - 
traefik.http.routers.{{ service }}-import.tls=true 244 | {% if letsencrypt %} 245 | - traefik.http.routers.{{ service }}-import.tls.certresolver=le_resolver 246 | {% endif %} 247 | - traefik.http.routers.{{ service }}-import.service={{ service }}-import 248 | - traefik.http.services.{{ service }}-import.loadbalancer.server.port={{ service_importer_port }} 249 | - traefik.http.services.{{ service }}-import.loadbalancer.sticky.cookie=true 250 | - traefik.http.services.{{ service }}-import.loadbalancer.sticky.cookie.secure=true 251 | {% if loadbalancer %} 252 | - traefik.http.services.{{ service }}-import.loadbalancer.passhostheader=true 253 | {% endif %} 254 | - traefik.docker.network={{ application }}-network 255 | restart: unless-stopped 256 | {% endif %} 257 | {% if swarm %} 258 | deploy: 259 | labels: 260 | - APP={{ service }}-import 261 | - traefik.enable=true 262 | - traefik.http.routers.{{ service }}-import.rule=Host(`{{ service }}-import.{{ domain }}`) 263 | - traefik.http.routers.{{ service }}-import.entrypoints=websecure 264 | - traefik.http.routers.{{ service }}-import.tls=true 265 | {% if letsencrypt %} 266 | - traefik.http.routers.{{ service }}-import.tls.certresolver=le_resolver 267 | {% endif %} 268 | - traefik.http.routers.{{ service }}-import.service={{ service }}-import 269 | - traefik.http.services.{{ service }}-import.loadbalancer.server.port={{ service_port }} 270 | - traefik.http.services.{{ service }}-import.loadbalancer.sticky.cookie=true 271 | - traefik.http.services.{{ service }}-import.loadbalancer.sticky.cookie.secure=true 272 | {% if loadbalancer %} 273 | - traefik.http.services.{{ service }}-import.loadbalancer.passhostheader=true 274 | {% endif %} 275 | - traefik.docker.network={{ application }}-network 276 | # service restart policy 277 | restart_policy: 278 | condition: on-failure 279 | delay: 5s 280 | max_attempts: 10 281 | window: 120s 282 | # service update configuration 283 | update_config: 284 | parallelism: 1 285 | delay: 10s 286 | failure_action: continue 287 | monitor: 60s 288 | max_failure_ratio: 0.3 289 | {% endif %} 290 | 291 | 292 | cron: 293 | image: alpine 294 | command: sh -c "echo \"0 3 * * * wget -qO- http://{{ service }}:{{ service_port }}/api/v1/cron/{{ static_cron_token }};echo\" | crontab - && crond -f -L /dev/stdout" 295 | networks: 296 | - {{ application }}-network 297 | {% if not swarm %} 298 | restart: always 299 | {% endif %} 300 | {% if swarm %} 301 | deploy: 302 | # service restart policy 303 | restart_policy: 304 | condition: on-failure 305 | delay: 5s 306 | max_attempts: 10 307 | window: 120s 308 | # service update configuration 309 | update_config: 310 | parallelism: 1 311 | delay: 10s 312 | failure_action: continue 313 | monitor: 60s 314 | max_failure_ratio: 0.3 315 | {% endif %} 316 | 317 | {{ service }}-db: 318 | image: {{ service_db_image }}:{{ service_db_version }} 319 | networks: 320 | - {{ application }}-network 321 | environment: 322 | - PUID=1000 323 | - PGID=1000 324 | - TZ="{{ timezone }}" 325 | - MYSQL_RANDOM_ROOT_PASSWORD=yes 326 | - MYSQL_USER={{ database_username }} 327 | - MYSQL_PASSWORD={{ database_password }} 328 | - MYSQL_DATABASE={{ database_name }} 329 | volumes: 330 | - "{{ workdir }}/{{ service }}/db:/var/lib/mysql" 331 | logging: 332 | driver: "json-file" 333 | options: 334 | max-size: "1024k" 335 | max-file: "4" 336 | {% if not swarm %} 337 | restart: unless-stopped 338 | {% endif %} 339 | {% if swarm %} 340 | deploy: 341 | # service restart policy 342 | restart_policy: 343 | condition: on-failure 344 | 
delay: 5s 345 | max_attempts: 10 346 | window: 120s 347 | # service update configuration 348 | update_config: 349 | parallelism: 1 350 | delay: 10s 351 | failure_action: continue 352 | monitor: 60s 353 | max_failure_ratio: 0.3 354 | {% endif %} 355 | 356 | networks: 357 | {{ application }}-network: 358 | external: true -------------------------------------------------------------------------------- /roles/fireflyiii/tests/inventory: -------------------------------------------------------------------------------- 1 | localhost 2 | 3 | -------------------------------------------------------------------------------- /roles/fireflyiii/tests/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | remote_user: root 4 | roles: 5 | - fireflyiii 6 | -------------------------------------------------------------------------------- /roles/fireflyiii/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # vars file for fireflyiii 3 | -------------------------------------------------------------------------------- /roles/gitea/README.md: -------------------------------------------------------------------------------- 1 | Role Name 2 | ========= 3 | 4 | A brief description of the role goes here. 5 | 6 | Requirements 7 | ------------ 8 | 9 | Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required. 10 | 11 | Role Variables 12 | -------------- 13 | 14 | A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well. 15 | 16 | Dependencies 17 | ------------ 18 | 19 | A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles. 20 | 21 | Example Playbook 22 | ---------------- 23 | 24 | Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too: 25 | 26 | - hosts: servers 27 | roles: 28 | - { role: username.rolename, x: 42 } 29 | 30 | License 31 | ------- 32 | 33 | BSD 34 | 35 | Author Information 36 | ------------------ 37 | 38 | An optional section for the role authors to include contact information, or a website (HTML is not allowed). 
39 | -------------------------------------------------------------------------------- /roles/gitea/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # defaults file for gitea 3 | workdir: "/docker" 4 | download_dir: "/downloads" 5 | timezone: "UTC" 6 | loadbalancer: false 7 | swarm: false 8 | 9 | # service defaults 10 | service: "gitea" 11 | service_version: "1.23.4" 12 | service_port: "3000" 13 | service_image: "gitea/gitea" 14 | 15 | database_version: "14" 16 | database_name: "gitea" 17 | database_user: "gitea" 18 | database_password: "gitea" 19 | -------------------------------------------------------------------------------- /roles/gitea/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers file for gitea 3 | -------------------------------------------------------------------------------- /roles/gitea/meta/main.yml: -------------------------------------------------------------------------------- 1 | galaxy_info: 2 | author: your name 3 | description: your role description 4 | company: your company (optional) 5 | 6 | # If the issue tracker for your role is not on github, uncomment the 7 | # next line and provide a value 8 | # issue_tracker_url: http://example.com/issue/tracker 9 | 10 | # Choose a valid license ID from https://spdx.org - some suggested licenses: 11 | # - BSD-3-Clause (default) 12 | # - MIT 13 | # - GPL-2.0-or-later 14 | # - GPL-3.0-only 15 | # - Apache-2.0 16 | # - CC-BY-4.0 17 | license: license (GPL-2.0-or-later, MIT, etc) 18 | 19 | min_ansible_version: 2.1 20 | 21 | # If this a Container Enabled role, provide the minimum Ansible Container version. 22 | # min_ansible_container_version: 23 | 24 | # 25 | # Provide a list of supported platforms, and for each platform a list of versions. 26 | # If you don't wish to enumerate all versions for a particular platform, use 'all'. 27 | # To view available platforms and versions (or releases), visit: 28 | # https://galaxy.ansible.com/api/v1/platforms/ 29 | # 30 | # platforms: 31 | # - name: Fedora 32 | # versions: 33 | # - all 34 | # - 25 35 | # - name: SomePlatform 36 | # versions: 37 | # - all 38 | # - 1.0 39 | # - 7 40 | # - 99.99 41 | 42 | galaxy_tags: [] 43 | # List tags for your role here, one per line. A tag is a keyword that describes 44 | # and categorizes the role. Users find roles by searching for tags. Be sure to 45 | # remove the '[]' above, if you add tags to this list. 46 | # 47 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 48 | # Maximum 20 tags per role. 49 | 50 | dependencies: [] 51 | # List your role dependencies here, one per line. Be sure to remove the '[]' above, 52 | # if you add dependencies to this list. 
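Note: the gitea defaults above ship a placeholder database password and a pinned image tag. A minimal override sketch follows; the `dockerhosts` group name and the replacement values are assumptions for illustration only, and shared settings such as `application` and `domain` would normally come from env.yml as described in the top-level README.

```yaml
# Sketch only: host group and override values are illustrative, not part of the repo.
- hosts: dockerhosts
  become: true
  roles:
    - role: gitea
      vars:
        database_password: "a-long-random-secret"  # replace the insecure "gitea" default
        service_version: "1.23.4"                  # pin explicitly instead of relying on the default
```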
53 | -------------------------------------------------------------------------------- /roles/gitea/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for gitea 3 | - name: Create user 'git' 4 | ansible.builtin.user: 5 | name: git 6 | create_home: true 7 | register: git_user 8 | 9 | - name: Create SSH keys for user git 10 | ansible.builtin.shell: sudo -u git ssh-keygen -t ed25519 -f /home/git/.ssh/id_ed25519 -N "" -C "Gitea Host Key" 11 | 12 | - name: Add SSH keys to authorized_keys 13 | ansible.builtin.shell: sudo -u git cat /home/git/.ssh/id_ed25519.pub | sudo -u git tee -a /home/git/.ssh/authorized_keys 14 | 15 | - name: Set permissions on file 16 | ansible.builtin.shell: sudo -u git chmod 600 /home/git/.ssh/authorized_keys 17 | 18 | - name: Create gitea script on host 19 | ansible.builtin.copy: 20 | src: templates/gitea 21 | dest: /usr/local/bin/gitea 22 | mode: '0755' 23 | 24 | - name: Create config and database folder 25 | ansible.builtin.file: 26 | path: "{{ workdir }}/{{ service }}/{{ item }}" 27 | recurse: true 28 | state: directory 29 | mode: '0777' 30 | owner: git 31 | loop: ["config", "database"] 32 | 33 | - name: Create data folder 34 | ansible.builtin.file: 35 | path: "{{ download_dir }}/{{ service }}/data/git" 36 | recurse: true 37 | state: directory 38 | mode: '0777' 39 | owner: git 40 | 41 | - name: Copy {{ service }} docker-compose jinja template 42 | ansible.builtin.template: 43 | src: templates/docker-compose.yml.j2 44 | dest: "{{ workdir }}/{{ service }}/docker-compose.yml" 45 | mode: '0644' 46 | 47 | - name: Deploy {{ service }} if swarm mode is enabled 48 | block: 49 | 50 | - name: Deploy {{ service }} 51 | ansible.builtin.shell: docker stack deploy -c {{ workdir }}/{{ service }}/docker-compose.yml {{ service }} 52 | 53 | - name: Wait until {{ service }} is running 54 | ansible.builtin.shell: docker stack services --filter name={{ service }}_{{ service }} {{ service }} | grep -c 1/1 || /bin/true 55 | register: service_running 56 | until: service_running.stdout == "1" 57 | retries: 40 58 | delay: 20 59 | 60 | when: swarm == true 61 | 62 | - name: Deploy {{ service }} 63 | block: 64 | 65 | - name: Deploy {{ service }} 66 | ansible.builtin.shell: docker compose -f {{ workdir }}/{{ service }}/docker-compose.yml up -d 67 | 68 | - name: Wait until {{ service }} is running 69 | ansible.builtin.shell: docker inspect {{ service }}-{{service }}-1 | jq '.[] | .State.Status' 70 | register: service_running 71 | until: service_running.stdout == '"running"' 72 | retries: 40 73 | delay: 20 74 | 75 | when: swarm == false -------------------------------------------------------------------------------- /roles/gitea/templates/docker-compose.yml.j2: -------------------------------------------------------------------------------- 1 | # This file is managed by ansible, do not modify 2 | # 3 | services: 4 | 5 | {{ service }}: 6 | image: {{ service_image }}:{{ service_version }} 7 | networks: 8 | - {{ application }}-network 9 | environment: 10 | - USER=git 11 | - USER_UID={{ git_user.uid }} 12 | - USER_GID={{ git_user.group }} 13 | - GITEA__database__DB_TYPE=postgres 14 | - GITEA__database__HOST=giteadb:5432 15 | - GITEA__database__NAME={{ database_name }} 16 | - GITEA__database__USER={{ database_user }} 17 | - GITEA__database__PASSWD={{ database_password }} 18 | - TZ="{{ timezone }}" 19 | volumes: 20 | - "/home/git/.ssh:/data/git/.ssh" 21 | - "{{ workdir }}/{{ service }}/config:/etc/gitea" 22 | - "{{ download_dir }}/{{ 
service }}/data:/var/lib/gitea" 23 | logging: 24 | driver: "json-file" 25 | options: 26 | max-size: "1024k" 27 | max-file: "4" 28 | ports: 29 | - "127.0.0.1:2222:22" 30 | {% if not swarm %} 31 | labels: 32 | - APP={{ service }} 33 | - traefik.enable=true 34 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 35 | - traefik.http.routers.{{ service }}.entrypoints=websecure 36 | - traefik.http.routers.{{ service }}.tls=true 37 | {% if letsencrypt %} 38 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 39 | {% endif %} 40 | - traefik.http.routers.{{ service }}.service={{ service }} 41 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 42 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 43 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 44 | {% if loadbalancer %} 45 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 46 | {% endif %} 47 | - traefik.docker.network={{ application }}-network 48 | restart: unless-stopped 49 | {% endif %} 50 | {% if swarm %} 51 | deploy: 52 | labels: 53 | - APP={{ service }} 54 | - traefik.enable=true 55 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 56 | - traefik.http.routers.{{ service }}.entrypoints=websecure 57 | - traefik.http.routers.{{ service }}.tls=true 58 | {% if letsencrypt %} 59 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 60 | {% endif %} 61 | - traefik.http.routers.{{ service }}.service={{ service }} 62 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 63 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 64 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 65 | {% if loadbalancer %} 66 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 67 | {% endif %} 68 | - traefik.docker.network={{ application }}-network 69 | # service restart policy 70 | restart_policy: 71 | condition: on-failure 72 | delay: 5s 73 | max_attempts: 10 74 | window: 120s 75 | # service update configuration 76 | update_config: 77 | parallelism: 1 78 | delay: 10s 79 | failure_action: continue 80 | monitor: 60s 81 | max_failure_ratio: 0.3 82 | {% endif %} 83 | 84 | giteadb: 85 | image: postgres:{{ database_version }} 86 | networks: 87 | - {{ application }}-network 88 | environment: 89 | - POSTGRES_USER={{ database_user }} 90 | - POSTGRES_PASSWORD={{ database_password }} 91 | - POSTGRES_DB={{ database_name }} 92 | volumes: 93 | - postgres:/var/lib/postgresql/data 94 | {% if not swarm %} 95 | labels: 96 | - APP={{ service }} 97 | restart: unless-stopped 98 | {% endif %} 99 | {% if swarm %} 100 | deploy: 101 | labels: 102 | - APP={{ service }} 103 | # service restart policy 104 | restart_policy: 105 | condition: on-failure 106 | delay: 5s 107 | max_attempts: 10 108 | window: 120s 109 | # service update configuration 110 | update_config: 111 | parallelism: 1 112 | delay: 10s 113 | failure_action: continue 114 | monitor: 60s 115 | max_failure_ratio: 0.3 116 | {% endif %} 117 | 118 | 119 | networks: 120 | {{ application }}-network: 121 | external: true 122 | 123 | volumes: 124 | postgres: -------------------------------------------------------------------------------- /roles/gitea/templates/gitea: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | ssh -p 2222 -o StrictHostKeyChecking=no git@127.0.0.1 
"SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@" 3 | -------------------------------------------------------------------------------- /roles/gitea/tests/inventory: -------------------------------------------------------------------------------- 1 | localhost 2 | 3 | -------------------------------------------------------------------------------- /roles/gitea/tests/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | remote_user: root 4 | roles: 5 | - gitea 6 | -------------------------------------------------------------------------------- /roles/gitea/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # vars file for gitea 3 | -------------------------------------------------------------------------------- /roles/homeassistant/README.md: -------------------------------------------------------------------------------- 1 | Role Name 2 | ========= 3 | 4 | A brief description of the role goes here. 5 | 6 | Requirements 7 | ------------ 8 | 9 | Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required. 10 | 11 | Role Variables 12 | -------------- 13 | 14 | A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well. 15 | 16 | Dependencies 17 | ------------ 18 | 19 | A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles. 20 | 21 | Example Playbook 22 | ---------------- 23 | 24 | Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too: 25 | 26 | - hosts: servers 27 | roles: 28 | - { role: username.rolename, x: 42 } 29 | 30 | License 31 | ------- 32 | 33 | BSD 34 | 35 | Author Information 36 | ------------------ 37 | 38 | An optional section for the role authors to include contact information, or a website (HTML is not allowed). 
39 | -------------------------------------------------------------------------------- /roles/homeassistant/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # defaults file for homeassistant 3 | workdir: "/docker" 4 | timezone: "UTC" 5 | loadbalancer: false 6 | 7 | # service defaults 8 | hacs: true 9 | service: "homeassistant" 10 | service_version: "stable" 11 | service_port: "8123" 12 | service_image: "ghcr.io/home-assistant/home-assistant" 13 | service_config_edit: true -------------------------------------------------------------------------------- /roles/homeassistant/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers file for homeassistant 3 | -------------------------------------------------------------------------------- /roles/homeassistant/meta/main.yml: -------------------------------------------------------------------------------- 1 | galaxy_info: 2 | author: your name 3 | description: your role description 4 | company: your company (optional) 5 | 6 | # If the issue tracker for your role is not on github, uncomment the 7 | # next line and provide a value 8 | # issue_tracker_url: http://example.com/issue/tracker 9 | 10 | # Choose a valid license ID from https://spdx.org - some suggested licenses: 11 | # - BSD-3-Clause (default) 12 | # - MIT 13 | # - GPL-2.0-or-later 14 | # - GPL-3.0-only 15 | # - Apache-2.0 16 | # - CC-BY-4.0 17 | license: license (GPL-2.0-or-later, MIT, etc) 18 | 19 | min_ansible_version: 2.1 20 | 21 | # If this a Container Enabled role, provide the minimum Ansible Container version. 22 | # min_ansible_container_version: 23 | 24 | # 25 | # Provide a list of supported platforms, and for each platform a list of versions. 26 | # If you don't wish to enumerate all versions for a particular platform, use 'all'. 27 | # To view available platforms and versions (or releases), visit: 28 | # https://galaxy.ansible.com/api/v1/platforms/ 29 | # 30 | # platforms: 31 | # - name: Fedora 32 | # versions: 33 | # - all 34 | # - 25 35 | # - name: SomePlatform 36 | # versions: 37 | # - all 38 | # - 1.0 39 | # - 7 40 | # - 99.99 41 | 42 | galaxy_tags: [] 43 | # List tags for your role here, one per line. A tag is a keyword that describes 44 | # and categorizes the role. Users find roles by searching for tags. Be sure to 45 | # remove the '[]' above, if you add tags to this list. 46 | # 47 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 48 | # Maximum 20 tags per role. 49 | 50 | dependencies: [] 51 | # List your role dependencies here, one per line. Be sure to remove the '[]' above, 52 | # if you add dependencies to this list. 
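Note: the homeassistant tasks and templates that follow branch on optional `wolpipe` and `tty` dictionaries and read `wolpipe[device].mac` and `tty[device].path`. A hedged sketch of how such variables could be shaped is shown below; the device names and values are invented for illustration, and env.yml.example remains the authoritative reference.

```yaml
# Illustrative only: "desktop", "zigbee" and their values are made-up examples.
wolpipe:
  desktop:
    mac: "AA:BB:CC:DD:EE:FF"   # MAC address to wake via the host-side named pipe
tty:
  zigbee:
    path: /dev/ttyUSB0         # device node passed through to the container
```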
53 | -------------------------------------------------------------------------------- /roles/homeassistant/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for homeassistant 3 | - name: Create folders 4 | ansible.builtin.file: 5 | path: "{{ workdir }}/{{ service }}/config" 6 | recurse: true 7 | state: directory 8 | mode: 0755 9 | 10 | - name: Run the HACS script 11 | ansible.builtin.shell: cd {{ workdir }}/{{ service }}/config && wget -O - https://get.hacs.xyz | bash - 12 | when: hacs == true 13 | 14 | - name: Create host side configuration for Wake On LAN 15 | block: 16 | 17 | - name: Install wakeonlan on the host if OS is Debian based 18 | ansible.builtin.apt: name=wakeonlan state=latest update_cache=yes 19 | when: ansible_facts['os_family'] == "Debian" 20 | 21 | - name: Install ether-wake on the host if OS is RedHat based 22 | ansible.builtin.yum: name=net-tools state=latest 23 | when: ansible_facts['os_family'] == "RedHat" 24 | 25 | - name: Remove named pipe if it exists 26 | ansible.builtin.shell: rm -f {{ workdir }}/{{ service }}/config/wolpipe 27 | ignore_errors: true 28 | 29 | - name: Create the named pipe 30 | ansible.builtin.shell: mkfifo {{ workdir }}/{{ service }}/config/wolpipe 31 | 32 | - name: Make sure it has the correct permissions 33 | ansible.builtin.shell: chmod 644 {{ workdir }}/{{ service }}/config/wolpipe 34 | 35 | - name: Copy over systemd service 36 | ansible.builtin.template: 37 | src: templates/docker-wol.service.j2 38 | dest: /etc/systemd/system/docker-wol.service 39 | mode: 0644 40 | 41 | - name: Start docker-wol daemon 42 | ansible.builtin.service: name=docker-wol state=started enabled=true 43 | 44 | when: wolpipe is defined 45 | 46 | - name: Copy {{ service }} docker-compose jinja template 47 | ansible.builtin.template: 48 | src: templates/docker-compose.yml.j2 49 | dest: "{{ workdir }}/{{ service }}/docker-compose.yml" 50 | mode: 0644 51 | 52 | - name: Deploy {{ service }} 53 | ansible.builtin.shell: docker compose -f {{ workdir }}/{{ service }}/docker-compose.yml up -d 54 | 55 | - name: Wait until {{ service }} is running 56 | ansible.builtin.shell: docker inspect {{ service }}-{{service }}-1 | jq '.[] | .State.Status' 57 | register: service_running 58 | until: service_running.stdout == '"running"' 59 | retries: 40 60 | delay: 20 61 | 62 | - name: Update {{ service }} if we need to edit the config 63 | block: 64 | 65 | - name: Get configuration 66 | ansible.builtin.shell: "awk '/host_whitelist/{print $3}' {{ workdir }}/{{ service }}/config/{{ service }}.ini | tr -d ," 67 | register: host_whitelist 68 | 69 | - name: Create additional yaml files for automation, scripts etc 70 | ansible.builtin.shell: touch {{ workdir }}/{{ service }}/config/{{ item }}.yaml 71 | loop: 72 | - "automation" 73 | - "scripts" 74 | - "scenese" 75 | 76 | - name: Create the themes folder 77 | ansible.builtin.shell: mkdir -p {{ workdir }}/{{ service }}/config/themes 78 | 79 | - name: Copy {{ service }} configuration template 80 | ansible.builtin.template: 81 | src: templates/configuration.yaml.j2 82 | dest: "{{ workdir }}/{{ service }}/config/configuration.yaml" 83 | 84 | - name: Restart {{ service }} 85 | ansible.builtin.shell: docker restart {{ service }}-{{ service }}-1 86 | 87 | when: service_config_edit == true -------------------------------------------------------------------------------- /roles/homeassistant/templates/configuration.yaml.j2: 
-------------------------------------------------------------------------------- 1 | # Loads default set of integrations. Do not remove. 2 | default_config: 3 | 4 | # Load frontend themes from the themes folder 5 | frontend: 6 | themes: !include_dir_merge_named themes 7 | 8 | automation: !include automations.yaml 9 | script: !include scripts.yaml 10 | scene: !include scenes.yaml 11 | 12 | http: 13 | use_x_forwarded_for: true 14 | trusted_proxies: 15 | - 172.0.0.0/8 16 | - 127.0.0.1 17 | 18 | {% if wolpipe or tty %} 19 | shell_command: 20 | {% if wolpipe %} 21 | {% for device in wolpipe %} 22 | wol_{{ device }}: "/bin/echo '{{ wolpipe[device].mac }}' > /config/wolpipe" 23 | {% endfor %} 24 | {% endif %} 25 | {% if tty %} 26 | {% for device in tty %} 27 | {{ device }}: "{{ tty[device].path }}" 28 | {% endfor %} 29 | {% endif %} 30 | {% endif %} -------------------------------------------------------------------------------- /roles/homeassistant/templates/docker-compose.yml.j2: -------------------------------------------------------------------------------- 1 | # This file is managed by ansible, do not modify 2 | # 3 | version: "3.3" 4 | 5 | services: 6 | 7 | {{ service }}: 8 | image: {{ service_image }}:{{ service_version }} 9 | networks: 10 | - {{ application }}-network 11 | environment: 12 | - PUID=1000 13 | - PGID=1000 14 | - TZ="{{ timezone }}" 15 | - HC_BASEPATH=/config 16 | - HC_IGNORE_SSL=True 17 | volumes: 18 | - "{{ workdir }}/{{ service }}/config:/config" 19 | {% if wolpipe %} 20 | - {{ workdir }}/{{ service }}/config/wolpipe:/config/wolpipe 21 | {% endif %} 22 | logging: 23 | driver: "json-file" 24 | options: 25 | max-size: "1024k" 26 | max-file: "4" 27 | {% if tty %} 28 | devices: 29 | {% for device in tty %} 30 | - {{ tty[device].path }}:{{ tty[device].path }} 31 | {% endfor %} 32 | {% endif %} 33 | labels: 34 | - APP={{ service }} 35 | - traefik.enable=true 36 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 37 | - traefik.http.routers.{{ service }}.entrypoints=websecure 38 | - traefik.http.routers.{{ service }}.tls=true 39 | {% if letsencrypt %} 40 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 41 | {% endif %} 42 | - traefik.http.routers.{{ service }}.service={{ service }} 43 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 44 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 45 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 46 | {% if loadbalancer %} 47 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 48 | {% endif %} 49 | - traefik.docker.network={{ application }}-network 50 | restart: unless-stopped 51 | 52 | networks: 53 | {{ application }}-network: 54 | external: true -------------------------------------------------------------------------------- /roles/homeassistant/templates/docker-wol.service.j2: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Listen for wake-on-lan from docker via pipe 3 | 4 | [Service] 5 | Restart=always 6 | {% if ansible_facts['os_family'] == 'Debian' %} 7 | ExecStart=/usr/bin/wakeonlan -f {{ workdir }}/{{ service }}/config/wolpipe 8 | {% elif ansible_facts['os_family'] == 'RedHat' %} 9 | ExecStart=bash -c "ether-wake $$(<{{ workdir }}/{{ service }}/config/wolpipe)" 10 | {% endif %} 11 | [Install] 12 | WantedBy=multi-user.target -------------------------------------------------------------------------------- 
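Note: to recap the wake-on-LAN plumbing defined above, Home Assistant's `shell_command` writes a MAC address into `/config/wolpipe`, which is the named pipe created under the role's config directory on the host, and the `docker-wol` systemd unit reads from that pipe and fires wakeonlan (Debian) or ether-wake (RedHat). A one-off smoke test from the host side could look like the hedged task below; the MAC is a placeholder and the path assumes the default `workdir` of /docker.

```yaml
# Hedged smoke test; adjust the path if workdir or service differ from the defaults.
- name: Push a test MAC into the wake-on-lan pipe
  ansible.builtin.shell: echo 'AA:BB:CC:DD:EE:FF' > /docker/homeassistant/config/wolpipe
```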
/roles/homeassistant/tests/inventory: -------------------------------------------------------------------------------- 1 | localhost 2 | 3 | -------------------------------------------------------------------------------- /roles/homeassistant/tests/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | remote_user: root 4 | roles: 5 | - homeassistant 6 | -------------------------------------------------------------------------------- /roles/homeassistant/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # vars file for homeassistant 3 | -------------------------------------------------------------------------------- /roles/jellyfin/README.md: -------------------------------------------------------------------------------- 1 | Role Name 2 | ========= 3 | 4 | A brief description of the role goes here. 5 | 6 | Requirements 7 | ------------ 8 | 9 | Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required. 10 | 11 | Role Variables 12 | -------------- 13 | 14 | A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well. 15 | 16 | Dependencies 17 | ------------ 18 | 19 | A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles. 20 | 21 | Example Playbook 22 | ---------------- 23 | 24 | Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too: 25 | 26 | - hosts: servers 27 | roles: 28 | - { role: username.rolename, x: 42 } 29 | 30 | License 31 | ------- 32 | 33 | BSD 34 | 35 | Author Information 36 | ------------------ 37 | 38 | An optional section for the role authors to include contact information, or a website (HTML is not allowed). 
39 | -------------------------------------------------------------------------------- /roles/jellyfin/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # defaults file for jellyfin 3 | workdir: "/docker" 4 | download_dir: "/downloads" 5 | timezone: Etc/UTC 6 | loadbalancer: false 7 | swarm: false 8 | transcoding: false 9 | raspberry: false 10 | 11 | # service defaults 12 | service: "jellyfin" 13 | service_version: "latest" 14 | service_port: "8096" 15 | service_image: "lscr.io/linuxserver/jellyfin" 16 | 17 | -------------------------------------------------------------------------------- /roles/jellyfin/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers file for jellyfin 3 | -------------------------------------------------------------------------------- /roles/jellyfin/meta/main.yml: -------------------------------------------------------------------------------- 1 | galaxy_info: 2 | author: your name 3 | description: your role description 4 | company: your company (optional) 5 | 6 | # If the issue tracker for your role is not on github, uncomment the 7 | # next line and provide a value 8 | # issue_tracker_url: http://example.com/issue/tracker 9 | 10 | # Choose a valid license ID from https://spdx.org - some suggested licenses: 11 | # - BSD-3-Clause (default) 12 | # - MIT 13 | # - GPL-2.0-or-later 14 | # - GPL-3.0-only 15 | # - Apache-2.0 16 | # - CC-BY-4.0 17 | license: license (GPL-2.0-or-later, MIT, etc) 18 | 19 | min_ansible_version: 2.1 20 | 21 | # If this a Container Enabled role, provide the minimum Ansible Container version. 22 | # min_ansible_container_version: 23 | 24 | # 25 | # Provide a list of supported platforms, and for each platform a list of versions. 26 | # If you don't wish to enumerate all versions for a particular platform, use 'all'. 27 | # To view available platforms and versions (or releases), visit: 28 | # https://galaxy.ansible.com/api/v1/platforms/ 29 | # 30 | # platforms: 31 | # - name: Fedora 32 | # versions: 33 | # - all 34 | # - 25 35 | # - name: SomePlatform 36 | # versions: 37 | # - all 38 | # - 1.0 39 | # - 7 40 | # - 99.99 41 | 42 | galaxy_tags: [] 43 | # List tags for your role here, one per line. A tag is a keyword that describes 44 | # and categorizes the role. Users find roles by searching for tags. Be sure to 45 | # remove the '[]' above, if you add tags to this list. 46 | # 47 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 48 | # Maximum 20 tags per role. 49 | 50 | dependencies: [] 51 | # List your role dependencies here, one per line. Be sure to remove the '[]' above, 52 | # if you add dependencies to this list. 
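Note: the jellyfin defaults above leave `transcoding` and `raspberry` disabled; enabling one of them makes the non-swarm branch of the compose template map the corresponding device node into the container. A hedged invocation sketch, with the host group name as an assumption:

```yaml
# Sketch: exposes /dev/dri for hardware transcoding (rendered in non-swarm deployments only).
- hosts: dockerhosts
  roles:
    - role: jellyfin
      vars:
        transcoding: true
        download_dir: /downloads   # media root shared with the other roles
```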
53 | -------------------------------------------------------------------------------- /roles/jellyfin/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for jellyfin 3 | - name: Create folders 4 | ansible.builtin.file: 5 | path: "{{ workdir }}/{{ service }}/config" 6 | recurse: true 7 | state: directory 8 | mode: 0755 9 | 10 | - name: Copy {{ service }} docker-compose jinja template 11 | ansible.builtin.template: 12 | src: templates/docker-compose.yml.j2 13 | dest: "{{ workdir }}/{{ service }}/docker-compose.yml" 14 | mode: 0644 15 | 16 | - name: Deploy {{ service }} if swarm mode is enabled 17 | block: 18 | 19 | - name: Deploy {{ service }} 20 | ansible.builtin.shell: docker stack deploy -c {{ workdir }}/{{ service }}/docker-compose.yml {{ service }} 21 | 22 | - name: Wait until {{ service }} is running 23 | ansible.builtin.shell: docker stack services --filter name={{ service }}_{{ service }} {{ service }} | grep -c 1/1 || /bin/true 24 | register: service_running 25 | until: service_running.stdout == "1" 26 | retries: 40 27 | delay: 20 28 | 29 | when: swarm == true 30 | 31 | - name: Deploy {{ service }} 32 | block: 33 | 34 | - name: Deploy {{ service }} 35 | ansible.builtin.shell: docker compose -f {{ workdir }}/{{ service }}/docker-compose.yml up -d 36 | 37 | - name: Wait until {{ service }} is running 38 | ansible.builtin.shell: docker inspect {{ service }}-{{service }}-1 | jq '.[] | .State.Status' 39 | register: service_running 40 | until: service_running.stdout == '"running"' 41 | retries: 40 42 | delay: 20 43 | 44 | when: swarm == false -------------------------------------------------------------------------------- /roles/jellyfin/templates/docker-compose.yml.j2: -------------------------------------------------------------------------------- 1 | # This file is managed by ansible, do not modify 2 | # 3 | version: "3.3" 4 | 5 | services: 6 | 7 | {{ service }}: 8 | image: {{ service_image }}:{{ service_version }} 9 | networks: 10 | - {{ application }}-network 11 | environment: 12 | - PUID=1000 13 | - PGID=1000 14 | - TZ={{ timezone }} 15 | - JELLYFIN_PublishedServerUrl={{ service }}.{{ domain }} 16 | volumes: 17 | - "{{ workdir }}/{{ service }}/config:/config" 18 | - "{{ download_dir }}:/downloads" 19 | ports: 20 | - 7358:7359/udp 21 | logging: 22 | driver: "json-file" 23 | options: 24 | max-size: "1024k" 25 | max-file: "4" 26 | {% if not swarm %} 27 | labels: 28 | - APP={{ service }} 29 | - traefik.enable=true 30 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 31 | - traefik.http.routers.{{ service }}.entrypoints=websecure 32 | - traefik.http.routers.{{ service }}.tls=true 33 | {% if letsencrypt %} 34 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 35 | {% endif %} 36 | - traefik.http.routers.{{ service }}.service={{ service }} 37 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 38 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 39 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 40 | {% if loadbalancer %} 41 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 42 | {% endif %} 43 | - traefik.docker.network={{ application }}-network 44 | {% if transcoding %} 45 | devices: 46 | - /dev/dri:/dev/dri 47 | {% endif %} 48 | {% if raspberry %} 49 | devices: 50 | - /dev/vchiq:/dev/vchiq 51 | {% endif %} 52 | restart: unless-stopped 53 | {% endif %} 54 | {% if 
swarm %} 55 | deploy: 56 | labels: 57 | - APP={{ service }} 58 | - traefik.enable=true 59 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 60 | - traefik.http.routers.{{ service }}.entrypoints=websecure 61 | - traefik.http.routers.{{ service }}.tls=true 62 | {% if letsencrypt %} 63 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 64 | {% endif %} 65 | - traefik.http.routers.{{ service }}.service={{ service }} 66 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 67 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 68 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 69 | {% if loadbalancer %} 70 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 71 | {% endif %} 72 | - traefik.docker.network={{ application }}-network 73 | # service restart policy 74 | restart_policy: 75 | condition: on-failure 76 | delay: 5s 77 | max_attempts: 10 78 | window: 120s 79 | # service update configuration 80 | update_config: 81 | parallelism: 1 82 | delay: 10s 83 | failure_action: continue 84 | monitor: 60s 85 | max_failure_ratio: 0.3 86 | {% endif %} 87 | 88 | networks: 89 | {{ application }}-network: 90 | external: true -------------------------------------------------------------------------------- /roles/jellyfin/tests/inventory: -------------------------------------------------------------------------------- 1 | localhost 2 | 3 | -------------------------------------------------------------------------------- /roles/jellyfin/tests/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | remote_user: root 4 | roles: 5 | - jellyfin 6 | -------------------------------------------------------------------------------- /roles/jellyfin/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # vars file for jellyfin 3 | -------------------------------------------------------------------------------- /roles/piper/README.md: -------------------------------------------------------------------------------- 1 | Role Name 2 | ========= 3 | 4 | A brief description of the role goes here. 5 | 6 | Requirements 7 | ------------ 8 | 9 | Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required. 10 | 11 | Role Variables 12 | -------------- 13 | 14 | A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well. 15 | 16 | Dependencies 17 | ------------ 18 | 19 | A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles. 
20 | 21 | Example Playbook 22 | ---------------- 23 | 24 | Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too: 25 | 26 | - hosts: servers 27 | roles: 28 | - { role: username.rolename, x: 42 } 29 | 30 | License 31 | ------- 32 | 33 | BSD 34 | 35 | Author Information 36 | ------------------ 37 | 38 | An optional section for the role authors to include contact information, or a website (HTML is not allowed). 39 | -------------------------------------------------------------------------------- /roles/piper/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # defaults file for piper 3 | workdir: "/docker" 4 | timezone: "UTC" 5 | loadbalancer: false 6 | swarm: false 7 | 8 | # service defaults 9 | service: "piper" 10 | service_version: "latest" 11 | service_port: "10200" 12 | service_image: "lscr.io/linuxserver/piper" 13 | 14 | piper_voice: "en_US-lessac-medium" 15 | piper_length: "1.0" 16 | piper_noise: "0.667" 17 | piper_noisew: "0.333" 18 | piper_speaker: "0" 19 | piper_procs: "1" 20 | -------------------------------------------------------------------------------- /roles/piper/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers file for piper 3 | -------------------------------------------------------------------------------- /roles/piper/meta/main.yml: -------------------------------------------------------------------------------- 1 | galaxy_info: 2 | author: your name 3 | description: your role description 4 | company: your company (optional) 5 | 6 | # If the issue tracker for your role is not on github, uncomment the 7 | # next line and provide a value 8 | # issue_tracker_url: http://example.com/issue/tracker 9 | 10 | # Choose a valid license ID from https://spdx.org - some suggested licenses: 11 | # - BSD-3-Clause (default) 12 | # - MIT 13 | # - GPL-2.0-or-later 14 | # - GPL-3.0-only 15 | # - Apache-2.0 16 | # - CC-BY-4.0 17 | license: license (GPL-2.0-or-later, MIT, etc) 18 | 19 | min_ansible_version: 2.1 20 | 21 | # If this a Container Enabled role, provide the minimum Ansible Container version. 22 | # min_ansible_container_version: 23 | 24 | galaxy_tags: [] 25 | # List tags for your role here, one per line. A tag is a keyword that describes 26 | # and categorizes the role. Users find roles by searching for tags. Be sure to 27 | # remove the '[]' above, if you add tags to this list. 28 | # 29 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 30 | # Maximum 20 tags per role. 31 | 32 | dependencies: [] 33 | # List your role dependencies here, one per line. Be sure to remove the '[]' above, 34 | # if you add dependencies to this list. 
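Note: the piper defaults above select the `en_US-lessac-medium` voice with neutral synthesis parameters; any of them can be overridden per play. The values below are only illustrative.

```yaml
# Sketch: override the voice and length scale; the remaining piper_* defaults stay as shipped.
- hosts: dockerhosts
  roles:
    - role: piper
      vars:
        piper_voice: "en_GB-alan-medium"   # any voice name the image supports
        piper_length: "1.1"                # values above 1.0 slow speech slightly
```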
35 | -------------------------------------------------------------------------------- /roles/piper/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for piper 3 | - name: Create folders 4 | ansible.builtin.file: 5 | path: "{{ workdir }}/{{ service }}/config" 6 | recurse: true 7 | state: directory 8 | mode: 0755 9 | 10 | - name: Copy {{ service }} docker-compose jinja template 11 | ansible.builtin.template: 12 | src: templates/docker-compose.yml.j2 13 | dest: "{{ workdir }}/{{ service }}/docker-compose.yml" 14 | mode: 0644 15 | 16 | - name: Deploy {{ service }} if swarm mode is enabled 17 | block: 18 | 19 | - name: Deploy {{ service }} 20 | ansible.builtin.shell: docker stack deploy -c {{ workdir }}/{{ service }}/docker-compose.yml {{ service }} 21 | 22 | - name: Wait until {{ service }} is running 23 | ansible.builtin.shell: docker stack services --filter name={{ service }}_{{ service }} {{ service }} | grep -c 1/1 || /bin/true 24 | register: service_running 25 | until: service_running.stdout == "1" 26 | retries: 40 27 | delay: 20 28 | 29 | when: swarm == true 30 | 31 | - name: Deploy {{ service }} 32 | block: 33 | 34 | - name: Deploy {{ service }} 35 | ansible.builtin.shell: docker compose -f {{ workdir }}/{{ service }}/docker-compose.yml up -d 36 | 37 | - name: Wait until {{ service }} is running 38 | ansible.builtin.shell: docker inspect {{ service }}-{{service }}-1 | jq '.[] | .State.Status' 39 | register: service_running 40 | until: service_running.stdout == '"running"' 41 | retries: 40 42 | delay: 20 43 | 44 | when: swarm == false -------------------------------------------------------------------------------- /roles/piper/templates/docker-compose.yml.j2: -------------------------------------------------------------------------------- 1 | # This file is managed by ansible, do not modify 2 | # 3 | services: 4 | 5 | {{ service }}: 6 | image: {{ service_image }}:{{ service_version }} 7 | networks: 8 | - {{ application }}-network 9 | environment: 10 | - PUID=1000 11 | - PGID=1000 12 | - PIPER_VOICE={{ piper_voice }} 13 | - PIPER_LENGTH={{ piper_length }} 14 | - PIPER_NOISE={{ piper_noise }} 15 | - PIPER_NOISEW={{ piper_noisew }} 16 | - PIPER_SPEAKER={{ piper_speaker }} 17 | - PIPER_PROCS={{ piper_procs }} 18 | - TZ="{{ timezone }}" 19 | volumes: 20 | - "{{ workdir }}/{{ service }}/config:/config" 21 | logging: 22 | driver: "json-file" 23 | options: 24 | max-size: "1024k" 25 | max-file: "4" 26 | {% if not swarm %} 27 | labels: 28 | - APP={{ service }} 29 | - traefik.enable=true 30 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 31 | - traefik.http.routers.{{ service }}.entrypoints=websecure 32 | - traefik.http.routers.{{ service }}.tls=true 33 | {% if letsencrypt %} 34 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 35 | {% endif %} 36 | - traefik.http.routers.{{ service }}.service={{ service }} 37 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 38 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 39 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 40 | {% if loadbalancer %} 41 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 42 | {% endif %} 43 | - traefik.docker.network={{ application }}-network 44 | restart: unless-stopped 45 | {% endif %} 46 | {% if swarm %} 47 | deploy: 48 | labels: 49 | - APP={{ service }} 50 | - traefik.enable=true 51 | - 
traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 52 | - traefik.http.routers.{{ service }}.entrypoints=websecure 53 | - traefik.http.routers.{{ service }}.tls=true 54 | {% if letsencrypt %} 55 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 56 | {% endif %} 57 | - traefik.http.routers.{{ service }}.service={{ service }} 58 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 59 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 60 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 61 | {% if loadbalancer %} 62 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 63 | {% endif %} 64 | - traefik.docker.network={{ application }}-network 65 | # service restart policy 66 | restart_policy: 67 | condition: on-failure 68 | delay: 5s 69 | max_attempts: 10 70 | window: 120s 71 | # service update configuration 72 | update_config: 73 | parallelism: 1 74 | delay: 10s 75 | failure_action: continue 76 | monitor: 60s 77 | max_failure_ratio: 0.3 78 | {% endif %} 79 | 80 | networks: 81 | {{ application }}-network: 82 | external: true -------------------------------------------------------------------------------- /roles/piper/tests/inventory: -------------------------------------------------------------------------------- 1 | #SPDX-License-Identifier: MIT-0 2 | localhost 3 | 4 | -------------------------------------------------------------------------------- /roles/piper/tests/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | remote_user: root 4 | roles: 5 | - piper 6 | -------------------------------------------------------------------------------- /roles/piper/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # vars file for piper 3 | -------------------------------------------------------------------------------- /roles/radarr/README.md: -------------------------------------------------------------------------------- 1 | Role Name 2 | ========= 3 | 4 | A brief description of the role goes here. 5 | 6 | Requirements 7 | ------------ 8 | 9 | Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required. 10 | 11 | Role Variables 12 | -------------- 13 | 14 | A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well. 15 | 16 | Dependencies 17 | ------------ 18 | 19 | A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles. 
20 | 21 | Example Playbook 22 | ---------------- 23 | 24 | Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too: 25 | 26 | - hosts: servers 27 | roles: 28 | - { role: username.rolename, x: 42 } 29 | 30 | License 31 | ------- 32 | 33 | BSD 34 | 35 | Author Information 36 | ------------------ 37 | 38 | An optional section for the role authors to include contact information, or a website (HTML is not allowed). 39 | -------------------------------------------------------------------------------- /roles/radarr/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # defaults file for radarr 3 | workdir: "/docker" 4 | download_dir: "/downloads" 5 | timezone: "UTC" 6 | loadbalancer: false 7 | swarm: false 8 | 9 | # service defaults 10 | service: "radarr" 11 | service_version: "develop" 12 | service_port: "7878" 13 | service_image: "lscr.io/linuxserver/radarr" -------------------------------------------------------------------------------- /roles/radarr/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers file for radarr 3 | -------------------------------------------------------------------------------- /roles/radarr/meta/main.yml: -------------------------------------------------------------------------------- 1 | galaxy_info: 2 | author: your name 3 | description: your role description 4 | company: your company (optional) 5 | 6 | # If the issue tracker for your role is not on github, uncomment the 7 | # next line and provide a value 8 | # issue_tracker_url: http://example.com/issue/tracker 9 | 10 | # Choose a valid license ID from https://spdx.org - some suggested licenses: 11 | # - BSD-3-Clause (default) 12 | # - MIT 13 | # - GPL-2.0-or-later 14 | # - GPL-3.0-only 15 | # - Apache-2.0 16 | # - CC-BY-4.0 17 | license: license (GPL-2.0-or-later, MIT, etc) 18 | 19 | min_ansible_version: 2.1 20 | 21 | # If this a Container Enabled role, provide the minimum Ansible Container version. 22 | # min_ansible_container_version: 23 | 24 | # 25 | # Provide a list of supported platforms, and for each platform a list of versions. 26 | # If you don't wish to enumerate all versions for a particular platform, use 'all'. 27 | # To view available platforms and versions (or releases), visit: 28 | # https://galaxy.ansible.com/api/v1/platforms/ 29 | # 30 | # platforms: 31 | # - name: Fedora 32 | # versions: 33 | # - all 34 | # - 25 35 | # - name: SomePlatform 36 | # versions: 37 | # - all 38 | # - 1.0 39 | # - 7 40 | # - 99.99 41 | 42 | galaxy_tags: [] 43 | # List tags for your role here, one per line. A tag is a keyword that describes 44 | # and categorizes the role. Users find roles by searching for tags. Be sure to 45 | # remove the '[]' above, if you add tags to this list. 46 | # 47 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 48 | # Maximum 20 tags per role. 49 | 50 | dependencies: [] 51 | # List your role dependencies here, one per line. Be sure to remove the '[]' above, 52 | # if you add dependencies to this list. 
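Note: the radarr defaults above track the `develop` image tag; a hedged sketch of pinning a steadier tag and pointing the role at a different media root (both values are illustrative, and the host group is an assumption):

```yaml
# Sketch: pin the image tag and relocate downloads; everything else uses the role defaults.
- hosts: dockerhosts
  roles:
    - role: radarr
      vars:
        service_version: "latest"
        download_dir: /srv/media
```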
53 | -------------------------------------------------------------------------------- /roles/radarr/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for sonarr 3 | - name: Create folders 4 | ansible.builtin.file: 5 | path: "{{ workdir }}/{{ service }}/config" 6 | recurse: true 7 | state: directory 8 | mode: 0755 9 | 10 | - name: Copy {{ service }} docker-compose jinja template 11 | ansible.builtin.template: 12 | src: templates/docker-compose.yml.j2 13 | dest: "{{ workdir }}/{{ service }}/docker-compose.yml" 14 | mode: 0644 15 | 16 | - name: Deploy {{ service }} if swarm mode is enabled 17 | block: 18 | 19 | - name: Deploy {{ service }} 20 | ansible.builtin.shell: docker stack deploy -c {{ workdir }}/{{ service }}/docker-compose.yml {{ service }} 21 | 22 | - name: Wait until {{ service }} is running 23 | ansible.builtin.shell: docker stack services --filter name={{ service }}_{{ service }} {{ service }} | grep -c 1/1 || /bin/true 24 | register: service_running 25 | until: service_running.stdout == "1" 26 | retries: 40 27 | delay: 20 28 | 29 | when: swarm == true 30 | 31 | - name: Deploy {{ service }} 32 | block: 33 | 34 | - name: Deploy {{ service }} 35 | ansible.builtin.shell: docker compose -f {{ workdir }}/{{ service }}/docker-compose.yml up -d 36 | 37 | - name: Wait until {{ service }} is running 38 | ansible.builtin.shell: docker inspect {{ service }}-{{service }}-1 | jq '.[] | .State.Status' 39 | register: service_running 40 | until: service_running.stdout == '"running"' 41 | retries: 40 42 | delay: 20 43 | 44 | when: swarm == false -------------------------------------------------------------------------------- /roles/radarr/templates/docker-compose.yml.j2: -------------------------------------------------------------------------------- 1 | # This file is managed by ansible, do not modify 2 | # 3 | version: "3.3" 4 | 5 | services: 6 | 7 | {{ service }}: 8 | image: {{ service_image }}:{{ service_version }} 9 | networks: 10 | - {{ application }}-network 11 | environment: 12 | - PUID=1000 13 | - PGID=1000 14 | - TZ="{{ timezone }}" 15 | volumes: 16 | - "{{ workdir }}/{{ service }}/config:/config" 17 | - "{{ download_dir }}:/downloads" 18 | logging: 19 | driver: "json-file" 20 | options: 21 | max-size: "1024k" 22 | max-file: "4" 23 | {% if not swarm %} 24 | labels: 25 | - APP={{ service }} 26 | - traefik.enable=true 27 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 28 | - traefik.http.routers.{{ service }}.entrypoints=websecure 29 | - traefik.http.routers.{{ service }}.tls=true 30 | {% if letsencrypt %} 31 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 32 | {% endif %} 33 | - traefik.http.routers.{{ service }}.service={{ service }} 34 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 35 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 36 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 37 | {% if loadbalancer %} 38 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 39 | {% endif %} 40 | - traefik.docker.network={{ application }}-network 41 | restart: unless-stopped 42 | {% endif %} 43 | {% if swarm %} 44 | deploy: 45 | labels: 46 | - APP={{ service }} 47 | - traefik.enable=true 48 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 49 | - traefik.http.routers.{{ service }}.entrypoints=websecure 50 | - traefik.http.routers.{{ 
service }}.tls=true 51 | {% if letsencrypt %} 52 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 53 | {% endif %} 54 | - traefik.http.routers.{{ service }}.service={{ service }} 55 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 56 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 57 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 58 | {% if loadbalancer %} 59 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 60 | {% endif %} 61 | - traefik.docker.network={{ application }}-network 62 | # service restart policy 63 | restart_policy: 64 | condition: on-failure 65 | delay: 5s 66 | max_attempts: 10 67 | window: 120s 68 | # service update configuration 69 | update_config: 70 | parallelism: 1 71 | delay: 10s 72 | failure_action: continue 73 | monitor: 60s 74 | max_failure_ratio: 0.3 75 | {% endif %} 76 | 77 | networks: 78 | {{ application }}-network: 79 | external: true -------------------------------------------------------------------------------- /roles/radarr/tests/inventory: -------------------------------------------------------------------------------- 1 | localhost 2 | 3 | -------------------------------------------------------------------------------- /roles/radarr/tests/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | remote_user: root 4 | roles: 5 | - radarr 6 | -------------------------------------------------------------------------------- /roles/radarr/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # vars file for radarr 3 | -------------------------------------------------------------------------------- /roles/readarr/README.md: -------------------------------------------------------------------------------- 1 | Role Name 2 | ========= 3 | 4 | A brief description of the role goes here. 5 | 6 | Requirements 7 | ------------ 8 | 9 | Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required. 10 | 11 | Role Variables 12 | -------------- 13 | 14 | A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well. 15 | 16 | Dependencies 17 | ------------ 18 | 19 | A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles. 20 | 21 | Example Playbook 22 | ---------------- 23 | 24 | Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too: 25 | 26 | - hosts: servers 27 | roles: 28 | - { role: username.rolename, x: 42 } 29 | 30 | License 31 | ------- 32 | 33 | BSD 34 | 35 | Author Information 36 | ------------------ 37 | 38 | An optional section for the role authors to include contact information, or a website (HTML is not allowed). 
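A more concrete sketch for this repository (hypothetical host, domain and `application` values; `application`, `domain` and `letsencrypt` are read by templates/docker-compose.yml.j2 but have no default, so the caller must supply them):

```yaml
# Hypothetical example, not taken from this repo's deploy playbook.
- hosts: mediaserver
  roles:
    - role: readarr
      vars:
        workdir: "/docker"            # compose file and config/ are written under /docker/readarr
        download_dir: "/downloads"    # mounted into the container at /downloads
        timezone: "Europe/Amsterdam"  # passed to the container as TZ
        service_version: "develop"    # image tag for lscr.io/linuxserver/readarr
        swarm: false                  # docker compose up -d instead of docker stack deploy
        # referenced by the template but not defaulted (placeholder values):
        application: "media"          # joins the external media-network
        domain: "example.com"         # Traefik routes readarr.example.com to port 8787
        letsencrypt: true             # adds the le_resolver certresolver label
```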
39 | -------------------------------------------------------------------------------- /roles/readarr/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # defaults file for readarr 3 | workdir: "/docker" 4 | download_dir: "/downloads" 5 | timezone: "UTC" 6 | loadbalancer: false 7 | swarm: false 8 | 9 | # service defaults 10 | service: "readarr" 11 | service_version: "develop" 12 | service_port: "8787" 13 | service_image: "lscr.io/linuxserver/readarr" -------------------------------------------------------------------------------- /roles/readarr/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers file for readarr 3 | -------------------------------------------------------------------------------- /roles/readarr/meta/main.yml: -------------------------------------------------------------------------------- 1 | galaxy_info: 2 | author: your name 3 | description: your role description 4 | company: your company (optional) 5 | 6 | # If the issue tracker for your role is not on github, uncomment the 7 | # next line and provide a value 8 | # issue_tracker_url: http://example.com/issue/tracker 9 | 10 | # Choose a valid license ID from https://spdx.org - some suggested licenses: 11 | # - BSD-3-Clause (default) 12 | # - MIT 13 | # - GPL-2.0-or-later 14 | # - GPL-3.0-only 15 | # - Apache-2.0 16 | # - CC-BY-4.0 17 | license: license (GPL-2.0-or-later, MIT, etc) 18 | 19 | min_ansible_version: 2.1 20 | 21 | # If this a Container Enabled role, provide the minimum Ansible Container version. 22 | # min_ansible_container_version: 23 | 24 | # 25 | # Provide a list of supported platforms, and for each platform a list of versions. 26 | # If you don't wish to enumerate all versions for a particular platform, use 'all'. 27 | # To view available platforms and versions (or releases), visit: 28 | # https://galaxy.ansible.com/api/v1/platforms/ 29 | # 30 | # platforms: 31 | # - name: Fedora 32 | # versions: 33 | # - all 34 | # - 25 35 | # - name: SomePlatform 36 | # versions: 37 | # - all 38 | # - 1.0 39 | # - 7 40 | # - 99.99 41 | 42 | galaxy_tags: [] 43 | # List tags for your role here, one per line. A tag is a keyword that describes 44 | # and categorizes the role. Users find roles by searching for tags. Be sure to 45 | # remove the '[]' above, if you add tags to this list. 46 | # 47 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 48 | # Maximum 20 tags per role. 49 | 50 | dependencies: [] 51 | # List your role dependencies here, one per line. Be sure to remove the '[]' above, 52 | # if you add dependencies to this list. 
53 | -------------------------------------------------------------------------------- /roles/readarr/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for readarr 3 | - name: Create folders 4 | ansible.builtin.file: 5 | path: "{{ workdir }}/{{ service }}/config" 6 | recurse: true 7 | state: directory 8 | mode: 0755 9 | 10 | - name: Copy {{ service }} docker-compose jinja template 11 | ansible.builtin.template: 12 | src: templates/docker-compose.yml.j2 13 | dest: "{{ workdir }}/{{ service }}/docker-compose.yml" 14 | mode: 0644 15 | 16 | - name: Deploy {{ service }} if swarm mode is enabled 17 | block: 18 | 19 | - name: Deploy {{ service }} 20 | ansible.builtin.shell: docker stack deploy -c {{ workdir }}/{{ service }}/docker-compose.yml {{ service }} 21 | 22 | - name: Wait until {{ service }} is running 23 | ansible.builtin.shell: docker stack services --filter name={{ service }}_{{ service }} {{ service }} | grep -c 1/1 || /bin/true 24 | register: service_running 25 | until: service_running.stdout == "1" 26 | retries: 40 27 | delay: 20 28 | 29 | when: swarm == true 30 | 31 | - name: Deploy {{ service }} 32 | block: 33 | 34 | - name: Deploy {{ service }} 35 | ansible.builtin.shell: docker compose -f {{ workdir }}/{{ service }}/docker-compose.yml up -d 36 | 37 | - name: Wait until {{ service }} is running 38 | ansible.builtin.shell: docker inspect {{ service }}-{{service }}-1 | jq '.[] | .State.Status' 39 | register: service_running 40 | until: service_running.stdout == '"running"' 41 | retries: 40 42 | delay: 20 43 | 44 | when: swarm == false -------------------------------------------------------------------------------- /roles/readarr/templates/docker-compose.yml.j2: -------------------------------------------------------------------------------- 1 | # This file is managed by ansible, do not modify 2 | # 3 | version: "3.3" 4 | 5 | services: 6 | 7 | {{ service }}: 8 | image: {{ service_image }}:{{ service_version }} 9 | networks: 10 | - {{ application }}-network 11 | environment: 12 | - PUID=1000 13 | - PGID=1000 14 | - TZ="{{ timezone }}" 15 | volumes: 16 | - "{{ workdir }}/{{ service }}/config:/config" 17 | - "{{ download_dir }}:/downloads" 18 | logging: 19 | driver: "json-file" 20 | options: 21 | max-size: "1024k" 22 | max-file: "4" 23 | {% if not swarm %} 24 | labels: 25 | - APP={{ service }} 26 | - traefik.enable=true 27 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 28 | - traefik.http.routers.{{ service }}.entrypoints=websecure 29 | - traefik.http.routers.{{ service }}.tls=true 30 | {% if letsencrypt %} 31 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 32 | {% endif %} 33 | - traefik.http.routers.{{ service }}.service={{ service }} 34 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 35 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 36 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 37 | {% if loadbalancer %} 38 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 39 | {% endif %} 40 | - traefik.docker.network={{ application }}-network 41 | restart: unless-stopped 42 | {% endif %} 43 | {% if swarm %} 44 | deploy: 45 | labels: 46 | - APP={{ service }} 47 | - traefik.enable=true 48 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 49 | - traefik.http.routers.{{ service }}.entrypoints=websecure 50 | - traefik.http.routers.{{ 
service }}.tls=true 51 | {% if letsencrypt %} 52 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 53 | {% endif %} 54 | - traefik.http.routers.{{ service }}.service={{ service }} 55 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 56 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 57 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 58 | {% if loadbalancer %} 59 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 60 | {% endif %} 61 | - traefik.docker.network={{ application }}-network 62 | # service restart policy 63 | restart_policy: 64 | condition: on-failure 65 | delay: 5s 66 | max_attempts: 10 67 | window: 120s 68 | # service update configuration 69 | update_config: 70 | parallelism: 1 71 | delay: 10s 72 | failure_action: continue 73 | monitor: 60s 74 | max_failure_ratio: 0.3 75 | {% endif %} 76 | 77 | networks: 78 | {{ application }}-network: 79 | external: true -------------------------------------------------------------------------------- /roles/readarr/tests/inventory: -------------------------------------------------------------------------------- 1 | localhost 2 | 3 | -------------------------------------------------------------------------------- /roles/readarr/tests/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | remote_user: root 4 | roles: 5 | - readarr 6 | -------------------------------------------------------------------------------- /roles/readarr/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # vars file for readarr 3 | -------------------------------------------------------------------------------- /roles/sabnzbd/README.md: -------------------------------------------------------------------------------- 1 | Role Name 2 | ========= 3 | 4 | A brief description of the role goes here. 5 | 6 | Requirements 7 | ------------ 8 | 9 | Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required. 10 | 11 | Role Variables 12 | -------------- 13 | 14 | A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well. 15 | 16 | Dependencies 17 | ------------ 18 | 19 | A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles. 20 | 21 | Example Playbook 22 | ---------------- 23 | 24 | Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too: 25 | 26 | - hosts: servers 27 | roles: 28 | - { role: username.rolename, x: 42 } 29 | 30 | License 31 | ------- 32 | 33 | BSD 34 | 35 | Author Information 36 | ------------------ 37 | 38 | An optional section for the role authors to include contact information, or a website (HTML is not allowed). 
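For this repository specifically, the interesting switch is `service_config_edit` (default `true`): tasks/main.yml, shown further down, waits for sabnzbd to write its sabnzbd.ini and then rewrites `host_whitelist` to `sabnzbd.<domain>` so the Traefik-facing hostname is accepted by sabnzbd's host checking. A hypothetical play (placeholder host and domain values; `application`, `domain` and `letsencrypt` have no default):

```yaml
# Hypothetical example, not taken from this repo's deploy playbook.
- hosts: mediaserver
  roles:
    - role: sabnzbd
      vars:
        service_config_edit: true   # rewrite host_whitelist in sabnzbd.ini after the first start
        download_dir: "/downloads"  # mounted into the container at /downloads
        # referenced by the template and tasks but not defaulted (placeholder values):
        application: "media"        # joins the external media-network
        domain: "example.com"       # hostname and Host rule become sabnzbd.example.com
        letsencrypt: true           # adds the le_resolver certresolver label
```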
39 | -------------------------------------------------------------------------------- /roles/sabnzbd/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # defaults file for sabnzbd 3 | workdir: "/docker" 4 | download_dir: "/downloads" 5 | timezone: "UTC" 6 | loadbalancer: false 7 | swarm: false 8 | 9 | # service defaults 10 | service: "sabnzbd" 11 | service_version: "latest" 12 | service_port: "8080" 13 | service_image: "lscr.io/linuxserver/sabnzbd" 14 | service_config_edit: true -------------------------------------------------------------------------------- /roles/sabnzbd/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers file for sabnzbd 3 | -------------------------------------------------------------------------------- /roles/sabnzbd/meta/main.yml: -------------------------------------------------------------------------------- 1 | galaxy_info: 2 | author: your name 3 | description: your role description 4 | company: your company (optional) 5 | 6 | # If the issue tracker for your role is not on github, uncomment the 7 | # next line and provide a value 8 | # issue_tracker_url: http://example.com/issue/tracker 9 | 10 | # Choose a valid license ID from https://spdx.org - some suggested licenses: 11 | # - BSD-3-Clause (default) 12 | # - MIT 13 | # - GPL-2.0-or-later 14 | # - GPL-3.0-only 15 | # - Apache-2.0 16 | # - CC-BY-4.0 17 | license: license (GPL-2.0-or-later, MIT, etc) 18 | 19 | min_ansible_version: 2.1 20 | 21 | # If this a Container Enabled role, provide the minimum Ansible Container version. 22 | # min_ansible_container_version: 23 | 24 | # 25 | # Provide a list of supported platforms, and for each platform a list of versions. 26 | # If you don't wish to enumerate all versions for a particular platform, use 'all'. 27 | # To view available platforms and versions (or releases), visit: 28 | # https://galaxy.ansible.com/api/v1/platforms/ 29 | # 30 | # platforms: 31 | # - name: Fedora 32 | # versions: 33 | # - all 34 | # - 25 35 | # - name: SomePlatform 36 | # versions: 37 | # - all 38 | # - 1.0 39 | # - 7 40 | # - 99.99 41 | 42 | galaxy_tags: [] 43 | # List tags for your role here, one per line. A tag is a keyword that describes 44 | # and categorizes the role. Users find roles by searching for tags. Be sure to 45 | # remove the '[]' above, if you add tags to this list. 46 | # 47 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 48 | # Maximum 20 tags per role. 49 | 50 | dependencies: [] 51 | # List your role dependencies here, one per line. Be sure to remove the '[]' above, 52 | # if you add dependencies to this list. 
53 | -------------------------------------------------------------------------------- /roles/sabnzbd/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for sabnzbd 3 | - name: Create folders 4 | ansible.builtin.shell: mkdir -p {{ workdir }}/{{ service }}/config 5 | 6 | - name: Copy {{ service }} docker-compose jinja template 7 | template: 8 | src: templates/docker-compose.yml.j2 9 | dest: "{{ workdir }}/{{ service }}/docker-compose.yml" 10 | mode: 0644 11 | 12 | - name: Deploy if docker swarm 13 | block: 14 | 15 | - name: Deploy {{ service }} 16 | ansible.builtin.shell: docker stack deploy -c {{ workdir }}/{{ service }}/docker-compose.yml {{ service }} 17 | 18 | - name: Wait until {{ service }} is running 19 | ansible.builtin.shell: docker stack services --filter name={{ service }}_{{ service }} {{ service }} | grep -c 1/1 || /bin/true 20 | register: service_running 21 | until: service_running.stdout == "1" 22 | retries: 40 23 | delay: 20 24 | 25 | when: swarm == true 26 | 27 | - name: Deploy {{ service }} 28 | block: 29 | 30 | - name: Deploy {{ service }} 31 | ansible.builtin.shell: docker compose -f {{ workdir }}/{{ service }}/docker-compose.yml up -d 32 | 33 | - name: Wait until {{ service }} is running 34 | shell: docker inspect {{ service }}-{{service }}-1 | jq '.[] | .State.Status' 35 | register: service_running 36 | until: service_running.stdout == '"running"' 37 | retries: 40 38 | delay: 20 39 | 40 | when: swarm == false 41 | 42 | - name: Update {{ service }} if we need to edit the config 43 | block: 44 | 45 | - name: Wait until {{ service }} finished its configuration 46 | shell: grep -c host_whitelist {{ workdir }}/{{ service }}/config/{{ service }}.ini 47 | register: config_ready 48 | until: config_ready.stdout == "1" 49 | retries: 40 50 | delay: 20 51 | 52 | - name: Get configuration 53 | shell: "awk '/host_whitelist/{print $3}' {{ workdir }}/{{ service }}/config/{{ service }}.ini | tr -d ," 54 | register: host_whitelist 55 | 56 | - name: Change the configuration 57 | ansible.builtin.shell: "sed -i 's/{{ host_whitelist.stdout }}/{{ service }}.{{ domain }}/g' {{ workdir }}/{{ service }}/config/{{ service }}.ini" 58 | 59 | - name: Restart {{ service }} (swarm mode) 60 | ansible.builtin.shell: docker service update --force $(docker stack services {{ service }} | awk '/{{ service }}/{print $1}') 61 | when: swarm == true 62 | 63 | - name: Stop {{ service }} 64 | ansible.builtin.shell: docker compose -f {{ workdir }}/{{ service }}/docker-compose.yml down 65 | when: swarm == false 66 | 67 | - name: Start {{ service }} 68 | ansible.builtin.shell: docker compose -f {{ workdir }}/{{ service }}/docker-compose.yml up -d 69 | when: swarm == false 70 | 71 | - name: Wait until {{ service }} is running 72 | shell: docker inspect {{ service }}-{{service }}-1 | jq '.[] | .State.Status' 73 | register: service_running 74 | until: service_running.stdout == '"running"' 75 | retries: 40 76 | delay: 20 77 | when: swarm == false 78 | 79 | when: service_config_edit == true 80 | -------------------------------------------------------------------------------- /roles/sabnzbd/templates/docker-compose.yml.j2: -------------------------------------------------------------------------------- 1 | # This file is managed by ansible, do not modify 2 | # 3 | version: "3.3" 4 | 5 | services: 6 | 7 | {{ service }}: 8 | image: {{ service_image }}:{{ service_version }} 9 | {% if not swarm %} 10 | hostname: {{ service }}.{{ domain }} 11 | {% endif %} 
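    # Editor's note (assumption): sabnzbd only accepts requests whose Host header it
    # recognises, so pinning the container hostname to the public FQDN outside swarm
    # mode appears intended to satisfy that check; tasks/main.yml additionally rewrites
    # host_whitelist in sabnzbd.ini to the same sabnzbd.<domain> value after first start.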
12 | networks: 13 | - {{ application }}-network 14 | environment: 15 | - PUID=1000 16 | - PGID=1000 17 | - TZ="{{ timezone }}" 18 | volumes: 19 | - "{{ workdir }}/{{ service }}/config:/config" 20 | - "{{ download_dir }}:/downloads" 21 | logging: 22 | driver: "json-file" 23 | options: 24 | max-size: "1024k" 25 | max-file: "4" 26 | {% if not swarm %} 27 | labels: 28 | - APP={{ service }} 29 | - traefik.enable=true 30 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 31 | - traefik.http.routers.{{ service }}.entrypoints=websecure 32 | - traefik.http.routers.{{ service }}.tls=true 33 | {% if letsencrypt %} 34 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 35 | {% endif %} 36 | - traefik.http.routers.{{ service }}.service={{ service }} 37 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 38 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 39 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 40 | {% if loadbalancer %} 41 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 42 | {% endif %} 43 | - traefik.docker.network={{ application }}-network 44 | restart: unless-stopped 45 | {% endif %} 46 | {% if swarm %} 47 | deploy: 48 | labels: 49 | - APP={{ service }} 50 | - traefik.enable=true 51 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 52 | - traefik.http.routers.{{ service }}.entrypoints=websecure 53 | - traefik.http.routers.{{ service }}.tls=true 54 | {% if letsencrypt %} 55 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 56 | {% endif %} 57 | - traefik.http.routers.{{ service }}.service={{ service }} 58 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 59 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 60 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 61 | {% if loadbalancer %} 62 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 63 | {% endif %} 64 | - traefik.docker.network={{ application }}-network 65 | # service restart policy 66 | restart_policy: 67 | condition: on-failure 68 | delay: 5s 69 | max_attempts: 10 70 | window: 120s 71 | # service update configuration 72 | update_config: 73 | parallelism: 1 74 | delay: 10s 75 | failure_action: continue 76 | monitor: 60s 77 | max_failure_ratio: 0.3 78 | {% endif %} 79 | 80 | networks: 81 | {{ application }}-network: 82 | external: true -------------------------------------------------------------------------------- /roles/sabnzbd/tests/inventory: -------------------------------------------------------------------------------- 1 | localhost 2 | 3 | -------------------------------------------------------------------------------- /roles/sabnzbd/tests/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | remote_user: root 4 | roles: 5 | - sabnzbd 6 | -------------------------------------------------------------------------------- /roles/sabnzbd/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # vars file for sabnzbd 3 | -------------------------------------------------------------------------------- /roles/slskd/README.md: -------------------------------------------------------------------------------- 1 | Role Name 2 | ========= 3 | 4 | A brief description of the role goes here. 
5 | 6 | Requirements 7 | ------------ 8 | 9 | Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required. 10 | 11 | Role Variables 12 | -------------- 13 | 14 | A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well. 15 | 16 | Dependencies 17 | ------------ 18 | 19 | A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles. 20 | 21 | Example Playbook 22 | ---------------- 23 | 24 | Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too: 25 | 26 | - hosts: servers 27 | roles: 28 | - { role: username.rolename, x: 42 } 29 | 30 | License 31 | ------- 32 | 33 | BSD 34 | 35 | Author Information 36 | ------------------ 37 | 38 | An optional section for the role authors to include contact information, or a website (HTML is not allowed). 39 | -------------------------------------------------------------------------------- /roles/slskd/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # defaults file for slskd 3 | workdir: "/docker" 4 | download_dir: "/downloads" 5 | timezone: "UTC" 6 | loadbalancer: false 7 | swarm: false 8 | 9 | # service defaults 10 | service: "slskd" 11 | service_version: "latest" 12 | service_port: "5030" 13 | service_image: "slskd/slskd" -------------------------------------------------------------------------------- /roles/slskd/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers file for slskd 3 | -------------------------------------------------------------------------------- /roles/slskd/meta/main.yml: -------------------------------------------------------------------------------- 1 | galaxy_info: 2 | author: your name 3 | description: your role description 4 | company: your company (optional) 5 | 6 | # If the issue tracker for your role is not on github, uncomment the 7 | # next line and provide a value 8 | # issue_tracker_url: http://example.com/issue/tracker 9 | 10 | # Choose a valid license ID from https://spdx.org - some suggested licenses: 11 | # - BSD-3-Clause (default) 12 | # - MIT 13 | # - GPL-2.0-or-later 14 | # - GPL-3.0-only 15 | # - Apache-2.0 16 | # - CC-BY-4.0 17 | license: license (GPL-2.0-or-later, MIT, etc) 18 | 19 | min_ansible_version: 2.1 20 | 21 | # If this a Container Enabled role, provide the minimum Ansible Container version. 22 | # min_ansible_container_version: 23 | 24 | # 25 | # Provide a list of supported platforms, and for each platform a list of versions. 26 | # If you don't wish to enumerate all versions for a particular platform, use 'all'. 
27 | # To view available platforms and versions (or releases), visit: 28 | # https://galaxy.ansible.com/api/v1/platforms/ 29 | # 30 | # platforms: 31 | # - name: Fedora 32 | # versions: 33 | # - all 34 | # - 25 35 | # - name: SomePlatform 36 | # versions: 37 | # - all 38 | # - 1.0 39 | # - 7 40 | # - 99.99 41 | 42 | galaxy_tags: [] 43 | # List tags for your role here, one per line. A tag is a keyword that describes 44 | # and categorizes the role. Users find roles by searching for tags. Be sure to 45 | # remove the '[]' above, if you add tags to this list. 46 | # 47 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 48 | # Maximum 20 tags per role. 49 | 50 | dependencies: [] 51 | # List your role dependencies here, one per line. Be sure to remove the '[]' above, 52 | # if you add dependencies to this list. 53 | -------------------------------------------------------------------------------- /roles/slskd/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for slskd 3 | - name: Create folders 4 | ansible.builtin.file: 5 | path: "{{ workdir }}/{{ service }}/app" 6 | recurse: true 7 | state: directory 8 | mode: 0755 9 | 10 | - name: Copy {{ service }} docker-compose jinja template 11 | ansible.builtin.template: 12 | src: templates/docker-compose.yml.j2 13 | dest: "{{ workdir }}/{{ service }}/docker-compose.yml" 14 | mode: 0644 15 | 16 | - name: Deploy {{ service }} if swarm mode is enabled 17 | block: 18 | 19 | - name: Deploy {{ service }} 20 | ansible.builtin.shell: docker stack deploy -c {{ workdir }}/{{ service }}/docker-compose.yml {{ service }} 21 | 22 | - name: Wait until {{ service }} is running 23 | ansible.builtin.shell: docker stack services --filter name={{ service }}_{{ service }} {{ service }} | grep -c 1/1 || /bin/true 24 | register: service_running 25 | until: service_running.stdout == "1" 26 | retries: 40 27 | delay: 20 28 | 29 | when: swarm == true 30 | 31 | - name: Deploy {{ service }} 32 | block: 33 | 34 | - name: Deploy {{ service }} 35 | ansible.builtin.shell: docker compose -f {{ workdir }}/{{ service }}/docker-compose.yml up -d 36 | 37 | - name: Wait until {{ service }} is running 38 | ansible.builtin.shell: docker inspect {{ service }}-{{service }}-1 | jq '.[] | .State.Status' 39 | register: service_running 40 | until: service_running.stdout == '"running"' 41 | retries: 40 42 | delay: 20 43 | 44 | when: swarm == false 45 | -------------------------------------------------------------------------------- /roles/slskd/templates/docker-compose.yml.j2: -------------------------------------------------------------------------------- 1 | # This file is managed by ansible, do not modify 2 | # 3 | version: "3.3" 4 | 5 | services: 6 | 7 | {{ service }}: 8 | image: {{ service_image }}:{{ service_version }} 9 | networks: 10 | - {{ application }}-network 11 | environment: 12 | - PUID=1000 13 | - PGID=1000 14 | - SLSKD_REMOTE_CONFIGURATION=true 15 | - TZ="{{ timezone }}" 16 | volumes: 17 | - "{{ workdir }}/{{ service }}/app:/app" 18 | - "{{ download_dir }}:/downloads" 19 | logging: 20 | driver: "json-file" 21 | options: 22 | max-size: "1024k" 23 | max-file: "4" 24 | ports: 25 | - 50300:50300 26 | {% if not swarm %} 27 | labels: 28 | - APP={{ service }} 29 | - traefik.enable=true 30 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 31 | - traefik.http.routers.{{ service }}.entrypoints=websecure 32 | - traefik.http.routers.{{ service }}.tls=true 33 | {% if 
letsencrypt %} 34 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 35 | {% endif %} 36 | - traefik.http.routers.{{ service }}.service={{ service }} 37 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 38 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 39 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 40 | {% if loadbalancer %} 41 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 42 | {% endif %} 43 | - traefik.docker.network={{ application }}-network 44 | restart: unless-stopped 45 | {% endif %} 46 | {% if swarm %} 47 | deploy: 48 | labels: 49 | - APP={{ service }} 50 | - traefik.enable=true 51 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 52 | - traefik.http.routers.{{ service }}.entrypoints=websecure 53 | - traefik.http.routers.{{ service }}.tls=true 54 | {% if letsencrypt %} 55 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 56 | {% endif %} 57 | - traefik.http.routers.{{ service }}.service={{ service }} 58 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 59 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 60 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 61 | {% if loadbalancer %} 62 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 63 | {% endif %} 64 | - traefik.docker.network={{ application }}-network 65 | # service restart policy 66 | restart_policy: 67 | condition: on-failure 68 | delay: 5s 69 | max_attempts: 10 70 | window: 120s 71 | # service update configuration 72 | update_config: 73 | parallelism: 1 74 | delay: 10s 75 | failure_action: continue 76 | monitor: 60s 77 | max_failure_ratio: 0.3 78 | {% endif %} 79 | 80 | networks: 81 | {{ application }}-network: 82 | external: true -------------------------------------------------------------------------------- /roles/slskd/tests/inventory: -------------------------------------------------------------------------------- 1 | localhost 2 | 3 | -------------------------------------------------------------------------------- /roles/slskd/tests/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | remote_user: root 4 | roles: 5 | - slskd 6 | -------------------------------------------------------------------------------- /roles/slskd/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # vars file for slskd 3 | -------------------------------------------------------------------------------- /roles/sonarr/README.md: -------------------------------------------------------------------------------- 1 | Role Name 2 | ========= 3 | 4 | A brief description of the role goes here. 5 | 6 | Requirements 7 | ------------ 8 | 9 | Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required. 10 | 11 | Role Variables 12 | -------------- 13 | 14 | A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) 
should be mentioned here as well. 15 | 16 | Dependencies 17 | ------------ 18 | 19 | A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles. 20 | 21 | Example Playbook 22 | ---------------- 23 | 24 | Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too: 25 | 26 | - hosts: servers 27 | roles: 28 | - { role: username.rolename, x: 42 } 29 | 30 | License 31 | ------- 32 | 33 | BSD 34 | 35 | Author Information 36 | ------------------ 37 | 38 | An optional section for the role authors to include contact information, or a website (HTML is not allowed). 39 | -------------------------------------------------------------------------------- /roles/sonarr/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # defaults file for sonarr 3 | workdir: "/docker" 4 | download_dir: "/downloads" 5 | timezone: "UTC" 6 | loadbalancer: false 7 | swarm: false 8 | 9 | # service defaults 10 | service: "sonarr" 11 | service_version: "develop" 12 | service_port: "8989" 13 | service_image: "lscr.io/linuxserver/sonarr" -------------------------------------------------------------------------------- /roles/sonarr/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers file for sonarr 3 | -------------------------------------------------------------------------------- /roles/sonarr/meta/main.yml: -------------------------------------------------------------------------------- 1 | galaxy_info: 2 | author: your name 3 | description: your role description 4 | company: your company (optional) 5 | 6 | # If the issue tracker for your role is not on github, uncomment the 7 | # next line and provide a value 8 | # issue_tracker_url: http://example.com/issue/tracker 9 | 10 | # Choose a valid license ID from https://spdx.org - some suggested licenses: 11 | # - BSD-3-Clause (default) 12 | # - MIT 13 | # - GPL-2.0-or-later 14 | # - GPL-3.0-only 15 | # - Apache-2.0 16 | # - CC-BY-4.0 17 | license: license (GPL-2.0-or-later, MIT, etc) 18 | 19 | min_ansible_version: 2.1 20 | 21 | # If this a Container Enabled role, provide the minimum Ansible Container version. 22 | # min_ansible_container_version: 23 | 24 | # 25 | # Provide a list of supported platforms, and for each platform a list of versions. 26 | # If you don't wish to enumerate all versions for a particular platform, use 'all'. 27 | # To view available platforms and versions (or releases), visit: 28 | # https://galaxy.ansible.com/api/v1/platforms/ 29 | # 30 | # platforms: 31 | # - name: Fedora 32 | # versions: 33 | # - all 34 | # - 25 35 | # - name: SomePlatform 36 | # versions: 37 | # - all 38 | # - 1.0 39 | # - 7 40 | # - 99.99 41 | 42 | galaxy_tags: [] 43 | # List tags for your role here, one per line. A tag is a keyword that describes 44 | # and categorizes the role. Users find roles by searching for tags. Be sure to 45 | # remove the '[]' above, if you add tags to this list. 46 | # 47 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 48 | # Maximum 20 tags per role. 49 | 50 | dependencies: [] 51 | # List your role dependencies here, one per line. Be sure to remove the '[]' above, 52 | # if you add dependencies to this list. 
53 | -------------------------------------------------------------------------------- /roles/sonarr/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for sonarr 3 | - name: Create folders 4 | ansible.builtin.file: 5 | path: "{{ workdir }}/{{ service }}/config" 6 | recurse: true 7 | state: directory 8 | mode: 0755 9 | 10 | - name: Copy {{ service }} docker-compose jinja template 11 | ansible.builtin.template: 12 | src: templates/docker-compose.yml.j2 13 | dest: "{{ workdir }}/{{ service }}/docker-compose.yml" 14 | mode: 0644 15 | 16 | - name: Deploy {{ service }} if swarm mode is enabled 17 | block: 18 | 19 | - name: Deploy {{ service }} 20 | ansible.builtin.shell: docker stack deploy -c {{ workdir }}/{{ service }}/docker-compose.yml {{ service }} 21 | 22 | - name: Wait until {{ service }} is running 23 | ansible.builtin.shell: docker stack services --filter name={{ service }}_{{ service }} {{ service }} | grep -c 1/1 || /bin/true 24 | register: service_running 25 | until: service_running.stdout == "1" 26 | retries: 40 27 | delay: 20 28 | 29 | when: swarm == true 30 | 31 | - name: Deploy {{ service }} 32 | block: 33 | 34 | - name: Deploy {{ service }} 35 | ansible.builtin.shell: docker compose -f {{ workdir }}/{{ service }}/docker-compose.yml up -d 36 | 37 | - name: Wait until {{ service }} is running 38 | ansible.builtin.shell: docker inspect {{ service }}-{{service }}-1 | jq '.[] | .State.Status' 39 | register: service_running 40 | until: service_running.stdout == '"running"' 41 | retries: 40 42 | delay: 20 43 | 44 | when: swarm == false -------------------------------------------------------------------------------- /roles/sonarr/templates/docker-compose.yml.j2: -------------------------------------------------------------------------------- 1 | # This file is managed by ansible, do not modify 2 | # 3 | version: "3.3" 4 | 5 | services: 6 | 7 | {{ service }}: 8 | image: {{ service_image }}:{{ service_version }} 9 | networks: 10 | - {{ application }}-network 11 | environment: 12 | - PUID=1000 13 | - PGID=1000 14 | - TZ="{{ timezone }}" 15 | volumes: 16 | - "{{ workdir }}/{{ service }}/config:/config" 17 | - "{{ download_dir }}:/downloads" 18 | logging: 19 | driver: "json-file" 20 | options: 21 | max-size: "1024k" 22 | max-file: "4" 23 | {% if not swarm %} 24 | labels: 25 | - APP={{ service }} 26 | - traefik.enable=true 27 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 28 | - traefik.http.routers.{{ service }}.entrypoints=websecure 29 | - traefik.http.routers.{{ service }}.tls=true 30 | {% if letsencrypt %} 31 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 32 | {% endif %} 33 | - traefik.http.routers.{{ service }}.service={{ service }} 34 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 35 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 36 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 37 | {% if loadbalancer %} 38 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 39 | {% endif %} 40 | - traefik.docker.network={{ application }}-network 41 | restart: unless-stopped 42 | {% endif %} 43 | {% if swarm %} 44 | deploy: 45 | labels: 46 | - APP={{ service }} 47 | - traefik.enable=true 48 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 49 | - traefik.http.routers.{{ service }}.entrypoints=websecure 50 | - traefik.http.routers.{{ 
service }}.tls=true 51 | {% if letsencrypt %} 52 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 53 | {% endif %} 54 | - traefik.http.routers.{{ service }}.service={{ service }} 55 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 56 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 57 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 58 | {% if loadbalancer %} 59 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 60 | {% endif %} 61 | - traefik.docker.network={{ application }}-network 62 | # service restart policy 63 | restart_policy: 64 | condition: on-failure 65 | delay: 5s 66 | max_attempts: 10 67 | window: 120s 68 | # service update configuration 69 | update_config: 70 | parallelism: 1 71 | delay: 10s 72 | failure_action: continue 73 | monitor: 60s 74 | max_failure_ratio: 0.3 75 | {% endif %} 76 | 77 | networks: 78 | {{ application }}-network: 79 | external: true -------------------------------------------------------------------------------- /roles/sonarr/tests/inventory: -------------------------------------------------------------------------------- 1 | localhost 2 | 3 | -------------------------------------------------------------------------------- /roles/sonarr/tests/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | remote_user: root 4 | roles: 5 | - sonarr 6 | -------------------------------------------------------------------------------- /roles/sonarr/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # vars file for sonarr 3 | -------------------------------------------------------------------------------- /roles/sqlite-backup/README.md: -------------------------------------------------------------------------------- 1 | Role Name 2 | ========= 3 | 4 | A brief description of the role goes here. 5 | 6 | Requirements 7 | ------------ 8 | 9 | Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required. 10 | 11 | Role Variables 12 | -------------- 13 | 14 | A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well. 15 | 16 | Dependencies 17 | ------------ 18 | 19 | A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles. 20 | 21 | Example Playbook 22 | ---------------- 23 | 24 | Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too: 25 | 26 | - hosts: servers 27 | roles: 28 | - { role: username.rolename, x: 42 } 29 | 30 | License 31 | ------- 32 | 33 | BSD 34 | 35 | Author Information 36 | ------------------ 37 | 38 | An optional section for the role authors to include contact information, or a website (HTML is not allowed). 
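A hypothetical invocation for this repository: the role mounts the whole `workdir` into the container as `/source`, writes dumps to `backupdir` (created by the role and mounted as `/backup`), and passes `interval` through as the container's `INTERVAL` setting:

```yaml
# Hypothetical example, not taken from this repo's deploy playbook.
- hosts: mediaserver
  roles:
    - role: sqlite-backup
      vars:
        workdir: "/docker"    # everything under /docker is exposed to the container as /source
        backupdir: "/backup"  # backup target, mounted as /backup
        interval: "12"        # forwarded as INTERVAL (role default is "24")
        # referenced by the template but not defaulted (placeholder value):
        application: "media"  # joins the external media-network
```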
39 | -------------------------------------------------------------------------------- /roles/sqlite-backup/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # defaults file for sqlite-backup 3 | workdir: "/docker" 4 | timezone: "UTC" 5 | swarm: false 6 | 7 | # service defaults 8 | service: "sqlite-backup" 9 | service_version: "latest" 10 | service_image: "bpmbee/sqlite-backup" 11 | 12 | backupdir: "/backup" 13 | interval: "24" -------------------------------------------------------------------------------- /roles/sqlite-backup/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers file for sqlite-backup 3 | -------------------------------------------------------------------------------- /roles/sqlite-backup/meta/main.yml: -------------------------------------------------------------------------------- 1 | galaxy_info: 2 | author: your name 3 | description: your role description 4 | company: your company (optional) 5 | 6 | # If the issue tracker for your role is not on github, uncomment the 7 | # next line and provide a value 8 | # issue_tracker_url: http://example.com/issue/tracker 9 | 10 | # Choose a valid license ID from https://spdx.org - some suggested licenses: 11 | # - BSD-3-Clause (default) 12 | # - MIT 13 | # - GPL-2.0-or-later 14 | # - GPL-3.0-only 15 | # - Apache-2.0 16 | # - CC-BY-4.0 17 | license: license (GPL-2.0-or-later, MIT, etc) 18 | 19 | min_ansible_version: 2.1 20 | 21 | # If this a Container Enabled role, provide the minimum Ansible Container version. 22 | # min_ansible_container_version: 23 | 24 | galaxy_tags: [] 25 | # List tags for your role here, one per line. A tag is a keyword that describes 26 | # and categorizes the role. Users find roles by searching for tags. Be sure to 27 | # remove the '[]' above, if you add tags to this list. 28 | # 29 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 30 | # Maximum 20 tags per role. 31 | 32 | dependencies: [] 33 | # List your role dependencies here, one per line. Be sure to remove the '[]' above, 34 | # if you add dependencies to this list. 
35 | -------------------------------------------------------------------------------- /roles/sqlite-backup/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for sqlite-backup 3 | - name: Create folders 4 | ansible.builtin.file: 5 | path: "{{ item }}" 6 | recurse: true 7 | state: directory 8 | mode: 0755 9 | loop: 10 | - "{{ workdir }}/{{ service }}/config" 11 | - "{{ backupdir }}" 12 | 13 | - name: Copy {{ service }} docker-compose jinja template 14 | ansible.builtin.template: 15 | src: templates/docker-compose.yml.j2 16 | dest: "{{ workdir }}/{{ service }}/docker-compose.yml" 17 | mode: 0644 18 | 19 | - name: Deploy {{ service }} if swarm mode is enabled 20 | block: 21 | 22 | - name: Deploy {{ service }} 23 | ansible.builtin.shell: docker stack deploy -c {{ workdir }}/{{ service }}/docker-compose.yml {{ service }} 24 | 25 | - name: Wait until {{ service }} is running 26 | ansible.builtin.shell: docker stack services --filter name={{ service }}_{{ service }} {{ service }} | grep -c 1/1 || /bin/true 27 | register: service_running 28 | until: service_running.stdout == "1" 29 | retries: 40 30 | delay: 20 31 | 32 | when: swarm == true 33 | 34 | - name: Deploy {{ service }} 35 | block: 36 | 37 | - name: Deploy {{ service }} 38 | ansible.builtin.shell: docker compose -f {{ workdir }}/{{ service }}/docker-compose.yml up -d 39 | 40 | - name: Wait until {{ service }} is running 41 | ansible.builtin.shell: docker inspect {{ service }}-{{service }}-1 | jq '.[] | .State.Status' 42 | register: service_running 43 | until: service_running.stdout == '"running"' 44 | retries: 40 45 | delay: 20 46 | 47 | when: swarm == false -------------------------------------------------------------------------------- /roles/sqlite-backup/templates/docker-compose.yml.j2: -------------------------------------------------------------------------------- 1 | # This file is managed by ansible, do not modify 2 | # 3 | services: 4 | 5 | {{ service }}: 6 | image: {{ service_image }}:{{ service_version }} 7 | networks: 8 | - {{ application }}-network 9 | environment: 10 | - PUID=1000 11 | - PGID=1000 12 | - SOURCE_DIR=/source 13 | - BACKUP_DIR=/backup 14 | - INTERVAL={{ interval }} 15 | - TZ="{{ timezone }}" 16 | volumes: 17 | - "{{ workdir }}/:/source" 18 | - "{{ backupdir }}/:/backup" 19 | logging: 20 | driver: "json-file" 21 | options: 22 | max-size: "1024k" 23 | max-file: "4" 24 | {% if not swarm %} 25 | labels: 26 | - APP={{ service }} 27 | restart: unless-stopped 28 | {% endif %} 29 | {% if swarm %} 30 | deploy: 31 | labels: 32 | - APP={{ service }} 33 | # service restart policy 34 | restart_policy: 35 | condition: on-failure 36 | delay: 5s 37 | max_attempts: 10 38 | window: 120s 39 | # service update configuration 40 | update_config: 41 | parallelism: 1 42 | delay: 10s 43 | failure_action: continue 44 | monitor: 60s 45 | max_failure_ratio: 0.3 46 | {% endif %} 47 | 48 | networks: 49 | {{ application }}-network: 50 | external: true -------------------------------------------------------------------------------- /roles/sqlite-backup/tests/inventory: -------------------------------------------------------------------------------- 1 | #SPDX-License-Identifier: MIT-0 2 | localhost 3 | 4 | -------------------------------------------------------------------------------- /roles/sqlite-backup/tests/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | remote_user: root 4 | roles: 5 | - 
sqlite-backup 6 | -------------------------------------------------------------------------------- /roles/sqlite-backup/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # vars file for sqlite-backup 3 | -------------------------------------------------------------------------------- /roles/traefik/README.md: -------------------------------------------------------------------------------- 1 | # traefik-swarm 2 | ------------- 3 | 4 | Ansible role to deploy Traefik to docker swarm, can be used for general web-based proxies but also specifically for Linkmatrix and Management. This repo is meant to be used as a Git submodule. 5 | 6 | ## Requirements 7 | ------------ 8 | 9 | - Running docker swarm 10 | - Azure DNS credentials (when using letsencrypt which is default) 11 | 12 | ## Use as Git submodule 13 | 14 | Needs to be in a 'roles' folder: 15 | ``` 16 | /roles/traefik-swarm/$CONTENTS_OF_REPO 17 | /playbook.yml 18 | ``` 19 | 20 | Usage: 21 | ``` 22 | cd roles/ 23 | git submodule add git@bitbucket.org:vislink-engineering/traefik-swarm.git 24 | ``` 25 | 26 | ## Role Variables 27 | ------------- 28 | 29 | `application` 30 | 31 | Required, will be used to specify which docker network to use. For Linkmatrix, you could simply set it to 'linkmatrix' 32 | 33 | `letsencrypt` 34 | 35 | Defaults to `true`, uses Azure DNS challenge and will prompt for secrets 36 | 37 | `loadbalancer` 38 | 39 | Required when used behind a loadbalancer, default is `false` 40 | 41 | `linkmatrix` 42 | 43 | Optional, when set it will also open up port 1883 for MQTT, defaults to `false` 44 | 45 | `management` 46 | 47 | Optional, when set it will also open up port 8847 for WMT Management, defaults to `false` 48 | 49 | `debug` 50 | 51 | Optional, will set Traefik logs to debug, defaults to `false` 52 | 53 | ## Example Playbook 54 | ------------- 55 | 56 | ``` 57 | - hosts: servers 58 | roles: 59 | role: 60 | - traefik-swarm 61 | vars: 62 | - linkmatrix: true 63 | ``` -------------------------------------------------------------------------------- /roles/traefik/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # defaults file for traefik-swarm 3 | workdir: "/docker" 4 | swarm: false 5 | local_host: false 6 | letsencrypt: true 7 | debug: false 8 | loadbalancer: false 9 | forwardauth: false 10 | primary_manager: "localhost" 11 | email_address: "someone@example.com" -------------------------------------------------------------------------------- /roles/traefik/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers file for traefik-swarm 3 | -------------------------------------------------------------------------------- /roles/traefik/meta/main.yml: -------------------------------------------------------------------------------- 1 | galaxy_info: 2 | author: your name 3 | description: your role description 4 | company: your company (optional) 5 | 6 | # If the issue tracker for your role is not on github, uncomment the 7 | # next line and provide a value 8 | # issue_tracker_url: http://example.com/issue/tracker 9 | 10 | # Choose a valid license ID from https://spdx.org - some suggested licenses: 11 | # - BSD-3-Clause (default) 12 | # - MIT 13 | # - GPL-2.0-or-later 14 | # - GPL-3.0-only 15 | # - Apache-2.0 16 | # - CC-BY-4.0 17 | license: license (GPL-2.0-or-later, MIT, etc) 18 | 19 | min_ansible_version: 2.1 20 | 21 | # If this a Container Enabled 
role, provide the minimum Ansible Container version. 22 | # min_ansible_container_version: 23 | 24 | # 25 | # Provide a list of supported platforms, and for each platform a list of versions. 26 | # If you don't wish to enumerate all versions for a particular platform, use 'all'. 27 | # To view available platforms and versions (or releases), visit: 28 | # https://galaxy.ansible.com/api/v1/platforms/ 29 | # 30 | # platforms: 31 | # - name: Fedora 32 | # versions: 33 | # - all 34 | # - 25 35 | # - name: SomePlatform 36 | # versions: 37 | # - all 38 | # - 1.0 39 | # - 7 40 | # - 99.99 41 | 42 | galaxy_tags: [] 43 | # List tags for your role here, one per line. A tag is a keyword that describes 44 | # and categorizes the role. Users find roles by searching for tags. Be sure to 45 | # remove the '[]' above, if you add tags to this list. 46 | # 47 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 48 | # Maximum 20 tags per role. 49 | 50 | dependencies: [] 51 | # List your role dependencies here, one per line. Be sure to remove the '[]' above, 52 | # if you add dependencies to this list. 53 | -------------------------------------------------------------------------------- /roles/traefik/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for traefik-swarm 3 | 4 | - name: Create folders 5 | ansible.builtin.shell: "mkdir -p {{ workdir }}/traefik/letsencrypt" 6 | 7 | - name: Docker Swarm secrets section 8 | block: 9 | 10 | - name: Remove existing docker secrets if they exist 11 | ansible.builtin.shell: docker secret rm {{ item }} 12 | loop: "{{ docker_azure_secret_names }}" 13 | ignore_errors: true 14 | when: provider == "azuredns" 15 | 16 | - name: Create docker secrets 17 | ansible.builtin.shell: echo {{ item.secret_value }} | docker secret create {{ item.name }} - 18 | no_log: true 19 | loop: 20 | - { name: 'azure_client_id', secret_value: '{{ AZURE_CLIENT_ID }}' } 21 | - { name: 'azure_client_secret', secret_value: '{{ AZURE_SECRET }}' } 22 | - { name: 'azure_resource_group', secret_value: '{{ AZURE_RESOURCE_GROUP }}' } 23 | - { name: 'azure_subscription_id', secret_value: '{{ AZURE_SUBSCRIPTION_ID }}' } 24 | - { name: 'azure_tenant_id', secret_value: '{{ AZURE_TENANT }}' } 25 | when: provider == "azuredns" 26 | 27 | - name: Remove existing docker secrets if they exist 28 | ansible.builtin.shell: docker secret rm {{ item }} 29 | loop: "{{ docker_cloudflare_secret_names }}" 30 | ignore_errors: true 31 | when: provider == "cloudflare" 32 | 33 | - name: Create docker secrets 34 | ansible.builtin.shell: echo {{ item.secret_value }} | docker secret create {{ item.name }} - 35 | no_log: false 36 | loop: 37 | - { name: 'cf_api_email', secret_value: '{{ CF_API_EMAIL }}' } 38 | - { name: 'cf_api_key', secret_value: '{{ CF_API_KEY }}' } 39 | - { name: 'cf_dns_api_token', secret_value: '{{ CF_DNS_API_TOKEN }}'} 40 | when: provider == "cloudflare" 41 | 42 | when: letsencrypt == true and swarm == true 43 | 44 | - name: Docker secrets section 45 | block: 46 | 47 | - name: Create the secrets folder and set permissions to root user only 48 | ansible.builtin.file: 49 | path: "{{ workdir }}/docker-secrets" 50 | state: directory 51 | mode: 0700 52 | 53 | - name: Create the azuredns docker secrets files 54 | ansible.builtin.shell: echo "{{ item.secret_value }}" > {{ workdir }}/docker-secrets/{{ item.name }} && chmod 0600 {{ workdir }}/docker-secrets/{{ item.name }} 55 | loop: 56 | - { name: 'azure_client_id', 
secret_value: '{{ AZURE_CLIENT_ID }}' } 57 | - { name: 'azure_client_secret', secret_value: '{{ AZURE_SECRET }}' } 58 | - { name: 'azure_resource_group', secret_value: '{{ AZURE_RESOURCE_GROUP }}' } 59 | - { name: 'azure_subscription_id', secret_value: '{{ AZURE_SUBSCRIPTION_ID }}' } 60 | - { name: 'azure_tenant_id', secret_value: '{{ AZURE_TENANT }}' } 61 | when: provider == "azuredns" 62 | 63 | - name: Create the cloudflare docker secrets files 64 | ansible.builtin.shell: echo "{{ item.secret_value }}" > {{ workdir }}/docker-secrets/{{ item.name }} && chmod 0600 {{ workdir }}/docker-secrets/{{ item.name }} 65 | loop: 66 | - { name: 'cf_api_email', secret_value: '{{ CF_API_EMAIL }}' } 67 | - { name: 'cf_api_key', secret_value: '{{ CF_API_KEY }}' } 68 | - { name: 'cf_dns_api_token', secret_value: '{{ CF_DNS_API_TOKEN }}'} 69 | when: provider == "cloudflare" 70 | 71 | when: letsencrypt == true and swarm == false 72 | 73 | - name: Deploy certificates section 74 | block: 75 | 76 | - name: Copy certficate key 77 | no_log: True 78 | ansible.builtin.shell: echo "{{ cert_key }}" >> {{ workdir }}/traefik/certificate.key && chmod 0600 {{ workdir }}/traefik/certificate.key 79 | 80 | - name: Copy certificate pem 81 | no_log: True 82 | ansible.builtin.shell: echo "{{ cert_pem }}" >> {{ workdir }}/traefik/certificate.pem && chmod 0600 {{ workdir }}/traefik/certificate.pem 83 | 84 | when: letsencrypt == false 85 | 86 | 87 | - name: Copy traefik_dynamic.toml file 88 | ansible.builtin.template: 89 | src: templates/traefik_dynamic.toml.j2 90 | dest: "{{ workdir }}/traefik/traefik_dynamic.toml" 91 | mode: 0644 92 | when: letsencrypt == false or local_host == true 93 | 94 | - name: Check if docker network exists 95 | ansible.builtin.shell: docker network ls | grep -c "{{ application }}-network" || /bin/true 96 | register: docker_network_exists 97 | 98 | - name: Create docker swarm networks 99 | block: 100 | 101 | - name: Get IP address in use by this node 102 | ansible.builtin.shell: docker info | awk '/Node Address/{print $3}' 103 | register: node_address 104 | when: local_host is false and inventory_hostname == primary_manager 105 | 106 | - name: Use the node IP address to get the physical interface 107 | ansible.builtin.shell: ip -br -4 a sh | awk '/{{ node_address.stdout }}/{print $1}' 108 | register: node_interface 109 | when: local_host is false and inventory_hostname == primary_manager 110 | 111 | - name: Get the interface mtu (necessary on some cloud servers) 112 | ansible.builtin.shell: cat /sys/class/net/{{ node_interface.stdout }}/mtu 113 | register: node_interface_mtu 114 | when: local_host is false and inventory_hostname == primary_manager 115 | 116 | - name: Create docker network if it does not exist 117 | ansible.builtin.shell: docker network create --scope swarm --driver overlay --opt com.docker.network.driver.mtu={{ node_interface_mtu.stdout }} --attachable --subnet 172.31.255.0/24 {{ application }}-network 118 | when: local_host is false and inventory_hostname == primary_manager 119 | 120 | - name: Create docker network if it does not exist (local-dev) 121 | ansible.builtin.shell: docker network create --scope swarm --driver overlay --attachable --subnet 172.31.255.0/24 {{ application }}-network 122 | when: local_host is true 123 | 124 | when: docker_network_exists.stdout == "0" and swarm == true 125 | 126 | - name: Create docker networks 127 | block: 128 | 129 | - name: Create docker network for traefik 130 | ansible.builtin.shell: docker network create {{ application }}-network 131 | 132 
| when: docker_network_exists.stdout == "0" and swarm == false 133 | 134 | - name: Copy traefik docker-compose jinja template 135 | ansible.builtin.template: 136 | src: templates/docker-compose.yml.j2 137 | dest: "{{ workdir }}/traefik/docker-compose.yml" 138 | mode: 0644 139 | 140 | - name: Deploy in swarm mode 141 | block: 142 | 143 | - name: Deploy traefik swarm 144 | ansible.builtin.shell: docker stack deploy -c {{ workdir }}/traefik/docker-compose.yml traefik 145 | 146 | - name: Wait until traefik is running 147 | ansible.builtin.shell: docker stack services --filter name=traefik_traefik traefik | grep -c 1/1 || /bin/true 148 | register: traefik_running 149 | until: traefik_running.stdout == "1" 150 | retries: 40 151 | delay: 20 152 | 153 | when: swarm == true 154 | 155 | - name: Deploy in docker-compose 156 | block: 157 | 158 | - name: Deploy traefik 159 | ansible.builtin.shell: docker compose -f {{ workdir }}/traefik/docker-compose.yml up -d 160 | 161 | - name: Wait until traefik is running 162 | ansible.builtin.shell: docker inspect traefik-traefik-1 | jq '.[] | .State.Status' 163 | register: service_running 164 | until: service_running.stdout == '"running"' 165 | retries: 40 166 | delay: 10 167 | 168 | when: swarm == false -------------------------------------------------------------------------------- /roles/traefik/templates/docker-compose.yml.j2: -------------------------------------------------------------------------------- 1 | # This file is managed by ansible, do not modify 2 | # 3 | version: "3.3" 4 | 5 | {% if letsencrypt and provider == "azuredns" and swarm %} 6 | secrets: 7 | azure_client_id: 8 | external: true 9 | azure_client_secret: 10 | external: true 11 | azure_resource_group: 12 | external: true 13 | azure_subscription_id: 14 | external: true 15 | azure_tenant_id: 16 | external: true 17 | {% elif letsencrypt and provider == "cloudflare" and swarm %} 18 | secrets: 19 | cf_api_email: 20 | external: true 21 | cf_api_key: 22 | external: true 23 | cf_dns_api_token: 24 | external: true 25 | {% endif %} 26 | 27 | services: 28 | 29 | traefik: 30 | image: traefik:{{ traefik_version | default("latest", true ) }} 31 | command: 32 | - --providers.docker=true 33 | {% if swarm %} 34 | - --providers.swarm.endpoint=unix:///var/run/docker.sock 35 | {% endif %} 36 | - --providers.docker.exposedbydefault=false 37 | - --providers.docker.network={{ application }}-network 38 | - --entrypoints.web.address=:80 39 | - --entrypoints.web.http.redirections.entryPoint.to=websecure 40 | - --entrypoints.web.http.redirections.entryPoint.scheme=https 41 | - --entrypoints.websecure.address=:443 42 | {% if (not letsencrypt) or (local_host) %} 43 | - --providers.file.filename=/etc/traefik/traefik_dynamic.toml 44 | - --providers.file.watch=true 45 | {% endif %} 46 | {% if debug %} 47 | - --log.level=DEBUG 48 | {% endif %} 49 | {% if letsencrypt %} 50 | - --certificatesresolvers.le_resolver.acme.dnschallenge=true 51 | - --certificatesresolvers.le_resolver.acme.dnschallenge.provider={{ provider }} 52 | {% if provider == "azuredns" %} 53 | - --certificatesresolvers.le_resolver.acme.email={{ email_address }} 54 | {% elif provider == "cloudflare" %} 55 | - --certificatesresolvers.le_resolver.acme.dnschallenge.resolvers=1.1.1.1:53,1.0.0.1:53 56 | - --certificatesresolvers.le_resolver.acme.email={{ CF_API_EMAIL }} 57 | {% endif %} 58 | - --certificatesresolvers.le_resolver.acme.storage=/letsencrypt/acme.json 59 | - --entrypoints.websecure.http.tls.certResolver=le_resolver 60 | - 
--entrypoints.websecure.http.tls.domains[0].main={{ domain }} 61 | - --entrypoints.websecure.http.tls.domains[0].sans=*.{{ domain }} 62 | {% endif %} 63 | - --accesslog 64 | - --api.dashboard=true 65 | {% if loadbalancer %} 66 | - --entryPoints.websecure.proxyProtocol.insecure 67 | - --entryPoints.websecure.forwardedHeaders.insecure 68 | {% endif %} 69 | networks: 70 | - {{ application }}-network 71 | ports: 72 | {% if not forwardauth %} 73 | - 80:80/tcp 74 | - 443:443/tcp 75 | {% endif %} 76 | {% if forwardauth %} 77 | - target: 80 78 | published: 80 79 | mode: host 80 | - target: 443 81 | published: 443 82 | mode: host 83 | {% endif %} 84 | volumes: 85 | {% if not letsencrypt %} 86 | - "{{ workdir }}/traefik/certificate.key:/certificate.key" 87 | - "{{ workdir }}/traefik/certificate.pem:/certificate.pem" 88 | {% endif %} 89 | {% if (not letsencrypt) or (local_host) %} 90 | - "{{ workdir }}/traefik/traefik_dynamic.toml:/etc/traefik/traefik_dynamic.toml" 91 | {% endif %} 92 | {% if letsencrypt %} 93 | - "{{ workdir }}/traefik/letsencrypt:/letsencrypt" 94 | {% endif %} 95 | - /var/run/docker.sock:/var/run/docker.sock 96 | {% if letsencrypt and provider == "azuredns" %} 97 | environment: 98 | - AZURE_CLIENT_ID_FILE=/run/secrets/azure_client_id 99 | - AZURE_CLIENT_SECRET_FILE=/run/secrets/azure_client_secret 100 | - AZURE_SUBSCRIPTION_ID_FILE=/run/secrets/azure_subscription_id 101 | - AZURE_TENANT_ID_FILE=/run/secrets/azure_tenant_id 102 | - AZURE_RESOURCE_GROUP_FILE=/run/secrets/azure_resource_group 103 | secrets: 104 | - azure_client_id 105 | - azure_client_secret 106 | - azure_resource_group 107 | - azure_subscription_id 108 | - azure_tenant_id 109 | {% endif %} 110 | {% if letsencrypt and provider == "cloudflare" %} 111 | environment: 112 | - CF_API_EMAIL_FILE=/run/secrets/cf_api_email 113 | - CF_API_KEY_FILE=/run/secrets/cf_api_key 114 | - CF_DNS_API_TOKEN_FILE=/run/secrets/cf_dns_api_token 115 | secrets: 116 | - cf_api_email 117 | - cf_api_key 118 | - cf_dns_api_token 119 | {% endif %} 120 | extra_hosts: 121 | - host.docker.internal:{{ ansible_docker0.ipv4.address }} 122 | logging: 123 | driver: "json-file" 124 | options: 125 | max-size: "1024k" 126 | max-file: "4" 127 | {% if swarm %} 128 | deploy: 129 | labels: 130 | - APP=traefik 131 | # service restart policy 132 | restart_policy: 133 | condition: on-failure 134 | delay: 5s 135 | max_attempts: 10 136 | window: 120s 137 | # service update configuration 138 | update_config: 139 | parallelism: 1 140 | delay: 10s 141 | failure_action: continue 142 | monitor: 60s 143 | max_failure_ratio: 0.3 144 | # placement constraint - in this case on 'manager' nodes only 145 | placement: 146 | constraints: [node.role == manager] 147 | {% else %} 148 | restart: unless-stopped 149 | {% endif %} 150 | 151 | networks: 152 | {{ application }}-network: 153 | external: true 154 | 155 | {% if not letsencrypt %} 156 | configs: 157 | traefik-config: 158 | file: /docker/traefik_dynamic.toml 159 | {% endif %} 160 | 161 | {% if (not swarm) and (letsencrypt) %} 162 | secrets: 163 | {% if provider == "azuredns" %} 164 | azure_client_id: 165 | file: "{{ workdir }}/docker-secrets/azure_client_id" 166 | azure_client_secret: 167 | file: "{{ workdir }}/docker-secrets/azure_client_secret" 168 | azure_resource_group: 169 | file: "{{ workdir }}/docker-secrets/azure_resource_group" 170 | azure_subscription_id: 171 | file: "{{ workdir }}/docker-secrets/azure_subscription_id" 172 | azure_tenant_id: 173 | file: "{{ workdir }}/docker-secrets/azure_tenant_id" 174 | {% endif %} 
175 | {% if provider == "cloudflare" %} 176 | cf_api_email: 177 | file: "{{ workdir }}/docker-secrets/cf_api_email" 178 | cf_api_key: 179 | file: "{{ workdir }}/docker-secrets/cf_api_key" 180 | cf_dns_api_token: 181 | file: "{{ workdir }}/docker-secrets/cf_dns_api_token" 182 | {% endif %} 183 | {% endif %} -------------------------------------------------------------------------------- /roles/traefik/templates/traefik_dynamic.toml.j2: -------------------------------------------------------------------------------- 1 | {% if not letsencrypt %} 2 | [tls.stores] 3 | [tls.stores.default] 4 | [tls.stores.default.defaultCertificate] 5 | certFile = "/certificate.pem" 6 | keyFile = "/certificate.key" 7 | 8 | [[tls.certificates]] 9 | certFile = "/certificate.pem" 10 | keyFile = "/certificate.key" 11 | stores = ["default"] 12 | {% endif %} 13 | 14 | {% if local_host %} 15 | [http.serversTransports.external] 16 | insecureSkipVerify = true 17 | 18 | [http.routers] 19 | {% for service in local_hosts %} 20 | [http.routers.{{ service.name }}] 21 | rule="Host(`{{ service.name }}.{{ domain }}`)" 22 | entrypoints="websecure" 23 | service="{{ service.name }}" 24 | {% if letsencrypt %} 25 | [http.routers.{{ service.name }}.tls] 26 | certresolver="le_resolver" 27 | {% else %} 28 | tls=true 29 | {% endif %} 30 | 31 | [http.services.{{ service.name }}] 32 | [http.services.{{ service.name }}.loadbalancer.sticky.cookie] 33 | secure=true 34 | {% if service.scheme == "https" %} 35 | [http.services.{{ service.name }}.loadbalancer] 36 | serversTransport = "external" 37 | {% endif %} 38 | [[http.services.{{ service.name }}.loadbalancer.servers]] 39 | url="{{ service.scheme }}://{{ service.address }}:{{ service.port }}" 40 | {% endfor %} 41 | {% endif %} -------------------------------------------------------------------------------- /roles/traefik/tests/inventory: -------------------------------------------------------------------------------- 1 | localhost 2 | 3 | -------------------------------------------------------------------------------- /roles/traefik/tests/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | remote_user: root 4 | roles: 5 | - traefik 6 | -------------------------------------------------------------------------------- /roles/traefik/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # vars file for traefik 3 | docker_azure_secret_names: 4 | - azure_client_id 5 | - azure_client_secret 6 | - azure_resource_group 7 | - azure_subscription_id 8 | - azure_tenant_id 9 | 10 | docker_cloudflare_secret_names: 11 | - cf_api_email 12 | - cf_api_key 13 | - cf_dns_api_token -------------------------------------------------------------------------------- /roles/wg-easy/README.md: -------------------------------------------------------------------------------- 1 | Role Name 2 | ========= 3 | 4 | A brief description of the role goes here. 5 | 6 | Requirements 7 | ------------ 8 | 9 | Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required.
10 | 11 | Role Variables 12 | -------------- 13 | 14 | A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well. 15 | 16 | Dependencies 17 | ------------ 18 | 19 | A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles. 20 | 21 | Example Playbook 22 | ---------------- 23 | 24 | Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too: 25 | 26 | - hosts: servers 27 | roles: 28 | - { role: username.rolename, x: 42 } 29 | 30 | License 31 | ------- 32 | 33 | BSD 34 | 35 | Author Information 36 | ------------------ 37 | 38 | An optional section for the role authors to include contact information, or a website (HTML is not allowed). 39 | -------------------------------------------------------------------------------- /roles/wg-easy/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # defaults file for wg-easy 3 | workdir: "/docker" 4 | timezone: "UTC" 5 | loadbalancer: false 6 | swarm: false 7 | 8 | # service defaults 9 | service: "wireguard" 10 | service_version: "latest" 11 | service_port: "51821" 12 | service_image: "weejewel/wg-easy" 13 | 14 | wg_host: "example.com" 15 | wg_pass: "pleasechangeme" 16 | wg_port: "51820" 17 | wg_default_address: "10.8.0.x" 18 | wg_default_dns: "1.1.1.1" 19 | wg_allowed_ips: "10.8.0.0/24, 192.168.1.0/24" -------------------------------------------------------------------------------- /roles/wg-easy/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers file for wg-easy 3 | -------------------------------------------------------------------------------- /roles/wg-easy/meta/main.yml: -------------------------------------------------------------------------------- 1 | galaxy_info: 2 | author: your name 3 | description: your role description 4 | company: your company (optional) 5 | 6 | # If the issue tracker for your role is not on github, uncomment the 7 | # next line and provide a value 8 | # issue_tracker_url: http://example.com/issue/tracker 9 | 10 | # Choose a valid license ID from https://spdx.org - some suggested licenses: 11 | # - BSD-3-Clause (default) 12 | # - MIT 13 | # - GPL-2.0-or-later 14 | # - GPL-3.0-only 15 | # - Apache-2.0 16 | # - CC-BY-4.0 17 | license: license (GPL-2.0-or-later, MIT, etc) 18 | 19 | min_ansible_version: 2.1 20 | 21 | # If this a Container Enabled role, provide the minimum Ansible Container version. 22 | # min_ansible_container_version: 23 | 24 | # 25 | # Provide a list of supported platforms, and for each platform a list of versions. 26 | # If you don't wish to enumerate all versions for a particular platform, use 'all'. 27 | # To view available platforms and versions (or releases), visit: 28 | # https://galaxy.ansible.com/api/v1/platforms/ 29 | # 30 | # platforms: 31 | # - name: Fedora 32 | # versions: 33 | # - all 34 | # - 25 35 | # - name: SomePlatform 36 | # versions: 37 | # - all 38 | # - 1.0 39 | # - 7 40 | # - 99.99 41 | 42 | galaxy_tags: [] 43 | # List tags for your role here, one per line. 
A tag is a keyword that describes 44 | # and categorizes the role. Users find roles by searching for tags. Be sure to 45 | # remove the '[]' above, if you add tags to this list. 46 | # 47 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 48 | # Maximum 20 tags per role. 49 | 50 | dependencies: [] 51 | # List your role dependencies here, one per line. Be sure to remove the '[]' above, 52 | # if you add dependencies to this list. 53 | -------------------------------------------------------------------------------- /roles/wg-easy/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for wg-easy 3 | - name: Create folders 4 | ansible.builtin.file: 5 | path: "{{ workdir }}/{{ service }}/config" 6 | recurse: true 7 | state: directory 8 | mode: 0755 9 | 10 | - name: Make sure ip_tables is ready to go 11 | ansible.builtin.shell: modprobe ip_tables 12 | 13 | - name: Copy {{ service }} docker-compose jinja template 14 | ansible.builtin.template: 15 | src: templates/docker-compose.yml.j2 16 | dest: "{{ workdir }}/{{ service }}/docker-compose.yml" 17 | mode: 0644 18 | 19 | - name: Deploy {{ service }} if swarm mode is enabled 20 | block: 21 | 22 | - name: Deploy {{ service }} 23 | ansible.builtin.shell: docker stack deploy -c {{ workdir }}/{{ service }}/docker-compose.yml {{ service }} 24 | 25 | - name: Wait until {{ service }} is running 26 | ansible.builtin.shell: docker stack services --filter name={{ service }}_{{ service }} {{ service }} | grep -c 1/1 || /bin/true 27 | register: service_running 28 | until: service_running.stdout == "1" 29 | retries: 40 30 | delay: 20 31 | 32 | when: swarm == true 33 | 34 | - name: Deploy {{ service }} 35 | block: 36 | 37 | - name: Deploy {{ service }} 38 | ansible.builtin.shell: docker compose -f {{ workdir }}/{{ service }}/docker-compose.yml up -d 39 | 40 | - name: Wait until {{ service }} is running 41 | ansible.builtin.shell: docker inspect {{ service }}-{{service }}-1 | jq '.[] | .State.Status' 42 | register: service_running 43 | until: service_running.stdout == '"running"' 44 | retries: 40 45 | delay: 20 46 | 47 | when: swarm == false -------------------------------------------------------------------------------- /roles/wg-easy/templates/docker-compose.yml.j2: -------------------------------------------------------------------------------- 1 | # This file is managed by ansible, do not modify 2 | # 3 | version: "3.3" 4 | 5 | services: 6 | 7 | {{ service }}: 8 | image: {{ service_image }}:{{ service_version }} 9 | networks: 10 | - {{ application }}-network 11 | environment: 12 | - WG_HOST={{ wg_host }} 13 | - PASSWORD={{ wg_pass }} 14 | - WG_PORT={{ wg_port }} 15 | - WG_DEFAULT_ADDRESS={{ wg_default_address }} 16 | - WG_DEFAULT_DNS={{ wg_default_dns }} 17 | - WG_ALLOWED_IPS={{ wg_allowed_ips }} 18 | - TZ="{{ timezone }}" 19 | volumes: 20 | - "{{ workdir }}/{{ service }}/config:/etc/wireguard" 21 | logging: 22 | driver: "json-file" 23 | options: 24 | max-size: "1024k" 25 | max-file: "4" 26 | {% if not swarm %} 27 | labels: 28 | - APP={{ service }} 29 | - traefik.enable=true 30 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 31 | - traefik.http.routers.{{ service }}.entrypoints=websecure 32 | - traefik.http.routers.{{ service }}.tls=true 33 | {% if letsencrypt %} 34 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 35 | {% endif %} 36 | - traefik.http.routers.{{ service }}.service={{ service }} 37 | - 
traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 38 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 39 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 40 | {% if loadbalancer %} 41 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 42 | {% endif %} 43 | - traefik.docker.network={{ application }}-network 44 | ports: 45 | - {{ wg_port }}:{{ wg_port }}/udp 46 | cap_add: 47 | - NET_ADMIN 48 | - SYS_MODULE 49 | sysctls: 50 | - net.ipv4.ip_forward=1 51 | - net.ipv4.conf.all.src_valid_mark=1 52 | restart: unless-stopped 53 | {% endif %} 54 | {% if swarm %} 55 | ports: 56 | - 51820:51820/udp 57 | deploy: 58 | labels: 59 | - APP={{ service }} 60 | - traefik.enable=true 61 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 62 | - traefik.http.routers.{{ service }}.entrypoints=websecure 63 | - traefik.http.routers.{{ service }}.tls=true 64 | {% if letsencrypt %} 65 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 66 | {% endif %} 67 | - traefik.http.routers.{{ service }}.service={{ service }} 68 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 69 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 70 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 71 | {% if loadbalancer %} 72 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 73 | {% endif %} 74 | - traefik.docker.network={{ application }}-network 75 | # service restart policy 76 | restart_policy: 77 | condition: on-failure 78 | delay: 5s 79 | max_attempts: 10 80 | window: 120s 81 | # service update configuration 82 | update_config: 83 | parallelism: 1 84 | delay: 10s 85 | failure_action: continue 86 | monitor: 60s 87 | max_failure_ratio: 0.3 88 | {% endif %} 89 | 90 | networks: 91 | {{ application }}-network: 92 | external: true -------------------------------------------------------------------------------- /roles/wg-easy/tests/inventory: -------------------------------------------------------------------------------- 1 | localhost 2 | 3 | -------------------------------------------------------------------------------- /roles/wg-easy/tests/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | remote_user: root 4 | roles: 5 | - wg-easy 6 | -------------------------------------------------------------------------------- /roles/wg-easy/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # vars file for wg-easy 3 | -------------------------------------------------------------------------------- /roles/whisper/README.md: -------------------------------------------------------------------------------- 1 | Role Name 2 | ========= 3 | 4 | A brief description of the role goes here. 5 | 6 | Requirements 7 | ------------ 8 | 9 | Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required. 10 | 11 | Role Variables 12 | -------------- 13 | 14 | A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. 
Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well. 15 | 16 | Dependencies 17 | ------------ 18 | 19 | A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles. 20 | 21 | Example Playbook 22 | ---------------- 23 | 24 | Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too: 25 | 26 | - hosts: servers 27 | roles: 28 | - { role: username.rolename, x: 42 } 29 | 30 | License 31 | ------- 32 | 33 | BSD 34 | 35 | Author Information 36 | ------------------ 37 | 38 | An optional section for the role authors to include contact information, or a website (HTML is not allowed). 39 | -------------------------------------------------------------------------------- /roles/whisper/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # defaults file for whisper 3 | workdir: "/docker" 4 | timezone: "UTC" 5 | loadbalancer: false 6 | swarm: false 7 | 8 | # service defaults 9 | service: "whisper" 10 | service_version: "latest" 11 | service_port: "10300" 12 | service_image: "lscr.io/linuxserver/faster-whisper" 13 | 14 | whisper_language: "en" 15 | whisper_model: "tiny-int8" 16 | whisper_beam: 1 -------------------------------------------------------------------------------- /roles/whisper/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers file for whisper 3 | -------------------------------------------------------------------------------- /roles/whisper/meta/main.yml: -------------------------------------------------------------------------------- 1 | galaxy_info: 2 | author: your name 3 | description: your role description 4 | company: your company (optional) 5 | 6 | # If the issue tracker for your role is not on github, uncomment the 7 | # next line and provide a value 8 | # issue_tracker_url: http://example.com/issue/tracker 9 | 10 | # Choose a valid license ID from https://spdx.org - some suggested licenses: 11 | # - BSD-3-Clause (default) 12 | # - MIT 13 | # - GPL-2.0-or-later 14 | # - GPL-3.0-only 15 | # - Apache-2.0 16 | # - CC-BY-4.0 17 | license: license (GPL-2.0-or-later, MIT, etc) 18 | 19 | min_ansible_version: 2.1 20 | 21 | # If this a Container Enabled role, provide the minimum Ansible Container version. 22 | # min_ansible_container_version: 23 | 24 | galaxy_tags: [] 25 | # List tags for your role here, one per line. A tag is a keyword that describes 26 | # and categorizes the role. Users find roles by searching for tags. Be sure to 27 | # remove the '[]' above, if you add tags to this list. 28 | # 29 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 30 | # Maximum 20 tags per role. 31 | 32 | dependencies: [] 33 | # List your role dependencies here, one per line. Be sure to remove the '[]' above, 34 | # if you add dependencies to this list. 
35 | -------------------------------------------------------------------------------- /roles/whisper/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for whisper 3 | - name: Create folders 4 | ansible.builtin.file: 5 | path: "{{ workdir }}/{{ service }}/config" 6 | recurse: true 7 | state: directory 8 | mode: 0755 9 | 10 | - name: Copy {{ service }} docker-compose jinja template 11 | ansible.builtin.template: 12 | src: templates/docker-compose.yml.j2 13 | dest: "{{ workdir }}/{{ service }}/docker-compose.yml" 14 | mode: 0644 15 | 16 | - name: Deploy {{ service }} if swarm mode is enabled 17 | block: 18 | 19 | - name: Deploy {{ service }} 20 | ansible.builtin.shell: docker stack deploy -c {{ workdir }}/{{ service }}/docker-compose.yml {{ service }} 21 | 22 | - name: Wait until {{ service }} is running 23 | ansible.builtin.shell: docker stack services --filter name={{ service }}_{{ service }} {{ service }} | grep -c 1/1 || /bin/true 24 | register: service_running 25 | until: service_running.stdout == "1" 26 | retries: 40 27 | delay: 20 28 | 29 | when: swarm == true 30 | 31 | - name: Deploy {{ service }} 32 | block: 33 | 34 | - name: Deploy {{ service }} 35 | ansible.builtin.shell: docker compose -f {{ workdir }}/{{ service }}/docker-compose.yml up -d 36 | 37 | - name: Wait until {{ service }} is running 38 | ansible.builtin.shell: docker inspect {{ service }}-{{service }}-1 | jq '.[] | .State.Status' 39 | register: service_running 40 | until: service_running.stdout == '"running"' 41 | retries: 40 42 | delay: 20 43 | 44 | when: swarm == false -------------------------------------------------------------------------------- /roles/whisper/templates/docker-compose.yml.j2: -------------------------------------------------------------------------------- 1 | # This file is managed by ansible, do not modify 2 | # 3 | services: 4 | 5 | {{ service }}: 6 | image: {{ service_image }}:{{ service_version }} 7 | networks: 8 | - {{ application }}-network 9 | environment: 10 | - PUID=1000 11 | - PGID=1000 12 | - WHISPER_MODEL={{ whisper_model }} 13 | - WHISPER_BEAM={{ whisper_beam }} 14 | - WHISPER_LANG={{ whisper_language }} 15 | - TZ="{{ timezone }}" 16 | volumes: 17 | - "{{ workdir }}/{{ service }}/config:/config" 18 | logging: 19 | driver: "json-file" 20 | options: 21 | max-size: "1024k" 22 | max-file: "4" 23 | {% if not swarm %} 24 | labels: 25 | - APP={{ service }} 26 | - traefik.enable=true 27 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 28 | - traefik.http.routers.{{ service }}.entrypoints=websecure 29 | - traefik.http.routers.{{ service }}.tls=true 30 | {% if letsencrypt %} 31 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 32 | {% endif %} 33 | - traefik.http.routers.{{ service }}.service={{ service }} 34 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 35 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 36 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 37 | {% if loadbalancer %} 38 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 39 | {% endif %} 40 | - traefik.docker.network={{ application }}-network 41 | restart: unless-stopped 42 | {% endif %} 43 | {% if swarm %} 44 | deploy: 45 | labels: 46 | - APP={{ service }} 47 | - traefik.enable=true 48 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 49 | - traefik.http.routers.{{ service 
}}.entrypoints=websecure 50 | - traefik.http.routers.{{ service }}.tls=true 51 | {% if letsencrypt %} 52 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 53 | {% endif %} 54 | - traefik.http.routers.{{ service }}.service={{ service }} 55 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 56 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 57 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 58 | {% if loadbalancer %} 59 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 60 | {% endif %} 61 | - traefik.docker.network={{ application }}-network 62 | # service restart policy 63 | restart_policy: 64 | condition: on-failure 65 | delay: 5s 66 | max_attempts: 10 67 | window: 120s 68 | # service update configuration 69 | update_config: 70 | parallelism: 1 71 | delay: 10s 72 | failure_action: continue 73 | monitor: 60s 74 | max_failure_ratio: 0.3 75 | {% endif %} 76 | 77 | networks: 78 | {{ application }}-network: 79 | external: true -------------------------------------------------------------------------------- /roles/whisper/tests/inventory: -------------------------------------------------------------------------------- 1 | #SPDX-License-Identifier: MIT-0 2 | localhost 3 | 4 | -------------------------------------------------------------------------------- /roles/whisper/tests/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | remote_user: root 4 | roles: 5 | - whisper 6 | -------------------------------------------------------------------------------- /roles/whisper/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # vars file for whisper 3 | -------------------------------------------------------------------------------- /roles/zabbix/README.md: -------------------------------------------------------------------------------- 1 | Role Name 2 | ========= 3 | 4 | A brief description of the role goes here. 5 | 6 | Requirements 7 | ------------ 8 | 9 | Any pre-requisites that may not be covered by Ansible itself or the role should be mentioned here. For instance, if the role uses the EC2 module, it may be a good idea to mention in this section that the boto package is required. 10 | 11 | Role Variables 12 | -------------- 13 | 14 | A description of the settable variables for this role should go here, including any variables that are in defaults/main.yml, vars/main.yml, and any variables that can/should be set via parameters to the role. Any variables that are read from other roles and/or the global scope (ie. hostvars, group vars, etc.) should be mentioned here as well. 15 | 16 | Dependencies 17 | ------------ 18 | 19 | A list of other roles hosted on Galaxy should go here, plus any details in regards to parameters that may need to be set for other roles, or variables that are used from other roles. 20 | 21 | Example Playbook 22 | ---------------- 23 | 24 | Including an example of how to use your role (for instance, with variables passed in as parameters) is always nice for users too: 25 | 26 | - hosts: servers 27 | roles: 28 | - { role: username.rolename, x: 42 } 29 | 30 | License 31 | ------- 32 | 33 | BSD 34 | 35 | Author Information 36 | ------------------ 37 | 38 | An optional section for the role authors to include contact information, or a website (HTML is not allowed). 
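Concrete Example
----------------

For this collection specifically, a minimal invocation of the zabbix role might look like the sketch below. The host group name and the password value are placeholders only; timezone and postgres_password come from this role's defaults/main.yml, and collection-wide variables such as application, domain, letsencrypt and swarm are assumed to be set the same way they are for the other roles.

    - hosts: monitoring
      roles:
        - role: zabbix
          vars:
            timezone: "Europe/Amsterdam"
            postgres_password: "change-me"
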
39 | -------------------------------------------------------------------------------- /roles/zabbix/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # service defaults 3 | workdir: "/docker" 4 | timezone: "UTC" 5 | loadbalancer: false 6 | swarm: false 7 | 8 | service: "zabbix" 9 | service_version: "7.0-alpine-latest" 10 | service_port: "8080" 11 | 12 | postgres_version: "15" 13 | postgres_user: "zabbix" 14 | postgres_password: "zabbix" 15 | -------------------------------------------------------------------------------- /roles/zabbix/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # handlers file for zabbix 3 | -------------------------------------------------------------------------------- /roles/zabbix/meta/main.yml: -------------------------------------------------------------------------------- 1 | galaxy_info: 2 | author: your name 3 | description: your role description 4 | company: your company (optional) 5 | 6 | # If the issue tracker for your role is not on github, uncomment the 7 | # next line and provide a value 8 | # issue_tracker_url: http://example.com/issue/tracker 9 | 10 | # Choose a valid license ID from https://spdx.org - some suggested licenses: 11 | # - BSD-3-Clause (default) 12 | # - MIT 13 | # - GPL-2.0-or-later 14 | # - GPL-3.0-only 15 | # - Apache-2.0 16 | # - CC-BY-4.0 17 | license: license (GPL-2.0-or-later, MIT, etc) 18 | 19 | min_ansible_version: 2.1 20 | 21 | # If this a Container Enabled role, provide the minimum Ansible Container version. 22 | # min_ansible_container_version: 23 | 24 | galaxy_tags: [] 25 | # List tags for your role here, one per line. A tag is a keyword that describes 26 | # and categorizes the role. Users find roles by searching for tags. Be sure to 27 | # remove the '[]' above, if you add tags to this list. 28 | # 29 | # NOTE: A tag is limited to a single word comprised of alphanumeric characters. 30 | # Maximum 20 tags per role. 31 | 32 | dependencies: [] 33 | # List your role dependencies here, one per line. Be sure to remove the '[]' above, 34 | # if you add dependencies to this list. 
35 | -------------------------------------------------------------------------------- /roles/zabbix/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # tasks file for zabbix 3 | - name: Create folders 4 | ansible.builtin.file: 5 | path: "{{ workdir }}/{{ service }}" 6 | recurse: true 7 | state: directory 8 | mode: 0755 9 | 10 | - name: Copy {{ service }} docker-compose jinja template 11 | ansible.builtin.template: 12 | src: templates/docker-compose.yml.j2 13 | dest: "{{ workdir }}/{{ service }}/docker-compose.yml" 14 | mode: 0644 15 | 16 | - name: Deploy {{ service }} if swarm mode is enabled 17 | block: 18 | 19 | - name: Deploy {{ service }} 20 | ansible.builtin.shell: docker stack deploy -c {{ workdir }}/{{ service }}/docker-compose.yml {{ service }} 21 | 22 | - name: Wait until {{ service }} is running 23 | ansible.builtin.shell: docker stack services --filter name={{ service }}_{{ service }}-frontend {{ service }} | grep -c 1/1 || /bin/true 24 | register: service_running 25 | until: service_running.stdout == "1" 26 | retries: 40 27 | delay: 20 28 | 29 | when: swarm == true 30 | 31 | - name: Deploy {{ service }} 32 | block: 33 | 34 | - name: Deploy {{ service }} 35 | ansible.builtin.shell: docker compose -f {{ workdir }}/{{ service }}/docker-compose.yml up -d 36 | 37 | - name: Wait until {{ service }} is running 38 | ansible.builtin.shell: docker inspect {{ service }}-{{ service }}-frontend-1 | jq '.[] | .State.Status' 39 | register: service_running 40 | until: service_running.stdout == '"running"' 41 | retries: 40 42 | delay: 20 43 | 44 | when: swarm == false 45 | -------------------------------------------------------------------------------- /roles/zabbix/templates/docker-compose.yml.j2: -------------------------------------------------------------------------------- 1 | services: 2 | 3 | zabbix-server: 4 | image: zabbix/zabbix-server-pgsql:{{ service_version }} 5 | networks: 6 | - {{ application }}-network 7 | environment: 8 | DB_SERVER_HOST: postgres 9 | DB_SERVER_PORT: 5432 10 | POSTGRES_USER: {{ postgres_user }} 11 | POSTGRES_PASSWORD: {{ postgres_password }} 12 | POSTGRES_DB: zabbix 13 | ports: 14 | - "10051:10051" 15 | depends_on: 16 | - postgres 17 | {% if not swarm %} 18 | labels: 19 | - APP={{ service }} 20 | restart: unless-stopped 21 | {% endif %} 22 | {% if swarm %} 23 | deploy: 24 | labels: 25 | - APP={{ service }} 26 | # service restart policy 27 | restart_policy: 28 | condition: on-failure 29 | delay: 5s 30 | max_attempts: 10 31 | window: 120s 32 | # service update configuration 33 | update_config: 34 | parallelism: 1 35 | delay: 10s 36 | failure_action: continue 37 | monitor: 60s 38 | max_failure_ratio: 0.3 39 | {% endif %} 40 | 41 | zabbix-frontend: 42 | image: zabbix/zabbix-web-nginx-pgsql:{{ service_version }} 43 | networks: 44 | - {{ application }}-network 45 | environment: 46 | DB_SERVER_HOST: postgres 47 | POSTGRES_USER: {{ postgres_user }} 48 | POSTGRES_PASSWORD: {{ postgres_password }} 49 | POSTGRES_DB: zabbix 50 | PHP_TZ: {{ timezone }} 51 | ZBX_SERVER_HOST: zabbix-server 52 | ZBX_SERVER_PORT: 10051 53 | depends_on: 54 | - zabbix-server 55 | {% if not swarm %} 56 | labels: 57 | - APP={{ service }} 58 | - traefik.enable=true 59 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 60 | - traefik.http.routers.{{ service }}.entrypoints=websecure 61 | - traefik.http.routers.{{ service }}.tls=true 62 | {% if letsencrypt %} 63 | - traefik.http.routers.{{ service
}}.tls.certresolver=le_resolver 64 | {% endif %} 65 | - traefik.http.routers.{{ service }}.service={{ service }} 66 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 67 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 68 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 69 | {% if loadbalancer %} 70 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 71 | {% endif %} 72 | - traefik.docker.network={{ application }}-network 73 | restart: unless-stopped 74 | {% endif %} 75 | {% if swarm %} 76 | deploy: 77 | labels: 78 | - APP={{ service }} 79 | - traefik.enable=true 80 | - traefik.http.routers.{{ service }}.rule=Host(`{{ service }}.{{ domain }}`) 81 | - traefik.http.routers.{{ service }}.entrypoints=websecure 82 | - traefik.http.routers.{{ service }}.tls=true 83 | {% if letsencrypt %} 84 | - traefik.http.routers.{{ service }}.tls.certresolver=le_resolver 85 | {% endif %} 86 | - traefik.http.routers.{{ service }}.service={{ service }} 87 | - traefik.http.services.{{ service }}.loadbalancer.server.port={{ service_port }} 88 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie=true 89 | - traefik.http.services.{{ service }}.loadbalancer.sticky.cookie.secure=true 90 | {% if loadbalancer %} 91 | - traefik.http.services.{{ service }}.loadbalancer.passhostheader=true 92 | {% endif %} 93 | - traefik.docker.network={{ application }}-network 94 | # service restart policy 95 | restart_policy: 96 | condition: on-failure 97 | delay: 5s 98 | max_attempts: 10 99 | window: 120s 100 | # service update configuration 101 | update_config: 102 | parallelism: 1 103 | delay: 10s 104 | failure_action: continue 105 | monitor: 60s 106 | max_failure_ratio: 0.3 107 | {% endif %} 108 | 109 | postgres: 110 | image: postgres:{{ postgres_version }} 111 | networks: 112 | - {{ application }}-network 113 | volumes: 114 | - postgres:/var/lib/postgresql/data 115 | environment: 116 | POSTGRES_USER: {{ postgres_user }} 117 | POSTGRES_PASSWORD: {{ postgres_password }} 118 | POSTGRES_DB: zabbix 119 | PG_DATA: /var/lib/postgresql/data/pgdata 120 | {% if not swarm %} 121 | labels: 122 | - APP=postgres 123 | restart: unless-stopped 124 | {% endif %} 125 | {% if swarm %} 126 | deploy: 127 | labels: 128 | - APP=postgres 129 | # service restart policy 130 | restart_policy: 131 | condition: on-failure 132 | delay: 5s 133 | max_attempts: 10 134 | window: 120s 135 | # service update configuration 136 | update_config: 137 | parallelism: 1 138 | delay: 10s 139 | failure_action: continue 140 | monitor: 60s 141 | max_failure_ratio: 0.3 142 | {% endif %} 143 | 144 | volumes: 145 | postgres: {} 146 | 147 | networks: 148 | {{ application }}-network: 149 | external: true -------------------------------------------------------------------------------- /roles/zabbix/tests/inventory: -------------------------------------------------------------------------------- 1 | #SPDX-License-Identifier: MIT-0 2 | localhost 3 | 4 | -------------------------------------------------------------------------------- /roles/zabbix/tests/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | remote_user: root 4 | roles: 5 | - zabbix 6 | -------------------------------------------------------------------------------- /roles/zabbix/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # vars file for zabbix 3 | 
--------------------------------------------------------------------------------