├── .gitignore ├── Makefile ├── README.md ├── backup-scripts ├── Dockerfile ├── backup-gw.sh ├── backup-restic.sh └── backup-to-b2.sh ├── docker-compose.yml ├── go-carbon ├── Dockerfile ├── carbon.conf └── storage-schemas.conf ├── grafana └── Dockerfile ├── graphite-api ├── Dockerfile └── graphite-api.yaml ├── homeassistant └── Dockerfile ├── jenkins └── Dockerfile ├── loki ├── Dockerfile ├── foo.yaml └── local-config.yaml ├── mergerfs-tools └── Dockerfile ├── muximux └── Dockerfile ├── netdata └── Dockerfile ├── nginx ├── Dockerfile ├── files └── nginx.conf ├── organizr └── Dockerfile ├── promtail ├── Dockerfile └── promtail.yaml ├── qbittorrent └── Dockerfile ├── snapraid └── Dockerfile ├── syslog-ng-converter ├── Dockerfile └── syslog-ng.conf ├── telegraf-snmp-unifi ├── Dockerfile └── telegraf.conf ├── timecapsule ├── Dockerfile └── smb.conf ├── unifi ├── 16-x-frame-options └── Dockerfile └── xteve └── Dockerfile /.gitignore: -------------------------------------------------------------------------------- 1 | plex/download_url 2 | Attic 3 | plex/download_url 4 | .env 5 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | pull: ## pull all pullable docker images 2 | @docker-compose pull --ignore-pull-failures --parallel 3 | 4 | up: ## create and start all docker containers 5 | @docker-compose up -d --build 6 | 7 | start: up ## alias for 'up' 8 | 9 | # start: ## create and start all docker containers 10 | # @docker-compose up -d --build 11 | 12 | # rebuild: ## rebuild and recreate container specified in CONTAINER= arg 13 | # @if [ -z "$(CONTAINER)" ]; then \ 14 | # echo "ERROR: Must specify CONTAINER="; \ 15 | # exit 1; \ 16 | # fi 17 | # @docker-compose build --pull --no-cache $(CONTAINER) 18 | # @docker-compose up -d --force-recreate $(CONTAINER) 19 | 20 | # rebuild-all:: build-utility-images 21 | # rebuild-all:: ## rebuild and recreate all containers 22 | # @docker-compose up -d --build --force-recreate 23 | 24 | # build-utility-images:: build-snapraid 25 | # build-utility-images:: build-mergerfs-tools 26 | # build-utility-images:: build-backup-scripts 27 | 28 | # build-snapraid: ## rebuild ./snapraid image 29 | # @docker build --no-cache --pull -t joemiller/snapraid ./snapraid 30 | 31 | # build-mergerfs-tools: ## rebuild ./mergerfs-tools image 32 | # @docker build --no-cache --pull -t joemiller/mergerfs-tools ./mergerfs-tools 33 | 34 | # build-backup-scripts: ## rebuild ./backup-scripts image 35 | # @docker build --no-cache --pull -t joemiller/backup-scripts ./backup-scripts 36 | 37 | # helpers 38 | help: ## print list of tasks and descriptions 39 | @grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' 40 | 41 | .DEFAULT_GOAL := help 42 | 43 | .PHONY: all 44 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | docker-htpc 2 | =========== 3 | 4 | > :warning: This readme is NOT up to date. Just use the pieces as an example for your 5 | > own configs. 6 | 7 | Containers for HTPC apps. 8 | 9 | All containers are built from the `ubuntu:trusty` base image because most of the 10 | apps publish official .deb distributions for Ubuntu. 11 | 12 | ``` 13 | WARNING: This is a work in progress and suits my own specific needs. 
Feel free 14 | to use pieces of it but I wouldn't recommend using it wholesale unless your 15 | HTPC server setup is magically identical to mine. 16 | ``` 17 | 18 | init 19 | ---- 20 | 21 | All containers use [s6-overlay](https://github.com/just-containers/s6-overlay) 22 | for process supervision and pre-startup scripts to fix permissions on the 23 | /config volumes. 24 | 25 | user 26 | ---- 27 | 28 | The apps inside the containers each run as the `nobody` user for security. 29 | Permission dropping is handled by s6-overlay. Also, s6-overlay will run 30 | `chown -R nobody /config` inside each container during startup to fix up the 31 | perms in the /config volumes. 32 | 33 | Volumes 34 | ------- 35 | 36 | Each container generally follows a similar pattern for config and data volumes: 37 | 38 | - `/files` -> `/files`: Large data volume. In my case, there are folders such 39 | as: `/files/tv_shows`, `/files/movies`, `/files/downloads`, etc. 40 | 41 | - `/etc/` -> `/config`: eg: `/etc/sabnzbd/` on the host is 42 | mapped to `/config` inside the sabnzbd 43 | container for persistent config storage. 44 | 45 | This is not the case for all containers. Check `docker-compose.yml` for details on 46 | each container. 47 | 48 | TODO: enumerate each container's host volume mounts here. 49 | 50 | Ports 51 | ----- 52 | 53 | - `organizr`: 54 | - 8282: http web ui 55 | - `muximux`: 56 | - 8080: http web ui 57 | - `tautulli`: 58 | - 8181: http web ui 59 | - `sabnzbd`: 60 | - 8085: http web ui 61 | - `sonarr`: 62 | - 8989: http web ui 63 | - `deluge`: 64 | - 8083: http web ui 65 | - *Note*: the deluge container is run with `--net=host` in order to allow deluge to punch holes with NAT-PMP. It will work 66 | fine without `--net=host`, though possibly with limited connectivity to some torrent peers; dropping `--net=host` is the more secure option. 67 | - `plex`: 68 | - network: `host` 69 | - `embyserver`: 70 | - network: `host` 71 | - `timecapsule`: 72 | - network: macvlan. 73 | - This container will be allocated its own IP on your local network based on the macvlan config. 74 | - Use `docker inspect` to view the allocated IP 75 | - See [timecapsule (Samba)](#timecapsule-samba) for more details. 76 | - `unifi`: 77 | - network: macvlan. 78 | - This container will be allocated its own IP on your local network based on the macvlan config. 79 | - Use `docker inspect` to view the allocated IP 80 | - `grafana`: 81 | - 3000: http web ui 82 | - `graphite-api`: 83 | - network: `metrics` 84 | - No forwarded ports 85 | - `go-carbon`: 86 | - 2003: graphite line protocol 87 | - `influxdb`: 88 | - network: `metrics` 89 | - No forwarded ports 90 | - `telegraf-snmp-unifi`: 91 | - network: `metrics` 92 | - No forwarded ports 93 | 94 | Note: most of these apps can also expose TLS https ports but the current config 95 | does not expose these. 96 | 97 | Usage 98 | ----- 99 | 100 | ### List tasks and descriptions 101 | 102 | $ make help 103 | 104 | ### Create and start containers 105 | 106 | $ make start 107 | 108 | Containers will be created, started, and set with the `--restart=always` flag.
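To sanity-check what was actually created (and to see the published ports and host volume mounts of a given container until the TODO list above is filled in), plain docker commands work; for example, using the `grafana` and `sonarr` container names from docker-compose.yml:

    $ docker-compose ps
    $ docker port grafana
    $ docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ println }}{{ end }}' sonarr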
109 | 110 | ### Update (rebuild + recreate) all containers 111 | 112 | $ make rebuild-all 113 | 114 | ### Update (rebuild + recreate) specific containers 115 | 116 | $ make rebuild CONTAINER= 117 | 118 | ### Start/Stop/Restart individual containers 119 | 120 | No make tasks for these since they're simple docker commands: 121 | 122 | $ docker ps 123 | $ docker stop sabnzbd 124 | $ docker start sabnzbd 125 | $ docker restart sabnzbd 126 | 127 | ### View container logs 128 | 129 | Use `docker logs` command: 130 | 131 | $ docker logs sonarr 132 | $ docker logs -f sonarr 133 | 134 | Notes 135 | ----- 136 | 137 | ### deluge autoconnect 138 | 139 | Sonarr will not be able to connect to deluge unless you login to the deluge-web 140 | client and select the server. This needs to be done once per startup of the 141 | deluge container, unless auto-connect is enabled. Follow these directions 142 | to setup deluge-web auto-connect: http://dev.deluge-torrent.org/wiki/Faq#HowdoIauto-connecttoaspecificdaemon 143 | 144 | ### timecapsule (Samba) 145 | 146 | This container expects to connect to a docker network named `localnet`. This 147 | network should be created with the macvlan or ipvlan drivers available in 148 | docker 1.12+. This allows the container to be started and acquire its own 149 | IP on your local network in order to run Bonjour (avahi) services to announce 150 | itself on the network. 151 | 152 | This is particular to my setup because I have an existing 153 | samba instance running on the host providing general file sharing services. This 154 | container acts as a separate device that looks roughly like a timecapsule on 155 | the network exporting only a single `timemachine` mount. 156 | 157 | It is probably possible to use standard Docker port mapping instead. See the 158 | udp and tcp ports exposed in `timecapsule/Dockerfile`. However, getting the 159 | avahi announcement to work could be challenging. A workaround would be to 160 | publish the timecapsule services from the host's avahi using the 161 | `timecapsule/smb.service` file. 162 | 163 | If your local network is 192.168.0.0/24 you can create a `localnet` network 164 | with the following docker command: 165 | 166 | $ docker network create -d macvlan \ 167 | --subnet 192.168.0.0/24 \ 168 | --gateway 192.168.0.1 \ 169 | -o parent=eth0 \ 170 | localnet 171 | 172 | TimeMachine will attempt to use the entire disk by default. We constrain it 173 | by lying about the max available space on the samba share. This is done using 174 | a `dfree` script. By default, the max space presented to TimeMachine is 750GB. 175 | This can be changed by updating the `TIMEMACHINE_MAX_VOL_SIZE_GB` environment 176 | variable in the Dockerfile. 177 | 178 | ### metrics (grafana, graphite-api, go-carbon) 179 | 180 | All three metrics containers are attached to a user-defined bridge network 181 | named `metrics`. The only forwarded port is port 3000 (http) to the `grafana` 182 | host. 183 | 184 | Metrics are stored in `/etc/go-carbon/data` which is mounted into both the 185 | `go-carbon` and `graphite-api` containers. 186 | 187 | `grafana` fetches metrics from the `graphite-api` container over port 8000. 
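To see which containers are attached to the `metrics` bridge, and which IPs were handed out to the macvlan containers (`timecapsule`, `unifi`), `docker network inspect` and `docker inspect` can be used; a sketch:

    $ docker network inspect -f '{{ range .Containers }}{{ .Name }}: {{ .IPv4Address }}{{ println }}{{ end }}' metrics
    $ docker inspect -f '{{ range .NetworkSettings.Networks }}{{ .IPAddress }}{{ println }}{{ end }}' timecapsule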
188 | 189 | Metrics are sent to port 2003, which is forwarded into the `go-carbon` container; 190 | example from host: 191 | 192 | echo "testdata.foo.bar $RANDOM $(date +%s)" | tee /dev/stderr | nc localhost 2003 193 | -------------------------------------------------------------------------------- /backup-scripts/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM restic/restic:latest as restic-installer 2 | 3 | FROM ubuntu:20.04 4 | 5 | RUN apt-get update -q \ 6 | && apt-get install -qy \ 7 | bash \ 8 | curl \ 9 | zbackup \ 10 | openssh-client \ 11 | unzip \ 12 | && apt-get -y autoremove \ 13 | && apt-get -y clean \ 14 | && rm -rf /var/lib/apt/lists/* \ 15 | && rm -rf /tmp/* 16 | 17 | RUN curl https://rclone.org/install.sh | bash 18 | 19 | COPY backup-gw.sh / 20 | COPY backup-restic.sh / 21 | COPY backup-to-b2.sh / 22 | 23 | COPY --from=restic-installer /usr/bin/restic /usr/bin/restic 24 | -------------------------------------------------------------------------------- /backup-scripts/backup-gw.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # Usage 4 | # ----- 5 | # 6 | # - Backups: Run this script periodically, e.g. weekly or daily. It will back up the 7 | # BACKUP_HOST's root disk using the `dump` utility and store it in a zbackup 8 | # repo at the $REPO path on the local node. 9 | # 10 | # - Cleanup: After each backup run, a `find -mtime +180` is run against the $REPO/backups/ 11 | # path to delete backups older than 180 days. A `zbackup gc` is then run to garbage collect 12 | # any removed blocks from the $REPO. 13 | # 14 | # - Restore: Use `zbackup restore` to pipe a backup from $REPO/backups/ into `restore`, similar 15 | # to gunzip -c'ing a gzip'd dump file, eg: 16 | # 17 | # zbackup restore --password-file "$REPO_PASSWD_FILE" $REPO/backups/gw-2019-06-08.dump \ 18 | # | restore -ivf - 19 | # 20 | # Example usage from docker: 21 | # 22 | # docker run \ 23 | # --rm \ 24 | # -v /files/online_backups/zbackup-gw/:/data \ 25 | # -v /etc/backup-scripts/backup-gw.zbackup.pass:/config/zbackup.pass:ro \ 26 | # -v /etc/backup-scripts/gw-backup-ssh:/config/ssh.key:ro \ 27 | # -e "BACKUP_HOST=firewall" \ 28 | # -e "BACKUP_USER=backup" \ 29 | # joemiller/backup-scripts:latest \ 30 | # /backup-gw.sh 31 | # 32 | 33 | set -eou pipefail 34 | 35 | REPO="${REPO:-/data}" 36 | REPO_PASSWD_FILE="${REPO_PASSWD_FILE:-/config/zbackup.pass}" 37 | MAX_DAYS="${MAX_DAYS:-180}" 38 | 39 | BACKUP_HOST="${BACKUP_HOST:-}" 40 | BACKUP_USER="${BACKUP_USER:-}" 41 | BACKUP_SSH_KEY="${BACKUP_SSH_KEY:-/config/ssh.key}" 42 | 43 | BACKUP_NAME="gw-$(date +%Y-%m-%d).dump" 44 | 45 | init() { 46 | if [[ !
-e "$REPO/info" ]]; then 47 | echo "==> initializing new repo: $REPO" 48 | zbackup init --password-file "$REPO_PASSWD_FILE" "$REPO" 49 | fi 50 | } 51 | 52 | backup() { 53 | local ssh_args=("-oCompression=no" "-oStrictHostKeyChecking=no" "-c" "aes256-gcm@openssh.com") 54 | 55 | echo "==> Backing up $BACKUP_HOST to $REPO/backups/$BACKUP_NAME" 56 | 57 | ssh "${ssh_args[@]}" "$BACKUP_USER@$BACKUP_HOST" -i "$BACKUP_SSH_KEY" 'dump -0au -h0 -f - /' \ 58 | | zbackup backup --password-file "$REPO_PASSWD_FILE" --cache-size 512mb "$REPO/backups/$BACKUP_NAME" 59 | } 60 | 61 | cleanup() { 62 | echo "==> Removing old backups" 63 | find "$REPO/backups/" -type f -mtime +"$MAX_DAYS" -print -delete 64 | 65 | zbackup gc --password-file "$REPO_PASSWD_FILE" "$REPO" 66 | } 67 | 68 | main() { 69 | echo "==> Started $(date)" 70 | init 71 | du -shx "$REPO" 72 | backup 73 | cleanup 74 | du -shx "$REPO" 75 | echo "==> Finished $(date)" 76 | } 77 | main "$@" 78 | -------------------------------------------------------------------------------- /backup-scripts/backup-restic.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # Example usage from docker: 4 | # 5 | # docker run \ 6 | # --rm \ 7 | # -v /files/online_backups/restic:/data \ 8 | # -v /home/joeym:/home/joeym:ro \ 9 | # -v /etc/backup-scripts:/etc/backup-scripts:ro \ 10 | # -v /virt/persistent:/virt/persistent:ro \ 11 | # -v /etc/backup-scripts/restic-exclude.txt:/config/exclude:ro \ 12 | # -e "HOST=server" \ 13 | # -e "RESTIC_PASSWORD=password-here-foo" \ 14 | # joemiller/backup-scripts:latest \ 15 | # /backup-restic.sh \ 16 | # /home/joe \ 17 | # /etc/backup-scripts \ 18 | # /virt/persistent 19 | # 20 | 21 | set -eou pipefail 22 | 23 | # RESTIC=./restic-master 24 | RESTIC=restic 25 | 26 | HOST="${HOST:-$(hostname -s)}" 27 | EXCLUDE_LIST="${EXCLUDE_LIST:-/config/exclude}" 28 | 29 | RESTIC_REPOSITORY="${RESTIC_REPOSITORY:-/data}" 30 | export RESTIC_REPOSITORY 31 | 32 | 33 | if [[ -z "${RESTIC_PASSWORD:-}" ]]; then 34 | echo 'Missing env var RESTIC_PASSWORD' 35 | exit 1 36 | fi 37 | 38 | init() { 39 | if [[ ! -e "$RESTIC_REPOSITORY/config" ]]; then 40 | echo "==> initializing new repo: $RESTIC_REPOSITORY" 41 | $RESTIC init 42 | fi 43 | } 44 | 45 | backup() { 46 | for i in "${DIRS[@]}"; do 47 | $RESTIC backup "$i" --exclude-file="$EXCLUDE_LIST" --host="$HOST" # --quiet 48 | done 49 | } 50 | 51 | cleanup() { 52 | # run pruner ever 3rd day 53 | if [[ $(( $(date +%e) % 3)) == 0 ]]; then 54 | restic forget --keep-last 4 --keep-daily 14 --keep-weekly 8 --keep-monthly 12 55 | fi 56 | 57 | # prune every 30th days-ish 58 | if [[ $(( $(date +%e) % 30)) == 0 ]]; then 59 | restic prune 60 | fi 61 | } 62 | 63 | check() { 64 | echo "TODO: run 'check' command periodically too..." 
65 | } 66 | 67 | main() { 68 | echo "==> Started $(date)" 69 | 70 | DIRS=("$@") 71 | 72 | if [[ ${#DIRS[@]} -eq 0 ]]; then 73 | echo "Usage: $0 /local/path1 [/local/path2, ...]" 74 | exit 1 75 | fi 76 | 77 | init 78 | backup 79 | cleanup 80 | check 81 | echo "==> Finished $(date)" 82 | } 83 | main "$@" 84 | -------------------------------------------------------------------------------- /backup-scripts/backup-to-b2.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # Example usage from docker: 4 | # 5 | # docker run \ 6 | # --rm \ 7 | # -v /etc/backup-scripts/rclone.conf:/config/rclone.conf:ro \ 8 | # -v /files:/files:ro \ 9 | # -e BUCKET=joeym-home-backups \ 10 | # joemiller/backup-scripts:latest \ 11 | # /backup-to-b2.sh \ 12 | # /files/photos \ 13 | # /files/online_backups 14 | 15 | set -eou pipefail 16 | # set -x 17 | 18 | RCLONE_CONFIG="${RCLONE_CONFIG:-/config/rclone.conf}" 19 | # /config/rclone.conf file should look like this: (insert account-id and api-key) 20 | # [b2] 21 | # type = b2 22 | # account = KEY_ID 23 | # key = KEY 24 | # endpoint = 25 | 26 | SYNC_DIRS=() # pass in a list of dirs to sync as args to the script 27 | BUCKET="${BUCKET:-}" 28 | BWLIMIT="${BWLIMIT:-1.5m}" # upload bandwidth limit, bytes 29 | 30 | RCLONE_FLAGS=("--b2-hard-delete" "--config=$RCLONE_CONFIG") 31 | RCLONE_FLAGS+=("--modify-window=1s" "--retries=10" "--transfers=32" "--checkers=48" "--bwlimit=${BWLIMIT}") 32 | RCLONE_FLAGS+=("--stats-log-level=NOTICE" "--stats=30m") 33 | RCLONE_FLAGS+=("--links") 34 | RCLONE_FLAGS+=("--exclude=.gphotos.token") 35 | # uncomment for interactive runs: 36 | # RCLONE_FLAGS+=("--progress" "--stats=5s") 37 | # uncomment for dry-run 38 | # RCLONE_FLAGS+=("-n") 39 | 40 | sync() { 41 | local errors=0 42 | local dir 43 | 44 | for dir in "${SYNC_DIRS[@]}"; do 45 | echo "==> sync: $dir" 46 | # 47 | # rclone sync --flags /src b2:bucket/src 48 | # 49 | if ! rclone sync "${RCLONE_FLAGS[@]}" "$dir" "b2:${BUCKET}$dir"; then 50 | errors=$((errors + 1)) 51 | fi 52 | done 53 | 54 | if [[ $errors -gt 0 ]]; then 55 | echo "==> sync: $dir: There were $errors errors."; 56 | exit 1 57 | fi 58 | } 59 | 60 | cleanup() { 61 | local errors=0 62 | local dir 63 | 64 | for dir in "${SYNC_DIRS[@]}"; do 65 | echo "==> cleanup: $dir" 66 | # 67 | # rclone cleanup --flags b2:bucket/src 68 | # 69 | if ! rclone cleanup "${RCLONE_FLAGS[@]}" "b2:${BUCKET}$dir"; then 70 | errors=$((errors + 1)) 71 | fi 72 | done 73 | 74 | if [[ $errors -gt 0 ]]; then 75 | echo "==> cleanup: $dir: There were $errors errors."; 76 | exit 1 77 | fi 78 | } 79 | 80 | main() { 81 | echo "==> Started $(date)" 82 | 83 | SYNC_DIRS=("$@") 84 | 85 | if [[ ${#SYNC_DIRS[@]} -eq 0 ]]; then 86 | echo "Usage: $0 /local/path1 [/local/path2, ...]" 87 | exit 1 88 | fi 89 | 90 | if [[ -z "$BUCKET" ]]; then 91 | echo "Missing env var 'BUCKET'" 92 | exit 1 93 | fi 94 | 95 | if [[ ! 
-e "$RCLONE_CONFIG" ]]; then 96 | echo "File not found: $RCLONE_CONFIG" 97 | exit 1 98 | fi 99 | 100 | sync 101 | cleanup 102 | echo "==> Finished $(date)" 103 | } 104 | main "$@" 105 | 106 | -------------------------------------------------------------------------------- /docker-compose.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # XXX: some network features don't work in 3.0-3.3, but work in 2.1, go figure :( 3 | version: '2.1' 4 | 5 | networks: 6 | # 'localnet': this network allows containers to obtain their own IP's on the localnetwork, similar to a VM in bridge mode 7 | # - For ipv4, an address between 172.25.0.64-95 will be assigned by Docker. For static assignments, see the 'unifi' container config. 8 | # - For ipv6, docker will not assign an address and the container must acquire an address using SLAAC, which assumes 9 | # a router on the localnet providing SLAAC services is available. 10 | localnet: 11 | driver: macvlan 12 | enable_ipv6: true 13 | driver_opts: 14 | parent: br0 15 | ipam: 16 | config: 17 | # bogus ipv6 subnet here. The container should use SLAAC to acquire a real ipv6 addr from your router 18 | - subnet: 2001::0/64 19 | # 172.25.0.64/27 = 64-95. Don't let your DHCP server assign IP's from this range. 20 | - subnet: 172.25.0.0/24 21 | gateway: 172.25.0.1 22 | ip_range: 172.25.0.64/27 23 | 24 | # 'metrics': a private network for graphite/grafana related services to communicate with each other. 25 | metrics: 26 | driver: bridge 27 | ipam: 28 | driver: default 29 | config: 30 | - subnet: 192.168.1.0/24 31 | gateway: 192.168.1.1 32 | 33 | services: 34 | tautulli: 35 | container_name: tautulli 36 | image: tautulli/tautulli:latest 37 | restart: always 38 | environment: 39 | - PUID=65534 40 | - PGID=65534 41 | - ADVANCED_GIT_BRANCH=master 42 | volumes: 43 | - /virt/persistent/tautulli:/config 44 | - /etc/localtime:/etc/localtime:ro 45 | ports: 46 | - "8181:8181" 47 | 48 | organizr-v2: 49 | container_name: organizr-v2 50 | image: organizr/organizr:latest 51 | restart: always 52 | environment: 53 | - PUID=65534 54 | - PGID=65534 55 | volumes: 56 | - /virt/persistent/organizr-v2:/config 57 | - /etc/localtime:/etc/localtime:ro 58 | ports: 59 | - "8282:80" 60 | 61 | sabnzbd: 62 | container_name: sabnzbd 63 | image: lscr.io/linuxserver/sabnzbd:latest 64 | restart: always 65 | environment: 66 | - PUID=65534 67 | - PGID=65534 68 | - TZ=America/Los_Angeles 69 | volumes: 70 | - /files:/files 71 | - /tmp/sabnzbd:/tmp/sabnzbd 72 | - /virt/persistent/sabnzbd:/config 73 | - /etc/localtime:/etc/localtime:ro 74 | ports: 75 | - "8085:8085" 76 | 77 | sonarr: 78 | container_name: sonarr 79 | image: lscr.io/linuxserver/sonarr:latest 80 | restart: always 81 | environment: 82 | - XDG_CONFIG_HOME=/config 83 | - PUID=65534 84 | - PGID=65534 85 | volumes: 86 | - /files:/files 87 | - /virt/persistent/sonarr:/config 88 | - /etc/localtime:/etc/localtime:ro 89 | ports: 90 | - "8989:8989" 91 | 92 | radarr: 93 | container_name: radarr 94 | image: lscr.io/linuxserver/radarr:latest 95 | restart: always 96 | environment: 97 | - PUID=65534 98 | - PGID=65534 99 | - TZ=America/Los_Angeles 100 | volumes: 101 | - /files:/files 102 | - /virt/persistent/radarr:/config 103 | - /etc/localtime:/etc/localtime:ro 104 | - /etc/timezone:/etc/timezone:ro 105 | ports: 106 | - 7878:7878 107 | 108 | deluge: 109 | container_name: deluge 110 | image: linuxserver/deluge:2.0.5 111 | restart: always 112 | environment: 113 | - PUID=65534 114 | - PGID=65534 115 | - 
TZ=America/Los_Angeles 116 | volumes: 117 | - /files:/files 118 | - /virt/persistent/deluge:/config 119 | - /etc/localtime:/etc/localtime:ro 120 | ports: 121 | - "8083:8112" # previous config used 8083 for web, linuxserver uses 8112. re-map for backwards compat 122 | - "6881:6881" 123 | - "6881:6881/udp" 124 | 125 | # qbittorrent: 126 | # container_name: qbittorrent 127 | # image: qbittorrent 128 | # build: ./qbittorrent 129 | # environment: 130 | # - PUID=65534 131 | # - PGID=65534 132 | # - TZ=America/Los_Angeles 133 | # - WEBUI_PORT=8084 134 | # volumes: 135 | # - /path/to/appdata/config:/config 136 | # - /path/to/downloads:/downloads 137 | # ports: 138 | # - 6881:6881 139 | # - 6881:6881/udp 140 | # - 8084:8084 141 | # restart: unless-stopped 142 | 143 | plex: 144 | container_name: plex 145 | image: lscr.io/linuxserver/plex:latest 146 | restart: always 147 | volumes: 148 | - /files:/files 149 | - /virt/persistent/plex:/config 150 | - /tmp/plex-transcode:/transcode 151 | - /etc/localtime:/etc/localtime:ro 152 | network_mode: host 153 | devices: 154 | - /dev/dri:/dev/dri 155 | environment: 156 | # linuxserver/plex: udpate to latest available including betas: 157 | - VERSION=latest 158 | # ports: 159 | # - "32400:32400" 160 | 161 | emby: 162 | container_name: emby 163 | image: emby/embyserver:latest 164 | restart: always 165 | volumes: 166 | - /files:/files 167 | - /virt/persistent/emby:/config 168 | - /etc/localtime:/etc/localtime:ro 169 | environment: 170 | - APP_USER=nobody 171 | - APP_UID=65534 172 | - APP_GID=65534 173 | network_mode: host 174 | devices: 175 | - /dev/dri:/dev/dri 176 | 177 | timecapsule: 178 | container_name: timecapsule 179 | build: ./timecapsule 180 | restart: always 181 | hostname: timecapsule 182 | environment: 183 | - TM_USER=backup 184 | - TM_PW=backup 185 | - TM_ID=34 # uid of the backup user on the host 186 | - TM_SIZE=712000 # MB 187 | volumes: 188 | - /etc/localtime:/etc/localtime:ro 189 | - /files/timemachine:/timemachine 190 | networks: 191 | - localnet 192 | 193 | unifi: 194 | container_name: unifi 195 | build: ./unifi 196 | restart: always 197 | environment: 198 | - PUID=65534 199 | - PGID=65534 200 | volumes: 201 | - /virt/persistent/unifi:/config 202 | - /etc/localtime:/etc/localtime:ro 203 | networks: 204 | localnet: 205 | ipv4_address: 172.25.0.65 206 | 207 | grafana: 208 | container_name: grafana 209 | build: ./grafana 210 | restart: always 211 | environment: 212 | - GF_INSTALL_PLUGINS=grafana-clock-panel 213 | volumes: 214 | - /virt/persistent/grafana:/var/lib/grafana 215 | networks: 216 | - metrics 217 | ports: 218 | - "3000:3000" 219 | 220 | # 2021-10-24: disabled graphite-api. Appears the project is abandoned. 
Using graphite-web from official graphite image instead 221 | # graphite-api: 222 | # container_name: graphite-api 223 | # image: joemiller/graphite-api 224 | # build: ./graphite-api 225 | # restart: always 226 | # volumes: 227 | # - /virt/persistent/graphite-api/graphite-api.yaml:/etc/graphite-api.yaml 228 | # - /virt/persistent/go-carbon/data:/data 229 | # networks: 230 | # - metrics 231 | 232 | # official graphite container with only graphite-web and nginx running 233 | graphite: 234 | container_name: graphite 235 | image: graphiteapp/graphite-statsd 236 | restart: always 237 | environment: 238 | - CARBON_DISABLED=1 239 | - CARBON_AGGREGATOR_DISABLED=1 240 | - STATSD_DISABLED=1 241 | - GO_CARBON_DISABLED=1 242 | - REDIS_DISABLED=1 243 | - GRAPHITE_CARBONLINK_HOSTS=go-carbon:7002 244 | volumes: 245 | - /virt/persistent/graphite:/opt/graphite/conf 246 | - /virt/persistent/go-carbon/data:/opt/graphite/storage/whisper 247 | networks: 248 | - metrics 249 | # uncomment to expose the graphite-web UI 250 | # ports: 251 | # - "9111:80" 252 | 253 | go-carbon: 254 | container_name: go-carbon 255 | restart: always 256 | build: ./go-carbon 257 | volumes: 258 | - /virt/persistent/go-carbon/data:/data 259 | - /virt/persistent/go-carbon/config:/config 260 | networks: 261 | - metrics 262 | ports: 263 | - "2003:2003" 264 | - "2003:2003/udp" 265 | - "2004:2004" 266 | # - "7002:7002" 267 | # - "8080:8080" 268 | 269 | influxdb: 270 | container_name: influxdb 271 | image: influxdb:1.8 272 | restart: always 273 | volumes: 274 | - /virt/persistent/influxdb:/var/lib/influxdb 275 | - /virt/persistent/influxdb2:/var/lib/influxdb2 276 | - /virt/persistent/influxdb2/etc-influxdb2:/etc/influxdb2 277 | networks: 278 | - metrics 279 | ports: 280 | - "8086:8086" 281 | 282 | telegraf-snmp-unifi: 283 | container_name: telegraf-snmp-unifi 284 | build: ./telegraf-snmp-unifi 285 | restart: always 286 | volumes: 287 | - /virt/persistent/telegraf-snmp-unifi:/etc/telegraf 288 | networks: 289 | - metrics 290 | 291 | # netdata: 292 | # container_name: netdata 293 | # image: joemiller/netdata 294 | # build: ./netdata 295 | # restart: always 296 | # hostname: server 297 | # environment: 298 | # # docker group from the host: 299 | # - PGID=128 300 | # cap_add: 301 | # - SYS_PTRACE 302 | # security_opt: 303 | # - apparmor=unconfined 304 | # volumes: 305 | # - /proc:/host/proc:ro 306 | # - /sys:/host/sys:ro 307 | # - /var/lib/smartmontools:/var/log/smartd:ro 308 | # - /var/run/docker.sock:/var/run/docker.sock:ro 309 | # ports: 310 | # - "19999:19999" 311 | 312 | jenkins: 313 | container_name: jenkins 314 | build: ./jenkins 315 | restart: always 316 | hostname: server 317 | environment: 318 | - JAVA_OPTS="-Xmx256m" 319 | volumes: 320 | - /etc/localtime:/etc/localtime:ro 321 | - /etc/timezone:/etc/timezone:ro 322 | # jenkins runs with uid 1000: 323 | # sudo mkdir -p /virt/persistent/jenkins/ ; sudo chown -R 1000 /virt/persistent/jenkins/ 324 | - /virt/persistent/jenkins:/var/jenkins_home 325 | # docker socket 326 | - /var/run/docker.sock:/var/run/docker.sock:ro 327 | ports: 328 | - "8090:8080" 329 | # GID 128 is docker group from the host. 
Necessary to allow access to the docker.sock 330 | user: jenkins:128 331 | 332 | nginx: 333 | container_name: nginx 334 | build: ./nginx 335 | restart: always 336 | hostname: server 337 | volumes: 338 | - /files:/files:ro 339 | - /etc/localtime:/etc/localtime:ro 340 | ports: 341 | - "80:80" 342 | 343 | scrutiny: 344 | container_name: scrutiny 345 | image: analogj/scrutiny:latest 346 | restart: always 347 | volumes: 348 | - /etc/localtime:/etc/localtime:ro 349 | - /etc/timezone:/etc/timezone:ro 350 | - /run/udev:/run/udev:ro 351 | - /virt/persistent/scrutiny:/scrutiny/config 352 | devices: 353 | - /dev/sda 354 | - /dev/sdb 355 | - /dev/sdc 356 | - /dev/sdd 357 | - /dev/sde 358 | - /dev/sdf 359 | - /dev/nvme0 360 | cap_add: 361 | - SYS_RAWIO 362 | - SYS_ADMIN 363 | ports: 364 | - 9988:8080 365 | 366 | # homeassistant: 367 | # container_name: homeassistant 368 | # image: joemiller/homeassistant 369 | # build: ./homeassistant 370 | # restart: always 371 | # volumes: 372 | # - /virt/persistent/homeassistant:/config 373 | # - /etc/localtime:/etc/localtime:ro 374 | # restart: always 375 | # network_mode: host 376 | 377 | nzbhydra2: 378 | container_name: nzbhydra2 379 | image: lscr.io/linuxserver/nzbhydra2:latest 380 | restart: always 381 | environment: 382 | - PUID=65534 383 | - PGID=65534 384 | volumes: 385 | - /etc/localtime:/etc/localtime:ro 386 | - /etc/timezone:/etc/timezone:ro 387 | - /virt/persistent/nzbhydra2:/config 388 | #- :/downloads 389 | ports: 390 | - 5076:5076 391 | 392 | bazarr: 393 | container_name: bazarr 394 | image: lscr.io/linuxserver/bazarr:latest 395 | restart: always 396 | environment: 397 | - PUID=65534 398 | - PGID=65534 399 | - TZ=America/Los_Angeles 400 | volumes: 401 | - /etc/localtime:/etc/localtime:ro 402 | - /etc/timezone:/etc/timezone:ro 403 | - /virt/persistent/bazarr:/config 404 | - /files:/files 405 | ports: 406 | - 6767:6767 407 | 408 | loki: 409 | container_name: loki 410 | build: ./loki 411 | restart: always 412 | command: -config.file=/etc/loki/local-config.yaml 413 | environment: 414 | - TZ=America/Los_Angeles 415 | volumes: 416 | - /etc/localtime:/etc/localtime:ro 417 | - /etc/timezone:/etc/timezone:ro 418 | - /virt/persistent/loki/data:/data 419 | networks: 420 | - metrics 421 | ports: 422 | - 3100:3100 423 | 424 | promtail: 425 | container_name: promtail 426 | build: ./promtail 427 | restart: always 428 | command: -config.file=/etc/promtail/promtail.yaml 429 | environment: 430 | - TZ=America/Los_Angeles 431 | volumes: 432 | - /etc/localtime:/etc/localtime:ro 433 | - /etc/timezone:/etc/timezone:ro 434 | - /var/log:/var/log:ro 435 | - /virt/persistent/loki/promtail:/data 436 | networks: 437 | - metrics 438 | 439 | syslog-ng-converter: 440 | container_name: syslog-ng-converter 441 | build: ./syslog-ng-converter 442 | restart: always 443 | # uncomment for more debug output: 444 | #command: -dve 445 | environment: 446 | - TZ=America/Los_Angeles 447 | volumes: 448 | - /etc/localtime:/etc/localtime:ro 449 | - /etc/timezone:/etc/timezone:ro 450 | networks: 451 | - metrics 452 | ports: 453 | - "1514:1514" 454 | - "1514:1514/udp" 455 | 456 | # IPTV bridge for plex 457 | # xteve: 458 | # container_name: xteve 459 | # image: joemiller/xteve 460 | # build: ./xteve 461 | # ports: 462 | # - "34400:34400" 463 | # volumes: 464 | # - /etc/localtime:/etc/localtime:ro 465 | # - /etc/timezone:/etc/timezone:ro 466 | # - /virt/persistent/xteve/root:/root/xteve 467 | # - /virt/persistent/xteve/tmp:/tmp/xteve 468 | 469 | # -- BEGIN teslamate stack -- 470 | teslamate: 471 | 
container_name: teslamate 472 | image: teslamate/teslamate:latest 473 | restart: always 474 | environment: 475 | - DATABASE_USER=${TESLAMATE_DB_USER} 476 | - DATABASE_PASS=${TESLAMATE_DB_PASS} 477 | - DATABASE_NAME=${TESLAMATE_DB_NAME} 478 | - DATABASE_HOST=teslamate-database 479 | - MQTT_HOST=teslamate-mosquitto 480 | volumes: 481 | - /etc/localtime:/etc/localtime:ro 482 | ports: 483 | - 4000:4000 484 | cap_drop: 485 | - all 486 | 487 | teslamate-grafana: 488 | container_name: teslamate-grafana 489 | image: teslamate/grafana:latest 490 | restart: always 491 | environment: 492 | - DATABASE_USER=${TESLAMATE_DB_USER} 493 | - DATABASE_PASS=${TESLAMATE_DB_PASS} 494 | - DATABASE_NAME=${TESLAMATE_DB_NAME} 495 | - DATABASE_HOST=teslamate-database 496 | ports: 497 | - 4001:3000 498 | volumes: 499 | - /etc/localtime:/etc/localtime:ro 500 | - /virt/persistent/teslamate/grafana:/var/lib/grafana 501 | 502 | teslamate-database: 503 | container_name: teslamate-postgres 504 | image: postgres:12 505 | restart: always 506 | environment: 507 | - POSTGRES_USER=${TESLAMATE_DB_USER} 508 | - POSTGRES_PASSWORD=${TESLAMATE_DB_PASS} 509 | - POSTGRES_DB=teslamate 510 | volumes: 511 | - /etc/localtime:/etc/localtime:ro 512 | - /virt/persistent/teslamate/postgres:/var/lib/postgresql/data 513 | 514 | teslamate-mosquitto: 515 | container_name: teslamate-mosquitto 516 | image: eclipse-mosquitto:1.6 517 | restart: always 518 | ports: 519 | - 1883:1883 520 | volumes: 521 | - /etc/localtime:/etc/localtime:ro 522 | - /virt/persistent/teslamate/mosquitto-conf:/mosquitto/config 523 | - /virt/persistent/teslamate/mosquitto-data:/mosquitto/data 524 | # -- END teslamate stack -- 525 | 526 | # build-only containers / utility containers. These are not services. 527 | snapraid: 528 | container_name: snapraid 529 | image: joemiller/snapraid 530 | build: ./snapraid 531 | entrypoint: "bash -c" 532 | command: "exit 0" 533 | 534 | mergerfs-tools: 535 | container_name: mergerfs-tools 536 | image: joemiller/mergerfs-tools 537 | build: ./mergerfs-tools 538 | entrypoint: "bash -c" 539 | command: "exit 0" 540 | 541 | backup-scripts: 542 | container_name: backup-scripts 543 | image: joemiller/backup-scripts 544 | build: ./backup-scripts 545 | entrypoint: "bash -c" 546 | command: "exit 0" 547 | -------------------------------------------------------------------------------- /go-carbon/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:20.04 2 | 3 | ENV GO_CARBON_PKG_URL https://github.com/lomik/go-carbon/releases/download/v0.9.1/go-carbon_0.9.1_amd64.deb 4 | 5 | RUN apt-get update -q \ 6 | && apt-get install -qy \ 7 | wget \ 8 | && apt-get -y autoremove \ 9 | && apt-get -y clean \ 10 | && rm -rf /var/lib/apt/lists/* \ 11 | && rm -rf /tmp/* 12 | 13 | RUN cd /tmp \ 14 | && wget -q "${GO_CARBON_PKG_URL}" \ 15 | && dpkg -i /tmp/*.deb \ 16 | && rm -rf /tmp/* 17 | 18 | EXPOSE 2003 2004 7002 7007 8080 2003/udp 19 | VOLUME /data 20 | 21 | ADD carbon.conf /config/carbon.conf 22 | ADD storage-schemas.conf /config/storage-schemas.conf 23 | 24 | CMD ["go-carbon", "-config", "/config/carbon.conf"] 25 | -------------------------------------------------------------------------------- /go-carbon/carbon.conf: -------------------------------------------------------------------------------- 1 | [common] 2 | user = "" 3 | # logfile = "/var/log/go-carbon/go-carbon.log" 4 | log-level = "info" 5 | graph-prefix = "carbon.agents.{host}" 6 | metric-endpoint = "local" 7 | max-cpu = 1 8 | metric-interval = "1m0s" 
9 | 10 | [whisper] 11 | data-dir = "/data" 12 | schemas-file = "/config/storage-schemas.conf" 13 | aggregation-file = "" 14 | workers = 1 15 | max-updates-per-second = 0 16 | sparse-create = false 17 | enabled = true 18 | 19 | [cache] 20 | max-size = 1000000 21 | write-strategy = "max" 22 | 23 | [udp] 24 | listen = ":2003" 25 | enabled = true 26 | log-incomplete = false 27 | buffer-size = 0 28 | 29 | [tcp] 30 | listen = ":2003" 31 | enabled = true 32 | buffer-size = 0 33 | 34 | [pickle] 35 | listen = ":2004" 36 | max-message-size = 67108864 37 | enabled = true 38 | buffer-size = 0 39 | 40 | [carbonlink] 41 | listen = ":7002" 42 | enabled = true 43 | read-timeout = "30s" 44 | 45 | [carbonserver] 46 | listen = ":8080" 47 | enabled = true 48 | buckets = 10 49 | max-globs = 100 50 | metrics-as-counters = false 51 | read-timeout = "1m0s" 52 | write-timeout = "1m0s" 53 | scan-frequency = "5m0s" 54 | 55 | [dump] 56 | enabled = false 57 | path = "" 58 | restore-per-second = 0 59 | 60 | [pprof] 61 | listen = "localhost:7007" 62 | enabled = false 63 | -------------------------------------------------------------------------------- /go-carbon/storage-schemas.conf: -------------------------------------------------------------------------------- 1 | [default] 2 | pattern = .* 3 | retentions = 10s:7d,1m:14d,5m:30d,1h:1y,1d:5y 4 | -------------------------------------------------------------------------------- /grafana/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM grafana/grafana:8.3.3 2 | 3 | # NOTE: this doesn't work because the /var/lib/grafana/plugins is mounted in from the host. 4 | # to install plugins, docker-exec into the running container, run the plugin install command, then restart the container 5 | #RUN grafana-cli plugins install grafana-worldmap-panel 6 | 7 | USER nobody 8 | 9 | # disable x-frame-options: deny, so that we can be used with organizr 10 | ENV GF_SECURITY_ALLOW_EMBEDDING=true 11 | -------------------------------------------------------------------------------- /graphite-api/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM brutasse/graphite-api 2 | -------------------------------------------------------------------------------- /graphite-api/graphite-api.yaml: -------------------------------------------------------------------------------- 1 | search_index: /data/index 2 | finders: 3 | - graphite_api.finders.whisper.WhisperFinder 4 | functions: 5 | - graphite_api.functions.SeriesFunctions 6 | - graphite_api.functions.PieFunctions 7 | whisper: 8 | directories: 9 | - /data 10 | carbon: 11 | hosts: 12 | - go-carbon:7002 13 | timeout: 1 14 | retry_delay: 15 15 | carbon_prefix: carbon 16 | replication_factor: 1 17 | allowed_origins: 18 | - '*' # Allow CORS 19 | time_zone: UTC 20 | -------------------------------------------------------------------------------- /homeassistant/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM homeassistant/home-assistant:stable 2 | -------------------------------------------------------------------------------- /jenkins/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM docker:stable as docker 2 | 3 | FROM jenkins/jenkins:lts 4 | 5 | COPY --from=docker /usr/local/bin/docker /bin/docker 6 | -------------------------------------------------------------------------------- /loki/Dockerfile:
-------------------------------------------------------------------------------- 1 | FROM grafana/loki:2.4.1 2 | 3 | COPY local-config.yaml /etc/loki/ 4 | -------------------------------------------------------------------------------- /loki/foo.yaml: -------------------------------------------------------------------------------- 1 | auth_enabled: false 2 | 3 | server: 4 | http_listen_port: 3100 5 | 6 | ingester: 7 | lifecycler: 8 | address: 127.0.0.1 9 | ring: 10 | kvstore: 11 | store: inmemory 12 | replication_factor: 1 13 | final_sleep: 0s 14 | chunk_idle_period: 1h # Any chunk not receiving new logs in this time will be flushed 15 | max_chunk_age: 1h # All chunks will be flushed when they hit this age, default is 1h 16 | chunk_target_size: 1048576 # Loki will attempt to build chunks up to 1.5MB, flushing first if chunk_idle_period or max_chunk_age is reached first 17 | chunk_retain_period: 30s # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m) 18 | max_transfer_retries: 0 # Chunk transfers disabled 19 | 20 | schema_config: 21 | configs: 22 | - from: 2020-10-24 23 | store: boltdb-shipper 24 | object_store: filesystem 25 | schema: v11 26 | index: 27 | prefix: index_ 28 | period: 24h 29 | 30 | storage_config: 31 | boltdb_shipper: 32 | active_index_directory: /var/loki/boltdb-shipper-active 33 | cache_location: /var/loki/boltdb-shipper-cache 34 | cache_ttl: 24h # Can be increased for faster performance over longer query periods, uses more disk space 35 | shared_store: filesystem 36 | filesystem: 37 | directory: /var/loki/chunks 38 | 39 | compactor: 40 | working_directory: /var/loki/boltdb-shipper-compactor 41 | shared_store: filesystem 42 | 43 | limits_config: 44 | reject_old_samples: true 45 | reject_old_samples_max_age: 168h 46 | 47 | chunk_store_config: 48 | max_look_back_period: 0s 49 | 50 | table_manager: 51 | retention_deletes_enabled: false 52 | retention_period: 0s 53 | 54 | ruler: 55 | storage: 56 | type: local 57 | local: 58 | directory: /tmp/loki/rules 59 | rule_path: /tmp/loki/rules-temp 60 | alertmanager_url: http://localhost:9093 61 | ring: 62 | kvstore: 63 | store: inmemory 64 | enable_api: true 65 | 66 | -------------------------------------------------------------------------------- /loki/local-config.yaml: -------------------------------------------------------------------------------- 1 | auth_enabled: false 2 | 3 | server: 4 | http_listen_port: 3100 5 | grpc_listen_port: 9096 6 | 7 | common: 8 | path_prefix: /data 9 | storage: 10 | filesystem: 11 | chunks_directory: /data/chunks 12 | rules_directory: /data/rules 13 | replication_factor: 1 14 | ring: 15 | instance_addr: 127.0.0.1 16 | kvstore: 17 | store: inmemory 18 | 19 | compactor: 20 | working_directory: /data/boltdb-shipper-compactor 21 | shared_store: filesystem 22 | 23 | limits_config: 24 | retention_period: 1440h 25 | 26 | table_manager: 27 | retention_deletes_enabled: false 28 | retention_period: 0s 29 | 30 | schema_config: 31 | configs: 32 | - from: 2020-10-24 33 | store: boltdb-shipper 34 | object_store: filesystem 35 | schema: v11 36 | index: 37 | prefix: index_ 38 | period: 24h 39 | 40 | ruler: 41 | alertmanager_url: http://localhost:9093 42 | -------------------------------------------------------------------------------- /mergerfs-tools/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:20.04 2 | 3 | RUN apt-get update -q \ 4 | && apt-get install -qy \ 5 | git \ 6 | python3 \ 7 | rsync \ 8 | 
locales \ 9 | make \ 10 | && apt-get -y autoremove \ 11 | && apt-get -y clean \ 12 | && rm -rf /var/lib/apt/lists/* \ 13 | && rm -rf /tmp/* 14 | 15 | # ensure python runs with utf-8 encoding 16 | RUN locale-gen en_US.UTF-8 17 | ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en' LC_ALL='en_US.UTF-8' 18 | 19 | # install mergerfs-tools to /usr/local/bin: 20 | RUN mkdir /src \ 21 | && cd /src \ 22 | && git clone https://github.com/joemiller/mergerfs-tools.git \ 23 | && cd mergerfs-tools \ 24 | && make install \ 25 | && rm -rf -- /src 26 | -------------------------------------------------------------------------------- /muximux/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM linuxserver/muximux 2 | -------------------------------------------------------------------------------- /netdata/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM netdata/netdata 2 | 3 | -------------------------------------------------------------------------------- /nginx/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM nginx:mainline-alpine 2 | 3 | COPY nginx.conf /etc/nginx/nginx.conf 4 | COPY files /etc/nginx/sites-enabled/files 5 | -------------------------------------------------------------------------------- /nginx/files: -------------------------------------------------------------------------------- 1 | server { 2 | listen 80; 3 | listen [::]:80; 4 | server_name server; 5 | 6 | location / { 7 | root /files; 8 | autoindex on; 9 | autoindex_localtime on; 10 | index index.html index.htm; 11 | } 12 | } 13 | -------------------------------------------------------------------------------- /nginx/nginx.conf: -------------------------------------------------------------------------------- 1 | user nginx; 2 | worker_processes 1; 3 | 4 | error_log /var/log/nginx/error.log warn; # NOTE: this is a symlink to /dev/stderr in the official docker image 5 | pid /var/run/nginx.pid; 6 | 7 | events { 8 | worker_connections 1024; 9 | } 10 | 11 | http { 12 | include /etc/nginx/mime.types; 13 | default_type application/octet-stream; 14 | 15 | access_log /var/log/nginx/access.log; # NOTE: this is a symlink to /dev/stdout in the official docker image 16 | 17 | sendfile on; 18 | tcp_nopush on; 19 | tcp_nodelay on; 20 | 21 | keepalive_requests 100; 22 | keepalive_timeout 65; 23 | 24 | gzip on; 25 | gzip_http_version 1.0; 26 | gzip_comp_level 2; 27 | gzip_proxied any; 28 | gzip_vary off; 29 | gzip_types text/plain text/css application/x-javascript text/xml application/xml application/rss+xml application/atom+xml text/javascript application/javascript application/json text/mathml; 30 | gzip_min_length 1000; 31 | gzip_disable "MSIE [1-6]\."; 32 | 33 | variables_hash_max_size 1024; 34 | variables_hash_bucket_size 64; 35 | server_names_hash_bucket_size 64; 36 | types_hash_max_size 2048; 37 | types_hash_bucket_size 64; 38 | 39 | include /etc/nginx/sites-enabled/*; 40 | } 41 | -------------------------------------------------------------------------------- /organizr/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM lsiocommunity/organizr 2 | -------------------------------------------------------------------------------- /promtail/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM grafana/promtail:2.4.1 2 | 3 | COPY promtail.yaml /etc/promtail/promtail.yaml 4 | 
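A quick way to exercise the syslog -> syslog-ng-converter -> promtail -> loki path once the stack is up (a sketch; assumes util-linux `logger` and `curl` on the host, port 1514 published by the syslog-ng-converter service, port 3100 published by loki, and the `job="syslog"` label set in promtail.yaml below):

    # send an RFC5424 test message to the syslog-ng-converter listener
    logger --rfc5424 --tcp --server 127.0.0.1 --port 1514 "hello from the docker host"
    # then ask Loki for recent syslog-labelled entries
    curl -sG http://localhost:3100/loki/api/v1/query_range \
      --data-urlencode 'query={job="syslog"}' --data-urlencode 'limit=10'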
-------------------------------------------------------------------------------- /promtail/promtail.yaml: -------------------------------------------------------------------------------- 1 | server: 2 | http_listen_port: 9080 3 | grpc_listen_port: 0 4 | 5 | positions: 6 | filename: /data/promtail-positions.yaml 7 | 8 | clients: 9 | #- url: http://loki:3100/loki/api/v1/push 10 | - url: http://loki:3100/loki/api/v1/push 11 | 12 | # scrape_configs: 13 | # - job_name: system 14 | # static_configs: 15 | # - targets: 16 | # - localhost 17 | # labels: 18 | # job: varlogs 19 | # __path__: /var/log/*log 20 | 21 | scrape_configs: 22 | - job_name: journal 23 | journal: 24 | json: true 25 | max_age: 12h 26 | path: /var/log/journal 27 | labels: 28 | job: systemd-journal 29 | relabel_configs: 30 | - source_labels: ['__journal__systemd_unit'] 31 | target_label: 'unit' 32 | - source_labels: ['__journal_container_name'] 33 | target_label: 'container' 34 | - source_labels: ['__journal_syslog_identifier'] 35 | target_label: 'syslog_identifier' 36 | - source_labels: ['__journal__hostname'] 37 | target_label: 'hostname' 38 | - source_labels: ['__journal__comm'] 39 | target_label: 'comm' 40 | - source_labels: ['__journal__uid'] 41 | target_label: 'uid' 42 | 43 | - job_name: syslog 44 | syslog: 45 | listen_address: 0.0.0.0:1514 46 | idle_timeout: 60s 47 | label_structured_data: yes 48 | labels: 49 | job: "syslog" 50 | relabel_configs: 51 | # - source_labels: ['__syslog_message_hostname'] 52 | # target_label: 'hostname' 53 | - action: labelmap 54 | regex: __syslog_message_(.+) 55 | -------------------------------------------------------------------------------- /qbittorrent/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM lscr.io/linuxserver/qbittorrent 2 | -------------------------------------------------------------------------------- /snapraid/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:20.04 2 | 3 | # 2021/09/01 (joe): switched PPA to syprat/snapraid which has latest v11.5. The official PPA listed on snapraid.it is tikhonov/snapraid but it hasn't been updated in years. 
4 | 5 | RUN apt-get update -q \ 6 | && apt-get install -qy software-properties-common \ 7 | && add-apt-repository -y ppa:syprat/snapraid \ 8 | && apt-get update -q \ 9 | && apt-get install -qy \ 10 | smartmontools \ 11 | snapraid \ 12 | && apt-get -y autoremove \ 13 | && apt-get -y clean \ 14 | && rm -rf /var/lib/apt/lists/* \ 15 | && rm -rf /tmp/* 16 | 17 | ENTRYPOINT ["/usr/bin/snapraid"] 18 | -------------------------------------------------------------------------------- /syslog-ng-converter/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM balabit/syslog-ng:3.35.1 2 | 3 | COPY syslog-ng.conf /etc/syslog-ng/syslog-ng.conf 4 | -------------------------------------------------------------------------------- /syslog-ng-converter/syslog-ng.conf: -------------------------------------------------------------------------------- 1 | @version: 3.35 2 | 3 | options { 4 | keep_hostname(yes); 5 | }; 6 | 7 | source udp { 8 | syslog(ip(0.0.0.0) port(1514) transport("udp")); 9 | }; 10 | 11 | source tcp { 12 | syslog(ip(0.0.0.0) port(1514) transport("tcp")); 13 | }; 14 | 15 | #rewrite r_host { 16 | # set("gw", value("HOST")); 17 | #}; 18 | 19 | destination forward { 20 | syslog("promtail" transport("tcp") port(1514)); 21 | }; 22 | 23 | log { 24 | source(udp); 25 | source(tcp); 26 | # rewrite(r_host); 27 | destination(forward); 28 | }; 29 | -------------------------------------------------------------------------------- /telegraf-snmp-unifi/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM prdonut2k7/telegraf-snmp-unifi 2 | 3 | ADD telegraf.conf /etc/telegraf/telegraf.conf 4 | -------------------------------------------------------------------------------- /telegraf-snmp-unifi/telegraf.conf: -------------------------------------------------------------------------------- 1 | # JUST A SAMPLE. Set agents and snmp community below for your site. 2 | 3 | ## 4 | ## SNMP Input Plugin 5 | ## 6 | ## UniFi APs (Gen 2/Gen 3) 7 | ## Last update: 11 April 2017 8 | ## 9 | [[inputs.snmp]] 10 | # List of agents to poll 11 | agents = [ "uap1", "uap2" ] 12 | # Polling interval 13 | interval = "60s" 14 | # Timeout for each SNMP query. 15 | timeout = "10s" 16 | # Number of retries to attempt within timeout. 17 | retries = 3 18 | # SNMP version, UAP only supports v1 19 | version = 1 20 | # SNMP community string. 
21 | community = "public" 22 | # The GETBULK max-repetitions parameter 23 | max_repetitions = 10 24 | # Measurement name 25 | name = "snmp.UAP" 26 | ## 27 | ## System Details 28 | ## 29 | # System name (hostname) 30 | [[inputs.snmp.field]] 31 | is_tag = true 32 | name = "sysName" 33 | oid = "RFC1213-MIB::sysName.0" 34 | # System vendor OID 35 | [[inputs.snmp.field]] 36 | name = "sysObjectID" 37 | oid = "RFC1213-MIB::sysObjectID.0" 38 | # System description 39 | [[inputs.snmp.field]] 40 | name = "sysDescr" 41 | oid = "RFC1213-MIB::sysDescr.0" 42 | # System contact 43 | [[inputs.snmp.field]] 44 | name = "sysContact" 45 | oid = "RFC1213-MIB::sysContact.0" 46 | # System location 47 | [[inputs.snmp.field]] 48 | name = "sysLocation" 49 | oid = "RFC1213-MIB::sysLocation.0" 50 | # System uptime 51 | [[inputs.snmp.field]] 52 | name = "sysUpTime" 53 | oid = "RFC1213-MIB::sysUpTime.0" 54 | # UAP model 55 | [[inputs.snmp.field]] 56 | name = "unifiApSystemModel" 57 | oid = "UBNT-UniFi-MIB::unifiApSystemModel" 58 | # UAP firmware version 59 | [[inputs.snmp.field]] 60 | name = "unifiApSystemVersion" 61 | oid = "UBNT-UniFi-MIB::unifiApSystemVersion" 62 | ## 63 | ## Host Resources 64 | ## 65 | # Total memory 66 | [[inputs.snmp.field]] 67 | name = "memTotal" 68 | oid = "FROGFOOT-RESOURCES-MIB::memTotal.0" 69 | # Free memory 70 | [[inputs.snmp.field]] 71 | name = "memFree" 72 | oid = "FROGFOOT-RESOURCES-MIB::memFree.0" 73 | # Buffer memory 74 | [[inputs.snmp.field]] 75 | name = "memBuffer" 76 | oid = "FROGFOOT-RESOURCES-MIB::memBuffer.0" 77 | # Cache memory 78 | [[inputs.snmp.field]] 79 | name = "memCache" 80 | oid = "FROGFOOT-RESOURCES-MIB::memCache.0" 81 | # Per-interface traffic, errors, drops 82 | [[inputs.snmp.table]] 83 | oid = "IF-MIB::ifTable" 84 | [[inputs.snmp.table.field]] 85 | is_tag = true 86 | oid = "IF-MIB::ifDescr" 87 | ## 88 | ## Interface Details & Metrics 89 | ## 90 | # Wireless interfaces 91 | [[inputs.snmp.table]] 92 | oid = "UBNT-UniFi-MIB::unifiRadioTable" 93 | [[inputs.snmp.table.field]] 94 | is_tag = true 95 | oid = "UBNT-UniFi-MIB::unifiRadioName" 96 | [[inputs.snmp.table.field]] 97 | is_tag = true 98 | oid = "UBNT-UniFi-MIB::unifiRadioRadio" 99 | # BSS instances 100 | [[inputs.snmp.table]] 101 | oid = "UBNT-UniFi-MIB::unifiVapTable" 102 | [[inputs.snmp.table.field]] 103 | is_tag = true 104 | oid = "UBNT-UniFi-MIB::unifiVapName" 105 | [[inputs.snmp.table.field]] 106 | is_tag = true 107 | oid = "UBNT-UniFi-MIB::unifiVapRadio" 108 | # Ethernet interfaces 109 | [[inputs.snmp.table]] 110 | oid = "UBNT-UniFi-MIB::unifiIfTable" 111 | [[inputs.snmp.table.field]] 112 | is_tag = true 113 | oid = "UBNT-UniFi-MIB::unifiIfName" 114 | ## 115 | ## System Performance 116 | ## 117 | # System load averages 118 | [[inputs.snmp.table]] 119 | oid = "FROGFOOT-RESOURCES-MIB::loadTable" 120 | [[inputs.snmp.table.field]] 121 | is_tag = true 122 | oid = "FROGFOOT-RESOURCES-MIB::loadDescr" 123 | ## 124 | ## SNMP metrics 125 | ## 126 | # Number of SNMP messages received 127 | [[inputs.snmp.field]] 128 | name = "snmpInPkts" 129 | oid = "SNMPv2-MIB::snmpInPkts.0" 130 | # Number of SNMP Get-Request received 131 | [[inputs.snmp.field]] 132 | name = "snmpInGetRequests" 133 | oid = "SNMPv2-MIB::snmpInGetRequests.0" 134 | # Number of SNMP Get-Next received 135 | [[inputs.snmp.field]] 136 | name = "snmpInGetNexts" 137 | oid = "SNMPv2-MIB::snmpInGetNexts.0" 138 | # Number of SNMP objects requested 139 | [[inputs.snmp.field]] 140 | name = "snmpInTotalReqVars" 141 | oid = "SNMPv2-MIB::snmpInTotalReqVars.0" 142 | # 
Number of SNMP Get-Response received 143 | [[inputs.snmp.field]] 144 | name = "snmpInGetResponses" 145 | oid = "SNMPv2-MIB::snmpInGetResponses.0" 146 | # Number of SNMP messages sent 147 | [[inputs.snmp.field]] 148 | name = "snmpOutPkts" 149 | oid = "SNMPv2-MIB::snmpOutPkts.0" 150 | # Number of SNMP Get-Request sent 151 | [[inputs.snmp.field]] 152 | name = "snmpOutGetRequests" 153 | oid = "SNMPv2-MIB::snmpOutGetRequests.0" 154 | # Number of SNMP Get-Next sent 155 | [[inputs.snmp.field]] 156 | name = "snmpOutGetNexts" 157 | oid = "SNMPv2-MIB::snmpOutGetNexts.0" 158 | # Number of SNMP Get-Response sent 159 | [[inputs.snmp.field]] 160 | name = "snmpOutGetResponses" 161 | oid = "SNMPv2-MIB::snmpOutGetResponses.0" 162 | ## 163 | ## InfluxDB Output Plugin 164 | ## 165 | [[outputs.influxdb]] 166 | urls = ["http://influxdb:8086"] # required 167 | database = "telegraf" # required 168 | retention_policy = "" 169 | write_consistency = "any" 170 | timeout = "5s" 171 | # username = "username" 172 | # password = "password" 173 | user_agent = "telegraf" 174 | # 175 | -------------------------------------------------------------------------------- /timecapsule/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM willtho/samba-timemachine 2 | 3 | # delete the 'kvm' group from the container because it uses GID 34 which conflicts 4 | # with the backup user/group from my host node. Without this, the setup.sh script inside 5 | # the timemachine container fails and you won't be able to login to the SMB share 6 | RUN delgroup kvm 7 | 8 | # don't advertise ssh and sftp 9 | RUN rm -f -- /etc/avahi/services/ssh.service /etc/avahi/services/sftp-ssh.service 10 | 11 | COPY smb.conf /etc/samba/smb.conf 12 | -------------------------------------------------------------------------------- /timecapsule/smb.conf: -------------------------------------------------------------------------------- 1 | # forked from: https://github.com/willtho89/docker-samba-timemachine/blob/62c2085c75d33201a952b5daa1f391c8c717a359/smb.conf 2 | # updates to include config itmes from: https://wiki.samba.org/index.php/Configure_Samba_to_Work_Better_with_Mac_OS_X 3 | 4 | [global] 5 | # Basic Samba configuration 6 | server role = standalone server 7 | passdb backend = tdbsam 8 | obey pam restrictions = yes 9 | security = user 10 | printcap name = /dev/null 11 | load printers = no 12 | socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=524288 SO_SNDBUF=524288 13 | server string = Samba Server %v 14 | map to guest = bad user 15 | dns proxy = no 16 | wide links = yes 17 | follow symlinks = yes 18 | unix extensions = no 19 | acl allow execute always = yes 20 | log file = /var/log/samba/%m.log 21 | max log size = 1000 22 | 23 | # Special configuration for Apple's Time Machine 24 | fruit:model = MacPro 25 | fruit:advertise_fullsync = true 26 | fruit:aapl = yes 27 | fruit:metadata = stream 28 | fruit:veto_appledouble = no 29 | fruit:posix_rename = yes 30 | fruit:zero_file_id = yes 31 | fruit:wipe_intentionally_left_blank_rfork = yes 32 | fruit:delete_empty_adfiles = yes 33 | 34 | ## Define your shares here 35 | [TimeMachine] 36 | path = /timemachine 37 | valid users = %U 38 | writable = yes 39 | durable handles = yes 40 | kernel oplocks = no 41 | kernel share modes = no 42 | posix locking = no 43 | vfs objects = catia fruit streams_xattr 44 | ea support = yes 45 | browseable = yes 46 | read only = No 47 | inherit acls = yes 48 | fruit:time machine = yes 49 | 
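On the macOS side, the share above can be added either from the Time Machine preferences UI or directly with `tmutil` (a sketch; substitute the macvlan IP Docker allocated to the `timecapsule` container, and the TM_USER/TM_PW values set in docker-compose.yml):

    # confirm the TimeMachine share is visible
    smbclient -L //<timecapsule-ip> -U backup
    # point Time Machine at it (run on the Mac)
    sudo tmutil setdestination -a 'smb://backup:backup@<timecapsule-ip>/TimeMachine'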
-------------------------------------------------------------------------------- /unifi/16-x-frame-options: -------------------------------------------------------------------------------- 1 | #!/usr/bin/with-contenv bash 2 | 3 | sed -i -e '/X-Frame-Options/d' \ 4 | /usr/lib/unifi/webapps/ROOT/WEB-INF/resourcerewrite.xml \ 5 | /usr/lib/unifi/webapps/ROOT/WEB-INF/urlrewrite.xml 6 | -------------------------------------------------------------------------------- /unifi/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM lscr.io/linuxserver/unifi-controller:6.5.55 2 | 3 | COPY 16-x-frame-options /etc/cont-init.d/ 4 | -------------------------------------------------------------------------------- /xteve/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM bl0m1/xtevedocker:latest 2 | --------------------------------------------------------------------------------
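To confirm the `16-x-frame-options` tweak in the unifi image above took effect, check the controller's response headers after the container starts (a sketch; assumes the controller answers on https/8443 at the static macvlan address 172.25.0.65 set in docker-compose.yml):

    curl -skI https://172.25.0.65:8443/manage/account/login | grep -i x-frame-options || echo "X-Frame-Options header not present"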