:8123`
77 | * Create new user and password.
78 | * Set location.
79 | * Either set a static IP address in Settings > System > Network
80 | or set an IP reservation on your DHCP server.
81 |
82 | ### User preferences
83 |
84 | Change the date format and first day of the week, and enable advanced mode.
85 |
86 | ### SSH
87 |
88 | * Install the addon - Advanced SSH & Web Terminal
89 | * In its configuration set a username and copy-paste the full public key from `.ssh/id_rsa.pub`, as sketched below
90 |
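A minimal sketch of that addon configuration, assuming the option names used by the Advanced SSH & Web Terminal addon (verify against the addon's Documentation tab):

```yml
# assumed option names - adjust to the addon's actual schema if it differs
ssh:
  username: hassio
  authorized_keys:
    - "ssh-rsa AAAA...rest-of-your-public-key... user@workstation"
```
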
91 | ### Useful addons
92 |
93 | * VSCode
94 |
95 | ### Reverse proxy
96 |
97 | Caddy is used, details
98 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
99 |
100 | `Caddyfile`
101 | ```bash
102 | home.{$MY_DOMAIN} {
103 | reverse_proxy homeassistant:8123
104 | }
105 | ```
106 |
107 | Add the following to `configuration.yaml`, either over SSH with nano or using the VSCode addon:
108 |
109 | ```yml
110 | http:
111 | use_x_forwarded_for: true
112 | trusted_proxies:
113 | - 10.0.19.4
114 |
115 | homeassistant:
116 | external_url: "https://home.example.com:8123"
117 | ```
118 |
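Changes to `configuration.yaml` only take effect after a restart; one way to do it, assuming the `ha` CLI available in the SSH addon terminal:

```bash
# restart Home Assistant core so it picks up the edited configuration.yaml
ha core restart
```
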
119 | # guides, tips, resources
120 |
121 | * [How To Build a Stable and Robust Zigbee Network](https://smarthomescene.com/guides/how-to-build-a-stable-and-robust-zigbee-network/)
122 | * [Zigbee Smart home - best buy tips](https://vdbrink.github.io/buy/smart_home_best_buy_tips)
123 |
124 | # old mess shit beyond this point
125 |
126 | ---
127 | ---
128 | ---
129 | ---
130 | ---
131 | ---
132 |
133 |
134 | # Files and directory structure
135 |
136 | ```
137 | /home/
138 | └── ~/
139 | └── docker/
140 | └── home_assistant/
141 | ├── home_assistant_config/
142 | ├── .env
143 | └── docker-compose.yml
144 | ```
145 |
146 | * `home_assistant_config/` - configuration
147 | * `.env` - a file containing environment variables for docker compose
148 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers
149 |
150 | You only need to provide the two files.
151 | The directories are created by docker compose on the first run.
152 |
153 | # docker-compose
154 |
155 | `docker-compose.yml`
156 | ```yml
157 | services:
158 |
159 | homeassistant:
160 | image: "ghcr.io/home-assistant/home-assistant:stable"
161 | container_name: homeassistant
162 | hostname: homeassistant
163 | privileged: true
164 | restart: unless-stopped
165 | env_file: .env
166 | volumes:
167 | - ./home_assistant_config:/config
168 | - /etc/localtime:/etc/localtime:ro
169 | ports:
170 | - "8123:8123"
171 |
172 | networks:
173 | default:
174 | name: $DOCKER_MY_NETWORK
175 | external: true
176 | ```
177 |
178 | `.env`
179 | ```bash
180 | # GENERAL
181 | DOCKER_MY_NETWORK=caddy_net
182 | TZ=Europe/Bratislava
183 | ```
184 |
185 | **All containers must be on the same network**.
186 | Which is named in the `.env` file.
187 | If one does not exist yet: `docker network create caddy_net`
188 |
189 | # Reverse proxy
190 |
191 | Caddy is used, details
192 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
193 |
194 | `Caddyfile`
195 | ```bash
196 | home.{$MY_DOMAIN} {
197 | reverse_proxy homeassistant:8123
198 | }
199 | ```
200 |
201 | For security, the following needs to be added to the home assistant config file,
202 | which gets created on the first run in the directory `home_assistant_config`
203 |
204 | `configuration.yaml`
205 |
206 | ```yml
207 | http:
208 | use_x_forwarded_for: true
209 | trusted_proxies:
210 | - 172.16.0.0/12
211 | ip_ban_enabled: true
212 | login_attempts_threshold: 10
213 | ```
214 |
215 |
216 | ---------- end for now -----------
217 |
218 | # First run
219 |
220 |
221 | 
222 |
223 |
224 | # Specifics of my setup
225 |
226 | * no long term use yet
227 | * amd cpu and no gpu, so no experience with hw transcoding
228 | * media files are stored and shared on a truenas scale VM
229 | and mounted directly on the docker host using [systemd mounts](https://forum.manjaro.org/t/root-tip-systemd-mount-unit-samples/1191),
230 | instead of fstab or autofs.
231 |
232 | `/etc/systemd/system/mnt-bigdisk.mount`
233 | ```ini
234 | [Unit]
235 | Description=12TB truenas mount
236 |
237 | [Mount]
238 | What=//10.0.19.19/Dataset-01
239 | Where=/mnt/bigdisk
240 | Type=cifs
241 | Options=ro,username=ja,password=qq,file_mode=0700,dir_mode=0700,uid=1000
242 | DirectoryMode=0700
243 |
244 | [Install]
245 | WantedBy=multi-user.target
246 | ```
247 |
248 | `/etc/systemd/system/mnt-bigdisk.automount`
249 | ```ini
250 | [Unit]
251 | Description=12TB truenas mount
252 |
253 | [Automount]
254 | Where=/mnt/bigdisk
255 |
256 | [Install]
257 | WantedBy=multi-user.target
258 | ```
259 |
260 | to automount on boot - `sudo systemctl enable mnt-bigdisk.automount`
261 |
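To check the units without rebooting, something like this should do, using the unit and mount point names from above:

```bash
# start the automount now, then trigger the actual mount by accessing the path
sudo systemctl start mnt-bigdisk.automount
ls /mnt/bigdisk
systemctl status mnt-bigdisk.mount
```
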
262 | # Troubleshooting
263 |
264 |
265 | 
266 |
267 | *We're unable to connect to the selected server right now. Please ensure it is running and try again.*
268 |
269 | If you encounter this, try opening the url in the browser's private window.
270 | If it works there, clear the cookies in your regular browser.
271 |
272 |
273 | # Update
274 |
275 | Manual image update:
276 |
277 | - `docker-compose pull`
278 | - `docker-compose up -d`
279 | - `docker image prune`
280 |
281 | # Backup and restore
282 |
283 | #### Backup
284 |
285 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup)
286 | that makes a daily snapshot of the entire directory.
287 |
288 | #### Restore
289 |
290 | * down the home assistant container `docker-compose down`
291 | * delete the entire home assistant directory
292 | * from the backup copy back the home assistant directory
293 | * start the container `docker-compose up -d`
294 |
--------------------------------------------------------------------------------
/homer/assets/favicon.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/homer/assets/favicon.png
--------------------------------------------------------------------------------
/homer/assets/logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/homer/assets/logo.png
--------------------------------------------------------------------------------
/homer/assets/tools/bitwarden.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/homer/assets/tools/bitwarden.png
--------------------------------------------------------------------------------
/homer/assets/tools/bookstack.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/homer/assets/tools/bookstack.png
--------------------------------------------------------------------------------
/homer/assets/tools/grafana.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/homer/assets/tools/grafana.png
--------------------------------------------------------------------------------
/homer/assets/tools/nextcloud.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/homer/assets/tools/nextcloud.png
--------------------------------------------------------------------------------
/homer/assets/tools/portainer.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/homer/assets/tools/portainer.png
--------------------------------------------------------------------------------
/homer/readme.md:
--------------------------------------------------------------------------------
1 | # Homer in docker
2 |
3 | ###### guide-by-example
4 |
5 | 
6 |
7 | # Purpose
8 |
9 | Homepage.
10 |
11 | * [Github](https://github.com/bastienwirtz/homer)
12 | * [DockerHub image used](https://hub.docker.com/r/b4bz/homer)
13 |
14 | Homer is a simple static web page, configured using a yaml file.
15 | The docker image uses the darkhttpd simple web server on alpine linux.
16 |
17 | # Files and directory structure
18 |
19 | ```
20 | /home/
21 | └── ~/
22 | └── docker/
23 | └── homer/
24 | ├── assets/
25 | │ └── tools/
26 | ├── .env
27 | ├── docker-compose.yml
28 | └── config.yml
29 | ```
30 |
31 | * `assets/` - a directory containing icons and other directories with icons
32 | * `.env` - a file containing environment variables for docker compose
33 | * `docker-compose.yml` - a docker compose file, telling docker how to run the container
34 | * `config.yml` - homer's configuration file bind mounted in to the container
35 |
36 | All files and folders need to be provided.
37 | The `assets` directory is part of this repo.
38 |
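One way to get the `assets` directory, a sketch assuming you clone the whole repo and copy it into the directory structure shown above:

```bash
# clone the repo and copy homer's bundled assets next to your compose file
git clone https://github.com/DoTheEvo/selfhosted-apps-docker
cp -r selfhosted-apps-docker/homer/assets ~/docker/homer/
```
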
39 | # docker-compose
40 |
41 | `docker-compose.yml`
42 | ```yml
43 | version: "2"
44 | services:
45 |
46 | homer:
47 | image: b4bz/homer:latest
48 | container_name: homer
49 | hostname: homer
50 | restart: unless-stopped
51 | volumes:
52 | - ./config.yml:/www/config.yml:ro
53 | - ./assets/:/www/assets:ro
54 |
55 | networks:
56 | default:
57 | external:
58 | name: $DOCKER_MY_NETWORK
59 | ```
60 |
61 | `.env`
62 | ```bash
63 | # GENERAL
64 | MY_DOMAIN=example.com
65 | DOCKER_MY_NETWORK=caddy_net
66 | TZ=Europe/Bratislava
67 | ```
68 |
69 | # Reverse proxy
70 |
71 | Caddy v2 is used, details
72 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
73 |
74 | `Caddyfile`
75 | ```
76 | {$MY_DOMAIN} {
77 | reverse_proxy homer:8080
78 | }
79 | ```
80 |
81 | # Config
82 |
83 | Homer is configured in this `config.yml` file.
84 | This one is based on the example from
85 | the [github](https://github.com/bastienwirtz/homer).
86 |
87 |
88 | `config.yml`
89 | ```yml
90 | title: "Homepage"
91 | subtitle: "Homer"
92 | logo: "assets/logo.png"
93 | # icon: "fas fa-skull-crossbones"
94 | footer: 'less boring look with a footer'
95 |
96 | # Optional navbar
97 | links:
98 | - name: "Font Awesome Icons Galery"
99 | icon: "fab fa-fort-awesome"
100 | url: "https://fontawesome.com/icons?d=gallery"
101 | - name: "Reddit SelfHosted"
102 | icon: "fab fa-reddit"
103 | url: "https://www.reddit.com/r/selfhosted/"
104 |
105 | # First level array represent a group
106 | # Single service with an empty name if not using groups
107 | services:
108 | - name: "Main"
109 | icon: "fab fa-docker"
110 | items:
111 | - name: "Bookstack"
112 | logo: "/assets/tools/bookstack.png"
113 | subtitle: "Notes and Documentation"
114 | url: "https://book.example.com"
115 | - name: "Bitwarden"
116 | logo: "/assets/tools/bitwarden.png"
117 | subtitle: "Password Manager"
118 | url: "https://passwd.example.com"
119 | - name: "Nextcloud"
120 | logo: "/assets/tools/nextcloud.png"
121 | subtitle: "File Sync & Share"
122 | url: "https://nextcloud.example.com"
123 | - name: "Monitoring"
124 | icon: "fas fa-heartbeat"
125 | items:
126 | - name: "Prometheus + Grafana"
127 | logo: "/assets/tools/grafana.png"
128 | subtitle: "Metric analytics & dashboards"
129 | url: "https://grafana.example.com"
130 | - name: "Portainer"
131 | logo: "/assets/tools/portainer.png"
132 | subtitle: "Docker Manager"
133 | url: "https://portainer.example.com"
134 | ```
135 |
136 | 
137 |
138 | # Update
139 |
140 | [Watchtower](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/watchtower)
141 | updates the image automatically.
142 |
143 | Manual image update:
144 |
145 | - `docker-compose pull`
146 | - `docker-compose up -d`
147 | - `docker image prune`
148 |
149 | # Backup and restore
150 |
151 | #### Backup
152 |
153 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup)
154 | that makes a daily snapshot of the entire directory.
155 |
156 | #### Restore
157 |
158 | * down the homer container `docker-compose down`
159 | * delete the entire homer directory
160 | * from the backup copy back the homer directory
161 | * start the container `docker-compose up -d`
162 |
--------------------------------------------------------------------------------
/kopia_backup/kopia_cli_deploy_win/DEPLOY.cmd:
--------------------------------------------------------------------------------
1 | @echo off
2 |
3 | :: checking if the script is run as administrator
4 | net session >nul 2>&1
5 | if %errorLevel% == 0 (
6 | echo - Success: Administrative permissions confirmed.
7 | ) else (
8 | echo - RUN AS ADMINISTRATOR
9 | pause
10 | exit /B
11 | )
12 |
13 | echo - powershell ExecutionPolicy changing to Bypass
14 | powershell.exe Set-ExecutionPolicy -ExecutionPolicy Bypass
15 |
16 | echo - checking if C:\Kopia folder exists, creating it if not
17 | if not exist "C:\Kopia\" (
18 | mkdir C:\Kopia
19 | )
20 |
21 | if exist "C:\Kopia\kopia_backup_scipt.ps1" (
22 | echo - C:\Kopia\kopia_backup_scipt.ps1 exists, renaming it with random suffix
23 | ren "C:\Kopia\kopia_backup_scipt.ps1" "kopia_backup_scipt_%random%.ps1"
24 | )
25 |
26 | echo - copying files to C:\Kopia
27 | robocopy "%~dp0\" "C:\Kopia" "kopia.exe" /NDL /NJH /NJS
28 | robocopy "%~dp0\" "C:\Kopia" "kopia_backup_scipt.ps1" /NDL /NJH /NJS
29 | echo.
30 |
31 | echo - adding C:\Kopia to PATH
32 | setx /M PATH "%PATH%;C:\Kopia"
33 |
34 | if exist C:\Windows\System32\Tasks\kopia_backup_schedule (
35 | echo - scheduled task with that name already exists, skipping
36 | echo - delete the task in taskschd.msc if you want fresh import
37 | ) else (
38 | echo - importing scheduled task kopia_backup_schedule
39 | schtasks.exe /Create /XML "%~dp0\kopia_backup_schedule.xml" /tn "kopia_backup_schedule"
40 | )
41 |
42 | echo.
43 | echo --------------------------------------------------------------
44 | echo.
45 | echo DEPLOYMENT DONE
46 | echo GO READ WHATS WRITTEN IN C:\Kopia\kopia_backup_scipt.ps1
47 | pause
48 |
--------------------------------------------------------------------------------
/kopia_backup/kopia_cli_deploy_win/kopia.exe:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/kopia_backup/kopia_cli_deploy_win/kopia.exe
--------------------------------------------------------------------------------
/kopia_backup/kopia_cli_deploy_win/kopia_backup_schedule.xml:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/kopia_backup/kopia_cli_deploy_win/kopia_backup_schedule.xml
--------------------------------------------------------------------------------
/kopia_backup/kopia_cli_deploy_win/kopia_backup_scipt.ps1:
--------------------------------------------------------------------------------
1 | # v0.2
2 | # Before using this script, create a repo, note the setting of a password
3 | # kopia repo create filesystem --path C:\kopia_repo --password aaa
4 |
5 | # for backblaze b2
6 | # kopia repository create b2 --bucket=rakanishu --key-id=001496285081a7e0000000003 --key=K0016L8FAMRp/F+6ckbXIYpP0UgTky0 --password aaa
7 |
8 | # useful commands:
9 | # - kopia repo status
10 | # - kopia repo connect filesystem --path C:\kopia_repo -p aaa
11 | # - kopia snap list --all
12 | # - kopia mount all K:
13 |
14 | # mounting might need to be executed as a non-admin user, weird windows thing
15 | # if one does not see the drive:
16 | # - 'net use' shows path that can be pasted to explorer or browser
17 | # \\127.0.0.1@51295\DavWWWRoot
18 |
19 | # logs location is set to C:\Kopia\Kopia_Logs for every command
20 | # if it was not manually set then it would be
21 | # C:\Windows\System32\config\systemprofile\AppData
22 |
23 |
24 | # to backup multiple targets/paths:
25 | # - [array]$BACKUP_THIS = 'C:\Test','C:\users','C:\blabla'
26 |
27 | $REPOSITORY_PATH = 'C:\kopia_repo'
28 | $KOPIA_PASSWORD = 'aaa'
29 | [array]$BACKUP_THIS = 'C:\test'
30 | $LOG_PATH = 'C:\Kopia\Kopia_Logs'
31 | $USE_SHADOW_COPY = $false
32 |
33 | # ----------------------------------------------------------------------------
34 |
35 | $Env:KOPIA_LOG_DIR = $LOG_PATH
36 |
37 | kopia repository connect filesystem --path $REPOSITORY_PATH --password $KOPIA_PASSWORD
38 | # kopia repository connect b2 --bucket=kopia-repo-rakanishu --key-id=001496285081a7e0000000003 --key=K0016L8FAMRp/F+6ckbXIYpP0UgTky0 --password $KOPIA_PASSWORD
39 |
40 | kopia policy set --global --compression=zstd-fastest --keep-annual=0 --keep-monthly=12 --keep-weekly=0 --keep-daily=14 --keep-hourly=0 --keep-latest=3
41 |
42 | if ($USE_SHADOW_COPY) {
43 | kopia policy set --global --enable-volume-shadow-copy=when-available
44 | }
45 |
46 | foreach ($path in $BACKUP_THIS) {
47 | kopia snapshot create $path --file-log-level=info
48 | }
49 |
50 | kopia repository disconnect
51 |
--------------------------------------------------------------------------------
/kopia_backup/kopia_server_deploy_service_win/DEPLOY.cmd:
--------------------------------------------------------------------------------
1 | @echo off
2 |
3 | :: checking if the script is run as administrator
4 | net session >nul 2>&1
5 | if %errorLevel% == 0 (
6 | echo - Success: Administrative permissions confirmed.
7 | ) else (
8 | echo - RUN AS ADMINISTRATOR
9 | pause
10 | exit /B
11 | )
12 |
13 | echo - powershell ExecutionPolicy changing to Bypass
14 | powershell.exe Set-ExecutionPolicy -ExecutionPolicy Bypass
15 |
16 | echo - checking if C:\Kopia folder exists, creating it if not
17 | if not exist "C:\Kopia\" (
18 | mkdir C:\Kopia
19 | )
20 |
21 | if exist "C:\Kopia\kopia_server_start.cmd" (
22 | echo - C:\Kopia\kopia_server_start.cmd exists, renaming it with random suffix
23 |     ren "C:\Kopia\kopia_server_start.cmd" "kopia_server_start_%random%.cmd"
24 | )
25 |
26 | echo - copying files to C:\Kopia
27 | robocopy "%~dp0\" "C:\Kopia" "kopia.exe" /NDL /NJH /NJS
28 | robocopy "%~dp0\" "C:\Kopia" "kopia_server_start.cmd" /NDL /NJH /NJS
29 | robocopy "%~dp0\" "C:\Kopia" "shawl.exe" /NDL /NJH /NJS
30 | echo.
31 |
32 | echo - adding C:\Kopia to PATH
33 | setx /M PATH "%PATH%;C:\Kopia"
34 |
35 | echo - creating Kopia service
36 | C:\Kopia\shawl.exe add --log-dir C:\kopia\Kopia_service_logs --name Kopia -- C:\Kopia\kopia_server_start.cmd
37 |
38 | echo - setting Kopia service to start automatically at boot
39 | sc config Kopia start=auto
40 |
41 | echo - start Kopia service
42 | sc start Kopia
43 |
44 | echo - copying link to Desktop
45 | robocopy "%~dp0\" "%USERPROFILE%\Desktop" "Kopia.url" /NDL /NJH /NJS
46 |
47 | echo.
48 | echo --------------------------------------------------------------
49 | echo.
50 | echo DEPLOYMENT DONE
51 | echo KOPIA SERVER CAN NOW BE FOUND AT WEB PAGE: localhost:51515
52 | echo A LINK SHOULD BE ON YOUR DESKTOP
53 | echo.
54 | pause
55 |
--------------------------------------------------------------------------------
/kopia_backup/kopia_server_deploy_service_win/Kopia.url:
--------------------------------------------------------------------------------
1 | [InternetShortcut]
2 | URL=http://localhost:51515/
3 | IDList=
4 | HotKey=0
5 | [{000214A0-0000-0000-C000-000000000046}]
6 | Prop3=19,2
7 |
--------------------------------------------------------------------------------
/kopia_backup/kopia_server_deploy_service_win/kopia.exe:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/kopia_backup/kopia_server_deploy_service_win/kopia.exe
--------------------------------------------------------------------------------
/kopia_backup/kopia_server_deploy_service_win/kopia_server_start.cmd:
--------------------------------------------------------------------------------
1 | kopia server start --insecure --config-file=C:\Kopia\repository.config --log-dir=C:\Kopia\Kopia_Logs --address=127.0.0.1:51515 --server-username=admin --server-password=aaa --enable-actions
2 |
3 | :: to have full functionality of a kopia server
4 | :: replace the above command with the one below
5 | :: in it the address was changed to 0.0.0.0 to allow connections over the network
6 | :: and tls cert stuff was added, without which the server does not function
7 |
8 | :: kopia server start --tls-generate-cert --tls-cert-file C:\Kopia\tls_kopia.cert --tls-key-file C:\Kopia\tls_kopia.key --config-file=C:\Kopia\repository.config --log-dir=C:\Kopia\Kopia_Logs --address=0.0.0.0:51515 --server-username=admin --server-password=aaa
9 |
10 | :: restart the kopia service and check C:\Kopia if the tls_kopia files are now there
11 | :: now again edit this file to remove "--tls-generate-cert" part from the command
12 | :: restart the service again
13 |
14 | :: log in to the webGUI and create a repo
15 |
16 | :: now there is a need to add users that will be able to back up to that repo
17 | :: no GUI for that for whatever reason
18 | :: open windows cmd / powershell as admin
19 | :: connect to the repo manually
20 | :: it could help to go to the webgui > repository > small shell-icon at the bottom
21 | :: clicking on it will show the command to execute to get the repo status
22 | :: adjust that command to be about adding a user
23 |
24 | :: C:\Kopia\kopia.exe --config-file=C:\Kopia\repository.config server user add myuser@mylaptop
25 |
26 | :: once again restart the kopia service
27 |
28 | :: now finally you can go to the client machine
29 | :: Kopia Repository Server
30 | :: give the ip address and port, use https, something like https://10.0.19.95:51515
31 | :: write random stuff in to "Trusted server certificate fingerprint (SHA256)"
32 | :: kopia on the connect attempt will tell what the real fingerprint is
33 | :: copy it in to the field and try to connect again
34 | :: you can override username/machine name in the advanced section
35 |
--------------------------------------------------------------------------------
/kopia_backup/kopia_server_deploy_service_win/shawl.exe:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/kopia_backup/kopia_server_deploy_service_win/shawl.exe
--------------------------------------------------------------------------------
/lubelog/readme.md:
--------------------------------------------------------------------------------
1 | # LubeLog in docker
2 |
3 | ###### guide-by-example
4 |
5 | 
6 |
7 | # Purpose & Overview
8 |
9 | Vehicle service records and maintenance tracker.
10 |
11 | * [Official site](https://lubelogger.com/)
12 | * [Github](https://github.com/hargata/lubelog)
13 |
14 | LubeLogger is a new open source vehicle info dump place.
15 |
16 | Written in C# (ASP.NET Core) with a LiteDB file-based database.
17 |
18 | # Files and directory structure
19 |
20 | ```
21 | /home/
22 | └── ~/
23 | └── docker/
24 | └── LubeLog/
25 | ├── 🗁 lubelog_config/
26 | ├── 🗁 lubelog_data/
27 | ├── 🗁 lubelog_documents/
28 | ├── 🗁 lubelog_images/
29 | ├── 🗁 lubelog_keys/
30 | ├── 🗁 lubelog_log/
31 | ├── 🗁 lubelog_temp/
32 | ├── 🗁 lubelog_translations/
33 | ├── 🗋 .env
34 | └── 🗋 docker-compose.yml
35 | ```
36 |
37 | * `lubelog directories` - with data
38 | * `.env` - a file containing environment variables for docker compose
39 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers
40 |
41 | Only the two files are required. The directories are created on the first run.
42 |
43 | # docker-compose
44 |
45 | [Dockercompose](https://github.com/hargata/lubelog/blob/main/docker-compose.yml)
46 | from the github page was used as a template.
47 |
48 | `docker-compose.yml`
49 | ```yml
50 | services:
51 | lubelog:
52 | image: ghcr.io/hargata/lubelogger:latest
53 | container_name: lubelog
54 | hostname: lubelog
55 | restart: unless-stopped
56 | env_file: .env
57 | volumes:
58 | - ./lubelog_config:/App/config
59 | - ./lubelog_data:/App/data
60 | - ./lubelog_translations:/App/wwwroot/translations
61 | - ./lubelog_documents:/App/wwwroot/documents
62 | - ./lubelog_images:/App/wwwroot/images
63 | - ./lubelog_temp:/App/wwwroot/temp
64 | - ./lubelog_log:/App/log
65 | - ./lubelog_keys:/root/.aspnet/DataProtection-Keys
66 | ports:
67 | - 8080:8080
68 |
69 | networks:
70 | default:
71 | name: $DOCKER_MY_NETWORK
72 | external: true
73 | ```
74 |
75 | `.env`
76 | ```bash
77 | # GENERAL
78 | DOCKER_MY_NETWORK=caddy_net
79 | TZ=Europe/Bratislava
80 |
81 | #LUBELOG
82 | LC_ALL=en_US.UTF-8
83 | LANG=en_US.UTF-8
84 | MailConfig__EmailServer=smtp-relay.brevo.com
85 | MailConfig__EmailFrom=lubelog@example.com
86 | MailConfig__UseSSL=True
87 | MailConfig__Port=587
88 | MailConfig__Username=
89 | MailConfig__Password=
90 | LOGGING__LOGLEVEL__DEFAULT=Error
91 | ```
92 |
93 | **All containers must be on the same network**.
94 | Which is named in the `.env` file.
95 | If one does not exist yet: `docker network create caddy_net`
96 |
97 | # Reverse proxy
98 |
99 | Caddy v2 is used, details
100 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
101 |
102 | `Caddyfile`
103 | ```php
104 | auto.{$MY_DOMAIN} {
105 |     reverse_proxy lubelog:8080
106 | }
107 | ```
108 |
109 | # First run
110 |
111 | ---
112 |
113 |
114 | # Trouble shooting
115 |
116 |
117 | # Update
118 |
119 | Manual image update:
120 |
121 | - `docker-compose pull`
122 | - `docker-compose up -d`
123 | - `docker image prune`
124 |
125 | It is **strongly recommended** to now add current **tags** to the images in the compose.
126 | Tags will allow you to easily return to a working state if an update goes wrong.
127 |
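For example, the image line in the compose would switch from the rolling tag to a pinned one; the version below is hypothetical, use whatever you are currently running:

```yml
services:
  lubelog:
    # hypothetical pinned tag instead of :latest
    image: ghcr.io/hargata/lubelogger:v1.3.6
```
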
128 |
129 | # Backup and restore
130 |
131 | #### Backup
132 |
133 |
134 | #### Restore
135 |
136 |
137 | # Backup of just user data
138 |
139 |
140 | #### Backup script
141 |
142 |
143 | #### Cronjob - scheduled backup
144 |
145 | # Restore the user data
146 |
147 |
--------------------------------------------------------------------------------
/mealie/readme.md:
--------------------------------------------------------------------------------
1 | # Mealie in docker
2 |
3 | ###### guide-by-example
4 |
5 | 
6 |
7 | # Purpose & Overview
8 |
9 | Recipe cookbook.
10 |
11 | * [The official site](https://hay-kot.github.io/mealie/)
12 | * [Github](https://github.com/hay-kot/mealie)
13 | * [DockerHub](https://hub.docker.com/r/hkotel/mealie)
14 |
15 | Mealie is a simple, open source, self hosted cookbook.
16 | Written in python and javascript, using the Vue framework for the frontend.
17 | It stores recipes in plain json as well as an sqlite database.
18 |
19 | # Files and directory structure
20 |
21 | ```
22 | /home/
23 | └── ~/
24 | └── docker/
25 |         └── mealie/
26 | ├── 🗁 mealie_data/
27 | ├── 🗋 .env
28 | └── 🗋 docker-compose.yml
29 | ```
30 |
31 | * `mealie_data/` - a directory with persistent data and backups
32 | * `.env` - a file containing environment variables for docker compose
33 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers
34 |
35 | Only the files are required. The directory is created on the first run.
36 |
37 | # docker-compose
38 |
39 | The official documentation compose example
40 | [here.](https://hay-kot.github.io/mealie/documentation/getting-started/install/#docker-compose-with-sqlite)
41 |
42 | `docker-compose.yml`
43 | ```yml
44 | services:
45 |
46 | mealie:
47 | image: hkotel/mealie
48 | container_name: mealie
49 | hostname: mealie
50 | restart: unless-stopped
51 | env_file: .env
52 | expose:
53 | - "80"
54 | volumes:
55 | - ./mealie_data/:/app/data
56 |
57 | networks:
58 | default:
59 | name: $DOCKER_MY_NETWORK
60 | external: true
61 | ```
62 |
63 | `.env`
64 | ```bash
65 | # GENERAL
66 | DOCKER_MY_NETWORK=caddy_net
67 | TZ=Europe/Bratislava
68 |
69 | # MEALIE
70 | PUID=1000
71 | PGID=1000
72 | RECIPE_PUBLIC=true
73 | RECIPE_SHOW_NUTRITION=true
74 | RECIPE_SHOW_ASSETS=true
75 | RECIPE_LANDSCAPE_VIEW=true
76 | RECIPE_DISABLE_COMMENTS=false
77 | RECIPE_DISABLE_AMOUNT=false
78 | ```
79 |
80 | **All containers must be on the same network**.
81 | Which is named in the `.env` file.
82 | If one does not exist yet: `docker network create caddy_net`
83 |
84 | # Reverse proxy
85 |
86 | Caddy v2 is used, details
87 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
88 |
89 | `Caddyfile`
90 | ```php
91 | book.{$MY_DOMAIN} {
92 | reverse_proxy mealie:80
93 | }
94 | ```
95 |
96 | # First run
97 |
98 | Default login: `changeme@email.com` // `MyPassword`
99 |
100 | ---
101 |
102 | 
103 |
104 | # New version incoming
105 |
106 | There is a new version in the works, v1.0.0 is already in beta5,
107 | but it seems major changes are introduced and there is not yet a feature to
108 | share recipes with people without a password.
109 |
110 | # Update
111 |
112 | Manual image update:
113 |
114 | - `docker-compose pull`
115 | - `docker-compose up -d`
116 | - `docker image prune`
117 |
118 | It is **strongly recommended** to now add current **tags** to the images in the compose.
119 | Tags will allow you to easily return to a working state if an update goes wrong.
120 |
121 | # Backup and restore
122 |
123 | #### Backup
124 |
125 | Using [kopia](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/kopia_backup)
126 | or [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup)
127 | to make a daily snapshot of the entire docker directory.
128 |
129 | #### Restore
130 |
131 | * down the containers `docker-compose down`
132 | * delete/move/rename the entire project directory
133 | * from the backups copy back the entire project directory
134 | * start the containers `docker-compose up -d`
135 |
--------------------------------------------------------------------------------
/meshcrentral/readme.md:
--------------------------------------------------------------------------------
1 | # Meshcentral in docker
2 |
3 | ###### guide-by-example
4 |
5 | 
6 |
7 | # Purpose & Overview
8 |
9 | Powerful remote desktop toolset.
10 |
11 | * [Official site](https://www.meshcommander.com/meshcentral2)
12 | * [Github](https://github.com/Ylianst/MeshCentral)
13 | * [ghcr.io](https://github.com/ylianst/MeshCentral/pkgs/container/meshcentral)
14 |
15 | Web based, can be a replacement for TeamViewer or Anydesk.
16 | The server is written in javascript, running in node.js runtime.
17 | The client application is written mostly in C, running the Duktape javascript engine.
18 |
19 | For a database the server uses a built-in NeDB, which should be enough for
20 | deployments with less than 100 clients. Or MongoDB can be deployed for better
21 | performance and robustness, at the cost of added complexity.
22 |
23 | The architecture is relatively simple.
24 |
25 | * a server you host is accessible through a web site
26 | * clients can install the Mesh Agent from this site,
27 |   which allows full control of the device from the server's web interface
28 |
29 | There is also the entire aspect of possibly using
30 | Intel AMT - Active Management Technology through port 4433.
31 |
32 | ---
33 |
34 | 
35 |
36 | # Files and directory structure
37 |
38 | ```
39 | /home/
40 | └── ~/
41 | └── docker/
42 | └── meshcentral/
43 | ├── meshcentral/
44 | ├── .env
45 | └── docker-compose.yml
46 | ```
47 |
48 | * `meshcentral/` - persistent data, most notably `config.json` in `data/`
49 | * `.env` - a file containing environment variables for docker compose
50 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers
51 |
52 | You only need to provide the two files.
53 | The directories are created by docker compose on the first run.
54 |
55 | # docker-compose
56 |
57 | The official docker image is hosted [on github.](https://github.com/ylianst/MeshCentral/pkgs/container/meshcentral)
58 | More info [here](https://github.com/Ylianst/MeshCentral/tree/master/docker).
59 | This setup goes the more robust way, with a separate container running mongodb.
60 |
61 | `docker-compose.yml`
62 | ```yml
63 | services:
64 |
65 | meshcentral-db:
66 | image: mongo:latest
67 | container_name: meshcentral-db
68 | hostname: meshcentral-db
69 | restart: unless-stopped
70 | env_file: .env
71 | volumes:
72 | - ./meshcentral/mongodb_data:/data/db
73 |
74 | meshcentral:
75 | image: ghcr.io/ylianst/meshcentral:latest
76 | container_name: meshcentral
77 | hostname: meshcentral
78 | restart: unless-stopped
79 | env_file: .env
80 | depends_on:
81 | - meshcentral-db
82 | volumes:
83 | # config.json and other important files live here. A must for data persistence
84 | - ./meshcentral/data:/opt/meshcentral/meshcentral-data
85 | # where file uploads for users live
86 | - ./meshcentral/user_files:/opt/meshcentral/meshcentral-files
87 | # location for the meshcentral-backups - this should be mounted to an external storage
88 | - ./meshcentral/backup:/opt/meshcentral/meshcentral-backup
89 | # location for site customization files
90 | - ./meshcentral/web:/opt/meshcentral/meshcentral-web
91 |
92 | networks:
93 | default:
94 | name: $DOCKER_MY_NETWORK
95 | external: true
96 | ```
97 |
98 | `.env`
99 | ```bash
100 | # GENERAL
101 | MY_DOMAIN=example.com
102 | DOCKER_MY_NETWORK=caddy_net
103 | TZ=Europe/Bratislava
104 |
105 | # MESHCENTRAL
106 | NODE_ENV=production
107 |
108 | # initial mongodb-variables
109 | MONGO_INITDB_ROOT_USERNAME=mongodbadmin
110 | MONGO_INITDB_ROOT_PASSWORD=mongodbpasswd
111 |
112 | # initial meshcentral-variables
113 | # the following options are only used if no config.json exists in the data-folder
114 |
115 | # your hostname
116 | HOSTNAME=mesh.example.com
117 | USE_MONGODB=true
118 | # set to your reverse proxy IP if you want to put meshcentral behind a reverse proxy
119 | REVERSE_PROXY=example.com
120 | REVERSE_PROXY_TLS_PORT=443
121 | # set to true if you wish to enable iframe support
122 | IFRAME=false
123 | # set to false if you want to disable self-service creation of new accounts besides the first (admin)
124 | ALLOW_NEW_ACCOUNTS=true
125 | # set to true to enable WebRTC - per documentation it is not officially released with meshcentral and currently experimental. Use with caution
126 | WEBRTC=false
127 | # set to true to allow plugins
128 | ALLOWPLUGINS=false
129 | # set to true to allow session recording
130 | LOCALSESSIONRECORDING=false
131 | # set to enable or disable minification of json, reduces traffic
132 | MINIFY=true
133 | ```
134 |
135 | Bit of an issue is that the official project expects to find the database
136 | at the hostname `mongodb`. It's hardcoded in the
137 | [startup.sh](https://github.com/Ylianst/MeshCentral/blob/master/docker/startup.sh)
138 | which on first run generates `config.json`.
139 | This is not ideal, as one will likely run several containers and a
140 | nondescriptive container name or hostname is annoying.
141 |
142 | To deal with this, **run it the first time for a few minutes, then down it, edit the**
143 | `.\meshcentral\data\config.json` and change the mongoDb line to look like this:
144 |
145 | "settings": {
146 | "mongoDb": "mongodb://mongodbadmin:mongodbpasswd@meshcentral-db:27017",
147 | },
148 |
149 | If the meshcentral container shows: *ERROR: Unable to parse /opt/meshcentral/meshcentral-data/config.json*
150 | you need to down it, delete the `meshcentral` directory with the persistent data,
151 | and up it again, this time letting it run longer before downing it and editing the database path.
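The whole dance, as a sketch using the paths from above:

```bash
docker-compose up -d                  # first run, lets meshcentral generate config.json
# wait a few minutes, then
docker-compose down
nano ./meshcentral/data/config.json   # change the mongoDb line as shown above
docker-compose up -d
```
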
152 |
153 | # Reverse proxy
154 |
155 | Caddy v2 is used, details
156 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
157 |
158 | `Caddyfile`
159 | ```
160 | mesh.{$MY_DOMAIN} {
161 | reverse_proxy meshcentral:443 {
162 | transport http {
163 | tls
164 | tls_insecure_skip_verify
165 | }
166 | }
167 | }
168 | ```
169 |
170 | ---
171 |
172 | 
173 |
174 | # The usage on clients
175 |
176 | # Improved safety
177 |
178 |
179 | # Trouble shooting
180 |
181 | # Running without separate database
182 |
183 | `docker-compose.yml`
184 | ```yml
185 | services:
186 |
187 | meshcentral:
188 | image: ghcr.io/ylianst/meshcentral:latest
189 | container_name: meshcentral
190 | hostname: meshcentral
191 | restart: unless-stopped
192 | env_file: .env
193 | volumes:
194 | # config.json and other important files live here. A must for data persistence
195 | - ./meshcentral/data:/opt/meshcentral/meshcentral-data
196 | # where file uploads for users live
197 | - ./meshcentral/user_files:/opt/meshcentral/meshcentral-files
198 | # location for the meshcentral-backups - this should be mounted to an external storage
199 | - ./meshcentral/backup:/opt/meshcentral/meshcentral-backup
200 | # location for site customization files
201 | - ./meshcentral/web:/opt/meshcentral/meshcentral-web
202 |
203 | networks:
204 | default:
205 | name: $DOCKER_MY_NETWORK
206 | external: true
207 | ```
208 |
209 | `.env`
210 | ```bash
211 | # GENERAL
212 | MY_DOMAIN=example.com
213 | DOCKER_MY_NETWORK=caddy_net
214 | TZ=Europe/Bratislava
215 |
216 | # MESHCENTRAL
217 | NODE_ENV=production
218 |
219 | # initial mongodb-variables
220 | MONGO_INITDB_ROOT_USERNAME=mongodbadmin
221 | MONGO_INITDB_ROOT_PASSWORD=mongodbpasswd
222 |
223 | # initial meshcentral-variables
224 | # the following options are only used if no config.json exists in the data-folder
225 |
226 | # your hostname
227 | HOSTNAME=mesh.example.com
228 | USE_MONGODB=false
229 | # set to your reverse proxy IP if you want to put meshcentral behind a reverse proxy
230 | REVERSE_PROXY=example.com
231 | REVERSE_PROXY_TLS_PORT=443
232 | # set to true if you wish to enable iframe support
233 | IFRAME=false
234 | # set to false if you want to disable self-service creation of new accounts besides the first (admin)
235 | ALLOW_NEW_ACCOUNTS=true
236 | # set to true to enable WebRTC - per documentation it is not officially released with meshcentral and currently experimental. Use with caution
237 | WEBRTC=false
238 | # set to true to allow plugins
239 | ALLOWPLUGINS=false
240 | # set to true to allow session recording
241 | LOCALSESSIONRECORDING=false
242 | # set to enable or disable minification of json, reduces traffic
243 | MINIFY=true
244 | ```
245 |
246 |
247 |
248 | # Update
249 |
250 | Manual image update:
251 |
252 | - `docker-compose pull`
253 | - `docker-compose up -d`
254 | - `docker image prune`
255 |
256 | # Backup and restore
257 |
258 | #### Backup
259 |
260 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup)
261 | that makes a daily snapshot of the entire directory.
262 |
263 | #### Restore
264 |
265 | * down the meshcentral containers `docker-compose down`
266 | * delete the entire meshcentral directory
267 | * from the backup copy back the meshcentral directory
268 | * start the containers `docker-compose up -d`
269 |
270 |
--------------------------------------------------------------------------------
/mikrotik/cli_basics.md:
--------------------------------------------------------------------------------
1 | # reset config
2 |
3 | `/system reset-configuration`
4 |
5 | # client mode range extender
6 |
7 |
8 |
9 | # Dumb switch with wifi AP
10 |
11 | ## Objective
12 |
13 | ## Steps
14 |
15 |
--------------------------------------------------------------------------------
/mikrotik/dumb_switch_with_wifi_AP.md:
--------------------------------------------------------------------------------
1 | # Dumb switch with wifi AP
2 |
3 | ## Objective
4 |
5 | * every single port bridged, device acting like a switch
6 | * wifi being bridged to the same network
7 | * automatic IP assigned
8 | * no DHCP, no NAT, nothing else, just a dumb switch with wifi, that's all
9 |
10 | ## Steps
11 |
12 | Clear any config
13 |
14 | * System > Reset Configuration > No Default Configuration
15 |
16 | ---
17 |
18 | **Bridge**
19 |
20 | * Bridge > add bridge1
21 | * Ports > add > All interfaces to bridge1
22 |
23 | *winbox will reconnect*
24 |
25 | ---
26 |
27 | **IP** *(a CLI sketch of the bridge and IP steps follows this subsection)*
28 |
29 | *dynamic*
30 |
31 | * IP > DHCP client > add > interface bridge, rest defaults
32 |
33 | *static*
34 |
35 | * IP > Addresses > add >
36 | * Address - 192.168.88.2/24
37 | * Network - it gets filled out automatically on ok/apply based on subnet mask in address
38 | * interface - bridge
39 |
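For reference, the bridge and IP steps above can also be done from the RouterOS terminal; a minimal sketch, with ether2 and wlan1 standing in for whatever interfaces your device has:

```
# create the bridge and add ports to it, repeat the port add for every ether and wlan interface
/interface bridge add name=bridge1
/interface bridge port add bridge=bridge1 interface=ether2
/interface bridge port add bridge=bridge1 interface=wlan1

# dynamic IP on the bridge
/ip dhcp-client add interface=bridge1 disabled=no

# or a static IP instead
/ip address add address=192.168.88.2/24 interface=bridge1
```
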
40 | ---
41 |
42 | 2ghz
43 |
44 | * Wireless > open 2ghz interface > Wireless tab > click Advanced Mode > in this tab
45 | * change Mode from station to ap bridge
46 | * change Band to 2Ghz- B/G/N
47 |   * Channel Width - stay at 20Mhz if there is a lot of wifi around you, go to 40Mhz if not
48 | * Frequency - pick a channel \[1-2412 ; 6-2437; 11-2462\]
49 | * SSID - pick a name
50 |   * Radio Name - if you want to distinguish between APs with the same SSID
51 | * Skip DFS channels - all
52 | * Security Profile - just make note of the one set there, should be default
53 | * WPS mode - disable
54 | * Country - pick yours
55 |
56 | -------------
57 |
58 | 5ghz
59 |
60 | * Wireless > open 5ghz interface > Wireless tab > click Advanced Mode > in this tab
61 | * change Mode from station to ap bridge
62 | * change Band to 5Ghz- N/AC
63 | * Channel Width 20/40/80Mhz XXXX
64 | * Frequency - auto?
65 | * SSID - pick a name
66 |   * Radio Name - if you want to distinguish between APs with the same SSID
67 | * Skip DFS channels - all
68 | * Security Profile - just make note of the one set there, should be default
69 | * WPS mode - disable
70 | * Country - pick yours
71 |
72 | ---------------
73 |
74 | security
75 |
76 | * Wireless > Security Profiles > default
77 | * change Mode to dynamic keys
78 | * check WPA PSK and WPA2 PSK
79 | * set passwords in WPA/WPA2 pre-shared key inputs
80 |
81 | ------------
82 |
--------------------------------------------------------------------------------
/mikrotik/hairpin_nat_reflection.md:
--------------------------------------------------------------------------------
1 | https://forum.mikrotik.com/viewtopic.php?t=179343
2 |
--------------------------------------------------------------------------------
/mounting_network_share_linux/readme.md:
--------------------------------------------------------------------------------
1 | # Mounting Network Shares in Linux
2 |
3 | Possible ways to mount stuff - fstab, autofs, systemd, or docker volumes if it's for docker.
4 |
5 | possible types of shares
6 |
7 | * smb/samba/cifs - the most common share, supported on all OSes
8 | * nfs - mostly used between linux machines, a bit better performance
9 | * iSCSI - the share is mounted as a block device as if it was really a disk,
10 | great performance for small files
11 |
12 | More on setup of these shares is in
13 | [TrueNAS Scale guide.](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/trueNASscale)
14 |
15 | # smb/samba/cifs
16 |
17 | [Arch wiki](https://wiki.archlinux.org/title/samba#As_systemd_unit)
18 | on samba systemd mount
19 |
20 | * you will create two files in `/etc/systemd/system`
21 | * one will have extension `.mount` the other `.automount`
22 | * the name will be the same for both and it MUST correspond with the planned
23 |   mount path, with slashes `/` replaced by dashes `-`.
24 | So if the share should be at `/mnt/mirror` the files are named
25 | `mnt-mirror.mount` and `mnt-mirror.automount`
26 | * copy-paste the below content, edit as you see fit,
27 | changing description, ip address and path, user and password,..
28 | * linux command `id` will show your current user `uid` and `gid`
29 | * after the changes execute the command `sudo systemctl enable mnt-mirror.automount`.
30 |   This will set up mounting that does not fail on boot if there are network issues,
31 |   and really mounts the target only on request.
32 |
33 | `mnt-mirror.mount`
34 | ```ini
35 | [Unit]
36 | Description=3TB truenas mirror mount
37 |
38 | [Mount]
39 | What=//10.0.19.11/Mirror
40 | Where=/mnt/mirror
41 | Type=cifs
42 | Options=rw,username=kopia,password=aaa,file_mode=0644,dir_mode=0755,uid=1000,gid=1000
43 |
44 | [Install]
45 | WantedBy=multi-user.target
46 | ```
47 |
48 | `mnt-mirror.automount`
49 | ```ini
50 | [Unit]
51 | Description=3TB truenas mirror mount
52 |
53 | [Automount]
54 | Where=/mnt/mirror
55 |
56 | [Install]
57 | WantedBy=multi-user.target
58 | ```
59 |
60 | ### Useful commands
61 |
62 | * `smbclient -L 10.0.19.11` - list the shares available on that ip
63 | * `systemctl list-units -t mount --all` - list mount units
64 |
--------------------------------------------------------------------------------
/network-knowledge-base/asus.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/network-knowledge-base/asus.png
--------------------------------------------------------------------------------
/network-knowledge-base/dhcp.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/network-knowledge-base/dhcp.md
--------------------------------------------------------------------------------
/network-knowledge-base/dns.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/network-knowledge-base/dns.md
--------------------------------------------------------------------------------
/network-knowledge-base/port_forwarding.md:
--------------------------------------------------------------------------------
1 | # Port Forwarding Guide
2 |
3 | # What is a port
4 |
5 | An IP address identifies a specific machine on a network,
6 | **a port** identifies a specific **application** running on that machine.
7 |
8 | A port is **a number** between [1 - 65,535] that is assigned by the OS
9 | to any application that wants to communicate over the network.
10 | It can be chosen at random, or it can be hard set.
11 |
12 | 
13 |
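On linux, a quick way to see which ports are currently in use and by which application:

```bash
# list listening tcp/udp ports and the programs that own them
sudo ss -tulpn
```
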
14 | # How Router's NAT works
15 |
16 | * **Allows outgoing** traffic on any port.
17 | * **Drops incoming** traffic unless it is a response to communication
18 | initialized from the inside.
19 |
20 | This is not really some security effort, the router just literally has no idea
21 | where to send it...
22 |
23 |
24 | More unnecessary details
25 |
26 | NAT is implemented in your router.
27 | It makes your **LAN side** devices able to connect with the
28 | outside world - **WAN side,** through one public IP.
29 | "The internet" *sees* just one device it communicates with,
30 | at least at this network level.
31 |
32 | 
33 |
34 |
35 | #### LAN side initialized communication
36 |
37 | * You visit a website, let's say `youtube.com`
38 | * Your browser has some random port assigned by the OS,
39 | this will be the **source port**.
40 | The local IP address of the machine it runs on will be the **source IP**
41 | * The browser/OS asks DNS servers for the IP address of `youtube.com`,
42 | the answer is `142.250.191.78` - **destination IP**
43 | * Youtube is a website, standard for https is using port `443` - **destination port.**
44 | * All required information is there. Destination[ip & port] Source[ip & port].
45 | * Packets are sent.
46 | * The browser now waits for a response at that random port.
47 | * Since the router is the **default gateway**, that's where the packets arrive.
48 | * The router saves all that info in its state table for a time, could be seconds,
49 | could be days depending on protocol and [state](https://serverfault.com/a/481909).
50 | * Router doing the NAT now replaces the **source IP address** of that one machine,
51 | with its own wan IP address,
52 | it might also change source port but that is not as important,
53 | and sends it out in the direction of the **destination IP**.
54 | * Response comes back, router knows it is a response because it's coming from the
55 | IP that it contacted recently and the **destination port** it uses is the same
56 | number that was used as the source port.
57 | * It checks the state table for the **original source IP and source port**,
58 |   puts them in, now as the destination, and off the packets go.
59 | * The browser receives response on its assigned port, from the IP it contacted.
60 |
61 | #### WAN side initialized communication
62 |
63 | * Want to connect to a jellyfin server to watch some movies from a browser.
64 | * You know the IP address or the url.
65 |   You also expect it to run on the default port jellyfin uses, `8096`
66 | * The browser makes the request.
67 | * The router sees it coming at port `8096`, but where does it send it?
68 |   There is nothing in the state table that would tell it.
69 | * So it drops it, unless there is a port forwarding rule that says
70 | that if something comes to port `8096` send it to this local ip address
71 | and at that port...
72 |
73 | Youtube explanation videos if you want deeper dive:
74 |
75 | * [NAT - Network Address Translation.](https://www.youtube.com/watch?v=RG97rvw1eUo)
76 | * [Public IP vs. Private IP and Port Forwarding](https://www.youtube.com/watch?v=92b-jjBURkw)
77 |
78 | ---
79 | ---
80 |
81 |
82 |
83 | # Double NAT (CGNAT)
84 |
85 |
86 |
87 | **Bad News.**
88 | It is very likely that even when you do everything 100% correctly,
89 | you still won't get your ports open.
90 | The reason being that your machine is behind double NAT.
91 | **Your ISP** - internet service provider, has you behind its own NAT device
92 | and the WAN side of your router is not really "the internet", but the ISP's LAN side.
93 |
94 | A way to try and check is looking up your [public IP online](http://icanhazip.com/),
95 | then logging in to your router and finding somewhere the IP address of your WAN interface.
96 | If they are the same then you are not behind double NAT and port forwarding
97 | will work straight away.
98 | If they differ and some local IP is there, then there is still a chance it will work,
99 | but you won't know till you try.
100 |
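The public IP lookup can also be done from a terminal on any machine on your LAN:

```bash
# prints the public IP your traffic appears to come from
curl icanhazip.com
```
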
101 | But if you are failing to make port forwarding work, it's time to call your ISP
102 | and ask how much a public IP would cost.
103 | It can be a few extra € on your monthly bill.
104 |
105 | # Port forwarding
106 |
107 | Finally. Right?!
108 |
109 | You want to host something, let's say a minecraft server.
110 | You set it all up, you give your public IP address to others and they try to connect,
111 | but your router blocks them. It's a connection initialized from the outside.
112 |
113 | So you need to tell your router/firewall to let through the traffic that comes to
114 | minecraft default port - `25565` and where to send it on your LAN,
115 | to the local IP of your minecraft server.
116 |
117 | 
118 |
119 | #### Examples of port forward rule
120 |
121 | * [Asus](https://i.imgur.com/brs9Mr6.png)
122 | * [TPlink](https://i.imgur.com/FNS2xCj.png)
123 | * [ubiquiti](https://i.imgur.com/D04HVJc.png)
124 |
125 | How to actually create that port forward rule depends on router/firewall model.
126 |
127 | Generally what to expect
128 |
129 | * It would be called port forwarding, or a virtual server, or be under the NAT section.
130 | * **The port** on which to expect traffic is obviously a core piece of information,
131 | sometimes it is called a service port or an external port.
132 | * **IP address** is required, so that the router knows where on the LAN side
133 | to send traffic that comes to that external port.
134 | * The setup might offer option for **internal port**,
135 | this can be often left empty, or the same port number is put there.
136 | It is there to give you option to run stuff on your LAN network on a different
137 | port than the one you open to the world. Like your webserver is `80`,
138 | but you open to the world port `12250` or whatever.
139 | * **The protocol** - TCP or UDP, if dunno **select both / all**
140 | You don't need to fear you are opening too much, if there is no service
141 |   running on that port for that protocol it is the same as a closed port.
142 |
143 | # Testing if port forwarding works
144 |
145 | 
146 |
147 | First you need to understand that unless there is some application running
148 | that answers on that port, all tests will come back as - closed port.
149 |
150 | For testing we can use websites that will test if a port is open at specified public IP.
151 |
152 | * [portchecker.co](https://portchecker.co/)
153 | * [yougetsignal.com](https://www.yougetsignal.com/tools/open-ports/)
154 |
155 | ## Windows
156 |
157 | * [Find the local ip address](https://www.youtube.com/results?search_query=how+to+find+ip+address+windows)
158 | of the machine you are planning to use for the test.
159 | * Follow the instructions in the Port forwarding section of this guide
160 | and forward port `666` to the IP of that machine.
161 | * Download [Port Listener](https://www.rjlsoftware.com/software/utility/portlistener/).
162 | * Run Port Listener, set port `666`, press Start.
163 | * If a windows firewall notification pops up with a question, answer yes.
164 | * Go to [portchecker.co](https://portchecker.co/), set the port to 666 and press Check.
165 |
166 | 
167 |
168 | In windows it is also pretty useful to know that you can go to
169 | `Task Manager` > `Performance` > `Open Resource Monitor` > `Network` Tab
170 |
171 | There, unroll `Listening Ports`,
172 | and you should find `listener.exe` with port `666`, and its firewall status
173 | should be *allowed, not restricted*
174 |
175 | 
176 |
177 | ## Linux
178 |
179 | * find your ip address - `ip r`
180 | * Follow the instructions in the Port forwarding section of this guide
181 | and forward port `666` to the IP of that machine.
182 | * try running netcat - `nc `
183 | * if it is not installed, get it for your distro,
184 | for arch it's `openbsd-netcat`, for debian it's `netcat-openbsd`,
185 | for fedora it's `netcat`
186 | * execute `sudo nc -vv -l -p 666`
187 | * Go to [portchecker.co](https://portchecker.co/), set the port to 666 and press Check.
188 |
189 |
190 | ## UDP port test
191 |
192 | UDP is kinda special cuz it's session-less, so you need to actually communicate
193 | through it to test it.
194 |
195 | * on a linux machine on LAN - `sudo nc -vv -u -l -p 666`
196 | * on a linux machine somewhere out there - `nc -u the_public_ip_goes_here 666`
197 | * write something and it should appear on the other side
198 |
--------------------------------------------------------------------------------
/network-knowledge-base/readme.md:
--------------------------------------------------------------------------------
1 | # network-knowledge-base
2 |
3 |
4 | * mac address
5 | * ip address
6 | * mask, mask notation
7 | * gateway
8 | * port
9 | * packet and frame
10 | * broadcast
11 | * ARP
12 | * NAT
13 | * bridge, bridging
14 | * switch, router, firewall
15 | * Bandwidth
16 | * DNS
17 | * DHCP
18 | * ICMP ping
19 | * TCP / UDP
20 | * HTTP
21 | * SSH
22 | * VPN
23 | * [OSI/ISO model idea](https://www.reddit.com/r/networking/comments/2bazcl/i_dont_honestly_understand_the_osi_model_despite/cj45hih/)
24 |
25 | [digital ocean](https://www.digitalocean.com/community/tutorials/an-introduction-to-networking-terminology-interfaces-and-protocols)
26 | [100 terms](https://www.makeuseof.com/networking-terms-glossary/)
27 |
28 | 
29 |
30 |
31 | # Network hostnames discovery / Zero-configuration networking
32 |
33 | http://jaredrobinson.com/blog/upnp-ssdp-mdns-llmnr-etc-on-the-home-network/
34 |
35 | mDNS, NetBIOS, Samba, UPnP, Avahi, LLMNR, WINS,...
36 |
37 | What a mess
38 |
39 | Seems LLMNR is reliable and widely available everywhere for hostname resolution,
40 | even when wiki says it's being phased out by microsoft.
41 |
42 | Tested with wireshark. Pinging a nonexistent hostname means an LLMNR
43 | broadcast is sent to every device on the network, asking who that hostname is.
44 | Works the same when pinging from archlinux or from win8.1.
45 |
46 | [TCP vs UDP](https://youtu.be/jE_FcgpQ7Co)
47 |
48 | # useful links
49 |
50 | * https://dnsdumpster.com/
51 |   can check which subdomains are registered, ideal would be a wildcard certificate
52 |
53 |
54 | OSI Model
55 |
56 | * https://www.youtube.com/watch?v=2iFFRqzX3yE
57 |
--------------------------------------------------------------------------------
/network-knowledge-base/tplink.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/network-knowledge-base/tplink.png
--------------------------------------------------------------------------------
/nextcloud/nginx.conf:
--------------------------------------------------------------------------------
1 | worker_processes auto;
2 |
3 | error_log /var/log/nginx/error.log warn;
4 | pid /var/run/nginx.pid;
5 |
6 |
7 | events {
8 | worker_connections 1024;
9 | }
10 |
11 |
12 | http {
13 | include /etc/nginx/mime.types;
14 | default_type application/octet-stream;
15 |
16 | log_format main '$remote_addr - $remote_user [$time_local] "$request" '
17 | '$status $body_bytes_sent "$http_referer" '
18 | '"$http_user_agent" "$http_x_forwarded_for"';
19 |
20 | access_log /var/log/nginx/access.log main;
21 |
22 | sendfile on;
23 | #tcp_nopush on;
24 |
25 | # Prevent nginx HTTP Server Detection
26 | server_tokens off;
27 |
28 | keepalive_timeout 65;
29 |
30 | #gzip on;
31 |
32 | upstream php-handler {
33 | server nextcloud-app:9000;
34 | }
35 |
36 | server {
37 | listen 80;
38 |
39 | # HSTS settings
40 | # WARNING: Only add the preload option once you read about
41 | # the consequences in https://hstspreload.org/. This option
42 | # will add the domain to a hardcoded list that is shipped
43 | # in all major browsers and getting removed from this list
44 | # could take several months.
45 | #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always;
46 |
47 | # set max upload size
48 | client_max_body_size 512M;
49 | fastcgi_buffers 64 4K;
50 |
51 | # Enable gzip but do not remove ETag headers
52 | gzip on;
53 | gzip_vary on;
54 | gzip_comp_level 4;
55 | gzip_min_length 256;
56 | gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
57 | gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
58 |
59 | # Pagespeed is not supported by Nextcloud, so if your server is built
60 | # with the `ngx_pagespeed` module, uncomment this line to disable it.
61 | #pagespeed off;
62 |
63 | # HTTP response headers borrowed from Nextcloud `.htaccess`
64 | add_header Referrer-Policy "no-referrer" always;
65 | add_header X-Content-Type-Options "nosniff" always;
66 | add_header X-Download-Options "noopen" always;
67 | add_header X-Frame-Options "SAMEORIGIN" always;
68 | add_header X-Permitted-Cross-Domain-Policies "none" always;
69 | add_header X-Robots-Tag "none" always;
70 | add_header X-XSS-Protection "1; mode=block" always;
71 |
72 | # Remove X-Powered-By, which is an information leak
73 | fastcgi_hide_header X-Powered-By;
74 |
75 | # Path to the root of your installation
76 | root /var/www/html;
77 |
78 | # Specify how to handle directories -- specifying `/index.php$request_uri`
79 | # here as the fallback means that Nginx always exhibits the desired behaviour
80 | # when a client requests a path that corresponds to a directory that exists
81 | # on the server. In particular, if that directory contains an index.php file,
82 | # that file is correctly served; if it doesn't, then the request is passed to
83 | # the front-end controller. This consistent behaviour means that we don't need
84 | # to specify custom rules for certain paths (e.g. images and other assets,
85 | # `/updater`, `/ocm-provider`, `/ocs-provider`), and thus
86 | # `try_files $uri $uri/ /index.php$request_uri`
87 | # always provides the desired behaviour.
88 | index index.php index.html /index.php$request_uri;
89 |
90 | # Rule borrowed from `.htaccess` to handle Microsoft DAV clients
91 | location = / {
92 | if ( $http_user_agent ~ ^DavClnt ) {
93 | return 302 /remote.php/webdav/$is_args$args;
94 | }
95 | }
96 |
97 | location = /robots.txt {
98 | allow all;
99 | log_not_found off;
100 | access_log off;
101 | }
102 |
103 | # Make a regex exception for `/.well-known` so that clients can still
104 | # access it despite the existence of the regex rule
105 | # `location ~ /(\.|autotest|...)` which would otherwise handle requests
106 | # for `/.well-known`.
107 | location ^~ /.well-known {
108 | # The rules in this block are an adaptation of the rules
109 | # in `.htaccess` that concern `/.well-known`.
110 |
111 | location = /.well-known/carddav { return 301 /remote.php/dav/; }
112 | location = /.well-known/caldav { return 301 /remote.php/dav/; }
113 |
114 | location /.well-known/acme-challenge { try_files $uri $uri/ =404; }
115 | location /.well-known/pki-validation { try_files $uri $uri/ =404; }
116 |
117 | # Let Nextcloud's API for `/.well-known` URIs handle all other
118 | # requests by passing them to the front-end controller.
119 | return 301 /index.php$request_uri;
120 | }
121 |
122 | # Rules borrowed from `.htaccess` to hide certain paths from clients
123 | location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/) { return 404; }
124 | location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) { return 404; }
125 |
126 | # Ensure this block, which passes PHP files to the PHP process, is above the blocks
127 | # which handle static assets (as seen below). If this block is not declared first,
128 | # then Nginx will encounter an infinite rewriting loop when it prepends `/index.php`
129 | # to the URI, resulting in a HTTP 500 error response.
130 | location ~ \.php(?:$|/) {
131 | # Required for legacy support
132 | rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+|.+\/richdocumentscode\/proxy) /index.php$request_uri;
133 |
134 | fastcgi_split_path_info ^(.+?\.php)(/.*)$;
135 | set $path_info $fastcgi_path_info;
136 |
137 | try_files $fastcgi_script_name =404;
138 |
139 | include fastcgi_params;
140 | fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
141 | fastcgi_param PATH_INFO $path_info;
142 | #fastcgi_param HTTPS on;
143 |
144 | fastcgi_param modHeadersAvailable true; # Avoid sending the security headers twice
145 | fastcgi_param front_controller_active true; # Enable pretty urls
146 | fastcgi_pass php-handler;
147 |
148 | fastcgi_intercept_errors on;
149 | fastcgi_request_buffering off;
150 | }
151 |
152 | location ~ \.(?:css|js|svg|gif)$ {
153 | try_files $uri /index.php$request_uri;
154 | expires 6M; # Cache-Control policy borrowed from `.htaccess`
155 | access_log off; # Optional: Don't log access to assets
156 | }
157 |
158 | location ~ \.woff2?$ {
159 | try_files $uri /index.php$request_uri;
160 | expires 7d; # Cache-Control policy borrowed from `.htaccess`
161 | access_log off; # Optional: Don't log access to assets
162 | }
163 |
164 | # Rule borrowed from `.htaccess`
165 | location /remote {
166 | return 301 /remote.php$request_uri;
167 | }
168 |
169 | location / {
170 | try_files $uri $uri/ /index.php$request_uri;
171 | }
172 | }
173 | }
174 |
--------------------------------------------------------------------------------
/phpipam/readme.md:
--------------------------------------------------------------------------------
1 | # phpIPAM in docker
2 |
3 | ###### guide-by-example
4 |
5 | 
6 |
7 | # WORK IN PROGRESS
8 | # WORK IN PROGRESS
9 | # WORK IN PROGRESS
10 |
11 | # Purpose
12 |
13 | IP address management, LAN information and documentation tool.
14 |
15 |
16 | * [Official site](https://phpipam.net/)
17 | * [Github](https://github.com/phpipam/phpipam)
18 | * [DockerHub image used](https://hub.docker.com/r/phpipam/phpipam-www)
19 |
20 | IPAM is a universal term that stands for
21 | [IP Address Management](https://en.wikipedia.org/wiki/IP_address_management).
22 | phpIPAM is an open source tool fulfilling this purpose.
23 | Coded in php, using mariadb for the database.
24 |
25 | My exposure to it is limited; at this moment
26 | it is used just for tracking of used IPs.
27 |
28 | Can be used to keep inventory of IT hardware, or organization of server racks.
29 |
30 | # Files and directory structure
31 |
32 | ```
33 | /home/
34 | └── ~/
35 | └── docker/
36 | └── phpipam/
37 | ├── phpipam-mariadb-data/
38 | ├── .env
39 | └── docker-compose.yml
40 | ```
41 |
42 | * `phpipam-mariadb-data/` - a directory where phpipam will store its database data
43 | * `.env` - a file containing environment variables for docker compose
44 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers
45 |
46 | You only need to provide the files.
47 | The directory is created by docker compose on the first run.
48 |
49 | # docker-compose
50 |
51 | `docker-compose.yml`
52 | ```yml
53 | version: '3'
54 | services:
55 |
56 | phpipam-web:
57 | image: phpipam/phpipam-www:latest
58 | container_name: phpipam-web
59 | hostname: phpipam-web
60 | # ports:
61 | # - "80:80"
62 | restart: unless-stopped
63 | env_file: .env
64 | depends_on:
65 | - phpipam-mariadb
66 |
67 | phpipam-mariadb:
68 | image: mariadb:latest
69 | container_name: phpipam-mariadb
70 | hostname: phpipam-mariadb
71 | restart: unless-stopped
72 | env_file: .env
73 | volumes:
74 | - ./phpipam-mariadb-data:/var/lib/mysql
75 |
76 | networks:
77 | default:
78 | external:
79 | name: $DOCKER_MY_NETWORK
80 | ```
81 |
82 | `.env`
83 | ```bash
84 | # GENERAL
85 | DOCKER_MY_NETWORK=caddy_net
86 | TZ=Europe/Bratislava
87 |
88 | IPAM_DATABASE_HOST=phpipam-mariadb
89 | IPAM_DATABASE_PASS=my_secret_phpipam_pass
90 | IPAM_DATABASE_WEBHOST=%
91 | MYSQL_ROOT_PASSWORD=my_secret_mysql_root_pass
92 | ```
93 |
94 | # Scheduling and cron issues
95 |
96 | The default docker-compose deployment uses a cron container.
97 | Problem is it does not work, so Ofelia is used instead.
98 | [Here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/ofelia)
99 | is a guide on how to set it up.
100 |
101 | Below is Ofelia's config file for the discovery and ping check of live hosts.
102 |
103 | `config.ini`
104 | ```ini
105 | [job-exec "phpipam ping"]
106 | schedule = @every 10m
107 | container = phpipam-web
108 | command = /usr/bin/php /phpipam/functions/scripts/pingCheck.php
109 |
110 | [job-exec "phpipam discovery"]
111 | schedule = @every 25m
112 | container = phpipam-web
113 | command = /usr/bin/php /phpipam/functions/scripts/discoveryCheck.php
114 | ```
115 |
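To verify the scripts themselves work before trusting the schedule, they can also be run by hand inside the web container; a quick sketch using the same container name and paths as in the config above:

```bash
# run the ping check of known hosts once, manually
docker exec phpipam-web /usr/bin/php /phpipam/functions/scripts/pingCheck.php

# run the subnet discovery once, manually
docker exec phpipam-web /usr/bin/php /phpipam/functions/scripts/discoveryCheck.php
```
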
116 | # Reverse proxy
117 |
118 | Caddy v2 is used, details
119 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
120 |
121 | `Caddyfile`
122 | ```
123 | ipam.{$MY_DOMAIN} {
124 | reverse_proxy phpipam-web:80
125 | }
126 | ```
127 |
128 | # First run
129 |
130 | 
131 |
132 |
133 | * New phpipam installation
134 | * Automatic database installation
135 | * MySQL username: root
136 | * MySQL password: my_secret_mysql_root_pass
137 |
138 | # Update
139 |
140 | [Watchtower](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/watchtower)
141 | updates the image automatically.
142 |
143 | Manual image update:
144 |
145 | - `docker-compose pull`
146 | - `docker-compose up -d`
147 | - `docker image prune`
148 |
149 | # Backup and restore
150 |
151 | #### Backup
152 |
153 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup)
154 | that makes daily snapshot of the entire directory.
155 |
156 | #### Restore
157 |
158 | * down the phpipam containers `docker-compose down`
159 | * delete the entire phpipam directory
160 | * from the backup copy back the phpipam directory
161 | * start the containers `docker-compose up -d`
162 |
--------------------------------------------------------------------------------
/port_forwarding_guide/readme.md:
--------------------------------------------------------------------------------
1 | # Port Forwarding
2 |
3 | https://www.reddit.com/r/selfhosted/comments/17tlvs7/i_suppose_im_too_stupid_for_port_forwarding/
4 |
5 | ###### guide-by-example
6 |
7 | You want to selfhost stuff.
8 | You know little and want to start somewhere, FAST!
9 |
10 | # Requirements
11 |
12 | * A **spare PC** that will be the server.
13 |   Can be a **virtual machine**... virtualbox, hyperv.
14 | * **Google**.
15 |   If the guide says do X, and the steps seem insufficient,
16 | you google that shit and add the word **youtube**.
17 |
18 | # Install a linux on the server
19 |
20 | 
21 |
22 | [Some video.](https://www.youtube.com/watch?v=SyBuNZxzy_Y)
23 |
24 | * **download linux iso**. For noobs I picked [EndeavourOS \(2GB\)](https://github.com/endeavouros-team/ISO/releases/download/1-EndeavourOS-ISO-releases-archive/EndeavourOS_Cassini_Nova-03-2023_R1.iso)
25 |   * why that linux and not xxx? Under the hood it's Arch Linux.
26 | * **make bootable usb** from the iso, recommend use [ventoy](https://www.ventoy.net/en/doc_start.html)
27 | * download; run; select usb; click install; exit; copy iso on to it
28 | * **boot from the usb**, maybe on newer machines need to disable secure boot in bios
29 | * **click through the installation**
30 | * pick online installer when offered
31 | * during install, there can be step called `Desktop` - pick `No Desktop`
32 | or whatever, does not really matter
33 | * when picking disk layout choose wipe everything
34 | * username lets say you pick `noob`
35 | * done
36 |
37 | # Basic setup of the linux server
38 |
39 | 
40 |
41 |
42 | **SSH** - a tiny application that allows you to execute commands
43 | from your comfy windows PC on the damn server
44 |
45 | * log in to the server and be in terminal
46 | * ssh is installed by default, but disabled
47 | * to check status - `systemctl status sshd`
48 | * to **enable it** `sudo systemctl enable --now sshd`
49 | * `ip a` or `ip r` - show [somewhere in there](https://www.cyberciti.biz/faq/linux-ip-command-examples-usage-syntax/#3)
50 | what IP address the server got assigned
51 | lets say you got `192.168.1.8`,
52 | nope I am not explaining IP addresses
53 | * done
54 |
55 | *arrow up key in terminal will cycle through old commands in history*
56 |
57 | # Remote connect to the server
58 |
59 | 
60 |
61 | * **install** [mobaXterm](https://mobaxterm.mobatek.net/) on your windows machine
62 | * use it to **connect** to the server using its ip address and username
63 | * [have a pic](https://i.imgur.com/lhRGt1p.png)
64 | * done
65 |
66 | # Install docker
67 |
68 | 
69 |
70 | **Docker** - a thing that makes hosting super easy, people prepared *recipes*,
71 | you copy paste them, maybe edit a bit, run them
72 |
73 | * **install docker and compose** - `sudo pacman -S docker docker-compose`
74 | * **enable docker service** - `sudo systemctl enable --now docker`
75 | * add your user to the docker group so you don't need to sudo all the time
76 | `sudo gpasswd -a noob docker`
77 | * log out, log back in
78 | * done
79 |
80 | # Using docker
81 |
82 | Well, it's time to learn how to create and edit files and copy paste shit
83 | into them, IN LINUX!
84 | Honestly could be annoying as fuck at first, but mobaXterm should make it easier
85 | with the right mouse click paste.
86 | Nano editor is relatively simple and everywhere so that will be used.
87 |
88 | * be in your home directory, the command `cd` will always get you there
89 | * create directory `mkdir docker`
90 | * go in to it `cd docker`
91 | * create directory `mkdir nginx`
92 | * go in to it `cd nginx`
93 | * Oh look at you being all hacker in terminal, following simple directions
94 | * create empty docker-compose.yml file `nano docker-compose.yml`
95 | * paste in to it this *recipe*, spacing matters
96 | ```
97 | services:
98 |
99 | nginx:
100 | image: nginx:latest
101 | container_name: nginx
102 | hostname: nginx
103 | ports:
104 | - "80:80"
105 | ```
106 | * save using `ctrl+s`; exit `ctrl+x`
107 | * run command `sudo docker compose up -d`
108 |   it will say the container started
109 | * on your windows machine go to your browser
110 |   in the address bar put the ip of your server `192.168.1.8`, bam (or check from the server itself, see the sketch below)
111 |
112 | 
113 |
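If you want that check straight from the server, curl works too; a quick sketch:

```bash
# ask the webserver running on this very machine for its default page
curl http://localhost
# should spit out html containing "Welcome to nginx!"
```
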
114 | # understanding what you just did
115 |
116 | * on the linux server a docker container is running, it's a webserver and it is
117 | accessible for others.
118 | Most of selfhosted stuff is just webserver with some database.
119 | * if this part is done that means that shit like hosting own netflix(jellyfin),
120 | or google drive/calendar/photos(nextcloud), or own password manager(vaultwarden)
121 | or own minecraft server(minecraft server) is just one `docker-compose.yml` away.
122 | * you could almost abandon the terminal at this point, just start googling portainer
123 |   and you can be doing this shit through a webpage. I don't use it, but I hear
124 |   it got good.
125 |
126 | # understanding what you did not get done
127 |
128 | * this shit is on your own local network, not accessible from the outside.
129 |   Can't call grandma and tell her to write `192.168.1.8` into her browser
130 |   to see your awesome nginx welcome running.
131 |   She tells you what a dumb fuck you are, you do not have a public IP and ports
132 |   forwarded.
133 |   To get that working is a bit challenging, probably deserves its own page,
134 |   not really a speedrun, but thorough steps, as shit gets sideways fast and people
135 |   can dick around for hours trying wrong shit.
136 | * everything here is just a basic setup that breaks easily,
137 |   the server got a dynamic IP, turn it off for a weekend and it might get a different ip
138 |   assigned next time it starts. The container is not set to start on boot (see the sketch below this list),...
139 | * you don't understand how this shit works, fixing stuff that's not working will be hard,
140 |   but now you can start to consume all the guides and tutorials on
141 |   docker compose and try stuff...
142 |
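For the start-on-boot part, a minimal tweak, assuming it is still the same nginx recipe from earlier:

```yml
services:

  nginx:
    image: nginx:latest
    container_name: nginx
    hostname: nginx
    # come back up after a reboot or a crash,
    # unless the container was stopped on purpose
    restart: unless-stopped
    ports:
      - "80:80"
```

After editing, run `sudo docker compose up -d` again and the container gets recreated with the new setting.
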
143 | ## Links
144 |
145 | * https://www.reddit.com/r/HomeNetworking/comments/i7ijiz/a_guide_to_port_forwarding/
146 |
147 |
148 |
--------------------------------------------------------------------------------
/portainer/readme.md:
--------------------------------------------------------------------------------
1 | # Portainer in docker
2 |
3 | ###### guide-by-example
4 |
5 | 
6 |
7 | # Purpose
8 |
9 | Web GUI for overview and management of docker environment.
10 |
11 | * [Official site](https://www.portainer.io)
12 | * [Github](https://github.com/portainer/portainer)
13 | * [DockerHub image used](https://hub.docker.com/r/portainer/portainer-ce/)
14 |
15 | Lightweight, allows to easily manage docker containers,
16 | images, networks, volumes,...
17 |
18 | I don't really use as I love [ctop])(https://github.com/bcicen/ctop),
19 | but I keep eye out on it and recommend it to noobs.
20 |
21 | # Files and directory structure
22 |
23 | ```
24 | /home/
25 | └── ~/
26 | └── docker/
27 | └── portainer/
28 | ├── portainer_data/
29 | ├── .env
30 | └── docker-compose.yml
31 | ```
32 |
33 | * `portainer_data/` - a directory where portainer stores its peristent data
34 | * `.env` - a file containing environment variables for docker compose
35 | * `docker-compose.yml` - a docker compose file, telling docker
36 | how to run the containers
37 |
38 | You only need to provide the files.
39 | The directory is created by docker compose on the first run.
40 |
41 | # docker-compose
42 |
43 | `docker-compose.yml`
44 | ```yml
45 | services:
46 | portainer:
47 | image: portainer/portainer-ce
48 | container_name: portainer
49 | hostname: portainer
50 | command: -H unix:///var/run/docker.sock
51 | restart: unless-stopped
52 | env_file: .env
53 | volumes:
54 | - /var/run/docker.sock:/var/run/docker.sock
55 | - ./portainer_data:/data
56 | expose:
57 | - "9443"
58 |
59 | networks:
60 | default:
61 | name: $DOCKER_MY_NETWORK
62 | external: true
63 | ```
64 |
65 | `.env`
66 | ```bash
67 | # GENERAL
68 | DOCKER_MY_NETWORK=caddy_net
69 | TZ=Europe/Bratislava
70 | ```
71 |
72 | # reverse proxy
73 |
74 | Caddy v2 is used, details
75 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
76 |
77 | `Caddyfile`
78 | ```
79 | port.example.com {
80 | reverse_proxy portainer:9443 {
81 | transport http {
82 | tls
83 | tls_insecure_skip_verify
84 | }
85 | }
86 | }
87 | ```
88 |
89 | # Update
90 |
91 | Manual image update:
92 |
93 | - `docker-compose pull`
94 | - `docker-compose up -d`
95 | - `docker image prune`
96 |
97 | # Backup and restore
98 |
99 | #### Backup
100 |
101 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup)
102 | that makes daily snapshot of the entire directory.
103 |
104 | #### Restore
105 |
106 | * down the portainer container `docker-compose down`
107 | * delete the entire portainer directory
108 | * from the backup copy back the portainer directory
109 | * start the container `docker-compose up -d`
110 |
--------------------------------------------------------------------------------
/qbittorrent/readme.md:
--------------------------------------------------------------------------------
1 | # qBittorrent in docker
2 |
3 | ###### guide-by-example
4 |
5 | 
6 |
7 | # Purpose & Overview
8 |
9 | WORK IN PROGRESS
10 | WORK IN PROGRESS
11 | WORK IN PROGRESS
12 |
13 | Torrents downloader.
14 |
15 | * [Official site](https://www.qbittorrent.org/)
16 | * [Github](https://github.com/qbittorrent/qBittorrent)
17 | * [DockerHub](https://hub.docker.com/r/linuxserver/qbittorrent)
18 |
19 | qBittorrent is a cross-platform free and open-source BitTorrent client.
20 | The interface is a clone of the popular uTorrent.
21 | A major feature is the built-in torrent search on various trackers.
22 |
23 | Written mostly in C++, using libtorrent under the hood.
24 |
25 | # Files and directory structure
26 |
27 | ```
28 | /mnt/
29 | └── smalldisk/
30 | └── torrents/
31 | /home/
32 | └── ~/
33 | └── docker/
34 | └── qbittorrent/
35 | ├── qbittorrent-config/
36 | ├── .env
37 | └── docker-compose.yml
38 | ```
39 |
40 | * `/mnt/smalldisk/...` - a mounted storage share for downloads
41 | * `qbittorrent-config/` - configuration
42 | * `.env` - a file containing environment variables for docker compose
43 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers
44 |
45 | You only need to provide the files.
46 | The directories are created by docker compose on the first run.
47 |
48 | # docker-compose
49 |
50 | Dockerhub linuxserver/qbittorrent
51 | [example compose.](https://hub.docker.com/r/linuxserver/qbittorrent)
52 |
53 | `docker-compose.yml`
54 | ```yml
55 | services:
56 |
57 | qbittorrent:
58 | image: lscr.io/linuxserver/qbittorrent
59 | container_name: qbittorrent
60 | hostname: qbittorrent
61 | restart: unless-stopped
62 | env_file: .env
63 | volumes:
64 | - ./qbittorrent-config:/config
65 | - /mnt/smalldisk/torrents:/downloads
66 | expose:
67 | - 8080
68 | ports:
69 | - 6881:6881
70 | - 6881:6881/udp
71 | - 8080:8080
72 |
73 | networks:
74 | default:
75 | name: $DOCKER_MY_NETWORK
76 | external: true
77 | ```
78 |
79 | `.env`
80 | ```bash
81 | # GENERAL
82 | MY_DOMAIN=example.com
83 | DOCKER_MY_NETWORK=caddy_net
84 | TZ=Europe/Bratislava
85 |
86 | WEBUI_PORT=8080
87 | PUID=1000
88 | PGID=1000
89 | ```
90 |
91 | **All containers must be on the same network**.
92 | Which is named in the `.env` file.
93 | If one does not exist yet: `docker network create caddy_net`
94 |
95 | # Reverse proxy
96 |
97 | Caddy is used, details
98 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
99 |
100 | `Caddyfile`
101 | ```
102 | q.{$MY_DOMAIN} {
103 | reverse_proxy qbittorrent:8080
104 | }
105 | ```
106 |
107 | # First run
108 |
109 | Default login: `admin` // `adminadmin` on older versions; newer versions print a temporary password in the container logs - `docker logs qbittorrent`
110 |
111 | ---
112 |
113 | 
114 |
115 |
116 | # Specifics of my setup
117 |
118 | * no long term use yet
120 | * media files are stored and shared on a truenas scale VM
121 | and mounted to the docker host using systemd mounts,
122 | instead of fstab or autofs.
123 |
124 | `/etc/systemd/system/mnt-bigdisk.mount`
125 | ```ini
126 | [Unit]
127 | Description=12TB truenas mount
128 |
129 | [Mount]
130 | What=//10.0.19.19/Dataset-01
131 | Where=/mnt/bigdisk
132 | Type=cifs
133 | Options=ro,username=ja,password=qq,file_mode=0700,dir_mode=0700,uid=1000
134 | DirectoryMode=0700
135 |
136 | [Install]
137 | WantedBy=multi-user.target
138 | ```
139 |
140 | `/etc/systemd/system/mnt-bigdisk.automount`
141 | ```ini
142 | [Unit]
143 | Description=myshare automount
144 |
145 | [Automount]
146 | Where=/mnt/bigdisk
147 |
148 | [Install]
149 | WantedBy=multi-user.target
150 | ```
151 |
152 | to have the automount active on boot - `sudo systemctl enable --now mnt-bigdisk.automount`
153 |
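A quick sketch to check the automount actually works, using the unit names from above:

```bash
# the automount unit should be listed as active (waiting)
systemctl status mnt-bigdisk.automount

# first access of the path triggers the real mount
ls /mnt/bigdisk

# the mount unit should now be active (mounted)
systemctl status mnt-bigdisk.mount
```
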
154 | # Update
155 |
156 | Manual image update:
157 |
158 | - `docker-compose pull`
159 | - `docker-compose up -d`
160 | - `docker image prune`
161 |
162 | # Backup and restore
163 |
164 | #### Backup
165 |
166 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup)
167 | that makes daily snapshot of the entire directory.
168 |
169 | #### Restore
170 |
171 | * down the qbittorrent container `docker-compose down`
172 | * delete the entire qbittorrent directory
173 | * from the backup copy back the qbittorrent directory
174 | * start the container `docker-compose up -d`
175 |
--------------------------------------------------------------------------------
/rustdesk/readme.md:
--------------------------------------------------------------------------------
1 | # Rustdesk in docker
2 |
3 | ###### guide-by-example
4 |
5 | 
6 |
7 | # Purpose & Overview
8 |
9 | Remote desktop access.
10 |
11 | * [Official site](https://rustdesk.com/)
12 | * [Github](https://github.com/rustdesk/rustdesk)
13 | * [DockerHub for S6](https://hub.docker.com/r/rustdesk/rustdesk-server-s6)
14 |
15 | Rustdesk is a new, fully open source alternative to TeamViewer or AnyDesk.
16 | The major aspects are that it does NAT punching,
17 | and lets you host all the infrastructure needed for it to function.
18 | Written in rust (gasp), with Dart and the Flutter framework for the client side.
19 |
20 | The idea is:
21 |
22 | * Run a rustdesk server reachable online.
23 | * Install clients on machines you want to connect from / to.
24 | * The client application keeps a regular heartbeat communication
25 |   with the server, in a way [punching a hole](https://youtu.be/S7Ifw5XsypQ)
26 |   in the NAT, which allows connections initiated from the outside,
27 |   without doing port forwarding.
28 |
29 | ---
30 |
31 | 
32 |
33 | # Files and directory structure
34 |
35 | ```
36 | /home/
37 | └── ~/
38 | └── docker/
39 | └── rustdesk/
40 | ├── 🗁 rustdesk_data/
41 | ├── 🗋 .env
42 | └── 🗋 docker-compose.yml
43 | ```
44 |
45 | * `rustdesk_data/` - persistent data, contains sqlite database and the keys
46 | * `.env` - a file containing environment variables for docker compose
47 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers
48 |
49 | You only need to provide the two files.
50 | The directory is created by docker compose on the first run.
51 |
52 | # docker-compose
53 |
54 | Using [S6-overlay](https://github.com/rustdesk/rustdesk-server#s6-overlay-based-images)
55 | based image.
56 | It's a simpler, single container approach. The
57 | [complexity](https://github.com/rustdesk/rustdesk-server#classic-image)
58 | of rustdesk's `hbbs` server and `hbbr` relay is hidden.
59 |
60 | No network section since there is no http traffic that would need a reverse proxy, yet.
61 | So the ports are just mapped on to the docker host to do their thing.
62 |
63 | `docker-compose.yml`
64 | ```yml
65 | services:
66 | rustdesk:
67 | image: rustdesk/rustdesk-server-s6:1.1.7-1
68 | container_name: rustdesk
69 | hostname: rustdesk
70 | restart: unless-stopped
71 | env_file: .env
72 | ports:
73 | - "21116:21116"
74 | - "21115:21115"
75 | - "21116:21116/udp"
76 | - "21117:21117"
77 | - "21118:21118"
78 | - "21119:21119"
79 | volumes:
80 | - ./rustdesk_data:/data
81 | ```
82 |
83 | `.env`
84 | ```bash
85 | # GENERAL
86 | TZ=Europe/Bratislava
87 |
88 | # RUSTDESK
89 | RELAY=rust.example.com:21117
90 | ENCRYPTED_ONLY=1
91 | # KEY_PRIV=
92 | # KEY_PUB=
93 | ```
94 |
95 | In the `.env` file encryption is enabled, so that only clients that have
96 | the correct public key will be allowed access to the rustdesk server.
97 | The keys are generated on the first run of the compose and can be found in
98 | the `rustdesk_data` directory.
99 | Once generated they should be added to the `.env` file for easier migration.
100 | The public key needs to be distributed with the client app installation.
101 |
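A quick sketch of grabbing the generated keys; the file names come from the troubleshooting note further below, the exact `.env` variable mapping is an assumption:

```bash
# public key - goes to KEY_PUB in .env and gets distributed with the clients
cat ~/docker/rustdesk/rustdesk_data/id_ed25519.pub

# private key - goes to KEY_PRIV in .env, stays secret
cat ~/docker/rustdesk/rustdesk_data/id_ed25519
```
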
102 | # Port forwarding
103 |
104 | as can be seen in the compose
105 |
106 | * **21115 - 21119** TCP need to be forwarded to docker host
107 | * **21116** is TCP **and UDP**
108 |
109 | 21115 is used for the NAT type test,
110 | 21116/UDP is used for the ID registration and heartbeat service,
111 | 21116/TCP is used for TCP hole punching and connection service,
112 | 21117 is used for the Relay services,
113 | and 21118 and 21119 are used to support web clients.
114 | [source](https://rustdesk.com/docs/en/self-host/install/)
115 |
116 | ---
117 |
118 | 
119 |
120 | # The installation on clients
121 |
122 | * Download and install the client apps from [the official site](https://rustdesk.com/).
123 | * Three dots > ID/Relay Server
124 | * `ID Server`: rust.example.com
125 | * `Key`: *the public key generated by the server*
126 | * The dot at the bottom should be green, saying "ready".
127 |
128 | 
129 |
130 | **On windows** one
131 | [can deploy](https://rustdesk.com/docs/en/self-host/install/#put-config-in-rustdeskexe-file-name-windows-only)
132 | client with **pre-sets** by renaming the installation file to:
133 | `rustdesk-host=<the-host>,key=<the-public-key>.exe`
134 |
135 | example: `rustdesk-host=rust.example.com,key=3AVva64bn1ea2vsDuOuQH3i8+2M=.exe`
136 |
137 | If by chance the public key contains symbols not usable in windows filenames,
138 | down the container, delete the files `id_ed25519` and `id_ed25519.pub`,
139 | up the container and try with the new keys.
140 |
141 | # Extra info
142 |
143 | * You really really **really want to be using a domain and not your public IP**
144 |   when installing clients and setting the ID server. That `rust.example.com`
145 |   can be changed to point at a different IP any time you want. A hard set IP can not.
146 | * Can do `tcpdump -n udp port 21116` on a docker host to **see heartbeat** udp traffic.
147 | Seems machines report-in every \~13 seconds.
148 | * on **windows** a **service** named `rustdesk` is enabled.
149 | Disable it if the machine should be accessible only on demand,
150 | when someone first runs rustdesk manually.
151 | In powershell - `Set-Service rustdesk -StartupType Disabled`
152 | * One can relatively easily
153 | **hardcode server url and pub key in to an executable** using
154 | [github actions.](https://rustdesk.com/docs/en/self-host/hardcode-settings/)
155 | Tested it and it works. But seems you can only do workflow run of nightly build,
156 | meaning all the latest stuff added is included, which means higher chance of bugs.
157 | Make sure you do step *"Enable upload permissions for workflows"*,
158 | before you run the workflow.
159 | * Questions about issues with selfhosting are **not answered** on github -
160 | [#763](https://github.com/rustdesk/rustdesk/discussions/763),
161 | next to try is their [discord](https://discord.com/invite/nDceKgxnkV) or
162 | [subreddit](https://www.reddit.com/r/rustdesk/).
163 | * [FAQ](https://github.com/rustdesk/rustdesk/wiki/FAQ)
164 | * How does [rustdesk work?](https://github.com/rustdesk/rustdesk/wiki/How-does-RustDesk-work%3F)
165 |
166 | 
167 |
168 | # Trouble shooting
169 |
170 | ---
171 |
172 | #### If just one machine is having issues.
173 |
174 | uninstall, plus delete:
175 |
176 | * `C:\Windows\ServiceProfiles\LocalService\AppData\Roaming\RustDesk`
177 | * `%AppData%\RustDesk`
178 |
179 | Restart. Reinstall.
180 | Do not use the installer you used before, **download** from the site latest.
181 |
182 | ---
183 |
184 | #### Error - Failed to connect to relay server
185 |
186 | * I had wrong url set as `RELAY` in the `.env`
187 | * if url is correct I would test if port 21117 tcp forwards
188 |
189 | ---
190 |
191 | #### Investigate port forwarding
192 |
193 | Install netcat and tcpdump on the docker host.
194 |
195 | * down the rustdesk container (`docker compose down`) so that the ports are free to use
196 | * start a small netcat server listening on whichever port we test
197 |   `sudo nc -u -vv -l -p 21116`
198 |   the `-u` means udp traffic, drop it to test tcp
199 | * on a machine somewhere else in the world, not on the same network, try
200 |   `nc -u the_public_ip_goes_here 21116`
201 |
202 | If you write something and press enter, it should appear on the other machine, confirming
203 | that port forwarding works.
204 | Another useful command is `tcpdump -n udp port 21116`.
205 | When port forwarding works, one should see heartbeat chatter,
206 | as machines with installed rustdesk are announcing themselves every \~13 seconds.
207 |
208 | ---
209 |
210 | # Manual image update:
211 |
212 | - `docker-compose pull`
213 | - `docker-compose up -d`
214 | - `docker image prune`
215 |
216 | # Backup and restore
217 |
218 | #### Backup
219 |
220 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup)
221 | that makes daily snapshot of the entire directory.
222 |
223 | #### Restore
224 |
225 | * down the rustdesk container `docker-compose down`
226 | * delete the entire rustdesk directory
227 | * from the backup copy back the rustdesk directory
228 | * start the containers `docker-compose up -d`
229 |
230 |
--------------------------------------------------------------------------------
/snipeit/readme.md:
--------------------------------------------------------------------------------
1 | # Snipe-IT in docker
2 |
3 | ###### guide-by-example
4 |
5 | 
6 |
7 | # Purpose & Overview
8 |
9 | IT inventory management tool.
10 |
11 | * [Official site](https://snipeitapp.com/)
12 | * [Github](https://github.com/snipe/snipe-it)
13 | * [DockerHub](https://hub.docker.com/r/snipe/snipe-it/)
14 |
15 | Snipe-IT is a modern, open source, go-to asset management tool with LDAP integration.
16 | Written in PHP, using the Laravel framework.
17 | This setup is using a mariadb database for storing the data.
18 | Dockerhub image maintained by
19 | [linuxserver.io](https://docs.linuxserver.io/images/docker-snipe-it)
20 | is used.
21 |
22 | # Files and directory structure
23 |
24 | ```
25 | /home/
26 | └── ~/
27 | └── docker/
28 | └── snipeit/
29 | ├── config/
30 | ├── snipeit-db/
31 | ├── .env
32 | └── docker-compose.yml
33 | ```
34 |
35 | * `config/` - a directory where snipe-it will store its web server stuff
36 | * `snipeit-db/` - a directory where snipeit will store its database data
37 | * `.env` - a file containing environment variables for docker compose
38 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers
39 |
40 | You only need to provide the files.
41 | The directories are created by docker compose on the first run.
42 |
43 | # docker-compose
44 |
45 | `docker-compose.yml`
46 | ```yml
47 | services:
48 | snipeit-db:
49 | image: mariadb
50 | container_name: snipeit-db
51 | hostname: snipeit-db
52 | restart: unless-stopped
53 | env_file: .env
54 | volumes:
55 | - ./snipeit-db:/var/lib/mysql
56 |
57 | snipeit:
58 | image: linuxserver/snipe-it:latest
59 | container_name: snipeit
60 | hostname: snipeit
61 | restart: unless-stopped
62 | env_file: .env
63 | depends_on:
64 | - snipeit-db
65 | volumes:
66 | - ./config:/config
67 |
68 | networks:
69 | default:
70 | name: $DOCKER_MY_NETWORK
71 | external: true
72 | ```
73 |
74 | `.env`
75 | ```bash
76 | # GENERAL
77 | MY_DOMAIN=example.com
78 | DOCKER_MY_NETWORK=caddy_net
79 | TZ=Europe/Bratislava
80 |
81 | #LINUXSERVER.IO mariadb
82 | PUID=1000
83 | PGID=1000
84 | MYSQL_ROOT_PASSWORD=snipeit
85 | MYSQL_DATABASE=snipeit
86 | MYSQL_USER=snipeit
87 | MYSQL_PASSWORD=snipeit
88 |
89 | #LINUXSERVER.IO Snipe-IT
90 | APP_URL=https://snipe.example.com
91 | MYSQL_PORT_3306_TCP_ADDR=snipeit-db
92 | MYSQL_PORT_3306_TCP_PORT=3306
93 | MYSQL_DATABASE=snipeit
94 | MYSQL_USER=snipeit
95 | MYSQL_PASSWORD=snipeit
96 | APP_TRUSTED_PROXIES=*
97 |
98 | #EMAIL
99 | MAIL_PORT_587_TCP_ADDR=smtp-relay.sendinblue.com
100 | MAIL_PORT_587_TCP_PORT=587
101 | MAIL_ENV_FROM_ADDR=noreply@example.com
102 | MAIL_ENV_FROM_NAME=snipe-it admin
103 | MAIL_ENV_ENCRYPTION=tls
104 | MAIL_ENV_USERNAME=your_email@registrated-on-sendinblue.com
105 | MAIL_ENV_PASSWORD=your_sendinblue_smtp_key_value
106 | ```
107 |
108 | **All containers must be on the same network**.
109 | Which is named in the `.env` file.
110 | If one does not exist yet: `docker network create caddy_net`
111 |
112 | # Reverse proxy
113 |
114 | Caddy v2 is used, details
115 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
116 |
117 | `Caddyfile`
118 | ```
119 | snipe.{$MY_DOMAIN} {
120 | encode gzip
121 | reverse_proxy snipeit:443 {
122 | transport http {
123 | tls
124 | tls_insecure_skip_verify
125 | }
126 | }
127 | }
128 | ```
129 |
130 | # First run
131 |
132 |
133 | 
134 |
135 | ---
136 |
137 |
138 | # Trouble shooting
139 |
140 |
141 |
142 | # Update
143 |
144 | Manual image update:
145 |
146 | - `docker-compose pull`
147 | - `docker-compose up -d`
148 | - `docker image prune`
149 |
150 | # Backup and restore
151 |
152 | #### Backup
153 |
154 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup)
155 | that makes daily snapshot of the entire directory.
156 |
157 | #### Restore
158 |
159 | * down the snipeit containers `docker-compose down`
160 | * delete the entire snipeit directory
161 | * from the backup copy restore the snipeit directory
162 | * start the containers `docker-compose up -d`
163 |
164 | # Backup of just user data
165 |
166 | Users data daily export using the
167 | [official procedure.](https://www.snipeitapp.com/docs/admin/backup-restore/)
168 | For snipeit it means database dump and backing up several directories
169 | containing user uploaded files.
170 |
171 | Daily [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup) run
172 | takes care of backing up the directories.
173 | So only database dump is needed.
174 | The created backup sqlite3 file is overwritten on every run of the script,
175 | but that's ok since borg is making daily snapshots.
176 |
177 | #### Create a backup script
178 |
179 | Placed inside `snipeit` directory on the host
180 |
181 | `snipeit-backup-script.sh`
182 | ```bash
183 | #!/bin/bash
184 |
185 | # CREATE DATABASE DUMP, bash -c '...' IS USED OTHERWISE OUTPUT > WOULD TRY TO GO TO THE HOST
186 | docker container exec snipeit-db bash -c 'mysqldump -u $MYSQL_USER -p$MYSQL_PASSWORD $MYSQL_DATABASE > $MYSQL_DIR/BACKUP.snipeit.database.sql'
187 | ```
188 |
189 | the script must be **executable** - `chmod +x snipeit-backup-script.sh`
190 |
191 | #### Cronjob
192 |
193 | Running on the host, so that the script will be periodically run.
194 |
195 | * `su` - switch to root
196 | * `crontab -e` - add new cron job
197 | * `0 22 * * * /home/bastard/docker/snipeit/snipeit-backup-script.sh`
198 | runs it every day [at 22:00](https://crontab.guru/#0_22_*_*_*)
199 | * `crontab -l` - list cronjobs to check
200 |
201 | # Restore the user data
202 |
203 | Assuming clean start, first restore the database before running the app container.
204 |
205 | * start only the database container: `docker-compose up -d snipeit-db`
206 | * copy `BACKUP.snipeit.database.sql` in `snipeit/snipeit-db-data/`
207 | * restore the database inside the container
208 | `docker container exec --workdir /config snipeit-db bash -c 'mysql -u $MYSQL_USER -p$MYSQL_PASSWORD $MYSQL_DATABASE < BACKUP.snipeit.database.sql'`
209 | * now start the app container: `docker-compose up -d`
210 | * let it run so it creates its file structure
211 | * down the containers `docker-compose down`
212 | * in `snipeit/snipeit-data/www/`
213 | replace directories `files`,`images`,`uploads` and the file `.env`
214 | with the ones from the BorgBackup repository
215 | * start the containers: `docker-compose up -d`
216 | * if there was a major version jump, exec in to the app container and run `php artisan migrate`
217 | `docker container exec -it snipeit /bin/bash`
218 | `cd /var/www/html/`
219 | `php artisan migrate`
220 |
221 | Again, the above steps are based on the
222 | [official procedure.](https://www.snipeitapp.com/docs/admin/backup-restore/)
223 |
--------------------------------------------------------------------------------
/squid/readme.md:
--------------------------------------------------------------------------------
1 | # Squid
2 |
3 | ###### guide-by-example
4 |
5 | 
6 |
7 | # Purpose & Overview
8 |
9 | Forward proxy to avoid websites IP bans.
10 |
11 | * [Official](http://www.squid-cache.org/)
12 | * [Github](https://github.com/squid-cache/squid)
13 | * [Arch wiki](https://wiki.archlinux.org/title/Squid)
14 |
15 | Caching and forwarding HTTP web proxy.
16 | Main use here is being able to access web pages from a different IP than
17 | your own in a comfortable way.
18 | Other uses are caching to improve speed and load, and ability to block domains,
19 | ads, IPs,...
20 |
21 | Squid is written in C++.
22 |
23 | # Hosting
24 |
25 | Free oracle cloud instance can be used to host squid somewhere in the world.
26 | [Detailed setup guide here.](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/free_cloud_hosting_VPS)
27 |
28 | # Files and directory structure
29 |
30 | ```
31 | /home/
32 | └── ~/
33 | └── docker/
34 | └── squid/
35 | ├── 🗋 docker-compose.yml
36 | └── 🗋 squid.conf
37 | ```
38 | * `docker-compose.yml` - a docker compose file, telling docker how to run the container
39 | * `squid.conf` - main configuration file for squid
40 |
41 | # Compose
42 |
43 | `docker-compose.yml`
44 | ```yml
45 | services:
46 | squid:
47 | image: ubuntu/squid
48 | container_name: squid
49 | hostname: squid
50 | restart: unless-stopped
51 | ports:
52 | - "56566:56566"
53 | volumes:
54 | - ./squid.conf:/etc/squid/squid.conf
55 | - ./squid_cache:/var/spool/squid
56 | ```
57 |
58 | # squid.conf
59 |
60 | `squid.conf`
61 | ```php
62 | max_filedescriptors 1048576
63 | http_port 56566
64 | http_access allow all
65 |
66 | cache deny all
67 | visible_hostname squidproxy
68 |
69 | forwarded_for delete
70 | via off
71 | follow_x_forwarded_for deny all
72 | request_header_access X-Forwarded-For deny all
73 | ```
74 |
75 | A minimal config that works.
76 |
77 | For running in docker, `max_filedescriptors 1048576` is required, prevents error:
78 | *FATAL: xcalloc: Unable to allocate 1073741816 blocks of 432 bytes
79 | squid cache terminated abnormally*
80 |
81 | A firewall is used for security, allowing in-connections only from one public IP.
82 | Otherwise a VPN like wireguard would be used,
83 | so there's not much interest in the acl security and authorization the config can provide.
84 | Also no interest in caching.
85 | So this is just a config with some headers turned off, for maybe better hiding
86 | of the real IP.
87 |
88 | **Testing**
89 |
90 | The linux curl command can test if the forward proxy works.
91 |
92 | `curl -x http://666.6.66.6:56566 -L http://archlinux.org`
93 |
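To actually see that traffic exits under a different IP, compare what an IP echo service reports directly and through the proxy; a small sketch, reusing the placeholder proxy address from above:

```bash
# your real public IP
curl ifconfig.me

# the IP websites see when going through squid
curl -x http://666.6.66.6:56566 ifconfig.me
```
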
94 | # Setting proxy in browsers
95 |
96 | 
97 |
98 | Every browser has proxy settings where the ip and port can be set, and it should
99 | work globally for every site. But if only certain domains should go through the proxy,
100 | then there are browser addons for that.
101 |
102 | **FoxyProxy Standard**
103 |
104 | * [firefox](https://addons.mozilla.org/en-US/firefox/addon/foxyproxy-standard/)
105 | * [chrome](https://chrome.google.com/webstore/detail/foxyproxy-standard/gcknhkkoolaabfmlnjonogaaifnjlfnp)
106 |
107 | In its config one can set up the proxy ip and port and then enable or disable the proxy.
108 | But it also has a patterns section where a url wildcard can be set, so the proxy
109 | stays enabled all the time but applies only to sites fitting the pattern.
110 |
111 | # Update
112 |
113 | Manual image update:
114 |
115 | - `docker compose pull`
116 | - `docker compose up -d`
117 | - `docker image prune`
118 |
119 |
--------------------------------------------------------------------------------
/syncthing/readme.md:
--------------------------------------------------------------------------------
1 | # Syncthing
2 |
3 | ###### guide-by-example
4 |
5 | 
6 |
7 | # Purpose & Overview
8 |
9 | Synchronize folders between devices.
10 |
11 | * [Official](https://syncthing.net/)
12 | * [Github](https://github.com/syncthing/syncthing)
13 |
14 | Simple and elegant solution for synchronizing folders.
15 | Clients are installed on devices, and paired using the Syncthing servers.
16 | There are Windows, MacOS, Linux, Android clients, and the 3rd party Möbius Sync for iOS.
17 |
18 | Written in Go.
19 |
20 | # Files and directory structure
21 |
22 | ```
23 | /home/
24 | └── ~/
25 | └── docker/
26 | └── syncthing/
27 | ├── 🗋 .env
28 | └── 🗋 docker-compose.yml
29 | ```
30 | * `.env` - a file containing environment variables for docker compose
31 | * `docker-compose.yml` - a docker compose file, telling docker how to run the container
32 |
33 | # Compose
34 |
35 | `docker-compose.yml`
36 | ```yml
37 | services:
38 |
39 | syncthing:
40 | image: syncthing/syncthing
41 | container_name: syncthing
42 | hostname: syncthing
43 | restart: unless-stopped
44 | env_file: .env
45 | volumes:
46 | - /mnt/mirror/syncthing:/var/syncthing
47 | ports:
48 | - 8384:8384 # Web UI
49 | - 22000:22000/tcp # TCP file transfers
50 | - 22000:22000/udp # QUIC file transfers
51 | - 21027:21027/udp # Receive local discovery broadcasts
52 |
53 | networks:
54 | default:
55 | name: $DOCKER_MY_NETWORK
56 | external: true
57 | ```
58 |
59 | `.env`
60 | ```bash
61 | # GENERAL
62 | DOCKER_MY_NETWORK=caddy_net
63 | TZ=Europe/Bratislava
64 |
65 | # SYNCTHING
66 | PUID=1000
67 | PGID=1000
68 | ```
69 |
70 | # Reverse proxy
71 |
72 | Caddy v2 is used, details
73 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
74 |
75 | `Caddyfile`
76 | ```php
77 | sync.{$MY_DOMAIN} {
78 | reverse_proxy syncthing:8384
79 | }
80 | ```
81 |
82 | # First run
83 |
84 | 
85 |
86 | Visit the webgui, set up a username and password in Settings > GUI.
87 |
88 | * install syncthing on other devices
89 | * add folders, confirm them on the webgui
90 |
91 | Sync should just start.
92 |
93 |
94 | # Trouble shooting
95 |
96 | # Update
97 |
98 | Manual image update:
99 |
100 | - `docker compose pull`
101 | - `docker compose up -d`
102 | - `docker image prune`
103 |
104 |
--------------------------------------------------------------------------------
/umami/readme.md:
--------------------------------------------------------------------------------
1 | # Umami
2 |
3 |
4 | # Purpose & Overview
5 | Self-hosted, private, simple web site analytics with Umami.
6 |
7 | * [Github](https://github.com/umami-software/umami)
8 |
9 | # Files and directory structure
10 |
11 | ```
12 | /home/
13 | └── ~/
14 | └── docker/
15 | └── umami/
16 | ├── .env
17 | └── docker-compose.yml
18 | ```
19 | * `.env` - a file containing environment variables for docker compose
20 | * `docker-compose.yml` - a docker compose file, telling docker how to run the container
21 |
22 | ### - Create .env and docker-compose.yml file
23 | `.env`
24 | ```
25 | # GENERAL
26 | MY_DOMAIN=example.com
27 | DOCKER_MY_NETWORK=caddy_net
28 |
29 | # UMAMI
30 | DATABASE_URL=postgresql://umami:generate_me@db:5432/umami
31 | DATABASE_TYPE=postgresql
32 | # generate a secret with `openssl rand -base64 32`
33 | APP_SECRET=generate_me
34 | # uncomment and change for custom analytics script name
35 | # TRACKER_SCRIPT_NAME=custom_script_name
36 |
37 | # UMAMI DB
38 | POSTGRES_DB=umami
39 | POSTGRES_USER=umami
40 | # generate a password with `openssl rand -base64 32`
41 | POSTGRES_PASSWORD=generate_me
42 | ```
43 |
44 | `docker-compose.yml`
45 | ```yml
46 | ---
47 | version: '3'
48 | services:
49 |
50 | umami:
51 | container_name: umami
52 | image: ghcr.io/umami-software/umami:postgresql-latest
53 | ports:
54 | - "3000:3000"
55 | env_file: .env
56 | depends_on:
57 | db:
58 | condition: service_healthy
59 | restart: always
60 |
61 | db:
62 | container_name: umami-db
63 | image: postgres:15-alpine
64 | env_file: .env
65 | volumes:
66 | - umami-db-data:/var/lib/postgresql/data
67 | restart: always
68 | healthcheck:
69 | test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}"]
70 | interval: 5s
71 | timeout: 5s
72 | retries: 5
73 |
74 | volumes:
75 | umami-db-data:
76 |
77 | networks:
78 | default:
79 | name: $DOCKER_MY_NETWORK
80 | external: true
81 |
82 | ```
83 |
84 | # Reverse proxy
85 |
86 | Caddy is used, details
87 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
88 |
89 | `Caddyfile`
90 | ```
91 | umami.{$MY_DOMAIN} {
92 | reverse_proxy umami:3000
93 |
94 | log {
95 | output file /data/logs/umami_access.log {
96 | roll_size 20mb
97 | roll_keep 5
98 | }
99 | }
100 | }
101 | ```
102 |
103 | ### - Run it all
104 | Restarting the Caddy container `docker container restart caddy` will kick in the changes. Give Caddy time to get certificates, check `docker logs caddy`.
105 |
106 | # First run
107 |
108 | Default login: `admin` // `umami`. Go and change the password straight away.
109 |
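After adding a website in the dashboard, umami hands out a tracking snippet that points at its tracker script. A quick check that the script is actually served through the proxy, assuming the default tracker script name and the `umami.example.com` address from the Caddyfile:

```bash
# should return HTTP 200 with a javascript content type
curl -I https://umami.example.com/script.js
```
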
110 | # Trouble shooting
111 |
112 | Check umami logs `docker logs umami`.
113 |
114 | # Update
115 |
116 | [Watchtower](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/watchtower)
117 | updates the image automatically.
118 |
119 | Manual image update:
120 |
121 | - `docker-compose pull`
122 | - `docker-compose up -d`
123 | - `docker image prune`
124 |
--------------------------------------------------------------------------------
/unifi/readme.md:
--------------------------------------------------------------------------------
1 | # UniFi Network Application
2 |
3 | ###### guide-by-example
4 |
5 | 
6 |
7 | # Purpose & Overview
8 |
9 | Ubiquiti management software for **wifi** access points and other ubiquiti hardware.
10 |
11 | * [Official site](https://www.ui.com/software/)
12 | * [linuxserver github](https://github.com/linuxserver/docker-unifi-network-application)
13 |
14 | UniFi is a web based management software for Ubiquiti devices.
15 | It is written in Java, utilizing the Spring Framework
16 | and using MongoDB as a database.
17 | Docker image used here is provided by
18 | [linuxserver.io](https://www.linuxserver.io/)
19 |
20 | # Migration from UniFi Controller
21 |
22 | * Do the manual **backup** of your old instance, through webgui settings.
23 | * Down the old container.
24 | * Spin the new stuff
25 | * Restore the backup
26 |
27 |
28 | Extra Info & Rant
29 |
30 |
31 | Previously called [UniFi Controller](https://github.com/linuxserver/docker-unifi-controller)
32 |
33 | Ubiquiti morons decided to change the name to UniFi Network Application.
34 | Then they also tried to go for the name UniFi Network Server, with a claim that it's for
35 | the selfhosted version. In docs and even in downloads they mostly use the `application`.
36 | Though love that inside the webgui the version is just `Network 8.0.28`
37 |
38 | With this name change, linuxserver.io also changed the deployment so that
39 | mongo database is now a separate container.
40 | Would not be a big issue, if mongo would not [suck big time](https://github.com/docker-library/mongo/issues/174)
41 | at initiating databases in new deployments, making it unnecessarily complicated.
42 | Or if linuxserver.io could make a decision and write
43 | [cleaner instructions](https://github.com/linuxserver/docker-unifi-network-application/issues/13)
44 | instead of trying to teach people to fish.
45 | Also linuxserver.io's official stance is to use an older version of mongo, v3.6 - v4.4.
46 | Reports are that raspberry pi 4 users need to go for v3.6.
47 |
48 | Big help to get this going cleanly was [this repo](https://github.com/GiuseppeGalilei/Ubiquiti-Tips-and-Tricks),
49 | from [this](https://www.reddit.com/r/Ubiquiti/comments/18stenb/unifi_network_application_easy_docker_deployment/)
50 | reddit post.
51 | First time I've seen `configs` used in compose this way, it saved the bother of doing
52 | a separate mount of `mongo-init.js`, which for some reason did not work for me.
53 | Here it is improved a bit by using variables, so stuff can be set just in `.env`.
54 |
55 |
56 |
57 | 
58 |
59 | # Files and directory structure
60 |
61 | ```
62 | /home/
63 | └── ~/
64 | └── docker/
65 | └── unifi/
66 | ├── 🗁 mongo_db_data/
67 | ├── 🗁 unifi_data/
68 | ├── 🗋 .env
69 | └── 🗋 docker-compose.yml
70 | ```
71 |
72 | * `mongo_db_data/` - database data
73 | * `unifi_data/` - unifi configuration and other data
74 | * `.env` - a file containing environment variables for docker compose
75 | * `docker-compose.yml` - a docker compose file, telling docker
76 | how to run the containers
77 |
78 | You only need to provide the files.
79 | The directory is created by docker compose on the first run.
80 |
81 | # docker-compose
82 |
83 | The compose should not need any changes, there's the `.env` file for that.
84 |
85 | Worth noting is use of [configs](https://docs.docker.com/compose/compose-file/08-configs/)
86 | to bypass the need for separate `mongo-init.js` file.
87 | The use comes from [this repo](https://github.com/GiuseppeGalilei/Ubiquiti-Tips-and-Tricks).
88 |
89 | latest unifi version tag - [here](https://github.com/linuxserver/docker-unifi-network-application/releases)
90 |
91 | `docker-compose.yml`
92 | ```yml
93 | services:
94 |
95 | unifi-db:
96 | image: mongo:4
97 | container_name: unifi-db
98 | hostname: unifi-db
99 | restart: unless-stopped
100 | env_file: .env
101 | volumes:
102 | - ./mongo_db_data:/data/db
103 | expose:
104 | - 27017
105 | configs:
106 | - source: init-mongo.js
107 | target: /docker-entrypoint-initdb.d/init-mongo.js
108 |
109 | unifi-app:
110 | image: lscr.io/linuxserver/unifi-network-application:8.4.59
111 | container_name: unifi-app
112 | hostname: unifi-app
113 | restart: unless-stopped
114 | env_file: .env
115 | depends_on:
116 | - unifi-db
117 | volumes:
118 | - ./unifi_data:/config
119 | ports:
120 | - 8443:8443
121 | - 3478:3478/udp
122 | - 10001:10001/udp
123 | - 8080:8080
124 | - 1900:1900/udp #optional
125 | - 8843:8843 #optional
126 | - 8880:8880 #optional
127 | - 6789:6789 #optional
128 | - 5514:5514/udp #optional
129 |
130 | networks:
131 | default:
132 | name: $DOCKER_MY_NETWORK
133 | external: true
134 |
135 | configs:
136 | init-mongo.js:
137 | content: |
138 | db.getSiblingDB("$MONGO_DBNAME").createUser({user: "$MONGO_USER", pwd: "$MONGO_PASS", roles: [{role: "$MONGO_ROLE", db: "$MONGO_DBNAME"}]});
139 | db.getSiblingDB("${MONGO_DBNAME}_stat").createUser({user: "$MONGO_USER", pwd: "$MONGO_PASS", roles: [{role: "$MONGO_ROLE", db: "${MONGO_DBNAME}_stat"}]});
140 | ```
141 |
142 | `.env`
143 | ```bash
144 | # GENERAL
145 | DOCKER_MY_NETWORK=caddy_net
146 | TZ=Europe/Bratislava
147 |
148 | #UNIFI LINUXSERVER.IO
149 | PUID=1000
150 | PGID=1000
151 | MEM_LIMIT=1024
152 | MEM_STARTUP=1024
153 | MONGO_USER=unifi
154 | MONGO_PASS=N9uHz2ct
155 | MONGO_HOST=unifi-db
156 | MONGO_PORT=27017
157 | MONGO_DBNAME=unifi_db
158 | MONGO_ROLE=dbOwner
159 | # MONGO_TLS= #optional
160 | # MONGO_AUTHSOURCE= #optional
161 |
162 | ```
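163 | 
164 | The `MONGO_PASS` value above is just an example. The password is only ever used between
165 | the two containers, but it is still best to generate a random one, for example:
166 | 
167 | ```bash
168 | # print a random 32 character hex string to use as MONGO_PASS
169 | openssl rand -hex 16
170 | ```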
163 |
164 | # Reverse proxy
165 |
166 | Caddy v2 is used, details
167 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
168 |
169 | `Caddyfile`
170 | ```
171 | unifi.{$MY_DOMAIN} {
172 | encode gzip
173 | reverse_proxy unifi-app:8443 {
174 | transport http {
175 | tls
176 | tls_insecure_skip_verify
177 | }
178 | }
179 | }
180 | ```
181 |
182 | # Adoption
183 |
184 | 
185 |
186 | The controller might see your APs during the initial setup,
187 | but it cannot adopt them before you set your docker-host IP
188 | as the `Override Inform Host`, or point the devices at it over ssh, see below.
189 |
190 | * **Inform Host** - check the **Override** checkbox
191 | *Settings > System > Advanced*
192 | * enter docker-host IP
193 | * adopt devices
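194 | 
195 | If a device refuses to show up, it can also be pointed at the controller manually over ssh.
196 | A sketch, assuming an unadopted access point that still has the default `ubnt`/`ubnt` credentials,
197 | a made up device IP, and the docker-host on `10.0.19.4`:
198 | 
199 | ```bash
200 | # ssh to the access point or switch
201 | ssh ubnt@192.168.1.20
202 | 
203 | # on the device, point it at the controller; 8080 is the inform port published in the compose
204 | set-inform http://10.0.19.4:8080/inform
205 | ```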
194 |
195 | # Some Settings
196 |
197 | * **Disable "Connects high performance clients to 5 GHz only"**
198 | *Old interface > Settings > Wireless Networks > Edit > Advanced Options*
199 | When enabled it forces devices to ignore 2.4GHz, which obviously causes problems at range.
200 | Fucking monstrous stupidity to have on by default,
201 | but I guess globally they have the power to clean up 2.4GHz a bit.
202 | * **802.11 DTIM Period - set to 3**
203 | *Settings > Wifi > Edit > Advanced*
204 | For [apple devices](https://www.sniffwifi.com/2016/05/go-to-sleep-go-to-sleep-go-to-sleep.html)
205 |
206 | # Migration
207 |
208 | * old controller - settings - system - find the backup section - do a settings-only backup - you get a `.unf` file
209 | * new controller - on the first run there's an option to restore from a backup, pick the file
210 | * done
211 |
212 | # Config autobackup
213 |
214 | Automatic config backups should be in `~/docker/unifi/unifi_data/data/backup`
215 |
216 | # Encountered issues
217 |
218 | Because I had `./mongo_db_data:/data` as the bind mount
219 | instead of `./mongo_db_data:/data/db`, there was an issue.
220 | Everything worked fine for weeks, even through a shutdown of the docker host
221 | or a sudden power outage... but after downing and then upping the container,
222 | login would stop working with the notice "invalid username and/or password".
223 |
224 | # Update
225 |
226 | Manual image update:
227 |
228 | - `docker-compose pull`
229 | - `docker-compose up -d`
230 | - `docker image prune`
231 |
232 |
--------------------------------------------------------------------------------
/uptime-kuma/readme.md:
--------------------------------------------------------------------------------
1 | # Uptime Kuma in docker
2 |
3 | ###### guide-by-example
4 |
5 | 
6 |
7 | # Purpose & Overview
8 |
9 | Uptime monitoring and notifications.
10 |
11 | * [Github](https://github.com/louislam/uptime-kuma)
12 |
13 | Simple, modern uptime monitoring of sites, hosts, ports, containers,
14 | with a rich notification system.
15 |
16 | Written in javascript.
17 |
18 | ---
19 |
20 | 
21 |
22 | # Files and directory structure
23 |
24 | ```
25 | /home/
26 | └── ~/
27 | └── docker/
28 | └── uptimekuma/
29 | ├── 🗁 uptimekuma_data/
30 |             ├── 🗋 .env
31 |             └── 🗋 docker-compose.yml
31 | ```
32 |
33 | * `uptimekuma_data` - persistent data, contains sqlite database
34 | * `.env` - a file containing environment variables for docker compose
35 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers
36 |
37 | You only need to provide the two files.
38 | The directory is created by docker compose on the first run.
39 |
40 | # docker-compose
41 |
42 | It's a simpler, single container approach.
43 |
44 | `docker-compose.yml`
45 | ```yml
46 | services:
47 | uptimekuma:
48 | image: louislam/uptime-kuma:1
49 | container_name: uptimekuma
50 | hostname: uptimekuma
51 | restart: unless-stopped
52 | ports:
53 | - "3001:3001"
54 | volumes:
55 | - ./uptimekuma_data:/app/data
56 |
57 | networks:
58 | default:
59 | name: $DOCKER_MY_NETWORK
60 | external: true
61 | ```
62 |
63 | `.env`
64 | ```bash
65 | # GENERAL
66 | TZ=Europe/Bratislava
67 | DOCKER_MY_NETWORK=caddy_net
68 | ```
69 | # Reverse proxy
70 |
71 | Caddy v2 is used, details
72 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
73 |
74 | `Caddyfile`
75 | ```php
76 | uptime.{$MY_DOMAIN} {
77 | reverse_proxy uptimekuma:3001
78 | }
79 | ```
80 |
81 | # Manual image update:
82 |
83 | - `docker-compose pull`
84 | - `docker-compose up -d`
85 | - `docker image prune`
86 |
87 | # Backup and restore
88 |
89 | #### Backup
90 |
91 | Should be just a backup of the `uptimekuma_data` directory.
92 | 
93 | Not tested yet, but see the sketch below.
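94 | 
95 | A minimal sketch of what that backup could look like, assuming the compose above
96 | is in `~/docker/uptimekuma/`; the container is stopped first so the sqlite database
97 | is not being written to mid copy:
98 | 
99 | ```bash
100 | cd ~/docker/uptimekuma
101 | docker-compose stop uptimekuma
102 | # archive the data directory with the current date in the name
103 | tar czf uptimekuma_backup_$(date +%F).tar.gz uptimekuma_data/
104 | docker-compose start uptimekuma
105 | ```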
94 |
95 | #### Restore
96 |
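97 | Also untested, but presumably just the reverse - put the backed up directory in place
98 | before starting the container:
99 | 
100 | ```bash
101 | cd ~/docker/uptimekuma
102 | docker-compose stop uptimekuma
103 | # <date> is whatever archive is being restored
104 | tar xzf uptimekuma_backup_<date>.tar.gz
105 | docker-compose up -d
106 | ```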
--------------------------------------------------------------------------------
/watchtower/readme.md:
--------------------------------------------------------------------------------
1 | # Watchtower in docker
2 |
3 | ###### guide-by-example
4 |
5 | 
6 |
7 | # Purpose
8 |
9 | Automatic updates of docker containers.
10 |
11 | * [Github](https://github.com/containrrr/watchtower)
12 | * [DockerHub image used](https://hub.docker.com/r/containrrr/watchtower)
13 |
14 | Watchtower is an application that will monitor the running Docker containers
15 | and watch for changes to the images that those containers
16 | were originally started from. If watchtower detects that an image has changed,
17 | it will automatically restart the container using the new image.
18 |
19 | As of now, Watchtower needs to always pull images to know if they changed.
20 | This can be bandwidth intensive, so its scheduled checks should account for this.
21 |
22 | # Files and directory structure
23 |
24 | ```
25 | /home/
26 | └── ~/
27 | └── docker/
28 | └── watchtower/
29 | ├── .env
30 | └── docker-compose.yml
31 | ```
32 |
33 | * `.env` - a file containing environment variables for docker compose
34 | * `docker-compose.yml` - a docker compose file, telling docker how to run the container
35 |
36 | Only these two files must be provided.
37 |
38 | # docker-compose
39 |
40 | Scheduled to run every Saturday at midnight using an environment variable.
41 | Heads up that it is not the typical cron format,
42 | [seconds are the first field](https://pkg.go.dev/github.com/robfig/cron@v1.2.0?tab=doc#hdr-CRON_Expression_Format).
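43 | 
44 | A few example values in that six field format - second, minute, hour, day of month, month, day of week -
45 | to adjust in the `.env` below:
46 | 
47 | ```bash
48 | # every Saturday at midnight, the value used in this guide
49 | WATCHTOWER_SCHEDULE=0 0 0 * * SAT
50 | # every day at 04:30
51 | WATCHTOWER_SCHEDULE=0 30 4 * * *
52 | # every 6 hours, on the hour
53 | WATCHTOWER_SCHEDULE=0 0 */6 * * *
54 | ```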
43 |
44 | `docker-compose.yml`
45 | ```yml
47 | services:
48 |
49 | watchtower:
50 | image: containrrr/watchtower:latest
51 | container_name: watchtower
52 | hostname: watchtower
53 | restart: unless-stopped
54 | env_file: .env
55 | volumes:
56 | - /var/run/docker.sock:/var/run/docker.sock
57 | ```
58 |
59 | `.env`
60 | ```bash
61 | # GENERAL
62 | TZ=Europe/Bratislava
63 | 
63 | # WATCHTOWER
64 | WATCHTOWER_SCHEDULE=0 0 0 * * SAT
65 | WATCHTOWER_CLEANUP=true
66 | WATCHTOWER_TIMEOUT=30s
67 | WATCHTOWER_DEBUG=false
68 | WATCHTOWER_INCLUDE_STOPPED=false
69 | ```
70 |
71 | # Update
72 |
73 | * [watchtower](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/watchtower)
74 | updates itself automatically
75 |
76 | * manual image update
77 | `docker-compose pull`
78 | `docker-compose up -d`
79 | `docker image prune`
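80 | 
81 | Since watchtower does its thing silently in the background, checking up on what it did
82 | during the scheduled runs is just a matter of reading its container logs:
83 | 
84 | ```bash
85 | # show the last chunk of watchtower's log output
86 | docker logs --tail 100 watchtower
87 | ```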
80 |
--------------------------------------------------------------------------------
/watchyourlan/readme.md:
--------------------------------------------------------------------------------
1 | # WatchYourLAN
2 |
3 | ###### guide-by-example
4 |
5 | 
6 |
7 | # Purpose & Overview
8 |
9 | Monitor LAN with regular IP scans.
10 |
11 | * [Github](https://github.com/aceberg/WatchYourLAN)
12 |
13 | Simple webgui IP scanner that notifies when a new unknown MAC address appears.
14 |
15 | Backend is written in Go.
16 |
17 | # Files and directory structure
18 |
19 | ```
20 | /home/
21 | └── ~/
22 | └── docker/
23 | └── watchyourlan/
24 |             ├── 🗁 watchyourlan_data/
25 |             ├── 🗋 .env
25 | └── 🗋 docker-compose.yml
26 | ```
27 | * `watchyourlan_data/` - persistent data
28 | * `.env` - a file containing environment variables for docker compose
28 | * `docker-compose.yml` - a docker compose file, telling docker how to run the container
29 |
30 | # Compose
31 |
32 | Of note is the `network_mode` being set to `host`,
33 | which means that the container shares the IP with the docker-host
34 | and is on the docker-host's network, likely the main network,
35 | not some virtual docker network.
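36 | 
37 | A quick way to confirm that, once the container runs, is that the web GUI answers directly
38 | on the docker-host IP and the `GUIPORT` from the `.env`, without any published ports:
39 | 
40 | ```bash
41 | # should print 200 if the GUI is up, straight from the docker-host IP
42 | curl -s -o /dev/null -w "%{http_code}\n" http://10.0.19.4:8840
43 | ```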
36 |
37 | `docker-compose.yml`
38 | ```yml
39 | services:
40 |
41 | watchyourlan:
42 | image: aceberg/watchyourlan
43 | container_name: watchyourlan
44 | hostname: watchyourlan
45 | network_mode: host
46 | env_file: .env
47 | restart: unless-stopped
48 | volumes:
49 | - ./watchyourlan_data:/data
50 | ```
51 |
52 | `.env`
53 | ```bash
54 | # GENERAL
55 | TZ=Europe/Bratislava
56 |
57 | IFACE=ens33
58 | GUIIP=10.0.19.4
59 | GUIPORT=8840
60 | THEME=darkly
61 | ```
62 |
63 | To get the values for `IFACE` and `GUIIP` for the `.env` file,
64 | ssh to the docker-host and run `ip r`, see the example below.
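65 | 
66 | Example output - the gateway address here is made up - with the two values to pick out:
67 | 
68 | ```bash
69 | $ ip r
70 | default via 10.0.19.1 dev ens33 proto static
71 | 10.0.19.0/24 dev ens33 proto kernel scope link src 10.0.19.4
72 | 
73 | # IFACE = the interface after "dev"  -> ens33
74 | # GUIIP = the address after "src"    -> 10.0.19.4
75 | ```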
65 |
66 | # Reverse proxy
67 |
68 | Caddy v2 is used, details
69 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
70 |
71 | Because of the container running in a host network mode, the IP of the docker-host
72 | is used instead of just some container hostname.
73 |
74 | `Caddyfile`
75 | ```php
76 | lan.{$MY_DOMAIN} {
77 | reverse_proxy 10.0.19.4:8840
78 | }
79 | ```
80 |
81 | # Notifications
82 |
83 | WatchYourLAN uses [Shoutrrr](https://containrrr.dev/shoutrrr/v0.5/services/generic/)
84 | for notifications.
85 | If using [ntfy like me](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/gotify-ntfy-signal),
86 | one just uses generic webhook notation of shoutrrr.
87 |
88 | In Config:
89 |
90 | * Shoutrrr URL: `generic+https://ntfy.example.com/LAN_home`
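91 | 
92 | Before blaming shoutrrr, the ntfy topic itself can be tested with a plain POST,
93 | using the same example domain and topic as above:
94 | 
95 | ```bash
96 | # a test message should pop up for anyone subscribed to the LAN_home topic
97 | curl -d "WatchYourLAN test" https://ntfy.example.com/LAN_home
98 | ```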
91 |
92 | # Troubleshooting
93 |
94 | # Update
95 |
96 | Manual image update:
97 |
98 | - `docker compose pull`
99 | - `docker compose up -d`
100 | - `docker image prune`
101 |
102 |
--------------------------------------------------------------------------------
/wg-easy/readme.md:
--------------------------------------------------------------------------------
1 | # wg-easy
2 |
3 | ###### guide-by-example
4 |
5 | 
6 |
7 | # Purpose & Overview
8 |
9 | Web GUI for Wireguard VPN.
10 |
11 | * [Github](https://github.com/wg-easy/wg-easy)
12 |
13 | Wireguard is the best VPN solution right now, but it's not noob friendly or easy.
14 | WG-easy tries to solve this.
15 |
16 | Written in javascript.
17 |
18 | # Files and directory structure
19 |
20 | ```
21 | /home/
22 | └── ~/
23 | └── docker/
24 | └── wg-easy/
25 | ├── 🗁 wireguard_data/
26 | ├── 🗋 .env
27 | └── 🗋 docker-compose.yml
28 | ```
29 | * `wireguard_data/` - a directory with wireguard config files
30 | * `.env` - a file containing environment variables for docker compose
31 | * `docker-compose.yml` - a docker compose file, telling docker how to run the container
32 |
33 | # Compose
34 |
35 | `docker-compose.yml`
36 | ```yml
37 | services:
38 |
39 | wg-easy:
40 | image: ghcr.io/wg-easy/wg-easy:14
41 | container_name: wg-easy
42 | hostname: wg-easy
43 | restart: unless-stopped
44 | env_file: .env
45 | volumes:
46 | - ./wireguard_data:/etc/wireguard
47 | ports:
48 | - "51820:51820/udp" # vpn traffic
49 | - "51821:51821" # web interface
50 | cap_add:
51 | - NET_ADMIN
52 | - SYS_MODULE
53 | sysctls:
54 | - net.ipv4.ip_forward=1
55 | - net.ipv4.conf.all.src_valid_mark=1
56 |
57 | networks:
58 | default:
59 | name: $DOCKER_MY_NETWORK
60 | external: true
61 | ```
62 |
63 | `.env`
64 | ```bash
65 | # GENERAL
66 | DOCKER_MY_NETWORK=caddy_net
67 | TZ=Europe/Bratislava
68 |
69 | #WG-EASY
70 | WG_HOST=vpn.example.com # can also be just public IP
71 | # PASSWORD=supersecretpassword
72 | PASSWORD_HASH=$$2a$$12$$52a84HoSf99aLL7lmt9NsO0hlhZmGuJnyBK.bToiSdbQhTvMjV3ce
73 | WG_PORT=51820
74 | WG_DEFAULT_ADDRESS=10.221.221.x
75 | WG_ALLOWED_IPS=192.168.1.0/24
76 | WG_DEFAULT_DNS=
77 | ```
78 |
79 | In version 14 `PASSWORD` as an env variable is no longer allowed
80 | and `PASSWORD_HASH` must be used.
81 | It is [a bcrypt hash](https://github.com/wg-easy/wg-easy/blob/master/How_to_generate_an_bcrypt_hash.md)
82 | of the password; in compose it must be without quotation marks
83 | and any `$` symbol needs to be doubled - replaced with `$$`.
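84 | 
85 | The linked guide generates the hash with the `wgpw` helper shipped inside the image,
86 | roughly like below; the `sed` takes care of doubling the `$` signs,
87 | and the surrounding quotes in the output have to go:
88 | 
89 | ```bash
90 | # prints something like PASSWORD_HASH='$2b$12$...'
91 | docker run --rm ghcr.io/wg-easy/wg-easy wgpw 'supersecretpassword'
92 | 
93 | # same, but with every $ already doubled for compose
94 | docker run --rm ghcr.io/wg-easy/wg-easy wgpw 'supersecretpassword' | sed 's/\$/$$/g'
95 | ```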
84 |
85 | DNS is left unset as I had issues when it was set, but it is worth trying;
86 | set it to an IP address where a DNS server answers on port 53, then test from a client with nslookup.
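87 | 
88 | A quick test from a connected client, with the server IP being whatever ends up in `WG_DEFAULT_DNS`,
89 | `192.168.1.1` here just as a placeholder:
90 | 
91 | ```bash
92 | # query that DNS server directly instead of the system resolver
93 | nslookup example.com 192.168.1.1
94 | ```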
87 |
88 | # Reverse proxy
89 |
90 | Caddy v2 is used, details
91 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
92 |
93 | `Caddyfile`
94 | ```php
95 | vpn.{$MY_DOMAIN} {
96 | reverse_proxy wg-easy:51821
97 | }
98 | ```
99 |
100 | # First run
101 |
102 | 
103 |
104 | Login with the password whose bcrypt hash is set in the `.env` file.
105 | Add user, download config, use it.
106 |
107 | # Troubleshooting
108 |
109 | # Update
110 |
111 | Manual image update:
112 |
113 | - `docker compose pull`
114 | - `docker compose up -d`
115 | - `docker image prune`
116 |
117 |
--------------------------------------------------------------------------------
/windows_package_managers/readme.md:
--------------------------------------------------------------------------------
1 | # Windows Package Managers
2 |
3 | ###### guide-by-example
4 |
5 | # Purpose & Overview
6 |
7 | Install and manage software on windows through command line.
8 |
9 | * winget
10 | * chocolatey
11 | * scoop
12 |
13 |
14 | # Winget
15 |
16 | `winget search irfanview`
17 | `winget install irfanview`
18 |
19 | \+ Comes preinstalled with Windows 10+
20 | \- Feels like an unmanaged afterthought
21 |
22 | # Scoop
23 |
24 | ### User
25 |
26 | * non-admin powershell terminal
27 | * `Set-ExecutionPolicy RemoteSigned -Scope CurrentUser`
28 | * `irm get.scoop.sh | iex`
29 | * `scoop install git sudo`
30 | * `scoop bucket add extras`
31 | * `scoop bucket add sysinternals`
32 | * `scoop bucket add nonportable`
33 |
34 | ### Machine-wide
35 |
36 | * admin powershell terminal
37 | * `Set-ExecutionPolicy Bypass`
38 | * `iex "& {$(irm get.scoop.sh)} -RunAsAdmin"`
39 | * `scoop install git sudo --global`
40 | * `scoop bucket add extras`
41 | * `scoop bucket add sysinternals`
42 | * `scoop bucket add nonportable`
43 |
44 | ### Useful
45 |
46 | * search - `scoop search mpv`
47 | * install - `scoop install mpv --global`
48 | * search for available packages - [scoop.sh](https://scoop.sh/)
49 |
50 | # Choco
51 |
52 | `Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))`
53 |
54 |
55 |
--------------------------------------------------------------------------------