├── NUT-Network UPS Tools └── readme.md ├── README.md ├── _knowledge-base ├── dhcp.md ├── dns.md ├── filesystems_storage.md ├── ipv6.md ├── linux_terminal_get_comfortable.md ├── mounting_network_share_linux.md ├── port_forwarding.md └── vlans.md ├── arch_linux_host_install └── readme.md ├── arch_raspberry_pi └── readme.md ├── audiobookshelf └── readme.md ├── beginners-speedrun-selfhosting └── readme.md ├── bookstack └── readme.md ├── borg_backup └── readme.md ├── caddy_v2 └── readme.md ├── checkmk └── readme.md ├── ddclient └── readme.md ├── disk_NAS_bench_Fio ├── crystaldisk.fio ├── lawrance_script.sh ├── readme.md ├── test.sh └── universal_linux.fio ├── dnsmasq └── readme.md ├── docker-basics-and-troubleshoot └── readme.md ├── esxi └── readme.md ├── free_cloud_hosting_VPS └── readme.md ├── frigate └── readme.md ├── get_good_linux_terminal └── readme.md ├── gotify-ntfy-signal └── readme.md ├── home_assistant └── readme.md ├── homer ├── assets │ ├── favicon.png │ ├── logo.png │ └── tools │ │ ├── bitwarden.png │ │ ├── bookstack.png │ │ ├── grafana.png │ │ ├── nextcloud.png │ │ └── portainer.png └── readme.md ├── jellyfin └── readme.md ├── kopia_backup ├── kopia_cli_deploy_win │ ├── DEPLOY.cmd │ ├── kopia.exe │ ├── kopia_backup_schedule.xml │ └── kopia_backup_scipt.ps1 ├── kopia_server_deploy_service_win │ ├── DEPLOY.cmd │ ├── Kopia.url │ ├── kopia.exe │ ├── kopia_server_start.cmd │ └── shawl.exe └── readme.md ├── lubelog └── readme.md ├── mealie └── readme.md ├── meshcrentral └── readme.md ├── mikrotik ├── cli_basics.md ├── dumb_switch_with_wifi_AP.md └── hairpin_nat_reflection.md ├── minecraft └── readme.md ├── mounting_network_share_linux └── readme.md ├── network-knowledge-base ├── asus.png ├── dhcp.md ├── dns.md ├── port_forwarding.md ├── readme.md └── tplink.png ├── nextcloud ├── nginx.conf └── readme.md ├── opnsense └── readme.md ├── phpipam └── readme.md ├── port_forwarding_guide └── readme.md ├── portainer └── readme.md ├── prometheus_grafana_loki ├── 
dashboards │ ├── docker_containers.json │ ├── docker_host.json │ └── monitoring_services.json └── readme.md ├── proxmox └── readme.md ├── qbittorrent └── readme.md ├── rustdesk └── readme.md ├── snipeit └── readme.md ├── squid └── readme.md ├── syncthing └── readme.md ├── trueNASscale └── readme.md ├── umami └── readme.md ├── unifi └── readme.md ├── uptime-kuma └── readme.md ├── vaultwarden └── readme.md ├── watchtower └── readme.md ├── watchyourlan └── readme.md ├── wg-easy └── readme.md ├── windows_package_managers └── readme.md ├── wireguard └── readme.md └── zammad └── readme.md /NUT-Network UPS Tools/readme.md: -------------------------------------------------------------------------------- 1 | # NUT - Network UPS Tools 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/TAIgm4Y.png) 6 | 7 | WORK IN PROGRESS
8 | WORK IN PROGRESS
9 | WORK IN PROGRESS
10 | 11 | # Purpose & Overview 12 | 13 | UPS - uninterruptible power supply management. Huge driver support. 14 | 15 | 16 | * [Official site](https://networkupstools.org/index.html) 17 | * [Github](https://github.com/networkupstools/nut) 18 | * [Archlinux Wiki](https://wiki.archlinux.org/title/Network_UPS_Tools) 19 | 20 | The main objective is to be able to properly shut down a larger number of devices 21 | when power goes out and the UPS battery starts to get low. 22 | 23 | NUT is a collection of programs and drivers, mostly written in C. 24 | For the web GUI it uses the Apache web server. 25 | 26 | [Techno Tim youtube video](https://www.youtube.com/watch?v=vyBP7wpN72c) 27 | 28 | As per the video I am testing this running on rpi. 29 | [Here's](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/arch_raspberry_pi) 30 | arch on rpi setup. 31 | 32 | WORK IN PROGRESS
33 | WORK IN PROGRESS
34 | WORK IN PROGRESS
35 | 36 | --- 37 | 38 | # Files and directory structure 39 | 40 | ``` 41 | /home/ 42 | └── ~/ 43 | └── docker/ 44 | └── rustdesk/ 45 | ├── 🗁 rustdesk_data/ 46 | ├── 🗋 .env 47 | └── 🗋 docker-compose.yml 48 | ``` 49 | 50 | * `rustdesk_data/` - persistent data, contains sqlite database and the keys 51 | * `.env` - a file containing environment variables for docker compose 52 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers 53 | 54 | You only need to provide the two files.
55 | The directory is created by docker compose on the first run. 56 | 57 | # docker-compose 58 | 59 | Using [S6-overlay](https://github.com/rustdesk/rustdesk-server#s6-overlay-based-images) 60 | based image.
61 | It's a simpler, single container approach. The 62 | [complexity](https://github.com/rustdesk/rustdesk-server#classic-image) 63 | of rustdesk's `hbbs` server and `hbbr` relay is hidden. 64 | 65 | There is no network section since there is no HTTP traffic that would need a reverse proxy, yet.
66 | So just mapped ports on to docker host to do their thing. 67 | 68 | `docker-compose.yml` 69 | ```yml 70 | services: 71 | rustdesk: 72 | image: rustdesk/rustdesk-server-s6:1.1.7-1 73 | container_name: rustdesk 74 | hostname: rustdesk 75 | restart: unless-stopped 76 | env_file: .env 77 | ports: 78 | - "21116:21116" 79 | - "21115:21115" 80 | - "21116:21116/udp" 81 | - "21117:21117" 82 | - "21118:21118" 83 | - "21119:21119" 84 | volumes: 85 | - ./rustdesk_data:/data 86 | ``` 87 | 88 | `.env` 89 | ```bash 90 | # GENERAL 91 | TZ=Europe/Bratislava 92 | 93 | # RUSTDESK 94 | RELAY=rust.example.com:21117 95 | ENCRYPTED_ONLY=1 96 | # KEY_PRIV= 97 | # KEY_PUB= 98 | ``` 99 | 100 | In the `.env` file encryption is enabled, so that only clients that have 101 | correct public key will be allowed access to the rustdesk server.
102 | The keys are generated on the first run of the compose and can be found in 103 | the `rustdesk_data` directory. 104 | Once generated they should be added to the `.env` file for easier migration. 105 | The public key needs to be distributed with the client apps installation. 106 | 107 | # Port forwarding 108 | 109 | As can be seen in the compose: 110 | 111 | * **21115 - 21119** TCP need to be forwarded to the docker host
112 | * **21116** is TCP **and UDP** 113 | 114 | 21115 is used for the NAT type test, 115 | 21116/UDP is used for the ID registration and heartbeat service, 116 | 21116/TCP is used for TCP hole punching and connection service, 117 | 21117 is used for the Relay services, 118 | and 21118 and 21119 are used to support web clients.
119 | [source](https://rustdesk.com/docs/en/self-host/install/) 120 | 121 | --- 122 | 123 | ![interface-pic](https://i.imgur.com/CK6pRyq.png) 124 | 125 | # The installation on clients 126 | 127 | * Download and install the client apps from [the official site](https://rustdesk.com/). 128 | * Three dots > ID/Relay Server 129 | * `ID Server`: rust.example.com 130 | * `Key`: *\* 131 | * The green dot at the bottom should be green saying "ready". 132 | 133 | ![settings-pic](https://i.imgur.com/lX6egMH.png) 134 | 135 | **On windows** one 136 | [can deploy](https://rustdesk.com/docs/en/self-host/install/#put-config-in-rustdeskexe-file-name-windows-only) 137 | client with **pre-sets** by renaming the installation file to: 138 | `rustdesk-host=,key=.exe` 139 | 140 | example: `rustdesk-host=rust.example.com,key=3AVva64bn1ea2vsDuOuQH3i8+2M=.exe` 141 | 142 | If by chance the public key contains symbols not usable in windows filenames, 143 | down the container, delete the files `id_ed25519` and `id_ed25519.pub`, 144 | up the container and try with the new keys. 145 | 146 | # Extra info 147 | 148 | * You really really **really want to be using domain and not your public IP** 149 | when installing clients and setting ID server. That `rust.example.com` 150 | can be changed to point at a different IP any time you want. Hard set IP not. 151 | * Can do `tcpdump -n udp port 21116` on a docker host to **see heartbeat** udp traffic. 152 | Seems machines report-in every \~13 seconds. 153 | * on **windows** a **service** named `rustdesk` is enabled. 154 | Disable it if the machine should be accessible only on demand, 155 | when someone first runs rustdesk manually.
156 | In powershell - `Set-Service rustdesk -StartupType Disabled` 157 | * One can relatively easily 158 | **hardcode server url and pub key in to an executable** using 159 | [github actions.](https://rustdesk.com/docs/en/self-host/hardcode-settings/)
160 | Tested it and it works. But it seems you can only do a workflow run of the nightly build, 161 | meaning all the latest stuff added is included, which means a higher chance of bugs.
162 | Make sure you do step *"Enable upload permissions for workflows"*, 163 | before you run the workflow. 164 | * Questions about issues with selfhosting are **not answered** on github - 165 | [#763](https://github.com/rustdesk/rustdesk/discussions/763), 166 | next to try is their [discord](https://discord.com/invite/nDceKgxnkV) or 167 | [subreddit](https://www.reddit.com/r/rustdesk/). 168 | * [FAQ](https://github.com/rustdesk/rustdesk/wiki/FAQ) 169 | * How does [rustdesk work?](https://github.com/rustdesk/rustdesk/wiki/How-does-RustDesk-work%3F) 170 | 171 | ![logo](https://i.imgur.com/ptfVMtJ.png) 172 | 173 | # Trouble shooting 174 | 175 | --- 176 | 177 | #### If just one machine is having issues. 178 | 179 | uninstall, plus delete: 180 | 181 | * `C:\Windows\ServiceProfiles\LocalService\AppData\Roaming\RustDesk` 182 | * `%AppData%\RustDesk` 183 | 184 | Restart. Reinstall.
185 | Do not use the installer you used before, **download** the latest one from the site. 186 | 187 | --- 188 | 189 | #### Error - Failed to connect to relay server 190 | 191 | * I had the wrong url set as `RELAY` in the `.env` 192 | * if the url is correct I would test if port 21117 tcp forwards 193 | 194 | --- 195 | 196 | #### Investigate port forwarding 197 | 198 | Install netcat and tcpdump on the docker host. 199 | 200 | * docker compose down the rustdesk container so that the ports are free to use 201 | * start a small netcat server listening on whichever port we test
202 | `sudo nc -u -vv -l -p 21116`
203 | the `-u` means udp traffic, remove it to test tcp 204 | * on a machine somewhere else in the world, not on the same network, try 205 | `nc -u the_public_ip_goes_here 21116` 206 | 207 | If you write something and press enter, it should appear on the other machine, confirming 208 | that port forwarding works.
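That covers UDP 21116; the TCP ports can be checked in one sweep from that same outside machine. A minimal bash sketch, with `rust.example.com` standing in for your domain or public IP:

```bash
#!/usr/bin/env bash
# Probe the rustdesk TCP ports from outside the network.
# /dev/tcp is a bash builtin feature, so no netcat is needed on this side.
sweep_ports() {
  local host=$1 port
  for port in 21115 21116 21117 21118 21119; do
    if timeout 3 bash -c "</dev/tcp/${host}/${port}" 2>/dev/null; then
      echo "tcp/${port} open"
    else
      echo "tcp/${port} closed"
    fi
  done
}

sweep_ports rust.example.com   # placeholder, use your domain or public IP
```

Any port reported closed while the container is up points at a missing or wrong forward rule for that port.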
209 | Another useful command is `tcpdump -n udp port 21116`
210 | When port forwarding works, one should see heartbeat chatter, 211 | as machines with installed rustdesk are announcing themselves every \~13 seconds. 212 | 213 | --- 214 | 215 | # Manual image update: 216 | 217 | - `docker-compose pull`
218 | - `docker-compose up -d`
219 | - `docker image prune` 220 | 221 | # Backup and restore 222 | 223 | #### Backup 224 | 225 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup) 226 | that makes a daily snapshot of the entire directory. 227 | 228 | #### Restore 229 | 230 | * down the rustdesk container `docker-compose down`
231 | * delete the entire rustdesk directory
232 | * from the backup copy back the rustdesk directory
233 | * start the containers `docker-compose up -d` 234 | 235 | -------------------------------------------------------------------------------- /_knowledge-base/dhcp.md: -------------------------------------------------------------------------------- 1 | # DHCP 2 | 3 | DHCP is a service that uppon request assigns IP address and other paramaters 4 | to a machine on a network. 5 | 6 | It solves the issue of having to manually set up every device 7 | 8 | # What is a port 9 | 10 | An IP address identifies specific machine on a network, 11 | **a port** identifies specific **application** running on that machine. 12 | 13 | A port is **a number** between [1 - 65,535] that is assigned by the OS 14 | to any application that wants to communicate over the network. 15 | Can be choosen at random, can be hard set. 16 | 17 | ![pic_port_header](https://i.imgur.com/TM0pOQN.png) 18 | 19 | # How Router's NAT works 20 | -------------------------------------------------------------------------------- /_knowledge-base/dns.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/_knowledge-base/dns.md -------------------------------------------------------------------------------- /_knowledge-base/filesystems_storage.md: -------------------------------------------------------------------------------- 1 | https://www.youtube.com/watch?v=NQJkTiLXfgs 2 | -------------------------------------------------------------------------------- /_knowledge-base/ipv6.md: -------------------------------------------------------------------------------- 1 | https://gist.github.com/timothyham/dd003dbad5614b425a8325ec820fd785?ck_subscriber_id=512831968 2 | -------------------------------------------------------------------------------- /_knowledge-base/linux_terminal_get_comfortable.md: -------------------------------------------------------------------------------- 1 | # 
Get comfortable in Linux terminal 2 | 3 | ###### guide-by-example 4 | 5 | 6 | WORK IN PROGRESS
7 | WORK IN PROGRESS
8 | WORK IN PROGRESS
9 | 10 | You want to use the terminal in Linux.
11 | You want it to not suck donkey balls. 12 | 13 | ### Mindset 14 | 15 | It is easy.
16 | Like genuinely it is outrageously trivial. 17 | 18 | * moving between directories 19 | * editing files 20 | * executing some command that some guide tells you 21 | 22 | ### Core aspects to make comfortable 23 | 24 | * Terminal 25 | * ssh in to linux 26 | * moving between directories 27 | * editing files 28 | 29 | ### Advanced aspects 30 | 31 | * shell 32 | * services and systemd 33 | * logs 34 | * scheduled stuff 35 | * filesystems and mounting 36 | 37 | # Terminal 38 | 39 | The application that you run to connect remotely to a server.
40 | On Windows you want to use MobaXterm. Learn how to use it well. 41 | 42 | Tips 43 | 44 | * set the font globally to something larger, like 14
45 | Settings > Configuration > Terminal > Default Font Settings
46 | be aware that once set it only applies to newly created sessions, 47 | if there is a session in left pane it already has own settings that needs changing 48 | * be aware of copy paste in it, middle click right click menu 49 | * 50 | 51 | # SSH 52 | 53 | You connect to the linux server only sometimes, but when you do 54 | you want it to be instant 55 | -------------------------------------------------------------------------------- /_knowledge-base/mounting_network_share_linux.md: -------------------------------------------------------------------------------- 1 | # Mounting Network Shares in Linux 2 | 3 | possible ways to mount stuff - fstab, autofs, systemd, docker volumes if its for docker 4 | 5 | possible types of shares 6 | 7 | * smb/samba/cifs - the most common share, support on all OS 8 | * nfs - mostly used between linux machines, bit better performance 9 | * iSCSI - the share is mounted as a block device as if it was really a disk, 10 | great performance for small files 11 | 12 | More on setup of these shares is in 13 | [TrueNAS Scale guide.](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/trueNASscale) 14 | 15 | # smb/samba/cifs 16 | 17 | [Arch wiki](https://wiki.archlinux.org/title/samba#As_systemd_unit) 18 | on samba systemd mount 19 | 20 | * you will create two files in `/etc/systemd/system` 21 | * one will have extension `.mount` the other `.automount` 22 | * the name will be the same for both and it MUST correspond with the planned 23 | mount path. Slashes `/` being replaced by dashes `-`.
24 | So if the share should be at `/mnt/mirror` the files are named 25 | `mnt-mirror.mount` and `mnt-mirror.automount` 26 | * copy paste the bellow content, edit as you see fit, 27 | changing description, ip address and path, user and password,.. 28 | * linux command `id` will show your current user `uid` and `gid` 29 | * after ther changes execute command `sudo systemctl enable mnt-mirror.automount` 30 | This will setup mounting that does not fail on boot if there are network issues, 31 | and really mounts the target only on request 32 | 33 | `mnt-mirror.mount` 34 | ```ini 35 | [Unit] 36 | Description=3TB truenas mirror mount 37 | 38 | [Mount] 39 | What=//10.0.19.11/Mirror 40 | Where=/mnt/mirror 41 | Type=cifs 42 | Options=rw,username=kopia,password=aaa,file_mode=0644,dir_mode=0755,uid=1000,gid=1000 43 | 44 | [Install] 45 | WantedBy=multi-user.target 46 | ``` 47 | 48 | `mnt-mirror.automount` 49 | ```ini 50 | [Unit] 51 | Description=3TB truenas mirror mount 52 | 53 | [Automount] 54 | Where=/mnt/mirror 55 | 56 | [Install] 57 | WantedBy=multi-user.target 58 | ``` 59 | 60 | ### Useful commants 61 | 62 | `smbclient -L 10.0.19.11` - list shares mounted from the ip 63 | `systemctl list-units -t mount --all` 64 | -------------------------------------------------------------------------------- /_knowledge-base/port_forwarding.md: -------------------------------------------------------------------------------- 1 | # Port Forwarding Guide 2 | 3 | # What is a port 4 | 5 | An IP address identifies specific machine on a network, 6 | **a port** identifies specific **application** running on that machine. 7 | 8 | A port is **a number** between [1 - 65,535] that is assigned by the OS 9 | to any application that wants to communicate over the network. 10 | Can be choosen at random, can be hard set. 11 | 12 | ![pic_port_header](https://i.imgur.com/TM0pOQN.png) 13 | 14 | # How Router's NAT works 15 | 16 | * **Allows outgoing** traffic on any port. 
17 | * **Drops incoming** traffic unless it is a response to communication 18 | initialized from the inside. 19 | 20 | This is not really some security effort, the router just literally has no idea 21 | where to send it... 22 | 23 |
24 | More unnecessary details 25 | 26 | NAT is implemented in your router.
27 | It makes your **LAN side** devices able to connect with the 28 | outside world - **WAN side,** through one public IP. 29 | "The internet" *sees* just one device it communicates with, 30 | at least at this network level. 31 | 32 | ![pic_nat](https://i.imgur.com/Nk7u4MP.png) 33 | 34 | 35 | #### LAN side initialized communication 36 | 37 | * You visit a website, let's say `youtube.com` 38 | * Your browser has some random port assigned by the OS, 39 | this will be the **source port**. 40 | The local IP address of the machine it runs on will be the **source IP** 41 | * Browser/OS asks DNS servers for the IP address of `youtube.com`, 42 | the answer is `142.250.191.78` - **destination IP** 43 | * Youtube is a website, standard for https is using port `443` - **destination port.** 44 | * All required information is there. Destination[ip & port] Source[ip & port]. 45 | * Packets are sent. 46 | * The browser now waits for a response at that random port. 47 | * Since the router is the **default gateway**, that's where the packets arrive. 48 | * The router saves all that info in its state table for a time, could be seconds, 49 | could be days depending on protocol and [state](https://serverfault.com/a/481909). 50 | * The router doing the NAT now replaces the **source IP address** of that one machine 51 | with its own WAN IP address, 52 | it might also change the source port but that is not as important, 53 | and sends it out in the direction of the **destination IP**. 54 | * A response comes back, the router knows it is a response because it's coming from the 55 | IP that it contacted recently and the **destination port** it uses is the same 56 | number that was used as the source port. 57 | * It checks the state table for the **original source IP and source port**, 58 | puts them in, now as destination, and off the packets go. 59 | * The browser receives the response on its assigned port, from the IP it contacted.
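The bookkeeping above can be sketched as a toy script. Everything here is illustrative - the addresses come from this walkthrough plus a made-up WAN IP - it is not real router code:

```bash
#!/usr/bin/env bash
# Toy model of the router's NAT state table, purely illustrative.
declare -A state_table          # wan_port -> "lan_ip:lan_port"
WAN_IP="203.0.113.7"            # stand-in for the router's public IP

# LAN side initialized: remember who opened the connection, rewrite the
# source to the router's WAN IP and a port the router picked.
outbound() {                    # outbound <lan_ip> <lan_port> <dst_ip:port>
  WAN_PORT=40001                # pretend this WAN port was free
  state_table[$WAN_PORT]="$1:$2"
  echo "out: $WAN_IP:$WAN_PORT -> $3"
}

# WAN side initialized: forward only when the state table (or a port
# forward rule) says where it belongs, otherwise drop it.
inbound() {                     # inbound <wan_port>
  if [[ -n "${state_table[$1]:-}" ]]; then
    echo "in: port $1 -> ${state_table[$1]}"
  else
    echo "in: port $1 dropped, router has no idea where to send it"
  fi
}

outbound 192.168.1.50 51515 142.250.191.78:443
inbound 40001   # the reply, matches the state table, forwarded
inbound 8096    # unsolicited request, no state, dropped
```

A port forwarding rule is effectively a permanent, hand-written entry in that table.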
60 | 61 | #### WAN side initialized communication 62 | 63 | * Want to connect to a jellyfin server to watch some movies from browser. 64 | * You know the IP address or the url. 65 | You also expect it to run on default port jellyfin uses `8096` 66 | * The browser makes the request. 67 | * The router sees it coming at port `8096`, but where does it send it? 68 | There is nothing in the state table, that would tell it. 69 | * So it drops it, unless there is a port forwarding rule that says 70 | that if something comes to port `8096` send it to this local ip address 71 | and at that port... 72 | 73 | Youtube explanation videos if you want deeper dive: 74 | 75 | * [NAT - Network Address Translation.](https://www.youtube.com/watch?v=RG97rvw1eUo) 76 | * [Public IP vs. Private IP and Port Forwarding](https://www.youtube.com/watch?v=92b-jjBURkw) 77 | 78 | --- 79 | --- 80 | 81 |
82 | 83 | # Double NAT (CGNAT) 84 | 85 | 86 | 87 | **Bad News.**
88 | It is very likely that even when you do everything 100% correctly, 89 | you still won't get your ports open.
90 | The reason being that your machine is behind double NAT. 91 | **Your ISP** - internet service provider, has you behind its own NAT device 92 | and the WAN side of your router is not really "the internet", but the ISP's LAN side. 93 | 94 | A way to check is to look up your [public IP online](http://icanhazip.com/), 95 | then log in to your router and find the IP address of your WAN interface. 96 | If they are the same then you are 100% not behind double NAT and port forwarding 97 | will work straight away.
98 | If they differ it does not automatically mean that it won't work, 99 | just that you will not know till you try. 100 | 101 | But if you are failing to make port forwarding work, it's time to call your ISP 102 | and ask how much a public IP would cost. 103 | It can be a few extra € to your monthly bill. 104 | 105 | # Port forwarding 106 | 107 | Finally. Right?! 108 | 109 | You want to host something, let's say a minecraft server.
110 | You set it all up, you give your public IP address to others and they try to connect, 111 | but your router blocks them. It's a connection initialized from the outside. 112 | 113 | So you need to tell your router/firewall to let through the traffic that comes to 114 | minecraft's default port - `25565` and where to send it on your LAN, 115 | to the local IP of your minecraft server. 116 | 117 | ![diagram_port_forw_minecraft](https://i.imgur.com/PNR32Mz.png) 118 | 119 | #### Examples of port forward rule 120 | 121 | * [Asus](https://i.imgur.com/brs9Mr6.png) 122 | * [TPlink](https://i.imgur.com/FNS2xCj.png) 123 | * [ubiquiti](https://i.imgur.com/D04HVJc.png) 124 | 125 | How to actually create that port forward rule depends on the router/firewall model. 126 | 127 | Generally what to expect 128 | 129 | * It would be called port forwarding, or a virtual server, or be under a NAT section. 130 | * **The port** on which to expect traffic is obviously core information, 131 | sometimes it is called a service port or an external port. 132 | * **IP address** is required, so that the router knows where on the LAN side 133 | to send traffic that comes to that external port. 134 | * The setup might offer an option for **internal port**, 135 | this can often be left empty, or the same port number is put there.
136 | It is there to give you the option to run stuff on your LAN network on a different 137 | port than the one you open to the world. Like your webserver is `80`, 138 | but you open to the world port `12250` or whatever. 139 | * **The protocol** - TCP or UDP, if dunno **select both / all**
140 | You don't need to fear you are opening too much, if there is no service 141 | running on that port for that protocol it is same as a closed port. 142 | 143 | # Testing if port forwarding works 144 | 145 | ![port_check_web](https://i.imgur.com/d5fNnCX.png) 146 | 147 | First you need to understand that unless there is some application running 148 | that answers on that port, all tests will come back as - closed port. 149 | 150 | For testing we can use websites that will test if a port is open at specified public IP. 151 | 152 | * [portchecker.co](https://portchecker.co/) 153 | * [yougetsignal.com](https://www.yougetsignal.com/tools/open-ports/) 154 | 155 | ## Windows 156 | 157 | * [Find the local ip address](https://www.youtube.com/results?search_query=how+to+find+ip+address+windows) 158 | of the machine you are planning to use for the test. 159 | * Follow the instruction in Port forwarding section of this guide 160 | and forward port `666` to the IP of that machine. 161 | * Download [Port Listener](https://www.rjlsoftware.com/software/utility/portlistener/). 162 | * Run Port Listener, set port `666`, press Start. 163 | * If a windows firewall notification pops up with a question, answer yes. 164 | * Go to [portchecker.co](https://portchecker.co/), set the port to 666 and press Check. 165 | * You get the answer if the internet sees your port as open or closed. 
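If a second machine with a shell is reachable outside your network, the checker websites can be skipped entirely; bash alone can probe a TCP port. A sketch (the IP is a documentation placeholder, substitute your real public IP):

```bash
#!/usr/bin/env bash
# Probe one TCP port from the outside, same open/closed verdict the
# checker websites give. /dev/tcp is a bash builtin, nothing to install.
probe() {                       # probe <host> <port>
  if timeout 5 bash -c "</dev/tcp/$1/$2" 2>/dev/null; then
    echo "$2 open"
  else
    echo "$2 closed"
  fi
}

probe 203.0.113.7 666           # placeholder IP, use your real public IP
```

As with the websites, it only reports open while a listener (like Port Listener above) is actually running on the forwarded port.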
166 | 167 | ![port_listener](https://i.imgur.com/A9fxIbi.png) 168 | 169 | In windows it is also pretty useful knowing that you can go 170 | `Task Manager` > `Performance` > `Open Resource Monitor` > `Network` Tab 171 | 172 | There unroll `Listening Ports` 173 | and you should find there - `listener.exe` with port `666` and firewall status 174 | should be *allowed, not restricted* 175 | 176 | ![windows_port_check](https://i.imgur.com/putdef0.png) 177 | 178 | ## Linux 179 | 180 | * find your ip address - `ip r` 181 | * Follow the instruction in Port forwarding section of this guide 182 | and forward port `666` to the IP of that machine. 183 | * try running netcat - `nc `
184 | * if it is not installed, get it for your distro, 185 | for arch it's `openbsd-netcat`, for debian it's `netcat-openbsd`, 186 | for fedora it's `netcat` 187 | * execute `sudo nc -vv -l -p 666` 188 | * Go to [portchecker.co](https://portchecker.co/), set the port to 666 and press Check. 189 | 190 | 191 | ## UDP port test 192 | 193 | UDP is kinda special cuz it's session-less, so you need to actually communicate 194 | through it to test it. 195 | 196 | * on a linux machine on LAN - `sudo nc -vv -u -l -p 666` 197 | * on a linux machine somewhere out there - `nc -u the_public_ip_goes_here 666` 198 | * write something and it should appear on the other side 199 | -------------------------------------------------------------------------------- /arch_raspberry_pi/readme.md: -------------------------------------------------------------------------------- 1 | # raspberry pi setup using Arch linux ARM 2 | 3 | ![logo](https://i.imgur.com/N0Y4vco.png) 4 | 5 | Tested on RPi3 6 | 7 | # Get Arch on it 8 | 9 | * **download** pre-prepared **image**, aarch64, if not terribly out of date
10 | [https://github.com/andrewboring/alarm-images/releases](https://github.com/andrewboring/alarm-images/releases) 11 | * **flash** the image to an SD card using [etcher](https://etcher.balena.io/#download-etcher) 12 | * **boot** the rpi with it, login root//root 13 | 14 | Or follow [the official instructions.](https://archlinuxarm.org/platforms/armv8/broadcom/raspberry-pi-3) 15 | With manual partitioning, extracting and moving, but everything is the latest. 16 | 17 | # Get Arch in to working state 18 | 19 | * check space used and assigned `df -h` and `lsblk`
20 | if root partition is too small 21 | * `cfdisk /dev/mmcblk0` or whatever is your drive path, 22 | and resize the partition, write changes 23 | * `resize2fs /dev/mmcblk0p2` or whatever is the path to the partition 24 | * since the image can be older update might not be just `pacman -Syu` 25 | * update keyring `pacman -Sy archlinux-keyring`, `pacman -Sy archlinuxarm-keyring`
26 | if it refuses - `pacman-key --init`; `pacman-key --populate`
27 | Can take some time. 28 | * update the system `pacman -Syu` 29 | * create a new user and set the password 30 | * `useradd -m -G wheel bastard`
31 | * `passwd bastard` 32 | * edit sudoers to allow users of the group wheel to sudo
33 | `EDITOR=nano visudo`
34 | `%wheel ALL=(ALL) ALL` 35 | * install sudo `pacman -S sudo` 36 | * try login as the new user 37 | 38 | # Run prepared ansible 39 | 40 | [This ansibe playbooks repo](https://github.com/DoTheEvo/ansible-arch) 41 | setup some stuff for arch.
42 | But for ARM based stuff it needs some adjustment. 43 | 44 | #### Locale 45 | 46 | Ansible needs a utf8 [locale](https://wiki.archlinux.org/title/locale) set. 47 | 48 | * run `locale`, if it's already UTF-8 then we are done here 49 | * `sudo nano /etc/locale.gen`
50 | `en_US.UTF-8 UTF-8` uncomment 51 | * `locale-gen` - generates locale 52 | * `localectl set-locale LANG=en_US.UTF-8` - sets locale 53 | * logout, login, check `locale` again 54 | 55 | #### YAY and Reflect 56 | 57 | 58 | `playbook_core.yml` installs lot of basic packages. It works for most, 59 | except for `yay` and `reflector` as they throw an error 60 | cuz of ARM archiceture. 61 | 62 | So before running `playbook_core.yml` or after experiencing any error, 63 | just edit the `playbook_core.yml` and delete the responsible section, and run again. 64 | 65 | `playbook_zsh.yml` and `playbook_docker.yml` worked without any issues. 66 | 67 | After its done uninstall ansible stuff - `sudo pacman -Rns ansible` 68 | 69 | The raspberry is now ready. 70 | 71 | ![neofetch](https://i.imgur.com/Eha3bOX.png) 72 | -------------------------------------------------------------------------------- /audiobookshelf/readme.md: -------------------------------------------------------------------------------- 1 | # Audiobookshelf 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/vviMB5v.png) 6 | 7 | # Purpose & Overview 8 | 9 | WORK IN PROGRESS
10 | WORK IN PROGRESS
11 | WORK IN PROGRESS
12 | 13 | Selfhosted audiobook library. 14 | 15 | * [Official site](https://www.audiobookshelf.org/) 16 | * [Github](https://github.com/advplyr/audiobookshelf) 17 | 18 | Opensource. Able to download the books localy. Written in javascript. 19 | 20 | # Client apps 21 | 22 | * android - [audiobookshelf-app](https://github.com/advplyr/audiobookshelf-app) 23 | * ios - [plappa](https://apps.apple.com/us/app/plappa/id6475201956), 24 | unless [the official one](https://github.com/advplyr/audiobookshelf-app) is out of beta 25 | 26 | # Files and directory structure 27 | 28 | ``` 29 | /mnt/ 30 | └── bigdisk/ 31 | └── audiobooks/ 32 | /home/ 33 | └── ~/ 34 | └── docker/ 35 | └── audiobookshelf/ 36 | ├── config/ 37 | ├── metadata/ 38 | ├── .env 39 | └── compose.yml 40 | ``` 41 | 42 | * `/mnt/bigdisk/...` - a mounted media storage share 43 | * `config/` - persistent configuration 44 | * `metadata/` - metadata 45 | * `.env` - a file containing environment variables for docker compose 46 | * `compose.yml` - a docker compose file, telling docker how to run the containers 47 | 48 | You only need to provide the two files.
49 | The directories are created by docker compose on the first run. 50 | 51 | # compose 52 | 53 | Port is only exposed, meaning it's just documentation to know that 54 | it's running on port 80. Reverse proxy is expected so thats why not 55 | really needed opening ports. 56 | 57 | 58 | `compose.yml` 59 | ```yml 60 | services: 61 | 62 | audiobookshelf: 63 | image: ghcr.io/advplyr/audiobookshelf 64 | container_name: audiobookshelf 65 | hostname: audiobookshelf 66 | restart: unless-stopped 67 | env_file: .env 68 | volumes: 69 | - ./config:/config 70 | - ./metadata:/metadata 71 | - /mnt/bigdisk/audiobooks:/mnt/audiobooks 72 | expose: 73 | - "80" 74 | 75 | networks: 76 | default: 77 | name: $DOCKER_MY_NETWORK 78 | external: true 79 | ``` 80 | 81 | `.env` 82 | ```bash 83 | # GENERAL 84 | DOCKER_MY_NETWORK=caddy_net 85 | TZ=Europe/Bratislava 86 | PUID=1000 87 | PGID=1000 88 | ``` 89 | 90 | **All containers must be on the same network**.
91 | Which is named in the `.env` file.
92 | If one does not exist yet: `docker network create caddy_net` 93 | 94 | # Reverse proxy 95 | 96 | Caddy is used, details 97 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
98 | 99 | `Caddyfile` 100 | ``` 101 | books.{$MY_DOMAIN} { 102 | reverse_proxy audiobookshelf:80 103 | } 104 | ``` 105 | 106 | # First run 107 | 108 | 109 | ... 110 | 111 | # Library organization 112 | 113 | 114 | 115 | # Troubleshooting 116 | 117 | 118 | 119 | # Update 120 | 121 | Manual image update: 122 | 123 | - `docker-compose pull`
124 | - `docker-compose up -d`
125 | - `docker image prune` 126 | 127 | # Backup and restore 128 | 129 | #### Backup 130 | 131 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup) 132 | that makes a daily snapshot of the entire directory. 133 | 134 | #### Restore 135 | 136 | * down the audiobookshelf containers `docker-compose down`
137 | * delete the entire audiobookshelf directory
138 | * from the backup copy back the audiobookshelf directory
139 | * start the containers `docker-compose up -d` 140 | -------------------------------------------------------------------------------- /bookstack/readme.md: -------------------------------------------------------------------------------- 1 | # Bookstack in docker 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/qDXwqaU.png) 6 | 7 | # Purpose & Overview 8 | 9 | Documentation and notes. 10 | 11 | * [Official site](https://www.bookstackapp.com/) 12 | * [Github](https://github.com/BookStackApp/BookStack) 13 | * [DockerHub](https://hub.docker.com/r/linuxserver/bookstack) 14 | 15 | BookStack is a modern, open source, good looking wiki platform 16 | for storing and organizing information. 17 | 18 | Written in PHP, using Laravel framework, with MySQL database for the user data.
19 | There is no official Dockerhub image so the one maintained by 20 | [linuxserver.io](https://www.linuxserver.io/) is used, 21 | which uses nginx as a web server. 22 | 23 | # Files and directory structure 24 | 25 | ``` 26 | /home/ 27 | └── ~/ 28 | └── docker/ 29 | └── bookstack/ 30 | ├── 🗁 bookstack_data/ 31 | ├── 🗁 bookstack_db_data/ 32 | ├── 🗋 .env 33 | ├── 🗋 docker-compose.yml 34 | └── 🗋 bookstack-backup-script.sh 35 | ``` 36 | 37 | * `bookstack_data/` - a directory with bookstacks web app data 38 | * `bookstack_db_data/` - a directory with database data 39 | * `.env` - a file containing environment variables for docker compose 40 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers 41 | * `bookstack-backup-script.sh` - a backup script, to be run daily 42 | 43 | Only the files are required. The directories are created on the first run. 44 | 45 | # docker-compose 46 | 47 | Dockerhub linuxserver/bookstack 48 | [example compose.](https://hub.docker.com/r/linuxserver/bookstack) 49 | 50 | `docker-compose.yml` 51 | ```yml 52 | services: 53 | 54 | bookstack-db: 55 | image: linuxserver/mariadb 56 | container_name: bookstack-db 57 | hostname: bookstack-db 58 | restart: unless-stopped 59 | env_file: .env 60 | volumes: 61 | - ./bookstack_db_data:/config 62 | ports: 63 | - "3306:3306" 64 | 65 | bookstack: 66 | image: linuxserver/bookstack 67 | container_name: bookstack 68 | hostname: bookstack 69 | restart: unless-stopped 70 | env_file: .env 71 | depends_on: 72 | - bookstack-db 73 | volumes: 74 | - ./bookstack_data:/config 75 | ports: 76 | - "80:80" 77 | 78 | networks: 79 | default: 80 | name: $DOCKER_MY_NETWORK 81 | external: true 82 | ``` 83 | 84 | `.env` 85 | ```bash 86 | # GENERAL 87 | DOCKER_MY_NETWORK=caddy_net 88 | TZ=Europe/Bratislava 89 | 90 | #LINUXSERVER.IO 91 | PUID=1000 92 | PGID=1000 93 | 94 | # BOOKSTACK-MARIADB 95 | MYSQL_ROOT_PASSWORD=bookstack 96 | MYSQL_DATABASE=bookstack 97 | MYSQL_USER=bookstack 98 | 
MYSQL_PASSWORD=bookstack 99 | 100 | # BOOKSTACK 101 | APP_URL=https://book.example.com 102 | DB_HOST=bookstack-db 103 | DB_USER=bookstack 104 | DB_PASS=bookstack 105 | DB_DATABASE=bookstack 106 | 107 | # USING SENDINBLUE FOR SENDING EMAILS 108 | MAIL_DRIVER=smtp 109 | MAIL_ENCRYPTION=tls 110 | MAIL_HOST=smtp-relay.sendinblue.com 111 | MAIL_PORT=587 112 | MAIL_FROM=book@example.com 113 | MAIL_USERNAME= 114 | MAIL_PASSWORD= 115 | ``` 116 | 117 | **All containers must be on the same network**.
118 | The network name is set in the `.env` file.
119 | If one does not exist yet: `docker network create caddy_net` 120 | 121 | `APP_URL` in the `.env` **must be set** for bookstack to work.
122 | The `MAIL_` variables must be set for password resets and new registrations to work. 123 | 124 | # Reverse proxy 125 | 126 | Caddy v2 is used, details 127 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
128 | 129 | `Caddyfile` 130 | ```php 131 | book.{$MY_DOMAIN} { 132 | reverse_proxy bookstack:80 133 | } 134 | ``` 135 | 136 | # First run 137 | 138 | Default login: `admin@admin.com` // `password` 139 | 140 | --- 141 | 142 | ![interface-pic](https://i.imgur.com/cN1GUZw.png) 143 | 144 | # Troubleshooting 145 | 146 | * It did not start.
147 | Ctrl+F the `.env` file for the word `example` and replace it with your actual domain 148 | name. `APP_URL` has to be set correctly for bookstack to work. 149 | * After an update the edit tools are not visible.
150 | Clear the browser's cookies/cache. 151 | * The test email button in preferences throws an error.
152 | Exec into the container and run `printenv` to check the values. 153 | Check [mail.php](https://github.com/BookStackApp/BookStack/blob/development/app/Config/mail.php) 154 | to see the exact `MAIL_` env variable names and default values. 155 | Test in Thunderbird whether your smtp server works or not. 156 | 157 | # Update 158 | 159 | Manual image update: 160 | 161 | - `docker-compose pull`
162 | - `docker-compose up -d`
163 | - `docker image prune` 164 | 165 | It is **strongly recommended** to now pin the images in the compose file to their current **tags**.
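For example, the pinning could look like this - the tags shown are made up for illustration, use whatever versions you are actually running:

```yml
services:

  bookstack-db:
    # example tag, replace with the mariadb version currently deployed
    image: linuxserver/mariadb:10.11.6

  bookstack:
    # example tag, replace with the bookstack version currently deployed
    image: linuxserver/bookstack:v23.10.2
```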
166 | Tags will allow you to easily return to a working state if an update goes wrong. 167 | 168 | If there was a **major version jump**, and bookstack does not work, 169 | exec into the app container and run `php artisan migrate`:
170 | `docker container exec -it bookstack /bin/bash`
171 | `cd /app/www`
172 | `php artisan migrate` 173 | 174 | # Backup and restore 175 | 176 | #### Backup 177 | 178 | Using [kopia](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/kopia_backup) 179 | or [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup) 180 | to make daily snapshot of the entire docker directory. 181 | 182 | #### Restore 183 | 184 | * down the containers `docker-compose down`
185 | * delete/move/rename the entire project directory
186 | * from the backups copy back the entire project directory
187 | * start the containers `docker-compose up -d` 188 | 189 | # Backup of just user data 190 | 191 | Users data daily export using the 192 | [official procedure.](https://www.bookstackapp.com/docs/admin/backup-restore/)
193 | For bookstack this means a database dump plus backing up several directories 194 | containing user-uploaded files. 195 | 196 | The daily kopia/borg backup run takes care of backing up the directories. 197 | So only the database dump is needed, and the script below handles it.
198 | The created backup sql file is overwritten on every run of the script, 199 | but that's ok since kopia/borg are keeping daily snapshots. 200 | 201 | #### Backup script 202 | 203 | Placed inside `bookstack` directory on the host 204 | 205 | `bookstack-backup-script.sh` 206 | ```bash 207 | #!/bin/bash 208 | 209 | # CREATE DATABASE DUMP, bash -c '...' IS USED OTHERWISE OUTPUT > WOULD TRY TO GO TO THE HOST 210 | docker container exec bookstack-db bash -c 'mysqldump -u $MYSQL_USER -p$MYSQL_PASSWORD $MYSQL_DATABASE > $MYSQL_DIR/BACKUP.bookstack.database.sql' 211 | ``` 212 | 213 | the script must be **executable** - `chmod +x bookstack-backup-script.sh` 214 | 215 | #### Cronjob - scheduled backup 216 | 217 | Running on the host 218 | 219 | * `su` - switch to root 220 | * `crontab -e` - add new cron job
221 | * `0 22 * * * /home/bastard/docker/bookstack/bookstack-backup-script.sh`
222 | runs it every day [at 22:00](https://crontab.guru/#0_22_*_*_*) 223 | * `crontab -l` - list cronjobs to check 224 | 225 | # Restore the user data 226 | 227 | Assuming clean start and latest images.
228 | You will need `BACKUP.bookstack.database.sql` and the content of `bookstack_data/www/`
229 | Note that the database restore must happen before the bookstack app is first run. 230 | 231 | * start only the database container: `docker-compose up -d bookstack-db` 232 | * copy `BACKUP.bookstack.database.sql` into `bookstack/bookstack_db_data/` 233 | * restore the database inside the container
234 | `docker container exec --workdir /config bookstack-db bash -c 'mysql -u $MYSQL_USER -p$MYSQL_PASSWORD $MYSQL_DATABASE < BACKUP.bookstack.database.sql'` 235 | * now start the app container: `docker-compose up -d` 236 | * let it run so it creates its file structure 237 | * down the containers `docker-compose down` 238 | * in `bookstack/bookstack_data/www/`
239 | replace directories `files`,`images`,`uploads` and the file `.env`
240 | with the ones from the BorgBackup repository 241 | * start the containers: `docker-compose up -d` 242 | * if there was a major version jump, exec in to the app container and run `php artisan migrate`
243 | `docker container exec -it bookstack /bin/bash`
244 | `cd /app/www`
245 | `php artisan migrate` 246 | 247 | Again, the above steps are based on the 248 | [official procedure](https://www.bookstackapp.com/docs/admin/backup-restore/) 249 | at the time of writing this. 250 | -------------------------------------------------------------------------------- /borg_backup/readme.md: -------------------------------------------------------------------------------- 1 | # BorgBackup in docker 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/dR50bkP.png) 6 | 7 | # Purpose & Overview 8 | 9 | Backups. 10 | 11 | * [Official site](https://www.borgbackup.org/) 12 | * [Github](https://github.com/borgbackup/borg) 13 | 14 | Borg is an open source deduplicating archiver with compression and encryption.
15 | Written in Python, with performance-critical code implemented in C/Cython. 16 | 17 | The highlight of borg is its deduplication, where files are cut into variable-size 18 | chunks, and only new chunks are stored. 19 | This allows keeping snapshots from several days, weeks and months, 20 | while not wasting disk space. 21 | 22 | In this setup borg is installed directly on the host system.
23 | A script is created that backs up the entire docker directory and /etc locally.
24 | A cronjob is set to execute this script daily. 25 | 26 | The repository is also pruned on each run of the script - 27 | old archives are deleted while keeping the ones fitting the retention rules 28 | in the script.
29 | One backup per day is kept for the last 7 days, last 4 weeks and last 6 months. 30 | 31 | # Files and directory structure 32 | 33 | ``` 34 | /home/ 35 | └── ~/ 36 | ├── borg/ 37 | │ ├── docker_backup/ 38 | │ ├── borg_backup.sh 39 | │ └── borg_backup.log 40 | │ 41 | └── docker/ 42 | ├── container-setup #1 43 | ├── container-setup #2 44 | ├── ... 45 | ``` 46 | 47 | * `docker_backup/` - borg repository directory containing the backups 48 | * `borg_backup.sh` - the backup script that adds a new archive into the repository 49 | * `borg_backup.log` - log file with the dates of backups 50 | 51 | Only `borg_backup.sh` has to be provided.
52 | Repo directory is created by `borg init` command 53 | and the log file is created on the first run. 54 | 55 | 56 | # The setup 57 | 58 | #### Install BorgBackup 59 | 60 | Borg is likely in your linux repositories. 61 | 62 | #### Create a new borg repo 63 | 64 | `mkdir ~/borg`
65 | `borg init --encryption=none ~/borg/docker_backup` 66 | 67 | Note the lack of sudo for initialization, this allows regular user to browse it. 68 | But the script should be run as root to have full access. 69 | 70 | #### The backup script 71 | 72 | `borg_backup.sh` 73 | ```bash 74 | #!/bin/bash 75 | 76 | # INITIALIZE THE REPO WITH THE COMMAND: 77 | # borg init --encryption=none ~/borg/my_backup 78 | # THEN RUN THIS SCRIPT 79 | 80 | # ----------------------------------------------- 81 | 82 | BACKUP_THIS='/home/bastard/docker /etc' 83 | REPOSITORY='/home/bastard/borg/docker_backup' 84 | LOGFILE='/home/bastard/borg/borg_backup.log' 85 | 86 | # ----------------------------------------------- 87 | 88 | NOW=$(date +"%Y-%m-%d | %H:%M | ") 89 | echo "$NOW Starting Backup and Prune" >> $LOGFILE 90 | 91 | # CREATES NEW ARCHIVE IN PRESET REPOSITORY 92 | 93 | borg create \ 94 | $REPOSITORY::'{now:%s}' \ 95 | $BACKUP_THIS \ 96 | \ 97 | --compression zstd \ 98 | --one-file-system \ 99 | --exclude-caches \ 100 | --exclude-if-present '.nobackup' \ 101 | --exclude '/home/*/Downloads/' \ 102 | 103 | # DELETES ARCHIVES NOT FITTING KEEP-RULES 104 | 105 | borg prune -v --list $REPOSITORY \ 106 | --keep-daily=7 \ 107 | --keep-weekly=4 \ 108 | --keep-monthly=6 \ 109 | --keep-yearly=0 \ 110 | 111 | echo "$NOW Done" >> $LOGFILE 112 | echo '------------------------------' >> $LOGFILE 113 | 114 | # --- USEFULL SHIT --- 115 | 116 | # setup above ignores directories containing '.nobackup' file 117 | # make '.nobackup' imutable using chattr to prevent accidental removal 118 | # touch .nobackup 119 | # chattr +i .nobackup 120 | 121 | # in the repo folder, to list available backups: 122 | # borg list . 
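# archive names are unix epoch time, to convert one to a human readable date:
# date -d @1584472836
# to show repository info and stats:
# borg info .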
123 | # to mount one of them: 124 | # borg mount .::1584472836 ~/temp 125 | # to umount: 126 | # borg umount ~/temp 127 | # to delete single backup in a repo: 128 | # borg delete .::1584472836 129 | ``` 130 | 131 | The script must be **executable** - `chmod +x borg_backup.sh` 132 | 133 | ### Manual run 134 | 135 | `sudo ./borg_backup.sh` 136 | 137 | It could ask about 138 | *Attempting to access a previously unknown unencrypted repository*
139 | Answer yes.
140 | If the repo had been initialized with sudo there would be no such prompt, 141 | but then a non-root user would not be able to enter the repo directory. 142 | 143 | ### Automatic execution 144 | 145 | Previously cron was used, but this time systemd timers are tried. 146 | 147 | [This](https://blog.andrewkeech.com/posts/170719_borg.html) is the source. 148 | 149 | 150 | 151 | 152 | 153 | 154 | # Accessing the backup files 155 | 156 | * go into the borg repo
157 | `cd /home/bastard/borg/docker_backup/` 158 | * list the archives
159 | `sudo borg list .` 160 | * choose one by the date, copy its identifier which is epoch time, e.g. 1588986941 161 | * mount it to some folder
162 | `sudo borg mount .::1588986941 /mnt/temp` 163 | * browse the directory where the archive is mounted and do whatever is needed 164 | * umount the backup
165 | `sudo borg umount /mnt/temp` 166 | 167 | # Extra info 168 | 169 | Test your backups, test your recovery procedure. 170 | 171 | # Remote backup 172 | 173 | Backing up borg repo to a network share or cloud using rclone 174 | 175 | *To be continued* 176 | -------------------------------------------------------------------------------- /checkmk/readme.md: -------------------------------------------------------------------------------- 1 | # checkmk 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/yMDhlLJ.png) 6 | 7 | # Purpose 8 | 9 | Monitoring of machines, containers, services, logs, ... 10 | 11 | * [Official site](https://checkmk.com/) 12 | * [github](https://github.com/Checkmk/checkmk) 13 | 14 | Monitoring in this case means gathering and showing information on how services 15 | or machines or containers are running. 16 | Can be cpu, io, ram, disk use, network throughput, latency,... 17 | can be number of http requests, errors, results of backups... 18 | 19 | # Overview 20 | 21 | [Good youtube overview.](https://www.youtube.com/watch?v=7OnhuCsR7jg) 22 | 23 | checkmk is a fork of nagios and is mostly written in python.
24 | An interesting fact is that there is no database where the data are stored - 25 | RRD files hold the metrics and plaintext logs everything else. 26 | 27 | Agents are installed on machines that should be monitored; 28 | they expose the gathered data on port 6556 for cmk to pull.
29 | SNMP support as well. 30 | 31 | ![overview](https://i.imgur.com/HB0bLyU.png) 32 | 33 | ### Editions 34 | 35 | [Docs](https://docs.checkmk.com/master/en/intro_setup.html#editions) 36 | 37 | * **raw** - 100% open source, unlimited use, some features are missing 38 | or are harder to set up. For example no push mode from agents. 39 | * **cloud** - full featured with better performing version of the monitoring micro core, 40 | but with 750 services limit 41 | 42 | I am gonna go with cloud for now, as 750 sounds like plenty for my use cases. 43 | 44 | 45 | # Files and directory structure 46 | 47 | ``` 48 | /home/ 49 | └── ~/ 50 | └── docker/ 51 | └── checkmk/ 52 | ├── 🗁 checkmk_data/ 53 | ├── 🗋 docker-compose.yml 54 | └── 🗋 .env 55 | ``` 56 | 57 | * `checkmk_data/` - a directory where checkmk_data stores its persistent data 58 | * `.env` - a file containing environment variables for docker compose 59 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers 60 | 61 | The two files must be provided.
62 | The directory is created by docker compose on the first run. 63 | 64 | # docker-compose 65 | 66 | A simple compose.
66 | Of note is the use of RAM as a tmpfs mount in the container 67 | and a 1024 limit on the max open files for a single process. 68 | 69 | *Note* - the port is only `expose`d, since a reverse proxy is expected, 70 | with services accessed by hostname, not ip and port. 71 | 72 | [Docs](https://docs.checkmk.com/latest/en/ports.html) on ports used in cmk. 73 | 74 | `docker-compose.yml` 75 | ```yml 76 | services: 77 | checkmk: 78 | # image: checkmk/check-mk-raw 79 | image: checkmk/check-mk-cloud 80 | container_name: checkmk 81 | hostname: checkmk 82 | restart: unless-stopped 83 | env_file: .env 84 | ulimits: 85 | nofile: 1024 86 | tmpfs: 87 | - /opt/omd/sites/cmk/tmp:uid=1000,gid=1000 88 | volumes: 89 | - ./checkmk_data:/omd/sites 90 | - /etc/localtime:/etc/localtime:ro 91 | expose: 92 | - "5000" # webgui 93 | ports: 94 | - 8000:8000 # agents who push 95 | 96 | networks: 97 | default: 98 | name: $DOCKER_MY_NETWORK 99 | external: true 100 | ``` 101 | 102 | `.env` 103 | 104 | ```bash 105 | # GENERAL 106 | DOCKER_MY_NETWORK=caddy_net 107 | TZ=Europe/Bratislava 108 | 109 | # CMK 110 | CMK_SITE_ID=dom 111 | CMK_PASSWORD=WUx666yd0qCWh 112 | ``` 113 | 114 | **All containers must be on the same network**.
116 | The network name is set in the `.env` file.
117 | If one does not exist yet: `docker network create caddy_net` 118 | 119 | ## Reverse proxy 120 | 121 | Caddy v2 is used, details 122 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
123 | 124 | `Caddyfile` 125 | ```php 126 | cmk.{$MY_DOMAIN} { 127 | reverse_proxy checkmk:5000 128 | } 129 | ``` 130 | 131 | # First run 132 | 133 | ![login](https://i.imgur.com/pDCvn4D.png) 134 | 135 | Visit `cmk.example.com` or whatever you set in reverse proxy.
136 | The password for user `cmkadmin` is set in the `.env` file. 137 | 138 | The usual security recommendation is to create a new user 139 | and disable the default admin account. 140 | 141 | # Agents 142 | 143 | ![login](https://i.imgur.com/vC5peFG.png) 144 | 145 | ## Installation Windows Machine 146 | 147 | [Documentation](https://docs.checkmk.com/latest/en/agent_windows.html) 148 | 149 | * Note the hostname and the ip address of the machine. 150 | * Agent installation msi file is available at
151 | `https://cmk.example.com/dom/check_mk/agents/` - `dom` being the site id from the `.env`
152 | or webgui - Setup > Agents > Windows, Linux, Solaris, AIX > Windows - MSI
153 | download the msi and install it. 154 | * some win servers by default block ping in their firewall, allow ping through:
155 | `wf.msc` - Inbound Rules - enable "File and Printer Sharing (Echo Request - ICMPv4-In)" 156 | * CMK Web GUI > Setup > Hosts > Add host 157 | 158 | * Host name - `` - should be all that's needed 159 | * IP address family - ipv4 only 160 | * IPv4 address - `` 161 | 162 | Green button - `Save & run service discovery`
163 | After a while a list of services should appear
164 | Top left green check mark - `Accept all`
165 | Yellow exclamation mark top right corner - to review changes
166 | Left top green button - `Activate on selected sites` 167 | 168 | #### Agent registration - TLS 169 | 170 | Will need password for user - `agent_registration`
171 | 172 | * Setup > Users > agent_registration - edit - blue pencil left 173 | * green dice - randomizes password - make a note of the new password 174 | * apply changes - yellow exclamation mark 175 | 176 | on the machine where the agent is installed 177 | 178 | * cmd as administrator, not powershell 179 | * `cd "C:\Program Files (x86)\checkmk\service\"` 180 | * `cmk-agent-ctl.exe register --hostname WIN-2022 --server cmk.example.com --site dom --user agent_registration --password "TJUE@ILTQFEUFQCT@ADS"` 181 | * DO MAKE SURE YOU USE THE CORRECT **HOSTNAME**
182 | I spent quite a while troubleshooting when I registered 3rd machine with hostname 183 | of the second machine that was already registered. 184 | 185 | #### troubleshooting 186 | 187 | * Setup > Hosts > `` > Save & run connection tests 188 | * `cmk-agent-ctl status` - run on the host 189 | * `echo | nc 10.0.19.194 6556` - executed on the server, hosts ip is used
190 | before TLS it should reply with data, afterwards it's `162%` 191 | * Monitor > Overview > Host search > 3 lines icon next to hostname > Download agent output 192 | 193 | ## Installation Linux Machine 194 | 195 | The agent might be in your distro's repos; if not, the cmk instance serves agents and plugins at
196 | `https://cmk.example.com/dom/check_mk/agents/`
197 | 198 | `wget https://cmk.example.com/dom/check_mk/agents/check-mk-agent_2.3.0p6-1_all.deb` 199 | `sudo dpkg -i check-mk-agent_2.3.0p6-1_all.deb` 200 | 201 | docker plugin 202 | 203 | `wget https://cmk.example.com/dom/check_mk/agents/plugins/mk_docker.py`
204 | `sudo install -m 0755 mk_docker.py /usr/lib/check_mk_agent/plugins` 205 | 206 | #### TLS 207 | 208 | get password for user - `agent_registration` 209 | 210 | `sudo cmk-agent-ctl register --hostname debianu --server cmk.example.com --site dom --user agent_registration --password "TJUE@ILTQFEUFQCT@ADS"` 211 | 212 | ##### troubleshooting 213 | 214 | * `sudo cmk-agent-ctl status` 215 | * `ss -tulpn | grep 6556` - checks if the port is binded 216 | * `netstat -ano | grep 6556` 217 | * after reinstall [the agent does not listen on the port](https://forum.checkmk.com/t/checkmk-agent-not-listening-on-6556-after-reinstalling-agent-v2-1-0/34882)
218 | `sudo cmk-agent-ctl delete-all --enable-insecure-connections` 219 | 220 | # SNMP monitoring 221 | 222 | Using a mikrotik switch here 223 | 224 | * login to mikrotik > IP > SNMP 225 | * button `Communities` 226 | * disable public 227 | * Add New 228 | * Set Name - `snmp_home` 229 | * Security - `Authorized` 230 | * Read Access 231 | * authentication protocol - `SHA1` 232 | * Authentication Password - set some password 233 | * back in SNMP settings 234 | * enabled - checked 235 | * Trap Community - `snmp_home` 236 | * Trap Version - `3` 237 | * Apply 238 | 239 | CMK Web GUI > Setup > Hosts > Add host 240 | 241 | * host name - `CRS310` 242 | * IPv4 address - `10.0.19.240` 243 | * Checkmk agent / API integrations - No API integrations, no Checkmk agent 244 | * SNMP - SNMP v2 or v3 245 | * SNMP credentials - Credentials for SNMPv3 with authentication but without privacy (authNoPriv) 246 | * Authentication protocol - `SHA1` 247 | * Security name - `snmp_home` 248 | * Authentication Password - whatever was set 249 | * Save and run service discovery 250 | * apply changes 251 | 252 | # Push 253 | 254 | # Alerts 255 | 256 | # Logs 257 | 258 | # Update 259 | 260 | Manual image update: 261 | 262 | - `docker-compose pull`
263 | - `docker-compose up -d`
264 | - `docker image prune` 265 | 266 | # Backup and restore 267 | 268 | #### Backup 269 | 270 | #### Restore 271 | 272 | * down the containers `docker-compose down`
273 | * delete the entire checkmk directory
274 | * from the backup copy back the checkmk directory
275 | * start the containers `docker-compose up -d` 276 | -------------------------------------------------------------------------------- /ddclient/readme.md: -------------------------------------------------------------------------------- 1 | # DDclient 2 | 3 | ###### guide-by-example 4 | 5 | # Purpose & Overview 6 | 7 | * Note - learned about [ddns-updater](https://github.com/qdm12/ddns-updater) 8 | 9 | Automatic DNS entries update. 10 | 11 | * [Official site](https://sourceforge.net/p/ddclient/wiki/usage/) 12 | * [Github](https://github.com/ddclient/ddclient) 13 | * [DockerHub](https://hub.docker.com/r/linuxserver/ddclient) 14 | 15 | DDclient is a Perl client used to update dynamic DNS records.
16 | Very useful if you do not have a static IP from your ISP. 17 | It makes sure that if you reset your router, or have a power outage, 18 | and you get a new public IP assigned, this IP gets automatically set 19 | in the DNS records for your domains. 20 | 21 | This setup runs directly on the host machine.
22 | It works by checking every 10 minutes 23 | [checkip.dyndns.org](http://checkip.dyndns.org/), 24 | and if the IP changed from the previous one, it logs in to the DNS provider and 25 | updates the DNS records. 26 | 27 | # Files and directory structure 28 | 29 | ``` 30 | /etc/ 31 | └── ddclient/ 32 | └── ddclient.conf 33 | ``` 34 | 35 | # Installation 36 | 37 | Install ddclient from your linux official repos. 38 | 39 | # Configuration 40 | 41 | Official ddclient config example 42 | [here](https://github.com/ddclient/ddclient/blob/master/sample-etc_ddclient.conf). 43 | 44 | This setup assumes the DNS records are managed on Cloudflare.
45 | Make sure all subdomains in the config have A-records. 46 | 47 | `ddclient.conf` 48 | 49 | ```bash 50 | daemon=600 51 | syslog=yes 52 | mail=root 53 | mail-failure=root 54 | pid=/var/run/ddclient.pid 55 | ssl=yes 56 | 57 | use=web, web=checkip.dyndns.org/, web-skip='IP Address' 58 | 59 | ## 60 | ## CloudFlare (www.cloudflare.com) 61 | ## 62 | protocol=cloudflare, \ 63 | zone=example.com, \ 64 | ttl=1, \ 65 | login=bastard@gmail.com, \ 66 | password= \ 67 | example.com,*.example.com,subdomain.example.com 68 | 69 | ## 70 | protocol=cloudflare, \ 71 | zone=example.org, \ 72 | ttl=1, \ 73 | login=bastard@gmail.com, \ 74 | password= \ 75 | example.org,*.example.org,whatever.example.org 76 | ``` 77 | 78 | # Start the service 79 | 80 | `sudo systemctl enable --now ddclient` 81 | 82 | # Troubleshooting 83 | 84 | If it times out on start, check the real location of `ddclient.pid`
85 | `sudo find / -name ddclient.pid` 86 | 87 | If it is correctly set in the `ddclient.conf`. 88 | 89 | # Update 90 | 91 | During host linux packages update. 92 | 93 | # Backup and restore 94 | 95 | #### Backup 96 | 97 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup) 98 | that makes daily snapshot of the /etc directory which contains the config file. 99 | 100 | #### restore 101 | 102 | Replace the content of the config file with the one from the backup. 103 | -------------------------------------------------------------------------------- /disk_NAS_bench_Fio/crystaldisk.fio: -------------------------------------------------------------------------------- 1 | [global] 2 | ioengine=windowsaio 3 | #ioengine=libaio 4 | filesize=1g 5 | filename=.fio-diskmark 6 | direct=1 #use O_DIRECT IO (negates buffered) 7 | time_based #keep running until runtime/timeout is met 8 | runtime=30 #stop workload when this amount of time has passed 9 | loops=1 #number of times to run the job 10 | #refill_buffers #always writes new random data in the buffer 11 | #randrepeat=0 #do not use repeatable random IO pattern 12 | thread #use threads instead of processes 13 | stonewall #insert a hard barrier between this job and previous 14 | 15 | [Seq-Read-Q32T1] 16 | iodepth=32 17 | numjobs=1 18 | bs=1m 19 | rw=read 20 | 21 | [Seq-Write-Q32T1] 22 | iodepth=32 23 | numjobs=1 24 | bs=1m 25 | rw=write 26 | 27 | [Rand-Read-4K-Q8T8] 28 | iodepth=8 29 | numjobs=8 30 | openfiles=8 31 | bs=4k 32 | rw=randread 33 | 34 | [Rand-Write-4K-Q8T8] 35 | iodepth=8 36 | numjobs=8 37 | openfiles=8 38 | bs=4k 39 | rw=randwrite 40 | 41 | [Rand-Read-4K-Q32T1] 42 | iodepth=32 43 | numjobs=1 44 | bs=4k 45 | rw=randread 46 | 47 | [Rand-Write-4K-Q32T1] 48 | iodepth=32 49 | numjobs=1 50 | bs=4k 51 | rw=randwrite 52 | 53 | [Rand-Read-4K-Q1T1] 54 | iodepth=1 55 | numjobs=1 56 | bs=4k 57 | rw=randread 58 | 59 | [Rand-Write-4K-Q1T1] 60 | iodepth=1 61 | numjobs=1 62 | bs=4k 63 | 
rw=randwrite 64 | unlink=1 65 | -------------------------------------------------------------------------------- /disk_NAS_bench_Fio/lawrance_script.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # this script requires fio bc jq 4 | type fio bc jq > /dev/null || exit 5 | 6 | # Directory to test 7 | TEST_DIR=$1 8 | 9 | # Parameters for the tests should be representive of the workload you want to simulate 10 | BS="1M" # Block size 11 | IOENGINE="libaio" # IO engine 12 | IODEPTH="16" # IO depth sets how many I/O requests a single job can handle at once 13 | DIRECT="1" # Direct IO at 0 is buffered with RAM which may skew results and I/O 1 is unbuffered 14 | NUMJOBS="5" # Number of jobs is how many independent I/O streams are being sent to the storage 15 | FSYNC="0" # Fsync 0 leaves flushing up to Linux 1 force write commits to disk 16 | NUMFILES="5" # Number of files is number of independent I/O threads or processes that FIO will spawn 17 | FILESIZE="1G" # File size for the tests, you can use: K M G 18 | 19 | # Check if directory is provided 20 | if [ -z "$TEST_DIR" ]; then 21 | echo "Usage: $0 [directory]" 22 | exit 1 23 | fi 24 | 25 | # Function to perform FIO test and display average output 26 | perform_test() { 27 | RW_TYPE=$1 28 | 29 | echo "Running $RW_TYPE test with block size $BS, ioengine $IOENGINE, iodepth $IODEPTH, direct $DIRECT, numjobs $NUMJOBS, fsync $FSYNC, using $NUMFILES files of size $FILESIZE on $TEST_DIR" 30 | 31 | # Initialize variables to store cumulative values 32 | TOTAL_READ_IOPS=0 33 | TOTAL_WRITE_IOPS=0 34 | TOTAL_READ_BW=0 35 | TOTAL_WRITE_BW=0 36 | 37 | for ((i=1; i<=NUMFILES; i++)); do 38 | TEST_FILE="$TEST_DIR/fio_test_file_$i" 39 | 40 | # Running FIO for each file and parsing output 41 | OUTPUT=$(fio --name=test_$i \ 42 | --filename=$TEST_FILE \ 43 | --rw=$RW_TYPE \ 44 | --bs=$BS \ 45 | --ioengine=$IOENGINE \ 46 | --iodepth=$IODEPTH \ 47 | --direct=$DIRECT \ 48 | 
--numjobs=$NUMJOBS \ 49 | --fsync=$FSYNC \ 50 | --size=$FILESIZE \ 51 | --group_reporting \ 52 | --output-format=json) 53 | 54 | # Accumulate values 55 | TOTAL_READ_IOPS=$(echo $OUTPUT | jq '.jobs[0].read.iops + '"$TOTAL_READ_IOPS") 56 | TOTAL_WRITE_IOPS=$(echo $OUTPUT | jq '.jobs[0].write.iops + '"$TOTAL_WRITE_IOPS") 57 | TOTAL_READ_BW=$(echo $OUTPUT | jq '(.jobs[0].read.bw / 1024) + '"$TOTAL_READ_BW") 58 | TOTAL_WRITE_BW=$(echo $OUTPUT | jq '(.jobs[0].write.bw / 1024) + '"$TOTAL_WRITE_BW") 59 | done 60 | 61 | # Calculate averages 62 | AVG_READ_IOPS=$(echo "$TOTAL_READ_IOPS / $NUMFILES" | bc -l) 63 | AVG_WRITE_IOPS=$(echo "$TOTAL_WRITE_IOPS / $NUMFILES" | bc -l) 64 | AVG_READ_BW=$(echo "$TOTAL_READ_BW / $NUMFILES" | bc -l) 65 | AVG_WRITE_BW=$(echo "$TOTAL_WRITE_BW / $NUMFILES" | bc -l) 66 | 67 | # Format and print averages, omitting 0 results 68 | [ "$(echo "$AVG_READ_IOPS > 0" | bc)" -eq 1 ] && printf "Average Read IOPS: %'.2f\n" $AVG_READ_IOPS 69 | [ "$(echo "$AVG_WRITE_IOPS > 0" | bc)" -eq 1 ] && printf "Average Write IOPS: %'.2f\n" $AVG_WRITE_IOPS 70 | [ "$(echo "$AVG_READ_BW > 0" | bc)" -eq 1 ] && printf "Average Read Bandwidth (MB/s): %'.2f\n" $AVG_READ_BW 71 | [ "$(echo "$AVG_WRITE_BW > 0" | bc)" -eq 1 ] && printf "Average Write Bandwidth (MB/s): %'.2f\n" $AVG_WRITE_BW 72 | 73 | } 74 | 75 | # Run tests 76 | perform_test randwrite 77 | perform_test randread 78 | perform_test write 79 | perform_test read 80 | perform_test readwrite 81 | 82 | # Clean up 83 | for ((i=1; i<=NUMFILES; i++)); do 84 | rm "$TEST_DIR/fio_test_file_$i" 85 | done 86 | -------------------------------------------------------------------------------- /disk_NAS_bench_Fio/readme.md: -------------------------------------------------------------------------------- 1 | # Fio or KdiskMark 2 | 3 | ###### guide-by-example 4 | 5 | # Purpose & Overview 6 | 7 | Benchmark disks and NAS performance. 
8 | 9 | https://github.com/JonMagon/KDiskMark 10 | 11 | * [Github](https://github.com/axboe/fio) 12 | * [Official documentation](https://fio.readthedocs.io/en/latest/index.html) 13 | 14 | Command line tool. Extremely rich in features and options.<br>
15 | This repo aims to have just one simple preset that tells the most about the disk. 16 | 17 | # Install on 18 | 19 | 20 | # Useful links 21 | 22 | https://www.youtube.com/watch?v=T23uPC6qKeA 23 | 24 | https://www.youtube.com/watch?v=mBhXUYh-76o 25 | https://arstechnica.com/gadgets/2020/02/how-fast-are-your-disks-find-out-the-open-source-way-with-fio/ 26 | https://portal.nutanix.com/page/documents/kbs/details?targetId=kA07V000000LX7xSAG 27 | 28 | -------------------------------------------------------------------------------- /disk_NAS_bench_Fio/universal_linux.fio: -------------------------------------------------------------------------------- 1 | [global] 2 | bs=128K 3 | iodepth=256 4 | direct=1 5 | ioengine=libaio 6 | group_reporting 7 | time_based 8 | name=seq 9 | log_avg_msec=1000 10 | bwavgtime=1000 11 | filename=/dev/nvme0n1 12 | #size=100G 13 | 14 | [rd_qd_256_128k_1w] 15 | stonewall 16 | bs=128k 17 | iodepth=256 18 | numjobs=1 19 | rw=read 20 | runtime=60 21 | write_bw_log=seq_read_bw.log 22 | 23 | [wr_qd_256_128k_1w] 24 | stonewall 25 | bs=128k 26 | iodepth=256 27 | numjobs=1 28 | rw=write 29 | runtime=60 30 | write_bw_log=seq_write_bw.log 31 | -------------------------------------------------------------------------------- /dnsmasq/readme.md: -------------------------------------------------------------------------------- 1 | # dnsmasq 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/SOa4kRd.png) 6 | 7 | # Purpose & Overview 8 | 9 | Lightweight DHCP and DNS server. 10 | 11 | * [Official site](http://www.thekelleys.org.uk/dnsmasq/doc.html) 12 | * [Arch wiki](https://wiki.archlinux.org/index.php/dnsmasq) 13 | 14 | dnsmasq solves the problem of accessing self hosted stuff when you are inside 15 | your network. Asking google's DNS for `example.com` returns your 16 | very own public IP, and most routers/firewalls won't allow this loopback, 17 | where your request should go out and then come right back.<br>
18 | The usual quick way to solve this issue is 19 | [editing the `hosts` file]( 20 | https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2#--editing-hosts-file) 21 | on your machine, adding a `192.168.1.222 example.com` IP-hostname pair. 22 | This tells your machine to skip asking google's DNS, the rule is right there: 23 | `example.com` goes directly to the local server ip `192.168.1.222`.<br>
24 | But if more devices should "just work" it is a no-go, since this only works 25 | on the machine whose `hosts` file was edited. 26 | 27 | So the answer is running a DNS server that does this 28 | pairing of IPs with hostnames, and a DHCP server that tells the devices 29 | on the network to use this DNS. 30 | 31 | *extra info*<br>
32 | DNS servers run on port 53. 33 | 34 | # Prerequisites 35 | 36 | * the machine that will be running it should have a static IP set 37 | 38 | # Files and directory structure 39 | 40 | ``` 41 | /etc/ 42 | ├── dnsmasq.conf 43 | ├── hosts 44 | └── resolv.conf 45 | ``` 46 | 47 | * `dnsmasq.conf` - the main config file for dnsmasq where DNS and DHCP functionality is set 48 | * `resolv.conf` - a file containing ip addresses of DNS nameservers to be used 49 | by the machine it resides on 50 | * `hosts` - a file that can provide additional hostname-ip mapping 51 | 52 | `hosts` and `resolv.conf` are just normal system files always in use on any linux 53 | system.<br>
54 | `dnsmasq.conf` comes with the dnsmasq installation. 55 | 56 | # Installation 57 | 58 | Install dnsmasq from your linux distribution's official repos. 59 | 60 | # Configuration 61 | 62 | `dnsmasq.conf` 63 | 64 | ```bash 65 | # DNS -------------------------------------------------------------------------- 66 | 67 | # Never forward plain names (without a dot or domain part) 68 | domain-needed 69 | # Never forward addresses in the non-routed address spaces. 70 | bogus-priv 71 | 72 | # If you don't want dnsmasq to read /etc/resolv.conf 73 | no-resolv 74 | no-poll 75 | 76 | cache-size=1000 77 | 78 | # interface and address 79 | interface=enp0s25 80 | listen-address=::1,127.0.0.1 81 | 82 | # Upstream Google and Cloudflare nameservers 83 | server=8.8.8.8 84 | server=1.1.1.1 85 | 86 | # DNS entries ------------------------------------------------------------------ 87 | 88 | # wildcard DNS entry sending domain and all its subdomains to an ip 89 | address=/example.com/192.168.1.2 90 | # subdomain override 91 | address=/plex.example.com/192.168.1.3 92 | 93 | # DHCP ------------------------------------------------------------------------- 94 | 95 | dhcp-authoritative 96 | dhcp-range=192.168.1.50,192.168.1.200,255.255.255.0,480h 97 | # gateway 98 | dhcp-option=option:router,192.168.1.1 99 | 100 | # DHCP static IPs -------------------------------------------------------------- 101 | # mac address : ip address 102 | 103 | dhcp-host=08:00:27:68:f9:bf,192.168.1.150 104 | 105 | #dhcp-leasefile=/var/lib/misc/dnsmasq.leases 106 | ``` 107 | 108 | *extra info* 109 | 110 | * `dnsmasq --test` - validates the config 111 | * `dnsmasq --help dhcp` - lists all the DHCP options 112 | 113 | You can also run **just the DNS server**, by deleting everything from the DHCP section 114 | of `dnsmasq.conf` to the end.<br>
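Stripped down that way, a minimal DNS-only `dnsmasq.conf` might look like this sketch, which just reuses the interface, upstreams and wildcard entry from the example above:

```bash
# never forward plain names or non-routed addresses
domain-needed
bogus-priv

# ignore /etc/resolv.conf, use only the servers listed here
no-resolv
no-poll

cache-size=1000

interface=enp0s25
listen-address=::1,127.0.0.1

server=8.8.8.8
server=1.1.1.1

# wildcard, the domain and all its subdomains point at the server
address=/example.com/192.168.1.2
```

Run `dnsmasq --test` after editing to check that the config still parses.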
115 | Then on your router, in the DHCP>DNS settings, you just put in the ip address 116 | of the dnsmasq host as the DNS server. 117 | 118 | # resolv.conf 119 | 120 | A file that contains DNS nameservers to be used by the linux machine it sits on.
121 | Since dnsmasq, a DNS server, is running right on this machine, 122 | the entries just point to localhost.
123 | 124 | `resolv.conf` 125 | ``` 126 | nameserver ::1 127 | nameserver 127.0.0.1 128 | ``` 129 | 130 | A bit of an issue is that `resolv.conf` belongs to glibc, a core linux library. 131 | But there are other network related services that like to fuck with it, 132 | like dhcpcd, networkmanager, systemd-resolved,...<br>
133 | Ideally you know what is running on your host linux system, but just in case, 134 | `resolv.conf` will be flagged as immutable. 135 | This prevents any changes to it unless the attribute is removed. 136 | 137 | Edit `/etc/resolv.conf` and set localhost as the DNS nameserver, as shown above. 138 | 139 | * Make it immutable to prevent any changes to it.<br>
140 | `sudo chattr +i /etc/resolv.conf` 141 | * Check if the content is what was set.
142 | `cat /etc/resolv.conf` 143 | 144 | # /etc/hosts 145 | 146 | `hosts` 147 | ``` 148 | 192.168.1.2 docker-host 149 | 192.168.1.1 gateway 150 | 192.168.1.2 example.com 151 | 192.168.1.2 nextcloud.example.com 152 | 192.168.1.2 book.example.com 153 | 192.168.1.2 passwd.example.com 154 | 192.168.1.2 grafana.example.com 155 | ``` 156 | 157 | This is a file present on every system, linux, windows, mac, android,... 158 | where you can assign a hostname to an IP.
159 | dnsmasq reads `/etc/hosts` for IP-hostname pairs and adds them to its own 160 | resolver records. 161 | 162 | Unfortunately there is no wildcard support.<br>
163 | But as seen in the `dnsmasq.conf`, when a domain is set there it acts as a wildcard 164 | rule, so the `example.com` entries here are just for show. 165 | 166 | # Start the service 167 | 168 | `sudo systemctl enable --now dnsmasq` 169 | 170 | * Check if it started without errors<br>
171 | `journalctl -u dnsmasq.service` 172 | * If you get a "port already in use" error, check which service is using port 53<br>
173 | `sudo ss -tulwnp`
174 | stop and disable that service, for example if it is `systemd-resolved`
175 | `sudo systemctl disable --now systemd-resolved` 176 | * Make sure you **disable other DHCP servers** on the network, 177 | usually a router is running one. 178 | 179 | # Test it 180 | 181 | #### DHCP 182 | 183 | Set some machine on the network to use DHCP for its network settings.<br>
184 | Network connection should just work with full connectivity. 185 | 186 | You can check on the dnsmasq host, in the file `/var/lib/misc/dnsmasq.leases`, 187 | for the active leases. Location of the file can vary based on your linux distro. 188 | 189 | #### DNS 190 | 191 | nslookup is a utility that checks DNS mapping, 192 | part of the `bind-utils` or `bind-tools` packages, again depending on the distro, 193 | but also available on windows. 194 | 195 | * `nslookup google.com` 196 | * `nslookup docker-host` 197 | * `nslookup example.com` 198 | * `nslookup whateverandom.example.com` 199 | * `nslookup plex.example.com` 200 | 201 | ### Troubleshooting 202 | 203 | * **ping fails from windows when using hostname**<br>
204 | windows ping does not do a dns lookup when just a plain hostname is used<br>
205 | `ping meh-pc`
206 | it's a [quirk](https://superuser.com/questions/495759/why-is-ping-unable-to-resolve-a-name-when-nslookup-works-fine/1257512#1257512) 207 | of the windows ping utility. 208 | It can be solved by adding a dot, which makes it look like a domain name and 209 | forces the dns lookup before pinging<br>
210 | `ping meh-pc.`
211 | 212 | * **slow ping of a hostname, but fast nslookup on a linux machine**
213 | for me it was `systemd-resolved` running on the machine I was pinging from.<br>
214 | It can be stopped and disabled.
215 | `sudo systemctl disable --now systemd-resolved` 216 | 217 | # Update 218 | 219 | Updated during regular host linux package updates. 220 | 221 | # Backup and restore 222 | 223 | #### Backup 224 | 225 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup) 226 | which makes a daily snapshot of the /etc directory that contains the config files. 227 | 228 | #### Restore 229 | 230 | Replace the content of the config files with the versions from the backup. 231 | -------------------------------------------------------------------------------- /docker-basics-and-troubleshoot/readme.md: -------------------------------------------------------------------------------- 1 | # Docker basics and troubleshooting 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/6SIoRWX.png) 6 | 7 | 8 | # Overview 9 | 10 | **Docker** is a container technology that uses the host's kernel while putting the userland applications into a sandbox. 11 | It is much more lightweight than a full-fledged virtual machine, but can only run stuff that can run on the host. 12 | 13 | **Compose** is a Docker tool that allows defining containers using a YAML file and running them with a single command. 14 | 15 | The purpose of docker and docker-compose is either development or deployment.<br>
16 | Most of the stuff around here is about deployment, how to deploy other people's work<br>
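To sketch what such a deployment looks like, here is a minimal compose file for a hypothetical service, where the image, names and ports are just placeholders:

```yml
services:

  whoami:
    image: traefik/whoami
    container_name: whoami
    restart: unless-stopped
    ports:
      - "8080:80"
```

Started with `docker compose up -d` from the directory the file is in, stopped with `docker compose down`. Note the port mapping in quotes, see the ports note in Basics.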
17 | 18 | # Basics 19 | 20 | tag - not just a version number, tags are not even present on the local docker image.. 21 | 22 | ports quotes - always put port mappings in quotes, because with low numbers YAML might try to interpret them as base 60 values 23 | -------------------------------------------------------------------------------- /free_cloud_hosting_VPS/readme.md: -------------------------------------------------------------------------------- 1 | # Free Hosting VPS 2 | 3 | ###### guide-by-example 4 | 5 | # Purpose & Overview 6 | 7 | Free virtual private server hosting on a cloud.<br>
8 | Here are github repos with details on various providers: 9 | 10 | * [Stack on a budget](https://github.com/255kb/stack-on-a-budget) 11 | * [Cloud Service Providers Free Tier Overview](https://github.com/cloudcommunity/Cloud-Free-Tier-Comparison) 12 | So far I only run Oracle Cloud.
13 | Will add others if I ever deal with them, or if Oracle fucks me over. 14 | 15 | # Oracle Cloud 16 | 17 | ![logo](https://i.imgur.com/SVJ6dTP.png) 18 | 19 | ### What Free Tier Offers 20 | 21 | As of 2023. 22 | 23 | [Official docs](https://docs.oracle.com/en-us/iaas/Content/FreeTier/freetier_topic-Always_Free_Resources.htm) 24 | 25 | * 1-2 instances of AMD based Compute VMs with 1/8 OCPU and 1 GB ram each<br>
26 | shape name: VM.Standard.E2.1.Micro 27 | * 1-4 instances of 4 Arm-based Ampere A1 cores with 24 GB of ram to divide
28 | shape name: VM.Standard.A1.Flex 29 | * Block Volumes Storage, 200 GB total, default 50GB for boot of any VM 30 | * 10 Mbps bandwidth 31 | * 10 TB per month outbound data transfer 32 | 33 | ### Registration 34 | 35 | * A **credit card** is needed during the registration. 36 | * **[Home region](https://docs.oracle.com/en-us/iaas/Content/General/Concepts/regions.htm)** 37 | is picked and it can **not** be changed later. 38 | Choice here will impact the IP address location and possibly the availability of 39 | free instances to create. 40 | * I read that you won't be able to re-use the credit card if you **terminate** your account. 41 | 42 | Seen comments online that some just can't get the registration done. 43 | 44 | ### Instance creation 45 | 46 | ![instance](https://i.imgur.com/nrR6Kx3.png) 47 | 48 | Instances > Create instance 49 | 50 | * If none selected, pick Compartment in left column, default root 51 | * Name - whatever 52 | * Create in compartment - default 53 | * Placement - default; `Always Free-eligible` 54 | * Security - default - shield disabled 55 | * Image and shape 56 | * VM.Standard.E2.1.Micro `Always Free-eligible` 57 | * Ubuntu 22.04 `Always Free-eligible` 58 | * Networking - edit; create a new virtual cloud network; or pick existing one<br>
59 | same for subnet
60 | Assign a public IPv4 address 61 | * Add SSH keys - `Paste public keys`. Well it would be good if you were not a noob, 62 | and already have your ssh identity, so that you just paste your public key.<br>
63 | Noobs will need to pick `Generate a key pair for me`, then download 64 | both public and private keys and google how to use them to 65 | SSH in to the VM once it's running. 66 | * Boot volume - keep default 67 | 68 | You can ignore the €1.85/month charge for Boot volume displayed, it's a bug. 69 | After the creation, in Billing & Cost Management - Subscriptions or Cost analysis 70 | the amount displayed should be 0€. 71 | 72 | 73 | ### Firewall settings 74 | 75 | **Oracle network settings ingress rule** 76 | 77 | * Virtual Cloud Networks > the VCN you got there > Subnet > Security List > Ingress Rules 78 | * A rule for port 22 should be there, to let in ssh. 79 | * Add new rule 80 | * Source Type - CIDR 81 | * Source CIDR - `0.0.0.0/0` 82 | * IP Protocol - `All Protocols` 83 | 84 | ![ingress](https://i.imgur.com/YouPN9n.png) 85 | 86 | 87 | **firewalld in ubuntu open port** 88 | 89 | in the examples here the port will be 7777/tcp 90 | 91 | * `sudo apt-get update && sudo apt-get upgrade` 92 | * `sudo apt install firewalld` 93 | * `sudo firewall-cmd --zone=public --permanent --add-port=7777/tcp` 94 | * `sudo firewall-cmd --reload` 95 | 96 | **To test** 97 | 98 | * ssh in to the VM 99 | * `nc -l 7777` - starts a netcat server listening on port 7777/tcp 100 | * go on a site like [yougetsignal.com](https://www.yougetsignal.com/tools/open-ports/)<br>
101 | put in the IP of the VM and the port, test if you get `open` 102 | * or from your linux machine `nc <vm-public-ip> 7777`,<br>
103 | write something, it should appear in the VM 104 | 105 | If it works, then depending on the use, one can start securing stuff better. 106 | For example, restricting from what public ip connections are accepted - 107 | the ingress rule has `Source CIDR` set to, let's say, `353.749.385.54/32` 108 | 109 | If set, the online test from a website will fail, but nc from 110 | a machine with that public IP will work. 111 | 112 | ### Docker on ubuntu 113 | 114 | For some reason [it's not stupid simple](https://docs.docker.com/engine/install/ubuntu/) 115 | to install docker on ubuntu. You are forced to make decisions.<br>
116 | I picked the script way, which I guess doesn't get updated? 117 | 118 | * `curl -fsSL https://get.docker.com | bash` 119 | 120 | ### Prevent reclaim if VM runs idle too much 121 | 122 | ![idle](https://i.imgur.com/q7mGQns.png) 123 | 124 | As a test I will not be running anything for now, 125 | to see if the email about an idle instance comes. 126 | 127 | If it comes, the solution should be to put regular load on the VM. 128 | This should do it. 129 | 130 | * `echo "*/5 * * * * root timeout 46 nice md5sum /dev/zero" | sudo tee /etc/cron.d/dummy-load` 131 | 132 | Some discussion on this 133 | 134 | * [1](https://www.reddit.com/r/oraclecloud/comments/122b4gf/a_simple_cron_controlled_load_generator_for/) 135 | * [2](https://www.reddit.com/r/oraclecloud/comments/125rege/what_to_run_on_always_free_to_prevent_idle/) 136 | 137 | ## Archlinux 138 | 139 | ![arch](https://i.imgur.com/eXGmmqR.png) 140 | 141 | I am used to archlinux and everything else feels wrong.<br>
142 | So to get it as a VM on oracle cloud. 143 | 144 | * Download the latest qcow2 image
145 | [https://geo.mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2](https://geo.mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2) 146 | * Storage > Buckets > Create Bucket > Defaults are fine 147 | * Pick the bucket from the list > Upload the qcow2 arch image 148 | * Compute > Custom images > Import image 149 | * Operating system - Generic Linux 150 | * Import from an Object Storage bucket 151 | * Image type - QCOW2 152 | * Paravirtualized mode 153 | * Import image 154 | 155 | Afterwards it goes the same as with any other instance. 156 | 157 | arch/arch for the SSH login, recommend disabling password login and doing IP restriction 158 | 159 | Not tested, but there are also these options: 160 | 161 | * [Convert instructions](https://gist.github.com/zengxinhui/01afb43b8d663a4232a42ee9858be45e) 162 | * [arch ARM](https://www.reddit.com/r/archlinux/comments/14iqb6h/how_to_install_arch_on_an_oracle_cloud_free_tier/) 163 | 164 | ### Links 165 | 166 | Some youtube videos and articles 167 | 168 | * [youtube-1](https://youtu.be/NKc3k7xceT8) 169 | * [youtube-2](https://youtu.be/zWeFD4NNF5o) 170 | * [ryanharrison - oracle-cloud-free-server](https://ryanharrison.co.uk/2023/01/28/oracle-cloud-free-server.html) 171 | 172 | # GCE - Google Compute Engine 173 | 174 | ![logo-gce](https://i.imgur.com/Eau2Hm5.png) 175 | 176 | 177 | ### What Free Tier Offers 178 | 179 | [Official docs](https://cloud.google.com/free/docs/free-cloud-features#compute) 180 | 181 | As of 2023. 182 | 183 | * 1 e2-micro VM instance; 0.25-2 vCPU (1 shared core); 1GB ram 184 | * 30 GB disk storage (default 10GB) 185 | * 600/300 Mbps bandwidth 186 | * 1 GB per month outbound data transfer 187 | 188 | ### Registration 189 | 190 | Credit card is required.<br>
191 | Otherwise it's a smooth process as you likely have a google account, 192 | and if you have a credit card tied to it, it's just a few yes clicks. 193 | 194 | ### New VM instance in Free Tier 195 | 196 | On the GCE console web 197 | 198 | * Create a new project named whatever lowercase. 199 | * Add your SSH key to be able to ssh in<br>
200 | left pane > metadata > SSH Keys > Edit > Add Item
201 | * Create a new virtual machine
202 | left pane > Compute Engine > VM instances > Create new instance 203 | * it asks to enable Compute Engine API, enable it 204 | * Name 205 | * Region - must be one of the three: `us-west1`, `us-central1`, `us-east1` 206 | * Zone - default 207 | * Machine series - `E2` 208 | * Machine type - e2-micro(2vCPU, 1 core, 1GB memory) 209 | * stuff left on default 210 | * Boot disk > Change 211 | * debian 12 (latest) 212 | * disk increase if desired, up to 30GB should be free 213 | * Firewall > allow both `http` and `https` traffic 214 | * CREATE 215 | 216 | After a few minutes the new VM with a public IP should be listed.<br>
217 | Test ssh in to it from your terminal. 218 | 219 | -------------------------------------------------------------------------------- /get_good_linux_terminal/readme.md: -------------------------------------------------------------------------------- 1 | # Get Good in Linux terminal 2 | 3 | ###### guide-by-example 4 | 5 | 6 | WORK IN PROGRESS
7 | WORK IN PROGRESS
8 | WORK IN PROGRESS
9 | 10 | You want or need to use terminal in linux.
11 | You want it to not suck donkey balls. 12 | 13 | *requirements* - installed a linux distro once, knows what a distro is, 14 | knows how to install packages in linux, knows what a repository is 15 | and other basic terms 16 | 17 | ### Core aspects to make comfortable 18 | 19 | * Terminal 20 | * ssh in to linux 21 | * moving between directories 22 | * editing files 23 | 24 | ### Advanced aspects 25 | 26 | * shell 27 | * services and systemd 28 | * logs 29 | * scheduled stuff 30 | * filesystems and mounting 31 | 32 | # Terminal 33 | 34 | The application that you run to connect remotely to a server.<br>
35 | For windows you want to use mobaXterm. Learn how to use it well. 36 | 37 | Tips 38 | 39 | * set font globally to something larger, like 14<br>
40 | Settings > Configuration > Terminal > Default Font Settings
41 | be aware that once set it only applies to newly created sessions, 42 | if there is a session in the left pane it already has its own settings that need changing 43 | * be aware of copy paste in it, middle click right click menu 44 | * 45 | 46 | # SSH 47 | 48 | You connect to the linux server only sometimes, but when you do 49 | you want it to be instant 50 | -------------------------------------------------------------------------------- /gotify-ntfy-signal/readme.md: -------------------------------------------------------------------------------- 1 | # gotify ntfy signal 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/ZkGGeT5.png) 6 | 7 | # Purpose & Overview 8 | 9 | Instant push notifications if email feels old timey and crowded 10 | 11 | * [gotify](https://github.com/gotify/server) 12 | * [ntfy](https://github.com/binwiederhier/ntfy) 13 | * [bbernhard/signal-cli-rest-api ](https://github.com/bbernhard/signal-cli-rest-api) 14 | 15 | --- 16 | 17 | # Overview 18 | 19 | * **gotify** - great for single person use, but the moment there's more people 20 | they need to share a single account and so lack the ability to choose 21 | what to get and what not to get. 22 | * **ntfy** - simple original approach of just subscribing to "topics". 23 | A single line of code to send a push notification. Support for multiple users, supports ios. 24 | * **signal-cli-rest-api** - no gui, needs a sim card, a registered phone number, 25 | notifications are sent through that phone number. 26 | Signal's wider spread might make it a winner, since you are not asking people 27 | to install another app. 28 | 29 | After a few weeks of tinkering with these... **ntfy is the winner for me**, for now.<br>
30 | Compose files for the other two are at the end. 31 | 32 | # docker-compose for ntfy 33 | 34 | `docker-compose.yml` 35 | ```yml 36 | services: 37 | 38 | ntfy: 39 | image: binwiederhier/ntfy 40 | container_name: ntfy 41 | hostname: ntfy 42 | env_file: .env 43 | restart: unless-stopped 44 | command: 45 | - serve 46 | ports: 47 | - "80:80" 48 | volumes: 49 | - ./ntfy_cache:/var/cache/ntfy 50 | - ./ntfy_etc:/etc/ntfy 51 | 52 | networks: 53 | default: 54 | name: $DOCKER_MY_NETWORK 55 | external: true 56 | ``` 57 | 58 | `.env` 59 | ```bash 60 | # GENERAL 61 | DOCKER_MY_NETWORK=caddy_net 62 | TZ=Europe/Bratislava 63 | ``` 64 | 65 | # Reverse proxy 66 | 67 | Caddy is used, details 68 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
69 | 70 | `Caddyfile` 71 | ``` 72 | ntfy.{$MY_DOMAIN} { 73 | reverse_proxy ntfy:80 74 | } 75 | ``` 76 | 77 | # The usage 78 | 79 | [Documentation](https://docs.ntfy.sh/publish/) 80 | 81 | ntfy uses "topics" for categorization, which creates a very handy disconnect from 82 | sender and receiver.
83 | Let's say there's a minecraft server and there are notifications when someone 84 | joins. These notifications are sent to the `minecraft` topic, not to specified users. 85 | Users can subscribe to the topic if they want those notifications. 86 | This gives great flexibility and is the main reason why ntfy wins 87 | over other solutions. In this setup there is also no authentication. 88 | 89 | #### Linux 90 | 91 | `curl -d "a player joined" https://ntfy.example.com/minecraft` 92 | 93 | #### Windows 94 | 95 | * win10+ 96 | 97 | `Invoke-RestMethod -Method 'Post' -Uri https://ntfy.example.com/minecraft -Body "a player joined" -UseBasicParsing` 98 | 99 | * win8.1 and older need a bit extra for https to work<br>
100 | 101 | ``` 102 | [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12 103 | Invoke-RestMethod -Method 'Post' -Uri https://ntfy.example.com/minecraft -Body "a player joined" -UseBasicParsing 104 | ``` 105 | 106 | #### systemd unit file service 107 | 108 | Allows the use of ntfy in `OnFailure` and `OnSuccess` inside systemd unit files. 109 | 110 | To send useful info, [specifiers](https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Specifiers) 111 | are used. 112 | 113 | * %n - full unit name 114 | * %p - prefix part of the name 115 | * %i - instance name, between @ and suffix 116 | * %H - machine hostname 117 | 118 | A systemd template unit file is used. 119 | These contain `@` to allow for dynamic naming at runtime. 120 | They are called with additional info added between `@` and the suffix `.service` 121 | 122 | `ntfy@.service` 123 | ``` 124 | [Unit] 125 | Description=ntfy notification service 126 | After=network.target 127 | 128 | [Service] 129 | Type=simple 130 | ExecStart=/bin/curl -d "%i | %H" https://ntfy.example.com/systemd 131 | ``` 132 | 133 | Example of a service using the above defined service to send notifications. 134 | 135 | `borg.service` 136 | ``` 137 | [Unit] 138 | Description=BorgBackup docker 139 | OnFailure=ntfy@failure-%p.service 140 | OnSuccess=ntfy@success-%p.service 141 | 142 | [Service] 143 | Type=simple 144 | ExecStart=/opt/borg_backup.sh 145 | ``` 146 | 147 | # Grafana to ntfy 148 | 149 | ![ntfy](https://i.imgur.com/gL81jRg.png) 150 | 151 | Alerting from grafana to ntfy works, but it's ugly, with just raw json shown. 152 | 153 | To solve this 154 | 155 | * Add container [grafana-to-ntfy](https://github.com/kittyandrew/grafana-to-ntfy). <br>
156 | Set in `.env` ntfy local url 157 | * in grafana set contact point webhook aimed at `http://grafana-to-ntfy:8080`, 158 | with credentials from the `.env` 159 | 160 | `docker-compose.yml` 161 | ```yml 162 | services: 163 | 164 | ntfy: 165 | image: binwiederhier/ntfy:v2.4.0 166 | container_name: ntfy 167 | hostname: ntfy 168 | env_file: .env 169 | restart: unless-stopped 170 | command: 171 | - serve 172 | ports: 173 | - "80:80" 174 | volumes: 175 | - ./ntfy_cache:/var/cache/ntfy 176 | - ./ntfy_etc:/etc/ntfy 177 | 178 | grafana-to-ntfy: 179 | container_name: grafana-to-ntfy 180 | hostname: grafana-to-ntfy 181 | image: kittyandrew/grafana-to-ntfy 182 | restart: unless-stopped 183 | env_file: 184 | - .env 185 | ports: 186 | - "8080:8080" 187 | 188 | 189 | networks: 190 | default: 191 | name: $DOCKER_MY_NETWORK 192 | external: true 193 | 194 | ``` 195 | 196 | `.env` 197 | ```php 198 | # GENERAL 199 | DOCKER_MY_NETWORK=caddy_net 200 | TZ=Europe/Bratislava 201 | 202 | NTFY_URL=http://ntfy:80/whatever 203 | BAUTH_USER=admin 204 | BAUTH_PASS=test 205 | ``` 206 | 207 |
208 |

gotify and signal compose

209 | 210 | `gotify-docker-compose.yml` 211 | ```yml 212 | services: 213 | 214 | gotify: 215 | image: gotify/server 216 | container_name: gotify 217 | hostname: gotify 218 | restart: unless-stopped 219 | env_file: .env 220 | volumes: 221 | - "./gotify_data:/app/data" 222 | 223 | networks: 224 | default: 225 | name: caddy_net 226 | external: true 227 | ``` 228 | 229 | `signal-docker-compose.yml` 230 | ```yml 231 | signal: 232 | image: bbernhard/signal-cli-rest-api 233 | container_name: signal 234 | hostname: signal 235 | env_file: .env 236 | restart: unless-stopped 237 | volumes: 238 | - "./signal-cli-config:/home/.local/share/signal-cli" #map "signal-cli-config" folder on host system into docker container. the folder contains the password and cryptographic keys when a new number is registered 239 | 240 | networks: 241 | default: 242 | name: caddy_net 243 | external: true 244 | ``` 245 | 246 |
247 | 248 | --- 249 | --- 250 | -------------------------------------------------------------------------------- /home_assistant/readme.md: -------------------------------------------------------------------------------- 1 | # Home Assistant 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/lV7LdOC.png) 6 | 7 | # Purpose & Overview 8 | 9 | WORK IN PROGRESS
10 | WORK IN PROGRESS
11 | WORK IN PROGRESS
12 | 13 | Home monitoring and automation system. 14 | 15 | * [Official site](https://www.home-assistant.io/) 16 | * [Github](https://github.com/home-assistant) 17 | 18 | HA is designed to be a central control platform for IoT - Internet of Things. 19 | You buy some sensors for movement, temperature, light, door, power consumption,... 20 | you buy some smart light switches, lightbulbs, locks, blinds, relays, microphones,...<br>
21 | And HA lets you automate. If movement happens in room X, switch on light Y. 22 | If temperature drops below X, turn on relay Y. If doors X are open, send a push 23 | notification to user Z. 24 | 25 | HA is open source, written in python. 26 | 27 | # Hardware 28 | 29 | I picked **Zigbee** for my main wireless protocol. 30 | 31 | * **Zigbee** - Cheap to get into, widespread selection of devices. 32 | But uses 2.4Ghz, same as wifi, so there's a chance of 33 | [interference](https://www.metageek.com/training/resources/zigbee-wifi-coexistence/). 34 | * **Z-Wave** - 900Mhz means great penetration and no wifi interference. 35 | More reliable compatibility between devices. 36 | But several times more expensive and a poorer selection of devices. 37 | * **Wifi** - Cheapest to get into as people have wifi. But should not be the long 38 | term plan. It is however preferred for wireless devices that stream nonstop data, 39 | like let's say a smart powerplug that reports power consumption. 40 | It saves on the limited bandwidth that Zigbee or Z-Wave have. 41 | 42 | I got: 43 | 44 | * [ZigStar UZG-01](https://uzg.zig-star.com/product/) as the zigbee coordinator, bought from elecrow. 45 | * 3x [Philips Hue Motion Sensor](https://www.philips-hue.com/en-gb/p/hue-hue-motion-sensor/8719514342125) (P/N: 929003067501) 46 | 47 | 48 | # Installation 49 | 50 | * [Official documentation](https://www.home-assistant.io/installation/) 51 | 52 | ## Docker vs Virtual Machine 53 | 54 | It's not really a decision, you want to **go full Virtual Machine.**<br>
55 | Reason being that the essential addons are installed as docker containers 56 | inside HA, and there is no way to nest them when HA is running as a container itself. 57 | 58 | I have an ESXI hypervisor and I just followed the instructions. 59 | 60 | Some core steps. 61 | 62 | * [download vmdk](https://www.home-assistant.io/installation/windows) 63 | * Create a new VM - Linux; **Debian 11 x64**; 2 cpus; 4G ram<br>
64 | * Remove disk and dvdrom; add existing disk we dl; switch to IDE 0 65 | * Network adapter switch from VMXnet3 to E1000e 66 | * in VM Options switch from BIOS to EFI 67 | 68 | I had some issues when I did not get it right during creation and tried to change 69 | afterwards. The VM would not see the disk. But fresh creation worked 70 | with debian 11 x64 set. 71 | 72 | # The Initial Configuration 73 | 74 | ### First login 75 | 76 | * Log in at the `:8123` 77 | * Create new user and password. 78 | * Set location. 79 | * Set either static IP address in Settings > System > Network
80 | or set IP reservation on your dhcp server. 81 | 82 | ### User preferences 83 | 84 | change date format and first day of the week, enable advanced mode 85 | 86 | ### SSH 87 | 88 | * Install addon - Advanced SSH & Web Terminal 89 | * In the configuration set username and copy paste full public key from `.ssh/id_rsa.pub` 90 | 91 | ### Useful addons 92 | 93 | * VSCode 94 | 95 | ### Reverse proxy 96 | 97 | Caddy is used, details 98 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
99 | 100 | `Caddyfile` 101 | ```bash 102 | home.{$MY_DOMAIN} { 103 | reverse_proxy homeassistant:8123 104 | } 105 | ``` 106 | 107 | adding to `configuration.yaml`, either by ssh and nano or VSCode addon 108 | 109 | ```yml 110 | http: 111 | use_x_forwarded_for: true 112 | trusted_proxies: 113 | - 10.0.19.4 114 | 115 | homeassistant: 116 | external_url: "https://home.example.com:8123" 117 | ``` 118 | 119 | # guides, tips, resources 120 | 121 | * [How To Build a Stable and Robust Zigbee Network](https://smarthomescene.com/guides/how-to-build-a-stable-and-robust-zigbee-network/) 122 | * [Zigbee Smart home - best buy tips](https://vdbrink.github.io/buy/smart_home_best_buy_tips) 123 | 124 | # old mess shit beyond this point 125 | 126 | --- 127 | --- 128 | --- 129 | --- 130 | --- 131 | --- 132 | 133 | 134 | # Files and directory structure 135 | 136 | ``` 137 | /home/ 138 | └── ~/ 139 | └── docker/ 140 | └── home_assistant/ 141 | ├── home_assistant_config/ 142 | ├── .env 143 | └── docker-compose.yml 144 | ``` 145 | 146 | * `home_assistant_config/` - configuration 147 | * `.env` - a file containing environment variables for docker compose 148 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers 149 | 150 | You only need to provide the two files.
151 | The directories are created by docker compose on the first run. 152 | 153 | # docker-compose 154 | 155 | `docker-compose.yml` 156 | ```yml 157 | services: 158 | 159 | homeassistant: 160 | image: "ghcr.io/home-assistant/home-assistant:stable" 161 | container_name: homeassistant 162 | hostname: homeassistant 163 | privileged: true 164 | restart: unless-stopped 165 | env_file: .env 166 | volumes: 167 | - ./home_assistant_config:/config 168 | - /etc/localtime:/etc/localtime:ro 169 | ports: 170 | - "8123:8123" 171 | 172 | networks: 173 | default: 174 | name: $DOCKER_MY_NETWORK 175 | external: true 176 | ``` 177 | 178 | `.env` 179 | ```bash 180 | # GENERAL 181 | DOCKER_MY_NETWORK=caddy_net 182 | TZ=Europe/Bratislava 183 | ``` 184 | 185 | **All containers must be on the same network**.
186 | It is named in the `.env` file.
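To make that check scriptable, the inspect-then-create can be combined into one guarded helper; a minimal sketch (the helper name `ensure_net` is ours, `caddy_net` matches the `DOCKER_MY_NETWORK` value from the `.env` file):

```shell
# ensure_net creates a docker network only when it does not exist yet;
# 'docker network inspect' exits non-zero for an unknown network name,
# so the create runs only on a fresh host
ensure_net() {
  docker network inspect "$1" >/dev/null 2>&1 || docker network create "$1"
}

# on the docker host you would call:
#   ensure_net caddy_net
```

Running it repeatedly is safe, since the create is skipped once the network exists.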
187 | If one does not exist yet: `docker network create caddy_net` 188 | 189 | # Reverse proxy 190 | 191 | Caddy is used, details 192 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
193 | 194 | `Caddyfile` 195 | ```bash 196 | home.{$MY_DOMAIN} { 197 | reverse_proxy homeassistant:8123 198 | } 199 | ``` 200 | 201 | For security, the following needs to be added to the home assistant config file, 202 | which gets created on the first run in the directory `home_assistant_config` 203 | 204 | `configuration.yaml` 205 | 206 | ```yml 207 | http: 208 | use_x_forwarded_for: true 209 | trusted_proxies: 210 | - 172.16.0.0/12 211 | ip_ban_enabled: true 212 | login_attempts_threshold: 10 213 | ``` 214 | 215 | 216 | ---------- end for now ----------- 217 | 218 | # First run 219 | 220 | 221 | ![interface-pic](https://i.imgur.com/pZMi6bb.png) 222 | 223 | 224 | # Specifics of my setup 225 | 226 | * no long term use yet 227 | * AMD CPU and no GPU, so no experience with hardware transcoding 228 | * media files are stored and shared on a truenas scale VM 229 | and mounted directly on the docker host using [systemd mounts](https://forum.manjaro.org/t/root-tip-systemd-mount-unit-samples/1191), 230 | instead of fstab or autofs. 231 | 232 | `/etc/systemd/system/mnt-bigdisk.mount` 233 | ```ini 234 | [Unit] 235 | Description=12TB truenas mount 236 | 237 | [Mount] 238 | What=//10.0.19.19/Dataset-01 239 | Where=/mnt/bigdisk 240 | Type=cifs 241 | Options=ro,username=ja,password=qq,file_mode=0700,dir_mode=0700,uid=1000 242 | DirectoryMode=0700 243 | 244 | [Install] 245 | WantedBy=multi-user.target 246 | ``` 247 | 248 | `/etc/systemd/system/mnt-bigdisk.automount` 249 | ```ini 250 | [Unit] 251 | Description=12TB truenas mount 252 | 253 | [Automount] 254 | Where=/mnt/bigdisk 255 | 256 | [Install] 257 | WantedBy=multi-user.target 258 | ``` 259 | 260 | to automount on boot - `sudo systemctl enable mnt-bigdisk.automount` 261 | 262 | # Troubleshooting 263 | 264 | 265 | ![error-pic](https://i.imgur.com/KQhmZTQ.png) 266 | 267 | *We're unable to connect to the selected server right now. 
Please ensure it is running and try again.* 268 | 269 | If you encounter this, try opening the URL in a browser's private window.
270 | If it works, clear the cookies in your browser. 271 | 272 | 273 | # Update 274 | 275 | Manual image update: 276 | 277 | - `docker-compose pull`
278 | - `docker-compose up -d`
279 | - `docker image prune` 280 | 281 | # Backup and restore 282 | 283 | #### Backup 284 | 285 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup) 286 | that makes a daily snapshot of the entire directory. 287 | 288 | #### Restore 289 | 290 | * down the home assistant containers `docker-compose down`
291 | * delete the entire home assistant directory
292 | * from the backup copy back the home assistant directory
293 | * start the containers `docker-compose up -d` 294 | -------------------------------------------------------------------------------- /homer/assets/favicon.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/homer/assets/favicon.png -------------------------------------------------------------------------------- /homer/assets/logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/homer/assets/logo.png -------------------------------------------------------------------------------- /homer/assets/tools/bitwarden.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/homer/assets/tools/bitwarden.png -------------------------------------------------------------------------------- /homer/assets/tools/bookstack.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/homer/assets/tools/bookstack.png -------------------------------------------------------------------------------- /homer/assets/tools/grafana.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/homer/assets/tools/grafana.png -------------------------------------------------------------------------------- /homer/assets/tools/nextcloud.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/homer/assets/tools/nextcloud.png -------------------------------------------------------------------------------- /homer/assets/tools/portainer.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/homer/assets/tools/portainer.png -------------------------------------------------------------------------------- /homer/readme.md: -------------------------------------------------------------------------------- 1 | # Homer in docker 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/NSZ1DTH.png) 6 | 7 | # Purpose 8 | 9 | Homepage. 10 | 11 | * [Github](https://github.com/bastienwirtz/homer) 12 | * [DockerHub image used](https://hub.docker.com/r/b4bz/homer) 13 | 14 | Homer is a simple static web page, configured using a yaml file.
15 | The docker image uses the darkhttpd simple web server on alpine linux. 16 | 17 | # Files and directory structure 18 | 19 | ``` 20 | /home/ 21 | └── ~/ 22 | └── docker/ 23 | └── homer/ 24 | ├── assets/ 25 | │ └── tools/ 26 | ├── .env 27 | ├── docker-compose.yml 28 | └── config.yml 29 | ``` 30 | 31 | * `assets/` - a directory containing icons and other directories with icons 32 | * `.env` - a file containing environment variables for docker compose 33 | * `docker-compose.yml` - a docker compose file, telling docker how to run the container 34 | * `config.yml` - homer's configuration file bind mounted into the container 35 | 36 | All files and folders need to be provided.
37 | The `assets` directory is part of this repo. 38 | 39 | # docker-compose 40 | 41 | `docker-compose.yml` 42 | ```yml 43 | version: "2" 44 | services: 45 | 46 | homer: 47 | image: b4bz/homer:latest 48 | container_name: homer 49 | hostname: homer 50 | restart: unless-stopped 51 | volumes: 52 | - ./config.yml:/www/config.yml:ro 53 | - ./assets/:/www/assets:ro 54 | 55 | networks: 56 | default: 57 | external: 58 | name: $DOCKER_MY_NETWORK 59 | ``` 60 | 61 | `.env` 62 | ```bash 63 | # GENERAL 64 | MY_DOMAIN=example.com 65 | DOCKER_MY_NETWORK=caddy_net 66 | TZ=Europe/Bratislava 67 | ``` 68 | 69 | # Reverse proxy 70 | 71 | Caddy v2 is used, details 72 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
73 | 74 | `Caddyfile` 75 | ``` 76 | {$MY_DOMAIN} { 77 | reverse_proxy homer:8080 78 | } 79 | ``` 80 | 81 | # Config 82 | 83 | Homer is configured in this `config.yml` file.
84 | This one is based on the example from 85 | the [github](https://github.com/bastienwirtz/homer). 86 | 87 | 88 | `config.yml` 89 | ```yml 90 | title: "Homepage" 91 | subtitle: "Homer" 92 | logo: "assets/logo.png" 93 | # icon: "fas fa-skull-crossbones" 94 | footer: '
less boring look with a footer
' 95 | 96 | # Optional navbar 97 | links: 98 | - name: "Font Awesome Icons Galery" 99 | icon: "fab fa-fort-awesome" 100 | url: "https://fontawesome.com/icons?d=gallery" 101 | - name: "Reddit SelfHosted" 102 | icon: "fab fa-reddit" 103 | url: "https://www.reddit.com/r/selfhosted/" 104 | 105 | # First level array represent a group 106 | # Single service with an empty name if not using groups 107 | services: 108 | - name: "Main" 109 | icon: "fab fa-docker" 110 | items: 111 | - name: "Bookstack" 112 | logo: "/assets/tools/bookstack.png" 113 | subtitle: "Notes and Documentation" 114 | url: "https://book.example.com" 115 | - name: "Bitwarden" 116 | logo: "/assets/tools/bitwarden.png" 117 | subtitle: "Password Manager" 118 | url: "https://passwd.example.com" 119 | - name: "Nextcloud" 120 | logo: "/assets/tools/nextcloud.png" 121 | subtitle: "File Sync & Share" 122 | url: "https://nextcloud.example.com" 123 | - name: "Monitoring" 124 | icon: "fas fa-heartbeat" 125 | items: 126 | - name: "Prometheus + Grafana" 127 | logo: "/assets/tools/grafana.png" 128 | subtitle: "Metric analytics & dashboards" 129 | url: "https://grafana.example.com" 130 | - name: "Portainer" 131 | logo: "/assets/tools/portainer.png" 132 | subtitle: "Docker Manager" 133 | url: "https://portainer.example.com" 134 | ``` 135 | 136 | ![look](https://i.imgur.com/hrggtcZ.png) 137 | 138 | # Update 139 | 140 | [Watchtower](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/watchtower) 141 | updates the image automatically. 142 | 143 | Manual image update: 144 | 145 | - `docker-compose pull`
146 | - `docker-compose up -d`
147 | - `docker image prune` 148 | 149 | # Backup and restore 150 | 151 | #### Backup 152 | 153 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup) 154 | that makes daily snapshot of the entire directory. 155 | 156 | #### Restore 157 | 158 | * down the homer container `docker-compose down`
159 | * delete the entire homer directory
160 | * from the backup copy back the homer directory
161 | * start the container `docker-compose up -d` 162 | -------------------------------------------------------------------------------- /kopia_backup/kopia_cli_deploy_win/DEPLOY.cmd: -------------------------------------------------------------------------------- 1 | @echo off 2 | 3 | :: checking if the script is run as administrator 4 | net session >nul 2>&1 5 | if %errorLevel% == 0 ( 6 | echo - Success: Administrative permissions confirmed. 7 | ) else ( 8 | echo - RUN AS ADMINISTRATOR 9 | pause 10 | exit /B 11 | ) 12 | 13 | echo - powershell ExecutionPolicy changing to Bypass 14 | powershell.exe Set-ExecutionPolicy -ExecutionPolicy Bypass 15 | 16 | echo - checking if C:\Kopia folder exists, creating it if not 17 | if not exist "C:\Kopia\" ( 18 | mkdir C:\Kopia 19 | ) 20 | 21 | if exist "C:\Kopia\kopia_backup_scipt.ps1" ( 22 | echo - C:\Kopia\kopia_backup_scipt.ps1 exists, renaming it with random suffix 23 | ren "C:\Kopia\kopia_backup_scipt.ps1" "kopia_backup_scipt_%random%.ps1" 24 | ) 25 | 26 | echo - copying files to C:\Kopia 27 | robocopy "%~dp0\" "C:\Kopia" "kopia.exe" /NDL /NJH /NJS 28 | robocopy "%~dp0\" "C:\Kopia" "kopia_backup_scipt.ps1" /NDL /NJH /NJS 29 | echo. 30 | 31 | echo - adding C:\Kopia to PATH 32 | setx /M PATH "%PATH%;C:\Kopia" 33 | 34 | if exist C:\Windows\System32\Tasks\kopia_backup_schedule ( 35 | echo - scheduled task with that name already exists, skipping 36 | echo - delete the task in taskschd.msc if you want fresh import 37 | ) else ( 38 | echo - importing scheduled task kopia_backup_schedule 39 | schtasks.exe /Create /XML "%~dp0\kopia_backup_schedule.xml" /tn "kopia_backup_schedule" 40 | ) 41 | 42 | echo. 43 | echo -------------------------------------------------------------- 44 | echo. 
45 | echo DEPLOYMENT DONE 46 | echo GO READ WHAT IS WRITTEN IN C:\Kopia\kopia_backup_scipt.ps1 47 | pause 48 | -------------------------------------------------------------------------------- /kopia_backup/kopia_cli_deploy_win/kopia.exe: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/kopia_backup/kopia_cli_deploy_win/kopia.exe -------------------------------------------------------------------------------- /kopia_backup/kopia_cli_deploy_win/kopia_backup_schedule.xml: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/kopia_backup/kopia_cli_deploy_win/kopia_backup_schedule.xml -------------------------------------------------------------------------------- /kopia_backup/kopia_cli_deploy_win/kopia_backup_scipt.ps1: -------------------------------------------------------------------------------- 1 | # v0.2 2 | # Before using this script, create a repo, note the setting of a password 3 | # kopia repo create filesystem --path C:\kopia_repo --password aaa 4 | 5 | # for backblaze b2 6 | # kopia repository create b2 --bucket=rakanishu --key-id=001496285081a7e0000000003 --key=K0016L8FAMRp/F+6ckbXIYpP0UgTky0 --password aaa 7 | 8 | # useful commands: 9 | # - kopia repo status 10 | # - kopia repo connect filesystem --path C:\kopia_repo -p aaa 11 | # - kopia snap list --all 12 | # - kopia mount all K: 13 | 14 | # mounting might need to be executed as a non-admin user, weird windows thing 15 | # if one does not see the drive: 16 | # - 'net use' shows a path that can be pasted to explorer or a browser 17 | # \\127.0.0.1@51295\DavWWWRoot 18 | 19 | # logs location is set to C:\Kopia\Kopia_Logs for every command 20 | # if it was not manually set, it would be 21 | # C:\Windows\System32\config\systemprofile\AppData 22 | 
23 | 24 | # to backup multiple targets/paths: 25 | # - [array]$BACKUP_THIS = 'C:\Test','C:\users','C:\blabla' 26 | 27 | $REPOSITORY_PATH = 'C:\kopia_repo' 28 | $KOPIA_PASSWORD = 'aaa' 29 | [array]$BACKUP_THIS = 'C:\test' 30 | $LOG_PATH = 'C:\Kopia\Kopia_Logs' 31 | $USE_SHADOW_COPY = $false 32 | 33 | # ---------------------------------------------------------------------------- 34 | 35 | $Env:KOPIA_LOG_DIR = $LOG_PATH 36 | 37 | kopia repository connect filesystem --path $REPOSITORY_PATH --password $KOPIA_PASSWORD 38 | # kopia repository connect b2 --bucket=kopia-repo-rakanishu --key-id=001496285081a7e0000000003 --key=K0016L8FAMRp/F+6ckbXIYpP0UgTky0 --password $KOPIA_PASSWORD 39 | 40 | kopia policy set --global --compression=zstd-fastest --keep-annual=0 --keep-monthly=12 --keep-weekly=0 --keep-daily=14 --keep-hourly=0 --keep-latest=3 41 | 42 | if ($USE_SHADOW_COPY) { 43 | kopia policy set --global --enable-volume-shadow-copy=when-available 44 | } 45 | 46 | foreach ($path in $BACKUP_THIS) { 47 | kopia snapshot create $path --file-log-level=info 48 | } 49 | 50 | kopia repository disconnect 51 | -------------------------------------------------------------------------------- /kopia_backup/kopia_server_deploy_service_win/DEPLOY.cmd: -------------------------------------------------------------------------------- 1 | @echo off 2 | 3 | :: checking if the script is run as administrator 4 | net session >nul 2>&1 5 | if %errorLevel% == 0 ( 6 | echo - Success: Administrative permissions confirmed. 
7 | ) else ( 8 | echo - RUN AS ADMINISTRATOR 9 | pause 10 | exit /B 11 | ) 12 | 13 | echo - powershell ExecutionPolicy changing to Bypass 14 | powershell.exe Set-ExecutionPolicy -ExecutionPolicy Bypass 15 | 16 | echo - checking if C:\Kopia folder exists, creating it if not 17 | if not exist "C:\Kopia\" ( 18 | mkdir C:\Kopia 19 | ) 20 | 21 | if exist "C:\Kopia\kopia_server_start.cmd" ( 22 | echo - C:\Kopia\kopia_server_start.cmd exists, renaming it with random suffix 23 | ren "C:\Kopia\kopia_server_start.cmd" "kopia_server_start_%random%.cmd" 24 | ) 25 | 26 | echo - copying files to C:\Kopia 27 | robocopy "%~dp0\" "C:\Kopia" "kopia.exe" /NDL /NJH /NJS 28 | robocopy "%~dp0\" "C:\Kopia" "kopia_server_start.cmd" /NDL /NJH /NJS 29 | robocopy "%~dp0\" "C:\Kopia" "shawl.exe" /NDL /NJH /NJS 30 | echo. 31 | 32 | echo - adding C:\Kopia to PATH 33 | setx /M PATH "%PATH%;C:\Kopia" 34 | 35 | echo - creating Kopia service 36 | C:\Kopia\shawl.exe add --log-dir C:\kopia\Kopia_service_logs --name Kopia -- C:\Kopia\kopia_server_start.cmd 37 | 38 | echo - setting Kopia service to start automatically at boot 39 | sc config Kopia start=auto 40 | 41 | echo - start Kopia service 42 | sc start Kopia 43 | 44 | echo - copying link to Desktop 45 | robocopy "%~dp0\" "%USERPROFILE%\Desktop" "Kopia.url" /NDL /NJH /NJS 46 | 47 | echo. 48 | echo -------------------------------------------------------------- 49 | echo. 50 | echo DEPLOYMENT DONE 51 | echo KOPIA SERVER CAN NOW BE FOUND AT THE WEB PAGE: localhost:51515 52 | echo A LINK SHOULD BE ON YOUR DESKTOP 53 | echo. 
54 | pause 55 | -------------------------------------------------------------------------------- /kopia_backup/kopia_server_deploy_service_win/Kopia.url: -------------------------------------------------------------------------------- 1 | [InternetShortcut] 2 | URL=http://localhost:51515/ 3 | IDList= 4 | HotKey=0 5 | [{000214A0-0000-0000-C000-000000000046}] 6 | Prop3=19,2 7 | -------------------------------------------------------------------------------- /kopia_backup/kopia_server_deploy_service_win/kopia.exe: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/kopia_backup/kopia_server_deploy_service_win/kopia.exe -------------------------------------------------------------------------------- /kopia_backup/kopia_server_deploy_service_win/kopia_server_start.cmd: -------------------------------------------------------------------------------- 1 | kopia server start --insecure --config-file=C:\Kopia\repository.config --log-dir=C:\Kopia\Kopia_Logs --address=127.0.0.1:51515 --server-username=admin --server-password=aaa --enable-actions 2 | 3 | :: to have full functionality of a kopia server 4 | :: replace the above command with the one below 5 | :: in it the address was changed to 0.0.0.0 to allow connection over the network 6 | :: and tls cert stuff was added, without which the server does not function 7 | 8 | :: kopia server start --tls-generate-cert --tls-cert-file C:\Kopia\tls_kopia.cert --tls-key-file C:\Kopia\tls_kopia.key --config-file=C:\Kopia\repository.config --log-dir=C:\Kopia\Kopia_Logs --address=0.0.0.0:51515 --server-username=admin --server-password=aaa 9 | 10 | :: restart the kopia service and check C:\Kopia to see if the tls_kopia files are now there 11 | :: now again edit this file to remove the "--tls-generate-cert" part from the command 12 | :: restart the service again 13 | 14 | :: log in to the webGUI and create a repo 15 | 16 | :: 
now users that will be able to back up to that repo need to be added 17 | :: no GUI for that for whatever reason 18 | :: open windows cmd / powershell as admin 19 | :: connect to the repo manually 20 | :: it could help going to the webgui > repository > small shell-icon at the bottom 21 | :: click on it and it will show the command to execute to get the repo status 22 | :: adjust the command to be about adding a user 23 | 24 | :: C:\Kopia\kopia.exe --config-file=C:\Kopia\repository.config server user add myuser@mylaptop 25 | 26 | :: once again restart the kopia service 27 | 28 | :: now finally one can go to the client machine 29 | :: Kopia Repository Server 30 | :: give the ip address and port, use https, something like https://10.0.19.95:51515 31 | :: write random stuff in to "Trusted server certificate fingerprint (SHA256)" 32 | :: kopia on a connect attempt will tell what the real fingerprint is 33 | :: copy it into the field and try to connect again 34 | :: can override username/machine name in the advanced section 35 | -------------------------------------------------------------------------------- /kopia_backup/kopia_server_deploy_service_win/shawl.exe: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/kopia_backup/kopia_server_deploy_service_win/shawl.exe -------------------------------------------------------------------------------- /lubelog/readme.md: -------------------------------------------------------------------------------- 1 | # LubeLog in docker 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/7zjQQzy.png) 6 | 7 | # Purpose & Overview 8 | 9 | Vehicle service records and maintenance tracker. 10 | 11 | * [Official site](https://lubelogger.com/) 12 | * [Github](https://github.com/hargata/lubelog) 13 | 14 | LubeLogger is a new open source vehicle records app. 
15 | 16 | Written in C# and ASP.NET Core, with a LiteDB file based database. 17 | 18 | # Files and directory structure 19 | 20 | ``` 21 | /home/ 22 | └── ~/ 23 | └── docker/ 24 | └── LubeLog/ 25 | ├── 🗁 lubelog_config/ 26 | ├── 🗁 lubelog_data/ 27 | ├── 🗁 lubelog_documents/ 28 | ├── 🗁 lubelog_images/ 29 | ├── 🗁 lubelog_keys/ 30 | ├── 🗁 lubelog_log/ 31 | ├── 🗁 lubelog_temp/ 32 | ├── 🗁 lubelog_translations/ 33 | ├── 🗋 .env 34 | └── 🗋 docker-compose.yml 35 | ``` 36 | 37 | * `lubelog directories` - with data 38 | * `.env` - a file containing environment variables for docker compose 39 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers 40 | 41 | Only the two files are required. The directories are created on the first run. 42 | 43 | # docker-compose 44 | 45 | [Dockercompose](https://github.com/hargata/lubelog/blob/main/docker-compose.yml) 46 | from the github page was used as a template. 47 | 48 | `docker-compose.yml` 49 | ```yml 50 | services: 51 | lubelog: 52 | image: ghcr.io/hargata/lubelogger:latest 53 | container_name: lubelog 54 | hostname: lubelog 55 | restart: unless-stopped 56 | env_file: .env 57 | volumes: 58 | - ./lubelog_config:/App/config 59 | - ./lubelog_data:/App/data 60 | - ./lubelog_translations:/App/wwwroot/translations 61 | - ./lubelog_documents:/App/wwwroot/documents 62 | - ./lubelog_images:/App/wwwroot/images 63 | - ./lubelog_temp:/App/wwwroot/temp 64 | - ./lubelog_log:/App/log 65 | - ./lubelog_keys:/root/.aspnet/DataProtection-Keys 66 | ports: 67 | - 8080:8080 68 | 69 | networks: 70 | default: 71 | name: $DOCKER_MY_NETWORK 72 | external: true 73 | ``` 74 | 75 | `.env` 76 | ```bash 77 | # GENERAL 78 | DOCKER_MY_NETWORK=caddy_net 79 | TZ=Europe/Bratislava 80 | 81 | #LUBELOG 82 | LC_ALL=en_US.UTF-8 83 | LANG=en_US.UTF-8 84 | MailConfig__EmailServer=smtp-relay.brevo.com 85 | MailConfig__EmailFrom=lubelog@example.com 86 | MailConfig__UseSSL=True 87 | MailConfig__Port=587 88 | MailConfig__Username= 89 | MailConfig__Password= 90 | 
LOGGING__LOGLEVEL__DEFAULT=Error 91 | ``` 92 | 93 | **All containers must be on the same network**.
94 | It is named in the `.env` file.
95 | If one does not exist yet: `docker network create caddy_net` 96 | 97 | # Reverse proxy 98 | 99 | Caddy v2 is used, details 100 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
101 | 102 | `Caddyfile` 103 | ```php 104 | auto.{$MY_DOMAIN} { 105 | reverse_proxy lubelog:8080 106 | } 107 | ``` 108 | 109 | # First run 110 | 111 | --- 112 | 113 | 114 | # Troubleshooting 115 | 116 | 117 | # Update 118 | 119 | Manual image update: 120 | 121 | - `docker-compose pull`
122 | - `docker-compose up -d`
123 | - `docker image prune` 124 | 125 | It is **strongly recommended** to now add current **tags** to the images in the compose.
126 | Tags will allow you to easily return to a working state if an update goes wrong. 127 | 128 | 129 | # Backup and restore 130 | 131 | #### Backup 132 | 133 | 134 | #### Restore 135 | 136 | 137 | # Backup of just user data 138 | 139 | 140 | #### Backup script 141 | 142 | 143 | #### Cronjob - scheduled backup 144 | 145 | # Restore the user data 146 | 147 | -------------------------------------------------------------------------------- /mealie/readme.md: -------------------------------------------------------------------------------- 1 | # Mealie in docker 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/G546d6v.png) 6 | 7 | # Purpose & Overview 8 | 9 | Recipe cookbook. 10 | 11 | * [The official site](https://hay-kot.github.io/mealie/) 12 | * [Github](https://github.com/hay-kot/mealie) 13 | * [DockerHub](https://hub.docker.com/r/hkotel/mealie) 14 | 15 | Mealie is a simple, open source, self hosted cookbook.
16 | Written in python and javascript, using the Vue framework for the frontend. 17 | It stores recipes in plain json as well as in an sqlite database. 18 | 19 | # Files and directory structure 20 | 21 | ``` 22 | /home/ 23 | └── ~/ 24 | └── docker/ 25 | └── mealie/ 26 | ├── 🗁 mealie_data/ 27 | ├── 🗋 .env 28 | └── 🗋 docker-compose.yml 29 | ``` 30 | 31 | * `mealie_data/` - a directory with persistent data and backups 32 | * `.env` - a file containing environment variables for docker compose 33 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers 34 | 35 | Only the files are required. The directory is created on the first run. 36 | 37 | # docker-compose 38 | 39 | The official documentation compose example is 40 | [here.](https://hay-kot.github.io/mealie/documentation/getting-started/install/#docker-compose-with-sqlite) 41 | 42 | `docker-compose.yml` 43 | ```yml 44 | services: 45 | 46 | mealie: 47 | image: hkotel/mealie 48 | container_name: mealie 49 | hostname: mealie 50 | restart: unless-stopped 51 | env_file: .env 52 | expose: 53 | - "80" 54 | volumes: 55 | - ./mealie_data/:/app/data 56 | 57 | networks: 58 | default: 59 | name: $DOCKER_MY_NETWORK 60 | external: true 61 | ``` 62 | 63 | `.env` 64 | ```bash 65 | # GENERAL 66 | DOCKER_MY_NETWORK=caddy_net 67 | TZ=Europe/Bratislava 68 | 69 | # MEALIE 70 | PUID=1000 71 | PGID=1000 72 | RECIPE_PUBLIC=true 73 | RECIPE_SHOW_NUTRITION=true 74 | RECIPE_SHOW_ASSETS=true 75 | RECIPE_LANDSCAPE_VIEW=true 76 | RECIPE_DISABLE_COMMENTS=false 77 | RECIPE_DISABLE_AMOUNT=false 78 | ``` 79 | 80 | **All containers must be on the same network**.
81 | It is named in the `.env` file.
82 | If one does not exist yet: `docker network create caddy_net` 83 | 84 | # Reverse proxy 85 | 86 | Caddy v2 is used, details 87 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
88 | 89 | `Caddyfile` 90 | ```php 91 | book.{$MY_DOMAIN} { 92 | reverse_proxy mealie:80 93 | } 94 | ``` 95 | 96 | # First run 97 | 98 | Default login: `changeme@email.com` // `MyPassword` 99 | 100 | --- 101 | 102 | ![interface-pic](https://i.imgur.com/Y1VtD0e.png) 103 | 104 | # New version incoming 105 | 106 | There is a new version in the works, v1.0.0 is already in beta5, 107 | but it seems major changes are introduced and there is not yet a feature to 108 | share recipes with people without a password. 109 | 110 | # Update 111 | 112 | Manual image update: 113 | 114 | - `docker-compose pull`
115 | - `docker-compose up -d`
116 | - `docker image prune` 117 | 118 | It is **strongly recommended** to now add current **tags** to the images in the compose.
119 | Tags will allow you to easily return to a working state if an update goes wrong. 120 | 121 | # Backup and restore 122 | 123 | #### Backup 124 | 125 | Using [kopia](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/kopia_backup) 126 | or [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup) 127 | to make daily snapshot of the entire docker directory. 128 | 129 | #### Restore 130 | 131 | * down the containers `docker-compose down`
132 | * delete/move/rename the entire project directory
133 | * from the backups copy back the entire project directory
134 | * start the containers `docker-compose up -d` 135 | -------------------------------------------------------------------------------- /meshcrentral/readme.md: -------------------------------------------------------------------------------- 1 | # Meshcentral in docker 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/aqBSYbu.png) 6 | 7 | # Purpose & Overview 8 | 9 | Powerful remote desktop toolset. 10 | 11 | * [Official site](https://www.meshcommander.com/meshcentral2) 12 | * [Github](https://github.com/Ylianst/MeshCentral) 13 | * [ghcr.io](https://github.com/ylianst/MeshCentral/pkgs/container/meshcentral) 14 | 15 | Web based, can be a replacement for TeamViewer or Anydesk.
16 | The server is written in javascript, running in the node.js runtime. 17 | The client application is written mostly in C, running the Duktape javascript engine. 18 | 19 | For a database the server uses a built-in neDB, which should be enough for 20 | deployments with fewer than 100 clients. Or MongoDB can be deployed for better 21 | performance and robustness, at the cost of added complexity. 22 | 23 | The architecture is relatively simple. 24 | 25 | * a server you host is accessible through a web site 26 | * clients can install the Mesh Agent from this site, 27 | which allows full control of the device from the server's web interface 28 | 29 | There is also the possibility of using 30 | Intel AMT - Active Management Technology - through port 4433. 31 | 32 | --- 33 | 34 | ![interface-pic](https://i.imgur.com/0egkM4J.png) 35 | 36 | # Files and directory structure 37 | 38 | ``` 39 | /home/ 40 | └── ~/ 41 | └── docker/ 42 | └── meshcentral/ 43 | ├── meshcentral/ 44 | ├── .env 45 | └── docker-compose.yml 46 | ``` 47 | 48 | * `meshcentral/` - persistent data, most notable is config.json in data\ 49 | * `.env` - a file containing environment variables for docker compose 50 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers 51 | 52 | You only need to provide the two files.
53 | The directories are created by docker compose on the first run. 54 | 55 | # docker-compose 56 | 57 | The official docker image is hosted [on github.](https://github.com/ylianst/MeshCentral/pkgs/container/meshcentral) 58 | More info [here](https://github.com/Ylianst/MeshCentral/tree/master/docker)
59 | This setup goes more robust way, with a separate container running mongodb. 60 | 61 | `docker-compose.yml` 62 | ```yml 63 | services: 64 | 65 | meshcentral-db: 66 | image: mongo:latest 67 | container_name: meshcentral-db 68 | hostname: meshcentral-db 69 | restart: unless-stopped 70 | env_file: .env 71 | volumes: 72 | - ./meshcentral/mongodb_data:/data/db 73 | 74 | meshcentral: 75 | image: ghcr.io/ylianst/meshcentral:latest 76 | container_name: meshcentral 77 | hostname: meshcentral 78 | restart: unless-stopped 79 | env_file: .env 80 | depends_on: 81 | - meshcentral-db 82 | volumes: 83 | # config.json and other important files live here. A must for data persistence 84 | - ./meshcentral/data:/opt/meshcentral/meshcentral-data 85 | # where file uploads for users live 86 | - ./meshcentral/user_files:/opt/meshcentral/meshcentral-files 87 | # location for the meshcentral-backups - this should be mounted to an external storage 88 | - ./meshcentral/backup:/opt/meshcentral/meshcentral-backup 89 | # location for site customization files 90 | - ./meshcentral/web:/opt/meshcentral/meshcentral-web 91 | 92 | networks: 93 | default: 94 | name: $DOCKER_MY_NETWORK 95 | external: true 96 | ``` 97 | 98 | `.env` 99 | ```bash 100 | # GENERAL 101 | MY_DOMAIN=example.com 102 | DOCKER_MY_NETWORK=caddy_net 103 | TZ=Europe/Bratislava 104 | 105 | # MESHCENTRAL 106 | NODE_ENV=production 107 | 108 | # initial mongodb-variables 109 | MONGO_INITDB_ROOT_USERNAME=mongodbadmin 110 | MONGO_INITDB_ROOT_PASSWORD=mongodbpasswd 111 | 112 | # initial meshcentral-variables 113 | # the following options are only used if no config.json exists in the data-folder 114 | 115 | # your hostname 116 | HOSTNAME=mesh.example.com 117 | USE_MONGODB=true 118 | # set to your reverse proxy IP if you want to put meshcentral behind a reverse proxy 119 | REVERSE_PROXY=example.com 120 | REVERSE_PROXY_TLS_PORT=443 121 | # set to true if you wish to enable iframe support 122 | IFRAME=false 123 | # set to false if you want 
to disable self-service creation of new accounts besides the first (admin) 124 | ALLOW_NEW_ACCOUNTS=true 125 | # set to true to enable WebRTC - per documentation it is not officially released with meshcentral and currently experimental. Use with caution 126 | WEBRTC=false 127 | # set to true to allow plugins 128 | ALLOWPLUGINS=false 129 | # set to true to allow session recording 130 | LOCALSESSIONRECORDING=false 131 | # set to enable or disable minification of json, reduces traffic 132 | MINIFY=true 133 | ``` 134 | 135 | A bit of an issue is that the official project expects to find the database 136 | at the hostname `mongodb`. It's hardcoded in the 137 | [startup.sh](https://github.com/Ylianst/MeshCentral/blob/master/docker/startup.sh) 138 | which on first run generates `config.json`.
139 | This is not ideal, as one will likely run several containers, and an 140 | undescriptive container name or hostname is annoying.
141 | 142 | To deal with this, **run it the first time for a few minutes, then down it, edit** 143 | `./meshcentral/data/config.json` and change the mongoDb line to look like this: 144 | 145 | "settings": { 146 | "mongoDb": "mongodb://mongodbadmin:mongodbpasswd@meshcentral-db:27017", 147 | }, 148 | 149 | If the meshcentral container shows: *ERROR: Unable to parse /opt/meshcentral/meshcentral-data/config.json*
150 | you need to down it, delete the `meshcentral` directory with the persistent data, 151 | and up it again, this time letting it run longer before downing it and editing the database path. 152 | 153 | # Reverse proxy 154 | 155 | Caddy v2 is used, details 156 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
157 | 158 | `Caddyfile` 159 | ``` 160 | mesh.{$MY_DOMAIN} { 161 | reverse_proxy meshcentral:443 { 162 | transport http { 163 | tls 164 | tls_insecure_skip_verify 165 | } 166 | } 167 | } 168 | ``` 169 | 170 | --- 171 | 172 | ![interface-pic](https://i.imgur.com/CK6pRyq.png) 173 | 174 | # The usage on clients 175 | 176 | # Improved safety 177 | 178 | 179 | # Troubleshooting 180 | 181 | # Running without separate database 182 | 183 | `docker-compose.yml` 184 | ```yml 185 | services: 186 | 187 | meshcentral: 188 | image: ghcr.io/ylianst/meshcentral:latest 189 | container_name: meshcentral 190 | hostname: meshcentral 191 | restart: unless-stopped 192 | env_file: .env 193 | volumes: 194 | # config.json and other important files live here. A must for data persistence 195 | - ./meshcentral/data:/opt/meshcentral/meshcentral-data 196 | # where file uploads for users live 197 | - ./meshcentral/user_files:/opt/meshcentral/meshcentral-files 198 | # location for the meshcentral-backups - this should be mounted to an external storage 199 | - ./meshcentral/backup:/opt/meshcentral/meshcentral-backup 200 | # location for site customization files 201 | - ./meshcentral/web:/opt/meshcentral/meshcentral-web 202 | 203 | networks: 204 | default: 205 | name: $DOCKER_MY_NETWORK 206 | external: true 207 | ``` 208 | 209 | `.env` 210 | ```bash 211 | # GENERAL 212 | MY_DOMAIN=example.com 213 | DOCKER_MY_NETWORK=caddy_net 214 | TZ=Europe/Bratislava 215 | 216 | # MESHCENTRAL 217 | NODE_ENV=production 218 | 219 | # initial mongodb-variables 220 | MONGO_INITDB_ROOT_USERNAME=mongodbadmin 221 | MONGO_INITDB_ROOT_PASSWORD=mongodbpasswd 222 | 223 | # initial meshcentral-variables 224 | # the following options are only used if no config.json exists in the data-folder 225 | 226 | # your hostname 227 | HOSTNAME=mesh.example.com 228 | USE_MONGODB=false 229 | # set to your reverse proxy IP if you want to put meshcentral behind a reverse proxy 230 | REVERSE_PROXY=example.com 231 | REVERSE_PROXY_TLS_PORT=443
232 | # set to true if you wish to enable iframe support 233 | IFRAME=false 234 | # set to false if you want to disable self-service creation of new accounts besides the first (admin) 235 | ALLOW_NEW_ACCOUNTS=true 236 | # set to true to enable WebRTC - per documentation it is not officially released with meshcentral and currently experimental. Use with caution 237 | WEBRTC=false 238 | # set to true to allow plugins 239 | ALLOWPLUGINS=false 240 | # set to true to allow session recording 241 | LOCALSESSIONRECORDING=false 242 | # set to enable or disable minification of json, reduces traffic 243 | MINIFY=true 244 | ``` 245 | 246 | 247 | 248 | # Update 249 | 250 | Manual image update: 251 | 252 | - `docker-compose pull`
253 | - `docker-compose up -d`
254 | - `docker image prune` 255 | 256 | # Backup and restore 257 | 258 | #### Backup 259 | 260 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup) 261 | that makes a daily snapshot of the entire directory. 262 | 263 | #### Restore 264 | 265 | * down the meshcentral containers `docker-compose down`
266 | * delete the entire meshcentral directory
267 | * from the backup copy back the meshcentral directory
268 | * start the containers `docker-compose up -d` 269 | 270 | -------------------------------------------------------------------------------- /mikrotik/cli_basics.md: -------------------------------------------------------------------------------- 1 | # reset config 2 | 3 | `/system reset-configuration` 4 | 5 | # client mode range extender 6 | 7 | 8 | 9 | # Dumb switch with wifi AP 10 | 11 | ## Objective 12 | 13 | ## Steps 14 | 15 | -------------------------------------------------------------------------------- /mikrotik/dumb_switch_with_wifi_AP.md: -------------------------------------------------------------------------------- 1 | # Dumb switch with wifi AP 2 | 3 | ## Objective 4 | 5 | * every single port bridged, device acting like a switch 6 | * wifi being bridged to the same network 7 | * automatic IP assigned 8 | * no DHCP, no NAT, nothing, just like a dumb switch with wifi, that's all 9 | 10 | ## Steps 11 | 12 | Clear any config 13 | 14 | * System > Reset Configuration > No Default Configuration 15 | 16 | --- 17 | 18 | **Bridge** 19 | 20 | * Bridge > add bridge1 21 | * Ports > add > All interfaces to bridge1 22 | 23 | *winbox will reconnect* 24 | 25 | --- 26 | 27 | **IP** 28 | 29 | *dynamic* 30 | 31 | * IP > DHCP client > add > interface bridge, rest defaults 32 | 33 | *static* 34 | 35 | * IP > Addresses > add >  36 | * Address - 192.168.88.2/24 37 | * Network - it gets filled out automatically on ok/apply based on the subnet mask in the address  38 | * interface - bridge 39 | 40 | --- 41 | 42 | 2ghz 43 | 44 | * Wireless > open 2ghz interface > Wireless tab > click Advanced Mode > in this tab 45 | * change Mode from station to ap bridge 46 | * change Band to 2Ghz- B/G/N 47 | * Channel Width - stay at 20MHz if there is a lot of wifi around you, go to 40MHz if not 48 | * Frequency - pick a channel \[1-2412 ; 6-2437; 11-2462\] 49 | * SSID - pick a name 50 | * Radio Name - set if you want to distinguish between APs with the same SSID
51 | * Skip DFS channels - all 52 | * Security Profile - just make note of the one set there, should be default 53 | * WPS mode - disable 54 | * Country - pick yours 55 | 56 | ------------- 57 | 58 | 5ghz 59 | 60 | * Wireless > open 5ghz interface > Wireless tab > click Advanced Mode > in this tab 61 | * change Mode from station to ap bridge 62 | * change Band to 5Ghz- N/AC 63 | * Channel Width 20/40/80Mhz XXXX 64 | * Frequency - auto? 65 | * SSID - pick a name 66 | * Radio Name - set if you want to distinguish between APs with the same SSID 67 | * Skip DFS channels - all 68 | * Security Profile - just make note of the one set there, should be default 69 | * WPS mode - disable 70 | * Country - pick yours 71 | 72 | --------------- 73 | 74 | security 75 | 76 | * Wireless > Security Profiles > default 77 | * change Mode to dynamic keys 78 | * check WPA PSK and WPA2 PSK 79 | * set passwords in WPA/WPA2 pre-shared key inputs 80 | 81 | ------------ 82 | -------------------------------------------------------------------------------- /mikrotik/hairpin_nat_reflection.md: -------------------------------------------------------------------------------- 1 | https://forum.mikrotik.com/viewtopic.php?t=179343 2 | -------------------------------------------------------------------------------- /mounting_network_share_linux/readme.md: -------------------------------------------------------------------------------- 1 | # Mounting Network Shares in Linux 2 | 3 | possible ways to mount stuff - fstab, autofs, systemd, docker volumes if it's for docker 4 | 5 | possible types of shares 6 | 7 | * smb/samba/cifs - the most common share, supported on all OSes 8 | * nfs - mostly used between linux machines, a bit better performance 9 | * iSCSI - the share is mounted as a block device as if it was really a disk, 10 | great performance for small files 11 | 12 | More on the setup of these shares is in the 13 | [TrueNAS Scale guide.](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/trueNASscale) 14 
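This page covers the systemd units way; for comparison, the same kind of cifs share as a single classic fstab line - a sketch, with the server address, mountpoint and credentials being placeholders to adjust:

```
# /etc/fstab
//10.0.19.11/Mirror  /mnt/mirror  cifs  rw,username=kopia,password=aaa,uid=1000,gid=1000,file_mode=0644,dir_mode=0755,noauto,x-systemd.automount  0  0
```

The `noauto,x-systemd.automount` options give roughly the same behaviour as a dedicated `.automount` unit - no failed boot on network issues, and the share is mounted only on first access.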
| 15 | # smb/samba/cifs 16 | 17 | [Arch wiki](https://wiki.archlinux.org/title/samba#As_systemd_unit) 18 | on samba systemd mount 19 | 20 | * you will create two files in `/etc/systemd/system` 21 | * one will have the extension `.mount`, the other `.automount` 22 | * the name will be the same for both and it MUST correspond with the planned 23 | mount path, with slashes `/` replaced by dashes `-`.
24 | So if the share should be at `/mnt/mirror` the files are named 25 | `mnt-mirror.mount` and `mnt-mirror.automount` 26 | * copy-paste the content below, edit as you see fit, 27 | changing the description, IP address and path, user and password, ... 28 | * the linux command `id` will show your current user's `uid` and `gid` 29 | * after the changes, execute the command `sudo systemctl enable mnt-mirror.automount` 30 | This will set up mounting that does not fail on boot if there are network issues, 31 | and really mounts the target only on request 32 | 33 | `mnt-mirror.mount` 34 | ```ini 35 | [Unit] 36 | Description=3TB truenas mirror mount 37 | 38 | [Mount] 39 | What=//10.0.19.11/Mirror 40 | Where=/mnt/mirror 41 | Type=cifs 42 | Options=rw,username=kopia,password=aaa,file_mode=0644,dir_mode=0755,uid=1000,gid=1000 43 | 44 | [Install] 45 | WantedBy=multi-user.target 46 | ``` 47 | 48 | `mnt-mirror.automount` 49 | ```ini 50 | [Unit] 51 | Description=3TB truenas mirror mount 52 | 53 | [Automount] 54 | Where=/mnt/mirror 55 | 56 | [Install] 57 | WantedBy=multi-user.target 58 | ``` 59 | 60 | ### Useful commands 61 | 62 | `smbclient -L 10.0.19.11` - lists the shares available at that IP 63 | `systemctl list-units -t mount --all` 64 | -------------------------------------------------------------------------------- /network-knowledge-base/asus.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/network-knowledge-base/asus.png -------------------------------------------------------------------------------- /network-knowledge-base/dhcp.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/network-knowledge-base/dhcp.md --------------------------------------------------------------------------------
/network-knowledge-base/dns.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/network-knowledge-base/dns.md -------------------------------------------------------------------------------- /network-knowledge-base/port_forwarding.md: -------------------------------------------------------------------------------- 1 | # Port Forwarding Guide 2 | 3 | # What is a port 4 | 5 | An IP address identifies specific machine on a network, 6 | **a port** identifies specific **application** running on that machine. 7 | 8 | A port is **a number** between [1 - 65,535] that is assigned by the OS 9 | to any application that wants to communicate over the network. 10 | Can be choosen at random, can be hard set. 11 | 12 | ![pic_port_header](https://i.imgur.com/TM0pOQN.png) 13 | 14 | # How Router's NAT works 15 | 16 | * **Allows outgoing** traffic on any port. 17 | * **Drops incoming** traffic unless it is a response to communication 18 | initialized from the inside. 19 | 20 | This is not really some security effort, the router just literally has no idea 21 | where to send it... 22 | 23 |
24 | More unnecessary details 25 | 26 | NAT is implemented in your router.
27 | It makes your **LAN side** devices able to connect with the 28 | outside world - the **WAN side** - through one public IP. 29 | "The internet" *sees* just one device it communicates with, 30 | at least at this network level. 31 | 32 | ![pic_nat](https://i.imgur.com/Nk7u4MP.png) 33 | 34 | 35 | #### LAN side initialized communication 36 | 37 | * You visit a website, let's say `youtube.com` 38 | * Your browser has some random port assigned by the OS, 39 | this will be the **source port**. 40 | The local IP address of the machine it runs on will be the **source IP** 41 | * The browser/OS asks DNS servers for the IP address of `youtube.com`, 42 | the answer is `142.250.191.78` - the **destination IP** 43 | * Youtube is a website, the standard for https is port `443` - the **destination port.** 44 | * All the required information is there. Destination[ip & port] Source[ip & port]. 45 | * Packets are sent. 46 | * The browser now waits for a response at that random port. 47 | * Since the router is the **default gateway**, that's where the packets arrive. 48 | * The router saves all that info in its state table for a time, could be seconds, 49 | could be days depending on protocol and [state](https://serverfault.com/a/481909). 50 | * The router doing the NAT now replaces the **source IP address** of that one machine 51 | with its own WAN IP address, 52 | it might also change the source port but that is not as important, 53 | and sends it out in the direction of the **destination IP**. 54 | * A response comes back, the router knows it is a response because it's coming from the 55 | IP that it contacted recently and the **destination port** it uses is the same 56 | number that was used as the source port. 57 | * It checks the state table for the **original source IP and source port**, 58 | puts them in, now as the destination, and off the packets go. 59 | * The browser receives the response on its assigned port, from the IP it contacted.
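The first steps above - the OS handing the application a random source port - can be reproduced on any machine; a minimal sketch, using a loopback listener as a stand-in for the remote server (nothing here is router-specific):

```python
import socket

# a stand-in "server" (think youtube.com:443), port 0 = let the OS pick one
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
dst_ip, dst_port = srv.getsockname()

# the "browser" - on connect the OS assigns it a random source port
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect((dst_ip, dst_port))
src_ip, src_port = cli.getsockname()

print(f"source {src_ip}:{src_port} -> destination {dst_ip}:{dst_port}")

cli.close()
srv.close()
```

The source port will be different on almost every run - that randomness is exactly what the router's state table keeps track of.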
60 | 61 | #### WAN side initialized communication 62 | 63 | * Want to connect to a jellyfin server to watch some movies from browser. 64 | * You know the IP address or the url. 65 | You also expect it to run on default port jellyfin uses `8096` 66 | * The browser makes the request. 67 | * The router sees it coming at port `8096`, but where does it send it? 68 | There is nothing in the state table, that would tell it. 69 | * So it drops it, unless there is a port forwarding rule that says 70 | that if something comes to port `8096` send it to this local ip address 71 | and at that port... 72 | 73 | Youtube explanation videos if you want deeper dive: 74 | 75 | * [NAT - Network Address Translation.](https://www.youtube.com/watch?v=RG97rvw1eUo) 76 | * [Public IP vs. Private IP and Port Forwarding](https://www.youtube.com/watch?v=92b-jjBURkw) 77 | 78 | --- 79 | --- 80 | 81 |
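The WAN side case above - a connection attempt that nothing answers - is easy to reproduce locally: when no application listens on a port, the connection simply fails. (A router that drops the packets makes it time out instead of being refused, but for the client the result is the same: no connection.) A sketch, assuming nothing on the machine listens on TCP port 1:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(3)
try:
    # assumption: no service listens on port 1 here
    s.connect(("127.0.0.1", 1))
    result = "open"
except (ConnectionRefusedError, socket.timeout):
    result = "closed"
finally:
    s.close()

print(result)  # closed
```

This is also why all the port checkers further down report "closed" unless something is actually running and answering on the forwarded port.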
82 | 83 | # Double NAT (CGNAT) 84 | 85 | 86 | 87 | **Bad News.**
88 | It is very likely that even when you do everything 100% correctly, 89 | you still won't get your ports open.
90 | The reason being that your machine is behind double NAT. 91 | **Your ISP** - internet service provider - has you behind its own NAT device, 92 | and the WAN side of your router is not really "the internet", but the ISP's LAN side. 93 | 94 | A way to check is to look up your [public IP online](http://icanhazip.com/), 95 | then log in to your router and find the IP address of your WAN interface. 96 | If they are the same, then you are not behind double NAT and port forwarding 97 | will work straight away.
98 | If they differ and some local IP is there, then there is still a chance it will work, 99 | but you won't know till you try. 100 | 101 | But if you are failing to make port forwarding work, it's time to call your ISP 102 | and ask about a public IP and how much it would cost. 103 | It can be a few extra € on your monthly bill. 104 | 105 | # Port forwarding 106 | 107 | Finally. Right?! 108 | 109 | You want to host something, let's say a Minecraft server.
110 | You set it all up, you give your public IP address to others and they try to connect, 111 | but your router blocks them. It's a connection initialized from the outside. 112 | 113 | So you need to tell your router/firewall to let through the traffic that comes to 114 | Minecraft's default port - `25565` - and where to send it on your LAN, 115 | to the local IP of your Minecraft server. 116 | 117 | ![diagram_port_forw_minecraft](https://i.imgur.com/PNR32Mz.png) 118 | 119 | #### Examples of a port forward rule 120 | 121 | * [Asus](https://i.imgur.com/brs9Mr6.png) 122 | * [TPlink](https://i.imgur.com/FNS2xCj.png) 123 | * [ubiquiti](https://i.imgur.com/D04HVJc.png) 124 | 125 | How to actually create that port forward rule depends on the router/firewall model. 126 | 127 | Generally, what to expect: 128 | 129 | * It would be called port forwarding, or a virtual server, or be under a NAT section. 130 | * **The port** on which to expect traffic is obviously core information, 131 | sometimes it is called a service port or an external port. 132 | * **IP address** is required, so that the router knows where on the LAN side 133 | to send traffic that comes to that external port. 134 | * The setup might offer an option for an **internal port**, 135 | which can often be left empty, or the same port number is put there.
136 | It is there to give you the option to run stuff on your LAN on a different 137 | port than the one you open to the world. Like your webserver is on `80`, 138 | but you open to the world port `12250` or whatever. 139 | * **The protocol** - TCP or UDP, if in doubt **select both / all**
140 | You don't need to fear you are opening too much, if there is no service 141 | running on that port for that protocol it is the same as a closed port. 142 | 143 | # Testing if port forwarding works 144 | 145 | ![port_check_web](https://i.imgur.com/d5fNnCX.png) 146 | 147 | First you need to understand that unless there is some application running 148 | that answers on that port, all tests will come back as - closed port. 149 | 150 | For testing we can use websites that will test if a port is open at a specified public IP. 151 | 152 | * [portchecker.co](https://portchecker.co/) 153 | * [yougetsignal.com](https://www.yougetsignal.com/tools/open-ports/) 154 | 155 | ## Windows 156 | 157 | * [Find the local ip address](https://www.youtube.com/results?search_query=how+to+find+ip+address+windows) 158 | of the machine you are planning to use for the test. 159 | * Follow the instructions in the Port forwarding section of this guide 160 | and forward port `666` to the IP of that machine. 161 | * Download [Port Listener](https://www.rjlsoftware.com/software/utility/portlistener/). 162 | * Run Port Listener, set port `666`, press Start. 163 | * If a Windows firewall notification pops up with a question, answer yes. 164 | * Go to [portchecker.co](https://portchecker.co/), set the port to 666 and press Check. 165 | 166 | ![port_listener](https://i.imgur.com/A9fxIbi.png) 167 | 168 | In Windows it is also pretty useful to know that you can go 169 | `Task Manager` > `Performance` > `Open Resource Monitor` > `Network` Tab 170 | 171 | There, unroll `Listening Ports` 172 | and you should find `listener.exe` with port `666`, and the firewall status 173 | should be *allowed, not restricted* 174 | 175 | ![windows_port_check](https://i.imgur.com/putdef0.png) 176 | 177 | ## Linux 178 | 179 | * find your ip address - `ip r` 180 | * Follow the instructions in the Port forwarding section of this guide 181 | and forward port `666` to the IP of that machine. 182 | * try running netcat - `nc `
183 | * if it is not installed, get it for your distro, 184 | for arch it's `openbsd-netcat`, for debian it's `netcat-openbsd`, 185 | for fedora it's `netcat` 186 | * execute `sudo nc -vv -l -p 666` 187 | * Go to [portchecker.co](https://portchecker.co/), set the port to 666 and press Check. 188 | 189 | 190 | ## UDP port test 191 | 192 | UDP is kinda special cuz it's session-less, so you need to actually communicate 193 | through it to test it. 194 | 195 | * on a linux machine on LAN - `sudo nc -vv -u -l -p 666` 196 | * on a linux machine somewhere out there - `nc -u the_public_ip_goes_here 666` 197 | * write something and it should appear on the other side 198 | -------------------------------------------------------------------------------- /network-knowledge-base/readme.md: -------------------------------------------------------------------------------- 1 | # network-knowledge-base 2 | 3 | 4 | * mac address 5 | * ip address 6 | * mask, mask notation 7 | * gateway 8 | * port 9 | * packet and frame 10 | * broadcast 11 | * ARP 12 | * NAT 13 | * bridge, bridging 14 | * switch, router, firewall 15 | * Bandwidth 16 | * DNS 17 | * DHCP 18 | * ICMP ping 19 | * TCP / UDP 20 | * HTTP 21 | * SSH 22 | * VPN 23 | * [OSI/ISO model idea](https://www.reddit.com/r/networking/comments/2bazcl/i_dont_honestly_understand_the_osi_model_despite/cj45hih/) 24 | 25 | [digital ocean](https://www.digitalocean.com/community/tutorials/an-introduction-to-networking-terminology-interfaces-and-protocols)
26 | [100 terms](https://www.makeuseof.com/networking-terms-glossary/) 27 | 28 | ![logo](https://i.imgur.com/ATNGPaJ.png) 29 | 30 | 31 | # Network-hostname discovery / Zero-configuration networking 32 | 33 | http://jaredrobinson.com/blog/upnp-ssdp-mdns-llmnr-etc-on-the-home-network/ 34 | 35 | mDNS, NetBIOS, Samba, UPnP, Avahi, LLMNR, WINS,... 36 | 37 | What a mess 38 | 39 | Seems LLMNR is reliable and widely available everywhere for hostname resolution. 40 | Even when the wiki says it's being phased out by Microsoft. 41 | 42 | Tested with wireshark. Pinging a nonexistent hostname means an LLMNR 43 | broadcast is sent to every device on the network, asking who that hostname is. 44 | Works the same when pinging from Arch Linux or from Win 8.1 45 | 46 | [TCP vs UDP](https://youtu.be/jE_FcgpQ7Co) 47 | 48 | # useful links 49 | 50 | * https://dnsdumpster.com/
51 | can check subdomains registered, ideal would be wildcard certificate 52 | 53 | 54 | OSI Model 55 | 56 | * https://www.youtube.com/watch?v=2iFFRqzX3yE 57 | -------------------------------------------------------------------------------- /network-knowledge-base/tplink.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DoTheEvo/selfhosted-apps-docker/a3604f8622336ae21f29af828d76c91fbb30a9f4/network-knowledge-base/tplink.png -------------------------------------------------------------------------------- /nextcloud/nginx.conf: -------------------------------------------------------------------------------- 1 | worker_processes auto; 2 | 3 | error_log /var/log/nginx/error.log warn; 4 | pid /var/run/nginx.pid; 5 | 6 | 7 | events { 8 | worker_connections 1024; 9 | } 10 | 11 | 12 | http { 13 | include /etc/nginx/mime.types; 14 | default_type application/octet-stream; 15 | 16 | log_format main '$remote_addr - $remote_user [$time_local] "$request" ' 17 | '$status $body_bytes_sent "$http_referer" ' 18 | '"$http_user_agent" "$http_x_forwarded_for"'; 19 | 20 | access_log /var/log/nginx/access.log main; 21 | 22 | sendfile on; 23 | #tcp_nopush on; 24 | 25 | # Prevent nginx HTTP Server Detection 26 | server_tokens off; 27 | 28 | keepalive_timeout 65; 29 | 30 | #gzip on; 31 | 32 | upstream php-handler { 33 | server nextcloud-app:9000; 34 | } 35 | 36 | server { 37 | listen 80; 38 | 39 | # HSTS settings 40 | # WARNING: Only add the preload option once you read about 41 | # the consequences in https://hstspreload.org/. This option 42 | # will add the domain to a hardcoded list that is shipped 43 | # in all major browsers and getting removed from this list 44 | # could take several months. 
45 | #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always; 46 | 47 | # set max upload size 48 | client_max_body_size 512M; 49 | fastcgi_buffers 64 4K; 50 | 51 | # Enable gzip but do not remove ETag headers 52 | gzip on; 53 | gzip_vary on; 54 | gzip_comp_level 4; 55 | gzip_min_length 256; 56 | gzip_proxied expired no-cache no-store private no_last_modified no_etag auth; 57 | gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy; 58 | 59 | # Pagespeed is not supported by Nextcloud, so if your server is built 60 | # with the `ngx_pagespeed` module, uncomment this line to disable it. 61 | #pagespeed off; 62 | 63 | # HTTP response headers borrowed from Nextcloud `.htaccess` 64 | add_header Referrer-Policy "no-referrer" always; 65 | add_header X-Content-Type-Options "nosniff" always; 66 | add_header X-Download-Options "noopen" always; 67 | add_header X-Frame-Options "SAMEORIGIN" always; 68 | add_header X-Permitted-Cross-Domain-Policies "none" always; 69 | add_header X-Robots-Tag "none" always; 70 | add_header X-XSS-Protection "1; mode=block" always; 71 | 72 | # Remove X-Powered-By, which is an information leak 73 | fastcgi_hide_header X-Powered-By; 74 | 75 | # Path to the root of your installation 76 | root /var/www/html; 77 | 78 | # Specify how to handle directories -- specifying `/index.php$request_uri` 79 | # here as the fallback means that Nginx always exhibits the desired behaviour 80 | # when a client requests a path that corresponds to a directory that exists 81 | # on the server. 
In particular, if that directory contains an index.php file, 82 | # that file is correctly served; if it doesn't, then the request is passed to 83 | # the front-end controller. This consistent behaviour means that we don't need 84 | # to specify custom rules for certain paths (e.g. images and other assets, 85 | # `/updater`, `/ocm-provider`, `/ocs-provider`), and thus 86 | # `try_files $uri $uri/ /index.php$request_uri` 87 | # always provides the desired behaviour. 88 | index index.php index.html /index.php$request_uri; 89 | 90 | # Rule borrowed from `.htaccess` to handle Microsoft DAV clients 91 | location = / { 92 | if ( $http_user_agent ~ ^DavClnt ) { 93 | return 302 /remote.php/webdav/$is_args$args; 94 | } 95 | } 96 | 97 | location = /robots.txt { 98 | allow all; 99 | log_not_found off; 100 | access_log off; 101 | } 102 | 103 | # Make a regex exception for `/.well-known` so that clients can still 104 | # access it despite the existence of the regex rule 105 | # `location ~ /(\.|autotest|...)` which would otherwise handle requests 106 | # for `/.well-known`. 107 | location ^~ /.well-known { 108 | # The rules in this block are an adaptation of the rules 109 | # in `.htaccess` that concern `/.well-known`. 110 | 111 | location = /.well-known/carddav { return 301 /remote.php/dav/; } 112 | location = /.well-known/caldav { return 301 /remote.php/dav/; } 113 | 114 | location /.well-known/acme-challenge { try_files $uri $uri/ =404; } 115 | location /.well-known/pki-validation { try_files $uri $uri/ =404; } 116 | 117 | # Let Nextcloud's API for `/.well-known` URIs handle all other 118 | # requests by passing them to the front-end controller. 
119 | return 301 /index.php$request_uri; 120 | } 121 | 122 | # Rules borrowed from `.htaccess` to hide certain paths from clients 123 | location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/) { return 404; } 124 | location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) { return 404; } 125 | 126 | # Ensure this block, which passes PHP files to the PHP process, is above the blocks 127 | # which handle static assets (as seen below). If this block is not declared first, 128 | # then Nginx will encounter an infinite rewriting loop when it prepends `/index.php` 129 | # to the URI, resulting in a HTTP 500 error response. 130 | location ~ \.php(?:$|/) { 131 | # Required for legacy support 132 | rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+|.+\/richdocumentscode\/proxy) /index.php$request_uri; 133 | 134 | fastcgi_split_path_info ^(.+?\.php)(/.*)$; 135 | set $path_info $fastcgi_path_info; 136 | 137 | try_files $fastcgi_script_name =404; 138 | 139 | include fastcgi_params; 140 | fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; 141 | fastcgi_param PATH_INFO $path_info; 142 | #fastcgi_param HTTPS on; 143 | 144 | fastcgi_param modHeadersAvailable true; # Avoid sending the security headers twice 145 | fastcgi_param front_controller_active true; # Enable pretty urls 146 | fastcgi_pass php-handler; 147 | 148 | fastcgi_intercept_errors on; 149 | fastcgi_request_buffering off; 150 | } 151 | 152 | location ~ \.(?:css|js|svg|gif)$ { 153 | try_files $uri /index.php$request_uri; 154 | expires 6M; # Cache-Control policy borrowed from `.htaccess` 155 | access_log off; # Optional: Don't log access to assets 156 | } 157 | 158 | location ~ \.woff2?$ { 159 | try_files $uri /index.php$request_uri; 160 | expires 7d; # Cache-Control policy borrowed from `.htaccess` 161 | access_log off; # Optional: Don't log access to assets 162 | } 163 | 164 | # Rule borrowed from `.htaccess` 165 | location /remote { 
166 | return 301 /remote.php$request_uri; 167 | } 168 | 169 | location / { 170 | try_files $uri $uri/ /index.php$request_uri; 171 | } 172 | } 173 | } 174 | -------------------------------------------------------------------------------- /phpipam/readme.md: -------------------------------------------------------------------------------- 1 | # phpIPAM in docker 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/GrWPooR.png) 6 | 7 | # WORK IN PROGRESS 8 | # WORK IN PROGRESS 9 | # WORK IN PROGRESS 10 | 11 | # Purpose 12 | 13 | IP address managment, LAN information and documentation tool. 14 | 15 | 16 | * [Official site](https://phpipam.net/) 17 | * [Github](https://github.com/phpipam/phpipam) 18 | * [DockerHub image used](https://hub.docker.com/r/phpipam/phpipam-www) 19 | 20 | IPAM is a universal term that stands for 21 | [IP Address Management](https://en.wikipedia.org/wiki/IP_address_management).
22 | phpIPAM is an open source tool fulfilling this purpose. 23 | Coded in PHP, using MariaDB for the database. 24 | 25 | My exposure to it is limited; 26 | so far the use is just tracking of used IPs. 27 | 28 | It can also be used to keep an inventory of IT hardware, or for organization of server racks. 29 | 30 | # Files and directory structure 31 | 32 | ``` 33 | /home/ 34 | └── ~/ 35 | └── docker/ 36 | └── phpipam/ 37 | ├── phpipam-mariadb-data/ 38 | ├── .env 39 | └── docker-compose.yml 40 | ``` 41 | 42 | * `phpipam-mariadb-data/` - a directory where phpipam will store its database data 43 | * `.env` - a file containing environment variables for docker compose 44 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers 45 | 46 | You only need to provide the files.
47 | The directory is created by docker compose on the first run. 48 | 49 | # docker-compose 50 | 51 | `docker-compose.yml` 52 | ```yml 53 | version: '3' 54 | services: 55 | 56 | phpipam-web: 57 | image: phpipam/phpipam-www:latest 58 | container_name: phpipam-web 59 | hostname: phpipam-web 60 | # ports: 61 | # - "80:80" 62 | restart: unless-stopped 63 | env_file: .env 64 | depends_on: 65 | - phpipam-mariadb 66 | 67 | phpipam-mariadb: 68 | image: mariadb:latest 69 | container_name: phpipam-mariadb 70 | hostname: phpipam-mariadb 71 | restart: unless-stopped 72 | env_file: .env 73 | volumes: 74 | - ./phpipam-mariadb-data:/var/lib/mysql 75 | 76 | networks: 77 | default: 78 | external: 79 | name: $DOCKER_MY_NETWORK 80 | ``` 81 | 82 | `.env` 83 | ```bash 84 | # GENERAL 85 | DOCKER_MY_NETWORK=caddy_net 86 | TZ=Europe/Bratislava 87 | 88 | IPAM_DATABASE_HOST=phpipam-mariadb 89 | IPAM_DATABASE_PASS=my_secret_phpipam_pass 90 | IPAM_DATABASE_WEBHOST=% 91 | MYSQL_ROOT_PASSWORD=my_secret_mysql_root_pass 92 | ``` 93 | 94 | # Scheduling and cron issues 95 | 96 | The default docker-compose deployment uses cron container.
97 | The problem is that it does not work, so Ofelia is used instead.
98 | [Here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/ofelia) 99 | is a guide on how to set it up. 100 | 101 | Below is Ofelia's config file for discovery and ping checks of live hosts. 102 | 103 | `config.ini` 104 | ```ini 105 | [job-exec "phpipam ping"] 106 | schedule = @every 10m 107 | container = phpipam-web 108 | command = /usr/bin/php /phpipam/functions/scripts/pingCheck.php 109 | 110 | [job-exec "phpipam discovery"] 111 | schedule = @every 25m 112 | container = phpipam-web 113 | command = /usr/bin/php /phpipam/functions/scripts/discoveryCheck.php 114 | ``` 115 | 116 | # Reverse proxy 117 | 118 | Caddy v2 is used, details 119 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
120 | 121 | `Caddyfile` 122 | ``` 123 | ipam.{$MY_DOMAIN} { 124 | reverse_proxy phpipam-web:80 125 | } 126 | ``` 127 | 128 | # First run 129 | 130 | ![logo](https://i.imgur.com/W7YhwqK.jpg) 131 | 132 | 133 | * New phpipam installation 134 | * Automatic database installation 135 | * MySQL username: root 136 | * MySQL password: my_secret_mysql_root_pass 137 | 138 | # Update 139 | 140 | [Watchtower](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/watchtower) 141 | updates the image automatically. 142 | 143 | Manual image update: 144 | 145 | - `docker-compose pull`
146 | - `docker-compose up -d`
147 | - `docker image prune` 148 | 149 | # Backup and restore 150 | 151 | #### Backup 152 | 153 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup) 154 | that makes daily snapshot of the entire directory. 155 | 156 | #### Restore 157 | 158 | * down the homer container `docker-compose down`
159 | * delete the entire homer directory
160 | * from the backup copy back the homer directory
161 | * start the container `docker-compose up -d` 162 | -------------------------------------------------------------------------------- /port_forwarding_guide/readme.md: -------------------------------------------------------------------------------- 1 | # Port Forwarding 2 | 3 | https://www.reddit.com/r/selfhosted/comments/17tlvs7/i_suppose_im_too_stupid_for_port_forwarding/ 4 | 5 | ###### guide-by-example 6 | 7 | You want to selfhost stuff.
8 | You know little and want to start somewhere, FAST! 9 | 10 | # Requirements 11 | 12 | * A **spare PC** that will be the server.
13 | Can be a **virtual machine**... VirtualBox, Hyper-V. 14 | * **Google**.
15 | If the guide says do X, and the steps seem insufficient, 16 | you google that shit and add the word **youtube**. 17 | 18 | # Install a linux on the server 19 | 20 | ![endeavouros_logo](https://i.imgur.com/DSMmaj8.png) 21 | 22 | [Some video.](https://www.youtube.com/watch?v=SyBuNZxzy_Y) 23 | 24 | * **download linux iso**. For noobs I picked [EndeavourOS \(2GB\)](https://github.com/endeavouros-team/ISO/releases/download/1-EndeavourOS-ISO-releases-archive/EndeavourOS_Cassini_Nova-03-2023_R1.iso) 25 | * why that linux and not xxx? Under the hood it's Arch Linux. 26 | * **make bootable usb** from the iso, I recommend using [ventoy](https://www.ventoy.net/en/doc_start.html) 27 | * download; run; select usb; click install; exit; copy the iso on to it 28 | * **boot from the usb**, on newer machines you may need to disable secure boot in bios 29 | * **click through the installation** 30 | * pick the online installer when offered 31 | * during install, there can be a step called `Desktop` - pick `No Desktop`
32 | or whatever, does not really matter 33 | * when picking disk layout choose wipe everything 34 | * username, let's say you pick `noob` 35 | * done 36 | 37 | # Basic setup of the linux server 38 | 39 | ![ssh](https://i.imgur.com/ElFrBog.png) 40 | 41 | 42 | **SSH** - a tiny application that allows you to execute commands 43 | from your comfy windows PC on the damn server 44 | 45 | * log in to the server and be in terminal 46 | * ssh is installed by default, but disabled 47 | * to check status - `systemctl status sshd` 48 | * to **enable it** - `sudo systemctl enable --now sshd` 49 | * `ip a` or `ip r` - show [somewhere in there](https://www.cyberciti.biz/faq/linux-ip-command-examples-usage-syntax/#3) 50 | what IP address the server got assigned
51 | let's say you got `192.168.1.8`, 52 | nope I am not explaining IP addresses 53 | * done 54 | 55 | *arrow up key in terminal will cycle through old commands in history* 56 | 57 | # Remote connect to the server 58 | 59 | ![mobasterm_logo](https://i.imgur.com/aBL85Tr.png) 60 | 61 | * **install** [mobaXterm](https://mobaxterm.mobatek.net/) on your windows machine 62 | * use it to **connect** to the server using its ip address and username 63 | * [have a pic](https://i.imgur.com/lhRGt1p.png)
64 | * done 65 | 66 | # Install docker 67 | 68 | ![docker_logo](https://i.imgur.com/6SS5lFj.png) 69 | 70 | **Docker** - a thing that makes hosting super easy, people prepared *recipes*, 71 | you copy paste them, maybe edit a bit, run them 72 | 73 | * **install docker-compose** - `sudo pacman -S docker-compose` 74 | * **enable docker service** - `sudo systemctl enable --now docker` 75 | * add your user to the docker group so you don't need to sudo all the time
76 | `sudo gpasswd -a noob docker` 77 | * log out, log back in 78 | * done 79 | 80 | # Using docker 81 | 82 | Well, it's time to learn how to create and edit files and copy paste shit 83 | in to them, IN LINUX!
84 | Honestly, it could be annoying as fuck at first, but mobaXterm should make it easier 85 | with the right mouse click paste.
86 | Nano editor is relatively simple and everywhere so that will be used. 87 | 88 | * be in your home directory, the command `cd` will always get you there 89 | * create directory `mkdir docker` 90 | * go in to it `cd docker` 91 | * create directory `mkdir nginx` 92 | * go in to it `cd nginx` 93 | * Oh look at you being all hacker in terminal, following simple directions 94 | * create empty docker-compose.yml file `nano docker-compose.yml` 95 | * paste in to it this *recipe*, spacing matters 96 | ``` 97 | services: 98 | 99 | nginx: 100 | image: nginx:latest 101 | container_name: nginx 102 | hostname: nginx 103 | ports: 104 | - "80:80" 105 | ``` 106 | * save using `ctrl+s`; exit `ctrl+x` 107 | * run command `sudo docker compose up -d`
108 | it will say the container started 109 | * on your windows machine go to your browser
110 | in the address bar put the ip of your server `192.168.1.8`, bam
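If the welcome page does not show up, a quick sanity check from the server terminal tells you whether the container or the network is at fault (`curl` is an assumption here, install it with `sudo pacman -S curl` if it's missing):

```shell
# the nginx container should be listed with STATUS "Up ..."
docker ps

# fetch the welcome page locally, bypassing the network entirely
curl -s http://localhost | head -n 5
```

If curl prints the welcome html but the browser shows nothing, the problem is between your windows PC and the server, not docker.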
111 | 112 | ![nginx_welcome](https://i.imgur.com/Iv0B6bN.png) 113 | 114 | # understanding what you just did 115 | 116 | * on the linux server a docker container is running, it's a webserver and it is 117 | accessible for others.
118 | Most of selfhosted stuff is just a webserver with some database. 119 | * if this part is done that means that shit like hosting your own netflix(jellyfin), 120 | or google drive/calendar/photos(nextcloud), or own password manager(vaultwarden) 121 | or own minecraft server(minecraft server) is just one `docker-compose.yml` away. 122 | * you could almost abandon terminal at this point, just start googling portainer 123 | and you can be doing this shit through a webpage. I don't use it, but I 124 | heard it got good. 125 | 126 | # understanding what you did not get done 127 | 128 | * this shit is on your own local network, not accessible from the outside. 129 | Can't call grandma and tell her to write `192.168.1.8` in to her browser 130 | to see your awesome nginx welcome running. 131 | She tells you what a dumb fuck you are, you do not have a public IP and ports 132 | forwarded.
133 | To get that working is a bit challenging, probably deserves its own page, 134 | not really a speedrun, but thorough steps, as shit gets sideways fast and people 135 | can dick around for hours trying wrong shit. 136 | * everything here is just basic setup that breaks easily, 137 | the server got a dynamic IP, turn it off for a weekend and it might get a different ip 138 | assigned next time it starts. The container is not set to start on boot,... 139 | * you don't understand how this shit works, fixing not working stuff be hard, 140 | but now you can start to consume all the guides and tutorials on 141 | docker compose and try stuff... 142 | 143 | ## Links 144 | 145 | * https://www.reddit.com/r/HomeNetworking/comments/i7ijiz/a_guide_to_port_forwarding/ 146 | 147 | 148 | -------------------------------------------------------------------------------- /portainer/readme.md: -------------------------------------------------------------------------------- 1 | # Portainer in docker 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/QxnuB1g.png) 6 | 7 | # Purpose 8 | 9 | Web GUI for overview and management of a docker environment. 10 | 11 | * [Official site](https://www.portainer.io) 12 | * [Github](https://github.com/portainer/portainer) 13 | * [DockerHub image used](https://hub.docker.com/r/portainer/portainer-ce/) 14 | 15 | Lightweight, allows you to easily manage docker containers, 16 | images, networks, volumes,... 17 | 18 | I don't really use it as I love [ctop](https://github.com/bcicen/ctop), 19 | but I keep an eye on it and recommend it to noobs.
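The ctop mentioned above does not even need installing, it can be run as a throwaway container; the image path below is the one from ctop's readme, treat it as an assumption to verify:

```shell
# top-like live overview of running containers, press q to quit;
# access to the docker socket is all it needs
docker run --rm -ti \
  --name ctop \
  -v /var/run/docker.sock:/var/run/docker.sock \
  quay.io/vektorlab/ctop:latest
```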
20 | 21 | # Files and directory structure 22 | 23 | ``` 24 | /home/ 25 | └── ~/ 26 | └── docker/ 27 | └── portainer/ 28 | ├── portainer_data/ 29 | ├── .env 30 | └── docker-compose.yml 31 | ``` 32 | 33 | * `portainer_data/` - a directory where portainer stores its persistent data 34 | * `.env` - a file containing environment variables for docker compose 35 | * `docker-compose.yml` - a docker compose file, telling docker 36 | how to run the containers 37 | 38 | You only need to provide the files.
39 | The directory is created by docker compose on the first run. 40 | 41 | # docker-compose 42 | 43 | `docker-compose.yml` 44 | ```yml 45 | services: 46 | portainer: 47 | image: portainer/portainer-ce 48 | container_name: portainer 49 | hostname: portainer 50 | command: -H unix:///var/run/docker.sock 51 | restart: unless-stopped 52 | env_file: .env 53 | volumes: 54 | - /var/run/docker.sock:/var/run/docker.sock 55 | - ./portainer_data:/data 56 | expose: 57 | - "9443" 58 | 59 | networks: 60 | default: 61 | name: $DOCKER_MY_NETWORK 62 | external: true 63 | ``` 64 | 65 | `.env` 66 | ```bash 67 | # GENERAL 68 | DOCKER_MY_NETWORK=caddy_net 69 | TZ=Europe/Bratislava 70 | ``` 71 | 72 | # reverse proxy 73 | 74 | Caddy v2 is used, details 75 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
76 | 77 | `Caddyfile` 78 | ``` 79 | port.example.com { 80 | reverse_proxy portainer:9443 { 81 | transport http { 82 | tls 83 | tls_insecure_skip_verify 84 | } 85 | } 86 | } 87 | ``` 88 | 89 | # Update 90 | 91 | Manual image update: 92 | 93 | - `docker-compose pull`
94 | - `docker-compose up -d`
95 | - `docker image prune` 96 | 97 | # Backup and restore 98 | 99 | #### Backup 100 | 101 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup) 102 | that makes daily snapshot of the entire directory. 103 | 104 | #### Restore 105 | 106 | * down the portainer container `docker-compose down`
107 | * delete the entire portainer directory
108 | * from the backup copy back the portainer directory
109 | * start the container `docker-compose up -d` 110 | -------------------------------------------------------------------------------- /qbittorrent/readme.md: -------------------------------------------------------------------------------- 1 | # qBittorrent in docker 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/lL6Jucn.png) 6 | 7 | # Purpose & Overview 8 | 9 | WORK IN PROGRESS
10 | WORK IN PROGRESS
11 | WORK IN PROGRESS
12 | 13 | Torrents downloader. 14 | 15 | * [Official site](https://www.qbittorrent.org/) 16 | * [Github](https://github.com/qbittorrent/qBittorrent) 17 | * [DockerHub](https://hub.docker.com/r/linuxserver/qbittorrent) 18 | 19 | qBittorrent is a cross-platform free and open-source BitTorrent client.
20 | The interface is a clone of the popular uTorrent.
21 | A major feature is the built-in torrent search on various trackers. 22 | 23 | Written mostly in C++, using libtorrent under the hood. 24 | 25 | # Files and directory structure 26 | 27 | ``` 28 | /mnt/ 29 | └── smalldisk/ 30 | └── torrents/ 31 | /home/ 32 | └── ~/ 33 | └── docker/ 34 | └── qbittorrent/ 35 | ├── qbittorrent-config/ 36 | ├── .env 37 | └── docker-compose.yml 38 | ``` 39 | 40 | * `/mnt/smalldisk/...` - a mounted storage share 41 | * `qbittorrent-config/` - configuration 42 | * `.env` - a file containing environment variables for docker compose 43 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers 44 | 45 | You only need to provide the files.
46 | The directories are created by docker compose on the first run. 47 | 48 | # docker-compose 49 | 50 | Dockerhub linuxserver/qbittorrent 51 | [example compose.](https://hub.docker.com/r/linuxserver/qbittorrent) 52 | 53 | `docker-compose.yml` 54 | ```yml 55 | services: 56 | 57 | qbittorrent: 58 | image: lscr.io/linuxserver/qbittorrent 59 | container_name: qbittorrent 60 | hostname: qbittorrent 61 | restart: unless-stopped 62 | env_file: .env 63 | volumes: 64 | - ./qbittorrent-config:/config 65 | - /mnt/smalldisk/torrents:/downloads 66 | expose: 67 | - 8080 68 | ports: 69 | - 6881:6881 70 | - 6881:6881/udp 71 | - 8080:8080 72 | 73 | networks: 74 | default: 75 | name: $DOCKER_MY_NETWORK 76 | external: true 77 | ``` 78 | 79 | `.env` 80 | ```bash 81 | # GENERAL 82 | MY_DOMAIN=example.com 83 | DOCKER_MY_NETWORK=caddy_net 84 | TZ=Europe/Bratislava 85 | 86 | WEBUI_PORT=8080 87 | PUID=1000 88 | PGID=1000 89 | ``` 90 | 91 | **All containers must be on the same network**.
92 | Which is named in the `.env` file.
93 | If one does not exist yet: `docker network create caddy_net` 94 | 95 | # Reverse proxy 96 | 97 | Caddy is used, details 98 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
99 | 100 | `Caddyfile` 101 | ``` 102 | q.{$MY_DOMAIN} { 103 | reverse_proxy qbittorrent:8080 104 | } 105 | ``` 106 | 107 | # First run 108 | 109 | Default login: `admin` // `adminadmin` 110 | 111 | --- 112 | 113 | ![interface-pic](https://i.imgur.com/cN1GUZw.png) 114 | 115 | 116 | # Specifics of my setup 117 | 118 | * no long term use yet 120 | * media files are stored and shared on a truenas scale VM 121 | and mounted to the docker host using systemd mounts, 122 | instead of fstab or autofs. 123 | 124 | `/etc/systemd/system/mnt-bigdisk.mount` 125 | ```ini 126 | [Unit] 127 | Description=12TB truenas mount 128 | 129 | [Mount] 130 | What=//10.0.19.19/Dataset-01 131 | Where=/mnt/bigdisk 132 | Type=cifs 133 | Options=ro,username=ja,password=qq,file_mode=0700,dir_mode=0700,uid=1000 134 | DirectoryMode=0700 135 | 136 | [Install] 137 | WantedBy=multi-user.target 138 | ``` 139 | 140 | `/etc/systemd/system/mnt-bigdisk.automount` 141 | ```ini 142 | [Unit] 143 | Description=myshare automount 144 | 145 | [Automount] 146 | Where=/mnt/bigdisk 147 | 148 | [Install] 149 | WantedBy=multi-user.target 150 | ``` 151 | 152 | automount on boot - `sudo systemctl start mnt-bigdisk.automount` 153 | 154 | # Update 155 | 156 | Manual image update: 157 | 158 | - `docker-compose pull`
159 | - `docker-compose up -d`
160 | - `docker image prune` 161 | 162 | # Backup and restore 163 | 164 | #### Backup 165 | 166 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup) 167 | that makes daily snapshot of the entire directory. 168 | 169 | #### Restore 170 | 171 | * down the qbittorrent container `docker-compose down`
172 | * delete the entire qbittorrent directory
173 | * from the backup copy back the qbittorrent directory
174 | * start the container `docker-compose up -d` 175 | -------------------------------------------------------------------------------- /rustdesk/readme.md: -------------------------------------------------------------------------------- 1 | # Rustdesk in docker 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/ImsIffW.png) 6 | 7 | # Purpose & Overview 8 | 9 | Remote desktop access. 10 | 11 | * [Official site](https://rustdesk.com/) 12 | * [Github](https://github.com/rustdesk/rustdesk) 13 | * [DockerHub for S6](https://hub.docker.com/r/rustdesk/rustdesk-server-s6) 14 | 15 | Rustdesk is a new fully opensource alternative for TeamViewer or Anydesk.
16 | The major aspects are that it does NAT punching, 17 | and lets you host all the infrastructure for it to function.
18 | Written in Rust (gasp), with Dart and the Flutter framework for the client side.
19 | 20 | The idea is: 21 | 22 | * Run a rustdesk server reachable online. 23 | * Install clients on machines you want to connect from / to. 24 | * The clients application keeps a regular heartbeat communication 25 | with the server, in a way to [punch a hole](https://youtu.be/S7Ifw5XsypQ) 26 | in the NAT and so allows connection initialized from the outside, 27 | without doing port forwarding. 28 | 29 | --- 30 | 31 | ![interface-pic](https://i.imgur.com/ekA7Hms.png) 32 | 33 | # Files and directory structure 34 | 35 | ``` 36 | /home/ 37 | └── ~/ 38 | └── docker/ 39 | └── rustdesk/ 40 | ├── 🗁 rustdesk_data/ 41 | ├── 🗋 .env 42 | └── 🗋 docker-compose.yml 43 | ``` 44 | 45 | * `rustdesk_data/` - persistent data, contains sqlite database and the keys 46 | * `.env` - a file containing environment variables for docker compose 47 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers 48 | 49 | You only need to provide the two files.
50 | The directory is created by docker compose on the first run. 51 | 52 | # docker-compose 53 | 54 | Using [S6-overlay](https://github.com/rustdesk/rustdesk-server#s6-overlay-based-images) 55 | based image.
56 | It's a simpler, single container approach. The 57 | [complexity](https://github.com/rustdesk/rustdesk-server#classic-image) 58 | of rustdesk's `hbbs` server and `hbbr` relay is hidden. 59 | 60 | No network section, since there is no http traffic that would need a reverse proxy, yet.
61 | So just mapped ports on to docker host to do their thing. 62 | 63 | `docker-compose.yml` 64 | ```yml 65 | services: 66 | rustdesk: 67 | image: rustdesk/rustdesk-server-s6:1.1.7-1 68 | container_name: rustdesk 69 | hostname: rustdesk 70 | restart: unless-stopped 71 | env_file: .env 72 | ports: 73 | - "21116:21116" 74 | - "21115:21115" 75 | - "21116:21116/udp" 76 | - "21117:21117" 77 | - "21118:21118" 78 | - "21119:21119" 79 | volumes: 80 | - ./rustdesk_data:/data 81 | ``` 82 | 83 | `.env` 84 | ```bash 85 | # GENERAL 86 | TZ=Europe/Bratislava 87 | 88 | # RUSTDESK 89 | RELAY=rust.example.com:21117 90 | ENCRYPTED_ONLY=1 91 | # KEY_PRIV= 92 | # KEY_PUB= 93 | ``` 94 | 95 | In the `.env` file encryption is enabled, so that only clients that have 96 | correct public key will be allowed access to the rustdesk server.
97 | The keys are generated on the first run of the compose and can be found in 98 | the `rustdesk_data` directory. 99 | Once generated they should be added to the `.env` file for easier migration. 100 | The public key needs to be distributed with the clients apps installation. 101 | 102 | # Port forwarding 103 | 104 | as can be seen in the compose 105 | 106 | * **21115 - 21119** TCP need to be forwarded to docker host
107 | * **21116** is TCP **and UDP** 108 | 109 | 21115 is used for the NAT type test, 110 | 21116/UDP is used for the ID registration and heartbeat service, 111 | 21116/TCP is used for TCP hole punching and connection service, 112 | 21117 is used for the Relay services, 113 | and 21118 and 21119 are used to support web clients.
114 | [source](https://rustdesk.com/docs/en/self-host/install/) 115 | 116 | --- 117 | 118 | ![interface-pic](https://i.imgur.com/CK6pRyq.png) 119 | 120 | # The installation on clients 121 | 122 | * Download and install the client apps from [the official site](https://rustdesk.com/). 123 | * Three dots > ID/Relay Server 124 | * `ID Server`: rust.example.com 125 | * `Key`: *<the public key>* 126 | * The dot at the bottom should be green, saying "ready". 127 | 128 | ![settings-pic](https://i.imgur.com/lX6egMH.png) 129 | 130 | **On windows** one 131 | [can deploy](https://rustdesk.com/docs/en/self-host/install/#put-config-in-rustdeskexe-file-name-windows-only) 132 | client with **pre-sets** by renaming the installation file to: 133 | `rustdesk-host=<host>,key=<public key>.exe` 134 | 135 | example: `rustdesk-host=rust.example.com,key=3AVva64bn1ea2vsDuOuQH3i8+2M=.exe` 136 | 137 | If by chance the public key contains symbols not usable in windows filenames, 138 | down the container, delete the files `id_ed25519` and `id_ed25519.pub`, 139 | up the container and try with the new keys. 140 | 141 | # Extra info 142 | 143 | * You really really **really want to be using a domain and not your public IP** 144 | when installing clients and setting the ID server. That `rust.example.com` 145 | can be changed to point at a different IP any time you want. A hard-set IP can not. 146 | * Can do `tcpdump -n udp port 21116` on the docker host to **see heartbeat** udp traffic. 147 | Seems machines report-in every \~13 seconds. 148 | * on **windows** a **service** named `rustdesk` is enabled. 149 | Disable it if the machine should be accessible only on demand, 150 | when someone first runs rustdesk manually.
151 | In powershell - `Set-Service rustdesk -StartupType Disabled` 152 | * One can relatively easily 153 | **hardcode server url and pub key in to an executable** using 154 | [github actions.](https://rustdesk.com/docs/en/self-host/hardcode-settings/)
155 | Tested it and it works. But seems you can only do workflow run of nightly build, 156 | meaning all the latest stuff added is included, which means higher chance of bugs.
157 | Make sure you do step *"Enable upload permissions for workflows"*, 158 | before you run the workflow. 159 | * Questions about issues with selfhosting are **not answered** on github - 160 | [#763](https://github.com/rustdesk/rustdesk/discussions/763), 161 | next to try is their [discord](https://discord.com/invite/nDceKgxnkV) or 162 | [subreddit](https://www.reddit.com/r/rustdesk/). 163 | * [FAQ](https://github.com/rustdesk/rustdesk/wiki/FAQ) 164 | * How does [rustdesk work?](https://github.com/rustdesk/rustdesk/wiki/How-does-RustDesk-work%3F) 165 | 166 | ![logo](https://i.imgur.com/ptfVMtJ.png) 167 | 168 | # Trouble shooting 169 | 170 | --- 171 | 172 | #### If just one machine is having issues. 173 | 174 | uninstall, plus delete: 175 | 176 | * `C:\Windows\ServiceProfiles\LocalService\AppData\Roaming\RustDesk` 177 | * `%AppData%\RustDesk` 178 | 179 | Restart. Reinstall.
180 | Do not use the installer you used before, **download** the latest from the site. 181 | 182 | --- 183 | 184 | #### Error - Failed to connect to relay server 185 | 186 | * I had a wrong url set as `RELAY` in the `.env` 187 | * if the url is correct I would test if port 21117 tcp forwards 188 | 189 | --- 190 | 191 | #### Investigate port forwarding 192 | 193 | Install netcat and tcpdump on the docker host. 194 | 195 | * docker compose down the rustdesk container so that the ports are free to use 196 | * start a small netcat server listening on whichever port we test
197 | `sudo nc -u -vv -l -p 21116`
198 | the `-u` means udp traffic, delete it to do tcp 199 | * on a machine somewhere else in the world, not on the same network, try 200 | `nc -u <server public ip> 21116` 201 | 202 | If you write something and press enter, it should appear on the other machine, confirming
204 | Also useful command can be `tcpdump -n udp port 21116`
205 | When port forwarding works, one should see heartbeat chatter, 206 | as machines with installed rustdesk are announcing themselves every \~13 seconds. 207 | 208 | --- 209 | 210 | # Manual image update: 211 | 212 | - `docker-compose pull`
213 | - `docker-compose up -d`
214 | - `docker image prune` 215 | 216 | # Backup and restore 217 | 218 | #### Backup 219 | 220 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup) 221 | that makes daily snapshot of the entire directory. 222 | 223 | #### Restore 224 | 225 | * down the bookstack containers `docker-compose down`
226 | * delete the entire rustdesk directory
227 | * from the backup copy back the rustdesk directory
228 | * start the container `docker-compose up -d` 229 | 230 | -------------------------------------------------------------------------------- /snipeit/readme.md: -------------------------------------------------------------------------------- 1 | # Snipe-IT in docker 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/NEABBL7.png) 6 | 7 | # Purpose & Overview 8 | 9 | IT inventory management tool. 10 | 11 | * [Official site](https://snipeitapp.com/) 12 | * [Github](https://github.com/snipe/snipe-it) 13 | * [DockerHub](https://hub.docker.com/r/snipe/snipe-it/) 14 | 15 | Snipe-IT is a modern, open source, go-to asset management tool with LDAP integration.
16 | Written in PHP, using the Laravel framework. 17 | This setup uses a MariaDB database for storing the data.
18 | Dockerhub image maintained by 19 | [linuxserver.io](https://docs.linuxserver.io/images/docker-snipe-it) 20 | is used. 21 | 22 | # Files and directory structure 23 | 24 | ``` 25 | /home/ 26 | └── ~/ 27 | └── docker/ 28 | └── snipeit/ 29 | ├── config/ 30 | ├── snipeit-db/ 31 | ├── .env 32 | └── docker-compose.yml 33 | ``` 34 | 35 | * `config/` - a directory where snipe-it will store its web server stuff 36 | * `snipeit-db/` - a directory where snipeit will store its database data 37 | * `.env` - a file containing environment variables for docker compose 38 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers 39 | 40 | You only need to provide the files.
41 | The directories are created by docker compose on the first run. 42 | 43 | # docker-compose 44 | 45 | `docker-compose.yml` 46 | ```yml 47 | services: 48 | snipeit-db: 49 | image: mariadb 50 | container_name: snipeit-db 51 | hostname: snipeit-db 52 | restart: unless-stopped 53 | env_file: .env 54 | volumes: 55 | - ./snipeit-db:/var/lib/mysql 56 | 57 | snipeit: 58 | image: linuxserver/snipe-it:latest 59 | container_name: snipeit 60 | hostname: snipeit 61 | restart: unless-stopped 62 | env_file: .env 63 | depends_on: 64 | - snipeit-db 65 | volumes: 66 | - ./config:/config 67 | 68 | networks: 69 | default: 70 | name: $DOCKER_MY_NETWORK 71 | external: true 72 | ``` 73 | 74 | `.env` 75 | ```bash 76 | # GENERAL 77 | MY_DOMAIN=example.com 78 | DOCKER_MY_NETWORK=caddy_net 79 | TZ=Europe/Bratislava 80 | 81 | #LINUXSERVER.IO mariadb 82 | PUID=1000 83 | PGID=1000 84 | MYSQL_ROOT_PASSWORD=snipeit 85 | MYSQL_DATABASE=snipeit 86 | MYSQL_USER=snipeit 87 | MYSQL_PASSWORD=snipeit 88 | 89 | #LINUXSERVER.IO Snipe-IT 90 | APP_URL=https://snipe.example.com 91 | MYSQL_PORT_3306_TCP_ADDR=snipeit-db 92 | MYSQL_PORT_3306_TCP_PORT=3306 93 | MYSQL_DATABASE=snipeit 94 | MYSQL_USER=snipeit 95 | MYSQL_PASSWORD=snipeit 96 | APP_TRUSTED_PROXIES=* 97 | 98 | #EMAIL 99 | MAIL_PORT_587_TCP_ADDR=smtp-relay.sendinblue.com 100 | MAIL_PORT_587_TCP_PORT=587 101 | MAIL_ENV_FROM_ADDR=noreply@example.com 102 | MAIL_ENV_FROM_NAME=snipe-it admin 103 | MAIL_ENV_ENCRYPTION=tls 104 | MAIL_ENV_USERNAME=your_email@registrated-on-sendinblue.com 105 | MAIL_ENV_PASSWORD=your_sendinblue_smtp_key_value 106 | ``` 107 | 108 | **All containers must be on the same network**.
109 | Which is named in the `.env` file.
110 | If one does not exist yet: `docker network create caddy_net` 111 | 112 | # Reverse proxy 113 | 114 | Caddy v2 is used, details 115 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
116 | 117 | `Caddyfile` 118 | ``` 119 | snipe.{$MY_DOMAIN} { 120 | encode gzip 121 | reverse_proxy snipeit:443 { 122 | transport http { 123 | tls 124 | tls_insecure_skip_verify 125 | } 126 | } 127 | } 128 | ``` 129 | 130 | # First run 131 | 132 | 133 | ![interface-pic](https://i.imgur.com/wtwb4hn.png) 134 | 135 | --- 136 | 137 | 138 | # Trouble shooting 139 | 140 | 141 | 142 | # Update 143 | 144 | Manual image update: 145 | 146 | - `docker-compose pull`
147 | - `docker-compose up -d`
148 | - `docker image prune` 149 | 150 | # Backup and restore 151 | 152 | #### Backup 153 | 154 | Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup) 155 | that makes daily snapshot of the entire directory. 156 | 157 | #### Restore 158 | 159 | * down the snipeit containers `docker-compose down`
160 | * delete the entire snipeit directory
161 | * from the backup copy restore the snipeit directory
162 | * start the containers `docker-compose up -d` 163 | 164 | # Backup of just user data 165 | 166 | Users data daily export using the 167 | [official procedure.](https://www.snipeitapp.com/docs/admin/backup-restore/)
168 | For snipeit it means database dump and backing up several directories 169 | containing user uploaded files. 170 | 171 | Daily [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup) run 172 | takes care of backing up the directories. 173 | So only database dump is needed.
174 | The created `.sql` dump file is overwritten on every run of the script, 175 | but that's ok since borg is making daily snapshots. 176 | 177 | #### Create a backup script 178 | 179 | Placed inside the `snipeit` directory on the host 180 | 181 | `snipeit-backup-script.sh` 182 | ```bash 183 | #!/bin/bash 184 | 185 | # CREATE DATABASE DUMP, bash -c '...' IS USED OTHERWISE OUTPUT > WOULD TRY TO GO TO THE HOST 186 | docker container exec snipeit-db bash -c 'mysqldump -u $MYSQL_USER -p$MYSQL_PASSWORD $MYSQL_DATABASE > $MYSQL_DIR/BACKUP.snipeit.database.sql' 187 | ``` 188 | 189 | the script must be **executable** - `chmod +x snipeit-backup-script.sh` 190 | 191 | #### Cronjob 192 | 193 | Running on the host, so that the script will be periodically run. 194 | 195 | * `su` - switch to root 196 | * `crontab -e` - add new cron job
197 | * `0 22 * * * /home/bastard/docker/snipeit/snipeit-backup-script.sh`
198 | runs it every day [at 22:00](https://crontab.guru/#0_22_*_*_*) 199 | * `crontab -l` - list cronjobs to check 200 | 201 | # Restore the user data 202 | 203 | Assuming a clean start, first restore the database before running the app container. 204 | 205 | * start only the database container: `docker-compose up -d snipeit-db` 206 | * copy `BACKUP.snipeit.database.sql` into `snipeit/snipeit-db-data/` 207 | * restore the database inside the container
208 | `docker container exec --workdir /config snipeit-db bash -c 'mysql -u $MYSQL_USER -p$MYSQL_PASSWORD $MYSQL_DATABASE < BACKUP.snipeit.database.sql'` 209 | * now start the app container: `docker-compose up -d` 210 | * let it run so it creates its file structure 211 | * down the containers `docker-compose down` 212 | * in `snipeit/snipeit-data/www/`
213 | replace directories `files`,`images`,`uploads` and the file `.env`
214 | with the ones from the BorgBackup repository 215 | * start the containers: `docker-compose up -d` 216 | * if there was a major version jump, exec into the app container and run `php artisan migrate`
217 | `docker container exec -it snipeit /bin/bash`
218 | `cd /var/www/html/`
219 | `php artisan migrate` 220 | 221 | Again, the above steps are based on the 222 | [official procedure.](https://www.snipeitapp.com/docs/admin/backup-restore/) 223 | -------------------------------------------------------------------------------- /squid/readme.md: -------------------------------------------------------------------------------- 1 | # Squid 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/U42Ot3z.jpg) 6 | 7 | # Purpose & Overview 8 | 9 | Forward proxy to avoid websites IP bans.
10 | 11 | * [Official](http://www.squid-cache.org/) 12 | * [Github](https://github.com/squid-cache/squid) 13 | * [Arch wiki](https://wiki.archlinux.org/title/Squid) 14 | 15 | Caching and forwarding HTTP web proxy.
16 | The main use here is being able to access web pages from a different IP than 17 | your own, in a comfortable way.
18 | Other uses are caching to improve speed and load, and the ability to block domains, 19 | ads, IPs,... 20 | 21 | Squid is written in C++. 22 | 23 | # Hosting 24 | 25 | A free Oracle Cloud instance can be used to host squid somewhere in the world.
26 | [Detailed setup guide here.](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/free_cloud_hosting_VPS) 27 | 28 | # Files and directory structure 29 | 30 | ``` 31 | /home/ 32 | └── ~/ 33 | └── docker/ 34 | └── squid/ 35 | ├── 🗋 docker-compose.yml 36 | └── 🗋 squid.conf 37 | ``` 38 | * `docker-compose.yml` - a docker compose file, telling docker how to run the container 39 | * `squid.conf` - main configuration file for squid 40 | 41 | # Compose 42 | 43 | `docker-compose.yml` 44 | ```yml 45 | services: 46 | squid: 47 | image: ubuntu/squid 48 | container_name: squid 49 | hostname: squid 50 | restart: unless-stopped 51 | ports: 52 | - "56566:56566" 53 | volumes: 54 | - ./squid.conf:/etc/squid/squid.conf 55 | - ./squid_cache:/var/spool/squid 56 | ``` 57 | 58 | # squid.conf 59 | 60 | `squid.conf` 61 | ```php 62 | max_filedescriptors 1048576 63 | http_port 56566 64 | http_access allow all 65 | 66 | cache deny all 67 | visible_hostname squidproxy 68 | 69 | forwarded_for delete 70 | via off 71 | follow_x_forwarded_for deny all 72 | request_header_access X-Forwarded-For deny all 73 | ``` 74 | 75 | A minimal config that works. 76 | 77 | For running in docker, `max_filedescriptors 1048576` is required, prevents error:
78 | *FATAL: xcalloc: Unable to allocate 1073741816 blocks of 432 bytes 79 | squid cache terminated abnormally* 80 | 81 | A firewall is used for security, allowing in-connections only from one public IP. 82 | Otherwise a VPN like wireguard would be used, 83 | so there's not much interest in the acl security and authorization provided by the config.
84 | Also no interest in caching.
85 | So this is just a config with some headers turned off for maybe better hiding 86 | of the real IP. 87 | 88 | **Testing** 89 | 90 | The Linux curl command can test if the forward proxy works. 91 | 92 | `curl -x http://666.6.66.6:56566 -L http://archlinux.org` 93 | 94 | # Setting proxy in browsers 95 | 96 | ![foxy](https://i.imgur.com/oYIA5u1.jpg) 97 | 98 | Every browser has proxy settings where ip and port can be set and it should 99 | work globally for every site. But if only certain domains should go through the proxy, 100 | then there are browser addons. 101 | 102 | **FoxyProxy Standard** 103 | 104 | * [firefox](https://addons.mozilla.org/en-US/firefox/addon/foxyproxy-standard/) 105 | * [chrome](https://chrome.google.com/webstore/detail/foxyproxy-standard/gcknhkkoolaabfmlnjonogaaifnjlfnp) 106 |
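The curl test above can also be wrapped in a small script that compares the public IP seen with and without the proxy. This is only a sketch: the proxy address is the placeholder from the config above, and `ifconfig.me` is just one of many IP-echo services, assumed here.

```shell
#!/bin/bash
# Placeholder proxy address - replace with your squid host and port
PROXY="http://666.6.66.6:56566"

# Echo the apparent public IP, optionally through a proxy.
apparent_ip() {   # usage: apparent_ip [proxy_url]
    if [ -n "$1" ]; then
        curl -s -x "$1" http://ifconfig.me
    else
        curl -s http://ifconfig.me
    fi
}

# Compare direct vs proxied IP and report whether the proxy hides the real one.
check_proxy() {
    direct=$(apparent_ip)
    proxied=$(apparent_ip "$PROXY")
    if [ "$direct" != "$proxied" ]; then
        echo "proxy works - IP changed to $proxied"
    else
        echo "warning - IP did not change"
    fi
}
```

Calling `check_proxy` should report the proxied IP if squid is reachable and actually masking the real address.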
108 | But it also has pattern section where url wildcard can be set and proxy 109 | is enabled all the time but applies only on sites fitting pattern. 110 | 111 | # Update 112 | 113 | Manual image update: 114 | 115 | - `docker compose pull`
116 | - `docker compose up -d`
117 | - `docker image prune` 118 | 119 | -------------------------------------------------------------------------------- /syncthing/readme.md: -------------------------------------------------------------------------------- 1 | # Syncthing 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/Vgtn1FM.png) 6 | 7 | # Purpose & Overview 8 | 9 | Synchronize folders between devices.
10 | 11 | * [Official](https://syncthing.net/) 12 | * [Github](https://github.com/syncthing/syncthing) 13 | 14 | Simple and elegant solution for synchronizing folders.
15 | Clients are installed on devices, and paired using the Syncthing servers. 16 | There are Windows, MacOs, Linux, Android clients, and 3rd party Möbius Sync for iOS. 17 | 18 | Written in Go. 19 | 20 | # Files and directory structure 21 | 22 | ``` 23 | /home/ 24 | └── ~/ 25 | └── docker/ 26 | └── syncthing/ 27 | ├── 🗋 .env 28 | └── 🗋 docker-compose.yml 29 | ``` 30 | * `.env` - a file containing environment variables for docker compose 31 | * `docker-compose.yml` - a docker compose file, telling docker how to run the container 32 | 33 | # Compose 34 | 35 | `docker-compose.yml` 36 | ```yml 37 | services: 38 | 39 | syncthing: 40 | image: syncthing/syncthing 41 | container_name: syncthing 42 | hostname: syncthing 43 | restart: unless-stopped 44 | env_file: .env 45 | volumes: 46 | - /mnt/mirror/syncthing:/var/syncthing 47 | ports: 48 | - 8384:8384 # Web UI 49 | - 22000:22000/tcp # TCP file transfers 50 | - 22000:22000/udp # QUIC file transfers 51 | - 21027:21027/udp # Receive local discovery broadcasts 52 | 53 | networks: 54 | default: 55 | name: $DOCKER_MY_NETWORK 56 | external: true 57 | ``` 58 | 59 | `.env` 60 | ```bash 61 | # GENERAL 62 | DOCKER_MY_NETWORK=caddy_net 63 | TZ=Europe/Bratislava 64 | 65 | # SYNCTHING 66 | PUID=1000 67 | PGID=1000 68 | ``` 69 | 70 | # Reverse proxy 71 | 72 | Caddy v2 is used, details 73 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2). 74 | 75 | `Caddyfile` 76 | ```php 77 | sync.{$MY_DOMAIN} { 78 | reverse_proxy syncthing:8384 79 | } 80 | ``` 81 | 82 | # First run 83 | 84 | ![webgui](https://i.imgur.com/ywdYeU2.png) 85 | 86 | visit the webgui, setup username and password in settings > GUI. 87 | 88 | * intall sync on other devices 89 | * add folders, confirm them on webgui 90 | 91 | sync should just start. 92 | 93 | 94 | # Trouble shooting 95 | 96 | # Update 97 | 98 | Manual image update: 99 | 100 | - `docker compose pull`
101 | - `docker compose up -d`
102 | - `docker image prune` 103 | 104 | -------------------------------------------------------------------------------- /umami/readme.md: -------------------------------------------------------------------------------- 1 | # Umami 2 | 3 | 4 | # Purpose & Overview 5 | Self-hosted, private, simple web site analytics with Umami. 6 | 7 | * [Github] (https://github.com/umami-software/umami) 8 | 9 | # Files and directory structure 10 | 11 | ``` 12 | /home/ 13 | └── ~/ 14 | └── docker/ 15 | └── umami/ 16 | ├── .env 17 | └── docker-compose.yml 18 | ``` 19 | * `.env` - a file containing environment variables for docker compose 20 | * `docker-compose.yml` - a docker compose file, telling docker how to run the container 21 | 22 | ### - Create .env and docker-compose.yml file 23 | `.env` 24 | ``` 25 | # GENERAL 26 | MY_DOMAIN=example.com 27 | DOCKER_MY_NETWORK=caddy_net 28 | 29 | # UMAMI 30 | DATABASE_URL=postgresql://umami:generate_me@db:5432/umami 31 | DATABASE_TYPE=postgresql 32 | # generate a secret with `openssl rand -base64 32` 33 | APP_SECRET=generate_me 34 | # uncomment and change for custom analytics script name 35 | # TRACKER_SCRIPT_NAME=custom_script_name 36 | 37 | # UMAMI DB 38 | POSTGRES_DB=umami 39 | POSTGRES_USER=umami 40 | # generate a password with `openssl rand -base64 32` 41 | POSTGRES_PASSWORD=generate_me 42 | ``` 43 | 44 | `docker-compose.yml` 45 | ```yml 46 | --- 47 | version: '3' 48 | services: 49 | 50 | umami: 51 | container_name: umami 52 | image: ghcr.io/umami-software/umami:postgresql-latest 53 | ports: 54 | - "3000:3000" 55 | env_file: .env 56 | depends_on: 57 | db: 58 | condition: service_healthy 59 | restart: always 60 | 61 | db: 62 | container_name: umami-db 63 | image: postgres:15-alpine 64 | env_file: .env 65 | volumes: 66 | - umami-db-data:/var/lib/postgresql/data 67 | restart: always 68 | healthcheck: 69 | test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}"] 70 | interval: 5s 71 | timeout: 5s 72 | retries: 5 73 | 74 | 
volumes: 75 | umami-db-data: 76 | 77 | networks: 78 | default: 79 | name: $DOCKER_MY_NETWORK 80 | external: true 81 | 82 | ``` 83 | 84 | # Reverse proxy 85 | 86 | Caddy is used, details 87 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
88 | 89 | `Caddyfile` 90 | ``` 91 | umami.{$MY_DOMAIN} { 92 | reverse_proxy umami:3000 93 | 94 | log { 95 | output file /data/logs/umami_access.log { 96 | roll_size 20mb 97 | roll_keep 5 98 | } 99 | } 100 | } 101 | ``` 102 | 103 | ### - Run it all 104 | Restarting the Caddy container `docker container restart caddy` will kick in the changes. Give Caddy time to get certificates, check `docker logs caddy`. 105 | 106 | # First run 107 | 108 | Default login: `admin` // `umami`. Go and change the password straight away. 109 | 110 | # Trouble shooting 111 | 112 | Check umami logs `docker logs umami`. 113 | 114 | # Update 115 | 116 | [Watchtower](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/watchtower) 117 | updates the image automatically. 118 | 119 | Manual image update: 120 | 121 | - `docker-compose pull`
122 | - `docker-compose up -d`
123 | - `docker image prune` 124 | -------------------------------------------------------------------------------- /unifi/readme.md: -------------------------------------------------------------------------------- 1 | # UniFi Network Application 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/tAEVBnp.png) 6 | 7 | # Purpose & Overview 8 | 9 | Ubiquiti management software for **wifi** access points and other Ubiquiti hardware.
10 | 11 | * [Official site](https://www.ui.com/software/) 12 | * [linuxserver github](https://github.com/linuxserver/docker-unifi-network-application) 13 | 14 | UniFi is a web based managment software for Ubiquiti devices.
15 | It is written in Java, utilizing the Spring Framework 16 | and using MongoDB as a database.
17 | Docker image used here is provided by 18 | [linuxserver.io](https://www.linuxserver.io/) 19 | 20 | # Migration from UniFi Controller 21 | 22 | * Do the manual **backup** of your old instance, through webgui settings. 23 | * Down the old container. 24 | * Spin the new stuff 25 | * Restore the backup 26 | 27 |
28 | **Extra Info & Rant** 29 |
30 | 31 | Previously called [UniFi Controller](https://github.com/linuxserver/docker-unifi-controller) 32 | 33 | Ubiquiti morons decided to change the name to UniFi Network Application. 34 | Then also tried to go for the name UniFi Network Server with a claim that it's for the 35 | selfhosted version. In docs and even in downloads they mostly use the `application`.
36 | Though love that inside the webgui the version is just `Network 8.0.28`. 37 | 38 | With this name change, linuxserver.io also changed the deployment so that 39 | the mongo database is now a separate container.
40 | Would not be a big issue if mongo did not [suck big time](https://github.com/docker-library/mongo/issues/174) 41 | at initiating databases in new deployments, making it unnecessarily complicated. 42 | Or if linuxserver.io could make a decision and write 43 | [cleaner instructions](https://github.com/linuxserver/docker-unifi-network-application/issues/13) 44 | instead of trying to teach how to fish.
45 | Also linuxserver.io's official stance is to use an older version of mongo, v3.6 - v4.4.
46 | Reports are that Raspberry Pi 4 users need to go for v3.6.
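In compose terms, staying on a supported mongo release just means pinning the image tag instead of riding `latest`. A minimal sketch, with the `4.4` tag chosen only as an illustration of the v3.6 - v4.4 range:

```yml
services:
  unifi-db:
    # any tag inside the supported 3.6 - 4.4 range works;
    # raspberry pi 4 reportedly needs to stay on 3.6
    image: mongo:4.4
```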
51 | First time I've seen `configs` used in compose this way; it saved the bother of doing 52 | a separate mount of `mongo-init.js`, which for some reason did not work for me. 53 | Here it is improved a bit by using variables, so stuff can be set just in `.env` 54 | 55 |
56 | 57 | ![backup_restore](https://i.imgur.com/WYleMWj.png) 58 | 59 | # Files and directory structure 60 | 61 | ``` 62 | /home/ 63 | └── ~/ 64 | └── docker/ 65 | └── unifi/ 66 | ├── 🗁 mongo_db_data/ 67 | ├── 🗁 unifi_data/ 68 | ├── 🗋 .env 69 | └── 🗋 docker-compose.yml 70 | ``` 71 | 72 | * `mongo_db_data/` - database data 73 | * `unifi_data/` - unifi configuration and other data 74 | * `.env` - a file containing environment variables for docker compose 75 | * `docker-compose.yml` - a docker compose file, telling docker 76 | how to run the containers 77 | 78 | You only need to provide the files.
79 | The directories are created by docker compose on the first run. 80 | 81 | # docker-compose 82 | 83 | Compose should not need any changes; there's the `.env` file for that. 84 | 85 | Worth noting is the use of [configs](https://docs.docker.com/compose/compose-file/08-configs/) 86 | to bypass the need for a separate `mongo-init.js` file.
87 | The use comes from [this repo](https://github.com/GiuseppeGalilei/Ubiquiti-Tips-and-Tricks). 88 | 89 | latest unifi version tag - [here](https://github.com/linuxserver/docker-unifi-network-application/releases) 90 | 91 | `docker-compose.yml` 92 | ```yml 93 | services: 94 | 95 | unifi-db: 96 | image: mongo:4 97 | container_name: unifi-db 98 | hostname: unifi-db 99 | restart: unless-stopped 100 | env_file: .env 101 | volumes: 102 | - ./mongo_db_data:/data/db 103 | expose: 104 | - 27017 105 | configs: 106 | - source: init-mongo.js 107 | target: /docker-entrypoint-initdb.d/init-mongo.js 108 | 109 | unifi-app: 110 | image: lscr.io/linuxserver/unifi-network-application:8.4.59 111 | container_name: unifi-app 112 | hostname: unifi-app 113 | restart: unless-stopped 114 | env_file: .env 115 | depends_on: 116 | - unifi-db 117 | volumes: 118 | - ./unifi_data:/config 119 | ports: 120 | - 8443:8443 121 | - 3478:3478/udp 122 | - 10001:10001/udp 123 | - 8080:8080 124 | - 1900:1900/udp #optional 125 | - 8843:8843 #optional 126 | - 8880:8880 #optional 127 | - 6789:6789 #optional 128 | - 5514:5514/udp #optional 129 | 130 | networks: 131 | default: 132 | name: $DOCKER_MY_NETWORK 133 | external: true 134 | 135 | configs: 136 | init-mongo.js: 137 | content: | 138 | db.getSiblingDB("$MONGO_DBNAME").createUser({user: "$MONGO_USER", pwd: "$MONGO_PASS", roles: [{role: "$MONGO_ROLE", db: "$MONGO_DBNAME"}]}); 139 | db.getSiblingDB("${MONGO_DBNAME}_stat").createUser({user: "$MONGO_USER", pwd: "$MONGO_PASS", roles: [{role: "$MONGO_ROLE", db: "${MONGO_DBNAME}_stat"}]}); 140 | ``` 141 | 142 | `.env` 143 | ```bash 144 | # GENERAL 145 | DOCKER_MY_NETWORK=caddy_net 146 | TZ=Europe/Bratislava 147 | 148 | #UNIFI LINUXSERVER.IO 149 | PUID=1000 150 | PGID=1000 151 | MEM_LIMIT=1024 152 | MEM_STARTUP=1024 153 | MONGO_USER=unifi 154 | MONGO_PASS=N9uHz2ct 155 | MONGO_HOST=unifi-db 156 | MONGO_PORT=27017 157 | MONGO_DBNAME=unifi_db 158 | MONGO_ROLE=dbOwner 159 | # MONGO_TLS= #optional 160 | # 
MONGO_AUTHSOURCE= #optional 161 | 162 | ``` 163 | 164 | # Reverse proxy 165 | 166 | Caddy v2 is used, details 167 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
168 | 169 | `Caddyfile` 170 | ``` 171 | unifi.{$MY_DOMAIN} { 172 | encode gzip 173 | reverse_proxy unifi-app:8443 { 174 | transport http { 175 | tls 176 | tls_insecure_skip_verify 177 | } 178 | } 179 | } 180 | ``` 181 | 182 | # Adoption 183 | 184 | ![override_pic](https://i.imgur.com/VyCqaCp.png) 185 | 186 | The controller might see your APs during initial setup, 187 | but it can not adopt them before you set your docker host IP 188 | as `Override Inform Host`. 189 | 190 | * **Inform Host** check the **Override** checbox
191 | *Settings > System > Advanced*
192 | * enter docker-host IP 193 | * adopt devices 194 | 195 | # Some Settings 196 | 197 | * **Disable "Connects high performance clients to 5 GHz only"**
198 | *Old interface > Settings > Wireless Networks > Edit > Advanced Options*
199 | When enabled it forces devices to ignore 2.4GHz which obviously causes problems at range. 200 | Fucking monstrous stupidity to be default on, 201 | but I guess globaly they have power to cleanup 2.4GHz a bit. 202 | * **802.11 DTIM Period - sets to 3**
203 | *Settings > Wifi > Edit > Advanced*
204 | For [apple devices](https://www.sniffwifi.com/2016/05/go-to-sleep-go-to-sleep-go-to-sleep.html)
205 | 206 | # Migration 207 | 208 | * old controller - settings - system - somewhere backup - just settings - you get a `unf` file 209 | * new controller - on the first run theres an option to restore from backup, pick the file 210 | * done 211 | 212 | # Config autobackup 213 | 214 | config files should be in `~/unifi/unifi_data/data/backup` 215 | 216 | # Encountered issues 217 | 218 | Because I had `./mongo_db_data:/data` as bind mount 219 | instead of `./mongo_db_data:/data/db`, there was an issue. 220 | Everything worked fine for weeks, even shutdown of the docker host 221 | or sudden power outage... but on downing of the container and then upping, 222 | login would stop working with the notice "invalid username and/or password". 223 | 224 | # Update 225 | 226 | Manual image update: 227 | 228 | - `docker-compose pull`
229 | - `docker-compose up -d`
230 | - `docker image prune` 231 | 232 | -------------------------------------------------------------------------------- /uptime-kuma/readme.md: -------------------------------------------------------------------------------- 1 | # Uptime Kuma in docker 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/Q51w85x.png) 6 | 7 | # Purpose & Overview 8 | 9 | Uptime monitoring and notifications. 10 | 11 | * [Github](https://github.com/louislam/uptime-kuma) 12 | 13 | Simple, modern, uptime monitoring of sites, hosts, ports, containers, 14 | with rich notification system. 15 | 16 | Written in javascript. 17 | 18 | --- 19 | 20 | ![interface-pic](https://i.imgur.com/a99GvY2.jpg) 21 | 22 | # Files and directory structure 23 | 24 | ``` 25 | /home/ 26 | └── ~/ 27 | └── docker/ 28 | └── uptimekuma/ 29 | ├── 🗁 uptimekuma_data/ 30 | └── 🗋 docker-compose.yml 31 | ``` 32 | 33 | * `uptimekuma_data` - persistent data, contains sqlite database 34 | * `.env` - a file containing environment variables for docker compose 35 | * `docker-compose.yml` - a docker compose file, telling docker how to run the containers 36 | 37 | You only need to provide the two files.
38 | The directory is created by docker compose on the first run. 39 | 40 | # docker-compose 41 | 42 | It's a simpler, single container approach. 43 | 44 | `docker-compose.yml` 45 | ```yml 46 | services: 47 | uptimekuma: 48 | image: louislam/uptime-kuma:1 49 | container_name: uptimekuma 50 | hostname: uptimekuma 51 | restart: unless-stopped 52 | ports: 53 | - "3001:3001" 54 | volumes: 55 | - ./uptimekuma_data:/app/data 56 | 57 | networks: 58 | default: 59 | name: $DOCKER_MY_NETWORK 60 | external: true 61 | ``` 62 | 63 | `.env` 64 | ```bash 65 | # GENERAL 66 | TZ=Europe/Bratislava 67 | DOCKER_MY_NETWORK=caddy_net 68 | ``` 69 | # Reverse proxy 70 | 71 | Caddy v2 is used, details 72 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2). 73 | 74 | `Caddyfile` 75 | ```php 76 | uptime.{$MY_DOMAIN} { 77 | reverse_proxy uptimekuma:3001 78 | } 79 | ``` 80 | 81 | # Manual image update: 82 | 83 | - `docker-compose pull`
84 | - `docker-compose up -d`
85 | - `docker image prune` 86 | 87 | # Backup and restore 88 | 89 | #### Backup 90 | 91 | should be just backup of `uptimekuma_data` directory 92 | 93 | not tested yet 94 | 95 | #### Restore 96 | -------------------------------------------------------------------------------- /watchtower/readme.md: -------------------------------------------------------------------------------- 1 | # Watchtower in docker 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/xXS2bzZ.png) 6 | 7 | # Purpose 8 | 9 | Automatic updates of docker containers. 10 | 11 | * [Github](https://github.com/containrrr/watchtower) 12 | * [DockerHub image used](https://hub.docker.com/r/containrrr/watchtower) 13 | 14 | Watchtower is an application that will monitor the running Docker containers 15 | and watch for changes to the images that those containers 16 | were originally started from. If watchtower detects that an image has changed, 17 | it will automatically restart the container using the new image. 18 | 19 | As of now, Watchtower needs to always pull images to know if they changed. 20 | This can be bandwidth intensive, so its scheduled checks should account for this. 21 | 22 | # Files and directory structure 23 | 24 | ``` 25 | /home/ 26 | └── ~/ 27 | └── docker/ 28 | └── watchtower/ 29 | ├── .env 30 | └── docker-compose.yml 31 | ``` 32 | 33 | * `.env` - a file containing environment variables for docker compose 34 | * `docker-compose.yml` - a docker compose file, telling docker how to run the container 35 | 36 | Only these two files must be provided. 37 | 38 | # docker-compose 39 | 40 | Scheduled to run every saturday at midnight using environment variable.
41 | Heads up that not a typical cron format is used, 42 | [seconds are the first digit](https://pkg.go.dev/github.com/robfig/cron@v1.2.0?tab=doc#hdr-CRON_Expression_Format). 43 | 44 | `docker-compose.yml` 45 | ```yml 46 | version: '3' 47 | services: 48 | 49 | watchtower: 50 | image: containrrr/watchtower:latest 51 | container_name: watchtower 52 | hostname: watchtower 53 | restart: unless-stopped 54 | env_file: .env 55 | volumes: 56 | - /var/run/docker.sock:/var/run/docker.sock 57 | ``` 58 | 59 | `.env` 60 | ```bash 61 | # GENERAL 62 | 63 | # WATCHTOWER 64 | WATCHTOWER_SCHEDULE=0 0 0 * * SAT 65 | WATCHTOWER_CLEANUP=true 66 | WATCHTOWER_TIMEOUT=30s 67 | WATCHTOWER_DEBUG=false 68 | WATCHTOWER_INCLUDE_STOPPED=false 69 | ``` 70 | 71 | # Update 72 | 73 | * [watchtower](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/watchtower) 74 | updates itself automatically 75 | 76 | * manual image update
77 | `docker-compose pull`
78 | `docker-compose up -d`
79 | `docker image prune` 80 | -------------------------------------------------------------------------------- /watchyourlan/readme.md: -------------------------------------------------------------------------------- 1 | # WatchYourLAN 2 | 3 | ###### guide-by-example 4 | 5 | ![pic](https://i.imgur.com/YDDcvVg.png) 6 | 7 | # Purpose & Overview 8 | 9 | Monitor LAN with regular IP scans.
10 | 11 | * [Github](https://github.com/aceberg/WatchYourLAN) 12 | 13 | Simple webgui ip scanner with notification when new unknown MAC address appears. 14 | 15 | Backend is written in Go. 16 | 17 | # Files and directory structure 18 | 19 | ``` 20 | /home/ 21 | └── ~/ 22 | └── docker/ 23 | └── watchyourlan/ 24 | ├── 🗋 .env 25 | └── 🗋 docker-compose.yml 26 | ``` 27 | * `.env` - a file containing environment variables for docker compose 28 | * `docker-compose.yml` - a docker compose file, telling docker how to run the container 29 | 30 | # Compose 31 | 32 | Of note is the `network_mode` being set to `host`, 33 | which means that the container shares the IP with the docker-host 34 | and is on the docker-host network, likely the main network, 35 | not some virtual docker network. 36 | 37 | `docker-compose.yml` 38 | ```yml 39 | services: 40 | 41 | watchyourlan: 42 | image: aceberg/watchyourlan 43 | container_name: watchyourlan 44 | hostname: watchyourlan 45 | network_mode: host 46 | env_file: .env 47 | restart: unless-stopped 48 | volumes: 49 | - ./watchyourlan_data:/data 50 | ``` 51 | 52 | `.env` 53 | ```bash 54 | # GENERAL 55 | TZ=Europe/Bratislava 56 | 57 | IFACE="ens33" 58 | GUIIP="10.0.19.4" 59 | GUIPORT="8840" 60 | THEME="darkly" 61 | ``` 62 | 63 | To get the variables `IFACE` and `GUIIP` for the `.env` file, 64 | ssh to the docker-host and run `ip r` 65 | 66 | # Reverse proxy 67 | 68 | Caddy v2 is used, details 69 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2). 70 | 71 | Because of the container running in a host network mode, the IP of the docker-host 72 | is used instead of just some container hostname. 73 | 74 | `Caddyfile` 75 | ```php 76 | lan.{$MY_DOMAIN} { 77 | reverse_proxy 10.0.19.4:8840 78 | } 79 | ``` 80 | 81 | # Notifications 82 | 83 | WatchYourLAN uses [Shoutrrr](https://containrrr.dev/shoutrrr/v0.5/services/generic/) 84 | for notifications.
85 | If using [ntfy like me](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/gotify-ntfy-signal), 86 | one just uses generic webhook notation of shoutrrr. 87 | 88 | In Config: 89 | 90 | * Shoutrrr URL: `generic+https://ntfy.example.com/LAN_home` 91 | 92 | # Trouble shooting 93 | 94 | # Update 95 | 96 | Manual image update: 97 | 98 | - `docker compose pull`
99 | - `docker compose up -d`
100 | - `docker image prune` 101 | 102 | -------------------------------------------------------------------------------- /wg-easy/readme.md: -------------------------------------------------------------------------------- 1 | # wg-easy 2 | 3 | ###### guide-by-example 4 | 5 | ![logo](https://i.imgur.com/IRgkp2o.png) 6 | 7 | # Purpose & Overview 8 | 9 | Web GUI for Wireguard VPN.
10 | 11 | * [Github](https://github.com/wg-easy/wg-easy) 12 | 13 | Wireguard is the best VPN solution right now. But it's not noob friendly or easy.
14 | WG-easy tries to solve this. 15 | 16 | Written in javascript. 17 | 18 | # Files and directory structure 19 | 20 | ``` 21 | /home/ 22 | └── ~/ 23 | └── docker/ 24 | └── wg-easy/ 25 | ├── 🗁 wireguard_data/ 26 | ├── 🗋 .env 27 | └── 🗋 docker-compose.yml 28 | ``` 29 | * `wireguard_data/` - a directory with wireguard config files 30 | * `.env` - a file containing environment variables for docker compose 31 | * `docker-compose.yml` - a docker compose file, telling docker how to run the container 32 | 33 | # Compose 34 | 35 | `docker-compose.yml` 36 | ```yml 37 | services: 38 | 39 | wg-easy: 40 | image: ghcr.io/wg-easy/wg-easy:14 41 | container_name: wg-easy 42 | hostname: wg-easy 43 | restart: unless-stopped 44 | env_file: .env 45 | volumes: 46 | - ./wireguard_data:/etc/wireguard 47 | ports: 48 | - "51820:51820/udp" # vpn traffic 49 | - "51821:51821" # web interface 50 | cap_add: 51 | - NET_ADMIN 52 | - SYS_MODULE 53 | sysctls: 54 | - net.ipv4.ip_forward=1 55 | - net.ipv4.conf.all.src_valid_mark=1 56 | 57 | networks: 58 | default: 59 | name: $DOCKER_MY_NETWORK 60 | external: true 61 | ``` 62 | 63 | `.env` 64 | ```bash 65 | # GENERAL 66 | DOCKER_MY_NETWORK=caddy_net 67 | TZ=Europe/Bratislava 68 | 69 | #WG-EASY 70 | WG_HOST=vpn.example.com # can also be just public IP 71 | # PASSWORD=supersecretpassword 72 | PASSWORD_HASH=$$2a$$12$$52a84HoSf99aLL7lmt9NsO0hlhZmGuJnyBK.bToiSdbQhTvMjV3ce 73 | WG_PORT=51820 74 | WG_DEFAULT_ADDRESS=10.221.221.x 75 | WG_ALLOWED_IPS=192.168.1.0/24 76 | WG_DEFAULT_DNS= 77 | ``` 78 | 79 | In version 14 `PASSWORD` as env variable is no longer allowed 80 | and `PASSWORD_HASH` must be used.
81 | It is [a bcrypt hash](https://github.com/wg-easy/wg-easy/blob/master/How_to_generate_an_bcrypt_hash.md) 82 | of the password and in compose it must be without quotation marks 83 | and any `$` symbol needs to be doubled - replaced with `$$`. 84 | 85 | DNS is left empty as I had issues with it set, but it should be tried: 86 | set it to an ip address where a dns server answers at port 53. Then test with nslookup. 87 | 88 | # Reverse proxy 89 | 90 | Caddy v2 is used, details 91 | [here](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/caddy_v2).
92 | 93 | `Caddyfile` 94 | ```php 95 | vpn.{$MY_DOMAIN} { 96 | reverse_proxy wg-easy:51821 97 | } 98 | ``` 99 | 100 | # First run 101 | 102 | ![loginpic](https://i.imgur.com/V30cDwq.png) 103 | 104 | Login with the password from the .env file.
105 | Add user, download config, use it. 106 | 107 | # Trouble shooting 108 | 109 | # Update 110 | 111 | Manual image update: 112 | 113 | - `docker compose pull`
114 | - `docker compose up -d`
115 | - `docker image prune` 116 | 117 | -------------------------------------------------------------------------------- /windows_package_managers/readme.md: -------------------------------------------------------------------------------- 1 | # Windows Package Managers 2 | 3 | ###### guide-by-example 4 | 5 | # Purpose & Overview 6 | 7 | Install and manage software on windows through command line. 8 | 9 | * winget 10 | * chocolatey 11 | * scoop 12 | 13 | 14 | # Winget 15 | 16 | `winget search irfanview` 17 | `winget install irfanview` 18 | 19 | \+ Comes preinstalled with windows 10+
20 | \- Feels like an unmanaged afterthought 21 | 22 | # Scoop 23 | 24 | ### User 25 | 26 | * non-admin powershell terminal 27 | * `Set-ExecutionPolicy RemoteSigned -Scope CurrentUser` 28 | * `irm get.scoop.sh | iex` 29 | * `scoop install git sudo` 30 | * `scoop bucket add extras` 31 | * `scoop bucket add sysinternals` 32 | * `scoop bucket add nonportable` 33 | 34 | ### Machine-wide 35 | 36 | * admin powershell terminal 37 | * `Set-ExecutionPolicy Bypass` 38 | * `iex "& {$(irm get.scoop.sh)} -RunAsAdmin"` 39 | * `scoop install git sudo --global` 40 | * `scoop bucket add extras` 41 | * `scoop bucket add sysinternals` 42 | * `scoop bucket add nonportable` 43 | 44 | ### Useful 45 | 46 | * search - `scoop search mpv` 47 | * `scoop install mpv --global` 48 | * search for available packages - [scoop.sh](https://scoop.sh/) 49 | 50 | # Choco 51 | 52 | `Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))` 53 | 54 | 55 | --------------------------------------------------------------------------------