├── LICENSE.md
├── README.md
├── configuration-templates
│   ├── RCLONE
│   │   └── config.yml
│   ├── RSYNC
│   │   └── config.yml
│   ├── SCP
│   │   └── config.yml
│   └── install_template.sh
├── generating-gpg-keys.md
├── influxdb-post-processing.md
├── install_scripts.sh
├── monitoring-storage-quotas.md
├── raspbian-system-snapshots.md
├── resources
│   ├── quota-flow-canvas.png
│   └── quota-flow-example.json
├── scripts
│   ├── iotstack_backup
│   ├── iotstack_backup_general
│   ├── iotstack_backup_gitea
│   ├── iotstack_backup_influxdb
│   ├── iotstack_backup_influxdb2
│   ├── iotstack_backup_mariadb
│   ├── iotstack_backup_nextcloud
│   ├── iotstack_backup_postgres
│   ├── iotstack_backup_quota_report
│   ├── iotstack_backup_wordpress
│   ├── iotstack_migration_backup
│   ├── iotstack_migration_restore
│   ├── iotstack_reload_influxdb
│   ├── iotstack_reload_influxdb2
│   ├── iotstack_restore
│   ├── iotstack_restore_general
│   ├── iotstack_restore_gitea
│   ├── iotstack_restore_influxdb
│   ├── iotstack_restore_influxdb2
│   ├── iotstack_restore_mariadb
│   ├── iotstack_restore_nextcloud
│   ├── iotstack_restore_postgres
│   ├── iotstack_restore_wordpress
│   ├── show_iotstackbackup_configuration
│   └── snapshot_raspbian_system
└── ssh-tutorial.md
/LICENSE.md:
--------------------------------------------------------------------------------
1 | The MIT License (MIT)
2 |
3 | Copyright © 2020 Phill Kelley
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
6 |
7 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
8 |
9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
10 |
--------------------------------------------------------------------------------
/configuration-templates/RCLONE/config.yml:
--------------------------------------------------------------------------------
1 | backup:
2 |   method: "RCLONE"
3 |   prefix: "remote:path/to/backups"
4 |   retain: 8
5 |
6 | restore:
7 |   method: "RCLONE"
8 |   prefix: "remote:path/to/backups"
9 |
--------------------------------------------------------------------------------
/configuration-templates/RSYNC/config.yml:
--------------------------------------------------------------------------------
1 | backup:
2 |   method: "RSYNC"
3 |   prefix: "user@host.domain.com:path/to/backups"
4 |   retain: 8
5 |
6 | restore:
7 |   method: "RSYNC"
8 |   # options: "-O"
9 |   prefix: "user@host.domain.com:path/to/backups"
10 |
--------------------------------------------------------------------------------
/configuration-templates/SCP/config.yml:
--------------------------------------------------------------------------------
1 | backup:
2 |   method: "SCP"
3 |   # options: "-O"
4 |   prefix: "user@host.domain.com:path/to/backups"
5 |   retain: 8
6 |
7 | restore:
8 |   method: "SCP"
9 |   # options: "-O"
10 |   prefix: "user@host.domain.com:path/to/backups"
11 |
--------------------------------------------------------------------------------
/configuration-templates/install_template.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # should not run as root
4 | [ "$EUID" -eq 0 ] && echo "This script should NOT be run using sudo" && exit 1
5 |
6 | # exactly one argument
7 | if [ "$#" -ne 1 ]; then
8 |    echo "Usage: $(basename "$0") template"
9 |    exit 1
10 | fi
11 |
12 | TEMPLATES_DIR=$(realpath "$(dirname "$0")")
13 | TEMPLATE=${1^^}
14 | CONFIG_DIR="$HOME/.config/iotstack_backup"
15 | CONFIG_YML="config.yml"
16 |
17 | # ensure the configuration directory exists
18 | mkdir -p "$CONFIG_DIR"
19 | TILDED_DIR=$(cd "$CONFIG_DIR" && dirs +0)
20 |
21 | # does a configuration file exist already?
22 | if [ ! -e "$CONFIG_DIR/$CONFIG_YML" ] ; then
23 |
24 |    # no! does the requested template exist?
25 |    if [ -e "$TEMPLATES_DIR/$TEMPLATE/$CONFIG_YML" ] ; then
26 |
27 |       # yes! copy the template file into place
28 |       cp -a "$TEMPLATES_DIR/$TEMPLATE/$CONFIG_YML" "$CONFIG_DIR/$CONFIG_YML"
29 |       echo "$TEMPLATE template copied to $TILDED_DIR/$CONFIG_YML"
30 |
31 |    else
32 |
33 |       echo "Skipped: ./$TEMPLATE does not exist or does not contain $CONFIG_YML"
34 |
35 |    fi
36 |
37 | else
38 |
39 |    echo "Skipped: a configuration file already exists. This is a safety"
40 |    echo "         feature to avoid accidentally overwriting a working"
41 |    echo "         configuration. If you really want to install a new"
42 |    echo "         template:"
43 |    echo "         1. rm $TILDED_DIR/$CONFIG_YML"
44 |    echo "         2. re-run this command"
45 |
46 | fi
47 |
--------------------------------------------------------------------------------
/generating-gpg-keys.md:
--------------------------------------------------------------------------------
1 | # Tutorial: Generating GnuPG Keys
2 |
3 | Snapshot encryption is completely optional. You do not have to encrypt your snapshots unless you want to.
4 |
5 | This guide will help you generate a GnuPG key-pair so you can encrypt your snapshots. Generation is done in a non-destructive manner. It will **not** affect any existing keychains you might have. The end product of this guide is ASCII text files containing matching public and private keys. The keys will only go into effect when you import them on the systems where you need to use them.
6 |
7 | You don't need to create encryption keys just for IOTstackBackup. If you already have one or more GnuPG key-pairs with encryption/decryption capabilities, feel free to re-use those keys. The reverse is also true. Just because you created a key-pair for encrypting snapshots doesn't mean you can't re-use those keys for encrypting other files.
8 |
9 | There are many guides about this topic on the web so, if you prefer to follow a different guide, you can do that. One example is the excellent [Dr Duh](https://github.com/drduh/YubiKey-Guide) guide which will walk you through the process of creating GnuPG keys on a YubiKey.
10 |
11 | > Acknowledgement: many ideas are drawn from the [Dr Duh](https://github.com/drduh/YubiKey-Guide) guide.
12 |
13 | ## Contents
14 |
15 | - [Prerequisites](#prerequisites)
16 | - [Generating a key-pair](#keyPairGeneration)
17 |
18 | - [Step 1 – Setup](#keyPairGenSetup)
19 | - [Step 2 – Key generation](#keyPairKeyGen)
20 | - [Step 3 – Key export](#keyPairGenExport)
21 | - [Step 4 – Tear-down](#keyPairGenTeardown)
22 |
23 | - [Using your public key](#usePublic)
24 |
25 | - [Installing your public key](#installPublic)
26 | - [Trusting your public key](#trustPublic)
27 | - [Your public key in action](#actionPublic)
28 |
29 | - [Using your private key](#usePrivate)
30 |
31 | - [Installing your private key](#installPrivate)
32 | - [Trusting your keys](#trustPrivate)
33 | - [Your private key in action](#actionPrivate)
34 |
35 |
36 | ## Prerequisites
37 |
38 | This guide assumes:
39 |
40 | 1. You are working on a Raspberry Pi (or Debian system).
41 | 2. You have installed the `gnupg2` package and any necessary supporting tools.
42 |
43 | If you used [PiBuilder](https://github.com/Paraphraser/PiBuilder) to build your operating system:
44 |
45 | - All required components are already installed; and
46 | - Your Pi is also *"[Dr Duh](https://github.com/drduh/YubiKey-Guide) ready".*
47 |
48 | The [Dr Duh](https://github.com/drduh/YubiKey-Guide) guide explains how to install GnuPG and related tools on other platforms such as macOS and Windows.
49 |
50 |
51 | ## Generating a key-pair
52 |
53 |
54 | ### Step 1 – Setup
55 |
56 | Execute the following commands on your Raspberry Pi:
57 |
58 | ```
59 | $ RAMDISK="/run/user/$(id -u)/ramdisk" ; mkdir -p "$RAMDISK"
60 | $ sudo mount -t tmpfs -o size=128M myramdisk "$RAMDISK"
61 | $ sudo chown $USER:$USER "$RAMDISK"; chmod 700 "$RAMDISK"
62 | $ export GNUPGHOME=$(mktemp -d -p "$RAMDISK"); echo "GNUPGHOME = $GNUPGHOME"
63 | $ cd "$GNUPGHOME"
64 | ```
65 |
66 | In words:
67 |
68 | 1. Construct a mount-point for a RAM disk.
69 | 2. Create and mount a 128MB RAM disk at that mount-point.
70 | 3. Assign RAM disk ownership and permissions to the current user.
71 | 4. Make a temporary working directory for GnuPG operations in the RAM disk. This disconnects the current login session from the default of `~/.gnupg` and means all operations will fail safe.
72 | 5. Make the temporary working directory the current working directory.
73 |
74 | By its very nature, a RAM disk is ephemeral. It will disappear as soon as you reboot your Pi or complete the [tear-down](#keyPairGenTeardown) steps. It is also an isolated environment which is inherently forgiving of any mistakes. You can experiment freely without worrying about breaking anything.
75 |
76 |
77 | ### Step 2 – Key generation
78 |
79 | Although it isn't mandatory, you should always protect any private key with a passphrase. You can use any scheme you like but one good approach is to let GnuPG generate some random gibberish for you:
80 |
81 | ```
82 | $ PASSPHRASE=$(gpg --gen-random --armor 0 24)
83 | $ echo $PASSPHRASE
84 | ```
85 |
86 | You will need to provide your passphrase each time you need to either manipulate your private key or decrypt a file that is encrypted with the corresponding public key. If you forget your passphrase, your private key will be lost, your public key useless, and you will not be able to decrypt your files.
87 |
88 | > If you follow the [Dr Duh](https://github.com/drduh/YubiKey-Guide) guide and your private keys are stored on a YubiKey, you will need to enter the key's PIN and then touch the key to approve each decryption operation.
89 |
90 | Once you have decided on your passphrase, use the following as a template:
91 |
92 | ```
93 | gpg --batch --quick-generate-key "«given» «last» («comment») <«email»>" rsa4096 encrypt never
94 | ```
95 |
96 | Replace:
97 |
98 | * `«given»` and `«last»` with your name
99 | * `«comment»` with something reflecting the purpose of the key-pair; and
100 | * `«email»` with an RFC821 "mailbox@domain" email address.
101 |
102 | None of the fields needs to be truthful. The email address does not have to exist. Example:
103 |
104 | ```
105 | $ gpg --batch --quick-generate-key "Roger Rabbit (Raspberry Pi Backups) " rsa4096 encrypt never
106 | ```
107 |
108 | You will be prompted for your passphrase. If you decide you do **not** want to use a passphrase, respond to the prompt by pressing tab to select `<OK>`, then press return, and then confirm your decision by pressing return again. Without the protection of a passphrase, anyone with access to your private key will be able to decrypt any file encrypted with the corresponding public key.
109 |
110 | There will be a small delay and you will see a result like the following:
111 |
112 | ```
113 | gpg: keybox '/run/user/1000/ramdisk/tmp.iqcUsUYgRU/pubring.kbx' created
114 | gpg: /run/user/1000/ramdisk/tmp.iqcUsUYgRU/trustdb.gpg: trustdb created
115 | gpg: key 88F86CF116522378 marked as ultimately trusted
116 | gpg: directory '/run/user/1000/ramdisk/tmp.iqcUsUYgRU/openpgp-revocs.d' created
117 | gpg: revocation certificate stored as '/run/user/1000/ramdisk/tmp.iqcUsUYgRU/openpgp-revocs.d/CE90947C208A2B994B1ED48988F86CF116522378.rev'
118 | ```
119 |
120 | In the above output, the 3rd line is:
121 |
122 | ```
123 | gpg: key 88F86CF116522378 marked as ultimately trusted
124 | ```
125 |
126 | The string `88F86CF116522378` is the keyID of this key-pair. Find the same line in your output and associate your keyID with the `GPGKEYID` environment variable, like this:
127 |
128 | ```
129 | $ export GPGKEYID=88F86CF116522378
130 | ```
131 |
132 | Make a note of this command because you will need both it and the keyID again.
133 |
134 |
135 | ### Step 3 – Key export
136 |
137 | At this point, your newly-generated keys are stored in a keychain in the RAM disk. You need to export them:
138 |
139 | ```
140 | $ gpg --export-secret-keys --armor "$GPGKEYID" >"$HOME/$GPGKEYID.gpg.private.key.asc"
141 | $ gpg --export --armor "$GPGKEYID" >"$HOME/$GPGKEYID.gpg.public.key.asc"
142 | ```
143 |
144 | > If you set a passphrase, you will have to supply it for the first command.
145 |
146 | The files exported to your home directory are just plain ASCII text. You can list their contents like this:
147 |
148 | ```
149 | $ cat "$HOME/$GPGKEYID.gpg.private.key.asc"
150 | $ cat "$HOME/$GPGKEYID.gpg.public.key.asc"
151 | ```
152 |
153 |
154 | ### Step 4 – Tear-down
155 |
156 | Run the following commands:
157 |
158 | ```
159 | $ cd
160 | $ sudo umount "$RAMDISK"
161 | $ rmdir "$RAMDISK"
162 | $ unset GNUPGHOME
163 | ```
164 |
165 | In words:
166 |
167 | 1. Move to your home directory.
168 | 2. Un-mount the RAM disk.
169 | 3. Delete the mount-point for the RAM disk.
170 | 4. Clear the GNUPGHOME variable, which means the default of `~/.gnupg` is now back in effect.
171 |
172 | Thus far, other than the ASCII text files containing your exported keys, no command has had a persistent effect. If you are unsure of any decisions you have made (eg whether or not to use a passphrase), you can simply start over from [Setup](#keyPairGenSetup).
173 |
174 |
175 | ## Using your public key
176 |
177 | Your *public* key is used for encryption. That means it needs to be installed on every machine where you may wish to encrypt a snapshot.
178 |
179 |
180 | ### Installing your public key
181 |
182 | Replace `88F86CF116522378` with your own keyID then execute the command:
183 |
184 | ```
185 | $ export GPGKEYID=88F86CF116522378
186 | ```
187 |
188 | The following command assumes the file containing your public key is in your working directory (ie a simple `ls` command will show it):
189 |
190 | ```
191 | $ gpg --import "$GPGKEYID.gpg.public.key.asc"
192 | gpg: key 88F86CF116522378: public key "Roger Rabbit (Raspberry Pi Backups) " imported
193 | gpg: Total number processed: 1
194 | gpg: imported: 1
195 | ```
196 |
197 |
198 | ### Trusting your public key
199 |
200 | If you list your public keys, you will see that the key is untrusted:
201 |
202 | ```
203 | $ gpg -k
204 | /home/pi/.gnupg/pubring.kbx
205 | ---------------------------
206 | pub rsa4096 2023-01-03 [CE]
207 | CE90947C208A2B994B1ED48988F86CF116522378
208 | uid [ unknown] Roger Rabbit (Raspberry Pi Backups)
209 | ```
210 |
211 | The "unknown" indicates that the key is not trusted. You should fix that by running the following commands:
212 |
213 | ```
214 | $ gpg --edit-key "$GPGKEYID"
215 | gpg> trust
216 | Your decision? 5
217 | Do you really want to set this key to ultimate trust? (y/N) y
218 | gpg> quit
219 | $
220 | ```
221 |
222 | You can confirm that the key is trusted by listing your public keys again:
223 |
224 | ```
225 | $ gpg -k
226 | /home/pi/.gnupg/pubring.kbx
227 | ---------------------------
228 | pub rsa4096 2023-01-03 [CE]
229 | CE90947C208A2B994B1ED48988F86CF116522378
230 | uid [ultimate] Roger Rabbit (Raspberry Pi Backups)
231 | ```
232 |
233 | If you do not mark your public key as trusted, `gpg` will ask for permission to use the public key each time you attempt to use it to encrypt a file.
234 |
235 |
236 | ### Your public key in action
237 |
238 | Create a test file. Something like this:
239 |
240 | ```
241 | $ ls -l >plaintext.txt
242 | ```
243 |
244 | To encrypt that file:
245 |
246 | ```
247 | $ gpg --recipient "$GPGKEYID" --output "plaintext.txt.gpg" --encrypt "plaintext.txt"
248 | ```
249 |
250 |
251 | ## Using your private key
252 |
253 | Your *private* key is used for decryption. That means it only needs to be installed on machines where you intend to decrypt snapshots.
254 |
255 |
256 | ### Installing your private key
257 |
258 | You will need to copy the ASCII file containing your private key onto any machine where you intend to decrypt your backups. How you do that is up to you (eg USB drive, scp, sshfs).
259 |
260 | Anyone with possession of your private key and your passphrase (assuming you set one) will be able to decrypt your backups. You should keep this in mind as you decide how to copy the ASCII file containing your private key from machine to machine.
261 |
262 | Similarly, if you lose your private key and don't have a backup of the ASCII file containing your private key from which you can re-import the private key, you will lose access to your backups.
263 |
264 | Replace `88F86CF116522378` with your own keyID then execute the command:
265 |
266 | ```
267 | $ export GPGKEYID=88F86CF116522378
268 | ```
269 |
270 | The following command assumes the file containing your private key is in your working directory (ie a simple `ls` command will show it):
271 |
272 | ```
273 | $ gpg --import "$GPGKEYID.gpg.private.key.asc"
274 | gpg: key 0x88F86CF116522378: public key "Roger Rabbit (Raspberry Pi Backups) " imported
275 | Please enter the passphrase to import the OpenPGP secret key:
276 | "Roger Rabbit (Raspberry Pi Backups) "
277 | 4096-bit RSA key, ID 0x88F86CF116522378,
278 | created 2023-01-03.
279 |
280 | Passphrase:
281 | gpg: key 0x88F86CF116522378: secret key imported
282 | gpg: Total number processed: 1
283 | gpg: imported: 1
284 | gpg: secret keys read: 1
285 | gpg: secret keys imported: 1
286 | ```
287 |
288 | Your private key also contains your public key so this process imports both keys. To list your public key, use the lower-case `-k` option (which is a synonym for the `--list-keys` option):
289 |
290 | ```
291 | $ gpg -k
292 | pub rsa4096/0x88F86CF116522378 2023-01-03 [CE]
293 | Key fingerprint = CE90 947C 208A 2B99 4B1E D489 88F8 6CF1 1652 2378
294 | uid [ unknown] Roger Rabbit (Raspberry Pi Backups)
295 | ```
296 |
297 | To list your private key, use the upper-case `-K` option (which is a synonym for the `--list-secret-keys` option):
298 |
299 | ```
300 | $ gpg -K
301 | sec rsa4096/0x88F86CF116522378 2023-01-03 [CE]
302 | Key fingerprint = CE90 947C 208A 2B99 4B1E D489 88F8 6CF1 1652 2378
303 | uid [ unknown] Roger Rabbit (Raspberry Pi Backups)
304 | ```
305 |
306 | The only difference between the two is the "pub" vs "sec" in the first line.
307 |
308 |
309 | ### Trusting your keys
310 |
311 | In the output above, you may have noted the "unknown" trust status for the keys in your key-pair. You can fix that in the same way as you did when you imported your public key:
312 |
313 | ```
314 | $ gpg --edit-key "$GPGKEYID"
315 | gpg> trust
316 | Your decision? 5
317 | Do you really want to set this key to ultimate trust? (y/N) y
318 | gpg> quit
319 | $
320 | ```
321 |
322 |
323 | ### Your private key in action
324 |
325 | Assuming the file you encrypted earlier is available on the system where you installed your private key, you can decrypt that file like this:
326 |
327 | ```
328 | $ gpg --output "restored.txt" --decrypt "plaintext.txt.gpg"
329 |
330 | gpg: encrypted with rsa4096 key, ID 0x88F86CF116522378, created 2023-01-03
331 | "Roger Rabbit (Raspberry Pi Backups) "
332 | Please enter the passphrase to unlock the OpenPGP secret key:
333 | "Roger Rabbit (Raspberry Pi Backups) "
334 | 4096-bit RSA key, ID 0x88F86CF116522378,
335 | created 2023-01-03.
336 |
337 | Passphrase:
338 | ```
339 |
340 | Notes:
341 |
342 | 1. If you set a passphrase, you will have to supply it at the prompt.
343 | 2. The GPGKEYID environment variable does not have to be set for decryption (`gpg` figures that out for itself).
344 |
345 | Assuming successful decryption, the result will be the file `restored.txt`, which you can compare with the original `plaintext.txt` to assure yourself that the round-trip has maintained fidelity.
346 |
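If you want more than a visual check, `cmp` compares the two files byte-for-byte. This is a sketch assuming `plaintext.txt` and `restored.txt` from the steps above are both in the working directory:

```
# cmp is silent and exits zero when the two files are identical
if cmp -s plaintext.txt restored.txt ; then
   echo "round-trip OK"
else
   echo "files differ"
fi
```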
347 | Whenever you are decrypting a file, always think about where to store the result. Suppose you are working on a macOS or Windows system where the encrypted snapshots are stored in your Dropbox scope. If you decrypt into the same directory, Dropbox will sync the decrypted file so you will have lost the protection afforded by encryption. It would be more appropriate to either move (or copy) the encrypted snapshot out of the Dropbox scope or adjust the `--output` path to point to a destination outside of the Dropbox scope.
348 |
--------------------------------------------------------------------------------
/influxdb-post-processing.md:
--------------------------------------------------------------------------------
1 | # Tutorial: InfluxDB 1.8 post-processing
2 |
3 | User credentials and access rights do not survive a round trip through backup and restore.
4 | This is a [known issue](https://github.com/influxdata/influxdb/issues/21494) and the lack of activity suggests it is unlikely to be fixed.
5 |
6 | Technically, the same statement applies to continuous queries. However, IOTstackBackup provides a solution for CQs.
7 |
8 | ## background
9 |
10 | What continuous queries, user credentials and access rights have in common is that they are stored in:
11 |
12 | ```
13 | ~/IOTstack/volumes/influxdb/data/meta/meta.db
14 | ```
15 |
16 | The `meta.db` file *is* included in the portable backup generated by `influxd` and it *is* faithfully archived and restored by IOTstackBackup. But, for some unknown reason, `influxd` does not reconstitute those properties from the portable backup. This means it is reasonably likely that *all* properties which are stored in `meta.db` will suffer a similar fate.
17 |
18 | ## workarounds
19 |
20 | IOTstackBackup provides two distinct workarounds for this problem:
21 |
22 | 1. Automatic extraction and restoration of continuous queries; and
23 | 2. A manual workaround in the form of an optional epilog file where you can define users and grants.
24 |
25 | The workarounds are supported in both:
26 |
27 | * `iotstack_restore_influxdb`; and
28 | * `iotstack_reload_influxdb`.
29 |
30 | To take advantage of the optional epilog you need to create a text file at the following path:
31 |
32 | ```
33 | ~/IOTstack/services/influxdb/iotstack_restore_influxdb.epilog
34 | ```
35 |
36 | The *epilog file* is expected to contain InfluxQL commands. There is no mechanism for automating its construction. You have to set it up yourself, by hand, and take responsibility for its validity.
37 |
38 | If the *epilog file* exists when `iotstack_restore_influxdb` runs, its contents will be passed to the container for execution. This occurs after the databases have been restored.
39 |
40 | You can test your epilog file like this:
41 |
42 | ``` console
43 | $ cd ~/IOTstack/services/influxdb
44 | $ docker cp iotstack_restore_influxdb.epilog influxdb:.
45 | $ docker exec influxdb influx -import -path "iotstack_restore_influxdb.epilog"
46 | ```
47 |
48 | The epilog file is kept in the `services` directory so it is included in the general backup and restore. Because the general restore runs before the database restores, the epilog file will already be in place when the InfluxDB restore runs.
49 |
50 | ## practical example
51 |
52 | Make the following assumptions:
53 |
54 | 1. You are working in the influx CLI:
55 |
56 | ``` console
57 | $ influx
58 | >
59 | ```
60 |
61 | 2. You define the following users:
62 |
63 | ``` sql
64 | > CREATE USER "dba" WITH PASSWORD 'supremo' WITH ALL PRIVILEGES
65 | > CREATE USER "nodered" WITH PASSWORD 'nodereds_big_secret'
66 | > CREATE USER "grafana" WITH PASSWORD 'grafanas_little_secret'
67 | ```
68 |
69 | 3. You have a `power` database to which you grant the following access rights:
70 |
71 | ``` sql
72 | > GRANT WRITE ON "power" TO "nodered"
73 | > GRANT READ ON "power" TO "grafana"
74 | ```
75 |
76 | The content of `iotstack_restore_influxdb.epilog` would simply be those same commands:
77 |
78 | ``` sql
79 | CREATE USER "dba" WITH PASSWORD 'supremo' WITH ALL PRIVILEGES
80 | CREATE USER "nodered" WITH PASSWORD 'nodereds_big_secret'
81 | CREATE USER "grafana" WITH PASSWORD 'grafanas_little_secret'
82 | GRANT WRITE ON "power" TO "nodered"
83 | GRANT READ ON "power" TO "grafana"
84 | ```
85 |
86 | Note:
87 |
88 | * InfluxQL follows SQL conventions on comments. The following are valid comments:
89 |
90 | ``` sql
91 | -- this is a valid comment
92 | /* this is also a valid comment but it must be on one line */
93 | ```
94 |
95 | These comments are invalid:
96 |
97 | ``` sql
98 | # this will cause influx to complain
99 | // this will also cause influx to complain
100 | /* influx does
101 | not like C-style
102 | multiline comments
103 | */
104 | ```
105 |
106 | ## command reconstruction
107 |
108 | If you want to take advantage of the epilog workaround but you do not have a record of the necessary commands, you can usually reconstruct them by hand.
109 |
110 | ### users
111 |
112 | Usernames can be retrieved via:
113 |
114 | ``` sql
115 | > SHOW USERS
116 | user admin
117 | ---- -----
118 | dba true
119 | nodered false
120 | grafana false
121 | ```
122 |
123 | Passwords are stored as hashes so there is no way to recover and display the plain-text versions.
124 |
125 | Templates:
126 |
127 | * admin user:
128 |
129 | ``` sql
130 | CREATE USER "«user»" WITH PASSWORD '«password»' WITH ALL PRIVILEGES
131 | ```
132 |
133 | * non-admin user:
134 |
135 | ``` sql
136 | CREATE USER "«user»" WITH PASSWORD '«password»'
137 | ```
138 |
139 | Note:
140 |
141 | * The use of both single- and double-quotes is intentional. Usernames must be wrapped in double-quotes and passwords must be wrapped in single quotes.
142 |
143 | ### grants
144 |
145 | Grants can be retrieved by iterating the non-admin users:
146 |
147 | ``` sql
148 | > SHOW GRANTS FOR nodered
149 | database privilege
150 | -------- ---------
151 | power WRITE
152 |
153 | > SHOW GRANTS FOR grafana
154 | database privilege
155 | -------- ---------
156 | power READ
157 | ```
158 |
159 | Template:
160 |
161 | ``` sql
162 | GRANT «privilege» ON "«database»" TO "«user»"
163 | ```
164 |
165 | ### continuous queries
166 |
167 | The `iotstack_backup_influxdb` and `iotstack_reload_influxdb` scripts extract your continuous queries and save them into the file `continuous-queries.influxql`, along with the other files InfluxDB creates when asked to produce a portable backup.
168 |
169 | That file is then included in the `.tar` produced by `iotstack_backup_influxdb`, from which it is extracted when `iotstack_restore_influxdb` runs.
170 |
171 | Note:
172 |
173 | * If you have been using the epilog to reload your continuous queries, InfluxDB will not object when the epilog (implicitly) re-creates queries that have already been restored automatically.
174 |
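If you want to see what the extraction will capture, you can list your continuous queries by hand. This is a sketch; it assumes your InfluxDB 1.8 container is named `influxdb`, as it is in a standard IOTstack installation:

``` console
$ docker exec influxdb influx -execute 'SHOW CONTINUOUS QUERIES'
```
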
--------------------------------------------------------------------------------
/install_scripts.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | SCRIPTS=$(realpath "$(dirname "$0")")/scripts
4 |
5 | # does a "scripts" folder exist in this directory?
6 | if [ -d "$SCRIPTS" ] ; then
7 |
8 |    # the winner by default is
9 |    WINNER="$HOME/.local/bin"
10 |
11 |    # does the winner exist?
12 |    if [ ! -d "$WINNER" ] ; then
13 |
14 |       # check the search path
15 |       for CANDIDATE in ${PATH//:/ }; do
16 |          if [[ $CANDIDATE == $HOME* ]] ; then
17 |             WINNER="$CANDIDATE"
18 |             break
19 |          fi
20 |       done
21 |
22 |    fi
23 |
24 |    # make sure the winner exists
25 |    mkdir -p "$WINNER"
26 |
27 |    # copy executables into place
28 |    cp -av "$SCRIPTS"/* "$WINNER"
29 |
30 | else
31 |
32 |    echo "Skipped: $SCRIPTS not found in $PWD"
33 |
34 | fi
35 |
36 | # check apt-installable dependencies
37 | for D in curl jq wget ; do
38 |    if [ -z "$(which "$D")" ] ; then
39 |       echo ""
40 |       echo "=========================================================================="
41 |       echo "IOTstackBackup depends on \"$D\" which does not seem to be installed on your"
42 |       echo "system. Please run the following command:"
43 |       echo ""
44 |       echo "   sudo apt install $D"
45 |       echo ""
46 |       echo "=========================================================================="
47 |    fi
48 | done
49 |
50 | # check if mosquitto_pub is available
51 | if [ -z "$(which mosquitto_pub)" ] ; then
52 |
53 |    echo ""
54 |    echo "=========================================================================="
55 |    echo "IOTstackBackup depends on \"mosquitto_pub\" which does not seem to be installed"
56 |    echo "on your system. Please run the following command:"
57 |    echo ""
58 |    echo "   sudo apt install mosquitto-clients"
59 |    echo ""
60 |    echo "=========================================================================="
61 |
62 | fi
63 |
64 | # check if shyaml is installed
65 | if [ -z "$(which shyaml)" ] ; then
66 |
67 |    echo ""
68 |    echo "=========================================================================="
69 |    echo "IOTstackBackup depends on \"shyaml\" which does not seem to be installed on your"
70 |    echo "system. On Bullseye or earlier systems, please run:"
71 |    echo ""
72 |    echo "   pip3 install -U shyaml"
73 |    echo ""
74 |    echo "On Bookworm systems, run:"
75 |    echo ""
76 |    echo "   pip3 install -U --break-system-packages shyaml"
77 |    echo "=========================================================================="
78 |
79 | fi
80 |
81 | # check for rclone
82 | if [ -z "$(which rclone)" ] ; then
83 |
84 |    echo ""
85 |    echo "=========================================================================="
86 |    echo "If you intend to use RCLONE with IOTstackBackup, please note that it is not"
87 |    echo "installed on your system. Please run the following command:"
88 |    echo ""
89 |    echo "   curl https://rclone.org/install.sh | sudo bash"
90 |    echo ""
91 |    echo "=========================================================================="
92 |
93 | fi
94 |
--------------------------------------------------------------------------------
/monitoring-storage-quotas.md:
--------------------------------------------------------------------------------
1 | # Tutorial: Monitoring Storage Quotas
2 |
3 | Running out of disk space on either your Raspberry Pi or your remote backup system (eg Dropbox) can prevent your backups from completing properly.
4 |
5 | IOTstackBackup includes a convenience script which uses RCLONE to monitor the storage utilisation on both your Raspberry Pi and your RCLONE remote[†](#supportMatrix).
6 |
7 | Key point:
8 |
9 | * This convenience script is specific to RCLONE. It does not work with either RSYNC or SCP.
10 |
11 | ## installation
12 |
13 | The `iotstack_backup_quota_report` script is contained in the `scripts` folder and will be installed by `install_scripts.sh`.
14 |
15 | ## dependencies
16 |
17 | The script relies on both `mosquitto_pub` and `rclone`. If those are not installed on your system, you will need to run either/both:
18 |
19 | ``` console
20 | $ sudo apt install mosquitto-clients
21 | $ curl https://rclone.org/install.sh | sudo bash
22 | ```
23 |
24 | You may also need to follow the [*rclone* (Dropbox)](./README.md#rcloneOption) setup instructions to configure RCLONE.
25 |
26 | ## calling sequence
27 |
28 | The script takes three optional arguments:
29 |
30 | 1. The name of your RCLONE remote. The default for this argument is "dropbox".
31 |
32 | This argument must be a name that appears when you ask RCLONE to list your remotes. For example:
33 |
34 | ``` console
35 | $ rclone listremotes
36 | dropbox:
37 | ```
38 |
39 | † RCLONE supports many kinds of remote. However, not every remote supports the "about" command, which is used to determine space utilisation on the remote. There is a definitive list at [rclone.org](https://rclone.org/overview/#optional-features): see the "About" column.
40 |
41 | You can also test whether your remote supports the feature like this:
42 |
43 | ``` console
44 | $ rclone about dropbox:
45 | Total: 2.005 TiB
46 | Used: 122.058 GiB
47 | Free: 1.885 TiB
48 | ```
49 |
50 | Note that the trailing colon on the remote name is:
51 |
52 | * *required* when you use the `rclone about` command; but
53 | * *omitted* when you invoke `iotstack_backup_quota_report`.
54 |
55 | 2. The MQTT topic prefix. The default is "/quota". The prefix is prepended to each system name to form the actual topic strings. For example:
56 |
57 | - `/quota/dropbox`
58 | - `/quota/iot-hub`
59 |
60 | In the first case, "dropbox" is the name of your RCLONE remote. In the second case, "iot-hub" is the name of the Raspberry Pi where the script is running.
61 |
62 | 3. The domain name or IP address of the host running your MQTT broker. Defaults to "127.0.0.1" on the assumption that Mosquitto is running in a local container.
63 | 4. The port number for your MQTT broker. Defaults to 1883.
64 |
65 | ## command line example
66 |
67 | This example uses all arguments:
68 |
69 | ``` console
70 | $ iotstack_backup_quota_report dropbox /home/quota mqtt.mydomain.com 1883
71 | ```
72 |
73 | The resulting MQTT messages will have the following structure:
74 |
75 | ```
76 | /home/quota/dropbox { "total": 2204123529216, "used": 141289520349, "free": 2062834008867 }
77 | /home/quota/iot-hub { "total": 472305868800, "used": 16434315264, "free": 436624248832 }
78 | ```
79 |
80 | The values are in bytes. Interpretation:
81 |
82 | * The total account space on Dropbox is 2TB, of which 1.8TB remains unused.
83 | * The storage space on the Raspberry Pi system (an SSD) is 440GB, of which 406GB remains unused.
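
The arithmetic behind that interpretation can be checked at the command line. The following sketch takes the Dropbox payload from the example above and computes percent utilisation; a real consumer (such as the Node-RED flow below) would parse the JSON properly, so the `sed` field extraction is purely illustrative:

``` console
# payload copied from the Dropbox example above
payload='{ "total": 2204123529216, "used": 141289520349, "free": 2062834008867 }'

# extract the "used" and "total" fields (illustrative only)
used=$(echo "$payload" | sed -E 's/.*"used": ([0-9]+).*/\1/')
total=$(echo "$payload" | sed -E 's/.*"total": ([0-9]+).*/\1/')

# compute percent utilisation
awk -v u="$used" -v t="$total" 'BEGIN { printf "%.1f%% used\n", 100 * u / t }'
# → 6.4% used
```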
84 |
85 | ## cron job example
86 |
87 | The following `crontab` entry would run the above script at 6AM each day:
88 |
89 | ```
90 | 0 6 * * * iotstack_backup_quota_report dropbox /home/quota mqtt.mydomain.com 1883
91 | ```
92 |
93 | When invoked by `cron`, the script writes its output to:
94 |
95 | ```
96 | ~/Logs/iotstack_backup_quota_report.log
97 | ```
98 |
99 | The log is normally empty but is the place to start looking if the `cron` job stops working.
100 |
101 | ## Node-RED flow example
102 |
103 | 
104 |
105 | Example code matching that flow can be found [here](./resources/quota-flow-example.json).
106 |
107 | The *MQTT-In* node subscribes to the wildcard topic:
108 |
109 | ```
110 | /home/quota/+
111 | ```
112 |
113 | Continuing with the previous example, the flow would receive two messages:
114 |
115 | - `/home/quota/dropbox`
116 | - `/home/quota/iot-hub`
117 |
118 | The *Change* node extracts the system name (ie `dropbox` or `iot-hub`) from the last component of the topic string and stores it in `msg.system`.
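
For comparison, the same "last component of the topic string" extraction can be expressed as a shell parameter expansion (the topic value is taken from the previous example):

``` console
# hypothetical topic string from the previous example
topic="/home/quota/dropbox"

# strip everything up to and including the last "/"
echo "${topic##*/}"
# → dropbox
```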
119 |
120 | The *Function* node is a bare-bones implementation which checks for utilisation over a threshold value (90%) and constructs a very basic email message.
121 |
122 | Where utilisation exceeds the threshold, the *Function* node exits via the upper outlet; otherwise, via the lower outlet.
123 |
124 | > Both outlets are wired to the email notification in the example but you would typically only use that during testing. In normal use, disconnecting the lower outlet would result in an email only being sent when disk utilisation was above the threshold.
125 |
126 | Fairly obviously, you can change this flow to send other kinds of notification (eg PushSafer) and/or log the metrics to an Influx database which you subsequently chart in Grafana.
127 |
128 |
--------------------------------------------------------------------------------
/raspbian-system-snapshots.md:
--------------------------------------------------------------------------------
1 | # Taking a snapshot of your Raspberry Pi system
2 |
3 | Creating a full backup of a Raspberry Pi is a bit of a challenge. Whichever approach you use can take a lot of time and a lot of storage space. It also isn't really practical on a "live" system because of the way files are changing constantly. In general, the only way to be absolutely certain that you have a *coherent* backup is to take your active storage media (SD or SSD) offline and use an imaging tool. That means you're without your Pi's services for the duration of the backup process.
4 |
5 | Raspberry Pi failures can be categorised into those that affect the Pi itself, and those that affect your storage media. If your Pi has emitted magic smoke but the media is otherwise undamaged then, most of the time, you can move the media to a new Pi (if you can get one). You might need to tweak your DHCP configuration to use the new MAC address(es) but that's about it.
6 |
7 | If your media has gone splat and you happen to have a reasonably up-to-date image, you can probably get away with restoring that image to the same or new media but you also take the risk that you are restoring the very conditions that led to the media failure in the first place. You are usually better advised to cut your losses and start with a clean image of Raspberry Pi OS.
8 |
9 | If you use [PiBuilder](https://github.com/Paraphraser/PiBuilder) to build your Raspberry Pi OS, it will clone IOTstack plus install docker and docker-compose along with all their dependencies and useful packages. If you have been sufficiently diligent about maintaining your own clone of PiBuilder, it can make a fair fist of rebuilding your Pi "as it was" with customised configuration files in `/boot`, `/etc` and so on.
10 |
11 | If you use `iotstack_backup` then recovering your IOTstack can be as simple as:
12 |
13 | ```
14 | $ iotstack_restore «runtag»
15 | $ cd ~/IOTstack
16 | $ docker-compose up -d
17 | ```
18 |
19 | But there will still probably be a few gaps, such as:
20 |
21 | * the content of your home directory outside of IOTstack;
22 | * the content of other user accounts you may have created;
23 | * recent configuration decisions you implemented in `/etc` which you haven't yet gotten around to adding to PiBuilder;
24 | * your crontab;
25 | * …
26 |
27 | The gaps are where the `snapshot_raspbian_system` script can help.
28 |
29 | The script follows the same minimalist "only backup things that can't be reconstructed from elsewhere" philosophy as the rest of IOTstackBackup.
30 |
31 | There is, however, no matching "restore" script. The basic idea of a *snapshot* is a *resource* from which you can cherry-pick items, evaluate each for ongoing suitability, then move them into place by hand. A snapshot will help you answer questions like:
32 |
33 | * How did I have `/etc/resolvconf.conf` set up?
34 | * What was in `~/.ssh/authorized_keys`?
35 | * I know I had Git repository X cloned from somewhere but what was the URL?
36 |
37 | ## Contents
38 |
39 | - [Installation](#installation)
40 |
41 | - [Configuration](#configuration)
42 |
43 | - [First run](#firstRun)
44 | - [Second or subsequent run](#secondRun)
45 |
46 | - [Automatic exclusions](#autoExclusions)
47 | - [Override markers](#forceOverride)
48 |
49 | - [forcing inclusion](#forceInclude)
50 | - [forcing exclusion](#forceExclude)
51 | - [override marker conflicts](#forceBoth)
52 | - [nested override markers](#forceNesting)
53 |
54 | - [Script result](#scriptOutputPlain)
55 | - [Automatic encryption with GnuPG (optional)](#scriptOutputCrypt)
56 |
57 | - [Encryption details](#cryptoDetails)
58 |
59 | - [Script argument (optional)](#scriptArg)
60 |
61 | - [Version control systems](#vcs)
62 |
63 | - [git](#vcsGit)
64 | - [subversion](#vcsSvn)
65 |
66 | - [Log examples](#logExamples)
67 |
68 | - [`~/IOTstack` (git repository)](#logGit1)
69 | - [`~/.local/IOTstackBackup` (git repository)](#logGit2)
70 | - [a subversion example](#logSvn)
71 | - [`~/.local/lib`](#logExclude)
72 |
73 | - [Tips and tricks](#tipsNtricks)
74 |
75 | - [Extracting a snapshot](#tipExtract)
76 | - [checking ownership and permissions](#tipsOwner)
77 | - [change discovery](#tipsDiff)
78 | - [symlinks - known weakness](#tipsSymlinks)
79 |
80 |
81 | ## Installation
82 |
83 | The snapshot script is part of IOTstackBackup. It will be installed along with the other scripts when you follow the steps at [download the repository](README.md#downloadRepository).
84 |
85 | > If you are an existing user of IOTstackBackup, please follow the [periodic maintenance](README.md#periodicMaintenance) instructions.
86 |
87 |
88 | ### Configuration
89 |
90 | The snapshot script piggy-backs off your IOTstackBackup configuration settings which are stored in:
91 |
92 | ```
93 | ~/.config/iotstack_backup/config.yml
94 | ```
95 |
96 | If you have not yet set up that file, please work through the [configuration file](README.md#configFile) steps.
97 |
98 | When saving snapshots on the remote system, the `snapshot_raspbian_system` script behaves a little differently from the `iotstack_backup` script. Assume the following example configurations:
99 |
100 | - either:
101 |
102 | ```
103 | backup method = SCP or RSYNC
104 | remote prefix = user@host:/path/IoT-Hub/backups
105 | ```
106 |
107 | - or:
108 |
109 | ```
110 | backup method = RCLONE
111 | remote prefix = dropbox:IoT-Hub/backups
112 | ```
113 |
114 | The string `snapshot_raspbian_system` is appended to the prefix, as in:
115 |
116 | - `user@host:/path/IoT-Hub/backups/snapshot_raspbian_system` or
117 |
118 | - `dropbox:IoT-Hub/backups/snapshot_raspbian_system`
119 |
120 | The `snapshot_raspbian_system` directory on the remote system is where snapshots are saved.
121 |
122 | Key points:
123 |
124 | * Irrespective of whether you are using SCP, RSYNC or RCLONE as your backup method, the remote copy performed by `snapshot_raspbian_system` is a *file* operation, not a *directory* operation. There is no "synchronisation" between local and remote directories.
125 | * The retain count does not apply so there is no automatic cleanup. Keep this in mind if you decide to run this script from `cron` on a daily basis.
126 |
127 |
128 | ## First run
129 |
130 | The first time you run the script it will initialise a default list of inclusions:
131 |
132 | ```
133 | $ snapshot_raspbian_system
134 | /home/pi/.config/iotstack_backup/snapshot_raspbian_system-inclusions.txt initialised from defaults:
135 | /etc
136 | /etc-baseline
137 | /var/spool/cron/crontabs
138 | /home
139 | ```
140 |
141 | Those paths are the starting points for each snapshot. Together, they capture the most common locations for customisations of your Raspberry Pi OS.
142 |
143 | You can edit `snapshot_raspbian_system-inclusions.txt` to add or remove paths (directories or files) as needed.
144 |
145 | Notes:
146 |
147 | * [PiBuilder](https://github.com/Paraphraser/PiBuilder) makes the following copies in its 01 script:
148 |
149 | - within either `/boot/firmware` (Bookworm and later) or `/boot` (Bullseye and earlier):
150 |
151 | - `config.txt` is copied as `config.txt.baseline`
152 | - `cmdline.txt` is copied as `cmdline.txt.baseline`
153 |
154 | - the entire `/etc` directory is copied as `/etc-baseline`
155 |
156 | The intention is that you will always have reference copies of files and folder structures "as shipped" with your Raspberry Pi OS image immediately after its first boot.
157 |
158 | Between them, the three baseline items should be able to help you answer the question, "what have I done to change my running system from its defaults?"
159 |
160 | The baseline items are included in each snapshot so you can still answer that question when your system is broken and you need to rebuild it.
161 |
162 | If you did not use [PiBuilder](https://github.com/Paraphraser/PiBuilder) then the baseline reference copies may not be present but their absence will not cause the snapshot script to fail.
163 |
164 | * Even though they do not appear in the [list of inclusions](#backupList), each snapshot automatically includes any files matching the following patterns:
165 |
166 | ```
167 | /boot/config.txt*
168 | /boot/cmdline.txt*
169 | /boot/firmware/config.txt*
170 | /boot/firmware/cmdline.txt*
171 | ```
172 |
173 | In other words, if you also follow the convention of maintaining `.bak` files, those will get included along with the `.baseline` files.
174 |
175 |
176 | ## Second or subsequent run
177 |
178 | On a second or subsequent run, the script will:
179 |
180 | 1. Snapshot your system using the [list of inclusions](#backupList) to guide its activities;
181 | 2. Optionally encrypt the resulting `.tar.gz`; and
182 | 3. Save the result to your configured remote.
183 |
184 | As the script is running, everything is written into a temporary directory. This avoids potential chicken-and-egg problems, such as what happens if the backup files are being written into a directory that you then add to the [list of inclusions](#backupList).
185 |
186 | At the end of the run, the snapshot is copied off the local machine using whichever of SCP, RSYNC or RCLONE is in effect. Then the temporary directory is erased. Once the script has finished, the only place the snapshot exists is the remote machine.
187 |
188 | This is an example run, with encryption enabled, where the result is transmitted to Dropbox:
189 |
190 | ```
191 | $ GPGKEYID=88F86CF116522378 snapshot_raspbian_system
192 |
193 | ----- Starting snapshot_raspbian_system at Thu 05 Jan 2023 17:33:30 AEDT -----
194 | Environment:
195 | Script marker = snapshot_raspbian_system
196 | Search paths = /home/pi/.config/iotstack_backup/snapshot_raspbian_system-inclusions.txt
197 | Exclude marker = .exclude.snapshot_raspbian_system
198 | Include marker = .include.snapshot_raspbian_system
199 | Cloud method = RCLONE
200 | Cloud reference = dropbox:IoT-Hub/backups/snapshot_raspbian_system
201 | Scanning:
202 | /etc
203 | /etc-baseline
204 | /var/spool/cron/crontabs
205 | /home
206 | Paths included in the backup:
207 | /etc
208 | /etc-baseline
209 | /var/spool/cron/crontabs
210 | /home
211 | /boot/cmdline.txt
212 | /boot/cmdline.txt.bak
213 | /boot/cmdline.txt.baseline
214 | /boot/config.txt
215 | /boot/config.txt.baseline
216 | /dev/shm/backup_annotations_uwfIL6.txt
217 | Paths excluded from the backup:
218 | /home/pi/.local/IOTstackBackup
219 | /home/pi/.local/IOTstackAliases
220 | /home/pi/PiBuilder
221 | /home/pi/IOTstack
222 | /home/pi/.local/bin
223 | /home/pi/.local/lib
224 | /home/pi/.cache
225 | Encrypting the backup using --recipient 88F86CF116522378
226 | Using rclone to copy the result off this machine
227 | 2023/01/05 17:33:45 INFO : 2023-01-05_1733.sec-dev.raspbian-snapshot.tar.gz.gpg: Copied (new)
228 | 2023/01/05 17:33:45 INFO :
229 | Transferred: 3.483 MiB / 3.483 MiB, 100%, 594.492 KiB/s, ETA 0s
230 | Transferred: 1 / 1, 100%
231 | Elapsed time: 7.8s
232 |
233 | 2023/01/05 17:33:45 INFO : Dropbox root 'IoT-Hub/backups/snapshot_raspbian_system': Committing uploads - please wait...
234 | ----- Finished snapshot_raspbian_system at Thu 05 Jan 2023 17:33:45 AEDT -----
235 | ```
236 |
237 |
238 | ### Automatic exclusions
239 |
240 | The script iterates the [list of inclusions](#backupList). For each item that is a directory (eg `/home`), the directory structure is searched for subdirectories named `.git` or `.svn`. Each parent directory containing either a `.git` or `.svn` subdirectory is automatically excluded from the backup.
241 |
242 | > The rationale is that such directories can be recovered using Git or Subversion.
243 |
244 | Rather than expecting you to remember all the excluded directories and the URLs needed to recover them from their respective repositories, the backup log lists the excluded directories along with sufficient information for you to be able to run `git clone` and `svn checkout` commands.
245 |
246 | No attempt is made to save any uncommitted files as part of the backup. The log simply records what you may need to reconstruct by other means.
247 |
248 | The script also automatically excludes any directories named `.cache` on the assumption that those will be recreated on demand.
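
The idea behind the repository search can be sketched with `find`. This is only an illustration of the concept (the demo paths are made up and the actual script's implementation may differ): prune at any `.git` or `.svn` directory and report its parent, which is the candidate for automatic exclusion.

``` console
# build a throw-away tree containing a fake git working copy
mkdir -p scan-demo/myrepo/.git scan-demo/plain
echo "data" > scan-demo/plain/file.txt

# prune at each .git or .svn directory and report its parent
find scan-demo -type d \( -name .git -o -name .svn \) -prune -print |
while read -r marker; do
    dirname "$marker"
done
# → scan-demo/myrepo
```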
249 |
250 |
251 | ### Override markers
252 |
253 | You can override the default behaviour to either force the inclusion of a directory that would otherwise be excluded automatically, or force the exclusion of a directory that is being included automatically.
254 |
255 |
256 | #### forcing inclusion
257 |
258 | If a directory is being excluded automatically but you decide that it should be included, you can force its inclusion by adding a marker file to the directory. For example:
259 |
260 | ``` console
261 | $ cd directory/you/want/to/include
262 | $ touch .include.snapshot_raspbian_system
263 | ```
264 |
265 | The scope of include markers is limited to directories that would otherwise be excluded automatically. You can't create an include marker in some random place like `/var/log` and expect the script to go find it.
266 |
267 | > You can, of course, modify the [list of inclusions](#backupList) to include `/var/log`.
268 |
269 |
270 | #### forcing exclusion
271 |
272 | If a directory is being included in your backup but you decide it can be reconstructed more easily by other means, you can exclude it by adding a marker file to the directory.
273 |
274 | The path `~/.local/lib` is a good example of a directory you may wish to exclude:
275 |
276 | ``` console
277 | $ cd ~/.local/lib
278 | $ touch .exclude.snapshot_raspbian_system
279 | ```
280 |
281 | The scope of exclude markers is limited to directories in the [list of inclusions](#backupList).
282 |
283 |
284 | #### override marker conflicts
285 |
286 | If a directory contains both an include and an exclude marker, the include marker prevails.
287 |
288 |
289 | #### nested override markers
290 |
291 | The `tar` application is passed two lists: a set of paths for inclusion, and a set of paths for exclusion. In any situation involving nested directories where items in the two lists might appear to create a paradox, `tar` is the final arbiter of what gets included in the snapshot.
292 |
293 |
294 | ### Script result
295 |
296 | The result of running `snapshot_raspbian_system` is a file named in the following format:
297 |
298 | ```
299 | yyyy-mm-dd_hhmm.hostname.raspbian-snapshot.tar.gz
300 | ```
301 |
302 |
303 | ### Automatic encryption with GnuPG (optional)
304 |
305 | By default, directories like `~/.ssh` and `/etc/ssh` will be included in your snapshots. Those directories *may* contain sufficient information to help an attacker gain access to or impersonate your systems.
306 |
307 | You *could* [exclude](#forceExclude) those directories from the backup. On the other hand, you may wish to ensure the contents of those directories are backed-up and ready to hand if you ever have to rebuild your Raspberry Pi.
308 |
309 | You can resolve this conundrum by encrypting the snapshot. If `gpg` is installed and the environment variable `GPGKEYID` points to a valid public key in your keychain, the script will encrypt the snapshot using that keyID as the recipient and indicate this by appending the `.gpg` extension.
310 |
311 | If you need help setting up GnuPG keys, please see [this tutorial](generating-gpg-keys.md).
312 |
313 |
314 | #### Encryption details
315 |
316 | The command used to encrypt the `.tar.gz` is:
317 |
318 | ```
319 | $ gpg --recipient "$GPGKEYID" \
320 | --output yyyy-mm-dd_hhmm.hostname.raspbian-snapshot.tar.gz.gpg \
321 | --encrypt yyyy-mm-dd_hhmm.hostname.raspbian-snapshot.tar.gz
322 | ```
323 |
324 | Encryption only needs the public key. The script checks for the presence of the public key in your keychain and will warn you if it is not found.
325 |
326 | This form of encryption produces a "binary" output rather than an ASCII "armoured" output.
327 |
328 | To decrypt the file, you would use:
329 |
330 | ```
331 | $ gpg \
332 | --output yyyy-mm-dd_hhmm.hostname.raspbian-snapshot.tar.gz \
333 | --decrypt yyyy-mm-dd_hhmm.hostname.raspbian-snapshot.tar.gz.gpg
334 | ```
335 |
336 | Decryption needs access to your private key and that, in turn, may need additional steps such as entering a passcode and/or unlocking and touching a token like a YubiKey.
337 |
338 | It should be self-evident that you should make sure you can decrypt your encrypted snapshots **before** you have to rely on them!
339 |
340 |
341 | ## Script argument (optional)
342 |
343 | By default, the `snapshot_raspbian_system` script uses its own name for the following:
344 |
345 | * the path to the list of inclusions:
346 |
347 | - `~/.config/iotstack_backup/snapshot_raspbian_system-inclusions.txt`
348 |
349 | * the include and exclude markers:
350 |
351 | - `.include.snapshot_raspbian_system`
352 | - `.exclude.snapshot_raspbian_system`
353 |
354 | * the directory name on the remote system into which snapshots are stored:
355 |
356 | - `… IoT-Hub/backups/snapshot_raspbian_system`
357 |
358 | You can change this behaviour by passing an optional argument to the script. For example, running the following command:
359 |
360 | ``` console
361 | $ snapshot_raspbian_system snapshot-for-me
362 | ```
363 |
364 | will result in:
365 |
366 | * the path to the list of inclusions being:
367 |
368 | - `~/.config/iotstack_backup/snapshot-for-me-inclusions.txt`
369 |
370 | * the include and exclude markers being:
371 |
372 | - `.include.snapshot-for-me`
373 | - `.exclude.snapshot-for-me`
374 |
375 | * the directory name on the remote system into which snapshots are stored being:
376 |
377 | - `… IoT-Hub/backups/snapshot-for-me`
378 |
379 | The ability to influence path and marker names is intended to help you tailor your snapshots to different situations. You can create custom [lists of inclusions](#backupList) and custom [inclusion](#forceInclude) and [exclusion](#forceExclude) override schemes that control the content of any snapshots.
380 |
381 | Nevertheless, because remote operations are involved, you should try to avoid passing arguments containing spaces or other characters that are open to misinterpretation by BASH. The script does its best to handle these correctly but it can't account for all possible interpretations by remote systems.
382 |
383 |
384 | ## Version control systems
385 |
386 |
387 | ### git
388 |
389 | In any git repository on a Raspberry Pi, one of two things will be true:
390 |
391 | 1. The local repository is a clone of an upstream repository where you started by doing something like:
392 |
393 | ``` console
394 | $ git clone URL myrepo
395 | ```
396 |
397 | In this case, local modifications can be saved to a remote system via git `add`, `commit` and `push` commands. If you don't have commit rights on the upstream repository, you should consider forking the upstream repository. The implication is that a subsequent `git clone` will recover your local modifications.
398 |
399 | 2. The local repository is the authoritative instance. In other words, you started by doing something like:
400 |
401 | ``` console
402 | $ mkdir myrepo
403 | $ cd myrepo
404 | $ git init
405 | ```
406 |
407 | This case is an example of where you should consider:
408 |
409 | * establishing an upstream remote you can `push` to; and/or
410 | * adding an [inclusion marker](#forceInclude) to the directory so that it is included in the snapshot.
411 |
412 |
413 | ### subversion
414 |
415 | With subversion, it is far more common to checkout from a remote repository:
416 |
417 | ``` console
418 | $ svn checkout URL myrepo
419 | ```
420 |
421 | If you make local changes, those can be added (if new) and committed, where a commit implies a push to the remote repository.
422 |
423 |
424 | ## Log examples
425 |
426 |
427 | ### `~/IOTstack` (git repository)
428 |
429 | The path `~/IOTstack` is excluded because it contains `~/IOTstack/.git`. The resulting log entry is:
430 |
431 | ```
432 | ----- [snapshot_raspbian_system] ----- excluding /home/pi/IOTstack
433 | origin https://github.com/SensorsIot/IOTstack.git (fetch)
434 | origin https://github.com/SensorsIot/IOTstack.git (push)
435 | On branch master
436 | Your branch is up to date with 'origin/master'.
437 |
438 | Untracked files:
439 | (use "git add ..." to include in what will be committed)
440 | .env
441 |
442 | nothing added to commit but untracked files present (use "git add" to track)
443 | ```
444 |
445 | That tells you that you can recover that directory as follows:
446 |
447 | ```
448 | $ git clone https://github.com/SensorsIot/IOTstack.git /home/pi/IOTstack
449 | ```
450 |
451 | The reason `.env` shows up as untracked, while files like `docker-compose.yml` and directories like `backup`, `services` and `volumes` do not, is that `.env` is not in IOTstack's standard `.gitignore` list while the others are.
452 |
453 | As it happens, `iotstack_backup` does back up `.env`, along with the other files and directories mentioned above, so this is a side-effect you can ignore.
454 |
455 |
456 | ### `~/.local/IOTstackBackup` (git repository)
457 |
458 | This directory is excluded because it contains a `.git` subdirectory. The log entry is:
459 |
460 | ```
461 | ----- [snapshot_raspbian_system] ----- excluding /home/pi/.local/IOTstackBackup
462 | origin https://github.com/Paraphraser/IOTstackBackup.git (fetch)
463 | origin https://github.com/Paraphraser/IOTstackBackup.git (push)
464 | On branch master
465 | Your branch is up to date with 'origin/master'.
466 |
467 | nothing to commit, working tree clean
468 | ```
469 |
470 | Again, this tells you that you can recover the directory via:
471 |
472 | ```
473 | $ git clone https://github.com/Paraphraser/IOTstackBackup.git /home/pi/.local/IOTstackBackup
474 | ```
475 |
476 |
477 | ### a subversion example
478 |
479 | ```
480 | ----- [snapshot_raspbian_system] ----- excluding /home/pi/.local/bin
481 | svn://svnserver.my.domain.com/trunk/user/local/bin
482 | ```
483 |
484 | The directory can be reconstructed with:
485 |
486 | ```
487 | $ svn checkout svn://svnserver.my.domain.com/trunk/user/local/bin /home/pi/.local/bin
488 | ```
489 |
490 |
491 | ### `~/.local/lib`
492 |
493 | This directory is not excluded by default but can be excluded by creating an [exclusion marker](#forceExclude). The resulting log entry reports the exclusion with no additional details:
494 |
495 | ```
496 | ----- [snapshot_raspbian_system] ----- excluding /home/pi/.local/lib
497 | ```
498 |
499 | This directory is a byproduct of other installations (ie is recreated automatically).
500 |
501 |
502 | ## Tips and tricks
503 |
504 | Assume the following snapshot:
505 |
506 | ```
507 | 2023-01-05_1733.sec-dev.raspbian-snapshot.tar.gz
508 | ```
509 |
510 | > If the file is still encrypted (ie has a `.gpg` extension), see [encryption details](#cryptoDetails) for an example of how to decrypt it.
511 |
512 |
513 | ### Extracting a snapshot
514 |
515 | The contents of a snapshot can be extracted like this:
516 |
517 | ```
518 | $ mkdir extract
519 | $ tar -C extract -xzf 2023-01-05_1733.sec-dev.raspbian-snapshot.tar.gz
520 | tar: Removing leading '/' from member names
521 | $ cd extract
522 | ```
523 |
524 | The working directory will contain everything recovered from the snapshot.
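
You can also pull a single item out of a snapshot without unpacking everything, by naming the member path with the leading `/` removed (matching the "Removing leading '/'" warning above). The following sketch builds a tiny stand-in archive first, so the file names and key content are invented for the demonstration:

``` console
# build a tiny demonstration archive (stands in for a real snapshot)
mkdir -p stage/home/pi/.ssh
echo "ssh-ed25519 AAAA... demo" > stage/home/pi/.ssh/authorized_keys
tar -C stage -czf demo-snapshot.tar.gz home

# extract just one member by naming its path (no leading "/")
mkdir -p extract
tar -C extract -xzf demo-snapshot.tar.gz home/pi/.ssh/authorized_keys
cat extract/home/pi/.ssh/authorized_keys
# → ssh-ed25519 AAAA... demo
```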
525 |
526 |
527 | ### checking ownership and permissions
528 |
529 | One of the problems you will run into when you extract a tar in the manner described above is that ownership will be assigned to the current user. In other words, the contents of directories like `/etc` will not be owned by root.
530 |
531 | Although you can pass an option to `tar` to cause it to retain the original ownership, that isn't always appropriate because you may well be doing the extraction on your support host (macOS or Windows) where the user and group IDs differ from those in effect on the Raspberry Pi when the `tar` archive was created. You also need to use `sudo` and that can be unwise (eg see [symlinks](#tipsSymlinks) below).
532 |
533 | It's usually more convenient to answer questions about ownership and permissions by inspecting the archive. For example, what is the correct ownership and permission mode on `/etc/resolvconf.conf`?
534 |
535 | ```
536 | $ tar -tvzf 2023-01-05_1733.sec-dev.raspbian-snapshot.tar.gz | grep "/etc/resolvconf.conf"
537 | -rw-r--r-- 0 root root 500 Jan 3 2021 /etc/resolvconf.conf.bak
538 | -rw-r--r-- 0 root root 625 Oct 25 23:11 /etc/resolvconf.conf
539 | ```
540 |
541 |
542 | ### change discovery
543 |
544 | The `diff` tool is one of the Unix world's hidden gems. As well as figuring out the differences between two text files, it can also report on the differences between whole directory trees.
545 |
546 | Suppose you have just extracted the snapshot as explained above. Try running:
547 |
548 | ```
549 | $ diff etc-baseline etc
550 | ```
551 |
552 | The report will tell you:
553 |
554 | 1. which files/folders are in common (ie have not changed since your system was built);
555 | 2. which files are only in `/etc-baseline` (ie have been removed from `/etc` since the baseline was established);
556 | 3. which files are only in `/etc` (ie have been added to `/etc` since the baseline was established); and
557 | 4. which files are in both directories and have changed, and summarise their differences.
558 |
559 | The report may seem a bit "busy" at first but you will quickly realise that it is telling you *exactly* what you need to know when it comes to configuring a newly-built system to behave the same as an older system which has just emitted magic smoke.
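
A quick way to see the idea in action is to compare two throw-away directories (the names and contents here are invented for the demonstration):

``` console
# two small stand-ins for /etc-baseline and /etc
mkdir -p diff-demo/etc-baseline diff-demo/etc
echo "nameserver 8.8.8.8" > diff-demo/etc-baseline/resolv.conf   # as shipped
echo "nameserver 1.1.1.1" > diff-demo/etc/resolv.conf            # changed since
echo "custom setting"     > diff-demo/etc/added.conf             # added since

# -r recurses into subdirectories; -q just names the differing files
# (drop -q to see line-by-line differences as well);
# diff exits non-zero when differences are found, hence the || true
diff -rq diff-demo/etc-baseline diff-demo/etc || true
# → Only in diff-demo/etc: added.conf
# → Files diff-demo/etc-baseline/resolv.conf and diff-demo/etc/resolv.conf differ
```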
560 |
561 |
562 | ### symlinks - known weakness
563 |
564 | When symlinks are encountered by `tar`, its default behaviour is to include the symlink "as is" rather than follow the link to the file system objects to which it points (dereferencing).
565 |
566 | This behaviour can be overridden with the `-h` option but that appears to result in the snapshot failing to unpack without `sudo`. That's problematic because it implies an attempt to re-establish the "same" relationships on the recovery system. That is somewhere between *inappropriate* and *dangerous.*
567 |
568 | > In case it isn't crystal clear: **don't** use `sudo` to unpack a snapshot!
569 |
570 | There may be a way to resolve this but, for now, the script avoids the problem by accepting the default behaviour. What this means is that some files which you would normally expect to find in `/etc` will not be present in the snapshot. At the time of writing, the files in this category were:
571 |
572 | ```
573 | /etc/mtab
574 | /etc/os-release
575 | /etc/rmt
576 | ```
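
The difference between the default behaviour and `-h` can be demonstrated with a throw-away symlink (file names invented for the demonstration):

``` console
# create a target file and a symlink pointing at it
mkdir -p link-demo && cd link-demo
echo "real content" > target.txt
ln -s target.txt link.txt

# default: the symlink itself is archived ("->" appears in the listing)
tar -czf default.tar.gz link.txt
tar -tvzf default.tar.gz
# listing shows: link.txt -> target.txt

# with -h the link is dereferenced: a regular copy of the file is stored
tar -czhf deref.tar.gz link.txt
tar -tvzf deref.tar.gz
# listing shows link.txt as an ordinary file (no "->")
cd ..
```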
577 |
578 |
--------------------------------------------------------------------------------
/resources/quota-flow-canvas.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Paraphraser/IOTstackBackup/877ddaef5d1e8479d0795078f83afb90a2684577/resources/quota-flow-canvas.png
--------------------------------------------------------------------------------
/resources/quota-flow-example.json:
--------------------------------------------------------------------------------
1 | [
2 | {
3 | "id": "139f6648a7cddeaf",
4 | "type": "tab",
5 | "label": "Quota Example",
6 | "disabled": false,
7 | "info": "",
8 | "env": []
9 | },
10 | {
11 | "id": "4b440bc827423850",
12 | "type": "mqtt in",
13 | "z": "139f6648a7cddeaf",
14 | "name": "/home/quota/[system]",
15 | "topic": "/home/quota/+",
16 | "qos": "2",
17 | "datatype": "json",
18 | "broker": "c5d29fb5.89907",
19 | "nl": false,
20 | "rap": true,
21 | "rh": 0,
22 | "inputs": 0,
23 | "x": 140,
24 | "y": 100,
25 | "wires": [
26 | [
27 | "23daf8c09b8d04ad"
28 | ]
29 | ]
30 | },
31 | {
32 | "id": "ce6bf8e1423ff554",
33 | "type": "function",
34 | "z": "139f6648a7cddeaf",
35 | "name": "Check Quota 90%",
36 | "func": "// declare threshold\nconst threshold = 90.0;\n\n// calculate percent used\nvar used = 100.0 * msg.payload.used / msg.payload.total;\n\n// set the email message subject line\nmsg.topic = msg.system + \" space utilisation\"\n\n// check over threshold\nif (used > threshold) {\n msg.payload = \"Space used exceeds \" + threshold + \"% threshold\";\n return [msg, null];\n}\n\n// otherwise report in bounds\nmsg.payload = \"Space used is below \" + threshold + \" % threshold\";\nreturn [null, msg];\n",
37 | "outputs": 2,
38 | "noerr": 0,
39 | "initialize": "",
40 | "finalize": "",
41 | "libs": [],
42 | "x": 310,
43 | "y": 260,
44 | "wires": [
45 | [
46 | "bc4a31d2ce5a8ff2"
47 | ],
48 | [
49 | "bc4a31d2ce5a8ff2"
50 | ]
51 | ],
52 | "outputLabels": [
53 | "above threshold",
54 | "below threshold"
55 | ]
56 | },
57 | {
58 | "id": "bc4a31d2ce5a8ff2",
59 | "type": "e-mail",
60 | "z": "139f6648a7cddeaf",
61 | "server": "smtp.domain.com",
62 | "port": "465",
63 | "secure": true,
64 | "tls": false,
65 | "name": "user@domain.com",
66 | "dname": "email notification",
67 | "x": 570,
68 | "y": 260,
69 | "wires": []
70 | },
71 | {
72 | "id": "23daf8c09b8d04ad",
73 | "type": "change",
74 | "z": "139f6648a7cddeaf",
75 | "name": "extract system name",
76 | "rules": [
77 | {
78 | "t": "set",
79 | "p": "system",
80 | "pt": "msg",
81 | "to": "$split(msg.topic,'/')[-1]",
82 | "tot": "jsonata"
83 | }
84 | ],
85 | "action": "",
86 | "property": "",
87 | "from": "",
88 | "to": "",
89 | "reg": false,
90 | "x": 240,
91 | "y": 180,
92 | "wires": [
93 | [
94 | "ce6bf8e1423ff554"
95 | ]
96 | ],
97 | "info": "# Extract System Name\n\nThe syntax:\n\n```\n$split(msg.topic,'/')[-1]\n```\n\nassumes the incoming topic string is like:\n\n```\n/something/quota/system\n```\n\nThe `$split(msg.topic,'/')` function will crack that string about the \"/\" separator and return an array of four elements:\n\n* index=0, a null string (everything before the first `/` although some people prefer to omit the leading `/` from their topic strings)\n* index=1, the string \"something\"\n* index=2, the string \"quota\"\n* index=3, the string \"system\"\n\nThe trailing `[-1]` returns the string at the last index position which is \"system\"."
98 | },
99 | {
100 | "id": "c5d29fb5.89907",
101 | "type": "mqtt-broker",
102 | "name": "Docker MQTT",
103 | "broker": "mosquitto",
104 | "port": "1883",
105 | "clientid": "",
106 | "autoConnect": true,
107 | "usetls": false,
108 | "compatmode": false,
109 | "protocolVersion": "4",
110 | "keepalive": "60",
111 | "cleansession": true,
112 | "birthTopic": "",
113 | "birthQos": "0",
114 | "birthRetain": "false",
115 | "birthPayload": "",
116 | "birthMsg": {},
117 | "closeTopic": "",
118 | "closeQos": "0",
119 | "closeRetain": "false",
120 | "closePayload": "",
121 | "closeMsg": {},
122 | "willTopic": "",
123 | "willQos": "0",
124 | "willRetain": "false",
125 | "willPayload": "",
126 | "willMsg": {},
127 | "userProps": "",
128 | "sessionExpiry": ""
129 | }
130 | ]
--------------------------------------------------------------------------------
/scripts/iotstack_backup:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # assumptions that can be overridden
10 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
11 |
12 | # the project name is the all-lower-case form of the folder name
13 | PROJECT=$(basename ${IOTSTACK,,})
14 |
15 | # useful function
16 | isStackUp() {
17 | if COUNT=$( \
18 | curl -s --unix-socket /var/run/docker.sock http://localhost/containers/json | \
19 | jq -c ".[].Labels.\"com.docker.compose.project\"" | \
20 | grep -c "\"$1\"$" \
21 | ) ; then
22 | if [ $COUNT -gt 0 ] ; then
23 | return 0
24 | fi
25 | fi
26 | return 1
27 | }
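# example: isStackUp "iotstack" returns 0 (success) only if at least one
# running container carries the compose project label "iotstack"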
28 |
29 | USAGE=0
30 |
31 | case "$#" in
32 |
33 | 0 )
34 |         RUNTAG=$(date +"%Y-%m-%d_%H%M").$HOSTNAME
35 | BY_HOST_DIR=$HOSTNAME
36 | ;;
37 |
38 | 1 )
39 | # anything that looks like an option (- -h --help)?
40 | if [ ${1::1} == "-" ] ; then
41 | USAGE=1
42 | else
43 | RUNTAG="$1"
44 | BY_HOST_DIR=$HOSTNAME
45 | fi
46 | ;;
47 |
48 | 2 )
49 | RUNTAG="$1"
50 | BY_HOST_DIR="$2"
51 | ;;
52 |
53 | *)
54 | USAGE=1
55 | ;;
56 |
57 | esac
58 |
59 | if [ $USAGE -ne 0 ] ; then
60 | echo "Usage: $SCRIPT {runtag} {by_host_dir}"
61 | echo "where:"
62 |   echo " runtag defaults to $(date +"%Y-%m-%d_%H%M").$HOSTNAME"
63 | echo " by_host_dir defaults to $HOSTNAME"
64 | exit 1
65 | fi
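# illustrative invocations (hostname "iot-hub" and the date shown are
# assumptions for the example):
#   iotstack_backup                           # runtag auto-generated, eg 2023-05-01_1400.iot-hub
#   iotstack_backup 2023-05-01_1400.iot-hub   # explicit runtag; by_host_dir defaults to iot-hub
#   iotstack_backup 2023-05-01_1400.iot-hub other-dir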
66 |
67 | # check dependencies
68 | if [ -z "$(which shyaml)" -o -z "$(which curl)" -o -z "$(which jq)" ] ; then
69 | echo "Missing dependencies. Please re-run install_scripts.sh."
70 | exit 1
71 | fi
72 |
73 | # the configuration file is at
74 | CONFIG_YML="$HOME/.config/iotstack_backup/config.yml"
75 |
76 | # does the configuration file exist?
77 | if [ -e "$CONFIG_YML" ] ; then
78 | CLOUD_METHOD=$(shyaml get-value backup.method < "$CONFIG_YML")
79 | CLOUD_PREFIX=$(shyaml get-value backup.prefix < "$CONFIG_YML")
80 | CLOUD_OPTIONS=$(shyaml -q get-value backup.options < "$CONFIG_YML")
81 | LOCAL_RETAIN=$(shyaml get-value backup.retain < "$CONFIG_YML")
82 | else
83 | echo "Warning: Configuration file not found: $CONFIG_YML"
84 | fi
85 |
86 | # apply defaults if not set from configuration file
87 | CLOUD_METHOD=${CLOUD_METHOD:-"SCP"}
88 | CLOUD_PREFIX=${CLOUD_PREFIX:-"myuser@myhost.mydomain.com:/path/to/backup/directory/on/myhost"}
89 | LOCAL_RETAIN=${LOCAL_RETAIN:-"8"}
90 |
91 | # form the cloud path
92 | CLOUD_PATH="$CLOUD_PREFIX/$BY_HOST_DIR"
93 |
94 | # assumptions
95 | BACKUPSDIR="$IOTSTACK/backups"
96 | COMPOSENAME="docker-compose.yml"
97 | COMPOSE="$IOTSTACK/$COMPOSENAME"
98 | LOGNAME="backup-log"
99 | LOGFILE="$RUNTAG.$LOGNAME.txt"
100 |
101 | # check the key assumptions
102 | if ! [ -d "$IOTSTACK" -a -e "$COMPOSE" ] ; then
103 | echo "Error: One of the following does not exist:"
104 | echo " $IOTSTACK"
105 | echo " $COMPOSE"
106 | echo "This may indicate a problem with your installation."
107 | exit 1
108 | fi
109 |
110 | # check IOTstack seems to be running
111 | if ! isStackUp "$PROJECT" ; then
112 | echo "Warning: $PROJECT does not seem to be running. The general backup"
113 | echo " will work but backups for database and other special-case"
114 | echo " containers will be skipped."
115 | fi
116 |
117 | # make sure the backups directory exists, has correct ownership & mode
118 | [ -d "$BACKUPSDIR" ] || mkdir -m 755 -p "$BACKUPSDIR"
119 | [ $(stat -c "%U:%G" "$BACKUPSDIR") = "$USER:$USER" ] || sudo chown $USER:$USER "$BACKUPSDIR"
120 | [ $(stat -c "%a" "$BACKUPSDIR") = "755" ] || sudo chmod 755 "$BACKUPSDIR"
121 |
122 | # move into the backups directory
123 | cd "$BACKUPSDIR"
124 |
125 | # ensure that the log exists and redirect to it
126 | touch "$LOGFILE"
127 | exec >> "$LOGFILE"
128 | exec 2>> "$LOGFILE"
129 |
130 | echo "----- Starting $SCRIPT at $(date) -----"
131 | echo " RUNTAG = $RUNTAG"
132 | echo " CLOUD_METHOD = $CLOUD_METHOD"
133 | echo "CLOUD_OPTIONS = $CLOUD_OPTIONS"
134 | echo " CLOUD_PREFIX = $CLOUD_PREFIX"
135 | echo " CLOUD_PATH = $CLOUD_PATH"
136 | echo ""
137 |
138 | # record images in use (ukkopahis' suggestion on Discord)
139 | docker image ls --all --digests --no-trunc
140 |
141 | # perform the backups
142 | iotstack_backup_general "$BACKUPSDIR" "$RUNTAG"
143 | iotstack_backup_influxdb "$BACKUPSDIR" "$RUNTAG"
144 | iotstack_backup_influxdb2 "$BACKUPSDIR" "$RUNTAG"
145 | iotstack_backup_nextcloud "$BACKUPSDIR" "$RUNTAG"
146 | iotstack_backup_mariadb "$BACKUPSDIR" "$RUNTAG"
147 | iotstack_backup_postgres "$BACKUPSDIR" "$RUNTAG"
148 | iotstack_backup_wordpress "$BACKUPSDIR" "$RUNTAG"
149 | iotstack_backup_gitea "$BACKUPSDIR" "$RUNTAG"
150 |
151 | # copy the files (keep in mind that log entries written after the
152 | # log is copied to the remote will only be in the local log).
153 | case "$CLOUD_METHOD" in
154 |
155 | "RCLONE" )
156 | rclone sync -v $CLOUD_OPTIONS \
157 | "$BACKUPSDIR" \
158 | "$CLOUD_PATH" \
159 | --exclude "$LOGFILE" \
160 | --exclude "influxdb/**"
161 | ;;
162 |
163 | "RSYNC" )
164 | # note that the slash after "$BACKUPSDIR" is required!
165 | rsync -vrt $CLOUD_OPTIONS --delete \
166 | --exclude="$LOGFILE" \
167 | --exclude=influxdb \
168 | "$BACKUPSDIR"/ \
169 | "$CLOUD_PATH"
170 | ;;
171 |
172 | "SCP" )
173 | scp $CLOUD_OPTIONS "$RUNTAG".* "$CLOUD_PATH"
174 | ;;
175 |
176 | *)
177 | echo "Warning: $CLOUD_METHOD backup method is not supported"
178 | echo "Warning: The only backup files are the ones in $BACKUPSDIR"
179 | ;;
180 |
181 | esac
182 |
183 | # cleanup: containers - "influx" not "influxdb" - then logs as a special case
184 | for C in "general" "influx" "influxdb2" "nextcloud" "mariadb" "postgres" "wordpress" "gitea"; do
185 | ls -t1 *."$C-backup".* 2>/dev/null | tail -n +$LOCAL_RETAIN | xargs rm -f
186 | done
187 | ls -t1 *."$LOGNAME".* 2>/dev/null | tail -n +$LOCAL_RETAIN | xargs rm -f
188 |
189 |
190 | echo "----- Finished $SCRIPT at $(date) -----"
191 |
--------------------------------------------------------------------------------
/scripts/iotstack_backup_general:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # assumptions that can be overridden
10 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
11 | DEFAULTFILENAME=${DEFAULTFILENAME:-"general-backup.tar.gz"}
12 |
13 | # the project name is the all-lower-case form of the folder name
14 | PROJECT=$(basename ${IOTSTACK,,})
15 |
16 | # $1 is required and is either path to a .tar.gz or path to a folder
17 | # $2 is optional and is the runtag (yyyy-mm-dd_hhmm.host-name)
18 | # $3 is optional and overrides the default file name
19 |
20 | case "$#" in
21 |
22 | 1)
23 | BACKUP_TAR_GZ=$(realpath "$1")
24 | ;;
25 |
26 | 2 | 3)
27 | BACKUP_TAR_GZ=$(realpath "$1/$2.${3:-"$DEFAULTFILENAME"}")
28 | ;;
29 |
30 | *)
31 | echo "Usage 1: $SCRIPT path/to/$DEFAULTFILENAME"
32 | echo "Usage 2: $SCRIPT path/to/backupdir runtag {override}"
33 | echo " (override defaults to $DEFAULTFILENAME)"
34 | exit 1
35 | ;;
36 |
37 | esac
38 |
39 | # fail safe if the file already exists - no accidental overwrites
40 | if [ -e "$BACKUP_TAR_GZ" ] ; then
41 | echo "Error: $BACKUP_TAR_GZ already exists - will not be overwritten"
42 | exit 1
43 | fi
44 |
45 | # assumptions
46 | COMPOSENAME="docker-compose.yml"
47 | COMPOSE="$IOTSTACK/$COMPOSENAME"
48 |
49 | # check the key assumptions
50 | if ! [ -d "$IOTSTACK" -a -e "$COMPOSE" ] ; then
51 | echo "Error: One of the following does not exist:"
52 | echo " $IOTSTACK"
53 | echo " $COMPOSE"
54 | echo "This may indicate a problem with your installation."
55 | exit 1
56 | fi
57 |
58 | # define mandatory files/folders to be included in the backup
59 | # (note - temporary file created in RAM)
60 | BACKUP_INCLUDE="$(mktemp -p /dev/shm/)"
61 | cat <<-INCLUSIONS >"$BACKUP_INCLUDE"
62 | ./services/
63 | ./volumes/
64 | INCLUSIONS
65 |
66 | # check that the items to be included exist
67 | for INCLUDE in $(cat $BACKUP_INCLUDE); do
68 | I=$(realpath "$IOTSTACK/$INCLUDE")
69 | if [ ! -e "$I" ]; then
70 | echo "Error: $I does not exist. This may indicate a problem with your installation."
71 | exit 1
72 | fi
73 | done
74 |
75 | # add all *.yml *.env and .env files in directory-relative form
76 | # (this also captures mkdocs.yml, which is excluded again below)
77 | for INCLUDE in "$IOTSTACK"/*.yml "$IOTSTACK"/*.env "$IOTSTACK"/.env ; do
78 | if [ -f "$INCLUDE" ] ; then
79 | 		echo "./$(basename "$INCLUDE")" >> "$BACKUP_INCLUDE"
80 | fi
81 | done
82 |
83 | # define files/folders to be excluded from the backup
84 | # (note - temporary file created in RAM)
85 | BACKUP_EXCLUDE="$(mktemp -p /dev/shm/)"
86 | cat <<-EXCLUSIONS >"$BACKUP_EXCLUDE"
87 | ./mkdocs.yml
88 | ./volumes/domoticz/domocookie.txt
89 | ./volumes/domoticz/domoticz.db-shm
90 | ./volumes/domoticz/domoticz.db-wal
91 | ./volumes/esphome/config/.esphome
92 | ./volumes/gitea
93 | ./volumes/influxdb
94 | ./volumes/influxdb2
95 | ./volumes/mariadb
96 | ./volumes/mosquitto/data
97 | ./volumes/motioneye/var_lib_motioneye
98 | ./volumes/nextcloud
99 | ./volumes/postgres
100 | ./volumes/wordpress
101 | ./volumes/pihole.restored
102 | ./volumes/subversion
103 | ./volumes/zigbee2mqtt/data/log
104 | ./volumes/lost+found
105 | EXCLUSIONS
106 |
107 | # now we can begin
108 | echo "----- Starting $SCRIPT at $(date) -----"
109 |
110 | # create the file (sets ownership correctly)
111 | touch "$BACKUP_TAR_GZ"
112 |
113 | # add information to the report
114 | echo "Paths included in the backup:"
115 | cat "$BACKUP_INCLUDE"
116 | echo "Paths excluded from the backup:"
117 | cat "$BACKUP_EXCLUDE"
118 |
119 | # perform the backup (relative to ~/IOTstack)
120 | sudo tar \
121 | -czf "$BACKUP_TAR_GZ" \
122 | -C "$IOTSTACK" \
123 | -X "$BACKUP_EXCLUDE" \
124 | -T "$BACKUP_INCLUDE"
125 |
126 | # report size of archive
127 | du -h "$BACKUP_TAR_GZ"
128 |
129 | # clean up the working files
130 | rm $BACKUP_INCLUDE
131 | rm $BACKUP_EXCLUDE
132 |
133 | echo "----- Finished $SCRIPT at $(date) -----"
134 |
--------------------------------------------------------------------------------
/scripts/iotstack_backup_gitea:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # assumptions that can be overridden
10 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
11 | CONTAINER="${CONTAINER:-"gitea"}"
12 | DEFAULTFILENAME=${DEFAULTFILENAME:-"$CONTAINER-backup.tar.gz"}
13 |
14 | # the project name is the all-lower-case form of the folder name
15 | PROJECT=$(basename ${IOTSTACK,,})
16 |
17 | # the database container is
18 | CONTAINER_DB="${CONTAINER}_db"
19 |
20 | # useful function
21 | isContainerRunning() {
22 | if STATUS=$(curl -s --unix-socket /var/run/docker.sock http://localhost/containers/$1/json | jq .State.Status) ; then
23 | if [ "$STATUS" = "\"running\"" ] ; then
24 | return 0
25 | fi
26 | fi
27 | return 1
28 | }
29 |
30 | preferredCommand() {
31 | if [ -n "$(docker exec "$CONTAINER_DB" which "$1")" ] ; then
32 | echo "$1"
33 | else
34 | echo "$2"
35 | fi
36 | }
37 |
38 |
39 | # $1 is required and is either path to a .tar.gz or the path to a folder
40 | # $2 is optional and is the runtag (yyyy-mm-dd_hhmm.host-name)
41 | # $3 is optional and overrides the default file name
42 |
43 | case "$#" in
44 |
45 | 1)
46 | BACKUP_TAR_GZ=$(realpath "$1")
47 | ;;
48 |
49 | 2 | 3)
50 | BACKUP_TAR_GZ=$(realpath "$1/$2.${3:-"$DEFAULTFILENAME"}")
51 | ;;
52 |
53 | *)
54 | echo "Usage 1: $SCRIPT path/to/$DEFAULTFILENAME"
55 | echo "Usage 2: $SCRIPT path/to/backupdir runtag {override}"
56 | echo " (override defaults to $DEFAULTFILENAME)"
57 | exit 1
58 | ;;
59 |
60 | esac
61 |
62 | # fail safe if the file already exists - no accidental overwrites
63 | if [ -e "$BACKUP_TAR_GZ" ] ; then
64 | echo "Error: $BACKUP_TAR_GZ already exists - will not be overwritten"
65 | exit 1
66 | fi
67 |
68 | # assumptions
69 | COMPOSENAME="docker-compose.yml"
70 | COMPOSE="$IOTSTACK/$COMPOSENAME"
71 | VOLUMES="$IOTSTACK/volumes"
72 | GITEA_VOLUMES="$VOLUMES/$CONTAINER"
73 | GITEA_DIR="data"
74 | GITEA_DB_DIR="db_backup"
75 | GITEA_DB_BACKUP="$GITEA_VOLUMES/$GITEA_DB_DIR"
76 |
77 | # check that the primary container is running
78 | if ! isContainerRunning "$CONTAINER"; then
79 | echo "Warning: $CONTAINER not running - backup skipped"
80 | exit 0
81 | fi
82 |
83 | # now we can begin
84 | echo "----- Starting $SCRIPT at $(date) -----"
85 |
86 | # create a file to hold the list of inclusions
87 | BACKUP_INCLUDE="$(mktemp -p /dev/shm/)"
88 |
89 | # initialise the set of folders to be included in the backup.
90 | #
91 | cat <<-INCLUSIONS >"$BACKUP_INCLUDE"
92 | ./$GITEA_DIR
93 | INCLUSIONS
94 |
95 | # is the database engine running? The up-to-date gitea container depends
96 | # on gitea_db so the only situation where the database should NOT be
97 | # running is if the gitea instance is the legacy container
98 | if isContainerRunning "$CONTAINER_DB" ; then
99 |
100 | # yes! ensure the db_backup backup directory exists & has correct
101 | # ownership & mode
102 | [ -d "$GITEA_DB_BACKUP" ] || sudo mkdir -m 755 -p "$GITEA_DB_BACKUP"
103 | [ $(stat -c "%U:%G" "$GITEA_DB_BACKUP") = "$USER:$USER" ] || sudo chown -R $USER:$USER "$GITEA_DB_BACKUP"
104 | [ $(stat -c "%a" "$GITEA_DB_BACKUP") = "755" ] || sudo chmod -R 755 "$GITEA_DB_BACKUP"
105 |
106 | # append to the list of inclusions
107 | echo "./$GITEA_DB_DIR" >>"$BACKUP_INCLUDE"
108 |
109 | # the db_backup backup directory needs to be empty
110 | if [ $(ls -1 "$GITEA_DB_BACKUP" | wc -l) -gt 0 ] ; then
111 | echo "Erasing $GITEA_DB_BACKUP"
112 | sudo rm "$GITEA_DB_BACKUP"/*
113 | fi
114 |
115 | # tell MariaDB to take a backup
116 | echo "Telling $CONTAINER_DB (MariaDB) to create a portable backup"
117 | docker exec "$CONTAINER_DB" bash -c "$(preferredCommand mariadb-dump mysqldump) --single-transaction -p\$MYSQL_ROOT_PASSWORD \$MYSQL_DATABASE >/backup/backup.sql"
118 |
119 | else
120 | echo "Warning: $CONTAINER_DB not running - assuming legacy $CONTAINER"
121 | fi
122 |
123 |
124 | # check that the items to be included exist
125 | echo "Paths included in the backup:"
126 | for INCLUDE in $(cat $BACKUP_INCLUDE); do
127 | I=$(realpath "$GITEA_VOLUMES/$INCLUDE")
128 | if [ -d "$I" ]; then
129 | echo " $I"
130 | else
131 | echo "Error: $I does not exist. This may indicate a problem with your installation - backup skipped."
132 | exit 1
133 | fi
134 | done
135 |
136 | # create the file (sets ownership)
137 | touch "$BACKUP_TAR_GZ"
138 |
139 | # perform the backup (relative to ~/IOTstack/volumes/gitea)
140 | echo "Collecting the backup files into a tar.gz"
141 | sudo tar \
142 | -czf "$BACKUP_TAR_GZ" \
143 | -C "$GITEA_VOLUMES" \
144 | -T "$BACKUP_INCLUDE"
145 |
146 | # report size of archive
147 | du -h "$BACKUP_TAR_GZ"
148 |
149 | echo "----- Finished $SCRIPT at $(date) -----"
--------------------------------------------------------------------------------
/scripts/iotstack_backup_influxdb:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # assumptions that can be overridden
10 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
11 | CONTAINER=${CONTAINER:-"influxdb"}
12 | # default filename hard-coded for backwards compatibility
13 | DEFAULTFILENAME=${DEFAULTFILENAME:-"influx-backup.tar"}
14 |
15 | # the project name is the all-lower-case form of the folder name
16 | PROJECT=$(basename ${IOTSTACK,,})
17 |
18 | # useful function
19 | isContainerRunning() {
20 | if STATUS=$(curl -s --unix-socket /var/run/docker.sock http://localhost/containers/$1/json | jq .State.Status) ; then
21 | if [ "$STATUS" = "\"running\"" ] ; then
22 | return 0
23 | fi
24 | fi
25 | return 1
26 | }
27 |
28 | # $1 is required and is either path to a .tar or the path to a folder
29 | # $2 is optional and is the runtag (yyyy-mm-dd_hhmm.host-name)
30 | # $3 is optional and overrides the default file name
31 |
32 | case "$#" in
33 |
34 | 1)
35 | BACKUP_TAR=$(realpath "$1")
36 | ;;
37 |
38 | 2 | 3)
39 | BACKUP_TAR=$(realpath "$1/$2.${3:-"$DEFAULTFILENAME"}")
40 | ;;
41 |
42 | *)
43 | echo "Usage 1: $SCRIPT path/to/$DEFAULTFILENAME"
44 | echo "Usage 2: $SCRIPT path/to/backupdir runtag {override}"
45 | echo " (override defaults to $DEFAULTFILENAME)"
46 | exit 1
47 | ;;
48 |
49 | esac
50 |
51 | # fail safe if the file already exists - no accidental overwrites
52 | if [ -e "$BACKUP_TAR" ] ; then
53 | echo "Error: $BACKUP_TAR already exists - will not be overwritten"
54 | exit 1
55 | fi
56 |
57 | # assumptions
58 | COMPOSENAME="docker-compose.yml"
59 | CQSNAME="continuous-queries.influxql"
60 | EPILOGNAME="iotstack_restore_$CONTAINER.epilog"
61 |
62 | COMPOSE="$IOTSTACK/$COMPOSENAME"
63 | BACKUPS="$IOTSTACK/backups"
64 | EXTERNALEPILOG="$IOTSTACK/services/$CONTAINER/$EPILOGNAME"
65 | INTERNALEPILOG="/$(uuidgen).epilog"
66 | EXTERNALBACKUP="$BACKUPS/$CONTAINER"
67 | EXTERNALBACKUPDB="$EXTERNALBACKUP/db"
68 | INTERNALBACKUPDB="/var/lib/influxdb/backup"
69 | INFLUXDATA="$IOTSTACK/volumes/$CONTAINER/data"
70 | EXTERNALCQS="$EXTERNALBACKUPDB/$CQSNAME"
71 | INTERNALCQS="$INTERNALBACKUPDB/$CQSNAME"
72 |
73 | # is influxdb running?
74 | if ! isContainerRunning "$CONTAINER" ; then
75 | echo "Warning: $CONTAINER container not running - backup skipped"
76 | exit 0
77 | fi
78 |
79 | # make sure the backups directory exists & has correct ownership & mode
80 | [ -d "$BACKUPS" ] || mkdir -m 755 -p "$BACKUPS"
81 | [ $(stat -c "%U:%G" "$BACKUPS") = "$USER:$USER" ] || sudo chown $USER:$USER "$BACKUPS"
82 | [ $(stat -c "%a" "$BACKUPS") = "755" ] || sudo chmod 755 "$BACKUPS"
83 |
84 | # make sure the influx backup directory exists & has correct ownership & mode
85 | [ -d "$EXTERNALBACKUPDB" ] || sudo mkdir -m 755 -p "$EXTERNALBACKUPDB"
86 | [ $(stat -c "%U:%G" "$EXTERNALBACKUP") = "root:root" ] || sudo chown -R root:root "$EXTERNALBACKUP"
87 | [ $(stat -c "%a" "$EXTERNALBACKUP") = "755" ] || sudo chmod -R 755 "$EXTERNALBACKUP"
88 |
89 | # now we can begin
90 | echo "----- Starting $SCRIPT at $(date) -----"
91 |
92 | # create the file (sets ownership)
93 | touch "$BACKUP_TAR"
94 |
95 | # the influx backup directory needs to be empty
96 | if [ $(ls -1 "$EXTERNALBACKUPDB" | wc -l) -gt 0 ] ; then
97 | echo "Erasing $EXTERNALBACKUPDB"
98 | sudo rm "$EXTERNALBACKUPDB"/*
99 | fi
100 |
101 | # tell influx to perform the backup
102 | echo "Telling influxd to create a portable backup"
103 | docker exec "$CONTAINER" influxd backup -portable "$INTERNALBACKUPDB"
104 |
105 | # attempt to collect any continuous queries
106 | echo "Extracting any continuous queries"
107 | docker exec "$CONTAINER" bash -c \
108 | "influx -execute 'SHOW CONTINUOUS QUERIES' \
109 | | grep 'CREATE CONTINUOUS QUERY' \
110 | | sed 's/^.*CREATE CONTINUOUS QUERY/CREATE CONTINUOUS QUERY/g' \
111 | >$INTERNALCQS"
112 |
113 | # remove if an empty file was created (ie no continuous queries)
114 | [ -f "$EXTERNALCQS" -a ! -s "$EXTERNALCQS" ] && sudo rm "$EXTERNALCQS"
115 |
116 | # sweep the backup into a tar (sudo is needed because backup files are
117 | # created with owner root, group root, mode 600)
118 | echo "Collecting the backup files into a tar"
119 | sudo tar \
120 | -cf "$BACKUP_TAR" \
121 | -C "$EXTERNALBACKUPDB" \
122 | .
123 |
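# optional spot-check once the archive exists (lists contents only, no
# extraction):
#   tar -tf "$BACKUP_TAR" | head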
124 | # report size of archive
125 | du -h "$BACKUP_TAR"
126 |
127 | echo "----- Finished $SCRIPT at $(date) -----"
--------------------------------------------------------------------------------
/scripts/iotstack_backup_influxdb2:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # assumptions that can be overridden
10 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
11 | CONTAINER=${CONTAINER:-"influxdb2"}
12 | DEFAULTFILENAME=${DEFAULTFILENAME:-"$CONTAINER-backup.tar"}
13 |
14 | # the project name is the all-lower-case form of the folder name
15 | PROJECT=$(basename ${IOTSTACK,,})
16 |
17 | # useful function
18 | isContainerRunning() {
19 | if STATUS=$(curl -s --unix-socket /var/run/docker.sock http://localhost/containers/$1/json | jq .State.Status) ; then
20 | if [ "$STATUS" = "\"running\"" ] ; then
21 | return 0
22 | fi
23 | fi
24 | return 1
25 | }
26 |
27 | # $1 is required and is either path to a .tar or the path to a folder
28 | # $2 is optional and is the runtag (yyyy-mm-dd_hhmm.host-name)
29 | # $3 is optional and overrides the default file name
30 |
31 | case "$#" in
32 |
33 | 1)
34 | BACKUP_TAR=$(realpath "$1")
35 | ;;
36 |
37 | 2 | 3)
38 | BACKUP_TAR=$(realpath "$1/$2.${3:-"$DEFAULTFILENAME"}")
39 | ;;
40 |
41 | *)
42 | echo "Usage 1: $SCRIPT path/to/$DEFAULTFILENAME"
43 | echo "Usage 2: $SCRIPT path/to/backupdir runtag {override}"
44 | echo " (override defaults to $DEFAULTFILENAME)"
45 | exit 1
46 | ;;
47 |
48 | esac
49 |
50 | # fail safe if the file already exists - no accidental overwrites
51 | if [ -e "$BACKUP_TAR" ] ; then
52 | echo "Error: $BACKUP_TAR already exists - will not be overwritten"
53 | exit 1
54 | fi
55 |
56 | # assumptions
57 | COMPOSENAME="docker-compose.yml"
58 | COMPOSE="$IOTSTACK/$COMPOSENAME"
59 | INFLUXSTORE="$IOTSTACK/volumes/$CONTAINER"
60 | INFLUXBACKUP="$INFLUXSTORE/backup"
61 | INFLUXENGINE="./data/engine"
62 |
63 | # is influxdb2 running?
64 | if ! isContainerRunning "$CONTAINER" ; then
65 | echo "Warning: $CONTAINER container not running - backup skipped"
66 | exit 0
67 | fi
68 |
69 | # docker-compose should have created the path to the backup directory
70 | if [ ! -d "$INFLUXBACKUP" ] ; then
71 | echo "$INFLUXBACKUP does not exist. This is usually created by docker-compose."
72 | echo "Has InfluxDB 2 been initialised properly?"
73 | exit 1
74 | fi
75 |
76 | # now we can begin
77 | echo "----- Starting $SCRIPT at $(date) -----"
78 |
79 | # create the file (sets ownership)
80 | touch "$BACKUP_TAR"
81 |
82 | # the influx backup directory needs to be empty
83 | if [ $(ls -1 "$INFLUXBACKUP" | wc -l) -gt 0 ] ; then
84 | echo "Erasing $INFLUXBACKUP"
85 | sudo rm "$INFLUXBACKUP"/*
86 | fi
87 |
88 | # tell influx to perform the backup
89 | echo "Telling InfluxDB 2 to create a backup"
90 | docker exec "$CONTAINER" influx backup /var/lib/backup
91 |
92 | # sweep the backup into a tar (sudo is needed because backup files are
93 | # created with owner root, group root, mode 600)
94 | echo "Collecting the backup files into a tar"
95 | sudo tar \
96 | -cf "$BACKUP_TAR" \
97 | -C "$INFLUXSTORE" \
98 | --exclude="$INFLUXENGINE" \
99 | .
100 |
101 | # report size of archive
102 | du -h "$BACKUP_TAR"
103 |
104 | echo "----- Finished $SCRIPT at $(date) -----"
--------------------------------------------------------------------------------
/scripts/iotstack_backup_mariadb:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # assumptions that can be overridden
10 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
11 | CONTAINER=${CONTAINER:-"mariadb"}
12 | DEFAULTFILENAME=${DEFAULTFILENAME:-"$CONTAINER-backup.tar.gz"}
13 |
14 | # the project name is the all-lower-case form of the folder name
15 | PROJECT=$(basename ${IOTSTACK,,})
16 |
17 | # useful function
18 | isContainerRunning() {
19 | if STATUS=$(curl -s --unix-socket /var/run/docker.sock http://localhost/containers/$1/json | jq .State.Status) ; then
20 | if [ "$STATUS" = "\"running\"" ] ; then
21 | return 0
22 | fi
23 | fi
24 | return 1
25 | }
26 |
27 | preferredCommand() {
28 | if [ -n "$(docker exec "$CONTAINER" which "$1")" ] ; then
29 | echo "$1"
30 | else
31 | echo "$2"
32 | fi
33 | }
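# example: "$(preferredCommand mariadb-dump mysqldump)" expands to
# "mariadb-dump" on images that ship it, and to "mysqldump" otherwise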
34 |
35 |
36 | # $1 is required and is either path to a .tar.gz or the path to a folder
37 | # $2 is optional and is the runtag (yyyy-mm-dd_hhmm.host-name)
38 | # $3 is optional and overrides the default file name
39 |
40 | case "$#" in
41 |
42 | 1)
43 | BACKUP_TAR_GZ=$(realpath "$1")
44 | ;;
45 |
46 | 2 | 3)
47 | BACKUP_TAR_GZ=$(realpath "$1/$2.${3:-"$DEFAULTFILENAME"}")
48 | ;;
49 |
50 | *)
51 | echo "Usage 1: $SCRIPT path/to/$DEFAULTFILENAME"
52 | echo "Usage 2: $SCRIPT path/to/backupdir runtag {override}"
53 | echo " (override defaults to $DEFAULTFILENAME)"
54 | exit 1
55 | ;;
56 |
57 | esac
58 |
59 | # fail safe if the file already exists - no accidental overwrites
60 | if [ -e "$BACKUP_TAR_GZ" ] ; then
61 | echo "Error: $BACKUP_TAR_GZ already exists - will not be overwritten"
62 | exit 1
63 | fi
64 |
65 | # assumptions
66 | COMPOSENAME="docker-compose.yml"
67 | COMPOSE="$IOTSTACK/$COMPOSENAME"
68 | VOLUMES="$IOTSTACK/volumes"
69 | MARIADB_VOLUMES="$VOLUMES/$CONTAINER"
70 | MARIADB_BACKUP="$MARIADB_VOLUMES/db_backup"
71 |
72 | # check that container is running
73 | if ! isContainerRunning "$CONTAINER" ; then
74 | echo "Warning: $CONTAINER not running - backup skipped"
75 | exit 0
76 | fi
77 |
78 | # make sure the database backup directory exists & has correct ownership & mode
79 | [ -d "$MARIADB_BACKUP" ] || sudo mkdir -m 755 -p "$MARIADB_BACKUP"
80 | [ $(stat -c "%U:%G" "$MARIADB_BACKUP") = "$USER:$USER" ] || sudo chown -R $USER:$USER "$MARIADB_BACKUP"
81 | [ $(stat -c "%a" "$MARIADB_BACKUP") = "755" ] || sudo chmod -R 755 "$MARIADB_BACKUP"
82 |
83 | # now we can begin
84 | echo "----- Starting $SCRIPT at $(date) -----"
85 |
86 | # create the file (sets ownership)
87 | touch "$BACKUP_TAR_GZ"
88 |
89 | # the database backup directory needs to be empty
90 | if [ $(ls -1 "$MARIADB_BACKUP" | wc -l) -gt 0 ] ; then
91 | echo "Erasing $MARIADB_BACKUP"
92 | sudo rm "$MARIADB_BACKUP"/*
93 | fi
94 |
95 |
96 | # tell MariaDB to take a backup
97 | echo "Telling $CONTAINER to create a portable backup"
98 | docker exec "$CONTAINER" bash -c "$(preferredCommand mariadb-dump mysqldump) --single-transaction -p\$MYSQL_ROOT_PASSWORD \$MYSQL_DATABASE >/backup/backup.sql"
99 |
100 | # perform the backup (relative to ~/IOTstack/volumes/mariadb)
101 | echo "Collecting the backup files into a tar.gz"
102 | sudo tar \
103 | -czf "$BACKUP_TAR_GZ" \
104 | -C "$MARIADB_BACKUP" \
105 | .
106 |
107 | # report size of archive
108 | du -h "$BACKUP_TAR_GZ"
109 |
110 | echo "----- Finished $SCRIPT at $(date) -----"
--------------------------------------------------------------------------------
/scripts/iotstack_backup_nextcloud:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # assumptions that can be overridden
10 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
11 | CONTAINER="${CONTAINER:-"nextcloud"}"
12 | DEFAULTFILENAME=${DEFAULTFILENAME:-"$CONTAINER-backup.tar.gz"}
13 |
14 | # the project name is the all-lower-case form of the folder name
15 | PROJECT=$(basename ${IOTSTACK,,})
16 |
17 | # the database container is
18 | CONTAINER_DB="${CONTAINER}_db"
19 |
20 | # useful function
21 | isContainerRunning() {
22 | if STATUS=$(curl -s --unix-socket /var/run/docker.sock http://localhost/containers/$1/json | jq .State.Status) ; then
23 | if [ "$STATUS" = "\"running\"" ] ; then
24 | return 0
25 | fi
26 | fi
27 | return 1
28 | }
29 |
30 | preferredCommand() {
31 | if [ -n "$(docker exec "$CONTAINER_DB" which "$1")" ] ; then
32 | echo "$1"
33 | else
34 | echo "$2"
35 | fi
36 | }
37 |
38 |
39 | # $1 is required and is either path to a .tar.gz or the path to a folder
40 | # $2 is optional and is the runtag (yyyy-mm-dd_hhmm.host-name)
41 | # $3 is optional and overrides the default file name
42 |
43 | case "$#" in
44 |
45 | 1)
46 | BACKUP_TAR_GZ=$(realpath "$1")
47 | ;;
48 |
49 | 2 | 3)
50 | BACKUP_TAR_GZ=$(realpath "$1/$2.${3:-"$DEFAULTFILENAME"}")
51 | ;;
52 |
53 | *)
54 | echo "Usage 1: $SCRIPT path/to/$DEFAULTFILENAME"
55 | echo "Usage 2: $SCRIPT path/to/backupdir runtag {override}"
56 | echo " (override defaults to $DEFAULTFILENAME)"
57 | exit 1
58 | ;;
59 |
60 | esac
61 |
62 | # fail safe if the file already exists - no accidental overwrites
63 | if [ -e "$BACKUP_TAR_GZ" ] ; then
64 | echo "Error: $BACKUP_TAR_GZ already exists - will not be overwritten"
65 | exit 1
66 | fi
67 |
68 | # assumptions
69 | COMPOSENAME="docker-compose.yml"
70 | COMPOSE="$IOTSTACK/$COMPOSENAME"
71 | VOLUMES="$IOTSTACK/volumes"
72 | NEXTCLOUD_VOLUMES="$VOLUMES/$CONTAINER"
73 | NEXTCLOUD_DB_BACKUP="$NEXTCLOUD_VOLUMES/db_backup"
74 |
75 | # check that containers are running
76 | if ! isContainerRunning "$CONTAINER" || ! isContainerRunning "$CONTAINER_DB" ; then
77 | echo "Warning: $CONTAINER and/or $CONTAINER_DB not running - backup skipped"
78 | exit 0
79 | fi
80 |
81 | # make sure the nextcloud_db backup directory exists & has correct ownership & mode
82 | [ -d "$NEXTCLOUD_DB_BACKUP" ] || sudo mkdir -m 755 -p "$NEXTCLOUD_DB_BACKUP"
83 | [ $(stat -c "%U:%G" "$NEXTCLOUD_DB_BACKUP") = "$USER:$USER" ] || sudo chown -R $USER:$USER "$NEXTCLOUD_DB_BACKUP"
84 | [ $(stat -c "%a" "$NEXTCLOUD_DB_BACKUP") = "755" ] || sudo chmod -R 755 "$NEXTCLOUD_DB_BACKUP"
85 |
86 | # now we can begin
87 | echo "----- Starting $SCRIPT at $(date) -----"
88 |
89 | # create the file (sets ownership)
90 | touch "$BACKUP_TAR_GZ"
91 |
92 | # the nextcloud_db backup directory needs to be empty
93 | if [ $(ls -1 "$NEXTCLOUD_DB_BACKUP" | wc -l) -gt 0 ] ; then
94 | echo "Erasing $NEXTCLOUD_DB_BACKUP"
95 | sudo rm "$NEXTCLOUD_DB_BACKUP"/*
96 | fi
97 |
98 | # create a file to hold the list of inclusions
99 | BACKUP_INCLUDE="$(mktemp -p /dev/shm/)"
100 |
101 | # define the folders to be included in the backup.
102 | cat <<-INCLUSIONS >"$BACKUP_INCLUDE"
103 | ./db_backup
104 | ./html/config
105 | ./html/custom_apps
106 | ./html/data
107 | ./html/themes
108 | INCLUSIONS
109 |
110 | # check that the items to be included exist
111 | echo "Paths included in the backup:"
112 | for INCLUDE in $(cat "$BACKUP_INCLUDE"); do
113 | I=$(realpath "$NEXTCLOUD_VOLUMES/$INCLUDE")
114 | if [ -d "$I" ]; then
115 | echo " $I"
116 | else
117 | echo "Error: $I does not exist. This may indicate a problem with your installation - backup skipped."
118 | exit 1
119 | fi
120 | done
121 |
122 | # tell nextcloud to go into maintenance mode
123 | echo "Putting $CONTAINER into maintenance mode"
124 | docker exec -u www-data -it "$CONTAINER" php occ maintenance:mode --on
125 |
126 | # tell MariaDB to take a backup
127 | echo "Telling $CONTAINER_DB (MariaDB) to create a portable backup"
128 | docker exec "$CONTAINER_DB" bash -c "$(preferredCommand mariadb-dump mysqldump) --single-transaction -p\$MYSQL_ROOT_PASSWORD \$MYSQL_DATABASE >/backup/backup.sql"
129 |
130 | # perform the backup (relative to ~/IOTstack/volumes/nextcloud)
131 | echo "Collecting the backup files into a tar.gz"
132 | sudo tar \
133 | -czf "$BACKUP_TAR_GZ" \
134 | -C "$NEXTCLOUD_VOLUMES" \
135 | -T "$BACKUP_INCLUDE"
136 |
137 | # tell nextcloud to come out of maintenance mode
138 | echo "Taking $CONTAINER out of maintenance mode"
139 | docker exec -u www-data -it "$CONTAINER" php occ maintenance:mode --off
140 |
141 | # report size of archive
142 | du -h "$BACKUP_TAR_GZ"
143 |
144 | echo "----- Finished $SCRIPT at $(date) -----"
--------------------------------------------------------------------------------
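The `INCLUSIONS` here-document plus `tar -T` pattern used by the Nextcloud script above can be exercised in isolation. This is a minimal sketch using throw-away temporary directories; all paths below are fabricated for the demonstration, not the real IOTstack layout:

```shell
# build a scratch "volumes" tree resembling the folders the script expects
WORK=$(mktemp -d)
mkdir -p "$WORK/vol/db_backup" "$WORK/vol/html/config"

# list the folders to include, relative to the -C directory
INCLUDE_FILE=$(mktemp)
cat <<INCLUSIONS >"$INCLUDE_FILE"
./db_backup
./html/config
INCLUSIONS

# archive only the listed paths, relative to $WORK/vol
tar -czf "$WORK/backup.tar.gz" -C "$WORK/vol" -T "$INCLUDE_FILE"

# show what landed in the archive
LISTING=$(tar -tzf "$WORK/backup.tar.gz")
echo "$LISTING"

# clean up the scratch area
rm -rf "$WORK" "$INCLUDE_FILE"
```

The `-C` option keeps every archived path relative, which is what lets the matching restore script unpack into a differently-located volumes directory.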
/scripts/iotstack_backup_postgres:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # assumptions that can be overridden
10 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
11 | CONTAINER=${CONTAINER:-"postgres"}
12 | DEFAULTFILENAME=${DEFAULTFILENAME:-"$CONTAINER-backup.sql.gz.tar"}
13 |
14 | # the project name is the all-lower-case form of the folder name
15 | PROJECT=$(basename ${IOTSTACK,,})
16 |
17 | # useful function
18 | isContainerRunning() {
19 | if STATUS=$(curl -s --unix-socket /var/run/docker.sock http://localhost/containers/$1/json | jq .State.Status) ; then
20 | if [ "$STATUS" = "\"running\"" ] ; then
21 | return 0
22 | fi
23 | fi
24 | return 1
25 | }
26 |
27 | # $1 is required and is either path to a .tar.gz or the path to a folder
28 | # $2 is optional and is the runtag (yyyy-mm-dd_hhmm.host-name)
29 | # $3 is optional and overrides the default file name
30 |
31 | case "$#" in
32 |
33 | 1)
34 | BACKUP_SQL_GZ_TAR=$(realpath "$1")
35 | ;;
36 |
37 | 2 | 3)
38 | BACKUP_SQL_GZ_TAR=$(realpath "$1/$2.${3:-"$DEFAULTFILENAME"}")
39 | ;;
40 |
41 | *)
42 | echo "Usage 1: $SCRIPT path/to/$DEFAULTFILENAME"
43 | echo "Usage 2: $SCRIPT path/to/backupdir runtag {override}"
44 | echo " (override defaults to $DEFAULTFILENAME)"
45 | exit 1
46 | ;;
47 |
48 | esac
49 |
50 | # fail safe if the file already exists - no accidental overwrites
51 | if [ -e "$BACKUP_SQL_GZ_TAR" ] ; then
52 | echo "Error: $BACKUP_SQL_GZ_TAR already exists - will not be overwritten"
53 | exit 1
54 | fi
55 |
56 | # assumptions
57 | COMPOSENAME="docker-compose.yml"
58 | COMPOSE="$IOTSTACK/$COMPOSENAME"
59 | VOLUMES="$IOTSTACK/volumes"
60 | POSTGRES_VOLUMES="$VOLUMES/$CONTAINER"
61 | POSTGRES_BACKUP="$POSTGRES_VOLUMES/db_backup"
62 |
63 | # check that container is running
64 | if ! isContainerRunning "$CONTAINER" ; then
65 | echo "Warning: $CONTAINER not running - backup skipped"
66 | exit 0
67 | fi
68 |
69 | # check that the backup directory exists
70 | if [ ! -d "$POSTGRES_BACKUP" ] ; then
71 |
72 | echo "Error: the $POSTGRES_BACKUP directory does not exist."
73 | echo ""
74 | echo "This is probably because your service definition for PostgreSQL does not"
75 | echo "include the following volume mapping:"
76 | echo ""
77 | echo " - ./volumes/postgres/db_backup:/backup"
78 | echo ""
79 | echo "Please compare your service definition with the IOTstack template at:"
80 | echo ""
81 | echo " $IOTSTACK/.templates/postgres/service.yml"
82 | echo ""
83 | echo "and ensure that your active service definition in $COMPOSENAME"
84 | echo "accurately reflects the version in the template."
85 | echo ""
86 |
87 | exit 1
88 |
89 | fi
90 |
91 | # now we can begin
92 | echo "----- Starting $SCRIPT at $(date) -----"
93 |
94 | # create the file (sets ownership)
95 | touch "$BACKUP_SQL_GZ_TAR"
96 |
97 | # the database backup directory needs to be empty
98 | if [ $(ls -1 "$POSTGRES_BACKUP" | wc -l) -gt 0 ] ; then
99 | echo "Erasing $POSTGRES_BACKUP"
100 | sudo rm "$POSTGRES_BACKUP"/*
101 | fi
102 |
103 |
104 | # tell postgres to take a backup
105 | echo "Telling $CONTAINER to create a backup"
106 | docker exec "$CONTAINER" bash -c 'pg_dumpall -U $POSTGRES_USER | gzip > /backup/postgres_backup.sql.gz'
107 |
108 | # perform the backup (relative to ~/IOTstack/volumes/postgres)
109 | echo "Collecting the backup file (a .gz) into a .tar"
110 | sudo tar \
111 | -cf "$BACKUP_SQL_GZ_TAR" \
112 | -C "$POSTGRES_BACKUP" \
113 | .
114 |
115 | # report size of archive
116 | du -h "$BACKUP_SQL_GZ_TAR"
117 |
118 | echo "----- Finished $SCRIPT at $(date) -----"
--------------------------------------------------------------------------------
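All of the per-container backup scripts derive the output filename the same way: one argument is taken as a full path, while two or three arguments mean backup directory + runtag + optional filename override. A sketch of that convention (the directory and runtag below are invented values):

```shell
DEFAULTFILENAME="postgres-backup.sql.gz.tar"

# $1 = backup directory, $2 = runtag, $3 = optional filename override
derive_backup_path() {
   echo "$1/$2.${3:-$DEFAULTFILENAME}"
}

# two-argument form falls back to DEFAULTFILENAME
NAME_DEFAULT=$(derive_backup_path /var/backups 2024-01-01_0200.myhost)
echo "$NAME_DEFAULT"

# three-argument form uses the override
NAME_OVERRIDE=$(derive_backup_path /var/backups 2024-01-01_0200.myhost custom.tar)
echo "$NAME_OVERRIDE"
```

The `${3:-$DEFAULTFILENAME}` expansion is what makes the third argument optional without any explicit branching.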
/scripts/iotstack_backup_quota_report:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # the name of the running script (ie the script can be renamed)
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # check dependencies
10 | if [ -z "$(which mosquitto_pub)" -o -z "$(which rclone)" ] ; then
11 | echo "Missing dependencies. Please re-run install_scripts.sh."
12 | exit 1
13 | fi
14 |
15 | # acquire parameters. Defaults assume:
16 | # 1. the rclone remote to be queried is "dropbox".
17 | # 2. a prefix string of "/quota" is appropriate.
18 | # 3. both the script and an MQTT broker are running on the same host.
19 | # 4. the MQTT broker is listening on port 1883
20 | RCLONEREMOTE=${1:-"dropbox"}
21 | TOPICPREFIX=${2:-"/quota"}
22 | MQTTBROKER=${3:-"127.0.0.1"}
23 | MQTTPORT=${4:-"1883"}
24 |
25 | # running interactively ?
26 | if [ ! -t 0 ] ; then
27 |
28 | # no! redirect output and errors to a log file
29 | mkdir -p "$HOME/Logs"
30 | exec >> "$HOME/Logs/$SCRIPT.log" 2>&1
31 |
32 | fi
33 |
34 | # function to invoke rclone to fetch disk quota information
35 | # $1 = required remote name for rclone (eg "dropbox:" or "/")
36 | # $2 = required topic suffix (eg "dropbox" or "local")
37 | fetch() {
38 |
39 | # invoke rclone to fetch quota info in json format and reduce to a single line of output
40 | local QUOTA ; QUOTA=$(rclone about "$1" --json | tr '\n\t' ' ')
41 |
42 | # did the operation succeed and return a non-empty string?
43 | if [ $? -eq 0 -a -n "$QUOTA" ] ; then
44 |
45 |
46 | # yes! publish via MQTT
47 | mosquitto_pub -h "$MQTTBROKER" -p "$MQTTPORT" -t "$TOPICPREFIX/$2" -m "$QUOTA"
48 |
49 | else
50 |
51 | # no! record failure in the log
52 | echo "rclone is unable to fetch $2 quota information"
53 |
54 | fi
55 |
56 | }
57 |
58 | # the syntax for rclone remotes is:
59 | # $1 = the name of the remote followed by a colon
60 | # $2 = the name of the remote without the trailing colon
61 | fetch "$RCLONEREMOTE:" "$RCLONEREMOTE"
62 |
63 | # the syntax for the local file system is:
64 | # $1 = "/" (no trailing colon)
65 | # $2 = the short hostname of the local system. Note:
66 | # a. The host name is assumed to follow DNS rules and contain
67 | # ONLY letters, digits and hyphens (the result is undefined
68 | # if a non-DNS-compliant host name is being used).
69 | # b. "hostname -s" is used rather than $HOSTNAME because this
70 | # improves portability (eg on macOS).
71 | fetch "/" "$(hostname -s)"
72 |
--------------------------------------------------------------------------------
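`rclone about --json` reports fields such as `total`, `used` and `free` in bytes, and the quota report above publishes that JSON verbatim over MQTT. If a subscriber wanted to post-process the payload without `jq`, a rough sketch looks like this (the payload below is fabricated, and the `sed` extraction is only safe for flat, known-shape JSON):

```shell
# a fabricated payload in the shape rclone emits
QUOTA='{"total":2147483648,"used":1073741824,"free":1073741824}'

# crude field extraction with sed - fine for this flat structure only
USED=$(echo "$QUOTA" | sed -n 's/.*"used":\([0-9]*\).*/\1/p')
TOTAL=$(echo "$QUOTA" | sed -n 's/.*"total":\([0-9]*\).*/\1/p')

# integer percentage used
PERCENT=$(( USED * 100 / TOTAL ))
echo "${PERCENT}% used"
```

In practice a consumer such as Node-RED or Telegraf would parse the JSON properly; this just shows that the published message is directly machine-readable.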
/scripts/iotstack_backup_wordpress:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # assumptions that can be overridden
10 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
11 | CONTAINER="${CONTAINER:-"wordpress"}"
12 | DEFAULTFILENAME=${DEFAULTFILENAME:-"$CONTAINER-backup.tar.gz"}
13 |
14 | # the project name is the all-lower-case form of the folder name
15 | PROJECT=$(basename ${IOTSTACK,,})
16 |
17 | # the database container is
18 | CONTAINER_DB="${CONTAINER}_db"
19 |
20 | # useful function
21 | isContainerRunning() {
22 | if STATUS=$(curl -s --unix-socket /var/run/docker.sock http://localhost/containers/$1/json | jq .State.Status) ; then
23 | if [ "$STATUS" = "\"running\"" ] ; then
24 | return 0
25 | fi
26 | fi
27 | return 1
28 | }
29 |
30 | preferredCommand() {
31 | if [ -n "$(docker exec "$CONTAINER_DB" which "$1")" ] ; then
32 | echo "$1"
33 | else
34 | echo "$2"
35 | fi
36 | }
37 |
38 |
39 | # $1 is required and is either path to a .tar.gz or the path to a folder
40 | # $2 is optional and is the runtag (yyyy-mm-dd_hhmm.host-name)
41 | # $3 is optional and overrides the default file name
42 |
43 | case "$#" in
44 |
45 | 1)
46 | BACKUP_TAR_GZ=$(realpath "$1")
47 | ;;
48 |
49 | 2 | 3)
50 | BACKUP_TAR_GZ=$(realpath "$1/$2.${3:-"$DEFAULTFILENAME"}")
51 | ;;
52 |
53 | *)
54 | echo "Usage 1: $SCRIPT path/to/$DEFAULTFILENAME"
55 | echo "Usage 2: $SCRIPT path/to/backupdir runtag {override}"
56 | echo " (override defaults to $DEFAULTFILENAME)"
57 | exit 1
58 | ;;
59 |
60 | esac
61 |
62 | # fail safe if the file already exists - no accidental overwrites
63 | if [ -e "$BACKUP_TAR_GZ" ] ; then
64 | echo "Error: $BACKUP_TAR_GZ already exists - will not be overwritten"
65 | exit 1
66 | fi
67 |
68 | # assumptions
69 | COMPOSENAME="docker-compose.yml"
70 | COMPOSE="$IOTSTACK/$COMPOSENAME"
71 | VOLUMES="$IOTSTACK/volumes"
72 | WORDPRESS_VOLUMES="$VOLUMES/$CONTAINER"
73 | WORDPRESS_DIR="html"
74 | WORDPRESS_DB_DIR="db_backup"
75 | WORDPRESS_DB_BACKUP="$WORDPRESS_VOLUMES/$WORDPRESS_DB_DIR"
76 |
77 | # check that containers are running
78 | if ! isContainerRunning "$CONTAINER" || ! isContainerRunning "$CONTAINER_DB" ; then
79 | echo "Warning: $CONTAINER and/or $CONTAINER_DB not running - backup skipped"
80 | exit 0
81 | fi
82 |
83 | # make sure the wordpress_db backup directory exists & has correct ownership & mode
84 | [ -d "$WORDPRESS_DB_BACKUP" ] || sudo mkdir -m 755 -p "$WORDPRESS_DB_BACKUP"
85 | [ $(stat -c "%U:%G" "$WORDPRESS_DB_BACKUP") = "$USER:$USER" ] || sudo chown -R $USER:$USER "$WORDPRESS_DB_BACKUP"
86 | [ $(stat -c "%a" "$WORDPRESS_DB_BACKUP") = "755" ] || sudo chmod -R 755 "$WORDPRESS_DB_BACKUP"
87 |
88 | # now we can begin
89 | echo "----- Starting $SCRIPT at $(date) -----"
90 |
91 | # create the file (sets ownership)
92 | touch "$BACKUP_TAR_GZ"
93 |
94 | # the wordpress_db backup directory needs to be empty
95 | if [ $(ls -1 "$WORDPRESS_DB_BACKUP" | wc -l) -gt 0 ] ; then
96 | echo "Erasing $WORDPRESS_DB_BACKUP"
97 | sudo rm "$WORDPRESS_DB_BACKUP"/*
98 | fi
99 |
100 | # create a file to hold the list of inclusions
101 | BACKUP_INCLUDE="$(mktemp -p /dev/shm/)"
102 |
103 | # define the folders to be included in the backup.
104 | cat <<-INCLUSIONS >"$BACKUP_INCLUDE"
105 | ./$WORDPRESS_DB_DIR
106 | ./$WORDPRESS_DIR
107 | INCLUSIONS
108 |
109 | # check that the items to be included exist
110 | echo "Paths included in the backup:"
111 | for INCLUDE in $(cat "$BACKUP_INCLUDE"); do
112 | I=$(realpath "$WORDPRESS_VOLUMES/$INCLUDE")
113 | if [ -d "$I" ]; then
114 | echo " $I"
115 | else
116 | echo "Error: $I does not exist. This may indicate a problem with your installation - backup skipped."
117 | exit 1
118 | fi
119 | done
120 |
121 | # tell MariaDB to take a backup
122 | echo "Telling $CONTAINER_DB (MariaDB) to create a portable backup"
123 | docker exec "$CONTAINER_DB" bash -c "$(preferredCommand mariadb-dump mysqldump) --single-transaction -p\$MYSQL_ROOT_PASSWORD \$MYSQL_DATABASE >/backup/backup.sql"
124 |
125 | # perform the backup (relative to ~/IOTstack/volumes/wordpress)
126 | echo "Collecting the backup files into a tar.gz"
127 | sudo tar \
128 | -czf "$BACKUP_TAR_GZ" \
129 | -C "$WORDPRESS_VOLUMES" \
130 | -T "$BACKUP_INCLUDE"
131 |
132 | # report size of archive
133 | du -h "$BACKUP_TAR_GZ"
134 |
135 | echo "----- Finished $SCRIPT at $(date) -----"
--------------------------------------------------------------------------------
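`preferredCommand` probes inside the database container for the modern `mariadb-dump` and falls back to the legacy `mysqldump` name. The same fallback idea can be sketched on the host using `command -v` (the first candidate below is deliberately bogus so the fallback fires):

```shell
# return $1 if it exists on the PATH, otherwise fall back to $2
preferred() {
   if command -v "$1" >/dev/null 2>&1 ; then
      echo "$1"
   else
      echo "$2"
   fi
}

# "definitely-not-a-real-command" is a stand-in that should never exist
CHOSEN=$(preferred definitely-not-a-real-command echo)
echo "$CHOSEN"
```

The script's version uses `docker exec … which` instead of `command -v` because the probe has to run inside the container, where the dump tools actually live.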
/scripts/iotstack_migration_backup:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # should run without arguments
10 | [ $# -ne 0 ] && echo "$SCRIPT parameter(s) $@ ignored"
11 |
12 | # assumptions that can be overridden
13 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
14 | RUNTAG=${RUNTAG:-"migration"}
15 |
16 | # the project name is the all-lower-case form of the folder name
17 | PROJECT=$(basename ${IOTSTACK,,})
18 |
19 | # useful function
20 | isStackUp() {
21 | if COUNT=$( \
22 | curl -s --unix-socket /var/run/docker.sock http://localhost/containers/json | \
23 | jq -c ".[].Labels.\"com.docker.compose.project\"" | \
24 | grep -c "\"$1\"$" \
25 | ) ; then
26 | if [ $COUNT -gt 0 ] ; then
27 | return 0
28 | fi
29 | fi
30 | return 1
31 | }
32 |
33 | # check dependencies
34 | if [ -z "$(which curl)" -o -z "$(which jq)" ] ; then
35 | echo "Missing dependencies. Please re-run install_scripts.sh."
36 | exit 1
37 | fi
38 |
39 | # assumption
40 | COMPOSE="$IOTSTACK/docker-compose.yml"
41 |
42 | # assertion
43 |
44 | # check the key assumptions
45 | if ! [ -d "$IOTSTACK" -a -e "$COMPOSE" ] ; then
46 | echo "Error: One of the following does not exist:"
47 | echo " $IOTSTACK"
48 | echo " $COMPOSE"
49 | echo "This may indicate a problem with your installation."
50 | exit 1
51 | fi
52 |
53 | # check IOTstack seems to be running
54 | if ! isStackUp "$PROJECT" ; then
55 | echo "Warning: $PROJECT does not seem to be running. The general backup"
56 | echo " will work but backups for database and other special-case"
57 | echo " containers will be skipped."
58 | fi
59 |
60 | # perform the backups
61 | iotstack_backup_general "$PWD" "$RUNTAG"
62 | iotstack_backup_influxdb "$PWD" "$RUNTAG"
63 | iotstack_backup_influxdb2 "$PWD" "$RUNTAG"
64 | iotstack_backup_nextcloud "$PWD" "$RUNTAG"
65 | iotstack_backup_mariadb "$PWD" "$RUNTAG"
66 | iotstack_backup_postgres "$PWD" "$RUNTAG"
67 | iotstack_backup_wordpress "$PWD" "$RUNTAG"
68 | iotstack_backup_gitea "$PWD" "$RUNTAG"
69 |
70 | echo "----- Finished $SCRIPT at $(date) -----"
71 |
--------------------------------------------------------------------------------
/scripts/iotstack_migration_restore:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # should run without arguments
10 | [ $# -ne 0 ] && echo "$SCRIPT parameter(s) $@ ignored"
11 |
12 | # assumptions that can be overridden
13 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
14 | RUNTAG=${RUNTAG:-"migration"}
15 |
16 | # the project name is the all-lower-case form of the folder name
17 | PROJECT=$(basename ${IOTSTACK,,})
18 |
19 | # useful function
20 | isStackUp() {
21 | if COUNT=$( \
22 | curl -s --unix-socket /var/run/docker.sock http://localhost/containers/json | \
23 | jq -c ".[].Labels.\"com.docker.compose.project\"" | \
24 | grep -c "\"$1\"$" \
25 | ) ; then
26 | if [ $COUNT -gt 0 ] ; then
27 | return 0
28 | fi
29 | fi
30 | return 1
31 | }
32 |
33 | # check dependencies
34 | if [ -z "$(which curl)" -o -z "$(which jq)" ] ; then
35 | echo "Missing dependencies. Please re-run install_scripts.sh."
36 | exit 1
37 | fi
38 |
39 | # IOTstack directory must exist
40 | [ ! -d "$IOTSTACK" ] && echo "Error: $IOTSTACK does not exist" && exit 1
41 |
42 | # but no part of IOTstack can be running
43 | if isStackUp "$PROJECT" ; then
44 | echo "Error: $PROJECT must not be running"
45 | exit 1
46 | fi
47 |
48 | # the compose file must not exist
49 | COMPOSE="$IOTSTACK/docker-compose.yml"
50 | [ -e "$COMPOSE" ] && echo "Error: $COMPOSE already exists" && exit 1
51 |
52 | # the .env must not exist
53 | ENVFILE="$IOTSTACK/.env"
54 | [ -e "$ENVFILE" ] && echo "Error: $ENVFILE already exists" && exit 1
55 |
56 | # in most cases, services and volumes will not exist but, if they do,
57 | # the subordinate scripts will adopt merging behaviour
58 |
59 | # try to perform the restores
60 | iotstack_restore_general "$PWD" "$RUNTAG"
61 | iotstack_restore_influxdb "$PWD" "$RUNTAG"
62 | iotstack_restore_influxdb2 "$PWD" "$RUNTAG"
63 | iotstack_restore_nextcloud "$PWD" "$RUNTAG"
64 | iotstack_restore_mariadb "$PWD" "$RUNTAG"
65 | iotstack_restore_postgres "$PWD" "$RUNTAG"
66 | iotstack_restore_wordpress "$PWD" "$RUNTAG"
67 | iotstack_restore_gitea "$PWD" "$RUNTAG"
68 |
69 | echo "----- Finished $SCRIPT at $(date) -----"
70 |
--------------------------------------------------------------------------------
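`isStackUp` counts running containers whose compose project label exactly matches the project name. The anchored `grep -c` pattern matters: a substring match would false-positive on a similarly-named project. A sketch against canned label output (the label values are invented):

```shell
# canned output in the shape produced by:
#   jq -c '.[].Labels."com.docker.compose.project"'
LABELS='"iotstack"
"iotstack"
"iotstack2"'

# anchored match: count only lines ending in the exact quoted name,
# so "iotstack2" is not counted as "iotstack"
COUNT=$(printf '%s\n' "$LABELS" | grep -c '"iotstack"$')
echo "$COUNT"
```

`grep -c` also exits non-zero when the count is zero, which is what lets the scripts fold "no matching containers" into the same failure path as a broken Docker-socket query.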
/scripts/iotstack_reload_influxdb:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # this script should be run without arguments
10 | [ $# -ne 0 ] && echo "$SCRIPT parameter(s) $@ ignored"
11 |
12 | # assumptions that can be overridden
13 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
14 | CONTAINER="${CONTAINER:-"influxdb"}"
15 |
16 | # the project name is the all-lower-case form of the folder name
17 | PROJECT=$(basename ${IOTSTACK,,})
18 |
19 | # useful function
20 | isContainerRunning() {
21 | if STATUS=$(curl -s --unix-socket /var/run/docker.sock http://localhost/containers/$1/json | jq .State.Status) ; then
22 | if [ "$STATUS" = "\"running\"" ] ; then
23 | return 0
24 | fi
25 | fi
26 | return 1
27 | }
28 |
29 | # assumptions
30 | COMPOSENAME="docker-compose.yml"
31 | CQSNAME="continuous-queries.influxql"
32 | EPILOGNAME="iotstack_restore_$CONTAINER.epilog"
33 |
34 | COMPOSE="$IOTSTACK/$COMPOSENAME"
35 | BACKUPS="$IOTSTACK/backups"
36 | EXTERNALEPILOG="$IOTSTACK/services/$CONTAINER/$EPILOGNAME"
37 | INTERNALEPILOG="/$(uuidgen).epilog"
38 | EXTERNALBACKUP="$BACKUPS/$CONTAINER"
39 | EXTERNALBACKUPDB="$EXTERNALBACKUP/db"
40 | INTERNALBACKUPDB="/var/lib/influxdb/backup"
41 | INFLUXDATA="$IOTSTACK/volumes/$CONTAINER/data"
42 | EXTERNALCQS="$EXTERNALBACKUPDB/$CQSNAME"
43 | INTERNALCQS="$INTERNALBACKUPDB/$CQSNAME"
44 |
45 | # ensure the backup directory exists
46 | [ -d "$BACKUPS" ] || mkdir "$BACKUPS"
47 |
48 | # is influx running?
49 | if isContainerRunning "$CONTAINER" ; then
50 |
51 | # yes! execute the backup command
52 | echo "backing up $CONTAINER databases"
53 | docker exec "$CONTAINER" influxd backup -portable "$INTERNALBACKUPDB"
54 |
55 | # attempt to collect any continuous queries
56 | echo "Extracting any continuous queries"
57 | docker exec "$CONTAINER" bash -c \
58 | "influx -execute 'SHOW CONTINUOUS QUERIES' \
59 | | grep 'CREATE CONTINUOUS QUERY' \
60 | | sed 's/^.*CREATE CONTINUOUS QUERY/CREATE CONTINUOUS QUERY/g' \
61 | >$INTERNALCQS"
62 |
63 | # remove if an empty file was created (ie no continuous queries)
64 | [ -f "$EXTERNALCQS" -a ! -s "$EXTERNALCQS" ] && rm "$EXTERNALCQS"
65 |
66 | echo "deactivating $CONTAINER"
67 | docker-compose -f "$COMPOSE" rm --force --stop -v "$CONTAINER"
68 |
69 | echo "removing the running database"
70 | sudo rm -rf "$INFLUXDATA"
71 |
72 | echo "restarting $CONTAINER"
73 | docker-compose -f "$COMPOSE" up -d "$CONTAINER"
74 |
75 | # wait for influx to be ready
76 | while ! docker exec "$CONTAINER" curl -s "http://localhost:8086" >/dev/null 2>&1 ; do
77 | echo "waiting for $CONTAINER to become ready"
78 | sleep 1
79 | done
80 |
81 | echo "reloading the influx databases"
82 | docker exec "$CONTAINER" influxd restore -portable "$INTERNALBACKUPDB"
83 |
84 | # are there any continuous queries to be reloaded?
85 | echo "Checking for optional continuous queries: $EXTERNALCQS"
86 | if [ -f "$EXTERNALCQS" ] ; then
87 |
88 | # yes! tell influx to load the file at the internal path)
89 | echo " Telling influx to reload continuous queries"
90 | docker exec "$CONTAINER" influx -import -path "$INTERNALCQS"
91 |
92 | fi
93 |
94 | # does the hook script exist?
95 | echo "Checking for optional epilog: $EXTERNALEPILOG"
96 | if [ -f "$EXTERNALEPILOG" ] ; then
97 |
98 | # yes! copy hook script into the container at the working directory
99 | echo " Epilog found - copying into container"
100 | docker cp "$EXTERNALEPILOG" "$CONTAINER:$INTERNALEPILOG"
101 |
102 | # tell influx to process the hook script (also in the working dir)
103 | echo " Telling influx to process epilog"
104 | docker exec "$CONTAINER" influx -import -path "$INTERNALEPILOG"
105 |
106 | # the hook script vanishes the next time the container is
107 | # recreated so there is no need to clean up.
108 |
109 | else
110 |
111 | echo " No epilog found"
112 |
113 | fi
114 |
115 | echo "The $CONTAINER databases should be good to go"
116 |
117 | else
118 |
119 | echo "$CONTAINER must be running when $SCRIPT is started"
120 |
121 | fi
122 |
123 | echo "$SCRIPT completed"
--------------------------------------------------------------------------------
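The readiness loop in the reload script polls once per second until InfluxDB answers on port 8086. Here is the same loop shape with the `docker exec … curl` probe replaced by a stand-in that succeeds on its third call, so the control flow can be seen without a running container:

```shell
ATTEMPT=0

# stand-in for: docker exec "$CONTAINER" curl -s http://localhost:8086
probe() {
   ATTEMPT=$((ATTEMPT + 1))
   [ "$ATTEMPT" -ge 3 ]
}

# identical control flow to the script's wait loop (sleep omitted here)
while ! probe ; do
   :
done

echo "ready after $ATTEMPT attempts"
```

Because the function runs in the current shell (no subshell), the attempt counter survives between iterations, just as the real loop relies on the container's state changing between polls.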
/scripts/iotstack_reload_influxdb2:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # assumptions that can be overridden
10 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
11 | CONTAINER="${CONTAINER:-"influxdb2"}"
12 |
13 | # the project name is the all-lower-case form of the folder name
14 | PROJECT=$(basename ${IOTSTACK,,})
15 |
16 | # useful function
17 | isContainerRunning() {
18 | if STATUS=$(curl -s --unix-socket /var/run/docker.sock http://localhost/containers/$1/json | jq .State.Status) ; then
19 | if [ "$STATUS" = "\"running\"" ] ; then
20 | return 0
21 | fi
22 | fi
23 | return 1
24 | }
25 |
26 | # the compose file is
27 | COMPOSENAME="docker-compose.yml"
28 | COMPOSE="$IOTSTACK/$COMPOSENAME"
29 |
30 | # the persistent store is
31 | INFLUXSTORE="$IOTSTACK/volumes/$CONTAINER"
32 |
33 | # the backup directory is
34 | INFLUXBACKUP="$INFLUXSTORE/backup"
35 |
36 | # the engine directory is
37 | INFLUXENGINE="$INFLUXSTORE/data/engine"
38 |
39 | # ensure the backup directory exists
40 | if [ ! -d "$INFLUXBACKUP" ] ; then
41 | echo "$INFLUXBACKUP does not exist. This is usually created by docker-compose."
42 | echo "Has $CONTAINER been initialised properly?"
43 | exit 1
44 | fi
45 |
46 | # is the container running?
47 | if isContainerRunning "$CONTAINER" ; then
48 |
49 | # yes! prepare the environment
50 | echo "Clearing $INFLUXBACKUP"
51 | sudo rm "$INFLUXBACKUP"/*
52 |
53 | # run a backup
54 | echo "backing up $CONTAINER databases"
55 | docker exec "$CONTAINER" influx backup /var/lib/backup
56 |
57 | # stop the container
58 | echo "deactivating $CONTAINER"
59 | docker-compose -f "$COMPOSE" rm --force --stop -v "$CONTAINER"
60 |
61 | # erase the engine
62 | echo "removing the running databases"
63 | sudo rm -rf "$INFLUXENGINE"
64 |
65 | # start the container
66 | echo "starting the container"
67 | docker-compose -f "$COMPOSE" up -d "$CONTAINER"
68 |
69 | # wait for the container to be ready
70 | while ! docker exec "$CONTAINER" influx ping >/dev/null 2>&1 ; do
71 | echo "waiting for $CONTAINER to become ready"
72 | sleep 1
73 | done
74 |
75 | # restore from the backup just taken
76 | docker exec "$CONTAINER" influx restore /var/lib/backup --full
77 |
78 | echo "The $CONTAINER databases should be good to go"
79 |
80 | else
81 |
82 | echo "$CONTAINER must be running when $SCRIPT is started"
83 |
84 | fi
85 |
86 | echo "$SCRIPT completed"
--------------------------------------------------------------------------------
/scripts/iotstack_restore:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # assumptions that can be overridden
10 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
11 |
12 | # the project name is the all-lower-case form of the folder name
13 | PROJECT=$(basename ${IOTSTACK,,})
14 |
15 | # useful function
16 | isStackUp() {
17 | if COUNT=$( \
18 | curl -s --unix-socket /var/run/docker.sock http://localhost/containers/json | \
19 | jq -c ".[].Labels.\"com.docker.compose.project\"" | \
20 | grep -c "\"$1\"$" \
21 | ) ; then
22 | if [ $COUNT -gt 0 ] ; then
23 | return 0
24 | fi
25 | fi
26 | return 1
27 | }
28 |
29 | USAGE=0
30 |
31 | case "$#" in
32 |
33 | 1 )
34 | RUNTAG="$1"
35 | # extract all characters to the right of the first period
36 | BY_HOST_DIR=${RUNTAG#*.}
37 | # if no period, RUNTAG is copied to BY_HOST_DIR
38 | if [ "$BY_HOST_DIR" == "$RUNTAG" ] ; then
39 | echo "Error: by_host_dir can't be derived from the runtag."
40 | echo " Try passing a second argument."
41 | USAGE=1
42 | fi
43 | ;;
44 |
45 | 2 )
46 | RUNTAG="$1"
47 | BY_HOST_DIR="$2"
48 | ;;
49 |
50 | * )
51 | USAGE=1
52 | ;;
53 |
54 | esac
55 |
56 | if [ $USAGE -ne 0 ] ; then
57 | echo "Usage: $SCRIPT runtag {by_host_dir}"
58 | echo "where:"
59 | echo " runtag (eg yyyy-mm-dd_hhmm.by_host_dir)"
60 | echo " by_host_dir"
61 | echo " - must be a host that has performed at least one backup"
62 | echo " - is derived from runtag if omitted but runtag syntax"
63 | echo " must be yyyy-mm-dd_hhmm.hostname"
64 | exit 1
65 | fi
66 |
67 | # check dependencies
68 | if [ -z "$(which shyaml)" -o -z "$(which curl)" -o -z "$(which jq)" ] ; then
69 | echo "Missing dependencies. Please re-run install_scripts.sh."
70 | exit 1
71 | fi
72 |
73 | # the configuration file is at
74 | CONFIG_YML="$HOME/.config/iotstack_backup/config.yml"
75 |
76 | # does the configuration file exist?
77 | if [ -e "$CONFIG_YML" ] ; then
78 | CLOUD_METHOD=$(shyaml get-value restore.method < "$CONFIG_YML")
79 | CLOUD_OPTIONS=$(shyaml -q get-value restore.options < "$CONFIG_YML")
80 | CLOUD_PREFIX=$(shyaml get-value restore.prefix < "$CONFIG_YML")
81 | else
82 | echo "Warning: Configuration file not found: $CONFIG_YML"
83 | fi
84 |
85 | # apply defaults if not set from configuration file
86 | CLOUD_METHOD=${CLOUD_METHOD:-"SCP"}
87 | CLOUD_PREFIX=${CLOUD_PREFIX:-"myuser@myhost.mydomain.com:/path/to/backup/directory/on/myhost"}
88 |
89 | # form the cloud path
90 | CLOUD_PATH="$CLOUD_PREFIX/$BY_HOST_DIR"
91 |
92 | # assumptions
93 | COMPOSENAME="docker-compose.yml"
94 | COMPOSE="$IOTSTACK/$COMPOSENAME"
95 |
96 | # check the key assumption
97 | if [ ! -d "$IOTSTACK" ] ; then
98 | echo "Error: $IOTSTACK does not exist. This may indicate a problem with your installation."
99 | echo ""
100 | echo "Note - if you are trying to perform a \"bare metal\" restore,"
101 | echo " you need to do the following to establish the basic"
102 | echo " structures needed before a restore can work:"
103 | echo " 1. Clone IOTstack from GitHub."
104 | echo " 2. Run the menu and install Docker."
105 | echo " 3. Reboot (suggested by the menu)."
106 | exit 1
107 | fi
108 |
109 | echo "----- Starting $SCRIPT at $(date) -----"
110 | echo " CLOUD_METHOD = $CLOUD_METHOD"
111 | echo "CLOUD_OPTIONS = $CLOUD_OPTIONS"
112 | echo " CLOUD_PREFIX = $CLOUD_PREFIX"
113 | echo " CLOUD_PATH = $CLOUD_PATH"
114 |
115 | # make a temporary directory within the scope of IOTstack
116 | RESTOREDIR=$(mktemp -d -p "$IOTSTACK")
117 |
118 | # copy the backups into the restore directory
119 | echo "Attempting to fetch backup images for $RUNTAG"
120 |
121 | case "$CLOUD_METHOD" in
122 |
123 | "RCLONE" )
124 | rclone copy -v $CLOUD_OPTIONS --include "$RUNTAG.*" "$CLOUD_PATH" "$RESTOREDIR"
125 | ;;
126 |
127 | "RSYNC" | "SCP" )
128 | scp $CLOUD_OPTIONS "$CLOUD_PATH/$RUNTAG".* "$RESTOREDIR"
129 | ;;
130 |
131 | * )
132 | echo "Warning: $CLOUD_METHOD backup method is not supported"
133 | echo "Warning: no backup files could be fetched into $RESTOREDIR"
134 | ;;
135 |
136 | esac
137 |
138 | # presume that the stack is not running and does not need restarting
139 | RESTART="NO"
140 |
141 | # does the compose file exist?
142 | if [ -e "$COMPOSE" ] ; then
143 |
144 | # yes! is IOTstack (or any part of it) running?
145 | if isStackUp "$PROJECT" ; then
146 |
147 | echo "Deactivating $PROJECT"
148 | docker-compose -f "$COMPOSE" down
149 |
150 | # the stack should be re-launched on exit
151 | RESTART="YES"
152 |
153 | fi
154 |
155 | fi
156 |
157 | # try to perform the restores
158 | iotstack_restore_general "$RESTOREDIR" "$RUNTAG"
159 | iotstack_restore_influxdb "$RESTOREDIR" "$RUNTAG"
160 | iotstack_restore_influxdb2 "$RESTOREDIR" "$RUNTAG"
161 | iotstack_restore_nextcloud "$RESTOREDIR" "$RUNTAG"
162 | iotstack_restore_mariadb "$RESTOREDIR" "$RUNTAG"
163 | iotstack_restore_postgres "$RESTOREDIR" "$RUNTAG"
164 | iotstack_restore_wordpress "$RESTOREDIR" "$RUNTAG"
165 | iotstack_restore_gitea "$RESTOREDIR" "$RUNTAG"
166 |
167 | # clean up the temporary restore directory
168 | echo "Cleaning up"
169 | rm -rf "$RESTOREDIR"
170 |
171 | # should the stack be brought up?
172 | if [ "$RESTART" = "YES" ] ; then
173 |
174 | # yes! if RESTART=YES then COMPOSE must exist
175 | echo "Reactivating the stack"
176 | docker-compose -f "$COMPOSE" up -d
177 |
178 | fi
179 |
180 | echo "----- Finished $SCRIPT at $(date) -----"
181 |
--------------------------------------------------------------------------------
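`iotstack_restore` derives `by_host_dir` from the runtag by stripping everything up to and including the first period via `${RUNTAG#*.}`. A sketch of both the success and failure cases (the sample runtag is invented):

```shell
RUNTAG="2024-01-01_0200.myhost"

# remove the shortest prefix ending in "." => the host-name part
BY_HOST_DIR=${RUNTAG#*.}
echo "$BY_HOST_DIR"

# a runtag containing no period comes back unchanged, which the
# script detects and treats as "cannot derive by_host_dir"
BAD="2024-01-01_0200"
NO_DERIVE=${BAD#*.}
echo "$NO_DERIVE"
```

The unchanged-result check is why a second argument is mandatory whenever the runtag does not follow the `yyyy-mm-dd_hhmm.hostname` convention.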
/scripts/iotstack_restore_general:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # assumptions that can be overridden
10 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
11 | DEFAULTFILENAME=${DEFAULTFILENAME:-"general-backup.tar.gz"}
12 |
13 | # the project name is the all-lower-case form of the folder name
14 | PROJECT=$(basename ${IOTSTACK,,})
15 |
16 | # useful function
17 | isStackUp() {
18 | if COUNT=$( \
19 | curl -s --unix-socket /var/run/docker.sock http://localhost/containers/json | \
20 | jq -c ".[].Labels.\"com.docker.compose.project\"" | \
21 | grep -c "\"$1\"$" \
22 | ) ; then
23 | if [ $COUNT -gt 0 ] ; then
24 | return 0
25 | fi
26 | fi
27 | return 1
28 | }
29 |
30 | # snapshot the time
31 | RUNDATE=$(date +"%Y-%m-%d_%H%M")
32 |
33 | # $1 is required and is either path to a .tar.gz or path to a folder
34 | # $2 is optional and is the runtag (yyyy-mm-dd_hhmm.host-name)
35 | # $3 is optional and overrides the default file name
36 |
37 | case "$#" in
38 |
39 | 1)
40 | RESTORE_TAR_GZ=$(realpath "$1")
41 | ;;
42 |
43 | 2 | 3)
44 | RESTORE_TAR_GZ=$(realpath "$1/$2.${3:-"$DEFAULTFILENAME"}")
45 | ;;
46 |
47 | *)
48 | echo "Usage 1: $SCRIPT path/to/$DEFAULTFILENAME"
49 | echo "Usage 2: $SCRIPT path/to/backupdir runtag {override}"
50 | echo " (override defaults to $DEFAULTFILENAME)"
51 | exit 1
52 | ;;
53 |
54 | esac
55 |
56 | # a missing restore tar is not an error - warn and skip
57 | if [ ! -e "$RESTORE_TAR_GZ" ] ; then
58 | echo "Warning: $RESTORE_TAR_GZ does not exist - skipped"
59 | exit 0
60 | fi
61 |
62 | # assumptions
63 | COMPOSENAME="docker-compose.yml"
64 | COMPOSE="$IOTSTACK/$COMPOSENAME"
65 | SERVICESNAME="services"
66 | SERVICESROOT="$IOTSTACK/$SERVICESNAME"
67 | VOLUMESNAME="volumes"
68 | VOLUMESROOT="$IOTSTACK/$VOLUMESNAME"
69 |
70 | # check that the IOTstack folder exists
71 | if [ ! -d "$IOTSTACK" ] ; then
72 | echo "Error: $IOTSTACK does not exist. This may indicate a problem with your installation."
73 | exit 1
74 | fi
75 |
76 | # does the compose file exist?
77 | if [ -e "$COMPOSE" ] ; then
78 |
79 | # yes! is IOTstack (or any part of it) running?
80 | if isStackUp "$PROJECT"; then
81 |
82 | echo "Error: $PROJECT should NOT be running during a restore"
83 | echo " Please deactivate $PROJECT and try the restore again"
84 | exit 1
85 |
86 | fi
87 |
88 | fi
89 |
90 | echo "----- Starting $SCRIPT at $(date) -----"
91 |
92 | # make a temporary directory to unpack into
93 | RESTOREDIR=$(mktemp -d -p "$IOTSTACK")
94 |
95 | # define restored structures in terms of that
96 | SERVICESRESTOREDIR="$RESTOREDIR/$SERVICESNAME"
97 | VOLUMESRESTOREDIR="$RESTOREDIR/$VOLUMESNAME"
98 |
99 | # unpack the general backup into that directory
100 | echo "unpacking $RESTORE_TAR_GZ"
101 | sudo tar -x --same-owner -z -f "$RESTORE_TAR_GZ" -C "$RESTOREDIR"
102 |
103 | # was a "services" directory restored?
104 | if [ -d "$SERVICESRESTOREDIR" ] ; then
105 |
106 | # make sure the services root exists
107 | mkdir -p "$SERVICESROOT"
108 |
109 | # iterate the restored contents
110 | for SPATH in "$SERVICESRESTOREDIR"/* ; do
111 |
112 | SNAME=$(basename "$SPATH")
113 | DPATH="$SERVICESROOT/$SNAME"
114 |
115 | echo "removing old $DPATH"
116 | sudo rm -rf "$DPATH"
117 |
118 | echo "moving restored $SNAME into place"
119 | sudo mv "$SPATH" "$DPATH"
120 |
121 | done
122 |
123 | # ensure services owned by current user
124 | sudo chown -R "$USER:$USER" "$SERVICESROOT"
125 |
126 | # done with this directory
127 | sudo rm -rf "$SERVICESRESTOREDIR"
128 |
129 | fi
130 |
131 | # was a "volumes" directory restored?
132 | if [ -d "$VOLUMESRESTOREDIR" ] ; then
133 |
134 | # make sure the volumes root exists
135 | sudo mkdir -p "$VOLUMESROOT"
136 |
137 | # iterate the restored contents
138 | for SPATH in "$VOLUMESRESTOREDIR"/* ; do
139 |
140 | SNAME=$(basename "$SPATH")
141 | DPATH="$VOLUMESROOT/$SNAME"
142 |
143 | echo "removing old $DPATH"
144 | sudo rm -rf "$DPATH"
145 |
146 | echo "moving restored $SNAME into place"
147 | sudo mv "$SPATH" "$DPATH"
148 |
149 | done
150 |
151 | # done with this directory
152 | sudo rm -rf "$VOLUMESRESTOREDIR"
153 |
154 | fi
155 |
156 | # restore whatever remains into ~/IOTstack
157 | for SPATH in "$RESTOREDIR"/* "$RESTOREDIR"/.*; do
158 |
159 | # is the item a file? (filters out . .. and other junk)
160 | if [ -f "$SPATH" ] ; then
161 |
162 | SNAME=$(basename "$SPATH")
163 | DPATH="$IOTSTACK/$SNAME"
164 |
165 | # does the destination exist?
166 | if [ -e "$DPATH" ] ; then
167 |
168 | # yes! compare the two files
169 | cmp "$SPATH" "$DPATH" >/dev/null 2>&1
170 |
171 | # do the two files compare same?
172 | if [ $? -ne 0 ] ; then
173 |
174 | # no! move the restored version into place with a tag
175 | echo "Restoring $SNAME as $SNAME.$RUNDATE"
176 | mv "$SPATH" "$DPATH.$RUNDATE"
177 |
178 | # ensure owned by current user
179 | sudo chown "$USER:$USER" "$DPATH.$RUNDATE"
180 |
181 | else
182 |
183 | echo "$SNAME already exists and compares same - skipped"
184 |
185 | fi
186 |
187 | else
188 |
189 | # no! move the restored version into place
190 | echo "Restoring $SNAME"
191 | mv "$SPATH" "$DPATH"
192 |
193 | # ensure owned by current user
194 | sudo chown "$USER:$USER" "$DPATH"
195 |
196 | fi
197 |
198 | fi
199 |
200 | done
201 |
202 | echo "Cleaning up"
203 | rm -rf "$RESTOREDIR"
204 |
205 | echo "----- Finished $SCRIPT at $(date) -----"
206 |
--------------------------------------------------------------------------------
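All of the restore scripts above derive the compose project name with bash lowercase expansion and assemble the archive path from a backup directory, a runtag and a default (or overridden) filename. Here are those two expansions in isolation; the paths and runtag values are hypothetical, chosen only for the demonstration:

```shell
#!/usr/bin/env bash

# 1. project name: bash ",," expansion lower-cases the whole value,
#    then basename keeps just the folder-name portion
IOTSTACK="/home/pi/IOTstack"              # hypothetical install path
PROJECT=$(basename "${IOTSTACK,,}")
echo "$PROJECT"                           # iotstack

# 2. archive path: backupdir + runtag + (override or default) filename,
#    mirroring the "2 | 3" arm of the case statement in each script
DEFAULTFILENAME="general-backup.tar.gz"
BACKUPDIR="/tmp/backups"                  # hypothetical
RUNTAG="2023-04-01_1130.sec-dev"          # yyyy-mm-dd_hhmm.host-name
OVERRIDE=""                               # empty, so the default applies
RESTORE_TAR_GZ="$BACKUPDIR/$RUNTAG.${OVERRIDE:-$DEFAULTFILENAME}"
echo "$RESTORE_TAR_GZ"
```

The `${OVERRIDE:-$DEFAULTFILENAME}` form is the same "use the third argument if given, otherwise the default" idiom the scripts apply to `$3`.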
/scripts/iotstack_restore_gitea:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # assumptions that can be overridden
10 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
11 | CONTAINER="${CONTAINER:-"gitea"}"
12 | DEFAULTFILENAME=${DEFAULTFILENAME:-"$CONTAINER-backup.tar.gz"}
13 |
14 | # the project name is the all-lower-case form of the folder name
15 | PROJECT=$(basename "${IOTSTACK,,}")
16 |
17 | # the database container is
18 | CONTAINER_DB="${CONTAINER}_db"
19 |
20 | # useful function
21 | isContainerRunning() {
22 | if STATUS=$(curl -s --unix-socket /var/run/docker.sock http://localhost/containers/$1/json | jq .State.Status) ; then
23 | if [ "$STATUS" = "\"running\"" ] ; then
24 | return 0
25 | fi
26 | fi
27 | return 1
28 | }
29 |
30 | preferredCommand() {
31 | if [ -n "$(docker exec "$CONTAINER_DB" which "$1")" ] ; then
32 | echo "$1"
33 | else
34 | echo "$2"
35 | fi
36 | }
37 |
38 |
39 | # $1 is required and is either path to a .tar.gz or the path to a folder
40 | # $2 is optional and is the runtag (yyyy-mm-dd_hhmm.host-name)
41 | # $3 is optional and overrides the default file name
42 |
43 | case "$#" in
44 |
45 | 1)
46 | RESTORE_TAR_GZ=$(realpath "$1")
47 | ;;
48 |
49 | 2 | 3)
50 | RESTORE_TAR_GZ=$(realpath "$1/$2.${3:-"$DEFAULTFILENAME"}")
51 | ;;
52 |
53 | *)
54 | echo "Usage 1: $SCRIPT path/to/$DEFAULTFILENAME"
55 | echo "Usage 2: $SCRIPT path/to/backupdir runtag {override}"
56 | echo " (override defaults to $DEFAULTFILENAME)"
57 | exit 1
58 | ;;
59 |
60 | esac
61 |
62 | # a missing restore tar is not an error - warn and skip
63 | if [ ! -e "$RESTORE_TAR_GZ" ] ; then
64 | echo "Warning: $RESTORE_TAR_GZ does not exist - skipped"
65 | exit 0
66 | fi
67 |
68 | # assumptions
69 | COMPOSENAME="docker-compose.yml"
70 | COMPOSE="$IOTSTACK/$COMPOSENAME"
71 | VOLUMES="$IOTSTACK/volumes"
72 | GITEA_VOLUMES="$VOLUMES/$CONTAINER"
73 | GITEA_DIR="data"
74 | GITEA_DB_DIR="db_backup"
75 | GITEA_DB_BACKUP="$GITEA_VOLUMES/$GITEA_DB_DIR"
76 |
77 | # check the key assumptions
78 | if ! [ -d "$IOTSTACK" -a -e "$COMPOSE" ] ; then
79 | echo "Error: One of the following does not exist:"
80 | echo " $IOTSTACK"
81 | echo " $COMPOSE"
82 | echo "This may indicate a problem with your installation."
83 | exit 1
84 | fi
85 |
86 | # check that containers are not running
87 | for C in "$CONTAINER" "$CONTAINER_DB" ; do
88 | if isContainerRunning "$C" ; then
89 |       echo "Warning: $C should not be running at the start of a restore."
90 | echo " Please deactivate and try again."
91 | exit 0
92 | fi
93 | done
94 |
95 | # now we can begin
96 | echo "----- Starting $SCRIPT at $(date) -----"
97 |
98 | # make a temporary directory to unpack into
99 | RESTOREDIR=$(mktemp -d -p "$IOTSTACK")
100 |
101 | # unpack the general backup into the temporary directory
102 | echo "unpacking $RESTORE_TAR_GZ"
103 | sudo tar -x --same-owner -z -f "$RESTORE_TAR_GZ" -C "$RESTOREDIR"
104 |
105 | # check that the data directory is available (database is optional)
106 | if ! [ -d "$RESTOREDIR/$GITEA_DIR" ] ; then
107 | echo "Error: $GITEA_DIR not found in backup"
108 | echo "This may indicate $RESTORE_TAR_GZ is malformed."
109 | exit 1
110 | fi
111 |
112 | # does the top-level folder of the persistent store exist?
113 | if [ -d "$GITEA_VOLUMES" ] ; then
114 | echo "erasing contents of $GITEA_VOLUMES"
115 | sudo rm -rf "$GITEA_VOLUMES"/*
116 | else
117 | echo "creating empty $GITEA_VOLUMES"
118 | sudo mkdir -p "$GITEA_VOLUMES"
119 | fi
120 |
121 | echo "moving restored artifacts into place"
122 | sudo mv "$RESTOREDIR/"* "$GITEA_VOLUMES/"
123 |
124 | echo "cleaning up intermediate files"
125 | sudo rm -rf "$RESTOREDIR"
126 |
127 | # did the restore make the database backup available?
128 | if [ -d "$GITEA_DB_BACKUP" ] ; then
129 |
130 |    # yes! bring up the gitea_db container (done early to give time to start)
131 | echo "activating $CONTAINER_DB (temporarily)"
132 | docker-compose -f "$COMPOSE" up -d "$CONTAINER_DB"
133 |
134 | # stabilisation time - prophylactic
135 | sleep 3
136 |
137 | # wait for mariadb (gitea_db) to be ready for business
138 | echo "waiting for $CONTAINER_DB to start"
139 | RETRIES=30
140 | while : ; do
141 |
142 |       # see if gitea_db reports itself ready for business
143 | docker exec "$CONTAINER_DB" iotstack_healthcheck.sh >/dev/null 2>&1
144 |
145 | # is the container ready?
146 | if [ $? -ne 0 ] ; then
147 |
148 | # no! decrement the retry counter
149 | let "RETRIES-=1"
150 |
151 | # should we retry?
152 | if [ $RETRIES -gt 0 ] ; then
153 |
154 | # yes! wait, then retry
155 | sleep 2 ; echo " re-trying ($RETRIES) ..." ; continue
156 |
157 | fi
158 |
159 | # retries exhausted - declare failure
160 | echo "$CONTAINER_DB did not come up properly - unable to reload database"
161 | exit 1
162 |
163 | fi
164 |
165 | # healthcheck passed
166 | break;
167 |
168 | done
169 |
170 | # extra stabilisation time - prophylactic
171 | sleep 3
172 |
173 | # try to ensure the container has a root password
174 | echo "checking whether $CONTAINER_DB has a root password (ignore any errors)"
175 | echo "(refer https://github.com/linuxserver/docker-mariadb/issues/163)"
176 | docker exec "$CONTAINER_DB" bash -c "$(preferredCommand mariadb-admin mysqladmin) -u root password \$MYSQL_ROOT_PASSWORD"
177 |
178 | # tell gitea_db to perform the restore
179 | echo "telling $CONTAINER_DB to restore a portable backup"
180 |    docker exec "$CONTAINER_DB" bash -c "$(preferredCommand mariadb mysql) -p\$MYSQL_ROOT_PASSWORD \$MYSQL_DATABASE </backup/backup.sql"
--------------------------------------------------------------------------------
/scripts/iotstack_restore_influxdb:
--------------------------------------------------------------------------------
126 | while ! docker exec "$CONTAINER" influx -execute 'SHOW DATABASES' >/dev/null 2>&1 ; do
127 | echo "waiting for $CONTAINER to become ready"
128 | sleep 1
129 | done
130 |
131 | # tell influx to perform the restore
132 | echo "Telling influxd to restore a portable backup"
133 | docker exec "$CONTAINER" influxd restore -portable "$INTERNALBACKUPDB"
134 |
135 | # are there any continuous queries to be reloaded?
136 | echo "Checking for optional continuous queries: $EXTERNALCQS"
137 | if [ -f "$EXTERNALCQS" ] ; then
138 |
139 | # yes! tell influx to load the file at the internal path)
140 | echo " Telling influx to reload continuous queries"
141 | docker exec "$CONTAINER" influx -import -path "$INTERNALCQS"
142 |
143 | fi
144 |
145 | # does the hook script exist?
146 | echo "Checking for optional epilog: $EXTERNALEPILOG"
147 | if [ -f "$EXTERNALEPILOG" ] ; then
148 |
149 | # yes! copy hook script into the container at the working directory
150 | echo " Epilog found - copying into container"
151 | docker cp "$EXTERNALEPILOG" "$CONTAINER:$INTERNALEPILOG"
152 |
153 | # tell influx to process the hook script (also in the working dir)
154 | echo " Telling influx to process epilog"
155 | docker exec "$CONTAINER" influx -import -path "$INTERNALEPILOG"
156 |
157 | # when the container is terminated, the hook script vanishes so
158 | # there is no need to clean up.
159 |
160 | else
161 |
162 | echo " No epilog found"
163 |
164 | fi
165 |
166 | # take down influxdb
167 | echo "deactivating $CONTAINER"
168 | docker-compose -f "$COMPOSE" rm --force --stop -v "$CONTAINER"
169 |
170 | echo "----- Finished $SCRIPT at $(date) -----"
171 |
172 |
--------------------------------------------------------------------------------
/scripts/iotstack_restore_influxdb2:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # assumptions that can be overridden
10 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
11 | CONTAINER="${CONTAINER:-"influxdb2"}"
12 | DEFAULTFILENAME=${DEFAULTFILENAME:-"$CONTAINER-backup.tar"}
13 |
14 | # the project name is the all-lower-case form of the folder name
15 | PROJECT=$(basename "${IOTSTACK,,}")
16 |
17 | # useful function
18 | isContainerRunning() {
19 | if STATUS=$(curl -s --unix-socket /var/run/docker.sock http://localhost/containers/$1/json | jq .State.Status) ; then
20 | if [ "$STATUS" = "\"running\"" ] ; then
21 | return 0
22 | fi
23 | fi
24 | return 1
25 | }
26 |
27 | # $1 is required and is either path to a .tar or the path to a folder
28 | # $2 is optional and is the runtag (yyyy-mm-dd_hhmm.host-name)
29 | # $3 is optional and overrides the default file name
30 |
31 | case "$#" in
32 |
33 | 1)
34 | RESTORE_TAR=$(realpath "$1")
35 | ;;
36 |
37 | 2 | 3)
38 | RESTORE_TAR=$(realpath "$1/$2.${3:-"$DEFAULTFILENAME"}")
39 | ;;
40 |
41 | *)
42 | echo "Usage 1: $SCRIPT path/to/$DEFAULTFILENAME"
43 | echo "Usage 2: $SCRIPT path/to/backupdir runtag {override}"
44 | echo " (override defaults to $DEFAULTFILENAME)"
45 | exit 1
46 | ;;
47 |
48 | esac
49 |
50 | # a missing restore tar is not an error - warn and skip
51 | if [ ! -e "$RESTORE_TAR" ] ; then
52 | echo "Warning: $RESTORE_TAR does not exist - skipped"
53 | exit 0
54 | fi
55 |
56 | # assumptions
57 | COMPOSENAME="docker-compose.yml"
58 | COMPOSE="$IOTSTACK/$COMPOSENAME"
59 | INFLUXSTORE="$IOTSTACK/volumes/$CONTAINER"
60 | INFLUXBACKUP="$INFLUXSTORE/backup"
61 |
62 | # check the key assumptions
63 | if ! [ -d "$IOTSTACK" -a -e "$COMPOSE" ] ; then
64 | echo "Error: One of the following does not exist:"
65 | echo " $IOTSTACK"
66 | echo " $COMPOSE"
67 | echo "This may indicate a problem with your installation."
68 | exit 1
69 | fi
70 |
71 | # check that the container is not running
72 | if isContainerRunning "$CONTAINER" ; then
73 | echo "Error: $CONTAINER should NOT be running at the start of a restore."
74 | echo " Please deactivate and try again."
75 | exit 1
76 | fi
77 |
78 | # now we can begin
79 | echo "----- Starting $SCRIPT at $(date) -----"
80 |
81 | # the whole persistent store for influxdb2 should be erased
82 | echo "Erasing $INFLUXSTORE"
83 | sudo rm -rf "$INFLUXSTORE"
84 |
85 | # re-create the top level directory
86 | sudo mkdir -p "$INFLUXSTORE"
87 |
88 | # unpack the restore tar
89 | echo "unpacking $RESTORE_TAR"
90 | sudo tar -x --same-owner -f "$RESTORE_TAR" -C "$INFLUXSTORE"
91 |
92 | # bring up the influxdb container (done early to give time to start)
93 | echo "activating $CONTAINER (temporarily)"
94 | docker-compose -f "$COMPOSE" up -d "$CONTAINER"
95 |
96 | # wait for the container to be ready
97 | while ! docker exec "$CONTAINER" influx ping >/dev/null 2>&1 ; do
98 | echo "waiting for $CONTAINER to become ready"
99 | sleep 1
100 | done
101 |
102 | # tell influx to perform the restore
103 | echo "Telling $CONTAINER to restore a backup"
104 | docker exec "$CONTAINER" influx restore /var/lib/backup --full
105 |
106 | # take down influxdb
107 | echo "deactivating $CONTAINER"
108 | docker-compose -f "$COMPOSE" rm --force --stop -v "$CONTAINER"
109 |
110 | echo "----- Finished $SCRIPT at $(date) -----"
111 |
112 |
--------------------------------------------------------------------------------
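Every restore script above unpacks its archive with the same invocation: `tar -x --same-owner -z -f <archive> -C <dir>` (no `-z` for the plain `.tar` case). A no-sudo round trip in a temp directory shows the convention; the pack step here is an assumption about how the matching backup scripts write the archive (gzip, paths relative to the stack folder), and all paths are scratch locations invented for the demo:

```shell
#!/usr/bin/env bash
set -e

# scratch area standing in for the stack folder (demo paths only)
WORK=$(mktemp -d)
mkdir -p "$WORK/source/volumes/demo"
echo "hello" > "$WORK/source/volumes/demo/data.txt"

# pack with gzip, paths relative to the source folder
tar -czf "$WORK/demo-backup.tar.gz" -C "$WORK/source" .

# unpack into a fresh directory, mirroring the restore scripts
# (--same-owner is a no-op here because everything is owned by us;
#  it only matters when the scripts run tar via sudo)
RESTOREDIR=$(mktemp -d)
tar -x --same-owner -z -f "$WORK/demo-backup.tar.gz" -C "$RESTOREDIR"

cat "$RESTOREDIR/volumes/demo/data.txt"   # hello
```

The `-C` on both sides is what keeps the archive's paths relative, so a backup taken in one folder can be restored into another (or into a mktemp scratch directory, as the scripts do).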
/scripts/iotstack_restore_mariadb:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # assumptions that can be overridden
10 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
11 | CONTAINER="${CONTAINER:-"mariadb"}"
12 | DEFAULTFILENAME=${DEFAULTFILENAME:-"$CONTAINER-backup.tar.gz"}
13 |
14 | # the project name is the all-lower-case form of the folder name
15 | PROJECT=$(basename "${IOTSTACK,,}")
16 |
17 | # useful function
18 | isContainerRunning() {
19 | if STATUS=$(curl -s --unix-socket /var/run/docker.sock http://localhost/containers/$1/json | jq .State.Status) ; then
20 | if [ "$STATUS" = "\"running\"" ] ; then
21 | return 0
22 | fi
23 | fi
24 | return 1
25 | }
26 |
27 | preferredCommand() {
28 | if [ -n "$(docker exec "$CONTAINER_DB" which "$1")" ] ; then
29 | echo "$1"
30 | else
31 | echo "$2"
32 | fi
33 | }
34 |
35 |
36 | # $1 is required and is either path to a .tar.gz or the path to a folder
37 | # $2 is optional and is the runtag (yyyy-mm-dd_hhmm.host-name)
38 | # $3 is optional and overrides the default file name
39 |
40 | case "$#" in
41 |
42 | 1)
43 | RESTORE_TAR_GZ=$(realpath "$1")
44 | ;;
45 |
46 | 2 | 3)
47 | RESTORE_TAR_GZ=$(realpath "$1/$2.${3:-"$DEFAULTFILENAME"}")
48 | ;;
49 |
50 | *)
51 | echo "Usage 1: $SCRIPT path/to/$DEFAULTFILENAME"
52 | echo "Usage 2: $SCRIPT path/to/backupdir runtag {override}"
53 | echo " (override defaults to $DEFAULTFILENAME)"
54 | exit 1
55 | ;;
56 |
57 | esac
58 |
59 | # a missing restore tar is not an error - warn and skip
60 | if [ ! -e "$RESTORE_TAR_GZ" ] ; then
61 | echo "Warning: $RESTORE_TAR_GZ does not exist - skipped"
62 | exit 0
63 | fi
64 |
65 | # assumptions
66 | COMPOSENAME="docker-compose.yml"
67 | COMPOSE="$IOTSTACK/$COMPOSENAME"
68 | VOLUMES="$IOTSTACK/volumes"
69 | MARIADB_VOLUMES="$VOLUMES/$CONTAINER"
70 | MARIADB_BACKUP="$MARIADB_VOLUMES/db_backup"
71 |
72 | # check the key assumptions
73 | if ! [ -d "$IOTSTACK" -a -e "$COMPOSE" ] ; then
74 | echo "Error: One of the following does not exist:"
75 | echo " $IOTSTACK"
76 | echo " $COMPOSE"
77 | echo "This may indicate a problem with your installation."
78 | exit 1
79 | fi
80 |
81 | # check that container is not running
82 | if isContainerRunning "$CONTAINER" ; then
83 |    echo "Error: $CONTAINER should NOT be running at the start of a restore."
84 | echo " Please deactivate and try again."
85 | exit 1
86 | fi
87 |
88 | # now we can begin
89 | echo "----- Starting $SCRIPT at $(date) -----"
90 |
91 | # make a temporary directory to unpack into
92 | RESTOREDIR=$(mktemp -d -p "$IOTSTACK")
93 |
94 | # unpack the general backup into the temporary directory
95 | echo "unpacking $RESTORE_TAR_GZ"
96 | sudo tar -x --same-owner -z -f "$RESTORE_TAR_GZ" -C "$RESTOREDIR"
97 |
98 | # did that result in anything being restored?
99 | if [ $(ls -1 "$RESTOREDIR" | wc -l) -gt 0 ] ; then
100 |
101 | # erase the old persistent storage area
102 | echo "Erasing $MARIADB_VOLUMES"
103 | sudo rm -rf "$MARIADB_VOLUMES"
104 |
105 | # create the backup directory
106 | [ -d "$MARIADB_BACKUP" ] || sudo mkdir -m 755 -p "$MARIADB_BACKUP"
107 |    sudo chown "$USER:$USER" "$MARIADB_BACKUP"
108 | sudo chmod 755 "$MARIADB_BACKUP"
109 |
110 | # move backup files into place
111 | echo "moving backup files into place"
112 | mv "$RESTOREDIR"/* "$MARIADB_BACKUP"
113 |
114 | # bring up the container (initialises everything else)
115 | echo "activating $CONTAINER (temporarily)"
116 | docker-compose -f "$COMPOSE" up -d "$CONTAINER"
117 |
118 | # wait for the container to be ready
119 | while ! docker exec "$CONTAINER" iotstack_healthcheck.sh >/dev/null 2>&1 ; do
120 | echo "waiting for $CONTAINER to become ready"
121 | sleep 1
122 | done
123 |
124 | # try to ensure the container has a root password
125 | echo "checking whether $CONTAINER has a root password (ignore any errors)"
126 | echo "(refer https://github.com/linuxserver/docker-mariadb/issues/163)"
127 | docker exec "$CONTAINER" bash -c "$(preferredCommand mariadb-admin mysqladmin) -u root password \$MYSQL_ROOT_PASSWORD"
128 |
129 | # perform the restore
130 | echo "telling $CONTAINER to restore from backup"
131 |    docker exec "$CONTAINER" bash -c "$(preferredCommand mariadb mysql) -p\${MYSQL_ROOT_PASSWORD} \${MYSQL_DATABASE} </backup/backup.sql"
--------------------------------------------------------------------------------
/scripts/iotstack_restore_nextcloud:
--------------------------------------------------------------------------------
190 |       docker exec "$CONTAINER_DB" iotstack_healthcheck.sh >/dev/null 2>&1
191 |
192 | # is the container ready?
193 | if [ $? -ne 0 ] ; then
194 |
195 | # no! decrement the retry counter
196 | let "RETRIES-=1"
197 |
198 | # should we retry?
199 | if [ $RETRIES -gt 0 ] ; then
200 |
201 | # yes! wait, then retry
202 | sleep 1 ; echo " re-trying ($RETRIES) ..." ; continue
203 |
204 | fi
205 |
206 | # retries exhausted - declare failure
207 | echo "$CONTAINER_DB did not come up properly - unable to reload database"
208 | exit 1
209 |
210 | fi
211 |
212 | # healthcheck passed
213 | break;
214 |
215 | done
216 |
217 | # extra stabilisation time - prophylactic
218 | sleep 3
219 |
220 | # try to ensure the container has a root password
221 | echo "checking whether $CONTAINER_DB has a root password (ignore any errors)"
222 | echo "(refer https://github.com/linuxserver/docker-mariadb/issues/163)"
223 | docker exec "$CONTAINER_DB" bash -c "$(preferredCommand mariadb-admin mysqladmin) -u root password \$MYSQL_ROOT_PASSWORD"
224 |
225 | # tell nextcloud_db to perform the restore
226 | echo "telling $CONTAINER_DB to restore a portable backup"
227 | docker exec "$CONTAINER_DB" bash -c "$(preferredCommand mariadb mysql) -p\$MYSQL_ROOT_PASSWORD \$MYSQL_DATABASE </backup/backup.sql"
243 |    wget "http://localhost:9321" -O /dev/null >/dev/null 2>&1
244 |
245 | # leave the while loop on success
246 | if [ $? -eq 8 ] ; then break ; fi
247 |
248 | fi
249 |
250 | # decrement the retry counter
251 | let "RETRIES-=1"
252 |
253 | # should we retry?
254 | if [ $RETRIES -gt 0 ] ; then
255 |
256 | # yes! wait, then retry
257 | sleep 1 ; echo " re-trying ($RETRIES) ..." ; continue
258 |
259 | fi
260 |
261 | # retries exhausted - declare failure
262 | echo "$CONTAINER did not come up properly"
263 | echo " - unable to ensure $CONTAINER is taken out of maintenance mode"
264 | exit 1
265 |
266 | done
267 |
268 | # extra stabilisation time - prophylactic
269 | sleep 3
270 |
271 | echo "Taking $CONTAINER out of maintenance mode"
272 | docker exec -u www-data -it "$CONTAINER" php occ maintenance:mode --off
273 |
274 | # take down nextcloud service
275 | echo "deactivating $CONTAINER and $CONTAINER_DB"
276 | docker-compose -f "$COMPOSE" rm --force --stop -v "$CONTAINER" "$CONTAINER_DB"
277 |
278 | echo "----- Finished $SCRIPT at $(date) -----"
279 |
--------------------------------------------------------------------------------
/scripts/iotstack_restore_postgres:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # assumptions that can be overridden
10 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
11 | CONTAINER="${CONTAINER:-"postgres"}"
12 | DEFAULTFILENAME=${DEFAULTFILENAME:-"$CONTAINER-backup.sql.gz.tar"}
13 |
14 | # the project name is the all-lower-case form of the folder name
15 | PROJECT=$(basename "${IOTSTACK,,}")
16 |
17 | # useful function
18 | isContainerRunning() {
19 | if STATUS=$(curl -s --unix-socket /var/run/docker.sock http://localhost/containers/$1/json | jq .State.Status) ; then
20 | if [ "$STATUS" = "\"running\"" ] ; then
21 | return 0
22 | fi
23 | fi
24 | return 1
25 | }
26 |
27 | # $1 is required and is either path to a .tar.gz or the path to a folder
28 | # $2 is optional and is the runtag (yyyy-mm-dd_hhmm.host-name)
29 | # $3 is optional and overrides the default file name
30 |
31 | case "$#" in
32 |
33 | 1)
34 | RESTORE_SQL_GZ_TAR=$(realpath "$1")
35 | ;;
36 |
37 | 2 | 3)
38 | RESTORE_SQL_GZ_TAR=$(realpath "$1/$2.${3:-"$DEFAULTFILENAME"}")
39 | ;;
40 |
41 | *)
42 | echo "Usage 1: $SCRIPT path/to/$DEFAULTFILENAME"
43 | echo "Usage 2: $SCRIPT path/to/backupdir runtag {override}"
44 | echo " (override defaults to $DEFAULTFILENAME)"
45 | exit 1
46 | ;;
47 |
48 | esac
49 |
51 | # a missing restore tar is not an error - warn and skip
51 | if [ ! -e "$RESTORE_SQL_GZ_TAR" ] ; then
52 | echo "Warning: $RESTORE_SQL_GZ_TAR does not exist - skipped"
53 | exit 0
54 | fi
55 |
56 | # assumptions
57 | COMPOSENAME="docker-compose.yml"
58 | COMPOSE="$IOTSTACK/$COMPOSENAME"
59 | VOLUMES="$IOTSTACK/volumes"
60 | POSTGRES_VOLUMES="$VOLUMES/$CONTAINER"
61 | POSTGRES_BACKUP="$POSTGRES_VOLUMES/db_backup"
62 |
63 | # check the key assumptions
64 | if ! [ -d "$IOTSTACK" -a -e "$COMPOSE" ] ; then
65 | echo "Error: One of the following does not exist:"
66 | echo " $IOTSTACK"
67 | echo " $COMPOSE"
68 | echo "This may indicate a problem with your installation."
69 | exit 1
70 | fi
71 |
72 | # check that container is not running
73 | if isContainerRunning "$CONTAINER" ; then
74 |    echo "Error: $CONTAINER should NOT be running at the start of a restore."
75 | echo " Please deactivate and try again."
76 | exit 1
77 | fi
78 |
79 | # now we can begin
80 | echo "----- Starting $SCRIPT at $(date) -----"
81 |
82 | # make a temporary directory to unpack into
83 | RESTOREDIR=$(mktemp -d -p "$IOTSTACK")
84 |
85 | # unpack the general backup into the temporary directory
86 | echo "unpacking $RESTORE_SQL_GZ_TAR"
87 | sudo tar -x --same-owner -z -f "$RESTORE_SQL_GZ_TAR" -C "$RESTOREDIR"
88 |
89 | # did that result in anything being restored?
90 | if [ $(ls -1 "$RESTOREDIR" | wc -l) -gt 0 ] ; then
91 |
92 | # erase the old persistent storage area
93 | echo "Erasing $POSTGRES_VOLUMES"
94 | sudo rm -rf "$POSTGRES_VOLUMES"
95 |
96 | # bring up the container (initialises everything else)
97 | echo "activating $CONTAINER (temporarily)"
98 | docker-compose -f "$COMPOSE" up -d "$CONTAINER"
99 |
100 | # wait for the container to be ready
101 | while ! docker exec "$CONTAINER" pg_isready >/dev/null 2>&1 ; do
102 | echo "waiting for $CONTAINER to become ready"
103 | sleep 1
104 | done
105 |
106 | # that should have created the backup directory
107 | if [ -d "$POSTGRES_BACKUP" ] ; then
108 |
109 | # directory exists - move backup files into place
110 | echo "moving backup files into place"
111 | sudo mv "$RESTOREDIR"/* "$POSTGRES_BACKUP"
112 |
113 | # perform the restore
114 | echo "telling $CONTAINER to restore from backup"
115 | docker exec "$CONTAINER" bash -c 'gunzip -c /backup/postgres_backup.sql.gz | psql -U $POSTGRES_USER postgres >/backup/postgres_restore.log 2>&1'
116 |
117 | else
118 |
119 | echo "Error: restore can't be processed because the $POSTGRES_BACKUP directory does not exist."
120 | echo ""
121 | echo "This is probably because your service definition for PostgreSQL does not"
122 | echo "include the following volume mapping:"
123 | echo ""
124 | echo " - ./volumes/postgres/db_backup:/backup"
125 | echo ""
126 | echo "Please compare your service definition with the IOTstack template at:"
127 | echo ""
128 | echo " $IOTSTACK/.templates/postgres/service.yml"
129 | echo ""
130 | echo "and ensure that your active service definition in $COMPOSENAME"
131 | echo "accurately reflects the version in the template."
132 | echo ""
133 |
134 | fi
135 |
136 | # take down the service
137 | echo "deactivating $CONTAINER"
138 | docker-compose -f "$COMPOSE" rm --force --stop -v "$CONTAINER"
139 |
140 | else
141 | echo "$RESTOREDIR is empty - that would be the end of that"
142 | fi
143 |
144 | # tidy up
145 | rm -rf "$RESTOREDIR"
146 |
147 | echo "----- Finished $SCRIPT at $(date) -----"
148 |
--------------------------------------------------------------------------------
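The gitea, nextcloud and wordpress restores all repeat the same countdown: poll a health check, decrement RETRIES, sleep, and give up when the counter hits zero. The pattern can be factored into one function; here a marker file created by a background job stands in for "docker exec $CONTAINER_DB iotstack_healthcheck.sh", so the sketch runs without Docker (the function name and marker mechanism are invented for the demo):

```shell
#!/usr/bin/env bash

# waitForReady <retries> <delay-seconds> <command ...>
# returns 0 as soon as the command succeeds, 1 when retries are exhausted
waitForReady() {
   local RETRIES="$1" ; shift
   local DELAY="$1" ; shift
   while ! "$@" >/dev/null 2>&1 ; do
      # probe failed - decrement the retry counter
      let "RETRIES-=1"
      if [ "$RETRIES" -le 0 ] ; then
         # retries exhausted - declare failure
         return 1
      fi
      sleep "$DELAY" ; echo " re-trying ($RETRIES) ..."
   done
   return 0
}

# stand-in probe: succeeds once a marker file appears, as if a
# container's healthcheck had started passing
MARKER=$(mktemp -u)
( sleep 1 ; touch "$MARKER" ) &

if waitForReady 30 1 test -e "$MARKER" ; then
   echo "ready"
else
   echo "did not come up properly"
fi
```

Factoring the loop this way would let each script call one line, e.g. `waitForReady 30 1 docker exec "$CONTAINER_DB" iotstack_healthcheck.sh || exit 1`, instead of repeating the fifteen-line countdown.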
/scripts/iotstack_restore_wordpress:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # support user renaming of script
4 | SCRIPT=$(basename "$0")
5 |
6 | # should not run as root
7 | [ "$EUID" -eq 0 ] && echo "$SCRIPT should NOT be run using sudo" && exit 1
8 |
9 | # assumptions that can be overridden
10 | IOTSTACK=${IOTSTACK:-"$HOME/IOTstack"}
11 | CONTAINER="${CONTAINER:-"wordpress"}"
12 | DEFAULTFILENAME=${DEFAULTFILENAME:-"$CONTAINER-backup.tar.gz"}
13 |
14 | # the project name is the all-lower-case form of the folder name
15 | PROJECT=$(basename "${IOTSTACK,,}")
16 |
17 | # the database container is
18 | CONTAINER_DB="${CONTAINER}_db"
19 |
20 | # useful function
21 | isContainerRunning() {
22 | if STATUS=$(curl -s --unix-socket /var/run/docker.sock http://localhost/containers/$1/json | jq .State.Status) ; then
23 | if [ "$STATUS" = "\"running\"" ] ; then
24 | return 0
25 | fi
26 | fi
27 | return 1
28 | }
29 |
30 | preferredCommand() {
31 | if [ -n "$(docker exec "$CONTAINER_DB" which "$1")" ] ; then
32 | echo "$1"
33 | else
34 | echo "$2"
35 | fi
36 | }
37 |
38 |
39 | # $1 is required and is either path to a .tar.gz or the path to a folder
40 | # $2 is optional and is the runtag (yyyy-mm-dd_hhmm.host-name)
41 | # $3 is optional and overrides the default file name
42 |
43 | case "$#" in
44 |
45 | 1)
46 | RESTORE_TAR_GZ=$(realpath "$1")
47 | ;;
48 |
49 | 2 | 3)
50 | RESTORE_TAR_GZ=$(realpath "$1/$2.${3:-"$DEFAULTFILENAME"}")
51 | ;;
52 |
53 | *)
54 | echo "Usage 1: $SCRIPT path/to/$DEFAULTFILENAME"
55 | echo "Usage 2: $SCRIPT path/to/backupdir runtag {override}"
56 | echo " (override defaults to $DEFAULTFILENAME)"
57 | exit 1
58 | ;;
59 |
60 | esac
61 |
62 | # a missing restore tar is not an error - warn and skip
63 | if [ ! -e "$RESTORE_TAR_GZ" ] ; then
64 | echo "Warning: $RESTORE_TAR_GZ does not exist - skipped"
65 | exit 0
66 | fi
67 |
68 | # assumptions
69 | COMPOSENAME="docker-compose.yml"
70 | COMPOSE="$IOTSTACK/$COMPOSENAME"
71 | VOLUMES="$IOTSTACK/volumes"
72 | WORDPRESS_VOLUMES="$VOLUMES/$CONTAINER"
73 | WORDPRESS_DB_DIR="db_backup"
74 | WORDPRESS_DIR="html"
75 |
76 | # check the key assumptions
77 | if ! [ -d "$IOTSTACK" -a -e "$COMPOSE" ] ; then
78 | echo "Error: One of the following does not exist:"
79 | echo " $IOTSTACK"
80 | echo " $COMPOSE"
81 | echo "This may indicate a problem with your installation."
82 | exit 1
83 | fi
84 |
85 | # check that containers are not running
86 | if isContainerRunning "$CONTAINER" || isContainerRunning "$CONTAINER_DB" ; then
87 | echo "Warning: Neither $CONTAINER nor $CONTAINER_DB should be running at"
88 | echo " the start of a restore. Please deactivate both and try again."
89 | exit 0
90 | fi
91 |
92 | # now we can begin
93 | echo "----- Starting $SCRIPT at $(date) -----"
94 |
95 | # make a temporary directory to unpack into
96 | RESTOREDIR=$(mktemp -d -p "$IOTSTACK")
97 |
98 | # unpack the general backup into the temporary directory
99 | echo "unpacking $RESTORE_TAR_GZ"
100 | sudo tar -x --same-owner -z -f "$RESTORE_TAR_GZ" -C "$RESTOREDIR"
101 |
102 | # check expected folders are present in restore
103 | for SPATH in "$WORDPRESS_DIR" "$WORDPRESS_DB_DIR" ; do
104 | if ! [ -d "$RESTOREDIR/$SPATH" ] ; then
105 | echo "Error: $SPATH not found in backup"
106 | echo "This may indicate $RESTORE_TAR_GZ is malformed."
107 | exit 1
108 | fi
109 | done
110 |
111 | # does the top-level folder of the persistent store exist?
112 | if [ -d "$WORDPRESS_VOLUMES" ] ; then
113 | echo "erasing contents of $WORDPRESS_VOLUMES"
114 | sudo rm -rf "$WORDPRESS_VOLUMES"/*
115 | else
116 | echo "creating empty $WORDPRESS_VOLUMES"
117 | sudo mkdir -p "$WORDPRESS_VOLUMES"
118 | fi
119 |
120 | echo "moving restored artifacts into place"
121 | sudo mv "$RESTOREDIR/"* "$WORDPRESS_VOLUMES/"
122 |
123 | echo "cleaning up intermediate files"
124 | sudo rm -rf "$RESTOREDIR"
125 |
126 | # bring up the wordpress_db container (done early to give time to start)
127 | echo "activating $CONTAINER_DB (temporarily)"
128 | docker-compose -f "$COMPOSE" up -d "$CONTAINER_DB"
129 |
130 | # extra stabilisation time - prophylactic
131 | sleep 3
132 |
133 | # wait for mariadb (wordpress_db) to be ready for business
134 | echo "waiting for $CONTAINER_DB to start"
135 | RETRIES=30
136 | while : ; do
137 |
138 | # see if wordpress_db reports itself ready for business
139 | docker exec "$CONTAINER_DB" iotstack_healthcheck.sh >/dev/null 2>&1
140 |
141 | # is the container ready?
142 | if [ $? -ne 0 ] ; then
143 |
144 | # no! decrement the retry counter
145 | let "RETRIES-=1"
146 |
147 | # should we retry?
148 | if [ $RETRIES -gt 0 ] ; then
149 |
150 | # yes! wait, then retry
151 | sleep 1 ; echo " re-trying ($RETRIES) ..." ; continue
152 |
153 | fi
154 |
155 | # retries exhausted - declare failure
156 | echo "$CONTAINER_DB did not come up properly - unable to reload database"
157 | exit 1
158 |
159 | fi
160 |
161 | # healthcheck passed
162 | break;
163 |
164 | done
165 |
166 | # extra stabilisation time - prophylactic
167 | sleep 3
168 |
169 | # try to ensure the container has a root password
170 | echo "checking whether $CONTAINER_DB has a root password (ignore any errors)"
171 | echo "(refer https://github.com/linuxserver/docker-mariadb/issues/163)"
172 | docker exec "$CONTAINER_DB" bash -c "$(preferredCommand mariadb-admin mysqladmin) -u root password \$MYSQL_ROOT_PASSWORD"
173 |
174 | # tell wordpress_db to perform the restore
175 | echo "telling $CONTAINER_DB to restore a portable backup"
176 | docker exec "$CONTAINER_DB" bash -c "$(preferredCommand mariadb mysql) -p\$MYSQL_ROOT_PASSWORD \$MYSQL_DATABASE …
--------------------------------------------------------------------------------
/scripts/snapshot_raspbian_system:
--------------------------------------------------------------------------------
47 | … "$CONFIG_INCLUDE"
48 | /etc
49 | /etc-baseline
50 | /var/spool/cron/crontabs
51 | /home
52 | INCLUSIONS
53 |
54 | echo "$CONFIG_INCLUDE initialised from defaults:"
55 | sed -e "s/^/ /" "$CONFIG_INCLUDE"
56 |
57 | exit 0
58 |
59 | fi
60 |
61 | # we can begin
62 | echo "----- Starting $SCRIPT at $(date) -----"
63 |
64 | # the working inclusions will be here (content of the configured
65 | # inclusions, filtered to remove items that don't exist)
66 | WORKING_INCLUDE="$(mktemp -p /dev/shm/)" ; touch "$WORKING_INCLUDE"
67 |
68 | # form the full cloud reference
69 | CLOUD_REF="$CLOUD_PREFIX/$MARKER"
70 |
71 | # extract the target portion (everything to the left of the first colon)
72 | CLOUD_TARGET="${CLOUD_REF%%:*}"
73 |
74 | # extract the path portion (everything to the right of the first colon)
75 | CLOUD_PATH=${CLOUD_REF#*:}
76 |
77 | # the backup is stored in this directory
78 | BACKUP_DIR=$(mktemp -d --tmpdir "$SCRIPT-XXXXXX")
79 |
80 | # the backup file is stored in that directory
81 | BACKUP_TAR_GZ="$BACKUP_DIR/$(date +"%Y-%m-%d_%H%M").$HOSTNAME.raspbian-snapshot.tar.gz"
82 |
83 | # create a temporary file to hold a list of excluded paths
84 | BACKUP_EXCLUDE="$(mktemp -p /dev/shm/ backup_exclusions_XXXXXX.txt)"
85 |
86 | # plus a temporary file to hold annotations
87 | ANNOTATIONS="$(mktemp -p /dev/shm/ backup_annotations_XXXXXX.txt)"
88 |
89 | # report facts
90 | echo "Environment:"
91 | echo " Script marker = $MARKER"
92 | echo " Search paths = $CONFIG_INCLUDE"
93 | echo " Exclude marker = $EXCLUDE_MARKER"
94 | echo " Include marker = $INCLUDE_MARKER"
95 | echo " Cloud method = $CLOUD_METHOD"
96 | echo " Cloud reference = $CLOUD_REF"
97 |
98 | # iterate the list of included DIRECTORIES
99 | echo "Scanning:"
100 | for INCLUDE in $(cat "$CONFIG_INCLUDE") ; do
101 |
102 | # does the item exist?
103 | if [ -e "$INCLUDE" ] ; then
104 |
105 | # yes! add it to the working include list
106 | echo "$INCLUDE" >>"$WORKING_INCLUDE"
107 |
108 | fi
109 |
110 | # is the included item a directory?
111 | if [ -d "$INCLUDE" ] ; then
112 |
113 | # emit the name to give an indication of progress
114 | echo " $INCLUDE"
115 |
116 | # yes! search for sub-directories already managed by git or
117 | # subversion, or which contain the marker file to explicitly
118 | # exclude the directory and its contents. If a match is found,
119 | # add the PARENT directory to the exclusion list
120 | for EXCLUDE in ".git" ".svn" "$EXCLUDE_MARKER" ; do
121 | for EXCLUDED in $(sudo find "$INCLUDE" -name "$EXCLUDE") ; do
122 | PARENTDIR=$(dirname "$EXCLUDED")
123 | if [ ! -e "$PARENTDIR/$INCLUDE_MARKER" ] ; then
124 | echo "$PARENTDIR" >>"$BACKUP_EXCLUDE"
125 | echo -e "\n----- [$SCRIPT] ----- excluding $PARENTDIR" >>"$ANNOTATIONS"
126 | case "$EXCLUDE" in
127 | ".git" )
128 | if [ -n "$(which git)" ] ; then
129 | git -C "$PARENTDIR" remote -v >>"$ANNOTATIONS" 2>&1
130 | git -C "$PARENTDIR" status >>"$ANNOTATIONS" 2>&1
131 | fi
132 | ;;
133 | ".svn" )
134 | if [ -n "$(which svn)" ] ; then
135 | svn info --show-item url "$PARENTDIR" >>"$ANNOTATIONS" 2>&1
136 | svn status "$PARENTDIR" >>"$ANNOTATIONS" 2>&1
137 | fi
138 | ;;
139 | *)
140 | ;;
141 | esac
142 | fi
143 | done
144 | done
145 |
146 | # now search each looking for directories explicitly identified
147 | # as caches and exclude those too. Done here as a "for" in case
148 | # more cache-like candidates are identified in future.
149 | for EXCLUDE in ".cache" ; do
150 | for EXCLUDED in $(sudo find "$INCLUDE" -type d -name "$EXCLUDE") ; do
151 | if [ ! -e "$EXCLUDED/$INCLUDE_MARKER" ] ; then
152 | echo "$EXCLUDED" >>"$BACKUP_EXCLUDE"
153 | echo -e "\n----- [$SCRIPT] ----- excluding $EXCLUDED" >>"$ANNOTATIONS"
154 | fi
155 | done
156 | done
157 |
158 | fi
159 |
160 | done
161 |
162 | # append FILES for inclusion - /boot done here so that only files
163 | # guaranteed to exist get added to the list and don't trigger spurious
164 | # warnings from tar
165 | ls -1 /boot/config.txt* /boot/cmdline.txt* >>"$WORKING_INCLUDE"
166 | ls -1 /boot/firmware/config.txt* /boot/firmware/cmdline.txt* >>"$WORKING_INCLUDE"
167 | echo "$ANNOTATIONS" >>"$WORKING_INCLUDE"
168 |
169 | # create the file (sets ownership correctly)
170 | touch "$BACKUP_TAR_GZ"
171 |
172 | # add information to the report
173 | echo "Paths included in the backup:"
174 | sed -e "s/^/ /" "$WORKING_INCLUDE"
175 | echo "Paths excluded from the backup:"
176 | sed -e "s/^/ /" "$BACKUP_EXCLUDE"
177 |
178 | # perform the backup
179 | sudo tar \
180 | -czPf "$BACKUP_TAR_GZ" \
181 | -X "$BACKUP_EXCLUDE" \
182 | -T "$WORKING_INCLUDE" \
183 | --warning=none
184 |
185 | # clean up the working files
186 | rm "$WORKING_INCLUDE"
187 | rm "$BACKUP_EXCLUDE"
188 |
189 | # assume the file to be backed-up is the .tar.gz just created
190 | BACKUP_RESULT="$BACKUP_TAR_GZ"
191 |
192 | # is gpg installed and is there a candidate key to use?
193 | if [ -n "$(which gpg)" -a -n "$GPGKEYID" ] ; then
194 |
195 | # yes! search the keychain for that key
196 | gpg --list-keys "$GPGKEYID" >/dev/null 2>&1
197 |
198 | # was the key found in the keychain?
199 | if [ $? -eq 0 ] ; then
200 |
201 | # yes! redefine the backup result to be an encrypted file
202 | BACKUP_RESULT="${BACKUP_RESULT}.gpg"
203 |
204 | # perform the encryption
205 | echo "Encrypting the backup using --recipient $GPGKEYID"
206 | gpg --recipient "$GPGKEYID" --output "$BACKUP_RESULT" --encrypt "$BACKUP_TAR_GZ"
207 |
208 | else
209 |
210 | # no! moan
211 | echo "Warning: key $GPGKEYID not found in keychain - unable to encrypt backup"
212 |
213 | fi
214 |
215 | fi
216 |
217 | # copy the backup file off this machine
218 | case "$CLOUD_METHOD" in
219 |
220 | "RCLONE" )
221 | echo "Using rclone to copy the result off this machine"
222 | rclone copy -v \
223 | "$BACKUP_RESULT" \
224 | "$CLOUD_TARGET:$CLOUD_PATH"
225 | ;;
226 |
227 | "RSYNC" )
228 | echo "Using rsync to copy the result off this machine"
229 | ssh "$CLOUD_TARGET" "mkdir -p \"$CLOUD_PATH\""
230 | rsync -vrt \
231 | "$BACKUP_RESULT" \
232 | "$CLOUD_TARGET:$CLOUD_PATH/"
233 | ;;
234 |
235 | "SCP" )
236 | echo "Using scp to copy the result off this machine"
237 | ssh "$CLOUD_TARGET" "mkdir -p \"$CLOUD_PATH\""
238 | scp "$BACKUP_RESULT" "$CLOUD_TARGET:$CLOUD_PATH/"
239 | ;;
240 |
241 | *)
242 | echo "Warning: $CLOUD_METHOD backup method is not supported"
243 | echo "Warning: The only backup files are the ones in $BACKUPSDIR"
244 | ;;
245 |
246 | esac
247 |
248 | # remove the temporary backup structures
249 | rm -rf "$BACKUP_DIR"
250 |
251 | echo "----- Finished $SCRIPT at $(date) -----"
252 |
--------------------------------------------------------------------------------
/ssh-tutorial.md:
--------------------------------------------------------------------------------
1 | # Tutorial: Setting up SSH keys for password-less access
2 |
3 | ## Background
4 |
5 | Security professionals (people whose dreams occur in a world filled with acronym soup which they never bother to explain to anyone) refer to the process described here as "TOFU", which means "Trust On First Use". TOFU is not considered a polite term.
6 |
7 | Security professionals recommend setting up certificates. It's a laudable recommendation. In the long term, certificates are indeed worth the price of the very steep learning curve, particularly once you have more than a handful of hosts to worry about. Certificates are also "more secure", although what constitutes "sufficient" security in the average home environment is open to debate.
8 |
9 | The learning curve is the problem. If I started to explain the process of setting up certificates, I doubt you'd keep reading.
10 |
11 | This tutorial is already long enough! And this is the *simple* approach.
12 |
13 | ## Task Goal
14 |
15 | Set up two computers so that they can communicate with each other, securely, over SSH, without needing passwords.
16 |
17 | ## The actors
18 |
19 | I'm going to use two Raspberry Pis but the same concepts apply if you are using desktop machines running Linux, macOS or Windows.
20 |
21 | ### First Raspberry Pi
22 |
23 | |Variable | Value |
24 | |------------|---------------|
25 | |Host Name | sec-dev |
26 | |Domain Name | sec-dev.local |
27 | |User Name | secuser |
28 |
29 | ### Second Raspberry Pi
30 |
31 | |Variable | Value |
32 | |------------|---------------|
33 | |Host Name | tri-dev |
34 | |Domain Name | tri-dev.local |
35 | |User Name | triuser |
36 |
37 | I'm using Multicast DNS names (the things ending in `.local`; sometimes known as "ZeroConf", "Bonjour" or "Rendezvous" names) but you can substitute domain names or IP addresses if you wish.
38 |
39 | ## Assumption
40 |
41 | I'm going to assume that both computers have SSH running. The simplest way to get SSH running on a Raspberry Pi is to create a file named `ssh` on the `/boot` volume, and reboot. If your RPi is connected to a keyboard and screen, open a terminal session and type:
42 |
43 | ```
44 | $ sudo touch /boot/ssh
45 | $ sudo reboot
46 | ```
47 |
48 | Alternatively, temporarily move the SD card (or SSD if you are booting and running from SSD) to a desktop machine and create a file called "ssh" in the boot partition. Only the **name** of the file is important. Its contents are irrelevant.
49 |
50 | ## Login checks
51 |
52 | ### First login
53 |
54 | Make sure that you can already login to each machine, albeit using passwords. Specifically:
55 |
56 | 1. On sec-dev:
57 |
58 | ```
59 | $ ssh triuser@tri-dev.local
60 |
61 | The authenticity of host 'tri-dev.local (192.168.203.7)' can't be established.
62 | ED25519 key fingerprint is SHA256:a8e73b2ba4f2f183c3a90a9911817d6ece4eb3d45fd.
63 | Are you sure you want to continue connecting (yes/no)? yes
64 | Warning: Permanently added 'tri-dev.local,192.168.203.7' (ED25519) to the list of known hosts.
65 | triuser@tri-dev.local's password: ••••••••
66 | Linux tri-dev 5.4.83-v7l+ #1379 SMP Mon Dec 14 13:11:54 GMT 2020 armv7l
67 | …
68 | $
69 | ```
70 |
71 | 2. On tri-dev:
72 |
73 | ```
74 | $ ssh secuser@sec-dev.local
75 |
76 | The authenticity of host 'sec-dev.local (192.168.203.9)' can't be established.
77 | ED25519 key fingerprint is SHA256:16befc20e7b13a52361e60698fa1742dcb41bb52331.
78 | Are you sure you want to continue connecting (yes/no)? yes
79 | Warning: Permanently added 'sec-dev.local,192.168.203.9' (ED25519) to the list of known hosts.
80 | secuser@sec-dev.local's password: ••••••••
81 | Linux sec-dev 5.4.83-v7l+ #1379 SMP Mon Dec 14 13:11:54 GMT 2020 armv7l
82 | …
83 | $
84 | ```
85 |
86 | 3. On **both** machines, logout (either Control+D or `exit`).
87 |
88 | Everything from "The authenticity of …" down to "Warning: …" is **this** Raspberry Pi telling you that it doesn't know about the **other** Raspberry Pi.
89 |
90 | This is the *Trust On First Use* pattern in action. Once you type "yes", each Raspberry Pi will remember (trust) the other one.
91 |
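If you're curious about what each machine actually remembered, `ssh-keygen -F` looks a host up in a `known_hosts` file. A minimal sketch, assuming `ssh-keygen` is installed (it ships with OpenSSH), using a scratch directory and a throw-away key so your real `~/.ssh/known_hosts` is never touched:

```
DEMO=$(mktemp -d)

# generate a stand-in host key (normally created by the OS in /etc/ssh)
ssh-keygen -q -t ed25519 -N "" -f "$DEMO/host_key"

# simulate the entry that TOFU appends to known_hosts on first connect
echo "tri-dev.local $(cut -d' ' -f1-2 "$DEMO/host_key.pub")" >"$DEMO/known_hosts"

# look the host up; a match is reported and the exit status is 0
FOUND=$(ssh-keygen -F tri-dev.local -f "$DEMO/known_hosts")
echo "$FOUND"

rm -rf "$DEMO"
```

Run against your real `known_hosts` (omit the `-f` option), the same lookup tells you whether a given host has already passed the TOFU step.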
92 | ### Subsequent logins
93 |
94 | 1. On sec-dev:
95 |
96 | ```
97 | $ ssh triuser@tri-dev.local
98 |
99 | triuser@tri-dev.local's password: ••••••••
100 | Linux tri-dev 5.4.83-v7l+ #1379 SMP Mon Dec 14 13:11:54 GMT 2020 armv7l
101 | …
102 | $
103 | ```
104 |
105 | 2. On tri-dev:
106 |
107 | ```
108 | $ ssh secuser@sec-dev.local
109 |
110 | secuser@sec-dev.local's password: ••••••••
111 | Linux sec-dev 5.4.83-v7l+ #1379 SMP Mon Dec 14 13:11:54 GMT 2020 armv7l
112 | …
113 | $
114 | ```
115 |
116 | 3. On **both** machines, logout (either Control+D or `exit`).
117 |
118 | This time, each Raspberry Pi already knows about the other so it bypasses the TOFU warnings and goes straight to asking you for the password.
119 |
120 | But our goal is password-less access so let's keep moving.
121 |
122 | ## Generate user key-pairs
123 |
124 | 1. On sec-dev:
125 |
126 | ```
127 | $ ssh-keygen -t rsa -C "secuser@sec-dev rsa key"
128 | ```
129 |
130 | Notes:
131 |
132 | * I am specifying "rsa" because it is likely to be supported on almost any computer you are trying to use, pretty much regardless of age. It is also usually what you get by default if you omit the `-t` option. If you wish to use a more up-to-date algorithm, you can replace "rsa" with "ed25519". However, that will change some filenames and make these instructions more difficult to follow. You will also need to make sure that ED25519 is supported on all the systems where you want to use it.
133 | * the `-C "secuser@sec-dev rsa key"` is a comment which becomes part of the key files to help you identify which keys belong to what.
134 |
135 | ssh-keygen will ask some questions. Accept all the defaults by pressing return:
136 |
137 | ```
138 | Generating public/private rsa key pair.
139 | Enter file in which to save the key (/home/secuser/.ssh/id_rsa):
140 | Enter passphrase (empty for no passphrase):
141 | Enter same passphrase again:
142 | ```
143 |
144 | You will then see output similar to this:
145 |
146 | ```
147 | Your identification has been saved in /home/secuser/.ssh/id_rsa.
148 | Your public key has been saved in /home/secuser/.ssh/id_rsa.pub.
149 | The key fingerprint is:
150 | SHA256:NugXeay77qdVHYjvuaNg/HP3o6VdUWYdffZiWtT4xEY secuser@sec-dev rsa key
151 | The key's randomart image is:
152 | +---[RSA 2048]----+
153 | | *E|
154 | | . .o @|
155 | | . ...=B|
156 | | . o . .+++|
157 | | . S o o+.o |
158 | | . o = o.. .|
159 | | . * . o ..|
160 | | o =o o.=..|
161 | | o*+o+.=.oo|
162 | +----[SHA256]-----+
163 | ```
164 |
165 | 2. On tri-dev:
166 |
167 | ```
168 | $ ssh-keygen -t rsa -C "triuser@tri-dev rsa key"
169 | ```
170 |
171 | and accept all the defaults, as above.
172 |
173 | ## Exchange public keys
174 |
175 | 1. On sec-dev:
176 |
177 | ```
178 | $ ssh-copy-id -i ~/.ssh/id_rsa.pub triuser@tri-dev.local
179 | ```
180 |
181 | The expected response looks like this:
182 |
183 | ```
184 | /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/secuser/.ssh/id_rsa.pub"
185 | /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
186 | /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
187 | triuser@tri-dev.local's password: ••••••••
188 |
189 | Number of key(s) added: 1
190 |
191 | Now try logging into the machine, with: "ssh 'triuser@tri-dev.local'"
192 | and check to make sure that only the key(s) you wanted were added.
193 | ```
194 |
195 | Follow the advice and try logging-in:
196 |
197 | ```
198 | $ ssh triuser@tri-dev.local
199 | Linux tri-dev 5.4.83-v7l+ #1379 SMP Mon Dec 14 13:11:54 GMT 2020 armv7l
200 | …
201 | $
202 | ```
203 |
204 | Like magic! No password prompt.
205 |
206 | 2. On tri-dev:
207 |
208 | ```
209 | $ ssh-copy-id -i ~/.ssh/id_rsa.pub secuser@sec-dev.local
210 | ```
211 |
212 | You should get a password prompt and, if you then try logging in:
213 |
214 | ```
215 | $ ssh secuser@sec-dev.local
216 | Linux sec-dev 5.4.83-v7l+ #1379 SMP Mon Dec 14 13:11:54 GMT 2020 armv7l
217 | …
218 | $
219 | ```
220 |
221 | Double magic!
222 |
223 | 3. On **both** machines, logout (either Control+D or `exit`).
224 |
225 | ## What's stored where
226 |
227 | On each machine, the `~/.ssh` directory will contain four files:
228 |
229 | * `id_rsa` the private key for **this** user on **this** machine. This was set up by `ssh-keygen`.
230 | * `id_rsa.pub` the public key matching the private key. This was set up by `ssh-keygen`.
231 | * `known_hosts` a list of public keys for the **other** hosts **this** user account knows about. These keys are set up by the TOFU pattern.
232 | * `authorized_keys` a list of public keys belonging to the **remote users** that are authorised to access **this** user account without a password. These keys are set up by `ssh-copy-id`.
233 |
234 | All of these files are "printable" in the sense that you can do:
235 |
236 | ```
237 | $ cat ~/.ssh/*
238 | ```
239 |
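SSH is also fussy about permissions: it will refuse to use a private key that other users can read, and `sshd` (with the default `StrictModes yes`) ignores an `authorized_keys` file that others can write to. A minimal sketch of the expected modes, applied to a scratch copy so nothing real is touched (`stat -c` is the GNU form; on macOS use `stat -f "%Lp"`):

```
DEMO="$(mktemp -d)/dot-ssh"
mkdir -p "$DEMO"
touch "$DEMO/id_rsa" "$DEMO/id_rsa.pub" "$DEMO/known_hosts" "$DEMO/authorized_keys"

chmod 700 "$DEMO"                  # the directory: owner only
chmod 600 "$DEMO/id_rsa"           # private key: owner read/write only
chmod 644 "$DEMO/id_rsa.pub"       # public key: world-readable is fine
chmod 600 "$DEMO/authorized_keys"  # same treatment as the private key

KEY_MODE=$(stat -c "%a" "$DEMO/id_rsa")
echo "$KEY_MODE"
```

If password-less login mysteriously keeps asking for a password, these modes are the first thing to check.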
240 | ### private keys
241 |
242 | Security professionals like to tell you that a private key should never leave the computer on which it was generated. Fairly obviously, though, you will want to make sure it is included in backups; if you don't, you will have to go through this whole process again. Unless you're paranoid, the somewhat more reasonable advice is to "treat your private key like your wallet and don't leave it lying around."
243 |
244 | ### known_hosts
245 |
246 | On some implementations, the information in `known_hosts` is hashed to make it unreadable. This is the case on the Raspberry Pi.
247 |
248 | Suppose you rebuild tri-dev starting from a clean copy of Raspbian. On first boot, the operating system will generate new host keys (the host equivalent of the user key-pairs you generated with `ssh-keygen`). Those are stored in `/etc/ssh`.
249 |
250 | When you try logging-in from sec-dev, you'll get a warning about how some other computer might be trying to impersonate the computer you are actually trying to reach. What the warning **actually** means is that the entry for tri-dev in your `known_hosts` file on sec-dev doesn't match the keys that were generated when tri-dev was rebuilt.
251 |
252 | To solve this problem, you just remove the offending key from your known-hosts file. For example, to make sec-dev forget about tri-dev:
253 |
254 | ```
255 | $ ssh-keygen -R tri-dev.local
256 | ```
257 |
258 | Then you can try to connect. You'll get the TOFU warning as sec-dev learns tri-dev's new identity, after which everything will be back to normal.
259 |
260 | ### authorized_keys
261 |
262 | This is just a text file with one line for each user that has sent their public key to **this** host via `ssh-copy-id`.
263 |
264 | When you add a new host to your network, it can become a bit of a pain to have to run around and exchange all the keys. It's a "full mesh" or n² problem, and one that "certificates" avoid.
265 |
266 | There is, however, no reason why you can't maintain a single authoritative list of all of your public keys which you update once, then push to all of your machines (i.e. reduce the n² problem to an n problem).
267 |
268 | Suppose you decide that sec-dev will hold the authoritative copy of your authorized_keys file. It already holds the public key for tri-dev. It simply needs its own public key added:
269 |
270 | ```
271 | $ cd ~/.ssh
272 | $ cat id_rsa.pub >>authorized_keys
273 | ```
274 |
275 | Then you can push that file to tri-dev:
276 |
277 | ```
278 | $ scp authorized_keys triuser@tri-dev.local:~/.ssh
279 | ```
280 |
281 | Suppose you add a new Raspberry Pi called "quad-dev". You've reached the point where you've used `ssh-copy-id` to transfer quad-dev's public key to sec-dev, so it's now in sec-dev's authorized_keys file. All you need to do is to go to sec-dev and:
282 |
283 | ```
284 | $ scp authorized_keys triuser@tri-dev.local:~/.ssh
285 | $ scp authorized_keys quaduser@quad-dev.local:~/.ssh
286 | ```
287 |
288 | > You'll still face the one-time TOFU pattern on each host that hasn't previously connected to quad-dev.
289 |
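As the list of hosts grows, those repeated `scp` commands generalise to a loop. A sketch, with a hypothetical host list and `echo` in front of `scp` so it only prints what it would do (delete the `echo` to copy for real):

```
# hosts that should receive the authoritative authorized_keys file
HOSTS="triuser@tri-dev.local quaduser@quad-dev.local"

for DEST in $HOSTS ; do
    echo scp authorized_keys "$DEST:~/.ssh/"
done
```

Dropping the loop into a small script on sec-dev means adding a host to your network only ever costs you one `ssh-copy-id` plus one run of the script.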
290 | ### ~/.ssh/config
291 |
292 | One file we haven't mentioned is the config file. This is the basic pattern:
293 |
294 | ```
295 | host sec-dev
296 | hostname sec-dev.local
297 | user secuser
298 |
299 | host tri-dev
300 | hostname tri-dev.local
301 | user triuser
302 | ```
303 |
304 | With that file in place on both sec-dev and tri-dev:
305 |
306 | 1. On sec-dev, you can login to tri-dev with just:
307 |
308 | ```
309 | $ ssh tri-dev
310 | ```
311 |
312 | 2. On tri-dev, you can login to sec-dev with just:
313 |
314 | ```
315 | $ ssh sec-dev
316 | ```
317 |
318 | What's happening is that SSH is matching on the name in the "host" field, replacing it with the value of the "hostname" field, and sticking the value of the "user" field on the front.
319 |
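You can watch that substitution happen without connecting anywhere: `ssh -G` (OpenSSH 6.8 or later) prints the options ssh would use for a given destination. A sketch against a scratch copy of the config, so your real `~/.ssh/config` isn't involved:

```
DEMO=$(mktemp -d)

# a scratch config with the same pattern as above
cat >"$DEMO/config" <<'EOF'
host tri-dev
    hostname tri-dev.local
    user triuser
EOF

# -F points ssh at the scratch config; -G resolves and prints, no connection
RESOLVED=$(ssh -G -F "$DEMO/config" tri-dev | grep -E "^(hostname|user) ")
echo "$RESOLVED"

rm -rf "$DEMO"
```

The output includes `user triuser` and `hostname tri-dev.local`, confirming the alias expanded the way the paragraph above describes.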
320 | If any of your computers is running macOS High Sierra or later, also add this line after each "user" field:
321 |
322 | ```
323 | UseKeychain yes
324 | ```
325 |
326 | You can't necessarily put "UseKeychain no" on non-macOS systems because the configuration file parser may not recognise the directive. A better approach is to maintain a master copy which includes the "UseKeychain yes" directives, use that file "as is" on macOS, but filter it for systems that don't support the directive. For example:
327 |
328 | * On a macOS system (as-is):
329 |
330 | ```
331 | $ cp ssh_config_master ~/.ssh/config
332 | ```
333 |
334 | * On other systems (filter):
335 |
336 | ```
337 | $ grep -v "UseKeychain" ssh_config_master >~/.ssh/config
338 | ```
339 |
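If all of your machines run OpenSSH 6.3 or later, an alternative to filtering is the `IgnoreUnknown` directive, which tells the parser which directives it may skip rather than reject, so one unfiltered master file can work everywhere. A sketch:

```
# near the top of ~/.ssh/config, before any host blocks
IgnoreUnknown UseKeychain

host sec-dev
    hostname sec-dev.local
    user secuser
    UseKeychain yes
```

On macOS, `UseKeychain yes` takes effect as before; on other systems the parser now ignores it instead of refusing to load the file.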
--------------------------------------------------------------------------------