├── LICENSE ├── README.md ├── docs ├── FAQ_AND_TROUBLESHOOTING.md ├── FIREJAIL.md ├── HOW_TO_BTRFS.md ├── HOW_TO_FIREFOX.md ├── HOW_TO_MANAGE_SECRETS.md ├── HOW_TO_SECURE_USB_DEVICE.md ├── INSTALL.md ├── LIMITATIONS.md ├── NETWORKING.md ├── PROXY.md └── THREAT_MODEL.md ├── images └── notifications.png ├── install.sh ├── packages ├── aur └── regular ├── rootfs ├── etc │ ├── audit │ │ ├── auditd.conf │ │ └── rules.d │ │ │ ├── 00-reset.rules │ │ │ ├── 01-exclude.rules │ │ │ ├── 10-files.rules │ │ │ └── 99-immutable.rules │ ├── bash.bashrc │ ├── conf.d │ │ └── snapper │ ├── cve-ignore.list │ ├── docker │ │ └── daemon.json │ ├── firejail │ │ ├── firecfg.d │ │ │ └── disabled.conf │ │ ├── firefox-common.local │ │ ├── firejail.config │ │ ├── globals.local │ │ ├── keepassxc.local │ │ └── signal-desktop.local │ ├── iwd │ │ └── main.conf │ ├── libvirt │ │ ├── hooks │ │ │ └── qemu │ │ ├── libvirtd.conf │ │ └── qemu.conf │ ├── locale.conf │ ├── nftables.conf │ ├── pacman.d │ │ └── hooks │ │ │ ├── post-20-dash-symlink.hook │ │ │ ├── post-20-firejail-hardening.hook │ │ │ ├── post-20-firejail-symlinks.hook │ │ │ ├── post-90-should-reboot-check.hook │ │ │ └── pre-10-deny-xorg-packages.hook │ ├── resolv.conf │ ├── security │ │ └── faillock.conf │ ├── snapper │ │ └── configs │ │ │ ├── home │ │ │ └── root │ ├── sudoers │ ├── sysctl.d │ │ └── 99-swappiness.conf │ ├── systemd │ │ ├── network │ │ │ ├── 70-wired.network │ │ │ └── 80-wireless.network │ │ ├── resolved.conf.d │ │ │ └── default.conf │ │ ├── sleep.conf.d │ │ │ └── disable-hibernate.conf │ │ ├── system │ │ │ ├── auditd-notify.service │ │ │ ├── auditor.service │ │ │ ├── auditor.timer │ │ │ ├── btrfs-balance.service │ │ │ ├── btrfs-balance.timer │ │ │ ├── check-secure-boot.service │ │ │ ├── dirmngr@etc-pacman.d-gnupg.service.d │ │ │ │ └── override.conf │ │ │ ├── getty@tty1.service.d │ │ │ │ └── autologin.conf │ │ │ ├── local-forwarding-proxy.service │ │ │ ├── pacman-notify.service │ │ │ ├── pacman-notify.timer │ │ │ ├── pacman-sync.service │ │ │ ├── pacman-sync.timer │ │ │ ├── should-reboot-check.service │ │ │ ├── should-reboot-check.timer │ │ │ ├── systemd-networkd-wait-online.service.d │ │ │ │ └── override.conf │ │ │ └── usb-auto-mount@.service │ │ └── user │ │ │ ├── backup-to-cloud.service │ │ │ ├── backup-to-cloud.timer │ │ │ ├── gammastep.service │ │ │ ├── journalctl-notify.service │ │ │ ├── restic-unattended.service │ │ │ ├── restic-unattended.timer │ │ │ └── sway-session.target │ └── udev │ │ └── rules.d │ │ └── 99-usb-auto-mount.rules └── usr │ └── local │ └── bin │ ├── auditd-notify │ ├── auditor │ ├── backup-to-cloud │ ├── firefox │ ├── journalctl-notify │ ├── lock │ ├── notify-disk-error │ ├── notify-low-on-disk-space │ ├── pacman │ ├── pacman-notify │ ├── proxify │ ├── restic-backup-to-cloud │ ├── should-reboot-check │ ├── signal-desktop │ ├── try-to-fix-low-on-disk-space │ ├── usb-auto-mount │ └── yay └── sync.sh /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2022 ShellCode 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright 
notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # ArchLinux Hardened 2 | 3 | This repository contains my ArchLinux setup which focuses on desktop security. 4 | 5 | Beside security, my setup also aims to use all the bleeding edge and state of the art software we currently have available, most notably: 6 | 7 | - Btrfs : [copy-on-write](https://en.wikipedia.org/wiki/Copy-on-write) filesystem with snapshot support 8 | - Wayland : because X11 is old, slow, and insecure 9 | - NFTables : because firewalling with iptables syntax sucks 10 | 11 | Because of its hardened nature, you might have to get your hands dirty to get things to work. 12 | Therefore this setup is not recommended if you don't have good GNU/Linux knowledge already. 13 | 14 | ## Highlights 15 | 16 | Physical tampering hardening: 17 | 18 | - Secure boot without Microsoft's keys 19 | - No GRUB-like bootloader, the kernel is booted into directly thanks to [unified kernel images](https://wiki.archlinux.org/title/Unified_kernel_image) 20 | - Full disk encryption using LUKS 2 21 | 22 | Exploit mitigation: 23 | 24 | - GrapheneOS' hardened kernel 25 | - Kernel's lockdown mode set to "integrity" 26 | - Firejail + AppArmor (see [FIREJAIL.md](docs/FIREJAIL.md) for the why) 27 | 28 | Network hardening: 29 | 30 | - Strict firewalling rules (drop everything by default, see [NETWORKING.md](https://github.com/ShellCode33/ArchLinux-Hardened/blob/master/docs/NETWORKING.md)) 31 | - Reverse Path Filtering set to strict 32 | - ICMP redirects disabled 33 | - The hardened kernel has very strong defaults regarding network security 34 | 35 | System monitoring: 36 | 37 | - Auditd reporting through desktop notifications 38 | - Many systemd services helping you to manage your system to keep it secure 39 | - Firewall denials notifications 40 | 41 | System resilience: 42 | 43 | - LTS kernel fallback from the BIOS to fix a broken system 44 | - Automated encrypted backups uploaded to a remote server (manual configuration required) 45 | - Automated encrypted incremental backups to an external USB drive (manual configuration required) 46 | 47 | This setup uses desktop notifications extensively, I think this is a good way of monitoring your PC. 48 | 49 | I want to know what's going on at all times, if something fails I want to be aware of it as soon as possible in order to fix it. 
50 | 51 | Here's a sample of notifications you might get: 52 | 53 | ![alt notification](images/notifications.png) 54 | 55 | ## Installation 56 | 57 | Head over to [INSTALL.md](docs/INSTALL.md) 58 | 59 | ## Additional documentation 60 | 61 | - [My threat model](/docs/THREAT_MODEL.md) 62 | - [Manage SSH and GPG secrets securely without a password thanks to KeePassXC](docs/HOW_TO_MANAGE_SECRETS.md) 63 | - [Setup an auto-mounted encrypted standalone USB device](docs/HOW_TO_SECURE_USB_DEVICE.md) 64 | - [Firefox hardening tips](docs/HOW_TO_FIREFOX.md) 65 | - [FAQ and troubleshooting](docs/FAQ_AND_TROUBLESHOOTING.md) 66 | -------------------------------------------------------------------------------- /docs/FAQ_AND_TROUBLESHOOTING.md: -------------------------------------------------------------------------------- 1 | TODO 2 | 3 | - How many passwords will I have to remember ? 4 | - It doesn't seem to work in my VM, how can I make it work ? 5 | - Pacman/Yay doesn't want me to install packages, wtf ? 6 | - Help! My PC won't boot anymore! 7 | - How can I expose an internal service to the outside world ? (SSH server, HTTP server, etc.) 8 | - Application XYZ doesn't work, AppArmor says it's denied, what do I do ? 9 | - I need to set an environment variable globally, how do I do that ? 10 | -------------------------------------------------------------------------------- /docs/FIREJAIL.md: -------------------------------------------------------------------------------- 1 | # Firejail 2 | 3 | This tool is a bit controversial because it is a setuid binary. 4 | 5 | I ended up choosing it anyway for its ease of configuration and balanced defaults. 6 | Firejail's default profiles are more permissive than good AppArmor profiles (such as the ones from [apparmor.d](https://github.com/roddhjav/apparmor.d)), 7 | but the problem with AppArmor on ArchLinux is that it is almost unusable for most applications (those that don't provide an AppArmor profile themselves). 8 | ArchLinux is a rolling release, therefore programs are updated very regularly. 9 | This means that AppArmor profiles become outdated very fast, and things break frequently. 10 | 11 | Regarding the setuid controversy, I was quite sold by a point made by the creator of Firejail [in that thread](https://github.com/netblue30/firejail/issues/3046): 12 | 13 | > Once inside a sandbox - firejail, bubblewrap, or any other seccomp sandbox - you can not exploit any SUID executable present in the system. It has to do with the way seccomp is handled by the kernel. The attack surface of the program that configured seccomp becomes irrelevant. In other words, if you get control of a firefox running in a sandbox, the kernel wouldn't let you exploit the program that started the sandbox. 14 | 15 | (Also, I don't really care if an attacker is able to privesc on my machine, see my [threat model](THREAT_MODEL.md)) 16 | 17 | In addition to that, Firejail has an AppArmor profile which is in use as well. 18 | -------------------------------------------------------------------------------- /docs/HOW_TO_BTRFS.md: -------------------------------------------------------------------------------- 1 | # Administration tips for btrfs 2 | 3 | Tips and tricks for managing your btrfs subvolumes.
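A quick way to get an overview of the subvolumes that already exist (where `/` can be any mounted path belonging to the filesystem) is:

```
sudo btrfs subvolume list /
```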
4 | 5 | ## Create top-level subvolumes 6 | 7 | Mount the top-level subvolume (which is of ID 5): 8 | ``` 9 | sudo mount /dev/mapper/archlinux -o subvolid=5 /mnt 10 | ``` 11 | 12 | Create the subvolume: 13 | 14 | ``` 15 | sudo btrfs subvolume create /mnt/@my-new-subvolume 16 | ``` 17 | 18 | Now that your subvolume has been created, get its ID: 19 | 20 | ``` 21 | sudo btrfs subvolume list / | grep my-new-subvolume 22 | ``` 23 | 24 | You can add it to `/etc/fstab` if need be. 25 | 26 | ## Balancing 27 | 28 | This is done automatically already on a daily basis by [btrfs-balance.service](https://github.com/ShellCode33/ArchLinux-Hardened/blob/master/rootfs/etc/systemd/system/btrfs-balance.service). 29 | 30 | But if you want to run it manually, you can use the following command: 31 | 32 | ``` 33 | sudo btrfs balance start -dusage=50 -dlimit=2 -musage=50 -mlimit=4 / 34 | ``` 35 | 36 | Where `/` is the path to the subvolume to balance. 37 | 38 | Official documentation [there](https://btrfs.readthedocs.io/en/latest/btrfs-balance.html). 39 | 40 | ## Disable CoW on an existing folder 41 | 42 | Do not use the `nodatacow` mount option, it wont work !! Use `chattr +C` instead. 43 | 44 | From [btrfs(5)](https://man.archlinux.org/man/btrfs.5#MOUNT_OPTIONS): 45 | 46 | within a single file system, it is not possible to mount some subvolumes with nodatacow and others with datacow. The mount option of the first mounted subvolume applies to any other subvolumes. 47 | 48 | Setting `chattr +C` on an existing folder is undefined behavior. 49 | To workaround that, first make sure the folder is not in use (for system directories you will have to boot into a live CD). 50 | 51 | Let's say we want to disable CoW on `/var` which is the mount point of the subvolume `@var`. 52 | 53 | Mount the top-level volume from your LiveCD: 54 | 55 | ``` 56 | mount /dev/mapper/archlinux -o subvolid=5 /mnt 57 | ``` 58 | 59 | Rename the folder you want to disable CoW for: 60 | 61 | ``` 62 | mv /mnt/@var /mnt/@old_var 63 | ``` 64 | 65 | Create the new `@var` subvolume: 66 | 67 | ``` 68 | btrfs subvolume create /mnt/@var 69 | ``` 70 | 71 | Disable CoW: 72 | 73 | ``` 74 | chattr +C /mnt/@var 75 | ``` 76 | 77 | Copy the old content the new subvolume: 78 | 79 | ``` 80 | cp -a --reflink=never /mnt/@old_var/. /mnt/@var 81 | ``` 82 | 83 | Remove the old subvolume: 84 | 85 | ``` 86 | btrfs subvolume delete /mnt/@old_var 87 | ``` 88 | 89 | And wait for it to complete: 90 | 91 | ``` 92 | btrfs subvolume sync /mnt 93 | ``` 94 | 95 | You must now change `/etc/fstab` to match the new subvolid. 96 | 97 | These instructions were given for subvolumes, but the same logic also applies to regular folders. Except no subvolume has to be created/deleted. 98 | -------------------------------------------------------------------------------- /docs/HOW_TO_FIREFOX.md: -------------------------------------------------------------------------------- 1 | # Firefox 2 | 3 | Firefox has been hardened both security and privacy wise thanks to [arkenfox/user.js](https://github.com/arkenfox/user.js). 4 | 5 | Its configuration is being updated automatically every time you start Firefox thanks to [this wrapper script](https://github.com/ShellCode33/ArchLinux-Hardened/blob/master/rootfs/usr/local/bin/firefox). 6 | 7 | If you're not happy with its current behavior, edit the Configuration overrides which are available in my dotfiles repo [there](https://github.com/ShellCode33/.dotfiles/blob/master/.mozilla/firefox/user-overrides.js). 
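To give you an idea of the format, overrides are plain `user_pref` lines. The values below are hypothetical examples (they are **not** taken from the dotfiles repo linked above); for instance, to route Firefox through the local forwarding proxy described in [PROXY.md](PROXY.md):

```
// Hypothetical user-overrides.js sample, adjust to your needs
user_pref("network.proxy.type", 1); // 1 = manual proxy configuration
user_pref("network.proxy.http", "127.0.0.1");
user_pref("network.proxy.http_port", 8080);
user_pref("network.proxy.ssl", "127.0.0.1");
user_pref("network.proxy.ssl_port", 8080);
```

Since the wrapper script re-applies the arkenfox configuration on every launch, simply restarting Firefox is enough for new overrides to take effect.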
8 | 9 | As always, a line has to be drawn between security and usability. If you want a truly privacy-respecting browser, use the Tor Browser. 10 | 11 | ## Remember cookies after a restart 12 | 13 | Perform `CTRL + i` on the website you want to stay logged into after a restart of Firefox. 14 | 15 | Then look for "Set cookies" and untick "Use Default". Make sure the radio button is set to "Allow". 16 | -------------------------------------------------------------------------------- /docs/HOW_TO_MANAGE_SECRETS.md: -------------------------------------------------------------------------------- 1 | # Secrets Management 2 | 3 | I think everyone agrees that typing passwords is annoying, especially when you have long and complex passphrases. 4 | This tutorial aims to show you how to securely manage different kinds of secrets, so that you have to type only one password: [KeePassXC](https://keepassxc.org)'s. 5 | 6 | 7 | ## SSH Keys 8 | 9 | Prerequisite: being able to SSH into your server using your SSH keypair. 10 | 11 | Under the hood, KeePassXC uses ssh-agent to store SSH secrets. 12 | 13 | To make sure ssh-agent is up and running, you should add the following to your login shell profile file: 14 | 15 | ``` 16 | eval $(ssh-agent) > /dev/null 17 | ``` 18 | 19 | The login shell is the shell assigned to your user in `/etc/passwd`. The line above should be added to `~/.bash_profile` if your login shell is bash, or to `~/.profile` if your login shell is sh or dash. 20 | 21 | The eval statement will set the `SSH_AUTH_SOCK` environment variable for all the programs you run. You must log out for these changes to propagate. 22 | 23 | Now we must enable the SSH Agent integration in KeePassXC. Head over to the settings section and click on *"SSH Agent"*. 24 | 25 | All you have to do is tick *"Enable SSH Agent integration"*; you should see a green dialog telling you that the connection with ssh-agent is working properly. 26 | 27 | Now let's create a new entry in KeePassXC which contains the password of your SSH private key. While creating the entry, you should see an *"SSH Agent"* tab; click on it and make sure the following is ticked: 28 | 29 | - Add key to agent when database is opened/unlocked 30 | - Remove key from agent when database is closed/locked 31 | 32 | Under the *"Private key"* section, choose the *"External file"* radio button, click on *"Browse..."* and pick your private key (the one that **doesn't** end with `.pub`). 33 | 34 | You are now good to go, ssh won't ask for your password anymore as long as KeePassXC's database is open. 35 | 36 | ## GPG Keys 37 | 38 | Prerequisite: having a pair of GPG keys generated already. 39 | 40 | KeePassXC doesn't support GPG out of the box, but there's a workaround: the libsecret integration. 41 | 42 | Unlock your database and create a new group/folder that will contain all your GPG keys. 43 | 44 | Head over to the settings section and click on *"Secret Service Integration"* and tick *"Enable KeePassXC Freedesktop.org Secret Service integration"*. 45 | 46 | Under the *"Exposed database groups"* you should see that an entry is there already but the *"Group"* field is set to *"none"*. Click on the edit button on the right of this row. A new settings dialog will open, head over to the *"Secret Service Integration"* again, and tick *"Expose entries under this group"*, then choose the previously created group. 47 | 48 | The Secret Service integration is now set up properly.
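If you want to double-check that KeePassXC is actually providing the Secret Service on your session bus, one way among others is to look for the `org.freedesktop.secrets` name (this is a generic check, not something specific to this setup):

```
$ busctl --user list | grep -i secrets
```

KeePassXC should show up as the owner of `org.freedesktop.secrets`. If another provider such as gnome-keyring owns the name instead, the integration won't work as described here.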
49 | 50 | We must now setup our keys so that they are automatically picked up by gpg. 51 | 52 | By default gpg will spawn a gpg-agent that will store your passwords for a given period of time so that you don't have to enter your passwords multiple times if you want to use the same key again. Thing is, we want KeePassXC to manage our GPG keys, not gpg-agent. If KeePassXC's database is locked, the gpg keys shouldn't stay in gpg-agent memory. 53 | 54 | To prevent gpg-agent from keeping our keys in memory, add the following to `~/.gnupg/gpg-agent.conf`: 55 | 56 | ``` 57 | default-cache-ttl 0 58 | max-cache-ttl 0 59 | ``` 60 | 61 | It will basically disable caching of GPG keys. 62 | 63 | When asked for a password, GPG will spawn a "pinentry" program. There are many of them and you can basically use any of them as long as they have libsecret integration. I personnally use `pinentry-curses` which doesn't spawn any GUI and asks for a password within the terminal. 64 | 65 | To make sure the pinentry program you use has support for libsecret, you can run the following command (replace `pinentry-curses` by the pinentry you're planning to use): 66 | 67 | ``` 68 | $ ldd "$(command -v pinentry-curses)" | grep libsecret 69 | ``` 70 | 71 | If you see something like this in the ouput, then libsecret should be supported: 72 | 73 | ``` 74 | libsecret-1.so.0 => /usr/lib/libsecret-1.so.0 (0x00006a1d0397b000) 75 | ``` 76 | 77 | To change the pinentry program of gpg-agent, you can add the following to `~/.gnupg/gpg-agent.conf`: 78 | 79 | ``` 80 | pinentry-program /usr/bin/pinentry-curses 81 | ``` 82 | 83 | At this point everything is ready, you must kill any already running gpg-agent instance so that your config modifications are applied: 84 | 85 | ``` 86 | $ killall gpg-agent 87 | ``` 88 | 89 | There's one last thing to be aware of. Currently the pinentry of gpg-agent has no clue about what password entry to use in our KeePassXC database. We will now see how to link a GPG key to an entry in our KeePassXC database. 90 | 91 | Run the following command: 92 | 93 | ``` 94 | $ gpg --list-secret-keys --with-keygrip 95 | ``` 96 | 97 | If you have generated only one GPG keypair, your output should be similar to this: 98 | 99 | ``` 100 | /home/shellcode/.gnupg/pubring.kbx 101 | ---------------------------------- 102 | sec rsa4096 2023-07-20 [SC] 103 | XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 104 | Keygrip = YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY 105 | uid [ultimate] YourName 106 | ssb rsa4096 2023-07-20 [E] 107 | Keygrip = ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ 108 | ``` 109 | 110 | We can see in this output that we have two keys, YYY and ZZZ. 111 | 112 | Note: a keygrip is basically a unique identifier of your keys. The pinentry programs will use this keygrip to query password to the libsecret provider (in our case KeePassXC). 113 | 114 | We must now head over to KeePassXC and create a new entry for our GPG key. This new entry must be in the previously created group. Add your GPG key password to this entry and name it whatever you like. 115 | 116 | Then click on the *"Advanced"* tab, there's a *"Additional attributes"* section. 117 | 118 | In this section create an entry called `xdg:schema` and set its value to `org.gnupg.Passphrase`. 119 | 120 | Then create a second entry called `keygrip` which must contain the keygrip of your key prefixed with `n/`. 121 | With the output above it should be: 122 | 123 | ``` 124 | n/YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY 125 | ``` 126 | 127 | And you're done ! 
The pinentry program won't ask for your password anymore, as long as your KeePassXC database is unlocked. 128 | 129 | You can test this by running: 130 | 131 | ``` 132 | $ killall gpg-agent 133 | $ echo test | gpg --armor --sign 134 | ``` 135 | 136 | I invite you to lock and unlock your KeePassXC database a few times to make sure the behavior is as expected. 137 | 138 | ## Automatically lock your database when you lock your computer 139 | 140 | If an attacker steals your PC while your KeePassXC database is unlocked, the passwords/keys remain in memory and could be extracted without having to unlock the computer (see [DMA attack](https://en.wikipedia.org/wiki/DMA_attack)). 141 | 142 | In this chapter we will harden our setup to automatically lock KeePassXC's database when we lock our PC so that secrets don't stay in memory. 143 | 144 | Head over to KeePassXC settings, under the *"Security"* tab, make sure *"Lock databases when session is locked or lid is closed"* is ticked. 145 | 146 | On some desktop environments it will work out of the box (check it), on others, it won't. In my experience, it mostly doesn't work out of the box. 147 | 148 | If it doesn't work for you, the following command must be run before the lock program is started: 149 | 150 | ``` 151 | dbus-send --print-reply --dest=org.keepassxc.KeePassXC.MainWindow /keepassxc org.keepassxc.KeePassXC.MainWindow.lockAllDatabases 152 | ``` 153 | 154 | Therefore you must identify how your session is being locked (which program, and who's responsible for starting this program). 155 | 156 | Personally, I use Sway as my tiling window manager and I added the following to my config file: 157 | 158 | ``` 159 | set $lock dbus-send --print-reply --dest=org.keepassxc.KeePassXC.MainWindow /keepassxc org.keepassxc.KeePassXC.MainWindow.lockAllDatabases; swaylock --daemonize --ignore-empty-password --image $wallpaper 160 | 161 | # Lock the computer 162 | bindsym $mod+Ctrl+l exec "$lock" 163 | 164 | # This will lock your screen after 300 seconds of inactivity, then turn off 165 | # your displays after another 300 seconds, and turn your screens back on when 166 | # resumed. It will also lock your screen before your computer goes to sleep. 167 | exec swayidle -w \ 168 | timeout 300 "$lock" \ 169 | timeout 600 'swaymsg "output * power off"' resume 'swaymsg "output * power on"' \ 170 | before-sleep "$lock" 171 | ``` 172 | 173 | And boom, the KeePassXC database is now locked whenever I lock my computer. 174 | -------------------------------------------------------------------------------- /docs/HOW_TO_SECURE_USB_DEVICE.md: -------------------------------------------------------------------------------- 1 | # Create a secure USB device 2 | 3 | Goals: 4 | 5 | - Be able to store data on the device (obviously...)
6 | - Data must be encrypted so that only you can read it 7 | - The USB device must be bootable to access encrypted data on any physical computer you have access to 8 | - If the USB device is lost and someone plugs it in its own computer, contact details must be readable 9 | - Using the device must be seamless, we already have a hardened setup, we don't want to bother having to type one more password 10 | 11 | First, identify the USB device using lsblk, example: 12 | 13 | ``` 14 | $ lsblk 15 | NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS 16 | sda 8:0 0 931.5G 0 disk 17 | ├─sda1 8:1 0 300M 0 part /efi 18 | └─sda2 8:2 0 931.2G 0 part 19 | └─cryptroot 254:0 0 931.2G 0 crypt / 20 | sdb 8:16 1 115.7G 0 disk 21 | └─sdb1 8:17 1 115.7G 0 part 22 | ``` 23 | 24 | Here we will use the device `/dev/sdb`. 25 | 26 | WARNING: think twice before running any command or you might lose data ! 27 | 28 | ## Prepare the device 29 | 30 | Write random data to the whole device (this will take some time, plug the device to a USB 3 port if you have one): 31 | 32 | ``` 33 | sudo dd bs=1M if=/dev/urandom of=/dev/sdb status=progress 34 | ``` 35 | 36 | Remove any FS magic bytes (in case dd randomly created valid ones) otherwise some tools will complain: 37 | 38 | ``` 39 | lsblk -plnx size -o name /dev/sdb | sudo xargs -n1 wipefs --all 40 | ``` 41 | 42 | ## Create partitions 43 | 44 | - Partition 1: FAT32 partition which contains a README.txt with contact details in case the USB key is lost 45 | - Partition 2: EFI partition with Microsoft signed bootloader (to access your data from any physical computer you have access to) 46 | - Partition 3: LUKS encrypted partition which contains a minimal Linux install to access your data from any computer 47 | - Partition 4: LUKS encrypted partition which will contain all your data 48 | 49 | Create the partition table: 50 | 51 | ``` 52 | sudo sgdisk --clear /dev/sdb --new 1::+64MiB --new 2::+128MiB --typecode 2:ef00 /dev/sdb --new 3::+10GiB --new 4::0 53 | ``` 54 | 55 | (You could allocate only 5GiB to the Linux partition, but I'd rather be safe than sorry) 56 | 57 | Name the partitions: 58 | 59 | ``` 60 | sudo sgdisk /dev/sdb --change-name=1:README --change-name=2:EFI --change-name=3:LINUX_ENCRYPTED --change-name=4:STORAGE_ENCRYPTED 61 | ``` 62 | 63 | Create the FAT32 partitions: 64 | 65 | ``` 66 | sudo mkfs.vfat -n "README" -F 32 /dev/sdb1 67 | sudo mkfs.vfat -n "EFI" -F 32 /dev/sdb2 68 | ``` 69 | 70 | Create the LUKS layouts (LUKS 2 is not properly supported by GRUB yet): 71 | 72 | ``` 73 | sudo cryptsetup luksFormat --type luks1 --label LINUX /dev/sdb3 74 | sudo cryptsetup luksFormat --type luks1 --label STORAGE /dev/sdb4 75 | ``` 76 | 77 | (Choose strong passphrases and add them to your password manager) 78 | 79 | Define some variables we will need along the way: 80 | 81 | ``` 82 | boot_uuid="$(lsblk -o uuid /dev/sdb2 | tail -1)" 83 | luks_root_uuid="$(sudo cryptsetup luksUUID /dev/sdb3)" 84 | luks_storage_uuid="$(sudo cryptsetup luksUUID /dev/sdb4)" 85 | ``` 86 | 87 | Open the newly created LUKS containers: 88 | 89 | ``` 90 | sudo cryptsetup luksOpen /dev/sdb3 "$luks_root_uuid" 91 | sudo cryptsetup luksOpen /dev/sdb4 "$luks_storage_uuid" 92 | ``` 93 | 94 | Use ext4 for the Linux partition: 95 | 96 | ``` 97 | sudo mkfs.ext4 -L LINUX "/dev/mapper/$luks_root_uuid" 98 | ``` 99 | 100 | If you plan to store incremental system backups of your main drive, you might want to format the storage partition using btrfs: 101 | 102 | ``` 103 | sudo mkfs.btrfs --label STORAGE 
"/dev/mapper/$luks_storage_uuid" 104 | sudo mount "/dev/mapper/$luks_storage_uuid" /mnt 105 | sudo btrfs subvolume create /mnt/@snapshots 106 | sudo umount /mnt 107 | ``` 108 | 109 | Otherwise use the good old ext4 filesystem: 110 | 111 | ``` 112 | sudo mkfs.ext4 -L STORAGE "/dev/mapper/$luks_storage_uuid" 113 | ``` 114 | 115 | ## Create the README.txt 116 | 117 | Mount the README partition: 118 | 119 | ``` 120 | sudo mount /dev/sdb1 /mnt 121 | ``` 122 | 123 | Write contact details to a `.txt` file so that Windows users can read it easily : 124 | 125 | ``` 126 | sudo nvim /mnt/README.txt 127 | ``` 128 | 129 | Then umount: 130 | 131 | ``` 132 | sudo umount /mnt 133 | ``` 134 | 135 | ## Configure Linux on the embedded USB device 136 | 137 | We want it to : 138 | 139 | - Be bootable on any secure boot enabled computer 140 | - Auto-mount the storage partition 141 | 142 | Debian has been chosen for two reasons: 143 | 144 | - Because it's very stable, if we have to boot into this USB device it probably means something went wrong at some point and we don't want to deal with a broken install 145 | - Because it supports secure boot out of the box, meaning we will be able to boot into it on any computer (as long as it allows us to boot into external USB) 146 | 147 | You might want to read [Installing Debian GNU/Linux from a Unix/Linux System](https://www.debian.org/releases/stable/amd64/apds03.en.html). Otherwise, just follow along. 148 | 149 | In order to install Debian on the USB device while using Arch, you must install the following package: 150 | 151 | ``` 152 | sudo pacman -S debootstrap 153 | ``` 154 | 155 | Mount the Linux partition previously created on the USB device: 156 | 157 | ``` 158 | sudo mount "/dev/mapper/$luks_root_uuid" /mnt 159 | ``` 160 | 161 | DANGER: be careful, your main drive is accessible from within the chroot as well !! 
162 | 163 | Then run debootstrap to install debian: 164 | 165 | ``` 166 | sudo debootstrap --arch amd64 --components main,contrib,non-free-firmware stable /mnt http://ftp.us.debian.org/debian 167 | ``` 168 | 169 | (Don't forget to use the `proxify` script if you use my setup) 170 | 171 | When it's done, mount additional resources: 172 | 173 | ``` 174 | sudo mount --mkdir /dev/sdb2 /mnt/boot/efi 175 | sudo mount -t proc proc /mnt/proc 176 | sudo mount -t sysfs sys /mnt/sys 177 | sudo mount -o bind /dev /mnt/dev 178 | sudo mount --rbind /sys/firmware/efi/efivars /mnt/sys/firmware/efi/efivars/ 179 | ``` 180 | 181 | Now is time to setup a few things: 182 | 183 | ``` 184 | echo backup-drive | sudo tee /mnt/etc/hostname 185 | echo '127.0.0.1 backup-drive' | sudo tee -a /mnt/etc/hosts 186 | echo | sudo tee /mnt/etc/motd 187 | ``` 188 | 189 | Add the security repos: 190 | 191 | ``` 192 | echo 'deb http://security.debian.org/ stable-security main contrib non-free-firmware' | sudo tee -a /mnt/etc/apt/sources.list 193 | sudo LANG=C.UTF-8 TERM=xterm-color chroot /mnt bash --login -c 'apt-get update && apt-get upgrade -y' 194 | ``` 195 | 196 | Install additional packages: 197 | 198 | ``` 199 | sudo LANG=C.UTF-8 TERM=xterm-color chroot /mnt bash --login -c 'apt-get install -y linux-image-amd64 firmware-linux firmware-iwlwifi zstd grub-efi cryptsetup cryptsetup-initramfs btrfs-progs fdisk gdisk sudo neovim network-manager xserver-xorg xinit lightdm xfce4 dbus-x11 thunar xfce4-terminal firefox-esr keepassxc network-manager-gnome' 200 | ``` 201 | 202 | Create a swapfile: 203 | 204 | ``` 205 | sudo fallocate -l 1G /mnt/swapfile 206 | sudo chmod 600 /mnt/swapfile 207 | sudo mkswap /mnt/swapfile 208 | ``` 209 | 210 | Create a first keyfile which will allow the booted OS to auto-mount the storage partition: 211 | 212 | ``` 213 | sudo dd bs=512 count=4 if=/dev/random of="/mnt/root/luks_${luks_storage_uuid}.keyfile" iflag=fullblock 214 | sudo chmod 400 "/mnt/root/luks_${luks_storage_uuid}.keyfile" 215 | ``` 216 | 217 | Create a second keyfile which will allow the initramfs to decrypt the root partition: 218 | 219 | ``` 220 | sudo dd bs=512 count=4 if=/dev/random of="/mnt/root/luks_${luks_root_uuid}.keyfile" iflag=fullblock 221 | sudo chmod 400 "/mnt/root/luks_${luks_root_uuid}.keyfile" 222 | ``` 223 | 224 | Now we must enroll the newly created keyfiles so that we can open the USB device with it: 225 | 226 | ``` 227 | sudo cryptsetup luksAddKey /dev/sdb3 "/mnt/root/luks_${luks_root_uuid}.keyfile" 228 | sudo cryptsetup luksAddKey /dev/sdb4 "/mnt/root/luks_${luks_storage_uuid}.keyfile" 229 | ``` 230 | 231 | And add the following to crypttab so that `cryptsetup-initramfs` knows which key to use to allow the initramfs to decrypt the root partition: 232 | 233 | ``` 234 | echo "$luks_root_uuid UUID=$luks_root_uuid /root/luks_${luks_root_uuid}.keyfile luks,discard" | sudo tee -a /mnt/etc/crypttab 235 | ``` 236 | 237 | Add the following to the cryptsetup-initramfs hook: 238 | 239 | ``` 240 | echo 'KEYFILE_PATTERN="/root/luks_*.keyfile"' | sudo tee -a /mnt/etc/cryptsetup-initramfs/conf-hook 241 | ``` 242 | 243 | Run the following command to open the LUKS storage partition automatically at boot: 244 | 245 | ``` 246 | echo "$luks_storage_uuid UUID=$luks_storage_uuid /root/luks_${luks_storage_uuid}.keyfile luks,discard" | sudo tee -a /mnt/etc/crypttab 247 | ``` 248 | 249 | Let's setup the fstab: 250 | 251 | ``` 252 | echo "/dev/mapper/$luks_root_uuid / ext4 defaults 0 1" | sudo tee /mnt/etc/fstab 253 | echo '/swapfile none 
swap sw 0 0' | sudo tee -a /mnt/etc/fstab 254 | echo "UUID=$boot_uuid /boot/efi vfat rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro 0 0" | sudo tee -a /mnt/etc/fstab 255 | ``` 256 | 257 | If your external USB device **filesystem is btrfs**, run the following command: 258 | 259 | ``` 260 | echo "/dev/mapper/$luks_storage_uuid /storage btrfs defaults,noatime,nodiratime,subvol=@snapshots,compress=zstd,space_cache=v2 0 2" | sudo tee -a /mnt/etc/fstab 261 | ``` 262 | 263 | If your external USB device **filesystem is ext4**, run the following command: 264 | 265 | ``` 266 | echo "/dev/mapper/$luks_storage_uuid /storage ext4 defaults,noatime,nodiratime 0 2" | sudo tee -a /mnt/etc/fstab 267 | ``` 268 | 269 | Now let's setup the bootloader and the initramfs: 270 | 271 | ``` 272 | echo 'GRUB_ENABLE_CRYPTODISK=y' | sudo tee -a /mnt/etc/default/grub 273 | echo "GRUB_CMDLINE_LINUX=\"cryptdevice=UUID=${luks_root_uuid}:${luks_root_uuid}\"" | sudo tee -a /mnt/etc/default/grub 274 | echo 'GRUB_DISTRIBUTOR="Backup-Drive"' | sudo tee -a /mnt/etc/default/grub 275 | echo 'UMASK=0077' | sudo tee -a /mnt/etc/initramfs-tools/initramfs.conf 276 | 277 | sudo chroot /mnt bash --login -c "update-initramfs -u -k all" 278 | sudo chroot /mnt bash --login -c "update-grub" 279 | sudo chroot /mnt bash --login -c "grub-install /dev/sdb" 280 | ``` 281 | 282 | Now is time to create a user: 283 | 284 | ``` 285 | username=YOUR_NAME 286 | sudo chroot /mnt bash --login -c "useradd -m $username -s /bin/bash" 287 | sudo chroot /mnt bash --login -c "passwd $username" 288 | sudo chroot /mnt bash --login -c "usermod -aG sudo $username" 289 | ``` 290 | 291 | Make sure the user will be able to use /storage: 292 | 293 | ``` 294 | sudo mkdir /mnt/storage 295 | sudo mount "/dev/mapper/$luks_storage_uuid" /mnt/storage 296 | sudo chown -R 1000:1000 /mnt/storage 297 | ``` 298 | 299 | Autologin your user into the graphical session: 300 | 301 | ``` 302 | sudo sed -i "s/#autologin-user=/autologin-user=$username/g" /mnt/etc/lightdm/lightdm.conf 303 | ``` 304 | 305 | Modify your bashrc in case you forget later where is the mount point: 306 | 307 | ``` 308 | echo 'echo "Storage partition is mounted at /storage ;)"' | sudo tee -a "/mnt/home/$username/.bashrc" 309 | ``` 310 | 311 | Enable some useful systemd services: 312 | 313 | ``` 314 | sudo chroot /mnt bash --login -c 'systemctl enable NetworkManager' 315 | ``` 316 | 317 | Congratz your USB device is now ready ! 318 | 319 | You can unmount everything like so: 320 | 321 | ``` 322 | sudo umount --recursive /mnt 323 | ``` 324 | 325 | And close LUKS containers: 326 | 327 | ``` 328 | sudo cryptsetup luksClose "/dev/mapper/$luks_root_uuid" 329 | sudo cryptsetup luksClose "/dev/mapper/$luks_storage_uuid" 330 | ``` 331 | 332 | And make sure everything is working properly ;) 333 | 334 | ## Auto-mount the external USB device on your PC 335 | 336 | In this chapter we are only interested in the storage area of the USB device. 337 | 338 | Make sure the **Linux partition** (not the storage one) of the USB drive is mounted: 339 | 340 | ``` 341 | sudo cryptsetup luksOpen /dev/sdb3 "$luks_root_uuid" 342 | sudo mount "/dev/mapper/$luks_root_uuid" /mnt 343 | ``` 344 | 345 | And copy the previously created keyfile to the root of your main computer: 346 | 347 | ``` 348 | sudo cp "/mnt/root/luks_${luks_storage_uuid}.keyfile" "/root/luks_${luks_storage_uuid}.keyfile" 349 | ``` 350 | 351 | Warning: the keyfile should be readable only by root !! 
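If in doubt, you can enforce and verify that explicitly (this assumes `$luks_storage_uuid` is still set from the previous steps):

```
sudo chown root:root "/root/luks_${luks_storage_uuid}.keyfile"
sudo chmod 400 "/root/luks_${luks_storage_uuid}.keyfile"
sudo ls -l "/root/luks_${luks_storage_uuid}.keyfile"
```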
352 | 353 | Create a folder where the USB drive will be mounted: 354 | 355 | ``` 356 | sudo mkdir -m 700 -p "/media/usb/my_device" 357 | ``` 358 | 359 | Run the following command to open the LUKS container automatically using the keyfile: 360 | 361 | ``` 362 | echo "$luks_storage_uuid UUID=$luks_storage_uuid /root/luks_${luks_storage_uuid}.keyfile luks,discard,nofail" | sudo tee -a /etc/crypttab 363 | ``` 364 | 365 | If your external USB device **filesystem is btrfs**, run the following command: 366 | 367 | ``` 368 | echo "/dev/mapper/$luks_storage_uuid /media/usb/my_device btrfs defaults,noatime,nodiratime,subvol=@snapshots,compress=zstd,space_cache=v2,nofail 0 2" | sudo tee -a /etc/fstab 369 | ``` 370 | 371 | If your external USB device **filesystem is ext4**, run the following command: 372 | 373 | ``` 374 | echo "/dev/mapper/$luks_storage_uuid /media/usb/my_device ext4 defaults,noatime,nodiratime,nofail 0 2" | sudo tee -a /etc/fstab 375 | ``` 376 | 377 | And boom, you're done, you can now unmount the Linux partition: 378 | 379 | ``` 380 | sudo umount /mnt 381 | ``` 382 | 383 | You can also try to reboot your computer to make sure the USB device is properly mounted at boot. 384 | -------------------------------------------------------------------------------- /docs/INSTALL.md: -------------------------------------------------------------------------------- 1 | # Installation 2 | 3 | In order to have a proper secure boot, you will have to install your own keys in the BIOS firmware. 4 | By default, almost all computers are shipped with Microsoft's keys. This is to ensure out of the box 5 | secure boot on Windows. Note that [Microsoft offers a service](https://learn.microsoft.com/en-us/windows-hardware/drivers/dashboard/file-signing-manage) 6 | that allows anyone to sign a UEFI firmware. So basically if you decide to use Microsoft's keys, 7 | anyone who manages to get its UEFI firmware signed will be able to bypass your secure boot. 8 | I don't want that. And I don't trust Microsoft. So I decided to enroll my own keys instead. 9 | 10 | ⚠ Replacing Microsoft's keys will probably break your Windows boot if you have one, this ArchLinux setup has not been tested with a Windows dual boot, use at your own risk. In any case, [backing up those keys](https://wiki.archlinux.org/title/Unified_Extensible_Firmware_Interface/Secure_Boot#Backing_up_current_variables) won't hurt (note that some BIOS allow you to reset keys to their factory default, meaning Microsoft's keys). 11 | 12 | - Set an admin password to restrict the access to your BIOS settings (add it to your password manager) 13 | - Remove all the cryptographic keys from your BIOS and enter the "setup mode" (some BIOS won't let you enter the setup mode if the SB is enabled) 14 | - Download and boot into the [ArchLinux ISO](https://archlinux.org/download/). 15 | 16 | Let's go! 17 | 18 | ```sh 19 | $ pacman -Sy git 20 | $ git clone https://github.com/ShellCode33/ArchLinux-Hardened 21 | $ cd ArchLinux-Hardened 22 | $ ./install.sh 23 | ``` 24 | 25 | If you get gpg/keyring related errors, do the following : 26 | 27 | ```sh 28 | $ killall gpg-agent 29 | $ rm -rf /etc/pacman.d/gnupg # might say resource is busy, it's ok 30 | $ pacman-key --init 31 | $ pacman-key --populate 32 | $ pacman -Sy archlinux-keyring 33 | ``` 34 | 35 | Once the installation is finished, reboot your computer and log into your freshly installed OS. 36 | 37 | You must now make sure the secure boot is setup properly. Many things could go wrong with it. 
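A quick way to see the current secure boot state at any time is `bootctl` (it ships with systemd, so it is already installed). Treat it as a convenience check only; the verification that really matters is the `sbkeysync` command below:

```
$ sudo bootctl status | grep "Secure Boot"
```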
38 | 39 | The install script automatically tries to insert secure boot keys into your BIOS. 40 | 41 | To make sure they are there, run: 42 | 43 | ``` 44 | $ sudo sbkeysync --verbose 45 | ``` 46 | 47 | You should see something like this: 48 | 49 | ``` 50 | Filesystem keystore: 51 | /etc/secureboot/keys/db/db.auth [3337 bytes] 52 | /etc/secureboot/keys/KEK/KEK.auth [3336 bytes] 53 | /etc/secureboot/keys/PK/PK.auth [3334 bytes] 54 | firmware keys: 55 | PK: 56 | /CN=SecureBoot PK 57 | KEK: 58 | /CN=SecureBoot KEK 59 | db: 60 | /CN=SecureBoot db 61 | 62 | [...] 63 | ``` 64 | 65 | If it works, great ! You can now go back to your BIOS and enable the secure boot again. 66 | 67 | But if like me your BIOS sucks (ASUS 👀), it might not have worked and the keys are not there. 68 | 69 | Before you try the following, put your BIOS into "setup mode" again. 70 | 71 | You can try to run the following command: 72 | 73 | ``` 74 | sudo chattr -i /sys/firmware/efi/efivars/{PK,KEK,db}* 75 | ``` 76 | 77 | And then: 78 | 79 | ``` 80 | sudo sbkeysync --verbose --pk 81 | ``` 82 | 83 | To see if it helps. According to some issues on GitHub it worked for some, but not for me unfortunately. 84 | 85 | The error I have and that you might have as well is the following: 86 | 87 | ``` 88 | Inserting key update /etc/secureboot/keys/KEK/KEK.auth into KEK 89 | Error writing key update: Invalid argument 90 | Error syncing keystore file /etc/secureboot/keys/KEK/KEK.auth 91 | ``` 92 | 93 | The `Invalid argument` part seems to indicate a firmware bug, but it's hard to know for sure. 94 | 95 | In such cases, your last chance is to enroll keys manually into the BIOS. 96 | 97 | Put the secure boot keys on your EFI partition like so: 98 | 99 | ``` 100 | sudo find /etc/secureboot -name '*.auth' -exec cp {} /efi \; 101 | ``` 102 | 103 | Go back to your BIOS and in the secure boot menu, enroll them and enable the secure boot. 104 | Chances are, it should work now. You can confirm with `sudo sbkeysync --verbose`. 105 | 106 | Don't forget to remove the keys from the EFI partition: 107 | 108 | ``` 109 | sudo shred -u /efi/db.auth /efi/KEK.auth /efi/PK.auth 110 | ``` 111 | -------------------------------------------------------------------------------- /docs/LIMITATIONS.md: -------------------------------------------------------------------------------- 1 | TODO 2 | 3 | 4 | TL;DR : Linux is not safe 5 | -------------------------------------------------------------------------------- /docs/NETWORKING.md: -------------------------------------------------------------------------------- 1 | # Networking 2 | 3 | The networking configuration of this ArchLinux setup follows the usual security principal which states that "what is not explicitly allowed is forbidden". 4 | 5 | ## Firewall Policies 6 | 7 | All the [firewall policies](https://github.com/ShellCode33/ArchLinux-Hardened/blob/master/rootfs/etc/nftables.conf) are set to `drop` by default, which means nothing can come in or out unless stated otherwise. 8 | 9 | While both the `input` and `forward` chains can easily be set to `drop` without breaking much, the `output` chain is another kettle of fish. 10 | 11 | I obviously want a usable setup, and being able to reach the internet is a requirement. 12 | But then how do I manage to conciliate security and usability in that matter ? 13 | By using a local proxy ! 
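To make the drop-by-default idea more concrete, here is a minimal illustrative ruleset in nftables syntax. It is **not** the actual `nftables.conf` from this repository (which does quite a bit more), just a sketch of its general shape, including the `allow-internet` group exception explained in the next section:

```
# Illustrative sketch only -- not the actual /etc/nftables.conf of this setup
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    iif "lo" accept
    ct state established,related accept
  }
  chain forward {
    type filter hook forward priority 0; policy drop;
  }
  chain output {
    type filter hook output priority 0; policy drop;
    oif "lo" accept
    ct state established,related accept
    # only sockets owned by the allow-internet group may initiate new connections
    meta skgid "allow-internet" accept
  }
}
```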
14 | 15 | ## The Local Proxy 16 | 17 | The idea is that by default nothing has access to the internet 18 | (thanks to the output `drop` policy), 19 | but any application routing its traffic through the local proxy will. 20 | 21 | This is achieved by matching the `skgid` nftables metadata. 22 | The `skgid` matches the `allow-internet` group, any program using that group 23 | will be allowed to reach the internet. 24 | The proxy is run using that group. 25 | 26 | You can read more on how to use the proxy in the [PROXY.md](PROXY.md) documentation. 27 | 28 | Some informed users reading this that might be thinking "But why? What's the point? 29 | It doesn't add anything security-wise". 30 | And you would be kind of right. A determined attacker would just have to look for 31 | running proxies or look for the `HTTPS_PROXY` environment variables to be able to 32 | bypass this measure. 33 | 34 | But let me try to convince you of the usefulness of this: 35 | 36 | 1. It would still prevent poorly written malware from exfiltrating data and reverse connecting to a C&C (script kiddies are all over the place). 37 | 2. It prevents "well-behaved" applications from reaching the internet without you noticing. Many applications fetch content from remote servers (especially Google Cloud and Amazon AWS). 38 | 3. It can help prevent supply chain attacks. Many times in the past it has happened that dependencies of legitimate programs have been backdoored. Blocking access to the internet might prevent those legitimate apps from self-updating and potentially pulling malicious code. First program that comes to my mind is neovim and its npm dependencies. 39 | 4. This local proxy allows you to monitor what's going on (i.e. which apps are using the network). It can be useful for statistics purposes or to look for something fishy. Being able to monitor what's going on on a system is also a big component of systems security. 40 | 41 | This is both defense in depth and a great privacy tool. (Privacy in the sense that it prevents apps from reaching the internet, your IP will obviously still be the same). 42 | 43 | ## Kernel hardening 44 | 45 | The `linux-hardened` kernel of ArchLinux has various defaults which harden the network configuration as well. 
46 | 47 | Here's a non exhaustive list: 48 | 49 | - Enable syn flood protection: `net.ipv4.tcp_syncookies` 50 | - Ignore source-routed packets: `net.ipv4.conf.all.accept_source_route` 51 | - Ignore source-routed packets: `net.ipv4.conf.default.accept_source_route` 52 | - Ignore ICMP redirects: `net.ipv4.conf.all.accept_redirects` 53 | - Ignore ICMP redirects: `net.ipv4.conf.default.accept_redirects` 54 | - Ignore ICMP redirects from non-GW hosts: `net.ipv4.conf.all.secure_redirects` 55 | - Ignore ICMP redirects from non-GW hosts: `net.ipv4.conf.default.secure_redirects` 56 | - Don't allow traffic between networks or act as a router: `net.ipv4.ip_forward` 57 | - Don't allow traffic between networks or act as a router: `net.ipv4.conf.all.send_redirects` 58 | - Don't allow traffic between networks or act as a router: `net.ipv4.conf.default.send_redirects` 59 | - Reverse path filtering - IP spoofing protection: `net.ipv4.conf.all.rp_filter` 60 | - Reverse path filtering - IP spoofing protection: `net.ipv4.conf.default.rp_filter` 61 | - Ignore ICMP broadcasts to avoid participating in Smurf attacks: `net.ipv4.icmp_echo_ignore_broadcasts` 62 | - Ignore bad ICMP errors: `net.ipv4.icmp_ignore_bogus_error_responses` 63 | - Log spoofed, source-routed, and redirect packets: `net.ipv4.conf.all.log_martians` 64 | - Log spoofed, source-routed, and redirect packets: `net.ipv4.conf.default.log_martians` 65 | 66 | You can query their value using the `sysctl` command, for example: 67 | 68 | ``` 69 | $ sysctl net.ipv4.tcp_syncookies 70 | net.ipv4.tcp_syncookies = 1 71 | ``` 72 | -------------------------------------------------------------------------------- /docs/PROXY.md: -------------------------------------------------------------------------------- 1 | # Local Forwarding Proxy 2 | 3 | As mentioned in [NETWORKING.md](NETWORKING.md), this setup uses a local forwarding 4 | proxy to manually choose which applications are allowed to reach the internet. 5 | 6 | This document shows you some examples on how to configure your 7 | applications to allow them to reach the internet through the HTTP proxy. 8 | 9 | Note that using an HTTP proxy doesn't mean you have to use HTTP as underlying protocol. 10 | This is a bit counter intuitive but what it means is that to initiate the connection, 11 | you have to use the CONNECT HTTP method. Then the proxy will make the connection on your 12 | behalf and forward the TCP stream to you. 13 | 14 | [Glider](https://github.com/nadoo/glider) is the forwarding proxy I choose. 15 | It is being run as a systemd service which is configured [here](https://github.com/ShellCode33/ArchLinux-Hardened/blob/master/rootfs/etc/systemd/system/local-forwarding-proxy.service). 16 | 17 | ## Temporary access from the CLI 18 | 19 | This section shows you how to give temporary internet access to any program you want. 20 | 21 | Mature enough programs will provide ways for you to route their traffic through a proxy. 22 | For example curl has a `-x` parameter. But it would be very cumbersome to look for a 23 | specific CLI flag for each application. This is why there's a "standard" way of doing this, 24 | by using the `HTTP_PROXY` and `HTTPS_PROXY` environment variables. 
25 | 26 | For example the commands that follow are equivalent: 27 | 28 | ``` 29 | $ curl -x http://127.0.0.1:8080 https://remote-server 30 | $ HTTPS_PROXY=http://127.0.0.1:8080 curl https://remote-server 31 | ``` 32 | 33 | To simplify things, a [wrapper script called proxify](https://github.com/ShellCode33/ArchLinux-Hardened/blob/master/rootfs/usr/local/bin/proxify) has been written. 34 | 35 | Anytime you want to reach the internet you just have to prepend it to the command you want to run, for example: 36 | 37 | ``` 38 | $ proxify curl ifconfig.me 39 | WW.XX.YY.ZZ 40 | ``` 41 | 42 | Otherwise you would get: 43 | 44 | ``` 45 | $ curl ifconfig.me 46 | curl: (7) Failed to connect to ifconfig.me port 80: Couldn't connect to server 47 | ``` 48 | 49 | If the program you want to give internet access to doesn't provide a way to be run 50 | through a proxy. You can use sudo to run it within the `allow-internet` group. 51 | This completely bypasses the proxy, therefore it should be used as a last resort if no other method is available: 52 | 53 | ``` 54 | $ sudo -g allow-internet curl ifconfig.me 55 | ``` 56 | 57 | ## Persistent access 58 | 59 | There are some applications that you always want to allow reaching the internet. 60 | The configuration is application specific, here are a few examples. 61 | 62 | ### Firefox 63 | 64 | You can setup the proxy from the GUI by going to the settings and searching for "proxy". 65 | 66 | Tick the "Manual proxy configuration" and use `127.0.0.1` for the host, and `8080` for the port. 67 | 68 | Don't forget to tick `Also use this proxy for HTTPS`. 69 | 70 | Alternatively, if you don't want to use the GUI, you can customize your Firefox profile 71 | by creating a custom `user.js` file. Personally I use [arkenfox/user.js](https://github.com/arkenfox/user.js) and [here's my 72 | config](https://github.com/ShellCode33/.dotfiles/blob/master/.mozilla/firefox/user-overrides.js). 73 | 74 | ### SSH 75 | 76 | To reach your SSH machines simply add the following to your `~/.ssh/config` : 77 | 78 | ``` 79 | Host * 80 | ProxyCommand=socat STDIO PROXY:127.0.0.1:%h:%p,proxyport=8080 81 | ``` 82 | 83 | The idea is to use socat to perform the HTTP CONNECT method for us. 84 | The TCP stream is then forwarded to `ssh`. 85 | See `man 5 ssh_config` to know more about `ProxyCommand`. 86 | 87 | ## The Docker daemon 88 | 89 | The Docker daemon must be able to reach the internet to pull images. 90 | 91 | This is already configured for you in [/etc/docker/daemon.json](https://github.com/ShellCode33/ArchLinux-Hardened/blob/master/rootfs/etc/docker/daemon.json) 92 | Note that this setting only applies to the daemon, not containers ! 93 | 94 | ## Systemd services 95 | 96 | You simply have to add `Group=allow-internet` to the `[Service]` section of the service configuration file. 97 | 98 | If you didn't create the service yourself and want to allow a third party service 99 | to reach the internet, you can create an override for it using `sudo systemctl edit service-name` and add the following content: 100 | 101 | ``` 102 | [Service] 103 | Group=allow-internet 104 | ``` 105 | -------------------------------------------------------------------------------- /docs/THREAT_MODEL.md: -------------------------------------------------------------------------------- 1 | # Threat Model 2 | 3 | Very good article: https://www.privacyguides.org/en/basics/threat-modeling/ 4 | 5 | > Balancing security, privacy, and usability is one of the first and most difficult tasks you'll face on your privacy journey. 
Everything is a trade-off: The more secure something is, the more restricting or inconvenient it generally is, etc. Often, people find that the problem with the tools they see recommended is that they're just too hard to start using! 6 | 7 | My setup aims to be single-user, therefore in my threat model I consider that if any malicious actor gets code execution on my machine it's over. Even if that code execution is unprivileged. Most of the "sensitive" stuff I have is under my home directory. This is what I want to protect, not some random configuration files in `/etc`. 8 | 9 | ![alt authorization](https://imgs.xkcd.com/comics/authorization.png) 10 | 11 | Image Credit: https://xkcd.com/1200/ 12 | 13 | ## What do I want to protect? 14 | 15 | - The integrity and confidentiality of my personal data (documents, passwords, pictures, etc.) 16 | - My privacy (e.g. eavesdropping, laptops usually have built-in microphones, cameras, etc.) 17 | 18 | ## Who do I want to protect it from? 19 | 20 | - Thieves that could steal my hardware 21 | - Corporations that make money on my back 22 | - Script kiddies, and even more tech-savvy hackers 23 | - State-sponsored hackers, whom I can only hope to slow down as much as possible; considering how much money and manpower they have, if I'm being targeted I do realize there's nothing I can do 24 | 25 | ## How likely is it that I will need to protect it? 26 | 27 | Not likely. 28 | 29 | I would say the most likely scenario is either a supply chain attack or 30 | a Firefox exploit delivered by a random website I visit. 31 | Hopefully the payload won't work on my setup considering I'm running a Linux machine; 32 | most malware campaigns target Windows/Mac machines because of their market share. 33 | 34 | ## How bad are the consequences if I fail? 35 | 36 | Not that bad, I'm not a public figure or anything, just a random person on the internet. 37 | 38 | Still, I don't want my passwords, SSH and GPG keys to leak. 39 | 40 | It's more a matter of principle than a real need to avoid any consequences. 41 | 42 | ## How much trouble am I willing to go through to try to prevent potential consequences? 43 | 44 | Enough to spend a good portion of my free time writing configuration files and documentation. 45 | But I still want a setup that I can easily use on a daily basis to perform administrative 46 | tasks, do some programming, and entertain myself on the internet. 47 | -------------------------------------------------------------------------------- /images/notifications.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ShellCode33/ArchLinux-Hardened/b7366a915d5baec8ffb993de4b4e5675ac27af11/images/notifications.png -------------------------------------------------------------------------------- /install.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # 4 | # ArchLinux Hardened installation script. 5 | # 6 | # Heavily inspired by https://github.com/maximbaz/dotfiles/blob/master/install.sh 7 | # 8 | 9 | set -euo pipefail 10 | cd "$(dirname "$0")" 11 | trap on_error ERR 12 | 13 | # Redirect outputs to files for easier debugging 14 | exec 1> >(tee "stdout.log") 15 | exec 2> >(tee "stderr.log" >&2) 16 | 17 | # Dialog 18 | BACKTITLE="ArchLinux Hardened Installation" 19 | 20 | on_error() { 21 | ret=$?
22 | echo "[$0] Error on line $LINENO: $BASH_COMMAND" 23 | exit $ret 24 | } 25 | 26 | get_input() { 27 | title="$1" 28 | description="$2" 29 | 30 | input=$(dialog --clear --stdout --backtitle "$BACKTITLE" --title "$title" --inputbox "$description" 0 0) 31 | echo "$input" 32 | } 33 | 34 | get_password() { 35 | title="$1" 36 | description="$2" 37 | 38 | init_pass=$(dialog --clear --stdout --backtitle "$BACKTITLE" --title "$title" --passwordbox "$description" 0 0) 39 | test -z "$init_pass" && echo >&2 "password cannot be empty" && exit 1 40 | 41 | test_pass=$(dialog --clear --stdout --backtitle "$BACKTITLE" --title "$title" --passwordbox "$description again" 0 0) 42 | if [[ "$init_pass" != "$test_pass" ]]; then 43 | echo "Passwords did not match" >&2 44 | exit 1 45 | fi 46 | echo "$init_pass" 47 | } 48 | 49 | get_choice() { 50 | title="$1" 51 | description="$2" 52 | shift 2 53 | options=("$@") 54 | dialog --clear --stdout --backtitle "$BACKTITLE" --title "$title" --menu "$description" 0 0 0 "${options[@]}" 55 | } 56 | 57 | if [ ! -d /sys/firmware/efi ]; then 58 | echo >&2 "legacy BIOS boot detected, this install script only works with UEFI." 59 | exit 1 60 | fi 61 | 62 | # Unmount previously mounted devices in case the install script is run multiple times 63 | swapoff -a || true 64 | umount -R /mnt 2>/dev/null || true 65 | cryptsetup luksClose archlinux 2>/dev/null || true 66 | 67 | # Basic settings 68 | timedatectl set-ntp true 69 | hwclock --systohc --utc 70 | 71 | # Keyring from ISO might be outdated, upgrading it just in case 72 | pacman -Sy --noconfirm --needed archlinux-keyring 73 | 74 | # Make sure some basic tools that will be used in this script are installed 75 | pacman -Sy --noconfirm --needed git reflector terminus-font dialog wget 76 | 77 | # Adjust the font size in case the screen is hard to read 78 | noyes=("Yes" "The font is too small" "No" "The font size is just fine") 79 | hidpi=$(get_choice "Font size" "Is your screen HiDPI?" "${noyes[@]}") || exit 1 80 | clear 81 | [[ "$hidpi" == "Yes" ]] && font="ter-132n" || font="ter-716n" 82 | setfont "$font" 83 | 84 | # Setup CPU/GPU target 85 | cpu_list=("Intel" "" "AMD" "") 86 | cpu_target=$(get_choice "Installation" "Select the targetted CPU vendor" "${cpu_list[@]}") || exit 1 87 | clear 88 | 89 | noyes=("Yes" "" "No" "") 90 | install_igpu_drivers=$(get_choice "Installation" "Does your CPU have integrated graphics ?" "${noyes[@]}") || exit 1 91 | clear 92 | 93 | gpu_list=("Nvidia" "" "AMD" "" "None" "I don't have any GPU") 94 | gpu_target=$(get_choice "Installation" "Select the targetted GPU vendor" "${gpu_list[@]}") || exit 1 95 | clear 96 | 97 | # Ask which device to install ArchLinux on 98 | devicelist=$(lsblk -dplnx size -o name,size | grep -Ev "boot|rpmb|loop" | tac | tr '\n' ' ') 99 | read -r -a devicelist <<<"$devicelist" 100 | device=$(get_choice "Installation" "Select installation disk" "${devicelist[@]}") || exit 1 101 | clear 102 | 103 | noyes=("Yes" "I want to remove everything on $device" "No" "GOD NO !! ABORT MISSION") 104 | lets_go=$(get_choice "Are you absolutely sure ?" 
"YOU ARE ABOUT TO ERASE EVERYTHING ON $device" "${noyes[@]}") || exit 1 105 | clear 106 | [[ "$lets_go" == "No" ]] && exit 1 107 | 108 | hostname=$(get_input "Hostname" "Enter hostname") || exit 1 109 | clear 110 | test -z "$hostname" && echo >&2 "hostname cannot be empty" && exit 1 111 | 112 | user=$(get_input "User" "Enter username") || exit 1 113 | clear 114 | test -z "$user" && echo >&2 "user cannot be empty" && exit 1 115 | 116 | user_password=$(get_password "User" "Enter password") || exit 1 117 | clear 118 | test -z "$user_password" && echo >&2 "user password cannot be empty" && exit 1 119 | 120 | luks_password=$(get_password "LUKS" "Enter password") || exit 1 121 | clear 122 | test -z "$luks_password" && echo >&2 "LUKS password cannot be empty" && exit 1 123 | 124 | echo "Setting up fastest mirrors..." 125 | reflector --country France,Germany --latest 30 --sort rate --save /etc/pacman.d/mirrorlist 126 | clear 127 | 128 | echo "Writing random bytes to $device, go grab some coffee it might take a while" 129 | dd bs=1M if=/dev/urandom of="$device" status=progress || true 130 | 131 | # Setting up partitions 132 | lsblk -plnx size -o name "${device}" | xargs -n1 wipefs --all 133 | sgdisk --clear "${device}" --new 1::-551MiB "${device}" --new 2::0 --typecode 2:ef00 "${device}" 134 | sgdisk --change-name=1:primary --change-name=2:ESP "${device}" 135 | 136 | # shellcheck disable=SC2086,SC2010 137 | { 138 | part_root="$(ls ${device}* | grep -E "^${device}p?1$")" 139 | part_boot="$(ls ${device}* | grep -E "^${device}p?2$")" 140 | } 141 | 142 | mkfs.vfat -n "EFI" -F 32 "${part_boot}" 143 | echo -n "$luks_password" | cryptsetup luksFormat --label archlinux "${part_root}" 144 | echo -n "$luks_password" | cryptsetup luksOpen "${part_root}" archlinux 145 | mkfs.btrfs --label archlinux /dev/mapper/archlinux 146 | 147 | # Create btrfs subvolumes 148 | mount /dev/mapper/archlinux /mnt 149 | btrfs subvolume create /mnt/@ 150 | btrfs subvolume create /mnt/@home 151 | btrfs subvolume create /mnt/@swap 152 | btrfs subvolume create /mnt/@snapshots 153 | btrfs subvolume create /mnt/@home-snapshots 154 | btrfs subvolume create /mnt/@libvirt 155 | btrfs subvolume create /mnt/@docker 156 | btrfs subvolume create /mnt/@cache-pacman-pkgs 157 | btrfs subvolume create /mnt/@var 158 | btrfs subvolume create /mnt/@var-log 159 | btrfs subvolume create /mnt/@var-tmp 160 | umount /mnt 161 | 162 | # 163 | # Great btrfs documentations: 164 | # https://en.opensuse.org/SDB:BTRFS 165 | # https://wiki.debian.org/Btrfs 166 | # https://wiki.archlinux.org/title/btrfs 167 | # https://github.com/archlinux/archinstall/issues/781 168 | # 169 | 170 | mount_opt="defaults,noatime,nodiratime,compress=zstd,space_cache=v2" 171 | mount -o subvol=@,$mount_opt /dev/mapper/archlinux /mnt 172 | mount --mkdir -o umask=0077 "${part_boot}" /mnt/efi 173 | mount --mkdir -o subvol=@home,$mount_opt /dev/mapper/archlinux /mnt/home 174 | mount --mkdir -o subvol=@swap,$mount_opt /dev/mapper/archlinux /mnt/.swap 175 | mount --mkdir -o subvol=@snapshots,$mount_opt /dev/mapper/archlinux /mnt/.snapshots 176 | mount --mkdir -o subvol=@home-snapshots,$mount_opt /dev/mapper/archlinux /mnt/home/.snapshots 177 | 178 | # Copy-on-Write is no good for big files that are written multiple times. 179 | # This includes: logs, containers, virtual machines, databases, etc. 180 | # They usually lie in /var, therefore CoW will be disabled for everything in /var 181 | # Note that currently btrfs does not support the nodatacow mount option. 
182 | mount --mkdir -o subvol=@var,$mount_opt /dev/mapper/archlinux /mnt/var 183 | chattr +C /mnt/var # Disable Copy-on-Write for /var 184 | mount --mkdir -o subvol=@var-log,$mount_opt /dev/mapper/archlinux /mnt/var/log 185 | mount --mkdir -o subvol=@libvirt,$mount_opt /dev/mapper/archlinux /mnt/var/lib/libvirt 186 | mount --mkdir -o subvol=@docker,$mount_opt /dev/mapper/archlinux /mnt/var/lib/docker # I feel like using the btrfs storage driver of Docker is not safe yet 187 | 188 | # Not worth snapshotting, creating subvolumes for them so that they're not included 189 | # Be careful not to break a potential future rollback of /@ !! 190 | mount --mkdir -o subvol=@cache-pacman-pkgs,$mount_opt /dev/mapper/archlinux /mnt/var/cache/pacman/pkg 191 | mount --mkdir -o subvol=@var-tmp,$mount_opt /dev/mapper/archlinux /mnt/var/tmp 192 | 193 | # Create swapfile for btrfs main system 194 | btrfs filesystem mkswapfile /mnt/.swap/swapfile 195 | mkswap /mnt/.swap/swapfile # according to btrfs doc it shouldn't be needed, it's a bug 196 | swapon /mnt/.swap/swapfile # we use the swap so that genfstab detects it 197 | 198 | # Install all packages listed in packages/regular 199 | grep -o '^[^ *#]*' packages/regular >regular_packages_to_install 200 | 201 | if [[ "$gpu_target" != "None" || "$install_igpu_drivers" = "Yes" ]]; then 202 | { 203 | # Open-source OpenGL drivers 204 | echo mesa 205 | 206 | # Vulkan Installable Client Driver 207 | echo vulkan-icd-loader 208 | } >>regular_packages_to_install 209 | fi 210 | 211 | if [[ "$cpu_target" == "Intel" ]]; then 212 | echo intel-ucode >>regular_packages_to_install 213 | 214 | if [[ "$install_igpu_drivers" == "Yes" ]]; then 215 | { 216 | # HD Graphics series starting from Broadwell (2014) and newer 217 | echo intel-media-driver 218 | 219 | # GMA 4500 (2008) up to Coffee Lake (2017) 220 | echo libva-intel-driver 221 | 222 | # Open-source Vulkan driver for Intel GPUs 223 | echo vulkan-intel 224 | } >>regular_packages_to_install 225 | fi 226 | elif [[ "$cpu_target" == "AMD" ]]; then 227 | echo amd-ucode >>regular_packages_to_install 228 | else 229 | echo "Unsupported CPU" 230 | exit 1 231 | fi 232 | 233 | if [[ "$gpu_target" != "None" ]]; then 234 | { 235 | 236 | # GeForce 8 series and newer GPUs up until GeForce GTX 750 237 | # VA-API on Radeon HD 2000 and newer GPUs 238 | echo libva-mesa-driver 239 | 240 | if [[ "$gpu_target" = "Nvidia" ]]; then 241 | echo vulkan-nouveau 242 | elif [[ "$gpu_target" = "AMD" ]]; then 243 | echo vulkan-radeon 244 | fi 245 | } >>regular_packages_to_install 246 | fi 247 | 248 | pacstrap -K /mnt - /mnt/etc/kernel/cmdline 292 | 293 | echo "FONT=$font" >/mnt/etc/vconsole.conf 294 | echo "KEYMAP=fr-latin1" >>/mnt/etc/vconsole.conf 295 | 296 | echo "${hostname}" >/mnt/etc/hostname 297 | echo "en_US.UTF-8 UTF-8" >>/mnt/etc/locale.gen 298 | echo "fr_FR.UTF-8 UTF-8" >>/mnt/etc/locale.gen 299 | ln -sf /usr/share/zoneinfo/Europe/Paris /mnt/etc/localtime 300 | arch-chroot /mnt locale-gen 301 | 302 | genfstab -U /mnt >>/mnt/etc/fstab 303 | 304 | # For a smoother transition between Plymouth and Sway 305 | touch /mnt/etc/hushlogins 306 | sed -i 's/HUSHLOGIN_FILE.*/#\0/g' /etc/login.defs 307 | 308 | # Creating user 309 | arch-chroot /mnt useradd -m -s /bin/sh "$user" # keep a real POSIX shell as default, not zsh, that will come later 310 | for group in wheel audit libvirt firejail; do 311 | arch-chroot /mnt groupadd -rf "$group" 312 | arch-chroot /mnt gpasswd -a "$user" "$group" 313 | done 314 | echo "$user:$user_password" | arch-chroot /mnt chpasswd 
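# Sketch of a sanity check (assumption, not in the original script): the group
# memberships applied above can be verified from the live ISO with e.g.:
#   arch-chroot /mnt id "$user"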
315 | 316 | # Create a group that will be able to reach the internet (see docs/PROXY.md) 317 | arch-chroot /mnt groupadd -rf allow-internet 318 | 319 | # Temporarily give sudo NOPASSWD rights to user for yay 320 | echo "$user ALL=(ALL) NOPASSWD:ALL" >>"/mnt/etc/sudoers" 321 | 322 | # Temporarily disable pacman wrapper so that no warning is issued 323 | mv /mnt/usr/local/bin/pacman /mnt/usr/local/bin/pacman.disable 324 | 325 | # Install AUR helper 326 | arch-chroot -u "$user" /mnt /bin/bash -c 'mkdir /tmp/yay.$$ && \ 327 | cd /tmp/yay.$$ && \ 328 | curl "https://aur.archlinux.org/cgit/aur.git/plain/PKGBUILD?h=yay-bin" -o PKGBUILD && \ 329 | makepkg -si --noconfirm' 330 | 331 | # Install AUR packages 332 | grep -o '^[^ *#]*' packages/aur >aur_packages_to_install 333 | 334 | if [[ "$gpu_target" = "Nvidia" ]]; then 335 | echo nouveau-fw >>aur_packages_to_install 336 | fi 337 | 338 | HOME="/home/$user" arch-chroot -u "$user" /mnt /usr/bin/yay --noconfirm -Sy - /mnt/etc/mkinitcpio.conf 365 | MODULES=($modules) 366 | BINARIES=(setfont) 367 | FILES=() 368 | HOOKS=(base consolefont keymap udev autodetect modconf block plymouth encrypt filesystems keyboard) 369 | EOF 370 | 371 | # This must be done after plymouth is installed from the AUR 372 | arch-chroot /mnt mkinitcpio -p linux-hardened 373 | 374 | # Generate UEFI keys, sign kernels, enroll keys, etc. 375 | echo 'KERNEL=linux-hardened' >/mnt/etc/arch-secure-boot/config 376 | arch-chroot /mnt arch-secure-boot initial-setup 377 | 378 | # Hardening 379 | arch-chroot /mnt chmod 700 /boot 380 | arch-chroot /mnt passwd -dl root 381 | 382 | # Set up firejail 383 | arch-chroot /mnt /usr/bin/firecfg 384 | echo "$user" >/mnt/etc/firejail/firejail.users 385 | 386 | # Set up DNS 387 | rm -f /mnt/etc/resolv.conf 388 | arch-chroot /mnt ln -s /usr/lib/systemd/resolv.conf /etc/resolv.conf 389 | 390 | # Configure systemd services 391 | arch-chroot /mnt systemctl enable systemd-networkd 392 | arch-chroot /mnt systemctl enable systemd-resolved 393 | arch-chroot /mnt systemctl enable systemd-timesyncd 394 | arch-chroot /mnt systemctl enable getty@tty1 395 | arch-chroot /mnt systemctl enable dbus-broker 396 | arch-chroot /mnt systemctl enable iwd 397 | arch-chroot /mnt systemctl enable auditd 398 | arch-chroot /mnt systemctl enable nftables 399 | arch-chroot /mnt systemctl enable docker 400 | arch-chroot /mnt systemctl enable libvirtd 401 | arch-chroot /mnt systemctl enable check-secure-boot 402 | arch-chroot /mnt systemctl enable apparmor 403 | arch-chroot /mnt systemctl enable auditd-notify 404 | arch-chroot /mnt systemctl enable local-forwarding-proxy 405 | 406 | # Configure systemd timers 407 | arch-chroot /mnt systemctl enable snapper-timeline.timer 408 | arch-chroot /mnt systemctl enable snapper-cleanup.timer 409 | arch-chroot /mnt systemctl enable auditor.timer 410 | arch-chroot /mnt systemctl enable btrfs-scrub@-.timer 411 | arch-chroot /mnt systemctl enable btrfs-balance.timer 412 | arch-chroot /mnt systemctl enable pacman-sync.timer 413 | arch-chroot /mnt systemctl enable pacman-notify.timer 414 | arch-chroot /mnt systemctl enable should-reboot-check.timer 415 | 416 | # Configure systemd user services 417 | arch-chroot /mnt systemctl --global enable dbus-broker 418 | arch-chroot /mnt systemctl --global enable journalctl-notify 419 | arch-chroot /mnt systemctl --global enable pipewire 420 | arch-chroot /mnt systemctl --global enable wireplumber 421 | arch-chroot /mnt systemctl --global enable gammastep 422 | 423 | # Run userspace configuration 424 |
HOME="/home/$user" arch-chroot -u "$user" /mnt /bin/bash -c 'cd && \ 425 | git clone https://github.com/ShellCode33/.dotfiles && \ 426 | .dotfiles/install.sh' 427 | -------------------------------------------------------------------------------- /packages/aur: -------------------------------------------------------------------------------- 1 | # TRY TO AVOID AUR PACKAGES AS MUCH AS POSSIBLE 2 | 3 | # Secure boot utils 4 | arch-secure-boot 5 | 6 | # You can choose your own theme from there: https://github.com/adi1090x/plymouth-themes 7 | plymouth 8 | plymouth-theme-colorful-loop-git 9 | -------------------------------------------------------------------------------- /packages/regular: -------------------------------------------------------------------------------- 1 | # Basic system tools 2 | base 3 | base-devel 4 | 5 | # Kernel related 6 | linux-hardened 7 | linux-hardened-headers 8 | linux-lts 9 | linux-firmware 10 | mkinitcpio 11 | 12 | # Filesystem tools 13 | btrfs-progs 14 | gparted 15 | gptfdisk 16 | dosfstools # create FAT partitions 17 | 18 | # Secure boot 19 | efibootmgr 20 | efitools 21 | sbsigntools 22 | edk2-shell # useful EFI shell in case things go wrong 23 | 24 | # Networking 25 | iwd 26 | iptables-nft # includes nftables as a dependency, will automatically uninstall iptables 27 | firefox 28 | wget 29 | dnsmasq # used by libvirtd 30 | glider # HTTP proxy used for the network opt-in security measure 31 | socat # used to relay SSH through the HTTP proxy 32 | 33 | # Security 34 | apparmor 35 | firejail 36 | xdg-dbus-proxy # Used by firejail to enforce DBUS policies 37 | polkit 38 | keepassxc 39 | 40 | # Backup 41 | restic 42 | syncthing 43 | 44 | # Virtualization/Containerization 45 | docker 46 | docker-compose 47 | qemu-full 48 | virt-manager 49 | 50 | # Monitoring 51 | audit # Log system failures and important security events 52 | fwupd # Hardware firmware updates manager 53 | dunst # Notification daemon 54 | htop 55 | iotop 56 | lsof 57 | nvtop # htop equivalent for GPUs 58 | 59 | # Desktop environment 60 | sway 61 | swaylock 62 | swayidle 63 | swaybg 64 | kanshi # Monitors management 65 | xdg-desktop-portal-wlr # Enable screen sharing capabilites 66 | xdg-desktop-portal-gtk # File chooser and others 67 | flameshot # Take screenshots 68 | grim # flameshot dependency for Wayland 69 | wezterm # Terminal emulator 70 | wl-clipboard 71 | bemenu-wayland 72 | python-i3ipc # Interact with Sway using Python 73 | python-psutil # status-bar requirement 74 | libnotify # notify-send 75 | qt5-wayland 76 | qt6-wayland 77 | slurp # Select monitor region 78 | brightnessctl # Control monitor brightness 79 | gammastep # decrease blue light during the night 80 | playerctl # control various media players 81 | 82 | # Basic services 83 | pipewire 84 | pipewire-audio 85 | pipewire-alsa 86 | pipewire-jack 87 | pipewire-pulse 88 | wireplumber 89 | openssh 90 | dbus-broker # Improved DBUS implementation (performance and security) 91 | 92 | # Fonts 93 | ttf-fantasque-nerd 94 | noto-fonts-emoji 95 | terminus-font # TTY font 96 | 97 | # Themes 98 | arc-gtk-theme 99 | papirus-icon-theme 100 | 101 | # Dev 102 | neovim 103 | git 104 | lazygit 105 | rustup 106 | rust-analyzer 107 | clang 108 | ipython 109 | python-pip 110 | man-db 111 | man-pages 112 | npm # for nvim plugins 113 | 114 | # Debugging 115 | gdb 116 | pwndbg 117 | strace 118 | 119 | # File viewers/editors 120 | zathura 121 | zathura-pdf-poppler 122 | swayimg 123 | mpv 124 | gimp 125 | imagemagick 126 | tree 127 | fzf 128 | 129 | # CLI 130 | zsh 
131 | dash 132 | dialog 133 | ripgrep 134 | bat 135 | 136 | # Other 137 | most 138 | p7zip 139 | unzip 140 | debootstrap # see docs/HOW_TO_SECURE_USB_DEVICE.md 141 | signal-desktop 142 | usbutils 143 | wtype 144 | dmidecode # used by libvirtd 145 | -------------------------------------------------------------------------------- /rootfs/etc/audit/auditd.conf: -------------------------------------------------------------------------------- 1 | # 2 | # This file controls the configuration of the audit daemon 3 | # 4 | # See `man auditd.conf` 5 | # 6 | # For my fellow french nerds, there is an excellent article 7 | # available there that will help you get started with auditd: 8 | # https://connect.ed-diamond.com/GNU-Linux-Magazine/glmfhs-093/journalisez-les-actions-de-vos-utilisateurs-avec-auditd 9 | 10 | local_events = yes 11 | log_file = /var/log/audit/audit.log 12 | write_logs = yes 13 | log_format = ENRICHED 14 | log_group = audit 15 | priority_boost = 4 16 | flush = INCREMENTAL_ASYNC 17 | freq = 50 18 | num_logs = 3 19 | name_format = NONE 20 | ##name = mydomain 21 | max_log_file = 10 22 | max_log_file_action = ROTATE 23 | verify_email = yes 24 | action_mail_acct = root 25 | space_left = 500 26 | space_left_action = exec /usr/local/bin/notify-low-on-disk-space 27 | admin_space_left = 100 28 | admin_space_left_action = exec /usr/local/bin/try-to-fix-low-on-disk-space 29 | disk_full_action = exec /usr/local/bin/try-to-fix-low-on-disk-space 30 | disk_error_action = exec /usr/local/bin/notify-disk-error 31 | ##tcp_listen_port = 60 32 | tcp_listen_queue = 5 33 | tcp_max_per_addr = 1 34 | use_libwrap = yes 35 | ##tcp_client_ports = 1024-65535 36 | tcp_client_max_idle = 0 37 | transport = TCP 38 | enable_krb5 = no 39 | krb5_principal = auditd 40 | ##krb5_key_file = /etc/audit/audit.key 41 | distribute_network = no 42 | q_depth = 2000 43 | overflow_action = SYSLOG 44 | max_restarts = 10 45 | plugin_dir = /etc/audit/plugins.d 46 | end_of_event_timeout = 2 47 | -------------------------------------------------------------------------------- /rootfs/etc/audit/rules.d/00-reset.rules: -------------------------------------------------------------------------------- 1 | # Remove previously created rules 2 | -D 3 | -------------------------------------------------------------------------------- /rootfs/etc/audit/rules.d/01-exclude.rules: -------------------------------------------------------------------------------- 1 | # Filter out messages we don't use to prevent spam 2 | -a exclude,always -F msgtype=BPF 3 | -a exclude,always -F msgtype=USER_START 4 | -a exclude,always -F msgtype=USER_END 5 | -a exclude,always -F msgtype=CWD 6 | -a exclude,always -F msgtype=PATH 7 | -a exclude,always -F msgtype=USER_ACCT 8 | -a exclude,always -F msgtype=CRED_REFR 9 | -a exclude,always -F msgtype=CRED_DISP 10 | -a exclude,always -F msgtype=CRED_ACQ 11 | -a exclude,always -F msgtype=NETFILTER_CFG 12 | -a exclude,always -F msgtype=EXECVE 13 | 14 | # Monitoring of systemd services is done through journalctl 15 | -a exclude,always -F msgtype=SERVICE_START 16 | -a exclude,always -F msgtype=SERVICE_STOP 17 | -------------------------------------------------------------------------------- /rootfs/etc/audit/rules.d/10-files.rules: -------------------------------------------------------------------------------- 1 | # 2 | # Tips: 3 | # 4 | # - Use the 'ausyscall' program to search syscalls 5 | # - Use '-F auid!=unset' if you don't want to log actions that are not made by a logged in user. 
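# - Illustrative example (not from the original rules): events matched by one of
#   the keys defined below can be reviewed afterwards with e.g.:
#     ausearch -k power_abuse -i
#   or summarized per key with:
#     aureport --key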
6 | # 7 | 8 | # Monitor writes to /etc/shadow 9 | -w /etc/shadow -p wa -F auid!=unset -k etc_shadow 10 | 11 | # System wide LD_PRELOAD (typical rootkit) 12 | -w /etc/ld.so.preload -p wa -F auid!=unset -k etc_ld_so_preload 13 | 14 | # Potential bootkit 15 | -w /usr/bin/arch-secure-boot -p wa -F auid!=unset -k secure_boot_manager 16 | -w /etc/arch-secure-boot/keys -p rwa -F auid!=unset -k secure_boot_keys 17 | -w /etc/secureboot/keys -p rwa -F auid!=unset -k secure_boot_keys 18 | -w /efi -p wa -F auid!=unset -k efi_boot 19 | 20 | # Monitor auditd itself, someone might try to alter its configuration to increase stealthiness 21 | -w /var/log/audit/ -p wa -F auid!=unset -k auditd_tampering 22 | -w /etc/audit/ -p wa -F auid!=unset -k auditd_tampering 23 | -w /etc/libaudit.conf -p wa -F auid!=unset -k auditd_tampering 24 | -w /usr/bin/auditctl -p x -F auid!=unset -k auditd_tampering 25 | -w /usr/bin/auditd -p x -F auid!=unset -k auditd_tampering 26 | -w /usr/bin/augenrules -p x -F auid!=unset -k auditd_tampering 27 | 28 | # Detect privileged process accessing home directories (separate arch mandatory otherwise the kernel is confused) 29 | -a always,exit -F dir=/home/ -F arch=b32 -S open,openat,openat2,getdents,getdents64,mkdir,mkdirat,rmdir,unlink,unlinkat -F uid=0 -F auid>=1000 -F auid!=unset -C auid!=obj_uid -k power_abuse 30 | -a always,exit -F dir=/home/ -F arch=b64 -S open,openat,openat2,getdents,getdents64,mkdir,mkdirat,rmdir,unlink,unlinkat -F uid=0 -F auid>=1000 -F auid!=unset -C auid!=obj_uid -k power_abuse 31 | -------------------------------------------------------------------------------- /rootfs/etc/audit/rules.d/99-immutable.rules: -------------------------------------------------------------------------------- 1 | # Make auditd rules immutable, cannot change them at runtime 2 | -e 2 3 | -------------------------------------------------------------------------------- /rootfs/etc/bash.bashrc: -------------------------------------------------------------------------------- 1 | # If not running interactively, don't do anything 2 | [[ $- != *i* ]] && return 3 | 4 | [[ "$WAYLAND_DISPLAY" ]] && shopt -s checkwinsize 5 | 6 | export HISTFILE="/home/$USER/.cache/bash_history" 7 | export PS1='\[\e[1;92m\][\u@\h \w]\[\e[0m\] \$ ' 8 | 9 | [ -r /usr/share/bash-completion/bash_completion ] && . /usr/share/bash-completion/bash_completion 10 | -------------------------------------------------------------------------------- /rootfs/etc/conf.d/snapper: -------------------------------------------------------------------------------- 1 | ## Path: System/Snapper 2 | 3 | ## Type: string 4 | ## Default: "" 5 | # List of snapper configurations. 
6 | SNAPPER_CONFIGS="root home" 7 | 8 | -------------------------------------------------------------------------------- /rootfs/etc/cve-ignore.list: -------------------------------------------------------------------------------- 1 | CVE-2020-23922 # Crash in CLI tool, no security impact 2 | CVE-2021-3468 # The highest threat from this vulnerability is to the availability of the avahi service, which becomes unresponsive after this flaw is triggered 3 | -------------------------------------------------------------------------------- /rootfs/etc/docker/daemon.json: -------------------------------------------------------------------------------- 1 | { 2 | "proxies": { 3 | "http-proxy": "http://127.0.0.1:8080", 4 | "https-proxy": "http://127.0.0.1:8080", 5 | "no-proxy": "localhost,127.0.0.0/8" 6 | } 7 | } 8 | -------------------------------------------------------------------------------- /rootfs/etc/firejail/firecfg.d/disabled.conf: -------------------------------------------------------------------------------- 1 | !man 2 | !dnsmasq 3 | -------------------------------------------------------------------------------- /rootfs/etc/firejail/firefox-common.local: -------------------------------------------------------------------------------- 1 | # Enable native notifications. 2 | dbus-user.talk org.freedesktop.Notifications 3 | # Allow inhibiting screensavers. 4 | dbus-user.talk org.freedesktop.ScreenSaver 5 | # Allow screensharing under Wayland. 6 | dbus-user.talk org.freedesktop.portal.Desktop 7 | 8 | -------------------------------------------------------------------------------- /rootfs/etc/firejail/firejail.config: -------------------------------------------------------------------------------- 1 | # Disable /mnt, /media, /run/mount and /run/media access. By default access 2 | # to these directories is enabled. Unlike --disable-mnt profile option this 3 | # cannot be overridden by --noblacklist or --ignore. 4 | disable-mnt yes 5 | 6 | # If seccomp subsystem in Linux kernel kills a program, a message is posted to syslog. 7 | # Starting with Linux kernel version 4.14, it is possible to send seccomp violation messages 8 | # even if the program is allowed to continue (see "seccomp-error-action EPERM" above). 9 | # This logging feature is disabled by default in our implementation. 
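# Illustrative note (assumption): with seccomp-log enabled, violations end up in
# the audit/syslog stream and can typically be reviewed with e.g.:
#   ausearch -m SECCOMP -i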
10 | seccomp-log yes 11 | -------------------------------------------------------------------------------- /rootfs/etc/firejail/globals.local: -------------------------------------------------------------------------------- 1 | nonewprivs 2 | -------------------------------------------------------------------------------- /rootfs/etc/firejail/keepassxc.local: -------------------------------------------------------------------------------- 1 | # For the SSH integration 2 | noblacklist ${HOME}/.ssh 3 | ignore private-tmp 4 | noblacklist /tmp/ssh-* 5 | 6 | # libsecret integration 7 | dbus-user.own org.freedesktop.secrets 8 | -------------------------------------------------------------------------------- /rootfs/etc/firejail/signal-desktop.local: -------------------------------------------------------------------------------- 1 | ignore nonewprivs 2 | -------------------------------------------------------------------------------- /rootfs/etc/iwd/main.conf: -------------------------------------------------------------------------------- 1 | [Network] 2 | NameResolvingService=systemd 3 | -------------------------------------------------------------------------------- /rootfs/etc/libvirt/hooks/qemu: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 2 | 3 | import contextlib 4 | import logging 5 | import os 6 | import sys 7 | from subprocess import CalledProcessError, run 8 | from time import sleep 9 | from xml.etree import ElementTree as xml 10 | 11 | LOG_FILE = "/var/log/libvirt/hook.log" 12 | 13 | logging.basicConfig( 14 | filename=LOG_FILE, 15 | encoding="utf-8", 16 | level=logging.DEBUG, 17 | format="[%(asctime)s][%(levelname)s] %(message)s", 18 | datefmt="%m/%d/%Y %H:%M:%S", 19 | ) 20 | 21 | 22 | def start_display_manager() -> None: 23 | try: 24 | logging.debug("Logging in user from TTY1...") 25 | run( 26 | ["/usr/bin/systemctl", "start", "getty@tty1.service"], 27 | capture_output=True, 28 | check=True, 29 | ) 30 | logging.info( 31 | "User was logged in successfully, " 32 | "graphical applications can now run again", 33 | ) 34 | except CalledProcessError as cpe: 35 | logging.error("Command failure: %s", cpe.cmd) 36 | logging.error("Command output: %s", cpe.stderr.decode()) 37 | raise 38 | 39 | 40 | def stop_display_manager() -> None: 41 | try: 42 | logging.debug("Logging out user from TTY1...") 43 | run( 44 | ["/usr/bin/systemctl", "stop", "getty@tty1.service"], 45 | capture_output=True, 46 | check=True, 47 | ) 48 | logging.info( 49 | "User was logged out successfully, " 50 | "graphical applications are not running anymore", 51 | ) 52 | except CalledProcessError as cpe: 53 | logging.error("Command failure: %s", cpe.cmd) 54 | logging.error("Command output: %s", cpe.stderr.decode()) 55 | raise 56 | 57 | 58 | def unbind_framebuffers() -> None: 59 | logging.info("Unbinding VT consoles framebuffers...") 60 | for vtconsole in os.listdir("/sys/class/vtconsole/"): 61 | try: 62 | with open(f"/sys/class/vtconsole/{vtconsole}/name") as file: 63 | vtconsole_name = file.read() 64 | except FileNotFoundError: 65 | continue 66 | 67 | if "frame buffer" not in vtconsole_name: 68 | continue 69 | 70 | with open(f"/sys/class/vtconsole/{vtconsole}/bind", "w") as file: 71 | file.write("0\n") 72 | 73 | logging.info("Unbinding EFI framebuffer...") 74 | try: 75 | with open( 76 | "/sys/bus/platform/drivers/efi-framebuffer/unbind", 77 | "w", 78 | ) as file: 79 | file.write("efi-framebuffer.0\n") 80 | except OSError as ose: 81 | logging.warning("efi-framebuffer.0: 
%s", ose) 82 | 83 | 84 | def rebind_framebuffers() -> None: 85 | logging.info("Rebinding EFI framebuffer...") 86 | try: 87 | with open( 88 | "/sys/bus/platform/drivers/efi-framebuffer/bind", 89 | "w", 90 | ) as file: 91 | file.write("efi-framebuffer.0\n") 92 | except OSError as ose: 93 | logging.warning("efi-framebuffer.0: %s", ose) 94 | 95 | logging.info("Rebinding VT consoles framebuffers...") 96 | for vtconsole in os.listdir("/sys/class/vtconsole/"): 97 | try: 98 | with open(f"/sys/class/vtconsole/{vtconsole}/name") as file: 99 | vtconsole_name = file.read() 100 | except FileNotFoundError: 101 | continue 102 | 103 | if "frame buffer" not in vtconsole_name: 104 | continue 105 | 106 | with open(f"/sys/class/vtconsole/{vtconsole}/bind", "w") as file: 107 | file.write("1\n") 108 | 109 | 110 | def try_unload_driver(driver_name: str) -> None: 111 | """Work around race condition. 112 | 113 | modprobe: FATAL: Module amdgpu is in use 114 | """ 115 | max_attempts = 5 116 | current_attempt = 1 117 | 118 | while True: 119 | try: 120 | run( 121 | ["/usr/bin/modprobe", "-r", driver_name], 122 | capture_output=True, 123 | check=True, 124 | ) 125 | break 126 | except CalledProcessError: 127 | if current_attempt == max_attempts: 128 | raise 129 | 130 | current_attempt += 1 131 | sleep(0.5) 132 | 133 | 134 | def load_gpu_drivers() -> None: 135 | try: 136 | logging.debug("Loading kernel drivers...") 137 | run( 138 | ["/usr/bin/modprobe", "amdgpu"], 139 | capture_output=True, 140 | check=True, 141 | ) 142 | logging.info("amdgpu driver was loaded successfully") 143 | except CalledProcessError as cpe: 144 | logging.error("Command failure: %s", cpe.cmd) 145 | logging.error("Command output: %s", cpe.stderr.decode()) 146 | raise 147 | 148 | 149 | def unload_gpu_drivers() -> None: 150 | try: 151 | logging.debug("Unloading kernel drivers...") 152 | try_unload_driver("amdgpu") 153 | logging.info("amdgpu driver was unloaded successfully") 154 | except CalledProcessError as cpe: 155 | logging.error("Command failure: %s", cpe.cmd) 156 | logging.error("Command output: %s", cpe.stderr.decode()) 157 | raise 158 | 159 | 160 | def load_vfio_drivers() -> None: 161 | try: 162 | for vfio_driver in ("vfio", "vfio_pci", "vfio_iommu_type1"): 163 | logging.debug("Loading %s driver...", vfio_driver) 164 | run( 165 | ["/usr/bin/modprobe", vfio_driver], 166 | capture_output=True, 167 | check=True, 168 | ) 169 | logging.info("%s driver was loaded successfully", vfio_driver) 170 | except CalledProcessError as cpe: 171 | logging.error("Command failure: %s", cpe.cmd) 172 | logging.error("Command output: %s", cpe.stderr.decode()) 173 | raise 174 | 175 | 176 | def unload_vfio_drivers() -> None: 177 | try: 178 | for vfio_driver in ("vfio_pci", "vfio_iommu_type1", "vfio"): 179 | logging.debug("Unloading %s driver...", vfio_driver) 180 | try_unload_driver(vfio_driver) 181 | logging.info("%s driver was unloaded successfully", vfio_driver) 182 | except CalledProcessError as cpe: 183 | logging.error("Command failure: %s", cpe.cmd) 184 | logging.error("Command output: %s", cpe.stderr.decode()) 185 | raise 186 | 187 | 188 | def setup_amd_gpu_passthrough() -> None: 189 | try: 190 | stop_display_manager() 191 | unbind_framebuffers() 192 | unload_gpu_drivers() 193 | load_vfio_drivers() 194 | logging.info("GPU passthrough setup successfully!") 195 | except Exception: 196 | logging.error( 197 | "An error occured while trying setup GPU passthrough, " 198 | "trying to revert changes...", 199 | ) 200 | teardown_amd_gpu_passthrough() 201 | sys.exit(1) 
202 | 203 | 204 | def teardown_amd_gpu_passthrough() -> None: 205 | """Unlike the setup function, the tear down process will keep going on error.""" 206 | with contextlib.suppress(CalledProcessError): 207 | unload_vfio_drivers() 208 | 209 | with contextlib.suppress(CalledProcessError): 210 | load_gpu_drivers() 211 | 212 | rebind_framebuffers() 213 | start_display_manager() 214 | 215 | 216 | def get_vm_pci_devices(vm_name: str) -> list[xml.Element]: 217 | vm_config = xml.parse(f"/etc/libvirt/qemu/{vm_name}.xml").getroot() 218 | return vm_config.findall( 219 | "devices/hostdev[@type='pci']/source/address", 220 | ) 221 | 222 | 223 | def is_pci_device_gpu(node: xml.Element) -> bool: 224 | domain = node.get("domain") 225 | bus = node.get("bus") 226 | slot = node.get("slot") 227 | function = node.get("function") 228 | 229 | if domain is None or bus is None or slot is None or function is None: 230 | error = "unexpected VM XML config" 231 | raise ValueError(error) 232 | 233 | proc = run( 234 | ["/usr/bin/lspci", "-s", f"{bus}:{slot}.{function}"], 235 | capture_output=True, 236 | check=True, 237 | ) 238 | 239 | lspci_output = proc.stdout.decode() 240 | 241 | if lspci_output[-1] == "\n": 242 | lspci_output = lspci_output[:-1] 243 | 244 | if "VGA" in lspci_output: 245 | logging.info("VM wants to passthrough following GPU: %s", lspci_output) 246 | return True 247 | 248 | return False 249 | 250 | 251 | def main() -> None: 252 | vm_name = sys.argv[1] 253 | operation = sys.argv[2] 254 | 255 | logging.debug("Machine: %s Operation: %s", vm_name, operation) 256 | 257 | if operation not in ("prepare", "release"): 258 | sys.exit(0) 259 | 260 | try: 261 | pci_passthrough_devices = get_vm_pci_devices(vm_name) 262 | except FileNotFoundError: 263 | logging.info( 264 | "Looks like %s is a new VM being created, don't do anything", vm_name 265 | ) 266 | sys.exit(0) 267 | 268 | logging.debug( 269 | "VM wants to passthrough %d PCI devices", 270 | len(pci_passthrough_devices), 271 | ) 272 | 273 | vm_wants_gpu_passthrough = False 274 | for device in pci_passthrough_devices: 275 | try: 276 | if is_pci_device_gpu(device): 277 | vm_wants_gpu_passthrough = True 278 | except CalledProcessError as cpe: 279 | logging.error("Command failure: %s", cpe.cmd) 280 | logging.error("Command output: %s", cpe.stderr.decode()) 281 | logging.error("Failed to determine if PCI device is a GPU") 282 | continue 283 | except ValueError as ve: 284 | logging.error("%s", ve) 285 | logging.error("Failed to determine if PCI device is a GPU") 286 | continue 287 | 288 | exit_status = 0 289 | must_tear_down = False 290 | 291 | if vm_wants_gpu_passthrough: 292 | if operation == "prepare": 293 | try: 294 | setup_amd_gpu_passthrough() 295 | logging.info("GPU passthrough setup successfully!") 296 | except Exception: 297 | logging.error( 298 | "An error occured while trying setup GPU passthrough, " 299 | "trying to revert changes...", 300 | ) 301 | must_tear_down = True 302 | exit_status = 1 303 | 304 | elif operation == "release": 305 | must_tear_down = True 306 | 307 | if must_tear_down: 308 | try: 309 | teardown_amd_gpu_passthrough() 310 | except Exception: 311 | logging.error( 312 | "An error occured while trying to tear down GPU passthrough...", 313 | ) 314 | exit_status = 1 315 | 316 | sys.exit(exit_status) 317 | 318 | 319 | if __name__ == "__main__": 320 | main() 321 | -------------------------------------------------------------------------------- /rootfs/etc/libvirt/libvirtd.conf: 
-------------------------------------------------------------------------------- 1 | unix_sock_group = 'libvirt' 2 | unix_sock_rw_perms = '0770' 3 | 4 | # Useful for troubleshooting 5 | log_filters="3:qemu 1:libvirt" 6 | log_outputs="2:file:/var/log/libvirt/libvirtd.log" 7 | -------------------------------------------------------------------------------- /rootfs/etc/libvirt/qemu.conf: -------------------------------------------------------------------------------- 1 | user = "username_placeholder" 2 | group = "username_placeholder" 3 | -------------------------------------------------------------------------------- /rootfs/etc/locale.conf: -------------------------------------------------------------------------------- 1 | LANG="en_US.UTF-8" 2 | LC_MESSAGES="en_US.UTF-8" 3 | LC_MONETARY="en_US.UTF-8" 4 | LC_PAPER="en_US.UTF-8" 5 | LC_MEASUREMENT="en_US.UTF-8" 6 | LC_ADDRESS="en_US.UTF-8" 7 | LC_TIME="en_US.UTF-8" 8 | LC_COLLATE="en_US.UTF-8" 9 | LC_CTYPE="en_US.UTF-8" 10 | -------------------------------------------------------------------------------- /rootfs/etc/nftables.conf: -------------------------------------------------------------------------------- 1 | #!/sbin/nft -f 2 | 3 | flush ruleset 4 | 5 | table inet filter { 6 | chain input { 7 | type filter hook input priority 0; policy drop; 8 | ct state invalid counter drop comment "early drop of invalid packets" 9 | 10 | # https://googleprojectzero.blogspot.com/2015/01/finding-and-exploiting-ntpd.html 11 | iif lo accept comment "accept loopback" 12 | iif != lo ip daddr 127.0.0.1/8 counter drop comment "drop connections to loopback not coming from loopback" 13 | iif != lo ip6 daddr ::1/128 counter drop comment "drop connections to loopback not coming from loopback" 14 | 15 | # Rate limit ICMP echo-request to prevent flood 16 | ip protocol icmp icmp type echo-request limit rate 3/second accept 17 | ip protocol icmp icmp type echo-request counter drop 18 | 19 | # Accept already established/related connections 20 | ct state {established, related} counter accept comment "accept all connections related to connections made by us" 21 | 22 | # Accept ICMP (ipv4 only) 23 | ip protocol icmp counter accept comment "allow ICMPv4 packets" 24 | 25 | # DHCP (not required to get DHCP to work, but useful for renewals) 26 | udp sport 67 udp dport 68 counter accept comment "allow DHCP traffic over UDP" 27 | 28 | # libvirt related config 29 | iifname "virbr*" tcp dport 53 counter accept comment "allow VMs to reach the host's DNS server (dnsmasq)" 30 | iifname "virbr*" udp dport 53 counter accept comment "allow VMs to reach the host's DNS server (dnsmasq)" 31 | iifname "virbr*" udp dport 67 counter accept comment "allow VMs to reach the host's DHCP server (dnsmasq)" 32 | 33 | # Log any failed inbound traffic attempt 34 | log flags all prefix "FIREWALL REJECTED INPUT: " counter 35 | } 36 | 37 | chain forward { 38 | type filter hook forward priority 0; policy drop; 39 | 40 | # libvirt/docker related config 41 | iifname "virbr*" counter accept comment "allow VMs to reach the internet through the host" 42 | iifname "docker*" counter accept comment "allow Docker containers to reach the internet through the host" 43 | ct state established,related counter accept comment "accept all established/related connections" 44 | 45 | # Log any failed forward traffic attempt 46 | log flags all prefix "FIREWALL REJECTED FORWARD: " counter 47 | 48 | reject with icmp type port-unreachable comment "explicitly reject packets" 49 | } 50 | 51 | chain output { 52 | type filter hook 
output priority 0; policy drop; 53 | 54 | # Allow reaching localhost services 55 | oif lo accept comment "accept loopback" 56 | 57 | # Accept already established/related connections 58 | ct state {established, related} accept comment "accept all connections related to connections made by us" 59 | 60 | # libvirt/docker related config 61 | oifname "virbr*" counter accept comment "allow the host to reach the VMs" 62 | oifname "docker*" counter accept comment "allow the host to reach Docker containers" 63 | 64 | # ICMP 65 | ip protocol icmp counter accept comment "accept all ICMP types" 66 | 67 | # DHCP 68 | udp sport 68 udp dport 67 counter accept comment "allow DHCP traffic over UDP" 69 | 70 | # NTP 71 | udp dport 123 counter accept comment "allow NTP traffic over UDP" 72 | 73 | # DNS 74 | udp dport 53 accept comment "allow DNS traffic over UDP" 75 | 76 | # DNS over TLS (not authenticated yet https://github.com/systemd/systemd/issues/25676#issuecomment-1634810897) 77 | tcp dport 853 counter accept comment "allow DoT traffic over TCP" 78 | 79 | # SyncThing 80 | tcp dport 22000 counter accept comment "allow SyncThing traffic" 81 | 82 | # Allow everything for the allow-internet group 83 | ip protocol tcp meta skgid allow-internet counter accept comment "allow TCP outbound traffic for the allow-internet group" 84 | ip protocol udp meta skgid allow-internet counter accept comment "allow UDP outbound traffic for the allow-internet group" 85 | 86 | # Log any failed outbound traffic attempt 87 | log flags all prefix "FIREWALL REJECTED OUTPUT: " counter 88 | 89 | reject with icmp type port-unreachable comment "explicitly reject packets" 90 | } 91 | } 92 | -------------------------------------------------------------------------------- /rootfs/etc/pacman.d/hooks/post-20-dash-symlink.hook: -------------------------------------------------------------------------------- 1 | [Trigger] 2 | Type = Package 3 | Operation = Install 4 | Operation = Upgrade 5 | Target = bash 6 | 7 | [Action] 8 | Description = Re-pointing /bin/sh symlink to dash... 9 | When = PostTransaction 10 | Exec = /usr/bin/ln -sfT dash /usr/bin/sh 11 | Depends = dash 12 | -------------------------------------------------------------------------------- /rootfs/etc/pacman.d/hooks/post-20-firejail-hardening.hook: -------------------------------------------------------------------------------- 1 | [Trigger] 2 | Operation = Install 3 | Operation = Upgrade 4 | Type = Package 5 | Target = firejail 6 | 7 | [Action] 8 | Depends = coreutils 9 | Depends = bash 10 | When = PostTransaction 11 | Exec = /usr/bin/sh -c "chown root:firejail /usr/bin/firejail && chmod 4750 /usr/bin/firejail" 12 | Description = Setting /usr/bin/firejail owner to "root:firejail" and mode "4750" 13 | -------------------------------------------------------------------------------- /rootfs/etc/pacman.d/hooks/post-20-firejail-symlinks.hook: -------------------------------------------------------------------------------- 1 | [Trigger] 2 | Type = Path 3 | Operation = Install 4 | Operation = Upgrade 5 | Operation = Remove 6 | Target = usr/bin/* 7 | Target = usr/share/applications/*.desktop 8 | 9 | [Action] 10 | Description = Configure symlinks in /usr/local/bin based on firecfg.config... 
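# Illustrative note (assumption): firecfg populates /usr/local/bin with symlinks
# pointing at /usr/bin/firejail for every installed program that has a profile,
# so those programs are sandboxed transparently thanks to PATH precedence.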
11 | When = PostTransaction 12 | Depends = firejail 13 | Exec = /bin/sh -c 'firecfg >/dev/null 2>&1' 14 | -------------------------------------------------------------------------------- /rootfs/etc/pacman.d/hooks/post-90-should-reboot-check.hook: -------------------------------------------------------------------------------- 1 | [Trigger] 2 | Operation = Upgrade 3 | Type = Path 4 | Target = usr/lib/modules/*/vmlinu* 5 | 6 | [Action] 7 | Description = Checking if system should be rebooted... 8 | When = PostTransaction 9 | Exec = /usr/bin/systemctl start should-reboot-check.service 10 | -------------------------------------------------------------------------------- /rootfs/etc/pacman.d/hooks/pre-10-deny-xorg-packages.hook: -------------------------------------------------------------------------------- 1 | [Trigger] 2 | Operation = Install 3 | Type = Package 4 | Target = * 5 | 6 | [Action] 7 | Description = Deny installation of xorg packages 8 | When = PreTransaction 9 | AbortOnFail 10 | NeedsTargets 11 | Exec=/bin/bash -c "grep -q 'xorg\|x11' - && echo DO NOT INSTALL X11 TOOLS, FIND WAYLAND ALTERNATIVES && exit 1 || exit 0" 12 | -------------------------------------------------------------------------------- /rootfs/etc/resolv.conf: -------------------------------------------------------------------------------- 1 | /usr/lib/systemd/resolv.conf -------------------------------------------------------------------------------- /rootfs/etc/security/faillock.conf: -------------------------------------------------------------------------------- 1 | # Deny access to sudo after that many failures 2 | deny = 5 3 | 4 | # Deny access for (in seconds) 5 | unlock_time = 300 6 | -------------------------------------------------------------------------------- /rootfs/etc/snapper/configs/home: -------------------------------------------------------------------------------- 1 | 2 | # subvolume to snapshot 3 | SUBVOLUME="/home" 4 | 5 | # filesystem type 6 | FSTYPE="btrfs" 7 | 8 | 9 | # btrfs qgroup for space aware cleanup algorithms 10 | QGROUP="" 11 | 12 | 13 | # fraction or absolute size of the filesystems space the snapshots may use 14 | SPACE_LIMIT="0.5" 15 | 16 | # fraction or absolute size of the filesystems space that should be free 17 | FREE_LIMIT="0.2" 18 | 19 | 20 | # users and groups allowed to work with config 21 | ALLOW_USERS="" 22 | ALLOW_GROUPS="" 23 | 24 | # sync users and groups from ALLOW_USERS and ALLOW_GROUPS to .snapshots 25 | # directory 26 | SYNC_ACL="no" 27 | 28 | 29 | # start comparing pre- and post-snapshot in background after creating 30 | # post-snapshot 31 | BACKGROUND_COMPARISON="yes" 32 | 33 | 34 | # run daily number cleanup 35 | NUMBER_CLEANUP="yes" 36 | 37 | # limit for number cleanup 38 | NUMBER_MIN_AGE="1800" 39 | NUMBER_LIMIT="50" 40 | NUMBER_LIMIT_IMPORTANT="10" 41 | 42 | 43 | # create hourly snapshots 44 | TIMELINE_CREATE="yes" 45 | 46 | # cleanup hourly snapshots after some time 47 | TIMELINE_CLEANUP="yes" 48 | 49 | # limits for timeline cleanup 50 | TIMELINE_MIN_AGE="1800" 51 | TIMELINE_LIMIT_HOURLY="10" 52 | TIMELINE_LIMIT_DAILY="7" 53 | TIMELINE_LIMIT_WEEKLY="0" 54 | TIMELINE_LIMIT_MONTHLY="3" 55 | TIMELINE_LIMIT_YEARLY="0" 56 | 57 | 58 | # cleanup empty pre-post-pairs 59 | EMPTY_PRE_POST_CLEANUP="yes" 60 | 61 | # limits for empty pre-post-pair cleanup 62 | EMPTY_PRE_POST_MIN_AGE="1800" 63 | 64 | -------------------------------------------------------------------------------- /rootfs/etc/snapper/configs/root: 
-------------------------------------------------------------------------------- 1 | 2 | # subvolume to snapshot 3 | SUBVOLUME="/" 4 | 5 | # filesystem type 6 | FSTYPE="btrfs" 7 | 8 | 9 | # btrfs qgroup for space aware cleanup algorithms 10 | QGROUP="" 11 | 12 | 13 | # fraction or absolute size of the filesystems space the snapshots may use 14 | SPACE_LIMIT="0.5" 15 | 16 | # fraction or absolute size of the filesystems space that should be free 17 | FREE_LIMIT="0.2" 18 | 19 | 20 | # users and groups allowed to work with config 21 | ALLOW_USERS="" 22 | ALLOW_GROUPS="" 23 | 24 | # sync users and groups from ALLOW_USERS and ALLOW_GROUPS to .snapshots 25 | # directory 26 | SYNC_ACL="no" 27 | 28 | 29 | # start comparing pre- and post-snapshot in background after creating 30 | # post-snapshot 31 | BACKGROUND_COMPARISON="yes" 32 | 33 | 34 | # run daily number cleanup 35 | NUMBER_CLEANUP="yes" 36 | 37 | # limit for number cleanup 38 | NUMBER_MIN_AGE="1800" 39 | NUMBER_LIMIT="50" 40 | NUMBER_LIMIT_IMPORTANT="10" 41 | 42 | 43 | # create hourly snapshots 44 | TIMELINE_CREATE="yes" 45 | 46 | # cleanup hourly snapshots after some time 47 | TIMELINE_CLEANUP="yes" 48 | 49 | # limits for timeline cleanup 50 | TIMELINE_MIN_AGE="1800" 51 | TIMELINE_LIMIT_HOURLY="5" 52 | TIMELINE_LIMIT_DAILY="7" 53 | TIMELINE_LIMIT_WEEKLY="1" 54 | TIMELINE_LIMIT_MONTHLY="1" 55 | TIMELINE_LIMIT_YEARLY="0" 56 | 57 | 58 | # cleanup empty pre-post-pairs 59 | EMPTY_PRE_POST_CLEANUP="yes" 60 | 61 | # limits for empty pre-post-pair cleanup 62 | EMPTY_PRE_POST_MIN_AGE="1800" 63 | 64 | -------------------------------------------------------------------------------- /rootfs/etc/sudoers: -------------------------------------------------------------------------------- 1 | Defaults env_reset 2 | Defaults secure_path="/usr/local/bin:/usr/bin" 3 | 4 | # Fixes CVE-2005-4890, see https://www.errno.fr/TTYPushback.html 5 | Defaults use_pty 6 | 7 | Defaults passwd_timeout=0 8 | Defaults passprompt="[sudo] password for %p: " 9 | Defaults insults 10 | Defaults pwfeedback 11 | 12 | Defaults editor="/usr/bin/nvim" 13 | 14 | # Useful for applications that wrap sudo (e.g. yay) 15 | Defaults env_keep += "ftp_proxy rsync_proxy http_proxy https_proxy no_proxy" 16 | Defaults env_keep += "FTP_PROXY RSYNC_PROXY HTTP_PROXY HTTPS_PROXY NO_PROXY" 17 | 18 | # Useful for GUI applications that require root privileges (e.g. Wireshark) 19 | Defaults env_keep += "WAYLAND_DISPLAY XDG_RUNTIME_DIR SDL_VIDEODRIVER QT_QPA_PLATFORM" 20 | 21 | %wheel ALL=(ALL:ALL) ALL 22 | -------------------------------------------------------------------------------- /rootfs/etc/sysctl.d/99-swappiness.conf: -------------------------------------------------------------------------------- 1 | vm.swappiness=10 2 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/network/70-wired.network: -------------------------------------------------------------------------------- 1 | [Match] 2 | Name=en* 3 | 4 | [Network] 5 | DHCP=ipv4 6 | 7 | # Do not use the advertised DNS as the default DNS. 8 | # Use it only to resolve local domains. 9 | DNSDefaultRoute=false 10 | 11 | # A local DNS server will most likely not use DoT 12 | DNSOverTLS=opportunistic 13 | 14 | # Usually local domains are resolved through mDNS 15 | # But mDNS is disabled to reduce the attack surface 16 | # Therefore you can resolve .local domains using 17 | # regular DNS if you want to. 
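# Illustrative example (assumption): resolution of a .local name through this
# route can be tested with e.g.:
#   resolvectl query myprinter.local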
18 | Domains=~local 19 | DNSSECNegativeTrustAnchors=local 20 | 21 | [DHCPv4] 22 | RouteMetric=100 23 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/network/80-wireless.network: -------------------------------------------------------------------------------- 1 | [Match] 2 | Type=wlan 3 | 4 | [Network] 5 | DHCP=ipv4 6 | 7 | # Do not use the advertised DNS as the default DNS. 8 | # Use it only to resolve local domains. 9 | DNSDefaultRoute=false 10 | 11 | # A local DNS server will most likely not use DoT 12 | DNSOverTLS=opportunistic 13 | 14 | # Usually local domains are resolved through mDNS 15 | # But mDNS is disabled to reduce the attack surface 16 | # Therefore you can resolve .local domains using 17 | # regular DNS if you want to. 18 | Domains=~local 19 | DNSSECNegativeTrustAnchors=local 20 | 21 | [DHCPv4] 22 | RouteMetric=600 23 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/resolved.conf.d/default.conf: -------------------------------------------------------------------------------- 1 | # This will only be used as a last resort if no DNS is 2 | # provided through other mean. Configure DNS using 3 | # systemd-networkd instead. 4 | [Resolve] 5 | DNS=9.9.9.9#dns.quad9.net 6 | # Quad9 is specified again as a fallback in case an interface is configured not to use the DNS= above 7 | FallbackDNS=9.9.9.9#dns.quad9.net 1.1.1.1#cloudflare-dns.com 2620:fe::9#dns.quad9.net 2606:4700:4700::1111#cloudflare-dns.com 8 | DNSOverTLS=yes 9 | 10 | # Note that DNSSEC doesn't really work yet !! 11 | # See: https://github.com/systemd/systemd/issues/25676 12 | DNSSEC=true 13 | 14 | # Network hardening 15 | LLMNR=no 16 | MulticastDNS=no 17 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/sleep.conf.d/disable-hibernate.conf: -------------------------------------------------------------------------------- 1 | [Sleep] 2 | AllowHibernation=no 3 | AllowSuspendThenHibernate=no 4 | AllowHybridSleep=no 5 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/system/auditd-notify.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Start the script that will monitor /var/log/audit.log and log relevant things 3 | 4 | [Service] 5 | Type=simple 6 | ExecStart=/usr/local/bin/auditd-notify 7 | Restart=on-failure 8 | RestartSec=3s 9 | 10 | [Install] 11 | WantedBy=multi-user.target 12 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/system/auditor.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Run the auditor script to check for system misconfigurations 3 | After=network.target systemd-networkd-wait-online.service 4 | 5 | [Service] 6 | Type=oneshot 7 | ExecStart=/usr/local/bin/proxify /usr/local/bin/auditor 8 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/system/auditor.timer: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Run auditor.service every week 3 | 4 | [Timer] 5 | OnCalendar=weekly 6 | Persistent=true 7 | 8 | [Install] 9 | WantedBy=timers.target 10 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/system/btrfs-balance.service: 
-------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Run btrfs balance to regain disk space 3 | 4 | [Service] 5 | Type=oneshot 6 | ExecStart=/bin/bash -c 'echo \'NOTIFY {"urgency": "normal", "title": "SYSTEM MAINTENANCE", "body": "Btrfs balancing in progress..."}\'; btrfs balance start -dusage=50 -dlimit=2 -musage=50 -mlimit=4 /' 7 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/system/btrfs-balance.timer: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Run btrfs-balance.service daily 3 | 4 | [Timer] 5 | OnCalendar=daily 6 | Persistent=true 7 | 8 | [Install] 9 | WantedBy=timers.target 10 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/system/check-secure-boot.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Check the secure boot status at boot 3 | 4 | [Service] 5 | Type=oneshot 6 | ExecStart=/bin/bash -c '[ "$(hexdump /sys/firmware/efi/efivars/SecureBoot-* | awk \'{print $4}\')" = "0001" ] || echo \'NOTIFY {"urgency": "critical", "title": "SECURE BOOT DISABLED", "body": "Please take action"}\'' 7 | 8 | [Install] 9 | WantedBy=multi-user.target 10 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/system/dirmngr@etc-pacman.d-gnupg.service.d/override.conf: -------------------------------------------------------------------------------- 1 | [Service] 2 | Group=allow-internet 3 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/system/getty@tty1.service.d/autologin.conf: -------------------------------------------------------------------------------- 1 | [Service] 2 | Type=simple 3 | Restart=on-failure 4 | ExecStart= 5 | ExecStart=-/sbin/agetty -o '-p -f -- \\u' --skip-login --nonewline --noissue --noclear --autologin username_placeholder %I $TERM 6 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/system/local-forwarding-proxy.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Forward proxy using Glider 3 | After=network.target nftables.service 4 | 5 | [Service] 6 | Type=simple 7 | DynamicUser=yes 8 | Group=allow-internet 9 | Restart=always 10 | LimitNOFILE=102400 11 | ExecStart=/usr/bin/glider -listen http://127.0.0.1:8080 12 | 13 | [Install] 14 | WantedBy=multi-user.target 15 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/system/pacman-notify.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Send desktop notifications related to pacman 3 | 4 | [Service] 5 | Type=oneshot 6 | ExecStart=/usr/local/bin/pacman-notify 7 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/system/pacman-notify.timer: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Run pacman-notify.service every day 3 | 4 | [Timer] 5 | OnCalendar=daily 6 | Persistent=true 7 | 8 | [Install] 9 | WantedBy=timers.target 10 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/system/pacman-sync.service: 
-------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Cache upgradable packages in the background but do not install them 3 | After=network.target systemd-networkd-wait-online.service 4 | 5 | [Service] 6 | Type=oneshot 7 | ExecStart=/usr/local/bin/proxify /usr/bin/pacman -Syuw --noconfirm 8 | Restart=on-failure 9 | RestartSec=10s 10 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/system/pacman-sync.timer: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Run pacman-sync.service daily 3 | 4 | [Timer] 5 | OnCalendar=daily 6 | Persistent=true 7 | 8 | [Install] 9 | WantedBy=timers.target 10 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/system/should-reboot-check.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Check if the system should be rebooted because of kernel or libraries updates 3 | 4 | [Service] 5 | Type=oneshot 6 | ExecStart=/usr/local/bin/should-reboot-check 7 | Restart=on-failure 8 | RestartSec=10s 9 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/system/should-reboot-check.timer: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Run should-reboot-check.service daily 3 | 4 | [Timer] 5 | OnCalendar=daily 6 | Persistent=true 7 | 8 | [Install] 9 | WantedBy=timers.target 10 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/system/systemd-networkd-wait-online.service.d/override.conf: -------------------------------------------------------------------------------- 1 | # Add --any so that the network is considered online if any of the interface is ready 2 | [Service] 3 | ExecStart= 4 | ExecStart=/usr/lib/systemd/systemd-networkd-wait-online --any 5 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/system/usb-auto-mount@.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Auto-mount device %i 3 | 4 | [Service] 5 | Type=oneshot 6 | RemainAfterExit=true 7 | ExecStart=/usr/local/bin/usb-auto-mount add %i 8 | ExecStop=/usr/local/bin/usb-auto-mount remove %i 9 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/user/backup-to-cloud.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Automatically backup the home folder and send the encrypted archive to a remote server 3 | 4 | [Service] 5 | Type=oneshot 6 | Restart=on-failure 7 | RestartSec=600 8 | Environment=GPG_RECIPIENT=A548562A20375286 9 | ExecStart=/usr/local/bin/backup-to-cloud 10 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/user/backup-to-cloud.timer: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Run backup-to-cloud.service weekly 3 | 4 | [Timer] 5 | OnCalendar=weekly 6 | Persistent=true 7 | 8 | [Install] 9 | WantedBy=timers.target 10 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/user/gammastep.service: -------------------------------------------------------------------------------- 1 
| [Unit] 2 | Description=Display color temperature adjustment 3 | PartOf=graphical-session.target 4 | 5 | [Service] 6 | Type=simple 7 | ExecStart=/usr/bin/gammastep 8 | 9 | [Install] 10 | WantedBy=graphical-session.target 11 | 12 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/user/journalctl-notify.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Follow journalctl and send desktop notification accordingly 3 | PartOf=graphical-session.target 4 | 5 | [Service] 6 | Type=simple 7 | ExecStart=/usr/local/bin/journalctl-notify 8 | 9 | [Install] 10 | WantedBy=graphical-session.target 11 | 12 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/user/restic-unattended.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Automatically backup the home folder to a remote server using Restic 3 | 4 | [Service] 5 | Type=oneshot 6 | Restart=on-failure 7 | RestartSec=600 8 | Environment=GOMAXPROCS=1 9 | Environment=RESTIC_PASSWORD_FILE=%h/.local/share/restic/password 10 | Environment=BACKUP_USER=backup 11 | Environment=RESTIC_REPO_NAME=restic-repo 12 | ExecStart=/usr/local/bin/restic-backup-to-cloud 13 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/user/restic-unattended.timer: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Run restic-unattended.service daily 3 | 4 | [Timer] 5 | OnCalendar=daily 6 | Persistent=true 7 | 8 | [Install] 9 | WantedBy=timers.target 10 | 11 | -------------------------------------------------------------------------------- /rootfs/etc/systemd/user/sway-session.target: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=sway compositor session 3 | Documentation=man:systemd.special(7) 4 | BindsTo=graphical-session.target 5 | Wants=graphical-session-pre.target 6 | After=graphical-session-pre.target 7 | -------------------------------------------------------------------------------- /rootfs/etc/udev/rules.d/99-usb-auto-mount.rules: -------------------------------------------------------------------------------- 1 | KERNEL=="sd[a-z][0-9]", SUBSYSTEMS=="usb", ACTION=="add", RUN+="/usr/bin/systemctl start usb-auto-mount@%k.service" 2 | KERNEL=="sd[a-z][0-9]", SUBSYSTEMS=="usb", ACTION=="remove", RUN+="/usr/bin/systemctl stop usb-auto-mount@%k.service" 3 | -------------------------------------------------------------------------------- /rootfs/usr/local/bin/auditd-notify: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 -u 2 | 3 | # ruff: noqa: E501,PTH123,PTH116,SIM115 4 | 5 | """Script that monitors auditd logs to send relevant Desktop notifications. 6 | 7 | Inspired by AppArmor's aa-notify. 
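Notifications are not sent directly: relevant events are printed to stdout as
NOTIFY lines (see the notify() helper below), which /usr/local/bin/journalctl-notify
picks up from the journal and turns into desktop notifications. A typical emitted
line would look something like this (values made up for illustration):

    NOTIFY {"urgency": "critical", "title": "APPARMOR DENIAL", "body": "Profile: firefox\nOperation: open\n..."}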
8 | See: https://gitlab.com/apparmor/apparmor/-/blob/master/utils/aa-notify 9 | """ 10 | 11 | import json 12 | import os 13 | import re 14 | import sys 15 | import time 16 | from io import TextIOWrapper 17 | from traceback import format_exc 18 | from typing import Iterable, Tuple 19 | 20 | DEFAULT_SLEEP = 1 21 | LOG_FILE = "/var/log/audit/audit.log" 22 | 23 | 24 | def follow_auditd_events() -> Iterable: 25 | boot_time = None 26 | 27 | # Get timestamp at which system booted to know from which point we should process events 28 | with open("/proc/stat") as file: 29 | for line in file: 30 | if line.startswith("btime"): 31 | boot_time = int(line.split()[1]) 32 | break 33 | 34 | if not boot_time: 35 | error = "Unable to find boot time" 36 | raise ValueError(error) 37 | 38 | # Record initial file size to detect if log rotates 39 | log_size = os.stat(LOG_FILE).st_size 40 | # Record initial file inode number to detect if log gets renamed 41 | log_inode = os.stat(LOG_FILE).st_ino 42 | 43 | logdata = open(LOG_FILE) 44 | 45 | # Skip events that occured before boot 46 | line = logdata.readline() 47 | while line: 48 | match = re.search(r"audit\((\d+)\.\d+:\d+\)", line) 49 | 50 | if not match: 51 | raise ValueError("Unable to parse auditd log entry: " + line) 52 | 53 | timestamp = int(match.group(1)) 54 | 55 | if timestamp > boot_time: 56 | # First event that occured after boot is found ! 57 | # We must stop skipping lines. But we still want to process this one. 58 | # So we seek back before that line 59 | logdata.seek(logdata.tell() - len(line)) 60 | break 61 | 62 | line = logdata.readline() 63 | 64 | while True: 65 | logdata, log_inode, log_size = reopen_logfile_if_needed( 66 | logdata, 67 | log_inode, 68 | log_size, 69 | ) 70 | 71 | yield from logdata 72 | 73 | time.sleep(DEFAULT_SLEEP) 74 | 75 | logdata.close() 76 | 77 | 78 | def reopen_logfile_if_needed( 79 | logdata: TextIOWrapper, 80 | log_inode: int, 81 | log_size: int, 82 | ) -> Tuple[TextIOWrapper, int, int]: 83 | retry = True 84 | 85 | while retry: 86 | try: 87 | stat = os.stat(LOG_FILE) 88 | 89 | # Reopen file if inode has changed, e.g. 
rename by logrotate 90 | if stat.st_ino != log_inode: 91 | logdata = open(LOG_FILE) 92 | # Store new inode number for next comparisons 93 | log_inode = stat.st_ino 94 | 95 | # Start reading from the beginning if file shrank 96 | if stat.st_size < log_size: 97 | logdata.seek(0) 98 | log_size = stat.st_size # Reset file size value 99 | 100 | # Record new file size if grown 101 | if stat.st_size > log_size: 102 | log_size = stat.st_size 103 | 104 | retry = False 105 | except FileNotFoundError: # noqa: PERF203 106 | time.sleep(DEFAULT_SLEEP) 107 | 108 | return logdata, log_inode, log_size 109 | 110 | 111 | # Examples of logs to parse: 112 | # type=DAEMON_ROTATE msg=audit(1677959023.483:5737): op=rotate-logs auid=0 uid=0 ses=4294967295 pid=1 subj=unconfined res=successAUID="root" UID="root" 113 | # type=AVC msg=audit(1677959181.975:4): apparmor="STATUS" info="AppArmor Filesystem Enabled" pid=1 comm="swapper/0" 114 | # type=AVC msg=audit(1677959203.946:105): apparmor="STATUS" operation="profile_load" profile="unconfined" name="docker-default" pid=671 comm="apparmor_parser" 115 | # type=AVC msg=audit(1677959581.156:342): apparmor="DENIED" operation="open" profile="firefox" name="/home/shellcode/.dotfiles/.config/user-dirs.dirs" pid=1523 comm=46532042726F6B65722031363933 requested_mask="r" denied_mask="r" fsuid=1000 ouid=1000FSUID="shellcode" OUID="shellcode" 116 | # type=PROCTITLE msg=audit(1677959203.476:99): proctitle="(systemd)" 117 | # type=USER_START msg=audit(1677959203.479:100): pid=625 uid=0 auid=1000 ses=2 subj=unconfined msg='op=PAM:session_open grantors=pam_loginuid,pam_loginuid,pam_keyinit,pam_systemd_home,pam_limits,pam_unix,pam_permit,pam_mail,pam_systemd,pam_env acct="shellcode" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'UID="root" AUID="shellcode" 118 | def parse_auditd_entry(event: str) -> dict[str, str]: 119 | log_entry = {} 120 | 121 | # First we replace some parts of the entry for an easier split 122 | event = event.replace("msg=audit(", "timestamp=") # ) 123 | event = event.replace("msg='", "") 124 | event = event.replace("'", "") 125 | event = event.replace("", " ") # enriched logs 126 | index = event.find(":") 127 | 128 | if index == -1: 129 | error = "Unable to parse log entry" 130 | raise ValueError(error) 131 | 132 | event = f"{event[:index]} event_id={event[index+1:]}" 133 | 134 | index = event.find(":") 135 | 136 | if index == -1: 137 | error = "Unable to parse log entry" 138 | raise ValueError(error) 139 | 140 | event = f"{event[:index-1]}{event[index+1:]}" 141 | 142 | # Then we split spaces and equal signs to create a dict 143 | tokens = event.split() 144 | 145 | for token in tokens: 146 | tokens = token.split("=", 1) 147 | 148 | # Key has no value, ignoring it 149 | if len(tokens) == 1: 150 | continue 151 | 152 | key, value = tokens 153 | key = key.strip() 154 | value = value.strip() 155 | 156 | # Key or value is empty, ignoring it 157 | if not key or not value: 158 | continue 159 | 160 | # Clean value if necessary 161 | if (value[0] == '"' and value[-1] == '"') or ( 162 | value[0] == "'" and value[-1] == "'" 163 | ): 164 | value = value[1:-1] 165 | 166 | log_entry[key] = value 167 | 168 | return log_entry 169 | 170 | 171 | def notify(title: str, body: str) -> None: 172 | # See /usr/local/bin/journalctl-notify 173 | print("NOTIFY", json.dumps({"urgency": "critical", "title": title, "body": body})) 174 | 175 | 176 | # type=ANOM_ABEND msg=audit(1701614935.255:3933): auid=1000 uid=1000 gid=1000 ses=2 subj=unconfined pid=11602 comm="dunst" 
exe="/usr/bin/dunst" sig=5 res=1AUID="shellcode" UID="shellcode" GID="shellcode" 177 | # 178 | # For an explanation of record types, see: 179 | # https://access.redhat.com/articles/4409591 180 | 181 | 182 | # Examples of logs considered relevant 183 | # type=AVC msg=audit(1677959581.156:342): apparmor="DENIED" operation="open" profile="firefox" name="/home/shellcode/.dotfiles/.config/user-dirs.dirs" pid=1523 comm=46532042726F6B65722031363933 requested_mask="r" denied_mask="r" fsuid=1000 ouid=1000FSUID="shellcode" OUID="shellcode" 184 | # type=SECCOMP msg=audit(1701619501.959:5364): auid=1000 uid=1000 gid=1000 ses=1 subj=firejail-default//&unconfined pid=16904 comm="wget" exe="/usr/bin/wget" sig=0 arch=c000003e syscall=41 compat=0 ip=0x6c62ba930deb code=0x50000AUID="shellcode" UID="shellcode" GID="shellcode" ARCH=x86_64 SYSCALL=socket 185 | # type=ANOM_PROMISCUOUS msg=audit(1701697411.140:335): dev=wlan0 prom=256 old_prom=0 auid=1000 uid=0 gid=0 ses=1AUID="shellcode" UID="root" GID="root" 186 | def notify_if_relevant(log_entry: dict[str, str]) -> None: 187 | if "apparmor" in log_entry and log_entry["apparmor"] == "DENIED": 188 | title = "APPARMOR DENIAL" 189 | body = ( 190 | f"Profile: {log_entry['profile']}\n" 191 | f"Operation: {log_entry['operation']}\n" 192 | f"On: {log_entry.get('name', 'unknown')}\n" 193 | f"By: {log_entry.get('FSUID', 'unknown')}\n" 194 | f"Denied: {log_entry.get(r'denied_mask', 'unknown')}" 195 | ) 196 | 197 | elif log_entry["type"] == "SECCOMP" and "firejail" in log_entry["subj"]: 198 | title = "FIREJAIL DENIAL (SECCOMP)" 199 | body = ( 200 | f"Program: {log_entry['exe']}\n" 201 | f"PID: {log_entry['pid']}\n" 202 | f"User: {log_entry['AUID']}" 203 | ) 204 | 205 | elif log_entry["type"] == "ANOM_ABEND" and "firejail" in log_entry["subj"]: 206 | title = "PROGRAM CRASHED (COREDUMP)" 207 | body = ( 208 | f"Program: {log_entry['exe']}\n" 209 | f"PID: {log_entry['pid']}\n" 210 | f"Signal: {log_entry['sig']}\n" 211 | f"User: {log_entry['AUID']}" 212 | ) 213 | 214 | elif log_entry["type"] == "ANOM_PROMISCUOUS" and log_entry["prom"] != "0": 215 | title = "INTERFACE IN PROMISCUOUS MODE" 216 | body = ( 217 | f"Inteface: {log_entry['dev']}\n" 218 | f"User: {log_entry['AUID']}\n\n" 219 | "Someone might be listening to your network traffic !" 
220 | ) 221 | 222 | elif log_entry["type"] == "USER_AUTH" and log_entry["res"] == "failed": 223 | title = "AUTHENTICATION FAILURE" 224 | body = ( 225 | f"From: {log_entry['UID']}\n" 226 | f"Using: {log_entry['exe']}" 227 | ) 228 | 229 | elif ( 230 | log_entry["type"] == "SYSCALL" 231 | and log_entry["key"] == "etc_shadow" 232 | ): 233 | title = "SECRET FILE ACCESS" 234 | body = ( 235 | f"Path: /etc/shadow\n" 236 | f"From: {log_entry['AUID']}\n" 237 | f"Using: {log_entry['exe']}\n" 238 | f"Syscall: {log_entry['SYSCALL']}\n" 239 | f"Success: {log_entry['success']}" 240 | ) 241 | 242 | elif ( 243 | log_entry["type"] == "SYSCALL" 244 | and log_entry["key"] == "secure_boot_keys" 245 | and log_entry["exe"] not in ("/usr/bin/sbsign", "/usr/bin/find") 246 | ): 247 | title = "SECRET FILE ACCESS" 248 | body = ( 249 | f"Path: /etc/arch-secure-boot/keys/\n" 250 | f"From: {log_entry['AUID']}\n" 251 | f"Using: {log_entry['exe']}\n" 252 | f"Syscall: {log_entry['SYSCALL']}\n" 253 | f"Success: {log_entry['success']}" 254 | ) 255 | 256 | elif log_entry["type"] == "SYSCALL" and log_entry["key"] == "etc_ld_so_preload": 257 | title = "ROOTKIT BEHAVIOR DETECTED" 258 | body = ( 259 | f"Path: /etc/ld.so.preload\n" 260 | f"From: {log_entry['AUID']}\n" 261 | f"Using: {log_entry['exe']}\n" 262 | f"Syscall: {log_entry['SYSCALL']}\n" 263 | f"Success: {log_entry['success']}" 264 | ) 265 | 266 | elif ( 267 | log_entry["type"] == "SYSCALL" 268 | and log_entry["key"] == "auditd_tampering" 269 | and log_entry["exe"] not in ("/usr/bin/pacman",) 270 | ): 271 | title = "AUDITD TAMPERING" 272 | body = ( 273 | f"From: {log_entry['AUID']}\n" 274 | f"Using: {log_entry['exe']}\n" 275 | f"Syscall: {log_entry['SYSCALL']}\n" 276 | f"Success: {log_entry['success']}" 277 | ) 278 | 279 | elif log_entry["type"] == "SYSCALL" and log_entry["key"] == "power_abuse": 280 | title = "PRIVILEGED PROCESS USING HOME DIRECTORY" 281 | body = ( 282 | f"From: {log_entry['AUID']}\n" 283 | f"Using: {log_entry['exe']}\n" 284 | f"Syscall: {log_entry['SYSCALL']}\n" 285 | f"Success: {log_entry['success']}" 286 | ) 287 | 288 | else: # not considered relevant 289 | return 290 | 291 | notify(title, body) 292 | 293 | 294 | def main() -> None: 295 | try: 296 | for event in follow_auditd_events(): 297 | parsed_event = parse_auditd_entry(event) 298 | notify_if_relevant(parsed_event) 299 | except Exception: # noqa: BLE001 300 | notify("AUDITD LOGS MONITORING FAILURE", format_exc()) 301 | sys.exit(1) 302 | 303 | 304 | if __name__ == "__main__": 305 | main() 306 | -------------------------------------------------------------------------------- /rootfs/usr/local/bin/auditor: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 -u 2 | # coding: utf-8 3 | 4 | """ 5 | Script that looks for security issues on your system and notifies you. 6 | 7 | Should be run periodically using a systemd timer. 8 | See /etc/systemd/system/auditor.{service,timer} 9 | """ 10 | 11 | import os 12 | import sys 13 | import json 14 | from stat import S_ISDIR, S_ISREG, S_IWOTH, S_IXOTH, S_IRWXO 15 | from typing import List, Tuple 16 | from subprocess import CalledProcessError, run, check_output 17 | import requests 18 | 19 | def check_world_writable(top_folder: str, dangerous_files_detected: List[str]): 20 | """ 21 | For now, only o+w permissions are considered dangerous.
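    Concretely, a regular file with mode 0666 (-rw-rw-rw-) gets reported while 0644 does
    not, and a directory is only recursed into when "others" hold its execute bit, since
    files inside a non-traversable directory cannot be reached anyway. A rough shell
    equivalent, given here for comparison only (this function does not shell out):

        find / -xdev -type f -perm -0002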
22 | """ 23 | 24 | directories = os.listdir(top_folder) 25 | 26 | # Not optimized since it will run at each recursion, but better for readability 27 | if top_folder == "/": 28 | directories.remove("proc") 29 | directories.remove("sys") 30 | directories.remove(".snapshots") 31 | 32 | for file in directories: 33 | fullpath = os.path.join(top_folder, file) 34 | mode = os.lstat(fullpath).st_mode 35 | 36 | if S_ISDIR(mode) and mode & S_IXOTH: 37 | # It's a traversable directory by "others", recurse into it 38 | # (reporting files in directories not traversable would be a false positive) 39 | check_world_writable(fullpath, dangerous_files_detected) 40 | 41 | elif S_ISREG(mode) and mode & S_IWOTH: 42 | # It's a regular file writable by "others", DANGEROUS 43 | dangerous_files_detected.append(fullpath) 44 | 45 | def check_homes_permission() -> List[str]: 46 | """ 47 | Check that NO permission is given to "others". 48 | """ 49 | 50 | homes_with_wrong_perm = [] 51 | home_folder = "/home" 52 | 53 | for user in os.listdir(home_folder): 54 | fullpath = os.path.join(home_folder, user) 55 | mode = os.stat(fullpath).st_mode 56 | 57 | if mode & S_IRWXO: 58 | homes_with_wrong_perm.append(fullpath) 59 | 60 | if os.stat("/root").st_mode & S_IRWXO: 61 | homes_with_wrong_perm.append("/root") 62 | 63 | return homes_with_wrong_perm 64 | 65 | def check_docker() -> List[str]: 66 | """ 67 | Make sure noone is in the docker group. 68 | Being in the docker group = being root 69 | """ 70 | 71 | docker_users = [] 72 | docker_line = None 73 | 74 | with open("/etc/group") as file: 75 | for line in file: 76 | if line.startswith("docker"): 77 | docker_line = line.strip() 78 | break 79 | 80 | if docker_line: 81 | tokens = docker_line.split(":") 82 | if tokens[-1] != "": 83 | docker_users = tokens[-1].split(",") 84 | 85 | return docker_users 86 | 87 | def check_secure_boot() -> bool: 88 | """ 89 | Returns True if secure boot is enabled, False otherwise. 90 | """ 91 | 92 | sb_file = None 93 | 94 | for file in os.listdir("/sys/firmware/efi/efivars"): 95 | if file.startswith("SecureBoot-"): 96 | sb_file = f"/sys/firmware/efi/efivars/{file}" 97 | 98 | if sb_file is None: 99 | return False 100 | 101 | with open(sb_file, "rb") as stream: 102 | content = stream.read() 103 | return content[4] == 1 104 | 105 | 106 | def vercmp(ver1, ver2) -> int: 107 | """ 108 | Compare versions of pacman packages. 109 | """ 110 | return int(check_output(["/usr/bin/vercmp", ver1, ver2]).decode()) 111 | 112 | def check_local_cves() -> List[Tuple[str, str]]: 113 | """ 114 | Compare installed packages to the list of reported CVEs by ArchLinux. 
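    The feed lists, for each advisory, the affected packages, the affected and fixed
    versions and the related CVE identifiers. An abridged entry has roughly this shape
    (hypothetical values):

        {"packages": ["openssl"], "affected": "3.2.0-1", "fixed": "3.2.1-1", "issues": ["CVE-2024-1234"]}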
115 | """ 116 | 117 | local_cves = [] 118 | cves_to_ignore = [] 119 | 120 | try: 121 | with open("/etc/cve-ignore.list", "r") as file: 122 | for line in file: 123 | cve_to_ignore = line 124 | comment_index = line.find("#") 125 | 126 | if comment_index != -1: 127 | cve_to_ignore = cve_to_ignore[:comment_index] 128 | 129 | cve_to_ignore = cve_to_ignore.strip() 130 | 131 | if cve_to_ignore: 132 | cves_to_ignore.append(cve_to_ignore) 133 | except FileNotFoundError: 134 | pass 135 | 136 | print("CVEs ignore list:", cves_to_ignore) 137 | 138 | output = check_output(["/usr/bin/pacman", "-Qn"]).decode().split("\n") 139 | 140 | installed_packages = {} 141 | for line in output: 142 | if not line: 143 | continue 144 | 145 | tokens = line.split() 146 | 147 | if len(tokens) != 2: 148 | raise ValueError(f"Unexpected entry, please report this bug: {line}") 149 | 150 | installed_packages[tokens[0]] = tokens[1] 151 | 152 | content = requests.get("https://security.archlinux.org/issues/all.json") 153 | 154 | for entry in content.json(): 155 | impacted_packages = entry["packages"] 156 | affected_version = entry["affected"] 157 | fixed_in_version = entry["fixed"] 158 | 159 | cves = entry["issues"] 160 | 161 | for cve in cves_to_ignore: 162 | if cve in cves: 163 | cves.remove(cve) 164 | 165 | if not cves: 166 | continue 167 | 168 | if len(cves) > 5: 169 | cves = cves[:4] + ["and more..."] 170 | 171 | cves = " ".join(cves) 172 | 173 | for package in impacted_packages: 174 | if package not in installed_packages: 175 | continue 176 | 177 | if fixed_in_version: 178 | if vercmp(fixed_in_version, installed_packages[package]) > 0: 179 | local_cves.append((package, cves)) 180 | 181 | else: # fixed_in_version unavailable, we don't know when it was fixed, but we know the affected version(s?) 
182 | if installed_packages[package] == affected_version: 183 | local_cves.append((package, cves)) 184 | 185 | return local_cves 186 | 187 | def report(title: str, body: str, urgency: str = "critical"): 188 | print("NOTIFY", json.dumps({"urgency": urgency, "title": title, "body": body})) 189 | 190 | def main() -> None: 191 | 192 | if os.geteuid() != 0: 193 | print("This script must be run as root.") 194 | sys.exit(1) 195 | 196 | report("AUDIT IN PROGRESS", "The auditor script is looking for security issues...", urgency="normal") 197 | 198 | print("Checking home folders permissions...") 199 | too_permissive_homes_perm = check_homes_permission() 200 | 201 | if too_permissive_homes_perm: 202 | report("DANGEROUS PERMISSIONS DETECTED", 203 | "\nThe following home folders are at risk:\n\n" + 204 | "\n".join(too_permissive_homes_perm)) 205 | 206 | print("Checking docker group...") 207 | people_in_docker_group = check_docker() 208 | 209 | if people_in_docker_group: 210 | report("DANGEROUS GROUP DETECTED", 211 | "\nThe following users are in the docker group, noone should be in that group:\n\n" + 212 | "\n".join(people_in_docker_group)) 213 | 214 | print("Checking secure boot...") 215 | secure_boot_enabled = check_secure_boot() 216 | if not secure_boot_enabled: 217 | report("SECURE BOOT DISABLED", "Make sure to enable it in your BIOS") 218 | 219 | print("Checking local CVEs...") 220 | packages_with_cve = check_local_cves() 221 | 222 | if packages_with_cve: 223 | report("VULNERABLE PACKAGES DETECTED", 224 | "\nThe following packages are vulnerable:\n\n" + 225 | "\n\n".join([f"{p[0]}: {p[1]}" for p in packages_with_cve]) + 226 | "\n\nReview them and take action.\nAdd the CVE to /etc/cve-ignore.list if they are not relevant to you.") 227 | 228 | world_writables_files = [] 229 | print("Checking world writable files...") 230 | check_world_writable("/", world_writables_files) 231 | 232 | if world_writables_files: 233 | report("DANGEROUS PERMISSIONS DETECTED", 234 | "\nAnyone can write to those files:\n\n" + 235 | "\n".join(world_writables_files)) 236 | 237 | sys.exit(0) 238 | 239 | if __name__ == "__main__": 240 | main() 241 | -------------------------------------------------------------------------------- /rootfs/usr/local/bin/backup-to-cloud: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 -u 2 | 3 | """ 4 | You probably want to use Restic instead of this home-made script. 
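It produces a GPG-encrypted tarball of the home folder, named after BACKUP_PREFIX plus
a timestamp, and uploads it over SSH together with a .sha1 marker file. Restoring is
simply the reverse operation, along these lines (file name is illustrative):

    gpg --decrypt backup-home-user-host-2024-01-31--10-15-30--000123.tar.gz.gpg | tar -xzf -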
5 | 6 | See: /usr/local/bin/restic-backup-to-cloud 7 | """ 8 | 9 | import os 10 | import sys 11 | import json 12 | import hashlib 13 | import psutil 14 | import socket 15 | import os.path 16 | from time import sleep 17 | from datetime import datetime 18 | from typing import List 19 | from subprocess import run, CalledProcessError, PIPE, DEVNULL 20 | 21 | # Do not edit, create a ~/.ssh/config entry 22 | SERVER_HOST = "backup-server" 23 | 24 | HOSTNAME = socket.gethostname() 25 | USERNAME = os.environ["USER"] 26 | BACKUP_PREFIX = f"backup-home-{USERNAME}-{HOSTNAME}" 27 | 28 | # Use the ssb key ID from the output of `gpg --list-secret-keys --keyid-format=long` 29 | GPG_RECIPIENT = os.environ["GPG_RECIPIENT"] 30 | 31 | def notify(title: str, body: str, urgency: str = "normal"): 32 | 33 | if urgency == "critical": 34 | icon = "computer-fail" 35 | else: 36 | icon = "synology-cloud-station-backup" 37 | 38 | # See /usr/local/bin/journalctl-notify 39 | print("NOTIFY", json.dumps({"title": title, "body": body, "urgency": urgency, "icon": icon})) 40 | 41 | def cleanup() -> None: 42 | for file in os.listdir(f"/home/{USERNAME}"): 43 | if file.startswith(BACKUP_PREFIX): 44 | os.unlink(f"/home/{USERNAME}/{file}") 45 | print(f"Removed /home/{USERNAME}/{file}") 46 | 47 | def backup_home() -> str: 48 | """ 49 | Compress the user's home folder and return a path to the created archive. 50 | """ 51 | 52 | date = datetime.now() 53 | date_as_str = f"{date.year}-{date.month:0>2}-{date.day:0>2}--{date.hour:0>2}-{date.minute:0>2}-{date.second:0>2}--{date.microsecond:0>6}" 54 | archive_name = f"{BACKUP_PREFIX}-{date_as_str}.tar.gz" 55 | 56 | print(f"Creating /home/{USERNAME}/{archive_name} ...") 57 | 58 | tar_process = run(["/usr/bin/tar", 59 | f"--exclude=home/{USERNAME}/{BACKUP_PREFIX}*", 60 | "--create", 61 | "--gzip", 62 | "--preserve-permissions", 63 | "--warning=no-file-changed", 64 | "--warning=no-file-ignored", 65 | f"--file=home/{USERNAME}/{archive_name}", 66 | f"home/{USERNAME}"], 67 | cwd=f"/", 68 | stderr=PIPE, 69 | stdout=DEVNULL) 70 | 71 | if tar_process.returncode == 2: 72 | raise CalledProcessError(returncode=tar_process.returncode, 73 | cmd=tar_process.args, 74 | output=tar_process.stdout, 75 | stderr=tar_process.stderr) 76 | 77 | return f"/home/{USERNAME}/{archive_name}" 78 | 79 | def encrypt(filepath: str) -> str: 80 | encrypted_filepath = f"{filepath}.gpg" 81 | 82 | print(f"Encrypting {filepath} ...") 83 | 84 | run(["/usr/bin/gpg", 85 | "--recipient", GPG_RECIPIENT, 86 | "--cipher-algo", "AES256", 87 | "--compress-algo", "none", 88 | "--output", encrypted_filepath, 89 | "--encrypt", filepath], 90 | stderr=PIPE, 91 | stdout=DEVNULL, 92 | check=True) 93 | 94 | return encrypted_filepath 95 | 96 | def process_sha1sum(encrypted_filepath: str, write_to_disk=True) -> str: 97 | 98 | print(f"Computing sha1 of {encrypted_filepath} ...") 99 | 100 | with open(encrypted_filepath, "rb") as stream: 101 | sha1sum = hashlib.file_digest(stream, "sha1").hexdigest() 102 | 103 | if write_to_disk: 104 | # Save hash to disk for later use in case the backup process was interrupted 105 | with open(encrypted_filepath + ".sha1", "w") as stream: 106 | stream.write(sha1sum) 107 | 108 | return sha1sum 109 | 110 | def ssh_until_success(command: List[str]) -> None: 111 | while True: 112 | try: 113 | run(command, stderr=PIPE, stdout=DEVNULL, check=True) 114 | break # Command succeeded without exception, break out of the loop 115 | except CalledProcessError as cpe: 116 | # If we get this error, it means the SSH key is not yet in 
the SSH agent, 117 | # most probably because KeePassXC database is still locked. 118 | # The loop will keep going until the key is found in ssh-agent. 119 | if b"Permission denied (publickey)" not in cpe.stderr: 120 | raise 121 | 122 | notify("SYSTEM MAINTENANCE", 123 | "Unable to access remote backup server, make sure the SSH key is in ssh-agent.\n\nIs KeePassXC unlocked ?", 124 | urgency="critical") 125 | 126 | sleep(10) 127 | 128 | def upload(encrypted_filepath: str, sha1sum: str) -> None: 129 | 130 | print(f"Uploading {encrypted_filepath} ...") 131 | encrypted_filename = os.path.basename(encrypted_filepath) 132 | 133 | ssh_until_success(["/usr/bin/ssh", "-oBatchMode=yes", SERVER_HOST, f"mkdir -p {HOSTNAME}"]) 134 | ssh_until_success(["/usr/bin/scp", "-oBatchMode=yes", encrypted_filepath, f"{SERVER_HOST}:~/{HOSTNAME}/"]) 135 | 136 | # The sha1 file marks the end of the backup process, 137 | # this file is expected by the `harden-backup` script which runs server-side 138 | ssh_until_success(["/usr/bin/ssh", "-oBatchMode=yes", SERVER_HOST, f"echo -n {sha1sum} > {HOSTNAME}/{encrypted_filename}.sha1"]) 139 | 140 | def process_not_uploaded_backups() -> None: 141 | """ 142 | Process left over backups, this might happen if the script was unable 143 | to upload the previously made backup. It can happen for various reasons 144 | such as power failure, network failure, etc. 145 | 146 | If sha1sums of leftover backups do not match, they will be removed later on. 147 | """ 148 | 149 | for file in os.listdir(f"/home/{USERNAME}"): 150 | if file.startswith(BACKUP_PREFIX) and file.endswith(".gpg") and os.path.exists(f"/home/{USERNAME}/{file}.sha1"): 151 | 152 | with open(f"/home/{USERNAME}/{file}.sha1", "r") as stream: 153 | existing_sha1 = stream.read().strip() 154 | 155 | actual_sha1sum = process_sha1sum(f"/home/{USERNAME}/{file}", write_to_disk=False) 156 | 157 | if existing_sha1 != actual_sha1sum: 158 | print("Files sha1sum don't match, ignoring...") 159 | return 160 | 161 | upload(f"/home/{USERNAME}/{file}", actual_sha1sum) 162 | 163 | def main() -> None: 164 | 165 | if GPG_RECIPIENT is None: 166 | print("Environment variable GPG_RECIPIENT not set.") 167 | print("Use the ID from your subkey (ssb) in the output of 'gpg --list-secret-keys --keyid-format long'") 168 | sys.exit(1) 169 | 170 | notify("SYSTEM MAINTENANCE", "Backup of the home folder in progress...") 171 | 172 | # We are not in a hurry, let's give a low 'nice' priority to the backup process 173 | # so that we don't use too much system resources. 174 | # Note: nice and ionice properties are inherited by subprocesses. 
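    # For reference, this is roughly what you would get by launching the script with
    # `nice -n 10 ionice -c 3 backup-to-cloud`: niceness 10 lowers the CPU priority
    # and I/O class 3 is the "idle" scheduling class.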
175 | this_script = psutil.Process(os.getpid()) 176 | this_script.nice(10) 177 | this_script.ionice(psutil.IOPRIO_CLASS_IDLE) 178 | 179 | process_not_uploaded_backups() 180 | cleanup() 181 | 182 | try: 183 | archive_path = backup_home() 184 | encrypted_archive_path = encrypt(archive_path) 185 | sha1sum = process_sha1sum(encrypted_archive_path) 186 | upload(encrypted_archive_path, sha1sum) 187 | cleanup() 188 | notify("SYSTEM MAINTENANCE", "Backup of the home folder successful !") 189 | return_code = 0 190 | except CalledProcessError as cpe: 191 | error_str = cpe.stderr.decode() 192 | notify("SYSTEM MAINTENANCE", f"Backup of the home folder failed with:\n\n" + error_str, urgency="critical") 193 | return_code = 1 194 | except Exception as exc: 195 | notify("SYSTEM MAINTENANCE", f"Backup of the home folder failed with:\n\n" + str(exc), urgency="critical") 196 | return_code = 1 197 | 198 | sys.exit(return_code) 199 | 200 | if __name__ == "__main__": 201 | main() 202 | -------------------------------------------------------------------------------- /rootfs/usr/local/bin/firefox: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 2 | 3 | import os 4 | import os.path 5 | import re 6 | import sys 7 | from configparser import ConfigParser 8 | from io import TextIOWrapper 9 | from typing import Dict, List, Optional, Tuple 10 | 11 | import requests 12 | 13 | FIREFOX_DIR = os.environ["HOME"] + "/.mozilla/firefox" 14 | USER_JS_OVERRIDES_PATH = f"{FIREFOX_DIR}/user-overrides.js" 15 | 16 | USER_JS_URL = "https://raw.githubusercontent.com/arkenfox/user.js/master/user.js" 17 | USER_PREF_REGEX = r'user_pref\("(?P<pref_key>.+)",\s*"?(?P<pref_value>[^"\)]+)' 18 | 19 | 20 | def get_profiles() -> List[Tuple[str, str]]: 21 | profiles = [] 22 | 23 | config = ConfigParser() 24 | config.read(FIREFOX_DIR + "/profiles.ini") 25 | 26 | for section in config.sections(): 27 | if section.startswith("Profile"): 28 | name = config[section]["Name"] 29 | path = config[section]["Path"] 30 | profiles.append((name, path)) 31 | 32 | return profiles 33 | 34 | 35 | def install(profile_path: str, new_user_js_content: str) -> None: 36 | user_js = f"{profile_path}/user.js" 37 | 38 | with open(user_js, "w") as stream: 39 | stream.write(new_user_js_content) 40 | 41 | with open(USER_JS_OVERRIDES_PATH) as stream_override: 42 | stream.write("\n\n// ------ OVERRIDES START HERE ------\n\n") 43 | stream.write(stream_override.read()) 44 | 45 | 46 | def clean(profile_path: str) -> None: 47 | """Remove all entries from prefs.js that are in user.js regardless of whether they are active or not. 48 | They will be set back from user.js next time Firefox starts. 49 | 50 | Firefox must be closed for this to work because prefs.js is overwritten on exit.
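    For example, if user.js pins privacy.resistFingerprinting, any
    user_pref("privacy.resistFingerprinting", ...) line found in prefs.js is dropped here
    and re-applied from user.js on the next start (pref name chosen purely for illustration).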
51 | """ 52 | user_js = f"{profile_path}/user.js" 53 | prefs_js = f"{profile_path}/prefs.js" 54 | 55 | # prefs.js doesnt exist, it might be an unused profile 56 | if not os.path.exists(prefs_js): 57 | return 58 | 59 | prefs_to_remove = [] 60 | 61 | with open(user_js) as stream: 62 | for line in stream: 63 | match = re.search(USER_PREF_REGEX, line) 64 | 65 | if match: 66 | prefs_to_remove.append(match.group("pref_key")) 67 | 68 | with open(prefs_js) as stream: 69 | prefs_js_content = stream.read() 70 | 71 | with open(prefs_js, "w") as stream: 72 | for line in prefs_js_content.split("\n"): 73 | match = re.match(USER_PREF_REGEX, line) 74 | 75 | if match and match.group("pref_key") not in prefs_to_remove: 76 | stream.write(f"{line}\n") 77 | 78 | 79 | def _find_proxy_in_stream( 80 | stream: TextIOWrapper, 81 | scheme: str, 82 | ) -> Tuple[Optional[str], Optional[str]]: 83 | proxy_host = None 84 | proxy_port = None 85 | 86 | for line in stream: 87 | match = re.search(USER_PREF_REGEX, line) 88 | 89 | if match: 90 | if match.group("pref_key") == f"network.proxy.{scheme}": 91 | proxy_host = match.group("pref_value") 92 | elif match.group("pref_key") == f"network.proxy.{scheme}_port": 93 | proxy_port = match.group("pref_value") 94 | 95 | if proxy_host and proxy_port: 96 | break 97 | 98 | return proxy_host, proxy_port 99 | 100 | 101 | def find_proxies() -> Dict[str, str]: 102 | proxies = {} 103 | 104 | for scheme in ("http", "ssl"): 105 | with open(USER_JS_OVERRIDES_PATH) as stream: 106 | proxy_host, proxy_port = _find_proxy_in_stream(stream, scheme) 107 | 108 | if proxy_host and proxy_port: 109 | proxy_str = f"http://{proxy_host}:{proxy_port}" 110 | 111 | if scheme not in proxies: 112 | proxies[scheme] = proxy_str 113 | elif proxies[scheme] != proxy_str: 114 | # FIXME: for now we are expecting that the HTTPS proxy points to the HTTP one. 115 | # "network.proxy.share_proxy_settings" should probably be used instead. 
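                # In other words both network.proxy.http/http_port and
                # network.proxy.ssl/ssl_port in user-overrides.js are expected to point at
                # the same endpoint, e.g. (placeholder values) "127.0.0.1" + 8080 for both,
                # i.e. http://127.0.0.1:8080.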
116 | error = "for now we are expecting HTTP(S) proxies to be the same" 117 | raise NotImplementedError(error) 118 | 119 | return proxies 120 | 121 | 122 | def main() -> None: 123 | profiles = get_profiles() 124 | new_user_js_content = None 125 | 126 | for proxy in ( 127 | find_proxies() | {"": ""} 128 | ).values(): # Add empty strings for no proxy at all 129 | try: 130 | print(f"Downloading {USER_JS_URL}") 131 | new_user_js_content = requests.get( 132 | USER_JS_URL, 133 | timeout=0.5, 134 | proxies={"http": proxy, "https": proxy}, 135 | ).text 136 | break 137 | except requests.exceptions.RequestException: 138 | new_user_js_content = None 139 | 140 | if new_user_js_content: 141 | for profile_name, profile_path in profiles: 142 | profile_full_path = f"{FIREFOX_DIR}/{profile_path}" 143 | print(f"Processing profile: {profile_name}") 144 | install(profile_full_path, new_user_js_content) 145 | clean(profile_full_path) 146 | 147 | else: 148 | print("Internet seems unreachable, but it's ok, let's start Firefox anyway") 149 | 150 | # Run the real Firefox 151 | if os.path.exists("/usr/bin/firejail"): 152 | os.execv("/usr/bin/firejail", ["firejail", "/usr/bin/firefox"] + sys.argv[1:]) 153 | else: 154 | os.execv("/usr/bin/firefox", ["firefox"] + sys.argv[1:]) 155 | 156 | 157 | if __name__ == "__main__": 158 | main() 159 | -------------------------------------------------------------------------------- /rootfs/usr/local/bin/journalctl-notify: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 -u 2 | 3 | import asyncio 4 | import contextlib 5 | import json 6 | import sys 7 | from asyncio.subprocess import DEVNULL, PIPE, Process 8 | 9 | # Sleeping a little bit is very important to prevent reaching 10 | # the TasksMax limit of systemd. 11 | # See: https://unix.stackexchange.com/questions/253903/creating-threads-fails-with-resource-temporarily-unavailable-with-4-3-kernel 12 | # It can happen that you have many logs to report which could cause this script to 13 | # spawn many notify-send processes and therefore reaching the TasksMax limit. 14 | # It will basically crash. 
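# If you suspect you are hitting that limit, the effective value can be checked with
# something like (the unit name depends on your UID):
#   systemctl show --property=TasksMax user@1000.service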
15 | SLEEP_BETWEEN_NOTIFY = 0.1 16 | 17 | 18 | async def is_running(proc: Process) -> bool: 19 | with contextlib.suppress(asyncio.TimeoutError): 20 | await asyncio.wait_for(proc.wait(), 1e-6) 21 | return proc.returncode is None 22 | 23 | 24 | async def send_notify( 25 | title: str, 26 | body: str, 27 | urgency: str, 28 | icon: str | None = None, 29 | ) -> None: 30 | params = ["--urgency", urgency, title, body] 31 | 32 | if icon: 33 | params += ["--icon", icon] 34 | 35 | await asyncio.create_subprocess_exec("/usr/bin/notify-send", *params) 36 | await asyncio.sleep(SLEEP_BETWEEN_NOTIFY) 37 | 38 | 39 | async def try_process_notify(line: bytes) -> None: 40 | begin = line.find(b"NOTIFY") 41 | 42 | # Early return for a line which is not our responsibility to parse 43 | if begin == -1: 44 | return 45 | 46 | log = line[begin + len("NOTIFY") :].strip().decode() 47 | 48 | try: 49 | log = json.loads(log) 50 | except json.JSONDecodeError: 51 | print("Wrong log format:", log) 52 | return 53 | 54 | if len(log) == 3: 55 | urgency = log["urgency"] 56 | title = log["title"] 57 | body = log["body"] 58 | await send_notify(title, body, urgency) 59 | 60 | elif len(log) == 4: 61 | urgency = log["urgency"] 62 | icon = log["icon"] 63 | title = log["title"] 64 | body = log["body"] 65 | await send_notify(title, body, urgency, icon) 66 | 67 | else: 68 | print("Wrong log format:", log) 69 | return 70 | 71 | 72 | async def try_process_firewall_rejection(line_as_bytes: bytes) -> None: 73 | # Early return for a line which is not our responsibility to parse 74 | if line_as_bytes.find(b"FIREWALL REJECTED") == -1: 75 | return 76 | 77 | begin = line_as_bytes.find(b":") 78 | line = line_as_bytes[begin + 1 :].decode().strip() 79 | tokens = line.split() 80 | content: dict[str, str] = {} 81 | 82 | for token in tokens: 83 | key_value = token.split("=") 84 | 85 | if len(key_value) == 1: 86 | content[key_value[0]] = "" 87 | elif len(key_value) == 2: 88 | content[key_value[0]] = key_value[1] 89 | else: 90 | error = f"Unable to parse: {key_value} from the following log:\n{line}" 91 | raise ValueError(error) 92 | 93 | # Process only TCP and UDP 94 | if "PROTO" not in content or content["PROTO"] not in ("TCP", "UDP"): 95 | return 96 | 97 | # Don't process rejected broadcast packets (too much spam) 98 | if "MACDST" in content and content["MACDST"] == "ff:ff:ff:ff:ff:ff": 99 | return 100 | 101 | # Don't process rejected multicast packets 102 | if "DST" in content and content["DST"].startswith("224.0.0."): 103 | return 104 | 105 | try: 106 | src_addr = content["SRC"] 107 | src_port = content["SPT"] 108 | dst_addr = content["DST"] 109 | dst_port = content["DPT"] 110 | proto = content["PROTO"] 111 | iface_in = content["IN"] 112 | iface_out = content["OUT"] 113 | except KeyError as ke: 114 | error = f"{ke} is missing from the following log:\n{line}" 115 | raise ValueError(error) from ke 116 | 117 | body = f"{src_addr} tried to reach {dst_addr}:{dst_port} ({proto})\n" 118 | 119 | if "REJECTED FORWARD" in line: 120 | chain = "FORWARD" 121 | body += f"From interface {iface_in} to {iface_out}" 122 | elif "REJECTED INPUT" in line: 123 | chain = "INPUT" 124 | body += f"To interface {iface_in}" 125 | elif "REJECTED OUTPUT" in line: 126 | # Don't send notification for blocked output (too much spam) 127 | return 128 | else: 129 | error = f"unexpected rejected token from the following log:\n{line}" 130 | raise ValueError(error) 131 | 132 | await send_notify(f"FIREWALL {chain} DENIED", body, "critical") 133 | 134 | 135 | async def 
try_process_systemd_service_failure(line_as_bytes: bytes) -> None: 136 | line_begin = line_as_bytes.find(b"systemd[") 137 | 138 | # Early return for a line which is not our responsibility to parse 139 | if line_begin == -1: 140 | return 141 | 142 | # Skip date and hostname + remove commas 143 | line_as_bytes = line_as_bytes[line_begin:].replace(b",", b" ") 144 | 145 | first_colon_index = line_as_bytes.find(b":") 146 | 147 | if first_colon_index == -1: 148 | return 149 | 150 | second_colon_index = line_as_bytes.find(b":", first_colon_index + 1) 151 | 152 | if second_colon_index == -1: 153 | return 154 | 155 | service_name = line_as_bytes[first_colon_index + 1 : second_colon_index].strip() 156 | service_name = service_name.decode() 157 | tokens = line_as_bytes[second_colon_index + 1 :].decode().strip().split() 158 | variables = {} 159 | 160 | for token in tokens: 161 | if "=" in token: 162 | key, value = token.split("=") 163 | variables[key] = value 164 | 165 | # See man systemd.exec if you want to support more. 166 | if ("code=exited" in tokens and "status=0/SUCCESS" not in tokens) or ( 167 | "code=dumped" in tokens 168 | ): 169 | await send_notify( 170 | "SYSTEMD SERVICE ISSUE", 171 | f"{service_name} {variables['code']} with status {variables['status']}", 172 | "critical", 173 | ) 174 | 175 | 176 | async def main() -> None: 177 | journal_system = await asyncio.create_subprocess_exec( 178 | "journalctl", 179 | "--boot", 180 | "--system", 181 | "--follow", 182 | "--lines=all", 183 | stdout=PIPE, 184 | stderr=DEVNULL, 185 | ) 186 | 187 | journal_user = await asyncio.create_subprocess_exec( 188 | "journalctl", 189 | "--boot", 190 | "--user", 191 | "--follow", 192 | "--lines=all", 193 | stdout=PIPE, 194 | stderr=DEVNULL, 195 | ) 196 | 197 | while await is_running(journal_system) and await is_running(journal_user): 198 | for journal in (journal_system, journal_user): 199 | while ( 200 | True 201 | ): # Try to read as many lines as possible while there are some available 202 | try: 203 | line = await asyncio.wait_for( 204 | journal.stdout.readline(), 205 | timeout=1, 206 | ) # The lower the timeout, the higher the CPU usage 207 | except asyncio.TimeoutError: 208 | # No new line available yet, try the next journal by breaking out 209 | break 210 | 211 | try: 212 | await try_process_notify(line) 213 | await try_process_firewall_rejection(line) 214 | await try_process_systemd_service_failure(line) 215 | except Exception as exc: # noqa: BLE001 216 | await send_notify( 217 | "JOURNALCTL NOTIFIER PARSER ERROR", 218 | str(exc), 219 | "critical", 220 | ) 221 | 222 | # This script should never exit; if it does, something went wrong, so exit with status 1 223 | print("Error: one of the subprocesses exited") 224 | 225 | await send_notify( 226 | "JOURNALCTL NOTIFIER STOPPED", 227 | "One of the journalctl subprocesses exited, that should not happen", 228 | "critical", 229 | ) 230 | sys.exit(1) 231 | 232 | 233 | if __name__ == "__main__": 234 | asyncio.run(main()) 235 | -------------------------------------------------------------------------------- /rootfs/usr/local/bin/lock: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | """Lock script that doesn't trigger if you're connected to your home network. 4 | 5 | On lock it also sends a dbus message to KeePassXC to lock the database 6 | (so nothing stays in memory). 7 | 8 | You can use --force if you want to lock even if you're on your home network.
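A typical way to wire this script up would be a sway binding plus an idle timeout,
for example (key and timeout are arbitrary):

    bindsym $mod+l exec /usr/local/bin/lock --force
    exec swayidle -w timeout 300 '/usr/local/bin/lock'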
9 | """ 10 | 11 | import os 12 | import sys 13 | from subprocess import run 14 | 15 | WALLPAPER = os.environ["HOME"] + "/.local/share/wallpaper.jpg" 16 | 17 | # Get your router mac address using "ip neighbor" 18 | HOME_NETWORK_MAC = "50:6f:0c:73:22:b0" # CHANGEME 19 | 20 | 21 | def lock_keepassxc() -> None: 22 | run( 23 | [ 24 | "/usr/bin/dbus-send", 25 | "--print-reply", 26 | "--dest=org.keepassxc.KeePassXC.MainWindow", 27 | "/keepassxc", 28 | "org.keepassxc.KeePassXC.MainWindow.lockAllDatabases", 29 | ], 30 | check=False, # so that it doesn't error if KeePassXC is not running 31 | capture_output=True, 32 | ) 33 | 34 | 35 | def lock_desktop() -> None: 36 | os.execv( 37 | "/usr/bin/swaylock", 38 | [ 39 | "swaylock", 40 | "--ignore-empty-password", 41 | "--show-failed-attempts", 42 | "--image", 43 | WALLPAPER, 44 | ], 45 | ) 46 | 47 | 48 | def is_connected_to_home_network() -> bool: 49 | proc = run( 50 | ["/usr/bin/ip", "neighbor"], 51 | check=True, 52 | capture_output=True, 53 | ) 54 | 55 | return HOME_NETWORK_MAC in proc.stdout.decode() 56 | 57 | 58 | def main() -> None: 59 | force_lock = sys.argv[1] == "--force" if len(sys.argv) == 2 else False 60 | 61 | if force_lock or not is_connected_to_home_network(): 62 | lock_keepassxc() 63 | lock_desktop() 64 | 65 | 66 | if __name__ == "__main__": 67 | main() 68 | -------------------------------------------------------------------------------- /rootfs/usr/local/bin/notify-disk-error: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # This script is run by auditd in order to warn the 4 | # admin that a disk failure has been detected. 5 | # See /etc/audit/auditd.conf 6 | 7 | # See /usr/local/bin/journalctl-notify for log format 8 | echo 'NOTIFY {"urgency": "critical", "title": "DISK FAILURE", "body": "Detected by auditd, please investigate"}' 9 | -------------------------------------------------------------------------------- /rootfs/usr/local/bin/notify-low-on-disk-space: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # This script is run by auditd in order to warn the 4 | # admin that the system is getting low on disk space 5 | # See /etc/audit/auditd.conf 6 | 7 | LOCK_DIR=/tmp/.notify-low-on-disk-space.lock 8 | 9 | # Creating a directory is an atomic operation, 10 | # this is why we use it as a mutex to prevent 11 | # multiple instances of this script to run. 12 | if ! mkdir -- "$LOCK_DIR" 13 | then 14 | exit 0 15 | fi 16 | 17 | sync 18 | space_left="$(btrfs filesystem usage / | grep df | awk '{ print $4 }')" 19 | 20 | echo "NOTIFY {\"urgency\": \"critical\", \"title\": \"LOW ON DISK SPACE\", \"body\": \"Space left on device: $space_left\"}" 21 | 22 | # Tell auditd to try to resume logging 23 | kill -SIGUSR2 $PPID 24 | 25 | # We wait a bit for auditd to resume because 26 | # if it decides to rerun this script immediatly 27 | # we are still holding the lock causing the new 28 | # instance to exit. 
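# (The auditd side of this is configured in /etc/audit/auditd.conf, typically with
# something like "space_left_action = exec /usr/local/bin/notify-low-on-disk-space";
# the threshold itself comes from the space_left setting.)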
29 | sleep 1s 30 | rmdir -- "$LOCK_DIR" 31 | -------------------------------------------------------------------------------- /rootfs/usr/local/bin/pacman: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # If not trying to install packages, don't show warning and directly exec pacman 4 | # Exception made for COMING_FROM_YAY which is just here to prevent warning duplicate 5 | if [ "$COMING_FROM_YAY" = "1" ] || [ "$#" -lt 2 ] || [[ "$1" != -S* ]] || [[ "$1" = -Ss* ]]; then 6 | exec /usr/local/bin/proxify /usr/bin/pacman "$@" 7 | exit 1 8 | fi 9 | 10 | # To differentiate between a residual db.lock and a systemd 11 | # service running pacman in the background 12 | if pgrep -f /usr/bin/pacman &>/dev/null; then 13 | echo 14 | echo -e "\e[31mWAIT FOR PACMAN TO COMPLETE\e[0m" 15 | echo 16 | echo "Refusing to run because pacman is already running." 17 | exit 1 18 | fi 19 | 20 | if /usr/bin/pacman -Qu &>/dev/null; then 21 | echo 22 | echo -e "\e[31mUPGRADE YOUR SYSTEM FIRST\e[0m" 23 | echo 24 | echo "Refusing to run because a system upgrade is available." 25 | echo "Installing a new package could lead to partial upgrade and break your system." 26 | echo "Run 'pacman -Syu' first." 27 | exit 1 28 | fi 29 | 30 | echo 31 | echo -e "\e[31mINSTALLING PACKAGES IS DISCOURAGED\e[0m" 32 | echo 33 | echo "Please consider using Docker containers instead to keep your system clean." 34 | 35 | read -p "Are you sure you want to install those packages ? [y/N] " -r answer 36 | 37 | if [ "$answer" != "${answer#[Yy]}" ]; then 38 | echo 39 | /usr/local/bin/proxify /usr/bin/pacman "$@" && echo -e "\n\e[31mDon't forget to update your Arch install script\e[0m" 40 | else 41 | echo 42 | echo "Good choice ;)" 43 | fi 44 | -------------------------------------------------------------------------------- /rootfs/usr/local/bin/pacman-notify: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 -u 2 | 3 | import json 4 | import os 5 | import sys 6 | from subprocess import run 7 | 8 | 9 | def notify(title: str, body: str, icon: str | None = None) -> None: 10 | content = { 11 | "title": title, 12 | "body": body, 13 | "urgency": "normal", 14 | } 15 | 16 | if icon: 17 | content["icon"] = icon 18 | 19 | # See /usr/local/bin/journalctl-notify 20 | print("NOTIFY", json.dumps(content)) 21 | 22 | 23 | def check_residual_files() -> None: 24 | files_to_review: list[str] = [] 25 | for root, _, files in os.walk("/etc"): 26 | for file in files: 27 | if file.endswith((".pacnew", ".pacsave")): 28 | files_to_review.append(f"{root}/{file}") 29 | 30 | if files_to_review: 31 | notify( 32 | "PACMAN REVIEW REQUIRED", 33 | "The following files require your attention:\n\n" 34 | + "\n".join(files_to_review), 35 | ) 36 | 37 | 38 | def check_update_available() -> None: 39 | proc = run(["/usr/bin/pacman", "-Qu"]) 40 | 41 | if proc.returncode == 0: 42 | notify( 43 | "SYSTEM UPDATE AVAILABLE", 44 | "Please consider upgrading it", 45 | icon="mintbackup", 46 | ) 47 | elif proc.returncode == 1: 48 | "No update available" 49 | else: 50 | # Something went wrong 51 | sys.exit(1) 52 | 53 | 54 | def main() -> None: 55 | check_update_available() 56 | check_residual_files() 57 | 58 | 59 | if __name__ == "__main__": 60 | main() 61 | -------------------------------------------------------------------------------- /rootfs/usr/local/bin/proxify: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 
proxy="http://127.0.0.1:8080" 3 | 4 | echo "Running $1 through local proxy $proxy" 5 | 6 | export HTTP{S,}_PROXY="$proxy" 7 | export http{s,}_proxy="$proxy" 8 | 9 | export RSYNC_PROXY="$proxy" 10 | export rsync_proxy="$proxy" 11 | 12 | export FTP_PROXY="$proxy" 13 | export ftp_proxy="$proxy" 14 | 15 | export {NO_PROXY,no_proxy}="localhost,127.0.0.1,localaddress,.localdomain.com" 16 | 17 | exec "$@" 18 | -------------------------------------------------------------------------------- /rootfs/usr/local/bin/restic-backup-to-cloud: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 -u 2 | 3 | import os 4 | import sys 5 | import json 6 | import psutil 7 | import os.path 8 | from subprocess import run, CalledProcessError, PIPE, DEVNULL 9 | 10 | # Do not edit, create a ~/.ssh/config entry 11 | SERVER_HOST = "backup-server" 12 | 13 | HOME = os.getenv("HOME") 14 | 15 | # Set these in restic-unattended.service 16 | BACKUP_USER = os.getenv("BACKUP_USER") 17 | RESTIC_REPO_NAME = os.getenv("RESTIC_REPO_NAME") 18 | 19 | def notify(title: str, body: str, urgency: str = "normal"): 20 | 21 | if urgency == "critical": 22 | icon = "computer-fail" 23 | else: 24 | icon = "synology-cloud-station-backup" 25 | 26 | print("NOTIFY", json.dumps({"title": title, "body": body, "urgency": urgency, "icon": icon})) 27 | 28 | def main() -> None: 29 | 30 | notify("SYSTEM MAINTENANCE", "Incremental backup of the home folder in progress...") 31 | 32 | # We are not in a hurry, let's give a low 'nice' priority to the backup process 33 | # so that we don't use too much system resources. 34 | # Note: nice and ionice properties are inherited by subprocesses. 35 | this_script = psutil.Process(os.getpid()) 36 | this_script.nice(10) 37 | this_script.ionice(psutil.IOPRIO_CLASS_IDLE) 38 | 39 | if not HOME: 40 | notify("SYSTEM MAINTENANCE", f"Incremental backup of the home folder failed: HOME variable is not set, wtf ?", urgency="critical") 41 | sys.exit(1) 42 | 43 | if not BACKUP_USER: 44 | notify("SYSTEM MAINTENANCE", f"Incremental backup of the home folder failed: BACKUP_USER variable is not set", urgency="critical") 45 | sys.exit(1) 46 | 47 | if not RESTIC_REPO_NAME: 48 | notify("SYSTEM MAINTENANCE", f"Incremental backup of the home folder failed: RESTIC_REPO_NAME variable is not set", urgency="critical") 49 | sys.exit(1) 50 | 51 | try: 52 | run(["/usr/bin/restic", "-r", 53 | f"sftp:{BACKUP_USER}@{SERVER_HOST}:{RESTIC_REPO_NAME}", 54 | "backup", "--no-scan", "--one-file-system", 55 | "--exclude", f"{HOME}/.cache/*", 56 | HOME], 57 | stderr=PIPE, 58 | stdout=DEVNULL, 59 | check=True) 60 | 61 | notify("SYSTEM MAINTENANCE", "Incremental backup of the home folder successful !") 62 | return_code = 0 63 | 64 | except CalledProcessError as cpe: 65 | error_str = cpe.stderr.decode() 66 | notify("SYSTEM MAINTENANCE", f"Incremental backup of the home folder failed with:\n\n" + error_str, urgency="critical") 67 | return_code = 1 68 | except Exception as exc: 69 | notify("SYSTEM MAINTENANCE", f"Incremental backup of the home folder failed with:\n\n" + str(exc), urgency="critical") 70 | return_code = 1 71 | 72 | sys.exit(return_code) 73 | 74 | if __name__ == "__main__": 75 | main() 76 | -------------------------------------------------------------------------------- /rootfs/usr/local/bin/should-reboot-check: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 -u 2 | 3 | import os 4 | import re 5 | import json 6 | from typing import Set, Tuple 
7 | from subprocess import check_output 8 | from difflib import SequenceMatcher 9 | 10 | def notify(title: str, body: str) -> None: 11 | # See /usr/local/bin/journalctl-notify 12 | print("NOTIFY", json.dumps({"urgency": "critical", "title": title, "body": body, "icon": "system-reboot"})) 13 | 14 | def closeness(str1: str, str2: str) -> float: 15 | return SequenceMatcher(None, str1, str2).ratio() 16 | 17 | def check_kernel() -> Tuple[str, str]: 18 | """ 19 | Returns a tuple where the first item is the kernel version currently running and the 20 | second one is the installed kernel that looks the closest to the one running. 21 | """ 22 | running_kernel = check_output(["/usr/bin/uname", "-r"]).decode().strip() 23 | 24 | installed_kernels = [file for file in os.listdir("/boot") if file.startswith("vmlinu")] 25 | kernels_versions = {} 26 | 27 | for kernel_filename in installed_kernels: 28 | file_output = check_output(["/usr/bin/file", f"/boot/{kernel_filename}"]).decode().strip() 29 | 30 | # Linux kernel x86 boot executable bzImage, version 6.4.10-hardened1-1-hardened (linux-hardened@archlinux) 31 | match = re.search(r"version\s+(?P<version>.*?)\s+", file_output) 32 | 33 | if match: 34 | kernels_versions[kernel_filename] = match.group("version") 35 | 36 | # Find the installed kernel that is the closest to the one running 37 | closest_kernel = max(kernels_versions, 38 | key=lambda k: closeness(kernels_versions[k], running_kernel)) 39 | 40 | return running_kernel, kernels_versions[closest_kernel] 41 | 42 | def check_libraries() -> Set[str]: 43 | """ 44 | Returns a set of programs using outdated libraries that are still 45 | in memory but have been updated on disk. 46 | """ 47 | programs_using_outdated_libraries = set() 48 | lsof_output = check_output(["/usr/bin/lsof", "+c", "0"]).decode() 49 | 50 | for line in lsof_output.splitlines(): 51 | if re.search(r"DEL.*?lib", line): 52 | program = line.split(maxsplit=1)[0] 53 | programs_using_outdated_libraries.add(program) 54 | 55 | return programs_using_outdated_libraries 56 | 57 | def main(): 58 | running_kernel, installed_kernel = check_kernel() 59 | programs_using_outdated_libraries = check_libraries() 60 | 61 | notif_body = "" 62 | 63 | if running_kernel != installed_kernel: 64 | notif_body += "Kernel has been updated, a reboot is required !\n\n" 65 | notif_body += f"Running kernel:\n{running_kernel}\n\n" 66 | notif_body += f"Installed kernel:\n{installed_kernel}" 67 | 68 | if programs_using_outdated_libraries: 69 | notif_body += "\n\n" 70 | 71 | if programs_using_outdated_libraries: 72 | notif_body += "The following programs are using outdated libraries, " \ 73 | "you should restart them or reboot your system:\n\n" 74 | 75 | notif_body += "\n".join(sorted(programs_using_outdated_libraries)) 76 | 77 | if notif_body: 78 | notify("CONSIDER REBOOTING", "\n" + notif_body) 79 | 80 | if __name__ == "__main__": 81 | main() 82 | -------------------------------------------------------------------------------- /rootfs/usr/local/bin/signal-desktop: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # As I'm writing this, signal-desktop doesn't automatically source ~/.config/electron-flags.conf 3 | # But we need it in order to pass Wayland-related flags. 4 | # This wrapper script can probably be removed at some point, once the maintainer of signal-desktop fixes this.
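# A minimal ~/.config/electron-flags.conf for native Wayland would contain something
# like the following (exact flags depend on the Electron version):
#   --enable-features=WaylandWindowDecorations
#   --ozone-platform-hint=auto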
5 | # See: https://wiki.archlinux.org/title/wayland#Electron 6 | exec /usr/local/bin/proxify /usr/bin/signal-desktop $(cat "$XDG_CONFIG_HOME/electron-flags.conf") 7 | -------------------------------------------------------------------------------- /rootfs/usr/local/bin/try-to-fix-low-on-disk-space: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # This script is run as a last resort by auditd in 4 | # order to try to fix a very low disk space available 5 | # See /etc/audit/auditd.conf 6 | 7 | LOCK_DIR=/tmp/.try-to-fix-low-on-disk-space.lock 8 | 9 | # Creating a directory is an atomic operation, 10 | # this is why we use it as a mutex to prevent 11 | # multiple instances of this script to run. 12 | if ! mkdir -- "$LOCK_DIR"; then 13 | exit 0 14 | fi 15 | 16 | space_left="$(btrfs filesystem usage / | grep df | awk '{ print $4 }')" 17 | echo "NOTIFY {\"urgency\": \"critical\", \"title\": \"CRITICALLY LOW ON DISK SPACE\", \"body\": \"Starting to take action...\\nSpace left on device: $space_left\"}" 18 | 19 | # Agressively cleaning pacman cache 20 | pacman --noconfirm -Scc 21 | 22 | # Clean cache from users' home 23 | rm -rf /home/*/.cache/yay 24 | rm -rf /home/*/.cache/mozilla/firefox 25 | rm -rf /home/*/.cache/chromium 26 | rm -rf /home/*/.cache/pip 27 | rm -rf /home/*/.cache/go-build 28 | rm -rf /home/*/.cache/debuginfod_client 29 | 30 | # Balance btrfs 31 | btrfs balance start -dusage=0 / 32 | btrfs balance start -dusage=50 / 33 | btrfs subvolume sync / 34 | sync 35 | 36 | # Inform the admin what space is left after the cleanup 37 | space_left="$(btrfs filesystem usage / | grep df | awk '{ print $4 }')" 38 | echo "NOTIFY {\"urgency\": \"normal\", \"title\": \"CRITICALLY LOW ON DISK SPACE\", \"body\": \"Necessary actions have been taken\\nSpace left on device: $space_left\"}" 39 | 40 | # Tell auditd to try to resume logging 41 | kill -SIGUSR2 $PPID 42 | 43 | # We wait a bit for auditd to resume because 44 | # if it decides to rerun this script immediatly 45 | # we are still holding the lock causing the new 46 | # instance to exit. 
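# (For reference on the balance step above: "-dusage=0" reclaims completely empty data
# chunks and "-dusage=50" rewrites data chunks that are less than 50% full, handing
# allocated-but-underused space back to the pool; progress can be checked with
# "btrfs balance status /".)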
47 | sleep 1s 48 | rmdir -- "$LOCK_DIR" 49 | -------------------------------------------------------------------------------- /rootfs/usr/local/bin/usb-auto-mount: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 -u 2 | 3 | import os 4 | import sys 5 | import json 6 | from subprocess import DEVNULL, PIPE, CalledProcessError, run 7 | from typing import Dict, Optional, Set 8 | 9 | def notify(body: str, urgency: str = "normal") -> None: 10 | 11 | if urgency == "critical": 12 | icon = "computer-fail" 13 | else: 14 | icon = "drive-removable-media-usb" 15 | 16 | # See /usr/local/bin/journalctl-notify 17 | print("NOTIFY", json.dumps({"title": "USB DEVICE MANAGER", "body": body, "urgency": urgency, "icon": icon})) 18 | 19 | def mount_encrypted(encrypted_part_uuid: str, luks_key_path: str) -> None: 20 | try: 21 | run(["/usr/bin/cryptsetup", 22 | "open", 23 | f"/dev/disk/by-uuid/{encrypted_part_uuid}", 24 | encrypted_part_uuid, 25 | "--key-file", luks_key_path], 26 | stdout=DEVNULL, stderr=PIPE, check=True) 27 | except CalledProcessError as cpe: 28 | error_str = cpe.stderr.decode() 29 | 30 | # Allow already exists error 31 | if "already exists" not in error_str: 32 | notify("Command 'cryptsetup open' failed with:\n\n" + error_str, urgency="critical") 33 | raise 34 | 35 | try: 36 | run(["/usr/bin/mount", 37 | "--mkdir", 38 | f"/dev/mapper/{encrypted_part_uuid}"], 39 | stdout=DEVNULL, stderr=PIPE, check=True) 40 | except CalledProcessError as cpe: 41 | error_str = cpe.stderr.decode() 42 | notify(f"Command 'mount /dev/mapper/{encrypted_part_uuid}' failed with:\n\n" + error_str, urgency="critical") 43 | raise 44 | 45 | def mount_not_encrypted(partition_uuid: str) -> None: 46 | try: 47 | run(["/usr/bin/mount", "--mkdir", f"UUID={partition_uuid}"], 48 | stdout=DEVNULL, stderr=PIPE, check=True) 49 | except CalledProcessError as cpe: 50 | error_str = cpe.stderr.decode() 51 | 52 | if "can't find in /etc/fstab" in error_str: 53 | print("Device not in /etc/fstab, ignoring it") 54 | else: 55 | notify(f"Command 'mount UUID={partition_uuid}' failed with:\n\n" + error_str, urgency="critical") 56 | raise 57 | 58 | def parse_device_variables(block_device: str) -> Dict[str, str]: 59 | device_variables = {} 60 | 61 | try: 62 | proc = run(["udevadm", "info", "--query=env", "--export", f"/dev/{block_device}"], 63 | check=True, stdout=PIPE, stderr=PIPE) 64 | except CalledProcessError as cpe: 65 | error_str = cpe.stderr.decode() 66 | notify("Command 'udevadm info' failed with:\n\n" + error_str, urgency="critical") 67 | raise 68 | 69 | for line in proc.stdout.decode().splitlines(): 70 | key, value = line.split("=", maxsplit=1) 71 | 72 | if value[0] == "'" and value[-1] == "'": 73 | value = value[1:-1] 74 | 75 | device_variables[key] = value 76 | 77 | return device_variables 78 | 79 | def find_luks_key_path(partition_uuid: str) -> Optional[str]: 80 | with open("/etc/crypttab", "r") as stream: 81 | for line in stream: 82 | line = line.strip() 83 | 84 | # Ignore comments and empty lines 85 | if not line or line[0] == "#": 86 | continue 87 | 88 | tokens = line.split() 89 | uuid = tokens[1] 90 | luks_key_path = tokens[2] 91 | 92 | if not uuid.startswith("UUID="): 93 | continue 94 | 95 | if uuid.endswith(partition_uuid): 96 | return luks_key_path 97 | 98 | return None 99 | 100 | def find_mount_point(block_device: str) -> Optional[str]: 101 | try: 102 | proc = run(["/usr/bin/lsblk", 103 | "--noheadings", 104 | "--list", 105 | "--output=name,mountpoints", 106 | 
f"/dev/{block_device}"], 107 | stdout=PIPE, stderr=PIPE, check=True) 108 | 109 | except CalledProcessError: 110 | return None 111 | 112 | lsblk_output = proc.stdout.decode() 113 | 114 | for line in lsblk_output.splitlines(): 115 | tokens = line.split(maxsplit=1) 116 | 117 | if len(tokens) == 2: 118 | return tokens[1] 119 | 120 | return None 121 | 122 | def handle_auto_mount(block_device: str) -> None: 123 | 124 | device_variables = parse_device_variables(block_device) 125 | partition_uuid = device_variables["ID_FS_UUID"] 126 | luks_key_path = find_luks_key_path(partition_uuid) 127 | 128 | if luks_key_path: 129 | mount_encrypted(partition_uuid, luks_key_path) 130 | else: 131 | mount_not_encrypted(partition_uuid) 132 | 133 | mount_point = find_mount_point(block_device) 134 | 135 | if mount_point: 136 | notify(f"Device /dev/{block_device} mounted successfully to {mount_point}") 137 | 138 | def clean_cryptsetup_leftovers() -> Set[str]: 139 | """ 140 | If hot-pluggable drives are removed from the computer, there will be 141 | leftover entries in /dev/mapper. This function aims to remove entries 142 | that no longer match a physical device. 143 | 144 | Returns a set of removed devices. 145 | """ 146 | removed_devices = set() 147 | mapped_devices = os.listdir("/dev/mapper/") 148 | 149 | for device in mapped_devices: 150 | proc = run(["/usr/bin/cryptsetup", "status", device], stdout=PIPE) 151 | stdout = proc.stdout.decode() 152 | 153 | for line in stdout.splitlines(): 154 | line = line.strip() 155 | line = line.replace(" ", "") 156 | 157 | if line == "device:(null)": 158 | # Leftover detected, closing it 159 | run(["/usr/bin/cryptsetup", "close", device]) 160 | print(f"Removed leftover /dev/mapper/{device}") 161 | removed_devices.add(f"/dev/mapper/{device}") 162 | 163 | return removed_devices 164 | 165 | def cleanup_mounts(block_device: str) -> None: 166 | removed_luks_devices = clean_cryptsetup_leftovers() 167 | 168 | with open("/etc/mtab", "r") as stream: 169 | for line in stream: 170 | tokens = line.split() 171 | 172 | if len(tokens) < 2: 173 | continue 174 | 175 | device = tokens[0] 176 | mount_point = tokens[1] 177 | 178 | if device in removed_luks_devices or device == block_device: 179 | run(["/usr/bin/umount", "--lazy", mount_point]) 180 | try: 181 | os.rmdir(mount_point) 182 | except OSError as ose: 183 | print(str(ose)) 184 | 185 | def main(): 186 | 187 | if len(sys.argv) != 3: 188 | print(f"Usage: {sys.argv[0]} [ACTION] [BLOCK DEVICE]") 189 | sys.exit(1) 190 | 191 | action = sys.argv[1] 192 | block_device = sys.argv[2] 193 | 194 | print(f"{action} device {block_device}") 195 | 196 | if action == "add": 197 | handle_auto_mount(block_device) 198 | 199 | elif action == "remove": 200 | cleanup_mounts(block_device) 201 | 202 | else: 203 | raise ValueError("Unknown action type") 204 | 205 | if __name__ == "__main__": 206 | main() 207 | -------------------------------------------------------------------------------- /rootfs/usr/local/bin/yay: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # If not trying to install packages, don't show warning and directly exec yay 4 | if [ "$#" -lt 2 ] || [[ "$1" != -S* ]] || [[ "$1" = -Ss* ]]; then 5 | exec /usr/local/bin/proxify /usr/bin/yay --sudoflags="COMING_FROM_YAY=1" "$@" 6 | exit 1 7 | fi 8 | 9 | if /usr/bin/pacman -Qu &>/dev/null; then 10 | echo 11 | echo -e "\e[31mUPGRADE YOUR SYSTEM FIRST\e[0m" 12 | echo 13 | echo "Refusing to run because a system upgrade is available."
14 | echo "Installing a new package could lead to a partial upgrade and break your system." 15 | echo "Run 'yay -Syu' first." 16 | exit 1 17 | fi 18 | 19 | echo 20 | echo -e "\e[31mINSTALLING PACKAGES IS DISCOURAGED\e[0m" 21 | echo 22 | echo "Please consider using Docker containers instead to keep your system clean." 23 | 24 | read -p "Are you sure you want to install these packages? [y/N] " -r answer 25 | 26 | if [ "$answer" != "${answer#[Yy]}" ]; then 27 | echo 28 | /usr/local/bin/proxify /usr/bin/yay --sudoflags="COMING_FROM_YAY=1" "$@" && echo -e "\n\e[31mDon't forget to update your Arch install script\e[0m" 29 | else 30 | echo 31 | echo "Good choice ;)" 32 | fi 33 | -------------------------------------------------------------------------------- /sync.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -euo pipefail 3 | cd "$(dirname "$0")" 4 | 5 | if [ "$EUID" -ne 0 ]; then 6 | echo "Please run as root using sudo" 7 | exit 1 8 | fi 9 | 10 | copy_if_different() { 11 | from="$1" 12 | to="$2" 13 | 14 | if grep username_placeholder "$from" &>/dev/null; then 15 | sha1from="$(sed "s/username_placeholder/$SUDO_USER/g" "$from" | sha1sum | awk '{print $1}' || true)" 16 | else 17 | sha1from="$(sha1sum "$from" 2>/dev/null | awk '{print $1}' || true)" 18 | fi 19 | 20 | sha1to="$(sha1sum "$to" 2>/dev/null | awk '{print $1}' || true)" 21 | 22 | if [ "$sha1from" != "$sha1to" ]; then 23 | mkdir -p "$(dirname "$to")" 24 | if cp --interactive "$from" "$to"; then 25 | sed -i "s/username_placeholder/$SUDO_USER/g" "$to" 26 | echo "$from -> $to" 27 | fi 28 | fi 29 | } 30 | 31 | export -f copy_if_different 32 | 33 | find rootfs -type f ! -name "packages-*" -exec bash -c 'file="$1"; dest="/${file#rootfs/}"; copy_if_different "$file" "$dest"' shell {} \; 34 | --------------------------------------------------------------------------------
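
A minimal usage sketch for sync.sh follows. The exact invocation is an assumption based on the script above rather than something the repository documents: sync.sh has to be run through sudo so that $SUDO_USER is set, and it copies every file under rootfs/ to the matching path under /, replacing the username_placeholder token with the invoking user as it goes.

# Hypothetical invocation, assuming the repository has been cloned locally:
git clone https://github.com/ShellCode33/ArchLinux-Hardened.git
cd ArchLinux-Hardened
sudo ./sync.sh  # cp --interactive asks before overwriting existing files and a "from -> to" line is printed for each file that was updated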