├── README.md
├── arch
│   └── virtualbox-fix-modules-burn-way.md
├── command-line
│   ├── git-checkout-remote.md
│   ├── grep-for-multiple-files.md
│   ├── kill-multiple-processes-at-once.md
│   ├── move-a-git-repo-with-history.md
│   ├── push-to-multiple-git.md
│   ├── remove-unwanted-prefixes.md
│   ├── rsync-over-ssh.md
│   └── use-python-to-webshare-any-directory.md
├── docker
│   ├── change-docker-root.md
│   ├── changeRootDir.md
│   └── yum-apt-repos-docker.md
├── libvirt
│   ├── qemu-no-internet.md
│   └── virsh-host-cpu.md
├── linux
│   └── firefox-dark-theme.md
├── nmap
│   └── scan-ip-range-for-port.md
├── openshift
│   └── dynamic-storage-allocation.md
├── synology
│   ├── Init_3rdparty_1.9-003.spk
│   ├── ddnsupdater_1.27-002.spk
│   └── synology-ddns.md
├── template.md
├── ubiquiti
│   └── move-ap-to-new-controller.md
├── ubuntu
│   └── ubuntu_full_boot_partition.md
└── vim
    └── sudo_vim.md

/README.md:
--------------------------------------------------------------------------------
# TIL

> Today I Learned

A collection of concise write-ups on small things I learn day to day across a variety of technologies. These are things that don't really warrant a full blog post, and most of them I learn by pairing with smart people at [LinuxServer.io](http://linuxserver.io/).

---

### Categories

* [arch](#arch)
* [command-line](#command-line)
* [docker](#docker)
* [libvirt](#libvirt)
* [linux](#linux)
* [nmap](#nmap)
* [openshift](#openshift)
* [synology](#synology)
* [ubiquiti](#ubiquiti)
* [ubuntu](#ubuntu)
* [vim](#vim)

### arch

- [Fix VirtualBox modules after a kernel update - if all else fails](arch/virtualbox-fix-modules-burn-way.md)

### command-line

- [Remove unwanted prefixes from multiple files](command-line/remove-unwanted-prefixes.md)
- [Share any directory as a website using Python](command-line/use-python-to-webshare-any-directory.md)
- [Push to multiple git remotes at once](command-line/push-to-multiple-git.md)
- [Rsync over SSH](command-line/rsync-over-ssh.md)
- [Move a git repo with history](command-line/move-a-git-repo-with-history.md)
- [Git checkout remote branch with tracking](command-line/git-checkout-remote.md)
- [Kill multiple processes at once](command-line/kill-multiple-processes-at-once.md)
- [Grep multiple files at once](command-line/grep-for-multiple-files.md)

### docker

- [Official Docker install script for `yum` and `apt`](docker/yum-apt-repos-docker.md)
- [Change Docker root directory under systemd](docker/change-docker-root.md)

### libvirt

- [List Host CPU capabilities using `virsh`](libvirt/virsh-host-cpu.md)
- [Fix QEMU guest with no internet](libvirt/qemu-no-internet.md)

### linux

- [Fix Firefox dark theming issues](linux/firefox-dark-theme.md)

### nmap

- [Scan IP range for a specific port](nmap/scan-ip-range-for-port.md)

### openshift

- [OCP dynamic storage allocation](openshift/dynamic-storage-allocation.md)

### synology

- [Set up Dynamic DNS on a Synology NAS](synology/synology-ddns.md)

### ubiquiti

- [Move a Ubiquiti AP to a new controller](ubiquiti/move-ap-to-new-controller.md)

### ubuntu

- [Fix a full `/boot` partition](ubuntu/ubuntu_full_boot_partition.md)

### vim

- [sudo a file from within vim](vim/sudo_vim.md)

--------------------------------------------------------------------------------
/arch/virtualbox-fix-modules-burn-way.md:
--------------------------------------------------------------------------------
# VirtualBox - Fix after Kernel Update

So, there I was, left with a broken VirtualBox following an Arch kernel update. OK, should be simple: just reload the host kernel modules for VirtualBox... if only! DKMS let me down too.

With nothing really working and VirtualBox constantly spitting out a kernel driver error message, I resorted to Google and found this last-resort "burn it all" solution, which worked:

1. Refresh the system:
```sh
sudo pacman -Syyu
```
This should iron out any package conflicts and such.

2. Then, (re)install virtualbox and the virtualbox-host-modules package:
```sh
sudo pacman -S virtualbox virtualbox-host-modules
```

3. Add your user to the `vboxusers` group:
```sh
sudo gpasswd -a yourusername vboxusers
```

4. Run `/sbin/rcvboxdrv setup`:
```sh
sudo /sbin/rcvboxdrv setup
```
This should fix the modules and let VirtualBox play nice again. I also installed the new kernel headers for good measure.

Credit: https://bbs.archlinux.org/viewtopic.php?id=150349
--------------------------------------------------------------------------------
/command-line/git-checkout-remote.md:
--------------------------------------------------------------------------------
### Git checkout remote branch with tracking

Check out a branch from a remote with a local tracking branch.

`git checkout -t origin/feature/blah`
--------------------------------------------------------------------------------
/command-line/grep-for-multiple-files.md:
--------------------------------------------------------------------------------
### grep for multiple files in subdirs

Use grep to find (and delete) multiple matching files at once, including in subfolders.

```
tree -fi | grep '.*-2.MP4'
./2021-08-19/7F8A1487-2.MP4
./2021-08-19/7F8A1517-2.MP4
./2021-08-20/7F8A1757-2.MP4
./2021-08-20/7F8A1758-2.MP4

# pipe to xargs to delete
tree -fi | grep '.*-2.MP4' | xargs rm
```

`tree -fi` is useful because it prints paths relative to the current directory, ready for `xargs rm`. `ls -R` lists files recursively, but without relative paths from your current working dir.

Or, if you only care about the current directory, a simple `ls` will suffice:

```
ls | grep '.*-2.CR3'
```
--------------------------------------------------------------------------------
/command-line/kill-multiple-processes-at-once.md:
--------------------------------------------------------------------------------
### Kill multiple processes at once

Using `pgrep` it's possible to kill multiple processes at once; piping the PIDs through `xargs` to `kill` does it in one line.

`pgrep docker | xargs kill -9`

- Source(s)
  - [1](https://unix.stackexchange.com/questions/138202/can-i-chain-pgrep-with-kill)
  - [2](https://www.linuxquestions.org/questions/linux-newbie-8/how-do-i-extract-the-pid-field-from-ps-aux-command-361150/)
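The `pgrep docker | xargs kill -9` one-liner above is blunt; jumping straight to `SIGKILL` gives processes no chance to clean up. A gentler, preview-first variant (a sketch assuming GNU procps `pgrep` and GNU `xargs`, with `docker` as a purely illustrative process name):

```sh
# preview the matches first: -a prints each PID with its full command line
pgrep -a docker

# ask the processes to exit cleanly (SIGTERM)...
pgrep docker | xargs -r kill

# ...and only force-kill whatever is still hanging around
pgrep docker | xargs -r kill -9
```

The `-r` flag stops `xargs` from running `kill` with no arguments when nothing matches.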
--------------------------------------------------------------------------------
/command-line/move-a-git-repo-with-history.md:
--------------------------------------------------------------------------------
### Move a git repo with history

Move a git repo, with all of its branches and history, from one server to another.

```
git clone --mirror <old-repo-url>
cd <repo-name>.git
git remote set-url origin <new-repo-url>
git push -f origin
```

- Source(s)
  - [stackoverflow.com](http://stackoverflow.com/questions/1484648/how-to-migrate-git-repository-from-one-server-to-a-new-one)
--------------------------------------------------------------------------------
/command-line/push-to-multiple-git.md:
--------------------------------------------------------------------------------
### Push/pull to multiple Git locations

Working with multiple git remotes can be awkward. In my use case I wanted to push the same repository (my notes) to both a GitHub instance behind a firewall and a private repository on GitHub.com. Git natively supports multiple push URLs on a single remote.

```
git remote set-url --add --push origin user1@repo1
git remote set-url --add --push origin user2@repo2
git remote -v
```

- Source(s)
  - [stackoverflow.com](http://stackoverflow.com/questions/849308/pull-push-from-multiple-remote-locations/12795747#12795747)
--------------------------------------------------------------------------------
/command-line/remove-unwanted-prefixes.md:
--------------------------------------------------------------------------------
### Remove unwanted prefixes from filenames

Remove the prefix 'unwanted' from the beginning of each filename with a .jpg suffix.

`rename 's/^unwanted//' *.jpg`

- Source(s)
  - [@climagic](https://twitter.com/climagic)
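The `rename` above is the Perl flavour (the one that takes an `s///` expression); the util-linux `rename` shipped by some distros uses a different syntax entirely. The Perl version supports a dry run, which is worth doing before renaming in bulk. A small sketch:

```sh
# -n prints what would be renamed without touching anything
rename -n 's/^unwanted//' *.jpg

# happy with the preview? run it for real; -v prints each rename as it happens
rename -v 's/^unwanted//' *.jpg
```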
--------------------------------------------------------------------------------
/command-line/rsync-over-ssh.md:
--------------------------------------------------------------------------------
## Sync a folder over SSH using rsync

### Update the contents of a folder

To save bandwidth and time, we can avoid re-copying files that already exist in the destination folder and have not been modified in the source folder. Adding the `-u` parameter tells rsync to skip files that are already up to date in the destination; this is where the delta-transfer algorithm comes in. To synchronize two folders like this, use:

    rsync -rtvu source_folder/ destination_folder/

By default, rsync decides whether a file (or part of it) needs to be transferred based on its modification time and size. We can instead use a checksum to decide whether a file has changed by adding the `-c` parameter; any file whose checksum matches on both sides is skipped.

    rsync -rtvuc source_folder/ destination_folder/

### Synchronizing two folders with rsync

To keep two folders in sync, we not only need to add new files from the source folder to the destination folder, as above; we also need to remove from the destination any files that have been deleted from the source. rsync does this with the `--delete` parameter which, used in conjunction with the previously explained `-u` parameter that updates modified files, keeps two directories in sync while saving bandwidth.

    rsync -rtvu --delete source_folder/ destination_folder/

The deletion can happen at different points in the transfer, controlled by some additional parameters:

* `--delete-before` - look for missing files and delete them before the transfer starts (the default behaviour)
* `--delete-after` - look for missing files and delete them once the transfer has completed
* `--delete-during` - delete missing files as they are encountered during the transfer
* `--delete-delay` - note missing files during the transfer, but wait until it has finished before deleting them

e.g.:

    rsync -rtvu --delete-delay source_folder/ destination_folder/

- Source(s)
  - [1](http://www.jveweb.net/en/archives/2010/11/synchronizing-folders-with-rsync.html)
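The examples above are local-to-local, but the same flags work over SSH by pointing either end at `user@host:path`, which is what the title is really about. A sketch (hostname and paths are placeholders):

```sh
# push a local folder to a remote machine over SSH, deleting remote files
# that no longer exist locally
rsync -rtvu --delete -e ssh source_folder/ user@remote-host:/path/to/destination_folder/

# pulling in the other direction works the same way
rsync -rtvu -e ssh user@remote-host:/path/to/source_folder/ destination_folder/
```

Recent rsync versions default to ssh as the transport, so `-e ssh` is mostly there as documentation.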
--------------------------------------------------------------------------------
/command-line/use-python-to-webshare-any-directory.md:
--------------------------------------------------------------------------------
### Share any directory as a website using Python

Have you ever wanted to share a directory over HTTP without setting up a full web server? With Python it's really easy, and combined with a bash alias it's *even* easier!

`alias webshare='python -c "import SimpleHTTPServer;SimpleHTTPServer.test()"'`

- Source(s)
  - [samrowe.com/wordpress/advancing-in-the-bash-shell/](http://samrowe.com/wordpress/advancing-in-the-bash-shell/)
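The alias above is Python 2 only; the `SimpleHTTPServer` module was removed in Python 3. An equivalent Python 3 alias (a small sketch, with port 8000 chosen arbitrarily):

```sh
alias webshare='python3 -m http.server 8000'

# usage: cd into the directory you want to expose, then
webshare    # serves the current directory on port 8000
```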
--------------------------------------------------------------------------------
/docker/change-docker-root.md:
--------------------------------------------------------------------------------
### Change Docker root dir using systemd

The Docker root dir defaults to something like `/var/lib/docker`. Here's how to change it using a systemd `.service` file.

Find your current root directory using `docker info`.

    $ docker info
    Root Dir: /var/lib/docker/aufs

Since we're using systemd, modifying the `DOCKER_OPTS` variable within `/etc/default/docker` to include `-g /new/root/dir` isn't going to work. There are three options below; the first two involve your `docker.service` unit, while the third uses Docker's own config file.

> Pro tip: `systemctl status docker.service` will print the location of this file at the top of the output

##### Option 1 - Direct edit to `docker.service`

* Edit the `ExecStart` line to look like this: `ExecStart=/usr/bin/dockerd -g /new/docker/root/dir -H fd://`
* `systemctl daemon-reload`
* `systemctl restart docker`
* `docker info` - verify the root dir has updated

##### Option 2 - Create a systemd drop-in service file (better way)

This option is preferred as directly editing `.service` files should be avoided; they may be overwritten during an update, for example.

* `vi /etc/systemd/system/docker.service.d/docker.root.conf` and populate with:

```sh
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -g /new/docker/root -H fd://
```

* `systemctl daemon-reload`
* `systemctl restart docker`
* `docker info` - verify the root dir has updated

##### Option 3 - Create/Modify a json config file (even better way)

This option is preferred over Option 2 because it only changes the Docker root directory and nothing else.
Open or create `/etc/docker/daemon.json` and populate it with:

```json
{
  "data-root": "/new/docker/root"
}
```

* `systemctl daemon-reload`
* `systemctl restart docker`
* `docker info` - verify the root dir has updated

***Note - Existing Containers and Images***
If you already have containers or images in `/var/lib/docker` you may wish to stop and back these up before moving them to the new root location. Moving can be done with `rsync -a /var/lib/docker/* /path/to/new/root` or, if permissions do not matter, a simple `mv` or `cp` will do.
--------------------------------------------------------------------------------
/docker/changeRootDir.md:
--------------------------------------------------------------------------------
## Getting Docker to Play Nicely with systemd-based Distros - How to Change the Root Dir for containers and images from the default

One thing that proved to be an irritation was specifying an alternate Root Dir for Docker. This is where Docker stores containers and images, and it defaults to `/var/lib/docker` (on Debian at least). However, like me, you may not want the Root Dir there. Tip: your Root Dir can be found by running:
```sh
$ sudo docker info
```

Ordinarily this is easily fixed by editing the Docker defaults file, which on Debian lives at `/etc/default/docker`, to append the `-g /path/to/new/root` argument to the `DOCKER_OPTS` line.

This is fine for sysvinit systems, but systemd ignores it completely and instead uses the docker.service file, which on Debian Jessie is located in `/lib/systemd/system`.

The problem is that, while there are docs, they could be much clearer about this, and there is evidence all over the internet that many systemd users are struggling. For many, the general solution seems to be the messy use of a symlink from `/var/lib/docker`. I too was advised to consider this solution, but I simply refused; surely there had to be a better way... and there was!

***Note - Solution Limitation***
The solution here is for Debian Jessie. Things such as default paths and locations may differ on other distributions. The solutions below assume you are within the systemd/system directory on your distribution.

***Note - Existing Containers and Images***
If you already have containers or images in `/var/lib/docker` you may wish to stop and back these up before moving them to the new root location. Moving can be done with `rsync -a /var/lib/docker/* /path/to/new/root` or, if permissions do not matter, a simple `mv` or `cp` will do.

Two solutions are presented below: one edits the docker.service file directly, the other uses a systemd drop-in. The latter is by far the better way of doing things.

### Solution 1 (Dirty, Easy Way): Edit the docker.service file directly
1. Navigate to wherever your systemd docker.service file is located:
```sh
$ cd /lib/systemd/system
```
2. Using your preferred editor, open the docker.service file:
```sh
$ sudo nano docker.service
```
3. Find the line `ExecStart=/usr/bin/docker daemon -H fd://` and amend it to include the `-g /path/to/new/root/dir` argument (obviously make sure you specify the new root dir). For example: `ExecStart=/usr/bin/docker daemon -g /media/docker -H fd://`

4. Save the file.

5. Restart docker.

6. Verify the new Docker Root (see below).

### Solution 2 (Nicer, Easy Way): Create a systemd drop-in to override ExecStart in docker.service
In this method we create a drop-in snippet that overrides docker's default ExecStart arguments, which systemd will scan and execute. This is far cleaner, as there is no need to keep re-editing docker.service after package updates, which may overwrite it.

1. Create the docker.service.d directory and move into it:
```sh
$ sudo mkdir /etc/systemd/system/docker.service.d && cd /etc/systemd/system/docker.service.d
```

2. Create a `.conf` file within the new directory, e.g. `docker.root.conf` (the file can be named whatever you wish as far as I know!):
```sh
$ sudo nano docker.root.conf
```
3. Add the following and save:
```sh
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -g /path/to/new/root -H fd://
```
***Note*** remember, it is necessary to blank the ExecStart value before reassigning it - so the Arch Linux guys say, and they're rarely wrong, right? Be sure to specify the path to your desired Docker root dir too.

4. Reload systemd, scanning for changed units:
```sh
$ sudo systemctl daemon-reload
```

5. (Re)start Docker:
```sh
$ sudo systemctl restart docker
```
Or
```sh
$ sudo systemctl start docker
```

6. Verify the new Docker Root (see below).

### Verify the new Docker Root
1. Confirm the change by running `sudo docker info` and pay attention to the `Root Dir` value.
Confirm any existing containers can be found and started, and/or run docker's hello-world example:
```sh
$ sudo docker run -ti hello-world
```

2. Confirm the presence of the container(s) and/or image(s) within your new root dir:
```sh
$ ls -la /path/to/new/root/docker/containers
```
***Note*** you can check the container id(s) by running:
```sh
$ sudo docker ps -a
```
3. If all checks out, you should be able to safely remove `/var/lib/docker`, or leave it if you wish.
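As an aside to Solution 2: newer systemd versions can create and load the drop-in for you via `systemctl edit`, which opens an editor on an override file under `/etc/systemd/system/docker.service.d/` and rescans units when you save. A hedged sketch, reusing the same Jessie-era `docker daemon -g` invocation as the examples above:

```sh
# create/edit the override interactively (no manual mkdir needed)
sudo systemctl edit docker

# paste the same snippet as Solution 2 into the editor, then save and quit:
#   [Service]
#   ExecStart=
#   ExecStart=/usr/bin/docker daemon -g /path/to/new/root -H fd://

sudo systemctl restart docker
sudo docker info | grep -i 'root dir'   # confirm the new location
```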
--------------------------------------------------------------------------------
/docker/yum-apt-repos-docker.md:
--------------------------------------------------------------------------------
### Official Docker install script

The official Docker install script sets up the apt or yum repo and installs the binary:

`curl -sSL https://get.docker.com/ | sh`

- Source(s)
  - [Docker Blog](https://blog.docker.com/2015/07/new-apt-and-yum-repos/)
--------------------------------------------------------------------------------
/libvirt/qemu-no-internet.md:
--------------------------------------------------------------------------------
### QEMU guest has no internet

If the behaviour you are seeing is that the host can access the guest and the guest can access the host, but the guest can't access other machines on the network (or vice versa), then the host's firewall is probably blocking access.

See: https://bugs.launchpad.net/ubuntu/+source/ufw/+bug/573461

Specifically, this section: "The final step is to disable netfilter on the bridge:"

```bash
# cat >> /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
EOF
# sysctl -p /etc/sysctl.conf
```
--------------------------------------------------------------------------------
/libvirt/virsh-host-cpu.md:
--------------------------------------------------------------------------------
### List Host CPU capabilities using `virsh`

Use `virsh capabilities` piped into `virsh cpu-baseline` to list the host CPU model and its feature flags as `<feature>` tags.

```sh
# virsh capabilities | virsh cpu-baseline /dev/stdin

<cpu mode='custom' match='exact'>
  <model>IvyBridge</model>
  <vendor>Intel</vendor>
  <feature policy='require' name='...'/>
  ...
</cpu>
```

- Source(s)
  - [1](https://kashyapc.fedorapeople.org/virt/openstack/devstack-nvmx.txt)
--------------------------------------------------------------------------------
/linux/firefox-dark-theme.md:
--------------------------------------------------------------------------------
### Fix Firefox Text Input Box Theming Issues

Arc Dark Theme made all my text input box backgrounds, well, dark. Too dark. Here's how to fix it.

```
cd ~/.mozilla/firefox
ls                      # check for the directory of the profile you wish to fix
cd [profile directory]
mkdir chrome
cd chrome
vi userContent.css
```

The content of `userContent.css` should be as follows:

```css
input:not(.urlbar-input):not(.textbox-input):not(.form-control):not([type='checkbox']) {
  -moz-appearance: none !important;
  background-color: white;
  color: black;
}

#downloads-indicator-counter {
  color: white;
}

textarea {
  -moz-appearance: none !important;
  background-color: white;
  color: black;
}

select {
  -moz-appearance: none !important;
  background-color: white;
  color: black;
}
```

- Source(s)
  - [1](https://www.youtube.com/watch?v=2a7rgRsO6q4)
--------------------------------------------------------------------------------
/nmap/scan-ip-range-for-port.md:
--------------------------------------------------------------------------------
### Scan IP range for a specific port with `nmap`

Perform an nmap scan of a specific IP range for machines listening on port 22 (SSH).

`nmap 192.168.1.0/24 -p 22 -oG - | grep open`

Other example(s):

`nmap -sV --open 192.168.1.0/24 -p22 | grep interesting -i`

- Source(s)
  - [1](http://helpdesk.maytechgroup.com/support/solutions/articles/3000008280-how-to-move-a-ubiquiti-unifi-access-point-to-a-new-controller-v2-x-)
  - [2](http://johanharjono.com/scan-lan-for-ssh-able-hosts.html)
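To turn that grepable output into a bare list of hosts with the port open, awk can filter on the port state (a small sketch, using the same 192.168.1.0/24 range and port 22 as above):

```sh
# --open limits results to hosts with the port open; awk prints just the IP
nmap -p 22 --open -oG - 192.168.1.0/24 | awk '/22\/open/{print $2}'
```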
--------------------------------------------------------------------------------
/openshift/dynamic-storage-allocation.md:
--------------------------------------------------------------------------------
### OCP Dynamic Storage Allocation

Using annotations it is possible to dynamically allocate storage, in this case on Gluster with Heketi.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: something
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-heketi
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```

- Source(s)
  - [ocp-docs](https://docs.openshift.com/container-platform/3.5/install_config/storage_examples/storage_classes_dynamic_provisioning.html)
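A quick way to exercise the claim (a sketch assuming the manifest above is saved as `pvc.yaml` and you are logged in to the right project):

```sh
# create the claim, then watch it go from Pending to Bound as the
# gluster-heketi provisioner does its work
oc create -f pvc.yaml
oc get pvc something -w
```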
10 | 11 | ``` 12 | ###################################################################### 13 | ## 14 | ## ddclient.conf created 06/01/2014 17:14 15 | ## 16 | ###################################################################### 17 | daemon=300 18 | max-interval=22d 19 | ssl=no 20 | syslog=yes 21 | pid=/var/run/ddclient.pid 22 | file=/tmp/cache/ddclient/ddclient.conf 23 | cache=/tmp/cache/ddclient/ddclient.cache 24 | notify-failure=@administrators 25 | 26 | # DDNS Provider Parameters Section 27 | use=web, web=dynamicdns.park-your-domain.com/getip 28 | protocol=namecheap 29 | server=dynamicdns.park-your-domain.com 30 | login=tld.domainhere 31 | password=ddnspass 32 | subdomainhere 33 | ``` 34 | -------------------------------------------------------------------------------- /template.md: -------------------------------------------------------------------------------- 1 | ### title 2 | 3 | exposition 4 | 5 | `cmd or whatever` 6 | 7 | Other example(s): 8 | 9 | `a variation on above but with more advanced functionality` 10 | 11 | - Source(s) 12 | - [1](link1) 13 | - [2](link2) 14 | -------------------------------------------------------------------------------- /ubiquiti/move-ap-to-new-controller.md: -------------------------------------------------------------------------------- 1 | ### Transfer adoption of a Ubiquiti AP 2 | 3 | Ubiquiti APs are 'adopted' by the control software, here's how to migrate an already configured AP to a new controller. 4 | 5 | nmap 192.168.1.0/24 -p 22 -oG - |grep open 6 | ssh adminuser@AP-IP 7 | syswrapper.sh restore-default 8 | === reboot (wait 20 secs) === 9 | ssh ubnt@AP-IP 10 | password = ubnt (default) 11 | mca-cli 12 | set-inform http://controllerip:8080/inform 13 | 14 | === adopt AP on the controller === 15 | === when state = disconnected after adoption continue here === 16 | 17 | set-inform http://controllerip:8080/inform 18 | exit 19 | 20 | Yes, you really do run the same `set-inform` command twice either side of 'adopting' the AP on the controller. Seems dumb to me. 21 | 22 | - Source(s) 23 | - [1](http://helpdesk.maytechgroup.com/support/solutions/articles/3000008280-how-to-move-a-ubiquiti-unifi-access-point-to-a-new-controller-v2-x-) 24 | -------------------------------------------------------------------------------- /ubuntu/ubuntu_full_boot_partition.md: -------------------------------------------------------------------------------- 1 | ### Fix a full /boot partition on Ubuntu 2 | 3 | If `apt` won't work properly because `/boot` is full you only have one option to clean up old kernels which are installed but not used. This one liner will list 4 | 5 | ``` 6 | sudo apt-get remove `dpkg --list 'linux-image*' |grep ^ii | awk '{print $2}'\ | grep -v \`uname -r\`` 7 | ``` 8 | 9 | Then run `apt-get update` then `autoremove` and so on. 10 | 11 | - Source(s) 12 | - [askubuntu](http://askubuntu.com/questions/345588/what-is-the-safest-way-to-clean-up-boot-partition) 13 | -------------------------------------------------------------------------------- /vim/sudo_vim.md: -------------------------------------------------------------------------------- 1 | ### elevate privileges in vim 2 | 3 | You forgot to `sudo` before opening a file again didn't you. Here's how to get the required permissions on that file without leaving vim. 4 | 5 | `:w !sudo tee %` 6 | 7 | Or, add this to your `.vimrc` and simply `w!!` for the same effect. 8 | 9 | `cmap w!! w !sudo tee > /dev/null %` 10 | --------------------------------------------------------------------------------