├── MIT-LICENSE
├── README.md
└── duplexRsync.sh

/MIT-LICENSE:
--------------------------------------------------------------------------------
Copyright (c) 2019 Francois Payette

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
# DuplexRsync

🌟 Simple realtime 2-way sync.

### Problem

I often find myself editing quite a few files on remote hosts; for anything non-trivial I like to use local-running tools such as Sublime. I've used [rsub](https://github.com/henrikpersson/rsub); it's very nice and lightweight. Sometimes (often) the light editing turns heavier and more and more files are worked on.
I have noticed that when the ssh tunnel dies and is recreated while a file is open, the file will be truncated to zilch -- a glitch to look out for that is more likely to occur when multiple files are open.

When things keep getting heavier, I've then used [sshfs](https://github.com/osxfuse/osxfuse/wiki/SSHFS) to mount a remote directory and fuse it to the local filesystem. This usually works ok, but for some types of workflows, such as Sublime projects with a lot of files in subfolders (node_modules? --sometimes this one starts to feel like a whole Gentoo distro), it is inadequate. Search becomes extra slow. The SublimeText project tree spins and spins and spins, and features that have become second nature are unworkable. Also, open files prevent the tunnelling connection from exiting, and a broken tunnel (say you close your laptop without closing everything and unmounting) can leave the fuse subsystem in a weird state where you cannot remount to the previous location until a reboot, along with other minor glitches.

### Solution

DuplexRsync is a simple and pretty sweet (although only lightly tested as of 2019/03, PLEASE BE CAREFUL AND ALWAYS HAVE BACKUPS and/or VERSIONING!) solution based on fswatch and rsync. It's a single file you'll put in your local directory that will maintain (DropBox|GoogleDrive)-style 2-way sync between the current directory and a remote directory via SSH. This has the advantage of working fine when offline. This bash script is a bit macOSX-centric because that's what I use locally; please feel free to adapt it. By default the script excludes node_modules and all folders that start with a period (.git etc.).

### Merging

If a file has been edited on both ends while offline (duplexRsync not running), merging will simply crush the older edit; it will never result in conflict files.
This is harsh but simpler; with git these days, I think edits that have some value should be committed, so we delegate versioning there.

If you attempt to sync mismatched folders, a lot of files in the remote folder could get deleted. When launching duplexRsync you'll be prompted to either merge the folders (create these files in the local folder) or destroy all the extra files in the remote folder.

The latency for multiple remote edits to propagate to the local folder defaults to 3 seconds; this prevents infinite cycling of change detection. Over very slow network connections you might need to increase this value.

### Setup

On your remote machine you'll need fswatch:

    sudo add-apt-repository ppa:hadret/fswatch
    sudo apt-get update
    sudo apt-get install -y fswatch

On your local machine you'll need brew; that's it. The script will install the other required components (socat, fswatch, and gnu-getopt).

    chmod u+x duplexRsync.sh
    ./duplexRsync.sh --remoteHost user@192.168.0.2

### Caveats

This is a simple solution; it does not implement any distributed locking. If you or other processes are editing at both ends simultaneously, over and above the crushing of the older edit of the same file mentioned above, there's a window during which a newly created file can get deleted. Conversely, but less seriously, there's also a window during which a deleted file could be recreated. Something like a --delete-older-than "seconds" argument to rsync would mitigate the first edge case; I think the second one (a zombie file coming back) is an annoyance I can live with.

### Related

Thanks for all the feedback in various forums. Here are a few related projects that have been brought to my attention. I have not tried any of these; they all look very well written and could come in handy later.

#### Heavier

- [osync](https://github.com/deajan/osync)

#### Heaviest

- [Mutagen.io](https://mutagen.io/)
- [Syncthing](https://github.com/syncthing/syncthing)
- [Unison](https://github.com/bcpierce00/unison)

That's it!🔥 Cheers!

Please note: a few hidden files are created to maintain the 2-way sync; they all start with .____*. The remote directory will be created straight off your remote user's home directory; there's an optional --remoteParent if you need to change that.

License: MIT
--------------------------------------------------------------------------------

/duplexRsync.sh:
--------------------------------------------------------------------------------
#!/bin/bash

# REQUIREMENT: we need fswatch on both ends; run this to get it on Ubuntu 16.04
#sudo add-apt-repository ppa:hadret/fswatch
#sudo apt-get update
#sudo apt-get install -y fswatch
printHelp(){
echo "USAGE: duplexRsync --remoteHost user@host

DuplexRsync requires fswatch on both ends; this script tries to install it locally using brew (required).
On the remote end run:
sudo add-apt-repository ppa:hadret/fswatch
sudo apt-get update
sudo apt-get install -y fswatch

You need to specify:
--remoteHost ex: user@192.168.0.2.

You can also optionally specify:
--remoteParent contains/will contain the remoteDir"
}

# if our arguments match this string, it's the socat fork trigger for remote change detection; increment the sentinel and exit
if [ "$*" = "sentinelIncrement" ];
then
  sentval=$(cat .____sentinel);sentval=$((sentval+1));echo $sentval > .____sentinel;
  exit;
fi


if [ "$*" = "" ];
then
  printHelp;
  exit;
fi

# we need brew on macosx
if [ -z $(command -v brew) ];
then
  printHelp;
  exit
fi

# this is for macosx; we also need socat to create a socket so the remote end can trigger rsync
brew install socat fswatch gnu-getopt


function randomLocalPort() {
  localPort=42
  localPort=$RANDOM;
  let "localPort %= 999";
  localPort="42$localPort"
}

function randomRemotePort() {
  remotePort=42
  remotePort=$RANDOM;
  let "remotePort %= 999";
  remotePort="42$remotePort"
}


if ! options=$(/usr/local/Cellar/gnu-getopt/*/bin/getopt -u -o hr:p: -l help,remoteHost:,remoteParent: -- "$@")
then
  # something went wrong, getopt will put out an error message for us
  exit 1
fi


set -- $options

while [ $# -gt 0 ]
do
  case $1 in
  # for options with required arguments, an additional shift is required
  -h|--help ) printHelp; exit; shift;;
  -r|--remoteHost ) remoteHost=$2; shift;;
  -p|--remoteParent ) remoteParent=$2; shift;;
  --) shift; break;;
  #(-*) echo "$0: error - unrecognized option $1" 1>&2; exit 1;;
  (*) break;;
  esac
  shift
done

if [ -z "$remoteHost" ];
then
  echo "Missing Argument: --remoteHost"
  printHelp;
  exit;
fi

remoteDir=${PWD##*/}
remoteDir="$remoteParent$remoteDir"


if [ ! -f ~/.ssh/id_rsa.pub ];
then
  echo "You need a key pair to use duplexRsync. You can generate one using: ssh-keygen -t rsa"
  exit;
fi

# we'll need to ssh without a password - use public key crypto to ssh into the remote end; rsync needs this
# we are copying our pubkey so we can ssh in without a prompt
cat ~/.ssh/id_rsa.pub | ssh "$remoteHost" 'mkdir .ssh;pubkey=$(cat); touch .ssh/authorized_keys; if grep -q "$pubkey" ".ssh/authorized_keys"; then echo "public key for this user already present"; else echo $pubkey >> .ssh/authorized_keys;fi'


fswatchPath=$(ssh "$remoteHost" 'command -v fswatch')
# on a macosx remote the $PATH variable differs between local and ssh sessions; let's try looking up the usual local path
if [ -z "$fswatchPath" ];
then
  fswatchPath=$(ssh "$remoteHost" 'command -v /usr/local/bin/fswatch')
fi

if [ -z "$fswatchPath" ];
then
  echo "ERROR: missing fswatch at remote end"
  printHelp;
  exit;
fi

# kill all remote fswatches for this path that might be lingering
ssh $remoteHost "pkill -P \$(ps alx | egrep '.*pipe_w.*____rsyncSignal.sh --pwd $PWD --port $remotePort' | awk '{print \$4}' | head -n 1)"

ssh $remoteHost "pkill -f '____rsyncSignal.sh --pwd $PWD'"
# if we have the ssh tunnel running this will match and we kill it; the pwd arg prevents killing watches on other folders
pkill -f "rsyncSignal.sh --pwd $PWD"
# if we have a lingering socat kill it
# we shouldn't have one; this is a bad plan if using multiple sockets
#pkill -f "sentinelIncrement.sh --pwd $PWD"

echo '0' > .____sentinel
# create a local socket to listen for remote changes
socatRes="not listening yet, we get a random port in the following loop";
while [ ! -z "$socatRes" ]
do
  randomLocalPort;
  socatRes="";
  # fork: call this script with a special argument that simply increments the sentinel and exits
  socatRes=$(socat TCP-LISTEN:$localPort,fork EXEC:"./duplexRsync.sh sentinelIncrement" 2>&1 &) &
  # result should be empty when listen works
done;

echo "listening locally on:$localPort"


# for now we use the same port at both ends; this is a bit sloppy, we should test that it's not already in use before the ssh -R call
remotePort=$localPort

# we dump to a remote file the fswatch command that allows the locally running socat to get a signal of a remote change
# the -r switch is added per subdir, excluding node_modules; this is required because fswatch would otherwise still iterate over all subdirs, the -e switch being a pattern, not a path

# if you get a bunch of: inotify_add_watch: No space left on device
# you will need to https://github.com/guard/listen/wiki/Increasing-the-amount-of-inotify-watchers
# check your current limit: cat /proc/sys/fs/inotify/max_user_watches
# ATTENTION: you cannot change this kernel param if running in an unprivileged container; you'll need to run this in the hosting kernel's env
# echo fs.inotify.max_user_watches=524288 | tee -a /etc/sysctl.conf && sysctl -p; echo "increasing the limit of watches, cannot be done in unpriv container"
#echo "$fswatchPath -r -e \"node_modules\" -o . | while read f; do echo 1 | nc localhost $remotePort; done" | ssh $remoteHost "mkdir -p $remoteDir; cd $remoteDir; cat > .____rsyncSignal.sh"
absPath=$(ssh $remoteHost "mkdir -p $remoteDir; cd $remoteDir; pwd")

# we are excluding node_modules and folders starting with .
ssh $remoteHost "mkdir -p $remoteDir; cd $remoteDir; find $absPath -maxdepth 1 -mindepth 1 -type d ! -name \"node_modules\" ! -name \".*\"| awk '{ print \"\\\"\"\$0\"\\\"\"}' | nl | awk -F\\\" '{printf \"/usr/bin/fswatch -x --event Updated --event Created --event Removed --event Renamed --event MovedFrom --event MovedTo -r \\\"%s\\\" | while read f; do echo 1 | nc localhost $remotePort; done \& \n\", \$2, \$1, \$1}' > .____rsyncSignal.sh"
ssh $remoteHost "cd $remoteDir; echo \"/usr/bin/fswatch -x --event Updated --event Created --event Removed --event Renamed --event MovedFrom --event MovedTo -o $absPath | while read f; do echo 1 | nc localhost $remotePort; done\" >> .____rsyncSignal.sh"

# we are excluding node_modules and folders starting with .
# this should work, but there seems to be a bug in fswatch, so we are using multiple processes instead
#ssh $remoteHost "mkdir -p $remoteDir; cd $remoteDir; find $absPath -maxdepth 1 -mindepth 1 -type d ! -name \"node_modules\" ! -name \".*\" | awk '{ print \"\\\"\"\$0\"\\\"\"}' | awk -F\\\" '{printf \" \\\"%s\\\" \", \$2}' | (echo -n \" /usr/bin/fswatch -x --event Updated --event Created --event Removed --event Renamed --event MovedFrom --event MovedTo -r \" && cat) > .____rsyncSignal.sh"
#ssh $remoteHost "cd $remoteDir; echo \" | while read f; do if [ -z \\\"\$skip\\\" ]; then skip=\\\"recursive first msg is spurious\\\"; else echo 1 | nc localhost $remotePort; fi done & /usr/bin/fswatch -o $absPath | while read f; do echo 1 | nc localhost $remotePort; done\" >> .____rsyncSignal.sh"
#exit 1;


function duplex_rsync() {

  # kill all remote fswatches, also suppress the kill notice in bash
  ssh $remoteHost "pkill -P \$(ps alx | egrep '.*pipe_w.*____rsyncSignal.sh --pwd $PWD --port $remotePort' | awk '{print \$4}' | head -n 1) >/dev/null 2>&1"

  # kill the remote fswatch while we sync; the pwd arg prevents attempting to kill other watches, the port prevents killing if 2 locals have the exact same local path
  # also, this discloses the local path to the remote end; I don't think this is serious
  ssh $remoteHost "pkill -f '____rsyncSignal.sh --pwd $PWD --port $remotePort'"


  # also kill the tunnel
  pkill -f "rsyncSignal.sh --pwd $PWD"

  # order matters; if we got a remote trigger we'll process remote as src first to prevent restoring files that might have just been deleted
  if [ "$trigger" = "remote" ];
  then
    rsync -auzP --exclude ".*/" --exclude ".____*" --exclude "node_modules" --delete "$remoteHost:$remoteDir/" .;
    rsync -auzP --exclude ".*/" --exclude ".____*" --exclude "node_modules" --delete . "$remoteHost:$remoteDir";
  else # local as src first
    rsync -auzP --exclude ".*/" --exclude ".____*" --exclude "node_modules" --delete . "$remoteHost:$remoteDir";
    rsync -auzP --exclude ".*/" --exclude ".____*" --exclude "node_modules" --delete "$remoteHost:$remoteDir/" .;
  fi;


  ssh -R localhost:$localPort:127.0.0.1:$remotePort $remoteHost "cd $remoteDir; bash .____rsyncSignal.sh --pwd $PWD --port $remotePort"&
  #tunnelPid="$!"
  # echo "tunnelPid:$tunnelPid"
}

lastSentinel=$(cat .____sentinel);

# we always start from the local dir
trigger=local;
# do a trial run to see if we'd delete files on the remote end
wouldDeleteCount=$(rsync -anuzP --exclude ".*/" --exclude ".____*" --exclude "node_modules" --delete . $remoteHost:$remoteDir/ | grep deleting | wc -l);
wouldDeleteCount="$(echo -e "${wouldDeleteCount}" | tr -d '[:space:]')"

wouldDeleteRemoteFiles=$(rsync -anuzP --exclude ".*/" --exclude ".____*" --exclude "node_modules" --delete . $remoteHost:$remoteDir/ | grep deleting);
if [ ! -z "$wouldDeleteRemoteFiles" ];
then

  unset destroyAhead
  unset localFileCount
  localFileCount=$(find . -type f | egrep -v '\..+/' | egrep -v '\./duplexRsync.sh' | egrep -v '\./\.____*' | wc -l | tr -d '[:space:]')
  # if the local directory is empty (using the same pattern as rsync above) we always merge
  if [ "$localFileCount" -eq 0 ]
  then
    destroyAhead="merge"
  else
    echo "WOULD delete count: $wouldDeleteCount"
    echo "$wouldDeleteRemoteFiles"
  fi

  while ! [[ "$destroyAhead" =~ ^(destroy|merge|abort)$ ]]
  do

    if [ "$wouldDeleteCount" -gt 5 ]
    then
      major=" ----MAJOR----- ";
    fi

    if [ "$wouldDeleteCount" -gt 42 ]
    then
      major=" ----INTERSTELLAR BYPASS LEVEL----- ";
    fi

    echo "ATTENTION $major DESTRUCTION AHEAD: There is/are $wouldDeleteCount file(s) present in the remote folder that are not present locally. Could the remote folder be totally unrelated? Would you like to merge the folders by creating these files locally (merge), sync and destroy them (destroy), or abort? (merge/destroy/abort)"
    read destroyAhead
  done
  if [ "$destroyAhead" = "abort" ];
  then
    exit;
  elif [ "$destroyAhead" = "merge" ];
  then
    # sync from remote without delete
    rsync -auzP --exclude ".*/" --exclude ".____*" --exclude "node_modules" "$remoteHost:$remoteDir/" .;
  fi
fi;

duplex_rsync; fswatch -r -o . | while read f;
do
  sentinel=$(cat .____sentinel);
  echo "sentinel $sentinel lastSentinel: $lastSentinel"
  sentinelInc=$((sentinel-lastSentinel));
  # if the change is remote (incremented .____sentinel) let's slow down and wait to gobble up multiple events
  if [ $sentinelInc -gt 0 ]
  then
    echo 'remote change detected';
    trigger=remote;
    duplex_rsync;
    sleep 3;
  else
    echo 'local change detected';
    trigger=local;
    duplex_rsync;
  fi
  lastSentinel=$sentinel;
done;
--------------------------------------------------------------------------------
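
The sentinel mechanism the main loop relies on (the socat fork increments a counter file on every remote notification; the fswatch loop compares it to the last value it saw to decide whether a change originated remotely or locally) can be reduced to a standalone sketch. The file name `.____sentinel_demo` and the function names below are illustrative only, not part of the script:

```shell
# Demo of the sentinel trick used by duplexRsync (names are hypothetical;
# the real script uses .____sentinel and its "sentinelIncrement" mode).
SENTINEL=.____sentinel_demo
echo 0 > "$SENTINEL"

# What the socat-forked invocation does on every remote change notification:
increment_sentinel() {
  local v
  v=$(cat "$SENTINEL")
  echo $((v + 1)) > "$SENTINEL"
}

# What the main fswatch loop does on every event: if the counter moved
# since last time, the change originated remotely; otherwise it was local.
classify_trigger() {
  local now
  now=$(cat "$SENTINEL")
  if [ "$now" -gt "$lastSentinel" ]; then
    trigger=remote
  else
    trigger=local
  fi
  lastSentinel=$now
}

lastSentinel=$(cat "$SENTINEL")
classify_trigger
echo "$trigger"      # prints "local": no remote notification has arrived
increment_sentinel   # simulate a remote change notification
classify_trigger
echo "$trigger"      # prints "remote"
rm -f "$SENTINEL"
```

A file rather than a shell variable carries the count because, in the real script, the increment happens in a separate process (the socat fork), which cannot mutate the main loop's variables.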