├── .gitignore ├── LICENSE ├── Makefile ├── README.md ├── config.yaml.sample ├── configuration.go ├── db.go ├── fsnotify.go ├── go.mod ├── go.sum ├── main.go ├── main_test.go └── systemd ├── README.md └── user └── ipfs-sync.service /.gitignore: -------------------------------------------------------------------------------- 1 | rel/ 2 | ipfs-sync 3 | ipfs-sync.exe 4 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright © 2020, The ipfs-sync Contributors. All rights reserved. 2 | 3 | Redistribution and use in source and binary forms, with or without 4 | modification, are permitted provided that the following conditions are met: 5 | 6 | 1. Redistributions of source code must retain the above copyright notice, this 7 | list of conditions and the following disclaimer. 8 | 9 | 2. Redistributions in binary form must reproduce the above copyright notice, 10 | this list of conditions and the following disclaimer in the documentation 11 | and/or other materials provided with the distribution. 12 | 13 | 3. Neither the name of ipfs-sync nor the names of its 14 | contributors may be used to endorse or promote products derived from 15 | this software without specific prior written permission. 16 | 17 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 18 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 19 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 20 | DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE 21 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 22 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 23 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 24 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 25 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 26 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 27 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | VERSION = $(shell git tag --contains) 2 | 3 | default: 4 | go fmt 5 | go build -ldflags "-X main.version=$(VERSION)" 6 | 7 | rel: 8 | go fmt 9 | mkdir rel/ 10 | 11 | CGO_ENABLED=0 GOOS=linux go build -ldflags "-X main.version=$(VERSION)" -o ipfs-sync 12 | upx ipfs-sync 13 | tar -caf ipfs-sync-linux64.tar.xz ipfs-sync LICENSE README.md systemd config.yaml.sample 14 | mv ipfs-sync-linux64.tar.xz rel/ 15 | 16 | CGO_ENABLED=0 GOOS=linux GOARCH=arm go build -ldflags "-X main.version=$(VERSION)" -o ipfs-sync 17 | upx ipfs-sync 18 | tar -caf ipfs-sync-linuxARM.tar.xz ipfs-sync LICENSE README.md systemd config.yaml.sample 19 | mv ipfs-sync-linuxARM.tar.xz rel/ 20 | 21 | CGO_ENABLED=0 GOOS=darwin go build -ldflags "-X main.version=$(VERSION)" -o ipfs-sync 22 | upx ipfs-sync 23 | tar -caf ipfs-sync-darwin64.tar.gz ipfs-sync LICENSE README.md config.yaml.sample 24 | mv ipfs-sync-darwin64.tar.gz rel/ 25 | 26 | CGO_ENABLED=0 GOOS=windows go build -ldflags "-X main.version=$(VERSION)" -o ipfs-sync.exe 27 | upx ipfs-sync.exe 28 | zip ipfs-sync-win64.zip ipfs-sync.exe LICENSE README.md config.yaml.sample 29 | mv ipfs-sync-win64.zip rel/ 30 | -------------------------------------------------------------------------------- /README.md: 
-------------------------------------------------------------------------------- 1 | # ipfs-sync 2 | [![Go Reference](https://pkg.go.dev/badge/github.com/TheDiscordian/ipfs-sync.svg)](https://pkg.go.dev/github.com/TheDiscordian/ipfs-sync) 3 | 4 | *Note: This software is very young. If you discover any bugs, please report them via the issue tracker.* 5 | 6 | `ipfs-sync` is a simple daemon which will watch files on your filesystem, mirror them to MFS, automatically update related pins, and update related IPNS keys, so you can always access your directories from the same address. You can use it to sync your documents, photos, videos, or even a website! 7 | 8 | Buy Me A Coffee 9 | 10 | ## Installation 11 | 12 | If your OS or architecture isn't supported, please open an issue! If it's easily supported with Go, I'll definitely consider it 😊. 13 | 14 | ### Binary 15 | 16 | If you're on an Arch-based distro, `ipfs-sync` is available on the [AUR](https://aur.archlinux.org/packages/ipfs-sync/). 17 | 18 | Binaries are available on the [releases](https://github.com/TheDiscordian/ipfs-sync/releases) page for other distros and OSs. 19 | 20 | ### Source 21 | 22 | You need `go` installed with a working `GOPATH`, and `$GOPATH/bin` should be added to your `$PATH` so you can execute the command. 23 | 24 | `go install github.com/TheDiscordian/ipfs-sync` 25 | 26 | ## Usage 27 | 28 | The only required parameter is `dirs`, which can be specified in the config file, or as an argument. The `ID` parameter is simply a unique identifier for you to remember; the IPNS key will be generated using this ID. 29 | 30 | It's recommended you either use the included systemd user-service, or run `ipfs-sync` with a command like `ipfs-sync -config $HOME/.ipfs-sync.yaml -db $HOME/.ipfs-sync.db`, after placing a config file in `~/.ipfs-sync.yaml`.
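
For reference, a minimal `~/.ipfs-sync.yaml` could look like the sketch below. It only uses fields documented in `config.yaml.sample`; the ID and paths are illustrative placeholders:

```yaml
# Minimal illustrative config; the ID and paths below are placeholders.
DB: /home/user/.ipfs-sync.db      # where the hash database is stored
Dirs:
  - ID: Documents                 # unique identifier; the IPNS key is generated from it
    Dir: /home/user/Documents/    # full path of the directory to sync
    Nocopy: false                 # set true only if the IPFS filestore feature is enabled
EndPoint: http://127.0.0.1:5001   # HTTP API of the local IPFS node
IgnoreHidden: true                # skip anything prefixed with "."
```

Any field left out falls back to its default (or the corresponding command-line flag), as described in the flag listing below.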
31 | 32 | ```bash 33 | Usage of ipfs-sync: 34 | -basepath string 35 | relative MFS directory path (default "/ipfs-sync/") 36 | -config string 37 | path to config file to use (default "/home/user/.ipfs-sync.yaml") 38 | -copyright 39 | display copyright and exit 40 | -db string 41 | path to file where db should be stored (default "/home/user/.ipfs-sync.db") 42 | -dirs value 43 | set the dirs to monitor in json format like: [{"ID":"Example1", "Dir":"/home/user/Documents/", "Nocopy": false},{"ID":"Example2", "Dir":"/home/user/Pictures/", "Nocopy": false}] 44 | -endpoint string 45 | node to connect to over HTTP (default "http://127.0.0.1:5001") 46 | -ignore value 47 | set the suffixes to ignore (default: ["kate-swp", "swp", "part", "crdownload"]) 48 | -ignorehidden 49 | ignore anything prefixed with "." 50 | -sync duration 51 | time to sleep between IPNS syncs (ex: 120s) (default 10s) 52 | -timeout duration 53 | longest time to wait for API calls like 'version' and 'files/mkdir' (ex: 60s) (default 30s) 54 | -v display verbose output 55 | -version 56 | display version and exit 57 | ``` 58 | 59 | `ipfs-sync` can be set up and used as a service. Simply point it to a config file, and restart it whenever the config is updated. An example config file can be found at `config.yaml.sample`. 60 | 61 | 62 | ## Example 63 | 64 | Getting started is simple. The only required field is `dirs`, so if we wanted to sync a folder, we'd simply run: 65 | 66 | ``` 67 | ipfs-sync -dirs '[{"ID":"ExampleID", "Dir":"/home/user/Documents/ExampleFolder/", "Nocopy": false}]' 68 | 2021/02/12 18:03:38 ipfs-sync starting up... 69 | 2021/02/12 18:03:38 ExampleID not found, generating... 70 | 2021/02/12 18:03:38 Adding file to /ipfs-sync/ExampleFolder/index.html ...
71 | 2021/02/12 18:04:40 ExampleID loaded: k51qzi5uqu5dlpvinw1zhxzo4880ge5hg9tp3ao4ye3aujdru9rap2h7izk5lm 72 | ``` 73 | 74 | This command will first check if there's a key named `ExampleID` in `ipfs-sync`'s namespace, if not, it'll generate and return one. In this example, it synced a simple website to `k51qzi5uqu5dlpvinw1zhxzo4880ge5hg9tp3ao4ye3aujdru9rap2h7izk5lm`. As you add/remove/change files in the directory now, they'll be visible live at that address. 75 | 76 | The `Nocopy` option enables the `--nocopy` option when adding files for that shared directory, more info about the option can be found [here](https://docs.ipfs.io/reference/http/api/#api-v0-add), and it requires the [ipfs filestore experimental feature](https://github.com/ipfs/go-ipfs/blob/master/docs/experimental-features.md#ipfs-filestore) enabled. 77 | -------------------------------------------------------------------------------- /config.yaml.sample: -------------------------------------------------------------------------------- 1 | # 2 | # Config file for ipfs-sync 3 | # It's highly recommended you set DB and Dirs before running the daemon. 4 | # 5 | # If using the default systemd script, it expects a config file to be in $USER/.ipfs-sync.yaml by default 6 | # 7 | 8 | # Path to file where db should be stored (example: "/home/user/.ipfs-sync.db") 9 | DB: 10 | 11 | # Verify filestore integrity on startup (ignored if no dirs use "nocopy") 12 | VerifyFilestore: false 13 | 14 | # Set the dirs to monitor: 15 | Dirs: 16 | ## Unique identifier for the IPNS key 17 | # - ID: Example1 18 | ## Full path of directory to sync 19 | # Dir: /home/user/Documents/ 20 | ## If true, use filestore (if enabled on IPFS daemon) 21 | # Nocopy: false 22 | ## If true, will use filesize+modification date to track changes, instead of hashing. Recommended if you have a very large directory. 
23 | # DontHash: false 24 | ## If true, will pin the root directory 25 | # Pin: false 26 | ## If true, and EstuaryAPIKey is set, will attempt to pin the CID via Estuary as well 27 | # Estuary: false 28 | # - ID: Example2 29 | # Dir: /home/user/Pictures/ 30 | # Nocopy: false 31 | # DontHash: false 32 | # Pin: false 33 | 34 | # API key for Estuary (optional, find out more at https://estuary.tech) 35 | EstuaryAPIKey: 36 | 37 | # Relative MFS directory path (default "/ipfs-sync/") 38 | BasePath: /ipfs-sync/ 39 | 40 | # Node to connect to over HTTP (default "http://127.0.0.1:5001") 41 | EndPoint: http://127.0.0.1:5001 42 | 43 | # File extensions to ignore 44 | Ignore: 45 | - kate-swp 46 | - swp 47 | - part 48 | - crdownload 49 | 50 | # If true, ignore anything prefixed with "." 51 | IgnoreHidden: true 52 | 53 | # Time to sleep between IPNS syncs (ex: 120s) (default 10s) 54 | Sync: 10s 55 | 56 | # Timeout for simple commands like `version` and `files/mkdir`. Ignored for calls that are expected to take a while like `add`. 
57 | Timeout: 30s 58 | -------------------------------------------------------------------------------- /configuration.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "embed" 5 | "encoding/json" 6 | "flag" 7 | "fmt" 8 | "io/ioutil" 9 | "log" 10 | "os" 11 | "time" 12 | 13 | "gopkg.in/yaml.v3" 14 | ) 15 | 16 | var ( 17 | BasePathFlag = flag.String("basepath", "/ipfs-sync/", "relative MFS directory path") 18 | BasePath string 19 | EndPointFlag = flag.String("endpoint", "http://127.0.0.1:5001", "node to connect to over HTTP") 20 | EndPoint string 21 | DirKeysFlag = new(SyncDirs) 22 | DirKeys []*DirKey 23 | SyncTimeFlag = flag.Duration("sync", time.Second*10, "time to sleep between IPNS syncs (ex: 120s)") 24 | SyncTime time.Duration 25 | TimeoutTimeFlag = flag.Duration("timeout", time.Second*30, "longest time to wait for API calls like 'version' and 'files/mkdir' (ex: 60s)") 26 | TimeoutTime time.Duration 27 | ConfigFileFlag = flag.String("config", getHomeDir()+".ipfs-sync.yaml", "path to config file to use") 28 | ConfigFile string 29 | IgnoreFlag = new(IgnoreStruct) 30 | Ignore []string 31 | LicenseFlag = flag.Bool("copyright", false, "display copyright and exit") 32 | DBPathFlag = flag.String("db", getHomeDir()+".ipfs-sync.db", `path to file where db should be stored`) 33 | DBPath string 34 | IgnoreHiddenFlag = flag.Bool("ignorehidden", false, `ignore anything prefixed with "."`) 35 | IgnoreHidden bool 36 | VersionFlag = flag.Bool("version", false, "display version and exit") 37 | VerboseFlag = flag.Bool("v", false, "display verbose output") 38 | Verbose bool 39 | EstuaryAPIKey string // don't make this a flag 40 | VerifyFilestoreFlag = flag.Bool("verify", false, "verify filestore on startup (not recommended unless you're having issues)") 41 | VerifyFilestore bool 42 | 43 | version string // passed by -ldflags 44 | ) 45 | 46 | func init() { 47 | flag.Var(DirKeysFlag, "dirs", `set the dirs to 
monitor in json format like: [{"ID":"Example1", "Dir":"/home/user/Documents/", "Nocopy": false},{"ID":"Example2", "Dir":"/home/user/Pictures/", "Nocopy": false}]`) 48 | flag.Var(IgnoreFlag, "ignore", `set the suffixes to ignore (default: ["kate-swp", "swp", "part", "crdownload"])`) 49 | } 50 | 51 | func getHomeDir() string { 52 | homeDir, _ := os.UserHomeDir() 53 | return homeDir + string(os.PathSeparator) 54 | } 55 | 56 | //go:embed config.yaml.sample 57 | var content embed.FS 58 | 59 | // DirKey is used for keeping track of directories; it's used in the `dirs` config parameter. 60 | type DirKey struct { 61 | // config values 62 | ID string `json:"ID" yaml:"ID"` 63 | Dir string `yaml:"Dir"` 64 | Nocopy bool `yaml:"Nocopy"` 65 | DontHash bool `yaml:"DontHash"` 66 | Pin bool `yaml:"Pin"` 67 | Estuary bool `yaml:"Estuary"` 68 | 69 | // probably best to let this be managed automatically 70 | CID string 71 | MFSPath string 72 | } 73 | 74 | // SyncDirs is used for reading what the user specifies for which directories they'd like to sync. 75 | type SyncDirs struct { 76 | DirKeys []*DirKey 77 | json string 78 | } 79 | 80 | // Set takes a JSON string and unmarshals it into `sd`. 81 | func (sd *SyncDirs) Set(str string) error { 82 | sd.DirKeys = make([]*DirKey, 0, 1) 83 | sd.json = str 84 | return json.Unmarshal([]byte(str), &sd.DirKeys) 85 | } 86 | 87 | // String returns the raw JSON used to build `sd`. 88 | func (sd *SyncDirs) String() string { 89 | return sd.json 90 | } 91 | 92 | // IgnoreStruct is used for reading what the user specifies for which extensions they'd like to ignore. 93 | type IgnoreStruct struct { 94 | Ignores []string 95 | json string 96 | } 97 | 98 | // Set takes a JSON string and unmarshals it into `ig`. 99 | func (ig *IgnoreStruct) Set(str string) error { 100 | ig.Ignores = make([]string, 0, 1) 101 | ig.json = str 102 | return json.Unmarshal([]byte(str), &ig.Ignores) 103 | } 104 | 105 | // String returns the raw JSON used to build `ig`.
106 | func (ig *IgnoreStruct) String() string { 107 | return ig.json 108 | } 109 | 110 | // ConfigFileStruct is used for loading information from the config file. 111 | type ConfigFileStruct struct { 112 | BasePath string `yaml:"BasePath"` 113 | EndPoint string `yaml:"EndPoint"` 114 | Dirs []*DirKey `yaml:"Dirs"` 115 | Sync string `yaml:"Sync"` 116 | Ignore []string `yaml:"Ignore"` 117 | DB string `yaml:"DB"` 118 | IgnoreHidden bool `yaml:"IgnoreHidden"` 119 | Timeout string `yaml:"Timeout"` 120 | EstuaryAPIKey string `yaml:"EstuaryAPIKey"` 121 | VerifyFilestore bool `yaml:"VerifyFilestore"` 122 | } 123 | 124 | func loadConfig(path string) { 125 | log.Println("Loading config file", path) 126 | cfgFile, err := os.Open(path) 127 | if err != nil { 128 | log.Println("Config file not found, generating...") 129 | defaultconfig, _ := content.ReadFile("config.yaml.sample") 130 | err = ioutil.WriteFile(path, defaultconfig, 0644) 131 | if err != nil { 132 | log.Println("[ERROR] Error loading config file:", err) 133 | log.Println("[ERROR] Skipping config file...") 134 | return 135 | } 136 | cfgFile, err = os.Open(path) 137 | if err != nil { 138 | log.Println("[ERROR] Error loading config file:", err) 139 | log.Println("[ERROR] Skipping config file...") 140 | return 141 | } 142 | } 143 | defer cfgFile.Close() 144 | cfgTxt, _ := ioutil.ReadAll(cfgFile) 145 | 146 | cfg := new(ConfigFileStruct) 147 | err = yaml.Unmarshal(cfgTxt, cfg) 148 | if err != nil { 149 | log.Println("[ERROR] Error decoding config file:", err) 150 | log.Println("[ERROR] Skipping config file...") 151 | return 152 | } 153 | if cfg.BasePath != "" { 154 | BasePath = cfg.BasePath 155 | } 156 | if cfg.EndPoint != "" { 157 | EndPoint = cfg.EndPoint 158 | } 159 | if len(cfg.Dirs) > 0 { 160 | DirKeys = cfg.Dirs 161 | } 162 | if cfg.Sync != "" { 163 | tsTime, err := time.ParseDuration(cfg.Sync) 164 | if err != nil { 165 | log.Println("[ERROR] Error processing sync in config file:", err) 166 | } else { 167 | SyncTime 
= tsTime 168 | } 169 | } 170 | if cfg.Timeout != "" { 171 | tsTime, err := time.ParseDuration(cfg.Timeout) 172 | if err != nil { 173 | log.Println("[ERROR] Error processing timeout in config file:", err) 174 | } else { 175 | TimeoutTime = tsTime 176 | } 177 | } 178 | if cfg.DB != "" { 179 | DBPath = cfg.DB 180 | } 181 | IgnoreHidden = cfg.IgnoreHidden 182 | EstuaryAPIKey = cfg.EstuaryAPIKey 183 | VerifyFilestore = cfg.VerifyFilestore 184 | } 185 | 186 | // Process flags, and load config. 187 | func ProcessFlags() { 188 | flag.Parse() 189 | if *LicenseFlag { 190 | fmt.Println("Copyright © 2020, The ipfs-sync Contributors. All rights reserved.") 191 | fmt.Println("BSD 3-Clause “New” or “Revised” License.") 192 | fmt.Println("License available at: https://github.com/TheDiscordian/ipfs-sync/blob/master/LICENSE") 193 | os.Exit(0) 194 | } 195 | if *VersionFlag { 196 | if version == "" { 197 | version = "devel" 198 | } 199 | fmt.Printf("ipfs-sync %s\n", version) 200 | os.Exit(0) 201 | } 202 | log.Println("ipfs-sync starting up...") 203 | 204 | ConfigFile = *ConfigFileFlag 205 | if ConfigFile != "" { 206 | loadConfig(ConfigFile) 207 | } 208 | if len(DirKeysFlag.DirKeys) > 0 { 209 | DirKeys = DirKeysFlag.DirKeys 210 | } 211 | 212 | // Process Dir 213 | if len(DirKeys) == 0 { 214 | log.Fatalln(`dirs field is required as flag, or in config.`) 215 | } else { // Check if Dir entries are at least somewhat valid. 216 | for _, dk := range DirKeys { 217 | if len(dk.Dir) == 0 { 218 | log.Fatalln("Dir entry path cannot be empty. (ID:", dk.ID, ")") 219 | } 220 | 221 | // Check if trailing "/" exists, if not, append it. 
222 | if dk.Dir[len(dk.Dir)-1] != os.PathSeparator { 223 | dk.Dir = dk.Dir + string(os.PathSeparator) 224 | } 225 | } 226 | } 227 | 228 | if *BasePathFlag != "/ipfs-sync/" || BasePath == "" { 229 | BasePath = *BasePathFlag 230 | } 231 | 232 | if *EndPointFlag != "http://127.0.0.1:5001" || EndPoint == "" { 233 | EndPoint = *EndPointFlag 234 | } 235 | 236 | // Ignore has no defaults so we need to set them here (if nothing else set it) 237 | if len(IgnoreFlag.Ignores) > 0 { 238 | Ignore = IgnoreFlag.Ignores 239 | } else if len(Ignore) == 0 { 240 | Ignore = []string{"kate-swp", "swp", "part", "crdownload"} 241 | } 242 | if *DBPathFlag != "" { 243 | DBPath = *DBPathFlag 244 | } 245 | if DBPath != "" { 246 | InitDB(DBPath) 247 | } 248 | if *SyncTimeFlag != time.Second*10 || SyncTime == 0 { 249 | SyncTime = *SyncTimeFlag 250 | } 251 | if *TimeoutTimeFlag != time.Second*30 || TimeoutTime == 0 { 252 | TimeoutTime = *TimeoutTimeFlag 253 | } 254 | if *IgnoreHiddenFlag { 255 | IgnoreHidden = true 256 | } 257 | if *VerifyFilestoreFlag { 258 | VerifyFilestore = true 259 | } 260 | Verbose = *VerboseFlag 261 | 262 | _, err := doRequest(TimeoutTime, "version") 263 | if err != nil { 264 | log.Fatalln("Failed to connect to end point:", err) 265 | } 266 | } 267 | -------------------------------------------------------------------------------- /db.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "io" 5 | "log" 6 | "os" 7 | "os/signal" 8 | "strings" 9 | "syscall" 10 | 11 | "github.com/cespare/xxhash/v2" 12 | "github.com/syndtr/goleveldb/leveldb" 13 | "github.com/syndtr/goleveldb/leveldb/util" 14 | "sync" 15 | ) 16 | 17 | var ( 18 | DB *leveldb.DB 19 | 20 | HashLock *sync.RWMutex 21 | Hashes map[string]*FileHash 22 | ) 23 | 24 | type FileHash struct { 25 | PathOnDisk string 26 | Hash []byte 27 | FakeHash []byte // timestamp 28 | } 29 | 30 | // Update cross-references the hash at PathOnDisk with the one in the db, 
updating if necessary. Returns true if both the hash and the timestamp were updated. 31 | func (fh *FileHash) Update() bool { 32 | if DB == nil || fh == nil { 33 | return false 34 | } 35 | var tsChanged bool 36 | var hashChanged bool 37 | if fh.Hash != nil { 38 | dbhash, err := DB.Get([]byte(fh.PathOnDisk), nil) 39 | if err != nil || string(dbhash) != string(fh.Hash) { 40 | DB.Put([]byte(fh.PathOnDisk), fh.Hash, nil) 41 | hashChanged = true 42 | } 43 | } else { 44 | hashChanged = true 45 | } 46 | dbts, err := DB.Get([]byte("ts_"+fh.PathOnDisk), nil) 47 | if err != nil || string(dbts) != string(fh.FakeHash) { 48 | DB.Put([]byte("ts_"+fh.PathOnDisk), fh.FakeHash, nil) 49 | tsChanged = true 50 | } 51 | return hashChanged && tsChanged 52 | } 53 | 54 | // Delete removes the PathOnDisk:Hash pair from the db. Works with directories; `path` is used in case fh is nil (directory). 55 | func (fh *FileHash) Delete(path string) { 56 | if DB == nil { 57 | return 58 | } 59 | if fh != nil { 60 | path = fh.PathOnDisk 61 | } 62 | iter := DB.NewIterator(util.BytesPrefix([]byte(path)), nil) 63 | for iter.Next() { 64 | path := iter.Key() 65 | if Verbose { 66 | log.Println("Deleting", string(path), "from DB ...") 67 | } 68 | DB.Delete(path, nil) 69 | DB.Delete([]byte("ts_"+string(path)), nil) 70 | delete(Hashes, string(path)) 71 | } 72 | iter.Release() 73 | } 74 | 75 | // Recalculate recalculates the Hash, updating Hash and PathOnDisk, and returning the same pointer.
76 | func (fh *FileHash) Recalculate(PathOnDisk string, dontHash bool) *FileHash { 77 | fh.PathOnDisk = PathOnDisk 78 | timestamp := GetHashValue(PathOnDisk, true) 79 | if string(timestamp) != string(fh.FakeHash) { 80 | fh.FakeHash = timestamp 81 | if !dontHash { 82 | fh.Hash = GetHashValue(PathOnDisk, false) 83 | } 84 | } 85 | return fh 86 | } 87 | 88 | func GetHashValue(fpath string, dontHash bool) []byte { 89 | if !dontHash { 90 | f, err := os.Open(fpath) 91 | if err != nil { 92 | return nil 93 | } 94 | hash := xxhash.New() 95 | if _, err := io.Copy(hash, f); err != nil { 96 | f.Close() 97 | return nil 98 | } 99 | f.Close() 100 | return hash.Sum(nil) 101 | } else { 102 | fi, err := os.Stat(fpath) 103 | if err != nil { 104 | return nil 105 | } 106 | size := fi.Size() 107 | time := fi.ModTime().Unix() 108 | return []byte{byte(0xff & size), byte(0xff & (size >> 8)), byte(0xff & (size >> 16)), byte(0xff & (size >> 24)), 109 | byte(0xff & (size >> 32)), byte(0xff & (size >> 40)), byte(0xff & (size >> 48)), byte(0xff & (size >> 56)), 110 | byte(0xff & time), byte(0xff & (time >> 8)), byte(0xff & (time >> 16)), byte(0xff & (time >> 24)), 111 | byte(0xff & (time >> 32)), byte(0xff & (time >> 40)), byte(0xff & (time >> 48)), byte(0xff & (time >> 56)), 112 | } 113 | } 114 | } 115 | 116 | // HashDir recursively searches through a directory, hashing every file, and returning them as a map[string]*FileHash.
117 | func HashDir(path string, dontHash bool) (map[string]*FileHash, error) { 118 | files, err := filePathWalkDir(path) 119 | if err != nil { 120 | return nil, err 121 | } 122 | hashes := make(map[string]*FileHash, len(files)) 123 | for _, file := range files { 124 | if Verbose { 125 | log.Println("Loading", file, "...") 126 | } 127 | splitName := strings.Split(file, ".") 128 | if findInStringSlice(Ignore, splitName[len(splitName)-1]) > -1 { 129 | continue 130 | } 131 | 132 | // Load existing data from DB 133 | var hash, timestamp []byte 134 | if !dontHash { 135 | hash, _ = DB.Get([]byte(file), nil) 136 | } 137 | timestamp, _ = DB.Get([]byte("ts_"+file), nil) 138 | fh := &FileHash{PathOnDisk: file, Hash: hash, FakeHash: timestamp} 139 | fh.Recalculate(file, dontHash) // Recalculate using info from DB (avoiding rehash if possible) 140 | hashes[file] = fh 141 | } 142 | return hashes, nil 143 | } 144 | 145 | // InitDB initializes a database at `path`. 146 | func InitDB(path string) { 147 | Hashes = make(map[string]*FileHash) 148 | HashLock = new(sync.RWMutex) 149 | tdb, err := leveldb.OpenFile(path, nil) 150 | if err != nil { 151 | log.Fatalln(err) 152 | } 153 | DB = tdb 154 | c := make(chan os.Signal, 1) 155 | signal.Notify(c, os.Interrupt, syscall.SIGTERM) 156 | signal.Notify(c, os.Interrupt, syscall.SIGINT) 157 | go func() { 158 | <-c 159 | DB.Close() 160 | os.Exit(1) 161 | }() 162 | } 163 | -------------------------------------------------------------------------------- /fsnotify.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "io/fs" 5 | "log" 6 | "os" 7 | "path/filepath" 8 | "strings" 9 | 10 | "github.com/fsnotify/fsnotify" 11 | ) 12 | 13 | func watchDir(dir string, nocopy bool, dontHash bool) chan bool { 14 | dirSplit := strings.Split(dir, string(os.PathSeparator)) 15 | dirName := dirSplit[len(dirSplit)-2] 16 | 17 | localDirs := make(map[string]bool) 18 | 19 | // creates a new file watcher
20 | watcher, err := fsnotify.NewWatcher() 21 | if err != nil { 22 | log.Println("ERROR", err) 23 | return nil 24 | } 25 | 26 | watchThis := func(path string, fi fs.DirEntry, err error) error { 27 | // since fsnotify can watch all the files in a directory, watchers only need to be added to each nested directory 28 | // we must check for nil as a panic is possible if fi is for some reason nil 29 | if fi != nil && fi.IsDir() { 30 | filePathSplit := strings.Split(path, string(os.PathSeparator)) 31 | if IgnoreHidden { 32 | if len(filePathSplit[len(filePathSplit)-1]) > 0 { 33 | if filePathSplit[len(filePathSplit)-1][0] == '.' { 34 | return fs.SkipDir 35 | } 36 | } else { 37 | if filePathSplit[len(filePathSplit)-2][0] == '.' { 38 | return fs.SkipDir 39 | } 40 | } 41 | } 42 | return watcher.Add(path) 43 | } 44 | 45 | return nil 46 | } 47 | 48 | addFile := func(fname string, overwrite bool) { 49 | splitName := strings.Split(fname, string(os.PathSeparator)) 50 | parentDir := strings.Join(splitName[:len(splitName)-1], string(os.PathSeparator)) 51 | makeDir := !localDirs[parentDir] 52 | if makeDir { 53 | localDirs[parentDir] = true 54 | } 55 | mfsPath := fname[len(dir):] 56 | if os.PathSeparator != '/' { 57 | mfsPath = strings.ReplaceAll(mfsPath, string(os.PathSeparator), "/") 58 | } 59 | repl, err := AddFile(fname, dirName+"/"+mfsPath, nocopy, makeDir, overwrite) 60 | if err != nil { 61 | log.Println("WATCHER ERROR", err) 62 | } 63 | if repl != "" { 64 | if Verbose { 65 | log.Println("AddFile reply:", repl) 66 | } 67 | } 68 | if Hashes != nil { 69 | HashLock.Lock() 70 | if Hashes[fname] != nil { 71 | Hashes[fname].Recalculate(fname, dontHash) 72 | } else { 73 | Hashes[fname] = new(FileHash).Recalculate(fname, dontHash) 74 | } 75 | Hashes[fname].Update() 76 | HashLock.Unlock() 77 | } 78 | } 79 | 80 | addDir := func(path string, fi fs.DirEntry, err error) error { 81 | if fi != nil && fi.IsDir() { 82 | filePathSplit := strings.Split(path, string(os.PathSeparator)) 83 | if 
IgnoreHidden { 84 | if len(filePathSplit[len(filePathSplit)-1]) > 0 { 85 | if filePathSplit[len(filePathSplit)-1][0] == '.' { 86 | return fs.SkipDir 87 | } 88 | } else { 89 | if filePathSplit[len(filePathSplit)-2][0] == '.' { 90 | return fs.SkipDir 91 | } 92 | } 93 | } 94 | return nil 95 | } else { 96 | addFile(path, false) 97 | } 98 | 99 | return nil 100 | } 101 | 102 | // starting at the root of the project, walk each file/directory searching for directories 103 | if err := filepath.WalkDir(dir, watchThis); err != nil { 104 | log.Println("ERROR", err) 105 | } 106 | 107 | done := make(chan bool, 1) 108 | 109 | go func() { 110 | defer watcher.Close() 111 | for { 112 | select { 113 | // watch for events 114 | case event, ok := <-watcher.Events: 115 | if !ok { 116 | log.Println("NOT OK") 117 | return 118 | } 119 | if Verbose { 120 | log.Println("fsnotify event:", event) 121 | } 122 | if len(event.Name) == 0 { 123 | continue 124 | } 125 | filePathSplit := strings.Split(event.Name, string(os.PathSeparator)) 126 | if IgnoreHidden { 127 | if len(filePathSplit[len(filePathSplit)-1]) > 0 { 128 | if filePathSplit[len(filePathSplit)-1][0] == '.' { 129 | continue 130 | } 131 | } else { 132 | if filePathSplit[len(filePathSplit)-2][0] == '.' 
{ 133 | continue 134 | } 135 | } 136 | } 137 | splitName := strings.Split(event.Name, ".") 138 | if findInStringSlice(Ignore, splitName[len(splitName)-1]) > -1 { 139 | continue 140 | } 141 | switch event.Op { 142 | case fsnotify.Create: 143 | fi, err := os.Stat(event.Name) 144 | if err != nil { 145 | log.Println("WATCHER ERROR", err) 146 | } else if !fi.Mode().IsDir() { 147 | addFile(event.Name, true) 148 | } else if err := filepath.WalkDir(event.Name, watchThis); err == nil { 149 | filepath.WalkDir(event.Name, addDir) 150 | } else { 151 | log.Println("ERROR", err) 152 | } 153 | case fsnotify.Write: 154 | addFile(event.Name, true) 155 | case fsnotify.Remove, fsnotify.Rename: 156 | // check if file is *actually* gone 157 | _, err := os.Stat(event.Name) 158 | if err == nil { 159 | continue 160 | } 161 | // remove watcher, just in case it's a directory 162 | watcher.Remove(event.Name) 163 | if localDirs[event.Name] { 164 | delete(localDirs, event.Name) 165 | } 166 | fpath := event.Name[len(dir):] 167 | if string(os.PathSeparator) != "/" { 168 | fpath = strings.ReplaceAll(fpath, string(os.PathSeparator), "/") 169 | } 170 | log.Println("Removing", dirName+"/"+fpath, "...") 171 | err = RemoveFile(dirName + "/" + fpath) 172 | if err != nil { 173 | log.Println("ERROR", err) 174 | } 175 | if Hashes != nil { 176 | HashLock.Lock() 177 | Hashes[event.Name].Delete(event.Name) 178 | HashLock.Unlock() 179 | } 180 | } 181 | case err, ok := <-watcher.Errors: 182 | if !ok { 183 | log.Println("WATCHER NOT OK") 184 | return 185 | } 186 | log.Println("error:", err) 187 | case <-done: 188 | return 189 | } 190 | } 191 | }() 192 | 193 | return done 194 | } 195 | -------------------------------------------------------------------------------- /go.mod: -------------------------------------------------------------------------------- 1 | module github.com/TheDiscordian/ipfs-sync 2 | 3 | go 1.16 4 | 5 | require ( 6 | github.com/cespare/xxhash/v2 v2.1.1 7 | github.com/fsnotify/fsnotify v1.4.9 8 
| github.com/syndtr/goleveldb v1.0.0 9 | gopkg.in/yaml.v2 v2.4.0 // indirect 10 | gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b 11 | ) 12 | -------------------------------------------------------------------------------- /go.sum: -------------------------------------------------------------------------------- 1 | github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY= 2 | github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= 3 | github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= 4 | github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4= 5 | github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= 6 | github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= 7 | github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db h1:woRePGFeVFfLKN/pOkfl+p/TAqKOfFu+7KPlMVpok/w= 8 | github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= 9 | github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI= 10 | github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= 11 | github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= 12 | github.com/onsi/ginkgo v1.7.0 h1:WSHQ+IS43OoUrWtD1/bbclrwK8TTH5hzp+umCiuxHgs= 13 | github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= 14 | github.com/onsi/gomega v1.4.3 h1:RE1xgDvH7imwFD45h+u2SgIfERHlS2yNG4DObb5BSKU= 15 | github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= 16 | github.com/syndtr/goleveldb v1.0.0 h1:fBdIW9lB4Iz0n9khmH8w27SJ3QEJ7+IgjPEwGSZiFdE= 17 | github.com/syndtr/goleveldb v1.0.0/go.mod h1:ZVVdQEZoIme9iO1Ch2Jdy24qqXrMMOU6lpPAyBWyWuQ= 18 | golang.org/x/net v0.0.0-20180906233101-161cd47e91fd h1:nTDtHvHSdCn1m6ITfMRqtOd/9+7a3s8RBNOZ3eYZzJA= 
19 | golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= 20 | golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 21 | golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 22 | golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9 h1:L2auWcuQIvxz9xSEqzESnV/QN/gNRXNApHi3fYwl2w0= 23 | golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 24 | golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg= 25 | golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= 26 | gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= 27 | gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= 28 | gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4= 29 | gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys= 30 | gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ= 31 | gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw= 32 | gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= 33 | gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= 34 | gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= 35 | gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo= 36 | gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= 37 | -------------------------------------------------------------------------------- /main.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | 
"bytes" 5 | "context" 6 | "encoding/json" 7 | "errors" 8 | "fmt" 9 | "io" 10 | "io/fs" 11 | "io/ioutil" 12 | "log" 13 | "mime/multipart" 14 | "net/http" 15 | "net/textproto" 16 | "net/url" 17 | "os" 18 | "path/filepath" 19 | "strings" 20 | "time" 21 | ) 22 | 23 | const ( 24 | KeySpace = "ipfs-sync." 25 | API = "/api/v0/" 26 | ) 27 | 28 | func findInStringSlice(slice []string, val string) int { 29 | for i, item := range slice { 30 | if item == val { 31 | return i 32 | } 33 | } 34 | return -1 35 | } 36 | 37 | // doRequest does an API request to the node specified in EndPoint. If timeout is 0 it isn't used. 38 | func doRequest(timeout time.Duration, cmd string) (string, error) { 39 | var cancel context.CancelFunc 40 | ctx := context.Background() 41 | if timeout > 0 { 42 | ctx, cancel = context.WithTimeout(ctx, timeout) 43 | defer cancel() 44 | } 45 | c := &http.Client{} 46 | req, err := http.NewRequestWithContext(ctx, "POST", EndPoint+API+cmd, nil) 47 | if err != nil { 48 | return "", err 49 | } 50 | resp, err := c.Do(req) 51 | if err != nil { 52 | return "", err 53 | } 54 | defer resp.Body.Close() 55 | body, err := ioutil.ReadAll(resp.Body) 56 | if err != nil { 57 | return "", err 58 | } 59 | 60 | errStruct := new(ErrorStruct) 61 | err = json.Unmarshal(body, errStruct) 62 | if err == nil { 63 | if errStruct.Error() != "" { 64 | return string(body), errStruct 65 | } 66 | } 67 | 68 | return string(body), nil 69 | } 70 | 71 | // HashStruct is useful when you only care about the returned hash. 72 | type HashStruct struct { 73 | Hash string 74 | } 75 | 76 | // GetFileCID gets a file CID based on MFS path relative to BasePath. 
77 | func GetFileCID(filePath string) string {
78 | 	out, _ := doRequest(TimeoutTime, "files/stat?hash=true&arg="+url.QueryEscape(BasePath+filePath))
79 | 
80 | 	fStat := new(HashStruct)
81 | 
82 | 	err := json.Unmarshal([]byte(out), &fStat)
83 | 	if err != nil {
84 | 		return ""
85 | 	}
86 | 	return fStat.Hash
87 | }
88 | 
89 | // RemoveFile removes a file from the MFS relative to BasePath.
90 | func RemoveFile(fpath string) error {
91 | 	_, err := doRequest(TimeoutTime, fmt.Sprintf(`files/rm?arg=%s&force=true`, url.QueryEscape(BasePath+fpath)))
92 | 	return err
93 | }
94 | 
95 | // MakeDir makes a directory in the MFS, along with its parents, relative to BasePath.
96 | func MakeDir(path string) error {
97 | 	_, err := doRequest(TimeoutTime, fmt.Sprintf(`files/mkdir?arg=%s&parents=true`, url.QueryEscape(BasePath+path)))
98 | 	return err
99 | }
100 | 
101 | func filePathWalkDir(root string) ([]string, error) {
102 | 	var files []string
103 | 	err := filepath.WalkDir(root, func(path string, info fs.DirEntry, err error) error {
104 | 		if info == nil {
105 | 			return fmt.Errorf("cannot access '%s' for crawling", path)
106 | 		}
107 | 		if !info.IsDir() {
108 | 			filePathSplit := strings.Split(path, string(os.PathSeparator))
109 | 			if IgnoreHidden && filePathSplit[len(filePathSplit)-1][0] == '.' {
110 | 				return nil
111 | 			}
112 | 			files = append(files, path)
113 | 		} else {
114 | 			dirPathSplit := strings.Split(path, string(os.PathSeparator))
115 | 			if IgnoreHidden && len(dirPathSplit[len(dirPathSplit)-1]) > 0 && dirPathSplit[len(dirPathSplit)-1][0] == '.' {
116 | 				return filepath.SkipDir
117 | 			}
118 | 		}
119 | 		return nil
120 | 	})
121 | 	return files, err
122 | }
123 | 
124 | // AddDir adds a directory and returns its CID.
125 | func AddDir(path string, nocopy bool, pin bool, estuary bool) (string, error) {
126 | 	pathSplit := strings.Split(path, string(os.PathSeparator))
127 | 	dirName := pathSplit[len(pathSplit)-2]
128 | 	files, err := filePathWalkDir(path)
129 | 	if err != nil {
130 | 		return "", err
131 | 	}
132 | 	localDirs := make(map[string]bool)
133 | 	for _, file := range files {
134 | 		filePathSplit := strings.Split(file, string(os.PathSeparator))
135 | 		if IgnoreHidden && filePathSplit[len(filePathSplit)-1][0] == '.' {
136 | 			continue
137 | 		}
138 | 		splitName := strings.Split(file, ".")
139 | 		if findInStringSlice(Ignore, splitName[len(splitName)-1]) > -1 {
140 | 			continue
141 | 		}
142 | 		parentDir := strings.Join(filePathSplit[:len(filePathSplit)-1], string(os.PathSeparator))
143 | 		makeDir := !localDirs[parentDir]
144 | 		if makeDir {
145 | 			localDirs[parentDir] = true
146 | 		}
147 | 		mfsPath := file[len(path):]
148 | 		if os.PathSeparator != '/' {
149 | 			mfsPath = strings.ReplaceAll(mfsPath, string(os.PathSeparator), "/")
150 | 		}
151 | 		_, err := AddFile(file, dirName+"/"+mfsPath, nocopy, makeDir, false)
152 | 		if err != nil {
153 | 			log.Println("Error adding file:", err)
154 | 		}
155 | 	}
156 | 	cid := GetFileCID(dirName)
157 | 	if pin {
158 | 		err := Pin(cid)
159 | 		if err != nil { log.Println("Error pinning", dirName, ":", err) }
160 | 	}
161 | 	if estuary {
162 | 		if err := PinEstuary(cid, dirName); err != nil {
163 | 			log.Println("Error pinning to Estuary:", err)
164 | 		}
165 | 	}
166 | 	return cid, err
167 | }
168 | 
169 | // IPFSAddFile performs a simple IPFS add; if onlyhash is true, only the CID is generated and returned.
170 | func IPFSAddFile(fpath string, nocopy, onlyhash bool) (*HashStruct, error) {
171 | 	client := http.Client{}
172 | 	f, err := os.Open(fpath)
173 | 	if err != nil {
174 | 		return nil, err
175 | 	}
176 | 
177 | 	pr, pw := io.Pipe()
178 | 	writer := multipart.NewWriter(pw)
179 | 
180 | 	defer pr.Close()
181 | 
182 | 	req, err := http.NewRequest("POST", EndPoint+API+fmt.Sprintf(`add?nocopy=%t&pin=false&quieter=true&only-hash=%t`,
nocopy, onlyhash), pr) 183 | if err != nil { 184 | return nil, err 185 | } 186 | req.Header.Add("Content-Type", writer.FormDataContentType()) 187 | 188 | go func() { 189 | defer f.Close() 190 | defer writer.Close() 191 | 192 | h := make(textproto.MIMEHeader) 193 | h.Set("Abspath", fpath) 194 | h.Set("Content-Disposition", fmt.Sprintf(`form-data; name="%s"; filename="%s"`, "file", url.QueryEscape(f.Name()))) 195 | h.Set("Content-Type", "application/octet-stream") 196 | 197 | part, err := writer.CreatePart(h) 198 | if err != nil { 199 | pw.CloseWithError(err) 200 | return 201 | } 202 | 203 | if Verbose { 204 | log.Println("Generating file headers...") 205 | } 206 | 207 | _, err = io.Copy(part, f) 208 | pw.CloseWithError(err) 209 | }() 210 | 211 | if Verbose { 212 | log.Println("Doing add request...") 213 | } 214 | 215 | resp, err := client.Do(req) 216 | if err != nil { 217 | return nil, err 218 | } 219 | defer resp.Body.Close() 220 | 221 | var hash HashStruct 222 | err = json.NewDecoder(resp.Body).Decode(&hash) 223 | 224 | if Verbose { 225 | log.Println("File hash:", hash.Hash) 226 | } 227 | 228 | return &hash, err 229 | } 230 | 231 | // AddFile adds a file to the MFS relative to BasePath. from should be the full path to the file intended to be added. 232 | // If makedir is true, it'll create the directory it'll be placed in. 233 | // If overwrite is true, it'll perform an rm before copying to MFS. 
234 | func AddFile(from, to string, nocopy bool, makedir bool, overwrite bool) (string, error) {
235 | 	log.Println("Adding file from", from, "to", BasePath+to, "...")
236 | 	hash, err := IPFSAddFile(from, nocopy, false)
237 | 	if err != nil {
238 | 		return "", err
239 | 	}
240 | 
241 | 	if makedir {
242 | 		toSplit := strings.Split(to, "/")
243 | 		parent := strings.Join(toSplit[:len(toSplit)-1], "/")
244 | 		if Verbose {
245 | 			log.Printf("Creating parent directory '%s' in MFS...\n", parent)
246 | 		}
247 | 		err = MakeDir(parent)
248 | 		if err != nil {
249 | 			return "", err
250 | 		}
251 | 	}
252 | 
253 | 	if overwrite {
254 | 		if Verbose {
255 | 			log.Println("Removing existing file (if any)...")
256 | 		}
257 | 		RemoveFile(to)
258 | 	}
259 | 
260 | 	// send files/cp request
261 | 	if Verbose {
262 | 		log.Println("Adding file to mfs path:", BasePath+to)
263 | 	}
264 | 	_, err = doRequest(TimeoutTime, fmt.Sprintf(`files/cp?arg=%s&arg=%s`, "/ipfs/"+url.QueryEscape(hash.Hash), url.QueryEscape(BasePath+to)))
265 | 	if err != nil {
266 | 		if Verbose {
267 | 			log.Println("Error on files/cp:", err)
268 | 			log.Println("fpath:", from)
269 | 		}
270 | 		if HandleBadBlockError(err, from, nocopy) {
271 | 			log.Println("files/cp failure due to filestore, retrying (recursive)")
272 | 			return AddFile(from, to, nocopy, makedir, overwrite)
273 | 		}
274 | 	}
275 | 	return hash.Hash, err
276 | }
277 | 
278 | type FileStoreStatus int
279 | 
280 | const NoFile FileStoreStatus = 11
281 | 
282 | type FileStoreKey struct {
283 | 	Slash string `json:"/"`
284 | }
285 | 
286 | // FileStoreEntry is for results returned by `filestore/verify`, only processes Status and Key, as that's all ipfs-sync uses.
287 | type FileStoreEntry struct {
288 | 	Status FileStoreStatus
289 | 	Key    FileStoreKey
290 | }
291 | 
292 | var fileStoreCleanupLock chan int
293 | 
294 | func init() {
295 | 	fileStoreCleanupLock = make(chan int, 1)
296 | }
297 | 
298 | // RefResp is a single entry streamed back by the `refs` endpoint; only Err and Ref are processed, as that's all ipfs-sync uses.
299 | type RefResp struct {
300 | 	Err string
301 | 	Ref string
302 | }
303 | 
304 | // RemoveCID completely removes a CID, even if pinned.
305 | func RemoveCID(cid string) {
306 | 	var found bool
307 | 	// Build our own request because we want to stream data...
308 | 	c := &http.Client{}
309 | 	req, err := http.NewRequest("POST", EndPoint+API+"refs?unique=true&recursive=true&arg="+cid, nil)
310 | 	if err != nil {
311 | 		log.Println(err)
312 | 		return
313 | 	}
314 | 
315 | 	// Send request
316 | 	resp, err := c.Do(req)
317 | 	if err != nil {
318 | 		log.Println(err)
319 | 		return
320 | 	}
321 | 	defer resp.Body.Close()
322 | 
323 | 	dec := json.NewDecoder(resp.Body)
324 | 	if err != nil {
325 | 		log.Println(err)
326 | 		return
327 | 	}
328 | 
329 | 	// Decode the json stream and process it
330 | 	for dec.More() {
331 | 		found = true
332 | 		refResp := new(RefResp)
333 | 		err := dec.Decode(refResp)
334 | 		if err != nil {
335 | 			log.Println("Error decoding ref response stream:", err)
336 | 			continue
337 | 		}
338 | 
339 | 		newcid := refResp.Ref
340 | 		if newcid == "" {
341 | 			newcid = cid
342 | 		}
343 | 
344 | 		if Verbose {
345 | 			log.Println("Removing block:", newcid)
346 | 		}
347 | 		RemoveBlock(newcid)
348 | 	}
349 | 	if !found {
350 | 		if Verbose {
351 | 			log.Println("Removing block:", cid)
352 | 		}
353 | 		RemoveBlock(cid)
354 | 	}
355 | }
356 | 
357 | // RemoveBlock removes a block, even if pinned.
358 | func RemoveBlock(cid string) {
359 | 	var err error
360 | 	for _, err = doRequest(TimeoutTime, "block/rm?arg="+cid); err != nil && strings.HasPrefix(err.Error(), "pinned"); _, err = doRequest(TimeoutTime, "block/rm?arg="+cid) {
361 | 		splitErr := strings.Split(err.Error(), " ")
362 | 		var cid2
string
363 | 		if len(splitErr) < 3 { // IPFS returns just "pinned (recursive)" when the file has been explicitly pinned; in that case the error omits the CID, so fall back to the one we were given.
364 | 			cid2 = cid
365 | 		} else {
366 | 			cid2 = splitErr[2]
367 | 		}
368 | 		log.Println("Affected block is pinned, removing pin:", cid2)
369 | 		_, err := doRequest(0, "pin/rm?arg="+cid2) // no timeout
370 | 		if err != nil {
371 | 			log.Println("Error removing pin:", err)
372 | 		}
373 | 	}
374 | 
375 | 	if err != nil {
376 | 		log.Println("Error removing bad block:", err)
377 | 	}
378 | }
379 | 
380 | // CleanFilestore removes blocks that point to files that don't exist.
381 | func CleanFilestore() {
382 | 	select {
383 | 	case fileStoreCleanupLock <- 1:
384 | 		defer func() { <-fileStoreCleanupLock }()
385 | 	default:
386 | 		return
387 | 	}
388 | 	if Verbose {
389 | 		log.Println("Removing blocks that point to a file that doesn't exist from filestore...")
390 | 	}
391 | 
392 | 	// Build our own request because we want to stream data...
393 | 	c := &http.Client{}
394 | 	req, err := http.NewRequest("POST", EndPoint+API+"filestore/verify", nil)
395 | 	if err != nil {
396 | 		log.Println(err)
397 | 		return
398 | 	}
399 | 
400 | 	// Send request
401 | 	resp, err := c.Do(req)
402 | 	if err != nil {
403 | 		log.Println(err)
404 | 		return
405 | 	}
406 | 	defer resp.Body.Close()
407 | 
408 | 	dec := json.NewDecoder(resp.Body)
409 | 	if err != nil {
410 | 		log.Println(err)
411 | 		return
412 | 	}
413 | 
414 | 	// Decode the json stream and process it
415 | 	for dec.More() {
416 | 		fsEntry := new(FileStoreEntry)
417 | 		err := dec.Decode(fsEntry)
418 | 		if err != nil {
419 | 			log.Println("Error decoding fsEntry stream:", err)
420 | 			continue
421 | 		}
422 | 		if fsEntry.Status == NoFile { // if the block points to a file that doesn't exist, remove it.
423 | 			log.Println("Removing reference from filestore:", fsEntry.Key.Slash)
424 | 			RemoveBlock(fsEntry.Key.Slash)
425 | 		}
426 | 	}
427 | }
428 | 
429 | // HandleBadBlockError attempts to repair the filestore and returns true if the error was a bad block error.
430 | func HandleBadBlockError(err error, fpath string, nocopy bool) bool {
431 | 	txt := err.Error()
432 | 	if strings.HasPrefix(txt, "failed to get block") || strings.HasSuffix(txt, "no such file or directory") {
433 | 		if Verbose {
434 | 			log.Println("Handling bad block error: " + txt)
435 | 		}
436 | 		if fpath == "" { // TODO attempt to get fpath from error msg when possible
437 | 			CleanFilestore()
438 | 		} else {
439 | 			cid, err := IPFSAddFile(fpath, nocopy, true)
440 | 			if err == nil {
441 | 				RemoveCID(cid.Hash)
442 | 			} else {
443 | 				log.Println("Error handling bad block error:", err)
444 | 			}
445 | 		}
446 | 		return true
447 | 	}
448 | 	return false
449 | }
450 | 
451 | // Pin pins a CID.
452 | func Pin(cid string) error {
453 | 	resp, err := doRequest(0, "pin/add?arg="+url.QueryEscape(cid)) // no timeout
454 | 	if resp != "" {
455 | 		if Verbose {
456 | 			log.Println("Pin response:", resp)
457 | 		}
458 | 	}
459 | 	return err
460 | }
461 | 
462 | // ErrorStruct allows us to read the errors received by the IPFS daemon.
463 | type ErrorStruct struct {
464 | 	Message string // used for error text
465 | 	Error2  string `json:"Error"` // also used for error text
466 | 	Code    int
467 | 	Type    string
468 | }
469 | 
470 | // Error outputs the error text contained in the struct, satisfying the error interface.
471 | func (es *ErrorStruct) Error() string {
472 | 	switch {
473 | 	case es.Message != "":
474 | 		return es.Message
475 | 	case es.Error2 != "":
476 | 		return es.Error2
477 | 	}
478 | 	return ""
479 | }
480 | 
481 | // UpdatePin updates a recursive pin to a new CID, unpinning old content.
482 | func UpdatePin(from, to string, nocopy bool) { 483 | _, err := doRequest(0, "pin/update?arg="+url.QueryEscape(from)+"&arg="+url.QueryEscape(to)) // no timeout 484 | if err != nil { 485 | log.Println("Error updating pin:", err) 486 | if Verbose { 487 | log.Println("From CID:", from, "To CID:", to) 488 | } 489 | if HandleBadBlockError(err, "", nocopy) { 490 | if Verbose { 491 | log.Println("Bad blocks found, running pin/update again (recursive)") 492 | } 493 | UpdatePin(from, to, nocopy) 494 | return 495 | } 496 | err = Pin(to) 497 | if err != nil { 498 | log.Println("[ERROR] Error adding pin:", err) 499 | } 500 | } 501 | } 502 | 503 | // Key contains information about an IPNS key. 504 | type Key struct { 505 | Id string 506 | Name string 507 | } 508 | 509 | // Keys is used to store a slice of Key. 510 | type Keys struct { 511 | Keys []Key 512 | } 513 | 514 | // ListKeys lists all the keys in the IPFS daemon. 515 | // TODO Only return keys in the namespace. 516 | func ListKeys() (*Keys, error) { 517 | res, err := doRequest(TimeoutTime, "key/list") 518 | if err != nil { 519 | return nil, err 520 | } 521 | keys := new(Keys) 522 | err = json.Unmarshal([]byte(res), keys) 523 | if err != nil { 524 | return nil, err 525 | } 526 | return keys, nil 527 | } 528 | 529 | // ResolveIPNS takes an IPNS key and returns the CID it resolves to. 530 | func ResolveIPNS(key string) (string, error) { 531 | res, err := doRequest(0, "name/resolve?arg="+key) // no timeout 532 | if err != nil { 533 | return "", err 534 | } 535 | type PathStruct struct { 536 | Path string 537 | } 538 | path := new(PathStruct) 539 | err = json.Unmarshal([]byte(res), path) 540 | if err != nil { 541 | return "", err 542 | } 543 | pathSplit := strings.Split(path.Path, "/") 544 | if len(pathSplit) < 3 { 545 | return "", errors.New("Unexpected output in name/resolve: " + path.Path) 546 | } 547 | return pathSplit[2], nil 548 | } 549 | 550 | // Generates an IPNS key in the keyspace based on name. 
551 | func GenerateKey(name string) Key { 552 | res, err := doRequest(TimeoutTime, "key/gen?arg="+KeySpace+name) 553 | if err != nil { 554 | log.Panicln("[ERROR]", err) 555 | } 556 | key := new(Key) 557 | err = json.Unmarshal([]byte(res), key) 558 | if err != nil { 559 | log.Panicln("[ERROR]", err) 560 | } 561 | return *key 562 | } 563 | 564 | // Publish CID to IPNS 565 | func Publish(cid, key string) error { 566 | _, err := doRequest(0, fmt.Sprintf("name/publish?arg=%s&key=%s", url.QueryEscape(cid), KeySpace+key)) // no timeout 567 | return err 568 | } 569 | 570 | type EstuaryFile struct { 571 | Cid string 572 | Name string 573 | } 574 | 575 | type IPFSRemotePinningResponse struct { 576 | Count int 577 | Results []*IPFSRemotePinResult 578 | } 579 | 580 | type IPFSRemotePinResult struct { 581 | RequestId string 582 | Pin *IPFSRemotePin 583 | } 584 | 585 | type IPFSRemotePin struct { 586 | Cid string 587 | } 588 | 589 | func doEstuaryRequest(reqType, cmd string, jsonData []byte) (string, error) { 590 | if EstuaryAPIKey == "" { 591 | return "", errors.New("Estuary API key is blank.") 592 | } 593 | var cancel context.CancelFunc 594 | ctx := context.Background() 595 | if TimeoutTime > 0 { 596 | ctx, cancel = context.WithTimeout(ctx, TimeoutTime) 597 | defer cancel() 598 | } 599 | c := &http.Client{} 600 | 601 | var ( 602 | req *http.Request 603 | err error 604 | ) 605 | if jsonData != nil { 606 | req, err = http.NewRequestWithContext(ctx, reqType, "https://api.estuary.tech/"+cmd, bytes.NewBuffer(jsonData)) 607 | } else { 608 | req, err = http.NewRequestWithContext(ctx, reqType, "https://api.estuary.tech/"+cmd, nil) 609 | } 610 | if err != nil { 611 | return "", err 612 | } 613 | 614 | req.Header.Add("Authorization", "Bearer "+EstuaryAPIKey) 615 | req.Header.Add("Content-Type", "application/json") 616 | resp, err := c.Do(req) 617 | if err != nil { 618 | return "", err 619 | } 620 | defer resp.Body.Close() 621 | body, err := ioutil.ReadAll(resp.Body) 622 | if err != nil 
{ 623 | return "", err 624 | } 625 | 626 | errStruct := new(ErrorStruct) 627 | err = json.Unmarshal(body, errStruct) 628 | if err == nil { 629 | if errStruct.Error() != "" { 630 | return string(body), errStruct 631 | } 632 | } 633 | 634 | return string(body), nil 635 | } 636 | 637 | func PinEstuary(cid, name string) error { 638 | jsonData, _ := json.Marshal(&EstuaryFile{Cid: cid, Name: name}) 639 | _, err := doEstuaryRequest("POST", "pinning/pins", jsonData) 640 | return err 641 | } 642 | 643 | func UpdatePinEstuary(oldcid, newcid, name string) { 644 | resp, err := doEstuaryRequest("GET", "pinning/pins?cid="+oldcid, nil) 645 | if err != nil { 646 | log.Println("Error getting Estuary pin:", err) 647 | return 648 | } 649 | pinResp := new(IPFSRemotePinningResponse) 650 | err = json.Unmarshal([]byte(resp), pinResp) 651 | if err != nil { 652 | log.Println("Error decoding Estuary pin list:", err) 653 | return 654 | } 655 | // FIXME Estuary doesn't seem to support `cid` GET field yet, this code can be removed when it does: 656 | var reqId string 657 | pinResp.Count = 0 658 | for _, pinResult := range pinResp.Results { 659 | if pinResult.Pin.Cid == oldcid { 660 | reqId = pinResult.RequestId 661 | pinResp.Count = 1 662 | break 663 | } 664 | } 665 | // END OF FIXME 666 | jsonData, _ := json.Marshal(&EstuaryFile{Cid: newcid, Name: name}) 667 | if pinResp.Count > 0 { 668 | _, err := doEstuaryRequest("POST", "pinning/pins/"+reqId, jsonData) 669 | if err != nil { 670 | log.Println("Error updating Estuary pin:", err) 671 | } else { 672 | return 673 | } 674 | } 675 | err = PinEstuary(newcid, name) 676 | if err != nil { 677 | log.Println("Error pinning to Estuary:", err) 678 | } 679 | } 680 | 681 | // WatchDog watches for directory updates, periodically updates IPNS records, and updates recursive pins. 
682 | func WatchDog() { 683 | // Init WatchDog 684 | keys, err := ListKeys() 685 | if err != nil { 686 | log.Fatalln("Failed to retrieve keys:", err) 687 | } 688 | for _, dk := range DirKeys { 689 | found := false 690 | 691 | splitPath := strings.Split(dk.Dir, string(os.PathSeparator)) 692 | dk.MFSPath = splitPath[len(splitPath)-2] 693 | 694 | // Hash directory if we're using a DB. 695 | if DB != nil { 696 | if Verbose { 697 | log.Println("Hashing", dk.Dir, "...") 698 | } 699 | 700 | hashmap, err := HashDir(dk.Dir, dk.DontHash) 701 | if err != nil { 702 | log.Panicln("Error hashing directory for hash DB:", err) 703 | } 704 | localDirs := make(map[string]bool) 705 | HashLock.Lock() 706 | for _, hash := range hashmap { 707 | if hash.Update() { 708 | if Verbose { 709 | log.Println("File updated:", hash.PathOnDisk) 710 | } 711 | 712 | // grab parent dir, check if we've already created it 713 | splitName := strings.Split(hash.PathOnDisk, string(os.PathSeparator)) 714 | parentDir := strings.Join(splitName[:len(splitName)-1], string(os.PathSeparator)) 715 | makeDir := !localDirs[parentDir] 716 | if makeDir { 717 | localDirs[parentDir] = true 718 | } 719 | 720 | mfsPath := hash.PathOnDisk[len(dk.Dir):] 721 | if os.PathSeparator != '/' { 722 | mfsPath = strings.ReplaceAll(mfsPath, string(os.PathSeparator), "/") 723 | } 724 | _, err := AddFile(hash.PathOnDisk, dk.MFSPath+"/"+mfsPath, dk.Nocopy, makeDir, false) 725 | if err != nil { 726 | log.Println("Error adding file:", err) 727 | } 728 | } 729 | Hashes[hash.PathOnDisk] = hash 730 | } 731 | HashLock.Unlock() 732 | } 733 | 734 | // Check if we recognize any keys, mark them as found, and load them if so. 
735 | for _, ik := range keys.Keys { 736 | if ik.Name == KeySpace+dk.ID { 737 | var err error 738 | dk.CID, err = ResolveIPNS(ik.Id) 739 | if err != nil { 740 | log.Println("Error resolving IPNS:", err) 741 | log.Println("Republishing key...") 742 | dk.CID = GetFileCID(dk.MFSPath) 743 | Publish(dk.CID, dk.ID) 744 | } 745 | found = true 746 | log.Println(dk.ID, "loaded:", ik.Id) 747 | watchDir(dk.Dir, dk.Nocopy, dk.DontHash) 748 | break 749 | } 750 | } 751 | if found { 752 | continue 753 | } 754 | log.Println(dk.ID, "not found, generating...") 755 | ik := GenerateKey(dk.ID) 756 | var err error 757 | dk.CID, err = AddDir(dk.Dir, dk.Nocopy, dk.Pin, dk.Estuary) 758 | if err != nil { 759 | log.Panicln("[ERROR] Failed to add directory:", err) 760 | } 761 | Publish(dk.CID, dk.ID) 762 | log.Println(dk.ID, "loaded:", ik.Id) 763 | watchDir(dk.Dir, dk.Nocopy, dk.DontHash) 764 | } 765 | 766 | // Main loop 767 | for { 768 | time.Sleep(SyncTime) 769 | for _, dk := range DirKeys { 770 | if fCID := GetFileCID(dk.MFSPath); len(fCID) > 0 && fCID != dk.CID { 771 | // log.Printf("[DEBUG] '%s' != '%s'", fCID, dk.CID) 772 | if dk.Pin { 773 | UpdatePin(dk.CID, fCID, dk.Nocopy) 774 | } 775 | if dk.Estuary { 776 | UpdatePinEstuary(dk.CID, fCID, strings.Split(dk.MFSPath, "/")[0]) 777 | } 778 | Publish(fCID, dk.ID) 779 | dk.CID = fCID 780 | log.Println(dk.MFSPath, "updated...") 781 | } 782 | } 783 | } 784 | } 785 | 786 | func main() { 787 | // Process config and flags. 788 | ProcessFlags() 789 | 790 | log.Println("Starting up ipfs-sync", version, "...") 791 | 792 | for _, dk := range DirKeys { 793 | if dk.Nocopy { 794 | // Cleanup filestore first. 795 | if VerifyFilestore { 796 | CleanFilestore() 797 | } 798 | break 799 | } 800 | } 801 | 802 | // Start WatchDog. 
803 | 	log.Println("Starting watchdog...")
804 | 	WatchDog()
805 | }
806 | 
--------------------------------------------------------------------------------
/main_test.go:
--------------------------------------------------------------------------------
1 | package main
2 | 
3 | import (
4 | 	"errors"
5 | 	"testing"
6 | )
7 | 
8 | func init() {
9 | 	EndPoint = "http://127.0.0.1:5001"
10 | 	Verbose = true
11 | }
12 | 
13 | func TestListKeys(t *testing.T) {
14 | 	keys, err := ListKeys()
15 | 	if err != nil {
16 | 		t.Error(err)
17 | 	}
18 | 	if keys == nil {
19 | 		t.Error("Keys are nil!")
20 | 	}
21 | }
22 | 
23 | func TestResolveIPNS(t *testing.T) {
24 | 	cid, err := ResolveIPNS("k51qzi5uqu5djwygzxb01sprni3r6u2nru36gxabe5w8n3go27hxc819ic2w1q")
25 | 	if err != nil {
26 | 		t.Error(err)
27 | 	}
28 | 	if cid != "QmWa7egj1g4Dmv35s1AauW4KZqMBA84WqrRduxRYbQ5T3p" {
29 | 		t.Error("Unexpected CID returned from IPNS query.")
30 | 	}
31 | }
32 | 
33 | func TestCleanFilestore(t *testing.T) {
34 | 	if !HandleBadBlockError(errors.New("no such file or directory"), "", false) {
35 | 		t.Error("Failed to cleanup bad block!")
36 | 	}
37 | }
38 | 
--------------------------------------------------------------------------------
/systemd/README.md:
--------------------------------------------------------------------------------
1 | # ipfs-sync systemd user service
2 | 
3 | ## Setup
4 | 
5 | 1. Ensure `ipfs-sync` is in your path.
6 |    - Alternatively, edit `./user/ipfs-sync.service`, and modify the `ExecStart` line to your liking.
7 | 2. Copy the sample config file `config.yaml.sample` to `~/.ipfs-sync.yaml`, and edit it to your liking.
8 | 3. Copy the service file located in `./user/` to `~/.config/systemd/user/` (create the directory if it doesn't exist).
9 | 4. (Optional) Enable auto-starting the `ipfs-sync` daemon with `systemctl --user enable ipfs-sync`
10 | 5. Start the `ipfs-sync` daemon with `systemctl --user start ipfs-sync`
11 | 6. 
(Optional) Verify the daemon is running with `systemctl --user status ipfs-sync` 12 | 13 | ### Tip 14 | 15 | If you make a configuration change, don't forget to restart the daemon with `systemctl --user restart ipfs-sync`. 16 | -------------------------------------------------------------------------------- /systemd/user/ipfs-sync.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=ipfs-sync 3 | 4 | [Service] 5 | Type=simple 6 | StandardOutput=journal 7 | ExecStart=/bin/bash -c 'ipfs-sync -config $HOME/.ipfs-sync.yaml -db $HOME/.ipfs-sync.db' 8 | 9 | [Install] 10 | WantedBy=default.target 11 | --------------------------------------------------------------------------------
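
Both `doRequest` and `doEstuaryRequest` in main.go decide whether a response body is an error by unmarshalling it into `ErrorStruct` and checking whether it carries any error text. The sketch below isolates that pattern as a standalone, runnable Go program; the `decodeDaemonError` helper name and the sample JSON bodies are illustrative, not part of the repository.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ErrorStruct mirrors the error body returned by the IPFS daemon,
// as declared in main.go: either Message or the "Error" field may carry text.
type ErrorStruct struct {
	Message string // used for error text
	Error2  string `json:"Error"` // also used for error text
	Code    int
	Type    string
}

// Error prefers Message and falls back to Error2, satisfying the error interface.
func (es *ErrorStruct) Error() string {
	switch {
	case es.Message != "":
		return es.Message
	case es.Error2 != "":
		return es.Error2
	}
	return ""
}

// decodeDaemonError reproduces the check doRequest performs on a response body:
// if the body parses as an ErrorStruct with non-empty error text, treat it as an error.
func decodeDaemonError(body []byte) error {
	errStruct := new(ErrorStruct)
	if err := json.Unmarshal(body, errStruct); err == nil && errStruct.Error() != "" {
		return errStruct
	}
	return nil
}

func main() {
	// A hypothetical error body, shaped like the daemon's API errors.
	body := []byte(`{"Message":"file does not exist","Code":0,"Type":"error"}`)
	if err := decodeDaemonError(body); err != nil {
		fmt.Println("daemon reported:", err)
	}

	// A success body (e.g. files/stat output) contains no error text, so it passes through.
	ok := []byte(`{"Hash":"QmExample"}`)
	fmt.Println("success body treated as error:", decodeDaemonError(ok) != nil)
}
```

Note the design choice this pattern implies: a success body that happened to contain a non-empty `Message` or `Error` field would be misreported as an error, which is why the daemon's success responses (hashes, key lists) use distinct field names.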