├── .gitignore ├── CONTRIBUTING ├── LICENSE ├── README.md ├── crfs.go ├── crfs_test.go ├── go.mod ├── go.sum └── stargz ├── stargz.go ├── stargz_test.go └── stargzify └── stargzify.go /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | crfs 3 | /stargz/stargzify/stargzify 4 | -------------------------------------------------------------------------------- /CONTRIBUTING: -------------------------------------------------------------------------------- 1 | To contribute, send a GitHub PR. Unless your change is an obvious fix, 2 | please open an issue about your bug or feature request first, before 3 | sending code. 4 | 5 | Once you've sent a pull request, a bot will check that you've signed 6 | the Google CLA: 7 | 8 | https://cla.developers.google.com 9 | 10 | There is no mailing list yet; use GitHub issues for now. 11 | 12 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2019 Google LLC. All rights reserved. 2 | 3 | Redistribution and use in source and binary forms, with or without 4 | modification, are permitted provided that the following conditions are 5 | met: 6 | 7 | * Redistributions of source code must retain the above copyright 8 | notice, this list of conditions and the following disclaimer. 9 | * Redistributions in binary form must reproduce the above 10 | copyright notice, this list of conditions and the following disclaimer 11 | in the documentation and/or other materials provided with the 12 | distribution. 13 | * Neither the name of Google Inc. nor the names of its 14 | contributors may be used to endorse or promote products derived from 15 | this software without specific prior written permission. 16 | 17 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 18 | "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 19 | LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 20 | A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 21 | OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 22 | SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 23 | LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 24 | DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 25 | THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 26 | (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 27 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 28 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # CRFS: Container Registry Filesystem 2 | 3 | Discussion: https://github.com/golang/go/issues/30829 4 | 5 | ## Overview 6 | 7 | **CRFS** is a read-only FUSE filesystem that lets you mount a 8 | container image, served directly from a container registry (such as 9 | [gcr.io](https://gcr.io/)), without pulling it all locally first. 10 | 11 | ## Background 12 | 13 | Starting a container should be fast. Currently, however, starting a 14 | container in many environments requires doing a `pull` operation from 15 | a container registry to read the entire container image from the 16 | registry and write the entire container image to the local machine's 17 | disk. It's pretty silly (and wasteful) that a read operation becomes a 18 | write operation. For small containers, this problem is rarely noticed. 19 | For larger containers, though, the pull operation quickly becomes the 20 | slowest part of launching a container, especially on a cold node. 21 | Contrast this with launching a VM on major cloud providers: even with 22 | a VM image that's hundreds of gigabytes, the VM boots in seconds. 23 | That's because the hypervisors' block devices are reading from the 24 | network on demand. The cloud providers all have great internal 25 | networks. Why aren't we using those great internal networks to read 26 | our container images on demand? 27 | 28 | ## Why does Go want this? 29 | 30 | Go's continuous build system tests Go on [many operating systems and 31 | architectures](https://build.golang.org/), using a mix of containers 32 | (mostly for Linux) and VMs (for other operating systems). We 33 | prioritize fast builds, targeting 5 minute turnaround for pre-submit 34 | tests when testing new changes. For isolation and other reasons, we 35 | run all our containers in a single-use fresh VMs. Generally our 36 | containers do start quickly, but some of our containers are very large 37 | and take a long time to start. To work around that, we've automated 38 | the creation of VM images where our heavy containers are pre-pulled. 39 | This is all a silly workaround. It'd be much better if we could just 40 | read the bytes over the network from the right place, without the all 41 | the hoops. 42 | 43 | ## Tar files 44 | 45 | One reason that reading the bytes directly from the source on demand 46 | is somewhat non-trivial is that container images are, somewhat 47 | regrettably, represented by *tar.gz* files, and tar files are 48 | unindexed, and gzip streams are not seekable. This means that trying 49 | to read 1KB out of a file named `/var/lib/foo/data` still involves 50 | pulling hundreds of gigabytes to uncompress the stream, to decode the 51 | entire tar file until you find the entry you're looking for. You can't 52 | look it up by its path name. 53 | 54 | ## Introducing Stargz 55 | 56 | Fortunately, we can fix the fact that *tar.gz* files are unindexed and 57 | unseekable, while still making the file a valid *tar.gz* file by 58 | taking advantage of the fact that two gzip streams can be concatenated 59 | and still be a valid gzip stream. So you can just make a tar file 60 | where each tar entry is its own gzip stream. 61 | 62 | We introduce a format, **Stargz**, a **S**eekable 63 | **tar.gz** format that's still a valid tar.gz file for everything else 64 | that's unaware of these details. 65 | 66 | In summary: 67 | 68 | * That traditional `*.tar.gz` format is: `Gzip(TarF(file1) + TarF(file2) + TarF(file3) + TarFooter))` 69 | * Stargz's format is: `Gzip(TarF(file1)) + Gzip(TarF(file2)) + Gzip(TarF(file3_chunk1)) + Gzip(F(file3_chunk2)) + Gzip(F(index of earlier files in magic file), TarFooter)`, where the trailing ZIP-like index contains offsets for each file/chunk's GZIP header in the overall **stargz** file. 70 | 71 | This makes images a few percent larger (due to more gzip headers and 72 | loss of compression context between files), but it's plenty 73 | acceptable. 74 | 75 | ## Converting images 76 | 77 | If you're using `docker push` to push to a registry, you can't use 78 | CRFS to mount the image. Maybe one day `docker push` will push 79 | *stargz* files (or something with similar properties) by default, but 80 | not yet. So for now we need to convert the storage image layers from 81 | *tar.gz* into *stargz*. There is a tool that does that. **TODO: examples** 82 | 83 | ## Operation 84 | 85 | When mounting an image, the FUSE filesystem makes a couple Docker 86 | Registry HTTP API requests to the container registry to get the 87 | metadata for the container and all its layers. 88 | 89 | It then does HTTP Range requests to read just the **stargz** index out 90 | of the end of each of the layers. The index is stored similar to how 91 | the ZIP format's TOC is stored, storing a pointer to the index at the 92 | very end of the file. Generally it takes 1 HTTP request to read the 93 | index, but no more than 2. In any case, we're assuming a fast network 94 | (GCE VMs to gcr.io, or similar) with low latency to the container 95 | registry. Each layer needs these 1 or 2 HTTP requests, but they can 96 | all be done in parallel. 97 | 98 | From that, we keep the index in memory, so `readdir`, `stat`, and 99 | friends are all served from memory. For reading data, the index 100 | contains the offset of each file's `GZIP(TAR(file data))` range of the 101 | overall *stargz* file. To make it possible to efficiently read a small 102 | amount of data from large files, there can actually be multiple 103 | **stargz** index entries for large files. (e.g. a new gzip stream 104 | every 16MB of a large file). 105 | 106 | ## Union/overlay filesystems 107 | 108 | CRFS can do the aufs/overlay2-ish unification of multiple read-only 109 | *stargz* layers, but it will stop short of trying to unify a writable 110 | filesystem layer atop. For that, you can just use the traditional 111 | Linux filesystems. 112 | 113 | ## Using with Docker, without modifying Docker 114 | 115 | Ideally container runtimes would support something like this whole 116 | scheme natively, but in the meantime a workaround is that when 117 | converting an image into *stargz* format, the converter tool can also 118 | produce an image variant that only has metadata (environment, 119 | entrypoints, etc) and no file contents. Then you can bind mount in the 120 | contents from the CRFS FUSE filesystem. 121 | 122 | That is, the convert tool can do: 123 | 124 | **Input**: `gcr.io/your-proj/container:v2` 125 | 126 | **Output**: `gcr.io/your-proj/container:v2meta` + `gcr.io/your-proj/container:v2stargz` 127 | 128 | What you actually run on Docker or Kubernetes then is the `v2meta` 129 | version, so your container host's `docker pull` or equivalent only 130 | pulls a few KB. The gigabytes of remaining data is read lazily via 131 | CRFS from the `v2stargz` layer directly from the container registry. 132 | 133 | ## Status 134 | 135 | WIP. Enough parts are implemented & tested for me to realize this 136 | isn't crazy. I'm publishing this document first for discussion while I 137 | finish things up. Maybe somebody will point me to an existing 138 | implementation, which would be great. 139 | 140 | ## Discussion 141 | 142 | See https://github.com/golang/go/issues/30829 143 | -------------------------------------------------------------------------------- /crfs.go: -------------------------------------------------------------------------------- 1 | // Copyright 2019 The Go Authors. All rights reserved. 2 | // Use of this source code is governed by a BSD-style 3 | // license that can be found in the LICENSE file. 4 | 5 | // The crfs command runs the Container Registry Filesystem, providing a read-only 6 | // FUSE filesystem for container images. 7 | // 8 | // For purposes of documentation, we'll assume you've mounted this at /crfs. 9 | // 10 | // Currently (as of 2019-03-21) it only mounts a single layer at the top level. 11 | // In the future it'll have paths like: 12 | // 13 | // /crfs/image/gcr.io/foo-proj/image/latest 14 | // /crfs/layer/gcr.io/foo-proj/image/latest/xxxxxxxxxxxxxx 15 | // 16 | // For mounting a squashed image and a layer, respectively, with the 17 | // host, owner, image name, and version encoded in the path 18 | // components. 19 | package main 20 | 21 | import ( 22 | "context" 23 | "encoding/json" 24 | "errors" 25 | "flag" 26 | "fmt" 27 | "io" 28 | "io/ioutil" 29 | "log" 30 | "net/http" 31 | "os" 32 | "regexp" 33 | "sort" 34 | "strconv" 35 | "strings" 36 | "sync" 37 | "sync/atomic" 38 | "syscall" 39 | "time" 40 | "unsafe" 41 | 42 | "bazil.org/fuse" 43 | fspkg "bazil.org/fuse/fs" 44 | "cloud.google.com/go/compute/metadata" 45 | "github.com/google/crfs/stargz" 46 | namepkg "github.com/google/go-containerregistry/pkg/name" 47 | "github.com/google/go-containerregistry/pkg/v1/google" 48 | "golang.org/x/sys/unix" 49 | ) 50 | 51 | const ( 52 | debug = false 53 | 54 | // whiteoutPrefix is a filename prefix for a "whiteout" file which is an empty 55 | // file that signifies a path should be deleted. 56 | // See https://github.com/opencontainers/image-spec/blob/775207bd45b6cb8153ce218cc59351799217451f/layer.md#whiteouts 57 | whiteoutPrefix = ".wh." 58 | 59 | // whiteoutOpaqueDir is a filename of "opaque whiteout" which indicates that 60 | // all siblings are hidden in the lower layer. 61 | // See https://github.com/opencontainers/image-spec/blob/775207bd45b6cb8153ce218cc59351799217451f/layer.md#opaque-whiteout 62 | whiteoutOpaqueDir = whiteoutPrefix + whiteoutPrefix + ".opq" 63 | 64 | // opaqueXattr is a key of an xattr for an overalyfs opaque directory. 65 | // See https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt 66 | opaqueXattr = "trusted.overlay.opaque" 67 | 68 | // opaqueXattrValue is value of an xattr for an overalyfs opaque directory. 69 | // See https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt 70 | opaqueXattrValue = "y" 71 | ) 72 | 73 | var ( 74 | fuseDebug = flag.Bool("fuse_debug", false, "enable verbose FUSE debugging") 75 | ) 76 | 77 | func usage() { 78 | fmt.Fprintf(os.Stderr, "Usage of %s:\n", os.Args[0]) 79 | fmt.Fprintf(os.Stderr, " %s (defaults to /crfs)\n", os.Args[0]) 80 | flag.PrintDefaults() 81 | } 82 | 83 | func main() { 84 | flag.Parse() 85 | mntPoint := "/crfs" 86 | if flag.NArg() > 1 { 87 | usage() 88 | os.Exit(2) 89 | } 90 | if flag.NArg() == 1 { 91 | mntPoint = flag.Arg(0) 92 | } 93 | if *fuseDebug { 94 | fuse.Debug = func(msg interface{}) { 95 | log.Printf("fuse debug: %v", msg) 96 | } 97 | } 98 | 99 | log.Printf("crfs: mounting") 100 | c, err := fuse.Mount(mntPoint, fuse.FSName("crfs"), fuse.Subtype("crfs"), fuse.ReadOnly(), fuse.AllowOther()) 101 | if err != nil { 102 | log.Fatal(err) 103 | } 104 | defer c.Close() 105 | defer fuse.Unmount(mntPoint) 106 | 107 | log.Printf("crfs: serving") 108 | fs := new(FS) 109 | err = fspkg.Serve(c, fs) 110 | if err != nil { 111 | log.Fatal(err) 112 | } 113 | 114 | // check if the mount process has an error to report 115 | <-c.Ready 116 | if err := c.MountError; err != nil { 117 | log.Fatal(err) 118 | } 119 | } 120 | 121 | // FS is the CRFS filesystem. 122 | // It implements https://godoc.org/bazil.org/fuse/fs#FS 123 | type FS struct { 124 | // TODO: options, probably. logger, etc. 125 | } 126 | 127 | // Root returns the root filesystem node for the CRFS filesystem. 128 | // See https://godoc.org/bazil.org/fuse/fs#FS 129 | func (fs *FS) Root() (fspkg.Node, error) { 130 | return &rootNode{ 131 | fs: fs, 132 | dirEnts: dirEnts{initChildren: func(de *dirEnts) { 133 | de.m["layers"] = &dirEnt{ 134 | dtype: fuse.DT_Dir, 135 | lookupNode: func(inode uint64) (fspkg.Node, error) { 136 | return newLayersRoot(fs, inode), nil 137 | }, 138 | } 139 | de.m["images"] = &dirEnt{ 140 | dtype: fuse.DT_Dir, 141 | lookupNode: func(inode uint64) (fspkg.Node, error) { 142 | return &imagesRoot{fs: fs, inode: inode}, nil 143 | }, 144 | } 145 | de.m["README-crfs.txt"] = &dirEnt{ 146 | dtype: fuse.DT_File, 147 | lookupNode: func(inode uint64) (fspkg.Node, error) { 148 | return &staticFile{ 149 | inode: inode, 150 | contents: "This is CRFS. See https://github.com/google/crfs.\n", 151 | }, nil 152 | }, 153 | } 154 | }}, 155 | }, nil 156 | } 157 | 158 | // imagesOfHost returns the images for the given registry host and 159 | // owner (e.g. GCP project name). 160 | // 161 | // Note that this is gcr.io specific as there's no way in the Registry 162 | // protocol to do this. So this won't work for index.docker.io. We'll 163 | // need to do something else there. 164 | // TODO: something else for docker hub. 165 | func (fs *FS) imagesOfHost(ctx context.Context, host, owner string) (imageNames []string, err error) { 166 | req, err := http.NewRequest("GET", "https://"+host+"/v2/"+owner+"/tags/list", nil) 167 | if err != nil { 168 | return nil, err 169 | } 170 | // TODO: auth. This works for public stuff so far, though. 171 | req = req.WithContext(ctx) 172 | res, err := http.DefaultClient.Do(req) 173 | if err != nil { 174 | return nil, err 175 | } 176 | defer res.Body.Close() 177 | if res.StatusCode != 200 { 178 | return nil, errors.New(res.Status) 179 | } 180 | var resj struct { 181 | Images []string `json:"child"` 182 | } 183 | if err := json.NewDecoder(res.Body).Decode(&resj); err != nil { 184 | return nil, err 185 | } 186 | sort.Strings(resj.Images) 187 | return resj.Images, nil 188 | } 189 | 190 | type manifest struct { 191 | SchemaVersion int `json:"schemaVersion"` 192 | MediaType string `json:"mediaType"` 193 | Config *blobRef `json:"config"` 194 | Layers []*blobRef `json:"layers"` 195 | } 196 | 197 | type blobRef struct { 198 | Size int64 `json:"size"` 199 | MediaType string `json:"mediaType"` 200 | Digest string `json:"digest"` 201 | } 202 | 203 | func (fs *FS) getManifest(ctx context.Context, host, owner, image, ref string) (*manifest, error) { 204 | urlStr := "https://" + host + "/v2/" + owner + "/" + image + "/manifests/" + ref 205 | req, err := http.NewRequest("GET", urlStr, nil) 206 | if err != nil { 207 | return nil, err 208 | } 209 | // TODO: auth. This works for public stuff so far, though. 210 | req = req.WithContext(ctx) 211 | req.Header.Set("Accept", "*") // application/vnd.docker.distribution.manifest.v2+json 212 | res, err := http.DefaultClient.Do(req) 213 | if err != nil { 214 | return nil, err 215 | } 216 | defer res.Body.Close() 217 | if res.StatusCode != 200 { 218 | slurp, _ := ioutil.ReadAll(res.Body) 219 | return nil, fmt.Errorf("non-200 for %q: %v, %q", urlStr, res.Status, slurp) 220 | } 221 | resj := new(manifest) 222 | if err := json.NewDecoder(res.Body).Decode(resj); err != nil { 223 | return nil, err 224 | } 225 | return resj, nil 226 | } 227 | 228 | func (fs *FS) getConfig(ctx context.Context, host, owner, image, ref string) (string, error) { 229 | urlStr := "https://" + host + "/v2/" + owner + "/" + image + "/blobs/" + ref 230 | req, err := http.NewRequest("GET", urlStr, nil) 231 | if err != nil { 232 | return "", err 233 | } 234 | req = req.WithContext(ctx) 235 | res, err := http.DefaultClient.Do(req) 236 | if err != nil { 237 | return "", err 238 | } 239 | defer res.Body.Close() 240 | if res.StatusCode != 200 { 241 | slurp, _ := ioutil.ReadAll(res.Body) 242 | return "", fmt.Errorf("non-200 for %q: %v, %q", urlStr, res.Status, slurp) 243 | } 244 | slurp, err := ioutil.ReadAll(res.Body) 245 | return string(slurp), err 246 | } 247 | 248 | type dirEnt struct { 249 | lazyInode 250 | dtype fuse.DirentType 251 | lookupNode func(inode uint64) (fspkg.Node, error) 252 | } 253 | 254 | type dirEnts struct { 255 | initOnce sync.Once 256 | initChildren func(*dirEnts) 257 | mu sync.Mutex 258 | m map[string]*dirEnt 259 | } 260 | 261 | func (de *dirEnts) Lookup(ctx context.Context, name string) (fspkg.Node, error) { 262 | de.condInit() 263 | de.mu.Lock() 264 | defer de.mu.Unlock() 265 | e, ok := de.m[name] 266 | if !ok { 267 | log.Printf("returning ENOENT for name %q", name) 268 | return nil, fuse.ENOENT 269 | } 270 | if e.lookupNode == nil { 271 | log.Printf("node %q has no lookupNode defined", name) 272 | return nil, fuse.ENOENT 273 | } 274 | return e.lookupNode(e.inode()) 275 | } 276 | 277 | func (de *dirEnts) ReadDirAll(ctx context.Context) (ents []fuse.Dirent, err error) { 278 | de.condInit() 279 | de.mu.Lock() 280 | defer de.mu.Unlock() 281 | ents = make([]fuse.Dirent, 0, len(de.m)) 282 | for name, e := range de.m { 283 | ents = append(ents, fuse.Dirent{ 284 | Name: name, 285 | Inode: e.inode(), 286 | Type: e.dtype, 287 | }) 288 | } 289 | sort.Slice(ents, func(i, j int) bool { return ents[i].Name < ents[j].Name }) 290 | return ents, nil 291 | } 292 | 293 | func (de *dirEnts) condInit() { de.initOnce.Do(de.doInit) } 294 | func (de *dirEnts) doInit() { 295 | de.m = map[string]*dirEnt{} 296 | if de.initChildren != nil { 297 | de.initChildren(de) 298 | } 299 | } 300 | 301 | // atomicInodeIncr holds the most previously allocate global inode number. 302 | // It should only be accessed/incremented with sync/atomic. 303 | var atomicInodeIncr uint32 304 | 305 | // lazyInode is a lazily-allocated inode number. 306 | // 307 | // We only use 32 bits out of 64 to leave room for overlayfs to play 308 | // games with the upper bits. TODO: maybe that's not necessary. 309 | type lazyInode struct{ v uint32 } 310 | 311 | func (si *lazyInode) inode() uint64 { 312 | for { 313 | v := atomic.LoadUint32(&si.v) 314 | if v != 0 { 315 | return uint64(v) 316 | } 317 | v = atomic.AddUint32(&atomicInodeIncr, 1) 318 | if atomic.CompareAndSwapUint32(&si.v, 0, v) { 319 | return uint64(v) 320 | } 321 | } 322 | } 323 | 324 | // childInodeNumberCache is a temporary, lazily solution to having 325 | // stable inode numbers in node types where we haven't yet pushed it 326 | // down properly. This map grows forever (which is bad) and maps the 327 | // tuple (parent directory inode, child name string) to the child's 328 | // inode number. Its map key type is inodeAndString 329 | var childInodeNumberCache sync.Map 330 | 331 | type inodeAndString struct { 332 | inode uint64 333 | childName string 334 | } 335 | 336 | func getOrMakeChildInode(inode uint64, childName string) uint64 { 337 | key := inodeAndString{inode, childName} 338 | if v, ok := childInodeNumberCache.Load(key); ok { 339 | log.Printf("re-using inode %v/%q = %v", inode, childName, v) 340 | return v.(uint64) 341 | } 342 | actual, loaded := childInodeNumberCache.LoadOrStore(key, uint64(atomic.AddUint32(&atomicInodeIncr, 1))) 343 | if loaded { 344 | log.Printf("race lost creating inode %v/%q = %v", inode, childName, actual) 345 | } else { 346 | log.Printf("created inode %v/%q = %v", inode, childName, actual) 347 | } 348 | return actual.(uint64) 349 | } 350 | 351 | // rootNode is the contents of /crfs. 352 | // Children include: 353 | // layers/ -- individual layers; directories by hostname/user/layer 354 | // images/ -- merged layers; directories by hostname/user/layer 355 | // README-crfs.txt 356 | type rootNode struct { 357 | fs *FS 358 | dirEnts 359 | lazyInode 360 | } 361 | 362 | func (n *rootNode) Attr(ctx context.Context, a *fuse.Attr) error { 363 | setDirAttr(a) 364 | a.Inode = n.inode() 365 | a.Valid = 30 * 24 * time.Hour 366 | return nil 367 | } 368 | 369 | func setDirAttr(a *fuse.Attr) { 370 | a.Mode = 0755 | os.ModeDir 371 | // TODO: more? 372 | } 373 | 374 | // layersRoot is the contents of /crfs/layers/ 375 | // 376 | // Its children are hostnames (such as "gcr.io"). 377 | // 378 | // A special directory, "local", permits walking into stargz files on 379 | // disk, local to the directory where crfs is running. This is useful for 380 | // debugging. 381 | type layersRoot struct { 382 | fs *FS 383 | inode uint64 384 | dirEnts 385 | } 386 | 387 | func newLayersRoot(fs *FS, inode uint64) *layersRoot { 388 | lr := &layersRoot{fs: fs, inode: inode} 389 | lr.dirEnts.initChildren = func(de *dirEnts) { 390 | de.m["local"] = &dirEnt{ 391 | dtype: fuse.DT_Dir, 392 | lookupNode: func(inode uint64) (fspkg.Node, error) { 393 | return &layerDebugRoot{fs: fs, inode: inode}, nil 394 | }, 395 | } 396 | for _, n := range commonRegistryHostnames { 397 | lr.addHostDirLocked(n) 398 | } 399 | } 400 | return lr 401 | } 402 | 403 | func (n *layersRoot) Attr(ctx context.Context, a *fuse.Attr) error { 404 | setDirAttr(a) 405 | a.Valid = 30 * 24 * time.Hour 406 | a.Inode = n.inode 407 | return nil 408 | } 409 | 410 | var commonRegistryHostnames = []string{ 411 | "gcr.io", 412 | "us.gcr.io", 413 | "eu.gcr.io", 414 | "asia.gcr.io", 415 | "index.docker.io", 416 | } 417 | 418 | func isGCR(host string) bool { 419 | return host == "gcr.io" || strings.HasSuffix(host, ".gcr.io") 420 | } 421 | 422 | func (n *layersRoot) addHostDirLocked(name string) { 423 | n.dirEnts.m[name] = &dirEnt{ 424 | dtype: fuse.DT_Dir, 425 | lookupNode: func(inode uint64) (fspkg.Node, error) { 426 | return newLayerHost(n.fs, name, inode), nil 427 | }, 428 | } 429 | } 430 | 431 | func (n *layersRoot) Lookup(ctx context.Context, name string) (fspkg.Node, error) { 432 | child, err := n.dirEnts.Lookup(ctx, name) 433 | if err != fuse.ENOENT { 434 | return child, err 435 | } 436 | // TODO: validate name looks like a hostname? 437 | n.dirEnts.mu.Lock() 438 | if _, ok := n.dirEnts.m[name]; !ok { 439 | n.addHostDirLocked(name) 440 | } 441 | n.dirEnts.mu.Unlock() 442 | return n.dirEnts.Lookup(ctx, name) 443 | } 444 | 445 | // layerDebugRoot is /crfs/layers/local/ 446 | // Its contents are *.star.gz files in the current directory. 447 | type layerDebugRoot struct { 448 | fs *FS 449 | inode uint64 450 | } 451 | 452 | func (n *layerDebugRoot) Attr(ctx context.Context, a *fuse.Attr) error { 453 | setDirAttr(a) 454 | a.Inode = n.inode 455 | return nil 456 | } 457 | 458 | func (n *layerDebugRoot) ReadDirAll(ctx context.Context) (ents []fuse.Dirent, err error) { 459 | fis, err := ioutil.ReadDir(".") 460 | for _, fi := range fis { 461 | name := fi.Name() 462 | if !strings.HasSuffix(name, ".stargz") { 463 | continue 464 | } 465 | // TODO: populate inode number 466 | ents = append(ents, fuse.Dirent{Type: fuse.DT_Dir, Name: name}) 467 | } 468 | return ents, err 469 | } 470 | 471 | func (n *layerDebugRoot) Lookup(ctx context.Context, name string) (fspkg.Node, error) { 472 | f, err := os.Open(name) 473 | if err != nil { 474 | return nil, err 475 | } 476 | fi, err := f.Stat() 477 | if err != nil { 478 | f.Close() 479 | return nil, err 480 | } 481 | r, err := stargz.Open(io.NewSectionReader(f, 0, fi.Size())) 482 | if err != nil { 483 | f.Close() 484 | log.Printf("error opening local stargz: %v", err) 485 | return nil, err 486 | } 487 | root, ok := r.Lookup("") 488 | if !ok { 489 | f.Close() 490 | return nil, errors.New("failed to find root in stargz") 491 | } 492 | return &node{ 493 | fs: n.fs, 494 | te: root, 495 | sr: r, 496 | f: f, 497 | child: make(map[string]fspkg.Node), 498 | }, nil 499 | } 500 | 501 | // layerHost is, say, /crfs/layers/gcr.io/ (with host == "gcr.io") 502 | // 503 | // Its children are the next level (GCP project, docker hub owner), a layerHostOwner. 504 | type layerHost struct { 505 | fs *FS 506 | host string 507 | inode uint64 508 | dirEnts 509 | } 510 | 511 | func newLayerHost(fs *FS, host string, inode uint64) *layerHost { 512 | n := &layerHost{ 513 | fs: fs, 514 | host: host, 515 | inode: inode, 516 | } 517 | n.dirEnts = dirEnts{ 518 | initChildren: func(de *dirEnts) { 519 | if !isGCR(n.host) || !metadata.OnGCE() { 520 | return 521 | } 522 | if proj, _ := metadata.ProjectID(); proj != "" { 523 | n.addLayerHostOwnerLocked(proj) 524 | } 525 | }, 526 | } 527 | return n 528 | } 529 | 530 | func (n *layerHost) Attr(ctx context.Context, a *fuse.Attr) error { 531 | setDirAttr(a) 532 | a.Valid = 15 * time.Second 533 | a.Inode = n.inode 534 | return nil 535 | } 536 | 537 | var gcpProjRE = regexp.MustCompile(`^[a-z]([-a-z0-9]*[a-z0-9])?$`) 538 | 539 | func (n *layerHost) addLayerHostOwnerLocked(owner string) { // owner == GCP project on gcr.io 540 | n.dirEnts.m[owner] = &dirEnt{ 541 | dtype: fuse.DT_Dir, 542 | lookupNode: func(inode uint64) (fspkg.Node, error) { 543 | return &layerHostOwner{ 544 | fs: n.fs, 545 | host: n.host, 546 | owner: owner, 547 | inode: inode, 548 | }, nil 549 | }, 550 | } 551 | } 552 | 553 | func (n *layerHost) Lookup(ctx context.Context, name string) (fspkg.Node, error) { 554 | child, err := n.dirEnts.Lookup(ctx, name) 555 | if err != fuse.ENOENT { 556 | return child, err 557 | } 558 | 559 | // For gcr.io hosts, the next level lookup is the GCP project name, 560 | // which we can validate. 561 | if isGCR(n.host) { 562 | proj := name 563 | if len(name) < 6 || len(name) > 30 || !gcpProjRE.MatchString(proj) { 564 | return nil, fuse.ENOENT 565 | } 566 | } else { 567 | // TODO: validate index.docker.io next level lookups 568 | } 569 | 570 | n.dirEnts.mu.Lock() 571 | if _, ok := n.dirEnts.m[name]; !ok { 572 | n.addLayerHostOwnerLocked(name) 573 | } 574 | n.dirEnts.mu.Unlock() 575 | 576 | return n.dirEnts.Lookup(ctx, name) 577 | } 578 | 579 | // layerHostOwner is, say, /crfs/layers/gcr.io/foo-proj/ 580 | // 581 | // Its children are image names in that project. 582 | type layerHostOwner struct { 583 | fs *FS 584 | inode uint64 585 | host string // "gcr.io" 586 | owner string // "foo-proj" (GCP project, docker hub owner) 587 | } 588 | 589 | func (n *layerHostOwner) Attr(ctx context.Context, a *fuse.Attr) error { 590 | setDirAttr(a) 591 | a.Inode = n.inode 592 | return nil 593 | } 594 | 595 | func (n *layerHostOwner) ReadDirAll(ctx context.Context) (ents []fuse.Dirent, err error) { 596 | images, err := n.fs.imagesOfHost(ctx, n.host, n.owner) 597 | if err != nil { 598 | return nil, err 599 | } 600 | for _, name := range images { 601 | ents = append(ents, fuse.Dirent{Type: fuse.DT_Dir, Name: name}) 602 | } 603 | return ents, nil 604 | } 605 | 606 | func (n *layerHostOwner) Lookup(ctx context.Context, imageName string) (fspkg.Node, error) { 607 | // TODO: auth, dockerhub, context 608 | repo, err := namepkg.NewRepository(n.host + "/" + n.owner + "/" + imageName) 609 | if err != nil { 610 | log.Printf("bad name: %v", err) 611 | return nil, err 612 | } 613 | tags, err := google.List(repo) 614 | if err != nil { 615 | log.Printf("list: %v", err) 616 | return nil, err 617 | } 618 | m := map[string]string{} 619 | for k, mi := range tags.Manifests { 620 | for _, tag := range mi.Tags { 621 | m[tag] = k 622 | } 623 | } 624 | return &layerHostOwnerImage{ 625 | fs: n.fs, 626 | inode: getOrMakeChildInode(n.inode, imageName), 627 | host: n.host, 628 | owner: n.owner, 629 | image: imageName, 630 | tags: tags, 631 | tagsMap: m, 632 | }, nil 633 | } 634 | 635 | // layerHostOwnerImage is, say, /crfs/layers/gcr.io/foo-proj/ubuntu 636 | // 637 | // Its children are specific version of that image (in the form 638 | // "sha256-7de52a7970a2d0a7d355c76e4f0e02b0e6ebc2841f64040062a27313761cc978", 639 | // with hyphens instead of colons, for portability). 640 | // 641 | // And then also symlinks of tags to said ugly directories. 642 | type layerHostOwnerImage struct { 643 | fs *FS 644 | inode uint64 645 | host string // "gcr.io" 646 | owner string // "foo-proj" (GCP project, docker hub owner) 647 | image string // "ubuntu" 648 | tags *google.Tags 649 | tagsMap map[string]string // "latest" -> "sha256:fooo" 650 | } 651 | 652 | func (n *layerHostOwnerImage) Attr(ctx context.Context, a *fuse.Attr) error { 653 | setDirAttr(a) 654 | a.Inode = n.inode 655 | return nil 656 | } 657 | 658 | func uncolon(s string) string { return strings.Replace(s, ":", "-", 1) } 659 | func recolon(s string) string { return strings.Replace(s, "-", ":", 1) } 660 | 661 | func (n *layerHostOwnerImage) ReadDirAll(ctx context.Context) (ents []fuse.Dirent, err error) { 662 | for k := range n.tags.Manifests { 663 | ents = append(ents, fuse.Dirent{Type: fuse.DT_Dir, Name: uncolon(k)}) 664 | } 665 | for k := range n.tagsMap { 666 | ents = append(ents, fuse.Dirent{Type: fuse.DT_Link, Name: k}) 667 | } 668 | return ents, nil 669 | } 670 | 671 | func (n *layerHostOwnerImage) Lookup(ctx context.Context, name string) (fspkg.Node, error) { 672 | if targ, ok := n.tagsMap[name]; ok { 673 | return symlinkNode(uncolon(targ)), nil 674 | } 675 | 676 | withColon := recolon(name) 677 | if _, ok := n.tags.Manifests[withColon]; ok { 678 | mf, err := n.fs.getManifest(ctx, n.host, n.owner, n.image, withColon) 679 | if err != nil { 680 | log.Printf("getManifest: %v", err) 681 | return nil, err 682 | } 683 | return &layerHostOwnerImageReference{ 684 | fs: n.fs, 685 | inode: getOrMakeChildInode(n.inode, name), 686 | host: n.host, 687 | owner: n.owner, 688 | image: n.image, 689 | ref: withColon, 690 | mf: mf, 691 | }, nil 692 | } 693 | return nil, fuse.ENOENT 694 | } 695 | 696 | // layerHostOwnerImageReference is a specific version of an image: 697 | // /crfs/layers/gcr.io/foo-proj/ubuntu/sha256-7de52a7970a2d0a7d355c76e4f0e02b0e6ebc2841f64040062a27313761cc978 698 | type layerHostOwnerImageReference struct { 699 | fs *FS 700 | inode uint64 701 | host string // "gcr.io" 702 | owner string // "foo-proj" (GCP project, docker hub owner) 703 | image string // "ubuntu" 704 | ref string // "sha256:xxxx" (with colon) 705 | mf *manifest 706 | } 707 | 708 | func (n *layerHostOwnerImageReference) Attr(ctx context.Context, a *fuse.Attr) error { 709 | setDirAttr(a) 710 | a.Valid = 30 * 24 * time.Hour 711 | a.Inode = n.inode 712 | return nil 713 | } 714 | 715 | func (n *layerHostOwnerImageReference) ReadDirAll(ctx context.Context) (ents []fuse.Dirent, err error) { 716 | for i, layer := range n.mf.Layers { 717 | ents = append(ents, fuse.Dirent{Type: fuse.DT_Dir, Name: uncolon(layer.Digest)}) 718 | ents = append(ents, fuse.Dirent{Type: fuse.DT_Link, Name: strconv.Itoa(i)}) 719 | } 720 | ents = append(ents, fuse.Dirent{Type: fuse.DT_Link, Name: "top"}) 721 | ents = append(ents, fuse.Dirent{Type: fuse.DT_Link, Name: "bottom"}) 722 | ents = append(ents, fuse.Dirent{Type: fuse.DT_File, Name: "config"}) 723 | return 724 | } 725 | 726 | func (n *layerHostOwnerImageReference) Lookup(ctx context.Context, name string) (fspkg.Node, error) { 727 | i, err := strconv.Atoi(name) 728 | if err == nil && i >= 0 && i < len(n.mf.Layers) { 729 | return symlinkNode(uncolon(n.mf.Layers[i].Digest)), nil 730 | } 731 | if name == "top" { 732 | return symlinkNode(fmt.Sprint(len(n.mf.Layers) - 1)), nil 733 | } 734 | if name == "bottom" { 735 | return symlinkNode("0"), nil 736 | } 737 | if name == "config" { 738 | conf, err := n.fs.getConfig(ctx, n.host, n.owner, n.image, n.mf.Config.Digest) 739 | if err != nil { 740 | log.Printf("getConfig: %v", err) 741 | return nil, err 742 | } 743 | return &staticFile{contents: conf}, nil // TODO: add inode for staticFile 744 | } 745 | 746 | refColon := recolon(name) 747 | var layerSize int64 748 | for _, layer := range n.mf.Layers { 749 | if layer.Digest == refColon { 750 | layerSize = layer.Size 751 | break 752 | } 753 | } 754 | if layerSize == 0 { 755 | return nil, fuse.ENOENT 756 | } 757 | 758 | // Probe the tar.gz. URL to see if it serves a redirect. 759 | // 760 | // gcr.io serves a redirect for the layer tar.gz blobs, but only on GET, not HEAD. 761 | // So add a Range header to bound response size. gcr.io ignores the Range request header, 762 | // but if gcr changes its behavior or we're hitting a different registry implementation, 763 | // then we don't want to download the full thing. 764 | urlStr := "https://" + n.host + "/v2/" + n.owner + "/" + n.image + "/blobs/" + refColon 765 | req, err := http.NewRequest("GET", urlStr, nil) 766 | if err != nil { 767 | return nil, err 768 | } 769 | req = req.WithContext(ctx) 770 | req.Header.Set("Range", "bytes=0-1") 771 | // TODO: auth 772 | res, err := http.DefaultTransport.RoundTrip(req) // NOT DefaultClient; don't want redirects 773 | if err != nil { 774 | return nil, err 775 | } 776 | defer res.Body.Close() 777 | if res.StatusCode >= 400 { 778 | log.Printf("hitting %s: %v", urlStr, res.Status) 779 | return nil, syscall.EIO 780 | } 781 | if redir := res.Header.Get("Location"); redir != "" && res.StatusCode/100 == 3 { 782 | urlStr = redir 783 | } 784 | 785 | sr := io.NewSectionReader(&urlReaderAt{url: urlStr}, 0, layerSize) 786 | r, err := stargz.Open(sr) 787 | if err != nil { 788 | log.Printf("error opening remote stargz in %s: %v", urlStr, err) 789 | return nil, err 790 | } 791 | root, ok := r.Lookup("") 792 | if !ok { 793 | return nil, errors.New("failed to find root in stargz") 794 | } 795 | return &node{ 796 | fs: n.fs, 797 | te: root, 798 | sr: r, 799 | child: make(map[string]fspkg.Node), 800 | }, nil 801 | } 802 | 803 | type urlReaderAt struct { 804 | url string 805 | } 806 | 807 | func (r *urlReaderAt) ReadAt(p []byte, off int64) (n int, err error) { 808 | ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) 809 | defer cancel() 810 | req, err := http.NewRequest("GET", r.url, nil) 811 | if err != nil { 812 | return 0, err 813 | } 814 | req = req.WithContext(ctx) 815 | rangeVal := fmt.Sprintf("bytes=%d-%d", off, off+int64(len(p))-1) 816 | req.Header.Set("Range", rangeVal) 817 | log.Printf("Fetching %s (%d at %d) of %s ...\n", rangeVal, len(p), off, r.url) 818 | // TODO: auth 819 | res, err := http.DefaultTransport.RoundTrip(req) // NOT DefaultClient; don't want redirects 820 | if err != nil { 821 | log.Printf("range read of %s: %v", r.url, err) 822 | return 0, err 823 | } 824 | defer res.Body.Close() 825 | if res.StatusCode != http.StatusPartialContent { 826 | log.Printf("range read of %s: %v", r.url, res.Status) 827 | return 0, err 828 | } 829 | return io.ReadFull(res.Body, p) 830 | } 831 | 832 | type symlinkNode string // underlying is target 833 | 834 | func (s symlinkNode) Attr(ctx context.Context, a *fuse.Attr) error { 835 | a.Mode = os.ModeSymlink | 0644 836 | // TODO: inode 837 | return nil 838 | } 839 | 840 | func (s symlinkNode) Readlink(ctx context.Context, req *fuse.ReadlinkRequest) (string, error) { 841 | return string(s), nil 842 | } 843 | 844 | // imagesRoot is the contents of /crfs/images/ 845 | // Its children are hostnames (such as "gcr.io"). 846 | type imagesRoot struct { 847 | fs *FS 848 | inode uint64 849 | } 850 | 851 | func (n *imagesRoot) Attr(ctx context.Context, a *fuse.Attr) error { 852 | setDirAttr(a) 853 | a.Valid = 30 * 24 * time.Hour 854 | a.Inode = n.inode 855 | return nil 856 | } 857 | 858 | func (n *imagesRoot) ReadDirAll(ctx context.Context) (ents []fuse.Dirent, err error) { 859 | for _, n := range commonRegistryHostnames { 860 | ents = append(ents, fuse.Dirent{Type: fuse.DT_Dir, Name: n}) 861 | } 862 | return 863 | } 864 | 865 | type staticFile struct { 866 | contents string 867 | inode uint64 868 | } 869 | 870 | func (f *staticFile) Attr(ctx context.Context, a *fuse.Attr) error { 871 | a.Mode = 0644 872 | a.Inode = f.inode 873 | a.Size = uint64(len(f.contents)) 874 | a.Blocks = blocksOf(a.Size) 875 | return nil 876 | } 877 | 878 | func (f *staticFile) Read(ctx context.Context, req *fuse.ReadRequest, resp *fuse.ReadResponse) error { 879 | if req.Offset < 0 { 880 | return syscall.EINVAL 881 | } 882 | if req.Offset > int64(len(f.contents)) { 883 | resp.Data = nil 884 | return nil 885 | } 886 | bufSize := int64(req.Size) 887 | remain := int64(len(f.contents)) - req.Offset 888 | if bufSize > remain { 889 | bufSize = remain 890 | } 891 | resp.Data = make([]byte, bufSize) 892 | n := copy(resp.Data, f.contents[req.Offset:]) 893 | resp.Data = resp.Data[:n] // redundant, but for clarity 894 | return nil 895 | } 896 | 897 | func inodeOfEnt(ent *stargz.TOCEntry) uint64 { 898 | return uint64(uintptr(unsafe.Pointer(ent))) 899 | } 900 | 901 | func direntType(ent *stargz.TOCEntry) fuse.DirentType { 902 | switch ent.Type { 903 | case "dir": 904 | return fuse.DT_Dir 905 | case "reg": 906 | return fuse.DT_File 907 | case "symlink": 908 | return fuse.DT_Link 909 | case "block": 910 | return fuse.DT_Block 911 | case "char": 912 | return fuse.DT_Char 913 | case "fifo": 914 | return fuse.DT_FIFO 915 | } 916 | return fuse.DT_Unknown 917 | } 918 | 919 | // node is a CRFS node in the FUSE filesystem. 920 | // See https://godoc.org/bazil.org/fuse/fs#Node 921 | type node struct { 922 | fs *FS 923 | te *stargz.TOCEntry 924 | sr *stargz.Reader 925 | f *os.File // non-nil if root & in debug mode 926 | opaque bool // true if this node is an overlayfs opaque directory 927 | 928 | mu sync.Mutex // guards child, below 929 | // child maps from previously-looked up base names (like "foo.txt") to the 930 | // fspkg.Node that was previously returned. This prevents FUSE inode numbers 931 | // from getting out of sync 932 | child map[string]fspkg.Node 933 | } 934 | 935 | var ( 936 | _ fspkg.Node = (*node)(nil) 937 | _ fspkg.NodeStringLookuper = (*node)(nil) 938 | _ fspkg.NodeReadlinker = (*node)(nil) 939 | _ fspkg.NodeOpener = (*node)(nil) 940 | // TODO: implement NodeReleaser and n.f.Close() when n.f is non-nil 941 | 942 | _ fspkg.HandleReadDirAller = (*nodeHandle)(nil) 943 | _ fspkg.HandleReader = (*nodeHandle)(nil) 944 | 945 | _ fspkg.HandleReadDirAller = (*rootNode)(nil) 946 | _ fspkg.NodeStringLookuper = (*rootNode)(nil) 947 | ) 948 | 949 | func blocksOf(size uint64) (blocks uint64) { 950 | blocks = size / 512 951 | if size%512 > 0 { 952 | blocks++ 953 | } 954 | return 955 | } 956 | 957 | // Attr populates a with the attributes of n. 958 | // See https://godoc.org/bazil.org/fuse/fs#Node 959 | func (n *node) Attr(ctx context.Context, a *fuse.Attr) error { 960 | fi := n.te.Stat() 961 | a.Valid = 30 * 24 * time.Hour 962 | a.Inode = inodeOfEnt(n.te) 963 | a.Size = uint64(fi.Size()) 964 | a.Blocks = blocksOf(a.Size) 965 | a.Mtime = fi.ModTime() 966 | a.Mode = fi.Mode() 967 | a.Uid = uint32(n.te.Uid) 968 | a.Gid = uint32(n.te.Gid) 969 | a.Rdev = uint32(unix.Mkdev(uint32(n.te.DevMajor), uint32(n.te.DevMinor))) 970 | a.Nlink = uint32(n.te.NumLink) 971 | if a.Nlink == 0 { 972 | a.Nlink = 1 // zero "NumLink" means one so we map them here. 973 | } 974 | if debug { 975 | log.Printf("attr of %s: %s", n.te.Name, *a) 976 | } 977 | return nil 978 | } 979 | 980 | // ReadDirAll returns all directory entries in the directory node n. 981 | // 982 | // https://godoc.org/bazil.org/fuse/fs#HandleReadDirAller 983 | func (h *nodeHandle) ReadDirAll(ctx context.Context) (ents []fuse.Dirent, err error) { 984 | n := h.n 985 | whiteouts := map[string]*stargz.TOCEntry{} 986 | normalEnts := map[string]bool{} 987 | n.te.ForeachChild(func(baseName string, ent *stargz.TOCEntry) bool { 988 | // We don't want to show ".wh."-prefixed whiteout files. 989 | if strings.HasPrefix(baseName, whiteoutPrefix) { 990 | if baseName == whiteoutOpaqueDir { 991 | return true 992 | } 993 | // Add an overlayfs-styled whiteout direntry later. 994 | whiteouts[baseName] = ent 995 | return true 996 | } 997 | 998 | normalEnts[baseName] = true 999 | ents = append(ents, fuse.Dirent{ 1000 | Inode: inodeOfEnt(ent), 1001 | Type: direntType(ent), 1002 | Name: baseName, 1003 | }) 1004 | return true 1005 | }) 1006 | 1007 | // Append whiteouts if no entry replaces the target entry in the lower layer. 1008 | for w, ent := range whiteouts { 1009 | if ok := normalEnts[w[len(whiteoutPrefix):]]; !ok { 1010 | ents = append(ents, fuse.Dirent{ 1011 | Inode: inodeOfEnt(ent), 1012 | Type: fuse.DT_Char, 1013 | Name: w[len(whiteoutPrefix):], 1014 | }) 1015 | 1016 | } 1017 | } 1018 | sort.Slice(ents, func(i, j int) bool { return ents[i].Name < ents[j].Name }) 1019 | return ents, nil 1020 | } 1021 | 1022 | // Lookup looks up a child entry of the directory node n. 1023 | // 1024 | // See https://godoc.org/bazil.org/fuse/fs#NodeStringLookuper 1025 | func (n *node) Lookup(ctx context.Context, name string) (fspkg.Node, error) { 1026 | n.mu.Lock() 1027 | defer n.mu.Unlock() 1028 | if c, ok := n.child[name]; ok { 1029 | return c, nil 1030 | } 1031 | 1032 | // We don't want to show ".wh."-prefixed whiteout files. 1033 | if strings.HasPrefix(name, whiteoutPrefix) { 1034 | return nil, fuse.ENOENT 1035 | } 1036 | 1037 | e, ok := n.te.LookupChild(name) 1038 | if !ok { 1039 | // If the entry exists as a whiteout, show an overlayfs-styled whiteout node. 1040 | if e, ok := n.te.LookupChild(fmt.Sprintf("%s%s", whiteoutPrefix, name)); ok { 1041 | c := &whiteout{e} 1042 | n.child[name] = c 1043 | return c, nil 1044 | } 1045 | return nil, fuse.ENOENT 1046 | } 1047 | 1048 | var opaque bool 1049 | if _, ok := e.LookupChild(whiteoutOpaqueDir); ok { 1050 | // This entry is an opaque directory. 1051 | opaque = true 1052 | } 1053 | 1054 | c := &node{ 1055 | fs: n.fs, 1056 | te: e, 1057 | sr: n.sr, 1058 | child: make(map[string]fspkg.Node), 1059 | opaque: opaque, 1060 | } 1061 | n.child[name] = c 1062 | 1063 | return c, nil 1064 | } 1065 | 1066 | // Readlink reads the target of a symlink. 1067 | // 1068 | // See https://godoc.org/bazil.org/fuse/fs#NodeReadlinker 1069 | func (n *node) Readlink(ctx context.Context, req *fuse.ReadlinkRequest) (string, error) { 1070 | if n.te.Type != "symlink" { 1071 | return "", syscall.EINVAL 1072 | } 1073 | return n.te.LinkName, nil 1074 | } 1075 | 1076 | // Listxattr lists the extended attributes specified for the node. 1077 | // 1078 | // See https://godoc.org/bazil.org/fuse/fs#NodeListxattrer 1079 | func (n *node) Listxattr(ctx context.Context, req *fuse.ListxattrRequest, resp *fuse.ListxattrResponse) error { 1080 | var allXattrs []byte 1081 | if n.opaque { 1082 | // This node is an opaque directory so add overlayfs-compliant indicator. 1083 | allXattrs = append(append(allXattrs, []byte(opaqueXattr)...), 0) 1084 | } 1085 | for k, _ := range n.te.Xattrs { 1086 | allXattrs = append(append(allXattrs, []byte(k)...), 0) 1087 | } 1088 | 1089 | if req.Position >= uint32(len(allXattrs)) { 1090 | resp.Xattr = []byte{} 1091 | return nil 1092 | } 1093 | resp.Xattr = allXattrs[req.Position:] 1094 | return nil 1095 | } 1096 | 1097 | // Getxattr reads the specified extended attribute. 1098 | // 1099 | // See https://godoc.org/bazil.org/fuse/fs#NodeGetxattrer 1100 | func (n *node) Getxattr(ctx context.Context, req *fuse.GetxattrRequest, resp *fuse.GetxattrResponse) error { 1101 | var xattr []byte 1102 | if req.Name == opaqueXattr { 1103 | if n.opaque { 1104 | // This node is an opaque directory so give overlayfs-compliant indicator. 1105 | xattr = []byte(opaqueXattrValue) 1106 | } 1107 | } else { 1108 | xattr = n.te.Xattrs[req.Name] 1109 | } 1110 | if req.Position >= uint32(len(xattr)) { 1111 | resp.Xattr = []byte{} 1112 | return nil 1113 | } 1114 | resp.Xattr = xattr[req.Position:] 1115 | return nil 1116 | } 1117 | 1118 | func (n *node) Open(ctx context.Context, req *fuse.OpenRequest, resp *fuse.OpenResponse) (fspkg.Handle, error) { 1119 | h := &nodeHandle{ 1120 | n: n, 1121 | isDir: req.Dir, 1122 | } 1123 | resp.Handle = h.HandleID() 1124 | if !req.Dir { 1125 | var err error 1126 | h.sr, err = n.sr.OpenFile(n.te.Name) 1127 | if err != nil { 1128 | return nil, err 1129 | } 1130 | } 1131 | return h, nil 1132 | } 1133 | 1134 | // whiteout is an overlayfs whiteout file which is a character device with 0/0 1135 | // device number. 1136 | // See https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt 1137 | type whiteout struct { 1138 | te *stargz.TOCEntry 1139 | } 1140 | 1141 | func (w *whiteout) Attr(ctx context.Context, a *fuse.Attr) error { 1142 | a.Valid = 30 * 24 * time.Hour 1143 | a.Inode = inodeOfEnt(w.te) 1144 | a.Mode = os.ModeDevice | os.ModeCharDevice 1145 | a.Rdev = uint32(unix.Mkdev(0, 0)) 1146 | a.Nlink = 1 1147 | return nil 1148 | } 1149 | 1150 | // nodeHandle is a node that's been opened (opendir or for read). 1151 | type nodeHandle struct { 1152 | n *node 1153 | isDir bool 1154 | sr *io.SectionReader // of file bytes 1155 | 1156 | mu sync.Mutex 1157 | lastChunkOff int64 1158 | lastChunkSize int 1159 | lastChunk []byte 1160 | } 1161 | 1162 | func (h *nodeHandle) HandleID() fuse.HandleID { 1163 | return fuse.HandleID(uintptr(unsafe.Pointer(h))) 1164 | } 1165 | 1166 | func (h *nodeHandle) chunkData(offset int64, size int) ([]byte, error) { 1167 | h.mu.Lock() 1168 | if h.lastChunkOff == offset && h.lastChunkSize == size { 1169 | defer h.mu.Unlock() 1170 | if debug { 1171 | log.Printf("cache HIT, chunk off=%d/size=%d", offset, size) 1172 | } 1173 | return h.lastChunk, nil 1174 | } 1175 | h.mu.Unlock() 1176 | 1177 | if debug { 1178 | log.Printf("reading chunk for offset=%d, size=%d", offset, size) 1179 | } 1180 | buf := make([]byte, size) 1181 | n, err := h.sr.ReadAt(buf, offset) 1182 | if debug { 1183 | log.Printf("... ReadAt = %v, %v", n, err) 1184 | } 1185 | if err == nil { 1186 | h.mu.Lock() 1187 | h.lastChunkOff = offset 1188 | h.lastChunkSize = size 1189 | h.lastChunk = buf 1190 | h.mu.Unlock() 1191 | } 1192 | return buf, err 1193 | } 1194 | 1195 | // See https://godoc.org/bazil.org/fuse/fs#HandleReader 1196 | func (h *nodeHandle) Read(ctx context.Context, req *fuse.ReadRequest, resp *fuse.ReadResponse) error { 1197 | n := h.n 1198 | 1199 | resp.Data = make([]byte, req.Size) 1200 | nr := 0 1201 | offset := req.Offset 1202 | for nr < req.Size { 1203 | ce, ok := n.sr.ChunkEntryForOffset(n.te.Name, offset+int64(nr)) 1204 | if !ok { 1205 | break 1206 | } 1207 | if debug { 1208 | log.Printf("need chunk data for %q at %d (size=%d, for chunk from log %d-%d (%d), phys %d-%d (%d)) ...", 1209 | n.te.Name, req.Offset, req.Size, ce.ChunkOffset, ce.ChunkOffset+ce.ChunkSize, ce.ChunkSize, ce.Offset, ce.NextOffset(), ce.NextOffset()-ce.Offset) 1210 | } 1211 | chunkData, err := h.chunkData(ce.ChunkOffset, int(ce.ChunkSize)) 1212 | if err != nil { 1213 | return err 1214 | } 1215 | n := copy(resp.Data[nr:], chunkData[offset+int64(nr)-ce.ChunkOffset:]) 1216 | nr += n 1217 | } 1218 | resp.Data = resp.Data[:nr] 1219 | if debug { 1220 | log.Printf("Read response: size=%d @ %d, read %d", req.Size, req.Offset, nr) 1221 | } 1222 | return nil 1223 | } 1224 | -------------------------------------------------------------------------------- /crfs_test.go: -------------------------------------------------------------------------------- 1 | // Copyright 2019 The Go Authors. All rights reserved. 2 | // Use of this source code is governed by a BSD-style 3 | // license that can be found in the LICENSE file. 4 | 5 | package main 6 | 7 | import ( 8 | "archive/tar" 9 | "bytes" 10 | "context" 11 | "crypto/sha256" 12 | "fmt" 13 | "io" 14 | "io/ioutil" 15 | "os" 16 | "path/filepath" 17 | "strings" 18 | "testing" 19 | 20 | "bazil.org/fuse" 21 | fspkg "bazil.org/fuse/fs" 22 | "github.com/google/crfs/stargz" 23 | "golang.org/x/sys/unix" 24 | ) 25 | 26 | const ( 27 | chunkSize = 4 28 | middleOffset = chunkSize / 2 29 | sampleData = "0123456789" 30 | ) 31 | 32 | // Tests *nodeHandle.Read about offset and size calculation. 33 | func TestReadNode(t *testing.T) { 34 | sizeCond := map[string]int64{ 35 | "single_chunk": chunkSize - middleOffset, 36 | "multi_chunks": chunkSize + middleOffset, 37 | } 38 | innerOffsetCond := map[string]int64{ 39 | "at_top": 0, 40 | "at_middle": middleOffset, 41 | } 42 | baseOffsetCond := map[string]int64{ 43 | "of_1st_chunk": chunkSize * 0, 44 | "of_2nd_chunk": chunkSize * 1, 45 | "of_last_chunk": chunkSize * (int64(len(sampleData)) / chunkSize), 46 | } 47 | fileSizeCond := map[string]int64{ 48 | "in_1_chunk_file": chunkSize * 1, 49 | "in_2_chunks_file": chunkSize * 2, 50 | "in_max_size_file": int64(len(sampleData)), 51 | } 52 | 53 | for sn, size := range sizeCond { 54 | for in, innero := range innerOffsetCond { 55 | for bo, baseo := range baseOffsetCond { 56 | for fn, filesize := range fileSizeCond { 57 | t.Run(fmt.Sprintf("reading_%s_%s_%s_%s", sn, in, bo, fn), func(t *testing.T) { 58 | if filesize > int64(len(sampleData)) { 59 | t.Fatal("sample file size is larger than sample data") 60 | } 61 | 62 | wantN := size 63 | offset := baseo + innero 64 | if remain := filesize - offset; remain < wantN { 65 | if wantN = remain; wantN < 0 { 66 | wantN = 0 67 | } 68 | } 69 | 70 | // use constant string value as a data source. 71 | want := strings.NewReader(sampleData) 72 | 73 | // data we want to get. 74 | wantData := make([]byte, wantN) 75 | _, err := want.ReadAt(wantData, offset) 76 | if err != nil && err != io.EOF { 77 | t.Fatalf("want.ReadAt (offset=%d,size=%d): %v", offset, wantN, err) 78 | } 79 | 80 | // data we get through a nodeHandle. 81 | h := makeNodeHandle(t, []byte(sampleData)[:filesize], chunkSize) 82 | req := &fuse.ReadRequest{ 83 | Offset: offset, 84 | Size: int(size), 85 | } 86 | resp := &fuse.ReadResponse{} 87 | h.Read(context.TODO(), req, resp) 88 | 89 | if !bytes.Equal(wantData, resp.Data) { 90 | t.Errorf("off=%d; read data = (size=%d,data=%q); want (size=%d,data=%q)", 91 | offset, len(resp.Data), string(resp.Data), wantN, string(wantData)) 92 | } 93 | }) 94 | } 95 | } 96 | } 97 | } 98 | } 99 | 100 | // makeNodeHandle makes minimal nodeHandle containing a given data. 101 | func makeNodeHandle(t *testing.T, contents []byte, chunkSize int64) *nodeHandle { 102 | name := "test" 103 | if strings.HasSuffix(name, "/") { 104 | t.Fatalf("bogus trailing slash in file %q", name) 105 | } 106 | 107 | // builds a sample stargz 108 | tr, cancel := buildSingleFileTar(t, name, contents) 109 | defer cancel() 110 | var stargzBuf bytes.Buffer 111 | w := stargz.NewWriter(&stargzBuf) 112 | w.ChunkSize = int(chunkSize) 113 | if err := w.AppendTar(tr); err != nil { 114 | t.Fatalf("Append: %v", err) 115 | } 116 | if err := w.Close(); err != nil { 117 | t.Fatalf("Writer.Close: %v", err) 118 | } 119 | stargzData, err := ioutil.ReadAll(&stargzBuf) 120 | if err != nil { 121 | t.Fatalf("Read all stargz data: %v", err) 122 | } 123 | 124 | // opens the sample stargz and makes a nodeHandle 125 | sr, err := stargz.Open(io.NewSectionReader(bytes.NewReader(stargzData), 0, int64(len(stargzData)))) 126 | if err != nil { 127 | t.Fatalf("Open the sample stargz file: %v", err) 128 | } 129 | te, ok := sr.Lookup(name) 130 | if !ok { 131 | t.Fatal("failed to get the sample file from the built stargz") 132 | } 133 | h := &nodeHandle{ 134 | n: &node{ 135 | fs: new(FS), 136 | te: te, 137 | sr: sr, 138 | }, 139 | } 140 | h.sr, err = sr.OpenFile(name) 141 | if err != nil { 142 | t.Fatalf("failed to open the sample file %q from the built stargz: %v", name, err) 143 | } 144 | return h 145 | } 146 | 147 | // buildSingleFileTar makes a tar file which contains a regular file which has 148 | // the name and contents specified by the arguments. 149 | func buildSingleFileTar(t *testing.T, name string, contents []byte) (r io.Reader, cancel func()) { 150 | pr, pw := io.Pipe() 151 | go func() { 152 | tw := tar.NewWriter(pw) 153 | if err := tw.WriteHeader(&tar.Header{ 154 | Typeflag: tar.TypeReg, 155 | Name: name, 156 | Mode: 0644, 157 | Size: int64(len(contents)), 158 | }); err != nil { 159 | t.Errorf("writing header to the input tar: %v", err) 160 | pw.Close() 161 | return 162 | } 163 | if _, err := tw.Write(contents); err != nil { 164 | t.Errorf("writing contents to the input tar: %v", err) 165 | pw.Close() 166 | return 167 | } 168 | if err := tw.Close(); err != nil { 169 | t.Errorf("closing write of input tar: %v", err) 170 | } 171 | pw.Close() 172 | return 173 | }() 174 | return pr, func() { go pr.Close(); go pw.Close() } 175 | } 176 | 177 | // Tests if whiteouts are overlayfs-compatible. 178 | func TestWhiteout(t *testing.T) { 179 | tests := []struct { 180 | name string 181 | in []tarEntry 182 | want []crfsCheck 183 | }{ 184 | { 185 | name: "1_whiteout_with_sibling", 186 | in: tarOf( 187 | dir("foo/"), 188 | file("foo/bar.txt", ""), 189 | file("foo/.wh.foo.txt", ""), 190 | ), 191 | want: checks( 192 | hasValidWhiteout("foo/foo.txt"), 193 | fileNotExist("foo/.wh.foo.txt"), 194 | ), 195 | }, 196 | { 197 | name: "1_whiteout_with_duplicated_name", 198 | in: tarOf( 199 | dir("foo/"), 200 | file("foo/bar.txt", "test"), 201 | file("foo/.wh.bar.txt", ""), 202 | ), 203 | want: checks( 204 | hasFileDigest("foo/bar.txt", digestFor("test")), 205 | fileNotExist("foo/.wh.bar.txt"), 206 | ), 207 | }, 208 | { 209 | name: "1_opaque", 210 | in: tarOf( 211 | dir("foo/"), 212 | file("foo/.wh..wh..opq", ""), 213 | ), 214 | want: checks( 215 | hasNodeXattrs("foo/", opaqueXattr, opaqueXattrValue), 216 | fileNotExist("foo/.wh..wh..opq"), 217 | ), 218 | }, 219 | { 220 | name: "1_opaque_with_sibling", 221 | in: tarOf( 222 | dir("foo/"), 223 | file("foo/.wh..wh..opq", ""), 224 | file("foo/bar.txt", "test"), 225 | ), 226 | want: checks( 227 | hasNodeXattrs("foo/", opaqueXattr, opaqueXattrValue), 228 | hasFileDigest("foo/bar.txt", digestFor("test")), 229 | fileNotExist("foo/.wh..wh..opq"), 230 | ), 231 | }, 232 | { 233 | name: "1_opaque_with_xattr", 234 | in: tarOf( 235 | dir("foo/", xAttr{"foo": "bar"}), 236 | file("foo/.wh..wh..opq", ""), 237 | ), 238 | want: checks( 239 | hasNodeXattrs("foo/", opaqueXattr, opaqueXattrValue), 240 | hasNodeXattrs("foo/", "foo", "bar"), 241 | fileNotExist("foo/.wh..wh..opq"), 242 | ), 243 | }, 244 | } 245 | 246 | for _, tt := range tests { 247 | t.Run(tt.name, func(t *testing.T) { 248 | tr, cancel := buildTarGz(t, tt.in) 249 | defer cancel() 250 | var stargzBuf bytes.Buffer 251 | w := stargz.NewWriter(&stargzBuf) 252 | if err := w.AppendTar(tr); err != nil { 253 | t.Fatalf("Append: %v", err) 254 | } 255 | if err := w.Close(); err != nil { 256 | t.Fatalf("Writer.Close: %v", err) 257 | } 258 | b := stargzBuf.Bytes() 259 | 260 | r, err := stargz.Open(io.NewSectionReader(bytes.NewReader(b), 0, int64(len(b)))) 261 | if err != nil { 262 | t.Fatalf("stargz.Open: %v", err) 263 | } 264 | root, ok := r.Lookup("") 265 | if !ok { 266 | t.Fatalf("failed to find root in stargz") 267 | } 268 | for _, want := range tt.want { 269 | want.check(t, &node{ 270 | te: root, 271 | sr: r, 272 | child: make(map[string]fspkg.Node), 273 | }) 274 | } 275 | }) 276 | } 277 | } 278 | 279 | func buildTarGz(t *testing.T, ents []tarEntry) (r io.Reader, cancel func()) { 280 | pr, pw := io.Pipe() 281 | go func() { 282 | tw := tar.NewWriter(pw) 283 | for _, ent := range ents { 284 | if err := ent.appendTar(tw); err != nil { 285 | t.Errorf("building input tar: %v", err) 286 | pw.Close() 287 | return 288 | } 289 | } 290 | if err := tw.Close(); err != nil { 291 | t.Errorf("closing write of input tar: %v", err) 292 | } 293 | pw.Close() 294 | return 295 | }() 296 | return pr, func() { go pr.Close(); go pw.Close() } 297 | } 298 | 299 | func tarOf(s ...tarEntry) []tarEntry { return s } 300 | 301 | func checks(s ...crfsCheck) []crfsCheck { return s } 302 | 303 | type tarEntry interface { 304 | appendTar(*tar.Writer) error 305 | } 306 | 307 | type tarEntryFunc func(*tar.Writer) error 308 | 309 | func (f tarEntryFunc) appendTar(tw *tar.Writer) error { return f(tw) } 310 | 311 | func file(name, contents string) tarEntry { 312 | return tarEntryFunc(func(tw *tar.Writer) error { 313 | if strings.HasSuffix(name, "/") { 314 | return fmt.Errorf("bogus trailing slash in file %q", name) 315 | } 316 | if err := tw.WriteHeader(&tar.Header{ 317 | Typeflag: tar.TypeReg, 318 | Name: name, 319 | Mode: 0644, 320 | Size: int64(len(contents)), 321 | }); err != nil { 322 | return err 323 | } 324 | _, err := io.WriteString(tw, contents) 325 | return err 326 | }) 327 | } 328 | 329 | func dir(d string, opts ...interface{}) tarEntry { 330 | return tarEntryFunc(func(tw *tar.Writer) error { 331 | var xattrs xAttr 332 | for _, opt := range opts { 333 | if v, ok := opt.(xAttr); ok { 334 | xattrs = v 335 | } else { 336 | return fmt.Errorf("unsupported opt") 337 | } 338 | } 339 | name := string(d) 340 | if !strings.HasSuffix(name, "/") { 341 | panic(fmt.Sprintf("missing trailing slash in dir %q ", name)) 342 | } 343 | return tw.WriteHeader(&tar.Header{ 344 | Typeflag: tar.TypeDir, 345 | Name: name, 346 | Mode: 0755, 347 | Xattrs: xattrs, 348 | }) 349 | }) 350 | } 351 | 352 | type xAttr map[string]string 353 | 354 | type crfsCheck interface { 355 | check(t *testing.T, root *node) 356 | } 357 | 358 | type crfsCheckFn func(*testing.T, *node) 359 | 360 | func (f crfsCheckFn) check(t *testing.T, root *node) { f(t, root) } 361 | 362 | func fileNotExist(file string) crfsCheck { 363 | return crfsCheckFn(func(t *testing.T, root *node) { 364 | _, _, err := getDirentAndNode(root, file) 365 | if err == nil { 366 | t.Errorf("Node %q exists", file) 367 | } 368 | }) 369 | } 370 | 371 | func hasFileDigest(file string, digest string) crfsCheck { 372 | return crfsCheckFn(func(t *testing.T, root *node) { 373 | _, ni, err := getDirentAndNode(root, file) 374 | if err != nil { 375 | t.Fatalf("failed to get node %q: %v", file, err) 376 | } 377 | n, ok := ni.(*node) 378 | if !ok { 379 | t.Fatalf("file %q isn't a normal node", file) 380 | } 381 | if n.te.Digest != digest { 382 | t.Fatalf("Digest(%q) = %q, want %q", file, n.te.Digest, digest) 383 | } 384 | }) 385 | } 386 | 387 | func hasValidWhiteout(name string) crfsCheck { 388 | return crfsCheckFn(func(t *testing.T, root *node) { 389 | ent, n, err := getDirentAndNode(root, name) 390 | if err != nil { 391 | t.Fatalf("failed to get node %q: %v", name, err) 392 | } 393 | var a fuse.Attr 394 | if err := n.Attr(context.Background(), &a); err != nil { 395 | t.Fatalf("failed to get attributes of file %q: %v", name, err) 396 | } 397 | if a.Inode != ent.Inode { 398 | t.Errorf("inconsistent inodes %d(Node) != %d(Dirent)", a.Inode, ent.Inode) 399 | return 400 | } 401 | 402 | // validate the direntry 403 | if ent.Type != fuse.DT_Char { 404 | t.Errorf("whiteout %q isn't a char device", name) 405 | return 406 | } 407 | 408 | // validate the node 409 | if a.Mode != os.ModeDevice|os.ModeCharDevice { 410 | t.Errorf("whiteout %q has an invalid mode %o; want %o", 411 | name, a.Mode, os.ModeDevice|os.ModeCharDevice) 412 | return 413 | } 414 | if a.Rdev != uint32(unix.Mkdev(0, 0)) { 415 | t.Errorf("whiteout %q has invalid device numbers (%d, %d); want (0, 0)", 416 | name, unix.Major(uint64(a.Rdev)), unix.Minor(uint64(a.Rdev))) 417 | return 418 | } 419 | }) 420 | } 421 | 422 | func hasNodeXattrs(entry, name, value string) crfsCheck { 423 | return crfsCheckFn(func(t *testing.T, root *node) { 424 | _, ni, err := getDirentAndNode(root, entry) 425 | if err != nil { 426 | t.Fatalf("failed to get node %q: %v", entry, err) 427 | } 428 | n, ok := ni.(*node) 429 | if !ok { 430 | t.Fatalf("node %q isn't a normal node", entry) 431 | } 432 | 433 | // check xattr exists in the xattrs list. 434 | listres := fuse.ListxattrResponse{} 435 | if err := n.Listxattr(context.Background(), &fuse.ListxattrRequest{}, &listres); err != nil { 436 | t.Fatalf("failed to get xattrs list of node %q: %v", entry, err) 437 | } 438 | xattrs := bytes.Split(listres.Xattr, []byte{0}) 439 | var found bool 440 | for _, x := range xattrs { 441 | if string(x) == name { 442 | found = true 443 | } 444 | } 445 | if !found { 446 | t.Errorf("node %q doesn't have an xattr %q", entry, name) 447 | return 448 | } 449 | 450 | // check the xattr has valid value. 451 | getres := fuse.GetxattrResponse{} 452 | if err := n.Getxattr(context.Background(), &fuse.GetxattrRequest{Name: name}, &getres); err != nil { 453 | t.Fatalf("failed to get xattr %q of node %q: %v", name, entry, err) 454 | } 455 | if string(getres.Xattr) != value { 456 | t.Errorf("node %q has an invalid xattr %q; want %q", entry, getres.Xattr, value) 457 | return 458 | } 459 | }) 460 | } 461 | 462 | // getDirentAndNode gets dirent and node at the specified path at once and makes 463 | // sure that the both of them exist. 464 | func getDirentAndNode(root *node, path string) (ent fuse.Dirent, n fspkg.Node, err error) { 465 | dir, base := filepath.Split(filepath.Clean(path)) 466 | 467 | // get the target's parent directory. 468 | d := root 469 | for _, name := range strings.Split(dir, "/") { 470 | if len(name) == 0 { 471 | continue 472 | } 473 | var di fspkg.Node 474 | di, err = d.Lookup(context.Background(), name) 475 | if err != nil { 476 | return 477 | } 478 | var ok bool 479 | d, ok = di.(*node) 480 | if !ok { 481 | err = fmt.Errorf("directory %q isn't a normal node", name) 482 | return 483 | } 484 | } 485 | 486 | // get the target's direntry. 487 | dhi, err := d.Open(context.Background(), &fuse.OpenRequest{Dir: true}, &fuse.OpenResponse{}) 488 | if err != nil { 489 | return 490 | } 491 | dh, ok := dhi.(*nodeHandle) 492 | if !ok { 493 | err = fmt.Errorf("the parent directory of %q isn't a normal node", path) 494 | return 495 | } 496 | var ents []fuse.Dirent 497 | ents, err = dh.ReadDirAll(context.Background()) 498 | if err != nil { 499 | return 500 | } 501 | var found bool 502 | for _, e := range ents { 503 | if e.Name == base { 504 | ent, found = e, true 505 | } 506 | } 507 | if !found { 508 | err = fmt.Errorf("direntry %q not found in the parent directory of %q", base, path) 509 | return 510 | } 511 | 512 | // get the target's node. 513 | if n, err = d.Lookup(context.Background(), base); err != nil { 514 | return 515 | } 516 | 517 | return 518 | } 519 | 520 | func digestFor(content string) string { 521 | sum := sha256.Sum256([]byte(content)) 522 | return fmt.Sprintf("sha256:%x", sum) 523 | } 524 | -------------------------------------------------------------------------------- /go.mod: -------------------------------------------------------------------------------- 1 | module github.com/google/crfs 2 | 3 | go 1.12 4 | 5 | require ( 6 | bazil.org/fuse v0.0.0-20180421153158-65cc252bf669 7 | cloud.google.com/go v0.37.2 8 | github.com/google/go-containerregistry v0.0.0-20191010200024-a3d713f9b7f8 9 | golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b 10 | ) 11 | -------------------------------------------------------------------------------- /go.sum: -------------------------------------------------------------------------------- 1 | bazil.org/fuse v0.0.0-20180421153158-65cc252bf669 h1:FNCRpXiquG1aoyqcIWVFmpTSKVcx2bQD38uZZeGtdlw= 2 | bazil.org/fuse v0.0.0-20180421153158-65cc252bf669/go.mod h1:Xbm+BRKSBEpa4q4hTSxohYNQpsxXPbPry4JJWOB3LB8= 3 | cloud.google.com/go v0.25.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= 4 | cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= 5 | cloud.google.com/go v0.31.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= 6 | cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= 7 | cloud.google.com/go v0.37.2 h1:4y4L7BdHenTfZL0HervofNTHh9Ad6mNX72cQvl+5eH0= 8 | cloud.google.com/go v0.37.2/go.mod h1:H8IAquKe2L30IxoupDgqTaQvKSwF/c8prYHynGIWQbA= 9 | git.apache.org/thrift.git v0.0.0-20180902110319-2566ecd5d999/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg= 10 | git.apache.org/thrift.git v0.12.0/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg= 11 | github.com/Azure/azure-sdk-for-go v19.1.1+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc= 12 | github.com/Azure/go-autorest v10.15.5+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24= 13 | github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= 14 | github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA= 15 | github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo= 16 | github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI= 17 | github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= 18 | github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= 19 | github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c= 20 | github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ= 21 | github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= 22 | github.com/aws/aws-sdk-go v1.15.90/go.mod h1:es1KtYUFs7le0xQ3rOihkuoVD90z7D0fR2Qm4S00/gU= 23 | github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q= 24 | github.com/bradfitz/go-smtpd v0.0.0-20170404230938-deb6d6237625/go.mod h1:HYsPBTaaSFSlLx/70C2HPIMNZpVV8+vt/A+FMnYP11g= 25 | github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= 26 | github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= 27 | github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk= 28 | github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= 29 | github.com/coreos/go-systemd v0.0.0-20181012123002-c6f51f82210d/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4= 30 | github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE= 31 | github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= 32 | github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= 33 | github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= 34 | github.com/docker/cli v0.0.0-20190925022749-754388324470 h1:KrSeY2qJPl1blFLllwCMBIgwilomqEte/nb8dPhqY2o= 35 | github.com/docker/cli v0.0.0-20190925022749-754388324470/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8= 36 | github.com/docker/distribution v2.6.0-rc.1.0.20180327202408-83389a148052+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w= 37 | github.com/docker/docker v1.4.2-0.20180531152204-71cd53e4a197/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= 38 | github.com/docker/docker-credential-helpers v0.6.3 h1:zI2p9+1NQYdnG6sMU26EX4aVGlqbInSQxQXLvzJ4RPQ= 39 | github.com/docker/docker-credential-helpers v0.6.3/go.mod h1:WRaJzqw3CTB9bk10avuGsjVBZsD05qeibJ1/TYlvc0Y= 40 | github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec= 41 | github.com/docker/go-units v0.3.3/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk= 42 | github.com/eapache/go-resiliency v1.1.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs= 43 | github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU= 44 | github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I= 45 | github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc= 46 | github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= 47 | github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= 48 | github.com/gliderlabs/ssh v0.1.1/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0= 49 | github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= 50 | github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= 51 | github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= 52 | github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= 53 | github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= 54 | github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= 55 | github.com/golang/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:tluoj9z5200jBnyusfRPU2LqT6J+DAorxEvtC7LHB+E= 56 | github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= 57 | github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= 58 | github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM= 59 | github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= 60 | github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= 61 | github.com/google/btree v0.0.0-20180124185431-e89373fe6b4a/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= 62 | github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= 63 | github.com/google/go-cmp v0.2.0 h1:+dTQ8DZQJz0Mb/HjFlkptS1FeQ4cWSnN941F8aEG4SQ= 64 | github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= 65 | github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= 66 | github.com/google/go-containerregistry v0.0.0-20190401170947-eedaddc5e2c8 h1:dFHKDxeD4XBwLyUUUDtuBrH/4Hrl++H2Gtv5rOpku7I= 67 | github.com/google/go-containerregistry v0.0.0-20190401170947-eedaddc5e2c8/go.mod h1:yZAFP63pRshzrEYLXLGPmUt0Ay+2zdjmMN1loCnRLUk= 68 | github.com/google/go-containerregistry v0.0.0-20191009212737-d753c5604768 h1:vSjeYJhbBmtpC+n8bshsGUsmwkEqJKjL9uSuCTaL3Eo= 69 | github.com/google/go-containerregistry v0.0.0-20191009212737-d753c5604768/go.mod h1:KyKXa9ciM8+lgMXwOVsXi7UxGrsf9mM61Mzs+xKUrKE= 70 | github.com/google/go-containerregistry v0.0.0-20191010200024-a3d713f9b7f8 h1:i2MA7D3vtR5uk9ZPzVp/IC9616kCPv0RScyRD/tVQGM= 71 | github.com/google/go-containerregistry v0.0.0-20191010200024-a3d713f9b7f8/go.mod h1:KyKXa9ciM8+lgMXwOVsXi7UxGrsf9mM61Mzs+xKUrKE= 72 | github.com/google/go-github v17.0.0+incompatible/go.mod h1:zLgOLi98H3fifZn+44m+umXrS52loVEgC2AApnigrVQ= 73 | github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck= 74 | github.com/google/gofuzz v0.0.0-20170612174753-24818f796faf/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI= 75 | github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs= 76 | github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= 77 | github.com/googleapis/gax-go v2.0.0+incompatible/go.mod h1:SFVmujtThgffbyetf+mdk2eWhX2bMyUtNHzFKcPA9HY= 78 | github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg= 79 | github.com/googleapis/gnostic v0.2.2/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY= 80 | github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg= 81 | github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs= 82 | github.com/gotestyourself/gotestyourself v2.2.0+incompatible/go.mod h1:zZKM6oeNM8k+FRljX1mnzVYeS8wiGgQyvST1/GafPbY= 83 | github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA= 84 | github.com/grpc-ecosystem/grpc-gateway v1.5.0/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw= 85 | github.com/grpc-ecosystem/grpc-gateway v1.6.2/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw= 86 | github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= 87 | github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= 88 | github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= 89 | github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8= 90 | github.com/jellevandenhooff/dkim v0.0.0-20150330215556-f50fe3d243e1/go.mod h1:E0B/fFc00Y+Rasa88328GlI/XbtyysCtTHZS8h7IrBU= 91 | github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k= 92 | github.com/json-iterator/go v0.0.0-20180701071628-ab8a2e0c74be/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= 93 | github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU= 94 | github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w= 95 | github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= 96 | github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= 97 | github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc= 98 | github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= 99 | github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= 100 | github.com/kr/pty v1.1.3/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= 101 | github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= 102 | github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= 103 | github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0= 104 | github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= 105 | github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= 106 | github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= 107 | github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= 108 | github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= 109 | github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= 110 | github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= 111 | github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= 112 | github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= 113 | github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= 114 | github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s= 115 | github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0= 116 | github.com/openzipkin/zipkin-go v0.1.1/go.mod h1:NtoC/o8u3JlF1lSlyPNswIbeQH9bJTmOf0Erfk+hxe8= 117 | github.com/openzipkin/zipkin-go v0.1.3/go.mod h1:NtoC/o8u3JlF1lSlyPNswIbeQH9bJTmOf0Erfk+hxe8= 118 | github.com/openzipkin/zipkin-go v0.1.6/go.mod h1:QgAqvLzwWbR/WpD4A3cGpPtJrZXNIiJc5AZX7/PBEpw= 119 | github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= 120 | github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU= 121 | github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY= 122 | github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= 123 | github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I= 124 | github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= 125 | github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= 126 | github.com/prometheus/client_golang v0.8.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= 127 | github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= 128 | github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod h1:p2iRAGwDERtqlqzRXnrOVns+ignqQo//hLXqYxZYVNs= 129 | github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= 130 | github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= 131 | github.com/prometheus/common v0.0.0-20180801064454-c7de2306084e/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= 132 | github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= 133 | github.com/prometheus/procfs v0.0.0-20180725123919-05ee40e3a273/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= 134 | github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= 135 | github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= 136 | github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4= 137 | github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= 138 | github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo= 139 | github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q= 140 | github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= 141 | github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= 142 | github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= 143 | github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= 144 | github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= 145 | github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= 146 | github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= 147 | github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= 148 | github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= 149 | github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= 150 | github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= 151 | github.com/tarm/serial v0.0.0-20180830185346-98f6abe2eb07/go.mod h1:kDXzergiv9cbyO7IOYJZWg1U88JhDg3PB6klq9Hg2pA= 152 | github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= 153 | github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= 154 | go.opencensus.io v0.18.0/go.mod h1:vKdFvxhtzZ9onBp9VKHK8z/sRpBMnKAsufL7wlDrCOA= 155 | go.opencensus.io v0.19.1/go.mod h1:gug0GbSHa8Pafr0d2urOSgoXHZ6x/RUlaiT0d9pqb4A= 156 | go.opencensus.io v0.19.2/go.mod h1:NO/8qkisMZLZ1FCsKNqtJPwc8/TaclWyY0B6wcYNg9M= 157 | go4.org v0.0.0-20180809161055-417644f6feb5/go.mod h1:MkTOUMDaeVYJUOUsaDXIhWPZYa1yOyC1qaOBpL57BhE= 158 | golang.org/x/build v0.0.0-20190314133821-5284462c4bec/go.mod h1:atTaCNAy0f16Ah5aV1gMSwgiKVHwu/JncqDpuRr7lS4= 159 | golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= 160 | golang.org/x/crypto v0.0.0-20181030102418-4d3f4d9ffa16/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= 161 | golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= 162 | golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= 163 | golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= 164 | golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= 165 | golang.org/x/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= 166 | golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= 167 | golang.org/x/lint v0.0.0-20181217174547-8f45f776aaf1/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= 168 | golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= 169 | golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= 170 | golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= 171 | golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= 172 | golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= 173 | golang.org/x/net v0.0.0-20181029044818-c44066c5c816/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= 174 | golang.org/x/net v0.0.0-20181106065722-10aee1819953/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= 175 | golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= 176 | golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= 177 | golang.org/x/net v0.0.0-20190125091013-d26f9f9a57f3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= 178 | golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= 179 | golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= 180 | golang.org/x/net v0.0.0-20190320064053-1272bf9dcd53 h1:kcXqo9vE6fsZY5X5Rd7R1l7fTgnWaDCVmln65REefiE= 181 | golang.org/x/net v0.0.0-20190320064053-1272bf9dcd53/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= 182 | golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= 183 | golang.org/x/net v0.0.0-20190620200207-3b0461eec859 h1:R/3boaszxrf1GEUWTVDzSKVwLmSJpwZ1yqXm8j0v2QI= 184 | golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= 185 | golang.org/x/oauth2 v0.0.0-20180724155351-3d292e4d0cdc/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= 186 | golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= 187 | golang.org/x/oauth2 v0.0.0-20181017192945-9dcd33a902f4/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= 188 | golang.org/x/oauth2 v0.0.0-20181203162652-d668ce993890/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= 189 | golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421 h1:Wo7BWFiOk0QRFMLYMqJGFMd9CgUAcGx7V+qEg/h5IBI= 190 | golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= 191 | golang.org/x/perf v0.0.0-20180704124530-6e6d33e29852/go.mod h1:JLpeXjPJfIyPr5TlbXLkXWLhP8nz10XfvxElABhCtcw= 192 | golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 193 | golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 194 | golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 195 | golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6 h1:bjcUS9ztw9kFmmIxJInhon/0Is3p+EHBKNgquIzo1OI= 196 | golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 197 | golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 198 | golang.org/x/sync v0.0.0-20190423024810-112230192c58 h1:8gQV6CLnAEikrhgkHFbMAEhagSSnXWGV915qUMm9mrU= 199 | golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 200 | golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 201 | golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 202 | golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 203 | golang.org/x/sys v0.0.0-20181029174526-d69651ed3497/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 204 | golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 205 | golang.org/x/sys v0.0.0-20181122145206-62eef0e2fa9b/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 206 | golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 207 | golang.org/x/sys v0.0.0-20181218192612-074acd46bca6/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 208 | golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 209 | golang.org/x/sys v0.0.0-20190321052220-f7bb7a8bee54 h1:xe1/2UUJRmA9iDglQSlkx8c5n3twv58+K0mPpC2zmhA= 210 | golang.org/x/sys v0.0.0-20190321052220-f7bb7a8bee54/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 211 | golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 212 | golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b h1:ag/x1USPSsqHud38I9BAC88qdNLDHHtQ4mlgQIZPPNA= 213 | golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 214 | golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= 215 | golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= 216 | golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= 217 | golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= 218 | golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= 219 | golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= 220 | golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= 221 | golang.org/x/tools v0.0.0-20181219222714-6e267b5cc78e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= 222 | golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= 223 | golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= 224 | golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= 225 | google.golang.org/api v0.0.0-20180910000450-7ca32eb868bf/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0= 226 | google.golang.org/api v0.0.0-20181030000543-1d582fd0359e/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0= 227 | google.golang.org/api v0.0.0-20181220000619-583d854617af/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0= 228 | google.golang.org/api v0.2.0/go.mod h1:IfRCZScioGtypHNTlz3gFk67J8uePVW7uDTBzXuIkhU= 229 | google.golang.org/api v0.3.0/go.mod h1:IuvZyQh8jgscv8qWfQ4ABd8m7hEudgBFM/EdhA3BnXw= 230 | google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= 231 | google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= 232 | google.golang.org/appengine v1.3.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= 233 | google.golang.org/appengine v1.4.0 h1:/wp5JvzpHIxhs/dumFmF7BXTf3Z+dd4uXta4kVyO508= 234 | google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= 235 | google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= 236 | google.golang.org/genproto v0.0.0-20180831171423-11092d34479b/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= 237 | google.golang.org/genproto v0.0.0-20181029155118-b69ba1387ce2/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= 238 | google.golang.org/genproto v0.0.0-20181219182458-5a97ab628bfb/go.mod h1:7Ep/1NZk928CDR8SjdVbjWNpdIf6nzjE3BTgJDr2Atg= 239 | google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= 240 | google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw= 241 | google.golang.org/grpc v1.16.0/go.mod h1:0JHn/cJsOMiMfNA9+DeHDlAU7KAAB5GDlYFpa9MZMio= 242 | google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs= 243 | google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= 244 | gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= 245 | gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= 246 | gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= 247 | gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys= 248 | gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= 249 | gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw= 250 | gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= 251 | gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= 252 | gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw= 253 | grpc.go4.org v0.0.0-20170609214715-11d0a25b4919/go.mod h1:77eQGdRu53HpSqPFJFmuJdjuHRquDANNeA4x7B8WQ9o= 254 | honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= 255 | honnef.co/go/tools v0.0.0-20180920025451-e3ad64cb4ed3/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= 256 | honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= 257 | honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= 258 | k8s.io/api v0.0.0-20180904230853-4e7be11eab3f/go.mod h1:iuAfoD4hCxJ8Onx9kaTIt30j7jUFS00AXQi6QMi99vA= 259 | k8s.io/apimachinery v0.0.0-20180904193909-def12e63c512/go.mod h1:ccL7Eh7zubPUSh9A3USN90/OzHNSVN6zxzde07TDCL0= 260 | k8s.io/client-go v0.0.0-20180910083459-2cefa64ff137/go.mod h1:7vJpHMYJwNQCWgzmNV+VYUl1zCObLyodBc8nIyt8L5s= 261 | k8s.io/kube-openapi v0.0.0-20180731170545-e3762e86a74c/go.mod h1:BXM9ceUBTj2QnfH2MK1odQs778ajze1RxcmP6S8RVVc= 262 | k8s.io/kubernetes v1.11.10/go.mod h1:ocZa8+6APFNC2tX1DZASIbocyYT5jHzqFVsY5aoB7Jk= 263 | -------------------------------------------------------------------------------- /stargz/stargz.go: -------------------------------------------------------------------------------- 1 | // Copyright 2019 The Go Authors. All rights reserved. 2 | // Use of this source code is governed by a BSD-style 3 | // license that can be found in the LICENSE file. 4 | 5 | // The stargz package reads & writes tar.gz ("tarball") files in a 6 | // seekable, indexed format call "stargz". A stargz file is still a 7 | // valid tarball, but it's slightly bigger with new gzip streams for 8 | // each new file & throughout large files, and has an index in a magic 9 | // file at the end. 10 | package stargz 11 | 12 | import ( 13 | "archive/tar" 14 | "bufio" 15 | "bytes" 16 | "compress/gzip" 17 | "crypto/sha256" 18 | "encoding/json" 19 | "errors" 20 | "fmt" 21 | "hash" 22 | "io" 23 | "io/ioutil" 24 | "os" 25 | "path" 26 | "sort" 27 | "strconv" 28 | "strings" 29 | "time" 30 | ) 31 | 32 | // TOCTarName is the name of the JSON file in the tar archive in the 33 | // table of contents gzip stream. 34 | const TOCTarName = "stargz.index.json" 35 | 36 | // FooterSize is the number of bytes in the stargz footer. 37 | // 38 | // The footer is an empty gzip stream with no compression and an Extra 39 | // header of the form "%016xSTARGZ", where the 64 bit hex-encoded 40 | // number is the offset to the gzip stream of JSON TOC. 41 | // 42 | // 47 comes from: 43 | // 44 | // 10 byte gzip header + 45 | // 2 byte (LE16) length of extra, encoding 22 (16 hex digits + len("STARGZ")) == "\x16\x00" + 46 | // 22 bytes of extra (fmt.Sprintf("%016xSTARGZ", tocGzipOffset)) 47 | // 5 byte flate header 48 | // 8 byte gzip footer (two little endian uint32s: digest, size) 49 | const FooterSize = 47 50 | 51 | // A Reader permits random access reads from a stargz file. 52 | type Reader struct { 53 | sr *io.SectionReader 54 | toc *jtoc 55 | 56 | // m stores all non-chunk entries, keyed by name. 57 | m map[string]*TOCEntry 58 | 59 | // chunks stores all TOCEntry values for regular files that 60 | // are split up. For a file with a single chunk, it's only 61 | // stored in m. 62 | chunks map[string][]*TOCEntry 63 | } 64 | 65 | // Open opens a stargz file for reading. 66 | func Open(sr *io.SectionReader) (*Reader, error) { 67 | if sr.Size() < FooterSize { 68 | return nil, fmt.Errorf("stargz size %d is smaller than the stargz footer size", sr.Size()) 69 | } 70 | // TODO: read a bigger chunk (1MB?) at once here to hopefully 71 | // get the TOC + footer in one go. 72 | var footer [FooterSize]byte 73 | if _, err := sr.ReadAt(footer[:], sr.Size()-FooterSize); err != nil { 74 | return nil, fmt.Errorf("error reading footer: %v", err) 75 | } 76 | tocOff, ok := parseFooter(footer[:]) 77 | if !ok { 78 | return nil, fmt.Errorf("error parsing footer") 79 | } 80 | tocTargz := make([]byte, sr.Size()-tocOff-FooterSize) 81 | if _, err := sr.ReadAt(tocTargz, tocOff); err != nil { 82 | return nil, fmt.Errorf("error reading %d byte TOC targz: %v", len(tocTargz), err) 83 | } 84 | zr, err := gzip.NewReader(bytes.NewReader(tocTargz)) 85 | if err != nil { 86 | return nil, fmt.Errorf("malformed TOC gzip header: %v", err) 87 | } 88 | zr.Multistream(false) 89 | tr := tar.NewReader(zr) 90 | h, err := tr.Next() 91 | if err != nil { 92 | return nil, fmt.Errorf("failed to find tar header in TOC gzip stream: %v", err) 93 | } 94 | if h.Name != TOCTarName { 95 | return nil, fmt.Errorf("TOC tar entry had name %q; expected %q", h.Name, TOCTarName) 96 | } 97 | toc := new(jtoc) 98 | if err := json.NewDecoder(tr).Decode(&toc); err != nil { 99 | return nil, fmt.Errorf("error decoding TOC JSON: %v", err) 100 | } 101 | r := &Reader{sr: sr, toc: toc} 102 | if err := r.initFields(); err != nil { 103 | return nil, fmt.Errorf("failed to initialize fields of entries: %v", err) 104 | } 105 | return r, nil 106 | } 107 | 108 | // TOCEntry is an entry in the stargz file's TOC (Table of Contents). 109 | type TOCEntry struct { 110 | // Name is the tar entry's name. It is the complete path 111 | // stored in the tar file, not just the base name. 112 | Name string `json:"name"` 113 | 114 | // Type is one of "dir", "reg", "symlink", "hardlink", "char", 115 | // "block", "fifo", or "chunk". 116 | // The "chunk" type is used for regular file data chunks past the first 117 | // TOCEntry; the 2nd chunk and on have only Type ("chunk"), Offset, 118 | // ChunkOffset, and ChunkSize populated. 119 | Type string `json:"type"` 120 | 121 | // Size, for regular files, is the logical size of the file. 122 | Size int64 `json:"size,omitempty"` 123 | 124 | // ModTime3339 is the modification time of the tar entry. Empty 125 | // means zero or unknown. Otherwise it's in UTC RFC3339 126 | // format. Use the ModTime method to access the time.Time value. 127 | ModTime3339 string `json:"modtime,omitempty"` 128 | modTime time.Time 129 | 130 | // LinkName, for symlinks and hardlinks, is the link target. 131 | LinkName string `json:"linkName,omitempty"` 132 | 133 | // Mode is the permission and mode bits. 134 | Mode int64 `json:"mode,omitempty"` 135 | 136 | // Uid is the user ID of the owner. 137 | Uid int `json:"uid,omitempty"` 138 | 139 | // Gid is the group ID of the owner. 140 | Gid int `json:"gid,omitempty"` 141 | 142 | // Uname is the username of the owner. 143 | // 144 | // In the serialized JSON, this field may only be present for 145 | // the first entry with the same Uid. 146 | Uname string `json:"userName,omitempty"` 147 | 148 | // Gname is the group name of the owner. 149 | // 150 | // In the serialized JSON, this field may only be present for 151 | // the first entry with the same Gid. 152 | Gname string `json:"groupName,omitempty"` 153 | 154 | // Offset, for regular files, provides the offset in the 155 | // stargz file to the file's data bytes. See ChunkOffset and 156 | // ChunkSize. 157 | Offset int64 `json:"offset,omitempty"` 158 | 159 | nextOffset int64 // the Offset of the next entry with a non-zero Offset 160 | 161 | // DevMajor is the major device number for "char" and "block" types. 162 | DevMajor int `json:"devMajor,omitempty"` 163 | 164 | // DevMinor is the major device number for "char" and "block" types. 165 | DevMinor int `json:"devMinor,omitempty"` 166 | 167 | // NumLink is the number of entry names pointing to this entry. 168 | // Zero means one name references this entry. 169 | NumLink int 170 | 171 | // Xattrs are the extended attribute for the entry. 172 | Xattrs map[string][]byte `json:"xattrs,omitempty"` 173 | 174 | // Digest stores the OCI checksum for regular files payload. 175 | // It has the form "sha256:abcdef01234....". 176 | Digest string `json:"digest,omitempty"` 177 | 178 | // ChunkOffset is non-zero if this is a chunk of a large, 179 | // regular file. If so, the Offset is where the gzip header of 180 | // ChunkSize bytes at ChunkOffset in Name begin. 181 | // 182 | // In serialized form, a "chunkSize" JSON field of zero means 183 | // that the chunk goes to the end of the file. After reading 184 | // from the stargz TOC, though, the ChunkSize is initialized 185 | // to a non-zero file for when Type is either "reg" or 186 | // "chunk". 187 | ChunkOffset int64 `json:"chunkOffset,omitempty"` 188 | ChunkSize int64 `json:"chunkSize,omitempty"` 189 | 190 | children map[string]*TOCEntry 191 | } 192 | 193 | // ModTime returns the entry's modification time. 194 | func (e *TOCEntry) ModTime() time.Time { return e.modTime } 195 | 196 | // NextOffset returns the position (relative to the start of the 197 | // stargz file) of the next gzip boundary after e.Offset. 198 | func (e *TOCEntry) NextOffset() int64 { return e.nextOffset } 199 | 200 | func (e *TOCEntry) addChild(baseName string, child *TOCEntry) { 201 | if e.children == nil { 202 | e.children = make(map[string]*TOCEntry) 203 | } 204 | if child.Type == "dir" { 205 | e.NumLink++ // Entry ".." in the subdirectory links to this directory 206 | } 207 | e.children[baseName] = child 208 | } 209 | 210 | // isDataType reports whether TOCEntry is a regular file or chunk (something that 211 | // contains regular file data). 212 | func (e *TOCEntry) isDataType() bool { return e.Type == "reg" || e.Type == "chunk" } 213 | 214 | // jtoc is the JSON-serialized table of contents index of the files in the stargz file. 215 | type jtoc struct { 216 | Version int `json:"version"` 217 | Entries []*TOCEntry `json:"entries"` 218 | } 219 | 220 | // Stat returns a FileInfo value representing e. 221 | func (e *TOCEntry) Stat() os.FileInfo { return fileInfo{e} } 222 | 223 | // ForeachChild calls f for each child item. If f returns false, iteration ends. 224 | // If e is not a directory, f is not called. 225 | func (e *TOCEntry) ForeachChild(f func(baseName string, ent *TOCEntry) bool) { 226 | for name, ent := range e.children { 227 | if !f(name, ent) { 228 | return 229 | } 230 | } 231 | } 232 | 233 | // LookupChild returns the directory e's child by its base name. 234 | func (e *TOCEntry) LookupChild(baseName string) (child *TOCEntry, ok bool) { 235 | child, ok = e.children[baseName] 236 | return 237 | } 238 | 239 | // fileInfo implements os.FileInfo using the wrapped *TOCEntry. 240 | type fileInfo struct{ e *TOCEntry } 241 | 242 | var _ os.FileInfo = fileInfo{} 243 | 244 | func (fi fileInfo) Name() string { return path.Base(fi.e.Name) } 245 | func (fi fileInfo) IsDir() bool { return fi.e.Type == "dir" } 246 | func (fi fileInfo) Size() int64 { return fi.e.Size } 247 | func (fi fileInfo) ModTime() time.Time { return fi.e.ModTime() } 248 | func (fi fileInfo) Sys() interface{} { return fi.e } 249 | func (fi fileInfo) Mode() (m os.FileMode) { 250 | m = os.FileMode(fi.e.Mode) & os.ModePerm 251 | switch fi.e.Type { 252 | case "dir": 253 | m |= os.ModeDir 254 | case "symlink": 255 | m |= os.ModeSymlink 256 | case "char": 257 | m |= os.ModeDevice | os.ModeCharDevice 258 | case "block": 259 | m |= os.ModeDevice 260 | case "fifo": 261 | m |= os.ModeNamedPipe 262 | } 263 | // TODO: ModeSetuid, ModeSetgid, if/as needed. 264 | return m 265 | } 266 | 267 | // initFields populates the Reader from r.toc after decoding it from 268 | // JSON. 269 | // 270 | // Unexported fields are populated and TOCEntry fields that were 271 | // implicit in the JSON are populated. 272 | func (r *Reader) initFields() error { 273 | r.m = make(map[string]*TOCEntry, len(r.toc.Entries)) 274 | r.chunks = make(map[string][]*TOCEntry) 275 | var lastPath string 276 | uname := map[int]string{} 277 | gname := map[int]string{} 278 | var lastRegEnt *TOCEntry 279 | for _, ent := range r.toc.Entries { 280 | ent.Name = strings.TrimPrefix(ent.Name, "./") 281 | if ent.Type == "reg" { 282 | lastRegEnt = ent 283 | } 284 | if ent.Type == "chunk" { 285 | ent.Name = lastPath 286 | r.chunks[ent.Name] = append(r.chunks[ent.Name], ent) 287 | if ent.ChunkSize == 0 && lastRegEnt != nil { 288 | ent.ChunkSize = lastRegEnt.Size - ent.ChunkOffset 289 | } 290 | } else { 291 | lastPath = ent.Name 292 | 293 | if ent.Uname != "" { 294 | uname[ent.Uid] = ent.Uname 295 | } else { 296 | ent.Uname = uname[ent.Uid] 297 | } 298 | if ent.Gname != "" { 299 | gname[ent.Gid] = ent.Gname 300 | } else { 301 | ent.Gname = uname[ent.Gid] 302 | } 303 | 304 | ent.modTime, _ = time.Parse(time.RFC3339, ent.ModTime3339) 305 | 306 | if ent.Type == "dir" { 307 | ent.NumLink++ // Parent dir links to this directory 308 | r.m[strings.TrimSuffix(ent.Name, "/")] = ent 309 | } else { 310 | r.m[ent.Name] = ent 311 | } 312 | } 313 | if ent.Type == "reg" && ent.ChunkSize > 0 && ent.ChunkSize < ent.Size { 314 | r.chunks[ent.Name] = make([]*TOCEntry, 0, ent.Size/ent.ChunkSize+1) 315 | r.chunks[ent.Name] = append(r.chunks[ent.Name], ent) 316 | } 317 | if ent.ChunkSize == 0 && ent.Size != 0 { 318 | ent.ChunkSize = ent.Size 319 | } 320 | } 321 | 322 | // Populate children, add implicit directories: 323 | for _, ent := range r.toc.Entries { 324 | if ent.Type == "chunk" { 325 | continue 326 | } 327 | // add "foo/": 328 | // add "foo" child to "" (creating "" if necessary) 329 | // 330 | // add "foo/bar/": 331 | // add "bar" child to "foo" (creating "foo" if necessary) 332 | // 333 | // add "foo/bar.txt": 334 | // add "bar.txt" child to "foo" (creating "foo" if necessary) 335 | // 336 | // add "a/b/c/d/e/f.txt": 337 | // create "a/b/c/d/e" node 338 | // add "f.txt" child to "e" 339 | 340 | name := ent.Name 341 | if ent.Type == "dir" { 342 | name = strings.TrimSuffix(name, "/") 343 | } 344 | pdir := r.getOrCreateDir(parentDir(name)) 345 | ent.NumLink++ // at least one name(ent.Name) references this entry. 346 | if ent.Type == "hardlink" { 347 | if org, ok := r.m[ent.LinkName]; ok { 348 | org.NumLink++ // original entry is referenced by this ent.Name. 349 | ent = org 350 | } else { 351 | return fmt.Errorf("%q is a hardlink but the linkname %q isn't found", ent.Name, ent.LinkName) 352 | } 353 | } 354 | pdir.addChild(path.Base(name), ent) 355 | } 356 | 357 | lastOffset := r.sr.Size() 358 | for i := len(r.toc.Entries) - 1; i >= 0; i-- { 359 | e := r.toc.Entries[i] 360 | if e.isDataType() { 361 | e.nextOffset = lastOffset 362 | } 363 | if e.Offset != 0 { 364 | lastOffset = e.Offset 365 | } 366 | } 367 | 368 | return nil 369 | } 370 | 371 | func parentDir(p string) string { 372 | dir, _ := path.Split(p) 373 | return strings.TrimSuffix(dir, "/") 374 | } 375 | 376 | func (r *Reader) getOrCreateDir(d string) *TOCEntry { 377 | e, ok := r.m[d] 378 | if !ok { 379 | e = &TOCEntry{ 380 | Name: d, 381 | Type: "dir", 382 | Mode: 0755, 383 | NumLink: 2, // The directory itself(.) and the parent link to this directory. 384 | } 385 | r.m[d] = e 386 | if d != "" { 387 | pdir := r.getOrCreateDir(parentDir(d)) 388 | pdir.addChild(path.Base(d), e) 389 | } 390 | } 391 | return e 392 | } 393 | 394 | // ChunkEntryForOffset returns the TOCEntry containing the byte of the 395 | // named file at the given offset within the file. 396 | func (r *Reader) ChunkEntryForOffset(name string, offset int64) (e *TOCEntry, ok bool) { 397 | e, ok = r.Lookup(name) 398 | if !ok || !e.isDataType() { 399 | return nil, false 400 | } 401 | ents := r.chunks[name] 402 | if len(ents) < 2 { 403 | if offset >= e.ChunkSize { 404 | return nil, false 405 | } 406 | return e, true 407 | } 408 | i := sort.Search(len(ents), func(i int) bool { 409 | e := ents[i] 410 | return e.ChunkOffset >= offset || (offset > e.ChunkOffset && offset < e.ChunkOffset+e.ChunkSize) 411 | }) 412 | if i == len(ents) { 413 | return nil, false 414 | } 415 | return ents[i], true 416 | } 417 | 418 | // Lookup returns the Table of Contents entry for the given path. 419 | // 420 | // To get the root directory, use the empty string. 421 | func (r *Reader) Lookup(path string) (e *TOCEntry, ok bool) { 422 | if r == nil { 423 | return 424 | } 425 | e, ok = r.m[path] 426 | if ok && e.Type == "hardlink" { 427 | e, ok = r.m[e.LinkName] 428 | } 429 | return 430 | } 431 | 432 | func (r *Reader) OpenFile(name string) (*io.SectionReader, error) { 433 | ent, ok := r.Lookup(name) 434 | if !ok { 435 | // TODO: come up with some error plan. This is lazy: 436 | return nil, &os.PathError{ 437 | Path: name, 438 | Op: "OpenFile", 439 | Err: os.ErrNotExist, 440 | } 441 | } 442 | if ent.Type != "reg" { 443 | return nil, &os.PathError{ 444 | Path: name, 445 | Op: "OpenFile", 446 | Err: errors.New("not a regular file"), 447 | } 448 | } 449 | fr := &fileReader{ 450 | r: r, 451 | size: ent.Size, 452 | ents: r.getChunks(ent), 453 | } 454 | return io.NewSectionReader(fr, 0, fr.size), nil 455 | } 456 | 457 | func (r *Reader) getChunks(ent *TOCEntry) []*TOCEntry { 458 | if ents, ok := r.chunks[ent.Name]; ok { 459 | return ents 460 | } 461 | return []*TOCEntry{ent} 462 | } 463 | 464 | type fileReader struct { 465 | r *Reader 466 | size int64 467 | ents []*TOCEntry // 1 or more reg/chunk entries 468 | } 469 | 470 | func (fr *fileReader) ReadAt(p []byte, off int64) (n int, err error) { 471 | if off >= fr.size { 472 | return 0, io.EOF 473 | } 474 | if off < 0 { 475 | return 0, errors.New("invalid offset") 476 | } 477 | var i int 478 | if len(fr.ents) > 1 { 479 | i = sort.Search(len(fr.ents), func(i int) bool { 480 | return fr.ents[i].ChunkOffset >= off 481 | }) 482 | if i == len(fr.ents) { 483 | i = len(fr.ents) - 1 484 | } 485 | } 486 | ent := fr.ents[i] 487 | if ent.ChunkOffset > off { 488 | if i == 0 { 489 | return 0, errors.New("internal error; first chunk offset is non-zero") 490 | } 491 | ent = fr.ents[i-1] 492 | } 493 | 494 | // If ent is a chunk of a large file, adjust the ReadAt 495 | // offset by the chunk's offset. 496 | off -= ent.ChunkOffset 497 | 498 | finalEnt := fr.ents[len(fr.ents)-1] 499 | gzOff := ent.Offset 500 | // gzBytesRemain is the number of compressed gzip bytes in this 501 | // file remaining, over 1+ gzip chunks. 502 | gzBytesRemain := finalEnt.NextOffset() - gzOff 503 | 504 | sr := io.NewSectionReader(fr.r.sr, gzOff, gzBytesRemain) 505 | 506 | const maxGZread = 2 << 20 507 | var bufSize int = maxGZread 508 | if gzBytesRemain < maxGZread { 509 | bufSize = int(gzBytesRemain) 510 | } 511 | 512 | br := bufio.NewReaderSize(sr, bufSize) 513 | if _, err := br.Peek(bufSize); err != nil { 514 | return 0, fmt.Errorf("fileReader.ReadAt.peek: %v", err) 515 | } 516 | 517 | gz, err := gzip.NewReader(br) 518 | if err != nil { 519 | return 0, fmt.Errorf("fileReader.ReadAt.gzipNewReader: %v", err) 520 | } 521 | if n, err := io.CopyN(ioutil.Discard, gz, off); n != off || err != nil { 522 | return 0, fmt.Errorf("discard of %d bytes = %v, %v", off, n, err) 523 | } 524 | return io.ReadFull(gz, p) 525 | } 526 | 527 | // A Writer writes stargz files. 528 | // 529 | // Use NewWriter to create a new Writer. 530 | type Writer struct { 531 | bw *bufio.Writer 532 | cw *countWriter 533 | toc *jtoc 534 | diffHash hash.Hash // SHA-256 of uncompressed tar 535 | 536 | closed bool 537 | gz *gzip.Writer 538 | lastUsername map[int]string 539 | lastGroupname map[int]string 540 | 541 | // ChunkSize optionally controls the maximum number of bytes 542 | // of data of a regular file that can be written in one gzip 543 | // stream before a new gzip stream is started. 544 | // Zero means to use a default, currently 4 MiB. 545 | ChunkSize int 546 | } 547 | 548 | // currentGzipWriter writes to the current w.gz field, which can 549 | // change throughout writing a tar entry. 550 | // 551 | // Additionally, it updates w's SHA-256 of the uncompressed bytes 552 | // of the tar file. 553 | type currentGzipWriter struct{ w *Writer } 554 | 555 | func (cgw currentGzipWriter) Write(p []byte) (int, error) { 556 | cgw.w.diffHash.Write(p) 557 | return cgw.w.gz.Write(p) 558 | } 559 | 560 | func (w *Writer) chunkSize() int { 561 | if w.ChunkSize <= 0 { 562 | return 4 << 20 563 | } 564 | return w.ChunkSize 565 | } 566 | 567 | // NewWriter returns a new stargz writer writing to w. 568 | // 569 | // The writer must be closed to write its trailing table of contents. 570 | func NewWriter(w io.Writer) *Writer { 571 | bw := bufio.NewWriter(w) 572 | cw := &countWriter{w: bw} 573 | return &Writer{ 574 | bw: bw, 575 | cw: cw, 576 | toc: &jtoc{Version: 1}, 577 | diffHash: sha256.New(), 578 | } 579 | } 580 | 581 | // Close writes the stargz's table of contents and flushes all the 582 | // buffers, returning any error. 583 | func (w *Writer) Close() error { 584 | if w.closed { 585 | return nil 586 | } 587 | defer func() { w.closed = true }() 588 | 589 | if err := w.closeGz(); err != nil { 590 | return err 591 | } 592 | 593 | // Write the TOC index. 594 | tocOff := w.cw.n 595 | w.gz, _ = gzip.NewWriterLevel(w.cw, gzip.BestCompression) 596 | w.gz.Extra = []byte("stargz.toc") 597 | tw := tar.NewWriter(currentGzipWriter{w}) 598 | tocJSON, err := json.MarshalIndent(w.toc, "", "\t") 599 | if err != nil { 600 | return err 601 | } 602 | if err := tw.WriteHeader(&tar.Header{ 603 | Typeflag: tar.TypeReg, 604 | Name: TOCTarName, 605 | Size: int64(len(tocJSON)), 606 | }); err != nil { 607 | return err 608 | } 609 | if _, err := tw.Write(tocJSON); err != nil { 610 | return err 611 | } 612 | 613 | if err := tw.Close(); err != nil { 614 | return err 615 | } 616 | if err := w.closeGz(); err != nil { 617 | return err 618 | } 619 | 620 | // And a little footer with pointer to the TOC gzip stream. 621 | if _, err := w.bw.Write(footerBytes(tocOff)); err != nil { 622 | return err 623 | } 624 | 625 | if err := w.bw.Flush(); err != nil { 626 | return err 627 | } 628 | 629 | return nil 630 | } 631 | 632 | func (w *Writer) closeGz() error { 633 | if w.closed { 634 | return errors.New("write on closed Writer") 635 | } 636 | if w.gz != nil { 637 | if err := w.gz.Close(); err != nil { 638 | return err 639 | } 640 | w.gz = nil 641 | } 642 | return nil 643 | } 644 | 645 | // nameIfChanged returns name, unless it was the already the value of (*mp)[id], 646 | // in which case it returns the empty string. 647 | func (w *Writer) nameIfChanged(mp *map[int]string, id int, name string) string { 648 | if name == "" { 649 | return "" 650 | } 651 | if *mp == nil { 652 | *mp = make(map[int]string) 653 | } 654 | if (*mp)[id] == name { 655 | return "" 656 | } 657 | (*mp)[id] = name 658 | return name 659 | } 660 | 661 | func (w *Writer) condOpenGz() { 662 | if w.gz == nil { 663 | w.gz, _ = gzip.NewWriterLevel(w.cw, gzip.BestCompression) 664 | } 665 | } 666 | 667 | // AppendTar reads the tar or tar.gz file from r and appends 668 | // each of its contents to w. 669 | // 670 | // The input r can optionally be gzip compressed but the output will 671 | // always be gzip compressed. 672 | func (w *Writer) AppendTar(r io.Reader) error { 673 | br := bufio.NewReader(r) 674 | var tr *tar.Reader 675 | if isGzip(br) { 676 | // NewReader can't fail if isGzip returned true. 677 | zr, _ := gzip.NewReader(br) 678 | tr = tar.NewReader(zr) 679 | } else { 680 | tr = tar.NewReader(br) 681 | } 682 | for { 683 | h, err := tr.Next() 684 | if err == io.EOF { 685 | break 686 | } 687 | if err != nil { 688 | return fmt.Errorf("error reading from source tar: tar.Reader.Next: %v", err) 689 | } 690 | if h.Name == TOCTarName { 691 | // It is possible for a layer to be "stargzified" twice during the 692 | // distribution lifecycle. So we reserve "TOCTarName" here to avoid 693 | // duplicated entries in the resulting layer. 694 | continue 695 | } 696 | 697 | var xattrs map[string][]byte 698 | if h.Xattrs != nil { 699 | xattrs = make(map[string][]byte) 700 | for k, v := range h.Xattrs { 701 | xattrs[k] = []byte(v) 702 | } 703 | } 704 | ent := &TOCEntry{ 705 | Name: h.Name, 706 | Mode: h.Mode, 707 | Uid: h.Uid, 708 | Gid: h.Gid, 709 | Uname: w.nameIfChanged(&w.lastUsername, h.Uid, h.Uname), 710 | Gname: w.nameIfChanged(&w.lastGroupname, h.Gid, h.Gname), 711 | ModTime3339: formatModtime(h.ModTime), 712 | Xattrs: xattrs, 713 | } 714 | w.condOpenGz() 715 | tw := tar.NewWriter(currentGzipWriter{w}) 716 | if err := tw.WriteHeader(h); err != nil { 717 | return err 718 | } 719 | switch h.Typeflag { 720 | case tar.TypeLink: 721 | ent.Type = "hardlink" 722 | ent.LinkName = h.Linkname 723 | case tar.TypeSymlink: 724 | ent.Type = "symlink" 725 | ent.LinkName = h.Linkname 726 | case tar.TypeDir: 727 | ent.Type = "dir" 728 | case tar.TypeReg: 729 | ent.Type = "reg" 730 | ent.Size = h.Size 731 | case tar.TypeChar: 732 | ent.Type = "char" 733 | ent.DevMajor = int(h.Devmajor) 734 | ent.DevMinor = int(h.Devminor) 735 | case tar.TypeBlock: 736 | ent.Type = "block" 737 | ent.DevMajor = int(h.Devmajor) 738 | ent.DevMinor = int(h.Devminor) 739 | case tar.TypeFifo: 740 | ent.Type = "fifo" 741 | default: 742 | return fmt.Errorf("unsupported input tar entry %q", h.Typeflag) 743 | } 744 | 745 | // We need to keep a reference to the TOC entry for regular files, so that we 746 | // can fill the digest later. 747 | var regFileEntry *TOCEntry 748 | var payloadDigest hash.Hash 749 | if h.Typeflag == tar.TypeReg { 750 | regFileEntry = ent 751 | payloadDigest = sha256.New() 752 | } 753 | 754 | if h.Typeflag == tar.TypeReg && ent.Size > 0 { 755 | var written int64 756 | totalSize := ent.Size // save it before we destroy ent 757 | tee := io.TeeReader(tr, payloadDigest) 758 | for written < totalSize { 759 | if err := w.closeGz(); err != nil { 760 | return err 761 | } 762 | 763 | chunkSize := int64(w.chunkSize()) 764 | remain := totalSize - written 765 | if remain < chunkSize { 766 | chunkSize = remain 767 | } else { 768 | ent.ChunkSize = chunkSize 769 | } 770 | ent.Offset = w.cw.n 771 | ent.ChunkOffset = written 772 | 773 | w.condOpenGz() 774 | 775 | if _, err := io.CopyN(tw, tee, chunkSize); err != nil { 776 | return fmt.Errorf("error copying %q: %v", h.Name, err) 777 | } 778 | w.toc.Entries = append(w.toc.Entries, ent) 779 | written += chunkSize 780 | ent = &TOCEntry{ 781 | Name: h.Name, 782 | Type: "chunk", 783 | } 784 | } 785 | } else { 786 | w.toc.Entries = append(w.toc.Entries, ent) 787 | } 788 | if payloadDigest != nil { 789 | regFileEntry.Digest = fmt.Sprintf("sha256:%x", payloadDigest.Sum(nil)) 790 | } 791 | if err := tw.Flush(); err != nil { 792 | return err 793 | } 794 | } 795 | return nil 796 | } 797 | 798 | // DiffID returns the SHA-256 of the uncompressed tar bytes. 799 | // It is only valid to call DiffID after Close. 800 | func (w *Writer) DiffID() string { 801 | return fmt.Sprintf("sha256:%x", w.diffHash.Sum(nil)) 802 | } 803 | 804 | // footerBytes the 47 byte footer. 805 | func footerBytes(tocOff int64) []byte { 806 | buf := bytes.NewBuffer(make([]byte, 0, FooterSize)) 807 | gz, _ := gzip.NewWriterLevel(buf, gzip.NoCompression) 808 | gz.Header.Extra = []byte(fmt.Sprintf("%016xSTARGZ", tocOff)) 809 | gz.Close() 810 | if buf.Len() != FooterSize { 811 | panic(fmt.Sprintf("footer buffer = %d, not %d", buf.Len(), FooterSize)) 812 | } 813 | return buf.Bytes() 814 | } 815 | 816 | func parseFooter(p []byte) (tocOffset int64, ok bool) { 817 | if len(p) != FooterSize { 818 | return 0, false 819 | } 820 | zr, err := gzip.NewReader(bytes.NewReader(p)) 821 | if err != nil { 822 | return 0, false 823 | } 824 | extra := zr.Header.Extra 825 | if len(extra) != 16+len("STARGZ") { 826 | return 0, false 827 | } 828 | if string(extra[16:]) != "STARGZ" { 829 | return 0, false 830 | } 831 | tocOffset, err = strconv.ParseInt(string(extra[:16]), 16, 64) 832 | return tocOffset, err == nil 833 | } 834 | 835 | func formatModtime(t time.Time) string { 836 | if t.IsZero() || t.Unix() == 0 { 837 | return "" 838 | } 839 | return t.UTC().Round(time.Second).Format(time.RFC3339) 840 | } 841 | 842 | // countWriter counts how many bytes have been written to its wrapped 843 | // io.Writer. 844 | type countWriter struct { 845 | w io.Writer 846 | n int64 847 | } 848 | 849 | func (cw *countWriter) Write(p []byte) (n int, err error) { 850 | n, err = cw.w.Write(p) 851 | cw.n += int64(n) 852 | return 853 | } 854 | 855 | // isGzip reports whether br is positioned right before an upcoming gzip stream. 856 | // It does not consume any bytes from br. 857 | func isGzip(br *bufio.Reader) bool { 858 | const ( 859 | gzipID1 = 0x1f 860 | gzipID2 = 0x8b 861 | gzipDeflate = 8 862 | ) 863 | peek, _ := br.Peek(3) 864 | return len(peek) >= 3 && peek[0] == gzipID1 && peek[1] == gzipID2 && peek[2] == gzipDeflate 865 | } 866 | -------------------------------------------------------------------------------- /stargz/stargz_test.go: -------------------------------------------------------------------------------- 1 | // Copyright 2019 The Go Authors. All rights reserved. 2 | // Use of this source code is governed by a BSD-style 3 | // license that can be found in the LICENSE file. 4 | 5 | package stargz 6 | 7 | import ( 8 | "archive/tar" 9 | "bytes" 10 | "compress/gzip" 11 | "crypto/sha256" 12 | "encoding/json" 13 | "errors" 14 | "fmt" 15 | "io" 16 | "io/ioutil" 17 | "reflect" 18 | "sort" 19 | "strings" 20 | "testing" 21 | ) 22 | 23 | // Tests 47 byte footer encoding, size, and parsing. 24 | func TestFooter(t *testing.T) { 25 | for off := int64(0); off <= 200000; off += 1023 { 26 | footer := footerBytes(off) 27 | if len(footer) != FooterSize { 28 | t.Fatalf("for offset %v, footer length was %d, not expected %d. got bytes: %q", off, len(footer), FooterSize, footer) 29 | } 30 | got, ok := parseFooter(footer) 31 | if !ok { 32 | t.Fatalf("failed to parse footer for offset %d, footer: %q", off, footer) 33 | } 34 | if got != off { 35 | t.Fatalf("parseFooter(footerBytes(offset %d)) = %d; want %d", off, got, off) 36 | 37 | } 38 | } 39 | } 40 | 41 | func TestWriteAndOpen(t *testing.T) { 42 | const content = "Some contents" 43 | invalidUtf8 := "\xff\xfe\xfd" 44 | 45 | xAttrFile := xAttr{"foo": "bar", "invalid-utf8": invalidUtf8} 46 | sampleOwner := owner{uid: 50, gid: 100} 47 | 48 | tests := []struct { 49 | name string 50 | chunkSize int 51 | in []tarEntry 52 | want []stargzCheck 53 | wantNumGz int // expected number of gzip streams 54 | }{ 55 | { 56 | name: "empty", 57 | in: tarOf(), 58 | wantNumGz: 2, // TOC + footer 59 | want: checks( 60 | numTOCEntries(0), 61 | ), 62 | }, 63 | { 64 | name: "1dir_1empty_file", 65 | in: tarOf( 66 | dir("foo/"), 67 | file("foo/bar.txt", ""), 68 | ), 69 | wantNumGz: 3, // dir, TOC, footer 70 | want: checks( 71 | numTOCEntries(2), 72 | hasDir("foo/"), 73 | hasFileLen("foo/bar.txt", 0), 74 | entryHasChildren("foo", "bar.txt"), 75 | hasFileDigest("foo/bar.txt", digestFor("")), 76 | ), 77 | }, 78 | { 79 | name: "1dir_1file", 80 | in: tarOf( 81 | dir("foo/"), 82 | file("foo/bar.txt", content, xAttrFile), 83 | ), 84 | wantNumGz: 4, // var dir, foo.txt alone, TOC, footer 85 | want: checks( 86 | numTOCEntries(2), 87 | hasDir("foo/"), 88 | hasFileLen("foo/bar.txt", len(content)), 89 | hasFileDigest("foo/bar.txt", digestFor(content)), 90 | hasFileContentsRange("foo/bar.txt", 0, content), 91 | hasFileContentsRange("foo/bar.txt", 1, content[1:]), 92 | entryHasChildren("", "foo"), 93 | entryHasChildren("foo", "bar.txt"), 94 | hasFileXattrs("foo/bar.txt", "foo", "bar"), 95 | hasFileXattrs("foo/bar.txt", "invalid-utf8", invalidUtf8), 96 | ), 97 | }, 98 | { 99 | name: "2meta_2file", 100 | in: tarOf( 101 | dir("bar/", sampleOwner), 102 | dir("foo/", sampleOwner), 103 | file("foo/bar.txt", content, sampleOwner), 104 | ), 105 | wantNumGz: 4, // both dirs, foo.txt alone, TOC, footer 106 | want: checks( 107 | numTOCEntries(3), 108 | hasDir("bar/"), 109 | hasDir("foo/"), 110 | hasFileLen("foo/bar.txt", len(content)), 111 | entryHasChildren("", "bar", "foo"), 112 | entryHasChildren("foo", "bar.txt"), 113 | hasChunkEntries("foo/bar.txt", 1), 114 | hasEntryOwner("bar/", sampleOwner), 115 | hasEntryOwner("foo/", sampleOwner), 116 | hasEntryOwner("foo/bar.txt", sampleOwner), 117 | ), 118 | }, 119 | { 120 | name: "3dir", 121 | in: tarOf( 122 | dir("bar/"), 123 | dir("foo/"), 124 | dir("foo/bar/"), 125 | ), 126 | wantNumGz: 3, // 3 dirs, TOC, footer 127 | want: checks( 128 | hasDirLinkCount("bar/", 2), 129 | hasDirLinkCount("foo/", 3), 130 | hasDirLinkCount("foo/bar/", 2), 131 | ), 132 | }, 133 | { 134 | name: "symlink", 135 | in: tarOf( 136 | dir("foo/"), 137 | symlink("foo/bar", "../../x"), 138 | ), 139 | wantNumGz: 3, // metas + TOC + footer 140 | want: checks( 141 | numTOCEntries(2), 142 | hasSymlink("foo/bar", "../../x"), 143 | entryHasChildren("", "foo"), 144 | entryHasChildren("foo", "bar"), 145 | ), 146 | }, 147 | { 148 | name: "chunked_file", 149 | chunkSize: 4, 150 | in: tarOf( 151 | dir("foo/"), 152 | file("foo/big.txt", "This "+"is s"+"uch "+"a bi"+"g fi"+"le"), 153 | ), 154 | wantNumGz: 9, 155 | want: checks( 156 | numTOCEntries(7), // 1 for foo dir, 6 for the foo/big.txt file 157 | hasDir("foo/"), 158 | hasFileLen("foo/big.txt", len("This is such a big file")), 159 | hasFileDigest("foo/big.txt", digestFor("This is such a big file")), 160 | hasFileContentsRange("foo/big.txt", 0, "This is such a big file"), 161 | hasFileContentsRange("foo/big.txt", 1, "his is such a big file"), 162 | hasFileContentsRange("foo/big.txt", 2, "is is such a big file"), 163 | hasFileContentsRange("foo/big.txt", 3, "s is such a big file"), 164 | hasFileContentsRange("foo/big.txt", 4, " is such a big file"), 165 | hasFileContentsRange("foo/big.txt", 5, "is such a big file"), 166 | hasFileContentsRange("foo/big.txt", 6, "s such a big file"), 167 | hasFileContentsRange("foo/big.txt", 7, " such a big file"), 168 | hasFileContentsRange("foo/big.txt", 8, "such a big file"), 169 | hasFileContentsRange("foo/big.txt", 9, "uch a big file"), 170 | hasFileContentsRange("foo/big.txt", 10, "ch a big file"), 171 | hasFileContentsRange("foo/big.txt", 11, "h a big file"), 172 | hasFileContentsRange("foo/big.txt", 12, " a big file"), 173 | hasFileContentsRange("foo/big.txt", len("This is such a big file")-1, ""), 174 | hasChunkEntries("foo/big.txt", 6), 175 | ), 176 | }, 177 | { 178 | name: "block_char_fifo", 179 | in: tarOf( 180 | tarEntryFunc(func(w *tar.Writer) error { 181 | return w.WriteHeader(&tar.Header{ 182 | Name: "b", 183 | Typeflag: tar.TypeBlock, 184 | Devmajor: 123, 185 | Devminor: 456, 186 | }) 187 | }), 188 | tarEntryFunc(func(w *tar.Writer) error { 189 | return w.WriteHeader(&tar.Header{ 190 | Name: "c", 191 | Typeflag: tar.TypeChar, 192 | Devmajor: 111, 193 | Devminor: 222, 194 | }) 195 | }), 196 | tarEntryFunc(func(w *tar.Writer) error { 197 | return w.WriteHeader(&tar.Header{ 198 | Name: "f", 199 | Typeflag: tar.TypeFifo, 200 | }) 201 | }), 202 | ), 203 | wantNumGz: 3, 204 | want: checks( 205 | lookupMatch("b", &TOCEntry{Name: "b", Type: "block", DevMajor: 123, DevMinor: 456, NumLink: 1}), 206 | lookupMatch("c", &TOCEntry{Name: "c", Type: "char", DevMajor: 111, DevMinor: 222, NumLink: 1}), 207 | lookupMatch("f", &TOCEntry{Name: "f", Type: "fifo", NumLink: 1}), 208 | ), 209 | }, 210 | } 211 | 212 | for _, tt := range tests { 213 | t.Run(tt.name, func(t *testing.T) { 214 | tr, cancel := buildTarGz(t, tt.in) 215 | defer cancel() 216 | var stargzBuf bytes.Buffer 217 | w := NewWriter(&stargzBuf) 218 | w.ChunkSize = tt.chunkSize 219 | if err := w.AppendTar(tr); err != nil { 220 | t.Fatalf("Append: %v", err) 221 | } 222 | if err := w.Close(); err != nil { 223 | t.Fatalf("Writer.Close: %v", err) 224 | } 225 | b := stargzBuf.Bytes() 226 | 227 | diffID := w.DiffID() 228 | wantDiffID := diffIDOfGz(t, b) 229 | if diffID != wantDiffID { 230 | t.Errorf("DiffID = %q; want %q", diffID, wantDiffID) 231 | } 232 | 233 | got := countGzStreams(t, b) 234 | if got != tt.wantNumGz { 235 | t.Errorf("number of gzip streams = %d; want %d", got, tt.wantNumGz) 236 | } 237 | 238 | r, err := Open(io.NewSectionReader(bytes.NewReader(b), 0, int64(len(b)))) 239 | if err != nil { 240 | t.Fatalf("stargz.Open: %v", err) 241 | } 242 | for _, want := range tt.want { 243 | want.check(t, r) 244 | } 245 | 246 | }) 247 | } 248 | } 249 | 250 | func diffIDOfGz(t *testing.T, b []byte) string { 251 | h := sha256.New() 252 | zr, err := gzip.NewReader(bytes.NewReader(b)) 253 | if err != nil { 254 | t.Fatalf("diffIDOfGz: %v", err) 255 | } 256 | if _, err := io.Copy(h, zr); err != nil { 257 | t.Fatalf("diffIDOfGz.Copy: %v", err) 258 | } 259 | return fmt.Sprintf("sha256:%x", h.Sum(nil)) 260 | } 261 | 262 | func countGzStreams(t *testing.T, b []byte) (numStreams int) { 263 | len0 := len(b) 264 | br := bytes.NewReader(b) 265 | zr := new(gzip.Reader) 266 | t.Logf("got gzip streams:") 267 | for { 268 | zoff := len0 - br.Len() 269 | if err := zr.Reset(br); err != nil { 270 | if err == io.EOF { 271 | return 272 | } 273 | t.Fatalf("countGzStreams, Reset: %v", err) 274 | } 275 | zr.Multistream(false) 276 | n, err := io.Copy(ioutil.Discard, zr) 277 | if err != nil { 278 | t.Fatalf("countGzStreams, Copy: %v", err) 279 | } 280 | var extra string 281 | if len(zr.Header.Extra) > 0 { 282 | extra = fmt.Sprintf("; extra=%q", zr.Header.Extra) 283 | } 284 | t.Logf(" [%d] at %d in stargz, uncompressed length %d%s", numStreams, zoff, n, extra) 285 | numStreams++ 286 | } 287 | } 288 | 289 | func digestFor(content string) string { 290 | sum := sha256.Sum256([]byte(content)) 291 | return fmt.Sprintf("sha256:%x", sum) 292 | } 293 | 294 | type numTOCEntries int 295 | 296 | func (n numTOCEntries) check(t *testing.T, r *Reader) { 297 | if r.toc == nil { 298 | t.Fatal("nil TOC") 299 | } 300 | if got, want := len(r.toc.Entries), int(n); got != want { 301 | t.Errorf("got %d TOC entries; want %d", got, want) 302 | } 303 | t.Logf("got TOC entries:") 304 | for i, ent := range r.toc.Entries { 305 | entj, _ := json.Marshal(ent) 306 | t.Logf(" [%d]: %s\n", i, entj) 307 | } 308 | if t.Failed() { 309 | t.FailNow() 310 | } 311 | } 312 | 313 | func tarOf(s ...tarEntry) []tarEntry { return s } 314 | 315 | func checks(s ...stargzCheck) []stargzCheck { return s } 316 | 317 | type stargzCheck interface { 318 | check(t *testing.T, r *Reader) 319 | } 320 | 321 | type stargzCheckFn func(*testing.T, *Reader) 322 | 323 | func (f stargzCheckFn) check(t *testing.T, r *Reader) { f(t, r) } 324 | 325 | func hasFileLen(file string, wantLen int) stargzCheck { 326 | return stargzCheckFn(func(t *testing.T, r *Reader) { 327 | for _, ent := range r.toc.Entries { 328 | if ent.Name == file { 329 | if ent.Type != "reg" { 330 | t.Errorf("file type of %q is %q; want \"reg\"", file, ent.Type) 331 | } else if ent.Size != int64(wantLen) { 332 | t.Errorf("file size of %q = %d; want %d", file, ent.Size, wantLen) 333 | } 334 | return 335 | } 336 | } 337 | t.Errorf("file %q not found", file) 338 | }) 339 | } 340 | 341 | func hasFileXattrs(file, name, value string) stargzCheck { 342 | return stargzCheckFn(func(t *testing.T, r *Reader) { 343 | for _, ent := range r.toc.Entries { 344 | if ent.Name == file { 345 | if ent.Type != "reg" { 346 | t.Errorf("file type of %q is %q; want \"reg\"", file, ent.Type) 347 | } 348 | if ent.Xattrs == nil { 349 | t.Errorf("file %q has no xattrs", file) 350 | return 351 | } 352 | valueFound, found := ent.Xattrs[name] 353 | if !found { 354 | t.Errorf("file %q has no xattr %q", file, name) 355 | return 356 | } 357 | if string(valueFound) != value { 358 | t.Errorf("file %q has xattr %q with value %q instead of %q", file, name, valueFound, value) 359 | } 360 | 361 | return 362 | } 363 | } 364 | t.Errorf("file %q not found", file) 365 | }) 366 | } 367 | 368 | func hasFileDigest(file string, digest string) stargzCheck { 369 | return stargzCheckFn(func(t *testing.T, r *Reader) { 370 | ent, ok := r.Lookup(file) 371 | if !ok { 372 | t.Fatalf("didn't find TOCEntry for file %q", file) 373 | } 374 | if ent.Digest != digest { 375 | t.Fatalf("Digest(%q) = %q, want %q", file, ent.Digest, digest) 376 | } 377 | }) 378 | } 379 | 380 | func hasFileContentsRange(file string, offset int, want string) stargzCheck { 381 | return stargzCheckFn(func(t *testing.T, r *Reader) { 382 | f, err := r.OpenFile(file) 383 | if err != nil { 384 | t.Fatal(err) 385 | } 386 | got := make([]byte, len(want)) 387 | n, err := f.ReadAt(got, int64(offset)) 388 | if err != nil { 389 | t.Fatalf("ReadAt(len %d, offset %d) = %v, %v", len(got), offset, n, err) 390 | } 391 | if string(got) != want { 392 | t.Fatalf("ReadAt(len %d, offset %d) = %q, want %q", len(got), offset, got, want) 393 | } 394 | }) 395 | } 396 | 397 | func hasChunkEntries(file string, wantChunks int) stargzCheck { 398 | return stargzCheckFn(func(t *testing.T, r *Reader) { 399 | ent, ok := r.Lookup(file) 400 | if !ok { 401 | t.Fatalf("no file for %q", file) 402 | } 403 | if ent.Type != "reg" { 404 | t.Fatalf("file %q has unexpected type %q; want reg", file, ent.Type) 405 | } 406 | chunks := r.getChunks(ent) 407 | if len(chunks) != wantChunks { 408 | t.Errorf("len(r.getChunks(%q)) = %d; want %d", file, len(chunks), wantChunks) 409 | return 410 | } 411 | f := chunks[0] 412 | 413 | var gotChunks []*TOCEntry 414 | var last *TOCEntry 415 | for off := int64(0); off < f.Size; off++ { 416 | e, ok := r.ChunkEntryForOffset(file, off) 417 | if !ok { 418 | t.Errorf("no ChunkEntryForOffset at %d", off) 419 | return 420 | } 421 | if last != e { 422 | gotChunks = append(gotChunks, e) 423 | last = e 424 | } 425 | } 426 | if !reflect.DeepEqual(chunks, gotChunks) { 427 | t.Errorf("gotChunks=%d, want=%d; contents mismatch", len(gotChunks), wantChunks) 428 | } 429 | 430 | // And verify the NextOffset 431 | for i := 0; i < len(gotChunks)-1; i++ { 432 | ci := gotChunks[i] 433 | cnext := gotChunks[i+1] 434 | if ci.NextOffset() != cnext.Offset { 435 | t.Errorf("chunk %d NextOffset %d != next chunk's Offset of %d", i, ci.NextOffset(), cnext.Offset) 436 | } 437 | } 438 | }) 439 | } 440 | 441 | func entryHasChildren(dir string, want ...string) stargzCheck { 442 | return stargzCheckFn(func(t *testing.T, r *Reader) { 443 | want := append([]string(nil), want...) 444 | var got []string 445 | ent, ok := r.Lookup(dir) 446 | if !ok { 447 | t.Fatalf("didn't find TOCEntry for dir node %q", dir) 448 | } 449 | for baseName := range ent.children { 450 | got = append(got, baseName) 451 | } 452 | sort.Strings(got) 453 | sort.Strings(want) 454 | if !reflect.DeepEqual(got, want) { 455 | t.Errorf("children of %q = %q; want %q", dir, got, want) 456 | } 457 | }) 458 | } 459 | 460 | func hasDir(file string) stargzCheck { 461 | return stargzCheckFn(func(t *testing.T, r *Reader) { 462 | for _, ent := range r.toc.Entries { 463 | if ent.Name == file { 464 | if ent.Type != "dir" { 465 | t.Errorf("file type of %q is %q; want \"dir\"", file, ent.Type) 466 | } 467 | return 468 | } 469 | } 470 | t.Errorf("directory %q not found", file) 471 | }) 472 | } 473 | 474 | func hasDirLinkCount(file string, count int) stargzCheck { 475 | return stargzCheckFn(func(t *testing.T, r *Reader) { 476 | for _, ent := range r.toc.Entries { 477 | if ent.Name == file { 478 | if ent.Type != "dir" { 479 | t.Errorf("file type of %q is %q; want \"dir\"", file, ent.Type) 480 | return 481 | } 482 | if ent.NumLink != count { 483 | t.Errorf("link count of %q = %d; want %d", file, ent.NumLink, count) 484 | } 485 | return 486 | } 487 | } 488 | t.Errorf("directory %q not found", file) 489 | }) 490 | } 491 | 492 | func hasSymlink(file, target string) stargzCheck { 493 | return stargzCheckFn(func(t *testing.T, r *Reader) { 494 | for _, ent := range r.toc.Entries { 495 | if ent.Name == file { 496 | if ent.Type != "symlink" { 497 | t.Errorf("file type of %q is %q; want \"symlink\"", file, ent.Type) 498 | } else if ent.LinkName != target { 499 | t.Errorf("link target of symlink %q is %q; want %q", file, ent.LinkName, target) 500 | } 501 | return 502 | } 503 | } 504 | t.Errorf("symlink %q not found", file) 505 | }) 506 | } 507 | 508 | func lookupMatch(name string, want *TOCEntry) stargzCheck { 509 | return stargzCheckFn(func(t *testing.T, r *Reader) { 510 | e, ok := r.Lookup(name) 511 | if !ok { 512 | t.Fatalf("failed to Lookup entry %q", name) 513 | } 514 | if !reflect.DeepEqual(e, want) { 515 | t.Errorf("entry %q mismatch.\n got: %+v\nwant: %+v\n", name, e, want) 516 | } 517 | 518 | }) 519 | } 520 | 521 | func hasEntryOwner(entry string, owner owner) stargzCheck { 522 | return stargzCheckFn(func(t *testing.T, r *Reader) { 523 | ent, ok := r.Lookup(strings.TrimSuffix(entry, "/")) 524 | if !ok { 525 | t.Errorf("entry %q not found", entry) 526 | return 527 | } 528 | if ent.Uid != owner.uid || ent.Gid != owner.gid { 529 | t.Errorf("entry %q has invalid owner (uid:%d, gid:%d) instead of (uid:%d, gid:%d)", entry, ent.Uid, ent.Gid, owner.uid, owner.gid) 530 | return 531 | } 532 | }) 533 | } 534 | 535 | type tarEntry interface { 536 | appendTar(*tar.Writer) error 537 | } 538 | 539 | type tarEntryFunc func(*tar.Writer) error 540 | 541 | func (f tarEntryFunc) appendTar(tw *tar.Writer) error { return f(tw) } 542 | 543 | func buildTarGz(t *testing.T, ents []tarEntry) (r io.Reader, cancel func()) { 544 | pr, pw := io.Pipe() 545 | go func() { 546 | tw := tar.NewWriter(pw) 547 | for _, ent := range ents { 548 | if err := ent.appendTar(tw); err != nil { 549 | t.Errorf("building input tar: %v", err) 550 | pw.Close() 551 | return 552 | } 553 | } 554 | if err := tw.Close(); err != nil { 555 | t.Errorf("closing write of input tar: %v", err) 556 | } 557 | pw.Close() 558 | return 559 | }() 560 | return pr, func() { go pr.Close(); go pw.Close() } 561 | } 562 | 563 | func dir(d string, opts ...interface{}) tarEntry { 564 | return tarEntryFunc(func(tw *tar.Writer) error { 565 | var o owner 566 | for _, opt := range opts { 567 | if v, ok := opt.(owner); ok { 568 | o = v 569 | } else { 570 | return errors.New("unsupported opt") 571 | } 572 | } 573 | name := string(d) 574 | if !strings.HasSuffix(name, "/") { 575 | panic(fmt.Sprintf("missing trailing slash in dir %q ", name)) 576 | } 577 | return tw.WriteHeader(&tar.Header{ 578 | Typeflag: tar.TypeDir, 579 | Name: name, 580 | Mode: 0755, 581 | Uid: o.uid, 582 | Gid: o.gid, 583 | }) 584 | }) 585 | } 586 | 587 | // xAttr are extended attributes to set on test files created with the file func. 588 | type xAttr map[string]string 589 | 590 | // owner is owner ot set on test files and directories with the file and dir functions. 591 | type owner struct { 592 | uid int 593 | gid int 594 | } 595 | 596 | func file(name, contents string, opts ...interface{}) tarEntry { 597 | return tarEntryFunc(func(tw *tar.Writer) error { 598 | var xattrs xAttr 599 | var o owner 600 | for _, opt := range opts { 601 | switch v := opt.(type) { 602 | case xAttr: 603 | xattrs = v 604 | case owner: 605 | o = v 606 | default: 607 | return errors.New("unsupported opt") 608 | } 609 | } 610 | if strings.HasSuffix(name, "/") { 611 | return fmt.Errorf("bogus trailing slash in file %q", name) 612 | } 613 | if err := tw.WriteHeader(&tar.Header{ 614 | Typeflag: tar.TypeReg, 615 | Name: name, 616 | Mode: 0644, 617 | Xattrs: xattrs, 618 | Size: int64(len(contents)), 619 | Uid: o.uid, 620 | Gid: o.gid, 621 | }); err != nil { 622 | return err 623 | } 624 | _, err := io.WriteString(tw, contents) 625 | return err 626 | }) 627 | } 628 | 629 | func symlink(name, target string) tarEntry { 630 | return tarEntryFunc(func(tw *tar.Writer) error { 631 | return tw.WriteHeader(&tar.Header{ 632 | Typeflag: tar.TypeSymlink, 633 | Name: name, 634 | Linkname: target, 635 | Mode: 0644, 636 | }) 637 | }) 638 | } 639 | 640 | // Tests *Reader.ChunkEntryForOffset about offset and size calculation. 641 | func TestChunkEntryForOffset(t *testing.T) { 642 | const chunkSize = 4 643 | tests := []struct { 644 | name string 645 | fileSize int64 646 | reqOffset int64 647 | wantOk bool 648 | wantChunkOffset int64 649 | wantChunkSize int64 650 | }{ 651 | { 652 | name: "1st_chunk_in_1_chunk_reg", 653 | fileSize: chunkSize * 1, 654 | reqOffset: chunkSize * 0, 655 | wantChunkOffset: chunkSize * 0, 656 | wantChunkSize: chunkSize, 657 | wantOk: true, 658 | }, 659 | { 660 | name: "2nd_chunk_in_1_chunk_reg", 661 | fileSize: chunkSize * 1, 662 | reqOffset: chunkSize * 1, 663 | wantOk: false, 664 | }, 665 | { 666 | name: "1st_chunk_in_2_chunks_reg", 667 | fileSize: chunkSize * 2, 668 | reqOffset: chunkSize * 0, 669 | wantChunkOffset: chunkSize * 0, 670 | wantChunkSize: chunkSize, 671 | wantOk: true, 672 | }, 673 | { 674 | name: "2nd_chunk_in_2_chunks_reg", 675 | fileSize: chunkSize * 2, 676 | reqOffset: chunkSize * 1, 677 | wantChunkOffset: chunkSize * 1, 678 | wantChunkSize: chunkSize, 679 | wantOk: true, 680 | }, 681 | { 682 | name: "3rd_chunk_in_2_chunks_reg", 683 | fileSize: chunkSize * 2, 684 | reqOffset: chunkSize * 2, 685 | wantOk: false, 686 | }, 687 | } 688 | 689 | for _, te := range tests { 690 | t.Run(te.name, func(t *testing.T) { 691 | name := "test" 692 | _, r := regularFileReader(name, te.fileSize, chunkSize) 693 | ce, ok := r.ChunkEntryForOffset(name, te.reqOffset) 694 | if ok != te.wantOk { 695 | t.Errorf("ok = %v; want (%v)", ok, te.wantOk) 696 | } else if ok { 697 | if !(ce.ChunkOffset == te.wantChunkOffset && ce.ChunkSize == te.wantChunkSize) { 698 | t.Errorf("chunkOffset = %d, ChunkSize = %d; want (chunkOffset = %d, chunkSize = %d)", 699 | ce.ChunkOffset, ce.ChunkSize, te.wantChunkOffset, te.wantChunkSize) 700 | } 701 | } 702 | }) 703 | } 704 | } 705 | 706 | // regularFileReader makes a minimal Reader of "reg" and "chunk" without tar-related information. 707 | func regularFileReader(name string, size int64, chunkSize int64) (*TOCEntry, *Reader) { 708 | ent := &TOCEntry{ 709 | Name: name, 710 | Type: "reg", 711 | } 712 | m := ent 713 | chunks := make([]*TOCEntry, 0, size/chunkSize+1) 714 | var written int64 715 | for written < size { 716 | remain := size - written 717 | cs := chunkSize 718 | if remain < cs { 719 | cs = remain 720 | } 721 | ent.ChunkSize = cs 722 | ent.ChunkOffset = written 723 | chunks = append(chunks, ent) 724 | written += cs 725 | ent = &TOCEntry{ 726 | Name: name, 727 | Type: "chunk", 728 | } 729 | } 730 | 731 | if len(chunks) == 1 { 732 | chunks = nil 733 | } 734 | return m, &Reader{ 735 | m: map[string]*TOCEntry{name: m}, 736 | chunks: map[string][]*TOCEntry{name: chunks}, 737 | } 738 | } 739 | -------------------------------------------------------------------------------- /stargz/stargzify/stargzify.go: -------------------------------------------------------------------------------- 1 | // Copyright 2019 The Go Authors. All rights reserved. 2 | // Use of this source code is governed by a BSD-style 3 | // license that can be found in the LICENSE file. 4 | 5 | // The stargzify command converts a remote container image into an equivalent 6 | // image with its layers transformed into stargz files instead of gzipped tar 7 | // files. The image is still a valid container image, but its layers contain 8 | // multiple gzip streams instead of one and have a Table of Contents at the end. 9 | package main 10 | 11 | import ( 12 | "crypto/sha256" 13 | "encoding/hex" 14 | "flag" 15 | "fmt" 16 | "hash" 17 | "io" 18 | "io/ioutil" 19 | "log" 20 | "net/http" 21 | "os" 22 | "strings" 23 | 24 | "github.com/google/crfs/stargz" 25 | "github.com/google/go-containerregistry/pkg/authn" 26 | "github.com/google/go-containerregistry/pkg/logs" 27 | "github.com/google/go-containerregistry/pkg/name" 28 | v1 "github.com/google/go-containerregistry/pkg/v1" 29 | "github.com/google/go-containerregistry/pkg/v1/empty" 30 | "github.com/google/go-containerregistry/pkg/v1/mutate" 31 | "github.com/google/go-containerregistry/pkg/v1/remote" 32 | "github.com/google/go-containerregistry/pkg/v1/stream" 33 | "github.com/google/go-containerregistry/pkg/v1/types" 34 | ) 35 | 36 | var ( 37 | upgrade = flag.Bool("upgrade", false, "upgrade the image in-place by overwriting the tag") 38 | flatten = flag.Bool("flatten", false, "flatten the image's layers into a single layer") 39 | insecure = flag.Bool("insecure", false, "allow HTTP connections to the registry which has the prefix \"http://\"") 40 | 41 | usage = `usage: %[1]s [-upgrade] [-flatten] input [output] 42 | 43 | Converting images: 44 | # converts "ubuntu" from dockerhub and uploads to your GCR project 45 | %[1]s ubuntu gcr.io//ubuntu:stargz 46 | 47 | # converts and overwrites :latest 48 | %[1]s -upgrade gcr.io//ubuntu:latest 49 | 50 | # converts and flattens "ubuntu" 51 | %[1]s -flatten ubuntu gcr.io//ubuntu:flattened 52 | 53 | # converts "ubuntu" from dockerhub and uploads to your registry using HTTP 54 | %[1]s -insecure ubuntu http://registry:5000//ubuntu:stargz 55 | 56 | Converting files: 57 | %[1]s file:/tmp/input.tar.gz file:output.stargz 58 | 59 | # writes to /tmp/input.stargz 60 | %[1]s file:/tmp/input.tar.gz 61 | ` 62 | ) 63 | 64 | func main() { 65 | flag.Parse() 66 | if len(flag.Args()) < 1 { 67 | printUsage() 68 | } 69 | 70 | // Set up logs package to get useful messages i.e. progress. 71 | logs.Warn.SetOutput(os.Stderr) 72 | logs.Progress.SetOutput(os.Stderr) 73 | 74 | if strings.HasPrefix(flag.Args()[0], "file:") { 75 | // We'll use "file:" prefix as a signal to convert single files. 76 | convertFile() 77 | } else { 78 | convertImage() 79 | } 80 | } 81 | 82 | func printUsage() { 83 | log.Fatalf(usage, os.Args[0]) 84 | } 85 | 86 | func convertFile() { 87 | var in, out string 88 | if len(flag.Args()) > 0 { 89 | in = strings.TrimPrefix(flag.Args()[0], "file:") 90 | } 91 | if len(flag.Args()) > 1 { 92 | out = strings.TrimPrefix(flag.Args()[1], "file:") 93 | } 94 | 95 | var f, fo *os.File // file in, file out 96 | var err error 97 | switch in { 98 | case "": 99 | printUsage() 100 | case "-": 101 | f = os.Stdin 102 | default: 103 | f, err = os.Open(in) 104 | if err != nil { 105 | log.Fatal(err) 106 | } 107 | } 108 | defer f.Close() 109 | 110 | if out == "" { 111 | if in == "-" { 112 | out = "-" 113 | } else { 114 | base := strings.TrimSuffix(in, ".gz") 115 | base = strings.TrimSuffix(base, ".tgz") 116 | base = strings.TrimSuffix(base, ".tar") 117 | out = base + ".stargz" 118 | } 119 | } 120 | if out == "-" { 121 | fo = os.Stdout 122 | } else { 123 | fo, err = os.Create(out) 124 | if err != nil { 125 | log.Fatal(err) 126 | } 127 | } 128 | w := stargz.NewWriter(fo) 129 | if err := w.AppendTar(f); err != nil { 130 | log.Fatal(err) 131 | } 132 | if err := w.Close(); err != nil { 133 | log.Fatal(err) 134 | } 135 | if err := fo.Close(); err != nil { 136 | log.Fatal(err) 137 | } 138 | } 139 | 140 | func parseFlags(args []string) (string, string) { 141 | if len(args) < 1 { 142 | printUsage() 143 | } 144 | 145 | var src, dst string 146 | src = args[0] 147 | 148 | if len(args) < 2 { 149 | if *upgrade { 150 | dst = src 151 | } else { 152 | printUsage() 153 | } 154 | } else if len(args) == 2 { 155 | if *upgrade { 156 | log.Println("expected one argument with -upgrade") 157 | printUsage() 158 | } else { 159 | dst = args[1] 160 | } 161 | } else { 162 | log.Println("too many arguments") 163 | printUsage() 164 | } 165 | 166 | return src, dst 167 | } 168 | 169 | func convertImage() { 170 | src, dst := parseFlags(flag.Args()) 171 | 172 | srcRef, err := parseReference(src) 173 | if err != nil { 174 | log.Fatal(err) 175 | } 176 | 177 | // Pull source image. 178 | srcImg, err := remote.Image(srcRef, remote.WithAuthFromKeychain(authn.DefaultKeychain)) 179 | if err != nil { 180 | log.Fatal(err) 181 | } 182 | 183 | // Grab original config, clear the layer info from the config file. We want to 184 | // preserve the relevant config. 185 | srcCfg, err := srcImg.ConfigFile() 186 | if err != nil { 187 | log.Fatal(err) 188 | } 189 | srcCfg.RootFS.DiffIDs = []v1.Hash{} 190 | srcCfg.History = []v1.History{} 191 | 192 | // Use an empty image with the rest of src's config file as a base. 193 | img, err := mutate.ConfigFile(empty.Image, srcCfg) 194 | if err != nil { 195 | log.Fatal(err) 196 | } 197 | 198 | layers, err := convertLayers(srcImg) 199 | if err != nil { 200 | log.Fatal(err) 201 | } 202 | 203 | for _, layer := range layers { 204 | img, err = mutate.Append(img, mutate.Addendum{ 205 | Layer: layer, 206 | History: v1.History{ 207 | // Leave our mark. 208 | CreatedBy: fmt.Sprintf("stargzify %s %s", src, dst), 209 | }, 210 | }) 211 | if err != nil { 212 | log.Fatal(err) 213 | } 214 | } 215 | 216 | // Push the stargzified image to dst. 217 | dstRef, err := parseReference(dst) 218 | if err != nil { 219 | log.Fatal(err) 220 | } 221 | dstAuth, err := authn.DefaultKeychain.Resolve(dstRef.Context().Registry) 222 | if err != nil { 223 | log.Fatal(err) 224 | } 225 | 226 | if err := remote.Write(dstRef, img, remote.WithAuth(dstAuth), remote.WithTransport(http.DefaultTransport)); err != nil { 227 | log.Fatal(err) 228 | } 229 | } 230 | 231 | func convertLayers(img v1.Image) ([]v1.Layer, error) { 232 | if *flatten { 233 | r := mutate.Extract(img) 234 | return []v1.Layer{newLayer(r)}, nil 235 | } 236 | 237 | layers, err := img.Layers() 238 | if err != nil { 239 | return nil, err 240 | } 241 | 242 | converted := []v1.Layer{} 243 | for _, layer := range layers { 244 | r, err := layer.Uncompressed() 245 | if err != nil { 246 | return nil, err 247 | } 248 | converted = append(converted, newLayer(r)) 249 | } 250 | 251 | return converted, nil 252 | } 253 | 254 | type layer struct { 255 | rc io.ReadCloser 256 | d *digester 257 | diff *v1.Hash 258 | digest *v1.Hash 259 | } 260 | 261 | // parseReference is like go-containerregistry/pkg/name.ParseReference but additionally 262 | // supports the reference starting with "http://" to mean insecure. 263 | func parseReference(ref string) (name.Reference, error) { 264 | var opts []name.Option 265 | if strings.HasPrefix(ref, "http://") { 266 | if !*insecure { 267 | return nil, fmt.Errorf("-insecure flag required when connecting using HTTP to %q", ref) 268 | } 269 | ref = strings.TrimPrefix(ref, "http://") 270 | opts = append(opts, name.Insecure) 271 | } 272 | return name.ParseReference(ref, opts...) 273 | } 274 | 275 | // newLayer converts the given io.ReadCloser to a stargz layer. 276 | func newLayer(rc io.ReadCloser) v1.Layer { 277 | return &layer{ 278 | rc: rc, 279 | d: &digester{ 280 | h: sha256.New(), 281 | }, 282 | } 283 | } 284 | 285 | func (l *layer) Digest() (v1.Hash, error) { 286 | if l.digest == nil { 287 | return v1.Hash{}, stream.ErrNotComputed 288 | } 289 | return *l.digest, nil 290 | } 291 | 292 | func (l *layer) Size() (int64, error) { 293 | if l.digest == nil { 294 | return -1, stream.ErrNotComputed 295 | } 296 | return l.d.n, nil 297 | } 298 | 299 | func (l *layer) DiffID() (v1.Hash, error) { 300 | if l.diff == nil { 301 | return v1.Hash{}, stream.ErrNotComputed 302 | } 303 | return *l.diff, nil 304 | } 305 | 306 | func (l *layer) MediaType() (types.MediaType, error) { 307 | // TODO: We might want to set our own media type to indicate stargz layers, 308 | // but that has the potential to break registry compatibility. 309 | return types.DockerLayer, nil 310 | } 311 | 312 | func (l *layer) Compressed() (io.ReadCloser, error) { 313 | pr, pw := io.Pipe() 314 | 315 | // Convert input blob to stargz while computing diffid, digest, and size. 316 | go func() { 317 | w := stargz.NewWriter(io.MultiWriter(pw, l.d)) 318 | if err := w.AppendTar(l.rc); err != nil { 319 | pw.CloseWithError(err) 320 | return 321 | } 322 | if err := w.Close(); err != nil { 323 | pw.CloseWithError(err) 324 | return 325 | } 326 | diffid, err := v1.NewHash(w.DiffID()) 327 | if err != nil { 328 | pw.CloseWithError(err) 329 | return 330 | } 331 | l.diff = &diffid 332 | l.digest = &v1.Hash{ 333 | Algorithm: "sha256", 334 | Hex: hex.EncodeToString(l.d.h.Sum(nil)), 335 | } 336 | pw.Close() 337 | }() 338 | 339 | return ioutil.NopCloser(pr), nil 340 | } 341 | 342 | func (l *layer) Uncompressed() (io.ReadCloser, error) { 343 | return l.rc, nil 344 | } 345 | 346 | // digester tracks the sha256 and length of what is written to it. 347 | type digester struct { 348 | h hash.Hash 349 | n int64 350 | } 351 | 352 | func (d *digester) Write(b []byte) (int, error) { 353 | n, err := d.h.Write(b) 354 | d.n += int64(n) 355 | return n, err 356 | } 357 | --------------------------------------------------------------------------------