├── .github ├── ISSUE_TEMPLATE │ ├── config.yml │ └── open_an_issue.md ├── config.yml ├── dependabot.yml └── workflows │ ├── generated-pr.yml │ ├── go-check.yml │ ├── go-test-config.json │ ├── go-test.yml │ ├── release-check.yml │ ├── releaser.yml │ ├── stale.yml │ └── tagpush.yml ├── .gx └── lastpubver ├── LICENSE ├── README.md ├── codecov.yml ├── datastore.go ├── ds_test.go ├── go.mod ├── go.sum └── version.json /.github/ISSUE_TEMPLATE/config.yml: -------------------------------------------------------------------------------- 1 | blank_issues_enabled: false 2 | contact_links: 3 | - name: Getting Help on IPFS 4 | url: https://ipfs.io/help 5 | about: All information about how and where to get help on IPFS. 6 | - name: IPFS Official Forum 7 | url: https://discuss.ipfs.io 8 | about: Please post general questions, support requests, and discussions here. 9 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/open_an_issue.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Open an issue 3 | about: Only for actionable issues relevant to this repository. 4 | title: '' 5 | labels: need/triage 6 | assignees: '' 7 | 8 | --- 9 | 20 | -------------------------------------------------------------------------------- /.github/config.yml: -------------------------------------------------------------------------------- 1 | # Configuration for welcome - https://github.com/behaviorbot/welcome 2 | 3 | # Configuration for new-issue-welcome - https://github.com/behaviorbot/new-issue-welcome 4 | # Comment to be posted on first-time issues 5 | newIssueWelcomeComment: > 6 | Thank you for submitting your first issue to this repository! A maintainer 7 | will be here shortly to triage and review. 8 | 9 | In the meantime, please double-check that you have provided all the 10 | necessary information to make this process easy! 
Any information that can 11 | help save additional round trips is useful! We currently aim to give 12 | initial feedback within **two business days**. If this does not happen, feel 13 | free to leave a comment. 14 | 15 | Please keep an eye on how this issue will be labeled, as labels give an 16 | overview of priorities, assignments and additional actions requested by the 17 | maintainers: 18 | 19 | - "Priority" labels will show how urgent this is for the team. 20 | - "Status" labels will show if this is ready to be worked on, blocked, or in progress. 21 | - "Need" labels will indicate if additional input or analysis is required. 22 | 23 | Finally, remember to use https://discuss.ipfs.io if you just need general 24 | support. 25 | 26 | # Configuration for new-pr-welcome - https://github.com/behaviorbot/new-pr-welcome 27 | # Comment to be posted on PRs from first-time contributors in your repository 28 | newPRWelcomeComment: > 29 | Thank you for submitting this PR! 30 | 31 | A maintainer will be here shortly to review it. 32 | 33 | We are super grateful, but we are also overloaded! Help us by making sure 34 | that: 35 | 36 | * The context for this PR is clear, with relevant discussion, decisions 37 | and stakeholders linked/mentioned. 38 | 39 | * Your contribution itself is clear (code comments, self-review for the 40 | rest) and in its best form. Follow the [code contribution 41 | guidelines](https://github.com/ipfs/community/blob/master/CONTRIBUTING.md#code-contribution-guidelines) 42 | if they apply. 43 | 44 | Getting other community members to do a review would be a great help too on 45 | complex PRs (you can ask in the chats/forums). If you are unsure about 46 | something, just leave us a comment. 47 | 48 | Next steps: 49 | 50 | * A maintainer will triage and assign priority to this PR, commenting on 51 | any missing things and potentially assigning a reviewer for high 52 | priority items. 53 | 54 | * The PR gets reviewed, discussed and approved as needed. 
55 | 56 | * The PR is merged by maintainers when it has been approved and comments addressed. 57 | 58 | We currently aim to provide initial feedback/triaging within **two business 59 | days**. Please keep an eye on any labelling actions, as these will indicate 60 | priorities and status of your contribution. 61 | 62 | We are very grateful for your contribution! 63 | 64 | 65 | # Configuration for first-pr-merge - https://github.com/behaviorbot/first-pr-merge 66 | # Comment to be posted to on pull requests merged by a first time user 67 | # Currently disabled 68 | #firstPRMergeComment: "" 69 | -------------------------------------------------------------------------------- /.github/dependabot.yml: -------------------------------------------------------------------------------- 1 | version: 2 2 | updates: 3 | - package-ecosystem: gomod 4 | directory: "/" 5 | schedule: 6 | interval: weekly 7 | time: "11:00" 8 | open-pull-requests-limit: 10 9 | ignore: 10 | - dependency-name: github.com/ipfs/go-log/v2 11 | versions: 12 | - 2.1.1 13 | - 2.1.2 14 | -------------------------------------------------------------------------------- /.github/workflows/generated-pr.yml: -------------------------------------------------------------------------------- 1 | name: Close Generated PRs 2 | 3 | on: 4 | schedule: 5 | - cron: '0 0 * * *' 6 | workflow_dispatch: 7 | 8 | permissions: 9 | issues: write 10 | pull-requests: write 11 | 12 | jobs: 13 | stale: 14 | uses: ipdxco/unified-github-workflows/.github/workflows/reusable-generated-pr.yml@v1 15 | -------------------------------------------------------------------------------- /.github/workflows/go-check.yml: -------------------------------------------------------------------------------- 1 | name: Go Checks 2 | 3 | on: 4 | pull_request: 5 | push: 6 | branches: ["master"] 7 | workflow_dispatch: 8 | 9 | permissions: 10 | contents: read 11 | 12 | concurrency: 13 | group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.event_name == 
'push' && github.sha || github.ref }} 14 | cancel-in-progress: true 15 | 16 | jobs: 17 | go-check: 18 | uses: ipdxco/unified-github-workflows/.github/workflows/go-check.yml@v1.0 19 | -------------------------------------------------------------------------------- /.github/workflows/go-test-config.json: -------------------------------------------------------------------------------- 1 | { 2 | "shuffle": false, 3 | "skipOSes": ["windows"] 4 | } 5 | -------------------------------------------------------------------------------- /.github/workflows/go-test.yml: -------------------------------------------------------------------------------- 1 | name: Go Test 2 | 3 | on: 4 | pull_request: 5 | push: 6 | branches: ["master"] 7 | workflow_dispatch: 8 | 9 | permissions: 10 | contents: read 11 | 12 | concurrency: 13 | group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.event_name == 'push' && github.sha || github.ref }} 14 | cancel-in-progress: true 15 | 16 | jobs: 17 | go-test: 18 | uses: ipdxco/unified-github-workflows/.github/workflows/go-test.yml@v1.0 19 | secrets: 20 | CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }} 21 | -------------------------------------------------------------------------------- /.github/workflows/release-check.yml: -------------------------------------------------------------------------------- 1 | name: Release Checker 2 | 3 | on: 4 | pull_request_target: 5 | paths: [ 'version.json' ] 6 | types: [ opened, synchronize, reopened, labeled, unlabeled ] 7 | workflow_dispatch: 8 | 9 | permissions: 10 | contents: write 11 | pull-requests: write 12 | 13 | concurrency: 14 | group: ${{ github.workflow }}-${{ github.ref }} 15 | cancel-in-progress: true 16 | 17 | jobs: 18 | release-check: 19 | uses: ipdxco/unified-github-workflows/.github/workflows/release-check.yml@v1.0 20 | -------------------------------------------------------------------------------- /.github/workflows/releaser.yml: 
-------------------------------------------------------------------------------- 1 | name: Releaser 2 | 3 | on: 4 | push: 5 | paths: [ 'version.json' ] 6 | workflow_dispatch: 7 | 8 | permissions: 9 | contents: write 10 | 11 | concurrency: 12 | group: ${{ github.workflow }}-${{ github.sha }} 13 | cancel-in-progress: true 14 | 15 | jobs: 16 | releaser: 17 | uses: ipdxco/unified-github-workflows/.github/workflows/releaser.yml@v1.0 18 | -------------------------------------------------------------------------------- /.github/workflows/stale.yml: -------------------------------------------------------------------------------- 1 | name: Close Stale Issues 2 | 3 | on: 4 | schedule: 5 | - cron: '0 0 * * *' 6 | workflow_dispatch: 7 | 8 | permissions: 9 | issues: write 10 | pull-requests: write 11 | 12 | jobs: 13 | stale: 14 | uses: ipdxco/unified-github-workflows/.github/workflows/reusable-stale-issue.yml@v1 15 | -------------------------------------------------------------------------------- /.github/workflows/tagpush.yml: -------------------------------------------------------------------------------- 1 | name: Tag Push Checker 2 | 3 | on: 4 | push: 5 | tags: 6 | - v* 7 | 8 | permissions: 9 | contents: read 10 | issues: write 11 | 12 | concurrency: 13 | group: ${{ github.workflow }}-${{ github.ref }} 14 | cancel-in-progress: true 15 | 16 | jobs: 17 | releaser: 18 | uses: ipdxco/unified-github-workflows/.github/workflows/tagpush.yml@v1.0 19 | -------------------------------------------------------------------------------- /.gx/lastpubver: -------------------------------------------------------------------------------- 1 | 1.12.4: QmeSwaXGLDbzGXTaaNoCP9drpFp4YDUDRwE2Qw7wzvDCKm 2 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License 2 | 3 | Copyright (c) 2016 Łukasz Magiera 4 | 5 | Permission is hereby granted, free of charge, to any 
person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in 13 | all copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN 21 | THE SOFTWARE. 
-------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # go-ds-badger 2 | 3 | [![](https://img.shields.io/badge/made%20by-Protocol%20Labs-blue.svg?style=flat-square)](http://ipn.io) 4 | [![](https://img.shields.io/badge/project-IPFS-blue.svg?style=flat-square)](http://ipfs.io/) 5 | [![](https://img.shields.io/badge/freenode-%23ipfs-blue.svg?style=flat-square)](http://webchat.freenode.net/?channels=%23ipfs) 6 | [![standard-readme compliant](https://img.shields.io/badge/standard--readme-OK-green.svg?style=flat-square)](https://github.com/RichardLitt/standard-readme) 7 | [![GoDoc](https://godoc.org/github.com/ipfs/go-ds-badger?status.svg)](https://godoc.org/github.com/ipfs/go-ds-badger) 8 | [![Build Status](https://travis-ci.org/ipfs/go-ds-badger.svg?branch=master)](https://travis-ci.org/ipfs/go-ds-badger) 9 | 10 | > Datastore implementation using [badger](https://github.com/dgraph-io/badger) as backend. 11 | 12 | ## Lead Maintainer 13 | 14 | [Łukasz Magiera](https://github.com/magik6k) 15 | 16 | ## Table of Contents 17 | 18 | - [Documentation](#documentation) 19 | - [Badger2](#badger2) 20 | - [Contribute](#contribute) 21 | - [License](#license) 22 | 23 | ## Documentation 24 | 25 | https://godoc.org/github.com/ipfs/go-ds-badger 26 | 27 | ## Badger2 28 | 29 | This repo contains a datastore implementation using Badger v1. If you are looking for a Badger v2 datastore check out https://github.com/ipfs/go-ds-badger2. 30 | 31 | ## Contribute 32 | 33 | Feel free to join in. All welcome. Open an [issue](https://github.com/ipfs/go-ds-badger/issues)! 34 | 35 | This repository falls under the IPFS [Code of Conduct](https://github.com/ipfs/community/blob/master/code-of-conduct.md). 36 | 37 | ### Want to hack on IPFS? 
38 | 39 | [![](https://cdn.rawgit.com/jbenet/contribute-ipfs-gif/master/img/contribute.gif)](https://github.com/ipfs/community/blob/master/CONTRIBUTING.md) 40 | 41 | ## License 42 | 43 | MIT 44 | -------------------------------------------------------------------------------- /codecov.yml: -------------------------------------------------------------------------------- 1 | coverage: 2 | range: "50...100" 3 | comment: off 4 | -------------------------------------------------------------------------------- /datastore.go: -------------------------------------------------------------------------------- 1 | package badger 2 | 3 | import ( 4 | "context" 5 | "errors" 6 | "fmt" 7 | "os" 8 | "runtime" 9 | "strings" 10 | "sync" 11 | "time" 12 | 13 | badger "github.com/dgraph-io/badger" 14 | options "github.com/dgraph-io/badger/options" 15 | ds "github.com/ipfs/go-datastore" 16 | dsq "github.com/ipfs/go-datastore/query" 17 | logger "github.com/ipfs/go-log/v2" 18 | ) 19 | 20 | // badgerLog is a local wrapper for go-log to make the interface 21 | // compatible with badger.Logger (namely, aliasing Warnf to Warningf) 22 | type badgerLog struct { 23 | logger.ZapEventLogger 24 | } 25 | 26 | func (b *badgerLog) Warningf(format string, args ...interface{}) { 27 | b.Warnf(format, args...) 28 | } 29 | 30 | var log = logger.Logger("badger") 31 | 32 | var ErrClosed = errors.New("datastore closed") 33 | 34 | type Datastore struct { 35 | DB *badger.DB 36 | 37 | closeLk sync.RWMutex 38 | closed bool 39 | closeOnce sync.Once 40 | closing chan struct{} 41 | 42 | gcDiscardRatio float64 43 | gcSleep time.Duration 44 | gcInterval time.Duration 45 | 46 | syncWrites bool 47 | } 48 | 49 | // Implements the datastore.Batch interface, enabling batching support for 50 | // the badger Datastore. 51 | type batch struct { 52 | ds *Datastore 53 | writeBatch *badger.WriteBatch 54 | } 55 | 56 | // Implements the datastore.Txn interface, enabling transaction support for 57 | // the badger Datastore. 
58 | type txn struct { 59 | ds *Datastore 60 | txn *badger.Txn 61 | 62 | // Whether this transaction has been implicitly created as a result of a direct Datastore 63 | // method invocation. 64 | implicit bool 65 | } 66 | 67 | // Options are the badger datastore options, reexported here for convenience. 68 | type Options struct { 69 | // Please refer to the Badger docs to see what this is for 70 | GcDiscardRatio float64 71 | 72 | // Interval between GC cycles 73 | // 74 | // If zero, the datastore will perform no automatic garbage collection. 75 | GcInterval time.Duration 76 | 77 | // Sleep time between rounds of a single GC cycle. 78 | // 79 | // If zero, the datastore will only perform one round of GC per 80 | // GcInterval. 81 | GcSleep time.Duration 82 | 83 | badger.Options 84 | } 85 | 86 | // DefaultOptions are the default options for the badger datastore. 87 | var DefaultOptions Options 88 | 89 | func init() { 90 | DefaultOptions = Options{ 91 | GcDiscardRatio: 0.2, 92 | GcInterval: 15 * time.Minute, 93 | GcSleep: 10 * time.Second, 94 | Options: badger.LSMOnlyOptions(""), 95 | } 96 | // This is to optimize the database on close so it can be opened 97 | // read-only and efficiently queried. We don't do that and hanging on 98 | // stop isn't nice. 99 | DefaultOptions.Options.CompactL0OnClose = false 100 | 101 | // The alternative is "crash on start and tell the user to fix it". This 102 | // will truncate corrupt and unsynced data, which we don't guarantee to 103 | // persist anyways. 104 | DefaultOptions.Options.Truncate = true 105 | 106 | // Uses less memory, is no slower when writing, and is faster when 107 | // reading (in some tests). 108 | DefaultOptions.Options.ValueLogLoadingMode = options.FileIO 109 | 110 | // Explicitly set this to mmap. This doesn't use much memory anyways. 111 | DefaultOptions.Options.TableLoadingMode = options.MemoryMap 112 | 113 | // Reduce this from 64MiB to 16MiB. 
That means badger will hold on to 114 | // 20MiB by default instead of 80MiB. 115 | // 116 | // This does not appear to have a significant performance hit. 117 | DefaultOptions.Options.MaxTableSize = 16 << 20 118 | } 119 | 120 | var _ ds.Datastore = (*Datastore)(nil) 121 | var _ ds.PersistentDatastore = (*Datastore)(nil) 122 | var _ ds.TxnDatastore = (*Datastore)(nil) 123 | var _ ds.Txn = (*txn)(nil) 124 | var _ ds.TTLDatastore = (*Datastore)(nil) 125 | var _ ds.GCDatastore = (*Datastore)(nil) 126 | var _ ds.Batching = (*Datastore)(nil) 127 | 128 | // NewDatastore creates a new badger datastore. 129 | // 130 | // DO NOT set the Dir and/or ValueDir fields of opt; they will be set for you. 131 | func NewDatastore(path string, opts *Options) (*Datastore, error) { 132 | // Copy the options because we modify them. 133 | var opt badger.Options 134 | var gcDiscardRatio float64 135 | var gcSleep time.Duration 136 | var gcInterval time.Duration 137 | if opts == nil { 138 | opt = badger.DefaultOptions("") 139 | gcDiscardRatio = DefaultOptions.GcDiscardRatio 140 | gcSleep = DefaultOptions.GcSleep 141 | gcInterval = DefaultOptions.GcInterval 142 | } else { 143 | opt = opts.Options 144 | gcDiscardRatio = opts.GcDiscardRatio 145 | gcSleep = opts.GcSleep 146 | gcInterval = opts.GcInterval 147 | } 148 | 149 | if os.Getenv("GOARCH") == "386" { 150 | opt.TableLoadingMode = options.FileIO 151 | } 152 | 153 | if gcSleep <= 0 { 154 | // If gcSleep is 0, we don't perform multiple rounds of GC per 155 | // cycle. 
156 | gcSleep = gcInterval 157 | } 158 | 159 | opt.Dir = path 160 | opt.ValueDir = path 161 | opt.Logger = &badgerLog{*log} 162 | 163 | kv, err := badger.Open(opt) 164 | if err != nil { 165 | if strings.HasPrefix(err.Error(), "manifest has unsupported version:") { 166 | err = fmt.Errorf("unsupported badger version, use github.com/ipfs/badgerds-upgrade to upgrade: %s", err.Error()) 167 | } 168 | return nil, err 169 | } 170 | 171 | ds := &Datastore{ 172 | DB: kv, 173 | closing: make(chan struct{}), 174 | gcDiscardRatio: gcDiscardRatio, 175 | gcSleep: gcSleep, 176 | gcInterval: gcInterval, 177 | syncWrites: opt.SyncWrites, 178 | } 179 | 180 | // Start the GC process if requested. 181 | if ds.gcInterval > 0 { 182 | go ds.periodicGC() 183 | } 184 | 185 | return ds, nil 186 | } 187 | 188 | // Keep scheduling GC's AFTER `gcInterval` has passed since the previous GC 189 | func (d *Datastore) periodicGC() { 190 | gcTimeout := time.NewTimer(d.gcInterval) 191 | defer gcTimeout.Stop() 192 | 193 | for { 194 | select { 195 | case <-gcTimeout.C: 196 | switch err := d.gcOnce(); err { 197 | case badger.ErrNoRewrite, badger.ErrRejected: 198 | // No rewrite means we've fully garbage collected. 199 | // Rejected means someone else is running a GC 200 | // or we're closing. 201 | gcTimeout.Reset(d.gcInterval) 202 | case nil: 203 | gcTimeout.Reset(d.gcSleep) 204 | case ErrClosed: 205 | return 206 | default: 207 | log.Errorf("error during a GC cycle: %s", err) 208 | // Not much we can do on a random error but log it and continue. 209 | gcTimeout.Reset(d.gcInterval) 210 | } 211 | case <-d.closing: 212 | return 213 | } 214 | } 215 | } 216 | 217 | // NewTransaction starts a new transaction. The resulting transaction object 218 | // can be mutated without incurring changes to the underlying Datastore until 219 | // the transaction is Committed. 
220 | func (d *Datastore) NewTransaction(ctx context.Context, readOnly bool) (ds.Txn, error) { 221 | d.closeLk.RLock() 222 | defer d.closeLk.RUnlock() 223 | if d.closed { 224 | return nil, ErrClosed 225 | } 226 | 227 | return &txn{d, d.DB.NewTransaction(!readOnly), false}, nil 228 | } 229 | 230 | // newImplicitTransaction creates a transaction marked as 'implicit'. 231 | // Implicit transactions are created by Datastore methods performing single operations. 232 | func (d *Datastore) newImplicitTransaction(readOnly bool) *txn { 233 | return &txn{d, d.DB.NewTransaction(!readOnly), true} 234 | } 235 | 236 | func (d *Datastore) Put(ctx context.Context, key ds.Key, value []byte) error { 237 | d.closeLk.RLock() 238 | defer d.closeLk.RUnlock() 239 | if d.closed { 240 | return ErrClosed 241 | } 242 | 243 | txn := d.newImplicitTransaction(false) 244 | defer txn.discard() 245 | 246 | if err := txn.put(key, value); err != nil { 247 | return err 248 | } 249 | 250 | return txn.commit() 251 | } 252 | 253 | func (d *Datastore) Sync(ctx context.Context, prefix ds.Key) error { 254 | d.closeLk.RLock() 255 | defer d.closeLk.RUnlock() 256 | if d.closed { 257 | return ErrClosed 258 | } 259 | 260 | if d.syncWrites { 261 | return nil 262 | } 263 | 264 | return d.DB.Sync() 265 | } 266 | 267 | func (d *Datastore) PutWithTTL(ctx context.Context, key ds.Key, value []byte, ttl time.Duration) error { 268 | d.closeLk.RLock() 269 | defer d.closeLk.RUnlock() 270 | if d.closed { 271 | return ErrClosed 272 | } 273 | 274 | txn := d.newImplicitTransaction(false) 275 | defer txn.discard() 276 | 277 | if err := txn.putWithTTL(key, value, ttl); err != nil { 278 | return err 279 | } 280 | 281 | return txn.commit() 282 | } 283 | 284 | func (d *Datastore) SetTTL(ctx context.Context, key ds.Key, ttl time.Duration) error { 285 | d.closeLk.RLock() 286 | defer d.closeLk.RUnlock() 287 | if d.closed { 288 | return ErrClosed 289 | } 290 | 291 | txn := d.newImplicitTransaction(false) 292 | defer txn.discard() 293 
| 294 | if err := txn.setTTL(key, ttl); err != nil { 295 | return err 296 | } 297 | 298 | return txn.commit() 299 | } 300 | 301 | func (d *Datastore) GetExpiration(ctx context.Context, key ds.Key) (time.Time, error) { 302 | d.closeLk.RLock() 303 | defer d.closeLk.RUnlock() 304 | if d.closed { 305 | return time.Time{}, ErrClosed 306 | } 307 | 308 | txn := d.newImplicitTransaction(false) 309 | defer txn.discard() 310 | 311 | return txn.getExpiration(key) 312 | } 313 | 314 | func (d *Datastore) Get(ctx context.Context, key ds.Key) (value []byte, err error) { 315 | d.closeLk.RLock() 316 | defer d.closeLk.RUnlock() 317 | if d.closed { 318 | return nil, ErrClosed 319 | } 320 | 321 | txn := d.newImplicitTransaction(true) 322 | defer txn.discard() 323 | 324 | return txn.get(key) 325 | } 326 | 327 | func (d *Datastore) Has(ctx context.Context, key ds.Key) (bool, error) { 328 | d.closeLk.RLock() 329 | defer d.closeLk.RUnlock() 330 | if d.closed { 331 | return false, ErrClosed 332 | } 333 | 334 | txn := d.newImplicitTransaction(true) 335 | defer txn.discard() 336 | 337 | return txn.has(key) 338 | } 339 | 340 | func (d *Datastore) GetSize(ctx context.Context, key ds.Key) (size int, err error) { 341 | d.closeLk.RLock() 342 | defer d.closeLk.RUnlock() 343 | if d.closed { 344 | return -1, ErrClosed 345 | } 346 | 347 | txn := d.newImplicitTransaction(true) 348 | defer txn.discard() 349 | 350 | return txn.getSize(key) 351 | } 352 | 353 | func (d *Datastore) Delete(ctx context.Context, key ds.Key) error { 354 | d.closeLk.RLock() 355 | defer d.closeLk.RUnlock() 356 | 357 | txn := d.newImplicitTransaction(false) 358 | defer txn.discard() 359 | 360 | err := txn.delete(key) 361 | if err != nil { 362 | return err 363 | } 364 | 365 | return txn.commit() 366 | } 367 | 368 | func (d *Datastore) Query(ctx context.Context, q dsq.Query) (dsq.Results, error) { 369 | d.closeLk.RLock() 370 | defer d.closeLk.RUnlock() 371 | if d.closed { 372 | return nil, ErrClosed 373 | } 374 | 375 | txn := 
d.newImplicitTransaction(true) 376 | // We cannot defer txn.Discard() here, as the txn must remain active while the iterator is open. 377 | // https://github.com/dgraph-io/badger/commit/b1ad1e93e483bbfef123793ceedc9a7e34b09f79 378 | // The closing logic in the query goprocess takes care of discarding the implicit transaction. 379 | return txn.query(q) 380 | } 381 | 382 | // DiskUsage implements the PersistentDatastore interface. 383 | // It returns the sum of the LSM and value log file sizes in bytes. 384 | func (d *Datastore) DiskUsage(ctx context.Context) (uint64, error) { 385 | d.closeLk.RLock() 386 | defer d.closeLk.RUnlock() 387 | if d.closed { 388 | return 0, ErrClosed 389 | } 390 | lsm, vlog := d.DB.Size() 391 | return uint64(lsm + vlog), nil 392 | } 393 | 394 | func (d *Datastore) Close() error { 395 | d.closeOnce.Do(func() { 396 | close(d.closing) 397 | }) 398 | d.closeLk.Lock() 399 | defer d.closeLk.Unlock() 400 | if d.closed { 401 | return ErrClosed 402 | } 403 | d.closed = true 404 | return d.DB.Close() 405 | } 406 | 407 | // Batch creates a new Batch object. This provides a way to do many writes when 408 | // there may be too many to fit into a single transaction. 409 | func (d *Datastore) Batch(ctx context.Context) (ds.Batch, error) { 410 | d.closeLk.RLock() 411 | defer d.closeLk.RUnlock() 412 | if d.closed { 413 | return nil, ErrClosed 414 | } 415 | 416 | b := &batch{d, d.DB.NewWriteBatch()} 417 | // Ensure that incomplete transaction resources are cleaned up in case 418 | // batch is abandoned. 419 | runtime.SetFinalizer(b, func(b *batch) { 420 | b.cancel() 421 | log.Error("batch not committed or canceled") 422 | }) 423 | 424 | return b, nil 425 | } 426 | 427 | func (d *Datastore) CollectGarbage(ctx context.Context) (err error) { 428 | // The idea is to keep calling DB.RunValueLogGC() until Badger no longer has any log files 429 | // to GC (which is indicated by an error; see the Badger GC docs). 
430 | for err == nil { 431 | err = d.gcOnce() 432 | } 433 | 434 | if err == badger.ErrNoRewrite { 435 | err = nil 436 | } 437 | 438 | return err 439 | } 440 | 441 | func (d *Datastore) gcOnce() error { 442 | d.closeLk.RLock() 443 | defer d.closeLk.RUnlock() 444 | if d.closed { 445 | return ErrClosed 446 | } 447 | log.Info("Running GC round") 448 | defer log.Info("Finished running GC round") 449 | return d.DB.RunValueLogGC(d.gcDiscardRatio) 450 | } 451 | 452 | var _ ds.Batch = (*batch)(nil) 453 | 454 | func (b *batch) Put(ctx context.Context, key ds.Key, value []byte) error { 455 | b.ds.closeLk.RLock() 456 | defer b.ds.closeLk.RUnlock() 457 | if b.ds.closed { 458 | return ErrClosed 459 | } 460 | return b.put(key, value) 461 | } 462 | 463 | func (b *batch) put(key ds.Key, value []byte) error { 464 | return b.writeBatch.Set(key.Bytes(), value) 465 | } 466 | 467 | func (b *batch) Delete(ctx context.Context, key ds.Key) error { 468 | b.ds.closeLk.RLock() 469 | defer b.ds.closeLk.RUnlock() 470 | if b.ds.closed { 471 | return ErrClosed 472 | } 473 | 474 | return b.delete(key) 475 | } 476 | 477 | func (b *batch) delete(key ds.Key) error { 478 | return b.writeBatch.Delete(key.Bytes()) 479 | } 480 | 481 | func (b *batch) Commit(ctx context.Context) error { 482 | b.ds.closeLk.RLock() 483 | defer b.ds.closeLk.RUnlock() 484 | if b.ds.closed { 485 | return ErrClosed 486 | } 487 | 488 | return b.commit() 489 | } 490 | 491 | func (b *batch) commit() error { 492 | err := b.writeBatch.Flush() 493 | if err != nil { 494 | // Discard incomplete transaction held by b.writeBatch 495 | b.cancel() 496 | return err 497 | } 498 | runtime.SetFinalizer(b, nil) 499 | return nil 500 | } 501 | 502 | func (b *batch) Cancel() error { 503 | b.ds.closeLk.RLock() 504 | defer b.ds.closeLk.RUnlock() 505 | if b.ds.closed { 506 | return ErrClosed 507 | } 508 | 509 | b.cancel() 510 | return nil 511 | } 512 | 513 | func (b *batch) cancel() { 514 | b.writeBatch.Cancel() 515 | runtime.SetFinalizer(b, nil) 516 
| } 517 | 518 | var _ ds.Datastore = (*txn)(nil) 519 | var _ ds.TTLDatastore = (*txn)(nil) 520 | 521 | func (t *txn) Put(ctx context.Context, key ds.Key, value []byte) error { 522 | t.ds.closeLk.RLock() 523 | defer t.ds.closeLk.RUnlock() 524 | if t.ds.closed { 525 | return ErrClosed 526 | } 527 | return t.put(key, value) 528 | } 529 | 530 | func (t *txn) put(key ds.Key, value []byte) error { 531 | return t.txn.Set(key.Bytes(), value) 532 | } 533 | 534 | func (t *txn) Sync(ctx context.Context, prefix ds.Key) error { 535 | t.ds.closeLk.RLock() 536 | defer t.ds.closeLk.RUnlock() 537 | if t.ds.closed { 538 | return ErrClosed 539 | } 540 | 541 | return nil 542 | } 543 | 544 | func (t *txn) PutWithTTL(ctx context.Context, key ds.Key, value []byte, ttl time.Duration) error { 545 | t.ds.closeLk.RLock() 546 | defer t.ds.closeLk.RUnlock() 547 | if t.ds.closed { 548 | return ErrClosed 549 | } 550 | return t.putWithTTL(key, value, ttl) 551 | } 552 | 553 | func (t *txn) putWithTTL(key ds.Key, value []byte, ttl time.Duration) error { 554 | return t.txn.SetEntry(badger.NewEntry(key.Bytes(), value).WithTTL(ttl)) 555 | } 556 | 557 | func (t *txn) GetExpiration(ctx context.Context, key ds.Key) (time.Time, error) { 558 | t.ds.closeLk.RLock() 559 | defer t.ds.closeLk.RUnlock() 560 | if t.ds.closed { 561 | return time.Time{}, ErrClosed 562 | } 563 | 564 | return t.getExpiration(key) 565 | } 566 | 567 | func (t *txn) getExpiration(key ds.Key) (time.Time, error) { 568 | item, err := t.txn.Get(key.Bytes()) 569 | if err == badger.ErrKeyNotFound { 570 | return time.Time{}, ds.ErrNotFound 571 | } else if err != nil { 572 | return time.Time{}, err 573 | } 574 | return time.Unix(int64(item.ExpiresAt()), 0), nil 575 | } 576 | 577 | func (t *txn) SetTTL(ctx context.Context, key ds.Key, ttl time.Duration) error { 578 | t.ds.closeLk.RLock() 579 | defer t.ds.closeLk.RUnlock() 580 | if t.ds.closed { 581 | return ErrClosed 582 | } 583 | 584 | return t.setTTL(key, ttl) 585 | } 586 | 587 | func (t 
*txn) setTTL(key ds.Key, ttl time.Duration) error { 588 | item, err := t.txn.Get(key.Bytes()) 589 | if err != nil { 590 | return err 591 | } 592 | return item.Value(func(data []byte) error { 593 | return t.putWithTTL(key, data, ttl) 594 | }) 595 | 596 | } 597 | 598 | func (t *txn) Get(ctx context.Context, key ds.Key) ([]byte, error) { 599 | t.ds.closeLk.RLock() 600 | defer t.ds.closeLk.RUnlock() 601 | if t.ds.closed { 602 | return nil, ErrClosed 603 | } 604 | 605 | return t.get(key) 606 | } 607 | 608 | func (t *txn) get(key ds.Key) ([]byte, error) { 609 | item, err := t.txn.Get(key.Bytes()) 610 | if err == badger.ErrKeyNotFound { 611 | err = ds.ErrNotFound 612 | } 613 | if err != nil { 614 | return nil, err 615 | } 616 | 617 | return item.ValueCopy(nil) 618 | } 619 | 620 | func (t *txn) Has(ctx context.Context, key ds.Key) (bool, error) { 621 | t.ds.closeLk.RLock() 622 | defer t.ds.closeLk.RUnlock() 623 | if t.ds.closed { 624 | return false, ErrClosed 625 | } 626 | 627 | return t.has(key) 628 | } 629 | 630 | func (t *txn) has(key ds.Key) (bool, error) { 631 | _, err := t.txn.Get(key.Bytes()) 632 | switch err { 633 | case badger.ErrKeyNotFound: 634 | return false, nil 635 | case nil: 636 | return true, nil 637 | default: 638 | return false, err 639 | } 640 | } 641 | 642 | func (t *txn) GetSize(ctx context.Context, key ds.Key) (int, error) { 643 | t.ds.closeLk.RLock() 644 | defer t.ds.closeLk.RUnlock() 645 | if t.ds.closed { 646 | return -1, ErrClosed 647 | } 648 | 649 | return t.getSize(key) 650 | } 651 | 652 | func (t *txn) getSize(key ds.Key) (int, error) { 653 | item, err := t.txn.Get(key.Bytes()) 654 | switch err { 655 | case nil: 656 | return int(item.ValueSize()), nil 657 | case badger.ErrKeyNotFound: 658 | return -1, ds.ErrNotFound 659 | default: 660 | return -1, err 661 | } 662 | } 663 | 664 | func (t *txn) Delete(ctx context.Context, key ds.Key) error { 665 | t.ds.closeLk.RLock() 666 | defer t.ds.closeLk.RUnlock() 667 | if t.ds.closed { 668 | return 
ErrClosed 669 | } 670 | 671 | return t.delete(key) 672 | } 673 | 674 | func (t *txn) delete(key ds.Key) error { 675 | return t.txn.Delete(key.Bytes()) 676 | } 677 | 678 | func (t *txn) Query(ctx context.Context, q dsq.Query) (dsq.Results, error) { 679 | t.ds.closeLk.RLock() 680 | defer t.ds.closeLk.RUnlock() 681 | if t.ds.closed { 682 | return nil, ErrClosed 683 | } 684 | 685 | return t.query(q) 686 | } 687 | 688 | func (t *txn) query(q dsq.Query) (dsq.Results, error) { 689 | opt := badger.DefaultIteratorOptions 690 | opt.PrefetchValues = !q.KeysOnly 691 | prefix := ds.NewKey(q.Prefix).String() 692 | if prefix != "/" { 693 | opt.Prefix = []byte(prefix + "/") 694 | } 695 | 696 | // Handle ordering 697 | if len(q.Orders) > 0 { 698 | switch q.Orders[0].(type) { 699 | case dsq.OrderByKey, *dsq.OrderByKey: 700 | // We order by key by default. 701 | case dsq.OrderByKeyDescending, *dsq.OrderByKeyDescending: 702 | // Reverse order by key 703 | opt.Reverse = true 704 | default: 705 | // Ok, we have a weird order we can't handle. Let's 706 | // perform the _base_ query (prefix, filter, etc.), then 707 | // handle sort/offset/limit later. 708 | 709 | // Skip the stuff we can't apply. 710 | baseQuery := q 711 | baseQuery.Limit = 0 712 | baseQuery.Offset = 0 713 | baseQuery.Orders = nil 714 | 715 | // perform the base query. 716 | res, err := t.query(baseQuery) 717 | if err != nil { 718 | return nil, err 719 | } 720 | 721 | // fix the query 722 | res = dsq.ResultsReplaceQuery(res, q) 723 | 724 | // Remove the parts we've already applied. 
725 | naiveQuery := q 726 | naiveQuery.Prefix = "" 727 | naiveQuery.Filters = nil 728 | 729 | // Apply the rest of the query 730 | return dsq.NaiveQueryApply(naiveQuery, res), nil 731 | } 732 | } 733 | 734 | it := t.txn.NewIterator(opt) 735 | results := dsq.ResultsWithContext(q, func(ctx context.Context, output chan<- dsq.Result) { 736 | t.ds.closeLk.RLock() 737 | closedEarly := false 738 | defer func() { 739 | t.ds.closeLk.RUnlock() 740 | if closedEarly { 741 | select { 742 | case output <- dsq.Result{ 743 | Error: ErrClosed, 744 | }: 745 | case <-ctx.Done(): 746 | } 747 | } 748 | 749 | }() 750 | if t.ds.closed { 751 | closedEarly = true 752 | return 753 | } 754 | 755 | // this iterator is part of an implicit transaction, so when 756 | // we're done we must discard the transaction. It's safe to 757 | // discard the txn because it contains the iterator only. 758 | if t.implicit { 759 | defer t.discard() 760 | } 761 | 762 | defer it.Close() 763 | 764 | // All iterators must be started by rewinding. 765 | it.Rewind() 766 | 767 | // skip to the offset 768 | for skipped := 0; skipped < q.Offset && it.Valid(); it.Next() { 769 | // On the happy path, we have no filters and we can go 770 | // on our way. 771 | if len(q.Filters) == 0 { 772 | skipped++ 773 | continue 774 | } 775 | 776 | // On the sad path, we need to apply filters before 777 | // counting the item as "skipped" as the offset comes 778 | // _after_ the filter. 779 | item := it.Item() 780 | 781 | matches := true 782 | check := func(value []byte) error { 783 | e := dsq.Entry{ 784 | Key: string(item.Key()), 785 | Value: value, 786 | Size: int(item.ValueSize()), // this function is basically free 787 | } 788 | 789 | // Only calculate expirations if we need them. 790 | if q.ReturnExpirations { 791 | e.Expiration = expires(item) 792 | } 793 | matches = filter(q.Filters, e) 794 | return nil 795 | } 796 | 797 | // Maybe check with the value, only if we need it.
798 | var err error 799 | if q.KeysOnly { 800 | err = check(nil) 801 | } else { 802 | err = item.Value(check) 803 | } 804 | 805 | if err != nil { 806 | select { 807 | case output <- dsq.Result{Error: err}: 808 | case <-t.ds.closing: // datastore closing. 809 | closedEarly = true 810 | return 811 | case <-ctx.Done(): // client told us to close early 812 | return 813 | } 814 | } 815 | if !matches { 816 | skipped++ 817 | } 818 | } 819 | 820 | for sent := 0; (q.Limit <= 0 || sent < q.Limit) && it.Valid(); it.Next() { 821 | item := it.Item() 822 | e := dsq.Entry{Key: string(item.Key())} 823 | 824 | // Maybe get the value 825 | var result dsq.Result 826 | if !q.KeysOnly { 827 | b, err := item.ValueCopy(nil) 828 | if err != nil { 829 | result = dsq.Result{Error: err} 830 | } else { 831 | e.Value = b 832 | e.Size = len(b) 833 | result = dsq.Result{Entry: e} 834 | } 835 | } else { 836 | e.Size = int(item.ValueSize()) 837 | result = dsq.Result{Entry: e} 838 | } 839 | 840 | if q.ReturnExpirations { 841 | result.Expiration = expires(item) 842 | } 843 | 844 | // Finally, filter it (unless we're dealing with an error). 845 | if result.Error == nil && filter(q.Filters, e) { 846 | continue 847 | } 848 | 849 | select { 850 | case output <- result: 851 | sent++ 852 | case <-t.ds.closing: // datastore closing. 
853 | closedEarly = true 854 | return 855 | case <-ctx.Done(): // client told us to close early 856 | return 857 | } 858 | } 859 | }) 860 | 861 | return results, nil 862 | } 863 | 864 | func (t *txn) Commit(ctx context.Context) error { 865 | t.ds.closeLk.RLock() 866 | defer t.ds.closeLk.RUnlock() 867 | if t.ds.closed { 868 | return ErrClosed 869 | } 870 | 871 | return t.commit() 872 | } 873 | 874 | func (t *txn) commit() error { 875 | return t.txn.Commit() 876 | } 877 | 878 | // Alias to commit 879 | func (t *txn) Close() error { 880 | t.ds.closeLk.RLock() 881 | defer t.ds.closeLk.RUnlock() 882 | if t.ds.closed { 883 | return ErrClosed 884 | } 885 | return t.close() 886 | } 887 | 888 | func (t *txn) close() error { 889 | return t.txn.Commit() 890 | } 891 | 892 | func (t *txn) Discard(ctx context.Context) { 893 | t.ds.closeLk.RLock() 894 | defer t.ds.closeLk.RUnlock() 895 | if t.ds.closed { 896 | return 897 | } 898 | 899 | t.discard() 900 | } 901 | 902 | func (t *txn) discard() { 903 | t.txn.Discard() 904 | } 905 | 906 | // filter returns _true_ if we should filter (skip) the entry 907 | func filter(filters []dsq.Filter, entry dsq.Entry) bool { 908 | for _, f := range filters { 909 | if !f.Filter(entry) { 910 | return true 911 | } 912 | } 913 | return false 914 | } 915 | 916 | func expires(item *badger.Item) time.Time { 917 | return time.Unix(int64(item.ExpiresAt()), 0) 918 | } 919 | -------------------------------------------------------------------------------- /ds_test.go: -------------------------------------------------------------------------------- 1 | package badger 2 | 3 | import ( 4 | "bytes" 5 | "context" 6 | "crypto/rand" 7 | "fmt" 8 | "sort" 9 | "testing" 10 | "time" 11 | 12 | ds "github.com/ipfs/go-datastore" 13 | dsq "github.com/ipfs/go-datastore/query" 14 | dstest "github.com/ipfs/go-datastore/test" 15 | ) 16 | 17 | var bg = context.Background() 18 | 19 | var testcases = map[string]string{ 20 | "/a": "a", 21 | "/a/b": "ab", 22 | "/a/b/c": "abc", 23 | 
"/a/b/d": "a/b/d", 24 | "/a/c": "ac", 25 | "/a/d": "ad", 26 | "/e": "e", 27 | "/f": "f", 28 | "/g": "", 29 | } 30 | 31 | func addTestCases(t *testing.T, d *Datastore, testcases map[string]string) { 32 | for k, v := range testcases { 33 | dsk := ds.NewKey(k) 34 | if err := d.Put(bg, dsk, []byte(v)); err != nil { 35 | t.Fatal(err) 36 | } 37 | } 38 | 39 | for k, v := range testcases { 40 | dsk := ds.NewKey(k) 41 | v2, err := d.Get(bg, dsk) 42 | if err != nil { 43 | t.Fatal(err) 44 | } 45 | if string(v2) != v { 46 | t.Errorf("%s values differ: %s != %s", k, v, v2) 47 | } 48 | } 49 | } 50 | func TestQuery(t *testing.T) { 51 | d, err := NewDatastore(t.TempDir(), nil) 52 | if err != nil { 53 | t.Fatal(err) 54 | } 55 | defer d.Close() 56 | 57 | addTestCases(t, d, testcases) 58 | 59 | rs, err := d.Query(bg, dsq.Query{Prefix: "/a/"}) 60 | if err != nil { 61 | t.Fatal(err) 62 | } 63 | 64 | expectMatches(t, []string{ 65 | "/a/b", 66 | "/a/b/c", 67 | "/a/b/d", 68 | "/a/c", 69 | "/a/d", 70 | }, rs) 71 | 72 | // test offset and limit 73 | 74 | rs, err = d.Query(bg, dsq.Query{Prefix: "/a/", Offset: 2, Limit: 2}) 75 | if err != nil { 76 | t.Fatal(err) 77 | } 78 | 79 | expectMatches(t, []string{ 80 | "/a/b/d", 81 | "/a/c", 82 | }, rs) 83 | } 84 | 85 | func TestHas(t *testing.T) { 86 | d, err := NewDatastore(t.TempDir(), nil) 87 | if err != nil { 88 | t.Fatal(err) 89 | } 90 | defer d.Close() 91 | 92 | addTestCases(t, d, testcases) 93 | 94 | has, err := d.Has(bg, ds.NewKey("/a/b/c")) 95 | if err != nil { 96 | t.Error(err) 97 | } 98 | 99 | if !has { 100 | t.Error("Key should be found") 101 | } 102 | 103 | has, err = d.Has(bg, ds.NewKey("/a/b/c/d")) 104 | if err != nil { 105 | t.Error(err) 106 | } 107 | 108 | if has { 109 | t.Error("Key should not be found") 110 | } 111 | } 112 | 113 | func TestGetSize(t *testing.T) { 114 | d, err := NewDatastore(t.TempDir(), nil) 115 | if err != nil { 116 | t.Fatal(err) 117 | } 118 | defer d.Close() 119 | 120 | addTestCases(t, d, testcases) 121 | 122 | 
size, err := d.GetSize(bg, ds.NewKey("/a/b/c")) 123 | if err != nil { 124 | t.Error(err) 125 | } 126 | 127 | if size != len(testcases["/a/b/c"]) { 128 | t.Errorf("wrong size for /a/b/c: got %d, expected %d", size, len(testcases["/a/b/c"])) 129 | } 130 | 131 | _, err = d.GetSize(bg, ds.NewKey("/a/b/c/d")) 132 | if err != ds.ErrNotFound { 133 | t.Error(err) 134 | } 135 | } 136 | 137 | func TestNotExistGet(t *testing.T) { 138 | d, err := NewDatastore(t.TempDir(), nil) 139 | if err != nil { 140 | t.Fatal(err) 141 | } 142 | defer d.Close() 143 | 144 | addTestCases(t, d, testcases) 145 | 146 | has, err := d.Has(bg, ds.NewKey("/a/b/c/d")) 147 | if err != nil { 148 | t.Error(err) 149 | } 150 | 151 | if has { 152 | t.Error("Key should not be found") 153 | } 154 | 155 | val, err := d.Get(bg, ds.NewKey("/a/b/c/d")) 156 | if val != nil { 157 | t.Error("Key should not be found") 158 | } 159 | 160 | if err != ds.ErrNotFound { 161 | t.Error("Error was not set to ds.ErrNotFound") 162 | if err != nil { 163 | t.Error(err) 164 | } 165 | } 166 | } 167 | 168 | func TestDelete(t *testing.T) { 169 | d, err := NewDatastore(t.TempDir(), nil) 170 | if err != nil { 171 | t.Fatal(err) 172 | } 173 | defer d.Close() 174 | 175 | addTestCases(t, d, testcases) 176 | 177 | has, err := d.Has(bg, ds.NewKey("/a/b/c")) 178 | if err != nil { 179 | t.Error(err) 180 | } 181 | if !has { 182 | t.Error("Key should be found") 183 | } 184 | 185 | err = d.Delete(bg, ds.NewKey("/a/b/c")) 186 | if err != nil { 187 | t.Error(err) 188 | } 189 | 190 | has, err = d.Has(bg, ds.NewKey("/a/b/c")) 191 | if err != nil { 192 | t.Error(err) 193 | } 194 | if has { 195 | t.Error("Key should not be found") 196 | } 197 | } 198 | 199 | func TestGetEmpty(t *testing.T) { 200 | d, err := NewDatastore(t.TempDir(), nil) 201 | if err != nil { 202 | t.Fatal(err) 203 | } 204 | defer d.Close() 205 | 206 | err = d.Put(bg, ds.NewKey("/a"), []byte{}) 207 | if err != nil { 208 | t.Error(err) 209 | } 210 | 211 | v, err := d.Get(bg, ds.NewKey("/a")) 212 | if err != nil { 213 | t.Error(err) 214 | } 215 | 216 | if
len(v) != 0 { 217 | t.Error("expected 0 len []byte from Get") 218 | } 219 | } 220 | 221 | func expectMatches(t *testing.T, expect []string, actualR dsq.Results) { 222 | actual, err := actualR.Rest() 223 | if err != nil { 224 | t.Error(err) 225 | } 226 | 227 | if len(actual) != len(expect) { 228 | t.Error("expected and actual result counts differ:", expect, actual) 229 | } 230 | for _, k := range expect { 231 | found := false 232 | for _, e := range actual { 233 | if e.Key == k { 234 | found = true 235 | } 236 | } 237 | if !found { 238 | t.Error(k, "not found") 239 | } 240 | } 241 | } 242 | 243 | func TestBatching(t *testing.T) { 244 | d, err := NewDatastore(t.TempDir(), nil) 245 | if err != nil { 246 | t.Fatal(err) 247 | } 248 | defer d.Close() 249 | 250 | b, err := d.Batch(bg) 251 | if err != nil { 252 | t.Fatal(err) 253 | } 254 | 255 | for k, v := range testcases { 256 | err := b.Put(bg, ds.NewKey(k), []byte(v)) 257 | if err != nil { 258 | t.Fatal(err) 259 | } 260 | } 261 | 262 | err = b.Commit(bg) 263 | if err != nil { 264 | t.Fatal(err) 265 | } 266 | 267 | for k, v := range testcases { 268 | val, err := d.Get(bg, ds.NewKey(k)) 269 | if err != nil { 270 | t.Fatal(err) 271 | } 272 | 273 | if v != string(val) { 274 | t.Fatal("got wrong data!") 275 | } 276 | } 277 | 278 | // Test delete 279 | 280 | b, err = d.Batch(bg) 281 | if err != nil { 282 | t.Fatal(err) 283 | } 284 | 285 | err = b.Delete(bg, ds.NewKey("/a/b")) 286 | if err != nil { 287 | t.Fatal(err) 288 | } 289 | 290 | err = b.Delete(bg, ds.NewKey("/a/b/c")) 291 | if err != nil { 292 | t.Fatal(err) 293 | } 294 | 295 | err = b.Commit(bg) 296 | if err != nil { 297 | t.Fatal(err) 298 | } 299 | 300 | rs, err := d.Query(bg, dsq.Query{Prefix: "/"}) 301 | if err != nil { 302 | t.Fatal(err) 303 | } 304 | 305 | expectMatches(t, []string{ 306 | "/a", 307 | "/a/b/d", 308 | "/a/c", 309 | "/a/d", 310 | "/e", 311 | "/f", 312 | "/g", 313 | }, rs) 314 | 315 | // Test cancel 316 | 317 | b, err = d.Batch(bg) 318 | if err != nil { 319 | t.Fatal(err) 320 | }
321 | 322 | const key = "/xyz" 323 | 324 | err = b.Put(bg, ds.NewKey(key), []byte("/x/y/z")) 325 | if err != nil { 326 | t.Fatal(err) 327 | } 328 | 329 | // TODO: remove type assertion once datastore.Batch interface has Cancel 330 | err = b.(*batch).Cancel() 331 | if err != nil { 332 | t.Fatal(err) 333 | } 334 | 335 | _, err = d.Get(bg, ds.NewKey(key)) 336 | if err == nil { 337 | t.Fatal("expected error trying to get uncommitted data") 338 | } 339 | } 340 | 341 | func TestBatchingRequired(t *testing.T) { 342 | dsOpts := DefaultOptions 343 | d, err := NewDatastore(t.TempDir(), &dsOpts) 344 | if err != nil { 345 | t.Fatal(err) 346 | } 347 | defer d.Close() 348 | 349 | const valSize = 1000 350 | 351 | // Check that transaction fails when there are too many writes. This is 352 | // not testing batching logic, but is here to prove that batching works 353 | // where a transaction fails. 354 | t.Logf("putting %d-byte values until transaction overflows", valSize) 355 | tx, err := d.NewTransaction(bg, false) 356 | if err != nil { 357 | t.Fatal(err) 358 | } 359 | var puts int 360 | for ; puts < 10000000; puts++ { 361 | buf := make([]byte, valSize) 362 | rand.Read(buf) 363 | err = tx.Put(bg, ds.NewKey(fmt.Sprintf("/key%d", puts)), buf) 364 | if err != nil { 365 | break 366 | } 367 | 368 | } 369 | if err == nil { 370 | t.Error("expected transaction to fail") 371 | } else { 372 | t.Logf("OK - transaction cannot handle %d puts: %s", puts, err) 373 | } 374 | tx.Discard(bg) 375 | 376 | // Check that batch succeeds with the same number of writes that caused a 377 | // transaction to fail.
378 | t.Logf("putting %d %d-byte values using batch", puts, valSize) 379 | b, err := d.Batch(bg) 380 | if err != nil { 381 | t.Fatal(err) 382 | } 383 | for i := 0; i < puts; i++ { 384 | buf := make([]byte, valSize) 385 | rand.Read(buf) 386 | err = b.Put(bg, ds.NewKey(fmt.Sprintf("/key%d", i)), buf) 387 | if err != nil { 388 | t.Fatal(err) 389 | } 390 | } 391 | 392 | err = b.Commit(bg) 393 | if err != nil { 394 | t.Fatal(err) 395 | } 396 | } 397 | 398 | // Tests from basic_tests in go-datastore 399 | 400 | func TestBasicPutGet(t *testing.T) { 401 | d, err := NewDatastore(t.TempDir(), nil) 402 | if err != nil { 403 | t.Fatal(err) 404 | } 405 | defer d.Close() 406 | 407 | k := ds.NewKey("foo") 408 | val := []byte("Hello Datastore!") 409 | 410 | err = d.Put(bg, k, val) 411 | if err != nil { 412 | t.Fatal("error putting to datastore: ", err) 413 | } 414 | 415 | have, err := d.Has(bg, k) 416 | if err != nil { 417 | t.Fatal("error calling has on key we just put: ", err) 418 | } 419 | 420 | if !have { 421 | t.Fatal("should have key foo, has returned false") 422 | } 423 | 424 | out, err := d.Get(bg, k) 425 | if err != nil { 426 | t.Fatal("error getting value after put: ", err) 427 | } 428 | 429 | if !bytes.Equal(out, val) { 430 | t.Fatal("value received on get wasn't what we expected:", out) 431 | } 432 | 433 | have, err = d.Has(bg, k) 434 | if err != nil { 435 | t.Fatal("error calling has after get: ", err) 436 | } 437 | 438 | if !have { 439 | t.Fatal("should have key foo, has returned false") 440 | } 441 | 442 | err = d.Delete(bg, k) 443 | if err != nil { 444 | t.Fatal("error calling delete: ", err) 445 | } 446 | 447 | have, err = d.Has(bg, k) 448 | if err != nil { 449 | t.Fatal("error calling has after delete: ", err) 450 | } 451 | 452 | if have { 453 | t.Fatal("should not have key foo, has returned true") 454 | } 455 | } 456 | 457 | func TestNotFounds(t *testing.T) { 458 | d, err := NewDatastore(t.TempDir(), nil) 459 | if err != nil { 460 | t.Fatal(err) 461 | } 462 |
defer d.Close() 463 | 464 | badk := ds.NewKey("notreal") 465 | 466 | val, err := d.Get(bg, badk) 467 | if err != ds.ErrNotFound { 468 | t.Fatal("expected ErrNotFound for key that doesn't exist, got: ", err) 469 | } 470 | 471 | if val != nil { 472 | t.Fatal("get should always return nil for not found values") 473 | } 474 | 475 | have, err := d.Has(bg, badk) 476 | if err != nil { 477 | t.Fatal("error calling has on not found key: ", err) 478 | } 479 | if have { 480 | t.Fatal("has returned true for key we don't have") 481 | } 482 | } 483 | 484 | func TestManyKeysAndQuery(t *testing.T) { 485 | d, err := NewDatastore(t.TempDir(), nil) 486 | if err != nil { 487 | t.Fatal(err) 488 | } 489 | defer d.Close() 490 | 491 | var keys []ds.Key 492 | var keystrs []string 493 | var values [][]byte 494 | count := 100 495 | for i := 0; i < count; i++ { 496 | s := fmt.Sprintf("%dkey%d", i, i) 497 | dsk := ds.NewKey(s) 498 | keystrs = append(keystrs, dsk.String()) 499 | keys = append(keys, dsk) 500 | buf := make([]byte, 64) 501 | rand.Read(buf) 502 | values = append(values, buf) 503 | } 504 | 505 | t.Logf("putting %d values", count) 506 | for i, k := range keys { 507 | err := d.Put(bg, k, values[i]) 508 | if err != nil { 509 | t.Fatalf("error on put[%d]: %s", i, err) 510 | } 511 | } 512 | 513 | t.Log("getting values back") 514 | for i, k := range keys { 515 | val, err := d.Get(bg, k) 516 | if err != nil { 517 | t.Fatalf("error on get[%d]: %s", i, err) 518 | } 519 | 520 | if !bytes.Equal(val, values[i]) { 521 | t.Fatal("input value didn't match the one returned from Get") 522 | } 523 | } 524 | 525 | t.Log("querying values") 526 | q := dsq.Query{KeysOnly: true} 527 | resp, err := d.Query(bg, q) 528 | if err != nil { 529 | t.Fatal("calling query: ", err) 530 | } 531 | 532 | t.Log("aggregating query results") 533 | var outkeys []string 534 | for { 535 | res, ok := resp.NextSync() 536 | if res.Error != nil { 537 | t.Fatal("query result error: ", res.Error) 538 | } 539 | if !ok { 540 | break
541 | } 542 | 543 | outkeys = append(outkeys, res.Key) 544 | } 545 | 546 | t.Log("verifying query output") 547 | sort.Strings(keystrs) 548 | sort.Strings(outkeys) 549 | 550 | if len(keystrs) != len(outkeys) { 551 | t.Fatalf("got wrong number of keys back, %d != %d", len(keystrs), len(outkeys)) 552 | } 553 | 554 | for i, s := range keystrs { 555 | if outkeys[i] != s { 556 | t.Fatalf("in key output, got %s but expected %s", outkeys[i], s) 557 | } 558 | } 559 | 560 | t.Log("deleting all keys") 561 | for _, k := range keys { 562 | if err := d.Delete(bg, k); err != nil { 563 | t.Fatal(err) 564 | } 565 | } 566 | } 567 | 568 | func TestGC(t *testing.T) { 569 | d, err := NewDatastore(t.TempDir(), nil) 570 | if err != nil { 571 | t.Fatal(err) 572 | } 573 | defer d.Close() 574 | 575 | count := 10000 576 | 577 | b, err := d.Batch(bg) 578 | if err != nil { 579 | t.Fatal(err) 580 | } 581 | 582 | t.Logf("putting %d values", count) 583 | for i := 0; i < count; i++ { 584 | buf := make([]byte, 6400) 585 | rand.Read(buf) 586 | err = b.Put(bg, ds.NewKey(fmt.Sprintf("/key%d", i)), buf) 587 | if err != nil { 588 | t.Fatal(err) 589 | } 590 | } 591 | 592 | err = b.Commit(bg) 593 | if err != nil { 594 | t.Fatal(err) 595 | } 596 | 597 | b, err = d.Batch(bg) 598 | if err != nil { 599 | t.Fatal(err) 600 | } 601 | 602 | t.Logf("deleting %d values", count) 603 | for i := 0; i < count; i++ { 604 | err := b.Delete(bg, ds.NewKey(fmt.Sprintf("/key%d", i))) 605 | if err != nil { 606 | t.Fatal(err) 607 | } 608 | } 609 | 610 | err = b.Commit(bg) 611 | if err != nil { 612 | t.Fatal(err) 613 | } 614 | 615 | if err := d.CollectGarbage(bg); err != nil { 616 | t.Fatal(err) 617 | } 618 | } 619 | 620 | // TestDiskUsage verifies we fetch some badger size correctly. 621 | // Because the Size metric is only updated every minute in badger and 622 | // this interval is not configurable, we re-open the database 623 | // (the size is always calculated on Open) to make things quick. 
624 | func TestDiskUsage(t *testing.T) { 625 | path := t.TempDir() 626 | d, err := NewDatastore(path, nil) 627 | if err != nil { 628 | t.Fatal(err) 629 | } 630 | defer d.Close() 631 | 632 | addTestCases(t, d, testcases) 633 | d.Close() 634 | 635 | d, err = NewDatastore(path, nil) 636 | if err != nil { 637 | t.Fatal(err) 638 | } 639 | defer d.Close() 640 | 641 | s, _ := d.DiskUsage(bg) 642 | if s == 0 { 643 | t.Error("expected some size") 644 | } 645 | } 646 | 647 | func TestTxnDiscard(t *testing.T) { 648 | d, err := NewDatastore(t.TempDir(), nil) 649 | if err != nil { 650 | t.Fatal(err) 651 | } 652 | defer d.Close() 653 | 654 | txn, err := d.NewTransaction(bg, false) 655 | if err != nil { 656 | t.Fatal(err) 657 | } 658 | key := ds.NewKey("/test/thing") 659 | if err := txn.Put(bg, key, []byte{1, 2, 3}); err != nil { 660 | t.Fatal(err) 661 | } 662 | txn.Discard(bg) 663 | has, err := d.Has(bg, key) 664 | if err != nil { 665 | t.Fatal(err) 666 | } 667 | if has { 668 | t.Fatal("key written in aborted transaction still exists") 669 | } 670 | } 671 | 672 | func TestTxnCommit(t *testing.T) { 673 | d, err := NewDatastore(t.TempDir(), nil) 674 | if err != nil { 675 | t.Fatal(err) 676 | } 677 | defer d.Close() 678 | 679 | txn, err := d.NewTransaction(bg, false) 680 | if err != nil { 681 | t.Fatal(err) 682 | } 683 | key := ds.NewKey("/test/thing") 684 | if err := txn.Put(bg, key, []byte{1, 2, 3}); err != nil { 685 | t.Fatal(err) 686 | } 687 | err = txn.Commit(bg) 688 | if err != nil { 689 | t.Fatal(err) 690 | } 691 | has, err := d.Has(bg, key) 692 | if err != nil { 693 | t.Fatal(err) 694 | } 695 | if !has { 696 | t.Fatal("key written in committed transaction does not exist") 697 | } 698 | } 699 | 700 | func TestTxnBatch(t *testing.T) { 701 | d, err := NewDatastore(t.TempDir(), nil) 702 | if err != nil { 703 | t.Fatal(err) 704 | } 705 | defer d.Close() 706 | 707 | txn, err := d.NewTransaction(bg, false) 708 | if err != nil { 709 | t.Fatal(err) 710 | } 711 | data := 
make(map[ds.Key][]byte) 712 | for i := 0; i < 10; i++ { 713 | key := ds.NewKey(fmt.Sprintf("/test/%d", i)) 714 | bytes := make([]byte, 16) 715 | _, err := rand.Read(bytes) 716 | if err != nil { 717 | t.Fatal(err) 718 | } 719 | data[key] = bytes 720 | 721 | err = txn.Put(bg, key, bytes) 722 | if err != nil { 723 | t.Fatal(err) 724 | } 725 | } 726 | err = txn.Commit(bg) 727 | if err != nil { 728 | t.Fatal(err) 729 | } 730 | 731 | for key, bytes := range data { 732 | retrieved, err := d.Get(bg, key) 733 | if err != nil { 734 | t.Fatal(err) 735 | } 736 | if len(retrieved) != len(bytes) { 737 | t.Fatal("bytes stored different length from bytes generated") 738 | } 739 | for i, b := range retrieved { 740 | if bytes[i] != b { 741 | t.Fatal("bytes stored different content from bytes generated") 742 | } 743 | } 744 | } 745 | } 746 | 747 | func TestTTL(t *testing.T) { 748 | d, err := NewDatastore(t.TempDir(), nil) 749 | if err != nil { 750 | t.Fatal(err) 751 | } 752 | defer d.Close() 753 | 754 | txn, err := d.NewTransaction(bg, false) 755 | if err != nil { 756 | t.Fatal(err) 757 | } 758 | 759 | data := make(map[ds.Key][]byte) 760 | for i := 0; i < 10; i++ { 761 | key := ds.NewKey(fmt.Sprintf("/test/%d", i)) 762 | bytes := make([]byte, 16) 763 | _, err := rand.Read(bytes) 764 | if err != nil { 765 | t.Fatal(err) 766 | } 767 | data[key] = bytes 768 | } 769 | 770 | // write data 771 | for key, bytes := range data { 772 | err = txn.(ds.TTL).PutWithTTL(bg, key, bytes, time.Second) 773 | if err != nil { 774 | t.Fatal(err) 775 | } 776 | } 777 | err = txn.Commit(bg) 778 | if err != nil { 779 | t.Fatal(err) 780 | } 781 | 782 | txn, err = d.NewTransaction(bg, true) 783 | if err != nil { 784 | t.Fatal(err) 785 | } 786 | for key := range data { 787 | _, err := txn.Get(bg, key) 788 | if err != nil { 789 | t.Fatal(err) 790 | } 791 | } 792 | txn.Discard(bg) 793 | 794 | time.Sleep(time.Second) 795 | 796 | for key := range data { 797 | has, err := d.Has(bg, key) 798 | if err != nil { 799 | 
t.Fatal(err) 800 | } 801 | if has { 802 | t.Fatal("record with ttl did not expire") 803 | } 804 | } 805 | } 806 | 807 | func TestExpirations(t *testing.T) { 808 | d, err := NewDatastore(t.TempDir(), nil) 809 | if err != nil { 810 | t.Fatal(err) 811 | } 812 | defer d.Close() 813 | 814 | txn, err := d.NewTransaction(bg, false) 815 | if err != nil { 816 | t.Fatal(err) 817 | } 818 | ttltxn := txn.(ds.TTL) 819 | defer txn.Discard(bg) 820 | 821 | key := ds.NewKey("/abc/def") 822 | val := make([]byte, 32) 823 | if n, err := rand.Read(val); n != 32 || err != nil { 824 | t.Fatal("source of randomness failed") 825 | } 826 | 827 | ttl := time.Hour 828 | now := time.Now() 829 | tgt := now.Add(ttl) 830 | 831 | if err = ttltxn.PutWithTTL(bg, key, val, ttl); err != nil { 832 | t.Fatalf("adding with ttl failed: %v", err) 833 | } 834 | 835 | if err = txn.Commit(bg); err != nil { 836 | t.Fatalf("committing transaction failed: %v", err) 837 | } 838 | 839 | // Second transaction to retrieve expirations. 840 | txn, err = d.NewTransaction(bg, true) 841 | if err != nil { 842 | t.Fatal(err) 843 | } 844 | ttltxn = txn.(ds.TTL) 845 | defer txn.Discard(bg) 846 | 847 | // GetExpiration returns expected value. 848 | var dsExp time.Time 849 | if dsExp, err = ttltxn.GetExpiration(bg, key); err != nil { 850 | t.Fatalf("getting expiration failed: %v", err) 851 | } else if tgt.Sub(dsExp) >= 5*time.Second { 852 | t.Fatal("expiration returned by datastore not within the expected range (tolerance: 5 seconds)") 853 | } else if tgt.Sub(dsExp) < 0 { 854 | t.Fatal("expiration returned by datastore was earlier than expected") 855 | } 856 | 857 | // Iterator returns expected value.
858 | q := dsq.Query{ 859 | ReturnExpirations: true, 860 | KeysOnly: true, 861 | } 862 | var ress dsq.Results 863 | if ress, err = txn.Query(bg, q); err != nil { 864 | t.Fatalf("querying datastore failed: %v", err) 865 | } 866 | 867 | defer ress.Close() 868 | if res, ok := ress.NextSync(); !ok { 869 | t.Fatal("expected 1 result in iterator") 870 | } else if res.Expiration != dsExp { 871 | t.Fatalf("expiration returned from iterator differs from GetExpiration, expected: %v, actual: %v", dsExp, res.Expiration) 872 | } 873 | 874 | if _, ok := ress.NextSync(); ok { 875 | t.Fatal("expected no more results in iterator") 876 | } 877 | 878 | // Datastore->GetExpiration() 879 | if exp, err := d.GetExpiration(bg, key); err != nil { 880 | t.Fatalf("querying datastore failed: %v", err) 881 | } else if exp != dsExp { 882 | t.Fatalf("expiration returned from DB differs from that returned by txn, expected: %v, actual: %v", dsExp, exp) 883 | } 884 | 885 | if _, err := d.GetExpiration(bg, ds.NewKey("/foo/bar")); err != ds.ErrNotFound { 886 | t.Fatalf("wrong error type: %v", err) 887 | } 888 | } 889 | 890 | func TestSuite(t *testing.T) { 891 | d, err := NewDatastore(t.TempDir(), nil) 892 | if err != nil { 893 | t.Fatal(err) 894 | } 895 | defer d.Close() 896 | 897 | dstest.SubtestAll(t, d) 898 | } 899 | -------------------------------------------------------------------------------- /go.mod: -------------------------------------------------------------------------------- 1 | module github.com/ipfs/go-ds-badger 2 | 3 | go 1.23 4 | 5 | require ( 6 | github.com/dgraph-io/badger v1.6.2 7 | github.com/ipfs/go-datastore v0.8.2 8 | github.com/ipfs/go-log/v2 v2.5.1 9 | ) 10 | 11 | require ( 12 | github.com/AndreasBriese/bbloom v0.0.0-20190825152654-46b345b51c96 // indirect 13 | github.com/cespare/xxhash v1.1.0 // indirect 14 | github.com/dgraph-io/ristretto v0.0.2 // indirect 15 | github.com/dustin/go-humanize v1.0.0 // indirect 16 | github.com/golang/protobuf v1.3.1 // indirect 17 | 
github.com/google/uuid v1.6.0 // indirect 18 | github.com/ipfs/go-detect-race v0.0.1 // indirect 19 | github.com/mattn/go-isatty v0.0.14 // indirect 20 | github.com/pkg/errors v0.8.1 // indirect 21 | go.uber.org/atomic v1.7.0 // indirect 22 | go.uber.org/multierr v1.11.0 // indirect 23 | go.uber.org/zap v1.19.1 // indirect 24 | golang.org/x/net v0.35.0 // indirect 25 | golang.org/x/sys v0.30.0 // indirect 26 | ) 27 | -------------------------------------------------------------------------------- /go.sum: -------------------------------------------------------------------------------- 1 | github.com/AndreasBriese/bbloom v0.0.0-20190825152654-46b345b51c96 h1:cTp8I5+VIoKjsnZuH8vjyaysT/ses3EvZeaV/1UkF2M= 2 | github.com/AndreasBriese/bbloom v0.0.0-20190825152654-46b345b51c96/go.mod h1:bOvUY6CB00SOBii9/FifXqc0awNKxLFCL/+pkDPuyl8= 3 | github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= 4 | github.com/OneOfOne/xxhash v1.2.2 h1:KMrpdQIwFcEqXDklaen+P1axHaj9BSKzvpUUfnHldSE= 5 | github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU= 6 | github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= 7 | github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8= 8 | github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA= 9 | github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko= 10 | github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc= 11 | github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= 12 | github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk= 13 | github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= 14 | github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE= 15 | 
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= 16 | github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= 17 | github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= 18 | github.com/dgraph-io/badger v1.6.2 h1:mNw0qs90GVgGGWylh0umH5iag1j6n/PeJtNvL6KY/x8= 19 | github.com/dgraph-io/badger v1.6.2/go.mod h1:JW2yswe3V058sS0kZ2h/AXeDSqFjxnZcRrVH//y2UQE= 20 | github.com/dgraph-io/ristretto v0.0.2 h1:a5WaUrDa0qm0YrAAS1tUykT5El3kt62KNZZeMxQn3po= 21 | github.com/dgraph-io/ristretto v0.0.2/go.mod h1:KPxhHT9ZxKefz+PCeOGsrHpl1qZ7i70dGTu2u+Ahh6E= 22 | github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2 h1:tdlZCpZ/P9DhczCTSixgIKmwPv6+wP5DGjqLYw5SUiA= 23 | github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw= 24 | github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo= 25 | github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk= 26 | github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= 27 | github.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg= 28 | github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= 29 | github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= 30 | github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= 31 | github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= 32 | github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8= 33 | github.com/ipfs/go-datastore v0.8.2 h1:Jy3wjqQR6sg/LhyY0NIePZC3Vux19nLtg7dx0TVqr6U= 34 | github.com/ipfs/go-datastore v0.8.2/go.mod h1:W+pI1NsUsz3tcsAACMtfC+IZdnQTnC/7VfPoJBQuts0= 35 | github.com/ipfs/go-detect-race v0.0.1 h1:qX/xay2W3E4Q1U7d9lNs1sU9nvguX0a7319XbyQ6cOk= 36 | 
github.com/ipfs/go-detect-race v0.0.1/go.mod h1:8BNT7shDZPo99Q74BpGMK+4D8Mn4j46UU0LZ723meps= 37 | github.com/ipfs/go-log/v2 v2.5.1 h1:1XdUzF7048prq4aBjDQQ4SL5RxftpRGdXhNRwKSAlcY= 38 | github.com/ipfs/go-log/v2 v2.5.1/go.mod h1:prSpmC1Gpllc9UYWxDiZDreBYw7zp4Iqp1kOLU9U5UI= 39 | github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= 40 | github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= 41 | github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= 42 | github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= 43 | github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= 44 | github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= 45 | github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= 46 | github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= 47 | github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= 48 | github.com/mattn/go-isatty v0.0.14 h1:yVuAays6BHfxijgZPzw+3Zlu5yQgKGP2/hcQbHb7S9Y= 49 | github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94= 50 | github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= 51 | github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= 52 | github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= 53 | github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I= 54 | github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= 55 | github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= 56 | github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= 57 | github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII= 58 | 
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o= 59 | github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= 60 | github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA= 61 | github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI= 62 | github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA= 63 | github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= 64 | github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= 65 | github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= 66 | github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= 67 | github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= 68 | github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= 69 | github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= 70 | github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= 71 | github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= 72 | github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= 73 | github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY= 74 | github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= 75 | github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= 76 | github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= 77 | github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= 78 | go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
79 | go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc= 80 | go.uber.org/goleak v1.1.11-0.20210813005559-691160354723 h1:sHOAIxRGBp443oHZIPB+HsUGaksVCXVQENPxwTfQdH4= 81 | go.uber.org/goleak v1.1.11-0.20210813005559-691160354723/go.mod h1:cwTWslyiVhfpKIDGSZEM2HlOvcqm+tG4zioyIeLoqMQ= 82 | go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU= 83 | go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0= 84 | go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= 85 | go.uber.org/zap v1.19.1 h1:ue41HOKd1vGURxrmeKIgELGb3jPW9DMUDGtsinblHwI= 86 | go.uber.org/zap v1.19.1/go.mod h1:j3DNczoxDZroyBnOT1L/Q79cfUMGZxlv/9dzN7SM1rI= 87 | golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= 88 | golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= 89 | golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= 90 | golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= 91 | golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= 92 | golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= 93 | golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= 94 | golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= 95 | golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= 96 | golang.org/x/net v0.35.0 h1:T5GQRQb2y08kTAByq9L4/bz8cipCdA8FbRTXewonqY8= 97 | golang.org/x/net v0.35.0/go.mod h1:EglIi67kWsHKlRzzVMUD93VMSWGFOMSZgxFjparz1Qk= 98 | golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
99 | golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 100 | golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 101 | golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 102 | golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 103 | golang.org/x/sys v0.0.0-20190626221950-04f50cda93cb/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 104 | golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 105 | golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 106 | golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= 107 | golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= 108 | golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc= 109 | golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= 110 | golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= 111 | golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= 112 | golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= 113 | golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= 114 | golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= 115 | golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= 116 | golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= 117 | golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
118 | golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= 119 | golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= 120 | gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= 121 | gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= 122 | gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= 123 | gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= 124 | gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= 125 | gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= 126 | gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10= 127 | gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= 128 | gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= 129 | gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= 130 | gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= 131 | gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= 132 | -------------------------------------------------------------------------------- /version.json: -------------------------------------------------------------------------------- 1 | { 2 | "version": "v0.3.4" 3 | } 4 | --------------------------------------------------------------------------------