├── .gitignore
├── 10_concurrently.png
├── README.md
├── all_in_one_request.png
├── benchmark
│   ├── benchmarks_test.go
│   ├── go.mod
│   ├── go.sum
│   ├── tools.go
│   ├── user.go
│   └── userloader_gen.go
├── cached.png
├── dataloaden_test.go
├── dataloader_test.go
├── dataloadgen.go
├── dataloadgen_test.go
├── go.mod
├── go.sum
├── go.work
├── init.png
├── license.md
└── unique_keys.png
/.gitignore: -------------------------------------------------------------------------------- 1 | # If you prefer the allow list template instead of the deny list, see community template: 2 | # https://github.com/github/gitignore/blob/main/community/Golang/Go.AllowList.gitignore 3 | # 4 | # Binaries for programs and plugins 5 | *.exe 6 | *.exe~ 7 | *.dll 8 | *.so 9 | *.dylib 10 | 11 | # Test binary, built with `go test -c` 12 | *.test 13 | 14 | # Output of the go coverage tool, specifically when used with LiteIDE 15 | *.out 16 | 17 | # Dependency directories (remove the comment below to include it) 18 | # vendor/ 19 | 20 | # Go workspace file 21 | # go.work 22 | go.work.sum 23 | 24 | # env file 25 | .env 26 | 27 | # Editor config 28 | .vscode 29 | .idea -------------------------------------------------------------------------------- /10_concurrently.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vikstrous/dataloadgen/f6e9a8a533f0275fa23946b74136dfe65faf6676/10_concurrently.png -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # dataloadgen 2 | 3 | [![Go Reference](https://pkg.go.dev/badge/github.com/vikstrous/dataloadgen.svg)](https://pkg.go.dev/github.com/vikstrous/dataloadgen) 4 | 5 | `dataloadgen` is an implementation of a pattern popularized by [Facebook's DataLoader](https://github.com/graphql/dataloader). 6 | 7 | It works as follows: 8 | * A Loader object is created per graphql request. 9 | * Each of many concurrently executing graphql resolver functions calls `Load()` on the Loader object with different keys, say `K1`, `K2`, `K3`. 10 | * Each call to `Load()` with a new key is delayed slightly (a few milliseconds) so that the Loader can load them together. 11 | * The customizable `fetch` function of the loader takes a list of keys and loads data for all of them in a single batched request to the data storage layer. It might send `[K1,K2,K3]` and get back `[V1,V2,V3]`. The order of the keys must match the order of the values. 12 | * Alternatively, the `mappedFetch` function of the loader takes a list of keys and returns a map instead of a list. It might send `[K1, K2, K3]` and get back `{K1: V1, K2: V2, K3: V3}`. 13 | * The Loader takes care of sending the right result to the right caller and the result is cached for the duration of the graphql request. 14 | 15 | > [!NOTE] 16 | > The `fetch` function expects the returned list to correspond to the provided keys in the same order. Alternatively, the `mappedFetch` function allows returning a map, ensuring correct ordering and automatic creation of the `ErrNotFound` error for missing values.
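For illustration, here is a minimal sketch (not taken from the package's documentation) of the `mappedFetch` behavior described in the note: a key that is absent from the returned map surfaces to that key's `Load()` call as `dataloadgen.ErrNotFound`. The `userByID` fetcher and its data are hypothetical.

```go
package main

import (
	"context"
	"errors"
	"fmt"

	"github.com/vikstrous/dataloadgen"
)

func main() {
	// Hypothetical mappedFetch: "K3" is deliberately missing from the result map.
	userByID := func(ctx context.Context, keys []string) (map[string]string, error) {
		return map[string]string{"K1": "V1", "K2": "V2"}, nil
	}
	loader := dataloadgen.NewMappedLoader(userByID)

	v, err := loader.Load(context.Background(), "K1")
	fmt.Println(v, err) // V1 <nil>

	// The missing key is reported as an error rather than a zero value.
	_, err = loader.Load(context.Background(), "K3")
	fmt.Println(errors.Is(err, dataloadgen.ErrNotFound)) // true
}
```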
17 | 18 | Usage: 19 | 20 | ```sh 21 | go get github.com/vikstrous/dataloadgen 22 | ``` 23 | 24 | See the usage [example](https://pkg.go.dev/github.com/vikstrous/dataloadgen#example-Loader) in the documentation: 25 | ```go 26 | package main 27 | 28 | import ( 29 | "context" 30 | "fmt" 31 | "strconv" 32 | 33 | "github.com/vikstrous/dataloadgen" 34 | ) 35 | 36 | // fetchFn/mappedFetchFn is shown as a function here, but it might work better as a method 37 | // ctx is the context from the first call to Load for the current batch 38 | func fetchFn(ctx context.Context, keys []string) (ret []int, errs []error) { 39 | for _, key := range keys { 40 | num, err := strconv.ParseInt(key, 10, 32) 41 | ret = append(ret, int(num)) 42 | errs = append(errs, err) 43 | } 44 | return 45 | } 46 | func mappedFetchFn(ctx context.Context, keys []string) (ret map[string]int, err error) { 47 | ret = make(map[string]int, len(keys)) 48 | errs := make(map[string]error, len(keys)) 49 | for _, key := range keys { 50 | num, err := strconv.ParseInt(key, 10, 32) 51 | ret[key] = int(num) 52 | errs[key] = err 53 | } 54 | // You can also return a single error, applied to every key's load invocation, instead of this MappedFetchError. 55 | err = dataloadgen.MappedFetchError[string](errs) 56 | return 57 | } 58 | 59 | func main() { 60 | ctx := context.Background() 61 | 62 | // Per-request setup code. Either: 63 | loader := dataloadgen.NewLoader(fetchFn) 64 | // or: 65 | // loader := dataloadgen.NewMappedLoader(mappedFetchFn) 66 | 67 | // In every graphql resolver: 68 | result, err := loader.Load(ctx, "1") 69 | if err != nil { 70 | panic(err) 71 | } 72 | fmt.Println(result) 73 | } 74 | ``` 75 | 76 | ## Comparison to others 77 | 78 | * [dataloaden](https://github.com/vektah/dataloaden) uses code generation and has similar performance. 79 | * [dataloader](https://github.com/graph-gophers/dataloader) does not use code generation but has much worse performance and is more difficult to use. 80 | * [yckao/go-dataloader](https://github.com/yckao/go-dataloader) does not use code generation but has much worse performance and is very similar to dataloader. 81 | 82 | The benchmarks in this repo show that this package is faster than all of the above, and I also find it easier to use. 83 | 84 | <details> 85 | <summary>Benchmark data as CSV</summary> 86 | 87 | ``` 88 | Benchmark,Package,iterations,ns/op,B/op,allocs/op 89 | init-8,graph-gophers/dataloader,"9,242,047.00",130.50,208.00,3.00 90 | init-8,vektah/dataloaden,"1,000,000,000.00",0.27,0.00,0.00 91 | init-8,yckao/go-dataloader,"3,153,999.00",402.10,400.00,10.00 92 | init-8,vikstrous/dataloadgen,"10,347,595.00",114.90,128.00,3.00 93 | cached-8,graph-gophers/dataloader,"4,669.00","222,072.00","25,307.00",522.00 94 | cached-8,vektah/dataloaden,"1,243.00","1,037,044.00","5,234.00",110.00 95 | cached-8,yckao/go-dataloader,"2,312.00","580,860.00","2,273.00",130.00 96 | cached-8,vikstrous/dataloadgen,"1,552.00","824,939.00",776.00,15.00 97 | unique_keys-8,graph-gophers/dataloader,"12,334.00","97,118.00","56,314.00",945.00 98 | unique_keys-8,vektah/dataloaden,"36,489.00","32,507.00","37,514.00",227.00 99 | unique_keys-8,yckao/go-dataloader,"8,055.00","133,224.00","50,180.00",747.00 100 | unique_keys-8,vikstrous/dataloadgen,"42,943.00","27,257.00","22,255.00",230.00 101 | 10_concurrently-8,graph-gophers/dataloader,326.00,"11,119,367.00","5,574,460.00","164,247.00" 102 | 10_concurrently-8,vektah/dataloaden,100.00,"194,627,574.00","898,977.00","19,502.00" 103 | 10_concurrently-8,yckao/go-dataloader,278.00,"10,972,399.00","314,963.00","29,558.00" 104 | 10_concurrently-8,vikstrous/dataloadgen,643.00,"8,249,158.00","43,474.00",806.00 105 | all_in_one_request-8,graph-gophers/dataloader,28.00,"39,954,324.00","27,475,136.00","158,321.00" 106 | all_in_one_request-8,vektah/dataloaden,328.00,"3,713,407.00","3,533,086.00","41,368.00" 107 | all_in_one_request-8,yckao/go-dataloader,132.00,"9,060,571.00","4,886,722.00","102,564.00" 108 | all_in_one_request-8,vikstrous/dataloadgen,375.00,"3,206,175.00","2,518,498.00","41,582.00" 109 | ``` 110 | 111 | </details>
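To make the table concrete: in the `unique_keys` benchmark (100 distinct keys per iteration), `vikstrous/dataloadgen` averages about 27.3µs and 22.3KB allocated per iteration, versus roughly 97.1µs and 56.3KB for `graph-gophers/dataloader`, i.e. about 3.6x faster with about a quarter of the allocations.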
112 | 113 | ![](init.png) 114 | ![](cached.png) 115 | ![](unique_keys.png) 116 | ![](10_concurrently.png) 117 | ![](all_in_one_request.png) 118 | 119 | To run the benchmarks, run `go test -bench=. . -run BenchmarkAll -benchmem` from the benchmark directory. 120 | -------------------------------------------------------------------------------- /all_in_one_request.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vikstrous/dataloadgen/f6e9a8a533f0275fa23946b74136dfe65faf6676/all_in_one_request.png -------------------------------------------------------------------------------- /benchmark/benchmarks_test.go: -------------------------------------------------------------------------------- 1 | package benchmark_test 2 | 3 | import ( 4 | "context" 5 | "fmt" 6 | "strconv" 7 | "sync" 8 | "testing" 9 | "time" 10 | 11 | "github.com/graph-gophers/dataloader/v7" 12 | "github.com/vikstrous/dataloadgen" 13 | "github.com/vikstrous/dataloadgen/benchmark" 14 | yckaodataloader "github.com/yckao/go-dataloader" 15 | ) 16 | 17 | func newDataloader() *dataloader.Loader[int, benchmark.User] { 18 | return dataloader.NewBatchedLoader(func(ctx context.Context, keys []int) []*dataloader.Result[benchmark.User] { 19 | users := make([]*dataloader.Result[benchmark.User], len(keys)) 20 | 21 | for i, key := range keys { 22 | if key%100 == 1 { 23 | users[i] = &dataloader.Result[benchmark.User]{Error: fmt.Errorf("user not found")} 24 | } else { 25 | users[i] = &dataloader.Result[benchmark.User]{Data: benchmark.User{ID: strconv.Itoa(key), Name: "user " + strconv.Itoa(key)}} 26 | } 27 | } 28 | return users 29 | }, 30 | dataloader.WithBatchCapacity[int, benchmark.User](100), 31 | dataloader.WithWait[int, benchmark.User](500*time.Nanosecond), 32 | ) 33 | } 34 | 35 | func newDataloaden() *benchmark.UserLoader { 36 | return benchmark.NewUserLoader(benchmark.UserLoaderConfig{ 37 | Wait: 500 * time.Nanosecond, 38 | MaxBatch: 100, 39 | Fetch: func(keys []int) ([]benchmark.User, []error) { 40 | users := make([]benchmark.User, len(keys)) 41 | errors := make([]error, len(keys)) 42 | 43 | for i, key := range keys { 44 | if key%100 == 1 { 45 | errors[i] = fmt.Errorf("user not found") 46 | } else { 47 | users[i] = benchmark.User{ID: strconv.Itoa(key), Name: "user " + strconv.Itoa(key)} 48 | } 49 | } 50 | return users, errors 51 | }, 52 | }) 53 | } 54 | 55 | func newYckao() yckaodataloader.DataLoader[int, benchmark.User, int] { 56 | return yckaodataloader.New[int, benchmark.User, int](context.Background(), func(_ context.Context, keys []int) []yckaodataloader.Result[benchmark.User] { 57 | results := make([]yckaodataloader.Result[benchmark.User], len(keys)) 58 | 59 | for i, key := range keys { 60 | if key%100 == 1 { 61 | results[i] = yckaodataloader.Result[benchmark.User]{Error: fmt.Errorf("user not found")} 62 | } else { 63 | results[i] = yckaodataloader.Result[benchmark.User]{Value: benchmark.User{ID: strconv.Itoa(key), Name: "user " + strconv.Itoa(key)}} 64 | } 65 | } 66 | return results 67 | }, 68 | yckaodataloader.WithMaxBatchSize[int, benchmark.User, int](100), 69 | yckaodataloader.WithBatchScheduleFn[int, benchmark.User, int](yckaodataloader.NewTimeWindowScheduler(500*time.Nanosecond)), 70 | ) 71 | } 72 | 73 | func newVikstrous() *dataloadgen.Loader[int, benchmark.User] { 74 | return dataloadgen.NewLoader(func(_ context.Context, keys []int) ([]benchmark.User, []error) { 75 | users := make([]benchmark.User, len(keys)) 76 | errors := make([]error, len(keys)) 77 | 78 
| for i, key := range keys { 79 | if key%100 == 1 { 80 | errors[i] = fmt.Errorf("user not found") 81 | } else { 82 | users[i] = benchmark.User{ID: strconv.Itoa(key), Name: "user " + strconv.Itoa(key)} 83 | } 84 | } 85 | return users, errors 86 | }, 87 | dataloadgen.WithBatchCapacity(100), 88 | dataloadgen.WithWait(500*time.Nanosecond), 89 | ) 90 | } 91 | 92 | func newVikstrousMapped() *dataloadgen.Loader[int, benchmark.User] { 93 | return dataloadgen.NewMappedLoader(func(_ context.Context, keys []int) (map[int]benchmark.User, error) { 94 | users := make(map[int]benchmark.User, len(keys)) 95 | errors := make(map[int]error, len(keys)) 96 | 97 | for _, key := range keys { 98 | if key%100 == 1 { 99 | errors[key] = fmt.Errorf("user not found") 100 | } else { 101 | users[key] = benchmark.User{ID: strconv.Itoa(key), Name: "user " + strconv.Itoa(key)} 102 | } 103 | } 104 | return users, dataloadgen.MappedFetchError[int](errors) 105 | }, 106 | dataloadgen.WithBatchCapacity(100), 107 | dataloadgen.WithWait(500*time.Nanosecond), 108 | ) 109 | } 110 | 111 | func BenchmarkAll(b *testing.B) { 112 | ctx := context.Background() 113 | 114 | b.Run("init", func(b *testing.B) { 115 | b.Run("dataloader", func(b *testing.B) { 116 | for i := 0; i < b.N; i++ { 117 | newDataloader() 118 | } 119 | }) 120 | b.Run("dataloaden", func(b *testing.B) { 121 | for i := 0; i < b.N; i++ { 122 | newDataloaden() 123 | } 124 | }) 125 | b.Run("yckao_dataloader", func(b *testing.B) { 126 | for i := 0; i < b.N; i++ { 127 | newYckao() 128 | } 129 | }) 130 | b.Run("dataloadgen", func(b *testing.B) { 131 | for i := 0; i < b.N; i++ { 132 | newVikstrous() 133 | } 134 | }) 135 | b.Run("dataloadgen_mapped", func(b *testing.B) { 136 | for i := 0; i < b.N; i++ { 137 | newVikstrousMapped() 138 | } 139 | }) 140 | }) 141 | 142 | b.Run("cached", func(b *testing.B) { 143 | b.Run("dataloader", func(b *testing.B) { 144 | for i := 0; i < b.N; i++ { 145 | dataloaderDL := newDataloader() 146 | thunks := make([]func() (benchmark.User, error), 100) 147 | for i := 0; i < 100; i++ { 148 | thunks[i] = dataloaderDL.Load(ctx, 1) 149 | } 150 | for i := 0; i < 100; i++ { 151 | _, _ = thunks[i]() 152 | } 153 | } 154 | }) 155 | b.Run("dataloaden", func(b *testing.B) { 156 | for i := 0; i < b.N; i++ { 157 | dataloadenDL := newDataloaden() 158 | thunks := make([]func() (benchmark.User, error), 100) 159 | for i := 0; i < 100; i++ { 160 | thunks[i] = dataloadenDL.LoadThunk(1) 161 | } 162 | 163 | for i := 0; i < 100; i++ { 164 | _, _ = thunks[i]() 165 | } 166 | } 167 | }) 168 | b.Run("yckao_dataloader", func(b *testing.B) { 169 | for i := 0; i < b.N; i++ { 170 | yckaoDL := newYckao() 171 | thunks := make([]*yckaodataloader.Thunk[benchmark.User], 100) 172 | for i := 0; i < 100; i++ { 173 | thunks[i] = yckaoDL.Load(ctx, 1) 174 | } 175 | 176 | for i := 0; i < 100; i++ { 177 | _, _ = thunks[i].Get(ctx) 178 | } 179 | } 180 | }) 181 | b.Run("dataloadgen", func(b *testing.B) { 182 | for i := 0; i < b.N; i++ { 183 | vikstrousDL := newVikstrous() 184 | thunks := make([]func() (benchmark.User, error), 100) 185 | for i := 0; i < 100; i++ { 186 | thunks[i] = vikstrousDL.LoadThunk(ctx, 1) 187 | } 188 | 189 | for i := 0; i < 100; i++ { 190 | _, _ = thunks[i]() 191 | } 192 | } 193 | }) 194 | b.Run("dataloadgen_mapped", func(b *testing.B) { 195 | for i := 0; i < b.N; i++ { 196 | vikstrousDL := newVikstrousMapped() 197 | thunks := make([]func() (benchmark.User, error), 100) 198 | for i := 0; i < 100; i++ { 199 | thunks[i] = vikstrousDL.LoadThunk(ctx, 1) 200 | } 201 | 202 | for i := 
0; i < 100; i++ { 203 | _, _ = thunks[i]() 204 | } 205 | } 206 | }) 207 | }) 208 | 209 | b.Run("unique keys", func(b *testing.B) { 210 | b.Run("dataloader", func(b *testing.B) { 211 | for i := 0; i < b.N; i++ { 212 | dataloaderDL := newDataloader() 213 | thunks := make([]func() (benchmark.User, error), 100) 214 | for i := 0; i < 100; i++ { 215 | thunks[i] = dataloaderDL.Load(ctx, i) 216 | } 217 | for i := 0; i < 100; i++ { 218 | _, _ = thunks[i]() 219 | } 220 | } 221 | }) 222 | b.Run("dataloaden", func(b *testing.B) { 223 | for i := 0; i < b.N; i++ { 224 | dataloadenDL := newDataloaden() 225 | thunks := make([]func() (benchmark.User, error), 100) 226 | for i := 0; i < 100; i++ { 227 | thunks[i] = dataloadenDL.LoadThunk(i) 228 | } 229 | 230 | for i := 0; i < 100; i++ { 231 | _, _ = thunks[i]() 232 | } 233 | } 234 | }) 235 | b.Run("yckao_dataloader", func(b *testing.B) { 236 | for i := 0; i < b.N; i++ { 237 | yckaoDL := newYckao() 238 | thunks := make([]*yckaodataloader.Thunk[benchmark.User], 100) 239 | for i := 0; i < 100; i++ { 240 | thunks[i] = yckaoDL.Load(ctx, i) 241 | } 242 | 243 | for i := 0; i < 100; i++ { 244 | _, _ = thunks[i].Get(ctx) 245 | } 246 | } 247 | }) 248 | b.Run("dataloadgen", func(b *testing.B) { 249 | for i := 0; i < b.N; i++ { 250 | vikstrousDL := newVikstrous() 251 | thunks := make([]func() (benchmark.User, error), 100) 252 | for i := 0; i < 100; i++ { 253 | thunks[i] = vikstrousDL.LoadThunk(ctx, i) 254 | } 255 | 256 | for i := 0; i < 100; i++ { 257 | _, _ = thunks[i]() 258 | } 259 | } 260 | }) 261 | b.Run("dataloadgen_mapped", func(b *testing.B) { 262 | for i := 0; i < b.N; i++ { 263 | vikstrousDL := newVikstrousMapped() 264 | thunks := make([]func() (benchmark.User, error), 100) 265 | for i := 0; i < 100; i++ { 266 | thunks[i] = vikstrousDL.LoadThunk(ctx, i) 267 | } 268 | 269 | for i := 0; i < 100; i++ { 270 | _, _ = thunks[i]() 271 | } 272 | } 273 | }) 274 | }) 275 | b.Run("10 concurrently", func(b *testing.B) { 276 | b.Run("dataloader", func(b *testing.B) { 277 | for n := 0; n < b.N*10; n++ { 278 | dataloaderDL := newDataloader() 279 | results := make([]benchmark.User, 10) 280 | var wg sync.WaitGroup 281 | for i := 0; i < 10; i++ { 282 | wg.Add(1) 283 | go func(i int) { 284 | for j := 0; j < b.N; j++ { 285 | u, _ := dataloaderDL.Load(ctx, i)() 286 | results[i] = u 287 | } 288 | wg.Done() 289 | }(i) 290 | } 291 | wg.Wait() 292 | } 293 | }) 294 | b.Run("dataloaden", func(b *testing.B) { 295 | for n := 0; n < b.N*10; n++ { 296 | dataloadenDL := newDataloaden() 297 | results := make([]benchmark.User, 10) 298 | var wg sync.WaitGroup 299 | for i := 0; i < 10; i++ { 300 | wg.Add(1) 301 | go func(i int) { 302 | for j := 0; j < b.N; j++ { 303 | u, _ := dataloadenDL.Load(i) 304 | results[i] = u 305 | } 306 | wg.Done() 307 | }(i) 308 | } 309 | wg.Wait() 310 | } 311 | }) 312 | b.Run("yckao_dataloader", func(b *testing.B) { 313 | for n := 0; n < b.N*10; n++ { 314 | yckaoDL := newYckao() 315 | results := make([]benchmark.User, 10) 316 | var wg sync.WaitGroup 317 | for i := 0; i < 10; i++ { 318 | wg.Add(1) 319 | go func(i int) { 320 | for j := 0; j < b.N; j++ { 321 | u, _ := yckaoDL.Load(ctx, i).Get(ctx) 322 | results[i] = u 323 | } 324 | wg.Done() 325 | }(i) 326 | } 327 | wg.Wait() 328 | } 329 | }) 330 | b.Run("dataloadgen", func(b *testing.B) { 331 | for n := 0; n < b.N*10; n++ { 332 | vikstrousDL := newVikstrous() 333 | results := make([]benchmark.User, 10) 334 | var wg sync.WaitGroup 335 | for i := 0; i < 10; i++ { 336 | wg.Add(1) 337 | go func(i int) { 338 | for j := 0; j < 
b.N; j++ { 339 | u, _ := vikstrousDL.Load(ctx, i) 340 | results[i] = u 341 | } 342 | wg.Done() 343 | }(i) 344 | } 345 | wg.Wait() 346 | } 347 | }) 348 | b.Run("dataloadgen_mapped", func(b *testing.B) { 349 | for n := 0; n < b.N*10; n++ { 350 | vikstrousDL := newVikstrousMapped() 351 | results := make([]benchmark.User, 10) 352 | var wg sync.WaitGroup 353 | for i := 0; i < 10; i++ { 354 | wg.Add(1) 355 | go func(i int) { 356 | for j := 0; j < b.N; j++ { 357 | u, _ := vikstrousDL.Load(ctx, i) 358 | results[i] = u 359 | } 360 | wg.Done() 361 | }(i) 362 | } 363 | wg.Wait() 364 | } 365 | }) 366 | }) 367 | 368 | b.Run("all in one request", func(b *testing.B) { 369 | b.Run("dataloader", func(b *testing.B) { 370 | var keys []int 371 | for n := 0; n < 10000; n++ { 372 | keys = append(keys, n) 373 | } 374 | b.ResetTimer() 375 | for i := 0; i < b.N; i++ { 376 | dataloaderDL := newDataloader() 377 | dataloaderDL.LoadMany(ctx, keys)() 378 | } 379 | }) 380 | b.Run("dataloaden", func(b *testing.B) { 381 | var keys []int 382 | for n := 0; n < 10000; n++ { 383 | keys = append(keys, n) 384 | } 385 | b.ResetTimer() 386 | for i := 0; i < b.N; i++ { 387 | dataloadenDL := newDataloaden() 388 | dataloadenDL.LoadAll(keys) 389 | } 390 | }) 391 | b.Run("yckao_dataloader", func(b *testing.B) { 392 | var keys []int 393 | for n := 0; n < 10000; n++ { 394 | keys = append(keys, n) 395 | } 396 | b.ResetTimer() 397 | for i := 0; i < b.N; i++ { 398 | yckaoDL := newYckao() 399 | thunks := yckaoDL.LoadMany(ctx, keys) 400 | for _, t := range thunks { 401 | _, _ = t.Get(ctx) 402 | } 403 | } 404 | }) 405 | b.Run("dataloadgen", func(b *testing.B) { 406 | var keys []int 407 | for n := 0; n < 10000; n++ { 408 | keys = append(keys, n) 409 | } 410 | b.ResetTimer() 411 | for i := 0; i < b.N; i++ { 412 | vikstrousDL := newVikstrous() 413 | _, _ = vikstrousDL.LoadAll(ctx, keys) 414 | } 415 | }) 416 | b.Run("dataloadgen_mapped", func(b *testing.B) { 417 | var keys []int 418 | for n := 0; n < 10000; n++ { 419 | keys = append(keys, n) 420 | } 421 | b.ResetTimer() 422 | for i := 0; i < b.N; i++ { 423 | vikstrousDL := newVikstrousMapped() 424 | _, _ = vikstrousDL.LoadAll(ctx, keys) 425 | } 426 | }) 427 | }) 428 | } 429 | -------------------------------------------------------------------------------- /benchmark/go.mod: -------------------------------------------------------------------------------- 1 | module github.com/vikstrous/dataloadgen/benchmark 2 | 3 | go 1.20 4 | 5 | require ( 6 | github.com/graph-gophers/dataloader/v7 v7.1.0 7 | github.com/vikstrous/dataloadgen v0.0.6 8 | ) 9 | 10 | require ( 11 | github.com/pkg/errors v0.8.1 // indirect 12 | github.com/vektah/dataloaden v0.3.0 // indirect 13 | github.com/yckao/go-dataloader v1.0.1 // indirect 14 | go.opentelemetry.io/otel v1.11.1 // indirect 15 | go.opentelemetry.io/otel/trace v1.11.1 // indirect 16 | golang.org/x/tools v0.0.0-20190515012406-7d7faa4812bd // indirect 17 | ) 18 | 19 | replace github.com/vikstrous/dataloadgen => ../ -------------------------------------------------------------------------------- /benchmark/go.sum: -------------------------------------------------------------------------------- 1 | github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= 2 | github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= 3 | github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38= 4 | github.com/graph-gophers/dataloader v5.0.0+incompatible h1:R+yjsbrNq1Mo3aPG+Z/EKYrXrXXUNJHOgbRt+U6jOug= 5 | 
github.com/graph-gophers/dataloader/v7 v7.1.0 h1:Wn8HGF/q7MNXcvfaBnLEPEFJttVHR8zuEqP1obys/oc= 6 | github.com/graph-gophers/dataloader/v7 v7.1.0/go.mod h1:1bKE0Dm6OUcTB/OAuYVOZctgIz7Q3d0XrYtlIzTgg6Q= 7 | github.com/opentracing/opentracing-go v1.2.0 h1:uEJPy/1a5RIPAJ0Ov+OIO8OxWu77jEv+1B0VhjKrZUs= 8 | github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I= 9 | github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= 10 | github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= 11 | github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= 12 | github.com/stretchr/testify v1.2.1/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= 13 | github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk= 14 | github.com/vektah/dataloaden v0.3.0 h1:ZfVN2QD6swgvp+tDqdH/OIT/wu3Dhu0cus0k5gIZS84= 15 | github.com/vektah/dataloaden v0.3.0/go.mod h1:/HUdMve7rvxZma+2ZELQeNh88+003LL7Pf/CZ089j8U= 16 | github.com/vikstrous/dataloadgen v0.0.4 h1:ERTGz0+aHmIr7bFlyS6fS6RlAmaoii78SBKXE3YR/yA= 17 | github.com/vikstrous/dataloadgen v0.0.4/go.mod h1:MwXoRc0i/jsYl2/ispv0o/rflhvFJHF4vwPVoO29JyU= 18 | github.com/vikstrous/dataloadgen v0.0.6/go.mod h1:8vuQVpBH0ODbMKAPUdCAPcOGezoTIhgAjgex51t4vbg= 19 | github.com/yckao/go-dataloader v1.0.1 h1:1JsaHrz1WG+tyxgLPmM5vcITERdKaLVtzSvjibPj3uc= 20 | github.com/yckao/go-dataloader v1.0.1/go.mod h1:4poceB9QCsDQ82SNpG9GMmp+Rt6blMFMkkI3h81Iqn8= 21 | go.opentelemetry.io/otel v1.11.1 h1:4WLLAmcfkmDk2ukNXJyq3/kiz/3UzCaYq6PskJsaou4= 22 | go.opentelemetry.io/otel v1.11.1/go.mod h1:1nNhXBbWSD0nsL38H6btgnFN2k4i0sNLHNNMZMSbUGE= 23 | go.opentelemetry.io/otel/trace v1.11.1 h1:ofxdnzsNrGBYXbP7t7zpUK281+go5rF7dvdIZXF8gdQ= 24 | go.opentelemetry.io/otel/trace v1.11.1/go.mod h1:f/Q9G7vzk5u91PhbmKbg1Qn0rzH1LJ4vbPHFGkTPtOk= 25 | golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= 26 | golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= 27 | golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 28 | golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 29 | golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= 30 | golang.org/x/tools v0.0.0-20190515012406-7d7faa4812bd h1:oMEQDWVXVNpceQoVd1JN3CQ7LYJJzs5qWqZIUcxXHHw= 31 | golang.org/x/tools v0.0.0-20190515012406-7d7faa4812bd/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= 32 | gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= 33 | -------------------------------------------------------------------------------- /benchmark/tools.go: -------------------------------------------------------------------------------- 1 | //go:build tools 2 | 3 | package benchmark 4 | 5 | // To make sure go generate can find the executable 6 | import _ "github.com/vektah/dataloaden" 7 | -------------------------------------------------------------------------------- /benchmark/user.go: -------------------------------------------------------------------------------- 1 | package benchmark 2 | 3 | // User is some kind of database backed model 4 | // 5 | //go:generate go run github.com/vektah/dataloaden UserLoader int User 6 | type User struct { 7 | ID string 8 | Name string 9 | } 10 | -------------------------------------------------------------------------------- 
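The generated file that follows, `userloader_gen.go`, is produced by `dataloaden` from the `go:generate` directive in `user.go` above; `tools.go` exists only to pin the generator as a module dependency. A sketch of regenerating it, assuming a checkout of this repository:

```sh
cd benchmark
go generate ./... # runs: go run github.com/vektah/dataloaden UserLoader int User
```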
/benchmark/userloader_gen.go: -------------------------------------------------------------------------------- 1 | // Code generated by github.com/vektah/dataloaden, DO NOT EDIT. 2 | 3 | package benchmark 4 | 5 | import ( 6 | "sync" 7 | "time" 8 | ) 9 | 10 | // UserLoaderConfig captures the config to create a new UserLoader 11 | type UserLoaderConfig struct { 12 | // Fetch is a method that provides the data for the loader 13 | Fetch func(keys []int) ([]User, []error) 14 | 15 | // Wait is how long to wait before sending a batch 16 | Wait time.Duration 17 | 18 | // MaxBatch will limit the maximum number of keys to send in one batch, 0 = no limit 19 | MaxBatch int 20 | } 21 | 22 | // NewUserLoader creates a new UserLoader given a fetch, wait, and maxBatch 23 | func NewUserLoader(config UserLoaderConfig) *UserLoader { 24 | return &UserLoader{ 25 | fetch: config.Fetch, 26 | wait: config.Wait, 27 | maxBatch: config.MaxBatch, 28 | } 29 | } 30 | 31 | // UserLoader batches and caches requests 32 | type UserLoader struct { 33 | // this method provides the data for the loader 34 | fetch func(keys []int) ([]User, []error) 35 | 36 | // how long to wait before sending a batch 37 | wait time.Duration 38 | 39 | // this will limit the maximum number of keys to send in one batch, 0 = no limit 40 | maxBatch int 41 | 42 | // INTERNAL 43 | 44 | // lazily created cache 45 | cache map[int]User 46 | 47 | // the current batch. keys will continue to be collected until timeout is hit, 48 | // then everything will be sent to the fetch method and out to the listeners 49 | batch *userLoaderBatch 50 | 51 | // mutex to prevent races 52 | mu sync.Mutex 53 | } 54 | 55 | type userLoaderBatch struct { 56 | keys []int 57 | data []User 58 | error []error 59 | closing bool 60 | done chan struct{} 61 | } 62 | 63 | // Load a User by key, batching and caching will be applied automatically 64 | func (l *UserLoader) Load(key int) (User, error) { 65 | return l.LoadThunk(key)() 66 | } 67 | 68 | // LoadThunk returns a function that when called will block waiting for a User. 69 | // This method should be used if you want one goroutine to make requests to many 70 | // different data loaders without blocking until the thunk is called. 71 | func (l *UserLoader) LoadThunk(key int) func() (User, error) { 72 | l.mu.Lock() 73 | if it, ok := l.cache[key]; ok { 74 | l.mu.Unlock() 75 | return func() (User, error) { 76 | return it, nil 77 | } 78 | } 79 | if l.batch == nil { 80 | l.batch = &userLoaderBatch{done: make(chan struct{})} 81 | } 82 | batch := l.batch 83 | pos := batch.keyIndex(l, key) 84 | l.mu.Unlock() 85 | 86 | return func() (User, error) { 87 | <-batch.done 88 | 89 | var data User 90 | if pos < len(batch.data) { 91 | data = batch.data[pos] 92 | } 93 | 94 | var err error 95 | // it's convenient to be able to return a single error for everything 96 | if len(batch.error) == 1 { 97 | err = batch.error[0] 98 | } else if batch.error != nil { 99 | err = batch.error[pos] 100 | } 101 | 102 | if err == nil { 103 | l.mu.Lock() 104 | l.unsafeSet(key, data) 105 | l.mu.Unlock() 106 | } 107 | 108 | return data, err 109 | } 110 | } 111 | 112 | // LoadAll fetches many keys at once. It will be broken into appropriately sized 113 | // sub-batches depending on how the loader is configured 114 | func (l *UserLoader) LoadAll(keys []int) ([]User, []error) { 115 | results := make([]func() (User, error), len(keys)) 116 | 117 | for i, key := range keys { 118 | results[i] = l.LoadThunk(key) 119 | } 120 | 121 | users := make([]User, len(keys)) 122 | errors := make([]error, len(keys)) 123 | for i, thunk := range results { 124 | users[i], errors[i] = thunk() 125 | } 126 | return users, errors 127 | } 128 | 129 | // LoadAllThunk returns a function that when called will block waiting for Users. 130 | // This method should be used if you want one goroutine to make requests to many 131 | // different data loaders without blocking until the thunk is called. 132 | func (l *UserLoader) LoadAllThunk(keys []int) func() ([]User, []error) { 133 | results := make([]func() (User, error), len(keys)) 134 | for i, key := range keys { 135 | results[i] = l.LoadThunk(key) 136 | } 137 | return func() ([]User, []error) { 138 | users := make([]User, len(keys)) 139 | errors := make([]error, len(keys)) 140 | for i, thunk := range results { 141 | users[i], errors[i] = thunk() 142 | } 143 | return users, errors 144 | } 145 | } 146 | 147 | // Prime the cache with the provided key and value. If the key already exists, no change is made 148 | // and false is returned. 149 | // (To forcefully prime the cache, clear the key first with loader.clear(key).prime(key, value).) 150 | func (l *UserLoader) Prime(key int, value User) bool { 151 | l.mu.Lock() 152 | var found bool 153 | if _, found = l.cache[key]; !found { 154 | l.unsafeSet(key, value) 155 | } 156 | l.mu.Unlock() 157 | return !found 158 | } 159 | 160 | // Clear the value at key from the cache, if it exists 161 | func (l *UserLoader) Clear(key int) { 162 | l.mu.Lock() 163 | delete(l.cache, key) 164 | l.mu.Unlock() 165 | } 166 | 167 | func (l *UserLoader) unsafeSet(key int, value User) { 168 | if l.cache == nil { 169 | l.cache = map[int]User{} 170 | } 171 | l.cache[key] = value 172 | } 173 | 174 | // keyIndex will return the location of the key in the batch, if it's not found 175 | // it will add the key to the batch 176 | func (b *userLoaderBatch) keyIndex(l *UserLoader, key int) int { 177 | for i, existingKey := range b.keys { 178 | if key == existingKey { 179 | return i 180 | } 181 | } 182 | 183 | pos := len(b.keys) 184 | b.keys = append(b.keys, key) 185 | if pos == 0 { 186 | go b.startTimer(l) 187 | } 188 | 189 | if l.maxBatch != 0 && pos >= l.maxBatch-1 { 190 | if !b.closing { 191 | b.closing = true 192 | l.batch = nil 193 | go b.end(l) 194 | } 195 | } 196 | 197 | return pos 198 | } 199 | 200 | func (b *userLoaderBatch) startTimer(l *UserLoader) { 201 | time.Sleep(l.wait) 202 | l.mu.Lock() 203 | 204 | // we must have hit a batch limit and are already finalizing this batch 205 | if b.closing { 206 | l.mu.Unlock() 207 | return 208 | } 209 | 210 | l.batch = nil 211 | l.mu.Unlock() 212 | 213 | b.end(l) 214 | } 215 | 216 | func (b *userLoaderBatch) end(l *UserLoader) { 217 | b.data, b.error = l.fetch(b.keys) 218 | close(b.done) 219 | } 220 | -------------------------------------------------------------------------------- /cached.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vikstrous/dataloadgen/f6e9a8a533f0275fa23946b74136dfe65faf6676/cached.png -------------------------------------------------------------------------------- /dataloaden_test.go:
-------------------------------------------------------------------------------- 1 | package dataloadgen_test 2 | 3 | import ( 4 | "context" 5 | "fmt" 6 | "strings" 7 | "sync" 8 | "testing" 9 | "time" 10 | 11 | "github.com/vikstrous/dataloadgen" 12 | ) 13 | 14 | type benchmarkUser struct { 15 | Name string 16 | ID string 17 | } 18 | 19 | func TestUserLoader(t *testing.T) { 20 | ctx := context.Background() 21 | var fetches [][]string 22 | var mu sync.Mutex 23 | dl := dataloadgen.NewLoader(func(_ context.Context, keys []string) ([]*benchmarkUser, []error) { 24 | mu.Lock() 25 | fetches = append(fetches, keys) 26 | mu.Unlock() 27 | 28 | users := make([]*benchmarkUser, len(keys)) 29 | errors := make([]error, len(keys)) 30 | 31 | for i, key := range keys { 32 | if strings.HasPrefix(key, "F") { 33 | return nil, []error{fmt.Errorf("failed all fetches")} 34 | } 35 | if strings.HasPrefix(key, "E") { 36 | errors[i] = fmt.Errorf("user not found") 37 | } else { 38 | users[i] = &benchmarkUser{ID: key, Name: "user " + key} 39 | } 40 | } 41 | return users, errors 42 | }, 43 | dataloadgen.WithBatchCapacity(5), 44 | dataloadgen.WithWait(10*time.Millisecond), 45 | ) 46 | 47 | t.Run("fetch concurrent data", func(t *testing.T) { 48 | t.Run("load user successfully", func(t *testing.T) { 49 | t.Parallel() 50 | u, err := dl.Load(ctx, "U1") 51 | if err != nil { 52 | t.Fatal(err) 53 | } 54 | if u.ID != "U1" { 55 | t.Fatal("not equal") 56 | } 57 | }) 58 | 59 | t.Run("load failed user", func(t *testing.T) { 60 | t.Parallel() 61 | u, err := dl.Load(ctx, "E1") 62 | if err == nil { 63 | t.Fatal("error expected") 64 | } 65 | if u != nil { 66 | t.Fatal("not nil", u) 67 | } 68 | }) 69 | 70 | t.Run("load many users", func(t *testing.T) { 71 | t.Parallel() 72 | u, err := dl.LoadAll(ctx, []string{"U2", "E2", "E3", "U4"}) 73 | if u[0].Name != "user U2" { 74 | t.Fatal("not equal") 75 | } 76 | if u[3].Name != "user U4" { 77 | t.Fatal("not equal") 78 | } 79 | if err == nil { 80 | t.Fatal("error expected") 81 | } 82 | if err.(dataloadgen.ErrorSlice)[1] == nil { 83 | t.Fatal("error expected") 84 | } 85 | if err.(dataloadgen.ErrorSlice)[2] == nil { 86 | t.Fatal("error expected") 87 | } 88 | }) 89 | 90 | t.Run("load thunk", func(t *testing.T) { 91 | t.Parallel() 92 | thunk1 := dl.LoadThunk(ctx, "U5") 93 | thunk2 := dl.LoadThunk(ctx, "E5") 94 | 95 | u1, err1 := thunk1() 96 | if err1 != nil { 97 | t.Fatal(err1) 98 | } 99 | if "user U5" != u1.Name { 100 | t.Fatal("not equal") 101 | } 102 | 103 | u2, err2 := thunk2() 104 | if err2 == nil { 105 | t.Fatal("error expected") 106 | } 107 | if u2 != nil { 108 | t.Fatal("not nil", u2) 109 | } 110 | }) 111 | }) 112 | 113 | t.Run("it sent two batches", func(t *testing.T) { 114 | mu.Lock() 115 | defer mu.Unlock() 116 | 117 | if len(fetches) != 2 { 118 | t.Fatal("wrong length", fetches) 119 | } 120 | if len(fetches[0]) != 5 { 121 | t.Error("wrong number of keys in first fetch", fetches[0]) 122 | } 123 | if len(fetches[1]) != 3 { 124 | t.Error("wrong number of keys in second fetch", fetches[1]) 125 | } 126 | }) 127 | 128 | t.Run("fetch more", func(t *testing.T) { 129 | t.Run("previously cached", func(t *testing.T) { 130 | t.Parallel() 131 | u, err := dl.Load(ctx, "U1") 132 | if err != nil { 133 | t.Fatal(err) 134 | } 135 | if u.ID != "U1" { 136 | t.Fatal("not equal") 137 | } 138 | }) 139 | 140 | t.Run("load many users", func(t *testing.T) { 141 | t.Parallel() 142 | u, err := dl.LoadAll(ctx, []string{"U2", "U4"}) 143 | if err != nil { 144 | t.Fatal(err) 145 | } 146 | if u[0].Name != "user U2" { 147 | t.Fatal("not equal") 148 | } 149 | if u[1].Name != "user U4" { 150 | t.Fatal("not equal") 151 | } 152 | }) 153 | }) 154 | 155 | t.Run("no round trips", func(t *testing.T) { 156 | mu.Lock() 157 | defer mu.Unlock() 158 | 159 | if len(fetches) != 2 { 160 | t.Fatal("wrong length", fetches) 161 | } 162 | }) 163 | 164 | t.Run("fetch partial", func(t *testing.T) { 165 | t.Run("cached errors are returned without refetching", func(t *testing.T) { 166 | t.Parallel() 167 | u, err := dl.Load(ctx, "E2") 168 | if u != nil { 169 | t.Fatal("not nil", u) 170 | } 171 | if err == nil { 172 | t.Fatal("error expected") 173 | } 174 | }) 175 | 176 | t.Run("load all", func(t *testing.T) { 177 | t.Parallel() 178 | u, err := dl.LoadAll(ctx, []string{"U1", "U4", "E1", "U9", "U5"}) 179 | if u[0].ID != "U1" { 180 | t.Fatal("not equal") 181 | } 182 | if u[1].ID != "U4" { 183 | t.Fatal("not equal") 184 | } 185 | if err.(dataloadgen.ErrorSlice)[2] == nil { 186 | t.Fatal("error expected") 187 | } 188 | if u[3].ID != "U9" { 189 | t.Fatal("not equal") 190 | } 191 | if u[4].ID != "U5" { 192 | t.Fatal("not equal") 193 | } 194 | }) 195 | }) 196 | 197 | t.Run("one partial trip", func(t *testing.T) { 198 | mu.Lock() 199 | defer mu.Unlock() 200 | 201 | if len(fetches) != 3 { 202 | t.Fatal("wrong length", fetches) 203 | } 204 | // U9 only because E1 and E2 are already cached as failed and only U9 is new 205 | if len(fetches[2]) != 1 { 206 | t.Fatal("wrong length", fetches[2]) 207 | } 208 | }) 209 | 210 | t.Run("primed reads don't hit the fetcher", func(t *testing.T) { 211 | dl.Prime("U99", &benchmarkUser{ID: "U99", Name: "Primed user"}) 212 | u, err := dl.Load(ctx, "U99") 213 | if err != nil { 214 | t.Fatal(err) 215 | } 216 | if "Primed user" != u.Name { 217 | t.Fatal("not equal") 218 | } 219 | 220 | if len(fetches) != 3 { 221 | t.Fatal("wrong length", fetches) 222 | } 223 | }) 224 | 225 | t.Run("priming in a loop is safe", func(t *testing.T) { 226 | users := []benchmarkUser{ 227 | {ID: "Alpha", Name: "Alpha"}, 228 | {ID: "Omega", Name: "Omega"}, 229 | } 230 | for _, user := range users { 231 | user := user 232 | dl.Prime(user.ID, &user) 233 | } 234 | 235 | u, err := dl.Load(ctx, "Alpha") 236 | if err != nil { 237 | t.Fatal(err) 238 | } 239 | if "Alpha" != u.Name { 240 | t.Fatal("not equal") 241 | } 242 | 243 | u, err = dl.Load(ctx, "Omega") 244 | if err != nil { 245 | t.Fatal(err) 246 | } 247 | if "Omega" != u.Name { 248 | t.Fatal("not equal") 249 | } 250 | 251 | if len(fetches) != 3 { 252 | t.Fatal("wrong length", fetches) 253 | } 254 | }) 255 | 256 | t.Run("cleared results will go back to the fetcher", func(t *testing.T) { 257 | dl.Clear("U99") 258 | u, err := dl.Load(ctx, "U99") 259 | if err != nil { 260 | t.Fatal(err) 261 | } 262 | if "user U99" != u.Name { 263 | t.Fatal("not equal") 264 | } 265 | 266 | if len(fetches) != 4 { 267 | t.Fatal("wrong length", fetches) 268 | } 269 | }) 270 | 271 | t.Run("load all thunk", func(t *testing.T) { 272 | thunk1 := dl.LoadAllThunk(ctx, []string{"U5", "U6"}) 273 | thunk2 := dl.LoadAllThunk(ctx, []string{"U6", "E6"}) 274 | 275 | users1, err1 := thunk1() 276 | if len(fetches) != 5 { 277 | t.Fatal("wrong length", fetches) 278 | } 279 | 280 | if err1 != nil { 281 | t.Fatal(err1) 282 | } 283 | if "user U5" != users1[0].Name { 284 | t.Fatal("not equal") 285 | } 286 | if "user U6" != users1[1].Name { 287 | t.Fatal("not equal") 288 | } 289 | 290 | users2, err2 := thunk2() 291 | // already cached 292 | if len(fetches) != 5 { 293 |
t.Fatal("wrong length", fetches) 294 | } 295 | 296 | if err2.(dataloadgen.ErrorSlice)[0] != nil { 297 | t.Fatal(err2.(dataloadgen.ErrorSlice)[0]) 298 | } 299 | if err2.(dataloadgen.ErrorSlice)[1] == nil { 300 | t.Fatal("error expected") 301 | } 302 | if "user U6" != users2[0].Name { 303 | t.Fatal("not equal") 304 | } 305 | }) 306 | 307 | t.Run("single error return value works", func(t *testing.T) { 308 | user, err := dl.Load(ctx, "F1") 309 | if err == nil { 310 | t.Fatal("error expected") 311 | } 312 | if "failed all fetches" != err.Error() { 313 | t.Fatal("not equal") 314 | } 315 | if user != nil { 316 | t.Fatal("not empty", user) 317 | } 318 | if len(fetches) != 6 { 319 | t.Fatal("wrong length", fetches) 320 | } 321 | }) 322 | 323 | t.Run("LoadAll does a single fetch", func(t *testing.T) { 324 | dl.Clear("U1") 325 | dl.Clear("F1") 326 | users, errs := dl.LoadAll(ctx, []string{"F1", "U1"}) 327 | if len(fetches) != 7 { 328 | t.Fatal("wrong length", fetches) 329 | } 330 | for _, user := range users { 331 | if user != nil { 332 | t.Fatal("not empty", user) 333 | } 334 | } 335 | if len(errs.(dataloadgen.ErrorSlice)) != 2 { 336 | t.Fatal("wrong length", errs) 337 | } 338 | if errs.(dataloadgen.ErrorSlice)[0] == nil { 339 | t.Fatal("error expected") 340 | } 341 | if "failed all fetches" != errs.(dataloadgen.ErrorSlice)[0].Error() { 342 | t.Fatal("not equal") 343 | } 344 | if errs.(dataloadgen.ErrorSlice)[1] == nil { 345 | t.Fatal("error expected") 346 | } 347 | if "failed all fetches" != errs.(dataloadgen.ErrorSlice)[1].Error() { 348 | t.Fatal("not equal") 349 | } 350 | }) 351 | } 352 | -------------------------------------------------------------------------------- /dataloader_test.go: -------------------------------------------------------------------------------- 1 | package dataloadgen_test 2 | 3 | import ( 4 | "context" 5 | "errors" 6 | "fmt" 7 | "reflect" 8 | "strconv" 9 | "sync" 10 | "testing" 11 | 12 | "github.com/vikstrous/dataloadgen" 13 | ) 14 | 15 | // copied and adapted from github.com/graph-gophers/dataloader 16 | func BenchmarkLoaderFromDataloader(b *testing.B) { 17 | a := &Avg{} 18 | ctx := context.Background() 19 | dl := dataloadgen.NewLoader(func(_ context.Context, keys []string) (results []string, errs []error) { 20 | a.Add(len(keys)) 21 | results = make([]string, 0, len(keys)) 22 | results = append(results, keys...) 
23 | return results, nil 24 | }) 25 | b.ResetTimer() 26 | for i := 0; i < b.N; i++ { 27 | dl.LoadThunk(ctx, strconv.Itoa(i)) 28 | } 29 | b.Logf("avg: %f", a.Avg()) 30 | } 31 | 32 | type Avg struct { 33 | total float64 34 | length float64 35 | lock sync.RWMutex 36 | } 37 | 38 | func (a *Avg) Add(v int) { 39 | a.lock.Lock() 40 | a.total += float64(v) 41 | a.length++ 42 | a.lock.Unlock() 43 | } 44 | 45 | func (a *Avg) Avg() float64 { 46 | a.lock.RLock() 47 | defer a.lock.RUnlock() 48 | if a.total == 0 { 49 | return 0 50 | } else if a.length == 0 { 51 | return 0 52 | } 53 | return a.total / a.length 54 | } 55 | 56 | func TestLoader(t *testing.T) { 57 | ctx := context.Background() 58 | t.Run("test Load method", func(t *testing.T) { 59 | t.Parallel() 60 | identityLoader, _ := IDLoader(0) 61 | value, err := identityLoader.Load(ctx, "1") 62 | if err != nil { 63 | t.Error(err.Error()) 64 | } 65 | if value != "1" { 66 | t.Error("load didn't return the right value") 67 | } 68 | }) 69 | 70 | t.Run("test thunk does not contain race conditions", func(t *testing.T) { 71 | t.Parallel() 72 | identityLoader, _ := IDLoader(0) 73 | future := identityLoader.LoadThunk(ctx, "1") 74 | go future() 75 | go future() 76 | }) 77 | 78 | t.Run("test Load Method Panic Safety", func(t *testing.T) { 79 | t.Skip("not supported yet") 80 | t.Parallel() 81 | defer func() { 82 | r := recover() 83 | if r != nil { 84 | t.Error("Panic Loader's panic should have been handled") 85 | } 86 | }() 87 | panicLoader, _ := PanicLoader(0) 88 | _, err := panicLoader.Load(ctx, "1") 89 | if err == nil || err.Error() != "Panic received in batch function: Programming error" { 90 | t.Error("Panic was not propagated as an error.") 91 | } 92 | }) 93 | 94 | t.Run("test Load Method Panic Safety in multiple keys", func(t *testing.T) { 95 | t.Skip("not supported yet") 96 | t.Parallel() 97 | defer func() { 98 | r := recover() 99 | if r != nil { 100 | t.Error("Panic Loader's panic should have been handled") 101 | } 102 | }() 103 | panicLoader, _ := PanicLoader(0) 104 | futures := []func() (string, error){} 105 | for i := 0; i < 3; i++ { 106 | futures = append(futures, panicLoader.LoadThunk(ctx, strconv.Itoa(i))) 107 | } 108 | for _, f := range futures { 109 | _, err := f() 110 | if err == nil || err.Error() != "Panic received in batch function: Programming error" { 111 | t.Error("Panic was not propagated as an error.") 112 | } 113 | } 114 | }) 115 | 116 | t.Run("test LoadAll returns errors", func(t *testing.T) { 117 | t.Parallel() 118 | errorLoader, _ := ErrorLoader(0) 119 | _, err := errorLoader.LoadAll(ctx, []string{"1", "2", "3"}) 120 | if len(err.(dataloadgen.ErrorSlice)) != 3 { 121 | t.Error("LoadAll didn't return right number of errors") 122 | } 123 | }) 124 | 125 | t.Run("test LoadAll returns len(errors) == len(keys)", func(t *testing.T) { 126 | t.Parallel() 127 | loader, _ := OneErrorLoader(3) 128 | _, errs := loader.LoadAll(ctx, []string{"1", "2", "3"}) 129 | if len(errs.(dataloadgen.ErrorSlice)) != 3 { 130 | t.Errorf("LoadAll didn't return right number of errors (should match size of input)") 131 | } 132 | 133 | var errCount int = 0 134 | var nilCount int = 0 135 | for _, err := range errs.(dataloadgen.ErrorSlice) { 136 | if err == nil { 137 | nilCount++ 138 | } else { 139 | errCount++ 140 | } 141 | } 142 | if errCount != 1 { 143 | t.Error("Expected an error on only one of the items loaded") 144 | } 145 | 146 | if nilCount != 2 { 147 | t.Error("Expected second and third errors to be nil") 148 | } 149 | }) 150 | 151 | t.Run("test LoadAll returns nil []error when no errors occurred", func(t *testing.T) { 152 | t.Parallel() 153 | loader, _ := IDLoader(0) 154 | _, errs := loader.LoadAll(ctx, []string{"1", "2", "3"}) 155 | if errs != nil { 156 | t.Errorf("Expected LoadAll to return a nil error slice when no errors occurred") 157 | } 158 | }) 159 | 160 | t.Run("test thunkmany does not contain race conditions", func(t *testing.T) { 161 | t.Parallel() 162 | identityLoader, _ := IDLoader(0) 163 | future := identityLoader.LoadAllThunk(ctx, []string{"1", "2", "3"}) 164 | go future() 165 | go future() 166 | }) 167 | 168 | t.Run("test Load Many Method Panic Safety", func(t *testing.T) { 169 | t.Skip("not supported yet") 170 | t.Parallel() 171 | defer func() { 172 | r := recover() 173 | if r != nil { 174 | t.Error("Panic Loader's panic should have been handled") 175 | } 176 | }() 177 | panicLoader, _ := PanicLoader(0) 178 | _, err := panicLoader.LoadAll(ctx, []string{"1"}) 179 | if err == nil || err.Error() != "Panic received in batch function: Programming error" { 180 | t.Error("Panic was not propagated as an error.") 181 | } 182 | }) 183 | 184 | t.Run("test LoadAll method", func(t *testing.T) { 185 | t.Parallel() 186 | identityLoader, _ := IDLoader(0) 187 | results, _ := identityLoader.LoadAll(ctx, []string{"1", "2", "3"}) 188 | if results[0] != "1" || results[1] != "2" || results[2] != "3" { 189 | t.Error("LoadAll didn't return the right value") 190 | } 191 | }) 192 | 193 | t.Run("batches many requests", func(t *testing.T) { 194 | t.Parallel() 195 | identityLoader, loadCalls := IDLoader(0) 196 | future1 := identityLoader.LoadThunk(ctx, "1") 197 | future2 := identityLoader.LoadThunk(ctx, "2") 198 | 199 | _, err := future1() 200 | if err != nil { 201 | t.Error(err.Error()) 202 | } 203 | _, err = future2() 204 | if err != nil { 205 | t.Error(err.Error()) 206 | } 207 | 208 | calls := *loadCalls 209 | inner := []string{"1", "2"} 210 | expected := [][]string{inner} 211 | if !reflect.DeepEqual(calls, expected) { 212 | t.Errorf("did not call batchFn in right order. Expected %#v, got %#v", expected, calls) 213 | } 214 | }) 215 | 216 | t.Run("number of results matches number of keys", func(t *testing.T) { 217 | t.Parallel() 218 | faultyLoader, _ := FaultyLoader() 219 | 220 | n := 10 221 | reqs := []func() (string, error){} 222 | for i := 0; i < n; i++ { 223 | key := strconv.Itoa(i) 224 | reqs = append(reqs, faultyLoader.LoadThunk(ctx, key)) 225 | } 226 | 227 | for _, future := range reqs { 228 | _, err := future() 229 | if err == nil { 230 | t.Error("if number of results doesn't match keys, all keys should contain error") 231 | } 232 | } 233 | }) 234 | 235 | t.Run("responds to max batch size", func(t *testing.T) { 236 | t.Parallel() 237 | identityLoader, loadCalls := IDLoader(2) 238 | future1 := identityLoader.LoadThunk(ctx, "1") 239 | future2 := identityLoader.LoadThunk(ctx, "2") 240 | future3 := identityLoader.LoadThunk(ctx, "3") 241 | 242 | _, err := future1() 243 | if err != nil { 244 | t.Error(err.Error()) 245 | } 246 | _, err = future2() 247 | if err != nil { 248 | t.Error(err.Error()) 249 | } 250 | _, err = future3() 251 | if err != nil { 252 | t.Error(err.Error()) 253 | } 254 | 255 | calls := *loadCalls 256 | inner1 := []string{"1", "2"} 257 | inner2 := []string{"3"} 258 | expected := [][]string{inner1, inner2} 259 | if !reflect.DeepEqual(calls, expected) { 260 | t.Errorf("did not respect max batch size. Expected %#v, got %#v", expected, calls) 261 | } 262 | }) 263 | 264 | t.Run("caches repeated requests", func(t *testing.T) { 265 | t.Parallel() 266 | identityLoader, loadCalls := IDLoader(0) 267 | future1 := identityLoader.LoadThunk(ctx, "1") 268 | future2 := identityLoader.LoadThunk(ctx, "1") 269 | 270 | _, err := future1() 271 | if err != nil { 272 | t.Error(err.Error()) 273 | } 274 | _, err = future2() 275 | if err != nil { 276 | t.Error(err.Error()) 277 | } 278 | 279 | calls := *loadCalls 280 | inner := []string{"1"} 281 | expected := [][]string{inner} 282 | if !reflect.DeepEqual(calls, expected) { 283 | t.Errorf("did not cache repeated requests. Expected %#v, got %#v", expected, calls) 284 | } 285 | }) 286 | 287 | t.Run("allows primed cache", func(t *testing.T) { 288 | t.Parallel() 289 | identityLoader, loadCalls := IDLoader(0) 290 | identityLoader.Prime("A", "Cached") 291 | future1 := identityLoader.LoadThunk(ctx, "1") 292 | future2 := identityLoader.LoadThunk(ctx, "A") 293 | 294 | _, err := future1() 295 | if err != nil { 296 | t.Error(err.Error()) 297 | } 298 | value, err := future2() 299 | if err != nil { 300 | t.Error(err.Error()) 301 | } 302 | 303 | calls := *loadCalls 304 | inner := []string{"1"} 305 | expected := [][]string{inner} 306 | if !reflect.DeepEqual(calls, expected) { 307 | t.Errorf("made an unexpected fetch call for a primed key. Expected %#v, got %#v", expected, calls) 308 | } 309 | 310 | if value != "Cached" { 311 | t.Errorf("did not use primed cache value. Expected '%#v', got '%#v'", "Cached", value) 312 | } 313 | }) 314 | 315 | t.Run("allows clear value in cache", func(t *testing.T) { 316 | t.Parallel() 317 | identityLoader, loadCalls := IDLoader(0) 318 | identityLoader.Prime("A", "Cached") 319 | identityLoader.Prime("B", "B") 320 | future1 := identityLoader.LoadThunk(ctx, "1") 321 | identityLoader.Clear("A") 322 | future2 := identityLoader.LoadThunk(ctx, "A") 323 | future3 := identityLoader.LoadThunk(ctx, "B") 324 | 325 | _, err := future1() 326 | if err != nil { 327 | t.Error(err.Error()) 328 | } 329 | value, err := future2() 330 | if err != nil { 331 | t.Error(err.Error()) 332 | } 333 | _, err = future3() 334 | if err != nil { 335 | t.Error(err.Error()) 336 | } 337 | 338 | calls := *loadCalls 339 | inner := []string{"1", "A"} 340 | expected := [][]string{inner} 341 | if !reflect.DeepEqual(calls, expected) { 342 | t.Errorf("did not refetch the cleared key. Expected %#v, got %#v", expected, calls) 343 | } 344 | 345 | if value != "A" { 346 | t.Errorf("did not refetch the cleared value. Expected '%#v', got '%#v'", "A", value) 347 | } 348 | }) 349 | 350 | t.Run("clears cache on batch with WithClearCacheOnBatch", func(t *testing.T) { 351 | t.Skip("not supported yet") 352 | t.Parallel() 353 | batchOnlyLoader, loadCalls := BatchOnlyLoader(0) 354 | future1 := batchOnlyLoader.LoadThunk(ctx, "1") 355 | future2 := batchOnlyLoader.LoadThunk(ctx, "1") 356 | 357 | _, err := future1() 358 | if err != nil { 359 | t.Error(err.Error()) 360 | } 361 | _, err = future2() 362 | if err != nil { 363 | t.Error(err.Error()) 364 | } 365 | 366 | calls := *loadCalls 367 | inner := []string{"1"} 368 | expected := [][]string{inner} 369 | if !reflect.DeepEqual(calls, expected) { 370 | t.Errorf("did not batch queries. Expected %#v, got %#v", expected, calls) 371 | } 372 | 373 | //if _, found := batchOnlyLoader.cache.Get("1"); found { 374 | // t.Errorf("did not clear cache after batch. Expected %#v, got %#v", false, found) 375 | //} 376 | }) 377 | 378 | t.Run("allows clearAll values in cache", func(t *testing.T) { 379 | t.Skip("not supported yet") 380 | t.Parallel() 381 | identityLoader, loadCalls := IDLoader(0) 382 | identityLoader.Prime("A", "Cached") 383 | identityLoader.Prime("B", "B") 384 | 385 | // identityLoader.ClearAll() 386 | 387 | future1 := identityLoader.LoadThunk(ctx, "1") 388 | future2 := identityLoader.LoadThunk(ctx, "A") 389 | future3 := identityLoader.LoadThunk(ctx, "B") 390 | 391 | _, err := future1() 392 | if err != nil { 393 | t.Error(err.Error()) 394 | } 395 | _, err = future2() 396 | if err != nil { 397 | t.Error(err.Error()) 398 | } 399 | _, err = future3() 400 | if err != nil { 401 | t.Error(err.Error()) 402 | } 403 | 404 | calls := *loadCalls 405 | inner := []string{"1", "A", "B"} 406 | expected := [][]string{inner} 407 | if !reflect.DeepEqual(calls, expected) { 408 | t.Errorf("did not refetch cleared keys. Expected %#v, got %#v", expected, calls) 409 | } 410 | }) 411 | 412 | t.Run("all methods on NoCache are Noops", func(t *testing.T) { 413 | t.Skip("not supported yet") 414 | t.Parallel() 415 | identityLoader, loadCalls := NoCacheLoader(0) 416 | identityLoader.Prime("A", "Cached") 417 | identityLoader.Prime("B", "B") 418 | 419 | // identityLoader.ClearAll() 420 | 421 | identityLoader.Clear("1") 422 | future1 := identityLoader.LoadThunk(ctx, "1") 423 | future2 := identityLoader.LoadThunk(ctx, "A") 424 | future3 := identityLoader.LoadThunk(ctx, "B") 425 | 426 | _, err := future1() 427 | if err != nil { 428 | t.Error(err.Error()) 429 | } 430 | _, err = future2() 431 | if err != nil { 432 | t.Error(err.Error()) 433 | } 434 | _, err = future3() 435 | if err != nil { 436 | t.Error(err.Error()) 437 | } 438 | 439 | calls := *loadCalls 440 | inner := []string{"1", "A", "B"} 441 | expected := [][]string{inner} 442 | if !reflect.DeepEqual(calls, expected) { 443 | t.Errorf("did not fetch all keys. Expected %#v, got %#v", expected, calls) 444 | } 445 | }) 446 | 447 | t.Run("no cache does not cache anything", func(t *testing.T) { 448 | t.Skip("not supported yet") 449 | t.Parallel() 450 | identityLoader, loadCalls := NoCacheLoader(0) 451 | identityLoader.Prime("A", "Cached") 452 | identityLoader.Prime("B", "B") 453 | 454 | future1 := identityLoader.LoadThunk(ctx, "1") 455 | future2 := identityLoader.LoadThunk(ctx, "A") 456 | future3 := identityLoader.LoadThunk(ctx, "B") 457 | 458 | _, err := future1() 459 | if err != nil { 460 | t.Error(err.Error()) 461 | } 462 | _, err = future2() 463 | if err != nil { 464 | t.Error(err.Error()) 465 | } 466 | _, err = future3() 467 | if err != nil { 468 | t.Error(err.Error()) 469 | } 470 | 471 | calls := *loadCalls 472 | inner := []string{"1", "A", "B"} 473 | expected := [][]string{inner} 474 | if !reflect.DeepEqual(calls, expected) { 475 | t.Errorf("did not fetch all keys. Expected %#v, got %#v", expected, calls) 476 | } 477 | }) 478 | } 479 | 480 | // test helpers 481 | func IDLoader(max int) (*dataloadgen.Loader[string, string], *[][]string) { 482 | var mu sync.Mutex 483 | var loadCalls [][]string 484 | identityLoader := dataloadgen.NewLoader(func(_ context.Context, keys []string) (results []string, errs []error) { 485 | mu.Lock() 486 | loadCalls = append(loadCalls, keys) 487 | mu.Unlock() 488 | results = append(results, keys...)
489 | return results, nil 490 | }, dataloadgen.WithBatchCapacity(max)) 491 | return identityLoader, &loadCalls 492 | } 493 | 494 | func BatchOnlyLoader(max int) (*dataloadgen.Loader[string, string], *[][]string) { 495 | var mu sync.Mutex 496 | var loadCalls [][]string 497 | identityLoader := dataloadgen.NewLoader(func(_ context.Context, keys []string) (results []string, errs []error) { 498 | mu.Lock() 499 | loadCalls = append(loadCalls, keys) 500 | mu.Unlock() 501 | results = append(results, keys...) 502 | return results, nil 503 | }, dataloadgen.WithBatchCapacity(max)) // dataloadgen.WithClearCacheOnBatch()) 504 | return identityLoader, &loadCalls 505 | } 506 | 507 | func ErrorLoader(max int) (*dataloadgen.Loader[string, string], *[][]string) { 508 | var mu sync.Mutex 509 | var loadCalls [][]string 510 | identityLoader := dataloadgen.NewLoader(func(_ context.Context, keys []string) (results []string, errs []error) { 511 | mu.Lock() 512 | loadCalls = append(loadCalls, keys) 513 | mu.Unlock() 514 | for _, key := range keys { 515 | results = append(results, key) 516 | errs = append(errs, fmt.Errorf("this is a test error")) 517 | } 518 | return results, errs 519 | }, dataloadgen.WithBatchCapacity(max)) 520 | return identityLoader, &loadCalls 521 | } 522 | 523 | func OneErrorLoader(max int) (*dataloadgen.Loader[string, string], *[][]string) { 524 | var mu sync.Mutex 525 | var loadCalls [][]string 526 | identityLoader := dataloadgen.NewLoader(func(_ context.Context, keys []string) (results []string, errs []error) { 527 | results = make([]string, max) 528 | errs = make([]error, max) 529 | mu.Lock() 530 | loadCalls = append(loadCalls, keys) 531 | mu.Unlock() 532 | for i := range keys { 533 | var err error 534 | if i == 0 { 535 | err = errors.New("always error on the first key") 536 | } 537 | results[i] = keys[i] 538 | errs[i] = err 539 | } 540 | return results, errs 541 | }, dataloadgen.WithBatchCapacity(max)) 542 | return identityLoader, &loadCalls 543 | } 544 | 545 | func PanicLoader(max int) (*dataloadgen.Loader[string, string], *[][]string) { 546 | var loadCalls [][]string 547 | panicLoader := dataloadgen.NewLoader(func(_ context.Context, keys []string) (results []string, errs []error) { 548 | panic("Programming error") 549 | }, dataloadgen.WithBatchCapacity(max)) //, withSilentLogger()) 550 | return panicLoader, &loadCalls 551 | } 552 | 553 | func BadLoader(max int) (*dataloadgen.Loader[string, string], *[][]string) { 554 | var mu sync.Mutex 555 | var loadCalls [][]string 556 | identityLoader := dataloadgen.NewLoader(func(_ context.Context, keys []string) (results []string, errs []error) { 557 | mu.Lock() 558 | loadCalls = append(loadCalls, keys) 559 | mu.Unlock() 560 | results = append(results, keys[0]) 561 | return results, nil 562 | }, dataloadgen.WithBatchCapacity(max)) 563 | return identityLoader, &loadCalls 564 | } 565 | 566 | func NoCacheLoader(max int) (*dataloadgen.Loader[string, string], *[][]string) { 567 | var mu sync.Mutex 568 | var loadCalls [][]string 569 | // cache := &NoCache{} 570 | identityLoader := dataloadgen.NewLoader(func(_ context.Context, keys []string) (results []string, errs []error) { 571 | mu.Lock() 572 | loadCalls = append(loadCalls, keys) 573 | mu.Unlock() 574 | results = append(results, keys...) 575 | return results, nil 576 | }, /*dataloadgen.WithCache(cache),*/ dataloadgen.WithBatchCapacity(max)) 577 | return identityLoader, &loadCalls 578 | } 579 | 580 | // FaultyLoader gives len(keys)-1 results. 
581 | func FaultyLoader() (*dataloadgen.Loader[string, string], *[][]string) {
582 | var mu sync.Mutex
583 | var loadCalls [][]string
584 | 
585 | loader := dataloadgen.NewLoader(func(_ context.Context, keys []string) (results []string, errs []error) {
586 | mu.Lock()
587 | loadCalls = append(loadCalls, keys)
588 | mu.Unlock()
589 | 
590 | lastKeyIndex := len(keys) - 1
591 | for i, key := range keys {
592 | if i == lastKeyIndex {
593 | break
594 | }
595 | 
596 | results = append(results, key)
597 | }
598 | return results, nil
599 | })
600 | 
601 | return loader, &loadCalls
602 | }
603 | -------------------------------------------------------------------------------- /dataloadgen.go: --------------------------------------------------------------------------------
1 | package dataloadgen
2 | 
3 | import (
4 | "context"
5 | "errors"
6 | "fmt"
7 | "strings"
8 | "sync"
9 | "time"
10 | 
11 | "go.opentelemetry.io/otel/attribute"
12 | "go.opentelemetry.io/otel/trace"
13 | )
14 | 
15 | // Option allows for configuration of loader fields.
16 | type Option func(*loaderConfig)
17 | 
18 | // WithBatchCapacity sets the batch capacity. Default is 0 (unbounded).
19 | func WithBatchCapacity(c int) Option {
20 | return func(l *loaderConfig) {
21 | l.maxBatch = c
22 | }
23 | }
24 | 
25 | // WithWait sets the amount of time to wait before triggering a batch.
26 | // Default duration is 16 milliseconds.
27 | func WithWait(d time.Duration) Option {
28 | return func(l *loaderConfig) {
29 | l.wait = d
30 | }
31 | }
32 | 
// WithTracer sets the OpenTelemetry tracer used to create spans for loads, waits, and batch fetches.
33 | func WithTracer(tracer trace.Tracer) Option {
34 | return func(l *loaderConfig) {
35 | l.tracer = tracer
36 | }
37 | }
38 | 
39 | // NewLoader creates a new Loader given a fetch function and options such as wait and maxBatch.
40 | func NewLoader[KeyT comparable, ValueT any](fetch func(ctx context.Context, keys []KeyT) ([]ValueT, []error), options ...Option) *Loader[KeyT, ValueT] {
41 | config := &loaderConfig{
42 | wait: 16 * time.Millisecond,
43 | maxBatch: 0, // unlimited
44 | }
45 | for _, o := range options {
46 | o(config)
47 | }
48 | l := &Loader[KeyT, ValueT]{
49 | fetch: fetch,
50 | loaderConfig: config,
51 | thunkCache: map[KeyT]func() (ValueT, error){},
52 | }
53 | return l
54 | }
55 | 
56 | // NewMappedLoader creates a new Loader given a mappedFetch function and options such as wait and maxBatch.
57 | func NewMappedLoader[KeyT comparable, ValueT any](mappedFetch func(ctx context.Context, keys []KeyT) (map[KeyT]ValueT, error), options ...Option) *Loader[KeyT, ValueT] {
58 | return NewLoader(convertMappedFetch(mappedFetch), options...)
59 | }
60 | 
61 | // convertMappedFetch accepts a fetcher method that returns maps, and converts it to a fetcher that returns lists.
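// Keys missing from the map get ErrNotFound only when no error was returned; a MappedFetchError
// distributes per-key errors, and any other non-nil error is returned for every key's load.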
62 | func convertMappedFetch[KeyT comparable, ValueT any](mappedFetch func(ctx context.Context, keys []KeyT) (map[KeyT]ValueT, error)) func(ctx context.Context, keys []KeyT) ([]ValueT, []error) {
63 | return func(ctx context.Context, keys []KeyT) ([]ValueT, []error) {
64 | mappedResults, err := mappedFetch(ctx, keys)
65 | var mfe MappedFetchError[KeyT]
66 | isMappedFetchError := errors.As(err, &mfe)
67 | 
68 | var values = make([]ValueT, len(keys))
69 | var errs = make([]error, len(keys))
70 | for i, key := range keys {
71 | var ok bool
72 | if mappedResults != nil {
73 | values[i], ok = mappedResults[key]
74 | }
75 | // Error precedence: per-key errors from a MappedFetchError, then a shared plain error, then ErrNotFound.
76 | switch {
77 | case isMappedFetchError:
78 | errs[i] = mfe[key] // stays nil for keys the fetch did not flag
79 | case err != nil:
80 | errs[i] = err // a plain error is returned for every key's load invocation
81 | case !ok:
82 | errs[i] = ErrNotFound // no value and no error were provided for this key
83 | }
84 | }
85 | return values, errs
86 | }
87 | }
88 | 
// MappedFetchError carries one error per key from a mapped fetch function.
89 | type MappedFetchError[KeyT comparable] map[KeyT]error
90 | 
91 | func (e MappedFetchError[KeyT]) Error() string {
92 | var errSlice = make([]string, len(e))
93 | i := 0
94 | for k, v := range e {
95 | errSlice[i] = fmt.Sprint(k, ": ", v)
96 | i++
97 | }
98 | return fmt.Sprint("Mapped errors: [", strings.Join(errSlice, ", "), "]")
99 | }
100 | 
101 | type loaderConfig struct {
102 | // how long to wait before sending a batch
103 | wait time.Duration
104 | 
105 | // this will limit the maximum number of keys to send in one batch, 0 = no limit
106 | maxBatch int
107 | 
108 | tracer trace.Tracer
109 | }
110 | 
111 | // Loader batches and caches requests
112 | type Loader[KeyT comparable, ValueT any] struct {
113 | // this method provides the data for the loader
114 | fetch func(ctx context.Context, keys []KeyT) ([]ValueT, []error)
115 | 
116 | *loaderConfig
117 | 
118 | // INTERNAL
119 | 
120 | // cache of result thunks by key, populated as each key is first loaded
121 | thunkCache map[KeyT]func() (ValueT, error)
122 | 
123 | // the current batch. keys will continue to be collected until timeout is hit,
124 | // then everything will be sent to the fetch method and out to the listeners
125 | batch *loaderBatch[KeyT, ValueT]
126 | 
127 | // mutex to prevent races
128 | mu sync.Mutex
129 | }
130 | 
131 | type loaderBatch[KeyT comparable, ValueT any] struct {
132 | keys []KeyT
133 | results []ValueT
134 | errors []error
135 | fetchExecuted bool // set when the batch is dispatched early by the key limit
136 | done chan struct{}
137 | firstContext context.Context
138 | contexts []context.Context
139 | spans []trace.Span
140 | }
141 | 
142 | // Load a ValueT by key; batching and caching will be applied automatically.
143 | func (l *Loader[KeyT, ValueT]) Load(ctx context.Context, key KeyT) (ValueT, error) {
144 | return l.LoadThunk(ctx, key)()
145 | }
146 | 
147 | // LoadThunk returns a function that when called will block waiting for a ValueT.
148 | // This method should be used if you want one goroutine to make requests to many
149 | // different data loaders without blocking until the thunk is called.
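// Thunks are cached per key, so concurrent and repeated loads of the same key share one fetch and one result.
// A minimal sketch (userLoader and orderLoader are hypothetical):
//   userThunk := userLoader.LoadThunk(ctx, userID)
//   orderThunk := orderLoader.LoadThunk(ctx, orderID)
//   user, err := userThunk() // blocks here, after both batches are already in flight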
150 | func (l *Loader[KeyT, ValueT]) LoadThunk(ctx context.Context, key KeyT) func() (ValueT, error) {
151 | l.mu.Lock()
152 | defer l.mu.Unlock()
153 | if it, ok := l.thunkCache[key]; ok {
154 | return it
155 | }
156 | 
157 | l.startBatch(ctx)
158 | 
159 | if l.tracer != nil {
160 | _, loadSpan := l.tracer.Start(ctx, "dataloadgen.load")
161 | defer loadSpan.End()
162 | l.batch.contexts = append(l.batch.contexts, ctx)
163 | _, waitSpan := l.tracer.Start(ctx, "dataloadgen.wait")
164 | l.batch.spans = append(l.batch.spans, waitSpan)
165 | }
166 | 
167 | batch := l.batch
168 | pos := l.addKeyToBatch(batch, key)
169 | 
170 | thunk := func() (ValueT, error) {
171 | <-batch.done
172 | 
173 | var data ValueT
174 | 
175 | // Return early if there's a single error and it's not nil
176 | if len(batch.errors) == 1 && batch.errors[0] != nil {
177 | return data, batch.errors[0]
178 | }
179 | 
180 | // If the batch function returned the wrong number of responses, return an error to all callers
181 | if len(batch.results) != len(batch.keys) {
182 | return data, fmt.Errorf("bug in fetch function: %d values returned for %d keys", len(batch.results), len(batch.keys))
183 | }
184 | 
185 | if pos < len(batch.results) {
186 | data = batch.results[pos]
187 | }
188 | 
189 | var err error
190 | if len(batch.errors) != 0 {
191 | if pos < len(batch.errors) {
192 | err = batch.errors[pos]
193 | } else {
194 | err = fmt.Errorf("bug in fetch function: %d errors returned for %d keys; last error: %w", len(batch.errors), len(batch.keys), batch.errors[len(batch.errors)-1])
195 | }
196 | 
197 | }
198 | 
199 | return data, err
200 | }
201 | l.thunkCache[key] = thunk
202 | return thunk
203 | }
204 | 
205 | // ErrNotFound is returned by loaders created with NewMappedLoader when the mapped fetch provides neither a value nor an error for a key.
206 | var ErrNotFound = errors.New("dataloadgen: not found")
207 | 
208 | // ErrorSlice represents a list of errors that contains at least one error
209 | type ErrorSlice []error
210 | 
211 | // Error implements the error interface
212 | func (e ErrorSlice) Error() string {
213 | combinedErr := errors.Join([]error(e)...)
214 | if combinedErr == nil {
215 | return "no error data"
216 | }
217 | return combinedErr.Error()
218 | }
219 | 
220 | // LoadAll fetches many keys at once. It will be broken into appropriately sized
221 | // sub-batches depending on how the loader is configured.
222 | func (l *Loader[KeyT, ValueT]) LoadAll(ctx context.Context, keys []KeyT) ([]ValueT, error) {
223 | thunks := make([]func() (ValueT, error), len(keys))
224 | 
225 | for i, key := range keys {
226 | thunks[i] = l.LoadThunk(ctx, key)
227 | }
228 | 
229 | values := make([]ValueT, len(keys))
230 | errs := make([]error, len(keys))
231 | allNil := true
232 | for i, thunk := range thunks {
233 | values[i], errs[i] = thunk()
234 | if errs[i] != nil {
235 | allNil = false
236 | }
237 | }
238 | if allNil {
239 | return values, nil
240 | }
241 | return values, ErrorSlice(errs)
242 | }
243 | 
244 | // LoadAllThunk returns a function that when called will block waiting for the values.
245 | // This method should be used if you want one goroutine to make requests to many
246 | // different data loaders without blocking until the thunk is called.
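// Values come back in the same order as keys; if any key failed, the error is an ErrorSlice whose entries align with keys.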
247 | func (l *Loader[KeyT, ValueT]) LoadAllThunk(ctx context.Context, keys []KeyT) func() ([]ValueT, error) {
248 | thunks := make([]func() (ValueT, error), len(keys))
249 | for i, key := range keys {
250 | thunks[i] = l.LoadThunk(ctx, key)
251 | }
252 | return func() ([]ValueT, error) {
253 | values := make([]ValueT, len(keys))
254 | errs := make([]error, len(keys))
255 | allNil := true
256 | for i, thunk := range thunks {
257 | values[i], errs[i] = thunk()
258 | if allNil && errs[i] != nil {
259 | allNil = false
260 | }
261 | }
262 | if allNil {
263 | return values, nil
264 | }
265 | return values, ErrorSlice(errs)
266 | }
267 | }
268 | 
269 | // Prime the cache with the provided key and value. If the key already exists, no change is made
270 | // and false is returned.
271 | // (To forcefully prime the cache, call loader.Clear(key) first and then loader.Prime(key, value).)
272 | func (l *Loader[KeyT, ValueT]) Prime(key KeyT, value ValueT) bool {
273 | l.mu.Lock()
274 | var found bool
275 | if _, found = l.thunkCache[key]; !found {
276 | l.thunkCache[key] = func() (ValueT, error) { return value, nil }
277 | }
278 | l.mu.Unlock()
279 | return !found
280 | }
281 | 
282 | // Clear the value at key from the cache, if it exists
283 | func (l *Loader[KeyT, ValueT]) Clear(key KeyT) {
284 | l.mu.Lock()
285 | delete(l.thunkCache, key)
286 | l.mu.Unlock()
287 | }
288 | 
// startBatch lazily creates the current batch and schedules it to be fetched after the wait period.
289 | func (l *Loader[KeyT, ValueT]) startBatch(ctx context.Context) {
290 | if l.batch == nil {
291 | batch := &loaderBatch[KeyT, ValueT]{
292 | done: make(chan struct{}),
293 | firstContext: ctx,
294 | }
295 | if l.maxBatch != 0 {
296 | batch.contexts = make([]context.Context, 0)
297 | batch.keys = make([]KeyT, 0)
298 | if l.tracer != nil {
299 | batch.spans = make([]trace.Span, 0)
300 | }
301 | }
302 | l.batch = batch
303 | go func(l *Loader[KeyT, ValueT]) {
304 | time.Sleep(l.wait)
305 | l.mu.Lock()
306 | 
307 | // we must have hit a batch limit and are already finalizing this batch
308 | if batch.fetchExecuted {
309 | l.mu.Unlock()
310 | return
311 | }
312 | 
313 | ctxs := l.batch.contexts
314 | spans := l.batch.spans
315 | 
316 | l.batch = nil
317 | l.mu.Unlock()
318 | 
319 | if l.tracer != nil {
320 | for _, ctx := range ctxs {
321 | _, span := l.tracer.Start(ctx, "dataloadgen.fetch.timelimit",
322 | trace.WithAttributes(
323 | attribute.Int64("dataloadgen.keys", int64(len(batch.keys)))))
324 | defer span.End()
325 | }
326 | }
327 | 
328 | batch.results, batch.errors = l.safeFetch(batch.firstContext, batch.keys)
329 | 
330 | if l.tracer != nil {
331 | for _, span := range spans {
332 | span.End()
333 | }
334 | }
335 | 
336 | close(batch.done)
337 | }(l)
338 | }
339 | }
340 | 
// safeFetch calls fetch and converts panics into errors so one bad batch doesn't crash the process.
341 | func (l *Loader[KeyT, ValueT]) safeFetch(ctx context.Context, keys []KeyT) (values []ValueT, errs []error) {
342 | defer func() {
343 | panicValue := recover()
344 | if panicValue != nil {
345 | errs = []error{fmt.Errorf("panic during fetch: %v", panicValue)}
346 | }
347 | }()
348 | return l.fetch(ctx, keys)
349 | }
350 | 
351 | // addKeyToBatch appends the key to the current batch and returns its position.
352 | // When the batch reaches maxBatch keys, it is dispatched immediately.
353 | func (l *Loader[KeyT, ValueT]) addKeyToBatch(b *loaderBatch[KeyT, ValueT], key KeyT) int {
354 | pos := len(b.keys)
355 | b.keys = append(b.keys, key)
356 | 
357 | if l.maxBatch != 0 && pos >= l.maxBatch-1 {
358 | ctxs := l.batch.contexts
359 | spans := l.batch.spans
360 | b.fetchExecuted = true // tell the timeout goroutine this batch is already being finalized
361 | l.batch = nil
362 | go func(l *Loader[KeyT, ValueT], ctxs []context.Context) {
363 | if l.tracer != nil {
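// emit one span per caller context to record that this batch was dispatched early by hitting the key limit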
364 | for _, ctx := range ctxs { 365 | _, span := l.tracer.Start(ctx, "dataloadgen.fetch.keylimit", 366 | trace.WithAttributes( 367 | attribute.Int64("dataloadgen.keys", int64(len(b.keys))))) 368 | defer span.End() 369 | } 370 | } 371 | 372 | b.results, b.errors = l.safeFetch(b.firstContext, b.keys) 373 | 374 | if l.tracer != nil { 375 | for _, span := range spans { 376 | span.End() 377 | } 378 | } 379 | 380 | close(b.done) 381 | }(l, ctxs) 382 | } 383 | 384 | return pos 385 | } 386 | -------------------------------------------------------------------------------- /dataloadgen_test.go: -------------------------------------------------------------------------------- 1 | package dataloadgen_test 2 | 3 | import ( 4 | "context" 5 | "errors" 6 | "fmt" 7 | "strconv" 8 | "sync" 9 | "testing" 10 | "time" 11 | 12 | "github.com/vikstrous/dataloadgen" 13 | ) 14 | 15 | func ExampleLoader() { 16 | ctx := context.Background() 17 | 18 | loader := dataloadgen.NewLoader(func(ctx context.Context, keys []string) (ret []int, errs []error) { 19 | for _, key := range keys { 20 | num, err := strconv.ParseInt(key, 10, 32) 21 | ret = append(ret, int(num)) 22 | errs = append(errs, err) 23 | } 24 | return 25 | }, 26 | dataloadgen.WithBatchCapacity(1), 27 | dataloadgen.WithWait(16*time.Millisecond), 28 | ) 29 | one, err := loader.Load(ctx, "1") 30 | if err != nil { 31 | panic(err) 32 | } 33 | 34 | mappedLoader := dataloadgen.NewMappedLoader(func(ctx context.Context, keys []string) (ret map[string]int, err error) { 35 | ret = make(map[string]int, len(keys)) 36 | errs := make(map[string]error, len(keys)) 37 | for _, key := range keys { 38 | num, err := strconv.ParseInt(key, 10, 32) 39 | ret[key] = int(num) 40 | errs[key] = err 41 | } 42 | err = dataloadgen.MappedFetchError[string](errs) 43 | return 44 | }, 45 | dataloadgen.WithBatchCapacity(1), 46 | dataloadgen.WithWait(16*time.Millisecond), 47 | ) 48 | two, err := mappedLoader.Load(ctx, "2") 49 | if err != nil { 50 | panic(err) 51 | } 52 | fmt.Println(one, ",", two) 53 | // Output: 1 , 2 54 | } 55 | 56 | func TestCache(t *testing.T) { 57 | ctx := context.Background() 58 | var fetches [][]int 59 | var mu sync.Mutex 60 | dl := dataloadgen.NewLoader(func(_ context.Context, keys []int) ([]string, []error) { 61 | mu.Lock() 62 | fetches = append(fetches, keys) 63 | mu.Unlock() 64 | 65 | results := make([]string, len(keys)) 66 | errs := make([]error, len(keys)) 67 | 68 | for i, key := range keys { 69 | if key%2 == 0 { 70 | errs[i] = fmt.Errorf("not found") 71 | } else { 72 | results[i] = fmt.Sprint(key) 73 | } 74 | } 75 | return results, errs 76 | }, 77 | dataloadgen.WithBatchCapacity(5), 78 | dataloadgen.WithWait(1*time.Millisecond), 79 | ) 80 | 81 | for i := 0; i < 2; i++ { 82 | _, err := dl.Load(ctx, 0) 83 | if err == nil { 84 | t.Fatal("expected error") 85 | } 86 | if len(fetches) != 1 { 87 | t.Fatal("wrong number of fetches", fetches) 88 | } 89 | if len(fetches[0]) != 1 { 90 | t.Fatal("wrong number of keys in fetch request") 91 | } 92 | } 93 | for i := 0; i < 2; i++ { 94 | r, err := dl.Load(ctx, 1) 95 | if err != nil { 96 | t.Fatal(err) 97 | } 98 | if len(fetches) != 2 { 99 | t.Fatal("wrong number of fetches", fetches) 100 | } 101 | if len(fetches[1]) != 1 { 102 | t.Fatal("wrong number of keys in fetch request") 103 | } 104 | if r != "1" { 105 | t.Fatal("wrong data fetched", r) 106 | } 107 | } 108 | } 109 | 110 | func TestErrors(t *testing.T) { 111 | ctx := context.Background() 112 | dl := dataloadgen.NewLoader(func(_ context.Context, keys []int) ([]string, []error) { 
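// return three values but only two errors, to exercise the loader's error-count mismatch handling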
113 | return []string{"1", "2", "3"}, []error{fmt.Errorf("error 1"), fmt.Errorf("error 2")}
114 | },
115 | dataloadgen.WithBatchCapacity(3),
116 | )
117 | _, err := dl.LoadAll(ctx, []int{1, 2, 3})
118 | var errs dataloadgen.ErrorSlice
119 | errors.As(err, &errs)
120 | if len(errs) != 3 {
121 | t.Fatalf("wrong number of errors: %d", len(errs))
122 | }
123 | if errs[0].Error() != "error 1" {
124 | t.Fatalf("wrong error: %s", errs[0].Error())
125 | }
126 | if errs[1].Error() != "error 2" {
127 | t.Fatalf("wrong error: %s", errs[1].Error())
128 | }
129 | if errs[2].Error() != "bug in fetch function: 2 errors returned for 3 keys; last error: error 2" {
130 | t.Fatalf("wrong error: %s", errs[2].Error())
131 | }
132 | }
133 | 
134 | func TestPanic(t *testing.T) {
135 | ctx := context.Background()
136 | dl := dataloadgen.NewLoader(func(_ context.Context, keys []int) ([]string, []error) {
137 | panic("fetch panic")
138 | },
139 | dataloadgen.WithBatchCapacity(1),
140 | )
141 | _, err := dl.Load(ctx, 1)
142 | if err == nil || err.Error() != "panic during fetch: fetch panic" {
143 | t.Fatalf("wrong error: %v", err)
144 | }
145 | }
146 | 
147 | func TestMappedLoader(t *testing.T) {
148 | ctx := context.Background()
149 | dl := dataloadgen.NewMappedLoader(func(_ context.Context, keys []string) (res map[string]*string, err error) {
150 | one := "1"
151 | res = map[string]*string{"1": &one}
152 | err = dataloadgen.MappedFetchError[string](map[string]error{"3": errors.New("not found error")})
153 | return
154 | })
155 | 
156 | thunkOne := dl.LoadThunk(ctx, "1")
157 | thunkTwo := dl.LoadThunk(ctx, "2")
158 | thunkThree := dl.LoadThunk(ctx, "3")
159 | 
160 | one, _ := thunkOne()
161 | two, errTwo := thunkTwo()
162 | _, errThree := thunkThree()
163 | 
164 | if *one != "1" {
165 | t.Fatal("wrong value returned for '1':", *one)
166 | }
167 | if two != nil || errTwo != nil {
168 | t.Fatalf("wrong value/err returned for '2'. Value: %v Err: %v", two, errTwo)
169 | }
170 | if errThree == nil || errThree.Error() != "not found error" {
171 | t.Fatal("wrong error:", errThree)
172 | }
173 | }
174 | 
175 | func TestMappedLoaderSingleError(t *testing.T) {
176 | ctx := context.Background()
177 | dl := dataloadgen.NewMappedLoader(func(_ context.Context, keys []string) (res map[string]*string, err error) {
178 | err = errors.New("something went wrong")
179 | return
180 | })
181 | 
182 | thunkOne := dl.LoadThunk(ctx, "1")
183 | thunkTwo := dl.LoadThunk(ctx, "2")
184 | thunkThree := dl.LoadThunk(ctx, "3")
185 | 
186 | _, errOne := thunkOne()
187 | _, errTwo := thunkTwo()
188 | _, errThree := thunkThree()
189 | 
190 | if errOne != nil && errTwo != nil && errThree != nil {
191 | if errors.Is(errTwo, errOne) && errors.Is(errThree, errTwo) {
192 | if errOne.Error() != "something went wrong" {
193 | t.Fatalf("Unexpected error message: %s", errOne.Error())
194 | }
195 | } else {
196 | t.Fatalf("All errors should be equal, instead got: %v, %v, %v", errOne, errTwo, errThree)
197 | }
198 | } else {
199 | t.Fatalf("All errors should be non-nil, instead got: %v, %v, %v", errOne, errTwo, errThree)
200 | }
201 | }
202 | 
203 | func TestMappedLoaderNotFoundError(t *testing.T) {
204 | ctx := context.Background()
205 | dl := dataloadgen.NewMappedLoader(func(_ context.Context, keys []string) (map[string]*string, error) {
206 | return nil, nil
207 | })
208 | _, err := dl.Load(ctx, "1")
209 | if !errors.Is(err, dataloadgen.ErrNotFound) {
210 | t.Fatalf("Wrong error returned: %T", err)
211 | }
212 | 
213 | dl2 := dataloadgen.NewMappedLoader(func(_ context.Context, keys []string) (map[string]*string, error) {
214 | return map[string]*string{}, nil
215 | })
216 | _, err = dl2.Load(ctx, "1")
217 | if !errors.Is(err, dataloadgen.ErrNotFound) {
218 | t.Fatalf("Wrong error returned: %T", err)
219 | }
220 | }
221 | -------------------------------------------------------------------------------- /go.mod: --------------------------------------------------------------------------------
1 | module github.com/vikstrous/dataloadgen
2 | 
3 | go 1.20
4 | 
5 | require go.opentelemetry.io/otel/trace v1.11.1
6 | 
7 | require go.opentelemetry.io/otel v1.11.1 // indirect
8 | -------------------------------------------------------------------------------- /go.sum: --------------------------------------------------------------------------------
1 | github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
2 | github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
3 | github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
4 | github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk=
5 | go.opentelemetry.io/otel v1.11.1 h1:4WLLAmcfkmDk2ukNXJyq3/kiz/3UzCaYq6PskJsaou4=
6 | go.opentelemetry.io/otel v1.11.1/go.mod h1:1nNhXBbWSD0nsL38H6btgnFN2k4i0sNLHNNMZMSbUGE=
7 | go.opentelemetry.io/otel/trace v1.11.1 h1:ofxdnzsNrGBYXbP7t7zpUK281+go5rF7dvdIZXF8gdQ=
8 | go.opentelemetry.io/otel/trace v1.11.1/go.mod h1:f/Q9G7vzk5u91PhbmKbg1Qn0rzH1LJ4vbPHFGkTPtOk=
9 | gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
10 | -------------------------------------------------------------------------------- /go.work: --------------------------------------------------------------------------------
1 | go 1.20
2 | 
3 | use (
4 | .
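// benchmark is kept as a separate module, likely so benchmark-only dependencies stay out of the library's go.mod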
5 | ./benchmark 6 | ) 7 | -------------------------------------------------------------------------------- /init.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vikstrous/dataloadgen/f6e9a8a533f0275fa23946b74136dfe65faf6676/init.png -------------------------------------------------------------------------------- /license.md: -------------------------------------------------------------------------------- 1 | Copyright (c) 2017 Adam Scarr 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 4 | 5 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 6 | 7 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 8 | -------------------------------------------------------------------------------- /unique_keys.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vikstrous/dataloadgen/f6e9a8a533f0275fa23946b74136dfe65faf6676/unique_keys.png --------------------------------------------------------------------------------