├── LICENSE
├── README.md
├── go.mod
├── go.sum
├── main.go
├── main_test.go
├── main_timing_test.go
├── queries
│   ├── full_year_10percent.sh
│   ├── full_year_1percent.sh
│   ├── full_year_parallel.sh
│   ├── full_year_serial.sh
│   ├── one_day.sh
│   └── three_months.sh
└── scripts
    ├── influx_load_data_1M.sh
    ├── load_data_1M.sh
    ├── load_data_500K_client1.sh
    ├── load_data_500K_client2.sh
    ├── start_victoria_metrics.sh
    └── write_needle.sh

/LICENSE:
--------------------------------------------------------------------------------
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright 2019 VictoriaMetrics

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Billy benchmark

`billy` ingests data into [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics)
according to the [billy benchmark](https://www.scylladb.com/2019/12/12/how-scylla-scaled-to-one-billion-rows-a-second/)
originally published by ScyllaDB.

See [Billy benchmark results for VictoriaMetrics](https://medium.com/@valyala/billy-how-victoriametrics-deals-with-more-than-500-billion-rows-e82ff8f725da).
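A quick back-of-the-envelope check of the scale involved (our arithmetic, not part of the repo): the 1M-sensor load scripts sweep 2019-01-01 through 2019-12-31 with one sample per minute per sensor, which works out to more than 500 billion rows:

```shell
# 1,000,000 sensors, one sample per minute, 365 days (end date inclusive)
sensors=1000000
rows_per_sensor=$((365 * 24 * 60))     # 525600 samples per sensor
echo $((sensors * rows_per_sensor))    # 525600000000 rows, i.e. >500 billion
```

This matches the "more than 500 billion rows" figure in the results article linked above.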

--------------------------------------------------------------------------------
/go.mod:
--------------------------------------------------------------------------------
module github.com/VictoriaMetrics/billy

go 1.12

require github.com/klauspost/compress v1.9.7

--------------------------------------------------------------------------------
/go.sum:
--------------------------------------------------------------------------------
github.com/klauspost/compress v1.9.7 h1:hYW1gP94JUmAhBtJ+LNz5My+gBobDxPR1iVuKug26aA=
github.com/klauspost/compress v1.9.7/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=

--------------------------------------------------------------------------------
/main.go:
--------------------------------------------------------------------------------
package main

import (
	"bufio"
	"flag"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"math"
	"math/rand"
	"net/http"
	"runtime"
	"strconv"
	"sync"
	"sync/atomic"
	"time"

	"github.com/klauspost/compress/gzip"
)

var (
	startDateStr = flag.String("startdate", "2019-01-01", "Date to start sweep YYYY-MM-DD")
	endDateStr   = flag.String("enddate", "2019-01-31", "Date to end sweep YYYY-MM-DD")
	startKey     = flag.Int("startkey", 1, "First sensor ID")
	endKey       = flag.Int("endkey", 2, "Last sensor ID")
	workers      = flag.Int("workers", runtime.GOMAXPROCS(-1), "The number of concurrent workers used for data ingestion")
	sink         = flag.String("sink", "http://localhost:8428/api/v1/import", "HTTP address for the data ingestion sink. It depends on the `-format`")
	compress     = flag.Bool("compress", false, "Whether to compress data before sending it to sink. This saves network bandwidth at the cost of higher CPU usage")
	digits       = flag.Int("digits", 2, "The number of decimal digits after the point in the generated temperature. "+
		"The original benchmark from ScyllaDB uses 2 decimal digits after the point. "+
		"See query results at https://www.scylladb.com/2019/12/12/how-scylla-scaled-to-one-billion-rows-a-second/")
	reportInterval   = flag.Duration("report-interval", 10*time.Second, "Stats reporting interval")
	format           = flag.String("format", "vmimport", "Data ingestion format. Supported values: vmimport, influx")
	blocksPerRequest = flag.Int("blocks-per-request", 0, "The maximum number of blocks per request. Unlimited if set to 0. "+
		"This can be used for ingesting data into InfluxDB, which doesn't support request body streaming")
)

func main() {
	flag.Parse()

	startTimestamp := mustParseDate(*startDateStr, "startdate")
	endTimestamp := mustParseDate(*endDateStr, "enddate")
	if startTimestamp > endTimestamp {
		log.Fatalf("-startdate=%s cannot exceed -enddate=%s", *startDateStr, *endDateStr)
	}
	// Make -enddate inclusive by extending the sweep by one day.
	endTimestamp += 24 * 3600 * 1000
	// Rows per sensor at one sample per minute.
	rowsCount := int((endTimestamp - startTimestamp) / (60 * 1000))
	if *startKey > *endKey {
		log.Fatalf("-startkey=%d cannot exceed -endkey=%d", *startKey, *endKey)
	}

	workCh := make(chan work)
	var workersWg sync.WaitGroup
	for i := 0; i < *workers; i++ {
		workersWg.Add(1)
		go func() {
			defer workersWg.Done()
			worker(workCh)
		}()
	}
	statsReporterStopCh := make(chan struct{})
	var statsReporterWG sync.WaitGroup
	statsReporterWG.Add(1)
	go func() {
		defer statsReporterWG.Done()
		statsReporter(statsReporterStopCh)
	}()
	keysCount := *endKey - *startKey + 1
	startTime = time.Now()
	rowsTotal = rowsCount * keysCount
	for startTimestamp < endTimestamp {
		for key := *startKey; key <= *endKey; key++ {
			w := work{
				key:            key,
				startTimestamp: startTimestamp,
				rowsCount:      24 * 60,
			}
			workCh <- w
		}
		startTimestamp += 24 * 3600 * 1000
	}
	close(workCh)
	workersWg.Wait()

	close(statsReporterStopCh)
	statsReporterWG.Wait()
}

var rowsTotal int
var rowsGenerated uint64
var startTime time.Time

func statsReporter(stopCh <-chan struct{}) {
	prevTime := time.Now()
	nPrev := uint64(0)
	ticker := time.NewTicker(*reportInterval)
	mustStop := false
	for !mustStop {
		select {
		case <-ticker.C:
		case <-stopCh:
			mustStop = true
		}
		t := time.Now()
		dAll := t.Sub(startTime).Seconds()
		dLast := t.Sub(prevTime).Seconds()
		nAll := atomic.LoadUint64(&rowsGenerated)
		nLast := nAll - nPrev
		log.Printf("created %d out of %d rows in %.3f seconds at %.0f rows/sec; instant speed %.0f rows/sec",
			nAll, rowsTotal, dAll, float64(nAll)/dAll, float64(nLast)/dLast)
		prevTime = t
		nPrev = nAll
	}
}

type work struct {
	key            int
	startTimestamp int64
	rowsCount      int
}

func (w *work) do(bw *bufio.Writer, r *rand.Rand) {
	switch *format {
	case "vmimport":
		writeSeriesVMImport(bw, r, w.key, w.rowsCount, w.startTimestamp)
	case "influx":
		writeSeriesInflux(bw, r, w.key, w.rowsCount, w.startTimestamp)
	default:
		log.Fatalf("unexpected `-format=%q`. Supported values: vmimport, influx", *format)
	}
	atomic.AddUint64(&rowsGenerated, uint64(w.rowsCount))
}

func worker(workCh <-chan work) {
	for w := range workCh {
		workerSingleRequest(workCh, w)
	}
}

func workerSingleRequest(workCh <-chan work, wk work) {
	pr, pw := io.Pipe()
	req, err := http.NewRequest("POST", *sink, pr)
	if err != nil {
		log.Fatalf("cannot create request to %q: %s", *sink, err)
	}
	w := io.Writer(pw)
	if *compress {
		zw, err := gzip.NewWriterLevel(pw, 1)
		if err != nil {
			log.Fatalf("unexpected error when creating gzip writer: %s", err)
		}
		w = zw
		req.Header.Set("Content-Encoding", "gzip")
	}
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Fatalf("unexpected error when performing request to %q: %s", *sink, err)
		}
		if resp.StatusCode != http.StatusNoContent {
			log.Printf("unexpected response code from %q: %d", *sink, resp.StatusCode)
			data, err := ioutil.ReadAll(resp.Body)
			if err != nil {
				log.Fatalf("cannot read response body: %s", err)
			}
			log.Fatalf("response body:\n%s", data)
		}
	}()
	bw := bufio.NewWriterSize(w, 16*1024)
	r := rand.New(rand.NewSource(time.Now().UnixNano()))
	blocks := 0
	ok := true
	for ok {
		wk.do(bw, r)
		blocks++
		if *blocksPerRequest > 0 && blocks >= *blocksPerRequest {
			break
		}
		wk, ok = <-workCh
	}
	_ = bw.Flush()
	if *compress {
		_ = w.(*gzip.Writer).Close()
	}
	_ = pw.Close()
	wg.Wait()
}

func writeSeriesVMImport(bw *bufio.Writer, r *rand.Rand, sensorID, rowsCount int, startTimestamp int64) {
	min := 68 + r.ExpFloat64()/3.0
	e := math.Pow10(*digits)
	fmt.Fprintf(bw, `{"metric":{"__name__":"temperature","sensor_id":"%d"},"values":[`, sensorID)
	var buf []byte
	t := generateTemperature(r, min, e)
	for i := 0; i < rowsCount-1; i++ {
		buf = strconv.AppendFloat(buf[:0], t, 'f', *digits, 64)
		buf = append(buf, ',')
		bw.Write(buf)
		t = generateTemperature(r, min, e)
	}
	fmt.Fprintf(bw, `%.*f],"timestamps":[`, *digits, t)
	timestamp := startTimestamp
	for i := 0; i < rowsCount-1; i++ {
		buf = strconv.AppendInt(buf[:0], timestamp, 10)
		buf = append(buf, ',')
		bw.Write(buf)
		timestamp = startTimestamp + int64(i+1)*60*1000
	}
	fmt.Fprintf(bw, "%d]}\n", timestamp)
}

func writeSeriesInflux(bw *bufio.Writer, r *rand.Rand, sensorID, rowsCount int, startTimestamp int64) {
	min := 68 + r.ExpFloat64()/3.0
	e := math.Pow10(*digits)
	var buf []byte
	for i := 0; i < rowsCount; i++ {
		t := generateTemperature(r, min, e)
		timestamp := (startTimestamp + int64(i)*60*1000) * 1e6
		buf = append(buf[:0], "temperature,sensor_id="...)
		buf = strconv.AppendInt(buf, int64(sensorID), 10)
		buf = append(buf, " value="...)
		buf = strconv.AppendFloat(buf, t, 'f', *digits, 64)
		buf = append(buf, ' ')
		buf = strconv.AppendInt(buf, timestamp, 10)
		buf = append(buf, '\n')
		bw.Write(buf)
	}
}

func generateTemperature(r *rand.Rand, min, e float64) float64 {
	t := r.ExpFloat64()/1.5 + min
	return math.Round(t*e) / e
}

func mustParseDate(dateStr, flagName string) int64 {
	startTime, err := time.Parse("2006-01-02", dateStr)
	if err != nil {
		log.Fatalf("cannot parse -%s=%q: %s", flagName, dateStr, err)
	}
	return startTime.UnixNano() / 1e6
}

--------------------------------------------------------------------------------
/main_test.go:
--------------------------------------------------------------------------------
package main

import (
	"bufio"
	"bytes"
	"math/rand"
	"testing"
)

func TestWriteSeriesVMImport(t *testing.T) {
	startTimestamp := int64(1234)
	var bb bytes.Buffer
	bw := bufio.NewWriter(&bb)
	r := rand.New(rand.NewSource(startTimestamp))
	sensorID := 789
	rowsCount := 3
	writeSeriesVMImport(bw, r, sensorID, rowsCount, startTimestamp)
	if err := bw.Flush(); err != nil {
		t.Fatalf("unexpected error in bw.Flush: %s", err)
	}
	result := bb.String()
	resultExpected := `{"metric":{"__name__":"temperature","sensor_id":"789"},"values":[68.34,69.61,69.86],"timestamps":[1234,61234,121234]}` + "\n"
	if result != resultExpected {
		t.Fatalf("unexpected result;\ngot\n%s\nwant\n%s", result, resultExpected)
	}
}

func TestWriteSeriesInflux(t *testing.T) {
	startTimestamp := int64(1234)
	var bb bytes.Buffer
	bw := bufio.NewWriter(&bb)
	r := rand.New(rand.NewSource(startTimestamp))
	sensorID := 789
	rowsCount := 3
	writeSeriesInflux(bw, r, sensorID, rowsCount, startTimestamp)
	if err := bw.Flush(); err != nil {
		t.Fatalf("unexpected error in bw.Flush: %s", err)
	}
	result := bb.String()
	resultExpected := `temperature,sensor_id=789 value=68.34 1234000000
temperature,sensor_id=789 value=69.61 61234000000
temperature,sensor_id=789 value=69.86 121234000000
`
	if result != resultExpected {
		t.Fatalf("unexpected result;\ngot\n%s\nwant\n%s", result, resultExpected)
	}
}

--------------------------------------------------------------------------------
/main_timing_test.go:
--------------------------------------------------------------------------------
package main

import (
	"bufio"
	"fmt"
	"io/ioutil"
	"math/rand"
	"testing"
	"time"
)

func BenchmarkWriteSeriesVMImport(b *testing.B) {
	const rowsCount = 24 * 60
	const loopsCount = 10
	b.ReportAllocs()
	b.SetBytes(rowsCount * loopsCount)
	b.RunParallel(func(pb *testing.PB) {
		startTimestamp := time.Now().UnixNano() / 1e6
		bw := bufio.NewWriter(ioutil.Discard)
		r := rand.New(rand.NewSource(startTimestamp))
		sensorID := int(startTimestamp) % 1e6
		for pb.Next() {
			for i := 0; i < loopsCount; i++ {
				writeSeriesVMImport(bw, r, sensorID, rowsCount, startTimestamp)
			}
			if err := bw.Flush(); err != nil {
				panic(fmt.Errorf("unexpected error on bufio.Writer.Flush: %s", err))
			}
		}
	})
}

func BenchmarkWriteSeriesInflux(b *testing.B) {
	const rowsCount = 24 * 60
	const loopsCount = 10
	b.ReportAllocs()
	b.SetBytes(rowsCount * loopsCount)
	b.RunParallel(func(pb *testing.PB) {
		startTimestamp := time.Now().UnixNano() / 1e6
		bw := bufio.NewWriter(ioutil.Discard)
		r := rand.New(rand.NewSource(startTimestamp))
		sensorID := int(startTimestamp) % 1e6
		for pb.Next() {
			for i := 0; i < loopsCount; i++ {
				writeSeriesInflux(bw, r, sensorID, rowsCount, startTimestamp)
			}
			if err := bw.Flush(); err != nil {
				panic(fmt.Errorf("unexpected error on bufio.Writer.Flush: %s", err))
			}
		}
	})
}

--------------------------------------------------------------------------------
/queries/full_year_10percent.sh:
--------------------------------------------------------------------------------
curl -G http://localhost:8428/api/v1/query -d "time=$(date -d '2020-01-01 00:00:00 UTC' +%s)" -d 'query=max(max_over_time(temperature{sensor_id=~"1.*"}[366d]))'

--------------------------------------------------------------------------------
/queries/full_year_1percent.sh:
--------------------------------------------------------------------------------
curl -G http://localhost:8428/api/v1/query -d "time=$(date -d '2020-01-01 00:00:00 UTC' +%s)" -d 'query=max(max_over_time(temperature{sensor_id=~"12.*"}[366d]))'

--------------------------------------------------------------------------------
/queries/full_year_parallel.sh:
--------------------------------------------------------------------------------
curl -G http://localhost:8428/api/v1/query -d "time=$(date -d '2020-01-01 00:00:00 UTC' +%s)" -d 'query=max(max_over_time(temperature{sensor_id=~"[1-3].*"}[366d]))' &
curl -G http://localhost:8428/api/v1/query -d "time=$(date -d '2020-01-01 00:00:00 UTC' +%s)" -d 'query=max(max_over_time(temperature{sensor_id=~"[4-6].*"}[366d]))' &
curl -G http://localhost:8428/api/v1/query -d "time=$(date -d '2020-01-01 00:00:00 UTC' +%s)" -d 'query=max(max_over_time(temperature{sensor_id=~"[7-9].*"}[366d]))' &

wait

--------------------------------------------------------------------------------
/queries/full_year_serial.sh:
--------------------------------------------------------------------------------
curl -G http://localhost:8428/api/v1/query -d "time=$(date -d '2020-01-01 00:00:00 UTC' +%s)" -d 'query=max(max_over_time(temperature[366d]))'

--------------------------------------------------------------------------------
/queries/one_day.sh:
--------------------------------------------------------------------------------
curl -G http://localhost:8428/api/v1/query -d "time=$(date -d '2020-01-01 00:00:00 UTC' +%s)" -d 'query=max(max_over_time(temperature[1d]))'

--------------------------------------------------------------------------------
/queries/three_months.sh:
--------------------------------------------------------------------------------
curl -G http://localhost:8428/api/v1/query -d "time=$(date -d '2020-01-01 00:00:00 UTC' +%s)" -d 'query=max(max_over_time(temperature[90d]))'

--------------------------------------------------------------------------------
/scripts/influx_load_data_1M.sh:
--------------------------------------------------------------------------------
curl -X POST http://localhost:8086/query?q=create%20database%20benchmark

./billy -startdate=2019-01-01 -enddate=2019-12-31 -startkey=1 -endkey=1000000 -sink='http://localhost:8086/write?db=benchmark' -format=influx -blocks-per-request=10

--------------------------------------------------------------------------------
/scripts/load_data_1M.sh:
--------------------------------------------------------------------------------
./billy -startdate=2019-01-01 -enddate=2019-12-31 -startkey=1 -endkey=1000000 -sink=http://billy-server:8428/api/v1/import

--------------------------------------------------------------------------------
/scripts/load_data_500K_client1.sh:
--------------------------------------------------------------------------------
./billy -startdate=2019-01-01 -enddate=2019-12-31 -startkey=1 -endkey=500000 -sink=http://billy-server:8428/api/v1/import

--------------------------------------------------------------------------------
/scripts/load_data_500K_client2.sh:
--------------------------------------------------------------------------------
./billy -startdate=2019-01-01 -enddate=2019-12-31 -startkey=500001 -endkey=1000000 -sink=http://billy-server:8428/api/v1/import
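The `full_year_10percent.sh` and `full_year_1percent.sh` queries select sensor subsets with a regexp over the decimal sensor ID, so the percentages in the script names are only nominal. A sanity check (our arithmetic, not from the repo) of how many of the IDs 1..1000000 the `1.*` pattern actually matches:

```shell
# One block of matches per ID length: 1; 10..19; 100..199; 1000..1999;
# 10000..19999; 100000..199999; plus 1000000 itself.
echo $((1 + 10 + 100 + 1000 + 10000 + 100000 + 1))   # 111112, ~11.1% of 1M
```

By the same counting, `12.*` matches 11111 IDs (~1.1%), so both scripts slightly overshoot their nominal fractions.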

--------------------------------------------------------------------------------
/scripts/start_victoria_metrics.sh:
--------------------------------------------------------------------------------
VERSION=v1.35.0
ARCHIVE=victoria-metrics-$VERSION.tar.gz

test -f $ARCHIVE || curl -L https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/$VERSION/$ARCHIVE > $ARCHIVE
tar xzf $ARCHIVE
ulimit -n 100000
./victoria-metrics-prod -retentionPeriod=200 -storageDataPath=/mnt/disks/billy/victoria-metrics-data -search.maxQueryDuration=24h -search.maxUniqueTimeseries=2000000

--------------------------------------------------------------------------------
/scripts/write_needle.sh:
--------------------------------------------------------------------------------
curl -X POST http://localhost:8428/api/v1/import -d '{"metric":{"__name__":"temperature","sensor_id":"12345"},"values":[123],"timestamps":[1564617600000]}'
--------------------------------------------------------------------------------
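`write_needle.sh` inserts a single "needle" point with value 123. The timestamp is in milliseconds; assuming GNU `date` is available, it decodes to midnight UTC in the middle of the benchmark year:

```shell
# 1564617600000 ms since the epoch -> 1564617600 s
date -u -d @1564617600 +%Y-%m-%dT%H:%M:%SZ   # 2019-08-01T00:00:00Z
```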