├── .github
│   └── FUNDING.yml
├── .gitignore
├── .travis.yml
├── Cargo.toml
├── LICENSE
├── README.md
├── benches
│   └── bench.rs
├── db
│   ├── gen_offset.js
│   ├── package-lock.json
│   ├── package.json
│   └── test.offset
├── go_offset_notes.md
├── go_ssb_msgpackhexbytes.txt
├── src
│   ├── flume_log.rs
│   ├── flume_view.rs
│   ├── go_offset_log.rs
│   ├── iter_at_offset.rs
│   ├── lib.rs
│   ├── log_entry.rs
│   ├── mem_log.rs
│   └── offset_log.rs
└── test_vecs
    ├── empty
    │   ├── data
    │   ├── jrnl
    │   └── ofst
    └── four_ssb_messages
        ├── data
        ├── jrnl
        └── ofst

--------------------------------------------------------------------------------
/.github/FUNDING.yml:
--------------------------------------------------------------------------------
1 | open_collective: sunrise-choir
2 | 
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | /target/
2 | **/*.rs.bk
3 | Cargo.lock
4 | db/node_modules
--------------------------------------------------------------------------------
/.travis.yml:
--------------------------------------------------------------------------------
1 | language: rust
2 | 
3 | cache: cargo
4 | script: cargo test && cargo doc --no-deps
5 | 
6 | deploy:
7 |   local-dir: ./target/doc
8 |   provider: pages
9 |   skip-cleanup: true
10 |   github-token: $GITHUB_TOKEN # Set in the settings page of your repository, as a secure variable
11 |   keep-history: false
12 |   verbose: true
13 |   on:
14 |     branch: master
--------------------------------------------------------------------------------
/Cargo.toml:
--------------------------------------------------------------------------------
1 | [package]
2 | name = "flumedb"
3 | version = "0.1.6"
4 | authors = ["Piet Geursen ", "sean billig
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | GNU LESSER GENERAL PUBLIC LICENSE
2 | Version 3, 29 June 2007
3 | 
4 | Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
5 | Everyone is permitted to copy and distribute verbatim copies
6 | of this license document, but changing it is not allowed.
7 | 
8 | 
9 | This version of the GNU Lesser General Public License incorporates
10 | the terms and conditions of version 3 of the GNU General Public
11 | License, supplemented by the additional permissions listed below.
12 | 
13 | 0. Additional Definitions.
14 | 
15 | As used herein, "this License" refers to version 3 of the GNU Lesser
16 | General Public License, and the "GNU GPL" refers to version 3 of the GNU
17 | General Public License.
18 | 
19 | "The Library" refers to a covered work governed by this License,
20 | other than an Application or a Combined Work as defined below.
21 | 
22 | An "Application" is any work that makes use of an interface provided
23 | by the Library, but which is not otherwise based on the Library.
24 | Defining a subclass of a class defined by the Library is deemed a mode
25 | of using an interface provided by the Library.
26 | 
27 | A "Combined Work" is a work produced by combining or linking an
28 | Application with the Library. The particular version of the Library
29 | with which the Combined Work was made is also called the "Linked
30 | Version".
31 | 
32 | The "Minimal Corresponding Source" for a Combined Work means the
33 | Corresponding Source for the Combined Work, excluding any source code
34 | for portions of the Combined Work that, considered in isolation, are
35 | based on the Application, and not on the Linked Version.
36 | 
37 | The "Corresponding Application Code" for a Combined Work means the
38 | object code and/or source code for the Application, including any data
39 | and utility programs needed for reproducing the Combined Work from the
40 | Application, but excluding the System Libraries of the Combined Work.
41 | 
42 | 1. 
Exception to Section 3 of the GNU GPL. 43 | 44 | You may convey a covered work under sections 3 and 4 of this License 45 | without being bound by section 3 of the GNU GPL. 46 | 47 | 2. Conveying Modified Versions. 48 | 49 | If you modify a copy of the Library, and, in your modifications, a 50 | facility refers to a function or data to be supplied by an Application 51 | that uses the facility (other than as an argument passed when the 52 | facility is invoked), then you may convey a copy of the modified 53 | version: 54 | 55 | a) under this License, provided that you make a good faith effort to 56 | ensure that, in the event an Application does not supply the 57 | function or data, the facility still operates, and performs 58 | whatever part of its purpose remains meaningful, or 59 | 60 | b) under the GNU GPL, with none of the additional permissions of 61 | this License applicable to that copy. 62 | 63 | 3. Object Code Incorporating Material from Library Header Files. 64 | 65 | The object code form of an Application may incorporate material from 66 | a header file that is part of the Library. You may convey such object 67 | code under terms of your choice, provided that, if the incorporated 68 | material is not limited to numerical parameters, data structure 69 | layouts and accessors, or small macros, inline functions and templates 70 | (ten or fewer lines in length), you do both of the following: 71 | 72 | a) Give prominent notice with each copy of the object code that the 73 | Library is used in it and that the Library and its use are 74 | covered by this License. 75 | 76 | b) Accompany the object code with a copy of the GNU GPL and this license 77 | document. 78 | 79 | 4. Combined Works. 80 | 81 | You may convey a Combined Work under terms of your choice that, 82 | taken together, effectively do not restrict modification of the 83 | portions of the Library contained in the Combined Work and reverse 84 | engineering for debugging such modifications, if you also do each of 85 | the following: 86 | 87 | a) Give prominent notice with each copy of the Combined Work that 88 | the Library is used in it and that the Library and its use are 89 | covered by this License. 90 | 91 | b) Accompany the Combined Work with a copy of the GNU GPL and this license 92 | document. 93 | 94 | c) For a Combined Work that displays copyright notices during 95 | execution, include the copyright notice for the Library among 96 | these notices, as well as a reference directing the user to the 97 | copies of the GNU GPL and this license document. 98 | 99 | d) Do one of the following: 100 | 101 | 0) Convey the Minimal Corresponding Source under the terms of this 102 | License, and the Corresponding Application Code in a form 103 | suitable for, and under terms that permit, the user to 104 | recombine or relink the Application with a modified version of 105 | the Linked Version to produce a modified Combined Work, in the 106 | manner specified by section 6 of the GNU GPL for conveying 107 | Corresponding Source. 108 | 109 | 1) Use a suitable shared library mechanism for linking with the 110 | Library. A suitable mechanism is one that (a) uses at run time 111 | a copy of the Library already present on the user's computer 112 | system, and (b) will operate properly with a modified version 113 | of the Library that is interface-compatible with the Linked 114 | Version. 
115 | 116 | e) Provide Installation Information, but only if you would otherwise 117 | be required to provide such information under section 6 of the 118 | GNU GPL, and only to the extent that such information is 119 | necessary to install and execute a modified version of the 120 | Combined Work produced by recombining or relinking the 121 | Application with a modified version of the Linked Version. (If 122 | you use option 4d0, the Installation Information must accompany 123 | the Minimal Corresponding Source and Corresponding Application 124 | Code. If you use option 4d1, you must provide the Installation 125 | Information in the manner specified by section 6 of the GNU GPL 126 | for conveying Corresponding Source.) 127 | 128 | 5. Combined Libraries. 129 | 130 | You may place library facilities that are a work based on the 131 | Library side by side in a single library together with other library 132 | facilities that are not Applications and are not covered by this 133 | License, and convey such a combined library under terms of your 134 | choice, if you do both of the following: 135 | 136 | a) Accompany the combined library with a copy of the same work based 137 | on the Library, uncombined with any other library facilities, 138 | conveyed under the terms of this License. 139 | 140 | b) Give prominent notice with the combined library that part of it 141 | is a work based on the Library, and explaining where to find the 142 | accompanying uncombined form of the same work. 143 | 144 | 6. Revised Versions of the GNU Lesser General Public License. 145 | 146 | The Free Software Foundation may publish revised and/or new versions 147 | of the GNU Lesser General Public License from time to time. Such new 148 | versions will be similar in spirit to the present version, but may 149 | differ in detail to address new problems or concerns. 150 | 151 | Each version is given a distinguishing version number. If the 152 | Library as you received it specifies that a certain numbered version 153 | of the GNU Lesser General Public License "or any later version" 154 | applies to it, you have the option of following the terms and 155 | conditions either of that published version or of any later version 156 | published by the Free Software Foundation. If the Library as you 157 | received it does not specify a version number of the GNU Lesser 158 | General Public License, you may choose any version of the GNU Lesser 159 | General Public License ever published by the Free Software Foundation. 160 | 161 | If the Library as you received it specifies that a proxy can decide 162 | whether future versions of the GNU Lesser General Public License shall 163 | apply, that proxy's public statement of acceptance of any version is 164 | permanent authorization for you to choose that version for the 165 | Library. 166 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | [![Build Status](https://travis-ci.com/sunrise-choir/flumedb-rs.svg?branch=master)](https://travis-ci.com/sunrise-choir/flumedb-rs) 2 | 3 | # FlumeDB 4 | 5 | The [Sunrise Choir](https://sunrisechoir.com) Flume is a re-write of [the JavaScript `flumedb`](https://github.com/flumedb/flumedb) into Rust with a new architecture for better performance and flexibility. 
6 | 
7 | ## Architecture
8 | 
9 | Flume is a modular database:
10 | 
11 | - our main source of truth is an append-only log (entries are only ever added at the end, never modified), which provides durable storage
12 | - our secondary, derived truths are views on the log, which focus on answering queries about the data
13 | - each view receives data through a read-only copy of the append-only log
14 | - each view can build up its own model to represent the data, either pointing back to the main source of truth (normalized) or materializing it (denormalized)
15 | - each view creates a structure optimized for the queries it provides; views don't need to be durable, because they can always be rebuilt from the main log
16 | 
17 | > In flume, each view remembers a version number, and if the version number changes, it just rebuilds the view. This means view code can be easily updated, or new views added: the view is simply rebuilt on startup.
18 | 
19 | ## Example
20 | 
21 | ```rust
22 | use flumedb::Error;
23 | use flumedb::OffsetLog;
24 | 
25 | fn main() -> Result<(), Error> {
26 |     let path = shellexpand::tilde("~/.ssb/flume/log.offset");
27 |     let log = OffsetLog::<u32>::open_read_only(path.as_ref())?;
28 | 
29 |     // Read the entry at offset 0.
30 |     let r = log.read(0)?;
31 |     // `r.data` is a json string in a standard ssb log.
32 |     // `r.next` is the offset of the next entry.
33 |     let r = log.read(r.next)?;
34 | 
35 |     log.iter()
36 |         .map(|e| serde_json::from_slice::<serde_json::Value>(&e.data).unwrap())
37 |         .for_each(|v| println!("{}", serde_json::to_string_pretty(&v).unwrap()));
38 | 
39 |     Ok(())
40 | }
41 | ```
42 | 
--------------------------------------------------------------------------------
/benches/bench.rs:
--------------------------------------------------------------------------------
1 | #[macro_use]
2 | extern crate criterion;
3 | 
4 | use criterion::*;
5 | 
6 | extern crate serde;
7 | extern crate serde_json;
8 | 
9 | extern crate byteorder;
10 | extern crate bytes;
11 | extern crate flumedb;
12 | extern crate tempfile;
13 | 
14 | use flumedb::flume_log::FlumeLog;
15 | use flumedb::mem_log::MemLog;
16 | use flumedb::offset_log::*;
17 | use serde_json::{from_slice, Value};
18 | use tempfile::tempfile;
19 | 
20 | const NUM_ENTRIES: usize = 10000;
21 | 
22 | static DEFAULT_TEST_BUF: &[u8] = b"{\"value\": 1}";
23 | 
24 | fn default_test_bufs() -> Vec<&'static [u8]> {
25 |     vec![DEFAULT_TEST_BUF; NUM_ENTRIES]
26 | }
27 | 
28 | fn temp_offset_log() -> OffsetLog<u32> {
29 |     OffsetLog::<u32>::from_file(tempfile().unwrap()).unwrap()
30 | }
31 | 
32 | fn offset_log_decode(c: &mut Criterion) {
33 |     c.bench_function("offset_log_decode", |b| {
34 |         b.iter(|| {
35 |             let bytes: &[u8] = &[0, 0, 0, 8, 1, 2, 3, 4, 5, 6, 7, 8, 0, 0, 0, 8, 0, 0, 0, 20];
36 |             let r = read_next::<u32, _>(0, &bytes).unwrap();
37 | 
38 |             assert_eq!(&r.entry.data, &[1, 2, 3, 4, 5, 6, 7, 8]);
39 |         })
40 |     });
41 | }
42 | 
43 | fn offset_log_append(c: &mut Criterion) {
44 |     let buf = DEFAULT_TEST_BUF;
45 |     c.bench_function("offset log append", move |b| {
46 |         b.iter_batched(
47 |             temp_offset_log,
48 |             |mut log| {
49 |                 for i in 0..NUM_ENTRIES {
50 |                     let offset = log.append(buf).unwrap() as usize;
51 |                     assert_eq!(offset, i * (12 + buf.len()));
52 |                 }
53 |             },
54 |             BatchSize::SmallInput,
55 |         );
56 |     });
57 | }
58 | 
59 | fn offset_log_append_batch(c: &mut Criterion) {
60 |     let test_bufs = default_test_bufs();
61 |     c.bench_function("offset log append batch - all", move |b| {
62 |         b.iter_batched(
63 |             temp_offset_log,
64 |             |mut log| {
65 |                 let offsets = log.append_batch(&test_bufs).unwrap();
66 |                 assert_eq!(offsets.len(), NUM_ENTRIES);
67 | 
assert_eq!(offsets[0], 0); 68 | }, 69 | BatchSize::SmallInput, 70 | ); 71 | }); 72 | 73 | let test_bufs = default_test_bufs(); 74 | c.bench_function("offset log append batch - 100", move |b| { 75 | b.iter_batched( 76 | temp_offset_log, 77 | |mut log| { 78 | for chunk in test_bufs.chunks(100) { 79 | let offsets = log.append_batch(&chunk).unwrap(); 80 | assert_eq!(offsets.len(), chunk.len()); 81 | } 82 | }, 83 | BatchSize::SmallInput, 84 | ); 85 | }); 86 | } 87 | 88 | fn offset_log_get(c: &mut Criterion) { 89 | let mut log = temp_offset_log(); 90 | let offsets = log.append_batch(&default_test_bufs()).unwrap(); 91 | 92 | c.bench_function("offset log get", move |b| { 93 | b.iter(|| { 94 | for offset in offsets.iter() { 95 | let result = log.get(*offset).unwrap(); 96 | assert_eq!(result.len(), DEFAULT_TEST_BUF.len()); 97 | } 98 | }) 99 | }); 100 | } 101 | 102 | fn offset_log_iter(c: &mut Criterion) { 103 | // Forward 104 | let mut log = temp_offset_log(); 105 | let offsets = log.append_batch(&default_test_bufs()).unwrap(); 106 | 107 | c.bench_function("offset log iter forward", move |b| { 108 | b.iter_batched( 109 | || log.iter(), 110 | |mut iter| { 111 | let count = iter.forward().count(); 112 | assert_eq!(count, offsets.len()); 113 | }, 114 | BatchSize::SmallInput, 115 | ) 116 | }); 117 | 118 | // Backward 119 | let mut log = temp_offset_log(); 120 | let offsets = log.append_batch(&default_test_bufs()).unwrap(); 121 | 122 | c.bench_function("offset log iter backward", move |b| { 123 | b.iter_batched( 124 | || log.iter_at_offset(log.end()), 125 | |mut iter| { 126 | let count = iter.backward().count(); 127 | assert_eq!(count, offsets.len()); 128 | }, 129 | BatchSize::SmallInput, 130 | ) 131 | }); 132 | 133 | // Forward with json decoding 134 | let mut log = temp_offset_log(); 135 | log.append_batch(&default_test_bufs()).unwrap(); 136 | 137 | c.bench_function("offset log iter forward and json decode", move |b| { 138 | b.iter_batched( 139 | || log.iter(), 140 | |mut iter| { 141 | let sum: u64 = iter 142 | .forward() 143 | .map(|val| from_slice(&val.data).unwrap()) 144 | .map(|val: Value| match val["value"] { 145 | Value::Number(ref num) => { 146 | let r: u64 = num.as_u64().unwrap(); 147 | r 148 | } 149 | _ => panic!(), 150 | }) 151 | .sum(); 152 | assert_eq!(sum, NUM_ENTRIES as u64); 153 | }, 154 | BatchSize::SmallInput, 155 | ) 156 | }); 157 | } 158 | 159 | fn mem_log_get(c: &mut Criterion) { 160 | let mut log = MemLog::new(); 161 | let test_buf = DEFAULT_TEST_BUF; 162 | let offsets: Vec = (0..NUM_ENTRIES) 163 | .map(|_| log.append(test_buf).unwrap()) 164 | .collect(); 165 | 166 | c.bench_function("mem log get", move |b| { 167 | b.iter(|| { 168 | for offset in offsets.iter() { 169 | let result = log.get(*offset).unwrap(); 170 | assert_eq!(result.len(), test_buf.len()); 171 | } 172 | }) 173 | }); 174 | } 175 | 176 | fn mem_log_append(c: &mut Criterion) { 177 | c.bench_function("mem log append", move |b| { 178 | b.iter(|| { 179 | let mut log = MemLog::new(); 180 | 181 | let offsets: Vec = (0..NUM_ENTRIES) 182 | .map(|_| log.append(DEFAULT_TEST_BUF).unwrap()) 183 | .collect(); 184 | 185 | assert_eq!(offsets.len(), NUM_ENTRIES as usize); 186 | }) 187 | }); 188 | } 189 | 190 | fn mem_log_iter(c: &mut Criterion) { 191 | let mut log = MemLog::new(); 192 | 193 | (0..NUM_ENTRIES).for_each(|_| { 194 | log.append(DEFAULT_TEST_BUF).unwrap(); 195 | }); 196 | 197 | c.bench_function("mem log iter and json decode", move |b| { 198 | b.iter(|| { 199 | let sum: u64 = log 200 | .into_iter() 201 | .map(|val| 
from_slice(&val).unwrap()) 202 | .map(|val: Value| match val["value"] { 203 | Value::Number(ref num) => { 204 | let result = num.as_u64().unwrap(); 205 | result 206 | } 207 | _ => panic!(), 208 | }) 209 | .sum(); 210 | 211 | assert!(sum > 0); 212 | }) 213 | }); 214 | } 215 | 216 | criterion_group! { 217 | name = offset_log; 218 | config = Criterion::default().sample_size(10); 219 | targets = offset_log_get, offset_log_append, offset_log_append_batch, offset_log_iter, offset_log_decode 220 | } 221 | 222 | criterion_group! { 223 | name = mem_log; 224 | config = Criterion::default().sample_size(10); 225 | targets = mem_log_get, mem_log_append, mem_log_iter 226 | } 227 | 228 | criterion_main!(offset_log, mem_log); 229 | -------------------------------------------------------------------------------- /db/gen_offset.js: -------------------------------------------------------------------------------- 1 | var OffsetLog = require('flumelog-offset') 2 | var codec = require('flumecodec') 3 | var Flume = require('flumedb') 4 | 5 | var db = Flume(OffsetLog("./test.offset", {codec: codec.json})) 6 | 7 | const NUM_ELEMENTS = 10 8 | 9 | for (var i = 0; i < NUM_ELEMENTS; i++) { 10 | db.append({value: i}, function (cb) { 11 | 12 | }) 13 | } 14 | -------------------------------------------------------------------------------- /db/package-lock.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "gen_offset", 3 | "version": "1.0.0", 4 | "lockfileVersion": 1, 5 | "requires": true, 6 | "dependencies": { 7 | "aligned-block-file": { 8 | "version": "1.1.4", 9 | "resolved": "https://registry.npmjs.org/aligned-block-file/-/aligned-block-file-1.1.4.tgz", 10 | "integrity": "sha512-KE27h781ueGONLqSBY2ik6LJRr9vo0L/i3GGhtQgJfCk0MO2QNSgrXZVCk2t7UeZKYTxcTfl+yBgcZWqBiAGPQ==", 11 | "requires": { 12 | "hashlru": "2.3.0", 13 | "int53": "0.2.4", 14 | "mkdirp": "0.5.1", 15 | "obv": "0.0.0", 16 | "uint48be": "1.0.2" 17 | }, 18 | "dependencies": { 19 | "obv": { 20 | "version": "0.0.0", 21 | "resolved": "https://registry.npmjs.org/obv/-/obv-0.0.0.tgz", 22 | "integrity": "sha1-7eq4Ro+R1BkzYu1/kdC5bdOaecE=" 23 | } 24 | } 25 | }, 26 | "append-batch": { 27 | "version": "0.0.1", 28 | "resolved": "https://registry.npmjs.org/append-batch/-/append-batch-0.0.1.tgz", 29 | "integrity": "sha1-kiSFjlVpl8zAfxHx7poShTKqDSU=" 30 | }, 31 | "cont": { 32 | "version": "1.0.3", 33 | "resolved": "https://registry.npmjs.org/cont/-/cont-1.0.3.tgz", 34 | "integrity": "sha1-aHTx6TX8qZ0EjK6qrZoK6wILzOA=", 35 | "requires": { 36 | "continuable": "1.2.0", 37 | "continuable-para": "1.2.0", 38 | "continuable-series": "1.2.0" 39 | } 40 | }, 41 | "continuable": { 42 | "version": "1.2.0", 43 | "resolved": "https://registry.npmjs.org/continuable/-/continuable-1.2.0.tgz", 44 | "integrity": "sha1-CCd0aNQRNiAAdMz4cpQwjRafJbY=" 45 | }, 46 | "continuable-hash": { 47 | "version": "0.1.4", 48 | "resolved": "https://registry.npmjs.org/continuable-hash/-/continuable-hash-0.1.4.tgz", 49 | "integrity": "sha1-gcdNQXcdjJJ4Ph4A5fEbNNbfx4w=", 50 | "requires": { 51 | "continuable": "1.1.8" 52 | }, 53 | "dependencies": { 54 | "continuable": { 55 | "version": "1.1.8", 56 | "resolved": "https://registry.npmjs.org/continuable/-/continuable-1.1.8.tgz", 57 | "integrity": "sha1-3Id7R0FghwrjvN6HM2Jo6+UFl9U=" 58 | } 59 | } 60 | }, 61 | "continuable-list": { 62 | "version": "0.1.6", 63 | "resolved": "https://registry.npmjs.org/continuable-list/-/continuable-list-0.1.6.tgz", 64 | "integrity": "sha1-h88G7FgHFuEN/5X7C4TF8OisrF8=", 65 | 
"requires": { 66 | "continuable": "1.1.8" 67 | }, 68 | "dependencies": { 69 | "continuable": { 70 | "version": "1.1.8", 71 | "resolved": "https://registry.npmjs.org/continuable/-/continuable-1.1.8.tgz", 72 | "integrity": "sha1-3Id7R0FghwrjvN6HM2Jo6+UFl9U=" 73 | } 74 | } 75 | }, 76 | "continuable-para": { 77 | "version": "1.2.0", 78 | "resolved": "https://registry.npmjs.org/continuable-para/-/continuable-para-1.2.0.tgz", 79 | "integrity": "sha1-RFUQ9klFndD8NchyAVFGEicxxYM=", 80 | "requires": { 81 | "continuable-hash": "0.1.4", 82 | "continuable-list": "0.1.6" 83 | } 84 | }, 85 | "continuable-series": { 86 | "version": "1.2.0", 87 | "resolved": "https://registry.npmjs.org/continuable-series/-/continuable-series-1.2.0.tgz", 88 | "integrity": "sha1-MkM5euk6cdZVswJoNKUVkLlYueg=" 89 | }, 90 | "explain-error": { 91 | "version": "1.0.4", 92 | "resolved": "https://registry.npmjs.org/explain-error/-/explain-error-1.0.4.tgz", 93 | "integrity": "sha1-p5PTrAytTGq1cemWj7urbLJTKSk=" 94 | }, 95 | "flumecodec": { 96 | "version": "0.0.1", 97 | "resolved": "https://registry.npmjs.org/flumecodec/-/flumecodec-0.0.1.tgz", 98 | "integrity": "sha1-rgSacUOGu4PjQmV6gpJLcDZKkNY=", 99 | "requires": { 100 | "level-codec": "6.2.0" 101 | } 102 | }, 103 | "flumedb": { 104 | "version": "1.0.1", 105 | "resolved": "https://registry.npmjs.org/flumedb/-/flumedb-1.0.1.tgz", 106 | "integrity": "sha512-mT0v0dY9EkWRGwDtTfavYNv2Z6nrMNlVZCNJD7qxjfPJymfv8kNYB4UvDdBHleHegvzjufjnE73IkRG5DgMjww==", 107 | "requires": { 108 | "cont": "1.0.3", 109 | "explain-error": "1.0.4", 110 | "obv": "0.0.1", 111 | "pull-cont": "0.0.0", 112 | "pull-looper": "1.0.0", 113 | "pull-stream": "3.6.9" 114 | } 115 | }, 116 | "flumelog-offset": { 117 | "version": "3.3.2", 118 | "resolved": "https://registry.npmjs.org/flumelog-offset/-/flumelog-offset-3.3.2.tgz", 119 | "integrity": "sha512-KG0TCb+cWuEvnL44xjBhVNu+jRmJ8Msh2b1krYb4FllLwSbjreaCU/hH3uzv+HmUrtU/EhJepcAu79WxLH3EZQ==", 120 | "requires": { 121 | "aligned-block-file": "1.1.4", 122 | "append-batch": "0.0.1", 123 | "explain-error": "1.0.4", 124 | "hashlru": "2.3.0", 125 | "int53": "0.2.4", 126 | "looper": "4.0.0", 127 | "ltgt": "2.2.1", 128 | "obv": "0.0.1", 129 | "pull-cursor": "3.0.0", 130 | "pull-looper": "1.0.0", 131 | "uint48be": "1.0.2" 132 | } 133 | }, 134 | "hashlru": { 135 | "version": "2.3.0", 136 | "resolved": "https://registry.npmjs.org/hashlru/-/hashlru-2.3.0.tgz", 137 | "integrity": "sha512-0cMsjjIC8I+D3M44pOQdsy0OHXGLVz6Z0beRuufhKa0KfaD2wGwAev6jILzXsd3/vpnNQJmWyZtIILqM1N+n5A==" 138 | }, 139 | "int53": { 140 | "version": "0.2.4", 141 | "resolved": "https://registry.npmjs.org/int53/-/int53-0.2.4.tgz", 142 | "integrity": "sha1-XtjXqtbFxlZ8rmmqf/xKEJ7oD4Y=" 143 | }, 144 | "level-codec": { 145 | "version": "6.2.0", 146 | "resolved": "https://registry.npmjs.org/level-codec/-/level-codec-6.2.0.tgz", 147 | "integrity": "sha1-pLUkS7akwvcj1oodZOmAxTYn2dQ=" 148 | }, 149 | "looper": { 150 | "version": "4.0.0", 151 | "resolved": "https://registry.npmjs.org/looper/-/looper-4.0.0.tgz", 152 | "integrity": "sha1-dwat7VmpntygbmtUu4bI7BnJUVU=" 153 | }, 154 | "ltgt": { 155 | "version": "2.2.1", 156 | "resolved": "https://registry.npmjs.org/ltgt/-/ltgt-2.2.1.tgz", 157 | "integrity": "sha1-81ypHEk/e3PaDgdJUwTxezH4fuU=" 158 | }, 159 | "minimist": { 160 | "version": "0.0.8", 161 | "resolved": "http://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz", 162 | "integrity": "sha1-hX/Kv8M5fSYluCKCYuhqp6ARsF0=" 163 | }, 164 | "mkdirp": { 165 | "version": "0.5.1", 166 | "resolved": 
"http://registry.npmjs.org/mkdirp/-/mkdirp-0.5.1.tgz", 167 | "integrity": "sha1-MAV0OOrGz3+MR2fzhkjWaX11yQM=", 168 | "requires": { 169 | "minimist": "0.0.8" 170 | } 171 | }, 172 | "obv": { 173 | "version": "0.0.1", 174 | "resolved": "https://registry.npmjs.org/obv/-/obv-0.0.1.tgz", 175 | "integrity": "sha1-yyNhBjQVNvDaxIFeBnCCIcrX+14=" 176 | }, 177 | "pull-cont": { 178 | "version": "0.0.0", 179 | "resolved": "https://registry.npmjs.org/pull-cont/-/pull-cont-0.0.0.tgz", 180 | "integrity": "sha1-P6xIuBrJe3W6ATMgiLDOevjBvg4=" 181 | }, 182 | "pull-cursor": { 183 | "version": "3.0.0", 184 | "resolved": "https://registry.npmjs.org/pull-cursor/-/pull-cursor-3.0.0.tgz", 185 | "integrity": "sha512-95lZVSF2eSEdOmUtlOBaD9p5YOvlYeCr5FBv2ySqcj/4rpaXI6d8OH+zPHHjKAf58R8QXJRZuyfHkcCX8TZbAg==", 186 | "requires": { 187 | "looper": "4.0.0", 188 | "ltgt": "2.2.1", 189 | "pull-stream": "3.6.9" 190 | } 191 | }, 192 | "pull-looper": { 193 | "version": "1.0.0", 194 | "resolved": "https://registry.npmjs.org/pull-looper/-/pull-looper-1.0.0.tgz", 195 | "integrity": "sha512-djlD60A6NGe5goLdP5pgbqzMEiWmk1bInuAzBp0QOH4vDrVwh05YDz6UP8+pOXveKEk8wHVP+rB2jBrK31QMPA==", 196 | "requires": { 197 | "looper": "4.0.0" 198 | } 199 | }, 200 | "pull-stream": { 201 | "version": "3.6.9", 202 | "resolved": "https://registry.npmjs.org/pull-stream/-/pull-stream-3.6.9.tgz", 203 | "integrity": "sha512-hJn4POeBrkttshdNl0AoSCVjMVSuBwuHocMerUdoZ2+oIUzrWHFTwJMlbHND7OiKLVgvz6TFj8ZUVywUMXccbw==" 204 | }, 205 | "uint48be": { 206 | "version": "1.0.2", 207 | "resolved": "https://registry.npmjs.org/uint48be/-/uint48be-1.0.2.tgz", 208 | "integrity": "sha512-jNn1eEi81BLiZfJkjbiAKPDMj7iFrturKazqpBu0aJYLr6evgkn+9rgkX/gUwPBj5j2Ri5oUelsqC/S1zmpWBA==" 209 | } 210 | } 211 | } 212 | -------------------------------------------------------------------------------- /db/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "gen_offset", 3 | "version": "1.0.0", 4 | "description": "", 5 | "main": "gen_offset.js", 6 | "scripts": { 7 | "test": "echo \"Error: no test specified\" && exit 1" 8 | }, 9 | "author": "", 10 | "license": "LGPL-3.0", 11 | "dependencies": { 12 | "flumecodec": "0.0.1", 13 | "flumedb": "^1.0.1", 14 | "flumelog-offset": "^3.3.2" 15 | } 16 | } 17 | -------------------------------------------------------------------------------- /db/test.offset: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sunrise-choir/flumedb-rs/915ea7577ddb59c2b9f1537730d680bb637eafee/db/test.offset -------------------------------------------------------------------------------- /go_offset_notes.md: -------------------------------------------------------------------------------- 1 | # Notes 2 | 3 | ## What do we need to build? 4 | 5 | - Really, all we need is a unidirecitonal iterator 6 | - TODO: check which bits of the flume api patchql uses 7 | 8 | ## How would we iterate over a go offset? 9 | 10 | - Open all three files. 11 | 12 | - Check the journal 13 | - (how do we respect locking?) 14 | - the journal gives us the number or entries in the log 15 | - Start iterating the offset file 16 | - map the offset file number to size. 17 | - map the size to a slice of bytes by reading that size from the file. 18 | 19 | Actually, that's not even needed right? 20 | 21 | - all we do is open the data file, read a u64 as the size, then read that many bytes. Repeat. 
22 | 23 | 24 | 25 | ## Testing 26 | 27 | cryptix tried to add tests for ssb messages but failed to understand the .map(type..) stuff on the iterator and got a panic: 28 | 29 | ``` 30 | $ cargo test 31 | 32 | ... 33 | 34 | failures: 35 | 36 | ---- go_offset_log::test::ssb_messages stdout ---- 37 | thread 'go_offset_log::test::ssb_messages' panicked at 'called `Result::unwrap()` on an `Err` value: FromUtf8Error { bytes: [134, 166, 65, 117, 116, 104, 111, 114, 130, 164, 65, 108, 103, 111, 167, 101, 100, 50, 53, 53, 49, 57, 162, 73, 68, 218, 0, 32, 252, 86, 210, 174, 217, 193, 106, 79, 195, 74, 31, 175, 157, 175, 85, 245, 187, 143, 225, 99, 213, 34, 94, 251, 142, 150, 30, 254, 168, 193, 223, 82, 163, 75, 101, 121, 130, 164, 65, 108, 103, 111, 166, 115, 104, 97, 50, 53, 54, 164, 72, 97, 115, 104, 218, 0, 32, 187, 110, 22, 14, 1, 75, 74, 100, 92, 125, 185, 28, 161, 22, 127, 164, 235, 84, 144, 227, 238, 69, 81, 5, 59, 181, 32, 35, 20, 75, 150, 173, 168, 80, 114, 101, 118, 105, 111, 117, 115, 192, 163, 82, 97, 119, 218, 1, 156, 123, 10, 32, 32, 34, 112, 114, 101, 118, 105, 111, 117, 115, 34, 58, 32, 110, 117, 108, 108, 44, 10, 32, 32, 34, 97, 117, 116, 104, 111, 114, 34, 58, 32, 34, 64, 47, 70, 98, 83, 114, 116, 110, 66, 97, 107, 47, 68, 83, 104, 43, 118, 110, 97, 57, 86, 57, 98, 117, 80, 52, 87, 80, 86, 73, 108, 55, 55, 106, 112, 89, 101, 47, 113, 106, 66, 51, 49, 73, 61, 46, 101, 100, 50, 53, 53, 49, 57, 34, 44, 10, 32, 32, 34, 115, 101, 113, 117, 101, 110, 99, 101, 34, 58, 32, 49, 44, 10, 32, 32, 34, 116, 105, 109, 101, 115, 116, 97, 109, 112, 34, 58, 32, 49, 53, 54, 53, 55, 56, 48, 51, 56, 56, 56, 49, 50, 44, 10, 32, 32, 34, 104, 97, 115, 104, 34, 58, 32, 34, 115, 104, 97, 50, 53, 54, 34, 44, 10, 32, 32, 34, 99, 111, 110, 116, 101, 110, 116, 34, 58, 32, 123, 10, 32, 32, 32, 32, 34, 97, 98, 111, 117, 116, 34, 58 38 | , 32, 34, 64, 47, 70, 98, 83, 114, 116, 110, 66, 97, 107, 47, 68, 83, 104, 43, 118, 110, 97, 57, 86, 57, 98, 117, 80, 52, 87, 80, 86, 73, 108, 55, 55, 106, 112, 89, 101, 47, 113, 106, 66, 51, 49, 73, 61, 46, 101, 100, 50, 53, 53, 49, 57, 34, 44, 10, 32, 32, 32, 32, 34, 110, 97, 109, 101, 34, 58, 32, 34, 116, 101, 115, 116, 32, 117, 115, 101, 114, 34, 44, 10, 32, 32, 32, 32, 34, 116, 121, 112, 101, 34, 58, 32, 34, 97, 98, 111, 117, 116, 34, 10, 32, 32, 125, 44, 10, 32, 32, 34, 115, 105, 103, 110, 97, 116, 117, 114, 101, 34, 58, 32, 34, 51, 73, 99, 99, 71, 56, 116, 83, 54, 83, 70, 106, 80, 98, 114, 76, 122, 105, 81, 53, 98, 121, 69, 47, 110, 80, 101, 43, 84, 98, 105, 52, 98, 115, 121, 99, 48, 49, 82, 84, 53, 98, 69, 114, 52, 79, 122, 76, 113, 65, 114, 55, 121, 112, 111, 48, 104, 113, 74, 114, 79, 120, 116, 79, 67, 88, 106, 80, 120, 97, 88, 104, 121, 87, 117, 85, 72, 47, 106, 116, 48, 69, 80, 77, 67, 103, 61, 61, 46, 115, 105, 103, 46, 101, 100, 50, 53, 53, 49, 57, 34, 10, 125, 168, 83, 101, 113, 117, 101, 110, 99, 101, 1, 169, 84, 105, 109, 101, 115, 116, 97, 109, 112, 168, 193, 244, 94, 88, 93, 83, 233, 164], error: Utf8Error { valid_up_to: 0, error_len: Some(1) } }', src/libcore/result.rs:999:5 39 | note: Run with `RUST_BACKTRACE=1` environment variable to display a backtrace. 
40 | ``` 41 | 42 | docoding those bytes in go works though: 43 | 44 | ```go 45 | func TestDecode(t *testing.T) { 46 | r := require.New(t) 47 | var input = []byte{134, 166, 65, 117, 116, 104, 111, 114, 130, 164, 65, 108, 103, 111, 167, 101, 100, 50, 53, 53, 49, 57, 162, 73, 68, 218, 0, 32, 252, 86, 210, 174, 217, 193, 106, 79, 195, 74, 31, 175, 157, 175, 85, 245, 187, 143, 225, 99, 213, 34, 94, 251, 142, 150, 30, 254, 168, 193, 223, 82, 163, 75, 101, 121, 130, 164, 65, 108, 103, 111, 166, 115, 104, 97, 50, 53, 54, 164, 72, 97, 115, 104, 218, 0, 32, 187, 110, 22, 14, 1, 75, 74, 100, 92, 125, 185, 28, 161, 22, 127, 164, 235, 84, 144, 227, 238, 69, 81, 5, 59, 181, 32, 35, 20, 75, 150, 173, 168, 80, 114, 101, 118, 105, 111, 117, 115, 192, 163, 82, 97, 119, 218, 1, 156, 123, 10, 32, 32, 34, 112, 114, 101, 118, 105, 111, 117, 115, 34, 58, 32, 110, 117, 108, 108, 44, 10, 32, 32, 34, 97, 117, 116, 104, 111, 114, 34, 58, 32, 34, 64, 47, 70, 98, 83, 114, 116, 110, 66, 97, 107, 47, 68, 83, 104, 43, 118, 110, 97, 57, 86, 57, 98, 117, 80, 52, 87, 80, 86, 73, 108, 55, 55, 106, 112, 89, 101, 47, 113, 106, 66, 51, 49, 73, 61, 46, 101, 100, 50, 53, 53, 49, 57, 34, 44, 10, 32, 32, 34, 115, 101, 113, 117, 101, 110, 99, 101, 34, 58, 32, 49, 44, 10, 32, 32, 34, 116, 105, 109, 101, 115, 116, 97, 109, 112, 34, 58, 32, 49, 53, 54, 53, 55, 56, 48, 51, 56, 56, 56, 49, 50, 44, 10, 32, 32, 34, 104, 97, 115, 104, 34, 58, 32, 34, 115, 104, 97, 50, 53, 54, 34, 44, 10, 32, 32, 34, 99, 111, 110, 116, 101, 110, 116, 34, 58, 32, 123, 10, 32, 32, 32, 32, 34, 97, 98, 111, 117, 116, 34, 58, 32, 34, 64, 47, 70, 98, 83, 114, 116, 110, 66, 97, 107, 47, 68, 83, 104, 43, 118, 110, 97, 57, 86, 57, 98, 117, 80, 52, 87, 80, 86, 73, 108, 55, 55, 106, 112, 89, 101, 47, 113, 106, 66, 51, 49, 73, 61, 46, 101, 100, 50, 53, 53, 49, 57, 34, 44, 10, 32, 32, 32, 32, 34, 110, 97, 109, 101, 34, 58, 32, 34, 116, 101, 115, 116, 32, 117, 115, 101, 114, 34, 44, 10, 32, 32, 32, 32, 34, 116, 121, 112, 101, 34, 58, 32, 34, 97, 98, 111, 117, 116, 34, 10, 32, 32, 125, 44, 10, 32, 32, 34, 115, 105, 103, 110, 97, 116, 117, 114, 101, 34, 58, 32, 34, 51, 73, 99, 99, 71, 56, 116, 83, 54, 83, 70, 106, 80, 98, 114, 76, 122, 105, 81, 53, 98, 121, 69, 47, 110, 80, 101, 43, 84, 98, 105, 52, 98, 115, 121, 99, 48, 49, 82, 84, 53, 98, 69, 114, 52, 79, 122, 76, 113, 65, 114, 55, 121, 112, 111, 48, 104, 113, 74, 114, 79, 120, 116, 79, 67, 88, 106, 80, 120, 97, 88, 104, 121, 87, 117, 85, 72, 47, 106, 116, 48, 69, 80, 77, 67, 103, 61, 61, 46, 115, 105, 103, 46, 101, 100, 50, 53, 53, 49, 57, 34, 10, 125, 168, 83, 101, 113, 117, 101, 110, 99, 101, 1, 169, 84, 105, 109, 101, 115, 116, 97, 109, 112, 168, 193, 244, 94, 88, 93, 83, 233, 164} 48 | 49 | var msgt message.StoredMessage 50 | c := New(&msgt) 51 | 52 | v, err := c.Unmarshal(input) 53 | r.NoError(err) 54 | 55 | msg, ok := v.(message.StoredMessage) 56 | r.True(ok) 57 | t.Log(msg) 58 | 59 | // Author and Key are not stored a a string in msgpack but a struct with the fields Algo and ID and Algo and Hash (for key) 60 | r.Equal("@/FbSrtnBak/DSh+vna9V9buP4WPVIl77jpYe/qjB31I=.ed25519", msg.Author.Ref()) 61 | r.Equal("%u24WDgFLSmRcfbkcoRZ/pOtUkOPuRVEFO7UgIxRLlq0=.sha256", msg.Key.Ref()) 62 | // Raw should be an opaque byte array 63 | rawMessage := `{ 64 | "previous": null, 65 | "author": "@/FbSrtnBak/DSh+vna9V9buP4WPVIl77jpYe/qjB31I=.ed25519", 66 | "sequence": 1, 67 | "timestamp": 1565780388812, 68 | "hash": "sha256", 69 | "content": { 70 | "about": "@/FbSrtnBak/DSh+vna9V9buP4WPVIl77jpYe/qjB31I=.ed25519", 71 | "name": "test 
user", 72 | "type": "about" 73 | }, 74 | "signature": "3IccG8tS6SFjPbrLziQ5byE/nPe+Tbi4bsyc01RT5bEr4OzLqAr7ypo0hqJrOxtOCXjPxaXhyWuUH/jt0EPMCg==.sig.ed25519" 75 | }` 76 | r.Equal(rawMessage, string(msg.Raw)) 77 | 78 | } 79 | 80 | ``` -------------------------------------------------------------------------------- /go_ssb_msgpackhexbytes.txt: -------------------------------------------------------------------------------- 1 | badger 2019/08/14 12:59:48 INFO: All 0 tables opened in 0s 2 | 3 | 4 | msg(@/FbSrtnBak/DSh+vna9V9buP4WPVIl77jpYe/qjB31I=.ed25519:1) %u24WDgFLSmRcfbkcoRZ/pOtUkOPuRVEFO7UgIxRLlq0=.sha256 5 | { 6 | "previous": null, 7 | "author": "@/FbSrtnBak/DSh+vna9V9buP4WPVIl77jpYe/qjB31I=.ed25519", 8 | "sequence": 1, 9 | "timestamp": 1565780388812, 10 | "hash": "sha256", 11 | "content": { 12 | "about": "@/FbSrtnBak/DSh+vna9V9buP4WPVIl77jpYe/qjB31I=.ed25519", 13 | "name": "test user", 14 | "type": "about" 15 | }, 16 | "signature": "3IccG8tS6SFjPbrLziQ5byE/nPe+Tbi4bsyc01RT5bEr4OzLqAr7ypo0hqJrOxtOCXjPxaXhyWuUH/jt0EPMCg==.sig.ed25519" 17 | } 18 | 86a6417574686f7282a4416c676fa765643235353139a24944da0020fc56d2aed9c16a4fc34a1faf9daf55f5bb8fe163d5225efb8e961efea8c1df52a34b657982a4416c676fa6736861323536a448617368da0020bb6e160e014b4a645c7db91ca1167fa4eb5490e3ee4551053bb52023144b96ada850726576696f7573c0a3526177da019c7b0a20202270726576696f7573223a206e756c6c2c0a202022617574686f72223a2022402f46625372746e42616b2f4453682b766e6139563962755034575056496c37376a7059652f716a423331493d2e65643235353139222c0a20202273657175656e6365223a20312c0a20202274696d657374616d70223a20313536353738303338383831322c0a20202268617368223a2022736861323536222c0a202022636f6e74656e74223a207b0a202020202261626f7574223a2022402f46625372746e42616b2f4453682b766e6139563962755034575056496c37376a7059652f716a423331493d2e65643235353139222c0a20202020226e616d65223a2022746573742075736572222c0a202020202274797065223a202261626f7574220a20207d2c0a2020227369676e6174757265223a202233496363473874533653466a5062724c7a6951356279452f6e50652b54626934627379633031525435624572344f7a4c7141723779706f3068714a724f78744f43586a507861586879577555482f6a743045504d43673d3d2e7369672e65643235353139220a7da853657175656e636501a954696d657374616d70a8c1f45e585d53e9a4 19 | 2019/08/14 12:59:48 new message key: %u24WDgFLSmRcfbkcoRZ/pOtUkOPuRVEFO7UgIxRLlq0=.sha256 20 | 21 | 22 | msg(@/FbSrtnBak/DSh+vna9V9buP4WPVIl77jpYe/qjB31I=.ed25519:2) %z3P5mHT155tQRauZppdZqzAF6sefIyYE8Y4D78Ts+fk=.sha256 23 | { 24 | "previous": "%u24WDgFLSmRcfbkcoRZ/pOtUkOPuRVEFO7UgIxRLlq0=.sha256", 25 | "author": "@/FbSrtnBak/DSh+vna9V9buP4WPVIl77jpYe/qjB31I=.ed25519", 26 | "sequence": 2, 27 | "timestamp": 1565780388815, 28 | "hash": "sha256", 29 | "content": { 30 | "contact": "@p13zSAiOpguI9nsawkGijsnMfWmFd5rlUNpzekEE+vI=.ed25519", 31 | "following": true, 32 | "type": "contact" 33 | }, 34 | "signature": "53twFayAqpR2eiTOx6rsCBVwAbmUX7vE2SnNap7ATd3xuUjobcyGOwqd18JM4t7ttpqBOhZ34jnbGhRQCQuHAA==.sig.ed25519" 35 | } 36 | 
86a6417574686f7282a4416c676fa765643235353139a24944da0020fc56d2aed9c16a4fc34a1faf9daf55f5bb8fe163d5225efb8e961efea8c1df52a34b657982a4416c676fa6736861323536a448617368da0020cf73f99874f5e79b5045ab99a69759ab3005eac79f232604f18e03efc4ecf9f9a850726576696f757382a4416c676fa6736861323536a448617368da0020bb6e160e014b4a645c7db91ca1167fa4eb5490e3ee4551053bb52023144b96ada3526177da01d07b0a20202270726576696f7573223a202225753234574467464c536d526366626b636f525a2f704f74556b4f5075525645464f3755674978524c6c71303d2e736861323536222c0a202022617574686f72223a2022402f46625372746e42616b2f4453682b766e6139563962755034575056496c37376a7059652f716a423331493d2e65643235353139222c0a20202273657175656e6365223a20322c0a20202274696d657374616d70223a20313536353738303338383831352c0a20202268617368223a2022736861323536222c0a202022636f6e74656e74223a207b0a2020202022636f6e74616374223a2022407031337a5341694f70677549396e7361776b47696a736e4d66576d466435726c554e707a656b45452b76493d2e65643235353139222c0a2020202022666f6c6c6f77696e67223a20747275652c0a202020202274797065223a2022636f6e74616374220a20207d2c0a2020227369676e6174757265223a20223533747746617941717052326569544f783672734342567741626d555837764532536e4e617037415464337875556a6f626379474f77716431384a4d34743774747071424f685a33346a6e62476852514351754841413d3d2e7369672e65643235353139220a7da853657175656e636502a954696d657374616d70a8c29b9e945d53e9a4 37 | 2019/08/14 12:59:48 new message key: %z3P5mHT155tQRauZppdZqzAF6sefIyYE8Y4D78Ts+fk=.sha256 38 | 39 | 40 | msg(@/FbSrtnBak/DSh+vna9V9buP4WPVIl77jpYe/qjB31I=.ed25519:3) %eAViRPxZlEfOnkPOQqfKhCy9FwYHIxrTD8APq2Pg2TU=.sha256 41 | { 42 | "previous": "%z3P5mHT155tQRauZppdZqzAF6sefIyYE8Y4D78Ts+fk=.sha256", 43 | "author": "@/FbSrtnBak/DSh+vna9V9buP4WPVIl77jpYe/qjB31I=.ed25519", 44 | "sequence": 3, 45 | "timestamp": 1565780388817, 46 | "hash": "sha256", 47 | "content": { 48 | "text": "# hello piet!", 49 | "type": "text" 50 | }, 51 | "signature": "zQqysj61AJg/GQ7FaPM4B3jg4dcou8VyxQacIvSJCKbxS6aN8FPskfvV550OPi0K0HJHX2qS7AF5h8KmpJBlBg==.sig.ed25519" 52 | } 53 | 86a6417574686f7282a4416c676fa765643235353139a24944da0020fc56d2aed9c16a4fc34a1faf9daf55f5bb8fe163d5225efb8e961efea8c1df52a34b657982a4416c676fa6736861323536a448617368da002078056244fc599447ce9e43ce42a7ca842cbd170607231ad30fc00fab63e0d935a850726576696f757382a4416c676fa6736861323536a448617368da0020cf73f99874f5e79b5045ab99a69759ab3005eac79f232604f18e03efc4ecf9f9a3526177da018b7b0a20202270726576696f7573223a2022257a3350356d485431353574515261755a7070645a717a4146367365664979594538593444373854732b666b3d2e736861323536222c0a202022617574686f72223a2022402f46625372746e42616b2f4453682b766e6139563962755034575056496c37376a7059652f716a423331493d2e65643235353139222c0a20202273657175656e6365223a20332c0a20202274696d657374616d70223a20313536353738303338383831372c0a20202268617368223a2022736861323536222c0a202022636f6e74656e74223a207b0a202020202274657874223a2022232068656c6c6f207069657421222c0a202020202274797065223a202274657874220a20207d2c0a2020227369676e6174757265223a20227a517179736a3631414a672f4751374661504d3442336a673464636f75385679785161634976534a434b62785336614e384650736b6676563535304f5069304b30484a48583271533741463568384b6d704a426c42673d3d2e7369672e65643235353139220a7da853657175656e636503a954696d657374616d70a8c314a5cc5d53e9a4 54 | 2019/08/14 12:59:48 new message key: %eAViRPxZlEfOnkPOQqfKhCy9FwYHIxrTD8APq2Pg2TU=.sha256 55 | 56 | 57 | msg(@/FbSrtnBak/DSh+vna9V9buP4WPVIl77jpYe/qjB31I=.ed25519:4) %npvG1WYUDU7akLYguqg6NTbvPa8RBoe1lO7rlLANme4=.sha256 58 | { 59 | "previous": "%eAViRPxZlEfOnkPOQqfKhCy9FwYHIxrTD8APq2Pg2TU=.sha256", 60 | 
"author": "@/FbSrtnBak/DSh+vna9V9buP4WPVIl77jpYe/qjB31I=.ed25519", 61 | "sequence": 4, 62 | "timestamp": 1565780388819, 63 | "hash": "sha256", 64 | "content": { 65 | "text": "this feels like it will go very rusty", 66 | "type": "test" 67 | }, 68 | "signature": "/uEafY/TRTW/blW3cZZfazGNLLhF5IWNph/vnEktFPDrf5s31s2V1Rpc3Fk3tIQ7BsQxLuQB6p2R7ryNpZUkDA==.sig.ed25519" 69 | } 70 | 86a6417574686f7282a4416c676fa765643235353139a24944da0020fc56d2aed9c16a4fc34a1faf9daf55f5bb8fe163d5225efb8e961efea8c1df52a34b657982a4416c676fa6736861323536a448617368da00209e9bc6d566140d4eda90b620baa83a3536ef3daf110687b594eeeb94b00d99eea850726576696f757382a4416c676fa6736861323536a448617368da002078056244fc599447ce9e43ce42a7ca842cbd170607231ad30fc00fab63e0d935a3526177da01a37b0a20202270726576696f7573223a202225654156695250785a6c45664f6e6b504f5171664b68437939467759484978725444384150713250673254553d2e736861323536222c0a202022617574686f72223a2022402f46625372746e42616b2f4453682b766e6139563962755034575056496c37376a7059652f716a423331493d2e65643235353139222c0a20202273657175656e6365223a20342c0a20202274696d657374616d70223a20313536353738303338383831392c0a20202268617368223a2022736861323536222c0a202022636f6e74656e74223a207b0a202020202274657874223a202274686973206665656c73206c696b652069742077696c6c20676f2076657279207275737479222c0a202020202274797065223a202274657374220a20207d2c0a2020227369676e6174757265223a20222f75456166592f545254572f626c5733635a5a66617a474e4c4c68463549574e70682f766e456b744650447266357333317332563152706333466b3374495137427351784c755142367032523772794e705a556b44413d3d2e7369672e65643235353139220a7da853657175656e636504a954696d657374616d70a8c38bb2b45d53e9a4 71 | 2019/08/14 12:59:48 new message key: %npvG1WYUDU7akLYguqg6NTbvPa8RBoe1lO7rlLANme4=.sha256 72 | PASS 73 | ok go.cryptoscope.co/ssb/multilogs 0.026s 74 | -------------------------------------------------------------------------------- /src/flume_log.rs: -------------------------------------------------------------------------------- 1 | pub use failure::Error; 2 | 3 | pub struct StreamOpts { 4 | pub lt: String, 5 | pub gt: String, 6 | pub reverse: bool, 7 | pub live: bool, 8 | pub limit: usize, 9 | } 10 | 11 | #[derive(Debug, Fail)] 12 | pub enum FlumeLogError { 13 | #[fail(display = "Unable to find sequence: {}", sequence)] 14 | SequenceNotFound { sequence: u64 }, 15 | } 16 | 17 | pub type Sequence = u64; 18 | 19 | pub trait FlumeLog { 20 | fn get(&self, seq: Sequence) -> Result, Error>; 21 | fn clear(&mut self, seq: Sequence); 22 | fn latest(&self) -> Option; 23 | fn append(&mut self, buff: &[u8]) -> Result; 24 | } 25 | -------------------------------------------------------------------------------- /src/flume_view.rs: -------------------------------------------------------------------------------- 1 | pub use crate::flume_log::Sequence; 2 | 3 | pub trait FlumeView { 4 | fn append(&mut self, seq: Sequence, item: &[u8]); 5 | fn latest(&self) -> Sequence; 6 | } 7 | -------------------------------------------------------------------------------- /src/go_offset_log.rs: -------------------------------------------------------------------------------- 1 | pub use bidir_iter::BidirIterator; 2 | 3 | use crate::flume_log::*; 4 | use crate::iter_at_offset::IterAtOffset; 5 | use crate::log_entry::LogEntry; 6 | use buffered_offset_reader::{BufOffsetReader, OffsetRead, OffsetReadMut}; 7 | use byteorder::{BigEndian, ReadBytesExt}; 8 | use serde_cbor::from_slice; 9 | use serde_cbor::Value as CborValue; 10 | use serde_json::{json, Value}; 11 | use ssb_multiformats::multihash::Multihash; 
12 | use std::fs::{File, OpenOptions}; 13 | use std::io; 14 | use std::io::{Seek, SeekFrom}; 15 | use std::mem::size_of; 16 | use std::path::Path; 17 | 18 | const DATA_FILE_NAME: &str = "data"; 19 | 20 | #[derive(Debug, Fail)] 21 | pub enum GoFlumeOffsetLogError { 22 | #[fail(display = "Incorrect framing values detected, log file might be corrupt")] 23 | CorruptLogFile {}, 24 | #[fail( 25 | display = "Incorrect values in journal file. File might be corrupt, or we might need better file locking." 26 | )] 27 | CorruptJournalFile {}, 28 | #[fail(display = "Incorrect values in offset file. File might be corrupt.")] 29 | CorruptOffsetFile {}, 30 | #[fail(display = "Unsupported message type in offset log")] 31 | UnsupportedMessageType {}, 32 | 33 | #[fail(display = "The decode buffer passed to decode was too small")] 34 | DecodeBufferSizeTooSmall {}, 35 | } 36 | 37 | #[derive(Debug, Default, Deserialize)] 38 | struct GoMsgPackKey<'a> { 39 | #[serde(rename = "Algo")] 40 | algo: &'a str, 41 | #[serde(rename = "Hash")] 42 | #[serde(with = "serde_bytes")] 43 | hash: &'a [u8], 44 | } 45 | 46 | impl<'a> GoMsgPackKey<'a> { 47 | pub fn to_legacy_string(&self) -> String { 48 | let mut arr = [0u8; 32]; 49 | arr.copy_from_slice(&self.hash[..32]); 50 | let multi_hash = Multihash::Message(arr); 51 | multi_hash.to_legacy_string() 52 | } 53 | } 54 | 55 | #[derive(Debug, Default, Deserialize)] 56 | struct GoMsgPackData<'a> { 57 | #[serde(rename = "Raw_")] 58 | raw: &'a str, 59 | #[serde(rename = "Key_")] 60 | key: GoMsgPackKey<'a>, 61 | #[serde(rename = "Sequence_")] 62 | sequence: u64, 63 | } 64 | 65 | type GoCborKey<'a> = (&'a [u8], &'a str); 66 | type GoCborTuple<'a> = (GoCborKey<'a>, CborValue, GoCborKey<'a>, i128, f64, &'a [u8]); 67 | 68 | pub struct GoOffsetLog { 69 | pub data_file: File, 70 | end_of_file: u64, 71 | } 72 | 73 | // A Frame is like a LogEntry, but without the data 74 | #[derive(Debug)] 75 | pub struct Frame { 76 | pub data_size: usize, 77 | pub offset: u64, 78 | } 79 | 80 | impl Frame { 81 | fn data_start(&self) -> u64 { 82 | self.offset + size_of::() as u64 83 | } 84 | } 85 | 86 | #[derive(Debug)] 87 | pub struct ReadResult { 88 | pub entry: LogEntry, 89 | pub next: u64, 90 | } 91 | 92 | impl GoOffsetLog { 93 | /// Where path is a path to the directory that contains go log files 94 | pub fn new>(path: P) -> Result { 95 | let data_file_path = Path::new(path.as_ref()).join(DATA_FILE_NAME); 96 | 97 | let data_file = OpenOptions::new() 98 | .read(true) 99 | .write(true) 100 | .create(true) 101 | .open(data_file_path)?; 102 | 103 | GoOffsetLog::from_files(data_file) 104 | } 105 | 106 | /// Where path is a path to the directory that contains go log files 107 | pub fn open_read_only>(path: P) -> Result { 108 | let data_file_path = Path::new(path.as_ref()).join(DATA_FILE_NAME); 109 | let file = OpenOptions::new().read(true).open(&data_file_path)?; 110 | 111 | GoOffsetLog::from_files(file) 112 | } 113 | 114 | pub fn from_files(mut data_file: File) -> Result { 115 | let file_length = data_file.seek(SeekFrom::End(0))?; 116 | 117 | Ok(GoOffsetLog { 118 | data_file, 119 | end_of_file: file_length, 120 | }) 121 | } 122 | 123 | pub fn end(&self) -> u64 { 124 | self.end_of_file 125 | } 126 | 127 | pub fn read(&self, offset: u64) -> Result { 128 | read_next::<_>(offset, &self.data_file) 129 | } 130 | 131 | pub fn append_batch(&mut self, _buffs: &[&[u8]]) -> Result, Error> { 132 | unimplemented!() 133 | } 134 | 135 | pub fn iter(&self) -> GoOffsetLogIter { 136 | // TODO: what are the chances that 
try_clone() will fail? 137 | // I'd rather not return a Result<> here. 138 | GoOffsetLogIter::new(self.data_file.try_clone().unwrap()) 139 | } 140 | } 141 | 142 | pub struct GoOffsetLogIter { 143 | reader: BufOffsetReader, 144 | current: u64, 145 | next: u64, 146 | } 147 | 148 | impl GoOffsetLogIter { 149 | pub fn new(file: File) -> GoOffsetLogIter { 150 | GoOffsetLogIter::with_starting_offset(file, 0) 151 | } 152 | 153 | pub fn with_starting_offset(file: File, offset: u64) -> GoOffsetLogIter { 154 | GoOffsetLogIter { 155 | reader: BufOffsetReader::new(file), 156 | current: offset, 157 | next: offset, 158 | } 159 | } 160 | } 161 | 162 | impl Iterator for GoOffsetLogIter { 163 | type Item = LogEntry; 164 | 165 | fn next(&mut self) -> Option { 166 | self.current = self.next; 167 | let r = read_next_mut::<_>(self.current, &mut self.reader).ok()?; 168 | self.next = r.next; 169 | Some(r.entry) 170 | } 171 | } 172 | 173 | impl IterAtOffset for GoOffsetLog { 174 | fn iter_at_offset(&self, offset: u64) -> GoOffsetLogIter { 175 | GoOffsetLogIter::with_starting_offset(self.data_file.try_clone().unwrap(), offset) 176 | } 177 | } 178 | pub fn read_next(offset: u64, r: &R) -> Result { 179 | read_next_impl::<_>(offset, |b, o| r.read_at(b, o)) 180 | } 181 | 182 | pub fn read_next_mut(offset: u64, r: &mut R) -> Result { 183 | read_next_impl::<_>(offset, |b, o| r.read_at(b, o)) 184 | } 185 | 186 | fn read_next_impl(offset: u64, mut read_at: F) -> Result 187 | where 188 | F: FnMut(&mut [u8], u64) -> io::Result, 189 | { 190 | let frame = read_next_frame::<_>(offset, &mut read_at)?; 191 | read_entry::<_>(&frame, &mut read_at) 192 | } 193 | 194 | fn read_next_frame(offset: u64, read_at: &mut F) -> Result 195 | where 196 | F: FnMut(&mut [u8], u64) -> io::Result, 197 | { 198 | // Entry is [payload size: u64, payload ] 199 | 200 | const HEAD_SIZE: usize = size_of::(); 201 | 202 | let mut head_bytes = [0; HEAD_SIZE]; 203 | let n = read_at(&mut head_bytes, offset)?; 204 | if n < HEAD_SIZE { 205 | return Err(GoFlumeOffsetLogError::DecodeBufferSizeTooSmall {}.into()); 206 | } 207 | 208 | let data_size = (&head_bytes[..]).read_u64::()? as usize; 209 | Ok(Frame { offset, data_size }) 210 | } 211 | 212 | fn read_entry(frame: &Frame, read_at: &mut F) -> Result 213 | where 214 | F: FnMut(&mut [u8], u64) -> io::Result, 215 | { 216 | // Entry is [payload size: u64, payload ] 217 | 218 | let mut buf = vec![0; frame.data_size]; 219 | 220 | let n = read_at(&mut buf, frame.data_start())?; 221 | if n < frame.data_size { 222 | return Err(GoFlumeOffsetLogError::DecodeBufferSizeTooSmall {}.into()); 223 | } 224 | 225 | if buf[0] != 1 { 226 | return Err(GoFlumeOffsetLogError::UnsupportedMessageType {}.into()); 227 | } 228 | 229 | let tuple: GoCborTuple = from_slice(&buf[1..])?; 230 | 231 | let (_, _, (hash, algo), seq, timestamp, raw) = tuple; 232 | 233 | let key = GoMsgPackKey { algo, hash }; 234 | 235 | let cbor = GoMsgPackData { 236 | raw: std::str::from_utf8(raw)?, 237 | key, 238 | sequence: seq as u64, 239 | }; 240 | // The go log stores data in msg pack. 241 | // There is a "Raw" field that has the json used for 242 | // signing. 243 | // But we also need to get the key that is encoded in msg pack and build a traditional json ssb 244 | // message that has "key" "value" and "timestamp". 
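// For orientation, the value assembled below has roughly this shape
// (a sketch only; the field values here are illustrative, not produced by this code):
//
//   {
//     "key": "%...=.sha256",                          // legacy string form of the msgpack key
//     "value": { ...the signed json carried in raw... },
//     "timestamp": ...                                // the cbor timestamp, cast to u64
//   }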
245 | //let msg_packed: GoMsgPackData = decode::from_slice(&mut buf.as_slice())?; 246 | 247 | let ssb_message = json!({ 248 | "key": cbor.key.to_legacy_string(), 249 | "value": serde_json::from_str::(&cbor.raw)?, 250 | "timestamp": timestamp as u64 251 | }); 252 | 253 | let data = ssb_message.to_string().into_bytes(); 254 | 255 | Ok(ReadResult { 256 | entry: LogEntry { 257 | offset: frame.offset, 258 | data, 259 | }, 260 | next: frame.data_size as u64 + size_of::() as u64 + frame.offset, 261 | }) 262 | } 263 | 264 | #[cfg(test)] 265 | mod test { 266 | extern crate serde_json; 267 | 268 | use crate::go_offset_log::*; 269 | use serde_json::Value; 270 | use std::path::PathBuf; 271 | 272 | #[test] 273 | fn open_ro() { 274 | let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR")); 275 | d.push("test_vecs/four_ssb_messages"); 276 | let log = GoOffsetLog::open_read_only(d).unwrap(); 277 | let vec = log 278 | .iter() 279 | .map(|log_entry| log_entry.data) 280 | .map(|data| serde_json::from_slice::(&data).unwrap()) 281 | .collect::>(); 282 | 283 | assert_eq!(vec.len(), 2); 284 | assert_eq!(vec[0]["value"]["previous"], Value::Null); 285 | assert_eq!(vec[1]["value"]["content"]["hello"], "piet!!!"); 286 | } 287 | #[test] 288 | fn open_empty() { 289 | let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR")); 290 | d.push("test_vecs/empty"); 291 | let log = GoOffsetLog::new(d).unwrap(); 292 | let vec = log.iter().collect::>(); 293 | 294 | assert_eq!(vec.len(), 0); 295 | } 296 | 297 | #[test] 298 | fn ssb_messages() { 299 | let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR")); 300 | d.push("test_vecs/four_ssb_messages"); 301 | let log = GoOffsetLog::new(d).unwrap(); 302 | let vec = log 303 | .iter() 304 | .map(|log_entry| log_entry.data) 305 | .map(|data| serde_json::from_slice::(&data).unwrap()) 306 | .collect::>(); 307 | 308 | assert_eq!(vec.len(), 2); 309 | assert_eq!(vec[0]["value"]["previous"], Value::Null); 310 | assert_eq!(vec[1]["value"]["content"]["hello"], "piet!!!"); 311 | } 312 | } 313 | -------------------------------------------------------------------------------- /src/iter_at_offset.rs: -------------------------------------------------------------------------------- 1 | use crate::log_entry::LogEntry; 2 | 3 | pub trait IterAtOffset> { 4 | fn iter_at_offset(&self, offset: u64) -> I; 5 | } 6 | -------------------------------------------------------------------------------- /src/lib.rs: -------------------------------------------------------------------------------- 1 | //! 2 | //!# flumedb 3 | //! 4 | //! 
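//! An append-only log database, rewritten from the JavaScript `flumedb`.
//! The durable source of truth is an `OffsetLog` (plus a read-only
//! `GoOffsetLog` for go-ssb data files); `FlumeView`s answer queries and
//! can always be rebuilt by replaying the log.
//!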
5 | extern crate bidir_iter;
6 | extern crate buffered_offset_reader;
7 | extern crate byteorder;
8 | extern crate bytes;
9 | #[macro_use]
10 | extern crate failure;
11 | extern crate log;
12 | extern crate serde;
13 | #[macro_use]
14 | extern crate serde_derive;
15 | extern crate serde_json;
16 | extern crate serde_cbor;
17 | extern crate ssb_multiformats;
18 | 
19 | 
20 | pub mod flume_log;
21 | pub mod flume_view;
22 | pub mod go_offset_log;
23 | pub mod iter_at_offset;
24 | pub mod log_entry;
25 | pub mod mem_log;
26 | pub mod offset_log;
27 | 
28 | pub use flume_log::*;
29 | pub use flume_view::*;
30 | pub use iter_at_offset::*;
31 | pub use mem_log::*;
32 | pub use offset_log::*;
33 | 
--------------------------------------------------------------------------------
/src/log_entry.rs:
--------------------------------------------------------------------------------
1 | #[derive(Debug)] // TODO: derive more traits
2 | pub struct LogEntry {
3 |     pub offset: u64,
4 |     pub data: Vec<u8>,
5 | }
6 | 
--------------------------------------------------------------------------------
/src/mem_log.rs:
--------------------------------------------------------------------------------
1 | use crate::flume_log::*;
2 | 
3 | use std::iter::IntoIterator;
4 | 
5 | pub struct MemLog {
6 |     log: Vec<Vec<u8>>,
7 | }
8 | 
9 | impl MemLog {
10 |     pub fn new() -> MemLog {
11 |         let log = Vec::new();
12 |         MemLog { log }
13 |     }
14 | }
15 | 
16 | impl FlumeLog for MemLog {
17 |     fn get(&self, seq_num: u64) -> Result<Vec<u8>, Error> {
18 |         self.log
19 |             .get(seq_num as usize)
20 |             .map(|slice| slice.clone())
21 |             .ok_or(FlumeLogError::SequenceNotFound { sequence: seq_num }.into())
22 |     }
23 |     fn clear(&mut self, seq: u64) {
24 |         self.log[seq as usize] = Vec::new();
25 |     }
26 |     fn latest(&self) -> Option<Sequence> {
27 |         if self.log.len() == 0 {
28 |             None
29 |         } else {
30 |             Some(self.log.len() as u64 - 1)
31 |         }
32 |     }
33 |     fn append(&mut self, buff: &[u8]) -> Result<Sequence, Error> {
34 |         let seq = self.log.len();
35 |         let mut vec = Vec::new();
36 |         vec.extend_from_slice(buff);
37 | 
38 |         self.log.push(vec);
39 | 
40 |         Ok(seq as u64)
41 |     }
42 | }
43 | 
44 | impl<'a> IntoIterator for &'a MemLog {
45 |     type Item = &'a Vec<u8>;
46 |     type IntoIter = std::slice::Iter<'a, Vec<u8>>;
47 | 
48 |     fn into_iter(self) -> Self::IntoIter {
49 |         self.log.iter()
50 |     }
51 | }
52 | 
53 | #[cfg(test)]
54 | mod tests {
55 |     use crate::flume_log::*;
56 |     use crate::mem_log::MemLog;
57 |     #[test]
58 |     fn get() {
59 |         let mut log = MemLog::new();
60 |         let seq0 = log.append("Hello".as_bytes()).unwrap();
61 | 
62 |         match log.get(seq0) {
63 |             Ok(result) => assert_eq!(String::from_utf8_lossy(&result), "Hello"),
64 |             _ => assert!(false),
65 |         }
66 |     }
67 |     #[test]
68 |     fn clear() {
69 |         let mut log = MemLog::new();
70 |         let seq0 = log.append("Hello".as_bytes()).unwrap();
71 |         log.clear(seq0);
72 |         match log.get(seq0) {
73 |             Ok(result) => {
74 |                 assert_eq!(result.len(), 0);
75 |             }
76 |             _ => assert!(false),
77 |         }
78 |     }
79 |     #[test]
80 |     fn iter() {
81 |         let mut log = MemLog::new();
82 |         let seq0 = log.append("Hello".as_bytes()).unwrap();
83 |         log.append(" ".as_bytes()).unwrap();
84 |         log.append("World".as_bytes()).unwrap();
85 | 
86 |         let result = log
87 |             .into_iter()
88 |             .map(|bytes| String::from_utf8_lossy(bytes))
89 |             .fold(String::new(), |mut acc: String, elem| {
90 |                 acc.push_str(&elem);
91 |                 acc
92 |             });
93 | 
94 |         assert_eq!(
95 |             result, "Hello World",
96 |             "Expected Hello World, got {}",
97 |             result
98 |         );
99 | 
100 |         match log.get(seq0) {
101 |             Ok(result) => assert_eq!(String::from_utf8_lossy(&result), 
"Hello"), 102 | _ => assert!(false), 103 | } 104 | } 105 | } 106 | -------------------------------------------------------------------------------- /src/offset_log.rs: -------------------------------------------------------------------------------- 1 | pub use bidir_iter::{BidirIterator, Forward}; 2 | 3 | use crate::flume_log::*; 4 | use crate::iter_at_offset::IterAtOffset; 5 | use crate::log_entry::LogEntry; 6 | use buffered_offset_reader::{BufOffsetReader, OffsetRead, OffsetReadMut, OffsetWrite}; 7 | use byteorder::{BigEndian, ReadBytesExt}; 8 | use bytes::{BufMut, BytesMut}; 9 | use std::fs::{File, OpenOptions}; 10 | use std::io; 11 | use std::io::{Seek, SeekFrom}; 12 | use std::marker::PhantomData; 13 | use std::mem::size_of; 14 | use std::path::Path; 15 | 16 | #[derive(Debug, Fail)] 17 | pub enum FlumeOffsetLogError { 18 | #[fail(display = "Incorrect framing values detected, log file might be corrupt")] 19 | CorruptLogFile {}, 20 | 21 | #[fail(display = "The decode buffer passed to decode was too small")] 22 | DecodeBufferSizeTooSmall {}, 23 | } 24 | 25 | pub struct OffsetLog { 26 | pub file: File, 27 | end_of_file: u64, 28 | last_offset: Option, 29 | tmp_buffer: BytesMut, 30 | byte_type: PhantomData, 31 | } 32 | 33 | // A Frame is like a LogEntry, but without the data 34 | #[derive(Debug)] 35 | pub struct Frame { 36 | pub offset: u64, 37 | pub data_size: usize, 38 | } 39 | 40 | impl Frame { 41 | fn data_start(&self) -> u64 { 42 | self.offset + size_of::() as u64 43 | } 44 | } 45 | 46 | #[derive(Debug)] 47 | pub struct ReadResult { 48 | pub entry: LogEntry, 49 | pub next: u64, 50 | } 51 | 52 | impl OffsetLog { 53 | pub fn new>(path: P) -> Result, Error> { 54 | let file = OpenOptions::new() 55 | .read(true) 56 | .write(true) 57 | .create(true) 58 | .open(&path)?; 59 | 60 | OffsetLog::from_file(file) 61 | } 62 | 63 | pub fn open_read_only>(path: P) -> Result, Error> { 64 | let file = OpenOptions::new().read(true).open(&path)?; 65 | 66 | OffsetLog::from_file(file) 67 | } 68 | 69 | pub fn from_file(mut file: File) -> Result, Error> { 70 | let file_length = file.seek(SeekFrom::End(0))?; 71 | 72 | let last_offset = if file_length > 0 { 73 | let frame = read_prev_frame::(file_length, |b, o| file.read_at(b, o))?; 74 | Some(frame.offset) 75 | } else { 76 | None 77 | }; 78 | 79 | Ok(OffsetLog { 80 | file, 81 | end_of_file: file_length, 82 | last_offset, 83 | tmp_buffer: BytesMut::new(), 84 | byte_type: PhantomData, 85 | }) 86 | } 87 | 88 | pub fn end(&self) -> u64 { 89 | self.end_of_file 90 | } 91 | 92 | pub fn read(&self, offset: u64) -> Result { 93 | read_next::(offset, &self.file) 94 | } 95 | 96 | pub fn append_batch>(&mut self, buffs: &[T]) -> Result, Error> { 97 | let mut bytes = BytesMut::new(); 98 | let mut offsets = Vec::::new(); 99 | 100 | let new_end = buffs.iter().try_fold(self.end_of_file, |offset, buff| { 101 | //Maybe there's a more functional way of doing this. Kinda mixing functional and 102 | //imperative. 103 | offsets.push(offset); 104 | encode::(offset, &buff.as_ref(), &mut bytes) 105 | })?; 106 | 107 | offsets.last().map(|o| self.last_offset = Some(*o)); 108 | 109 | self.file.write_at(&bytes, self.end_of_file)?; 110 | self.end_of_file = new_end; 111 | 112 | Ok(offsets) 113 | } 114 | 115 | pub fn iter(&self) -> Forward> { 116 | OffsetLogIter::new(self.file.try_clone().unwrap()).forward_owned() 117 | } 118 | 119 | pub fn bidir_iter(&self) -> OffsetLogIter { 120 | // TODO: what are the chances that try_clone() will fail? 121 | // I'd rather not return a Result<> here. 
impl<ByteType> FlumeLog for OffsetLog<ByteType> {
    fn get(&self, seq_num: u64) -> Result<Vec<u8>, Error> {
        self.read(seq_num).map(|r| r.entry.data)
    }

    fn latest(&self) -> Option<u64> {
        self.last_offset
    }

    fn append(&mut self, buff: &[u8]) -> Result<u64, Error> {
        self.tmp_buffer.clear();
        self.tmp_buffer
            .reserve(buff.len() + size_of_framing_bytes::<ByteType>());

        let offset = self.end_of_file;
        let new_end = encode::<ByteType>(offset, buff, &mut self.tmp_buffer)?;
        self.file.write_at(&self.tmp_buffer, offset)?;

        self.end_of_file = new_end;
        self.last_offset = Some(offset);
        Ok(offset)
    }

    fn clear(&mut self, _seq_num: u64) {
        unimplemented!();
    }
}

pub struct OffsetLogIter<ByteType> {
    reader: BufOffsetReader<File>,
    current: u64,
    next: u64,
    byte_type: PhantomData<ByteType>,
}

impl<ByteType> OffsetLogIter<ByteType> {
    pub fn new(file: File) -> OffsetLogIter<ByteType> {
        OffsetLogIter::with_starting_offset(file, 0)
    }

    pub fn with_starting_offset(file: File, offset: u64) -> OffsetLogIter<ByteType> {
        OffsetLogIter {
            reader: BufOffsetReader::new(file),
            current: offset,
            next: offset,
            byte_type: PhantomData,
        }
    }
}

impl<ByteType> IterAtOffset<Forward<OffsetLogIter<ByteType>>> for OffsetLog<ByteType> {
    fn iter_at_offset(&self, offset: u64) -> Forward<OffsetLogIter<ByteType>> {
        OffsetLogIter::with_starting_offset(self.file.try_clone().unwrap(), offset).forward_owned()
    }
}

impl<ByteType> BidirIterator for OffsetLogIter<ByteType> {
    type Item = LogEntry;

    fn next(&mut self) -> Option<LogEntry> {
        self.current = self.next;
        let r = read_next_mut::<ByteType, _>(self.current, &mut self.reader).ok()?;
        self.next = r.next;
        Some(r.entry)
    }

    fn prev(&mut self) -> Option<LogEntry> {
        self.next = self.current;
        let r = read_prev_mut::<ByteType, _>(self.current, &mut self.reader).ok()?;
        self.current = r.entry.offset;
        Some(r.entry)
    }
}

fn size_of_frame_tail<ByteType>() -> usize {
    size_of::<u32>() + size_of::<ByteType>()
}

fn size_of_framing_bytes<ByteType>() -> usize {
    size_of::<u32>() * 2 + size_of::<ByteType>()
}

pub fn encode<ByteType>(offset: u64, item: &[u8], dest: &mut BytesMut) -> Result<u64, Error> {
    let chunk_size = size_of_framing_bytes::<ByteType>() + item.len();
    dest.reserve(chunk_size);
    dest.put_u32(item.len() as u32);
    dest.put_slice(item);
    dest.put_u32(item.len() as u32);
    let next_offset = offset + chunk_size as u64;

    dest.put_uint(next_offset, size_of::<ByteType>());
    Ok(next_offset)
}
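// Worked framing example (illustrative; it mirrors the `simple_encode` test
// below): encoding the 4-byte payload [1, 2, 3, 4] at offset 0 with
// ByteType = u32 appends
//
//     [0, 0, 0, 4]   payload length (u32, big-endian)
//     [1, 2, 3, 4]   payload
//     [0, 0, 0, 4]   payload length repeated (enables reverse reads)
//     [0, 0, 0, 16]  offset of the next entry (ByteType, big-endian)
//
// so the frame occupies 16 bytes and `encode` returns 16.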
pub fn validate_entry<ByteType>(offset: u64, data_size: usize, rest: &[u8]) -> Result<u64, Error> {
    if rest.len() != data_size + size_of_frame_tail::<ByteType>() {
        return Err(FlumeOffsetLogError::DecodeBufferSizeTooSmall {}.into());
    }

    let sz = (&rest[data_size..]).read_u32::<BigEndian>()? as usize;
    if sz != data_size {
        return Err(FlumeOffsetLogError::CorruptLogFile {}.into());
    }

    let next = (&rest[(data_size + size_of::<u32>())..])
        .read_uint::<BigEndian>(size_of::<ByteType>())? as u64;

    // `next` should be equal to the offset of the next entry,
    // which may or may not be immediately following this one (I suppose).
    if next < offset + size_of::<u32>() as u64 + rest.len() as u64 {
        return Err(FlumeOffsetLogError::CorruptLogFile {}.into());
    }
    Ok(next)
}

pub fn read_next<ByteType, R: OffsetRead>(offset: u64, r: &R) -> Result<ReadResult, Error> {
    read_next_impl::<ByteType, _>(offset, |b, o| r.read_at(b, o))
}

pub fn read_next_mut<ByteType, R: OffsetReadMut>(
    offset: u64,
    r: &mut R,
) -> Result<ReadResult, Error> {
    read_next_impl::<ByteType, _>(offset, |b, o| r.read_at(b, o))
}

pub fn read_prev<ByteType, R: OffsetRead>(offset: u64, r: &R) -> Result<ReadResult, Error> {
    read_prev_impl::<ByteType, _>(offset, |b, o| r.read_at(b, o))
}

pub fn read_prev_mut<ByteType, R: OffsetReadMut>(
    offset: u64,
    r: &mut R,
) -> Result<ReadResult, Error> {
    read_prev_impl::<ByteType, _>(offset, |b, o| r.read_at(b, o))
}

fn read_next_impl<ByteType, F>(offset: u64, mut read_at: F) -> Result<ReadResult, Error>
where
    F: FnMut(&mut [u8], u64) -> io::Result<usize>,
{
    let frame = read_next_frame::<ByteType, _>(offset, &mut read_at)?;
    read_entry::<ByteType, _>(&frame, &mut read_at)
}

fn read_prev_impl<ByteType, F>(offset: u64, mut read_at: F) -> Result<ReadResult, Error>
where
    F: FnMut(&mut [u8], u64) -> io::Result<usize>,
{
    let frame = read_prev_frame::<ByteType, _>(offset, &mut read_at)?;
    read_entry::<ByteType, _>(&frame, &mut read_at)
}

fn read_next_frame<ByteType, F>(offset: u64, read_at: &mut F) -> Result<Frame, Error>
where
    F: FnMut(&mut [u8], u64) -> io::Result<usize>,
{
    // Entry is [payload size: u32, payload, payload size: u32, next_offset: ByteType]

    const HEAD_SIZE: usize = size_of::<u32>();

    let mut head_bytes = [0; HEAD_SIZE];
    let n = read_at(&mut head_bytes, offset)?;
    if n < HEAD_SIZE {
        return Err(FlumeOffsetLogError::DecodeBufferSizeTooSmall {}.into());
    }

    let data_size = (&head_bytes[..]).read_u32::<BigEndian>()? as usize;
    Ok(Frame { offset, data_size })
}

fn read_prev_frame<ByteType, F>(offset: u64, mut read_at: F) -> Result<Frame, Error>
where
    F: FnMut(&mut [u8], u64) -> io::Result<usize>,
{
    let tail_size = size_of_frame_tail::<ByteType>(); // TODO: why can't this be const?

    // Big enough, assuming ByteType isn't bigger than a u64.
    let mut tmp = [0; size_of::<u32>() + size_of::<u64>()];
    if tmp.len() as u64 > offset {
        return Err(FlumeOffsetLogError::DecodeBufferSizeTooSmall {}.into());
    }

    let n = read_at(&mut tmp[..tail_size], offset - tail_size as u64)?;
    if n < tail_size {
        return Err(FlumeOffsetLogError::DecodeBufferSizeTooSmall {}.into());
    }

    let data_size = (&tmp[..]).read_u32::<BigEndian>()? as usize;
    if (data_size as u64) > offset {
        return Err(FlumeOffsetLogError::CorruptLogFile {}.into());
    }

    let data_start = offset - tail_size as u64 - data_size as u64;

    Ok(Frame {
        offset: data_start - size_of::<u32>() as u64,
        data_size,
    })
}
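// Worked reverse-read example (illustrative, using the 16-byte frame from the
// encoding example above with ByteType = u32): `read_prev_frame` at offset 16
// reads the 8-byte tail at bytes 8..16, finds data_size = 4, so
// data_start = 16 - 8 - 4 = 4 and the frame starts at 4 - 4 = 0.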
fn read_entry<ByteType, F>(frame: &Frame, read_at: &mut F) -> Result<ReadResult, Error>
where
    F: FnMut(&mut [u8], u64) -> io::Result<usize>,
{
    // Entry is [payload size: u32, payload, payload size: u32, next_offset: ByteType]
    let tail_size = size_of_frame_tail::<ByteType>();
    let to_read = frame.data_size + tail_size;

    let mut buf = vec![0; to_read];

    let n = read_at(&mut buf, frame.data_start())?;
    if n < to_read {
        return Err(FlumeOffsetLogError::DecodeBufferSizeTooSmall {}.into());
    }

    let next = validate_entry::<ByteType>(frame.offset, frame.data_size, &buf)?;

    // Chop the tail off of buf, so it only contains the entry data.
    buf.truncate(frame.data_size);

    Ok(ReadResult {
        entry: LogEntry {
            offset: frame.offset,
            data: buf,
        },
        next,
    })
}

#[cfg(test)]
mod test {
    use crate::flume_log::FlumeLog;
    use crate::offset_log::*;
    use bytes::BytesMut;

    use serde_json::{from_slice, Value};

    extern crate tempfile;
    use self::tempfile::tempfile;

    fn temp_offset_log() -> OffsetLog<u32> {
        OffsetLog::<u32>::from_file(tempfile().unwrap()).unwrap()
    }

    #[test]
    fn simple_encode() {
        let to_encode = vec![1, 2, 3, 4];
        let mut buf = BytesMut::with_capacity(16);
        encode::<u32>(0, &to_encode, &mut buf).unwrap();

        assert_eq!(&buf[..], &[0, 0, 0, 4, 1, 2, 3, 4, 0, 0, 0, 4, 0, 0, 0, 16])
    }

    #[test]
    fn simple_encode_u64() {
        let to_encode = vec![1, 2, 3, 4];
        let mut buf = BytesMut::with_capacity(20);
        encode::<u64>(0, &to_encode, &mut buf).unwrap();

        assert_eq!(
            &buf[..],
            &[0, 0, 0, 4, 1, 2, 3, 4, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 20]
        )
    }

    #[test]
    fn encode_multi() {
        let mut buf = BytesMut::with_capacity(32);

        encode::<u32>(0, &[1, 2, 3, 4], &mut buf)
            .and_then(|offset| encode::<u32>(offset, &[5, 6, 7, 8], &mut buf))
            .unwrap();

        assert_eq!(
            &buf[..],
            &[
                0, 0, 0, 4, 1, 2, 3, 4, 0, 0, 0, 4, 0, 0, 0, 16, 0, 0, 0, 4, 5, 6, 7, 8, 0, 0, 0,
                4, 0, 0, 0, 32
            ]
        )
    }

    #[test]
    fn encode_multi_u64() {
        let mut buf = BytesMut::with_capacity(40);

        encode::<u64>(0, &[1, 2, 3, 4], &mut buf)
            .and_then(|offset| encode::<u64>(offset, &[5, 6, 7, 8], &mut buf))
            .unwrap();

        assert_eq!(
            &buf[0..20],
            &[0, 0, 0, 4, 1, 2, 3, 4, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 20]
        );
        assert_eq!(
            &buf[20..],
            &[0, 0, 0, 4, 5, 6, 7, 8, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 40]
        )
    }

    #[test]
    fn simple() {
        let bytes: &[u8] = &[0, 0, 0, 8, 1, 2, 3, 4, 5, 6, 7, 8, 0, 0, 0, 8, 0, 0, 0, 20];

        let r = read_next::<u32, _>(0, &bytes).unwrap();
        assert_eq!(r.entry.offset, 0);
        assert_eq!(&r.entry.data, &[1, 2, 3, 4, 5, 6, 7, 8]);
        assert_eq!(r.next, bytes.len() as u64);
    }

    #[test]
    fn simple_u64() {
        let bytes: &[u8] = &[
            0, 0, 0, 8, 1, 2, 3, 4, 5, 6, 7, 8, 0, 0, 0, 8, 0, 0, 0, 0, 0, 0, 0, 24,
        ];

        let r = read_next::<u64, _>(0, &bytes).unwrap();
        assert_eq!(r.entry.offset, 0);
        assert_eq!(&r.entry.data, &[1, 2, 3, 4, 5, 6, 7, 8]);
        assert_eq!(r.next, bytes.len() as u64);
    }
    #[test]
    fn multiple() {
        let bytes: &[u8] = &[
            0, 0, 0, 8, 1, 2, 3, 4, 5, 6, 7, 8, 0, 0, 0, 8, 0, 0, 0, 20, 0, 0, 0, 8, 9, 10, 11,
            12, 13, 14, 15, 16, 0, 0, 0, 8, 0, 0, 0, 40,
        ];

        let r1 = read_next::<u32, _>(0, &bytes).unwrap();
        assert_eq!(r1.entry.offset, 0);
        assert_eq!(&r1.entry.data, &[1, 2, 3, 4, 5, 6, 7, 8]);
        assert_eq!(r1.next, 20);

        let r2 = read_next::<u32, _>(r1.next, &bytes).unwrap();
        assert_eq!(r2.entry.offset, r1.next);
        assert_eq!(&r2.entry.data, &[9, 10, 11, 12, 13, 14, 15, 16]);
        assert_eq!(r2.next, 40);

        let r3 = read_prev::<u32, _>(bytes.len() as u64, &bytes).unwrap();
        assert_eq!(r3.entry.offset, r1.next);
        assert_eq!(&r3.entry.data, &[9, 10, 11, 12, 13, 14, 15, 16]);

        let r4 = read_prev::<u32, _>(r3.entry.offset, &bytes).unwrap();
        assert_eq!(r4.entry.offset, 0);
        assert_eq!(&r4.entry.data, &[1, 2, 3, 4, 5, 6, 7, 8]);
    }

    #[test]
    fn multiple_u64() {
        let bytes: &[u8] = &[
            0, 0, 0, 8, 1, 2, 3, 4, 5, 6, 7, 8, 0, 0, 0, 8, 0, 0, 0, 0, 0, 0, 0, 24, 0, 0, 0, 8,
            9, 10, 11, 12, 13, 14, 15, 16, 0, 0, 0, 8, 0, 0, 0, 0, 0, 0, 0, 48,
        ];

        let r1 = read_next::<u64, _>(0, &bytes).unwrap();
        assert_eq!(r1.entry.offset, 0);
        assert_eq!(&r1.entry.data, &[1, 2, 3, 4, 5, 6, 7, 8]);
        assert_eq!(r1.next, 24);

        let r2 = read_next::<u64, _>(r1.next, &bytes).unwrap();
        assert_eq!(r2.entry.offset, r1.next);
        assert_eq!(&r2.entry.data, &[9, 10, 11, 12, 13, 14, 15, 16]);
        assert_eq!(r2.next, 48);

        let r3 = read_prev::<u64, _>(bytes.len() as u64, &bytes).unwrap();
        assert_eq!(r3.entry.offset, r1.next);
        assert_eq!(&r3.entry.data, &[9, 10, 11, 12, 13, 14, 15, 16]);

        let r4 = read_prev::<u64, _>(r3.entry.offset, &bytes).unwrap();
        assert_eq!(r4.entry.offset, 0);
        assert_eq!(&r4.entry.data, &[1, 2, 3, 4, 5, 6, 7, 8]);
    }

    #[test]
    fn read_incomplete_entry() {
        let bytes: &[u8] = &[0, 0, 0, 8, 1, 2, 3, 4, 5, 6, 7, 8, 0, 0, 0, 9, 0, 0, 0];
        let r = read_next::<u32, _>(0, &bytes);

        assert!(r.is_err());
    }

    #[test]
    fn read_very_incomplete_entry() {
        let bytes: &[u8] = &[0, 0, 0];
        let r = read_next::<u32, _>(0, &bytes);
        assert!(r.is_err());
    }

    #[test]
    fn errors_with_bad_second_size_value() {
        let bytes: &[u8] = &[0, 0, 0, 8, 1, 2, 3, 4, 5, 6, 7, 8, 0, 0, 0, 9, 0, 0, 0, 20];
        let r = read_next::<u32, _>(0, &bytes);

        assert!(r.is_err());
    }

    #[test]
    fn errors_with_bad_next_offset_value() {
        let bytes: &[u8] = &[0, 0, 0, 8, 1, 2, 3, 4, 5, 6, 7, 8, 0, 0, 0, 8, 0, 0, 0, 16];
        let r = read_next::<u32, _>(0, &bytes);
        assert!(r.is_err());
    }

    #[test]
    fn read_from_a_file() {
        let log = OffsetLog::<u32>::new("./db/test.offset").unwrap();
        assert_eq!(log.latest(), Some(207));

        let result = log
            .get(0)
            .and_then(|val| from_slice(&val).map_err(|err| err.into()))
            .map(|val: Value| match val["value"] {
                Value::Number(ref num) => num.as_u64().unwrap(),
                _ => panic!(),
            })
            .unwrap();
        assert_eq!(result, 0);
    }
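    // Note for the file-backed tests above and below: an OffsetLog's sequence
    // numbers are byte offsets, so latest() reports the byte offset of the
    // last entry in ./db/test.offset (207 here), not an entry count, and
    // get() expects an offset previously returned by append().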
    #[test]
    fn open_read_only() {
        let mut log = OffsetLog::<u32>::open_read_only("./db/test.offset").unwrap();
        assert_eq!(log.latest(), Some(207));

        let result = log
            .get(0)
            .and_then(|val| from_slice(&val).map_err(|err| err.into()))
            .map(|val: Value| match val["value"] {
                Value::Number(ref num) => num.as_u64().unwrap(),
                _ => panic!(),
            })
            .unwrap();
        assert_eq!(result, 0);

        assert!(log.append(&[1, 2, 3, 4]).is_err());
    }

    #[test]
    fn write_to_a_file() -> Result<(), Error> {
        let test_vec = b"{\"value\": 1}";

        let mut log = temp_offset_log();
        assert_eq!(log.latest(), None);
        let offset = log.append(test_vec)?;
        assert_eq!(offset, 0);
        assert_eq!(log.latest(), Some(0));

        let offset = log.append(test_vec)?;
        assert_eq!(log.latest(), Some(offset));

        let v: Value = from_slice(&log.get(0)?)?;
        let result = match v["value"] {
            Value::Number(ref num) => num.as_u64().unwrap(),
            _ => panic!(),
        };
        assert_eq!(result, 1);
        Ok(())
    }

    #[test]
    fn batch_write_to_a_file() -> Result<(), Error> {
        let test_vec: &[u8] = b"{\"value\": 1}";

        let mut test_vecs = Vec::new();

        for _ in 0..100 {
            test_vecs.push(test_vec);
        }

        let mut offset_log = temp_offset_log();
        let result = offset_log
            .append_batch(test_vecs.as_slice())
            .and_then(|sequences| {
                assert_eq!(sequences.len(), test_vecs.len());
                assert_eq!(sequences[0], 0);
                assert_eq!(
                    sequences[1],
                    test_vec.len() as u64 + size_of_framing_bytes::<u32>() as u64
                );
                offset_log.get(0)
            })
            .and_then(|val| from_slice(&val).map_err(|err| err.into()))
            .map(|val: Value| match val["value"] {
                Value::Number(ref num) => num.as_u64().unwrap(),
                _ => panic!(),
            })
            .unwrap();
        assert_eq!(result, 1);
        Ok(())
    }

    #[test]
    fn arbitrary_read_and_write_to_a_file() -> Result<(), Error> {
        let mut offset_log = temp_offset_log();

        let data_to_write = vec![b"{\"value\": 1}", b"{\"value\": 2}", b"{\"value\": 3}"];

        let seqs: Vec<u64> = data_to_write
            .iter()
            .map(|data| offset_log.append(*data).unwrap())
            .collect();

        let sum: u64 = seqs
            .iter()
            .rev()
            .map(|seq| offset_log.get(*seq).unwrap())
            .map(|val| from_slice(&val).unwrap())
            .map(|val: Value| match val["value"] {
                Value::Number(ref num) => num.as_u64().unwrap(),
                _ => panic!(),
            })
            .sum();

        assert_eq!(sum, 6);
        Ok(())
    }

    #[test]
    fn offset_log_as_iter() {
        let log = OffsetLog::<u32>::new("./db/test.offset").unwrap();

        let sum: u64 = log
            .iter()
            .take(5)
            .map(|val| val.data)
            .map(|val| from_slice(&val).unwrap())
            .map(|val: Value| match val["value"] {
                Value::Number(ref num) => num.as_u64().unwrap(),
                _ => panic!(),
            })
            .sum();

        assert_eq!(sum, 10);
    }

    #[test]
    fn bidir_iter() -> Result<(), Error> {
        let mut log = temp_offset_log();
        log.append(b"abc")?;
        log.append(b"def")?;
        log.append(b"123")?;
        log.append(b"456")?;

        let mut iter = log.bidir_iter();
        assert_eq!(iter.next().unwrap().data, b"abc");
        assert_eq!(iter.next().unwrap().data, b"def");
        assert_eq!(iter.next().unwrap().data, b"123");
        assert_eq!(iter.next().unwrap().data, b"456");
        assert!(iter.next().is_none());
        assert_eq!(iter.prev().unwrap().data, b"456");
        assert_eq!(iter.prev().unwrap().data, b"123");
        assert_eq!(iter.prev().unwrap().data, b"def");
        assert_eq!(iter.prev().unwrap().data, b"abc");
        assert!(iter.prev().is_none());
        assert_eq!(iter.next().unwrap().data, b"abc");

        let iter = log.bidir_iter();
        let mut iter = iter.filter(|e| e.offset % 10 == 0);

        assert_eq!(iter.next().unwrap().data, b"abc");
        assert_eq!(iter.next().unwrap().data, b"123");
        assert!(iter.next().is_none());
        assert_eq!(iter.prev().unwrap().data, b"123");

        let iter = log.bidir_iter();
        let mut iter = iter.map(|e| e.offset);

        // Same iter forward and back
        let forward_offsets: Vec<u64> = iter.forward().collect();
        assert_eq!(forward_offsets, &[0, 15, 30, 45]);

        let backward_offsets: Vec<u64> = iter.backward().collect();
        assert_eq!(backward_offsets, &[45, 30, 15, 0]);

        // Same iter, take two
        let forward_offsets: Vec<u64> = iter.forward().take(2).collect();
        assert_eq!(forward_offsets, &[0, 15]);

        // Same iter, two more
        let forward_offsets: Vec<u64> = iter.forward().take(2).collect();
        assert_eq!(forward_offsets, &[30, 45]);

        // New backward iter, starting at eof
        let backward_offsets: Vec<u64> = log
            .bidir_iter_at_offset(log.end())
            .backward()
            .map(|e| e.offset)
            .collect();
        assert_eq!(backward_offsets, &[45, 30, 15, 0]);

        Ok(())
    }
}
--------------------------------------------------------------------------------
/test_vecs/empty/data:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunrise-choir/flumedb-rs/915ea7577ddb59c2b9f1537730d680bb637eafee/test_vecs/empty/data
--------------------------------------------------------------------------------
/test_vecs/empty/jrnl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunrise-choir/flumedb-rs/915ea7577ddb59c2b9f1537730d680bb637eafee/test_vecs/empty/jrnl
--------------------------------------------------------------------------------
/test_vecs/empty/ofst:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunrise-choir/flumedb-rs/915ea7577ddb59c2b9f1537730d680bb637eafee/test_vecs/empty/ofst
--------------------------------------------------------------------------------
/test_vecs/four_ssb_messages/data:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunrise-choir/flumedb-rs/915ea7577ddb59c2b9f1537730d680bb637eafee/test_vecs/four_ssb_messages/data
--------------------------------------------------------------------------------
/test_vecs/four_ssb_messages/jrnl:
--------------------------------------------------------------------------------
(binary content not shown)
--------------------------------------------------------------------------------
/test_vecs/four_ssb_messages/ofst:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sunrise-choir/flumedb-rs/915ea7577ddb59c2b9f1537730d680bb637eafee/test_vecs/four_ssb_messages/ofst
--------------------------------------------------------------------------------