├── .github
│   └── stale.yml
├── README.md
├── api
│   └── pull-stream.md
├── gossip-graph1.png
├── gossip-graph2.png
├── scripts
│   ├── build-gh-pages.sh
│   └── fetch-files.sh
├── shs.pdf
└── ssb
    ├── end-to-end-encryption.md
    ├── linking.md
    └── secret-handshake.md


/.github/stale.yml:
--------------------------------------------------------------------------------
1 | _extends: .github
2 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Documentation
2 |
3 | NOTE! This site is no longer updated. Please use
4 | https://dev.scuttlebutt.nz/ instead.
5 |
6 | Get started with Scuttlebot and the Secure Scuttlebutt protocol.
7 |
8 | ## Links
9 |
10 | - Scuttlebot, implemented by [`ssb-server`](http://ssbc.github.io/ssb-server/): a p2p log store
11 | - Secure Scuttlebutt, implemented by [`ssb-db`](http://ssbc.github.io/ssb-db/): a global database protocol
12 | - [Patchwork](http://ssbc.github.io/patchwork/): a social messaging app built on `ssb-server` and `ssb-db`
13 |
--------------------------------------------------------------------------------
/api/pull-stream.md:
--------------------------------------------------------------------------------
1 | # pull-stream
2 |
3 | Minimal Pipeable Pull-stream
4 |
5 | In [classic-streams](https://github.com/joyent/node/blob/v0.8.16/doc/api/stream.markdown),
6 | streams _push_ data to the next stream in the pipeline.
7 | In [new-streams](https://github.com/joyent/node/blob/v0.10/doc/api/stream.markdown),
8 | data is pulled out of the source stream, into the destination.
9 |
10 | `pull-stream` is a minimal take on streams;
11 | pull-streams work great for "object" streams as well as streams of raw text or binary data.
12 |
13 |
14 | ## Quick Example
15 |
16 | Stat some files:
17 |
18 | ```js
19 | pull(
20 |   pull.values(['file1', 'file2', 'file3']),
21 |   pull.asyncMap(fs.stat),
22 |   pull.collect(function (err, array) {
23 |     console.log(array)
24 |   })
25 | )
26 | ```
27 | Note that `pull(a, b, c)` is basically the same as `a.pipe(b).pipe(c)`.
28 |
29 | The best thing about pull-stream is that it can be completely lazy.
30 | This is perfect for async traversals where you might want to stop early.
31 |
32 | ## Compatibility with node streams
33 |
34 | pull-streams are not _directly_ compatible with node streams,
35 | but pull-streams can be converted into node streams with
36 | [pull-stream-to-stream](https://github.com/dominictarr/pull-stream-to-stream),
37 | and node streams can be converted into pull-streams with [stream-to-pull-stream](https://github.com/dominictarr/stream-to-pull-stream).
38 |
39 |
40 | ### Readable & Reader vs. Readable & Writable
41 |
42 | Instead of a readable stream and a writable stream, there is a `readable` stream
43 | (aka "Source") and a `reader` stream (aka "Sink"). A through stream
44 | is a Sink that returns a Source.
45 |
46 | See also:
47 | * [Sources](https://github.com/dominictarr/pull-stream/blob/master/docs/sources.md)
48 | * [Throughs](https://github.com/dominictarr/pull-stream/blob/master/docs/throughs.md)
49 | * [Sinks](https://github.com/dominictarr/pull-stream/blob/master/docs/sinks.md)
50 |
51 | ### Source (aka Readable)
52 |
53 | The readable stream is just a `function read(end, cb)`
54 | that may be called many times,
55 | and will (asynchronously) `cb(null, data)` once for each call.
56 |
57 | To signify an end state, the stream eventually calls `cb(err)` or `cb(true)`.
58 | When indicating a terminal state, `data` *must* be ignored.
59 |
60 | The `read` function *must not* be called until the previous call has called back,
61 | unless it is a call to abort the stream (`read(truthy, cb)`).
62 |
63 | ```js
64 | // a stream of 100 random numbers
65 | var i = 100
66 | var random = function () {
67 |   return function (end, cb) {
68 |     if(end) return cb(end)
69 |     // only read 100 times
70 |     if(--i < 0) return cb(true)
71 |     cb(null, Math.random())
72 |   }
73 | }
74 |
75 | ```
76 |
77 | ### Sink (aka Reader, "writable")
78 |
79 | A sink is just a `reader` function that calls a Source (a read function)
80 | until it decides to stop, or the readable ends with `cb(err || true)`.
81 |
82 | All [Throughs](https://github.com/dominictarr/pull-stream/blob/master/docs/throughs.md)
83 | and [Sinks](https://github.com/dominictarr/pull-stream/blob/master/docs/sinks.md)
84 | are reader streams.
85 |
86 | ```js
87 | // read the source and log it
88 | var logger = function () { return function (read) {
89 |   read(null, function next(end, data) {
90 |     if(end === true) return
91 |     if(end) throw end
92 |
93 |     console.log(data)
94 |     read(null, next)
95 |   })
96 | } }
97 | ```
98 |
99 | Since these are just functions, you can pass them to each other!
100 |
101 | ```js
102 | var rand = random()
103 | var log = logger()
104 |
105 | log(rand) // "pipe" the streams
106 |
107 | ```
108 |
109 | But it's easier to read if you use pull-stream's `pull` function:
110 |
111 | ```js
112 | var pull = require('pull-stream')
113 |
114 | pull(random(), logger())
115 | ```
116 |
117 | ### Through
118 |
119 | A through stream is a reader on one end and a readable on the other.
120 | It's a Sink that returns a Source.
121 | That is, it's just a function that takes a `read` function
122 | and returns another `read` function.
123 |
124 | ```js
125 | var map = function (read, mapper) {
126 |   // return a readable function!
127 |   return function (end, cb) {
128 |     read(end, function (end, data) {
129 |       cb(end, data != null ? mapper(data) : null)
130 |     })
131 |   }
132 | }
133 | ```
134 |
135 | ### Pipeability
136 |
137 | Every pipeline must go from a `source` to a `sink`.
138 | Data will not start moving until the whole thing is connected.
139 |
140 | ```js
141 | pull(source, through, sink)
142 | ```
143 |
144 | Sometimes it's simplest to describe a stream in terms of other streams.
145 | `pull` can detect what sort of stream it starts with (by counting arguments),
146 | and if you pull together through streams, it gives you a new through stream.
147 |
148 | ```js
149 | var tripleThrough =
150 |   pull(through1(), through2(), through3())
151 | // THE THREE THROUGHS BECOME ONE
152 |
153 | pull(source(), tripleThrough, sink())
154 | ```
155 |
156 | `pull` detects if it's missing a Source by checking function arity:
157 | if the function takes only one argument, it's either a sink or a through;
158 | otherwise, it's a Source.
159 |
160 | ## Duplex Streams
161 |
162 | Duplex streams, which are used to communicate between two things
163 | (i.e. over a network), are a little different. In a duplex stream,
164 | messages go both ways, so instead of a single function that represents the stream,
165 | you need a pair of streams: `{source: sourceStream, sink: sinkStream}`.
166 |
167 | Pipe duplex streams like this:
168 |
169 | ```js
170 | var a = duplex()
171 | var b = duplex()
172 |
173 | pull(a.source, b.sink)
174 | pull(b.source, a.sink)
175 |
176 | // which is the same as
177 |
178 | b.sink(a.source); a.sink(b.source)
179 |
180 | // but the easiest way is to let pull handle this
181 |
182 | pull(a, b, a)
183 |
184 | // "pull from a to b and then back to a"
185 |
186 | ```
187 |
188 | ## Design Goals & Rationale
189 |
190 | There is a deeper,
191 | [platonic abstraction](http://en.wikipedia.org/wiki/Platonic_idealism),
192 | where a stream is just an array in time, instead of in space.
193 | And all the various streaming "abstractions" are just crude implementations
194 | of this abstract idea.
195 |
196 | [classic-streams](https://github.com/joyent/node/blob/v0.8.16/doc/api/stream.markdown),
197 | [new-streams](https://github.com/joyent/node/blob/v0.10/doc/api/stream.markdown),
198 | [reducers](https://github.com/Gozala/reducers)
199 |
200 | The objective here is to find a simple realization of the best features of the above.
201 |
202 | ### Type Agnostic
203 |
204 | A stream abstraction should be able to handle both streams of text and streams
205 | of objects.
206 |
207 | ### A pipeline is also a stream
208 |
209 | Something like this should work: `a.pipe(x.pipe(y).pipe(z)).pipe(b)`.
210 | This makes it possible to write a custom stream simply by
211 | combining a few available streams.
212 |
213 | ### Propagate End/Error conditions
214 |
215 | If a stream ends in an unexpected way (error),
216 | then the other streams in the pipeline should be notified.
217 | (This is a problem in node streams: when an error occurs,
218 | the stream is disconnected, and the user must handle that specially.)
219 |
220 | Also, the stream should be able to be ended from either end.
221 |
222 | ### Transparent Backpressure & Laziness
223 |
224 | Very simple transform streams must be able to transfer back pressure
225 | instantly.
226 |
227 | This is a problem in node streams: pause is only transferred on write, so
228 | on a long chain (`a.pipe(b).pipe(c)`), if `c` pauses, `b` will have to write to it
229 | to pause, and then `a` will have to write to `b` to pause.
230 | If `b` only transforms `a`'s output, then `a` will have to write to `b` twice to
231 | find out that `c` is paused.
232 |
233 | [reducers](https://github.com/Gozala/reducers) has an interesting approach,
234 | where synchronous transformations propagate back pressure instantly!
235 |
236 | This means you can have two "smart" streams doing io at the ends, and lots of dumb
237 | streams in the middle, and back pressure will work perfectly, as if the dumb streams
238 | were not there.
239 |
240 | This makes laziness work right.
241 |
242 | ### Handling end, error, and abort
243 |
244 | In pull streams, any part of the stream (source, sink, or through)
245 | may terminate the stream. (This is the case with node streams too,
246 | but it's not handled well.)
247 |
248 | #### Source: end, error
249 |
250 | A source may end (`cb(true)` after a read) or error (`cb(error)` after a read).
251 | After ending, the source *must* never `cb(null, data)`.
252 |
253 | #### Sink: abort
254 |
255 | Sinks do not normally end the stream, but if they decide they do
256 | not need any more data they may "abort" the source by calling `read(true, cb)`.
257 | An abort (`read(true, cb)`) may be called before a preceding read call
258 | has called back.
259 |
260 | ### Handling end/abort/error in through streams
261 |
262 | Rules for implementing `read` in a through stream:
263 | 1) The sink wants to stop: the sink aborts the through.
264 |
265 |    Just forward the exact `read()` call to your source;
266 |    any future read calls should `cb(true)`.
267 |
268 | 2) We want to stop (abort from the middle of the stream).
269 |
270 |    Abort your source, and then `cb(true)` to tell the sink we have ended.
271 |    If the source errors during the abort, end the sink with `cb(err)`.
272 |    (This will be an ordinary end/error for the sink.)
273 |
274 | 3) The source wants to stop (`read(null, cb) -> cb(err||true)`).
275 |
276 |    Forward that exact callback towards the sink;
277 |    we must respond to any future read calls with `cb(err||true)`.
278 |
279 | In none of the above cases is data flowing!
280 |
281 | 4) Data is flowing (normal operation: `read(null, cb) -> cb(null, data)`).
282 |    Forward the data downstream (towards the Sink),
283 |    and do none of the above!
284 |
285 | Either data is flowing (4) OR you have one of the error/abort cases (1-3), never both.
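
The rules above can be sketched as a minimal `filter` through stream. This is a hedged illustration, not pull-stream source code: `valuesSource` and `collectSink` are tiny stand-ins (roughly analogous to `pull.values` and `pull.collect`) defined here so the example runs on its own.

```js
// Illustrative sketch of the end/abort rules -- not pull-stream itself.
// valuesSource and collectSink are hypothetical stand-ins, defined here
// so the example is self-contained.

function valuesSource (array) {
  var i = 0
  return function (end, cb) {
    if (end) return cb(end)                 // aborted: reflect termination back
    if (i >= array.length) return cb(true)  // source ends normally
    cb(null, array[i++])
  }
}

function filter (test) {
  return function (read) {                  // a Sink that returns a Source
    return function next (end, cb) {
      if (end) return read(end, cb)         // rule 1: forward the abort upstream
      read(null, function (end, data) {
        if (end) return cb(end)             // rule 3: forward end/error downstream
        if (test(data)) return cb(null, data) // rule 4: data is flowing
        next(null, cb)                      // not a match: read again
      })
    }
  }
}

function collectSink (done) {
  return function (read) {
    var acc = []
    read(null, function next (end, data) {
      if (end === true) return done(null, acc)
      if (end) return done(end)
      acc.push(data)
      read(null, next)
    })
  }
}

collectSink(function (err, out) {
  console.log(out) // → [ 2, 4 ]
})(filter(function (n) { return n % 2 === 0 })(valuesSource([1, 2, 3, 4, 5])))
```

Note how rule 4 (forward data) and rules 1/3 (forward termination) never fire on the same callback; rule 2 (aborting from the middle) would add a `read(true, ...)` call inside `next` before calling `cb(true)`.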
286 |
287 |
288 | ## License
289 |
290 | MIT
291 |
--------------------------------------------------------------------------------
/gossip-graph1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssbc/docs/8071d18d3f2f9a75b9bdbe284f0bfe3c86984587/gossip-graph1.png
--------------------------------------------------------------------------------
/gossip-graph2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssbc/docs/8071d18d3f2f9a75b9bdbe284f0bfe3c86984587/gossip-graph2.png
--------------------------------------------------------------------------------
/scripts/build-gh-pages.sh:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 | set -ex
3 | git checkout gh-pages
4 | git merge master --no-commit
5 | ssbc-sitegen docs
6 | git commit -am build
--------------------------------------------------------------------------------
/scripts/fetch-files.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | GITHUB_ROOT="https://raw.githubusercontent.com"
4 | DEFAULT_ORG="ssbc"
5 |
6 | function fetch() {
7 |   REPO=$1
8 |   FILE=$2
9 |   OUT=$3
10 |   ORG=$4
11 |   if [ -z "$ORG" ]; then
12 |     ORG=$DEFAULT_ORG
13 |   fi
14 |   LOCAL_PATH="$HOME/${REPO}/${FILE}"
15 |   GH_URL="${GITHUB_ROOT}/${ORG}/${REPO}/master/$FILE"
16 |
17 |   # try to copy locally first, then fall back to fetching from github
18 |   if [ -f "$LOCAL_PATH" ]; then
19 |     echo "Copying $LOCAL_PATH to $OUT"
20 |     cp "$LOCAL_PATH" "$OUT"
21 |   else
22 |     echo "CURLing $GH_URL to $OUT"
23 |     curl -o "$OUT" "$GH_URL"
24 |   fi
25 | }
26 |
27 | fetch pull-stream README.md ./api/pull-stream.md dominictarr
--------------------------------------------------------------------------------
/shs.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssbc/docs/8071d18d3f2f9a75b9bdbe284f0bfe3c86984587/shs.pdf
--------------------------------------------------------------------------------
/ssb/end-to-end-encryption.md:
--------------------------------------------------------------------------------
1 | # Private-box
2 |
3 | Private-box is a format for encrypting a private message to many parties.
4 | It is designed according to the [audit-driven crypto design process](https://github.com/crypto-browserify/crypto-browserify/issues/128).
5 | You can **[find the repository on GitHub](https://github.com/auditdrivencrypto/private-box)**.
6 |
7 | ## Properties
8 |
9 | This protocol was designed for use with secure-scuttlebutt.
10 | In this setting, messages are placed in public, and the sender is known via a signature,
11 | but we can hide the recipient and the content.
12 |
13 | ##### **Recipients are hidden.**
14 |
15 | An eavesdropper cannot know the recipients or their number.
16 | Since the message is encrypted to each recipient and then placed in public,
17 | to receive a message you will have to attempt to decrypt every message posted.
18 | This would not be scalable if you had to decrypt every message on the internet,
19 | but if you can restrict the number of messages you might have to decrypt,
20 | then it's reasonable. For example, if you frequented a forum which contained these messages,
21 | then it would only be a reasonable number of messages, and posting a message would only
22 | reveal that you were talking to some other member of that forum.
23 |
24 | Hiding access to such a forum is another problem that's out of the current scope.
25 |
26 | ##### **The number of recipients is hidden.**
27 |
28 | If the number of recipients was not hidden, then it would sometimes be possible
29 | to deanonymize the recipients, if there was a large group discussion with
30 | an unusual number of recipients.
Encrypting the number of recipients means that,
31 | when a message is not encrypted to you, you will attempt to decrypt the same number of times
32 | as the maximum number of recipients.
33 |
34 | ##### **A valid recipient does not know the other recipients.**
35 |
36 | A valid recipient knows the number of recipients but not who they are.
37 | This is more a side effect of the design than an intentional design element.
38 | The plaintext contents may reveal the recipients, if needed.
39 |
40 | ##### **By providing the key for a message, an outside party could decrypt the message.**
41 |
42 | When you tell someone a secret you must trust them not to reveal it.
43 | Anyone who knows the key could reveal it to some other party, who could then read the message content,
44 | but not the recipients (unless the sender revealed the ephemeral secret key).
45 |
46 | ## Assumptions
47 |
48 | Messages will be posted in public, so the sender is likely to be known,
49 | and everyone can read the messages. (This makes it possible to hide the recipient,
50 | but probably not the sender.)
51 |
52 | Resisting traffic analysis of the timing or size of messages is out of scope of this spec.
53 |
54 | ## Prior Art
55 |
56 | #### **PGP**
57 |
58 | In PGP the recipient, the sender, and the subject are sent as plaintext.
59 | If the recipient is known, then the metadata graph of who is communicating with whom can be read,
60 | which, since it is easier to analyze than the content, is important to protect.
61 |
62 | #### **Sodium seal**
63 |
64 | The Sodium library provides a _seal_ function that generates an ephemeral keypair,
65 | derives a shared key to encrypt a message, and then sends the ephemeral public key and the message.
66 | The recipient is hidden, and it is forward secure if the sender throws out the ephemeral key.
67 | However, it's only possible to have one recipient.
68 |
69 | #### **Minilock**
70 |
71 | Minilock uses a similar approach to `private-box`, but does not hide the
72 | number of recipients. In the case of a group discussion where multiple rounds
73 | of messages are sent to everyone, this may enable an eavesdropper to deanonymize
74 | the participants of a discussion if the sender of each message is known.
75 |
76 | ## API
77 |
78 | #### **encrypt (plaintext Buffer, recipients Array)**
79 |
80 | Takes a plaintext buffer of the message you want to encrypt,
81 | and an array of recipient public keys.
82 | Returns a message that is encrypted to all recipients
83 | and openable by them with `PrivateBox.decrypt`.
84 | The recipients array must contain between 1 and 7 items.
85 |
86 | The encrypted message will be `56 + (recipients.length * 33) + plaintext.length` bytes long,
87 | i.e. between 89 and 287 bytes longer than the plaintext.
88 |
89 | #### **decrypt (ciphertext Buffer, secretKey curve25519_sk)**
90 |
91 | Attempts to decrypt a private-box message using your secret key.
92 | If you were an intended recipient, the plaintext will be returned.
93 | If it was not for you, `undefined` will be returned.
94 |
95 | ## Protocol
96 |
97 | #### **Encryption**
98 |
99 | Private-box generates:
100 |
101 | - `ephemeral`: an ephemeral curve25519 keypair that will only be used with this message.
102 | - `body_key`: a random key that will be used to encrypt the plaintext body.
103 |
104 | First, private-box outputs the ephemeral public key, then multiplies each recipient public key
105 | with its secret to produce the ephemeral shared keys (`shared_keys[1..n]`).
106 | Then, private-box concatenates `body_key` with the number of recipients,
107 | encrypts that to each shared key, and concatenates the encrypted body.
108 |
109 | ```
110 | function encrypt (plaintext, recipients) {
111 |   var ephemeral = keypair()
112 |   var nonce = random(24)
113 |   var body_key = random(32)
114 |   var body_key_with_length = concat([body_key, recipients.length])
115 |   return concat([
116 |     nonce,
117 |     ephemeral.publicKey,
118 |     concat(recipients.map(function (publicKey) {
119 |       return secretbox(
120 |         body_key_with_length,
121 |         nonce,
122 |         scalarmult(publicKey, ephemeral.secretKey)
123 |       )
124 |     })),
125 |     secretbox(plaintext, nonce, body_key)
126 |   ])
127 | }
128 | ```
129 |
130 | #### **Decryption**
131 |
132 | `private-box` takes the nonce and ephemeral public key,
133 | multiplies the public key with your secret key, then tests each possible
134 | recipient slot until it either decrypts a key or runs out of slots.
135 | If it runs out of slots, the message was not addressed to you,
136 | so `undefined` is returned. Else, the message is found and the body
137 | is decrypted.
138 |
139 | ```js
140 | function decrypt (ciphertext, secretKey) {
141 |   var next = reader(ciphertext) // next(n) reads the next n bytes
142 |   var nonce = next(24)
143 |   var publicKey = next(32)
144 |   var sharedKey = scalarmult(publicKey, secretKey)
145 |
146 |   for(var i = 0; i < 7; i++) {
147 |     var maybe_key = next(33)
148 |     var key_with_length = secretbox_open(maybe_key, nonce, sharedKey)
149 |     if (key_with_length) { // decrypted!
150 |       var key = key_with_length.slice(0, 32)
151 |       var length = key_with_length[32]
152 |       return secretbox_open(
153 |         ciphertext.slice(56 + 33*length, ciphertext.length),
154 |         nonce,
155 |         key
156 |       )
157 |     }
158 |   }
159 |   // this message was not addressed to the owner of secretKey
160 |   return undefined
161 | }
162 | ```
163 |
164 | ## License
165 |
166 | MIT
167 |
--------------------------------------------------------------------------------
/ssb/linking.md:
--------------------------------------------------------------------------------
1 | # Content-Hash Linking
2 |
3 | Messages, feeds, and blobs are addressable by specially-formatted identifiers.
4 | Message and blob IDs are content-hashes, while feed IDs are public keys.
5 |
6 | To indicate the type of ID, a "sigil" is prepended to the string. They are:
7 |
8 | - `@` for feeds
9 | - `%` for messages
10 | - `&` for blobs
11 |
12 | Additionally, each ID has a "tag" appended to indicate the hash or key algorithm.
13 | Some example IDs:
14 |
15 | - A feed: `@LA9HYf5rnUJFHHTklKXLLRyrEytayjbFZRo76Aj/qKs=.ed25519`
16 | - A message: `%MPB9vxHO0pvi2ve2wh6Do05ZrV7P6ZjUQ+IEYnzLfTs=.sha256`
17 | - A blob: `&Pe5kTo/V/w4MToasp1IuyMrMcCkQwDOdyzbyD5fy4ac=.sha256`
18 |
19 | ---
20 |
21 | When IDs are found in messages, they may be treated as links, with the key name acting as a "relation" type.
22 | An example of this:
23 |
24 | ```bash
25 | sbot publish --type post \
26 |   --root "%MPB9vxHO0pvi2ve2wh6Do05ZrV7P6ZjUQ+IEYnzLfTs=.sha256" \
27 |   --branch "%kRi8MzGDWw2iKNmZak5STshtzJ1D8G/sAj8pa4bVXLI=.sha256" \
28 |   --text "this is a reply!"
29 | ```
30 | ```js
31 | sbot.publish({
32 |   type: "post",
33 |   root: "%MPB9vxHO0pvi2ve2wh6Do05ZrV7P6ZjUQ+IEYnzLfTs=.sha256",
34 |   branch: "%kRi8MzGDWw2iKNmZak5STshtzJ1D8G/sAj8pa4bVXLI=.sha256",
35 |   text: "this is a reply!"
36 | })
37 | ```
38 |
39 | In this example, the `root` and `branch` keys are the relations.
40 | SSB automatically builds an index based on these links, to allow queries such as "all messages with a `root` link to this message."
41 |
42 | ---
43 |
44 | If you want to include data in the link object, you can specify an object with the ID in the `link` sub-attribute:
45 |
46 | ```bash
47 | sbot publish --type post \
48 |   --mentions.link "@LA9HYf5rnUJFHHTklKXLLRyrEytayjbFZRo76Aj/qKs=.ed25519" \
49 |   --mentions.name bob \
50 |   --text "hello, @bob"
51 | ```
52 | ```js
53 | sbot.publish({
54 |   type: "post",
55 |   mentions: {
56 |     link: "@LA9HYf5rnUJFHHTklKXLLRyrEytayjbFZRo76Aj/qKs=.ed25519",
57 |     name: "bob"
58 |   },
59 |   text: "hello, @bob"
60 | })
61 | ```
62 |
63 | ---
64 |
65 | To query the link-graph, use [links](https://github.com/ssbc/scuttlebot/blob/master/api.md#links-source):
66 |
67 | ```bash
68 | sbot links [--source id|filter] [--dest id|filter] [--rel value]
69 | ```
70 | ```js
71 | pull(sbot.links({ source:, dest:, rel: }), pull.drain(...))
72 | ```
73 |
74 | You can provide either the source or the destination.
75 | Both can be set to a sigil to filter; for instance, using `'&'` will filter to blobs, as `&` is the sigil that precedes blob IDs.
76 | You can also include a relation-type filter.
77 |
78 | Here are some example queries:
79 |
80 | ```bash
81 | # all links pointing to this message
82 | sbot links \
83 |   --dest %6sHHKhwjVTFVADme55JVW3j9DoWbSlUmemVA6E42bf8=.sha256
84 |
85 | # all "about" links pointing to this user
86 | sbot links \
87 |   --rel about \
88 |   --dest @hxGxqPrplLjRG2vtjQL87abX4QKqeLgCwQpS730nNwE=.ed25519
89 |
90 | # all blob links from this user
91 | sbot links \
92 |   --dest "&" \
93 |   --source @hxGxqPrplLjRG2vtjQL87abX4QKqeLgCwQpS730nNwE=.ed25519
94 | ```
95 | ```js
96 | // all links pointing to this message
97 | pull(
98 |   sbot.links({
99 |     dest: '%6sHHKhwjVTFVADme55JVW3j9DoWbSlUmemVA6E42bf8=.sha256'
100 |   }),
101 |   pull.drain(...)
102 | )
103 |
104 | // all "about" links pointing to this user
105 | pull(
106 |   sbot.links({
107 |     rel: 'about',
108 |     dest: '@hxGxqPrplLjRG2vtjQL87abX4QKqeLgCwQpS730nNwE=.ed25519'
109 |   }),
110 |   pull.drain(...)
111 | )
112 |
113 | // all blob links from this user
114 | pull(
115 |   sbot.links({
116 |     dest: '&',
117 |     source: '@hxGxqPrplLjRG2vtjQL87abX4QKqeLgCwQpS730nNwE=.ed25519'
118 |   }),
119 |   pull.drain(...)
120 | )
121 | ```
122 |
123 | ---
124 |
125 | A common pattern is to recursively fetch the links that point to a message, creating a tree.
126 | This is useful for creating comment-threads, for instance.
127 |
128 | You can do that easily in scuttlebot with [relatedMessages](https://github.com/ssbc/scuttlebot/blob/master/api.md#relatedmessages-async).
129 |
130 | ```bash
131 | sbot relatedMessages --id {id}
132 | ```
133 | ```js
134 | sbot.relatedMessages({ id: id }, cb)
135 | ```
--------------------------------------------------------------------------------
/ssb/secret-handshake.md:
--------------------------------------------------------------------------------
1 | # Secret Handshake
2 |
3 | Secret Handshake is an encrypted channel protocol based on a mutually authenticating key agreement handshake, with forward secure identity metadata.
4 | It's used by Scuttlebot to authenticate and encrypt peer connections.
5 |
6 | [Read the White Paper](https://ssbc.github.io/docs/shs.pdf)
--------------------------------------------------------------------------------