├── .github ├── CODEOWNERS ├── scripts │ └── local-link-checker.sh └── workflows │ └── local-link-checker.yaml ├── .gitignore ├── 0000-template └── 0000-template.md ├── CHANGELOG.md ├── LICENSE ├── README.md └── rfcs ├── 0001-positioning └── 0001-positioning.md ├── 0002-ckb ├── 0002-ckb.md └── images │ ├── .keep │ ├── layered-architecture.png │ ├── separation-of-generation-verification.png │ └── transaction-parallelism.png ├── 0003-ckb-vm ├── 0003-ckb-vm.md └── 0003-ckb-vm.zh.md ├── 0004-ckb-block-sync ├── 0004-ckb-block-sync.md ├── 0004-ckb-block-sync.zh.md └── images │ ├── best-sent-header.jpg │ ├── block-status.jpg │ ├── connect-header-conditions.jpg │ ├── connect-header-status.jpg │ ├── locator.jpg │ ├── seq-connect-headers.jpg │ ├── sliding-window.jpg │ └── status-tree.jpg ├── 0005-priviledged-mode ├── 0005-priviledged-mode.md └── 0005-priviledged-mode.zh.md ├── 0006-merkle-tree ├── 0006-merkle-tree.md └── 0006-merkle-tree.zh.md ├── 0007-scoring-system-and-network-security ├── 0007-scoring-system-and-network-security.md └── 0007-scoring-system-and-network-security.zh.md ├── 0008-serialization └── 0008-serialization.md ├── 0009-vm-syscalls └── 0009-vm-syscalls.md ├── 0010-eaglesong ├── 0010-eaglesong.md ├── CompactFIPS202.py ├── constants.py ├── eaglesong.c ├── eaglesong.py ├── hash.c └── hash.py ├── 0011-transaction-filter-protocol └── 0011-transaction-filter-protocol.md ├── 0012-node-discovery ├── 0012-node-discovery.md ├── 0012-node-discovery.zh.md └── images │ ├── announce-nodes.png │ ├── bootstrap.png │ └── get-nodes.png ├── 0013-get-block-template └── 0013-get-block-template.md ├── 0014-vm-cycle-limits └── 0014-vm-cycle-limits.md ├── 0015-ckb-cryptoeconomics ├── 0015-ckb-cryptoeconomics.md └── images │ ├── 01.png │ ├── 02.png │ ├── 03.png │ ├── 04.png │ ├── 05.png │ ├── 06.png │ ├── 07.png │ ├── 08.png │ ├── 09.png │ ├── 10.png │ ├── 11.png │ ├── 12.png │ ├── 13.png │ ├── 14.png │ ├── 15.png │ ├── 16.png │ ├── 17.png │ ├── 18.png │ ├── 19.png │ ├── 20.png │ ├── 21.png │ ├── 22.png │ ├── 23.png │ ├── 24.png │ ├── 25.png │ ├── 26.png │ ├── 27.png │ ├── 28.png │ ├── 29.png │ ├── 30.png │ ├── 31.png │ ├── 32.png │ ├── 33.png │ ├── 34.png │ ├── 35.png │ ├── 36.png │ ├── 37.png │ ├── 38.png │ ├── 39.png │ ├── 40.png │ ├── 41.png │ ├── 42.png │ ├── 43.png │ ├── 44.png │ ├── 45.png │ ├── 46.png │ ├── 47.png │ ├── 48.png │ ├── 49.png │ ├── 50.png │ ├── 51.png │ ├── 52.png │ ├── 53.png │ ├── 54.png │ └── 55.png ├── 0017-tx-valid-since ├── 0017-tx-valid-since.md ├── commitment-block.jpg ├── e-i-l-encoding.png ├── since-encoding.jpg ├── since-verification.jpg ├── target-value.jpg └── threshold-value.jpg ├── 0019-data-structures └── 0019-data-structures.md ├── 0020-ckb-consensus-protocol ├── 0020-ckb-consensus-protocol.md └── images │ ├── 1559064108898.png │ ├── 1559064685714.png │ ├── 1559064934639.png │ ├── 1559064995366.png │ ├── 1559065017925.png │ ├── 1559065197341.png │ ├── 1559065416713.png │ ├── 1559065517956.png │ ├── 1559065670251.png │ ├── 1559065968791.png │ ├── 1559065997745.png │ ├── 1559066101731.png │ ├── 1559066131427.png │ ├── 1559066158164.png │ ├── 1559066233715.png │ ├── 1559066249700.png │ ├── 1559066329440.png │ ├── 1559066373372.png │ ├── 1559066526598.png │ ├── 1559068235154.png │ └── 1559068266162.png ├── 0021-ckb-address-format ├── 0021-ckb-address-format.md └── images │ └── ckb-address.png ├── 0022-transaction-structure ├── 0022-transaction-structure.md ├── cell-data.png ├── cell-dep-structure.png ├── cell-deps.png ├── code-locating-via-type.png ├── 
code-locating.png ├── dep-group-expansion.png ├── group-input.png ├── header-deps.png ├── lock-script-cont.png ├── lock-script-grouping.png ├── lock-script.png ├── older-block-and-transaction.png ├── out-point.png ├── outputs-data.png ├── script-p2.png ├── script.png ├── transaction-overview.png ├── transaction-p1.png ├── transaction-p2.png ├── type-id-group.png ├── type-id-recursive-dependency.png ├── type-id.png ├── type-script-grouping.png └── value-storage.png ├── 0023-dao-deposit-withdraw └── 0023-dao-deposit-withdraw.md ├── 0024-ckb-genesis-script-list └── 0024-ckb-genesis-script-list.md ├── 0025-simple-udt └── 0025-simple-udt.md ├── 0026-anyone-can-pay └── 0026-anyone-can-pay.md ├── 0027-block-structure ├── 0027-block-structure.md ├── ckb_block_structure.png ├── compact_target.png ├── epoch.png ├── number.png ├── proposals.png ├── timestamp.png └── transactions_root.png ├── 0028-change-since-relative-timestamp └── 0028-change-since-relative-timestamp.md ├── 0029-allow-script-multiple-matches-on-identical-code └── 0029-allow-script-multiple-matches-on-identical-code.md ├── 0030-ensure-index-less-than-length-in-since └── 0030-ensure-index-less-than-length-in-since.md ├── 0031-variable-length-header-field ├── 0031-variable-length-header-field.md ├── 1-appending-the-field-at-the-end.md ├── 2-using-molecule-table-in-new-block-headers.md └── 3-appending-a-hash-at-the-end.md ├── 0032-ckb-vm-version-selection └── 0032-ckb-vm-version-selection.md ├── 0033-ckb-vm-version-1 └── 0033-ckb-vm-version-1.md ├── 0034-vm-syscalls-2 └── 0034-vm-syscalls-2.md ├── 0035-ckb2021-p2p-protocol-upgrade ├── 0035-ckb2021-p2p-protocol-upgrade.md └── 0035-ckb2021-p2p-protocol-upgrade.zh-CN.md ├── 0036-remove-header-deps-immature-rule └── 0036-remove-header-deps-immature-rule.md ├── 0037-ckb2021 └── 0037-ckb2021.md ├── 0039-cheque └── 0039-cheque.md ├── 0042-omnilock ├── 0042-omnilock.md └── rce_cells.png ├── 0043-ckb-softfork-activation ├── 0043-ckb-softfork-activation.md ├── deployments.md └── images │ └── state-transitions.png ├── 0044-ckb-light-client └── 0044-ckb-light-client.md ├── 0045-client-block-filter └── 0045-client-block-filter.md ├── 0046-syscalls-summary └── 0046-syscalls-summary.md ├── 0048-remove-block-header-version-reservation-rule └── 0048-remove-block-header-version-reservation-rule.md ├── 0049-ckb-vm-version-2 └── 0049-ckb-vm-version-2.md ├── 0050-vm-syscalls-3 └── 0050-vm-syscalls-3.md ├── 0051-ckb2023 └── 0051-ckb2023.md └── 0052-extensible-udt └── 0052-extensible-udt.md /.github/CODEOWNERS: -------------------------------------------------------------------------------- 1 | * @nervosnetwork/rfc 2 | /rfcs/0001-positioning/ @nervosnetwork/rfc @knwang 3 | /rfcs/0002-ckb/ @nervosnetwork/rfc @janx 4 | /rfcs/0003-ckb-vm/ @nervosnetwork/rfc @xxuejie 5 | /rfcs/0004-ckb-block-sync/ @nervosnetwork/rfc @doitian 6 | /rfcs/0005-priviledged-mode/ @nervosnetwork/rfc @xxuejie 7 | /rfcs/0006-merkle-tree/ @nervosnetwork/rfc @kilb 8 | /rfcs/0007-scoring-system-and-network-security/ @nervosnetwork/rfc @jjyr 9 | /rfcs/0008-serialization/ @nervosnetwork/rfc @yangby-cryptape 10 | /rfcs/0009-vm-syscalls/ @nervosnetwork/rfc @xxuejie 11 | /rfcs/0010-eaglesong/ @nervosnetwork/rfc @aszepieniec 12 | /rfcs/0011-transaction-filter-protocol/ @nervosnetwork/rfc @quake 13 | /rfcs/0012-node-discovery/ @nervosnetwork/rfc @jjyr @TheWaWaR 14 | /rfcs/0013-get-block-template/ @nervosnetwork/rfc @zhangsoledad 15 | /rfcs/0014-vm-cycle-limits/ @nervosnetwork/rfc @xxuejie 16 | /rfcs/0015-ckb-cryptoeconomics/ @nervosnetwork/rfc 
@knwang @janx 17 | /rfcs/0017-tx-valid-since/ @nervosnetwork/rfc @jjyr 18 | /rfcs/0019-data-structures/ @nervosnetwork/rfc @xxuejie 19 | /rfcs/0020-ckb-consensus-protocol/ @nervosnetwork/rfc @nirenzang 20 | /rfcs/0021-ckb-address-format/ @nervosnetwork/rfc @CipherWang 21 | /rfcs/0022-transaction-structure/ @nervosnetwork/rfc @doitian 22 | /rfcs/0023-dao-deposit-withdraw/ @nervosnetwork/rfc @xxuejie @doitian 23 | -------------------------------------------------------------------------------- /.github/scripts/local-link-checker.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | set -e 4 | set -u 5 | [ -n "${DEBUG:-}" ] && set -x || true 6 | 7 | function gherr() { 8 | local line="$1" 9 | local file="${line%%:*}" 10 | local lineno="${line#*:}" 11 | lineno="${lineno%%:*}" 12 | local matching="${line#*:*:}" 13 | echo "::error file=${file#./},line=$lineno::Broken link $matching" 14 | } 15 | 16 | function check() { 17 | local line dir link target failed=0 18 | 19 | while read line; do 20 | dir="$(dirname "${line%%:*}")" 21 | link="${line#*:*:}" 22 | 23 | case "$link" in 24 | //*) 25 | # ignore http links 26 | ;; 27 | /*) 28 | fail "Not a valid local link" 29 | ;; 30 | ../*) 31 | target="$(dirname "$dir")${link#..}" 32 | if ! [ -f "$target" ]; then 33 | failed=1 34 | gherr "$line" 35 | fi 36 | ;; 37 | *) 38 | # relative to current directory 39 | target="$dir/${link#./}" 40 | if ! [ -f "$target" ]; then 41 | failed=1 42 | gherr "$line" 43 | fi 44 | ;; 45 | esac 46 | done 47 | 48 | exit "$failed" 49 | } 50 | 51 | find . -name '*.md' -print0 | xargs -0 /usr/bin/grep -Hno '[^(): ][^():]*\.md' | check 52 | -------------------------------------------------------------------------------- /.github/workflows/local-link-checker.yaml: -------------------------------------------------------------------------------- 1 | name: Local Link Checker 2 | 3 | on: 4 | push: 5 | branches: 6 | - "*" 7 | pull_request: 8 | 9 | jobs: 10 | local-link-checker: 11 | runs-on: ubuntu-latest 12 | 13 | steps: 14 | - uses: actions/checkout@v3 15 | 16 | - name: run checker 17 | run: .github/scripts/local-link-checker.sh 18 | 19 | - name: info for fixing 20 | if: ${{ failure() }} 21 | run: echo "::error::Broken local links found, please use ./.github/scripts/local-link-checker.sh to check locally" 22 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | _book 2 | .vscode 3 | .DS_Store 4 | *.pdf 5 | *.patch 6 | -------------------------------------------------------------------------------- /0000-template/0000-template.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0000" 3 | Category: Informational | Standards 4 | Status: Draft (for Informational) | Proposal (for Standards) 5 | Author: Your Name 6 | Created: YYYY-MM-DD 7 | --- 8 | 9 | # Put RFC Title Here 10 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | ## v2019.02.12 2 | 3 | 4 | ### Bug Fixes 5 | 6 | * **0002:** typo ([#69](https://github.com/nervosnetwork/rfcs/issues/69)) ([db4661c](https://github.com/nervosnetwork/rfcs/commit/db4661c)) 7 | * **0003:** Remove atomic operation support in CKB VM ([#68](https://github.com/nervosnetwork/rfcs/issues/68)) 
([af51e3a](https://github.com/nervosnetwork/rfcs/commit/af51e3a)) 8 | * **0014:** url in readme ([#61](https://github.com/nervosnetwork/rfcs/issues/61)) ([558f2ba](https://github.com/nervosnetwork/rfcs/commit/558f2ba)) 9 | 10 | 11 | 12 | ## v2019.01.28 13 | 14 | ### Updates 15 | 16 | * [RFC0002]: This is a major update to the CKB whitepaper, one year after its publication. Jan added the latest results from discussions and developments and removed obsolete content. ([#64](https://github.com/nervosnetwork/rfcs/pull/64)) 17 | * [RFC0003]: Previously, we kept atomic support in CKB VM hoping for maximum compatibility, but now that rv64imc without atomic support is starting to get popular, we don't need to keep atomic instruction support in our design. ([#68](https://github.com/nervosnetwork/rfcs/issues/68)) 18 | 19 | ## v2019.01.14 20 | 21 | ### New RFC 22 | 23 | * [RFC0013]: the block template RFC describes the decentralized CKB mining protocol. 24 | * [RFC0014]: the cycle limit RFC describes cycle limits used to regulate VM scripts. CKB VM is a flexible VM that is free to implement many control flow constructs, such as loops or branches. As a result, we will need to enforce certain rules in CKB VM to prevent malicious scripts, such as a script with infinite loops. 25 | 26 | ### Updates 27 | 28 | * [RFC0003]: update CKB VM examples based on the latest development ([#63](https://github.com/nervosnetwork/rfcs/issues/63)) 29 | * [RFC0006]: use a more reasonable proof structure ([#62](https://github.com/nervosnetwork/rfcs/issues/62)) 30 | 31 | 32 | ## v2018.12.28 33 | 34 | The RFC (Request for Comments) process is intended to provide an open and community-driven path for new protocols, improvements and best practices. One month after open sourcing, we have 11 RFCs in draft or proposal status. We haven't finalized them yet; discussions and comments are welcome. 35 | 36 | 37 | * [RFC0002] provides an overview of the Nervos Common Knowledge Base (CKB), the core component of the Nervos Network, a decentralized application platform with a layered architecture. The CKB is the layer 1 of Nervos, and serves as a general-purpose common knowledge base that provides data, asset, and identity services. 38 | * [RFC0003] introduces the VM for scripting on CKB, the layer 1 chain. The VM layer in CKB is used to perform a series of validation rules to determine if a transaction is valid given the transaction's inputs and outputs. CKB uses the [RISC-V](https://riscv.org/) ISA to implement the VM layer. CKB relies on dynamic linking and syscalls to provide additional capabilities required by the blockchain, such as reading external cells or other crypto computations. Any compiler with RV64I support, such as [riscv-gcc](https://github.com/riscv/riscv-gcc), [riscv-llvm](https://github.com/lowRISC/riscv-llvm) or [Rust](https://github.com/rust-embedded/wg/issues/218), can be used to generate CKB-compatible scripts. 39 | * [RFC0004] describes the protocol by which CKB nodes synchronize blocks via the P2P network. Block synchronization **must** be performed in stages in the Bitcoin Headers First style. A block is downloaded in parts in each stage and is validated using the obtained parts. 40 | * [RFC0006] proposes the Complete Binary Merkle Tree (CBMT) to generate *Merkle Root* and *Merkle Proof* for a static list of items in CKB. Currently, CBMT is used to calculate *Transactions Root*. Basically, CBMT is a ***complete binary tree***, in which every level, except possibly the last, is completely filled, and all nodes are as far left as possible. 
And it is also a ***full binary tree***, in which every node other than the leaves has two children. Compared with other Merkle trees, the hash computation of CBMT is minimal, as well as the proof size. 41 | * [RFC0007] describes the scoring system of the CKB P2P networking layer and several networking security strategies based on it. 42 | * [RFC0009] describes the syscalls specification and all the RISC-V VM syscalls implemented in CKB so far. 43 | * [RFC0010] defines the consensus rule “cellbase maturity period”. For each input, if the referenced output transaction is a cellbase, it must have at least `CELLBASE_MATURITY` confirmations; otherwise, reject this transaction. 44 | * [RFC0011], the transaction filter protocol, allows peers to reduce the amount of transaction data they send. A peer which wants to retrieve transactions of interest has the option of setting filters on each connection. A filter is defined as a [Bloom filter](http://en.wikipedia.org/wiki/Bloom_filter) on data derived from transactions. 45 | * [RFC0012] proposes a P2P node discovery protocol. The CKB Node Discovery Protocol mainly refers to [Satoshi Client Node Discovery](https://en.bitcoin.it/wiki/Satoshi_Client_Node_Discovery), with some modifications to meet our requirements. 46 | 47 | [RFC0002]: https://github.com/nervosnetwork/rfcs/blob/master/rfcs/0002-ckb/0002-ckb.md 48 | [RFC0003]: https://github.com/nervosnetwork/rfcs/blob/master/rfcs/0003-ckb-vm/0003-ckb-vm.md 49 | [RFC0004]: https://github.com/nervosnetwork/rfcs/blob/master/rfcs/0004-ckb-block-sync/0004-ckb-block-sync.md 50 | [RFC0006]: https://github.com/nervosnetwork/rfcs/blob/master/rfcs/0006-merkle-tree/0006-merkle-tree.md 51 | [RFC0007]: https://github.com/nervosnetwork/rfcs/blob/master/rfcs/0007-scoring-system-and-network-security/0007-scoring-system-and-network-security.md 52 | [RFC0009]: https://github.com/nervosnetwork/rfcs/blob/master/rfcs/0009-vm-syscalls/0009-vm-syscalls.md 53 | [RFC0010]: https://github.com/nervosnetwork/rfcs/blob/master/rfcs/0010-cellbase-maturity-period/0010-cellbase-maturity-period.md 54 | [RFC0011]: https://github.com/nervosnetwork/rfcs/blob/master/rfcs/0011-transaction-filter-protocol/0011-transaction-filter-protocol.md 55 | [RFC0012]: https://github.com/nervosnetwork/rfcs/blob/master/rfcs/0012-node-discovery/0012-node-discovery.md 56 | [RFC0013]: https://github.com/nervosnetwork/rfcs/blob/master/rfcs/0013-get-block-template/0013-get-block-template.md 57 | [RFC0014]: https://github.com/nervosnetwork/rfcs/blob/master/rfcs/0014-vm-cycle-limits/0014-vm-cycle-limits.md 58 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright 2018 Nervos Foundation 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 6 | 7 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
8 | 9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 10 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Nervos Network RFCs 2 | 3 | This repository contains proposals, standards and documentation related to the Nervos Network. 4 | 5 | The RFC (Request for Comments) process is intended to provide an open and community-driven path for new protocols, improvements and best practices, so that all stakeholders can be confident about the direction in which the Nervos network is evolving. 6 | 7 | Publication of an RFC here does not make it a formally accepted standard until its status becomes Standard. 8 | 9 | ## Categories 10 | 11 | Not all RFCs are standards; there are 2 categories: 12 | 13 | * Standards Track - An RFC that is intended to be a standard followed by protocols, clients and applications in the Nervos network. 14 | * Informational - Anything related to the Nervos network. 15 | 16 | ## Process 17 | 18 | The RFC process attempts to be as simple as possible at the beginning and evolves with the network. 19 | 20 | ### 1. Discuss Your Idea with Community 21 | 22 | Before submitting an RFC pull request, you should send the draft to the community to solicit initial feedback. The [#rfc-chat discord channel](https://discord.gg/8cWtA9uJR5) and [Nervos Talk](https://talk.nervos.org/) are both good places to go. 23 | 24 | ### 2. Create A Pull Request 25 | 26 | After discussion, please create a pull request to propose your RFC: 27 | 28 | > Copy `0000-template` as `rfcs/0000-feature-name`, where `feature-name` is the descriptive name of the RFC. Don't assign a number yet. Reserve an RFC number in [this issue](https://github.com/nervosnetwork/rfcs/issues/246). 29 | 30 | Nervos RFCs should be written in English, but translated versions can be provided to help understanding. The English version is the canonical version; check the English version when there's ambiguity. 31 | 32 | Nervos RFCs should follow the keyword conventions defined in [RFC 2119](https://tools.ietf.org/html/rfc2119) and [RFC 6919](https://tools.ietf.org/html/rfc6919). 33 | 34 | An RFC should be put in either `Informational Track` or `Standards Track`. An RFC on `Standards Track` is a technical specification for software developers to facilitate an interoperable ecosystem. An RFC on `Informational Track` is a descriptive document providing necessary and/or helpful information to users and builders. 35 | 36 | An RFC on `Informational Track` has 3 statuses: 37 | 38 | 1. `Draft` (initial status) 39 | 2. `Withdrawn` 40 | 3. `Final` 41 | 42 | An RFC on `Standards Track` has 5 statuses: 43 | 44 | 1. `Proposal` (initial status) 45 | 2. `Active` 46 | 3. `Withdrawn` 47 | 4. `Rejected` 48 | 5. `Obsolete` 49 | 50 | ### 3. Review / Accept 51 | 52 | The maintainers of RFCs and the community will review the PR, and you should update the RFC according to the feedback. When an RFC is ready and gets enough support, it will be accepted and merged into this repository. 
The acceptance of an RFC is based on [rough consensus](https://en.wikipedia.org/wiki/Rough_consensus) at this early stage; we'll keep improving the process as the network and ecosystem develop, until we reach the decentralized governance stage. 53 | 54 | ## RFCs 55 | 56 | | Number | Title | Author | Category | Status | 57 | |--------|-------|--------|----------|--------| 58 | | [1](rfcs/0001-positioning) | [The Nervos Network Positioning Paper](rfcs/0001-positioning/0001-positioning.md) | The Nervos Team | Informational | Final | 59 | | [2](rfcs/0002-ckb) | [Nervos CKB: A Common Knowledge Base for Crypto-Economy](rfcs/0002-ckb/0002-ckb.md) | Jan Xie | Informational | Final | 60 | | [3](rfcs/0003-ckb-vm) | [CKB-VM](rfcs/0003-ckb-vm/0003-ckb-vm.md) | Xuejie Xiao | Informational | Final | 61 | | [4](rfcs/0004-ckb-block-sync) | [CKB Block Synchronization Protocol](rfcs/0004-ckb-block-sync/0004-ckb-block-sync.md) | Ian Yang | Informational | Final | 62 | | [5](rfcs/0005-priviledged-mode) | [Privileged architecture support for CKB VM](rfcs/0005-priviledged-mode/0005-priviledged-mode.md) | Xuejie Xiao | Informational | Withdrawn | 63 | | [6](rfcs/0006-merkle-tree) | [Merkle Tree for Static Data](rfcs/0006-merkle-tree/0006-merkle-tree.md) | Ke Wang | Standards Track | Active | 64 | | [7](rfcs/0007-scoring-system-and-network-security) | [P2P Scoring System And Network Security](rfcs/0007-scoring-system-and-network-security/0007-scoring-system-and-network-security.md) | Jinyang Jiang | Standards Track | Withdrawn | 65 | | [8](rfcs/0008-serialization) | [Serialization](rfcs/0008-serialization/0008-serialization.md) | Boyu Yang | Standards Track | Active | 66 | | [9](rfcs/0009-vm-syscalls) | [VM Syscalls](rfcs/0009-vm-syscalls/0009-vm-syscalls.md) | Xuejie Xiao | Standards Track | Active | 67 | | [10](rfcs/0010-eaglesong) | [Eaglesong (Proof-of-Work Function for Nervos CKB)](rfcs/0010-eaglesong/0010-eaglesong.md) | Alan Szepieniec | Standards Track | Active | 68 | | [11](rfcs/0011-transaction-filter-protocol) | [Transaction Filter](rfcs/0011-transaction-filter-protocol/0011-transaction-filter-protocol.md) | Quake Wang | Standards Track | Withdrawn | 69 | | [12](rfcs/0012-node-discovery) | [Node Discovery](rfcs/0012-node-discovery/0012-node-discovery.md) | Linfeng Qian, Jinyang Jiang | Standards Track | Active | 70 | | [13](rfcs/0013-get-block-template) | [Block Template](rfcs/0013-get-block-template/0013-get-block-template.md) | Dingwei Zhang | Standards Track | Active | 71 | | [14](rfcs/0014-vm-cycle-limits) | [VM Cycle Limits](rfcs/0014-vm-cycle-limits/0014-vm-cycle-limits.md) | Xuejie Xiao | Standards Track | Active | 72 | | [15](rfcs/0015-ckb-cryptoeconomics) | [Crypto-Economics of the Nervos Common Knowledge Base](rfcs/0015-ckb-cryptoeconomics/0015-ckb-cryptoeconomics.md) | Kevin Wang, Jan Xie, Jiasun Li, David Zou | Informational | Final | 73 | | [17](rfcs/0017-tx-valid-since) | [Transaction Since Precondition](rfcs/0017-tx-valid-since/0017-tx-valid-since.md) | Jinyang Jiang, Ian Yang, Jordan Mack | Standards Track | Proposal 74 | | [19](rfcs/0019-data-structures) | [Data Structures](rfcs/0019-data-structures/0019-data-structures.md) | Xuejie Xiao | Informational | Withdrawn 75 | | [20](rfcs/0020-ckb-consensus-protocol) | [CKB Consensus Protocol](rfcs/0020-ckb-consensus-protocol/0020-ckb-consensus-protocol.md) | Ren Zhang | Informational | Draft 76 | | [21](rfcs/0021-ckb-address-format) | [CKB Address Format](rfcs/0021-ckb-address-format/0021-ckb-address-format.md) | Cipher Wang, Axel Wan | Standards 
Track | Active 77 | | [22](rfcs/0022-transaction-structure) | [CKB Transaction Structure](rfcs/0022-transaction-structure/0022-transaction-structure.md) | Ian Yang | Informational | Draft 78 | | [23](rfcs/0023-dao-deposit-withdraw) | [Deposit and Withdraw in Nervos DAO](rfcs/0023-dao-deposit-withdraw/0023-dao-deposit-withdraw.md) | Jan Xie, Xuejie Xiao, Ian Yang | Standards Track | Active 79 | | [24](rfcs/0024-ckb-genesis-script-list) | [CKB Genesis Script List](rfcs/0024-ckb-genesis-script-list/0024-ckb-genesis-script-list.md) | Dylan Duan | Informational | Final 80 | | [25](rfcs/0025-simple-udt) | [Simple UDT](rfcs/0025-simple-udt/0025-simple-udt.md) | Xuejie Xiao | Standards Track | Proposal 81 | | [26](rfcs/0026-anyone-can-pay) | [Anyone-Can-Pay Lock](rfcs/0026-anyone-can-pay/0026-anyone-can-pay.md) | Xuejie Xiao | Standards Track | Proposal 82 | | [27](rfcs/0027-block-structure) | [CKB Block Structure](rfcs/0027-block-structure/0027-block-structure.md) | Ian Yang | Informational | Draft 83 | | [37](rfcs/0037-ckb2021) | [CKB Consensus Change (Edition CKB2021)](rfcs/0037-ckb2021/0037-ckb2021.md) | Ian Yang | Informational | Draft 84 | | [39](rfcs/0039-cheque) | [Cheque Lock](rfcs/0039-cheque/0039-cheque.md) | Dylan Duan | Standards Track | Proposal | 85 | | [42](rfcs/0042-omnilock) | [Omnilock](rfcs/0042-omnilock/0042-omnilock.md) | Xu Jiandong | Standards Track | Proposal 86 | | [43](rfcs/0043-ckb-softfork-activation) | [CKB Softfork Activation](rfcs/0043-ckb-softfork-activation/0043-ckb-softfork-activation.md) | Dingwei Zhang | Standards Track | Proposal 87 | | [44](rfcs/0044-ckb-light-client) | [CKB Light Client Protocol](rfcs/0044-ckb-light-client/0044-ckb-light-client.md) | Boyu Yang | Standards Track | Proposal 88 | | [45](rfcs/0045-client-block-filter) | [CKB Client Side Block Filter Protocol](rfcs/0045-client-block-filter/0045-client-block-filter.md) | Quake Wang | Standards Track | Proposal 89 | | [46](rfcs/0046-syscalls-summary) | [CKB VM Syscalls Summary](rfcs/0046-syscalls-summary/0046-syscalls-summary.md) | Shan | Informational | Draft 90 | ## License 91 | 92 | This repository is being licensed under terms of [MIT license](LICENSE). 
93 | -------------------------------------------------------------------------------- /rfcs/0002-ckb/images/.keep: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /rfcs/0002-ckb/images/layered-architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0002-ckb/images/layered-architecture.png -------------------------------------------------------------------------------- /rfcs/0002-ckb/images/separation-of-generation-verification.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0002-ckb/images/separation-of-generation-verification.png -------------------------------------------------------------------------------- /rfcs/0002-ckb/images/transaction-parallelism.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0002-ckb/images/transaction-parallelism.png -------------------------------------------------------------------------------- /rfcs/0004-ckb-block-sync/0004-ckb-block-sync.zh.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0004" 3 | Category: Information 4 | Status: Final 5 | Author: Ian Yang <@doitian> 6 | Organization: Nervos Foundation 7 | Created: 2018-07-25 8 | --- 9 | 10 | # 链同步协议 11 | 12 | 术语说明 13 | 14 | - Chain: 创世块开头,由连续的块组成的链。 15 | - Best Chain: 节点之间要达成最终一致的、满足共识验证条件的、PoW 累积工作量最高的、以共识的创世块开始的 Chain。 16 | - Best Header Chain: 累积工作量最高,由状态是 Connected, Downloaded 或者 Accepted 的块组成的 Chain。详见下面块状态的说明。 17 | - Tip: Chain 最后一个块。Tip 可以唯一确定 Chain。 18 | - Best Chain Tip: Best Chain 的最后一个块。 19 | 20 | ## 同步概览 21 | 22 | 块同步**必须**分阶段进行,采用 [Bitcoin Headers First](https://bitcoin.org/en/glossary/headers-first-sync) 的方式。每一阶段获得一部分块的信息,或者基于已有的块信息进行验证,或者两者同时进行。 23 | 24 | 1. 连接块头 (Connect Header): 获得块头,验证块头格式正确且 PoW 工作量有效 25 | 2. 下载块 (Download Block): 获得块内容,验证完整的块,但是不依赖祖先块中的交易信息。 26 | 3. 采用块 (Accept Block): 在链上下文中验证块,会使用到祖先块中的交易信息。 27 | 28 | 分阶段执行的主要目的是先用比较小的代价排除最大作恶的可能性。举例来说,第一步连接块头的步骤在整个同步中的工作量可能只有 5%,但是完成后能有 95% 的可信度认为块头对应的块是有效的。 29 | 30 | 按照已经执行的阶段,块可以处于以下 5 种状态: 31 | 32 | 1. Unknown: 在连接块头执行之前,块的状态是未知的。 33 | 2. Invalid:任意一步失败,块的状态是无效的,且当一个块标记为 Invalid,它的所有子孙节点也都标记为 Invalid。 34 | 3. Connected: 连接块头成功,且该块到创世块的所有祖先块都必须是 Connected, Downloaded 或 Accepted 的状态。 35 | 4. Downloaded: 下载块成功,且该块到创世块的所有祖先块都必须是 Downloaded 或者 Accepted 的状态。 36 | 5. Accepted: 采用块成功,且该块到创世块的所有祖先块都必须是 Accepted 的状态。 37 | 38 | 块的状态是会沿着依赖传递的。按照上面的编号,子块的状态编号一定不会大于父块的状态编号。首先,如果某个块是无效的,那依赖它的子孙块自然也是无效的。另外,同步的每一步代价都远远高于前一步,且每一步都可能失败。如果子节点先于父节点进入下一阶段,而父节点被验证为无效,那子节点上的工作量就浪费了。而且,子块验证是要依赖父块的信息的。 39 | 40 | 初始时创世块状态为 Accepted,其它所有块为 Unknown。 41 | 42 | 之后会使用以下图示表示不同状态的块: 43 | 44 | ![](images/block-status.jpg "Block Status") 45 | 46 | 参与同步的节点创世块**必须**相同,所有的块必然是组成由创世块为根的一颗树。如果块无法最终连接到创世块,这些块都可以丢弃不做处理。 47 | 48 | 参与节点都会在本地构造这颗状态树,其中全部由 Accepted 块组成的累积工作量最大的链就是 Best Chain。而由状态可以是 Connected, Downloaded 或 Accepted 块组成的累积工作量最大的链就是 Best Header Chain. 
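下面用一小段示意代码概括上述状态编号与父子块状态之间的约束(枚举值对应上文 1~5 的编号;类型与函数名均为假设,仅作说明,并非参考实现):

```c++
// 块状态,数值与上文的编号一致(仅为示意)
enum class BlockStatus : int {
    Unknown = 1,
    Invalid = 2,
    Connected = 3,
    Downloaded = 4,
    Accepted = 5,
};

// 状态沿依赖传递:子块的状态编号一定不会大于父块的状态编号
bool child_status_is_consistent(BlockStatus parent, BlockStatus child) {
    return static_cast<int>(child) <= static_cast<int>(parent);
}
```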
49 | 50 | 下图是节点 Alice 构造的状态树的示例,其中标记为 Alice 的块是该节点当前的 Best Chain Tip。 51 | 52 | ![](images/status-tree.jpg "Status Tree by Alice") 53 | 54 | ## 连接块头 55 | 56 | 先同步 Headers 可以用最小的代价验证 PoW 有效。构造 PoW 时,不管放入无效的交易还是放入有效的交易都需要付出相同的代价,那么攻击者会选择其它更高性价比的方式进行攻击。可见,当 PoW 有效时整个块都是有效的概率非常高。所以先同步 Headers 能避免浪费资源去下载和验证无效块。 57 | 58 | 因为代价小,同步 Headers 可以和所有的节点同时进行,在本地能构建出可信度非常高的、当前网络中所有分叉的全局图。这样可以对块下载进行规划,避免浪费资源在工作量低的分支上。 59 | 60 | 连接块头这一步的目标是,当节点 Alice 连接到节点 Bob 之后,Alice 让 Bob 发送所有在 Bob 的 Best Chain 上但不在 Alice 的 **Best Header Chain** 上的块头,进行验证并确定这些块的状态是 Connected 还是 Invalid。 61 | 62 | Alice 在连接块头时,需要保持 Best Header Chain Tip 的更新,这样能减少收到已有块头的数量。 63 | 64 | ![](images/seq-connect-headers.jpg) 65 | 66 | 上图是一轮连接块头的流程。完成了一轮连接块头后,节点之间应该通过新块通知保持之后的同步。 67 | 68 | 以上图 Alice 从 Bob 同步为例,首先 Alice 将自己 Best Header Chain 中的块进行采样,将选中块的哈希作为消息内容发给 Bob。采样的基本原则是最近的块采样越密,越早的块越稀疏。比如可以取最后的 10 个块,然后从倒数第十个块开始按 2, 4, 8, … 等以 2 的指数增长的步长进行取样。采样得到的块的哈希列表被称为 Locator。下图中淡色处理的是没有被采样的块,创世块应该始终包含在 Locator 当中。 69 | 70 | ![](images/locator.jpg) 71 | 72 | Bob 根据 Locator 和自己的 Best Chain 可以找出两条链的最后一个共同块。因为创世块相同,所以一定存在这样一个块。Bob 把共同块之后一个开始到 Best Chain Tip 为止的所有块头发给 Alice。 73 | 74 | ![](images/connect-header-conditions.jpg) 75 | 76 | 上图中未淡出的块是 Bob 要发送给 Alice 的块头,金色高亮边框的是最后共同块。下面列举了同步会碰到的三种情况: 77 | 78 | 1. Bob 的 Best Chain Tip 在 Alice 的 Best Header Chain 中,最后共同块就是 Bob 的 Best Chain Tip,Bob 没有块头可以发送。 79 | 2. Alice 的 Best Header Chain Tip 在 Bob 的 Best Chain 中并且不等于 Tip,最后共同块就是 Alice 的 Best Header Chain Tip。 80 | 3. Alice 的 Best Header Chain 和 Bob 的 Best Chain 出现了分叉,最后共同块是发生发叉前的块。 81 | 82 | 如果要发送的块很多,需要做分页处理。Bob 先发送第一页,Alice 通过返回结果发现还有更多的块头就继续向 Bob 请求接下来的页。一个简单的分页方案是限制每次返回块头的最大数量,比如 2000。如果返回块头数量等于 2000,说明可能还有块可以返回,就接着请求之后的块头。如果某页最后一个块是 Best Header Chain Tip 或者 Best Chain Tip 的祖先,可以优化成用对应的 Tip 生成 Locator 发送请求,减少收到已有块头的数量。 83 | 84 | 在同步的同时,Alice 可以观察到 Bob 当前的 Best Chain Tip,即在每轮同步时最后收到的块。如果 Alice 的 Best Header Chain Tip 就是 Bob 的 Best Chain Tip ,因为 Bob 没有块头可发,Alice 就无法观测到 Bob 目前的 Best Chain。所以在每轮连接块头同步的第一个请求时,**应该**从 Best Header Chain Tip 的父块开始构建,而不包含 Tip。 85 | 86 | 在下面的情况下**必须**做新一轮的连接块头同步。 87 | 88 | - 收到对方的新块通知,但是新块的父块状态是 Unknown 89 | 90 | 连接块头时可能会出现以下一些异常情况: 91 | 92 | - Alice 观察到的 Bob Best Chain Tip 很长一段时间没有更新,或者时间很老。这种情况 Bob 无法提供有价值的数据,当连接数达到限制时,可以优先断开该节点的连接。 93 | - Alice 观察到的 Bob Best Chain Tip 状态是 Invalid。这个判断不需要等到一轮 Connect Head 结束,任何一个分页发现有 Invalid 的块就可以停止接受剩下的分页了。因为 Bob 在一个无效的分支上,Alice 可以停止和 Bob 的同步,并将 Bob 加入到黑名单中。 94 | - Alice 收到块头全部都在自己的 Best Header Chain 里,这有两种可能,一是 Bob 故意发送,二是 Alice 在 Connect Head 时 Best Chain 发生了变化,由于无法区分只能忽略,但是可以统计发送的块已经在本地 Best Header Chain 上的比例,高于一定阈值可以将对方加入到黑名单中。 95 | 96 | 在收到块头消息时可以先做以下格式验证: 97 | 98 | - 消息中的块是连续的 99 | - 所有块和第一个块的父块在本地状态树中的状态不是 Invalid 100 | - 第一个块的父块在本地状态树中的状态不是 Unknown,即同步时不处理 Orphan Block。 101 | 102 | 这一步的验证包括检查块头是否满足共识规则,PoW 是否有效。因为不处理 Orphan Block,难度调整也可以在这里进行验证。 103 | 104 | ![](images/connect-header-status.jpg) 105 | 106 | 上图是 Alice 和 Bob, Charlie, Davis, Elsa 等节点同步后的状态树情况和观测到的其它节点的 Best Chain Tip。 107 | 108 | 如果认为 Unknown 状态块是不在状态树上的话,在连接块头阶段,会在状态树的末端新增一些 Connected 或者 Invalid 状态的节点。所以可以把连接块头看作是拓展状态树,是探路的阶段。 109 | 110 | ## 下载块 111 | 112 | 完成连接块头后,一些观测到的邻居节点的 Best Chain Tip 在状态树上的分支是以一个或者多个 Connected 块结尾的,即 Connected Chain,这时可以进入下载块流程,向邻居节点请求完整的块,并进行必要的验证。 113 | 114 | 因为有了状态树,可以对同步进行规划,避免做无用工作。一个有效的优化就是只有当观测到的邻居节点的 Best Chain 的累积工作量大于本地的 Best Chain 的累积工作量才进行下载块。而且可以按照 Connected Chain 累积工作量为优先级排序,优先下载累积工作量更高的分支,只有被验证为 Invalid 或者因为下载超时无法进行时才去下载优先级较低的分支。 115 | 116 | 下载某个分支时,因为块的依赖性,应该优先下载更早的块;同时应该从不同的节点去并发下载,充分利用带宽。这可以使用滑动窗口解决。 117 | 118 | 假设分支第一个要下载的 Connected 状态块号是 M,滑动窗口长度是 N,那么只去下载 M 到 M + N - 1 这 N 
个块。在块 M 下载并验证后,窗口往右移动到下一个 Connected 状态的块。如果块 M 验证失败,则分支剩余的块也就都是 Invalid 状态,不需要继续下载。如果窗口长时间没有向右移动,则可以判定为下载超时,可以在尝试其它分支之后再进行尝试,或者该分支上有新增的 Connected 块时再尝试。 119 | 120 | ![](images/sliding-window.jpg) 121 | 122 | 上图是一个长度为 8 的滑动窗口的例子。开始时可下载的块是从 3 到 10。块 3 下载后,因为 4 已经先下载好了,所以窗口直接滑动到从 5 开始。 123 | 124 | 因为通过连接块头已经观测到了邻居节点的 Best Chain,如果在对方 Best Chain 中且对方是一个全节点,可以认为对方是能够提供块的下载的。在下载的时候可以把滑动窗口中的块分成小块的任务加到任务队列中,在能提供下载的节点之间进行任务调度。 125 | 126 | 下载块如果出现交易对不上 Merkle Hash Root 的情况,或者能对上但是有重复的交易 txid 的情况,并不能说明块是无效,只是没有下载到正确的块内容。可以将对方加入黑名单,但是不能标记块的状态为 Invalid,否则恶意节点可以通过发送错误的块内容来污染节点的状态树。 127 | 128 | 这一阶段需要验证交易列表和块头匹配,但是不需要做任何依赖祖先块中交易内容的验证,这些验证会放在下一阶段进行。 129 | 130 | 可以进行的验证比如 Merkel Hash 验证、交易 txid 不能重复、交易列表不能为空、所有交易不能 inputs outputs 同时为空、只有第一个交易可以是 generation transaction 等等。 131 | 132 | 下载块会把状态树中工作量更高的 Connected Chain 中的 Connected 块变成 Downloaded 或者 Invalid。 133 | 134 | ## 采用块 135 | 136 | 在上一阶段中会产生一些以一个或多个 Downloaded 状态的块结尾的链,以下简称为 Downloaded Chain。如果这些链的累积工作量大于 Best Chain Tip, 就可以对这条链进行该阶段完整的合法性验证。如果有多个这样的链,选取累积工作量最高的。 137 | 138 | 这一阶段需要完成所有剩余的验证,包括所有依赖于历史交易内容的规则。 139 | 140 | 因为涉及到 UTXO (未消耗掉的交易 outputs) 的索引,这一步的验证开销是非常大的。为了简化系统,可以只保留一套 UTXO 索引,尝试将本地的 Best Chain Tip 进行必要回退,然后将 Downloaded Chain 上的块进行一次验证,再添加到 Best Chain 上。如果中间有块验证失败则 Downloaded Chain 上剩余的块也就都是 Invalid 状态不需要再继续。这时 Best Chain Tip 甚至会低于之前的 Tip,如果遇到可以采取以下的方案处理: 141 | 142 | - 如果回退之前的 Best Chain 工作量比当前 Tip 更高,恢复之前的 Best Chain 143 | - 如果有其它 Downloaded Chain 比回退之前的 Best Chain 工作量更高,可以继续使用下一个 Downloaded Chain 进行采用块的步骤。 144 | 145 | 采用块会将工作量更高的 Downloaded Chain 中的 Downloaded 状态块变成 Accepted 或者 Invalid,而累积工作量最高的 Downloaded Chain 应该成为本地的 Best Chain。 146 | 147 | ## 新块通知 148 | 149 | 当节点的 Best Chain Tip 发生变化时,应该通过推送的方式主动去通知邻居节点。为了避免通知重复的块,和尽量一次性发送邻居节点没有的块,可以记录给对方发送过的累积工作量最高的块头 (Best Sent Header)。发送过不但指发送过新块通知,也包括发送过在连接块头时给对方的块头的回复。 150 | 151 | 因为可以认为对方节点已经知道 Best Sent Header,及其祖先节点,所以发送新块通知时可以排除掉这些块。 152 | 153 | ![](images/best-sent-header.jpg "Best Sent Header") 154 | 155 | 上面的例子中标记为 Alice 的块是节点 Alice 的 Best Chain Tip。标记为 Best Sent to Bob 是记录的发送给 Bob 工作量最高的块头。其中未淡化的块是 Alice 需要通知给 Bob 的新块。数字对应的每一步说明如下: 156 | 157 | 1. 开始时 Alice 只有 Best Chain Tip 需要发送 158 | 2. Alice 还没有来得及发送,就又多了一个新块,这时需要发送 Best Chain 最后两个块头 159 | 3. Alice 将最后两个块头发送给了 Bob 并同时更新了 Best Sent to Bob 160 | 4. 
Alice 的 Best Chain 发生了分支切换,只需要发送和 Best Sent to Bob 最后共同块之后的块。 161 | 162 | 基于连接的协商参数和要通知的新块数量: 163 | 164 | - 数量为 1 且对方偏好使用 Compact Block [^1],则使用 Compact Block 165 | - 其它情况直接发送块头列表,但要限制发送块的数量不超过某个阈值,比如 8,如果有 8 个或更多的块要通知,只通知最新的 7 个块。 166 | 167 | 当收到新块通知时,会出现父块状态是 Unknown 的情况,即 Orphan Block,这个时候需要立即做一轮连接块头的同步。收到 Compact Block 且父块就是本地的 Best Chain Tip 的时候可以尝试用交易池直接恢复,如果恢复成功,直接可以将三阶段的工作合并进行,否则就当作收到的只是块头。 168 | 169 | ## 同步状态 170 | 171 | ### 配置 172 | 173 | - `GENESIS_HASH`: 创世块哈希 174 | - `MAX_HEADERS_RESULTS`: 一条消息里可以发送块头的最大数量 175 | - `MAX_BLOCKS_TO_ANNOUNCE`: 新块通知数量不可超过该阈值 176 | - `BLOCK_DOWNLOAD_WINDOW`: 下载滑动窗口大小 177 | 178 | ### 存储 179 | 180 | - 块状态树 181 | - Best Chain Tip,决定是否要下载块和采用块。 182 | - Best Header Chain Tip,连接块头时用来构建每轮第一个请求的 Locator 183 | 184 | 每个连接节点需要单独存储的 185 | 186 | - 观测到的对方的 Best Chain Tip 187 | - 上一次发送过的工作量最高的块头哈希 Best Sent Header 188 | 189 | ## 消息定义 190 | 191 | 具体消息定义见参考实现,这里只列出同步涉及到的消息和必要的一些字段和描述。 192 | 193 | 消息的发送是完全异步的,比如发送 `GetHeaders` 并不需要等待对方回复 `SendHeaders` 再发送其它请求,也不需要保证请求和回复的顺序关系,比如节点 A 发送了 `GetHeaders` 和 `GetBlocks` 给 B,B 可以先发送 `SendBlock`,然后再发送 `SendHeaders` 给 A。 194 | 195 | Compact Block [^1] 需要使用到的消息会在 Compact Block 相关文档中说明。 196 | 197 | ### GetHeaders 198 | 199 | 用于连接块头时向邻居节点请求块头。请求第一页,和收到后续页使用相同的 getheaders 消息,区别是第一页是给本地的 Best Header Chain Tip 的父块生成 Locator,而后续页是使用上一页的最后一个块生成 Locator。 200 | 201 | - `hash_stop`: 通知对端构建 `SendHeaders` 时如果处理到指定 hash 的区块应该提前返回。 202 | - `block_locator_hashes`: 对 Chain 上块采样,得到的哈希列表。 203 | 204 | ### SendHeaders 205 | 206 | 用于回复 `GetHeaders`。返回块头列表。从 Locator 获得的最后一个共同块开始到 `hash_stop` 或者数量达到 `MAX_BLOCKS_TO_ANNOUNCE`。两个条件满足任意一个就必须停止添加并返回结果。 207 | 208 | - `headers`:块头列表 209 | 210 | ### GetBlocks 211 | 212 | 用于下载块阶段 213 | 214 | - `block_hashes`: 要下载的区块哈希列表 215 | 216 | ### SendBlock 217 | 218 | 回复 `GetBlocks` 的块下载请求 219 | 220 | - `block`: 请求的块的完整内容 221 | 222 | [^1]: Compact Block 是种压缩传输完整块的技术。它基于在传播新块时,其中的交易应该都已经在对方节点的交易池中。这时只需要包含 交易 txid 列表,和预测对方可能没有的交易的完整信息,接收方就能基于交易池恢复出完整的交易。详细请查阅 [Block and Compact Block Structure](../0020-ckb-consensus-protocol/0020-ckb-consensus-protocol.md#block-and-compact-block-structure) 和 Bitcoin 相关 [BIP](https://github.com/bitcoin/bips/blob/master/bip-0152.mediawiki)。 223 | 224 | -------------------------------------------------------------------------------- /rfcs/0004-ckb-block-sync/images/best-sent-header.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0004-ckb-block-sync/images/best-sent-header.jpg -------------------------------------------------------------------------------- /rfcs/0004-ckb-block-sync/images/block-status.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0004-ckb-block-sync/images/block-status.jpg -------------------------------------------------------------------------------- /rfcs/0004-ckb-block-sync/images/connect-header-conditions.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0004-ckb-block-sync/images/connect-header-conditions.jpg -------------------------------------------------------------------------------- /rfcs/0004-ckb-block-sync/images/connect-header-status.jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0004-ckb-block-sync/images/connect-header-status.jpg -------------------------------------------------------------------------------- /rfcs/0004-ckb-block-sync/images/locator.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0004-ckb-block-sync/images/locator.jpg -------------------------------------------------------------------------------- /rfcs/0004-ckb-block-sync/images/seq-connect-headers.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0004-ckb-block-sync/images/seq-connect-headers.jpg -------------------------------------------------------------------------------- /rfcs/0004-ckb-block-sync/images/sliding-window.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0004-ckb-block-sync/images/sliding-window.jpg -------------------------------------------------------------------------------- /rfcs/0004-ckb-block-sync/images/status-tree.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0004-ckb-block-sync/images/status-tree.jpg -------------------------------------------------------------------------------- /rfcs/0005-priviledged-mode/0005-priviledged-mode.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0005" 3 | Category: Informational 4 | Status: Withdrawn 5 | Author: Xuejie Xiao 6 | Created: 2018-11-26 7 | --- 8 | 9 | # Privileged architecture support for CKB VM 10 | 11 | ## Abstract 12 | 13 | This RFC aims to introduce privileged architecture support for CKB VM. While CKB VM doesn't require a privileged model since it only runs one contract at a time, privileged model can help bring MMU support, which can be quite useful in the following cases: 14 | 15 | * Implementing sophisticated contracts that require dynamic memory allocation, MMU can be used here to prevent invalid memory access for better security. 16 | * Beginners can leverage MMU to trade some cycles for better security. 17 | 18 | Specifically, we plan to add the following features to CKB VM: 19 | 20 | * Just enough CSR(control and status register) instructions and VM changes to support a) privilege mode switching and b) page fault function installation. 21 | * A [TLB](https://en.wikipedia.org/wiki/Translation_lookaside_buffer) structure 22 | 23 | Notice privileged architecture here is an opt-in feature that is closed by default: while CKB VM will always have this feature, it's up to contract writers to decide if they need it. Contracts optimized purely for minimum cycles should have no problem completely ignoring privileged mode. 24 | 25 | ## Privileged mode support via CSR instructions 26 | 27 | To ensure maximum compatibility, we will use the exact instructions and workflows defined in the [RISC-V spec](https://riscv.org/specifications/privileged-isa/) to implement privilege mode support here: 28 | 29 | * First, CSR instructions as defined in RISC-V will be implemented in CKB VM to implement read/write on control and status registers(CSR). 
30 | * For simplicity reasons, we might not implement every control and status register as defined in the RISC-V spec. For now, we are planning to implement `Supervisor Trap Vector Base Address Register(stvec)` and any other register that might be used in the trap phase. As documented in the spec, reading/writing other registers will result in an illegal instruction exception; it's up to contract writers how they want to handle this. 31 | * For now, CKB VM will only use 2 privileged modes: `machine` privileged mode and `user` privileged mode. In machine mode, the contract is free to do anything; in user mode, on the other hand, the operations will be limited. 32 | 33 | The trap function installed in `stvec` is nothing but a normal RISC-V function except that it runs in machine privileged mode. As a result, we will also add proper permission checks to prevent certain operations in user mode, which might include but are not limited to: 34 | 35 | * CSR instructions 36 | * Accessing memory pages belonging to machine privileged mode 37 | * Accessing memory pages without correct permissions, for example, it's forbidden to execute a memory page which doesn't have `EXECUTE` permission 38 | 39 | Note that when CKB VM first loads, it will be in machine privileged mode, hence contracts that don't need privileged mode support can act as if privileged mode doesn't exist. Contracts that do leverage privileged mode, however, can first set up metadata, then switch to user privileged mode by leveraging the RISC-V standard `mret` instruction. 40 | 41 | ## TLB 42 | 43 | To help with MMU, a [Translation lookaside buffer](https://en.wikipedia.org/wiki/Translation_lookaside_buffer) (TLB) structure will also be included in CKB VM. For simplicity, we will implement a TLB now with the following characteristics: 44 | 45 | * The TLB will have 64 entries, each entry covering 4KB (exactly 1 memory page). 46 | * The TLB implemented will be one-way associative, meaning if 2 memory pages have the same value for the last 6 bits of their page numbers, they will evict each other. 47 | * Whenever we are switching between different privilege levels, the TLB will be fully flushed. 48 | 49 | Notice the TLB will only be instantiated when CKB VM generates the first page fault trap; that means if a contract keeps running in machine mode, the contract might never interact with the TLB. 50 | 51 | After the TLB is instantiated, there's no way to turn it off for the rest of the current CKB VM's lifecycle. 
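For illustration, such a TLB could be sketched roughly as follows; the type and member names are hypothetical and only reflect the 64-entry, one-way associative, 4KB-page design described above, not the reference implementation.

```c++
// A minimal sketch of the TLB shape described above (hypothetical names, not the
// reference implementation). Pages are 4KB, so the virtual page number is vaddr >> 12;
// the low 6 bits of the page number select one of the 64 one-way associative slots.
#include <array>
#include <cstdint>
#include <optional>

struct TlbEntry {
    uint64_t vpn = 0;     // virtual page number
    uint64_t ppn = 0;     // translated physical page number
    bool valid = false;
};

class Tlb {
public:
    std::optional<uint64_t> lookup(uint64_t vaddr) const {
        uint64_t vpn = vaddr >> 12;
        const TlbEntry& e = entries_[vpn & 0x3f];  // pages sharing the low 6 bits compete for this slot
        if (e.valid && e.vpn == vpn) return e.ppn;
        return std::nullopt;                       // miss: raise a page fault, then refill
    }
    void refill(uint64_t vaddr, uint64_t paddr) {
        uint64_t vpn = vaddr >> 12;
        entries_[vpn & 0x3f] = TlbEntry{vpn, paddr >> 12, true};  // evicts any colliding entry
    }
    void flush() { entries_.fill(TlbEntry{}); }    // performed on every privilege-level switch
private:
    std::array<TlbEntry, 64> entries_{};
};
```

Under this reading, the machine-mode trap function installed via `stvec` would be the place that resolves a miss and performs something like `refill`, while `flush` corresponds to the full flush on privilege-level switches.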
52 | -------------------------------------------------------------------------------- /rfcs/0005-priviledged-mode/0005-priviledged-mode.zh.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0005" 3 | Category: Informational 4 | Status: Withdrawn 5 | Author: Xuejie Xiao 6 | Created: 2018-11-26 7 | --- 8 | 9 | # CKB VM 中的特权架构支持 10 | 11 | ## 概要 12 | 13 | 本 RFC 的目标是为 CKB VM 添加特权架构支持。虽然由于 CKB VM 每次只运行一个合约,特权模式在 CKB VM 本身的运行中并不需要,但特权模式对添加 MMU 的支持是很有帮助的,MMU 的存在有利于以下几个场景: 14 | 15 | * 实现需要动态内存分配的复杂合约时,MMU 可以帮助避免内存越界错误,增加安全性 16 | * MMU 可以帮助初学者在消耗一定 cycle 的情况下增加安全性 17 | 18 | 具体来说,我们提议为 CKB VM 增加如下部分: 19 | 20 | * 为支持特权模式切换功能,以及指定 page fault 函数功能添加刚刚好足够的 CSR(控制与状态寄存器,control and status register) 指令以及 VM 修改 21 | * [TLB](https://en.wikipedia.org/wiki/Translation_lookaside_buffer) 结构 22 | 23 | 注意这里实现的特权架构是一个默认关闭,可选开启的功能:虽然这个功能在 CKB VM 中一直存在,但是合约设计者可以自由决定是否使用这一功能。为最小 cycle 使用数优化的合约可以完全忽略这一功能。 24 | 25 | ## 基于 CSR 指令的特权模式支持 26 | 27 | 为尽最大可能确保兼容性,我们会用 [RISC-V 标准](https://riscv.org/specifications/privileged-isa/) 中定义的指令以及流程来实现特权指令支持: 28 | 29 | * 首先,我们会实现 RISC-V 标准中定义的 CSR 指令,用于读写控制与状态寄存器 (CSR)。 30 | * 出于简化实现的考虑,我们不会实现 RISC-V 中定义的每一个控制与状态寄存器。目前为止,我们只计划实现 `Supervisor Trap Vector Base Address Register(stvec)` 以及其他在 trap 阶段会被用到的寄存器。在 CKB VM 中读写其他寄存器会参照 spec 中的定义,抛出违法指令的异常,合约开发者可以自行决定如何处理异常。 31 | * 目前 CKB VM 只用到了两个特权模式级别:`machine` 特权模式以及 `user` 特权模式,在 machine 特权模式中,合约可以自由做任何操作,相应的在 user 特权模式中,合约只可以进行允许的操作。 32 | 33 | `stvec` 中指定的 trap 方法其实就是一个普通的 RISC-V 函数,它与其他普通函数的唯一区别在于它运行在 machine 特权模式上。相对应的,我们也会在 user 特权模式中禁止某些操作,这包括但不限于: 34 | 35 | * CSR 指令 36 | * 访问属于 machine 特权级别的内存页 37 | * 用错误的权限访问内存页,如执行没有执行权限内存页上的代码 38 | 39 | 注意 CKB VM 加载时首先会进入 machine 特权模式,因此不需要特权模式支持的合约可以假装特权模式不存在而继续运行。需要特权模式的合约则可以先进行初始化操作,然后通过 RISC-V 的标准指令 `mret` 切换到 user 特权模式。 40 | 41 | ## TLB 42 | 43 | CKB VM 会添加 [Translation lookaside buffer](https://en.wikipedia.org/wiki/Translation_lookaside_buffer) (TLB) 结构辅助 MMU 实现。出于简化实现的考虑,我们会实现具有如下特性的 TLB: 44 | 45 | * TLB 中有 64 个条目,每个条目为 4KB (即正好一个内存页) 46 | * TLB 为单路组相联,即两个末尾 6 个 bit 相同的内存页会相互竞争一个条目位置 47 | * 切换特权级别时,整个 TLB 会被全部清空 48 | 49 | 注意 TLB 只会在 CKB VM 第一次生成 page fault trap 操作时才被初始化。这意味着如果一个合约一直在 machine 特权模式下运行的话,该合约可能永远也不会与 TLB 交互。 50 | 51 | TLB 成功初始化之后,在当前 CKB VM 运行期间会持续存在,无法在初始化之后关闭 TLB。 52 | -------------------------------------------------------------------------------- /rfcs/0006-merkle-tree/0006-merkle-tree.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0006" 3 | Category: Standards Track 4 | Status: Active 5 | Author: Ke Wang 6 | Created: 2018-12-01 7 | --- 8 | 9 | # Merkle Tree for Static Data 10 | 11 | ## Complete Binary Merkle Tree 12 | 13 | CKB uses the Complete Binary Merkle Tree (CBMT) to generate *Merkle Root* and *Merkle Proof* for a static list of items. Currently, CBMT is used to calculate *Transactions Root*. Basically, CBMT is a ***complete binary tree***, in which every level, except possibly the last, is completely filled, and all nodes are as far left as possible. And it is also a ***full binary tree***, in which every node other than the leaves has two children. Compared with other Merkle trees, the hash computation of CBMT is minimal, as well as the proof size. 14 | 15 | ## Nodes Organization 16 | 17 | For the sake of illustration, we order the tree nodes from ***top to bottom*** and ***left to right*** starting at zero. In CBMT with *n* items, the root is the *first* node (*node 0*), the first item's hash is *node n-1*, the second is *node n*, etc. 
We choose this node organization because it is easy to calculate the node order for an item. 18 | 19 | For example, a CBMT with 6 items (suppose the hashes are `[T0, T1, T2, T3, T4, T5]`) and a CBMT with 7 items (suppose the hashes are `[T0, T1, T2, T3, T4, T5, T6]`) are shown below: 20 | 21 | ``` 22 | with 6 items with 7 items 23 | 24 | B0 -- node 0 B0 -- node 0 25 | / \ / \ 26 | / \ / \ 27 | / \ / \ 28 | / \ / \ 29 | B1 -- node 1 B2 -- node 2 B1 -- node 1 B2 -- node 2 30 | / \ / \ / \ / \ 31 | / \ / \ / \ / \ 32 | / \ / \ / \ / \ 33 | B3(3) B4(4) T0(5) T1(6) B3(3) B4(4) B5(5) T0(6) 34 | / \ / \ / \ / \ / \ 35 | T2 T3 T4 T5 T1 T2 T3 T4 T5 T6 36 | (7) (8) (9) (10) (7) (8) (9)(10)(11) (12) 37 | ``` 38 | 39 | As a special case, the tree with 0 items is empty (0 nodes) and its root is `H256::zero`. 40 | 41 | ## Tree Struct 42 | 43 | CBMT can be represented in a very space-efficient way, using an array alone. Nodes in the array are presented in ascending order. 44 | 45 | For example, the two trees above can be represented as: 46 | 47 | ``` 48 | // an array with 11 elements, the first element is node 0 (B0), second is node 1, etc. 49 | [B0, B1, B2, B3, B4, T0, T1, T2, T3, T4, T5] 50 | 51 | // an array with 13 elements, the first element is node 0 (B0), second is node 1, etc. 52 | [B0, B1, B2, B3, B4, B5, T0, T1, T2, T3, T4, T5, T6] 53 | ``` 54 | 55 | For a CBMT with *n* items, the size of the array is *2n-1*, and the index of item *i* (starting at 0) is *i+n-1*. For the node at index *i*, the index of its parent is *(i-1)/2*, the index of its sibling is *(i+1)^1-1* (*^* is xor) and the indexes of its children are *[2i+1, 2i+2]*. 56 | 57 | ## Merkle Proof 58 | 59 | A Merkle Proof provides proof of existence for one or more items. Only the siblings of the nodes along the path from the leaves to the root, excluding the nodes already in the path, should be included in the proof. We also specify that ***the nodes in the proof are presented in descending order*** (with this, the algorithms for proof generation and verification can be much simpler). The indexes of the items to be proven are essential to complete the root calculation; since an index is not an inner feature of an item, the indexes are also included in the proof, and in order to get the correct correspondence, we specify that the indexes are ***presented in ascending order by corresponding hash***. For example, if we want to show that `[T1, T4]` is in the list of 6 items above, only nodes `[T5, T0, B3]` and indexes `[9, 6]` should be included in the proof. 
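To make the index arithmetic concrete, the following sketch (the helper names are ours and not part of this specification) checks the `[T1, T4]` example against the formulas from the Tree Struct section:

```c++
// Index helpers for a CBMT stored as an array of 2n-1 nodes (illustrative only).
#include <cassert>
#include <cstdint>

uint32_t item_index(uint32_t i, uint32_t n) { return i + n - 1; }   // leaf position of item i (0-based)
uint32_t parent(uint32_t i)  { return (i - 1) / 2; }                // valid for i > 0
uint32_t sibling(uint32_t i) { return ((i + 1) ^ 1) - 1; }          // valid for i > 0
uint32_t left_child(uint32_t i)  { return 2 * i + 1; }
uint32_t right_child(uint32_t i) { return 2 * i + 2; }

int main() {
    // Checks against the 6-item tree above: T1 is node 6 and T4 is node 9.
    assert(item_index(1, 6) == 6 && item_index(4, 6) == 9);
    // Their siblings T0 (node 5) and T5 (node 10) plus B3 (node 3) form the proof [T5, T0, B3].
    assert(sibling(6) == 5 && sibling(9) == 10);
    assert(parent(9) == 4 && sibling(parent(9)) == 3);
    return 0;
}
```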
60 | 61 | ### Proof Structure 62 | 63 | The schema of proof struct is: 64 | 65 | ``` 66 | table Proof { 67 | // indexes of items 68 | indexes: [uint32]; 69 | // nodes on the path which can not be calculated, in descending order by index 70 | nodes: [H256]; 71 | } 72 | ``` 73 | 74 | ### Algorithm of proof generation 75 | 76 | ```c++ 77 | Proof gen_proof(Hash tree[], U32 indexes[]) { 78 | Hash nodes[]; 79 | U32 tree_indexes[]; 80 | Queue queue; 81 | 82 | int size = len(tree) >> 1 + 1; 83 | indexes.desending_sort(); 84 | 85 | for index in indexes { 86 | queue.push_back(index + size - 1); 87 | } 88 | 89 | while(queue is not empty) { 90 | int index = queue.pop_front(); 91 | int sibling = calculate_sibling(index); 92 | 93 | if(sibling == queue.front()) { 94 | queue.pop_front(); 95 | } else { 96 | nodes.push_back(tree[sibling]); 97 | } 98 | 99 | int parent = calculate_parent(index); 100 | if(parent != 0) { 101 | queue.push_back(parent); 102 | } 103 | } 104 | 105 | add (size-1) for every index in indexes; 106 | sort indexes in ascending order by corresponding hash; 107 | 108 | return Proof::new(indexes, nodes); 109 | } 110 | ``` 111 | 112 | ### Algorithm of validation 113 | 114 | ```c++ 115 | bool validate_proof(Proof proof, Hash root, Item items[]) { 116 | Queue queue; 117 | ascending_sort_by_item_hash(items); 118 | 119 | for (index,item) in (proof.indexes, items) { 120 | queue.push_back((item.hash(), index)); 121 | } 122 | 123 | descending_sort_by_index(queue); 124 | 125 | int i = 0; 126 | while(queue is not empty) { 127 | Hash hash, hash1, hash2; 128 | int index1, index2; 129 | 130 | (hash1, index1) = queue.pop_front(); 131 | (hash2, index2) = queue.front(); 132 | int sibling = calculate_sibling(index1); 133 | 134 | if(sibling == index2) { 135 | queue.pop_front(); 136 | hash = merge(hash2, hash1); 137 | } else { 138 | hash2 = proof.nodes[i++]; 139 | 140 | if(is_left_node(index1)) { 141 | hash = merge(hash1, hash2); 142 | } else { 143 | hash = merge(hash2, hash1); 144 | } 145 | } 146 | 147 | int parent = calculate_parent(index); 148 | if(parent == 0) { 149 | return root == hash; 150 | } 151 | queue.push_back((hash, parent)) 152 | } 153 | 154 | return false; 155 | } 156 | ``` 157 | -------------------------------------------------------------------------------- /rfcs/0006-merkle-tree/0006-merkle-tree.zh.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0006" 3 | Category: Standards Track 4 | Status: Active 5 | Author: Ke Wang 6 | Created: 2018-12-01 7 | --- 8 | 9 | # 静态 Merkle Tree 10 | 11 | ## Complete Binary Merkle Tree 12 | 13 | CKB 使用 ***Complete Binary Merkle Tree(CBMT)*** 来为静态数据生成 *Merkle Root* 及 *Merkle Proof*,目前 CBMT 被用于 *Transactions Root* 的计算中。它是一棵完全二叉树,同时也是一棵满二叉树,相比于其它的 Merkle Tree,***Complete Binary Merkle Tree*** 具有最少的 Hash 计算量及最小的 proof size。 14 | 15 | ## 节点组织形式 16 | 17 | 规定 CBMT 中节点的排列顺序为从上到下、从左到右(从零开始标号),在一棵由 *n* 个 item 生成的 CBMT 中,下标为 *0* 的节点为 *Merkle Root*,下标为 *n* 的节点为第 *1* 个 item 的 hash,下标 *n+1* 的节点为第 2 个 item 的 hash,以此类推。之所以采用这种排列方式,是因为从 item 的位置很容易计算出其在 CBMT 中节点对应的位置。 18 | 19 | 举例来说,6 个 item (假设 item 的 Hash 为 `[T0, T1, T2, T3, T4, T5]`)与 7 个 item (假设 item 的 hash 为 `[T0, T1, T2, T3, T4, T5, T6]`)生成的 Tree 的结构如下所示: 20 | 21 | ``` 22 | with 6 items with 7 items 23 | 24 | B0 -- node 0 B0 -- node 0 25 | / \ / \ 26 | / \ / \ 27 | / \ / \ 28 | / \ / \ 29 | B1 -- node 1 B2 -- node 2 B1 -- node 1 B2 -- node 2 30 | / \ / \ / \ / \ 31 | / \ / \ / \ / \ 32 | / \ / \ / \ / \ 33 | B3(3) B4(4) TO(5) T1(6) B3(3) B4(4) B5(5) T0(6) 34 | / \ 
/ \ / \ / \ / \ 35 | T2 T3 T4 T5 T1 T2 T3 T4 T5 T6 36 | (7) (8) (9) (10) (7) (8) (9)(10)(11) (12) 37 | ``` 38 | 39 | 此外,我们规定对于只有 0 个 item 的情况,生成的 tree 只有 0 个 node,其 root 为 `H256::zero`。 40 | 41 | ## 数据结构 42 | 43 | CBMT 可以用一个数组来表示,节点按照升序存放在数组中,上面的两棵 tree 用数组表示分别为: 44 | 45 | ``` 46 | // 11 个元素的数组,数组第一个位置放 node0, 第二个位置放 node1,以此类推。 47 | [B0, B1, B2, B3, B4, T0, T1, T2, T3, T4, T5] 48 | // 13 个元素的数组,数组第一个位置放 node0, 第二个位置放 node1,以此类推。 49 | [B0, B1, B2, B3, B4, B5, T0, T1, T2, T3, T4, T5, T6] 50 | ``` 51 | 52 | 在一个由 n 个 item 生成的 CBMT 中,其数组的大小为 *2n-1*,*item i* 在数组中的下标为(下标从 0 开始)*i+n-1*。对于下标为 *i* 的节点,其父节点下标为 *(i-1)/2*,兄弟节点下标为 *(i+1)^1-1*(^为异或),子节点的下标为 *2i+1*、*2i+2*。 53 | 54 | ## Merkle Proof 55 | 56 | Merkle Proof 能为一个或多个 item 提供存在性证明,Proof 中应只包含从叶子节点到根节点路径中无法直接计算出的节点,并且我们规定这些节点按照降序排列,采用降序排列的原因是这与节点的生成顺序相符且 *proof* 的生成及校验算法也会变得非常简单。此外,计算 root 时还需要知道要证明的 item 的 index,因此这些 index 也应包含在 Proof 中,且为了能够使这些 index 能够正确的对应到 item,因此规定这些 index 按对应的 item 的 hash 升序排列,如在 6 个 item 的 Merkle Tree 中为 `[T1, T4]` 生成的 Proof 中应只包含 `[T5, T0, B3]` 和 `[9,6]`。 57 | 58 | ### Proof 结构 59 | 60 | Proof 结构体的 schema 形式为: 61 | 62 | ``` 63 | table Proof { 64 | // indexes of items 65 | indexes: [uint32]; 66 | // nodes on the path which can not be calculated, in descending order by index 67 | nodes: [H256]; 68 | } 69 | ``` 70 | 71 | ### Proof 生成算法 72 | 73 | ```c++ 74 | Proof gen_proof(Hash tree[], U32 indexes[]) { 75 | Hash nodes[]; 76 | U32 tree_indexes[]; 77 | Queue queue; 78 | 79 | int size = len(tree) >> 1 + 1; 80 | indexes.desending_sort(); 81 | 82 | for index in indexes { 83 | queue.push_back(index + size - 1); 84 | } 85 | 86 | while(queue is not empty) { 87 | int index = queue.pop_front(); 88 | int sibling = calculate_sibling(index); 89 | 90 | if(sibling == queue.front()) { 91 | queue.pop_front(); 92 | } else { 93 | nodes.push_back(tree[sibling]); 94 | } 95 | 96 | int parent = calculate_parent(index); 97 | if(parent != 0) { 98 | queue.push_back(parent); 99 | } 100 | } 101 | 102 | add (size-1) for every index in indexes; 103 | sort indexes in ascending order by corresponding hash; 104 | 105 | return Proof::new(indexes, nodes); 106 | } 107 | ``` 108 | 109 | ### Proof 校验算法 110 | 111 | ```c++ 112 | bool validate_proof(Proof proof, Hash root, Item items[]) { 113 | Queue queue; 114 | ascending_sort_by_item_hash(items); 115 | 116 | for (index,item) in (proof.indexes, items) { 117 | queue.push_back((item.hash(), index)); 118 | } 119 | 120 | descending_sort_by_index(queue); 121 | 122 | int i = 0; 123 | while(queue is not empty) { 124 | Hash hash, hash1, hash2; 125 | int index1, index2; 126 | 127 | (hash1, index1) = queue.pop_front(); 128 | (hash2, index2) = queue.front(); 129 | int sibling = calculate_sibling(index1); 130 | 131 | if(sibling == index2) { 132 | queue.pop_front(); 133 | hash = merge(hash2, hash1); 134 | } else { 135 | hash2 = proof.nodes[i++]; 136 | 137 | if(is_left_node(index1)) { 138 | hash = merge(hash1, hash2); 139 | } else { 140 | hash = merge(hash2, hash1); 141 | } 142 | } 143 | 144 | int parent = calculate_parent(index); 145 | if(parent == 0) { 146 | return root == hash; 147 | } 148 | queue.push_back((hash, parent)) 149 | } 150 | 151 | return false; 152 | } 153 | ``` 154 | -------------------------------------------------------------------------------- /rfcs/0007-scoring-system-and-network-security/0007-scoring-system-and-network-security.zh.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0007" 3 | Category: Standards Track 4 | Status: Withdrawn 5 | Author: 
Jinyang Jiang <@jjyr> 6 | Created: 2018-10-02 7 | --- 8 | 9 | # P2P 评分系统和网络安全 10 | 11 | ## 简介 12 | 13 | 本篇 RFC 描述了 CKB P2P 网络层的评分系统,以及基于评分的网络安全策略。 14 | 15 | 16 | ## 目标 17 | 18 | CKB 网络被设计为开放的 P2P 网络,任何节点都能无需许可的加入网络,但网络的开放性同时使得恶意节点也能够加入并对 P2P 网络进行攻击。 19 | 20 | 同样采用开放性 P2P 网络的比特币和以太坊中都曾有「日蚀攻击」的安全问题。 21 | 日蚀攻击的原理是攻击者通过操纵恶意节点占领受害者节点所有的 Peers 连接,以此控制受害者节点可见的网络。 22 | 23 | 攻击者可以用极少成本实施日蚀攻击,攻击成功后可以操纵受害节点的算力做些恶意行为, 或欺骗受害节点进行双花交易。 24 | 25 | 参考论文 -- [Eclipse Attacks on Bitcoin’s Peer-to-Peer Network][2] 26 | 27 | 论文中同时提出了几种防范手段, 其中部分已经在比特币主网应用, 28 | 本 RFC 参考比特币网络的实现,描述如何在 CKB 网络中正确应用这些措施。 29 | 30 | RFC 同时描述了 CKB P2P 网络的评分机制, 31 | 结合 CKB 的评分机制,可以使用比特币中成熟的安全措施来处理更加通用的攻击场景。 32 | 33 | 基于 CKB 的评分机制,我们遵循几条规则来处理恶意 Peers: 34 | 35 | 1. 节点应尽可能的存储已知的 Peers 信息 36 | 2. 节点需要不断对 Peer 的好行为和坏行为进行评分 37 | 3. 节点应保留好的(分数高的) Peer,驱逐坏的(分数低) Peer 38 | 39 | RFC 描述了客户端应该实现的打分系统和下文的几种安全策略。 40 | 41 | 42 | ## Specification 43 | 44 | ### 术语 45 | 46 | * `Node` - 节点 47 | * `Peer` - 网络上的其他节点 48 | * `PeerInfo` - 描述 Peer 信息的数据结构 49 | * `PeerStore` - 用于存储 PeerInfo 的组件 50 | * `outbound peer` - 主动发起连接的节点 51 | * `inbound peer` - 被动接受连接的节点 52 | * `max_outbound` - 节点主动连接的 Peers 上限 53 | * `max_inbound` - 节点被动接受的 Peers 上限 54 | * `network group` - 驱逐节点时用到的概念,对 Peer 连接时的 IP 计算,IPv4 取前 16 位,Ipv6 取前 32 位 55 | 56 | 57 | ### PeerStore 和 PeerInfo 58 | 59 | PeerStore 应该做到持久化存储, 并尽可能多的储存已知的 PeerInfo 60 | 61 | PeerInfo 至少包含以下内容 62 | 63 | ``` 64 | PeerInfo { 65 | NodeId, // Peer 的 NodeId 66 | ConnectedIP, // 连接时的 IP 67 | Direction, // Inbound or Outbound 68 | LastConnectedAt, // 最后一次连接的时间 69 | Score // 分数 70 | } 71 | ``` 72 | 73 | ### 评分系统 74 | 75 | 评分系统需要以下参数 76 | 77 | * `PEER_INIT_SCORE` - Peers 的初始分数 78 | * `BEHAVIOURS` - 节点的行为, 如 `UNEXPECTED_DISCONNECT`, `TIMEOUT`, `CONNECTED` 等 79 | * `SCORING_SCHEMA` - 描述不同行为对应的分数, 如 `{"TIMEOUT": -10, "CONNECTED": 10}` 80 | * `BAN_SCORE` - Peer 评分低于此值时会被加入黑名单 81 | 82 | 网络层应该提供评分接口,允许 `sync`, `relay` 等上层子协议报告 peer 行为, 83 | 并根据 peer 行为和 `SCORING_SCHEMA` 调整 peer 的评分。 84 | 85 | ``` ruby 86 | peer.score += SCOREING_SCHEMA[BEHAVIOUR] 87 | ``` 88 | 89 | Peer 的评分是 CKB P2P 网络安全的重要部分,peer 的行为可以分为如下三种: 90 | 91 | 1. 符合协议的行为: 92 | * 如: 从 peer 获取了新的 block、节点成功连接上 peer 。 当 peer 作出符合协议的行为时,节点应上调对 peer 评分, 93 | 考虑恶意 Peer 有可能在攻击前进行伪装, 94 | 对好行为奖励的分数不应一次性奖励太多, 95 | 而是鼓励 peer 长期进行好的行为来积累信用。 96 | 97 | 2. 可能由于网络异常导致的行为: 98 | * 如: peer 异常断开、连接 peer 失败、ping timeout。 99 | 对这些行为我们采用宽容性的惩罚,下调对 peer 的评分,但不会一次性下调太多。 100 | 101 | 3. 明显违反协议的行为: 102 | * 如: peer 发送无法解码的内容、peer 发送 invalid block, peer 发送 invalid transaction。 103 | 当我们可以确定 peer 存在明显的恶意行为时,对 peer 打低分,如果 peer 评分低于 `BAN_SCORE` ,将 peer 加入黑名单并禁止连接。 104 | 105 | 例子: 106 | * peer 1 连接成功,节点报告 peer1 `CONNECTED` 行为,peer 1 加 10 分 107 | * peer 2 连接超时,节点报告 peer2 `TIMEOUT` 行为,peer 2 减 10 分 108 | * peer 1 通过 `sync` 协议发送重复的请求,节点报告 peer 1 `DUPLICATED_REQUEST_BLOCK` 行为,peer 1 减 50 分 109 | * peer 1 被扣分直至低于 `BAN_SCORE`, 被断开连接并加入黑名单 110 | 111 | `BEHAVIOURS`、 `SCORING_SCHEMA` 等参数不属于共识协议的一部分,CKB 实现应该根据网络实际的情况对参数调整。 112 | 113 | 114 | ### 节点 outbound peers 的选择策略 115 | 116 | [日蚀攻击论文][2]中提到了比特币节点重启时的安全问题: 117 | 118 | 1. 攻击者事先利用比特币的节点发现规则填充受害节点的地址列表 119 | 2. 攻击者等待或诱发受害者节点重启 120 | 3. 重启后,受害者节点会从 addrman (类似 peer store) 中选择一些地址连接 121 | 3. 受害节点的所有对外的连接都连接到了恶意 peers 则攻击者攻击成功 122 | 123 | CKB 在初始化网络时应该避免这些问题 124 | 125 | #### Outbound peers 连接流程 126 | 127 | 参数说明: 128 | * `TRY_SCORE` - 设置一个分数,仅当 PeerInfo 分数高于 `TRY_SCORE` 时节点才会去尝试连接 129 | * `ANCHOR_PEERS` - 锚点 peer 的数量,值应该小于 `max_outbound` 如 `2` 130 | 131 | 变量: 132 | * `try_new_outbound_peer` - 设置节点是否该继续发起新的 Outbound 连接 133 | 134 | 选择一个 outbound peer 的流程: 135 | 136 | 1. 
如果当前连接的 outbound peers 小于 `ANCHOR_PEERS` 执行 2, 否则执行 3 137 | 2. 选择一个锚点 peer: 138 | 1. 从 PeerStore 挑选最后连接过的 `max_bound` 个 outbound peers 作为 `recent_peers` 139 | 2. 如果 `recent_peers` 为空则执行 3,否则从 `recent_peers` 中选择分数最高的节点作为 outbound peer 返回 140 | 3. 在 PeerStore 中随机选择一个分数大于 `TRY_SCORE` 且 `NetworkGroup` 和当前连接的 outbound peers 都不相同的 peer info,如果找不到这样的 peer info 则执行 5,否则将这个 peer info 返回 141 | 4. 从 `boot_nodes` 中随机选择一个返回 142 | 143 | 伪代码 144 | 145 | ``` ruby 146 | # 找到一个 outbound peer 候选 147 | def find_outbound_peer 148 | connected_outbound_peers = connected_peers.select{|peer| peer.outbound? && !peer.feeler? } 149 | if connected_outbound_peers.length < ANCHOR_PEERS 150 | find_anchor_peer() || find_random_peer() || random_boot_node() 151 | else 152 | find_random_peer() || random_boot_node() 153 | end 154 | end 155 | 156 | def find_anchor_peer 157 | last_connected_peers = peer_store.sort_by{|peer| -peer.last_connected_at}.take(max_bound) 158 | # 返回最高分的 peer info 159 | last_connected_peers.sort_by(&:score).last 160 | end 161 | 162 | def find_random_peer 163 | connected_outbound_peers = connected_peers.select{|peer| peer.outbound? && !peer.feeler? } 164 | exists_network_groups = connected_outbound_peers.map(&:network_group) 165 | candidate_peers = peer_store.select do |peer| 166 | peer.score >= TRY_SCORE && !exists_network_groups.include?(peer.network_group) 167 | end 168 | candidate_peers.sample 169 | end 170 | 171 | def random_boot_node 172 | boot_nodes.sample 173 | end 174 | ``` 175 | 176 | 177 | 节点应该重复以上过程,直到节点正在连接的 outbound peers 数量大于等于 `max_outbound` 并且 `try_new_outbound_peer` 为 `false`。 178 | 179 | ``` ruby 180 | check_outbound_peers_interval = 15 181 | # 每隔几分钟检查 outbound peers 数量 182 | loop do 183 | sleep(check_outbound_peers_interval) 184 | connected_outbound_peers = connected_peers.select{|peer| peer.outbound? && !peer.feeler? } 185 | if connected_outbound_peers.length >= max_outbound && !try_new_outbound_peer 186 | next 187 | end 188 | new_outbound_peer = find_outbound_peer() 189 | connect_peer(new_outbound_peer) 190 | end 191 | ``` 192 | 193 | `try_new_outbound_peer` 的作用是在一定时间内无法发现有效消息时,允许节点连接更多的 outbound peers,这个机制在后文介绍。 194 | 195 | 该策略在节点没有 Peers 时会强制从最近连接过的 outbound peers 中选择,这个行为参考了[日蚀攻击论文][2]中的 Anchor Connection 策略。 196 | 197 | 攻击者需要做到以下条件才可以成功实施日蚀攻击 198 | 199 | 1. 攻击者有 `n` 个伪装节点(`n == ANCHOR_PEERS`) 成为受害者节点的 outbound peers,这些伪装节点同时要拥有最高得分 200 | 2. 
攻击者需要准备至少 `max_outbound - ANCHOR_PEERS` 个伪装节点地址在受害者节点的 PeerStore,并且受害者节点的随机挑选的 `max_outbound - ANCHOR_PEERS` 个 outbound peers 全部是攻击者的伪装节点。 201 | 202 | #### 额外的 outbound peers 连接和驱逐 203 | 204 | 网络组件应该每隔几分钟检测子协议中的主要协议如 `sync` 协议是否工作 205 | 206 | ``` ruby 207 | def sync_maybe_stale 208 | now = Time.now 209 | # 可以通过上次 Tip 更新时间,出块间隔和当前时间判断 sync 是否正常工作 210 | last_tip_updated_at < now - block_produce_interval * n 211 | end 212 | ``` 213 | 214 | 当我们发现 `sync` 协议无法正常工作时,应该设置 `try_new_outbound_peer` 变量为 `true`,当发现 `sync` 协议恢复正常时设置 `try_new_outbound_peer` 为 `false` 215 | 216 | ``` ruby 217 | check_sync_stale_at = Time.now 218 | loop_interval = 30 219 | check_sync_stale_interval = 15 * 60 #(15 minutes) 220 | 221 | loop do 222 | sleep(loop_interval) 223 | # try evict 224 | evict_extra_outbound_peers() 225 | now = Time.now 226 | if check_sync_stale_at >= now 227 | set_try_new_outbound_peer(sync_maybe_stale()) 228 | check_sync_stale_at = now + check_sync_stale_interval 229 | end 230 | end 231 | ``` 232 | 233 | 当 `try_new_outbound_peer` 为 `true` 时 CKB 网络将会持续的尝试连接额外的 outbound peers,并每隔几分钟尝试逐出没有用的额外 outbound peers,这个行为防止节点有过多的连接。 234 | 235 | ``` ruby 236 | def evict_extra_outbound_peers 237 | connected_outbound_peers = connected_peers.select{|peer| peer.outbound? && !peer.feeler? } 238 | if connected_outbound_peers.length <= max_outbound 239 | return 240 | end 241 | now = Time.now 242 | # 找出连接的 outbound peers 中 last_block_announcement_at 最老的 peer 243 | evict_target = connected_outbound_peers.sort_by do |peer| 244 | peer.last_block_announcement_at 245 | end.first 246 | if evict_target 247 | # 至少连接上这个 peer 一段时间,且当前没有从这个 peer 下载块 248 | if now - evict_target.last_connected_at > MINIMUM_CONNECT_TIME && !is_downloading?(evict_target) 249 | disconnect_peer(evict_target) 250 | # 防止连接过多的 outbound peer 251 | set_try_new_outbound_peer(false) 252 | end 253 | end 254 | end 255 | ``` 256 | 257 | 258 | ### 节点 inbound peers 接受机制 259 | 260 | 比特币中当节点的被动 peers 连满同时又有新 peer 尝试连接时,节点会对已有 peers 进行驱逐测试(详细请参考 [Bitcoin 源码][1])。 261 | 262 | 驱逐测试的目的在于节点保留高质量 peer 的同时,驱逐低质量的 peer。 263 | 264 | CKB 参考了比特币的驱逐测试,步骤如下: 265 | 266 | 1. 找出当前连接的所有 inbound peers 作为 `candidate_peers` 267 | 2. 保护 peers (`N` 代表每一步中我们想要保护的 peers 数量): 268 | 1. 从 `candidate_peers` 找出 `N` 个分数最高的 peers 删除 269 | 2. 从 `candidate_peers` 找出 `N` 个 ping 最小的 peers 删除 270 | 3. 从 `candidate_peers` 找出 `N` 个最近发送消息给我们的 peers 删除 271 | 4. 从 `candidate_peers` 找出 `candidate_peers.size / 2` 个连接时间最久的 peers 删除 272 | 3. 按照 `network group` 对剩余的 `candidate_peers` 分组 273 | 4. 找出包含最多 peers 的组 274 | 5. 驱逐组中分数最低的 peer,找不到 peer 驱逐时则拒绝新 peer 的连接 275 | 276 | 我们基于攻击者难以模拟或操纵的特征来保护一些 peers 免受驱逐,以增强网络的安全性。 277 | 278 | ### Feeler Connection 279 | 280 | Feeler Connection 机制的目的在于测试 Peer 是否可以连接。 281 | 282 | 当节点的 outbound peers 数量达到 `max_outbound` 限制时, 283 | 节点会每隔一段时间(一般是几分钟)主动发起 feeler connection: 284 | 285 | 1. 从 PeerStore 中随机选出一个未连接过的 peer info 286 | 2. 连接该 peer 287 | 3. 执行握手协议 288 | 4. 断开连接 289 | 290 | Feeler peer 会被假设为很快断开连接 291 | 292 | ### PeerStore 清理 293 | 294 | 设置一些参数: 295 | `PEER_STORE_LIMIT` - PeerStore 最多可以存储的 PeerInfo 数量 296 | `PEER_NOT_SEEN_TIMEOUT` - 用于判断 peer info 是否该被清理,如该值设为 15 天,则表示最近 15 天内连接过的 peer 不会被清理 297 | 298 | PeerStore 中存储的 PeerInfo 数量达到 `PEER_STORE_LIMIT` 时需要清理,过程如下: 299 | 300 | 1. 按照 `network group` 给 PeerStore 中的 PeerInfo 分组 301 | 2. 找出包含最多节点的组 302 | 3. 在组中搜索最近没有连接过的 peers `peer.last_connected_at < Time.now - PEER_NOT_SEEN_TIMEOUT` 303 | 4. 在该集合中找到分数最低的 PeerInfo `candidate_peer_info` 304 | 5. 
如果 `candidate_peer_info.score < new_peer_info.score` 则删掉 `candidate_peer_info` 并插入 `new_peer_info`,否则不接受 `new_peer_info` 305 | 306 | 307 | ## 参考 308 | 309 | 1. [Bitcoin source code][1] 310 | 2. [Eclipse Attacks on Bitcoin’s Peer-to-Peer Network][2] 311 | 312 | [1]: https://github.com/bitcoin/bitcoin 313 | [2]: https://eprint.iacr.org/2015/263.pdf 314 | -------------------------------------------------------------------------------- /rfcs/0010-eaglesong/CompactFIPS202.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # Implementation by the Keccak, Keyak and Ketje Teams, namely, Guido Bertoni, 3 | # Joan Daemen, Michaël Peeters, Gilles Van Assche and Ronny Van Keer, hereby 4 | # denoted as "the implementer". 5 | # 6 | # For more information, feedback or questions, please refer to our websites: 7 | # http://keccak.noekeon.org/ 8 | # http://keyak.noekeon.org/ 9 | # http://ketje.noekeon.org/ 10 | # 11 | # To the extent possible under law, the implementer has waived all copyright 12 | # and related or neighboring rights to the source code in this file. 13 | # http://creativecommons.org/publicdomain/zero/1.0/ 14 | 15 | def ROL64(a, n): 16 | return ((a >> (64-(n%64))) + (a << (n%64))) % (1 << 64) 17 | 18 | def KeccakF1600onLanes(lanes): 19 | R = 1 20 | for round in range(24): 21 | # θ 22 | C = [lanes[x][0] ^ lanes[x][1] ^ lanes[x][2] ^ lanes[x][3] ^ lanes[x][4] for x in range(5)] 23 | D = [C[(x+4)%5] ^ ROL64(C[(x+1)%5], 1) for x in range(5)] 24 | lanes = [[lanes[x][y]^D[x] for y in range(5)] for x in range(5)] 25 | # ρ and π 26 | (x, y) = (1, 0) 27 | current = lanes[x][y] 28 | for t in range(24): 29 | (x, y) = (y, (2*x+3*y)%5) 30 | (current, lanes[x][y]) = (lanes[x][y], ROL64(current, (t+1)*(t+2)//2)) 31 | # χ 32 | for y in range(5): 33 | T = [lanes[x][y] for x in range(5)] 34 | for x in range(5): 35 | lanes[x][y] = T[x] ^((~T[(x+1)%5]) & T[(x+2)%5]) 36 | # ι 37 | for j in range(7): 38 | R = ((R << 1) ^ ((R >> 7)*0x71)) % 256 39 | if (R & 2): 40 | lanes[0][0] = lanes[0][0] ^ (1 << ((1<> (8*i)) % 256 for i in range(8)) 48 | 49 | def KeccakF1600(state): 50 | lanes = [[load64(state[8*(x+5*y):8*(x+5*y)+8]) for y in range(5)] for x in range(5)] 51 | lanes = KeccakF1600onLanes(lanes) 52 | state = bytearray(200) 53 | for x in range(5): 54 | for y in range(5): 55 | state[8*(x+5*y):8*(x+5*y)+8] = store64(lanes[x][y]) 56 | return state 57 | 58 | def Keccak(rate, capacity, inputBytes, delimitedSuffix, outputByteLen): 59 | outputBytes = bytearray() 60 | state = bytearray([0 for i in range(200)]) 61 | rateInBytes = rate//8 62 | blockSize = 0 63 | if (((rate + capacity) != 1600) or ((rate % 8) != 0)): 64 | return 65 | inputOffset = 0 66 | # === Absorb all the input blocks === 67 | while(inputOffset < len(inputBytes)): 68 | blockSize = min(len(inputBytes)-inputOffset, rateInBytes) 69 | for i in range(blockSize): 70 | state[i] = state[i] ^ inputBytes[i+inputOffset] 71 | inputOffset = inputOffset + blockSize 72 | if (blockSize == rateInBytes): 73 | state = KeccakF1600(state) 74 | blockSize = 0 75 | # === Do the padding and switch to the squeezing phase === 76 | state[blockSize] = state[blockSize] ^ delimitedSuffix 77 | if (((delimitedSuffix & 0x80) != 0) and (blockSize == (rateInBytes-1))): 78 | state = KeccakF1600(state) 79 | state[rateInBytes-1] = state[rateInBytes-1] ^ 0x80 80 | state = KeccakF1600(state) 81 | # === Squeeze out all the output blocks === 82 | while(outputByteLen > 0): 83 | blockSize = min(outputByteLen, rateInBytes) 84 | 
outputBytes = outputBytes + state[0:blockSize] 85 | outputByteLen = outputByteLen - blockSize 86 | if (outputByteLen > 0): 87 | state = KeccakF1600(state) 88 | return outputBytes 89 | 90 | def SHAKE128(inputBytes, outputByteLen): 91 | return Keccak(1344, 256, inputBytes, 0x1F, outputByteLen) 92 | 93 | def SHAKE256(inputBytes, outputByteLen): 94 | return Keccak(1088, 512, inputBytes, 0x1F, outputByteLen) 95 | 96 | def SHA3_224(inputBytes): 97 | return Keccak(1152, 448, inputBytes, 0x06, 224//8) 98 | 99 | def SHA3_256(inputBytes): 100 | return Keccak(1088, 512, inputBytes, 0x06, 256//8) 101 | 102 | def SHA3_384(inputBytes): 103 | return Keccak(832, 768, inputBytes, 0x06, 384//8) 104 | 105 | def SHA3_512(inputBytes): 106 | return Keccak(576, 1024, inputBytes, 0x06, 512//8) 107 | -------------------------------------------------------------------------------- /rfcs/0010-eaglesong/constants.py: -------------------------------------------------------------------------------- 1 | from CompactFIPS202 import SHAKE256 2 | 3 | num_rounds = 43 4 | num_constants = 16 * num_rounds 5 | num_bytes = num_constants * 4 6 | 7 | def padhex( integer, number_digits ): 8 | return "0x" + ("0" * (number_digits - len(hex(integer))+2)) + hex(integer)[2:] 9 | 10 | #randomness = SHAKE256(bytearray("I have always been on the machines' side."), num_bytes) 11 | randomness = SHAKE256(bytearray("The various ways in which the knowledge on which people base their plan is communicated to them is the crucial problem for any theory explaining the economic process, and the problem of what is the best way to utilizing knowledge initially dispersed among all the people is at least one of the main problems of economic policy - or of designing an efficient economic system."), num_bytes) 12 | 13 | constants = [] 14 | for i in range(0, num_constants): 15 | integer = sum(256**j * randomness[i*4 + j] for j in range(0,4)) 16 | constants.append(integer) 17 | 18 | #print "constants = [", ", ".join(hex(c) for c in constants), "]" 19 | print "injection_constants = [", 20 | for i in range(0, num_constants): 21 | print padhex(constants[i], 8), 22 | if i != num_constants - 1: 23 | print ", ", 24 | if i%8 == 7 and i != num_constants - 1: 25 | print "" 26 | print "]" 27 | 28 | -------------------------------------------------------------------------------- /rfcs/0010-eaglesong/hash.c: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | 4 | void EaglesongHash( unsigned char * output, const unsigned char * input, int input_length ); 5 | 6 | void main( int argc, char ** argv ) { 7 | unsigned char input[10000]; 8 | unsigned char output[32]; 9 | int c; 10 | int i; 11 | 12 | i = 0; 13 | while( 1 ) { 14 | c = getchar(); 15 | if( c == EOF ) { 16 | break; 17 | } 18 | input[i] = c; 19 | i = i + 1; 20 | } 21 | 22 | EaglesongHash(output, input, i); 23 | 24 | for( i = 0 ; i < 32 ; ++i ) { 25 | printf("%02x", output[i]); 26 | } 27 | 28 | printf("\n"); 29 | } 30 | -------------------------------------------------------------------------------- /rfcs/0010-eaglesong/hash.py: -------------------------------------------------------------------------------- 1 | from eaglesong import EaglesongHash 2 | import sys 3 | from binascii import hexlify 4 | 5 | lines = sys.stdin.readlines() 6 | input_bytes = "\n".join(lines) 7 | input_bytes = bytearray(input_bytes, "utf8") 8 | output_bytes = EaglesongHash(input_bytes) 9 | print(hexlify(bytearray(output_bytes))) 10 | 11 | 12 | 13 | 
-------------------------------------------------------------------------------- /rfcs/0011-transaction-filter-protocol/0011-transaction-filter-protocol.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0011" 3 | Category: Standards Track 4 | Status: Withdrawn 5 | Author: Quake Wang 6 | Created: 2018-12-11 7 | --- 8 | 9 | # Transaction Filter Protocol 10 | 11 | ## Abstract 12 | 13 | The transaction filter protocol allows peers to reduce the amount of transaction data they send. A peer that wants to retrieve only transactions of interest has the option of setting filters on each connection. A filter is defined as a [Bloom filter](http://en.wikipedia.org/wiki/Bloom_filter) on data derived from transactions. 14 | 15 | ## Motivation 16 | 17 | The purpose of the transaction filter protocol is to allow low-capacity peers (smartphones, browser extensions, embedded devices, etc.) to maintain a high-security assurance about the up-to-date state of some particular transactions of the chain, or to verify the execution of transactions. 18 | 19 | These peers do not attempt to fully verify the block chain, instead just checking that [block headers connect](../0004-ckb-block-sync/0004-ckb-block-sync.md#connecting-header) together correctly and trusting that the transactions in the block of highest difficulty are in fact valid. 20 | 21 | Without this protocol, peers have to download entire blocks and accept all broadcast transactions, then throw away the majority of those transactions. This slows down the synchronization process, wastes users' bandwidth and increases memory usage. 22 | 23 | ## Messages 24 | 25 | *Message serialization format is [Molecule](../0008-serialization/0008-serialization.md)* 26 | 27 | ### SetFilter 28 | 29 | Upon receiving a `SetFilter` message, the remote peer will immediately restrict the transactions that it broadcasts to the ones matching the filter, where the [matching algorithm](#filter-matching-algorithm) is specified below. 30 | 31 | ``` 32 | table SetFilter { 33 | filter: [uint8]; 34 | num_hashes: uint8; 35 | hash_seed: uint32; 36 | } 37 | ``` 38 | 39 | `filter`: A bit field of arbitrary byte-aligned size. The maximum size is 36,000 bytes. 40 | 41 | `num_hashes`: The number of hash functions to use in this filter. The maximum value allowed in this field is 20. Together with the maximum `filter` size, this allows storing roughly 10,000 items with a false positive rate of 0.0001%. 42 | 43 | `hash_seed`: We use the [Kirsch-Mitzenmacher optimization](https://www.eecs.harvard.edu/~michaelm/postscripts/tr-02-05.pdf) hash function in this protocol: `hash_seed` is a random offset, `h1` is the low uint32 of the hash value, `h2` is the high uint32 of the hash value, and the nth hash value is `(hash_seed + h1 + n * h2) mod filter_size`. 44 | 45 | ### AddFilter 46 | 47 | Upon receiving an `AddFilter` message, the given bit data will be added to the existing filter via the bitwise OR operator. A filter must have been previously provided using `SetFilter`. This message is useful for adding new elements to the filter while the peer's network connections are open, and it avoids the need to re-calculate and send an entirely new filter to every peer. 48 | 49 | ``` 50 | table AddFilter { 51 | filter: [uint8]; 52 | } 53 | ``` 54 | 55 | `filter`: A bit field of arbitrary byte-aligned size. The data size must be less than or equal to the previously provided filter size.
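The following non-normative sketch shows one way a client could build, and a serving peer could test, such a Bloom filter using the double-hashing rule described under `hash_seed`. It assumes each item has already been hashed to a 64-bit value whose low and high 32-bit halves are `h1` and `h2`; the least-significant-bit-first ordering inside each byte is an assumption of this sketch, not something fixed by the messages above.

```python
def bit_positions(item_hash64, hash_seed, num_hashes, filter_size_bits):
    """Yield bit positions (hash_seed + h1 + n * h2) mod filter_size for n = 0..num_hashes-1."""
    h1 = item_hash64 & 0xFFFFFFFF          # low uint32 of the item hash
    h2 = (item_hash64 >> 32) & 0xFFFFFFFF  # high uint32 of the item hash
    for n in range(num_hashes):
        yield (hash_seed + h1 + n * h2) % filter_size_bits

def insert(filter_bytes, item_hash64, hash_seed, num_hashes):
    """Set the item's bits in a byte-aligned bit field (a bytearray), client side."""
    size_bits = len(filter_bytes) * 8
    for pos in bit_positions(item_hash64, hash_seed, num_hashes, size_bits):
        filter_bytes[pos // 8] |= 1 << (pos % 8)  # assumed LSB-first bit ordering

def contains(filter_bytes, item_hash64, hash_seed, num_hashes):
    """Test an item against the filter: may return false positives, never false negatives."""
    size_bits = len(filter_bytes) * 8
    return all(filter_bytes[pos // 8] & (1 << (pos % 8))
               for pos in bit_positions(item_hash64, hash_seed, num_hashes, size_bits))
```

A client would populate the bit field locally with `insert` and ship it in a `SetFilter` message (or OR further bits in via `AddFilter`); the serving peer then evaluates the same positions in `contains` when deciding which transactions to relay.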
56 | 57 | ### ClearFilter 58 | 59 | The `ClearFilter` message tells the receiving peer to remove a previously-set bloom filter. 60 | 61 | ``` 62 | table ClearFilter { 63 | } 64 | ``` 65 | 66 | The `ClearFilter` message has no arguments at all. 67 | 68 | 69 | ### FilteredBlock 70 | 71 | After a filter has been set, peers don't merely stop announcing non-matching transactions, they can also serve filtered blocks. This message is a replacement for `Block` message of sync protocol and `CompactBlock` message of relay protocol. 72 | 73 | ``` 74 | table FilteredBlock { 75 | header: Header; 76 | transactions: [IndexTransaction]; 77 | hashes: [H256]; 78 | } 79 | 80 | table IndexTransaction { 81 | index: uint32; 82 | transaction: Transaction; 83 | } 84 | ``` 85 | 86 | `header`: Standard block header struct. 87 | 88 | `transactions`: Standard transaction struct plus transaction index. 89 | 90 | `hashes`: Partial [Merkle](../0006-merkle-tree/0006-merkle-tree.md#merkle-proof) branch proof. 91 | 92 | ## Filter matching algorithm 93 | 94 | The filter can be tested against all broadcast transactions, to determine if a transaction matches the filter, the following algorithm is used. Once a match is found the algorithm aborts. 95 | 96 | 1. Test the hash of the transaction itself. 97 | 2. For each CellInput, test the hash of `previous_output`. 98 | 3. For each CellOutput, test the `lock hash` and `type hash` of script. 99 | 4. Otherwise there is no match. 100 | -------------------------------------------------------------------------------- /rfcs/0012-node-discovery/0012-node-discovery.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0012" 3 | Category: Standards Track 4 | Status: Active 5 | Author: Linfeng Qian <@thewawar>, JinYang Jiang <@jjyr> 6 | Created: 2018-11-28 7 | --- 8 | 9 | # CKB Node Discovery Protocol 10 | 11 | CKB Node Discovery Protocol mainly refers to [Satoshi Client Node Discovery][0]. The differences between them are summarized below: 12 | 13 | * The node version number is included in the `GetNodes` message. 14 | * The `Nodes` message is used to periodically broadcast all nodes currently connected. 15 | * We use `multiaddr` as the format of node addresses (It MUST NOT include `/p2p/` segment otherwise it's considered as *misbehavior* and a low score SHOULD be given.) 16 | 17 | Every time client startup, if PeerStore's address list is empty, it SHOULD try to issue DNS requests to initialize address list. If DNS requests don't work it SHOULD fallback to the hard-coded address list. 18 | 19 | ## Discovery Methods 20 | ### DNS Addresses 21 | At the first time startup (bootstrap stage), if the discovery service is needed, the local node SHOULD issues DNS requests to learn about the addresses of other peer nodes. The client includes a list of seed hostnames for DNS services. 22 | 23 | ### Hard-Coded "Seed" Addresses 24 | The client contains some hard-coded "seed" IP addresses that represent CKB nodes. Those addresses are used only if all DNS requests fail. Once the local node has enough addresses (presumably learned from the seed nodes), the client SHOULD close seed node connections to avoid overloading those nodes. 25 | 26 | "Seed" nodes are nodes that generally have a high uptime and have had many connections to many other nodes. 27 | 28 | ### Protocol Message 29 | #### `GetNodes` Message 30 | When all the following conditions are met, the local node will send a `GetNodes` message: 31 | 32 | 1. 
It's an outbound connection (for resisting [fingerprinting attack][3]). 33 | 2. The other node's version must bigger than a preset value. 34 | 3. The number of addresses currently stored is less than `ADDRESSES_THRESHOLD` (default 1000). 35 | 36 | 37 | #### `Nodes` Message 38 | When the client receives a `GetNodes` request, it SHOULD return a `Nodes` message if this kind of reception is the first time and the connection is an inbound connection, the `announce` field is set to `false`. At regular intervals, local node SHOULD broadcast all connected `Node` information in `Nodes` message to all connected nodes, the `announce` field is set to `true`. When local node received a `Nodes` message and it's `announce` field is `true`, local node SHOULD relay those node addresses that are [routable][1]. 39 | 40 | The `announce` field here is to distinguish a `Nodes` as a response of `GetNodes` or a broadcast message, so it's convenient to apply different rules for punishing misbehaviors. The main rules: 41 | 42 | * A node can only send one `Nodes` message (announce=false) as a response of `GetNodes` message. 43 | * Among a node's broadcast messages only the first `Nodes` message (announce=true) can include more than `ANNOUNCE_THRESHOLD` (default 10) node information, in case other peers send too many node information. 44 | 45 | The number of `addresses` field of each `Node` in all `Nodes` messages cannot exceed `MAX_NODE_ADDRESSES` (default 3). 46 | 47 | ## Resist Typical Attacks 48 | ### Fingerprinting Attack 49 | [Related paper][3] 50 | 51 | `GetNodes` can only send to an outbound connection. 52 | 53 | ## Data Structures 54 | We use [Molecule][2] as serialize/deserialize format, the *schema*: 55 | 56 | ``` 57 | array Bool [byte; 1]; 58 | array Uint16 [byte; 2]; 59 | array Uint32 [byte; 4]; 60 | option PortOpt (Uint16); 61 | vector NodeVec ; 62 | vector BytesVec ; 63 | 64 | table DiscoveryMessage { 65 | payload: DiscoveryPayload, 66 | } 67 | 68 | union DiscoveryPayload { 69 | GetNodes, 70 | Nodes, 71 | } 72 | 73 | table GetNodes { 74 | version: Uint32, 75 | count: Uint32, 76 | listen_port: PortOpt, 77 | } 78 | 79 | table Nodes { 80 | announce: Bool, 81 | items: NodeVec, 82 | } 83 | 84 | table Node { 85 | addresses: BytesVec, 86 | } 87 | ``` 88 | 89 | ## Flow Diagram 90 | ### Node Bootstrap 91 | ![](images/bootstrap.png) 92 | ### Send `GetNodes` Message 93 | ![](images/get-nodes.png) 94 | ### Announce Connected Nodes 95 | ![](images/announce-nodes.png) 96 | 97 | [0]: https://en.bitcoin.it/wiki/Satoshi_Client_Node_Discovery 98 | [1]: https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml 99 | [2]: ../0008-serialization/0008-serialization.md 100 | [3]: https://arxiv.org/pdf/1410.6079.pdf 101 | -------------------------------------------------------------------------------- /rfcs/0012-node-discovery/0012-node-discovery.zh.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0012" 3 | Category: Standards Track 4 | Status: Active 5 | Author: Linfeng Qian <@thewawar>, JinYang Jiang <@jjyr> 6 | Created: 2018-11-28 7 | --- 8 | 9 | # CKB 节点发现协议 10 | 11 | CKB 节点发现协议主要参考了[比特币的协议][0]。主要不同点如下: 12 | * 节点版本号包含在 `GetNodes` 消息中 13 | * 通过 `Nodes` 消息来定时广播当前连接的所有节点 14 | * 我们使用 `multiaddr` 作为节点地址的格式 (不允许出现 `/p2p/` 段,如果违反会被认为是*不良*行为并被打低分) 15 | 16 | 每次客户端启动时,如果 PeerStore 中的地址列表为空就会尝试通过 DNS 的方式获取初始地址,如果 DNS 的方式失败了就使用硬编码的种子地址来初始化地址列表。 17 | 18 | ## 节点发现的手段 19 | ### DNS 获取地址 20 | 第一次启动的时候(引导阶段),如果需要节点发现服务,客户端会尝试向内置的 DNS 服务器发送 DNS 
请求来获取种子服务器地址。 21 | 22 | ### 硬编码的「种子」地址 23 | 客户端会硬编码一些「种子」节点地址,这些地址只有在 DNS 获取地址失败的时候被使用。当通过这些种子节点获取了足够多的地址后需要断开这些连接,防止它们过载。这些「种子」地址的时间戳被设置为 0 所以不会加入到 `GetNodes` 请求的返回值中。 24 | 25 | 「种子」节点是那些在线时间较长而且和很多其它节点互连的节点。 26 | 27 | ### 协议消息 28 | 29 | #### `GetNodes` 消息 30 | 当满足所有以下条件时,节点会发送一个 `GetNodes` 请求: 31 | 32 | 1. 这个连接是自己主动发起的 (防御[指纹攻击][3]) 33 | 2. 对方的版本号大于一个预设的值 34 | 3. 当前存储的地址数量小于 `ADDRESSES_THRESHOLD` (默认 1000) 个 35 | 36 | #### `Nodes` 消息 37 | 38 | 当客户端收到一个 `GetNodes` 请求时,如果是第一次收到 `GetNodes` 消息而且这个连接是对方主动发起的就会返回一个 `Nodes` 消息,该 `Nodes` 消息的 `announce` 字段为 `false`。每隔一定时间当前节点会将当前连接的节点信息以及本节点信息以 `Nodes` 消息广播给当前连接的所有节点,`announce` 字段为 `true`。当前收到 `announce` 字段为 `true` 的 `Nodes` 消息时会对地址[可路由][1]的那些节点地址进行转发。 39 | 40 | 这里 `announce` 字段的目的是为了区分 `Nodes` 消息是作为 `GetNodes` 消息的返回值还是广播消息,可以方便应用不同的规则来对节点的恶意行为做相应的处罚。涉及到的规则主要有: 41 | 42 | * 一个节点只能有一个 `Nodes` 消息 (announce=false) 作为 `GetNodes` 消息的返回值。 43 | * 一个节点的广播消息中只能第一个 `Nodes` 消息 (announce=true) 包含的节点信息数量超过 `ANNOUNCE_THRESHOLD` (默认 10) 个,这是为了防止其它节点发送过多的 `Node` 信息。 44 | 45 | 所有 `Nodes` 消息中的每个 `Node` 中的 `addresses` 的数量不能超过 `MAX_NODE_ADDRESSES` (默认 3) 个。 46 | 47 | ## 对主要攻击方式的处理 48 | ### 指纹攻击 (fingerprinting attack) 49 | [相关论文][3] 50 | 51 | `GetNodes` 消息只能通过 outbound 连接发送出去。 52 | 53 | ## 相关数据结构 54 | 我们使用 [Molecule][2] 作为数据序列化格式,以下为相关数据结构的 schema: 55 | 56 | ``` 57 | array Bool [byte; 1]; 58 | array Uint16 [byte; 2]; 59 | array Uint32 [byte; 4]; 60 | option PortOpt (Uint16); 61 | vector NodeVec ; 62 | vector BytesVec ; 63 | 64 | table DiscoveryMessage { 65 | payload: DiscoveryPayload, 66 | } 67 | 68 | union DiscoveryPayload { 69 | GetNodes, 70 | Nodes, 71 | } 72 | 73 | table GetNodes { 74 | version: Uint32, 75 | count: Uint32, 76 | listen_port: PortOpt, 77 | } 78 | 79 | table Nodes { 80 | announce: Bool, 81 | items: NodeVec, 82 | } 83 | 84 | table Node { 85 | addresses: BytesVec, 86 | } 87 | ``` 88 | 89 | ## 流程图 90 | ### 节点 Bootstrap 91 | ![](images/bootstrap.png) 92 | ### 发送 `GetNodes` 消息 93 | ![](images/get-nodes.png) 94 | ### 广播当前连接的节点信息 95 | ![](images/announce-nodes.png) 96 | 97 | [0]: https://en.bitcoin.it/wiki/Satoshi_Client_Node_Discovery 98 | [1]: https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml 99 | [2]: ../0008-serialization/0008-serialization.md 100 | [3]: https://arxiv.org/pdf/1410.6079.pdf 101 | -------------------------------------------------------------------------------- /rfcs/0012-node-discovery/images/announce-nodes.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0012-node-discovery/images/announce-nodes.png -------------------------------------------------------------------------------- /rfcs/0012-node-discovery/images/bootstrap.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0012-node-discovery/images/bootstrap.png -------------------------------------------------------------------------------- /rfcs/0012-node-discovery/images/get-nodes.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0012-node-discovery/images/get-nodes.png -------------------------------------------------------------------------------- /rfcs/0013-get-block-template/0013-get-block-template.md: 
-------------------------------------------------------------------------------- 1 | --- 2 | Number: "0013" 3 | Category: Standards Track 4 | Status: Active 5 | Author: Dingwei Zhang 6 | Created: 2019-01-02 7 | --- 8 | 9 | # get_block_template 10 | 11 | ## Abstract 12 | 13 | This RFC describes the decentralized CKB mining protocol. 14 | 15 | 16 | ## Motivation 17 | 18 | The original `get_work` [[btc][1] [eth][2]] mining protocol simply issues block headers for a miner to solve; the miner is kept in the dark and has no influence over block creation. `get_block_template` moves block creation to the miner: the entire block structure is sent and left to the miner to (optionally) customize and assemble, so miners are able to audit and possibly modify the block before hashing it. This improves the security of the CKB network by making block creation decentralized. 19 | 20 | ## Specification 21 | 22 | ### Block Template Request 23 | 24 | A JSON-RPC method is defined, called `get_block_template`. It accepts exactly three arguments: 25 | 26 | | Key | Required | Type | Description | 27 | | ------------ | -------- | ------ | --------------------------------------------------- | 28 | | cycles_limit | No | Number | maximum number of cycles to include in template | 29 | | bytes_limit | No | Number | maximum number of bytes to use for the entire block | 30 | | max_version | No | Number | highest block version number supported | 31 | 32 | For `cycles_limit`, `bytes_limit` and `max_version`, if omitted, the default limit (consensus level) is used. 33 | Servers SHOULD respect these desired maximums (if those maximums exceed the consensus-level limits, servers SHOULD instead return the consensus-level limits), but are NOT required to; clients SHOULD check that the returned template satisfies their requirements appropriately.
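As a non-normative illustration, a request might be issued as in the sketch below. It assumes JSON-RPC 2.0 over HTTP with positional parameters in the table order and a node RPC endpoint at `http://127.0.0.1:8114`; the endpoint URL, the transport and the exact parameter encoding (including whether omitted arguments are sent as `null` or left out entirely) are assumptions of this sketch rather than requirements of this RFC.

```python
import json
from urllib.request import Request, urlopen

def get_block_template(cycles_limit=None, bytes_limit=None, max_version=None,
                       url="http://127.0.0.1:8114"):
    """Request a block template; arguments left as None fall back to consensus-level defaults."""
    payload = {
        "id": 1,
        "jsonrpc": "2.0",
        "method": "get_block_template",
        "params": [cycles_limit, bytes_limit, max_version],  # assumed positional encoding
    }
    req = Request(url, data=json.dumps(payload).encode("utf-8"),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)["result"]

# Servers are not required to honour the requested maximums, so the caller should
# still validate the returned template against its own limits.
```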
34 | 35 | `get_block_template` MUST return a JSON Object containing the following keys: 36 | 37 | | Key | Required | Type | Description | 38 | | --------------------- | -------- | ---------------- | ---------------------------------------------------------------------------- | 39 | | version | Yes | Number | block version | 40 | | difficulty | Yes | String | difficulty in hex-encoded string | 41 | | current_time | Yes | Number | the current time as seen by the server (recommended for block time) | 42 | | number | Yes | Number | the number of the block we are looking for | 43 | | parent_hash | Yes | String | the hash of the parent block, in hex-encoded string | 44 | | cycles_limit | No | Number | maximum number of cycles allowed in blocks | 45 | | bytes_limit | No | Number | maximum number of bytes allowed in blocks | 46 | | commit_transactions | Should | Array of Objects | objects containing information for CKB transactions (excluding cellbase) | 47 | | proposal_transactions | Should | Array of String | array of hex-encoded transaction proposal_short_id | 48 | | cellbase | Yes | Object | information for cellbase transaction | 49 | | work_id | No | String | if provided, this value must be returned with results (see Block Submission) | 50 | 51 | #### Transaction Object 52 | 53 | The Objects listed in the response's "commit_transactions" key contains these keys: 54 | 55 | | Key | Required | Type | Description | 56 | | -------- | -------- | ---------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | 57 | | hash | Yes | String | the hash of the transaction | 58 | | required | No | Boolean | if provided and true, this transaction must be in the final block | 59 | | cycles | No | Number | total number of cycles, if key is not present, cycles is unknown and clients MUST NOT assume there aren't any | 60 | | depends | No | Array of Numbers | other transactions before this one (by 1-based index in "transactions" list) that must be present in the final block if this one is; if key is not present, dependencies are unknown and clients MUST NOT assume there aren't any | 61 | | data | Yes | String | transaction [Molecule][3] bytes in hex-encoded string | 62 | 63 | ### Block Submission 64 | 65 | A JSON-RPC method is defined, called `submit_block`. to submit potential blocks (or shares). It accepts two arguments: the first is always a String of the hex-encoded block [Molecule][3] bytes to submit; the second is String of work_id. 
66 | 67 | | Key | Required | Type | Description | 68 | | ------- | -------- | ------ | --------------------------------------------------------------------- | 69 | | data | Yes | String | block [Molecule][3] bytes in hex-encoded string | 70 | | work_id | No | String | if the server provided a work_id, it MUST be included with submissions | 71 | 72 | ### References 73 | 74 | * Bitcoin Getwork, https://en.bitcoin.it/wiki/Getwork 75 | * Ethereum Getwork, https://github.com/ethereum/wiki/wiki/JSON-RPC#eth_getwork 76 | * [Molecule Encoding][3] 77 | 78 | [1]: https://en.bitcoin.it/wiki/Getwork 79 | [2]: https://github.com/ethereum/wiki/wiki/JSON-RPC#eth_getwork 80 | [3]: ../0008-serialization/0008-serialization.md 81 | -------------------------------------------------------------------------------- /rfcs/0014-vm-cycle-limits/0014-vm-cycle-limits.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0014" 3 | Category: Standards Track 4 | Status: Active 5 | Author: Xuejie Xiao 6 | Created: 2019-01-04 7 | --- 8 | 9 | # VM Cycle Limits 10 | 11 | ## Introduction 12 | 13 | This RFC describes cycle limits used to regulate VM scripts. 14 | 15 | CKB VM is a flexible VM that is free to implement many control flow constructs, such as loops or branches. As a result, we need to enforce certain rules in CKB VM to prevent malicious scripts, such as a script with infinite loops. 16 | 17 | We introduce a concept called `cycles`: each VM instruction or syscall will consume some amount of cycles. At the consensus level, a scalar `max_block_cycles` field is defined so that the sum of cycles consumed by all scripts in a block cannot exceed this value; otherwise, the block will be rejected. This way we can guarantee that all scripts running in CKB VM will halt or result in an error state. 18 | 19 | ## Consensus Change 20 | 21 | As mentioned above, a new scalar `max_block_cycles` field is added to the chain spec as a consensus rule; it puts a hard limit on how many cycles a block's scripts can consume. No block can consume more cycles than `max_block_cycles`. 22 | 23 | Note that there is no limit on the cycles of an individual transaction or script. As long as the whole block consumes fewer cycles than `max_block_cycles`, a transaction or script in that block is free to consume as many cycles as it wants. 24 | 25 | ## Cycle Measures 26 | 27 | Here we specify the cycles needed by each CKB VM instruction or syscall. Note that right now this RFC defines hard rules for each instruction and syscall; in the future these might be moved into consensus rules so we can change them more easily. 28 | 29 | The cycles consumed by each operation are determined based on the following rules: 30 | 31 | 1. Cycles for RISC-V instructions are determined based on real hardware that implements the RISC-V ISA. 32 | 2. Cycles for syscalls are measured based on real runtime performance metrics obtained while benchmarking the current CKB implementation. 33 | 34 | ### Initial Loading Cycles 35 | 36 | For each byte loaded into CKB VM in the initial ELF loading phase, 0.25 cycles will be charged. This is to encourage dapp developers to ship smaller smart contracts as well as to prevent DDoS attacks using large binaries. Notice that fractions are rounded up here, so 30.25 cycles will become 31 cycles.
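As a small, non-normative illustration of the rounding rule: charging 0.25 cycles per byte and rounding fractions up is equivalent to charging one cycle per started group of four bytes.

```python
def loading_cycles(num_bytes):
    """0.25 cycles per byte with fractions rounded up, i.e. ceil(num_bytes / 4)."""
    return (num_bytes + 3) // 4

assert loading_cycles(120) == 30  # exactly 30 cycles
assert loading_cycles(121) == 31  # 30.25 cycles rounds up to 31, as noted above
```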
37 | 38 | ### Instruction Cycles 39 | 40 | All CKB VM instructions consume 1 cycle except the following ones: 41 | 42 | | Instruction | Cycles | 43 | |-------------|----------------------| 44 | | JALR | 3 | 45 | | JAL | 3 | 46 | | J | 3 | 47 | | JR | 3 | 48 | | BEQ | 3 | 49 | | BNE | 3 | 50 | | BLT | 3 | 51 | | BGE | 3 | 52 | | BLTU | 3 | 53 | | BGEU | 3 | 54 | | BEQZ | 3 | 55 | | BNEZ | 3 | 56 | | LD | 2 | 57 | | SD | 2 | 58 | | LDSP | 2 | 59 | | SDSP | 2 | 60 | | LW | 3 | 61 | | LH | 3 | 62 | | LB | 3 | 63 | | LWU | 3 | 64 | | LHU | 3 | 65 | | LBU | 3 | 66 | | SW | 3 | 67 | | SH | 3 | 68 | | SB | 3 | 69 | | LWSP | 3 | 70 | | SWSP | 3 | 71 | | MUL | 5 | 72 | | MULW | 5 | 73 | | MULH | 5 | 74 | | MULHU | 5 | 75 | | MULHSU | 5 | 76 | | DIV | 32 | 77 | | DIVW | 32 | 78 | | DIVU | 32 | 79 | | DIVUW | 32 | 80 | | REM | 32 | 81 | | REMW | 32 | 82 | | REMU | 32 | 83 | | REMUW | 32 | 84 | | ECALL | 500 (see note below) | 85 | | EBREAK | 500 (see note below) | 86 | 87 | ### Syscall Cycles 88 | 89 | As shown in the above chart, each syscall will have 500 initial cycle consumptions. This is based on real performance metrics gathered benchmarking CKB implementation, certain bookkeeping logics are required for each syscall here. 90 | 91 | In addition, for each byte loaded into CKB VM in the syscalls, 0.25 cycles will be charged. Notice fractions will also be rounded up here, so 30.25 cycles will become 31 cycles. 92 | 93 | ## Guidelines 94 | 95 | In general, the cycle consumption rules above follow certain guidelines: 96 | 97 | * Branches are more expensive than normal instructions. 98 | * Memory accesses are more expensive than normal instructions. Since CKB VM is a 64-bit system, loading 64-bit value directly will cost less cycle than loading smaller values. 99 | * Multiplication and divisions are much more expensive than normal instructions. 100 | * Syscalls include 2 parts: the bookkeeping part at first, and a plain memcpy phase. The first bookkeeping part includes quite complex logic, which should consume much more cycles. The memcpy part is quite cheap on modern hardware, hence less cycles will be charged. 101 | 102 | Looking into the literature, the cycle consumption rules here resemble a lot like the performance metrics one can find in modern computer archtecture. 
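Putting the rules above together, the cycle cost of a single script run can be roughly tallied as the initial loading charge, plus the per-instruction charges from the table, plus the per-syscall charges. The sketch below is a non-normative illustration with made-up inputs; the instruction categories simply mirror the table above and are not an exhaustive model of the VM.

```python
INSTRUCTION_CYCLES = {"default": 1, "branch": 3, "load_store_64": 2,
                      "load_store_32_or_less": 3, "mul": 5, "div_rem": 32}
SYSCALL_BASE = 500  # bookkeeping charge per syscall (the ECALL/EBREAK row above)

def byte_cycles(num_bytes):
    return (num_bytes + 3) // 4  # 0.25 cycles per byte, rounded up

def estimate_script_cycles(binary_bytes, instruction_counts, syscall_byte_loads):
    cycles = byte_cycles(binary_bytes)                  # initial ELF loading
    for kind, count in instruction_counts.items():
        cycles += INSTRUCTION_CYCLES[kind] * count      # executed instructions
    for loaded in syscall_byte_loads:
        cycles += SYSCALL_BASE + byte_cycles(loaded)    # each syscall plus bytes copied in
    return cycles

# e.g. a 20 KiB binary, mostly 1-cycle instructions, two syscalls loading 1 KiB each:
print(estimate_script_cycles(20 * 1024,
                             {"default": 100_000, "branch": 8_000, "mul": 500},
                             [1024, 1024]))
```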
103 | -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/01.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/01.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/02.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/02.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/03.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/03.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/04.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/04.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/05.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/05.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/06.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/06.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/07.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/07.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/08.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/08.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/09.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/09.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/10.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/10.png -------------------------------------------------------------------------------- 
/rfcs/0015-ckb-cryptoeconomics/images/11.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/11.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/12.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/12.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/13.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/13.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/14.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/14.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/15.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/15.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/16.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/16.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/17.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/17.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/18.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/18.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/19.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/19.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/20.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/20.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/21.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/21.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/22.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/22.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/23.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/23.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/24.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/24.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/25.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/25.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/26.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/26.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/27.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/27.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/28.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/28.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/29.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/29.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/30.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/30.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/31.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/31.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/32.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/32.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/33.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/33.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/34.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/34.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/35.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/35.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/36.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/36.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/37.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/37.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/38.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/38.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/39.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/39.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/40.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/40.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/41.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/41.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/42.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/42.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/43.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/43.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/44.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/44.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/45.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/45.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/46.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/46.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/47.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/47.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/48.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/48.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/49.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/49.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/50.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/50.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/51.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/51.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/52.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/52.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/53.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/53.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/54.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/54.png -------------------------------------------------------------------------------- /rfcs/0015-ckb-cryptoeconomics/images/55.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0015-ckb-cryptoeconomics/images/55.png -------------------------------------------------------------------------------- /rfcs/0017-tx-valid-since/commitment-block.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0017-tx-valid-since/commitment-block.jpg -------------------------------------------------------------------------------- /rfcs/0017-tx-valid-since/e-i-l-encoding.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0017-tx-valid-since/e-i-l-encoding.png -------------------------------------------------------------------------------- /rfcs/0017-tx-valid-since/since-encoding.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0017-tx-valid-since/since-encoding.jpg -------------------------------------------------------------------------------- /rfcs/0017-tx-valid-since/since-verification.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0017-tx-valid-since/since-verification.jpg -------------------------------------------------------------------------------- /rfcs/0017-tx-valid-since/target-value.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0017-tx-valid-since/target-value.jpg -------------------------------------------------------------------------------- /rfcs/0017-tx-valid-since/threshold-value.jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0017-tx-valid-since/threshold-value.jpg -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559064108898.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559064108898.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559064685714.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559064685714.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559064934639.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559064934639.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559064995366.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559064995366.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559065017925.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559065017925.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559065197341.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559065197341.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559065416713.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559065416713.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559065517956.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559065517956.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559065670251.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559065670251.png 
-------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559065968791.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559065968791.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559065997745.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559065997745.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559066101731.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559066101731.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559066131427.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559066131427.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559066158164.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559066158164.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559066233715.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559066233715.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559066249700.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559066249700.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559066329440.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559066329440.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559066373372.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559066373372.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559066526598.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559066526598.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559068235154.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559068235154.png -------------------------------------------------------------------------------- /rfcs/0020-ckb-consensus-protocol/images/1559068266162.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0020-ckb-consensus-protocol/images/1559068266162.png -------------------------------------------------------------------------------- /rfcs/0021-ckb-address-format/0021-ckb-address-format.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0021" 3 | Category: Standards Track 4 | Status: Active 5 | Author: Cipher Wang , Axel Wan 6 | Created: 2019-01-20 7 | --- 8 | 9 | # CKB Address Format 10 | 11 | ## Abstract 12 | 13 | *CKB Address Format* is an application level cell **lock script** display recommendation. The lock script consists of three key parameters, including *code_hash*, *hash_type* and *args*. CKB address packages lock script into a single line, verifiable, and human read friendly format. 14 | 15 | ## Data Structure 16 | 17 | ### Payload Format Types 18 | 19 | To generate a CKB address, we firstly encode lock script to bytes array, name *payload*. And secondly, we wrap the payload into final address format. 20 | 21 | There are several methods to convert lock script into payload bytes array. We use 1 byte to identify the payload format. 22 | 23 | | format type | description | 24 | |:-----------:|--------------------------------------------------------------| 25 | | 0x00 | full version identifies the hash_type | 26 | | 0x01 | short version for locks with popular code_hash, deprecated | 27 | | 0x02 | full version with hash_type = "Data", deprecated | 28 | | 0x04 | full version with hash_type = "Type", deprecated | 29 | 30 | ### Full Payload Format 31 | 32 | Full payload format directly encodes all data fields of lock script. 33 | The encode rule of full payload format is Bech32m. 34 | 35 | ```c 36 | payload = 0x00 | code_hash | hash_type | args 37 | ``` 38 | 39 | The `hash_type` field is for CKB VM version selection. 40 | 41 | * When the hash_type is 0, the script group matches code via data hash and will run the code using the CKB VM version 0. 42 | * When the hash_type is 1, the script group matches code via type script hash and will run the code using the CKB VM version 1. 43 | * When the hash_type is 2, the script group matches code via data hash and will run the code using the CKB VM version 1. 44 | 45 | ### Deprecated Short Payload Format 46 | 47 | Short payload format is a compact format which identifies common used [code_hash][genesis-script-list] by 1 byte code_hash_index instead of 32 bytes code_hash. 48 | The encode rule of short payload format is Bech32. 
49 | 50 | ```c 51 | payload = 0x01 | code_hash_index | args 52 | ``` 53 | 54 | To translate payload to lock script, one can convert code_hash_index to code_hash and hash_type with the following *popular code_hash table*. And args as the args. 55 | 56 | | code_hash_index | code_hash | hash_type | args | 57 | |:---------------:|----------------------|:------------:|-------------------------| 58 | | 0x00 | SECP256K1 + blake160 | Type | blake160(PK)* | 59 | | 0x01 | SECP256K1 + multisig | Type | multisig script hash** | 60 | | 0x02 | anyone_can_pay | Type | blake160(PK) | 61 | 62 | \* The blake160 here means the leading 20 bytes truncation of Blake2b hash result. 63 | 64 | \*\* The *multisig script hash* is the 20 bytes blake160 hash of multisig script. The multisig script should be assembled in the following format: 65 | 66 | ``` 67 | S | R | M | N | blake160(Pubkey1) | blake160(Pubkey2) | ... 68 | ``` 69 | 70 | Where S/R/M/N are four single byte unsigned integers, ranging from 0 to 255, and blake160(Pubkey1) it the first 160bit blake2b hash of SECP256K1 compressed public keys. S is format version, currently fixed to 0. M/N means the user must provide M of N signatures to unlock the cell. And R means the provided signatures at least match the first R items of the Pubkey list. 71 | 72 | For example, Alice, Bob, and Cipher collectively control a multisig locked cell. They define the unlock rule like "any two of us can unlock the cell, but Cipher must approve". The corresponding multisig script is: 73 | 74 | ``` 75 | 0 | 1 | 2 | 3 | Pk_Cipher_h | Pk_Alice_h | Pk_Bob_h 76 | ``` 77 | 78 | Notice that the length of args in payload here is always 20 bytes. So, if you want to append [CKByte minimum field or/and UDT minimum field](https://github.com/nervosnetwork/rfcs/blob/master/rfcs/0026-anyone-can-pay/0026-anyone-can-pay.md#script-structure) to anyone_can_pay script, you should use the full payload format. 79 | 80 | ### Deprecated Full Payload Format 81 | 82 | The deprecated full payload format directly encodes all data field of lock script. 83 | The encode rule of deprecated full payload format is Bech32. 84 | 85 | ```c 86 | payload = 0x02/0x04 | code_hash | args 87 | ``` 88 | 89 | The first byte identifies the lock script's hash_type, 0x02 for "Data", 0x04 for "Type". 90 | 91 | Two reasons have caused this address format to be deprecated. First, a [flaw](https://github.com/sipa/bech32/issues/51) of Bech32 enables attackers to generate valid but unexpected addresses by deleting or inserting characters into certain full addresses. Last, the hard fork of [ckb2021](https://github.com/nervosnetwork/rfcs/blob/master/rfcs/0037-ckb2021/0037-ckb2021.md) requires a new field to indicate the CKB VM version for each script group. 92 | 93 | ## Wrap to Address 94 | 95 | We follow [Bitcoin bech32 address format (BIP-173)][bip173] or [Bitcoin bech32m address format (BIP-350)][bip350] rules to wrap payload into address, which uses Bech32/Bech32m encoding and a [BCH checksum][bch]. 96 | 97 | The original version of Bech32/Bech32m allows at most 90 characters long. Similar with [BOLT][BOLT_url], we simply remove the length limit. The error correction function is disabled when the Bech32/Bech32m string is longer than 90. We don't intent to use this function anyway, because there is a risk to get wrong correction result. 98 | 99 | A Bech32/Bech32m string consists of the **human-readable part**, the **separator**, and the **data part**. The last 6 characters of data part is checksum. The data part is base32 encoded. 
Here is the readable translation of base32 encoding table. 100 | 101 | | |0|1|2|3|4|5|6|7| 102 | |-------|-|-|-|-|-|-|-|-| 103 | |**+0** |q|p|z|r|y|9|x|8| 104 | |**+8** |g|f|2|t|v|d|w|0| 105 | |**+16**|s|3|j|n|5|4|k|h| 106 | |**+24**|c|e|6|m|u|a|7|l| 107 | 108 | The human-readable part is "**ckb**" for CKB mainnet, and "**ckt**" for the testnet. The separator is always "1". 109 | 110 | ![](images/ckb-address.png) 111 | 112 | ## Examples and Demo Code 113 | 114 | ```yml 115 | == full address test == 116 | code_hash to encode: 9bd7e06f3ecf4be0f2fcd2188b23f1b9fcc88e5d4b65a8637b17723bbda3cce8 117 | hash_type to encode: 01 118 | with args to encode: b39bbc0b3673c7d36450bc14cfcdad2d559c6c64 119 | full address generated: ckb1qzda0cr08m85hc8jlnfp3zer7xulejywt49kt2rr0vthywaa50xwsqdnnw7qkdnnclfkg59uzn8umtfd2kwxceqxwquc4 120 | 121 | == deprecated short address (code_hash_index = 0x00) test == 122 | args to encode: b39bbc0b3673c7d36450bc14cfcdad2d559c6c64 123 | address generated: ckb1qyqt8xaupvm8837nv3gtc9x0ekkj64vud3jqfwyw5v 124 | 125 | == deprecated short address (code_hash_index = 0x01) test == 126 | multi sign script: 00 | 01 | 02 | 03 | bd07d9f32bce34d27152a6a0391d324f79aab854 | 094ee28566dff02a012a66505822a2fd67d668fb | 4643c241e59e81b7876527ebff23dfb24cf16482 127 | args to encode: 4fb2be2e5d0c1a3b8694f832350a33c1685d477a 128 | address generated: ckb1qyq5lv479ewscx3ms620sv34pgeuz6zagaaqklhtgg 129 | 130 | == deprecated full address test == 131 | code_hash to encode: 9bd7e06f3ecf4be0f2fcd2188b23f1b9fcc88e5d4b65a8637b17723bbda3cce8 132 | with args to encode: b39bbc0b3673c7d36450bc14cfcdad2d559c6c64 133 | full address generated: ckb1qjda0cr08m85hc8jlnfp3zer7xulejywt49kt2rr0vthywaa50xw3vumhs9nvu786dj9p0q5elx66t24n3kxgj53qks 134 | ``` 135 | 136 | Demo code: https://github.com/rev-chaos/ckb-address-demo 137 | 138 | [bip173]: https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki 139 | 140 | [bip350]: https://github.com/sipa/bips/blob/bip-bech32m/bip-0350.mediawiki 141 | 142 | [bch]: https://en.wikipedia.org/wiki/BCH_code 143 | 144 | [BOLT_url]: https://github.com/lightningnetwork/lightning-rfc/blob/master/11-payment-encoding.md 145 | 146 | [multisig_code]: https://github.com/nervosnetwork/ckb-system-scripts/blob/master/c/secp256k1_blake160_multisig_all.c 147 | 148 | [genesis-script-list]: https://github.com/nervosnetwork/rfcs/blob/master/rfcs/0024-ckb-genesis-script-list/0024-ckb-genesis-script-list.md 149 | -------------------------------------------------------------------------------- /rfcs/0021-ckb-address-format/images/ckb-address.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0021-ckb-address-format/images/ckb-address.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/cell-data.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/cell-data.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/cell-dep-structure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/cell-dep-structure.png 
-------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/cell-deps.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/cell-deps.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/code-locating-via-type.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/code-locating-via-type.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/code-locating.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/code-locating.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/dep-group-expansion.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/dep-group-expansion.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/group-input.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/group-input.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/header-deps.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/header-deps.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/lock-script-cont.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/lock-script-cont.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/lock-script-grouping.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/lock-script-grouping.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/lock-script.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/lock-script.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/older-block-and-transaction.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/older-block-and-transaction.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/out-point.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/out-point.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/outputs-data.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/outputs-data.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/script-p2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/script-p2.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/script.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/script.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/transaction-overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/transaction-overview.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/transaction-p1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/transaction-p1.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/transaction-p2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/transaction-p2.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/type-id-group.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/type-id-group.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/type-id-recursive-dependency.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/type-id-recursive-dependency.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/type-id.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/type-id.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/type-script-grouping.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/type-script-grouping.png -------------------------------------------------------------------------------- /rfcs/0022-transaction-structure/value-storage.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0022-transaction-structure/value-storage.png -------------------------------------------------------------------------------- /rfcs/0025-simple-udt/0025-simple-udt.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0025" 3 | Category: Standards Track 4 | Status: Proposal 5 | Author: Xuejie Xiao 6 | Created: 2020-09-03 7 | --- 8 | 9 | # Simple UDT 10 | 11 | This RFC defines the Simple User Defined Tokens(Simple UDT or SUDT) specification. Simple UDT provides a way for dapp developers to issue custom tokens on Nervos CKB. The simple part in Simple UDT means we are defining a minimal standard that contains whats absolutely needed, more sophisticated actions are left to CKBs flexibility to achieve. 12 | 13 | ## Data Structure 14 | 15 | ### SUDT Cell 16 | 17 | A SUDT cell in Simple UDT specification looks like following: 18 | 19 | ``` 20 | data: 21 | amount: uint128 22 | type: 23 | code_hash: simple_udt type script 24 | args: owner lock script hash (...) 25 | lock: 26 | 27 | ``` 28 | 29 | The following rules should be met in a SUDT Cell: 30 | 31 | * **Simple UDT Rule 1**: a SUDT cell must store SUDT amount in the first 16 bytes of cell data segment, the amount should be stored as little endian, 128-bit unsigned integer format. In the case of composable scripts, the SUDT amount must still be located at the initial 16 bytes in the data segment which corresponds to the composed SUDT script 32 | * **Simple UDT Rule 2**: the first 32 bytes of the SUDT cells type script args must store the lock script hash of *owner lock*. Owner lock will be explained below 33 | * **Simple UDT Rule 3**: each SUDT must have unique type script[^1], in other words, 2 SUDT cells using the same type script are considered to be the same SUDT. 34 | 35 | [^1]: As per definition a type script is comprised of three fields: `code_hash`, `hash_type` and `args`, see [script definition](https://docs.nervos.org/docs/reference/script/). 36 | 37 | User shall use any lock script as they wish in the SUDT Cell. 38 | 39 | ### Owner lock script 40 | 41 | Owner lock shall be used for governance purposes, such as issuance, mint, burn as well as other operations. The SUDT specification does not enforce specific rules on the behavior of owner lock script. It is expected that owner lock script should at least provide enough security to ensure only token owners can perform governance operations. 
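To make **Simple UDT Rule 1** concrete, the sketch below decodes the token amount from a cell's data segment. It is a minimal, self-contained illustration rather than an excerpt from the deployed `simple_udt.c`; the `sudt_amount_t` type and function name are hypothetical.

```c
#include <stdint.h>
#include <stddef.h>

/* A 128-bit amount represented as two 64-bit halves, because standard C has
 * no portable uint128_t. `lo` holds the least significant 8 bytes. */
typedef struct {
  uint64_t lo;
  uint64_t hi;
} sudt_amount_t;

/* Read the SUDT amount from the first 16 bytes of the cell data segment,
 * interpreted as a little-endian 128-bit unsigned integer (Rule 1).
 * Returns 0 on success, -1 if the data is shorter than 16 bytes. */
int sudt_read_amount(const uint8_t *data, size_t len, sudt_amount_t *out) {
  if (len < 16) {
    return -1;
  }
  out->lo = 0;
  out->hi = 0;
  for (int i = 7; i >= 0; i--) {
    out->lo = (out->lo << 8) | data[i];
    out->hi = (out->hi << 8) | data[i + 8];
  }
  return 0;
}
```

In an on-chain script the `data` buffer would be obtained through the cell-data loading syscalls described in the VM syscalls RFCs; it is passed in directly here to keep the sketch self-contained.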
42 | 43 | ## Operations 44 | 45 | This section describes operations that must be supported in Simple UDT implementation 46 | 47 | ### Transfer 48 | 49 | Transfer operation transfers SUDTs from one or more SUDT holders to other SUDT holders. 50 | 51 | ``` 52 | // Transfer 53 | Inputs: 54 | SUDT_Cell 55 | Data: 56 | amount: uint128 57 | Type: 58 | code_hash: simple_udt type script 59 | args: owner lock script hash (...) 60 | Lock: 61 | 62 | <...> 63 | Outputs: 64 | SUDT_Cell 65 | Data: 66 | amount: uint128 67 | Type: 68 | code_hash: simple_udt type script 69 | args: owner lock script hash (...) 70 | Lock: 71 | 72 | <...> 73 | ``` 74 | 75 | Transfer operation must satisfy the following rule: 76 | 77 | * **Simple UDT Rule 4**: in a transfer transaction, the sum of all SUDT tokens from all input cells must be larger or equal to the sum of all SUDT tokens from all output cells. Allowing more input SUDTs than output SUDTs enables burning tokens. 78 | 79 | ## Governance Operations 80 | 81 | This section describes governance operations that should be supported by Simple UDT Implementation. All goverance operations must satisfy the following rule: 82 | 83 | * **Simple UDT Rule 5**: in a governance operation, at least one input cell in the transaction should use owner lock specified by the SUDT as its cell lock. 84 | 85 | ### Issue/Mint SUDT 86 | 87 | This operation enables issuing new SUDTs. 88 | 89 | ``` 90 | // Issue new SUDT 91 | Inputs: 92 | <... one of the input cell must have owner lock script as lock> 93 | Outputs: 94 | SUDT_Cell: 95 | Data: 96 | amount: uint128 97 | Type: 98 | code_hash: simple_udt type script 99 | args: owner lock script hash (...) 100 | Lock: 101 | 102 | ``` 103 | 104 | ## Notes 105 | 106 | An [implementation](https://github.com/nervosnetwork/ckb-production-scripts/blob/e570c11aff3eca12a47237c21598429088c610d5/c/simple_udt.c) of the Simple UDT spec above has been deployed to Lina CKB mainnet and Aggron testnet: 107 | 108 | 109 | - Lina 110 | 111 | | parameter | value | 112 | | ----------- | -------------------------------------------------------------------- | 113 | | `code_hash` | `0x5e7a36a77e68eecc013dfa2fe6a23f3b6c344b04005808694ae6dd45eea4cfd5` | 114 | | `hash_type` | `type` | 115 | | `tx_hash` | `0xc7813f6a415144643970c2e88e0bb6ca6a8edc5dd7c1022746f628284a9936d5` | 116 | | `index` | `0x0` | 117 | | `dep_type` | `code` | 118 | 119 | - Aggron 120 | 121 | | parameter | value | 122 | | ----------- | -------------------------------------------------------------------- | 123 | | `code_hash` | `0xc5e5dcf215925f7ef4dfaf5f4b4f105bc321c02776d6e7d52a1db3fcd9d011a4` | 124 | | `hash_type` | `type` | 125 | | `tx_hash` | `0xe12877ebd2c3c364dc46c5c992bcfaf4fee33fa13eebdf82c591fc9825aab769` | 126 | | `index` | `0x0` | 127 | | `dep_type` | `code` | 128 | 129 | 130 | Reproducible build is supported to verify the deploy script. To bulid the deployed Simple UDT script above, one can use the following steps: 131 | 132 | ```bash 133 | $ git clone https://github.com/nervosnetwork/ckb-production-scripts 134 | $ cd ckb-production-scripts 135 | $ git checkout e570c11aff3eca12a47237c21598429088c610d5 136 | $ git submodule update --init --recursive 137 | $ make all-via-docker 138 | ``` 139 | 140 | Now you can compare the simple udt script generated at `build/simple_udt` with the one deployed to CKB, they should be identical. 
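Going back to the transfer operation above, **Simple UDT Rule 4** reduces to a 128-bit sum-and-compare. The helpers below reuse the hypothetical `sudt_amount_t` type from the earlier sketch and are illustrative only, not the logic of the deployed script.

```c
/* Add two 128-bit amounts; returns -1 on 128-bit overflow (which a real
 * script should treat as a verification failure), 0 otherwise. */
int sudt_add(const sudt_amount_t *a, const sudt_amount_t *b, sudt_amount_t *sum) {
  sum->lo = a->lo + b->lo;
  uint64_t carry = (sum->lo < a->lo) ? 1 : 0;
  sum->hi = a->hi + b->hi + carry;
  if (sum->hi < a->hi || (carry && sum->hi == a->hi)) {
    return -1;
  }
  return 0;
}

/* Simple UDT Rule 4: the total input amount must be greater than or equal
 * to the total output amount. Returns 0 when the rule holds, -1 otherwise. */
int sudt_check_transfer(const sudt_amount_t *input_sum, const sudt_amount_t *output_sum) {
  if (input_sum->hi != output_sum->hi) {
    return (input_sum->hi > output_sum->hi) ? 0 : -1;
  }
  return (input_sum->lo >= output_sum->lo) ? 0 : -1;
}
```

A full implementation would iterate over every input and output cell in the script group, accumulate both totals with `sudt_add`, and reject the transaction unless `sudt_check_transfer` succeeds.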
141 | 142 | A draft of this specification has already been released, reviewed, and discussed in the community at [here](https://talk.nervos.org/t/rfc-simple-udt-draft-spec/4333) for quite some time. 143 | -------------------------------------------------------------------------------- /rfcs/0027-block-structure/ckb_block_structure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0027-block-structure/ckb_block_structure.png -------------------------------------------------------------------------------- /rfcs/0027-block-structure/compact_target.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0027-block-structure/compact_target.png -------------------------------------------------------------------------------- /rfcs/0027-block-structure/epoch.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0027-block-structure/epoch.png -------------------------------------------------------------------------------- /rfcs/0027-block-structure/number.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0027-block-structure/number.png -------------------------------------------------------------------------------- /rfcs/0027-block-structure/proposals.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0027-block-structure/proposals.png -------------------------------------------------------------------------------- /rfcs/0027-block-structure/timestamp.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0027-block-structure/timestamp.png -------------------------------------------------------------------------------- /rfcs/0027-block-structure/transactions_root.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0027-block-structure/transactions_root.png -------------------------------------------------------------------------------- /rfcs/0028-change-since-relative-timestamp/0028-change-since-relative-timestamp.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0028" 3 | Category: Standards Track 4 | Status: Draft 5 | Author: Ian Yang <@doitian> 6 | Created: 2021-02-03 7 | --- 8 | 9 | # Use input cell committing block timestamp as the start time for the relative timestamp in `since` 10 | 11 | ## Abstract 12 | 13 | This document proposes a transaction verification consensus change. When the `since` field of the transaction input uses a relative timestamp, the commitment block header timestamp is used as the base value instead of the median timestamp of previous 37 blocks. 14 | 15 | This is a modification to the RFC17 [Transaction valid since](../0017-tx-valid-since/0017-tx-valid-since.md). 
16 | 17 | ## Motivation 18 | 19 | The current consensus rule uses the median of the timestamps in the 37 blocks preceding the referenced cell commitment block. Getting the median timestamp is resource consuming because it requires either getting 37 block headers or caching the median timestamp for each block. The intention of using median time was to prevent miners from manipulating block timestamp to include more transactions. But it is safe to use the committing block timestamp as the start time because of two reasons: 20 | 21 | 1. The timestamp in the block header has already been verified by the network that it must be larger than the median of the previous 37 blocks and less than or equal to the current time plus 15 seconds. (See [RFC27](../0027-block-structure/0027-block-structure.md#timestamp-uint64)) 22 | 2. The transaction consuming a cell with the `since` requirement must wait until the cell is mature. During this waiting time, the transaction that created the cell has accumulated enough confirmations that it is difficult for the miner to manipulate it. 23 | 24 | ## Specification 25 | 26 | When an input `since` field is present, and 27 | 28 | * The `metric_flag` is block timestamp (10). 29 | * The `relative_flag` is relative (1). 30 | 31 | The input since precondition is fulfilled when 32 | 33 | ``` 34 | MedianTimestamp ≥ StartTime + SinceValue 35 | ``` 36 | 37 | where 38 | 39 | * `StartTime` is the timestamp field in the commitment block header. 40 | * `SinceValue` is the `value` part of the `since` field. 41 | * `MedianTimestamp` is the median timestamp of the previous 37 blocks preceding the block if the transaction is in the block, or the latest 37 blocks if the transaction is in the pool. 42 | 43 | The only change is `StartTime`, which was the median of the previous 37 blocks preceding the one that has committed the consumed cell. Because block timestamp must be larger than the median of its previous 37 blocks, the new consensus rule is more strict than the old rule. A transaction that is mature under the old rule may be immature under the new rule, but a transaction that is mature under the new rule must be mature under the old rule. 44 | 45 | ## Test Vectors 46 | 47 | Following is an example that a transaction is mature using the new rule but is immature using the old rule. 48 | 49 | Assuming that: 50 | 51 | * A transaction consumes a cell in block S and is about to be committed into block T with a since requirement that: 52 | * The `metric_flag` is block timestamp (10). 53 | * The `relative_flag` is relative (1). 54 | * The `value` is 600,000 (10 minutes). 55 | * The median of the previous 37 blocks preceding block S is 10,000. 56 | * The timestamp of block S is 20,000. 57 | * The median of the previous 37 blocks preceding block T is 615,000 58 | 59 | In the old consensus, `StartTime` + `SinceValue` = 10,000 + 600,000 = 610,000, which is less than the `MedianTimestamp` 615,000, thus the transaction is mature. 60 | 61 | But in the new rule, `StartTime` + `SinceValue` = 20,000 + 600,000 = 620,000 ≥ 615,000, so the transaction is still immature. 62 | 63 | ## Deployment 64 | 65 | The deployment can be performed in two stages. 66 | 67 | The first stage will activate the new consensus rule starting from a specific epoch. The mainnet and testnet will use different starting epochs and all other chains will use the new rule from epoch 0. 
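As an aside to the Specification section above, the maturity condition can be sketched as a small helper. The bit masks follow the `since` layout from RFC17 (the most significant bit is the relative flag, the next two bits are the metric flag, and the low 56 bits are the value); the function itself is illustrative and not part of any consensus code.

```c
#include <stdbool.h>
#include <stdint.h>

#define SINCE_RELATIVE_FLAG     (1ULL << 63)
#define SINCE_METRIC_MASK       (3ULL << 61)
#define SINCE_METRIC_TIMESTAMP  (2ULL << 61) /* metric_flag = 10 (block timestamp) */
#define SINCE_VALUE_MASK        0x00FFFFFFFFFFFFFFULL

/* Returns true when a relative-timestamp `since` requirement is satisfied
 * under the rule proposed here: `commit_timestamp` is the timestamp of the
 * block committing the consumed cell (the new StartTime), and
 * `median_timestamp` is the median of the 37 blocks preceding the block
 * that includes this transaction. */
bool relative_timestamp_since_reached(uint64_t since,
                                      uint64_t commit_timestamp,
                                      uint64_t median_timestamp) {
  if (!(since & SINCE_RELATIVE_FLAG)) {
    return false; /* not a relative requirement */
  }
  if ((since & SINCE_METRIC_MASK) != SINCE_METRIC_TIMESTAMP) {
    return false; /* not a block-timestamp metric */
  }
  uint64_t value = since & SINCE_VALUE_MASK;
  return median_timestamp >= commit_timestamp + value;
}
```

Plugging in the test vector above — commitment timestamp 20,000, since value 600,000, median 615,000 — the function returns false, matching the "still immature" outcome, while the old `StartTime` of 10,000 would make it return true.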
68 | 69 | After the fork is activated, and if the transactions in the old epochs all satisfy the new rule, the old consensus rule will be removed and the new rule will be applied from the genesis block. 70 | 71 | ## Backward compatibility 72 | 73 | Because the new consensus rule is more strict than the old one, this proposal can be deployed via a soft fork. 74 | -------------------------------------------------------------------------------- /rfcs/0029-allow-script-multiple-matches-on-identical-code/0029-allow-script-multiple-matches-on-identical-code.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0029" 3 | Category: Standards Track 4 | Status: Proposal 5 | Author: Ian Yang <@doitian> 6 | Created: 2021-02-03 7 | --- 8 | 9 | # Allow Multiple Cell Dep Matches When There Is No Ambiguity 10 | 11 | ## Abstract 12 | 13 | This document proposes a transaction verification consensus change to allow multiple cell dep matches on type script hash when all the matches are resolved to the same script code. 14 | 15 | ## Motivation 16 | 17 | CKB locates the code for lock and type script to execute via data hash or type script hash. 18 | 19 | CKB allows multiple matches on data hash because it is safe. Data hash is the hash on the code, thus multiple matches must have the same code. This does not hold for type hash. Two cells with the same type script hash may have different contents. 20 | 21 | Currently, CKB does not allow multiple matches on type script hash. But in many cases, multiple matches on type script hash do not introduce ambiguity if all the matches have the same data hash as well. Because in the most scenarios, the cause is that the transaction uses two dep groups which contain duplicated cells, the multiple matches on type script hash really point to the same cell. 22 | 23 | ``` 24 | # An example that multiple matches on the type script hash really are the same cell. 25 | cell_deps: 26 | - out_point: ... 27 | # Expands to 28 | # - out_point: Cell A 29 | dep_group: DepGroup 30 | 31 | - out_point: ... 32 | # Expands to 33 | # - out_point: Cell A 34 | dep_group: DepGroup 35 | 36 | inputs: 37 | - out_point: ... 38 | lock: ... 39 | type: 40 | code_hash: hash(Cell A.type) 41 | hash_type: Type 42 | ``` 43 | 44 | Based on the observation above, this RFC proposes to allow the multiple matches on the type script hash if they all have the same data. 45 | 46 | ## Specification 47 | 48 | When the transaction verifier locates script code in dep cell via data hash, multiple matches are allowed. This is the same as before. 49 | 50 | When the verifier locates code via type hash, multiple matches are allowed if all the matched cells have the same data, otherwise, the transaction is invalid and the verification fails. This is the modification introduced by this RFC. 51 | 52 | ## Test Vectors 53 | 54 | Multiple matches of data hash. This works in both the old rule and the new one. 55 | 56 | ``` 57 | # hash(Cell B.data) equals to hash(Cell A.data) 58 | cell_deps: 59 | - out_point: ... 60 | # Expands to 61 | # - out_point: Cell A 62 | dep_group: DepGroup 63 | 64 | - out_point: ... 65 | # Expands to 66 | # - out_point: Cell B 67 | dep_group: DepGroup 68 | 69 | inputs: 70 | - out_point: ... 71 | lock: 72 | code_hash: hash(Cell A.data) 73 | hash_type: Data 74 | ``` 75 | 76 | Multiple matches of type hash which all resolve to the same code. This transaction is invalid using the old rule but valid using the new rule. 
77 | 78 | ``` 79 | # hash(Cell B.data) equals to hash(Cell A.data) 80 | # and hash(Cell B.type) equals to hash(Cell A.type) 81 | cell_deps: 82 | - out_point: ... 83 | # Expands to 84 | # - out_point: Cell A 85 | dep_group: DepGroup 86 | 87 | - out_point: ... 88 | # Expands to 89 | # - out_point: Cell B 90 | dep_group: DepGroup 91 | 92 | inputs: 93 | - out_point: ... 94 | lock: ... 95 | type: 96 | code_hash: hash(Cell A.type) 97 | hash_type: Type 98 | ``` 99 | 100 | ## Deployment 101 | 102 | The deployment can be performed in two stages. 103 | 104 | The first stage will activate the new consensus rule starting from a specific epoch. The mainnet and testnet will use different starting epochs and all the development chains initialized via the default settings in this stage will use the new rule from epoch 0. 105 | 106 | After the fork is activated, the old rule will be replaced by the new rule starting from the genesis block by new CKB node versions. 107 | 108 | ## Backward compatibility 109 | 110 | The consensus rule proposed in this document is looser, so it must be activated via a hard fork. The blocks accepted by new version clients may be rejected by the old versions. 111 | -------------------------------------------------------------------------------- /rfcs/0030-ensure-index-less-than-length-in-since/0030-ensure-index-less-than-length-in-since.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0030" 3 | Category: Standards Track 4 | Status: Proposal 5 | Author: Ian Yang <@doitian> 6 | Created: 2021-02-04 7 | --- 8 | 9 | # Ensure That Index Is Less Than Length In the Input Since Field Using Epoch With Fraction 10 | 11 | ## Abstract 12 | 13 | This document proposes adding a new consensus rule to verify the `since` field in the transaction. 14 | 15 | As described in the RFC17, [Transaction valid since](../0017-tx-valid-since/0017-tx-valid-since.md), when a transaction input uses the epoch with fraction in the `since` field, the `value` is an encoded rational number `E I/L`, where 16 | 17 | - `E` is the epoch number. 18 | - `I` is the block index in the epoch. 19 | - `L` is the epoch length. 20 | 21 | This RFC requires that when any transaction uses the epoch with fraction as the unit, the encoded number `E I/L` is valid only if 22 | 23 | - `I` is less than `L`, or 24 | - `I` and `L` are both zero. 25 | 26 | If any `since` field is invalid, the transaction is rejected. 27 | 28 | ## Motivation 29 | 30 | The `since` field prevents the transaction from being mined before an absolute or relative time. 31 | 32 | When the `since` field uses epoch with fraction number as the unit, the `value` is an encoded rational number `E I/L`. If it is a relative time, the rational number is used as it is. But when it is the absolute time, the special rule, **Absolute Epoch With Fraction Value Normalization** as mentioned in RFC17, requires normalizing the number to `E+1 0/1` when `I` equals to or is larger than `L`. 33 | This document suggests adding a new rule to verify that when `since` uses epoch as the unit, it must ensure that the index `I` is less than the length `L`. 34 | 35 | ## Specification 36 | 37 | This RFC adds a new verification requirement on the transaction `since` field. 38 | 39 | When an input `since` field is present, and the `metric_flag` is epoch (01), the `value` part is the encoded number `E I/L`. 
No matter whether the relative flag is `relative` or `absolute`, the number is valid if and only if 40 | 41 | - `I` is less than `L`, or 42 | - `I` and `L` are both zero. 43 | 44 | There are no changes to the rules in RFC17, except that **Absolute Epoch With Fraction Value Normalization** is no longer needed. 45 | 46 | ## Test Vectors 47 | 48 | When `since` uses the absolute epoch `99 360/180`, and the current epoch is `100 0/180`, the transaction is mature using the old consensus rule but is invalid using the new rule. 49 | 50 | ## Deployment 51 | 52 | The deployment can advance in two stages. 53 | 54 | The first stage will activate the new consensus rule, starting from a specific epoch. The mainnet and testnet will use different starting epochs and all other chains will use the new rule from epoch 0. 55 | 56 | The second stage is optional. After the new rule is active, and the blocks in the chain before activation can also pass the new consensus rule, the old rule is redundant and can be safely removed. 57 | 58 | ## Backward compatibility 59 | 60 | The new rule is stricter than the old one thus it can be deployed via a soft fork. When most mining nodes have upgraded to the new version, the old version full nodes can keep up to date. Blocks generated by old version mining nodes may be rejected by new version full nodes. 61 | -------------------------------------------------------------------------------- /rfcs/0031-variable-length-header-field/1-appending-the-field-at-the-end.md: -------------------------------------------------------------------------------- 1 | ### Appending the Field At the End 2 | 3 | The block header size is at least 208 bytes. The first 208 bytes are encoded the same as the current header. The remaining bytes are the variable length field. 4 | 5 | ``` 6 | +-----------------------+-----------+ 7 | | | | 8 | | 208-bytes header | New Field | 9 | | | | 10 | +-----------------------+-----------+ 11 | ``` 12 | 13 | 14 | Pros 15 | 16 | - Apps that are not interested in the new field can just read the first 208 bytes. 17 | 18 | Cons 19 | 20 | - It's not a valid Molecule buffer. 21 | - It may break the old contract which assumes that the header has only 208 bytes. 22 | - Nodes that do not need the new field still has to download it. 23 | - Header is a variable length structure now. -------------------------------------------------------------------------------- /rfcs/0031-variable-length-header-field/2-using-molecule-table-in-new-block-headers.md: -------------------------------------------------------------------------------- 1 | ### Using Molecule Table in New Block Headers 2 | 3 | This solution uses a different molecule schema for the new block headers. If the block header size is 208 bytes, it's encoded using the old schema, otherwise it uses the new one. The new schema converts `RawHeader` into a molecule table and adds a variable length bytes field at the end of `RawHeader`. 4 | 5 | ``` 6 | old one: 7 | 208 bytes 8 | +-----+-----------------+ 9 | | | | 10 | |Nonce| RawHeader Stuct | 11 | | | | 12 | +-----+-----------------+ 13 | 14 | new one: 15 | 16 | +-----+-------------------------------+ 17 | | | | 18 | |Nonce| RawHeader Table | 19 | | | | 20 | +-----+-------------------------------+ 21 | ``` 22 | 23 | Pros 24 | 25 | - It is a valid Molecule buffer. 26 | 27 | Cons 28 | 29 | - It may break the old contract which assumes that the header has only 208 bytes and is just the concatenation of all members. 
30 | - Nodes that do not need the new field still has to download it. 31 | - The molecule table header overhead. 32 | - Header is a variable length structure now. -------------------------------------------------------------------------------- /rfcs/0031-variable-length-header-field/3-appending-a-hash-at-the-end.md: -------------------------------------------------------------------------------- 1 | ### Appending a Hash At the End 2 | 3 | Instead of adding the new field directly at the end of the header, this solution adds a 32 bytes hash at the end of the header which is the hash of the new variable length field. The header is still a fixed length struct but is 32 bytes larger. If client does not need the extra field, it only has the 32 bytes overhead. Otherwise it has to download both the header and the extra field and verify that the hash matches. 4 | 5 | ``` 6 | +-----------------------+--+ 7 | | | | 8 | | 208-bytes header | +----+ 9 | | | | | 10 | +-----------------------+--+ | 11 | | Hash of 12 | | 13 | v 14 | +-----+-----+ 15 | | | 16 | | New Field | 17 | | | 18 | +-----------+ 19 | 20 | ``` 21 | 22 | Pros 23 | 24 | - It is a valid Molecule buffer. 25 | - The header still has the fixed length. 26 | - Nodes that do not want the new field only need to download an extra hash to verify the PoW. 27 | 28 | Cons 29 | 30 | - It may break the old contract which assumes that the header has only 208 bytes. 31 | - Extra P2P messages must be added to download the new extension field. 32 | -------------------------------------------------------------------------------- /rfcs/0032-ckb-vm-version-selection/0032-ckb-vm-version-selection.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0032" 3 | Category: Standards Track 4 | Status: Proposal 5 | Author: Ian Yang <@doitian> 6 | Created: 2021-04-26 7 | --- 8 | 9 | # CKB VM Version Selection 10 | 11 | ## Abstract 12 | 13 | This RFC proposes a mechanism to decide on the CKB VM version to execute the transaction scripts. 14 | 15 | ## Motivation 16 | 17 | It's essential to keep improving CKB VM because it is the computation bottleneck of the whole network. The upgrade packages can improve the performance, bring bug fixings and add new RISC-V extensions. However the upgrade should not break the old code, users must have the opt-in option to specify the VM version. 18 | 19 | This RFC proposes a general mechanism that determines how the CKB node chooses the CKB VM version for a transaction script group. 20 | 21 | ## Specification 22 | 23 | When CKB launches the testnet Lina, it only has one VM version, the version 0. The first hard fork will bring VM version 1 which coexists with version 0. Users have the opt-in option to specify which VM version to run the script of a cell by setting the `hash_type` field. 24 | 25 | In CKB, each VM version also has its bundled instruction set, syscalls and cost model. The [rfc3], [rfc5], [rfc9] and [rfc14] have defined what is VM version 0. VM version 1 is version 0 plus the revisions mentioned in [rfc33] and [rfc34]. 26 | 27 | [rfc3]: ../0003-ckb-vm/0003-ckb-vm.md 28 | [rfc5]: ../0005-priviledged-mode/0005-priviledged-mode.md 29 | [rfc9]: ../0009-vm-syscalls/0009-vm-syscalls.md 30 | [rfc14]: ../0014-vm-cycle-limits/0014-vm-cycle-limits.md 31 | [rfc33]: ../0033-ckb-vm-version-1/0033-ckb-vm-version-1.md 32 | [rfc34]: ../0034-vm-syscalls-2/0034-vm-syscalls-2.md 33 | 34 | The first hard fork takes effect from an epoch decided by the community consensus. 
For all the transactions in the blocks before the activation epoch, they must run the CKB VM version 0 to verify all the script groups. In these transactions, the `hash_type` in cell lock and type script must be 0 or 1 in the serialized molecule data. 35 | 36 | After the fork is activated, CKB nodes must choose the CKB VM version for each script group. The allowed values for the `hash_type` field in the lock and type script are 0, 1, and 2. Cells are sorted into different groups if they have different `hash_type`. According to the value of `hash_type`: 37 | 38 | * When the `hash_type` is 0, the script group matches code via data hash and will run the code using the CKB VM version 0. 39 | * When the `hash_type` is 1, the script group matches code via type script hash and will run the code using the CKB VM version 1. 40 | * When the `hash_type` is 2, the script group matches code via data hash and will run the code using the CKB VM version 1. 41 | 42 | | `hash_type` | matches by | VM version | 43 | | ----------- | ---------------- | ---------- | 44 | | 0 | data hash | 0 | 45 | | 1 | type script hash | 1 | 46 | | 2 | data hash | 1 | 47 | 48 | The transaction is invalid if any `hash_type` is not in the allowed values 0, 1, and 2. 49 | 50 | See more information about code locating using `hash_type` in [rfc22]. 51 | 52 | [rfc22]: ../0022-transaction-structure/0022-transaction-structure.md 53 | 54 | The `hash_type` encoding pattern ensures that if a script matches code via type hash, CKB always uses the latest available version of VM depending when the script is executed. But if the script matches code via data hash, the VM version to execute is determined when the cell is created. 55 | 56 | Here is an example of when VM version 2 is available: 57 | 58 | | `hash_type` | matches by | VM version | 59 | | ----------- | ---------------- | ---------- | 60 | | 0 | data hash | 0 | 61 | | 1 | type script hash | 2 | 62 | | 2 | data hash | 1 | 63 | | \* | data hash | 2 | 64 | 65 | > \* The actual value to represent data hash plus VM version 2 is undecided yet. 66 | 67 | Cell owners can trade off between the determination and VM performance boost when creating the cell. They should use data hash for determination, and type hash for the latest VM techniques. 68 | 69 | In [nervosnetwork/ckb](https://github.com/nervosnetwork/ckb), the `hash_type` is returned in the JSON RPC as an enum. Now it has three allowed values: 70 | 71 | * 0: "data" 72 | * 1: "type" 73 | * 2: "data1" 74 | 75 | ## RFC Dependencies 76 | 77 | This RFC depends on [rfc33], [rfc34], and [rfc35]. The 4 RFCs must be activated together at the same epoch. 78 | 79 | [rfc35]: ../0035-ckb2021-p2p-protocol-upgrade/0035-ckb2021-p2p-protocol-upgrade.md 80 | 81 | The first two RFCs, [rfc33] and [rfc34] are the specification of VM version 1. The [rfc35] proposes to run two versions of transaction relay protocols during the fork, because the VM selection algorithm depends on which epoch the transaction belongs to, thus it is not deterministic for transactions still in the memory pool. 82 | 83 | ## Rationale 84 | 85 | There are many other solutions to select VM versions. The current solution results from discussion and trade-off. Following are some example alternatives: 86 | 87 | Consistently uses the latest VM version. The users cannot specify the VM versions for transactions, and the version selection will be non-determine cause it will depend on the chain state. 88 | * Depend on the script code cell epoch. 
Use the old VM version if the code cell was deployed before the fork, and the new one otherwise. The problem with this solution is that anyone can re-deploy the cell and construct the transaction using the new code cell to choose the VM version. 89 | 90 | ## Backward compatibility 91 | 92 | Cell scripts that reference code via data hash will run on the same VM version before and after the fork. Those that reference code via type hash will run on different VM versions before and after the fork. dApp developers must ensure the compatibility of their scripts and upgrade them if necessary. 93 | 94 | ## Test Vectors 95 | 96 | ### Transaction Hash 97 | 98 | This is a transaction containing the `data1` hash type. 99 | 100 |
JSON 101 | 102 | ```json 103 | { 104 | "version": "0x0", 105 | "cell_deps": [ 106 | { 107 | "out_point": { 108 | "tx_hash": "0xace5ea83c478bb866edf122ff862085789158f5cbff155b7bb5f13058555b708", 109 | "index": "0x0" 110 | }, 111 | "dep_type": "dep_group" 112 | } 113 | ], 114 | "header_deps": [], 115 | "inputs": [ 116 | { 117 | "since": "0x0", 118 | "previous_output": { 119 | "tx_hash": "0xa563884b3686078ec7e7677a5f86449b15cf2693f3c1241766c6996f206cc541", 120 | "index": "0x7" 121 | } 122 | } 123 | ], 124 | "outputs": [ 125 | { 126 | "capacity": "0x2540be400", 127 | "lock": { 128 | "code_hash": "0x709f3fda12f561cfacf92273c57a98fede188a3f1a59b1f888d113f9cce08649", 129 | "hash_type": "data", 130 | "args": "0xc8328aabcd9b9e8e64fbc566c4385c3bdeb219d7" 131 | }, 132 | "type": null 133 | }, 134 | { 135 | "capacity": "0x2540be400", 136 | "lock": { 137 | "code_hash": "0x9bd7e06f3ecf4be0f2fcd2188b23f1b9fcc88e5d4b65a8637b17723bbda3cce8", 138 | "hash_type": "type", 139 | "args": "0xc8328aabcd9b9e8e64fbc566c4385c3bdeb219d7" 140 | }, 141 | "type": null 142 | }, 143 | { 144 | "capacity": "0x2540be400", 145 | "lock": { 146 | "code_hash": "0x709f3fda12f561cfacf92273c57a98fede188a3f1a59b1f888d113f9cce08649", 147 | "hash_type": "data1", 148 | "args": "0xc8328aabcd9b9e8e64fbc566c4385c3bdeb219d7" 149 | }, 150 | "type": null 151 | } 152 | ], 153 | "outputs_data": [ 154 | "0x", 155 | "0x", 156 | "0x" 157 | ], 158 | "witnesses": [ 159 | "0x550000001000000055000000550000004100000070b823564f7d1f814cc135ddd56fd8e8931b3a7040eaf1fb828adae29736a3cb0bc7f65021135b293d10a22da61fcc64f7cb660bf2c3276ad63630dad0b6099001" 160 | ] 161 | } 162 | ``` 163 | 164 |
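For reference, the hash below can be recomputed from the transaction above. The sketch that follows is illustrative only: it assumes the `blake2b-rs` crate, and `raw_tx_bytes` is a placeholder for the molecule-serialized `RawTransaction` (the fields above excluding `witnesses`), whose serialization is not reproduced here.

```rust
use blake2b_rs::Blake2bBuilder;

/// CKB hashes are blake2b-256 with the personalization "ckb-default-hash".
/// The transaction hash is computed over the molecule-serialized
/// RawTransaction, i.e. the transaction without the `witnesses` field.
fn ckb_tx_hash(raw_tx_bytes: &[u8]) -> [u8; 32] {
    let mut hasher = Blake2bBuilder::new(32)
        .personal(b"ckb-default-hash")
        .build();
    hasher.update(raw_tx_bytes);
    let mut hash = [0u8; 32];
    hasher.finalize(&mut hash);
    hash
}
```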
165 | 166 | The Transaction Hash is `0x9110ca9266f89938f09ae6f93cc914b2c856cc842440d56fda6d16ee62543f5c`. 167 | 168 | ## Acknowledgments 169 | 170 | The authors would like to thank Jan Xie and Xuejie Xiao for their comments and insightful suggestions. The members of the CKB Dev team also helped by participating in the discussion and review. Boyu Yang is the primary author of the code changes, and his experiments and feedback were essential to completing this document. 171 | -------------------------------------------------------------------------------- /rfcs/0033-ckb-vm-version-1/0033-ckb-vm-version-1.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0033" 3 | Category: Informational 4 | Status: Draft 5 | Author: Wanbiao Ye 6 | Created: 2021-05-25 7 | --- 8 | 9 | # VM version 1 10 | 11 | This RFC describes version 1 of the CKB-VM. In comparison to version 0, it includes: 12 | 13 | - Several bug fixes 14 | - Behavioural changes that do not affect execution results 15 | - New features 16 | - Performance optimisations 17 | 18 | ## 1 Fixed Several Bugs 19 | 20 | CKB-VM Version 1 has fixed the bugs discovered in Version 0. 21 | 22 | ### 1.1 Enabling Stack Pointer SP To Be Always 16-byte Aligned 23 | 24 | In the previous version, SP was incorrectly aligned during stack initialisation. See [issue](https://github.com/nervosnetwork/ckb-vm/issues/97). 25 | 26 | ### 1.2 Added a NULL To Argv 27 | 28 | C Standard 5.1.2.2.1/2 states that `argv[argc]` should be a null pointer. `NULL` was unfortunately omitted during the initialization of the stack, and it has now been added back. See [issue](https://github.com/nervosnetwork/ckb-vm/issues/98). 29 | 30 | ### 1.3 JALR Caused Erroneous Behaviour on AsmMachine When rs1 and rd Utilised the Same Register 31 | 32 | The problem arose with the JALR instruction, where the CKB-VM performed its steps in the wrong order. The correct order is to calculate the new pc first and then update rd. See [problem](https://github.com/nervosnetwork/ckb-vm/issues/92). 33 | 34 | ### 1.4 Error OutOfBound was triggered by reading the last byte of memory 35 | 36 | We have fixed it, as described in the title. 37 | 38 | ### 1.5 Unaligned executable pages from loading binary would raise an error 39 | 40 | We have fixed it, as described in the title. 41 | 42 | ### 1.6 Writable pages frozen by error 43 | 44 | This error occurred during the loading of the ELF. The CKB-VM incorrectly set a freeze flag on a writeable page, which made the page unmodifiable. 45 | 46 | It happened mainly with dynamically linked external variables. 47 | 48 | ### 1.7 Update the goblin crate 49 | 50 | goblin is a cross-platform binary parsing and loading library; ckb-vm uses it to load RISC-V programs. goblin has fixed many bugs and introduced breaking changes since the version ckb-vm previously depended on, so we decided to upgrade it. As a result, some binaries that could not be loaded before can now be loaded normally, and vice versa. 51 | 52 | ## 2 Behavioural Changes that will not affect the execution outcomes 53 | 54 | ### 2.1 Skip writing 0 to memory when argc equals 0 during stack initialisation 55 | 56 | For ckb scripts, argc is always 0 and the memory is initialised to 0, so the memory write can be safely skipped. Note that when "chaos_mode" is enabled and "argv" is empty, reading "argc" will return unexpected data. This happens rarely, and never happens on the mainnet.
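As an illustration of the stack-initialisation rules touched by sections 1.1, 1.2 and 2.1 above, here is a minimal sketch. It is a simplified model with made-up addresses, not the actual ckb-vm implementation.

```rust
/// Round the stack pointer down to a 16-byte boundary (section 1.1).
/// Simplified model; not the actual ckb-vm code.
fn align_sp_16(sp: u64) -> u64 {
    sp & !0xf
}

/// Build the argv pointer table with the trailing NULL required by
/// C standard 5.1.2.2.1/2 (section 1.2). Addresses are illustrative.
fn build_argv_table(arg_addrs: &[u64]) -> Vec<u64> {
    let mut table = arg_addrs.to_vec();
    table.push(0); // argv[argc] == NULL
    table
}

fn main() {
    // 16-byte alignment after pushing 7 bytes of data.
    assert_eq!(align_sp_16(0x4000_0000 - 7), 0x3fff_fff0);
    // For a typical CKB script argc == 0, so the table is just [NULL].
    assert_eq!(build_argv_table(&[]), vec![0]);
}
```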
57 | 58 | ### 2.2 Redesign of the internal instruction format 59 | 60 | For the sake of fast decoding and cache convenience, RISC-V instruction is decoded into the 64-bit unsigned integer. Such a format used only internally in ckb-vm rather than the original RISC-V instruction format. 61 | 62 | ## 3 New features 63 | 64 | ### 3.1 B extension 65 | 66 | We have added the RISC-V B extension (v1.0.0) [1]. This extension aims at covering the four major categories of bit manipulation: counting, extracting, inserting and swapping. For all B instructions, 1 cycle will be consumed. 67 | 68 | ### 3.2 Chaos memory mode 69 | 70 | Chaos memory mode was added for the debugging tools. Under this mode, the program memory forcibly initializes randomly, helping us to discover uninitialized objects/values in the script. 71 | 72 | ### 3.3 Suspend/resume a running VM 73 | 74 | It is possible to suspend a running CKB VM, save the state to a certain place and to resume the previously running VM later on, possibly even on a different machine. 75 | 76 | ## 4 Performance optimization 77 | 78 | ### 4.1 Lazy initialization memory 79 | 80 | In version 0, when the VM was initialised, the program memory would be initialised to zero value. Now, we have deferred the initialisation of program memory. The program memory is divided into several different frames, so that only when a frame is used (read, write), the corresponding program memory area of that frame will be initialised with zero value. As a result , small programs that do not need to use large volumes of memory will be able to run faster. 81 | 82 | ### 4.2 MOP 83 | 84 | Macro-Operation Fusion (also Macro-Op Fusion, MOP Fusion, or Macrofusion) is a hardware optimization technique found in many modern microarchitectures whereby a series of adjacent macro-operations are merged into a single macro-operation prior or during decoding. Those instructions are later decoded into fused-µOPs. 85 | 86 | The cycle consumption of the merged instructions is the maximum cycle value of the two instructions before the merge. We have verified that the use of MOPs can lead to significant improvements in some encryption algorithms. 87 | 88 | | Opcode | Origin | Cycles | 89 | | ---------------------------- | ---------------------------- | ----------------- | 90 | | ADC [2] | add + sltu + add + sltu + or | 1 + 0 + 0 + 0 + 0 | 91 | | SBB | sub + sltu + sub + sltu + or | 1 + 0 + 0 + 0 + 0 | 92 | | WIDE_MUL | mulh + mul | 5 + 0 | 93 | | WIDE_MULU | mulhu + mul | 5 + 0 | 94 | | WIDE_MULSU | mulhsu + mul | 5 + 0 | 95 | | WIDE_DIV | div + rem | 32 + 0 | 96 | | WIDE_DIVU | divu + remu | 32 + 0 | 97 | | FAR_JUMP_REL | auipc + jalr | 0 + 3 | 98 | | FAR_JUMP_ABS | lui + jalr | 0 + 3 | 99 | | LD_SIGN_EXTENDED_32_CONSTANT | lui + addiw | 1 + 0 | 100 | 101 | # Reference 102 | 103 | * [1]: [B extension][1] 104 | * [2]: [Macro-op-fusion: Pattern design of ADC and SBB][2] 105 | 106 | [1]: https://github.com/riscv/riscv-bitmanip 107 | [2]: https://github.com/nervosnetwork/ckb-vm/issues/169 108 | -------------------------------------------------------------------------------- /rfcs/0034-vm-syscalls-2/0034-vm-syscalls-2.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0034" 3 | Category: Standards Track 4 | Status: Proposal 5 | Author: Wanbiao Ye 6 | Created: 2021-05-25 7 | --- 8 | 9 | # VM Syscalls 2 10 | 11 | ## Abstract 12 | 13 | This document describes the addition of the syscalls during the ckb2021. 
These syscalls are only available since ckb-vm version 1 and ckb2021 [2]. 14 | 15 | - [VM Version] 16 | - [Current Cycles] 17 | - [Exec] 18 | 19 | ### VM Version 20 | [vm version]: #vm-version 21 | 22 | As shown above, *VM Version* syscall has a signature like following: 23 | 24 | ```c 25 | int ckb_vm_version() 26 | { 27 | return syscall(2041, 0, 0, 0, 0, 0, 0); 28 | } 29 | ``` 30 | 31 | *VM version* syscall returns current running VM version, so far 2 values will be returned: 32 | 33 | - Error for Lina CKB-VM version 34 | - 1 for the new hardfork CKB-VM version. 35 | 36 | This syscall consumes 500 cycles. 37 | 38 | ### Current Cycles 39 | [current cycles]: #current-cycles 40 | 41 | *Current Cycles* syscall has a signature like following: 42 | 43 | ```c 44 | uint64_t ckb_current_cycles() 45 | { 46 | return syscall(2042, 0, 0, 0, 0, 0, 0); 47 | } 48 | ``` 49 | 50 | *Current Cycles* returns current cycle consumption just before executing this syscall. This syscall consumes 500 cycles. 51 | 52 | 53 | ### Exec 54 | [exec]: #exec 55 | 56 | Exec runs an executable file from specified cell data in the context of an already existing machine, replacing the previous executable. The used cycles does not change, but the code, registers and memory of the vm are replaced by those of the new program. It's cycles consumption consists of two parts: 57 | 58 | - Fixed 500 cycles 59 | - Initial Loading Cycles [1] 60 | 61 | *Exec* syscall has a signature like following: 62 | 63 | ```c 64 | int ckb_exec(size_t index, size_t source, size_t place, size_t bounds, int argc, char* argv[]) 65 | { 66 | return syscall(2043, index, source, place, bounds, argc, argv); 67 | } 68 | ``` 69 | 70 | The arguments used here are: 71 | 72 | * `index`: an index value denoting the index of entries to read. 73 | * `source`: a flag denoting the source of cells or witnesses to locate, possible values include: 74 | + 1: input cells. 75 | + `0x0100000000000001`: input cells with the same running script as current script 76 | + 2: output cells. 77 | + `0x0100000000000002`: output cells with the same running script as current script 78 | + 3: dep cells. 79 | * `place`: A value of 0 or 1: 80 | + 0: read from cell data 81 | + 1: read from witness 82 | * `bounds`: high 32 bits means `offset`, low 32 bits means `length`. if `length` equals to zero, it read to end instead of reading 0 bytes. 83 | * `argc`: argc contains the number of arguments passed to the program 84 | * `argv`: argv is a one-dimensional array of strings 85 | 86 | 87 | # Reference 88 | 89 | * [1]: [Vm Cycle Limits][1] 90 | * [2]: [CKB VM version selection][2] 91 | 92 | [1]: ../0014-vm-cycle-limits/0014-vm-cycle-limits.md 93 | [2]: ../0032-ckb-vm-version-selection/0032-ckb-vm-version-selection.md 94 | -------------------------------------------------------------------------------- /rfcs/0035-ckb2021-p2p-protocol-upgrade/0035-ckb2021-p2p-protocol-upgrade.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0035" 3 | Category: Standards Track 4 | Status: Proposal 5 | Author: Chao Luo <@driftluo>, Ian Yang <@doitian> 6 | Created: 2021-07-01 7 | --- 8 | # P2P protocol upgrade 9 | 10 | ## Abstract 11 | 12 | This RFC describes how the network protocol changes before and after the ckb hard fork, and how the network protocols smoothly upgrade along the hard fork. 13 | 14 | ## Motivation 15 | 16 | The network protocol is the foundation of distributed applications. 
The data format will change slightly before and after the hard fork, but the network should not be disconnected or split because of this change. After the hard fork, only clients that support the hard fork are allowed to connect. 17 | 18 | This RFC describes in detail how the ckb node implements this functionality. 19 | 20 | ## Specification 21 | 22 | We divide the entire hard fork process into three phases: before the hard fork, the moment the hard fork activates, and after the hard fork. The protocols are divided into two categories with different upgrade strategies: 23 | 24 | - Upgrade the version of a specific protocol and ensure that both versions of the protocol are supported and can be enabled at the same time 25 | - Mount two protocols that are functionally identical but require runtime switching for smooth upgrades 26 | 27 | ### For protocols whose functionality and implementation do not need to be modified 28 | 29 | Including protocols: 30 | 31 | - Identify 32 | - Ping 33 | - Feeler 34 | - DisconnectMessage 35 | - Time 36 | - Alert 37 | 38 | ##### Before hard fork 39 | 40 | Change the version support list from `[1]` to `[1, 2]`. The client will support both versions of the protocol: new clients will enable version 2 and old clients will enable version 1. 41 | 42 | ##### Hard fork moment 43 | 44 | Disconnect all clients that have version 1 of the protocol enabled, and reject this version afterwards. 45 | 46 | ##### After hard fork 47 | 48 | Remove support for protocol version 1 in the next version of the client code, i.e. change the support list from `[1, 2]` to `[2]`, and clean up the compatibility code. 49 | 50 | ### For protocols whose implementation requires minor adjustments 51 | 52 | #### Discovery 53 | 54 | ##### Before hard fork 55 | 56 | 1. Change the version support list from `[1]` to `[1, 2]`. 57 | 2. Remove redundant codec operations from the previous implementation. 58 | 59 | ##### Hard fork moment 60 | 61 | Disconnect all clients that have version 1 of the protocol enabled, and reject this version afterwards. 62 | 63 | ##### After hard fork 64 | 65 | Remove support for protocol version 1 in the next version of the client code, i.e. change the support list from `[1, 2]` to `[2]`, and clean up the compatibility code. 66 | 67 | #### Sync 68 | 69 | ##### Before hard fork 70 | 71 | 1. Change the version support list from `[1]` to `[1, 2]`. 72 | 2. Remove the 16-per-group limit from the sync request list while keeping the maximum sync limit; the new version raises the block sync request limit from 16 to 32. 73 | 74 | ##### Hard fork moment 75 | 76 | Disconnect all clients that have version 1 of the protocol enabled, and reject this version afterwards. 77 | 78 | ##### After hard fork 79 | 80 | Remove support for protocol version 1 in the next version of the client code, i.e. change the support list from `[1, 2]` to `[2]`, and clean up the compatibility code. 81 | 82 | ### For protocols whose behavior conflicts before and after the fork 83 | 84 | #### Relay 85 | 86 | ##### Before hard fork 87 | 88 | Because the VM differs before and after the fork, the relay protocol may compute inconsistent transaction validation cycles, and this cannot be handled by a simple version upgrade. For this protocol, another solution is adopted to smooth the transition: open both relay protocols, disable the relay-tx-related messages of the new protocol, and let the old protocol work normally. 89 | 90 | ##### Hard fork moment 91 | 92 | 1. Disable relay tx related messages in the version 1 protocol and switch to the new relay 93 | 2.
Allow opening the version 1 protocols 94 | 95 | ##### After hard fork 96 | 97 | Remove the support for the old relay protocol in the next version of the client code, i.e. remove the support for the old relay protocol and clean up the compatibility code 98 | -------------------------------------------------------------------------------- /rfcs/0035-ckb2021-p2p-protocol-upgrade/0035-ckb2021-p2p-protocol-upgrade.zh-CN.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0035" 3 | Category: Standards Track 4 | Status: Proposal 5 | Author: Chao Luo <@driftluo>, Ian Yang <@doitian> 6 | Created: 2021-07-01 7 | --- 8 | 9 | # P2P 协议升级 10 | 11 | ## Abstract 12 | 13 | 这个 RFC 用于描述网络协议在 ckb hard fork 前后的变化,以及如何在 hard fork 过程中让网络协议平稳过度。 14 | 15 | ## Motivation 16 | 17 | 网络协议是分布式应用的基础,hard fork 前后,数据格式将会有小范围的变化,但网络不应该因为这个变化而导致断开或者分裂,我们应当尽可能平稳地过度这一特殊时期,同时要保证在 hard fork 之前,所有客户端可以连接,hard fork 之后,只允许连接支持 hard fork 的客户端。 18 | 19 | 这个 RFC 详细描述了 ckb 节点如何实现上述功能。 20 | 21 | ## Specification 22 | 23 | 我们将整个 hard fork 的过程分为三个阶段:hard fork 之前,hard fork 时点,hard fork 之后。然后分为两大类手段来描述具体的改动细节,以及如何支持平稳过度。 24 | 25 | ckb 对网络协议有两种升级和扩展的方式: 26 | 27 | - 对特定协议升级版本,并保证同时支持两个版本协议可同时开启 28 | - 挂载两个功能一样但需要运行时切换的协议用于平滑升级 29 | 30 | ### 对于功能和实现都不需要修改的协议 31 | 32 | 包含协议: 33 | 34 | - Identify 35 | - Ping 36 | - Feeler 37 | - DisconnectMessage 38 | - Time 39 | - Alert 40 | 41 | ##### hard fork 之前 42 | 43 | 将版本支持列表从 `[1]` 修改为 `[1, 2]`,该客户端将同时支持两个版本的协议,新客户端将开启 2 版本,老客户端将开启 1 版本 44 | 45 | ##### hard fork 时点 46 | 47 | 将开启 1 版本协议的客户端全部断开连接,同时在之后拒绝此版本协议开启 48 | 49 | ##### hard fork 之后 50 | 51 | 在下一个版本客户端代码中移除 1 版本协议的支持,即支持列表从 `[1, 2]` 修改为 `[2]`,并清理兼容代码 52 | 53 | ### 对实现需要微调的协议 54 | 55 | #### Discovery 56 | 57 | ##### hard fork 之前: 58 | 59 | 1. 将版本支持列表从 `[1]` 修改为 `[1, 2]` 60 | 2. 移除之前实现时多余的编码解码操作 61 | 62 | ##### hard fork 时点 63 | 64 | 将开启 1 版本协议的客户端全部断开连接,同时在之后拒绝此版本协议开启 65 | 66 | ##### hard fork 之后 67 | 68 | 在下一个版本客户端代码中移除 1 版本协议的支持,即支持列表从 `[1, 2]` 修改为 `[2]`,并清理兼容代码 69 | 70 | #### Sync 71 | 72 | ##### hard fork 之前: 73 | 74 | 1. 将版本支持列表从 `[1]` 修改为 `[1, 2]` 75 | 2. 移除同步时请求列表的 16 一组限制,保留最大同步数的限制,新版本将 block 同步请求上限从 16 改为 32 76 | 77 | ##### hard fork 时点 78 | 79 | 将开启 1 版本协议的客户端全部断开连接,同时在之后拒绝此版本协议开启 80 | 81 | ##### hard fork 之后 82 | 83 | 在下一个版本客户端代码中移除 1 版本协议的支持,即支持列表从 `[1, 2]` 修改为 `[2]`,并清理兼容代码 84 | 85 | ### 对行为在 fork 前后会发生冲突的协议 86 | 87 | #### Relay 88 | 89 | ##### hard fork 之前: 90 | 91 | 由于 relay 协议在 fork 前后可能会因为 vm 不一致而导致交易验证的 cycle 不一致,这样的行为无法通过简单的升级来标识,对于这样的协议,将采取另一种方案进行平滑过度,即打开两个 relay 协议,禁用新协议 relay tx 相关消息,让老协议正常工作 92 | 93 | ##### hard fork 时点 94 | 95 | 1. 禁用 1 版本协议中的 relay tx 相关消息,切换为新 relay 工作 96 | 2. 允许打开 1 版本协议 97 | 98 | ##### hard fork 之后 99 | 100 | 在下一个版本客户端代码中移除老版本 relay 协议的支持,即删除老版本 relay 协议的支持,并清理兼容代码 101 | -------------------------------------------------------------------------------- /rfcs/0036-remove-header-deps-immature-rule/0036-remove-header-deps-immature-rule.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0036" 3 | Category: Standards Track 4 | Status: Proposal 5 | Author: Ian Yang <@doitian> 6 | Created: 2021-02-07 7 | --- 8 | 9 | # Remove Header Deps Immature Rule 10 | 11 | ## Abstract 12 | 13 | This document proposes removing the *[Loading Header Immature Rule]*. 14 | 15 | [Loading Header Immature Rule]: ../0009-vm-syscalls/0009-vm-syscalls.md#loading-header-immature-error 16 | 17 | In the consensus ckb2019, the header dep must reference the block which is 4 epochs ago. 
After this RFC is activated, the transaction can use any existing blocks in the chain as the header dep. 18 | 19 | ## Motivation 20 | 21 | Header dep is a useful feature for dApps developers because the script can read the block's header in the chain or verify that an input cell or dep cell is in a specific block in the chain. 22 | 23 | The *Loading Header Immature Rule* prevents the usage of header deps in many scenarios because the script must reference the block about 16 hours ago. 24 | 25 | The intention of the immature rule is like the cellbase immature rule. A transaction and all its descendants may be invalidated after a chain reorganization [^1], because its header deps referred to stale or orphan blocks. Removing the rule lets dApps developers trade-off between responsive header reference and reliable transaction finality. 26 | 27 | [^1]: Chain reorganization happens when the node found a better chain with more accumulated proved work and it has to rollback blocks to switch to the new chain. 28 | 29 | ## Specification 30 | 31 | This RFC must be activated via a hard fork. After activation, the consensus no longer verifies that the referenced block in the header deps is mined 4 epochs ago. 32 | 33 | The transaction producers can choose to postpone the transaction submission when it has a header dep that has been mined recently. It suggests waiting for at least 4 epochs, but the app can choose the best value in its scenario, like the transaction confirmation period. 34 | -------------------------------------------------------------------------------- /rfcs/0037-ckb2021/0037-ckb2021.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0037" 3 | Category: Informational 4 | Status: Draft 5 | Author: Ian Yang <@doitian> 6 | Created: 2021-07-24 7 | --- 8 | 9 | # CKB Consensus Change (Edition CKB2021) 10 | 11 | The current edition of CKB consensus rules is CKB2019. CKB2021 refers to the new edition of CKB consensus rules after its first hardfork. The purpose of a hard fork is to upgrade and update the rules encoded in the network. The changes are not backward compatible. This document outlines the changes in this upgrade. 12 | 13 | ## What's in CKB2021 14 | 15 | CKB2021 includes both new features and bug fixes. All changes are proposed via RFCs. The appendix has a list of all the RFCs related to CKB2021. 16 | 17 | The upgrade is divided into three categories. 18 | 19 | First, CKB VM gets a major upgrade. CKB2021 will bundle CKB VM v1, in addition to the v0 in CKB2019. Scripts will be executed on v1 unless users opt in to use v0 by setting the script hash type to `data`. 20 | 21 | Second, CKB2021 adds a new field `extension` in the block. This is reserved for future upgrades such as flyclient. 22 | 23 | Lastly, there are a bunch of consensus patches to fix bugs and make the consensus rules more robust. 24 | 25 | ### CKB VM v1 26 | 27 | Since CKB2021, there will be multiple VM versions available. [RFC32] introduces a CKB VM version mechanism. It piggybacks on the `hash_type` field in the Script structure. 28 | 29 | | `hash_type` | JSON representation | matches by | VM version | 30 | | ----------- | ---------- | ---------------- | ---------- | 31 | | 0 | "data" | data hash | 0 | 32 | | 1 | "type" | type script hash | 1 | 33 | | 2 | "data1" | data hash | 1 | 34 | 35 | [RFC33] introduces what's new in CKB VM v1 and [RFC34] adds new syscalls for VM v1. 36 | 37 | The new VM version adds new features and performance optimizations. 
It has fixed identified bugs discovered in v0. 38 | 39 | CKB VM v1 supports [RISC-V B extension](https://github.com/riscv/riscv-bitmanip) and [macro-op fusion](https://en.wikichip.org/wiki/macro-operation_fusion). One major rationale behind the changes in CKB-VM is about reducing overheads. RISC-V B extension allows developers to map RISC-V instructions directly with native instructions provided by x86-64 CPUs, while macro-op fusion goes even deeper to exploit modern micro-architectures in CPUs. All those efforts make crypto algorithms more efficiently on CKB-VM, unlocking more potential use cases of Nervos CKB. For example, the BLS signature verification lock consumes too many cycles on CKB now. With the help of B extension, together with macro-op, it's possible to bring the cycles consumption down to a feasible rate. 40 | 41 | Given the same transaction, different VM versions may consume different cycles, even give different verification results. [RFC35] proposes to use separate transaction relay protocols for each VM version to help the smooth transition of the CKB2021 activation. 42 | 43 | ### Extension Field 44 | 45 | [RFC31] proposes adding an optional variable length field to the block. 46 | 47 | Many extensions require adding new fields into the block. For example, PoA for testnet requires 65 bytes for each signature, and flyclient needs to add a 64 bytes hash. But there's not enough reserved bits in the header for these extensions. The RFC proposes a solution to add a variable length field in the block. 48 | 49 | Although the field is added to the block body, nodes can synchronize the block header and this field together without overhead. 50 | 51 | CKB2021 will not parse and verify the field after the activation. Instead, it enables a future soft fork to give the definition of the extension field. For example, flyclient can store the hash in the extension field. 52 | 53 | ### Consensus Patches 54 | 55 | [RFC28] uses block timestamp as the start time for the relative timestamp `since` field, instead of the median of previous 37 blocks. This simplifies the `since` maturity calculation. 56 | 57 | [RFC29] allows multiple matches on dep cells via type script hash when these cells have the same data. It removes unnecessary restrictions when there's no ambiguity to choose matched script code. 58 | 59 | [RFC30] ensures that the index is less than the length in the `since` field using epoch as the time measure. It avoids the ambiguity because of the inconsistent behavior when using relative and absolute epoch `since`. 60 | 61 | [RFC36] removes header deps immature rule, allowing developers to choose how long to wait until a header can be used as a dep header. 62 | 63 | ## CKB2021 Timeline 64 | 65 | The mainnet upgrade is divided into three phases. 66 | 67 | * **Stage 1 - Code Preview**: An RC version of 0.100.0 is ready for preview on July 16 2021 via nervosnetwork/ckb [releases](https://github.com/nervosnetwork/ckb/releases). It will introduce the incompatible changes to help developers to adapt their tools and apps to CKB2021. But this version does not activate the consensus incompatible changes in CKB2021. Developers can test the new rules by running a dev chain locally. 68 | 69 | * **Stage 2 - Testnet Activation**: With the release of CKB 0.101.0, CKB2021 is set to activate on Aggron testnet on October 24th, 2021. Pudge is the successor guardian of the testnet after activation. Thank you Aggron, Ogre Magi! Look who's coming for dinner, Pudge! 
70 | 71 | * **Stage 3 - Mainnet Activation**: With the release of CKB 0.103.0, CKB2021 will be set to activate on Lina mainnet. The exact mainnet activation time will be determined after Stage 2 passed successfully. Mirana will be the successor guardian of CKB mainnet after activation. Thank you Lina, our flame burns brighter. The moon lights our way, Mirana! 72 | 73 | ## Upgrade Strategies 74 | 75 | First, the SDK, Tool, and dApps authors must adapt to any 0.100.0 rc version. 76 | 77 | There are two strategies for ecosystem developers to upgrade to the CKB2021 consensus. Choose the former one if the developers can pause the app during the fork activation, otherwise, use the latter one. 78 | 79 | - Release two different versions or use the feature switcher. Manually deploy the newer version or enable the feature CKB2021 after the fork activation. 80 | - Use feature switcher and enable the feature CKB2021 automatically when the chain grows into the activation epoch. The activation epoch is different in the testnet and the mainnet, which is available via the updated `get_consensus` RPC. 81 | 82 | ## Appendix 83 | 84 | ### CKB2021 RFCs List 85 | 86 | * [RFC28]: Use Block Timestamp as Start Timestamp in Since. 87 | * [RFC29]: Allow multiple matches on dep cells via type script hash when these cells have the same data. 88 | * [RFC30]: Ensure that index is less than length in input since field using epoch. 89 | * [RFC31]: Add a variable length field in the block header. 90 | * [RFC32]: CKB VM version selection. 91 | * [RFC33]: CKB VM version1 changes. 92 | * [RFC34]: CKB VM syscalls bundle 2. 93 | * [RFC35]: P2P protocol upgrade. 94 | * [RFC36]: Remove header deps immature rule. 95 | * RFC37: This RFC, CKB2021 overview. 96 | 97 | [RFC28]: ../0028-change-since-relative-timestamp/0028-change-since-relative-timestamp.md 98 | [RFC29]: ../0029-allow-script-multiple-matches-on-identical-code/0029-allow-script-multiple-matches-on-identical-code.md 99 | [RFC30]: ../0030-ensure-index-less-than-length-in-since/0030-ensure-index-less-than-length-in-since.md 100 | [RFC31]: ../0031-variable-length-header-field/0031-variable-length-header-field.md 101 | [RFC32]: ../0032-ckb-vm-version-selection/0032-ckb-vm-version-selection.md 102 | [RFC33]: ../0033-ckb-vm-version-1/0033-ckb-vm-version-1.md 103 | [RFC34]: ../0034-vm-syscalls-2/0034-vm-syscalls-2.md 104 | [RFC35]: ../0035-ckb2021-p2p-protocol-upgrade/0035-ckb2021-p2p-protocol-upgrade.md 105 | [RFC36]: ../0036-remove-header-deps-immature-rule/0036-remove-header-deps-immature-rule.md 106 | -------------------------------------------------------------------------------- /rfcs/0042-omnilock/rce_cells.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0042-omnilock/rce_cells.png -------------------------------------------------------------------------------- /rfcs/0043-ckb-softfork-activation/0043-ckb-softfork-activation.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0043" 3 | Category: Standards Track 4 | Status: Proposal 5 | Author: Dingwei Zhang 6 | Created: 2022-06-16 7 | --- 8 | 9 | # CKB softfork activation 10 | 11 | ## Abstract 12 | 13 | This document specifies a proposed change to the semantics of the 'version' field in CKB blocks, allowing multiple backward-compatible changes (further called "softforks") to be deployed in parallel. 
It relies on interpreting the version field as a bit vector, where each bit can be used to track an independent change. These are tallied in each period. Once the consensus change succeeds or times out, there is a "fallow" pause, after which the bit can be reused for later changes. 14 | 15 | ## Specification 16 | 17 | ### Parameters 18 | 19 | Each softfork deployment is specified by the following per-chain parameters (further elaborated below): 20 | 21 | 1. The `name` specifies a very brief description of the softfork, reasonable for use as an identifier. 22 | 2. The `bit` determines which bit in the `version` field of the block is to be used to signal the softfork lock-in and activation. It is chosen from the set {0,1,2,...,28}. 23 | 3. The `start_epoch` specifies the first epoch in which the bit gains meaning. 24 | 4. The `timeout_epoch` specifies the epoch at which miner signaling ends. Once this epoch has been reached, if the softfork has not yet locked in (excluding the bit state of this epoch's blocks), the deployment is considered failed on all descendants of the block. 25 | 5. The `period` specifies the length, in epochs, of the signalling period. 26 | 6. The `threshold` specifies the minimum ratio of signalling blocks per `period` that indicates lock-in of the softfork during that `period`. 27 | 7. The `minimum_activation_epoch` specifies the epoch at which the softfork is allowed to become active. 28 | 29 | These parameters will be written into the software binary at the start of deployment, and deployment begins when the software is released. 30 | 31 | ### Selection guidelines 32 | The following guidelines are suggested for selecting these parameters for a softfork: 33 | 34 | 1. `name` should be selected such that no two softforks, concurrent or otherwise, ever use the same name. 35 | 2. `bit` should be selected such that no two concurrent softforks use the same bit. 36 | 3. `start_epoch` can be set soon after the software carrying the parameters is expected to be released. 37 | 4. `timeout_epoch` should be set to an epoch by which it is reasonable to expect the entire economy to have upgraded, ensuring sufficient time for the signalling `period`; it should be at least one and a half months (270 epochs) after `start_epoch`. 38 | 5. `period` should be at least 42 epochs, approximately one week. 39 | 6. `threshold` should be 90%, or 75% for testnet. 40 | 7. `minimum_activation_epoch` should be set to several epochs after `timeout_epoch` if the `start_epoch` is to be very soon after the software carrying the parameters is expected to be released. 41 | When the locked_in threshold is reached, softforks are guaranteed to activate eventually, but not until `minimum_activation_epoch` after signal tracking starts, allowing users, developers, and organizations to prepare software, announcements, and celebrations for that event. 42 | 43 | ### States 44 | 45 | With each block and softfork, we associate a deployment state. The possible states are: 46 | 47 | 1. DEFINED is the first state of each softfork. The blocks of epoch 0 are by definition in this state for each deployment. 48 | 2. STARTED for all blocks at or past the `start_epoch`. 49 | 3. LOCKED_IN for one `period` after the first `period` with STARTED blocks of which at least `threshold` have the associated bit set in `version`. 50 | 4. ACTIVE for all blocks after the LOCKED_IN `period`. 51 | 5. FAILED for all blocks after the `timeout_epoch`, if LOCKED_IN was not reached.
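To make the parameters and states above concrete, here is a hedged sketch in Rust. The names, the struct layout, and the rounding used in `threshold_number` are illustrative rather than the exact ckb implementation; the example values come from the testnet Light Client Protocol entry in the deployments list referenced at the end of this RFC.

```rust
/// Possible deployment states, mirroring the list above (illustrative names).
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum ThresholdState {
    Defined,
    Started,
    LockedIn,
    Active,
    Failed,
}

/// Per-deployment parameters, mirroring the Parameters section (illustrative).
struct Deployment {
    name: &'static str,
    bit: u8,                 // chosen from {0, 1, ..., 28}
    start_epoch: u64,
    timeout_epoch: u64,
    period: u64,             // signalling period length in epochs
    threshold: (u64, u64),   // ratio as numerator/denominator, e.g. (3, 4) = 75%
    minimum_activation_epoch: u64,
}

impl Deployment {
    /// Version bit mask; compare with the `mask` function in the
    /// "Bit flags" section below.
    fn mask(&self) -> u32 {
        1u32 << self.bit as u32
    }

    /// Minimum number of signalling blocks out of `period_blocks` blocks.
    /// The rounding choice here is illustrative, not normative.
    fn threshold_number(&self, period_blocks: u64) -> u64 {
        (period_blocks * self.threshold.0 + self.threshold.1 - 1) / self.threshold.1
    }
}

fn main() {
    // Example values taken from the testnet Light Client Protocol deployment
    // listed in deployments.md; minimum_activation_epoch is illustrative
    // (the deployment became active at epoch 5711).
    let light_client = Deployment {
        name: "light_client_protocol",
        bit: 1,
        start_epoch: 5346,
        timeout_epoch: 5616,
        period: 42,
        threshold: (3, 4), // 75% on testnet
        minimum_activation_epoch: 5711,
    };
    assert_eq!(light_client.mask(), 0b10);
    assert_eq!(light_client.threshold_number(1000), 750);
    assert_eq!(ThresholdState::Defined, ThresholdState::Defined);
}
```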
52 | 53 | ### Bit flags 54 | 55 | The `version` block header field is to be interpreted as a 32-bit little-endian integer, and bits are selected within this integer as values (1 << N) where N is the `bit` number. 56 | 57 | ```rust 58 | pub fn mask(&self) -> u32 { 59 | 1u32 << bit as u32 60 | } 61 | ``` 62 | 63 | Blocks in the STARTED state get a `version` whose bit position bit is set to 1. The top 3 bits of such blocks must be 000, so the range of possible `version` values is [0x00000000...0x1FFFFFFF], inclusive. 64 | 65 | By restricting the top 3 bits to 000, we get 29 out of those for this proposal and support future upgrades for different mechanisms. 66 | 67 | Miners should continue setting the bit in the LOCKED_IN phase, so uptake is visible, though this does not affect consensus rules. 68 | 69 | ### New consensus rules 70 | The new consensus rules for each softfork are enforced for each block with an ACTIVE state. 71 | 72 | ### State transitions 73 | 74 | ![State transitions](images/state-transitions.png) 75 | 76 | The blocks of 0 epoch has a state DEFINED for each deployment, by definition. 77 | 78 | ```rust 79 | if epoch.number().is_zero() { 80 | return ThresholdState::DEFINED; 81 | } 82 | ``` 83 | 84 | We remain in the initial state until we reach the `start_epoch`. 85 | 86 | ```rust 87 | match state { 88 | ThresholdState::DEFINED => { 89 | if epoch.number() >= start { 90 | next_state = ThresholdState::STARTED; 91 | } 92 | } 93 | ``` 94 | 95 | After a `period` in the STARTED state, we tally the bits set and transition to LOCKED_IN if a sufficient number of blocks in the past `period` set the deployment bit in their version numbers. If the threshold has not been met and we reach the `timeout_epoch`, we transition directly to FAILED. 96 | 97 | Note that a block's state never depends on its version, only on that of its ancestors. 98 | 99 | ```rust 100 | match state { 101 | ThresholdState::STARTED => { 102 | let mut count = 0; 103 | for block in (0..period_blocks) { 104 | if (block.version() & 0xE0000000 == 0x00000000 && (block.version() >> bit) & 1 == 1) { 105 | ++count; 106 | } 107 | } 108 | let threshold_number = threshold_number(period_blocks, threshold); 109 | if count >= threshold_number { 110 | next_state = ThresholdState::LOCKED_IN; 111 | } else if epoch_ext.number() >= timeout { 112 | next_state = ThresholdState::FAILED; 113 | } 114 | } 115 | ``` 116 | After a `period` of LOCKED_IN, we automatically transition to ACTIVE if the `minimum_activation_epoch` is reached. Otherwise, LOCKED_IN continues. 117 | 118 | ```rust 119 | ThresholdState::LOCKED_IN => { 120 | if epoch.number() >= min_activation_epoch { 121 | next_state = ThresholdState::ACTIVE; 122 | } 123 | } 124 | ``` 125 | 126 | Furthermore, ACTIVE and FAILED are terminal states in which a deployment stays once reached. 127 | 128 | ```rust 129 | ThresholdState::FAILED | ThresholdState::ACTIVE => { 130 | // Nothing happens, these are terminal states. 131 | } 132 | ``` 133 | 134 | ## Deployments 135 | A living list of deployment proposals can be found [here](./deployments.md). 136 | 137 | ## Reference 138 | 139 | 1. Wuille, P., Todd, P., Maxwell, G., & Russell, R. (2015). BIP9: Version bits with timeout and delay. Bitcoin BIPs. https://github.com/bitcoin/bips/blob/master/bip-0009.mediawiki 140 | 2. Fry, S., & Dashjr, L. (2017). BIP8: Version bits with lock-in by height. Bitcoin BIPs. https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki 141 | 3. Harding, D. A. (2021). 
Taproot activation proposal “Speedy Trial.” Bitcoin-Dev Mailing List. https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-March/018583.html 142 | -------------------------------------------------------------------------------- /rfcs/0043-ckb-softfork-activation/deployments.md: -------------------------------------------------------------------------------- 1 | # Deployments 2 | --- 3 | 4 | List of proposed deployments. 5 | | Name | Bit | Mainnet Start | Mainnet Timeout | Mainnet State | Testnet Start | Testnet Timeout | Testnet State | RFC | 6 | | ----------- | ---------- | ---------------- | ---------- | ----------- | ---------- | ---------------- | ---------- | ---------- | 7 | | Light Client Protocol | 1 | TBD | TBD | TBD | epoch#5346 | epoch#5616 | active since epoch#5711 | [RFC0044](../0044-ckb-light-client/0044-ckb-light-client.md) 8 | -------------------------------------------------------------------------------- /rfcs/0043-ckb-softfork-activation/images/state-transitions.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nervosnetwork/rfcs/4b502ffcb02fc7019e0dd4b5f866b5f09819cfbe/rfcs/0043-ckb-softfork-activation/images/state-transitions.png -------------------------------------------------------------------------------- /rfcs/0045-client-block-filter/0045-client-block-filter.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0045" 3 | Category: Standards Track 4 | Status: Proposal 5 | Author: Quake Wang 6 | Created: 2022-08-23 7 | --- 8 | 9 | # CKB Client Side Block Filter Protocol 10 | 11 | ## Abstract 12 | 13 | This RFC describes a block filter protocol that could be used together with [RFC 0044](../0044-ckb-light-client/0044-ckb-light-client.md). It allows clients to obtain compact probabilistic filters of CKB blocks from full nodes and download full blocks if the filter matches relevant data. 14 | 15 | ## Motivation 16 | 17 | Light clients allow applications to read relevant transactions from the blockchain without incurring the full cost of downloading and validating all data. Such applications seek to simultaneously minimize the trust in peers and the amount of bandwidth, storage space, and computation required. They achieve this by sampling headers through the fly-client protocol, verifying the proofs of work, and following the longest proof-of-work chain. Light clients then download only the blockchain data relevant to them directly from peers and validate inclusion in the header chain. Though clients do not check the validity of all blocks in the longest proof-of-work chain, they rely on miner incentives for security. 18 | 19 | Full nodes generate deterministic filters on block data that are served to the client. A light client can then download an entire block if the filter matches the data it is watching for. Since filters are deterministic, they only need to be constructed once and stored on disk, whenever a new block is appended to the chain. This keeps the computation required to serve filters minimal. 
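The flow described above can be summarised in a short sketch. `GcsFilter` is a stand-in for a BIP158-style Golomb-Coded Set filter, not an actual API defined by this RFC or by ckb; only the matching logic is illustrated.

```rust
/// Illustrative stand-in for a BIP158-style Golomb-Coded Set filter; a real
/// implementation stores compressed element hashes rather than a plain Vec.
struct GcsFilter {
    elements: Vec<[u8; 32]>,
}

impl GcsFilter {
    /// Probabilistic membership test: a real GCS may return false positives,
    /// never false negatives.
    fn match_any(&self, watched: &[[u8; 32]]) -> bool {
        self.elements.iter().any(|e| watched.contains(e))
    }
}

/// Return the numbers of the blocks a light client should download:
/// those whose filter matches one of the watched script hashes.
fn blocks_to_download(
    filters: &[(u64, GcsFilter)],
    watched_script_hashes: &[[u8; 32]],
) -> Vec<u64> {
    filters
        .iter()
        .filter(|(_, f)| f.match_any(watched_script_hashes))
        .map(|(number, _)| *number)
        .collect()
}

fn main() {
    let watched = [[0x11u8; 32]];
    let filters = vec![
        (100u64, GcsFilter { elements: vec![[0x11u8; 32]] }),
        (101u64, GcsFilter { elements: vec![[0x22u8; 32]] }),
    ];
    assert_eq!(blocks_to_download(&filters, &watched), vec![100]);
}
```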
20 | 21 | ## Specification 22 | 23 | ### Protocol Messages 24 | 25 | #### GetBlockFilters 26 | 27 | `GetBlockFilters` is used to request the compact filters of a particular range of blocks: 28 | 29 | ``` 30 | struct GetBlockFilters { 31 | start_number: Uint64, // The height of the first block in the requested range 32 | } 33 | ``` 34 | 35 | #### BlockFilters 36 | `BlockFilters` is sent in response to `GetBlockFilters`, one for each block in the requested range: 37 | 38 | ``` 39 | table BlockFilters { 40 | start_number: Uint64, // The height of the first block in the requested range 41 | block_hashes: Byte32Vec, // The hashes of the blocks in the range 42 | filters: BytesVec, // The filters of the blocks in the range 43 | } 44 | ``` 45 | 46 | 1. The `start_number` SHOULD match the field in the GetBlockFilters request. 47 | 2. The `block_hashes` field size should not be larger than 1000. 48 | 3. The `block_hashes` and `filters` fields size SHOULD match. 49 | 50 | #### GetBlockFilterHashes 51 | `GetBlockFilterHashes` is used to request verifiable filter hashes for a particular range of blocks: 52 | 53 | ``` 54 | struct GetBlockFilterHashes { 55 | start_number: Uint64, // The height of the first block in the requested range 56 | } 57 | ``` 58 | 59 | #### BlockFilterHashes 60 | `BlockFilterHashes` is sent in response to `GetBlockFilterHashes`: 61 | 62 | ``` 63 | table BlockFilterHashes { 64 | start_number: Uint64, // The height of the first block in the requested range 65 | parent_block_filter_hash: Byte32, // The hash of the parent block filter 66 | block_filter_hashes: Byte32Vec, // The hashes of the block filters in the range 67 | } 68 | ``` 69 | 70 | 1. The `start_number` SHOULD match the field in the GetBlockFilterHashes request. 71 | 2. The `block_filter_hashes` field size SHOULD not exceed 2000 72 | 73 | 74 | #### GetBlockFilterCheckPoints 75 | `GetBlockFilterCheckPoints` is used to request filter hashes at evenly spaced intervals over a range of blocks. Clients may use filter hashes from `GetBlockFilterHashes` to connect these checkpoints, as is described in the 76 | [Client Operation](#client-operation) section below: 77 | 78 | ``` 79 | struct GetBlockFilterCheckPoints { 80 | start_number: Uint64, // The height of the first block in the requested range 81 | } 82 | ``` 83 | 84 | #### BlockFilterCheckPoints 85 | `BlockFilterCheckPoints` is sent in response to `GetBlockFilterCheckPoints`. The filter hashes included are the set of all filter hashes on the requested blocks range where the height is a multiple of the interval 2000: 86 | 87 | ``` 88 | table BlockFilterCheckPoints { 89 | start_number: Uint64, 90 | block_filter_hashes: Byte32Vec, 91 | } 92 | ``` 93 | 94 | 1. The `start_number` SHOULD match the field in the GetBlockFilterCheckPoints request. 95 | 2. The `block_filter_hashes` field size should not be larger than 2000. 96 | 97 | ### Filter Data Generation 98 | 99 | We follow the BIP158 for filter data generation and use the same Golomb-Coded Sets parameters P and M values. The only difference is that we only use cell's lock/type script hash as the filter data: 100 | 101 | ``` 102 | filter.add_element(cell.lock.calc_script_hash().as_slice()); 103 | if let Some(type_script) = cell.type_().to_opt() { 104 | filter.add_element(type_script.calc_script_hash().as_slice()); 105 | } 106 | ``` 107 | 108 | ### Node Operation 109 | 110 | Full nodes MAY opt to support this RFC, such nodes SHOULD treat the filters as an additional index of the blockchain. 
For each new block that is connected to the main chain, nodes SHOULD generate filters and persist them. Nodes that are missing filters and are already synced with the blockchain SHOULD reindex the chain upon start-up, constructing filters for each block from genesis to the current tip. 111 | 112 | Nodes SHOULD NOT generate filters dynamically on request, as malicious peers may be able to perform DoS attacks by requesting small filters derived from large blocks. This would require an asymmetrical amount of I/O on the node to compute and serve. 113 | 114 | Nodes MAY prune block data after generating and storing all filters for a block. 115 | 116 | ### Client Operation 117 | 118 | This section provides recommendations for light clients to download filters with maximal security. 119 | 120 | Clients SHOULD first sync with the full nodes by verifying the best chain tip through the fly-client protocol before downloading any filters or filter hashes. Clients SHOULD disconnect any outbound peers whose best chain has significantly less work than the known longest chain. 121 | 122 | 123 | Once a client's tip is in sync, it SHOULD download and verify filter hashes for all blocks. The client SHOULD send `GetBlockFilterHashes` messages to full nodes and store the filter hashes for each block. The client MAY first fetch hashes by sending `GetBlockFilterCheckPoints`. The checkpoints allow the client to download filter hashes for different intervals from multiple peers in parallel, verifying each range of 2000 headers against the checkpoints. 124 | 125 | Unless securely connected to a trusted peer that is serving filter hashes, the client SHOULD connect to multiple outbound peers to mitigate the risk of downloading incorrect filters. If the client receives conflicting filter hashes from different peers for any block, it SHOULD interrogate them to determine which is faulty. The client SHOULD use `GetBlockFilterHashes` and/or `GetBlockFilterCheckPoints` to first identify the first filter hashes that the peers disagree on. The client then SHOULD download the full block from any peer and derive the correct filter and filter hash. The client SHOULD ban any peers that sent a filter hash that does not match the computed one. 126 | 127 | Once the client has downloaded and verified all filter hashes needed, and no outbound peers have sent conflicting headers, the client can download the actual block filters it needs. Starting from the first block in the desired range, the client now MAY download the filters. The client SHOULD test that each filter links to its corresponding filter hash and ban peers that send incorrect filters. The client MAY download multiple filters at once to increase throughput. 128 | 129 | Each time a new valid block header is received, the client SHOULD request the corresponding filter hashes from all eligible peers. If two peers send conflicting filter hashes, the client should interrogate them as described above and ban any peers that send an invalid header. 130 | 131 | If a client is fetching full blocks from the P2P network, they SHOULD be downloaded from outbound peers at random to mitigate privacy loss due to transaction intersection analysis. Note that blocks may be downloaded from peers that do not support this RFC. 132 | ## Deployment 133 | 134 | This RFC is deploy identically to CKB Light Client Protocol ([RFC0044](../0044-ckb-light-client/0044-ckb-light-client.md)). 135 | 136 | ## Reference 137 | 138 | 1. 
BIP157: https://github.com/bitcoin/bips/blob/master/bip-0157.mediawiki 139 | 2. BIP158: https://github.com/bitcoin/bips/blob/master/bip-0158.mediawiki 140 | -------------------------------------------------------------------------------- /rfcs/0048-remove-block-header-version-reservation-rule/0048-remove-block-header-version-reservation-rule.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0048" 3 | Category: Standards Track 4 | Status: Draft 5 | Author: Dingwei Zhang 6 | Created: 2023-04-17 7 | --- 8 | 9 | # Remove Block Header Version Reservation Rule 10 | 11 | 12 | ## Abstract 13 | 14 | This rfc proposes to remove this reservation and allow for the use of CKB softfork activation [RFC43] in the block header. This change will be implemented in the 2023 edition of the CKB consensus rules. 15 | 16 | ## Motivation 17 | 18 | The version field in the CKB block header currently has no real meaning, as the consensus rule forces it to be 0 in CKB2021 and earlier. This means that it cannot be used to signal CKB softfork activation [RFC43]. To address this issue, This rfc proposes to remove this reservation and allow for the use of version bits in the block header. 19 | 20 | ## Specification 21 | 22 | This RFC must be activated via a hard fork. After activation, any unsigned 32-bit integer is legal for the version field and no verification rule will be required. 23 | -------------------------------------------------------------------------------- /rfcs/0049-ckb-vm-version-2/0049-ckb-vm-version-2.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0049" 3 | Category: Standards Track 4 | Status: Draft 5 | Author: Wanbiao Ye 6 | Created: 2023-04-17 7 | --- 8 | 9 | # VM version2 10 | 11 | ## Abstract 12 | 13 | This RFC delineates the specifications for CKB-VM version 2. CKB-VM version 2 pertains to the version implemented in the CKB Meepo hardfork. 14 | 15 | ## **Motivation** 16 | 17 | The upgrade of CKB-VM in Meepo hardfork aims to enhance the security, portability, and efficiency of scripts. Throughout recent years, several questions have been a source of concern for us: 18 | 19 | - We currently lack a secure and straightforward method to invoke one script from another. 20 | - The **[dynamic library call](https://github.com/nervosnetwork/rfcs/blob/master/rfcs/0009-vm-syscalls/0009-vm-syscalls.md#load-cell-data-as-code)** presents a security concern. The sub-script and parent script share the same memory space, leading to an uncontrolled security risk when calling an unknown sub-script. 21 | - Although the **[Exec](https://github.com/nervosnetwork/rfcs/blob/master/rfcs/0034-vm-syscalls-2/0034-vm-syscalls-2.md#exec)** system call doesn't pose any security issues, it is exceptionally challenging to utilize effectively. 22 | - Running more intricate scripts (such as zero-knowledge proofs) on CKB-VM necessitates higher performance requirements for CKB-VM. 23 | 24 | ## **Specification** 25 | 26 | To address the aforementioned issues, we have implemented the following optimizations for CKB-VM. These optimizations have been thoroughly tested and standardized through the use of RFC. 27 | 28 | In comparison to version 1, version 2 of CKB-VM incorporates the following enhancements: 29 | 30 | 1. One notable addition is the inclusion of a new system call called "Spawn," which can be further explored in the RFC titled "VM Syscalls 3." 
In essence, Spawn serves as an alternative to dynamic library calls and Exec. With Spawn, a script can create a child script with an independent memory area, and data can be passed between the parent and child scripts without restriction. 31 | 2. [Macro-Operation Fusion](https://en.wikichip.org/wiki/macro-operation_fusion). There are 5 MOPs added in VM version 2, there are: 32 | 33 | | Opcode | Origin | Cycles | Description | 34 | | ------ | ---------------- | --------- | ------------------------------------------------------------------ | 35 | | ADCS | add + sltu | 1 + 0 | Overflowing addition | 36 | | SBBS | sub + sltu | 1 + 0 | Borrowing subtraction | 37 | | ADD3A | add + sltu + add | 1 + 0 + 0 | Overflowing addition and add the overflow flag to the third number | 38 | | ADD3B | add + sltu + add | 1 + 0 + 0 | Similar to ADD3A but the registers order is different | 39 | | ADD3C | add + sltu + add | 1 + 0 + 0 | Similar to ADD3A but the registers order is different | 40 | 41 | Detailed matching patterns for the above MOPs(Please note that the registers here are only used for placeholders, and it does not mean that the MOP is only established when r0, r1, r2, r3): 42 | 43 | **ADCS rd, rs1, rs2, rs3** 44 | 45 | ``` 46 | add r0, r1, r2 47 | sltu r3, r0, r1 48 | // or 49 | add r0, r2, r1 50 | sltu r3, r0, r1 51 | 52 | Activated when: 53 | r0 != r1 54 | r0 != x0 55 | ``` 56 | 57 | **SBBS rd, rs1, rs2, rs3** 58 | 59 | ``` 60 | sub r0, r1, r2 61 | sltu r3, r1, r2 62 | 63 | Activated when: 64 | r0 != r1 65 | r0 != r2 66 | ``` 67 | 68 | **ADD3A rd, rs1, rs2, rs3, rs4** 69 | 70 | ``` 71 | add r0, r1, r0 72 | sltu r2, r0, r1 73 | add r3, r2, r4 74 | 75 | Activated when: 76 | r0 != r1 77 | r0 != r4 78 | r2 != r4 79 | r0 != x0 80 | r2 != x0 81 | ``` 82 | 83 | **ADD3B rd, rs1, rs2, rs3, rs4** 84 | 85 | ``` 86 | add r0, r1, r2 87 | sltu r3, r0, r1 88 | add r3, r3, r4 89 | 90 | Activated when: 91 | r0 != r1 92 | r0 != r4 93 | r3 != r4 94 | r0 != x0 95 | r3 != x0 96 | ``` 97 | 98 | **ADD3C rd, rs1, rs2, rs3, rs4** 99 | 100 | ``` 101 | add r0, r1, r2 102 | sltu r3, r0, r1 103 | add r3, r3, r4 104 | 105 | Activated when: 106 | r0 != r1 107 | r0 != r4 108 | r3 != r4 109 | r0 != x0 110 | r3 != x0 111 | ``` 112 | -------------------------------------------------------------------------------- /rfcs/0051-ckb2023/0051-ckb2023.md: -------------------------------------------------------------------------------- 1 | --- 2 | Number: "0051" 3 | Category: Standards Track 4 | Status: Draft 5 | Author: Dingwei Zhang 6 | Created: 2023-04-17 7 | --- 8 | 9 | # CKB Consensus Change (Edition CKB2023) 10 | 11 | The current edition of CKB consensus rules is CKB2021. CKB2023 refers to the new edition of CKB consensus rules after its second hardfork, The purpose of a hard fork is to upgrade and update the rules encoded in the network. The changes are not backward compatible. This document outlines the changes in this upgrade. 12 | 13 | ## What's in CKB2023 14 | CKB2023 will bring significant changes to the consensus rules, these changes include the removal of the reservation rule on version field in the block header, the introduction of a new version of the virtual machine (VM) with new syscalls and standard extensions, and the optimization of performance with new mops. This RFC provides a detailed overview of these changes. 15 | 16 | 17 | ### CKB VM v2 18 | 19 | Since CKB2023, there will be multiple VM versions available. [RFC32] introduces a CKB VM version mechanism. 
It piggybacks on the `hash_type` field in the Script structure. 20 | 21 | | `hash_type` | JSON representation | matches by | VM version | 22 | | ----------- | ---------- | ---------------- | ---------- | 23 | | 0 | "data" | data hash | 0 | 24 | | 1 | "type" | type script hash | 2 | 25 | | 2 | "data1" | data hash | 1 | 26 | | 4 | "data2" | data hash | 2 | 27 | 28 | 29 | [RFC0049] introduces what's new in CKB VM v2 and [RFC0050] adds new syscalls for VM v2. 30 | 31 | CKB VM v2 bring the following features: 32 | 33 | * New syscalls Spawn, Get Memory Limit, Set Content will be added. The syscall Spawn is the core part of this update. The Spawn and the latter two syscalls: Get Memory Limit and Set Content together, implement a way to call another CKB Script in a CKB Script. Unlike the Exec syscall, Spawn saves the execution context of the current script, like posix_spawn, the parent script blocks until the child script ends. 34 | * [“A” Standard Extension](https://five-embeddev.com/riscv-isa-manual/latest/a.html), strictly speaking “A” Standard Extension in ckb-vm does not bring functional changes, but many existing code will be compiled with Atomic Instructions and need to be patched, while ckb-vm can implement A instructions to eliminate such work. For example, in CKB VM v2, if you write a script with rust, you can now use [log](https://crates.io/crates/log) crate directly. 35 | * Introduce more [macro-op fusion](https://en.wikichip.org/wiki/macro-operation_fusion) to reduce cycles consumption of scripts. 36 | 37 | 38 | ### Remove Block Header Version Reservation Rule 39 | 40 | In CKB2021, the version field of the block header is reserved and only allowed to be 0. In the 2023 edition this reservation will be removed to allow for the use of [RFC0043] 41 | 42 | ## CKB2023 Timeline 43 | 44 | The mainnet upgrade is divided into three phases. 45 | 46 | * **Stage 1 - Code Preview**: An RC version of 0.200.0 is ready for preview on June 30 2023 via nervosnetwork/ckb [releases](https://github.com/nervosnetwork/ckb/releases). It will introduce the incompatible changes to help developers to adapt their tools and apps to CKB2023. But this version does not activate the consensus incompatible changes in CKB2023. Developers can test the new rules by running a dev chain locally. 47 | 48 | * **Stage 2 - Testnet Activation**: 49 | 50 | * **Stage 3 - Mainnet Activation**: 51 | 52 | ## Upgrade Strategies 53 | 54 | First, the SDK, Tool, and dApps authors must adapt to any 0.200.0 rc version. 55 | 56 | There are two strategies for ecosystem developers to upgrade to the CKB2023 consensus. Choose the former one if the developers can pause the app during the fork activation, otherwise, use the latter one. 57 | 58 | - Release two different versions or use the feature switcher. Manually deploy the newer version or enable the feature CKB2023 after the fork activation. 59 | - Use feature switcher and enable the feature CKB2023 automatically when the chain grows into the activation epoch. The activation epoch is different in the testnet and the mainnet, which is available via the updated `get_consensus` RPC. 60 | 61 | ## Appendix 62 | 63 | ### CKB2023 RFCs List 64 | 65 | * [RFC0048]: Remove Block Header Version Reservation Rule. 66 | * [RFC0050]: CKB VM Syscalls 3. 67 | * [RFC0049]: CKB VM version2. 68 | * RFC0051: This RFC, CKB2023 overview. 
69 | 70 | [RFC0043]: ../0043-ckb-softfork-activation/0043-ckb-softfork-activation.md 71 | [RFC0048]: ../0048-remove-block-header-version-reservation-rule/0048-remove-block-header-version-reservation-rule.md 72 | [RFC0049]: ../0049-ckb-vm-version-2/0049-ckb-vm-version-2.md 73 | [RFC0050]: ../0050-vm-syscalls-3/0050-vm-syscalls-3.md 74 | --------------------------------------------------------------------------------