├── .github
│   └── workflows
│       ├── deploy.yaml
│       └── test.yaml
├── .gitignore
├── LICENSE
├── README.md
├── SPECIFICATION_TEMPLATE.md
├── assets
│   ├── verkle_branch_node.png
│   └── verkle_leaf_node.png
├── beacon-chain
│   ├── README.md
│   ├── beacon-network-test-vectors.md
│   └── beacon-network.md
├── bootnodes.md
├── canonical-transaction-index
│   └── canonical-transaction-index-network.md
├── history
│   ├── epoch-accumulator-test-vectors.md
│   ├── history-network-test-vectors.md
│   └── history-network.md
├── implementation-details-overlay.md
├── jsonrpc
│   ├── README.md
│   ├── package-lock.json
│   ├── package.json
│   ├── scripts
│   │   ├── build.js
│   │   ├── debug.sh
│   │   └── validate.js
│   └── src
│       ├── content
│       │   ├── params.json
│       │   └── results.json
│       ├── errors
│       │   └── errors.json
│       ├── methods
│       │   ├── beacon.json
│       │   ├── discv5.json
│       │   ├── history.json
│       │   └── state.json
│       └── schemas
│           ├── base_types.json
│           ├── portal.json
│           └── trace.json
├── ping-extensions
│   ├── README.md
│   ├── extensions.md
│   └── extensions
│       ├── template.md
│       ├── type-0.md
│       ├── type-1.md
│       ├── type-2.md
│       └── type-65535.md
├── portal-wire-protocol.md
├── portal-wire-test-vectors.md
├── protocol-version-changelog.md
├── state
│   ├── state-network-test-vectors.md
│   └── state-network.md
├── transaction-gossip
│   └── transaction-gossip.md
├── utp
│   ├── discv5-utp.md
│   └── utp-wire-test-vectors.md
└── verkle
    └── verkle-state-network.md

/.github/workflows/deploy.yaml:
--------------------------------------------------------------------------------
 1 | name: Deploy JSON-RPC specification
 2 | 
 3 | on:
 4 |   push:
 5 |     branches:
 6 |       - master
 7 | 
 8 | jobs:
 9 |   build-and-deploy:
10 | 
11 |     runs-on: ubuntu-latest
12 |     defaults:
13 |       run:
14 |         working-directory: ./jsonrpc
15 | 
16 |     steps:
17 |     - uses: actions/checkout@v4
18 |     - name: Use Node.js 16
19 |       uses: actions/setup-node@v2
20 |       with:
21 |         node-version: '16'
22 |     - run: npm ci
23 |     - run: npm run build
24 |     - name: setup git config
25 |       run: |
26 |         git config user.name "GitHub Actions Bot"
27 |         git config user.email "<>"
28 |     -
name: Deploy to branch 29 | run: | 30 | git checkout -b assembled-spec 31 | git add -f openrpc.json 32 | git commit -m "assemble openrpc.json" 33 | git push -fu origin assembled-spec 34 | -------------------------------------------------------------------------------- /.github/workflows/test.yaml: -------------------------------------------------------------------------------- 1 | name: Build and lint JSON-RPC specs 2 | 3 | on: [push, pull_request, workflow_dispatch] 4 | 5 | jobs: 6 | build: 7 | 8 | runs-on: ubuntu-latest 9 | defaults: 10 | run: 11 | working-directory: ./jsonrpc 12 | 13 | strategy: 14 | matrix: 15 | node-version: [16.x] 16 | 17 | steps: 18 | - uses: actions/checkout@v4 19 | - name: Use Node.js ${{ matrix.node-version }} 20 | uses: actions/setup-node@v2 21 | with: 22 | node-version: ${{ matrix.node-version }} 23 | - run: npm ci 24 | - run: npm run build 25 | - run: npm run lint 26 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | jsonrpc/openrpc.json 2 | jsonrpc/node_modules 3 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Creative Commons Legal Code 2 | 3 | CC0 1.0 Universal 4 | 5 | CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE 6 | LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN 7 | ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS 8 | INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES 9 | REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS 10 | PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM 11 | THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED 12 | HEREUNDER. 
13 | 14 | Statement of Purpose 15 | 16 | The laws of most jurisdictions throughout the world automatically confer 17 | exclusive Copyright and Related Rights (defined below) upon the creator 18 | and subsequent owner(s) (each and all, an "owner") of an original work of 19 | authorship and/or a database (each, a "Work"). 20 | 21 | Certain owners wish to permanently relinquish those rights to a Work for 22 | the purpose of contributing to a commons of creative, cultural and 23 | scientific works ("Commons") that the public can reliably and without fear 24 | of later claims of infringement build upon, modify, incorporate in other 25 | works, reuse and redistribute as freely as possible in any form whatsoever 26 | and for any purposes, including without limitation commercial purposes. 27 | These owners may contribute to the Commons to promote the ideal of a free 28 | culture and the further production of creative, cultural and scientific 29 | works, or to gain reputation or greater distribution for their Work in 30 | part through the use and efforts of others. 31 | 32 | For these and/or other purposes and motivations, and without any 33 | expectation of additional consideration or compensation, the person 34 | associating CC0 with a Work (the "Affirmer"), to the extent that he or she 35 | is an owner of Copyright and Related Rights in the Work, voluntarily 36 | elects to apply CC0 to the Work and publicly distribute the Work under its 37 | terms, with knowledge of his or her Copyright and Related Rights in the 38 | Work and the meaning and intended legal effect of CC0 on those rights. 39 | 40 | 1. Copyright and Related Rights. A Work made available under CC0 may be 41 | protected by copyright and related or neighboring rights ("Copyright and 42 | Related Rights"). Copyright and Related Rights include, but are not 43 | limited to, the following: 44 | 45 | i. the right to reproduce, adapt, distribute, perform, display, 46 | communicate, and translate a Work; 47 | ii. 
moral rights retained by the original author(s) and/or performer(s); 48 | iii. publicity and privacy rights pertaining to a person's image or 49 | likeness depicted in a Work; 50 | iv. rights protecting against unfair competition in regards to a Work, 51 | subject to the limitations in paragraph 4(a), below; 52 | v. rights protecting the extraction, dissemination, use and reuse of data 53 | in a Work; 54 | vi. database rights (such as those arising under Directive 96/9/EC of the 55 | European Parliament and of the Council of 11 March 1996 on the legal 56 | protection of databases, and under any national implementation 57 | thereof, including any amended or successor version of such 58 | directive); and 59 | vii. other similar, equivalent or corresponding rights throughout the 60 | world based on applicable law or treaty, and any national 61 | implementations thereof. 62 | 63 | 2. Waiver. To the greatest extent permitted by, but not in contravention 64 | of, applicable law, Affirmer hereby overtly, fully, permanently, 65 | irrevocably and unconditionally waives, abandons, and surrenders all of 66 | Affirmer's Copyright and Related Rights and associated claims and causes 67 | of action, whether now known or unknown (including existing as well as 68 | future claims and causes of action), in the Work (i) in all territories 69 | worldwide, (ii) for the maximum duration provided by applicable law or 70 | treaty (including future time extensions), (iii) in any current or future 71 | medium and for any number of copies, and (iv) for any purpose whatsoever, 72 | including without limitation commercial, advertising or promotional 73 | purposes (the "Waiver"). 
Affirmer makes the Waiver for the benefit of each 74 | member of the public at large and to the detriment of Affirmer's heirs and 75 | successors, fully intending that such Waiver shall not be subject to 76 | revocation, rescission, cancellation, termination, or any other legal or 77 | equitable action to disrupt the quiet enjoyment of the Work by the public 78 | as contemplated by Affirmer's express Statement of Purpose. 79 | 80 | 3. Public License Fallback. Should any part of the Waiver for any reason 81 | be judged legally invalid or ineffective under applicable law, then the 82 | Waiver shall be preserved to the maximum extent permitted taking into 83 | account Affirmer's express Statement of Purpose. In addition, to the 84 | extent the Waiver is so judged Affirmer hereby grants to each affected 85 | person a royalty-free, non transferable, non sublicensable, non exclusive, 86 | irrevocable and unconditional license to exercise Affirmer's Copyright and 87 | Related Rights in the Work (i) in all territories worldwide, (ii) for the 88 | maximum duration provided by applicable law or treaty (including future 89 | time extensions), (iii) in any current or future medium and for any number 90 | of copies, and (iv) for any purpose whatsoever, including without 91 | limitation commercial, advertising or promotional purposes (the 92 | "License"). The License shall be deemed effective as of the date CC0 was 93 | applied by Affirmer to the Work. 
Should any part of the License for any
 94 | reason be judged legally invalid or ineffective under applicable law, such
 95 | partial invalidity or ineffectiveness shall not invalidate the remainder
 96 | of the License, and in such case Affirmer hereby affirms that he or she
 97 | will not (i) exercise any of his or her remaining Copyright and Related
 98 | Rights in the Work or (ii) assert any associated claims and causes of
 99 | action with respect to the Work, in either case contrary to Affirmer's
100 | express Statement of Purpose.
101 | 
102 | 4. Limitations and Disclaimers.
103 | 
104 |  a. No trademark or patent rights held by Affirmer are waived, abandoned,
105 |     surrendered, licensed or otherwise affected by this document.
106 |  b. Affirmer offers the Work as-is and makes no representations or
107 |     warranties of any kind concerning the Work, express, implied,
108 |     statutory or otherwise, including without limitation warranties of
109 |     title, merchantability, fitness for a particular purpose, non
110 |     infringement, or the absence of latent or other defects, accuracy, or
111 |     the presence or absence of errors, whether or not discoverable, all to
112 |     the greatest extent permissible under applicable law.
113 |  c. Affirmer disclaims responsibility for clearing rights of other persons
114 |     that may apply to the Work or any use thereof, including without
115 |     limitation any person's Copyright and Related Rights in the Work.
116 |     Further, Affirmer disclaims responsibility for obtaining any necessary
117 |     consents, permissions or other rights required for any use of the
118 |     Work.
119 |  d. Affirmer understands and acknowledges that Creative Commons is not a
120 |     party to this document and has no duty or obligation with respect to
121 |     this CC0 or use of the Work.
122 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
 1 | # The Portal Network
 2 | 
 3 | > This specification is a work-in-progress and should be considered preliminary.
 4 | 
 5 | ## Introduction
 6 | 
 7 | The Portal Network is an in-progress effort to enable lightweight protocol access for resource-constrained devices. The term *"portal"* is used to indicate that these networks provide a *view* into the protocol but are not critical to the operation of the core Ethereum protocol.
 8 | 
 9 | The Portal Network comprises multiple peer-to-peer networks which together provide the data and functionality necessary to expose the standard [JSON-RPC API](https://eth.wiki/json-rpc/API). These networks are specially designed to ensure that clients participating in them can do so with minimal expenditure of networking bandwidth, CPU, RAM, and HDD resources.
10 | 
11 | The term "Portal Client" describes a piece of software which participates in these networks. Portal Clients typically expose the standard JSON-RPC API.
12 | 
13 | 
14 | ## Motivation
15 | 
16 | The Portal Network is focused on delivering reliable, lightweight, and decentralized access to the Ethereum protocol.
17 | 
18 | ### Prior Work on the "Light Ethereum Subprotocol" (LES)
19 | 
20 | The term "light client" has historically referred to a client of the existing [DevP2P](https://github.com/ethereum/devp2p/blob/master/rlpx.md) based [LES](https://github.com/ethereum/devp2p/blob/master/caps/les.md) network. This network is designed using a client/server architecture. The LES network has a total capacity dictated by the number of "servers" on the network. In order for this network to scale, the "server" capacity has to increase. This also means that at any point in time the network has some total capacity which, if exceeded, will cause service degradation across the network.
Because of this, the LES network is unreliable when operating near capacity.
21 | 
22 | 
23 | ## Architecture
24 | 
25 | The Portal Network is built upon the [Discovery v5 protocol](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md) and operates over the UDP transport.
26 | 
27 | The Discovery v5 protocol allows building custom sub-protocols via the use of the built-in TALKREQ and TALKRESP messages. All sub-protocols use the [Portal Wire Protocol](./portal-wire-protocol.md), which uses the TALKREQ and TALKRESP messages as transport. This wire protocol allows for quick development of the network layer of any new sub-protocol.
28 | 
29 | The Portal Network is divided into the following sub-protocols.
30 | 
31 | - Execution State Network
32 | - Execution History Network
33 | - Beacon Chain Network
34 | - Execution Canonical Transaction Index Network (preliminary)
35 | - Execution Verkle State Network (preliminary)
36 | - Execution Transaction Gossip Network (preliminary)
37 | 
38 | Each of these sub-protocols is designed to deliver a specific unit of functionality. Most Portal clients will participate in all of these sub-protocols in order to deliver the full JSON-RPC API. Each sub-protocol, however, is designed to be independent of the others, allowing clients the option of only participating in a subset of them if they wish.
39 | 
40 | All of the sub-protocols in the Portal Network establish their own overlay DHT that is managed independently of the base Discovery v5 DHT.
41 | 
42 | 
43 | ## Terminology
44 | 
45 | The term "sub-protocol" is used to denote an individual protocol within the Portal Network.
46 | 
47 | The term "network" is used contextually to refer to **either** the overall set of multiple protocols that comprise the Portal Network or an individual sub-protocol within the Portal Network.
48 | 
49 | 
50 | 
51 | ## Design Principles
52 | 
53 | Each of the Portal Network sub-protocols follows these design principles.
54 | 
55 | 1.
Isolation
56 |    - Participation in one network should not require participation in another network.
57 | 2. Distribution of Responsibility
58 |    - Normal operation of the network should result in a roughly even spread of responsibility across the individual nodes in the network.
59 | 3. Tunable Resource Requirements
60 |    - Individual nodes should be able to control the amount of machine resources (HDD/CPU/RAM) they provide to the network.
61 | 
62 | These design principles are aimed at ensuring that participation in the Portal Network is feasible even on resource-constrained devices.
63 | 
64 | ## The JSON-RPC API
65 | 
66 | The following JSON-RPC API endpoints are directly supported by the Portal Network and exposed by Portal clients.
67 | 
68 | - `eth_getBlockByHash`
69 | - `eth_getBlockByNumber`
70 | - `eth_getBlockTransactionCountByHash`
71 | - `eth_getBlockTransactionCountByNumber`
72 | - `eth_getUncleCountByBlockHash`
73 | - `eth_getUncleCountByBlockNumber`
74 | - `eth_blockNumber`
75 | - `eth_call`
76 | - `eth_estimateGas`
77 | - `eth_getBalance`
78 | - `eth_getStorageAt`
79 | - `eth_getTransactionCount`
80 | - `eth_getCode`
81 | - `eth_sendRawTransaction`
82 | - `eth_getTransactionByHash`
83 | - `eth_getTransactionByBlockHashAndIndex`
84 | - `eth_getTransactionByBlockNumberAndIndex`
85 | - `eth_getTransactionReceipt`
86 | 
87 | In addition to these endpoints, the following endpoints can be exposed by Portal clients using the data available through the Portal Network.
88 | 
89 | - `eth_syncing`
90 | 
91 | The following endpoints can be exposed by Portal clients as they require no access to execution layer data.
92 | 
93 | - `eth_protocolVersion`
94 | - `eth_chainId`
95 | - `eth_coinbase`
96 | - `eth_accounts`
97 | - `eth_gasPrice`
98 | - `eth_feeHistory`
99 | - `eth_newFilter`
100 |   - TODO: explain complexity.
101 | - `eth_newBlockFilter` 102 | - `eth_newPendingTransactionFilter` 103 | - `eth_uninstallFilter` 104 | - `eth_getFilterChanges` 105 | - `eth_getFilterLogs` 106 | - `eth_getLogs` 107 | - TODO: explain complexity 108 | - `eth_mining` 109 | - `eth_hashrate` 110 | - `eth_getWork` 111 | - `eth_submitWork` 112 | - `eth_submitHashrate` 113 | - `eth_sign` 114 | - `eth_signTransaction` 115 | 116 | [JSON-RPC Specs](https://playground.open-rpc.org/?schemaUrl=https://raw.githubusercontent.com/ethereum/portal-network-specs/assembled-spec/jsonrpc/openrpc.json&uiSchema%5BappBar%5D%5Bui:splitView%5D=false&uiSchema%5BappBar%5D%5Bui:input%5D=false&uiSchema%5BappBar%5D%5Bui:examplesDropdown%5D=false) 117 | 118 | ## Bridge Nodes 119 | 120 | The term "bridge node" refers to Portal clients which, in addition to participating in the sub-protocols, also inject data into the Portal Network. Any client with valid data may participate as a bridge node. From the perspective of the protocols underlying the Portal Network there is nothing special about bridge nodes. 121 | 122 | The planned architecture for bridge nodes is to pull data from the standard JSON-RPC API of a Full Node and "push" this data into their respective networks within the Portal Network. 123 | 124 | ## Network Functionality 125 | 126 | ### State Network: Accounts and Contract Storage 127 | 128 | The State Network facilitates on-demand retrieval of the Ethereum "state" data. This includes: 129 | 130 | - Reading account balances or nonce values 131 | - Retrieving contract code 132 | - Reading contract storage values 133 | 134 | The responsibility for storing the underlying "state" data should be evenly distributed across the nodes in the network. Nodes must be able to choose how much state they want to store. The data is distributed in a manner that allows nodes to determine the appropriate nodes to query for any individual piece of state data. 
When retrieving state data, a node should be able to validate the response using a recent header from the header chain. 135 | 136 | The network will be dependent on receiving new and updated state for new blocks. Full "bridge" nodes acting as benevolent state providers are responsible for bringing in this data from the main network. The network should be able to remain healthy even with a small number of bridge nodes. As new data enters the network, nodes are able to validate the data using a recent header from the header chain. 137 | 138 | Querying and reading data from the network should be fast enough for human-driven wallet operations, like estimating the gas for a transaction or reading state from a contract. 139 | 140 | 141 | ### History Network: Headers, Blocks, and Receipts 142 | 143 | The History Network facilitates on-demand retrieval of the history of the Ethereum chain. This includes: 144 | 145 | - Headers 146 | - Block bodies 147 | - Receipts 148 | 149 | The responsibility for storing this data should be evenly distributed across the nodes in the network. Nodes must be able to choose how much history data they want to store. The data is distributed in a manner that allows nodes to determine the appropriate nodes to query for any individual piece of history data. 150 | 151 | Participants in this network are assumed to have access to the canonical header chain. 152 | 153 | All data retrieved from the history network is addressed by block hash. Headers retrieved from this network can be validated to match the requested block hash. Block Bodies and Receipts retrieved from this network can be validated against the corresponding header fields. 154 | 155 | All data retrieved from the history network can be immediately verified by the requesting node. For block headers, the requesting node always knows the expected hash of the requested data and can reject responses with an incorrect hash. 
For block bodies and receipts, the requesting node is expected to have the corresponding header and can reject responses that do not validate against the corresponding header fields.
156 | 
157 | 
158 | ### Canonical Transaction Index Network: Transactions by Hash
159 | 
160 | The Canonical Transaction Index Network facilitates retrieval of individual transactions by their hash.
161 | 
162 | The responsibility for storing the records that make up this index should be evenly distributed across the nodes in the network. Nodes must be able to choose how many records from this index they wish to store. The records must be distributed across the network in a manner that allows nodes to determine the appropriate nodes to query for an individual record.
163 | 
164 | Transaction information returned from this network includes a Merkle proof against the `Header.transactions_trie` for validation purposes.
165 | 
166 | 
167 | ### Transaction Gossip Network: Sending Transactions
168 | 
169 | The Transaction Gossip Network facilitates broadcasting new transactions for inclusion in a future block.
170 | 
171 | Nodes in this network must be able to limit how much of the transaction pool they wish to process and gossip.
172 | 
173 | The goal of the transaction gossip network is to make sure nodes can broadcast transactions so that they are made available to miners for inclusion in a future block.
174 | 
175 | Transactions which are part of this network's gossip can be validated without access to the Ethereum state. This is accomplished by bundling a proof which includes the account balance and nonce for the transaction sender. This validation is required to prevent DoS attacks.
176 | 
177 | This network is a pure gossip network and does not implement any form of content lookup or retrieval.
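The balance-and-nonce proof check described above can be sketched in a few lines. This is an illustrative sketch only: `AccountProof` and `is_plausible` are hypothetical names, and a real implementation would first verify the bundled Merkle proof against a recent state root before trusting the balance and nonce values it carries.

```python
from dataclasses import dataclass

@dataclass
class AccountProof:
    """Hypothetical view of the bundled proof for a transaction sender.

    In the real network these values would be proven against a recent
    state root; here we only sketch the checks they enable.
    """
    balance: int  # sender balance in wei
    nonce: int    # sender account nonce

def is_plausible(tx_nonce: int, tx_value: int, tx_gas: int,
                 tx_gas_price: int, proof: AccountProof) -> bool:
    """Cheap DoS filter: drop transactions the sender provably cannot pay for."""
    max_cost = tx_value + tx_gas * tx_gas_price
    return tx_nonce >= proof.nonce and proof.balance >= max_cost
```

A gossiping node would run this check before re-broadcasting, so invalid transactions are dropped at the first hop rather than flooding the network.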
178 | 
179 | 
180 | ## Network Specifications
181 | 
182 | - [Portal Wire Protocol](./portal-wire-protocol.md)
183 | - [uTP over DiscoveryV5](./utp/discv5-utp.md)
184 | - [State Network](./state/state-network.md)
185 |   - Prior work: https://ethresear.ch/t/scalable-gossip-for-state-network/8958/4
186 | - [History Network](./history/history-network.md)
187 |   - Prior work: https://notes.ethereum.org/oUJE4ZX2Q6eMOgEMiQPkpQ?view
188 |   - Prior Python proof-of-concept: https://github.com/ethereum/ddht/tree/341e84e9163338556cd48dd2fcfda9eedec3eb45
189 |     - This POC should NOT be considered representative of the end goal. It incorporates mechanisms that aren't likely to be a part of the actual implementation, specifically the "advertisement" system, which proved to be a big bottleneck, as well as the SSZ Merkle root system, which was a workaround for large data transfers that we now intend to solve with uTP.
190 | - [Beacon Chain Network](./beacon-chain/beacon-network.md)
191 | - [Canonical Transaction Index Network](./canonical-transaction-index/canonical-transaction-index-network.md)
192 |   - Spec is preliminary.
193 | - Network design borrows heavily from history network 194 | - [Transaction Gossip Network](./transaction-gossip/transaction-gossip.md) 195 | - Spec is preliminary 196 | - Prior work: https://ethresear.ch/t/scalable-transaction-gossip/8660 197 | - [Verkle State Network](./verkle/verkle-state-network.md) 198 | - Spec is preliminary 199 | - Prior work: https://ethresear.ch/t/portal-network-verkle/19339 200 | -------------------------------------------------------------------------------- /SPECIFICATION_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | 7 | 8 | # Title 9 | 10 | 11 | 12 | ## Overview 13 | 14 | 23 | 24 | 25 | #### Retrieval 26 | 27 | 34 | 35 | ## Specification 36 | 37 | 38 | 39 | ### Distance Function 40 | 41 | 42 | 43 | ### Content ID Derivation Function 44 | 45 | 46 | 47 | ### Wire Protocol 48 | 49 | #### Protocol Identifier 50 | 51 | 52 | 53 | #### Supported Message Types 54 | 55 | 56 | 57 | #### `Ping.payload` & `Pong.payload` 58 | 59 | 60 | 61 | ### Routing Table 62 | 63 | 64 | 65 | ### Node State 66 | 67 | 68 | 69 | ### Data Types 70 | 71 | 79 | 80 | ### Algorithms 81 | 82 | 83 | -------------------------------------------------------------------------------- /assets/verkle_branch_node.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ethereum/portal-network-specs/73c7e9dce2190ec0d5225f1041967479b75cbb49/assets/verkle_branch_node.png -------------------------------------------------------------------------------- /assets/verkle_leaf_node.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ethereum/portal-network-specs/73c7e9dce2190ec0d5225f1041967479b75cbb49/assets/verkle_leaf_node.png -------------------------------------------------------------------------------- /beacon-chain/README.md: 
--------------------------------------------------------------------------------
1 | ## Status
2 | > All specifications in this directory are a work-in-progress and should be considered preliminary.
3 | 
4 | ## Beacon Chain Network
5 | The network described in this directory is intended to provide Beacon Chain data that also supports beacon chain light clients.
6 | 
7 | - Minimal light client [specs](https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/light-client/sync-protocol.md)
8 | 
--------------------------------------------------------------------------------
/beacon-chain/beacon-network.md:
--------------------------------------------------------------------------------
 1 | # Beacon Chain Network
 2 | **Notice**: This document is a work-in-progress for researchers and implementers.
 3 | 
 4 | This document is the specification for the Portal Network overlay network that supports the on-demand availability of Beacon Chain data.
 5 | 
 6 | ## Overview
 7 | 
 8 | A beacon chain light client can keep track of the chain of beacon block headers by performing light client state updates
 9 | following the light client [sync protocol](https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/light-client/sync-protocol.md).
10 | The [LightClientBootstrap](https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/light-client/sync-protocol.md#lightclientbootstrap) structure allows setting up a
11 | [LightClientStore](https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/light-client/sync-protocol.md#lightclientstore) with the initial sync committee and block header from a user-configured trusted block root.
12 | 
13 | Once the client establishes a recent header, it can sync to other headers by processing objects of type [LightClientUpdate](https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/light-client/sync-protocol.md#lightclientupdate),
14 | [LightClientFinalityUpdate](https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/light-client/sync-protocol.md#lightclientfinalityupdate)
15 | and [LightClientOptimisticUpdate](https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/light-client/sync-protocol.md#lightclientoptimisticupdate).
16 | These data types allow a client to stay up-to-date with the beacon chain.
17 | 
18 | To verify the canonicality of an execution block header older than ~27 hours, we need the ongoing `BeaconState` accumulator (`state.historical_summaries`), which stores Merkle roots of recent history logs.
19 | 
20 | The Beacon Chain network is a [Kademlia](https://pdos.csail.mit.edu/~petar/papers/maymounkov-kademlia-lncs.pdf) DHT that forms an overlay network on top of
21 | the [Discovery v5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5-wire.md) network. The term *overlay network* means that the beacon chain network operates
22 | with its routing table independent of the base Discovery v5 routing table and uses the extensible `TALKREQ` and `TALKRESP` messages from the base Discovery v5 protocol for communication.
23 | 
24 | The `TALKREQ` and `TALKRESP` protocol messages are application-level messages whose contents are specific to the Beacon Chain Light Client network. We specify these messages below.
25 | 
26 | The Beacon Chain network uses a modified version of the routing table structure from the Discovery v5 network and the lookup algorithm from section 2.3 of the Kademlia paper.
27 | 28 | ### Data 29 | 30 | #### Types 31 | 32 | * LightClientBootstrap 33 | * LightClientUpdate 34 | * LightClientFinalityUpdate 35 | * LightClientOptimisticUpdate 36 | * HistoricalSummaries 37 | 38 | Light client data types are specified in light client [sync protocol](https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/light-client/sync-protocol.md#containers). 39 | 40 | #### Retrieval 41 | 42 | The network supports the following mechanisms for data retrieval: 43 | 44 | * `LightClientBootstrap` structure by a post-Altair beacon block root. 45 | * `LightClientUpdatesByRange` - requests the `LightClientUpdate` instances in the sync committee period range [start_period, start_period + count), leading up to the current head sync committee period as selected by fork choice. 46 | * The latest `LightClientFinalityUpdate` known by a peer. 47 | * The latest `LightClientOptimisticUpdate` known by a peer. 48 | * The latest `HistoricalSummaries` known by a peer. 49 | 50 | ## Specification 51 | 52 | ### Distance Function 53 | 54 | The beacon chain network uses the stock XOR distance metric defined in the portal wire protocol specification. 55 | 56 | ### Content ID Derivation Function 57 | 58 | The beacon chain network uses the SHA256 Content ID derivation function from the portal wire protocol specification. 59 | 60 | ### Wire Protocol 61 | 62 | The [Portal wire protocol](../portal-wire-protocol.md) is used as the wire protocol for the Beacon Chain Light Client network. 63 | 64 | #### Protocol Identifier 65 | 66 | As specified in the [Protocol identifiers](../portal-wire-protocol.md#protocol-identifiers) section of the Portal wire protocol, the `protocol` field in the `TALKREQ` message **MUST** contain the value of `0x501A`. 
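The two functions referenced in this Specification section — the XOR distance metric and the SHA256 content ID derivation — can be sketched as follows. This is a minimal illustration; the normative definitions live in the portal wire protocol specification.

```python
import hashlib

def distance(node_id: int, content_id: int) -> int:
    """XOR distance metric over 256-bit identifiers, as used by the
    portal wire protocol."""
    return node_id ^ content_id

def content_id(content_key: bytes) -> int:
    """SHA256 content ID derivation: hash the serialized content key
    and interpret the digest as a 256-bit big-endian integer."""
    return int.from_bytes(hashlib.sha256(content_key).digest(), "big")
```

The XOR metric makes every pair of identifiers comparable with a single bitwise operation, which is what allows Kademlia-style lookups to converge on the nodes closest to a given content ID.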
67 | 
68 | #### Supported Message Types
69 | 
70 | The beacon chain network supports the following protocol messages:
71 | 
72 | - `Ping` - `Pong`
73 | - `FindNodes` - `Nodes`
74 | - `FindContent` - `FoundContent`
75 | - `Offer` - `Accept`
76 | 
77 | #### `Ping.payload` & `Pong.payload`
78 | 
79 | In the beacon chain network, the `payload` field of the `Ping` and `Pong` messages carries a ping extension payload. The first packet exchanged with another client MUST use the [Type 0: Client Info, Radius, and Capabilities Payload](../ping-extensions/extensions/type-0.md). Subsequent packets SHOULD then be upgraded to the latest payload type supported by both clients.
80 | 
81 | List of currently supported payloads, from latest to oldest:
82 | - [Type 1 Basic Radius Payload](../ping-extensions/extensions/type-1.md)
83 | 
84 | ### Routing Table
85 | 
86 | The Beacon Chain Network uses the standard routing table structure from the Portal Wire Protocol.
87 | 
88 | ### Node State
89 | 
90 | #### Data Storage and Retrieval
91 | 
92 | Nodes running the beacon chain network MUST store and provide all beacon light
93 | client content for the range as is specified by the consensus light client
94 | specifications: https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/light-client/full-node.md#deriving-light-client-data
95 | 
96 | This means that data radius and the concept of closeness to data are not
97 | applicable for this content.
98 | 
99 | When a node cannot fulfill a request for any of this data, it SHOULD return an
100 | empty list of ENRs. It MAY return a list of ENRs of nodes that have provided
101 | this data in the past.
102 | 
103 | When a node gossips any of this data, it MUST use [random gossip](../portal-wire-protocol.md#random-gossip) instead of neighborhood gossip.
104 | 
105 | #### Data Radius
106 | 
107 | The Beacon Chain Network includes one additional piece of node state that should be tracked. Nodes must track the `data_radius`
108 | from the Ping and Pong messages for other nodes in the network.
This value is a 256-bit integer and represents the data that 109 | a node is "interested" in. 110 | 111 | We define the following function to determine whether a node in the network should be interested in a piece of content: 112 | 113 | ```python 114 | interested(node, content) = distance(node.id, content.id) <= node.radius 115 | ``` 116 | 117 | A node is expected to maintain `radius` information for each node in its local node table. A node's `radius` value may fluctuate as the contents of its local key-value store change. 118 | 119 | A node should track its own radius value and provide this value in all Ping or Pong messages it sends to other nodes. 120 | 121 | ### Data Types 122 | 123 | The beacon chain DHT stores the following data items: 124 | 125 | * LightClientBootstrap 126 | * LightClientUpdate 127 | 128 | The following data objects are ephemeral and we store only the latest values: 129 | 130 | * LightClientFinalityUpdate 131 | * LightClientOptimisticUpdate 132 | * HistoricalSummaries 133 | 134 | #### Constants 135 | 136 | The following constants from the beacon chain specs are used in the various data type definitions: 137 | 138 | ```python 139 | # Maximum number of `LightClientUpdate` instances in a single request 140 | # Defined in https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/light-client/p2p-interface.md#configuration 141 | MAX_REQUEST_LIGHT_CLIENT_UPDATES = 2**7 # = 128 142 | 143 | # Maximum number of `HistoricalSummary` records 144 | # Defined in https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/beacon-chain.md#state-list-lengths 145 | HISTORICAL_ROOTS_LIMIT = 2**24 # = 16,777,216 146 | ``` 147 | 148 | #### ForkDigest 149 | The 4-byte fork digest for the current beacon chain version and ``genesis_validators_root``.
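For illustration, the fork digest is derived as in the consensus specs: the first 4 bytes of `hash_tree_root(ForkData(current_version, genesis_validators_root))`. The sketch below inlines the SSZ merkleization of that two-field container, which reduces to a single SHA256 because each field occupies exactly one 32-byte chunk:

```python
import hashlib

def compute_fork_digest(current_version: bytes, genesis_validators_root: bytes) -> bytes:
    # SSZ hash_tree_root of ForkData: the Bytes4 fork version is right-padded
    # to a 32-byte chunk and hashed together with the 32-byte validators root.
    chunk_0 = current_version.ljust(32, b"\x00")
    fork_data_root = hashlib.sha256(chunk_0 + genesis_validators_root).digest()
    return fork_data_root[:4]
```

Because the digest commits to both the fork version and `genesis_validators_root`, content serialized under different forks or networks is unambiguously distinguishable.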
150 | 151 | #### LightClientBootstrap 152 | 153 | ```python 154 | light_client_bootstrap_key = Container(block_hash: Bytes32) 155 | selector = 0x10 156 | 157 | content = ForkDigest + SSZ.serialize(LightClientBootstrap) 158 | content_key = selector + SSZ.serialize(light_client_bootstrap_key) 159 | ``` 160 | 161 | #### LightClientUpdatesByRange 162 | 163 | ```python 164 | light_client_update_keys = Container(start_period: uint64, count: uint64) 165 | selector = 0x11 166 | 167 | content = List(ForkDigest + LightClientUpdate, limit=MAX_REQUEST_LIGHT_CLIENT_UPDATES) 168 | content_key = selector + SSZ.serialize(light_client_update_keys) 169 | ``` 170 | 171 | > A node should respond with as many `LightClientUpdate` objects as it has at the beginning of the 172 | requested range, without exceeding the requested `count`. The elements must be in consecutive 173 | order, starting with the `LightClientUpdate` that corresponds to the `start_period`. 174 | 175 | #### LightClientFinalityUpdate 176 | 177 | ```python 178 | light_client_finality_update_key = Container(finalized_slot: uint64) 179 | selector = 0x12 180 | 181 | content = ForkDigest + SSZ.serialize(light_client_finality_update) 182 | content_key = selector + SSZ.serialize(light_client_finality_update_key) 183 | ``` 184 | 185 | > The `LightClientFinalityUpdate` objects are ephemeral and only the latest is of use to the node. 186 | > 187 | > The content key requires the `finalized_slot` to be provided so that this object can be more 188 | efficiently gossiped. Nodes should reject a `LightClientFinalityUpdate` if it is 189 | not newer than the one they already have. 190 | > 191 | > For `FindContent` requests, nodes SHOULD request a `finalized_slot` that is one higher than the 192 | one they already have. If they were following the head of the chain, they should know if such 193 | content can exist.
When responding to `FindContent` requests, nodes SHOULD respond with the latest 194 | `LightClientFinalityUpdate` that they have. If they can't provide the requested or a newer object, 195 | they MUST NOT reply with any content. 196 | 197 | #### LightClientOptimisticUpdate 198 | 199 | ```python 200 | light_client_optimistic_update_key = Container(optimistic_slot: uint64) 201 | selector = 0x13 202 | 203 | content = ForkDigest + SSZ.serialize(light_client_optimistic_update) 204 | content_key = selector + SSZ.serialize(light_client_optimistic_update_key) 205 | ``` 206 | 207 | > The `LightClientOptimisticUpdate` objects are ephemeral and only the latest is of use to the 208 | node. 209 | > 210 | > The content key requires the `optimistic_slot` (corresponding to the `signature_slot` in the 211 | update) to be provided so that this object can be more efficiently gossiped. Nodes should 212 | reject a `LightClientOptimisticUpdate` if it is not newer than the one they already have. 213 | > 214 | > For `FindContent` requests, nodes SHOULD request an `optimistic_slot` that is one higher than the 215 | one they already have. When responding to `FindContent` requests, nodes SHOULD respond with the latest 216 | `LightClientOptimisticUpdate` that they have. If they can't provide the requested or a newer object, 217 | they MUST NOT reply with any content. 218 | 219 | #### HistoricalSummaries 220 | 221 | The latest `HistoricalSummariesWithProof` object is stored in the network every epoch, even though the 222 | `historical_summaries` only updates every period (8192 slots). This is done to have an up-to-date 223 | proof every epoch, which makes it easier to verify the `historical_summaries` when starting the 224 | beacon light client sync.
225 | 226 | ```python 227 | 228 | # Definition of generalized index (gindex): 229 | # https://github.com/ethereum/consensus-specs/blob/d8cfdf2626c1219a40048f8fa3dd103ae8c0b040/ssz/merkle-proofs.md#generalized-merkle-tree-index 230 | HISTORICAL_SUMMARIES_GINDEX_ELECTRA = get_generalized_index(BeaconState, 'historical_summaries') # = 91 231 | 232 | HistoricalSummariesProof = Vector[Bytes32, floorlog2(HISTORICAL_SUMMARIES_GINDEX_ELECTRA)] 233 | 234 | # For Electra and onwards: 235 | historical_summaries_with_proof = Container( 236 | epoch: uint64, 237 | # HistoricalSummary object is defined in consensus specs: 238 | # https://github.com/ethereum/consensus-specs/blob/dev/specs/capella/beacon-chain.md#historicalsummary. 239 | historical_summaries: List(HistoricalSummary, limit=HISTORICAL_ROOTS_LIMIT), 240 | proof: HistoricalSummariesProof 241 | ) 242 | 243 | historical_summaries_key = Container(epoch: uint64) 244 | selector = 0x14 245 | 246 | content = ForkDigest + SSZ.serialize(historical_summaries_with_proof) 247 | content_key = selector + SSZ.serialize(historical_summaries_key) 248 | ``` 249 | 250 | > A node SHOULD return the latest `HistoricalSummariesWithProof` object it has in response to a 251 | `FindContent` request. If a node cannot provide the requested or a newer 252 | `HistoricalSummariesWithProof` object, it MUST NOT reply with any content. 253 | > 254 | > A node MUST only store and gossip a `HistoricalSummariesWithProof` after it is verified against a 255 | finalized `BeaconState` root. 256 | > 257 | > A bridge MUST only gossip a new `HistoricalSummariesWithProof` when it is part of a finalized 258 | `BeaconState`. On finalization, a bridge MUST gossip the `LightClientFinalityUpdate` before the `HistoricalSummariesWithProof` in order for receiving nodes to be able to verify the latter.
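To make the key encodings above concrete: each content key is the one-byte selector followed by the SSZ serialization of the key container, and SSZ serializes a `uint64` as 8 little-endian bytes. A sketch for the single-`uint64` keys (selectors `0x12` through `0x14`):

```python
def beacon_content_key(selector: int, value: int) -> bytes:
    # SSZ serialization of a container holding a single uint64 is just the
    # 8-byte little-endian encoding of that value.
    return bytes([selector]) + value.to_bytes(8, "little")

# e.g. the HistoricalSummaries key (selector 0x14) for epoch 1:
# beacon_content_key(0x14, 1).hex() == "140100000000000000"
```

The `LightClientBootstrap` key (a `Bytes32` block root) and `LightClientUpdatesByRange` key (two `uint64` fields, concatenated) follow the same pattern of selector plus fixed-size SSZ fields.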
259 | 260 | ### Algorithms 261 | 262 | #### Validation 263 | 264 | ##### LightClientBootstrap 265 | 266 | While still light client syncing, a node SHOULD only store an offered `LightClientBootstrap` that it knows to be canonical, 267 | that is, a bootstrap that it can verify because it maps to a known trusted block root 268 | (e.g. trusted block root(s) provided through client config or pre-loaded in the client). 269 | 270 | Once a node is light client synced, it can verify a new `LightClientBootstrap` and then store and re-gossip it on successful verification. 271 | 272 | ##### LightClientUpdate 273 | 274 | While still light client syncing, a node SHOULD NOT store any offered `LightClientUpdate`. It SHOULD retrieve the updates required to sync and store those when verified. 275 | 276 | Once a node is light client synced, it can verify a new `LightClientUpdate` and then store and re-gossip it on successful verification. 277 | 278 | ##### LightClientFinalityUpdate & LightClientOptimisticUpdate 279 | 280 | Validating `LightClientFinalityUpdate` and `LightClientOptimisticUpdate` follows the gossip domain (gossipsub) [consensus specs](https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/light-client/p2p-interface.md#the-gossip-domain-gossipsub). 281 | -------------------------------------------------------------------------------- /bootnodes.md: -------------------------------------------------------------------------------- 1 | # Portal Network 2 | The [ENR](https://eips.ethereum.org/EIPS/eip-778) records for the bootnodes of the Portal Network. 3 | 4 | 5 | ### Here be dragons... 6 | These bootnodes are currently all running [Trin](https://github.com/ethereum/trin), [Fluffy](https://github.com/status-im/nimbus-eth1/tree/master/fluffy#introduction), or [Ultralight](https://github.com/ethereumjs/ultralight) clients, so their functionality is limited to what has been implemented in those clients.
All clients are still in alpha, so it is likely that bugs will occur or that the clients become unavailable for any number of reasons. If you have problems connecting to the bootnodes or encounter a bug, please let us know in the Portal Network discord. 7 | 8 | 9 | # Bootnodes: Mainnet 10 | ``` 11 | # Trin bootstrap nodes 12 | enr:-Jy4QIs2pCyiKna9YWnAF0zgf7bT0GzlAGoF8MEKFJOExmtofBIqzm71zDvmzRiiLkxaEJcs_Amr7XIhLI74k1rtlXICY5Z0IDAuMS4xLWFscGhhLjEtMTEwZjUwgmlkgnY0gmlwhKEjVaWJc2VjcDI1NmsxoQLSC_nhF1iRwsCw0n3J4jRjqoaRxtKgsEe5a-Dz7y0JloN1ZHCCIyg 13 | enr:-Jy4QKSLYMpku9F0Ebk84zhIhwTkmn80UnYvE4Z4sOcLukASIcofrGdXVLAUPVHh8oPCfnEOZm1W1gcAxB9kV2FJywkCY5Z0IDAuMS4xLWFscGhhLjEtMTEwZjUwgmlkgnY0gmlwhJO2oc6Jc2VjcDI1NmsxoQLMSGVlxXL62N3sPtaV-n_TbZFCEM5AR7RDyIwOadbQK4N1ZHCCIyg 14 | enr:-Jy4QH4_H4cW--ejWDl_W7ngXw2m31MM2GT8_1ZgECnfWxMzZTiZKvHDgkmwUS_l2aqHHU54Q7hcFSPz6VGzkUjOqkcCY5Z0IDAuMS4xLWFscGhhLjEtMTEwZjUwgmlkgnY0gmlwhJ31OTWJc2VjcDI1NmsxoQPC0eRkjRajDiETr_DRa5N5VJRm-ttCWDoO1QAMMCg5pIN1ZHCCIyg 15 | 16 | # Fluffy bootstrap nodes 17 | enr:-Ia4QLBxlH0Y8hGPQ1IRF5EStZbZvCPHQ2OjaJkuFMz0NRoZIuO2dLP0L-W_8ZmgnVx5SwvxYCXmX7zrHYv0FeHFFR0TY2aCaWSCdjSCaXCEwiErIIlzZWNwMjU2azGhAnnTykipGqyOy-ZRB9ga9pQVPF-wQs-yj_rYUoOqXEjbg3VkcIIjjA 18 | enr:-Ia4QM4amOkJf5z84Lv5Fl0RgWeSSDUekwnOPRn6XA1eMWgrHwWmn_gJGtOeuVfuX7ywGuPMRwb0odqQ9N_w_2Qc53gTY2aCaWSCdjSCaXCEwiErIYlzZWNwMjU2azGhAzaQEdPmz9SHiCw2I5yVAO8sriQ-mhC5yB7ea1u4u5QZg3VkcIIjjA 19 | enr:-Ia4QKVuHjNafkYuvhU7yCvSarNIVXquzJ8QOp5YbWJRIJw_EDVOIMNJ_fInfYoAvlRCHEx9LUQpYpqJa04pUDU21uoTY2aCaWSCdjSCaXCEwiErQIlzZWNwMjU2azGhA47eAW5oIDJAqxxqI0sL0d8ttXMV0h6sRIWU4ZwS4pYfg3VkcIIjjA 20 | enr:-Ia4QIU9U3zrP2DM7sfpgLJbbYpg12sWeXNeYcpKN49-6fhRCng0IUoVRI2E51mN-2eKJ4tbTimxNLaAnbA7r7fxVjcTY2aCaWSCdjSCaXCEwiErQYlzZWNwMjU2azGhAxOroJ3HceYvdD2yK1q9w8c9tgrISJso8q_JXI6U0Xwng3VkcIIjjA 21 | 22 | # Ultralight bootstrap nodes 23 | 
enr:-IS4QFV_wTNknw7qiCGAbHf6LxB-xPQCktyrCEZX-b-7PikMOIKkBg-frHRBkfwhI3XaYo_T-HxBYmOOQGNwThkBBHYDgmlkgnY0gmlwhKRc9_OJc2VjcDI1NmsxoQKHPt5CQ0D66ueTtSUqwGjfhscU_LiwS28QvJ0GgJFd-YN1ZHCCE4k 24 | enr:-IS4QDpUz2hQBNt0DECFm8Zy58Hi59PF_7sw780X3qA0vzJEB2IEd5RtVdPUYZUbeg4f0LMradgwpyIhYUeSxz2Tfa8DgmlkgnY0gmlwhKRc9_OJc2VjcDI1NmsxoQJd4NAVKOXfbdxyjSOUJzmA4rjtg43EDeEJu1f8YRhb_4N1ZHCCE4o 25 | enr:-IS4QGG6moBhLW1oXz84NaKEHaRcim64qzFn1hAG80yQyVGNLoKqzJe887kEjthr7rJCNlt6vdVMKMNoUC9OCeNK-EMDgmlkgnY0gmlwhKRc9-KJc2VjcDI1NmsxoQLJhXByb3LmxHQaqgLDtIGUmpANXaBbFw3ybZWzGqb9-IN1ZHCCE4k 26 | enr:-IS4QA5hpJikeDFf1DD1_Le6_ylgrLGpdwn3SRaneGu9hY2HUI7peHep0f28UUMzbC0PvlWjN8zSfnqMG07WVcCyBhADgmlkgnY0gmlwhKRc9-KJc2VjcDI1NmsxoQJMpHmGj1xSP1O-Mffk_jYIHVcg6tY5_CjmWVg1gJEsPIN1ZHCCE4o 27 | ``` 28 | 29 | # Bootnodes: Angelfood 30 | ``` 31 | # Trin bootstrap nodes 32 | enr:-LC4QMnoW2m4YYQRPjZhJ5hEpcA6a3V7iQs3slQ1TepzKBIVWQtjpcHsPINc0TcheMCbx6I2n5aax8M3AtUObt74ySUCY6p0IDVhYzI2NzViNGRmMjNhNmEwOWVjNDFkZTRlYTQ2ODQxNjk2ZTQ1YzSCaWSCdjSCaXCEQONKaYlzZWNwMjU2azGhAvZgYbpA9G8NQ6X4agu-R7Ymtu0hcX6xBQ--UEel_b6Pg3VkcIIjKA 33 | ``` 34 | -------------------------------------------------------------------------------- /canonical-transaction-index/canonical-transaction-index-network.md: -------------------------------------------------------------------------------- 1 | # Execution Canonical Transaction Index Network 2 | 3 | This document is the specification for the sub-protocol that supports on-demand availability of the index necessary for clients to lookup transactions by their hash. 4 | 5 | 6 | ## Overview 7 | 8 | The canonical transaction index network is a [Kademlia](https://pdos.csail.mit.edu/~petar/papers/maymounkov-kademlia-lncs.pdf) DHT that uses the [Portal Wire Protocol](../portal-wire-protocol.md) to establish an overlay network on top of the [Discovery v5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5-wire.md) protocol. 
9 | 10 | The canonical transaction index consists of a mapping from transaction hash to the canonical block hash within which the transaction was included, together with the index of the transaction within the set of transactions executed in that block. 11 | 12 | 13 | ### Data 14 | 15 | #### Types 16 | 17 | - Transaction Index Entry 18 | 19 | 20 | #### Retrieval 21 | 22 | - Transaction index entries can be retrieved by transaction hash. 23 | 24 | 25 | ## Specification 26 | 27 | ### Distance Function 28 | 29 | The canonical transaction index network uses the stock XOR distance metric defined in the portal wire protocol specification. 30 | 31 | 32 | ### Content ID Derivation Function 33 | 34 | The canonical transaction index network uses the SHA256 Content ID derivation function from the portal wire protocol specification. 35 | 36 | 37 | ### Wire Protocol 38 | 39 | The [Portal wire protocol](../portal-wire-protocol.md) is used as the wire protocol for the canonical transaction index network. 40 | 41 | 42 | #### Protocol Identifier 43 | 44 | As specified in the [Protocol identifiers](../portal-wire-protocol.md#protocol-identifiers) section of the Portal wire protocol, the `protocol` field in the `TALKREQ` message **MUST** contain the value of `0x500D`. 45 | 46 | 47 | #### Supported Message Types 48 | 49 | The canonical transaction index network supports the following protocol messages: 50 | 51 | - `Ping` - `Pong` 52 | - `FindNodes` - `Nodes` 53 | - `FindContent` - `FoundContent` 54 | - `Offer` - `Accept` 55 | 56 | 57 | #### `Ping.payload` & `Pong.payload` 58 | 59 | In the canonical transaction index network the `payload` field of the `Ping` and `Pong` messages carries a ping extension payload. The first `Ping` exchanged with another client MUST use the [Type 0: Client Info, Radius, and Capabilities Payload](../ping-extensions/extensions/type-0.md). Subsequent messages are then upgraded to the latest payload type supported by both clients. 60 | 61 | List of currently supported payloads, from newest to oldest:
62 | - [Type 1 Basic Radius Payload](../ping-extensions/extensions/type-1.md) 63 | 64 | 65 | ### Routing Table 66 | 67 | The canonical transaction index network uses the standard routing table structure from the Portal Wire Protocol. 68 | 69 | ### Node State 70 | 71 | #### Data Radius 72 | 73 | The canonical transaction index network includes one additional piece of node state that should be tracked. Nodes must track the `data_radius` from the Ping and Pong messages for other nodes in the network. This value is a 256-bit integer and represents the data that a node is "interested" in. We define the following function to determine whether a node in the network should be interested in a piece of content. 74 | 75 | ``` 76 | interested(node, content) = distance(node.id, content.id) <= node.radius 77 | ``` 78 | 79 | A node is expected to maintain `radius` information for each node in its local node table. A node's `radius` value may fluctuate as the contents of its local key-value store change. 80 | 81 | A node should track its own radius value and provide this value in all Ping or Pong messages it sends to other nodes. 82 | 83 | 84 | ### Data Types 85 | 86 | #### Transaction Hash Mapping 87 | 88 | ``` 89 | transaction_index_key := Container(transaction-hash: Bytes32) 90 | selector := 0x00 91 | 92 | content_key := selector + SSZ.encode(transaction_index_key) 93 | content := TODO: block hash & merkle proof against header.transactions_trie 94 | ``` 95 | -------------------------------------------------------------------------------- /history/epoch-accumulator-test-vectors.md: -------------------------------------------------------------------------------- 1 | # Epoch Accumulator Test Vectors 2 | 3 | This document contains test vectors for the Epoch Accumulator of the Header Network to help implementations conform to the specification.
4 | 5 | ## Header Accumulator Encodings 6 | 7 | These test vectors specify the proper encoding of the header accumulator in certain well defined scenarios 8 | 9 | ### Epoch 0 Accumulator 10 | 11 | #### Accumulator after genesis block 12 | 13 | This test vector represents the input parameters and the expected output for the header accumulator after adding 14 | the genesis block from Ethereum Mainnet. 15 | 16 | ##### Input Parameters 17 | ``` 18 | genesis header rlp = 0xf90214a00000000000000000000000000000000000000000000000000000000000000000a01dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347940000000000000000000000000000000000000000a0d7f8974fb5ac78d9ac099b9ad5018bedc2ce0a72dad1827a1709da30580f0544a056e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421a056e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421b9010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000850400000000808213888080a011bbe8db4e347b4e8c937c1c8370e4b5ed33adb3db69cbdb7a38e1e50b1b82faa00000000000000000000000000000000000000000000000000000000000000000880000000000000042 19 | ``` 20 | ##### Expected Output 21 | ``` 22 | accumulator hash root = 0xb629833240bb2f5eabfb5245be63d730ca4ed30d6a418340ca476e7c1f1d98c0 23 | ``` 24 | 25 | #### Current Epoch Accumulator after Block 1 26 | This test vector represents the input parameters and the expected output for the header accumulator after adding 27 | block 1 from Ethereum Mainnet. 
28 | 29 | ##### Input Parameters 30 | ``` 31 | block 1 header rlp = 0xf90211a0d4e56740f876aef8c010b86a40d5f56745a118d0906a34e69aec8c0db1cb8fa3a01dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d493479405a56e2d52c817161883f50c441c3228cfe54d9fa0d67e4d450343046425ae4271474353857ab860dbc0a1dde64b41b5cd3a532bf3a056e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421a056e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421b90100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000008503ff80000001821388808455ba422499476574682f76312e302e302f6c696e75782f676f312e342e32a0969b900de27b6ac6a67742365dd65f55a0526c41fd18e1b16f1a1215c2e66f5988539bd4979fef1ec4 32 | ``` 33 | ##### Expected Output 34 | ``` 35 | accumulator hash root = 0x00cbebed829e1babb93f2300bebe7905a98cb86993c7fc09bb5b04626fd91ae5 36 | ``` 37 | 38 | #### Current Epoch Accumulator after Block 2 39 | This test vector represents the input parameters and the expected output for the header accumulator after adding 40 | block 2 from Ethereum Mainnet. 
41 | 42 | #### Input Parameters 43 | ``` 44 | block 2 header rlp = 0xf90218a088e96d4537bea4d9c05d12549907b32561d3bf31f45aae734cdc119f13406cb6a01dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d4934794dd2f1e6e498202e86d8f5442af596580a4f03c2ca04943d941637411107494da9ec8bc04359d731bfd08b72b4d0edcbd4cd2ecb341a056e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421a056e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421b90100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000008503ff00100002821388808455ba4241a0476574682f76312e302e302d30636463373634372f6c696e75782f676f312e34a02f0790c5aa31ab94195e1f6443d645af5b75c46c04fbf9911711198a0ce8fdda88b853fa261a86aa9e 45 | ``` 46 | #### Expected Output 47 | ``` 48 | accumulator hash root = 0x88cce8439ebc0c1d007177ffb6831c15c07b4361984cc52235b6fd728434f0c7 49 | ``` 50 | -------------------------------------------------------------------------------- /implementation-details-overlay.md: -------------------------------------------------------------------------------- 1 | # Portal Network: Overlay Network Functionality 2 | 3 | This document outlines the units of functionality that are necessary for an implementation of the [portal wire protocol](./portal-wire-protocol.md) 4 | 5 | > Note that in some places the exact functionality may not be strictly necessary and may be influenced by the implementation details of individual clients. There are alternative ways to implement the protocol and this is merely intended to serve as a guide. 
6 | 7 | # A - Base Protocol 8 | 9 | Support for the base Discovery v5 protocol functionality 10 | 11 | ## A.1 - Base Protocol TALKREQ and TALKRESP 12 | 13 | Base protocol support for the TALKREQ and TALKRESP messages 14 | 15 | ## A.2 - TALKREQ/TALKRESP message routing 16 | 17 | The ability to route incoming TALKREQ/TALKRESP messages to custom handlers. 18 | 19 | ## A.3 - TALKREQ/TALKRESP request and response handling 20 | 21 | The ability to send a TALKREQ with a specific `request_id` and receive the corresponding TALKRESP message. 22 | 23 | # B - Portal Wire Protocol Messages 24 | 25 | Support for the message types that are part of the [portal wire protocol](./portal-wire-protocol.md) 26 | 27 | ## B.1 - PING & PONG 28 | 29 | Support for the PING and PONG message types 30 | 31 | ### B.1.a - PING message support 32 | 33 | Support for the PING message type 34 | 35 | #### B.1.a.1 - PING sending 36 | 37 | The ability to send a PING message 38 | 39 | #### B.1.a.2 - PING receiving 40 | 41 | The ability to receive PING messages 42 | 43 | ### B.1.b - PONG message support 44 | 45 | Support for the PONG message type 46 | 47 | #### B.1.b.1 - PONG sending 48 | 49 | The ability to send a PONG message 50 | 51 | #### B.1.b.2 - PONG receiving 52 | 53 | The ability to receive PONG messages 54 | 55 | ### B.1.c - PONG when PING'd 56 | 57 | When a PING message is received a PONG response is sent. 
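The routing requirement in A.2 amounts to a dispatch table keyed by protocol id, with B.1.c implemented inside a registered handler. A minimal sketch with hypothetical names — a real client would hook into its Discovery v5 library's TALKREQ callback:

```python
# Dispatch incoming TALKREQ payloads to per-sub-protocol handlers (A.2).
class TalkReqRouter:
    def __init__(self):
        self.handlers = {}  # protocol id (bytes) -> handler(request) -> response

    def register(self, protocol_id: bytes, handler):
        self.handlers[protocol_id] = handler

    def on_talkreq(self, protocol_id: bytes, request: bytes) -> bytes:
        handler = self.handlers.get(protocol_id)
        if handler is None:
            return b""  # unknown protocol: empty TALKRESP payload
        return handler(request)
```

Each sub protocol registers one handler under its protocol identifier, so a single discv5 socket can serve several overlay networks at once.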
58 | 59 | ## B.2 - FINDNODES & FOUNDNODES 60 | 61 | Support for the FINDNODES and FOUNDNODES message types 62 | 63 | ### B.2.a - FINDNODES message support 64 | 65 | Support for the FINDNODES message type 66 | 67 | #### B.2.a.1 - FINDNODES sending 68 | 69 | The ability to send a FINDNODES message 70 | 71 | #### B.2.a.2 - FINDNODES receiving 72 | 73 | The ability to receive FINDNODES messages 74 | 75 | ### B.2.b - FOUNDNODES message support 76 | 77 | Support for the FOUNDNODES message type 78 | 79 | #### B.2.b.1 - FOUNDNODES sending 80 | 81 | The ability to send a FOUNDNODES message 82 | 83 | #### B.2.b.2 - FOUNDNODES receiving 84 | 85 | The ability to receive FOUNDNODES messages 86 | 87 | ### B.2.c - Serving FINDNODES 88 | 89 | When a FINDNODES message is received, the appropriate `node_id` records are pulled from the sub protocol [routing table](#TODO) and a FOUNDNODES response is sent with the ENR records. 90 | 91 | ## B.3 - FINDCONTENT & FOUNDCONTENT 92 | 93 | Support for the FINDCONTENT and FOUNDCONTENT message types 94 | 95 | ### B.3.a - FINDCONTENT message support 96 | 97 | Support for the FINDCONTENT message type 98 | 99 | #### B.3.a.1 - FINDCONTENT sending 100 | 101 | The ability to send a FINDCONTENT message 102 | 103 | #### B.3.a.2 - FINDCONTENT receiving 104 | 105 | The ability to receive FINDCONTENT messages 106 | 107 | ### B.3.b - FOUNDCONTENT message support 108 | 109 | Support for the FOUNDCONTENT message type 110 | 111 | #### B.3.b.1 - FOUNDCONTENT sending 112 | 113 | The ability to send a FOUNDCONTENT message 114 | 115 | #### B.3.b.2 - FOUNDCONTENT receiving 116 | 117 | The ability to receive FOUNDCONTENT messages 118 | 119 | ## B.4 - OFFER & ACCEPT 120 | 121 | Support for the OFFER and ACCEPT messages 122 | 123 | ### B.4.a - OFFER message support 124 | 125 | Support for the OFFER message type 126 | 127 | #### B.4.a.1 - OFFER sending 128 | 129 | The ability to send an OFFER message 130 | 131 | #### B.4.a.2 - OFFER receiving 132 | 133 | The ability to
receive OFFER messages 134 | 135 | ### B.4.b - ACCEPT message support 136 | 137 | Support for the ACCEPT message type 138 | 139 | #### B.4.b.1 - ACCEPT sending 140 | 141 | The ability to send an ACCEPT message 142 | 143 | #### B.4.b.2 - ACCEPT receiving 144 | 145 | The ability to receive ACCEPT messages 146 | 147 | # C - ENR Database 148 | 149 | Management of known ENR records 150 | 151 | ## C.1 - ENR handling 152 | 153 | Support for encoding, decoding, and validating ENR records according to [EIPTODO](#TODO) 154 | 155 | ### C.1.a - Extraction of IP address and port 156 | 157 | IP address and port information can be extracted from ENR records. 158 | 159 | ## C.2 - Store ENR record 160 | 161 | ENR records can be saved for later retrieval. 162 | 163 | ### C.2.a - Tracking highest sequence number 164 | 165 | Storage of ENR records tracks sequence numbers, preserving the record with the highest sequence number. 166 | 167 | ## C.3 - Retrieve ENR Record 168 | 169 | ENR records can be retrieved by their `node_id`. 170 | 171 | # D - Overlay Routing Table 172 | 173 | Management of routing tables 174 | 175 | ## D.1 - Sub Protocol Routing Tables 176 | 177 | Separate routing tables for each supported sub protocol. 178 | 179 | ## D.2 - Distance Function 180 | 181 | The routing table is able to use the custom distance function. 182 | 183 | ## D.3 - Manage K-Buckets 184 | 185 | The routing table manages the K-buckets 186 | 187 | ### D.3.a - Insertion of new nodes 188 | 189 | Nodes can be inserted into the appropriate bucket of the routing table, ensuring that buckets do not end up containing duplicate records. 190 | 191 | ### D.3.b - Removal of nodes 192 | 193 | Nodes can be removed from the routing table. 194 | 195 | ### D.3.c - Maximum of K nodes per bucket 196 | 197 | Each bucket is limited to `K` total members 198 | 199 | ### D.3.d - Replacement cache 200 | 201 | Each bucket maintains a set of additional nodes known to be at the appropriate distance.
When a node is removed from the routing table, it is replaced by a node from the replacement cache when one is available. The cache is managed such that it remains disjoint from the nodes in the corresponding bucket. 202 | 203 | ## D.4 - Retrieve nodes at specified log-distance 204 | 205 | The routing table can return nodes at a requested log-distance 206 | 207 | ## D.5 - Retrieval of nodes ordered by distance to a specified `node_id` 208 | 209 | The routing table can return the nodes closest to a provided `node_id`. 210 | 211 | # E - Overlay Network Management 212 | 213 | Functionality related to managing a node's view of the overlay network. 214 | 215 | ## E.1 - Bootstrapping via Bootnodes 216 | 217 | The client uses a set of bootnodes to acquire an initial view of the network. 218 | 219 | ### E.1.a - Bootnodes 220 | 221 | Each supported sub protocol can have its own set of bootnodes. These records can be either hard coded into the client or provided via client configuration. 222 | 223 | ## E.2 - Population of routing table 224 | 225 | The client actively seeks to populate its routing table by performing [RFN](#TODO) lookups to discover new nodes. 226 | 227 | ## E.3 - Liveliness checks 228 | 229 | The client tracks *liveliness* of nodes in its routing table and periodically checks the liveliness of the node in its routing table that was least recently checked. 230 | 231 | ### E.3.a - Rate Limiting Liveliness Checks 232 | 233 | The liveliness checks for any individual node are rate limited so as not to spam individual nodes with PING messages when the routing table is sparse. 234 | 235 | # F - Content Database 236 | 237 | Management of stored content. 238 | 239 | ## F.1 - Content can be stored 240 | 241 | Content can be stored in a persistent database. Databases are segmented by sub protocol.
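The content database operations in this section — store, retrieve, remove, furthest-by-distance query, and total-size accounting — can be sketched with an in-memory map. This is an illustrative class, not a prescribed interface; a real client would back it with a per-sub-protocol persistent database:

```python
class ContentStore:
    def __init__(self):
        self.db = {}  # content_id (32 bytes) -> content payload (bytes)

    def put(self, content_id: bytes, content: bytes):
        self.db[content_id] = content

    def get(self, content_id: bytes):
        return self.db.get(content_id)

    def remove(self, content_id: bytes):
        self.db.pop(content_id, None)

    def total_size(self) -> int:
        return sum(len(v) for v in self.db.values())

    def furthest(self, node_id: bytes):
        # XOR distance, reading ids as big-endian integers
        nid = int.from_bytes(node_id, "big")
        return max(self.db, default=None,
                   key=lambda cid: int.from_bytes(cid, "big") ^ nid)
```

The `furthest` query is what makes radius-based eviction cheap: when the store exceeds its size budget, the furthest item is the natural eviction candidate.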
242 | 243 | ## F.2 - Content can be retrieved by `content_id` 244 | 245 | Given a known `content_id`, the corresponding content payload can be retrieved. 246 | 247 | ## F.3 - Content can be removed 248 | 249 | Content can be removed. 250 | 251 | 252 | ## F.4 - Query furthest by distance 253 | 254 | Retrieval of the content from the database that is furthest from a provided `node_id` using the custom distance function. 255 | 256 | 257 | ## F.5 - Total size of stored content 258 | 259 | Retrieval of the total number of bytes stored. 260 | 261 | # G - Content Management 262 | 263 | ## G.1 - Support for the uTP Sub Protocol 264 | 265 | Support for sending and receiving streams of data using the uTP sub protocol. 266 | 267 | ### G.1.a - Support for outbound streams 268 | 269 | The ability to establish a new outbound connection with another node with a specified `connection-id` 270 | 271 | ### G.1.b - Support for inbound streams 272 | 273 | The ability to listen for an inbound connection from another node with a `connection-id` that is known in advance. 274 | 275 | ## G.2 - Enforcement of maximum stored content size 276 | 277 | When the total size of stored content exceeds the configured maximum content storage size, the content that is furthest from the local `node_id` is evicted in a timely manner. This should also result in any "data radius" values relevant to this network being adjusted. 278 | 279 | ## G.3 - Retrieval via FINDCONTENT/FOUNDCONTENT & uTP 280 | 281 | Support for retrieving content using the FINDCONTENT, FOUNDCONTENT, and uTP sub protocol. 282 | 283 | ### G.3.a - DHT Traversal 284 | 285 | The client can use the FINDCONTENT and FOUNDCONTENT messages to traverse the DHT until it encounters a node that has the desired content. 286 | 287 | ### G.3.b - Receipt via direct payload 288 | 289 | Upon encountering a FOUNDCONTENT response that contains the actual content payload, the client can return the payload.
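The traversal in G.3.a/G.3.b can be sketched as an iterative lookup that always queries the closest unqueried node until one returns the payload. Here `find_content` is an assumed transport callback returning either `("content", payload)` or `("enrs", nodes)`:

```python
def content_lookup(content_id: int, start_nodes, find_content, distance):
    # Greedy DHT traversal: repeatedly query the closest known unqueried node.
    frontier = sorted(start_nodes, key=lambda n: distance(n, content_id))
    queried = set()
    while frontier:
        node = frontier.pop(0)
        if node in queried:
            continue
        queried.add(node)
        kind, value = find_content(node, content_id)
        if kind == "content":
            return value  # G.3.b: payload delivered directly in FOUNDCONTENT
        # Otherwise the peer returned closer nodes; fold them into the frontier.
        frontier.extend(n for n in value if n not in queried)
        frontier.sort(key=lambda n: distance(n, content_id))
    return None  # content unreachable from the start set
```

Nodes here are represented as bare ids for brevity; a real implementation would track ENRs, query several peers concurrently, and bound the lookup.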
290 | 291 | ### G.3.c - Receipt via uTP 292 | 293 | Upon encountering a FOUNDCONTENT response that contains a uTP `connection-id`, the client should initiate a uTP stream with the provided `connection-id` and receive the full data payload over that stream. 294 | 295 | ## G.4 - Gossip via OFFER/ACCEPT & uTP 296 | 297 | Support for receipt of content using the OFFER/ACCEPT messages and uTP sub protocol. 298 | 299 | ### G.4.a - Handle incoming gossip 300 | 301 | The client can listen for incoming OFFER messages, responding with an ACCEPT message for any offered content that is of interest to the client. 302 | 303 | #### G.4.a.1 - Receipt via uTP 304 | 305 | After sending an ACCEPT response to an OFFER request, the client listens for an inbound uTP stream with the `connection-id` that was sent with the ACCEPT response. 306 | 307 | ### G.4.b - Neighborhood Gossip Propagation 308 | 309 | Upon receiving and validating gossip content, the content should then be gossiped to some set of interested nearby peers. 310 | 311 | #### G.4.b.1 - Sending content via uTP 312 | 313 | Upon receiving an ACCEPT message in response to our own OFFER message, the client can initiate a uTP stream with the other node and can send the content payload across the stream. 314 | 315 | 316 | ## G.5 - Serving Content 317 | 318 | The client should listen for FINDCONTENT messages. 319 | 320 | When a FINDCONTENT message is received, either the requested content or the nodes known to be closest to the content are returned via a FOUNDCONTENT message. 321 | 322 | 323 | # H - JSON-RPC 324 | 325 | Endpoints required by the portal network wire protocol.
326 | 327 | ## H.1 - `TODO` 328 | 329 | TODO 330 | -------------------------------------------------------------------------------- /jsonrpc/README.md: -------------------------------------------------------------------------------- 1 | # Portal Network JSON-RPC Specification 2 | 3 | [View the spec][playground] 4 | 5 | The Portal Network JSON-RPC is a collection of methods that all clients implement. 6 | This interface allows downstream tooling and infrastructure to treat different 7 | Portal Network clients as modules that can be swapped at will. 8 | 9 | ## Setup 10 | 11 | At the time of writing, the build and test steps require Node.js version 12 | v16+. Verify that this version is still current by inspecting the `node-version` 13 | defined in the [project's test config file](../.github/workflows/test.yaml). 14 | 15 | ## Building 16 | 17 | The specification is split into multiple files to improve readability. It 18 | can be compiled into a single document by running: 19 | 20 | ```console 21 | $ npm install 22 | $ npm run build 23 | Build successful. 24 | ``` 25 | 26 | This will output the file `openrpc.json` in the root of the project. This file 27 | will have all schema `#ref`s resolved. 28 | 29 | Preview the built result by copying the `openrpc.json` file into the [open-rpc 30 | playground](https://playground.open-rpc.org/). 31 | 32 | ## Contributing 33 | 34 | The specification is written in [OpenRPC][openrpc]. Refer to the 35 | OpenRPC specification and the JSON Schema specification to get started. 36 | 37 | ### Testing 38 | 39 | There are currently two tools for testing contributions. The first tool is 40 | an [OpenRPC validator][validator]. 41 | 42 | ```console 43 | $ npm install 44 | $ npm run lint 45 | OpenRPC spec validated successfully.
46 | ``` 47 | 48 | The second tool can validate a live JSON-RPC provider hosted at 49 | `http://localhost:8545` against the specification: 50 | 51 | ```console 52 | $ ./scripts/debug.sh discv5_sendPing \"enr:-....\" 53 | data.json valid 54 | ``` 55 | 56 | [playground]: https://playground.open-rpc.org/?schemaUrl=https://raw.githubusercontent.com/ethereum/portal-network-specs/assembled-spec/jsonrpc/openrpc.json&uiSchema%5BappBar%5D%5Bui:splitView%5D=false&uiSchema%5BappBar%5D%5Bui:input%5D=false&uiSchema%5BappBar%5D%5Bui:examplesDropdown%5D=false 57 | [openrpc]: https://open-rpc.org 58 | [validator]: https://open-rpc.github.io/schema-utils-js/globals.html#validateopenrpcdocument 59 | -------------------------------------------------------------------------------- /jsonrpc/package-lock.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "portal-network-apis", 3 | "version": "0.0.1", 4 | "lockfileVersion": 2, 5 | "requires": true, 6 | "packages": { 7 | "": { 8 | "name": "portal-network-apis", 9 | "version": "0.0.1", 10 | "license": "CC0-1.0", 11 | "dependencies": { 12 | "@open-rpc/schema-utils-js": "^1.15.0" 13 | } 14 | }, 15 | "node_modules/@json-schema-spec/json-pointer": { 16 | "version": "0.1.2", 17 | "resolved": "https://registry.npmjs.org/@json-schema-spec/json-pointer/-/json-pointer-0.1.2.tgz", 18 | "integrity": "sha512-BYY7IavBjwsWWSmVcMz2A9mKiDD9RvacnsItgmy1xV8cmgbtxFfKmKMtkVpD7pYtkx4mIW4800yZBXueVFIWPw==" 19 | }, 20 | "node_modules/@json-schema-tools/dereferencer": { 21 | "version": "1.5.3", 22 | "resolved": "https://registry.npmjs.org/@json-schema-tools/dereferencer/-/dereferencer-1.5.3.tgz", 23 | "integrity": "sha512-m5OhsfstuYwPX0EFrwIu4BDm/V0CzNXhkrSzIKh/grpxzrWuRz0AKi9m6NhfcE9NuONlOHTjmSB78+ktem+sSA==", 24 | "dependencies": { 25 | "@json-schema-tools/reference-resolver": "^1.2.2", 26 | "@json-schema-tools/traverse": "^1.7.8", 27 | "fast-safe-stringify": "^2.0.7" 28 | } 29 | }, 30 | 
"node_modules/@json-schema-tools/meta-schema": { 31 | "version": "1.6.19", 32 | "resolved": "https://registry.npmjs.org/@json-schema-tools/meta-schema/-/meta-schema-1.6.19.tgz", 33 | "integrity": "sha512-55zuWFW7tr4tf/G5AYmybcPdGOkVAreQbt2JdnogX4I2r/zkxZiimYPJESDf5je9BI2oRveak2p296HzDppeaA==" 34 | }, 35 | "node_modules/@json-schema-tools/reference-resolver": { 36 | "version": "1.2.3", 37 | "resolved": "https://registry.npmjs.org/@json-schema-tools/reference-resolver/-/reference-resolver-1.2.3.tgz", 38 | "integrity": "sha512-Bc7TjkuSy9PnQDeIenA3aU1cgh2/Wh042sxTfEaav38+Jf+2+U2kMBmMKS4nZLCR+L0fh71V3IwE2FMJleQCGw==", 39 | "dependencies": { 40 | "@json-schema-spec/json-pointer": "^0.1.2", 41 | "isomorphic-fetch": "^3.0.0" 42 | } 43 | }, 44 | "node_modules/@json-schema-tools/traverse": { 45 | "version": "1.8.1", 46 | "resolved": "https://registry.npmjs.org/@json-schema-tools/traverse/-/traverse-1.8.1.tgz", 47 | "integrity": "sha512-y1Tw+r6fgLWp1b264Sva0YhElLwNuG/uPV0ihInWPSpH8qdRQIIu4YM6DBh6UIvwEujYSqrJh2Hfk13hDwJgIw==" 48 | }, 49 | "node_modules/@open-rpc/meta-schema": { 50 | "version": "1.14.2", 51 | "resolved": "https://registry.npmjs.org/@open-rpc/meta-schema/-/meta-schema-1.14.2.tgz", 52 | "integrity": "sha512-vD4Nbkrb7wYFRcSQf+j228LwOy1C6/KKpy5NADlpMElGrAWPRxhTa2yTi6xG+x88OHzg2+cydQ0GAD6o40KUcg==" 53 | }, 54 | "node_modules/@open-rpc/schema-utils-js": { 55 | "version": "1.16.0", 56 | "resolved": "https://registry.npmjs.org/@open-rpc/schema-utils-js/-/schema-utils-js-1.16.0.tgz", 57 | "integrity": "sha512-DkQnjjOVmRBuGA5DCxL+kWhIugYHROtcH3z5tH+RdmxIFGCaPpvAnWNkWtIH0NxVpVulUMsqaP3tmCWYjC2o7Q==", 58 | "dependencies": { 59 | "@json-schema-tools/dereferencer": "1.5.3", 60 | "@json-schema-tools/meta-schema": "1.6.19", 61 | "@json-schema-tools/reference-resolver": "1.2.3", 62 | "@open-rpc/meta-schema": "1.14.2", 63 | "ajv": "^6.10.0", 64 | "detect-node": "^2.0.4", 65 | "fast-safe-stringify": "^2.0.7", 66 | "fs-extra": "^9.0.0", 67 | "is-url": "^1.2.4", 68 | 
"isomorphic-fetch": "^3.0.0" 69 | } 70 | }, 71 | "node_modules/ajv": { 72 | "version": "6.12.6", 73 | "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz", 74 | "integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==", 75 | "dependencies": { 76 | "fast-deep-equal": "^3.1.1", 77 | "fast-json-stable-stringify": "^2.0.0", 78 | "json-schema-traverse": "^0.4.1", 79 | "uri-js": "^4.2.2" 80 | }, 81 | "funding": { 82 | "type": "github", 83 | "url": "https://github.com/sponsors/epoberezkin" 84 | } 85 | }, 86 | "node_modules/at-least-node": { 87 | "version": "1.0.0", 88 | "resolved": "https://registry.npmjs.org/at-least-node/-/at-least-node-1.0.0.tgz", 89 | "integrity": "sha512-+q/t7Ekv1EDY2l6Gda6LLiX14rU9TV20Wa3ofeQmwPFZbOMo9DXrLbOjFaaclkXKWidIaopwAObQDqwWtGUjqg==", 90 | "engines": { 91 | "node": ">= 4.0.0" 92 | } 93 | }, 94 | "node_modules/detect-node": { 95 | "version": "2.1.0", 96 | "resolved": "https://registry.npmjs.org/detect-node/-/detect-node-2.1.0.tgz", 97 | "integrity": "sha512-T0NIuQpnTvFDATNuHN5roPwSBG83rFsuO+MXXH9/3N1eFbn4wcPjttvjMLEPWJ0RGUYgQE7cGgS3tNxbqCGM7g==" 98 | }, 99 | "node_modules/fast-deep-equal": { 100 | "version": "3.1.3", 101 | "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz", 102 | "integrity": "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==" 103 | }, 104 | "node_modules/fast-json-stable-stringify": { 105 | "version": "2.1.0", 106 | "resolved": "https://registry.npmjs.org/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.0.tgz", 107 | "integrity": "sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw==" 108 | }, 109 | "node_modules/fast-safe-stringify": { 110 | "version": "2.1.1", 111 | "resolved": "https://registry.npmjs.org/fast-safe-stringify/-/fast-safe-stringify-2.1.1.tgz", 112 | "integrity": 
"sha512-W+KJc2dmILlPplD/H4K9l9LcAHAfPtP6BY84uVLXQ6Evcz9Lcg33Y2z1IVblT6xdY54PXYVHEv+0Wpq8Io6zkA==" 113 | }, 114 | "node_modules/fs-extra": { 115 | "version": "9.1.0", 116 | "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-9.1.0.tgz", 117 | "integrity": "sha512-hcg3ZmepS30/7BSFqRvoo3DOMQu7IjqxO5nCDt+zM9XWjb33Wg7ziNT+Qvqbuc3+gWpzO02JubVyk2G4Zvo1OQ==", 118 | "dependencies": { 119 | "at-least-node": "^1.0.0", 120 | "graceful-fs": "^4.2.0", 121 | "jsonfile": "^6.0.1", 122 | "universalify": "^2.0.0" 123 | }, 124 | "engines": { 125 | "node": ">=10" 126 | } 127 | }, 128 | "node_modules/graceful-fs": { 129 | "version": "4.2.8", 130 | "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.8.tgz", 131 | "integrity": "sha512-qkIilPUYcNhJpd33n0GBXTB1MMPp14TxEsEs0pTrsSVucApsYzW5V+Q8Qxhik6KU3evy+qkAAowTByymK0avdg==" 132 | }, 133 | "node_modules/is-url": { 134 | "version": "1.2.4", 135 | "resolved": "https://registry.npmjs.org/is-url/-/is-url-1.2.4.tgz", 136 | "integrity": "sha512-ITvGim8FhRiYe4IQ5uHSkj7pVaPDrCTkNd3yq3cV7iZAcJdHTUMPMEHcqSOy9xZ9qFenQCvi+2wjH9a1nXqHww==" 137 | }, 138 | "node_modules/isomorphic-fetch": { 139 | "version": "3.0.0", 140 | "resolved": "https://registry.npmjs.org/isomorphic-fetch/-/isomorphic-fetch-3.0.0.tgz", 141 | "integrity": "sha512-qvUtwJ3j6qwsF3jLxkZ72qCgjMysPzDfeV240JHiGZsANBYd+EEuu35v7dfrJ9Up0Ak07D7GGSkGhCHTqg/5wA==", 142 | "dependencies": { 143 | "node-fetch": "^2.6.1", 144 | "whatwg-fetch": "^3.4.1" 145 | } 146 | }, 147 | "node_modules/json-schema-traverse": { 148 | "version": "0.4.1", 149 | "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz", 150 | "integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==" 151 | }, 152 | "node_modules/jsonfile": { 153 | "version": "6.1.0", 154 | "resolved": "https://registry.npmjs.org/jsonfile/-/jsonfile-6.1.0.tgz", 155 | "integrity": 
"sha512-5dgndWOriYSm5cnYaJNhalLNDKOqFwyDB/rr1E9ZsGciGvKPs8R2xYGCacuf3z6K1YKDz182fd+fY3cn3pMqXQ==", 156 | "dependencies": { 157 | "universalify": "^2.0.0" 158 | }, 159 | "optionalDependencies": { 160 | "graceful-fs": "^4.1.6" 161 | } 162 | }, 163 | "node_modules/node-fetch": { 164 | "version": "2.6.7", 165 | "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.7.tgz", 166 | "integrity": "sha512-ZjMPFEfVx5j+y2yF35Kzx5sF7kDzxuDj6ziH4FFbOp87zKDZNx8yExJIb05OGF4Nlt9IHFIMBkRl41VdvcNdbQ==", 167 | "dependencies": { 168 | "whatwg-url": "^5.0.0" 169 | }, 170 | "engines": { 171 | "node": "4.x || >=6.0.0" 172 | }, 173 | "peerDependencies": { 174 | "encoding": "^0.1.0" 175 | }, 176 | "peerDependenciesMeta": { 177 | "encoding": { 178 | "optional": true 179 | } 180 | } 181 | }, 182 | "node_modules/punycode": { 183 | "version": "2.1.1", 184 | "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.1.1.tgz", 185 | "integrity": "sha512-XRsRjdf+j5ml+y/6GKHPZbrF/8p2Yga0JPtdqTIY2Xe5ohJPD9saDJJLPvp9+NSBprVvevdXZybnj2cv8OEd0A==", 186 | "engines": { 187 | "node": ">=6" 188 | } 189 | }, 190 | "node_modules/tr46": { 191 | "version": "0.0.3", 192 | "resolved": "https://registry.npmjs.org/tr46/-/tr46-0.0.3.tgz", 193 | "integrity": "sha1-gYT9NH2snNwYWZLzpmIuFLnZq2o=" 194 | }, 195 | "node_modules/universalify": { 196 | "version": "2.0.0", 197 | "resolved": "https://registry.npmjs.org/universalify/-/universalify-2.0.0.tgz", 198 | "integrity": "sha512-hAZsKq7Yy11Zu1DE0OzWjw7nnLZmJZYTDZZyEFHZdUhV8FkH5MCfoU1XMaxXovpyW5nq5scPqq0ZDP9Zyl04oQ==", 199 | "engines": { 200 | "node": ">= 10.0.0" 201 | } 202 | }, 203 | "node_modules/uri-js": { 204 | "version": "4.4.1", 205 | "resolved": "https://registry.npmjs.org/uri-js/-/uri-js-4.4.1.tgz", 206 | "integrity": "sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==", 207 | "dependencies": { 208 | "punycode": "^2.1.0" 209 | } 210 | }, 211 | "node_modules/webidl-conversions": { 212 | "version": 
"3.0.1", 213 | "resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-3.0.1.tgz", 214 | "integrity": "sha1-JFNCdeKnvGvnvIZhHMFq4KVlSHE=" 215 | }, 216 | "node_modules/whatwg-fetch": { 217 | "version": "3.6.2", 218 | "resolved": "https://registry.npmjs.org/whatwg-fetch/-/whatwg-fetch-3.6.2.tgz", 219 | "integrity": "sha512-bJlen0FcuU/0EMLrdbJ7zOnW6ITZLrZMIarMUVmdKtsGvZna8vxKYaexICWPfZ8qwf9fzNq+UEIZrnSaApt6RA==" 220 | }, 221 | "node_modules/whatwg-url": { 222 | "version": "5.0.0", 223 | "resolved": "https://registry.npmjs.org/whatwg-url/-/whatwg-url-5.0.0.tgz", 224 | "integrity": "sha1-lmRU6HZUYuN2RNNib2dCzotwll0=", 225 | "dependencies": { 226 | "tr46": "~0.0.3", 227 | "webidl-conversions": "^3.0.0" 228 | } 229 | } 230 | }, 231 | "dependencies": { 232 | "@json-schema-spec/json-pointer": { 233 | "version": "0.1.2", 234 | "resolved": "https://registry.npmjs.org/@json-schema-spec/json-pointer/-/json-pointer-0.1.2.tgz", 235 | "integrity": "sha512-BYY7IavBjwsWWSmVcMz2A9mKiDD9RvacnsItgmy1xV8cmgbtxFfKmKMtkVpD7pYtkx4mIW4800yZBXueVFIWPw==" 236 | }, 237 | "@json-schema-tools/dereferencer": { 238 | "version": "1.5.3", 239 | "resolved": "https://registry.npmjs.org/@json-schema-tools/dereferencer/-/dereferencer-1.5.3.tgz", 240 | "integrity": "sha512-m5OhsfstuYwPX0EFrwIu4BDm/V0CzNXhkrSzIKh/grpxzrWuRz0AKi9m6NhfcE9NuONlOHTjmSB78+ktem+sSA==", 241 | "requires": { 242 | "@json-schema-tools/reference-resolver": "^1.2.2", 243 | "@json-schema-tools/traverse": "^1.7.8", 244 | "fast-safe-stringify": "^2.0.7" 245 | } 246 | }, 247 | "@json-schema-tools/meta-schema": { 248 | "version": "1.6.19", 249 | "resolved": "https://registry.npmjs.org/@json-schema-tools/meta-schema/-/meta-schema-1.6.19.tgz", 250 | "integrity": "sha512-55zuWFW7tr4tf/G5AYmybcPdGOkVAreQbt2JdnogX4I2r/zkxZiimYPJESDf5je9BI2oRveak2p296HzDppeaA==" 251 | }, 252 | "@json-schema-tools/reference-resolver": { 253 | "version": "1.2.3", 254 | "resolved": 
"https://registry.npmjs.org/@json-schema-tools/reference-resolver/-/reference-resolver-1.2.3.tgz", 255 | "integrity": "sha512-Bc7TjkuSy9PnQDeIenA3aU1cgh2/Wh042sxTfEaav38+Jf+2+U2kMBmMKS4nZLCR+L0fh71V3IwE2FMJleQCGw==", 256 | "requires": { 257 | "@json-schema-spec/json-pointer": "^0.1.2", 258 | "isomorphic-fetch": "^3.0.0" 259 | } 260 | }, 261 | "@json-schema-tools/traverse": { 262 | "version": "1.8.1", 263 | "resolved": "https://registry.npmjs.org/@json-schema-tools/traverse/-/traverse-1.8.1.tgz", 264 | "integrity": "sha512-y1Tw+r6fgLWp1b264Sva0YhElLwNuG/uPV0ihInWPSpH8qdRQIIu4YM6DBh6UIvwEujYSqrJh2Hfk13hDwJgIw==" 265 | }, 266 | "@open-rpc/meta-schema": { 267 | "version": "1.14.2", 268 | "resolved": "https://registry.npmjs.org/@open-rpc/meta-schema/-/meta-schema-1.14.2.tgz", 269 | "integrity": "sha512-vD4Nbkrb7wYFRcSQf+j228LwOy1C6/KKpy5NADlpMElGrAWPRxhTa2yTi6xG+x88OHzg2+cydQ0GAD6o40KUcg==" 270 | }, 271 | "@open-rpc/schema-utils-js": { 272 | "version": "1.16.0", 273 | "resolved": "https://registry.npmjs.org/@open-rpc/schema-utils-js/-/schema-utils-js-1.16.0.tgz", 274 | "integrity": "sha512-DkQnjjOVmRBuGA5DCxL+kWhIugYHROtcH3z5tH+RdmxIFGCaPpvAnWNkWtIH0NxVpVulUMsqaP3tmCWYjC2o7Q==", 275 | "requires": { 276 | "@json-schema-tools/dereferencer": "1.5.3", 277 | "@json-schema-tools/meta-schema": "1.6.19", 278 | "@json-schema-tools/reference-resolver": "1.2.3", 279 | "@open-rpc/meta-schema": "1.14.2", 280 | "ajv": "^6.10.0", 281 | "detect-node": "^2.0.4", 282 | "fast-safe-stringify": "^2.0.7", 283 | "fs-extra": "^9.0.0", 284 | "is-url": "^1.2.4", 285 | "isomorphic-fetch": "^3.0.0" 286 | } 287 | }, 288 | "ajv": { 289 | "version": "6.12.6", 290 | "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz", 291 | "integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==", 292 | "requires": { 293 | "fast-deep-equal": "^3.1.1", 294 | "fast-json-stable-stringify": "^2.0.0", 295 | "json-schema-traverse": "^0.4.1", 296 | "uri-js": 
"^4.2.2" 297 | } 298 | }, 299 | "at-least-node": { 300 | "version": "1.0.0", 301 | "resolved": "https://registry.npmjs.org/at-least-node/-/at-least-node-1.0.0.tgz", 302 | "integrity": "sha512-+q/t7Ekv1EDY2l6Gda6LLiX14rU9TV20Wa3ofeQmwPFZbOMo9DXrLbOjFaaclkXKWidIaopwAObQDqwWtGUjqg==" 303 | }, 304 | "detect-node": { 305 | "version": "2.1.0", 306 | "resolved": "https://registry.npmjs.org/detect-node/-/detect-node-2.1.0.tgz", 307 | "integrity": "sha512-T0NIuQpnTvFDATNuHN5roPwSBG83rFsuO+MXXH9/3N1eFbn4wcPjttvjMLEPWJ0RGUYgQE7cGgS3tNxbqCGM7g==" 308 | }, 309 | "fast-deep-equal": { 310 | "version": "3.1.3", 311 | "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz", 312 | "integrity": "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==" 313 | }, 314 | "fast-json-stable-stringify": { 315 | "version": "2.1.0", 316 | "resolved": "https://registry.npmjs.org/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.0.tgz", 317 | "integrity": "sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw==" 318 | }, 319 | "fast-safe-stringify": { 320 | "version": "2.1.1", 321 | "resolved": "https://registry.npmjs.org/fast-safe-stringify/-/fast-safe-stringify-2.1.1.tgz", 322 | "integrity": "sha512-W+KJc2dmILlPplD/H4K9l9LcAHAfPtP6BY84uVLXQ6Evcz9Lcg33Y2z1IVblT6xdY54PXYVHEv+0Wpq8Io6zkA==" 323 | }, 324 | "fs-extra": { 325 | "version": "9.1.0", 326 | "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-9.1.0.tgz", 327 | "integrity": "sha512-hcg3ZmepS30/7BSFqRvoo3DOMQu7IjqxO5nCDt+zM9XWjb33Wg7ziNT+Qvqbuc3+gWpzO02JubVyk2G4Zvo1OQ==", 328 | "requires": { 329 | "at-least-node": "^1.0.0", 330 | "graceful-fs": "^4.2.0", 331 | "jsonfile": "^6.0.1", 332 | "universalify": "^2.0.0" 333 | } 334 | }, 335 | "graceful-fs": { 336 | "version": "4.2.8", 337 | "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.8.tgz", 338 | "integrity": 
"sha512-qkIilPUYcNhJpd33n0GBXTB1MMPp14TxEsEs0pTrsSVucApsYzW5V+Q8Qxhik6KU3evy+qkAAowTByymK0avdg==" 339 | }, 340 | "is-url": { 341 | "version": "1.2.4", 342 | "resolved": "https://registry.npmjs.org/is-url/-/is-url-1.2.4.tgz", 343 | "integrity": "sha512-ITvGim8FhRiYe4IQ5uHSkj7pVaPDrCTkNd3yq3cV7iZAcJdHTUMPMEHcqSOy9xZ9qFenQCvi+2wjH9a1nXqHww==" 344 | }, 345 | "isomorphic-fetch": { 346 | "version": "3.0.0", 347 | "resolved": "https://registry.npmjs.org/isomorphic-fetch/-/isomorphic-fetch-3.0.0.tgz", 348 | "integrity": "sha512-qvUtwJ3j6qwsF3jLxkZ72qCgjMysPzDfeV240JHiGZsANBYd+EEuu35v7dfrJ9Up0Ak07D7GGSkGhCHTqg/5wA==", 349 | "requires": { 350 | "node-fetch": "^2.6.1", 351 | "whatwg-fetch": "^3.4.1" 352 | } 353 | }, 354 | "json-schema-traverse": { 355 | "version": "0.4.1", 356 | "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz", 357 | "integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==" 358 | }, 359 | "jsonfile": { 360 | "version": "6.1.0", 361 | "resolved": "https://registry.npmjs.org/jsonfile/-/jsonfile-6.1.0.tgz", 362 | "integrity": "sha512-5dgndWOriYSm5cnYaJNhalLNDKOqFwyDB/rr1E9ZsGciGvKPs8R2xYGCacuf3z6K1YKDz182fd+fY3cn3pMqXQ==", 363 | "requires": { 364 | "graceful-fs": "^4.1.6", 365 | "universalify": "^2.0.0" 366 | } 367 | }, 368 | "node-fetch": { 369 | "version": "2.6.7", 370 | "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.7.tgz", 371 | "integrity": "sha512-ZjMPFEfVx5j+y2yF35Kzx5sF7kDzxuDj6ziH4FFbOp87zKDZNx8yExJIb05OGF4Nlt9IHFIMBkRl41VdvcNdbQ==", 372 | "requires": { 373 | "whatwg-url": "^5.0.0" 374 | } 375 | }, 376 | "punycode": { 377 | "version": "2.1.1", 378 | "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.1.1.tgz", 379 | "integrity": "sha512-XRsRjdf+j5ml+y/6GKHPZbrF/8p2Yga0JPtdqTIY2Xe5ohJPD9saDJJLPvp9+NSBprVvevdXZybnj2cv8OEd0A==" 380 | }, 381 | "tr46": { 382 | "version": "0.0.3", 383 | "resolved": 
"https://registry.npmjs.org/tr46/-/tr46-0.0.3.tgz", 384 | "integrity": "sha1-gYT9NH2snNwYWZLzpmIuFLnZq2o=" 385 | }, 386 | "universalify": { 387 | "version": "2.0.0", 388 | "resolved": "https://registry.npmjs.org/universalify/-/universalify-2.0.0.tgz", 389 | "integrity": "sha512-hAZsKq7Yy11Zu1DE0OzWjw7nnLZmJZYTDZZyEFHZdUhV8FkH5MCfoU1XMaxXovpyW5nq5scPqq0ZDP9Zyl04oQ==" 390 | }, 391 | "uri-js": { 392 | "version": "4.4.1", 393 | "resolved": "https://registry.npmjs.org/uri-js/-/uri-js-4.4.1.tgz", 394 | "integrity": "sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==", 395 | "requires": { 396 | "punycode": "^2.1.0" 397 | } 398 | }, 399 | "webidl-conversions": { 400 | "version": "3.0.1", 401 | "resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-3.0.1.tgz", 402 | "integrity": "sha1-JFNCdeKnvGvnvIZhHMFq4KVlSHE=" 403 | }, 404 | "whatwg-fetch": { 405 | "version": "3.6.2", 406 | "resolved": "https://registry.npmjs.org/whatwg-fetch/-/whatwg-fetch-3.6.2.tgz", 407 | "integrity": "sha512-bJlen0FcuU/0EMLrdbJ7zOnW6ITZLrZMIarMUVmdKtsGvZna8vxKYaexICWPfZ8qwf9fzNq+UEIZrnSaApt6RA==" 408 | }, 409 | "whatwg-url": { 410 | "version": "5.0.0", 411 | "resolved": "https://registry.npmjs.org/whatwg-url/-/whatwg-url-5.0.0.tgz", 412 | "integrity": "sha1-lmRU6HZUYuN2RNNib2dCzotwll0=", 413 | "requires": { 414 | "tr46": "~0.0.3", 415 | "webidl-conversions": "^3.0.0" 416 | } 417 | } 418 | } 419 | } 420 | -------------------------------------------------------------------------------- /jsonrpc/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "portal-network-apis", 3 | "version": "0.0.1", 4 | "description": "Collection of JSON-RPC APIs provided by Portal Network clients", 5 | "main": "index.js", 6 | "type": "module", 7 | "scripts": { 8 | "build": "node scripts/build.js", 9 | "lint": "node scripts/build.js && node scripts/validate.js" 10 | }, 11 | "repository": { 12 | 
"type": "git", 13 | "url": "git+https://github.com/ethereum/portal-network-specs.git" 14 | }, 15 | "author": "Ethereum", 16 | "license": "CC0-1.0", 17 | "bugs": { 18 | "url": "https://github.com/ethereum/portal-network-specs/issues" 19 | }, 20 | "homepage": "https://github.com/ethereum/portal-network-specs#readme", 21 | "dependencies": { 22 | "@open-rpc/schema-utils-js": "^1.15.0" 23 | } 24 | } 25 | -------------------------------------------------------------------------------- /jsonrpc/scripts/build.js: -------------------------------------------------------------------------------- 1 | import fs from "fs"; 2 | import { parseOpenRPCDocument } from "@open-rpc/schema-utils-js"; 3 | 4 | console.log("Loading files...\n"); 5 | 6 | let methods = []; 7 | let methodsBase = "src/methods/"; 8 | let methodFiles = fs.readdirSync(methodsBase); 9 | methodFiles.forEach(file => { 10 | console.log(file); 11 | let raw = fs.readFileSync(methodsBase + file); 12 | let parsed = JSON.parse(raw); 13 | methods = [ 14 | ...methods, 15 | ...parsed, 16 | ]; 17 | }); 18 | 19 | let schemas = {}; 20 | let schemasBase = "src/schemas/" 21 | let schemaFiles = fs.readdirSync(schemasBase); 22 | schemaFiles.forEach(file => { 23 | console.log(file); 24 | let raw = fs.readFileSync(schemasBase + file); 25 | let parsed = JSON.parse(raw); 26 | schemas = { 27 | ...schemas, 28 | ...parsed, 29 | }; 30 | }); 31 | 32 | 33 | let content = {}; 34 | let contentBase = "src/content/" 35 | let contentFiles = fs.readdirSync(contentBase); 36 | contentFiles.forEach(file => { 37 | console.log(file); 38 | let raw = fs.readFileSync(contentBase + file); 39 | let parsed = JSON.parse(raw); 40 | content = { 41 | ...content, 42 | ...parsed, 43 | }; 44 | }); 45 | 46 | let errors = {}; 47 | let errorBase = "src/errors/" 48 | let errorFiles = fs.readdirSync(errorBase) 49 | errorFiles.forEach(file => { 50 | console.log(file); 51 | let raw = fs.readFileSync(errorBase + file); 52 | let parsed = JSON.parse(raw); 53 | errors = { 54 | 
...errors, 55 | ...parsed, 56 | }; 57 | }); 58 | 59 | let spec = await parseOpenRPCDocument({ 60 | openrpc: "1.2.4", 61 | info: { 62 | title: "Portal Network JSON-RPC Specification", 63 | description: "A specification of the standard interface for Portal Network clients.", 64 | license: { 65 | name: "CC0-1.0", 66 | url: "https://creativecommons.org/publicdomain/zero/1.0/legalcode" 67 | }, 68 | version: "0.0.1" 69 | }, 70 | methods: methods, 71 | components: { 72 | contentDescriptors: content, 73 | schemas: schemas, 74 | errors: errors 75 | } 76 | }, 77 | {dereference: false}) 78 | 79 | let data = JSON.stringify(spec, null, '\t'); 80 | fs.writeFileSync('openrpc.json', data); 81 | 82 | console.log(); 83 | console.log("Build successful."); 84 | -------------------------------------------------------------------------------- /jsonrpc/scripts/debug.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -o xtrace 4 | curl -s http://localhost:8545 -H 'Content-Type: application/json' -d '{"method":"'$1'","id":1,"jsonrpc":"2.0", "params":['$2']}' | jq '.["result"]' > data.json 2>&1 5 | cat openrpc.json | jq '.["methods"][] | select(.name == "'$1'") | .["result"]["schema"]["errors"]' > schema.json 6 | ajv validate -s schema.json -d data.json 7 | # rm schema.json data.json 8 | -------------------------------------------------------------------------------- /jsonrpc/scripts/validate.js: -------------------------------------------------------------------------------- 1 | import fs from "fs"; 2 | import { 3 | parseOpenRPCDocument, 4 | validateOpenRPCDocument 5 | } from "@open-rpc/schema-utils-js"; 6 | 7 | let rawdata = fs.readFileSync("openrpc.json"); 8 | let openrpc = JSON.parse(rawdata); 9 | 10 | const error = validateOpenRPCDocument(openrpc); 11 | if (error != true) { 12 | console.log(error.name); 13 | console.log(error.message); 14 | process.exit(1); 15 | } 16 | 17 | try { 18 | await 
Promise.resolve(parseOpenRPCDocument(openrpc)); 19 | } catch(e) { 20 | console.log(e.name); 21 | let end = e.message.indexOf("schema in question"); 22 | let msg = e.message.substring(0, end); 23 | console.log(msg); 24 | process.exit(1); 25 | } 26 | 27 | console.log("OpenRPC spec validated successfully."); 28 | -------------------------------------------------------------------------------- /jsonrpc/src/content/params.json: -------------------------------------------------------------------------------- 1 | { 2 | "ContentKey": { 3 | "name": "contentKey", 4 | "description": "The encoded Portal content key", 5 | "required": true, 6 | "schema": { 7 | "$ref": "#/components/schemas/hexString" 8 | } 9 | }, 10 | "ContentValue": { 11 | "name": "contentValue", 12 | "description": "The encoded Portal content value", 13 | "required": true, 14 | "schema": { 15 | "$ref": "#/components/schemas/hexString" 16 | } 17 | }, 18 | "ProtocolId": { 19 | "name": "protocolId", 20 | "required": true, 21 | "description": "The protocol element of discv5 TALKREQ message-data", 22 | "schema": { 23 | "title": "protocol id", 24 | "$ref": "#/components/schemas/hexString" 25 | } 26 | }, 27 | "TalkReqPayload": { 28 | "name": "talkReqPayload", 29 | "required": true, 30 | "description": "The request element of discv5 TALKREQ message-data", 31 | "schema": { 32 | "$ref": "#/components/schemas/hexString" 33 | } 34 | }, 35 | "Distances": { 36 | "name": "distances", 37 | "required": true, 38 | "schema": { 39 | "title": "distance", 40 | "type": "array", 41 | "items": { 42 | "type": "number" 43 | } 44 | } 45 | }, 46 | "ContentItems": { 47 | "name": "content_items", 48 | "required": true, 49 | "schema": { 50 | "title": "content_item", 51 | "type": "array", 52 | "items": { 53 | "$ref": "#/components/schemas/ContentItem" 54 | }, 55 | "minItems": 1, 56 | "maxItems": 64 57 | } 58 | }, 59 | "Enr": { 60 | "name": "enr", 61 | "required": true, 62 | "schema": { 63 | "title": "Ethereum node record", 64 | "$ref": 
"#/components/schemas/Enr" 65 | } 66 | }, 67 | "EnrSeq": { 68 | "name": "enrSeq", 69 | "schema": { 70 | "title": "The ENR sequence number", 71 | "type": "number" 72 | } 73 | }, 74 | "NodeId": { 75 | "name": "nodeId", 76 | "required": true, 77 | "schema": { 78 | "title": "NodeId", 79 | "$ref": "#/components/schemas/bytes32" 80 | } 81 | }, 82 | "DataRadius": { 83 | "name": "dataRadius", 84 | "description": "Data radius value", 85 | "schema": { 86 | "$ref": "#/components/schemas/DataRadius" 87 | } 88 | }, 89 | "PayloadType": { 90 | "name": "payloadType", 91 | "description": "The type of payload. If the payloadType is specified without a payload, the client will generate the default payload for that payloadType.", 92 | "schema": { 93 | "description": "Numeric identifier which tells clients how the payload field should be decoded", 94 | "type": "number" 95 | } 96 | }, 97 | "Payload": { 98 | "name": "payload", 99 | "description": "The JSON encoded extension payload. Requires the payload_type to be specified.", 100 | "schema": { 101 | "description": "The JSON encoded extension payload.", 102 | "type": "object", 103 | "oneOf": [ 104 | { "$ref": "#/components/schemas/ClientInfoAndCapabilities" }, 105 | { "$ref": "#/components/schemas/BasicRadius" }, 106 | { "$ref": "#/components/schemas/HistoryRadius" }, 107 | { "$ref": "#/components/schemas/UnknownPayload" } 108 | ] 109 | } 110 | } 111 | } 112 | -------------------------------------------------------------------------------- /jsonrpc/src/content/results.json: -------------------------------------------------------------------------------- 1 | { 2 | "AddEnrResult": { 3 | "name": "addEnrResult", 4 | "description": "Returns boolean if the node record has been successfully saved.", 5 | "schema": { 6 | "type": "boolean" 7 | } 8 | }, 9 | "BeaconOptimisticStateRootResult": { 10 | "name": "optimisticStateRootResult", 11 | "description": "Returns the hex encoded optimistic beacon state root. 
If the beacon client is not synced, an error is returned.", 12 | "schema": { 13 | "title": "Hex encoded optimistic beacon state root", 14 | "$ref": "#/components/schemas/bytes32" 15 | } 16 | }, 17 | "BeaconFinalizedStateRootResult": { 18 | "name": "finalizedStateRootResult", 19 | "description": "Returns the hex encoded finalized beacon state root. If the beacon client is not synced, an error is returned.", 20 | "schema": { 21 | "title": "Hex encoded finalized beacon state root", 22 | "$ref": "#/components/schemas/bytes32" 23 | } 24 | }, 25 | "StoreResult": { 26 | "name": "storeResult", 27 | "description": "Returns \"true\" upon success", 28 | "schema": { 29 | "type": "boolean" 30 | } 31 | }, 32 | "DeleteEnrResult": { 33 | "name": "deleteEnrResult", 34 | "description": "Returns boolean upon successful deletion of the node record.", 35 | "schema": { 36 | "type": "boolean" 37 | } 38 | }, 39 | "FindContentResult": { 40 | "name": "FindContentResult", 41 | "description": "Returns either the requested content, received directly from the CONTENT message or transferred over uTP, or, in case the recipient does not have the content, a list of ENR records of nodes that are closer than the recipient is to the requested content.", 42 | "schema": { 43 | "title": "FindContentResult", 44 | "type": "object", 45 | "oneOf": [ 46 | { 47 | "title": "ContentInfo", 48 | "type": "object", 49 | "required": [ 50 | "content", 51 | "utpTransfer" 52 | ], 53 | "properties": { 54 | "content": { 55 | "title": "Requested content", 56 | "description": "Encoded requested content", 57 | "$ref": "#/components/schemas/hexString" 58 | }, 59 | "utpTransfer": { 60 | "description": "Indicates whether the content was transferred over a uTP connection or not.", 61 | "type": "boolean" 62 | } 63 | } 64 | }, 65 | { 66 | "title": "ENRs", 67 | "description": "List of ENR records of nodes that are closer than the recipient is to the requested content", 68 | "type": "object", 69 | "required": [ 70 | "enrs" 71 | ], 72 |
"properties": { 73 | "enrs": { 74 | "type": "array", 75 | "items": { 76 | "$ref": "#/components/schemas/Enr" 77 | } 78 | } 79 | } 80 | } 81 | ] 82 | } 83 | }, 84 | "FindNodeResult": { 85 | "name": "findNodeResult", 86 | "description": "Returns nodes in a given distance", 87 | "schema": { 88 | "title": "NODES message", 89 | "type": "array", 90 | "items": { 91 | "$ref": "#/components/schemas/Enr" 92 | } 93 | } 94 | }, 95 | "GetEnrResult": { 96 | "name": "getEnrResult", 97 | "description": "Returns latest ENR associated with the given node ID.", 98 | "schema": { 99 | "title": "Ethereum node record", 100 | "$ref": "#/components/schemas/Enr" 101 | } 102 | }, 103 | "LookupEnrResult": { 104 | "name": "lookupEnrResult", 105 | "description": "Returns ENR associated with the given node ID", 106 | "schema": { 107 | "title": "Ethereum node record", 108 | "$ref": "#/components/schemas/Enr" 109 | } 110 | }, 111 | "OfferResult": { 112 | "name": "offerResult", 113 | "description": "Returns the accepted content keys bytelist upon successful content transmission or no transmission in case of empty bytelist receival. Return error on response or transmission errors.", 114 | "schema": { 115 | "title": "Encoded content keys bytelist", 116 | "$ref": "#/components/schemas/hexString" 117 | } 118 | }, 119 | "PingResult": { 120 | "name": "pingResult", 121 | "description": "Returns PONG response", 122 | "schema": { 123 | "title": "PONG message", 124 | "type": "object", 125 | "required": [ 126 | "enrSeq", 127 | "payloadType", 128 | "payload" 129 | ], 130 | "properties": { 131 | "enrSeq": { 132 | "description": "ENR sequence number of sender", 133 | "type": "number" 134 | }, 135 | "payloadType": { 136 | "description": "numeric identifier which tells clients how the payload field should be decoded", 137 | "type": "number" 138 | }, 139 | "payload": { 140 | "description": "The JSON encoded extension payload. 
These are examples of possible payloads; check the documentation for which payloads are supported.", 141 | "type": "object", 142 | "oneOf": [ 143 | { "$ref": "#/components/schemas/ClientInfoAndCapabilities" }, 144 | { "$ref": "#/components/schemas/BasicRadius" }, 145 | { "$ref": "#/components/schemas/HistoryRadius" }, 146 | { "$ref": "#/components/schemas/UnknownPayload" } 147 | ] 148 | } 149 | } 150 | } 151 | }, 152 | "RecursiveFindNodesResult": { 153 | "name": "recursiveFindNodesResult", 154 | "description": "Up to 16 ENRs of the closest nodes, sorted by distance.", 155 | "schema": { 156 | "type": "array", 157 | "items": { 158 | "$ref": "#/components/schemas/Enr" 159 | } 160 | } 161 | }, 162 | "RoutingTableInfoResult": { 163 | "name": "routingTableInfoResult", 164 | "description": "Routing table information for the network", 165 | "required": true, 166 | "schema": { 167 | "title": "routingTableInfoResults", 168 | "description": "Routing table details", 169 | "type": "object", 170 | "required": [ 171 | "localNodeId", 172 | "buckets" 173 | ], 174 | "properties": { 175 | "localNodeId": { 176 | "title": "nodeId", 177 | "description": "The key identifying the local peer that owns the routing table.", 178 | "$ref": "#/components/schemas/bytes32" 179 | }, 180 | "buckets": { 181 | "title": "kBucketsTable", 182 | "description": "Represents a Kademlia routing table.", 183 | "$ref": "#/components/schemas/kBuckets" 184 | } 185 | } 186 | } 187 | }, 188 | "GetContentResult": { 189 | "name": "GetContentResult", 190 | "description": "Returns the hex encoded content value and uTP transfer flag. 
If the content is not available, returns \"0x\"", 191 | "schema": { 192 | "type": "object", 193 | "required": [ 194 | "content", 195 | "utpTransfer" 196 | ], 197 | "properties": { 198 | "content": { 199 | "description": "Hex encoded content value", 200 | "$ref": "#/components/schemas/hexString" 201 | }, 202 | "utpTransfer": { 203 | "description": "Indicates whether the content was transferred over a uTP connection or not.", 204 | "type": "boolean" 205 | } 206 | } 207 | } 208 | }, 209 | "TraceGetContentResult": { 210 | "name": "TraceGetContentResult", 211 | "description": "Returns the hex encoded content value and trace data object. If the content is not available, returns \"0x\"", 212 | "schema": { 213 | "type": "object", 214 | "required": [ 215 | "content", 216 | "utpTransfer", 217 | "trace" 218 | ], 219 | "properties": { 220 | "content": { 221 | "description": "Hex encoded content value", 222 | "$ref": "#/components/schemas/hexString" 223 | }, 224 | "utpTransfer": { 225 | "description": "Indicates whether the content was transferred over a uTP connection or not.", 226 | "type": "boolean" 227 | }, 228 | "trace": { 229 | "description": "Contains trace data for the request.", 230 | "$ref": "#/components/schemas/traceResultObject" 231 | } 232 | } 233 | } 234 | }, 235 | "LocalContentResult": { 236 | "name": "LocalContentResult", 237 | "description": "Returns the hex encoded content value.", 238 | "schema": { 239 | "$ref": "#/components/schemas/hexString" 240 | } 241 | }, 242 | "PutContentResult": { 243 | "name": "PutContentResult", 244 | "description": "Returns the number of peers that the content was gossiped to and a flag indicating whether the content was stored locally or not.", 245 | "schema": { 246 | "type": "object", 247 | "required": [ 248 | "peerCount", 249 | "storedLocally" 250 | ], 251 | "properties": { 252 | "peerCount": { 253 | "description": "Indicates how many peers the content was gossiped to.", 254 | "type": "number" 255 | }, 256 | "storedLocally": { 
257 | "description": "Indicates whether the content was stored locally or not.", 258 | "type": "boolean" 259 | } 260 | } 261 | } 262 | } 263 | } 264 | -------------------------------------------------------------------------------- /jsonrpc/src/errors/errors.json: -------------------------------------------------------------------------------- 1 | { 2 | "ContentNotFoundError": { 3 | "code": -39001, 4 | "message": "content not found" 5 | }, 6 | "ContentNotFoundErrorWithTrace": { 7 | "code": -39002, 8 | "message": "content not found", 9 | "data": { 10 | "$ref": "#/components/schemas/traceResultObject" 11 | } 12 | }, 13 | "BeaconClientNotInitializedError": { 14 | "code": -39003, 15 | "message": "beacon client not initialized" 16 | }, 17 | "PayloadTypeNotSupportedError": { 18 | "code": -39004, 19 | "message": "Payload type not supported", 20 | "data": { 21 | "reason": { 22 | "type": "string", 23 | "description": "Indicates whether the payload type is unsupported by the subnetwork or the client.", 24 | "enum": ["subnetwork", "client"] 25 | } 26 | } 27 | }, 28 | "FailedToDecodePayloadError": { 29 | "code": -39005, 30 | "message": "Failed to decode payload" 31 | }, 32 | "PayloadTypeRequiredError": { 33 | "code": -39006, 34 | "message": "Payload type is required if payload is specified" 35 | }, 36 | "UserSpecifiedPayloadBlockedByClientError": { 37 | "code": -39007, 38 | "message": "The client has blocked users from specifying the payload for this extension" 39 | } 40 | } 41 | -------------------------------------------------------------------------------- /jsonrpc/src/methods/beacon.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "name": "portal_beaconRoutingTableInfo", 4 | "summary": "Returns meta information about beacon network routing table.", 5 | "params": [], 6 | "result": { 7 | "$ref": "#/components/contentDescriptors/RoutingTableInfoResult" 8 | } 9 | }, 10 | { 11 | "name": "portal_beaconAddEnr", 12 | "summary": 
"Write an ethereum node record to the routing table.", 13 | "params": [ 14 | { 15 | "$ref": "#/components/contentDescriptors/Enr" 16 | } 17 | ], 18 | "result": { 19 | "$ref": "#/components/contentDescriptors/AddEnrResult" 20 | } 21 | }, 22 | { 23 | "name": "portal_beaconGetEnr", 24 | "summary": "Fetch from the local node the latest ENR associated with the given NodeId", 25 | "params": [ 26 | { 27 | "$ref": "#/components/contentDescriptors/NodeId" 28 | } 29 | ], 30 | "result": { 31 | "$ref": "#/components/contentDescriptors/GetEnrResult" 32 | } 33 | }, 34 | { 35 | "name": "portal_beaconDeleteEnr", 36 | "summary": "Delete a Node ID from the routing table", 37 | "params": [ 38 | { 39 | "$ref": "#/components/contentDescriptors/NodeId" 40 | } 41 | ], 42 | "result": { 43 | "$ref": "#/components/contentDescriptors/DeleteEnrResult" 44 | } 45 | }, 46 | { 47 | "name": "portal_beaconLookupEnr", 48 | "summary": "Fetch from the DHT the latest ENR associated with the given NodeId", 49 | "params": [ 50 | { 51 | "$ref": "#/components/contentDescriptors/NodeId" 52 | } 53 | ], 54 | "result": { 55 | "$ref": "#/components/contentDescriptors/LookupEnrResult" 56 | } 57 | }, 58 | { 59 | "name": "portal_beaconPing", 60 | "summary": "Send a PING message to the designated node and wait for a PONG response. By default, the client will send a capabilities extension payload. If the payloadType is specified without a payload, the client will generate the default payload for that payloadType. If the payload is specified, the client will send the specified payload. The payload requires the payloadType to be specified, or a -39006 error will be thrown. If the payloadType isn't supported by the client or the subnetwork, a -39004 error will be thrown. 
If the specified payload fails to decode, a -39005 error will be thrown.", 61 | "params": [ 62 | { 63 | "$ref": "#/components/contentDescriptors/Enr" 64 | }, 65 | { 66 | "$ref": "#/components/contentDescriptors/PayloadType" 67 | }, 68 | { 69 | "$ref": "#/components/contentDescriptors/Payload" 70 | } 71 | ], 72 | "result": { 73 | "$ref": "#/components/contentDescriptors/PingResult" 74 | }, 75 | "errors": [ 76 | { 77 | "$ref": "#/components/errors/PayloadTypeNotSupportedError" 78 | }, 79 | { 80 | "$ref": "#/components/errors/FailedToDecodePayloadError" 81 | }, 82 | { 83 | "$ref": "#/components/errors/PayloadTypeRequiredError" 84 | }, 85 | { 86 | "$ref": "#/components/errors/UserSpecifiedPayloadBlockedByClientError" 87 | } 88 | ], 89 | "examples": [ 90 | { 91 | "name": "Only specifying the ENR", 92 | "description": "A successful PING request and response specifying only the ENR.", 93 | "params": [ 94 | { 95 | "name": "enr", 96 | "value": "enr:-IS4QIa7W1_Yvvl5OJ2Pp3Gp1dJ3..." 97 | } 98 | ], 99 | "result": { 100 | "name": "pingResult", 101 | "value": { 102 | "enrSeq": 1, 103 | "payloadType": 0, 104 | "payload": { 105 | "clientInfo": "trin/v0.1.1-b61fdc5c/linux-x86_64/rustc1.81.0", 106 | "dataRadius": "0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", 107 | "capabilities": [0, 1, 65535] 108 | } 109 | } 110 | } 111 | }, 112 | { 113 | "name": "Only specifying the ENR and the payload_type", 114 | "description": "A successful PING request and response specifying only the ENR and the payloadType; the payload is generated from the payloadType.", 115 | "params": [ 116 | { 117 | "name": "enr", 118 | "value": "enr:-IS4QIa7W1_Yvvl5OJ2Pp3Gp1dJ3..." 
119 | }, 120 | { 121 | "name": "payloadType", 122 | "value": 0 123 | } 124 | ], 125 | "result": { 126 | "name": "pingResult", 127 | "value": { 128 | "enrSeq": 1, 129 | "payloadType": 0, 130 | "payload": { 131 | "clientInfo": "trin/v0.1.1-b61fdc5c/linux-x86_64/rustc1.81.0", 132 | "dataRadius": "0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", 133 | "capabilities": [0, 1, 65535] 134 | } 135 | } 136 | } 137 | }, 138 | { 139 | "name": "ClientInfoAndCapabilities Example", 140 | "description": "A successful PING request and response using the ClientInfoAndCapabilities payload.", 141 | "params": [ 142 | { 143 | "name": "enr", 144 | "value": "enr:-IS4QIa7W1_Yvvl5OJ2Pp3Gp1dJ3..." 145 | }, 146 | { 147 | "name": "payloadType", 148 | "value": 0 149 | }, 150 | { 151 | "name": "payload", 152 | "value": { 153 | "clientInfo": "trin/v0.1.1-b61fdc5c/linux-x86_64/rustc1.81.0", 154 | "dataRadius": "0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", 155 | "capabilities": [0, 1, 65535] 156 | } 157 | } 158 | ], 159 | "result": { 160 | "name": "pingResult", 161 | "value": { 162 | "enrSeq": 1, 163 | "payloadType": 0, 164 | "payload": { 165 | "clientInfo": "trin/v0.1.1-b61fdc5c/linux-x86_64/rustc1.81.0", 166 | "dataRadius": "0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", 167 | "capabilities": [0, 1, 65535] 168 | } 169 | } 170 | } 171 | } 172 | ] 173 | }, 174 | { 175 | "name": "portal_beaconFindNodes", 176 | "summary": "Send a FINDNODES request for nodes that fall within the given set of distances, to the designated peer and wait for a response.", 177 | "params": [ 178 | { 179 | "$ref": "#/components/contentDescriptors/Enr" 180 | }, 181 | { 182 | "$ref": "#/components/contentDescriptors/Distances" 183 | } 184 | ], 185 | "result": { 186 | "$ref": "#/components/contentDescriptors/FindNodeResult" 187 | } 188 | }, 189 | { 190 | "name": "portal_beaconFindContent", 191 | "summary": "Send a FINDCONTENT message to get the content with a 
content key.", 192 | "params": [ 193 | { 194 | "$ref": "#/components/contentDescriptors/Enr" 195 | }, 196 | { 197 | "$ref": "#/components/contentDescriptors/ContentKey" 198 | } 199 | ], 200 | "result": { 201 | "$ref": "#/components/contentDescriptors/FindContentResult" 202 | } 203 | }, 204 | { 205 | "name": "portal_beaconOffer", 206 | "summary": "Send an OFFER request with a given array of content items (keys & values), to the designated peer and wait for a response. The client MUST return an error if more than 64 content items are provided or fewer than 1 content item is provided.", 207 | "params": [ 208 | { 209 | "$ref": "#/components/contentDescriptors/Enr" 210 | }, 211 | { 212 | "$ref": "#/components/contentDescriptors/ContentItems" 213 | } 214 | ], 215 | "result": { 216 | "$ref": "#/components/contentDescriptors/OfferResult" 217 | } 218 | }, 219 | { 220 | "name": "portal_beaconOptimisticStateRoot", 221 | "summary": "Get the latest known optimistic beacon state root.", 222 | "params": [], 223 | "result": { 224 | "$ref": "#/components/contentDescriptors/BeaconOptimisticStateRootResult" 225 | }, 226 | "errors": [ 227 | { 228 | "$ref": "#/components/errors/BeaconClientNotInitializedError" 229 | } 230 | ] 231 | }, 232 | { 233 | "name": "portal_beaconFinalizedStateRoot", 234 | "summary": "Get the latest known finalized beacon state root.", 235 | "params": [], 236 | "result": { 237 | "$ref": "#/components/contentDescriptors/BeaconFinalizedStateRootResult" 238 | }, 239 | "errors": [ 240 | { 241 | "$ref": "#/components/errors/BeaconClientNotInitializedError" 242 | } 243 | ] 244 | }, 245 | { 246 | "name": "portal_beaconRecursiveFindNodes", 247 | "summary": "Look up the ENRs closest to the given target that are members of the beacon network", 248 | "params": [ 249 | { 250 | "$ref": "#/components/contentDescriptors/NodeId" 251 | } 252 | ], 253 | "result": { 254 | "$ref": "#/components/contentDescriptors/RecursiveFindNodesResult" 255 | } 256 | }, 257 | { 258 | "name": 
"portal_beaconGetContent", 259 | "summary": "Get content from the local database if it exists, otherwise look up the target content key in the network. After fetching from the network, the content is stored in the local database if storage criteria are met before being returned.", 260 | "params": [ 261 | { 262 | "$ref": "#/components/contentDescriptors/ContentKey" 263 | } 264 | ], 265 | "result": { 266 | "$ref": "#/components/contentDescriptors/GetContentResult" 267 | }, 268 | "errors": [ 269 | { 270 | "$ref": "#/components/errors/ContentNotFoundError" 271 | } 272 | ] 273 | }, 274 | { 275 | "name": "portal_beaconTraceGetContent", 276 | "summary": "Get content as defined in portal_beaconGetContent and get additional tracing data", 277 | "params": [ 278 | { 279 | "$ref": "#/components/contentDescriptors/ContentKey" 280 | } 281 | ], 282 | "result": { 283 | "$ref": "#/components/contentDescriptors/TraceGetContentResult" 284 | }, 285 | "errors": [ 286 | { 287 | "$ref": "#/components/errors/ContentNotFoundErrorWithTrace" 288 | } 289 | ] 290 | }, 291 | { 292 | "name": "portal_beaconStore", 293 | "summary": "Store beacon content key with content data", 294 | "params": [ 295 | { 296 | "$ref": "#/components/contentDescriptors/ContentKey" 297 | }, 298 | { 299 | "$ref": "#/components/contentDescriptors/ContentValue" 300 | } 301 | ], 302 | "result": { 303 | "$ref": "#/components/contentDescriptors/StoreResult" 304 | } 305 | }, 306 | { 307 | "name": "portal_beaconLocalContent", 308 | "summary": "Get content from the local database", 309 | "params": [ 310 | { 311 | "$ref": "#/components/contentDescriptors/ContentKey" 312 | } 313 | ], 314 | "result": { 315 | "$ref": "#/components/contentDescriptors/LocalContentResult" 316 | }, 317 | "errors": [ 318 | { 319 | "$ref": "#/components/errors/ContentNotFoundError" 320 | } 321 | ] 322 | }, 323 | { 324 | "name": "portal_beaconPutContent", 325 | "summary": "Store the content in the local database if storage criteria are met, then send the 
content to `n` interested peers using the client's default gossip mechanisms. If fewer than `n` interested nodes are found in the local DHT, the client launches a node lookup with target `content-id` and offers the content to at most `n` of the newly discovered nodes.", 326 | "params": [ 327 | { 328 | "$ref": "#/components/contentDescriptors/ContentKey" 329 | }, 330 | { 331 | "$ref": "#/components/contentDescriptors/ContentValue" 332 | } 333 | ], 334 | "result": { 335 | "$ref": "#/components/contentDescriptors/PutContentResult" 336 | } 337 | } 338 | ] 339 | -------------------------------------------------------------------------------- /jsonrpc/src/methods/discv5.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "name": "discv5_nodeInfo", 4 | "summary": "Returns ENR and nodeId information of the local discv5 node.", 5 | "params": [], 6 | "result": { 7 | "name": "nodeInfoResult", 8 | "description": "Local node information", 9 | "required": true, 10 | "schema": { 11 | "title": "nodeInfoResults", 12 | "description": "ENR and NodeId of the local peer", 13 | "type": "object", 14 | "required": [ 15 | "enr", 16 | "nodeId" 17 | ], 18 | "properties": { 19 | "enr": { 20 | "title": "nodeENR", 21 | "description": "URL-safe base64 encoded \"text\" version of the ENR prefixed by \"enr:\".", 22 | "$ref": "#/components/schemas/Enr" 23 | }, 24 | "nodeId": { 25 | "title": "nodeId", 26 | "description": "Hex encoded `NodeId` of an ENR (a 32 byte identifier).", 27 | "$ref": "#/components/schemas/bytes32" 28 | } 29 | } 30 | } 31 | } 32 | }, 33 | { 34 | "name": "discv5_updateNodeInfo", 35 | "summary": "Add, update, or remove a key-value pair from the local node record", 36 | "params": [ 37 | { 38 | "name": "socketAddr", 39 | "required": true, 40 | "schema": { 41 | "title": "ENR socket address", 42 | "$ref": "#/components/schemas/socketAddr" 43 | } 44 | }, 45 | { 46 | "name": "isTcp", 47 | "description": "TCP or UDP 
socket", 48 | "schema": { 49 | "type": "boolean" 50 | } 51 | } 52 | ], 53 | "result": { 54 | "name": "nodeInfoResult", 55 | "description": "Local node information", 56 | "required": true, 57 | "schema": { 58 | "title": "nodeInfoResults", 59 | "description": "ENR and NodeId of the local peer", 60 | "type": "object", 61 | "required": [ 62 | "enr", 63 | "localNodeId" 64 | ], 65 | "properties": { 66 | "enr": { 67 | "title": "nodeENR", 68 | "description": "URL-safe base64 encoded \"text\" version of the ENR prefixed by \"enr:\".", 69 | "$ref": "#/components/schemas/Enr" 70 | }, 71 | "localNodeId": { 72 | "title": "nodeId", 73 | "description": "Hex encoded `NodeId` of an ENR (a 32 byte identifier).", 74 | "$ref": "#/components/schemas/bytes32" 75 | } 76 | } 77 | } 78 | } 79 | }, 80 | { 81 | "name": "discv5_routingTableInfo", 82 | "summary": "Returns meta information about discv5 routing table.", 83 | "params": [], 84 | "result": { 85 | "$ref": "#/components/contentDescriptors/RoutingTableInfoResult" 86 | } 87 | }, 88 | { 89 | "name": "discv5_addEnr", 90 | "summary": "Write an ethereum node record to the routing table.", 91 | "params": [ 92 | { 93 | "$ref": "#/components/contentDescriptors/Enr" 94 | } 95 | ], 96 | "result": { 97 | "$ref": "#/components/contentDescriptors/AddEnrResult" 98 | } 99 | }, 100 | { 101 | "name": "discv5_getEnr", 102 | "summary": "Fetch from the local node the latest ENR associated with the given NodeId", 103 | "params": [ 104 | { 105 | "$ref": "#/components/contentDescriptors/NodeId" 106 | } 107 | ], 108 | "result": { 109 | "$ref": "#/components/contentDescriptors/GetEnrResult" 110 | } 111 | }, 112 | { 113 | "name": "discv5_deleteEnr", 114 | "summary": "Delete a Node ID from the routing table", 115 | "params": [ 116 | { 117 | "$ref": "#/components/contentDescriptors/NodeId" 118 | } 119 | ], 120 | "result": { 121 | "$ref": "#/components/contentDescriptors/DeleteEnrResult" 122 | } 123 | }, 124 | { 125 | "name": "discv5_lookupEnr", 126 | 
"summary": "Fetch from the DHT the latest ENR associated with the given NodeId", 127 | "params": [ 128 | { 129 | "$ref": "#/components/contentDescriptors/NodeId" 130 | } 131 | ], 132 | "result": { 133 | "$ref": "#/components/contentDescriptors/LookupEnrResult" 134 | } 135 | }, 136 | { 137 | "name": "discv5_ping", 138 | "summary": "Send a PING message to the designated node and wait for a PONG response.", 139 | "params": [ 140 | { 141 | "$ref": "#/components/contentDescriptors/Enr" 142 | } 143 | ], 144 | "result": { 145 | "name": "pingResult", 146 | "description": "Returns PONG response", 147 | "schema": { 148 | "title": "PONG message", 149 | "type": "object", 150 | "required": [ 151 | "enrSeq", 152 | "recipientIP", 153 | "recipientPort" 154 | ], 155 | "properties": { 156 | "enrSeq": { 157 | "description": "ENR sequence number of sender", 158 | "type": "number" 159 | }, 160 | "recipientIP": { 161 | "description": "IP address of the intended recipient", 162 | "$ref": "#/components/schemas/ipAddr" 163 | }, 164 | "recipientPort": { 165 | "description": "recipient UDP port", 166 | "$ref": "#/components/schemas/udpPort" 167 | } 168 | } 169 | } 170 | } 171 | }, 172 | { 173 | "name": "discv5_findNode", 174 | "summary": "Send a FINDNODE request for nodes that fall within the given set of distances, to the designated peer and wait for a response.", 175 | "params": [ 176 | { 177 | "$ref": "#/components/contentDescriptors/Enr" 178 | }, 179 | { 180 | "$ref": "#/components/contentDescriptors/Distances" 181 | } 182 | ], 183 | "result": { 184 | "$ref": "#/components/contentDescriptors/FindNodeResult" 185 | } 186 | }, 187 | { 188 | "name": "discv5_talkReq", 189 | "summary": "Send a TALKREQ request with a payload to a given peer and wait for response.", 190 | "params": [ 191 | { 192 | "$ref": "#/components/contentDescriptors/Enr" 193 | }, 194 | { 195 | "$ref": "#/components/contentDescriptors/ProtocolId" 196 | }, 197 | { 198 | "$ref": "#/components/contentDescriptors/TalkReqPayload" 
199 | } 200 | ], 201 | "result": { 202 | "name": "talkResult", 203 | "description": "Returns TALKRESP message as hex string", 204 | "schema": { 205 | "$ref": "#/components/schemas/hexString" 206 | } 207 | } 208 | }, 209 | { 210 | "name": "discv5_recursiveFindNodes", 211 | "summary": "Look up ENRs closest to the given target", 212 | "params": [ 213 | { 214 | "$ref": "#/components/contentDescriptors/NodeId" 215 | } 216 | ], 217 | "result": { 218 | "$ref": "#/components/contentDescriptors/RecursiveFindNodesResult" 219 | } 220 | } 221 | ] 222 | -------------------------------------------------------------------------------- /jsonrpc/src/methods/history.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "name": "portal_historyRoutingTableInfo", 4 | "summary": "Returns meta information about history network routing table.", 5 | "params": [], 6 | "result": { 7 | "$ref": "#/components/contentDescriptors/RoutingTableInfoResult" 8 | } 9 | }, 10 | { 11 | "name": "portal_historyAddEnr", 12 | "summary": "Write an ethereum node record to the routing table.", 13 | "params": [ 14 | { 15 | "$ref": "#/components/contentDescriptors/Enr" 16 | } 17 | ], 18 | "result": { 19 | "$ref": "#/components/contentDescriptors/AddEnrResult" 20 | } 21 | }, 22 | { 23 | "name": "portal_historyGetEnr", 24 | "summary": "Fetch from the local node the latest ENR associated with the given NodeId", 25 | "params": [ 26 | { 27 | "$ref": "#/components/contentDescriptors/NodeId" 28 | } 29 | ], 30 | "result": { 31 | "$ref": "#/components/contentDescriptors/GetEnrResult" 32 | } 33 | }, 34 | { 35 | "name": "portal_historyDeleteEnr", 36 | "summary": "Delete a Node ID from the routing table", 37 | "params": [ 38 | { 39 | "$ref": "#/components/contentDescriptors/NodeId" 40 | } 41 | ], 42 | "result": { 43 | "$ref": "#/components/contentDescriptors/DeleteEnrResult" 44 | } 45 | }, 46 | { 47 | "name": "portal_historyLookupEnr", 48 | "summary": "Fetch from the DHT the 
latest ENR associated with the given NodeId", 49 | "params": [ 50 | { 51 | "$ref": "#/components/contentDescriptors/NodeId" 52 | } 53 | ], 54 | "result": { 55 | "$ref": "#/components/contentDescriptors/LookupEnrResult" 56 | } 57 | }, 58 | { 59 | "name": "portal_historyPing", 60 | "summary": "Send a PING message to the designated node and wait for a PONG response. By default, the client will send a capabilities extension payload. If the payloadType is specified without a payload, the client will generate the default payload for that payloadType. If the payload is specified, the client will send the specified payload. The payload requires the payloadType to be specified, or a -39006 error will be thrown. If the payloadType isn't supported by the client or the subnetwork, a -39004 error will be thrown. If the specified payload fails to decode, a -39005 error will be thrown.", 61 | "params": [ 62 | { 63 | "$ref": "#/components/contentDescriptors/Enr" 64 | }, 65 | { 66 | "$ref": "#/components/contentDescriptors/PayloadType" 67 | }, 68 | { 69 | "$ref": "#/components/contentDescriptors/Payload" 70 | } 71 | ], 72 | "result": { 73 | "$ref": "#/components/contentDescriptors/PingResult" 74 | }, 75 | "errors": [ 76 | { 77 | "$ref": "#/components/errors/PayloadTypeNotSupportedError" 78 | }, 79 | { 80 | "$ref": "#/components/errors/FailedToDecodePayloadError" 81 | }, 82 | { 83 | "$ref": "#/components/errors/PayloadTypeRequiredError" 84 | }, 85 | { 86 | "$ref": "#/components/errors/UserSpecifiedPayloadBlockedByClientError" 87 | } 88 | ], 89 | "examples": [ 90 | { 91 | "name": "Only specifying the ENR", 92 | "description": "A successful PING request and response specifying only the ENR.", 93 | "params": [ 94 | { 95 | "name": "enr", 96 | "value": "enr:-IS4QIa7W1_Yvvl5OJ2Pp3Gp1dJ3..." 
97 | } 98 | ], 99 | "result": { 100 | "name": "pingResult", 101 | "value": { 102 | "enrSeq": 1, 103 | "payloadType": 0, 104 | "payload": { 105 | "clientInfo": "trin/v0.1.1-b61fdc5c/linux-x86_64/rustc1.81.0", 106 | "dataRadius": "0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", 107 | "capabilities": [0, 1, 65535] 108 | } 109 | } 110 | } 111 | }, 112 | { 113 | "name": "Only specifying the ENR and the payload_type", 114 | "description": "A successful PING request and response specifying only the ENR and the payloadType; the payload is generated from the payloadType.", 115 | "params": [ 116 | { 117 | "name": "enr", 118 | "value": "enr:-IS4QIa7W1_Yvvl5OJ2Pp3Gp1dJ3..." 119 | }, 120 | { 121 | "name": "payloadType", 122 | "value": 0 123 | } 124 | ], 125 | "result": { 126 | "name": "pingResult", 127 | "value": { 128 | "enrSeq": 1, 129 | "payloadType": 0, 130 | "payload": { 131 | "clientInfo": "trin/v0.1.1-b61fdc5c/linux-x86_64/rustc1.81.0", 132 | "dataRadius": "0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", 133 | "capabilities": [0, 1, 65535] 134 | } 135 | } 136 | } 137 | }, 138 | { 139 | "name": "ClientInfoAndCapabilities Example", 140 | "description": "A successful PING request and response using the ClientInfoAndCapabilities payload.", 141 | "params": [ 142 | { 143 | "name": "enr", 144 | "value": "enr:-IS4QIa7W1_Yvvl5OJ2Pp3Gp1dJ3..." 
145 | }, 146 | { 147 | "name": "payloadType", 148 | "value": 0 149 | }, 150 | { 151 | "name": "payload", 152 | "value": { 153 | "clientInfo": "trin/v0.1.1-b61fdc5c/linux-x86_64/rustc1.81.0", 154 | "dataRadius": "0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", 155 | "capabilities": [0, 1, 65535] 156 | } 157 | } 158 | ], 159 | "result": { 160 | "name": "pingResult", 161 | "value": { 162 | "enrSeq": 1, 163 | "payloadType": 0, 164 | "payload": { 165 | "clientInfo": "trin/v0.1.1-b61fdc5c/linux-x86_64/rustc1.81.0", 166 | "dataRadius": "0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", 167 | "capabilities": [0, 1, 65535] 168 | } 169 | } 170 | } 171 | } 172 | ] 173 | }, 174 | { 175 | "name": "portal_historyFindNodes", 176 | "summary": "Send a FINDNODES request for nodes that fall within the given set of distances, to the designated peer and wait for a response.", 177 | "params": [ 178 | { 179 | "$ref": "#/components/contentDescriptors/Enr" 180 | }, 181 | { 182 | "$ref": "#/components/contentDescriptors/Distances" 183 | } 184 | ], 185 | "result": { 186 | "$ref": "#/components/contentDescriptors/FindNodeResult" 187 | } 188 | }, 189 | { 190 | "name": "portal_historyFindContent", 191 | "summary": "Send a FINDCONTENT message to get the content with a content key.", 192 | "params": [ 193 | { 194 | "$ref": "#/components/contentDescriptors/Enr" 195 | }, 196 | { 197 | "$ref": "#/components/contentDescriptors/ContentKey" 198 | } 199 | ], 200 | "result": { 201 | "$ref": "#/components/contentDescriptors/FindContentResult" 202 | } 203 | }, 204 | { 205 | "name": "portal_historyOffer", 206 | "summary": "Send an OFFER request with a given array of content items (keys & values), to the designated peer and wait for a response. 
The client MUST return an error if more than 64 content items are provided or fewer than 1 content item is provided.", 207 | "params": [ 208 | { 209 | "$ref": "#/components/contentDescriptors/Enr" 210 | }, 211 | { 212 | "$ref": "#/components/contentDescriptors/ContentItems" 213 | } 214 | ], 215 | "result": { 216 | "$ref": "#/components/contentDescriptors/OfferResult" 217 | } 218 | }, 219 | { 220 | "name": "portal_historyRecursiveFindNodes", 221 | "summary": "Look up the ENRs closest to the given target that are members of the history network", 222 | "params": [ 223 | { 224 | "$ref": "#/components/contentDescriptors/NodeId" 225 | } 226 | ], 227 | "result": { 228 | "$ref": "#/components/contentDescriptors/RecursiveFindNodesResult" 229 | } 230 | }, 231 | { 232 | "name": "portal_historyGetContent", 233 | "summary": "Get content from the local database if it exists, otherwise look up the target content key in the network. After fetching from the network, the content is stored in the local database if storage criteria are met before being returned.", 234 | "params": [ 235 | { 236 | "$ref": "#/components/contentDescriptors/ContentKey" 237 | } 238 | ], 239 | "result": { 240 | "$ref": "#/components/contentDescriptors/GetContentResult" 241 | }, 242 | "errors": [ 243 | { 244 | "$ref": "#/components/errors/ContentNotFoundError" 245 | } 246 | ] 247 | }, 248 | { 249 | "name": "portal_historyTraceGetContent", 250 | "summary": "Get content as defined in portal_historyGetContent and get additional tracing data", 251 | "params": [ 252 | { 253 | "$ref": "#/components/contentDescriptors/ContentKey" 254 | } 255 | ], 256 | "result": { 257 | "$ref": "#/components/contentDescriptors/TraceGetContentResult" 258 | }, 259 | "errors": [ 260 | { 261 | "$ref": "#/components/errors/ContentNotFoundErrorWithTrace" 262 | } 263 | ] 264 | }, 265 | { 266 | "name": "portal_historyStore", 267 | "summary": "Store history content key with content data", 268 | "params": [ 269 | { 270 | "$ref": 
"#/components/contentDescriptors/ContentKey" 271 | }, 272 | { 273 | "$ref": "#/components/contentDescriptors/ContentValue" 274 | } 275 | ], 276 | "result": { 277 | "$ref": "#/components/contentDescriptors/StoreResult" 278 | } 279 | }, 280 | { 281 | "name": "portal_historyLocalContent", 282 | "summary": "Get content from the local database", 283 | "params": [ 284 | { 285 | "$ref": "#/components/contentDescriptors/ContentKey" 286 | } 287 | ], 288 | "result": { 289 | "$ref": "#/components/contentDescriptors/LocalContentResult" 290 | }, 291 | "errors": [ 292 | { 293 | "$ref": "#/components/errors/ContentNotFoundError" 294 | } 295 | ] 296 | }, 297 | { 298 | "name": "portal_historyPutContent", 299 | "summary": "Store the content in the local database if storage criteria are met, then send the content to `n` interested peers using the client's default gossip mechanisms. If fewer than `n` interested nodes are found in the local DHT, the client launches a node lookup with target `content-id` and offers the content to at most `n` of the newly discovered nodes.", 300 | "params": [ 301 | { 302 | "$ref": "#/components/contentDescriptors/ContentKey" 303 | }, 304 | { 305 | "$ref": "#/components/contentDescriptors/ContentValue" 306 | } 307 | ], 308 | "result": { 309 | "$ref": "#/components/contentDescriptors/PutContentResult" 310 | } 311 | } 312 | ] 313 | -------------------------------------------------------------------------------- /jsonrpc/src/methods/state.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "name": "portal_stateRoutingTableInfo", 4 | "summary": "Returns meta information about state network routing table.", 5 | "params": [], 6 | "result": { 7 | "$ref": "#/components/contentDescriptors/RoutingTableInfoResult" 8 | } 9 | }, 10 | { 11 | "name": "portal_stateAddEnr", 12 | "summary": "Write an ethereum node record to the routing table.", 13 | "params": [ 14 | { 15 | "$ref": "#/components/contentDescriptors/Enr" 
16 | } 17 | ], 18 | "result": { 19 | "$ref": "#/components/contentDescriptors/AddEnrResult" 20 | } 21 | }, 22 | { 23 | "name": "portal_stateGetEnr", 24 | "summary": "Fetch from the local node the latest ENR associated with the given NodeId", 25 | "params": [ 26 | { 27 | "$ref": "#/components/contentDescriptors/NodeId" 28 | } 29 | ], 30 | "result": { 31 | "$ref": "#/components/contentDescriptors/GetEnrResult" 32 | } 33 | }, 34 | { 35 | "name": "portal_stateDeleteEnr", 36 | "summary": "Delete a Node ID from the routing table", 37 | "params": [ 38 | { 39 | "$ref": "#/components/contentDescriptors/NodeId" 40 | } 41 | ], 42 | "result": { 43 | "$ref": "#/components/contentDescriptors/DeleteEnrResult" 44 | } 45 | }, 46 | { 47 | "name": "portal_stateLookupEnr", 48 | "summary": "Fetch from the DHT the latest ENR associated with the given NodeId", 49 | "params": [ 50 | { 51 | "$ref": "#/components/contentDescriptors/NodeId" 52 | } 53 | ], 54 | "result": { 55 | "$ref": "#/components/contentDescriptors/LookupEnrResult" 56 | } 57 | }, 58 | { 59 | "name": "portal_statePing", 60 | "summary": "Send a PING message to the designated node and wait for a PONG response. By default, the client will send a capabilities extension payload. If the payloadType is specified without a payload, the client will generate the default payload for that payloadType. If the payload is specified, the client will send the specified payload. The payload requires the payloadType to be specified, or a -39006 error will be thrown. If the payloadType isn't supported by the client or the subnetwork, a -39004 error will be thrown. 
If the specified payload fails to decode, a -39005 error will be thrown.", 61 | "params": [ 62 | { 63 | "$ref": "#/components/contentDescriptors/Enr" 64 | }, 65 | { 66 | "$ref": "#/components/contentDescriptors/PayloadType" 67 | }, 68 | { 69 | "$ref": "#/components/contentDescriptors/Payload" 70 | } 71 | ], 72 | "result": { 73 | "$ref": "#/components/contentDescriptors/PingResult" 74 | }, 75 | "errors": [ 76 | { 77 | "$ref": "#/components/errors/PayloadTypeNotSupportedError" 78 | }, 79 | { 80 | "$ref": "#/components/errors/FailedToDecodePayloadError" 81 | }, 82 | { 83 | "$ref": "#/components/errors/PayloadTypeRequiredError" 84 | }, 85 | { 86 | "$ref": "#/components/errors/UserSpecifiedPayloadBlockedByClientError" 87 | } 88 | ], 89 | "examples": [ 90 | { 91 | "name": "Only specifying the ENR", 92 | "description": "A successful PING request and response, specifying only the ENR.", 93 | "params": [ 94 | { 95 | "name": "enr", 96 | "value": "enr:-IS4QIa7W1_Yvvl5OJ2Pp3Gp1dJ3..." 97 | } 98 | ], 99 | "result": { 100 | "name": "pingResult", 101 | "value": { 102 | "enrSeq": 1, 103 | "payloadType": 0, 104 | "payload": { 105 | "clientInfo": "trin/v0.1.1-b61fdc5c/linux-x86_64/rustc1.81.0", 106 | "dataRadius": "0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", 107 | "capabilities": [0, 1, 65535] 108 | } 109 | } 110 | } 111 | }, 112 | { 113 | "name": "Only specifying the ENR and the payload_type", 114 | "description": "A successful PING request and response, specifying only the ENR and the payload_type; the payload is generated from the payload_type.", 115 | "params": [ 116 | { 117 | "name": "enr", 118 | "value": "enr:-IS4QIa7W1_Yvvl5OJ2Pp3Gp1dJ3..."
119 | }, 120 | { 121 | "name": "payloadType", 122 | "value": 0 123 | } 124 | ], 125 | "result": { 126 | "name": "pingResult", 127 | "value": { 128 | "enrSeq": 1, 129 | "payloadType": 0, 130 | "payload": { 131 | "clientInfo": "trin/v0.1.1-b61fdc5c/linux-x86_64/rustc1.81.0", 132 | "dataRadius": "0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", 133 | "capabilities": [0, 1, 65535] 134 | } 135 | } 136 | } 137 | }, 138 | { 139 | "name": "ClientInfoAndCapabilities Example", 140 | "description": "A successful PING request and response using the ClientInfoAndCapabilities payload.", 141 | "params": [ 142 | { 143 | "name": "enr", 144 | "value": "enr:-IS4QIa7W1_Yvvl5OJ2Pp3Gp1dJ3..." 145 | }, 146 | { 147 | "name": "payloadType", 148 | "value": 0 149 | }, 150 | { 151 | "name": "payload", 152 | "value": { 153 | "clientInfo": "trin/v0.1.1-b61fdc5c/linux-x86_64/rustc1.81.0", 154 | "dataRadius": "0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", 155 | "capabilities": [0, 1, 65535] 156 | } 157 | } 158 | ], 159 | "result": { 160 | "name": "pingResult", 161 | "value": { 162 | "enrSeq": 1, 163 | "payloadType": 0, 164 | "payload": { 165 | "clientInfo": "trin/v0.1.1-b61fdc5c/linux-x86_64/rustc1.81.0", 166 | "dataRadius": "0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", 167 | "capabilities": [0, 1, 65535] 168 | } 169 | } 170 | } 171 | } 172 | ] 173 | }, 174 | { 175 | "name": "portal_stateFindNodes", 176 | "summary": "Send a FINDNODES request for nodes that fall within the given set of distances, to the designated peer and wait for a response.", 177 | "params": [ 178 | { 179 | "$ref": "#/components/contentDescriptors/Enr" 180 | }, 181 | { 182 | "$ref": "#/components/contentDescriptors/Distances" 183 | } 184 | ], 185 | "result": { 186 | "$ref": "#/components/contentDescriptors/FindNodeResult" 187 | } 188 | }, 189 | { 190 | "name": "portal_stateFindContent", 191 | "summary": "Send FINDCONTENT message to get the content with a content 
key.", 192 | "params": [ 193 | { 194 | "$ref": "#/components/contentDescriptors/Enr" 195 | }, 196 | { 197 | "$ref": "#/components/contentDescriptors/ContentKey" 198 | } 199 | ], 200 | "result": { 201 | "$ref": "#/components/contentDescriptors/FindContentResult" 202 | } 203 | }, 204 | { 205 | "name": "portal_stateOffer", 206 | "summary": "Send an OFFER request with given array of content items (keys & values), to the designated peer and wait for a response. The client MUST return an error if more than 64 content items are provided or less than 1 content items are provided.", 207 | "params": [ 208 | { 209 | "$ref": "#/components/contentDescriptors/Enr" 210 | }, 211 | { 212 | "$ref": "#/components/contentDescriptors/ContentItems" 213 | } 214 | ], 215 | "result": { 216 | "$ref": "#/components/contentDescriptors/OfferResult" 217 | } 218 | }, 219 | { 220 | "name": "portal_stateRecursiveFindNodes", 221 | "summary": "Look up ENRs closest to the given target, that are members of the state network", 222 | "params": [ 223 | { 224 | "$ref": "#/components/contentDescriptors/NodeId" 225 | } 226 | ], 227 | "result": { 228 | "$ref": "#/components/contentDescriptors/RecursiveFindNodesResult" 229 | } 230 | }, 231 | { 232 | "name": "portal_stateGetContent", 233 | "summary": "Get content from the local database if it exists, otherwise look up the target content key in the network. 
After fetching from the network, the content is stored in the local database if the storage criteria are met before being returned.", 234 | "params": [ 235 | { 236 | "$ref": "#/components/contentDescriptors/ContentKey" 237 | } 238 | ], 239 | "result": { 240 | "$ref": "#/components/contentDescriptors/GetContentResult" 241 | }, 242 | "errors": [{ 243 | "$ref": "#/components/errors/ContentNotFoundError" 244 | }] 245 | }, 246 | { 247 | "name": "portal_stateTraceGetContent", 248 | "summary": "Get content as defined in portal_stateGetContent, and return additional tracing data", 249 | "params": [ 250 | { 251 | "$ref": "#/components/contentDescriptors/ContentKey" 252 | } 253 | ], 254 | "result": { 255 | "$ref": "#/components/contentDescriptors/TraceGetContentResult" 256 | }, 257 | "errors": [{ 258 | "$ref": "#/components/errors/ContentNotFoundErrorWithTrace" 259 | }] 260 | }, 261 | { 262 | "name": "portal_stateStore", 263 | "summary": "Store a state content key with its content data", 264 | "params": [ 265 | { 266 | "$ref": "#/components/contentDescriptors/ContentKey" 267 | }, 268 | { 269 | "$ref": "#/components/contentDescriptors/ContentValue" 270 | } 271 | ], 272 | "result": { 273 | "$ref": "#/components/contentDescriptors/StoreResult" 274 | } 275 | }, 276 | { 277 | "name": "portal_stateLocalContent", 278 | "summary": "Get content from the local database", 279 | "params": [ 280 | { 281 | "$ref": "#/components/contentDescriptors/ContentKey" 282 | } 283 | ], 284 | "result": { 285 | "$ref": "#/components/contentDescriptors/LocalContentResult" 286 | }, 287 | "errors": [{ 288 | "$ref": "#/components/errors/ContentNotFoundError" 289 | }] 290 | }, 291 | { 292 | "name": "portal_statePutContent", 293 | "summary": "Store the content in the local database if the storage criteria are met, then send the content to `n` interested peers using the client's default gossip mechanisms.
If fewer than `n` interested nodes are found in the local DHT, the client launches a node lookup with target `content-id` and offers the content to at most `n` of the newly discovered nodes.", 294 | "params": [ 295 | { 296 | "$ref": "#/components/contentDescriptors/ContentKey" 297 | }, 298 | { 299 | "$ref": "#/components/contentDescriptors/ContentValue" 300 | } 301 | ], 302 | "result": { 303 | "$ref": "#/components/contentDescriptors/PutContentResult" 304 | } 305 | } 306 | ] 307 | -------------------------------------------------------------------------------- /jsonrpc/src/schemas/base_types.json: -------------------------------------------------------------------------------- 1 | { 2 | "bytes2": { 3 | "title": "2 hex encoded bytes", 4 | "type": "string", 5 | "pattern": "^0x[0-9a-f]{4}$" 6 | }, 7 | "bytes4": { 8 | "title": "4 hex encoded bytes", 9 | "type": "string", 10 | "pattern": "^0x[0-9a-f]{8}$" 11 | }, 12 | "bytes8": { 13 | "title": "8 hex encoded bytes", 14 | "type": "string", 15 | "pattern": "^0x[0-9a-f]{16}$" 16 | }, 17 | "bytes16": { 18 | "title": "16 hex encoded bytes", 19 | "type": "string", 20 | "pattern": "^0x[0-9a-f]{32}$" 21 | }, 22 | "bytes32": { 23 | "title": "32 hex encoded bytes", 24 | "type": "string", 25 | "pattern": "^0x[0-9a-f]{64}$" 26 | }, 27 | "hexString": { 28 | "title": "Hex string", 29 | "type": "string", 30 | "pattern": "^0x[0-9a-f]*$" 31 | }, 32 | "uint": { 33 | "title": "hex encoded unsigned integer", 34 | "type": "string", 35 | "pattern": "^0x[1-9a-f]+[0-9a-f]*$" 36 | }, 37 | "uint256": { 38 | "title": "hex encoded 256 bit unsigned integer", 39 | "type": "string", 40 | "pattern": "^0x([1-9a-f][0-9a-f]{0,63}|0)$" 41 | } 42 | } 43 | -------------------------------------------------------------------------------- /jsonrpc/src/schemas/portal.json: -------------------------------------------------------------------------------- 1 | { 2 | "kBucket": { 3 | "title": "kBucket info", 4 | "description": "List of up to 16 hex encoded
nodeIds, ordered from least-recently connected to most-recently connected.", 5 | "type": "array", 6 | "items": { 7 | "$ref": "#/components/schemas/bytes32" 8 | } 9 | }, 10 | "kBuckets": { 11 | "title": "kBuckets", 12 | "description": "The buckets comprising the routing table.", 13 | "type": "array", 14 | "items": { 15 | "$ref": "#/components/schemas/kBucket" 16 | } 17 | }, 18 | "DataRadius": { 19 | "title": "Data radius as a hex encoded uint256", 20 | "$ref": "#/components/schemas/uint256" 21 | }, 22 | "Enr": { 23 | "title": "Base64 encoded ENR", 24 | "type": "string", 25 | "pattern": "^enr:[a-zA-Z0-9_:-]{179}$" 26 | }, 27 | "ipAddr": { 28 | "title": "IP v4/v6 address", 29 | "type": "string", 30 | "pattern": "((^\\s*((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))\\s*$)|(^\\s*((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)(\\.(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)){3}))|:)))(%.+)?\\s*$))" 31 | }, 32 
| "socketAddr": { 33 | "title": "ENR socket address", 34 | "type": "string", 35 | "pattern": "/([0-9]{1,3}(?:\\.[0-9]{1,3}){3}|(?=[^\\/]{1,254}(?![^\\/]))(?:(?=[a-zA-Z0-9-]{1,63}\\.)(?:xn--+)?[a-zA-Z0-9]+(?:-[a-zA-Z0-9]+)*\\.)+[a-zA-Z]{2,63}):([0-9]{1,5})$" 36 | }, 37 | "udpPort": { 38 | "title": "UDP port number", 39 | "type": "string", 40 | "pattern": "^([1-9][0-9]{0,3}|[1-5][0-9]{4}|6[0-4][0-9]{3}|65[0-4][0-9]{2}|655[0-2][0-9]|6553[0-5])$" 41 | }, 42 | "ContentItem": { 43 | "title": "content_item", 44 | "type": "array", 45 | "items": [ 46 | { 47 | "title": "Content key", 48 | "description": "The encoded Portal content key", 49 | "$ref": "#/components/schemas/hexString" 50 | }, 51 | { 52 | "title": "Content value", 53 | "description": "The encoded Portal content value", 54 | "$ref": "#/components/schemas/hexString" 55 | } 56 | ], 57 | "minItems": 2, 58 | "maxItems": 2, 59 | "additionalItems": false 60 | }, 61 | "ClientInfoAndCapabilities": { 62 | "title": "ClientInfoAndCapabilities", 63 | "type": "object", 64 | "properties": { 65 | "clientInfo": { 66 | "type": "string", 67 | "description": "UTF-8 hex encoded client information" 68 | }, 69 | "dataRadius": { 70 | "type": "string", 71 | "description": "U256 representing data radius" 72 | }, 73 | "capabilities": { 74 | "type": "array", 75 | "items": { 76 | "type": "number" 77 | }, 78 | "description": "List of u16 values representing enabled extensions" 79 | } 80 | }, 81 | "additionalProperties": false 82 | }, 83 | "BasicRadius": { 84 | "title": "BasicRadius", 85 | "type": "object", 86 | "properties": { 87 | "dataRadius": { 88 | "type": "string", 89 | "description": "U256 representing data radius" 90 | } 91 | }, 92 | "additionalProperties": false 93 | }, 94 | "HistoryRadius": { 95 | "title": "HistoryRadius", 96 | "type": "object", 97 | "properties": { 98 | "dataRadius": { 99 | "type": "string", 100 | "description": "U256 representing data radius" 101 | }, 102 | "ephemeralHeaderCount": { 103 | "type": "number", 104 | 
"description": "U16 representing number of ephemeral headers held" 105 | } 106 | }, 107 | "additionalProperties": false 108 | }, 109 | "UnknownPayload": { 110 | "title": "UnknownPayload", 111 | "description": "A placeholder for unknown or future payload types", 112 | "type": "object", 113 | "additionalProperties": true 114 | } 115 | } 116 | -------------------------------------------------------------------------------- /jsonrpc/src/schemas/trace.json: -------------------------------------------------------------------------------- 1 | { 2 | "traceResultObject": { 3 | "title": "traceResultObject", 4 | "description": "Trace data for the result of tracing content request.", 5 | "type": "object", 6 | "required": [ 7 | "origin", 8 | "targetId", 9 | "responses", 10 | "metadata", 11 | "startedAtMs" 12 | ], 13 | "properties": { 14 | "origin": { 15 | "description": "Local Node ID", 16 | "$ref": "#/components/schemas/uint256" 17 | }, 18 | "targetId": { 19 | "description": "Target content ID", 20 | "$ref": "#/components/schemas/uint256" 21 | }, 22 | "receivedFrom": { 23 | "description": "Node ID from which the content was received.", 24 | "$ref": "#/components/schemas/uint256" 25 | }, 26 | "responses": { 27 | "$ref": "#/components/schemas/traceResultResponses" 28 | }, 29 | "metadata": { 30 | "$ref": "#/components/schemas/traceResultMetadata" 31 | }, 32 | "startedAtMs": { 33 | "type": "integer", 34 | "minimum": 0, 35 | "description": "Timestamp of the beginning of this request in milliseconds." 
36 | }, 37 | "cancelled": { 38 | "description": "List of node IDs to which requests were sent but then cancelled, because content was received from elsewhere before those nodes responded.", 39 | "$ref": "#/components/schemas/listOfNodeIds" 40 | } 41 | } 42 | }, 43 | "listOfNodeIds": { 44 | "title": "listOfNodeIds", 45 | "description": "Contains a list of node IDs.", 46 | "type": "array", 47 | "items": { 48 | "$ref": "#/components/schemas/uint256" 49 | } 50 | }, 51 | "traceResultResponses": { 52 | "title": "traceResultResponses", 53 | "description": "Contains a map of remote node IDs with the node IDs each node responded with. For the node ID that is in the `receivedFrom` field, `respondedWith` MUST be an empty array.", 54 | "type": "object", 55 | "additionalProperties": { 56 | "$ref": "#/components/schemas/traceResultResponseItem" 57 | } 58 | }, 59 | "traceResultResponseItem": { 60 | "title": "traceResultResponseItem", 61 | "description": "Contains the node's response, including the duration of the request.", 62 | "type": "object", 63 | "properties": { 64 | "durationsMs": { 65 | "description": "Time it took from the beginning of the lookup (JSON-RPC request) up to receiving this response.", 66 | "type": "integer", 67 | "minimum": 0 68 | }, 69 | "respondedWith": { 70 | "$ref": "#/components/schemas/listOfNodeIds" 71 | } 72 | } 73 | }, 74 | "traceResultMetadata": { 75 | "title": "traceResultMetadata", 76 | "description": "Contains a map from node ID to the metadata object for that node.", 77 | "type": "object", 78 | "additionalProperties": { 79 | "$ref": "#/components/schemas/traceResultMetadataObject" 80 | } 81 | }, 82 | "traceResultMetadataObject": { 83 | "title": "traceResultMetadataObject", 84 | "description": "Contains metadata for each node ID mentioned in the trace response.", 85 | "type": "object", 86 | "properties": { 87 | "enr": { 88 | "$ref": "#/components/schemas/Enr" 89 | }, 90 | "distance": { 91 | "$ref": "#/components/schemas/uint256" 92 | } 93 |
} 94 | } 95 | } 96 | -------------------------------------------------------------------------------- /ping-extensions/README.md: -------------------------------------------------------------------------------- 1 | # Ping Custom Payload Extensions 2 | 3 | ## Motivation 4 | 5 | Messages on Portal are primarily made up of ping/pong exchanges. This framework allows Portal clients to implement non-standard extensions that can be deployed to the network without a breaking change, providing a more flexible way to extend the protocol without bloating the core [Portal-Wire-Protocol](../portal-wire-protocol.md) specification or requiring every client to agree to add a feature that only a subset of clients want. 6 | 7 | # Types 8 | 9 | There are 65536 unique type ids. 10 | 11 | Types 0-10_000 and 65436-65535 are reserved for future upgrades. 12 | 13 | The rest are first come, first served, but they should still be defined in this repo to avoid overlaps. 14 | 15 | 16 | ## Requirements 17 | 18 | All `Ping` and `Pong` messages have a `payload_type` and `payload` field: 19 | 20 | - **payload_type**: numeric identifier which tells clients how the `payload` field should be decoded. 21 | - **payload**: the SSZ encoded extension payload 22 | 23 | 24 | ## Ping vs Pong 25 | The relationship between Ping and Pong messages is determined by the requirements of the type. 26 | 27 | Currently types 0, 1, and 2 are symmetrical, having the same payload for both request and response. This symmetry is not required. Extensions may define separate payloads for Ping and Pong within the same type. 28 | 29 | 30 | ### Error responses 31 | If the ping receiver can't handle the ping for any reason, the pong should return an error payload. 32 | 33 | [Type 65535: The definition of error responses can be found here](extensions/type-65535.md) 34 | 35 | ## Standard extensions 36 | 37 | A standard extension is an extension which all nodes on the network MUST support.
Nodes can send these without requiring a `Type 0: Client Info, Radius, and Capabilities Payload` request to discover what extensions the client supports. 38 | 39 | Changing standard extensions is considered a breaking change. 40 | 41 | List of standard extensions: 42 | - [Type 0: Client Info, Radius, and Capabilities Payload](extensions/type-0.md): useful for discovering the client info, radius, and ping extensions a client supports 43 | - [Type 65535: Error Payload](extensions/type-65535.md): this payload can only be used as a response to a ping 44 | 45 | # Non-standard extensions 46 | Non-standard extensions are extensions that you can't assume all other clients support. To use a non-standard extension, Portal clients must first send a [Type 0: Client Info, Radius, and Capabilities Payload](extensions/type-0.md) packet, then upgrade to their desired non-standard extensions. 47 | 48 | ## What is the [Type 0: Client Info, Radius, and Capabilities Payload](extensions/type-0.md) for 49 | It is for Portal implementations which want to see what extensions a peer supports. Not all extensions need to be implemented by all parties, so in order for a peer to find out whether an extension is implemented, a [Type 0: Client Info, Radius, and Capabilities Payload](extensions/type-0.md) exchange should take place. 50 | 51 | Non-required extensions offer a way for Portal implementations to provide extended functionality to their users without requiring every Portal implementing party to agree to a new feature. This allows a diverse set of use cases to be fulfilled without requiring every implementer to implement every extension, or bloating the minimal [Portal-Wire-Protocol](../portal-wire-protocol.md) with new `Message Types`. 52 | 53 | ## How do sub-network standard extensions work 54 | Sub-network standard extensions are fundamental extensions that are required for a Portal sub-network to function.
They must be supported by the sub-network's implementations. Changing a standard extension requires a breaking change. 55 | -------------------------------------------------------------------------------- /ping-extensions/extensions.md: -------------------------------------------------------------------------------- 1 | # This document lists defined extensions 2 | This is a list with a short description of each defined extension. 3 | 4 | - Extensions can be supported by all sub-networks or only a subset of them 5 | 6 | 7 | | Type number | Name | Supported sub-networks | Short Description | Is standard extension | 8 | |---|---|---|---|---| 9 | | [0](extensions/type-0.md) | Client Info, Radius, and Capabilities | All | Returns client info (e.g. `trin/0.1.1-2b00d730/linux-x86_64/rustc1.81.0`), the node's radius and a list of enabled extensions | Yes | 10 | | [1](extensions/type-1.md) | Basic Radius Payload | State, Beacon | Provides the node's radius | No | 11 | | [2](extensions/type-2.md) | History Radius Payload | History | Provides the node's radius and ephemeral header count | No | 12 | | [65535](extensions/type-65535.md) | Error Response | All | Returns an error for the respective ping message | Yes | 13 | -------------------------------------------------------------------------------- /ping-extensions/extensions/template.md: -------------------------------------------------------------------------------- 1 | # [Title] 2 | 3 | [Description] 4 | 5 | Ping payload 6 | ```python 7 | 8 | [Payload] = SSZ.serialize(Container([Key Value Pairs])) 9 | 10 | 11 | Ping Message = Container( 12 | enr_seq: uint64, 13 | payload_type: [Type Number], 14 | payload: [Payload] 15 | ) 16 | ``` 17 | 18 | Pong payload 19 | ```python 20 | 21 | [Payload] = SSZ.serialize(Container([Key Value Pairs])) 22 | 23 | Pong Message = Container( 24 | enr_seq: uint64, 25 | payload_type: [Type Number], 26 | payload: [Payload] 27 | ) 28 | ``` 29 | 30 | ## Test Vectors 31 |
-------------------------------------------------------------------------------- /ping-extensions/extensions/type-0.md: -------------------------------------------------------------------------------- 1 | # Client Info and Capabilities Payload 2 | 3 | ## Client Info 4 | Client info is a hex encoded UTF-8 string. 5 | 6 | Client info strings consist of 4 parts: 7 | - client name (e.g. `trin`, `fluffy`) 8 | - client version + short commit (e.g. `0.1.1-2b00d730`) 9 | - operating system + cpu architecture (e.g. `linux-x86_64`) 10 | - programming language + language version (e.g. `rustc1.81.0`) 11 | 12 | Example 13 | - String: `trin/0.1.1-2b00d730/linux-x86_64/rustc1.81.0` 14 | - Hex encoding: `0x7472696E2F302E312E312D32623030643733302F6C696E75782D7838365F36342F7275737463312E38312E30` 15 | 16 | #### Privacy Concerns 17 | Clients can optionally return an empty string for privacy reasons; this is not recommended, as client info helps researchers understand the network. 18 | 19 | ## Capabilities 20 | Portal clients can have at most 400 extensions enabled per sub-network. 21 | 22 | This payload provides a list of u16 values; each u16 in the list corresponds to an enabled extension type.
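The client info hex encoding above is simply the raw UTF-8 bytes of the string rendered as hex. A minimal sketch (illustrative, not part of the specification) that reproduces the example encoding:

```python
# Client info is the raw UTF-8 bytes of the string, hex encoded.
# Hex digit case is not significant; upper case is used here only to
# match the example encoding shown above.
client_info = "trin/0.1.1-2b00d730/linux-x86_64/rustc1.81.0"

encoded = "0x" + client_info.encode("utf-8").hex().upper()
assert encoded == (
    "0x7472696E2F302E312E312D32623030643733302F"
    "6C696E75782D7838365F36342F7275737463312E38312E30"
)

# Decoding reverses the process.
decoded = bytes.fromhex(encoded[2:]).decode("utf-8")
assert decoded == client_info
```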
23 | 24 | ## Payload Outline 25 | 26 | Ping and Pong Payload 27 | ```python 28 | 29 | MAX_CLIENT_INFO_BYTE_LENGTH = 200 30 | MAX_CAPABILITIES_LENGTH = 400 31 | 32 | client_info_and_capabilities = SSZ.serialize(Container( 33 | client_info: ByteList[MAX_CLIENT_INFO_BYTE_LENGTH] 34 | data_radius: U256 35 | capabilities: List[u16, MAX_CAPABILITIES_LENGTH] 36 | )) 37 | 38 | Ping/Pong Message = Container( 39 | enr_seq: uint64, 40 | type: 0, 41 | payload: client_info_and_capabilities 42 | ) 43 | ``` 44 | 45 | ## Test Vectors 46 | 47 | ### Protocol Message to ssz encoded ping: case 1 with client info 48 | 49 | #### Input Parameters 50 | ``` 51 | enr_seq = 1 52 | data_radius = 2^256 - 2 # Maximum value - 1 53 | client_info = "trin/v0.1.1-b61fdc5c/linux-x86_64/rustc1.81.0" 54 | capabilities = [0, 1, 65535] 55 | ``` 56 | 57 | #### Expected Output 58 | ``` 59 | message = 0x00010000000000000000000e00000028000000feffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff550000007472696e2f76302e312e312d62363166646335632f6c696e75782d7838365f36342f7275737463312e38312e3000000100ffff 60 | ``` 61 | 62 | ### Protocol Message to ssz encoded ping: case 2 without client info 63 | 64 | #### Input Parameters 65 | ``` 66 | enr_seq = 1 67 | data_radius = 2^256 - 2 # Maximum value - 1 68 | client_info = "" 69 | capabilities = [0, 1, 65535] 70 | ``` 71 | 72 | #### Expected Output 73 | ``` 74 | message = 0x00010000000000000000000e00000028000000feffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff2800000000000100ffff 75 | ``` 76 | 77 | ### Protocol Message to ssz encoded pong: case 1 with client info 78 | 79 | #### Input Parameters 80 | ``` 81 | enr_seq = 1 82 | data_radius = 2^256 - 2 # Maximum value - 1 83 | client_info = "trin/v0.1.1-b61fdc5c/linux-x86_64/rustc1.81.0" 84 | capabilities = [0, 1, 65535] 85 | ``` 86 | 87 | #### Expected Output 88 | ``` 89 | message = 
0x01010000000000000000000e00000028000000feffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff550000007472696e2f76302e312e312d62363166646335632f6c696e75782d7838365f36342f7275737463312e38312e3000000100ffff 90 | ``` 91 | 92 | ### Protocol Message to ssz encoded pong: case 2 without client info 93 | 94 | #### Input Parameters 95 | ``` 96 | enr_seq = 1 97 | data_radius = 2^256 - 2 # Maximum value - 1 98 | client_info = "" 99 | capabilities = [0, 1, 65535] 100 | ``` 101 | 102 | #### Expected Output 103 | ``` 104 | message = 0x01010000000000000000000e00000028000000feffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff2800000000000100ffff 105 | ``` 106 | -------------------------------------------------------------------------------- /ping-extensions/extensions/type-1.md: -------------------------------------------------------------------------------- 1 | # Basic Radius Payload 2 | 3 | A basic Ping/Pong payload which contains only the node's radius. 4 | 5 | Ping and Pong Payload 6 | ```python 7 | 8 | basic_radius = SSZ.serialize(Container(data_radius: U256)) 9 | 10 | Ping/Pong Message = Container( 11 | enr_seq: uint64, 12 | type: 1, 13 | payload: basic_radius 14 | ) 15 | ``` 16 | 17 | ## Test Vectors 18 | 19 | ### Protocol Message to ssz encoded ping 20 | 21 | #### Input Parameters 22 | ``` 23 | enr_seq = 1 24 | data_radius = 2^256 - 2 # Maximum value - 1 25 | ``` 26 | 27 | #### Expected Output 28 | ``` 29 | message = 0x00010000000000000001000e000000feffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff 30 | ``` 31 | 32 | ### Protocol Message to ssz encoded pong 33 | 34 | #### Input Parameters 35 | ``` 36 | enr_seq = 1 37 | data_radius = 2^256 - 2 # Maximum value - 1 38 | ``` 39 | 40 | #### Expected Output 41 | ``` 42 | message = 0x01010000000000000001000e000000feffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff 43 | ``` 44 | --------------------------------------------------------------------------------
/ping-extensions/extensions/type-2.md: -------------------------------------------------------------------------------- 1 | # History Radius Payload 2 | 3 | A specialized radius payload for the history network which contains a field for how many ephemeral headers the node holds. 4 | 5 | Ping and Pong Payload 6 | ```python 7 | 8 | history_radius = SSZ.serialize(Container(data_radius: U256, ephemeral_header_count: U16)) 9 | 10 | Ping/Pong Message = Container( 11 | enr_seq: uint64, 12 | type: 2, 13 | payload: history_radius 14 | ) 15 | ``` 16 | 17 | ## Test Vectors 18 | 19 | ### Protocol Message to ssz encoded ping 20 | 21 | #### Input Parameters 22 | ``` 23 | enr_seq = 1 24 | data_radius = 2^256 - 2 # Maximum value - 1 25 | ephemeral_header_count = 4242 26 | ``` 27 | 28 | #### Expected Output 29 | ``` 30 | message = 0x00010000000000000002000e000000feffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff9210 31 | ``` 32 | 33 | ### Protocol Message to ssz encoded pong 34 | 35 | #### Input Parameters 36 | ``` 37 | enr_seq = 1 38 | data_radius = 2^256 - 2 # Maximum value - 1 39 | ephemeral_header_count = 4242 40 | ``` 41 | 42 | #### Expected Output 43 | ``` 44 | message = 0x01010000000000000002000e000000feffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff9210 45 | ``` 46 | -------------------------------------------------------------------------------- /ping-extensions/extensions/type-65535.md: -------------------------------------------------------------------------------- 1 | # Error Payload 2 | 3 | If the ping receiver can't handle the ping for any reason, the pong should return an error payload. 4 | 5 | Pong payload 6 | ```python 7 | 8 | # Max UTF-8 hex encoded string length 9 | MAX_ERROR_BYTE_LENGTH = 300 10 | 11 | error_payload = SSZ.serialize(Container(error_code: u16, message: ByteList[MAX_ERROR_BYTE_LENGTH])) 12 | 13 | Pong Message = Container( 14 | enr_seq: uint64, 15 | type: 65535, 16 | payload: error_payload 17 | ) 18 | ``` 19 | 20 | ### Error
Codes 21 | 22 | #### 0: Extension not supported 23 | This code should be returned if the extension isn't supported. This error should only be returned if: 24 | - The extension isn't supported 25 | - The extension isn't a required extension for the specified Portal network. 26 | 27 | #### 1: Requested data not found 28 | This error code should be used when a client is unable to provide the necessary data for the response. 29 | 30 | #### 2: Failed to decode payload 31 | The payload couldn't be decoded. 32 | 33 | #### 3: System error 34 | A critical error happened and the ping can't be processed. 35 | 36 | ## Test Vectors 37 | 38 | ### Protocol Message to ssz encoded pong 39 | 40 | #### Input Parameters 41 | ``` 42 | enr_seq = 1 43 | error_code = 2 44 | message = "hello world" 45 | ``` 46 | 47 | #### Expected Output 48 | ``` 49 | message = 0x010100000000000000ffff0e00000002000600000068656c6c6f20776f726c64 50 | ``` 51 | -------------------------------------------------------------------------------- /portal-wire-test-vectors.md: -------------------------------------------------------------------------------- 1 | # Portal Wire Test Vectors 2 | 3 | This document provides a collection of test vectors for the Portal wire protocol, 4 | intended to help new implementations conform to the specification. 5 | 6 | ## Protocol Message Encodings 7 | 8 | This section provides test vectors for the individual protocol messages defined 9 | in the [Portal wire protocol](./portal-wire-protocol.md). These test vectors primarily 10 | verify the SSZ encoding and decoding of each protocol message.
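As an illustration of how such vectors are assembled (a sketch, not normative), the Find Nodes request below can be built by hand: a one-byte message selector, a 4-byte little-endian SSZ offset to the variable-length `distances` list, then each distance as a little-endian `uint16`:

```python
import struct

def encode_find_nodes(distances: list[int]) -> bytes:
    """Hand-rolled FINDNODES encoding (illustrative; no SSZ library)."""
    selector = b"\x02"             # FINDNODES message type
    offset = struct.pack("<I", 4)  # list data starts right after the 4-byte offset
    data = b"".join(struct.pack("<H", d) for d in distances)
    return selector + offset + data

# Matches the "Find Nodes Request" test vector for distances = [256, 255].
assert encode_find_nodes([256, 255]).hex() == "02040000000001ff00"
```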
11 | 12 | ### Ping Request and Pong Response 13 | 14 | These test vectors are located in the specifications for each [ping extension type](./ping-extensions/extensions/) 15 | 16 | ### Find Nodes Request 17 | 18 | #### Input Parameters 19 | ``` 20 | distances = [256, 255] 21 | ``` 22 | 23 | #### Expected Output 24 | ``` 25 | message = 0x02040000000001ff00 26 | ``` 27 | 28 | ### Nodes Response - Empty enrs 29 | 30 | #### Input Parameters 31 | ``` 32 | total = 1 33 | enrs = [] 34 | ``` 35 | 36 | #### Expected Output 37 | ``` 38 | message = 0x030105000000 39 | ``` 40 | 41 | ### Nodes Response - Multiple enrs 42 | 43 | #### Input Parameters 44 | ``` 45 | enr1 = "enr:-HW4QBzimRxkmT18hMKaAL3IcZF1UcfTMPyi3Q1pxwZZbcZVRI8DC5infUAB_UauARLOJtYTxaagKoGmIjzQxO2qUygBgmlkgnY0iXNlY3AyNTZrMaEDymNMrg1JrLQB2KTGtv6MVbcNEVv0AHacwUAPMljNMTg" 46 | 47 | enr2 = "enr:-HW4QNfxw543Ypf4HXKXdYxkyzfcxcO-6p9X986WldfVpnVTQX1xlTnWrktEWUbeTZnmgOuAY_KUhbVV1Ft98WoYUBMBgmlkgnY0iXNlY3AyNTZrMaEDDiy3QkHAxPyOgWbxp5oF1bDdlYE6dLCUUp8xfVw50jU" 48 | 49 | total = 1 50 | enrs = [enr1, enr2] 51 | ``` 52 | 53 | #### Expected Output 54 | ``` 55 | message = 0x030105000000080000007f000000f875b8401ce2991c64993d7c84c29a00bdc871917551c7d330fca2dd0d69c706596dc655448f030b98a77d4001fd46ae0112ce26d613c5a6a02a81a6223cd0c4edaa53280182696482763489736563703235366b31a103ca634cae0d49acb401d8a4c6b6fe8c55b70d115bf400769cc1400f3258cd3138f875b840d7f1c39e376297f81d7297758c64cb37dcc5c3beea9f57f7ce9695d7d5a67553417d719539d6ae4b445946de4d99e680eb8063f29485b555d45b7df16a1850130182696482763489736563703235366b31a1030e2cb74241c0c4fc8e8166f1a79a05d5b0dd95813a74b094529f317d5c39d235 56 | ``` 57 | 58 | ### Find Content Request 59 | 60 | #### Input Parameters 61 | ``` 62 | content_key = 0x706f7274616c 63 | ``` 64 | 65 | #### Expected Output 66 | ``` 67 | message = 0x0404000000706f7274616c 68 | ``` 69 | 70 | ### Content Response - Connection id 71 | 72 | #### Input Parameters 73 | ``` 74 | connection_id = [0x01, 0x02] 75 | ``` 76 | 77 | #### 
Expected Output 78 | ``` 79 | message = 0x05000102 80 | ``` 81 | 82 | ### Content Response - Content payload 83 | 84 | #### Input Parameters 85 | ``` 86 | content = 0x7468652063616b652069732061206c6965 87 | ``` 88 | 89 | #### Expected Output 90 | ``` 91 | message = 0x05017468652063616b652069732061206c6965 92 | ``` 93 | 94 | ### Content Response - Multiple enrs 95 | 96 | #### Input Parameters 97 | ``` 98 | enr1 = "enr:-HW4QBzimRxkmT18hMKaAL3IcZF1UcfTMPyi3Q1pxwZZbcZVRI8DC5infUAB_UauARLOJtYTxaagKoGmIjzQxO2qUygBgmlkgnY0iXNlY3AyNTZrMaEDymNMrg1JrLQB2KTGtv6MVbcNEVv0AHacwUAPMljNMTg" 99 | 100 | enr2 = "enr:-HW4QNfxw543Ypf4HXKXdYxkyzfcxcO-6p9X986WldfVpnVTQX1xlTnWrktEWUbeTZnmgOuAY_KUhbVV1Ft98WoYUBMBgmlkgnY0iXNlY3AyNTZrMaEDDiy3QkHAxPyOgWbxp5oF1bDdlYE6dLCUUp8xfVw50jU" 101 | 102 | enrs = [enr1, enr2] 103 | ``` 104 | 105 | #### Expected Output 106 | ``` 107 | message = 0x0502080000007f000000f875b8401ce2991c64993d7c84c29a00bdc871917551c7d330fca2dd0d69c706596dc655448f030b98a77d4001fd46ae0112ce26d613c5a6a02a81a6223cd0c4edaa53280182696482763489736563703235366b31a103ca634cae0d49acb401d8a4c6b6fe8c55b70d115bf400769cc1400f3258cd3138f875b840d7f1c39e376297f81d7297758c64cb37dcc5c3beea9f57f7ce9695d7d5a67553417d719539d6ae4b445946de4d99e680eb8063f29485b555d45b7df16a1850130182696482763489736563703235366b31a1030e2cb74241c0c4fc8e8166f1a79a05d5b0dd95813a74b094529f317d5c39d235 108 | ``` 109 | 110 | ### Offer Request 111 | 112 | #### Input Parameters 113 | ``` 114 | content_key1 = 0x010203 115 | content_keys = [content_key1] 116 | ``` 117 | 118 | #### Expected Output 119 | ``` 120 | message = 0x060400000004000000010203 121 | ``` 122 | 123 | ### Accept Response 124 | 125 | #### Input Parameters 126 | ``` 127 | connection_id = [0x01, 0x02] 128 | content_keys = [0, 1, 2, 3, 4, 5, 1, 1] # 8 byte bytelist 129 | ``` 130 | 131 | #### Expected Output 132 | ``` 133 | message = 0x070102060000000001020304050101 134 | ``` 135 | -------------------------------------------------------------------------------- 
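Several of the simpler fixed-layout vectors above can likewise be checked by concatenating little-endian fields by hand. The sketch below hard-codes the SSZ offsets for these specific inputs and is not a general encoder:

```python
def u16(v): return v.to_bytes(2, "little")
def u32(v): return v.to_bytes(4, "little")

# Find Nodes: selector 0x02, offset to the distances list, then uint16 distances.
find_nodes = b"\x02" + u32(4) + u16(256) + u16(255)
assert find_nodes.hex() == "02040000000001ff00"

# Find Content: selector 0x04, offset to the content key, then the key bytes.
find_content = b"\x04" + u32(4) + bytes.fromhex("706f7274616c")
assert find_content.hex() == "0404000000706f7274616c"

# Accept: selector 0x07, connection id, offset, then the per-key byte codes.
accept = b"\x07" + bytes([0x01, 0x02]) + u32(6) + bytes([0, 1, 2, 3, 4, 5, 1, 1])
assert accept.hex() == "070102060000000001020304050101"
```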
/protocol-version-changelog.md: -------------------------------------------------------------------------------- 1 | # Wire Protocol Changelog 2 | 3 | ## Version 0 4 | 5 | Initial versioning of the protocol. 6 | 7 | 8 | ## Version 1 9 | 10 | - https://github.com/ethereum/portal-network-specs/pull/370 - Add detailed offer decline codes. 11 | 12 | - https://github.com/ethereum/portal-network-specs/pull/383 - Add a varint size prefix to the content item sent over uTP. 13 | -------------------------------------------------------------------------------- /state/state-network.md: -------------------------------------------------------------------------------- 1 | # Execution State Network 2 | 3 | This document is the specification for the sub-protocol that supports on-demand availability of state data from the execution chain. 4 | 5 | ## Overview 6 | 7 | The execution state network is a [Kademlia](https://pdos.csail.mit.edu/~petar/papers/maymounkov-kademlia-lncs.pdf) DHT that uses the [Portal Wire Protocol](../portal-wire-protocol.md) to establish an overlay network on top of the [Discovery v5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5-wire.md) protocol. 8 | 9 | State data from the execution chain consists of all account data from the main storage trie, all contract storage data from all of the individual contract storage tries, and the individual bytecodes for all contracts across all historical state roots. This is traditionally referred to as an "archive node". 10 | 11 | ### Data 12 | 13 | The network stores the full execution layer state which encompasses the following: 14 | 15 | - Account trie nodes 16 | - Contract storage trie nodes 17 | - Contract bytecode 18 | 19 | The network is implemented as an "archive" node, meaning that it stores all 20 | tries for all historical blocks. 21 | 22 | 23 | #### Retrieval 24 | 25 | - Account trie nodes by their node hash. 26 | - Contract storage trie nodes by their node hash.
27 | - Contract bytecode by code hash. 28 | 29 | ## Specification 30 | 31 | 32 | 33 | ### Distance Function 34 | 35 | The state network uses the stock XOR distance metric defined in the portal wire protocol specification. 36 | 37 | 38 | ### Content ID Derivation Function 39 | 40 | The state network uses the SHA256 Content ID derivation function from the portal wire protocol specification. 41 | 42 | ### Wire Protocol 43 | 44 | #### Protocol Identifier 45 | 46 | As specified in the [Protocol identifiers](../portal-wire-protocol.md#protocol-identifiers) section of the Portal wire protocol, the `protocol` field in the `TALKREQ` message **MUST** contain the value of `0x500A`. 47 | 48 | #### Supported Message Types 49 | 50 | The execution state network supports the following protocol messages: 51 | 52 | - `Ping` - `Pong` 53 | - `FindNodes` - `Nodes` 54 | - `FindContent` - `FoundContent` 55 | - `Offer` - `Accept` 56 | 57 | #### `Ping.payload` & `Pong.payload` 58 | 59 | In the execution state network, the `payload` field of the `Ping` and `Pong` messages carries a ping extension payload. The first `Ping` exchanged with another client MUST use the [Type 0: Client Info, Radius, and Capabilities Payload](../ping-extensions/extensions/type-0.md). Subsequent messages should then be upgraded to the latest payload supported by both clients. 60 | 61 | List of currently supported payloads, from newest to oldest: 62 | - [Type 1 Basic Radius Payload](../ping-extensions/extensions/type-1.md) 63 | 64 | #### POKE Mechanism 65 | 66 | Because `content_for_retrieval` is different from `content_for_offer`, the POKE mechanism cannot offer content that is verifiable without providing 67 | the proof(s) that are contained in the `content_for_offer` payload. These proofs are usually available when walking down the trie during content 68 | lookups, such as during an `eth_getBalance` JSON-RPC call implemented in the state network.
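This combination of a retrieved node with locally held proofs can be sketched as plain data assembly. The helper names below are hypothetical and the containers are simplified to dictionaries; the field names follow the `content_for_offer` types defined later in this document:

```python
# Illustrative sketch of building content_for_offer during POKE. Proofs are
# lists of RLP-encoded trie nodes ordered root-first; names are not spec-defined.
def account_offer(parent_account_proof, node, block_hash):
    # Append the retrieved account trie node to its parent proof.
    return {"proof": parent_account_proof + [node], "block_hash": block_hash}

def storage_offer(parent_storage_proof, account_proof, node, block_hash):
    # Append the retrieved storage trie node, then attach the account proof.
    return {"storage_proof": parent_storage_proof + [node],
            "account_proof": account_proof, "block_hash": block_hash}

def code_offer(code, account_proof, block_hash):
    # Contract code needs no appending; it is paired with the account proof.
    return {"code": code, "account_proof": account_proof, "block_hash": block_hash}
```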
69 | 70 | The [POKE Mechanism](../portal-wire-protocol.md#poke-mechanism) for the state network requires building a `content_for_offer` by combining the `content_for_retrieval` with the parent proof(s) and block hash. This is implemented differently for each 71 | type of content: 72 | - For account trie nodes, the trie node in the `content_for_retrieval` is appended to the parent account proof and then combined with the block hash to build the `content_for_offer`. 73 | - For contract trie nodes, the trie node in the `content_for_retrieval` is appended to the parent storage proof and then combined with the account proof and block hash to build the `content_for_offer`. 74 | - For contract code, the code in the `content_for_retrieval` is combined with the account proof and block hash to build the `content_for_offer`. 75 | 76 | This POKE mechanism as described above SHOULD be executed after looking up content from the network, whenever the proofs and block hash are locally available 77 | to be used to build the `content_for_offer`. 78 | 79 | ### Routing Table 80 | 81 | The execution state network uses the standard routing table structure from the Portal Wire Protocol. 82 | 83 | ### Node State 84 | 85 | #### Data Radius 86 | 87 | The execution state network includes one additional piece of node state that should be tracked. Nodes must track the `data_radius` from the Ping and Pong messages for other nodes in the network. This value is a 256 bit integer and represents the data that a node is "interested" in. We define the following function to determine whether a node in the network should be interested in a piece of content. 88 | 89 | ``` 90 | interested(node, content) = distance(node.id, content.id) <= node.radius 91 | ``` 92 | 93 | A node is expected to maintain `radius` information for each node in its local node table. A node's `radius` value may fluctuate as the contents of its local key-value store change.
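The `interested` predicate above is just an unsigned comparison over the XOR metric on 256-bit ids; a direct Python sketch:

```python
def distance(a: int, b: int) -> int:
    # XOR distance metric over 256-bit node/content ids.
    return a ^ b

def interested(node_id: int, radius: int, content_id: int) -> bool:
    return distance(node_id, content_id) <= radius

# A full-radius node is interested in everything.
assert interested(0x1234, 2**256 - 1, 0xFFFF)
# A zero-radius node only wants content whose id equals its own node id.
assert interested(0xABCD, 0, 0xABCD) and not interested(0xABCD, 0, 0xABCE)
```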
94 | 95 | A node should track its own radius value and provide this value in all Ping or Pong messages it sends to other nodes. 96 | 97 | ### Data Types 98 | 99 | #### OFFER/ACCEPT vs FINDCONTENT/FOUNDCONTENT payloads 100 | 101 | The data payloads for many content types in the state network differ between OFFER/ACCEPT and FINDCONTENT/FOUNDCONTENT. 102 | 103 | The OFFER/ACCEPT payloads need to be provable by their recipients. These proofs are useful during OFFER/ACCEPT because they verify that the offered data is indeed part of the canonical chain. 104 | 105 | The FINDCONTENT/FOUNDCONTENT payloads do not contain proofs because a piece of state can exist under many different state roots. All payloads can still be proved to be the correct requested data, however, it is the responsibility of the requesting party to anchor the returned data as canonical chain state data. 106 | 107 | 108 | #### Helper Data Types 109 | 110 | ##### Paths (Nibbles) 111 | 112 | A naive approach to storage of trie nodes would be to simply use the `node_hash` value of the trie node for storage. This scheme, however, results in stored data not being tied in any direct way to its location in the trie. In a situation where a participant in the DHT wished to re-gossip data that they have stored, they would need to reconstruct a valid trie proof for that data in order to construct the appropriate OFFER/ACCEPT payload. We include the `path` metadata in state network content keys so that it is possible to reconstruct this proof. 113 | 114 | We define a path as a sequence of "nibbles" which represents the path through the Merkle Patricia Trie (MPT) to reach the trie node. 115 | 116 | ``` 117 | nibble := {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, a, b, c, d, e, f} 118 | Nibbles := ByteList(33) 119 | ``` 120 | 121 | Because each nibble can be expressed using 4 bits, we pack two nibbles in one byte. The leading nibble occupies the higher bits and the following nibble the lower bits.
122 | 123 | This encoding can introduce ambiguity (e.g. it's not possible to distinguish between the single nibble `[1]` and the nibbles `[0, 1]`, because both are expressed as `0x01`). To prevent this, we use the highest 4 bits of the first byte to specify whether the number of nibbles is even or odd: 124 | 125 | - bits `0000` - Number of nibbles is even. The 4 lowest bits of the first byte MUST be set to `0` (so that the value of the first byte is `0x00`). 126 | - bits `0001` - Number of nibbles is odd. The first nibble MUST be stored in the 4 lowest bits of the first byte. 127 | 128 | All remaining nibbles are packed in pairs of two and appended to the first byte. 129 | 130 | Examples: 131 | 132 | ``` 133 | [] -> [0x00] 134 | [0] -> [0x10] 135 | [1] -> [0x11] 136 | [0, 1] -> [0x00, 0x01] 137 | [1, 2, a, b] -> [0x00, 0x12, 0xab] 138 | [1, 2, a, b, c] -> [0x11, 0x2a, 0xbc] 139 | ``` 140 | 141 | ##### Address Hash 142 | 143 | An address hash is calculated as the `keccak` hash of the Ethereum address. 144 | 145 | ``` 146 | AddressHash := Bytes32 147 | ``` 148 | 149 | ##### Merkle Patricia Trie (MPT) Proofs 150 | 151 | Merkle Patricia Trie (MPT) proofs consist of a list of `TrieNode` objects that correspond to 152 | individual trie nodes from the MPT. Each node can be one of the different node types from the MPT 153 | (e.g. branch, extension, leaf). When serialized, each `TrieNode` is represented as an RLP 154 | serialized list of the component elements. The largest possible node type is the branch node, which 155 | should be up to 532 bytes (16 child nodes of `Bytes32` with extra encoding info), but we specify 156 | 1024 bytes to be on the safe side. 157 | 158 | Note that the `blank` (or `nil`) trie node will never be useful in our context because we don't want to 159 | store it and it can't be part of the trie proof.
160 | 161 | ``` 162 | TrieNode := ByteList(1024) 163 | TrieProof := List(TrieNode, limit=65) 164 | ``` 165 | 166 | The `TrieProof` type is used for both the Account Trie and the Contract Storage Trie. 167 | 168 | All `TrieProof` objects are subject to the following validity requirements. 169 | 170 | - A: Lexical Ordering 171 | - B: No Extraneous Nodes 172 | 173 | ###### A: Lexical Ordering 174 | 175 | The sequence of nodes in the proof MUST represent an unbroken path in a trie, starting from the 176 | root node and each node being the child of its predecessor. This results in the state root node 177 | always occurring first and the node being proven occurring last in the list of trie nodes. 178 | 179 | > This validity condition is to ensure that verification of the proof can be done 180 | in a single pass. 181 | 182 | ###### B: No Extraneous Nodes 183 | 184 | A proof MUST NOT contain any nodes that are not part of the set needed for proving. 185 | 186 | > This validity condition is to protect against malicious or erroneous bloating of proof payloads. 187 | 188 | 189 | #### Account Trie Node 190 | 191 | These data types represent a node from the main state trie. 192 | 193 | ``` 194 | account_trie_node_key := Container(path: Nibbles, node_hash: Bytes32) 195 | selector := 0x20 196 | 197 | content_key := selector + SSZ.serialize(account_trie_node_key) 198 | ``` 199 | 200 | If the node is an extension or leaf node (not a branch node), then the `path` field MUST NOT include the 201 | same nibbles that are stored inside that node. 202 | 203 | ##### Account Trie Node: OFFER/ACCEPT 204 | 205 | This type MUST be used when content is offered via OFFER/ACCEPT. 206 | 207 | ``` 208 | content_for_offer := Container(proof: TrieProof, block_hash: Bytes32) 209 | ``` 210 | 211 | The `proof` field MUST contain the proof for the trie node whose position in the trie and hash are 212 | specified in the `content_key`. The proof MUST be anchored to the block specified by the 213 | `block_hash` field.
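The `path` field in content keys like `account_trie_node_key` above uses the nibble packing described earlier; a minimal Python sketch, checked against the spec's own examples:

```python
def pack_nibbles(nibbles):
    # Even count: prefix byte 0x00. Odd count: 0x1X with the first nibble in X.
    if len(nibbles) % 2 == 0:
        out, rest = bytearray([0x00]), nibbles
    else:
        out, rest = bytearray([0x10 | nibbles[0]]), nibbles[1:]
    for hi, lo in zip(rest[0::2], rest[1::2]):
        out.append((hi << 4) | lo)  # leading nibble in the high bits
    return bytes(out)

assert pack_nibbles([]) == bytes([0x00])
assert pack_nibbles([0]) == bytes([0x10])
assert pack_nibbles([0, 1]) == bytes([0x00, 0x01])
assert pack_nibbles([1, 2, 0xA, 0xB]) == bytes([0x00, 0x12, 0xAB])
assert pack_nibbles([1, 2, 0xA, 0xB, 0xC]) == bytes([0x11, 0x2A, 0xBC])
```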
214 | 215 | ##### Account Trie Node: FINDCONTENT/FOUNDCONTENT 216 | 217 | This type MUST be used when content is retrieved from another node via FINDCONTENT/FOUNDCONTENT. 218 | 219 | ``` 220 | content_for_retrieval := Container(node: TrieNode) 221 | ``` 222 | 223 | 224 | #### Contract Trie Node 225 | 226 | These data types represent a node from an individual contract storage trie. 227 | 228 | ``` 229 | storage_trie_node_key := Container(address_hash: AddressHash, path: Nibbles, node_hash: Bytes32) 230 | selector := 0x21 231 | 232 | content_key := selector + SSZ.serialize(storage_trie_node_key) 233 | ``` 234 | 235 | If the node is an extension or leaf node (not a branch node), then the `path` field MUST NOT include the 236 | same nibbles that are stored inside that node. 237 | 238 | ##### Contract Trie Node: OFFER/ACCEPT 239 | 240 | This type MUST be used when content is offered via OFFER/ACCEPT. 241 | 242 | ``` 243 | content_for_offer := Container(storage_proof: TrieProof, account_proof: TrieProof, block_hash: Bytes32) 244 | ``` 245 | 246 | The `account_proof` field MUST contain the proof for the account state specified by the `address_hash` 247 | field in the key and it MUST be anchored to the block specified by the `block_hash` field. 248 | 249 | The `storage_proof` field MUST contain the proof for the trie node whose position in the trie and 250 | hash are specified in the key and it MUST be anchored to the trie specified by the account state. 251 | 252 | ##### Contract Trie Node: FINDCONTENT/FOUNDCONTENT 253 | 254 | This type MUST be used when content is retrieved from another node via FINDCONTENT/FOUNDCONTENT. 255 | 256 | ``` 257 | content_for_retrieval := Container(node: TrieNode) 258 | ``` 259 | 260 | 261 | #### Contract Code 262 | 263 | These data types represent the bytecode for a contract.
264 | 265 | > NOTE: Because the CREATE2 opcode allows for redeployment of new code at an existing address_hash, we MUST randomly distribute contract code storage across the DHT keyspace to avoid hotspots developing in the network for any contract that has had many different code deployments. Were we to use the path based *high-bits* approach for computing the content-id, it would be possible for a single location in the network to accumulate a large number of contract code objects that all live in roughly the same space, which would be problematic. 266 | 267 | 268 | ``` 269 | contract_code_key := Container(address_hash: AddressHash, code_hash: Bytes32) 270 | selector := 0x22 271 | 272 | content_key := selector + SSZ.serialize(contract_code_key) 273 | ``` 274 | 275 | ##### Contract Code: OFFER/ACCEPT 276 | 277 | This type MUST be used when content is offered via OFFER/ACCEPT. 278 | 279 | ``` 280 | content_for_offer := Container(code: ByteList(32768), account_proof: TrieProof, block_hash: Bytes32) 281 | ``` 282 | 283 | The `account_proof` field MUST contain the proof for the account state specified by the `address_hash` 284 | field in the key and it MUST be anchored to the block specified by the `block_hash` field. 285 | 286 | ##### Contract Code: FINDCONTENT/FOUNDCONTENT 287 | 288 | This type MUST be used when content is retrieved from another node via FINDCONTENT/FOUNDCONTENT. 289 | 290 | ``` 291 | content_for_retrieval := Container(code: ByteList(32768)) 292 | ``` 293 | 294 | 295 | ## Gossip 296 | 297 | As each new block is added to the chain, the updated state from that block must be gossiped into 298 | the network. In short, every trie node that is created or modified MUST be gossiped into the network, 299 | together with its proof. 300 | 301 | ### Terminology 302 | 303 | The Merkle Patricia Trie (MPT) has three types of nodes: *"branch"*, *"extension"* and *"leaf"*.
304 | The MPT also specifies the `nil` node, but it will never be sent over the network or stored, so we will 305 | ignore it for this spec. 306 | 307 | Similarly to other tree structures, the `leaf` node is the lowest node on a certain path and it's 308 | where the value is stored in the tree (strictly speaking, the MPT allows values to be stored in `branch` 309 | nodes as well, but Ethereum storage doesn't use this functionality). The `branch` and `extension` 310 | nodes can be called `intermediate` nodes because there will always be a node that can only be 311 | reached by passing through them. 312 | 313 | A *"merkle proof"* or *"proof"* is a collection of nodes from the trie sufficient to recompute the 314 | state root and prove that the *"target"* node is part of the trie defined by that state root. A proof is: 315 | 316 | - *"ordered"* 317 | - the order of the nodes in the proof has to represent the path from the root node to the target node 318 | - the first node must be the root node, followed by zero or more intermediate nodes, ending 319 | with a target node 320 | - it should be provable that any non-first node is part of the preceding node 321 | - if the root node is the target node, then the proof will only contain the root node 322 | - the target node can be of any type (branch, extension or leaf) 323 | - *"minimal"* 324 | - it contains only the nodes on the path from the root node to the target node 325 | 326 | ### Overview 327 | 328 | At each block, the bridge is responsible for creating and gossiping all of the following data and their proofs: 329 | 330 | - account trie data: 331 | - all of the new and modified trie nodes from the state trie 332 | - contract storage trie data: 333 | - all of the new and modified trie nodes from each modified contract storage trie 334 | - the proof has to include the proof for the account trie that corresponds to the same contract 335 | - all contract bytecode for newly created contracts 336 | - the proof has to include the proof for the account trie that
corresponds to the same contract 337 | 338 | A bridge should compute the content-id values for all content key/value pairs. These content-ids 339 | should be sorted by proximity to its own node-id. Beginning with the content that is *closest* to its 340 | own node-id, it should proceed to GOSSIP each individual content to nodes *interested* in that content. 341 | 342 | #### Special case to consider 343 | 344 | It should be highlighted that it's possible for a trie node to be modified even if it is not in the 345 | path of any value that was modified in the block. Consider the following example: 346 | 347 | Let's assume that the trie contains two key/value pairs: `(0x0123, A)` and `(0x1234, B)`. The trie would look 348 | something like this (numbers next to branches indicate the index in the branch node): 349 | 350 | ``` 351 | branch (root) 352 | /0 \1 353 | prefix: 123 prefix: 234 354 | value: A value: B 355 | ``` 356 | 357 | A new key/value `(0x0246, C)` is inserted in the next block, resulting in the new trie: 358 | 359 | ``` 360 | branch (root) 361 | /0 \1 362 | branch prefix: 234 363 | /1 \2 value: B 364 | prefix: 23 prefix: 46 365 | value: A value: C 366 | ``` 367 | 368 | Note that the trie node that stores value `A` changed because its prefix changed, not its value. 369 | -------------------------------------------------------------------------------- /transaction-gossip/transaction-gossip.md: -------------------------------------------------------------------------------- 1 | # Transaction Gossip Network 2 | 3 | > NOTE: This specification is a work in progress. 4 | 5 | This document is the specification for the sub-protocol that facilitates transmission of transactions from individual nodes in the network to block producers for inclusion in future blocks. 6 | 7 | ## Overview 8 | 9 | The transaction gossip network is designed with the following requirements.
10 | 11 | - Transaction payloads that are being passed around the network are "self validating", meaning that they include both the transaction object and a proof against the Ethereum state adequate to validate `sender.balance` and `sender.nonce` values. 12 | - Under normal network conditions, nodes that are interested in observing the full transaction pool will reliably receive the full set of valid transactions currently being gossiped. 13 | - Participants are not required to process the full pool and can control the total percentage of the transaction pool they wish to process. 14 | 15 | 16 | ## Wire Protocol 17 | 18 | The [Portal wire protocol](../portal-wire-protocol.md) is used as the wire protocol for the transaction gossip network. 19 | 20 | As specified in the [Protocol identifiers](../portal-wire-protocol.md#protocol-identifiers) section of the Portal wire protocol, the `protocol` field in the `TALKREQ` message **MUST** contain the value of `0x500C`. 21 | 22 | The network uses the PING/PONG/FINDNODES/FOUNDNODES/OFFER/ACCEPT messages from the [Portal Wire Protocol](../portal-wire-protocol.md). 23 | 24 | 25 | ### Distance Function 26 | 27 | The network uses the standard XOR distance function. 28 | 29 | In the transaction gossip network, the `payload` field of the `Ping` and `Pong` messages is the serialization of an SSZ Container specified as [Type 1 Basic Radius Payload](../ping-extensions/extensions/type-1.md). 30 | 31 | ## Content Keys 32 | 33 | ### Pending Transaction 34 | 35 | ``` 36 | content_key := Container(content_type: uint8, transaction_hash: Bytes32, proof_state_root: Bytes32) 37 | content_type := 0x01 38 | content_id := transaction_hash 39 | ``` 40 | 41 | The data payload for a transaction comes with a merkle proof of the `transaction.to` account.
42 | 43 | ``` 44 | payload := Container(proof: Proof, transaction: ByteList[2048]) 45 | proof := TODO 46 | transaction := TODO 47 | ``` 48 | 49 | ## Secondary Routing Table 50 | 51 | Clients in the Transaction Gossip Network are expected to maintain both the standard routing table and a secondary routing table that is anchored to node radius values. 52 | 53 | The secondary routing table is subject to all of the same maintenance and management rules as the primary table. Any node that is added to the secondary routing table must also satisfy the following validity condition: 54 | 55 | ``` 56 | node.transaction_radius >= distance(node.node_id, self.node_id) 57 | ``` 58 | 59 | The additional validity rule aims to ensure that the table is populated with nodes that will have a shared interest in mostly the same transactions as us. 60 | 61 | 62 | ## Gossip Algorithm 63 | 64 | The gossip mechanism for transactions is designed to allow DHT nodes to control what percentage of the transaction pool they wish to process. 65 | 66 | ### Radius 67 | 68 | We use the term "radius" to refer to the mechanism through which a node may limit how much of the transaction pool they wish to process. The `radius` is a 256 bit integer. 69 | 70 | A DHT node that wishes to process the full transaction pool would publish a radius value of `2**256-1`. We refer to such a DHT node as a "full radius node". 71 | 72 | A DHT node is expected to process transactions that satisfy the condition: `distance(node_id, content_id) <= radius` 73 | 74 | Each DHT node includes its current `radius` value in both PING and PONG messages. 75 | 76 | 77 | ### Validation 78 | 79 | DHT nodes **should** drop transactions which do not pass the following validation rules. 80 | 81 | 82 | #### Proof Validity 83 | 84 | The proof must be valid and anchored to a *recent* state root. Individual implementations may choose the boundary for which a proof is considered stale.
85 | 86 | > TODO: 8 blocks seems like a good simple starting point 87 | 88 | #### Balance and Nonce 89 | 90 | The proof **must** show that: 91 | 92 | - `transaction.sender.balance >= transaction.value` 93 | - `transaction.sender.nonce >= transaction.nonce` 94 | 95 | ### Gossip Rules 96 | 97 | A DHT node should only be OFFERed a transaction that is inside of its radius. 98 | 99 | A DHT node should OFFER transactions to all of the DHT nodes present in their secondary routing table (skipping any nodes for whom the transaction is outside of their radius). 100 | 101 | A DHT node should only OFFER any individual transaction to a DHT node once. 102 | 103 | 104 | ### Proof Updating 105 | 106 | A DHT node which encounters a transaction with a proof that is outdated **may** update that proof. 107 | 108 | Lightweight nodes are encouraged to allocate a small amount of processing power towards this altruistic proof updating as a way to help contribute to the overall health of the network. 109 | -------------------------------------------------------------------------------- /utp/discv5-utp.md: -------------------------------------------------------------------------------- 1 | # uTP SubProtocol over Discovery v5 2 | 3 | ## Abstract 4 | 5 | This document specifies an implementation of the [uTP](https://www.bittorrent.org/beps/bep_0029.html) streaming protocol which uses [Node Discovery Protocol v5](https://github.com/ethereum/devp2p/blob/6eddaf50298d551a83bcc242e7ce7024c6cc8590/discv5/discv5.md) as a transport instead of raw UDP packets. 6 | 7 | ## Motivation 8 | 9 | The Discovery v5 protocol provides a simple and extensible UDP based protocol with robust encryption and resistance to deep packet inspection. The use of UDP, however, imposes a tight limit on packet sizes.
[Sub-protocols](https://github.com/ethereum/devp2p/blob/master/discv5/discv5-wire.md#talkreq-request-0x05) which wish to implement functionality that requires transmission of data that exceeds this packet size are forced to implement their own solutions for splitting these payloads across multiple UDP packets. Packet loss makes this type of solution fragile. A generic solution that can be reused across different Discovery v5 sub-protocols will improve the overall security and robustness of sub-protocols. 10 | 11 | 12 | ## Specification 13 | 14 | The uTP over Discovery v5 protocol uses the byte string `utp` (`0x757470` in hex) as the value for the protocol byte string in the `TALKREQ` message. 15 | 16 | All uTP packets MUST be sent using the `TALKREQ` message. 17 | 18 | This protocol MUST NOT use the `TALKRESP` message for sending uTP packets. 19 | 20 | > Note: `TALKREQ` is part of a request-response mechanism and might cause Discovery v5 implementations 21 | to invalidate peers when not receiving a `TALKRESP` response. This is an unresolved item in the specification. 22 | Thus, currently, a `TALKRESP` message MAY be sent in response to a `TALKREQ` message. 23 | However, the response MUST be ignored by the uTP protocol. 24 | 25 | The payload passed to the `request` field of the `TALKREQ` message is the uTP packet as specified in BEP29. 26 | 27 | ### BEP29 28 | 29 | https://www.bittorrent.org/beps/bep_0029.html 30 | 31 | The uTP protocol as specified in BEP29 defines the packet structure and logic for handling packets. 32 | 33 | The main difference with BEP29 is that, instead of a raw UDP packet, the Discovery v5 `TALKREQ` message is used as transport. 34 | 35 | Additionally, the following deviations from the uTP specification or reference implementation are applied: 36 | - The `connection_id` is passed out of band (i.e.
in a Portal wire protocol message), instead of randomly generated by the uTP connection initiator: This is required for integration with the Portal wire protocol. 37 | - To track incoming uTP streams, the IP address + port + Discovery v5 `NodeId` + `connection_id` is used, as opposed to IP address + port + `connection_id` in standard uTP. 38 | - It is allowed to send `ST_DATA` without receiving `ST_DATA` first from the initiator of the uTP connection. This is not specified in BEP29, but rather a deviation from the [uTP reference implementation](https://github.com/bittorrent/libutp). It was added in the reference implementation to counter a reflective DDoS attack. 39 | Relevant paper: https://www.usenix.org/system/files/conference/woot15/woot15-paper-adamsky.pdf. 40 | However, when using Discovery v5 as a transport, the DDoS attack is no longer applicable because a full Discovery v5 handshake is required, which will not work with a spoofed IP address. 41 | - The uTP reference implementation deviates from the uTP specification on the initialization of the `ack_nr` when receiving the `ACK` of a `SYN` packet. The reference implementation [initializes](https://github.com/bittorrent/libutp/blob/master/utp_internal.cpp#L1874) this as `c.ack_nr = pkt.seq_nr - 1` while the specification indicates `c.ack_nr = pkt.seq_nr`. The uTP over Discovery v5 specification follows the uTP reference implementation: `c.ack_nr = pkt.seq_nr - 1`. 42 | 43 | ## Example Usage 44 | 45 | ### Data Request 46 | Suppose we have a sub-protocol with the following messages: 47 | 48 | - `GetData` (request) 49 | - `Data` (response) 50 | 51 | A request is sent by Alice using the `GetData` message, containing an identifier 52 | to the data. The size of the data to be transmitted exceeds the UDP packet size, 53 | so the `Data` response sent by Bob will contain a randomly generated 54 | `connection_id` instead. 55 | 56 | Alice will then initiate a new uTP connection with Bob using this `connection_id`.
57 | 58 | Bob, upon sending the `Data` message containing the `connection_id` will 59 | *listen* for a new incoming connection from Alice over the `utp` sub-protocol. 60 | When this new connection is opened, Alice can then read the bytes from the stream 61 | until the connection closes. 62 | 63 | The `connection_id` sent in the sub-protocol response message is the 64 | `connection_id_send` value for the node sending the response, and thus the 65 | `connection_id_recv` value for the initiator of the uTP connection. 66 | 67 | A typical flow of messages: 68 | 69 | ```mermaid 70 | sequenceDiagram 71 | Alice->>Bob: GetData 72 | Bob->>Alice: Data 73 | Note right of Bob: Start listening for specific uTP connection from Alice 74 | Alice->>Bob: uTP ST_SYN 75 | Bob->>Alice: uTP ST_STATE 76 | Bob->>Alice: uTP ST_DATA 77 | Alice->>Bob: ... 78 | Bob->>Alice: ... 79 | Note left of Bob: Once DATA sent & acknowledged 80 | Bob->>Alice: uTP ST_FIN 81 | 82 | ``` 83 | 84 | The typical flow is that Bob sends the `ST_FIN` to terminate the uTP connection. 85 | But Alice MAY also send a `ST_FIN` if Alice can conclude that it received all the 86 | data, and there are situations where this may happen (e.g. lost `ST_FIN` packet). 87 | 88 | ### Data Offer 89 | Suppose we have a sub-protocol with the following messages: 90 | 91 | - `OfferData` (request) 92 | - `Accept` (response) 93 | 94 | An offer is sent by Alice using the `OfferData` message, containing an identifier 95 | to the data. The `Accept` response sent by Bob will contain a randomly generated 96 | `connection_id`. 97 | 98 | Alice will then initiate a new uTP connection with Bob using this `connection_id`. 99 | 100 | Bob, upon sending the `Accept` message containing the `connection_id` will 101 | *listen* for a new incoming connection from Alice over the `utp` sub-protocol. 102 | When this new connection is opened, Bob can then read the bytes from the stream 103 | until the connection closes. 
104 | 
105 | The `connection_id` sent in the response message is the `connection_id_send`
106 | value for the node sending the response, and thus the `connection_id_recv` value
107 | for the initiator of the uTP connection.
108 | 
109 | A typical message flow:
110 | 
111 | ```mermaid
112 | sequenceDiagram
113 | Alice->>Bob: OfferData
114 | Bob->>Alice: Accept
115 | Note right of Bob: Start listening for specific uTP connection from Alice
116 | Alice->>Bob: uTP ST_SYN
117 | Bob->>Alice: uTP ST_STATE
118 | Alice->>Bob: uTP ST_DATA
119 | Bob->>Alice: ...
120 | Alice->>Bob: ...
121 | Note left of Bob: Once DATA sent & acknowledged
122 | Alice->>Bob: uTP ST_FIN
123 | 
124 | ```
125 | The typical flow is that Alice sends the `ST_FIN` to terminate the uTP connection.
126 | But Bob MAY also send an `ST_FIN` if he can conclude that he has received all
127 | the data, and there are situations where this may happen (e.g. a lost `ST_FIN` packet).
128 | 
--------------------------------------------------------------------------------
/utp/utp-wire-test-vectors.md:
--------------------------------------------------------------------------------
1 | # uTP Wire Test Vectors
2 | 
3 | This document provides a collection of test vectors for encoding and decoding
4 | uTP protocol packets.
5 | 
6 | ## uTP Packet Encodings
7 | 
8 | This section provides test vectors for the individual protocol packets defined in
9 | the uTP specification - https://www.bittorrent.org/beps/bep_0029.html.
10 | 
11 | Each packet's input parameters are: `(PacketHeader, List[Extension], payload)`
12 | 
13 | The spec defines only one extension, called `SelectiveAckExtension`.
14 | 
15 | Taking that into account, packet inputs can be simplified to: `(PacketHeader, Option[SelectiveAckExtension], Payload)`
16 | 
17 | Where:
18 | `payload` is a `ByteArray`.
19 | 
20 | `PacketHeader` is an object with input parameters:
21 | `(type, version, extension, connection_id, timestamp_microseconds, timestamp_difference_microseconds, wnd_size, seq_nr, ack_nr)`
22 | 
23 | `Option[SelectiveAckExtension]` is either a bitmask of 32 bits, where each bit represents one packet in the send window,
24 | or a none value, which represents the lack of any extension.
25 | 
26 | ### SYN Packet
27 | 
28 | #### Input Parameters
29 | 
30 | ```
31 | PacketHeader = {
32 |   type: 4
33 |   version: 1
34 |   extension: 0
35 |   connection_id: 10049
36 |   timestamp_microseconds: 3384187322
37 |   timestamp_difference_microseconds: 0
38 |   wnd_size: 1048576
39 |   seq_nr: 11884
40 |   ack_nr: 0
41 | }
42 | 
43 | SelectiveAckExtension = none
44 | Payload = []
45 | ```
46 | 
47 | #### Expected Output
48 | ```
49 | packet = 0x41002741c9b699ba00000000001000002e6c0000
50 | ```
51 | 
52 | ### ACK Packet (no extension)
53 | 
54 | #### Input Parameters
55 | 
56 | ```
57 | PacketHeader = {
58 |   type: 2
59 |   version: 1
60 |   extension: 0
61 |   connection_id: 10049
62 |   timestamp_microseconds: 6195294
63 |   timestamp_difference_microseconds: 916973699
64 |   wnd_size: 1048576
65 |   seq_nr: 16807
66 |   ack_nr: 11885
67 | }
68 | 
69 | SelectiveAckExtension = none
70 | Payload = []
71 | ```
72 | 
73 | #### Expected Output
74 | ```
75 | packet = 0x21002741005e885e36a7e8830010000041a72e6d
76 | ```
77 | 
78 | 
79 | ### ACK Packet (with selective ACK extension)
80 | 
81 | #### Input Parameters
82 | 
83 | ```
84 | PacketHeader = {
85 |   type: 2
86 |   version: 1
87 |   extension: 1
88 |   connection_id: 10049
89 |   timestamp_microseconds: 6195294
90 |   timestamp_difference_microseconds: 916973699
91 |   wnd_size: 1048576
92 |   seq_nr: 16807
93 |   ack_nr: 11885
94 | }
95 | 
96 | SelectiveAckExtension = [1, 0, 0, 128] // bitmask with bits 0 and 31 set
97 | Payload = []
98 | ```
99 | 
100 | #### Expected Output
101 | ```
102 | packet = 0x21012741005e885e36a7e8830010000041a72e6d000401000080
103 | ```
104 | 
105 | 
106 | ### DATA Packet
107 | 
108 | #### Input Parameters
109 | 
110 | ```
111 | PacketHeader = {
112 |   type: 0
113 |   version: 1
114 |   extension: 0
115 |   connection_id: 26237
116 |   timestamp_microseconds: 252492495
117 |   timestamp_difference_microseconds: 242289855
118 |   wnd_size: 1048576
119 |   seq_nr: 8334
120 |   ack_nr: 16806
121 | }
122 | 
123 | SelectiveAckExtension = none
124 | Payload = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
125 | ```
126 | 
127 | #### Expected Output
128 | ```
129 | packet = 0x0100667d0f0cbacf0e710cbf00100000208e41a600010203040506070809
130 | ```
131 | 
132 | 
133 | ### FIN Packet
134 | 
135 | #### Input Parameters
136 | 
137 | ```
138 | PacketHeader = {
139 |   type: 1
140 |   version: 1
141 |   extension: 0
142 |   connection_id: 19003
143 |   timestamp_microseconds: 515227279
144 |   timestamp_difference_microseconds: 511481041
145 |   wnd_size: 1048576
146 |   seq_nr: 41050
147 |   ack_nr: 16806
148 | }
149 | 
150 | SelectiveAckExtension = none
151 | Payload = []
152 | ```
153 | 
154 | #### Expected Output
155 | ```
156 | packet = 0x11004a3b1eb5be8f1e7c94d100100000a05a41a6
157 | ```
158 | 
159 | 
160 | ### RESET Packet
161 | 
162 | #### Input Parameters
163 | 
164 | ```
165 | PacketHeader = {
166 |   type: 3
167 |   version: 1
168 |   extension: 0
169 |   connection_id: 62285
170 |   timestamp_microseconds: 751226811
171 |   timestamp_difference_microseconds: 0
172 |   wnd_size: 0
173 |   seq_nr: 55413
174 |   ack_nr: 16807
175 | }
176 | 
177 | SelectiveAckExtension = none
178 | Payload = []
179 | ```
180 | 
181 | #### Expected Output
182 | ```
183 | packet = 0x3100f34d2cc6cfbb0000000000000000d87541a7
184 | ```
185 | 
186 | 
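The vectors above can be checked with a few lines of code. The sketch below is an illustrative encoder (not part of this spec): it packs the 20-byte header described in BEP 29, appends the optional selective-ack extension block (next-extension byte, length byte, bitmask), and reproduces the SYN and ACK vectors.

```python
import struct
from typing import Optional

def encode_packet(type_: int, version: int, extension: int, connection_id: int,
                  timestamp_microseconds: int,
                  timestamp_difference_microseconds: int, wnd_size: int,
                  seq_nr: int, ack_nr: int,
                  selective_ack: Optional[bytes] = None,
                  payload: bytes = b"") -> bytes:
    # 20-byte header: type/version packed into one byte, extension byte,
    # then big-endian connection_id, timestamps, window, seq and ack numbers.
    header = struct.pack(">BBHIIIHH", (type_ << 4) | version, extension,
                         connection_id, timestamp_microseconds,
                         timestamp_difference_microseconds, wnd_size,
                         seq_nr, ack_nr)
    ext = b""
    if selective_ack is not None:
        # Extension block: next-extension (0 = none), length, then bitmask.
        ext = bytes([0, len(selective_ack)]) + selective_ack
    return header + ext + payload

# SYN vector from above
syn = encode_packet(4, 1, 0, 10049, 3384187322, 0, 1048576, 11884, 0)
assert syn.hex() == "41002741c9b699ba00000000001000002e6c0000"

# ACK with selective ACK extension
ack = encode_packet(2, 1, 1, 10049, 6195294, 916973699, 1048576, 16807, 11885,
                    selective_ack=bytes([1, 0, 0, 128]))
assert ack.hex() == "21012741005e885e36a7e8830010000041a72e6d000401000080"
```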
--------------------------------------------------------------------------------
/verkle/verkle-state-network.md:
--------------------------------------------------------------------------------
1 | # Execution Verkle State Network
2 | 
3 | This document is the specification for the sub-protocol that supports on-demand availability of Verkle state data from the execution chain. The Verkle trie is the upcoming structure for storing Ethereum state. See [EIP-6800](https://eips.ethereum.org/EIPS/eip-6800) for more details.
4 | 
5 | > 🚧 THE SPEC IS IN A STATE OF FLUX AND SHOULD BE CONSIDERED UNSTABLE 🚧
6 | 
7 | ## Overview
8 | 
9 | The Verkle State Network subnetwork protocol is almost identical to the [State Network](./../state/state-network.md). The main difference is in the way that data is structured and encoded. Only the differences are described below.
10 | 
11 | ### Portal Network version of the Verkle Trie
12 | 
13 | The high-level overview and reasoning can be found here: [ethresear.ch/t/portal-network-verkle/19339](https://ethresear.ch/t/portal-network-verkle/19339).
14 | 
15 | The Portal Network stores every trie node that ever existed. For optimization reasons, each trie node is split into a 2-layer mini trie, and each node from the mini trie is stored separately in the network. The exact encoding and content key derivation for each node type are specified below.
16 | 
17 | To represent a trie node, the Verkle trie uses a Pedersen commitment, which is calculated using the following formula:
18 | 
19 | $$C = Commit(a_0, a_1, ..., a_{255}) = a_0B_0 + a_1B_1 + ... + a_{255}B_{255}$$
20 | 
21 | where:
22 | - $B_i$ is the basis of the Pedersen commitment
23 |   - fixed elliptic curve points on the Banderwagon (a prime-order subgroup over [Bandersnatch](https://ethresear.ch/t/introducing-bandersnatch-a-fast-elliptic-curve-built-over-the-bls12-381-scalar-field/9957)) curve.
24 | - $a_i$ are the values we are committing to
25 |   - values from the elliptic curve's scalar field $F_r$ (maximum value is less than $2^{253}$)
26 | - $C$ is the commitment to the $a_i$ values, which is itself a point on the elliptic curve
27 |   - in order to commit to another commitment, we map the commitment $C$ to the scalar field $F_r$ and call the result the **Pedersen hash** or **hash commitment**
28 |   - these two values are frequently used interchangeably, but the mapping is not one-to-one
29 |   - in this document, we use $C$ to indicate a commitment expressed as an elliptic curve point, and $c$ when it's mapped to the scalar field (hash commitment)
30 | 
31 | #### Trie Node
32 | 
33 | The Verkle trie has 2 types of nodes:
34 | 
35 | - branch (inner) node: up to 256 child nodes (either branch or leaf)
36 | - leaf (extension) node: up to 256 values (32 bytes each)
37 | 
38 | ##### Branch (Inner) node
39 | 
40 | The branch node of the Verkle trie stores up to 256 values, each of which is a hash commitment of a child trie node.
41 | 
42 | $$C = c_0B_0 + c_1B_1 + ... + c_{255}B_{255}$$
43 | 
44 | For optimization reasons, the Portal Network splits the branch node into a 2-layer mini trie in the following way:
45 | 
46 | ![Verkle Branch Node](./../assets/verkle_branch_node.png)
47 | 
48 | Each of the `branch-fragment` nodes stores the hash commitments of 16 child nodes. The commitment of those 16 children represents that fragment node and is stored inside the `branch-bundle` node.
49 | 
50 | $$
51 | \begin{align*}
52 | C^\prime_0 &= c_0B_0 + c_1B_1 +…+ c_{15}B_{15} \\
53 | C^\prime_1 &= c_{16}B_{16} + c_{17}B_{17} +…+ c_{31}B_{31} \\
54 | &… \\
55 | C^\prime_{15} &= c_{240}B_{240} + c_{241}B_{241} +…+ c_{255}B_{255} \\
56 | \end{align*}
57 | $$
58 | 
59 | The commitment of the `branch-bundle` node ($C$) is calculated as the sum of the 16 `branch-fragment` node commitments ($C^\prime_i$).
60 | 
61 | $$C = C^\prime_0 + C^\prime_1 +...+ C^\prime_{15}$$
62 | 
63 | ##### Leaf (extension) node
64 | 
65 | The leaf node of the Verkle trie stores up to 256 values, each 32 bytes long. Because a value (32 bytes) doesn't fit into the scalar field, the commitment of the leaf node ($C$) is calculated in the following way.
66 | 
67 | $$C = Commit(marker, stem, C_1, C_2)$$
68 | 
69 | $$
70 | \begin{align*}
71 | C_1 &= Commit(v_0^{low+access}, v_0^{high}, v_1^{low+access}, v_1^{high}, ... , v_{127}^{low+access}, v_{127}^{high}) \\
72 | C_2 &= Commit(v_{128}^{low+access}, v_{128}^{high}, v_{129}^{low+access}, v_{129}^{high}, ... , v_{255}^{low+access}, v_{255}^{high}) \\
73 | \end{align*}
74 | $$
75 | 
76 | where:
77 | 
78 | - $marker$ - currently only the value $1$ is used
79 | - $stem$ - the path from the root of the trie (31 bytes)
80 | - $v_i^{low+access}$ - the lower 16 bytes of the value $v_i$, plus $2^{128}$ if the value is modified
81 |   - note that if the value is not modified, it will be equal to $0$
82 | - $v_i^{high}$ - the higher 16 bytes of the value $v_i$
83 | 
84 | 
85 | For optimization reasons, the Portal Network splits the leaf node into a 2-layer mini trie in the following way:
86 | 
87 | ![Verkle leaf node](./../assets/verkle_leaf_node.png)
88 | 
89 | Each of the `leaf-fragment` nodes stores up to 16 values (32 bytes each).
90 | 
91 | The commitment of those 16 values ($C^\prime_i$) represents that fragment node and is stored inside the `leaf-bundle` node.
92 | 
93 | $$
94 | \begin{align*}
95 | C^\prime_0 &= v_0^{low+access}B_0 + v_0^{high}B_1 + v_1^{low+access}B_2 + v_1^{high}B_3 +…+ v_{15}^{low+access}B_{30} + v_{15}^{high}B_{31} \\
96 | C^\prime_1 &= v_{16}^{low+access}B_{32} + v_{16}^{high}B_{33} + v_{17}^{low+access}B_{34} + v_{17}^{high}B_{35} +…+ v_{31}^{low+access}B_{62} + v_{31}^{high}B_{63} \\
97 | &… \\
98 | C^\prime_7 &= v_{112}^{low+access}B_{224} + v_{112}^{high}B_{225} + v_{113}^{low+access}B_{226} + v_{113}^{high}B_{227} +…+ v_{127}^{low+access}B_{254} + v_{127}^{high}B_{255} \\
99 | \\
100 | C^\prime_8 &= v_{128}^{low+access}B_{0} + v_{128}^{high}B_{1} + v_{129}^{low+access}B_{2} + v_{129}^{high}B_{3} +…+ v_{143}^{low+access}B_{30} + v_{143}^{high}B_{31} \\
101 | &… \\
102 | C^\prime_{15} &= v_{240}^{low+access}B_{224} + v_{240}^{high}B_{225} + v_{241}^{low+access}B_{226} + v_{241}^{high}B_{227} +…+ v_{255}^{low+access}B_{254} + v_{255}^{high}B_{255} \\
103 | \end{align*}
104 | $$
105 | 
106 | The commitment of the `leaf-bundle` node ($C$) is calculated in the following way:
107 | 
108 | $$
109 | \begin{align*}
110 | C_1 &= C^\prime_0 + C^\prime_1 + … + C^\prime_7 \\
111 | C_2 &= C^\prime_8 + C^\prime_9 + … + C^\prime_{15}
112 | \end{align*}
113 | $$
114 | 
115 | $$
116 | C = marker \cdot B_0 + stem \cdot B_1 + c_1B_2 + c_2B_3
117 | $$
118 | 
119 | 
120 | ## Specification
121 | 
122 | ### Protocol Identifier
123 | 
124 | As specified in the [Protocol identifiers](./../portal-wire-protocol.md#protocol-identifiers) section of the Portal wire protocol, the `protocol` field in the `TALKREQ` message **MUST** contain the value of `0x500E`.
125 | 
126 | ### Helper Data Types
127 | 
128 | #### Path and Stem
129 | 
130 | The Path represents the trie path from the root to the branch node. The Stem represents the first 31 bytes of the Verkle trie key.
131 | 
132 | ```
133 | Path := List[uint8; 30]
134 | Stem := Bytes31
135 | ```
136 | 
137 | #### Commitment
138 | 
139 | Both an elliptic curve point (commitment) and a scalar field value (hash commitment) can be encoded using 32 bytes. We define them separately in order to be explicit.
140 | 
141 | ```
142 | EllipticCurvePoint := Bytes32
143 | ScalarFieldValue := Bytes32
144 | ```
145 | 
146 | #### Bundle Commitment Proof
147 | 
148 | **⚠️ This section needs more research and a detailed specification. ⚠️**
149 | 
150 | In order to prevent bad actors from creating false data for the `bundle` nodes of the mini tries, we have to create and include a proof that the fragment commitments are correct. The exact proof schema is being researched.
151 | 
152 | ```
153 | BundleProof := Bytes1024
154 | ```
155 | 
156 | #### Trie Proof
157 | 
158 | Using IPA and Multiproof, the same proving scheme that Verkle uses, we can prove that any node or value is included in a trie in a memory-efficient way.
159 | 
160 | **⚠️ This section needs a detailed specification. ⚠️**
161 | 
162 | Exact details of the specification are yet to be decided. We provide only temporary types (based on the execution witness from the verkle devnet).
163 | 
164 | ```
165 | IpaProof := Container(
166 |     cl: Vector[EllipticCurvePoint; 8],
167 |     cr: Vector[EllipticCurvePoint; 8],
168 |     final_evaluation: ScalarFieldValue,
169 | )
170 | MultiPointProof := Container(
171 |     open_proof: IpaProof,
172 |     g_x: EllipticCurvePoint,
173 | )
174 | 
175 | TrieProof := Container(
176 |     commitments_by_path: List[EllipticCurvePoint; 32],
177 |     multiproof: MultiPointProof,
178 | )
179 | ```
180 | 
181 | #### Children
182 | 
183 | All our nodes store up to 16 children. We encode a bitmap of the present children, followed by the children themselves.
184 | 
185 | ```
186 | Children[type] := Container(bitmap: Bitvector(16), children: List[type; 16])
187 | ```
188 | 
189 | Note that the count of set bits inside the `bitmap` field MUST be equal to the length of the `children` field. The order of the children is from the lowest to the highest set bit.
190 | 
191 | #### Trie node
192 | 
193 | Each kind of trie node has its own type and its own proof.
194 | 
195 | ```
196 | BranchBundleNode := Container(
197 |     fragments: Children[EllipticCurvePoint],
198 |     proof: BundleProof,
199 | )
200 | BranchBundleNodeWithProof := Container(
201 |     node: BranchBundleNode,
202 |     block_hash: Bytes32,
203 |     path: Path,
204 |     proof: Union[None, TrieProof],
205 | )
206 | 
207 | BranchFragmentNode := Container(
208 |     fragment_index: uint8,
209 |     children: Children[EllipticCurvePoint],
210 | )
211 | BranchFragmentNodeWithProof := Container(
212 |     node: BranchFragmentNode,
213 |     block_hash: Bytes32,
214 |     path: Path,
215 |     proof: Union[None, TrieProof],
216 | )
217 | 
218 | LeafBundleNode := Container(
219 |     marker: uint32,
220 |     stem: Stem,
221 |     fragments: Children[EllipticCurvePoint],
222 |     proof: BundleProof,
223 | )
224 | LeafBundleNodeWithProof := Container(
225 |     node: LeafBundleNode,
226 |     block_hash: Bytes32,
227 |     proof: TrieProof,
228 | )
229 | 
230 | LeafFragmentNode := Container(
231 |     fragment_index: uint8,
232 |     values: Children[Bytes32],
233 | )
234 | LeafFragmentNodeWithProof := Container(
235 |     node: LeafFragmentNode,
236 |     block_hash: Bytes32,
237 |     proof: TrieProof,
238 | )
239 | ```
240 | 
241 | It's worth noting that `proof` is `None` for the branch-bundle and branch-fragment nodes that correspond to the root of the trie (in which case `path` is empty as well).
242 | 
243 | ### Content types
244 | 
245 | #### Content keys
246 | 
247 | When doing a lookup for a bundle node, we don't know whether to expect a branch-bundle or a leaf-bundle node. For this reason, both use the same content key type.
248 | 
249 | The branch-fragment key has to be different from the branch-bundle key, because they can have the same commitment (in the case where the other fragments from that bundle are zero).
250 | 
251 | The leaf-fragment key should include the `stem` in order to avoid hot-spots. Other keys don't have to worry about hot-spots because they are built on top of leaf-bundle nodes that already include the `stem` in their commitment (effectively guaranteeing uniqueness).
252 | 
253 | ```
254 | bundle_node_key := EllipticCurvePoint
255 | bundle_content_key := 0x30 + SSZ.serialize(bundle_node_key)
256 | 
257 | branch_fragment_node_key := EllipticCurvePoint
258 | branch_fragment_content_key := 0x31 + SSZ.serialize(branch_fragment_node_key)
259 | 
260 | leaf_fragment_node_key := Container(stem: Stem, commitment: EllipticCurvePoint)
261 | leaf_fragment_content_key := 0x32 + SSZ.serialize(leaf_fragment_node_key)
262 | ```
263 | 
264 | #### Content values
265 | 
266 | The OFFER/ACCEPT payloads need to be provable by their recipients, while FINDCONTENT/FOUNDCONTENT payloads have to be verifiable as matching the commitment that identifies them.
267 | 
268 | The content value has to correspond to the content key.
269 | 
270 | ```
271 | content_value_for_offer := Union(
272 |     BranchBundleNodeWithProof,
273 |     BranchFragmentNodeWithProof,
274 |     LeafBundleNodeWithProof,
275 |     LeafFragmentNodeWithProof,
276 | )
277 | 
278 | content_value_for_retrieval := Union(
279 |     BranchBundleNode,
280 |     BranchFragmentNode,
281 |     LeafBundleNode,
282 |     LeafFragmentNode,
283 | )
284 | ```
285 | 
286 | ## Gossip
287 | 
288 | For each block, the bridge is responsible for detecting and gossiping all created and updated trie nodes separately. The bridge should first compute all content-ids that should be gossiped, and then gossip them based on their distance from its own node-id, from closest to farthest.
289 | 
--------------------------------------------------------------------------------
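As a concrete illustration of the content key definitions above: since all the key fields are fixed-size SSZ values, their serialization is simply the concatenation of the raw bytes, so building a content key reduces to prepending the selector byte. The helper names below are illustrative, not normative.

```python
def bundle_content_key(commitment: bytes) -> bytes:
    # bundle_content_key := 0x30 + SSZ.serialize(bundle_node_key)
    # EllipticCurvePoint is Bytes32; SSZ of a fixed-size value is its raw bytes.
    assert len(commitment) == 32
    return b"\x30" + commitment

def branch_fragment_content_key(commitment: bytes) -> bytes:
    # branch_fragment_content_key := 0x31 + SSZ.serialize(branch_fragment_node_key)
    assert len(commitment) == 32
    return b"\x31" + commitment

def leaf_fragment_content_key(stem: bytes, commitment: bytes) -> bytes:
    # Fixed-size SSZ container: fields are concatenated in declaration order.
    assert len(stem) == 31 and len(commitment) == 32
    return b"\x32" + stem + commitment

key = leaf_fragment_content_key(b"\x01" * 31, b"\x02" * 32)
assert key[0] == 0x32 and len(key) == 1 + 31 + 32
```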