├── .editorconfig ├── 0000-template.md ├── LICENSE-APACHE ├── LICENSE-MIT ├── README.md ├── projects ├── atomic-swap-service │ └── headless-wallet.md ├── evm-compatible-bridge │ ├── design.md │ ├── evm_contract_design.md │ └── federator.md ├── feature-activation │ ├── 0001-feature-activation-for-blocks.md │ ├── 0002-bit-signaling.md │ ├── 0003-explorer-uis.md │ ├── 0003-images │ │ ├── block.png │ │ └── feature_activation.png │ ├── 0004-phased-testing.md │ └── 0005-miner-update-request.md ├── hathor-wallet-headless │ ├── 0001-multisig-create-mint-melt-token.md │ ├── 0002-hsm-integration.md │ ├── 0002-soft-reload.md │ ├── 0003-fireblocks-integration.md │ ├── 0004-tx-transaction.md │ ├── 0005-early-send-tx-lock-release.md │ └── api-docs-ci.md ├── hathor-wallet-mobile │ ├── 0001-dev-settings.md │ ├── 0002-nano-contract-integration.md │ ├── img │ │ └── 0001-dev-settings │ │ │ ├── hathor-wallet-desktop-settings.png │ │ │ ├── mock-1-settings-page-v2.png │ │ │ ├── mock-2-risk-disclaimer-page-v2.png │ │ │ ├── mock-2-risk-disclaimer-page.png │ │ │ ├── mock-3-presettings-page.png │ │ │ ├── mock-4-custom-settings-page-v2.png │ │ │ ├── mock-5-feedback-success.png │ │ │ ├── mock-alert-ui-v3.png │ │ │ ├── safepal-custom-network-form.png │ │ │ ├── safepal-network-presettings.png │ │ │ ├── safepal-settings-groups.png │ │ │ ├── trezor-danger-area.png │ │ │ ├── trezor-option-description.png │ │ │ └── trezor-option-warning.png │ └── security │ │ └── 0001-improved-pin-security-mode.md ├── reliable-integration │ ├── 0001-high-level-design.md │ ├── 0001-images │ │ └── event_flow.png │ ├── 0002-images │ │ ├── consensus_flow.png │ │ └── full_node_cycle.png │ └── 0002-low-level-design.md ├── wallet-service-reliable-integration │ └── 0001-design.md ├── wallet-service │ └── nano-contracts │ │ └── 0001-nano-contracts-support.md └── web-wallet │ ├── metamask-snaps │ └── 0001-design.md │ ├── rpc-protocol.md │ └── wallet-connect │ ├── 0001-design.md │ ├── 0002-caip-2.md │ └── 0003-caip-10.md └── text ├── 0001-rfc-process.md ├── 0004-tokens.md ├── 0006-merged-mining-with-bitcoin.md ├── 0011-token-deposit.md ├── 0015-anatomy-of-tx.md ├── 0025-images ├── Sync-v2-RFC-images.drawio ├── fig-mempool-example.svg ├── fig-phase1-example.svg ├── fig-phase1-tx-example.svg ├── fig-phase2-example.svg └── fig-state-diagram.svg ├── 0025-p2p-sync-v2.md ├── 0027-exchange-integration.md ├── 0029-semi-isolated-networks.md ├── 0032-nft-standard.md ├── 0033-images ├── private-network-architecture-2.png ├── private-network-architecture.drawio └── private-network-architecture.png ├── 0033-private-network-guide.md └── 0044-use-case-integration-best-practices.md /.editorconfig: -------------------------------------------------------------------------------- 1 | # http://editorconfig.org 2 | 3 | root = true 4 | 5 | [*] 6 | charset = utf-8 7 | end_of_line = lf 8 | insert_final_newline = true 9 | trim_trailing_whitespace = true 10 | 11 | [*.md] 12 | max_line_length = 80 13 | indent_style = space 14 | indent_size = 2 15 | -------------------------------------------------------------------------------- /0000-template.md: -------------------------------------------------------------------------------- 1 | - Feature Name: (fill me in with a unique ident, my_awesome_feature) 2 | - Start Date: (fill me in with today's date, YYYY-MM-DD) 3 | - RFC PR: (leave this empty) 4 | - Hathor Issue: (leave this empty) 5 | - Author: (fill this with Your Name , Other Author ) 6 | 7 | # Summary 8 | [summary]: #summary 9 | 10 | One paragraph explanation of the feature. 
11 | 12 | # Motivation 13 | [motivation]: #motivation 14 | 15 | Why are we doing this? What use cases does it support? What is the expected 16 | outcome? 17 | 18 | # Guide-level explanation 19 | [guide-level-explanation]: #guide-level-explanation 20 | 21 | Explain the proposal as if it was already included in the network and you were 22 | teaching it to another Hathor programmer. That generally means: 23 | 24 | - Introducing new named concepts. 25 | - Explaining the feature largely in terms of examples. 26 | - Explaining how Hathor programmers should *think* about the feature, and how it 27 | should impact the way they use Hathor. It should explain the impact as 28 | concretely as possible. 29 | - If applicable, provide sample error messages, deprecation warnings, or 30 | migration guidance. 31 | - If applicable, describe the differences between teaching this to existing 32 | Hathor programmers and new Hathor programmers. 33 | 34 | For implementation-oriented RFCs (e.g. for compiler internals), this section 35 | should focus on how compiler contributors should think about the change, and 36 | give examples of its concrete impact. For policy RFCs, this section should 37 | provide an example-driven introduction to the policy, and explain its impact in 38 | concrete terms. 39 | 40 | # Reference-level explanation 41 | [reference-level-explanation]: #reference-level-explanation 42 | 43 | This is the technical portion of the RFC. Explain the design in sufficient 44 | detail that: 45 | 46 | - Its interaction with other features is clear. 47 | - It is reasonably clear how the feature would be implemented. 48 | - Corner cases are dissected by example. 49 | 50 | The section should return to the examples given in the previous section, and 51 | explain more fully how the detailed proposal makes those examples work. 52 | 53 | # Drawbacks 54 | [drawbacks]: #drawbacks 55 | 56 | Why should we *not* do this? 57 | 58 | # Rationale and alternatives 59 | [rationale-and-alternatives]: #rationale-and-alternatives 60 | 61 | - Why is this design the best in the space of possible designs? 62 | - What other designs have been considered and what is the rationale for not 63 | choosing them? 64 | - What is the impact of not doing this? 65 | 66 | # Prior art 67 | [prior-art]: #prior-art 68 | 69 | Discuss prior art, both the good and the bad, in relation to this proposal. 70 | A few examples of what this can include are: 71 | 72 | - For protocol, network, algorithms and other changes that directly affect the 73 | code: Does this feature exist in other blockchains and what experience have 74 | their community had? 75 | - For community proposals: Is this done by some other community and what were 76 | their experiences with it? 77 | - For other teams: What lessons can we learn from what other communities have 78 | done here? 79 | - Papers: Are there any published papers or great posts that discuss this? If 80 | you have some relevant papers to refer to, this can serve as a more detailed 81 | theoretical background. 82 | 83 | This section is intended to encourage you as an author to think about the 84 | lessons from other blockchains, provide readers of your RFC with a fuller 85 | picture. If there is no prior art, that is fine - your ideas are interesting to 86 | us whether they are brand new or if it is an adaptation from other blockchains. 87 | 88 | Note that while precedent set by other blockchains is some motivation, it does 89 | not on its own motivate an RFC. 
Please also take into consideration that Hathor 90 | sometimes intentionally diverges from common blockchain features. 91 | 92 | # Unresolved questions 93 | [unresolved-questions]: #unresolved-questions 94 | 95 | - What parts of the design do you expect to resolve through the RFC process 96 | before this gets merged? 97 | - What parts of the design do you expect to resolve through the implementation 98 | of this feature before stabilization? 99 | - What related issues do you consider out of scope for this RFC that could be 100 | addressed in the future independently of the solution that comes out of this 101 | RFC? 102 | 103 | # Future possibilities 104 | [future-possibilities]: #future-possibilities 105 | 106 | Think about what the natural extension and evolution of your proposal would be 107 | and how it would affect the network and project as a whole in a holistic way. 108 | Try to use this section as a tool to more fully consider all possible 109 | interactions with the project and network in your proposal. Also consider how 110 | this all fits into the roadmap for the project and of the relevant sub-team. 111 | 112 | This is also a good place to "dump ideas", if they are out of scope for the 113 | RFC you are writing but otherwise related. 114 | 115 | If you have tried and cannot think of any future possibilities, 116 | you may simply state that you cannot think of anything. 117 | 118 | Note that having something written down in the future-possibilities section 119 | is not a reason to accept the current or a future RFC; such notes should be 120 | in the section on motivation or rationale in this or subsequent RFCs. 121 | The section merely provides additional information. 122 | -------------------------------------------------------------------------------- /LICENSE-APACHE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 
34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 
202 | -------------------------------------------------------------------------------- /LICENSE-MIT: 1 | Permission is hereby granted, free of charge, to any 2 | person obtaining a copy of this software and associated 3 | documentation files (the "Software"), to deal in the 4 | Software without restriction, including without 5 | limitation the rights to use, copy, modify, merge, 6 | publish, distribute, sublicense, and/or sell copies of 7 | the Software, and to permit persons to whom the Software 8 | is furnished to do so, subject to the following 9 | conditions: 10 | 11 | The above copyright notice and this permission notice 12 | shall be included in all copies or substantial portions 13 | of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF 16 | ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED 17 | TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 18 | PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT 19 | SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY 20 | CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 21 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR 22 | IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER 23 | DEALINGS IN THE SOFTWARE. 24 | -------------------------------------------------------------------------------- /projects/atomic-swap-service/headless-wallet.md: 1 | # Summary 2 | [summary]: #summary 3 | 4 | This design describes the integration of the [Headless Wallet](https://github.com/HathorNetwork/hathor-wallet-headless) with the [Atomic Swap Service](https://github.com/HathorNetwork/hathor-atomic-swap-service) backend. 5 | 6 | This will allow for easy swap operations between desktop and headless clients, making communication easier for automated operations, such as websites facilitating swaps for end-users. 7 | 8 | # Acceptance criteria 9 | [acceptance-criteria]: #acceptance-criteria 10 | 11 | - Any initialized wallet on the headless app should be able to participate in atomic swap proposals managed by the Atomic Swap Service 12 | - Each wallet's end users should be able to register proposals to be listened to, and receive real-time change notifications for them 13 | - There should be no impact on the current atomic swap workflow based on string exchanges 14 | 15 | # Guide-level explanation 16 | [guide-level-explanation]: #guide-level-explanation 17 | 18 | The headless wallet, in a similar fashion to [what is proposed on the Desktop Wallet](https://github.com/HathorNetwork/hathor-wallet/pull/361/files#diff-81b333677b8eb7fcc977d225072c1c10453c0aee095f9458885d7a553ef3579d), will have a `listenedProposals` map in memory for each of the initialized wallets and will keep track of their identifiers, passwords and real-time updates. 19 | 20 | ### Example use case 1 21 | A user of the initialized wallet `swap-website-wallet` calls the `registerProposal` endpoint with an identifier `prop-1` and password `123` that it has received from another one of the proposal's participants. This request will add `prop-1` to the local `listenedProposals` object in memory. 22 | 23 | A `fetch` call can be made right after, providing the desired `prop-1` identifier, to retrieve its latest state on the service backend and, after changes are made, `update` requests can persist the changes and inform the other swap participants about it in real time.
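For illustration, here is roughly what the in-memory registry described above would contain after use case 1. The shape follows the `walletListenedProposals` map defined in the reference-level explanation below; the values are illustrative only:

```js
// Hypothetical in-memory state after `swap-website-wallet` registers `prop-1`.
// Layout: wallet-id -> proposal-id -> TxProposalConfig (see the reference-level section).
const walletListenedProposals = {
  'swap-website-wallet': {
    'prop-1': { id: 'prop-1', password: '123' },
  },
};
```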
24 | 25 | ### Example use case 2 26 | A user of the initialized wallet `swap-website-wallet` calls the `atomic-swap/tx-proposal` endpoint, passing to it, besides all the necessary parameters to initialize a proposal, the new parameters `service.is_new` and `service.password`. This will make the headless app also initialize this proposal on the service and add it to its local `listenedProposals`. 27 | 28 | This identifier and password can then be shared with the other interested participants, and future interactions can follow just as in example 1. 29 | 30 | # Reference-level explanation 31 | [reference-level-explanation]: #reference-level-explanation 32 | 33 | ### Listened proposals 34 | This integration with the swap service backend needs a way for the headless to store in memory the proposals that are currently being monitored by each of its initialized wallets. 35 | 36 | There are some considerations about this object: 37 | - An event listener will be created for each wallet to receive web-socket updates from the service 38 | - More than one initialized wallet can be listening to the same proposal 39 | - Wallets that did not register a proposal should not receive updates about it 40 | 41 | To achieve this, a `walletListenedProposals` map will be created on the `services/wallets.service` module. 42 | ```js 43 | /** 44 | * @typedef TxProposalConfig 45 | * @property {string} id Proposal identifier on the Atomic Swap Service 46 | * @property {string} password Password to access it 47 | */ 48 | 49 | /** 50 | * A map of all the proposals for a wallet. 51 | * The keys are the proposal ids 52 | * @typedef {Record<string, TxProposalConfig>} WalletListenedProposals 53 | */ 54 | 55 | /** 56 | * A map of the initialized wallets and their listened proposals. 57 | * The keys are the wallet-ids 58 | * @type {Record<string, WalletListenedProposals>} 59 | */ 60 | const walletListenedProposals = {}; 61 | ``` 62 | 63 | For simplicity, references to `listenedProposals` in this document refer to the listened proposals for a specific initialized wallet. 64 | 65 | ### Create endpoint 66 | The `[POST] /wallet/atomic-swap/tx-proposal` route will receive the new optional parameter `service`: 67 | ```ts 68 | { 69 | is_new: boolean, 70 | password: string, 71 | } 72 | ``` 73 | Upon a successful creation, a new property `createdProposalId` will be available on the response object, and its data will be registered on the `listenedProposals` accordingly. 74 | 75 | ### Register endpoint 76 | A new route `[POST] /wallet/atomic-swap/tx-proposal/register/${proposalId}` will be created, receiving as body parameters: 77 | ```ts 78 | { 79 | password: string, 80 | } 81 | ``` 82 | This will allow the wallet user to add any existing proposal to the wallet's `listenedProposals` and interact with it. 83 | 84 | To ensure the provided parameters are correct, a `get` request will also be made at this time to the service backend, and a validation executed on the received contents. Any failure found in the process will be thrown and the registration discarded. 85 | 86 | ### Listened Proposals endpoint 87 | A new route `[GET] /wallet/atomic-swap/tx-proposal/list` will be created, allowing the user to retrieve the list of `proposalIds` currently being listened to. No passwords should be returned by this call. 88 | 89 | ### Fetch endpoint 90 | A new route `[GET] /wallet/atomic-swap/tx-proposal/fetch/{proposalId}` will be created, allowing the user to poll the service backend for updates on a registered proposal.
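As a minimal sketch of how a client would drive the register and fetch endpoints above — the routes are the ones just defined, while the port and the `X-Wallet-Id` header convention are assumptions based on typical headless wallet usage:

```bash
# Register an existing proposal (identifier and password received out-of-band).
curl -X POST http://localhost:8000/wallet/atomic-swap/tx-proposal/register/prop-1 \
  -H "X-Wallet-Id: swap-website-wallet" \
  -H "Content-Type: application/json" \
  -d '{"password": "123"}'

# Poll the service backend for the latest state of the registered proposal.
curl http://localhost:8000/wallet/atomic-swap/tx-proposal/fetch/prop-1 \
  -H "X-Wallet-Id: swap-website-wallet"
```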
91 | 92 | Calls to this route for a `proposalId` that has not been registered will raise a `404` error. 93 | 94 | Should the backend return a `404` error, the proposal is considered obsolete and can be removed from the `listenedProposals` map. 95 | 96 | ### Update endpoint 97 | The [current workflow](https://docs.hathor.network/tutorials/headless-wallet/atomic-swaps#step-3-bob-updates-alices-partial-transaction-proposal) of updating proposals will be kept unaltered, requiring calls to the `[POST] /wallet/atomic-swap/tx-proposal` route. However, in order to persist the changes on the Atomic Swap Service, the additional body parameter `service.proposal_id` must also be provided in the request. 98 | 99 | Calling this route with a proposal identifier that has not been registered will raise a `400` error. Any errors raised while interacting with the service will also be handled and returned in the HTTP response. 100 | 101 | Should the backend return a `404` error, the proposal is considered obsolete and can be removed from the `listenedProposals` map. 102 | 103 | ### Unregister endpoint 104 | A new route `[DELETE] /wallet/atomic-swap/tx-proposal/delete/{proposalId}` will be created, allowing the user to stop listening to this proposal for the wallet specified on the request header. 105 | 106 | Also, should a wallet be stopped, all proposals it's currently listening to should also be removed, and its websocket connection closed. 107 | 108 | ### External notifications 109 | For every wallet with a registered proposal, a websocket connection will be opened with the Atomic Swap Service, informing the application about any changes that happen to its proposals. 110 | The event emitted by the lib will be `update-atomic-swap-proposal`, consistent with the [current event names](https://github.com/HathorNetwork/hathor-wallet-headless/blob/4fb465eb8420ea93dbcd43a6c091453b74dbfded/src/services/notification.service.js#L30). 111 | 112 | Whenever a message arrives through this lib channel, the corresponding wallet user will receive a notification through the _External Notifications_ feature containing the updated proposal's full contents. The external notification event name will be `wallet:update-atomic-swap-proposal`, and will be added to the [`WalletEventMap` property](https://github.com/HathorNetwork/hathor-wallet-headless/blob/4fb465eb8420ea93dbcd43a6c091453b74dbfded/src/services/notification.service.js#L14) on the notification service. 113 | 114 | # Rationale and alternatives 115 | [rationale-and-alternatives]: #rationale-and-alternatives 116 | 117 | ### Lean proposal objects 118 | In contrast to the desktop wallet, which keeps all the proposal data in memory, the headless keeps only the identifier, password and listeners. 119 | 120 | The current atomic swap headless workflow already expects the application not to keep any of the proposal data cached, so this would not present a difference in its usability. 121 | 122 | The alternative of keeping all the proposal's serialized `PartialTx` and `signatures` would not add any relevant ease of use while increasing the application's memory consumption. 123 | 124 | # Future possibilities 125 | [future-possibilities]: #future-possibilities 126 | 127 | ### Automatic discard of completed atomic swaps 128 | A feature can be implemented to identify when a fully signed atomic swap proposal mediated by the Atomic Swap Service has been turned into a transaction.
129 | 130 | This would involve adding functionality to the `services/notification.service` module, building on [the existing hook](https://github.com/HathorNetwork/hathor-wallet-headless/blob/4fb465eb8420ea93dbcd43a6c091453b74dbfded/src/services/notification.service.js#L63-L69) for the *HathorWallet* `new-tx` event. If all inputs and outputs are identical to one of the proposals being listened to (independently of the initialized wallet), it should be safe to assume the proposal can be removed from the `listenedProposals` map. 131 | -------------------------------------------------------------------------------- /projects/evm-compatible-bridge/evm_contract_design.md: 1 | - Feature Name: evm_compatible_bridge 2 | - Author: André Carneiro 3 | 4 | # Summary 5 | 6 | This document will describe the smart contract that will manage the bridge on the EVM chain. 7 | 8 | # Guide-level explanation 9 | 10 | ## Contracts 11 | 12 | To support the features defined in the [design](./design.md#interactions) we need multiple contracts: 13 | 14 | - Federation contract. 15 | - This contract will manage the federation and the voting process. 16 | - Bridge contract. 17 | - This contract will manage the crossing process. 18 | - Side token factory. 19 | - Allowed tokens registry. 20 | 21 | ### Bridge contract 22 | 23 | The bridge contract will require support to receive [ERC-777](https://eips.ethereum.org/EIPS/eip-777) tokens, and since this standard is backwards compatible with [ERC-20](https://eips.ethereum.org/EIPS/eip-20), we will also support it. 24 | 25 | Since we only support tokens from the ERC-777 and ERC-20 interfaces, we need a client for the [ERC-1820](https://eips.ethereum.org/EIPS/eip-1820) registry to check that a token is compatible with our bridge before starting any process to support it. 26 | 27 | The bridge contract will also be required to have some user and admin methods. 28 | 29 | #### User methods 30 | 31 | - `getTokenUid` and `getSideToken` 32 | - These methods will be used to get the Hathor token uid and the side token address for a given token. 33 | - `receiveTokens` and `tokensReceived` 34 | - These methods are required for the contract to receive tokens. 35 | - `sendTokens` 36 | - The user will provide the Hathor destination address along with the tokens to cross. 37 | - The method will be responsible for accepting the tokens on the bridge, validating the Hathor address provided, and calling `crossTokens` to initiate the process. 38 | - `crossTokens` 39 | - This method will be private and called when the user sends tokens to the bridge. 40 | - The bridge contract will check whether the token is supported and, if it is not, fail the transaction. 41 | - The bridge will register this request with the federation contract. 42 | - The bridge contract will emit an event to log the request. 43 | - The amount of tokens must be valid according to the granularity rules. 44 | 45 | #### Admin methods 46 | 47 | This contract is "ownable" and the admin will be able to make some operations, for instance: 48 | 49 | - Contract suspension (i.e. pausable contract) 50 | - In case of a security breach or legal obstruction we may have to halt operations. 51 | - Example implementation [here](https://github.com/OpenZeppelin/openzeppelin-contracts/blob/master/contracts/security/Pausable.sol) 52 | - Burn and mint tokens 53 | - The admin should be able to manage tokens created by the bridge. 54 | - The admin can use this to refund tokens or make manual corrections.
55 | - Configuration methods 56 | - The admin should be able to change the configuration of the contract; this includes changing the fee percentage, changing the federation contract address, etc. 57 | - Set token id 58 | - The admin should be able to correct and change the mapping of equivalent tokens. 59 | - This is to correct any mistakes or to add support for new tokens. 60 | 61 | #### Bridge contract storage 62 | 63 | ##### _Token equivalency_ 64 | 65 | The contract will require some data structures to be kept in persistent storage, mainly to track the mapping of Hathor tokens to EVM tokens and the crossing operations. 66 | 67 | The ERC-777 and ERC-20 tokens are identified by the contract address, and the Hathor token by its 32-byte token uid. 68 | There should be a map going both ways: a map from contract address to token uid and a map from token uid to contract address. 69 | 70 | ```solidity 71 | // This maps a local token address to a hathor token uid 72 | // This can be used to know which token in Hathor is equivalent to our native token. 73 | mapping (address => bytes32) localTokenToHathorUid; 74 | 75 | // This maps a Hathor token uid to a local token address. 76 | mapping (bytes32 => address) hathorUidToLocalToken; 77 | ``` 78 | 79 | The only special case is Hathor's native token (HTR), which has a token uid of `00` and is therefore not 32 bytes long. 80 | To use the same method we will map HTR to a 32-byte sequence of zeroes, i.e. 64 hex zeroes: `0000000000000000000000000000000000000000000000000000000000000000`. 81 | 82 | ### Federation contract 83 | 84 | A good example implementation can be found [here](https://github.com/onepercentio/tokenbridge/blob/master/bridge/contracts/Federation.sol). 85 | This contract is a very good example of a federation with all needed functionalities; it also includes an owner account for the federation that can change the members and the bridge being managed. 86 | 87 | The main method of the federation is `voteProposal`, which will be used to vote on a request. 88 | Requests can be used to unlock tokens to a user, or to burn or mint tokens. 89 | Each request will be saved with an id so it can be voted on and later checked for acceptance; the id will be a hash of the request data. 90 | 91 | The federation contract will gather votes for requests, and when a request has reached the required number of votes, it will execute the request by calling the necessary methods on the bridge contract. 92 | The request will be marked as executed and no more votes will be accepted. 93 | 94 | The federation contract should also be "pausable", where we suspend the `voteProposal` method but keep admin methods, e.g. add and remove members, change the managed bridge, etc. 95 | 96 | # Rationale and alternatives 97 | 98 | _Why use events for communication?_ 99 | 100 | An event (also called a log entry) is a cheaper way to store data on-chain. 101 | Storing a 32-byte word in contract storage costs 20,000 gas. 102 | In contrast, an example log with 2 topics and 200 bytes of data would cost about 3,325 gas — over six times more data for roughly six times less gas. 103 | The method to calculate log gas can be found [here](https://github.com/ethereum/go-ethereum/blob/8a24b563312a0ab0a808770e464c5598ab7e35ea/core/vm/gas_table.go#L220). 104 | 105 | Cost is not the only benefit: responding to events is easier to implement than having getter methods for all types of required data.
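To make this rationale concrete, here is a minimal sketch of logging a crossing request through an event rather than storage. The contract, event, and function names are illustrative assumptions, not the actual bridge interface:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract CrossingLog {
    // One indexed parameter plus the event signature yields 2 topics, as in the
    // gas comparison above; the remaining fields are ABI-encoded into the
    // much cheaper data section of the log.
    event CrossRequested(
        address indexed token,
        address sender,
        uint256 amount,
        string hathorAddress
    );

    // Hypothetical entry point: the federators read these logs off-chain,
    // so nothing about the request needs to live in contract storage.
    function _logCrossRequest(address token, uint256 amount, string memory hathorAddress) internal {
        emit CrossRequested(token, msg.sender, amount, hathorAddress);
    }
}
```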
106 | 107 | # Prior art 108 | 109 | [This bridge](https://github.com/onepercentio/tokenbridge) implements a bridge between 2 EVM compatible chains; the concepts implemented there are a good reference for security and best practices. 110 | -------------------------------------------------------------------------------- /projects/evm-compatible-bridge/federator.md: 1 | - Feature Name: evm_compatible_bridge 2 | - Author: André Carneiro 3 | 4 | # Summary 5 | 6 | This is the description of how the federator service will coordinate the events on both sides of the bridge. 7 | 8 | # Guide-level explanation 9 | 10 | The federator service will keep track of events on both sides of the bridge and vote on the actions to be taken. 11 | This will be done by watching events on the federation contract on the EVM chain and polling events from the coordinator service on Hathor. 12 | 13 | Since Hathor does not currently have a smart contract implementation, we will use a MultiSig address to check that a certain number of federators have consented to any transaction. 14 | To prevent separate instances of the MultiSig wallet from proposing conflicting transactions, we will use a coordinator service to choose the utxos and collect signatures from the federators. 15 | 16 | Each federator will have 2 loops, one to check events on the EVM compatible chain (i.e. EVM loop) and another to check events on the coordinator service (i.e. polling loop). 17 | The EVM loop will get the events from the chain; these include transaction requests from the EVM compatible chain to Hathor and transaction confirmations. 18 | The polling loop will check the coordinator service polling API and vote to perform the actions as requested. 19 | 20 | ## Federator service 21 | 22 | ### EVM loop 23 | 24 | We need to get events from the bridge contract, wait a pre-configured number of blocks (to increase the confirmation level of the transaction) and send the transaction on Hathor as requested. 25 | There should be 2 types of events: 26 | 27 | - A request to cross the token from the EVM chain to Hathor 28 | - If the token is native to the EVM chain, an equivalent Hathor token should already exist and the MultiSig wallet should own the authority utxos to it. 29 | - If the token is native to the Hathor chain, the equivalent token is already burned and we need to unlock the tokens on Hathor. 30 | - The loop will have to use the coordinator service to send the amount of tokens requested from the MultiSig wallet to the destination address. 31 | - Confirmation that an operation is complete 32 | - This way we can clean any state pertaining to this crossing of tokens and mark it as complete. 33 | - If the crossing originated in Hathor, we need to send a request to the coordinator service so it knows the crossing is over. 34 | 35 | ### Polling loop 36 | 37 | We will poll for events, which are requests for voting, and vote on them. 38 | If the event requires a vote on the smart contract, we will send a transaction to the EVM chain with the vote. 39 | If the event requires a vote on the MultiSig wallet, we will send a message to the coordinator service with a signature. 40 | 41 | The federator will be responsible for checking the data on the event; it will have its own headless wallet to check information from the Hathor network, and it can read data from the EVM chain directly.
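A minimal sketch of the polling loop in TypeScript is shown below. The `/events` payload shape follows the coordinator service API described later in this document (simplified here), while the helper functions are hypothetical placeholders:

```ts
// Simplified Event shape, following the coordinator service API below.
type Event = {
  eventOrigin: 'htr' | 'evm' | 'admin',
  type: 'mint' | 'melt' | 'send',
  proposalId: number,
  txHex: string,
};

// Hypothetical helpers — stubs for illustration only.
declare function inspectTxHex(txHex: string): Promise<boolean>;
declare function signProposal(url: string, proposalId: number, txHex: string): Promise<void>;

async function pollingLoop(coordinatorUrl: string): Promise<void> {
  while (true) {
    const response = await fetch(`${coordinatorUrl}/events`);
    const { events }: { events: Event[] } = await response.json();
    for (const event of events) {
      // Inspect the proposed Hathor transaction before signing
      // (see "Transaction inspection" below).
      if (await inspectTxHex(event.txHex)) {
        await signProposal(coordinatorUrl, event.proposalId, event.txHex);
      }
    }
    // Wait before the next poll; the interval is an arbitrary example.
    await new Promise((resolve) => setTimeout(resolve, 30_000));
  }
}
```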
42 | 43 | ### Transaction inspection 44 | 45 | Each federator will inspect the transactions before signing them; the checks they perform on each transaction will change depending on the intended action. 46 | 47 | Crossing tokens from Hathor to EVM (token native to EVM): 48 | 49 | - Check that the transaction is a melt operation. 50 | - Check the wallet balance has enough tokens to melt. 51 | - Check that the amount to melt is equal to the amount crossed. 52 | 53 | Crossing tokens from Hathor to EVM (token native to Hathor): 54 | 55 | - This operation does not require the federation to send a transaction in Hathor, so the validations will be done in the smart contract. 56 | 57 | Crossing tokens from EVM to Hathor (token native to EVM): 58 | 59 | - Check that the transaction is a mint operation. 60 | - Check the wallet balance has enough HTR to mint. 61 | - Check that the amount to mint is equal to the amount requested minus fees. 62 | 63 | Crossing tokens from EVM to Hathor (token native to Hathor): 64 | 65 | - Check that the token amount minus the fee is being sent to the destination address. 66 | - Check that the fee amount is sent to the admin address. 67 | 68 | ## Coordinator service 69 | 70 | The coordinator service will serve the function of the smart contract on the Hathor side. 71 | It will listen for transactions on the MultiSig wallet, keep a comprehensive persistent database, and gather signatures from the federation. 72 | It will be started with the MultiSig wallet config, but it will not be a participant of the wallet. 73 | 74 | During startup we need to check all existing transactions on the MultiSig address; if a transaction was already processed, we can safely ignore it. 75 | After the startup we can start to listen for transactions using the queue plugin, which enqueues events like new transactions made to one of our addresses. 76 | 77 | The coordinator service will provide a REST API for the federators and admins to interact with it. 78 | 79 | ### Processing Hathor transactions 80 | 81 | When the process first starts, it will iterate over all transactions in the MultiSig wallet and process them. 82 | After this, each new message will be enqueued and processed in order of arrival. 83 | Note: The headless wallet offers a message queue extension to enqueue events, e.g. new transactions. 84 | 85 | First we need to check that the transaction is compliant with the request format; if not, we can save it as processed and continue. 86 | If the transaction is already in the database, we can skip it; if not, we need to check the federation contract to see if it was already processed. 87 | Note: This read operation on the smart contract does not require a transaction to be sent; we can read the state from the EVM chain at zero cost. 88 | 89 | The request format for a transaction is very simple: 90 | 91 | - The first output should be a data output, the data should be the destination address on the EVM. 92 | - There should only be 1 token being sent to the MultiSig wallet. 93 | 94 | To check this format we will call the headless API to check the transaction data and validate based on the response. 95 | 96 | Since this transaction was made from Hathor, it will be saved as a Hathor request on the database with the following fields: 97 | 98 | - Hathor tx_id 99 | - With the tx_id, the federators can inspect the contents.
100 | - Hathor token uid 101 | - The equivalent EVM token can be found in the smart contract storage (it will be created if this is the first time we see this token) 102 | - Amount of tokens being sent to the MultiSig wallet 103 | - Destination address (on the EVM chain) 104 | - Height 105 | - Height of the first block that confirms the transaction. 106 | - Transactions in Hathor do not have a height, but we can use the `first_block_height` metadata. 107 | 108 | With this, we can safely wait until 5 blocks have passed on Hathor before creating the proposal. 109 | There is an event `best-block-update` which reports the current height of the network. 110 | This proposal will be available for the polling API as soon as it is created. 111 | 112 | ### Coordinator service API 113 | 114 | #### GET /events 115 | 116 | This is the polling API. An event is a request to collect signatures; there are events to collect signatures in the MultiSig wallet and events to collect signatures on the federation contract. 117 | 118 | Both kinds of events will be returned by this endpoint; the federator will be responsible for making the appropriate calls to the coordinator service or the federation contract. 119 | 120 | The response will contain a list of events and pagination data. 121 | 122 | ```ts 123 | type Event = { 124 | eventOrigin: 'htr' | 'evm' | 'admin', 125 | type: 'mint' | 'melt' | 'send', // This indicates the action on the Hathor MultiSig wallet 126 | proposalId: number, 127 | txHex: string, 128 | // The event will have 1 of the following fields 129 | evmRequestId?: number, 130 | htrRequestId?: number, 131 | adminRequestId?: number, 132 | }; 133 | 134 | type ApiResponse = { 135 | events: Event[], 136 | hasMore: boolean, 137 | }; 138 | ``` 139 | 140 | #### POST /sign/{proposal_id} 141 | 142 | The proposal id refers to a Hathor tx proposal that the federator has signed; this API will be used to collect the signatures. 143 | 144 | Once enough signatures are collected, the transaction will be sent to the network. 145 | 146 | The endpoint expects the following fields: 147 | 148 | ```ts 149 | type RequestBody = { 150 | signature: string, 151 | publicKey: string, 152 | }; 153 | ``` 154 | 155 | The public key is required so we can count how many distinct signatures were collected. 156 | 157 | #### POST /request/evm 158 | 159 | The federators will call this endpoint when an event is received in the EVM loop. 160 | The coordinator will check the smart contract to validate the information received by the federation. 161 | 162 | The endpoint expects the following fields: 163 | 164 | ```ts 165 | type RequestBody = { 166 | blockHash: string, 167 | txHash: string, 168 | evmTokenAddr: address, 169 | amount: string, 170 | destinationAddress: string, 171 | }; 172 | ``` 173 | 174 | #### POST /fulfill/htr/{htr_request_id} 175 | 176 | When a request is fulfilled, the federator will call this endpoint to inform the API that the request was fulfilled. 177 | 178 | Hathor requests can only end when the tokens are sent or minted on the EVM chain. 179 | This means this endpoint will be called when the bridge contract has sent the event to confirm that the tokens were minted or unlocked.
180 | 181 | ```ts 182 | type RequestBody = { 183 | // Data to identify the transaction that fulfilled the request 184 | blockHash: string, 185 | txHash: string, 186 | logIndex: number, // Of the event that confirmed the fulfillment 187 | }; 188 | ``` 189 | 190 | #### POST /fulfill/admin/{admin_request_id} 191 | 192 | Admin requests are fulfilled when a transaction is sent on the Hathor network, and this API should be used to mark the request as fulfilled. 193 | 194 | ```ts 195 | type RequestBody = { 196 | txHash: string, 197 | htrTokenUid: string, 198 | amount: string, 199 | destinationAddress: string, 200 | }; 201 | ``` 202 | 203 | #### Admin requests 204 | 205 | The admin should be able to request a transaction from the MultiSig wallet to any user address. 206 | This is to fix any mistakes that might happen, like refunding a transaction. 207 | 208 | - `POST /request/admin/refund` 209 | - Requires the transaction id of the user request. 210 | 211 | #### Coordinator fulfilling requests 212 | 213 | When a request is fulfilled by a transaction being sent on Hathor, the coordinator will be aware of the fulfillment, since it is the one sending the transaction. 214 | This means that when the confirmation of the request is received by the coordinator, it will be responsible for marking the request as fulfilled. 215 | 216 | ### Database 217 | 218 | #### Tables: Hathor requests 219 | 220 | | id | tx_id | token_uid | amount | destination_address | height | fulfilled | 221 | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | 222 | | Integer | String(64) | String(64) | BigInt | String | Integer | Boolean | 223 | 224 | #### Tables: EVM requests 225 | 226 | | id | block_hash | tx_hash | log_index | evm_token_address | amount | destination_address | height | fulfilled | 227 | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | 228 | | Integer | String(64) | String(64) | Integer | String(42) | BigInt | String | Integer | Boolean | 229 | 230 | #### Tables: Hathor tx proposals 231 | 232 | | id | type | tx_hex | fulfilled | admin_request_id | evm_request_id | 233 | | :---: | :---: | :---: | :---: | :---: | :---: | 234 | | Integer | String | String | Boolean | Integer | Integer | 235 | 236 | 237 | Note: the `hathor_request_id` is not present in this table because a transaction originated in Hathor cannot request a transaction to be made on the Hathor network. 238 | 239 | #### Tables: EVM tx proposals 240 | 241 | | id | evm_request_id | fulfilled | 242 | | :---: | :---: | :---: | 243 | | Integer | Integer | Boolean | 244 | 245 | #### Tables: Signatures 246 | 247 | | id | proposal_id | signature | public_key | 248 | | :---: | :---: | :---: | :---: | 249 | | Integer | Integer | String | String | 250 | 251 | #### Tables: Admin Refund Requests 252 | 253 | | id | tx_id | 254 | | :---: | :---: | 255 | | Integer | String(64) | 256 | 257 | 258 | ### Supporting new tokens 259 | 260 | #### Hathor native token 261 | 262 | The admin should make a transaction on the EVM contract passing the Hathor token data to create the new side token and give the bridge address the permission to mint and melt this new token. 263 | 264 | This new side token will have an address; this address will be entered in the mapping along with the token uid, making this token valid for use. 265 | Then the admin should add this token to the allowed tokens registry; this will allow the bridge to process transactions with it.
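As a rough illustration of this admin flow — the contract method names (`createSideToken`, `addAllowedToken`) and ABI fragments are hypothetical placeholders, not the actual bridge ABI:

```ts
import { ethers } from 'ethers';

// Hypothetical admin script for supporting a Hathor-native token.
const abi = [
  'function createSideToken(bytes32 hathorTokenUid, string name, string symbol) returns (address)',
  'function addAllowedToken(address token)',
];

async function supportHathorToken(bridgeAddress: string, hathorTokenUid: string, signer: ethers.Signer) {
  const bridge = new ethers.Contract(bridgeAddress, abi, signer);
  // Create the side token; the bridge keeps mint/melt permission over it.
  const tx = await bridge.createSideToken(`0x${hathorTokenUid}`, 'MyToken', 'MTK');
  await tx.wait();
  // ...read the new side token address from the emitted event, then allow it:
  // await bridge.addAllowedToken(sideTokenAddress);
}
```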
266 | 267 | #### EVM native tokens 268 | 269 | The admin should create the equivalent token on Hathor, melt all available tokens then send the authorities to the federation MultiSig address. 270 | This will make the federation capable of minting and melting the token but the token will only be allowed when its information is added to the allowed tokens contract by the admin. 271 | 272 | -------------------------------------------------------------------------------- /projects/feature-activation/0002-bit-signaling.md: -------------------------------------------------------------------------------- 1 | - Feature Name: Feature Activation Bit Signaling 2 | - Start Date: 2023-06-26 3 | - Author: Gabriel Levcovitz <> 4 | 5 | # Table of Contents 6 | 7 | # Summary 8 | [summary]: #summary 9 | 10 | Given the [Feature Activation for Blocks RFC](./0001-feature-activation-for-blocks.md), a mechanism is provided to deploy new features in the Hathor network conditionally on some criteria. Part of the defined criteria is the ability for miners to signal their support for a new feature by setting signal bits on a mined block. The former RFC describes how this bit is calculated, interpreted, etc., but details on how miners would actually set those bits were left open. This RFC addresses this requirement by defining a user-friendly way for miners to configure their support for a feature, or the lack thereof. 11 | 12 | # Motivation 13 | [motivation]: #motivation 14 | 15 | In the current implementation, the Feature Activation process can be used but would require manual manipulation of block bits by miners to set support for features undergoing the process. Instead, we should provide a user-friendly way through our full node CLI interface so miners can easily start their mining full node with the desired feature support configuration. That configuration will then be reflected on mining block templates, so the templates carry the corresponding signal bits for all features. 16 | 17 | # Guide-level explanation 18 | [Guide-level explanation]: #guide-level-explanation 19 | 20 | Users running the full node, or specifically miners in this case, should be able to pass two new options to the `run_node` CLI interface: 21 | 22 | 1. `--signal-support [feature]`, and 23 | 2. `--signal-not-support [feature]` 24 | 25 | Here are rule definitions for these options: 26 | 27 | 1. Both options receive the name of some feature, as defined in the existing `Feature` enum; 28 | 2. Both options can be used multiple times to either enable or disable support for multiple features; 29 | 3. A feature cannot be set in both options at the same time, as this would be a conflicting configuration; 30 | 4. A default bit signal value (either enabled or disabled) should be provided for each feature; 31 | 5. If a feature is set in any of these options while it's not undergoing a signaling phase of its Feature Activation process, the full node should emit a warning. 
32 | 33 | Here's an example of running the full node with those options, enabling support for features 2 and 3, and disabling support for feature 1: 34 | 35 | ```bash 36 | poetry run hathor-cli run_node --status 8080 --testnet --memory-storage --signal-not-support NOP_FEATURE_1 --signal-support NOP_FEATURE_2 --signal-support NOP_FEATURE_3 37 | ``` 38 | 39 | And here's the equivalent example using env vars instead of the CLI options directly: 40 | 41 | ```bash 42 | HATHOR_SIGNAL_SUPPORT=[NOP_FEATURE_2,NOP_FEATURE_3] HATHOR_SIGNAL_NOT_SUPPORT=NOP_FEATURE_1 poetry run hathor-cli run_node --status 8080 --testnet --memory-storage 43 | ``` 44 | 45 | Let's assume all features are disabled by default, and that `NOP_FEATURE_1` is configured to use `bit=0`, `NOP_FEATURE_2` is configured to use `bit=1`, and `NOP_FEATURE_3` is configured to use `bit=2`, where `bit=0` is the LSB, as explained in the previous RFC. 46 | 47 | Then, the signal bits for this full node run would be `110`, meaning that those bits would be present in all block templates generated for mining, while those features are in their respective voting period. After that, the bits are reset to 0. 48 | 49 | The full node should log the outcome for each feature, indicating whether its support is enabled or disabled, and whether the reason was the default value or the user setting. 50 | 51 | # Reference-level explanation 52 | [Reference-level explanation]: #reference-level-explanation 53 | 54 | ## Bit Signaling Service 55 | 56 | A new `BitSignalingService` class should be implemented, providing a single public method: `get_signal_bits()`, that returns an `int`. This class and method's only purpose is to calculate the binary number that represents the signal bits for all block templates generated during this full node run, as configured by the respective CLI options and feature configuration. 57 | 58 | That class should also perform validations on the configuration, as specified in the list in the [Guide-level explanation] section. The full node must not start if those validations fail. 59 | 60 | ## New signal_support_by_default configuration 61 | 62 | A new attribute called `signal_support_by_default` should be implemented in the `Criteria` class. It is a boolean that represents whether a feature should be supported by default or not, according to its configuration via `HathorSettings`. Its default value should be `False`. 63 | 64 | That is, assuming `signal_support_by_default=True` for `NOP_FEATURE_1`, and no CLI option is provided for that feature, then that feature is going to be signaled as supported in block templates. If some CLI option is set for that feature, then it takes precedence over the `signal_support_by_default` attribute. 65 | 66 | ## Hathor Manager and mining APIs 67 | 68 | There are changes to be made on `HathorManager` and in mining APIs, be it over WebSocket or HTTP. The manager should call the `BitSignalingService.get_signal_bits()` method and generate template blocks accordingly. All mining APIs eventually call `HathorManager._make_block_template()`, so we must inject the signal bits in this method. 69 | 70 | # Proof of Concept 71 | 72 | A POC is provided for this RFC, as a [pull request on hathor-core](https://github.com/HathorNetwork/hathor-core/pull/688). 
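To make the signal-bit calculation above concrete, here is a minimal Python sketch of the logic `BitSignalingService.get_signal_bits()` needs to implement. The data structures are simplified assumptions for illustration, not hathor-core's actual types:

```python
def get_signal_bits(features, support, not_support):
    """Compute the signal bits as an int.

    `features` maps a feature name to (bit position, signal_support_by_default);
    `support` and `not_support` are the feature-name sets passed via the CLI options,
    which take precedence over the default.
    """
    signal_bits = 0
    for name, (bit, default) in features.items():
        enabled = default
        if name in support:
            enabled = True
        if name in not_support:
            enabled = False
        if enabled:
            signal_bits |= 1 << bit
    return signal_bits

# Mirrors the CLI example above (bit 0 is the LSB):
features = {
    'NOP_FEATURE_1': (0, False),
    'NOP_FEATURE_2': (1, False),
    'NOP_FEATURE_3': (2, False),
}
assert get_signal_bits(features, {'NOP_FEATURE_2', 'NOP_FEATURE_3'}, {'NOP_FEATURE_1'}) == 0b110
```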
73 |
--------------------------------------------------------------------------------
/projects/feature-activation/0003-explorer-uis.md:
--------------------------------------------------------------------------------
1 | - Feature Name: Feature Activation Explorer UIs
2 | - Start Date: 2023-07-03
3 | - Author: Gabriel Levcovitz <>
4 |
5 | # Summary
6 | [summary]: #summary
7 |
8 | Given the [Feature Activation for Blocks RFC](./0001-feature-activation-for-blocks.md), a mechanism is provided to deploy new features in the Hathor network conditionally on some criteria. That RFC defined two new Explorer user interfaces: a new page containing a table of all features in the Feature Activation process and their metadata, and a new panel in the Block interface displaying the signal bits and respective features for that block. This RFC details those interfaces.
9 |
10 | # Motivation
11 | [motivation]: #motivation
12 |
13 | There should be an easy way for users to follow the Feature Activation process and each feature's current state. Using the Explorer is ideal for this.
14 |
15 | # Guide-level explanation
16 | [Guide-level explanation]: #guide-level-explanation
17 |
18 | This section defines what the interfaces will contain and provides a simple mockup for them.
19 |
20 | ## New Feature Activation interface
21 |
22 | There must be a new option in the Tools dropdown, at the Explorer header. When clicked, it will open a new page specific to Feature Activation information. It will include a table as described in [this section](https://github.com/HathorNetwork/rfcs/blob/master/projects/feature-activation/0001-feature-activation-for-blocks.md#explorer-user-interface) of the original RFC, containing all Criteria for each feature, and also its current state and acceptance percentage.
23 |
24 | Here are some more details, not shown in the mockup below:
25 |
26 | 1. There must be an indication on the block used to retrieve the "current" information, that is, the best block when the request was made. Its height should be displayed and there should be a link to its page.
27 | 2. The table should be sorted showing the latest features first, so we can see the new information first. That is, it should be sorted by descending `start_height`.
28 | 3. The table should be paginated to show at most 10 items.
29 | 4. Add a `?` icon to each column title that displays the column description when hovered.
30 |
31 | ### Mockup
32 |
33 | ![feature_activation.png](0003-images%2Ffeature_activation.png)
34 |
35 | ## New panel in Block interface
36 |
37 | There must be a new panel in the Block interface containing the Feature Activation information for that block, that is, a table with the block's signal value for each feature, the corresponding bit and feature name, and the feature state at that point.
38 |
39 | Here are some more details, not shown in the mockup below:
40 |
41 | 1. The signal column should display a checkmark icon for enabled features, instead of `1`. It should be blank for disabled features.
42 |
43 | ### Mockup
44 |
45 | ![block.png](0003-images%2Fblock.png)
46 |
47 | # Reference-level explanation
48 | [Reference-level explanation]: #reference-level-explanation
49 |
50 | ## New Feature Activation interface
51 |
52 | ### Backend
53 |
54 | The API described in [this section](https://github.com/HathorNetwork/rfcs/blob/master/projects/feature-activation/0001-feature-activation-for-blocks.md#rest-api) of the original RFC is enough to power this interface and is already implemented.
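For illustration, the frontend's data fetching could look like the sketch below. The endpoint path and response fields are assumptions based on the original RFC's REST API section, not a confirmed contract:

```python
import requests

# Hypothetical path for the Feature Activation REST API described in the original RFC.
FEATURE_API = 'https://node1.mainnet.hathor.network/v1a/feature'

payload = requests.get(FEATURE_API, timeout=10).json()

# Detail 2: sort by descending start_height; detail 3: paginate 10 items per page.
features = sorted(payload['features'], key=lambda f: f['start_height'], reverse=True)
first_page = features[:10]
```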
55 |
56 | ### Frontend
57 |
58 | A new page will be created, analogous to the [Tokens Explorer interface](https://explorer.hathor.network/tokens), leveraging its table component.
59 |
60 | ## New panel in Block interface
61 |
62 | ### Backend
63 |
64 | The [existing](https://github.com/HathorNetwork/rfcs/blob/master/projects/feature-activation/0001-feature-activation-for-blocks.md#feature-service) `FeatureService.get_bits_description()` method will be used to power this panel, although some manipulation will be necessary to convert it to this schema:
65 |
66 | ```json
67 | {
68 |   "signal_bits": [
69 |     { "bit": 0, "signal": 1, "feature": "MY_NEW_FEATURE_1", "state": "STARTED" },
70 |     { "bit": 1, "signal": 0, "feature": null, "state": null },
71 |     { "bit": 2, "signal": 1, "feature": "MY_NEW_FEATURE_2", "state": "MUST_SIGNAL" },
72 |     { "bit": 3, "signal": 0, "feature": "MY_NEW_FEATURE_3", "state": "STARTED" }
73 |   ]
74 | }
75 | ```
76 |
77 | A new endpoint must be provided to return this information for a specific block.
78 |
79 | ### Frontend
80 |
81 | It should be straightforward to implement, creating a new panel analogous to the existing ones, like the Funds neighbors, and also leveraging the Tokens table component. The request to the endpoint should be made lazily, when the user clicks on "Click to show".
82 |
--------------------------------------------------------------------------------
/projects/feature-activation/0003-images/block.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/feature-activation/0003-images/block.png
--------------------------------------------------------------------------------
/projects/feature-activation/0003-images/feature_activation.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/feature-activation/0003-images/feature_activation.png
--------------------------------------------------------------------------------
/projects/feature-activation/0004-phased-testing.md:
--------------------------------------------------------------------------------
1 | - Feature Name: Feature Activation Phased Testing
2 | - Start Date: 2023-08-01
3 | - Initial Document: [Feature Activation](https://docs.google.com/document/d/1IiFTVW1wH6ztSP_MnObYIucinOYd-MsJThprmsCIdDE/edit)
4 | - Author: Gabriel Levcovitz <>
5 |
6 | # Summary
7 | [summary]: #summary
8 |
9 | This RFC describes a phased procedure for testing the Feature Activation project.
10 |
11 | # Motivation
12 | [motivation]: #motivation
13 |
14 | After all changes introduced by the previous Feature Activation RFCs are implemented, the mechanism should be tested with some "dull" features in both the testnet and mainnet environments. Such features do not change any full node behavior and are used exclusively to test the Feature Activation mechanism itself. Only then will we be able to use it with confidence on mainnet to change real features, like updating the maximum merkle path length.
15 |
16 | # Guide-level explanation
17 | [Guide-level explanation]: #guide-level-explanation
18 |
19 | The testing will consist of 3 phases:
20 |
21 | 1. NOP features in testnet
22 | 2. NOP features in mainnet
23 | 3. Maximum merkle path length update
24 |
25 | Phase 1 will happen independently, and then Phase 2 and Phase 3 can happen concurrently.
Phase 3 will be specified in a separate RFC.
26 |
27 | The NOP features used in Phases 1 and 2 are simply defined in the respective network's settings file, but their activation has no real effect in any computation of the full node, that is, no forks are created no matter whether those features are activated or not. Instead, their states will simply be logged by the full node. After Phases 1 and 2 are complete, Phase 3 will introduce a real change that creates a fork between nodes that recognize its feature activation, and nodes that do not.
28 |
29 | # Reference-level explanation
30 | [Reference-level explanation]: #reference-level-explanation
31 |
32 | In this section, technical details are expanded for what was described above.
33 |
34 | ## Phase 1
35 |
36 | In this phase, 3 NOP features are defined for testnet and each one of them undergoes its own feature activation process. Then, we validate that their states progress as expected, both by inspecting their values through the Explorer interfaces, and by inspecting logs via AWS CloudWatch queries.
37 |
38 | ### NOP features configuration
39 |
40 | We'll be using 3 NOP features, defined in `testnet.yml`. We'll test different Feature Activation behavior by configuring different settings for each feature:
41 |
42 | 1. `NOP_FEATURE_1` is expected to be signaled and activated through bit signaling. Therefore, its `signal_support_by_default` will be set to `True`, so testnet miners (the ones used in `tx-mining-service`) will receive block templates with their signal bit enabled. It'll also have a `minimum_activation_height`.
43 | 2. `NOP_FEATURE_2` is expected to NOT be activated through bit signaling. Instead, it'll have `lock_in_on_timeout=True` and will be "forcefully" activated.
44 | 3. `NOP_FEATURE_3` is expected to FAIL. It'll have `signal_support_by_default=False` and `lock_in_on_timeout=False`.
45 |
46 | By doing this, we'll test different paths of the Feature Activation workflow. To define the configuration, we'll first choose some block height `N`, that is, the block height where Phase 1 will start. It can only be chosen when we actually implement this RFC. All other heights are relative to `N`. Here's the reference configuration:
47 |
48 | ```yml
49 | # testnet.yml
50 |
51 | FEATURE_ACTIVATION:
52 |   default_threshold: 30240 # 30240 = 75% of evaluation_interval (40320)
53 |   features:
54 |     NOP_FEATURE_1:
55 |       bit: 0
56 |       start_height:
57 |       timeout_height: # 4 weeks after the start
58 |       minimum_activation_height: # 6 weeks after the start
59 |       lock_in_on_timeout: false
60 |       version:
61 |       signal_support_by_default: true
62 |
63 |     NOP_FEATURE_2:
64 |       bit: 1
65 |       start_height:
66 |       timeout_height: # 4 weeks after the start
67 |       minimum_activation_height: 0
68 |       lock_in_on_timeout: true
69 |       version:
70 |       signal_support_by_default: false
71 |
72 |     NOP_FEATURE_3:
73 |       bit: 2
74 |       start_height:
75 |       timeout_height: # 4 weeks after the start
76 |       minimum_activation_height: 0
77 |       lock_in_on_timeout: false
78 |       version:
79 |       signal_support_by_default: false
80 | ```
81 |
82 | And below are the expected states for each feature at each height, according to the [state transitions](https://github.com/HathorNetwork/rfcs/blob/master/projects/feature-activation/0001-feature-activation-for-blocks.md#state-transitions) workflow defined in the original RFC. Note that `40320` is the evaluation interval, which is equivalent to 2 weeks.
83 |
84 | #### `NOP_FEATURE_1`
85 |
86 | 1. It begins as `DEFINED`, up to block `N - 1`.
87 | 2.
It transitions to `STARTED` on block `N`, for reaching its `start_height`.
88 | 3. After 2 weeks, it gets to `LOCKED_IN` on block `N + 40320`, for reaching its bit signaling `threshold` before the `timeout_height`.
89 | 4. After 2 weeks, it continues `LOCKED_IN` on block `N + 2 * 40320`, as its `minimum_activation_height` has not yet been reached.
90 | 5. After 2 weeks, it gets to `ACTIVE` on block `N + 3 * 40320`, as its `minimum_activation_height` is reached.
91 |
92 | Therefore, `NOP_FEATURE_1`'s process takes 6 weeks.
93 |
94 | #### `NOP_FEATURE_2`
95 |
96 | 1. It begins as `DEFINED`, up to block `N - 1`.
97 | 2. It transitions to `STARTED` on block `N`, for reaching its `start_height`.
98 | 3. After 2 weeks, it gets to `MUST_SIGNAL` on block `N + 40320`, for its `lock_in_on_timeout` configuration.
99 | 4. After 2 weeks, it gets to `LOCKED_IN` on block `N + 2 * 40320`.
100 | 5. After 2 weeks, it gets to `ACTIVE` on block `N + 3 * 40320`.
101 |
102 | Therefore, `NOP_FEATURE_2`'s process takes 6 weeks.
103 |
104 | #### `NOP_FEATURE_3`
105 |
106 | 1. It begins as `DEFINED`, up to block `N - 1`.
107 | 2. It transitions to `STARTED` on block `N`, for reaching its `start_height`.
108 | 3. After 2 weeks, it continues on `STARTED` on block `N + 40320`, as its `timeout_height` has not been reached yet.
109 | 4. After 2 weeks, it gets to `FAILED` on block `N + 2 * 40320`, as its `timeout_height` is reached and `lock_in_on_timeout` is `false`.
110 |
111 | Therefore, `NOP_FEATURE_3`'s process takes 4 weeks.
112 |
113 | ### Validation via the Hathor Explorer
114 |
115 | By manually inspecting blocks via the Hathor Explorer interfaces described in the [Feature Activation Explorer UIs RFC](https://github.com/HathorNetwork/rfcs/blob/master/projects/feature-activation/0003-explorer-uis.md), we'll be able to validate the expected state for each block-feature pair according to the lists described above.
116 |
117 | ### Validation via logging
118 |
119 | Complementing the validation via Explorer, we'll also have validation via full node logs, which will be useful to perform more complex queries to understand state transitions, especially if debugging is necessary. By using structured logging of feature states for each block, we'll be able to leverage AWS CloudWatch queries to perform analysis and even export metrics.
120 |
121 | After a new block is received and processed by the full node, we'll log its height, together with its state for each NOP feature. At the end of the `HathorManager.tx_fully_validated()` method, we'll call a new `HathorManager.log_feature_states()` method, defined as follows:
122 |
123 | ```python
124 | def log_feature_states(self, vertex: BaseTransaction) -> None:
125 |     if not isinstance(vertex, Block):
126 |         return
127 |
128 |     feature_descriptions = self.feature_service.get_bits_description(block=vertex)
129 |     state_by_feature = {
130 |         feature.value: description.state.value
131 |         for feature, description in feature_descriptions.items()
132 |     }
133 |
134 |     self.log.info(
135 |         'New block accepted with feature activation states',
136 |         block_height=vertex.get_height(),
137 |         features_states=state_by_feature
138 |     )
139 | ```
140 |
141 | This logging will be removed after Phase 1 is complete.
142 |
143 | ### Phase 1b
144 |
145 | After the initial part of Phase 1 is complete, Phase 1b will start. In this phase, we'll force a reorg on testnet to undo the activation of `NOP_FEATURE_1`.
We'll have to disable support for said feature in our miners and use a higher hash rate to generate more blocks than the original best chain, using some block before the 75% threshold is reached as the common block. After the procedure is complete, we must observe that `NOP_FEATURE_1` transitions back to `FAILED` instead of `ACTIVE`.
146 |
147 | ## Phase 2
148 |
149 | Phase 2 can start as soon as Phase 1 ends, and it is mostly analogous to Phase 1, with a few exceptions:
150 |
151 | - The NOP features will be configured on `mainnet.yml` instead of `testnet.yml`.
152 | - `N` will be different.
153 | - We won't test `NOP_FEATURE_2`, that is, we won't test `lock_in_on_timeout=True` on mainnet, as we have no easy way of coordinating bit signals with miners during the `MUST_SIGNAL` phase.
154 |
155 | ## Phase 3
156 |
157 | Phase 3 can start after Phase 2 ends. It'll define a new `INCREASE_MAX_MERKLE_PATH_LENGTH` feature that will actually affect how the full node verifies merged mining blocks.
158 |
159 | Its details will be described in a separate RFC, specifying the process for both testnet and mainnet, including the new value for the maximum merkle path length. Just for completeness, below we provide a reference for the update in testnet.
160 |
161 | Currently, there's a `MAX_MERKLE_PATH_LENGTH: int = 12` constant in `aux_pow.py`. This constant will be replaced by a function call to `get_max_merkle_path_length()`, with its reference implementation as follows:
162 |
163 | ```python
164 | def get_max_merkle_path_length(feature_service: FeatureService, block: Block) -> int:
165 |     if feature_service.is_feature_active(
166 |         block=block,
167 |         feature=Feature.INCREASE_MAX_MERKLE_PATH_LENGTH
168 |     ):
169 |         return NEW_MAX_MERKLE_PATH_LENGTH
170 |
171 |     return OLD_MAX_MERKLE_PATH_LENGTH
172 | ```
173 |
174 | Where this function is actually defined will be left as an implementation detail. Let `M` be a block height for the beginning of Phase 3. Then, its reference configuration is:
175 |
176 | ```yml
177 | # testnet.yml
178 |
179 | FEATURE_ACTIVATION:
180 |   default_threshold: 30240 # 30240 = 75% of evaluation_interval (40320)
181 |   features:
182 |     INCREASE_MAX_MERKLE_PATH_LENGTH:
183 |       bit: 3
184 |       start_height:
185 |       timeout_height:
186 |       minimum_activation_height: 0
187 |       lock_in_on_timeout: false
188 |       version:
189 |       signal_support_by_default: true
190 | ```
191 |
192 | And it's expected to reach its threshold and become active in 4 weeks (2 in `STARTED`, 2 in `LOCKED_IN`). We'll use the same validation mechanisms described in Phase 1 to make sure the feature states transition as expected. Also, we should configure a merged miner to mine blocks with a higher merkle path length, making sure its blocks are invalidated before the feature becomes `ACTIVE`, and validated after it does.
193 |
194 | ## Conclusion
195 |
196 | As described above, both Phase 1 and Phase 2 take 6 weeks. This makes for a total Phased Testing duration of 12 weeks. If this duration is deemed too long, we can actually shift Phase 2 earlier, as it does not conflict with Phase 1 in any way. Another option to make it even faster is decreasing the `EVALUATION_INTERVAL`, for example from 2 weeks to 1 week, only in testnet.
197 |
198 | After all phases are completed, we'll have enough confidence in the process's stability to use it to increase the maximum merkle path length on mainnet.
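As a sanity check on the schedule arithmetic used above (assuming Hathor's 30-second average block time, which is what makes `40320` blocks equal 2 weeks):

```python
AVG_TIME_BETWEEN_BLOCKS = 30   # seconds; Hathor's target block time
EVALUATION_INTERVAL = 40320    # blocks

SECONDS_PER_WEEK = 7 * 24 * 3600
weeks_per_interval = EVALUATION_INTERVAL * AVG_TIME_BETWEEN_BLOCKS / SECONDS_PER_WEEK
assert weeks_per_interval == 2.0

# NOP_FEATURE_1 and NOP_FEATURE_2 become ACTIVE at N + 3 * 40320 -> 6 weeks;
# NOP_FEATURE_3 becomes FAILED at N + 2 * 40320 -> 4 weeks.
assert 3 * weeks_per_interval == 6.0
assert 2 * weeks_per_interval == 4.0
```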
199 | 200 | # Task breakdown 201 | 202 | | Task | Dev days | 203 | |--------------------------------------------------------------------------|----------| 204 | | [Phase 1] Release a new hathor-core version with NOP features on testnet | 0.5 | 205 | | [Phase 1] Validate feature states during the following weeks | 0.5 | 206 | | [Phase 1b] Force a reorg on testnet | 1 | 207 | | [Phase 2] Release a new hathor-core version with NOP features on mainnet | 0.5 | 208 | | [Phase 2] Validate feature states during the following weeks | 0.5 | 209 | | **Total** | **3** | 210 | -------------------------------------------------------------------------------- /projects/feature-activation/0005-miner-update-request.md: -------------------------------------------------------------------------------- 1 | # Hathor Integration Update Request 2 | 3 | ## Introduction 4 | 5 | We’re currently in the testing phase of a project called Feature Activation, which is a mechanism used to activate new consensus-agreed features on Hathor Network. In the future, it will be used to release important features such as Nano Contracts and increasing the maximum Merkle Path length, which will allow miners to find more valid merge mined blocks. Here’s a [blog post](https://blog.hathor.network/hathors-feature-activation-process-miners-lead-the-way-d7f43b12978d) about it. 6 | 7 | In order for that to work, we need miners to propagate a support signal for features through their mined blocks. This is simply a repurposed byte in the block data. We’ll soon have a complete guide on how to interact with feature signals, but everything should work by default and transparently. However, at this time, we kindly request your cooperation. 8 | 9 | It’s likely that you need to update your integration with Hathor to correctly propagate those signals. The block template that you currently use has a new field called `signal_bits`. This field is an integer and represents a single byte. You simply have to set this as the first byte in the Hathor blocks you generate, and everything should work. Before, the `version` field occupied the first 2 bytes. Now, the `signal_bits` is the first byte, and the `version` field is the second byte. 10 | 11 | We of course are available for any questions or help that you might need. Thank you in advance! 12 | 13 | ## Guide: Updating the Integration 14 | 15 | ### Summary 16 | 17 | The first byte of a Hathor block was reserved for future use and ignored. Now, this byte is being used for bit signaling of Feature Activation, starting from [hathor-core v0.59.0](https://github.com/HathorNetwork/hathor-core/releases/tag/v0.59.0) on mainnet. In order to support this new mechanism, a simple change must be made in miner integration with Hathor. This is a one-time change and after it, everything 18 | will work automatically. 19 | 20 | #### What is Feature Activation? 21 | 22 | Feature Activation is a mechanism for upgrading consensus rules on Hathor Network while preventing forks from happening. It's inspired by [Bitcoin's BIP 8](https://github.com/bitcoin/bips/blob/master/bip-0008.mediawiki). Read the [blog post](https://blog.hathor.network/hathors-feature-activation-process-miners-lead-the-way-d7f43b12978d) for a general overview, or the [RFC](https://github.com/HathorNetwork/rfcs/blob/master/projects/feature-activation/0001-feature-activation-for-blocks.md) for internal technical details (out of the scope for this guide). 23 | 24 | #### What is Bit Signaling? 
25 |
26 | In order to activate an upgrade in the network, a supermajority of miners must signal that their full node is up-to-date and ready to accept the change. This is signaled through bits in the first byte of mined blocks, which was previously unused. Miners will have the flexibility to decide whether they want to support a new change or not, but by default no action is necessary and everything will work automatically. Internal technical details can be found in the [Feature Activation RFC](https://github.com/HathorNetwork/rfcs/blob/master/projects/feature-activation/0001-feature-activation-for-blocks.md#bit-signaling-implementation) and the [Bit Signaling RFC](https://github.com/HathorNetwork/rfcs/blob/master/projects/feature-activation/0002-bit-signaling.md), but are out of scope for this guide.
27 |
28 | #### What should I do?
29 |
30 | Here are the necessary steps:
31 |
32 | 1. Make sure hathor-core is updated to the [latest version](https://github.com/HathorNetwork/hathor-core/releases).
33 | 2. Nothing has to be changed in the way the full node is operated. Everything will work automatically.
34 | 3. Verify that Hathor blocks are being generated with the correct first byte. If not, update the integration to support it. Read the examples below for a detailed explanation.
35 |
36 | ### 1. Identifying where the Hathor block is generated
37 |
38 | Your current integration most likely reads block templates from one of our HTTP or WebSocket endpoints. The relevant fields provided in those endpoints are the following:
39 |
40 | ```python
41 | versions: set[int]
42 | signal_bits: int
43 | reward: int
44 | weight: float
45 | timestamp_now: int
46 | timestamp_min: int
47 | timestamp_max: int
48 | parents: list[bytes]
49 | parents_any: list[bytes]
50 | height: int
51 | score: float
52 | ```
53 |
54 | Identify where your integration code reads this data and constructs the block.
55 |
56 | ### 2. Verify that your generated block includes `signal_bits`
57 |
58 | For reference, you can use the [websocat](https://github.com/vi/websocat) tool to get a block template from our WebSocket API by running `$ websocat wss://node1.mainnet.hathor.network/v1a/mining_ws`. Then, you'll get:
59 |
60 | ```json
61 | {
62 |   "id": null,
63 |   "method": "mining.notify",
64 |   "params": [
65 |     {
66 |       "data": "0400010000032000000040516f9397b2cc0d66205cfb03000000000000000001b1f30acdddd6f519fed153ad4bc4f107cde88a6176022a000054281a233cace0503977300affcd6fd72f4aa92c35da826d40c49b786ef200000a04a8cc7097f5701c810d4c5234ad85acf81f6609e2c4e680519f52ed3e00",
67 |       "versions": [
68 |         0,
69 |         3
70 |       ],
71 |       "reward": 800,
72 |       "weight": 69.74338333569195,
73 |       "timestamp_now": 1713397058,
74 |       "timestamp_min": 1713396987,
75 |       "timestamp_max": 1713397358,
76 |       "parents": [
77 |         "000000000000000001b1f30acdddd6f519fed153ad4bc4f107cde88a6176022a",
78 |         "000054281a233cace0503977300affcd6fd72f4aa92c35da826d40c49b786ef2",
79 |         "00000a04a8cc7097f5701c810d4c5234ad85acf81f6609e2c4e680519f52ed3e"
80 |       ],
81 |       "parents_any": [],
82 |       "height": 4404735,
83 |       "score": 88.38130312675703,
84 |       "signal_bits": 4
85 |     }
86 |   ]
87 | }
88 | ```
89 |
90 | It's also possible that you use the HTTP API instead, from `GET /v1a/get_block_template`. While the returned schema is a bit different, the relevant fields are the same.
91 |
92 | Using the example from above, notice that `"versions": [0, 3]` and `"signal_bits": 4`.
The `versions` field includes the possible block versions to be used, where `0` is a Regular Block and `3` is a Merge Mined Block, which is likely what you are using. Both `versions` and `signal_bits` values represent a single byte each. That is, considering this example, the following block would be generated: 93 | 94 | - 1st byte: `0x04` (the value from `signal_bits`). 95 | - 2nd byte: `0x03` (the chosen value, `3`, from `versions`). 96 | - Then, the rest of the block bytes. 97 | 98 | Or, as a string of bytes: `[0x04, 0x03, ...]`. 99 | 100 | Check that the blocks generated by your integration are correctly assigning the `signal_bits` in the first byte. Before this update, the first byte would always be zero, `0x00`. 101 | 102 | > [!NOTE] 103 | > Every previous version of the full node ignores the value of the first byte, so using the `signal_bits` will not affect the propagation of blocks to peers running nodes with outdated versions. 104 | 105 | ### 3. What if my generated block does not include `signal_bits`? 106 | 107 | It's likely that your integration ignores the `signal_bits` field, which didn't exist before. You probably set the first byte of the block data as `0x00`, or just use the `version` field to set the first two block bytes (the `version` value is never greater than one byte). You must change this so the `signal_bits` field is the first block byte, and the `version` field is the second block byte. 108 | 109 | #### Example 110 | 111 | A simple Python script is provided to illustrate how the first bytes can be constructed: 112 | 113 | 114 | ```python 115 | import json 116 | 117 | MERGE_MINED_BLOCK_VERSION = 3 118 | 119 | json_response_from_ws = '{"id":null,"method":"mining.notify","params":[{"data":"0400010000032000000040516f9397b2cc0d66205cfb03000000000000000001b1f30acdddd6f519fed153ad4bc4f107cde88a6176022a000054281a233cace0503977300affcd6fd72f4aa92c35da826d40c49b786ef200000a04a8cc7097f5701c810d4c5234ad85acf81f6609e2c4e680519f52ed3e00","versions":[0,3],"reward":800,"weight":69.74338333569195,"timestamp_now":1713397058,"timestamp_min":1713396987,"timestamp_max":1713397358,"parents":["000000000000000001b1f30acdddd6f519fed153ad4bc4f107cde88a6176022a","000054281a233cace0503977300affcd6fd72f4aa92c35da826d40c49b786ef2","00000a04a8cc7097f5701c810d4c5234ad85acf81f6609e2c4e680519f52ed3e"],"parents_any":[],"height":4404735,"score":88.38130312675703,"signal_bits":4}]}' 120 | json_dict = json.loads(json_response_from_ws) 121 | 122 | fields = json_dict['params'][0] 123 | versions = fields['versions'] 124 | signal_bits = fields['signal_bits'] 125 | 126 | assert MERGE_MINED_BLOCK_VERSION in versions, 'in this example we are generating a merge mined block' 127 | 128 | def prepare_first_two_bytes(signal_bits: int, version: int) -> bytes: 129 | r""" 130 | Prepare first two bytes for a Hathor block. 131 | Notice that the return value will always have length equal to two. 132 | 133 | >>> prepare_first_two_bytes(0, 0) 134 | b'\x00\x00' 135 | 136 | >>> prepare_first_two_bytes(0b101, 0x04) 137 | b'\x05\x04' 138 | 139 | >>> prepare_first_two_bytes(0b1111_0001, 0xFF) 140 | b'\xf1\xff' 141 | 142 | >>> prepare_first_two_bytes(0xFFF, 0x00) 143 | Traceback (most recent call last): 144 | ... 
145 |     AssertionError: the signal_bits must be at most one byte
146 |     """
147 |     assert signal_bits <= 0xFF, 'the signal_bits must be at most one byte'
148 |     assert version <= 0xFF, 'the version must be at most one byte'
149 |     return bytes([signal_bits, version])
150 |
151 | first_two_bytes = prepare_first_two_bytes(signal_bits, MERGE_MINED_BLOCK_VERSION)
152 | assert first_two_bytes == b'\x04\x03'
153 |
154 | # ...Then you would go on to configure the rest of the block bytes.
155 | ```
156 |
157 | > [!IMPORTANT]
158 | > Depending on when you run this example, it's possible that you get a different value for `signal_bits` from the Mining API. It might be that you receive `"signal_bits": 0`, which would result in the first byte being `0x00`, just like before this update. Make sure that you're correctly assigning `signal_bits` to the first block byte, even if it's zero.
159 |
160 | ### 4. Done!
161 |
162 | Make this change so all your blocks contain this new first byte. No further change is necessary.
163 |
--------------------------------------------------------------------------------
/projects/hathor-wallet-headless/0001-multisig-create-mint-melt-token.md:
--------------------------------------------------------------------------------
1 | - Feature Name: MultiSig to assist the EVM compatible bridge by supporting create, mint and melt token commands
2 | - Start Date: 2023-06-14
3 | - RFC PR: (leave this empty)
4 | - Hathor Issue: (leave this empty)
5 | - Author: Alex Ruzenhack alex@hathor.network
6 |
7 | #### Table of content
8 | [table-of-content]: #table-of-content
9 |
10 | - [Summary](#summary)
11 | - [Motivation](#motivation)
12 | - [Current system](#current-system)
13 | - [Guide-level explanation](#guide-level-explanation)
14 | - [Drawbacks](#drawbacks)
15 | - [Rationale and alternatives](#rationale-and-alternatives)
16 | - [Prior-art](#prior-art)
17 | - [Unresolved questions](#unresolved-questions)
18 | - [Future possibilities](#future-possibilities)
19 |
20 | # Summary
21 | [summary]: #summary
22 |
23 | It extends the wallet-headless to support creating a custom token and minting and melting tokens, all using a MultiSig wallet. It also adds the ability to initialize a MultiSig wallet without a seed, to support the coordinator service in the operation of signing and pushing transactions, and adds an endpoint to inspect transaction data with wallet metadata like balance, etc.
24 |
25 | # Motivation
26 | [motivation]: #motivation
27 |
28 | It is part of the solution proposed in the [EVM compatible blockchain design](https://github.com/HathorNetwork/rfcs/blob/doc/evm-compatible-brigde/projects/evm-compatible-bridge/design.md#required-features). It aims to enable creating, minting and melting tokens using a MultiSig wallet, cooperating with the [federation service](https://github.com/HathorNetwork/rfcs/blob/doc/evm-compatible-brigde/projects/evm-compatible-bridge/design.md#required-features) and the coordinator service.
29 |
30 | # Guide-level explanation
31 | [guide-level-explanation]: #guide-level-explanation
32 |
33 | The features we need to add for the MultiSig wallet:
34 | * Create a custom token
35 | * Mint a custom token
36 | * Melt a custom token
37 | * Inspect transaction with wallet metadata
38 | * Initialize wallet without seed
39 |
40 | ## API endpoints design
41 |
42 | Taking the `POST:/wallet/p2sh/tx-proposal` and `POST:/wallet/p2sh/tx-proposal/get-my-signatures` endpoints as examples, we can identify the following semantics:
43 |
44 | - `/wallet` -- the wallet module
45 | - `/p2sh` -- a submodule representing a domain of the wallet
46 | - `/tx-proposal` -- a command to create a new transaction proposal
47 | - `/tx-proposal/get-my-signatures` -- this time `/tx-proposal` assumes the role of a component of the p2sh domain in the wallet module, and `/get-my-signatures` is the command to sign the transaction with the wallet private key
48 |
49 | In summary, we can have either syntax:
50 |
51 | * `/module/domain/command`, or
52 | * `/module/domain/component/command`
53 |
54 | As our intent is to execute commands, the verb `POST` is the most convenient method to call the endpoints.
55 |
56 | ## Create custom token
57 |
58 | A single wallet can [create a token](https://docs.hathor.network/references/headless-wallet/http-api#operation/createToken) using the `POST:/wallet/create-token` endpoint. This operation creates mint and melt authorities by default.
59 |
60 | To provide a seamless extension to create a custom token in the MultiSig wallet, the endpoint keeps the same command:
61 | * `POST:/wallet/p2sh/tx-proposal/create-token`
62 |
63 | The data fields are:
64 | * `name`*
65 | * `symbol`*
66 | * `amount`*
67 | * `address`
68 | * `change_address`
69 | * `create_mint`
70 | * `mint_authority_address`
71 | * `allow_external_mint_authority_address`
72 | * `create_melt`
73 | * `melt_authority_address`
74 | * `allow_external_melt_authority_address`
75 |
76 | *`*` as required field
77 |
78 | The response scheme for success:
79 | ```ts
80 | {
81 |   "success": boolean,
82 |   "txHex": string
83 | }
84 | ```
85 |
86 | The purpose of the command is only to produce the unsigned `txHex`, nothing more; it will not push the transaction to the network. The returned `txHex` will be handled by the coordinator service, and later on by the federation service, which is responsible for signing the transaction.
87 |
88 | With this specification we cover the possibility of creating custom tokens with or without mint and melt authorities.
89 |
90 | ## Mint a custom token
91 |
92 | A single wallet can [mint a token](https://docs.hathor.network/references/headless-wallet/http-api#operation/mintTokens) using the `POST:/wallet/mint-token` endpoint. This operation creates another mint authority by default.
93 |
94 | For the MultiSig wallet the endpoint keeps the same command:
95 | `POST:/wallet/p2sh/tx-proposal/mint-token`
96 |
97 | The data fields are:
98 | * `token`*
99 | * `amount`*
100 | * `address`
101 | * `change_address`
102 | * `mint_authority_address`
103 | * `allow_external_mint_authority_address`
104 |
105 | *`*` as required field
106 |
107 | The response scheme for success:
108 | ```ts
109 | {
110 |   "success": boolean,
111 |   "txHex": string
112 | }
113 | ```
114 |
115 | The purpose of the command is only to produce the unsigned `txHex`, nothing more; it will not push the transaction to the network.
The returned `txHex` will be handled by the coordinator service, and later on by the federation service, which is responsible for signing the transaction.
116 |
117 | With this specification we cover the possibility of minting tokens, sending them to another wallet, optionally creating another mint authority, and sending that authority to another wallet.
118 |
119 | ## Melt a custom token
120 |
121 | A single wallet can melt a token using the `POST:/wallet/melt-token` endpoint. This operation creates another melt authority by default.
122 |
123 | For the MultiSig wallet the endpoint keeps the same command:
124 | `POST:/wallet/p2sh/tx-proposal/melt-token`
125 |
126 | The data fields are:
127 | * `token`*
128 | * `amount`*
129 | * `deposit_address`
130 | * `change_address`
131 | * `melt_authority_address`
132 | * `allow_external_melt_authority_address`
133 |
134 | *`*` as required field
135 |
136 | The response scheme for success:
137 | ```ts
138 | {
139 |   "success": boolean,
140 |   "txHex": string
141 | }
142 | ```
143 |
144 | The purpose of the command is only to produce the unsigned `txHex`, nothing more; it will not push the transaction to the network. The returned `txHex` will be handled by the coordinator service, and later on by the federation service, which is responsible for signing the transaction.
145 |
146 | With this specification we cover the possibility of melting tokens, sending them to another wallet, optionally creating another melt authority, sending that authority to another wallet, and defining an address to deposit the withdrawal amount, if any.
147 |
148 | ## Decode transaction with wallet metadata
149 |
150 | There is an endpoint to [inspect the transaction](https://docs.hathor.network/references/headless-wallet/http-api) by decoding the transaction's hexadecimal encoding in `POST:/wallet/decode`. However, it doesn't bring signature information or wallet information with it.
151 |
152 | By extending this endpoint operation to include wallet metadata we introduce a breaking change in the API. However, we keep coherence, since it is used not only to decode a `txHex` but also to decode a partial transaction. The extension adds a semantic decoding of the transaction components regarding the wallet. Also, it reduces roundtrips to the wallet by delivering the token balances.
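For illustration, a caller might use the extended endpoint like the sketch below (the base URL, wallet id and `txHex` are placeholders; the accepted fields are listed right after):

```python
import requests

# Placeholder values; the x-wallet-id header follows the headless wallet convention.
BASE_URL = 'http://localhost:8000'

response = requests.post(
    f'{BASE_URL}/wallet/decode',
    headers={'x-wallet-id': 'my1'},
    data={'txHex': '0001...'},  # hypothetical transaction hex
)
decoded = response.json()

if decoded['success'] and decoded['completeSignatures']:
    print('all required signatures are present; ready to sign-and-push')
```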
153 |
154 | The data field is:
155 | * `txHex`
156 | * `partial_tx`
157 |
158 | The response scheme for success:
159 | ```ts
160 | {
161 |   "success": boolean,
162 |   "completeSignatures": boolean,
163 |   "tx": {
164 |     "version": number,
165 |     "tokens": string[],
166 |     "inputs": [
167 |       {
168 |         txId: string,
169 |         index: number,
170 |         decoded: {
171 |           type: string,
172 |           address: string,
173 |           timelock: number,
174 |         }, // data from the spent output
175 |         token: string,
176 |         value: number,
177 |         tokenData: number, // user facing
178 |         token_data: number, // internal use
179 |         script: string,
180 |         signed: boolean,
181 |         mine: boolean,
182 |       },
183 |     ],
184 |     "outputs": [
185 |       {
186 |         decoded: {
187 |           address: string,
188 |           timelock: number,
189 |         },
190 |         value: number,
191 |         tokenData: number, // user facing
192 |         token_data: number, // internal use
193 |         script: string,
194 |         type: string,
195 |         mine: boolean,
196 |         token?: string,
197 |       }
198 |     ],
199 |   },
200 |   "balance": {
201 |     "<token_uid>": {
202 |       "tokens": { available: number, locked: number },
203 |       "authorities": {
204 |         mint: { available: number, locked: number },
205 |         melt: { available: number, locked: number },
206 |       },
207 |     },
208 |   },
209 | }
210 | ```
211 |
212 | Let's represent the response document as `$` and each element of a list as `[*]`; then we have:
213 | * `$.completeSignatures` -- represents the completeness of the signatures required to use the transaction
214 | * `$.tx.inputs[*].signed` -- indicates this input has a signature
215 | * `$.tx.inputs[*].mine` -- indicates this input belongs to this wallet
216 | * `$.tx.outputs[*].mine` -- indicates this output belongs to this wallet
217 | * `$.balance[*].tokens` -- contains the balance for the token
218 | * `$.balance[*].authorities` -- contains the balance of mint and melt for the token
219 |
220 | ## Initialize read-only MultiSig wallet
221 |
222 | This wallet is used by the coordinator service to sign or sign-and-push the transaction given the signatures collected by the service.
223 |
224 | A read-only wallet only needs the `xpubkey` to initialize. The MultiSig wallet can be initialized with `seedKey` or `multisigKey`, but when initializing with `multisigKey` there is no need to configure the `seeds` property in the configuration.
225 |
226 | By combining the two initializations we have a MultiSig read-only wallet, as we can see below:
227 | ```bash
228 | curl -X POST --data "wallet-id=my1" \
229 | --data "xpubkey=xpub...FHB" \
230 | --data "multisigKey=mymultisigwallet" \
231 | --data "multisig=true" \
232 | http://localhost:8000/start
233 | ```
234 |
235 | In the presented configuration the `xpubkey` doesn't have any role in the formation of `multisigData`; it serves only to avoid the requirement of configuring the `seeds` for the MultiSig wallet.
236 |
237 | With this configuration one can request the creation of transaction proposals but can't call `get-my-signatures` with success. However, after collecting all the signatures it is possible to call `sign` or `sign-and-push`.
238 |
239 | # Drawbacks
240 | [drawbacks]: #drawbacks
241 |
242 | Although the current design provides a minimal set of operations to enable the EVM compatible bridge, it still lacks some operations, like "create authority", "destroy authority", and "delegate authority", that could equip an organization to take full control over its assets in a MultiSig wallet. Also, token creation doesn't support NFT tokens.
243 |
244 | # Rationale and alternatives
245 | [rationale-and-alternatives]: #rationale-and-alternatives
246 |
247 | - The addition of specialized commands is better than changing the behavior of an existing command, because this approach avoids producing breaking changes.
248 |
249 | # Prior art
250 | [prior-art]: #prior-art
251 |
252 | * This design is a complement to the [EVM compatible bridge design](https://github.com/HathorNetwork/rfcs/blob/doc/evm-compatible-brigde/projects/evm-compatible-bridge/design.md#federation-service).
253 | * The commands to create, mint and melt tokens include fine-tuned authority creation after PRs [#291](https://github.com/HathorNetwork/hathor-wallet-headless/pull/291) and [#293](https://github.com/HathorNetwork/hathor-wallet-headless/pull/293) in the wallet-headless.
254 |
255 | # Unresolved questions
256 | [unresolved-questions]: #unresolved-questions
257 |
258 | * Should NFT data be allowed in the create token endpoint for the MultiSig wallet, or does it deserve its own endpoint, as is the case for the single wallet?
259 |
260 | # Future possibilities
261 | [future-possibilities]: #future-possibilities
262 |
263 | * Destroy authority for a given token
264 | * Delegate authority for a given token
265 | * Create NFT token
266 |
267 | ## Destroy authority for a given token
268 |
269 | Although there is no similar feature implemented in the single wallet, there is a method to prepare a transaction for this purpose in the wallet facade, which is the `prepareDestroyAuthorityData` method.
270 |
271 | For the MultiSig wallet, the command endpoint is:
272 | `POST:/wallet/p2sh/tx-proposal/destroy-authority`
273 |
274 | We can specify the following data fields:
275 | * `type`* -- one of the `mint|melt` options
276 | * `token`*
277 | * `count`*
278 |
279 | *`*` as required field
280 |
281 | The response scheme for success:
282 | ```ts
283 | {
284 |   "success": boolean,
285 |   "txHex": string
286 | }
287 | ```
288 |
289 | The purpose of the command is only to produce the unsigned `txHex`, nothing more; it will not push the transaction to the network. The returned `txHex` will be handled by the coordinator service, and later on by the federation service, which is responsible for signing the transaction.
290 |
291 | With this specification we cover the possibility of destroying a melt or mint authority with any count.
292 |
293 | ## Delegate authority
294 |
295 | Although there is no similar feature implemented in the single wallet, there is a method to prepare a transaction for this purpose in the wallet facade, which is the `prepareDelegateAuthorityData` method.
296 |
297 | For the MultiSig wallet, the command endpoint is:
298 | `POST:/wallet/p2sh/tx-proposal/delegate-authority`
299 |
300 | We can specify the following data fields:
301 | * `type`* -- one of the `mint|melt` options
302 | * `token`*
303 | * `authority_address`*
304 | * `allow_external_authority_address`
305 |
306 | *`*` as required field
307 |
308 | The response scheme for success:
309 | ```ts
310 | {
311 |   "success": boolean,
312 |   "txHex": string
313 | }
314 | ```
315 |
316 | With this specification we cover the possibility of delegating a melt or mint authority to another wallet.
317 |
318 | Another possibility for the delegate action would be the creation of N authorities. The use case for this could be to spread governance power, or to increase the transaction throughput for mint or melt. To enable this we can add an optional `count` property to the data fields.
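For illustration only (these are proposed future endpoints, not implemented APIs), a delegation request could look like the sketch below; the base URL, wallet id and field values are placeholders:

```python
import requests

response = requests.post(
    'http://localhost:8000/wallet/p2sh/tx-proposal/delegate-authority',
    headers={'x-wallet-id': 'my1'},   # hypothetical wallet id
    data={
        'type': 'mint',
        'token': '00...',             # placeholder token uid
        'authority_address': 'W...',  # placeholder destination address
    },
)
print(response.json())  # expected shape: {"success": true, "txHex": "..."}
```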
319 |
320 | ## Create NFT Token
321 |
322 | The creation of an NFT token for a MultiSig wallet can happen via either of these paths:
323 | * Extending the create token endpoint to support a data field
324 | * Creating an endpoint only for this purpose
325 |
--------------------------------------------------------------------------------
/projects/hathor-wallet-headless/0002-soft-reload.md:
--------------------------------------------------------------------------------
1 | - Feature Name: soft_config_reload
2 | - Start Date: 2023-08-21
3 | - Author: André Carneiro
4 |
5 | # Summary
6 | [summary]: #summary
7 |
8 | A change to the configuration system to allow changing the configuration without stopping the headless instance.
9 |
10 | # Motivation
11 | [motivation]: #motivation
12 |
13 | The configuration of the headless is exposed to the source code as a module, and as such it is locked by Node's cache.
14 | We may require some changes to the config without stopping the headless instance, e.g. adding a new seed, changing the network, etc., but with the current system it cannot be done.
15 |
16 | # Guide-level explanation
17 | [guide-level-explanation]: #guide-level-explanation
18 |
19 | ## Configuration system
20 |
21 | ### How it works
22 |
23 | #### When running locally
24 |
25 | The `config.js` file is created by the dev, who can alter the contents as needed.
26 | The dev is responsible for the contents and for ensuring they follow the headless requirements.
27 |
28 | Once the instance starts, it will load the `config.js` as a module and Node's cache system will save this state.
29 | From this point on, any calls that require reading the configuration will read from cache, so writing to the file will have no effect on the running instance.
30 |
31 | #### When running in a docker container
32 |
33 | During the docker build a `config.js` file is created; this file does not contain a configuration but instead loads it from the environment variables and then exports the configuration object.
34 | This means that once the wallet starts, the environment variables will be read and the resulting config will be cached the same way as when running locally.
35 |
36 | ### Changes to configuration system
37 |
38 | We will change all instances that import `config.js` directly to import a `setting.js` instead; this file will export the methods:
39 | - `setupConfig`: dynamically import the config module and update the configuration singleton.
40 |   - Should be run only once
41 | - `getConfig`: will return the configuration singleton.
42 |   - If the singleton is not present, throw an error
43 | - `reloadConfig`: will erase the configuration singleton, clear any cache on it, then reload the configuration.
44 |   - Will perform checks on the new config as well, to ensure the wallet is running as expected.
45 |
46 | This way the dev running locally can continue using the `config.js` module without changes, and the docker configuration will also not change.
47 |
48 |
49 | ## Settings module
50 |
51 | The settings module will hold a cache for the configuration; this cache will be a singleton instance (i.e. a simple `_config` object declared in the file).
52 |
53 |
54 | ### Setup config
55 |
56 | Since all parts of the wallet will access the config through the settings module, it has to be the first thing the wallet loads, since even the server startup depends on the config.
57 |
58 | The setup will import the config module, update the singleton and set the value of a variable called `_started` to `true`.
59 | `_started` is initialized with `false` and its function is to signal that the configuration has already been set up, so we don't call this method again.
60 |
61 | The reason we can only call this method once is that, when called, it will import the config and update the singleton, ignoring any possible changes to the config.
62 | So if called again it could leave the wallet in an inconsistent state.
63 |
64 |
65 | ### Get config
66 |
67 | It should just check that `_config` is valid and return it.
68 | If `_config` is not valid, it should throw an error, indicating that the configuration cannot be read at the moment (possibly during a reload).
69 | The error it throws should be caught by a middleware, which will return `503 (Service Unavailable)` to the caller with the reason for unavailability.
70 |
71 | The reload should be a fast method, but we cannot allow other calls to be made during the reload since the service can be in an inconsistent state.
72 |
73 | ### Reload config
74 |
75 | This method will save the current config state, delete the Node module cache of the `config.js` file, then check for differences in the configuration; some actions may need to be taken depending on the differences.
76 |
77 | The action to be taken depends on the severity of the change; we should go down this list and start the appropriate action.
78 |
79 | - Changes in `http_bind_address`, `http_port`, `http_api_key`, `consoleLevel`, `httpLogFormat`, `enabled_plugins`, `plugin_config`
80 |   - These are critical changes and require a full restart of the service.
81 |   - We should raise a custom error so that a middleware can return a 200 response to the user and then shut down the service with `process.exit(0)`.
82 | - Changes in `seed` and `multisig`
83 |   - Check that all previous entries are the same.
84 |   - The multisig should check for changes in all subkeys (pubkeys, numSignatures, total)
85 |   - If there was any change in previous entries we should stop all wallets.
86 |   - If there were only new entries, we should do nothing.
87 |   - We should check that all seed values are valid seeds.
88 |   - If an invalid seed is found, we should raise an error to stop the service.
89 | - Changes in `network`, `server`
90 |   - Stop all wallets, then change the configuration using wallet-lib's config.
91 | - Changes in `tokenUid`, `gapLimit`, `connectionTimeout`
92 |   - Stop all wallets; these will take effect when the wallet is started again.
93 | - Changes in `txMiningUrl`, `atomicSwapService`, `txMiningApiKey`
94 |   - Run the `initHathorLib` method in `app.js` to configure these.
95 | - Changes in `confirmFirstAddress`, `allowPassphrase`
96 |   - Do nothing.
97 |
98 | The stopped wallets should also be removed from `initializedWallets` on `src/services/wallets.service.js`.
99 | Although it is a bad practice to remove a listener created by another module ([reference](https://nodejs.dev/en/api/v20/events/#emitterremovealllistenerseventname)), we will use the `removeAllListeners` method on the wallet and `wallet.conn` to avoid memory leaks.
100 |
101 | ## API
102 |
103 | ### POST /reload-config
104 |
105 | This API will run the `reloadConfig` method and return 200.
106 |
107 | # Reference-level explanation
108 | [reference-level-explanation]: #reference-level-explanation
109 |
110 | ## Dynamic import
111 |
112 | In NodeJS we can use [import() expressions](https://nodejs.org/api/esm.html#import-expressions) to dynamically import a module.
113 | Although all the rules and management operations for static imports still apply, dynamic imports are only evaluated when needed (as per the [documentation](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/import)).
114 | This allows us to import the config module when needed.
115 |
116 | One of the rules that we need to override is the native module cache, which would make subsequent calls to dynamically import the config module return the cached value instead of evaluating the module again.
117 |
118 | ```js
119 | async function importConfig() {
120 |   // The default extraction is because the actual content of the config is inside the default export of the module.
121 |   return (await import(configPath)).default;
122 | }
123 | ```
124 |
125 | ## NodeJS module cache
126 |
127 | We interact with the [require cache](https://nodejs.org/api/modules.html#requirecache) to ensure we can dynamically load the `config.js` as a module and get the current value of the config.
128 | Since `config.js` is imported as a module it falls into a cache table; the key to the cache table is the full path of the module entrypoint if it's a relative import.
129 | We can get the path used by NodeJS as the cache key with [require.resolve](https://nodejs.org/api/modules.html#requireresolverequest-options).
130 |
131 | With this information, we can use the following to invalidate the NodeJS cache of the `config.js` module.
132 |
133 | ```js
134 | delete require.cache[require.resolve('./config.js')]
135 | ```
136 |
137 | ## Scanning config changes
138 |
139 | To avoid any errors with object references, we should start the `reloadConfig` method by using [lodash.cloneDeep](https://lodash.com/docs/4.17.15#cloneDeep) on the current `_config` and dynamically importing the current config module.
140 | Then we can compare the old and new config modules and take appropriate action depending on the changes found.
141 |
142 | ## Handling errors with middlewares
143 |
144 | Middlewares can be configured to run on specific routes, on routers, or on the entire application. Since the configuration errors are not wallet-specific, we should use the middleware on the entire application.
145 | To do so we can use the `createApp` method on `src/app.js` to add the middleware to the app.
146 |
147 | The [guide](https://expressjs.com/en/guide/error-handling.html) on error handlers tells us the middleware will receive `(error, request, response, next)`, where `error` is the error instance we will intercept.
148 | If the error is not one of the types we want to handle, we can just return `next(err)` to invoke the next error handler; if no custom handlers catch the error, the default one will, returning `500 (Internal Server Error)` to the user.
149 |
150 | ```js
151 | function ConfigErrorHandler(err, req, res, next) {
152 |   if (err instanceof NonRecoverableConfigChangeError) {
153 |     res.status(200);
154 |     res.send({ success: false, error: 'A non recoverable change in the config was made, the service will shutdown.' });
155 |     process.exit(0);
156 |   } else if (err instanceof UnavailableConfigError) {
157 |     res.status(503);
158 |     return res.send({ success: false, error: 'Service currently unavailable.'
});
159 |   }
160 |
161 |   return next(err);
162 | }
163 | ```
164 |
165 | # Rationale and alternatives
166 | [rationale-and-alternatives]: #rationale-and-alternatives
167 |
168 | ### Always read from current config state
169 |
170 | When performing a read on any config key, read from the actual source (either envvar or file).
171 | This alternative can have very poor performance, but that is not the reason it was discarded as an option.
172 | There are some special keys that have an impact on the wallet, e.g. `network`: if we change the network, we should stop all wallets and then reconnect, expecting the server to be on the new network.
173 | If we have no way of reacting to changes in the config, we cannot safely discard the possibility that a breaking change was made.
174 |
175 | # Future possibilities
176 | [future-possibilities]: #future-possibilities
177 |
178 | ## Remake the configuration system
179 |
180 | The designed system aims to have as few problems as possible with the current configuration system, but we should consider changing the configuration to a less indirect system, without a `config.js` module for instance, so the configuration can work the same way locally and in docker.
181 | The configuration is also very error-prone for new developers; we should create a type system so any errors in the config can be found as early as possible, maybe even with a static evaluator.
182 |
183 | ## Standard error response
184 |
185 | We should use a common standard for error responses on our service; a good start would be the [RFC7807](https://datatracker.ietf.org/doc/html/rfc7807) standard.
186 |
187 | # Task breakdown
188 |
189 | - [ ] Create settings module (1 dev day)
190 | - [ ] Create middlewares to handle errors from the settings module (1 dev day)
191 | - [ ] Refactor the headless to use the new settings module instead of config (1 dev day)
192 | - [ ] Refactor the tests to use the new settings module instead of the config (1 dev day)
193 | - [ ] Create tests for the settings module (0.5 dev day)
194 |
--------------------------------------------------------------------------------
/projects/hathor-wallet-headless/0003-fireblocks-integration.md:
--------------------------------------------------------------------------------
1 | # Summary
2 | [summary]: #summary
3 |
4 | Hathor headless wallet integration with Fireblocks using the RAW Signing API.
5 |
6 | # Motivation
7 | [motivation]: #motivation
8 |
9 | Fireblocks manages the private key, so removing the private key from the headless instance will allow users to run the headless in a more secure manner.
10 |
11 | # Guide-level explanation
12 | [guide-level-explanation]: #guide-level-explanation
13 |
14 | <details>
15 |
16 | <summary>POST /wallet/fireblocks/start (Start a wallet with the Fireblocks integration)</summary>
17 |
18 | This API will use the wallet's `setExternalTxSigningMethod` to register a method that will start a client for Fireblocks and use Fireblocks' RAW Signing API to sign the transaction.
19 |
20 | ##### Parameters
21 |
22 | > | name | type | data type | description | location |
23 | > | --- | --- | --- | --- | --- |
24 | > | xpub-id | required | string | The id of the xpub in the config | body |
25 | > | wallet-id | required | string | create a wallet with this id | body |
26 |
27 | ##### Responses
28 |
29 | > | http code | content-type | response |
30 | > | --- | --- | --- |
31 | > | `200` | `application/json` | `{"success":true}` |
32 | > | `400` | `application/json` | `{"success": false, "message":"Bad Request"}` |
33 |
34 | ##### Example cURL
35 |
36 | > ```bash
37 | > curl -X POST -H "Content-Type: application/json" --data '{"xpub-id": "cafe", "wallet-id": "cafe", "raw": true}' 'http://localhost:8000/wallet/fireblocks/start'
38 | > ```
39 |
40 | </details>
41 |
42 | Fireblocks has documented a few integration options; we will use the [REST API](https://developers.fireblocks.com/docs/rest-api-guide) to implement our client, which means we need to implement the authorization workflow and the API calls ourselves.
43 |
44 | ## Fireblocks Client
45 |
46 | ### BIP44 path derivation
47 |
48 | Fireblocks follows a derivation path very similar to BIP44 but without any hardened derivations.
49 | This is why we can use the root xPub from the console to generate the account-level xPub: even though it is not a true BIP44 account level, it will work exactly the same with our wallet-lib.
50 |
51 | The usual account derivation path of `m/44'/280'/0'` will be replaced with `m/44/280/0`, so we will need to create a script that derives the root xPub to the Fireblocks account-level path.
52 |
53 | From the account level, the change and address derivation will work as usual.
54 |
55 | Fireblocks exporting the root xPub poses some risk to user privacy, since it can be used to derive the xPubs of all coins; hardened derivation was meant to protect against exactly this.
56 | The funds are safe, since Fireblocks does not export the xPriv, but someone with the root xPub can derive all addresses and check their balances, on all coins.
57 | Since we save only the Hathor "account path", only the Hathor addresses can be derived.
58 |
59 | > #### Warning!
60 | > Since derivation is not done under the usual BIP44 path, one may not be able to find the tokens when using the seed or xPriv. This is not harmful (as of writing) because Fireblocks does not export the xPriv or seed, but it may cause confusion: users may think they lost their funds if Fireblocks allows xPriv exports in the future.
61 | >
62 | > Note: During testing we tried using the correct BIP44 derivation path with the Fireblocks API, but it is not allowed. If this changes in the future we must handle any migration and updates carefully.
63 |
64 | ### Fireblocks API Authorization
65 |
66 | [reference](https://developers.fireblocks.com/reference/signing-a-request-jwt-structure)
67 |
68 | The Fireblocks authorization workflow requires 2 headers:
69 | - `X-API-Key`: The api key identifying the api user.
70 | - `Authorization`: Bearer token using JWT, with custom claims, signed with the api user's secret key (downloaded from the Fireblocks dashboard).
71 |
72 | The JWT token requires the claims:
73 | - `uri`: path with querystring.
74 | - `nonce`: UUID4 or any random generator.
75 | - `iat` and `exp`: "issued at time" and "expiration".
76 |   - Since it's a per-call token the difference between them can be very short (e.g. 30s).
77 |
78 | - `sub`: The api key used in the first header.
79 | - `bodyHash`: The sha256 hash of the JSON encoded body of the request.
80 |   - Or the sha256 hash of an empty string if the body is empty (e.g. GET requests).
81 |
82 | The token must be signed with `RS256` (RSA + SHA256) using the api user's private key (downloaded from Fireblocks).
83 |
84 | Some examples are provided on how to implement this (see [reference](https://github.com/fireblocks/developers-hub/tree/main/authentication_examples)).
85 |
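To make the claim structure concrete, here is a minimal sketch of the per-request token generation in Node.js. It assumes the `jsonwebtoken` package; the helper name and parameters are illustrative, not the final implementation:

```javascript
const crypto = require('crypto');
const jwt = require('jsonwebtoken'); // assumed dependency, for illustration only

// Builds the two authorization headers described above.
// `apiKey` and `privateKeyPem` are the credentials downloaded from Fireblocks.
function buildAuthHeaders(apiKey, privateKeyPem, uri, body) {
  // sha256 of the JSON encoded body, or of an empty string (e.g. GET requests)
  const bodyJson = body === undefined ? '' : JSON.stringify(body);
  const bodyHash = crypto.createHash('sha256').update(bodyJson).digest('hex');
  const now = Math.floor(Date.now() / 1000);
  const claims = {
    uri,                         // path with querystring
    nonce: crypto.randomUUID(),  // random per-call nonce
    iat: now,                    // issued at time
    exp: now + 30,               // short-lived, per-call token
    sub: apiKey,                 // same api key as the X-API-Key header
    bodyHash,
  };
  const token = jwt.sign(claims, privateKeyPem, { algorithm: 'RS256' });
  return { 'X-API-Key': apiKey, Authorization: `Bearer ${token}` };
}
```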
86 | ### RAW Signing API
87 |
88 | [reference](https://developers.fireblocks.com/reference/post_transactions)
89 | [guide](https://developers.fireblocks.com/docs/raw-message-signing-overview)
90 |
91 | RAW Signing uses the normal transaction POST to create a `RAW` operation, where we can request content (hex encoded) to be signed with the derivation path we send.
92 | The operation is treated as a transaction and returns a transaction id, which can be used with the Transaction API to check its status.
93 |
94 | ### Transaction API
95 |
96 | [reference](https://developers.fireblocks.com/reference/get_transactions-txid)
97 |
98 | With the transaction id we can check the transaction status; once the status is `COMPLETED` the returned object will have the requested signatures.
99 |
100 | The signatures are returned as raw r, s values, so we need to encode them in DER to use them with our wallet-lib.
101 | Each signature also comes with the public key of the key used to sign the content, which can be checked against the locally generated public key to assert that the correct private key was used.
102 |
103 | # Reference-level explanation
104 | [reference-level-explanation]: #reference-level-explanation
105 |
106 | ## Start API
107 |
108 | The start API is very straightforward: we only need to validate the wallet-id, the xpub-id and the configured Fireblocks credentials.
109 |
110 | To check that the credentials are valid we can use the [public_key_info API](https://developers.fireblocks.com/reference/get_vault-public-key-info) to fetch the first address at derivationPath `[44, 280, 0, 0, 0]`.
111 |
112 | ## External signing function
113 |
114 | Since the Fireblocks API requires a new token for each request, we can have a method that gathers the "signature requests" from a transaction and sends a single request for RAW signing.
115 | The resulting signatures will be fetched from the Transaction API and will need to be encoded to DER to be used with the wallet-lib.
116 |
117 |
118 | A DER encoded signature has the following structure:
119 |
120 | | sequence tag | len(sequence) in bytes | integer tag | len(r) in bytes | r | integer tag | len(s) in bytes | s |
121 | | :----------: | :--------------------: | :---------: | :-------------: | :---: | :---------: | :-------------: | :---: |
122 | | 0x30 | 1 byte | 0x02 | 1 byte | bytes | 0x02 | 1 byte | bytes |
123 |
124 | Where `r` and `s` must be prefixed with a 0x00 byte if their first bit is 1, so they are not interpreted as negative.
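For reference, a minimal sketch of this DER encoding, assuming `r` and `s` arrive as big-endian byte Buffers (helper names are illustrative):

```javascript
// Encodes one integer in DER: tag 0x02, length, value.
// Prefixes 0x00 when the first bit is 1, so the value is not read as negative.
function encodeDerInt(buf) {
  let i = 0;
  while (i < buf.length - 1 && buf[i] === 0x00) i += 1; // strip redundant leading zeros
  let value = buf.slice(i);
  if (value[0] & 0x80) {
    value = Buffer.concat([Buffer.from([0x00]), value]);
  }
  return Buffer.concat([Buffer.from([0x02, value.length]), value]);
}

// Builds the full signature: sequence tag 0x30, sequence length, then r and s.
function encodeDerSignature(r, s) {
  const body = Buffer.concat([encodeDerInt(r), encodeDerInt(s)]);
  return Buffer.concat([Buffer.from([0x30, body.length]), body]);
}
```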
125 |
126 | The Fireblocks signing method will be registered with the wallet facade, and all subsequent transactions from this facade will be signed with Fireblocks, giving full compatibility with the other APIs in the headless wallet.
127 |
128 | # Rationale and alternatives
129 | [rationale-and-alternatives]: #rationale-and-alternatives
130 |
131 | ### Fireblocks SDK
132 |
133 | Fireblocks provides JavaScript, TypeScript and Python SDKs, but while trying to use the SDK one of its dependencies conflicted with bitcore-lib, making the headless fail to start up at all.
134 | The SDK also adds many unneeded dependencies, so using the REST API allows us to keep dependencies minimal, at the cost of implementing all the APIs we want to use.
135 |
136 | # Unresolved questions
137 | [unresolved-questions]: #unresolved-questions
138 |
139 | This design only contemplates the RAW signing flow, using our own wallet-lib to create, mine and push the transaction to the network.
140 | The full integration with Fireblocks is not contemplated because it requires additional implementation from Fireblocks, which may happen in the future.
141 |
--------------------------------------------------------------------------------
/projects/hathor-wallet-headless/0005-early-send-tx-lock-release.md:
--------------------------------------------------------------------------------
1 | - Feature Name: early_send_tx_lock_release
2 | - Start Date: 2024-09-17
3 | - Author: Andre Carneiro
4 |
5 | # Summary
6 | [summary]: #summary
7 |
8 | Release the send-tx lock after the transaction is prepared and before it is mined.
9 |
10 | # Motivation
11 | [motivation]: #motivation
12 |
13 | The only impediment to running 2 send-transaction operations simultaneously is the UTXO selection, where both could select the same UTXOs.
14 | This means it is safe to release the send-tx lock after the transaction's UTXOs are chosen.
15 | This would allow sending more transactions from the same headless wallet instance.
16 |
17 | # Guide-level explanation
18 | [guide-level-explanation]: #guide-level-explanation
19 |
20 | ## Wallet-lib
21 |
22 | We should aim to affect all types of wallets capable of sending transactions (P2SH, P2PKH, HSM, Fireblocks, etc.).
23 | The wallet facade has some methods that are meant to prepare and send transactions; these methods return a promise that resolves when the transaction is accepted by the network.
24 | This promise is the return of either `run` or `runFromMining` from the `SendTransaction` facade.
25 |
26 | The methods that send transactions are:
27 |
28 | - `sendManyOutputsTransaction`
29 | - `handleSendPreparedTransaction`
30 | - `consolidateUtxos`
31 | - `sendTransaction`
32 | - `createNewToken`
33 | - `mintTokens`
34 | - `meltTokens`
35 | - `delegateAuthority`
36 | - `destroyAuthority`
37 | - `createNFT`
38 | - `createNanoContractTransaction`
39 | - `createAndSendNanoContractTransaction`
40 |
41 | We will create new methods that return the `SendTransaction` instance instead, and the methods above will be changed to use the new ones to avoid code duplication.
42 | This will give the caller more control over the execution of the "send transaction" process.
43 |
44 | ## Headless send-tx lock
45 |
46 | We will change the calls to any of the methods listed above to use the new methods.
47 | This will enable the headless to release the send-tx lock before mining the transaction.
48 |
49 | The send-tx process can be seen as divided into 3 steps:
50 |
51 | - `prepare-tx`
52 |   - Choose UTXOs and create a `Transaction` instance
53 |   - `SendTransaction` can be instantiated with the `Transaction` instance
54 | - `mine-tx`
55 |   - Lock utxos from the `Transaction` instance
56 |   - Send the transaction to a tx-mining service and get the `nonce`
57 | - `push-tx`
58 |   - Send the transaction to the fullnode.
59 |
60 | The headless will have to manually lock the UTXOs after the `prepare-tx` step, so the UTXOs are not chosen by another call that creates a transaction.
61 |
62 | Some of the wallet-lib methods initiate a `SendTransaction` instance from the expected outputs and others from a prepared `Transaction` instance.
63 | The caller can still identify whether the instance has a prepared transaction or not, meaning the execution steps are:
64 |
65 | - If `sendTransaction.transaction` is `null`
66 |   - run `prepareTx`
67 | - Lock UTXOs with `updateOutputSelected`
68 | - Release the send-tx lock.
69 | - Call `.runFromMining()` and return the result.
70 |
71 | Since these steps do not depend on the method called, we can move this to a util method to be used on all routes, as sketched below.
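A minimal sketch of that util method, following the step names above; the lock interface and method names are illustrative, not final:

```javascript
// Runs the send-tx process for a route, releasing the lock right after
// the transaction is prepared and its UTXOs are marked as selected.
async function runSendTransaction(sendTransaction, sendTxLock) {
  await sendTxLock.acquire();
  try {
    // prepare-tx: only needed when the instance was built from outputs
    if (sendTransaction.transaction === null) {
      await sendTransaction.prepareTx();
    }
    // Lock the chosen UTXOs so concurrent calls do not pick them
    await sendTransaction.updateOutputSelected(true);
  } finally {
    // Early release: the mine-tx and push-tx steps do not need the lock
    sendTxLock.release();
  }
  return sendTransaction.runFromMining();
}
```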
72 |
73 | ## HSM special case
74 |
75 | The HSM does not allow simultaneous connections, so we should use a global send-tx lock for all HSM wallets.
76 | Since we release the connection after the transaction is signed, the early lock release will be safe as long as all HSM wallets use the same lock.
77 |
78 | # Rationale and alternatives
79 | [rationale-and-alternatives]: #rationale-and-alternatives
80 |
81 | ## Peek internals with an EventEmitter
82 |
83 | The headless could inject an `EventEmitter` and receive updates on what is happening during the "send transaction" process.
84 | These updates could be used to release the send-tx lock and would not change the types of the facade methods.
85 | Returning the `SendTransaction` instance was chosen because it gives the caller more control over the process and leaves fewer opportunities for memory leaks.
86 |
87 | ## Create a Task facade to control the process
88 |
89 | The "Task facade" would be initiated with the steps to execute, and the caller would be able to run code between steps or just run all steps.
90 | This creates opportunities to abort or release the lock after the `prepare-tx` step and could be useful for other processes as well.
91 |
92 | This was discarded due to its added complexity and development time.
93 |
94 | # Unresolved questions
95 |
96 | ## Finite UTXO selection
97 |
98 | Allowing transactions to choose UTXOs faster can eventually lead to all available UTXOs being chosen before the transactions are sent.
99 | This makes the next attempt to send a transaction fail with a "Not enough tokens" error, even though the wallet may have the balance.
100 |
101 | If a wallet is meant to send many transactions in a burst and wants to avoid this, it should spread the tokens over as many UTXOs as possible.
102 | This will not prevent the error, but it minimizes it, since fewer tokens will be sitting on change outputs that have yet to be sent.
103 |
104 | # Task Breakdown
105 | [task-breakdown]: #task-breakdown
106 |
107 | - [ ] Create new lib methods (2 dev-days).
108 | - [ ] Change headless to use the new lib methods (1 dev-day).
109 | - [ ] Change headless to release the send-tx lock early (1 dev-day).
110 | - [ ] Make all HSM wallets use the same lock (1 dev-day).
111 |
112 | Total: 5 dev-days.
113 |
--------------------------------------------------------------------------------
/projects/hathor-wallet-headless/api-docs-ci.md:
--------------------------------------------------------------------------------
1 | # Summary
2 | This feature automates the process of uploading the updated API Docs for the Headless Wallet, while giving the developer a preview of the contents, error checking and linting.
3 |
4 | A script is made available for the developer to make all the necessary checks locally, and a GitHub Workflow is implemented to:
5 | - Validate and lint the `api-docs.js` file on every PR to the `dev` branch
6 | - Deploy the updated docs on every new successful release
7 |
8 | # Motivation
9 | Currently, this upload is made manually by a team member, extracting data from the `src/api-docs.js` file and processing it to finally upload its contents to the [Headless Hathor Wallet API](https://wallet-headless.docs.hathor.network/) site.
10 |
11 | # Guide-level explanation
12 | ### JS to JSON conversion
13 | The platform that manages the OpenAPI Documentation static website, [Redoc](https://github.com/Redocly/redoc), only interacts with `.json` files. So we need to convert our `src/api-docs.js` file into a `.json` one whenever we want to use this platform.
14 |
15 | A script should be built in the `scripts` folder to make this conversion, implementing the following:
16 | - Ensures the `default` property normally present when the module is exported is not present on the output json
17 | - Outputs the results to the `tmp/api-docs.json` file.
18 |
19 | This script will be referred to as `scripts/convert-docs.js` from now on; a sketch is shown below.
20 |
21 | > *Note*: The `securitySchemes` from the `components` property is no longer necessary and can be removed from the `js` file when this is implemented.
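A minimal sketch of what `scripts/convert-docs.js` could look like; the handling of the `default` property is illustrative and depends on how `src/api-docs.js` is actually exported:

```javascript
const fs = require('fs');
const path = require('path');

// Load the docs definition; modules transpiled from ES syntax often expose
// their contents under a `default` property when loaded through require().
let apiDocs = require('../src/api-docs');
if (apiDocs && apiDocs.default) {
  apiDocs = apiDocs.default; // ensure the `default` wrapper is not serialized
}

// Output the results to tmp/api-docs.json
const outDir = path.join(__dirname, '..', 'tmp');
fs.mkdirSync(outDir, { recursive: true });
fs.writeFileSync(
  path.join(outDir, 'api-docs.json'),
  JSON.stringify(apiDocs, null, 2),
);
```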
22 |
23 | ### Linting and Preview
24 | A developer should be able to run a `npm run docs` command with three outcomes:
25 | - A validation check should be executed on the generated docs, alerting about any errors found
26 | - A lint should be executed on the generated docs, displaying its analysis results
27 | - A local server should be started, allowing the developer to visualize the results of the doc changes
28 |
29 | This would be an automated version of manually executing the following commands:
30 | ```sh
31 | # Validation and conversion
32 | node ./scripts/convert-docs.js
33 |
34 | # Linting
35 | npx @redocly/cli lint tmp/api-docs.json
36 |
37 | # Local server for validating
38 | npx @redocly/cli preview-docs tmp/api-docs.json
39 | xdg-open http://127.0.0.1:8080
40 |
41 | # Cleanup after the server is closed
42 | rm tmp/api-docs.json
43 | ```
44 |
45 | > *Note:* The linter throws some errors on our current documentation. The first implementation of this script will certainly require a refactoring PR.
46 |
47 | ### PR Validation
48 | A GitHub Workflow, configured to run on every PR, should run the conversion script and the linter to check for errors on the generated documentation. This should be part of the CI process.
49 |
50 | ### Deployment
51 | On the deployment pipeline, namely when there is a new release tag on the `master` branch, a new workflow should also upload the generated `json` file to the production documentation S3 bucket. This will update the website for the community.
52 |
53 | More on what constitutes a release version can be found in the reference-level explanation.
54 |
55 | > *Note:* All S3 connection data should be implemented as secrets. Refer to the [`docker.yml`](https://github.com/HathorNetwork/hathor-wallet-headless/blob/master/.github/workflows/docker.yml) file for reference.
56 |
57 | # Reference-level explanation
58 |
59 | ### Identifying release versions
60 | One point of the deployment workflow requires a more detailed explanation: identifying whether a version is a release candidate or a regular version.
61 |
62 | What will be considered a release version is a regular [_semver_](https://semver.org/) version without any suffixes (e.g. v0.21.0). This means any release candidate versions (e.g. v0.21.0-rc1) will not be subject to this workflow.
63 |
64 | A lightweight solution for that, one that fits our current default of implementing the whole workflow within the `yml` file itself, would be creating a version validation step along the lines of the following (inspired by the [docker workflow](https://github.com/HathorNetwork/hathor-wallet-headless/blob/master/.github/workflows/docker.yml#L12)):
65 |
66 | ```yaml
67 | steps:
68 |   - name: Check release version
69 |     id: tags
70 |     shell: python
71 |     run: |
72 |       import re
73 |       import sys
74 |
75 |       def is_release_version(version):
76 |           # This pattern accepts "v1.0.0", "v2.4.6". Rejects "v1.0.0-rc1".
77 |           pattern = r"^v\d+\.\d+\.\d+$"
78 |           match = re.match(pattern, version)
79 |           return match is not None
80 |
81 |       ref = '${{ github.ref }}'
82 |
83 |       if ref.startswith('refs/tags/'):
84 |           version = ref[10:]
85 |           if is_release_version(version):
86 |               sys.exit(0)  # This is a release version. Continue the workflow
87 |           else:
88 |               sys.exit(1)  # This is not a release version. Interrupt this workflow
89 |       else:
90 |           sys.exit(2)  # This is not a release tag. Interrupt this workflow
91 | ```
92 |
93 | # Task breakdown
94 |
95 | ### Milestone: Developer feedback - 0.5 dev days
96 | - Write the conversion script - 0.2 dev days
97 | - Implement the local validation environment - 0.3 dev days
98 |   - Write the `shell` command to convert and lint the api docs
99 |   - Also start the local server for manual verification.
100 |   - Associate it with the `docs` script on `package.json`
101 |   - Merge this on the `dev` branch
102 |
103 | ### Milestone: CI workflow - 0.5 dev days
104 | - Implement a workflow for validating and lint checking when merging to `dev` - 0.1 dev days
105 | - Implement a workflow to deploy when making a new release - 0.1 dev days
106 | - Test these new workflows on PR merges and releases - 0.3 dev days
107 |
--------------------------------------------------------------------------------
/projects/hathor-wallet-mobile/img/0001-dev-settings/hathor-wallet-desktop-settings.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/hathor-wallet-mobile/img/0001-dev-settings/hathor-wallet-desktop-settings.png
--------------------------------------------------------------------------------
/projects/hathor-wallet-mobile/img/0001-dev-settings/mock-1-settings-page-v2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/hathor-wallet-mobile/img/0001-dev-settings/mock-1-settings-page-v2.png
--------------------------------------------------------------------------------
/projects/hathor-wallet-mobile/img/0001-dev-settings/mock-2-risk-disclaimer-page-v2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/hathor-wallet-mobile/img/0001-dev-settings/mock-2-risk-disclaimer-page-v2.png
--------------------------------------------------------------------------------
/projects/hathor-wallet-mobile/img/0001-dev-settings/mock-2-risk-disclaimer-page.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/hathor-wallet-mobile/img/0001-dev-settings/mock-2-risk-disclaimer-page.png
--------------------------------------------------------------------------------
/projects/hathor-wallet-mobile/img/0001-dev-settings/mock-3-presettings-page.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/hathor-wallet-mobile/img/0001-dev-settings/mock-3-presettings-page.png
--------------------------------------------------------------------------------
/projects/hathor-wallet-mobile/img/0001-dev-settings/mock-4-custom-settings-page-v2.png:
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/hathor-wallet-mobile/img/0001-dev-settings/mock-4-custom-settings-page-v2.png -------------------------------------------------------------------------------- /projects/hathor-wallet-mobile/img/0001-dev-settings/mock-5-feedback-success.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/hathor-wallet-mobile/img/0001-dev-settings/mock-5-feedback-success.png -------------------------------------------------------------------------------- /projects/hathor-wallet-mobile/img/0001-dev-settings/mock-alert-ui-v3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/hathor-wallet-mobile/img/0001-dev-settings/mock-alert-ui-v3.png -------------------------------------------------------------------------------- /projects/hathor-wallet-mobile/img/0001-dev-settings/safepal-custom-network-form.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/hathor-wallet-mobile/img/0001-dev-settings/safepal-custom-network-form.png -------------------------------------------------------------------------------- /projects/hathor-wallet-mobile/img/0001-dev-settings/safepal-network-presettings.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/hathor-wallet-mobile/img/0001-dev-settings/safepal-network-presettings.png -------------------------------------------------------------------------------- /projects/hathor-wallet-mobile/img/0001-dev-settings/safepal-settings-groups.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/hathor-wallet-mobile/img/0001-dev-settings/safepal-settings-groups.png -------------------------------------------------------------------------------- /projects/hathor-wallet-mobile/img/0001-dev-settings/trezor-danger-area.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/hathor-wallet-mobile/img/0001-dev-settings/trezor-danger-area.png -------------------------------------------------------------------------------- /projects/hathor-wallet-mobile/img/0001-dev-settings/trezor-option-description.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/hathor-wallet-mobile/img/0001-dev-settings/trezor-option-description.png -------------------------------------------------------------------------------- /projects/hathor-wallet-mobile/img/0001-dev-settings/trezor-option-warning.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/hathor-wallet-mobile/img/0001-dev-settings/trezor-option-warning.png
--------------------------------------------------------------------------------
/projects/hathor-wallet-mobile/security/0001-improved-pin-security-mode.md:
--------------------------------------------------------------------------------
1 | # Summary
2 | [summary]: #summary
3 |
4 | A new way to use biometrics on the mobile wallet, using a longer pin for security. This mode does not allow a fallback to the configured pin in case of too many failures.
5 |
6 | # Motivation
7 | [motivation]: #motivation
8 |
9 | When using biometrics, the pin is saved on the phone system, meaning the storage data is still encrypted with the 6-digit pin; using a longer and more random pin makes the storage data harder to crack.
10 |
11 | # Guide-level explanation
12 | [guide-level-explanation]: #guide-level-explanation
13 |
14 | When enabling biometrics on the mobile wallet we just set a flag, and the pin is always saved on the device keychain; this makes it so enabling and disabling biometrics does not require the user to input the pin.
15 | This approach makes it easier to set up biometrics and allows the usual pin screen to be used as a fallback.
16 |
17 | The new configuration would use the same setup for biometrics but use a longer and unguessable password (random 32 bytes, for instance), and it will trust the system prompt completely, so being locked out due to too many attempts will lock the user out of the wallet.
18 |
19 | ## Access control
20 |
21 | The dependency used to manage keychain passwords is [react-native-keychain](https://www.npmjs.com/package/react-native-keychain), and we use [setGenericPassword](https://oblador.github.io/react-native-keychain/docs/api/functions/setGenericPassword) and [getGenericPassword](https://oblador.github.io/react-native-keychain/docs/api/functions/getGenericPassword) to save and retrieve the pin from the keychain.
22 |
23 | The [access control](https://oblador.github.io/react-native-keychain/docs/api/enumerations/ACCESS_CONTROL) and [accessible](https://oblador.github.io/react-native-keychain/docs/api/enumerations/ACCESSIBLE) configurations are used with the `setGenericPassword` to manage how this password can be used and by whom.
24 |
25 | This new password should be saved with:
26 |
27 | - Access control **`BIOMETRY_ANY`**: Constraint to access an item with Touch ID for any enrolled fingers.
28 | - Accessible **`WHEN_UNLOCKED_THIS_DEVICE_ONLY`**: The data in the keychain item can be accessed only while the device is unlocked by the user. Items with this attribute do not migrate to a new device.
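For illustration, saving and retrieving the random password with the constraints above could look like the sketch below; the service/username handling is simplified and the function names are illustrative:

```javascript
import * as Keychain from 'react-native-keychain';

// Saves the new random password, gated by biometrics, on this device only.
async function saveBiometricPassword(password) {
  await Keychain.setGenericPassword('hathor-user', password, {
    accessControl: Keychain.ACCESS_CONTROL.BIOMETRY_ANY,
    accessible: Keychain.ACCESSIBLE.WHEN_UNLOCKED_THIS_DEVICE_ONLY,
  });
}

// Retrieving the password triggers the system biometric prompt.
async function getBiometricPassword() {
  const credentials = await Keychain.getGenericPassword();
  return credentials ? credentials.password : null;
}
```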
29 |
30 | ## Unlocking the wallet
31 |
32 | Since the access data is not encrypted with the pin, the pin cannot be used as a fallback if the user fails the system authentication.
33 | If the user cancels the system authentication prompt or fails too many times, we should show a warning along with the "reset wallet" button to allow users to start a new wallet.
34 |
35 | ## Feature flag
36 |
37 | We should create a feature flag to control this feature's rollout.
38 | This feature flag should first be enabled for trusted users; then we can incrementally roll it out to all users.
39 |
40 | ## Migration
41 |
42 | We should create a configuration migration: if a user currently uses biometry, the pin should be exchanged for a random password the next time they open the wallet.
43 | When the user inputs their pin and it is verified, we should:
44 |
45 | - Generate a random password.
46 | - Decrypt the access data with the pin and encrypt it with the new random password.
47 | - Save the random password and the pin on the keychain.
48 | - Mark this migration as complete.
49 |
50 | ## Disable biometry
51 |
52 | Since the access data is not encrypted with the pin anymore, we should get the pin from the keychain and encrypt the access data with it.
53 | Since the pin is also saved on the keychain, the user can only do this after passing the system biometry check.
54 |
55 | # Future possibilities
56 | [future-possibilities]: #future-possibilities
57 |
58 | ## Passcode or biometry
59 |
60 | If we could give users this option, they could choose which authentication type they want (e.g. biometry, passcode, etc.).
61 |
62 | ## 2 levels of encryption
63 |
64 | We could have the keys of the access data encrypted with the pin and the access data object encrypted with the random password.
65 | This would make the wallet only open with a biometry check, while sending transactions would require the biometry or passcode.
66 |
--------------------------------------------------------------------------------
/projects/reliable-integration/0001-high-level-design.md:
--------------------------------------------------------------------------------
1 | _This design has been migrated from [this issue description](https://github.com/HathorNetwork/hathor-core/issues/405)._
2 |
3 | ## Problem
4 |
5 | Currently, applications that want to interact with the full-node must write their own sync algorithm and handle all use cases (like reorganizations). This algorithm can become very complex and consume several days of development.
6 |
7 | There are two alternatives when applications want to interact with the full node:
8 |
9 | - Initialize a bidirectional communication via websocket.
10 | - Call the REST APIs.
11 |
12 | However, the main concerns with these approaches are:
13 |
14 | - When a reorganization happens, the applications must know how to query all the affected txs and update them on their databases.
15 | - There is no guarantee on the order the events happened.
16 | - When the websocket is disconnected, events might be lost and a resync must be executed.
17 |
18 | ## Solution
19 |
20 | To tackle the problems presented above, we must implement a built-in event management system, where all events will be sent in the order they occurred. This system has the following requirements:
21 |
22 | 1. Detect events that are important to applications (see `Event Types` below).
23 | 1. Persist each event and give it a unique incremental ID.
24 | 1. Give users a REST API and a WebSocket connection to query for events.
25 |
26 | To set up this system, the user must provide, during the full-node initialization, the `--enable-event-queue` flag.
27 |
28 | Due to the necessary flags and the events that can be emitted, we will provide a document on how to understand this new mechanism.
29 |
30 | This project, however, does **NOT** intend to eliminate the necessity of a sync algorithm, but to make it much simpler.
31 |
32 | ## Out of Scope
33 |
34 | These features will not be part of the first phase of this project:
35 |
36 | - Sync pause/resume
37 |   - An API to manipulate the sync algorithm.
38 | - Event filter
39 |   - Give users the choice to receive only a subset of events, according to some criteria.
40 | - The `--flush-events` flag. By default, all events are retained. In the future, the user could provide a `--flush-events` flag to enable the flushing of events after each event is sent to the client.
41 |
42 | ## Flow
43 |
44 | ![Event Flow drawio](./0001-images/event_flow.png)
45 |
46 | ## API
47 |
48 | To retrieve the events, an API will be provided:
49 |
50 | `GET /event?last_ack_event_id=:last_ack_event_id&size=:size`
51 |
52 | The maximum value for `size` will be 1000, to prevent DoS attacks, and the default value will be 100. Both should be configurable via settings.
53 |
54 | If `last_ack_event_id` is not provided, the first event on the database will be returned.
55 |
56 | The result will be an array of events. Each entry will have the data format described in the `Data Format` section below. Also, a `latest_event_id` field will be returned, so the client will know how many events ahead are already available. An illustrative request/response is shown below.
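A hypothetical request/response; the field values and the exact response envelope (e.g. the `events` array name) are illustrative, not final:

```
GET /event?last_ack_event_id=123&size=2

{
  "latest_event_id": 1345,
  "events": [
    { "peer_id": "...", "id": 124, "timestamp": 1658868080.0, "type": "NEW_VERTEX_ACCEPTED", "group_id": null, "data": { ... } },
    { "peer_id": "...", "id": 125, "timestamp": 1658868081.2, "type": "VERTEX_METADATA_CHANGED", "group_id": null, "data": { ... } }
  ]
}
```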
57 |
58 | ## WebSocket
59 |
60 | In addition to the REST API, a similar WebSocket API will be available. The client will start the communication also providing the `last_ack_event_id` and a `window_size`. The server will keep the connection open and start streaming events to the client.
61 |
62 | Each message will be a single event. The server will keep sending events while there are events available and there is room in the `window_size`. The client can send an ACK message at any time to update the `ack_event_id` and/or the `window_size`, effectively having full control of the flow of events. Each entry will have the data format described in the `Data Format` section below. Also, a `latest_event_id` field will be returned, so the client will know how many events ahead are already available.
63 |
64 | ## Storage
65 |
66 | To avoid adding new dependencies, RocksDB, which is already used for `tx_storage`, will be used to persist the events. RocksDB is a key-value store. The key will be the event id, and the value will be the whole event. This way, querying a range of keys is cheap (a seek is O(log n) and each next is O(1)).
67 |
68 | To serialize the data, we will transform the whole event into a JSON object and store it as bytes on RocksDB. A simple `json.decode` will be sufficient to retrieve the encoded data.
69 |
70 | ## Retention Policy
71 |
72 | By default, all events will be stored, given this feature is enabled via the `--enable-event-queue` flag. In future phases of the project, a `--flush-events` flag will be implemented to control the retention policy.
73 |
74 | ## Data Format
75 |
76 | All events will have the following structure:
77 |
78 | ```
79 | {
80 |   peer_id: str, // Full node UID, because different full nodes can have different sequences of events
81 |   id: NonNegativeInt, // Event order
82 |   timestamp: float, // Timestamp in which the event was emitted. This will follow the unix_timestamp format
83 |   type: HathorEvents, // One of the event types of the HathorEvents enum
84 |   group_id: Optional[NonNegativeInt], // Used to link events. For example, many TX_METADATA_CHANGED will have the same group_id when they belong to the same reorg process
85 |   data: EventData, // Variable class for each event type. Check the Event Types section below
86 | }
87 | ```
88 |
89 | ## Data types:
90 |
91 | ### TxInput
92 |
93 | ```
94 | {
95 |   tx_id: str,
96 |   index: int,
97 |   token_data: int
98 | }
99 | ```
100 |
101 | ### TxOutput
102 |
103 | ```
104 | {
105 |   value: int,
106 |   script: str,
107 |   token_data: int
108 | }
109 | ```
110 |
111 | ### TxData
112 |
113 | ```
114 | {
115 |   hash: str,
116 |   nonce: int,
117 |   timestamp: int,
118 |   version: int,
119 |   weight: float,
120 |   inputs: List[TxInput],
121 |   outputs: List[TxOutput],
122 |   parents: List[str],
123 |   tokens: List[str],
124 |   token_name: Optional[str],
125 |   token_symbol: Optional[str],
126 |   metadata: TxMetadata
127 | }
128 | ```
129 |
130 | ### SpentOutputs
131 |
132 | ```
133 | {
134 |   spent_output: List[SpentOutput]
135 | }
136 | ```
137 |
138 | ### SpentOutput
139 |
140 | ```
141 | {
142 |   index: int,
143 |   tx_ids: List[str]
144 | }
145 | ```
146 |
147 | ### TxMetadata
148 |
149 | ```
150 | {
151 |   hash: str,
152 |   spent_outputs: List[SpentOutputs],
153 |   conflict_with: List[str],
154 |   voided_by: List[str],
155 |   received_by: List[int],
156 |   children: List[str],
157 |   twins: List[str],
158 |   accumulated_weight: float,
159 |   score: float,
160 |   first_block: Optional[str],
161 |   height: int,
162 |   validation: str
163 | }
164 | ```
165 |
166 | ### ReorgData
167 |
168 | ```
169 | {
170 |   reorg_size: int,
171 |   previous_best_block: str, // hash of the block. At the time of this event, this block won't be part of the best blockchain anymore
172 |   new_best_block: str, // hash of the block
173 |   common_block: str // hash of the block
174 | }
175 | ```
176 |
177 | ### EmptyData
178 |
179 | ```
180 | {}
181 | ```
182 |
183 | ### EventData
184 |
185 | One of `TxData`, `ReorgData`, or `EmptyData`, depending on the event type.
186 |
187 | ### HathorEvents
188 |
189 | One of the Event Types described in the section below.
190 |
191 | ## Event Types
192 |
193 | Events described here are a subset of all events in the `HathorEvents` enum. The event manager only subscribes to and handles the ones listed below.
194 |
195 | - `LOAD_STARTED`
196 | - `LOAD_FINISHED`
197 | - `NEW_VERTEX_ACCEPTED`
198 | - `REORG_STARTED`
199 | - `REORG_FINISHED`
200 | - `VERTEX_METADATA_CHANGED`
201 |
202 | ### LOAD_STARTED
203 |
204 | It will be triggered when the full-node is initializing and is reading locally from the database, at the same time as the `MANAGER_ON_START` Hathor event. It should have an empty body.
205 |
206 | ### LOAD_FINISHED
207 |
208 | It will be triggered when the full-node is ready to establish new connections, sync, and exchange transactions, at the same time as when the manager state changes to `READY` ([here](https://github.com/HathorNetwork/hathor-core/blob/85206cb631b609a5680e276e4db8cffbb418eb88/hathor/manager.py#L652)). `EmptyData` is sent.
209 |
210 | ### NEW_VERTEX_ACCEPTED
211 |
212 | It will be triggered when the transaction is synced and the consensus algorithm immediately identifies it as an accepted TX that can be placed in the mempool. `TxData` is going to be sent. We will reuse the `NETWORK_NEW_TX_ACCEPTED` Hathor Event that is already triggered. This event will NOT be emitted for partially validated transactions.
213 |
214 | ### REORG_STARTED
215 |
216 | Indicates that the best chain has changed. It will trigger the necessary `TX_METADATA_CHANGED` and `VERTEX_METADATA_CHANGED` events to void/execute the affected vertices. The `ReorgData` datatype is going to be sent.
217 |
218 | ### REORG_FINISHED
219 |
220 | It will be triggered only if a `REORG_STARTED` has been triggered previously, indicating that the reorg (i.e. a new best chain was found) was completed and all the necessary metadata updates were included between `REORG_STARTED` and this event.
221 |
222 | ### VERTEX_METADATA_CHANGED
223 |
224 | Initially, we will trigger this event for two use cases:
225 |
226 | - When a best block is found. All transactions that were on the mempool and were confirmed by the new block will change their `first_block` metadata, which will be propagated through this event.
227 | - When a reorg happens. This can trigger multiple transactions and blocks being changed to voided/executed. This will be detected in the `mark_as_voided` functions inside the `consensus.py` file (as long as the consensus context finds that a reorg is happening).
228 |
229 | Data type `TxData` is going to be sent. Only the new transaction information is going to be sent, and it's the client's responsibility to react accordingly.
230 |
231 | ## Scenarios
232 |
233 | ### Single chain
234 |
235 | Two transactions are accepted into the mempool, and a block on the best chain is found to confirm those transactions.
236 |
237 | 1. `NEW_VERTEX_ACCEPTED` (Tx 1)
238 | 1. `NEW_VERTEX_ACCEPTED` (Tx 2)
239 | 1. `NEW_VERTEX_ACCEPTED` (Block 1)
240 | 1. `VERTEX_METADATA_CHANGED` (Changing the `first_block` of `Tx 1` to `Block 1`)
241 | 1. `VERTEX_METADATA_CHANGED` (Changing the `first_block` of `Tx 2` to `Block 1`)
242 |
243 | ### Best chain with side chains
244 |
245 | Two transactions are accepted into the mempool. A block on the best chain is found to confirm those transactions, but a new block on a side chain arrives and becomes the best chain. The transactions are confirmed by this new block.
246 |
247 | 1. `NEW_VERTEX_ACCEPTED` (Tx 1)
248 | 1. `NEW_VERTEX_ACCEPTED` (Tx 2)
249 | 1. `NEW_VERTEX_ACCEPTED` (Block 1)
250 | 1. `VERTEX_METADATA_CHANGED` (Changing the `first_block` of `Tx 2` to `Block 1`)
251 | 1. `VERTEX_METADATA_CHANGED` (Changing the `first_block` of `Tx 1` to `Block 1`)
252 | 1. `REORG_STARTED`
253 | 1. `NEW_VERTEX_ACCEPTED` (Block 2)
254 | 1. `VERTEX_METADATA_CHANGED` (Changing the `voided_by` of `Block 1`)
255 | 1. `VERTEX_METADATA_CHANGED` (Changing the `first_block` of `Tx 1` to `Block 2`)
256 | 1. `VERTEX_METADATA_CHANGED` (Changing the `first_block` of `Tx 2` to `Block 2`)
257 | 1. `REORG_FINISHED`
258 |
259 | ## Integration Tests
260 |
261 | We will provide test cases with sequences of events for each scenario. This will help applications integrate with this new mechanism.
262 | 263 | 264 | ## Task Breakdown and Effort 265 | 266 | - [x] Proof of Concept (2 dev-days) 267 | - [x] Low-level design (2 dev-days) 268 | - [x] Change manager to handle the new flag (2 dev-days) 269 | - [x] Make the REORG event be emitted during consensus (2 dev-days) 270 | - [x] Emit event when tx/block is voided (2 dev-days) 271 | - [x] Emit event for metadata changes (2 dev-days) 272 | - [x] Create RocksDB event column family (2 dev-days) 273 | - [x] Implement an event management layer, where events will be detected and persisted on RocksDB (3 dev-days) 274 | - [x] Implement event persistence layer on RocksDB (3 dev-days) 275 | - [x] Implement WebSocket API (2 dev-days) 276 | - [x] Implement `GET /event` REST API (1 dev-day) 277 | - [x] Implement `--skip-load-events` flag (2 dev-days) 278 | - [ ] Doc with user instructions (1.5 dev-days) 279 | - [ ] Build testing cases for integrations (2 dev-days) 280 | 281 | **Total: 28.5 dev-days** 282 | -------------------------------------------------------------------------------- /projects/reliable-integration/0001-images/event_flow.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/reliable-integration/0001-images/event_flow.png -------------------------------------------------------------------------------- /projects/reliable-integration/0002-images/consensus_flow.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/reliable-integration/0002-images/consensus_flow.png -------------------------------------------------------------------------------- /projects/reliable-integration/0002-images/full_node_cycle.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/projects/reliable-integration/0002-images/full_node_cycle.png -------------------------------------------------------------------------------- /projects/wallet-service/nano-contracts/0001-nano-contracts-support.md: -------------------------------------------------------------------------------- 1 | - Feature Name: nano-contracts-support 2 | - Start Date: 2025-01-02 3 | - Author: @andreabadesso 4 | 5 | # Summary 6 | 7 | This document describes the integration of nano-contracts support into the wallet-service, enabling tracking and indexing of nano-contract transactions, including caller information and providing proxy endpoints for fullnode interaction. 8 | 9 | # Motivation 10 | 11 | The Hathor Network is introducing nano-contracts. To maintain complete transaction history for wallets and addresses, we need to adapt our indexing service to handle these new transactions and provide appropriate APIs for interacting with nano-contracts. 12 | 13 | # Guide-level explanation 14 | 15 | Currently, the wallet-service syncs with the fullnode through reliable integrations, which sends events in the exact order they occur. The daemon processes these events through a state machine (SyncMachine) that handles different types of events like new transactions, metadata changes, and voided transactions. 16 | 17 | For nano-contracts, the main change is tracking the caller's relationship with the transaction, especially when they are not directly involved in token transfers. 
When a nano-contract transaction is received (identified by its version), we'll:
18 |
19 | 1. Process normal inputs/outputs as usual
20 | 2. Check if the caller is involved in inputs/outputs
21 | 3. If not, explicitly add the transaction to their history
22 | 4. Forward any contract-specific API calls to the fullnode
23 |
24 | This means that if address Z is the caller in a contract call that moves tokens from X to Y (e.g. a deposit action), all three addresses will see this transaction in their history:
25 | - X and Y through normal UTXO tracking
26 | - Z through explicit caller tracking
27 |
28 | The existing transaction history endpoints will work with nano-contracts without changes:
29 | - `GET /wallet/history` - Returns all transactions involving the wallet, including:
30 |   * Transactions where the wallet's addresses are inputs/outputs
31 |   * Transactions where the wallet's addresses are callers
32 | - `GET /wallet/transactions/{txId}` - Returns details of a specific transaction
33 |   * Works for both regular and nano-contract transactions
34 |   * Includes all token movements and metadata
35 |
36 | From the user's perspective, they will be able to:
37 | - See all their contract interactions in their history, whether they moved tokens or just called the contract
38 | - Create and execute contracts through the wallet-service API
39 | - Query contract state through proxied fullnode APIs
40 |
41 | # Reference-level explanation
42 |
43 | ## Database Changes
44 |
45 | No schema changes are required. The existing tables (`address_tx_history` and `wallet_tx_history`) will be used to track contract interactions, but we need to explicitly add entries for callers when they are not involved in inputs/outputs.
46 |
47 | Important implementation notes:
48 | 1. The `updateAddressTablesWithTx` function needs to be modified to handle zero balances efficiently:
49 |    - Keep updating the `address` table for transaction count
50 |    - Skip `address_balance` updates when all values (balance and authorities) are zero
51 |    - Always add entries to `address_tx_history`, even for zero balances
52 |
53 | 2. The `voided` column in history tables applies to callers as well - if a nano-contract transaction is voided, all history entries (including the caller's) must be marked as voided
54 |
55 | ## Daemon Changes
56 |
57 | ### SyncMachine Updates
58 | The `handleVertexAccepted` service will be enhanced to handle nano-contract transactions:
59 |
60 | 1. Detect nano-contract transactions (by version)
61 | 2. Extract the caller information from the transaction metadata
62 | 3.
If the caller is not involved in any inputs or outputs: 63 | - Use `updateAddressTablesWithTx` with a zero balance to add the transaction to their history 64 | - The function will handle incrementing transaction count and adding history entries 65 | 66 | Example flow: 67 | ```typescript 68 | // In handleVertexAccepted 69 | if (isNanoContractTx(metadata)) { 70 | const { caller } = metadata; 71 | const addresses = new Set([ 72 | ...inputs.map(i => i.address), 73 | ...outputs.map(o => o.address) 74 | ]); 75 | 76 | // If caller is not in inputs or outputs, add to history 77 | if (!addresses.has(caller)) { 78 | // Create a map with zero balance for the HTR token 79 | const zeroBalanceMap = { 80 | [caller]: TokenBalanceMap.fromStringMap({ 81 | '00': { unlocked: 0, locked: 0 } 82 | }) 83 | }; 84 | 85 | await updateAddressTablesWithTx(mysql, txId, timestamp, zeroBalanceMap); 86 | } 87 | } 88 | ``` 89 | 90 | Example scenarios: 91 | ``` 92 | Scenario 1: X (caller) sends to Y 93 | - X appears in history (input) 94 | - Y appears in history (output) 95 | 96 | Scenario 2: Z (caller) triggers X to send to Y 97 | - Z appears in history (zero balance entry via updateAddressTablesWithTx) 98 | - X appears in history (input) 99 | - Y appears in history (output) 100 | 101 | Scenario 3: Y (caller) triggers X to send to Y 102 | - X appears in history (input) 103 | - Y appears in history (output) 104 | ``` 105 | 106 | ## Wallet Service Changes 107 | 108 | ### Proxy Endpoints 109 | 110 | The wallet-service will transparently proxy all nano-contract related requests to the fullnode instead of reimplementing the state tracking functionality. This decision was made because: 111 | 1. The fullnode already maintains and optimizes contract state tracking 112 | 2. The fullnode provides APIs to query state at specific blocks 113 | 3. Reimplementing state tracking would require significant complexity and storage overhead 114 | 115 | Implementation: 116 | ``` 117 | /api/nano_contract/* -> fullnode's /v1a/nano_contract/* 118 | ``` 119 | 120 | All nano contract endpoints will be proxied transparently, maintaining the same interface as the fullnode. The wallet-service will: 121 | - Forward all requests to the corresponding fullnode endpoint 122 | - Preserve request parameters and body 123 | - Handle errors appropriately (including 404s) 124 | - Maintain proper authentication and rate limiting 125 | 126 | # Drawbacks 127 | 128 | 1. Additional storage requirements for tracking caller information 129 | 2. Increased complexity in transaction processing 130 | 3. Need for careful handling of contract-related errors 131 | 132 | # Rationale and alternatives 133 | 134 | The proposed design: 135 | - Minimizes changes to existing tables 136 | - Reuses existing sync mechanisms 137 | - Maintains backward compatibility 138 | - Provides clear separation between contract and regular transactions 139 | 140 | Alternative approaches considered: 141 | 142 | 1. 
Storing nano-contract state locally in wallet-service
143 |    - This would require new tables to store:
144 |      * Contract state at each block height
145 |      * Contract execution history
146 |      * Blueprint information and metadata
147 |    - Drawbacks:
148 |      * Significant storage overhead (storing state for every contract)
149 |      * Complex state management (handling reorgs, voided transactions)
150 |      * Need to replicate the fullnode's state validation logic
151 |      * Risk of state divergence between fullnode and wallet-service
152 |      * Additional indexing overhead for state queries
153 |    - Benefits:
154 |      * Potentially faster queries for frequently accessed contracts
155 |      * Ability to build custom indexes for specific contract types
156 |    - Rejected because:
157 |      * The storage and complexity costs outweigh the benefits
158 |      * The fullnode already optimizes state storage and access
159 |      * State queries are not as frequent as balance/history queries
160 |
161 | 2. Separate tables for nano-contract transactions
162 |    - Would require duplicating transaction data
163 |    - Increases complexity of history queries
164 |    - Makes it harder to maintain consistency
165 |    - Rejected due to unnecessary data duplication
166 |
--------------------------------------------------------------------------------
/projects/web-wallet/wallet-connect/0002-caip-2.md:
--------------------------------------------------------------------------------
1 | ---
2 | namespace-identifier: hathor
3 | title: Hathor Network Namespace
4 | author: André Abadesso (@andreabadesso)
5 | discussions-to:
6 | status: Draft
7 | type: Standard
8 | created: 2023-03-17
9 | requires: CAIP-2
10 | replaces (*optional):
11 | ---
12 |
13 |
14 | ## Abstract
15 |
16 | In CAIP-2, a general blockchain identification scheme is defined. This is the implementation of [CAIP-2](https://github.com/ChainAgnostic/CAIPs/blob/master/CAIPs/caip-2.md) for the [Hathor Network](https://hathor.network/).
17 |
18 | ## Hathor Namespace
19 |
20 | The namespace "hathor" refers to the wider Hathor Network ecosystem.
21 |
22 | ## Reference Definition
23 |
24 | The reference addresses Hathor's two current production networks, `mainnet` and `testnet`, and also private networks, through the reference `privatenet`.
25 |
26 | ## Rationale
27 |
28 | The namespace `hathor` refers to the [Hathor Network](https://hathor.network/) ecosystem, including the `mainnet`, the currently active `testnet`, and any other private network under the reference `privatenet`.
29 |
30 | `mainnet` - Main network of [Hathor Network](https://hathor.network)
31 | `testnet` - Currently active testnet of Hathor Network, which can be checked [here](https://hathor.network/testnet/)
32 | `privatenet` - Private networks
33 |
34 | ## Resolution Method
35 |
36 | One can identify the current network by observing the addresses: Hathor's **mainnet** and **testnet** addresses follow a pattern. All mainnet addresses start with the letter H (P2PKH address) or h (P2SH address), while testnet addresses start with either W (P2PKH address) or w (P2SH address). Private nets might follow different rules.
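As an illustration, a client could resolve the CAIP-2 reference from an address prefix like the sketch below; private networks cannot be resolved this way, and the helper name is illustrative:

```javascript
// Maps an address prefix to a CAIP-2 chain id, per the patterns above.
function chainIdFromAddress(address) {
  const first = address.charAt(0);
  if (first === 'H' || first === 'h') return 'hathor:mainnet';
  if (first === 'W' || first === 'w') return 'hathor:testnet';
  return null; // unknown: possibly a private network
}
```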
37 |
38 | One can also detect which network a given fullnode is running on by checking the `/version` API:
39 |
40 | ```bash
41 | curl -X GET https://<fullnode-host>/v1a/version
42 | ```
43 |
44 | Response:
45 |
46 | ```json
47 | {
48 |   "version": "0.52.1",
49 |   "network": "mainnet",
50 |   "min_weight": 14,
51 |   "min_tx_weight": 14,
52 |   "min_tx_weight_coefficient": 1.6,
53 |   "min_tx_weight_k": 100,
54 |   "token_deposit_percentage": 0.01,
55 |   "reward_spend_min_blocks": 300,
56 |   "max_number_inputs": 255,
57 |   "max_number_outputs": 255
58 | }
59 | ```
60 |
61 |
62 | ## Test Cases
63 |
64 | ```
65 | # Hathor mainnet
66 | hathor:mainnet
67 |
68 | # Hathor testnet
69 | hathor:testnet
70 |
71 | # Hathor privatenet
72 | hathor:privatenet
73 | ```
74 |
75 | ## References
76 | * [Hathor Network](https://hathor.network/) - Hathor Website
77 | * [Hathor Resources](https://hathor.network/resources/) - Various useful resources for working with Hathor Network
78 | * [Hathor docs](https://docs.hathor.network) - Official technical documentation of Hathor.
79 |
80 | ## Copyright
81 |
82 | Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
83 |
--------------------------------------------------------------------------------
/projects/web-wallet/wallet-connect/0003-caip-10.md:
--------------------------------------------------------------------------------
1 | ---
2 | namespace-identifier: hathor-caip10
3 | title: Hathor Network Namespace - Addresses
4 | author: André Abadesso (@andreabadesso)
5 | discussions-to:
6 | status: Draft
7 | type: Standard
8 | created: 2023-03-17
9 | requires: ["CAIP-2", "CAIP-10"]
10 | ---
11 |
12 | # CAIP-10
13 |
14 | *For context, see the [CAIP-10](https://github.com/ChainAgnostic/CAIPs/blob/master/CAIPs/caip-10.md) specification.*
15 |
16 | ## Abstract
17 |
18 | In CAIP-10, an account identification scheme is defined. This is the implementation of CAIP-10 for [Hathor Network](https://hathor.network).
19 |
20 | ## Rationale
21 |
22 | Hathor Network is a UTXO chain and wallets usually use a different address for each transaction, so a decision was made to use the first address, on the BIP32 `m/44'/280'/0'/0/0` path, as a unique identifier of a wallet's account.
23 |
24 |
25 | ## Syntax
26 |
27 | | Component | Description |
28 | |--------------------|--------------------------------------------------------|
29 | | caip10-like address| `namespace` + `":"` + `chainId` + `":"` + `address` |
30 | | namespace | hathor |
31 | | chain Id | One of (`mainnet`, `testnet`, `privatenet`) as defined on the [CAIP-2 document](./0002-caip-2.md) |
32 | | address | Hathor address represented as a Base58 string |
33 |
34 | **Addresses Pattern:** Hathor's **mainnet** and **testnet** addresses follow a pattern. All mainnet addresses start with the letter H (P2PKH address) or h (P2SH address), while testnet addresses start with either W (P2PKH address) or w (P2SH address).
35 |
36 |
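For illustration, building and parsing an account id according to this syntax could look like the sketch below (helper names are illustrative):

```javascript
// Builds a CAIP-10 account id from a chain id and a Base58 address.
function buildAccountId(chainId, address) {
  return `hathor:${chainId}:${address}`;
}

// Splits an account id back into its components.
function parseAccountId(accountId) {
  const [namespace, chainId, address] = accountId.split(':');
  if (namespace !== 'hathor') {
    throw new Error('Not a Hathor account id');
  }
  return { namespace, chainId, address };
}

// buildAccountId('mainnet', 'HSXqoFjCcUG7tCfdniujU5VTeTuF2e7g5P')
// => 'hathor:mainnet:HSXqoFjCcUG7tCfdniujU5VTeTuF2e7g5P'
```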
37 | ### Backwards Compatibility
38 |
39 | Not applicable.
40 |
41 | ## Test Cases
42 |
43 | ```
44 | # Hathor mainnet
45 | hathor:mainnet:HSXqoFjCcUG7tCfdniujU5VTeTuF2e7g5P
46 |
47 | # Hathor testnet
48 | hathor:testnet:WQDn4KwWrP3dacWbn35iUgea4GPFzV4s35
49 | ```
50 |
51 | ## References
52 |
53 | - [Hathor Network](https://hathor.network/)
54 | - [Hathor Addresses](https://docs.hathor.network/explanations/blockchain-foundations/addresses)
55 | - [CAIP-2](https://github.com/ChainAgnostic/CAIPs/blob/master/CAIPs/caip-2.md)
56 | - [CAIP-10](https://github.com/ChainAgnostic/CAIPs/blob/master/CAIPs/caip-10.md)
57 |
58 |
59 | ## Rights
60 |
61 | Copyright and related rights waived via CC0.
62 |
--------------------------------------------------------------------------------
/text/0001-rfc-process.md:
--------------------------------------------------------------------------------
1 | - Start Date: 2019-07-24
2 |
3 | # Disclosure
4 | [disclosure]: #disclosure
5 |
6 | This whole document is an adaptation of the [rust-rfc-process].
7 |
8 | # Summary
9 | [summary]: #summary
10 |
11 | The "RFC" (request for comments) process is intended to provide a consistent and
12 | controlled path for new features to enter the platform and standard libraries,
13 | so that all stakeholders can be confident about the direction the platform is
14 | evolving in.
15 |
16 | # Motivation
17 | [motivation]: #motivation
18 |
19 | The freewheeling way that we add new features to Hathor has been good for early
20 | development, but for Hathor to become a mature platform we need to develop some
21 | more self-discipline when it comes to changing the system. This is a proposal
22 | for a more principled RFC process to make it a more integral part of the overall
23 | development process, and one that is followed consistently to introduce features
24 | to Hathor.
25 |
26 | # Detailed design
27 | [detailed-design]: #detailed-design
28 |
29 | Many changes, including bug fixes and documentation improvements, can be
30 | implemented and reviewed via the normal GitHub pull request workflow.
31 |
32 | Some changes though are "substantial", and we ask that these be put through a
33 | bit of a design process and produce a consensus among the Hathor community and
34 | the [core team].
35 |
36 | ## When you need to follow this process
37 | [when-you-need-to-follow-this-process]: #when-you-need-to-follow-this-process
38 |
39 | You need to follow this process if you intend to make "substantial" changes to
40 | the Hathor distribution. What constitutes a "substantial" change is evolving
41 | based on community norms, but may include the following.
42 |
43 | - Any change that affects the transaction or block's serialization format.
44 | - Changes to the validation of transactions and blocks.
45 | - Changes to the opcodes of the script language.
46 | - Changes that affect the consensus of the network, i.e., which transactions
47 |   are voided and which are executed.
48 | - In general, any feature that requires a soft or hard fork.
49 |
50 | Some changes do not require an RFC:
51 |
52 | - Rephrasing, reorganizing, refactoring, or otherwise "changing shape does not
53 |   change meaning".
54 | - Additions that strictly improve objective, numerical quality criteria (warning
55 |   removal, speedup, better platform coverage, more parallelism, trap more
56 |   errors, etc.)
57 | - Additions only likely to be _noticed by_ other developers-of-hathor, invisible
58 |   to users-of-hathor.
59 | 60 | If you submit a pull request to implement a new feature without going through 61 | the RFC process, it may be closed with a polite request to submit an RFC first. 62 | 63 | ## What the process is 64 | [what-the-process-is]: #what-the-process-is 65 | 66 | In short, to get a major feature added to Hathor, one must first get the RFC 67 | merged into the RFC repo as a markdown file. At that point the RFC is 'active' 68 | and may be implemented with the goal of eventual inclusion into Hathor. 69 | 70 | - Fork the RFC repo https://github.com/HathorNetwork/rfcs 71 | - Copy `0000-template.md` to `text/0000-my-feature.md` (where 'my-feature' is 72 | descriptive; don't assign an RFC number yet). 73 | - Fill in the RFC. 74 | - Submit a pull request. The pull request is the time to get review of the 75 | design from the larger community. 76 | - Build consensus and integrate feedback. RFCs that have broad support are much 77 | more likely to make progress than those that don't receive any comments. 78 | 79 | Eventually, somebody on the [core team] will either accept the RFC by merging 80 | the pull request, at which point the RFC is 'active', or reject it by closing 81 | the pull request. 82 | 83 | Whoever merges the RFC should do the following: 84 | 85 | - Assign an id, using the PR number of the RFC pull request. (If the RFC has 86 | multiple pull requests associated with it, choose one PR number, preferably 87 | the minimal one.) 88 | - Add the file in the `text/` directory. 89 | - Create a corresponding issue on the 90 | [Hathor repo](https://github.com/HathorNetwork/hathor-python) 91 | - Fill in the remaining metadata in the RFC header, including links for the 92 | original pull request(s) and the newly created Hathor issue. 93 | - Commit everything. 94 | 95 | Once an RFC becomes active, authors may implement it and submit the feature 96 | as a pull request to the Hathor repo. An 'active' RFC is not a rubber stamp, and in 97 | particular still does not mean the feature will ultimately be merged; it does 98 | mean that in principle all the major stakeholders have agreed to the feature and 99 | are amenable to merging it. 100 | 101 | Modifications to active RFCs can be done in follow-up PRs. An RFC that makes it 102 | through the entire process to implementation is considered 'complete' and is 103 | removed from the [Active RFC List]; an RFC that fails after becoming active is 104 | 'inactive' and moves to the 'inactive' folder. 105 | 106 | # Alternatives 107 | [alternatives]: #alternatives 108 | 109 | Retain the current informal RFC process. The newly proposed RFC process is 110 | designed to improve over the informal process in the following ways: 111 | 112 | - Discourage unactionable or vague RFCs 113 | - Ensure that all serious RFCs are considered equally 114 | - Give confidence to those with a stake in Hathor's development that they 115 | understand why new features are being merged 116 | 117 | As an alternative alternative, we could adopt an even stricter RFC process than 118 | the one proposed here. If desired, we should likely look to Python's [PEP] 119 | process for inspiration. 120 | 121 | # Unresolved questions 122 | [unresolved-questions]: #unresolved-questions 123 | 124 | 1. Does this RFC strike a favorable balance between formality and agility? 125 | 2. Does this RFC successfully address the aforementioned issues with the current 126 | informal RFC process? 127 | 3. Should we retain rejected RFCs in the archive? 
128 | 129 | [core team]: https://hathor.network/team/ 130 | [PEP]: http://legacy.python.org/dev/peps/pep-0001/ 131 | [rust-rfc-process]: https://github.com/rust-lang/rfcs/blob/master/text/0002-rfc-process.md 132 | -------------------------------------------------------------------------------- /text/0011-token-deposit.md: -------------------------------------------------------------------------------- 1 | - Feature Name: token_deposit 2 | - Start Date: 2019-07-04 3 | - RFC PR: https://gitlab.com/HathorNetwork/rfcs/merge_requests/11 4 | - Hathor Issue: (leave this empty) 5 | - Author: Marcelo Salhab Brogliato 6 | 7 | # Summary 8 | [summary]: #summary 9 | 10 | A transaction that mints `X` tokens consumes `P` percent of `X` in HTR, while a transaction that melts `X` tokens yields `P` percent of `X` in HTR. 11 | 12 | As of this writing, `P = 0.01` (i.e., 1%), which means that a deposit of 1 (i.e., 0.01 HTR) must be made when minting 100 tokens. This deposit will be withdrawn if the tokens are melted. 13 | 14 | # Motivation 15 | [motivation]: #motivation 16 | 17 | To keep the Hathor network up and running, we need to give incentives for miners to join our network and mine blocks. Through the deposit, the more users mint new tokens, the higher the demand for the native token HTR, which is a force to increase the price, which in turn brings more miners to the network. 18 | 19 | Instead of charging a fee to create a new token, we require the token creator to deposit some HTR that can be withdrawn if the tokens are melted later. So, when token creators buy HTR to mint their tokens, they are buying a long-term asset that may be used many times. It is analogous to buying land and building your company on it. Later on, you can decide to close your company and use your land for another endeavor (or sell it). 20 | 21 | # Guide-level explanation 22 | [guide-level-explanation]: #guide-level-explanation 23 | 24 | After a token is created, the token creator can perform two operations (among others): mint and melt tokens. 25 | 26 | ## Mint operation 27 | 28 | When minting X tokens, a deposit of p percent of X must be made. This deposit is made by an unbalanced set of inputs and outputs that "disappears with" HTR. 29 | 30 | A common transaction that mints tokens has the following structure: 31 | 32 | 1. Inputs 33 | 1. A mint authority 34 | 2. An input with `Y` HTR 35 | 2. Outputs 36 | 1. A mint authority (optional) 37 | 2. An output with `X` tokens being minted 38 | 3. A change output with `Z` HTR, where `Y - Z = ceil(p * X)` 39 | 40 | As you can see, `Z < Y`, which means some HTR will be deposited in this transaction. 41 | 42 | 43 | ## Melt operation 44 | 45 | When melting X tokens, a withdrawal of p percent of X is made. This withdrawal is made by an unbalanced set of inputs and outputs that "creates" HTR. 46 | 47 | A common transaction that melts tokens has the following structure: 48 | 49 | 1. Inputs 50 | 1. A melt authority 51 | 2. An input with X tokens 52 | 2. Outputs 53 | 1. A melt authority (optional) 54 | 2. An output with Y tokens, where X - Y are being melted 55 | 3. An output with A HTR, where `A = floor(p * (X - Y))` 56 | 57 | As you can see, `A` HTR will be withdrawn from the deposit that was made before. 58 | 59 | # Reference-level explanation 60 | [reference-level-explanation]: #reference-level-explanation 61 | 62 | This feature is implemented in the transaction validation step, i.e., the transaction is valid if, and only if, the deposit/withdrawal is correctly made. 
63 | 64 | One step of the transaction validation is to check the balance of each token in the transaction. Let `balance[T]` be the balance of the token `T`, i.e., the outputs minus the inputs. Then, one of three situations is possible: (i) a regular transaction where `balance[T] == 0`, (ii) a minting operation where `balance[T] > 0`, or (iii) a melting operation where `balance[T] < 0`. 65 | 66 | To enforce the deposit/withdrawal, the following rule is applied: 67 | 68 | ``` 69 | p = 0.01 70 | deposit = sum(ceil(p * balance[T]) for T in tx if T != HTR and balance[T] > 0) 71 | withdraw = sum(floor(p * -balance[T]) for T in tx if T != HTR and balance[T] < 0) 72 | valid = (balance[HTR] == withdraw - deposit) 73 | ``` 74 | 75 | The transaction `tx` is valid if, and only if, `valid is True`. Note that `-balance[T]` is positive for melting operations, so `withdraw` is the (non-negative) amount of HTR released, matching `A = floor(p * (X - Y))` from the guide-level explanation. 76 | 77 | This rule is enough to handle many minting and melting operations in a single transaction `tx`. 78 | 79 | We use the `ceil` and `floor` operators to handle the boundary cases where the user is minting/melting an amount `X` such that `p * X < 1` (remember that `X` is always an integer, i.e., X = 1 means 0.01 HTR). So, the `floor` function prevents users from issuing HTR, and the `ceil` function prevents users from depositing zero. 80 | 81 | This rounding of the deposit, handled by the `ceil` and `floor` operators, may be used to destroy HTR. For instance, one can create 0.99 tokens, which would require a deposit of 0.01 HTR. Then, these tokens are melted, yielding zero HTR, therefore destroying 0.01 HTR. We do not think this is a problem because the attacker would have to own the 0.01 HTR, and they could "destroy their tokens" just by destroying the private keys. 82 | 83 | # Drawbacks 84 | [drawbacks]: #drawbacks 85 | 86 | The drawback of charging a fixed percentage `P` is that not all tokens are worth the same. For example, as of this writing, 1 USD is equal to 108 JPY, which means a stablecoin of JPY would need to mint more tokens than a stablecoin of USD. 87 | 88 | Another drawback is that melting authorities can only melt their own tokens, which means they may not be able to withdraw the whole deposit because they may not have custody of all tokens that were minted. We've already discussed two alternatives to solve this problem: (i) create a reclaim authority which can be used to reclaim tokens from others (this authority may have other uses besides this problem); or (ii) create a "token destruction transaction", which would withdraw the remaining deposit and prohibit any future transaction involving that token. 89 | 90 | # Rationale and alternatives 91 | [rationale-and-alternatives]: #rationale-and-alternatives 92 | 93 | An alternative is to charge a fee, which can be done in two ways: (i) melting HTR, or (ii) paying fees to the first block verifying the transaction. We feel that the deposit/withdrawal rule is simpler and capable of generating demand for HTR as the community grows. In this case, when users buy HTR, they are buying the right to mint tokens. 94 | 95 | The impact of not doing this is not increasing the demand for HTR as the community grows, which would not increase the price and would not give incentives for miners to join the network. 96 | 97 | # Prior art 98 | [prior-art]: #prior-art 99 | 100 | As of this writing, we do not know of any platform using this deposit/withdrawal strategy. Ethereum, the most used platform for token issuance, charges a fee to run any operation. 
101 | 102 | # Unresolved questions 103 | [unresolved-questions]: #unresolved-questions 104 | 105 | What should we do when token owners are minting/melting amounts that lead to deposits/withdrawals of less than 0.01 HTR? The proposed solution is to require a minimum deposit of 0.01 HTR, i.e., round it up when minting, and to withdraw nothing, i.e., round it down when melting. This would allow anyone to permanently destroy small amounts of HTR. 106 | 107 | An alternative may be to prohibit these cases, but this would require token owners to mint or melt a minimum amount of tokens. 108 | 109 | # Future possibilities 110 | [future-possibilities]: #future-possibilities 111 | 112 | The percentage P may be adjusted in the future depending on the use cases. Let's say we change P from `P1` to `P2`, and `P2 > P1`. Then we need to handle the case in which the deposit was made using `P1`, and a future withdrawal using `P2` would yield more HTR than the deposit. To avoid this, we need to know which percentage was used in each case. An intuitive idea would be to use the timestamp of the transaction, but we cannot do that because it can be easily modified. Instead of the timestamp of the transaction, we can safely use the timestamp or the height of the first block that verifies that transaction, and prohibit the old type of transaction. 113 | 114 | Another future possibility would be to allow tokens to have a custom precision (number of decimal places). In this case, the deposit and withdrawal may be calculated using the smallest unit of the tokens. 115 | -------------------------------------------------------------------------------- /text/0025-images/Sync-v2-RFC-images.drawio: -------------------------------------------------------------------------------- 1 | 
(compressed draw.io diagram data omitted) -------------------------------------------------------------------------------- /text/0025-images/fig-state-diagram.svg: -------------------------------------------------------------------------------- 1 | (state diagram with states "Not validated" → "Basic Validated" → "Full Validated", plus an "Invalid" state; SVG markup omitted)
-------------------------------------------------------------------------------- /text/0029-semi-isolated-networks.md: -------------------------------------------------------------------------------- 1 | - Feature Name: semi_isolated_networks 2 | - Start Date: 2021-06-01 3 | - RFC PR: [HathorNetwork/rfcs#29](https://github.com/HathorNetwork/rfcs/pull/29) 4 | - Author: Jan Segre 5 | 6 | # Summary 7 | [summary]: #summary 8 | 9 | A new architecture where we use internal nodes (not connected to the whole p2p network) for essential services (wallets and mining). 10 | 11 | # Motivation 12 | [motivation]: #motivation 13 | 14 | Since our first deploy, the nodes that serve essential services have always been connected to the full open p2p network. This has some disadvantages, and recently we experienced some of them: a large number of connection attempts on the p2p network seems to have starved the nodes of fd handlers, which caused the APIs to become unresponsive and, in turn, affected the services that rely on those nodes. 15 | 16 | The proposed architecture aims to create a semi-isolated network where the nodes that serve these services are not exposed to the open p2p network. This isn't meant to be a solution to any particular problem, and issues like the fd handler starvation have resulted in specialized solutions/optimizations. It is meant, however, as a precaution and as a deployment that is generally easier to protect without impacting the network as a whole. For example, we will be able to create firewall rules that restrict connections to the p2p port of some nodes without impacting the connectivity of nodes from other operators, allowing safer targeted responses to some situations. 17 | 18 | # Guide-level explanation 19 | [guide-level-explanation]: #guide-level-explanation 20 | 21 | This guide will make use of the following nomenclature: 22 | 23 | - Public nodes: nodes that are able to connect to any other node in the open p2p network, but never to internal nodes; 24 | - Gate nodes: nodes that can connect to both public nodes and internal nodes, thus acting as _gates_ of the _walled garden_; 25 | - Internal nodes: nodes that have a very restricted set of peers they can connect to; they are only directly connected to other internal nodes or gate nodes, and never to public nodes; 26 | 27 | At a high level these are the changes: 28 | 29 | - Unify the node1 and node2 endpoints. Today these are handled by two separate CloudFront Distributions, but we would now use only one that can handle both endpoints and, optionally, a new endpoint that we can add to the wallets in the future. The same distribution can use multiple DNS entries (and certificate manager also supports this). This item is the least related to the protection of nodes, but since we're changing our general architecture we might as well do it now; 30 | - Make the wallet nodes internal nodes; 31 | - Make the explorer a gate node (basically so it has maximal visibility of the network); 32 | - Add a gate node for each region where there is a wallet node. 
33 | 34 | # Reference-level explanation 35 | [reference-level-explanation]: #reference-level-explanation 36 | 37 | The implementation of this architecture already has all the required pieces and parts working: 38 | 39 | - hathor-core supports a custom whitelist (and will soon have a netfilter-style firewall that might make this simpler to configure); 40 | - aws-deploy-tools supports setting up a custom configuration for a hathor-core node; 41 | 42 | Two whitelists are used to achieve this: 43 | 44 | - public whitelist: contains all nodes, including the wallets 45 | - internal whitelist: contains only the gates and the wallets 46 | 47 | The wallets have to be configured to use the internal whitelist while all other nodes use the public whitelist. The effect of this is that the wallets can connect among themselves and with the gates, while the gates can connect with everyone. 48 | 49 | # Drawbacks 50 | [drawbacks]: #drawbacks 51 | 52 | - Because we introduce a new whitelist, some errors can lead to split-brain networks; 53 | - We will need to run and maintain more nodes (the gate nodes), which also incurs an increased cost; 54 | - DoS situations might be harder to perceive (wallets would keep working but propagation would be affected), and this might require smarter monitoring; 55 | 56 | # Rationale and alternatives 57 | [rationale-and-alternatives]: #rationale-and-alternatives 58 | 59 | - We could get away with not using extra gate nodes (or only using the explorer node(s) for this): the cost saving would be small compared to the increased risk of split brain if there is any issue with the explorer (which is more exposed because it serves the explorer frontend); 60 | 61 | 62 | # Prior art 63 | [prior-art]: #prior-art 64 | 65 | I haven't found documentation on how Bitcoin (or other large PoW networks) does this in practice, but I have heard about similar strategies being used. 66 | 67 | # Unresolved questions 68 | [unresolved-questions]: #unresolved-questions 69 | 70 | - Is it worth making the explorer node an internal node, or at least splitting it into 2 nodes: the main one being an internal node, with a public node used only for the public API (this split can be made at the LB level)? 71 | - Should we unify the node1/node2 LBs and only use one region for both? 72 | - Should we really make the node1/node2 change? 73 | 74 | # Future possibilities 75 | [future-possibilities]: #future-possibilities 76 | 77 | After the netfilter firewall is released we'll have to make changes to the whitelists; it should be simpler to set them up such that we won't need to repeat nodes on different lists. 78 | 79 | When the wallet and explorer services are deployed we will have much less exposure of these nodes, so we can rethink our architecture to maybe have a common set of core nodes used by these services that would never be exposed to the public. 80 | 81 | When sync-v2 is released the open p2p network will be more exposed, so we will have to consider an extension of this architecture to keep the connectivity with trusted parties (exchanges, pools, use cases) less exposed. 
82 | -------------------------------------------------------------------------------- /text/0032-nft-standard.md: -------------------------------------------------------------------------------- 1 | - Feature Name: nft_standard 2 | - Start Date: 2021-08-02 3 | - RFC PR: https://github.com/HathorNetwork/rfcs/pull/32 4 | - Hathor Issue: (leave this empty) 5 | - Author: Pedro Ferreira , Sharaddition 6 | - Revision: 2 7 | - Last updated date: 2022-05-06 8 | 9 | # Summary 10 | [summary]: #summary 11 | 12 | This document presents a standard to be followed when creating an NFT transaction and its metadata. The idea behind having a standard is to make it possible to identify NFT tokens and to make it easy for any platform to integrate Hathor NFTs. 13 | 14 | # Motivation 15 | [motivation]: #motivation 16 | 17 | Creating an NFT standard is important mainly for 3 reasons: 18 | 19 | 1. There are some requirements that an NFT token must fulfill in order to have its digital asset shown in our explorer, so it's important that they are described here. 20 | 1. Having a similar structure in most NFTs (and in their metadata) facilitates the integration with any platform that wants to list and show information about NFTs created on Hathor. 21 | 1. Identifying an NFT is important to show specific information in the explorer and wallets. 22 | 23 | # Guide-level explanation 24 | [guide-level-explanation]: #guide-level-explanation 25 | 26 | Non-fungible tokens (NFTs) are unique, indivisible digital assets created on blockchains. The HTR token, for instance, is a fungible token: 1 HTR of mine can be exchanged for 1 HTR of anyone else without difference, because all units are equal. Each NFT's uniqueness, in contrast, can be proven by a unique identifier. 27 | 28 | In Hathor Network, an NFT is a newly created custom token whose unique identifier is the transaction hash. 29 | 30 | ## Transaction standard 31 | 32 | A transaction can be identified as an NFT creation if it has the following structure: 33 | 34 | 1. Transaction version is 2 (TOKEN_CREATION_TRANSACTION). 35 | 1. All inputs are HTR inputs. 36 | 1. The first output has an HTR value and its script is a pushdata with the NFT data followed by an OP_CHECKSIG. 37 | 1. It can optionally have a melt authority output (but never more than one). 38 | 1. It can optionally have a mint authority output (but never more than one). 39 | 1. It has one or more outputs with the created token (any value is valid). 40 | 1. It can optionally have an output for the HTR change. 41 | 42 | Any transaction that has this structure will be identified as an NFT, and the explorer/wallet screens will show specific NFT information for it. The media associated with the NFT will not be shown unless the NFT goes through Hathor Labs' review process. 43 | 44 | ## Output script data 45 | 46 | The output script data is the most important piece of an NFT transaction because it represents the data that is being uniquely identified in the blockchain. 47 | 48 | The data can be any string, e.g. '#A123' as a serial number, or 'https://mywebsite.com/', or even 'ipfs://ipfs/<CID>/filename'. 49 | 50 | The most common use case is to represent a digital asset, so the NFT data usually points to an image/video/audio file. In that case, in order for us to add the digital asset to our explorer, it's required that this URL uses an immutable protocol, e.g. [IPFS](https://en.wikipedia.org/wiki/InterPlanetary_File_System). 
**Important: for your IPFS URL to be considered immutable, it must be like ipfs://ipfs/<CID>/filename and not https://ipfs.io/ipfs/<CID>/filename, because the ipfs.io DNS may change at any time.** 51 | 52 | ## Metadata 53 | 54 | Most NFTs also need extra data besides just the digital asset URL. Following the metadata standard is important in order to help future platform integrations with Hathor NFTs. This standard was inspired by the [OpenSea metadata standard](https://docs.opensea.io/docs/metadata-standards) and the [Metaplex Token Metadata Standard](https://docs.metaplex.com/token-metadata/Versions/v1.0.0/nft-standard). 55 | 56 | If the NFT requires metadata, the output script data saved on the blockchain **MUST** be the metadata URL in an immutable protocol (e.g. IPFS, as explained in the last section), e.g. ipfs://ipfs/<CID>/metadata.json. 57 | 58 | The metadata should have the following JSON structure: 59 | 60 | ``` 61 | { 62 | "name": { 63 | "type": "string", 64 | "description": "Identifies the asset that this token represents" 65 | }, 66 | "description": { 67 | "type": "string", 68 | "description": "Describes the asset that this token represents" 69 | }, 70 | "file": { 71 | "type": "string", 72 | "description": "A URL pointing to an immutable resource with the digital asset that this token represents" 73 | }, 74 | "collection": { 75 | "type": "Object", 76 | "description": "Additional details are recommended for NFT collections with multiple unique assets under one single family and symbol." 77 | }, 78 | "attributes": { 79 | "type": "Array", 80 | "description": "These are the extra attributes for the digital asset. It's an array of AttributeObject to make it as flexible as possible." 81 | }, 82 | "royalty": { 83 | "type": "Object", 84 | "description": "Object with fields that will be used to pay royalties to original creators in secondary sales conducted by marketplaces that adhere to Hathor metadata standards, after minting (primary sale)." 85 | }, 86 | "external_url": { 87 | "type": "string", 88 | "description": "This is the URL that will appear with the asset in marketplaces and will direct users to view the NFT on the creator's (or some other) website." 89 | }, 90 | "animation_url": { 91 | "type": "string", 92 | "description": "URL to a file that will be used as a preview of the NFT in the marketplaces or wallet. If not set, the file field may be used." 93 | } 94 | } 95 | ``` 96 | 97 | - The CollectionObject has the following JSON structure: 98 | 99 | ``` 100 | { 101 | "name": { 102 | "type": "string", 103 | "description": "Name of the collection, e.g. Hippos" 104 | }, 105 | "symbol": { 106 | "type": "string", 107 | "description": "Symbol of the NFT collection within 2-5 characters, e.g. HIPPO" 108 | }, 109 | "family": { 110 | "type": "string", 111 | "description": "Represents the family of the NFT collection, e.g. HathorLand DAO" 112 | } 113 | } 114 | ``` 115 | 116 | - The AttributeObject has the following JSON structure: 117 | 118 | ``` 119 | { 120 | "type": { 121 | "type": "string", 122 | "description": "Type of the attribute, e.g. rarity, stamina, eyes" 123 | }, 124 | "value": { 125 | "type": "string | decimal", 126 | "description": "Value of the attribute, e.g. rare, 1.4, blue" 127 | } 128 | } 129 | ``` 130 | 131 | The AttributeObject may have more attributes than the ones described above, but those are the required ones. 
132 | 133 | - The RoyaltyObject has the following JSON structure: 134 | 135 | ``` 136 | { 137 | "fee_basis_points": { 138 | "type": "integer", 139 | "description": "Royalties percentage awarded to creators (basis points), e.g. 500, which means 5%." 140 | }, 141 | "creators": { 142 | "type": "Array", 143 | "description": "Array of all creators involved in the project & their royalty bps." 144 | } 145 | } 146 | ``` 147 | 148 | 500 basis points mean a 5% royalty; one basis point is equal to 1/100th of 1%. More info on BPS at [Investopedia](https://www.investopedia.com/terms/b/basispoint.asp). 149 | 150 | - The CreatorObject has the following JSON structure: 151 | 152 | ``` 153 | { 154 | "address": { 155 | "type": "string", 156 | "description": "Address of the wallet to which royalty fees will be sent for secondary sales." 157 | }, 158 | "share": { 159 | "type": "integer", 160 | "description": "Amount of royalty for each creator, in bps." 161 | } 162 | } 163 | ``` 164 | 165 | The sum of the shares should *always* total 10000 bps, which corresponds to 100%. 166 | 167 | For an NFT with royalties of 10% (`fee_basis_points` = 1000 bps) and two creators, one receiving 40% of the royalties and the other 60%, the shares are 4000 and 6000. If there were 3 creators receiving 30%, 20%, and 50% respectively, the shares would be 3000, 2000, and 5000. 168 | 169 | ### Metadata example 170 | 171 | ``` 172 | { 173 | "name": "Gandalf", 174 | "description": "A wizard, one of the Istari order, and the leader and mentor of the Fellowship of the Ring", 175 | "file": "ipfs://ipfs/QmbuthvFV2EjvfmWXxt2L83PwPPwbjjggBhVsrEB7AXW123/gandalf.png", 176 | "collection": { 177 | "name": "Wizards", 178 | "symbol": "WZD", 179 | "family": "HathorLand DAO" 180 | }, 181 | "attributes": [ 182 | { 183 | "type": "rarity", 184 | "value": "super rare" 185 | }, 186 | { 187 | "type": "hp", 188 | "value": 32 189 | }, 190 | { 191 | "type": "intelligence", 192 | "value": 99 193 | }, 194 | { 195 | "type": "ring", 196 | "value": 0 197 | } 198 | ], 199 | "royalty": { 200 | "fee_basis_points": 500, 201 | "creators": [ 202 | { 203 | "address": "H7t2eNhFNeH3hrUAJj4AAu9amtWmPT4fLB", 204 | "share": 9500 205 | }, 206 | { 207 | "address": "HNBuYEsJYznh5zZGx4eDn2NXYuiPpgKBgH", 208 | "share": 500 209 | }, 210 | { 211 | "address": "RaNd0MsJYznh5zYOyoDn2NXYuiPAdDre5s", 212 | "share": 0 213 | } 214 | ] 215 | }, 216 | "animation_url": "https://www.arweave.net/efgh1234?ext=mp4", 217 | "external_url": "https://hathor.land/dao" 218 | } 219 | ``` 220 | 221 | In the example above we have one creator with 0 royalty shares. The idea is just to identify this address as a creator of the NFT, but it won't receive any royalties. 222 | 223 | ## Deposit & Fee 224 | 225 | ### Deposit 226 | 227 | The deposit will be the same as in the normal token creation, i.e. 1% of the amount created, deposited in HTR. 228 | 229 | For example, if you'd like to create 100 units of an NFT, you must deposit 0.01 HTR: 1% of 100 units is 1 unit and, given that we never handle decimals, only integers, 1 unit of HTR is 0.01 HTR (see the sketch at the end of this subsection). 230 | 231 | - 500 NFT units: deposit of 0.05 HTR. 232 | - 100 NFT units: deposit of 0.01 HTR. 233 | - 10 NFT units: deposit of 0.01 HTR. 234 | - 1 NFT unit: deposit of 0.01 HTR. 235 | 236 | This deposit may be returned in case of a melt of the created NFTs. Note that you only get the deposit back if you melt an amount greater than or equal to 100 units, just like a normal custom token. 
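The deposit rule above can be computed with integer arithmetic, which avoids floating-point rounding issues. A minimal sketch (the function name `nft_deposit` is hypothetical; amounts are integers where 1 unit = 0.01 HTR):

```python
def nft_deposit(units: int) -> int:
    """Deposit, in integer units (1 unit == 0.01 HTR), to mint `units` of an NFT."""
    # 1% of the amount created, rounded up: ceil(units / 100) in integer math.
    return (units + 99) // 100

assert nft_deposit(500) == 5  # 0.05 HTR
assert nft_deposit(100) == 1  # 0.01 HTR
assert nft_deposit(10) == 1   # 0.01 HTR
assert nft_deposit(1) == 1    # 0.01 HTR
```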
237 | 238 | ### Fee 239 | 240 | The first output of an NFT transaction will contain the script with the data string. This output must have some value, and we use 0.01 HTR for that. It's an output that can never be returned. 241 | 242 | ## Custom NFT 243 | 244 | There are some special cases where the NFT token won't follow the proposed standard, e.g. if it needs more than one data output. In that case, our wallets and explorer won't automatically identify this token as an NFT. 245 | 246 | Given that this situation is expected to be rare, we will handle these cases manually. The NFT creator will need to get in touch with the Hathor team in order to have the token identified as an NFT on the official Hathor explorer. Besides that, as long as the digital asset's URL is immutable, it should be approved in the review and should be shown in Hathor's Public Explorer like any other standard NFT. 247 | 248 | ## FAQ 249 | 250 | ### Why use HTR for the fee and not the custom token created? 251 | 252 | It's possible to create a token unit especially to be used in the first output (the one with the data script); however, the total supply of the NFT would be increased by 1 on the blockchain, which is not good given the idea behind non-fungible tokens. That's why we've decided to define the standard with the fee in HTR. -------------------------------------------------------------------------------- /text/0033-images/private-network-architecture-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/text/0033-images/private-network-architecture-2.png -------------------------------------------------------------------------------- /text/0033-images/private-network-architecture.drawio: -------------------------------------------------------------------------------- 1 | (draw.io XML content omitted) -------------------------------------------------------------------------------- /text/0033-images/private-network-architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/HathorNetwork/rfcs/5eef95536d5e5df305025b1fa110f08bc15f8b8d/text/0033-images/private-network-architecture.png -------------------------------------------------------------------------------- /text/0044-use-case-integration-best-practices.md: -------------------------------------------------------------------------------- 1 | - Feature Name: use_case_integration_best_practices 2 | - Status: Draft 3 | - Start Date: 2022-09-20 4 | - RFC PR: (leave this empty) 5 | - Hathor Issue: (leave this empty) 6 | - Author: Pedro Ferreira 7 | 8 | # Summary 9 | [summary]: #summary 10 | 11 | This document presents security best practices for use cases that integrate with the Hathor Network, in order to improve the reliability of their operation. 12 | 13 | # Guide-level explanation 14 | [guide-level-explanation]: #guide-level-explanation 15 | 16 | There are multiple ways that a use case can integrate with Hathor. Here is a common architecture compliant with our recommendations: 17 | 18 | - The use case has software that runs operations off-chain and is used by its customers. 
19 | - This software connects to a headless wallet, which communicates with a full node. The full node is connected to Hathor's p2p network. 20 | - The headless wallet handles all events from the full node to keep its balance and transaction history updated. The headless software has APIs to manage a wallet, e.g. get addresses, create and push transactions to the network, create new tokens, and some other features described in the project [repository](https://github.com/HathorNetwork/hathor-wallet-headless). 21 | 22 | 
 23 |                        ┌─────────────┐      ┌──────────┐                     ┌─────────────┐        ┌─────────────┐
 24 |                        │             │      │          │◀────────────────────┤             │        │             │
 25 |                        │  Hathor     │      │ Headless │                     │  Use Case   │        │  Customer   │
 26 |                        │ Full Node 1 │◀────▶│  Wallet  │                     │  Software   │◀──────▶│             │
 27 |                        │             │      │          ├────────────────────▶│             │        │             │
 28 |                        └────┬────────┘      └──────────┘                     └─────────────┘        └─────────────┘
 29 |                             │
 30 | ┌────────────┐              │
 31 | │            │◀─────────────┘
 32 | │  Rest of   │
 33 | │  Network   │
 34 | │            │◀─────────────┐
 35 | └────────────┘              │
 36 |                        ┌────┴────────┐
 37 |                        │             │               
 38 |                        │  Hathor     │
 39 |                        │ Full Node 2 │
 40 |                        │             │
 41 |                        └─────────────┘
 42 | 
 43 | 
44 | 45 | ## Run more than one node 46 | 47 | We strongly recommend that use cases run two or more full nodes as protection against direct attacks on their full nodes. 48 | 49 | The use case's full nodes **should not** be connected among themselves. This is important to mitigate some attack vectors. Remember that the transactions will be propagated by the p2p network, and all of the use case's full nodes will eventually receive the transactions during normal network activity. 50 | 51 | ### Validate new transactions on more than one full node before accepting them 52 | 53 | Let's assume an exchange wants to run nodes to identify deposits in the Hathor network. A recommended approach for the integration would be to run at least two full nodes (node1 and node2), which are not connected to each other (node1 must have node2 in its blacklist and vice versa). In that architecture, if any deposit is identified in node1, then the exchange must check that it's also a valid transaction in node2 and in one of the public nodes. In this approach, if an attacker successfully compromises one of your full nodes, your validation will fail and the deposit will not be accepted. 54 | 55 | ### Validate all your full nodes have the same best block 56 | 57 | Use cases should regularly check whether the best block is the same on all their full nodes. If the full nodes have different best blocks, the validation must be repeated a few seconds later, since this can happen normally depending on the network's block propagation time. If the difference persists, the nodes might be under attack and the use case should consider blocking deposits. 58 | 59 | 60 | ## Peer-id 61 | 62 | The peer-id is a unique identifier of your full node in Hathor's p2p network. You must keep your peer-id secret to prevent attackers from directly targeting your full nodes. Do not tell anyone your peer-ids, and do not publish them on public channels. If you think your peer-id has been exposed, you should generate a new peer-id and replace the exposed ones. 63 | 64 | ## How to validate a new transaction 65 | 66 | The transactions in the Hathor network have many fields that must be checked to guarantee that a transaction is valid for your use case. For more details about the fields of a transaction, check the [Transaction Anatomy RFC](https://github.com/HathorNetwork/rfcs/blob/master/text/0015-anatomy-of-tx.md). 67 | 68 | - [Version](#version) 69 | - [Voided state](#voided-state) 70 | - [Outputs](#outputs) 71 | - [Number of confirmations](#number-of-confirmations) 72 | 73 | ### Version 74 | 75 | Version identifies the type of vertex. The possible choices are: 76 | 77 | - Block: 0 78 | - Regular Transaction: 1 79 | - Token Creation Transaction: 2 80 | - Merged Mined Block: 3 81 | 82 | Depending on your use case, you should accept one or more of these types. 83 | 84 | ### Voided state 85 | 86 | A voided transaction is cancelled and should **never** be accepted. You must validate that the transaction is not voided by asserting that `is_voided == false`. 87 | 88 | ### Outputs 89 | 90 | Hathor supports multi-token transactions. You must confirm that your outputs are correct, as follows (a sketch combining these checks is shown right after this list): 91 | 92 | - Token id is correct. If you accept only the HTR token, the token id must be `"00"`. 93 | - Token data is equal to `0`, if it's the HTR token. 94 | - Value matches the expected value. Note that it is an integer, and `12.34` is represented by `1234`. 95 | - Timelock must be `null`; otherwise your funds might be locked. 
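Below is a minimal sketch combining the voided-state, version, and output checks for an expected HTR deposit. It is illustrative only: the field names follow the transaction anatomy referenced above, but the exact shape of your full node's API response may differ, and `expected_value` is whatever integer amount your use case computed (12.34 HTR is 1234).

```python
REGULAR_TX = 1  # "version" value for a regular transaction

def is_acceptable_htr_deposit(tx: dict, expected_value: int) -> bool:
    # Voided transactions are cancelled and must never be accepted.
    if tx["is_voided"]:
        return False
    # For plain HTR deposits, accept only regular transactions.
    if tx["version"] != REGULAR_TX:
        return False
    # Look for an output matching every check from the list above.
    for output in tx["outputs"]:
        if (output["token_id"] == "00"         # HTR token
                and output["token_data"] == 0  # token data for HTR
                and output["value"] == expected_value
                and output["timelock"] is None):  # not timelocked
            return True
    return False
```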
96 | 97 | ### Number of confirmations 98 | 99 | Some use cases might handle transactions with huge amounts, so it's essential to wait for some blocks to confirm the transaction before accepting it as a valid one. The more blocks confirm a transaction, the stronger the guarantee that this transaction won't become voided in the future. As a reference, Bitcoin use cases usually require six confirmations before accepting a new deposit. 100 | 101 | ## Be alert for weird behavior 102 | 103 | ### Check if an unusual amount of deposits or withdrawals are being made 104 | 105 | Many use cases offer withdrawals/deposits through blockchain transactions. It's important to check for unusual levels of deposits and withdrawals because they could be caused by an attack. 106 | 107 | For that situation, it's crucial to have an easy way to block specific accounts that show unusual behavior and might be part of an attack. You might also consider limiting the number of operations a user can do in a time window. 108 | 109 | This validation and user throttling must be done in the use-case software application. 110 | 111 | ### Check if one of your full nodes gets out-of-sync 112 | 113 | Always check for weird behavior in the synchronization among full nodes. We recommend that use cases regularly validate that all their full nodes are in sync among themselves and in sync with at least one public node as well. 114 | 115 | This validation is important to guarantee the node is not isolated from the rest of the network with a fork of the blockchain. 116 | 117 | Besides that, it's also important to validate that the timestamp of the best block of the node is recent, which means that the node's blockchain is not stuck in the past. 118 | 119 | # Reference-level explanation 120 | [reference-level-explanation]: #reference-level-explanation 121 | 122 | ## Run a full node with a blacklist of peer-ids 123 | 124 | Following the recommended architecture, you will need to run more than one full node, and they shouldn't be connected among themselves. To achieve that, each of your nodes must have a blacklist of peer-ids containing the ids of the other nodes. 125 | 126 | For example, say you run node1 (peerid1), node2 (peerid2), and node3 (peerid3). Node1 should have peerid2 and peerid3 in its blacklist, node2 should have peerid1 and peerid3 in its blacklist, and node3 should have peerid1 and peerid2 in its blacklist. 127 | 128 | To create this blacklist in the full node there are some possible approaches: 129 | 130 | ### CLI command 131 | 132 | If you pass the full node parameters directly in the command line, then you should add: 133 | 134 | ``` 135 | --peer-id-blacklist peerid1 peerid2 136 | ``` 137 | 138 | ### Environment variables 139 | 140 | If you pass the full node parameters using env vars, then you should add: 141 | 142 | ``` 143 | export HATHOR_PEER_ID_BLACKLIST=[peerid1, peerid2] 144 | ``` 145 | 146 | ### API 147 | 148 | There is also an API to add a peer-id to the blacklist while the node is running; however, this is not recommended because the blacklist is lost if you restart your node. 
149 | 150 | ``` 151 | POST to /v1a/p2p/netfilter 152 | 153 | { 154 | "chain": { 155 | "name": "post_peerid" 156 | }, 157 | "target": { 158 | "type": "NetfilterReject", 159 | "target_params": {} 160 | }, 161 | "match": { 162 | "type": "NetfilterMatchPeerId", 163 | "match_params": { 164 | "peer_id": "peerid1" 165 | } 166 | } 167 | } 168 | ``` 
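For completeness, a sketch of calling this API from Python; the host and port are hypothetical, and the payload is the one shown above:

```python
import requests

# Hypothetical full-node API address; replace with your node's host and port.
NODE_API = "http://localhost:8080/v1a"

rule = {
    "chain": {"name": "post_peerid"},
    "target": {"type": "NetfilterReject", "target_params": {}},
    "match": {
        "type": "NetfilterMatchPeerId",
        "match_params": {"peer_id": "peerid1"},
    },
}

# Remember: rules added through this API are lost when the node restarts,
# so prefer the CLI or environment variable approaches above.
response = requests.post(f"{NODE_API}/p2p/netfilter", json=rule)
response.raise_for_status()
```
--------------------------------------------------------------------------------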