├── T_C_x.png ├── eltoo.png ├── talks ├── tabconf_2022.pdf ├── onchainscaling.pdf └── scaling_bitcoin.pdf ├── README.md ├── .gitignore ├── roadmap.md ├── overview.md ├── general_considerations.md ├── braids.py ├── LICENSE └── braidpool_spec.md /T_C_x.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mcelrath/braidcoin/HEAD/T_C_x.png -------------------------------------------------------------------------------- /eltoo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mcelrath/braidcoin/HEAD/eltoo.png -------------------------------------------------------------------------------- /talks/tabconf_2022.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mcelrath/braidcoin/HEAD/talks/tabconf_2022.pdf -------------------------------------------------------------------------------- /talks/onchainscaling.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mcelrath/braidcoin/HEAD/talks/onchainscaling.pdf -------------------------------------------------------------------------------- /talks/scaling_bitcoin.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mcelrath/braidcoin/HEAD/talks/scaling_bitcoin.pdf -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Braidcoin 2 | Experiments in a improving Bitcoin using a Directed Acyclic Graph 3 | 4 | Running these examples requires [Jupyter](http://jupyter.org/) and the ipython 5 | kernel. Fire up the example notebook in your browser by `jupyter notebook`. 6 | 7 | We use [graph_tool](https://graph-tool.skewed.de/) for plotting. This one is a 8 | bitch to compile, I recommend getting it from your friendly local pre-compiled 9 | distribution. I use Ubuntu 16.04. The dependencies for running my notebook are 10 | pre-compiled there, just: 11 | 12 | sudo apt-get install ipython3-notebook python3-graph-tool python3-joblib python3-scipy 13 | 14 | You can [view the example notebook in HTML](https://rawgit.com/mcelrath/braidcoin/master/Braid%2BExamples.html) 15 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | env/ 12 | build/ 13 | develop-eggs/ 14 | dist/ 15 | downloads/ 16 | eggs/ 17 | .eggs/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | var/ 23 | *.egg-info/ 24 | .installed.cfg 25 | *.egg 26 | 27 | # PyInstaller 28 | # Usually these files are written by a python script from a template 29 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
30 | *.manifest 31 | *.spec 32 | 33 | # Installer logs 34 | pip-log.txt 35 | pip-delete-this-directory.txt 36 | 37 | # Unit test / coverage reports 38 | htmlcov/ 39 | .tox/ 40 | .coverage 41 | .coverage.* 42 | .cache 43 | nosetests.xml 44 | coverage.xml 45 | *,cover 46 | .hypothesis/ 47 | 48 | # Translations 49 | *.mo 50 | *.pot 51 | 52 | # Django stuff: 53 | *.log 54 | 55 | # Sphinx documentation 56 | docs/_build/ 57 | 58 | # PyBuilder 59 | target/ 60 | 61 | #Ipython Notebook 62 | .ipynb_checkpoints 63 | -------------------------------------------------------------------------------- /roadmap.md: -------------------------------------------------------------------------------- 1 | # Version 0.0 (current) 2 | 3 | 1. Connects to other nodes via a command line option (no peer discovery) 4 | 2. Connects to bitcoind to receive block headers and transactions 5 | 6 | # Version 0.1 7 | 8 | The goal of the first version is to gather useful data about mining devices and expected particpants in Braidpool. We need to find out: 9 | 10 | 1. What is the latency in submitting a new work unit to a mining device? In particular we need to *interrupt* any work the device thinks its doing and give it a new work unit. Braidpool beads will come in with a mean time between 150ms-1000ms, and we need to stop whatever the mining device is doing (if it hasn't found a solution) in a timeframe much less than that. 11 | 2. How large of a coinbase can the mining unit handle? The Bitmain S5 famously killed P2Pool by restricting the size of the coinbase transaction. The entire coinbase transaction is handed to the mining device because in addition to the nonce field, the coinbase transaction also contains the "extranonce" field which the mining device rolls. But some mining pools like Eligus, Ocean, and P2Pool additionally paid miners in the coinbase resulting in very large coinbases that the S5 couldn't handle. Braidpool can pay miners in coinbases if it's viable. We need to determine its viability. 12 | 3. What is the network latency we can expect given our p2p system and the network realities of real miners. 13 | 14 | During the 0.1 phase we will recruit miners to point **one** device of each type (manufacturer, bios version, batch number, etc) to the pool, and fill out a small form indicating what the mining device is, so that we can get a map of which mining devices *can* work with the low latencies that braidpool requires. We can use this to pressure mining device manufacturers to lower their latency and increase the size of their coinbase buffer. There's no technical reason why either of these should be limited, but anecdotal evidence indicates that both are limited on some hardware. 15 | 16 | During this phase we want to acquire a diversity of devices (for the above measurements), as well as geographic diversity (for network latency measurements), but not hashrate. Please do not point multiple devices of the same type to the pool. 17 | 18 | We will mine to the Braidpool donation address during this phase. If we win a block we will use it to fund developers, but we expect that the hashrate will be far too low for that and are not depending on funding in this manner. If you're so inclined the donation address is bcXXXXXX and we'd rather you donate than point more mining devices. Adding more mining devices decreases the probability that some odd hardware wins a share and decreases our ability to gather measurements about it. 19 | 20 | Requirements: 21 | 1. 
Braid implementation (possibly use Kaspa's DAGKnight Rust implementation) 22 | 2. getblocktemplate from bitcoin 23 | 3. Stratum V2 integration (using their StratumV1 proxy to devices) 24 | 4. Docker images (or other runnable image solution) 25 | 5. Manual peer discovery via command line 26 | 27 | # Version 0.2 28 | 29 | 1. Peer discovery mechanism (DHT or other) 30 | 2. Testnet version available, keep mainnet "donation" running, continue to solicit hashrate from new devices 31 | 3. Expansion of p2p system to include block and tx relay 32 | 33 | # Version 0.3 34 | 35 | 1. Implementation of "slots" key-value store to keep track of FROST signers 36 | 2. Implementation of share validation checks (full block checks with txs) 37 | 38 | # Version 0.4 39 | 40 | 1. Implementation of difficulty adjustment algorithm, with user selected difficulty (software will provide a recommended difficulty given the number of mining devices it sees to keep a constant variance among miners regardless of their size) 41 | 2. Initial implentation of FROST signing using "slots" 42 | 43 | # Version 0.5 44 | 45 | 1. Public launch of Braidpool. You will be able to actually mine on it with payouts every 2 weeks. UX will be command line and shit but we will provide Docker images. 46 | 2. Miner monitoring console (UX) initial implementation 47 | 48 | # Version 0.6 49 | 50 | 1. Initial implementation of transaction system (sending of shares). We will probably straight up copy bitcoin using libconsensus or similar. 51 | 52 | # Version 0.7 53 | 54 | 1. P2P latency improvements through Forward Error Correction over UDP (FEC). 55 | 2. Polish that UX 56 | 57 | # Version 0.8 58 | 59 | 1. Initial implementation of "sub-pools". (Braidpool that mines into parent Braidpool) to decrease the smallest possible miner by 1000x. 60 | 61 | # Version 0.9 62 | 63 | 1. Everything I forgot in the above 64 | 65 | # Version 1.0 66 | 67 | To the moon! 68 | -------------------------------------------------------------------------------- /overview.md: -------------------------------------------------------------------------------- 1 | # What is Braidpool? 2 | 3 | Braidpool is a decentralized mining pool for bitcoin. It is a short-lived DAG-based blockchain solely for the purpose of producing bitcoin blocks. It decentralizes *share accounting* and *payout*, in addition to transaction selection. 4 | 5 | # Why? 6 | 7 | Bitcoin's long block time (10 minutes) introduces revenue variance for miners that can only be overcome by being a larger and larger miner. For this reason Mining Pools have come into existence to pool hashrate from different miners to reduce their variance. This directly impacts the decentralization of the bitcoin network since pools want to maximize their hashrate to minimize their variance, but cannot get bigger than 51% or they are perceived as an attack risk on Bitcoin. A centralized pool "looks like" an individual miner with respect to the Bitcoin network. 8 | 9 | Braidpool aims to decouple the problems of transaction selection, variance reduction, device monitoring, and mining payout in order to enhance the businesses of both pools and individual miners. 10 | 11 | # What does Braidpool mean for existing pools? 12 | 13 | Existing pools can run *on top* of Braidpool at no cost to them (Braidpool will be zero-fee) and we encourage them to do so. 
Doing this allows pools to decouple themselves from two unwinnable games:
14 | 
15 | **Variance reduction**:
16 | Pools can only reduce revenue variance by being big, but being big makes them an attack risk to a decentralized network, and a single point of control for hostile governments. The 51% attack risk still exists but is transferred to Braidpool itself. By having many miners acting individually in Braidpool instead of the pool appearing as one big miner, the risk of a 51% attack is substantially reduced. It would be safe for all miners to be running on top of Braidpool, as long as no single miner has more than 51% of the hashrate. It's also safe for a buyer of shares to buy more than 51% of the shares without becoming a risk to decentralization.
17 | 
18 | **Transaction selection**:
19 | Pools are commonly accused of censoring transactions and constantly fight a PR battle over it. By leaving this responsibility to individual miners, transaction selection is decentralized. Block template construction is not outsourced as it is on Ethereum, which has resulted in more centralization around MEV-boosting traders and "compliant" blocks which are antithetical to Bitcoin.
20 | 
21 | **By running on top of Braidpool, pools can get out of the above unwinnable games and focus on the other aspects of their business:**
22 | 
23 | **Buying shares**:
24 | Existing pools basically buy shares in exchange for Bitcoin. Because Braidpool only pays out every 2 weeks, pools can offer miners a faster payout method by buying their shares in exchange for BTC on-chain or in Lightning channels. This is technically a [Forward Contract](https://www.investopedia.com/terms/f/forwardcontract.asp). More complex instruments such as options can be built on top of it, and are enabled on the Braidpool chain by the ability to send your shares to the address of your pool counterparty. Braidpool is not a Decentralized Exchange and this kind of trading will have to happen privately, on existing exchanges, or over-the-counter (OTC).
25 | 
26 | **Risk Management**:
27 | Pools take a risk-on position with respect to the number of blocks the pool will win and the fees in those blocks. This is a duty taken on by hedge funds and market makers in other industries, and is accompanied by careful analysis of on-chain usage and expected changes in hashrate due to new mining devices or new facilities coming online. Miners outsource this responsibility in the [PPS model](https://medium.com/luxor/why-should-you-choose-a-pps-pool-5a71ee574478) and pools can continue to provide this service by running on Braidpool without creating a 51% attack risk. The PPS model requires a source of funds on the part of the pool, and pools are not simply aggregators but a financial counterparty. Diversifying this responsibility allows pools to specialize by legal jurisdiction while recognizing the financial nature of this part of their business.
28 | 
29 | **Device Monitoring**:
30 | Many miners use their pool for device monitoring. Because individual mining devices make direct connections to the pool, the pool can identify underperforming or non-functional mining devices, allowing the miner to take action. Pools also report site-wide hashrate and luck statistics so that operators can monitor their operation.
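As a rough, purely illustrative sketch of the share-buying arrangement described above (all numbers and the spread below are hypothetical, not Braidpool parameters), a PPS-style counterparty might price one epoch of a miner's shares like this:

    # Hypothetical PPS-style pricing of one epoch of shares (illustrative numbers only)
    miner_shares      = 50_000        # shares this miner submitted during the epoch
    total_pool_shares = 10_000_000    # all shares submitted to the pool during the epoch
    expected_btc      = 320.0         # counterparty's estimate of pool revenue (subsidy + fees)
    spread            = 0.02          # fee/spread charged for taking on luck and fee-rate risk

    full_proportional = expected_btc * miner_shares / total_pool_shares
    pps_offer         = full_proportional * (1 - spread)
    print(f"Full Proportional expectation: {full_proportional:.8f} BTC")
    print(f"PPS-style forward price:       {pps_offer:.8f} BTC")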
31 | 
32 | **Payment Routing**:
33 | By decoupling mining from the share payout itself, Braidpool encourages pools to diversify their services by offering new technical means for miners to receive their payment, including the Lightning network, and future in-progress proposals like Ark, FediMint, and Mercury Layer. This would allow today's centralized pools to become the routing nodes of the future payment system we all want Bitcoin to become, earning fees by routing off-chain payments in addition to mining revenue.
34 | 
35 | # What does Braidpool mean for miners?
36 | 
37 | Miners can continue to use the pools they have existing relationships with (assuming their pool runs on Braidpool) for the reasons above. They will have to run the Braidpool node software and a Bitcoin full node. Braidpool will provide Docker images to make this as easy as possible since we know this is somewhat of a hurdle, but we believe this technical hurdle is easier to solve than the variance reduction and tx selection problems existing pools have.
38 | 
39 | Miners can choose to run on top of Braidpool directly. The payout algorithm is **Full Proportional**, meaning that over a single difficulty adjustment window (2016 blocks -- about 2 weeks), all funds (both block rewards and all fees) mined by Braidpool are paid out to miners in proportion to the shares they submitted to the pool. The choice of paying over an entire difficulty adjustment window is intended to avoid the arbitrary "smoothing" that happens in PPLNS pools, while providing a definitive hashprice for each difficulty epoch that can be traded to create risk-management tools (financial derivatives). It is also chosen so that rewards are *aggregated* over a sufficient time period that Braidpool does not create huge coinbase transactions like Eligius, Ocean and P2Pool, which compete with fee revenue. For instance one could buy shares in one epoch and sell shares in the upcoming epoch through private agreement, in order to create a hashrate derivative instrument.
40 | 
41 | Miners will have the ability to emulate their current PPS arrangement with pools by engaging a counterparty to buy their shares at a fixed price. Your counterparty will of course demand a fee or spread for this service, as they are taking on risk to do this, but we anticipate this fee will match current pool fees, and Braidpool itself does not impose additional fees. This will incentivize private parties to offer new financial and technical instruments, such as streaming payments over Lightning, and more complex and comprehensive contract arrangements.
42 | 
43 | An example of a more comprehensive contract that could be created would be a long-term share-buying agreement at a fixed price. The repayment period for mining hardware is generally 6-18 months. Miners would really like to lock in their risk until the hardware is paid off by buying a contract to send all their shares to their counterparty for a fixed price, allowing their counterparty to take on the risk of fee and hashrate variance. This is similar to the options and futures contracts used in agriculture and other commodities: producers lock in their price, mitigating the variance in crop yield and oil usage due to winter temperatures. Braidpool will enable this, with a diversity of risk-taking counterparties in your jurisdiction.
44 | 
45 | # A Few Technical Details
46 | 
47 | Braidpool is a Directed Acyclic Graph (DAG) of "beads" rather than a blockchain of "blocks".
All this really means is that each bead can have multiple parents, and the graph can have diamonds or other higher-order graph structures within it. This is done to overcome the orphan/stale block problem in Bitcoin, while enabling much faster bead times (we estimate about 1000x faster).
48 | 
49 | These beads are "shares", and every time a miner wins one he adds an entry into the decentralized share accounting ledger that is Braidpool's UTXO set. Every bead could have been a full Bitcoin block had it met Bitcoin's difficulty target, but Braidpool has a dynamic difficulty target that is around 1000x easier to hit, resulting in 1000 times more beads than Bitcoin blocks. At the end of the 2016-block difficulty window, all shares are automatically paid out according to the consensus rules of Braidpool, which enforce the Full Proportional payout algorithm.
50 | 
51 | Custody of accumulated coinbase rewards and fees is performed by a large multi-sig among miners who have recently mined blocks, using the [FROST Schnorr signature algorithm](https://glossary.blockstream.com/frost/). Consensus rules on the network ensure that only a payout properly paying all miners can be signed, and no individual miner or small group of colluding miners can steal the rewards.
52 | 
53 | # Current Status and How To Contribute
54 | 
55 | Braidpool is a nascent project written in Rust. We have published a spec and have a GitHub repository which currently connects to peers and bitcoind. If any of the above features excite you, it's a great project to get in on the ground floor of a brand new blockchain that is *not* a shitcoin. No premine, no ICO, no BS, and with real utility.
56 | 
57 | We are taking donations at address bcXXXXX and will distribute any received funds to contributors using a bounty and project system.
58 | 
59 | FIXME mention Matrix chat.
60 | 
61 | FIXME link to spec after it's moved to braidpool github.
62 | 
--------------------------------------------------------------------------------
/general_considerations.md:
--------------------------------------------------------------------------------
1 | I'm writing this post to help organize and clarify some of the thinking
2 | surrounding "decentralized mining pools" for bitcoin, their requirements, and
3 | some unsolved problems that must be solved before such a thing can be fully
4 | specified.
5 | 
6 | # Decentralized Mining Pools for Bitcoin
7 | 
8 | A decentralized mining pool consists of the following components:
9 | 1. A [weak block](#weak-blocks) and difficulty target mechanism,
10 | 2. A [consensus mechanism](#consensus-mechanism) for collecting and accounting
11 | for shares,
12 | 3. A [payout commitment](#payout-commitment) requiring a quorum of participants
13 | to sign off on share payments,
14 | 4. A [signing procedure](#signing-procedure) for signing the [payout
15 | commitment](#payout-commitment) in such a way that the pool participants are
16 | properly paid,
17 | 5. A [transaction selection](#transaction-selection) mechanism for building
18 | valid bitcoin blocks.
19 | 
20 | The improvements being made by the
21 | [StratumV2](https://github.com/stratum-mining/sv2-spec) project also include
22 | encrypted communication to mining devices. These problems are important but largely
23 | factorizable from the pool itself, so we won't include discussion of them here;
24 | it is assumed that any decentralized mining pool would use the StratumV2
25 | communications mechanisms.
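Before describing each component, here is a minimal, hypothetical sketch of the weak-block check at the heart of component 1 (both targets below are illustrative placeholders, not real network values): a share is any otherwise-valid header whose proof-of-work hash meets the pool's easier target $t$ even though it misses bitcoin's target $T$, as defined in the next section.

    # Sketch of the weak-block ("share") test described under Weak Blocks below.
    # T and t are illustrative stand-ins; t is numerically larger (easier) than T.
    import hashlib

    T = 1 << 180          # stand-in for bitcoin's difficulty target
    t = 1 << 190          # stand-in for the pool's weak-block target (t > T)

    def pow_hash(header: bytes) -> int:
        return int.from_bytes(hashlib.sha256(hashlib.sha256(header).digest()).digest(), "little")

    def classify(header: bytes) -> str:
        h = pow_hash(header)
        if h <= T:
            return "bitcoin block (and also a share)"
        if h <= t:
            return "share (weak block)"
        return "neither"

    print(classify(b"\x00" * 80))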
26 | 
27 | # Weak Blocks
28 | 
29 | A *share* is a "weak block" that is defined as a standard bitcoin block that
30 | does not meet bitcoin's target difficulty $T$, but does meet some lesser
31 | difficulty target $t$. The *pool* specifies this parameter $t$, and when a "weak
32 | block" is found, it is communicated to the pool.
33 | 
34 | The share is itself a bearer proof that a certain amount of sha256 computation
35 | has been done. The share must also have a structure that indicates that it
36 | "belongs" to a particular pool. In the case of centralized pools, this happens
37 | because the pool itself hands out "work units" (bitcoin headers) corresponding
38 | to a block that has been created by the pool, with transaction selection done
39 | by the (centralized) pool.
40 | 
41 | In the case of a decentralized pool, the share itself must have additional
42 | structure that indicates to other miners in the pool that the share belongs to
43 | the pool, and if it had met bitcoin's difficulty target, the share contains
44 | commitments such that all *other* miners in the pool would be paid according to
45 | the share tally achieved by the decentralized [consensus
46 | mechanism](#consensus-mechanism) of the pool.
47 | 
48 | Shares or blocks which do not commit to the additional metadata proving that the
49 | share is part of the pool must be excluded from the share calculation, and those
50 | miners are not "part of" the pool. In other words, submitting a random sha256
51 | header to a pool must not count as a share contribution unless the ultimate
52 | payout for that share, had it become a bitcoin block, would have paid the pool
53 | in such a way that all other hashers are paid.
54 | 
55 | For example, consider a decentralized mining pool's "share" that looks like:
56 | 
57 |     Version | Previous Block Hash | Merkle Root | Timestamp | Difficulty Target | Nonce
58 |     Coinbase Transaction | Merkle Sibling | Merkle Sibling | ...
59 |     Pool Metadata
60 | 
61 | Here the `Merkle Siblings` in the second line are the additional nodes in the
62 | transaction Merkle tree necessary to verify that the specified `Coinbase
63 | Transaction` is included in the `Merkle Root`. We assume that this
64 | `Coinbase Transaction` commits to any additional data needed for the pool's
65 | [consensus mechanism](#consensus-mechanism), for instance in an `OP_RETURN`
66 | output.
67 | 
68 | The `Coinbase Transaction` is a standard transaction having no inputs, and
69 | should have the following outputs:
70 | 
71 |     OutPoint(Value: 0, scriptPubKey: OP_RETURN <pool_metadata_hash>)
72 |     OutPoint(Value: <total_value>, scriptPubKey: <pool_pubkey>)
73 | 
74 | The `<total_value>` is the sum of all fees and block reward for this halving
75 | epoch, and `<pool_pubkey>` is an address controlled collaboratively by the pool in
76 | such a way that the [consensus mechanism](#consensus-mechanism) can only spend
77 | it in such a way as to pay all hashers in the manner described by its share
78 | accounting.
79 | 
80 | The `<pool_metadata_hash>` is a hash of `Pool Metadata` committing to any
81 | additional data required for the operation of the pool. At a minimum, this must
82 | include the weak difficulty target $t$ (or it must be computable from this
83 | metadata). Validation of this share requires that the PoW hash of this bitcoin
84 | header be less than this weak difficulty target $t$.
85 | 
86 | Other things that one might want to include in the `Pool Metadata` are:
87 | 
88 | 1. Pubkeys of the hasher that can be used in collaboratively signing the
89 | [payout commitment](#payout-commitment),
90 | 2.
Keys necessary for encrypted communication with this miner, 91 | 3. Identifying information such as an IP address, TOR address, or other routing 92 | information that would allow other miners to communicate out-of-band with 93 | this miner 94 | 4. Parents of this share (bead, in the case of braidpool), or other 95 | consensus-specific data if some other [consensus 96 | mechansim](#consensus-mechanism) is used. 97 | 5. Intermediate consensus data involved in multi-round threshold signing 98 | ceremonies. 99 | 100 | Finally we note that there exists a proposal for [mining coordination using 101 | covenants (CTV)](https://utxos.org/uses/miningpools/) that does not use weak 102 | blocks, does not sample hashrate any faster than bitcoin blocks, and is 103 | incapable of reducing variance. It is therefore not a "pool" in the usual sense 104 | and we will not consider that design further, though covenants may still be 105 | useful for a decentralized mining pool, which we discuss in [Payout 106 | Commitments](#payout-commitments). 107 | 108 | # Consensus Mechanism 109 | 110 | In a centralized pool, the central pool operator receives all shares and does 111 | accounting on them. While this accounting is simple, the point of a 112 | decentralized mining pool is that we don't want to trust any single entity to do 113 | this correctly, nor do we want to give any single entity control to steal all 114 | funds in the pool, or the power to issue payments incorrectly. 115 | 116 | Because of this, all hashers must receive the [shares](#weak-blocks) of all 117 | other hashers. Each share could have been a bitcoin block if it had met 118 | bitcoin's difficulty target and must commit to the pool metadata as described 119 | above. 120 | 121 | With the set of shares for a given epoch, we must place a consensus mechanism on 122 | the shares, so that all participants of the pool agree that these are valid 123 | shares and deserve to be paid out according to the pool's payout mechanism. 124 | 125 | The consensus mechanism must have the characteristic that it operates much 126 | *faster* than bitcoin, so that it can collect as many valid shares as possible 127 | between valid bitcoin blocks. The reason for this is that one of the primary 128 | goals of a pool is *variance reduction*. 129 | [P2Pool](https://en.bitcoin.it/wiki/P2Pool) achieved this by using a standard 130 | blockchain having a block time of 30s, and the [Monero 131 | p2pool](https://github.com/SChernykh/p2pool) achieves it using a block time of 132 | 10s. 133 | 134 | Bitcoin has spawned a great amount of research into consensus algorithms which 135 | might be considered here including the [GHOST 136 | protocol](https://eprint.iacr.org/2013/881), [asynchronous 137 | PBFT](https://eprint.iacr.org/2016/199), "sampling" algorithms such as 138 | [Avalanche](https://arxiv.org/abs/1906.08936) and that used by 139 | [DFinity](https://arxiv.org/abs/1704.02397), and 140 | [DAG-based](https://eprint.iacr.org/2018/104) algorithms. (This is not an 141 | exhaustive bibliography but just a representative sample of the options) 142 | 143 | One characteristic that is common to all consensus algorithms is that consensus 144 | cannot be arrived at faster than roughly the global network latency. Regardless 145 | of which consensus algorithm is chosen, it is necessary for all participants to 146 | see all data going into the current state, and be able to agree that this is the 147 | correct current state. 
Surveying the networks developed with the above 148 | algorithms, one finds that the fastest they can come to consensus is in around 149 | one second. Therefore the exact "time-to-consensus" of different algorithms 150 | varies by an O(1) constant, all of them are around 1 second. This is around 600 151 | times faster than bitcoin blocks, and results in miners 600 times smaller being 152 | able to contribute, and a 600x factor reduction in variance compared to solo 153 | mining. 154 | 155 | While a 600x decrease in variance is a worthy goal, this is not enough of an 156 | improvement to allow a single modern mining device to reduce its variance enough 157 | to be worthwhile. Therefore, a different solution must be found for miners 158 | smaller than a certain hashrate. We present some ideas in 159 | [Sub-Pools](#sub-pools). 160 | 161 | From our perspective, the obvious choice for a consensus algorithm is a DAG 162 | which re-uses bitcoin's proof of work in the same spirit as bitcoin itself -- 163 | that is, the chain tip is defined by the heaviest work-weighted tip, and 164 | conflict resolution within the DAG uses work-weighting. Note that this is the 165 | same as the "longest chain rule" which only works at constant difficulty, but we 166 | assume the DAG does not have constant difficulty so combining difficulties must 167 | be done correctly. The solution is to identify the heaviest weight linear path 168 | from genesis to tip, where the difficulties are summed along the path. 169 | 170 | Finally, we caution that in considering mechanisms to include even smaller 171 | miners, one must not violate the "progress-free" characteristic of bitcoin. That 172 | is, one should not sum work from a subset of smaller miners to arrive at a 173 | higher-work block in the DAG. 174 | 175 | # Payout Commitment 176 | 177 | The Payout Commitment is the coinbase output on bitcoin, containing all funds 178 | from the block reward and fees in this block. This payout must commit to the 179 | share payout structure as calculated at the time the block is mined. In other 180 | words, it must represent and commit to the consensus of the decentralized mining 181 | pool's share accounting. 182 | 183 | Validating the output of the [consensus mechanism](#consensus-mechanism) is well 184 | beyond the capability of bitcoin script. Therefore generally one must find a 185 | mechanism such that a supermajority (Byzantine Fault Tolerant subset) of 186 | braidpool participants can sign the output, which is essentially reflecting the 187 | consensus about share payments into bitcoin. 188 | 189 | ## The Unspent Hasher Payment Output (UHPO) mechanism 190 | 191 | For the payout commitment we present a new and simple record accounting for 192 | shares. Consider the consensus mechanism as a UTXO-based blockchain analagous to 193 | bitcoin. The "UTXO set" of the consensus mechanism is the set of payment outputs 194 | for all hashers, with amounts decided by the recorded shares and consensus 195 | mechanism rules. 196 | 197 | We will term the set of hasher payments the Unspent Hasher Payment Output (UHPO) 198 | set. This is the "UTXO set" of the decentralized mining pool, and calculation 199 | and management of the UHPO set is the primary objective of the decentralized 200 | mining pool. 
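As a toy illustration of this accounting (the data layout, addresses, and amounts below are hypothetical, and real consensus rules additionally weight shares by difficulty), the UHPO set can be thought of as a proportional tally over recorded shares:

    # Toy tally of an Unspent Hasher Payment Output (UHPO) set from recorded shares.
    # Share records and reward amount are hypothetical placeholders.
    from collections import defaultdict

    def uhpo_set(shares, total_reward_sats):
        """shares: iterable of (miner_address, share_weight) records for the epoch."""
        weight = defaultdict(int)
        for addr, w in shares:
            weight[addr] += w
        total = sum(weight.values())
        # One output per unique miner, proportional to contributed share weight.
        return {addr: total_reward_sats * w // total for addr, w in weight.items()}

    print(uhpo_set([("miner_a", 3), ("miner_b", 1), ("miner_a", 2)], 625_000_000))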
201 | 
202 | The UHPO set can be simply represented as a transaction which has as inputs all
203 | unspent coinbases mined by the pool, and one output for each unique miner with
204 | an amount decided by his share contributions, subject to the consensus mechanism
205 | rules.
206 | 
207 | In p2pool this UHPO set was placed directly in the coinbase of every block,
208 | resulting in a large number of very small payments to hashers; the consequence
209 | was that p2pool's large coinbase with small outputs competed for block space
210 | with fee-paying transactions. One advantage of traditional pools is that they
211 | *aggregate* these payments over multiple blocks so that the number of
212 | withdrawals per hasher is reduced. A decentralized mining pool should do the
213 | same.
214 | 
215 | The commitment to the UHPO set in the coinbase output is a mechanism that allows
216 | all hashers to be correctly paid if the decentralized mining pool shuts down or
217 | fails after this block. As such, the UHPO set transaction(s) must be properly
218 | formed, fully signed and valid bitcoin transactions that can be broadcast. See
219 | [Payout Authorization](#payout-authorization) for considerations on how to
220 | sign/authorize this UHPO transaction.
221 | 
222 | We don't ever want to actually have to broadcast this UHPO set transaction
223 | except in the case of pool failure. Similar to other optimistic protocols like
224 | Lightning, we will withhold this transaction from bitcoin and update it
225 | out-of-band with respect to bitcoin. With each new block we will update the UHPO
226 | set transaction to account for any new shares since the last block mined by the
227 | pool.
228 | 
229 | Furthermore, a decentralized mining pool should support "withdrawal" by hashers.
230 | This would take the form of a special message or transaction sent to the pool
231 | (and agreed by consensus within the pool) to *remove* a hasher's output from the
232 | UHPO set transaction, and create a new separate transaction which pays that
233 | hasher, [authorizes](#payout-authorization) it, and broadcasts it to bitcoin.
234 | 
235 | ## Pool Transactions and Derivative Instruments
236 | 
237 | If the decentralized mining pool supports transactions of its own, one could
238 | "send shares" to another party. This operation replaces one party's address in
239 | the UHPO set transaction with that of another party. In this way unpaid shares
240 | can be delivered to an exchange, market maker, or OTC desk in exchange for
241 | immediate payment (over Lightning, for example) or as part of a derivatives
242 | contract.
243 | 
244 | The reason that delivery of shares can constitute a derivative contract is that
245 | they are actually a measurement of *hashrate* and have not yet settled to
246 | bitcoin. While we can compute the UHPO set at any point and convert that to
247 | bitcoin outputs given the amount of bitcoin currently mined by the pool, there
248 | remains uncertainty as to how many more blocks the pool will mine before
249 | settlement is requested, and how much in fees those blocks will have.
250 | 
251 | A private arrangement can be created where one party *buys future shares* from
252 | another in exchange for bitcoin up front. This is a *futures* contract, where
253 | the counterparty to the miner is taking on pool "luck" risk and fee rate risk.
254 | 
255 | In order to form hashrate derivatives, it must be possible to deliver shares
256 | across two different difficulty adjustment windows $d_1$ and $d_2$.
Shares in one difficulty
257 | adjustment window have a different value $BTC_1$ compared to shares in another window $BTC_2$,
258 | due to the difficulty adjustment itself. If one can compute the discrete derivative
259 | 
260 | $$
261 | \frac{d({\rm hashrate})}{d({\rm BTC})} = \frac{d_1-d_2}{BTC_1 - BTC_2}
262 | $$
263 | 
264 | then derivative instruments such as options and futures can be constructed by
265 | private contract, where shares from different difficulty adjustment epochs are
266 | delivered to the derivative contract counterparty in exchange for BTC, possibly
267 | with time restrictions. We do not describe further how to achieve this; here we
268 | are only pointing out that the sufficient conditions for the decentralized mining
269 | pool to support private contract derivative instruments are:
270 | 
271 | 1. The ability to send shares to another party
272 | 2. The ability to settle shares into BTC at a well-defined point in time with
273 | respect to the difficulty adjustment (for instance after the adjustment, for
274 | the previous epoch)
275 | 3. The ability to transact shares across two difficulty adjustment windows.
276 | 
277 | It may be tempting to turn a decentralized mining pool into a full DeFi
278 | marketplace with an order book. We caution that the problem of Miner Extractable Value
279 | (MEV) is a serious one that destroys fairness and confidence in the system, and
280 | should be avoided here. The only operations we consider here are (a) sending
281 | shares to another party and (b) requesting payout in BTC for shares.
282 | 
283 | Finally let us note that the value of a "share" is naturally fixed after each
284 | difficulty adjustment. Within one two-week difficulty adjustment window, each
285 | sha256d hash attempt has a fixed value in terms of BTC, but the exact amount of
286 | BTC is unknown until the next difficulty adjustment. Therefore, the 2-week
287 | difficulty adjustment window is a natural point to automatically broadcast the
288 | UHPO tree for the last epoch and settle out all shares from the previous epoch.
289 | 
290 | # Payout Authorization
291 | 
292 | In [Payout Commitment](#payout-commitment) we described a simple mechanism to
293 | represent shares and share payouts as decided by the [Consensus
294 | Mechanism](#consensus-mechanism) on shares at any point in time. However,
295 | bitcoin is incapable of evaluating the logic of the pool's consensus mechanism
296 | and we must find a simpler way to represent that share payout consensus to
297 | bitcoin, such that the coinbase outputs cannot be spent in any other way than as
298 | decided by the pool's consensus.
299 | 
300 | Probably the most straightforward way to authorize the share payouts and signing
301 | of coinbase outputs is to use a large threshold multi-signature. The set of
302 | signers can be any pool participant running the pool's consensus mechanism and
303 | having availability of all data to see that consensus mechanism's chain tip. We
304 | assume that in the [weak block](#weak-blocks) metadata, the pool participants
305 | include a pubkey with which they will collaboratively sign the payout
306 | authorization.
307 | 
308 | The most logical set of signers to authorize the coinbase spends is the set of
309 | miners who have already successfully mined a bitcoin block. We want to avoid
310 | having any single miner having unilateral control over a coinbase and the
311 | ability to steal the funds without paying other hashers.
As such the minimum 312 | number of signers is four, using the $(3f+1)$ rule from the Byzantine agreement 313 | literature. This means that on pool startup, the first 4 blocks must be directly 314 | and immediately paid out to hashers, as there are not enough known parties to 315 | sign a multi-signature, and we don't even know their pubkeys to construct a 316 | (P2TR, P2SH, etc) bitcoin output address and scriptPubKey. 317 | 318 | After the first 4 blocks, we assume that 66%+1 miners who have previously mined 319 | a block must sign the coinbase output(s), paying into the UHPO set transaction. 320 | 321 | This is probably the biggest unsolved problem in building a decentralized mining 322 | pool -- how to coordinate a large number of signers. If we assume that shares 323 | are paid out onto bitcoin with every difficulty adjustment, this is 2016 blocks 324 | and up to 1345 signers that must collaborate to make a threshold 325 | multi-signature. This is a very large number and generally well beyond the 326 | capabilities of available signing algorithms such as 327 | [FROST](https://eprint.iacr.org/2020/852), 328 | [ROAST](https://eprint.iacr.org/2022/550), 329 | [MP-ECDSA](https://eprint.iacr.org/2017/552), or [Lindell's threshold 330 | Schnorr](https://eprint.iacr.org/2022/374) 331 | algorithm. 332 | 333 | Below we discuss threshold Schnorr in more detail, but this may not be the only 334 | way to commit to and then authorize spending of coinbases into the UHPO tree. We 335 | encourage readers to find alternative solutions to this problem. The very large 336 | drawback to all signing algorithms we are able to find is that they are 337 | intolerant to failures. 338 | 339 | ## Schnorr Threshold Signatures 340 | 341 | We have reviewed a large amount of literature on threshold Schnorr algorithms. 342 | 343 | They all generally involve a Distributed Key Generation (DKG) phase using a 344 | variant of [Pedersen's 345 | DKG](https://link.springer.com/chapter/10.1007/3-540-46766-1_9), often 346 | augmenting it with polynomial commitments introduced by Feldman to achieve a 347 | [Verifiable Secret Sharing scheme 348 | (VSS)](https://ieeexplore.ieee.org/document/4568297). There are many papers with 349 | variations on this idea, each focusing on organizing rounds of communication, 350 | assumptions about communication (such as whether a broadcast channel exists) and 351 | security proofs. 352 | 353 | Participants in the threshold signature each contribute entropy in the DKG phase 354 | by creating and secret sharing their contribution to all other participants. In 355 | this way a key can be created with entropy input from all participants, such 356 | that no participant knows the key, but at the end of the DKG, all participants 357 | hold shares of it such that a t-of-n threshold number of shares must be Lagrange 358 | interpolated to reconstruct the secret. 359 | 360 | These secret shares are then used to compute a signature. Instead of directly 361 | reconstructing the secret key (which would give unilateral spending control to 362 | the party doing the reconstruction) one computes the signature using the 363 | secret share as the private key, and then Lagrange interpolation is performed on 364 | the resulting set of signatures instead. 365 | 366 | Both ECDSA and Schnorr signatures require a nonce $k$ which must additionally be 367 | agreed upon by the signing participants before signing, and is committed to in 368 | the signature itself. 
This is generally done by running an additional round of 369 | the DKG to compute $k$ such that everyone has a secret share of it. 370 | 371 | ### Distributed Key Generation 372 | 373 | # Transaction Selection 374 | 375 | The [Stratum V2](https://github.com/stratum-mining/sv2-spec) project is focusing 376 | on a model where hashers are responsible for constructing the block and 377 | selecting transactions. This is an improvement over Stratum V1 where the 378 | (centralized) pool chooses the block and transactions. 379 | 380 | The risk here is that the pool either censors valid transactions at the 381 | direction of a government entity, or prioritizes transactions through 382 | out-of-band payment, risking the "censorship resistant" property of the system. 383 | 384 | In the [Weak Blocks](#weak-blocks) section we did not indicate how transaction 385 | selection was done. This is a factorizable problem, and for a decentralized 386 | mining pool we also assume that individual hashers are constructing blocks, and 387 | the pool places no further restrictions on the transaction content of a block 388 | mined by a participating hasher. In fact, for weak blocks which do not meet 389 | bitcoin's difficulty threshold, it is probably best to elide the transaction set 390 | entirely for faster verification of shares. This introduces a problem that a 391 | hasher could construct a block with invalid transactions, but this would be 392 | easily discovered if that hasher ever mined a block, and his shares could 393 | invalidated. 394 | 395 | A transaction selection mechanism using both a decentralized mining pool and 396 | Stratum V2 should be able to easily slot into the block structure required by 397 | the decentralized mining pool as indicated in [weak blocks](#weak-blocks), as 398 | long as Stratum V2 is tolerant of the required coinbase and metadata structure. 399 | 400 | In our opinion simply allowing hashers to do transaction selection is 401 | insufficient, as centralized pools can simply withhold payment unless hashers 402 | select transactions according to the rules dictated by the pool. A full solution 403 | that restores bitcoin's censorship resistance requires decentralized payment as 404 | well. 405 | 406 | # Unsolved Problems and Future Directions 407 | 408 | The largest unsolved problem here is that of the [Payout 409 | Authorization](#payout-authorization). While off-the-shelf algorithms are 410 | available such as [ROAST](https://eprint.iacr.org/2022/550), they require fixing 411 | the set of signers and are intolerant to failure in either the nonce generation 412 | phase, the signing phase, or both. A threshold number of participants must be 413 | chosen, and must *all* remain online through the keygen and signing phase. If 414 | any participant fails, a different subset must be chosen and the process 415 | restarted. There does exist an [approach due to Joshi et 416 | al](https://link.springer.com/chapter/10.1007/978-3-031-08896-4_4) at the cost 417 | of an extra preprocessing step, which makes the final signature aggregation 418 | asynchronous assuming the nonce generation was successful, though the setup 419 | phases are still intolerant to failure. 420 | 421 | The fact that both ECDSA and Schnorr signatures require a nonce $k$ is a big 422 | drawback requiring an additional keygen round with everyone online that other 423 | systems such as BLS do not have. 
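To make the Lagrange-interpolation step described under Schnorr Threshold Signatures concrete, here is a toy sketch of t-of-n Shamir share recombination over a prime field. The parameters are tiny and purely illustrative; this is not a secure or complete threshold-signing implementation.

    # Toy Shamir/Lagrange recombination over a prime field (illustrative only).
    P = 2**127 - 1   # a Mersenne prime, standing in for the signing group's order

    def lagrange_at_zero(shares):
        """shares: list of (x_i, y_i) points; returns the interpolating polynomial at 0, mod P."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i == j:
                    continue
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    # 2-of-3 example with f(x) = 42 + 7x: any two shares recover f(0) = 42.
    shares = [(1, 49), (2, 56), (3, 63)]
    assert lagrange_at_zero(shares[:2]) == 42
    assert lagrange_at_zero(shares[1:]) == 42

In the schemes discussed above, the same interpolation is applied to signature shares rather than to the reconstructed secret key, so that no single party ever learns the key itself.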
424 | 425 | In practice if no new algorithm is found and an existing Schnorr threshold 426 | signature is used (something involving a DKG and Shamir sharing), a balance must 427 | be struck between having so many signers that payouts cannot be signed in a 428 | reasonable time, and so few signers that the system is insecure and coinbases 429 | could be stolen by a small subset. 430 | 431 | An approach that might be considered is to sub-sample the set of signers, and 432 | somehow aggregate signatures from subsets. As the resultant signatures would 433 | have different nonces, they cannot be straightforwardly aggregated, but this is 434 | the same problem as aggregating different signatures within a transaction or 435 | block, and approaches to [Cross Input Signature Aggregation 436 | (CISA)](https://github.com/ElementsProject/cross-input-aggregation) might be 437 | used here and might indicate the desirability of a future soft fork in this 438 | direction. 439 | 440 | ## Covenants 441 | 442 | One might take the UHPO set transaction and convtert it to a tree structure, 443 | using covenants to enforce the structure of the tree in descendant transactions. 444 | This is often done in the context of covenant-based soft fork proposals so that 445 | one party can execute his withdrawal while not having to force everyone else to 446 | withdraw at the same time. 447 | 448 | Because a decentralized mining pool is an active online system, it seems better 449 | to use an interactive method to write a new transaction for a withdrawal, than 450 | to allow broadcasting part of a tree. If part of a tree were broadcast, this 451 | must also be noticed by all miners and the share payouts updated. 452 | 453 | In our opinion the only reason the whole UHPO set transaction(s) would be 454 | broadcast is in a failure mode or shutdown of the pool, in which case the tree 455 | just increases the on-chain data load for no benefit. 456 | 457 | ## Sub-Pools 458 | 459 | Since a consensus system cannot achieve consensus faster than the global 460 | latency, this is an improvement in share size of at most about 1000x. In order 461 | to support even smaller hashers, one might consider "chaining" the decentralized 462 | mining pool to create a sub-pool. 463 | 464 | Instead of coinbase UTXOs as inputs to its UHPO set, a sub-pool would have UHPO 465 | set entries from a parent pool as entries in its UHPO set. With a separate 466 | consensus mechanism from its parent, a chain of two decentralized mining pools 467 | could allow hashers 1000000x smaller to participate. A pool could in principle 468 | dynamically create and destroy sub-pools, moving miners between the sub-pools 469 | and main pool dependent on their observed hashrate, so as to target a constant 470 | variance for all hashers. 471 | -------------------------------------------------------------------------------- /braids.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | # Sample code for doing computations with braids 4 | # 5 | # The code here emphasizes clarity over speed. We have used the memoize() 6 | # function to memoize functions that are called repeatedly with the same 7 | # arguments. Use of memoize is an indication that better algorithms exist. 
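# --- Illustrative sketch (assumption, not part of the original file) -----------
# The memoize() helper mentioned above is not shown in this excerpt; a minimal
# decorator of that kind could look like the following. The repository's actual
# helper may differ.
def memoize(fn):
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return wrapper
# --------------------------------------------------------------------------------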
8 | 9 | import hashlib 10 | import bitcoin # uses python-bitcoinlib https://github.com/petertodd/python-bitcoinlib 11 | from bitcoin.core import uint256_from_str as uint256 12 | import graph_tool.all as gt 13 | import graph_tool.draw as gtdraw 14 | import numpy as np 15 | from numpy.random import choice, sample, randint 16 | from copy import copy 17 | from math import sqrt, pi, sin, cos, acos 18 | 19 | NETWORK_SIZE = 1.0 # The round-trip time in seconds to traverse the network 20 | TICKSIZE = 0.1 # One "tick" of the network in which beads will be propagated and mined 21 | MAX_HASH = 2**256-1 # Maximum value a 256 bit unsigned hash can have, used to calculate targets 22 | 23 | bead_color = ( 27/255, 158/255, 119/255, 1) # Greenish 24 | genesis_color = (217/255, 95/255, 2/255, 1) # Orangeish 25 | cohort_color = (117/255, 112/255, 179/255, 1) # Purplish 26 | tip_color = (231/255, 41/255, 138/255, 1) # Pinkish 27 | sibling_color = (102/255, 166/255, 30/255, 1) # Light Greenish 28 | highlight1_color = ( 1, 1, 0, 1) # Yellow 29 | highlight2_color = ( 1, 0, 1, 1) # Magenta 30 | highlight3_color = ( 0, 1, 1, 1) # Yellow 31 | nohighlight_color = ( 1, 1, 1, 1) # White 32 | me_color = ( 0, 0, 0, 1) # Black 33 | 34 | descendant_color = highlight2_color 35 | ancestor_color = highlight3_color 36 | 37 | # A rotating color palette to color cohorts 38 | color_palette = [genesis_color, cohort_color, sibling_color, tip_color] 39 | 40 | #gencache = {} 41 | #gencache[True] = {} 42 | #gencache[False] = {} 43 | cohort_size_benchmark = [] # cohort size vs time 44 | 45 | def sha256(x: int): return hashlib.sha256(('%d'%x).encode()).digest() 46 | 47 | def printvset(vs): 48 | """ Print a (sub-)set of vertices in compact form. """ 49 | return("{"+",".join(sorted([str(v) for v in vs]))+"}") 50 | 51 | def sph_arclen(n1, n2): 52 | """ Compute the arc length on the surface of a unit sphere. """ 53 | # phi = 90 - latitude 54 | phi1 = (90.0 - n1.latitude)*pi/180.0 55 | phi2 = (90.0 - n2.latitude)*pi/180.0 56 | 57 | # theta = longitude 58 | theta1 = n1.longitude*pi/180.0 59 | theta2 = n2.longitude*pi/180.0 60 | 61 | c = sin(phi1)*sin(phi2)*cos(theta1-theta2) + cos(phi1)*cos(phi2) 62 | return acos(c) 63 | 64 | class Network: 65 | """ Abstraction for an entire network containing nodes. The network has 66 | a internal clock for simulation which uses . Latencies are taken 67 | from a uniform distribution on [0,1) so should be < 1. 68 | """ 69 | def __init__(self, nnodes, ticksize=TICKSIZE, npeers=4, target=None, topology='sphere'): 70 | self.t = 0 # the current "time" 71 | self.ticksize = ticksize # the size of a "tick": self.t += self.tick at each step 72 | self.npeers = npeers 73 | self.nnodes = nnodes 74 | self.genesis = uint256(sha256(0)) 75 | self.beads = {} # a hash map of all beads in existence 76 | #self.inflightdelay = {} # list of in-flight beads 77 | #self.mempool = set() # A list of transactions for everyone to mine. Everyone 78 | # sees the same mempool, p2p propegation delays are not modelled 79 | self.beads[self.genesis] = Bead(self.genesis, set(), set(), self, -1) 80 | # FIXME not modelling mempool propagation means that we will either have all blocks in a round have 81 | # the same tx, or none. Maybe each round mining should grab a random subset? 
82 | self.nodes = [Node(self.genesis, self, nodeid, target=target) for nodeid in range(nnodes)] 83 | latencies = None 84 | for (node, peers) in zip(self.nodes, [choice(list(set(range(nnodes)) - {me}), \ 85 | npeers, replace=False) for me in range(nnodes)]): 86 | #print("Node ", node, " has peers ", peers) 87 | if topology == 'sphere': 88 | latencies = [10*sph_arclen(node, self.nodes[i]) for i in peers]; 89 | node.setpeers([self.nodes[i] for i in peers], latencies) 90 | self.reset(target=target) 91 | 92 | def tick(self, mine=True): 93 | """ Execute one tick. """ 94 | self.t += self.ticksize 95 | 96 | # Create a new set of transaction IDs in the mempool 97 | #self.mempool.update([uint256(sha256(randint(2**63-1))) for dummy in range(randint(1,2))]) 98 | 99 | # Have each node attempt to mine a random subset of the global mempool 100 | for node in self.nodes: 101 | # numpy.random.choice doesn't work with sets :-( 102 | #node.tick(choice(list(self.mempool), randint(len(self.mempool)), replace=False), mine) 103 | node.tick(mine=mine) 104 | 105 | for (node, bead) in copy(self.inflightdelay): 106 | self.inflightdelay[(node, bead)] -= self.ticksize 107 | if self.inflightdelay[(node, bead)] < 0: 108 | node.receive(bead) 109 | del self.inflightdelay[(node, bead)] 110 | 111 | def broadcast(self, node, bead, delay): 112 | """ Announce a block/bead discovery to a node who is away. """ 113 | if bead not in node.beads: 114 | prevdelay = NETWORK_SIZE 115 | if (node,bead) in self.inflightdelay: prevdelay = self.inflightdelay[(node, bead)] 116 | self.inflightdelay[(node, bead)] = min(prevdelay, delay) 117 | 118 | def reset(self, target=None): 119 | self.t = 0 120 | self.beads = {} 121 | self.beads[self.genesis] = Bead(self.genesis, set(), set(), self, -1) 122 | self.inflightdelay = {} 123 | self.mempool = set() 124 | for node in self.nodes: 125 | node.reset(target) 126 | 127 | def printinflightdelays(self): 128 | for (node, bead) in self.inflightdelay: 129 | print("bead ", bead, " to node ", node, " will arrive in %fs"%self.inflightdelay[(node, bead)]) 130 | 131 | class Node: 132 | """ Abstraction for a node. """ 133 | def __init__(self, genesis, network, nodeid, target=None): 134 | self.genesis = genesis 135 | self.network = network 136 | self.peers = [] 137 | self.latencies = [] 138 | self.nodeid = nodeid 139 | # A salt for this node, so all nodes don't produce the same hashes 140 | self.nodesalt = uint256(sha256(randint(2**63-1))) 141 | self.nonce = 0 # Will be increased in the mining process 142 | self.reset(target) 143 | # Geospatial location information 144 | self.latitude = pi*(1/2-sample(1)) 145 | self.longitude = 2*pi*sample(1) 146 | 147 | def reset(self, target=None): 148 | self.beads = [self.network.beads[self.network.genesis]] # A list of beads in the order received 149 | self.braids = [Braid(self.beads)] # A list of viable braids, each having a braid tip 150 | self.mempool = set() # A set of txids I'm mining 151 | self.incoming = set() # incoming beads we were unable to process 152 | self.target = target 153 | self.braids[0].tips = {self.beads[0]} 154 | self.hremaining = np.random.geometric(self.target/MAX_HASH) 155 | 156 | def __str__(self): 157 | return ""%self.nodeid 158 | 159 | def setpeers(self, peers, latencies=None): 160 | """ Add a peer separated by a latency . 
""" 161 | self.peers = peers 162 | if latencies: self.latencies = latencies 163 | else: self.latencies = sample(len(peers))*NETWORK_SIZE 164 | assert(len(self.peers) == len(self.latencies)) 165 | 166 | def tick(self, newtxs=[], mine=True): 167 | """ Add a Bead satisfying . """ 168 | # First try to extend all braids by received beads that have not been added to a braid 169 | newincoming = set() 170 | oldtips = self.braids[0].tips 171 | while len(newincoming) != len(self.incoming): 172 | for bead in self.incoming: 173 | for braid in self.braids: 174 | if not braid.extend(bead): 175 | newincoming.add(bead) 176 | self.incoming = newincoming 177 | if mine: 178 | #PoW = uint256(sha256(self.nodesalt+self.nonce)) 179 | PoW = uint256(sha256(np.random.randint(1<<64-1)*np.random.randint(1<<64-1))) 180 | self.nonce += 1 181 | if PoW < self.target: 182 | b = Bead(PoW, copy(self.braids[0].tips), copy(self.mempool), self.network, self.nodeid) 183 | self.receive(b) # Send it to myself (will rebroadcast to peers) 184 | # TODO remove txids from mempool 185 | else : 186 | self.hremaining -= 1 187 | if(self.hremaining <= 0): 188 | PoW = (uint256(sha256(self.nodesalt+self.nonce))*self.target)//MAX_HASH 189 | self.nonce += 1 190 | # The expectation of how long it will take to mine a block is Geometric 191 | # This node will generate one after this many hashing rounds (ticks) 192 | b = Bead(PoW, copy(self.braids[0].tips), copy(self.mempool), self.network, self.nodeid) 193 | self.receive(b) # Send it to myself (will rebroadcast to peers) 194 | self.hremaining = np.random.geometric(self.target/MAX_HASH) 195 | elif(self.braids[0].tips != oldtips): 196 | # reset mining if we have new tips 197 | self.hremaining = np.random.geometric(self.target/MAX_HASH) 198 | 199 | def receive(self, bead): 200 | """ Recieve announcement of a new bead. """ 201 | # TODO Remove txids from mempool 202 | if bead in self.beads: return 203 | else: self.beads.append(bead) 204 | for braid in self.braids: 205 | if not braid.extend(bead): 206 | self.incoming.add(bead) # index of vertex is index in list 207 | self.send(bead) 208 | 209 | def send(self, bead): 210 | """ Announce a new block from a peer to this node. """ 211 | for (peer, delay) in zip(self.peers, self.latencies): 212 | self.network.broadcast(peer, bead, delay) 213 | 214 | class Bead: 215 | """ A bead is either a block of transactions, or an individual transaction. 216 | This class stores auxiliary information about a bead and is separate 217 | from the vertex being stored by the Braid class. Beads are stored by 218 | the Braid object in the same order as vertices. So if you know the 219 | vertex v, the Bead instance is Braid.beads[int(v)]. graph_tool vertices 220 | can be cast to integers as int(v), giving their index. 
221 | """ 222 | 223 | # FIXME lots of stuff here 224 | def __init__(self, hash, parents, transactions, network, creator): 225 | self.t = network.t 226 | self.hash = hash # a hash that identifies this block 227 | self.parents = parents 228 | self.children = set() # filled in by Braid.make_children() 229 | self.siblings = set() # filled in by Braid.analyze 230 | self.cohort = set() # filled in by Braid.analyze 231 | self.transactions = transactions 232 | self.network = network 233 | self.creator = creator 234 | if creator != -1: # if we're not the genesis block (which has no creator node) 235 | self.difficulty = MAX_HASH/network.nodes[creator].target 236 | else: self.difficulty = 1 237 | self.sibling_difficulty = 0 238 | network.beads[hash] = self # add myself to global list 239 | self.reward = None # this bead's reward (filled in by Braid.rewards) 240 | 241 | def __str__(self): 242 | return ""%(self.hash%10000) 243 | 244 | class Braid(gt.Graph): 245 | """ A Braid is a Directed Acyclic Graph with no incest (parents may not also 246 | be non-parent ancestors). A braid may have multiple tips. """ 247 | 248 | def __init__(self, beads=[]): 249 | super().__init__(directed=True, vorder=True) 250 | self.times = self.new_vertex_property("double") 251 | self.beads = [] # A list of beads in this braid 252 | self.vhashes = {} # A dict of (hash, Vertex) for each vertex 253 | self.vcolors = self.new_vertex_property("vector") # vertex colorings 254 | self.vhcolors = self.new_vertex_property("vector") # vertex halo colorings 255 | self.groups = self.new_vertex_property("vector") # vertex group (cohort number) 256 | self.vsizes = self.new_vertex_property("float") # vertex size 257 | self.ncohorts = -1 # updated by cohorts() 258 | 259 | if beads: 260 | for b in beads: 261 | self.beads.append(b) # Reference to a list of beads 262 | self.vhashes[b.hash] = self.add_vertex() 263 | self.vhashes[b.hash].bead = b 264 | self.vcolors[self.vhashes[b.hash]] = genesis_color 265 | self.vhcolors[self.vhashes[b.hash]] = nohighlight_color 266 | self.vsizes[self.vhashes[b.hash]] = 14 267 | # FIXME add edges if beads has more than one element. 268 | self.tips = {beads[-1]} 269 | self.tips = set() # A tip is a bead with no children. 270 | 271 | def extend(self, bead): 272 | """ Add a bead to the end of this braid. Returns True if the bead 273 | successfully extended this braid, and False otherwise. """ 274 | if (not bead.parents # No parents -- bad block 275 | or not all([p.hash in self.vhashes for p in bead.parents]) # We don't have all parents 276 | or bead in self.beads): # We've already seen this bead 277 | return False 278 | self.beads.append(bead) 279 | self.vhashes[bead.hash] = self.add_vertex() 280 | self.vhashes[bead.hash].bead = bead 281 | self.vcolors[self.vhashes[bead.hash]] = bead_color 282 | self.vhcolors[self.vhashes[bead.hash]] = nohighlight_color 283 | self.vsizes[self.vhashes[bead.hash]] = 14 284 | for p in bead.parents: 285 | self.add_edge(self.vhashes[bead.hash], self.vhashes[p.hash]) 286 | self.times[self.vhashes[bead.hash]] = bead.t 287 | if p in self.tips: 288 | self.tips.remove(p) 289 | self.tips.add(bead) 290 | return True 291 | 292 | def rewards(self, coinbase): 293 | """ Compute the rewards for each bead, where each cohort is awarded 294 | coins. 295 | FIXME splitting of tx fees not implemented. 
296 | """ 297 | for cohort in self.cohorts(): 298 | for c in cohort: 299 | siblings = cohort - self.ancestors(c, cohort) - self.descendants(c, cohort) - {c} 300 | bc = self.beads[int(c)] 301 | # Denominator (normalization) for distribution among siblings 302 | bc.sibling_difficulty = MAX_HASH/(sum([self.beads[int(s)].difficulty for s in siblings]) 303 | + bc.difficulty) 304 | N = sum([self.beads[int(c)].difficulty/self.beads[int(c)].sibling_difficulty for c in cohort]) 305 | for c in cohort: 306 | bc = self.beads[int(c)] 307 | bc.reward = coinbase*(bc.difficulty/bc.sibling_difficulty)/N 308 | 309 | # FIXME I can make 3-way siblings too: find the common ancestor of any 3 siblings 310 | # and ask what its rank is... 311 | def siblings(self): 312 | """ The siblings of a bead are other beads for which it cannot be 313 | decided whether the come before or after this bead in time. 314 | Note that it does not make sense to call siblings() on a cohort 315 | which contains dangling chain tips. The result is a dict of 316 | (s,v): (m,n) 317 | which can be read as: 318 | The sibling $s$ of vertex $v$ has a common ancestor $m$ 319 | generations away from $v$ and a common descendant $n$ 320 | generations away from $v$. 321 | """ 322 | retval = dict() 323 | if self.ncohorts < 0: 324 | for c in c.cohorts(): pass # force cohorts() to generate all cohorts 325 | # FIXME Since siblings are mutual, we could compute (s,c) and (c,s) at the same time 326 | for (cohort, ncohort) in zip(self.cohorts(), range(self.ncohorts)): 327 | for c in cohort: 328 | #siblings = cohort - self.ancestors(c, cohort) - self.descendants(c, cohort) - {c} 329 | for s in self.sibling_cache[ncohort][c]: 330 | ycas = self.youngest_common_ancestors({s,c}) 331 | ocds = self.oldest_common_descendants({s,c}) 332 | # Step through each generation of parents/children until the common ancestor is found 333 | pgen = {s} # FIXME either c or s depending on whether we want to step from the 334 | for m in range(1,len(cohort)): 335 | pgen = {q for p in pgen for q in self.parents(p) } 336 | if pgen.intersection(ycas) or not pgen: break 337 | cgen = {s} # FIXME and here 338 | for n in range(1,len(cohort)): 339 | cgen = {q for p in cgen for q in self.children(p)} 340 | if cgen.intersection(ocds) or not cgen: break 341 | retval[int(s),int(c)] = (m,n) 342 | return retval 343 | 344 | def cohorts(self, initial_cohort=None, older=False, cache=False): 345 | """ Given the seed of the next cohort (which is the set of beads one step older, in the next 346 | cohort), build an ancestor and descendant set for each visited bead. A new cohort is 347 | formed if we encounter a set of beads, stepping in the descendant direction, for which 348 | *all* beads in this cohort are ancestors of the first generation of beads in the next 349 | cohort. 350 | 351 | This function will not return the tips nor any beads connected to them, that do not yet 352 | form a cohort. 
353 | """ 354 | cohort = head = nexthead = initial_cohort or frozenset([self.vertex(0)]) 355 | parents = ancestors = {h: self.next_generation(h, not older) - cohort for h in head} 356 | ncohorts = 0 357 | if cache and not hasattr(self, 'cohort_cache'): 358 | self.cohort_cache = [] 359 | self.sibling_cache = {} 360 | # These caches also contain the first beads *outside* the cohort in both the 361 | # (ancestor,descendant) directions and are used primarily for finding siblings 362 | self.ancestor_cache = {} # The ancestors of each bead, *within* their own cohort 363 | self.descendant_cache = {} # The descendants of each bead, *within* their own cohort 364 | while True : 365 | if cache and ncohorts in self.cohort_cache: 366 | yield self.cohort_cache[ncohorts] 367 | else: 368 | if cache: 369 | acohort = cohort.union(self.next_generation(head, older)) # add the youngest beads in the ancestor cohort 370 | ancestors = {int(v): frozenset(map(int, 371 | self.ancestors(v, acohort))) for v in acohort} 372 | self.ancestor_cache[ncohorts] = ancestors 373 | dcohort = cohort.union(self.next_generation(head, not older)) # add the oldest beads in the descendant cohort 374 | descendants = {int(v): frozenset(map(int, 375 | self.descendants(v, dcohort))) for v in dcohort} 376 | self.descendant_cache[ncohorts] = descendants 377 | self.cohort_cache.append(cohort) 378 | self.sibling_cache[ncohorts] = {v: cohort - ancestors[v] - descendants[v] - 379 | frozenset([v]) for v in cohort} 380 | yield cohort 381 | ncohorts += 1 382 | gen = head = nexthead 383 | parents = ancestors = {h: self.next_generation(h, not older) - cohort for h in head} 384 | while True : 385 | gen = self.next_generation(gen, older) 386 | if not gen: 387 | self.ncohorts = ncohorts 388 | return # Ends the iteration (StopIteration) 389 | for v in gen: parents[v] = self.next_generation(v, not older) 390 | while True: # Update ancestors: parents plus its parents' parents 391 | oldancestors = {v: ancestors[v] for v in gen} # loop because ancestors may have new ancestors 392 | for v in gen: 393 | if all([p in ancestors for p in parents[v]]): # If we have ancestors for all parents, 394 | ancestors[v] = parents[v].union(*[ancestors[p] for p in parents[v]]) # update the ancestors 395 | if oldancestors == {v: ancestors[v] for v in gen}: break 396 | if(all([p in ancestors] for p in frozenset.union(*[parents[v] for v in gen]))# we have no missing ancestors 397 | and all([h in ancestors[v] for h in head for v in gen])): # and everyone has all head beads as ancestors 398 | cohort = frozenset.intersection(*[ancestors[v] for v in gen]) # We found a new cohort 399 | nexthead = self.next_generation(cohort, older) - cohort 400 | tail = self.next_generation(nexthead, not older) # the oldest beads in the candidate cohort 401 | if all([n in ancestors and p in ancestors[n] for n in nexthead for p in tail]): 402 | break 403 | 404 | def cohort_time(self): 405 | """ Compute the average cohort time and its standard deviation returned 406 | as a tuple (mean, stddev). """ 407 | t = 0 408 | ctimes = [] 409 | for c in self.cohorts(): 410 | if c == {self.vertex(0)}: continue # skip genesis bead 411 | times = [self.beads[int(v)].t for v in c] 412 | ctimes.append(max(times)-t) 413 | t = max(times) 414 | return (np.mean(ctimes), np.std(ctimes)) 415 | 416 | def exclude(self, vs, predicate): 417 | """ Recursively exclude beads which satisfy a predicate (usually either 418 | parents or children) -- this removes all ancestors or descendants. 
""" 419 | lastvs = copy(vs) 420 | while True: 421 | newvs = {v for v in vs if predicate(v) not in vs} 422 | if newvs == lastvs: return newvs 423 | lastvs = newvs 424 | 425 | def common_generation(self, vs, older=True): 426 | """ Find the first common ancestor/descendant generation of all vs, and 427 | all intermediate ancestors/descendants by bfs. This is analagous to the 428 | Most Recent Common Ancestor (MRCA) in biology. The first return value 429 | should be the seed for the *next* cohort while the second return value 430 | is the *current* cohort. """ 431 | if older: (edgef, nodef, nextgen_f) = ("out_edges","target", self.parents) 432 | else: (edgef, nodef, nextgen_f) = ("in_edges", "source", self.children) 433 | if not isinstance(vs, set): vs = {vs} 434 | lastvs = self.exclude(vs, nextgen_f) 435 | nextgen = lastgen = {v: nextgen_f(v) for v in lastvs} 436 | firstv = next(iter(lastvs)) 437 | niter = 0 438 | while True: 439 | commond = frozenset.intersection(*[nextgen[v] for v in nextgen]) - lastvs 440 | if commond: return commond 441 | else: # add one generation of descendants for bfs 442 | nextgenupd = dict() 443 | for v in lastgen: 444 | nextgenupd[v] = nextgen_f(lastgen[v]) 445 | nextgen[v] = frozenset.union(nextgen[v], nextgenupd[v]) 446 | # We hit a tip, on all paths there can be no common descendants 447 | if not all([nextgenupd[v] for v in nextgenupd]): 448 | return set() 449 | lastgen = nextgen 450 | niter += 1 451 | if niter > 1000: 452 | raise Exception("infinite loop in common_generation? ") 453 | 454 | def oldest_common_descendants(self, vs): 455 | return self.common_generation(vs, older=False) 456 | 457 | def youngest_common_ancestors(self, vs): 458 | return self.common_generation(vs, older=True) 459 | 460 | def all_generations(self, v:gt.Vertex, older, cohort=None, limit=None): 461 | """ Return all vertices in older or younger depending on the value of . """ 462 | result = gen = self.next_generation(frozenset([v]), older) 463 | while gen: 464 | gen = self.next_generation(gen, older) 465 | result = result.union(gen) 466 | if cohort: result = result.intersection(cohort) 467 | return result 468 | 469 | def ancestors(self, v:gt.Vertex, cohort=None, limit=None): 470 | return self.all_generations(v, older=True, cohort=cohort, limit=limit) 471 | 472 | def descendants(self, v:gt.Vertex, cohort=None, limit=None): 473 | return self.all_generations(v, older=False, cohort=cohort, limit=limit) 474 | 475 | def next_generation(self, vs, older): 476 | """ Returns the set of vertices one generation from in the direction. """ 477 | if older: (edgef, nodef) = ("out_edges","target") 478 | else: (edgef, nodef) = ("in_edges", "source") 479 | if isinstance(vs, gt.Vertex): 480 | return frozenset([getattr(y, nodef)() for y in getattr(vs,edgef)()]) 481 | elif isinstance(vs, frozenset): 482 | ng = [self.next_generation(v, older) for v in vs] 483 | if not ng: return frozenset() 484 | else: return frozenset.union(*ng) 485 | 486 | def parents(self, vs): 487 | return self.next_generation(vs, older=True) 488 | 489 | def children(self, vs): 490 | return self.next_generation(vs, older=False) 491 | 492 | def plot(self, focusbead=None, cohorts=True, focuscohort=None, numbervertices=False, 493 | highlightancestors=False, output=None, rewards=False, layout=None, **kwargs): 494 | """ Plot this braid, possibly coloring graph cuts. 495 | indicates which bead to consider for coloring its siblings and 496 | cohort. 
""" 497 | vlabel = self.new_vertex_property("string") 498 | pos = self.new_vertex_property("vector") 499 | if layout: pos = layout(self, **kwargs) 500 | else: pos = self.braid_layout(**kwargs) 501 | n = 0 502 | 503 | kwargs = {'vertex_size': self.vsizes, 504 | 'vertex_font_size':10, 505 | 'nodesfirst':True, 506 | 'vertex_text':vlabel, 507 | 'vertex_halo':True, 508 | 'vertex_halo_size':0, 509 | 'vertex_fill_color':self.vcolors, 510 | 'vertex_halo_color':self.vhcolors, 511 | 'pos':pos} 512 | if rewards: 513 | # We want the sum of the area of the beads to be a constant. Since 514 | # the area is pi r^2, the vertex size should scale like the sqrt of 515 | # the reward 516 | self.rewards(400) 517 | for v in self.vertices(): 518 | if self.beads[int(v)].reward: 519 | self.vsizes[v] = sqrt(self.beads[int(v)].reward) 520 | else: 521 | self.vsizes[v] = 0 522 | if output: kwargs['output'] = output 523 | if focusbead: 524 | if not hasattr(self, 'sibling_cache'): 525 | for c in self.cohorts(cache=True): pass 526 | ancestors = self.ancestors(focusbead) 527 | descendants = self.descendants(focusbead) 528 | kwargs['vertex_halo_size'] = 1.5 529 | 530 | for v in self.vertices(): 531 | # Decide the bead's color 532 | if v.out_degree() == 0: 533 | self.vcolors[v] = genesis_color 534 | elif v.in_degree() == 0: 535 | self.vcolors[v] = tip_color 536 | else: 537 | self.vcolors[v] = bead_color 538 | 539 | # Decide the highlight color 540 | if v == focusbead: 541 | self.vhcolors[v] = me_color 542 | elif v in ancestors and highlightancestors: 543 | self.vhcolors[v] = ancestor_color 544 | elif v in descendants and highlightancestors: 545 | self.vhcolors[v] = descendant_color 546 | else: 547 | self.vhcolors[v] = nohighlight_color 548 | 549 | # Label our siblings with their rank 550 | siblings = self.siblings() 551 | for cohort in self.cohorts(): 552 | if focusbead in cohort: 553 | for c in cohort: 554 | self.vcolors[c] = cohort_color 555 | for (s,v),(m,n) in siblings.items(): 556 | if v == focusbead: 557 | vlabel[self.vertex(s)] = "%d,%d"%(m,n) 558 | #self.vcolors[s] = sibling_color 559 | break 560 | else: 561 | cnum = 0 562 | if cohorts: 563 | for c in self.cohorts(): 564 | for v in c: 565 | if focuscohort == cnum: 566 | self.vhcolors[v] = highlight1_color 567 | self.vcolors[v] = color_palette[cnum%len(color_palette)] 568 | self.vhcolors[v] = nohighlight_color 569 | cnum += 1 570 | if numbervertices: kwargs['vertex_text'] = self.vertex_index 571 | 572 | return gtdraw.graph_draw(self, **kwargs) 573 | 574 | def braid_layout(self, **kwargs): 575 | """ Create a position vertex property for a braid. We use the actual bead time for the x 576 | coordinate, and a spring model for determining y. 577 | 578 | FIXME how do we minimize crossing edges? 579 | """ 580 | # FIXME what I really want here is symmetry about a horizontal line, and a spring-block layout that 581 | # enforces the symmetry. 582 | # 1. randomly assign vertices to "above" or "below" the midline. 583 | # 2. Compute how many edges cross the midline ( 584 | # 3. Compute SFDP holding everything fixed except those above, in a single cohort. 585 | # 4. Repeat for the half of the cohort below the midline 586 | # 5. Iterate this a couple times... 
587 | groups = self.new_vertex_property("int") 588 | pos = self.new_vertex_property("vector") 589 | pin = self.new_vertex_property("bool") 590 | xpos = 1 591 | for c in self.cohorts(): 592 | head = gen = self.children(self.parents(c)-c) 593 | for (v,m) in zip(head, range(len(head))): 594 | pin[v] = True 595 | pos[v] = np.array([xpos, len(head)-1-2*m]) 596 | while gen.intersection(c): 597 | gen = self.children(gen).intersection(c) 598 | xpos += 1 599 | xpos -= 1 # We already stepped over the tail in the above loop 600 | tail = self.parents(self.children(c) - c) - head 601 | for (v,m) in zip(tail, range(len(tail))): 602 | pin[v] = True 603 | pos[v] = np.array([xpos, len(tail)-1-2*m]) 604 | xpos += 1 605 | # position remaining beads not in a cohort but not tips 606 | gen = self.children(c) - c 607 | for (v,m) in zip(gen,range(len(gen))): 608 | pos[v] = np.array([xpos, len(gen)-1-2*m]) 609 | while True: # Count number of generations to tips 610 | gen = self.children(gen) - gen 611 | if not gen: break 612 | xpos += 1 613 | # position tips 614 | tips = frozenset(map(lambda x: self.vhashes[x.hash], self.tips)) 615 | for (v,m) in zip(tips,range(len(tips))): 616 | pos[v] = np.array([xpos, len(tips)-1-2*m]) 617 | pin[v] = True 618 | 619 | # feed it all to the spring-block algorithm. 620 | if 'C' not in kwargs: kwargs['C'] = 0.1 621 | if 'K' not in kwargs: kwargs['K'] = 2 622 | return gt.sfdp_layout(self, pos=pos, pin=pin, groups=groups, **kwargs) 623 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | Version 3, 29 June 2007 3 | 4 | Copyright (C) 2007 Free Software Foundation, Inc. 5 | Everyone is permitted to copy and distribute verbatim copies 6 | of this license document, but changing it is not allowed. 7 | 8 | Preamble 9 | 10 | The GNU General Public License is a free, copyleft license for 11 | software and other kinds of works. 12 | 13 | The licenses for most software and other practical works are designed 14 | to take away your freedom to share and change the works. By contrast, 15 | the GNU General Public License is intended to guarantee your freedom to 16 | share and change all versions of a program--to make sure it remains free 17 | software for all its users. We, the Free Software Foundation, use the 18 | GNU General Public License for most of our software; it applies also to 19 | any other work released this way by its authors. You can apply it to 20 | your programs, too. 21 | 22 | When we speak of free software, we are referring to freedom, not 23 | price. Our General Public Licenses are designed to make sure that you 24 | have the freedom to distribute copies of free software (and charge for 25 | them if you wish), that you receive source code or can get it if you 26 | want it, that you can change the software or use pieces of it in new 27 | free programs, and that you know you can do these things. 28 | 29 | To protect your rights, we need to prevent others from denying you 30 | these rights or asking you to surrender the rights. Therefore, you have 31 | certain responsibilities if you distribute copies of the software, or if 32 | you modify it: responsibilities to respect the freedom of others. 33 | 34 | For example, if you distribute copies of such a program, whether 35 | gratis or for a fee, you must pass on to the recipients the same 36 | freedoms that you received. 
You must make sure that they, too, receive 37 | or can get the source code. And you must show them these terms so they 38 | know their rights. 39 | 40 | Developers that use the GNU GPL protect your rights with two steps: 41 | (1) assert copyright on the software, and (2) offer you this License 42 | giving you legal permission to copy, distribute and/or modify it. 43 | 44 | For the developers' and authors' protection, the GPL clearly explains 45 | that there is no warranty for this free software. For both users' and 46 | authors' sake, the GPL requires that modified versions be marked as 47 | changed, so that their problems will not be attributed erroneously to 48 | authors of previous versions. 49 | 50 | Some devices are designed to deny users access to install or run 51 | modified versions of the software inside them, although the manufacturer 52 | can do so. This is fundamentally incompatible with the aim of 53 | protecting users' freedom to change the software. The systematic 54 | pattern of such abuse occurs in the area of products for individuals to 55 | use, which is precisely where it is most unacceptable. Therefore, we 56 | have designed this version of the GPL to prohibit the practice for those 57 | products. If such problems arise substantially in other domains, we 58 | stand ready to extend this provision to those domains in future versions 59 | of the GPL, as needed to protect the freedom of users. 60 | 61 | Finally, every program is threatened constantly by software patents. 62 | States should not allow patents to restrict development and use of 63 | software on general-purpose computers, but in those that do, we wish to 64 | avoid the special danger that patents applied to a free program could 65 | make it effectively proprietary. To prevent this, the GPL assures that 66 | patents cannot be used to render the program non-free. 67 | 68 | The precise terms and conditions for copying, distribution and 69 | modification follow. 70 | 71 | TERMS AND CONDITIONS 72 | 73 | 0. Definitions. 74 | 75 | "This License" refers to version 3 of the GNU General Public License. 76 | 77 | "Copyright" also means copyright-like laws that apply to other kinds of 78 | works, such as semiconductor masks. 79 | 80 | "The Program" refers to any copyrightable work licensed under this 81 | License. Each licensee is addressed as "you". "Licensees" and 82 | "recipients" may be individuals or organizations. 83 | 84 | To "modify" a work means to copy from or adapt all or part of the work 85 | in a fashion requiring copyright permission, other than the making of an 86 | exact copy. The resulting work is called a "modified version" of the 87 | earlier work or a work "based on" the earlier work. 88 | 89 | A "covered work" means either the unmodified Program or a work based 90 | on the Program. 91 | 92 | To "propagate" a work means to do anything with it that, without 93 | permission, would make you directly or secondarily liable for 94 | infringement under applicable copyright law, except executing it on a 95 | computer or modifying a private copy. Propagation includes copying, 96 | distribution (with or without modification), making available to the 97 | public, and in some countries other activities as well. 98 | 99 | To "convey" a work means any kind of propagation that enables other 100 | parties to make or receive copies. Mere interaction with a user through 101 | a computer network, with no transfer of a copy, is not conveying. 
102 | 103 | An interactive user interface displays "Appropriate Legal Notices" 104 | to the extent that it includes a convenient and prominently visible 105 | feature that (1) displays an appropriate copyright notice, and (2) 106 | tells the user that there is no warranty for the work (except to the 107 | extent that warranties are provided), that licensees may convey the 108 | work under this License, and how to view a copy of this License. If 109 | the interface presents a list of user commands or options, such as a 110 | menu, a prominent item in the list meets this criterion. 111 | 112 | 1. Source Code. 113 | 114 | The "source code" for a work means the preferred form of the work 115 | for making modifications to it. "Object code" means any non-source 116 | form of a work. 117 | 118 | A "Standard Interface" means an interface that either is an official 119 | standard defined by a recognized standards body, or, in the case of 120 | interfaces specified for a particular programming language, one that 121 | is widely used among developers working in that language. 122 | 123 | The "System Libraries" of an executable work include anything, other 124 | than the work as a whole, that (a) is included in the normal form of 125 | packaging a Major Component, but which is not part of that Major 126 | Component, and (b) serves only to enable use of the work with that 127 | Major Component, or to implement a Standard Interface for which an 128 | implementation is available to the public in source code form. A 129 | "Major Component", in this context, means a major essential component 130 | (kernel, window system, and so on) of the specific operating system 131 | (if any) on which the executable work runs, or a compiler used to 132 | produce the work, or an object code interpreter used to run it. 133 | 134 | The "Corresponding Source" for a work in object code form means all 135 | the source code needed to generate, install, and (for an executable 136 | work) run the object code and to modify the work, including scripts to 137 | control those activities. However, it does not include the work's 138 | System Libraries, or general-purpose tools or generally available free 139 | programs which are used unmodified in performing those activities but 140 | which are not part of the work. For example, Corresponding Source 141 | includes interface definition files associated with source files for 142 | the work, and the source code for shared libraries and dynamically 143 | linked subprograms that the work is specifically designed to require, 144 | such as by intimate data communication or control flow between those 145 | subprograms and other parts of the work. 146 | 147 | The Corresponding Source need not include anything that users 148 | can regenerate automatically from other parts of the Corresponding 149 | Source. 150 | 151 | The Corresponding Source for a work in source code form is that 152 | same work. 153 | 154 | 2. Basic Permissions. 155 | 156 | All rights granted under this License are granted for the term of 157 | copyright on the Program, and are irrevocable provided the stated 158 | conditions are met. This License explicitly affirms your unlimited 159 | permission to run the unmodified Program. The output from running a 160 | covered work is covered by this License only if the output, given its 161 | content, constitutes a covered work. This License acknowledges your 162 | rights of fair use or other equivalent, as provided by copyright law. 
163 | 164 | You may make, run and propagate covered works that you do not 165 | convey, without conditions so long as your license otherwise remains 166 | in force. You may convey covered works to others for the sole purpose 167 | of having them make modifications exclusively for you, or provide you 168 | with facilities for running those works, provided that you comply with 169 | the terms of this License in conveying all material for which you do 170 | not control copyright. Those thus making or running the covered works 171 | for you must do so exclusively on your behalf, under your direction 172 | and control, on terms that prohibit them from making any copies of 173 | your copyrighted material outside their relationship with you. 174 | 175 | Conveying under any other circumstances is permitted solely under 176 | the conditions stated below. Sublicensing is not allowed; section 10 177 | makes it unnecessary. 178 | 179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law. 180 | 181 | No covered work shall be deemed part of an effective technological 182 | measure under any applicable law fulfilling obligations under article 183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or 184 | similar laws prohibiting or restricting circumvention of such 185 | measures. 186 | 187 | When you convey a covered work, you waive any legal power to forbid 188 | circumvention of technological measures to the extent such circumvention 189 | is effected by exercising rights under this License with respect to 190 | the covered work, and you disclaim any intention to limit operation or 191 | modification of the work as a means of enforcing, against the work's 192 | users, your or third parties' legal rights to forbid circumvention of 193 | technological measures. 194 | 195 | 4. Conveying Verbatim Copies. 196 | 197 | You may convey verbatim copies of the Program's source code as you 198 | receive it, in any medium, provided that you conspicuously and 199 | appropriately publish on each copy an appropriate copyright notice; 200 | keep intact all notices stating that this License and any 201 | non-permissive terms added in accord with section 7 apply to the code; 202 | keep intact all notices of the absence of any warranty; and give all 203 | recipients a copy of this License along with the Program. 204 | 205 | You may charge any price or no price for each copy that you convey, 206 | and you may offer support or warranty protection for a fee. 207 | 208 | 5. Conveying Modified Source Versions. 209 | 210 | You may convey a work based on the Program, or the modifications to 211 | produce it from the Program, in the form of source code under the 212 | terms of section 4, provided that you also meet all of these conditions: 213 | 214 | a) The work must carry prominent notices stating that you modified 215 | it, and giving a relevant date. 216 | 217 | b) The work must carry prominent notices stating that it is 218 | released under this License and any conditions added under section 219 | 7. This requirement modifies the requirement in section 4 to 220 | "keep intact all notices". 221 | 222 | c) You must license the entire work, as a whole, under this 223 | License to anyone who comes into possession of a copy. This 224 | License will therefore apply, along with any applicable section 7 225 | additional terms, to the whole of the work, and all its parts, 226 | regardless of how they are packaged. 
This License gives no 227 | permission to license the work in any other way, but it does not 228 | invalidate such permission if you have separately received it. 229 | 230 | d) If the work has interactive user interfaces, each must display 231 | Appropriate Legal Notices; however, if the Program has interactive 232 | interfaces that do not display Appropriate Legal Notices, your 233 | work need not make them do so. 234 | 235 | A compilation of a covered work with other separate and independent 236 | works, which are not by their nature extensions of the covered work, 237 | and which are not combined with it such as to form a larger program, 238 | in or on a volume of a storage or distribution medium, is called an 239 | "aggregate" if the compilation and its resulting copyright are not 240 | used to limit the access or legal rights of the compilation's users 241 | beyond what the individual works permit. Inclusion of a covered work 242 | in an aggregate does not cause this License to apply to the other 243 | parts of the aggregate. 244 | 245 | 6. Conveying Non-Source Forms. 246 | 247 | You may convey a covered work in object code form under the terms 248 | of sections 4 and 5, provided that you also convey the 249 | machine-readable Corresponding Source under the terms of this License, 250 | in one of these ways: 251 | 252 | a) Convey the object code in, or embodied in, a physical product 253 | (including a physical distribution medium), accompanied by the 254 | Corresponding Source fixed on a durable physical medium 255 | customarily used for software interchange. 256 | 257 | b) Convey the object code in, or embodied in, a physical product 258 | (including a physical distribution medium), accompanied by a 259 | written offer, valid for at least three years and valid for as 260 | long as you offer spare parts or customer support for that product 261 | model, to give anyone who possesses the object code either (1) a 262 | copy of the Corresponding Source for all the software in the 263 | product that is covered by this License, on a durable physical 264 | medium customarily used for software interchange, for a price no 265 | more than your reasonable cost of physically performing this 266 | conveying of source, or (2) access to copy the 267 | Corresponding Source from a network server at no charge. 268 | 269 | c) Convey individual copies of the object code with a copy of the 270 | written offer to provide the Corresponding Source. This 271 | alternative is allowed only occasionally and noncommercially, and 272 | only if you received the object code with such an offer, in accord 273 | with subsection 6b. 274 | 275 | d) Convey the object code by offering access from a designated 276 | place (gratis or for a charge), and offer equivalent access to the 277 | Corresponding Source in the same way through the same place at no 278 | further charge. You need not require recipients to copy the 279 | Corresponding Source along with the object code. If the place to 280 | copy the object code is a network server, the Corresponding Source 281 | may be on a different server (operated by you or a third party) 282 | that supports equivalent copying facilities, provided you maintain 283 | clear directions next to the object code saying where to find the 284 | Corresponding Source. Regardless of what server hosts the 285 | Corresponding Source, you remain obligated to ensure that it is 286 | available for as long as needed to satisfy these requirements. 
287 | 288 | e) Convey the object code using peer-to-peer transmission, provided 289 | you inform other peers where the object code and Corresponding 290 | Source of the work are being offered to the general public at no 291 | charge under subsection 6d. 292 | 293 | A separable portion of the object code, whose source code is excluded 294 | from the Corresponding Source as a System Library, need not be 295 | included in conveying the object code work. 296 | 297 | A "User Product" is either (1) a "consumer product", which means any 298 | tangible personal property which is normally used for personal, family, 299 | or household purposes, or (2) anything designed or sold for incorporation 300 | into a dwelling. In determining whether a product is a consumer product, 301 | doubtful cases shall be resolved in favor of coverage. For a particular 302 | product received by a particular user, "normally used" refers to a 303 | typical or common use of that class of product, regardless of the status 304 | of the particular user or of the way in which the particular user 305 | actually uses, or expects or is expected to use, the product. A product 306 | is a consumer product regardless of whether the product has substantial 307 | commercial, industrial or non-consumer uses, unless such uses represent 308 | the only significant mode of use of the product. 309 | 310 | "Installation Information" for a User Product means any methods, 311 | procedures, authorization keys, or other information required to install 312 | and execute modified versions of a covered work in that User Product from 313 | a modified version of its Corresponding Source. The information must 314 | suffice to ensure that the continued functioning of the modified object 315 | code is in no case prevented or interfered with solely because 316 | modification has been made. 317 | 318 | If you convey an object code work under this section in, or with, or 319 | specifically for use in, a User Product, and the conveying occurs as 320 | part of a transaction in which the right of possession and use of the 321 | User Product is transferred to the recipient in perpetuity or for a 322 | fixed term (regardless of how the transaction is characterized), the 323 | Corresponding Source conveyed under this section must be accompanied 324 | by the Installation Information. But this requirement does not apply 325 | if neither you nor any third party retains the ability to install 326 | modified object code on the User Product (for example, the work has 327 | been installed in ROM). 328 | 329 | The requirement to provide Installation Information does not include a 330 | requirement to continue to provide support service, warranty, or updates 331 | for a work that has been modified or installed by the recipient, or for 332 | the User Product in which it has been modified or installed. Access to a 333 | network may be denied when the modification itself materially and 334 | adversely affects the operation of the network or violates the rules and 335 | protocols for communication across the network. 336 | 337 | Corresponding Source conveyed, and Installation Information provided, 338 | in accord with this section must be in a format that is publicly 339 | documented (and with an implementation available to the public in 340 | source code form), and must require no special password or key for 341 | unpacking, reading or copying. 342 | 343 | 7. Additional Terms. 
344 | 345 | "Additional permissions" are terms that supplement the terms of this 346 | License by making exceptions from one or more of its conditions. 347 | Additional permissions that are applicable to the entire Program shall 348 | be treated as though they were included in this License, to the extent 349 | that they are valid under applicable law. If additional permissions 350 | apply only to part of the Program, that part may be used separately 351 | under those permissions, but the entire Program remains governed by 352 | this License without regard to the additional permissions. 353 | 354 | When you convey a copy of a covered work, you may at your option 355 | remove any additional permissions from that copy, or from any part of 356 | it. (Additional permissions may be written to require their own 357 | removal in certain cases when you modify the work.) You may place 358 | additional permissions on material, added by you to a covered work, 359 | for which you have or can give appropriate copyright permission. 360 | 361 | Notwithstanding any other provision of this License, for material you 362 | add to a covered work, you may (if authorized by the copyright holders of 363 | that material) supplement the terms of this License with terms: 364 | 365 | a) Disclaiming warranty or limiting liability differently from the 366 | terms of sections 15 and 16 of this License; or 367 | 368 | b) Requiring preservation of specified reasonable legal notices or 369 | author attributions in that material or in the Appropriate Legal 370 | Notices displayed by works containing it; or 371 | 372 | c) Prohibiting misrepresentation of the origin of that material, or 373 | requiring that modified versions of such material be marked in 374 | reasonable ways as different from the original version; or 375 | 376 | d) Limiting the use for publicity purposes of names of licensors or 377 | authors of the material; or 378 | 379 | e) Declining to grant rights under trademark law for use of some 380 | trade names, trademarks, or service marks; or 381 | 382 | f) Requiring indemnification of licensors and authors of that 383 | material by anyone who conveys the material (or modified versions of 384 | it) with contractual assumptions of liability to the recipient, for 385 | any liability that these contractual assumptions directly impose on 386 | those licensors and authors. 387 | 388 | All other non-permissive additional terms are considered "further 389 | restrictions" within the meaning of section 10. If the Program as you 390 | received it, or any part of it, contains a notice stating that it is 391 | governed by this License along with a term that is a further 392 | restriction, you may remove that term. If a license document contains 393 | a further restriction but permits relicensing or conveying under this 394 | License, you may add to a covered work material governed by the terms 395 | of that license document, provided that the further restriction does 396 | not survive such relicensing or conveying. 397 | 398 | If you add terms to a covered work in accord with this section, you 399 | must place, in the relevant source files, a statement of the 400 | additional terms that apply to those files, or a notice indicating 401 | where to find the applicable terms. 402 | 403 | Additional terms, permissive or non-permissive, may be stated in the 404 | form of a separately written license, or stated as exceptions; 405 | the above requirements apply either way. 406 | 407 | 8. Termination. 
408 | 409 | You may not propagate or modify a covered work except as expressly 410 | provided under this License. Any attempt otherwise to propagate or 411 | modify it is void, and will automatically terminate your rights under 412 | this License (including any patent licenses granted under the third 413 | paragraph of section 11). 414 | 415 | However, if you cease all violation of this License, then your 416 | license from a particular copyright holder is reinstated (a) 417 | provisionally, unless and until the copyright holder explicitly and 418 | finally terminates your license, and (b) permanently, if the copyright 419 | holder fails to notify you of the violation by some reasonable means 420 | prior to 60 days after the cessation. 421 | 422 | Moreover, your license from a particular copyright holder is 423 | reinstated permanently if the copyright holder notifies you of the 424 | violation by some reasonable means, this is the first time you have 425 | received notice of violation of this License (for any work) from that 426 | copyright holder, and you cure the violation prior to 30 days after 427 | your receipt of the notice. 428 | 429 | Termination of your rights under this section does not terminate the 430 | licenses of parties who have received copies or rights from you under 431 | this License. If your rights have been terminated and not permanently 432 | reinstated, you do not qualify to receive new licenses for the same 433 | material under section 10. 434 | 435 | 9. Acceptance Not Required for Having Copies. 436 | 437 | You are not required to accept this License in order to receive or 438 | run a copy of the Program. Ancillary propagation of a covered work 439 | occurring solely as a consequence of using peer-to-peer transmission 440 | to receive a copy likewise does not require acceptance. However, 441 | nothing other than this License grants you permission to propagate or 442 | modify any covered work. These actions infringe copyright if you do 443 | not accept this License. Therefore, by modifying or propagating a 444 | covered work, you indicate your acceptance of this License to do so. 445 | 446 | 10. Automatic Licensing of Downstream Recipients. 447 | 448 | Each time you convey a covered work, the recipient automatically 449 | receives a license from the original licensors, to run, modify and 450 | propagate that work, subject to this License. You are not responsible 451 | for enforcing compliance by third parties with this License. 452 | 453 | An "entity transaction" is a transaction transferring control of an 454 | organization, or substantially all assets of one, or subdividing an 455 | organization, or merging organizations. If propagation of a covered 456 | work results from an entity transaction, each party to that 457 | transaction who receives a copy of the work also receives whatever 458 | licenses to the work the party's predecessor in interest had or could 459 | give under the previous paragraph, plus a right to possession of the 460 | Corresponding Source of the work from the predecessor in interest, if 461 | the predecessor has it or can get it with reasonable efforts. 462 | 463 | You may not impose any further restrictions on the exercise of the 464 | rights granted or affirmed under this License. 
For example, you may 465 | not impose a license fee, royalty, or other charge for exercise of 466 | rights granted under this License, and you may not initiate litigation 467 | (including a cross-claim or counterclaim in a lawsuit) alleging that 468 | any patent claim is infringed by making, using, selling, offering for 469 | sale, or importing the Program or any portion of it. 470 | 471 | 11. Patents. 472 | 473 | A "contributor" is a copyright holder who authorizes use under this 474 | License of the Program or a work on which the Program is based. The 475 | work thus licensed is called the contributor's "contributor version". 476 | 477 | A contributor's "essential patent claims" are all patent claims 478 | owned or controlled by the contributor, whether already acquired or 479 | hereafter acquired, that would be infringed by some manner, permitted 480 | by this License, of making, using, or selling its contributor version, 481 | but do not include claims that would be infringed only as a 482 | consequence of further modification of the contributor version. For 483 | purposes of this definition, "control" includes the right to grant 484 | patent sublicenses in a manner consistent with the requirements of 485 | this License. 486 | 487 | Each contributor grants you a non-exclusive, worldwide, royalty-free 488 | patent license under the contributor's essential patent claims, to 489 | make, use, sell, offer for sale, import and otherwise run, modify and 490 | propagate the contents of its contributor version. 491 | 492 | In the following three paragraphs, a "patent license" is any express 493 | agreement or commitment, however denominated, not to enforce a patent 494 | (such as an express permission to practice a patent or covenant not to 495 | sue for patent infringement). To "grant" such a patent license to a 496 | party means to make such an agreement or commitment not to enforce a 497 | patent against the party. 498 | 499 | If you convey a covered work, knowingly relying on a patent license, 500 | and the Corresponding Source of the work is not available for anyone 501 | to copy, free of charge and under the terms of this License, through a 502 | publicly available network server or other readily accessible means, 503 | then you must either (1) cause the Corresponding Source to be so 504 | available, or (2) arrange to deprive yourself of the benefit of the 505 | patent license for this particular work, or (3) arrange, in a manner 506 | consistent with the requirements of this License, to extend the patent 507 | license to downstream recipients. "Knowingly relying" means you have 508 | actual knowledge that, but for the patent license, your conveying the 509 | covered work in a country, or your recipient's use of the covered work 510 | in a country, would infringe one or more identifiable patents in that 511 | country that you have reason to believe are valid. 512 | 513 | If, pursuant to or in connection with a single transaction or 514 | arrangement, you convey, or propagate by procuring conveyance of, a 515 | covered work, and grant a patent license to some of the parties 516 | receiving the covered work authorizing them to use, propagate, modify 517 | or convey a specific copy of the covered work, then the patent license 518 | you grant is automatically extended to all recipients of the covered 519 | work and works based on it. 
520 | 521 | A patent license is "discriminatory" if it does not include within 522 | the scope of its coverage, prohibits the exercise of, or is 523 | conditioned on the non-exercise of one or more of the rights that are 524 | specifically granted under this License. You may not convey a covered 525 | work if you are a party to an arrangement with a third party that is 526 | in the business of distributing software, under which you make payment 527 | to the third party based on the extent of your activity of conveying 528 | the work, and under which the third party grants, to any of the 529 | parties who would receive the covered work from you, a discriminatory 530 | patent license (a) in connection with copies of the covered work 531 | conveyed by you (or copies made from those copies), or (b) primarily 532 | for and in connection with specific products or compilations that 533 | contain the covered work, unless you entered into that arrangement, 534 | or that patent license was granted, prior to 28 March 2007. 535 | 536 | Nothing in this License shall be construed as excluding or limiting 537 | any implied license or other defenses to infringement that may 538 | otherwise be available to you under applicable patent law. 539 | 540 | 12. No Surrender of Others' Freedom. 541 | 542 | If conditions are imposed on you (whether by court order, agreement or 543 | otherwise) that contradict the conditions of this License, they do not 544 | excuse you from the conditions of this License. If you cannot convey a 545 | covered work so as to satisfy simultaneously your obligations under this 546 | License and any other pertinent obligations, then as a consequence you may 547 | not convey it at all. For example, if you agree to terms that obligate you 548 | to collect a royalty for further conveying from those to whom you convey 549 | the Program, the only way you could satisfy both those terms and this 550 | License would be to refrain entirely from conveying the Program. 551 | 552 | 13. Use with the GNU Affero General Public License. 553 | 554 | Notwithstanding any other provision of this License, you have 555 | permission to link or combine any covered work with a work licensed 556 | under version 3 of the GNU Affero General Public License into a single 557 | combined work, and to convey the resulting work. The terms of this 558 | License will continue to apply to the part which is the covered work, 559 | but the special requirements of the GNU Affero General Public License, 560 | section 13, concerning interaction through a network will apply to the 561 | combination as such. 562 | 563 | 14. Revised Versions of this License. 564 | 565 | The Free Software Foundation may publish revised and/or new versions of 566 | the GNU General Public License from time to time. Such new versions will 567 | be similar in spirit to the present version, but may differ in detail to 568 | address new problems or concerns. 569 | 570 | Each version is given a distinguishing version number. If the 571 | Program specifies that a certain numbered version of the GNU General 572 | Public License "or any later version" applies to it, you have the 573 | option of following the terms and conditions either of that numbered 574 | version or of any later version published by the Free Software 575 | Foundation. If the Program does not specify a version number of the 576 | GNU General Public License, you may choose any version ever published 577 | by the Free Software Foundation. 
578 | 579 | If the Program specifies that a proxy can decide which future 580 | versions of the GNU General Public License can be used, that proxy's 581 | public statement of acceptance of a version permanently authorizes you 582 | to choose that version for the Program. 583 | 584 | Later license versions may give you additional or different 585 | permissions. However, no additional obligations are imposed on any 586 | author or copyright holder as a result of your choosing to follow a 587 | later version. 588 | 589 | 15. Disclaimer of Warranty. 590 | 591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY 592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT 593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY 594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, 595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR 596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM 597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF 598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 599 | 600 | 16. Limitation of Liability. 601 | 602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS 604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY 605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE 606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF 607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD 608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), 609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF 610 | SUCH DAMAGES. 611 | 612 | 17. Interpretation of Sections 15 and 16. 613 | 614 | If the disclaimer of warranty and limitation of liability provided 615 | above cannot be given local legal effect according to their terms, 616 | reviewing courts shall apply local law that most closely approximates 617 | an absolute waiver of all civil liability in connection with the 618 | Program, unless a warranty or assumption of liability accompanies a 619 | copy of the Program in return for a fee. 620 | 621 | END OF TERMS AND CONDITIONS 622 | 623 | How to Apply These Terms to Your New Programs 624 | 625 | If you develop a new program, and you want it to be of the greatest 626 | possible use to the public, the best way to achieve this is to make it 627 | free software which everyone can redistribute and change under these terms. 628 | 629 | To do so, attach the following notices to the program. It is safest 630 | to attach them to the start of each source file to most effectively 631 | state the exclusion of warranty; and each file should have at least 632 | the "copyright" line and a pointer to where the full notice is found. 633 | 634 | {one line to give the program's name and a brief idea of what it does.} 635 | Copyright (C) {year} {name of author} 636 | 637 | This program is free software: you can redistribute it and/or modify 638 | it under the terms of the GNU General Public License as published by 639 | the Free Software Foundation, either version 3 of the License, or 640 | (at your option) any later version. 
641 | 642 | This program is distributed in the hope that it will be useful, 643 | but WITHOUT ANY WARRANTY; without even the implied warranty of 644 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 645 | GNU General Public License for more details. 646 | 647 | You should have received a copy of the GNU General Public License 648 | along with this program. If not, see . 649 | 650 | Also add information on how to contact you by electronic and paper mail. 651 | 652 | If the program does terminal interaction, make it output a short 653 | notice like this when it starts in an interactive mode: 654 | 655 | {project} Copyright (C) {year} {fullname} 656 | This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 657 | This is free software, and you are welcome to redistribute it 658 | under certain conditions; type `show c' for details. 659 | 660 | The hypothetical commands `show w' and `show c' should show the appropriate 661 | parts of the General Public License. Of course, your program's commands 662 | might be different; for a GUI interface, you would use an "about box". 663 | 664 | You should also get your employer (if you work as a programmer) or school, 665 | if any, to sign a "copyright disclaimer" for the program, if necessary. 666 | For more information on this, and how to apply and follow the GNU GPL, see 667 | . 668 | 669 | The GNU General Public License does not permit incorporating your program 670 | into proprietary programs. If your program is a subroutine library, you 671 | may consider it more useful to permit linking proprietary applications with 672 | the library. If this is what you want to do, use the GNU Lesser General 673 | Public License instead of this License. But first, please read 674 | . 675 | -------------------------------------------------------------------------------- /braidpool_spec.md: -------------------------------------------------------------------------------- 1 | 2 | # Braidpool Specification 3 | 4 | Herein we present the specification for a decentralized mining pool we name 5 | "Braidpool". For background information and general considerations please read 6 | [General Considerations for Decentralized Mining 7 | Pools](https://github.com/mcelrath/braidcoin/blob/master/general_considerations.md) 8 | which has relevant general discussion omitted from this document. The sections 9 | below correspond to the sections in that document, describing how Braidpool will 10 | solve each of the indicated issues. Orthogonal considerations including 11 | encrypted miner communication is being pursued by the 12 | [StratumV2](https://github.com/stratum-mining/sv2-spec) project, which Braidpool 13 | will build upon. 14 | 15 | ## Table of Contents 16 | 17 | 1. [Shares and Weak Blocks](#shares-and-weak-blocks) 18 | 1. [Metadata Commitments](#metadata-commitments) 19 | 2. [Share Value](#share-value) 20 | 2. [Braid Consensus Mechansim](#braid-consensus-mechanism) 21 | 1. [Simple Sum of Descendant Work](#simple-sum-of-descendant-work) 22 | 2. [Difficulty Retarget Algorithm](#difficulty-retarget-algorithm) 23 | 3. [Miner-Selected Difficulty](#miner-selected-difficulty) 24 | 3. [Payout Update](#payout-commitment) 25 | 1. [Unspent Hasher Payment Output](#unspent-hasher-payment-output) 26 | 4. [Payout Update and Settlement Signing](#payout-update-and-settlement-signing) 27 | 5. 
[Transaction Selection](#transaction-selection) 28 | 29 | # Shares and Weak Blocks 30 | 31 | A *share* is a "weak block", defined as a standard bitcoin block that 32 | does not meet bitcoin's target difficulty $x_b$, but does meet some lesser 33 | difficulty target $x$. 34 | 35 | The share is itself a bearer proof that approximately $w=1/x$ sha256 36 | computations have been done. The share data structure has additional data that 37 | indicates to other miners that the share belongs to Braidpool, and it contains 38 | commitments such that, had it met bitcoin's difficulty target, all *other* 39 | miners in the pool would be paid according to the share tally. 40 | 41 | Shares or blocks which do not commit to the additional metadata proving that the 42 | share is part of the Braidpool must be excluded from the share calculation, and 43 | those miners are not "part of" the pool. In other words, submitting a random 44 | sha256 header to a Braidpool node must not count as a share contribution unless 45 | the ultimate payout for that share, had it become a bitcoin block, would have 46 | paid all other hashers in the pool. 47 | 48 | A Braidpool "share" or "bead" is a data structure containing a bitcoin block 49 | header, the coinbase transaction, and metadata: 50 | 51 | | Field | Description | 52 | | ---------- | ----------- | 53 | | `blockheader` | `Version, Previous Block Hash, Merkle Root, Timestamp, Target, Nonce` | 54 | | `coinbase` | `Coinbase Txn, Merkle Sibling, Merkle Sibling, ...` | 55 | | `payout` | `Payout Update Txn, Merkle Sibling, Merkle Sibling, ...` | 56 | | `metadata` | `Braidpool Metadata` (see below) | 57 | | `un_metadata` | `Uncommitted Metadata` (see below) | 58 | 59 | The first row is a standard Bitcoin block header. The `Merkle Siblings` in the 60 | second and third rows are the additional nodes in the transaction Merkle tree 61 | necessary to verify that the specified `Coinbase Transaction` and `Payout 62 | Commitment` transactions are included in the `Merkle Root`. This `Coinbase 63 | Transaction` commits to any additional data needed for the Braidpool's [braid 64 | consensus mechanism](#braid-consensus-mechanism), in an `OP_RETURN` output. 65 | While we could commit to this data in a more space-efficient manner (e.g. via a 66 | pubkey tweak), the coinbase is also the location of the `extranonce` 8-byte 67 | field used by some mining equipment. 68 | 69 | The `Coinbase Transaction` is a standard transaction having no inputs, and 70 | must have the following outputs: 71 | 72 | OutPoint(Value:0, scriptPubKey OP_RETURN "BP"++<metadata hash>) 73 | OutPoint(Value:<block reward + fees>, scriptPubKey <pool_pubkey>) 74 | 75 | The `<block reward + fees>` value is the sum of all fees and block reward for this halving 76 | epoch, and `pool_pubkey` is an address controlled collaboratively by the pool in 77 | such a way that the [braid consensus mechanism](#braid-consensus-mechanism) can 78 | only spend it so as to pay all hashers in the manner described by its 79 | share accounting. 80 | 81 | ## Metadata Commitments 82 | 83 | The `<metadata hash>` is a hash of the `Braidpool Metadata`, committing to 84 | additional data required for the operation of the pool. Validation of this share 85 | requires that the PoW hash of this bitcoin header be less than the weak 86 | difficulty target $x$.
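To make these two checks concrete, here is a minimal Python sketch of share validation. It assumes a caller has already extracted the 80-byte header, the coinbase `OP_RETURN` payload, and the serialized metadata; the use of sha256d as the metadata hash function is an assumption for illustration, not something this specification fixes.

    # Illustrative sketch only, not normative Braidpool code.
    import hashlib

    def sha256d(data: bytes) -> bytes:
        """Double SHA256, as used for bitcoin block header hashing."""
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def is_valid_share(header80: bytes, coinbase_op_return: bytes,
                       metadata: bytes, weak_target: int) -> bool:
        # 1. PoW check against the weak (share) target x rather than bitcoin's x_b.
        #    The header hash is interpreted as a little-endian integer, as in bitcoin.
        if int.from_bytes(sha256d(header80), "little") > weak_target:
            return False
        # 2. The coinbase OP_RETURN payload must commit to the Braidpool metadata:
        #    "BP" followed by the metadata hash (hash function assumed to be sha256d here).
        return coinbase_op_return == b"BP" + sha256d(metadata)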
87 | 88 | The `Braidpool Metadata` is: 89 | | Field | Description | 90 | | ----- | ----------- | 91 | | `target` | Miner-selected target difficulty $x_b < x < x_0$ | 92 | | `payout_pubkey` | P2TR pubkey for this miner's payout | 93 | | `comm_pubkey` | secp256k1 pubkey for encrypted DH communication with this miner | 94 | | `miner IP` | IP address of this miner | 95 | | [[`parent`, `timestamp`], ...] | An array of block hashes of parent beads and timestamps when those parents were seen | 96 | 97 | The `Uncommitted Metadata` block is intentionally not committed to in the PoW 98 | mining process. It contains: 99 | | Field | Description | 100 | | ----- | ----------- | 101 | | `timestamp` | timestamp when this bead was broadcast | 102 | | `signature` | Signature on the `Uncommitted Metadata` block using the `payout_pubkey` | 103 | 104 | The purpose of this data is to gather higher resolution timestamps than are 105 | possible if the timestamp was committed. All Braidpool timestamps are 64-bit 106 | fields as milliseconds since the Unix epoch. When a block header is sent to 107 | mining devices, many manufacturers' mining devices do not return for quite some 108 | time (10-60 seconds) while they compute the hash, which causes PoW-mined 109 | timestamps to be delayed by this amount. Adding timestamps when parents were 110 | seen by the node and a timestamp when the bead was broadcast allows the braid to 111 | compute bead times with much higher precision. Though the data is uncommitted in 112 | the PoW header, it is signed by a key that is committed in the PoW header, so 113 | third parties cannot falsify these timestamps. 114 | 115 | ## Share Value 116 | 117 | A great many [share payout 118 | algorithms](https://medium.com/luxor/mining-pool-payment-methods-pps-vs-pplns-ac699f44149f) 119 | have been proposed and used by pools. Because Braidpool will not collect fees 120 | and has no source of funds other than block rewards with which pay hashers, it 121 | will use the **Full Proportional** method, meaning that all rewards and fees are 122 | fully distributed to hashers proportionally to their contributed shares. Closely 123 | related methods like Pay Per Share (PPS) allow the pool operator to earn the 124 | fees, but a decentralized mining pool has no operator which could/should be 125 | earning these fees. While many projects have inserted a "developer donation", we 126 | feel that Braidpool is an open source public good that should be developed and 127 | maintained by the community, without the political drama of who and how to pay 128 | with a source of funds. 129 | 130 | With PPS-type methods, most centralized pool operators are taking a risk on 131 | paying immediately for shares, therefore absorbing the variance risk involved in 132 | "luck". For hashers that desire immediate payout this can be achieved using any 133 | third party willing to buy their shares and take on the risk management of 134 | "luck" and fee variance. It's not necessary or desirable for Braidpool itself to 135 | subsume this risk management function. It is logical to allow professional risk 136 | management firms to take it on by directly buying shares. We envision that 137 | existing pools might run on top of Braidpool and continue to perform this risk 138 | management function for their clients. 139 | 140 | Other payout algorithms such as Pay Per Last N Shares (PPLNS) were created 141 | primarily to discourage pool hopping. 
We don't feel that this is needed in the 142 | modern day and a smoothing function applied to payouts interferes with the 143 | notion of using shares as a hashrate derivative instrument. 144 | 145 | A purely work-weighted proportional algorithm would work for a pure-DAG 146 | blockchain, however we have the problem that some of the beads are blocks in a 147 | parent blockchain, and the parent blockchain has the property that some blocks 148 | can be orphans and receive no reward. We must dis-incentivize the creation of 149 | blocks which might become orphans. One component of this solution is the 150 | [difficulty retarget algorithm](#difficulty-retarget-algorithm) which maximizes 151 | throughput while minimizing the number of simultaneous beads. 152 | 153 | However simultaneous beads will happen naturally due to the faster bead time, 154 | latency, and attackers. Within a time window $T_C$ (the cohort time), the 155 | probability that 2 or more blocks is generated by the parent blockchain can be 156 | obtained by summing the Poisson distribution in terms of its rate parameter 157 | $\sigma$ (usually called $\lambda$) and is 158 | 159 | $$ 160 | P_{\ge 2}(T_C) = \sum_{k=2}^\infty \frac{\sigma(T_C)^k e^{-\sigma(T_C)}}{k!} = 1 - e^{-\sigma(T_C)} (1+\sigma(T_C)) 161 | $$ 162 | 163 | where 164 | 165 | $$ 166 | \sigma(T_C) = \frac{T_C}{\rm block\ time} \left(\frac{\rm pool\ hashrate}{\rm total 167 | \ hashrate}\right) 168 | $$ 169 | 170 | Therefore shares within a cohort containing 2 or more beads must be weighted by 171 | $(1-P_{\ge 2}(T_C))$. Beads which are "blockchain-like" will be counted as full 172 | shares, while beads in larger cohorts will be counted as slightly less than a 173 | full share by this factor. The value $T_C$ is the cohort time, which is half the 174 | time difference between the median of the parent cohort's timestamps, and the 175 | median of the descendant cohort's timestamps. Here we use only the timestamps 176 | as witnessed by descendants, not the claimed broadcast time by the miner in 177 | `Uncommitted Metadata`. 178 | 179 | (FIXME: Is it appropriate to apply this factor to even blockchain-like beads?) 180 | 181 | As $T_C$ grows, the value of shares decreases. Therefore an attacker attempting 182 | to reorganize transactions or execute a selfish mining attack will see the value 183 | of his shares decrease in an appropriate way corresponding to how likely it is 184 | that he generates an orphan and reduces the profit of the pool. 185 | 186 | Summing it all up, the number of shares $s$ for a given bead is given by: 187 | 188 | $$ 189 | s = \frac{1}{x (1-P_{\ge 2})} 190 | $$ 191 | 192 | Where $x_b \le x \le x_0$ is the [miner-selected 193 | difficulty](#miner-selected-difficulty), $x_0$ is the minimum target given by 194 | the [Difficulty Retarget Algorithm](#difficulty-retarget-algorithm), and $x_b$ 195 | is the bitcoin target. Note that $w = 1/x$ is traditionally called the "work", 196 | and is a statistical estimate of the number of sha256d computations performed by 197 | the miner. 198 | 199 | At first glance this algorithm might seem to "punish" lower-target (higher work) 200 | miners given [miner-selected difficulty](#miner-selected-difficulty), however 201 | because it is directly proportional to work $w=1/x$, it weights high-work miners 202 | more than low-work miners. So while a low-work miner is more likely to generate 203 | a multi-bead cohort with a high-work miner, the reward and share is 204 | appropriately work-weighted. 
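As a purely numerical illustration of the share formula above (all parameter values here are made up for the example and are not spec constants):

    import math

    def share_value(x: float, T_C: float, block_time: float, pool_fraction: float) -> float:
        """Shares credited to a bead: s = 1 / (x * (1 - P_{>=2}(T_C)))."""
        sigma = (T_C / block_time) * pool_fraction       # Poisson rate within the cohort time
        p_ge2 = 1.0 - math.exp(-sigma) * (1.0 + sigma)   # probability of 2 or more parent-chain blocks
        return 1.0 / (x * (1.0 - p_ge2))

    # Example: 1 s cohort time, 600 s block time, a pool with 10% of total hashrate,
    # and an illustrative share target of x = 1e-20.
    print(share_value(x=1e-20, T_C=1.0, block_time=600.0, pool_fraction=0.10))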
205 | 206 | The sum of shares generated by this formula must be equal to the actual work 207 | going into the blockchain, minus work lost due to orphans (communications 208 | latency). 209 | 210 | # Braid Consensus Mechanism 211 | 212 | The consensus algorithm we choose is inspired by simply extending Nakamoto 213 | consensus to a Directed Acyclic Graph. We call nodes in this DAG "beads" and the 214 | overall structure a "braid" so as to distinguish it from the bitcoin blocks and 215 | chain. Some of the beads in the DAG are bitcoin blocks. 216 | 217 | We call this structure a "braid" because it contains an extra restriction 218 | relative to a general DAG: beads must not name as parents other beads which are 219 | ancestors of another parent. Naming a parent that is an ancestor of another 220 | parent conveys no useful information, since ancestors of each parent are already 221 | implied when ordering the DAG and including transactions. Visually this means 222 | that a braid will never have triangles or some other higher order structures. 223 | 224 | A DAG can be totally ordered in linear time using either [Kahn's 225 | algorithm](https://dl.acm.org/doi/10.1145/368996.369025) or a modified 226 | depth-first search which terminates when a bead is found that is a common 227 | ancestor to all of a bead's parents, which defines a "graph cut" and a point of 228 | global consensus on all ancestors. We define the set of beads between two graph 229 | cuts to be a "cohort". Within a cohort it is not possible to total order the 230 | contained beads using graph structure alone. The cohort can be defined as a set 231 | of beads having the same set of oldest common descendants and youngest common 232 | ancestors. 233 | 234 | It should be noted that within a braid we keep *all* beads with a valid PoW, 235 | regardless of whether they are considered invalid in other ways, or contain 236 | conflicting transactions. Transaction conflict resolution within the Braid is 237 | decided by the [work weighting algorithm](#work-weighting-algorithm) and doing 238 | so requires retaining both sides of the conflict. It is generally possible for 239 | new work to change which beads are considered in the "main chain", just as in 240 | Bitcoin new work can cause a reorganization of the chain ("reorg"), which makes 241 | a block that was previously an orphan be in the main chain. 242 | 243 | We have considered the [PHANTOM](https://eprint.iacr.org/2018/104) proposal 244 | which has many similarities to ours and should be read by implementors. We 245 | reject it for the following reasons: 246 | 247 | 1. The k-width heuristic is somewhat analogous to our cohorts, but has the 248 | property that it improperly penalizes naturally occurring beads. If for 249 | example we target the bead rate such that 40% of the cohorts have 2 or more 250 | beads, this means that approximately 2.5% of cohorts would have 4 or more 251 | beads. The red/blue algorithm of PHANTOM would improperly penalize all but 252 | the first three of the beads in this cohort. 253 | 254 | 2. It is impossible in practice to reliably identify "honest" and "attacking" 255 | nodes. There is only latency, which we can measure and take account of. Even 256 | in the absence of attackers, cohorts exceeding the k-width happen naturally 257 | and cannot be prevented. 258 | 259 | ## Simple Sum of Descendant Work 260 | 261 | Within Bitcoin, the "Longest Chain Rule" determines which tip has the most work 262 | among several possible tips. 
The "Longest Chain Rule" only works at constant 263 | difficulty and the actual rule is a "Highest Work" rule when you consider 264 | difficulty changes. 265 | 266 | Therefore we require an algorithm to calculate the total work for each bead. 267 | This total work can then be used to select the highest work tips as well as to 268 | select transactions within beads which have more work than other beads for 269 | transaction conflict resolution. 270 | 271 | For conflict resolution, we choose the Simple Sum of Descendant Work (SSDW), 272 | which is the sum of work among descendants for each bead, disregarding any graph 273 | structure. This is the direct analog of Nakamoto's "longest chain/highest work" 274 | rule. Graph structure is manipulable at zero cost, therefore we must have a 275 | conflict resolution algorithm that is independent of graph structure, lest we 276 | create a game which can be played to give a non-work advantage to an attacking 277 | miner which he could use to reverse transactions. The SSDW work is: 278 | 279 | $$ 280 | w_{\rm SSDW} = \sum_{i \in \rm descendants} \frac{1}{x_i} 281 | $$ 282 | 283 | where $x_i$ is the target difficulty for descendant $i$, and $1/x$ is 284 | traditionally called the "work". 285 | 286 | The SSDW can be optimized by first applying the Cohort algorithm, since all 287 | beads in a parent cohort have all beads in all descendant cohorts added to their 288 | work. Therefore, the only thing that matters for conflict resolution is 289 | descendant work *within* a cohort. 290 | 291 | In the event that two beads containing conflicting transactions have exactly the 292 | same SSDW, the one with the lower hash ("luck") will be selected. 293 | 294 | ## Difficulty Retarget Algorithm 295 | 296 | ![Cohort time $T(x)$ vs target difficulty $x$](https://github.com/mcelrath/braidcoin/raw/master/T_C_x.png) 297 | 298 | The cohort time $T(x)$ in terms of the target difficulty $x$ is well 299 | approximated (green line in above graph) by 300 | 301 | $$ 302 | T(x) = \frac{1}{\lambda x} + a e^{a \lambda x} 303 | $$ 304 | 305 | where $a$ is a latency parameter and $\lambda$ is a rate parameter given by 306 | 307 | $$ 308 | a = T_C W\left(\frac{T_C}{T_B} - 1 \right); \qquad 309 | \lambda = \frac{N_B}{x T_C N_C}, 310 | $$ 311 | 312 | where $T_B = \frac{1}{\lambda x}$ is the bead time, $T_C$ is the (measured) 313 | cohort time, and $W(z)$ is the [Lambert W 314 | function](https://en.wikipedia.org/wiki/Lambert_W_function). 315 | 316 | Given a starting value for $x$, we can measure these parameters directly from 317 | the braid within a time window corresponding to a retarget epoch: 318 | | Parameter | Description | 319 | | ----------- | ----------- | 320 | | $N_B$ | Number of beads | 321 | | $N_C$ | Number of cohorts | 322 | | $T_C$ | Cohort time | 323 | | $T_B$ | Bead time | 324 | 325 | This function has a minimum at 326 | 327 | $$ 328 | x_0 = \frac{2 W\left(\frac12\right)}{a \lambda} = \frac{0.7035}{a \lambda}. 329 | $$ 330 | 331 | This minimum corresponds to the fastest possible cohort time, and the most 332 | frequent global consensus achievable in a braid. For smaller target difficulty 333 | $x \to 0$, the braid becomes blockchain-like, and 334 | $T(x) \xrightarrow[x\rightarrow 0]{} (\lambda x)^{-1} + a + \mathcal{O}(x)$, 335 | showing that the parameter a is the increase in effective block time due to 336 | network latency effects. 
In the opposite limit $x \to \infty$, cohorts become 337 | large, meaning that beads cannot be totally ordered, double-spend conflicts cannot 338 | be resolved, and global consensus is never achieved. In this limit the cohort 339 | time increases exponentially, so we cannot let $x$ get too large. 340 | 341 | This gives us a zero-parameter retargeting algorithm. At any time we can 342 | evaluate $x_0$, which represents a maximum target difficulty that the braid will 343 | accept. 344 | 345 | [Braid Retargeting 346 | Algorithm](https://rawgit.com/mcelrath/braidcoin/master/Braid%2BExamples.html) 347 | contains the full analysis that results in this formula, including code to 348 | reproduce this result. 349 | 350 | ## Miner-Selected Difficulty 351 | 352 | Within the Braid we wish to allow different miners to select their difficulty 353 | and to target constant *variance* among miners by allowing a small miner to 354 | use a lower difficulty than a larger miner. Miners may select any difficulty 355 | between the maximum target $x_0$ described in the [Difficulty Retarget 356 | Algorithm](#difficulty-retarget-algorithm) and the bitcoin target. 357 | 358 | Braidpool will automatically select an appropriate target difficulty based on 359 | the miner's observed hashrate. Larger miners will see a higher target selected 360 | while smaller miners will see a lower target, such that each miner 361 | is expected to produce on average one bead per bitcoin block. Miners smaller 362 | than this will be allocated to a [Sub-Pool](#sub-pools). 363 | 364 | Note that this equal-variance target is not enforceable by consensus. A miner 365 | could choose to run multiple Braidpool instances or just change the code to 366 | select a different target, and the Braidpool software-selected target is an 367 | unenforceable recommendation. The consequence of a miner ignoring this 368 | recommendation would be to decrease a single miner's variance at the expense of 369 | producing more beads in the braid for the same amount of work. This slows down 370 | the braid and increases the bead time. Accepting this equal-variance target 371 | allows Braidpool to accommodate the maximum number of miners, the most work, and 372 | the fastest possible bead time without resorting to allocating more miners to 373 | [Sub-Pools](#sub-pools). 374 | 375 | # Payout Update 376 | 377 | The Payout Update is a separate transaction from the coinbase transaction 378 | that aggregates previous coinbase outputs into a single new output. This output 379 | contains all funds from the block reward and fees in this and all past blocks. 380 | This payout must commit to the share payout structure as calculated at the time 381 | the block is mined. In other words, it must represent and commit to the 382 | consensus of the decentralized mining pool's share accounting. This transaction 383 | has two inputs and one output: 384 | 385 | Input (1): <previous Payout Update output> 386 | Input (2): <most recent Braidpool coinbase output at least 100 blocks old> 387 | Outputs (1): <new Payout Update output paying to the pool's P2TR key> 388 | 389 | Validating the output of the [consensus mechanism](#braid-consensus-mechanism) is well 390 | beyond the capability of bitcoin script. Therefore generally one must find a 391 | mechanism such that a supermajority (Byzantine Fault Tolerant subset) of 392 | Braidpool participants can sign the output, which essentially reflects the 393 | consensus about share payments into bitcoin. This is done by having a single 394 | P2TR public key which is controlled by a Byzantine fault tolerant supermajority 395 | of miners who must cooperate to sign the output.
396 | 397 | The payout is a rolling commitment that spends the previous payout update 398 | output and creates a new one including rewards and fees from the new block. This 399 | must be signed with `SIGHASH_ANYONECANPAY` so that the output amount is 400 | uncommitted in the sighash. Since each miner may choose different transactions, 401 | the exact amount of the fee reward in this block cannot be known until a block 402 | is successfully mined, and we cannot commit to this value. 403 | 404 | Since newly created coinbase outputs cannot be spent for 100 blocks due to a 405 | bitcoin consensus rule, the Payout Update transaction is always 100 blocks 406 | in arrears. The transaction that must be included in a bead spends the most 407 | recent Braidpool coinbase that's at least 100 blocks old into a new Payout 408 | Commitment output. 409 | 410 | ![On-Chain Eltoo from the Eltoo paper](https://github.com/mcelrath/braidcoin/raw/master/eltoo.png) 411 | 412 | This rolling set of payout updates is an on-chain version of the [Eltoo 413 | protocol](https://blockstream.com/eltoo.pdf). By spending the previous Payout 414 | Update, we automatically invalidate the previous UHPO payout tree, and replace 415 | it with a new one. Old UHPO settlement transactions can no longer be broadcast 416 | as they would be double-spends. Relative to the Eltoo diagram above, $T_{u,i}$ 417 | are Payout Commitment outputs, and $T_{s,i}$ are UHPO payout transactions. 418 | 419 | Note that the most common discussions around Eltoo revolve around holding the 420 | update transactions off-chain using a NOINPUT or ANYPREVOUT flag. In our case, 421 | there is really no good reason to hold these updates off-chain nor wait for 422 | these transaction flags to be deployed. These update transactions must be held 423 | by Braidpool nodes between the time that they are signed and mined into a 424 | block. 425 | 426 | ## Unspent Hasher Payment Output 427 | 428 | For the payout commitment we present a new and simple record accounting for 429 | shares. Consider the consensus mechanism as a UTXO-based blockchain analagous to 430 | bitcoin. The "UTXO set" of the consensus mechanism is the set of payment outputs 431 | for all hashers, with amounts decided by the recorded shares and consensus 432 | mechanism rules. 433 | 434 | We will term the set of hasher payments the Unspent Hasher Payment Output (UHPO) 435 | set. This is the "UTXO set" of the decentralized mining pool, and calculation 436 | and management of the UHPO set is the primary objective of the decentralized 437 | mining pool. 438 | 439 | The UHPO set can be simply represented as a transaction which has as inputs all 440 | unspent coinbases mined by the pool, and one output for each unique miner with 441 | an amount decided by his share contributions subject to the consensus mechanism 442 | rules. 443 | 444 | In p2pool this UHPO set was placed directly in the coinbase of every block, 445 | resulting in a large number of very small payments to hashers. One advantage of 446 | traditional pools is that the *aggregate* these payments over multiple blocks so 447 | that the number of withdrawals per hasher is reduced. A decentralized mining 448 | pool should do the same. The consequence of this was that in p2pool, the large 449 | coinbase with small outputs competed for block space with fee-paying 450 | transactions. 
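A toy sketch of how a node might assemble the UHPO set as data, given the unspent pool coinbases and a per-miner share tally. The structures and field names below are hypothetical; real code would produce a properly signed bitcoin transaction rather than a plain dictionary.

    # Toy sketch only: one input per unspent pool coinbase, one output per unique
    # miner with an amount proportional to that miner's shares.
    from dataclasses import dataclass

    @dataclass
    class OutPoint:
        txid: str
        vout: int
        value: int  # satoshis

    def build_uhpo_tx(unspent_coinbases: list, shares: dict) -> dict:
        total_sats = sum(c.value for c in unspent_coinbases)
        total_shares = sum(shares.values())
        outputs = {pubkey: int(total_sats * s / total_shares) for pubkey, s in shares.items()}
        return {"inputs": unspent_coinbases, "outputs": outputs}

    coinbases = [OutPoint("aa" * 32, 1, 625_000_000), OutPoint("bb" * 32, 1, 630_000_000)]
    print(build_uhpo_tx(coinbases, {"miner_A_pubkey": 70.0, "miner_B_pubkey": 30.0}))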
451 | 452 | The commitment to the UHPO set in the coinbase output is a mechanism that allows 453 | all hashers to be correctly paid if the decentralized mining pool shuts down or 454 | fails after this block. As such, the UHPO set transaction(s) must be properly 455 | formed, fully signed and valid bitcoin transactions that can be broadcast. See 456 | [Payout Authorization](#payout-authorization) for considerations on how to 457 | sign/authorize this UHPO transaction. 458 | 459 | We don't ever want to actually have to broadcast this UHPO set transaction 460 | except in the case of pool failure. Similar to other optimistic protocols like 461 | Lightning, we will withhold this transaction from bitcoin and update it 462 | out-of-band with respect to bitcoin. With each new block we will update the UHPO 463 | set transaction to account for any new shares since the last block mined by the 464 | pool. 465 | 466 | Furthermore a decentralized mining pool should support "withdrawal" by hashers. 467 | This would take the form of a special message or transaction sent to the pool 468 | (and agreed by consensus within the pool) to *remove* a hasher's output from the 469 | UHPO set transaction, and create a new separate transaction which pays that 470 | hasher, [authorizes](#payout-authorization) it, and broadcasts it to bitcoin. 471 | Miners may not cash out for the *current* difficulty adjustment window (because 472 | the share/BTC price is not yet decided), but may only cash out for the *last* 473 | (and older) difficulty adjustment window(s). 474 | 475 | The share value for the *current* difficulty adjustment epoch is encoded 476 | proportionally in the UHPO transactions, however this is only for use in the 477 | case of catastrophic failure of Braidpool. During normal operation, the UHPO 478 | transaction is fixed at the end of the difficulty adjustment window when the 479 | share/BTC price for that epoch is known. 480 | 481 | # Payout Update and Settlement Signing 482 | 483 | Once a bitcoin block is mined by the pool, Braidpool will kick off a signing 484 | ceremony to create a new Payout Commitment and UHPO settlement transaction. 485 | 486 | It is impossible or impractical to sign the payout update and UHPO set 487 | transactions prior to mining a block, because the extranonce used by mining 488 | devices changes the coinbase txid, we can't sign this transaction until its 489 | Input(2) txid is known. 490 | 491 | After the RCA transaction is signed, and its corresponding UHPO transaction is 492 | signed, spending the RCA's output, Braidpool nodes will *delete* the 493 | corresponding key shares and keys associated with signing these. As long as 494 | $n-t$ nodes successfully delete these shares and keys, and the RCA and UHPO 495 | transactions are distributed to all nodes, it then becomes impossible to spend 496 | the aggregated Braidpool funds in any other way. 497 | 498 | FIXME should update and settlement keys be different here? 499 | 500 | FIXME use a tapscript for the UHPO payment. Happy path is RCA, and just a 501 | Schnorr signature. 502 | 503 | FIXME Can we authorize the tapscript UHPO in any other way? Can we verify a PoW 504 | hash for instance? 505 | 506 | FIXME pre-kegen and ROAST parallel signing 507 | 508 | FIXME use nlocktime or CSV? CSV would separate the update and settlement 509 | transactions. 510 | 511 | FIXME what do we do with any coinbases mined by Braidpool after the settlement 512 | tx is broadcast? CSV and let the miner take it all? 
513 | 514 | FIXME from eltoo paper: "The use of different key-pairs prevents an attacker 515 | from simply swapping out the branch selection and reusing the same signatures 516 | for the other branch." 517 | This should still be possible with tapscript. An attacker can know the 518 | pubkey tweak and adapt an update signature to be a settlement signature and 519 | v/v. (CHECK THIS) 520 | 521 | The script 522 | 523 | ## Pool Transactions and Derivative Instruments 524 | 525 | If the decentralized mining pool supports transactions of its own, one could 526 | "send shares" to another party. This operation replaces one party's address in 527 | the UHPO set transaction with that of another party. In this way unpaid shares 528 | can be delivered to an exchange, market maker, or OTC desk in exchange for 529 | immediate payment (over Lightning, for example) or as part of a derivatives 530 | contract. 531 | 532 | The reason that delivery of shares can constitute a derivative contract is that 533 | they are actually a measurement of *hashrate* and have not yet settled to 534 | bitcoin. While we can compute the UHPO set at any point and convert that to 535 | bitcoin outputs given the amount of bitcoin currently mined by the pool, there 536 | remains uncertainty as to how many more blocks the pool will mine before 537 | settlement is requested, and how many fees those blocks will have. 538 | 539 | A private arrangement can be created where one party *buys future shares* from 540 | another in exchange for bitcoin up front. This is a *futures* contract, where 541 | the counterparty to the miner is taking on pool "luck" risk and fee rate risk. 542 | 543 | In order to form hashrate derivatives, it must be posible to deliver shares 544 | across two different difficulty adjustment windows. Shares in one difficulty 545 | adjustment window have a different value compared to shares in another window, 546 | due to the difficulty adjustment itself. If one can compute the derivative 547 | 548 | $$ 549 | \frac{d({\rm hashrate})}{d({\rm BTC})} = \frac{d_1-d_2}{{\rm BTC}_1 - {\rm BTC}_2} 550 | $$ 551 | 552 | then derivative instruments such as options and futures can be constructed by 553 | private contract, where shares from different difficulty adjustment epochs are 554 | delivered to the derivative contract counterparty in exchange for BTC, possibly 555 | with time restrictions. We do not describe further how to achieve this, here we 556 | are only pointing out that the sufficient condition for the decentralized mining 557 | pool to support private contract derivative instruments are: 558 | 559 | 1. The ability to send shares to another party 560 | 2. The ability to settle shares into BTC at a well defined point in time with 561 | respect to the difficulty adjustment (for instance after the adjustment, for 562 | the previous epoch) 563 | 3. The ability transact shares across two difficulty adjustment windows. 564 | 565 | It may be tempting to turn a decentralized mining pool into a full DeFi market 566 | place with an order book. We caution that the problem of Miner Extractable Value 567 | (MEV) is a serious one that destroys fairness and confidence in the system, and 568 | should be avoided here. The only operations we consider here are (a) sending 569 | shares to another party and (b) requesting payout in BTC for shares. 570 | 571 | Finally let us note that the value of a "share" is naturally fixed after each 572 | difficulty adjustment. 
Within one two-week difficulty adjustment window, each 573 | sha256d hash attempt has a fixed value in terms of BTC, but the exact amount of 574 | BTC is unknown until the next difficulty adjustment. Therefore, the 2-week 575 | difficulty adjustment window is a natural point to automatically broadcast the 576 | UHPO tree for the last epoch and settle out all shares from the previous epoch. 577 | 578 | # Payout Authorization 579 | 580 | In [Payout Update](#payout-update) we described a simple mechanism to 581 | represent shares and share payouts as decided by the [Braid Consensus 582 | Mechanism](#braid-consensus-mechanism) on shares at any point in time. However, 583 | bitcoin is incapable of evaluating the logic of the pool's consensus mechanism 584 | and we must find a simpler way to represent that share payout consensus to 585 | bitcoin, such that the coinbase outputs cannot be spent in any other way than as 586 | decided by the pool's consensus. 587 | 588 | Probably the most straightforward way to authorize the share payouts and signing 589 | of coinbase outputs is to use a large threshold multi-signature. The set of 590 | signers can be any pool participant running the pool's consensus mechanism and 591 | with access to all the data needed to see that consensus mechanism's chain tip. We 592 | assume that in the [weak block](#shares-and-weak-blocks) metadata, the pool participants 593 | include a pubkey with which they will collaboratively sign the payout 594 | authorization. 595 | 596 | FIXME -- choose a subset of nodes who submitted shares using a hash function to 597 | "elect" them. Those nodes must then submit proof that their shares were valid by 598 | broadcasting the transaction tree in their share. If validation fails, the 599 | miner's shares are invalidated. This allows us to spot-check all hashers, 600 | mitigate block withholding attacks, and keep the signing subset small. 601 | 602 | The most logical set of signers to authorize the coinbase spends is the set of 603 | miners who have already successfully mined a bitcoin block. We want to avoid 604 | any single miner having unilateral control over a coinbase and the 605 | ability to steal the funds without paying other hashers. As such the minimum 606 | number of signers is four, using the $(3f+1)$ rule from the Byzantine agreement 607 | literature. This means that on pool startup, the first 4 blocks must be directly 608 | and immediately paid out to hashers, as there are not enough known parties to 609 | sign a multi-signature, and we don't even know their pubkeys to construct a 610 | (P2TR, P2SH, etc) bitcoin output address and scriptPubKey. 611 | 612 | After the first 4 blocks, we assume that 66%+1 of the miners who have previously mined 613 | a block must sign the coinbase output(s), paying into the UHPO set transaction. 614 | 615 | This is probably the biggest unsolved problem in building a decentralized mining 616 | pool -- how to coordinate a large number of signers. If we assume that shares 617 | are paid out onto bitcoin with every difficulty adjustment, this is 2016 blocks 618 | and up to 1345 signers that must collaborate to make a threshold 619 | multi-signature. This is a very large number and generally well beyond the 620 | capabilities of available signing algorithms such as 621 | [FROST](https://eprint.iacr.org/2020/852), 622 | [ROAST](https://eprint.iacr.org/2022/550), 623 | [MP-ECDSA](https://eprint.iacr.org/2017/552), or [Lindell's threshold 624 | Schnorr](https://eprint.iacr.org/2022/374) 625 | algorithm.
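A small sketch of the signer-set arithmetic described above, assuming "66%+1" means two-thirds plus one (which reproduces the 1345-of-2016 figure) and the $(3f+1)$ minimum of four signers; the helper is illustrative, not a normative algorithm:

    import math

    def signing_threshold(n_signers: int) -> int:
        """Two-thirds-plus-one threshold; the (3f+1) rule implies at least 4 distinct signers."""
        if n_signers < 4:
            raise ValueError("need at least 4 distinct block-winning miners (3f+1 with f=1)")
        return math.floor(2 * n_signers / 3) + 1

    print(signing_threshold(4))     # 3 of 4
    print(signing_threshold(2016))  # 1345 of 2016, the per-difficulty-epoch figure above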
626 | 627 | Below we discuss threshold Schnorr in more detail, but this may not be the only 628 | way to commit to and then authorize spending of coinbases into the UHPO tree. We 629 | encourage readers to find alternative solutions to this problem. The very large 630 | drawback to all signing algorithms we are able to find is that they are 631 | intolerant to failures. 632 | 633 | ## Schnorr Threshold Signatures 634 | 635 | We have reviewed a large amount of literature on threshold Schnorr algorithms. 636 | 637 | They all generally involve a Distributed Key Generation (DKG) phase using a 638 | variant of [Pedersen's 639 | DKG](https://link.springer.com/chapter/10.1007/3-540-46766-1_9), often 640 | augmenting it with polynomial commitments introduced by Feldman to achieve a 641 | [Verifiable Secret Sharing scheme 642 | (VSS)](https://ieeexplore.ieee.org/document/4568297). There are many papers with 643 | variations on this idea, each focusing on organizing rounds of communication, 644 | assumptions about communication (such as whether a broadcast channel exists) and 645 | security proofs. 646 | 647 | Participants in the threshold signature each contribute entropy in the DKG phase 648 | by creating and secret sharing their contribution to all other participants. In 649 | this way a key can be created with entropy input from all participants, such 650 | that no participant knows the key, but at the end of the DKG, all participants 651 | hold shares of it such that a t-of-n threshold number of shares must be Lagrange 652 | interpolated to reconstruct the secret. 653 | 654 | These secret shares are then used to compute a signature. Instead of directly 655 | reconstructing the secret key (which would give unilateral spending control to 656 | the party doing the reconstruction) one computes the signature using the 657 | secret share as the private key, and then Lagrange interpolation is performed on 658 | the resulting set of signatures instead. 659 | 660 | Both ECDSA and Schnorr signatures require a nonce $k$ which must additionally be 661 | agreed upon by the signing participants before signing, and is committed to in 662 | the signature itself. This is generally done by running an additional round of 663 | the DKG to compute $k$ such that everyone has a secret share of it. 664 | 665 | ### Distributed Key Generation 666 | 667 | # Transaction Selection 668 | 669 | The [Stratum V2](https://github.com/stratum-mining/sv2-spec) project is focusing 670 | on a model where hashers are responsible for constructing the block and 671 | selecting transactions. This is an improvement over Stratum V1 where the 672 | (centralized) pool chooses the block and transactions. 673 | 674 | The risk here is that the pool either censors valid transactions at the 675 | direction of a government entity, or prioritizes transactions through 676 | out-of-band payment, risking the "censorship resistant" property of the system. 677 | 678 | In the [Weak Blocks](#weak-blocks) section we did not indicate how transaction 679 | selection was done. This is a factorizable problem, and for a decentralized 680 | mining pool we also assume that individual hashers are constructing blocks, and 681 | the pool places no further restrictions on the transaction content of a block 682 | mined by a participating hasher. In fact, for weak blocks which do not meet 683 | bitcoin's difficulty threshold, it is probably best to elide the transaction set 684 | entirely for faster verification of shares. 
This introduces a problem that a 685 | hasher could construct a block with invalid transactions, but this would be 686 | easily discovered if that hasher ever mined a block, and his shares could be 687 | invalidated. 688 | 689 | A transaction selection mechanism using both a decentralized mining pool and 690 | Stratum V2 should be able to easily slot into the block structure required by 691 | the decentralized mining pool as indicated in [weak blocks](#shares-and-weak-blocks), as 692 | long as Stratum V2 is tolerant of the required coinbase and metadata structure. 693 | 694 | In our opinion simply allowing hashers to do transaction selection is 695 | insufficient, as centralized pools can simply withhold payment unless hashers 696 | select transactions according to the rules dictated by the pool. A full solution 697 | that restores bitcoin's censorship resistance requires decentralized payment as 698 | well. 699 | 700 | # Attacks 701 | 702 | ## Block Withholding 703 | 704 | ## Coinbase Theft by Large Miners 705 | 706 | Because signing very large threshold Schnorr outputs is impractical, it is 707 | necessary to keep the number of signers $n$ of the t-of-n UHPO root output 708 | relatively small, so as to complete the signature in a reasonable amount of time 709 | and without consuming too much bandwidth or computation. 710 | 711 | Therefore there exists the possibility that just due to luck, the same (large) 712 | miner might mine all $n$ of the most recent blocks, or that two miners who 713 | together mine all $n$ of the most recent blocks collude. In this case that miner (or colluding pair) could unilaterally sign the payout and steal the pooled coinbase funds. 714 | 715 | To mitigate this, the UHPO root must be signed by t-of-n of the most recent *distinct* miners 716 | who successfully mined bitcoin blocks. 717 | 718 | We might also consider including hashers who have not won bitcoin blocks. In 719 | order to do this we might select a random subset of recent shares, and require 720 | that those hashers prove the entire bitcoin block committed to in their share. 721 | Upon successful validation of their share, they are included in the signing 722 | subset for future blocks. Consensus on this signing subset would be included in 723 | beads. 724 | 725 | If a hasher is elected for UHPO signing and fails to provide proof of his shares, those shares are invalidated. 726 | 727 | # Unsolved Problems and Future Directions 728 | 729 | The largest unsolved problem here is that of the [Payout 730 | Authorization](#payout-authorization). While off-the-shelf algorithms are 731 | available such as [ROAST](https://eprint.iacr.org/2022/550), they require fixing 732 | the set of signers and are intolerant to failure in either the nonce generation 733 | phase, the signing phase, or both. A threshold number of participants must be 734 | chosen, and must *all* remain online through the keygen and signing phases. If 735 | any participant fails, a different subset must be chosen and the process 736 | restarted. There does exist an [approach due to Joshi et 737 | al](https://link.springer.com/chapter/10.1007/978-3-031-08896-4_4) at the cost 738 | of an extra preprocessing step, which makes the final signature aggregation 739 | asynchronous assuming the nonce generation was successful, though the setup 740 | phases are still intolerant to failure. 741 | 742 | The fact that both ECDSA and Schnorr signatures require a nonce $k$ is a significant 743 | drawback, requiring an additional keygen round with everyone online that other 744 | systems such as BLS do not need.
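For intuition on the "DKG and Shamir sharing" referred to below, here is a toy t-of-n Shamir split and Lagrange reconstruction over the secp256k1 group order. It illustrates only the share-and-interpolate step; it is not a secure DKG or a threshold signing protocol, and the parameter values are purely illustrative.

    # Toy Shamir sharing; NOT a DKG and NOT cryptographically hardened.
    import random

    Q = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

    def split(secret: int, t: int, n: int) -> list:
        """Split `secret` into n shares, any t of which reconstruct it."""
        coeffs = [secret] + [random.randrange(Q) for _ in range(t - 1)]
        return [(i, sum(c * pow(i, k, Q) for k, c in enumerate(coeffs)) % Q)
                for i in range(1, n + 1)]

    def reconstruct(shares: list) -> int:
        """Lagrange-interpolate the shares at zero to recover the secret."""
        secret = 0
        for i, (x_i, y_i) in enumerate(shares):
            num = den = 1
            for j, (x_j, _) in enumerate(shares):
                if i != j:
                    num = num * (-x_j) % Q
                    den = den * (x_i - x_j) % Q
            secret = (secret + y_i * num * pow(den, -1, Q)) % Q
        return secret

    shares = split(secret=123456789, t=3, n=5)
    assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice

In a threshold Schnorr scheme the same interpolation is applied to partial signatures rather than to the secret itself, so that no single party ever reconstructs the private key.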
745 | 746 | In practice if no new algorithm is found and an existing Schnorr threshold 747 | signature is used (something involving a DKG and Shamir sharing), a balance must 748 | be struck between having so many signers that payouts cannot be signed in a 749 | reasonable time, and so few signers that the system is insecure and coinbases 750 | could be stolen by a small subset. 751 | 752 | An approach that might be considered is to sub-sample the set of signers, and 753 | somehow aggregate signatures from subsets. As the resultant signatures would 754 | have different nonces, they cannot be straightforwardly aggregated, but this is 755 | the same problem as aggregating different signatures within a transaction or 756 | block, and approaches to [Cross Input Signature Aggregation 757 | (CISA)](https://github.com/ElementsProject/cross-input-aggregation) might be 758 | used here and might indicate the desirability of a future soft fork in this 759 | direction. 760 | 761 | ## Covenants 762 | 763 | One might take the UHPO set transaction and convtert it to a tree structure, 764 | using covenants to enforce the structure of the tree in descendant transactions. 765 | This is often done in the context of covenant-based soft fork proposals so that 766 | one party can execute his withdrawal while not having to force everyone else to 767 | withdraw at the same time. 768 | 769 | Because a decentralized mining pool is an active online system, it seems better 770 | to use an interactive method to write a new transaction for a withdrawal, than 771 | to allow broadcasting part of a tree. If part of a tree were broadcast, this 772 | must also be noticed by all miners and the share payouts updated. 773 | 774 | In our opinion the only reason the whole UHPO set transaction(s) would be 775 | broadcast is in a failure mode or shutdown of the pool, in which case the tree 776 | just increases the on-chain data load for no benefit. 777 | 778 | ## Sub-Pools 779 | 780 | Since a consensus system cannot achieve consensus faster than the global 781 | latency, this is an improvement in share size of at most about 1000x. In order 782 | to support even smaller hashers, one might consider "chaining" the decentralized 783 | mining pool to create a sub-pool. 784 | 785 | Instead of coinbase UTXOs as inputs to its UHPO set, a sub-pool would have UHPO 786 | set entries from a parent pool as entries in its UHPO set. With a separate 787 | consensus mechanism from its parent, a chain of two decentralized mining pools 788 | could allow hashers 1000000x smaller to participate. A pool could in principle 789 | dynamically create and destroy sub-pools, moving miners between the sub-pools 790 | and main pool dependent on their observed hashrate, so as to target a constant 791 | variance for all hashers. 792 | 793 | 794 | --------------------------------------------------------------------------------