├── plasma
│   ├── cash_vs_mvp_math.md
│   ├── plasma-mvp
│   │   ├── specifications
│   │   │   ├── morevp.md
│   │   │   └── no-confirmations.md
│   │   ├── chaining-txs-within-block.md
│   │   ├── explore
│   │   │   ├── priority.md
│   │   │   └── rootchain.md
│   │   └── delegated-exits.md
│   ├── mechanical_reorg_protection.md
│   ├── bond-pricing-concerns.md
│   ├── dual_ledger_mass_exits.md
│   ├── simple_fast_withdrawals.md
│   ├── mass_withdraw.md
│   ├── fast_finality.md
│   └── plasma-cash
│       ├── bloom-filters.md
│       └── thinking-about-plasma-cash.md
├── README.md
└── dex
    └── offchain-settlement-mechanism.md

/plasma/cash_vs_mvp_math.md:
--------------------------------------------------------------------------------
1 | Plasma Cash vs MVP math
2 | 
3 | Linking to HackMD, as the math equations render better there than in a Markdown file here.
4 | 
5 | https://hackmd.io/s/S10NHJI_4
6 | 
--------------------------------------------------------------------------------
/plasma/plasma-mvp/specifications/morevp.md:
--------------------------------------------------------------------------------
1 | # More Viable Plasma
2 | 
3 | This document is based on the original [More Viable Plasma post](https://ethresear.ch/t/more-viable-plasma/2160/49) on ethresear.ch.
4 | 
5 | The document has become part of the OMG Network specifications and has been moved to the `docs/` folder of github.com/omisego/elixir-omg.
6 | 
7 | See `git blame` and/or https://github.com/omisego/research/pull/44 for the original work and discussion done on this design in this repo.
8 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # research
2 | 
3 | OmiseGO's public research repository.
4 | 
5 | ## Background
6 | 
7 | OmiseGO is publishing research documents in an effort to generally improve transparency, increase community engagement, and advance the state of research. These documents range from train-of-thought-style research notes to finalized specifications.
8 | 
9 | **Comments, feedback, and improvements are more than welcome! Feel free to create issues or pull requests where you see fit.**
10 | 
11 | This repository isn't just for OmiseGO! Anyone is welcome to create pull requests containing original research. Please make sure that any content submitted is properly attributed.
12 | 
13 | ## Scope
14 | 
15 | This repository is designed to host any research related to, or tangentially related to, OmiseGO. If you need inspiration, we're currently thinking about [Plasma](https://plasma.io/plasma.pdf), [Plasma Cash](https://ethresear.ch/t/plasma-cash-plasma-with-much-less-per-user-data-checking/1298), and [Decentralized Exchanges](https://cdn.omise.co/omg/whitepaper.pdf) in general (and much more).
16 | 
17 | Check out some of the documents in this repository if you're unsure whether your work will fit.
18 | 
--------------------------------------------------------------------------------
/plasma/mechanical_reorg_protection.md:
--------------------------------------------------------------------------------
1 | Mechanical reorg protection, proposal
2 | ==
3 | 
4 | Original doc from Pepesza, also see the discussion there:
5 | https://gist.github.com/paulperegud/3fb4bb96693abe2e0b175f7b20e64362
6 | 
7 | ### How can the chain be invalidated by a reorg?
8 | A deposit is spent by its owner on the child chain, and the operator submits a new block to the root chain.
9 | Then the deposit disappears from the root chain in a reorg, creating a situation where the operator has allowed money creation on the child chain,
10 | leading to partial reserves and a mass exit.
11 | 
12 | ### Implemented protection - an economic one
13 | To protect the chain from invalidation by reorgs, we do not allow users to spend deposits younger than N Ethereum blocks.
14 | This gives us probabilistic guarantees against invalidation by a random reorg.
15 | Unfortunately, a 51% attack targeting the OMG chain specifically is still possible.
16 | Its cost depends on N and on the current spending on security of the Ethereum chain.
17 | https://www.crypto51.app/ evaluates the cost of a 1-hour 51% attack at USD 85,000.
18 | Taking N=12, the cost of an attack against the OMG chain is just USD 5,000.
19 | 
20 | While the viability of such an attack depends not only on money but also on access to hash power, if we succeed
21 | the OMG chain will become a large target.
22 | 
23 | ### Proposed solution that gives us full reorg immunity
24 | (solution born in discussion with Pawel Thomalla)
25 | The operator, while submitting a block, adds a hash describing their knowledge about deposits in the contract.
26 | The contract computes its own summary of the state of deposits and compares it with the submitted one.
27 | If they are not equal, the transaction submitting the block is rejected.
28 | Bonus points - this enables the operator to spend deposits as soon as they appear, without waiting.
29 | 
30 | #### Technical description, pseudocode
31 | ```
32 | submitBlock(bytes32 blockRoot, bytes32 depositsHash)
33 |     bytes32 ownHash = keccak256(all deposit blocks where blknum > N-2 and blknum < N-1)
34 |     require(ownHash == depositsHash);
35 | ```
36 | 
37 | #### Gas cost analysis
38 | 200 + 42 gas (SLOAD, SHA3 on two words) per deposit made to the chain, paid by the operator.
39 | 
40 | ### Changes needed in client software
41 | 1. ability to roll back plasma blocks for both the operator and the watcher
42 | 2. ability to evaluate such rollbacks with regard to byzantine behavior by the operator
43 | 
--------------------------------------------------------------------------------
/plasma/bond-pricing-concerns.md:
--------------------------------------------------------------------------------
1 | # Challenge Bond Pricing Concerns
2 | 
3 | This is a set of notes from discussions with David Knott and Vi.
4 | 
5 | ---
6 | 
7 | Users who successfully challenge an action on a Plasma contract should generally receive a bond for doing so.
8 | Current thinking on the actual magnitude of this bond has assumed that the bond should simply cover the gas cost of challenging.
9 | So, although challenges aren't supposed to be profitable, challenges are supposed to be free.
10 | If users are already required to monitor the Plasma chain and challenges are free, then users have no real incentive to disable automatic challenges in their client software.
11 | 
12 | Unfortunately, bond pricing isn't as simple as it seems.
13 | Whenever multiple challenges are submitted simultaneously, only one challenge can actually be successful.
14 | **Each unsuccessful challenge still needs to pay at least the base transaction gas cost (21000 gas)**.
15 | Depending on network congestion, this base cost alone can range anywhere from <$0.01 to >$0.50.
16 | 
17 | We can try to limit the number of challenges submitted at the same time by having clients strategically wait to see if other challenges are submitted, but this won't work in every case.
18 | We run into further issues if miners choose to front-run challenge transactions.
19 | **A front-running miner would place their own challenges in front of other challenges in order to collect the bond**.
20 | Challenges by other users would always fail and always have to pay at least the base gas fee.
21 | It's unlikely that front-running will happen if the bond is sufficiently low and the cost of modifying client software is high, but it's something to consider.
22 | 
23 | **If challenges are not free, then users may choose to only challenge if the exiting UTXO would directly impact the safety of their funds**.
24 | As a result of exit priority, the safety of a user's UTXO is only threatened when the total funds stolen are greater than the total sum of valid UTXOs with a lower priority than the user's UTXO.
25 | If the sum stolen is less than this amount, the user can be sure that their UTXO will be processed with enough funds in the contract.
26 | 
27 | In practice, these problems probably aren't as bad as they seem.
28 | The cost of submitting an invalid exit is pretty high for even a low bond on the order of a few USD.
29 | On the high end, a failed challenge submitted every Ethereum block (that's a lot of challenges) would only run on the order of ~$100k annually.
30 | Certain parties will probably have external incentives to challenge.
31 | With changes in Ethereum 2.0, it may be possible to block these double-challenges automatically in a way that doesn't charge gas for the second challenge.
32 | 
33 | **So we're probably fine for now**.
34 | However, these are problems that don't currently have a convincing economic solution.
35 | We definitely need to consider these things going forward and come up with a stronger protocol that addresses these concerns without relying on extra-protocol assumptions.
36 | 
--------------------------------------------------------------------------------
/plasma/plasma-mvp/chaining-txs-within-block.md:
--------------------------------------------------------------------------------
1 | original: https://github.com/omisego/research/issues/86
2 | ### TL;DR
3 | 
4 | The idea here proposes a solution for chaining txs within a block under the Plasma MVP protocol.
5 | All txs within a block would be considered successful or failed atomically.
6 | 
7 | ### Why
8 | This tries to solve the problem that plasma txs cannot be chained within a block.
9 | Currently, all Plasma flavors need to wait for a block to be able to start the next transaction.
10 | Chaining can be useful in several scenarios. For example, we can chain UTXO-merging txs together without waiting for a block.
11 | 
12 | ### Concept
13 | 
14 | By design, to use an MVP tx, the confirm signature of the tx from the sender is needed. This is the proof that the user double-checked the result of the tx inside the block. This two-step design gives the user control to finalize the tx after the block is submitted.
15 | 
16 | Leveraging the same design, we can make the user finalize all txs within a block instead of finalizing a single tx. By finalizing all the txs within a block at once, the user can decide not to finalize anything within the block and exit from previous blocks instead.
17 | 
18 | This does not provide fast finality, but it removes the issue that plasma txs cannot be chained together within a block.
19 | 
20 | ### Design
21 | 
22 | 1. If a tx is using a mined input in the plasma chain, it needs a confirmation signature of the input owner signing off the previous block.
23 | 1. If the tx input is in-flight (it will eventually be in the next block), the confirmation signature of the input block is not needed.
24 | 1. 
To exit an output, or to use an output in a next tx which is in a different block, the user needs to disclose the confirmation signature that signs off the whole block that mines the chained txs (signing the block hash). So in any case, the proof information to finalize the txs within the block is disclosed at once (promising atomicity).
25 | 
26 | Be aware that it is possible for a user to chain A->B->C->D, but the operator mines only A->B->C in the next block, without D. In this case, the transaction data of D is now useless, as C is mined, and the user would need to regenerate D' including the confirmation signature of the block of A->B->C.
27 | 
28 | 
29 | ### Extra Challenge for MVP
30 | In MVP, there should be an extra challenge to protect against a user withholding a confirmation signature. For a transaction A -> B, the user can possibly exit from A first without confirming B, and then confirm B to exit B too. Thus, we need an extra challenge game to protect against this specific double-spending-via-exit scenario.
31 | 
32 | We would need the same challenge for chained txs as well. It protects against the following situation:
33 | ```
34 | block1: [A]
35 | block2: [B->C->D]
36 | ```
37 | where A->B->C->D are chained together from the same owner. Now the owner can choose not to confirm B->C->D at first and exit A. Later, after the exit of A is finalized, the owner confirms the block and exits from D. This creates a double-spending situation via exit.
38 | 
39 | As a result, we need an extra challenge for this. I'd like to propose a challenge for "confirmation signature invalid". We challenge that such a confirmation signature is invalid by showing that:
40 | 
41 | 1. There is a previously finalized exit output.
42 | 1. There exists another tx in the same block as the confirm sig that is using the already-exited output.
43 | 
44 | Now, whatever exit is using this confirmation signature is invalid.
45 | 
46 | ### Limitation of chaining
47 | 
48 | The whole chain must be from the same owner. We probably cannot chain payments such as: Alice sends to Bob and Bob sends to Carol. Because, what if Carol confirms the block without Bob's confirmation?
49 | 
--------------------------------------------------------------------------------
/plasma/plasma-mvp/explore/priority.md:
--------------------------------------------------------------------------------
1 | Exploring `plasma-mvp`: Exit Priority
2 | ===
3 | 
4 | Author: Kelvin Fichter
5 | 
6 | ---
7 | 
8 | Plasma's exit priority is a necessary mechanism to ensure that exits can be processed safely and effectively. This document explains why it's necessary and how `plasma-mvp` attempts to implement it.
9 | 
10 | ## Example
11 | 
12 | Let's illustrate why priority is important by showing what could happen if we didn't have it:
13 | 
14 | 1. Alice makes valid transactions on the Plasma chain and ends up with a UTXO worth 1 ETH.
15 | 2. The operator creates an invalid block and does not publish it. This block contains a transaction that creates a 1 ETH UTXO for the operator "out of nowhere" (like a deposit).
16 | 3. The operator immediately exits on this UTXO. The exit can't be challenged because it's valid (as far as Plasma is concerned).
17 | 4. Alice later attempts to exit on her 1 ETH UTXO, but the contract doesn't have enough ETH to pay her because the operator took 1 ETH "out of nowhere".
18 | 
19 | Here's how that situation plays out with priority:
20 | 
21 | 1. Alice makes valid transactions on the Plasma chain and ends up with a UTXO worth 1 ETH.
22 | 2. The operator creates an invalid block and does not publish it.
This block contains a transaction that creates a 1 ETH UTXO for the operator "out of nowhere" (like a deposit). 23 | 3. The operator immediately exits on this UTXO. The exit can't be challenged because it's valid (as far as Plasma is concerned). 24 | 4. Alice later attempts to exit on her 1 ETH UTXO. Her exit is given a higher priority than the operator's because the corresponding UTXO was included in an earlier block. Alice's exit will be processed before the operator's exit. 25 | 5. The operator's exit fails as long as all other (valid) exits are created before the operator's exit is processed. 26 | 27 | ## Implementation 28 | 29 | So how is this actually implemented? 30 | 31 | [Minimal Viable Plasma](https://ethresear.ch/t/minimal-viable-plasma/426) specifies that `priority` be implemented as follows: 32 | 33 | > startExit must arrange exits into a priority queue structure, where priority is normally the tuple (blknum, txindex, oindex) (alternatively, blknum * 1000000000 + txindex * 10000 + oindex). However, if when calling exit, the block that the UTXO was created in is more than 7 days old, then the blknum of the oldest Plasma block that is less than 7 days old is used instead. There is a passive loop that finalizes exits that are more than 14 days old, always processing exits in order of priority (earlier to later). 34 | > 35 | > This mechanism ensures that ordinarily, exits from earlier UTXOs are processed before exits from older UTXOs, and particularly, if an attacker makes a invalid block containing bad UTXOs, the holders of all earlier UTXOs will be able to exit before the attacker. The 7 day minimum ensures that even for very old UTXOs, there is ample time to challenge them. 36 | 37 | So we generally need to ensure that exits are ordered first by block, then transaction index, and then output index. `plasma-mvp` uses `blknum * 1000000000 + txindex * 10000 + oindex` (also called `utxoPos`) as priority. If the output is more than a week old, `blknum` is replaced by `weekOldBlock`, the oldest block less than a week old. 38 | 39 | The code for that looks like this: 40 | 41 | ``` 42 | // Priority is a given utxos position in the exit priority queue 43 | uint256 priority; 44 | if (blknum < weekOldBlock) { 45 | priority = (utxoPos / blknum).mul(weekOldBlock); 46 | } else { 47 | priority = utxoPos; 48 | } 49 | ``` 50 | 51 | Now we just need to make sure that exits are placed in a queue and ordered by priority. `plasma-mvp` uses `priority` in a mapping between `priority` and `exits`, so `priority` is necessarily unique. This might eventually be changed so that `priority` doesn't need to be unique but the priority queue somehow holds some other identifying information about the exit. There's an open issue on GitHub about this [here](https://github.com/omisego/plasma-mvp/issues/29). 52 | 53 | -------------------------------------------------------------------------------- /dex/offchain-settlement-mechanism.md: -------------------------------------------------------------------------------- 1 | Original Github issue: https://github.com/omisego/research/issues/88 2 | 3 | ## Why 4 | In the [alternative proof doc](https://omisego.atlassian.net/wiki/spaces/RES/pages/26673228/Alternative+DEX+proof+design?atlOrigin=eyJpIjoiZWY3ZTI1ZDg2MzYxNGM2OGJiYjM3MjIxMGU2NDZlOTEiLCJwIjoiYyJ9) from clearpool it seems like they would like to have some off chain "settlement" mechanism. 
This just means they would like to be able to double-check (likely via a third party) the event history off-chain before finalizing it on-chain.
5 | 
6 | In the current plasma chain, you can verify one tx off-chain. However, if a venue wants to verify a set of txs (chained txs), this needs some alternative mechanism.
7 | 
8 | ## Possible Solution 0
9 | 
10 | The venue and the validator double-sign each transaction. This needs a stronger liveness assumption on the validator, as the validator has to keep up with each transaction as it is generated. However, this is the simplest solution.
11 | 
12 | ## Possible Solution 1 - Chaining txs + Confirmation signature (MVP) by validator
13 | 
14 | One possible solution is to use the same flow as [chaining txs within block](https://github.com/omisego/research/blob/master/plasma/plasma-mvp/chaining-txs-within-block.md). The venue chains all txs and sends them to the operator. However, instead of the venue providing the confirm sig, it is the validator who provides it. So the validator gets the block data and confirms whatever is there. The venue needs to get the confirm sig to be able to keep generating the next settlement transaction.
15 | 
16 | Be aware that the mechanism does not guarantee that the operator will put all txs into the same block. In the case that the operator rejects some chained transactions from being put in the same block, those rejected transactions need to be regenerated with the validator's confirm signatures added.
17 | 
18 | ## Possible Solution 2 - Mine Chained Txs First + Explicit range to finalize
19 | This is a complex solution. The venue first submits chained txs to plasma. However, those txs need one more confirm tx + one confirm signature to finalize.
20 | 1. The confirm tx would make explicit the range of the chained txs to finalize. For example, for chained txs A->B->C->D, the confirm tx can state that it only wants to finalize A->B->C without D.
21 | 2. The main trick here is that the exit priority of the transactions would use the position of the confirm tx instead of their original tx positions. Let's say somebody chained C->D', and confirmed both C->D and C->D' afterward (double spending). If a fraudulent operator lets both confirm txs be mined, only one of the two would have higher priority, depending on which confirm tx got mined first.
22 | 3. Since exit priority is decided by the confirm tx, we would need another confirm signature (of the confirm tx) to confirm that there is no invalid tx in front of the confirm tx. MoreVp does not really work here, as there might be no "input" for the confirmation tx to change to a higher priority.
23 | 
24 | So we can let the validator/regulator create the confirmation tx and confirm signature. The cool thing here is that the venue can just keep submitting the event data and let the validator/regulator come in and stamp the finalization process. If any of the settlement data does not look right, the venue can even start a new branch/fork of the event history from any non-finalized position.
25 | 
26 | One thing to be aware of is that this basically means we delay the verification of transactions to the time when the confirmation tx is mined. How the watcher/operator can efficiently know which txs from the history data need to be verified might be an issue. (Currently they just need to verify whenever a new tx comes in.) A naive solution is for the confirmation signature to include all positions of the transactions so the watcher does not need to run through all txs.
27 | 
28 | ## Alternative Solution - Validation token (Jamie & Chris's idea)
29 | This idea is that the validator/regulator provides some "token" as the proof of validation.
The venue has to put the token as an input of the settlement.
30 | 
31 | Also, note that the validator/regulator might not need to do a full verification. For instance, they might only be verifying whether the settlement price is correct, instead of also checking UTXOs.
32 | 
--------------------------------------------------------------------------------
/plasma/dual_ledger_mass_exits.md:
--------------------------------------------------------------------------------
1 | ## Dual submissions of ledger state for cheap mass-exits
2 | 
3 | We propose introducing a dual nature of the operator's submissions (commitments) to the ledger state: UTXO-based and account-based.
4 | The goal is for users to be able to start an exit of all UTXOs in their possession using a single merkle proof and limited root chain data.
5 | 
6 | ### Problem
7 | 
8 | Without mass-exit facilities, the gas cost required for users to exit their funds limits the number of UTXOs present on chain to roughly 550K.
9 | This is because for every UTXO exited, a merkle proof must be published and verified on the root chain contract (~280K gas, judging based on the recent `plasma-contracts` impl.).
10 | 
11 | ### Construction
12 | 
13 | For simplicity let's assume that there's only one token.
14 | For multi-token support, all tokens are just treated separately.
15 | 
16 | Every `submitBlock` from the operator publishes two merkle roots: that of the block of txs (as done now) and that of an **account-based ledger** of the form:
17 | `[{owner_address, sum_of_funds}, ...]`.
18 | 
19 | The `sum_of_funds` is a commitment by the operator to how much all the accounts hold at a given child chain height.
20 | The merkle root of the account-based ledger is verified by the Watchers on every block (TODO: can be less frequent?).
21 | 
22 | There is an **account exit** available, an additional kind of exit. `startAccountExit` takes in the following parameters:
23 | - `{owner_address, sum_of_funds}` - the `account_state`
24 | - `N` - height of the account-based ledger submission used
25 | - the proof of the given `account_state`'s inclusion in the account-based ledger at `N`
26 | - `exclusions` - a list of `[{utxo_pos, amount}]` UTXOs that were spent by `owner_address` somewhere at heights `>=N`
27 | 
28 | Assume `K` is the height at which the `startAccountExit` is mined.
29 | 
30 | `startAccountExit` has the following effects:
31 | - an exit of amount `sum_of_funds - sum(exclusions)` to `owner_address` is scheduled according to the age of submission `N` and exit start `K` (similar to how it is done for a UTXO-based standard exit)
32 | - if any of the `exclusions` is incorrect, i.e. at the given `utxo_pos` there is something different than `{owner_address, amount}` - the account exit can be challenged
33 | - if there exists _any spend_ from `owner_address` published at height `>=N`, other than from UTXOs in `exclusions` - the account exit can be challenged. Published means included in a child block or used in an exit
34 | - no UTXO-based exits (SE & IFE alike) are allowed to be finalized for `owner_address` except those listed in the `exclusions`
35 | - funds that `owner_address` receives after `K` can be operated with normally. (TODO - is that so? see [_finality question_](https://github.com/omisego/research/pull/106#issuecomment-507705003))
36 | 
37 | The account exit effectively "closes" the account of `owner_address` and compacts it, allowing all the funds held to be exited cheaply.
38 | 
39 | ### Rationale
40 | 
41 | Why could it help?
42 | - allows to mass exit all funds belonging to `owner_address` roughly at a cost of a single UTXO standard exit, so that roughly 550K users are able to securely hold state, regardless of their UTXO count (**NOTE** the single token assumption!) 43 | - removes the requirement to manage UTXOs, you can have as many as you please 44 | 45 | Why does it work? 46 | - operator can't put arbitrary data in the account-based ledger submission to exit, because the exit priority is observed (standard plasma security) 47 | - no one can use an old account-based ledger submission to exit funds which were later spent. 48 | Any spend that comes after height `N` can challenge 49 | - the `exclusions` declaration allows one to deal with funds spent after the last known correct pair of submissions from the operator - those funds are most likely to require IFEs 50 | - it is easy to validate an account exit and reasonably easy to compute challenge. 51 | It suffices to check the `balance(address)` at height `K` - it must equal to `sum_of_funds - sum(exclusions)`. 52 | If that's not the case, blockchain from `N` to `K` must be scanned to find the violating transaction/exit. 53 | After this initial check, all blocks/exits seen must be checked to not include anything spent by `owner_address` ever again, as long as it's been created before moment height `K` (TODO - is the "`K`" part possible/necessary? see _finality question_ above). 54 | -------------------------------------------------------------------------------- /plasma/simple_fast_withdrawals.md: -------------------------------------------------------------------------------- 1 | # Simple Fast Withdrawals 2 | 3 | Design from conversation with vi, David Knott, Ben Jones, and Eva Beylin. Thanks to Eva Beylin & Kelsie Nabben for review/edits. 4 | 5 | eth research link: https://ethresear.ch/t/simple-fast-withdrawals/2128 6 | 7 | --- 8 | 9 | ## TL;DR 10 | 11 | We can enable fast withdrawals without Plasma contracts by taking advantage of root chain smart contracts. Withdrawals can then be handled as tokenized debt, and we can build a marketplace from there. 12 | 13 | ## Background 14 | 15 | Fast withdrawals are a construction in Plasma that effectively boil down to an atomic swap between the Plasma chain and the root chain. They're useful because Plasma withdrawals are slow (2 weeks, in our implementation), and people usually want their money quite quickly. The Plasma paper discusses one such construction that relies on outputs being locked to contracts: 16 | 17 | > Funds are locked to a contract on a particular output in the Plasma chain. This occurs in a manner similar to a normal transfer, in that both parties broadcast a transaction, and then later commit that they have seen the transaction in a Plasma block. The terms of this contract is that if a contract is broadcast on the root blockchain and has been finalized, then the payment will go through in the Plasma chain. 18 | 19 | However, we currently don't support funds locked to contracts in Plasma. This post describes a simple fast withdrawal mechanism that ensures the liquidity provider will be paid after the full withdrawal time without the need of Plasma contracts. Like the original fast withdrawal design, this design relies on Plasma data availability. 20 | 21 | ## Pay-to-Smart-Contract 22 | 23 | We take advantage of the fact that Ethereum smart contracts cannot produce signatures, and therefore cannot spend funds on the Plasma chain. 
However, Ethereum contracts *can* initiate an exit by calling the Plasma contract. This makes it possible for a user to send child chain funds to the address of an Ethereum contract, where these funds can no longer be spent but can be withdrawn.
24 | 
25 | In the case that a user doesn't want to wait for the Plasma exit, we can enable fast withdrawals by deploying a special contract to Ethereum - let's call this a "liquidity contract". Any user may force the contract to trigger a slow Plasma exit of any utxo where the user is the sender. This action creates an ERC721 token for the user that represents the right to receive the value of the exit once it processes. The user can then quickly and simply receive the value of their utxo (minus a fee in the form of a discount) by transferring or selling this token to any other user.
26 | 
27 | For clarity, here's a quick user flow:
28 | 
29 | 1. Alice has 10 ETH on the child chain and wishes to quickly withdraw to the root chain instead of waiting two weeks.
30 | 2. Bob is okay waiting two weeks for the exit to process, so he's willing to front Alice the money now in exchange for a discount.
31 | 3. Alice and Bob will use an Ethereum liquidity contract.
32 | 4. Alice sends her 10 (child chain) ETH to the address of the liquidity contract. This is a Plasma transaction, not an Ethereum transaction.
33 | 5. Alice sees that her transaction to the contract has been included in the Plasma chain. The contract now owns a utxo received from Alice.
34 | 6. Alice calls a function in the smart contract that triggers an exit from this utxo. The contract credits Alice with a token representing the future funds from this exit.
35 | 7. Bob is willing to pay 9 ETH for a 10 ETH token that will "mature" (to take some bond terminology) in two weeks.
36 | 8. Bob has data availability, checks the Plasma chain, and sees that Alice's exit is not invalid. Bob tells Alice that he's willing to purchase her exit token.
37 | 9. Alice sells her 10 ETH token to Bob for 9 ETH. Alice receives 9 ETH now, and Bob will receive 10 ETH once the exit processes. Bob has "earned" 1 ETH (in the form of a discount) for providing a liquidity service to Alice.
38 | 
39 | ## Markets
40 | 
41 | To ensure that Alice is able to receive funds from the Plasma chain quickly, there must be a marketplace for her tokens. It's possible to create any number of schemes that give users the best possible price. For example, each user could hold a short auction for their token or could arrange a sale out-of-band.
42 | 
43 | It's also possible to reintroduce the concept of rating agencies to create more liquid markets. These agencies would attest to the validity of the exit. Liquidity providers could then give a market price for each token (based on value & time to process). This means that users can quickly sell their tokens and receive their funds without having to spend time finding a liquidity provider or waiting for an auction to complete.
44 | 
45 | Furthermore, it's probably also possible to sell parts of a token, but gas costs make this impractical for low-value tokens.
46 | 
47 | An auction seems like the simplest mechanism in the short term.
48 | 
49 | ## Notes
50 | 
51 | As always, feedback and comments are more than welcome. Please feel free to challenge any part of this, there very well may be issues.
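To summarize the Pay-to-Smart-Contract flow above in code, here is a toy Python model of the liquidity contract's state transitions. The class, method, and field names are invented for illustration only and do not correspond to any deployed contract; exit scheduling, bonds, and fees are omitted.

```python
class LiquidityContract:
    """Toy model: Plasma exits become transferable claims ("exit tokens")."""

    def __init__(self, plasma_contract):
        self.plasma = plasma_contract   # root chain Plasma contract (assumed interface)
        self.claims = {}                # claim_id -> {"owner": ..., "amount": ...}
        self.next_claim_id = 0

    def start_exit(self, utxo, sender):
        # Any user who sent a utxo to this contract's child chain address can
        # force the contract to start the (slow) Plasma exit for that utxo.
        assert utxo["recipient"] == "liquidity_contract_address"
        assert utxo["sender"] == sender
        self.plasma.start_exit(utxo)
        # Credit the sender with an ERC721-style claim on the future exit value.
        claim_id = self.next_claim_id
        self.claims[claim_id] = {"owner": sender, "amount": utxo["amount"]}
        self.next_claim_id += 1
        return claim_id

    def transfer(self, claim_id, current_owner, new_owner):
        # Selling the claim (e.g. a 10 ETH token sold for 9 ETH) is just a
        # token transfer; the price is negotiated off-contract.
        assert self.claims[claim_id]["owner"] == current_owner
        self.claims[claim_id]["owner"] = new_owner

    def redeem(self, claim_id, caller):
        # After the two-week exit has processed, the current holder collects
        # the full face value of the claim.
        claim = self.claims.pop(claim_id)
        assert claim["owner"] == caller
        return claim["amount"]
```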
52 | 
--------------------------------------------------------------------------------
/plasma/mass_withdraw.md:
--------------------------------------------------------------------------------
1 | Improve efficiency of mass-withdrawals.
2 | 
3 | By @DavidKnott and @kfichter:
4 | 
5 | # Mass withdrawals
6 | 
7 | Talks with Kelvin
8 | 
9 | Plasma MVP's exit time constraints make it vulnerable to Ethereum network congestion. Mass withdrawals have the ability to improve upon this weakness by decreasing the average cost per utxo exit.
10 | 
11 | There are a few ways we can go about doing mass withdrawals.
12 | 
13 | ### Single owner mass withdrawals
14 | The initial way I thought of to do single owner mass withdrawals was to have the owner make a list of all the utxos they own client side. From this list they'd submit a start position as well as a bitfield in which each bit would represent an exiting utxo from the start position onward. Then, if any of the UTXOs being exited is invalid, anyone will be able to challenge the exit.
15 | This doesn't work though, because there's no efficient way to prove that a given UTXO challenge response is the utxo from the challenged position in the bitfield.
16 | 
17 | To fix this problem we'll require the operator to add a field to each transaction when it's merkleized, stating its utxo's position in relation to its owner's other UTXOs.
18 | 
19 | 
20 | ### Multi owner mass withdrawals
21 | Multi owner mass withdrawals require the owner of every utxo being exited to sign off. The leader of the exit is the one who submits the mass exit to Ethereum. The leader must specify the block and start position of the bitfield. Their signatures are represented in a bitfield. It's the responsibility of each owner to watch all mass withdrawals to make sure that their own UTXOs aren't withdrawn without their permission. The exit leader will also have to submit the merkle root of all the transactions being withdrawn.
22 | 
23 | Both mass withdrawals will wait a challenge period which will allow anyone to request that the exit leader provide the utxo and signature for the utxo that they claimed to have in the bitfield they previously submitted.
24 | 
25 | ### Targeted Mass Withdrawals
26 | By requiring the operator to create an incrementing nonce for each address's UTXOs, we can allow for mass withdrawals where the exit leader submits a bitfield and a list of addresses they're withdrawing from.
27 | 
28 | 
29 | ### Tracking UTXOs Separately
30 | Unspent transaction outputs can be tracked in a separate merkle tree that's used to facilitate spending and exiting transactions. This makes mass withdrawals more efficient because each position in the bitfield is referring to an exitable UTXO.
31 | 
32 | 
33 | ### Getting Rid of Spent UTXOs
34 | With the current Plasma MVP design we're unable to get rid of spent UTXOs because we need them to challenge exits that double spend. Though if a double-spending exit goes through, it hurts everyone using the child chain except for the party who submitted the exit. This means that we don't need everyone to keep spent UTXOs, but enough to be able to submit a challenge in the case that one is needed. We can further incentivize those wishing to keep spent UTXOs by rewarding them for submitting a successful challenge when someone attempts to double spend.
35 | 
36 | 
37 | 
38 | 
39 | ### Referencing inputs by transaction hash
40 | We currently reference transaction inputs by their position in the child chain.
This is the easiest to think about, but it makes it hard to create sequences of transactions referring to each other without waiting for each transaction to be included in the child chain and submitted to the root chain. This is because the operator chooses a transaction's position in the child chain, as opposed to the transaction's creator. By having inputs use a transaction's hash to reference it, as opposed to its position, we can create sequences of transactions that build on each other while only committing them to the child chain if something goes wrong.
41 | 
42 | ### Value Summation in Mass Exits
43 | 
44 | It seems like it'll be necessary for the user that submits a mass exit to attach a summation of the value of all referenced UTXOs. For example, if exiting UTXOs worth (10 ETH, 20 ETH, 15 ETH), then the user will also attach the value "45 ETH" in some way. When the mass exit processes, this sum will be "reserved" for the mass exit to be processed on a per-UTXO basis later. This is necessary so that invalid exits that process after 2 weeks can't steal money while the mass exit is still being processed.
45 | 
46 | #### Summation Validity
47 | 
48 | Unfortunately, it isn't easy to prove that this summation is valid. The above user might attach "50 ETH", which would obviously be invalid. We don't want users to be able to steal money in this manner.
49 | 
50 | One possible solution to this problem is a sum tree of sorts. Each leaf node in the tree would contain the tuple (`utxo_value`, `total_sum`), where `total_sum` represents the sum of all leaf nodes to the left of this node, inclusive of the node itself. For example, if the UTXO values are 10 ETH, 20 ETH, 15 ETH, then the leaf nodes would be (10, 10), (20, 30), (15, 45).
51 | 
52 | These leaves would be Merklized and the final sum + tree root would be published. The tree could be challenged in a TrueBit-esque game where two users iterate down the tree until they find the first node at which they disagree. They reveal this node as well as the node to the left of this node. The root chain makes a calculation to determine which party is correct (`left_total_sum` + `right_utxo_value` = `right_total_sum`).
53 | 
54 | This requires log(n) transactions to the root chain in the worst case, which is not ideal. It may be possible to construct more concise proofs, but I haven't figured anything better out yet.
55 | 
--------------------------------------------------------------------------------
/plasma/fast_finality.md:
--------------------------------------------------------------------------------
1 | ## Fast Finality
2 | 
3 | Author: @kfichter
4 | 
5 | Fast finality is important, especially in the context of payments or exchanges. There hasn't been a significant amount of research into fast finality on Plasma. Currently, users need to wait until root chain finality before blocks on the child chain are considered valid. This waiting period can be quite large (6 blocks = ~1m30s at an absolute minimum, typically).
6 | 
7 | Long waiting periods aren't great for payments. **The 5s credit card wait period is probably the maximum we're willing to accept.** No one wants to wait a minute and a half just to find out if their payment went through or not. This is motivation to develop a scheme for fast finality.
8 | 
9 | Unfortunately, fast finality in a real sense is probably not possible without payment channels. We might see the development of something like the lightning network on top of Plasma, but it could be useful to develop some idea of native fast finality.
Let's go through some of attempts on fast finality to explore the various issues. 10 | 11 | ### Shorter Block Times 12 | 13 | Shorter block times seem like a natural way to reach faster finality. If we can reduce Plasma block times to ~5s and ensure that most transactions on the network can be cleared within a single block, then we're in business. Assuming VISA throughput of ~25,000 TPS (yes, this is played out), then we can basically hit this rate with 2^16 (= 65536) tx blocks. 14 | 15 | Note that this would mean multiple child chain blocks within a single parent block. The main problem is equivocation by the operator - the operator might *say* that block N is X, but then might publish Y to the root chain. We can't stop an operator from signing two blocks at the same height, but we *can* severely punish this behavior. The most effective way to accomplish this is by having the operator put up a massive bond. 16 | 17 | At this point it's probably worth considering the attack vectors here. The severity of available attacks will determine how much we have to punish the operator. In the context of a payments chain, we're most likely thinking about very many small-value payments between consumers and merchants. A payments chain might contain significant value, but realistically most users won't hold thousands of dollars on Plasma. Therefore, it's unlikely that an operator will directly gain a lot by equivocating. This isn't an ideal assumption, but it's probably good enough for most purposes. 18 | 19 | The bigger problem in payment chains is that the operator can reverse *lots* of transactions. They might not gain anything directly, but they'll definitely cause harm to many end users. Therefore we need to make the cost of attack high enough that the "madman" attack isn't worth it. Ballpark, if we're transacting ~150,000 USD of value per second, then it's probably good enough to have the operator put up a 100-200m USD bond. This will almost definitely cover the value being transacted between any finality period on the root chain, plus some additional value on top to deter madman attacks. We're still vulnerable if transaction value suddenly shoots up to more than 50% of the bond. 20 | 21 | **Note:** The operator needs to bond up 2x the transaction value. Operator can make a purchase for $$ => seller sends item worth $$ => operator reverses transaction => operator has $$ AND item worth $$ (= 2x $$). Seller punishes => operator loses 2x $$ => operator has nothing. 22 | 23 | ### Finality Contracts 24 | 25 | 100-200m USD is still an absolutely massive bond. The operator doesn't even get anything in return for it! It's also useful to note that not *everyone* needs fast finality - a user sending money to their family is probably okay with waiting a few minutes. This is where finality contracts come into play. In a nutshell, the operator tells a user that their transaction will be included within X blocks or the user can "force" the operator to send the recipient funds. 26 | 27 | The general idea here is that the operator sells "finality bandwidth". Users can buy this bandwidth to ensure that their transactions finalize instantly or merchants can buy this bandwidth for their users to use. Users can only make purchases of less value than the bandwidth. If the operator fails to include the transaction, then the bandwidth will be used to pay out the recipient. 
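As a rough sketch of the accounting this implies, the toy Python model below tracks purchased bandwidth against the operator's bond. The class name, numbers, and exact settlement rules are assumptions for illustration, not a specification.

```python
class FinalityBandwidth:
    """Toy model of operator-sold finality bandwidth."""

    def __init__(self, operator_bond):
        self.operator_bond = operator_bond
        self.bandwidth = {}  # buyer -> value the operator has promised to cover

    def purchase(self, buyer, amount):
        # The operator shouldn't promise more than its bond across all buyers.
        if sum(self.bandwidth.values()) + amount > self.operator_bond:
            raise ValueError("operator bond exhausted")
        self.bandwidth[buyer] = self.bandwidth.get(buyer, 0) + amount

    def promise_payment(self, buyer, amount):
        # A payment can only be "instantly final" up to the remaining bandwidth.
        if amount > self.bandwidth.get(buyer, 0):
            raise ValueError("payment exceeds purchased bandwidth")
        self.bandwidth[buyer] -= amount
        return {"buyer": buyer, "amount": amount}

    def settle(self, promise, included_in_time):
        if included_in_time:
            # Transaction made it into a block: the bandwidth is freed again.
            self.bandwidth[promise["buyer"]] += promise["amount"]
            return 0
        # Operator failed to include it: the recipient is paid out of the
        # locked bandwidth instead.
        return promise["amount"]
```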
28 | 29 | My guess is that this will be mostly purchased by merchants to give their users "free" transactions that finalize instantly in exchange for some maintenance fee paid by the merchant. 30 | 31 | The cool thing about this construction is that the recipient will *always* get the money. So this isn't real finality, but it *is* cryptoeconomic finality. We're usually assuming that the recipient is some sort of merchant, so it seems realistic to want the customer to experience instant finality and the merchant to have to deal with any difficulties. Merchant software will develop to handle these situations automatically. 32 | 33 | There's an alternative to this solution where users or merchants actually purchase specific transaction slots in advance, but I think bandwidth is more general and usable. 34 | 35 | ### Payment Channels 36 | 37 | There's another (similar) alternative to finality bandwidth where we use payment channels. Instead of allowing users or merchants to buy bandwidth, we instead allow merchants to pay for payment channels with the operator. The operator agrees to lock up some funds in the channel in exchange for a maintenance fee. 38 | 39 | When users want to pay the merchant, they sign a special type of transaction that's contingent on the successful completion of a payment from the operator to the merchant on a payment channel. The idea here is to shift responsibility to the operator, but it also requires cooperation from the operator to actually complete the payment channel payment on time. 40 | 41 | ## Summary 42 | 43 | I think finality bandwidth is the way to go. It's the best of both worlds in terms of finality and generality, and it can be purchased by anyone on the network. We need to come up with a good way to construct the purchasing of bandwidth - my best solution here is a Plasma Cash chain where each "token" represents some amount of bandwidth. 44 | -------------------------------------------------------------------------------- /plasma/plasma-mvp/delegated-exits.md: -------------------------------------------------------------------------------- 1 | Plasma (+ Delegated Exits & Exit Challenges) 2 | =========================================== 3 | 4 | Author: Kelvin Fichter 5 | 6 | --- 7 | 8 | This document attempts to extend Vitalik's [Minimum Viable Plasma](https://ethresear.ch/t/minimal-viable-plasma/426) so that a third party ("exiter", "challenger") can be incentivised to execute exits and challenges on behalf of a user on the network. Generally, this is accomplished allowing a third party to exit or challenge on a user's behalf in exchange for a fee. This third party is **trusted** when *exiting* but does not need to be trusted when *challenging*. 9 | 10 | This document is a WIP and will likely change as I receive additional feedback. 11 | 12 | ## Construction 13 | 14 | The incentive models behind delegated exits and delegated exit challenges differ slightly. Each construction is presented separately to make the overall design as clear as possible. 15 | 16 | ### Delegated Exits 17 | 18 | Delegated Plasma exits are a useful construction. It's likely that it'll generally be hard to get *every* user on a Plasma chain to submit exits if blocks are being withheld. This might occur for any number of reasons. For example, a user may be unable to exit if they are being targeted and deliberately denied service/connectivity to the root chain during the exit period. 
19 | 20 | #### Fees 21 | 22 | However, simply allowing others to exit on a user's behalf isn't enough. It's not reasonable to expect that others will behave altruistically, so it's necessary for the user to specify a fee for this service. As it's probably poor user experience to have users determine a fee for each of their UTXOs, it's simpler specify a flat % fee for every exited UTXO. This could be maintained as a mapping `(address => uint256)` where each `address` maps to a `uint256` fee to be divided by some `uint256 FEE_BASE` to calculate a fee %. 23 | 24 | #### Unnamed Exiters 25 | 26 | A few possible constructions exist when deciding who can actually perform exits on behalf of a user. An "ideal" design allows any user to exit on behalf of any other user. However, we want to make sure that exits are only performed when actually necessary. Otherwise, an exiter might be incentivised to trigger an exit as quickly as possible in order to capture the exiting fee. 27 | 28 | If we can prove that a chain is actually suffering from an "exit condition" (e.g. block withholding), then this problem is easy to solve. It'd be quite easy to require some initial deposit and slash the deposit if an exit is made when the chain is not under exit conditions. Even so, we might want to allow exiters to exit for us under normal chain conditions. 29 | 30 | Unfortunately, it's not trivial to prove that the chain is actually under exit conditions. A naive solution may require child chain blocks be submitted once every X parent chain blocks, else the chain is under exit conditions. Extra-protocol interactions might cause the network to experience these conditions temporarily even if users on the network understand this is temporary. In general, the subjectivity of exits on the network make them difficult to quantify. 31 | 32 | This subjectivity might be addressed by implementing a parent chain vote to determine if the child chain is under exit conditions. Users on the network would signal one way or the other, dependent on their % assets owned in the parent contract. This may end up being ineffective for several reasons. If the underlying asset on the parent chain derives its value from the Plasma chain, then large stakeholders have an incentive to vote that the chain is *not* under exit conditions to prevent a mass exit (and a rapid decrease in value of the asset). If the underlying asset is something like Ether, then majority stakeholders can grieve the network by signalling exit conditions at no significant cost. 33 | 34 | We could prevent some exit grieving by implementing another transaction that allows a user to "cancel" an exit called on their behalf, but this has its own issues. If we don't require the exiter to place a deposit on the exit, then it's in the depositor's best interest to repeatedly attempt to exit until the total transaction cost on the root chain exceeds the potential exit fee. If we do require a deposit, then an active user might cancel the exit and call the exit themselves in order to save the exit fee at no additional cost (although this behavior might be necessary and preferred). 35 | 36 | #### Named Exiters 37 | 38 | A (likely) better delegated exit model is to require users to specify a named exiter or list of exiters. The root contract could maintain some mapping `(address => address)` or `(address => (address => boolean))` that specifies a user's permitted exiters. 39 | 40 | This is effectively a trusted subset of the "Unnamed Exiters" design. 
We inherit some problems, but we gain some useful properties. Most importantly, we effectively build a reputation system where certain named exiters are identified as "trustworthy." Users will likely name exiters who have a history of acting in their customers' best interests. As a result, exiters have an incentive to behave. 41 | 42 | This design isn't perfect. It's still possible for exiters to attempt to exit against the user's will. However, it's easier for a user to mitigate this attack by removing the exiter and then cancelling the exit. The worst case scenario allows an exiter to "exit scam" (confusing terminology, oops) and call exits for as many of their customers as possible in the hope that a few are inactive. 43 | 44 | Named exiters should only expect to realize fees from inactive users. Active users have an incentive to cancel the exit transaction and exit for themselves. Exiters will only participate in this system if the total fees generated from inactive users exceed the transaction fees of creating the exits on the root chain. This becomes more feasible if mass exits are implemented. Exiters will probably wait some short period of time before submitting an exit to allow active users to submit exits first. 45 | 46 | Rational exiters should only call exits for users with a sufficiently high transaction fee. A fee market will probably emerge over time as users and exiters decide on some equilibrium fee. 47 | 48 | #### Exiter Pools 49 | 50 | It may be possible that "exiter pool" smart contracts come into existence. These contracts could mitigate some of the risks of named exiters by requiring members vote on which exits to submit. This is probably extremely inefficient and would likely require centralized management. Fees from exits would then be distributed to members relative to their investment in the pool. 51 | 52 | ### Delegated Exit Challenges 53 | 54 | Exit challenges are relatively simple to incentivise. It's not possible to take a fee from the UTXO being exited if the UTXO is invalid. Additionally, it doesn't make sense to punish the user for an invalid exit not submitted by themselves. Instead, we can take a fee in the form of a deposit placed by the exiter. 55 | 56 | #### Deposits 57 | 58 | This deposit would be refunded in the case that the exit is "cancelled" (as described above) or the exit is successful. If the exit is found to be invalid, then the deposit is transferred to the challenger as a reward. 59 | 60 | We should only punish exiters for malicious behavior. It's possible for a user to submit a later transaction that causes the exit to be invalid. In this case, exiters should not be punished. 61 | 62 | #### Deposits as a Barrier to Entry 63 | 64 | However, deposits have flaws. Each individual exit would require a deposit that correctly incentivises other users to validate the transactions. This increases the barrier to entry of becoming an exiter. Well-behaved exiters will never lose this deposit, but this limits the ability of individuals to compete. 65 | 66 | The above "exiter pool" construction could solve this problem by allowing users to pool capital together, even if the pool is centrally managed. 
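As a purely illustrative sketch of how such a pool might operate, the toy Python model below pools member capital, gates exits on a stake-weighted vote, and splits fees pro-rata. The threshold and payout rules are assumptions, not part of the design above.

```python
class ExiterPool:
    """Toy exiter pool: pooled capital covers deposits, fees split pro-rata."""

    def __init__(self, vote_threshold=0.5):
        self.stakes = {}  # member -> capital contributed
        self.vote_threshold = vote_threshold

    def join(self, member, amount):
        self.stakes[member] = self.stakes.get(member, 0) + amount

    def approve_exit(self, votes):
        # An exit is only submitted if members holding enough of the pooled
        # capital vote in favor of it.
        total = sum(self.stakes.values())
        if total == 0:
            return False
        in_favor = sum(self.stakes[m] for m in votes if m in self.stakes)
        return in_favor / total >= self.vote_threshold

    def distribute_fee(self, fee):
        # Exit fees are shared relative to each member's investment in the pool.
        total = sum(self.stakes.values())
        return {m: fee * stake / total for m, stake in self.stakes.items()}
```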
67 | 68 | -------------------------------------------------------------------------------- /plasma/plasma-cash/bloom-filters.md: -------------------------------------------------------------------------------- 1 | Bloom Filters in Plasma Cash 2 | ============================ 3 | 4 | Author: Kelvin Fichter 5 | 6 | --- 7 | 8 | ## Acknowledgements 9 | 10 | Thank you to [Dan Robinson](https://twitter.com/danrobinson?lang=en) from Chain for the original [comment](https://ethresear.ch/t/plasma-cash-plasma-with-much-less-per-user-data-checking/1298/3) that inspired this post. 11 | 12 | ## Background 13 | 14 | ### Plasma Cash 15 | 16 | Plasma Cash is a novel Plasma design that eliminates the need for every user to validate every other user's transactions. The basic idea of this design is that users can only transact *unique* coins - kind of like spending a *specific* $20 bill. Just like a merchant might check that your $20 bill is legitimate, recipients on a Plasma Cash chain will verify that your unique coin is legitimate. 17 | 18 | The original Plasma Cash [specification](https://ethresear.ch/t/plasma-cash-plasma-with-much-less-per-user-data-checking/1298) describes a two-part process for verifying coins. First, a user requests the coin's full transaction history, including proof that the coin was actually spent in the given blocks. Then, the user requests proof that the coin was *not* spent in any other block. 19 | 20 | ### Bloom Filters 21 | 22 | A Bloom filter is a probabilistic data structure that can test whether something is a member of a set or not. Bloom filters are designed so that false positives are possible, but false negatives are not. If the Bloom filter says that the word "hello" *isn't* in the set of inputted words, then "hello" is definitely not in the list. However, if a filter says that "world" *is* in the set of inputted words, there's a chance that it might not be. 23 | 24 | [Dan Robinson](https://twitter.com/danrobinson?lang=en) first mentioned the idea of using Bloom filters to decrease the required Plasma Cash proof size in a [comment](https://ethresear.ch/t/plasma-cash-plasma-with-much-less-per-user-data-checking/1298/3) on ethresear.ch. 25 | 26 | ## Methodology 27 | 28 | ### Quick Pitch 29 | 30 | Dan's comment described an easy way to drastically reduce the Plasma Cash proof size. The idea is simple: instead of sending out lots of proofs that coins haven't been spent each block, the operator simply sends out a Bloom filter populated with every transaction in the block. Whenever a user is verifying a coin's history, they'll just check their cached Bloom filters. If the filter returns a false positive for a specific block, then the user will request a proof that the coin hasn't been spent in that block. 31 | 32 | ### Correctness 33 | 34 | Plasma Cash requires a few small modifications to ensure that the Bloom filter design works correctly. Let's discuss some of the potential ways things can go wrong and solutions to those problems: 35 | 36 | #### Operator includes a transaction in a block but not in the Bloom filter 37 | 38 | This situation would mean that there *is* a valid transaction in a block, but that the receiving user wouldn't request that transaction. In this case, the user should act as if that transaction never existed. A recipient of a coin should verify that the coin is included in a block *and* that the coin is included in the block's Bloom filter. 
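A minimal Python sketch of that per-block check might look like the following. The proof-verification and operator-query hooks are passed in as callables and are hypothetical stand-ins for a client implementation; `filters` is assumed to already match the filter hashes published on the root chain.

```python
def verify_coin_history(coin_id, spends, filters, verify_inclusion,
                        verify_non_inclusion, request_non_inclusion_proof):
    """Check a coin's history against per-block Bloom filters.

    spends:  {blknum: (tx, inclusion_proof)} for blocks where the coin was spent
    filters: {blknum: bloom_filter} as received from the operator
    """
    for blknum, bloom in sorted(filters.items()):
        if blknum in spends:
            tx, proof = spends[blknum]
            # Spent here: require a valid Merkle inclusion proof *and*
            # membership in the block's Bloom filter, per the rule above.
            if not verify_inclusion(tx, proof, blknum) or coin_id not in bloom:
                return False
        elif coin_id in bloom:
            # Probable false positive: request a non-inclusion proof for this
            # block only, instead of for every block in the history.
            proof = request_non_inclusion_proof(coin_id, blknum)
            if not verify_non_inclusion(coin_id, proof, blknum):
                return False
        # If the filter reports "not present", rely on the no-false-negative
        # property and skip the block entirely.
    return True
```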
39 | 40 | It's possible that the operator doesn't include the transaction in the Bloom filter, but that the transaction appears as a false positive. In this case, it's as if the operator included the transaction in the filter and there's nothing to worry about. 41 | 42 | #### Operator includes a transaction in a Bloom filter when there is no transaction in the block 43 | 44 | This is equivalent to a false positive. The recipient would see that the transaction is in the Bloom filter and request a proof that the transaction is not in the block. 45 | 46 | #### Operator gives different Bloom filters to different users 47 | 48 | The idea behind this attack is better described with an example: 49 | 50 | 1. Mallory is the operator on a Plasma Cash chain. 51 | 2. Mallory sends a coin to Bob in block #5. 52 | 3. Mallory wants to cheat Alice by double spending the coin. 53 | 4. Mallory sends Alice a Bloom filter for block #5 that doesn't include the coin. 54 | 5. Alice knows that Bloom filters can't have false negatives, so she doesn't request a proof for block #5. 55 | 6. Alice accepts Mallory's coin. 56 | 57 | Obviously, this makes double spending pretty easy. This problem really stems from the fact that Bloom filters are quite large (order of a few KB). **It isn't economical to publish the whole filter to the root chain**. Instead, this problem can be solved if the operator is required to publish the keccak256 *hash* of the Bloom filter to the root chain. A user would then receive a Bloom filter and verify that the hash of the received filter matches the published hash. The user will only trust the filter if the hashes match. 58 | 59 | The above situation now works out as follows: 60 | 61 | 1. Mallory sends a coin to Bob in block #5. 62 | 2. Mallory publishes $hash(filter)$ to the root chain. 63 | 3. Mallory sends Alice a Bloom filter for block #5 that doesn't include the coin. 64 | 4. Alice checks that $hash(received filter) = hash(filter)$ and finds that they differ. 65 | 5. Alice doesn't accept Mallory's transaction. 66 | 67 | ### Size Considerations 68 | 69 | The main problem with Bloom filters is that they grow in size with the number of elements in the set. As stated above, this means that it's too expensive to publish the whole filter to the root chain. We want to make sure that we're not in effect *increasing* the Plasma Cash proof size. Let's do some math to ensure that this isn't the case. 70 | 71 | Plasma Cash currently requires a Merkle proof for each block in the chain. The size of a Merkle proof depends what hash function we're using and how large the tree is. Let's make some assumptions in order to calculate the data requirements. It's likely that many implementations will use keccak256 (a 256 bit hash function) to calculate Merkle roots. If we want to support 1,000,000 unique coins, then we'll need a Merkle tree 20 layers deep ($2^{20} = 1048576$). 72 | 73 | Every Merkle proof requires log(n) hashes + the data in the transaction. Let's ignore the size of the transaction for now. The size of the required proof is therefore: 74 | 75 | $$ 76 | S_{merkle proof} = log_{2}(2^{20}) * 256 bits = 20 * 256 bits = 0.64 KB 77 | $$ 78 | 79 | Note that this is the size of the proof per coin, per block! 80 | 81 | Now let's figure out how large the Bloom filters would have to be. Bloom filters scale linearly with the number of items in the filter, so we'll have to estimate the number of transactions per Plasma block. Assume that we're creating a new Plasma block once per every Ethereum block. 
If we want a ~10x order of magnitude increase in transactions per second over Ethereum, we'll need to include approximately 5000 transactions in every block. 82 | 83 | A bloom filter with 5000 elements and a 1/100000 false positive rate is [14.62 KB](https://krisives.github.io/bloom-calculator/). That's pretty big - much bigger than the Merkle proof size. Is it worth it? The answer is, "it depends." 84 | 85 | The Bloom filter for each block is ~23 times bigger than the Merkle proof. This means that if users are making lots of transactions with lots of different coins, the Bloom filter might actually be more efficient than the Merkle proofs. Bloom filters also have a great user experience. Users don't have to download large blocks of data and can use the same Bloom filter to verify the validity of all received transactions. The low false positive rate makes the Merkle proofs basically irrelevant. 86 | 87 | ### Summary 88 | 89 | Bloom filters can be used to simplify the proofs required by Plasma Cash at the cost of increased storage in the average case. Bloom filters work particularly well when a Plasma Cash chain has relatively low TPS. The element-linear size of each filter makes the scheme difficult to scale. 90 | 91 | At the same time, Bloom filters vastly improve the user experience of Plasma Cash - users aren't required to send entire transaction histories and proofs of non-inclusion in the form of huge chunks of data. A Bloom filter-based Plasma Cash implementation doesn't sacrifice any security guarantees. 92 | 93 | ## Research Topics 94 | 95 | - Can we use [Cuckoo filters](https://www.cs.cmu.edu/~dga/papers/cuckoo-conext2014.pdf) instead? 96 | - In what cases might Plasma Cash + Bloom filters be practically better than sticking to Merkle proofs? 97 | -------------------------------------------------------------------------------- /plasma/plasma-mvp/specifications/no-confirmations.md: -------------------------------------------------------------------------------- 1 | Plasma w/o Confirmations 2 | === 3 | 4 | Authors: David Knott, Kelvin Fichter 5 | 6 | --- 7 | 8 | **Note:** This is a cleaned up version of David's original "Plasma w/o Confirmations" specification. The original version of this document can be found [here](https://hackmd.io/s/BkgnUZALf). 9 | 10 | --- 11 | 12 | ## Background 13 | 14 | [Minimal Viable Plasma](https://ethresear.ch/t/minimal-viable-plasma/426) specifies that a transaction be "confirmed" before the transaction is considered valid. Transaction confirmations are necessary to avoid problems created by block withholding. A confirmation proves that a user has accesss to the block in which their transaction was included. 15 | 16 | However, confirmations are a little annoying because a user needs to sign a transaction, wait for their transaction to be included in a block, and then sign *another* confirmaton transaction. You can probably see how this could quickly become problematic. 17 | 18 | This document specifies a version of Plasma that does *not* require confirmations and therefore only requires a single user signature to submit a transaction. Some implementation details are provided. As always, this construction has its own pros and cons and is subject to change as it's refined. 19 | 20 | ## Overview 21 | 22 | The high-level summary of this specification is that transaction confirmations are removed by requiring that transactions be included within some number of Plasma blocks. 
Transactions that have "timed out" are considered invalid and should not be included in future Plasma blocks. An additional root-chain construction ensures that transactions are still safe if they're included but withheld. 23 | 24 | ## Why Confirmations Matter 25 | 26 | Before we describe what Plasma might look like without confirmations, it's important to understand why it currently has them. Remember that exits on Plasma MVP must be submitted within 7 days of a chain becoming byzantine in order to be considered safe. 27 | 28 | ### Block Withholding 29 | 30 | One of the main attacks on a Plasma chain is a block withholding attack. This basically means that the Plasma operator either publishes a block root and fails to publish the actual block contents or stops publishing blocks entirely. Block withholding attacks have several consequences, and confirmations are used to mitigate a few of them. 31 | 32 | Let's think about what might happen if we didn't have transaction confirmations. Consider the case where a user makes a transaction, the operator includes this transaction in a block, and the block is then withheld. The user will want to exit and retrieve their funds, but they have no clue if the transaction they just sent was included in the block or not. The user can try to exit from the UTXO they just spent, but there's a chance that the operator challenges the exit. If that happens, the original recipient has a chance to exit from the now-published transaction. However, the operator has 14 days to challenge the exit and could simply wait more than 7 days (the safety point) before challenging. 33 | 34 | Confirmations completely mitigate this problem. A user in the situation from above would be able to safely exit from their original UTXO because a transaction is only considered valid once it's been confirmed. Any solution that gets rid of confirmations needs to make sure this issue isn't reintroduced. 35 | 36 | ### Late Transactions 37 | 38 | Confirmations also mean that a user can choose which of their published transactions are considered valid. Imagine a user submits a transaction at `t=0`, but the operator doesn't include it in a block immediately. A long time later (maybe `t=10000`), the operator includes the transaction in a block. However, the user no longer wants that transaction to occur because they've already completed that transaction some other way. The user can simply refuse to confirm the transaction. 39 | 40 | A construction without confirmations needs to give users some similar guarantees. We'll talk about a few different ways to potentially accomplish this, although some are more convenient than others. 41 | 42 | ## Problems with Confirmations 43 | 44 | This list is not complete, but it describes some of the more obvious ways that confirmations make things difficult. 45 | 46 | ### Fees 47 | 48 | Confirmations could potentially result in grieving if we aren't careful. We want to discourage users from flooding the network with transactions that they don't confirm. This could be accomplished by requiring that users pay a transaction fee for every transaction, even if the transaction isn't confirmed. 49 | 50 | ### User Experience 51 | 52 | Confirmations require a second signature. This is generally a bad user experience. Users need to sign the transaction, wait for it to be included, and then sign a confirmation. The waiting period between the two signatures could be significant depending on Plasma network load.
Users will need to be available for both signatures and transactions will fail if users lose connection before they're able to send a confirmation. 53 | 54 | ## David's Implementation 55 | 56 | This section describes David Knott's original no-conf specification. 57 | 58 | ### Rules 59 | 60 | We specify a few rules to make this construction as simple as possible: 61 | 62 | 1. Transactions must be included within `n` Plasma blocks. 63 | 2. Transaction inputs must be at least `n + 1` Plasma blocks old. 64 | 65 | Rule #1 prohibits a Plasma operator from including a transaction a long time (`>n` blocks) after it's published. This wouldn't matter if we were using confirmations because a user could just refuse to confirm a very late transaction. 66 | 67 | Rule #2 ensures that an invalid exit can't have higher priority than a valid exit (as long as users exit in a timely manner) while minimizing the amount of necessary on-chain work. We'll get back to this later. 68 | 69 | ### Modifications to Plasma MVP 70 | 71 | We need to modify the Plasma MVP specification in order to implement these rules. 72 | 73 | #### Change #1: The root chain needs to validate a transaction *and* its inputs before it can be exited 74 | 75 | This is necessary to ensure that we can't break Rule #2 in any useful way. Remember that Rule #2 requires that inputs be at least `n + 1` Plasma blocks old. The following example demonstrates why this change is necessary: 76 | 77 | 1. Alice submits a transaction to Bob. 78 | 2. The operator includes Alice's transaction after including a few invalid transactions. The operator reveals Alice's transaction (so Bob can exit), but doesn't reveal their own (so no one can prove those are invalid). 79 | 3. The operator attempts to exit on their invalid transactions. 80 | 4. Bob attempts to exit on his valid transaction. 81 | 5. The operator's exit has higher priority than Bob's transaction, the operator withdraws first. 82 | 83 | This isn't a problem if we have confirmations - Alice could simply refuse to confirm her transaction. We need to get more creative if we remove confirmations. 84 | 85 | If we require that inputs be at least `n + 1` blocks old, then anyone who stops submitting transactions once the chain is byzantine can safely exit. An operator attempting to create a UTXO from "out of thin air" inputs would have to create at least `n + 1` invalid blocks. 86 | 87 | This breaks down if a user submits a transaction after the chain becomes byzantine. A client should *never* submit a transaction if they haven't received a block by the time the next block should've been published. 88 | 89 | #### Change #2: Exits must be challenged within 4 days 90 | 91 | This way, a withheld transaction can be safely withdrawn before an attacker completes any invalid exits. 92 | 93 | Plasma MVP mitigates block withholding attacks by requiring users to submit confirmations. If we remove confirmations, then a transaction can be included and be considered valid even if the block is withheld. This means that an operator could challenge an exit with a withheld transaction. Change #2 attempts to solve this problem. 94 | 95 | To better understand why this change is necessary, consider the following potential scenario. 96 | 97 | 1. Alice submits a transaction to Bob. 98 | 2. The operator includes Alice's transaction but does not publish the block. Alice has no way to tell if her transaction has been included or not. 99 | 3. Alice attempts to exit from a UTXO that she tried to spend in her transaction to Bob. 
100 | 4. The operator submits exits for some invalid UTXOs that drain the entire contract. 101 | 5. After 7 days, the operator challenges Alice's exit with her transaction to Bob. 102 | 6. Bob now has the information he needs in order to exit from Alice's transaction. Bob submits an exit. 103 | 7. The operator's invalid exits process before Bob's exit because Bob didn't submit his exit within 7 days of the chain becoming byzantine. The contract is now empty. 104 | 8. Bob cannot withdraw once his exits process because the operator has already empties the contract. 105 | 106 | Consider the same scenario once we apply Change #2. If the operator only has 4 days to challenge Alice's exit, then Bob has 3 days to safely submit his exit. The operator's exits will be given a lower priority than Bob's exit, and Bob will complete his (valid) withdrawal. 107 | 108 | #### Alternative to Change #2: Special exit transactions 109 | 110 | Change #2 is somewhat unsatisfying because Alice *will* lose her exit deposit. We'd prefer to come up with some mechanism that avoids restricting the challenge period and allows Alice to keep her exit deposit. 111 | 112 | Piotr Dobaczewski came up with another construction that addresses this case. In a nutshell, Piotr's idea is that Alice creates a special type of exit that allows Bob to claim the funds within a period of time. If Bob doesn't complete the exit, then Alice can claim the funds. This special "limbo" exit can be invalidated by demonstrating that the UTXO (from Alice to Bob) was already spent. 113 | 114 | Piotr gave the following example of how this might work out: 115 | 116 | 1. Alice buys a pear from Bob. 117 | 2. The operator includes Alice's transaction but withholds the block. Alice's payment is now stuck in "limbo" because she doesn't know if her transaction was included or not. 118 | 3. Alice starts a "limbo exit" that references her transaction to Bob. 119 | 4. Bob has 3 days to complete the exit. 120 | 121 | One of two things can happen at this point: 122 | 123 | 1. Bob completes the exit. 124 | 2. Bob does not complete the exit. Alice completes the exit after 3 days. 125 | 126 | In either case, this exit can be challenged if a user demonstrates that Alice's UTXO to Bob is spent. If no challenges are submitted, then the exit processes and the funds are withdrawn. 127 | 128 | This is really cool, but a little complex. There's a potential for grieving if users are not actively watching for these type of transactions. For example, Alice could decide that she wants to cheat and get the money for her pear back. Assume that blocks are not being withheld and that Bob hasn't spent the UTXO from Alice's transaction. Alice could start the "limbo exit", which forces Bob to respond within 3 days. If Bob is offline for 3 days, then she might be able to steal Bob's funds. Bob's safest bet is therefore to immediately spend the funds to himself. 129 | 130 | We can avoid the grieving case by modifying the construction slightly so that the limbo exit can *only* exit to Bob and *only* if Bob signs off on it. The idea behind this modification is that Alice and Bob will want to complete their transaction if the two parties are cooperating. If blocks are being withheld and the two parties are not cooperating, then Alice can attempt to exit normally. If the transaction is included (and therefore complete), then Alice will need to find some extra-protocol settlement anyway. It's reasonable to assume that Alice can ask for her deposit as part of this settlement. 
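As a rough illustration of Piotr's construction, here's a toy Python sketch of how a limbo exit might resolve. The field names and resolution logic are simplified assumptions based on the example above; the grieving-resistant modification just described would additionally require the recipient's signature and only ever pay out to the recipient.

```
from dataclasses import dataclass

CLAIM_WINDOW = 3 * 24 * 60 * 60  # the 3-day window from the example, in seconds

@dataclass
class LimboExit:
    sender: str                 # Alice, who started the exit
    recipient: str              # Bob, named in the referenced transaction
    started_at: int             # when the limbo exit was started
    proven_spent: bool = False  # set if someone shows the Alice -> Bob output was spent

def resolve_limbo_exit(ex: LimboExit, now: int, recipient_claimed: bool) -> str:
    """Outcome of a limbo exit under the rules sketched above."""
    if ex.proven_spent:
        return "challenged: the exited output was already spent"
    if recipient_claimed:
        return "funds exit to " + ex.recipient   # case 1: Bob completes the exit
    if now - ex.started_at >= CLAIM_WINDOW:
        return "funds exit to " + ex.sender      # case 2: Alice completes after the window
    return "pending: still inside the recipient's claim window"
```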
131 | -------------------------------------------------------------------------------- /plasma/plasma-cash/thinking-about-plasma-cash.md: -------------------------------------------------------------------------------- 1 | Thinking about "Plasma Cash" 2 | === 3 | 4 | Author: Kelvin Fichter 5 | 6 | --- 7 | 8 | **Note:** this post and other similar writings represent unfinished or generally rambling thoughts. It's my opinion that organized train-of-thought documents can help other researchers in a multitude of ways. Certain things in this document might be completely wrong! I wanted to make sure that readers could see my thought process in *how* I came to certain (possibly mistaken!) conclusions. 9 | 10 | --- 11 | 12 | On March 3rd, 2018, Vitalik published [this](https://ethresear.ch/t/plasma-with-much-less-per-user-data-checking/1298/2) post on ethresearch. The post details a scheme that Vitalik later dubbed "Plasma Cash." Vitalik also gave a detailed presentation about the idea at EthCC, available via YouTube [here](https://www.youtube.com/watch?v=uyuA11PDDHE). 13 | 14 | ## A problem 15 | 16 | Plasma MVP currently requires users store and check every transaction submitted by every user on the chain. This is potentially a huge amount of data! So although Plasma MVP can scale Ethereum already, this data storage requirement is pretty significant and might become an unwanted barrier to entry. 17 | 18 | ## A solution 19 | 20 | From a high level, Plasma Cash is a version of Plasma that tackles this data storage/checking problem by taking advantage of some convenient properties of identifiable, indivisble, non-mergable tokens. Unlike deposits on Plasma MVP, which can be broken up into smaller pieces or merged together into larger ones, deposits on Plasma Cash cannot be divided or merged. 21 | 22 | Each invididual deposit on Plasma Cash is assigned a unique "coin ID." This is stored on the root chain as a mapping from coin IDs to denominations. This is implemented as something like: 23 | 24 | ``` 25 | mapping (uint32 => uint256) coins; 26 | ``` 27 | 28 | One specific coin ID might be worth 1 ETH, while another might be worth 5 ETH. However, you can't join the two to make one coin worth 6 ETH or split them to make three coins worth 2 ETH. It's effectively like transacting with physical money, where we can only transact in specific discrete denominations (unless we make change!). Hence, Plasma *Cash*. 29 | 30 | ## So what? 31 | 32 | Instead of requiring users to check *every* transaction, Plasma Cash only requires users check: 33 | 34 | 1. The full transaction history of the specific coin(s) being received. 35 | 2. Proof that the specific coin(s) weren't spent in any blocks not mentioned in (1). 36 | 37 | This is cool because it means users suddenly have to check *much* less data in order to make a transaction. 38 | 39 | The exact process of validating a transaction is actually a little bit more complex, and generally looks like this: 40 | 41 | 1. Check that the coin ID being sent is valid and has the correct denomination (check the root chain mapping). 42 | 2. Check that the transaction history is valid (does it correctly stem from the deposit that created this coin ID?). 43 | 3. Check that the coin isn't referenced in any other block not mentioned in the full transaction history. If our chain is 5 blocks long and the tx history references blocks 1, 2, and 4, then we need to make sure the coin wasn't spent in blocks 3 or 5. 
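A rough Python sketch of these three checks is below. The proof-checking helpers are passed in and left abstract, and `root_chain_denominations` stands in for the root chain's coin ID mapping; this is illustrative, not the actual client code.

```
def verify_received_coin(coin_id, claimed_value, history, non_inclusion_proofs,
                         root_chain_denominations, chain_length,
                         check_inclusion, check_non_inclusion, check_spend_valid):
    """history maps blknum -> (tx, inclusion_proof) for every block the coin was spent in."""
    # 1. The coin ID exists on the root chain with the claimed denomination.
    if root_chain_denominations.get(coin_id) != claimed_value:
        return False

    # 2. The history is valid: each spend is included in its block and correctly
    #    follows from the previous transaction, all the way back to the deposit.
    previous_tx = None
    for blknum in sorted(history):
        tx, proof = history[blknum]
        if not check_inclusion(coin_id, tx, blknum, proof):
            return False
        if not check_spend_valid(coin_id, tx, previous_tx):
            return False
        previous_tx = tx

    # 3. Every block *not* in the history needs a proof that the coin wasn't spent there.
    for blknum in range(1, chain_length + 1):
        if blknum in history:
            continue
        proof = non_inclusion_proofs.get(blknum)
        if proof is None or not check_non_inclusion(coin_id, blknum, proof):
            return False

    return True
```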
44 | 45 | Once we check these, we have a (relatively) compact proof that *our specific* coin is valid and hasn't been double spent anywhere. We don't have to know anything about any other coins! 46 | 47 | ## Cool, what's the catch? 48 | 49 | Plasma Cash currently only really works if we're using individible, non-mergable coins. If I want to send someone 7 ETH, but I only have a coin worth 10 ETH, then I'd need to coordinate a transaction where I simultaneously send a coin worth 10 ETH and receive back a coin worth 3 ETH. This is kind of like getting physical change. Just like with physical change, if the person who I'm sending a 10 ETH coin to doesn't have a 3 ETH coin to send back, I'm out of luck. 50 | 51 | ## Change providers 52 | 53 | You might be able to mitigate the above scenario to some extent with a construction called a "change provider." A change provider is basically someone who always has a lot of coins of different denominations available. The area in between the seat cushions of your couch would probably qualify. 54 | 55 | To illustrate, let's go back to our example. Alice is trying to send 7 ETH to Bob, but she only has a 10 ETH coin and some small change. Bob doesn't have any coins. This time, Alice is going to use a change provider to make this transaction work. Carol, a change provider, has lots of coins of different denominations and happens to have both a 3 ETH coin and a 7 ETH coin. Alice and Carol form a transaction with the following components: 56 | 57 | 1. A 10 ETH coin sent from Alice to Carol 58 | 2. A 3 ETH coin sent from Carol to Alice 59 | 3. A 7 ETH coin sent from Carol to Bob 60 | 4. Some small coin sent from Alice to Carol as a fee 61 | 62 | ## Fees 63 | 64 | Unfortunately, that last component of the above transaction is a little complex because it means Alice needs to keep some small change around in order to pay for change provider fees. This is kind of like keeping quarters in your car to pay for parking meters. 65 | 66 | This is a good time to talk about fees. Plasma Cash operators should probably receive fees for their services. However, it's not as easy to implement fees on Plasma Cash like as it is on Plasma MVP. A few possible ways to take fees have been suggested. 67 | 68 | ### Fees as small coins 69 | 70 | The first and most "Plasma Cash"-like way is to simply require users to always have tokens with small values that are used to pay fees. This is the most intuitive mechanism, but it has a lot of potential UX issues. For example, we don't want users to end up in a situation where it costs more to withdraw a token than the token is worth. We also don't want the transaction fees on Plasma to be limited by gas prices on Ethereum! 71 | 72 | We could potentially mitigate this by enabling splits on Plasma Cash through something like decimal places in the coin ID. We'll talk about this later. 73 | 74 | ### Fees as a cut from coins 75 | 76 | Another way to implement fees is by specifying a `total fee` attached to each coin. An operator will only accept a transaction if the new total fee specified is greater than the old total fee. When a user attempts to exit from a coin, they can only exit the initial value of the coin minus the latest total fee. This works, but it has the unfortunate side effect of ruining nice denominations. 77 | 78 | ### Fees from a "fee balance" 79 | 80 | Fees could also be separated from Plasma Cash entirely. One way to do this is via a "fee balance". 
This would be a separate balance on the root chain that a user needs to maintain in order to transact on the network. When the user makes a transaction, they'll specify a `total fee` just like in the example above. The transaction will only be accepted if the operator sees that the user has enough balance and that `total fee` is greater than the `total fee` specified in the user's last transaction. This works because the operator already needs to see every transaction. 81 | 82 | The user can attempt to withdraw their fee balance, but an operator will challenge if they can prove that there's a transaction with some total fee greater than what the user claims. Transactions would have to contain a nonce so that users can specify an ordering to their transactions. Operators would then first order transactions by nonce before validating `total fee`. Otherwise, transactions might be thrown out for containing an invalid `total fee` if the users makes more than one transaction in the same block. 83 | 84 | ## Merges/splits 85 | 86 | There's a distinct possibility that either no change provider exists or no change provider has the correct change for your transaction to occur. In these cases, you'll have to figure out another way for your transaction to complete. 87 | 88 | ### Plasma Cash split transactions 89 | 90 | It might be possible to enable coin splitting without ever touching the root chain. One way to do this is via decimals. This basically means that when I make a deposit, my coin ID always ends with some number of 0s (e.g. if I'm using 3 decimal places, `XX...XX000`). We can use these 0s to split our coins on the Plasma chain and uniquely identify/value the split off coins. 91 | 92 | This exact construction took me a while to come up with and it might still be wrong. We generally want to make sure that it's easy to quickly verify that a coin has never been spent (or split) somewhere else. This means making sure that my coin is unique, and that no one else can legitimately hold the same coin ID with the same (or different) value. If I want to split my coin, then I'd use some sort of special transaction that takes a coin and "burns" the input coin to create new outputs. "Burning" here just means that we're permanently changing the value of that specific coin. 93 | 94 | Let's demonstrate this by example. I want to create a few coins from my `XX...XX000` coin. My `XX...XX000` coin (with value 100%) could be burned in exchange for two new coins, `XX...XX000` and `XX...XX005` (with values 0.5% and 99.5%, respectively). The value of these coins is specified in the transaction fields. The 3 digits of the first new coin will be the same as the 3 of the original coin. The 3 of the second new coin will be the 3 of the original + the value of the first coin. Using this method, it's impossible for two users to have valid coins with the same ID. 95 | 96 | Now let's assume this splitting process occurs a few more times until I have a coin `XX...XX000` worth 0.1% of the deposit. I now want to send this coin and prove that I haven't double spent anything. I send a full transaction history and proofs of non-inclusion for `XX...XX000`. This transaction history necessarily includes each split transaction that I've made from `XX...XX000`. The receiver would be able to see if I tried to hide a split from them. 97 | 98 | Therefore the user can be sure that `XX...XX000` that I'm sending does indeed have the value that I've specified, and that no one else also has an `XX...XX000` with any other value. 
If the user then wants to exit from `XX...XX000`, they'll simply attempt to exit and specify the % they claim to own. The exit can be challenged if it's invalid or if someone else can prove that they have a later iteration of the coin with a different value. 99 | 100 | The downside to this construction is that there's only a maximum of `n` possible coins that can be broken off, where `n` is the number of decimals specified. There's probably an equivalent construction for merging these coins back together, but I haven't really put enough thought into it. 101 | 102 | ### Root Chain merge/split transactions 103 | 104 | David Knott came up with an excellent idea for root chain "merge/split" transactions. These transactions would occur on the Ethereum contract, just like deposits or exits. A merge/split transaction is basically self descriptive - someone who owns a specific coin can merge (or split) some coin(s) into one or more entirely new coins with the same total value. 105 | 106 | For example, if I have a 10 ETH coin with ID `0` and a 5 ETH coin with ID `1`, I could create a new 15 ETH coin with ID `2` as long as I invalidate coins `0` and `1`. I could also choose to split the 10 ETH coin into five 2 ETH coins. I can make any combination of coins, as long as the total value of the input coins is equal to the total value of the output coins. 107 | 108 | This transaction needs to happen on the root chain because we need to update our mapping between coin IDs and denominations to reflect the new coins. We also want to make sure that only the real owner of the coins can merge/split them. If we didn't confirm this, users could grieve the system by merging or splitting other people's coins against their will. 109 | 110 | We can confirm the real owner of the coins by holding a challenge period similar to that of the Plasma MVP exit period. During this time, a user puts down a bond and declares intent to merge or split a specific coin output or set of coin outputs that the user owns. Other users can then challenge that merge/split by proving that the outputs are either invalid or already spent somewhere else. 111 | 112 | ## Plasma MVP Cash? 113 | 114 | Plasma Cash is extremely cool, but it'd be even *cooler* if we could find a way to achieve the same result (less per-user data checking) with coins that can be merged or split. In theory, this would give us all of the benefits of Plasma Cash without the downsides involved with needing to make exact change. 115 | 116 | No one has figured this out (as of the writing of this post), and it's not entirely apparent that this is even possible. For the sake of research, I'll go ahead and list a few of the rabbit holes I've been down and the walls I eventually ran into. Maybe someone else can spot something that I didn't! 117 | 118 | **Note:** these lines of thinking didn't lead to any useful end goal. However, I think it's useful for people to read about *failed* research just as they'd read about successful research. Successful research is often the result of many dead ends. Understanding why a construction doesn't work can be useful for understanding why similar constructions don't work. You might also find something I didnt, or know of a concept I didn't know that could be used to solve these problems! 119 | 120 | ### Simply checking transaction inputs 121 | 122 | Like coin IDs, transaction inputs are unique. If we can prove that a transaction input is valid and hasn't been spent in any other block, then we've accomplished our goal. 
123 | 124 | Unfortunately, this isn't as easy (or useful) as it sounds. We can't simply check that the inputs into our current transaction haven't been included in any other block because those inputs might not be valid. To verify that the inputs are valid, we would need to verify that the inputs to those inputs are also valid. 125 | 126 | This check is feasible when we're talking about unique, indivisible, non-mergable coins because the transaction history can only ever have as many transactions as the chain has blocks. However, if we can merge and divide inputs, then we could easily have many more transactions to check. For example, if each transaction can have at least two inputs and two outputs, then the tree of inputs to check could potentially grow to encompass every input ever. At this point we're basically passing around an entire blockchain's worth of data for every transaction. This obviously isn't scalable. 127 | 128 | ### Giant Merkle trees, maybe? 129 | 130 | After the naive construction failed, I started to think about ways to remove the requirement that users check every single input. I figured that if I could find a way to mark inputs as spent as soon as they're included in a block, then I could remove the need to check proofs of non-inclusion for each input. 131 | 132 | Unfortunately, even solving this issue ignores the fact that I'd still need to validate every single transaction in the (potentially) massive history tree. So attempting to accomplish the gains of Plasma Cash without non-fungible coins is hard because verifying the transaction history is a core aspect of the security guarantees. The transaction history is bounded by the length of the Plasma chain in Plasma Cash, but it's bounded by the length of the Plasma chain times the number of transactions that can fit in a block in Plasma MVP! 133 | 134 | Anyway, I figured I'd think about this in case it had some useful outcomes for Plasma Cash. Maybe it's possible to design a system where proof of non-inclusion is handled in a more compact way. My intuition was that if I could create a *giant* sparse Merkle tree on chain where the leaf nodes represent *every* possible UTXO the Plasma chain could ever produce, then I could create a (relatively) compact proof that a certain input has never been spent. 135 | 136 | This massive Merkle tree would have to be updated whenever an input is spent, but it's not economical for each user to do so individually. Instead, the Plasma operator would update the tree whenever they submit a new block to the root chain. In order to make this work, we need to make sure that the operator is correctly updating the tree. This means two things: 137 | 138 | 1. The operator has updated the tree with every input spent in this block 139 | 2. The operator has not included any inputs in this block that already exist in the tree 140 | 141 | Or (a little) more formally: 142 | 143 | 1. The set of non-null inputs of the old Merkle tree and the set of spent inputs of the new block are disjoint 144 | 2. The set of non-null inputs of the new Merkle tree is a superset of both sets in (1) 145 | 146 | This is easy to prove if we have the entire block, because we could update the old tree ourselves and cross-check the results. However, the whole point of Plasma Cash is that we don't want users to have to receive the entire block. 147 | 148 | Here's where I got stuck.
I couldn't figure out a way of proving that given the Merkle root of the tree and the Merkle root of the inputs to the block, that the new Merkle tree satisfied the above property. Maybe something with zkSNARKS? I'm not sure. 149 | 150 | -------------------------------------------------------------------------------- /plasma/plasma-mvp/explore/rootchain.md: -------------------------------------------------------------------------------- 1 | Exploring [`plasma-mvp`](https://github.com/omisego/plasma-mvp): Root Chain Contracts 2 | === 3 | 4 | Author: Kelvin Fichter 5 | 6 | --- 7 | 8 | **Note:** This document is part of a series titled "Exploring `plasma-mvp`" that attempts to document and explain the underlying functionality of `plasma-mvp`. I created this series so that I (and hopefully others) could gain a deeper understanding of every aspect OmiseGO's `plasma-mvp` codebase. This document *will* change as the codebase changes. 9 | 10 | Each document will begin with a general target audience. 11 | 12 | Feedback and request for changes are always appreciated! 13 | 14 | --- 15 | 16 | ## Target Audience 17 | 18 | This document targets **developers who want to gain a deeper understanding of what a Plasma implementation actually looks like under the hood**. As a result, the rest of this document assumes you have a basic working knowledge of smart contracts on Ethereum. Zeppelin Solutions' [Hitchhiker’s Guide to Smart Contracts](https://blog.zeppelin.solutions/the-hitchhikers-guide-to-smart-contracts-in-ethereum-848f08001f05) is a great place to start if you'd like to learn more. 19 | 20 | --- 21 | 22 | ## Root chain 23 | 24 | Any Plasma implementation requires at least one smart contract be deployed to whatever blockchain we're using as a "root chain". Before we start describing what that might look like, it's useful to define a root chain. When you hear "root chain" in regard to Plasma, it's referring to the underlying blockchain that we're eventually going to have to submit blocks to. `plasma-mvp` is built on top of Ethereum, so any references to the root chain throughout the rest of this document will also be referring to Ethereum. 25 | 26 | **Note:** The [Plasma paper](https://plasma.io/plasma.pdf) briefly mentions using more than one root chain. Current implementations (including `plasma-mvp`) don't require this or make use of it. There might eventually be some benefits to this construction, but it doesn't seem like that's been explored significantly (yet). 27 | 28 | ## Root chain contracts 29 | 30 | Plasma smart contracts generally need to be able to do the following things: 31 | 32 | 1. Allow users to submit deposits into the Plasma chain. 33 | 2. Allow users to exit from the Plasma chain. 34 | 3. Allow users to challenge an exit. 35 | 4. Allow some operator to submit Plasma blocks. 36 | 37 | We'll define some key terms before we continue. 38 | 39 | A `deposit` can be made in any coin or token. `plasma-mvp` only allows deposits in ETH for simplicity, but you could theoretically allow for deposits in any type of asset. The Plasma paper assumes that most production Plasma implementations will accept deposits in ERC20 (or a similar standard) tokens. 40 | 41 | Users can `exit` from the Plasma chain by referring to a specific [Unspent Transaction Output (UTXO)](https://bitcoin.org/en/glossary/unspent-transaction-output). An exit is really just a claim by a user stating that the user has the right to spend a specific UTXO. 
Exits are finalized after a waiting period, during which the exit can be challenged. Other users can attempt to prove that the exit is invalid by issuing a challenge. A successful challenge blocks ("invalidates") the exit. 42 | 43 | A `Plasma block` is very similar to an Ethereum block. It's effectively a Merklized set of transactions, along with some metadata and a signature attesting that it was created by the `authority`. 44 | 45 | The Plasma `authority` is a deliberately vague term. When we talk about the Plasma authority from a high level standpoint, we're describing some mechanism that submits Plasma block to the root chain. This could really be any sort of consensus mechanism and entirely depends on the needs of the individual Plasma chain. For the purposes of `plasma-mvp`, this authority is just one user who's permitted to submit Plasma blocks. However, the `authority` could also be a group of people through something like proof-of-stake, a few trusted users, or any mechanism in between. 46 | 47 | **Note:** Just because `plasma-mvp` has a centralized authority **does not** mean that Plasma is centralized or that all Plasma implementations will be centralized. No matter what, Plasma derives its security from the root chain, so a bad authority can't steal your money as long as you're vigilant and exit when things go badly. `plasma-mvp` is a proof of concept Plasma implementation designed to demonstrate that the core ideas behind Plasma actually do, in fact, work. My guess is that most public Plasma implementations will use some decentralized proof-of-stake mechanism. 48 | 49 | Now that that's out of the way, we'll begin a line-by-line breakdown of how `plasma-mvp` accomplishes the above goals. 50 | 51 | ### Imports 52 | 53 | Let's start with some imports: 54 | 55 | ``` 56 | // RootChain.sol 57 | 58 | pragma solidity 0.4.18; 59 | import 'SafeMath.sol'; 60 | import 'Math.sol'; 61 | import 'RLP.sol'; 62 | import 'Merkle.sol'; 63 | import 'Validate.sol'; 64 | import 'PriorityQueue.sol'; 65 | 66 | ... 67 | ``` 68 | 69 | `SafeMath` is used throughout `plasma-mvp` to make sure that math operatons are carried out safely. You can read more about it [here](https://ethereum.stackexchange.com/questions/25829/meaning-of-using-safemath-for-uint256). 70 | 71 | `Math` doesn't actually seem to be used anywhere in `plasma-mvp` and might just be an unnecessary import. 72 | 73 | `RLP` is a library for [RLP decoding](https://github.com/ethereum/wiki/wiki/RLP). This is specifically used to decode Plasma transactions. We'll get to that later. 74 | 75 | `Merkle` is a library that allows for the verification of Merkle proofs, explained well in [this article](https://blog.ethereum.org/2015/11/15/merkling-in-ethereum/) by Vitalik. We'll use these proofs whenever users start or challenge an exit in order to concisely validate that some transaction was actually included in a particular block. 76 | 77 | `Validate` is used to check that the signatures attached to a transaction are valid. 78 | 79 | `PriorityQueue` is pretty self descriptive. Exits are given a priority based on the age of their corresponding input and are therefore held in a priority queue. This priority exists so that we're always processing exits in correct age order. If a Byzantine authority creates an invalid block, then all earlier (valid) transactions will process correctly before the attacker can attempt to exit from the invalid transactions. More on this later. 
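As a toy illustration of that ordering, here's Python's `heapq` standing in for `PriorityQueue`, with made-up priorities where smaller numbers mean older outputs:

```
import heapq

exit_queue = []  # min-heap of (priority, label); smaller priority = older output

heapq.heappush(exit_queue, (5_000_010_001, "honest exit referencing block 5"))
heapq.heappush(exit_queue, (9_000_000_000, "operator exit from an invalid block 9"))
heapq.heappush(exit_queue, (2_000_020_000, "honest exit referencing block 2"))

while exit_queue:
    priority, label = heapq.heappop(exit_queue)
    print(priority, label)

# Both honest exits are processed before the exit that stems from the invalid block,
# because their outputs (and therefore their priorities) are older.
```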
80 | 81 | ### `Using For` statements 82 | 83 | The contract starts off by declaring that it's using some libraries for certain types: 84 | 85 | ``` 86 | ... 87 | 88 | contract RootChain { 89 | using SafeMath for uint256; 90 | using RLP for bytes; 91 | using RLP for RLP.RLPItem; 92 | using RLP for RLP.Iterator; 93 | using Merkle for bytes32; 94 | 95 | ... 96 | ``` 97 | 98 | This behavior is described in more detail in the [Solidity docs](https://solidity.readthedocs.io/en/develop/contracts.html#using-for). Basically this just attaches library functions to objects of a certain type. The object that the function is called on will be passed as the first parameter to that function. For example, the statement `using SafeMath for uint256;` means we can later do something like: 99 | 100 | ``` 101 | currentChildBlock = currentChildBlock.add(1); 102 | ``` 103 | 104 | instead of 105 | 106 | ``` 107 | currentChildBlock = SafeMath.add(currentChildBlock, 1); 108 | ``` 109 | 110 | Later on we'll get to how the contract actually uses these libraries. 111 | 112 | ### Events 113 | 114 | `RootChain.sol` defines two events, `Deposit` and `Exit`: 115 | 116 | ``` 117 | ... 118 | 119 | event Deposit(address depositor, uint256 amount); 120 | event Exit(address exitor, uint256 utxoPos); 121 | 122 | ... 123 | ``` 124 | 125 | `Deposit` is called whenever a deposit is made. It emits the address of the user that created the deposit and the deposit amount (in ETH). The Plasma client watches for this event and automatically adds a block locally whenever a deposit is made to the root chain. 126 | 127 | `Exit` is called whenever an exit is *started*. It emits the address of the user that created the exit and the "position" (`utxoPos`) of the UTXO being exited in the blockchain. This `utxoPos` is determined by `blknum * 1000000000 + index * 10000 + oindex`. `blknum` is the number of the block in which the UTXO was included. `index` is the index of the transaction within that block. `oindex` is either 0 or 1, depending on which of the transaction's multiple outputs is being exited (transactions in `plasma-mvp` currently have two outputs). Clients listen to this event and challenge the exit if it's invalid. 128 | 129 | ### Storage 130 | 131 | We define some structs and variables that make up the storage of our contract: 132 | 133 | ``` 134 | ... 135 | 136 | mapping(uint256 => childBlock) public childChain; 137 | mapping(uint256 => exit) public exits; 138 | mapping(uint256 => uint256) public exitIds; 139 | PriorityQueue exitsQueue; 140 | address public authority; 141 | uint256 public currentChildBlock; 142 | uint256 public recentBlock; 143 | uint256 public weekOldBlock; 144 | 145 | struct exit { 146 | address owner; 147 | uint256 amount; 148 | uint256 utxoPos; 149 | } 150 | 151 | struct childBlock { 152 | bytes32 root; 153 | uint256 created_at; 154 | } 155 | 156 | ... 157 | ``` 158 | 159 | `childChain` maps from a set of `uint256` block numbers to a set of `childBlock` structs. Each `childBlock` consists of a `bytes32` [Merkle root](https://bitcoin.stackexchange.com/questions/10479/what-is-the-merkle-root) of the transaction set in the block and a `uint256` timestamp that represents the time the block was created. This timestamp is determined by the `block.timestamp` of the Ethereum block in which the Plasma block submission transaction was included. We need this timestamp to make sure that outputs in blocks older than some amount of time (1 week, in our case) are given some maximum priority. 
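To keep the remaining field descriptions easier to follow, here's a rough Python mirror of the same storage layout (types simplified; purely illustrative, not part of the contract):

```
from dataclasses import dataclass

@dataclass
class ChildBlock:
    root: bytes      # Merkle root of the block's transaction set
    created_at: int  # root-chain timestamp when the block was submitted

@dataclass
class Exit:
    owner: str       # address claiming the right to withdraw
    amount: int      # amount being exited
    utxo_pos: int    # position of the referenced UTXO

child_chain = {}  # block number -> ChildBlock
exits = {}        # priority -> Exit
exit_ids = {}     # utxoPos -> priority
```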
160 | 161 | `exits` maps from a set of `uint256` priorities to a set of `exit` structs. Each `exit` is comprised of the claimant, the amount being exited, and the "position" of the referenced UTXO (as described above). Priority should be unique so that these exits can't be overwritten. 162 | 163 | `exitIds` maps from a set of `uint256` UTXO positions to a set of `uint256` priorities. This feature was designed for usability so that users only need to maintain a single unique value (`utxoPos`). This should probably be removed and be replaced with client-side method of converting from `utxoPos` to `priority`. 164 | 165 | `exitsQueue` is a priority queue that holds a list of `priorities` in decreasing order (highest priority first). This is used when processing exits so that the oldest outputs (with the highest priority) are processed before newer outputs. 166 | 167 | `authority` is the address of the user allowed to submit Plasma blocks. This is only a single user in `plasma-mvp` but, as stated before, this can be modified to be some decentralized mechanism. 168 | 169 | `currentChildBlock` is the block number of the current Plasma block. This is used to make sure that we know where to insert new blocks into the `childChain` mapping (because maps aren't lists and don't have an `append` method). 170 | 171 | `recentBlock` isn't actually used anywhere. We need to get rid of that. Don't worry about it. 172 | 173 | `weekOldBlock` keeps track of the number of the oldest Plasma block that's less than a week old. This is used to limit the maximum `priority` that any transaction can have. Without this, outputs greater than two weeks old would be able to exit immediately and we definitely don't want that! 174 | 175 | ### Modifiers 176 | 177 | Now let's start getting into some contract logic! `RootChain.sol` currently has two modifiers. There's definitely some room for improvement here. 178 | 179 | ``` 180 | ... 181 | 182 | modifier isAuthority() { 183 | require(msg.sender == authority); 184 | _; 185 | } 186 | 187 | ... 188 | ``` 189 | 190 | `isAuthority` is a simple modifier that verifies that the sender is also the specified `authority`. Nothing too complex. 191 | 192 | ``` 193 | ... 194 | 195 | modifier incrementOldBlocks() { 196 | while (childChain[weekOldBlock].created_at < block.timestamp.sub(1 weeks)) { 197 | if (childChain[weekOldBlock].created_at == 0) 198 | break; 199 | weekOldBlock = weekOldBlock.add(1); 200 | } 201 | _; 202 | } 203 | 204 | ... 205 | ``` 206 | 207 | `incrementOldBlocks` is a weird one. The idea here is to make sure that we're constantly updating our `weekOldBlock`. `incrementOldBlocks` is added to a few functions so that there's (hopefully) never a case in which the `while` loop costs a prohibitive amount of gas. There's probably a better way to do this. If you've got ideas, feel free to make a pull request! 208 | 209 | The logic here is pretty simple - if the current `weekOldBlock` is more than a week old, increment `weekOldBlock` by one and check again. If `weekOldBlock` would ever point to a block that doesn't exist (meaning we haven't had a child block in at least a week), then `weekOldBlock` won't be incremented further. 210 | 211 | ### Functions 212 | 213 | We're going to explore each contract function one by one. If we get to a library call that's worth explaining, we'll go through that too. Without further ado, here's the constructor: 214 | 215 | #### `RootChain` 216 | 217 | ``` 218 | ... 
219 | 220 | function RootChain() 221 | public 222 | { 223 | authority = msg.sender; 224 | currentChildBlock = 1; 225 | exitsQueue = new PriorityQueue(); 226 | } 227 | 228 | ... 229 | ``` 230 | 231 | The constructor doesn't really do much - it just sets `authority` as the contract creator, `currentChildBlock` to 1, and `exitsQueue` to a new instance of `PriorityQueue`. I'm not going to go into it here, but if you're interested in seeing the source code for `PriorityQueue`, it's available on [GitHub](https://github.com/omisego/plasma-mvp/blob/master/plasma/root_chain/contracts/DataStructures/PriorityQueue.sol). 232 | 233 | #### `submitBlock` 234 | 235 | Here's the logic for submitting blocks: 236 | 237 | ``` 238 | ... 239 | 240 | // @dev Allows Plasma chain operator to submit block root 241 | // @param root The root of a child chain block 242 | function submitBlock(bytes32 root, uint256 blknum) 243 | public 244 | isAuthority 245 | incrementOldBlocks 246 | { 247 | require(blknum == currentChildBlock); 248 | childChain[currentChildBlock] = childBlock({ 249 | root: root, 250 | created_at: block.timestamp 251 | }); 252 | currentChildBlock = currentChildBlock.add(1); 253 | } 254 | 255 | ... 256 | ``` 257 | 258 | This method takes the `bytes32 root` of the child chain block and the block's `uint256 blknum` as parameters. 259 | 260 | We've attached both of our modifiers to this function so the user calling this function `isAuthority` and any attempt to submit blocks will also `incrementOldBlocks`. 261 | 262 | There's a check here to make sure that `blknum == currentChildBlock`. This has the effect of preventing an `authority` from ever accidentally submitting the wrong block or submitting blocks in the wrong order. 263 | 264 | If everything checks out, we insert a new `childBlock` with the given `root` and `created_at: block.timestamp`. Finally, we add 1 to the `currentChildBlock`. 265 | 266 | #### `deposit` 267 | 268 | `deposit` is where things start to get a little bit more complicated, so I'll descrbie things in more detail. The idea behind `deposit` is to create a new `childBlock` with a *single transaction* that creates a UTXO for the user "out of thin air". 269 | 270 | ``` 271 | ... 272 | 273 | // @dev Allows anyone to deposit funds into the Plasma chain 274 | // @param txBytes The format of the transaction that'll become the deposit 275 | // TODO: This needs to be optimized so that the transaction is created 276 | // from msg.sender and msg.value 277 | function deposit(bytes txBytes) 278 | public 279 | payable 280 | { 281 | var txList = txBytes.toRLPItem().toList(11); 282 | require(txList.length == 11); 283 | for (uint256 i; i < 6; i++) { 284 | require(txList[i].toUint() == 0); 285 | } 286 | require(txList[7].toUint() == msg.value); 287 | require(txList[9].toUint() == 0); 288 | bytes32 zeroBytes; 289 | bytes32 root = keccak256(keccak256(txBytes), new bytes(130)); 290 | for (i = 0; i < 16; i++) { 291 | root = keccak256(root, zeroBytes); 292 | zeroBytes = keccak256(zeroBytes, zeroBytes); 293 | } 294 | childChain[currentChildBlock] = childBlock({ 295 | root: root, 296 | created_at: block.timestamp 297 | }); 298 | currentChildBlock = currentChildBlock.add(1); 299 | Deposit(txList[6].toAddress(), txList[7].toUint()); 300 | } 301 | 302 | ... 303 | ``` 304 | 305 | `deposit` only takes a single argument, `txBytes`. `txBytes` is an RLP encoded `plasma-mvp` transaction. Specifically, this transaction is the Plasma transaction that will create a UTXO for the user "out of thin air". 
Transactions in `plasma-mvp` are a little different from the ones specified in [Minimal Viable Plasma](https://ethresear.ch/t/minimal-viable-plasma/426) and have the following form: 306 | 307 | ``` 308 | [ 309 | blknum1, txindex1, oindex1, # Input 1 310 | blknum2, txindex2, oindex2, # Input 2 311 | newowner1, amount1, # Output 1 312 | newowner2, amount2, # Output 2 313 | fee, # Fee 314 | sig1, sig2 # Signatures 315 | ] 316 | ``` 317 | 318 | So when you see something like `txBytes[i]`, it's referring to the component of the transaction at the index `i`. For example, `txList[6]` refers to `newowner1` and `txList[7]` refers to `amount1`. 319 | 320 | A valid `deposit` transaction for 1 ETH will look like this (before RLP encoding): 321 | 322 | ``` 323 | [ 324 | 0, 0, 0, # Input 1 325 | 0, 0, 0, # Input 2 326 | 0x281055afc982d96fab65b3a49cac8b878184cb16, 1000000000000000000, # Output 1 327 | 0, 0, # Output 2 328 | 0, # Fee 329 | 0, 0 # Signatures 330 | ] 331 | ``` 332 | 333 | **Note:** The requirement that the user submit an encoded transaction will probably change if we can cheaply RLP encode in the contract. This is an [open problem](https://github.com/omisego/plasma-mvp/issues/65) and PRs are welcome! 334 | 335 | The first thing we're doing with 336 | 337 | ``` 338 | var txList = txBytes.toRLPItem().toList(11); 339 | ``` 340 | 341 | is using the `RLP` library to decode our `txBytes` to a list of 11 components (because our transaction has 11 parts). Remember, we can do this because we specified that our contract is: 342 | 343 | ``` 344 | using RLP for bytes; 345 | using RLP for RLP.RLPItem; 346 | using RLP for RLP.Iterator; 347 | ``` 348 | 349 | So now we've got a list that looks like the transaction specified above. We then `require(txList.length == 11);` to make sure that the transaction provided is, in fact, 11 parts. 350 | 351 | Next, we make sure that the first 6 components are set to 0. 352 | 353 | ``` 354 | for (uint256 i; i < 6; i++) { 355 | require(txList[i].toUint() == 0); 356 | } 357 | ``` 358 | 359 | We do this because the transaction is created "out of thin air", so there's no input to reference. 360 | 361 | We also verify that `txIndex[7]` (the first output amount, `amount1`) is equal to the `msg.value` and that `txIndex[9]` (the second output amount, `amount2`) is equal to 0. 362 | 363 | ``` 364 | require(txList[7].toUint() == msg.value); 365 | require(txList[9].toUint() == 0); 366 | ``` 367 | 368 | This way we're sure that the deposit amount specified in the user's `txBytes` is correct. 369 | 370 | Now we do some hashing to build up a height 16 Merkle tree: 371 | 372 | ``` 373 | bytes32 zeroBytes; 374 | bytes32 root = keccak256(keccak256(txBytes), new bytes(130)); 375 | for (i = 0; i < 16; i++) { 376 | root = keccak256(root, zeroBytes); 377 | zeroBytes = keccak256(zeroBytes, zeroBytes); 378 | } 379 | ``` 380 | 381 | All we're doing is repeatedly hashing the deposit transaction with a bunch of zero bytes. This is done so that `deposit` blocks look just like any other block (which also use height 16 Merkle trees). Fun fact, this means `plasma-mvp` permits a maximum of 65536 transactions per block. 382 | 383 | Finally, we insert the `childBlock` and emit a `Deposit` event. 
384 | 385 | ``` 386 | childChain[currentChildBlock] = childBlock({ 387 | root: root, 388 | created_at: block.timestamp 389 | }); 390 | currentChildBlock = currentChildBlock.add(1); 391 | Deposit(txList[6].toAddress(), txList[7].toUint()); 392 | ``` 393 | 394 | #### `startExit` 395 | 396 | On to even more complicated stuff: exits. `startExit` allows a user to begin the process of exiting the Plasma chain. The user needs to specify what specific UTXO they're withdrawing, along with the raw transaction that created that output, proof that the transaction is actually in the specified block, and the signatures that prove the transaction is valid. 397 | 398 | ``` 399 | ... 400 | 401 | // @dev Starts to exit a specified utxo 402 | // @param utxoPos The position of the exiting utxo in the format of blknum * 1000000000 + index * 10000 + oindex 403 | // @param txBytes The transaction being exited in RLP bytes format 404 | // @param proof Proof of the exiting transactions inclusion for the block specified by utxoPos 405 | // @param sigs Both transaction signatures and confirmations signatures used to verify that the exiting transaction has been confirmed 406 | function startExit(uint256 utxoPos, bytes txBytes, bytes proof, bytes sigs) 407 | public 408 | incrementOldBlocks 409 | { 410 | var txList = txBytes.toRLPItem().toList(11); 411 | uint256 blknum = utxoPos / 1000000000; 412 | uint256 txindex = (utxoPos % 1000000000) / 10000; 413 | uint256 oindex = utxoPos - blknum * 1000000000 - txindex * 10000; 414 | bytes32 root = childChain[blknum].root; 415 | 416 | require(msg.sender == txList[6 + 2 * oindex].toAddress()); 417 | bytes32 txHash = keccak256(txBytes); 418 | bytes32 merkleHash = keccak256(txHash, ByteUtils.slice(sigs, 0, 130)); 419 | require(Validate.checkSigs(txHash, root, txList[0].toUint(), txList[3].toUint(), sigs)); 420 | require(merkleHash.checkMembership(txindex, root, proof)); 421 | 422 | // Priority is a given utxos position in the exit priority queue 423 | uint256 priority; 424 | if (blknum < weekOldBlock) { 425 | priority = (utxoPos / blknum).mul(weekOldBlock); 426 | } else { 427 | priority = utxoPos; 428 | } 429 | require(exitIds[utxoPos] == 0); 430 | exitIds[utxoPos] = priority; 431 | exitsQueue.insert(priority); 432 | exits[priority] = exit({ 433 | owner: txList[6 + 2 * oindex].toAddress(), 434 | amount: txList[7 + 2 * oindex].toUint(), 435 | utxoPos: utxoPos 436 | }); 437 | Exit(msg.sender, utxoPos); 438 | } 439 | 440 | ... 441 | ``` 442 | 443 | Once again we add our `incrementOldBlocks` modifier. This is necessary because we're actually going to be using `weekOldBlock` here. 444 | 445 | The four function parameters are `uint256 utxoPos`, `bytes txBytes`, `bytes proof`, and `bytes sigs`. 446 | 447 | `utxoPos` is a unique UTXO identifier that's given by `blknum * 1000000000 + txindex * 10000 + oindex`. For example, if I'm attempting to exit the second UTXO of the second transaction in the fifth Plasma block, my `utxoPos` would be `utxoPos = blknum * 1000000000 + txindex * 10000 + oindex = 5 * 1000000000 + 1 * 10000 + 1 = 5000010001`. This `utxoPos` is unique as long as we put some constraints on how many transactions a single block can have. 448 | 449 | `txBytes` is another RLP encoded transaction, but this time it's the raw transaction that created the UTXO at `utxoPos`. 450 | 451 | `proof` is a Merkle proof that the transaction given by `txBytes` is actually the transaction that created the UTXO at `utxoPos`. 
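Before moving on to the last parameter, here's a small off-chain mirror of the `utxoPos` packing described above (illustrative only; the contract does the same arithmetic inline):

```
def encode_utxo_pos(blknum: int, txindex: int, oindex: int) -> int:
    # Same packing the contract expects: blknum * 1000000000 + txindex * 10000 + oindex
    return blknum * 1000000000 + txindex * 10000 + oindex

def decode_utxo_pos(utxo_pos: int):
    blknum = utxo_pos // 1000000000
    txindex = (utxo_pos % 1000000000) // 10000
    oindex = utxo_pos - blknum * 1000000000 - txindex * 10000
    return blknum, txindex, oindex

# The example from above: second output of the second transaction in the fifth block.
assert encode_utxo_pos(5, 1, 1) == 5000010001
assert decode_utxo_pos(5000010001) == (5, 1, 1)
```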
452 | 453 | `sigs` is the signatures to make this transaction, concatenated into a single `bytes`. `sigs` has the form `sig1 + sig2 + confsig1 + confsig2` where each signature is 65 bytes long. We use a library `ByteUtils` to slice off individual signatures from the larger byte string, i.e. `ByteUtils.slice(sigs, 0, 130)` is `sig1 + sig2`. 454 | 455 | Now to the logic. First we convert our `txBytes` to a list of components like we did in `deposit`: 456 | 457 | ``` 458 | var txList = txBytes.toRLPItem().toList(11); 459 | ``` 460 | 461 | Next we decompose our `utxoPos` into its components: 462 | 463 | ``` 464 | uint256 blknum = utxoPos / 1000000000; 465 | uint256 txindex = (utxoPos % 1000000000) / 10000; 466 | uint256 oindex = utxoPos - blknum * 1000000000 - txindex * 10000; 467 | ``` 468 | 469 | This is effectively just doing the opposite of `blknum * 1000000000 + txindex * 10000 + oindex`. 470 | 471 | We also pull the Merkle root of the block that corresponds to `blknum`: 472 | 473 | ``` 474 | bytes32 root = childChain[blknum].root; 475 | ``` 476 | 477 | Now we're verifying that the user creating the exit actually owns the output being referenced: 478 | 479 | ``` 480 | require(msg.sender == txList[6 + 2 * oindex].toAddress()); 481 | ``` 482 | 483 | Remember that `txList[i]` represents some component of the transaction. `oindex` can either be 0 or 1 (first or second output), so `6 + 2 * oindex` is either `6` or `8`. `txList[6]` is `newowner1` and `txList[8]` is `newowner2`. 484 | 485 | Next we're validating signatures: 486 | 487 | ``` 488 | bytes32 txHash = keccak256(txBytes); 489 | bytes32 merkleHash = keccak256(txHash, ByteUtils.slice(sigs, 0, 130)); 490 | require(Validate.checkSigs(txHash, root, txList[0].toUint(), txList[3].toUint(), sigs)); 491 | require(merkleHash.checkMembership(txindex, root, proof)); 492 | ``` 493 | 494 | The first thing we do is take the `keccak256` hash of `txBytes`. `txHash` gives us enough information to validate that the signatures on the transaction are valid. Validation is handled in [Validate.sol](https://github.com/omisego/plasma-mvp/blob/master/plasma/root_chain/contracts/Libraries/Validate.sol). 495 | 496 | Next we hash `txHash` again with `ByteUtils.slice(sigs, 0, 130) = sig1 + sig2`. This hash represents a leaf node in our Merkle tree and means we can verify that the transaction was actually included in the `blknum` (derived from `utxoPos`). We do this with `require(merkleHash.checkMembership(txindex, root, proof));`, where `proof` is a compact Merkle inclusion proof. 497 | 498 | Priority calculation is up next. The idea here is that older outputs should generally be processed before newer outputs. A detailed explanation of *why* we need priority is described [here](https://hackmd.io/s/BJZdignFf). This is a definite area for improvement because [priority is a lot more complicated than it seems](https://github.com/omisego/plasma-mvp/issues/29). 499 | 500 | ``` 501 | // Priority is a given utxos position in the exit priority queue 502 | uint256 priority; 503 | if (blknum < weekOldBlock) { 504 | priority = (utxoPos / blknum).mul(weekOldBlock); 505 | } else { 506 | priority = utxoPos; 507 | } 508 | ``` 509 | 510 | Here we're actually calculating the exit's priority. If the output being referenced is more than a week old (determined by `weekOldBlock`), then `blknum` is replaced by `weekOldBlock`. This actually creates a collision where two transactions might have the same priority. 
A potential fix for this collision is proposed [here](https://github.com/omisego/plasma-mvp/issues/29).

Lastly, we make some checks and insert the new exit:

```
require(exitIds[utxoPos] == 0);
exitIds[utxoPos] = priority;
exitsQueue.insert(priority);
exits[priority] = exit({
    owner: txList[6 + 2 * oindex].toAddress(),
    amount: txList[7 + 2 * oindex].toUint(),
    utxoPos: utxoPos
});
Exit(msg.sender, utxoPos);
```

`require(exitIds[utxoPos] == 0);` just ensures that we aren't making the same exit twice. Then, to be able to access an exit given its unique identifier (`utxoPos`), we map `utxoPos` to `priority` in `exitIds`. Finally, we insert the exit into our list of exits and emit an `Exit` event.

#### `challengeExit`

Users need to be able to challenge exits of UTXOs that have already been spent. The general process for this is:

1. Check that the challenge tx is included in the referenced block and that it has correct signatures.
2. Check that the exiting UTXO is included as an input to the challenge tx OR that the challenge tx comes before the exiting UTXO and both have a common input.

The first part looks a lot like the checks we made in `startExit`. The second part simply verifies that the UTXO being exited was actually spent in the transaction we're providing. Note: the current implementation as shown below [is actually broken](https://github.com/omisego/plasma-mvp/issues/63).

```
...

// @dev Allows anyone to challenge an exiting transaction by submitting proof of a double spend on the child chain
// @param cUtxoPos The position of the challenging utxo
// @param eUtxoPos The position of the exiting utxo
// @param txBytes The challenging transaction in bytes RLP form
// @param proof Proof of inclusion for the transaction used to challenge
// @param sigs Signatures for the transaction used to challenge
// @param confirmationSig The confirmation signature for the transaction used to challenge
function challengeExit(uint256 cUtxoPos, uint256 eUtxoPos, bytes txBytes, bytes proof, bytes sigs, bytes confirmationSig)
    public
{
    uint256 txindex = (cUtxoPos % 1000000000) / 10000;
    bytes32 root = childChain[cUtxoPos / 1000000000].root;
    uint256 priority = exitIds[eUtxoPos];
    var txHash = keccak256(txBytes);
    var confirmationHash = keccak256(txHash, root);
    var merkleHash = keccak256(txHash, sigs);
    address owner = exits[priority].owner;

    require(owner == ECRecovery.recover(confirmationHash, confirmationSig));
    require(merkleHash.checkMembership(txindex, root, proof));
    delete exits[priority];
    delete exitIds[eUtxoPos];
}

...
```

`challengeExit` takes six parameters.

`cUtxoPos` is the position of one of the two UTXOs inside the conflicting transaction. This parameter is a little misleading, because you actually only need to provide proof of a transaction and not a specific input; we just use it to figure out the block and txindex of the challenging tx.

`eUtxoPos` is the `utxoPos` of the exit being challenged.

`txBytes` is the RLP encoded transaction that's being used to challenge this exit.

`proof` is a Merkle proof that the transaction described by `txBytes` was actually included at the specified block and txindex.

`sigs` are the signatures for the challenging transaction.

`confirmationSig` is a signature showing that the owner of the exit has seen this transaction.
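As a rough mental model of what the contract recomputes from these parameters, here's an illustrative Python sketch. It's not real client or contract code: `keccak256` is passed in as a stand-in for Ethereum's Keccak-256 over raw bytes, the function name is made up, and signature recovery is left out entirely:

```
def challenge_hashes(tx_bytes, block_root, sigs, keccak256):
    # Hash the raw challenging transaction, exactly as challengeExit does.
    tx_hash = keccak256(tx_bytes)
    # The exit's owner must have signed this digest (the confirmation signature).
    confirmation_hash = keccak256(tx_hash + block_root)
    # Leaf hash whose inclusion under block_root is shown by the Merkle proof.
    merkle_hash = keccak256(tx_hash + sigs)
    return confirmation_hash, merkle_hash
```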
I'm going to describe this one in more detail once the issues described [here](https://github.com/omisego/plasma-mvp/issues/63) are fixed.

#### `finalizeExits`

[Minimal Viable Plasma](https://ethresear.ch/t/minimal-viable-plasma/426) describes a "passive loop" that finalizes exits that are more than two weeks old. The current implementation is actually also broken, because it verifies that the *output*, and not the exit of that output, is more than two weeks old.

```
...

// @dev Loops through the priority queue of exits, settling the ones whose challenge
// @dev challenge period has ended
function finalizeExits()
    public
    incrementOldBlocks
    returns (uint256)
{
    uint256 twoWeekOldTimestamp = block.timestamp.sub(2 weeks);
    exit memory currentExit = exits[exitsQueue.getMin()];
    uint256 blknum = currentExit.utxoPos.div(1000000000);
    while (childChain[blknum].created_at < twoWeekOldTimestamp && exitsQueue.currentSize() > 0) {
        currentExit.owner.transfer(currentExit.amount);
        uint256 priority = exitsQueue.delMin();
        delete exits[priority];
        delete exitIds[currentExit.utxoPos];
        currentExit = exits[exitsQueue.getMin()];
    }
}

...
```

Before we do anything, we `incrementOldBlocks`.

Next we calculate what time it was two weeks ago, based on the current block's timestamp:

```
uint256 twoWeekOldTimestamp = block.timestamp.sub(2 weeks);
```

Then we figure out the current exit to be processed, based on our priority queue:

```
exit memory currentExit = exits[exitsQueue.getMin()];
```

**Note:** Here's where things are broken. We *should* be calculating the age of the exit, but we're instead checking the age of the *output*:

```
uint256 blknum = currentExit.utxoPos.div(1000000000);
while (childChain[blknum].created_at < twoWeekOldTimestamp && exitsQueue.currentSize() > 0) {
```

If that check passes, then we send the owner of the exit their money and delete their exit:

```
currentExit.owner.transfer(currentExit.amount);
uint256 priority = exitsQueue.delMin();
delete exits[priority];
delete exitIds[currentExit.utxoPos];
```

**Note:** There's another bug here! We aren't stopping people from attempting to exit the same output twice. We should really be maintaining a list of UTXOs that have already exited.

Finally, we continue the loop with the next exit to be processed:

```
currentExit = exits[exitsQueue.getMin()];
```
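For intuition, here's a minimal Python model of what the loop is meant to do once the two bugs above are addressed: exits track their own start time rather than the output's age, and an output can only be paid out once. The data structures and names are made up for illustration; this is a sketch, not the contract:

```
import heapq
import time

TWO_WEEKS = 14 * 24 * 60 * 60  # seconds

def finalize_exits(exit_queue, exits, finalized_utxos, payouts, now=None):
    # exit_queue: min-heap of priorities; exits: priority -> exit record;
    # finalized_utxos: set of utxoPos values that have already been paid out.
    now = now if now is not None else time.time()
    while exit_queue and exits[exit_queue[0]]["started_at"] < now - TWO_WEEKS:
        priority = heapq.heappop(exit_queue)
        current = exits.pop(priority)
        if current["utxo_pos"] in finalized_utxos:
            continue  # this output has already exited once; skip it
        finalized_utxos.add(current["utxo_pos"])
        payouts[current["owner"]] = payouts.get(current["owner"], 0) + current["amount"]
```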
### Constant Functions

Lastly, we have some constant functions.

#### `getChildChain`

Returns the block with a specific `blockNumber`.

```
...

function getChildChain(uint256 blockNumber)
    public
    view
    returns (bytes32, uint256)
{
    return (childChain[blockNumber].root, childChain[blockNumber].created_at);
}

...
```

#### `getExit`

Returns the exit with a specific `priority`.

```
...

function getExit(uint256 priority)
    public
    view
    returns (address, uint256, uint256)
{
    return (exits[priority].owner, exits[priority].amount, exits[priority].utxoPos);
}
```

--------------------------------------------------------------------------------