├── .gitignore ├── .nojekyll ├── LICENSE ├── README.md ├── assets └── pdf │ ├── ethcc2021.pdf │ ├── notes-georgios.pdf │ └── rig-ethcc.pdf ├── auctions └── notebooks │ └── fpa_hybrid_sim.ipynb ├── eip1559 ├── combination.md ├── fixedesc.jpeg ├── floatingesc.jpeg ├── floatingescfixedtip.jpeg └── notes-call3.md ├── ethdata ├── extract.sh └── notebooks │ ├── explore_data.Rmd │ ├── explore_data.html │ └── gas_weather_reports │ ├── exploreJuly21.Rmd │ └── exploreJuly21.html ├── index.html ├── posdata ├── README.md ├── default.html5 ├── index.html ├── notebooks │ ├── lib.R │ ├── mainnet_compare.Rmd │ ├── mainnet_explore.Rmd │ ├── mainnet_explore.html │ ├── medalla_explore.Rmd │ ├── pyrmont_compare.Rmd │ ├── pyrmont_explore.Rmd │ └── uptime_reward_gif.R └── scripts │ ├── 20210424_plots.R │ ├── 20210908_slowdown.R │ └── 20210918_altair_sync.R ├── static ├── authorFormatting.js ├── component-library.js ├── footer.js ├── header.js ├── index.css ├── jupyter.css ├── react-dom.development.js ├── react.development.js ├── referencesFormatting.js ├── rig.png ├── sectionFormatting.js └── theme-light.css └── supply-chain-health └── README.md /.gitignore: -------------------------------------------------------------------------------- 1 | *.pyc 2 | .DS_Store 3 | __pycache__ 4 | econ-review.docx 5 | .ipynb_checkpoints 6 | reviews/ 7 | visuals/ 8 | temp 9 | .Rproj.user 10 | cadCAD/ 11 | ethdata/scripts/default_url.R 12 | venv/ 13 | data/ 14 | -------------------------------------------------------------------------------- /.nojekyll: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ethereum/rig/6ad58089cf2c36d2632b0d9e70a02e41ef3b2a28/.nojekyll -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | The Robust Incentives Group is an Ethereum Foundation research team dedicated to the study of protocol 
mechanisms through the lens of game theory, mechanism design, crypto-economics, formal methods, and data science. 2 | 3 | For a complete directory of our papers, posts, presentations, as well as "RIG's Open Problems (ROPs)", please visit https://rig.ethereum.org. 4 | -------------------------------------------------------------------------------- /assets/pdf/ethcc2021.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ethereum/rig/6ad58089cf2c36d2632b0d9e70a02e41ef3b2a28/assets/pdf/ethcc2021.pdf -------------------------------------------------------------------------------- /assets/pdf/notes-georgios.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ethereum/rig/6ad58089cf2c36d2632b0d9e70a02e41ef3b2a28/assets/pdf/notes-georgios.pdf -------------------------------------------------------------------------------- /assets/pdf/rig-ethcc.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ethereum/rig/6ad58089cf2c36d2632b0d9e70a02e41ef3b2a28/assets/pdf/rig-ethcc.pdf -------------------------------------------------------------------------------- /eip1559/combination.md: -------------------------------------------------------------------------------- 1 | # Combination EIP1559 / escalator 2 | 3 | **TL;DR:** We present three models for combining EIP 1559 and the escalator. Of the three, only one really makes sense for us (the _floating escalator_ model), while the other two (_thresholded escalator_ and _fixed escalator_) are presented for the sake of a complete exploration of the design space.
4 | 5 | ## Base dynamics and parameters 6 | 7 | ### Base parameters 8 | 9 | - `c` = target gas used 10 | - `1 / d` = max rate of change 11 | - `g[t]` = gas used by block t 12 | - `b[t]` = basefee at block t 13 | - `p[t]` = premium at block t 14 | 15 | ### Dynamics 16 | 17 | **EIP 1559 dynamics** 18 | 19 | - `b[t+1] = b[t] * (1 + (g[t] - c) / c / d)` 20 | 21 | **Linear escalator, given `startblock`, `endblock`, `startpremium` and `maxpremium`** 22 | 23 | - `p[t] = startpremium + (t - startblock) / (endblock - startblock) * (maxpremium - startpremium)` 24 | 25 | ## Thresholded escalator 26 | 27 | **Intuition:** Vanilla escalator with the condition that a bid cannot be included if the `gasprice` is lower than the current `basefee`. 28 | 29 | ### User-specified parameters 30 | 31 | - `startbid` 32 | - `startblock` 33 | - `endblock` 34 | - `maxpremium` 35 | 36 | ### Computed parameters 37 | 38 | - `startpremium = 0` 39 | 40 | ### Gas price 41 | 42 | ```python 43 | gasprice[t] = startbid + p[t] 44 | 45 | # Include only if 46 | assert gasprice[t] >= b[t] 47 | ``` 48 | 49 | ### Pros/cons 50 | 51 | #### Pros 52 | 53 | - "Pure" escalator, only modulated by the basefee, which determines whether a bid can be included. 54 | - Wallets can default to `startbid = b[t]`. This is the _fixed escalator_ model. 55 | 56 | #### Cons 57 | 58 | - The simple EIP 1559 strategy of basefee + a fixed premium cannot be expressed under this model. 59 | 60 | ## Fixed escalator 61 | 62 | **Intuition:** Vanilla escalator with a reasonable `startbid` parameter provided by the current `basefee`.
63 | 64 | ### User-specified parameters 65 | 66 | - `startblock` 67 | - `endblock` 68 | - `maxfee` 69 | - `startpremium` 70 | 71 | ### Computed parameters 72 | 73 | - `maxpremium = maxfee - b[startblock]` 74 | 75 | ### Gas price 76 | 77 | ```python 78 | gasprice[t] = min( 79 | max(b[startblock] + p[t], b[t]), 80 | b[startblock] + maxpremium 81 | ) 82 | 83 | # Include only if 84 | assert gasprice[t] >= b[t] 85 | ``` 86 | 87 | - The gas price is set to either the current basefee `b[t]` OR the basefee at the start of the escalator `b[startblock]` + the current premium `p[t]`, whichever is higher, bounded above by the maxfee. 88 | - Setting `startpremium = 0` means the starting bid equals the basefee. 89 | 90 | ![](fixedesc.jpeg) 91 | _Bid in solid purple line, basefee in blue._ 92 | 93 | ### Pros/cons 94 | 95 | #### Pros 96 | 97 | - Respects the intuition of `basefee` as a good default current price + escalating tip. 98 | - For stable `basefee`, looks like an escalator with a well-defined `startbid`. 99 | 100 | #### Cons 101 | 102 | - The gas price can rise faster than the escalator would plan, if basefee increases faster than the escalator slope. Should the premium follow? See "floating escalator started on basefee". 103 | - The simple EIP 1559 strategy of basefee + a fixed premium cannot be expressed under this model. 104 | 105 | ## Floating escalator 106 | 107 | **Intuition:** The "true" EIP 1559 with escalating tips. The user specifies an escalator for the tip, which is always added to the current basefee, as opposed to the basefee at `startblock` used by the fixed escalator. Users specifying a steeper escalator "take off" above other users, expressing their higher time preferences. 108 | 109 | ### User-specified parameters 110 | 111 | - `startblock` 112 | - `endblock` 113 | - `startpremium` 114 | - `maxfee` OR `maxpremium` OR both.
115 | 116 | ### Computed parameters 117 | 118 | - If `maxfee` is given: `maxpremium = maxfee - (b[startblock] + startpremium)` 119 | - If `maxpremium` is given: `maxfee = b[startblock] + maxpremium` 120 | - If both are given, nothing needs to be computed. 121 | 122 | ### Gas price 123 | 124 | ```python 125 | gasprice[t] = min( 126 | b[t] + p[t], 127 | maxfee 128 | ) 129 | 130 | # Include only if 131 | assert gasprice[t] >= b[t] 132 | ``` 133 | 134 | The gas price is set to the current basefee `b[t]` + the current premium `p[t]`, bounded above by `maxfee`. 135 | 136 | ![](floatingesc.jpeg) 137 | _Bid in solid purple line, basefee in blue._ 138 | 139 | ### Pros/cons 140 | 141 | #### Pros 142 | 143 | - Respects the intuition of `basefee` as a good default current price + escalating tip. 144 | - For stable `basefee`, looks like an escalator with a well-defined `startbid`. 145 | - For unstable `basefee`, escalates the tip in excess of the current basefee, unlike the fixed escalator. 146 | - Setting `startpremium = maxpremium` together with some `maxfee` is equivalent to the EIP 1559 paradigm (with `endblock` far into the future). 147 | 148 | ![](floatingescfixedtip.jpeg) 149 | _Bid in solid purple line, basefee in blue._ 150 | 151 | #### Cons 152 | 153 | - "Double dynamics" of basefee varying + tip varying, maybe hard to reason about. 154 | - You can reach your `maxfee` much faster than you intended if `basefee` increases during the transaction lifetime.
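To make the comparison concrete, the three bidding rules can be sketched as plain Python functions. This is a toy sketch mirroring the pseudocode above; the function names and the example numbers are mine.

```python
def premium(t, startblock, endblock, startpremium, maxpremium):
    # Linear escalator: interpolate from startpremium at startblock
    # to maxpremium at endblock.
    slope = (maxpremium - startpremium) / (endblock - startblock)
    return startpremium + (t - startblock) * slope

def thresholded(t, basefee, startbid, p):
    # Pure escalator bid; includable only if it covers the current basefee.
    bid = startbid + p
    return bid, bid >= basefee[t]

def fixed(t, basefee, startblock, p, maxpremium):
    # Escalates from the basefee observed at startblock, floored by the
    # current basefee and capped at b[startblock] + maxpremium.
    bid = min(max(basefee[startblock] + p, basefee[t]),
              basefee[startblock] + maxpremium)
    return bid, bid >= basefee[t]

def floating(t, basefee, p, maxfee):
    # Escalating tip added to the *current* basefee, capped at maxfee.
    bid = min(basefee[t] + p, maxfee)
    return bid, bid >= basefee[t]

basefee = {0: 20, 5: 24}  # toy basefee trajectory, block height -> Gwei
p5 = premium(5, startblock=0, endblock=10, startpremium=0, maxpremium=10)
print(thresholded(5, basefee, startbid=20, p=p5))        # (25.0, True)
print(fixed(5, basefee, startblock=0, p=p5, maxpremium=10))  # (25.0, True)
print(floating(5, basefee, p=p5, maxfee=28))             # (28, True)
```

The three rules differ only in how the premium interacts with the basefee: under the floating rule the bid tracks basefee moves one-for-one until it hits `maxfee`.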
155 | -------------------------------------------------------------------------------- /eip1559/fixedesc.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ethereum/rig/6ad58089cf2c36d2632b0d9e70a02e41ef3b2a28/eip1559/fixedesc.jpeg -------------------------------------------------------------------------------- /eip1559/floatingesc.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ethereum/rig/6ad58089cf2c36d2632b0d9e70a02e41ef3b2a28/eip1559/floatingesc.jpeg -------------------------------------------------------------------------------- /eip1559/floatingescfixedtip.jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ethereum/rig/6ad58089cf2c36d2632b0d9e70a02e41ef3b2a28/eip1559/floatingescfixedtip.jpeg -------------------------------------------------------------------------------- /eip1559/notes-call3.md: -------------------------------------------------------------------------------- 1 | # EIP 1559 implementers' call #3 notes 2 | 3 | ## 1559 and the escalator 4 | 5 | I see two mutually exclusive paths: 6 | 7 | 1. Keeping the current transaction model with the escalator rule for the fee. 8 | 2. Adopting 1559 and then possibly adopting the escalator rule for the premium. 9 | 10 | If it is decided to combine 1559 and the escalator, I believe the [floating escalator](combination.md) is the best way to do so. It is the only option for which it is possible to unbundle the 1559 side and the escalator side, allowing us to implement 1559 first and decide later on (or in concert) whether the escalator rule should be proposed. 
11 | 12 | As a reminder, the escalator rule governs the premium: 13 | 14 | - `p[t] = startpremium + (t - startblock) / (endblock - startblock) * (maxpremium - startpremium)` 15 | 16 | In the floating escalator, we simply add the escalating premium to the current basefee `b[t]`. 17 | 18 | ```python 19 | gasprice[t] = min( 20 | b[t] + p[t], 21 | maxfee 22 | ) 23 | 24 | # Include only if 25 | assert gasprice[t] >= b[t] 26 | ``` 27 | 28 | A user must decide `maxfee`, `startpremium`, `maxpremium`, `startblock` and `endblock`. 29 | 30 | In addition, it is possible even with the escalator rule to emulate the behaviour of a 1559 tx with parameters `gas_premium` and `maxfee`, by setting `startblock` to the current block, `endblock` to an outrageously far away block and `startpremium == maxpremium == gas_premium`. This should help with compatibility and UX if and when the escalator rule is adopted for setting the premium. 31 | 32 | ## 2718 and 1559 33 | 34 | I don't have much to say about this. It seems 2718 offers a clean way to upgrade transaction patterns. This is perhaps helpful with the above? 35 | 36 | ## (In progress) Simulations 37 | 38 | We have the beginning of a more robust environment for agent-based simulations [here](abm1559.ipynb). We need to think through how agents should behave, but initial tests show basefee converges quickly when demand is at steady-state (e.g., the same expectation of arrivals between two blocks). There is also support for escalator-style transactions, but it is untested so far. 39 | 40 | Currently, agents can have one of two cost functions: in the first, they incur a fixed cost for each extra block of waiting, with some value for having their transaction included; in the second, this value is discounted over time (the later the inclusion, the smaller the value). Agents decide whether to enter based on their estimated profit: if they expect to realise a negative profit, they balk and do not submit their transaction.
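The entry decision can be sketched as follows. This is a minimal sketch: the function names are mine, the payoff form value − tip − cost × wait mirrors the queueing payoff discussed at the end of these notes, and the default 5-block wait is the naive waiting-time estimate the simulated agents currently use.

```python
def expected_profit(value, tip, cost_per_block, expected_wait):
    # Value of inclusion, minus the fee paid, minus the cost of
    # waiting `expected_wait` blocks at `cost_per_block` each.
    return value - tip - cost_per_block * expected_wait

def enters(value, tip, cost_per_block, expected_wait=5):
    # Balk (do not submit) whenever the expected profit is negative.
    return expected_profit(value, tip, cost_per_block, expected_wait) >= 0

print(enters(10, 2, 1))  # True: 10 - 2 - 1*5 = 3
print(enters(10, 2, 2))  # False: 10 - 2 - 2*5 = -2
```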
41 | 42 | Note that _without the option to cancel their transaction (for free or at some predictable cost)_, an agent may realise a negative profit after all if their estimation was too optimistic. This violates ex post individual rationality. 43 | 44 | The current agent estimation of waiting time is pretty dumb (they simply expect to wait 5 blocks). A better estimator must depend on the submitted transaction parameters (the higher the premium/maxfee, the lower the expected waiting time) and could look like the estimators currently used by wallets. This will also be helpful to test these estimators empirically and decide on good transaction default values. 45 | 46 | ## (Important) Wallet defaults 47 | 48 | How should wallets set `max_fee` and `gas_premium`? We look for good default values to propose to users. In the current UX paradigm, users are presented with 4 options: 49 | 50 | - Three of them suggest values corresponding to "fast", "average" and "slow" inclusions. 51 | - Otherwise, users can set their own transaction values. 52 | 53 | Suppose a wallet offers defaults pegged to the basefee, e.g., three defaults $\rho_1 < \rho_2 < \rho_3$ such that proposed maxfees are $m_i = (1+\rho_i) b(t)$. Assuming users broadly follow wallet defaults (they seem to), miners now make a higher profit when basefee is higher, all else equal: the tip available to the miner is up to $m_i - b(t) = \rho_i b(t)$, which scales with the basefee. 54 | 55 | It was suggested to default to a fixed premium for users, e.g., 1 Gwei, or the amount of Gwei that would exactly compensate a miner for the extra ommer risk of including the transaction in their block. The tip, however, will likely decide the speed of inclusion of the transaction, given that the tip is received by miners. We prefer high-value or time-sensitive transactions to get in first; with a fixed premium, we may not be able to discriminate between low- and high-value instances. The floating escalator can come in handy to help discriminate between the two.
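A sketch of such a basefee-pegged default (the $\rho_i$ values here are invented for illustration, not recommendations):

```python
def default_maxfees(basefee, rhos=(0.25, 0.5, 1.0)):
    # Suggested maxfees m_i = (1 + rho_i) * b(t), e.g. for "slow",
    # "average" and "fast" inclusion. Note the tip available to the
    # miner, m_i - b(t) = rho_i * b(t), grows with the basefee.
    return [(1 + rho) * basefee for rho in rhos]

print(default_maxfees(8))  # [10.0, 12.0, 16.0]
```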
56 | 57 | ### Pegged premium rule: A naive proposal that doesn't work 58 | 59 | A default that respects this intuition is pegging the premium to the proposed maxfee. We assume then that users only declare their maxfee and the premium is set in protocol, taking e.g. one hundredth of the declared maxfee. 60 | 61 | I value my transaction a lot and am ready to pay 10 Gwei for it. The default sets my premium to 10 / 100 = 0.1 Gwei. Someone else who values theirs less, e.g., is only ready to pay up to 5 Gwei for it, has their premium set to 5 / 100 = 0.05 Gwei. Miners prefer my transaction to theirs. This also collapses the number of parameters to set from 2 to 1. 62 | 63 | When the premium is equal to a fixed fraction of the maxfee, the tip becomes a consistent transaction order, in addition to representing exactly the miner profit. Whenever $m_i < m_j$, two maxfees of two users $i$ and $j$, we _always_ have $p_i < p_j$ (premiums) and $t_i < t_j$ (tips). 64 | 65 | From an incentive-compatibility point of view, a user who wants to "game" the system by inflating their maxfee to inflate their tip exposes themselves to a high transaction fee, in the case where basefee increases before they are included. 66 | 67 | But there is a trivial strategy to defeat this rule: a user could declare a maxfee they would not be ready to pay and monitor the basefee, cancelling their transaction whenever basefee rises above their true (undeclared) maxfee. So the pegged premium rule is not incentive compatible. 68 | 69 | ## (Important) Client strategies 70 | 71 | We need to figure out how clients handle pending transactions. In the current paradigm, clients can simply rank and update their list of pending transactions based on the gasprice. This is *not true* when users can set both the maxfee and the premium! 
For instance, when basefee is equal to 5, consider these two users: 72 | 73 | | Basefee = 5 | Maxfee | Premium | Tip | 74 | |-|-|-|-| 75 | | **User A** | 10 | 8 | 5 | 76 | | **User B** | 15 | 6 | 6 | 77 | 78 | (Here the tip equals `min(premium, maxfee - basefee)`.) We like ranking by premiums since these do not vary over time. It means miners can easily update their pending transactions list. But ranking by premiums, a miner would prefer user A to user B, even though the miner would receive a greater payoff from including B. 79 | 80 | So we must rank by tips, in which case B is preferred. But tips are time-varying! Suppose basefee now drops to 2. 81 | 82 | | Basefee = 2 | Maxfee | Premium | Tip | 83 | |-|-|-|-| 84 | | **User A** | 10 | 8 | 8 | 85 | | **User B** | 15 | 6 | 6 | 86 | 87 | Now user A is preferred to B. Miners must re-rank all pending transactions between each block based on the new basefee. 88 | 89 | This issue compounds with time-varying premiums, as suggested in the [floating escalator](combination.md) for instance. 90 | 91 | Clients must also handle their memory -- by default, I believe, clients only keep around the current 8092 most profitable transactions in their transaction pools. Should a client keep around a currently invalid transaction (one where the current basefee is higher than the maxfee) in the hope that when basefee lowers they will reap a good tip? 92 | 93 | When basefee is high, some high-premium transactions may be submerged. 94 | 95 | | Basefee = 10 | Maxfee | Premium | Tip | 96 | |-|-|-|-| 97 | | **User A** | 9 | 4 | - | 98 | | **User B** | 15 | 3 | 3 | 99 | 100 | But let the tide ebb, and the transaction is now preferred. 101 | 102 | | Basefee = 5 | Maxfee | Premium | Tip | 103 | |-|-|-|-| 104 | | **User A** | 9 | 4 | 4 | 105 | | **User B** | 15 | 3 | 3 | 106 | 107 | With some work, it is likely possible to find a good rule or heuristic that yields a close approximation of the optimum.
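The re-ranking problem can be made concrete with a small sketch, using the tip rule implicit in the tables above, `tip = min(premium, maxfee - basefee)` (the helper names are mine):

```python
def tip(maxfee, premium, basefee):
    # Miner payoff from including the transaction now; None if the
    # transaction is currently invalid (maxfee below basefee).
    if maxfee < basefee:
        return None
    return min(premium, maxfee - basefee)

def rank_pool(pool, basefee):
    # Re-rank pending transactions by current tip, dropping invalid
    # ones; miners must redo this whenever basefee moves.
    tips = [(name, tip(m, p, basefee)) for name, m, p in pool]
    return sorted((x for x in tips if x[1] is not None),
                  key=lambda x: -x[1])

pool = [("A", 10, 8), ("B", 15, 6)]  # (name, maxfee, premium)
print(rank_pool(pool, 5))  # [('B', 6), ('A', 5)]: B preferred
print(rank_pool(pool, 2))  # [('A', 8), ('B', 6)]: A preferred
```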
This is something that we should discuss more with the Nethermind team too as they raised this concern in their 1559 document. 108 | 109 | ## (Nice to have) Equilibrium strategy 110 | 111 | We can take a cue from [Huberman et al.](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3025604) and analyse the transaction fee market as a strategic game of queueing. Assuming all transactions have constant gas requirements, how should we define the game? 112 | 113 | - It is a batched service queue (a round of service includes a maximum of _K_ transactions). Normalise time units such that service happens deterministically each time step ($\mu = 1$). 114 | - There is one server/miner (logically, although practically the server/miner varies between services). 115 | - The server sets a _dynamic_ minimum fee (the basefee $b(t)$), observed by users before deciding whether to enter the queue or balk. 116 | - The dynamic fee depends on the congestion. 117 | 118 | We can use the model of users having some fixed value $v$ for the transaction, and random per-time-unit costs (distributed according to some CDF $F$). A user with per-time-unit cost $c$ served after $w$ time steps at time $t$ who submitted _tip_ $\overline{p}(t) = \min(maxfee - b(t), premium)$ has payoff $v - \overline{p}(t) - c \cdot w$. We look for equilibrium waiting times and strategies. Users come in following a Poisson arrival process of rate $\lambda$ (i.e., during $t$ time units, we expect $t\lambda$ arrivals). 119 | 120 | This differs from the Huberman et al. case since we have a time-varying basefee and thus time-varying premiums. In the Huberman et al. setting, there exists an equilibrium distribution of bids $G$ such that a player bids $p$ and expects payoff $v - p - c \cdot w(p|G)$, where $w(p|G)$ denotes that the waiting time $w$ depends on $p$ given $G$. $G$ is entirely determined by $F$ and $\lambda$. 
121 | 122 | The equivalent of $G$ in EIP 1559 is the distribution over $\overline{p}$ which is what miners consider for inclusion. We look for the following properties: 123 | 124 | - Users with greater costs always offer greater tips, i.e., whenever $c_i \leq c_j$ for two users $i$ and $j$, $\overline{p}_i(t) \leq \overline{p}_j(t)$ for all $t$. In the case where all users propose the same premium, this is true if players with greater costs choose higher $maxfee$. 125 | - An equilibrium basefee $\overline{b}$ given $\lambda$ and $F$. Demand shocks are interpreted as increasing $\lambda$. 126 | -------------------------------------------------------------------------------- /ethdata/extract.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env zsh 2 | 3 | startblock=12965000 4 | endblock=12970000 5 | 6 | ethereumetl export_blocks_and_transactions --start-block $startblock --end-block $endblock \ 7 | --provider-uri http://192.168.0.120:8545 --blocks-output blocks.csv --transactions-output transactions.csv 8 | 9 | ethereumetl extract_csv_column --input transactions.csv --column hash --output transaction_hashes.txt 10 | 11 | ethereumetl export_receipts_and_logs --transaction-hashes transaction_hashes.txt \ 12 | --provider-uri http://192.168.0.120:8545 --receipts-output receipts.csv 13 | 14 | # Blocks 15 | # number,hash,parent_hash,nonce,sha3_uncles,logs_bloom,transactions_root,state_root,receipts_root,miner, (10) 16 | # difficulty,total_difficulty,size,extra_data,gas_limit,gas_used,timestamp,transaction_count,base_fee_per_gas (19) 17 | 18 | # Transactions 19 | # hash,nonce,block_hash,block_number,transaction_index,from_address,to_address,value, (8) 20 | # gas,gas_price,input,block_timestamp,max_fee_per_gas,max_priority_fee_per_gas,transaction_type (15) 21 | 22 | # Receipts 23 | # transaction_hash,transaction_index,block_hash,block_number,cumulative_gas_used,gas_used, (6) 24 | # 
contract_address,root,status,effective_gas_price (10) 25 | 26 | cut -d , -f 1,13,15,16,17,19 blocks.csv > blocks-cut.csv 27 | cut -d , -f 1,6 receipts.csv > receipts-cut.csv 28 | cut -d , -f 1,4,8,9,10,13,14,15 transactions.csv > transactions-cut.csv 29 | 30 | rm blocks.csv 31 | rm receipts.csv 32 | rm transactions.csv 33 | mv blocks-cut.csv data/bxs-$startblock-$endblock.csv 34 | mv receipts-cut.csv data/rxs-$startblock-$endblock.csv 35 | mv transactions-cut.csv data/txs-$startblock-$endblock.csv 36 | -------------------------------------------------------------------------------- /ethdata/notebooks/explore_data.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Exploring blocks, gas and transactions" 3 | description: | 4 | A focus on the recent high gas prices, towards understanding high congestion regimes for EIP 1559. 5 | author: 6 | - name: Barnabé Monnot 7 | url: https://twitter.com/barnabemonnot 8 | affiliation: Robust Incentives Group, Ethereum Foundation 9 | affiliation_url: https://github.com/ethereum/rig 10 | date: "`r Sys.Date()`" 11 | output: 12 | distill::distill_article: 13 | toc: true 14 | toc_depth: 3 15 | --- 16 | 17 | While real-world oil prices went negative, Ethereum gas prices have sustained a long period of high fees since the beginning of May. I wanted to dig in a bit deeper, with a view to understanding the fundamentals of the demand. Some of the charts below retrace steps that are very well-known to a lot of us -- these are mere restatements and updates. The data includes all blocks produced between May 4th, 2020, 13:22:16 UTC and May 19th, 2020, 19:57:17 UTC. 18 | 19 | [Onur Solmaz](https://twitter.com/onurhsolmaz) from Casper Labs wrote [a very nice post](https://solmaz.io/2019/10/21/gas-price-fee-volatility/) arguing that since we observe daily cycles, there must be something more than one-off ICOs and Ponzis at play.
20 | 21 | 24 | 25 | We will see these cycles here too, and a few more questions I thought were interesting (or at least, that I kinda knew the answer to but never had derived or played with myself). This is an excuse to play with my new [DAppNode](https://dappnode.io) full node, using the wonderful [ethereum-etl](https://github.com/blockchain-etl/ethereum-etl) package from Evgeny Medvedev to extract transaction and block details. This data will also be useful to calibrate good simulations for EIP 1559 (more on this soon!) 26 | 27 | ```{r setup, message = FALSE} 28 | library(tidyverse) 29 | library(here) 30 | library(glue) 31 | library(lubridate) 32 | library(forecast) 33 | library(infer) 34 | library(matrixStats) 35 | library(rmarkdown) 36 | library(knitr) 37 | library(skimr) 38 | 39 | options(digits=10) 40 | options(scipen = 999) 41 | 42 | # Make the plots a bit less pixellated 43 | knitr::opts_chunk$set(dpi = 300) 44 | 45 | # A minimal theme I like (zero bonus point for using it though!) 
46 | newtheme <- theme_grey() + theme( 47 | axis.text = element_text(size = 9), 48 | axis.title = element_text(size = 12), 49 | axis.line = element_line(colour = "#000000"), 50 | panel.grid.major = element_blank(), 51 | panel.grid.minor = element_blank(), 52 | panel.background = element_blank(), 53 | legend.title = element_text(size = 12), 54 | legend.text = element_text(size = 10), 55 | legend.box.background = element_blank(), 56 | legend.key = element_blank(), 57 | strip.text.x = element_text(size = 10), 58 | strip.background = element_rect(fill = "white") 59 | ) 60 | theme_set(newtheme) 61 | ``` 62 | 63 | ```{r} 64 | start_block <- 10000001 65 | end_block <- 10100000 66 | suffix <- glue("-", start_block, "-", end_block) 67 | ``` 68 | 69 | ```{r message = FALSE, eval=FALSE} 70 | txs <- read_csv(here::here(glue("data/txs", suffix, ".csv"))) 71 | txs <- txs %>% select(-block_timestamp) 72 | txs %>% glimpse() 73 | ``` 74 | 75 | ```{r message=FALSE, eval=FALSE} 76 | txs_receipts <- txs %>% 77 | left_join( 78 | read_csv(here::here(glue("data/rxs", suffix, ".csv"))), 79 | by = c("hash" = "transaction_hash")) %>% 80 | arrange(block_number) 81 | saveRDS(txs_receipts, here::here(glue("data/txs", suffix, ".rds"))) 82 | ``` 83 | 84 | ```{r message=FALSE, cache=TRUE} 85 | txs_receipts <- readRDS(here::here(glue("data/txs", suffix, ".rds"))) %>% 86 | mutate(gas_fee = gas_price * gas_used) %>% 87 | mutate(gas_price = gas_price / (10 ^ 9), 88 | gas_fee = gas_fee / (10 ^ 9), 89 | value = value / (10 ^ 18)) 90 | ``` 91 | 92 | ```{r message=FALSE, cache=TRUE} 93 | blocks <- read_csv(here::here(glue("data/bxs", suffix, ".csv"))) %>% 94 | mutate(block_date = as_datetime(timestamp), 95 | prop_used = gas_used / gas_limit) %>% 96 | rename(block_number = number) %>% 97 | arrange(block_number) 98 | 99 | gas_prices_per_block <- blocks %>% 100 | select(block_number) %>% 101 | left_join( 102 | txs_receipts %>% 103 | group_by(block_number) %>% 104 | summarise( 105 | min_gas_price = 
min(gas_price), 106 | total_gas_used = sum(gas_used), 107 | avg_gas_price = sum(gas_fee) / total_gas_used, 108 | med_gas_price = weightedMedian(gas_price, w = gas_used), 109 | max_gas_price = max(gas_price) 110 | ) 111 | ) %>% 112 | select(-total_gas_used) 113 | 114 | blocks <- blocks %>% 115 | left_join(gas_prices_per_block) 116 | ``` 117 | 118 | ```{r message=FALSE, cache=TRUE} 119 | date_sample <- interval(ymd("2020-05-13"), ymd("2020-05-20")) 120 | blocks_sample <- blocks %>% 121 | filter(block_date %within% date_sample) 122 | 123 | txs_sample <- txs_receipts %>% 124 | semi_join(blocks_sample) 125 | ``` 126 | 127 | ## Block properties 128 | 129 | ### Gas used by a block 130 | 131 | Miners have some control over the gas limit of a block, but how much gas do blocks generally use? 132 | 133 | ```{r} 134 | blocks %>% 135 | ggplot() + 136 | geom_histogram(aes(x = gas_used), bins = 1000, fill = "steelblue") + 137 | scale_y_log10() 138 | ``` 139 | 140 | There are a few peaks, notably at 0 (the amount of gas used by an empty block) and towards the maximum gas limit set at 10,000,000. Let's zoom in on blocks that use more than 9,800,000 gas. 141 | 142 | ```{r} 143 | blocks %>% 144 | filter(gas_used >= 9.8 * 10^6) %>% 145 | ggplot() + 146 | geom_histogram(aes(x = gas_used), fill = "steelblue") 147 | ``` 148 | 149 | We can also look at the proportion of gas used, i.e., the amount of gas used by the block divided by the total gas available in that block. Taking a moving average over the last 500 blocks, we obtain the following plot. 150 | 151 | ```{r} 152 | blocks_sample %>% 153 | mutate(ma_prop_used = ma(prop_used, 500)) %>% 154 | ggplot() + 155 | geom_line(aes(x = block_date, y = ma_prop_used), colour = "#FED152") + 156 | xlab("Block timestamp") 157 | ``` 158 | 159 | Where does the dip on May 15th come from? Empty blocks? We plot how many empty blocks are found in chunks of 2000 blocks. 
160 | 161 | ```{r} 162 | chunk_size <- 2000 163 | blocks_sample %>% 164 | mutate(block_chunk = block_number %/% chunk_size) %>% 165 | filter(gas_used == 0) %>% 166 | group_by(block_chunk) %>% 167 | summarise(block_date = min(block_date), 168 | `Empty blocks` = n()) %>% 169 | ggplot() + 170 | geom_point(aes(x = block_date, y = 1/2, size = `Empty blocks`), 171 | alpha = 0.3, colour = "steelblue") + 172 | scale_size_area(max_size = 12) + 173 | theme( 174 | axis.line.y = element_blank(), 175 | axis.text.y = element_blank(), 176 | axis.title.y = element_blank(), 177 | axis.ticks.y = element_blank(), 178 | ) + 179 | xlab("Block timestamp") 180 | ``` 181 | 182 | It doesn't seem so. 183 | 184 | ### Relationship between block size and gas used 185 | 186 | Does the block weight (in _gas_) roughly correlate with the block size (in _bytes_)? 187 | 188 | ```{r} 189 | cor.test(blocks$gas_used, blocks$size) 190 | ``` 191 | 192 | It does! But since most blocks have very high `gas_used` anyways, it pays to look a bit more closely. 193 | 194 | ```{r} 195 | blocks %>% 196 | ggplot() + 197 | geom_point(aes(x = gas_used, y = size), alpha = 0.1, colour = "steelblue") + 198 | scale_y_log10() + 199 | xlab("Gas used per block") + 200 | ylab("Block size (in bytes)") 201 | ``` 202 | 203 | We use a logarithmic scale for the y-axis. There is definitely a big spread around the 10 million gas limit. Does the block size correlate with the number of transactions instead then? 
204 | 205 | ```{r cache=TRUE} 206 | blocks_num_txs <- blocks %>% 207 | left_join( 208 | txs_receipts %>% 209 | group_by(block_number) %>% 210 | summarise(n = n()) 211 | ) %>% 212 | replace_na(list(n = 0)) 213 | ``` 214 | 215 | ```{r} 216 | blocks_num_txs %>% 217 | ggplot() + 218 | geom_point(aes(x = n, y = size), alpha = 0.2, colour = "steelblue") + 219 | xlab("Number of transactions per block") + 220 | ylab("Block size (in bytes)") 221 | ``` 222 | 223 | A transaction has a minimum size, if only to include things like the sender and receiver addresses and the other necessary fields. This is why we pretty much only observe values above some diagonal. The largest blocks (in bytes) are not the ones with the most transactions. 224 | 225 | ## Gas prices 226 | 227 | ### Distribution of gas prices 228 | 229 | First, some descriptive stats for the distribution of gas prices. 230 | 231 | ```{r} 232 | quarts = c(0, 0.25, 0.5, 0.75, 1) 233 | tibble( 234 | `Quartile` = quarts, 235 | ) %>% 236 | add_column(`Value` = quantile(txs_receipts$gas_price, quarts)) %>% 237 | kable() 238 | ``` 239 | 240 | 75% of included transactions post a gas price less than or equal to 31 Gwei! Plotting the distribution of gas prices under 2000 Gwei: 241 | 242 | ```{r} 243 | txs_receipts %>% 244 | filter(gas_price <= 2000) %>% 245 | ggplot() + 246 | geom_histogram(aes(x = gas_price), bins = 100, fill = "#F05431") + 247 | scale_y_log10() + 248 | xlab("Gas price (Gwei)") 249 | ``` 250 | 251 | The y-axis is in logarithmic scale. Notice these curious, regular peaks? Turns out people love round numbers (or their wallets do). Let's dig into this. 252 | 253 | ### Do users like default prices? 254 | 255 | How do users set their gas prices? We can make the hypothesis that most rely on some oracle (e.g., the Eth Gas Station or their values appearing as Metamask defaults). We show next the 50 most frequent gas prices (in Gwei) and their frequency among included transactions. 
256 | 257 | ```{r cache=TRUE} 258 | gas_price_freqs <- txs_receipts %>% 259 | group_by(gas_price) %>% 260 | summarise(count = n()) %>% 261 | arrange(-count) %>% 262 | mutate(freq = count / nrow(txs_receipts), cumfreq = cumsum(freq), 263 | `Gas price (Gwei)` = gas_price, 264 | `Gas price (wei)` = round(gas_price * (10 ^ 9), 10)) %>% 265 | mutate(frequency = str_c(round(freq * 100), "%"), cum_freq = str_c(round(cumfreq * 100), "%")) %>% 266 | select(-freq, -cumfreq, -gas_price) 267 | ``` 268 | 269 | ```{r} 270 | paged_table(gas_price_freqs %>% 271 | select(`Gas price (Gwei)`, `Gas price (wei)`, count, frequency, cum_freq) %>% 272 | filter(row_number() <= 50)) 273 | ``` 274 | 275 | Clearly round numbers dominate here! 276 | 277 | ### Evolution of gas prices 278 | 279 | I wanted to see how the gas prices evolve over time. To compute the average gas price in a block, I do a weighted mean using `gas_used` as weight. I then compute the average gas price over 100 blocks by doing another weighted mean using the total gas used in the blocks. 280 | 281 | ```{r} 282 | chunk_size <- 100 283 | blocks_sample %>% 284 | mutate(block_chunk = block_number %/% chunk_size) %>% 285 | replace_na(list( 286 | avg_gas_price = 0, gas_used = 0)) %>% 287 | mutate(block_num = gas_used * avg_gas_price) %>% 288 | group_by(block_chunk) %>% 289 | summarise(avg_prop_used = mean(prop_used), 290 | gas_used_chunk = sum(gas_used), 291 | num_chunk = sum(block_num), 292 | avg_gas_price = num_chunk / gas_used_chunk, 293 | block_date = min(block_date)) %>% 294 | ggplot() + 295 | geom_line(aes(x = block_date, y = avg_gas_price), colour = "#F05431") + 296 | xlab("Block timestamp") 297 | ``` 298 | 299 | We see a daily seasonality, with peaks and troughs corresponding to high congestion and low congestion hours of the day. How does this jive with other series we saw before? We now average over 200 blocks and present a comparison with the series of block proportion used. 
300 | 301 | ```{r} 302 | chunk_size <- 200 303 | blocks_sample %>% 304 | mutate(block_chunk = block_number %/% chunk_size) %>% 305 | replace_na(list( 306 | avg_gas_price = 0, gas_used = 0)) %>% 307 | mutate(block_num = gas_used * avg_gas_price) %>% 308 | group_by(block_chunk) %>% 309 | summarise(gas_limit_chunk = sum(gas_limit), 310 | gas_used_chunk = sum(gas_used), 311 | num_chunk = sum(block_num), 312 | avg_gas_price = num_chunk / gas_used_chunk, 313 | block_date = min(block_date), 314 | prop_used = gas_used_chunk / gas_limit_chunk) %>% 315 | select(block_date, `Proportion used` = prop_used, `Average gas price` = avg_gas_price) %>% 316 | pivot_longer(-block_date, names_to = "Series") %>% 317 | ggplot() + 318 | geom_line(aes(x = block_date, y = value, color = Series)) + 319 | scale_color_manual(values = c("#F05431", "#FED152")) + 320 | facet_grid(rows = vars(Series), scales = "free") + 321 | xlab("Block timestamp") 322 | ``` 323 | 324 | Blocks massively unused right after a price peak? The mystery deepens. 325 | 326 | ### Timestamp difference between blocks 327 | 328 | How much time elapses between two consecutive blocks? Miners are responsible for setting the timestamp, so it's not a perfectly objective value, but good enough! 
329 | 330 | ```{r} 331 | blocks %>% 332 | mutate(time_difference = timestamp - lag(timestamp)) %>% 333 | ggplot() + 334 | geom_histogram(aes(x = time_difference), binwidth = 1, fill = "#BFCE80") 335 | ``` 336 | 337 | ```{r cache=TRUE} 338 | late_blocks <- blocks %>% 339 | mutate(time_difference = timestamp - lag(timestamp), 340 | late_block = time_difference >= 20) %>% 341 | replace_na(list(gas_used = 0, avg_gas_price = 0)) %>% 342 | drop_na() 343 | 344 | mean_diff <- late_blocks %>% 345 | specify(formula = avg_gas_price ~ late_block) %>% 346 | calculate(stat = "diff in means", order = c(TRUE, FALSE)) 347 | ``` 348 | 349 | ```{r cache=TRUE} 350 | null_distribution <- late_blocks %>% 351 | specify(formula = avg_gas_price ~ late_block) %>% 352 | hypothesize(null = "independence") %>% 353 | generate(reps = 500, type = "permute") %>% 354 | calculate(stat = "diff in means", order = c(TRUE, FALSE)) 355 | ``` 356 | 357 | We can do a simple difference-in-means test to check whether the difference between the average gas price of late blocks (timestamp difference of at least 20 seconds) and early blocks (less than 20 seconds) is significant. 358 | 359 | ```{r fig.cap="Mean gas price in \"late\" and \"early\" blocks"} 360 | kable(late_blocks %>% 361 | group_by(late_block) %>% 362 | summarise(avg_gas_price = mean(avg_gas_price))) 363 | ``` 364 | 365 | 374 | -------------------------------------------------------------------------------- /ethdata/notebooks/gas_weather_reports/exploreJuly21.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Gas weather report: July 21st - July 27th" 3 | description: | 4 | Do gas limit increases decrease gas prices?
5 | author: 6 | - name: Barnabé Monnot 7 | url: https://twitter.com/barnabemonnot 8 | affiliation: Robust Incentives Group, Ethereum Foundation 9 | affiliation_url: https://github.com/ethereum/rig 10 | date: "`r Sys.Date()`" 11 | output: 12 | distill::distill_article: 13 | toc: true 14 | toc_depth: 3 15 | --- 16 | 17 | The data includes all blocks produced between July 21st, 2020, 02:00:17 UTC (block 10500001) and July 27th, 2020, 06:37:09 UTC (block 10540000). It was obtained from [Geth](https://geth.ethereum.org/) using a [DAppNode](https://dappnode.io) full node with the wonderful [ethereum-etl](https://github.com/blockchain-etl/ethereum-etl) package from Evgeny Medvedev to extract transaction and block details. 18 | 19 | 22 | 23 | ```{r setup, message = FALSE} 24 | library(tidyverse) 25 | library(here) 26 | library(glue) 27 | library(lubridate) 28 | library(forecast) 29 | library(infer) 30 | library(matrixStats) 31 | library(rmarkdown) 32 | library(knitr) 33 | library(skimr) 34 | 35 | options(digits=10) 36 | options(scipen = 999) 37 | 38 | # Make the plots a bit less pixellated 39 | knitr::opts_chunk$set(dpi = 300) 40 | 41 | # A minimal theme I like (zero bonus point for using it though!) 
42 | newtheme <- theme_grey() + theme( 43 | axis.text = element_text(size = 9), 44 | axis.title = element_text(size = 12), 45 | axis.line = element_line(colour = "#000000"), 46 | panel.grid.major = element_blank(), 47 | panel.grid.minor = element_blank(), 48 | panel.background = element_blank(), 49 | legend.title = element_text(size = 12), 50 | legend.text = element_text(size = 10), 51 | legend.box.background = element_blank(), 52 | legend.key = element_blank(), 53 | strip.text.x = element_text(size = 10), 54 | strip.background = element_rect(fill = "white") 55 | ) 56 | theme_set(newtheme) 57 | ``` 58 | 59 | ```{r} 60 | start_block <- 10500001 61 | end_block <- 10540000 62 | suffix <- glue("-", start_block, "-", end_block) 63 | ``` 64 | 65 | ```{r message = FALSE, eval=FALSE} 66 | txs <- read_csv(here::here(glue("data/txs", suffix, ".csv"))) 67 | txs %>% glimpse() 68 | ``` 69 | 70 | ```{r message=FALSE, eval=FALSE} 71 | txs_receipts <- txs %>% 72 | left_join( 73 | read_csv(here::here(glue("data/rxs", suffix, ".csv"))), 74 | by = c("hash" = "transaction_hash")) %>% 75 | arrange(block_number) 76 | saveRDS(txs_receipts, here::here(glue("data/txs", suffix, ".rds"))) 77 | ``` 78 | 79 | ```{r message=FALSE, cache=TRUE} 80 | txs_receipts <- readRDS(here::here(glue("data/txs", suffix, ".rds"))) %>% 81 | mutate(gas_fee = gas_price * gas_used) %>% 82 | mutate(gas_price = gas_price / (10 ^ 9), 83 | gas_fee = gas_fee / (10 ^ 9), 84 | value = value / (10 ^ 18)) 85 | ``` 86 | 87 | ```{r message=FALSE, cache=TRUE} 88 | blocks <- read_csv(here::here(glue("data/bxs", suffix, ".csv"))) %>% 89 | mutate(block_date = as_datetime(timestamp), 90 | prop_used = gas_used / gas_limit) %>% 91 | rename(block_number = number) %>% 92 | arrange(block_number) 93 | 94 | gas_prices_per_block <- blocks %>% 95 | select(block_number) %>% 96 | left_join( 97 | txs_receipts %>% 98 | group_by(block_number) %>% 99 | summarise( 100 | min_gas_price = min(gas_price), 101 | total_gas_used = sum(gas_used), 102 | 
avg_gas_price = sum(gas_fee) / total_gas_used, 103 | med_gas_price = weightedMedian(gas_price, w = gas_used), 104 | max_gas_price = max(gas_price) 105 | ) 106 | ) %>% 107 | select(-total_gas_used) 108 | 109 | blocks <- blocks %>% 110 | left_join(gas_prices_per_block) 111 | ``` 112 | 113 | ```{r message=FALSE, cache=TRUE} 114 | # To get all blocks 115 | date_sample <- interval(min(blocks$block_date), max(blocks$block_date)) 116 | 117 | # To get a sample 118 | # date_sample <- interval(ymd("2020-05-13"), ymd("2020-05-20")) 119 | 120 | blocks_sample <- blocks %>% 121 | filter(block_date %within% date_sample) 122 | 123 | txs_sample <- txs_receipts %>% 124 | semi_join(blocks_sample) 125 | ``` 126 | 127 | ## Block properties 128 | 129 | ### Gas used by a block 130 | 131 | Miners have some control over the gas limit of a block, but how much gas do blocks generally use? 132 | 133 | ```{r} 134 | blocks %>% 135 | ggplot() + 136 | geom_histogram(aes(x = gas_used), bins = 1000, fill = "steelblue") + 137 | scale_y_log10() + 138 | xlab("Gas used") + 139 | ylab("Number of blocks") 140 | ``` 141 | 142 | The gas limit was increased from the previous limit of 10M gas, and gas used in blocks soon followed. We notice two peaks. Let's zoom in. 143 | 144 | ```{r} 145 | blocks %>% 146 | filter(gas_used >= 10.05 * 10^6) %>% 147 | ggplot() + 148 | geom_histogram(aes(x = gas_used), fill = "steelblue", bins = 60) + 149 | xlab("Gas used") + 150 | ylab("Number of blocks") 151 | ``` 152 | 153 | How did the gas limit evolve over time? 154 | 155 | ```{r} 156 | blocks %>% 157 | ggplot() + 158 | geom_line(aes(x = block_date, y = gas_limit), color = "#FED152") + 159 | xlab("Block date") + 160 | ylab("Gas limit") 161 | ``` 162 | 163 | We have a shift in the middle of the week from about 12M gas limit to 12.5M. Did this release some pressure from transaction fees? 
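The average gas price used throughout this report is gas-weighted: total fees paid in the block divided by total gas consumed, as in the `avg_gas_price = sum(gas_fee) / total_gas_used` column built above. As a minimal sketch of that weighting, in Python rather than the notebook's R, with made-up transaction values:

```python
# Gas-weighted average price: total fees paid divided by total gas used,
# mirroring avg_gas_price = sum(gas_fee) / total_gas_used in the notebook.
# Each tuple is (gas_price_gwei, gas_used); the numbers are invented.
txs = [(20, 21_000), (90, 500_000), (40, 100_000)]

def weighted_avg_gas_price(txs):
    total_fee = sum(price * gas for price, gas in txs)  # sum of per-tx fees
    total_gas = sum(gas for _, gas in txs)              # total gas consumed
    return total_fee / total_gas

# The unweighted mean of the three prices is 50 Gwei; the weighted mean
# leans toward the large 500k-gas transaction priced at 90 Gwei.
print(round(weighted_avg_gas_price(txs), 2))  # 79.58
```

The chunked series later in the report apply the same idea a second time, weighting each block's average by the gas that block used.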
164 | 165 | ## Gas prices 166 | 167 | ### Distribution of gas prices 168 | 169 | First, some descriptive stats for the distribution of gas prices. 170 | 171 | ```{r} 172 | quarts = c(0, 0.25, 0.5, 0.75, 1) 173 | tibble( 174 | `Quartile` = quarts, 175 | ) %>% 176 | add_column(`Value` = quantile(txs_receipts$gas_price, quarts)) %>% 177 | kable() 178 | ``` 179 | 180 | 75% of included transactions post a gas price less than or equal to 90 Gwei. This is much higher than in our last [gas weather report in May](https://ethereum.github.io/rig/ethdata/notebooks/explore_data.html). 181 | 182 | ### Evolution of gas prices 183 | 184 | To compute the average gas price in a block, I do a weighted mean using `gas_used` as weight. I then compute the average gas price over 100 blocks by doing another weighted mean using the total gas used in the blocks. 185 | 186 | ```{r} 187 | chunk_size <- 100 188 | blocks_sample %>% 189 | mutate(block_chunk = block_number %/% chunk_size) %>% 190 | replace_na(list( 191 | avg_gas_price = 0, gas_used = 0)) %>% 192 | mutate(block_num = gas_used * avg_gas_price) %>% 193 | group_by(block_chunk) %>% 194 | summarise(avg_prop_used = mean(prop_used), 195 | gas_used_chunk = sum(gas_used), 196 | num_chunk = sum(block_num), 197 | avg_gas_price = num_chunk / gas_used_chunk, 198 | block_date = min(block_date)) %>% 199 | ggplot() + 200 | geom_line(aes(x = block_date, y = avg_gas_price), colour = "#F05431") + 201 | xlab("Block timestamp") + 202 | ylab("Average gas price") 203 | ``` 204 | 205 | We see a daily seasonality, with peaks and troughs corresponding to high congestion and low congestion hours of the day. 206 | 207 | Did increasing the gas limit reduce the prices overall? We can take a look visually. 
208 | 209 | ```{r} 210 | chunk_size <- 200 211 | blocks_sample %>% 212 | mutate(block_chunk = block_number %/% chunk_size) %>% 213 | replace_na(list( 214 | avg_gas_price = 0, gas_used = 0)) %>% 215 | mutate(block_num = gas_used * avg_gas_price) %>% 216 | group_by(block_chunk) %>% 217 | summarise(gas_limit_chunk = sum(gas_limit), 218 | gas_used_chunk = sum(gas_used), 219 | num_chunk = sum(block_num), 220 | avg_gas_price = num_chunk / gas_used_chunk, 221 | block_date = min(block_date), 222 | prop_used = gas_used_chunk / gas_limit_chunk, 223 | avg_gas_limit_chunk = mean(gas_limit), 224 | avg_gas_used_chunk = mean(gas_used)) %>% 225 | select(block_date, `Gas limit` = avg_gas_limit_chunk, `Average gas price` = avg_gas_price, `Gas used` = avg_gas_used_chunk) %>% 226 | pivot_longer(-block_date, names_to = "Series") %>% 227 | ggplot() + 228 | geom_line(aes(x = block_date, y = value, color = Series)) + 229 | scale_color_manual(values = c("#F05431", "#FED152", "steelblue")) + 230 | facet_grid(rows = vars(Series), scales = "free") + 231 | xlab("Block timestamp") 232 | ``` 233 | 234 | It doesn't seem like it did to me, even though the average gas used in blocks increased in concert with the gas limit. We can look at average prices for transactions in blocks with a gas limit of at most 12.25M gas ("small blocks") vs. blocks with a gas limit greater than 12.25M ("big blocks"). 235 | 236 | ```{r cache=TRUE} 237 | big_blocks <- blocks %>% 238 | mutate(big_block = if_else(gas_limit > 12.25 * 10^6, "Big block", "Small block")) %>% 239 | replace_na(list(gas_used = 0, avg_gas_price = 0)) %>% 240 | drop_na() 241 | ``` 242 | 243 | ```{r fig.cap="Mean gas price in \"big\" and \"small\" blocks"} 244 | kable(big_blocks %>% 245 | group_by(big_block) %>% 246 | summarise(avg_gas_price = mean(avg_gas_price))) 247 | ``` 248 | 249 | The two averages are mighty close to each other, with big blocks posting even slightly higher gas prices than small ones (a negligible difference, however).
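One way to check that this difference really is negligible is the same permutation test used in the May report (the `specify`/`hypothesize`/`generate`/`calculate` workflow from `infer`). Below is a minimal Python sketch of that test; the per-block gas prices are synthetic stand-ins, not the real `big_blocks` data:

```python
# Permutation (difference-in-means) test, sketched in Python for illustration.
# Under the null, group labels are exchangeable, so we shuffle the pooled
# values, re-split, and see how often a difference as extreme as the
# observed one arises by chance. All numbers below are made up.
import random

random.seed(42)
big = [72.0, 68.5, 75.2, 70.1, 69.8]    # avg gas prices, "big" blocks
small = [69.0, 71.3, 70.4, 68.9, 70.6]  # avg gas prices, "small" blocks

def diff_in_means(a, b):
    return sum(a) / len(a) - sum(b) / len(b)

observed = diff_in_means(big, small)

# Build the null distribution by permuting group membership.
pooled = big + small
null_diffs = []
for _ in range(5000):
    random.shuffle(pooled)
    null_diffs.append(diff_in_means(pooled[:len(big)], pooled[len(big):]))

# Two-sided p-value: share of permuted diffs at least as extreme as observed.
p_value = sum(abs(d) >= abs(observed) for d in null_diffs) / len(null_diffs)
print(observed, p_value)
```

With the real data one would feed each group's `avg_gas_price` values in; a p-value well above the usual 0.05 threshold would back up the visual impression that the gap is noise.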
250 | 251 | 254 | 255 | 256 | 257 | 258 | 259 | 260 | -------------------------------------------------------------------------------- /index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | Redirecting... 7 | 8 | 9 |

If you are not redirected automatically, click here.

10 | 11 | 12 | -------------------------------------------------------------------------------- /posdata/README.md: -------------------------------------------------------------------------------- 1 | - [Exploring the first 1000 epochs](https://ethereum.github.io/rig/posdata/notebooks/mainnet_explore.html) 2 | 3 | - [Shyam Sridhar's Beacon Digest](https://shsr2001.github.io/beacondigest) 4 | -------------------------------------------------------------------------------- /posdata/default.html5: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | $for(author-meta)$ 8 | 9 | $endfor$ 10 | $if(date-meta)$ 11 | 12 | $endif$ 13 | $if(keywords)$ 14 | 15 | $endif$ 16 | $if(description-meta)$ 17 | 18 | $endif$ 19 | $if(title-prefix)$$title-prefix$ – $endif$$pagetitle$ 20 | $for(css)$ 21 | 22 | $endfor$ 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | $if(math)$ 32 | $math$ 33 | $endif$ 34 | 37 | 38 | 39 | 40 | 41 | 42 | 43 | 44 | $for(header-includes)$ 45 | $header-includes$ 46 | $endfor$ 47 | 48 | 49 | 50 | $for(include-before)$ 51 | $include-before$ 52 | $endfor$ 53 |
54 | 69 |
70 |
71 |
72 | $if(title)$ 73 |
74 | $title$ 75 |
76 | $endif$ 77 | 78 | $if(subtitle)$ 79 |
80 | $subtitle$ 81 |
82 | $endif$ 83 |
84 | $for(author)$ 85 |

$author$

86 | $endfor$ 87 | $if(date)$ 88 |

$date$

89 | $endif$ 90 | $if(toc)$ 91 | 97 | $endif$ 98 | $body$ 99 | $for(include-after)$ 100 | $include-after$ 101 | $endfor$ 102 | 103 | 104 | -------------------------------------------------------------------------------- /posdata/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | Data analysis of Proof-of-Stake 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 |
30 | 45 |
46 |
47 |
48 |
49 | Data analysis of Proof-of-Stake 50 |
51 | 52 |
53 | 57 | 58 | 59 | -------------------------------------------------------------------------------- /posdata/notebooks/lib.R: -------------------------------------------------------------------------------- 1 | library(BMS) 2 | library(tidyverse) 3 | library(data.table) 4 | library(httr) 5 | library(jsonlite) 6 | library(vroom) 7 | library(zoo) 8 | library(lubridate) 9 | library(microbenchmark) 10 | 11 | options(digits=10) 12 | options(scipen = 999) 13 | 14 | slots_per_epoch <- 32 15 | medalla_genesis <- 1596546008 16 | pyrmont_genesis <- 1605700800 17 | mainnet_genesis <- 1606824023 18 | 19 | # Source default_url 20 | source(here::here("scripts/default_url.R")) 21 | 22 | get_date_from_epoch <- function(epoch, testnet="mainnet") { 23 | if (testnet == "mainnet") { 24 | genesis <- mainnet_genesis 25 | } else if (testnet == "pyrmont") { 26 | genesis <- pyrmont_genesis 27 | } else { 28 | genesis <- medalla_genesis 29 | } 30 | return(as_datetime(epoch * slots_per_epoch * 12 + genesis)) 31 | } 32 | 33 | get_epoch_from_timestamp <- function(timestamp, testnet="mainnet") { 34 | if (testnet == "mainnet") { 35 | genesis <- mainnet_genesis 36 | } else if (testnet == "pyrmont") { 37 | genesis <- pyrmont_genesis 38 | } else { 39 | genesis <- medalla_genesis 40 | } 41 | return((timestamp - genesis) / (slots_per_epoch * 12)) 42 | } 43 | 44 | write_all_ats <- function() { 45 | (1:429) %>% 46 | map(function(b) { fread(here::here(str_c("data/attestations_", b, ".csv"))) }) %>% 47 | rbindlist() %>% 48 | fwrite(here::here("rds_data/all_ats.csv")) 49 | } 50 | 51 | write_all_bxs <- function() { 52 | (1:429) %>% 53 | map(function(b) { fread(here::here(str_c("data/blocks_", b, ".csv"))) }) %>% 54 | rbindlist() %>% 55 | fwrite(here::here("rds_data/all_bxs.csv")) 56 | } 57 | 58 | add_bxs <- function(all_bxs, start_epoch, end_epoch) { 59 | list(all_bxs %>% 60 | .[slot < start_epoch * slots_per_epoch], 61 | start_epoch:end_epoch %>% 62 | map(function(epoch) { get_blocks(epoch) }) %>% 63 | 
rbindlist() 64 | ) %>% 65 | rbindlist() 66 | } 67 | 68 | add_ats <- function(all_ats, start_epoch, end_epoch) { 69 | list(all_ats %>% 70 | .[slot < start_epoch * slots_per_epoch], 71 | start_epoch:end_epoch %>% 72 | map(function(epoch) { get_attestations(epoch) }) %>% 73 | rbindlist() 74 | ) %>% 75 | rbindlist() 76 | } 77 | 78 | add_bxs_and_ats <- function(start_epoch, end_epoch) { 79 | start_epoch:end_epoch %>% 80 | map(function(epoch) { get_blocks_and_attestations(epoch) }) %>% 81 | reduce(function(a, b) list( 82 | blocks = list(a$blocks, b$blocks) %>% rbindlist(), 83 | attestations = list(a$attestations, b$attestations) %>% rbindlist() 84 | )) 85 | } 86 | 87 | add_committees <- function(start_epoch, end_epoch) { 88 | start_epoch:end_epoch %>% 89 | map(function(epoch) { get_committees(epoch) }) %>% 90 | rbindlist() 91 | } 92 | 93 | batch_ops <- function(df, fn, filter_fn = NULL, batch_size = 1e4) { 94 | batches <- nrow(df) %/% batch_size 95 | (0:batches) %>% 96 | map( 97 | function(batch) { 98 | print(str_c(batch, " batch out of ", batches, " batches")) 99 | if (is.null(filter_fn)) { 100 | df %>% 101 | filter( 102 | row_number() >= batch * batch_size, row_number() < (batch + 1) * batch_size 103 | ) %>% 104 | fn() 105 | } else { 106 | df %>% 107 | filter_fn(batch, batches) %>% 108 | fn() 109 | } 110 | }) %>% 111 | bind_rows() %>% 112 | return() 113 | } 114 | 115 | batch_ops_per_slot <- function(df, fn, from_slot=0, to_slot=1e6, batch_size = 1e2) { 116 | batches <- (to_slot - from_slot) / batch_size 117 | print(str_c(batches, " batches")) 118 | (0:(batches-1)) %>% 119 | map(function(batch) { 120 | print(str_c("Batch ", batch, " out of ", batches, " batches, from slot ", 121 | from_slot + batch * batch_size, " to slot ", from_slot + (batch + 1) * batch_size)) 122 | t <- df %>% 123 | filter(slot >= from_slot + batch * batch_size, 124 | slot < pmin(from_slot + (batch + 1) * batch_size, to_slot)) %>% 125 | fn() 126 | t %>% fwrite(here::here(str_c("rds_data/temp_", 
batch, ".csv"))) 127 | t %>% return() 128 | }) %>% 129 | bind_rows() %>% 130 | return() 131 | } 132 | 133 | batch_ops_ats <- function(fn, dataset = "individual") { 134 | (0:387) %>% 135 | map( 136 | function(batch) { 137 | print(str_c("batch ", batch, " out of 387")) 138 | fread(here::here(str_c("rds_data/", dataset, "_ats_", batch, ".csv"))) %>% 139 | fn() 140 | } 141 | ) %>% 142 | rbindlist() 143 | } 144 | 145 | test_ops_ats <- function(fn, dataset = "individual") { 146 | fread(here::here(str_c("rds_data/", dataset, "_ats_", 0, ".csv"))) %>% 147 | fn() 148 | } 149 | 150 | get_committees <- function(epoch, url=default_url) { 151 | print(str_c("Getting committee of epoch ", epoch, "\n")) 152 | content(GET(str_c(url, "/eth/v1/beacon/states/", 153 | epoch * slots_per_epoch, "/committees"), accept_json()))$data %>% 154 | rbindlist() %>% 155 | .[,.(att_slot = as.numeric(slot), 156 | committee_index = as.numeric(index), 157 | validator_index = as.numeric(validators), 158 | index_in_committee = rowid(slot, index) - 1)] 159 | } 160 | 161 | get_validators <- function(epoch, url=default_url) { 162 | t <- (content(GET(str_c(url, "/eth/v1/beacon/states/", 163 | epoch * slots_per_epoch, "/validators")), as="text") %>% 164 | fromJSON())$data 165 | cbind(t[c("index", "balance")], t$validator) %>% 166 | select(validator_index = index, balance, effective_balance, slashed, activation_epoch, exit_epoch, pubkey) %>% 167 | mutate(across(-any_of(c("pubkey")), as.numeric)) %>% 168 | mutate(time_active = pmin(exit_epoch, epoch) - pmin(epoch, activation_epoch)) %>% 169 | as.data.table() 170 | } 171 | 172 | get_balances_active_validators <- function(epoch) { 173 | get_validators(epoch)[ 174 | time_active > 0 & exit_epoch > epoch, 175 | .(validator_index, balance, time_active, activation_epoch) 176 | ] 177 | } 178 | 179 | decode_aggregation_bits <- function(ab) { 180 | gsub('(..)(?!$)', '\\1,', substring(ab, 3), perl=TRUE) %>% 181 | str_split(",") %>% 182 | pluck(1) %>% 183 | 
lapply(function(d) { rev(hex2bin(d)) }) %>% 184 | unlist() %>% 185 | str_c(collapse = "") 186 | } 187 | 188 | get_attestations_in_slot <- function(slot) { 189 | get_block_and_attestations_at_slot(slot)$attestations 190 | } 191 | 192 | get_attestations <- function(epoch) { 193 | print(str_c("Getting attestations for epoch ", epoch)) 194 | start_slot <- epoch * slots_per_epoch 195 | end_slot <- (epoch + 1) * slots_per_epoch - 1 196 | start_slot:end_slot %>% 197 | map(get_attestations_in_slot) %>% 198 | rbindlist() 199 | } 200 | 201 | get_exploded_ats <- function(t) { 202 | t[, agg_index := .I] 203 | t <- t[, .(attested = as.numeric(unlist(strsplit(attesting_indices, "")))), by=setdiff(names(t), "attesting_indices")] 204 | t[, index_in_committee := rowid(agg_index) - 1] 205 | return(t[attested == 1, -c("agg_index", "attested")]) 206 | } 207 | 208 | hex2string <- function(string) { 209 | intToUtf8( 210 | strtoi( 211 | do.call( 212 | paste0, 213 | as.data.frame( 214 | matrix( 215 | strsplit(substring(string, 3), split = "")[[1]], 216 | ncol=2, 217 | byrow=TRUE), 218 | stringsAsFactors=FALSE)), 219 | base=16L) 220 | ) 221 | } 222 | 223 | find_client <- function(graffiti) { 224 | case_when( 225 | (str_starts(graffiti, "poap") & str_ends(graffiti, "a")) | 226 | str_detect(graffiti, "prysm") | graffiti == "" ~ "prysm", 227 | (str_starts(graffiti, "poap") & str_ends(graffiti, "b")) | 228 | str_detect(graffiti, "lighthouse") ~ "lighthouse", 229 | (str_starts(graffiti, "poap") & str_ends(graffiti, "c")) | 230 | str_detect(graffiti, "teku") ~ "teku", 231 | (str_starts(graffiti, "poap") & str_ends(graffiti, "d")) | 232 | str_detect(graffiti, "nimbus") ~ "nimbus", 233 | (str_starts(graffiti, "poap") & str_ends(graffiti, "e")) | 234 | str_detect(graffiti, "lodestar") ~ "lodestar", 235 | TRUE ~ "undecided" 236 | ) 237 | } 238 | 239 | get_block_at_slot <- function(slot, url=default_url, sync_committee = FALSE) { 240 | get_block_and_attestations_at_slot(slot, url, sync_committee)$block
241 | } 242 | 243 | get_block_and_attestations_at_slot <- function(slot, url=default_url, sync_committee = FALSE) { 244 | # print(str_c("Blocks and attestations of slot ", slot, "\n")) 245 | block <- content(GET(str_c(url, "/eth/v1/beacon/blocks/", slot), accept_json()))$data$message 246 | 247 | 248 | if (is.null(block) || as.numeric(block$slot) != slot) { 249 | return(NULL) 250 | } 251 | 252 | if (length(block$body$attestations) == 0) { 253 | attestations <- NULL 254 | } else { 255 | attestations <- block$body$attestations %>% 256 | plyr::ldply(data.frame) %>% 257 | rowwise() %>% 258 | mutate(attesting_indices = decode_aggregation_bits(aggregation_bits)) %>% 259 | ungroup() %>% 260 | mutate(committee_index = as.numeric(data.index), 261 | att_slot = as.numeric(data.slot), slot = slot, 262 | beacon_block_root = str_trunc(data.beacon_block_root, 12, "left", ellipsis = ""), 263 | source_block_root = str_trunc(data.source.root, 12, "left", ellipsis = ""), 264 | target_block_root = str_trunc(data.target.root, 12, "left", ellipsis = ""), 265 | ) %>% 266 | select(slot, att_slot, committee_index, 267 | beacon_block_root, 268 | attesting_indices, source_epoch = data.source.epoch, 269 | source_block_root, 270 | target_epoch = data.target.epoch, 271 | target_block_root) 272 | setDT(attestations) 273 | } 274 | 275 | block_root <- content(GET(str_c(url, "/eth/v1/beacon/blocks/", slot, "/root")))$data$root 276 | 277 | block_ret <- tibble( 278 | block_root = str_trunc(block_root, 12, "left", ellipsis = ""), 279 | parent_root = str_trunc(block$parent_root, 12, "left", ellipsis = ""), 280 | state_root = str_trunc(block$state_root, 12, "left", ellipsis = ""), 281 | slot = slot, 282 | proposer_index = as.numeric(block$proposer_index), 283 | graffiti = tolower(hex2string(block$body$graffiti)), 284 | ) 285 | if (sync_committee) { 286 | block_ret$sync_committee <-
decode_aggregation_bits(block$body$sync_aggregate$sync_committee_bits) 287 | } 288 | setDT(block_ret) 289 | 290 | list( 291 | block = block_ret, 292 | attestations = attestations 293 | ) 294 | } 295 | 296 | get_sync_committee <- function(epoch, url=default_url) { 297 | scs <- content(GET(str_c(url, "/eth/v1/beacon/states/", 298 | epoch * 32, "/sync_committees"), accept_json()))$data 299 | tibble( 300 | epoch = epoch, 301 | validator_index = scs$validators %>% unlist() %>% as.integer() 302 | ) %>% 303 | mutate(index_in_sync_committee = row_number() - 1) %>% 304 | return() 305 | } 306 | 307 | get_exploded_sync_block <- function(t) { 308 | t[, bxs_index := .I] 309 | t <- t[, .(sync_committeed = as.numeric(unlist(strsplit(sync_committee, "")))), 310 | by=setdiff(names(t), "sync_committee")] 311 | t[, index_in_sync_committee := rowid(bxs_index) - 1] 312 | return(t[, -c("bxs_index")]) 313 | } 314 | 315 | get_blocks_and_attestations <- function(epoch, url=default_url, sync_committee=FALSE) { 316 | print(str_c("Blocks and attestations of epoch ", epoch, "\n")) 317 | start_slot <- epoch * slots_per_epoch 318 | end_slot <- (epoch + 1) * slots_per_epoch - 1 319 | start_slot:end_slot %>% 320 | map(function(epoch) { get_block_and_attestations_at_slot(epoch, url, sync_committee) }) %>% 321 | keep(is.list) %>% 322 | purrr::transpose() %>% 323 | map(rbindlist) 324 | } 325 | 326 | get_blocks <- function(epoch, sync_committee = FALSE) { 327 | print(str_c("Getting blocks of epoch ", epoch)) 328 | start_slot <- epoch * slots_per_epoch 329 | end_slot <- (epoch + 1) * slots_per_epoch - 1 330 | start_slot:end_slot %>% 331 | map(function(slot) { get_block_at_slot(slot, sync_committee=sync_committee) }) %>% 332 | rbindlist() 333 | } 334 | 335 | get_block_root_at_slot <- function(all_bxs) { 336 | tibble( 337 | slot = min(all_bxs$slot):max(all_bxs$slot) 338 | ) %>% 339 | left_join( 340 | all_bxs %>% select(slot, block_root), 341 | by = c("slot" = "slot") 342 | ) %>% 343 | mutate(block_root = 
.$block_root %>% na.locf()) %>% 344 | as.data.table() 345 | } 346 | 347 | get_first_possible_inclusion_slot <- function(all_bxs) { 348 | tibble( 349 | slot = min(all_bxs$slot):max(all_bxs$slot) 350 | ) %>% 351 | left_join( 352 | all_bxs %>% 353 | mutate(block_slot = slot) %>% 354 | select(slot, block_slot), 355 | by = c("slot" = "slot") 356 | ) %>% 357 | mutate(block_slot = .$block_slot %>% na.locf(fromLast = TRUE)) %>% 358 | as.data.table() 359 | } 360 | 361 | get_correctness_data <- function(t, block_root_at_slot) { 362 | t[, epoch := att_slot %/% slots_per_epoch] 363 | t[, epoch_slot := epoch * slots_per_epoch] 364 | t[block_root_at_slot, on=c("epoch_slot" = "slot", "target_block_root" = "block_root"), correct_target:=1] 365 | t[block_root_at_slot, on=c("att_slot" = "slot", "beacon_block_root" = "block_root"), correct_head:=1] 366 | setnafill(t, "const", 0, cols=c("correct_target", "correct_head")) 367 | t[, `:=`(epoch = NULL, epoch_slot = NULL)] 368 | } 369 | 370 | get_stats_per_val <- function(all_ats, block_root_at_slot, first_possible_inclusion_slot, 371 | committees = NULL, validators = NULL, chunk_size = 10, url=default_url) { 372 | min_epoch <- min(all_ats$att_slot) %/% 32 373 | max_epoch <- max(all_ats$att_slot) %/% 32 374 | print(str_c("Min epoch ", min_epoch, ", max epoch ", max_epoch)) 375 | seq(min_epoch, (max_epoch+1) - chunk_size, chunk_size) %>% 376 | map(function(epoch) { 377 | print(str_c("Epoch ", epoch)) 378 | if (is.null(committees)) { 379 | committees <- epoch:(epoch + chunk_size - 1) %>% 380 | map(function(epoch) { get_committees(epoch, url) }) %>% 381 | rbindlist() 382 | } 383 | 384 | if (is.null(validators)) { 385 | validators <- get_validators(epoch + chunk_size - 1, url)[ 386 | (time_active > 0 & exit_epoch > epoch), .(validator_index, time_active, exit_epoch, balance) 387 | ] 388 | } 389 | 390 | t <- copy(all_ats[(att_slot >= epoch * slots_per_epoch) & (att_slot < (epoch + chunk_size) * slots_per_epoch)]) 391 | t[, att_slot_plus := 
att_slot + 1] 392 | t <- t[first_possible_inclusion_slot, on=c("att_slot_plus" = "slot"), nomatch=NULL] 393 | t <- get_exploded_ats(t) 394 | t[, .(inclusion_delay=min(slot)-att_slot, inclusion_delay_by_block=min(slot) - block_slot), 395 | by=.(att_slot, block_slot, committee_index, index_in_committee, correct_target, correct_head)] %>% 396 | .[committees, on=c("att_slot", "committee_index", "index_in_committee"), nomatch=NULL] %>% 397 | .[, .(included_ats=.N, 398 | correct_targets=sum(correct_target), correct_heads=sum(correct_head), 399 | inclusion_delay=mean(inclusion_delay), 400 | inclusion_delay_by_block=mean(inclusion_delay_by_block)), 401 | by=validator_index 402 | ] %>% 403 | .[validators[, .(validator_index, time_active, balance)], on=c("validator_index")] %>% 404 | setnafill("const", 0, cols=c("included_ats", "correct_targets", "correct_heads")) %>% 405 | .[, .(validator_index, epoch = epoch + chunk_size, expected_ats=chunk_size, 406 | included_ats, correct_targets, correct_heads, 407 | inclusion_delay, inclusion_delay_by_block, balance)] 408 | }) %>% 409 | rbindlist() 410 | } 411 | 412 | get_stats_per_slot <- function(all_ats, committees, chunk_size = 100) { 413 | expected_ats <- committees[, .(expected_ats = .N), by=att_slot] 414 | min_epoch <- min(all_ats$att_slot) %/% 32 415 | max_epoch <- max(all_ats$att_slot) %/% 32 416 | print(str_c("Min epoch ", min_epoch, ", max epoch ", max_epoch)) 417 | seq(min_epoch, max_epoch-1, chunk_size) %>% 418 | map(function(epoch) { 419 | print(str_c("Epoch ", epoch)) 420 | t <- copy(all_ats[(att_slot >= epoch * slots_per_epoch) & (att_slot < ((epoch + chunk_size) * slots_per_epoch))]) 421 | t <- get_exploded_ats(t) 422 | t[, .SD[which.min(slot)], by = .(att_slot, committee_index, index_in_committee)] %>% 423 | .[, .(att_slot, committee_index, index_in_committee, correct_target, correct_head)] %>% 424 | unique() %>% 425 | inner_join(committees) %>% 426 | .[, .(included_ats = .N, 427 | correct_targets = 
sum(correct_target), 428 | correct_heads = sum(correct_head)), by=att_slot] %>% 429 | merge(expected_ats) 430 | }) %>% 431 | rbindlist() 432 | } 433 | 434 | # bxs_per_client <- all_bxs[ 435 | # validators[team == "ef", .(validator_index, client)], 436 | # on=c("proposer_index" = "validator_index"), 437 | # nomatch=NULL, 438 | # .(slot, producer_client = client) 439 | # ] 440 | # t <- all_ats[ 441 | # slot >= 1800 * 32 & slot < 1890 * 32 & slot == att_slot+1, 442 | # ] %>% 443 | # get_exploded_ats() %>% 444 | # .[committees, on=c("att_slot", "committee_index", "index_in_committee"), nomatch=NULL] %>% 445 | # .[validators[team != "ef", .(validator_index, client)], on=c("validator_index"), nomatch=NULL] %>% 446 | # .[, .(att_slot, slot, validator_index, client)] %>% 447 | # unique() %>% 448 | # .[bxs_per_client, on=c("slot"), nomatch=NULL] %>% 449 | # .[, (n_attester_client=.N), by=.(slot, producer_client, client)] 450 | # 451 | # ls <- c("lighthouse", "nimbus", "prysm", "teku") 452 | # t %>% 453 | # .[, (avg=mean(V1)), by=.(producer_client, client)] %>% 454 | # mutate(producer_client = factor(producer_client, c("lighthouse", "prysm", "nimbus", "teku")), 455 | # client = factor(client, c("lighthouse", "prysm", "teku", "nimbus"))) %>% 456 | # ggplot() + 457 | # geom_tile(aes(x = producer_client, y = client, fill = V1)) + 458 | # geom_text(aes(x = producer_client, y = client, label = round(V1)), color = "white") + 459 | # scale_fill_viridis_c() + 460 | # xlab("EF block producer") + 461 | # ylab("First attesters, sans EF") 462 | 463 | get_appearances_in_agg <- function(all_ats, chunk_size = 100) { 464 | min_epoch <- min(all_ats$att_slot) %/% 32 465 | max_epoch <- max(all_ats$att_slot) %/% 32 466 | seq(min_epoch, max_epoch - chunk_size, chunk_size) %>% 467 | map(function(epoch) { 468 | # print(str_c("Epoch ", epoch)) 469 | t <- copy(all_ats[(att_slot >= epoch * slots_per_epoch) & (att_slot < ((epoch + chunk_size) * slots_per_epoch - 1))]) 470 | t <- get_exploded_ats(t) 
471 | t[, .(appearances=.N), 472 | by=.(att_slot, committee_index, index_in_committee, 473 | beacon_block_root, source_block_root, target_block_root)] %>% 474 | .[, .(count=.N), by=appearances] 475 | }) %>% 476 | rbindlist() %>% 477 | .[, .(count=sum(count)), by=.(appearances)] 478 | } 479 | 480 | get_myopic_redundant_ats <- function(all_ats) { 481 | all_ats %>% 482 | .[, .(appearances=.N), 483 | by=.(att_slot, committee_index, 484 | beacon_block_root, source_block_root, target_block_root, attesting_indices)] %>% 485 | .[, .(count=.N), by=.(appearances)] 486 | } 487 | 488 | get_myopic_redundant_ats_detail <- function(all_ats) { 489 | all_ats[ 490 | all_ats[ 491 | , .(min_slot=min(slot), appearances=.N), 492 | by=.(att_slot, committee_index, beacon_block_root, source_block_root, target_block_root, attesting_indices) 493 | ][ 494 | appearances > 1 495 | ], 496 | on = c("att_slot", "committee_index", "beacon_block_root", "source_block_root", "target_block_root", "attesting_indices") 497 | ][ 498 | slot != min_slot, .(n_myopic_redundant = .N), by = slot 499 | ] 500 | } 501 | 502 | get_redundant_ats <- function(all_ats) { 503 | t <- copy(all_ats) 504 | t[, ats_index:=.I] 505 | t <- get_exploded_ats(t) 506 | first_inclusions <- t[, .(first_inclusion=min(slot)), 507 | by=.(att_slot, committee_index, index_in_committee, beacon_block_root, source_block_root, target_block_root)] 508 | t[ 509 | first_inclusions, 510 | on=c("att_slot", "committee_index", "index_in_committee", 511 | "beacon_block_root", "source_block_root", "target_block_root") 512 | ][ 513 | , .(max_inclusion_slot=max(first_inclusion)), 514 | by=.(slot, ats_index) 515 | ][ 516 | slot > max_inclusion_slot, .(n_redundant = .N), by = slot 517 | ] 518 | } 519 | 520 | get_strong_redundant_ats <- function(all_ats) { 521 | all_ats %>% 522 | .[, .(appearances=.N), 523 | by=.(slot, att_slot, committee_index, 524 | beacon_block_root, source_block_root, target_block_root, attesting_indices)] %>% 525 | .[, .(count=.N), 
by=.(appearances)] 526 | } 527 | 528 | ### Subset and clashing attestations 529 | 530 | # Two attestations, I and J 531 | # Strongly redundant: I = J => Should drop one of the two 532 | # If not: Subset: I \subset J or J \subset I => Should drop the smaller one 533 | # If not: Strongly clashing: I \cap J \neq \emptyset => Cannot aggregate 534 | # If not: Weakly clashing => Can aggregate 535 | 536 | compare_ats <- function(bunch) { 537 | if (nrow(bunch) == 1) { 538 | return(NULL) 539 | } 540 | 541 | t <- bunch %>% 542 | pull(attesting_indices) %>% 543 | str_extract_all("[01]") %>% 544 | map(strtoi) %>% 545 | map(as.logical) %>% 546 | tibble(indices = .) 547 | 548 | t %>% 549 | mutate(group = 1, 550 | agg_index = row_number()) %>% 551 | full_join(t %>% 552 | mutate(group = 1, 553 | agg_index = row_number()), 554 | by = c("group" = "group")) %>% 555 | filter(agg_index.x < agg_index.y) %>% 556 | rowwise() %>% 557 | mutate(or_op = list(indices.x | indices.y), 558 | are_same = identical(indices.x, indices.y), 559 | x_in_y = identical(indices.y, or_op) & !are_same, 560 | y_in_x = identical(indices.x, or_op) & !are_same, 561 | and_op = list(indices.x & indices.y), 562 | intersection_empty = (sum(and_op) == 0), 563 | subset_is_individual = case_when( 564 | x_in_y & sum(indices.x) == 1 ~ TRUE, 565 | y_in_x & sum(indices.y) == 1 ~ TRUE, 566 | TRUE ~ FALSE 567 | ), 568 | strongly_clashing = (!intersection_empty & !x_in_y & !y_in_x), 569 | weakly_clashing = (intersection_empty & !x_in_y & !y_in_x)) %>% 570 | select(agg_index.x, agg_index.y, intersection_empty, are_same, subset_is_individual, 571 | x_in_y, y_in_x, strongly_clashing, weakly_clashing) 572 | } 573 | 574 | count_subset <- function(si) { 575 | if (is.null(si)) { 576 | return(0) 577 | } 578 | 579 | si %>% 580 | pull(x_in_y) %>% 581 | sum() + 582 | si %>% 583 | pull(y_in_x) %>% 584 | sum() %>% 585 | return() 586 | } 587 | 588 | count_subset_ind <- function(si) { 589 | if (is.null(si)) { 590 | return(0) 591 | } 592 | 
593 | si %>% 594 | pull(subset_is_individual) %>% 595 | sum() %>% 596 | return() 597 | } 598 | 599 | count_strongly_clashing <- function(si) { 600 | if (is.null(si)) { 601 | return(0) 602 | } 603 | 604 | si %>% 605 | filter(strongly_clashing) %>% 606 | select(agg_index = agg_index.x) %>% 607 | union( 608 | si %>% 609 | filter(strongly_clashing) %>% 610 | select(agg_index = agg_index.y) 611 | ) %>% 612 | distinct() %>% 613 | nrow() %>% 614 | return() 615 | } 616 | 617 | count_weakly_clashing <- function(si) { 618 | if (is.null(si)) { 619 | return(0) 620 | } 621 | 622 | subset_ags <- si %>% 623 | filter(x_in_y) %>% 624 | select(agg_index = agg_index.x) %>% 625 | union( 626 | si %>% 627 | filter(y_in_x) %>% 628 | select(agg_index = agg_index.y) 629 | ) %>% 630 | distinct() %>% 631 | pull(agg_index) 632 | 633 | strongly_clashing_ags <- si %>% 634 | filter(strongly_clashing) %>% 635 | select(agg_index = agg_index.x) %>% 636 | union( 637 | si %>% 638 | filter(strongly_clashing) %>% 639 | select(agg_index = agg_index.y) 640 | ) %>% 641 | distinct() %>% 642 | pull(agg_index) 643 | 644 | si %>% 645 | mutate(agg_index = agg_index.x) %>% 646 | select(agg_index, weakly_clashing) %>% 647 | union(si %>% 648 | mutate(agg_index = agg_index.y) %>% 649 | select(agg_index, weakly_clashing)) %>% 650 | filter(!(agg_index %in% subset_ags), !(agg_index %in% strongly_clashing_ags)) %>% 651 | distinct() %>% 652 | pull(weakly_clashing) %>% 653 | sum() %>% 654 | return() 655 | } 656 | 657 | get_aggregate_info <- function(all_ats) { 658 | if (nrow(all_ats) == 0) { 659 | return(NULL) 660 | } 661 | 662 | all_ats %>% 663 | group_by(slot, att_slot, committee_index, 664 | beacon_block_root, source_block_root, target_block_root) %>% 665 | nest() %>% 666 | mutate(includes = map(data, compare_ats), 667 | n_subset = map(includes, count_subset) %>% unlist(), 668 | n_subset_ind = map(includes, count_subset_ind) %>% unlist() 669 | # n_strongly_clashing = map(includes, count_strongly_clashing) %>% 
unlist(), 670 | # n_weakly_clashing = map(includes, count_weakly_clashing) %>% unlist() 671 | ) %>% 672 | filter(n_subset > 0) %>% # clashing counts are commented out above, so only filter on n_subset 673 | select(slot, att_slot, committee_index, beacon_block_root, 674 | source_block_root, target_block_root, 675 | n_subset, n_subset_ind 676 | # n_strongly_clashing, n_weakly_clashing 677 | ) 678 | } -------------------------------------------------------------------------------- /posdata/notebooks/mainnet_compare.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Mainnet client comparison" 3 | author: 4 | - name: Barnabé Monnot 5 | url: https://twitter.com/barnabemonnot 6 | affiliation: Robust Incentives Group, Ethereum Foundation 7 | affiliation_url: https://github.com/ethereum/rig 8 | date: "`r Sys.Date()`" 9 | output: 10 | distill::distill_article: 11 | toc: yes 12 | toc_depth: 3 13 | html_document: 14 | toc: yes 15 | toc_depth: '3' 16 | df_print: paged 17 | description: | 18 | Diving into client performance. 
19 | --- 20 | 21 | ```{r setup, include=FALSE} 22 | library(tidyverse) 23 | library(data.table) 24 | library(rmarkdown) 25 | library(infer) 26 | 27 | source(here::here("notebooks/lib.R")) 28 | 29 | options(digits=10) 30 | options(scipen = 999) 31 | 32 | # Make the plots a bit less pixellated 33 | knitr::opts_chunk$set(dpi = 300) 34 | knitr::opts_chunk$set(message = FALSE) 35 | knitr::opts_chunk$set(warning = FALSE) 36 | 37 | # A minimal theme I like 38 | newtheme <- theme_grey() + theme( 39 | axis.text = element_text(size = 9), 40 | axis.title = element_text(size = 12), 41 | axis.line = element_line(colour = "#000000"), 42 | panel.grid.major.y = element_line(colour="#bbbbbb", size=0.1), 43 | panel.grid.major.x = element_blank(), 44 | panel.grid.minor = element_blank(), 45 | panel.background = element_blank(), 46 | legend.title = element_text(size = 12), 47 | legend.text = element_text(size = 10), 48 | legend.box.background = element_blank(), 49 | legend.key = element_blank(), 50 | strip.text.x = element_text(size = 10), 51 | strip.background = element_rect(fill = "white") 52 | ) 53 | theme_set(newtheme) 54 | 55 | myred <- "#F05431" 56 | myyellow <- "#FED152" 57 | mygreen <- "#BFCE80" 58 | client_colours <- c("#000011", "#ff9a02", "#eb4a9b", "#7dc19e") 59 | 60 | end_epoch <- 1000 61 | slots_per_epoch <- 32 62 | until_slot <- (end_epoch + 1) * slots_per_epoch - 1 63 | slot_chunk_res <- until_slot %/% 15 64 | slots_per_year <- 365.25 * 24 * 60 * 60 / 12 65 | epochs_per_year <- slots_per_year / slots_per_epoch 66 | ``` 67 | 68 | ```{r cache=TRUE} 69 | all_bxs <- fread(here::here("mainnet_data/all_bxs.csv"))[slot < end_epoch * slots_per_epoch] 70 | all_ats <- fread(here::here("mainnet_data/all_ats.csv"))[att_slot < end_epoch * slots_per_epoch] 71 | block_root_at_slot <- get_block_root_at_slot(all_bxs) 72 | get_correctness_data(all_ats, block_root_at_slot) 73 | all_myopic_redundant_ats <- get_myopic_redundant_ats_detail(all_ats) 74 | redundant_ats <- 
get_redundant_ats(all_ats) 75 | subset_ats <- fread(here::here("mainnet_data/subset_ats.csv"))[slot < end_epoch * slots_per_epoch] 76 | val_series <- fread(here::here("mainnet_data/val_series.csv"))[epoch <= end_epoch] 77 | stats_per_slot <- fread(here::here("mainnet_data/stats_per_slot.csv"))[att_slot < end_epoch * slots_per_epoch] 78 | ``` 79 | 80 | This report was compiled with data until epoch `r end_epoch` (`r get_date_from_epoch(end_epoch)` UTC). We look at the performance of validators who self-declared their client, either writing the client name in their graffiti or with the POAP tag. 81 | 82 | 85 | 86 | ## Client distribution 87 | 88 | The declared client is obtained from the graffiti of produced blocks: 89 | 90 | - Either when the graffiti starts with `poap` and ends with `a`, `b`, `c`, `d` or `e` (respectively, Prysm, Lighthouse, Teku, Nimbus and Lodestar). 91 | - Or when the graffiti contains the client name in its string (e.g., `teku/v20.11.1`). 92 | 93 | Since the chain started recently, we do not have a lot of graffiti to scrape from. Additionally, not all graffiti feature the client name or the POAP (thanks, Mr. F). The analysis is thus carried out over self-declared clients. 94 | 95 | ```{r} 96 | validators <- all_bxs[, .(validator_index = proposer_index, client = declared_client)][ 97 | client != "undecided" & client != "lodestar" 98 | ] %>% 99 | unique() 100 | ``` 101 | 102 | ```{r} 103 | validators %>% 104 | .[, .(client)] %>% 105 | .[, .(count=.N), by=.(client)] %>% 106 | ggplot() + 107 | geom_col(aes(x = client, y = count, fill=client)) + 108 | scale_fill_manual(name = "Client", values=client_colours) + 109 | ggtitle("Distribution of clients in the dataset") + 110 | xlab("Declared client") + 111 | ylab("Count") 112 | ``` 113 | 114 | We have identified the client of `r nrow(validators)` validators, out of `r nrow(all_bxs)` blocks produced. 
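A minimal sketch of the classification rule above, written in Python for illustration — `classify_graffiti` is our own hypothetical helper, not a function from this notebook, and the graffiti strings are made up:

```python
def classify_graffiti(graffiti: str) -> str:
    """Toy version of the graffiti-based client detection described above."""
    poap_map = {"a": "prysm", "b": "lighthouse", "c": "teku", "d": "nimbus", "e": "lodestar"}
    clients = ["prysm", "lighthouse", "teku", "nimbus", "lodestar"]
    # POAP rule: graffiti starts with "poap" and its last letter encodes the client.
    if graffiti.startswith("poap") and graffiti[-1] in poap_map:
        return poap_map[graffiti[-1]]
    # Name rule: a unique client name appears somewhere in the graffiti string.
    hits = [c for c in clients if c in graffiti.lower()]
    return hits[0] if len(hits) == 1 else "undecided"

print(classify_graffiti("poapAbCdEc"))     # teku
print(classify_graffiti("teku/v20.11.1"))  # teku
print(classify_graffiti("hello world"))    # undecided
```

Any graffiti matching neither rule (or matching several client names) stays `undecided` and is dropped from the analysis.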
115 | 116 | ```{r} 117 | all_bxs[, .(client=declared_client, graffiti)][ 118 | client != "undecided" & client != "lodestar" & str_starts(graffiti, "poap") 119 | ] %>% 120 | unique() %>% 121 | .[, .(count=.N), by=.(client)] %>% 122 | ggplot() + 123 | geom_col(aes(x = client, y = count, fill=client)) + 124 | scale_fill_manual(name = "Client", values=client_colours) + 125 | ggtitle("Distribution of clients in the dataset") + 126 | xlab("Declared client") + 127 | ylab("Count") 128 | ``` 129 | 130 | 131 | ## Client performance 132 | 133 | ### Correctness by slot index 134 | 135 | It's close, but we observe slightly more incorrect head attestations when the attestation is made for the starting slot of a new epoch. We name `slot_index` the index of the slot in the epoch (from 0 to 31). 136 | 137 | ```{r} 138 | stats_per_slot[ 139 | , .(percent_correct_heads = sum(correct_heads) / sum(included_ats) * 100), 140 | by= .(slot_index=att_slot%%32) 141 | ] %>% 142 | ggplot() + 143 | geom_col(aes(x = slot_index, y = percent_correct_heads), fill=myred) + 144 | xlab("Slot index") + 145 | ylab("Percent of correct head attestations") 146 | ``` 147 | 148 | Attesters get the head wrong whenever the block they are supposed to attest for is late, and comes much after the attestation was published. We can check which clients are producing these late blocks. 
149 | 150 | 164 | 165 | ```{r} 166 | stats_per_slot[ 167 | all_bxs[ 168 | validators[, .(validator_index, client)], 169 | on=c("proposer_index" = "validator_index"), 170 | nomatch=NULL, 171 | .(slot, client) 172 | ], 173 | on = c("att_slot" = "slot"), 174 | nomatch=NULL 175 | ][ 176 | , .(percent_correct_heads = sum(correct_heads) / sum(included_ats) * 100), 177 | by= .(slot_index=att_slot%%32, client) 178 | ] %>% 179 | ggplot() + 180 | geom_col(aes(x = slot_index, y = percent_correct_heads, fill=client)) + 181 | scale_fill_manual(name="Client", values=client_colours) + 182 | facet_wrap(vars(client)) + 183 | xlab("Slot index") + 184 | ylab("Percent of correct head attestations") 185 | ``` 186 | 187 | Since these late blocks seem to happen more often at the start of an epoch than at the end, it is quite clear that epoch processing is at fault, with some clients likely spending more time processing the epoch, leaving them unable to publish the block on time. 188 | 189 | We can also check how the performance of validators on blocks at slot index 0 evolves over time, again split by the client expected to produce the block at slot index 0. 
190 | 191 | ```{r} 192 | chunk_size <- 20 193 | stats_per_slot[ 194 | all_bxs[ 195 | validators[, .(validator_index, client)], 196 | on=c("proposer_index" = "validator_index"), 197 | nomatch=NULL, 198 | .(slot, client) 199 | ], 200 | on = c("att_slot" = "slot"), 201 | nomatch=NULL 202 | ][ 203 | att_slot%%32==0, .(percent_correct_heads = sum(correct_heads) / sum(expected_ats) * 100), 204 | by= .(epoch_chunk=(att_slot%/%32)%/%chunk_size, client) 205 | ] %>% 206 | ggplot() + 207 | geom_line(aes(x = epoch_chunk * chunk_size, y = percent_correct_heads, group=client, color=client)) + 208 | scale_color_manual(name="Client", values=client_colours) + 209 | xlab("Epoch") + 210 | ylab("Percent of correct head attestations") + 211 | ggtitle("Head correctness per slot index 0 client proposer") 212 | ``` 213 | 214 | ## Attestations over time 215 | 216 | In the plots below, validators activated at genesis are aligned on the y-axis. A point on the plot is coloured in green when the validator has managed to get their attestation included for the epoch given on the x-axis. Otherwise, the point is coloured in red. Note that we do not check for the correctness of the attestation, merely its presence in some block of the beacon chain. 217 | 218 | The plots allow us to check when a particular client is experiencing issues, at which point some share of validators of that client will be unable to publish their attestations. 
219 | 220 | ```{r} 221 | get_grid_per_client <- function(val_series, selected_client) { 222 | val_series[client == selected_client] %>% 223 | mutate(validator_index = as.factor(validator_index)) %>% 224 | ggplot() + 225 | geom_tile(aes(x = epoch, y = validator_index, fill = included_ats)) + 226 | scale_fill_gradient(low = myred, high = mygreen, na.value = NA, 227 | limits = c(0, max(val_series$included_ats)), 228 | guide = FALSE) + 229 | scale_x_continuous(expand = c(0, 0)) + 230 | xlab("Epoch") + 231 | ylab("Validators") + 232 | theme(axis.text.y=element_blank(), 233 | axis.ticks.y=element_blank(), 234 | panel.background=element_rect(fill=myred, colour=myred), 235 | axis.title.x = element_text(size = 6), 236 | axis.title.y = element_text(size = 6), 237 | axis.text.x = element_text(size = 6), 238 | strip.text = element_text(size = 7)) 239 | } 240 | 241 | plot_grid <- function(start_epoch, end_epoch, committees = NULL) { 242 | l <- c("prysm", "lighthouse", "nimbus", "teku") %>% set_names() %>% # name the list so l[["prysm"]] etc. below resolve 243 | map(function(client) { 244 | get_grid_per_client(val_series, client) 245 | }) 246 | 247 | l[["prysm"]] | l[["lighthouse"]] | l[["nimbus"]] | l[["teku"]] 248 | } 249 | ``` 250 | 251 | ### Lighthouse 252 | 253 | ```{r, layout="l-screen", fig.height=2} 254 | get_grid_per_client(val_series[ 255 | validators[, .(validator_index, client)], on="validator_index" 256 | ], "lighthouse") 257 | ``` 258 | 259 | ### Nimbus 260 | 261 | ```{r, layout="l-screen", fig.height=2} 262 | get_grid_per_client(val_series[ 263 | validators[, .(validator_index, client)], on="validator_index" 264 | ], "nimbus") 265 | ``` 266 | 267 | ### Prysm 268 | 269 | ```{r, layout="l-screen", fig.height=2} 270 | get_grid_per_client(val_series[ 271 | validators[, .(validator_index, client)], on="validator_index" 272 | ], "prysm") 273 | ``` 274 | 275 | ### Teku 276 | 277 | ```{r, layout="l-screen", fig.height=2} 278 | get_grid_per_client(val_series[ 279 | validators[, .(validator_index, client)], on="validator_index" 280 | ], 
"teku") 281 | ``` 282 | 283 | ## Reward rates since genesis 284 | 285 | ```{r} 286 | get_reward_timelines <- function(start_epoch, end_epoch, step=25) { 287 | start_balances <- get_balances_active_validators(start_epoch)[ 288 | validators[, .(validator_index, client)], on="validator_index" 289 | ][!is.na(balance)] %>% 290 | mutate( 291 | measurement_epoch = start_epoch 292 | ) %>% 293 | select(-time_active, -activation_epoch) 294 | 295 | seq(start_epoch+step, end_epoch+1, step) %>% 296 | map(function(epoch) { 297 | end_balances <- get_balances_active_validators(epoch)[ 298 | validators[, .(validator_index, client)], on="validator_index" 299 | ][!is.na(balance)] %>% 300 | mutate( 301 | measurement_epoch = epoch 302 | ) %>% 303 | select(-time_active, -activation_epoch) 304 | 305 | t <- start_balances %>% 306 | inner_join(end_balances, 307 | by = c("validator_index", "client")) %>% 308 | mutate(reward_rate = (balance.y - balance.x) / balance.x * 100 * epochs_per_year / (measurement_epoch.y - measurement_epoch.x)) 309 | rr <- t %>% 310 | group_by(client, measurement_epoch.y) %>% 311 | summarise(avg_rr = mean(reward_rate), n_group = n()) 312 | 313 | start_balances <<- end_balances 314 | return(rr) 315 | }) %>% 316 | bind_rows() 317 | } 318 | ``` 319 | 320 | ```{r cache=TRUE, message=FALSE} 321 | reward_step <- 20 322 | rr_series <- get_reward_timelines(1, end_epoch + 1, step=reward_step) 323 | ``` 324 | 325 | We first look at the reward rates per client since genesis. 
326 | 327 | ```{r} 328 | rr_series %>% 329 | group_by(client, measurement_epoch.y) %>% 330 | summarise(avg_rr = sum(avg_rr * n_group) / sum(n_group)) %>% 331 | ggplot(aes(x = measurement_epoch.y - reward_step / 2, y = avg_rr, group=client, color=client)) + 332 | geom_line() + 333 | scale_color_manual(name = "Client", values = client_colours) + 334 | xlab("Epoch") + 335 | ylab("Average reward rate") + 336 | xlim(0, end_epoch) + 337 | ggtitle("Timeline of average rates of reward per client") 338 | ``` 339 | 340 | ## Inclusion delay per client 341 | 342 | We check the inclusion delay over all validators per client. 343 | 344 | ```{r} 345 | val_series[!is.na(inclusion_delay)][ 346 | validators, on=c("validator_index"), nomatch=NULL 347 | ][ 348 | , .(avg_inclusion_delay = sum(inclusion_delay * included_ats) / sum(included_ats)), 349 | by=.(epoch, client) 350 | ] %>% 351 | ggplot() + 352 | geom_line(aes(x = epoch, y = avg_inclusion_delay, color = client)) + 353 | scale_color_manual(name = "Client", values = client_colours) + 354 | xlab("Epoch") + 355 | ylab("Average inclusion delay") + 356 | ylim(1.0, 1.3) + 357 | ggtitle("Timeline of average inclusion delay per client") 358 | ``` 359 | 360 | We can also check the inclusion delay _by block_, where instead of looking at first inclusion of the attestation minus the attestation slot, we compute the first inclusion of the attestation minus _the earliest block in which this attestation could have been included_. Note that the minimum value of the inclusion delay by block is 0. 
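The two delay definitions can be illustrated with a toy case (Python sketch; the slots are hypothetical, not from the dataset):

```python
# An attestation for slot 10; slot 11 had no block, so the earliest
# block that could include the attestation is the one at slot 12,
# where it is in fact first included.
att_slot = 10
first_possible_inclusion_slot = 12  # earliest block that could include it
first_inclusion_slot = 12           # block that actually included it

inclusion_delay = first_inclusion_slot - att_slot  # measured from the attestation slot
inclusion_delay_by_block = first_inclusion_slot - first_possible_inclusion_slot  # measured from the earliest possible block
print(inclusion_delay, inclusion_delay_by_block)  # 2 0
```

The by-block delay of 0 shows the attester was included as early as the chain allowed, even though the plain inclusion delay is 2.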
361 | 362 | ```{r} 363 | val_series[!is.na(inclusion_delay_by_block)][ 364 | validators, on=c("validator_index"), nomatch=NULL 365 | ][ 366 | , .(avg_inclusion_delay = sum(inclusion_delay_by_block * included_ats) / sum(included_ats)), 367 | by=.(epoch, client) 368 | ] %>% 369 | ggplot() + 370 | geom_line(aes(x = epoch, y = avg_inclusion_delay, color = client)) + 371 | scale_color_manual(name = "Client", values = client_colours) + 372 | xlab("Epoch") + 373 | ylab("Average inclusion delay by block") + 374 | ylim(0.0, 0.2) + 375 | ggtitle("Timeline of average inclusion delay by block per client") 376 | ``` 377 | 378 | ## Block-packing 379 | 380 | A block can include at most 128 aggregate attestations. How many aggregate attestations did each client include on average? 381 | 382 | ```{r} 383 | chunk_size <- 25 384 | all_ats %>% 385 | .[, .(included_ats = .N), by=slot] %>% 386 | merge(all_bxs[, .(slot, proposer_index)]) %>% 387 | merge(validators[, .(validator_index, client)], 388 | by.x = c("proposer_index"), by.y = c("validator_index")) %>% 389 | mutate(epoch_chunk = slot %/% slots_per_epoch %/% chunk_size) %>% 390 | group_by(epoch_chunk, client) %>% 391 | summarise(included_ats = mean(included_ats)) %>% 392 | ggplot(aes(x = epoch_chunk * chunk_size, y = included_ats, group=client, color=client)) + 393 | geom_line() + 394 | scale_color_manual(name = "Client", values = client_colours) + 395 | ylim(0, 128) + 396 | ggtitle("Average number of aggregates included per block") + 397 | xlab("Epoch") + 398 | ylab("Average number of aggregates") 399 | ``` 400 | 401 | Smaller blocks lead to a healthier network, as long as they do not leave attestations aside. We check how each client manages redundancy in the next sections. 402 | 403 | ### Redundant aggregates 404 | 405 | Redundant aggregates are made up of attestations that were all already published, albeit possibly across different aggregates. 
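A toy bitlist example of this definition (Python sketch; the aggregates are hypothetical): an aggregate can be redundant even when it is not a subset of any single earlier aggregate, as long as the union of earlier aggregates covers it.

```python
# Each list marks which committee members an aggregate attests for.
agg1 = [True, True, False, False]   # already included in a block
agg2 = [False, False, True, False]  # already included in a block
agg3 = [True, False, True, False]   # candidate aggregate

# Union of all attesters already published across the earlier aggregates.
already_seen = [a or b for a, b in zip(agg1, agg2)]

# agg3 is redundant: every attester it covers was already published,
# even though agg3 is not a subset of agg1 or agg2 taken alone.
is_redundant = all(seen for new, seen in zip(agg3, already_seen) if new)
print(is_redundant)  # True
```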
406 | 407 | ```{r} 408 | chunk_size <- 25 409 | all_bxs %>% 410 | merge(validators[, .(validator_index, client)], 411 | by.x = c("proposer_index"), by.y = c("validator_index")) %>% 412 | merge(redundant_ats, by.x = c("slot"), by.y = c("slot"), all.x = TRUE) %>% 413 | setnafill("const", fill = 0, cols = c("n_redundant")) %>% 414 | mutate(epoch_chunk = slot %/% slots_per_epoch %/% chunk_size) %>% 415 | group_by(epoch_chunk, client) %>% 416 | summarise(n_redundant = mean(n_redundant)) %>% 417 | ggplot(aes(x = epoch_chunk * chunk_size, y = n_redundant, group=client, color=client)) + 418 | geom_line() + 419 | scale_color_manual(name = "Client", values = client_colours) + 420 | ggtitle("Average number of redundant aggregates per block") + 421 | xlab("Epoch") + 422 | ylab("Average redundant aggregates") 423 | ``` 424 | 425 | ### Myopic redundant aggregates 426 | 427 | Myopic redundant aggregates were already published, with the same attesting indices, in a previous block. 428 | 429 | ```{r} 430 | chunk_size <- 25 431 | all_bxs %>% 432 | merge(validators[, .(validator_index, client)], 433 | by.x = c("proposer_index"), by.y = c("validator_index")) %>% 434 | merge(all_myopic_redundant_ats, by.x = c("slot"), by.y = c("slot"), all.x = TRUE) %>% 435 | setnafill("const", fill = 0, cols = c("n_myopic_redundant")) %>% 436 | mutate(epoch_chunk = slot %/% slots_per_epoch %/% chunk_size) %>% 437 | group_by(epoch_chunk, client) %>% 438 | summarise(n_myopic_redundant = mean(n_myopic_redundant)) %>% 439 | ggplot(aes(x = epoch_chunk * chunk_size, y = n_myopic_redundant, group=client, color=client)) + 440 | geom_line() + 441 | scale_color_manual(name = "Client", values = client_colours) + 442 | ggtitle("Average number of myopic redundant aggregates per block") + 443 | xlab("Epoch") + 444 | ylab("Average myopic aggregates") 445 | ``` 446 | 447 | ### Subset aggregates 448 | 449 | ```{r} 450 | subset_until_slot <- 20000 451 | ``` 452 | 453 | Subset aggregates are aggregates included in a block 
which are fully covered by another aggregate included in the same block. Namely, when aggregate 1 has attesting indices $I$ and aggregate 2 has attesting indices $J$, aggregate 1 is a subset aggregate when $I \subset J$. 454 | 455 | 458 | 459 | ```{r} 460 | chunk_size <- 20 461 | all_bxs[slot <= subset_until_slot] %>% 462 | merge(validators[, .(validator_index, client)], 463 | by.x = c("proposer_index"), by.y = c("validator_index")) %>% 464 | merge(subset_ats, by.x = c("slot"), by.y = c("slot"), all.x = TRUE) %>% 465 | setnafill("const", fill = 0, cols = c("n_subset", "n_subset_ind", "n_weakly_clashing", "n_strongly_clashing")) %>% 466 | mutate(epoch_chunk = slot %/% slots_per_epoch %/% chunk_size) %>% 467 | group_by(epoch_chunk, client) %>% 468 | summarise(n_subset = mean(n_subset)) %>% 469 | ggplot(aes(x = epoch_chunk * chunk_size, y = n_subset, group=client, color=client)) + 470 | geom_line() + 471 | scale_color_manual(name = "Client", values = client_colours) + 472 | ggtitle("Average number of subset aggregates per block") + 473 | xlab("Epoch") + 474 | ylab("Average subset aggregates") 475 | ``` 476 | 477 | Lighthouse and Nimbus both score a perfect 0. 
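The subset condition $I \subset J$ can be tested via the bitlist union, as `compare_ats` does with its `or_op` column in `lib.R` (a toy Python sketch with hypothetical bitlists):

```python
# Attesting-index bitlists of two aggregates included in the same block.
i = [True, False, False, True]
j = [True, True, False, True]

# I is a strict subset of J iff (I union J) == J and I != J.
union = [a or b for a, b in zip(i, j)]
i_subset_of_j = (union == j) and (i != j)
print(i_subset_of_j)  # True: aggregate i is a subset aggregate
```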
478 | 479 | ```{r} 480 | chunk_size <- 20 481 | all_ats[slot <= subset_until_slot] %>% 482 | .[, .(included_ats = .N), by=slot] %>% 483 | merge(all_bxs[, .(slot, proposer_index)]) %>% 484 | merge(validators[, .(validator_index, client)], 485 | by.x = c("proposer_index"), by.y = c("validator_index")) %>% 486 | merge(subset_ats, by.x = c("slot"), by.y = c("slot"), all.x = TRUE) %>% 487 | setnafill("const", fill = 0, cols = c("n_subset", "n_subset_ind", "n_weakly_clashing", "n_strongly_clashing")) %>% 488 | mutate(epoch_chunk = slot %/% slots_per_epoch %/% chunk_size) %>% 489 | group_by(epoch_chunk, client) %>% 490 | summarise(n_subset = mean(n_subset / included_ats * 100)) %>% # percentage of included aggregates 491 | ggplot(aes(x = epoch_chunk * chunk_size, y = n_subset, group=client, color=client)) + 492 | geom_line() + 493 | scale_color_manual(name = "Client", values = client_colours) + 494 | ggtitle("Percentage of subset aggregates among included aggregates") + 495 | xlab("Epoch") + 496 | ylab("Percentage of subset aggregates in block") 497 | ``` 498 | -------------------------------------------------------------------------------- /posdata/notebooks/pyrmont_compare.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Pyrmont client comparison" 3 | author: 4 | - name: Barnabé Monnot 5 | url: https://twitter.com/barnabemonnot 6 | affiliation: Robust Incentives Group, Ethereum Foundation 7 | affiliation_url: https://github.com/ethereum/rig 8 | date: "`r Sys.Date()`" 9 | output: 10 | distill::distill_article: 11 | toc: yes 12 | toc_depth: 3 13 | html_document: 14 | toc: yes 15 | toc_depth: '3' 16 | df_print: paged 17 | description: | 18 | Onwards! 
19 | --- 20 | 21 | ```{r setup, include=FALSE} 22 | library(tidyverse) 23 | library(data.table) 24 | library(rmarkdown) 25 | library(infer) 26 | 27 | source(here::here("notebooks/lib.R")) 28 | 29 | options(digits=10) 30 | options(scipen = 999) 31 | 32 | # Make the plots a bit less pixellated 33 | knitr::opts_chunk$set(dpi = 300) 34 | knitr::opts_chunk$set(message = FALSE) 35 | knitr::opts_chunk$set(warning = FALSE) 36 | 37 | # A minimal theme I like 38 | newtheme <- theme_grey() + theme( 39 | axis.text = element_text(size = 9), 40 | axis.title = element_text(size = 12), 41 | axis.line = element_line(colour = "#000000"), 42 | panel.grid.major = element_blank(), 43 | panel.grid.minor = element_blank(), 44 | panel.background = element_blank(), 45 | legend.title = element_text(size = 12), 46 | legend.text = element_text(size = 10), 47 | legend.box.background = element_blank(), 48 | legend.key = element_blank(), 49 | strip.text.x = element_text(size = 10), 50 | strip.background = element_rect(fill = "white") 51 | ) 52 | theme_set(newtheme) 53 | 54 | myred <- "#F05431" 55 | myyellow <- "#FED152" 56 | mygreen <- "#BFCE80" 57 | client_colours <- c("#000011", "#ff9a02", "#eb4a9b", "#7dc19e") 58 | 59 | end_epoch <- 2820 60 | slots_per_epoch <- 32 61 | until_slot <- (end_epoch + 2) * slots_per_epoch - 1 62 | slot_chunk_res <- until_slot %/% 15 63 | slots_per_year <- 365.25 * 24 * 60 * 60 / 12 64 | epochs_per_year <- slots_per_year / slots_per_epoch 65 | ``` 66 | 67 | ```{r cache=TRUE} 68 | all_ats <- fread(here::here("pyrmont_data/all_ats.csv")) 69 | all_bxs <- fread(here::here("pyrmont_data/all_bxs.csv")) 70 | block_root_at_slot <- get_block_root_at_slot(all_bxs) 71 | validators <- fread(here::here("pyrmont_data/initial_validators.csv")) 72 | all_myopic_redundant_ats <- get_myopic_redundant_ats_detail(all_ats) 73 | subset_ats <- fread(here::here("pyrmont_data/subset_ats.csv")) 74 | val_series <- fread(here::here("pyrmont_data/val_series.csv")) 75 | stats_per_slot <- 
fread(here::here("pyrmont_data/stats_per_slot.csv")) 76 | ``` 77 | 78 | In this report, we focus on the initial set of 99,900 validators controlled by the Ethereum Foundation and the client teams. This report was compiled with data until epoch `r end_epoch` (`r get_date_from_epoch(end_epoch)`). 79 | 80 | 83 | 84 | ## High-level insights 85 | 86 | - All clients perform generally well, with improvements over the (still short) lifetime of Pyrmont. 87 | - Attention should be given to epoch processing. We identify from the performance of validator duties (attesting for the correct head) that some clients are late to propose the initial block of a new epoch, and hypothesize that this lateness is due to high epoch processing overhead. 88 | - Block-packing algorithms have improved considerably since Medalla. 89 | - Lighthouse has a significantly lower number of attestations in the blocks it produces. 90 | - 👍 No client includes any aggregate attestation of size 1 if it is contained in a larger, also included, aggregate. 91 | - 👍 Lighthouse and Nimbus have no subset aggregates in their blocks. Prysm and Teku have roughly the same numbers as in Medalla. 92 | - 👍 All clients (except Nimbus) include very few myopic redundant aggregates (aggregates already contained in parent blocks). 93 | - There seems to be a clear correlation between the hardware used to run validators and the validator performance, leading to slightly decreased rewards when the hardware is scaled down. In particular, Teku appears to be the most sensitive to smaller hardware specs, confirming its status as an institution-directed client. 94 | 95 | ## Client distribution 96 | 97 | We have a roughly equal distribution of clients in the network at genesis. The EF operates around 20% of each validator set associated with each client, while the remaining validators are maintained by the team behind the client itself. 
98 | 99 | ```{r} 100 | validators %>% 101 | .[, .(client, team=(if_else(team=="ef", "EF", "Other")))] %>% 102 | .[, .(count=.N), by=.(client, team)] %>% 103 | ggplot() + 104 | geom_col(aes(x = client, y = count, fill = team)) + 105 | scale_fill_manual(name = "Team", values = c(myyellow, myred)) + 106 | ggtitle("Distribution of clients in the dataset") + 107 | xlab("Declared client") + 108 | ylab("Count") 109 | ``` 110 | 111 | In the following, statistics are obtained over all EF- and client teams-controlled validators, unless otherwise noted. In particular, we do not include data from validators activated after genesis or validators who are not controlled by the EF or the client teams. 112 | 113 | ## Client performance 114 | 115 | ### Correctness by slot index 116 | 117 | We observe a lot more incorrect head attestations when the attestation is made for the starting slot of a new epoch. We name `slot_index` the index of the slot in the epoch (from 0 to 31). 118 | 119 | ```{r} 120 | stats_per_slot[ 121 | , .(percent_correct_heads = sum(correct_heads) / sum(expected_ats) * 100), 122 | by= .(slot_index=att_slot%%32) 123 | ] %>% 124 | ggplot() + 125 | geom_col(aes(x = slot_index, y = percent_correct_heads), fill=myred) + 126 | xlab("Slot index") + 127 | ylab("Percent of correct head attestations") 128 | ``` 129 | 130 | Attesters get the head wrong whenever the block they are supposed to attest for is late, and comes much after the attestation was published. We can check which clients are producing these late blocks. 
131 | 132 | 146 | 147 | ```{r} 148 | stats_per_slot[ 149 | all_bxs[ 150 | validators[, .(validator_index, client)], 151 | on=c("proposer_index" = "validator_index"), 152 | nomatch=NULL, 153 | .(slot, client) 154 | ], 155 | on = c("att_slot" = "slot"), 156 | nomatch=NULL 157 | ][ 158 | , .(percent_correct_heads = sum(correct_heads) / sum(expected_ats) * 100), 159 | by= .(slot_index=att_slot%%32, client) 160 | ] %>% 161 | ggplot() + 162 | geom_col(aes(x = slot_index, y = percent_correct_heads, fill=client)) + 163 | scale_fill_manual(name="Client", values=client_colours) + 164 | facet_wrap(vars(client)) + 165 | xlab("Slot index") + 166 | ylab("Percent of correct head attestations") 167 | ``` 168 | 169 | Since these late blocks seem to happen more often at the start of an epoch than at the end, it is quite clear that epoch processing is at fault, with some clients likely spending more time processing the epoch and unable to publish the block on time. 170 | 171 | We can also check over time how the performance of validators on blocks at slot index 0 evolves, again plotting per client who is expected to produce the block at slot index 0. 
172 | 173 | ```{r} 174 | chunk_size <- 50 175 | stats_per_slot[ 176 | all_bxs[ 177 | validators[, .(validator_index, client)], 178 | on=c("proposer_index" = "validator_index"), 179 | nomatch=NULL, 180 | .(slot, client) 181 | ], 182 | on = c("att_slot" = "slot"), 183 | nomatch=NULL 184 | ][ 185 | att_slot%%32==0, .(percent_correct_heads = sum(correct_heads) / sum(expected_ats) * 100), 186 | by= .(epoch_chunk=(att_slot%/%32)%/%chunk_size, client) 187 | ] %>% 188 | ggplot() + 189 | geom_line(aes(x = epoch_chunk * chunk_size, y = percent_correct_heads, group=client, color=client)) + 190 | scale_color_manual(name="Client", values=client_colours) + 191 | xlab("Epoch") + 192 | ylab("Percent of correct head attestations") 193 | ``` 194 | 195 | Validators attesting on Teku-expected blocks at slot index 0 performed better at a time when the chain experienced difficulty and the number of blocks produced was lower, around epochs 200 to 300, which lines up with the suggested explanation of long epoch processing times. 196 | 197 | ## Attestations over time 198 | 199 | In the plots below, we align validators activated at genesis on the y-axis. A point on the plot is coloured in green when the validator has managed to get their attestation included for the epoch given on the x-axis. Otherwise, the point is coloured in red. Note that we do not check for the correctness of the attestation, merely its presence in some block of the beacon chain. 200 | 201 | The plots allow us to check when a particular client is experiencing issues, at which point some share of validators of that client will be unable to publish their attestations.
202 | 203 | ```{r} 204 | get_grid_per_client <- function(val_series, selected_client) { 205 | val_series[client == selected_client] %>% 206 | mutate(validator_index = as.factor(validator_index)) %>% 207 | ggplot() + 208 | geom_tile(aes(x = epoch, y = validator_index, fill = included_ats)) + 209 | scale_fill_gradient(low = myred, high = mygreen, na.value = NA, 210 | limits = c(0, max(val_series$included_ats)), 211 | guide = FALSE) + 212 | scale_x_continuous(expand = c(0, 0)) + 213 | xlab("Epoch") + 214 | ylab("Validators") + 215 | theme(axis.text.y=element_blank(), 216 | axis.ticks.y=element_blank(), 217 | panel.background=element_rect(fill=myred, colour=myred), 218 | axis.title.x = element_text(size = 6), 219 | axis.title.y = element_text(size = 6), 220 | axis.text.x = element_text(size = 6), 221 | strip.text = element_text(size = 7)) 222 | } 223 | 224 | plot_grid <- function(start_epoch, end_epoch, committees = NULL) { 225 | l <- c("prysm", "lighthouse", "nimbus", "teku") %>% 226 | map(function(client) { 227 | get_grid_per_client(val_series, client) 228 | }) 229 | 230 | l[["prysm"]] | l[["lighthouse"]] | l[["nimbus"]] | l[["teku"]] 231 | } 232 | ``` 233 | 234 | ### Lighthouse 235 | 236 | ```{r, layout="l-screen", fig.height=2} 237 | get_grid_per_client(val_series[ 238 | validators[, .(validator_index, client)], on="validator_index" 239 | ], "lighthouse") 240 | ``` 241 | 242 | ### Nimbus 243 | 244 | ```{r, layout="l-screen", fig.height=2} 245 | get_grid_per_client(val_series[ 246 | validators[, .(validator_index, client)], on="validator_index" 247 | ], "nimbus") 248 | ``` 249 | 250 | ### Prysm 251 | 252 | ```{r, layout="l-screen", fig.height=2} 253 | get_grid_per_client(val_series[ 254 | validators[, .(validator_index, client)], on="validator_index" 255 | ], "prysm") 256 | ``` 257 | 258 | ### Teku 259 | 260 | ```{r, layout="l-screen", fig.height=2} 261 | get_grid_per_client(val_series[ 262 | validators[, .(validator_index, client)], on="validator_index" 263 | ], 
"teku") 264 | ``` 265 | 266 | ## Block-packing 267 | 268 | A block can include at most 128 aggregate attestations. How many aggregate attestations did each client include on average? 269 | 270 | ```{r} 271 | chunk_size <- 25 272 | all_ats %>% 273 | .[, .(included_ats = .N), by=slot] %>% 274 | merge(all_bxs[, .(slot, proposer_index)]) %>% 275 | merge(validators[, .(validator_index, client)], 276 | by.x = c("proposer_index"), by.y = c("validator_index")) %>% 277 | mutate(epoch_chunk = slot %/% slots_per_epoch %/% chunk_size) %>% 278 | group_by(epoch_chunk, client) %>% 279 | summarise(included_ats = mean(included_ats)) %>% 280 | ggplot(aes(x = epoch_chunk * chunk_size, y = included_ats, group=client, color=client)) + 281 | geom_line() + 282 | scale_color_manual(name = "Client", values = client_colours) + 283 | ylim(0, 128) + 284 | ggtitle("Average number of aggregates included per block") + 285 | xlab("Epoch") + 286 | ylab("Average number of aggregates") 287 | ``` 288 | 289 | Smaller blocks lead to a healthier network, as long as they do not leave attestations aside. We check how each client manages redundancy in the next sections. 290 | 291 | ### Myopic redundant aggregates 292 | 293 | Myopic redundant aggregates are aggregates that were already published, with the same attesting indices, in a previous block.
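The detection logic is simple to state outside the R pipeline. A minimal Python sketch, operating on made-up bitfield strings rather than the notebook's dataset:

```python
def count_myopic_redundant(blocks):
    """For each block, count aggregates whose attesting-indices bitfield
    already appeared, identically, in an earlier block.

    `blocks` is a list (in slot order) of lists of bitfield strings;
    this layout is hypothetical, for illustration only."""
    seen = set()   # bitfields included in previous blocks
    counts = []
    for aggregates in blocks:
        counts.append(sum(1 for bits in aggregates if bits in seen))
        seen.update(aggregates)
    return counts

# Slot 0 includes two aggregates; slot 1 re-includes one of them.
print(count_myopic_redundant([["1100", "0011"], ["1100", "1111"]]))  # [0, 1]
```

The notebook's `get_myopic_redundant_ats` presumably performs the analogous lookup over `all_ats`.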
294 | 295 | ```{r} 296 | chunk_size <- 25 297 | all_bxs %>% 298 | merge(validators[, .(validator_index, client)], 299 | by.x = c("proposer_index"), by.y = c("validator_index")) %>% 300 | merge(all_myopic_redundant_ats, by.x = c("slot"), by.y = c("slot"), all.x = TRUE) %>% 301 | setnafill("const", fill = 0, cols = c("n_myopic_redundant")) %>% 302 | mutate(epoch_chunk = slot %/% slots_per_epoch %/% chunk_size) %>% 303 | group_by(epoch_chunk, client) %>% 304 | summarise(n_myopic_redundant = mean(n_myopic_redundant)) %>% 305 | ggplot(aes(x = epoch_chunk * chunk_size, y = n_myopic_redundant, group=client, color=client)) + 306 | geom_line() + 307 | scale_color_manual(name = "Client", values = client_colours) + 308 | ggtitle("Average number of myopic redundant aggregates per block") + 309 | xlab("Epoch") + 310 | ylab("Average myopic aggregates") 311 | ``` 312 | 313 | ### Subset aggregates 314 | 315 | ```{r} 316 | subset_until_slot <- 65000 317 | ``` 318 | 319 | Subset aggregates are aggregates included in a block which are fully covered by another aggregate included in the same block. Namely, when aggregate 1 has attesting indices $I$ and aggregate 2 has attesting indices $J$, aggregate 1 is a subset aggregate when $I \subset J$. 
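With attesting indices encoded as `0`/`1` bitfields over the same committee (as in the `attesting_indices` column), the subset relation reduces to a bitwise check. A Python sketch on hypothetical bitfields:

```python
def is_subset_aggregate(bits_a, bits_b):
    """True when aggregate A's attesting indices are fully covered by
    aggregate B's; identical aggregates are not counted as subsets."""
    a = int(bits_a, 2)
    b = int(bits_b, 2)
    # a & ~b keeps the indices attested in A but absent from B
    return a != b and (a & ~b) == 0

print(is_subset_aggregate("0100", "1100"))  # True: {1} is covered by {0, 1}
print(is_subset_aggregate("0110", "1100"))  # False: index 2 is not in B
```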
320 | 321 | 324 | 325 | ```{r} 326 | chunk_size <- 10 327 | all_bxs[slot <= subset_until_slot] %>% 328 | merge(validators[, .(validator_index, client)], 329 | by.x = c("proposer_index"), by.y = c("validator_index")) %>% 330 | merge(subset_ats, by.x = c("slot"), by.y = c("slot"), all.x = TRUE) %>% 331 | setnafill("const", fill = 0, cols = c("n_subset", "n_subset_ind", "n_weakly_clashing", "n_strongly_clashing")) %>% 332 | mutate(epoch_chunk = slot %/% slots_per_epoch %/% chunk_size) %>% 333 | group_by(epoch_chunk, client) %>% 334 | summarise(n_subset = mean(n_subset)) %>% 335 | ggplot(aes(x = epoch_chunk * chunk_size, y = n_subset, group=client, color=client)) + 336 | geom_line() + 337 | scale_color_manual(name = "Client", values = client_colours) + 338 | ggtitle("Average number of subset aggregates per block") + 339 | xlab("Epoch") + 340 | ylab("Average subset aggregates") 341 | ``` 342 | 343 | Lighthouse and Nimbus both score a perfect 0. 344 | 345 | ```{r} 346 | chunk_size <- 5 347 | all_ats[slot <= subset_until_slot] %>% 348 | .[, .(included_ats = .N), by=slot] %>% 349 | merge(all_bxs[, .(slot, proposer_index)]) %>% 350 | merge(validators[, .(validator_index, client)], 351 | by.x = c("proposer_index"), by.y = c("validator_index")) %>% 352 | merge(subset_ats, by.x = c("slot"), by.y = c("slot"), all.x = TRUE) %>% 353 | setnafill("const", fill = 0, cols = c("n_subset", "n_subset_ind", "n_weakly_clashing", "n_strongly_clashing")) %>% 354 | mutate(epoch_chunk = slot %/% slots_per_epoch %/% chunk_size) %>% 355 | group_by(epoch_chunk, client) %>% 356 | summarise(n_subset = mean(n_subset / included_ats * 100)) %>% 357 | ggplot(aes(x = epoch_chunk * chunk_size, y = n_subset, group=client, color=client)) + 358 | geom_line() + 359 | scale_color_manual(name = "Client", values = client_colours) + 360 | ggtitle("Percentage of subset aggregates among included aggregates") + 361 | xlab("Epoch") + 362 | ylab("Percentage of subset aggregates in block") 363 | ``` 364 | 365 | ## Reward rates since
genesis 366 | 367 | ```{r} 368 | get_reward_timelines <- function(start_epoch, end_epoch, step=25) { 369 | start_balances <- get_balances_active_validators(start_epoch)[ 370 | validators[, .(validator_index, client, team, first_digit)], on="validator_index" 371 | ] %>% 372 | mutate( 373 | measurement_epoch = start_epoch 374 | ) %>% 375 | select(-time_active, -activation_epoch) 376 | 377 | seq(start_epoch+step, end_epoch+1, step) %>% 378 | map(function(epoch) { 379 | end_balances <- get_balances_active_validators(epoch)[ 380 | validators[, .(validator_index, client, team, first_digit)], on="validator_index" 381 | ] %>% 382 | mutate( 383 | measurement_epoch = epoch 384 | ) %>% 385 | select(-time_active, -activation_epoch) 386 | 387 | t <- start_balances %>% 388 | inner_join(end_balances, 389 | by = c("validator_index", "client", "team", "first_digit")) %>% 390 | mutate(reward_rate = (balance.y - balance.x) / balance.x * 100 * epochs_per_year / (measurement_epoch.y - measurement_epoch.x)) 391 | rr <- t %>% 392 | group_by(client, first_digit, team, measurement_epoch.y) %>% 393 | summarise(avg_rr = mean(reward_rate), n_group = n()) 394 | 395 | start_balances <<- end_balances 396 | return(rr) 397 | }) %>% 398 | bind_rows() %>% 399 | mutate(region = as_factor(first_digit)) 400 | } 401 | ``` 402 | 403 | ```{r cache=TRUE, message=FALSE} 404 | rr_series <- get_reward_timelines(1, end_epoch + 1, step=50) 405 | ``` 406 | 407 | We first look at the reward rates per client since genesis. 
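The `reward_rate` computed in `get_reward_timelines` annualises the percentage return over each measurement window. A worked Python example with made-up balances (in Gwei; the constant matches the `epochs_per_year` defined in the notebook setup):

```python
# 12-second slots, 32 slots per epoch
EPOCHS_PER_YEAR = 365.25 * 24 * 60 * 60 / 12 / 32

def annualised_reward_rate(start_balance, end_balance, epochs_elapsed):
    """Percentage return over the window, extrapolated to a year
    (same formula as the reward_rate mutate in get_reward_timelines)."""
    per_epoch = (end_balance - start_balance) / start_balance / epochs_elapsed
    return per_epoch * EPOCHS_PER_YEAR * 100

# A validator going from 32 ETH to 32.01 ETH over 50 epochs (made-up numbers)
rate = annualised_reward_rate(32_000_000_000, 32_010_000_000, 50)
print(round(rate, 2))  # 51.36 (% per year)
```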
408 | 409 | ```{r} 410 | rr_series %>% 411 | group_by(client, measurement_epoch.y) %>% 412 | summarise(avg_rr = sum(avg_rr * n_group) / sum(n_group)) %>% 413 | ggplot(aes(x = measurement_epoch.y, y = avg_rr, group=client, color=client)) + 414 | geom_line() + 415 | scale_color_manual(name = "Client", values = client_colours) + 416 | xlab("Epoch") + 417 | ylab("Average reward rate") + 418 | ggtitle("Timeline of average rates of reward per client") 419 | ``` 420 | 421 | Validators controlled by the Ethereum Foundation are hosted on AWS nodes scattered across four regions in roughly equal proportions. We look at the reward rates per region. 422 | 423 | ```{r} 424 | rr_series %>% 425 | filter(team == "ef") %>% 426 | group_by(region, measurement_epoch.y) %>% 427 | summarise(avg_rr = sum(avg_rr * n_group) / sum(n_group)) %>% 428 | ggplot(aes(x = measurement_epoch.y, y = avg_rr, group=region, color=region)) + 429 | geom_line() + 430 | xlab("Epoch") + 431 | ylab("Average reward rate") + 432 | ggtitle("Timeline of average rates of reward per region") + 433 | scale_color_discrete(name = "Region") 434 | ``` 435 | 436 | Performing an omnibus test to detect a significant difference between any of the four groups, we are unable to find one at epoch 800. Not long after, an experiment was performed which we describe in the next section. Before doing so, we investigate reward rates per client for validators controlled by the client team. 437 | 438 | ### Reward rates per client team 439 | 440 | While we presented reward rates for all validators per client above, our results may have involved several competing effects. On the one hand, for each client, 20% of all validators are controlled by the EF. All validators controlled by the EF run on the same hardware for the first 1000 epochs or so (more on this in the next section).
While this setting allows us to compare the performance of all clients in a controlled environment, we also expect the client teams behind the development of their client to have better knowledge of the hardware requirements of their software. Thus in the following we present two analyses: first the reward rates of all validators controlled by the EF, per client; second, the reward rates of validators controlled by the client teams. 441 | 442 | #### EF-controlled validators 443 | 444 | ```{r} 445 | rr_series %>% 446 | filter(team == "ef") %>% 447 | group_by(client, measurement_epoch.y) %>% 448 | summarise(avg_rr = sum(avg_rr * n_group) / sum(n_group)) %>% 449 | ggplot(aes(x = measurement_epoch.y, y = avg_rr, group=client, color=client)) + 450 | geom_line() + 451 | scale_color_manual(name = "Client", values = client_colours) + 452 | xlab("Epoch") + 453 | ylab("Average reward rate") + 454 | ggtitle("Timeline of average rates of reward per client (EF-controlled)") 455 | ``` 456 | 457 | #### Client teams-controlled validators 458 | 459 | ```{r} 460 | rr_series %>% 461 | filter(team != "ef") %>% 462 | group_by(client, measurement_epoch.y) %>% 463 | summarise(avg_rr = sum(avg_rr * n_group) / sum(n_group)) %>% 464 | ggplot(aes(x = measurement_epoch.y, y = avg_rr, group=client, color=client)) + 465 | geom_line() + 466 | scale_color_manual(name = "Client", values = client_colours) + 467 | xlab("Epoch") + 468 | ylab("Average reward rate") + 469 | ggtitle("Timeline of average rates of reward per client (Client team-controlled)") 470 | ``` 471 | 472 | #### Comparison 473 | 474 | ```{r} 475 | rr_series %>% 476 | mutate(is_ef = if_else(team == "ef", "EF", "Client team")) %>% 477 | group_by(client, is_ef, team, measurement_epoch.y) %>% 478 | summarise(avg_rr = sum(avg_rr * n_group) / sum(n_group)) %>% 479 | ggplot(aes(x = measurement_epoch.y, y = avg_rr, linetype=is_ef, color=client)) + 480 | geom_line() + 481 | scale_color_manual(name = "Client", values = client_colours) + 482 | 
scale_linetype_manual(name = "Team", values = c("solid", "dashed")) + 483 | xlab("Epoch") + 484 | ylab("Average reward rate") + 485 | ggtitle("Timeline of average rates of reward per client") + 486 | facet_wrap(vars(client)) 487 | ``` 488 | 489 | ## Experiment: Scaling down nodes 490 | 491 | Around epoch 1020, nodes controlled by the EF in regions 1 and 2 were scaled down from t3.xlarge units (4 CPUs, 16GB memory, with unlimited CPU burst) to m5.large units (2 CPUs, 8GB memory, no burst). We observe a significant loss of performance despite continuous uptime. 492 | 493 | Large decreases in all plots below for regions 1 and 2 indicate when nodes were stopped and restarted, circa epoch 1000 for region 1 and epoch 1025 for region 2. When we compare the performance of validators before and after the scaling down of regions 1 and 2, we use epoch 900 as control and epoch 1300 as treatment. 494 | 495 | ```{r cache=TRUE, message=FALSE} 496 | start_epoch <- 800 497 | end_epoch <- 1400 498 | rr_series <- get_reward_timelines(start_epoch, end_epoch, step=40) 499 | ``` 500 | 501 | ```{r} 502 | rr_series %>% 503 | filter(team == "ef") %>% 504 | group_by(region, measurement_epoch.y) %>% 505 | summarise(avg_rr = sum(avg_rr * n_group) / sum(n_group)) %>% 506 | ggplot(aes(x = measurement_epoch.y, y = avg_rr, group=region, color=region)) + 507 | geom_line() + 508 | xlab("Epoch") + 509 | ylab("Average reward rate") + 510 | ggtitle("Timeline of average rates of reward per region") + 511 | scale_color_discrete(name = "Region") 512 | ``` 513 | 514 | Reward rates per client are affected in roughly equal proportions.
515 | 516 | ```{r} 517 | rr_series %>% 518 | filter(team == "ef") %>% 519 | group_by(client, measurement_epoch.y) %>% 520 | summarise(avg_rr = sum(avg_rr * n_group) / sum(n_group)) %>% 521 | ggplot(aes(x = measurement_epoch.y, y = avg_rr, group=client, color=client)) + 522 | geom_line() + 523 | scale_color_manual(name = "Client", values = client_colours) + 524 | xlab("Epoch") + 525 | ylab("Average reward rate") + 526 | ggtitle("Timeline of average rates of reward per client") 527 | ``` 528 | 529 | We explore further the difference between clients in regions 1 and 2 and in regions 3 and 4. 530 | 531 | ```{r} 532 | rr_series %>% 533 | mutate(region = if_else(region == "1" | region == "2", "Regions 1 and 2", "Regions 3 and 4")) %>% 534 | group_by(measurement_epoch.y, client, region) %>% 535 | summarise(avg_rr = sum(avg_rr * n_group) / sum(n_group)) %>% 536 | ggplot(aes(x = measurement_epoch.y, y = avg_rr, color=client, linetype=region)) + 537 | geom_line() + 538 | scale_color_manual(name = "Client", values = client_colours) + 539 | scale_linetype_manual(name = "Region", values = c("solid", "dashed")) + 540 | xlab("Epoch") + 541 | ylab("Average reward rate") + 542 | ggtitle("Timeline of average rates of reward per region") + 543 | facet_wrap(vars(client)) 544 | ``` 545 | 546 | It seems that Teku is responsible for most of the reward decrease in regions 1 and 2. Prysm registers a significant, albeit small, decrease in reward rates between the two region groups too. 547 | 548 | ### Analysis by duty 549 | 550 | ```{r} 551 | chunk_size <- 50 552 | ``` 553 | 554 | We look at four metrics across each region: 555 | 556 | - Percentage of included attestations. 557 | - Percentage of correct targets among expected attestations. 558 | - Percentage of correct heads among expected attestations. 559 | - Average inclusion delay. 560 | 561 | To obtain a time series, we divide the period between epoch `r start_epoch` and epoch `r end_epoch` into chunks of size `r chunk_size` epochs.
For each validator, we record how many included attestations appear in the dataset (ranging between 0 and `r chunk_size` for each chunk), the number of correct targets, correct heads and its average inclusion delay. We average over all validators in the EF-controlled set, measuring metrics either per client or per region. 562 | 563 | We start by looking at the metrics per region. 564 | 565 | ```{r, layout="l-body-outset", fig.height=2} 566 | val_series[ 567 | validators[node_code > 0, .(validator_index, client, region=str_c("Region ", as.character(first_digit)))], on="validator_index" 568 | ][epoch >= start_epoch & epoch < end_epoch,][ 569 | , .( 570 | included_ats=sum(included_ats)/sum(expected_ats) * 10, 571 | correct_targets=sum(correct_targets)/sum(expected_ats) * 10, 572 | correct_heads=sum(correct_heads)/sum(expected_ats) * 10, 573 | inclusion_delay=mean(inclusion_delay, na.rm = TRUE) 574 | ), 575 | by=.(epoch_chunk=epoch%/%chunk_size, region) 576 | ] %>% 577 | melt(id.vars = c("epoch_chunk", "region")) %>% 578 | ggplot() + 579 | geom_line(aes(x = epoch_chunk * chunk_size, y = value, group=region, color=region)) + 580 | xlab("Epoch") + 581 | ylab("Value") + 582 | scale_color_discrete(name = "Region") + 583 | facet_wrap(vars(variable), nrow=1, scales="free_y") + 584 | theme(axis.text.y=element_text(size = 6), 585 | axis.title.x = element_text(size = 8), 586 | axis.title.y = element_text(size = 8), 587 | axis.text.x = element_text(size = 6), 588 | legend.title = element_text(size = 8), 589 | legend.text = element_text(size = 7), 590 | strip.text = element_text(size = 6)) 591 | ``` 592 | 593 | Inclusion, target and head correctness all present insignificant differences between the two groups of regions 1 and 2 and regions 3 and 4. However, we observe an increase in the average inclusion delay, which should explain the decreased reward rates for validators in regions 1 and 2. 
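The link from inclusion delay to rewards is direct: under the phase 0 reward rules in force on Pyrmont at the time, the attester's inclusion-delay reward scales as the inverse of the delay. A Python sketch (the base reward value is illustrative, not computed from the dataset):

```python
PROPOSER_REWARD_QUOTIENT = 8  # phase 0 spec constant

def inclusion_delay_reward(base_reward, delay):
    """Attester's share of the inclusion reward in the phase 0 spec:
    the proposer takes base_reward // 8, and the attester earns the
    remaining 7/8 scaled by 1/delay (delay >= 1, in slots)."""
    proposer_share = base_reward // PROPOSER_REWARD_QUOTIENT
    max_attester = base_reward - proposer_share
    return max_attester // delay

base = 24_000  # Gwei, illustrative only
print(inclusion_delay_reward(base, 1))  # 21000: full reward at delay 1
print(inclusion_delay_reward(base, 2))  # 10500: halved at delay 2
```

A validator set whose average delay creeps from 1 towards 2 therefore loses close to half of this reward component, consistent with the region 1 and 2 reward drop.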
594 | 595 | Teku validators log a higher inclusion delay than others after the switch to smaller containers, as well as worse performance on other duties. 596 | 597 | ```{r, layout="l-body-outset", fig.height=2} 598 | val_series[ 599 | validators[node_code > 0, .(validator_index, client, first_digit)], on="validator_index" 600 | ][epoch >= start_epoch & epoch < end_epoch,][ 601 | , .( 602 | included_ats=sum(included_ats)/sum(expected_ats) * 10, 603 | correct_targets=sum(correct_targets)/sum(expected_ats) * 10, 604 | correct_heads=sum(correct_heads)/sum(expected_ats) * 10, 605 | inclusion_delay=mean(inclusion_delay, na.rm = TRUE) 606 | ), 607 | by=.(epoch_chunk=epoch%/%chunk_size, client) 608 | ] %>% 609 | melt(id.vars = c("epoch_chunk", "client")) %>% 610 | ggplot() + 611 | geom_line(aes(x = epoch_chunk * chunk_size, y = value, group=client, color=client)) + 612 | xlab("Epoch") + 613 | ylab("Value") + 614 | scale_color_manual(name = "Client", values = client_colours) + 615 | facet_wrap(vars(variable), nrow=1, scales="free_y") + 616 | theme(axis.text.y=element_text(size = 6), 617 | axis.title.x = element_text(size = 8), 618 | axis.title.y = element_text(size = 8), 619 | axis.text.x = element_text(size = 6), 620 | legend.title = element_text(size = 8), 621 | legend.text = element_text(size = 7), 622 | strip.text = element_text(size = 6)) 623 | ``` 624 | -------------------------------------------------------------------------------- /posdata/notebooks/pyrmont_explore.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Pyrmont data" 3 | author: 4 | - name: Barnabé Monnot 5 | url: https://twitter.com/barnabemonnot 6 | affiliation: Robust Incentives Group, Ethereum Foundation 7 | affiliation_url: https://github.com/ethereum/rig 8 | date: "`r Sys.Date()`" 9 | output: 10 | distill::distill_article: 11 | toc: yes 12 | toc_depth: 3 13 | html_document: 14 | toc: yes 15 | toc_depth: '3' 16 | df_print: paged 17 | 
description: | 18 | Onwards! 19 | --- 20 | 21 | ```{r setup, include=FALSE} 22 | library(tidyverse) 23 | library(data.table) 24 | library(patchwork) 25 | library(rmarkdown) 26 | library(ineq) 27 | library(infer) 28 | 29 | source(here::here("notebooks/lib.R")) 30 | 31 | options(digits=10) 32 | options(scipen = 999) 33 | 34 | # Make the plots a bit less pixellated 35 | knitr::opts_chunk$set(dpi = 300) 36 | knitr::opts_chunk$set(message = FALSE) 37 | knitr::opts_chunk$set(warning = FALSE) 38 | 39 | # A minimal theme I like 40 | newtheme <- theme_grey() + theme( 41 | axis.text = element_text(size = 9), 42 | axis.title = element_text(size = 12), 43 | axis.line = element_line(colour = "#000000"), 44 | panel.grid.major = element_blank(), 45 | panel.grid.minor = element_blank(), 46 | panel.background = element_blank(), 47 | legend.title = element_text(size = 12), 48 | legend.text = element_text(size = 10), 49 | legend.box.background = element_blank(), 50 | legend.key = element_blank(), 51 | strip.text.x = element_text(size = 10), 52 | strip.background = element_rect(fill = "white") 53 | ) 54 | theme_set(newtheme) 55 | 56 | myred <- "#F05431" 57 | myyellow <- "#FED152" 58 | mygreen <- "#BFCE80" 59 | client_colours <- c("#000011", "#ff9a02", "#eb4a9b", "#7dc19e") 60 | 61 | start_epoch <- 0 62 | end_epoch <- 2820 63 | slots_per_epoch <- 32 64 | until_slot <- (end_epoch + 2) * slots_per_epoch - 1 65 | slot_chunk_res <- until_slot %/% 15 66 | slots_per_year <- 365.25 * 24 * 60 * 60 / 12 67 | epochs_per_year <- slots_per_year / slots_per_epoch 68 | ``` 69 | 70 | ```{r eval=FALSE} 71 | # Run this to add to the dataset 72 | start_epoch <- 2601 73 | end_epoch <- 2820 74 | 75 | all_bxs <- fread(here::here("pyrmont_data/all_bxs.csv")) 76 | all_ats <- fread(here::here("pyrmont_data/all_ats.csv")) 77 | committees <- fread(here::here("pyrmont_data/committees.csv")) 78 | validators <- fread(here::here("pyrmont_data/initial_validators.csv")) 79 | val_series <- 
fread(here::here("pyrmont_data/val_series.csv")) 80 | stats_per_slot <- fread(here::here("pyrmont_data/stats_per_slot.csv")) 81 | 82 | bxs_and_ats <- start_epoch:end_epoch %>% 83 | map(get_blocks_and_attestations) %>% 84 | purrr::transpose() %>% 85 | map(rbindlist) 86 | 87 | new_bxs <- copy(bxs_and_ats$block) 88 | new_bxs[, declared_client := find_client(graffiti)] 89 | list(all_bxs, new_bxs) %>% rbindlist() %>% fwrite(here::here("pyrmont_data/all_bxs.csv")) 90 | rm(new_bxs) 91 | 92 | list(all_ats, bxs_and_ats$attestations) %>% rbindlist() %>% fwrite(here::here("pyrmont_data/all_ats.csv")) 93 | rm(bxs_and_ats) 94 | 95 | new_committees <- start_epoch:end_epoch %>% 96 | map(get_committees) %>% 97 | rbindlist() 98 | list(committees, new_committees) %>% rbindlist() %>% fwrite(here::here("pyrmont_data/committees.csv")) 99 | rm(new_committees) 100 | 101 | block_root_at_slot <- get_block_root_at_slot(fread(here::here("pyrmont_data/all_bxs.csv"))) 102 | all_ats <- fread(here::here("pyrmont_data/all_ats.csv")) 103 | committees <- fread(here::here("pyrmont_data/committees.csv")) 104 | get_correctness_data(all_ats, block_root_at_slot) 105 | 106 | new_val_series <- get_stats_per_val( 107 | all_ats[att_slot >= (start_epoch-1) * slots_per_epoch & att_slot < end_epoch * slots_per_epoch], 108 | block_root_at_slot, committees = committees, validators = validators, chunk_size = 10) 109 | list(val_series, new_val_series) %>% rbindlist() %>% fwrite(here::here("pyrmont_data/val_series.csv")) 110 | rm(new_val_series) 111 | 112 | new_stats_per_slot <- get_stats_per_slot( 113 | all_ats[att_slot >= (start_epoch-1) * slots_per_epoch & att_slot < end_epoch * slots_per_epoch], 114 | committees) 115 | list(stats_per_slot, new_stats_per_slot) %>% rbindlist() %>% fwrite(here::here("pyrmont_data/stats_per_slot.csv")) 116 | rm(new_stats_per_slot) 117 | ``` 118 | 119 | ```{r eval=FALSE} 120 | all_bxs <- fread(here::here("pyrmont_data/all_bxs.csv")) 121 | all_ats <- 
fread(here::here("pyrmont_data/all_ats.csv")) 122 | committees <- fread(here::here("pyrmont_data/committees.csv")) 123 | validators <- fread(here::here("pyrmont_data/initial_validators.csv")) 124 | block_root_at_slot <- get_block_root_at_slot(fread(here::here("pyrmont_data/all_bxs.csv"))) 125 | get_correctness_data(all_ats, block_root_at_slot) 126 | stats_per_val <- get_stats_per_val( 127 | all_ats[att_slot < end_epoch * slots_per_epoch], 128 | block_root_at_slot, committees = committees, validators = validators, chunk_size = 10) 129 | stats_per_slot <- get_stats_per_slot( 130 | all_ats[att_slot < end_epoch * slots_per_epoch], committees) 131 | 132 | stats_per_val %>% fwrite(here::here("pyrmont_data/val_series.csv")) 133 | stats_per_slot %>% fwrite(here::here("pyrmont_data/stats_per_slot.csv")) 134 | ``` 135 | 136 | ```{r eval=FALSE} 137 | (55001:65000) %>% 138 | map(function(current_slot) { 139 | if (current_slot %% 1000 == 0) { print(str_c("slot ", current_slot)) } 140 | get_aggregate_info(all_ats[slot == current_slot]) 141 | }) %>% 142 | bind_rows() %>% 143 | group_by(slot) %>% 144 | summarise(n_subset = sum(n_subset), 145 | n_subset_ind = sum(n_subset_ind), 146 | n_strongly_clashing = sum(n_strongly_clashing), 147 | n_weakly_clashing = sum(n_weakly_clashing)) %>% 148 | union(read_csv(here::here("pyrmont_data/subset_ats.csv"))) %>% 149 | write_csv(here::here("pyrmont_data/subset_ats.csv")) 150 | ``` 151 | 152 | ```{r cache=TRUE, cache.lazy=FALSE} 153 | all_bxs <- fread(here::here("pyrmont_data/all_bxs.csv")) 154 | all_ats <- fread(here::here("pyrmont_data/all_ats.csv")) 155 | validators <- fread(here::here("pyrmont_data/initial_validators.csv")) 156 | block_root_at_slot <- get_block_root_at_slot(all_bxs) 157 | get_correctness_data(all_ats, block_root_at_slot) 158 | stats_per_slot <- fread(here::here("pyrmont_data/stats_per_slot.csv")) 159 | appearances_in_aggs <- get_appearances_in_agg(all_ats) 160 | myopic_redundant_ats <- get_myopic_redundant_ats(all_ats) 161 
| strong_redundant_ats <- get_strong_redundant_ats(all_ats) 162 | subset_ats <- fread(here::here("pyrmont_data/subset_ats.csv")) 163 | ``` 164 | 165 | We look at data between epochs 0 and `r end_epoch` (`r get_date_from_epoch(end_epoch)`) and report updated metrics for the Pyrmont eth2 testnet. You can also find a similar notebook for [Medalla here](https://ethereum.github.io/rig/medalla-data-challenge/notebooks/explore.html). 166 | 167 | 170 | 171 | ## Performance of duties 172 | 173 | ### Attester duties 174 | 175 | We compare the number of included attestations with the number of expected attestations. 176 | 177 | ```{r} 178 | stats_per_slot %>% 179 | .[, slot_chunk:=att_slot %/% slot_chunk_res] %>% 180 | filter(slot_chunk != max(slot_chunk)) %>% 181 | group_by(slot_chunk) %>% 182 | summarise(percent_received = sum(included_ats) / sum(expected_ats) * 100) %>% 183 | ggplot() + 184 | geom_line(aes(x = slot_chunk * slot_chunk_res %/% slots_per_epoch, y = percent_received), colour = myred) + 185 | geom_point(aes(x = slot_chunk * slot_chunk_res %/% slots_per_epoch, y = percent_received), 186 | colour = myred) + 187 | geom_text(aes( 188 | x = slot_chunk * slot_chunk_res %/% slots_per_epoch, y = percent_received, 189 | label = round(percent_received, digits = 1)), 190 | colour = myred, alpha = 0.7, nudge_y = -4) + 191 | ggtitle("Proportion of included attestations", 192 | subtitle=str_c("Group size = ", slot_chunk_res, " slots")) + 193 | xlab("Epoch") + 194 | ylab("Percent attested and included") + 195 | ylim(0, 100) 196 | ``` 197 | 198 | ### Proposer duties 199 | 200 | How many blocks are there in the canonical chain? 
201 | 202 | ```{r} 203 | tibble(slot = 0:until_slot) %>% 204 | left_join(all_bxs %>% 205 | select(slot) %>% 206 | mutate(proposed = 1), 207 | by = c("slot" = "slot")) %>% 208 | replace_na(list(proposed = 0)) %>% 209 | mutate(slot_chunk = slot %/% slot_chunk_res) %>% 210 | filter(slot_chunk != max(slot_chunk)) %>% 211 | group_by(slot_chunk) %>% 212 | summarise(percent_proposed = sum(proposed) / n() * 100) %>% 213 | ggplot() + 214 | geom_line(aes(x = slot_chunk * slot_chunk_res %/% slots_per_epoch, y = percent_proposed), colour = myred) + 215 | geom_point(aes(x = slot_chunk * slot_chunk_res %/% slots_per_epoch, y = percent_proposed), 216 | colour = myred) + 217 | geom_text(aes( 218 | x = slot_chunk * slot_chunk_res %/% slots_per_epoch, y = percent_proposed, 219 | label = round(percent_proposed, digits = 1)), 220 | colour = myred, alpha = 0.7, nudge_y = -4) + 221 | ggtitle("Proportion of included blocks", 222 | subtitle=str_c("Group size = ", slot_chunk_res, " slots")) + 223 | xlab("Epoch") + 224 | ylab("Percent proposed and included") + 225 | ylim(0, 100) 226 | ``` 227 | 228 | ## Correctness of attestations 229 | 230 | ### Target checkpoint 231 | 232 | Attestations vouch for some target checkpoint to justify. We can check whether they vouched for the correct one by comparing their `target_block_root` with the latest known block root as of the start of the attestation epoch (that's a mouthful). How many individual attestations correctly attest for the target? 
233 | 234 | ```{r} 235 | n_individual_ats <- stats_per_slot %>% 236 | pull(included_ats) %>% 237 | sum() 238 | n_correct_target_ats <- stats_per_slot %>% 239 | pull(correct_targets) %>% 240 | sum() 241 | 242 | tibble( 243 | Name = c("Individual attestations", "Correct target attestations", "Percent correct"), 244 | Value = c(n_individual_ats, n_correct_target_ats, round(n_correct_target_ats / n_individual_ats * 100, digits = 2) 245 | ) 246 | ) %>% 247 | paged_table() 248 | ``` 249 | 250 | How does the correctness evolve over time? 251 | 252 | ```{r} 253 | stats_per_slot %>% 254 | .[, slot_chunk:=att_slot %/% slot_chunk_res] %>% 255 | .[, .(percent_correct_target=sum(correct_targets) / sum(included_ats) * 100), by=slot_chunk] %>% 256 | ggplot() + 257 | geom_line(aes(x = slot_chunk * slot_chunk_res %/% slots_per_epoch, y = percent_correct_target), 258 | colour = mygreen) + 259 | geom_point(aes(x = slot_chunk * slot_chunk_res %/% slots_per_epoch, y = percent_correct_target), 260 | colour = mygreen) + 261 | geom_text(aes( 262 | x = slot_chunk * slot_chunk_res %/% slots_per_epoch, y = percent_correct_target, 263 | label = round(percent_correct_target, digits = 1)), 264 | colour = mygreen, alpha = 0.7, nudge_y = -4) + 265 | ggtitle("Correct targets in included attestations", 266 | subtitle=str_c("Group size = ", slot_chunk_res, " slots")) + 267 | xlab("Epoch") + 268 | ylab("Percent correct targets") + 269 | ylim(0, 100) 270 | ``` 271 | 272 | ### Head of the chain 273 | 274 | Attestations must also vote for the correct head of the chain, as returned by the GHOST fork choice rule. To check for correctness, one looks at the latest block known as of the attestation slot. This block may itself have been proposed at the attestation slot `att_slot`. When the `beacon_block_root` attribute of the attestation and the latest block root match, the head is correct!
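The check described above can be sketched in Python, mirroring the `block_root_at_slot` lookup used throughout the notebook (the inputs here are hypothetical stand-ins for the real tables):

```python
def is_correct_head(att_slot, beacon_block_root, block_root_at_slot):
    """An attestation's head vote is correct when its beacon_block_root
    equals the latest known block root as of its attestation slot.

    block_root_at_slot maps each slot to the root of the last block
    proposed at or before that slot."""
    return beacon_block_root == block_root_at_slot[att_slot]

# Slot 5 had no block, so the latest known root at slot 5 is still 0xaa:
roots = {4: "0xaa", 5: "0xaa", 6: "0xbb"}
print(is_correct_head(5, "0xaa", roots))  # True
print(is_correct_head(6, "0xaa", roots))  # False: a block arrived at slot 6
```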
275 | 276 | ```{r} 277 | n_correct_head_ats <- stats_per_slot %>% 278 | pull(correct_heads) %>% 279 | sum() 280 | 281 | tibble( 282 | Name = c("Individual attestations", "Correct head attestations", "Percent correct"), 283 | Value = c(n_individual_ats, n_correct_head_ats, round(n_correct_head_ats / n_individual_ats * 100, digits = 2) 284 | ) 285 | ) %>% 286 | paged_table() 287 | ``` 288 | 289 | How does the correctness evolve over time? 290 | 291 | ```{r} 292 | stats_per_slot %>% 293 | .[, slot_chunk:=att_slot %/% slot_chunk_res] %>% 294 | .[, .(percent_correct_head=sum(correct_heads) / sum(included_ats) * 100), by=slot_chunk] %>% 295 | ggplot() + 296 | geom_line(aes(x = slot_chunk * slot_chunk_res %/% slots_per_epoch, y = percent_correct_head), 297 | colour = "purple") + 298 | geom_point(aes(x = slot_chunk * slot_chunk_res %/% slots_per_epoch, y = percent_correct_head), 299 | colour = "purple") + 300 | geom_text(aes( 301 | x = slot_chunk * slot_chunk_res %/% slots_per_epoch, y = percent_correct_head, 302 | label = round(percent_correct_head, digits = 1)), 303 | colour = "purple", alpha = 0.7, nudge_y = -4) + 304 | ggtitle("Correct heads in included attestations", 305 | subtitle=str_c("Samples = ", until_slot, " slots; group size = ", slot_chunk_res, " slots.")) + 306 | xlab("Epoch") + 307 | ylab("Percent correct head") + 308 | ylim(0, 100) 309 | ``` 310 | 311 | ## Aggregate attestations 312 | 313 | eth2 is built to scale to tens of thousands of validators. This introduces overhead from message passing (and inclusion) when these validators are asked to vote on the canonical chain. To alleviate the beacon chain, votes (a.k.a. **attestations**) can be **aggregated**. 314 | 315 | In particular, an attestation contains five attributes: 316 | 317 | - The slot it is attesting for ("**attestation slot**"). 318 | - The index of its committee in the slot ("**attestation committee**"). 319 | - Its vote for the head of the beacon chain, given by the fork choice rule. 
320 | - Its vote for the source, i.e., the last justified checkpoint in its view. 321 | - Its vote for the target, i.e., the checkpoint to be justified in its view. 322 | 323 | Since we expect validators to broadly agree in times of low latency, we also expect that a lot of attestations will share these same five attributes. We can aggregate such a set of attestations $I$ into a single aggregate. 324 | 325 | 328 | 329 | When we have $N$ active validators, about $N / 32$ are selected to attest for each slot in an epoch. The validators for a slot $s$ are further divided between a few committees. Identical votes from validators in the same committee can be aggregated. Assume that two aggregate attestations were formed from attestations of validators in set $C(s, c)$, validators in committee $c$ attesting for slot $s$. One aggregate contains attestations from set $I \subseteq C(s, c)$ and the other attestations from set $J \subseteq C(s, c)$. We have two cases: 330 | 331 | - When the intersection of $I$ and $J$ is non-empty, we cannot aggregate the two aggregates further. 332 | - When the intersection of $I$ and $J$ is empty, the two aggregates can themselves be aggregated, into one containing attestations from validator set $I \cup J$. 333 | 334 | ### How many attestations are contained in aggregates? 335 | 336 | ```{r message=FALSE} 337 | all_ats[, contained_ats:=str_count(attesting_indices, "1")] 338 | 339 | all_ats %>% 340 | .[, .(count=.N), by=contained_ats] %>% 341 | ggplot() + 342 | geom_col(aes(x = contained_ats, y = count), fill=myred) + 343 | ggtitle("Number of attestations per aggregate (histogram)", 344 | subtitle = str_c("Aggregate attestations = ", nrow(all_ats))) + 345 | xlab("Number of attestations in aggregate") + 346 | ylab("Count") 347 | ``` 348 | 349 | We can plot the same, weighing by the size of the validator set in the aggregate, to count how many attestations each size of aggregates included. 
350 | 351 | ```{r} 352 | all_ats %>% 353 | .[, .(count=.N * contained_ats), by=contained_ats] %>% 354 | ggplot() + 355 | geom_col(aes(x = contained_ats, y = count), fill=myred) + 356 | ggtitle("Number of attestations per aggregate (histogram, weighted)", 357 | subtitle = str_c("Aggregate attestations = ", nrow(all_ats))) + 358 | xlab("Number of attestations in aggregate") + 359 | ylab("Number of attestations") 360 | ``` 361 | 362 | Overall, we can plot the [Lorenz curve](https://en.wikipedia.org/wiki/Lorenz_curve) of aggregate attestations. This allows us to find out the share of attestations held by the 20% largest aggregates. 363 | 364 | ```{r} 365 | L <- Lc(all_ats$contained_ats) 366 | ``` 367 | 368 | ```{r} 369 | L_tibble <- tibble(p = L$p, L = L$L) %>% 370 | filter(row_number() %% 100000 == 1 | row_number() == max(row_number())) 371 | 372 | L_80q <- quantile(L$L, 0.8, names=FALSE) %>% 373 | round(digits = 2) 374 | 375 | L_tibble %>% 376 | ggplot() + 377 | geom_line(aes(x = p, y = L), colour = myred, size = 1.1) + 378 | geom_abline(slope = 1, intercept = 0, linetype="dotted") + 379 | geom_vline(xintercept = 0.8, colour = "steelblue", linetype = "dotted", size = 1.1) + 380 | geom_hline(yintercept = L_80q, colour = "steelblue", size = 1.1) + 381 | scale_x_continuous( 382 | breaks = sort(c(c(0.8), with(L_tibble, pretty(range(p))))), 383 | ) + 384 | scale_y_continuous( 385 | breaks = sort(c(c(L_80q), with(L_tibble, pretty(range(p))))), 386 | ) + 387 | ggtitle("Lorenz curve of aggregate attestation sizes", 388 | subtitle = str_c("Aggregate attestations = ", nrow(all_ats))) + 389 | xlab("Aggregation percentile") + 390 | ylab("Cumulative share of attestations") 391 | ``` 392 | 393 | The answer is `r (100 - L_80q * 100)`%. 394 | 395 | #### How much savings did aggregates provide? 396 | 397 | In the previous plots, we "double-counted" some attestations which were included in several aggregates. 
Here, we tally the number of **individual attestations**, unique votes from validators. We compare the number of individual attestations to the number of aggregates included in blocks. 398 | 399 | ```{r} 400 | n_aggregates <- all_ats %>% nrow() 401 | savings_ratio <- round(n_individual_ats / n_aggregates, digits=2) 402 | 403 | tibble(Name = c("Individual attestations", "Included aggregates", "Savings ratio"), 404 | Value = c(n_individual_ats, n_aggregates, 405 | savings_ratio)) %>% 406 | paged_table() 407 | ``` 408 | 409 | We have `r round(n_individual_ats / n_aggregates, digits = 2)` times more individual attestations than aggregates, meaning that if we were not aggregating, we would have `r round(n_individual_ats / n_aggregates, digits = 2)` times as much data on-chain. 410 | 411 | ### In how many aggregate attestations is an attestation included? 412 | 413 | Individual attestations can be included in several aggregates. For each attestation, we count how many aggregates it appears in, and produce the following histogram. 414 | 415 | ```{r} 416 | appearances_in_aggs %>% 417 | ggplot() + 418 | geom_col(aes(x = appearances, y = count), fill=myred) + 419 | scale_y_log10() + 420 | ggtitle("Individual attestation inclusions in an aggregate (histogram)", 421 | subtitle = str_c("Individual attestations = ", n_individual_ats)) + 422 | xlab("Number of inclusions") + 423 | ylab("Count (log10)") 424 | ``` 425 | 426 | Most attestations were included in an aggregate once only. 427 | 428 | ### How many redundant aggregate attestations are there? 429 | 430 | We call **myopic redundant** those identical aggregate attestations (same five attributes and same set of validator indices) that are included in more than one block. This can happen when a block producer does not see that an aggregate was previously included (e.g., because of latency), or simply when the block producer doesn't pay attention and greedily adds as many aggregates as they know about.
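The detection can be sketched with data.table. This is an illustrative reconstruction, not the notebook's actual helper (which lives in `lib.R`): the column names for the source and target roots (`source_block_root`, `target_block_root`) are assumptions, while `slot` (inclusion slot), `att_slot`, `committee_index`, `beacon_block_root` and the `attesting_indices` bitfield follow the conventions used above.

```r
# Illustrative sketch (assumed column names): an aggregate is myopic
# redundant when the exact same aggregate (same five attributes and same
# attesting_indices bitfield) appears in more than one included block.
library(data.table)

att_cols <- c("att_slot", "committee_index", "beacon_block_root",
              "source_block_root", "target_block_root", "attesting_indices")

# `slot` is the inclusion slot: count distinct including blocks per aggregate
myopic_redundant_sketch <- all_ats[
  , .(appearances = uniqueN(slot)), by = att_cols
][appearances > 1]
```

Grouping by all six columns identifies "the same aggregate", and `uniqueN(slot)` counts in how many distinct blocks it was included.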
431 | 432 | ```{r} 433 | myopic_redundant_ats %>% 434 | ggplot() + 435 | geom_col(aes(x = appearances, y = count), fill=myred) + 436 | ggtitle("Number of times one aggregate attestation is included (histogram)", 437 | subtitle = str_c("Aggregate attestations = ", nrow(all_ats))) + 438 | xlab("Number of times redundant") + 439 | ylab("Count (log10)") + 440 | scale_y_log10() 441 | ``` 442 | 443 | The mode is 1, which is also the optimal case. A redundant aggregate does not have much purpose apart from bloating the chain. 444 | 445 | We could generalise this definition and call **redundant** an aggregate included in a block for which all of its attesting indices were previously seen in other aggregates. We didn't compute these as they are much harder to count. 446 | 447 | ### How many times did a block include the exact same aggregate attestation more than once? 448 | 449 | We could call these **strongly redundant**, as this is pure waste. 450 | 451 | ```{r} 452 | n_strong_redundant_twice <- strong_redundant_ats %>% 453 | pull(count) %>% 454 | pluck(2) 455 | n_strong_redundant_over_twice <- strong_redundant_ats %>% 456 | pull(count) %>% 457 | sum() - n_strong_redundant_twice - strong_redundant_ats %>% pull(count) %>% pluck(1) 458 | strong_redundant_ats %>% 459 | paged_table() 460 | ``` 461 | 462 | We see that `r n_strong_redundant_twice` times, identical aggregates were included twice in the same block. 463 | 464 | ### How many aggregates in a block are included by another aggregate in the same block? 465 | 466 | We now define **subset aggregates**. Suppose two aggregates in the same block with equal attributes (slot, committee index, beacon root, source root and target root) include validator sets $I$ and $J$ respectively. If we have $I \subset J$, i.e., if all validators of the first aggregate are also included in the second aggregate (but the reverse is not true), then we call the first aggregate a **subset aggregate** of the second. 
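The subset relation can be tested directly on two `attesting_indices` bitfields, encoded as "0"/"1" strings as in the data above. The helper below is illustrative, not code from `lib.R`:

```r
# Illustrative helper: is the aggregate with bitfield bits_i a subset
# aggregate of the one with bitfield bits_j (same five attributes assumed)?
is_subset_of <- function(bits_i, bits_j) {
  i <- strsplit(bits_i, "")[[1]] == "1"
  j <- strsplit(bits_j, "")[[1]] == "1"
  all(j[i]) && any(j & !i)  # every validator of I is in J, and J has more
}

is_subset_of("00100", "01100")  # TRUE: strict subset
is_subset_of("00100", "00100")  # FALSE: equal sets are strongly redundant instead
```

Note the strictness condition `any(j & !i)`: equal bitfields are not subset aggregates but strongly redundant ones, matching the definition above.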
467 | 468 | Subset aggregates, much like redundant aggregate attestations (equal aggregates included in more than one block of the canonical chain), can be removed from the finalised chain without losing any voting information. In fact, detecting subset aggregates requires much less local information than detecting redundant aggregates. To root out subset aggregates, a client must simply ensure that no aggregate it is prepared to include in a block is a subset aggregate of another. Meanwhile, to root out redundant aggregates, a client must check all past blocks (up to the inclusion limit of 32 slots) to ensure that it is not including a redundant aggregate. In a sense, subset aggregates are "worse", since they should be easier to root out. 469 | 470 | ```{r} 471 | subset_until_slot <- 65000 472 | ``` 473 | 474 | So among all included aggregates in blocks, how many are subset aggregates? We count these instances for attestations included in blocks until epoch `r subset_until_slot %/% 32` (`r get_date_from_epoch(subset_until_slot %/% 32)`). 475 | 476 | 479 | 480 | ```{r} 481 | n_aggregates_until <- all_ats[slot < subset_until_slot] %>% 482 | nrow() 483 | 484 | n_subset_ats <- sum(subset_ats$n_subset) 485 | percent_subset <- round(n_subset_ats / n_aggregates_until, digits=4) * 100 486 | tibble(Name = c("Subset aggregates", "Included aggregates", "Percentage of subset aggregates"), 487 | Value = c(n_subset_ats, n_aggregates_until, 488 | percent_subset)) %>% 489 | paged_table() 490 | ``` 491 | 492 | We find that `r percent_subset`% of included aggregates are subset aggregates. 493 | 494 | #### How often are subset aggregates of size 1? 495 | 496 | In Medalla, we observed that subset aggregates were often of size 1. In other words, frequently a "big" aggregate is included, aggregating many validators, and then a second aggregate of size 1, namely, a simple attestation, is also included, even though this simple attestation is already accounted for by the first, larger aggregate.
497 | 498 | ```{r} 499 | n_subset_ind_ats <- sum(subset_ats$n_subset_ind) 500 | percent_subset_ind <- round(n_subset_ind_ats / n_subset_ats, digits=4) * 100 501 | tibble(Name = c("Subset aggregates of size 1", "Subset aggregates", 502 | "Percentage of subset aggregates of size 1"), 503 | Value = c(n_subset_ind_ats, n_subset_ats, 504 | percent_subset_ind)) %>% 505 | paged_table() 506 | ``` 507 | 508 | In Pyrmont, clients seem to have improved their block-packing algorithms, since we do not find any subset aggregate of size 1. 509 | 510 | ### How many times were clashing attestations included in blocks? 511 | 512 | We look at situations where two aggregate attestations are included in the same block, with identical attributes (same attesting slot, attesting committee, beacon chain head, source block and target block) but different attesting indices and neither one is a subset of the other. We define the following two notions, assuming the two aggregate attestations include attestations of validator sets $I$ and $J$ respectively: 513 | 514 | - **Strongly clashing:** The two aggregates share some validator indices, i.e., $I \cap J \neq \emptyset$. The two aggregate attestations were incompatible, so could not be aggregated further. 515 | - **Weakly clashing:** The two aggregates have different validator indices, i.e., $I \cap J = \emptyset$. The two aggregate attestations could have been aggregated further. 516 | 517 | Let's first count how many aggregates are strongly clashing in blocks before slot `r subset_until_slot`. 
518 | 519 | ```{r} 520 | n_strongly_clashing <- sum(subset_ats$n_strongly_clashing) 521 | percent_strongly_clashing <- round(n_strongly_clashing / n_aggregates_until, digits=4) * 100 522 | tibble(Name = c("Strongly clashing aggregates", "Included aggregates", "Percentage of strongly clashing"), 523 | Value = c(n_strongly_clashing, n_aggregates_until, 524 | percent_strongly_clashing)) %>% 525 | paged_table() 526 | ``` 527 | 528 | How many are weakly clashing? 529 | 530 | ```{r} 531 | n_weakly_clashing <- sum(subset_ats$n_weakly_clashing) 532 | percent_weakly_clashing <- round(n_weakly_clashing / n_aggregates_until, digits=4) * 100 533 | tibble(Name = c("Weakly clashing aggregates", "Included aggregates", "Percentage of weakly clashing"), 534 | Value = c(n_weakly_clashing, n_aggregates_until, 535 | percent_weakly_clashing)) %>% 536 | paged_table() 537 | ``` 538 | 539 | None! That's pretty great. It means blocks always include attestations that are aggregated as much as possible, so we have a local optimum to the aggregation problem. 540 | 541 | Note that optimally aggregating a set of aggregates is NP-complete! Here is a reduction of the optimal aggregation problem to [graph colouring](https://en.wikipedia.org/wiki/Graph_coloring). Take aggregate attestations as the vertices of a graph, and draw an edge between two vertices if the validator sets of the two aggregates have a non-empty overlap. In graph colouring, we look for the minimum number of colours needed to assign a colour to each vertex such that two connected vertices do not share a colour. All vertices that share the same colour have pairwise-disjoint validator sets, and thus can be combined into a single aggregate. The minimum number of colours needed to colour the graph therefore tells us the smallest number of aggregates into which a given set of aggregates can be combined.
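To make the reduction concrete, here is a sketch of a greedy grouping heuristic in R. Greedy colouring only gives an upper bound on the optimal number of groups (finding the minimum is the NP-hard part); the function and its toy input are illustrative, not part of the notebook:

```r
# Illustrative sketch: greedily merge aggregates into compatible groups.
# Each aggregate is a vector of validator indices; two aggregates conflict
# (share a graph edge) when their validator sets intersect. The number of
# groups returned upper-bounds the optimal aggregation.
greedy_aggregate <- function(validator_sets) {
  groups <- list()                       # each group: union of disjoint sets
  for (s in validator_sets) {
    placed <- FALSE
    for (k in seq_along(groups)) {
      if (length(intersect(groups[[k]], s)) == 0) {  # no conflict: merge
        groups[[k]] <- union(groups[[k]], s)
        placed <- TRUE
        break
      }
    }
    if (!placed) groups[[length(groups) + 1]] <- s   # open a new "colour"
  }
  groups
}

length(greedy_aggregate(list(c(1, 2), c(2, 3), c(4, 5))))  # 2 groups
```

In the toy input, `c(2, 3)` clashes with `c(1, 2)` and opens a second group, while `c(4, 5)` merges into the first.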
542 | 543 | ### Aggregates glossary 544 | 545 | ```{r} 546 | n_size_1_ags <- all_ats %>% 547 | .[, .(count=.N), by=contained_ats] %>% 548 | pull(count) %>% 549 | pluck(1) 550 | n_myopic_redundant <- readRDS(here::here("rds_data/redundant_ats.rds")) %>% 551 | filter(appearances > 1) %>% 552 | pull(count) %>% 553 | sum() 554 | percent_myopic_redundant <- round(n_myopic_redundant / n_aggregates, digits=4) * 100 555 | ``` 556 | 557 | We've looked at aggregate attestations in a few different ways. We offer here a table to summarise the definitions we have introduced and associated statistics. 558 | 559 | ::: l-body-outset 560 | | Name | Explanation | Statistics | Recommendation | 561 | |-|-|-|-| 562 | | Aggregate | Attestation summarising the vote of validators in a single committee | There are `r n_aggregates` aggregates included from slot 0 to slot `r until_slot` | x | 563 | | Individual attestation | A single, unique, validator vote | There are `r n_individual_ats` individual attestations | x | 564 | | Savings ratio | The ratio of individual attestations to aggregate attestations | The savings ratio is `r savings_ratio` | Keep it up! | 565 | | Redundant aggregate | An aggregate containing validator attestations which were all already included on-chain, possibly across several aggregates with different sets of attesting indices | x | Don't include these | 566 | | Myopic redundant aggregate | An aggregate included more than once on-chain, always with the same attesting indices | There are `r n_myopic_redundant` myopic redundant aggregates, `r percent_myopic_redundant`% of all aggregates | These are redundant too: don't include them either | 567 | ::: 568 | 569 | In the next table, we present definitions classifying aggregates when two or more instances are included _in the same block_ with the same five attributes (attesting slot and committee, beacon root, source root and target root). 
570 | 571 | ::: l-body-outset 572 | | Name | Explanation | Statistics | Recommendation | 573 | |-|-|-|-| 574 | | Strongly redundant aggregate | An aggregate included more than once _in the same block_ | There are `r n_strong_redundant_twice + n_strong_redundant_over_twice` strongly redundant aggregates | Keep only one of the strongly redundant aggregates | 575 | | Subset aggregate | _If not strongly redundant_, an aggregate fully contained in another aggregate included _in the same block_ | There are `r n_subset_ats` subset aggregates until slot `r subset_until_slot`, `r percent_subset`% of all aggregates until slot `r subset_until_slot` | Drop all subset aggregates | 576 | | Strongly clashing aggregates | _If not a subset aggregate_, an aggregate with attesting indices $I$ such that there exists another aggregate _attesting for the same in the same block_ with attesting indices $J$ and $I \cap J \neq \emptyset$ | There are `r n_strongly_clashing` strongly clashing aggregates until slot `r subset_until_slot`, `r percent_strongly_clashing`% of all aggregates until slot `r subset_until_slot` | These cannot be aggregated further. Do nothing | 577 | | Weakly clashing aggregates | _If not a strongly clashing aggregate_, an aggregate with attesting indices $I$ such that there exists another aggregate _attesting for the same in the same block_ with attesting indices $J$ | There are `r n_weakly_clashing` weakly clashing aggregates until slot `r subset_until_slot`, `r percent_weakly_clashing`% of all aggregates until slot `r subset_until_slot` | These can be aggregated further into one aggregate with attesting indices $I \cup J$. In an ideal world, we have 0 weakly clashing aggregates | 578 | ::: 579 | 580 | Size one aggregates do not appear often in the dataset, [an improvement compared to Medalla](https://ethereum.github.io/rig/medalla-data-challenge/notebooks/explore.html#aggregates-glossary). 
581 | 582 | ::: l-body-outset 583 | | Name | Explanation | Statistics | Recommendation | 584 | |-|-|-|-| 585 | | Subset aggregate of size 1 | A subset aggregate which is an unaggregated individual attestation | There are `r n_subset_ind_ats` subset aggregates of size 1 until slot `r subset_until_slot`, `r percent_subset_ind`% of all subset aggregates until slot `r subset_until_slot` | Definitely drop these | 586 | | Aggregate of size 1 | An attestation included without being aggregated | There are `r n_size_1_ags` aggregates of size 1 | Either it is weakly clashing, so aggregate it further; or it is a subset aggregate, so drop it; or it is a redundant, drop it; or it is new and never aggregated, keep it | 587 | ::: -------------------------------------------------------------------------------- /posdata/notebooks/uptime_reward_gif.R: -------------------------------------------------------------------------------- 1 | library(tidyverse) 2 | library(animation) 3 | library(gganimate) 4 | 5 | source(here::here("notebooks/lib.R")) 6 | 7 | all_ats <- fread(here::here("rds_data/all_ats.csv")) 8 | max(all_ats$slot) %/% 32 9 | 10 | slots_per_epoch <- 32 11 | slots_per_year <- 365.25 * 24 * 60 * 60 / 12 12 | epochs_per_year <- slots_per_year / slots_per_epoch 13 | 14 | chunk_size <- 100 15 | ats_over_time <- seq(0, 20000 - chunk_size, chunk_size) %>% 16 | map(function(epoch) { 17 | print(str_c("Epoch ", epoch)) 18 | all_ats[(att_slot >= epoch * slots_per_epoch) & (att_slot < ((epoch + chunk_size) * slots_per_epoch - 1)),] %>% 19 | get_exploded_ats() %>% 20 | .[, .(att_slot, committee_index, index_in_committee)] %>% 21 | unique() %>% 22 | merge(epoch:(epoch + chunk_size - 1) %>% 23 | map(get_committees) %>% 24 | rbindlist()) %>% 25 | .[, .(epoch = epoch + chunk_size, included_ats=.N), by=validator_index] 26 | }) %>% 27 | rbindlist() 28 | 29 | cum_ats_over_time <- ats_over_time %>% 30 | .[, .(epoch=epoch, included_ats = included_ats, cum_included_ats = cumsum(included_ats)), 
31 | by=.(validator_index)] 32 | 33 | df <- seq(chunk_size, 20000, chunk_size) %>% 34 | map(function(current_epoch) { 35 | print(str_c("Epoch ", current_epoch)) 36 | get_validators(current_epoch)[ 37 | time_active > 0, .(validator_index, balance, time_active, activation_epoch) 38 | ] %>% 39 | merge(cum_ats_over_time[epoch==current_epoch, .(validator_index, cum_included_ats)], 40 | all.x = TRUE, by.x = c("validator_index"), 41 | by.y = c("validator_index")) %>% 42 | setnafill("const", fill = 0, cols = c("cum_included_ats")) %>% 43 | mutate( 44 | balance = if_else(balance < 16e9, 16e9+1, balance), 45 | round_balance = round(balance / (32e9)) * 32e9, 46 | true_rewards = balance - round_balance, 47 | balance_diff = true_rewards / (32e9) * 100 * epochs_per_year / time_active, 48 | percent_attested = cum_included_ats / time_active * 100, 49 | epoch = current_epoch 50 | ) 51 | }) %>% 52 | bind_rows() 53 | 54 | p <- ggplot(df %>% 55 | filter(epoch <= 20000) %>% 56 | filter(balance_diff < 50, balance_diff > -100) %>% 57 | mutate(epoch = as_factor(epoch)), 58 | aes(x = percent_attested, y = balance_diff, color = activation_epoch)) + 59 | geom_point(aes(group = epoch), alpha = 0.2) + 60 | scale_color_viridis_c() + 61 | # geom_hline(yintercept = 0, colour = "steelblue", linetype = "dashed") + 62 | xlab("Percent of epochs attested") + 63 | ylab("Annualised reward (%)") + 64 | labs(title = 'Epoch: {closest_state}', x = 'Percent attested', y = 'Annual reward') + 65 | transition_states(epoch, 66 | transition_length = 0, 67 | state_length = 1, 68 | wrap = FALSE) 69 | 70 | nframes <- 200 71 | fps <- 20 72 | 73 | anim_save("hello.gif", animation = p, 74 | width = 900, # 900px wide 75 | height = 600, # 600px high 76 | nframes = nframes, # 200 frames 77 | fps = fps) -------------------------------------------------------------------------------- /posdata/scripts/20210424_plots.R: -------------------------------------------------------------------------------- 1 | ### COLLECTING DATA 2 
| 3 | ### Initial collection 4 | 5 | start_epoch <- 32280 6 | end_epoch <- 32320 7 | 8 | new_epochs <- start_epoch:end_epoch %>% 9 | map(function(epoch) { 10 | get_blocks_and_attestations(epoch) 11 | }) %>% 12 | purrr::transpose() %>% 13 | map(rbindlist) 14 | 15 | new_bxs <- copy(new_epochs$block) 16 | new_bxs[, declared_client := find_client(graffiti)] 17 | new_ats <- copy(new_epochs$attestations) 18 | new_committees <- start_epoch:end_epoch %>% 19 | map(get_committees) %>% 20 | rbindlist() 21 | block_root_at_slot <- get_block_root_at_slot(new_bxs) 22 | get_correctness_data(new_ats, block_root_at_slot) 23 | first_possible_inclusion_slot <- get_first_possible_inclusion_slot(new_bxs) 24 | new_val_series <- get_stats_per_val( 25 | new_ats[att_slot >= 32290 * slots_per_epoch & att_slot < end_epoch * slots_per_epoch], 26 | block_root_at_slot, first_possible_inclusion_slot, 27 | committees = new_committees, chunk_size = 1) 28 | new_stats_per_slot <- get_stats_per_slot( 29 | new_ats[att_slot >= (start_epoch-1) * slots_per_epoch & att_slot < end_epoch * slots_per_epoch], 30 | new_committees) 31 | 32 | new_bxs %>% fwrite(here::here("data/prysmcrash/all_bxs.csv")) 33 | new_ats %>% fwrite(here::here("data/prysmcrash/all_ats.csv")) 34 | new_committees %>% fwrite(here::here("data/prysmcrash/committees.csv")) 35 | new_val_series %>% fwrite(here::here("data/prysmcrash/val_series.csv")) 36 | new_stats_per_slot %>% fwrite(here::here("data/prysmcrash/stats_per_slot.csv")) 37 | 38 | ### Update 39 | 40 | start_epoch <- 32321 41 | end_epoch <- 32330 42 | 43 | all_bxs <- fread(here::here("data/prysmcrash/all_bxs.csv")) 44 | all_ats <- fread(here::here("data/prysmcrash/all_ats.csv")) 45 | committees <- fread(here::here("data/prysmcrash/committees.csv")) 46 | val_series <- fread(here::here("data/prysmcrash/val_series.csv")) 47 | stats_per_slot <- fread(here::here("data/prysmcrash/stats_per_slot.csv")) 48 | 49 | bxs_and_ats <- start_epoch:end_epoch %>% 50 | 
map(get_blocks_and_attestations) %>% 51 | purrr::transpose() %>% 52 | map(rbindlist) 53 | 54 | new_bxs <- copy(bxs_and_ats$block) 55 | new_bxs[, declared_client := find_client(graffiti)] 56 | list(all_bxs, new_bxs) %>% rbindlist() %>% fwrite(here::here("data/prysmcrash/all_bxs.csv")) 57 | rm(new_bxs) 58 | 59 | list(all_ats, bxs_and_ats$attestations) %>% rbindlist() %>% fwrite(here::here("data/prysmcrash/all_ats.csv")) 60 | rm(bxs_and_ats) 61 | 62 | new_committees <- start_epoch:end_epoch %>% 63 | map(get_committees) %>% 64 | rbindlist() 65 | list(committees, new_committees) %>% rbindlist() %>% fwrite(here::here("data/prysmcrash/committees.csv")) 66 | rm(new_committees) 67 | 68 | all_bxs <- fread(here::here("data/prysmcrash/all_bxs.csv")) 69 | block_root_at_slot <- get_block_root_at_slot(all_bxs) 70 | all_ats <- fread(here::here("data/prysmcrash/all_ats.csv")) 71 | committees <- fread(here::here("data/prysmcrash/committees.csv")) 72 | get_correctness_data(all_ats, block_root_at_slot) 73 | first_possible_inclusion_slot <- get_first_possible_inclusion_slot(all_bxs) 74 | 75 | new_val_series <- get_stats_per_val( 76 | all_ats[att_slot >= (start_epoch-1) * slots_per_epoch & att_slot < end_epoch * slots_per_epoch], 77 | block_root_at_slot, first_possible_inclusion_slot, 78 | committees = committees, chunk_size = 1) 79 | 80 | list(val_series, new_val_series) %>% 81 | rbindlist() %>% 82 | fwrite(here::here("data/prysmcrash/val_series.csv")) 83 | rm(new_val_series) 84 | 85 | new_stats_per_slot <- get_stats_per_slot( 86 | all_ats[att_slot >= (start_epoch-1) * slots_per_epoch & att_slot < end_epoch * slots_per_epoch], 87 | committees) 88 | list(stats_per_slot, new_stats_per_slot) %>% rbindlist() %>% fwrite(here::here("data/prysmcrash/stats_per_slot.csv")) 89 | rm(new_stats_per_slot) 90 | 91 | ### PLOTS 92 | 93 | client_colours <- c("#000011", "#ff9a02", "#eb4a9b", "#7dc19e", "grey", "red") 94 | myred <- "#F05431" 95 | myyellow <- "#FED152" 96 | mygreen <- "#BFCE80" 97 | 98 | 
newtheme <- theme_grey() + theme( 99 | axis.text = element_text(size = 9), 100 | axis.title = element_text(size = 12), 101 | axis.line = element_line(colour = "#000000"), 102 | panel.grid.major.y = element_line(colour="#bbbbbb", size=0.1), 103 | panel.grid.major.x = element_blank(), 104 | panel.grid.minor = element_blank(), 105 | panel.background = element_blank(), 106 | legend.title = element_text(size = 12), 107 | legend.text = element_text(size = 10), 108 | legend.box.background = element_blank(), 109 | legend.key = element_blank(), 110 | strip.text.x = element_text(size = 10), 111 | strip.background = element_rect(fill = "white") 112 | ) 113 | theme_set(newtheme) 114 | 115 | start_epoch <- 32280 116 | end_epoch <- 32320 117 | all_bxs <- fread(here::here("data/prysmcrash/all_bxs.csv")) 118 | all_slots <- data.table(slot = (start_epoch*32):(end_epoch*32+31)) 119 | all_bxs <- all_bxs[all_slots, on="slot"] # right join: unproposed slots get NA fields 120 | all_bxs[, epoch := slot %/% 32] 121 | 122 | ### Blocks per client plot 123 | all_bxs[, .(num_blocks = .N), by=.(epoch, declared_client)] %>% 124 | ggplot() + 125 | geom_col(aes(x = epoch, y = num_blocks, group=declared_client, fill=declared_client), 126 | position = position_dodge()) + 127 | facet_wrap(vars(declared_client)) + 128 | scale_fill_manual(values = client_colours) 129 | 130 | ### Inclusion delay plot 131 | val_series <- fread(here::here("data/prysmcrash/val_series.csv")) 132 | val_series[epoch > 32280, .(inclusion_delay = mean(inclusion_delay, na.rm = TRUE), 133 | inclusion_delay_by_block = mean(inclusion_delay_by_block, na.rm = TRUE)), by=epoch] %>% 134 | ggplot() + 135 | geom_line(aes(x = epoch, y = inclusion_delay), colour=myred) + 136 | geom_line(aes(x = epoch, y = inclusion_delay_by_block), colour=mygreen) 137 | 138 | ### ETH lost 139 | 140 | balances <- 32295:32325 %>% 141 | map(get_balances_active_validators) %>% 142 | rbindlist() 143 | 144 | balances %>% fwrite(here::here("data/prysmcrash/balances.csv")) 145 | 146 |
diff_balances <- balances %>% 147 | group_by(validator_index) %>% 148 | mutate(epoch = 32294 + row_number(), previous_balance = lag(balance), diff = balance - previous_balance) 149 | 150 | diff_per_epoch <- diff_balances %>% group_by(epoch) %>% 151 | summarise(total_diff = sum(diff) / 1e9) 152 | 153 | diff_per_epoch %>% 154 | ggplot() + 155 | geom_col(aes(x = epoch, y = total_diff), fill = mygreen) 156 | 157 | expected_diff <- diff_per_epoch %>% 158 | filter(epoch < 32303) %>% 159 | pull(total_diff) %>% 160 | mean(., na.rm = TRUE) 161 | 162 | eth_lost <- diff_per_epoch %>% 163 | mutate(eth_lost = expected_diff - total_diff) %>% 164 | filter(epoch >= 32303, epoch <= 32322) %>% 165 | pull(eth_lost) %>% 166 | sum() -------------------------------------------------------------------------------- /posdata/scripts/20210908_slowdown.R: -------------------------------------------------------------------------------- 1 | source("notebooks/lib.R") 2 | library(zoo) 3 | 4 | myred <- "#F05431" 5 | myyellow <- "#FED152" 6 | mygreen <- "#BFCE80" 7 | client_colours <- c("#000011", "#ff9a02", "#eb4a9b", "#7dc19e", "grey", "red") 8 | 9 | bxs_ats <- 63241:63400 %>% 10 | map(get_blocks_and_attestations) %>% 11 | purrr::transpose() %>% 12 | map(rbindlist) 13 | 14 | bxs <- copy(bxs_ats$block) 15 | bxs[, declared_client := find_client(graffiti)] 16 | list(fread(here::here("data/bxs_slowdown.csv")), bxs) %>% rbindlist() %>% 17 | fwrite(here::here("data/bxs_slowdown.csv")) 18 | 19 | ats <- bxs_ats$attestations 20 | list(fread(here::here("data/ats_slowdown.csv")), ats) %>% rbindlist() %>% 21 | fwrite(here::here("data/ats_slowdown.csv")) 22 | 23 | bxs <- fread(here::here("data/bxs_slowdown.csv")) 24 | ats <- fread(here::here("data/ats_slowdown.csv")) 25 | block_root_at_slot <- get_block_root_at_slot(bxs) 26 | get_correctness_data(ats, block_root_at_slot) 27 | 28 | committees <- 63201:63220 %>% 29 | map(get_committees) %>% 30 | rbindlist() 31 | 32 | agg_info <- get_aggregate_info(ats[slot < 
63250]) %>% 33 | inner_join(bxs[, .(slot, declared_client)], by = c("slot" = "slot")) 34 | 35 | myopic_redundant <- get_myopic_redundant_ats_detail(ats[slot >= 63201 * 32]) %>% 36 | inner_join(bxs[, .(slot, declared_client, graffiti)], by = c("slot" = "slot")) %>% 37 | mutate(declared_client = if_else(declared_client == "undecided", graffiti, declared_client)) 38 | 39 | redundant_ats <- get_redundant_ats(ats[slot >= 63201 * 32 & slot < 63221 * 32]) %>% 40 | inner_join(bxs[, .(slot, declared_client, graffiti)], by = c("slot" = "slot")) %>% 41 | mutate(declared_client = if_else(declared_client == "undecided", graffiti, declared_client)) 42 | 43 | ats %>% group_by(slot) %>% summarise(n = n()) %>% 44 | inner_join(bxs[, .(slot, declared_client, graffiti)], by = c("slot" = "slot")) %>% 45 | group_by(declared_client) %>% 46 | summarise(mean_aggs = mean(n)) %>% 47 | ggplot() + 48 | geom_col(aes(x = declared_client, y = mean_aggs, fill=declared_client)) + 49 | scale_fill_manual(values = client_colours) 50 | 51 | stats_per_slot <- get_stats_per_slot(ats[att_slot >= 63201 * 32 & att_slot < 63220 * 32], committees, chunk_size = 1) 52 | 53 | myopic_redundant %>% union( 54 | bxs %>% mutate(declared_client = if_else(declared_client == "undecided", graffiti, declared_client)) %>% 55 | anti_join(myopic_redundant %>% select(slot)) %>% 56 | mutate(n_myopic_redundant = 0) %>% 57 | select(slot, n_myopic_redundant, declared_client, graffiti) 58 | ) %>% 59 | group_by(declared_client) %>% 60 | summarise(mean_myopic = mean(n_myopic_redundant), 61 | n_blocks = n()) %>% 62 | filter(n_blocks >= 5) %>% 63 | arrange(-mean_myopic) %>% 64 | ggplot() + 65 | geom_col(aes(x = declared_client, y = mean_myopic, fill=declared_client)) + 66 | guides(fill=FALSE) + 67 | theme( 68 | axis.text.x = element_text( 69 | angle = 90, 70 | hjust = 1, 71 | vjust = 0.5 72 | )) 73 | 74 | redundant_ats %>% union( 75 | bxs %>% mutate(declared_client = if_else(declared_client == "undecided", graffiti, declared_client)) 
%>% 76 | anti_join(redundant_ats %>% select(slot)) %>% 77 | mutate(n_redundant = 0) %>% 78 | select(slot, n_redundant, declared_client, graffiti) 79 | ) %>% group_by(declared_client) %>% 80 | summarise(mean_redundant = mean(n_redundant), 81 | n_blocks = n()) %>% 82 | filter(n_blocks >= 5) %>% 83 | arrange(-mean_redundant) %>% 84 | ggplot() + 85 | geom_col(aes(x = declared_client, y = mean_redundant, fill=declared_client)) + 86 | guides(fill=FALSE) + 87 | theme( 88 | axis.text.x = element_text( 89 | angle = 90, 90 | hjust = 1, 91 | vjust = 0.5 92 | )) 93 | 94 | stats_per_slot %>% 95 | mutate(slot_in_epoch = att_slot %% 32) %>% 96 | group_by(slot_in_epoch) %>% 97 | summarise(correct_target = mean(correct_targets/included_ats)) %>% 98 | # summarise(correct_head = mean(correct_head)) %>% 99 | ggplot() + 100 | geom_col(aes(x = slot_in_epoch, y = correct_target), fill = myred) 101 | 102 | stats_per_slot %>% 103 | mutate(slot_in_epoch = att_slot %% 32) %>% 104 | filter(slot_in_epoch == 0) %>% 105 | ggplot() + 106 | geom_histogram(aes(x = correct_targets/included_ats)) 107 | 108 | my_sps <- function(epoch) { 109 | bxs_ats <- (epoch-1):(epoch+1) %>% 110 | map(get_blocks_and_attestations) %>% 111 | purrr::transpose() %>% 112 | map(rbindlist) 113 | 114 | bxs <- copy(bxs_ats$block) 115 | bxs[, declared_client := find_client(graffiti)] 116 | ats <- copy(bxs_ats$attestations) 117 | block_root_at_slot <- get_block_root_at_slot(bxs) 118 | get_correctness_data(ats, block_root_at_slot) 119 | committees <- epoch:epoch %>% 120 | map(get_committees) %>% 121 | rbindlist() 122 | pre_trans_blk <- bxs %>% filter(slot == epoch * 32 - 1) 123 | if (pre_trans_blk %>% nrow() == 0) { 124 | blk_producer = "not produced" 125 | } else { 126 | blk_producer = pre_trans_blk$declared_client[1] 127 | } 128 | 129 | trans_blk <- bxs %>% filter(slot == epoch * 32) 130 | if (trans_blk %>% nrow() == 0) { 131 | trans_blk_producer = "not produced" 132 | } else { 133 | trans_blk_producer = 
trans_blk$declared_client[1] 134 | } 135 | 136 | stats_per_slot <- get_stats_per_slot( 137 | ats[att_slot >= epoch * 32], committees, chunk_size = 1 138 | )[att_slot < (epoch + 1) * 32] %>% 139 | mutate(pre_trans_blk_producer = blk_producer, 140 | trans_blk_producer = trans_blk_producer) 141 | return(stats_per_slot) 142 | } 143 | 144 | my_trans_block <- function(epoch) { 145 | bxs <- get_block_at_slot(epoch * 32) 146 | if (is.null(bxs)) { 147 | return(tibble(att_slot = epoch * 32, declared_client = "not produced")) 148 | } else { 149 | bxs[, declared_client := find_client(graffiti)] %>% 150 | mutate(declared_client = if_else(declared_client == "undecided", graffiti, declared_client)) 151 | return(bxs[, .(att_slot = slot, declared_client)]) 152 | } 153 | } 154 | 155 | stats_temp <- seq(45050, 64050, 100) %>% map(my_sps) %>% rbindlist() 156 | bxs <- seq(45000, 64000, 100) %>% map(my_trans_block) %>% rbindlist() 157 | 158 | stats_per_slot_total <- list( 159 | stats_per_slot_total, 160 | stats_temp %>% mutate(epoch = att_slot %/% 32) %>% 161 | select(att_slot, included_ats, correct_targets, correct_heads, 162 | expected_ats, pre_trans_blk_producer, epoch, trans_blk_producer) 163 | ) %>% rbindlist() %>% arrange(att_slot) 164 | 165 | 166 | 167 | stats_per_slot_total %>% 168 | mutate(epoch = att_slot %/% 32) %>% 169 | group_by(epoch, trans_blk_producer) %>% 170 | summarise(included_ats = sum(included_ats), 171 | expected_ats = sum(expected_ats), 172 | correct_targets = sum(correct_targets), 173 | correct_heads = sum(correct_heads)) %>% 174 | mutate(percent_voted = included_ats / expected_ats, 175 | percent_correct_target = correct_targets / included_ats, 176 | percent_correct_heads = correct_heads / included_ats) %>% 177 | ggplot() + 178 | geom_line(aes(x = epoch, y = percent_voted), color = "steelblue") 179 | # geom_line(aes(x = epoch, y = percent_correct_target), color = "orange") + 180 | # facet_wrap(vars(trans_blk_producer)) 181 | 182 | stats_per_slot_total %>% 183 | 
filter(att_slot %% 32 == 0) %>% 184 | group_by(trans_blk_producer) %>% 185 | summarise(percent_correct = mean(correct_targets / included_ats)) 186 | 187 | ##### 188 | 189 | val_ats <- get_exploded_ats(ats[att_slot < 63221 * 32]) %>% left_join(committees, .) %>% 190 | inner_join(read_csv(here::here("data/stakers.csv"))) %>% select(-slot) %>% 191 | unique() %>% filter(!is.na(correct_head)) # keep rows with a recorded head vote, otherwise the summarise below is all NA 192 | 193 | val_ats %>% group_by(staker) %>% 194 | summarise(percent_correct_head = sum(correct_head) / n()) 195 | 196 | read_csv(here::here("data/stakers.csv")) %>% 197 | count(staker) -------------------------------------------------------------------------------- /posdata/scripts/20210918_altair_sync.R: -------------------------------------------------------------------------------- 1 | source("notebooks/lib.R") 2 | 3 | myred <- "#F05431" 4 | myyellow <- "#FED152" 5 | mygreen <- "#BFCE80" 6 | client_colours <- c("#000011", "yellow", "#ff9a02", "#eb4a9b", "#7dc19e", "grey", "red") 7 | 8 | testnet <- "prater" 9 | 10 | # Get blocks 11 | t_testnet_bxs <- 61001:62000 %>% 12 | map(get_blocks) %>% 13 | rbindlist() 14 | list(fread(here::here(str_c("data/", testnet , "_bxs.csv"))), t_testnet_bxs) %>% 15 | rbindlist() %>% 16 | fwrite(here::here(str_c("data/", testnet , "_bxs.csv"))) 17 | testnet_bxs <- fread(here::here(str_c("data/", testnet , "_bxs.csv"))) 18 | testnet_bxs[, declared_client := find_client(graffiti)] 19 | 20 | testnet_clients <- if (testnet == "pyrmont") testnet_bxs %>% 21 | select(validator_index = proposer_index, declared_client) %>% 22 | filter(validator_index > (if(testnet == "pyrmont") 19899 else 200999)) %>% 23 | union(fread(here::here(str_c("data/", testnet , "_client_map.csv")))) %>% 24 | unique() %>% 25 | group_by(validator_index) %>% 26 | filter(n() == 1) %>% 27 | ungroup() else fread(here::here(str_c("data/", testnet , "_client_map.csv"))) 28 | 29 | t_scs <- seq(40700, 40970, 10) %>% 30 | map(function(epoch) { 31 | bxs <- get_blocks(epoch, sync_committee = TRUE) 
32 | bxs[, epoch := epoch] 33 | exploded_bxs <- get_exploded_sync_block(bxs) 34 | sync_committee <- get_sync_committee(epoch) 35 | exploded_bxs %>% inner_join(sync_committee) 36 | }) %>% 37 | rbindlist() 38 | list(fread(here::here(str_c("data/", testnet , "_scs.csv"))), t_scs) %>% 39 | rbindlist() %>% 40 | fwrite(here::here(str_c("data/", testnet , "_scs.csv"))) 41 | scs <- fread(here::here(str_c("data/", testnet , "_scs.csv"))) 42 | scs[, proposer_declared_client := find_client(graffiti)] 43 | 44 | previous_blk_proposer <- scs[, .(slot)] %>% 45 | unique() %>% 46 | mutate(previous_slot = slot - 1) %>% 47 | inner_join(scs[, .(slot, graffiti)] %>% unique() %>% copy(), by = c("previous_slot" = "slot")) %>% copy() 48 | previous_blk_proposer[, previous_proposer_declared_client := find_client(graffiti)] 49 | 50 | t_scs %>% 51 | group_by(validator_index) %>% 52 | summarise(percent_correct_sync = sum(sync_committeed) / n()) %>% 53 | inner_join(testnet_clients %>% select( 54 | validator_index, 55 | syncer_declared_client = declared_client 56 | )) %>% 57 | group_by(syncer_declared_client) %>% 58 | summarise(mean_pc = mean(percent_correct_sync), n_obs = n()) %>% 59 | ggplot() + 60 | geom_col(aes(x = syncer_declared_client, y = mean_pc, fill = syncer_declared_client)) + 61 | scale_fill_manual(values = client_colours) 62 | 63 | scs %>% 64 | inner_join(previous_blk_proposer %>% select(slot, previous_proposer_declared_client)) %>% 65 | inner_join(testnet_clients %>% select( 66 | validator_index, 67 | syncer_declared_client = declared_client 68 | )) %>% 69 | group_by(previous_proposer_declared_client, syncer_declared_client) %>% 70 | summarise(mean_pc = sum(sync_committeed) / n()) %>% 71 | ggplot() + 72 | geom_col(aes(x = previous_proposer_declared_client, y = mean_pc, fill = syncer_declared_client)) + 73 | facet_wrap(vars(syncer_declared_client)) + 74 | scale_fill_manual(values = client_colours) 75 | 76 | scs %>% 77 | inner_join(testnet_clients %>% select( 78 | validator_index, 
79 | syncer_declared_client = declared_client 80 | )) %>% 81 | group_by(proposer_declared_client, syncer_declared_client) %>% 82 | summarise(mean_pc = sum(sync_committeed) / n()) %>% 83 | ggplot() + 84 | geom_col(aes(x = proposer_declared_client, y = mean_pc, fill = syncer_declared_client)) + 85 | facet_wrap(vars(syncer_declared_client)) + 86 | scale_fill_manual(values = client_colours) 87 | 88 | t_scs %>% 89 | inner_join(testnet_clients %>% select( 90 | validator_index, 91 | syncer_declared_client = declared_client 92 | )) %>% 93 | group_by(epoch, syncer_declared_client) %>% 94 | summarise(mean_pc = sum(sync_committeed) / n()) %>% 95 | ggplot() + 96 | geom_line(aes(x = epoch, y = mean_pc, color = syncer_declared_client)) + 97 | scale_color_manual(values = client_colours) -------------------------------------------------------------------------------- /static/authorFormatting.js: -------------------------------------------------------------------------------- 1 | ReactDOM.render( 2 | React.createElement( 3 | AuthorBlock, { authors: authorData } 4 | ), 5 | document.querySelector("#authors") 6 | ); 7 | -------------------------------------------------------------------------------- /static/footer.js: -------------------------------------------------------------------------------- 1 | class Footer extends React.Component { 2 | render() { 3 | return ( 4 | e( 5 | "div", { 6 | className: "document-container footer-container" 7 | }, 8 | this.props.acknowledgements ? e( 9 | "div", null, 10 | e( 11 | "div", { 12 | className: "footer-sub-title" 13 | }, 14 | "Acknowledgements." 15 | ), 16 | e( 17 | "div", { 18 | className: "footer-content" 19 | }, 20 | this.props.acknowledgements 21 | ) 22 | ) : null, 23 | e( 24 | "div", { 25 | className: "footer-sub-title" 26 | }, 27 | "License." 
28 | ), 29 | e( 30 | "div", { 31 | className: "footer-content" 32 | }, 33 | "All content is licensed under the Creative Commons Attribution ", 34 | e( 35 | "a", { 36 | href: "https://creativecommons.org/licenses/by/4.0/", 37 | target: "_blank" 38 | }, 39 | "CC-BY 4.0" 40 | ), 41 | "." 42 | ), 43 | this.props.openSource ? e( 44 | "div", null, 45 | e( 46 | "div", { 47 | className: "footer-sub-title" 48 | }, 49 | "Open-source." 50 | ), 51 | e( 52 | "div", { 53 | className: "footer-content" 54 | }, 55 | this.props.openSource 56 | ) 57 | ) : null, 58 | // e( 59 | // "div", { className: "footer-sub-title" }, 60 | // "Donate." 61 | // ), 62 | // e( 63 | // "div", { className: "footer-content" }, 64 | // "This project is born out of love for the ideas of the crypto community ❤️ We do not currently receive any funding and count on your support to allow us to continue.", 65 | // e("br", null), 66 | // e( 67 | // "p", null, 68 | // "Our address: ", 69 | // e( 70 | // "a", { 71 | // href: "https://etherscan.io/address/hackingresearch.eth", 72 | // target: "_blank" 73 | // }, 74 | // "hackingresearch.eth" 75 | // ) 76 | // ) 77 | // ), 78 | e( 79 | "div", { 80 | id: "reference-container" 81 | } 82 | ) 83 | ) 84 | ) 85 | } 86 | } 87 | -------------------------------------------------------------------------------- /static/header.js: -------------------------------------------------------------------------------- 1 | class Header extends React.Component { 2 | render() { 3 | const e = React.createElement; 4 | 5 | return ( 6 | e( 7 | 'div', { className: 'rig-header' }, 8 | e( 9 | "div", { className: "header-container" }, 10 | e('a', { href: 'https://github.com/ethereum/rig', className: 'nav-logo' }, "Robust Incentives Group"), 11 | e( 12 | 'ul', { className: "nav-menu" }, 13 | e('li', { className: 'nav-item' }, e('a', { href: 'https://ethereum.github.io/abm1559' }, "eip1559")), 14 | e('li', { className: 'nav-item' }, e('a', { href: 'https://ethereum.github.io/beaconrunner' }, 
"PoS")), 15 | e('li', { className: 'nav-item' }, e('a', { href: 'https://shsr2001.github.io/beacondigest' }, "beacondigest")), 16 | ), 17 | e( 18 | 'div', { className: "hamburger" }, "🍔" 19 | ) 20 | // e( 21 | // 'div', { className: "header-right-element" }, 22 | // e('a', { href: '/abm1559' }, "eip1559") 23 | // ), 24 | // e( 25 | // 'div', { className: "header-right-element" }, 26 | // e('a', { href: '/beaconrunner' }, "eth2") 27 | // ) 28 | // e( 29 | // 'div', { className: "header-right-element" }, 30 | // e('a', { href: '/about' }, 'About') 31 | // ), 32 | // e( 33 | // 'div', { className: "header-right-element" }, 34 | // e('a', { href: 'https://twitter.com/hackingresearch' }, 'Twitter') 35 | // ) 36 | ) 37 | ) 38 | ); 39 | } 40 | } 41 | -------------------------------------------------------------------------------- /static/index.css: -------------------------------------------------------------------------------- 1 | html { 2 | font-size: 1.1rem; 3 | font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Oxygen, Ubuntu, Cantarell, "Fira Sans", "Droid Sans", "Helvetica Neue", Arial, sans-serif; 4 | color: rgb(56,56,56); 5 | line-height: 1.6em; 6 | font-weight: 400; 7 | color: rgba(0,0,0,0.8); 8 | -webkit-text-size-adjust: 100%; 9 | -moz-text-size-adjust: 100%; 10 | -ms-text-size-adjust: 100%; 11 | } 12 | 13 | p, ul, ol { 14 | font-size: 16px; 15 | } 16 | 17 | th, tr { 18 | font-size: 14px; 19 | } 20 | 21 | pre { 22 | margin-top: 0px; 23 | margin-bottom: 5px; 24 | } 25 | 26 | .jp-Notebook { 27 | padding: 0px; 28 | } 29 | 30 | .highlight button { 31 | color: rgb(56,56,56); 32 | background: rgba(0, 0, 0, 0.15); 33 | margin-right: 0px; 34 | box-sizing: border-box; 35 | transition: 0.2s ease-out; 36 | cursor: pointer; 37 | user-select: none; 38 | border: 1px solid rgba(0, 0, 0, 0); 39 | padding: 5px 10px; 40 | font-size: 0.8em; 41 | font-weight: 600; 42 | font-family: "Menlo", monospace, monospace; 43 | position: absolute; 44 | bottom: 0; 45 | right: 
0; 46 | border-radius: 0 0.15rem; 47 | } 48 | 49 | .highlight { 50 | margin-bottom: 1em; 51 | overflow-x: scroll; 52 | padding-right: 50px; 53 | } 54 | 55 | .article-content { 56 | max-width: 900px; 57 | padding-left: 1rem; 58 | padding-right: 1rem; 59 | margin: 0px auto; 60 | } 61 | 62 | body { 63 | padding: 0 0 0 0; 64 | margin: 0; 65 | } 66 | 67 | .rig-header { 68 | margin: 0; 69 | border-bottom: 1px solid #E2E8F0; 70 | background: steelblue; 71 | } 72 | 73 | .rig-header ul { 74 | margin: 0; 75 | } 76 | 77 | .header-container { 78 | display: flex; 79 | justify-content: space-between; 80 | align-items: center; 81 | padding: 1rem 1.5rem; 82 | } 83 | 84 | .hamburger { 85 | display: none; 86 | } 87 | 88 | .rig-header a { 89 | text-decoration: none; 90 | color: white; 91 | } 92 | 93 | .rig-header li { 94 | list-style: none; 95 | } 96 | 97 | .nav-menu { 98 | display: flex; 99 | justify-content: space-between; 100 | align-items: center; 101 | } 102 | 103 | .nav-item { 104 | margin-left: 5rem; 105 | font-size: 16px; 106 | font-family: "Menlo", monospace, monospace; 107 | } 108 | 109 | .nav-link{ 110 | font-size: 16px; 111 | font-family: "Menlo", monospace, monospace; 112 | } 113 | 114 | .nav-link:hover{ 115 | color: #482ff7; 116 | } 117 | 118 | .nav-logo { 119 | font-size: 16px; 120 | font-family: "Menlo", monospace, monospace; 121 | } 122 | 123 | .rig-header li { 124 | display: flex; 125 | flex-direction: row; 126 | align-items: center; 127 | } 128 | 129 | @media only screen and (max-width: 768px) { 130 | .nav-menu { 131 | padding-left: 0; 132 | position: fixed; 133 | left: -100%; 134 | top: 4rem; 135 | flex-direction: column; 136 | background-color: steelblue; 137 | width: 100%; 138 | text-align: center; 139 | transition: 0.3s; 140 | box-shadow: 141 | 0 10px 27px rgba(0, 0, 0, 0.05); 142 | } 143 | 144 | .nav-menu.active { 145 | left: 0; 146 | } 147 | 148 | .nav-item { 149 | margin: 0.5rem 0; 150 | } 151 | 152 | .hamburger { 153 | display: block; 154 | cursor: pointer; 
155 | } 156 | } 157 | 158 | .title-container { 159 | margin-top: 3rem; 160 | margin-bottom: 3rem; 161 | } 162 | 163 | section { 164 | margin-top: 3rem; 165 | margin-bottom: 1rem; 166 | } 167 | 168 | /* Homepage article list */ 169 | 170 | .article-list-container { 171 | padding-top: 4rem; 172 | padding-left: 1rem; 173 | padding-right: 1rem; 174 | padding-bottom: 2rem; 175 | display: grid; 176 | grid-template-columns: repeat(auto-fill, minmax(300px, 1fr)); 177 | grid-column-gap: 3rem; 178 | grid-row-gap: 2rem; 179 | max-width: 1000px; 180 | margin: auto; 181 | } 182 | 183 | .article-list-small { 184 | margin-top: 2rem; 185 | margin-bottom: 2rem; 186 | padding-left: 0rem; 187 | padding-right: 0rem; 188 | padding-top: 2rem; 189 | border-bottom: 1px solid lightgrey; 190 | border-top: 1px solid lightgrey; 191 | } 192 | 193 | .article-list-container a { 194 | text-decoration: none; 195 | color: steelblue; 196 | } 197 | 198 | .article-list-container p { 199 | margin-top: -0.5em; 200 | margin-bottom: -0.5em; 201 | } 202 | 203 | .article-list-item-container { 204 | padding-bottom: 2rem; 205 | border-bottom: 1px solid lightgrey; 206 | } 207 | 208 | .article-list-small .article-list-item-container { 209 | padding-bottom: 0rem; 210 | border-bottom: none; 211 | } 212 | 213 | .article-list-item-header { 214 | align-items: center; 215 | display: flex; 216 | margin-bottom: 1rem; 217 | } 218 | 219 | .article-list-item-header .article-list-item-published { 220 | flex-grow: 1; 221 | } 222 | 223 | .article-list-item-published { 224 | font-size: 0.9rem; 225 | color: rgba(0,0,0,0.6); 226 | } 227 | 228 | .article-list-item-category a { 229 | font-family: "Menlo", monospace, monospace; 230 | color: green; 231 | } 232 | 233 | .article-list-small .article-list-item-published { 234 | font-size: 0.8rem; 235 | } 236 | 237 | .article-list-item-image-container { 238 | height: 200px; 239 | } 240 | 241 | .article-list-item-image { 242 | object-fit: contain; 243 | height: 100%; 244 | } 245 | 246 | 
.article-list-item-content { 247 | /* line-height: 1.6rem; */ 248 | cursor: pointer; 249 | } 250 | 251 | .article-list-item-title { 252 | font-size: 1.8rem; 253 | line-height: normal; 254 | color: steelblue; 255 | } 256 | 257 | .article-list-small .article-list-item-title { 258 | font-size: 1.4rem; 259 | line-height: normal; 260 | color: steelblue; 261 | } 262 | 263 | .article-list-item-content p { 264 | text-align: justify; 265 | color: rgba(0,0,0,0.8); 266 | } 267 | 268 | .article-list-small .article-list-item-content p { 269 | font-size: 0.9rem; 270 | } 271 | 272 | .button { 273 | display: inline-block; 274 | padding-left: 1.5em; 275 | padding-right: 1.5rem; 276 | padding-top: 0.25rem; 277 | padding-bottom: 0.25rem; 278 | } 279 | 280 | header { 281 | font-weight: 600; 282 | } 283 | 284 | img { 285 | width: 80%; 286 | } 287 | 288 | a { 289 | color: rgba(3,136,166,1.0); 290 | } 291 | 292 | li { 293 | margin-bottom: 0.5rem; 294 | } 295 | 296 | ul { 297 | margin-top: 0.5rem; 298 | } 299 | 300 | /* button { 301 | font-weight: bold; 302 | font-family: "Menlo", monospace, monospace; 303 | font-size: 0.8rem; 304 | border: none; 305 | background-color: inherit; 306 | cursor: pointer; 307 | padding: 0; 308 | margin-right: 10px; 309 | text-decoration: underline; 310 | } */ 311 | 312 | table .odd { 313 | background-color: #eeeef9; 314 | } 315 | 316 | th, td { 317 | padding: 5px; 318 | } 319 | 320 | table.data-table { 321 | padding-top: 1rem; 322 | /* max-width: 400px; */ 323 | /* margin: 0 auto; */ 324 | } 325 | 326 | table.data-table thead th { 327 | padding-bottom: 0.5rem; 328 | /* border-bottom: 1px solid black; */ 329 | } 330 | 331 | table.data-table thead th:after { 332 | content: ""; /* This is necessary for the pseudo element to work. */ 333 | display: block; /* This will put the pseudo element on its own line. */ 334 | margin: 0 auto; /* This will center the border. */ 335 | width: 70%; /* Change this to whatever width you want. 
*/ 336 | padding-top: 0.5rem; /* This creates some space between the element and the border. */ 337 | border-bottom: 1px solid black; /* This creates the border. Replace black with whatever color you want. */ 338 | } 339 | 340 | table.data-table th { 341 | font-weight: normal; 342 | font-size: 0.9rem; 343 | padding-left: 0rem; 344 | padding-right: 2.5rem; 345 | color: black; 346 | } 347 | 348 | table.data-table td { 349 | font-size: 0.9rem; 350 | text-align: center; 351 | padding-right: 2.5rem; 352 | } 353 | 354 | .hidden { 355 | display: none; 356 | } 357 | 358 | .notes { 359 | color: rgba(255,56,0,1.0); 360 | } 361 | 362 | .list-item { 363 | padding-left: 0.5rem; 364 | } 365 | 366 | .section0 { 367 | text-align: center; 368 | } 369 | 370 | .title { 371 | font-weight: 700; 372 | font-size: 2rem; 373 | line-height: 2.5rem; 374 | padding: 0; 375 | margin: 0; 376 | /* text-decoration: underline; 377 | text-decoration-color: rgba(3,136,166,1.0); 378 | -webkit-text-decoration-color: rgba(3,136,166,1.0); */ 379 | } 380 | 381 | .intro { 382 | font-size: 1.0rem; 383 | } 384 | 385 | .about-me { 386 | font-size: 0.9rem; 387 | } 388 | 389 | .quote { 390 | font-style: italic; 391 | } 392 | 393 | .quotation { 394 | font-size: 14px; 395 | font-style: italic; 396 | color: rgb(46,46,46); 397 | } 398 | 399 | .head { 400 | margin: 0 auto; 401 | text-align: center; 402 | padding-top: 0rem; 403 | padding-bottom: 2rem; 404 | } 405 | 406 | .document-container { 407 | margin: 0 auto; 408 | max-width: 900px; 409 | } 410 | 411 | .explorable-header { 412 | display: block; 413 | font-weight: bold; 414 | } 415 | 416 | /* Author block */ 417 | 418 | #toc-author-block { 419 | margin-top: 2rem; 420 | margin-bottom: 2rem; 421 | display: flex; 422 | } 423 | 424 | .author-block-title, .toc-container-title { 425 | font-weight: 600; 426 | font-size: 1rem; 427 | margin-bottom: 0.8rem; 428 | margin-right: 20px; 429 | padding-bottom: 0.5rem; 430 | border-bottom: solid rgba(0,0,0,0.6) 0.5px; 431 | } 432 
| 433 | .author-block-container { 434 | min-width: 250px; 435 | } 436 | 437 | .author-block-authors { 438 | line-height: 1.3rem; 439 | padding-right: 20px; 440 | } 441 | 442 | .author-block-author a { 443 | color: rgb(56,56,56,1); 444 | font-size: 1rem; 445 | text-decoration: underline; 446 | text-decoration-color: rgba(3,136,166,1.0); 447 | -webkit-text-decoration-color: rgba(3,136,166,1.0); 448 | font-weight: bold; 449 | /* font-family: Palatino, serif; */ 450 | } 451 | 452 | .author-block-affiliation { 453 | color: rgb(0,0,0,0.6); 454 | font-size: 0.8rem; 455 | font-family: "Menlo", monospace, monospace; 456 | margin-bottom: 0.8rem; 457 | } 458 | 459 | /* Table of contents */ 460 | 461 | #toc { 462 | min-width: 300px; 463 | width: 100%; 464 | } 465 | 466 | .toc-container { 467 | 468 | } 469 | 470 | .toc-container a { 471 | 472 | } 473 | 474 | .toc-sections-container { 475 | line-height: 1.45rem; 476 | padding-right: 20px; 477 | } 478 | 479 | .toc-section a { 480 | color: rgba(56, 56, 56, 1); 481 | font-size: 0.9rem; 482 | font-weight: 600; 483 | text-decoration: none; 484 | } 485 | 486 | .toc-section-number { 487 | display: inline-block; 488 | width: 25px; 489 | } 490 | 491 | .toc-sub-section a { 492 | color: rgb(0,0,0,0.6); 493 | padding-left: 10px; 494 | font-size: 0.9rem; 495 | font-weight: normal; 496 | text-decoration: none; 497 | } 498 | 499 | .toc-sub-section-number { 500 | display: inline-block; 501 | width: 30px; 502 | } 503 | 504 | .title-number { 505 | text-decoration: none; 506 | font-weight: bold; 507 | display: inline-block; 508 | width: 30px; 509 | } 510 | 511 | .sub-title-number { 512 | text-decoration: none; 513 | font-weight: bold; 514 | display: inline-block; 515 | width: 35px; 516 | } 517 | 518 | /* small screens */ 519 | @media screen and (max-width: 920px) { 520 | #toc-author-block { 521 | display: block; 522 | } 523 | 524 | .toc-container { 525 | padding-top: 1.5rem; 526 | padding-bottom: 0.5rem; 527 | } 528 | 529 | img { 530 | width: 
100%; 531 | } 532 | 533 | .document-container { 534 | margin-right: 10px; 535 | margin-left: 10px; 536 | } 537 | } 538 | 539 | /* Fixed width list */ 540 | 541 | .fixed-width-list-item a { 542 | color: rgb(0,0,0,0.6); 543 | font-size: 0.9rem; 544 | font-weight: 600; 545 | text-decoration: none; 546 | } 547 | 548 | .fixed-width-list-item-highlighted a { 549 | color: rgba(56, 56, 56, 1); 550 | } 551 | 552 | .fixed-width-list-number { 553 | display: inline-block; 554 | width: 20px; 555 | color: rgba(3,136,166,1.0); 556 | } 557 | 558 | .fixed-width-list-number-highlighted { 559 | color: rgba(56, 56, 56, 1); /* assumption: match .fixed-width-list-item-highlighted */ 560 | } 561 | 562 | /* Expandable section */ 563 | 564 | details { 565 | padding: 0.5rem; 566 | border: solid rgba(0,0,0,0.1) 0.5px; 567 | cursor: pointer; 568 | } 569 | 570 | details, .jp-Cell-inputWrapper { 571 | margin-top: 0.5em; 572 | margin-bottom: 0.5em; 573 | } 574 | 575 | summary { 576 | font-size: 15px; 577 | font-weight: 600; 578 | font-family: "Menlo", monospace, monospace; 579 | } 580 | 581 | .related-idea-container { 582 | margin-top: 1.5rem; 583 | margin-bottom: 1.5rem; 584 | padding-left: 1rem; 585 | padding-right: 1rem; 586 | padding-bottom: 1rem; 587 | padding-top: 1rem; 588 | /* background-color: rgba(0,0,0,0.04); */ 589 | border: solid rgba(0,0,0,0.1) 0.5px; 590 | } 591 | 592 | .related-idea { 593 | font-size: 1rem; 594 | font-weight: 600; 595 | font-family: "Menlo", monospace, monospace; 596 | display: flex; 597 | cursor: pointer; 598 | transform: translateX(0px); 599 | transition: color 0.1s ease-out, transform 0.25s; 600 | padding-bottom: 0.5rem; 601 | } 602 | 603 | .related-idea-close { 604 | /* border-bottom: solid rgba(0,0,0,0.6) 0.5px; */ 605 | border-top: none; 606 | padding-bottom: 0rem; 607 | } 608 | 609 | .related-idea-title { 610 | flex-grow: 1; 611 | } 612 | 613 | .related-idea-description { 614 | /* margin-bottom: 0.5rem; */ 615 | margin-top: 0.5rem; 616 | color: rgba(0,0,0,0.6); 617 | } 618 | 619 | .related-idea-caret { 620 | font-size: 0.9rem; 621 
| font-weight: 600; 622 | font-family: "Menlo", monospace, monospace; 623 | color: rgba(3,136,166,1.0); 624 | /* display: inline-block; */ 625 | /* width: 80px; */ 626 | display: none; 627 | } 628 | 629 | .expand-button { 630 | flex-shrink: 0; 631 | align-self: flex-end; 632 | font-size: 0.8rem; 633 | color: rgba(3,136,166,1.0); 634 | } 635 | 636 | .box-container { 637 | margin-top: 1.5rem; 638 | margin-bottom: 1.5rem; 639 | } 640 | 641 | .box-title-container { 642 | font-size: 1rem; 643 | font-weight: 600; 644 | font-family: "Menlo", monospace, monospace; 645 | display: flex; 646 | /* border-bottom: solid rgba(0,0,0,0.6) 0.5px; */ 647 | border-top: solid rgba(0,0,0,0.6) 0.5px; 648 | /* padding-bottom: 0.5rem; */ 649 | padding-top: 0.5rem; 650 | margin-bottom: 0.5rem; 651 | } 652 | 653 | .box-close { 654 | padding-bottom: 0.5rem; 655 | border-bottom: solid rgba(0,0,0,0.6) 0.5px; 656 | } 657 | 658 | /* Figures */ 659 | 660 | /* .i-visual { 661 | padding-bottom: 1.5rem; 662 | padding-top: 1rem; 663 | } */ 664 | 665 | .figure-title { 666 | color: rgba(3,136,166,1.0); 667 | font-size: 1rem; 668 | font-weight: 600; 669 | /* font-family: "Menlo", monospace, monospace; */ 670 | } 671 | 672 | .figure-name { 673 | font-size: 1rem; 674 | font-weight: 600; 675 | color: rgb(46, 46, 46); 676 | } 677 | 678 | .figure-caption { 679 | color: rgb(0,0,0,0.7); 680 | font-size: 0.9rem; 681 | font-weight: 400; 682 | line-height: 1.45rem; 683 | text-align: justify; 684 | } 685 | 686 | .figure-caption-container { 687 | padding-bottom: 1.5rem; 688 | } 689 | 690 | .definition-container .figure-caption-container { 691 | padding-top: 1rem; 692 | padding-bottom: 1rem; 693 | padding-left: 1rem; 694 | padding-right: 1rem; 695 | background-color: rgba(0,40,120,0.04); 696 | } 697 | 698 | .definition-container .figure-caption { 699 | font-size: 1rem; 700 | } 701 | 702 | .figure-caption ol { 703 | margin-top: 0.5rem; 704 | } 705 | 706 | .caption-interaction-explanation { 707 | /* font-weight: bold; 
*/ 708 | color: rgba(3,136,166,1.0); 709 | } 710 | 711 | .static-link-container { 712 | margin-top: 1rem; 713 | margin-bottom: 1rem; 714 | } 715 | 716 | .static-link-container a { 717 | text-decoration: none; 718 | color: none; 719 | } 720 | 721 | .static-image-container { 722 | margin-top: 2rem; 723 | margin-bottom: 3rem; 724 | } 725 | 726 | .related-idea-container .static-image-container { 727 | margin-bottom: 0rem; 728 | margin-top: 0rem; 729 | } 730 | 731 | .static-table-container { 732 | margin-top: 0rem; 733 | } 734 | 735 | .static-table-container table { 736 | margin: auto; 737 | margin-bottom: 1rem; 738 | } 739 | 740 | .image-legend { 741 | padding-left: 10px; 742 | padding-right: 10px; 743 | font-size: 0.9rem; 744 | margin-bottom: 10px; 745 | color: #888; 746 | } 747 | 748 | /* Titles */ 749 | 750 | .sub-title { 751 | margin-top: 0.5rem; 752 | color: rgba(0, 0, 0, 0.6); 753 | text-align: justify; 754 | } 755 | 756 | .section-title { 757 | font-weight: 600; 758 | color: rgba(0, 0, 0, 0.8); 759 | font-size: 1.5rem; 760 | margin-top: 2rem; 761 | margin-bottom: 1rem; 762 | } 763 | 764 | .section-sub-title { 765 | font-weight: 600; 766 | color: rgba(0, 0, 0, 0.8); 767 | font-size: 1.2rem; 768 | margin-top: 1.5rem; 769 | margin-bottom: 1rem; 770 | } 771 | 772 | .adjustable { 773 | color: rgb(3,192,60); 774 | cursor: col-resize; 775 | margin: 0 0 0 0; 776 | text-decoration: none; 777 | font-weight: normal; 778 | font-family: inherit; 779 | font-size: inherit; 780 | -webkit-user-select: none; 781 | -moz-user-select: none; 782 | -ms-user-select: none; 783 | user-select: none; 784 | -webkit-touch-callout: none; 785 | } 786 | 787 | .adjustable:hover { 788 | background: rgba(3,192,60,0.2); 789 | cursor: col-resize; 790 | } 791 | 792 | .adjustable:active { 793 | background: rgba(3,192,60,1.0); 794 | color: white; 795 | cursor: col-resize; 796 | } 797 | 798 | .output { 799 | color: rgba(3,136,166,1.0); 800 | cursor: pointer; 801 | } 802 | 803 | .output:hover { 804 | 
background: rgba(3,136,166,0.2); 805 | } 806 | 807 | .output:active { 808 | background: rgba(3,136,166,1.0); 809 | color: white; 810 | } 811 | 812 | .blue { 813 | color: rgba(3,136,166,1.0); 814 | } 815 | 816 | .orange { 817 | color: rgb(255,56,0); 818 | } 819 | 820 | .green { 821 | color: rgb(3,192,60) 822 | } 823 | 824 | .bold { 825 | font-weight: bold; 826 | } 827 | 828 | .italic { 829 | font-style: italic; 830 | font-weight: 200; 831 | } 832 | 833 | .list-wrapper { 834 | padding-top: 1rem; 835 | padding-bottom: 1rem; 836 | } 837 | 838 | .paper-extract { 839 | /* background: #F5F5F5; */ 840 | background: rgba(255, 239, 0,0.6); 841 | /* background: rgba(255,56,0,0.6); */ 842 | /* color: white; */ 843 | padding-left: 10px; 844 | padding-right: 10px; 845 | padding-top: 10px; 846 | padding-bottom: 10px; 847 | margin-bottom: 1rem; 848 | /* font-style: italic; */ 849 | } 850 | 851 | .abstract { 852 | margin-bottom: 1rem; 853 | } 854 | 855 | .credits { 856 | margin-top: 3rem; 857 | font-size: 0.8rem; 858 | } 859 | 860 | .author-o { 861 | font-weight: bold; 862 | } 863 | 864 | .content, p { 865 | margin-bottom: 0.8rem; 866 | margin-top: 0.8rem; 867 | } 868 | 869 | /* Calculations */ 870 | 871 | .calculations-container { 872 | font-size: 0.9rem; 873 | margin-bottom: 1rem; 874 | padding-bottom: 0.5rem; 875 | border-bottom: solid rgba(0,0,0,0.6) 0.5px; 876 | } 877 | 878 | .calculations-title { 879 | font-size: 1rem; 880 | font-weight: bold; 881 | color: rgba(3,136,166,1.0); 882 | } 883 | 884 | .caclulations-title-container { 885 | padding-bottom: 0.5rem; 886 | } 887 | 888 | .calculations-space { 889 | padding-top: 0.5rem; 890 | } 891 | 892 | .calculation-title-comment { 893 | font-size: 0.8rem; 894 | color: rgb(180,180,180); 895 | } 896 | 897 | .calculations-line { 898 | font-size: 1rem; 899 | display: flex; 900 | padding-top: 0.5rem; 901 | padding-bottom: 0.5rem; 902 | } 903 | 904 | .calculations-line-left { 905 | min-width: 10rem; 906 | max-width: 10rem; 907 | } 908 | 909 | 
.calculations-line-operator { 910 | padding-left: 0.4rem; 911 | padding-right: 0.4rem; 912 | } 913 | 914 | .calculations-line-right { 915 | 916 | } 917 | 918 | .calculations-line-bubble { 919 | border-radius: 1rem; 920 | background: rgba(255,56,0,1.0); 921 | color: white; 922 | padding-left: 0.5rem; 923 | padding-right: 0.5rem; 924 | } 925 | 926 | .calculation-analysis { 927 | padding-top: 0.25rem; 928 | } 929 | 930 | .calculations-comment { 931 | font-size: 0.9rem; 932 | color: rgba(0,0,0,0.6); 933 | margin-top: -0.4rem; 934 | line-height: 1.2rem; 935 | } 936 | 937 | /* SVG */ 938 | 939 | svg text { 940 | font-family: "Menlo", monospace, monospace; 941 | } 942 | 943 | /* Series */ 944 | .series-container { 945 | padding-bottom: 1rem; 946 | } 947 | 948 | /* .domain { 949 | display: none; 950 | } */ 951 | 952 | .follow-line { 953 | fill: none; 954 | stroke: rgba(255,56,0,1.0); 955 | stroke-width: 1px; 956 | } 957 | 958 | .follow-point { 959 | fill: rgba(255,56,0,1.0); 960 | } 961 | 962 | .axis-label { 963 | font-size: 0.8rem; 964 | font-weight: 600; 965 | } 966 | 967 | .tick { 968 | fill: rgb(255,56,0); 969 | } 970 | 971 | .tick line { 972 | stroke: rgb(120,120,120); 973 | } 974 | 975 | .legend-container { 976 | display: flex; 977 | flex-wrap: wrap; 978 | } 979 | 980 | .legend-item-container { 981 | display: flex; 982 | margin-right: 40px; 983 | } 984 | 985 | .legend-visual { 986 | width: 20px; 987 | margin-right: 10px; 988 | } 989 | 990 | .legend-text { 991 | font-family: "Menlo", monospace, monospace; 992 | font-size: 0.8rem; 993 | } 994 | 995 | /* Bimatrix */ 996 | 997 | .bimatrix { 998 | margin: auto; 999 | } 1000 | 1001 | .bimatrix-payoff-cell { 1002 | text-align: center; 1003 | } 1004 | 1005 | .bimatrix-strategy-p1 { 1006 | font-weight: 600; 1007 | text-decoration: underline; 1008 | text-decoration-color: rgba(3,136,166,1.0); 1009 | -webkit-text-decoration-color: rgba(3,136,166,1.0); 1010 | } 1011 | 1012 | .bimatrix-strategy-p2 { 1013 | font-weight: 600; 1014 
| text-decoration: underline; 1015 | text-decoration-color: rgba(255,56,0,1.0); 1016 | -webkit-text-decoration-color: rgba(255,56,0,1.0); 1017 | } 1018 | 1019 | .bimatrix-payoff-p1 { 1020 | color: rgba(3,136,166,1.0); 1021 | } 1022 | 1023 | .bimatrix-payoff-p2 { 1024 | color: rgba(255,56,0,1.0); 1025 | } 1026 | 1027 | .help-button { 1028 | text-align: right; 1029 | color: rgba(3,136,166,1.0); 1030 | text-decoration: underline; 1031 | font-size: 0.8rem; 1032 | cursor: pointer; 1033 | } 1034 | 1035 | .bimatrix-help-explanation-container { 1036 | padding-left: 20px; 1037 | padding-right: 20px; 1038 | font-size: 0.8rem; 1039 | } 1040 | 1041 | /* Reference */ 1042 | 1043 | .reference-item-number, .reference { 1044 | color: rgba(3,136,166,1.0); 1045 | font-size: 1rem; 1046 | font-weight: 600; 1047 | /* display: inline-block; */ 1048 | width: 25px; 1049 | vertical-align: top; 1050 | } 1051 | 1052 | .reference a, .reference-item-number a { 1053 | text-decoration: none; 1054 | margin-right: 0; 1055 | padding-right: 0; 1056 | } 1057 | 1058 | .reference-container { 1059 | /* color: rgb(0,0,0,0.5); */ 1060 | margin-bottom: 3rem; 1061 | } 1062 | 1063 | .reference-container-title { 1064 | font-size: 1.1rem; 1065 | margin-top: 20px; 1066 | margin-bottom: 10px; 1067 | } 1068 | 1069 | .reference-container-title { 1070 | font-weight: bold; 1071 | } 1072 | 1073 | .reference-title { 1074 | font-weight: 500; 1075 | display: inline-block; 1076 | } 1077 | 1078 | .reference-item-container { 1079 | font-size: 0.9rem; 1080 | line-height: 1.4rem; 1081 | margin-bottom: 0.5rem; 1082 | } 1083 | 1084 | .reference-item { 1085 | /* display: inline-block; */ 1086 | } 1087 | 1088 | .references-sub-title { 1089 | margin-bottom: 0.4rem; 1090 | } 1091 | 1092 | .explain { 1093 | /* background: rgba(255, 239, 0,0.6); */ 1094 | background: #F5F5F5; 1095 | padding-left: 10px; 1096 | padding-right: 10px; 1097 | padding-top: 10px; 1098 | padding-bottom: 10px; 1099 | margin-bottom: 1.5rem; 1100 | margin-top: 
1.5rem; 1101 | } 1102 | 1103 | .reading { 1104 | margin-bottom: 0.9rem; 1105 | } 1106 | 1107 | .radio-title { 1108 | font-weight: bold; 1109 | font-family: "Menlo", monospace, monospace; 1110 | font-size: 0.8rem; 1111 | } 1112 | 1113 | .radio-buttons-container { 1114 | margin-bottom: 0.4rem; 1115 | } 1116 | 1117 | .visual-extra { 1118 | font-size: 0.8rem; 1119 | line-height: 1.3rem; 1120 | font-family: "Menlo", monospace, monospace; 1121 | margin-top: 0.5rem; 1122 | } 1123 | 1124 | /* Tables */ 1125 | 1126 | .static-image-container table { 1127 | font-size: 0.9rem; 1128 | line-height: 1.4rem; 1129 | font-family: "Menlo", monospace, monospace; 1130 | width: 200px; 1131 | border-collapse: collapse; 1132 | color: rgb(56,56,56,0.8); 1133 | } 1134 | 1135 | .static-image-container td.index-cell { 1136 | padding-right: 10px; 1137 | } 1138 | 1139 | .static-image-container th { 1140 | padding-right: 10px; 1141 | font-weight: normal; 1142 | color: rgba(3,136,166,1.0); 1143 | } 1144 | 1145 | /* Math */ 1146 | 1147 | div.math { 1148 | text-align: center; 1149 | margin-bottom: 0.8rem; 1150 | } 1151 | 1152 | .math-rendered .katex { 1153 | font-size: 1em; 1154 | font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Oxygen, Ubuntu, Cantarell, "Fira Sans", "Droid Sans", "Helvetica Neue", Arial, sans-serif; 1155 | } 1156 | 1157 | /* Footer */ 1158 | 1159 | .footer-container { 1160 | margin-top: 2rem; 1161 | padding-top: 3rem; 1162 | border-top: solid rgba(0,0,0,0.6) 0.5px; 1163 | } 1164 | 1165 | .footer-sub-title { 1166 | /* font-size: 0.9rem; */ 1167 | color: rgba(0, 0, 0, 0.6); 1168 | margin-bottom: 0.1rem; 1169 | } 1170 | 1171 | .footer-content { 1172 | font-size: 0.9rem; 1173 | margin-bottom: 0.4rem; 1174 | line-height: 1.3rem; 1175 | } 1176 | -------------------------------------------------------------------------------- /static/referencesFormatting.js: -------------------------------------------------------------------------------- 1 | let refs = 
document.getElementsByClassName('reference'); 2 | Object.values(refs).forEach( 3 | (ref, idx) => { 4 | if (Object.values(refs).map( 5 | r => r.getAttribute('refid') 6 | ).indexOf(ref.getAttribute('refid')) == idx) { 7 | ref.setAttribute('id', 'ref-' + ref.getAttribute('refid').split(" ")[0]); 8 | } 9 | } 10 | ); 11 | let bibliographyData = _.uniq( 12 | Object.values(refs).reduce( 13 | (acc, ref) => { 14 | let refids = ref.getAttribute('refid').split(" "); 15 | return acc.concat( 16 | refids.map( 17 | refid => { 18 | let extendedRef = _.extend( 19 | referenceData[refid], { refid: refid, href: refids[0] } 20 | ); 21 | return extendedRef; 22 | } 23 | ) 24 | ); 25 | }, [] 26 | ), 27 | item => { 28 | return item.refid; 29 | } 30 | ); 31 | 32 | Object.values(refs).forEach( 33 | d => { 34 | let refidx = d.getAttribute('refid').split(" ") 35 | .map( 36 | ref => _.indexOf( 37 | bibliographyData.map(bib => bib.refid), 38 | ref 39 | ) + 1 40 | ); 41 | ReactDOM.render( 42 | React.createElement(ReferenceInText, { refidx }), 43 | d 44 | ); 45 | } 46 | ); 47 | 48 | ReactDOM.render( 49 | React.createElement(ReferenceContainer, { 50 | references: bibliographyData 51 | }), 52 | document.querySelector("#reference-container") 53 | ); 54 | -------------------------------------------------------------------------------- /static/rig.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ethereum/rig/6ad58089cf2c36d2632b0d9e70a02e41ef3b2a28/static/rig.png -------------------------------------------------------------------------------- /static/sectionFormatting.js: -------------------------------------------------------------------------------- 1 | const formatSection = function(section) { 2 | return { 3 | name: section.text 4 | } 5 | }; 6 | 7 | Object.values(document.getElementsByTagName('h2')).forEach((item, i) => { 8 | item.setAttribute('class', 'section-title section-label') 9 | }); 10 | 11 | 
Object.values(document.getElementsByTagName('h3')).forEach((item, i) => { 12 | item.setAttribute('class', 'section-sub-title section-label') 13 | }); 14 | 15 | const titles = document.getElementsByClassName('section-label'); 16 | const unformattedSections = Object.values(titles).map( 17 | (title, idx) => [title, idx] 18 | ).filter( 19 | d => d[0].getAttribute('class').includes('section-title') 20 | ).map( 21 | (d, i, a) => { 22 | return { 23 | section: { 24 | element: d[0], 25 | secidx: i+1, 26 | id: d[0].getAttribute('id'), 27 | text: d[0].innerText 28 | }, 29 | subsections: Object.values(titles).filter( 30 | (title, idx) => title.getAttribute('class').includes('section-sub-title') && (idx < (i == (a.length-1) ? titles.length : a[i+1][1])) && (idx > a[i][1]) 31 | ).map( 32 | (el, idx) => { 33 | return { 34 | element: el, 35 | secidx: i+1, 36 | subidx: idx+1, 37 | id: el.getAttribute('id'), 38 | text: el.innerText 39 | } 40 | } 41 | ) 42 | } 43 | } 44 | ); 45 | 46 | const flatRefs = unformattedSections.reduce( 47 | (acc, sec) => acc.concat( 48 | [sec.section].concat(sec.subsections) 49 | ), [] 50 | ); 51 | flatRefs.forEach( 52 | reference => { 53 | const text = (reference.subidx ? (reference.secidx + "." + reference.subidx) : 54 | (reference.secidx)); 55 | const className = reference.subidx ? "sub-title-number" : "title-number"; 56 | reference.element.innerHTML = "<span class='" + className + "'>" + text + ".</span> " + reference.text; 57 | } 58 | ); 59 | 60 | Object.values(document.getElementsByClassName('secref')) 61 | .forEach( 62 | reference => { 63 | const refId = reference.getAttribute('class').split(' ')[1]; 64 | const rs = flatRefs.find( 65 | r => r.id === refId 66 | ); 67 | const link = rs.subidx ? ("subsec-" + rs.secidx + "-" + rs.subidx) : 68 | ("sec-" + rs.secidx); 69 | const text = rs.subidx ? (rs.secidx + "." 
+ rs.subidx) : 70 | (rs.secidx); 71 | reference.innerHTML = "<a href='#" + link + "'>" + text + "</a>"; 72 | } 73 | ); 74 | 75 | unformattedSections.forEach( 76 | section => { 77 | section.section.element.setAttribute("id", "sec-" + section.section.secidx); 78 | section.subsections.forEach( 79 | subsection => { 80 | subsection.element.setAttribute( 81 | "id", "subsec-" + subsection.secidx + "-" + subsection.subidx 82 | ); 83 | } 84 | ) 85 | } 86 | ); 87 | 88 | const sections = unformattedSections.map( 89 | section => { 90 | return { 91 | section: formatSection(section.section), 92 | subsections: section.subsections.map( 93 | subsection => formatSection(subsection) 94 | ) 95 | }; 96 | } 97 | ); 98 | 99 | ReactDOM.render( 100 | React.createElement(TOC, { 101 | sections: sections 102 | }), 103 | document.querySelector("#toc") 104 | ); 105 | -------------------------------------------------------------------------------- /static/theme-light.css: -------------------------------------------------------------------------------- 1 | /*----------------------------------------------------------------------------- 2 | | Copyright (c) Jupyter Development Team. 3 | | Distributed under the terms of the Modified BSD License. 4 | |----------------------------------------------------------------------------*/ 5 | 6 | /* 7 | The following CSS variables define the main, public API for styling JupyterLab. 8 | These variables should be used by all plugins wherever possible. In other 9 | words, plugins should not define custom colors, sizes, etc unless absolutely 10 | necessary. This enables users to change the visual theme of JupyterLab 11 | by changing these variables. 12 | 13 | Many variables appear in an ordered sequence (0,1,2,3). These sequences 14 | are designed to work well together, so for example, `--jp-border-color1` should 15 | be used with `--jp-layout-color1`. 
The numbers have the following meanings: 16 | 17 | * 0: super-primary, reserved for special emphasis 18 | * 1: primary, most important under normal situations 19 | * 2: secondary, next most important under normal situations 20 | * 3: tertiary, next most important under normal situations 21 | 22 | Throughout JupyterLab, we are mostly following principles from Google's 23 | Material Design when selecting colors. We are not, however, following 24 | all of MD as it is not optimized for dense, information rich UIs. 25 | */ 26 | 27 | :root { 28 | /* Elevation 29 | * 30 | * We style box-shadows using Material Design's idea of elevation. These particular numbers are taken from here: 31 | * 32 | * https://github.com/material-components/material-components-web 33 | * https://material-components-web.appspot.com/elevation.html 34 | */ 35 | 36 | --jp-shadow-base-lightness: 0; 37 | --jp-shadow-umbra-color: rgba( 38 | var(--jp-shadow-base-lightness), 39 | var(--jp-shadow-base-lightness), 40 | var(--jp-shadow-base-lightness), 41 | 0.2 42 | ); 43 | --jp-shadow-penumbra-color: rgba( 44 | var(--jp-shadow-base-lightness), 45 | var(--jp-shadow-base-lightness), 46 | var(--jp-shadow-base-lightness), 47 | 0.14 48 | ); 49 | --jp-shadow-ambient-color: rgba( 50 | var(--jp-shadow-base-lightness), 51 | var(--jp-shadow-base-lightness), 52 | var(--jp-shadow-base-lightness), 53 | 0.12 54 | ); 55 | --jp-elevation-z0: none; 56 | --jp-elevation-z1: 0px 2px 1px -1px var(--jp-shadow-umbra-color), 57 | 0px 1px 1px 0px var(--jp-shadow-penumbra-color), 58 | 0px 1px 3px 0px var(--jp-shadow-ambient-color); 59 | --jp-elevation-z2: 0px 3px 1px -2px var(--jp-shadow-umbra-color), 60 | 0px 2px 2px 0px var(--jp-shadow-penumbra-color), 61 | 0px 1px 5px 0px var(--jp-shadow-ambient-color); 62 | --jp-elevation-z4: 0px 2px 4px -1px var(--jp-shadow-umbra-color), 63 | 0px 4px 5px 0px var(--jp-shadow-penumbra-color), 64 | 0px 1px 10px 0px var(--jp-shadow-ambient-color); 65 | --jp-elevation-z6: 0px 3px 5px -1px 
var(--jp-shadow-umbra-color), 66 | 0px 6px 10px 0px var(--jp-shadow-penumbra-color), 67 | 0px 1px 18px 0px var(--jp-shadow-ambient-color); 68 | --jp-elevation-z8: 0px 5px 5px -3px var(--jp-shadow-umbra-color), 69 | 0px 8px 10px 1px var(--jp-shadow-penumbra-color), 70 | 0px 3px 14px 2px var(--jp-shadow-ambient-color); 71 | --jp-elevation-z12: 0px 7px 8px -4px var(--jp-shadow-umbra-color), 72 | 0px 12px 17px 2px var(--jp-shadow-penumbra-color), 73 | 0px 5px 22px 4px var(--jp-shadow-ambient-color); 74 | --jp-elevation-z16: 0px 8px 10px -5px var(--jp-shadow-umbra-color), 75 | 0px 16px 24px 2px var(--jp-shadow-penumbra-color), 76 | 0px 6px 30px 5px var(--jp-shadow-ambient-color); 77 | --jp-elevation-z20: 0px 10px 13px -6px var(--jp-shadow-umbra-color), 78 | 0px 20px 31px 3px var(--jp-shadow-penumbra-color), 79 | 0px 8px 38px 7px var(--jp-shadow-ambient-color); 80 | --jp-elevation-z24: 0px 11px 15px -7px var(--jp-shadow-umbra-color), 81 | 0px 24px 38px 3px var(--jp-shadow-penumbra-color), 82 | 0px 9px 46px 8px var(--jp-shadow-ambient-color); 83 | 84 | /* Borders 85 | * 86 | * The following variables, specify the visual styling of borders in JupyterLab. 87 | */ 88 | 89 | --jp-border-width: 1px; 90 | --jp-border-color0: var(--md-grey-400); 91 | --jp-border-color1: var(--md-grey-400); 92 | --jp-border-color2: var(--md-grey-300); 93 | --jp-border-color3: var(--md-grey-200); 94 | --jp-border-radius: 2px; 95 | 96 | /* UI Fonts 97 | * 98 | * The UI font CSS variables are used for the typography all of the JupyterLab 99 | * user interface elements that are not directly user generated content. 100 | * 101 | * The font sizing here is done assuming that the body font size of --jp-ui-font-size1 102 | * is applied to a parent element. When children elements, such as headings, are sized 103 | * in em all things will be computed relative to that body size. 
104 | */ 105 | 106 | --jp-ui-font-scale-factor: 1.2; 107 | --jp-ui-font-size0: 0.83333em; 108 | --jp-ui-font-size1: 13px; /* Base font size */ 109 | --jp-ui-font-size2: 1.2em; 110 | --jp-ui-font-size3: 1.44em; 111 | 112 | --jp-ui-font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, 113 | Arial, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', 'Segoe UI Symbol'; 114 | 115 | /* 116 | * Use these font colors against the corresponding main layout colors. 117 | * In a light theme, these go from dark to light. 118 | */ 119 | 120 | /* Defaults use Material Design specification */ 121 | --jp-ui-font-color0: rgba(0, 0, 0, 1); 122 | --jp-ui-font-color1: rgba(0, 0, 0, 0.87); 123 | --jp-ui-font-color2: rgba(0, 0, 0, 0.54); 124 | --jp-ui-font-color3: rgba(0, 0, 0, 0.38); 125 | 126 | /* 127 | * Use these against the brand/accent/warn/error colors. 128 | * These will typically go from light to darker, in both a dark and light theme. 129 | */ 130 | 131 | --jp-ui-inverse-font-color0: rgba(255, 255, 255, 1); 132 | --jp-ui-inverse-font-color1: rgba(255, 255, 255, 1); 133 | --jp-ui-inverse-font-color2: rgba(255, 255, 255, 0.7); 134 | --jp-ui-inverse-font-color3: rgba(255, 255, 255, 0.5); 135 | 136 | /* Content Fonts 137 | * 138 | * Content font variables are used for typography of user generated content. 139 | * 140 | * The font sizing here is done assuming that the body font size of --jp-content-font-size1 141 | * is applied to a parent element. When children elements, such as headings, are sized 142 | * in em all things will be computed relative to that body size. 
143 | */ 144 | 145 | --jp-content-line-height: 1.6; 146 | --jp-content-font-scale-factor: 1.2; 147 | --jp-content-font-size0: 0.83333em; 148 | --jp-content-font-size1: 14px; /* Base font size */ 149 | --jp-content-font-size2: 1.2em; 150 | --jp-content-font-size3: 1.44em; 151 | --jp-content-font-size4: 1.728em; 152 | --jp-content-font-size5: 2.0736em; 153 | 154 | /* This gives a magnification of about 125% in presentation mode over normal. */ 155 | --jp-content-presentation-font-size1: 17px; 156 | 157 | --jp-content-heading-line-height: 1; 158 | --jp-content-heading-margin-top: 1.2em; 159 | --jp-content-heading-margin-bottom: 0.8em; 160 | --jp-content-heading-font-weight: 500; 161 | 162 | /* Defaults use Material Design specification */ 163 | --jp-content-font-color0: rgba(0, 0, 0, 1); 164 | --jp-content-font-color1: rgba(0, 0, 0, 0.87); 165 | --jp-content-font-color2: rgba(0, 0, 0, 0.54); 166 | --jp-content-font-color3: rgba(0, 0, 0, 0.38); 167 | 168 | --jp-content-link-color: var(--md-blue-700); 169 | 170 | --jp-content-font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 171 | Helvetica, Arial, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', 172 | 'Segoe UI Symbol'; 173 | 174 | /* 175 | * Code Fonts 176 | * 177 | * Code font variables are used for typography of code and other monospaces content. 178 | */ 179 | 180 | --jp-code-font-size: 13px; 181 | --jp-code-line-height: 1.3077; /* 17px for 13px base */ 182 | --jp-code-padding: 5px; /* 5px for 13px base, codemirror highlighting needs integer px value */ 183 | --jp-code-font-family-default: Menlo, Consolas, 'DejaVu Sans Mono', monospace; 184 | --jp-code-font-family: var(--jp-code-font-family-default); 185 | 186 | /* This gives a magnification of about 125% in presentation mode over normal. 
*/ 187 | --jp-code-presentation-font-size: 16px; 188 | 189 | /* may need to tweak cursor width if you change font size */ 190 | --jp-code-cursor-width0: 1.4px; 191 | --jp-code-cursor-width1: 2px; 192 | --jp-code-cursor-width2: 4px; 193 | 194 | /* Layout 195 | * 196 | * The following are the main layout colors use in JupyterLab. In a light 197 | * theme these would go from light to dark. 198 | */ 199 | 200 | --jp-layout-color0: white; 201 | --jp-layout-color1: white; 202 | --jp-layout-color2: var(--md-grey-200); 203 | --jp-layout-color3: var(--md-grey-400); 204 | --jp-layout-color4: var(--md-grey-600); 205 | 206 | /* Inverse Layout 207 | * 208 | * The following are the inverse layout colors use in JupyterLab. In a light 209 | * theme these would go from dark to light. 210 | */ 211 | 212 | --jp-inverse-layout-color0: #111111; 213 | --jp-inverse-layout-color1: var(--md-grey-900); 214 | --jp-inverse-layout-color2: var(--md-grey-800); 215 | --jp-inverse-layout-color3: var(--md-grey-700); 216 | --jp-inverse-layout-color4: var(--md-grey-600); 217 | 218 | /* Brand/accent */ 219 | 220 | --jp-brand-color0: var(--md-blue-700); 221 | --jp-brand-color1: var(--md-blue-500); 222 | --jp-brand-color2: var(--md-blue-300); 223 | --jp-brand-color3: var(--md-blue-100); 224 | --jp-brand-color4: var(--md-blue-50); 225 | 226 | --jp-accent-color0: var(--md-green-700); 227 | --jp-accent-color1: var(--md-green-500); 228 | --jp-accent-color2: var(--md-green-300); 229 | --jp-accent-color3: var(--md-green-100); 230 | 231 | /* State colors (warn, error, success, info) */ 232 | 233 | --jp-warn-color0: var(--md-orange-700); 234 | --jp-warn-color1: var(--md-orange-500); 235 | --jp-warn-color2: var(--md-orange-300); 236 | --jp-warn-color3: var(--md-orange-100); 237 | 238 | --jp-error-color0: var(--md-red-700); 239 | --jp-error-color1: var(--md-red-500); 240 | --jp-error-color2: var(--md-red-300); 241 | --jp-error-color3: var(--md-red-100); 242 | 243 | --jp-success-color0: var(--md-green-700); 244 | 
--jp-success-color1: var(--md-green-500); 245 | --jp-success-color2: var(--md-green-300); 246 | --jp-success-color3: var(--md-green-100); 247 | 248 | --jp-info-color0: var(--md-cyan-700); 249 | --jp-info-color1: var(--md-cyan-500); 250 | --jp-info-color2: var(--md-cyan-300); 251 | --jp-info-color3: var(--md-cyan-100); 252 | 253 | /* Cell specific styles */ 254 | 255 | --jp-cell-padding: 5px; 256 | 257 | --jp-cell-collapser-width: 8px; 258 | --jp-cell-collapser-min-height: 20px; 259 | --jp-cell-collapser-not-active-hover-opacity: 0.6; 260 | 261 | --jp-cell-editor-background: var(--md-grey-100); 262 | --jp-cell-editor-border-color: var(--md-grey-300); 263 | --jp-cell-editor-box-shadow: inset 0 0 2px var(--md-blue-300); 264 | --jp-cell-editor-active-background: var(--jp-layout-color0); 265 | --jp-cell-editor-active-border-color: var(--jp-brand-color1); 266 | 267 | --jp-cell-prompt-width: 64px; 268 | --jp-cell-prompt-font-family: 'Source Code Pro', monospace; 269 | --jp-cell-prompt-letter-spacing: 0px; 270 | --jp-cell-prompt-opacity: 1; 271 | --jp-cell-prompt-not-active-opacity: 0.5; 272 | --jp-cell-prompt-not-active-font-color: var(--md-grey-700); 273 | /* A custom blend of MD grey and blue 600 274 | * See https://meyerweb.com/eric/tools/color-blend/#546E7A:1E88E5:5:hex */ 275 | --jp-cell-inprompt-font-color: #307fc1; 276 | /* A custom blend of MD grey and orange 600 277 | * https://meyerweb.com/eric/tools/color-blend/#546E7A:F4511E:5:hex */ 278 | --jp-cell-outprompt-font-color: #bf5b3d; 279 | 280 | /* Notebook specific styles */ 281 | 282 | --jp-notebook-padding: 10px; 283 | --jp-notebook-select-background: var(--jp-layout-color1); 284 | --jp-notebook-multiselected-color: var(--md-blue-50); 285 | 286 | /* The scroll padding is calculated to fill enough space at the bottom of the 287 | notebook to show one single-line cell (with appropriate padding) at the top 288 | when the notebook is scrolled all the way to the bottom. 
We also subtract one 289 | pixel so that no scrollbar appears if we have just one single-line cell in the 290 | notebook. This padding is to enable a 'scroll past end' feature in a notebook. 291 | */ 292 | --jp-notebook-scroll-padding: calc( 293 | 100% - var(--jp-code-font-size) * var(--jp-code-line-height) - 294 | var(--jp-code-padding) - var(--jp-cell-padding) - 1px 295 | ); 296 | 297 | /* Rendermime styles */ 298 | 299 | --jp-rendermime-error-background: #fdd; 300 | --jp-rendermime-table-row-background: var(--md-grey-100); 301 | --jp-rendermime-table-row-hover-background: var(--md-light-blue-50); 302 | 303 | /* Dialog specific styles */ 304 | 305 | --jp-dialog-background: rgba(0, 0, 0, 0.25); 306 | 307 | /* Console specific styles */ 308 | 309 | --jp-console-padding: 10px; 310 | 311 | /* Toolbar specific styles */ 312 | 313 | --jp-toolbar-border-color: var(--jp-border-color1); 314 | --jp-toolbar-micro-height: 8px; 315 | --jp-toolbar-background: var(--jp-layout-color1); 316 | --jp-toolbar-box-shadow: 0px 0px 2px 0px rgba(0, 0, 0, 0.24); 317 | --jp-toolbar-header-margin: 4px 4px 0px 4px; 318 | --jp-toolbar-active-background: var(--md-grey-300); 319 | 320 | /* Input field styles */ 321 | 322 | --jp-input-box-shadow: inset 0 0 2px var(--md-blue-300); 323 | --jp-input-active-background: var(--jp-layout-color1); 324 | --jp-input-hover-background: var(--jp-layout-color1); 325 | --jp-input-background: var(--md-grey-100); 326 | --jp-input-border-color: var(--jp-border-color1); 327 | --jp-input-active-border-color: var(--jp-brand-color1); 328 | --jp-input-active-box-shadow-color: rgba(19, 124, 189, 0.3); 329 | 330 | /* General editor styles */ 331 | 332 | --jp-editor-selected-background: #d9d9d9; 333 | --jp-editor-selected-focused-background: #d7d4f0; 334 | --jp-editor-cursor-color: var(--jp-ui-font-color0); 335 | 336 | /* Code mirror specific styles */ 337 | 338 | --jp-mirror-editor-keyword-color: #008000; 339 | --jp-mirror-editor-atom-color: #88f; 340 | 
--jp-mirror-editor-number-color: #080; 341 | --jp-mirror-editor-def-color: #00f; 342 | --jp-mirror-editor-variable-color: var(--md-grey-900); 343 | --jp-mirror-editor-variable-2-color: #05a; 344 | --jp-mirror-editor-variable-3-color: #085; 345 | --jp-mirror-editor-punctuation-color: #05a; 346 | --jp-mirror-editor-property-color: #05a; 347 | --jp-mirror-editor-operator-color: #aa22ff; 348 | --jp-mirror-editor-comment-color: #408080; 349 | --jp-mirror-editor-string-color: #ba2121; 350 | --jp-mirror-editor-string-2-color: #708; 351 | --jp-mirror-editor-meta-color: #aa22ff; 352 | --jp-mirror-editor-qualifier-color: #555; 353 | --jp-mirror-editor-builtin-color: #008000; 354 | --jp-mirror-editor-bracket-color: #997; 355 | --jp-mirror-editor-tag-color: #170; 356 | --jp-mirror-editor-attribute-color: #00c; 357 | --jp-mirror-editor-header-color: blue; 358 | --jp-mirror-editor-quote-color: #090; 359 | --jp-mirror-editor-link-color: #00c; 360 | --jp-mirror-editor-error-color: #f00; 361 | --jp-mirror-editor-hr-color: #999; 362 | 363 | /* Vega extension styles */ 364 | 365 | --jp-vega-background: white; 366 | 367 | /* Sidebar-related styles */ 368 | 369 | --jp-sidebar-min-width: 180px; 370 | 371 | /* Search-related styles */ 372 | 373 | --jp-search-toggle-off-opacity: 0.5; 374 | --jp-search-toggle-hover-opacity: 0.8; 375 | --jp-search-toggle-on-opacity: 1; 376 | --jp-search-selected-match-background-color: rgb(245, 200, 0); 377 | --jp-search-selected-match-color: black; 378 | --jp-search-unselected-match-background-color: var( 379 | --jp-inverse-layout-color0 380 | ); 381 | --jp-search-unselected-match-color: var(--jp-ui-inverse-font-color0); 382 | 383 | /* Icon colors that work well with light or dark backgrounds */ 384 | --jp-icon-contrast-color0: var(--md-purple-600); 385 | --jp-icon-contrast-color1: var(--md-green-600); 386 | --jp-icon-contrast-color2: var(--md-pink-600); 387 | --jp-icon-contrast-color3: var(--md-blue-600); 388 | } 389 | 
-------------------------------------------------------------------------------- /supply-chain-health/README.md: -------------------------------------------------------------------------------- 1 | # Supply chain health 2 | 3 | Market structure and health metrics for the Ethereum supply chain. 4 | 5 | ## Data sources 6 | 7 | 1. [relayscan](https://github.com/flashbots/relayscan) 8 | 9 | The table contains data from all relays for each slot, reporting `payload delivered` xxx; should it also include get header data fields? xxx 10 | 11 | ## Metrics 12 | 13 | - Transaction inclusion delays / censorship 14 | 15 | - Vertical integration / private order flow 16 | 17 | - Horizontal concentration 18 | 19 | - Bid timing (split by validator and validator-builder sub-market) 20 | 21 | - MEV take --------------------------------------------------------------------------------
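
The horizontal concentration metric listed in the README can be sketched concretely: a common summary is a Herfindahl-Hirschman index over which builder delivered each slot's payload. This is a minimal illustrative sketch, assuming relayscan's payload-delivered rows expose one builder identifier per slot; `herfindahl_index` and the toy builder names are hypothetical, not part of this repo or of relayscan's schema.

```python
from collections import Counter

def herfindahl_index(winning_builders):
    """Herfindahl-Hirschman index of builder market shares.

    `winning_builders` holds one builder identifier per slot (e.g. the
    builder pubkey from payload-delivered data). Returns a value in
    (0, 1]: 1/n for n equally sized builders, 1 for a monopoly.
    """
    counts = Counter(winning_builders)
    total = sum(counts.values())
    # Sum of squared market shares.
    return sum((c / total) ** 2 for c in counts.values())

# Toy example: one builder wins 3 of 4 slots.
slots = ["builder_a", "builder_a", "builder_a", "builder_b"]
print(herfindahl_index(slots))  # 0.75^2 + 0.25^2 = 0.625
```

The same function applies per sub-market (e.g. restricting `slots` to one relay or one validator set) to compare concentration across segments.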