├── resources ├── Across-Intents-Journey.png └── LayerZero-lzRead-Ethereum.png ├── README.md └── audit-checklists ├── Across.md ├── Chainlink-CCIP.md ├── Wormhole.md ├── Arbitrum.md └── LayerZeroV2.md /resources/Across-Intents-Journey.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/windhustler/Interoperability-Protocol-Security-Checklist/HEAD/resources/Across-Intents-Journey.png -------------------------------------------------------------------------------- /resources/LayerZero-lzRead-Ethereum.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/windhustler/Interoperability-Protocol-Security-Checklist/HEAD/resources/LayerZero-lzRead-Ethereum.png -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Interoperability Protocol Security Checklist 2 | 3 | This repo contains audit checklists for various interoperability protocols 4 | 5 | ## Table of Contents 6 | 7 | - [x] [Across](audit-checklists/Across.md) 8 | - [x] [Arbitrum](audit-checklists/Arbitrum.md) 9 | - [x] [Chainlink CCIP](audit-checklists/Chainlink-CCIP.md) 10 | - [x] [LayerZeroV2](audit-checklists/LayerZeroV2.md) 11 | - [x] [Wormhole](audit-checklists/Wormhole.md) 12 | 13 | ## Contributing 14 | If you'd like to contribute reach out on [X](https://x.com/windhustler), or Discord/Telegram @windhustler. 15 | -------------------------------------------------------------------------------- /audit-checklists/Across.md: -------------------------------------------------------------------------------- 1 | # Across Intent Lifecycle 2 | ![Across Intent Lifecycle](/resources/Across-Intents-Journey.png) 3 | 4 | # Across Security Checklist 5 | Across Protocol is an intent-based cross-chain protocol. It enables fast, low-cost, and secure asset transfers across multiple blockchain networks by leveraging a decentralized network of relayers and an optimistic verification model. 6 | 7 | Across Protocol’s architecture breaks down into three main parts: 8 | 1. *Request for Quote*: This is where users submit their intents/orders, specifying what they want not execution paths. 9 | 2. *Relayer Network*: Enabling a competitive network of relayers to bid, claim and fill those orders. 10 | 3. *Settlement Layer*: A settlement layer to verify intent fulfillment and repay relayers. 11 | 12 | ## Core Security Assumptions 13 | - Bridge security relies on optimistic verification with challenge periods 14 | - Relayers are economic actors incentivized by fees but are untrusted 15 | 16 | ### Relayers can spoof messages 17 | According to Across V3's [docs](https://docs.across.to/quickstart/embedded-crosschain-actions/crosschain-actions-integration-guide/using-the-generic-multicaller-handler-contract#security-and-safety-considerations), the protocol does not guarantee the integrity of the parameters that `handleV3AcrossMessage()` is called with: 18 | 19 | > Avoid making unvalidated assumptions about the message data supplied to `handleV3AcrossMessage()`. Across does not guarantee message integrity, only that a relayer who spoofs a message will not be repaid by Across. If integrity is required, integrators should consider including a depositor signature in the message for additional verification. 
Message data should otherwise be treated as spoofable and untrusted for use beyond directing the funds passed along with it.
20 | 
21 | The [docs](https://docs.across.to/quickstart/embedded-crosschain-actions/crosschain-actions-integration-guide/using-the-generic-multicaller-handler-contract#message-constraints) also state:
22 | > Handler contracts only use the funds that are sent to it. That means that the message is assumed to only have authority over those funds and, critically, no outside funds. This is important because relayers can send invalid relays. They will not be repaid if they attempt this, but if an invalid message could unlock other funds, then a relayer could spoof messages maliciously.
23 | 
24 | More specifically, a malicious relayer can call `handleV3AcrossMessage()` with any arguments, regardless
25 | of what was sent by users in the cross-chain transaction on the source chain. As such, all parameters of
26 | `handleV3AcrossMessage()` must be validated to ensure they have not been manipulated by a malicious relayer.
27 | Important arguments such as `tokenSent` and `amount` should be validated.
28 | 
29 | Bug example: [Finding 3.2.9](https://github.com/superform-xyz/v2-core-public-cantina/blob/main/audits/2025.04.19-cantinacode-superform-core.pdf)
30 | 
31 | 
32 | ### Depositor Address and Refund Handling
33 | When a deposit expires or a fill fails, refunds are sent to the depositor address on the origin chain. Across does not support specifying a separate refund address like other protocols (e.g., LayerZero). It’s critical to ensure that the [depositor](https://github.com/across-protocol/contracts/blob/0fee0264009e662a17e2cd8c22c4c493f12b8a03/contracts/SpokePool.sol#L449) address provided in the deposit is correct and able to receive refunds. Failing to do so can result in lost funds.
34 | 
35 | The Across docs [state](https://docs.across.to/concepts/intent-lifecycle-in-across#slow-fill-or-expiration-if-no-fill):
36 | > In cases where a slow fill can't or does not happen and a relayer does not fill the intent, the intent expires. Like a slow fill, this expiry must be optimistically verified, which takes a few hours. Once this verification is done, the user is then refunded their money on the origin chain.
37 | 
38 | Bug example: [Finding 6.1.3](https://github.com/superform-xyz/v2-core-public-cantina/blob/main/audits/2025.03.24-sujithsomraaj-superform-core.pdf)
39 | 
40 | 
41 | ### The `outputAmount` should not be outside the recommended range, otherwise tokens can be locked temporarily
42 | If the `inputAmount` is set too high, it can take a while for the deposit to be filled, depending on available relayer liquidity. However, if the `outputAmount` is set too high, it can be unprofitable to relay. The contracts will not revert if the `outputAmount` is set outside of the recommended range, but funds will probably be locked up for an unexpected length of time. Protocols should make sure that the user is not supplying an `outputAmount` outside the recommended range. A user-supplied `outputAmount` should be vetted against:
43 | ```
44 | recommended outputAmount = inputAmount * (1 - relayerFeePct - lpFeePct)
45 | ```
46 | or, when using the API:
47 | > If you are using the API to set the outputAmount then you should set it equal to inputAmount * (1 - fees.totalRelayFee.pct) where fees.totalRelayFee is returned by the /suggested-fees endpoint.
48 | 49 | Ref: [Across Docs](https://docs.across.to/reference/selected-contract-functions#deposit-1) 50 | 51 | ### Across doesn't send the origin sender address, which makes spoofable attacks easier 52 | The Across protocol doesn't provide origin sender information in bridged messages, making the msgs spoofable if not handled correctly. This absence requires developers to implement additional verification measures to prevent attackers from exploiting this gap. 53 | 54 | Bug Example: [Finding 5.2.2](https://github.com/meliopolis/chainhopper-protocol/blob/main/docs/Spearbit-audit.pdf) 55 | 56 | ### `handleV3AcrossMessage` should be callable by only the Spoke Contract 57 | Restricting `handleV3AcrossMessage` to be callable only by the authorized SpokePool contract (or another designated trusted contract) ensures that arbitrary or malicious actors cannot invoke this function with crafted or spoofed data. Without such access control, any external party could call the function with unauthorized messages, potentially causing unintended state changes, fund manipulation, or denial of service. This would undermine the protocol’s security guarantees and expose user funds to risk. 58 | ```solidity 59 | function handleV3AcrossMessage( 60 | address tokenSent, 61 | uint256 amount, 62 | address relayer, 63 | bytes memory message 64 | ) external override { 65 | // 1. Validate Sender [MUST] 66 | if (msg.sender != acrossSpokePool) { 67 | revert INVALID_SENDER(); 68 | } 69 | ``` 70 | -------------------------------------------------------------------------------- /audit-checklists/Chainlink-CCIP.md: -------------------------------------------------------------------------------- 1 | # Chainlink CCIP Security Checklist 2 | 3 | ## Message Execution Options 4 | 5 | ### Fees 6 | CCIP supports fee payments in LINK and in alternative assets, including blockchain-native gas tokens and their ERC-20 wrapped versions. 7 | The fee is calculated by the following [formula](https://docs.chain.link/ccip/billing#billing-mechanism): 8 | ``` 9 | fee = blockchain fee + network fee 10 | ``` 11 | - Blockchain fee: An estimation of the gas cost the node operators will pay to deliver the CCIP message to the destination blockchain. 12 | - Network fee: A fee paid to CCIP service providers, which varies based on the use case, the chosen lanes, and the fee token (See the [network fee table](https://docs.chain.link/ccip/billing#network-fee-table)). 13 | 14 | > **Note**: It is recommended to use the `getFee` function to estimate the fee accurately. 15 | 16 | 17 | ### Additional Arguments 18 | There are two additional arguments that can be included when sending a message: 19 | ```solidity 20 | extraArgs: Client._argsToBytes( 21 | Client.EVMExtraArgsV2({ 22 | gasLimit: 200_000, 23 | allowOutOfOrderExecution: true 24 | }) 25 | ) 26 | ``` 27 | 28 | - **`gasLimit`**: Specifies the maximum amount of gas that CCIP can consume to execute `ccipReceive()` on the contract located on the destination blockchain. It is the main factor in determining the fee to send a message. 29 | - If `gasLimit` is not provided, the default value is `200,000`. Ensure that `ccipReceive()` does not require more gas than this default. 30 | - Unspent gas is **not refunded**. 31 | - **`allowOutOfOrderExecution`**: Controls the execution order of your messages on the destination blockchain. This parameter is available only on lanes where the `Out of Order Execution` property is set to *Optional* or *Required*. 
32 | - When `allowOutOfOrderExecution` is **Optional**: You can set it to either `true` or `false`. 33 | - When `allowOutOfOrderExecution` is **Required**: You must set it to `true`. This acknowledges that messages may be executed out of order. If set to `false`, the message will revert and will not be processed. 34 | 35 | > **Note**: It is recommended to use mutable `extraArgs` (not hardcoded) in production deployments. 36 | 37 | 38 | ### CCIP Rate Limits 39 | Rate limits consist of a maximum capacity and a refill rate, which determines how quickly the maximum capacity is restored after a token transfer consumes some or all of the available capacity. 40 | 41 | - **[Token Pool Rate Limit](https://docs.chain.link/ccip/architecture#token-pool-rate-limit)**: For each supported token on every individual [lane](https://docs.chain.link/ccip/concepts#lane), this rate limit manages the total number of tokens that can be transferred within a specified time frame. This limit is independent of the token's USD value. 42 | - **[Aggregate Rate Limit](https://docs.chain.link/ccip/architecture#aggregate-rate-limit)**: Each lane also has an aggregate rate limit that caps the total USD value of transfers across all supported tokens on that lane. 43 | 44 | 45 | ## Token Decimal Handling 46 | When tokens move between blockchains with different decimal places, rounding may occur. This rounding can impact small amounts of tokens during cross-chain transfers. 47 | 48 | **Example**: 49 | Token precision on Chain A is 18 decimals, and on Chain B is 9 decimals. 50 | ``` 51 | • Sent from A: 1.123456789123456789 52 | • Received on B: 1.123456789 53 | 54 | • Lost: 0.000000000123456789 55 | ``` 56 | 57 | 58 | ## Manual Execution 59 | In certain exceptional conditions, users might need to manually execute the transaction on the destination blockchain. 60 | These conditions include: 61 | 62 | - **Unhandled exceptions**: Logical errors in the receiver contract. 63 | - **Gas limit exceeded for token pools**: If the combined execution of the required functions (`balanceOf` checks and [releaseOrMint](https://github.com/smartcontractkit/ccip/blob/bca2fe0/contracts/src/v0.8/ccip/pools/BurnMintTokenPoolAbstract.sol#L36)) surpasses the default gas limit of 90,000 on the destination blockchain. 64 | - **Insufficient gas**: If the gas limit provided in the [extraArgs](https://github.com/smartcontractkit/ccip/blob/5e7b209/contracts/src/v0.8/ccip/libraries/Client.sol#L49) parameter of the message is insufficient to execute the `ccipReceive()` function. 65 | - **Smart Execution time window exceeded**: If the message cannot be executed on the destination chain within CCIP’s Smart Execution time window (currently set to 8 hours). 66 | - This could occur during extreme network congestion or gas price spikes. 67 | 68 | > **Note**: After the Smart Execution time window expires, all subsequent messages will fail until the **failing message** is successfully executed. 69 | 70 | Bug examples: [1](https://code4rena.com/reports/2024-04-renzo#m-04-price-updating-mechanism-can-break) 71 | 72 | 73 | ## Requirements for Token Pools 74 | 75 | ### Gas Requirements 76 | On the destination blockchain, the CCIP OffRamp contract performs three key operations when releasing or minting tokens: 77 | 78 | 1. **`balanceOf` before minting/releasing tokens** 79 | 2. **[`releaseOrMint`](https://github.com/smartcontractkit/ccip/blob/bca2fe0/contracts/src/v0.8/ccip/pools/LockReleaseTokenPool.sol#L64) to mint or release tokens** 80 | 3. 
**`balanceOf` after minting/releasing tokens** 81 | 82 | > **Note**: If the combined gas consumption of these three operations exceeds the default gas limit of 90,000 on the destination blockchain, the CCIP execution will fail. 83 | 84 | ### Custom Token Pools 85 | If custom `TokenPool` is build, it is crucial to follow these guidelines: 86 | 87 | - **For Burn and Mint mechanisms**: 88 | Custom token pool should inherit from [BurnMintTokenPoolAbstract](https://github.com/smartcontractkit/ccip/blob/bca2fe0/contracts/src/v0.8/ccip/pools/BurnMintTokenPoolAbstract.sol). 89 | - **For Lock and Release mechanisms**: 90 | Custom token pool can: 91 | - Inherit from [TokenPool](https://github.com/smartcontractkit/ccip/blob/478f0e5/contracts/src/v0.8/ccip/pools/TokenPool.sol) and implement the [ILiquidityContainer](https://github.com/smartcontractkit/ccip/blob/19dafcc/contracts/src/v0.8/liquiditymanager/interfaces/ILiquidityContainer.sol) interface. 92 | - Or directly inherit from [LockReleaseTokenPool](https://github.com/smartcontractkit/ccip/blob/bca2fe0/contracts/src/v0.8/ccip/pools/LockReleaseTokenPool.sol) and reimplement the `lockOrBurn` and `releaseOrMint` functions as needed. 93 | 94 | 95 | ## Validate CCIP Inputs 96 | If users can call the `ccipSend()` function, it's important to check the CCIP inputs before sending the message. If the state changes before the message is sent and an attacker provides wrong inputs, funds might get stuck in the contract. 97 | 98 | Bug examples: [1](https://github.com/sherlock-audit/2024-08-winnables-raffles-judging/issues/50) 99 | 100 | 101 | ## Useful resources 102 | 103 | - [Chainlink CCIP developer docs](https://docs.chain.link/ccip) 104 | - [CCIP local simulator in Foundry](https://docs.chain.link/chainlink-local/build/ccip/foundry/local-simulator) 105 | -------------------------------------------------------------------------------- /audit-checklists/Wormhole.md: -------------------------------------------------------------------------------- 1 | # Wormhole Security Checklist 2 | 3 | ## Overview 4 | 5 | Wormhole is a cross-chain messaging protocol that provides several services: 6 | - Message passing 7 | - Fungible token bridging 8 | - NFT bridging 9 | - Native token transfers (NTT) 10 | - Cross-chain governance 11 | 12 | The protocol operates as a multisig bridge system with the following key components: 13 | - Guardian nodes: A minimum of 13 guardian nodes must attest to messages (VAA's) for them to be valid 14 | - Relayers: Trustless parties responsible for delivering VAAs to destination chains 15 | - Wormhole-Core contract: Verifies VAAs on destination chains 16 | 17 | ## Core Security Assumptions 18 | 19 | 1. Bridge security relies on the guardian node committee 20 | 2. VAAs are broadcast publicly and can be verified on any chain with Wormhole-Core 21 | 3. Relayers are untrusted and may drop or skip VAAs 22 | 4. For protocols deployed on multiple chains, implement replay protection 23 | 24 | 25 | ### Direct `publishMessage` Usage 26 | 27 | Protocols can publish messages directly through the Wormhole-Core contract's [`publishMessage`](https://github.com/wormhole-foundation/wormhole/blob/fd1ed1564e3a4047cca78ac539956c9932664d96/ethereum/contracts/Implementation.sol#L15) function: 28 | 29 | ```solidity 30 | function publishMessage( 31 | uint32 nonce, 32 | bytes memory payload, 33 | uint8 consistencyLevel 34 | ) external payable returns (uint64 sequence); 35 | ``` 36 | 37 | Key parameters and considerations: 38 | 39 | 1. 
`nonce`: 40 | - Must be unique for each bridge message and caller 41 | - Protocols must implement proper nonce generation 42 | - Missing nonce validation can lead to replay attacks 43 | 44 | 2. `payload`: 45 | - Contains the data being sent to the destination chain 46 | - Must be properly encoded/decoded between chains 47 | - Use matching encoding/decoding methods (e.g., `abi.encode`/`abi.decode`) 48 | - Incorrect encoding can lead to undefined behavior or data corruption 49 | 50 | 3. `consistencyLevel`: 51 | - Determines the security level and processing delay 52 | - Higher levels = better security but longer delays 53 | - Lower levels = faster processing but vulnerable to block reorgs 54 | - Must be set appropriately based on chain finality guarantees 55 | 56 | ### Message Fee Handling 57 | 58 | The `publishMessage()` function is payable and requires the correct fee to be passed. Always fetch the current fee using `messageFee()` on the Wormhole-Core contract instead of hardcoding the value. 59 | 60 | ```solidity 61 | // Problematic Implementation 62 | ICircleIntegration(wormholeCircleBridge).transferTokensWithPayload( 63 | transferParameters, 64 | 0, // No fee value passed 65 | abi.encode(msg.sender) 66 | ); 67 | 68 | // Correct Implementation 69 | uint256 messageFee = wormholeCore.messageFee(); 70 | ICircleIntegration(wormholeCircleBridge).transferTokensWithPayload{value: messageFee}( 71 | transferParameters, 72 | messageFee, 73 | abi.encode(msg.sender) 74 | ); 75 | ``` 76 | 77 | Bug examples: [1](https://0xmacro.com/library/audits/infinex-1.html#M-1), [2](https://cdn.prod.website-files.com/621a140a057f392845dfaef3/65c9d04d3bc92bd2dfb6dc87_SmartContract_Audit_MagpieProtocol(v4)_v1.1.pdf), [3](https://iosiro.com/audits/infinex-accounts-smart-account-smart-contract-audit#IO-IFX-ACC-007), [QS-5](https://certificate.quantstamp.com/full/hashflow-hashverse/1af3e150-d612-4b24-bc74-185624a863f8/index.html#findings-qs5), [P1-I-02](https://2301516674-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FNGiifMbrcug9ogAfisQY%2Fuploads%2FPxfV4xPmOCVKng8LcjiH%2FMayan_MCTP_Sec3.pdf?alt=media&token=62699afe-9e67-44fb-96fe-b593041365f4), [OS-SNM-SUG-02](https://2239978398-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FzjiJ8UzMPEfugKsLon59%2Fuploads%2FYr6wLCHl8r6uS6eHAYnb%2Fsynonym_audit_final.pdf?alt=media&token=3ad993f9-da68-496d-be06-d7eed5d305de) 78 | 79 | ### Relayer Authorization 80 | 81 | Verify messages originate from the authorized Wormhole relayer using both `isRegisteredSender` modifier and relayer address check. 82 | 83 | ```solidity 84 | // Problematic Implementation 85 | function receiveWormholeMessages(bytes memory payload) public { 86 | // Missing relayer validation 87 | } 88 | 89 | // Correct Implementation 90 | function receiveWormholeMessages( 91 | bytes memory payload, 92 | bytes[] memory, 93 | bytes32 sourceAddress, 94 | uint16 sourceChain, 95 | bytes32 96 | ) public payable override 97 | isRegisteredSender(sourceChain, sourceAddress) 98 | { 99 | require(msg.sender == address(wormholeRelayer), "Only Wormhole relayer"); 100 | } 101 | ``` 102 | 103 | ### Chain ID vs Domain Mislabeling 104 | 105 | Check for incorrect labeling of chain IDs as "domains" in function parameters and variable names. 
106 | 107 | ```solidity 108 | // Problematic Implementation 109 | function _bridgeUSDCWithWormhole( 110 | uint256 _amount, 111 | bytes32 _destinationAddress, 112 | uint16 _destinationDomain // Mislabeled - should be _destinationChainId 113 | ) internal { 114 | if (!_validateWormholeDestinationDomain(_destinationDomain)) revert Error.InvalidDestinationDomain(); 115 | } 116 | 117 | // Correct Implementation 118 | function _bridgeUSDCWithWormhole( 119 | uint256 _amount, 120 | bytes32 _destinationAddress, 121 | uint16 _destinationChainId 122 | ) internal { 123 | if (!_validateWormholeDestinationChainId(_destinationChainId)) revert Error.InvalidChainId(); 124 | } 125 | ``` 126 | 127 | Bug examples: [L1](https://0xmacro.com/library/audits/infinex-1.html#L-1), [QS-3](https://certificate.quantstamp.com/full/hashflow-hashverse/1af3e150-d612-4b24-bc74-185624a863f8/index.html#findings-qs3), [P1-I-05](https://2301516674-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FNGiifMbrcug9ogAfisQY%2Fuploads%2Fz6Gq4wJprv7LomYsQ1LY%2FMayan_Swift_Sec3.pdf?alt=media&token=4a1b7f69-a626-4e34-9db2-4e931c3bc11f) 128 | 129 | ### Decimal Scaling Issues in Token Bridging 130 | 131 | When implementing token bridging, it's crucial to handle different token decimals correctly. A common issue occurs when using a hardcoded maximum bridge amount without considering token-specific decimal places. 132 | 133 | **Problematic Implementation:** 134 | ```solidity 135 | // Problematic Implementation 136 | function _computeScaledAmount(uint256 _amount, address _tokenAddress) internal returns (uint256) { 137 | uint256 scaledAmount = uint256(DecimalScaling.scaleTo( 138 | int256(_amount), 139 | IERC20Metadata(_tokenAddress).decimals() 140 | )); 141 | // Incorrectly using USDC's max amount for all tokens 142 | if (scaledAmount > BridgeIntegrations._getBridgeMaxAmount()) 143 | revert Error.BridgeMaxAmountExceeded(); 144 | } 145 | 146 | // Correct Implementation 147 | function _computeScaledAmount(uint256 _amount, address _tokenAddress) internal returns (uint256) { 148 | uint256 scaledAmount = uint256(DecimalScaling.scaleTo( 149 | int256(_amount), 150 | IERC20Metadata(_tokenAddress).decimals() 151 | )); 152 | 153 | // Token-specific max amount validation 154 | if (_tokenAddress == Bridge._USDC()) { 155 | if (scaledAmount > BridgeIntegrations._getBridgeMaxAmount()) 156 | revert Error.BridgeMaxAmountExceeded(); 157 | } else { 158 | // Implement token-specific max amount logic 159 | if (scaledAmount > getTokenSpecificMaxAmount(_tokenAddress)) 160 | revert Error.BridgeMaxAmountExceeded(); 161 | } 162 | } 163 | ``` 164 | 165 | Bug examples: [1](https://0xmacro.com/library/audits/infinex-1.html#L-2) 166 | 167 | ### Normalize and Denormalize makes dust amount stuck in contracts 168 | 169 | When implementing token bridging with Wormhole, the token bridge performs normalize and denormalize operations on amounts to remove dust. This can lead to tokens getting stuck in the contract if not handled properly. 
170 | 171 | **Problematic Implementation:** 172 | ```solidity 173 | // Problematic Implementation 174 | function swap(uint256 amountIn) external { 175 | IERC20(token).transferFrom(msg.sender, address(this), amountIn); 176 | tokenBridge.transferTokens(token, amountIn, destinationChain, destinationAddress); 177 | } 178 | 179 | // Correct Implementation 180 | function swap(uint256 amountIn) external { 181 | uint256 normalizedAmount = normalize(amountIn); 182 | uint256 denormalizedAmount = denormalize(normalizedAmount); 183 | 184 | // Only transfer the exact amount that will be bridged 185 | IERC20(token).transferFrom(msg.sender, address(this), denormalizedAmount); 186 | tokenBridge.transferTokens(token, denormalizedAmount, destinationChain, destinationAddress); 187 | } 188 | ``` 189 | 190 | Bug examples: [OS-MYN-ADV-05](https://2301516674-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FNGiifMbrcug9ogAfisQY%2Fuploads%2F9YSweDuRP1P28bMmjy63%2FMayan_Audit_OtterSec.pdf?alt=media&token=ffefae4d-367d-401f-bd16-2d799dd3a403), [OS-OPF-ADV-00](https://2239978398-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FzjiJ8UzMPEfugKsLon59%2Fuploads%2F19FGz84wBMigB9EBngTu%2Foptimistic_finality_audit_final.pdf?alt=media&token=c3a631b5-0cc0-4104-a781-0691e2491973) 191 | 192 | ### Double Normalization/Denormalization Issues 193 | 194 | Check for multiple normalization/denormalization operations on the same amount. 195 | 196 | ```solidity 197 | // Problematic Implementation 198 | function getCurrentAccrualIndices(address assetAddress) public view returns(AccrualIndices memory) { 199 | uint256 deposited = getTotalAssetsDeposited(assetAddress); // Already normalized 200 | // Double normalization occurs here 201 | uint256 normalizedDeposited = normalizeAmount(deposited, accrualIndices.deposited, Round.DOWN); 202 | } 203 | 204 | // Correct Implementation 205 | function getCurrentAccrualIndices(address assetAddress) public view returns(AccrualIndices memory) { 206 | uint256 deposited = getTotalAssetsDeposited(assetAddress); // Already normalized 207 | // Use values directly without additional normalization 208 | } 209 | ``` 210 | 211 | Bug examples: [OS-SNM-ADV-04](https://2239978398-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FzjiJ8UzMPEfugKsLon59%2Fuploads%2FYr6wLCHl8r6uS6eHAYnb%2Fsynonym_audit_final.pdf?alt=media&token=3ad993f9-da68-496d-be06-d7eed5d305de), [OS-SNM-ADV-05](https://2239978398-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FzjiJ8UzMPEfugKsLon59%2Fuploads%2FYr6wLCHl8r6uS6eHAYnb%2Fsynonym_audit_final.pdf?alt=media&token=3ad993f9-da68-496d-be06-d7eed5d305de) 212 | 213 | ### Guardian Set Transition Issues 214 | 215 | When implementing Wormhole guardian set updates, proper handling of the transition period is crucial. The protocol needs to maintain both old and new guardian sets active during the transition period to ensure continuous operation. This issue can affect any chain implementing Wormhole guardian sets. 216 | 217 | The following is an example from TON blockchain smart contracts of incorrect handling of guardian node transitions. 
218 | 219 | ```func 220 | ;; Example of incorrect strict equality check during guardian set transition 221 | ;; This issue can occur in any chain's implementation of Wormhole guardian sets 222 | throw_unless( 223 | ERROR_INVALID_GUARDIAN_SET( 224 | current_guardian_set_index == vm_guardian_set_index( 225 | (expiration_time == 0) | (expiration_time > now()) 226 | ) 227 | ) 228 | ); 229 | 230 | ;; Correct Implementation (change the operator == to >=) 231 | throw_unless( 232 | ERROR_INVALID_GUARDIAN_SET( 233 | current_guardian_set_index >= vm_guardian_set_index( 234 | (expiration_time == 0) | (expiration_time > now()) 235 | ) 236 | ) 237 | ); 238 | ``` 239 | 240 | Bug examples: [TOB-PYTHTON-1](https://github.com/pyth-network/audit-reports/blob/main/2024_11_26/pyth_ton_pull_oracle_audit_final.pdf) 241 | 242 | ### Empty Guardian Set Validation 243 | 244 | Check for missing validation of guardian set size during upgrades. 245 | 246 | ```func 247 | ;; Example of missing guardian length validation in TON 248 | ;; This issue can occur in any chain's implementation of Wormhole guardian sets 249 | (int, int, int, cell, int) parse_encoded_upgrade(slice payload) impure { 250 | int module = payload~load_uint(256); 251 | throw_unless(ERROR_INVALID_MODULE, module == UPGRADE_MODULE); 252 | int action = payload~load_uint(8); 253 | throw_unless(ERROR_INVALID_GOVERNANCE_ACTION, action == 2); 254 | int chain = payload~load_uint(16); 255 | int new_guardian_set_index = payload~load_uint(32); 256 | throw_unless(ERROR_NEW_GUARDIAN_SET_INDEX_IS_INVALID, new_guardian_set_index == (current_guardian_set_index + 1)); 257 | 258 | ;; Missing validation for non-zero length 259 | int guardian_length = payload~load_uint(8); 260 | cell new_guardian_set_keys = new_dict(); 261 | int key_count = 0; 262 | while (key_count < guardian_length) { 263 | builder key = begin_cell(); 264 | int key_bits_loaded = 0; 265 | while (key_bits_loaded < 160) { 266 | int bits_to_load = min(payload.slice_bits(), 160 - key_bits_loaded); 267 | key = key.store_slice(payload~load_bits(bits_to_load)); 268 | key_bits_loaded += bits_to_load; 269 | if (key_bits_loaded < 160) { 270 | throw_unless(ERROR_INVALID_GUARDIAN_SET_UPGRADE_LENGTH, ~ payload.slice_refs_empty?()); 271 | payload = payload~load_ref().begin_parse(); 272 | } 273 | } 274 | slice key_slice = key.end_cell().begin_parse(); 275 | new_guardian_set_keys~udict_set(8, key_count, key_slice); 276 | key_count += 1; 277 | } 278 | throw_unless(ERROR_GUARDIAN_SET_KEYS_LENGTH_NOT_EQUAL, key_count == guardian_length); 279 | throw_unless(ERROR_INVALID_GUARDIAN_SET_UPGRADE_LENGTH, payload.slice_empty?()); 280 | 281 | return (action, chain, module, new_guardian_set_keys, new_guardian_set_index); 282 | } 283 | ``` 284 | 285 | Missing empty guardian set validation causes `parse_and_verify_wormhole_vm` to fail in the subsequent calls, requiring at least one guardian key. 
286 | 287 | ```diff 288 | ;; Correct Implementation: 289 | ;; Validate guardian set size before processing upgrade 290 | ;; Guardian set cannot be empty or Guardian set must contain at least one key after upgrade 291 | int guardian_length = payload~load_uint(8); 292 | + throw_unless(ERROR_EMPTY_GUARDIAN_SET, guardian_length != 0); 293 | cell new_guardian_set_keys = new_dict(); 294 | int key_count = 0; 295 | while (key_count < guardian_length) { 296 | ``` 297 | 298 | Bug examples: [TOB-PYTHON-2](https://github.com/pyth-network/audit-reports/blob/main/2024_11_26/pyth_ton_pull_oracle_audit_final.pdf) 299 | 300 | ### Signature Verification Issues in Guardian Sets 301 | 302 | Check for missing validation of unique guardian signatures during verification. 303 | 304 | ```func 305 | ;; Example of missing unique signature validation in TON 306 | ;; This issue can occur in any chain's implementation of Wormhole guardian sets 307 | () verify_signatures(int hash, cell signatures, int signers_length, cell guardian_set_keys, int guardian_set_size) impure { 308 | slice cs = signatures.begin_parse(); 309 | int i = 0; 310 | int valid_signatures = 0; 311 | 312 | while (i < signers_length) { 313 | // ... signature parsing code ... 314 | 315 | int guardian_index = sig_slice~load_uint(8); 316 | int r = sig_slice~load_uint(256); 317 | int s = sig_slice~load_uint(256); 318 | int v = sig_slice~load_uint(8); 319 | 320 | // Missing validation for unique guardian indices 321 | (slice guardian_key, int found?) = guardian_set_keys.udict_get?(8, guardian_index); 322 | int guardian_address = guardian_key~load_uint(160); 323 | 324 | throw_unless(ERROR_INVALID_GUARDIAN_ADDRESS, parsed_address == guardian_address); 325 | valid_signatures += 1; 326 | i += 1; 327 | } 328 | 329 | ;; Check quorum (2/3 + 1) 330 | throw_unless(ERROR_NO_QUORUM, valid_signatures >= (((guardian_set_size * 10) / 3) * 2) / 10 + 1); 331 | } 332 | ``` 333 | The above TON smart contract implements verification of guardian node signatures with a quorum of super majority (`>2/3`). But the issue is that it is not tracking the verified guardian nodes. Meaning that it allows replay of one guardian node signature for `>2/3` times will pass the quorum. 334 | 335 | ```func 336 | ;; A temporary example of correct implementation, but you should check all the edge cases of guardian node signing process. 337 | function verifySignatures(bytes[] calldata signatures) public { 338 | mapping(uint8 => bool) usedIndices; 339 | for (uint256 i = 0; i < signatures.length; i++) { 340 | uint8 guardianIndex = uint8(signatures[i][0]); 341 | require(!usedIndices[guardianIndex], "Duplicate guardian signature"); 342 | ;; tracking of verified guardian node signatures. 
343 | usedIndices[guardianIndex] = true; 344 | 345 | ;; Verify signature 346 | address signer = recoverSigner(hash, signatures[i]); 347 | require(signer == guardianSet[guardianIndex], "Invalid guardian signature"); 348 | 349 | validSignatures++; 350 | } 351 | } 352 | ``` 353 | 354 | Bug examples: [TOB-PYTHON-3](https://github.com/pyth-network/audit-reports/blob/main/2024_11_26/pyth_ton_pull_oracle_audit_final.pdf) 355 | 356 | 357 | 358 | -------------------------------------------------------------------------------- /audit-checklists/Arbitrum.md: -------------------------------------------------------------------------------- 1 | # Arbitrum Security Checklist 2 | 3 | ## General 4 | 5 | ### Block number 6 | 7 | Invoking `block.number` in a smart contract on Arbitrum will return the **L1** block number at which the sequencer received the transaction, not the **L2** block number. Also worth noting, the block number being returned is not necessarily the latest L1 block number. It is the latest synced block number and the syncing process occurs approximately every minute. In that period there can be ~5 new L1 blocks. So values returned by `block.number` on Arbitrum do not increase continuously but in "jumps" of ~5 blocks. If contract's logic requires tracking Arbitrum's L2 block numbers, that's possible as well using the precompile call `ArbSys(100).arbBlockNumber()`. 8 | 9 | > Always check for incorrect use of `block.number` on Arbitrum chains, especially if it is used to track time over short periods. 10 | 11 | Bug examples: [1](https://code4rena.com/reports/2022-12-tigris#m-15-_checkdelay-will-not-work-properly-for-arbitrum-or-optimism-due-to-blocknumber-), [2](https://solodit.cyfrin.io/issues/incorrect-use-of-l1-blocknumber-on-arbitrum-cantina-none-uniswap-pdf) 12 | 13 | ## L1 to L2 messaging 14 | 15 | Retryable tickets are the canonical way to execute L1 to L2 messages in Arbitrum. User submits the message in the L1 Inbox (the main entrypoint for this is function [`createRetryableTicket`](https://github.com/OffchainLabs/nitro-contracts/blob/780366a0c40caf694ed544a6a1d52c0de56573ba/src/bridge/Inbox.sol#L261)) and once submission is finalized message will be (asynchronously) executed on L2. If execution fails for any reason, like TX running out of gas or smart contract reverts, the ticket stays in the buffer and can be manually redeemed (thus it is "retryable"). We will describe some common pitfalls related to the usage of retryable tickets. 16 | 17 | ### Out-of-order retryable ticket execution 18 | 19 | One of the common issues is assuming that retryable tickets are guaranteed to be executed in the same order as they were created. This doesn't have to be the case. Let's say the L1 contract creates 2 retryables, A and B, in a single transaction. When it's time to execute A and B, gas price spikes on L2 so auto-redemption fails. At that point, anyone can manually redeem tickets by providing a new custom gas price. It is possible that user executes ticket B and only then ticket A. If protocol developers haven't anticipated such a situation and haven't built in guard rails, the protocol can be left in a bad state. 
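
One way to build in such guard rails is to make ordering explicit at the application level. The sketch below is hypothetical (it is not Arbitrum or any specific protocol's code): the L1 sender tags every ticket with an incrementing sequence number and the L2 receiver reverts on gaps, so a ticket redeemed too early simply stays in the retry buffer until its predecessors have been executed.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Hypothetical L2 receiver that refuses to process retryable tickets out of order.
/// Assumes the L1 counterpart encodes an incrementing sequence number into every ticket.
contract OrderedL2Receiver {
    // L1 counterpart with the L1 -> L2 alias already applied (see the aliasing section that follows)
    address public immutable aliasedL1Counterpart;
    uint256 public nextExpectedSeq;

    constructor(address _aliasedL1Counterpart) {
        aliasedL1Counterpart = _aliasedL1Counterpart;
    }

    function handleMessage(uint256 seq, bytes calldata data) external {
        require(msg.sender == aliasedL1Counterpart, "only L1 counterpart");
        // Revert instead of silently skipping: the ticket stays redeemable and can
        // simply be retried once the earlier tickets have been executed.
        require(seq == nextExpectedSeq, "out-of-order ticket");
        nextExpectedSeq = seq + 1;
        _process(data);
    }

    function _process(bytes calldata data) internal {
        // application-specific logic
    }
}
```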
20 | 21 | > Look for vulnerabilities in protocol that can come out of retryable tickets executing in different order than they were submitted 22 | 23 | Bug examples: [1](https://docs.arbitrum.io/assets/files/2024_08_01_trail_of_bits_security_audit_custom_fee_token-7ce514634632f4735a710c81b55f2d27.pdf#page=12) 24 | 25 | ### Address aliasing and cross-chain authentication 26 | 27 | When processing unsigned L1 to L2 messages, including retryable tickets, Arbitrum node will apply [address aliasing](https://docs.arbitrum.io/how-arbitrum-works/l1-to-l2-messaging#address-aliasing). That means the `msg.sender` on the L2 side will be modified in a deterministic way from the L1 address that issued the message. Why is this necessary? Let's say there is a contract deployed at address `0xabc` at both L1 and L2 chains. It is possible that contract `0xabc` on L1 is controlled by a different party than contract `0xabc` on L2 (remember [Wintermute](https://inspexco.medium.com/how-20-million-op-was-stolen-from-the-multisig-wallet-not-yet-owned-by-wintermute-3f6c75db740a)?). So, imagine a situation where a developer deploys a contract on Arbitrum which has critical functions gated by check `msg.sender == 0xabc` because developer controls `0xabc` multisig on Arbitrum. If there was no aliasing feature in place, a malicious owner of `0xabc` on Ethereum could send a retryable ticket and authenticate themselves to access the critical function! With aliasing, this is not possible because the address comparison check will fail. 28 | 29 | On the other hand, there are cross-chain app usecases where the L2 contract has a function that shall be only callable by the L1 counterpart contract. In such cases, it is important to apply alias when checking the sender. If the sender is not aliased before the check, the function won't be callable at all. The check should look something like this: 30 | 31 | ```solidity 32 | require(msg.sender == AddressAliasHelper.applyL1ToL2Alias(l1Counterpart), "only counterpart"); 33 | ``` 34 | 35 | > When retryable ticket's sender address is checked make sure alias is applied where needed 36 | 37 | Bug examples: [1](https://solodit.cyfrin.io/issues/msgsender-has-to-be-un-aliased-in-l2blastbridgefinalizebridgeethdirect-spearbit-none-draft-pdf) 38 | 39 | ### Spending the nonce 40 | 41 | There are different ways in which retryable tickets can fail on L2. If the L2 gas price provided is too low, ie. due to a gas price spike, retryable ticket will not be scheduled for auto-redemption. On the other hand, auto-redemption can be scheduled, but fail due to too low gas limit provided (TX runs out of gas). In the latter case (aliased) sender's nonce will be spent. This can have important implications if for example L1 contract is used to deploy a contract on L2 and predict the address of the new contract. If the gas limit provided for deployment is too low, nonce 0 will be burned. Deployment can still be done by doing the manual redemption of retryable ticket, however, address of the deployed contract will be different than the one initially predicted by the L1 contract because nonce 1 was used for deployment instead of nonce 0. 42 | 43 | Implementation example: 44 | [`L1AtomicTokenBridgeCreator`](https://github.com/OffchainLabs/token-bridge-contracts/blob/5bdf33259d2d9ae52ddc69bc5a9cbc558c4c40c7/contracts/tokenbridge/ethereum/L1AtomicTokenBridgeCreator.sol#L52) is used to deploy and set up token bridge contracts on both L1 and L2 side of Arbitrum chains. 
It issues 2 retryable tickets to configure the L2 side: 45 | 46 | - 1st ticket will create "factory" on L2 47 | - 2nd ticket will call factory's [function](https://github.com/OffchainLabs/token-bridge-contracts/blob/5bdf33259d2d9ae52ddc69bc5a9cbc558c4c40c7/contracts/tokenbridge/arbitrum/L2AtomicTokenBridgeFactory.sol#L35) to deploy token bridge's L2 contracts 48 | 49 | 2nd ticket [assumes](https://github.com/OffchainLabs/token-bridge-contracts/blob/5bdf33259d2d9ae52ddc69bc5a9cbc558c4c40c7/contracts/tokenbridge/ethereum/L1AtomicTokenBridgeCreator.sol#L131..L132) that factory is deployed at an address from nonce 0. So it is critical to make sure that 1st ticket does not fail due to out-of-gas error, since that would burn the deployer's nonce 0. This is achieved by [hard-coding gas limit](https://github.com/OffchainLabs/token-bridge-contracts/blob/main/contracts/tokenbridge/ethereum/L1AtomicTokenBridgeCreator.sol#L90..L93) which is guaranteed to be big enough for factory deployment to succeed. 50 | 51 | > When L1 contract is used to deploy contracts to L2 look for any hidden assumptions about the deployment nonce 52 | 53 | ### Permission to cancel the retryable ticket 54 | 55 | When creating a new retryable ticket one of the parameters that are provided to the [`Inbox::createRetryableTicket`](https://github.com/OffchainLabs/nitro-contracts/blob/780366a0c40caf694ed544a6a1d52c0de56573ba/src/bridge/Inbox.sol#L261) is [`callValueRefundAddress`](https://github.com/OffchainLabs/nitro-contracts/blob/780366a0c40caf694ed544a6a1d52c0de56573ba/src/bridge/IInbox.sol#L81). This is the address where `l2CallValue` is refunded to in the case that retryable ticket execution fails on L2. But additionally `callValueRefundAddress` has the permission to cancel the ticket, making it permanently unclaimable. It is important to keep this in mind in case protocol is designed in a way where this can be exploited. Here's an example of such a scenario: 56 | 57 | - protocol has a permissionless function on L1 that lets anyone create retryable ticket to execute action X on the L2 58 | - protocol assumes once retryable ticket is submitted action X will surely be executed on L2 59 | - there is no way to re-trigger the action X 60 | - attacker calls the L1 function to submit a retryable ticket but intentionally provides too small gas limit to make sure ticket can't be auto-redeemed on L2 61 | - attacker also provides their own address for `callValueRefundAddress` 62 | - retryable ticket auto-redemption fails and the attacker cancels the ticket before anyone else can redeem it 63 | - as a consequence, action X can never be executed 64 | 65 | > Make sure the account provided as `callValueRefundAddress` cannot misuse its capability to cancel the ticket in a way that hurts the protocol 66 | 67 | Bug examples: [1](https://github.com/code-423n4/2024-05-olas-findings/issues/29) 68 | 69 | ### Safe vs unsafe retryable ticket creation 70 | 71 | Arbitrum's `Inbox` contract offers 2 different entrypoints for creating retryables - [createRetryableTicket](https://github.com/OffchainLabs/nitro-contracts/blob/v3.0.0/src/bridge/Inbox.sol#L261) and [unsafeCreateRetryableTicket](https://github.com/OffchainLabs/nitro-contracts/blob/v3.0.0/src/bridge/Inbox.sol#L285). There are couple of important differences between the two. 72 | 73 | Unsafe version **will not** check that value provided is enough to cover the L2 execution cost of `gasLimit * gasPrice`. 
The creator of a retryable ticket can even provide 0 for the gas limit and gas price and the submission will succeed (the value provided must still cover `maxSubmissionCost`), but auto-redemption will not be scheduled and the ticket will need to be executed manually by providing L2 funds.
74 | 
75 | There is also an important difference in how `excessFeeRefundAddress` (receives the execution refund) and `callValueRefundAddress` (receives the L2 call value if the ticket is cancelled) are handled. The safe version will apply an alias to both addresses in case they are a contract on the L1 side, while the unsafe version does not apply an alias. So it is critical to ensure that the contract address used as the L2 refund address can control the refunded funds.
76 | 
77 | > When the unsafe function for creating retryable tickets is used, make sure the L2 contract is able to control the refunded funds
78 | 
79 | ## Oracle integration
80 | 
81 | ### Sequencer uptime
82 | 
83 | Typically, users interact with Arbitrum through the sequencer. The sequencer is responsible for including and processing transactions. If the sequencer goes down, users can still submit transactions using the [Inbox](https://etherscan.io/address/0x4Dbd4fc535Ac27206064B68FfCf827b0A60BAB3f) contract on L1. However, such transactions will be included with a potentially significant delay. In the worst case, if the sequencer is down for a prolonged time, transactions submitted to L1 can be force-included only after 24h. In the context of price feeds, this means that reported prices can get stale and significantly drift from the actual price. Thus, protocols should always check the sequencer's uptime and revert the price request if the sequencer is down for more than a predefined period of time.
84 | 
85 | Chainlink's sequencer uptime feeds can be used for this purpose. An integration example can be found [here](https://docs.chain.link/data-feeds/l2-sequencer-feeds#example-code).
86 | 
87 | > Integrations with Chainlink's price feeds on Arbitrum should always check the sequencer uptime in order to avoid stale prices. If this check is missing, think about the ways the protocol can be negatively impacted
88 | 
89 | Bug examples: [1](https://code4rena.com/reports/2024-07-benddao#m-12--no-check-if-arbitrumoptimism-l2-sequencer-is-down-in-chainlink-feeds-priceoraclesol-), [2](https://github.com/shieldify-security/audits-portfolio-md/blob/main/Pear-v2-Security-Review.md#m-02-missing-check-for-active-l2-sequencer-in-calculatearbamount), [3](https://github.com/sherlock-audit/2023-05-perennial-judging/issues/37)
90 | 
91 | ### Incorrect staleness threshold or feed decimals used
92 | 
93 | The staleness threshold is used to determine if the price fetched from the price feed is fresh enough. Most often, the staleness threshold should match the feed's heartbeat - the longest amount of time that can pass between consecutive price updates. Also, feeds use different numbers of decimals (not necessarily related to the number of decimals used by the underlying tokens). Using incorrect values for staleness or feed decimals can lead to critical outcomes. Different feeds on Arbitrum require different values for the mentioned parameters.
For example: 94 | 95 | - [LINK / ETH](https://arbiscan.io/address/0xb7c8Fb1dB45007F98A68Da0588e1AA524C317f27) has 24h heartbeat and uses 18 decimals 96 | - [LINK / USD](https://arbiscan.io/address/0x86E53CF1B870786351Da77A57575e79CB55812CB) has 1h heartbeat and uses 8 decimals 97 | 98 | > Make sure that the contract uses the correct staleness threshold and decimal precision for every price feed being integrated 99 | 100 | Bug examples: [1](https://github.com/sherlock-audit/2024-06-new-scope-judging/issues/166), [2](https://github.com/sherlock-audit/2024-06-new-scope-judging/issues/9), 101 | [3](https://github.com/solodit/solodit_content/blob/d4d5dbc/reports/Zokyo/2024-06-23-Copra.md#incorrect-staleness-threshold-for-chainlink-price-feeds) 102 | 103 | ### Price going out of aggregator's acceptable price range 104 | 105 | Some Chainlink price feeds use `minAnswer` and `maxAnswer` variables to limit the price range that can be reported by the feed. If the price goes below the min price during flash crash, or goes above the max price, the feed will answer with an incorrect price. 106 | 107 | Even though min/max answers have been deprecated on newer feeds, some older feeds still use it. Here are couple of examples on Arbitrum: 108 | 109 | - [ETH / USD](https://arbiscan.io/address/0x639Fe6ab55C921f74e7fac1ee960C0B6293ba612) feed is limited to report price in range [\[$10, $1000000\]](https://arbiscan.io/address/0x3607e46698d218B3a5Cae44bF381475C0a5e2ca7#readContract) 110 | - [USDC / USD](https://arbiscan.io/address/0x50834F3163758fcC1Df9973b6e91f0F0F0434aD3) feed is limited to report price in range [\[$0.01, $1000\]](https://arbiscan.io/address/0x2946220288DbBF77dF0030fCecc2a8348CbBE32C#readContract) 111 | - [USDT / USD](https://arbiscan.io/address/0x3f3f5dF88dC9F13eac63DF89EC16ef6e7E25DdE7) feed is limited to report price in range [\[$0.01, $1000\]](https://arbiscan.io/address/0xCb35fE6E53e71b30301Ec4a3948Da4Ad3c65ACe4#readContract) 112 | 113 | > Check if the price feed used by the protocol has `minAnswer` and `maxAnswer` configured, and analyze the implications of the unlikely case that the actual price goes out of the range 114 | 115 | Bug examples: [1](https://github.com/pashov/audits/blob/master/team/md/Cryptex-security-review.md#m-02-circuit-breakers-are-not-considered-when-processing-chainlinks-answer), [2](https://code4rena.com/reports/2024-05-bakerfi#m-06-min-and-maxanswer-never-checked-for-oracle-price-feed) 116 | 117 | ## Orbit chains 118 | 119 | Orbit chains are custom chains deployed using Arbitrum's Nitro software stack. They are mostly deployed as L2s on top of Ethereum or as L3s on top of Arbitrum. 120 | 121 | ### Using custom fee token 122 | 123 | Transaction fees on Arbitrum are paid in ETH. Where does the ETH come from? Users lock their ETH in the bridge contract on the parent chain (Ethereum) and Arbitrum node mints the same amount of native currency on the child chain (Arbitrum) in user's account. 124 | 125 | But Orbit chains don't need to necessarily use ETH to pay for gas. Orbit chain owner can select, at deployment time, any ERC20 to be the fee token for new chain. Once set, fee token for the chain cannot be changed. In this case user will lock the ERC20 fee token on the parent chain and Arbitrum node will mint the same amount of native currency on the Orbit chain. Customized bridge contracts are used on the parent chain for this purpose - `ERC20Bridge`, `ERC20Inbox` and `ERC20Outbox` are used instead of `Bridge`, `Inbox` and `Outbox`. 
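
The check described in the note that follows can also be performed from a contract or a script. A minimal sketch, assuming only that the chain's bridge exposes the `nativeToken()` getter (the `IERC20Bridge` interface below is trimmed to that single function, and ETH-based bridges are expected to revert on the call):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

interface IERC20Bridge {
    function nativeToken() external view returns (address);
}

/// Hypothetical helper: detects whether a chain's bridge uses a custom fee token.
contract FeeTokenProbe {
    /// @return token  the custom fee token on the parent chain (zero address for ETH-based chains)
    /// @return custom true if the chain uses a custom fee token
    function feeTokenOf(address bridge) external view returns (address token, bool custom) {
        // ETH-based bridges have no nativeToken() function, so the call reverts and is caught here.
        try IERC20Bridge(bridge).nativeToken() returns (address t) {
            return (t, true);
        } catch {
            return (address(0), false);
        }
    }
}
```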
126 | 127 | > Check if Orbit chain uses custom fee token by calling [nativeToken()](https://github.com/OffchainLabs/nitro-contracts/blob/780366a0c40caf694ed544a6a1d52c0de56573ba/src/bridge/ERC20Bridge.sol#L38) function on the chain's bridge contract. If chain is ETH based call will revert, otherwise it will return the address of the fee token on the parent chain 128 | 129 | ### Custom fee token with non-18 decimals 130 | 131 | Let's say a team creates a new Orbit chain which uses USDC as the custom fee token. There is an inherent mismatch - USDC uses 6 decimals, while native currency is assumed to have 18 decimals. Orbit's bridge contracts deal with this in following way - when the native fee tokens are deposited from the parent chain to the Orbit chain, the deposited amount is scaled to 18 decimals. When native currency is withdrawn from the Orbit chain to the parent chain the amount is scaled from 18 back to the fee token's decimals. It's important to notice that scaling process can round-down amounts and cause user to lose some dust. 132 | 133 | As an example - Orbit chain is deployed as an L3 on top of Arbitrum and chain uses USDC as custom fee token. User bridges 10 USDC from Arbitrum to Orbit chain. 134 | 135 | ``` 136 | Deposited on Arbitrum: 10000000 (10 USDC) 137 | Minted on Orbit (scale to 18 decimals): 10000000000000000000 (10 "ETH" of native currency) 138 | ``` 139 | 140 | On Orbit chain user earns some yield, let's say `2*10^18 + 300`. User's total balance now is `12000000000000000300`. User decides to bridge it back to Arbitrum. 141 | 142 | ``` 143 | Withdrawn from Orbit: 12000000000000000300 (12.0000000000000003 "ETH" of native currency) 144 | Unlocked on Arbitrum: 12000000000000000300 / 10^12 = 12000000 (12 USDC) 145 | ``` 146 | 147 | User gets back 12 USDC, while the dust is lost in conversion process and it stays locked in the `ERC20Bridge` contract. 148 | 149 | > Look for any issues that can be caused by scaling or rounding logic when Orbit chain's fee token uses non-18 decimals 150 | 151 | ### Retryable tickets in Orbit chains using non-18 decimals fee tokens 152 | 153 | When creating a retryable ticket to send L1 to L2 message to Orbit chain user needs to provide [4 numerical values](https://github.com/OffchainLabs/nitro-contracts/blob/v3.0.0/src/bridge/IERC20Inbox.sol#L21..L36): 154 | 155 | - l2CallValue (call value for execution on L2) 156 | - maxSubmissionCost (value to pay for storing the retryable ticket) 157 | - maxFeePerGas (gas price bid on L2) 158 | - tokenTotalFeeAmount (amount of fee tokens to be transferred from user to cover all the costs) 159 | 160 | Let's again assume Orbit chain's fee token is USDC. In that case it is not obvious whether the retryable ticket's parameters are denominated in USDC (6 decimals) or in the native currency (18 decimals). Answer is mixed - `tokenTotalFeeAmount` is denominated in USDC's decimals, while other 3 params are denominated in native currency's decimals. This is because `tokenTotalFeeAmount` signals how many tokens user needs to spend on **parent chain**, while other 3 params signal how execution is to be performed on **Orbit chain**. 
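
As a rough illustration, assume an Orbit chain whose fee token is USDC (6 decimals). The sketch below is not nitro-contracts code; it only shows the unit handling: the three execution-related parameters stay denominated in 18 decimals, while the fee-token amount is scaled down to 6 decimals and rounded up so the escrowed tokens always cover the 18-decimal cost.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical quote for an Orbit chain whose fee token has 6 decimals (e.g. USDC).
uint256 constant DECIMAL_DIFF = 10 ** (18 - 6);

function quoteTokenTotalFeeAmount(
    uint256 l2CallValue,       // 18 decimals (Orbit chain's native currency)
    uint256 maxSubmissionCost, // 18 decimals
    uint256 gasLimit,
    uint256 maxFeePerGas       // 18 decimals
) pure returns (uint256 tokenTotalFeeAmount) {
    // Total cost expressed in the Orbit chain's native currency (18 decimals).
    uint256 totalNative = l2CallValue + maxSubmissionCost + gasLimit * maxFeePerGas;
    // tokenTotalFeeAmount is denominated in the fee token's own decimals (6 here),
    // rounded up so the deposited tokens always cover the cost after scaling back up.
    tokenTotalFeeAmount = (totalNative + DECIMAL_DIFF - 1) / DECIMAL_DIFF;
}
```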
161 | 162 | > When contract is integrating the Orbit retryable tickets, check that all the ticket input parameters are properly denominated 163 | 164 | ## Useful resources 165 | 166 | - [Arbitrum official docs](https://docs.arbitrum.io/intro/) 167 | - [Out-of-order retryable tickets execution](https://blog.trailofbits.com/2024/03/01/when-try-try-try-again-leads-to-out-of-order-execution-bugs/) 168 | -------------------------------------------------------------------------------- /audit-checklists/LayerZeroV2.md: -------------------------------------------------------------------------------- 1 | # LayerZeroV2 Security Checklist 2 | 3 | ## EndpointV2 4 | 5 | ### `lzReceive` function can revert with an "out of gas" (OOG) error 6 | Every time a new message is [verified](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b/packages/layerzero-v2/evm/protocol/contracts/EndpointV2.sol#L159) inside the `EndpointV2` contract a mapping for the `receiver`, `srcEid` and `nonce` combination is set to the corresponding `payloadHash` for later execution with `lzReceive`. Every new message verified for the pathway has a sequential, monotonically increasing nonce. 7 | 8 | During the `lzReceive` function execution the [`_clearPayload`](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b9e5967643853476445ffe0e777360b906/packages/layerzero-v2/evm/protocol/contracts/MessagingChannel.sol#L134-L142) function tries to lazily update the `lazyInboundNonce` to the nonce of the message being received. It loops through all the nonces to check if they have been verified and updates the `lazyInboundNonce` to the nonce of the message being received. 9 | 10 | >In extreme cases, if there are a lot of messages verified, the looping can cause an "out of gas" (OOG) error. 11 | However, there's a straightforward solution -- instead of processing a message with a high nonce, you can first process messages with lower nonces. This allows the `lazyInboundNonce` to be updated in smaller steps. 12 | 13 | ## `lzCompose` implementation should always enforce `from` address and the `msg.sender` 14 | If we look at the default `OFT` standard implementation and the usage of `lzCompose` within `lzReceive`: 15 | ```solidity 16 | ## OFTCore.sol 17 | 18 | function _lzReceive( 19 | Origin calldata _origin, 20 | bytes32 _guid, 21 | bytes calldata _message, 22 | address /*_executor*/, // @dev unused in the default implementation. 23 | bytes calldata /*_extraData*/ // @dev unused in the default implementation. 24 | ) internal virtual override { 25 | // @dev The src sending chain doesnt know the address length on this chain (potentially non-evm) 26 | // Thus everything is bytes32() encoded in flight. 27 | address toAddress = _message.sendTo().bytes32ToAddress(); 28 | // @dev Credit the amountLD to the recipient and return the ACTUAL amount the recipient received in local decimals 29 | >>> uint256 amountReceivedLD = _credit(toAddress, _toLD(_message.amountSD()), _origin.srcEid); 30 | 31 | if (_message.isComposed()) { 32 | // @dev Proprietary composeMsg format for the OFT. 33 | bytes memory composeMsg = OFTComposeMsgCodec.encode( 34 | _origin.nonce, 35 | _origin.srcEid, 36 | amountReceivedLD, 37 | _message.composeMsg() 38 | ); 39 | 40 | // @dev Stores the lzCompose payload that will be executed in a separate tx. 41 | // Standardizes functionality for executing arbitrary contract invocation on some non-evm chains. 42 | // @dev The off-chain executor will listen and process the msg based on the src-chain-callers compose options passed. 
43 | // @dev The index is used when a OApp needs to compose multiple msgs on lzReceive. 44 | // For default OFT implementation there is only 1 compose msg per lzReceive, thus its always 0. 45 | >>> endpoint.sendCompose(toAddress, _guid, 0 /* the index of the composed message*/, composeMsg); 46 | } 47 | 48 | emit OFTReceived(_guid, _origin.srcEid, toAddress, amountReceivedLD); 49 | } 50 | ``` 51 | 52 | The key points here are: 53 | 54 | - Tokens are first credited to the `toAddress` contract which should implement the `lzCompose` function. 55 | - The `lzCompose` function gets executed in a separate transaction. 56 | - The tokens remain in the `toAddress` contract until `lzCompose` is executed 57 | 58 | Let's observe how the `sendCompose` and `lzCompose` inside the LayerZero contracts works: 59 | 60 | ```solidity 61 | ## MessagingComposer.sol 62 | 63 | function sendCompose(address _to, bytes32 _guid, uint16 _index, bytes calldata _message) external { 64 | // must have not been sent before 65 | if (composeQueue[msg.sender][_to][_guid][_index] != NO_MESSAGE_HASH) revert Errors.LZ_ComposeExists(); 66 | >>> composeQueue[msg.sender][_to][_guid][_index] = keccak256(_message); 67 | emit ComposeSent(msg.sender, _to, _guid, _index, _message); 68 | } 69 | 70 | function lzCompose( 71 | address _from, 72 | address _to, 73 | bytes32 _guid, 74 | uint16 _index, 75 | bytes calldata _message, 76 | bytes calldata _extraData 77 | ) external payable { 78 | // assert the validity 79 | bytes32 expectedHash = composeQueue[_from][_to][_guid][_index]; 80 | bytes32 actualHash = keccak256(_message); 81 | if (expectedHash != actualHash) revert Errors.LZ_ComposeNotFound(expectedHash, actualHash); 82 | 83 | // marks the message as received to prevent reentrancy 84 | // cannot just delete the value, otherwise the message can be sent again and could result in some undefined behaviour 85 | // even though the sender(composing Oapp) is implicitly fully trusted by the composer. 86 | // eg. sender may not even realize it has such a bug 87 | >>> composeQueue[_from][_to][_guid][_index] = RECEIVED_MESSAGE_HASH; 88 | ILayerZeroComposer(_to).lzCompose{ value: msg.value }(_from, _guid, _message, msg.sender, _extraData); 89 | emit ComposeDelivered(_from, _to, _guid, _index); 90 | } 91 | ``` 92 | 93 | When `sendCompose` is called from within `lzReceive`, the `msg.sender` (and therefore the `from` address stored in `composeQueue`) will be the OFT token contract. This is important because this same `from` address will be passed to `lzCompose` when executing the composed message. 94 | 95 | > When implementing `lzCompose`, you must validate: 96 | > 1. The `from` parameter matches your expected OFT token contract - this is the original sender that queued the composed message 97 | > 2. The `msg.sender` is the EndpointV2 contract - only the official endpoint should be able to trigger composed message execution 98 | > 99 | > Failing to validate either of these could allow unauthorized contracts to execute malicious composed messages. 100 | 101 | ## Message Execution Options 102 | LayerZeroV2 provides message execution options, where you can specify gas amount, `msg.value` and other options for the destination transaction. This info gets picked up by the application defined [Executor](https://docs.layerzero.network/v2/home/permissionless-execution/executors) contract. 
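
For reference, a minimal sketch of building such options with LayerZero's [`OptionsBuilder`](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b/packages/layerzero-v2/evm/oapp/contracts/oapp/libs/OptionsBuilder.sol) library follows; the import path depends on how the packages are installed, and the gas, value and drop amounts are placeholders, not recommendations.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Import path may differ depending on the LayerZero package layout you use.
import { OptionsBuilder } from "@layerzerolabs/oapp-evm/contracts/oapp/libs/OptionsBuilder.sol";

contract OptionsExample {
    using OptionsBuilder for bytes;

    function buildOptions(bytes32 receiver) external pure returns (bytes memory) {
        return OptionsBuilder.newOptions()
            // gas limit and msg.value forwarded to lzReceive on the destination chain
            .addExecutorLzReceiveOption(200_000, 0)
            // native tokens airdropped to `receiver` on the destination chain,
            // subject to the executor's nativeCap (see the next section)
            .addExecutorNativeDropOption(0.01 ether, receiver);
    }
}
```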
103 | 104 | ### Native airdrop option cap 105 | The default LayerZero [Executor](https://etherscan.io/address/0x173272739Bd7Aa6e4e214714048a9fE699453059) contract has a [configuration](https://github.com/LayerZero-Labs/LayerZero-v2/blob/943ce4a/packages/layerzero-v2/evm/messagelib/contracts/Executor.sol#L63) for every chain. If you're sending a message from Ethereum -> Polygon it will use the Polygon configuration to estimate the fee while doing a few sanity checks. The structure of the configuration is as follows: 106 | ```solidity 107 | struct DstConfig { 108 | uint64 lzReceiveBaseGas; 109 | uint16 multiplierBps; 110 | uint128 floorMarginUSD; // uses priceFeed PRICE_RATIO_DENOMINATOR 111 | uint128 nativeCap; 112 | uint64 lzComposeBaseGas; 113 | } 114 | ```` 115 | 116 | One of the values in the configuration is `nativeCap`, which is the maximum amount of native tokens that can be sent to the destination chain. 117 | 118 | Here is an example of configuration for Polygon: 119 | 120 | ```bash 121 | cast call 0x173272739Bd7Aa6e4e214714048a9fE699453059 "dstConfig(uint32)(uint64,uint16,uint128,uint128,uint64)" 30109 --rpc-url https://eth.llamarpc.com // nativeCap is 1500000000000000000000 ~ 1500e18 MATIC 122 | ``` 123 | 124 | The maximum amount of native tokens to airdrop from Ethereum -> Polygon is 1500e18 MATIC. 125 | 126 | ### Don't rely on the gas limit and `msg.value` 127 | All the metadata passed as options to the `Endpoint::send` function is simply an off-chain agreement with the Executor. The `lzReceive` function can be executed by anyone with a different `msg.value` and gas limit compared to what was specified on the sending side. 128 | 129 | There are however various ways to enforce certain properties on the receiving side: 130 | - Whitelisting address that can execute [`lzReceive`](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b/packages/layerzero-v2/evm/protocol/contracts/EndpointV2.sol#L181), in the receiving app, e.g. only allowing the LayerZero executor or whitelisted addresses to execute it. 131 | - Encoding the `msg.value` into the message payload and reverting if the value is different from what was specified on the sending side. 132 | - Enforcing certain gas limit in the `lzReceive` function, e.g. [`SafeCallMinGas.sol`](https://github.com/liquity/V2-gov/blob/22bc82f/src/utils/SafeCallMinGas.sol) contract. 133 | 134 | Bug examples: [1](https://solodit.cyfrin.io/issues/bridgedgovernorlzreceive-can-be-executed-with-different-msgvalue-than-intended-cantina-none-drips-pdf) 135 | 136 | ### Execution Ordering 137 | The default OApp implementation of `lzReceive` is un-ordered execution. This means if nonce 4,5,6 are verified, the Executor will try to execute the message with nonce 4 first, but if it fails (due to some gas or user logic related issue), it will try to execute the message with nonce 5 and so on. 138 | 139 | The process the off-chain executor uses if you want to enforce ordered execution: 140 | 1. It checks if the [ordered execution option](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b/packages/layerzero-v2/evm/oapp/contracts/oapp/libs/OptionsBuilder.sol#L107-L111) has been set. 141 | 2. If this is true then it queries the [nextNonce](https://github.com/LayerZero-Labs/LayerZero-v2/blob/943ce4a/packages/layerzero-v2/evm/oapp/contracts/oapp/OAppReceiver.sol#L78) function in the receiver contract. 142 | 3. Let's assume nextNonce returns nonce 4. 
It tries to execute nonce 4 and if this transaction fails for any reason, it will block all subsequent transactions with higher nonces from being executed until nonce 4 is resolved. 143 | 144 | > If you want to enforce ordered execution, you need to ensure that the `nextNonce` function is implemented correctly and that it returns the correct nonce. Make sure to never have a reverting transaction due to the blocking nature of the system. 145 | 146 | Bug examples: [1](https://solodit.cyfrin.io/issues/non-executable-messages-in-bridgedgovernor-can-result-in-an-unrecoverable-state-cantina-none-drips-pdf) 147 | 148 | ## OFT standard 149 | 150 | ### Dust removal 151 | The OFT standard was created to allow transferring tokens across different blockchain VMs. While `EVM` chains support `uint256` for token balances, many non-EVM chains use `uint64`. Because of this, the default OFT standard has a max token supply of `(2^64 - 1)/(10^6)`, or `18,446,744,073,709.551615`. 152 | This property is defined by [`sharedDecimals` and `localDecimals`](https://github.com/LayerZero-Labs/LayerZero-v2/blob/943ce4a/packages/layerzero-v2/evm/oapp/contracts/oft/OFTCore.sol#L54-L57) for the token. 153 | 154 | In practice, this means that you can only transfer the amount that can be represented in the shared system. The default OFT standard uses local decimals equal to `18` and shared decimals of `6`, which means a conversion rate of `10^12`. 155 | 156 | Take the example: 157 | 158 | 1. User specifies the amount to `send` that equals `1234567890123456789`. 159 | 2. The OFT standard will first divide this amount by `10^12` to get the amount in the shared system, which equals `1234567`. 160 | 3. On the receiving chain it will be multiplied by `10^12` to get the amount in the local system, which equals `1234567000000000000`. 161 | 162 | This process removes the last `12` digits from the original amount, effectively "cleaning" the amount from any "dust" that cannot be represented in a system with `6` decimal places. 163 | 164 | ```solidity 165 | amountToSend = 1234567890123456789; 166 | amountReceived = 1234567000000000000; 167 | ``` 168 | 169 | It's important to highlight that the dust removed is not lost, it's just cleaned from the input amount. 170 | 171 | > Look for custom fees added to the OFT standard, `_removeDust` should be called after determining the actual transfer amount. 172 | 173 | Bug examples: [1](https://github.com/windhustler/audits/blob/21bf9a1/solo/PING-Security-Review.pdf) 174 | 175 | ### Overriding shared decimals 176 | The `OFTCore.sol` contract uses a default `sharedDecimals` value of `6`. When overriding this value, be aware of a critical limitation, the `_toSD` function casts amounts to `uint64` when converting from local to shared decimals. 177 | 178 | ```solidity 179 | function _buildMsgAndOptions( 180 | SendParam calldata _sendParam, 181 | uint256 _amountLD 182 | ) internal view virtual returns (bytes memory message, bytes memory options) { 183 | bool hasCompose; 184 | // @dev This generated message has the msg.sender encoded into the payload so the remote knows who the caller is. 185 | (message, hasCompose) = OFTMsgCodec.encode( 186 | _sendParam.to, 187 | >>> _toSD(_amountLD), 188 | // @dev Must be include a non empty bytes if you want to compose, EVEN if you dont need it on the remote. 189 | // EVEN if you dont require an arbitrary payload to be sent... eg. 
'0x01' 190 | _sendParam.composeMsg 191 | ); 192 | 193 | function _toSD(uint256 _amountLD) internal view virtual returns (uint64 amountSD) { 194 | return uint64(_amountLD / decimalConversionRate); 195 | } 196 | ``` 197 | 198 | This becomes important when `localDecimals` and `sharedDecimals` are both set to 18. In this case: 199 | - The `decimalConversionRate` becomes 1 (no decimal adjustment) 200 | - Any amount larger than `uint64.max` will silently be truncated to `uint64` 201 | 202 | > This truncation can lead to unexpected behavior where users might think they're transferring a larger amount, but the actual transfer will be cast into `uint64`, resulting in a loss of value. 203 | 204 | ## LayerZero Read 205 | [LayerZero Read](https://docs.layerzero.network/v2/developers/evm/lzread/overview) enables requesting data from a remote chain without executing a transaction there. It works with a request-response pattern, where you request a certain data from the remote chain and the DVNs will respond by directly reading the data from the node on the remote chain. 206 | 207 | ### Reverts while reading data blocks subsequent messages 208 | The request can contain multiple read commands and compute operations. Here is an example of how to specify those commands with [EVMCallRequestV1 and EVMCallComputeV1](https://github.com/LayerZero-Labs/LayerZero-v2/blob/943ce4a/packages/layerzero-v2/evm/oapp/contracts/oapp/examples/LzReadCounter.sol#L73-L105) structs, and corresponding functions that get called by the DVNs on the remote chain -- [`readCount`, `lzMap` and `lzReduce`](https://github.com/LayerZero-Labs/LayerZero-v2/blob/943ce4a/packages/layerzero-v2/evm/oapp/contracts/oapp/examples/LzReadCounter.sol#L108-L133). 209 | 210 | If any of these functions calls revert(`readCount`, `lzMap` and `lzReduce`), the DVNs are not able to create a response and verify the message. Let's look at what happens if the message with certain nonce can't be verified. An example covers sending a message on Ethereum to read the data from Polygon. 211 | 212 | - Sending a message on Ethereum to read the data on Polygon assigns a [monotonically increasing nonce](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b/packages/layerzero-v2/evm/protocol/contracts/EndpointV2.sol#L116-L127) to the message per `sender`, `dstEid` and `receiver`. 213 | ```solidity 214 | function _send( 215 | address _sender, 216 | MessagingParams calldata _params 217 | ) internal returns (MessagingReceipt memory, address) { 218 | // get the correct outbound nonce 219 | >>> uint64 latestNonce = _outbound(_sender, _params.dstEid, _params.receiver); 220 | 221 | // construct the packet with a GUID 222 | Packet memory packet = Packet({ 223 | nonce: latestNonce, 224 | srcEid: eid, 225 | sender: _sender, 226 | dstEid: _params.dstEid, 227 | receiver: _params.receiver, 228 | guid: GUID.generate(latestNonce, eid, _sender, _params.dstEid, _params.receiver), 229 | message: _params.message 230 | }); 231 | 232 | /// @dev increase and return the next outbound nonce 233 | function _outbound(address _sender, uint32 _dstEid, bytes32 _receiver) internal returns (uint64 nonce) { 234 | unchecked { 235 | nonce = ++outboundNonce[_sender][_dstEid][_receiver]; 236 | } 237 | } 238 | ```` 239 | - In case of lzRead, `dstEid` is the `channelId` equal to `4294967295`. Read paths information can be found in the [Read Paths](https://docs.layerzero.network/v2/developers/evm/lzread/read-paths) section in the LayerZero docs. 
240 | - This `Packet` gets processed in the [`ReadLib1002`](https://github.com/LayerZero-Labs/LayerZero-v2/blob/943ce4a/packages/layerzero-v2/evm/messagelib/contracts/uln/readlib/ReadLib1002.sol#L97) contract.
241 | - The application-configured DVNs need to [verify the message and commit the verification](https://github.com/LayerZero-Labs/LayerZero-v2/blob/943ce4a/packages/layerzero-v2/evm/messagelib/contracts/uln/readlib/ReadLib1002.sol#L138-L167).
242 | - If the DVNs can't generate a response, they can't verify that specific message.
243 | ```solidity
244 | // ============================ External ===================================
245 | /// @dev The verification will be done in the same chain where the packet is sent.
246 | /// @dev dont need to check endpoint verifiable here to save gas, as it will reverts if not verifiable.
247 | /// @param _packetHeader - the srcEid should be the localEid and the dstEid should be the channel id.
248 | /// The original packet header in PacketSent event should be processed to flip the srcEid and dstEid.
249 | function commitVerification(bytes calldata _packetHeader, bytes32 _cmdHash, bytes32 _payloadHash) external {
250 |     // assert packet header is of right size 81
251 |     if (_packetHeader.length != 81) revert LZ_RL_InvalidPacketHeader();
252 |     // assert packet header version is the same
253 |     if (_packetHeader.version() != PacketV1Codec.PACKET_VERSION) revert LZ_RL_InvalidPacketVersion();
254 |     // assert the packet is for this endpoint
255 |     if (_packetHeader.dstEid() != localEid) revert LZ_RL_InvalidEid();
256 | 
257 |     // cache these values to save gas
258 |     address receiver = _packetHeader.receiverB20();
259 |     uint32 srcEid = _packetHeader.srcEid(); // channel id
260 |     uint64 nonce = _packetHeader.nonce();
261 | 
262 |     // reorg protection. to allow reverification, the cmdHash cant be removed
263 |     if (cmdHashLookup[receiver][srcEid][nonce] != _cmdHash) revert LZ_RL_InvalidCmdHash();
264 | 
265 |     ReadLibConfig memory config = getReadLibConfig(receiver, srcEid);
266 |     _verifyAndReclaimStorage(config, keccak256(_packetHeader), _cmdHash, _payloadHash);
267 | 
268 |     // endpoint will revert if nonce <= lazyInboundNonce
269 |     Origin memory origin = Origin(srcEid, _packetHeader.sender(), nonce);
270 |     >>> ILayerZeroEndpointV2(endpoint).verify(origin, receiver, _payloadHash);
271 | }
272 | ```
273 | - `srcEid` is the `channelId`, while `nonce` is the `latestNonce` assigned when sending the message.
274 | - The [`EndpointV2::verify`](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b/packages/layerzero-v2/evm/protocol/contracts/EndpointV2.sol#L151) function updates the `inboundPayloadHash` mapping for that nonce.
275 | ```solidity
276 | /// @dev MESSAGING STEP 2 - on the destination chain
277 | /// @dev configured receive library verifies a message
278 | /// @param _origin a struct holding the srcEid, nonce, and sender of the message
279 | /// @param _receiver the receiver of the message
280 | /// @param _payloadHash the payload hash of the message
281 | function verify(Origin calldata _origin, address _receiver, bytes32 _payloadHash) external {
282 |     if (!isValidReceiveLibrary(_receiver, _origin.srcEid, msg.sender)) revert Errors.LZ_InvalidReceiveLibrary();
283 | 
284 |     uint64 lazyNonce = lazyInboundNonce[_receiver][_origin.srcEid][_origin.sender];
285 |     if (!_initializable(_origin, _receiver, lazyNonce)) revert Errors.LZ_PathNotInitializable();
286 |     if (!_verifiable(_origin, _receiver, lazyNonce)) revert Errors.LZ_PathNotVerifiable();
287 | 
288 |     // insert the message into the message channel
289 |     >>> _inbound(_receiver, _origin.srcEid, _origin.sender, _origin.nonce, _payloadHash);
290 |     emit PacketVerified(_origin, _receiver, _payloadHash);
291 | }
292 | 
293 | /// @dev inbound won't update the nonce eagerly to allow unordered verification
294 | /// @dev instead, it will update the nonce lazily when the message is received
295 | /// @dev messages can only be cleared in order to preserve censorship-resistance
296 | function _inbound(
297 |     address _receiver,
298 |     uint32 _srcEid,
299 |     bytes32 _sender,
300 |     uint64 _nonce,
301 |     bytes32 _payloadHash
302 | ) internal {
303 |     if (_payloadHash == EMPTY_PAYLOAD_HASH) revert Errors.LZ_InvalidPayloadHash();
304 |     >>> inboundPayloadHash[_receiver][_srcEid][_sender][_nonce] = _payloadHash;
305 | }
306 | ```
307 | 
308 | - During invocation of `lzReceive`, the [`_clearPayload`](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b/packages/layerzero-v2/evm/protocol/contracts/MessagingChannel.sol#L138) internal function gets called.
309 | ```solidity
310 | function _clearPayload(
311 |     address _receiver,
312 |     uint32 _srcEid,
313 |     bytes32 _sender,
314 |     uint64 _nonce,
315 |     bytes memory _payload
316 | ) internal returns (bytes32 actualHash) {
317 |     uint64 currentNonce = lazyInboundNonce[_receiver][_srcEid][_sender];
318 |     if (_nonce > currentNonce) {
319 |         unchecked {
320 |             // try to lazily update the inboundNonce till the _nonce
321 |             for (uint64 i = currentNonce + 1; i <= _nonce; ++i) {
322 |                 >>> if (!_hasPayloadHash(_receiver, _srcEid, _sender, i)) revert Errors.LZ_InvalidNonce(i);
323 |             }
324 |             lazyInboundNonce[_receiver][_srcEid][_sender] = _nonce;
325 |         }
326 |     }
327 | 
328 | function _hasPayloadHash(
329 |     address _receiver,
330 |     uint32 _srcEid,
331 |     bytes32 _sender,
332 |     uint64 _nonce
333 | ) internal view returns (bool) {
334 |     return inboundPayloadHash[_receiver][_srcEid][_sender][_nonce] != EMPTY_PAYLOAD_HASH;
335 | }
336 | ```
337 | 
338 | - The key part of this function is updating the `lazyInboundNonce` to the latest nonce.
339 | - If a message with a certain nonce has been sent but couldn't be verified, `_hasPayloadHash` for that nonce will return `false` and the `lzReceive` function will revert.
340 | 
341 | In summary, if a message with a certain nonce has been sent but couldn't be verified, the `lzReceive` function will revert until that nonce is verified.
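To make the failure mode concrete, here is a minimal, hypothetical read target (the contract and function names are made up for illustration). The point is that the view functions DVNs call for an `lzRead` request should be written so they cannot revert:

```solidity
// Hypothetical read target queried by DVNs via lzRead -- names are illustrative.
contract PriceReadTarget {
    mapping(address => uint256) internal prices;

    /// @notice Called off-chain by the DVNs to build the read response.
    /// @dev Must never revert: a revert here means no response can be produced for this nonce,
    ///      which blocks every later nonce on the read channel from being executed.
    function readPrice(address token) external view returns (uint256 value, bool ok) {
        value = prices[token];
        ok = value != 0; // signal "no data" instead of reverting
    }
}
```

The same reasoning applies to any `lzMap`/`lzReduce` compute functions configured for the request.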
342 | 
343 | > The `OAppRead` or its delegate can call the [`EndpointV2::skip`](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b/packages/layerzero-v2/evm/protocol/contracts/MessagingChannel.sol#L76-L88) function to increment the `lazyInboundNonce` without the corresponding message having been verified. This can be used to unblock the pathway, but the primary goal should be to ensure that every message can be verified in the first place by making sure the read calls never revert.
344 | 
345 | ### lzRead can be used to read data from the same chain
346 | As opposed to standard LayerZero messages, where you can only send data to a different chain, `lzRead` allows reading data from the same chain.
347 | 
348 | Here is an example of supported chains for Ethereum:
349 | ![Ethereum Read Paths](/resources/LayerZero-lzRead-Ethereum.png)
350 | 
351 | The `targetEid` is specified inside the `EVMCallRequestV1` and `EVMCallComputeV1` structs:
352 | ```solidity
353 | struct EVMCallRequestV1 {
354 |     uint16 appRequestLabel; // Label identifying the application or type of request (can be used in lzCompute)
355 |     >>> uint32 targetEid; // Target endpoint ID (representing a target blockchain)
356 |     bool isBlockNum; // True if the request = block number, false if timestamp
357 |     uint64 blockNumOrTimestamp; // Block number or timestamp to use in the request
358 |     uint16 confirmations; // Number of block confirmations on top of the requested block number or timestamp before the view function can be called
359 |     address to; // Address of the target contract on the target chain
360 |     bytes callData; // Calldata for the contract call
361 | }
362 | 
363 | struct EVMCallComputeV1 {
364 |     uint8 computeSetting; // Compute setting (0 = map only, 1 = reduce only, 2 = map reduce)
365 |     >>> uint32 targetEid; // Target endpoint ID (representing a target blockchain)
366 |     bool isBlockNum; // True if the request = block number, false if timestamp
367 |     uint64 blockNumOrTimestamp; // Block number or timestamp to use in the request
368 |     uint16 confirmations; // Number of block confirmations on top of the requested block number or timestamp before the view function can be called
369 |     address to; // Address of the target contract on the target chain
370 | }
371 | ```
372 | 
373 | > Make sure to check the `targetEid` for the `lzRead` request and assess whether you need to read data from the same chain, or any other for that matter. As highlighted in the [Reverts while reading data blocks subsequent messages](#reverts-while-reading-data-blocks-subsequent-messages) section, it's paramount that the `lzRead` request doesn't revert.
374 | 
375 | ## LayerZero immutability
376 | 
377 | How immutable is LayerZero?
378 | 
379 | Based on the [LayerZeroV2 docs](https://docs.layerzero.network/v2/developers/evm/overview):
380 | > LayerZero is an immutable, censorship-resistant, and permissionless smart contract protocol that enables anyone on a blockchain to send, verify, and execute messages on a supported destination network.
381 | 
382 | Is this true? Continue reading if you want to learn why you should always configure your OApp.
383 | 
384 | Let's examine the critical dependencies in the `EndpointV2` contract, which is the core contract of the system. Two key external dependencies are:
385 | 
386 | 1. 
**Message Sending**: The [send library lookup](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b/packages/layerzero-v2/evm/protocol/contracts/EndpointV2.sol#L75) during message transmission
387 | ```solidity
388 | address _sendLibrary = getSendLibrary(_sender, _params.dstEid);
389 | ```
390 | 2. **Message Verification**: The [receive library validation](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b/packages/layerzero-v2/evm/protocol/contracts/EndpointV2.sol#L152) on the destination chain
391 | ```solidity
392 | if (!isValidReceiveLibrary(_receiver, _origin.srcEid, msg.sender)) revert Errors.LZ_InvalidReceiveLibrary();
393 | ```
394 | 
395 | The configuration of send and receive libraries is managed through the `MessageLibManager` contract, which `EndpointV2` extends.
396 | 
397 | > Only the LayerZero team can register libraries that can be used to send or receive messages.
398 | 
399 | #### Key Privileges of the LayerZero Team
400 | 1. **Library Registration**: Only LayerZero can register new send/receive libraries via [`MessageLibManager.registerLibrary()`](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b/packages/layerzero-v2/evm/protocol/contracts/MessageLibManager.sol#L140)
401 | 2. **Default Library Control**: LayerZero can change default send/receive libraries via:
402 |    - [`setDefaultSendLibrary()`](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b/packages/layerzero-v2/evm/protocol/contracts/MessageLibManager.sol#L157)
403 |    - [`setDefaultReceiveLibrary()`](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b/packages/layerzero-v2/evm/protocol/contracts/MessageLibManager.sol#L171)
404 | 
405 | 
406 | While only LayerZero can register new libraries, each protocol can select and configure its preferred send and receive libraries from the registered options.
407 | 
408 | What are the options? Let's check the [EndpointV2 on Ethereum](https://etherscan.io/address/0x1a44076050125825900e736c501f859c50fE728c) contract and call the `getRegisteredLibraries` function. Here is what we get:
409 | 
410 | - [BlockLibrary](https://etherscan.io/address/0x1ccBf0db9C192d969de57E25B3fF09A25bb1D862) - a dummy library that completely disables sending and receiving messages.
411 | - [SendUln302](https://etherscan.io/address/0xbb2ea70c9e858123480642cf96acbcce1372dce1)
412 | - [ReceiveUln302](https://etherscan.io/address/0xc02ab410f0734efa3f14628780e6e695156024c2)
413 | - [ReadLibrary1002](https://etherscan.io/address/0x74f55bc2a79a27a0bf1d1a35db5d0fc36b9fdb9d)
414 | 
415 | Currently, the default libraries are the only available options and are required for cross-chain communication. Protocols that don't explicitly configure their libraries will automatically use these defaults.
416 | 
417 | There are two security considerations here; the threat model is LayerZero acting maliciously.
418 | 
419 | 1. **Protocol hasn't configured a send/receive library**
420 |    - Relies on system defaults
421 |    - LayerZero can freely change these defaults
422 |    - Risk of protocol functionality being bricked
423 | 
424 | 2. **Protocol has explicitly configured its send/receive library to use the current LayerZero defaults**
425 |    - While this may seem similar to not configuring at all (since currently the default libraries are the only option), there is a crucial distinction.
426 |    - When you explicitly configure your send/receive library, that configuration is locked in for your protocol.
427 | - Even if LayerZero later adds new libraries or changes the defaults, your protocol will continue using your configured libraries 428 | - This gives you control over your security posture - you won't be affected by changes to system defaults 429 | 430 | ## Configuration Tips 431 | 432 | ### Pausing bidirectional messages 433 | 434 | When deploying an OApp on multiple chains (e.g., Ethereum and Arbitrum), bidirectional communication is established by [setting peers](https://github.com/LayerZero-Labs/LayerZero-v2/blob/943ce4a/packages/layerzero-v2/evm/oapp/contracts/oapp/OAppCore.sol#L56) on both OApps. However, you might want to allow messages in only one direction (e.g., only Ethereum -> Arbitrum). 435 | 436 | This cannot be achieved through the `setPeer` configuration alone since `_getPeerOrRevert` is called during both [sending](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b/packages/layerzero-v2/evm/oapp/contracts/oapp/OAppSender.sol#L88) and [receiving](https://github.com/LayerZero-Labs/LayerZero-v2/blob/943ce4a/packages/layerzero-v2/evm/oapp/contracts/oapp/OAppReceiver.sol#L106) messages. 437 | 438 | However, there is a workaround to disable communication in one direction without modifying the peer configuration. By setting the [Executor configuration](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b/packages/layerzero-v2/evm/protocol/contracts/MessageLibManager.sol#L307) parameter [`maxMessageSize`](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b/packages/layerzero-v2/evm/messagelib/contracts/SendLibBase.sol#L24) to 1 byte, the [`send`](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b/packages/layerzero-v2/evm/messagelib/contracts/SendLibBase.sol#L162) function will always revert, effectively blocking messages from being sent from that chain. 439 | 440 | ## Non-standard implementations 441 | 442 | ### Message receiver should implement `allowInitializePath` function 443 | 444 | When a DVN verifies a message for a pathway the first time, it calls [`allowInitializePath`](https://github.com/LayerZero-Labs/LayerZero-v2/blob/592625b/packages/layerzero-v2/evm/protocol/contracts/EndpointV2.sol#L340) on the receiver to check if messages from that sender and source chain are allowed. The default OApp implementation checks if the sender is a trusted peer: 445 | 446 | ```solidity 447 | ## OAppReceiver.sol 448 | 449 | function allowInitializePath(Origin calldata origin) public view virtual returns (bool) { 450 | return peers[origin.srcEid] == origin.sender; 451 | } 452 | ``` 453 | 454 | > If you're not using the default OApp implementation, make sure to implement the `allowInitializePath` function in your receiving contract. 455 | 456 | 457 | ## Useful resources 458 | 459 | - [LayerZeroV2 developer docs](https://docs.layerzero.network/v2) 460 | - [Decode LayerZero V2](https://senn.fun/decode-layerzero-v2) 461 | - [LayerZero V2 Deep Dive Video](https://www.youtube.com/watch?v=ercyc98S7No) 462 | - [Comparison between Hyperlane, Wormhole and LayerZero](https://lindgren.xyz/posts/how-interopability-work/) 463 | --------------------------------------------------------------------------------