├── .github ├── CONTRIBUTING.md ├── ISSUE_TEMPLATE │ ├── bug_report.md │ └── feature_request.md ├── dependabot.yml ├── labels.yaml ├── pull_request_template.md └── workflows │ ├── book.yaml │ ├── release.yaml │ ├── rust.yaml │ └── sync_labels.yaml ├── .gitignore ├── .rustfmt.toml ├── .taplo.toml ├── CONTRIBUTING.md ├── Cargo.lock ├── Cargo.toml ├── LICENSE ├── README.md ├── arbiter-core ├── Cargo.toml ├── LICENSE └── src │ ├── agent.rs │ ├── handler.rs │ ├── lib.rs │ └── runtime │ ├── mod.rs │ └── wasm.rs ├── arbiter-ethereum ├── Cargo.toml ├── LICENSE ├── benches │ └── bench.rs ├── contracts │ ├── ArbiterMath.sol │ ├── ArbiterToken.sol │ ├── Counter.sol │ ├── LiquidExchange.sol │ └── WETH.sol ├── src │ ├── bindings │ │ ├── arbiter_math.rs │ │ ├── arbiter_token.rs │ │ ├── counter.rs │ │ ├── gaussian.rs │ │ ├── invariant.rs │ │ ├── liquid_exchange.rs │ │ ├── mod.rs │ │ ├── units.rs │ │ └── weth.rs │ ├── console │ │ ├── abi.rs │ │ └── mod.rs │ ├── coprocessor.rs │ ├── database │ │ ├── fork.rs │ │ ├── inspector.rs │ │ └── mod.rs │ ├── environment │ │ ├── instruction.rs │ │ └── mod.rs │ ├── errors.rs │ ├── events.rs │ ├── lib.rs │ └── middleware │ │ ├── connection.rs │ │ ├── mod.rs │ │ └── nonce_middleware.rs └── tests │ ├── common.rs │ ├── contracts.rs │ ├── environment_integration.rs │ ├── events_integration.rs │ ├── fork.json │ └── middleware_integration.rs ├── arbiter-macros ├── CHANGELOG.md ├── Cargo.toml ├── LICENSE └── src │ └── lib.rs ├── arbiter ├── Cargo.toml ├── LICENSE └── src │ └── lib.rs ├── book.toml ├── docs ├── CHANGELOG.md ├── Cargo.lock ├── Cargo.toml ├── build.rs ├── src │ ├── SUMMARY.md │ ├── contributing.md │ ├── getting_started │ │ ├── examples.md │ │ └── index.md │ ├── index.md │ ├── lib.rs │ ├── usage │ │ ├── arbiter_cli.md │ │ ├── arbiter_core │ │ │ ├── environment.md │ │ │ ├── index.md │ │ │ └── middleware.md │ │ ├── arbiter_engine │ │ │ ├── agents_and_engines.md │ │ │ ├── behaviors.md │ │ │ ├── configuration.md │ │ │ ├── index.md │ │ │ └── worlds_and_universes.md │ │ ├── arbiter_macros.md │ │ ├── index.md │ │ └── techniques │ │ │ ├── anomaly_detection.md │ │ │ ├── index.md │ │ │ ├── measuring_risk.md │ │ │ └── stateful_testing.md │ └── vulnerability_corpus.md └── tests │ └── skeptic.rs ├── examples └── leader │ ├── Cargo.lock │ ├── Cargo.toml │ ├── index.html │ └── src │ ├── lib.rs │ └── server.rs └── justfile /.github/CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Welcome to the Arbiter contributing guide 2 | 3 | Thank you for investing your time in contributing to our project! Any contribution you make is greatly appreciated :sparkles:. 4 | 5 | Read our [Code of Conduct](https://github.com/anthias-labs/.github/blob/main/CODE_OF_CONDUCT.md) to keep our community approachable and respectable. 6 | 7 | In this guide you will get an overview of the contribution workflow from opening an issue, creating a PR, reviewing, and merging the PR. 8 | 9 | Use the table of contents icon on the top left corner of this document to get to a specific section of this guide quickly. 10 | 11 | ## New contributor guide 12 | 13 | To get an overview of the project, read the [README](https://github.com/anthias-labs/arbiter/blob/main/README.md). 
Here are some resources to help you get started with open source contributions: 14 | 15 | - [Finding ways to contribute to open source on GitHub](https://docs.github.com/en/get-started/exploring-projects-on-github/finding-ways-to-contribute-to-open-source-on-github) 16 | - [Set up Git](https://docs.github.com/en/get-started/quickstart/set-up-git) 17 | - [GitHub flow](https://docs.github.com/en/get-started/quickstart/github-flow) 18 | - [Collaborating with pull requests](https://docs.github.com/en/github/collaborating-with-pull-requests) 19 | 20 | 21 | ## Getting started 22 | 23 | ### Issues 24 | 25 | #### Create a new issue 26 | 27 | If you spot a problem with the docs, [search if an issue already exists](https://docs.github.com/en/github/searching-for-information-on-github/searching-on-github/searching-issues-and-pull-requests#search-by-the-title-body-or-comments). If a related issue doesn't exist, you can open a new issue! 28 | 29 | #### Solve an issue 30 | 31 | Scan through our [existing issues](https://github.com/anthias-labs/arbiter/issues) to find one that interests you. You can narrow down the search using `labels` as filters. If you find an issue to work on, you are welcome to assign it to yourself and open a PR with a fix. 32 | 33 | ### Make Changes 34 | 35 | 36 | 1. [Install Git LFS](https://docs.github.com/en/github/managing-large-files/versioning-large-files/installing-git-large-file-storage). 37 | 2. Fork the repository. 38 | - Using GitHub Desktop: 39 | - [Getting started with GitHub Desktop](https://docs.github.com/en/desktop/installing-and-configuring-github-desktop/getting-started-with-github-desktop) will guide you through setting up Desktop. 40 | - Once Desktop is set up, you can use it to [fork the repo](https://docs.github.com/en/desktop/contributing-and-collaborating-using-github-desktop/cloning-and-forking-repositories-from-github-desktop)! 41 | 42 | - Using the command line: 43 | - [Fork the repo](https://docs.github.com/en/github/getting-started-with-github/fork-a-repo#fork-an-example-repository) so that you can make your changes without affecting the original project until you're ready to merge them. 44 | 45 | 3. Create a working branch and start with your changes! 46 | 47 | ### Commit your update 48 | 49 | Commit the changes once you are happy with them, using descriptive commit messages :zap:. 50 | 51 | ### Pull Request 52 | 53 | When you're finished with the changes, create a pull request, also known as a PR. 54 | - Check that your pull request passes our continuous integration (CI). If you cannot get a certain integration test to pass, let us know. We can assist you in fixing these issues or approve a merge manually. 55 | - Make sure your additions are properly documented! See the [rustdoc book](https://doc.rust-lang.org/rustdoc/how-to-write-documentation.html) for documentation guidelines. Each module uses the diagnostic attribute `#![warn(missing_docs)]`, which will trigger clippy in our CI. 56 | - Don't forget to [link PR to issue](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue) if you are solving one. 57 | - Enable the checkbox to [allow maintainer edits](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/allowing-changes-to-a-pull-request-branch-created-from-a-fork) so the branch can be updated for a merge. 58 | Once you submit your PR, an Arbiter team member will review your proposal. We may ask questions or request additional information. 
59 | - We may ask for changes to be made before a PR can be merged, either using [suggested changes](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/incorporating-feedback-in-your-pull-request) or pull request comments. You can apply suggested changes directly through the UI. You can make any other changes in your fork, then commit them to your branch. 60 | - As you update your PR and apply changes, mark each conversation as [resolved](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/commenting-on-a-pull-request#resolving-conversations). 61 | - If you run into any merge issues, check out this [git tutorial](https://github.com/skills/resolve-merge-conflicts) to help you resolve merge conflicts and other issues. 62 | 63 | ### Your PR is merged! 64 | 65 | Congratulations :tada::tada: The Anthias Labs team thanks you :sparkles:. 66 | 67 | Once your PR is merged, your contributions will be publicly visible on the [Arbiter Repository](https://github.com/anthias-labs/arbiter). 68 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/bug_report.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Bug Report 3 | about: Report a bug to help us improve 4 | title: 'fix(area): brief description of the bug' 5 | labels: 'type: enhancement' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Title Format** 11 | Please use the following format for your title: 12 | ``` 13 | fix(area): brief description of the bug 14 | ``` 15 | Where `area` is one of: `core`, `engine`, `ethereum`, `macros` 16 | 17 | **Describe the Bug** 18 | A clear and concise description of what the bug is. 19 | 20 | **To Reproduce** 21 | Steps to reproduce the behavior: 22 | 1. Use function/method '...' 23 | 2. With parameters '....' 24 | 3. See error 25 | 26 | **Expected Behavior** 27 | A clear and concise description of what you expected to happen. 28 | 29 | **Environment** 30 | - OS: [e.g. Ubuntu 20.04] 31 | - Rust Version: [e.g. 1.75.0] 32 | - Crate Version: [e.g. 0.1.0] 33 | 34 | **Additional Context** 35 | Add any other context about the problem here. 36 | 37 | **Labels to Consider** 38 | - `area: core`, `area: engine`, `area: ethereum`, or `area: macros` (based on affected component) 39 | - `priority: critical/high/medium/low` (based on impact) 40 | - `tech: performance` (if performance related) 41 | - `tech: security` (if security related) -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/feature_request.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Feature Request 3 | about: Suggest an idea for this project 4 | title: 'feat(area): brief description of the feature' 5 | labels: 'type: enhancement' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Title Format** 11 | Please use the following format for your title: 12 | ``` 13 | feat(area): brief description of the feature 14 | ``` 15 | Where `area` is one of: `core`, `engine`, `ethereum`, `macros` 16 | 17 | **Feature Description** 18 | A clear and concise description of what you want to happen. 19 | 20 | **Motivation** 21 | Why is this feature needed? What problems does it solve? 22 | 23 | **Implementation Details** 24 | If applicable, describe the technical approach to implementing this feature: 25 | - [ ] Task 1 26 | - [ ] Task 2 27 | - [ ] Task 3 28 | 29 | **Dependencies** 30 | List any dependencies or prerequisites needed for this feature. 
31 | 32 | **Additional Context** 33 | Add any other context or screenshots about the feature request here. 34 | 35 | **Labels to Consider** 36 | - `area: core`, `area: engine`, `area: ethereum`, or `area: macros` (based on component) 37 | - `priority: critical/high/medium/low` (based on importance) 38 | - `tech: performance` (if performance related) 39 | - `community: help-wanted` (if seeking contributors) -------------------------------------------------------------------------------- /.github/dependabot.yml: -------------------------------------------------------------------------------- 1 | version: 2 2 | updates: 3 | - package-ecosystem: "cargo" 4 | directory: "/" 5 | schedule: 6 | interval: "monthly" 7 | # Group all updates together 8 | groups: 9 | all-updates: 10 | patterns: 11 | - "*" 12 | # Specify rules for version updates 13 | open-pull-requests-limit: 10 14 | # Assign reviewers (optional) 15 | reviewers: 16 | - "autoparallel" 17 | # Labels for PRs (optional) 18 | labels: 19 | - "dependencies" 20 | - "automated pr" 21 | # Configure commit message 22 | commit-message: 23 | prefix: "chore" 24 | include: "scope" 25 | # Only allow certain update types (optional) 26 | allow: 27 | # Allow both direct and indirect updates for all packages 28 | - dependency-type: "all" -------------------------------------------------------------------------------- /.github/labels.yaml: -------------------------------------------------------------------------------- 1 | # Priority Levels (Red to Green) 2 | - name: "priority: critical" 3 | color: "b60205" 4 | description: "Must be addressed immediately" 5 | 6 | - name: "priority: high" 7 | color: "d93f0b" 8 | description: "High priority tasks" 9 | 10 | - name: "priority: medium" 11 | color: "fbca04" 12 | description: "Medium priority tasks" 13 | 14 | - name: "priority: low" 15 | color: "0e8a16" 16 | description: "Low priority tasks" 17 | 18 | # Areas of Codebase (Blues) 19 | - name: "area: core" 20 | color: "0052cc" 21 | description: "Core related changes" 22 | 23 | - name: "area: engine" 24 | color: "006b75" 25 | description: "Engine related changes" 26 | 27 | - name: "area: ethereum" 28 | color: "0075ca" 29 | description: "Ethereum related changes" 30 | 31 | - name: "area: macros" 32 | color: "008672" 33 | description: "Macros related changes" 34 | 35 | # Task Types (Purples) 36 | - name: "type: refactor" 37 | color: "6f42c1" 38 | description: "Code refactoring and restructuring" 39 | 40 | - name: "type: enhancement" 41 | color: "7057ff" 42 | description: "Improvements to existing features" 43 | 44 | - name: "type: maintenance" 45 | color: "8a63d2" 46 | description: "Maintenance and cleanup tasks" 47 | 48 | # Technical Categories (Oranges) 49 | - name: "tech: security" 50 | color: "d93f0b" 51 | description: "Security-related changes" 52 | 53 | - name: "tech: performance" 54 | color: "fbca04" 55 | description: "Performance improvements" 56 | 57 | - name: "tech: testing" 58 | color: "ff7619" 59 | description: "Testing related changes" 60 | 61 | # Process Labels (Greens) 62 | - name: "status: needs-review" 63 | color: "0e8a16" 64 | description: "Needs review from maintainers" 65 | 66 | - name: "status: blocked" 67 | color: "44cc11" 68 | description: "Blocked by other issues" 69 | 70 | - name: "status: in-progress" 71 | color: "1a7f37" 72 | description: "Currently being worked on" 73 | 74 | # Community Labels (Teals) 75 | - name: "community: good-first-issue" 76 | color: "008672" 77 | description: "Good for new contributors" 78 | 79 | - name: "community: help-wanted" 80 | color: "006b75" 
81 | description: "Extra attention needed" 82 | 83 | # Documentation (Light Blue) 84 | - name: "docs" 85 | color: "0075ca" 86 | description: "Documentation updates" 87 | 88 | # Dependencies (Pink) 89 | - name: "dependencies" 90 | color: "cb7eed" 91 | description: "Dependency updates and maintenance" 92 | 93 | # CI/CD (Bright Green) 94 | - name: "ci/cd" 95 | color: "44cc11" 96 | description: "Continuous integration and deployment pipeline changes" -------------------------------------------------------------------------------- /.github/pull_request_template.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Pull Request 3 | about: Create a pull request 4 | title: 'type(area): brief description of changes' 5 | assignees: '' 6 | --- 7 | 8 | **Title Format** 9 | Please use the following format for your title: 10 | ``` 11 | type(area): brief description of changes 12 | ``` 13 | Where: 14 | - `type` is one of: `feat`, `fix`, `refactor`, `docs`, `test`, `chore` 15 | - `area` is one of: `core`, `engine`, etc. 16 | 17 | **Description** 18 | Please provide a clear description of your changes. If this PR closes any issues, please reference them here. 19 | 20 | **Changes Made** 21 | - [ ] Change 1 22 | - [ ] Change 2 23 | - [ ] Change 3 24 | 25 | **Testing** 26 | - [ ] Tests added/updated 27 | - [ ] All tests pass 28 | 29 | **Labels to Consider** 30 | - `area: *` (based on component) 31 | - `priority: critical/high/medium/low` (based on importance) 32 | - `type: enhancement` (for new features) 33 | - `type: refactor` (for refactoring) 34 | - `tech: performance` (if performance related) 35 | - `tech: testing` (if testing related) 36 | - `docs` (if documentation related) 37 | 38 | **Additional Notes** 39 | Add any additional context or notes here. 
40 | 41 | ## 📝 Description 42 | 43 | 44 | ## 🔍 Changes include 45 | 46 | - [ ] 🐛 Bugfix 47 | - [ ] ✨ New feature 48 | - [ ] 📚 Documentation 49 | - [ ] ⚡ Performance improvement 50 | - [ ] 🔨 Refactoring 51 | - [ ] ✅ Test updates 52 | 53 | ## 🧪 Testing 54 | 55 | 56 | ## 📋 Checklist 57 | - [ ] I have tested the changes locally 58 | - [ ] I have updated the documentation accordingly 59 | - [ ] I have added tests that prove my fix/feature works 60 | - [ ] All tests pass locally 61 | 62 | ## 📸 Screenshots 63 | 64 | 65 | ## 🔗 Linked Issues 66 | 67 | 68 | closes # -------------------------------------------------------------------------------- /.github/workflows/book.yaml: -------------------------------------------------------------------------------- 1 | # name: MDBook Build (and Deploy on Push) 2 | 3 | # on: 4 | # pull_request: 5 | # branches: ["main"] 6 | # push: 7 | # branches: ["main"] 8 | 9 | # permissions: 10 | # contents: read 11 | # pages: write 12 | # id-token: write 13 | 14 | # jobs: 15 | # build: 16 | # name: Build Documentation 17 | # runs-on: ubuntu-latest 18 | # steps: 19 | # - uses: actions/checkout@v4 20 | 21 | # - name: Install Rust 22 | # uses: dtolnay/rust-toolchain@master 23 | # with: 24 | # toolchain: nightly-2025-05-25 25 | 26 | # - name: Install mdbook 27 | # uses: taiki-e/install-action@cargo-binstall 28 | # with: 29 | # tool: mdbook 30 | 31 | # - name: Install mdbook-katex 32 | # uses: taiki-e/install-action@cargo-binstall 33 | # with: 34 | # tool: mdbook-katex 35 | 36 | # - name: Install mdbook-linkcheck 37 | # uses: taiki-e/install-action@cargo-binstall 38 | # with: 39 | # tool: mdbook-linkcheck 40 | 41 | # - name: Build mdBook 42 | # run: mdbook build 43 | 44 | # - name: Upload artifact 45 | # uses: actions/upload-pages-artifact@v3 46 | # with: 47 | # path: ./docs/html 48 | 49 | # deploy: 50 | # environment: 51 | # name: github-pages 52 | # url: ${{ steps.deployment.outputs.page_url }} 53 | # runs-on: ubuntu-latest 54 | # needs: build 55 | # if: github.event_name == 'push' && github.ref == 'refs/heads/main' 56 | # steps: 57 | # - name: Deploy to GitHub Pages 58 | # id: deployment 59 | # uses: actions/deploy-pages@v4 -------------------------------------------------------------------------------- /.github/workflows/release.yaml: -------------------------------------------------------------------------------- 1 | # name: Release 2 | 3 | # permissions: 4 | # pull-requests: write 5 | # contents: write 6 | 7 | # on: 8 | # push: 9 | # branches: 10 | # - main 11 | 12 | # jobs: 13 | # release: 14 | # name: release 15 | # runs-on: ubuntu-latest 16 | # steps: 17 | # - name: Checkout repository 18 | # uses: actions/checkout@v4 19 | # with: 20 | # fetch-depth: 0 21 | 22 | # - name: Install Rust toolchain 23 | # uses: dtolnay/rust-toolchain@master 24 | # with: 25 | # toolchain: nightly-2025-05-25 26 | 27 | # - name: Rust Cache 28 | # uses: Swatinem/rust-cache@v2 29 | 30 | # - name: Install cargo-semver-checks 31 | # uses: taiki-e/install-action@cargo-binstall 32 | # with: 33 | # tool: cargo-semver-checks 34 | 35 | # - name: Check semver 36 | # run: cargo semver-checks check-release 37 | # continue-on-error: true 38 | 39 | # - name: Update Cargo.lock 40 | # uses: stefanzweifel/git-auto-commit-action@v5 41 | # with: 42 | # commit_message: "chore: update Cargo.lock" 43 | # file_pattern: "Cargo.lock" 44 | 45 | # - name: Run release-plz 46 | # uses: MarcoIeni/release-plz-action@v0.5.41 47 | # env: 48 | # GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} 49 | # CARGO_REGISTRY_TOKEN: ${{ 
secrets.CARGO_REGISTRY_TOKEN }} -------------------------------------------------------------------------------- /.github/workflows/rust.yaml: -------------------------------------------------------------------------------- 1 | name: Rust 2 | concurrency: 3 | group: ${{ github.workflow }}-${{ github.ref }} 4 | cancel-in-progress: true 5 | 6 | on: 7 | push: 8 | branches: [ main ] 9 | pull_request: 10 | branches: [ main ] 11 | 12 | env: 13 | CARGO_TERM_COLOR: always 14 | 15 | jobs: 16 | rust-fmt: 17 | name: rustfmt 18 | runs-on: ubuntu-latest 19 | steps: 20 | - uses: actions/checkout@v4 21 | 22 | - name: Install Rust 23 | uses: dtolnay/rust-toolchain@master 24 | with: 25 | toolchain: nightly-2025-05-25 26 | components: rustfmt 27 | 28 | - name: Rust Cache 29 | uses: Swatinem/rust-cache@v2 30 | with: 31 | key: rust/rustfmt 32 | 33 | - name: Run Rust fmt 34 | run: cargo fmt --all -- --check 35 | 36 | toml-fmt: 37 | name: taplo 38 | runs-on: ubuntu-latest 39 | steps: 40 | - uses: actions/checkout@v4 41 | 42 | - name: Install Rust 43 | uses: dtolnay/rust-toolchain@master 44 | with: 45 | toolchain: nightly-2025-05-25 46 | 47 | - name: Install taplo 48 | uses: taiki-e/install-action@cargo-binstall 49 | with: 50 | tool: taplo-cli 51 | 52 | - name: Rust Cache 53 | uses: Swatinem/rust-cache@v2 54 | with: 55 | key: rust/taplo 56 | 57 | - name: Run TOML fmt 58 | run: taplo fmt --check 59 | 60 | check: 61 | name: check 62 | runs-on: ubuntu-latest 63 | steps: 64 | - uses: actions/checkout@v4 65 | 66 | - name: Install Rust 67 | uses: dtolnay/rust-toolchain@master 68 | with: 69 | toolchain: nightly-2025-05-25 70 | 71 | - name: Rust Cache 72 | uses: Swatinem/rust-cache@v2 73 | with: 74 | key: rust/check 75 | 76 | - name: Run cargo check 77 | run: cargo check --workspace 78 | 79 | clippy: 80 | name: clippy 81 | runs-on: ubuntu-latest 82 | steps: 83 | - uses: actions/checkout@v4 84 | 85 | - name: Install Rust 86 | uses: dtolnay/rust-toolchain@master 87 | with: 88 | toolchain: nightly-2025-05-25 89 | components: clippy 90 | 91 | - name: Rust Cache 92 | uses: Swatinem/rust-cache@v2 93 | with: 94 | key: rust/clippy 95 | 96 | - name: Build 97 | run: cargo build --workspace 98 | 99 | - name: Clippy 100 | run: cargo clippy --all-targets --all-features -- --deny warnings 101 | 102 | test: 103 | name: test 104 | runs-on: ubuntu-latest 105 | steps: 106 | - uses: actions/checkout@v4 107 | 108 | - name: Install Rust 109 | uses: dtolnay/rust-toolchain@master 110 | with: 111 | toolchain: nightly-2025-05-25 112 | 113 | - name: Rust Cache 114 | uses: Swatinem/rust-cache@v2 115 | with: 116 | key: rust/test 117 | 118 | - name: Run tests 119 | run: cargo test --verbose --workspace 120 | 121 | semver: 122 | name: semver 123 | runs-on: ubuntu-latest 124 | continue-on-error: true 125 | steps: 126 | - uses: actions/checkout@v4 127 | with: 128 | fetch-depth: 0 129 | 130 | - name: Install Rust 131 | uses: dtolnay/rust-toolchain@master 132 | with: 133 | toolchain: nightly-2025-05-25 134 | 135 | - name: Rust Cache 136 | uses: Swatinem/rust-cache@v2 137 | with: 138 | key: rust/semver 139 | 140 | - name: Install cargo-semver-checks 141 | uses: taiki-e/install-action@cargo-binstall 142 | with: 143 | tool: cargo-semver-checks 144 | 145 | - name: Check semver 146 | run: cargo semver-checks check-release 147 | 148 | udeps: 149 | name: udeps 150 | runs-on: ubuntu-latest 151 | steps: 152 | - uses: actions/checkout@v4 153 | 154 | - name: Install Rust 155 | uses: dtolnay/rust-toolchain@master 156 | with: 157 | toolchain: nightly-2025-05-25 158 
| 159 | - name: Rust Cache 160 | uses: Swatinem/rust-cache@v2 161 | with: 162 | key: rust/udeps 163 | 164 | - name: Install cargo-udeps 165 | uses: taiki-e/install-action@cargo-binstall 166 | with: 167 | tool: cargo-udeps 168 | 169 | - name: Run cargo-udeps 170 | run: cargo udeps --workspace 171 | 172 | doc: 173 | name: doc 174 | runs-on: ubuntu-latest 175 | steps: 176 | - uses: actions/checkout@v4 177 | 178 | - name: Install Rust 179 | uses: dtolnay/rust-toolchain@master 180 | with: 181 | toolchain: nightly-2025-05-25 182 | 183 | - name: Rust Cache 184 | uses: Swatinem/rust-cache@v2 185 | with: 186 | key: rust/doc 187 | 188 | - name: Run cargo doc to check for warnings 189 | run: RUSTDOCFLAGS="-D warnings" cargo doc --no-deps --all-features -------------------------------------------------------------------------------- /.github/workflows/sync_labels.yaml: -------------------------------------------------------------------------------- 1 | name: Sync Labels 2 | 3 | on: 4 | push: 5 | branches: 6 | - main 7 | paths: 8 | - '.github/labels.yaml' 9 | workflow_dispatch: 10 | 11 | permissions: 12 | issues: write 13 | pull-requests: write 14 | 15 | jobs: 16 | sync: 17 | runs-on: ubuntu-latest 18 | steps: 19 | - uses: actions/checkout@v4 20 | 21 | - uses: micnncim/action-label-syncer@v1 22 | env: 23 | GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} 24 | with: 25 | manifest: .github/labels.yaml 26 | prune: true -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Generated by Cargo 2 | # will have compiled files and executables 3 | /target/ 4 | /benches/target/ 5 | 6 | # These are backup files generated by rustfmt 7 | **/*.rs.bk 8 | 9 | /cache/ 10 | /out/ 11 | *.csv 12 | 13 | .DS_Store 14 | 15 | .vscode 16 | 17 | # mdbook 18 | book 19 | 20 | arbiter-core/data 21 | example_fork/test.json 22 | 23 | doctest_cache/ 24 | docs/target/* 25 | 26 | examples/leader/target/* -------------------------------------------------------------------------------- /.rustfmt.toml: -------------------------------------------------------------------------------- 1 | # Opinionated whitespace and tabs. The most important of these are imports and width settings. 2 | # Others may want to borrow or change these to their own liking. 3 | # https://rust-lang.github.io/rustfmt 4 | 5 | # version-related 6 | edition = "2021" # redundant, fmt will read Cargo.toml for editor edition year 7 | unstable_features = true 8 | use_try_shorthand = true # replace any `try!` (2015 Rust) with `?` 9 | 10 | # misc formatting 11 | condense_wildcard_suffixes = true # replace: (a,b,_,_)=(1, 2, 3, 4); -> (a,b,..)=(1, 2, 3, 4); 12 | format_code_in_doc_comments = true # format code blocks in doc comments 13 | format_macro_matchers = true # $a: ident -> $a:ident 14 | format_strings = true # break and insert newlines for long string literals 15 | match_block_trailing_comma = true # include comma in match blocks after '}' 16 | normalize_comments = true # convert /*..*/ to //.. where possible 17 | reorder_impl_items = true # move `type` and `const` declarations to top of impl block 18 | struct_field_align_threshold = 20 # align struct arguments' types vertically 19 | use_field_init_shorthand = true # struct initialization short {x: x} -> {x} 20 | 21 | # reduce whitespace 22 | blank_lines_upper_bound = 1 # default: 1. Sometimes useful to change to 0 to condense a file. 
23 | brace_style = "PreferSameLine" # prefer starting `{` without inserting extra \n 24 | fn_single_line = true # if it's a short 1-liner, let it be a short 1-liner 25 | match_arm_blocks = false # remove unnecessary {} in match arms 26 | newline_style = "Unix" # not auto, we won the culture war. \n over \r\n 27 | overflow_delimited_expr = true # prefer ]); to ]\n); 28 | where_single_line = true # put where on a single line if possible 29 | 30 | # imports preferences 31 | group_imports = "StdExternalCrate" # create import groupings for std, external libs, and internal deps 32 | imports_granularity = "Crate" # aggressively group imports 33 | 34 | # width settings: everything to 100 35 | comment_width = 100 # default: 80 36 | inline_attribute_width = 60 # inlines #[cfg(test)]\nmod test -> #[cfg(test)] mod test 37 | max_width = 100 # default: 100 38 | use_small_heuristics = "Max" # don't ever newline short of `max_width`. 39 | wrap_comments = true # wrap comments at `comment_width` 40 | # format_strings = true # wrap strings at `max_length` 41 | 42 | # tabs and spaces 43 | hard_tabs = false # (def: false) use spaces over tabs 44 | tab_spaces = 2 # 2 > 4, it's just math. 45 | -------------------------------------------------------------------------------- /.taplo.toml: -------------------------------------------------------------------------------- 1 | # .toml file formatting settings for `taplo` 2 | # https://taplo.tamasfe.dev/configuration/formatter-options.html 3 | 4 | [formatting] 5 | # align entries vertically 6 | align_entries = true 7 | # allow up to 1 consecutive empty line (default: 2) 8 | allowed_blank_lines = 1 9 | # collapse arrays into one line if they fit 10 | array_auto_collapse = true 11 | # alphabetically sort entries not separated by line breaks 12 | reorder_keys = true 13 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to Arbiter 2 | 3 | Thank you for your interest in contributing to Arbiter! This document provides guidelines and instructions for contributing to the project. 4 | 5 | ## Table of Contents 6 | - [Code of Conduct](#code-of-conduct) 7 | - [Getting Started](#getting-started) 8 | - [Development Workflow](#development-workflow) 9 | - [Documentation](#documentation) 10 | - [Issue Guidelines](#issue-guidelines) 11 | - [Pull Request Guidelines](#pull-request-guidelines) 12 | - [Code Style](#code-style) 13 | - [Testing](#testing) 14 | 15 | ## Code of Conduct 16 | 17 | By participating in this project, you agree to abide by our Code of Conduct. Please be respectful and considerate of others. 18 | 19 | ## Getting Started 20 | 21 | 1. Fork the repository 22 | 2. Clone your fork: 23 | ```bash 24 | git clone https://github.com/yourusername/arbiter.git 25 | cd arbiter 26 | ``` 27 | 3. Set up the development environment: 28 | ```bash 29 | # Install Rust (if not already installed) 30 | curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh 31 | 32 | # Install development tools 33 | rustup component add rustfmt clippy 34 | ``` 35 | 36 | ## Development Workflow 37 | 38 | 1. Create a new branch for your feature/fix: 39 | ```bash 40 | git checkout -b type/area/description 41 | # Example: git checkout -b feat/core/agent-handlers 42 | ``` 43 | 44 | 2. Make your changes following the [Code Style](#code-style) guidelines 45 | 46 | 3. 
Run tests and checks: 46 | ```bash 47 | cargo test 48 | cargo fmt --all -- --check 49 | cargo clippy --all-targets --all-features -- -D warnings 50 | ``` 51 | 52 | 4. Commit your changes following the [Commit Message Format](#commit-message-format) 53 | 54 | 5. Push your branch and create a Pull Request 55 | 56 | ## Documentation 57 | 58 | Arbiter provides two types of documentation that you should be familiar with: 59 | 60 | ### API Documentation 61 | The Rust API documentation for all crates can be viewed using: 62 | ```bash 63 | just docs 64 | ``` 65 | This will build and open the Rust API documentation in your browser. This documentation is automatically generated from your code comments and should be kept up to date. 66 | 67 | ### Book Documentation 68 | The comprehensive book documentation can be viewed using: 69 | ```bash 70 | just book 71 | ``` 72 | This will serve the book documentation locally and open it in your browser. The book includes detailed explanations of core concepts, examples, and usage guides. 73 | 74 | When contributing, please: 75 | 1. Keep API documentation up to date with your code changes 76 | 2. Update the book documentation if you add new features or change existing behavior 77 | 3. Add examples to both API docs and the book where appropriate 78 | 4. Ensure definitions and references are accurate 79 | 80 | 81 | ## Issue Guidelines 82 | 83 | When creating issues, please use the provided templates and follow these guidelines: 84 | 85 | ### Title Format 86 | ``` 87 | type(area): brief description 88 | ``` 89 | Where: 90 | - `type` is one of: `feat`, `fix`, `refactor`, `docs`, `test`, `chore` 91 | - `area` is one of: `core`, `engine`, `ethereum`, `macros`, etc. 92 | 93 | ### Labels 94 | Please use appropriate labels to categorize your issue: 95 | - Area labels: `area: core`, `area: engine`, `area: ethereum`, `area: macros`, etc. 96 | - Priority labels: `priority: critical/high/medium/low` 97 | - Type labels: `type: enhancement`, `type: refactor` 98 | - Technical labels: `tech: performance`, `tech: security`, `tech: testing` 99 | 100 | ## Pull Request Guidelines 101 | 102 | 1. Use the provided PR template 103 | 2. Ensure your PR title follows the format: `type(area): description` 104 | 3. Link related issues using `closes #issue_number` 105 | 4. Keep PRs focused and small when possible 106 | 5. Include tests for new features or bug fixes 107 | 6. Update documentation as needed 108 | 109 | ## Code Style 110 | 111 | - Follow Rust's official style guide 112 | - Use `rustfmt` for formatting 113 | - Run `cargo clippy` to catch common mistakes 114 | - Document public APIs thoroughly 115 | - Use meaningful variable and function names 116 | - Keep functions focused and small 117 | 118 | ## Testing 119 | 120 | - Write unit tests for all new functionality 121 | - Include examples in documentation 122 | - Run all tests before submitting PRs 123 | - Consider edge cases and error conditions 124 | 125 | ## Commit Message Format 126 | 127 | Follow this format for commit messages: 128 | ``` 129 | type(area): description 130 | 131 | [optional body] 132 | 133 | [optional footer] 134 | ``` 135 | 136 | Where: 137 | - `type` is one of: `feat`, `fix`, `refactor`, `docs`, `test`, `chore` 138 | - `area` is one of: `core`, `engine`, etc. 139 | - Description is a brief summary of changes 140 | - Body provides additional context if needed 141 | - Footer references issues or PRs 142 | 143 | ## Questions? 144 | 145 | If you have any questions, feel free to: 146 | 1. 
Open an issue with the `question` label 147 | 2. Join our community discussions 148 | 3. Contact the maintainers -------------------------------------------------------------------------------- /Cargo.toml: -------------------------------------------------------------------------------- 1 | [workspace] 2 | exclude = ["benches", "docs"] 3 | members = [ 4 | "arbiter", 5 | "arbiter-core", 6 | "arbiter-ethereum", 7 | "arbiter-macros", 8 | "docs", 9 | ] 10 | 11 | [workspace.dependencies] 12 | # Local 13 | arbiter-core = { path = "arbiter-core" } 14 | arbiter-ethereum = { path = "arbiter-ethereum" } 15 | arbiter-macros = { path = "arbiter-macros" } 16 | 17 | # Ethereum 18 | ethers = { version = "2.0.14" } 19 | revm = { version = "8.0.0", features = ["ethersdb", "std", "serde"] } 20 | revm-primitives = "3.1.1" 21 | 22 | # Serialization 23 | serde = { version = "1.0", features = ["derive"] } 24 | serde_json = { version = "1.0" } 25 | toml = "0.8" 26 | 27 | # Async 28 | async-stream = "0.3" 29 | async-trait = { version = "0.1" } 30 | crossbeam-channel = { version = "0.5" } 31 | futures = "0.3" 32 | futures-util = { version = "0.3" } 33 | tokio = { version = "1.45", default-features = false } 34 | 35 | # Macros 36 | proc-macro2 = { version = "1.0" } 37 | syn = { version = "2.0", features = ["full"] } 38 | 39 | # Errors and logging 40 | anyhow = "1.0" 41 | thiserror = { version = "2.0" } 42 | tracing = "0.1" 43 | tracing-subscriber = { version = "0.3", default-features = false } 44 | tracing-test = "0.2" 45 | 46 | # Testing 47 | tempfile = "3.20" 48 | 49 | [profile.release] 50 | codegen-units = 1 51 | lto = true 52 | -------------------------------------------------------------------------------- /arbiter-core/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | authors = ["Harness Labs"] 3 | edition = "2021" 4 | name = "arbiter-core" 5 | version = "0.1.0" 6 | 7 | [dependencies] 8 | # Error and logging 9 | serde = { workspace = true } 10 | thiserror = { workspace = true } 11 | tracing = { workspace = true } 12 | 13 | # WASM dependencies (optional) 14 | serde_json = { version = "1.0", optional = true } 15 | wasm-bindgen = { version = "0.2", optional = true } 16 | 17 | [dev-dependencies] 18 | tracing-test = { workspace = true } 19 | 20 | [features] 21 | default = [] 22 | wasm = ["wasm-bindgen", "serde_json"] 23 | -------------------------------------------------------------------------------- /arbiter-core/src/handler.rs: -------------------------------------------------------------------------------- 1 | use std::{any::Any, rc::Rc}; 2 | 3 | /// Trait for types that can be sent as messages between agents 4 | pub trait Message: Any + 'static {} 5 | 6 | // Blanket implementation for all types that meet the requirements 7 | impl<T> Message for T where T: Any + 'static {} 8 | 9 | pub trait Handler<M: Message> { 10 | type Reply: Message; 11 | 12 | fn handle(&mut self, message: &M) -> Self::Reply; 13 | } 14 | 15 | pub type MessageHandlerFn = Box<dyn Fn(&mut dyn Any, &dyn Any) -> Rc<dyn Any>>; 16 | 17 | pub fn create_handler<A, M>() -> MessageHandlerFn 18 | where 19 | A: Handler<M> + 'static, 20 | M: Message, { 21 | Box::new(|agent: &mut dyn Any, message: &dyn Any| { 22 | if let (Some(typed_agent), Some(typed_message)) = 23 | (agent.downcast_mut::<A>(), message.downcast_ref::<M>()) 24 | { 25 | let reply = typed_agent.handle(typed_message); 26 | Rc::new(reply) 27 | } else { 28 | Rc::new(()) 29 | } 30 | }) 31 | } 32 | -------------------------------------------------------------------------------- /arbiter-core/src/lib.rs: 
-------------------------------------------------------------------------------- 1 | pub mod agent; 2 | pub mod handler; 3 | pub mod runtime; 4 | -------------------------------------------------------------------------------- /arbiter-core/src/runtime/wasm.rs: -------------------------------------------------------------------------------- 1 | //! WASM bindings for the Runtime 2 | //! 3 | //! This module provides JavaScript-friendly wrappers around the core Runtime functionality. 4 | //! All methods return simple types (bool, usize, String) to work well with wasm-bindgen. 5 | 6 | // TODO: I still don't think this module is actually necessary, I think we can just stub the 7 | // functions and use the Runtime directly from JS. 8 | 9 | use wasm_bindgen::prelude::*; 10 | 11 | use super::{Runtime, RuntimeStatistics}; 12 | use crate::agent::AgentState; 13 | 14 | #[wasm_bindgen] 15 | impl Runtime { 16 | /// Create a new runtime instance 17 | #[wasm_bindgen(constructor)] 18 | pub fn wasm_new() -> Self { Self::new() } 19 | 20 | // === CORE EXECUTION === 21 | 22 | /// Execute a single runtime step 23 | /// Returns the number of messages processed 24 | #[wasm_bindgen(js_name = "step")] 25 | pub fn wasm_step(&mut self) { self.step() } 26 | 27 | /// Check if the runtime has pending work 28 | #[wasm_bindgen(js_name = "hasPendingWork")] 29 | pub fn wasm_has_pending_work(&self) -> bool { self.has_pending_work() } 30 | 31 | /// Start an agent by name 32 | /// Returns true if successful, false if agent not found 33 | #[wasm_bindgen(js_name = "startAgent")] 34 | pub fn wasm_start_agent(&mut self, agent_name: &str) -> bool { 35 | self.start_agent_by_name(agent_name).is_ok() 36 | } 37 | 38 | /// Pause an agent by name 39 | /// Returns true if successful, false if agent not found 40 | #[wasm_bindgen(js_name = "pauseAgent")] 41 | pub fn wasm_pause_agent(&mut self, agent_name: &str) -> bool { 42 | self.pause_agent_by_name(agent_name).is_ok() 43 | } 44 | 45 | /// Resume an agent by name 46 | /// Returns true if successful, false if agent not found 47 | #[wasm_bindgen(js_name = "resumeAgent")] 48 | pub fn wasm_resume_agent(&mut self, agent_name: &str) -> bool { 49 | self.resume_agent_by_name(agent_name).is_ok() 50 | } 51 | 52 | /// Stop an agent by name 53 | /// Returns true if successful, false if agent not found 54 | #[wasm_bindgen(js_name = "stopAgent")] 55 | pub fn wasm_stop_agent(&mut self, agent_name: &str) -> bool { 56 | self.stop_agent_by_name(agent_name).is_ok() 57 | } 58 | 59 | /// Remove an agent by name 60 | /// Returns true if successful, false if agent not found 61 | #[wasm_bindgen(js_name = "removeAgent")] 62 | pub fn wasm_remove_agent(&mut self, agent_name: &str) -> bool { 63 | self.remove_agent_by_name(agent_name).is_ok() 64 | } 65 | 66 | // === BULK OPERATIONS === 67 | 68 | /// Start all agents 69 | /// Returns the number of agents that were started 70 | #[wasm_bindgen(js_name = "startAllAgents")] 71 | pub fn wasm_start_all_agents(&mut self) -> usize { self.start_all_agents() } 72 | 73 | /// Pause all agents 74 | /// Returns the number of agents that were paused 75 | #[wasm_bindgen(js_name = "pauseAllAgents")] 76 | pub fn wasm_pause_all_agents(&mut self) -> usize { self.pause_all_agents() } 77 | 78 | /// Resume all agents 79 | /// Returns the number of agents that were resumed 80 | #[wasm_bindgen(js_name = "resumeAllAgents")] 81 | pub fn wasm_resume_all_agents(&mut self) -> usize { self.resume_all_agents() } 82 | 83 | /// Stop all agents 84 | /// Returns the number of agents that were stopped 85 
| #[wasm_bindgen(js_name = "stopAllAgents")] 86 | pub fn wasm_stop_all_agents(&mut self) -> usize { self.stop_all_agents() } 87 | 88 | /// Remove all agents 89 | /// Returns the number of agents that were removed 90 | #[wasm_bindgen(js_name = "removeAllAgents")] 91 | pub fn wasm_remove_all_agents(&mut self) -> usize { self.remove_all_agents().len() } 92 | 93 | // === INFORMATION AND STATISTICS === 94 | 95 | /// Get the total number of agents 96 | #[wasm_bindgen(js_name = "agentCount")] 97 | pub fn wasm_agent_count(&self) -> usize { self.agent_count() } 98 | 99 | /// Get the number of agents that need processing 100 | #[wasm_bindgen(js_name = "agentsNeedingProcessing")] 101 | pub fn wasm_agents_needing_processing(&self) -> usize { self.agents_needing_processing() } 102 | 103 | /// Get agent state by name 104 | /// Returns "Running", "Paused", "Stopped", or "NotFound" 105 | #[wasm_bindgen(js_name = "agentState")] 106 | pub fn wasm_agent_state(&self, agent_name: &str) -> String { 107 | match self.agent_state_by_name(agent_name) { 108 | Some(AgentState::Running) => "Running".to_string(), 109 | Some(AgentState::Paused) => "Paused".to_string(), 110 | Some(AgentState::Stopped) => "Stopped".to_string(), 111 | None => "NotFound".to_string(), 112 | } 113 | } 114 | 115 | /// Look up agent ID by name (returns the raw u64 value) 116 | /// Returns the agent ID or 0 if not found 117 | #[wasm_bindgen(js_name = "agentIdByName")] 118 | pub fn wasm_agent_id_by_name(&self, name: &str) -> u64 { 119 | self.agent_id_by_name(name).map_or(0, |id| id.value()) 120 | } 121 | 122 | /// Get list of all agent names as JSON array 123 | #[wasm_bindgen(js_name = "agentNames")] 124 | pub fn wasm_agent_names(&self) -> String { 125 | let names: Vec<&String> = self.agent_names(); 126 | serde_json::to_string(&names).unwrap_or_else(|_| "[]".to_string()) 127 | } 128 | 129 | /// Get list of all agent IDs as JSON array 130 | #[wasm_bindgen(js_name = "agentIds")] 131 | pub fn wasm_agent_ids(&self) -> String { 132 | let ids: Vec<u64> = self.agent_ids().iter().map(super::super::agent::AgentId::value).collect(); 133 | serde_json::to_string(&ids).unwrap_or_else(|_| "[]".to_string()) 134 | } 135 | 136 | /// Get runtime statistics as JSON string 137 | #[wasm_bindgen(js_name = "statistics")] 138 | pub fn wasm_statistics(&self) -> String { 139 | serde_json::to_string(&self.statistics()).unwrap_or_else(|_| "{}".to_string()) 140 | } 141 | 142 | /// Get agents by state as JSON array of agent IDs 143 | #[wasm_bindgen(js_name = "agentsByState")] 144 | pub fn wasm_agents_by_state(&self, state_str: &str) -> String { 145 | let state = match state_str { 146 | "Running" => AgentState::Running, 147 | "Paused" => AgentState::Paused, 148 | "Stopped" => AgentState::Stopped, 149 | _ => return "[]".to_string(), 150 | }; 151 | 152 | let agent_ids: Vec<u64> = 153 | self.agents_by_state(state).iter().map(super::super::agent::AgentId::value).collect(); 154 | serde_json::to_string(&agent_ids).unwrap_or_else(|_| "[]".to_string()) 155 | } 156 | 157 | // === UTILITIES === 158 | 159 | /// Process all pending messages 160 | /// Returns the number of messages processed 161 | #[wasm_bindgen(js_name = "processAllPendingMessages")] 162 | pub fn wasm_process_all_pending_messages(&mut self) -> usize { 163 | self.process_all_pending_messages() 164 | } 165 | } 166 | -------------------------------------------------------------------------------- /arbiter-ethereum/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | authors = 
["Harness Labs"] 3 | description = "Allowing smart contract developers to do simulation driven development via an EVM emulator" 4 | edition = "2021" 5 | keywords = ["ethereum", "evm", "emulator", "testing", "smart-contracts"] 6 | license = "AGPL-3.0" 7 | name = "arbiter-ethereum" 8 | readme = "../README.md" 9 | version = "0.1.0" 10 | 11 | # Dependencies for the release build 12 | [dependencies] 13 | # Ethereum and EVM 14 | ethers.workspace = true 15 | revm.workspace = true 16 | revm-primitives.workspace = true 17 | 18 | 19 | # Serialization 20 | serde.workspace = true 21 | serde_json.workspace = true 22 | 23 | # Types 24 | bytes = { version = "^1.5.0" } 25 | hashbrown = "^0.14.5" 26 | hex = { version = "^0.4.3", default-features = false } 27 | uint = "^0.9.5" 28 | 29 | # Concurrency/async 30 | async-stream.workspace = true 31 | async-trait.workspace = true 32 | crossbeam-channel.workspace = true 33 | futures-locks = { version = "=0.7.1" } 34 | futures-timer = { version = ">=3.0.2, <4.0.0" } 35 | futures-util.workspace = true 36 | tokio.workspace = true 37 | 38 | # Randomness 39 | rand = { version = "=0.8.5" } 40 | 41 | # Errors 42 | thiserror.workspace = true 43 | 44 | # Logging 45 | tracing.workspace = true 46 | 47 | # File types 48 | polars = { version = "0.38.3", features = ["parquet", "csv", "json"] } 49 | 50 | # Dependencies for the test build and development 51 | [dev-dependencies] 52 | futures.workspace = true 53 | tracing-subscriber = "0.3.18" 54 | 55 | # For bench 56 | cargo_metadata = "0.18.1" 57 | chrono = "0.4.38" 58 | 59 | assert_matches = { version = "=1.5" } 60 | 61 | [[bench]] 62 | harness = false 63 | name = "bench" 64 | path = "benches/bench.rs" 65 | -------------------------------------------------------------------------------- /arbiter-ethereum/benches/bench.rs: -------------------------------------------------------------------------------- 1 | use std::{ 2 | collections::HashMap, 3 | convert::TryFrom, 4 | sync::Arc, 5 | time::{Duration, Instant}, 6 | }; 7 | 8 | use arbiter_bindings::bindings::{ 9 | arbiter_math::ArbiterMath, 10 | arbiter_token::{self, ArbiterToken}, 11 | }; 12 | use arbiter_core::{environment::Environment, middleware::ArbiterMiddleware}; 13 | use ethers::{ 14 | core::{k256::ecdsa::SigningKey, utils::Anvil}, 15 | middleware::SignerMiddleware, 16 | providers::{Http, Middleware, Provider}, 17 | signers::{LocalWallet, Signer, Wallet}, 18 | types::{Address, I256, U256}, 19 | utils::AnvilInstance, 20 | }; 21 | use polars::{ 22 | prelude::{DataFrame, NamedFrom}, 23 | series::Series, 24 | }; 25 | use tracing::info; 26 | 27 | const NUM_BENCH_ITERATIONS: usize = 100; 28 | const NUM_LOOP_STEPS: usize = 10; 29 | 30 | #[derive(Debug)] 31 | struct BenchDurations { 32 | deploy: Duration, 33 | lookup: Duration, 34 | stateless_call: Duration, 35 | stateful_call: Duration, 36 | } 37 | 38 | #[tokio::main] 39 | async fn main() { 40 | // Choose the benchmark group items by label. 41 | let group = ["anvil", "arbiter"]; 42 | let mut results: HashMap<&str, HashMap<&str, Duration>> = HashMap::new(); 43 | 44 | // Set up for showing percentage done. 45 | let ten_percent = NUM_BENCH_ITERATIONS / 10; 46 | 47 | for item in group { 48 | let mut item_results = HashMap::new(); 49 | // Count up total durations for each part of the benchmark. 
50 | let mut durations = Vec::with_capacity(NUM_BENCH_ITERATIONS); 51 | println!("Running {item} benchmark"); 52 | 53 | for index in 0..NUM_BENCH_ITERATIONS { 54 | durations.push(match item { 55 | label @ "anvil" => { 56 | let (client, _anvil_instance) = anvil_startup().await; 57 | let duration = bencher(client, label).await; 58 | drop(_anvil_instance); 59 | duration 60 | }, 61 | label @ "arbiter" => { 62 | let (_environment, client) = arbiter_startup(); 63 | bencher(client, label).await 64 | }, 65 | _ => panic!("Invalid argument"), 66 | }); 67 | if index % ten_percent == 0 { 68 | println!("{index} out of {NUM_BENCH_ITERATIONS} complete"); 69 | } 70 | } 71 | let sum_durations = durations.iter().fold( 72 | BenchDurations { 73 | deploy: Duration::default(), 74 | lookup: Duration::default(), 75 | stateless_call: Duration::default(), 76 | stateful_call: Duration::default(), 77 | }, 78 | |acc, duration| BenchDurations { 79 | deploy: acc.deploy + duration.deploy, 80 | lookup: acc.lookup + duration.lookup, 81 | stateless_call: acc.stateless_call + duration.stateless_call, 82 | stateful_call: acc.stateful_call + duration.stateful_call, 83 | }, 84 | ); 85 | 86 | let average_durations = BenchDurations { 87 | deploy: sum_durations.deploy / NUM_BENCH_ITERATIONS as u32, 88 | lookup: sum_durations.lookup / NUM_BENCH_ITERATIONS as u32, 89 | stateless_call: sum_durations.stateless_call / NUM_BENCH_ITERATIONS as u32, 90 | stateful_call: sum_durations.stateful_call / NUM_BENCH_ITERATIONS as u32, 91 | }; 92 | 93 | item_results.insert("Deploy", average_durations.deploy); 94 | item_results.insert("Lookup", average_durations.lookup); 95 | item_results.insert("Stateless Call", average_durations.stateless_call); 96 | item_results.insert("Stateful Call", average_durations.stateful_call); 97 | 98 | results.insert(item, item_results); 99 | } 100 | 101 | let df = create_dataframe(&results, &group); 102 | 103 | match get_version_of("arbiter-core") { 104 | Some(version) => println!("arbiter-core version: {}", version), 105 | None => println!("Could not find version for arbiter-core"), 106 | } 107 | 108 | match get_version_of("ethers") { 109 | Some(version) => println!("ethers-core anvil version: {}", version), 110 | None => println!("Could not find version for anvil"), 111 | } 112 | println!("Date: {}", chrono::Local::now().format("%Y-%m-%d")); 113 | println!("{}", df); 114 | } 115 | 116 | async fn bencher<M: Middleware + 'static>(client: Arc<M>, label: &str) -> BenchDurations { 117 | // Track the duration for each part of the benchmark. 118 | let mut total_deploy_duration = 0; 119 | let mut total_lookup_duration = 0; 120 | let mut total_stateless_call_duration = 0; 121 | let mut total_stateful_call_duration = 0; 122 | 123 | // Deploy `ArbiterMath` and `ArbiterToken` contracts and tally up how long this 124 | // takes. 125 | let (arbiter_math, arbiter_token, deploy_duration) = deployments(client.clone(), label).await; 126 | total_deploy_duration += deploy_duration.as_micros(); 127 | 128 | // Call `balance_of` `NUM_LOOP_STEPS` times on `ArbiterToken` and tally up how 129 | // long basic lookups take. 130 | let lookup_duration = lookup(arbiter_token.clone(), label).await; 131 | total_lookup_duration += lookup_duration.as_micros(); 132 | 133 | // Call `cdf` `NUM_LOOP_STEPS` times on `ArbiterMath` and tally up how long this 134 | // takes. 
135 | let stateless_call_duration = stateless_call_loop(arbiter_math, label).await; 136 | total_stateless_call_duration += stateless_call_duration.as_micros(); 137 | 138 | // Call `mint` `NUM_LOOP_STEPS` times on `ArbiterToken` and tally up how long 139 | // this takes. 140 | let stateful_call_duration = 141 | stateful_call_loop(arbiter_token, client.default_sender().unwrap(), label).await; 142 | total_stateful_call_duration += stateful_call_duration.as_micros(); 143 | 144 | BenchDurations { 145 | deploy: Duration::from_micros(total_deploy_duration as u64), 146 | lookup: Duration::from_micros(total_lookup_duration as u64), 147 | stateless_call: Duration::from_micros(total_stateless_call_duration as u64), 148 | stateful_call: Duration::from_micros(total_stateful_call_duration as u64), 149 | } 150 | } 151 | 152 | async fn anvil_startup( 153 | ) -> (Arc<SignerMiddleware<Provider<Http>, Wallet<SigningKey>>>, AnvilInstance) { 154 | // Create an Anvil instance 155 | // No blocktime mines a new block for each tx, which is fastest. 156 | let anvil = Anvil::new().spawn(); 157 | 158 | // Create a client 159 | let provider = Provider::<Http>::try_from(anvil.endpoint()).unwrap().interval(Duration::ZERO); 160 | 161 | let wallet: LocalWallet = anvil.keys()[0].clone().into(); 162 | let client = Arc::new(SignerMiddleware::new(provider, wallet.with_chain_id(anvil.chain_id()))); 163 | 164 | (client, anvil) 165 | } 166 | 167 | fn arbiter_startup() -> (Environment, Arc<ArbiterMiddleware>) { 168 | let environment = Environment::builder().build(); 169 | 170 | let client = ArbiterMiddleware::new(&environment, Some("name")).unwrap(); 171 | (environment, client) 172 | } 173 | 174 | async fn deployments<M: Middleware + 'static>( 175 | client: Arc<M>, 176 | label: &str, 177 | ) -> (ArbiterMath<M>, ArbiterToken<M>, Duration) { 178 | let start = Instant::now(); 179 | let arbiter_math = ArbiterMath::deploy(client.clone(), ()).unwrap().send().await.unwrap(); 180 | let arbiter_token = arbiter_token::ArbiterToken::deploy( 181 | client.clone(), 182 | ("Bench Token".to_string(), "BNCH".to_string(), 18_u8), 183 | ) 184 | .unwrap() 185 | .send() 186 | .await 187 | .unwrap(); 188 | let duration = start.elapsed(); 189 | info!("Time elapsed in {} deployment is: {:?}", label, duration); 190 | 191 | (arbiter_math, arbiter_token, duration) 192 | } 193 | 194 | async fn lookup<M: Middleware + 'static>(arbiter_token: ArbiterToken<M>, label: &str) -> Duration { 195 | let address = arbiter_token.client().default_sender().unwrap(); 196 | let start = Instant::now(); 197 | for _ in 0..NUM_LOOP_STEPS { 198 | arbiter_token.balance_of(address).call().await.unwrap(); 199 | } 200 | let duration = start.elapsed(); 201 | info!("Time elapsed in {} lookup loop is: {:?}", label, duration); 202 | 203 | duration 204 | } 205 | 206 | async fn stateless_call_loop<M: Middleware + 'static>( 207 | arbiter_math: ArbiterMath<M>, 208 | label: &str, 209 | ) -> Duration { 210 | let iwad = I256::from(10_u128.pow(18)); 211 | let start = Instant::now(); 212 | for _ in 0..NUM_LOOP_STEPS { 213 | arbiter_math.cdf(iwad).call().await.unwrap(); 214 | } 215 | let duration = start.elapsed(); 216 | info!("Time elapsed in {} cdf loop is: {:?}", label, duration); 217 | 218 | duration 219 | } 220 | 221 | async fn stateful_call_loop<M: Middleware + 'static>( 222 | arbiter_token: arbiter_token::ArbiterToken<M>, 223 | mint_address: Address, 224 | label: &str, 225 | ) -> Duration { 226 | let wad = U256::from(10_u128.pow(18)); 227 | let start = Instant::now(); 228 | for _ in 0..NUM_LOOP_STEPS { 229 | arbiter_token.mint(mint_address, wad).send().await.unwrap().await.unwrap(); 230 | } 231 | let duration = start.elapsed(); 232 | info!("Time elapsed in {} mint loop is: 
{:?}", label, duration); 233 | 234 | duration 235 | } 236 | 237 | fn create_dataframe(results: &HashMap<&str, HashMap<&str, Duration>>, group: &[&str]) -> DataFrame { 238 | let operations = ["Deploy", "Lookup", "Stateless Call", "Stateful Call"]; 239 | let mut df = DataFrame::new(vec![ 240 | Series::new("Operation", operations.to_vec()), 241 | Series::new( 242 | &format!("{} (μs)", group[0]), 243 | operations 244 | .iter() 245 | .map(|&op| results.get(group[0]).unwrap().get(op).unwrap().as_micros() as f64) 246 | .collect::>(), 247 | ), 248 | Series::new( 249 | &format!("{} (μs)", group[1]), 250 | operations 251 | .iter() 252 | .map(|&op| results.get(group[1]).unwrap().get(op).unwrap().as_micros() as f64) 253 | .collect::>(), 254 | ), 255 | ]) 256 | .unwrap(); 257 | 258 | let s0 = df.column(&format!("{} (μs)", group[0])).unwrap().to_owned(); 259 | let s1 = df.column(&format!("{} (μs)", group[1])).unwrap().to_owned(); 260 | let mut relative_difference = s0.divide(&s1).unwrap(); 261 | 262 | df.with_column::(relative_difference.rename("Relative Speedup").clone()).unwrap().clone() 263 | } 264 | 265 | fn get_version_of(crate_name: &str) -> Option { 266 | let metadata = cargo_metadata::MetadataCommand::new().exec().unwrap(); 267 | 268 | for package in metadata.packages { 269 | if package.name == crate_name { 270 | return Some(package.version.to_string()); 271 | } 272 | } 273 | 274 | None 275 | } 276 | -------------------------------------------------------------------------------- /arbiter-ethereum/contracts/ArbiterMath.sol: -------------------------------------------------------------------------------- 1 | pragma solidity ^0.8.17; 2 | import "solmate/utils/FixedPointMathLib.sol"; 3 | import "solstat/Gaussian.sol"; 4 | import "solstat/Invariant.sol"; 5 | 6 | contract ArbiterMath { 7 | using FixedPointMathLib for int256; 8 | using FixedPointMathLib for uint256; 9 | 10 | function cdf(int256 input) public pure returns (int256 output) { 11 | output = Gaussian.cdf(input); 12 | } 13 | 14 | function pdf(int256 input) public pure returns (int256 output) { 15 | output = Gaussian.pdf(input); 16 | } 17 | 18 | function ppf(int256 input) public pure returns (int256 output) { 19 | output = Gaussian.ppf(input); 20 | } 21 | 22 | function mulWadDown(uint256 x, uint256 y) public pure returns (uint256 z) { 23 | z = FixedPointMathLib.mulWadDown(x, y); 24 | } 25 | 26 | function mulWadUp(uint256 x, uint256 y) public pure returns (uint256 z) { 27 | z = FixedPointMathLib.mulWadUp(x, y); 28 | } 29 | 30 | function divWadDown(uint256 x, uint256 y) public pure returns (uint256 z) { 31 | z = FixedPointMathLib.divWadDown(x, y); 32 | } 33 | 34 | function divWadUp(uint256 x, uint256 y) public pure returns (uint256 z) { 35 | z = FixedPointMathLib.divWadUp(x, y); 36 | } 37 | 38 | function log(int256 x) public pure returns (int256 z) { 39 | z = FixedPointMathLib.lnWad(x); 40 | } 41 | 42 | function sqrt(uint256 x) public pure returns (uint256 z) { 43 | z = FixedPointMathLib.sqrt(x); 44 | } 45 | 46 | function invariant(uint256 R_y, uint256 R_x, uint256 stk, uint256 vol, uint256 tau) public pure returns (int256 k) { 47 | k = Invariant.invariant(R_y, R_x, stk, vol, tau); 48 | } 49 | } 50 | -------------------------------------------------------------------------------- /arbiter-ethereum/contracts/ArbiterToken.sol: -------------------------------------------------------------------------------- 1 | pragma solidity ^0.8.10; 2 | import "solmate/tokens/ERC20.sol"; 3 | 4 | contract ArbiterToken is ERC20 { 5 | address public admin; 6 | 7 | 
constructor(string memory name, string memory symbol, uint8 decimals) 8 | ERC20(name, symbol, decimals) { 9 | admin = msg.sender; // Set the contract deployer as the initial admin 10 | } 11 | 12 | // Our admin lock 13 | modifier onlyAdmin() { 14 | require(msg.sender == admin, "Only admin can call this function"); 15 | _; 16 | } 17 | 18 | function mint(address receiver, uint256 amount) public onlyAdmin returns (bool) { 19 | _mint(receiver, amount); 20 | return true; 21 | } 22 | 23 | } 24 | -------------------------------------------------------------------------------- /arbiter-ethereum/contracts/Counter.sol: -------------------------------------------------------------------------------- 1 | // SPDX-License-Identifier: UNLICENSED 2 | pragma solidity ^0.8.13; 3 | import "forge-std/console2.sol"; 4 | 5 | contract Counter { 6 | uint256 public number; 7 | 8 | function setNumber(uint256 newNumber) public { 9 | number = newNumber; 10 | console2.log("You set the number to: ", newNumber); 11 | } 12 | 13 | function increment() public { 14 | number++; 15 | } 16 | } 17 | -------------------------------------------------------------------------------- /arbiter-ethereum/contracts/LiquidExchange.sol: -------------------------------------------------------------------------------- 1 | // SPDX-License-Identifier: MIT 2 | // compiler version must be greater than or equal to 0.8.17 and less than 0.9.0 3 | pragma solidity ^0.8.17; 4 | // import "solmate/utils/FixedPointMathLib.sol"; // This import is correct given Arbiter's foundry.toml 5 | // import "solmate/utils/FixedPointMathLib.sol"; // This import goes directly to the contract 6 | // import "solmate/tokens/ERC20.sol"; 7 | import "solmate/tokens/ERC20.sol"; 8 | import "solmate/utils/FixedPointMathLib.sol"; 9 | import "./ArbiterToken.sol"; 10 | 11 | /** 12 | * @dev Implementation of the test interface for Arbiter writing contracts. 
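 *
 * Illustrative pricing example (assuming a hypothetical price of 2e18, i.e. 2 Y per 1 X):
 * `price` is a WAD-scaled (1e18) amount of Y quoted per unit of X, so
 *   - swapping in 1e18 of X pays out mulWadDown(1e18, 2e18) = 2e18 of Y;
 *   - swapping in 1e18 of Y pays out divWadDown(1e18, 2e18) = 5e17 of X.
 * Both legs round down, so tiny remainders are truncated.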
13 | */ 14 | contract LiquidExchange { 15 | using FixedPointMathLib for int256; 16 | using FixedPointMathLib for uint256; 17 | address public admin; 18 | address public arbiterTokenX; 19 | address public arbiterTokenY; 20 | uint256 public price; 21 | uint256 constant WAD = 10**18; 22 | 23 | // Each LiquidExchange contract will be deployed with a pair of token addresses and an initial price 24 | constructor(address arbiterTokenX_, address arbiterTokenY_, uint256 price_) { 25 | admin = msg.sender; // Set the contract deployer as the initial admin 26 | arbiterTokenX = arbiterTokenX_; 27 | arbiterTokenY = arbiterTokenY_; 28 | price = price_; 29 | } 30 | 31 | // Our admin lock 32 | modifier onlyAdmin() { 33 | require(msg.sender == admin, "Only admin can call this function"); 34 | _; 35 | } 36 | 37 | event PriceChange(uint256 price); 38 | event Swap(address tokenIn, address tokenOut, uint256 amountIn, uint256 amountOut, address to); 39 | 40 | // Admin only function to set the price of x in terms of y 41 | function setPrice(uint256 _price) public onlyAdmin { 42 | price = _price; 43 | emit PriceChange(price); 44 | } 45 | 46 | function swap(address tokenIn, uint256 amountIn) public{ 47 | 48 | uint256 amountOut; 49 | address tokenOut; 50 | if (tokenIn == arbiterTokenX) { 51 | tokenOut = arbiterTokenY; 52 | amountOut = FixedPointMathLib.mulWadDown(amountIn, price); 53 | } else if (tokenIn == arbiterTokenY) { 54 | tokenOut = arbiterTokenX; 55 | amountOut = FixedPointMathLib.divWadDown(amountIn, price); 56 | } else { 57 | revert("Invalid token"); 58 | } 59 | require(ERC20(tokenIn).transferFrom(msg.sender, address(this), amountIn), "Transfer failed"); 60 | require(ERC20(tokenOut).transfer(msg.sender, amountOut), "Transfer failed"); 61 | emit Swap(tokenIn, tokenOut, amountIn, amountOut, msg.sender); 62 | } 63 | } 64 | -------------------------------------------------------------------------------- /arbiter-ethereum/contracts/WETH.sol: -------------------------------------------------------------------------------- 1 | // SPDX-License-Identifier: AGPL-3.0-only 2 | pragma solidity >=0.8.0; 3 | 4 | import {ERC20} from "solmate/tokens/ERC20.sol"; 5 | import {SafeTransferLib} from "solmate/utils/SafeTransferLib.sol"; 6 | 7 | /// @notice Minimalist and modern Wrapped Ether implementation. 8 | /// @author Solmate (https://github.com/transmissions11/solmate/blob/main/src/tokens/WETH.sol) 9 | /// @author Inspired by WETH9 (https://github.com/dapphub/ds-weth/blob/master/src/weth9.sol) 10 | contract WETH is ERC20("Wrapped Ether", "WETH", 18) { 11 | using SafeTransferLib for address; 12 | 13 | event Deposit(address indexed from, uint256 amount); 14 | 15 | event Withdrawal(address indexed to, uint256 amount); 16 | 17 | function deposit() public payable virtual { 18 | _mint(msg.sender, msg.value); 19 | 20 | emit Deposit(msg.sender, msg.value); 21 | } 22 | 23 | function withdraw(uint256 amount) public virtual { 24 | _burn(msg.sender, amount); 25 | 26 | emit Withdrawal(msg.sender, amount); 27 | 28 | msg.sender.safeTransferETH(amount); 29 | } 30 | 31 | receive() external payable virtual { 32 | deposit(); 33 | } 34 | } -------------------------------------------------------------------------------- /arbiter-ethereum/src/bindings/invariant.rs: -------------------------------------------------------------------------------- 1 | pub use invariant::*; 2 | /// This module was auto-generated with ethers-rs Abigen. 
3 | /// More information at: 4 | #[allow( 5 | clippy::enum_variant_names, 6 | clippy::too_many_arguments, 7 | clippy::upper_case_acronyms, 8 | clippy::type_complexity, 9 | dead_code, 10 | non_camel_case_types, 11 | )] 12 | pub mod invariant { 13 | #[allow(deprecated)] 14 | fn __abi() -> ::ethers::core::abi::Abi { 15 | ::ethers::core::abi::ethabi::Contract { 16 | constructor: ::core::option::Option::None, 17 | functions: ::std::collections::BTreeMap::new(), 18 | events: ::std::collections::BTreeMap::new(), 19 | errors: ::core::convert::From::from([ 20 | ( 21 | ::std::borrow::ToOwned::to_owned("OOB"), 22 | ::std::vec![ 23 | ::ethers::core::abi::ethabi::AbiError { 24 | name: ::std::borrow::ToOwned::to_owned("OOB"), 25 | inputs: ::std::vec![], 26 | }, 27 | ], 28 | ), 29 | ]), 30 | receive: false, 31 | fallback: false, 32 | } 33 | } 34 | ///The parsed JSON ABI of the contract. 35 | pub static INVARIANT_ABI: ::ethers::contract::Lazy<::ethers::core::abi::Abi> = ::ethers::contract::Lazy::new( 36 | __abi, 37 | ); 38 | #[rustfmt::skip] 39 | const __BYTECODE: &[u8] = b"`|`7`\x0B\x82\x82\x829\x80Q`\0\x1A`s\x14`*WcNH{q`\xE0\x1B`\0R`\0`\x04R`$`\0\xFD[0`\0R`s\x81S\x82\x81\xF3\xFEs\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0`\x80`@\x81\x90RbF\x1B\xCD`\xE5\x1B\x81R` `\x84\x90\x81R`5`\xA4R\x7FContract does not have fallback `\xC4\x90\x81Rtnor receive functions`X\x1B`\xE4R0\x93\x90\x93\x14\x92\x90\x82\xFD"; 40 | /// The bytecode of the contract. 41 | pub static INVARIANT_BYTECODE: ::ethers::core::types::Bytes = ::ethers::core::types::Bytes::from_static( 42 | __BYTECODE, 43 | ); 44 | #[rustfmt::skip] 45 | const __DEPLOYED_BYTECODE: &[u8] = b"s\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0`\x80`@\x81\x90RbF\x1B\xCD`\xE5\x1B\x81R` `\x84\x90\x81R`5`\xA4R\x7FContract does not have fallback `\xC4\x90\x81Rtnor receive functions`X\x1B`\xE4R0\x93\x90\x93\x14\x92\x90\x82\xFD"; 46 | /// The deployed bytecode of the contract. 47 | pub static INVARIANT_DEPLOYED_BYTECODE: ::ethers::core::types::Bytes = ::ethers::core::types::Bytes::from_static( 48 | __DEPLOYED_BYTECODE, 49 | ); 50 | pub struct Invariant(::ethers::contract::Contract); 51 | impl ::core::clone::Clone for Invariant { 52 | fn clone(&self) -> Self { 53 | Self(::core::clone::Clone::clone(&self.0)) 54 | } 55 | } 56 | impl ::core::ops::Deref for Invariant { 57 | type Target = ::ethers::contract::Contract; 58 | fn deref(&self) -> &Self::Target { 59 | &self.0 60 | } 61 | } 62 | impl ::core::ops::DerefMut for Invariant { 63 | fn deref_mut(&mut self) -> &mut Self::Target { 64 | &mut self.0 65 | } 66 | } 67 | impl ::core::fmt::Debug for Invariant { 68 | fn fmt(&self, f: &mut ::core::fmt::Formatter<'_>) -> ::core::fmt::Result { 69 | f.debug_tuple(::core::stringify!(Invariant)).field(&self.address()).finish() 70 | } 71 | } 72 | impl Invariant { 73 | /// Creates a new contract instance with the specified `ethers` client at 74 | /// `address`. The contract derefs to a `ethers::Contract` object. 75 | pub fn new>( 76 | address: T, 77 | client: ::std::sync::Arc, 78 | ) -> Self { 79 | Self( 80 | ::ethers::contract::Contract::new( 81 | address.into(), 82 | INVARIANT_ABI.clone(), 83 | client, 84 | ), 85 | ) 86 | } 87 | /// Constructs the general purpose `Deployer` instance based on the provided constructor arguments and sends it. 88 | /// Returns a new instance of a deployer that returns an instance of this contract after sending the transaction 89 | /// 90 | /// Notes: 91 | /// - If there are no constructor arguments, you should pass `()` as the argument. 
92 | /// - The default poll duration is 7 seconds. 93 | /// - The default number of confirmations is 1 block. 94 | /// 95 | /// 96 | /// # Example 97 | /// 98 | /// Generate contract bindings with `abigen!` and deploy a new contract instance. 99 | /// 100 | /// *Note*: this requires a `bytecode` and `abi` object in the `greeter.json` artifact. 101 | /// 102 | /// ```ignore 103 | /// # async fn deploy(client: ::std::sync::Arc) { 104 | /// abigen!(Greeter, "../greeter.json"); 105 | /// 106 | /// let greeter_contract = Greeter::deploy(client, "Hello world!".to_string()).unwrap().send().await.unwrap(); 107 | /// let msg = greeter_contract.greet().call().await.unwrap(); 108 | /// # } 109 | /// ``` 110 | pub fn deploy( 111 | client: ::std::sync::Arc, 112 | constructor_args: T, 113 | ) -> ::core::result::Result< 114 | ::ethers::contract::builders::ContractDeployer, 115 | ::ethers::contract::ContractError, 116 | > { 117 | let factory = ::ethers::contract::ContractFactory::new( 118 | INVARIANT_ABI.clone(), 119 | INVARIANT_BYTECODE.clone().into(), 120 | client, 121 | ); 122 | let deployer = factory.deploy(constructor_args)?; 123 | let deployer = ::ethers::contract::ContractDeployer::new(deployer); 124 | Ok(deployer) 125 | } 126 | } 127 | impl From<::ethers::contract::Contract> 128 | for Invariant { 129 | fn from(contract: ::ethers::contract::Contract) -> Self { 130 | Self::new(contract.address(), contract.client()) 131 | } 132 | } 133 | ///Custom Error type `OOB` with signature `OOB()` and selector `0xaaf3956f` 134 | #[derive( 135 | Clone, 136 | ::ethers::contract::EthError, 137 | ::ethers::contract::EthDisplay, 138 | serde::Serialize, 139 | serde::Deserialize, 140 | Default, 141 | Debug, 142 | PartialEq, 143 | Eq, 144 | Hash 145 | )] 146 | #[etherror(name = "OOB", abi = "OOB()")] 147 | pub struct OOB; 148 | } 149 | -------------------------------------------------------------------------------- /arbiter-ethereum/src/bindings/mod.rs: -------------------------------------------------------------------------------- 1 | #![allow(clippy::all)] 2 | //! This module contains abigen! generated bindings for solidity contracts. 3 | //! This is autogenerated code. 4 | //! Do not manually edit these files. 5 | //! These files may be overwritten by the codegen system at any time. 6 | pub mod arbiter_math; 7 | pub mod arbiter_token; 8 | pub mod counter; 9 | pub mod liquid_exchange; 10 | pub mod weth; 11 | -------------------------------------------------------------------------------- /arbiter-ethereum/src/bindings/units.rs: -------------------------------------------------------------------------------- 1 | pub use units::*; 2 | /// This module was auto-generated with ethers-rs Abigen. 3 | /// More information at: 4 | #[allow( 5 | clippy::enum_variant_names, 6 | clippy::too_many_arguments, 7 | clippy::upper_case_acronyms, 8 | clippy::type_complexity, 9 | dead_code, 10 | non_camel_case_types, 11 | )] 12 | pub mod units { 13 | #[allow(deprecated)] 14 | fn __abi() -> ::ethers::core::abi::Abi { 15 | ::ethers::core::abi::ethabi::Contract { 16 | constructor: ::core::option::Option::None, 17 | functions: ::std::collections::BTreeMap::new(), 18 | events: ::std::collections::BTreeMap::new(), 19 | errors: ::std::collections::BTreeMap::new(), 20 | receive: false, 21 | fallback: false, 22 | } 23 | } 24 | ///The parsed JSON ABI of the contract. 
25 | pub static UNITS_ABI: ::ethers::contract::Lazy<::ethers::core::abi::Abi> = ::ethers::contract::Lazy::new( 26 | __abi, 27 | ); 28 | pub struct Units(::ethers::contract::Contract); 29 | impl ::core::clone::Clone for Units { 30 | fn clone(&self) -> Self { 31 | Self(::core::clone::Clone::clone(&self.0)) 32 | } 33 | } 34 | impl ::core::ops::Deref for Units { 35 | type Target = ::ethers::contract::Contract; 36 | fn deref(&self) -> &Self::Target { 37 | &self.0 38 | } 39 | } 40 | impl ::core::ops::DerefMut for Units { 41 | fn deref_mut(&mut self) -> &mut Self::Target { 42 | &mut self.0 43 | } 44 | } 45 | impl ::core::fmt::Debug for Units { 46 | fn fmt(&self, f: &mut ::core::fmt::Formatter<'_>) -> ::core::fmt::Result { 47 | f.debug_tuple(::core::stringify!(Units)).field(&self.address()).finish() 48 | } 49 | } 50 | impl Units { 51 | /// Creates a new contract instance with the specified `ethers` client at 52 | /// `address`. The contract derefs to a `ethers::Contract` object. 53 | pub fn new>( 54 | address: T, 55 | client: ::std::sync::Arc, 56 | ) -> Self { 57 | Self( 58 | ::ethers::contract::Contract::new( 59 | address.into(), 60 | UNITS_ABI.clone(), 61 | client, 62 | ), 63 | ) 64 | } 65 | } 66 | impl From<::ethers::contract::Contract> 67 | for Units { 68 | fn from(contract: ::ethers::contract::Contract) -> Self { 69 | Self::new(contract.address(), contract.client()) 70 | } 71 | } 72 | } 73 | -------------------------------------------------------------------------------- /arbiter-ethereum/src/console/mod.rs: -------------------------------------------------------------------------------- 1 | //! This module contains the backend for the `console2.log` Solidity function so 2 | //! that these logs can be read in Arbiter. 3 | 4 | use revm_primitives::address; 5 | 6 | use super::*; 7 | 8 | const CONSOLE_ADDRESS: Address = address!("000000000000000000636F6e736F6c652e6c6f67"); 9 | 10 | #[allow(clippy::all)] 11 | #[rustfmt::skip] 12 | #[allow(missing_docs)] 13 | pub(crate) mod abi; 14 | 15 | /// An inspector that collects `console2.log`s during execution. 16 | #[derive(Debug, Clone, Default)] 17 | pub struct ConsoleLogs(pub Vec); 18 | 19 | impl Inspector for ConsoleLogs { 20 | #[inline] 21 | fn call(&mut self, _context: &mut EvmContext, call: &mut CallInputs) -> Option { 22 | if call.contract == CONSOLE_ADDRESS { 23 | self.0.push(call.input.clone()); 24 | } 25 | None 26 | } 27 | } 28 | -------------------------------------------------------------------------------- /arbiter-ethereum/src/coprocessor.rs: -------------------------------------------------------------------------------- 1 | //! The [`Coprocessor`] is used to process calls and can access read-only from 2 | //! the [`Environment`]'s database while staying up to date with the 3 | //! latest state of the [`Environment`]'s database. 4 | 5 | use std::convert::Infallible; 6 | 7 | use revm_primitives::{EVMError, ResultAndState}; 8 | 9 | use super::*; 10 | use crate::environment::Environment; 11 | 12 | /// A [`Coprocessor`] is used to process calls and can access read-only from the 13 | /// [`Environment`]'s database. This can eventually be used for things like 14 | /// parallelized compute for agents that are not currently sending transactions 15 | /// that need to be processed by the [`Environment`], but are instead using the 16 | /// current state to make decisions. 17 | pub struct Coprocessor<'a> { 18 | evm: Evm<'a, (), ArbiterDB>, 19 | } 20 | 21 | impl<'a> Coprocessor<'a> { 22 | /// Create a new `Coprocessor` with the given `Environment`. 
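    ///
    /// # Example
    ///
    /// A minimal sketch (crate paths assumed from this repository's layout):
    /// the coprocessor clones the environment's [`ArbiterDB`], so it reads the
    /// same state that the environment commits to.
    ///
    /// ```ignore
    /// use arbiter_core::{coprocessor::Coprocessor, environment::Environment};
    ///
    /// let environment = Environment::builder().build();
    /// let mut coprocessor = Coprocessor::new(&environment);
    /// // Runs the currently configured transaction environment read-only
    /// // against the shared database (see `transact` below).
    /// let _outcome = coprocessor.transact();
    /// ```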
23 | pub fn new(environment: &Environment) -> Self { 24 | let db = environment.db.clone(); 25 | let evm = Evm::builder().with_db(db).build(); 26 | Self { evm } 27 | } 28 | 29 | // TODO: Should probably take in a TxEnv or something. 30 | /// Used as an entrypoint to process a call with the `Coprocessor`. 31 | pub fn transact(&mut self) -> Result> { self.evm.transact() } 32 | } 33 | 34 | #[cfg(test)] 35 | mod tests { 36 | use revm_primitives::{InvalidTransaction, U256}; 37 | 38 | use super::*; 39 | 40 | #[test] 41 | fn coprocessor() { 42 | let environment = Environment::builder().build(); 43 | let mut coprocessor = Coprocessor::new(&environment); 44 | coprocessor.evm.tx_mut().value = U256::from(100); 45 | let outcome = coprocessor.transact(); 46 | if let Err(EVMError::Transaction(InvalidTransaction::LackOfFundForMaxFee { fee, balance })) = 47 | outcome 48 | { 49 | assert_eq!(*fee, U256::from(100)); 50 | assert_eq!(*balance, U256::from(0)); 51 | } 52 | } 53 | } 54 | -------------------------------------------------------------------------------- /arbiter-ethereum/src/database/fork.rs: -------------------------------------------------------------------------------- 1 | //! This module contains the [`Fork`] struct which is used to store the data 2 | //! that will be loaded into an [`Environment`] and be used in `arbiter-core`. 3 | //! [`Fork`] contains a [`CacheDB`] and [`ContractMetadata`] so 4 | //! that the [`Environment`] can be initialized with a forked database and the 5 | //! end-user still has access to the relevant metadata. 6 | 7 | use std::{env, fs}; 8 | 9 | use super::*; 10 | 11 | /// A [`ContractMetadata`] is used to store the metadata of a contract that will 12 | /// be loaded into a [`Fork`]. 13 | #[derive(Clone, Debug, Deserialize, Serialize)] 14 | pub struct ContractMetadata { 15 | /// The address of the contract. 16 | pub address: eAddress, 17 | 18 | /// The path to the contract artifacts. 19 | pub artifacts_path: String, 20 | 21 | /// The mappings that are part of the contract's storage. 22 | pub mappings: HashMap>, 23 | } 24 | 25 | /// A [`Fork`] is used to store the data that will be loaded into an 26 | /// [`Environment`] and be used in `arbiter-core`. It is a wrapper around a 27 | /// [`CacheDB`] and a [`HashMap`] of [`ContractMetadata`] so that the 28 | /// [`environment::Environment`] can be initialized with the data and the 29 | /// end-user still has access to the relevant metadata. 30 | #[derive(Clone, Debug)] 31 | pub struct Fork { 32 | /// The [`CacheDB`] that will be loaded into the [`Environment`]. 33 | pub db: CacheDB, 34 | 35 | /// The [`HashMap`] of [`ContractMetadata`] that will be used by the 36 | /// end-user. 37 | pub contracts_meta: HashMap, 38 | /// The [`HashMap`] of [`Address`] that will be used by the end-user. 39 | pub eoa: HashMap, 40 | } 41 | 42 | impl Fork { 43 | /// Creates a new [`Fork`] from serialized [`DiskData`] stored on disk. 
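    ///
    /// # Example
    ///
    /// A sketch of loading a previously serialized fork; the path mirrors the
    /// `fork.json` fixture shipped with this crate's tests, and the field uses
    /// are illustrative.
    ///
    /// ```ignore
    /// use arbiter_core::database::fork::Fork;
    ///
    /// // Reads and deserializes the on-disk `DiskData`, relative to the cwd.
    /// let fork = Fork::from_disk("tests/fork.json")?;
    /// // The raw database can seed an `Environment`, while the metadata maps
    /// // remain available to the end user.
    /// let _db = fork.db.clone();
    /// for (name, meta) in &fork.contracts_meta {
    ///     println!("{:?} is deployed at {:?}", name, meta.address);
    /// }
    /// ```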
44 | pub fn from_disk(path: &str) -> Result { 45 | // Read the file 46 | let mut cwd = env::current_dir().unwrap(); 47 | cwd.push(path); 48 | print!("Reading db from: {:?}", cwd); 49 | let data = fs::read_to_string(cwd).unwrap(); 50 | 51 | // Deserialize the JSON data to your OutputData type 52 | let disk_data: DiskData = serde_json::from_str(&data).unwrap(); 53 | 54 | // Create a CacheDB instance 55 | let mut db = CacheDB::new(EmptyDB::default()); 56 | 57 | // Populate the CacheDB from the OutputData 58 | for (address, (info, storage_map)) in disk_data.raw { 59 | // Convert the string address back to its original type 60 | let address = address.as_fixed_bytes().into(); // You'd need to define this 61 | 62 | // Insert account info into the DB 63 | db.insert_account_info(address, info); 64 | 65 | // Insert storage data into the DB 66 | for (key_str, value_str) in storage_map { 67 | let key = U256::from_str_radix(&key_str, 10).unwrap(); 68 | let value = U256::from_str_radix(&value_str, 10).unwrap(); 69 | 70 | db.insert_account_storage(address, key, value).unwrap(); 71 | } 72 | } 73 | 74 | Ok(Self { db, contracts_meta: disk_data.meta, eoa: disk_data.externally_owned_accounts }) 75 | } 76 | } 77 | 78 | impl From for CacheDB { 79 | fn from(val: Fork) -> Self { val.db } 80 | } 81 | 82 | type Storage = HashMap; 83 | 84 | /// This is the data that will be written to and loaded from disk to generate a 85 | /// [`Fork`]. 86 | #[derive(Debug, Serialize, Deserialize)] 87 | pub struct DiskData { 88 | /// This is the metadata for the contracts that will be loaded into the 89 | /// [`Fork`]. 90 | pub meta: HashMap, 91 | 92 | /// This is the raw data that will be loaded into the [`Fork`]. 93 | pub raw: HashMap, 94 | 95 | /// This is the eoa data that will be loaded into the [`Fork`]. 96 | pub externally_owned_accounts: HashMap, 97 | } 98 | -------------------------------------------------------------------------------- /arbiter-ethereum/src/database/inspector.rs: -------------------------------------------------------------------------------- 1 | //! This module contains an extensible [`Inspector`] called 2 | //! [`ArbiterInspector`]. It is currently configurable in order to allow 3 | //! for users to set configuration to see logs generated in Solidity contracts 4 | //! and or enforce gas payment. 5 | 6 | use revm::{ 7 | inspectors::GasInspector, 8 | interpreter::{CreateInputs, CreateOutcome, Interpreter}, 9 | }; 10 | 11 | use super::*; 12 | use crate::console::ConsoleLogs; 13 | 14 | /// An configurable [`Inspector`] that collects information about the 15 | /// execution of the [`Interpreter`]. Depending on whether which or both 16 | /// features are enabled, it collects information about the gas used by each 17 | /// opcode and the `console2.log`s emitted during execution. It ensures gas 18 | /// payments are made when `gas` is enabled. 19 | #[derive(Debug, Clone)] 20 | pub struct ArbiterInspector { 21 | /// Whether to collect `console2.log`s. 22 | pub console_log: Option, 23 | 24 | /// Whether to collect gas usage information. 25 | pub gas: Option, 26 | } 27 | 28 | impl ArbiterInspector { 29 | /// Create a new [`ArbiterInspector`] with the given configuration. 
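    ///
    /// # Example
    ///
    /// A small illustration of the two flags (values chosen arbitrarily):
    ///
    /// ```ignore
    /// // Collect `console2.log` output, but skip gas accounting.
    /// let inspector = ArbiterInspector::new(true, false);
    /// assert!(inspector.console_log.is_some());
    /// assert!(inspector.gas.is_none());
    /// ```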
30 | pub fn new(console_log: bool, gas: bool) -> Self { 31 | let console_log = if console_log { Some(ConsoleLogs::default()) } else { None }; 32 | let gas = if gas { Some(GasInspector::default()) } else { None }; 33 | Self { console_log, gas } 34 | } 35 | } 36 | 37 | impl Inspector for ArbiterInspector { 38 | #[inline] 39 | fn initialize_interp(&mut self, interp: &mut Interpreter, context: &mut EvmContext) { 40 | if let Some(gas) = &mut self.gas { 41 | gas.initialize_interp(interp, context); 42 | } 43 | } 44 | 45 | #[inline] 46 | fn step_end(&mut self, interp: &mut Interpreter, context: &mut EvmContext) { 47 | if let Some(gas) = &mut self.gas { 48 | gas.step_end(interp, context); 49 | } 50 | } 51 | 52 | #[inline] 53 | fn call( 54 | &mut self, 55 | context: &mut EvmContext, 56 | inputs: &mut CallInputs, 57 | ) -> Option { 58 | if let Some(console_log) = &mut self.console_log { 59 | console_log.call(context, inputs) 60 | } else { 61 | None 62 | } 63 | } 64 | 65 | #[inline] 66 | fn call_end( 67 | &mut self, 68 | context: &mut EvmContext, 69 | inputs: &CallInputs, 70 | outcome: CallOutcome, 71 | ) -> CallOutcome { 72 | if let Some(gas) = &mut self.gas { 73 | gas.call_end(context, inputs, outcome) 74 | } else { 75 | outcome 76 | } 77 | } 78 | 79 | #[inline] 80 | fn create_end( 81 | &mut self, 82 | _context: &mut EvmContext, 83 | _inputs: &CreateInputs, 84 | outcome: CreateOutcome, 85 | ) -> CreateOutcome { 86 | outcome 87 | } 88 | } 89 | -------------------------------------------------------------------------------- /arbiter-ethereum/src/database/mod.rs: -------------------------------------------------------------------------------- 1 | //! The [`ArbiterDB`] is a wrapper around a `CacheDB` that is used to provide 2 | //! access to the `Environment`'s database to multiple `Coprocessors`. 3 | //! It is also used to be able to write out the `Environment` database to a 4 | //! file. 5 | //! 6 | //! Further, it gives the ability to be generated from a [`fork::Fork`] so that 7 | //! you can preload an [`environment::Environment`] with a specific state. 8 | 9 | use std::{ 10 | fs, 11 | io::{self, Read, Write}, 12 | }; 13 | 14 | use revm::{ 15 | primitives::{db::DatabaseRef, keccak256, Bytecode, B256}, 16 | DatabaseCommit, 17 | }; 18 | use serde_json; 19 | 20 | use super::*; 21 | pub mod fork; 22 | pub mod inspector; 23 | 24 | /// A [`ArbiterDB`] is contains both a [`CacheDB`] that is used to provide 25 | /// state for the [`environment::Environment`]'s as well as for multiple 26 | /// [`coprocessor::Coprocessor`]s. 27 | /// The `logs` field is a [`HashMap`] to store [`ethers::types::Log`]s that can 28 | /// be queried from at any point. 29 | #[derive(Debug, Serialize, Deserialize)] 30 | pub struct ArbiterDB { 31 | /// The state of the `ArbiterDB`. This is a `CacheDB` that is used to 32 | /// provide a db for the `Environment` to use. 33 | pub state: Arc>>, 34 | 35 | /// The logs of the `ArbiterDB`. This is a `HashMap` that is used to store 36 | /// logs that can be queried from at any point. 37 | pub logs: Arc>>>, 38 | } 39 | 40 | // Implement `Clone` by hand so we utilize the `Arc`'s `Clone` implementation. 41 | impl Clone for ArbiterDB { 42 | fn clone(&self) -> Self { Self { state: self.state.clone(), logs: self.logs.clone() } } 43 | } 44 | 45 | impl ArbiterDB { 46 | /// Create a new `ArbiterDB`. 
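    ///
    /// # Example
    ///
    /// A minimal sketch of the disk round trip implemented below (the file
    /// name is illustrative):
    ///
    /// ```ignore
    /// let db = ArbiterDB::new();
    /// db.write_to_file("state.json")?;
    /// let restored = ArbiterDB::read_from_file("state.json")?;
    /// ```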
47 | pub fn new() -> Self { 48 | Self { 49 | state: Arc::new(RwLock::new(CacheDB::new(EmptyDB::new()))), 50 | logs: Arc::new(RwLock::new(HashMap::new())), 51 | } 52 | } 53 | 54 | /// Write the `ArbiterDB` to a file at the given path.`` 55 | pub fn write_to_file(&self, path: &str) -> io::Result<()> { 56 | // Serialize the ArbiterDB 57 | let serialized = serde_json::to_string(self)?; 58 | // Write to file 59 | let mut file = fs::File::create(path)?; 60 | file.write_all(serialized.as_bytes())?; 61 | Ok(()) 62 | } 63 | 64 | /// Read the `ArbiterDB` from a file at the given path. 65 | pub fn read_from_file(path: &str) -> io::Result { 66 | // Read the file content 67 | let mut file = fs::File::open(path)?; 68 | let mut contents = String::new(); 69 | file.read_to_string(&mut contents)?; 70 | 71 | // Deserialize the content into ArbiterDB 72 | #[derive(Deserialize)] 73 | struct TempDB { 74 | state: Option>, 75 | logs: Option>>, 76 | } 77 | let temp_db: TempDB = serde_json::from_str(&contents)?; 78 | Ok(Self { 79 | state: Arc::new(RwLock::new(temp_db.state.unwrap_or_default())), 80 | logs: Arc::new(RwLock::new(temp_db.logs.unwrap_or_default())), 81 | }) 82 | } 83 | } 84 | 85 | impl Default for ArbiterDB { 86 | fn default() -> Self { Self::new() } 87 | } 88 | 89 | // TODO: This is a BAD implementation of PartialEq, but it works for now as we 90 | // do not ever really need to compare DBs directly at the moment. 91 | // This is only used in the `Outcome` enum for `instruction.rs`. 92 | impl PartialEq for ArbiterDB { 93 | fn eq(&self, _other: &Self) -> bool { true } 94 | } 95 | 96 | impl Database for ArbiterDB { 97 | type Error = Infallible; 98 | 99 | // TODO: Not sure we want this, but it works for now. 100 | 101 | fn basic( 102 | &mut self, 103 | address: revm::primitives::Address, 104 | ) -> Result, Self::Error> { 105 | self.state.write().unwrap().basic(address) 106 | } 107 | 108 | fn code_by_hash(&mut self, code_hash: B256) -> Result { 109 | self.state.write().unwrap().code_by_hash(code_hash) 110 | } 111 | 112 | fn storage( 113 | &mut self, 114 | address: revm::primitives::Address, 115 | index: U256, 116 | ) -> Result { 117 | self.state.write().unwrap().storage(address, index) 118 | } 119 | 120 | fn block_hash(&mut self, number: U256) -> Result { 121 | self.state.write().unwrap().block_hash(number) 122 | } 123 | } 124 | 125 | impl DatabaseRef for ArbiterDB { 126 | type Error = Infallible; 127 | 128 | // TODO: Not sure we want this, but it works for now. 129 | 130 | fn basic_ref( 131 | &self, 132 | address: revm::primitives::Address, 133 | ) -> Result, Self::Error> { 134 | self.state.read().unwrap().basic_ref(address) 135 | } 136 | 137 | fn code_by_hash_ref(&self, code_hash: B256) -> Result { 138 | self.state.read().unwrap().code_by_hash_ref(code_hash) 139 | } 140 | 141 | fn storage_ref( 142 | &self, 143 | address: revm::primitives::Address, 144 | index: U256, 145 | ) -> Result { 146 | self.state.read().unwrap().storage_ref(address, index) 147 | } 148 | 149 | fn block_hash_ref(&self, number: U256) -> Result { 150 | self.state.read().unwrap().block_hash_ref(number) 151 | } 152 | } 153 | 154 | impl DatabaseCommit for ArbiterDB { 155 | fn commit( 156 | &mut self, 157 | changes: revm_primitives::HashMap, 158 | ) { 159 | self.state.write().unwrap().commit(changes) 160 | } 161 | } 162 | 163 | /// [AnvilDump] models the schema of an [anvil](https://github.com/foundry-rs/foundry) state dump. 
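///
/// For example, a dump of this shape can be deserialized with `serde_json` and
/// converted into a [`CacheDB`] through the `TryFrom` implementation below
/// (sketch only; `raw_json` is a placeholder, see the `load_anvil_dump_cachedb`
/// test for a complete fixture):
///
/// ```ignore
/// let dump: AnvilDump = serde_json::from_str(raw_json)?;
/// let db: CacheDB<EmptyDB> = dump.try_into()?;
/// ```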
164 | #[derive(Clone, Debug, Serialize, Deserialize)] 165 | pub struct AnvilDump { 166 | /// Mapping of account addresses to [AccountRecord]s stored in the dump 167 | /// file. 168 | pub accounts: BTreeMap, 169 | } 170 | 171 | /// [AccountRecord] describes metadata about an account within the state trie. 172 | #[derive(Clone, Debug, Serialize, Deserialize)] 173 | pub struct AccountRecord { 174 | /// The nonce of the account. 175 | pub nonce: u64, 176 | /// The balance of the account. 177 | pub balance: U256, 178 | /// The bytecode of the account. If empty, the account is an EOA. 179 | pub code: Bytes, 180 | /// The storage mapping of the account. 181 | pub storage: revm_primitives::HashMap, 182 | } 183 | 184 | impl TryFrom for CacheDB { 185 | type Error = as Database>::Error; 186 | 187 | fn try_from(dump: AnvilDump) -> Result { 188 | let mut db = CacheDB::default(); 189 | 190 | dump.accounts.into_iter().try_for_each(|(address, account_record)| { 191 | db.insert_account_info(address, AccountInfo { 192 | balance: account_record.balance, 193 | nonce: account_record.nonce, 194 | code_hash: keccak256(account_record.code.as_ref()), 195 | code: (!account_record.code.is_empty()) 196 | .then(|| Bytecode::new_raw(account_record.code)), 197 | }); 198 | db.replace_account_storage(address, account_record.storage) 199 | })?; 200 | 201 | Ok(db) 202 | } 203 | } 204 | 205 | #[cfg(test)] 206 | mod tests { 207 | use revm_primitives::{address, bytes}; 208 | 209 | use super::*; 210 | 211 | #[test] 212 | fn read_write_to_file() { 213 | let db = ArbiterDB::new(); 214 | db.write_to_file("test.json").unwrap(); 215 | let db = ArbiterDB::read_from_file("test.json").unwrap(); 216 | assert_eq!(db, ArbiterDB::new()); 217 | fs::remove_file("test.json").unwrap(); 218 | } 219 | 220 | #[test] 221 | fn load_anvil_dump_cachedb() { 222 | const RAW_DUMP: &str = r#" 223 | { 224 | "accounts": { 225 | "0x0000000000000000000000000000000000000000": { 226 | "nonce": 1234, 227 | "balance": "0xfacade", 228 | "code": "0x", 229 | "storage": {} 230 | }, 231 | "0x0000000000000000000000000000000000000001": { 232 | "nonce": 555, 233 | "balance": "0xc0ffee", 234 | "code": "0xbadc0de0", 235 | "storage": { 236 | "0x0000000000000000000000000000000000000000000000000000000000000000": "0x000000000000000000000000000000000000000000000000000000000000deAD", 237 | "0x0000000000000000000000000000000000000000000000000000000000000001": "0x000000000000000000000000000000000000000000000000000000000000babe" 238 | } 239 | } 240 | } 241 | } 242 | "#; 243 | 244 | let dump: AnvilDump = serde_json::from_str(RAW_DUMP).unwrap(); 245 | let mut db: CacheDB = dump.try_into().unwrap(); 246 | 247 | let account_a = db.load_account(address!("0000000000000000000000000000000000000000")).unwrap(); 248 | assert_eq!(account_a.info.nonce, 1234); 249 | assert_eq!(account_a.info.balance, U256::from(0xfacade)); 250 | assert_eq!(account_a.info.code, None); 251 | assert_eq!(account_a.info.code_hash, keccak256([])); 252 | 253 | let account_b = db.load_account(address!("0000000000000000000000000000000000000001")).unwrap(); 254 | let b_bytecode = bytes!("badc0de0"); 255 | assert_eq!(account_b.info.nonce, 555); 256 | assert_eq!(account_b.info.balance, U256::from(0xc0ffee)); 257 | assert_eq!(account_b.info.code_hash, keccak256(b_bytecode.as_ref())); 258 | assert_eq!(account_b.info.code, Some(Bytecode::new_raw(b_bytecode))); 259 | assert_eq!(account_b.storage.get(&U256::ZERO), Some(&U256::from(0xdead))); 260 | assert_eq!(account_b.storage.get(&U256::from(1)), Some(&U256::from(0xbabe))); 
261 | } 262 | } 263 | -------------------------------------------------------------------------------- /arbiter-ethereum/src/environment/instruction.rs: -------------------------------------------------------------------------------- 1 | //! This module contains the `Instruction` and `Outcome` enums that are used to 2 | //! communicate instructions and their outcomes between the 3 | //! [`middleware::ArbiterMiddleware`] and the [`Environment`]. 4 | 5 | use super::*; 6 | 7 | /// [`Instruction`]s that can be sent to the [`Environment`] via the 8 | /// [`Socket`]. 9 | /// These instructions can be: 10 | /// - [`Instruction::AddAccount`], 11 | /// - [`Instruction::BlockUpdate`], 12 | /// - [`Instruction::Call`], 13 | /// - [`Instruction::Cheatcode`], 14 | /// - [`Instruction::Query`]. 15 | /// - [`Instruction::SetGasPrice`], 16 | /// - [`Instruction::Stop`], 17 | /// - [`Instruction::Transaction`], 18 | /// 19 | /// The [`Instruction`]s are sent to the [`Environment`] via the 20 | /// [`Socket::instruction_sender`] and the results are received via the 21 | /// [`crate::middleware::Connection::outcome_receiver`]. 22 | #[derive(Debug, Clone)] 23 | pub(crate) enum Instruction { 24 | /// An `AddAccount` is used to add a default/unfunded account to the 25 | /// [`Environment`]. 26 | AddAccount { 27 | /// The address of the account to add to the [`EVM`]. 28 | address: eAddress, 29 | 30 | /// The sender used to to send the outcome of the account addition back 31 | /// to. 32 | outcome_sender: OutcomeSender, 33 | }, 34 | 35 | /// A `BlockUpdate` is used to update the block number and timestamp of the 36 | /// [`Environment`]. 37 | BlockUpdate { 38 | /// The block number to update the [`EVM`] to. 39 | block_number: eU256, 40 | 41 | /// The block timestamp to update the [`EVM`] to. 42 | block_timestamp: eU256, 43 | 44 | /// The sender used to to send the outcome of the block update back to. 45 | outcome_sender: OutcomeSender, 46 | }, 47 | 48 | /// A `Call` is processed by the [`EVM`] but will not be state changing and 49 | /// will not create events. 50 | Call { 51 | /// The transaction environment for the call. 52 | tx_env: TxEnv, 53 | 54 | /// The sender used to to send the outcome of the call back to. 55 | outcome_sender: OutcomeSender, 56 | }, 57 | 58 | /// A `cheatcode` enables direct access to the underlying [`EVM`]. 59 | Cheatcode { 60 | /// The [`Cheatcode`] to use to access the underlying [`EVM`]. 61 | cheatcode: Cheatcodes, 62 | 63 | /// The sender used to to send the outcome of the cheatcode back to. 64 | outcome_sender: OutcomeSender, 65 | }, 66 | 67 | /// A `Query` is used to query the [`EVM`] for some data, the choice of 68 | /// which data is specified by the inner `EnvironmentData` enum. 69 | Query { 70 | /// The data to query the [`EVM`] for. 71 | environment_data: EnvironmentData, 72 | 73 | /// The sender used to to send the outcome of the query back to. 74 | outcome_sender: OutcomeSender, 75 | }, 76 | 77 | /// A `SetGasPrice` is used to set the gas price of the [`EVM`]. 78 | SetGasPrice { 79 | /// The gas price to set the [`EVM`] to. 80 | gas_price: eU256, 81 | 82 | /// The sender used to to send the outcome of the gas price setting back 83 | /// to. 84 | outcome_sender: OutcomeSender, 85 | }, 86 | 87 | /// A `Stop` is used to stop the [`Environment`]. 88 | Stop(OutcomeSender), 89 | 90 | /// A `Transaction` is processed by the [`EVM`] and will be state changing 91 | /// and will create events. 92 | Transaction { 93 | /// The transaction environment for the transaction. 
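        /// For example, a minimal transaction environment could be built as in
        /// the sketch below (field names come from revm's `TxEnv`; the
        /// addresses and calldata are placeholders):
        ///
        /// ```ignore
        /// use revm::primitives::{TransactTo, TxEnv, U256};
        ///
        /// let tx_env = TxEnv {
        ///     caller: sender_address,
        ///     transact_to: TransactTo::Call(contract_address),
        ///     data: calldata,
        ///     value: U256::ZERO,
        ///     ..Default::default()
        /// };
        /// ```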
94 | tx_env: TxEnv, 95 | 96 | /// The sender used to to send the outcome of the transaction back to. 97 | outcome_sender: OutcomeSender, 98 | }, 99 | } 100 | 101 | /// [`Outcome`]s that can be sent back to the the client via the 102 | /// [`Socket`]. 103 | /// These outcomes can be from `Call`, `Transaction`, or `BlockUpdate` 104 | /// instructions sent to the [`Environment`] 105 | #[derive(Debug, Clone, Serialize, Deserialize, PartialEq)] 106 | pub(crate) enum Outcome { 107 | /// The outcome of an [`Instruction::AddAccount`] instruction that is used 108 | /// to signify that the account was added successfully. 109 | AddAccountCompleted, 110 | 111 | /// The outcome of a `BlockUpdate` instruction that is used to provide a 112 | /// non-error output of updating the block number and timestamp of the 113 | /// [`EVM`] to the client. 114 | BlockUpdateCompleted(ReceiptData), 115 | 116 | /// Return value from a cheatcode instruction. 117 | /// todo: make a decision on how to handle cheatcode returns. 118 | CheatcodeReturn(CheatcodesReturn), 119 | 120 | /// The outcome of a `Call` instruction that is used to provide the output 121 | /// of some [`EVM`] computation to the client. 122 | CallCompleted(ExecutionResult), 123 | 124 | /// The outcome of a [`Instruction::SetGasPrice`] instruction that is used 125 | /// to signify that the gas price was set successfully. 126 | SetGasPriceCompleted, 127 | 128 | /// The outcome of a `Transaction` instruction that is first unpacked to see 129 | /// if the result is successful, then it can be used to build a 130 | /// `TransactionReceipt` in the `Middleware`. 131 | TransactionCompleted(ExecutionResult, ReceiptData), 132 | 133 | /// The outcome of a `Query` instruction that carries a `String` 134 | /// representation of the data. Currently this may carry the block 135 | /// number, block timestamp, gas price, or balance of an account. 136 | QueryReturn(String), 137 | 138 | /// The outcome of a `Stop` instruction that is used to signify that the 139 | /// [`Environment`] was stopped successfully. 140 | StopCompleted(ArbiterDB), 141 | } 142 | 143 | /// [`EnvironmentData`] is an enum used inside of the [`Instruction::Query`] to 144 | /// specify what data should be returned to the user. 145 | /// Currently this may be the block number, block timestamp, gas price, or 146 | /// balance of an account. 147 | #[derive(Debug, Clone, Serialize, Deserialize, PartialEq)] 148 | #[allow(clippy::large_enum_variant)] 149 | pub(crate) enum EnvironmentData { 150 | /// The query is for the block number of the [`EVM`]. 151 | BlockNumber, 152 | 153 | /// The query is for the block timestamp of the [`EVM`]. 154 | BlockTimestamp, 155 | 156 | /// The query is for the gas price of the [`EVM`]. 157 | GasPrice, 158 | 159 | /// The query is for the balance of an account given by the inner `Address`. 160 | Balance(eAddress), 161 | 162 | // TODO: Rename this to `Nonce`? 163 | /// The query is for the nonce of an account given by the inner `Address`. 164 | TransactionCount(eAddress), 165 | 166 | /// Query for logs in a range of blocks. 167 | Logs { 168 | /// The filter to use to query for logs 169 | filter: Filter, 170 | }, 171 | } 172 | 173 | /// [`ReceiptData`] is a structure that holds the block number, transaction 174 | /// index, and cumulative gas used per block for a transaction. 175 | #[derive(Debug, Clone, Serialize, Deserialize, PartialEq)] 176 | pub struct ReceiptData { 177 | /// `block_number` is the number of the block in which the transaction was 178 | /// included. 
179 | pub block_number: U64, 180 | /// `transaction_index` is the index position of the transaction in the 181 | /// block. 182 | pub transaction_index: U64, 183 | /// `cumulative_gas_per_block` is the total amount of gas used in the 184 | /// block up until and including the transaction. 185 | pub cumulative_gas_per_block: eU256, 186 | } 187 | 188 | /// Cheatcodes are a direct way to access the underlying [`EVM`] environment and 189 | /// database. 190 | #[derive(Clone, Debug, PartialEq, Eq, serde::Serialize, serde::Deserialize)] 191 | pub enum Cheatcodes { 192 | /// A `Deal` is used to increase the balance of an account in the [`EVM`]. 193 | Deal { 194 | /// The address of the account to increase the balance of. 195 | address: eAddress, 196 | 197 | /// The amount to increase the balance of the account by. 198 | amount: eU256, 199 | }, 200 | /// Fetches the value of a storage slot of an account. 201 | Load { 202 | /// The address of the account to fetch the storage slot from. 203 | account: eAddress, 204 | /// The storage slot to fetch. 205 | key: H256, 206 | /// The block to fetch the storage slot from. 207 | /// todo: implement storage slots at blocks. 208 | block: Option, 209 | }, 210 | /// Overwrites a storage slot of an account. 211 | /// TODO: for more complicated data types, like structs, there's more work 212 | /// to do. 213 | Store { 214 | /// The address of the account to overwrite the storage slot of. 215 | account: ethers::types::Address, 216 | /// The storage slot to overwrite. 217 | key: ethers::types::H256, 218 | /// The value to overwrite the storage slot with. 219 | value: ethers::types::H256, 220 | }, 221 | /// Fetches the `DbAccount` account at the given address. 222 | Access { 223 | /// The address of the account to fetch. 224 | address: ethers::types::Address, 225 | }, 226 | } 227 | 228 | /// Wrapper around [`AccountState`] that can be serialized and deserialized. 229 | #[derive(Debug, Clone, Default, Eq, PartialEq, serde::Serialize, serde::Deserialize)] 230 | pub enum AccountStateSerializable { 231 | /// Before Spurious Dragon hardfork there was a difference between empty and 232 | /// not existing. And we are flagging it here. 233 | NotExisting, 234 | /// EVM touched this account. For newer hardfork this means it can be 235 | /// cleared/removed from state. 236 | Touched, 237 | /// EVM cleared storage of this account, mostly by selfdestruct, we don't 238 | /// ask database for storage slots and assume they are U256::ZERO 239 | StorageCleared, 240 | /// EVM didn't interacted with this account 241 | #[default] 242 | None, 243 | } 244 | 245 | /// Return values of applying cheatcodes. 246 | #[derive(Clone, Debug, PartialEq, Eq, serde::Serialize, serde::Deserialize)] 247 | pub enum CheatcodesReturn { 248 | /// A `Load` returns the value of a storage slot of an account. 249 | Load { 250 | /// The value of the storage slot. 251 | value: U256, 252 | }, 253 | /// A `Store` returns nothing. 254 | Store, 255 | /// A `Deal` returns nothing. 256 | Deal, 257 | /// Gets the DbAccount associated with an address. 258 | Access { 259 | /// Basic account information like nonce, balance, code hash, bytcode. 260 | info: AccountInfo, 261 | /// todo: revm must be updated with serde deserialize, then `DbAccount` 262 | /// can be used. 263 | account_state: AccountStateSerializable, 264 | /// Storage slots of the account. 
265 | storage: HashMap, 266 | }, 267 | } 268 | -------------------------------------------------------------------------------- /arbiter-ethereum/src/errors.rs: -------------------------------------------------------------------------------- 1 | //! Errors that can occur when managing or interfacing with Arbiter's sandboxed 2 | //! Ethereum environment. 3 | 4 | use std::sync::{PoisonError, RwLockWriteGuard}; 5 | 6 | // use crossbeam_channel::SendError; 7 | use crossbeam_channel::{RecvError, SendError}; 8 | use ethers::{ 9 | providers::{MiddlewareError, ProviderError}, 10 | signers::WalletError, 11 | }; 12 | use revm_primitives::{EVMError, HaltReason}; 13 | use thiserror::Error; 14 | 15 | use self::environment::instruction::{Instruction, Outcome}; 16 | use super::*; 17 | 18 | /// The error type for `arbiter-core`. 19 | #[derive(Error, Debug)] 20 | pub enum ArbiterCoreError { 21 | /// Tried to create an account that already exists. 22 | #[error("Account already exists!")] 23 | AccountCreationError, 24 | 25 | /// Tried to access an account that doesn't exist. 26 | #[error("Account doesn't exist!")] 27 | AccountDoesNotExistError, 28 | 29 | /// Tried to sign with forked EOA. 30 | #[error("Can't sign with a forked EOA!")] 31 | ForkedEOASignError, 32 | 33 | /// Failed to upgrade instruction sender in middleware. 34 | #[error("Failed to upgrade sender to a strong reference!")] 35 | UpgradeSenderError, 36 | 37 | /// Data missing when calling a transaction. 38 | #[error("Data missing when calling a transaction!")] 39 | MissingDataError, 40 | 41 | /// Invalid data used for a query request. 42 | #[error("Invalid data used for a query request!")] 43 | InvalidQueryError, 44 | 45 | /// Failed to join environment thread on stop. 46 | #[error("Failed to join environment thread on stop!")] 47 | JoinError, 48 | 49 | /// Reverted execution. 50 | #[error("Execution failed with revert: {gas_used:?} gas used, {output:?}")] 51 | ExecutionRevert { 52 | /// The amount of gas used. 53 | gas_used: u64, 54 | /// The output bytes of the execution. 55 | output: Vec, 56 | }, 57 | 58 | /// Halted execution. 59 | #[error("Execution failed with halt: {reason:?}, {gas_used:?} gas used")] 60 | ExecutionHalt { 61 | /// The halt reason. 62 | reason: HaltReason, 63 | /// The amount of gas used. 64 | gas_used: u64, 65 | }, 66 | 67 | /// Failed to parse integer. 68 | #[error(transparent)] 69 | ParseIntError(#[from] std::num::ParseIntError), 70 | 71 | /// Evm had a runtime error. 72 | #[error(transparent)] 73 | EVMError(#[from] EVMError), 74 | 75 | /// Provider error. 76 | #[error(transparent)] 77 | ProviderError(#[from] ProviderError), 78 | 79 | /// Wallet error. 80 | #[error(transparent)] 81 | WalletError(#[from] WalletError), 82 | 83 | /// Send error. 84 | #[error(transparent)] 85 | SendError( 86 | #[from] 87 | #[allow(private_interfaces)] 88 | SendError, 89 | ), 90 | 91 | /// Recv error. 92 | #[error(transparent)] 93 | RecvError(#[from] RecvError), 94 | 95 | /// Failed to parse integer from string. 96 | #[error(transparent)] 97 | FromStrRadixError(#[from] uint::FromStrRadixErr), 98 | 99 | /// Failed to handle json. 100 | #[error(transparent)] 101 | SerdeJsonError(#[from] serde_json::Error), 102 | 103 | /// Failed to reply to instruction. 104 | #[error("{0}")] 105 | ReplyError(String), 106 | 107 | /// Failed to grab a lock. 
108 | #[error("{0}")] 109 | RwLockError(String), 110 | } 111 | 112 | impl From>> for ArbiterCoreError { 113 | fn from(e: SendError>) -> Self { 114 | ArbiterCoreError::ReplyError(e.to_string()) 115 | } 116 | } 117 | 118 | impl From>> for ArbiterCoreError { 119 | fn from(e: PoisonError>) -> Self { 120 | ArbiterCoreError::RwLockError(e.to_string()) 121 | } 122 | } 123 | 124 | impl MiddlewareError for ArbiterCoreError { 125 | type Inner = ProviderError; 126 | 127 | fn from_err(e: Self::Inner) -> Self { ArbiterCoreError::from(e) } 128 | 129 | fn as_inner(&self) -> Option<&Self::Inner> { None } 130 | } 131 | -------------------------------------------------------------------------------- /arbiter-ethereum/src/lib.rs: -------------------------------------------------------------------------------- 1 | //! ```text 2 | //! _ _____ ____ _____ _______ ______ _____ 3 | //! / \ | __ \| _ \_ _|__ __| ____| __ \ 4 | //! / \ | |__) | |_) || | | | | |__ | |__) | 5 | //! / / \ \ | _ /| _ < | | | | | __| | _ / 6 | //! / _____ \| | \ \| |_) || |_ | | | |____| | \ \ 7 | //! /_/ \_\_| \_\____/_____| |_| |______|_| \_\ 8 | //! ``` 9 | //! 10 | //! `arbiter-core` is designed to facilitate agent-based simulations of Ethereum 11 | //! smart contracts in a local environment. 12 | //! 13 | //! With a primary emphasis on ease of use and performance, it employs the 14 | //! [`revm`](https://crates.io/crates/revm) (Rust EVM) to provide a local 15 | //! execution environment that closely simulates the Ethereum blockchain but 16 | //! without associated overheads like networking latency. 17 | //! 18 | //! Key Features: 19 | //! - **Environment Handling**: Detailed setup and control mechanisms for running the Ethereum-like 20 | //! blockchain environment. 21 | //! - **Middleware Implementation**: Customized middleware to reduce overhead and provide optimal 22 | //! performance. 23 | //! 24 | //! For a detailed guide on getting started, check out the 25 | //! [Arbiter Github page](https://github.com/amthias-labs/arbiter/). 26 | //! 27 | //! For specific module-level information and examples, navigate to the 28 | //! respective module documentation below. 29 | 30 | #![warn(missing_docs)] 31 | 32 | pub mod console; 33 | pub mod coprocessor; 34 | pub mod database; 35 | pub mod environment; 36 | pub mod errors; 37 | pub mod events; 38 | pub mod middleware; 39 | #[rustfmt::skip] 40 | pub mod bindings; 41 | 42 | use std::{ 43 | collections::{BTreeMap, HashMap}, 44 | convert::Infallible, 45 | fmt::Debug, 46 | sync::{Arc, RwLock}, 47 | }; 48 | 49 | use async_trait::async_trait; 50 | use ethers::types::{Address as eAddress, Filter, Log as eLog, H256, U256 as eU256, U64}; 51 | use revm::{ 52 | db::{CacheDB, EmptyDB}, 53 | interpreter::{CallInputs, CallOutcome}, 54 | primitives::{AccountInfo, Address, Bytes, ExecutionResult, Log, TxEnv, U256}, 55 | Database, Evm, EvmContext, Inspector, 56 | }; 57 | use serde::{Deserialize, Serialize}; 58 | use tokio::sync::broadcast::{Receiver as BroadcastReceiver, Sender as BroadcastSender}; 59 | use tracing::{debug, error, info, trace, warn}; 60 | 61 | use crate::{database::ArbiterDB, environment::Broadcast, errors::ArbiterCoreError}; 62 | -------------------------------------------------------------------------------- /arbiter-ethereum/src/middleware/nonce_middleware.rs: -------------------------------------------------------------------------------- 1 | //! The `nonce_middleware` module provides a middleware implementation for 2 | //! managing nonces for Ethereum-like virtual machines. 
A nonce is a number that 3 | //! is used only once in a cryptographic communication. In this case, it is used 4 | //! to ensure that each transaction sent from the address associated with the 5 | //! middleware is unique and cannot be replayed. 6 | //! 7 | //! Main components: 8 | //! - [`NonceManagerMiddleware`]: The core middleware implementation. 9 | //! - [`NonceManagerError`]: Error type for the middleware. 10 | use std::sync::atomic::{AtomicBool, AtomicU64, Ordering}; 11 | 12 | use ethers::providers::MiddlewareError; 13 | use thiserror::Error; 14 | 15 | use super::*; 16 | 17 | #[derive(Debug)] 18 | /// Middleware used for calculating nonces locally, useful for signing multiple 19 | /// consecutive transactions without waiting for them to hit the mempool 20 | pub struct NonceManagerMiddleware { 21 | inner: M, 22 | init_guard: futures_locks::Mutex<()>, 23 | initialized: AtomicBool, 24 | nonce: AtomicU64, 25 | address: eAddress, 26 | } 27 | 28 | impl NonceManagerMiddleware 29 | where M: Middleware 30 | { 31 | /// Instantiates the nonce manager with a 0 nonce. The `address` should be 32 | /// the address which you'll be sending transactions from 33 | pub fn new(inner: M, address: eAddress) -> Self { 34 | Self { 35 | inner, 36 | init_guard: Default::default(), 37 | initialized: Default::default(), 38 | nonce: Default::default(), 39 | address, 40 | } 41 | } 42 | 43 | /// Returns the next nonce to be used 44 | pub fn next(&self) -> eU256 { 45 | let nonce = self.nonce.fetch_add(1, Ordering::SeqCst); 46 | nonce.into() 47 | } 48 | 49 | /// Initializes the nonce for the address associated with this middleware. 50 | /// 51 | /// This function initializes the nonce for the address associated with this 52 | /// middleware. If the nonce has already been initialized, this function 53 | /// returns the current nonce. Otherwise, it initializes the nonce by 54 | /// querying the blockchain for the current transaction count for the 55 | /// address. The nonce is used to ensure that each transaction sent from the 56 | /// address is unique and cannot be replayed. 57 | /// 58 | /// # Arguments 59 | /// 60 | /// * `block` - An optional block ID to use when querying the blockchain for the current 61 | /// transaction count. If `None`, the latest block will be used. 62 | /// 63 | /// # Errors 64 | /// 65 | /// This function returns an error if there is an error querying the 66 | /// blockchain for the current transaction count. 
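    ///
    /// # Example
    ///
    /// A hedged sketch of wiring the middleware up; `client` and
    /// `sender_address` are placeholders rather than values from this crate:
    ///
    /// ```ignore
    /// let nonce_manager = NonceManagerMiddleware::new(client, sender_address);
    /// // Seeds the local counter from the chain's current transaction count.
    /// let current = nonce_manager.initialize_nonce(None).await?;
    /// // Later calls hand out consecutive nonces without further queries.
    /// let next = nonce_manager.next();
    /// ```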
67 | 68 | pub async fn initialize_nonce( 69 | &self, 70 | block: Option, 71 | ) -> Result> { 72 | if self.initialized.load(Ordering::SeqCst) { 73 | // return current nonce 74 | return Ok(self.nonce.load(Ordering::SeqCst).into()); 75 | } 76 | 77 | let _guard = self.init_guard.lock().await; 78 | 79 | // do this again in case multiple tasks enter this codepath 80 | if self.initialized.load(Ordering::SeqCst) { 81 | // return current nonce 82 | return Ok(self.nonce.load(Ordering::SeqCst).into()); 83 | } 84 | 85 | // Note: Need to implement get_transaction_count for the middleware 86 | // initialize the nonce the first time the manager is called 87 | let nonce = self 88 | .inner 89 | .get_transaction_count(self.address, block) 90 | .await 91 | .map_err(MiddlewareError::from_err)?; 92 | self.nonce.store(nonce.as_u64(), Ordering::SeqCst); 93 | self.initialized.store(true, Ordering::SeqCst); 94 | trace!("Nonce initialized for address: {:?}", self.address); 95 | Ok(nonce) 96 | } 97 | 98 | // guard dropped here 99 | 100 | async fn get_transaction_count_with_manager( 101 | &self, 102 | block: Option, 103 | ) -> Result> { 104 | // initialize the nonce the first time the manager is called 105 | if !self.initialized.load(Ordering::SeqCst) { 106 | let nonce = self 107 | .inner 108 | .get_transaction_count(self.address, block) 109 | .await 110 | .map_err(MiddlewareError::from_err)?; 111 | self.nonce.store(nonce.as_u64(), Ordering::SeqCst); 112 | self.initialized.store(true, Ordering::SeqCst); 113 | } 114 | 115 | Ok(self.next()) 116 | } 117 | } 118 | 119 | #[derive(Error, Debug)] 120 | /// Thrown when an error happens at the Nonce Manager 121 | pub enum NonceManagerError { 122 | /// Thrown when the internal middleware errors 123 | #[error(transparent)] 124 | MiddlewareError(M::Error), 125 | } 126 | 127 | impl MiddlewareError for NonceManagerError { 128 | type Inner = M::Error; 129 | 130 | fn from_err(src: M::Error) -> Self { NonceManagerError::MiddlewareError(src) } 131 | 132 | fn as_inner(&self) -> Option<&Self::Inner> { 133 | match self { 134 | NonceManagerError::MiddlewareError(e) => Some(e), 135 | } 136 | } 137 | } 138 | 139 | #[cfg_attr(target_arch = "wasm32", async_trait(?Send))] 140 | #[cfg_attr(not(target_arch = "wasm32"), async_trait)] 141 | impl Middleware for NonceManagerMiddleware 142 | where M: Middleware 143 | { 144 | type Error = NonceManagerError; 145 | type Inner = M; 146 | type Provider = M::Provider; 147 | 148 | fn inner(&self) -> &M { &self.inner } 149 | 150 | async fn fill_transaction( 151 | &self, 152 | tx: &mut TypedTransaction, 153 | block: Option, 154 | ) -> Result<(), Self::Error> { 155 | if tx.nonce().is_none() { 156 | tx.set_nonce(self.get_transaction_count_with_manager(block).await?); 157 | } 158 | 159 | Ok(self.inner().fill_transaction(tx, block).await.map_err(MiddlewareError::from_err)?) 160 | } 161 | 162 | /// Signs and broadcasts the transaction. The optional parameter `block` can 163 | /// be passed so that gas cost and nonce calculations take it into 164 | /// account. For simple transactions this can be left to `None`. 
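    ///
    /// For example (sketch only; the `TransactionRequest` builder is ethers'
    /// and the recipient is a placeholder):
    ///
    /// ```ignore
    /// use ethers::types::TransactionRequest;
    ///
    /// let tx = TransactionRequest::new().to(recipient).value(1_000u64);
    /// // The nonce is filled from the local counter before broadcasting.
    /// let pending = nonce_manager.send_transaction(tx, None).await?;
    /// ```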
165 | async fn send_transaction + Send + Sync>( 166 | &self, 167 | tx: T, 168 | block: Option, 169 | ) -> Result, Self::Error> { 170 | let mut tx = tx.into(); 171 | 172 | if tx.nonce().is_none() { 173 | tx.set_nonce(self.get_transaction_count_with_manager(block).await?); 174 | } 175 | 176 | match self.inner.send_transaction(tx.clone(), block).await { 177 | Ok(tx_hash) => Ok(tx_hash), 178 | Err(err) => { 179 | let nonce = self.get_transaction_count(self.address, block).await?; 180 | if nonce != self.nonce.load(Ordering::SeqCst).into() { 181 | // try re-submitting the transaction with the correct nonce if there 182 | // was a nonce mismatch 183 | self.nonce.store(nonce.as_u64(), Ordering::SeqCst); 184 | tx.set_nonce(nonce); 185 | trace!("Nonce incremented for address: {:?}", self.address); 186 | self.inner.send_transaction(tx, block).await.map_err(MiddlewareError::from_err) 187 | } else { 188 | // propagate the error otherwise 189 | Err(MiddlewareError::from_err(err)) 190 | } 191 | }, 192 | } 193 | } 194 | } 195 | -------------------------------------------------------------------------------- /arbiter-ethereum/tests/common.rs: -------------------------------------------------------------------------------- 1 | use std::sync::Arc; 2 | 3 | use arbiter_bindings::bindings::{ 4 | arbiter_math::ArbiterMath, arbiter_token::ArbiterToken, liquid_exchange::LiquidExchange, 5 | }; 6 | use arbiter_core::{environment::Environment, middleware::ArbiterMiddleware}; 7 | use ethers::utils::parse_ether; 8 | 9 | pub const TEST_ARG_NAME: &str = "ArbiterToken"; 10 | pub const TEST_ARG_SYMBOL: &str = "ARBT"; 11 | pub const TEST_ARG_DECIMALS: u8 = 18; 12 | 13 | pub const TEST_MINT_AMOUNT: u128 = 69; 14 | pub const TEST_MINT_TO: &str = "0xf7e93cc543d97af6632c9b8864417379dba4bf15"; 15 | 16 | pub const TEST_APPROVAL_AMOUNT: u128 = 420; 17 | 18 | pub const TEST_SIGNER_SEED_AND_LABEL: &str = "test_seed_and_label"; 19 | 20 | pub const ARBITER_TOKEN_X_NAME: &str = "Arbiter Token X"; 21 | pub const ARBITER_TOKEN_X_SYMBOL: &str = "ARBX"; 22 | pub const ARBITER_TOKEN_X_DECIMALS: u8 = 18; 23 | 24 | pub const ARBITER_TOKEN_Y_NAME: &str = "Arbiter Token Y"; 25 | pub const ARBITER_TOKEN_Y_SYMBOL: &str = "ARBY"; 26 | pub const ARBITER_TOKEN_Y_DECIMALS: u8 = 18; 27 | 28 | pub const LIQUID_EXCHANGE_PRICE: f64 = 420.69; 29 | 30 | pub fn log() { 31 | std::env::set_var("RUST_LOG", "trace"); 32 | tracing_subscriber::fmt::init(); 33 | } 34 | 35 | pub fn startup() -> (Environment, Arc) { 36 | let env = Environment::builder().build(); 37 | let client = ArbiterMiddleware::new(&env, Some(TEST_SIGNER_SEED_AND_LABEL)).unwrap(); 38 | (env, client) 39 | } 40 | 41 | pub async fn deploy_arbx(client: Arc) -> ArbiterToken { 42 | ArbiterToken::deploy( 43 | client, 44 | ( 45 | ARBITER_TOKEN_X_NAME.to_string(), 46 | ARBITER_TOKEN_X_SYMBOL.to_string(), 47 | ARBITER_TOKEN_X_DECIMALS, 48 | ), 49 | ) 50 | .unwrap() 51 | .send() 52 | .await 53 | .unwrap() 54 | } 55 | 56 | pub async fn deploy_arby(client: Arc) -> ArbiterToken { 57 | ArbiterToken::deploy( 58 | client, 59 | ( 60 | ARBITER_TOKEN_Y_NAME.to_string(), 61 | ARBITER_TOKEN_Y_SYMBOL.to_string(), 62 | ARBITER_TOKEN_Y_DECIMALS, 63 | ), 64 | ) 65 | .unwrap() 66 | .send() 67 | .await 68 | .unwrap() 69 | } 70 | 71 | pub async fn deploy_liquid_exchange( 72 | client: Arc, 73 | ) -> ( 74 | ArbiterToken, 75 | ArbiterToken, 76 | LiquidExchange, 77 | ) { 78 | let arbx = deploy_arbx(client.clone()).await; 79 | let arby = deploy_arby(client.clone()).await; 80 | let price = 
parse_ether(LIQUID_EXCHANGE_PRICE).unwrap(); 81 | let liquid_exchange = LiquidExchange::deploy(client, (arbx.address(), arby.address(), price)) 82 | .unwrap() 83 | .send() 84 | .await 85 | .unwrap(); 86 | (arbx, arby, liquid_exchange) 87 | } 88 | 89 | pub async fn deploy_arbiter_math(client: Arc) -> ArbiterMath { 90 | ArbiterMath::deploy(client, ()).unwrap().send().await.unwrap() 91 | } 92 | -------------------------------------------------------------------------------- /arbiter-ethereum/tests/contracts.rs: -------------------------------------------------------------------------------- 1 | use std::fs::{self, File}; 2 | 3 | use ethers::{ 4 | prelude::Middleware, 5 | types::{I256 as eI256, U256 as eU256}, 6 | }; 7 | use tracing_subscriber::{fmt, EnvFilter}; 8 | include!("common.rs"); 9 | 10 | #[tokio::test] 11 | async fn arbiter_math() { 12 | let (_env, client) = startup(); 13 | let arbiter_math = deploy_arbiter_math(client).await; 14 | 15 | // Test the cdf function 16 | let cdf_output = arbiter_math.cdf(eI256::from(1)).call().await.unwrap(); 17 | println!("cdf(1) = {}", cdf_output); 18 | assert_eq!(cdf_output, eI256::from(500000000000000000u64)); 19 | 20 | // Test the pdf function 21 | let pdf_output = arbiter_math.pdf(eI256::from(1)).call().await.unwrap(); 22 | println!("pdf(1) = {}", pdf_output); 23 | assert_eq!(pdf_output, eI256::from(398942280401432678u64)); 24 | 25 | // Test the ppf function. 26 | let ppf_output = arbiter_math.ppf(eI256::from(1)).call().await.unwrap(); 27 | println!("ppf(1) = {}", ppf_output); 28 | assert_eq!(ppf_output, eI256::from(-8710427241990476442_i128)); 29 | 30 | // Test the mulWadDown function. 31 | let mulwaddown_output = arbiter_math 32 | .mul_wad_down(eU256::from(1_000_000_000_000_000_000_u128), eU256::from(2)) 33 | .call() 34 | .await 35 | .unwrap(); 36 | println!("mulWadDown(1, 2) = {}", mulwaddown_output); 37 | assert_eq!(mulwaddown_output, eU256::from(2)); 38 | 39 | // Test the mulWadUp function. 40 | let mulwadup_output = arbiter_math 41 | .mul_wad_up(eU256::from(1_000_000_000_000_000_000_u128), eU256::from(2)) 42 | .call() 43 | .await 44 | .unwrap(); 45 | println!("mulWadUp(1, 2) = {}", mulwadup_output); 46 | assert_eq!(mulwadup_output, eU256::from(2)); 47 | 48 | // Test the divWadDown function. 49 | let divwaddown_output = arbiter_math 50 | .div_wad_down(eU256::from(1_000_000_000_000_000_000_u128), eU256::from(2)) 51 | .call() 52 | .await 53 | .unwrap(); 54 | println!("divWadDown(1, 2) = {}", divwaddown_output); 55 | assert_eq!(divwaddown_output, eU256::from(500000000000000000000000000000000000_u128)); 56 | 57 | // Test the divWadUp function. 58 | let divwadup_output = arbiter_math 59 | .div_wad_up(eU256::from(1_000_000_000_000_000_000_u128), eU256::from(2)) 60 | .call() 61 | .await 62 | .unwrap(); 63 | println!("divWadUp(1, 2) = {}", divwadup_output); 64 | assert_eq!(divwadup_output, eU256::from(500000000000000000000000000000000000_u128)); 65 | 66 | // Test the lnWad function. 
67 | let lnwad_output = 68 | arbiter_math.log(eI256::from(1_000_000_000_000_000_000_u128)).call().await.unwrap(); 69 | println!("ln(1) = {}", lnwad_output); 70 | assert_eq!(lnwad_output, eI256::from(0)); 71 | 72 | // Test the sqrt function 73 | let sqrt_output = 74 | arbiter_math.sqrt(eU256::from(1_000_000_000_000_000_000_u128)).call().await.unwrap(); 75 | println!("sqrt(1) = {}", sqrt_output); 76 | assert_eq!(sqrt_output, eU256::from(1_000_000_000)); 77 | } 78 | 79 | // TODO: It would be good to change this to `token_functions` and test all 80 | // relevant ERC20 functions (e.g., transfer, approve, etc.). 81 | #[tokio::test] 82 | async fn token_mint_and_balance() { 83 | let (_env, client) = startup(); 84 | let arbx = deploy_arbx(client.clone()).await; 85 | 86 | // Mint some tokens to the client. 87 | arbx 88 | .mint(client.default_sender().unwrap(), eU256::from(TEST_MINT_AMOUNT)) 89 | .send() 90 | .await 91 | .unwrap() 92 | .await 93 | .unwrap(); 94 | 95 | // Fetch the balance of the client. 96 | let balance = arbx.balance_of(client.default_sender().unwrap()).call().await.unwrap(); 97 | 98 | // Check that the balance is correct. 99 | assert_eq!(balance, eU256::from(TEST_MINT_AMOUNT)); 100 | } 101 | 102 | #[tokio::test] 103 | async fn liquid_exchange_swap() { 104 | let (_env, client) = startup(); 105 | let (arbx, arby, liquid_exchange) = deploy_liquid_exchange(client.clone()).await; 106 | 107 | // Mint tokens to the client then check balances. 108 | arbx 109 | .mint(client.default_sender().unwrap(), eU256::from(TEST_MINT_AMOUNT)) 110 | .send() 111 | .await 112 | .unwrap() 113 | .await 114 | .unwrap(); 115 | arby 116 | .mint(client.default_sender().unwrap(), eU256::from(TEST_MINT_AMOUNT)) 117 | .send() 118 | .await 119 | .unwrap() 120 | .await 121 | .unwrap(); 122 | let arbx_balance = arbx.balance_of(client.default_sender().unwrap()).call().await.unwrap(); 123 | let arby_balance = arby.balance_of(client.default_sender().unwrap()).call().await.unwrap(); 124 | println!("arbx_balance prior to swap = {}", arbx_balance); 125 | println!("arby_balance prior to swap = {}", arby_balance); 126 | assert_eq!(arbx_balance, eU256::from(TEST_MINT_AMOUNT)); 127 | assert_eq!(arby_balance, eU256::from(TEST_MINT_AMOUNT)); 128 | 129 | // Get the price at the liquid exchange 130 | let price = liquid_exchange.price().call().await.unwrap(); 131 | println!("price in 18 decimal WAD: {}", price); 132 | 133 | // Mint tokens to the liquid exchange. 134 | let exchange_mint_amount = eU256::MAX / 2; 135 | arbx.mint(liquid_exchange.address(), exchange_mint_amount).send().await.unwrap().await.unwrap(); 136 | arby.mint(liquid_exchange.address(), exchange_mint_amount).send().await.unwrap().await.unwrap(); 137 | 138 | // Approve the liquid exchange to spend the client's tokens. 139 | arbx.approve(liquid_exchange.address(), eU256::MAX).send().await.unwrap().await.unwrap(); 140 | arby.approve(liquid_exchange.address(), eU256::MAX).send().await.unwrap().await.unwrap(); 141 | 142 | // Swap some X for Y on the liquid exchange. 143 | let swap_amount_x = eU256::from(TEST_MINT_AMOUNT) / 2; 144 | liquid_exchange.swap(arbx.address(), swap_amount_x).send().await.unwrap().await.unwrap().unwrap(); 145 | 146 | // Check the client's balances are correct. 
147 | let arbx_balance_after_swap_x = 148 | arbx.balance_of(client.default_sender().unwrap()).call().await.unwrap(); 149 | let arby_balance_after_swap_x = 150 | arby.balance_of(client.default_sender().unwrap()).call().await.unwrap(); 151 | println!("arbx_balance after swap = {}", arbx_balance_after_swap_x); 152 | println!("arby_balance after swap = {}", arby_balance_after_swap_x); 153 | assert_eq!(arbx_balance_after_swap_x, eU256::from(TEST_MINT_AMOUNT) - swap_amount_x); 154 | let additional_y = swap_amount_x * price / eU256::from(10_u64.pow(18)); 155 | assert_eq!(arby_balance_after_swap_x, eU256::from(TEST_MINT_AMOUNT) + additional_y); 156 | 157 | // Swap some Y for X on the liquid exchange. 158 | let swap_amount_y = additional_y; 159 | liquid_exchange.swap(arby.address(), swap_amount_y).send().await.unwrap().await.unwrap(); 160 | 161 | // Check the client's balances are correct. 162 | let arbx_balance_after_swap_y = 163 | arbx.balance_of(client.default_sender().unwrap()).call().await.unwrap(); 164 | let arby_balance_after_swap_y = 165 | arby.balance_of(client.default_sender().unwrap()).call().await.unwrap(); 166 | println!("arbx_balance after swap = {}", arbx_balance_after_swap_y); 167 | println!("arby_balance after swap = {}", arby_balance_after_swap_y); 168 | 169 | // The balance here is off by one due to rounding and the extremely small 170 | // balances we are using. 171 | assert_eq!(arbx_balance_after_swap_y, eU256::from(TEST_MINT_AMOUNT) - 1); 172 | assert_eq!(arby_balance_after_swap_y, eU256::from(TEST_MINT_AMOUNT)); 173 | } 174 | 175 | #[tokio::test] 176 | async fn price_simulation_oracle() { 177 | let (_env, client) = startup(); 178 | let (.., liquid_exchange) = deploy_liquid_exchange(client.clone()).await; 179 | 180 | let price_path = vec![1000.0, 2000.0, 3000.0, 4000.0, 5000.0, 6000.0, 7000.0, 8000.0]; 181 | 182 | // Get the initial price of the liquid exchange. 183 | let initial_price = liquid_exchange.price().call().await.unwrap(); 184 | assert_eq!(initial_price, parse_ether(LIQUID_EXCHANGE_PRICE).unwrap()); 185 | 186 | for price in price_path { 187 | let wad_price = parse_ether(price).unwrap(); 188 | liquid_exchange.set_price(wad_price).send().await.unwrap().await.unwrap(); 189 | let new_price = liquid_exchange.price().call().await.unwrap(); 190 | assert_eq!(new_price, wad_price); 191 | } 192 | } 193 | 194 | #[tokio::test] 195 | async fn can_log() { 196 | std::env::set_var("RUST_LOG", "trace"); 197 | let file = File::create("test_logs.log").expect("Unable to create log file"); 198 | let subscriber = fmt().with_env_filter(EnvFilter::from_default_env()).with_writer(file).finish(); 199 | tracing::subscriber::set_global_default(subscriber).expect("setting default subscriber failed"); 200 | 201 | let env = Environment::builder().with_console_logs().build(); 202 | let client = ArbiterMiddleware::new(&env, None).unwrap(); 203 | let counter = 204 | arbiter_bindings::bindings::counter::Counter::deploy(client, ()).unwrap().send().await.unwrap(); 205 | 206 | // Call the `setNumber` function to emit a console log. 
207 | counter.set_number(eU256::from(42)).send().await.unwrap().await.unwrap(); 208 | 209 | let parsed_file = fs::read_to_string("test_logs.log").expect("Unable to read log file"); 210 | assert!(parsed_file.contains("You set the number to: , 42")); 211 | fs::remove_file("test_logs.log").expect("Unable to remove log file"); 212 | } 213 | -------------------------------------------------------------------------------- /arbiter-ethereum/tests/environment_integration.rs: -------------------------------------------------------------------------------- 1 | use std::str::FromStr; 2 | 3 | use arbiter_bindings::bindings::{self, weth::weth}; 4 | use arbiter_core::database::fork::Fork; 5 | use ethers::{ 6 | prelude::Middleware, 7 | types::{Address, U256 as eU256, U64}, 8 | }; 9 | include!("common.rs"); 10 | 11 | #[tokio::test] 12 | async fn receipt_data() { 13 | let (_environment, client) = startup(); 14 | let arbiter_token = deploy_arbx(client.clone()).await; 15 | let receipt: ethers::types::TransactionReceipt = arbiter_token 16 | .mint(client.default_sender().unwrap(), 1000u64.into()) 17 | .send() 18 | .await 19 | .unwrap() 20 | .await 21 | .unwrap() 22 | .unwrap(); 23 | 24 | assert!(receipt.block_number.is_some()); 25 | assert_eq!(receipt.status, Some(1.into())); 26 | 27 | assert!(receipt.contract_address.is_none()); 28 | assert_eq!(receipt.to, Some(arbiter_token.address())); 29 | 30 | assert!(receipt.gas_used.is_some()); 31 | assert_eq!(receipt.logs.len(), 1); 32 | assert_eq!(receipt.logs[0].topics.len(), 3); 33 | assert_eq!(receipt.transaction_index, 1.into()); 34 | assert_eq!(receipt.from, client.default_sender().unwrap()); 35 | 36 | let mut cumulative_gas = eU256::from(0); 37 | assert!(receipt.cumulative_gas_used >= cumulative_gas); 38 | cumulative_gas += receipt.cumulative_gas_used; 39 | 40 | let receipt_1 = arbiter_token 41 | .mint(client.default_sender().unwrap(), 1000u64.into()) 42 | .send() 43 | .await 44 | .unwrap() 45 | .await 46 | .unwrap() 47 | .unwrap(); 48 | 49 | // ensure gas in increasing 50 | assert!(cumulative_gas <= receipt_1.cumulative_gas_used); 51 | } 52 | 53 | #[tokio::test] 54 | async fn user_update_block() { 55 | let (_environment, client) = startup(); 56 | let block_number = client.get_block_number().await.unwrap(); 57 | assert_eq!(block_number, U64::from(0)); 58 | 59 | let block_timestamp = client.get_block_timestamp().await.unwrap(); 60 | assert_eq!(block_timestamp, eU256::from(1)); 61 | 62 | let new_block_number = 69; 63 | let new_block_timestamp = 420; 64 | 65 | assert!(client.update_block(new_block_number, new_block_timestamp,).is_ok()); 66 | 67 | let block_number = client.get_block_number().await.unwrap(); 68 | assert_eq!(block_number, new_block_number.into()); 69 | 70 | let block_timestamp = client.get_block_timestamp().await.unwrap(); 71 | assert_eq!(block_timestamp, new_block_timestamp.into()); 72 | } 73 | 74 | #[should_panic] 75 | #[tokio::test] 76 | async fn stop_environment() { 77 | let (environment, client) = startup(); 78 | environment.stop().unwrap(); 79 | deploy_arbx(client).await; 80 | } 81 | 82 | #[tokio::test] 83 | async fn fork_into_arbiter() { 84 | let fork = Fork::from_disk("tests/fork.json").unwrap(); 85 | 86 | // Get the environment going 87 | let environment = Environment::builder().with_state(fork.db).build(); 88 | 89 | // Create a client 90 | let client = ArbiterMiddleware::new(&environment, Some("name")).unwrap(); 91 | 92 | // Deal with the weth contract 93 | let weth_meta = fork.contracts_meta.get("weth").unwrap(); 94 | let weth = 
weth::WETH::new(weth_meta.address, client.clone()); 95 | 96 | let address_to_check_balance = 97 | Address::from_str(&weth_meta.mappings.get("balanceOf").unwrap()[0]).unwrap(); 98 | 99 | println!("checking address: {}", address_to_check_balance); 100 | let balance = weth.balance_of(address_to_check_balance).call().await.unwrap(); 101 | assert_eq!(balance, eU256::from(34890707020710109111_u128)); 102 | 103 | // eoa check 104 | let eoa = fork.eoa.get("vitalik").unwrap(); 105 | let eth_balance = client.get_balance(*eoa, None).await.unwrap(); 106 | // Check the balance of the eoa with the load cheatcode 107 | assert_eq!(eth_balance, eU256::from(934034962177715175765_u128)); 108 | } 109 | 110 | #[tokio::test] 111 | async fn middleware_from_forked_eo() { 112 | let fork = Fork::from_disk("tests/fork.json").unwrap(); 113 | 114 | // Get the environment going 115 | let environment = Environment::builder().with_state(fork.db).build(); 116 | 117 | let vitalik_address = fork.eoa.get("vitalik").unwrap(); 118 | let vitalik_as_a_client = ArbiterMiddleware::new_from_forked_eoa(&environment, *vitalik_address); 119 | assert!(vitalik_as_a_client.is_ok()); 120 | let vitalik_as_a_client = vitalik_as_a_client.unwrap(); 121 | 122 | // test a state mutating call from the forked eoa 123 | let weth = bindings::weth::WETH::deploy(vitalik_as_a_client.clone(), ()).unwrap().send().await; 124 | assert!(weth.is_ok()); // vitalik deployed the weth contract 125 | 126 | // test a non mutating call from the forked eoa 127 | let eth_balance = vitalik_as_a_client.get_balance(*vitalik_address, None).await.unwrap(); 128 | assert_eq!(eth_balance, eU256::from(934034962177715175765_u128)); 129 | } 130 | 131 | #[tokio::test] 132 | async fn env_returns_db() { 133 | let (environment, client) = startup(); 134 | deploy_arbx(client).await; 135 | let db = environment.stop().unwrap(); 136 | assert!(!db.state.read().unwrap().accounts.is_empty()) 137 | } 138 | 139 | #[tokio::test] 140 | async fn block_logs() { 141 | let (environment, client) = startup(); 142 | 143 | let arbiter_token = deploy_arbx(client.clone()).await; 144 | arbiter_token.mint(Address::zero(), eU256::from(1000)).send().await.unwrap().await.unwrap(); 145 | 146 | let new_block_number = 69; 147 | let new_block_timestamp = 420; 148 | 149 | client.update_block(new_block_number, new_block_timestamp).unwrap(); 150 | 151 | arbiter_token.approve(Address::zero(), eU256::from(1000)).send().await.unwrap().await.unwrap(); 152 | client.update_block(6969, 420420).unwrap(); 153 | 154 | let db = environment.stop().unwrap(); 155 | let logs = db.logs.read().unwrap(); 156 | println!("DB Logs: {:?}\n", logs); 157 | assert_eq!(logs.get(&revm::primitives::U256::from(0)).unwrap().len(), 1); 158 | assert_eq!(logs.get(&revm::primitives::U256::from(69)).unwrap().len(), 1); 159 | } 160 | -------------------------------------------------------------------------------- /arbiter-ethereum/tests/events_integration.rs: -------------------------------------------------------------------------------- 1 | use std::path::Path; 2 | 3 | use arbiter_core::{ 4 | errors::ArbiterCoreError, 5 | events::{Logger, OutputFileType}, 6 | }; 7 | use ethers::types::U256 as eU256; 8 | use serde::Serialize; 9 | include!("common.rs"); 10 | 11 | #[derive(Serialize, Clone)] 12 | struct MockMetadata { 13 | pub name: String, 14 | } 15 | 16 | async fn generate_events( 17 | arbx: ArbiterToken, 18 | arby: ArbiterToken, 19 | lex: LiquidExchange, 20 | client: Arc, 21 | ) -> Result<(), ArbiterCoreError> { 22 | for _ in 0..2 { 23 | 
arbx.approve(client.address(), eU256::from(1)).send().await.unwrap().await?; 24 | arby.approve(client.address(), eU256::from(1)).send().await.unwrap().await?; 25 | lex.set_price(eU256::from(10u128.pow(18))).send().await.unwrap().await?; 26 | } 27 | Ok(()) 28 | } 29 | 30 | #[tokio::test] 31 | async fn data_capture() { 32 | // 33 | let (env, client) = startup(); 34 | let (arbx, arby, lex) = deploy_liquid_exchange(client.clone()).await; 35 | println!("Deployed contracts"); 36 | 37 | // default_listener 38 | let logger_task = Logger::builder() 39 | .with_event(arbx.events(), "arbx") 40 | .with_event(arby.events(), "arby") 41 | .with_event(lex.events(), "lex") 42 | .run() 43 | .unwrap(); 44 | 45 | let metadata = MockMetadata { name: "test".to_string() }; 46 | 47 | Logger::builder() 48 | .with_event(arbx.events(), "arbx") 49 | .with_event(arby.events(), "arby") 50 | .with_event(lex.events(), "lex") 51 | .metadata(metadata) 52 | .unwrap() 53 | .run() 54 | .unwrap(); 55 | 56 | Logger::builder() 57 | .with_event(arbx.events(), "arbx") 58 | .with_event(arby.events(), "arby") 59 | .with_event(lex.events(), "lex") 60 | .file_type(OutputFileType::CSV) 61 | .run() 62 | .unwrap(); 63 | 64 | Logger::builder() 65 | .with_event(arbx.events(), "arbx") 66 | .with_event(arby.events(), "arby") 67 | .with_event(lex.events(), "lex") 68 | .file_type(OutputFileType::Parquet) 69 | .run() 70 | .unwrap(); 71 | 72 | generate_events(arbx, arby, lex, client.clone()).await.unwrap_or_else(|e| { 73 | panic!("Error generating events: {}", e); 74 | }); 75 | 76 | let _ = env.stop(); 77 | 78 | logger_task.await.unwrap(); 79 | std::thread::sleep(std::time::Duration::from_secs(1)); 80 | assert!(Path::new("./data/output.csv").exists()); 81 | assert!(Path::new("./data/output.parquet").exists()); 82 | assert!(Path::new("./data/output.json").exists()); 83 | std::fs::remove_dir_all("./data").unwrap(); 84 | } 85 | -------------------------------------------------------------------------------- /arbiter-ethereum/tests/fork.json: -------------------------------------------------------------------------------- 1 | { 2 | "meta": { 3 | "weth": { 4 | "address": "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2", 5 | "artifacts_path": "example_fork/WETH.json", 6 | "mappings": { 7 | "balanceOf": [ 8 | "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045" 9 | ] 10 | } 11 | } 12 | }, 13 | "raw": { 14 | "0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2": [ 15 | { 16 | "balance": "0x295f9676bf9b23ace3959", 17 | "nonce": 1, 18 | "code_hash": "0xd0a06b12ac47863b5c7be4185c2deaad1c61557033f56c7d4ea74429cbb25e23", 19 | "code": { 20 | "bytecode": 
"0x6060604052600436106100af576000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff16806306fdde03146100b9578063095ea7b31461014757806318160ddd146101a157806323b872dd146101ca5780632e1a7d4d14610243578063313ce5671461026657806370a082311461029557806395d89b41146102e2578063a9059cbb14610370578063d0e30db0146103ca578063dd62ed3e146103d4575b6100b7610440565b005b34156100c457600080fd5b6100cc6104dd565b6040518080602001828103825283818151815260200191508051906020019080838360005b8381101561010c5780820151818401526020810190506100f1565b50505050905090810190601f1680156101395780820380516001836020036101000a031916815260200191505b509250505060405180910390f35b341561015257600080fd5b610187600480803573ffffffffffffffffffffffffffffffffffffffff1690602001909190803590602001909190505061057b565b604051808215151515815260200191505060405180910390f35b34156101ac57600080fd5b6101b461066d565b6040518082815260200191505060405180910390f35b34156101d557600080fd5b610229600480803573ffffffffffffffffffffffffffffffffffffffff1690602001909190803573ffffffffffffffffffffffffffffffffffffffff1690602001909190803590602001909190505061068c565b604051808215151515815260200191505060405180910390f35b341561024e57600080fd5b61026460048080359060200190919050506109d9565b005b341561027157600080fd5b610279610b05565b604051808260ff1660ff16815260200191505060405180910390f35b34156102a057600080fd5b6102cc600480803573ffffffffffffffffffffffffffffffffffffffff16906020019091905050610b18565b6040518082815260200191505060405180910390f35b34156102ed57600080fd5b6102f5610b30565b6040518080602001828103825283818151815260200191508051906020019080838360005b8381101561033557808201518184015260208101905061031a565b50505050905090810190601f1680156103625780820380516001836020036101000a031916815260200191505b509250505060405180910390f35b341561037b57600080fd5b6103b0600480803573ffffffffffffffffffffffffffffffffffffffff16906020019091908035906020019091905050610bce565b604051808215151515815260200191505060405180910390f35b6103d2610440565b005b34156103df57600080fd5b61042a600480803573ffffffffffffffffffffffffffffffffffffffff1690602001909190803573ffffffffffffffffffffffffffffffffffffffff16906020019091905050610be3565b6040518082815260200191505060405180910390f35b34600360003373ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff168152602001908152602001600020600082825401925050819055503373ffffffffffffffffffffffffffffffffffffffff167fe1fffcc4923d04b559f4d29a8bfc6cda04eb5b0d3c460751c2402c5c5cc9109c346040518082815260200191505060405180910390a2565b60008054600181600116156101000203166002900480601f0160208091040260200160405190810160405280929190818152602001828054600181600116156101000203166002900480156105735780601f1061054857610100808354040283529160200191610573565b820191906000526020600020905b81548152906001019060200180831161055657829003601f168201915b505050505081565b600081600460003373ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200190815260200160002060008573ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff168152602001908152602001600020819055508273ffffffffffffffffffffffffffffffffffffffff163373ffffffffffffffffffffffffffffffffffffffff167f8c5be1e5ebec7d5bd14f71427d1e84f3dd0314c0f7b2291e5b200ac8c7c3b925846040518082815260200191505060405180910390a36001905092915050565b60003073ffffffffffffffffffffffffffffffffffffffff1631905090565b600081600360008673ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200190815260200160002054101515156106dc57600080fd5b3373ffffffffffffffffffffffffffffffffff
ffffff168473ffffffffffffffffffffffffffffffffffffffff16141580156107b457507fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff600460008673ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200190815260200160002060003373ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff1681526020019081526020016000205414155b156108cf5781600460008673ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200190815260200160002060003373ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff168152602001908152602001600020541015151561084457600080fd5b81600460008673ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200190815260200160002060003373ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff168152602001908152602001600020600082825403925050819055505b81600360008673ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff1681526020019081526020016000206000828254039250508190555081600360008573ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff168152602001908152602001600020600082825401925050819055508273ffffffffffffffffffffffffffffffffffffffff168473ffffffffffffffffffffffffffffffffffffffff167fddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef846040518082815260200191505060405180910390a3600190509392505050565b80600360003373ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff1681526020019081526020016000205410151515610a2757600080fd5b80600360003373ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff168152602001908152602001600020600082825403925050819055503373ffffffffffffffffffffffffffffffffffffffff166108fc829081150290604051600060405180830381858888f193505050501515610ab457600080fd5b3373ffffffffffffffffffffffffffffffffffffffff167f7fcf532c15f0a6db0bd6d0e038bea71d30d808c7d98cb3bf7268a95bf5081b65826040518082815260200191505060405180910390a250565b600260009054906101000a900460ff1681565b60036020528060005260406000206000915090505481565b60018054600181600116156101000203166002900480601f016020809104026020016040519081016040528092919081815260200182805460018160011615610100020316600290048015610bc65780601f10610b9b57610100808354040283529160200191610bc6565b820191906000526020600020905b815481529060010190602001808311610ba957829003601f168201915b505050505081565b6000610bdb33848461068c565b905092915050565b60046020528160005260406000206020528060005260406000206000915091505054815600a165627a7a72305820deb4c2ccab3c2fdca32ab3f46728389c2fe2c165d5fafa07661e4e004f6c344a0029", 21 | "state": "Raw" 22 | } 23 | }, 24 | { 25 | "3": "0", 26 | "1": "39473711962023174749424659199615060097653232135126263878786656434573155500040", 27 | "26503682303622372439135079884578270057785794679663935708947492083441471802570": "34890707020710109111", 28 | "0": "39553310892875263560936207548857176834471854732421237974622739861269930573850", 29 | "4": "0", 30 | "5": "0", 31 | "2": "18" 32 | } 33 | ], 34 | "0xd8da6bf26964af9d7eed9e03e53415d37aa96045": [ 35 | { 36 | "balance": "0x32a256c95f0e218155", 37 | "nonce": 1130, 38 | "code_hash": "0xc5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470", 39 | "code": { 40 | "bytecode": "0x", 41 | "state": "Raw" 42 | } 43 | }, 44 | {} 45 | ] 46 | }, 47 | "externally_owned_accounts": { 48 | "vitalik": "0xd8da6bf26964af9d7eed9e03e53415d37aa96045" 49 | } 50 | } 
-------------------------------------------------------------------------------- /arbiter-macros/CHANGELOG.md: -------------------------------------------------------------------------------- 1 | # Changelog 2 | All notable changes to this project will be documented in this file. 3 | 4 | The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), 5 | and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). 6 | 7 | ## [Unreleased] 8 | 9 | ## [0.1.4](https://github.com/anthias-labs/arbiter/compare/arbiter-macros-v0.1.3...arbiter-macros-v0.1.4) - 2024-04-26 10 | 11 | ### Other 12 | - *(deps)* bump quote from 1.0.35 to 1.0.36 ([#947](https://github.com/anthias-labs/arbiter/pull/947)) 13 | 14 | ## [0.1.3](https://github.com/anthias-labs/arbiter/compare/arbiter-macros-v0.1.2...arbiter-macros-v0.1.3) - 2024-02-20 15 | 16 | ### Other 17 | - update Cargo.toml dependencies 18 | 19 | ## [0.1.2](https://github.com/anthias-labs/arbiter/compare/arbiter-macros-v0.1.1...arbiter-macros-v0.1.2) - 2024-02-15 20 | 21 | ### Other 22 | - update docs ([#891](https://github.com/anthias-labs/arbiter/pull/891)) 23 | 24 | ## [0.1.1](https://github.com/anthias-labs/arbiter/compare/arbiter-macros-v0.1.0...arbiter-macros-v0.1.1) - 2024-02-13 25 | 26 | ### Added 27 | - *(arbiter-macros)* `#[arbiter_macros::main]` and a project example ([#880](https://github.com/anthias-labs/arbiter/pull/880)) 28 | 29 | ### Other 30 | - Engine/world from config ([#882](https://github.com/anthias-labs/arbiter/pull/882)) 31 | -------------------------------------------------------------------------------- /arbiter-macros/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | authors = ["Harness Labs"] 3 | description = "Arbiter macros for Ethereum smart contract testing." 4 | edition = "2021" 5 | keywords = ["ethereum", "evm", "emulator", "testing", "smart-contracts"] 6 | license = "AGPL-3.0" 7 | name = "arbiter-macros" 8 | readme = "../README.md" 9 | version = "0.1.4" 10 | 11 | [lib] 12 | proc-macro = true 13 | 14 | [dependencies] 15 | quote = "1.0.36" 16 | syn.workspace = true 17 | -------------------------------------------------------------------------------- /arbiter/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | authors = ["Harness Labs"] 3 | description = "A Rust-based multi-agent framework." 
4 | edition = "2021" 5 | keywords = [ 6 | "ethereum", 7 | "evm", 8 | "emulator", 9 | "testing", 10 | "smart-contracts", 11 | "multi-agent", 12 | "framework", 13 | ] 14 | license = "AGPL-3.0" 15 | name = "arbiter" 16 | version = "0.5.0" 17 | 18 | [dependencies] 19 | arbiter-core = { workspace = true } 20 | arbiter-ethereum = { workspace = true, optional = true } 21 | arbiter-macros = { workspace = true } 22 | 23 | [features] 24 | ethereum = ["dep:arbiter-ethereum"] 25 | -------------------------------------------------------------------------------- /arbiter/src/lib.rs: -------------------------------------------------------------------------------- 1 | pub use arbiter_core::*; 2 | pub use arbiter_engine::*; 3 | pub use arbiter_ethereum::*; 4 | pub use arbiter_macros::*; 5 | -------------------------------------------------------------------------------- /book.toml: -------------------------------------------------------------------------------- 1 | [book] 2 | authors = ["Arbiter Contributors"] 3 | language = "en" 4 | multilingual = false 5 | src = "docs/src" 6 | title = "Arbiter Documentation" 7 | 8 | 9 | [output.html] 10 | katex = true 11 | 12 | [preprocessor.katex] 13 | after = ["links"] 14 | # KaTeX options. 15 | error-color = "#cc0000" 16 | fleqn = false 17 | leqno = false 18 | max-expand = 1000 19 | max-size = "Infinity" 20 | min-rule-thickness = -1.0 21 | output = "html" 22 | throw-on-error = true 23 | trust = false 24 | # Extra options. 25 | block-delimiter = { left = "$$", right = "$$" } 26 | include-src = false 27 | inline-delimiter = { left = "$", right = "$" } 28 | no-css = false 29 | 30 | [output.linkcheck] 31 | follow-web-links = false 32 | optional = true 33 | -------------------------------------------------------------------------------- /docs/CHANGELOG.md: -------------------------------------------------------------------------------- 1 | # Changelog 2 | All notable changes to this project will be documented in this file. 3 | 4 | The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), 5 | and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). 6 | 7 | ## [Unreleased] 8 | 9 | ## [0.0.1](https://github.com/primitivefinance/arbiter/compare/documentation-v0.0.0...documentation-v0.0.1) - 2024-02-12 10 | 11 | ### Fixed 12 | - docs 13 | - fixes 14 | 15 | ### Other 16 | - simplify stream interface ([#866](https://github.com/primitivefinance/arbiter/pull/866)) 17 | - remove codecov ([#860](https://github.com/primitivefinance/arbiter/pull/860)) 18 | - refactor arbiter-core ([#858](https://github.com/primitivefinance/arbiter/pull/858)) 19 | - add `arbiter-engine` error handling ([#852](https://github.com/primitivefinance/arbiter/pull/852)) 20 | - documentation refactor 🌱 ([#847](https://github.com/primitivefinance/arbiter/pull/847)) 21 | - engine::new() visibility ([#854](https://github.com/primitivefinance/arbiter/pull/854)) 22 | - mdbook contributions for new crates ([#845](https://github.com/primitivefinance/arbiter/pull/845)) 23 | - updated builder api for environment 24 | - spellcheck 25 | - inspector for `Environment` 26 | - `Environment` customization 27 | - simulation.md 28 | - vulnerability corpus 29 | - usage 30 | - getting started 31 | - contributing 32 | - documentation 33 | - now all pass? 
34 | - make sure this works properly 35 | - testing out 36 | - reworking 37 | - linkcheck, should fail on book build 38 | - improvement 39 | - save 40 | - improving mdbook testing 41 | - Update arbiter_core.md 42 | - `Environment` docs 43 | - restructure 44 | - save 45 | - risk 46 | - quantatative security 47 | - auditing documentatation 48 | - chapter on anomoly detection 49 | - rename dir 50 | - remove book 📖❌ 51 | - update chapter 3 52 | - update chapter 1 53 | - reformat the chapters 54 | - progress on chapter 2 docs 55 | - update chapter_2.md 56 | - Outline a few core chapters 57 | - develop docs 58 | - Begin mdBook 59 | -------------------------------------------------------------------------------- /docs/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | build = "build.rs" 3 | edition = "2021" 4 | name = "docs" 5 | publish = false 6 | version = "0.0.1" 7 | 8 | [build-dependencies] 9 | skeptic = "0.13.7" 10 | tokio = { workspace = true } 11 | 12 | [dev-dependencies] 13 | arbiter-core.workspace = true 14 | revm-primitives.workspace = true 15 | skeptic = "0.13.7" 16 | -------------------------------------------------------------------------------- /docs/build.rs: -------------------------------------------------------------------------------- 1 | use skeptic::markdown_files_of_directory; 2 | 3 | extern crate skeptic; 4 | 5 | fn main() { 6 | let markdown_files = markdown_files_of_directory("src/"); 7 | skeptic::generate_doc_tests(&markdown_files); 8 | } 9 | -------------------------------------------------------------------------------- /docs/src/SUMMARY.md: -------------------------------------------------------------------------------- 1 | # Summary 2 | [Arbiter](./index.md) 3 | - [Getting Started](./getting_started/index.md) 4 | - [Examples](./getting_started/examples.md) 5 | # Usage 6 | - [Overview](./usage/index.md) 7 | - [Arbiter Core](./usage/arbiter_core/index.md) 8 | - [Environment](./usage/arbiter_core/environment.md) 9 | - [Middleware](./usage/arbiter_core/middleware.md) 10 | - [Arbiter Engine](./usage/arbiter_engine/index.md) 11 | - [Behaviors](./usage/arbiter_engine/behaviors.md) 12 | - [Agents and Engines](./usage/arbiter_engine/agents_and_engines.md) 13 | - [Worlds and Universes](./usage/arbiter_engine/worlds_and_universes.md) 14 | - [Configuration](./usage/arbiter_engine/configuration.md) 15 | - [Arbiter CLI](./usage/arbiter_cli.md) 16 | - [Arbiter Macros](./usage/arbiter_macros.md) 17 | - [Techniques](./usage/techniques/index.md) 18 | - [Anomaly Detection](./usage/techniques/anomaly_detection.md) 19 | - [Measuring Risk](./usage/techniques/measuring_risk.md) 20 | # Engagement 21 | - [Contributing](./contributing.md) 22 | - [Vulnerability Corpus](./vulnerability_corpus.md) -------------------------------------------------------------------------------- /docs/src/contributing.md: -------------------------------------------------------------------------------- 1 | # Contributing 2 | Feedback is the number one way you can help us improve Arbiter, and we want to hear from you! 3 | A worthy contribution to the repo is opening an issue or a discussion on the [GitHub issues](https://github.com/primitivefinance/arbiter/issues) page. 4 | Similarly, you can feel free to reach out to us on [Telegram](https://t.me/arbiter_rs). 5 | Any and all questions are welcome. 6 | 7 | ## Open Source Community 8 | Arbiter is an open-source project and we welcome contributions from the community. 
9 | We keep track of all issues and feature requests on our [GitHub issues](https://github.com/primitivefinance/arbiter/issues) page. 10 | Issues that are approachable for newcomers are tagged with the **good first issue** label, so be on the lookout for those! 11 | 12 | See our [Contributing Guidelines](https://github.com/primitivefinance/arbiter/blob/main/.github/CONTRIBUTING.md). 13 | 14 | ## Vulnerability Corpus 15 | If you have found a vulnerability in a smart contract using Arbiter, please report it to us by opening an issue on our [GitHub issues](https://github.com/primitivefinance/arbiter/issues) page or consider adding it yourself to our [Vulnerability Corpus](./vulnerability_corpus.md). 16 | This can help the Ethereum developer community know how to test their own smart contracts and avoid similar vulnerabilities. -------------------------------------------------------------------------------- /docs/src/getting_started/examples.md: -------------------------------------------------------------------------------- 1 | # Examples 2 | 3 | We have a few examples to help you get started with Arbiter. These examples are designed to be simple and easy to understand. They are also designed to be easy to run and modify. We hope you find them helpful! 4 | 5 | Our examples are in the [examples](https://github.com/anthias-labs/arbiter/tree/main/examples) directory. There are two examples: one for building a simulation and one for forking the mainnet state. 6 | 7 | ## Simulation 8 | 9 | You can run the simulation example with the following command: 10 | 11 | ```bash 12 | cargo run --example project simulate examples/project/configs/example.toml 13 | ``` 14 | 15 | This will run the minimal counter-simulation, which is designed to be easy to understand. It uses the Arbiter main macro to derive the `incrementer` behavior for a single agent. Our design philosophy is that the users of Arbiter should only need to define behaviors and a configuration toml for the behaviors. You can see how the behaviors were represented in this simulation in the [behaviors](https://github.com/anthias-labs/arbiter/tree/main/examples/project/behaviors) module. We implement a single behavior for the incrementer struct that deploys the counter on startup and then increments the count on each increment event. 16 | 17 | For more information on the behavior trait, please see the section on [behaviors](https://anthias-labs.github.io/arbiter/usage/arbiter_engine/behaviors.html). 18 | 19 | 20 | ## Forking 21 | 22 | You can run the fork example with the following command: 23 | 24 | ```bash 25 | arbiter fork examples/fork/weth_config.toml 26 | ``` 27 | 28 | This will fork the state specified in the `weth_config.toml` file. If you would like to fork a different state, you can modify the `weth_config.toml` file to include additional EOAs or contract storage.
Once you have forked the state you want, you can start your simulation with the forked state by loading it into an in-memory `revm` instance like so: 29 | 30 | ```rust ignore 31 | use arbiter_core::{database::fork::Fork, environment::Environment, middleware::ArbiterMiddleware}; 32 | 33 | let fork = Fork::from_disk("tests/fork.json").unwrap(); 34 | 35 | // Get the environment going 36 | let environment = Environment::builder().with_db(fork.db).build(); 37 | 38 | // Create a client 39 | let client = ArbiterMiddleware::new(&environment, Some("name")).unwrap(); 40 | ``` 41 | 42 | 43 | -------------------------------------------------------------------------------- /docs/src/getting_started/index.md: -------------------------------------------------------------------------------- 1 | # Getting Started 2 | To use Arbiter, you can use the Arbiter CLI to help you manage your projects or, if you feel you don't need any of the CLI features, you are free to use the [`arbiter-core`](https://crates.io/crates/arbiter-core), `arbiter-engine`, and [`arbiter-bindings`](https://crates.io/crates/arbiter-bindings) crates directly. 3 | You can find more information about these crates in the [Usage](../index.md) section. 4 | The crates (aside from `arbiter-engine` at the moment) are linked to their crates.io pages so you can add them to your project with: 5 | ```toml 6 | [dependencies] 7 | arbiter-core = "*" # You can specify a version here if you'd like 8 | arbiter-bindings = "*" # You can specify a version here if you'd like 9 | arbiter-engine = "*" # You can specify a version here if you'd like 10 | ``` 11 | 12 | 13 | # Auditing 14 | 15 | The current state of software auditing in the EVM is rapidly evolving. Competitive salaries are attracting top talent to firms like [Spearbit](https://spearbit.com/), [ChainSecurity](https://chainsecurity.com/), and [Trail of Bits](https://www.trailofbits.com/), while open security bounties and competitions like [Code Arena](https://code4rena.com/) are drawing in the best and brightest from around the world. Moreover, the rise of decentralized finance and the value at stake in these EVM-oriented systems have also caught the attention of a collection of black hats. 16 | 17 | As competition in auditing intensifies, auditors will likely need to specialize to stay competitive. With its ability to model the EVM with a high degree of granularity, Arbiter is well-positioned to be leveraged by auditors as they develop their tooling and methodologies to stay ahead of the curve. 18 | 19 | One such methodology is domain-specific fuzzing. Fuzzing is a testing technique that provides invalid, unexpected, or random data as input to a computer program. The program is then monitored for exceptions such as crashes, failing built-in code assertions, or potential memory leaks. Domain-specific fuzzing in the context of EVM system design involves modeling "normal" system behavior with agents and then playing with different parameters of the system to expose system fragility. 20 | 21 | With its high degree of EVM modeling granularity, Arbiter is well-suited to support and enable domain-specific fuzzing. It can accurately simulate the behavior of the EVM under a wide range of conditions and inputs, providing auditors with a powerful tool for identifying and addressing potential vulnerabilities. Moreover, Arbiter is designed to be highly performant and fast, allowing for efficient and timely auditing processes. This speed and performance make it an even more valuable tool in the rapidly evolving world of software auditing.
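
As a rough illustration of what a domain-specific fuzzing harness might look like, here is a minimal sketch of a randomized parameter sweep against a contract running inside an Arbiter `Environment`. It assumes the `ArbiterToken` bindings from `arbiter-bindings` (the same ones the repository's integration tests use) and the `rand` crate for generating inputs; the invariant checked here is deliberately simple and is not part of the Arbiter codebase itself.

```rust, ignore
use std::sync::Arc;

use arbiter_bindings::bindings::arbiter_token::ArbiterToken;
use arbiter_core::{environment::Environment, middleware::ArbiterMiddleware};
use ethers::types::U256;
use rand::Rng;

#[tokio::test]
async fn fuzz_mint_preserves_balance_invariant() {
    // Spin up a sandboxed EVM and a client associated with an account in it.
    let env = Environment::builder().build();
    let client: Arc<ArbiterMiddleware> = ArbiterMiddleware::new(&env, Some("fuzzer")).unwrap();

    // Deploy the contract under test (a hypothetical target for the fuzz loop).
    let token = ArbiterToken::deploy(
        client.clone(),
        ("Fuzz Token".to_string(), "FUZZ".to_string(), 18u8),
    )
    .unwrap()
    .send()
    .await
    .unwrap();

    let me = client.default_sender().unwrap();
    let mut expected = U256::zero();
    let mut rng = rand::thread_rng();

    // Sweep randomized inputs and check that the reported balance always
    // matches the running total we expect; any mismatch flags a fragility.
    for _ in 0..100 {
        let amount = U256::from(rng.gen_range(1u64..1_000_000));
        token.mint(me, amount).send().await.unwrap().await.unwrap();
        expected += amount;
        assert_eq!(token.balance_of(me).call().await.unwrap(), expected);
    }
}
```

In practice, the interesting fuzz targets are protocol parameters and agent behaviors rather than a single ERC-20 invariant, but the loop structure stays the same: deploy, perturb, and assert.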
22 | 23 | -------------------------------------------------------------------------------- /docs/src/index.md: -------------------------------------------------------------------------------- 1 | # Arbiter 2 | **Arbiter** is a framework for stateful Ethereum smart-contract simulation. 3 | The framework features an [`ethers-rs`](https://github.com/gakonst/ethers-rs) middleware built on top of [revm](https://github.com/bluealloy/revm) which allows the end user to interact with a sandboxed `revm` instance as if it were an Ethereum node. 4 | This provides a familiar interface for interacting with the Ethereum Virtual Machine (EVM), but with unrivaled speed. 5 | Furthermore, Arbiter provides containment and management for simulations. For a running list of vulnerabilities found with Arbiter, please see the [Vulnerability Corpus](vulnerability_corpus.md). 6 | 7 | ## Overview 8 | The Arbiter workspace has three crates: 9 | - `arbiter`: The binary crate that exposes a command line interface for initializing simulations via a templated repository and generating contract bindings needed for the simulation. 10 | - `arbiter-core`: The lib crate that contains the core logic for the Arbiter framework including the `RevmMiddleware` discussed before, the `Environment` which envelopes simulations, and the `Manager` who controls a collection of environments. 11 | - `arbiter-engine`: The lib crate that provides abstractions for building simulations and more. 12 | 13 | The purpose of Arbiter is to provide a toolset to construct arbitrary agents (defined in Rust, by smart contracts, or even other foreign function interfaces) and have these agents interact with an Ethereum-like environment of your design. 14 | All contract bytecode is run directly using a blazing-fast EVM instance `revm` (which is used in live RPC nodes such as [`reth`](https://github.com/paradigmxyz/reth)) so that your contracts are tested in the exact same type of environment that they are deployed in. 15 | 16 | ## Motivation 17 | Smart contract engineers need to test their contracts against a wide array of potentially adversarial environments and contract parameters. 18 | The static stateless testing of contracts can only take you so far. 19 | To truly test the security of a contract, you need to test it against a wide array of dynamic environments that encompass the externalities of Ethereum mainnet. 20 | We wanted to do just that with Arbiter. 21 | 22 | Both smart contract and financial engineers come together in Decentralized Finance (DeFi) to build and deploy a wide array of complex decentralized applications as well as financial strategies respectively. 23 | For the latter, a financial engineer may want to test their strategies against thousands of market conditions, contract settings, shocks, and autonomous or random or even AI agents all while making sure their strategy isn't vulnerable to bytecode-level exploits. 24 | 25 | To configure such a rich simulation environment on a test or local network is also possible with Arbiter by a change in choice of middleware. 26 | The most efficient choice for getting robust, yet quick, simulations would bypass any networking and use a low level language's implementation of the EVM. 27 | Furthermore, we can gain control over the EVM worldstate by working directly on `revm`. 28 | We would like the user to have a choice in how they want to simulate their contracts and Arbiter provides that choice. 
29 | 30 | ### Sim Driven Development and Strategization 31 | 32 | Test-driven development is a popular engineering practice in which you write tests first, watch them fail, and then implement logic until the tests pass. 33 | With simulation driven development, it's possible to build "tests" that can only pass if the *incentives* actually work. For example, a sim driven test might be `is_loan_liquidated`, and a simulation must be made for a liquidator agent to do the liquidation. 34 | This approach significantly improves the testing of economic systems and other mechanism designs, which is important in the world of networks that are mostly incentive driven. 35 | 36 | The same goes for developing strategies that one would like to deploy on a live Ethereum network. 37 | One can use Arbiter to simulate their strategy with an intended goal and see if it actually works. 38 | This is especially important in the world of DeFi where strategies are often a mix of onchain and offchain components and are susceptible to exploits. 39 | 40 | ### Anomaly Detection 41 | Anomaly detection in software design systems refers to identifying unusual patterns or behaviors that deviate from the expected or normal functioning of the software. These anomalies can be due to various reasons, such as bugs, performance issues, security vulnerabilities, or design flaws. Arbiter's agent-based modeling and EVM execution parity make it well-suited for anomaly detection of greater systemic risk in the Ethereum ecosystem. 42 | 43 | In the context of software design, anomaly detection can be used to identify design flaws or inconsistencies in the design of the software. For example, if a particular module or component of the software behaves differently than it was intended, it could indicate a design flaw or security vulnerability. 44 | 45 | ### Agent-Based Modeling 46 | Agent-based simulations for anomaly detection systems involve creating a model of the system using agents, where each agent represents a component or a module of the system. These agents interact with each other and their environment, mimicking the behavior of the actual system. Agent-based simulations can be a powerful tool for anomaly detection as they can model complex systems and their interactions, making it possible to detect anomalies that other methods might miss. However, they also require a good understanding of the system being modeled and what constitutes normal behavior for that system. 47 | 48 | ### Modeling the System 49 | The first and most crucial step is to model the system. A well-modeled system accurately reflects the real-world behavior of the software or system under study. This ensures that the simulation provides meaningful and applicable results. We built the `RevmMiddleware` to accurately model how users/agents or externally owned accounts interact with the EVM. This means the `RevmMiddleware` implements the `Middleware` trait from the Rust Ethereum ecosystem, exposing the same API that EOAs would use to talk to a node today. This is why having EVM execution parity is so important. 50 | 51 | #### Statistical Methods: 52 | These methods model the system's normal behavior using statistical models and then use these models to detect deviations. Simple statistics such as the mean, median, and standard deviation work well for this, as do more complex tools like regression models. For example, the Poisson distribution gives the probability of an event happening a certain number of times (k) within a given interval of time or space.
So, if you can quantify an average number of occurrences of some action (say, to model the behavior of a retail agent, or network congestion following certain events), you can model it well with the Poisson distribution. 53 | 54 | ### Defining Normal Behavior: Agent Design 55 | Once the system is modeled, the next step is to define what constitutes normal behavior for the system. This could be based on historical data, expert knowledge, or both. This is not a feature of Arbiter yet (the `arbiter-engine` crate is a work in progress but contains some of our initial work on this). Agent behavior can be incredibly simple (passive behavior) or complex (interactive behavior), but the better the agents model the system, the better the results. For example, you can model LPs as more passive agents that deposit and withdraw liquidity based on some average occurrences. In contrast, arbitrageurs can be modeled as more interactive agents that react to certain events or `SLOAD`s on specific storage slots. As the agents start to resemble real-world actors, the results will be more accurate, and the data will be more beneficial for the system designers. 56 | 57 | ### Simulating the System 58 | The system is then simulated over some time. During this simulation, the agents interact with each other and their environment, generating data that reflects the system's behavior. You can decide on specific parameters and configurations for the system. Designing the simulation to be as close to the real-world system as possible is recommended. For example, we can feed arbitrageurs a sequence of prices drawn from historical data or generated by a stochastic price process. The speed and performance of the simulation make it possible to generate far more data with the latter. 59 | 60 | ### Detecting Anomalies 61 | The data generated by the simulation is then analyzed to detect anomalies. This could be done using various statistical methods, machine learning, or rule-based methods. Anomalies are identified as deviations from the defined normal behavior. 62 | 63 | 64 | > Machine Learning: Machine learning techniques can be used to learn the system's normal behavior and then detect anomalies. 65 | 66 | > Rule-Based Methods: These methods define rules that describe the system's normal behavior. Any behavior that does not conform to these rules is considered an anomaly. 67 | 68 | > Time Series Analysis: In systems where data is collected over time, time series analysis can be used to detect anomalies. This involves looking for patterns or trends in the data over time and identifying any deviations from these patterns. 69 | >> Log Analysis: Many software systems generate logs that record the system's activity. Analyzing these logs can help detect anomalies. This can be done manually or using automated tools. 70 | 71 | >> Evaluating and Refining the Model: The detected anomalies are evaluated to determine if they are true anomalies or false positives. The model is refined based on these evaluations to improve its accuracy in detecting anomalies. 72 | 73 | 74 | ### Using Insights to Refine the System 75 | Insights gained from the simulation can be invaluable in refining and improving the system. By understanding the anomalies and their causes, we can make necessary adjustments to the system's design or operation. This could involve modifying the system's parameters, updating the agents' behaviors, or even redesigning certain aspects of the system. 76 | 77 | However, it's essential to be cautious about overfitting the data.
Overfitting occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. An overfitted model has poor predictive performance, as it overreacts to minor fluctuations in the training data. 78 | 79 | ## Developer Documentation 80 | To see the documentation for the Arbiter crates, please visit the following: 81 | - [`arbiter`](https://docs.rs/crate/arbiter/) 82 | - [`arbiter-bindings`](https://docs.rs/crate/arbiter-bindings/) 83 | - [`arbiter-core`](https://docs.rs/arbiter-core/) 84 | 85 | You will also find each of these on crates.io. -------------------------------------------------------------------------------- /docs/src/lib.rs: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /docs/src/usage/arbiter_cli.md: -------------------------------------------------------------------------------- 1 | # Arbiter CLI 2 | Arbiter provides a Foundry-like CLI experience. You can initialize new projects, generate bindings and execute simulations using the CLI. 3 | 4 | To create a new Arbiter project: 5 | ```bash 6 | arbiter init your-new-project 7 | cd your-new-project 8 | ``` 9 | This initializes a new Arbiter project with a template. 10 | You can run `arbiter init --no-git` to remove the `.git` directory from the template upon initialization. 11 | 12 | ## Bindings 13 | 14 | You can load or write your own smart contracts in the `arbiter-bindings/contracts/` directory and begin writing your own simulations. 15 | Arbiter treats Rust smart-contract bindings as first-class citizens. 16 | The contract bindings are generated via Foundry's `forge` command. 17 | `arbiter bind` wraps `forge` with some convenience features that will generate all your bindings to src/bindings as a rust module. 18 | [Foundry](https://github.com/foundry-rs/foundry) power-users are welcome to use `forge` directly. 19 | You can generate the bindings again by running: 20 | 21 | ```bash 22 | arbiter bind 23 | ``` 24 | Arbiter bind wraps `forge bind` and is configured from your cargo.toml. There are three optional fields you can add to your toml to configure arbiter bind. 25 | ```toml 26 | [arbiter] 27 | bindings_workspace = "simulation" # must be a valid workspace member 28 | submodules = false # change to true if you want the submodule bindings to be generated 29 | ignore_interfaces = false # change to true if you want to ignore interfaces contracts 30 | ``` 31 | 32 | The template is executable at this point and you can run it by running: 33 | 34 | ```bash 35 | cargo run 36 | ``` 37 | 38 | You can load or write your own smart contracts in the templates `contracts/` directory and begin writing your own simulations. Arbiter treats Rust smart-contract bindings as first-class citizens. The contract bindings are generated via Foundry's forge command. arbiter bind wraps forge with some convenience features that will generate all your bindings to `src/bindings` as a rust module. Foundry power-users are welcome to use forge directly. You can also manage project dependencies using git submodules via `forge install`. The [Foundry book](https://book.getfoundry.sh/reference/forge/forge-install) provides further details on managing project dependencies and other features. 39 | 40 | 41 | ## Forking 42 | 43 | To fork a state of an EVM network, you must first create a fork config file. 44 | An example is provided in the `example_fork` directory. 
45 | Essentially, you provide the storage location for the data, the network you want, the block number you want, and metadata about the contracts you want to fork. 46 | 47 | ```bash 48 | arbiter fork 49 | ``` 50 | 51 | This will create a fork of the network you specified in the config file and store it in the location you specified. 52 | It can then be loaded into an `arbiter-core` `Environment` by using the `Fork::from_disk()` method. 53 | 54 | Forking is done this way so that emulation does not require a constant connection to an RPC endpoint. 55 | 56 | **Optional Arguments** 57 | You can run `arbiter fork --overwrite` to overwrite the fork if it already exists. -------------------------------------------------------------------------------- /docs/src/usage/arbiter_core/environment.md: -------------------------------------------------------------------------------- 1 | # Environment 2 | The `Environment` owns a `revm` instance for processing EVM bytecode. 3 | To make the `Environment` performant and flexible, it runs on its own system thread and receives all communication via `Instruction`s sent to it via a `Sender`. 4 | The `Socket` is a struct owned by the `Environment` that manages all inward and outward communication with the `Environment`'s clients, such as the `Instruction` channel. 5 | 6 | ## Usage 7 | To create an `Environment`, we use a builder pattern that allows you to pre-load an `Environment` with your own database. 8 | We can do the following to create a default `Environment`: 9 | ```rust, ignore 10 | use arbiter_core::environment::Environment; 11 | 12 | fn main() { 13 | let env = Environment::builder().build(); 14 | } 15 | ``` 16 | Note that the call to `.build()` will start the `Environment`'s thread and begin processing `Instruction`s. 17 | 18 | ### Inspector Configuration 19 | The `Environment` also supports the ability to inspect the `revm` instance's state at any point in time, which can be useful for debugging and managing gas. 20 | By default, the `Environment` will not inspect the `revm` instance's state at all (which should provide the highest speed), but you can enable these features by doing the following: 21 | ```rust, ignore 22 | use arbiter_core::environment::Environment; 23 | 24 | fn main() { 25 | let env = Environment::builder() 26 | .with_console_logs() 27 | .with_pay_gas() 28 | .build(); 29 | } 30 | ``` 31 | The feature `with_console_logs` will print out logs generated by `console2.log` in Solidity so that you can get the intermediate state of your contracts. 32 | The feature `with_pay_gas` will pay gas for transactions, which is useful for realism. 33 | 34 | ### Fork Configuration 35 | If you have a database that has been forked from a live network, it has likely been serialized to disk. 36 | In that case, you can do something like this: 37 | ```rust, ignore 38 | use arbiter_core::environment::Environment; 39 | use arbiter_core::database::fork::Fork; 40 | 41 | fn main() { 42 | let path_to_fork = "path/to/fork"; 43 | let fork = Fork::from_disk(path_to_fork).unwrap(); 44 | let env = Environment::builder().with_db(fork).build(); 45 | } 46 | ``` 47 | This will create an `Environment` that has been forked from the database at the given path and is ready to receive `Instruction`s. 48 | 49 | `Environment` supports more customization for the `gas_limit` and `contract_size_limit` of the `revm` instance.
50 | You can do the following: 51 | ```rust, ignore 52 | use arbiter_core::environment::Environment; 53 | 54 | fn main() { 55 | let env = Environment::builder() 56 | .with_gas_limit(revm_primitives::U256::from(12_345_678)) 57 | .with_contract_size_limit(111_111) 58 | .build(); 59 | } 60 | ``` 61 | 62 | ## Instructions 63 | `Instruction`s have been added to over time, but at the moment we allow for the following: 64 | - `Instruction::AddAccount`: Add an account to the `Environment`'s world state. This is usually called by the `RevmMiddleware` when a new client is created. 65 | - `Instruction::BlockUpdate`: Update the `Environment`'s block number and block timestamp. This can be handled by an external agent in a simulation, if desired. 66 | - `Instruction::Cheatcode`: Execute one of the `Cheatcodes` on the `Environment`'s world state. 67 | The `Cheatcodes` include: 68 | - `Cheatcodes::Deal`: Used to set the raw ETH balance of a user. Useful when you need to pay gas fees in a transaction. 69 | - `Cheatcodes::Load`: Gets the value of a storage slot of an account. 70 | - `Cheatcodes::Store`: Sets the value of a storage slot of an account. 71 | - `Cheatcodes::Access`: Gets the account at an address. 72 | - `Instruction::Query`: Allows for querying the `Environment`'s world state and current configuration. Anything in the `EnvironmentData` enum is accessible via this instruction. 73 | - `EnvironmentData::BlockNumber`: Gets the current block number of the `Environment`. 74 | - `EnvironmentData::BlockTimestamp`: Gets the current block timestamp of the `Environment`. 75 | - `EnvironmentData::GasPrice`: Gets the current gas price of the `Environment`. 76 | - `EnvironmentData::Balance`: Gets the current ETH balance of an account. 77 | - `EnvironmentData::TransactionCount`: Gets the current nonce of an account. 78 | - `Instruction::Stop`: Stops the `Environment`'s thread and echos out to any listeners to shut down their event streams. This can be used when handling errors or reverts, or just when you're done with the `Environment`. 79 | - `Instruction::Transaction`: Executes a transaction on the `Environment`'s world state. This is usually called by the `RevmMiddleware` when a client sends a ETH-call or state-changing transaction. 80 | 81 | The `RevmMiddleware` provides methods for sending the above instructions to an associated `Environment` so that you do not have to interact with the `Environment` directly! 82 | 83 | ## Events 84 | The `Environment` also emits Ethereum events and errors/reverts to clients who are set to listen to them. 85 | To do so, we use a `tokio::sync::broadcast` channel and the `RevmMiddleware` manages subscriptions to these events. 86 | As for errors or reverts, we are working on making the flow of handling these more graceful so that your own program or agents can decide how to handle them. 87 | -------------------------------------------------------------------------------- /docs/src/usage/arbiter_core/index.md: -------------------------------------------------------------------------------- 1 | # Arbiter Core 2 | The `arbiter-core` crate is the core of the Arbiter framework. 3 | It contains the `Environment` struct which acts as an EVM sandbox and the `RevmMiddleware` which gives a convenient interface for interacting with contracts deployed into the `Environment`. 
4 | The API provided by `RevmMiddleware` is that of the `Middleware` trait in the `ethers-rs` crate, therefore it looks and feels just like you're interacting with a live network when you work with an Arbiter `Environment`. 5 | The only notable differences are in the control you have over this `Environment` compared to something like Anvil, a testnet, or a live network. 6 | -------------------------------------------------------------------------------- /docs/src/usage/arbiter_core/middleware.md: -------------------------------------------------------------------------------- 1 | # Middleware 2 | The `ArbiterMiddleware` is the main interface for interacting with an `Environment`. 3 | We implement the `ethers-rs` `Middleware` trait so that you may work with contract bindings generated by `forge` or `arbiter bind` as if you were interacting with a live network. 4 | Not all methods are implemented, but the relevant ones are. 5 | 6 | `ArbiterMiddleware` owns a `Connection` which is the client's interface to the `Environment`'s `Socket`. 7 | This `Connection` acts much like a WebSocket connection and is used to send `Instruction`s and receive their outcome from the `Environment` as well as subscribe to events. 8 | To make this `Connection` and `ArbiterMiddleware` flexible, we also implement (for both) the `JsonRpcClient` and `PubSubClient` traits. 9 | 10 | We also provide `ArbiterMiddleware` a wallet so that it can be associated to an account in the `Environment`'s world state. 11 | The `wallet: EOA` field of `ArbiterMiddleware` is decided upon creation of the `ArbiterMiddleware` and, if the wallet is generated from calling `ArbiterMiddleware::new()`, wallet will be of `EOA::Wallet(Wallet)` which allows for `ArbiterMiddleware` to sign transactions if need be. 12 | It is possible to create accounts from a forked database, in which case you would call `ArbiterMiddleware::new_from_forked_eoa()` and the wallet would be of `EOA::Forked(Address)`. 13 | This type is unable to sign as it is effectively impossible to recover the signing key from an address. 14 | Fortunately, for almost every usecase of `ArbiterMiddleware`, you will not need to sign transactions, so this distinction does not matter. 15 | 16 | ## Usage 17 | 18 | To create a `ArbiterMiddleware` that is associated with an account in the `Environment`'s world state, we can do the following: 19 | ```rust, ignore 20 | use arbiter_core::{middleware::ArbiterMiddleware, environment::Environment}; 21 | 22 | fn main() { 23 | let env = Environment::builder().build(); 24 | 25 | // Create a client for the above `Environment` with an ID 26 | let id = "alice"; 27 | let alice = ArbiterMiddleware::new(&env, Some(id)); 28 | 29 | // Create a client without an ID 30 | let client = ArbiterMiddleware::new(&env, None); 31 | } 32 | ``` 33 | These created clients can then get access to making calls and transactions to contracts deployed into the `Environment`'s world state. 
We can do the following: 34 | ```rust, ignore 35 | use arbiter_core::{middleware::ArbiterMiddleware, environment::Environment}; 36 | use arbiter_bindings::bindings::arbiter_token::ArbiterToken; 37 | 38 | #[tokio::main] 39 | async fn main() { 40 | let env = Environment::builder().build(); 41 | let client = ArbiterMiddleware::new(&env, None).unwrap(); 42 | 43 | // Deploy a contract 44 | let contract = ArbiterToken::deploy(client, ("ARBT".to_owned(), "Arbiter Token".to_owned(), 18u8)).unwrap().send().await.unwrap(); 45 | } 46 | ``` 47 | -------------------------------------------------------------------------------- /docs/src/usage/arbiter_engine/agents_and_engines.md: -------------------------------------------------------------------------------- 1 | # Agents and Engines 2 | `Behavior`s are the heartbeat of your `Agent`s and they are wrapped by `Engine`s. 3 | The main idea here is that you can have an `Agent` that has as many `Behavior`s as you like, and each of those behaviors may process different types of events. 4 | This gives flexibility in how you want to design your `Agent`s and what emergent properties you want to observe. 5 | 6 | ## Design Principles 7 | We designed the behaviors to be flexible. It is up to you whether you prefer to have `Agent`s with multiple `Behavior`s or a single `Behavior` that processes all events. 8 | For the former case, you will build a `Behavior` for each of the different event types `E` and place these inside of an `Agent`. 9 | For the latter, you will create an `enum` that wraps all the different types of events that you want to process and then implement `Behavior` on that `enum`. 10 | The latter will also require a `stream::select` type of operation to merge all the different event streams into one, though this is not difficult to do. 11 | 12 | ## `struct Agent` 13 | The `Agent` struct is the primary struct that you will be working with. 14 | It contains an ID, a client (`Arc<RevmMiddleware>`) that provides means to send calls and transactions to an Arbiter `Environment`, and a `Messager`. 15 | It looks like this: 16 | ```rust, ignore 17 | pub struct Agent { 18 | pub id: String, 19 | pub messager: Messager, 20 | pub client: Arc<RevmMiddleware>, 21 | pub(crate) behavior_engines: Vec<Box<dyn StateMachine>>, 22 | } 23 | ``` 24 | 25 | Your work will only be to define `Behavior`s and then add them to an `Agent` with the `Agent::with_behavior` method. 26 | 27 | The `Agent` is inactive until it is paired with a `World` and then it is ready to be run. 28 | This is handled by creating a world (see: [Worlds and Universes](./worlds_and_universes.md)) and then adding the `Agent` to the `World` with the `World::add_agent` method. 29 | Some of the intermediary representations are below: 30 | 31 | #### `struct AgentBuilder` 32 | The `AgentBuilder` struct is a builder pattern for creating `Agent`s. 33 | This is essentially invisible for the end-user, but it is used internally so that `Agent`s can be built in a more ergonomic way. 34 | 35 | #### `struct Engine` 36 | Briefly, the `Engine` struct provides the machinery to run a `Behavior` and it is not necessary for you to handle this directly. 37 | The purpose of this design is to encapsulate the `Behavior` and the event stream `Stream` that the `Behavior` will use for processing. 38 | This encapsulation also allows the `Agent` to hold onto `Behavior`s for various different event types `E` all at once. 39 | 40 | ## Example 41 | Let's create an `Agent` that has two `Behavior`s using the `Replier` behavior from before.
42 | ```rust, ignore 43 | use arbiter_engine::agent::Agent; 44 | use crate::Replier; 45 | 46 | fn setup() { 47 | let ping_replier = Replier::new("ping", "pong", 5, None); 48 | let pong_replier = Replier::new("pong", "ping", 5, Some("ping")); 49 | let agent = Agent::builder("my_agent") 50 | .with_behavior(ping_replier) 51 | .with_behavior(pong_replier); 52 | } 53 | ``` 54 | In this example, we have created an `Agent` with two `Replier` behaviors. 55 | The `ping_replier` will reply to a message with "pong" and the `pong_replier` will reply to a message with "ping". 56 | Given that the `pong_replier` has a `startup_message` of "ping", it will send a message to everyone (including "my_agent" itself, which holds the `ping_replier` behavior) when it starts up. 57 | This will start a chain of messages that will continue in a "ping" "pong" fashion until the `max_count` is reached. 58 | 59 | -------------------------------------------------------------------------------- /docs/src/usage/arbiter_engine/behaviors.md: -------------------------------------------------------------------------------- 1 | # Behaviors 2 | The design of `arbiter-engine` is centered around the concept of `Agent`s and `Behavior`s. 3 | At the core, we place `Behavior`s as the event-driven machinery that defines the entire simulation. 4 | What we want is for your simulation to be defined completely by how your `Agent`s' behaviors are defined. 5 | All you should be looking for is how to define your `Agent`s' behaviors and what emergent properties you want to observe. 6 | 7 | ## `trait Behavior` 8 | To define a `Behavior`, you need to implement the `Behavior` trait on a struct of your own design. 9 | The `Behavior` trait is defined as follows: 10 | ```rust, ignore 11 | pub trait Behavior<E> { 12 | async fn startup(&mut self, client: Arc<RevmMiddleware>, messager: Messager) -> Result<EventStream<E>, ArbiterEngineError>; 13 | async fn process(&mut self, event: E) -> Result<ControlFlow, ArbiterEngineError>; 14 | } 15 | ``` 16 | To outline the design principles here: 17 | - `startup` is a method that initializes the `Behavior` and returns an `EventStream` that the `Behavior` will use for processing. 18 | - This method yields a client and messager from the `Agent` that owns the `Behavior`. 19 | In this method you should take the client and messager and store them in your struct if you will need them in the processing of events. 20 | Note, you may not need them! 21 | - `process` is a method that processes an event of type `E` and returns a `ControlFlow`. 22 | - If `process` returns `ControlFlow::Halt`, then the `Behavior` will stop processing events completely. 23 | 24 | **Summary:** A `Behavior` amounts to a way of processing some events of type `E`. 25 | 26 | **Advice:** `Behavior`s should be limited in scope and should be a simple action driven by a single event. 27 | Otherwise you risk having a `Behavior` that is too complex and difficult to understand and maintain. 28 | 29 | ### Example 30 | To see this in use, let's take a look at an example of a `Behavior` called `Replier` that replies to a message with a message of its own, and stops once it has replied a certain number of times.
31 | ```rust, ignore 32 | use std::sync::Arc; 33 | use arbiter_core::middleware::RevmMiddleware; 34 | use arbiter_engine::{ 35 | machine::{Behavior, ControlFlow}, 36 | messager::{Message, Messager, To}, 37 | EventStream}; 38 | 39 | pub struct Replier { 40 | receive_data: String, 41 | send_data: String, 42 | max_count: u64, 43 | startup_message: Option<String>, 44 | count: u64, 45 | messager: Option<Messager>, 46 | } 47 | 48 | impl Replier { 49 | pub fn new( 50 | receive_data: String, 51 | send_data: String, 52 | max_count: u64, 53 | startup_message: Option<String>, 54 | ) -> Self { 55 | Self { 56 | receive_data, 57 | send_data, 58 | startup_message, 59 | max_count, 60 | count: 0, 61 | messager: None, 62 | } 63 | } 64 | } 65 | 66 | impl Behavior<Message> for Replier { 67 | async fn startup( 68 | &mut self, 69 | client: Arc<RevmMiddleware>, 70 | messager: Messager, 71 | ) -> Result<EventStream<Message>, ArbiterEngineError> { 72 | if let Some(startup_message) = &self.startup_message { 73 | messager.send(To::All, startup_message).await; 74 | } 75 | self.messager = Some(messager.clone()); 76 | messager.stream() 77 | } 78 | 79 | async fn process(&mut self, event: Message) -> Result<ControlFlow, ArbiterEngineError> { 80 | if event.data == self.receive_data { 81 | self.messager.as_ref().unwrap().send(To::All, &self.send_data).await; 82 | self.count += 1; 83 | } 84 | if self.count == self.max_count { 85 | return Ok(ControlFlow::Halt); 86 | } 87 | Ok(ControlFlow::Continue) 88 | } 89 | } 90 | ``` 91 | In this example, we have a `Behavior` that upon `startup` will see if there is a `startup_message` assigned and, if so, send it to all `Agent`s that are listening to their `Messager`. 92 | Then, it will store the `Messager` for sending messages later on and start a stream of incoming messages, so that we have `E = Message` in this case. 93 | Once these are completed, the `Behavior` automatically transitions into the `process`ing stage where events are popped from the `EventStream` and fed to the `process` method. 94 | 95 | As messages come in, if the incoming message's data matches `receive_data`, then the `Behavior` will send a message with data `send_data` to all `Agent`s listening to their `Messager`. -------------------------------------------------------------------------------- /docs/src/usage/arbiter_engine/configuration.md: -------------------------------------------------------------------------------- 1 | # Configuration 2 | To make it so you rarely have to recompile your project, you can use a configuration file to set the parameters of your simulation once your `Behavior`s have been defined. 3 | Let's take a look at how to do this. 4 | 5 | ## Behavior Enum 6 | It is good practice to take your `Behavior`s and wrap them in an `enum` so that you can use them in a configuration file. 7 | For instance, let's say you have two structs `Maker` and `Taker` that each implement `Behavior` for their own event type `E`. 8 | Then you can make your `enum` like this: 9 | ```rust, ignore 10 | use arbiter_macros::Behaviors; 11 | 12 | #[derive(Behaviors)] 13 | pub enum Behaviors { 14 | Maker(Maker), 15 | Taker(Taker), 16 | } 17 | ``` 18 | Notice that we used the `Behaviors` derive macro from the `arbiter_macros` crate. 19 | This macro will generate an implementation of a `CreateStateMachine` trait for the `Behaviors` enum and ultimately save you from having to write a lot of boilerplate code. 20 | The macro solely requires that the `Behavior`s you have implement the `Behavior` trait and that the necessary imports are in scope.
21 | 22 | ## Configuration File 23 | Now that you have your `enum` of `Behavior`s, you can configure your `World` and the `Agent`s inside of it from configuration file. 24 | Since the `World` and your simulation is completely defined by the `Agent` `Behavior`s you make, all you need to do is specify your `Agent`s in the configuration file. 25 | For example, let's say we have the `Replier` behavior from before, so we have: 26 | ```rust, ignore 27 | #[derive(Behaviors)] 28 | pub enum Behaviors { 29 | Replier(Replier), 30 | } 31 | 32 | pub struct Replier { 33 | receive_data: String, 34 | send_data: String, 35 | max_count: u64, 36 | startup_message: Option, 37 | count: u64, 38 | messager: Option, 39 | } 40 | ``` 41 | Then, we can specify the "ping" and "pong" `Behavior`s like this: 42 | ```toml 43 | [[my_agent]] 44 | Replier = { send_data = "ping", receive_data = "pong", max_count = 5, startup_message = "ping" } 45 | 46 | [[my_agent]] 47 | Replier = { send_data = "pong", receive_data = "ping", max_count = 5 } 48 | ``` 49 | If you instead wanted to specify two `Agent`s "Alice" and "Bob" each with one of the `Replier` `Behavior`s, you could do it like this: 50 | ```toml 51 | [[alice]] 52 | Replier = { send_data = "ping", receive_data = "pong", max_count = 5, startup_message = "ping" } 53 | 54 | [[bob]] 55 | Replier = { send_data = "pong", receive_data = "ping", max_count = 5 } 56 | ``` 57 | 58 | ## Loading the Configuration 59 | Once you have your configuration file located at `./path/to/config.toml`, you can load it and run your simulation like this: 60 | ```rust, ignore 61 | fn main() { 62 | let world = World::from_config("./path/to/config.toml")?; 63 | world.run().await; 64 | } 65 | ``` 66 | At the moment, we do not configure `Universe`s from a configuration file, but this is a feature that is planned for the future. -------------------------------------------------------------------------------- /docs/src/usage/arbiter_engine/index.md: -------------------------------------------------------------------------------- 1 | # Arbiter Engine 2 | `arbiter-engine` provides the machinery to build agent based / event driven simulations and should be the primary entrypoint for using Arbiter. 3 | The goal of this crate is to abstract away the work required to set up agents, their behaviors, and the worlds they live in. 4 | At the moment, all interaction of agents is done through the `arbiter-core` crate and is meant to be for local simulations and it is not yet generalized for the case of live network automation. 5 | 6 | ## Hierarchy 7 | The primary components of `arbiter-engine` are, from the bottom up: 8 | - `Behavior`: This is an event-driven behavior that takes in some item of type `E` and can act on that. 9 | The `Behavior` has two methods: `startup` and `process`. 10 | - `startup` is meant to initialize the `Behavior` and any context around it. 11 | An example could be an agent that deploys token contracts on startup. 12 | - `process` is meant to be a stage that runs on every event that comes in. 13 | An example could be an agent that deployed token contracts on startup, and now wants to process queries about the tokens deployed in the simulation (e.g., what their addresses are). 14 | - `Engine` and `StateMachine`: The `Engine` is a struct that implements the `StateMachine` trait as an entrypoint to run `Behavior`s. 15 | - `Engine` is a struct owns a `B: Behavior` and the event stream `Stream` that the `Behavior` will use for processing. 
16 | - `StateMachine` is a trait that reduces the interface to `Engine` to a single method: `execute`. 17 | This trait allows `Agent`s to have multiple behaviors that may not use the same event type. 18 | - `Agent` a struct that contains an ID, a client (`Arc`) that provides means to send calls and transactions to an Arbiter `Environment`, and a `Messager`. 19 | - `Messager` is a struct that owns a `Sender` and `Receiver` for sending and receiving messages. 20 | This is a way for `Agent`s to communicate with each other. 21 | It can also be streamed and used for processing messages in a `Behavior`. 22 | - `Agent` also owns a `Vec>` which is a list of `StateMachine`s that the `Agent` will run. 23 | This is a way for `Agent`s to have multiple `Behavior`s that may not use the same event type. 24 | - `World` is a struct that has an ID, an Arbiter `Environment`, a mapping of `Agent`s, and a `Messager`. 25 | - The `World` is tasked with letting `Agent`s join in, and when they do so, to connect them to the `Environment` with a client and `Messager` with the `Agent`'s ID. 26 | - `Universe` is a struct that wraps a mapping of `World`s. 27 | - The `Universe` is tasked with letting `World`s join in and running those `World`s in parallel. 28 | -------------------------------------------------------------------------------- /docs/src/usage/arbiter_engine/worlds_and_universes.md: -------------------------------------------------------------------------------- 1 | # Worlds and Universes 2 | `Universes` are the top-level struct that you will be working with in the Arbiter Engine. 3 | They are tasked with letting `World`s join in and running those `World`s in parallel. 4 | By no means are you required to use `Universe`s, but they will be useful for running multiple simulations at once or, in the future, they will allow for running `World`s that have different internal environments. 5 | For instance, one could have a `World` that consists of `Agent`s acting on the Ethereum mainnet, another `World` that consists of `Agent`s acting on Optimism, and finally a `World` that has an Arbiter `Environment` as the network analogue. 6 | Using these in tandem is a long-term goal of the Arbiter project. 7 | 8 | Depending on your needs, you will either use the `Universe` if you want to run multiple `World`s in parallel or you will use the `World` if you only want to run a single simulation. 9 | The choice is yours. 10 | 11 | ## `struct Universe` 12 | The `Universe` struct looks like this: 13 | ```rust, ignore 14 | pub struct Universe { 15 | worlds: Option>, 16 | world_tasks: Option>>, 17 | } 18 | ``` 19 | The `Universe` is a struct that wraps a mapping of `World`s where the key of the map is the `World`'s ID. 20 | Also, the `Universe` manages the running of those `World`s in parallel by storing the running `World`s as tasks. 21 | In the future, more introspection and control will be added to the `Universe` to allow for debugging and managing the running `World`s. 22 | 23 | The `Universe::run_worlds` currently iterates through the `World`s and starts them in concurrent tasks. 24 | 25 | ## `struct World` 26 | The `World` struct looks like this: 27 | ```rust, ignore 28 | pub struct World { 29 | pub id: String, 30 | pub agents: Option>, 31 | pub environment: Environment, 32 | pub messager: Messager, 33 | } 34 | ``` 35 | The `World` is a struct that has an ID, an Arbiter `Environment`, a mapping of `Agent`s, and a `Messager`. 
36 | The `World` is tasked with letting `Agent`s join in, and when they do so, to connect them to the `Environment` with a client and a `Messager` associated with the `Agent`'s ID. 37 | Then the `World` stores the `Agent`s in a map where the key is the `Agent`'s ID. 38 | 39 | The main methods to use with the `World` are `World::add_agent`, which adds an agent to the `World`, and `World::run`, which will engage all of the `Agent` `Behavior`s. 40 | 41 | In future development, the `World` will be generic over your choice of `Provider` that encapsulates the Ethereum-like execution environment you want to use (e.g., Ethereum mainnet, Optimism, or an Arbiter `Environment`). 42 | 43 | ## Example 44 | Let's first do a quick example where we take a `World` and add an `Agent` to it. 45 | ```rust, ignore 46 | use arbiter_engine::{agent::Agent, world::World}; 47 | use crate::Replier; 48 | 49 | fn setup_world(id: &str) -> World { 50 | let ping_replier = Replier::new("ping", "pong", 5, None); 51 | let pong_replier = Replier::new("pong", "ping", 5, Some("ping")); 52 | let agent = Agent::new("my_agent") 53 | .with_behavior(ping_replier) 54 | .with_behavior(pong_replier); 55 | let mut world = World::new(id); 56 | world.add_agent(agent); 57 | world 58 | } 59 | async fn run() { 60 | let world = setup_world("my_world"); 61 | world.run().await; 62 | } 63 | ``` 64 | If you wanted to extend this to use a `Universe`, you would simply create a `Universe` and add the `World` to it. 65 | ```rust, ignore 66 | use arbiter_engine::{agent::Agent, universe::Universe, world::World}; 67 | use crate::Replier; 68 | 69 | fn setup_world(id: &str) -> World { 70 | let ping_replier = Replier::new("ping", "pong", 5, None); 71 | let pong_replier = Replier::new("pong", "ping", 5, Some("ping")); 72 | let agent = Agent::new("my_agent") 73 | .with_behavior(ping_replier) 74 | .with_behavior(pong_replier); 75 | let mut world = World::new(id); 76 | world.add_agent(agent); 77 | world 78 | } 79 | #[tokio::main] 80 | async fn main() { 81 | let mut universe = Universe::new(); 82 | universe.add_world(setup_world("my_world")); 83 | universe.add_world(setup_world("my_other_world")); 84 | universe.run_worlds().await; 85 | } 86 | ``` -------------------------------------------------------------------------------- /docs/src/usage/arbiter_macros.md: -------------------------------------------------------------------------------- 1 | # Arbiter Macros 2 | `arbiter_macros` provides a set of macros to help with the use of `arbiter-engine` and `arbiter-core`. 3 | Macros allow for code generation, which enables developers to write code that writes code. 4 | We use them here to reduce boilerplate by abstracting repetitive patterns. 5 | Macros can be used for tasks like deriving traits automatically or for generating code based on custom attributes. 6 | 7 | ## Procedural Macros 8 | 9 | > **`#[derive(Behaviors)]`** 10 | This Rust procedural macro automatically implements the [CreateStateMachine](https://github.com/anthias-labs/arbiter/blob/ffbbd146dc05f3e1088a9df5cf78452a1bef2212/macros/src/lib.rs#L68) trait for an enum, generating a [create_state_machine](https://github.com/anthias-labs/arbiter/blob/ffbbd146dc05f3e1088a9df5cf78452a1bef2212/macros/src/lib.rs#L26) method that matches each enum variant to a new state machine instance. 11 | It's designed for enums where each variant contains a single unnamed field representing state data. 12 | This macro simplifies the creation of state machines from enums, eliminating repetitive boilerplate code and enhancing code maintainability.
13 | 14 | ### Example 15 | You can use this macro like so: 16 | ```rust, ignore 17 | use arbiter_macros::Behaviors; 18 | use arbiter_engine::machine::Behavior; 19 | 20 | struct MyBehavior1 {} 21 | impl Behavior for MyBehavior1 { 22 | // ... 23 | } 24 | 25 | struct MyBehavior2 {} 26 | impl Behavior for MyBehavior2 { 27 | // ... 28 | } 29 | 30 | 31 | #[derive(Behaviors)] 32 | enum Behaviors { 33 | MyBehavior1(MyBehavior1), 34 | MyBehavior2(MyBehavior2), 35 | } 36 | ``` 37 | 38 | > **`#[main]`** 39 | The [`#[arbiter_macros::main]`](https://github.com/anthias-labs/arbiter/blob/ffbbd146dc05f3e1088a9df5cf78452a1bef2212/macros/src/lib.rs#L161) macro in `arbiter-macros/src/lib.rs` is designed to simplify the creation of a CLI that will let you run your simulations by automatically generating a `main` function that sets up command-line parsing, logging, async execution, and world creation. 40 | It takes custom attributes to configure the application's metadata such as the project's name, description, and the set of behaviors you want to use. 41 | Under the hood, it uses the [clap](https://crates.io/crates/clap) crate for parsing CLI arguments and [tracing](https://crates.io/crates/tracing) for logging based on verbosity level. 42 | The macro needs to be given an object that has the `CreateStateMachine` trait implemented, which can be done using the `#[derive(Behaviors)]` macro. 43 | 44 | 45 | ## Usage 46 | You can find an example that uses both of these macros in the [arbiter-template repository](https://github.com/anthias-labs/arbiter-template). 47 | Similarly, in the Arbiter repo itself, this exact same collection of code is found in the `examples/template/` directory. 48 | 49 | If you wanted to use the `#[main]` macro alongside the `#[derive(Behaviors)]` macro, you would do so like this: 50 | ```rust, ignore 51 | use arbiter_macros::main; 52 | 53 | use crate::Behaviors; // From the `Behaviors` example above 54 | 55 | 56 | #[main( 57 | name = "ExampleArbiterProject", 58 | about = "Our example to get you started.", 59 | behaviors = Behaviors 60 | )] 61 | pub async fn main() {} 62 | ``` -------------------------------------------------------------------------------- /docs/src/usage/index.md: -------------------------------------------------------------------------------- 1 | # Software Architecture 2 | Arbiter is broken into a number of crates that provide different levels of abstraction for interacting with the Ethereum Virtual Machine (EVM) sandbox. 3 | 4 | ## Arbiter Core 5 | The `arbiter-core` crate is the core of the Arbiter framework. 6 | It contains the `Environment` struct which acts as an EVM sandbox and the `RevmMiddleware` which gives a convenient interface for interacting with contracts deployed into the `Environment`. 7 | Direct usage of `arbiter-core` should be minimal, as developers are intended to mostly pull from the `arbiter-engine` crate in the future. 8 | This crate provides the interface for agents to interact with an in-memory EVM. 9 | 10 | ## Arbiter Engine 11 | The `arbiter-engine` crate is the main interface for running simulations. 12 | It is built on top of `arbiter-core` and provides a more ergonomic interface for designing agents and running them in simulations. 13 | 14 | ## Arbiter CLI (under construction) 15 | The Arbiter CLI is a minimal interface for managing your Arbiter projects. 16 | It is built on top of Foundry and aims to provide a similar CLI experience for setting up and interacting with Arbiter projects.
-------------------------------------------------------------------------------- /docs/src/usage/techniques/anomaly_detection.md: -------------------------------------------------------------------------------- 1 | # Anomaly Detection 2 | 3 | Anomaly detection is the process of identifying unexpected items or events in data sets, which differ from the norm. 4 | Anomaly detection is often applied to unlabeled data, which is known as unsupervised anomaly detection. 5 | 6 | When you are building your simulation, you are trying to discover unknown unknowns and carefully examine design assumptions. 7 | This is a difficult task and it is not always clear what you are looking for. 8 | As a result, the best place to start is to design a simulation that will validate the existing design assumptions. 9 | -------------------------------------------------------------------------------- /docs/src/usage/techniques/index.md: -------------------------------------------------------------------------------- 1 | # Techniques 2 | 3 | At a high level, when you are designing a simulation, the two things you need to think about are behaviors and one or more random variables. 4 | A random variable is what you can perturb over the course of a simulation. 5 | For example, almost all economic models have a random variable that represents the price. 6 | This allows you to see how the model behaves under different prices or market conditions. 7 | Does this system handle price volatility well? 8 | Or does it break down? -------------------------------------------------------------------------------- /docs/src/usage/techniques/measuring_risk.md: -------------------------------------------------------------------------------- 1 | # Measuring Risk 2 | ## Quantifying Security Risk 3 | 4 | Quantitative security is a field of research that applies mathematical and statistical methods to studying cybersecurity. It aims to quantify and model security risks, vulnerabilities, and impacts, providing a more objective and measurable approach to security management. Quantitative security can be used to assess the effectiveness of security controls, identify vulnerabilities, and predict the impact of security incidents. It can also be used to evaluate the effectiveness of security policies and procedures. 5 | 6 | In software design, quantitative security can be used to quantify the economic risk of a system. This involves modeling the system's behavior and then using statistical methods to analyze the data generated by the model. The results can be used to identify vulnerabilities and predict the impact of security incidents. 7 | 8 | Risk is understood as 9 | $$ 10 | \text{risk} = \text{impact} \times \text{likelihood} 11 | $$ 12 | where impact is the cost of exploitation and likelihood is the probability of exploitation.
13 | 14 | For example, impact can be estimated as a weighted sum of consequences: 15 | ```rust ignore 16 | pub fn calculate_impact(consequences: Vec<f64>, weights: Option<Vec<f64>>) -> f64 { 17 | let weights = match weights { 18 | Some(w) => w, 19 | None => vec![1.0; consequences.len()], // If no weights are provided, assume equal importance 20 | }; 21 | consequences.iter().zip(weights.iter()).map(|(c, w)| c * w).sum() 22 | } 23 | 24 | // Example: Data loss (e.g., $5000), downtime (e.g., 10 hours), reputational damage (e.g., 7 on a scale of 10) 25 | let consequences = vec![5000.0, 10.0, 7.0]; 26 | let weights = Some(vec![0.5, 0.3, 0.2]); // Weights reflecting the relative importance of each consequence 27 | let impact = calculate_impact(consequences, weights); 28 | println!("{}", impact); // Outputs: 2504.4 29 | ``` 30 | 31 | The likelihood of exploitation is a function of historical frequency, threat capability, control effectiveness, and environmental factors, all of which are between zero and one. Threat capability is a metric quantifying threat actor sophistication and resources, control effectiveness quantifies the effectiveness of security controls, and the environment factor quantifies the security of the environment in which the system operates. 32 | 33 | ```rust ignore 34 | fn calculate_likelihood(historical_frequency: f64, threat_capability: f64, control_effectiveness: f64, environment_factor: f64) -> f64 { 35 | historical_frequency * threat_capability * (1.0 - control_effectiveness) * environment_factor 36 | } 37 | 38 | fn main() { 39 | // Example: High historical frequency (e.g., 0.8), high threat capability (e.g., 0.9), medium control effectiveness (e.g., 0.5), high environment factor (e.g., 1.0) 40 | let likelihood = calculate_likelihood(0.8, 0.9, 0.5, 1.0); 41 | println!("{}", likelihood); // Outputs: 0.36 42 | } 43 | ``` 44 | 45 | ## Economic Risk 46 | 47 | Economic risk in the context of finance can be quantified by considering various factors such as: 48 | 49 | - **Market Risk**: This is the risk of investments declining in value because of economic developments or other events that affect the entire market. For example, the risk of a decline in the stock market. 50 | 51 | - **Credit Risk**: This is the risk that a borrower will not repay a loan according to the loan terms, resulting in a loss to the lender—for example, the risk of a company defaulting on its bonds. 52 | 53 | - **Operational Risk**: This is the risk of loss resulting from inadequate or failed internal processes, people, and systems or external events—for example, the risk of a data breach due to insufficient cybersecurity measures. 54 | 55 | - **Liquidity Risk**: This is the risk that an investor will not be able to sell an investment when they wish because of a lack of buyers in the market—for example, the risk of being unable to sell real estate quickly at a fair price. 56 | 57 | These risks can be quantified using various financial models and statistical methods. For example, Value at Risk (VaR) is commonly used to quantify market risk. Given a certain level of confidence and time horizon, it estimates the potential loss that could occur on an investment. 58 | 59 | Credit risk can be quantified using credit scoring models like the Altman Z-score, which predicts the probability of a company going bankrupt. Operational risk can be quantified using methods like the loss distribution approach (LDA), where the frequency and severity of losses are modeled to estimate the total loss. Liquidity risk can be quantified using the bid-ask spread or the liquidity coverage ratio (LCR).
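To make the market-risk piece concrete, here is a minimal sketch of historical-simulation VaR; the function name and the sample returns are illustrative, not part of Arbiter.

```rust ignore
/// Historical-simulation Value at Risk (VaR).
/// Given a sample of periodic returns and a confidence level (e.g., 0.95),
/// this returns the loss threshold that is only exceeded (1 - confidence) of the time.
fn value_at_risk(returns: &[f64], confidence: f64) -> f64 {
    let mut sorted = returns.to_vec();
    sorted.sort_by(|a, b| a.partial_cmp(b).unwrap());
    // Index of the (1 - confidence) quantile of the return distribution.
    let index = ((1.0 - confidence) * sorted.len() as f64).floor() as usize;
    // Report VaR as a positive loss figure.
    -sorted[index.min(sorted.len() - 1)]
}

fn main() {
    // Example: ten daily portfolio returns (illustrative numbers).
    let returns = vec![-0.02, 0.01, -0.035, 0.004, -0.01, 0.02, -0.05, 0.015, -0.005, 0.03];
    let var_95 = value_at_risk(&returns, 0.95);
    println!("95% one-day VaR: {:.1}% of portfolio value", var_95 * 100.0); // Outputs: 5.0%
}
```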
60 | 61 | It's important to note that these are just examples, and quantifying economic risk in finance is a complex process that requires a deep understanding of financial theories and statistical models. 62 | 63 | 64 | # Metrics 65 | Data plays a crucial role in quantifying risk and modeling systems. It provides the foundation for statistical analysis and predictive modeling, enabling us to measure and understand the behavior of systems under various conditions. We can identify patterns, trends, and correlations by analyzing data to help us predict future events or outcomes. This is particularly important in economic risk, where accurate predictions can help mitigate potential losses and optimize returns. 66 | 67 | The particular metrics we have been interested in (by no means exhaustive or representative of the entire field) are: 68 | 69 | ## Arbitrage Profit 70 | Arbitrage profit is the profit made by taking advantage of the price differences of a particular asset across different markets or platforms. In DeFi, these opportunities can arise due to inefficiencies in asset pricing. If related to a decentralized exchange, such as an automated market maker (AMM), mathematical metrics can be derived to compute the cost and revenue of these arbitrage opportunities exactly. 71 | 72 | There are generally two types of arbitrage opportunities in DeFi: 73 | 74 | >Atomic arbitrage opportunities in DeFi are transactions that are either fully executed or not executed at all. This is possible due to the atomicity of the Ethereum Virtual Machine (EVM), which ensures that all operations within a transaction are treated as a single, indivisible unit. The entire transaction is reverted if any operation fails, ensuring no partial state changes occur. This characteristic of the EVM allows for risk-free arbitrage opportunities, as the arbitrageur is not exposed to the risk of one part of the trade executing while the other does not. 75 | 76 | >Non-atomic arbitrage opportunities in DeFi are transactions that are partially executed. This is possible due to the lack of atomicity in the EVM, allowing partial state changes to occur. If one part of the trade fails, the other can still be executed, resulting in a partial state change. This characteristic of the EVM allows for riskier arbitrage opportunities, as the arbitrageur is exposed to the risk of one part of the trade executing while the other is not. 77 | 78 | Non-atomic arbitrage is much more challenging to measure and model, requiring a more complex understanding of the EVM and its execution model. However, atomic arbitrage is [easy to measure](https://explore.flashbots.net/), as it only requires a basic understanding of the EVM and its execution model. 79 | 80 | ## Liquidity Provider Portfolio Value 81 | Liquidity Provider Portfolio Value refers to the payoff that an LP assumes when providing liquidity to a pool; see [Replicating Market Makers](https://arxiv.org/abs/2103.14769) and [Replicating Monotonic Payoffs Without Oracles](https://arxiv.org/abs/2111.13740). 82 | 83 | This payoff has been shown to have two components, a path-dependent and a path-independent component, which were introduced in this [paper](https://arxiv.org/abs/2208.06046) as loss vs. holding (LVH) and loss vs. rebalancing (LVR), respectively. 84 | 85 | ## Fee Growth 86 | 87 | Fee Growth in Automated Market Makers (AMMs) refers to the fees collected by the liquidity providers over time.
These fees are generated from the trading activity in the liquidity pool and are directly proportional to the volume of trades. The more the trading activity (turnover), the higher the fees collected, leading to a growth in the fees. This fee growth can be a significant source of income for liquidity providers, in addition to the potential price appreciation of the assets in the pool. 88 | 89 | ## Model Parameters 90 | 91 | ## Geometric Brownian Motion (GBM) 92 | 93 | Geometric Brownian Motion (GBM) is a standard method to model price paths in financial markets. Two parameters characterize it: 94 | 95 | 1. **Drift (μ)**: This represents the asset's expected return. It is the direction that we expect our asset to move in the future. 96 | 97 | 2. **Volatility (σ)**: This represents the standard deviation of the asset's returns. It is a measure of the asset's risk or uncertainty. 98 | 99 | The GBM model assumes that the logarithmic returns of the asset prices are normally distributed and that the following stochastic differential equation can model them: 100 | 101 | $$ 102 | dS_t = μS_t dt + σS_t dW_t 103 | $$ 104 | 105 | Where: 106 | 107 | - $S_t$ is the asset price at time t 108 | - $μ$ is the drift 109 | - $σ$ is the volatility 110 | - $W_t$ is a Wiener process 111 | 112 | This equation describes the change in the asset price over an infinitesimally small period. The first term on the right-hand side represents the deterministic trend (drift), and the second term represents the random fluctuation (volatility). 113 | 114 | -------------------------------------------------------------------------------- /docs/src/usage/techniques/stateful_testing.md: -------------------------------------------------------------------------------- 1 | # Stateful Testing 2 | -------------------------------------------------------------------------------- /docs/src/vulnerability_corpus.md: -------------------------------------------------------------------------------- 1 | # Vulnerability Corpus 2 | 3 | Here is a running list of vulnerabilities that have been found with Arbiter. This list is not exhaustive, but it is a good starting point for understanding how to use Arbiter to find vulnerabilities. Arbiter has a unique ability to detect anomaly behavior in a production-like environment. This can be used to audit mechanism design in smart contract systems as well as detect vulnerabilities in smart contracts. 4 | 5 | --- 6 | 7 | ## Vulnerabilities 8 | 9 | ### Portfolio Rebalancing: Severity - High 10 | 11 | This was a critical vulnerability discovered in the [Portfolio Contracts](https://github.com/primitivefinance/portfolio) that we were auditing internally. The bug is described in this [PR](https://github.com/primitivefinance/portfolio_simulations/pull/36/files). To reproduce the vulnerability you can run the following command: 12 | 13 | ```bash 14 | git clone https://github.com/primitivefinance/portfolio_simulations.git 15 | cd portfolio_simulations 16 | git checkout (bug-found)-invariant-pre-post-swap 17 | cargo run --release 18 | ``` 19 | The bug was not caught by our [prior audits](https://github.com/primitivefinance/security) and [extensive test suit](https://github.com/primitivefinance/portfolio/tree/main/test). The simulation ran an arbitrageur against the Portfolio AMM and a stochastic price path. The bug was identified after 18,000 swaps. 
It turns out that Portfolio pools can reach an edge case where the pool reaches one of the tails of its liquidity distribution and causes the invariant to jump, affecting the price of the trade. This would allow a swapper to take advantage of the mispriced funds and take funds from LPs. With Arbiter, we were able to run ~20,000 swaps with this emulated protocol state in parallel with other parameters in <30s, allowing us to discover this anomaly. 20 | 21 | --- 22 | 23 | ## Rating System 24 | 25 | **Low**: Includes both Non-critical (code style, clarity, syntax, versioning, off-chain monitoring (events, etc.)) and Low risk (e.g. assets are not at risk: state handling, function incorrect as to spec, issues with comments). 26 | 27 | **Med**: Assets not at direct risk, but the function of the protocol or its availability could be impacted, or leak value with a hypothetical attack path with stated assumptions, but external requirements. 28 | 29 | **High**: Assets can be stolen/lost/compromised directly (or indirectly if there is a valid attack path that does not have hand-wavy hypotheticals). These are considered critical issues that should be addressed immediately. 30 | 31 | These criteria are based on the [Code4rena](https://docs.code4rena.com/awarding/judging-criteria/severity-categorization) judging criteria. 32 | 33 | ### Resources for Classifying Vulnerabilities 34 | - [CVSS](https://www.first.org/cvss/v3.0/user-guide) system. 35 | - [OWASP](https://owasp.org/www-community/vulnerabilities/) system. 36 | - [SWC](https://swcregistry.io/) system. 37 | - [Code4rena](https://docs.code4rena.com/awarding/judging-criteria/severity-categorization) 38 | 39 | ## Contributing to the Corpus 40 | 41 | If you find any vulnerabilities with Arbiter, please submit a pull request to this file with the vulnerability, a description of it, a link to the Arbiter repo, a post mortem, and steps to reproduce. If the vulnerability is in the wild and has not yet been patched, please do your best to work with the team responsible for the vulnerability to resolve it before disclosing it publicly.
-------------------------------------------------------------------------------- /docs/tests/skeptic.rs: -------------------------------------------------------------------------------- 1 | include!(concat!(env!("OUT_DIR"), "/skeptic-tests.rs")); 2 | -------------------------------------------------------------------------------- /examples/leader/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | authors = ["Harness Labs"] 3 | description = "Interactive Vietoris-Rips Complex Demo using cova" 4 | edition = "2021" 5 | name = "leader" 6 | version = "0.1.0" 7 | 8 | [package.metadata.wasm-pack.profile.release] 9 | wasm-opt = false 10 | 11 | [workspace] 12 | 13 | # Server binary 14 | [[bin]] 15 | name = "server" 16 | path = "src/server.rs" 17 | 18 | [dependencies] 19 | 20 | [target.'cfg(target_arch = "wasm32")'.dependencies] 21 | arbiter-core = { path = "../../arbiter-core", features = ["wasm"] } 22 | console_error_panic_hook = { version = "0.1" } 23 | getrandom = { version = "0.2", features = ["js"] } 24 | gloo-timers = { version = "0.2", features = ["futures"] } 25 | js-sys = { version = "0.3" } 26 | serde = { version = "1.0", features = ["derive"] } 27 | serde_json = { version = "1.0" } 28 | wasm-bindgen = { version = "0.2" } 29 | wasm-bindgen-futures = { version = "0.4" } 30 | web-sys = { version = "0.3", features = [ 31 | "console", 32 | "CanvasRenderingContext2d", 33 | "HtmlCanvasElement", 34 | "HtmlInputElement", 35 | "MouseEvent", 36 | "Element", 37 | "Document", 38 | "Window", 39 | "Event", 40 | "EventTarget", 41 | ] } 42 | 43 | [target.'cfg(not(target_arch = "wasm32"))'.dependencies] 44 | tokio = { version = "1.0", features = ["macros", "rt-multi-thread"] } 45 | warp = { version = "0.3" } 46 | 47 | [lib] 48 | crate-type = ["cdylib", "rlib"] 49 | 50 | [profile.release] 51 | codegen-units = 1 52 | lto = true 53 | opt-level = 3 54 | panic = "abort" 55 | strip = true 56 | -------------------------------------------------------------------------------- /examples/leader/src/server.rs: -------------------------------------------------------------------------------- 1 | //! # Vietoris-Rips Demo Server 2 | //! 3 | //! A simple web server that serves the interactive Vietoris-Rips complex demo. 4 | //! 5 | //! ## Usage 6 | //! ```bash 7 | //! cargo run --bin server 8 | //! ``` 9 | //! 
Then open http://localhost:3030 10 | 11 | #[cfg(not(target_arch = "wasm32"))] use warp::Filter; 12 | 13 | const HTML_CONTENT: &str = include_str!("../index.html"); 14 | 15 | #[cfg(not(target_arch = "wasm32"))] 16 | #[tokio::main] 17 | async fn main() { 18 | println!("🦀 Starting Leader-Follower Demo Server..."); 19 | 20 | // Serve the main HTML page 21 | let index = warp::path::end().map(|| warp::reply::html(HTML_CONTENT)); 22 | 23 | // Serve WASM files from pkg directory 24 | let wasm_files = warp::path("pkg").and(warp::fs::dir("pkg")); 25 | 26 | // Combine routes with CORS 27 | let routes = index.or(wasm_files).with(warp::cors().allow_any_origin()); 28 | 29 | println!("🌐 Demo available at: http://localhost:3030"); 30 | println!("📖 Click to add points, right-click to remove, adjust epsilon slider!"); 31 | println!("🛑 Press Ctrl+C to stop the server"); 32 | 33 | warp::serve(routes).run(([127, 0, 0, 1], 3030)).await; 34 | } 35 | 36 | #[cfg(target_arch = "wasm32")] 37 | pub fn main() { 38 | panic!("This is a server"); 39 | } 40 | -------------------------------------------------------------------------------- /justfile: -------------------------------------------------------------------------------- 1 | default: 2 | @just --list 3 | 4 | [private] 5 | warn := "\\033[33m" 6 | error := "\\033[31m" 7 | info := "\\033[34m" 8 | success := "\\033[32m" 9 | reset := "\\033[0m" 10 | bold := "\\033[1m" 11 | 12 | # Print formatted headers without shell scripts 13 | [private] 14 | header msg: 15 | @printf "{{info}}{{bold}}==> {{msg}}{{reset}}\n" 16 | 17 | # Install cargo tools 18 | install-cargo-tools: 19 | @just header "Installing Cargo tools" 20 | # cargo-udeps 21 | if ! command -v cargo-udeps > /dev/null; then \ 22 | printf "{{info}}Installing cargo-udeps...{{reset}}\n" && \ 23 | cargo install cargo-udeps --locked; \ 24 | else \ 25 | printf "{{success}}✓ cargo-udeps already installed{{reset}}\n"; \ 26 | fi 27 | # cargo-semver-checks 28 | if ! command -v cargo-semver-checks > /dev/null; then \ 29 | printf "{{info}}Installing cargo-semver-checks...{{reset}}\n" && \ 30 | cargo install cargo-semver-checks; \ 31 | else \ 32 | printf "{{success}}✓ cargo-semver-checks already installed{{reset}}\n"; \ 33 | fi 34 | # taplo 35 | if ! command -v taplo > /dev/null; then \ 36 | printf "{{info}}Installing taplo...{{reset}}\n" && \ 37 | cargo install taplo-cli; \ 38 | else \ 39 | printf "{{success}}✓ taplo already installed{{reset}}\n"; \ 40 | fi 41 | 42 | # Install mdbook and plugins 43 | install-mdbook-tools: 44 | @just header "Installing mdbook and plugins" 45 | if ! command -v mdbook > /dev/null; then \ 46 | printf "{{info}}Installing mdbook...{{reset}}\n" && \ 47 | cargo install mdbook; \ 48 | else \ 49 | printf "{{success}}✓ mdbook already installed{{reset}}\n"; \ 50 | fi 51 | if ! command -v mdbook-linkcheck > /dev/null; then \ 52 | printf "{{info}}Installing mdbook-linkcheck...{{reset}}\n" && \ 53 | cargo install mdbook-linkcheck; \ 54 | else \ 55 | printf "{{success}}✓ mdbook-linkcheck already installed{{reset}}\n"; \ 56 | fi 57 | if ! 
command -v mdbook-katex > /dev/null; then \ 58 | printf "{{info}}Installing mdbook-katex...{{reset}}\n" && \ 59 | cargo install mdbook-katex; \ 60 | else \ 61 | printf "{{success}}✓ mdbook-katex already installed{{reset}}\n"; \ 62 | fi 63 | 64 | # Install nightly rust 65 | install-rust-nightly: 66 | @just header "Installing Rust nightly" 67 | rustup install nightly 68 | 69 | # Setup complete development environment 70 | setup: install-cargo-tools install-rust-nightly install-mdbook-tools 71 | @printf "{{success}}{{bold}}Development environment setup complete!{{reset}}\n" 72 | 73 | # Check the with local OS target 74 | check: 75 | @just header "Building workspace" 76 | cargo build --workspace --all-targets 77 | 78 | # Build with local OS target 79 | build: 80 | @just header "Building workspace" 81 | cargo build --workspace --all-targets 82 | 83 | # Build with local OS target 84 | build-wasm: 85 | @just header "Building workspace" 86 | cargo build --workspace --all-targets --target wasm32-unknown-unknown 87 | 88 | # Run the tests on your local OS 89 | test: 90 | @just header "Running main test suite" 91 | cargo test --workspace --all-targets --all-features 92 | @just header "Running doc tests" 93 | cargo test --workspace --doc 94 | 95 | # Run clippy for the workspace on your local OS 96 | lint: 97 | @just header "Running clippy" 98 | cargo clippy --workspace --all-targets --all-features 99 | 100 | # Run clippy for the workspace on WASM 101 | lint-wasm: 102 | @just header "Running clippy" 103 | cargo clippy --workspace --all-targets --all-features --target wasm32-unknown-unknown 104 | 105 | # Check for semantic versioning for workspace crates 106 | semver: 107 | @just header "Checking semver compatibility" 108 | cargo semver-checks check-release --workspace 109 | 110 | # Run format for the workspace 111 | fmt: 112 | @just header "Formatting code" 113 | cargo fmt --all 114 | taplo fmt 115 | 116 | # Check for unused dependencies in the workspace 117 | udeps: 118 | @just header "Checking unused dependencies" 119 | cargo +nightly udeps --workspace 120 | 121 | # Run cargo clean to remove build artifacts 122 | clean: 123 | @just header "Cleaning build artifacts" 124 | cargo clean 125 | 126 | # Serve the mdbook documentation (with live reload) 127 | book: 128 | @just header "Serving mdbook documentation" 129 | mdbook serve 130 | 131 | book-check: 132 | @just header "Checking mdbook documentation" 133 | mdbook build 134 | 135 | # Open cargo docs in browser 136 | docs: 137 | @just header "Building and opening cargo docs" 138 | cargo doc --workspace --no-deps --open 139 | 140 | doc-check: 141 | @just header "Checking cargo docs" 142 | RUSTDOCFLAGS="-D warnings" cargo doc --no-deps --all-features 143 | 144 | # Show your relevant environment information 145 | info: 146 | @just header "Environment Information" 147 | @printf "{{info}}OS:{{reset}} %s\n" "$(uname -s)" 148 | @printf "{{info}}Rust:{{reset}} %s\n" "$(rustc --version)" 149 | @printf "{{info}}Cargo:{{reset}} %s\n" "$(cargo --version)" 150 | @printf "{{info}}Installed targets:{{reset}}\n" 151 | @rustup target list --installed | sed 's/^/ /' 152 | 153 | # Run all possible CI checks (cannot test a non-local OS target!) 
154 | ci: 155 | @printf "{{bold}}Starting CI checks{{reset}}\n\n" 156 | @ERROR=0; \ 157 | just run-single-check "Rust formatting" "cargo fmt --all -- --check" || ERROR=1; \ 158 | just run-single-check "TOML formatting" "taplo fmt --check" || ERROR=1; \ 159 | just run-single-check "Check" "cargo check --workspace" || ERROR=1; \ 160 | just run-single-check "Clippy" "cargo clippy --workspace --all-targets --all-features -- --deny warnings" || ERROR=1; \ 161 | just run-single-check "Test suite" "cargo test --verbose --workspace" || ERROR=1; \ 162 | just run-single-check "Doc check" "RUSTDOCFLAGS=\"-D warnings\" cargo doc --no-deps --all-features" || ERROR=1; \ 163 | just run-single-check "Unused dependencies" "cargo +nightly udeps --workspace" || ERROR=1; \ 164 | just run-single-check "Semver compatibility" "cargo semver-checks check-release --workspace" || ERROR=1; \ 165 | printf "\n{{bold}}CI Summary:{{reset}}\n"; \ 166 | if [ $ERROR -eq 0 ]; then \ 167 | printf "{{success}}{{bold}}All checks passed successfully!{{reset}}\n"; \ 168 | else \ 169 | printf "{{error}}{{bold}}Some checks failed. See output above for details.{{reset}}\n"; \ 170 | exit 1; \ 171 | fi 172 | 173 | # Run a single check and return status (0 = pass, 1 = fail) 174 | [private] 175 | run-single-check name command: 176 | #!/usr/bin/env sh 177 | printf "{{info}}{{bold}}Running{{reset}} {{info}}%s{{reset}}...\n" "{{name}}" 178 | if {{command}} > /tmp/check-output 2>&1; then 179 | printf " {{success}}{{bold}}PASSED{{reset}}\n" 180 | exit 0 181 | else 182 | printf " {{error}}{{bold}}FAILED{{reset}}\n" 183 | printf "{{error}}----------------------------------------\n" 184 | while IFS= read -r line; do 185 | printf "{{error}}%s{{reset}}\n" "$line" 186 | done < /tmp/check-output 187 | printf "{{error}}----------------------------------------{{reset}}\n" 188 | exit 1 189 | fi 190 | 191 | # Success summary (called if all checks pass) 192 | [private] 193 | _ci-summary-success: 194 | @printf "\n{{bold}}CI Summary:{{reset}}\n" 195 | @printf "{{success}}{{bold}}All checks passed successfully!{{reset}}\n" 196 | 197 | # Failure summary (called if any check fails) 198 | [private] 199 | _ci-summary-failure: 200 | @printf "\n{{bold}}CI Summary:{{reset}}\n" 201 | @printf "{{error}}{{bold}}Some checks failed. See output above for details.{{reset}}\n" 202 | @exit 1 203 | 204 | 205 | --------------------------------------------------------------------------------