├── .github ├── CODEOWNERS ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md └── workflows │ ├── ci.yml │ └── release.yml ├── .gitignore ├── Cargo.lock ├── Cargo.toml ├── LICENSE ├── README.md ├── build.rs ├── config.dhall ├── examples ├── func.sh ├── greet.wasm ├── install.sh ├── neuron.sh ├── wallet.did └── wallet.sh └── src ├── account_identifier.rs ├── command.rs ├── error.rs ├── exp.rs ├── governance.did ├── grammar.lalrpop ├── grammar.rs ├── helper.rs ├── ic.did ├── ledger.did ├── main.rs ├── offline.rs ├── profiling.rs ├── selector.rs ├── token.rs └── utils.rs /.github/CODEOWNERS: -------------------------------------------------------------------------------- 1 | * @dfinity/dx 2 | 3 | -------------------------------------------------------------------------------- /.github/CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | We as members, contributors, and leaders pledge to make participation in our 6 | community a harassment-free experience for everyone, regardless of age, body 7 | size, visible or invisible disability, ethnicity, sex characteristics, gender 8 | identity and expression, level of experience, education, socio-economic status, 9 | nationality, personal appearance, race, religion, or sexual identity 10 | and orientation. 11 | 12 | We pledge to act and interact in ways that contribute to an open, welcoming, 13 | diverse, inclusive, and healthy community. 14 | 15 | ## Our Standards 16 | 17 | Examples of behavior that contributes to a positive environment for our 18 | community include: 19 | 20 | * Demonstrating empathy and kindness toward other people 21 | * Being respectful of differing opinions, viewpoints, and experiences 22 | * Giving and gracefully accepting constructive feedback 23 | * Accepting responsibility and apologizing to those affected by our mistakes, 24 | and learning from the experience 25 | * Focusing on what is best not just for us as individuals, but for the 26 | overall community 27 | 28 | Examples of unacceptable behavior include: 29 | 30 | * The use of sexualized language or imagery, and sexual attention or 31 | advances of any kind 32 | * Trolling, insulting or derogatory comments, and personal or political attacks 33 | * Public or private harassment 34 | * Publishing others' private information, such as a physical or email 35 | address, without their explicit permission 36 | * Other conduct which could reasonably be considered inappropriate in a 37 | professional setting 38 | 39 | ## Enforcement Responsibilities 40 | 41 | Community leaders are responsible for clarifying and enforcing our standards of 42 | acceptable behavior and will take appropriate and fair corrective action in 43 | response to any behavior that they deem inappropriate, threatening, offensive, 44 | or harmful. 45 | 46 | Community leaders have the right and responsibility to remove, edit, or reject 47 | comments, commits, code, wiki edits, issues, and other contributions that are 48 | not aligned to this Code of Conduct, and will communicate reasons for moderation 49 | decisions when appropriate. 50 | 51 | ## Scope 52 | 53 | This Code of Conduct applies within all community spaces, and also applies when 54 | an individual is officially representing the community in public spaces. 
55 | Examples of representing our community include using an official e-mail address, 56 | posting via an official social media account, or acting as an appointed 57 | representative at an online or offline event. 58 | 59 | ## Enforcement 60 | 61 | Instances of abusive, harassing, or otherwise unacceptable behavior may be 62 | reported to the community leaders responsible for enforcement at 63 | DFINITY. 64 | All complaints will be reviewed and investigated promptly and fairly. 65 | 66 | All community leaders are obligated to respect the privacy and security of the 67 | reporter of any incident. 68 | 69 | ## Enforcement Guidelines 70 | 71 | Community leaders will follow these Community Impact Guidelines in determining 72 | the consequences for any action they deem in violation of this Code of Conduct: 73 | 74 | ### 1. Correction 75 | 76 | **Community Impact**: Use of inappropriate language or other behavior deemed 77 | unprofessional or unwelcome in the community. 78 | 79 | **Consequence**: A private, written warning from community leaders, providing 80 | clarity around the nature of the violation and an explanation of why the 81 | behavior was inappropriate. A public apology may be requested. 82 | 83 | ### 2. Warning 84 | 85 | **Community Impact**: A violation through a single incident or series 86 | of actions. 87 | 88 | **Consequence**: A warning with consequences for continued behavior. No 89 | interaction with the people involved, including unsolicited interaction with 90 | those enforcing the Code of Conduct, for a specified period of time. This 91 | includes avoiding interactions in community spaces as well as external channels 92 | like social media. Violating these terms may lead to a temporary or 93 | permanent ban. 94 | 95 | ### 3. Temporary Ban 96 | 97 | **Community Impact**: A serious violation of community standards, including 98 | sustained inappropriate behavior. 99 | 100 | **Consequence**: A temporary ban from any sort of interaction or public 101 | communication with the community for a specified period of time. No public or 102 | private interaction with the people involved, including unsolicited interaction 103 | with those enforcing the Code of Conduct, is allowed during this period. 104 | Violating these terms may lead to a permanent ban. 105 | 106 | ### 4. Permanent Ban 107 | 108 | **Community Impact**: Demonstrating a pattern of violation of community 109 | standards, including sustained inappropriate behavior, harassment of an 110 | individual, or aggression toward or disparagement of classes of individuals. 111 | 112 | **Consequence**: A permanent ban from any sort of public interaction within 113 | the community. 114 | 115 | ## Attribution 116 | 117 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], 118 | version 2.0, available at 119 | https://www.contributor-covenant.org/version/2/0/code_of_conduct.html. 120 | 121 | Community Impact Guidelines were inspired by [Mozilla's code of conduct 122 | enforcement ladder](https://github.com/mozilla/diversity). 123 | 124 | [homepage]: https://www.contributor-covenant.org 125 | 126 | For answers to common questions about this code of conduct, see the FAQ at 127 | https://www.contributor-covenant.org/faq. Translations are available at 128 | https://www.contributor-covenant.org/translations. 
129 | -------------------------------------------------------------------------------- /.github/CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing 2 | 3 | Thank you for your interest in contributing to the Rust crates for the Internet Computer. 4 | By participating in this project, you agree to abide by our [Code of Conduct](./CODE_OF_CONDUCT.md). 5 | 6 | As a member of the community, you are invited and encouraged to contribute by submitting issues, offering suggestions for improvements, adding review comments to existing pull requests, or creating new pull requests to fix issues. 7 | 8 | All contributions to DFINITY documentation and the developer community are respected and appreciated. 9 | Your participation is an important factor in the success of the Internet Computer. 10 | 11 | ## Contents of this repository 12 | 13 | This repository contains source code for the canister interface description language—often referred to as Candid or IDL. Candid provides a common language for specifying the signature of a canister service and interacting with canisters running on the 14 | Internet Computer. 15 | 16 | ## Before you contribute 17 | 18 | Before contributing, please take a few minutes to review these contributor guidelines. 19 | The contributor guidelines are intended to make the contribution process easy and effective for everyone involved in addressing your issue, assessing changes, and finalizing your pull requests. 20 | 21 | Before contributing, consider the following: 22 | 23 | - If you want to report an issue, click **Issues**. 24 | 25 | - If you have more general questions related to Candid and its use, post a message to the [community forum](https://forum.dfinity.org/) or submit a [support request](mailto://support@dfinity.org). 26 | 27 | - If you are reporting a bug, provide as much information about the problem as possible. 28 | 29 | - If you want to contribute directly to this repository, typical fixes might include any of the following: 30 | 31 | - Fixes to resolve bugs or documentation errors 32 | - Code improvements 33 | - Feature requests 34 | 35 | Note that any contribution to this repository must be submitted in the form of a **pull request**. 36 | 37 | - If you are creating a pull request, be sure that the pull request only implements one fix or suggestion. 38 | 39 | If you are new to working with GitHub repositories and creating pull requests, consider exploring [First Contributions](https://github.com/firstcontributions/first-contributions) or [How to Contribute to an Open Source Project on GitHub](https://egghead.io/courses/how-to-contribute-to-an-open-source-project-on-github). 40 | 41 | # How to make a contribution 42 | 43 | Depending on the type of contribution you want to make, you might follow different workflows. 44 | 45 | This section describes the most common workflow scenarios: 46 | 47 | - Reporting an issue 48 | - Submitting a pull request 49 | 50 | ### Reporting an issue 51 | 52 | To open a new issue: 53 | 54 | 1. Click **Issues**. 55 | 56 | 1. Click **New Issue**. 57 | 58 | 1. Click **Open a blank issue**. 59 | 60 | 1. Type a title and description, then click **Submit new issue**. 61 | 62 | Be as clear and descriptive as possible. 63 | 64 | For any problem, describe it in detail, including details about the crate, the version of the code you are using, the results you expected, and how the actual results differed from your expectations. 
65 | 66 | ### Submitting a pull request 67 | 68 | If you want to submit a pull request to fix an issue or add a feature, here's a summary of what you need to do: 69 | 70 | 1. Make sure you have a GitHub account, an internet connection, and access to a terminal shell or GitHub Desktop application for running commands. 71 | 72 | 1. Navigate to the DFINITY public repository in a web browser. 73 | 74 | 1. Click **Fork** to create a copy the repository associated with the issue you want to address under your GitHub account or organization name. 75 | 76 | 1. Clone the repository to your local machine. 77 | 78 | 1. Create a new branch for your fix by running a command similar to the following: 79 | 80 | ```bash 81 | git checkout -b my-branch-name-here 82 | ``` 83 | 84 | 1. Open the file you want to fix in a text editor and make the appropriate changes for the issue you are trying to address. 85 | 86 | 1. Add the file contents of the changed files to the index `git` uses to manage the state of the project by running a command similar to the following: 87 | 88 | ```bash 89 | git add path-to-changed-file 90 | ``` 91 | 1. Commit your changes to store the contents you added to the index along with a descriptive message by running a command similar to the following: 92 | 93 | ```bash 94 | git commit -m "Description of the fix being committed." 95 | ``` 96 | 97 | 1. Push the changes to the remote repository by running a command similar to the following: 98 | 99 | ```bash 100 | git push origin my-branch-name-here 101 | ``` 102 | 103 | 1. Create a new pull request for the branch you pushed to the upstream GitHub repository. 104 | 105 | Provide a title that includes a short description of the changes made. 106 | 107 | 1. Wait for the pull request to be reviewed. 108 | 109 | 1. Make changes to the pull request, if requested. 110 | 111 | 1. Celebrate your success after your pull request is merged! 
112 | -------------------------------------------------------------------------------- /.github/workflows/ci.yml: -------------------------------------------------------------------------------- 1 | name: Tests 2 | on: 3 | push: 4 | branches: 5 | - master 6 | pull_request: 7 | jobs: 8 | test: 9 | runs-on: ubuntu-latest 10 | env: 11 | DFX_VERSION: 0.24.2 12 | steps: 13 | - uses: actions/checkout@v4 14 | - name: Install stable Rust toolchain 15 | uses: actions-rs/toolchain@v1 16 | with: 17 | profile: minimal 18 | toolchain: stable 19 | override: true 20 | components: rustfmt, clippy 21 | - name: Cache cargo build 22 | uses: actions/cache@v4 23 | with: 24 | path: | 25 | ~/.cargo/registry 26 | ~/.cargo/git 27 | target 28 | key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }} 29 | - name: Build 30 | run: cargo build 31 | - name: Run tests 32 | run: cargo test 33 | - name: fmt 34 | run: cargo fmt -v -- --check 35 | - name: lint 36 | run: cargo clippy --tests -- -D clippy::all 37 | - name: Install dfx 38 | uses: dfinity/setup-dfx@main 39 | with: 40 | dfx-version: "${{ env.DFX_VERSION }}" 41 | - name: Run e2e tests against replica 42 | run: | 43 | echo '{}' > dfx.json 44 | dfx start --background --clean 45 | set -ex 46 | target/debug/ic-repl examples/install.sh 47 | target/debug/ic-repl examples/func.sh 48 | dfx stop 49 | - name: Run e2e tests against PocketIC 50 | run: | 51 | echo '{}' > dfx.json 52 | dfx start --background --clean --pocketic 53 | set -ex 54 | target/debug/ic-repl examples/install.sh 55 | target/debug/ic-repl examples/func.sh 56 | dfx stop 57 | -------------------------------------------------------------------------------- /.github/workflows/release.yml: -------------------------------------------------------------------------------- 1 | name: Release 2 | on: 3 | push: 4 | tags: 5 | - '*' 6 | jobs: 7 | build: 8 | name: Release for ${{ matrix.name }} 9 | runs-on: ${{ matrix.os }} 10 | strategy: 11 | fail-fast: false 12 | matrix: 13 | include: 14 | - os: ubuntu-20.04 15 | name: linux64 16 | artifact_name: target/release/ic-repl 17 | asset_name: ic-repl-linux64 18 | - os: macos-13 19 | name: macos 20 | artifact_name: target/release/ic-repl 21 | asset_name: ic-repl-macos 22 | - os: ubuntu-latest 23 | name: arm 24 | artifact_name: target/arm-unknown-linux-gnueabihf/release/ic-repl 25 | asset_name: ic-repl-arm32 26 | steps: 27 | - uses: actions/checkout@v4 28 | - name: Install stable toolchain 29 | if: matrix.name != 'arm' 30 | uses: actions-rs/toolchain@v1 31 | with: 32 | profile: minimal 33 | toolchain: stable 34 | override: true 35 | - name: Install stable ARM toolchain 36 | if: matrix.name == 'arm' 37 | uses: actions-rs/toolchain@v1 38 | with: 39 | profile: minimal 40 | toolchain: stable 41 | override: true 42 | target: arm-unknown-linux-gnueabihf 43 | - name: Build 44 | if: matrix.name != 'arm' 45 | run: cargo build --release --locked 46 | - name: Cross build 47 | if: matrix.name == 'arm' 48 | uses: actions-rs/cargo@v1 49 | with: 50 | use-cross: true 51 | command: build 52 | args: --target arm-unknown-linux-gnueabihf --release --locked 53 | - name: 'Upload assets' 54 | uses: actions/upload-artifact@v3 55 | with: 56 | name: ${{ matrix.asset_name }} 57 | path: ${{ matrix.artifact_name }} 58 | retention-days: 3 59 | test: 60 | needs: build 61 | name: Test for ${{ matrix.os }} 62 | runs-on: ${{ matrix.os }} 63 | strategy: 64 | fail-fast: false 65 | matrix: 66 | include: 67 | - os: ubuntu-22.04 68 | asset_name: ic-repl-linux64 69 | - os: ubuntu-20.04 70 | asset_name: 
ic-repl-linux64 71 | - os: macos-13 72 | asset_name: ic-repl-macos 73 | - os: macos-14 74 | asset_name: ic-repl-macos 75 | steps: 76 | - name: Get executable 77 | id: download 78 | uses: actions/download-artifact@v3 79 | with: 80 | name: ${{ matrix.asset_name }} 81 | - name: Executable runs 82 | run: | 83 | chmod +x ic-repl 84 | ./ic-repl --version 85 | publish: 86 | needs: test 87 | name: Publish ${{ matrix.asset_name }} 88 | strategy: 89 | fail-fast: false 90 | matrix: 91 | include: 92 | - asset_name: ic-repl-linux64 93 | - asset_name: ic-repl-arm32 94 | - asset_name: ic-repl-macos 95 | runs-on: ubuntu-latest 96 | steps: 97 | - name: Get executable 98 | uses: actions/download-artifact@v3 99 | with: 100 | name: ${{ matrix.asset_name }} 101 | - name: Upload binaries to release 102 | uses: svenstaro/upload-release-action@v2 103 | with: 104 | repo_token: ${{ secrets.GITHUB_TOKEN }} 105 | file: ic-repl 106 | asset_name: ${{ matrix.asset_name }} 107 | tag: ${{ github.ref }} 108 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .history 2 | target/ 3 | -------------------------------------------------------------------------------- /Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "ic-repl" 3 | version = "0.7.7" 4 | authors = ["DFINITY Team"] 5 | edition = "2021" 6 | default-run = "ic-repl" 7 | 8 | [[bin]] 9 | name = "ic-repl" 10 | path = "src/main.rs" 11 | doc = false 12 | 13 | [build-dependencies] 14 | lalrpop = "0.20" 15 | 16 | [dependencies] 17 | candid = { version = "0.10", features = ["all"] } 18 | candid_parser = { version = "0.2.0-beta.1", features = ["all"] } 19 | rustyline = "14.0" 20 | rustyline-derive = "0.10" 21 | console = "0.15" 22 | pretty_assertions = "1.4" 23 | codespan-reporting = "0.11" 24 | pretty = "0.12" 25 | pem = "3.0" 26 | shellexpand = "3.1" 27 | ic-agent = "0.39" 28 | ic-identity-hsm = "0.39" 29 | ic-transport-types = "0.39" 30 | ic-wasm = { version = "0.9", default-features = false } 31 | inferno = { version = "0.11", default-features = false, features = [ 32 | "multithreaded", 33 | "nameattr", 34 | ] } 35 | tokio = { version = "1.43", features = ["full"] } 36 | anyhow = "1.0" 37 | rand = "0.8" 38 | logos = "0.14" 39 | lalrpop-util = "0.20" 40 | clap = { version = "4.4", features = ["derive"] } 41 | ed25519-consensus = "2.1.0" 42 | rpassword = "7.2" 43 | serde = "1.0" 44 | serde_json = "1.0" 45 | serde_cbor = "0.11" 46 | hex = { version = "0.4", features = ["serde"] } 47 | sha2 = "0.10" 48 | crc32fast = "1.3" 49 | qrcode = "0.13" 50 | image = { version = "0.24", default-features = false, features = ["png"] } 51 | libflate = "2.0" 52 | base64 = "0.21" 53 | futures = "0.3.30" 54 | reqwest = "0.12.9" 55 | serde_with = { version = "3.11.0", features = ["base64"] } 56 | 57 | # When cross-compiling for ARM, we need to use a vendored version of OpenSSL 58 | [target.arm-unknown-linux-gnueabihf.dependencies] 59 | openssl = { version = "0.10", features = ["vendored"] } 60 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 
8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. 
Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 
179 | 
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 | 
189 | Copyright 2023 DFINITY Stiftung
190 | 
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 | 
195 | http://www.apache.org/licenses/LICENSE-2.0
196 | 
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Canister REPL
2 | 
3 | ```
4 | ic-repl [--replica [local|ic|url] | --offline [--format [json|ascii|png]]] --config [script file] --verbose
5 | ```
6 | 
7 | ## Commands
8 | 
9 | ```
10 | <command> :=
11 |  | import <id> = <text> (as <text>)?   // bind canister URI to <id>, with optional did file
12 |  | load <exp>                          // load and run a script file. Do not error out if <exp> ends with '?'
13 |  | config <text>                       // set config in TOML format
14 |  | let <id> = <exp>                    // bind <exp> to a variable <id>
15 |  | <exp>                               // show the value of <exp>
16 |  | assert <exp> <binop> <exp>          // assertion
17 |  | identity <id> (<text> | record { slot_index = <nat8>; key_id = <text> })?  // switch to identity <id>, with optional pem file or HSM config
18 |  | function <name> ( <id>,* ) { <command>;* }     // define a function
19 |  | if <exp> { <command>;* } else { <command>;* }  // conditional branch
20 |  | while <exp> { <command>;* }         // while loop
21 | <exp> :=
22 |  | <candid val>                        // any candid value
23 |  | <var> <selector>*                   // variable with optional transformers
24 |  | fail <exp>                          // convert error message as text
25 |  | call (as <name>)? <name> . <name> (( <exp>,* ))?   // call a canister method, and store the result as a single value
26 |  | par_call [ (<name> . <name> (( <exp>,* ))),* ]     // make concurrent canister calls, and store the result as a tuple record
27 |  | encode ( <name> . <name> )? (( <exp>,* ))?         // encode candid arguments as a blob value. canister.__init_args represents init args
28 |  | decode (as <name> . <name>)? <exp>                 // decode blob as candid values
29 |  | <id> ( <exp>,* )                    // function application
30 | <var> :=
31 |  | <id>                                // variable name
32 |  | _                                   // previous eval of exp is bound to `_`
33 | <selector> :=
34 |  | ?                                   // select opt value
35 |  | . <name>                            // select field name from record or variant value
36 |  | [ <nat> ]                           // select index from vec, text, record, or variant value
37 |  | . <method> ( <exp>,* )              // transform (map, filter, fold) a collection value
38 | <binop> :=
39 |  | ==                                  // structural equality
40 |  | ~=                                  // equal under candid subtyping; for text value, we check if the right side is contained in the left side
41 |  | !=                                  // not equal
42 | ```
43 | 
44 | ## Functions
45 | 
46 | Similar to most shell languages, functions in ic-repl are dynamically scoped and untyped.
47 | 
48 | We also provide some built-in functions:
49 | * `account(principal)`: convert principal to account id (blob).
50 | * `account(principal, blob)`: convert principal and subaccount (blob) to account id (blob).
51 | * `subaccount(principal)`: convert principal to subaccount (blob).
52 | * `neuron_account(principal, nonce)`: convert (principal, nonce) to an account in the governance canister.
53 | * `file(path)`: load an external file as a blob value.
54 | * `gzip(blob)`: gzip a blob value.
55 | * `replica_url()`: returns the replica URL ic-repl connects to.
56 | * `stringify(exp1, exp2, exp3, ...)`: convert all expressions to strings and concatenate them. Only supports primitive types.
57 | * `output(path, content)`: append text content to file path.
58 | * `export(path, var1, var2, ...)`: write the variable bindings to file path, overwriting the file. The file can be used by the `load` command.
59 | * `wasm_profiling(path)/wasm_profiling(path, record { trace_only_funcs = <vec text>; start_page = <nat>; page_limit = <nat> })`: load a Wasm module, instrument the code, and store it as a blob value. Calling a profiled canister binds the cost to variable `__cost_{id}` or `__cost__`. The second argument is optional, and all fields in the record are also optional. If provided, `trace_only_funcs` will only count and trace the provided set of functions; `start_page` writes the logs to preallocated pages in stable memory; `page_limit` specifies the number of preallocated pages, defaulting to 4096 if omitted. See [ic-wasm's doc](https://github.com/dfinity/ic-wasm#working-with-upgrades-and-stable-memory) for more details.
60 | * `flamegraph(canister_id, title, filename)`: generate a flamegraph for the last update call to canister_id with the given title, and write it to `{filename}.svg`. The cost of the update call is returned.
61 | * `concat(e1, e2)`: concatenate two vec/record/text values together.
62 | * `add/sub/mul/div(e1, e2)`: addition/subtraction/multiplication/division of two integers/floats. If one of the arguments is float32/float64, the result is float64; otherwise, the result is an integer. You can use a type annotation to get the integer part of a float number. For example, `div((mul(div(1, 3.0), 1000) : nat), 100.0)` returns `3.33`.
63 | * `lt/lte/gt/gte(e1, e2)`: check if integer/float `e1` is less than/less than or equal to/greater than/greater than or equal to `e2`.
64 | * `eq/neq(e1, e2)`: check if `e1` and `e2` are equal or not. `e1` and `e2` must have the same type.
65 | * `and/or(e1, e2)/not(e)`: logical and/or/not.
66 | * `exist(e)`: check if `e` can be evaluated without errors. This is useful to check the existence of data, e.g., `exist(res[10])`.
67 | * `ite(cond, e1, e2)`: expression version of a conditional branch. For example, `ite(exist(res.ok), "success", "error")`.
68 | * `exec(cmd, arg1, arg2, ...)/exec(cmd, arg1, arg2, ..., record { silence = <bool>; cwd = <text> })`: execute a bash command. The arguments are all text types. The last line from stdout is parsed by the Candid value parser as the result of the `exec` function. If parsing fails, that line is returned as a text value. You can specify an optional record argument at the end. All fields in the record are optional. If provided, `silence = true` hides the stdout and stderr output; `cwd` specifies the current working directory of the command. There are security risks in running arbitrary bash commands. Be careful about which commands you execute.
69 | 
70 | The following functions are only available in non-offline mode:
71 | * `read_state([effective_id,] prefix, id, paths, ...)`: fetch the state tree path of `<prefix>/<id>/<paths>`.
Some useful examples:
72 |   + candid metadata: `read_state("canister", principal "canister_id", "metadata/candid:service")`
73 |   + canister controllers: `read_state("canister", principal "canister_id", "controllers")`
74 |   + list all subnet ids: `read_state("subnet")`
75 |   + subnet metrics: `read_state("subnet", principal "subnet_id", "metrics")`
76 |   + list subnet nodes: `read_state("subnet", principal "subnet_id", "node")`
77 |   + node public key: `read_state("subnet", principal "subnet_id", "node", principal "node_id", "public_key")`
78 | * `send(blob)`: send signed JSON messages generated from offline mode. The function can take a single message or an array of messages. The most likely use is `send(file("messages.json"))`. The returned value collects the results of all the calls. Alternatively, you can use `ic-repl -s messages.json -r ic`.
79 | 
80 | There is a special `__main` function you can define in the script, which gets executed when loading from the CLI. `__main` can take arguments provided from the CLI. The CLI arguments get parsed by the Candid value parser first. If parsing fails, the argument is stored as a text value. For example, the following code can be called with `ic-repl main.sh -- test 42` and outputs "test43".
81 | 
82 | ### main.sh
83 | ```
84 | function __main(name, n) {
85 |   stringify(name, add(n, 1))
86 | }
87 | ```
88 | 
89 | ## Object methods
90 | 
91 | For `vec`, `record`, or `text` values, we provide some built-in methods for value transformation:
92 | * v.map(func): transform each item `v[i]` with `func(v[i])`.
93 | * v.filter(func): filter out item `v[i]` if `func(v[i])` returns `false` or has an error.
94 | * v.fold(init, func): combine all items in `v` by repeatedly applying `func(...func(func(init, v[0]), v[1])..., v[n-1])`.
95 | * v.size(): count the size of `v`.
96 | 
97 | For a `record` value, `v[i]` is represented as `record { key; value }` sorted by field id.
98 | 
99 | For a `text` value, `v[i]` is represented as a `text` value containing a single character.
100 | 
101 | ## Type casting
102 | 
103 | Type annotations in `ic-repl` are more permissive (not following the subtyping rules) than the Candid library, to allow piping results from different canister calls.
104 | * `("text" : blob)` becomes `blob "text"` and vice versa. Converting `blob` to `text` can raise an error if the blob is not valid UTF-8.
105 | * `(service "aaaaa-aa" : principal)` becomes `principal "aaaaa-aa"`. You can convert among `service`, `principal` and `func`.
106 | * `((((1.99 : nat8) : int) : float32) : nat32)` becomes `(1 : nat32)`. When converting from float to integer, we only return the integer part of the float.
107 | * Type annotations for `record` and `variant` are left unimplemented. With the candid interface embedded in the canister metadata, annotating composite types is almost never needed.
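These casts can be checked directly with `assert` in a script; a minimal sketch reusing the literals above (the values are illustrative only):
```
assert ("this is a text" : blob) == blob "this is a text";         // text -> blob
assert (blob "this is a blob" : text) == "this is a blob";         // blob -> text
assert (service "aaaaa-aa" : principal) == principal "aaaaa-aa";   // service -> principal
assert ((((1.99 : nat8) : int) : float32) : nat32) == (1 : nat32); // float/int chain keeps the integer part
```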
108 | 109 | ## Examples 110 | 111 | ### test.sh 112 | ``` 113 | #!/usr/bin/ic-repl -r ic 114 | // assume we already installed the greet canister 115 | import greet = "rrkah-fqaaa-aaaaa-aaaaq-cai"; 116 | call greet.greet("test"); 117 | let result = _; 118 | assert _ == "Hello, test!"; 119 | identity alice; 120 | call "rrkah-fqaaa-aaaaa-aaaaq-cai".greet("test"); 121 | assert _ == result; 122 | ``` 123 | 124 | ### nns.sh 125 | ``` 126 | #!/usr/bin/ic-repl -r ic 127 | // nns and ledger canisters are auto-imported if connected to the mainnet 128 | call nns.get_pending_proposals() 129 | identity private "./private.pem"; 130 | call ledger.account_balance(record { account = account(private) }); 131 | 132 | function transfer(to, amount, memo) { 133 | call ledger.transfer( 134 | record { 135 | to = to; 136 | fee = record { e8s = 10_000 }; 137 | memo = memo; 138 | from_subaccount = null; 139 | created_at_time = null; 140 | amount = record { e8s = amount }; 141 | }, 142 | ); 143 | }; 144 | function stake(amount, memo) { 145 | let _ = transfer(neuron_account(private, memo), amount, memo); 146 | call nns.claim_or_refresh_neuron_from_account( 147 | record { controller = opt private; memo = memo } 148 | ); 149 | _.result?.NeuronId 150 | }; 151 | let neuron_id = stake(100_000_000, 42); 152 | ``` 153 | 154 | ### install.sh 155 | ``` 156 | #!/usr/bin/ic-repl 157 | function deploy(wasm) { 158 | let id = call ic.provisional_create_canister_with_cycles(record { settings = null; amount = null }); 159 | call ic.install_code( 160 | record { 161 | arg = encode wasm.__init_args(); 162 | wasm_module = wasm; 163 | mode = variant { install }; 164 | canister_id = id.canister_id; 165 | }, 166 | ); 167 | id 168 | }; 169 | 170 | identity alice; 171 | let id = deploy(file("greet.wasm")); 172 | let canister = id.canister_id; 173 | let res = par_call [ic.canister_status(id), canister.greet("test")]; 174 | let status = res[0]; 175 | assert status.settings ~= record { controllers = vec { alice } }; 176 | assert status.module_hash? 
== blob "..."; 177 | assert res[1] == "Hello, test!"; 178 | ``` 179 | 180 | ### wallet.sh 181 | ``` 182 | #!/usr/bin/ic-repl 183 | import wallet = "${WALLET_ID:-rwlgt-iiaaa-aaaaa-aaaaa-cai}" as "wallet.did"; 184 | identity default "~/.config/dfx/identity/default/identity.pem"; 185 | call wallet.wallet_create_canister( 186 | record { 187 | cycles = ${CYCLE:-1_000_000}; 188 | settings = record { 189 | controllers = null; 190 | freezing_threshold = null; 191 | memory_allocation = null; 192 | compute_allocation = null; 193 | }; 194 | }, 195 | ); 196 | let id = _.Ok.canister_id; 197 | call as wallet ic.install_code( 198 | record { 199 | arg = encode (); 200 | wasm_module = file("${WASM_FILE}"); 201 | mode = variant { install }; 202 | canister_id = id; 203 | }, 204 | ); 205 | call id.greet("test"); 206 | ``` 207 | 208 | ### profiling.sh 209 | ``` 210 | #!/usr/bin/ic-repl 211 | import "install.sh"; 212 | 213 | let file = "result.md"; 214 | output(file, "# profiling result\n\n"); 215 | output(file, "|generate|get|put|\n|--:|--:|--:|\n"); 216 | 217 | let cid = deploy(gzip(wasm_profiling("hashmap.wasm"))); 218 | call cid.__toggle_tracing(); // Disable flamegraph tracing 219 | call cid.generate(50000); 220 | output(file, stringify(__cost__, "|")); 221 | 222 | call cid.__toggle_tracing(); // Enable flamegraph tracing 223 | call cid.batch_get(50); 224 | flamegraph(cid, "hashmap.get(50)", "get"); 225 | output(file, stringify("[", __cost__, "](get.svg)|")); 226 | 227 | let put = call cid.batch_put(50); 228 | flamegraph(cid, "hashmap.put(50)", "put.svg"); 229 | output(file, stringify("[", __cost_put, "](put.svg)|\n")); 230 | ``` 231 | 232 | ### recursion.sh 233 | ``` 234 | function fib(n) { 235 | let _ = ite(lt(n, 2), 1, add(fib(sub(n, 1)), fib(sub(n, 2)))) 236 | }; 237 | function fib2(n) { 238 | let a = 1; 239 | let b = 1; 240 | while gt(n, 0) { 241 | let b = add(a, b); 242 | let a = sub(b, a); 243 | let n = sub(n, 1); 244 | }; 245 | let _ = a; 246 | }; 247 | function fib3(n) { 248 | if lt(n, 2) { 249 | let _ = 1; 250 | } else { 251 | let _ = add(fib3(sub(n, 1)), fib3(sub(n, 2))); 252 | } 253 | }; 254 | assert fib(10) == 89; 255 | assert fib2(10) == 89; 256 | assert fib3(10) == 89; 257 | ``` 258 | 259 | ## Relative paths 260 | 261 | Several commands and functions are taking arguments from the file system. We have different definitions for 262 | relative paths, depending on whether you are reading or writing the file. 263 | 264 | * For reading files, e.g., `import`, `load`, `identity`, `file`, `wasm_profiling`, relative paths are based on where the current script is located; 265 | * For writing files, e.g., `export`, `output`, `flamegraph`, relative paths are based on the current directory when the script is run. 266 | 267 | The rationale for the difference is that we can have an easier time to control where the output files are located, as scripts can spread out in different directories. 268 | 269 | ## Derived forms 270 | 271 | * `call as proxy_canister target_canister.method(args)` is a shorthand for 272 | ``` 273 | let _ = call proxy_canister.wallet_call( 274 | record { 275 | args = encode target_canister.method(args); 276 | cycles = 0; 277 | method_name = "method"; 278 | canister = principal "target_canister"; 279 | } 280 | ); 281 | decode as target_canister.method _.Ok.return 282 | ``` 283 | 284 | ## Canister init args types 285 | 286 | When calling `ic.install_code`, you may need to provide a Candid message for initializing the canister. 
287 | To help with encoding the message, you can use get the init args types from the Wasm module custom section: 288 | ``` 289 | let wasm = file("a.wasm"); 290 | encode wasm.__init_args(...) 291 | ``` 292 | 293 | If the Wasm module doesn't contain the init arg types, you can import the full did file as a workaround: 294 | ``` 295 | import init = "2vxsx-fae" as "did_file_with_init_args.did"; 296 | encode init.__init_args(...) 297 | ``` 298 | 299 | ## Contributing 300 | 301 | Please follow the guidelines in the [CONTRIBUTING.md](.github/CONTRIBUTING.md) document. 302 | 303 | ## Issues 304 | 305 | * Autocompletion within Candid value 306 | * Robust support for `~=`, requires inferring principal types 307 | * Loop detection for `load` 308 | * Assert upgrade correctness 309 | -------------------------------------------------------------------------------- /build.rs: -------------------------------------------------------------------------------- 1 | fn main() { 2 | lalrpop::Configuration::new() 3 | .use_cargo_dir_conventions() 4 | .emit_rerun_directives(true) 5 | .process_file("src/grammar.lalrpop") 6 | .unwrap(); 7 | } 8 | -------------------------------------------------------------------------------- /config.dhall: -------------------------------------------------------------------------------- 1 | { canister = 2 | { cancan = "sqf3m-qqaaa-aaaab-qaeaa-cai" 3 | , hello = "rwlgt-iiaaa-aaaaa-aaaaa-cai" 4 | , asset = "rrkah-fqaaa-aaaaa-aaaaq-cai" 5 | , linkedup = "ryjl3-tyaaa-aaaaa-aaaba-cai" 6 | , connectd = "rrkah-fqaaa-aaaaa-aaaaq-cai" 7 | , linkedup_asset = "r7inp-6aaaa-aaaaa-aaabq-cai" 8 | } 9 | , imgUrl.text = Some "path" 10 | , lastName.text = Some "name" 11 | , firstName.text = Some "name" 12 | , company.text = Some "company" 13 | , experience.text = Some "bs" 14 | , education.text = Some "country" 15 | , text = Some "emoji" 16 | } 17 | -------------------------------------------------------------------------------- /examples/func.sh: -------------------------------------------------------------------------------- 1 | function f(x) { 2 | let _ = x.id; 3 | }; 4 | function f2(x) { let _ = record { abc = x.id; } }; 5 | function f3(x) { let _ = exist(x.y) }; 6 | function f3_2(x) { let _ = x.y }; 7 | function f4(acc, x) { let _ = add(acc, x) }; 8 | let x = vec{record {id=1;x=opt 2};record {id=2;y=opt 5}}; 9 | assert x.map(f) == vec {1;2}; 10 | assert x.map(f2) == vec {record { abc = 1 }; record { abc = 2 }}; 11 | assert x.filter(f3) == vec { record {id=2; y=opt 5}}; 12 | assert x.filter(f3).map(f) == vec {2}; 13 | assert x.map(f).fold(0, f4) == 3; 14 | 15 | let y = vec { variant { y = 1 }; variant { x = "error" }; variant { y = 2 } }; 16 | assert y.filter(f3).map(f3_2) == vec {1;2}; 17 | assert y[sub(y.size(), 1)].y == 2; 18 | 19 | let z = record { opt 1;2;opt 3;opt 4 }; 20 | function f5(x) { let _ = record { x[0]; x[1]? } }; 21 | function f6(x) { let _ = exist(x[1]?) }; 22 | function f7(acc, x) { let _ = concat(acc, vec{x[1]}) }; 23 | assert z.filter(f6).map(f5) == record { 1; 2 = 3; 4 }; 24 | assert z.filter(f6).map(f5).fold(vec{}, f7) == vec {1;3;4}; 25 | assert z[sub(z.size(), 1)]? 
== 4; 26 | 27 | let s = "abcdef"; 28 | function f8(x) { let _ = stringify(" ", x) }; 29 | function f9(acc, x) { let _ = add(acc, 1) }; 30 | assert s.map(f8) == " a b c d e f"; 31 | assert s.map(f8).fold(0, f9) == 12; 32 | assert s.map(f8).size() == (12 : nat); 33 | assert s[sub(s.size(), 1)] == "f"; 34 | 35 | assert div(1, 2) == 0; 36 | assert div(1, 2.0) == 0.5; 37 | assert div((mul(div(((1:nat8):float32), (3:float64)), 1000) : nat), 100.0) == 3.33; 38 | assert eq("text", "text") == true; 39 | assert not(eq("text", "text")) == false; 40 | assert eq(div(1,2), sub(2,2)) == true; 41 | assert gt(div(1, 2.0), 1) == false; 42 | assert eq(div(1, 2.0), 0.5) == true; 43 | assert and(lte(div(1, 2), 0), gte(div(1, 2), 0)) == true; 44 | assert or(lt(div(1, 2), 0), gt(div(1, 2), 0)) == false; 45 | 46 | assert (service "aaaaa-aa" : principal) == principal "aaaaa-aa"; 47 | assert eq((service "aaaaa-aa" : principal), principal "aaaaa-aa") == true; 48 | assert (func "aaaaa-aa".test : service {}) == service "aaaaa-aa"; 49 | assert (principal "aaaaa-aa" : service {}) == service "aaaaa-aa"; 50 | 51 | assert account(principal "aaaaa-aa") == blob "\2d\0e\89\7f\7e\86\2d\2b\57\d9\bc\9e\a5\c6\5f\9a\24\ac\6c\07\45\75\f4\78\98\31\4b\8d\6c\b0\92\9d"; 52 | assert subaccount(principal "aaaaa-aa") == blob "\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00"; 53 | assert account(principal "aaaaa-aa", subaccount(principal "aaaaa-aa")) == blob "\2d\0e\89\7f\7e\86\2d\2b\57\d9\bc\9e\a5\c6\5f\9a\24\ac\6c\07\45\75\f4\78\98\31\4b\8d\6c\b0\92\9d"; 54 | assert account(principal "aaaaa-aa", subaccount(principal "2vxsx-fae")) == blob "\ad\2f\2a\2f\19\a4\ef\fd\a2\af\d4\44\66\12\37\cf\77\4f\44\95\df\68\bd\67\1f\b4\16\0a\ca\5b\13\41"; 55 | 56 | assert ("this is a text" : blob) == blob "this is a text"; 57 | assert (blob "this is a blob" : text) == "this is a blob"; 58 | 59 | function fac(n) { 60 | if eq(n, 0) { 61 | let _ = 1; 62 | } else { 63 | let _ = mul(n, fac(sub(n, 1))); 64 | } 65 | }; 66 | function fac2(n) { 67 | let res = 1; 68 | while gt(n, 0) { 69 | let res = mul(res, n); 70 | let n = sub(n, 1); 71 | }; 72 | let _ = res; 73 | }; 74 | function fac3(n) { 75 | let _ = ite(eq(n, 0), 1, mul(n, fac3(sub(n, 1)))) 76 | }; 77 | function fib(n) { 78 | let _ = ite(lt(n, 2), 1, add(fib(sub(n, 1)), fib(sub(n, 2)))) 79 | }; 80 | function fib2(n) { 81 | let a = 1; 82 | let b = 1; 83 | while gt(n, 0) { 84 | let b = add(a, b); 85 | let a = sub(b, a); 86 | let n = sub(n, 1); 87 | }; 88 | let _ = a; 89 | }; 90 | function fib3(n) { 91 | if lt(n, 2) { 92 | let _ = 1; 93 | } else { 94 | let _ = add(fib3(sub(n, 1)), fib3(sub(n, 2))); 95 | } 96 | }; 97 | function __main() { 98 | assert fac(5) == 120; 99 | assert fac2(5) == 120; 100 | assert fac3(5) == 120; 101 | assert fib(10) == 89; 102 | assert fib2(10) == 89; 103 | assert fib3(10) == 89; 104 | } 105 | -------------------------------------------------------------------------------- /examples/greet.wasm: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/dfinity/ic-repl/e3cbed2cc5c202184e740268e4582fdf80c169c9/examples/greet.wasm -------------------------------------------------------------------------------- /examples/install.sh: -------------------------------------------------------------------------------- 1 | #!/ic-repl 2 | function deploy(wasm) { 3 | let id = call ic.provisional_create_canister_with_cycles(record { settings = null; amount = null }); 4 | call ic.canister_status(id); 
5 | assert _.module_hash == (null : opt blob); 6 | call ic.install_code( 7 | record { 8 | arg = encode wasm.__init_args(); 9 | wasm_module = wasm; 10 | mode = variant { install }; 11 | canister_id = id.canister_id; 12 | }, 13 | ); 14 | id 15 | }; 16 | 17 | identity alice; 18 | let id = deploy(file("greet.wasm")); 19 | let canister = id.canister_id; 20 | let res = par_call [ic.canister_status(id), canister.greet("test")]; 21 | let status = res[0]; 22 | assert status.settings ~= record { controllers = vec { alice } }; 23 | assert status.module_hash? == blob "\ab\a7h\8cH\e0]\e7W]\8b\07\92\ac\9fH\95\7f\f4\97\d0\efX\c4~\0d\83\91\01<\da\1d"; 24 | assert res[1] == "Hello, test!"; 25 | call ic.stop_canister(id); 26 | call ic.delete_canister(id); 27 | -------------------------------------------------------------------------------- /examples/neuron.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/ic-repl -o 2 | identity private "../private.pem"; 3 | 4 | function transfer(to, amount, memo) { 5 | call ledger.transfer( 6 | record { 7 | to = to; 8 | fee = record { e8s = 10_000 }; 9 | memo = memo; 10 | from_subaccount = null; 11 | created_at_time = null; 12 | amount = record { e8s = amount }; 13 | }, 14 | ); 15 | }; 16 | 17 | // Staking or top up 18 | function stake(amount, memo) { 19 | let _ = transfer(neuron_account(private, memo), amount, memo); 20 | call nns.claim_or_refresh_neuron_from_account( 21 | record { controller = opt private; memo = memo } 22 | ); 23 | _.result?.NeuronId 24 | }; 25 | 26 | let amount = 100_000_000; // 1 ICP 27 | let memo = 42; // memo determines neuron id 28 | let neuron_id = stake(amount, memo); 29 | 30 | // Define neuron config operations 31 | function dissolve_delay(delay) { 32 | variant { 33 | IncreaseDissolveDelay = record { 34 | additional_dissolve_delay_seconds = delay; 35 | } 36 | } 37 | }; 38 | function start_dissolving() { 39 | variant { 40 | StartDissolving = record {} 41 | } 42 | }; 43 | function stop_dissolving() { 44 | variant { 45 | StopDissolving = record {} 46 | } 47 | }; 48 | function add_hot_key(hot_key) { 49 | variant { 50 | AddHotKey = record { new_hot_key = opt hot_key } 51 | } 52 | }; 53 | function remove_hot_key(hot_key) { 54 | variant { 55 | RemoveHotKey = record { hot_key_to_remove = opt hot_key } 56 | } 57 | }; 58 | function config_neuron(neuron_id, operation) { 59 | let _ = call nns.manage_neuron( 60 | record { 61 | id = opt record { id = neuron_id }; 62 | command = opt variant { 63 | Configure = record { 64 | operation = opt operation; 65 | } 66 | }; 67 | neuron_id_or_subaccount = null; 68 | }, 69 | ); 70 | }; 71 | 72 | config_neuron(neuron_id, dissolve_delay(3600)); 73 | 74 | function disburse() { 75 | variant { Disburse = record { to_account = null; amount = null } } 76 | }; 77 | function spawn() { 78 | variant { Spawn = record { new_controller = null } } 79 | }; 80 | function merge_maturity(percent) { 81 | variant { MergeMaturity = record { percentage_to_merge = percent } } 82 | }; 83 | function manage(neuron_id, cmd) { 84 | let _ = call nns.manage_neuron( 85 | record { 86 | id = opt record { id = neuron_id }; 87 | command = opt cmd; 88 | neuron_id_or_subaccount = null; 89 | }, 90 | ) 91 | }; 92 | 93 | manage(neuron_id, disburse()); 94 | -------------------------------------------------------------------------------- /examples/wallet.did: -------------------------------------------------------------------------------- 1 | type EventKind = variant { 2 | CyclesSent: record { 3 | to: principal; 4 
| amount: nat64; 5 | refund: nat64; 6 | }; 7 | CyclesReceived: record { 8 | from: principal; 9 | amount: nat64; 10 | }; 11 | AddressAdded: record { 12 | id: principal; 13 | name: opt text; 14 | role: Role; 15 | }; 16 | AddressRemoved: record { 17 | id: principal; 18 | }; 19 | CanisterCreated: record { 20 | canister: principal; 21 | cycles: nat64; 22 | }; 23 | CanisterCalled: record { 24 | canister: principal; 25 | method_name: text; 26 | cycles: nat64; 27 | }; 28 | WalletDeployed: record { 29 | canister: principal; 30 | } 31 | }; 32 | 33 | type Event = record { 34 | id: nat32; 35 | timestamp: nat64; 36 | kind: EventKind; 37 | }; 38 | 39 | type Role = variant { 40 | Contact; 41 | Custodian; 42 | Controller; 43 | }; 44 | 45 | type Kind = variant { 46 | Unknown; 47 | User; 48 | Canister; 49 | }; 50 | 51 | // An entry in the address book. It must have an ID and a role. 52 | type AddressEntry = record { 53 | id: principal; 54 | name: opt text; 55 | kind: Kind; 56 | role: Role; 57 | }; 58 | 59 | type ResultCreate = variant { 60 | Ok : record { canister_id: principal }; 61 | Err: text; 62 | }; 63 | 64 | type ResultSend = variant { 65 | Ok : null; 66 | Err : text; 67 | }; 68 | 69 | type ResultCall = variant { 70 | Ok : record { return: blob }; 71 | Err : text; 72 | }; 73 | 74 | type CanisterSettings = record { 75 | controller: opt principal; 76 | compute_allocation: opt nat; 77 | memory_allocation: opt nat; 78 | freezing_threshold: opt nat; 79 | }; 80 | 81 | type CreateCanisterArgs = record { 82 | cycles: nat64; 83 | settings: CanisterSettings; 84 | }; 85 | 86 | service : { 87 | // Wallet Name 88 | name: () -> (opt text) query; 89 | set_name: (text) -> (); 90 | 91 | // Controller Management 92 | get_controllers: () -> (vec principal) query; 93 | add_controller: (principal) -> (); 94 | remove_controller: (principal) -> (); 95 | 96 | // Custodian Management 97 | get_custodians: () -> (vec principal) query; 98 | authorize: (principal) -> (); 99 | deauthorize: (principal) -> (); 100 | 101 | // Cycle Management 102 | wallet_balance: () -> (record { amount: nat64 }) query; 103 | wallet_send: (record { canister: principal; amount: nat64 }) -> (ResultSend); 104 | wallet_receive: () -> (); // Endpoint for receiving cycles. 
105 | 106 | // Managing canister 107 | wallet_create_canister: (CreateCanisterArgs) -> (ResultCreate); 108 | 109 | wallet_create_wallet: (CreateCanisterArgs) -> (ResultCreate); 110 | 111 | wallet_store_wallet_wasm: (record { 112 | wasm_module: blob; 113 | }) -> (); 114 | 115 | // Call Forwarding 116 | wallet_call: (record { 117 | canister: principal; 118 | method_name: text; 119 | args: blob; 120 | cycles: nat64; 121 | }) -> (ResultCall); 122 | 123 | // Address book 124 | add_address: (address: AddressEntry) -> (); 125 | list_addresses: () -> (vec AddressEntry) query; 126 | remove_address: (address: principal) -> (); 127 | 128 | // Events 129 | get_events: (opt record { from: opt nat32; to: opt nat32; }) -> (vec Event) query; 130 | get_chart: (opt record { count: opt nat32; precision: opt nat64; } ) -> (vec record { nat64; nat64; }) query; 131 | } 132 | 133 | -------------------------------------------------------------------------------- /examples/wallet.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/ic-repl 2 | function deploy(wallet, wasm, cycle) { 3 | identity default "~/.config/dfx/identity/default/identity.pem"; 4 | call wallet.wallet_create_canister( 5 | record { 6 | cycles = cycle; 7 | settings = record { 8 | controller = null; 9 | freezing_threshold = null; 10 | memory_allocation = null; 11 | compute_allocation = null; 12 | }; 13 | }, 14 | ); 15 | let id = _.Ok.canister_id; 16 | call as wallet ic.install_code( 17 | record { 18 | arg = encode (); 19 | wasm_module = wasm; 20 | mode = variant { install }; 21 | canister_id = id; 22 | }, 23 | ); 24 | id 25 | }; 26 | 27 | import wallet = "${WALLET_ID:-rwlgt-iiaaa-aaaaa-aaaaa-cai}" as "wallet.did"; 28 | let id = deploy(wallet, file("greet.wasm"), 1_000_000); 29 | call id.greet("test"); 30 | -------------------------------------------------------------------------------- /src/account_identifier.rs: -------------------------------------------------------------------------------- 1 | // DISCLAIMER: 2 | // Do not modify this file arbitrarily. 3 | // The contents are borrowed from: 4 | // dfinity-lab/dfinity@25999dd54d29c24edb31483801bddfd8c1d780c8 5 | // https://github.com/dfinity-lab/dfinity/blob/master/rs/rosetta-api/canister/src/account_identifier.rs 6 | #![allow(clippy::all)] 7 | 8 | use candid::{CandidType, Principal}; 9 | use serde::{de, de::Error, Deserialize, Serialize}; 10 | use sha2::{Digest, Sha224, Sha256}; 11 | use std::convert::{TryFrom, TryInto}; 12 | use std::fmt::{Display, Formatter}; 13 | use std::str::FromStr; 14 | 15 | const SUB_ACCOUNT_ZERO: Subaccount = Subaccount([0; 32]); 16 | const ACCOUNT_DOMAIN_SEPERATOR: &[u8] = b"\x0Aaccount-id"; 17 | 18 | /// While this is backed by an array of length 28, it's canonical representation 19 | /// is a hex string of length 64. The first 8 characters are the CRC-32 encoded 20 | /// hash of the following 56 characters of hex. Both, upper and lower case 21 | /// characters are valid in the input string and can even be mixed. 22 | /// 23 | /// When it is encoded or decoded it will always be as a string to make it 24 | /// easier to use from DFX. 
25 | #[derive(Clone, Copy, Hash, Debug, PartialEq, Eq, PartialOrd, Ord)] 26 | pub struct AccountIdentifier { 27 | pub hash: [u8; 28], 28 | } 29 | 30 | impl AccountIdentifier { 31 | pub fn new(account: Principal, sub_account: Option) -> AccountIdentifier { 32 | let mut hash = Sha224::new(); 33 | hash.update(ACCOUNT_DOMAIN_SEPERATOR); 34 | hash.update(account.as_slice()); 35 | 36 | let sub_account = sub_account.unwrap_or(SUB_ACCOUNT_ZERO); 37 | hash.update(&sub_account.0[..]); 38 | 39 | AccountIdentifier { 40 | hash: hash.finalize().into(), 41 | } 42 | } 43 | 44 | pub fn from_hex(hex_str: &str) -> Result { 45 | let hex: Vec = hex::decode(hex_str).map_err(|e| e.to_string())?; 46 | Self::from_slice(&hex[..]) 47 | } 48 | 49 | /// Goes from the canonical format (with checksum) encoded in bytes rather 50 | /// than hex to AccountIdentifier 51 | pub fn from_slice(v: &[u8]) -> Result { 52 | // Trim this down when we reach rust 1.48 53 | let hex: Box<[u8; 32]> = match v.to_vec().into_boxed_slice().try_into() { 54 | Ok(h) => h, 55 | Err(_) => { 56 | let hex_str = hex::encode(v); 57 | return Err(format!( 58 | "{} has a length of {} but we expected a length of 64", 59 | hex_str, 60 | hex_str.len() 61 | )); 62 | } 63 | }; 64 | check_sum(*hex) 65 | } 66 | 67 | pub fn to_hex(&self) -> String { 68 | hex::encode(self.to_vec()) 69 | } 70 | 71 | pub fn to_vec(&self) -> Vec { 72 | [&self.generate_checksum()[..], &self.hash[..]].concat() 73 | } 74 | 75 | pub fn generate_checksum(&self) -> [u8; 4] { 76 | let mut hasher = crc32fast::Hasher::new(); 77 | hasher.update(&self.hash); 78 | hasher.finalize().to_be_bytes() 79 | } 80 | } 81 | 82 | impl Display for AccountIdentifier { 83 | fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result { 84 | self.to_hex().fmt(f) 85 | } 86 | } 87 | 88 | impl FromStr for AccountIdentifier { 89 | type Err = String; 90 | 91 | fn from_str(s: &str) -> Result { 92 | AccountIdentifier::from_hex(s) 93 | } 94 | } 95 | 96 | impl Serialize for AccountIdentifier { 97 | fn serialize(&self, serializer: S) -> Result 98 | where 99 | S: serde::Serializer, 100 | { 101 | self.to_hex().serialize(serializer) 102 | } 103 | } 104 | 105 | impl<'de> Deserialize<'de> for AccountIdentifier { 106 | // This is the canonical way to read a this from string 107 | fn deserialize(deserializer: D) -> Result 108 | where 109 | D: serde::Deserializer<'de>, 110 | D::Error: de::Error, 111 | { 112 | let hex: [u8; 32] = hex::serde::deserialize(deserializer)?; 113 | check_sum(hex).map_err(D::Error::custom) 114 | } 115 | } 116 | 117 | fn check_sum(hex: [u8; 32]) -> Result { 118 | // Get the checksum provided 119 | let found_checksum = &hex[0..4]; 120 | 121 | // Copy the hash into a new array 122 | let mut hash = [0; 28]; 123 | hash.copy_from_slice(&hex[4..32]); 124 | 125 | let account_id = AccountIdentifier { hash }; 126 | let expected_checksum = account_id.generate_checksum(); 127 | 128 | // Check the generated checksum matches 129 | if expected_checksum == found_checksum { 130 | Ok(account_id) 131 | } else { 132 | Err(format!( 133 | "Checksum failed for {}, expected check bytes {} but found {}", 134 | hex::encode(&hex[..]), 135 | hex::encode(expected_checksum), 136 | hex::encode(found_checksum), 137 | )) 138 | } 139 | } 140 | 141 | impl CandidType for AccountIdentifier { 142 | // The type expected for account identifier is 143 | fn _ty() -> candid::types::Type { 144 | String::_ty() 145 | } 146 | 147 | fn idl_serialize(&self, serializer: S) -> Result<(), S::Error> 148 | where 149 | S: candid::types::Serializer, 150 | { 
151 | self.to_hex().idl_serialize(serializer) 152 | } 153 | } 154 | 155 | /// Subaccounts are arbitrary 32-byte values. 156 | #[derive(CandidType, Deserialize, Clone, Hash, Debug, PartialEq, Eq, Copy)] 157 | #[serde(transparent)] 158 | pub struct Subaccount(pub [u8; 32]); 159 | 160 | impl Subaccount { 161 | #[allow(dead_code)] 162 | pub fn to_vec(&self) -> Vec { 163 | self.0.to_vec() 164 | } 165 | } 166 | 167 | impl From<&Principal> for Subaccount { 168 | fn from(principal_id: &Principal) -> Self { 169 | let mut subaccount = [0; std::mem::size_of::()]; 170 | let principal_id = principal_id.as_slice(); 171 | subaccount[0] = principal_id.len().try_into().unwrap(); 172 | subaccount[1..1 + principal_id.len()].copy_from_slice(principal_id); 173 | Subaccount(subaccount) 174 | } 175 | } 176 | 177 | impl TryFrom<&[u8]> for Subaccount { 178 | type Error = std::array::TryFromSliceError; 179 | 180 | fn try_from(slice: &[u8]) -> Result { 181 | slice.try_into().map(Subaccount) 182 | } 183 | } 184 | 185 | // This function _must_ correspond to how the governance canister computes the 186 | // subaccount. 187 | pub fn get_neuron_subaccount(controller: &Principal, nonce: u64) -> Subaccount { 188 | let mut data = Sha256::new(); 189 | data.update(&[0x0c]); 190 | data.update(b"neuron-stake"); 191 | data.update(controller.as_slice()); 192 | data.update(&nonce.to_be_bytes()); 193 | Subaccount(data.finalize().into()) 194 | } 195 | -------------------------------------------------------------------------------- /src/command.rs: -------------------------------------------------------------------------------- 1 | use super::error::pretty_parse; 2 | use super::exp::Exp; 3 | use super::helper::{did_to_canister_info, FileSource, MyHelper}; 4 | use super::token::{ParserError, Tokenizer}; 5 | use super::utils::{get_dfx_hsm_pin, resolve_path}; 6 | use anyhow::{anyhow, Context}; 7 | use candid::{types::value::IDLValue, Principal, TypeEnv}; 8 | use candid_parser::configs::Configs; 9 | use pretty_assertions::{assert_eq, assert_ne}; 10 | use std::ops::Range; 11 | use std::sync::Arc; 12 | use std::time::Instant; 13 | 14 | #[derive(Debug, Clone)] 15 | pub struct Commands(pub Vec<(Command, Range)>); 16 | #[derive(Debug, Clone)] 17 | pub enum Command { 18 | Config(String), 19 | Show(Exp), 20 | Let(String, Exp), 21 | Assert(BinOp, Exp, Exp), 22 | Import(String, Principal, Option), 23 | Load(Exp), 24 | Identity(String, IdentityConfig), 25 | Func { 26 | name: String, 27 | args: Vec, 28 | body: Vec, 29 | }, 30 | While { 31 | cond: Exp, 32 | body: Vec, 33 | }, 34 | If { 35 | cond: Exp, 36 | then: Vec, 37 | else_: Vec, 38 | }, 39 | } 40 | #[derive(Debug, Clone)] 41 | pub enum IdentityConfig { 42 | Empty, 43 | Pem(String), 44 | Hsm { slot_index: usize, key_id: String }, 45 | } 46 | #[allow(clippy::enum_variant_names)] 47 | #[derive(Debug, Clone)] 48 | pub enum BinOp { 49 | Equal, 50 | SubEqual, 51 | NotEqual, 52 | } 53 | 54 | impl Command { 55 | pub fn run(self, helper: &mut MyHelper) -> anyhow::Result<()> { 56 | match self { 57 | Command::Import(id, canister_id, did) => { 58 | if let Some(did) = &did { 59 | let path = resolve_path(&helper.base_path, did); 60 | let info = did_to_canister_info(did, FileSource::Path(&path), None)?; 61 | helper.canister_map.borrow_mut().0.insert(canister_id, info); 62 | } 63 | // TODO decide if it's a Service instead 64 | helper.env.0.insert(id, IDLValue::Principal(canister_id)); 65 | } 66 | Command::Let(id, val) => { 67 | let is_call = val.is_call(); 68 | let v = val.eval(helper)?; 69 | 
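                // For example, `let r = call ic.raw_rand();` (illustrative ic-repl snippet)
                // evaluates the call and binds the decoded reply to `r`; if the reply carries
                // profiling data, bind_value (defined below) also binds its cost to `__cost_r`.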
bind_value(helper, id, v, is_call, false); 70 | } 71 | Command::Func { name, args, body } => { 72 | helper.func_env.0.insert(name, (args, body)); 73 | } 74 | Command::Assert(op, left, right) => { 75 | let left = left.eval(helper)?; 76 | let right = right.eval(helper)?; 77 | match op { 78 | BinOp::Equal => assert_eq!(left, right), 79 | BinOp::SubEqual => { 80 | if let (IDLValue::Text(left), IDLValue::Text(right)) = (&left, &right) { 81 | assert!(left.contains(right)); 82 | } else { 83 | let l_ty = left.value_ty(); 84 | let r_ty = right.value_ty(); 85 | let env = TypeEnv::new(); 86 | if let Ok(left) = left.annotate_type(false, &env, &r_ty) { 87 | assert_eq!(left, right); 88 | } else if let Ok(right) = right.annotate_type(false, &env, &l_ty) { 89 | assert_eq!(left, right); 90 | } else { 91 | assert_eq!(left, right); 92 | } 93 | } 94 | } 95 | BinOp::NotEqual => assert_ne!(left, right), 96 | } 97 | } 98 | Command::Config(conf) => { 99 | if conf.ends_with(".toml") { 100 | let path = resolve_path(&helper.base_path, &conf); 101 | let conf = std::fs::read_to_string(path)?; 102 | helper.config = conf.parse::()?; 103 | } else { 104 | helper.config = conf.parse::()?; 105 | } 106 | } 107 | Command::Show(val) => { 108 | let is_call = val.is_call(); 109 | let time = Instant::now(); 110 | let v = val.eval(helper)?; 111 | let duration = time.elapsed(); 112 | bind_value(helper, "_".to_string(), v, is_call, true); 113 | if helper.verbose { 114 | let width = console::Term::stdout().size().1 as usize; 115 | println!("{:>width$}", format!("({duration:.2?})"), width = width); 116 | } 117 | } 118 | Command::Identity(id, config) => { 119 | use ic_agent::identity::{BasicIdentity, Identity, Secp256k1Identity}; 120 | let identity: Arc = match &config { 121 | IdentityConfig::Hsm { slot_index, key_id } => { 122 | #[cfg(target_os = "macos")] 123 | const PKCS11_LIBPATH: &str = "/Library/OpenSC/lib/pkcs11/opensc-pkcs11.so"; 124 | #[cfg(target_os = "linux")] 125 | const PKCS11_LIBPATH: &str = "/usr/lib/x86_64-linux-gnu/opensc-pkcs11.so"; 126 | #[cfg(target_os = "windows")] 127 | const PKCS11_LIBPATH: &str = 128 | "C:/Program Files/OpenSC Project/OpenSC/pkcs11/opensc-pkcs11.dll"; 129 | let lib_path = std::env::var("PKCS11_LIBPATH") 130 | .unwrap_or_else(|_| PKCS11_LIBPATH.to_string()); 131 | Arc::from(ic_identity_hsm::HardwareIdentity::new( 132 | lib_path, 133 | *slot_index, 134 | key_id, 135 | get_dfx_hsm_pin, 136 | )?) 137 | } 138 | IdentityConfig::Pem(pem_path) => { 139 | let pem_path = resolve_path(&helper.base_path, pem_path); 140 | match Secp256k1Identity::from_pem_file(&pem_path) { 141 | Ok(identity) => Arc::from(identity), 142 | Err(_) => Arc::from(BasicIdentity::from_pem_file(&pem_path)?), 143 | } 144 | } 145 | IdentityConfig::Empty => match helper.identity_map.0.get(&id) { 146 | Some(identity) => identity.clone(), 147 | None => Arc::from(BasicIdentity::from_signing_key( 148 | ed25519_consensus::SigningKey::new(rand::thread_rng()), 149 | )), 150 | }, 151 | }; 152 | helper 153 | .identity_map 154 | .0 155 | .insert(id.to_string(), identity.clone()); 156 | let sender = identity.sender().map_err(|e| anyhow!("{}", e))?; 157 | println!("Current identity {sender}"); 158 | 159 | helper.agent.set_arc_identity(identity.clone()); 160 | helper.current_identity = id.to_string(); 161 | helper.env.0.insert(id, IDLValue::Principal(sender)); 162 | } 163 | Command::Load(e) => { 164 | // TODO check for infinite loop 165 | // Note that it's a bit tricky to make load as a built-in function, as it requires mutable access to helper. 
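                // Illustrative ic-repl usage of `load` (assumed file names, not from this repo):
                //   load "setup.sh";       // fails if the file cannot be read
                //   load "optional.sh?";   // trailing '?' turns an unreadable file into a no-op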
166 | let IDLValue::Text(file) = e.eval(helper)? else { 167 | return Err(anyhow!("load needs to be a file path")); 168 | }; 169 | let (file, fail_safe) = if file.ends_with('?') { 170 | (file.trim_end_matches('?'), true) 171 | } else { 172 | (file.as_str(), false) 173 | }; 174 | let old_base = helper.base_path.clone(); 175 | let path = resolve_path(&old_base, file); 176 | let read_result = std::fs::read_to_string(&path); 177 | if read_result.is_err() && fail_safe { 178 | return Ok(()); 179 | } 180 | let mut script = read_result.with_context(|| format!("Cannot read {path:?}"))?; 181 | if script.starts_with("#!") { 182 | let line_end = script.find('\n').unwrap_or(0); 183 | script.drain(..line_end); 184 | } 185 | let script = 186 | shellexpand::env(&script).map_err(|e| crate::token::error2(e, 0..0))?; 187 | let cmds = pretty_parse::(file, &script)?; 188 | helper.base_path = path.parent().unwrap().to_path_buf(); 189 | for (cmd, pos) in cmds.0.into_iter() { 190 | if helper.verbose { 191 | println!("> {}", &script[pos]); 192 | } 193 | cmd.run(helper)?; 194 | } 195 | helper.base_path = old_base; 196 | } 197 | Command::If { cond, then, else_ } => { 198 | let IDLValue::Bool(cond) = cond.eval(helper)? else { 199 | return Err(anyhow!("if condition is not a boolean expression")); 200 | }; 201 | if cond { 202 | for cmd in then.into_iter() { 203 | cmd.run(helper)?; 204 | } 205 | } else { 206 | for cmd in else_.into_iter() { 207 | cmd.run(helper)?; 208 | } 209 | } 210 | } 211 | Command::While { cond, body } => loop { 212 | let IDLValue::Bool(cond) = cond.clone().eval(helper)? else { 213 | return Err(anyhow!("while condition is not a boolean expression")); 214 | }; 215 | if !cond { 216 | break; 217 | } 218 | for cmd in body.iter() { 219 | cmd.clone().run(helper)?; 220 | } 221 | }, 222 | } 223 | Ok(()) 224 | } 225 | } 226 | 227 | impl std::str::FromStr for Command { 228 | type Err = ParserError; 229 | fn from_str(str: &str) -> Result { 230 | let lexer = Tokenizer::new(str); 231 | super::grammar::CommandParser::new().parse(lexer) 232 | } 233 | } 234 | impl std::str::FromStr for Commands { 235 | type Err = ParserError; 236 | fn from_str(str: &str) -> Result { 237 | let lexer = Tokenizer::new(str); 238 | super::grammar::CommandsParser::new().parse(lexer) 239 | } 240 | } 241 | 242 | fn bind_value(helper: &mut MyHelper, id: String, v: IDLValue, is_call: bool, display: bool) { 243 | if display { 244 | if helper.verbose { 245 | println!("{v}"); 246 | } else if let IDLValue::Text(v) = &v { 247 | println!("{v}"); 248 | } 249 | } 250 | if is_call { 251 | let (v, cost) = crate::profiling::may_extract_profiling(v); 252 | if let Some(cost) = cost { 253 | let cost_id = format!("__cost_{id}"); 254 | helper.env.0.insert(cost_id, IDLValue::Int64(cost)); 255 | } 256 | helper.env.0.insert(id, v); 257 | } else { 258 | helper.env.0.insert(id, v); 259 | } 260 | } 261 | -------------------------------------------------------------------------------- /src/error.rs: -------------------------------------------------------------------------------- 1 | use crate::token::{error2, ParserError}; 2 | use codespan_reporting::diagnostic::{Diagnostic, Label}; 3 | use codespan_reporting::files::SimpleFile; 4 | use codespan_reporting::term::{self, termcolor::StandardStream}; 5 | 6 | fn report(e: &ParserError) -> Diagnostic<()> { 7 | use lalrpop_util::ParseError::*; 8 | let mut diag = Diagnostic::error().with_message("parser error"); 9 | let label = match e { 10 | User { error } => Label::primary((), 
error.span.clone()).with_message(&error.err), 11 | InvalidToken { location } => { 12 | Label::primary((), *location..location + 1).with_message("Invalid token") 13 | } 14 | UnrecognizedEof { location, expected } => { 15 | diag = diag.with_notes(report_expected(expected)); 16 | Label::primary((), *location..location + 1).with_message("Unexpected EOF") 17 | } 18 | UnrecognizedToken { token, expected } => { 19 | diag = diag.with_notes(report_expected(expected)); 20 | Label::primary((), token.0..token.2).with_message("Unexpected token") 21 | } 22 | ExtraToken { token } => Label::primary((), token.0..token.2).with_message("Extra token"), 23 | }; 24 | diag.with_labels(vec![label]) 25 | } 26 | 27 | fn report_expected(expected: &[String]) -> Vec { 28 | if expected.is_empty() { 29 | return Vec::new(); 30 | } 31 | use pretty::RcDoc; 32 | let doc: RcDoc<()> = RcDoc::intersperse( 33 | expected.iter().map(RcDoc::text), 34 | RcDoc::text(",").append(RcDoc::softline()), 35 | ); 36 | let header = if expected.len() == 1 { 37 | "Expects" 38 | } else { 39 | "Expects one of" 40 | }; 41 | let doc = RcDoc::text(header).append(RcDoc::softline().append(doc)); 42 | vec![doc.pretty(70).to_string()] 43 | } 44 | 45 | pub fn pretty_parse(name: &str, str: &str) -> Result 46 | where 47 | T: std::str::FromStr, 48 | { 49 | let str = shellexpand::env(str).map_err(|e| error2(e, 0..0))?; 50 | str.parse::().inspect_err(|e| { 51 | let writer = StandardStream::stderr(term::termcolor::ColorChoice::Auto); 52 | let config = term::Config::default(); 53 | let file = SimpleFile::new(name, str); 54 | term::emit(&mut writer.lock(), &config, &file, &report(e)).unwrap(); 55 | }) 56 | } 57 | -------------------------------------------------------------------------------- /src/grammar.lalrpop: -------------------------------------------------------------------------------- 1 | use super::exp::{Field, Exp, Method, CallMode, FuncCall}; 2 | use super::selector::Selector; 3 | use candid_parser::types::{IDLType, TypeField, PrimType, FuncType, Binding}; 4 | use candid::utils::check_unique; 5 | use super::token::{Token, error2, LexicalError, Span}; 6 | use candid::{Principal, types::{FuncMode, Label, TypeEnv}}; 7 | use super::command::{Command, Commands, BinOp}; 8 | 9 | grammar; 10 | 11 | extern { 12 | type Location = usize; 13 | type Error = LexicalError; 14 | enum Token { 15 | "decimal" => Token::Decimal(), 16 | "hex" => Token::Hex(), 17 | "float" => Token::Float(), 18 | "bool" => Token::Boolean(), 19 | "text" => Token::Text(), 20 | "id" => Token::Id(), 21 | "null" => Token::Null, 22 | "opt" => Token::Opt, 23 | "vec" => Token::Vec, 24 | "record" => Token::Record, 25 | "variant" => Token::Variant, 26 | "func" => Token::Func, 27 | "service" => Token::Service, 28 | "oneway" => Token::Oneway, 29 | "query" => Token::Query, 30 | "composite_query" => Token::CompositeQuery, 31 | "blob" => Token::Blob, 32 | "type" => Token::Type, 33 | "import" => Token::Import, 34 | "load" => Token::Load, 35 | "principal" => Token::Principal, 36 | "call" => Token::Call, 37 | "par_call" => Token::ParCall, 38 | "encode" => Token::Encode, 39 | "decode" => Token::Decode, 40 | "as" => Token::As, 41 | "config" => Token::Config, 42 | "assert" => Token::Assert, 43 | "let" => Token::Let, 44 | "fail" => Token::Fail, 45 | "identity" => Token::Identity, 46 | "function" => Token::Function, 47 | "while" => Token::While, 48 | "if" => Token::If, 49 | "else" => Token::Else, 50 | "sign" => Token::Sign(), 51 | "=" => Token::Equals, 52 | "==" => Token::TestEqual, 53 | "~=" => 
Token::SubEqual, 54 | "!=" => Token::NotEqual, 55 | "(" => Token::LParen, 56 | ")" => Token::RParen, 57 | "[" => Token::LSquare, 58 | "]" => Token::RSquare, 59 | "{" => Token::LBrace, 60 | "}" => Token::RBrace, 61 | "," => Token::Comma, 62 | "." => Token::Dot, 63 | ";" => Token::Semi, 64 | ":" => Token::Colon, 65 | "?" => Token::Question, 66 | "->" => Token::Arrow, 67 | } 68 | } 69 | 70 | pub Commands: Commands = SepBy, ";"> => Commands(<>); 71 | 72 | // Command 73 | pub Command: Command = { 74 | "config" => Command::Config(<>), 75 | Exp => Command::Show(<>), 76 | "assert" => Command::Assert(op, left, right), 77 | "let" "=" => Command::Let(id, val), 78 | "load" => Command::Load(<>), 79 | "import" "=" > )?> =>? { 80 | let principal = Principal::from_text(&uri.0).map_err(|e| error2(e, uri.1))?; 81 | Ok(Command::Import(id, principal, did)) 82 | }, 83 | "identity" ?> =>? { 84 | use super::command::IdentityConfig::*; 85 | Ok(match config { 86 | None => Command::Identity(id, Empty), 87 | Some((Exp::Text(path), _)) => Command::Identity(id, Pem(path)), 88 | Some((Exp::Record(fs), pos)) => match fs.as_slice() { 89 | [Field { id: key, val: Exp::Text(key_id) }, Field { id: slot, val: Exp::Number(slot_index) }] if *slot == Label::Named("slot_index".to_string()) && *key == Label::Named("key_id".to_string()) => Command::Identity(id, Hsm{ key_id: key_id.to_string(), slot_index: slot_index.parse::().map_err(|_| error2("slot_index cannot convert to usize", pos))? }), 90 | _ => return Err(error2("only expect record { slot_index : nat; key_id : text }", pos)), 91 | }, 92 | Some((_, pos)) => return Err(error2("Identity can either be a .pem file or HSM slot_index and key_id record", pos)), 93 | }) 94 | }, 95 | "function" "(" > ")" "{" > "}" => Command::Func {name,args,body}, 96 | "while" "{" > "}" => Command::While {cond, body}, 97 | "if" "{" > "}" "else" "{" > "}" => Command::If{cond, then, else_}, 98 | } 99 | 100 | pub Exp: Exp = { 101 | Arg => <>, 102 | Variable => <>, 103 | "fail" => Exp::Fail(Box::new(<>)), 104 | "call" => Exp::Call{method:Some(method), args, mode: CallMode::Call}, 105 | "par_call" "[" > "]" => Exp::ParCall { calls }, 106 | "call" "as" => Exp::Call{method:Some(method), args, mode: CallMode::Proxy(proxy)}, 107 | "encode" => Exp::Call{method, args, mode: CallMode::Encode}, 108 | "decode" )?> => Exp::Decode{method, blob:Box::new(blob)}, 109 | "(" > ")" => Exp::Apply(func, args), 110 | } 111 | FuncCall: FuncCall = => FuncCall { method, args }; 112 | Variable: Exp = )*> => Exp::Path(v, path); 113 | Selector: Selector = { 114 | "?" => Selector::Option, 115 | "." => Selector::Field(<>), 116 | "[" "]" => Selector::Index(<>), 117 | "." > "(" > ")" =>? { 118 | match (method.0.as_str(), args.as_slice()) { 119 | ("map", [Exp::Path(func, _x)]) if _x.is_empty() => Ok(Selector::Map(func.to_string())), 120 | ("filter", [Exp::Path(func, _x)]) if _x.is_empty() => Ok(Selector::Filter(func.to_string())), 121 | ("fold", [init, Exp::Path(func, _x)]) if _x.is_empty() => Ok(Selector::Fold(init.clone(), func.to_string())), 122 | ("size", []) => Ok(Selector::Size), 123 | (_, _) => Err(error2("unknown method or wrong arguments", method.1)), 124 | } 125 | } 126 | } 127 | Method: Method = "." 
=> Method { canister, method }; 128 | 129 | BinOp: BinOp = { 130 | "==" => BinOp::Equal, 131 | "~=" => BinOp::SubEqual, 132 | "!=" => BinOp::NotEqual, 133 | } 134 | 135 | // Candid Value 136 | Exps: Vec = "(" > ")" => <>; 137 | 138 | Arg: Exp = { 139 | "bool" => Exp::Bool(<>), 140 | NumLiteral => <>, 141 | Text => Exp::Text(<>), 142 | Bytes => Exp::Blob(<>), 143 | "null" => Exp::Null, 144 | "opt" => Exp::Opt(Box::new(<>)), 145 | "vec" "{" > "}" => Exp::Vec(<>), 146 | "record" "{" >> "}" =>? { 147 | let mut id: u32 = 0; 148 | let span = <>.1.clone(); 149 | let mut fs: Vec = <>.0.into_iter().map(|f| { 150 | match f.id { 151 | Label::Unnamed(_) => { 152 | id = id + 1; 153 | Field { id: Label::Unnamed(id - 1), val: f.val } 154 | } 155 | _ => { 156 | id = f.id.get_id() + 1; 157 | f 158 | } 159 | } 160 | }).collect(); 161 | fs.sort_unstable_by_key(|Field { id, .. }| id.get_id()); 162 | check_unique(fs.iter().map(|f| &f.id)).map_err(|e| error2(e, span))?; 163 | Ok(Exp::Record(fs)) 164 | }, 165 | "variant" "{" "}" => Exp::Variant(Box::new(<>), 0), 166 | "principal" > =>? Ok(Exp::Principal(Principal::from_text(&<>.0).map_err(|e| error2(e, <>.1))?)), 167 | "service" > =>? Ok(Exp::Service(Principal::from_text(&<>.0).map_err(|e| error2(e, <>.1))?)), 168 | "func" > "." =>? { 169 | let id = Principal::from_text(&id.0).map_err(|e| error2(e, id.1))?; 170 | Ok(Exp::Func(id, meth)) 171 | }, 172 | "(" ")" => <>, 173 | } 174 | 175 | Text: String = { 176 | Sp<"text"> =>? { 177 | if std::str::from_utf8(<>.0.as_bytes()).is_err() { 178 | Err(error2("Not valid unicode text", <>.1)) 179 | } else { 180 | Ok(<>.0) 181 | } 182 | } 183 | } 184 | 185 | Bytes: Vec = { 186 | "blob" <"text"> => <>.into_bytes(), 187 | } 188 | 189 | Number: String = { 190 | "decimal" => <>, 191 | // "hex" => num_bigint::BigInt::parse_bytes(<>.as_bytes(), 16).unwrap().to_str_radix(10), 192 | } 193 | 194 | AnnVal: Exp = { 195 | => <>, 196 | ":" > =>? { 197 | let env = TypeEnv::new(); 198 | let typ = candid_parser::typing::ast_to_type(&env, &typ.0).map_err(|e| error2(e, typ.1))?; 199 | Ok(Exp::AnnVal(Box::new(arg), typ)) 200 | } 201 | } 202 | 203 | NumLiteral: Exp = { 204 | => { 205 | let num = match sign { 206 | Some('-') => format!("-{}", n), 207 | _ => n, 208 | }; 209 | Exp::Number(num) 210 | }, 211 | > =>? { 212 | let span = n.1.clone(); 213 | let num = match sign { 214 | Some('-') => format!("-{}", n.0), 215 | _ => n.0, 216 | }; 217 | let f = num.parse::().map_err(|_| error2("not a float", span))?; 218 | Ok(Exp::Float64(f)) 219 | }, 220 | } 221 | 222 | FieldId: u32 = { 223 | Sp<"decimal"> =>? <>.0.parse::().map_err(|_| error2("field id out of u32 range", <>.1)), 224 | Sp<"hex"> =>? u32::from_str_radix(&<>.0, 16).map_err(|_| error2("field id out of u32 range", <>.1)), 225 | } 226 | 227 | Field: Field = { 228 | "=" =>? Ok(Field { id: Label::Id(n), val: v }), 229 | "=" => Field { id: Label::Named(n), val: v }, 230 | } 231 | 232 | VariantField: Field = { 233 | Field => <>, 234 | Name => Field { id: Label::Named(<>), val: Exp::Null }, 235 | FieldId =>? 
Ok(Field { id: Label::Id(<>), val: Exp::Null }), 236 | } 237 | 238 | RecordField: Field = { 239 | Field => <>, 240 | AnnVal => Field { id: Label::Unnamed(0), val:<> }, 241 | } 242 | 243 | // Common util 244 | Name: String = { 245 | "id" => <>, 246 | Text => <>, 247 | } 248 | 249 | // Type 250 | Typ: IDLType = { 251 | PrimTyp => <>, 252 | "opt" => IDLType::OptT(Box::new(<>)), 253 | "vec" => IDLType::VecT(Box::new(<>)), 254 | "blob" => IDLType::VecT(Box::new(IDLType::PrimT(PrimType::Nat8))), 255 | "record" "{" >> "}" =>? { 256 | let mut id: u32 = 0; 257 | let span = <>.1.clone(); 258 | let mut fs: Vec = <>.0.iter().map(|f| { 259 | let label = match f.label { 260 | Label::Unnamed(_) => { id = id + 1; Label::Unnamed(id - 1) }, 261 | ref l => { id = l.get_id() + 1; l.clone() }, 262 | }; 263 | TypeField { label, typ: f.typ.clone() } 264 | }).collect(); 265 | fs.sort_unstable_by_key(|TypeField { label, .. }| label.get_id()); 266 | check_unique(fs.iter().map(|f| &f.label)).map_err(|e| error2(e, span))?; 267 | Ok(IDLType::RecordT(fs)) 268 | }, 269 | "variant" "{" >> "}" =>? { 270 | let span = fs.1.clone(); 271 | fs.0.sort_unstable_by_key(|TypeField { label, .. }| label.get_id()); 272 | check_unique(fs.0.iter().map(|f| &f.label)).map_err(|e| error2(e, span))?; 273 | Ok(IDLType::VariantT(fs.0)) 274 | }, 275 | "func" => IDLType::FuncT(<>), 276 | "service" => IDLType::ServT(<>), 277 | "principal" => IDLType::PrincipalT, 278 | } 279 | 280 | PrimTyp: IDLType = { 281 | "null" => IDLType::PrimT(PrimType::Null), 282 | "id" => { 283 | match PrimType::str_to_enum(&<>) { 284 | Some(p) => IDLType::PrimT(p), 285 | None => IDLType::VarT(<>), 286 | } 287 | }, 288 | } 289 | 290 | FieldTyp: TypeField = { 291 | ":" =>? Ok(TypeField { label: Label::Id(n), typ: t }), 292 | ":" => TypeField { label: Label::Named(n), typ: t }, 293 | } 294 | 295 | RecordFieldTyp: TypeField = { 296 | FieldTyp => <>, 297 | Typ => TypeField { label: Label::Unnamed(0), typ: <> }, 298 | } 299 | 300 | VariantFieldTyp: TypeField = { 301 | FieldTyp => <>, 302 | Name => TypeField { label: Label::Named(<>), typ: IDLType::PrimT(PrimType::Null) }, 303 | FieldId =>? Ok(TypeField { label: Label::Id(<>), typ: IDLType::PrimT(PrimType::Null) }), 304 | } 305 | 306 | TupTyp: Vec = "(" > ")" => <>; 307 | 308 | FuncTyp: FuncType = { 309 | "->" => 310 | FuncType { modes, args, rets }, 311 | } 312 | 313 | ArgTyp: IDLType = { 314 | Typ => <>, 315 | Name ":" => <>, 316 | } 317 | 318 | ActorTyp: Vec = { 319 | "{" >> "}" =>? { 320 | let span = fs.1.clone(); 321 | fs.0.sort_unstable_by_key(|Binding { id, .. 
}| id.clone()); 322 | let labs: Vec<_> = fs.0.iter().map(|f| f.id.clone()).collect(); 323 | check_unique(labs.iter()).map_err(|e| error2(e, span))?; 324 | Ok(fs.0) 325 | } 326 | } 327 | 328 | MethTyp: Binding = { 329 | ":" => Binding { id: n, typ: IDLType::FuncT(f) }, 330 | ":" => Binding { id: n, typ: IDLType::VarT(id) }, 331 | } 332 | 333 | FuncMode: FuncMode = { 334 | "oneway" => FuncMode::Oneway, 335 | "query" => FuncMode::Query, 336 | "composite_query" => FuncMode::CompositeQuery, 337 | } 338 | 339 | // Also allows trailing separator 340 | #[inline] 341 | SepBy: Vec = { 342 | S)*> => match e { 343 | None => v, 344 | Some(e) => { 345 | v.push(e); 346 | v 347 | } 348 | } 349 | }; 350 | 351 | #[inline] 352 | Sp: (T, Span) = 353 | => (t, l..r); 354 | -------------------------------------------------------------------------------- /src/grammar.rs: -------------------------------------------------------------------------------- 1 | #![allow(clippy::all)] 2 | include!(concat!(env!("OUT_DIR"), "/grammar.rs")); 3 | -------------------------------------------------------------------------------- /src/helper.rs: -------------------------------------------------------------------------------- 1 | use crate::exp::Exp; 2 | use crate::token::{Token, Tokenizer}; 3 | use crate::utils::{fetch_metadata, random_value, str_to_principal}; 4 | use candid::{ 5 | types::value::{IDLField, IDLValue, VariantValue}, 6 | types::{Function, Label, Type, TypeInner}, 7 | Decode, Encode, Principal, TypeEnv, 8 | }; 9 | use candid_parser::{check_prog, configs::Configs, pretty_check_file, pretty_parse, IDLProg}; 10 | use ic_agent::{Agent, Identity}; 11 | use rustyline::completion::{extract_word, Completer, FilenameCompleter, Pair}; 12 | use rustyline::error::ReadlineError; 13 | use rustyline::highlight::{Highlighter, MatchingBracketHighlighter}; 14 | use rustyline::hint::{Hinter, HistoryHinter}; 15 | use rustyline::validate::{self, MatchingBracketValidator, Validator}; 16 | use rustyline::Context; 17 | use rustyline_derive::Helper; 18 | use std::borrow::Cow::{self, Borrowed, Owned}; 19 | use std::cell::RefCell; 20 | use std::collections::BTreeMap; 21 | use std::sync::Arc; 22 | use tokio::runtime::Runtime; 23 | 24 | #[derive(Default, Clone)] 25 | pub struct CanisterMap(pub BTreeMap); 26 | #[derive(Default, Clone)] 27 | pub struct IdentityMap(pub BTreeMap>); 28 | #[derive(Default, Clone)] 29 | pub struct Env(pub BTreeMap); 30 | #[derive(Default, Clone)] 31 | pub struct FuncEnv(pub BTreeMap, Vec)>); 32 | #[derive(Debug, Clone)] 33 | pub struct CanisterInfo { 34 | pub env: TypeEnv, 35 | pub methods: BTreeMap, 36 | pub init: Option>, 37 | pub profiling: Option>, 38 | } 39 | #[derive(Clone)] 40 | pub enum OfflineOutput { 41 | Json, 42 | Ascii(String), 43 | Png(String), 44 | PngNoUrl, 45 | AsciiNoUrl, 46 | } 47 | impl CanisterMap { 48 | pub fn get(&mut self, agent: &Agent, id: &Principal) -> anyhow::Result<&CanisterInfo> { 49 | if !self.0.contains_key(id) { 50 | let info = fetch_actor(agent, *id)?; 51 | self.0.insert(*id, info); 52 | } 53 | Ok(self.0.get(id).unwrap()) 54 | } 55 | } 56 | impl CanisterInfo { 57 | pub fn match_method(&self, meth: &str) -> Vec { 58 | self.methods 59 | .iter() 60 | .filter(|(name, _)| name.starts_with(meth)) 61 | .map(|(meth, func)| { 62 | let mut replacement = format!(".{meth}("); 63 | if func.args.is_empty() { 64 | replacement.push(')'); 65 | } 66 | Pair { 67 | display: format!("{meth} : {func}"), 68 | replacement, 69 | } 70 | }) 71 | .collect() 72 | } 73 | } 74 | 75 | #[derive(Helper)] 76 | pub 
struct MyHelper { 77 | completer: FilenameCompleter, 78 | highlighter: MatchingBracketHighlighter, 79 | validator: MatchingBracketValidator, 80 | hinter: HistoryHinter, 81 | pub colored_prompt: String, 82 | pub offline: Option, 83 | pub canister_map: RefCell, 84 | pub identity_map: IdentityMap, 85 | pub current_identity: String, 86 | pub agent_url: String, 87 | pub agent: Agent, 88 | pub config: Configs, 89 | pub env: Env, 90 | pub func_env: FuncEnv, 91 | pub base_path: std::path::PathBuf, 92 | pub messages: RefCell>, 93 | pub verbose: bool, 94 | pub default_effective_canister_id: Principal, 95 | } 96 | 97 | impl MyHelper { 98 | pub fn spawn(&self) -> Self { 99 | MyHelper { 100 | completer: FilenameCompleter::new(), 101 | highlighter: MatchingBracketHighlighter::new(), 102 | hinter: HistoryHinter {}, 103 | colored_prompt: "".to_owned(), 104 | validator: MatchingBracketValidator::new(), 105 | config: "".parse::().unwrap(), 106 | canister_map: self.canister_map.clone(), 107 | identity_map: self.identity_map.clone(), 108 | current_identity: self.current_identity.clone(), 109 | env: self.env.clone(), 110 | func_env: self.func_env.clone(), 111 | base_path: self.base_path.clone(), 112 | agent: self.agent.clone(), 113 | agent_url: self.agent_url.clone(), 114 | offline: self.offline.clone(), 115 | messages: self.messages.clone(), 116 | verbose: self.verbose, 117 | default_effective_canister_id: self.default_effective_canister_id, 118 | } 119 | } 120 | pub fn new( 121 | agent: Agent, 122 | agent_url: String, 123 | offline: Option, 124 | verbose: bool, 125 | ) -> Self { 126 | let runtime = Runtime::new().expect("Unable to create a runtime"); 127 | let default_effective_canister_id = runtime 128 | .block_on(async { 129 | use serde_with::base64::Base64; 130 | #[serde_with::serde_as] 131 | #[derive(serde::Deserialize)] 132 | pub struct RawCanisterId { 133 | #[serde_as(as = "Base64")] 134 | pub canister_id: Vec, 135 | } 136 | #[derive(serde::Deserialize)] 137 | struct Topology { 138 | pub default_effective_canister_id: RawCanisterId, 139 | } 140 | let resp = reqwest::get(format!("{}/_/topology", agent_url.trim_end_matches('/'))) 141 | .await 142 | .ok()?; 143 | if resp.status().is_success() { 144 | resp.json::().await.ok().map(|topology| { 145 | Principal::from_slice(&topology.default_effective_canister_id.canister_id) 146 | }) 147 | } else { 148 | None 149 | } 150 | }) 151 | .unwrap_or(Principal::management_canister()); 152 | let mut res = MyHelper { 153 | completer: FilenameCompleter::new(), 154 | highlighter: MatchingBracketHighlighter::new(), 155 | hinter: HistoryHinter {}, 156 | colored_prompt: "".to_owned(), 157 | validator: MatchingBracketValidator::new(), 158 | canister_map: RefCell::new(CanisterMap::default()), 159 | identity_map: IdentityMap::default(), 160 | current_identity: "anonymous".to_owned(), 161 | config: "".parse::().unwrap(), 162 | env: Env::default(), 163 | func_env: FuncEnv::default(), 164 | base_path: std::env::current_dir().unwrap(), 165 | messages: Vec::new().into(), 166 | agent, 167 | agent_url, 168 | offline, 169 | verbose, 170 | default_effective_canister_id, 171 | }; 172 | res.fetch_root_key_if_needed().unwrap(); 173 | res.load_prelude().unwrap(); 174 | res 175 | } 176 | fn is_mainnet(&self) -> bool { 177 | self.agent_url == "https://icp0.io" || self.agent_url == "https://ic0.app" 178 | } 179 | fn load_prelude(&mut self) -> anyhow::Result<()> { 180 | self.identity_map.0.insert( 181 | "anonymous".to_string(), 182 | Arc::new(ic_agent::identity::AnonymousIdentity), 183 | ); 
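        // The preloads below bind well-known canister ids to names in the REPL
        // environment, so a session can refer to them by name, e.g.
        // (illustrative ic-repl snippet):
        //   call ic.raw_rand();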
184 | self.preload_canister( 185 | "ic".to_string(), 186 | Principal::from_text("aaaaa-aa")?, 187 | Some(include_str!("ic.did")), 188 | )?; 189 | if self.is_mainnet() { 190 | self.preload_canister( 191 | "nns".to_string(), 192 | Principal::from_text("rrkah-fqaaa-aaaaa-aaaaq-cai")?, 193 | // only load did file in offline mode 194 | self.offline 195 | .as_ref() 196 | .map(|_| include_str!("governance.did")), 197 | )?; 198 | self.preload_canister( 199 | "ledger".to_string(), 200 | Principal::from_text("ryjl3-tyaaa-aaaaa-aaaba-cai")?, 201 | self.offline.as_ref().map(|_| include_str!("ledger.did")), 202 | )?; 203 | self.preload_canister( 204 | "registry".to_string(), 205 | Principal::from_text("rwlgt-iiaaa-aaaaa-aaaaa-cai")?, 206 | None, 207 | )?; 208 | self.preload_canister( 209 | "cycles_ledger".to_string(), 210 | Principal::from_text("um5iw-rqaaa-aaaaq-qaaba-cai")?, 211 | None, 212 | )?; 213 | } 214 | Ok(()) 215 | } 216 | fn preload_canister( 217 | &mut self, 218 | name: String, 219 | id: Principal, 220 | did_file: Option<&str>, 221 | ) -> anyhow::Result<()> { 222 | let mut canister_map = self.canister_map.borrow_mut(); 223 | if let Some(did_file) = did_file { 224 | canister_map.0.insert( 225 | id, 226 | did_to_canister_info(&name, FileSource::Text(did_file), None)?, 227 | ); 228 | } 229 | self.env.0.insert(name, IDLValue::Principal(id)); 230 | Ok(()) 231 | } 232 | pub fn fetch_root_key_if_needed(&mut self) -> anyhow::Result<()> { 233 | if self.offline.is_none() && !self.is_mainnet() { 234 | let runtime = Runtime::new().expect("Unable to create a runtime"); 235 | runtime.block_on(self.agent.fetch_root_key())?; 236 | }; 237 | Ok(()) 238 | } 239 | pub fn dump_ingress(&self) -> anyhow::Result<()> { 240 | crate::offline::dump_ingress(&self.messages.borrow()) 241 | } 242 | } 243 | 244 | #[derive(Debug, PartialEq, Clone)] 245 | enum Partial { 246 | Call(Principal, String), 247 | Val(IDLValue, String), 248 | } 249 | impl Partial { 250 | fn get_func_type<'a>( 251 | &'a self, 252 | agent: &'a Agent, 253 | map: &'a mut CanisterMap, 254 | ) -> Option<(&'a TypeEnv, &'a [Type])> { 255 | match self { 256 | Partial::Call(canister_id, method) => { 257 | let info = map.get(agent, canister_id).ok()?; 258 | let func = info.methods.get(method)?; 259 | Some((&info.env, &func.args)) 260 | } 261 | _ => None, 262 | } 263 | } 264 | } 265 | 266 | fn partial_parse(line: &str, pos: usize, helper: &MyHelper) -> Option<(usize, Partial)> { 267 | let (start, _) = extract_word(line, pos, None, |c| c == ' '); 268 | let iter = Tokenizer::new(&line[start..pos]); 269 | let mut tokens = Vec::new(); 270 | let mut pos_start = 0; 271 | for v in iter { 272 | let v = v.ok()?; 273 | if pos_start == 0 274 | && matches!( 275 | v.1, 276 | Token::Equals | Token::TestEqual | Token::SubEqual | Token::NotEqual 277 | ) 278 | { 279 | pos_start = v.2; 280 | } 281 | let tok = if let Token::Text(id) = v.1 { 282 | Token::Id(id) 283 | } else { 284 | v.1 285 | }; 286 | tokens.push((v.0, tok)); 287 | } 288 | match tokens.as_slice() { 289 | [(_, Token::Id(id))] => match str_to_principal(id, helper) { 290 | Ok(id) => Some((pos, Partial::Call(id, "".to_string()))), 291 | Err(_) => parse_value(&line[..pos], start, pos, helper), 292 | }, 293 | [(_, Token::Id(id)), (pos_tail, Token::Dot)] 294 | | [(_, Token::Id(id)), (pos_tail, Token::Dot), (_, _)] => { 295 | match str_to_principal(id, helper) { 296 | Ok(id) => Some(( 297 | start + pos_tail, 298 | Partial::Call(id, line[start + pos_tail + 1..pos].to_string()), 299 | )), 300 | Err(_) => 
parse_value(&line[..pos], start + pos_start, start + pos_tail, helper), 301 | } 302 | } 303 | [.., (_, Token::RSquare)] | [.., (_, Token::Question)] => { 304 | parse_value(&line[..pos], start + pos_start, pos, helper) 305 | } 306 | [.., (pos_tail, Token::Dot)] 307 | | [.., (pos_tail, Token::Dot), (_, _)] 308 | | [.., (pos_tail, Token::LSquare)] 309 | | [.., (pos_tail, Token::LSquare), (_, Token::Decimal(_))] => { 310 | parse_value(&line[..pos], start + pos_start, start + pos_tail, helper) 311 | } 312 | _ => None, 313 | } 314 | } 315 | fn parse_value( 316 | line: &str, 317 | start: usize, 318 | end: usize, 319 | helper: &MyHelper, 320 | ) -> Option<(usize, Partial)> { 321 | let v = line[start..end].parse::().ok()?.eval(helper).ok()?; 322 | Some((end, Partial::Val(v, line[end..].to_string()))) 323 | } 324 | fn match_selector(v: &IDLValue, prefix: &str) -> Vec { 325 | match v { 326 | IDLValue::Opt(_) => vec![Pair { 327 | display: "?".to_string(), 328 | replacement: "?".to_string(), 329 | }], 330 | IDLValue::Blob(b) => vec![ 331 | Pair { 332 | display: "blob".to_string(), 333 | replacement: "".to_string(), 334 | }, 335 | Pair { 336 | display: format!("index should be less than {}", b.len()), 337 | replacement: "".to_string(), 338 | }, 339 | ], 340 | IDLValue::Vec(vs) => vec![ 341 | Pair { 342 | display: "vec".to_string(), 343 | replacement: "".to_string(), 344 | }, 345 | Pair { 346 | display: format!("index should be less than {}", vs.len()), 347 | replacement: "".to_string(), 348 | }, 349 | ], 350 | IDLValue::Record(fs) => fs.iter().filter_map(|f| match_field(f, prefix)).collect(), 351 | IDLValue::Variant(VariantValue(f, _)) => { 352 | if let Some(pair) = match_field(f, prefix) { 353 | vec![pair] 354 | } else { 355 | Vec::new() 356 | } 357 | } 358 | _ => Vec::new(), 359 | } 360 | } 361 | fn match_field(f: &IDLField, prefix: &str) -> Option { 362 | match &f.id { 363 | Label::Named(name) 364 | if prefix.is_empty() || prefix.starts_with('.') && name.starts_with(&prefix[1..]) => 365 | { 366 | Some(Pair { 367 | display: format!(".{} = {}", name, f.val), 368 | replacement: format!(".{name}"), 369 | }) 370 | } 371 | Label::Id(id) | Label::Unnamed(id) 372 | if prefix.is_empty() 373 | || prefix.starts_with('[') && id.to_string().starts_with(&prefix[1..]) => 374 | { 375 | Some(Pair { 376 | display: format!("[{}] = {}", id, f.val), 377 | replacement: format!("[{id}]"), 378 | }) 379 | } 380 | _ => None, 381 | } 382 | } 383 | 384 | impl Completer for MyHelper { 385 | type Candidate = Pair; 386 | fn complete( 387 | &self, 388 | line: &str, 389 | pos: usize, 390 | ctx: &Context<'_>, 391 | ) -> Result<(usize, Vec), ReadlineError> { 392 | match partial_parse(line, pos, self) { 393 | Some((pos, Partial::Call(canister_id, meth))) => { 394 | let mut map = self.canister_map.borrow_mut(); 395 | Ok(match map.get(&self.agent, &canister_id) { 396 | Ok(info) => (pos, info.match_method(&meth)), 397 | Err(_) => (pos, Vec::new()), 398 | }) 399 | } 400 | Some((pos, Partial::Val(v, rest))) => Ok((pos, match_selector(&v, &rest))), 401 | _ => match match_type(line, self) { 402 | Some(res) => Ok(res), 403 | None => self.completer.complete(line, pos, ctx), 404 | }, 405 | } 406 | } 407 | } 408 | 409 | fn match_type(line: &str, helper: &MyHelper) -> Option<(usize, Vec)> { 410 | use std::collections::HashSet; 411 | let (pos, arg_idx, call) = find_lastest_call(line, helper)?; 412 | let mut map = helper.canister_map.borrow_mut(); 413 | let (env, args) = call.get_func_type(&helper.agent, &mut map)?; 414 | let expect_ty = 
&args[arg_idx]; 415 | let mut res = Vec::new(); 416 | let mut gamma = HashSet::new(); 417 | for (var, value) in helper.env.0.iter() { 418 | let ty = value.value_ty(); 419 | if candid::types::subtype::subtype(&mut gamma, env, &ty, expect_ty).is_ok() { 420 | let value = format!("{:?}", value); 421 | // TODO use floor_char_boundary when available. 422 | let value = &value[..20.min(value.len())]; 423 | res.push(Pair { 424 | display: format!("{var}: {value}"), 425 | replacement: var.to_owned(), 426 | }) 427 | } 428 | } 429 | Some((pos, res)) 430 | } 431 | // Returns (pos at the beginning of the current arg, current arg index, Partial::Call) 432 | fn find_lastest_call(line: &str, helper: &MyHelper) -> Option<(usize, usize, Partial)> { 433 | if matches!(line.chars().last(), Some(')')) { 434 | return None; 435 | } 436 | let start = line.rfind("encode").or_else(|| line.rfind("call"))?; 437 | let arg_pos = line[start..].find('(')?; 438 | let given_args = line[arg_pos..].matches(',').count(); 439 | let (_, call) = partial_parse(line, arg_pos, helper)?; 440 | let mut map = helper.canister_map.borrow_mut(); 441 | let (_, args) = call.get_func_type(&helper.agent, &mut map)?; 442 | if given_args >= args.len() { 443 | return None; 444 | } 445 | let pos = line.rfind([',', '('])? + 1; 446 | Some((pos, given_args, call)) 447 | } 448 | fn hint_method(line: &str, pos: usize, helper: &MyHelper) -> Option { 449 | use candid_parser::configs::{Scope, ScopePos}; 450 | let (_, given_args, call) = find_lastest_call(line, helper)?; 451 | let mut map = helper.canister_map.borrow_mut(); 452 | let (env, args) = call.get_func_type(&helper.agent, &mut map)?; 453 | let method = match &call { 454 | Partial::Call(_, method) => Some(method), 455 | _ => None, 456 | }?; 457 | let ty = &args[given_args]; 458 | let scope = Scope { 459 | method, 460 | position: Some(ScopePos::Arg), 461 | }; 462 | let mut value = random_value(env, ty, helper.config.clone(), scope).ok()?; 463 | if given_args == args.len() - 1 { 464 | value.push(')'); 465 | } 466 | // TODO doesn't match on newline 467 | if let Some(prefix) = line[..pos] 468 | .rfind(',') 469 | .or_else(|| line[..pos].rfind('(')) 470 | .map(|start| line[start + 1..pos].trim_start()) 471 | { 472 | #[allow(clippy::assigning_clones)] 473 | if value.starts_with(prefix) { 474 | value = value[prefix.len()..].trim().to_owned(); 475 | } 476 | } 477 | Some(value) 478 | } 479 | 480 | impl Hinter for MyHelper { 481 | type Hint = String; 482 | fn hint(&self, line: &str, pos: usize, ctx: &Context<'_>) -> Option { 483 | if pos < line.len() { 484 | return None; 485 | } 486 | hint_method(line, pos, self).or_else(|| self.hinter.hint(line, pos, ctx)) 487 | } 488 | } 489 | 490 | impl Highlighter for MyHelper { 491 | fn highlight_prompt<'b, 's: 'b, 'p: 'b>( 492 | &'s self, 493 | prompt: &'p str, 494 | default: bool, 495 | ) -> Cow<'b, str> { 496 | if default { 497 | Borrowed(&self.colored_prompt) 498 | } else { 499 | Borrowed(prompt) 500 | } 501 | } 502 | 503 | fn highlight_hint<'h>(&self, hint: &'h str) -> Cow<'h, str> { 504 | let s = format!("{}", console::style(hint).black().bright()); 505 | Owned(s) 506 | } 507 | 508 | fn highlight<'l>(&self, line: &'l str, pos: usize) -> Cow<'l, str> { 509 | self.highlighter.highlight(line, pos) 510 | } 511 | 512 | fn highlight_char(&self, line: &str, pos: usize, forced: bool) -> bool { 513 | self.highlighter.highlight_char(line, pos, forced) 514 | } 515 | } 516 | 517 | impl Validator for MyHelper { 518 | fn validate( 519 | &self, 520 | ctx: &mut 
validate::ValidationContext, 521 | ) -> rustyline::Result { 522 | self.validator.validate(ctx) 523 | } 524 | 525 | fn validate_while_typing(&self) -> bool { 526 | self.validator.validate_while_typing() 527 | } 528 | } 529 | 530 | #[tokio::main] 531 | async fn fetch_actor(agent: &Agent, canister_id: Principal) -> anyhow::Result { 532 | let response = fetch_metadata(agent, canister_id, "metadata/candid:service").await; 533 | let profiling = fetch_metadata(agent, canister_id, "metadata/name") 534 | .await 535 | .ok() 536 | .as_ref() 537 | .and_then(|bytes| Decode!(bytes, BTreeMap).ok()); 538 | let candid = match response { 539 | Ok(blob) => std::str::from_utf8(&blob)?.to_owned(), 540 | Err(_) => { 541 | let response = agent 542 | .query(&canister_id, "__get_candid_interface_tmp_hack") 543 | .with_arg(Encode!()?) 544 | .call() 545 | .await; 546 | match response { 547 | Ok(response) => Decode!(&response, String)?, 548 | Err(_) => { 549 | return Ok(CanisterInfo { 550 | env: Default::default(), 551 | methods: Default::default(), 552 | init: None, 553 | profiling, 554 | }) 555 | } 556 | } 557 | } 558 | }; 559 | did_to_canister_info( 560 | &format!("did file for {canister_id}"), 561 | FileSource::Text(&candid), 562 | profiling, 563 | ) 564 | } 565 | 566 | pub enum FileSource<'a> { 567 | Text(&'a str), 568 | Path(&'a std::path::Path), 569 | } 570 | 571 | pub fn did_to_canister_info( 572 | name: &str, 573 | did: FileSource, 574 | profiling: Option>, 575 | ) -> anyhow::Result { 576 | let (env, actor) = match did { 577 | FileSource::Text(did) => { 578 | let ast = pretty_parse::(name, did)?; 579 | let mut env = TypeEnv::new(); 580 | let actor = check_prog(&mut env, &ast)?; 581 | (env, actor) 582 | } 583 | FileSource::Path(path) => pretty_check_file(path)?, 584 | }; 585 | let actor = actor.ok_or_else(|| anyhow::anyhow!("no main actor"))?; 586 | let methods = env 587 | .as_service(&actor)? 
588 | .iter() 589 | .map(|(meth, ty)| { 590 | let func = env.as_func(ty).unwrap(); 591 | (meth.to_owned(), func.clone()) 592 | }) 593 | .collect(); 594 | let init = find_init_args(&env, &actor); 595 | Ok(CanisterInfo { 596 | env, 597 | methods, 598 | init, 599 | profiling, 600 | }) 601 | } 602 | 603 | pub fn find_init_args(env: &TypeEnv, actor: &Type) -> Option> { 604 | match actor.as_ref() { 605 | TypeInner::Var(id) => find_init_args(env, env.find_type(id).ok()?), 606 | TypeInner::Class(init, _) => Some(init.to_vec()), 607 | _ => None, 608 | } 609 | } 610 | 611 | impl Env { 612 | pub fn dump_principals(&self) -> BTreeMap { 613 | self.0 614 | .iter() 615 | .filter_map(|(name, value)| match value { 616 | IDLValue::Principal(id) => Some((name, id)), 617 | _ => None, 618 | }) 619 | .map(|(name, id)| (name.clone(), id.to_text())) 620 | .collect() 621 | } 622 | } 623 | 624 | #[test] 625 | fn test_partial_parse() -> anyhow::Result<()> { 626 | use candid_parser::parse_idl_value; 627 | let url = "https://icp0.io".to_string(); 628 | let agent = Agent::builder().with_url(url.clone()).build()?; 629 | let mut helper = MyHelper::new(agent, url, None, false); 630 | helper.env.0.insert( 631 | "a".to_string(), 632 | parse_idl_value("opt record { variant {b=vec{1;2;3}}; 42; f1=42;42=35;a1=30}")?, 633 | ); 634 | let ic0 = Principal::from_text("aaaaa-aa")?; 635 | helper 636 | .env 637 | .0 638 | .insert("ic0".to_string(), IDLValue::Principal(ic0)); 639 | assert_eq!(partial_parse("call x", 6, &helper), None); 640 | assert_eq!( 641 | partial_parse("let id = call \"aaaaa-aa\"", 24, &helper).unwrap(), 642 | (24, Partial::Call(ic0, "".to_string())) 643 | ); 644 | assert_eq!( 645 | partial_parse("let id = call \"aaaaa-aa\".", 25, &helper).unwrap(), 646 | (24, Partial::Call(ic0, "".to_string())) 647 | ); 648 | assert_eq!( 649 | partial_parse("let id = call \"aaaaa-aa\".t", 26, &helper).unwrap(), 650 | (24, Partial::Call(ic0, "t".to_string())) 651 | ); 652 | assert_eq!( 653 | partial_parse("let id = encode ic0", 19, &helper).unwrap(), 654 | (19, Partial::Call(ic0, "".to_string())) 655 | ); 656 | assert_eq!( 657 | partial_parse("let id = encode ic0.", 20, &helper).unwrap(), 658 | (19, Partial::Call(ic0, "".to_string())) 659 | ); 660 | assert_eq!( 661 | partial_parse("let id = encode ic0.t", 21, &helper).unwrap(), 662 | (19, Partial::Call(ic0, "t".to_string())) 663 | ); 664 | assert_eq!( 665 | partial_parse("let id = a", 10, &helper).unwrap(), 666 | ( 667 | 10, 668 | Partial::Val( 669 | parse_idl_value("opt record { variant {b=vec{1;2;3}}; 42; f1=42;42=35;a1=30}")?, 670 | "".to_string() 671 | ) 672 | ) 673 | ); 674 | assert_eq!(partial_parse("let id = a.f1.", 14, &helper), None); 675 | assert_eq!( 676 | partial_parse("let id =a?", 10, &helper).unwrap(), 677 | ( 678 | 10, 679 | Partial::Val( 680 | parse_idl_value("record { variant {b=vec{1;2;3}}; 42; f1=42;42=35;a1=30}")?, 681 | "".to_string() 682 | ) 683 | ) 684 | ); 685 | assert_eq!( 686 | partial_parse("let id=a?.", 10, &helper).unwrap(), 687 | ( 688 | 9, 689 | Partial::Val( 690 | parse_idl_value("record { variant {b=vec{1;2;3}}; 42; f1=42;42=35;a1=30}")?, 691 | ".".to_string() 692 | ) 693 | ) 694 | ); 695 | assert_eq!( 696 | partial_parse("let id = a?.f1", 14, &helper).unwrap(), 697 | ( 698 | 11, 699 | Partial::Val( 700 | parse_idl_value("record { variant {b=vec{1;2;3}}; 42; f1=42;42=35;a1=30}")?, 701 | ".f1".to_string() 702 | ) 703 | ) 704 | ); 705 | assert_eq!( 706 | partial_parse("let id = a?[0", 13, &helper).unwrap(), 707 | ( 708 | 11, 709 | Partial::Val( 710 | 
parse_idl_value("record { variant {b=vec{1;2;3}}; 42; f1=42;42=35;a1=30}")?, 711 | "[0".to_string() 712 | ) 713 | ) 714 | ); 715 | assert_eq!( 716 | partial_parse("let id = a?[0]", 14, &helper).unwrap(), 717 | ( 718 | 14, 719 | Partial::Val(parse_idl_value("variant {b=vec{1;2;3}}")?, "".to_string()) 720 | ) 721 | ); 722 | assert_eq!( 723 | partial_parse("let id = a?[0].", 15, &helper).unwrap(), 724 | ( 725 | 14, 726 | Partial::Val(parse_idl_value("variant {b=vec{1;2;3}}")?, ".".to_string()) 727 | ) 728 | ); 729 | Ok(()) 730 | } 731 | -------------------------------------------------------------------------------- /src/ic.did: -------------------------------------------------------------------------------- 1 | type canister_id = principal; 2 | type wasm_module = blob; 3 | 4 | type log_visibility = variant { 5 | controllers; 6 | public; 7 | }; 8 | 9 | type canister_settings = record { 10 | controllers : opt vec principal; 11 | compute_allocation : opt nat; 12 | memory_allocation : opt nat; 13 | freezing_threshold : opt nat; 14 | reserved_cycles_limit : opt nat; 15 | log_visibility : opt log_visibility; 16 | wasm_memory_limit : opt nat; 17 | }; 18 | 19 | type definite_canister_settings = record { 20 | controllers : vec principal; 21 | compute_allocation : nat; 22 | memory_allocation : nat; 23 | freezing_threshold : nat; 24 | reserved_cycles_limit : nat; 25 | log_visibility : log_visibility; 26 | wasm_memory_limit : nat; 27 | }; 28 | 29 | type change_origin = variant { 30 | from_user : record { 31 | user_id : principal; 32 | }; 33 | from_canister : record { 34 | canister_id : principal; 35 | canister_version : opt nat64; 36 | }; 37 | }; 38 | 39 | type change_details = variant { 40 | creation : record { 41 | controllers : vec principal; 42 | }; 43 | code_uninstall; 44 | code_deployment : record { 45 | mode : variant { install; reinstall; upgrade }; 46 | module_hash : blob; 47 | }; 48 | load_snapshot : record { 49 | canister_version : nat64; 50 | taken_at_timestamp : nat64; 51 | }; 52 | controllers_change : record { 53 | controllers : vec principal; 54 | }; 55 | }; 56 | 57 | type change = record { 58 | timestamp_nanos : nat64; 59 | canister_version : nat64; 60 | origin : change_origin; 61 | details : change_details; 62 | }; 63 | 64 | type chunk_hash = record { 65 | hash : blob; 66 | }; 67 | 68 | type http_header = record { 69 | name : text; 70 | value : text; 71 | }; 72 | 73 | type http_request_result = record { 74 | status : nat; 75 | headers : vec http_header; 76 | body : blob; 77 | }; 78 | 79 | type ecdsa_curve = variant { 80 | secp256k1; 81 | }; 82 | 83 | type satoshi = nat64; 84 | 85 | type bitcoin_network = variant { 86 | mainnet; 87 | testnet; 88 | }; 89 | 90 | type bitcoin_address = text; 91 | 92 | type block_hash = blob; 93 | 94 | type outpoint = record { 95 | txid : blob; 96 | vout : nat32; 97 | }; 98 | 99 | type utxo = record { 100 | outpoint : outpoint; 101 | value : satoshi; 102 | height : nat32; 103 | }; 104 | 105 | type bitcoin_get_utxos_args = record { 106 | address : bitcoin_address; 107 | network : bitcoin_network; 108 | filter : opt variant { 109 | min_confirmations : nat32; 110 | page : blob; 111 | }; 112 | }; 113 | 114 | type bitcoin_get_utxos_query_args = record { 115 | address : bitcoin_address; 116 | network : bitcoin_network; 117 | filter : opt variant { 118 | min_confirmations : nat32; 119 | page : blob; 120 | }; 121 | }; 122 | 123 | type bitcoin_get_current_fee_percentiles_args = record { 124 | network : bitcoin_network; 125 | }; 126 | 127 | type 
bitcoin_get_utxos_result = record { 128 | utxos : vec utxo; 129 | tip_block_hash : block_hash; 130 | tip_height : nat32; 131 | next_page : opt blob; 132 | }; 133 | 134 | type bitcoin_get_utxos_query_result = record { 135 | utxos : vec utxo; 136 | tip_block_hash : block_hash; 137 | tip_height : nat32; 138 | next_page : opt blob; 139 | }; 140 | 141 | type bitcoin_get_balance_args = record { 142 | address : bitcoin_address; 143 | network : bitcoin_network; 144 | min_confirmations : opt nat32; 145 | }; 146 | 147 | type bitcoin_get_balance_query_args = record { 148 | address : bitcoin_address; 149 | network : bitcoin_network; 150 | min_confirmations : opt nat32; 151 | }; 152 | 153 | type bitcoin_send_transaction_args = record { 154 | transaction : blob; 155 | network : bitcoin_network; 156 | }; 157 | 158 | type millisatoshi_per_byte = nat64; 159 | 160 | type node_metrics = record { 161 | node_id : principal; 162 | num_blocks_proposed_total : nat64; 163 | num_block_failures_total : nat64; 164 | }; 165 | 166 | type create_canister_args = record { 167 | settings : opt canister_settings; 168 | sender_canister_version : opt nat64; 169 | }; 170 | 171 | type create_canister_result = record { 172 | canister_id : canister_id; 173 | }; 174 | 175 | type update_settings_args = record { 176 | canister_id : principal; 177 | settings : canister_settings; 178 | sender_canister_version : opt nat64; 179 | }; 180 | 181 | type upload_chunk_args = record { 182 | canister_id : principal; 183 | chunk : blob; 184 | }; 185 | 186 | type clear_chunk_store_args = record { 187 | canister_id : canister_id; 188 | }; 189 | 190 | type stored_chunks_args = record { 191 | canister_id : canister_id; 192 | }; 193 | 194 | type canister_install_mode = variant { 195 | install; 196 | reinstall; 197 | upgrade : opt record { 198 | skip_pre_upgrade : opt bool; 199 | wasm_memory_persistence : opt variant { 200 | keep; 201 | replace; 202 | }; 203 | }; 204 | }; 205 | 206 | type install_code_args = record { 207 | mode : canister_install_mode; 208 | canister_id : canister_id; 209 | wasm_module : wasm_module; 210 | arg : blob; 211 | sender_canister_version : opt nat64; 212 | }; 213 | 214 | type install_chunked_code_args = record { 215 | mode : canister_install_mode; 216 | target_canister : canister_id; 217 | store_canister : opt canister_id; 218 | chunk_hashes_list : vec chunk_hash; 219 | wasm_module_hash : blob; 220 | arg : blob; 221 | sender_canister_version : opt nat64; 222 | }; 223 | 224 | type uninstall_code_args = record { 225 | canister_id : canister_id; 226 | sender_canister_version : opt nat64; 227 | }; 228 | 229 | type start_canister_args = record { 230 | canister_id : canister_id; 231 | }; 232 | 233 | type stop_canister_args = record { 234 | canister_id : canister_id; 235 | }; 236 | 237 | type canister_status_args = record { 238 | canister_id : canister_id; 239 | }; 240 | 241 | type canister_status_result = record { 242 | status : variant { running; stopping; stopped }; 243 | settings : definite_canister_settings; 244 | module_hash : opt blob; 245 | memory_size : nat; 246 | cycles : nat; 247 | reserved_cycles : nat; 248 | idle_cycles_burned_per_day : nat; 249 | query_stats: record { 250 | num_calls_total: nat; 251 | num_instructions_total: nat; 252 | request_payload_bytes_total: nat; 253 | response_payload_bytes_total: nat; 254 | }; 255 | }; 256 | 257 | type canister_info_args = record { 258 | canister_id : canister_id; 259 | num_requested_changes : opt nat64; 260 | }; 261 | 262 | type canister_info_result = record { 263 | 
total_num_changes : nat64; 264 | recent_changes : vec change; 265 | module_hash : opt blob; 266 | controllers : vec principal; 267 | }; 268 | 269 | type delete_canister_args = record { 270 | canister_id : canister_id; 271 | }; 272 | 273 | type deposit_cycles_args = record { 274 | canister_id : canister_id; 275 | }; 276 | 277 | type http_request_args = record { 278 | url : text; 279 | max_response_bytes : opt nat64; 280 | method : variant { get; head; post }; 281 | headers : vec http_header; 282 | body : opt blob; 283 | transform : opt record { 284 | function : func(record { response : http_request_result; context : blob }) -> (http_request_result) query; 285 | context : blob; 286 | }; 287 | }; 288 | 289 | type ecdsa_public_key_args = record { 290 | canister_id : opt canister_id; 291 | derivation_path : vec blob; 292 | key_id : record { curve : ecdsa_curve; name : text }; 293 | }; 294 | 295 | type ecdsa_public_key_result = record { 296 | public_key : blob; 297 | chain_code : blob; 298 | }; 299 | 300 | type sign_with_ecdsa_args = record { 301 | message_hash : blob; 302 | derivation_path : vec blob; 303 | key_id : record { curve : ecdsa_curve; name : text }; 304 | }; 305 | 306 | type sign_with_ecdsa_result = record { 307 | signature : blob; 308 | }; 309 | 310 | type node_metrics_history_args = record { 311 | subnet_id : principal; 312 | start_at_timestamp_nanos : nat64; 313 | }; 314 | 315 | type node_metrics_history_result = vec record { 316 | timestamp_nanos : nat64; 317 | node_metrics : vec node_metrics; 318 | }; 319 | 320 | type provisional_create_canister_with_cycles_args = record { 321 | amount : opt nat; 322 | settings : opt canister_settings; 323 | specified_id : opt canister_id; 324 | sender_canister_version : opt nat64; 325 | }; 326 | 327 | type provisional_create_canister_with_cycles_result = record { 328 | canister_id : canister_id; 329 | }; 330 | 331 | type provisional_top_up_canister_args = record { 332 | canister_id : canister_id; 333 | amount : nat; 334 | }; 335 | 336 | type raw_rand_result = blob; 337 | 338 | type stored_chunks_result = vec chunk_hash; 339 | 340 | type upload_chunk_result = chunk_hash; 341 | 342 | type bitcoin_get_balance_result = satoshi; 343 | 344 | type bitcoin_get_balance_query_result = satoshi; 345 | 346 | type bitcoin_get_current_fee_percentiles_result = vec millisatoshi_per_byte; 347 | 348 | type snapshot_id = blob; 349 | 350 | type snapshot = record { 351 | id : snapshot_id; 352 | taken_at_timestamp : nat64; 353 | total_size : nat64; 354 | }; 355 | 356 | type take_canister_snapshot_args = record { 357 | canister_id : canister_id; 358 | replace_snapshot : opt snapshot_id; 359 | }; 360 | 361 | type take_canister_snapshot_result = snapshot; 362 | 363 | type load_canister_snapshot_args = record { 364 | canister_id : canister_id; 365 | snapshot_id : snapshot_id; 366 | sender_canister_version : opt nat64; 367 | }; 368 | 369 | type list_canister_snapshots_args = record { 370 | canister_id : canister_id; 371 | }; 372 | 373 | type list_canister_snapshots_result = vec snapshot; 374 | 375 | type delete_canister_snapshot_args = record { 376 | canister_id : canister_id; 377 | snapshot_id : snapshot_id; 378 | }; 379 | 380 | type fetch_canister_logs_args = record { 381 | canister_id : canister_id; 382 | }; 383 | 384 | type canister_log_record = record { 385 | idx: nat64; 386 | timestamp_nanos: nat64; 387 | content: blob; 388 | }; 389 | 390 | type fetch_canister_logs_result = record { 391 | canister_log_records: vec canister_log_record; 392 | }; 393 | 394 | service ic 
: { 395 | create_canister : (create_canister_args) -> (create_canister_result); 396 | update_settings : (update_settings_args) -> (); 397 | upload_chunk : (upload_chunk_args) -> (upload_chunk_result); 398 | clear_chunk_store : (clear_chunk_store_args) -> (); 399 | stored_chunks : (stored_chunks_args) -> (stored_chunks_result); 400 | install_code : (install_code_args) -> (); 401 | install_chunked_code : (install_chunked_code_args) -> (); 402 | uninstall_code : (uninstall_code_args) -> (); 403 | start_canister : (start_canister_args) -> (); 404 | stop_canister : (stop_canister_args) -> (); 405 | canister_status : (canister_status_args) -> (canister_status_result); 406 | canister_info : (canister_info_args) -> (canister_info_result); 407 | delete_canister : (delete_canister_args) -> (); 408 | deposit_cycles : (deposit_cycles_args) -> (); 409 | raw_rand : () -> (raw_rand_result); 410 | http_request : (http_request_args) -> (http_request_result); 411 | 412 | // Threshold ECDSA signature 413 | ecdsa_public_key : (ecdsa_public_key_args) -> (ecdsa_public_key_result); 414 | sign_with_ecdsa : (sign_with_ecdsa_args) -> (sign_with_ecdsa_result); 415 | 416 | // bitcoin interface 417 | bitcoin_get_balance : (bitcoin_get_balance_args) -> (bitcoin_get_balance_result); 418 | bitcoin_get_balance_query : (bitcoin_get_balance_query_args) -> (bitcoin_get_balance_query_result) query; 419 | bitcoin_get_utxos : (bitcoin_get_utxos_args) -> (bitcoin_get_utxos_result); 420 | bitcoin_get_utxos_query : (bitcoin_get_utxos_query_args) -> (bitcoin_get_utxos_query_result) query; 421 | bitcoin_send_transaction : (bitcoin_send_transaction_args) -> (); 422 | bitcoin_get_current_fee_percentiles : (bitcoin_get_current_fee_percentiles_args) -> (bitcoin_get_current_fee_percentiles_result); 423 | 424 | // metrics interface 425 | node_metrics_history : (node_metrics_history_args) -> (node_metrics_history_result); 426 | 427 | // provisional interfaces for the pre-ledger world 428 | provisional_create_canister_with_cycles : (provisional_create_canister_with_cycles_args) -> (provisional_create_canister_with_cycles_result); 429 | provisional_top_up_canister : (provisional_top_up_canister_args) -> (); 430 | 431 | // Canister snapshots 432 | take_canister_snapshot : (take_canister_snapshot_args) -> (take_canister_snapshot_result); 433 | load_canister_snapshot : (load_canister_snapshot_args) -> (); 434 | list_canister_snapshots : (list_canister_snapshots_args) -> (list_canister_snapshots_result); 435 | delete_canister_snapshot : (delete_canister_snapshot_args) -> (); 436 | 437 | // canister logging 438 | fetch_canister_logs : (fetch_canister_logs_args) -> (fetch_canister_logs_result) query; 439 | }; 440 | -------------------------------------------------------------------------------- /src/ledger.did: -------------------------------------------------------------------------------- 1 | // This is the official Ledger interface that is guaranteed to be backward compatible. 2 | 3 | // Amount of tokens, measured in 10^-8 of a token. 4 | type Tokens = record { 5 | e8s : nat64; 6 | }; 7 | 8 | // Number of nanoseconds from the UNIX epoch in UTC timezone. 9 | type TimeStamp = record { 10 | timestamp_nanos: nat64; 11 | }; 12 | 13 | // AccountIdentifier is a 32-byte array. 14 | // The first 4 bytes is big-endian encoding of a CRC32 checksum of the last 28 bytes. 15 | type AccountIdentifier = blob; 16 | 17 | // Subaccount is an arbitrary 32-byte byte array. 
18 | // Ledger uses subaccounts to compute the source address, which enables one
19 | // principal to control multiple ledger accounts.
20 | type SubAccount = blob;
21 |
22 | // Sequence number of a block produced by the ledger.
23 | type BlockIndex = nat64;
24 |
25 | type Transaction = record {
26 | memo : Memo;
27 | icrc1_memo: opt blob;
28 | operation : opt Operation;
29 | created_at_time : TimeStamp;
30 | };
31 |
32 | // An arbitrary number associated with a transaction.
33 | // The caller can set it in a `transfer` call as a correlation identifier.
34 | type Memo = nat64;
35 |
36 | // Arguments for the `transfer` call.
37 | type TransferArgs = record {
38 | // Transaction memo.
39 | // See comments for the `Memo` type.
40 | memo: Memo;
41 | // The amount that the caller wants to transfer to the destination address.
42 | amount: Tokens;
43 | // The amount that the caller pays for the transaction.
44 | // Must be 10000 e8s.
45 | fee: Tokens;
46 | // The subaccount from which the caller wants to transfer funds.
47 | // If null, the ledger uses the default (all zeros) subaccount to compute the source address.
48 | // See comments for the `SubAccount` type.
49 | from_subaccount: opt SubAccount;
50 | // The destination account.
51 | // If the transfer is successful, the balance of this address increases by `amount`.
52 | to: AccountIdentifier;
53 | // The point in time when the caller created this request.
54 | // If null, the ledger uses current IC time as the timestamp.
55 | created_at_time: opt TimeStamp;
56 | };
57 |
58 | type TransferError = variant {
59 | // The fee that the caller specified in the transfer request was not the one that the ledger expects.
60 | // The caller can change the transfer fee to the `expected_fee` and retry the request.
61 | BadFee : record { expected_fee : Tokens; };
62 | // The account specified by the caller doesn't have enough funds.
63 | InsufficientFunds : record { balance: Tokens; };
64 | // The request is too old.
65 | // The ledger only accepts requests created within a 24-hour window.
66 | // This is a non-recoverable error.
67 | TxTooOld : record { allowed_window_nanos: nat64 };
68 | // The caller specified `created_at_time` that is too far in the future.
69 | // The caller can retry the request later.
70 | TxCreatedInFuture : null;
71 | // The ledger has already executed the request.
72 | // `duplicate_of` field is equal to the index of the block containing the original transaction.
73 | TxDuplicate : record { duplicate_of: BlockIndex; }
74 | };
75 |
76 | type TransferResult = variant {
77 | Ok : BlockIndex;
78 | Err : TransferError;
79 | };
80 |
81 | // Arguments for the `account_balance` call.
82 | type AccountBalanceArgs = record {
83 | account: AccountIdentifier;
84 | };
85 |
86 | type TransferFeeArg = record {};
87 |
88 | type TransferFee = record {
89 | // The fee to pay to perform a transfer
90 | transfer_fee: Tokens;
91 | };
92 |
93 | type GetBlocksArgs = record {
94 | // The index of the first block to fetch.
95 | start : BlockIndex;
96 | // Max number of blocks to fetch.
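// For example (illustrative values, not part of the interface itself), a request of
// record { start = 1_000_000; length = 100 } asks for blocks 1_000_000 through 1_000_099;
// the ledger may return fewer blocks than requested (see BlockRange below).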
97 | length : nat64; 98 | }; 99 | 100 | type Operation = variant { 101 | Mint : record { 102 | to : AccountIdentifier; 103 | amount : Tokens; 104 | }; 105 | Burn : record { 106 | from : AccountIdentifier; 107 | spender : opt AccountIdentifier; 108 | amount : Tokens; 109 | }; 110 | Transfer : record { 111 | from : AccountIdentifier; 112 | to : AccountIdentifier; 113 | amount : Tokens; 114 | fee : Tokens; 115 | spender : opt vec nat8; 116 | }; 117 | Approve : record { 118 | from : AccountIdentifier; 119 | spender : AccountIdentifier; 120 | // This field is deprecated and should not be used. 121 | allowance_e8s : int; 122 | allowance: Tokens; 123 | fee : Tokens; 124 | expires_at : opt TimeStamp; 125 | expected_allowance : opt Tokens; 126 | }; 127 | }; 128 | 129 | 130 | 131 | type Block = record { 132 | parent_hash : opt blob; 133 | transaction : Transaction; 134 | timestamp : TimeStamp; 135 | }; 136 | 137 | // A prefix of the block range specified in the [GetBlocksArgs] request. 138 | type BlockRange = record { 139 | // A prefix of the requested block range. 140 | // The index of the first block is equal to [GetBlocksArgs.from]. 141 | // 142 | // Note that the number of blocks might be less than the requested 143 | // [GetBlocksArgs.len] for various reasons, for example: 144 | // 145 | // 1. The query might have hit the replica with an outdated state 146 | // that doesn't have the full block range yet. 147 | // 2. The requested range is too large to fit into a single reply. 148 | // 149 | // NOTE: the list of blocks can be empty if: 150 | // 1. [GetBlocksArgs.len] was zero. 151 | // 2. [GetBlocksArgs.from] was larger than the last block known to the canister. 152 | blocks : vec Block; 153 | }; 154 | 155 | // An error indicating that the arguments passed to [QueryArchiveFn] were invalid. 156 | type QueryArchiveError = variant { 157 | // [GetBlocksArgs.from] argument was smaller than the first block 158 | // served by the canister that received the request. 159 | BadFirstBlockIndex : record { 160 | requested_index : BlockIndex; 161 | first_valid_index : BlockIndex; 162 | }; 163 | 164 | // Reserved for future use. 165 | Other : record { 166 | error_code : nat64; 167 | error_message : text; 168 | }; 169 | }; 170 | 171 | type QueryArchiveResult = variant { 172 | // Successfully fetched zero or more blocks. 173 | Ok : BlockRange; 174 | // The [GetBlocksArgs] request was invalid. 175 | Err : QueryArchiveError; 176 | }; 177 | 178 | // A function that is used for fetching archived ledger blocks. 179 | type QueryArchiveFn = func (GetBlocksArgs) -> (QueryArchiveResult) query; 180 | 181 | // The result of a "query_blocks" call. 182 | // 183 | // The structure of the result is somewhat complicated because the main ledger canister might 184 | // not have all the blocks that the caller requested: One or more "archive" canisters might 185 | // store some of the requested blocks. 186 | // 187 | // Note: as of Q4 2021 when this interface is authored, the IC doesn't support making nested 188 | // query calls within a query call. 189 | type QueryBlocksResponse = record { 190 | // The total number of blocks in the chain. 191 | // If the chain length is positive, the index of the last block is `chain_len - 1`. 192 | chain_length : nat64; 193 | 194 | // System certificate for the hash of the latest block in the chain. 195 | // Only present if `query_blocks` is called in a non-replicated query context. 
196 | certificate : opt blob;
197 |
198 | // List of blocks that were available in the ledger when it processed the call.
199 | //
200 | // The blocks form a contiguous range, with the first block having index
201 | // [first_block_index] (see below), and the last block having index
202 | // [first_block_index] + len(blocks) - 1.
203 | //
204 | // The block range can be an arbitrary sub-range of the originally requested range.
205 | blocks : vec Block;
206 |
207 | // The index of the first block in "blocks".
208 | // If the blocks vector is empty, the exact value of this field is not specified.
209 | first_block_index : BlockIndex;
210 |
211 | // Encoding of instructions for fetching archived blocks whose indices fall into the
212 | // requested range.
213 | //
214 | // For each entry `e` in [archived_blocks], `[e.from, e.from + len)` is a sub-range
215 | // of the originally requested block range.
216 | archived_blocks : vec ArchivedBlocksRange;
217 | };
218 |
219 | type ArchivedBlocksRange = record {
220 | // The index of the first archived block that can be fetched using the callback.
221 | start : BlockIndex;
222 |
223 | // The number of blocks that can be fetched using the callback.
224 | length : nat64;
225 |
226 | // The function that should be called to fetch the archived blocks.
227 | // The range of the blocks accessible using this function is given by [from]
228 | // and [len] fields above.
229 | callback : QueryArchiveFn;
230 | };
231 |
232 | type ArchivedEncodedBlocksRange = record {
233 | callback : func (GetBlocksArgs) -> (
234 | variant { Ok : vec blob; Err : QueryArchiveError },
235 | ) query;
236 | start : nat64;
237 | length : nat64;
238 | };
239 |
240 | type QueryEncodedBlocksResponse = record {
241 | certificate : opt blob;
242 | blocks : vec blob;
243 | chain_length : nat64;
244 | first_block_index : nat64;
245 | archived_blocks : vec ArchivedEncodedBlocksRange;
246 | };
247 |
248 | type Archive = record {
249 | canister_id: principal;
250 | };
251 |
252 | type Archives = record {
253 | archives: vec Archive;
254 | };
255 |
256 | type Duration = record {
257 | secs: nat64;
258 | nanos: nat32;
259 | };
260 |
261 | type ArchiveOptions = record {
262 | trigger_threshold : nat64;
263 | num_blocks_to_archive : nat64;
264 | node_max_memory_size_bytes : opt nat64;
265 | max_message_size_bytes : opt nat64;
266 | controller_id : principal;
267 | more_controller_ids: opt vec principal;
268 | cycles_for_archive_creation : opt nat64;
269 | max_transactions_per_response : opt nat64;
270 | };
271 |
272 | // Account identifier encoded as a 64-byte ASCII hex string.
273 | type TextAccountIdentifier = text;
274 |
275 | // Arguments for the `send_dfx` call.
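// As a hedged illustration only (the canister id, hex account identifier, and amounts below
// are made-up placeholders rather than values taken from this repository), an ic-repl
// invocation of `send_dfx` could look roughly like:
//
//   call "<ledger-canister-id>".send_dfx(
//     record {
//       memo = 0;
//       amount = record { e8s = 100_000_000 };
//       fee = record { e8s = 10_000 };
//       to = "<64-character hex AccountIdentifier>";
//     }
//   );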
276 | type SendArgs = record { 277 | memo: Memo; 278 | amount: Tokens; 279 | fee: Tokens; 280 | from_subaccount: opt SubAccount; 281 | to: TextAccountIdentifier; 282 | created_at_time: opt TimeStamp; 283 | }; 284 | 285 | type AccountBalanceArgsDfx = record { 286 | account: TextAccountIdentifier; 287 | }; 288 | 289 | type FeatureFlags = record { 290 | icrc2 : bool; 291 | }; 292 | 293 | type InitArgs = record { 294 | minting_account: TextAccountIdentifier; 295 | icrc1_minting_account: opt Account; 296 | initial_values: vec record {TextAccountIdentifier; Tokens}; 297 | max_message_size_bytes: opt nat64; 298 | transaction_window: opt Duration; 299 | archive_options: opt ArchiveOptions; 300 | send_whitelist: vec principal; 301 | transfer_fee: opt Tokens; 302 | token_symbol: opt text; 303 | token_name: opt text; 304 | feature_flags : opt FeatureFlags; 305 | maximum_number_of_accounts : opt nat64; 306 | accounts_overflow_trim_quantity: opt nat64; 307 | }; 308 | 309 | type Icrc1BlockIndex = nat; 310 | // Number of nanoseconds since the UNIX epoch in UTC timezone. 311 | type Icrc1Timestamp = nat64; 312 | type Icrc1Tokens = nat; 313 | 314 | type Account = record { 315 | owner : principal; 316 | subaccount : opt SubAccount; 317 | }; 318 | 319 | type TransferArg = record { 320 | from_subaccount : opt SubAccount; 321 | to : Account; 322 | amount : Icrc1Tokens; 323 | fee : opt Icrc1Tokens; 324 | memo : opt blob; 325 | created_at_time: opt Icrc1Timestamp; 326 | }; 327 | 328 | type Icrc1TransferError = variant { 329 | BadFee : record { expected_fee : Icrc1Tokens }; 330 | BadBurn : record { min_burn_amount : Icrc1Tokens }; 331 | InsufficientFunds : record { balance : Icrc1Tokens }; 332 | TooOld; 333 | CreatedInFuture : record { ledger_time : nat64 }; 334 | TemporarilyUnavailable; 335 | Duplicate : record { duplicate_of : Icrc1BlockIndex }; 336 | GenericError : record { error_code : nat; message : text }; 337 | }; 338 | 339 | type Icrc1TransferResult = variant { 340 | Ok : Icrc1BlockIndex; 341 | Err : Icrc1TransferError; 342 | }; 343 | 344 | // The value returned from the [icrc1_metadata] endpoint. 
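// As a rough illustration (the concrete entries depend on how the ledger is configured and
// are not taken from this file), an icrc1_metadata reply is a vector of key/Value pairs,
// for example:
//
//   vec {
//     record { "icrc1:symbol"; variant { Text = "TOK" } };
//     record { "icrc1:decimals"; variant { Nat = 8 } };
//     record { "icrc1:fee"; variant { Nat = 10_000 } };
//   }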
345 | type Value = variant { 346 | Nat : nat; 347 | Int : int; 348 | Text : text; 349 | Blob : blob; 350 | }; 351 | 352 | type UpgradeArgs = record { 353 | maximum_number_of_accounts : opt nat64; 354 | icrc1_minting_account : opt Account; 355 | feature_flags : opt FeatureFlags; 356 | }; 357 | 358 | type LedgerCanisterPayload = variant { 359 | Init: InitArgs; 360 | Upgrade: opt UpgradeArgs; 361 | }; 362 | 363 | type ApproveArgs = record { 364 | from_subaccount : opt SubAccount; 365 | spender : Account; 366 | amount : Icrc1Tokens; 367 | expected_allowance : opt Icrc1Tokens; 368 | expires_at : opt Icrc1Timestamp; 369 | fee : opt Icrc1Tokens; 370 | memo : opt blob; 371 | created_at_time: opt Icrc1Timestamp; 372 | }; 373 | 374 | type ApproveError = variant { 375 | BadFee : record { expected_fee : Icrc1Tokens }; 376 | InsufficientFunds : record { balance : Icrc1Tokens }; 377 | AllowanceChanged : record { current_allowance : Icrc1Tokens }; 378 | Expired : record { ledger_time : nat64 }; 379 | TooOld; 380 | CreatedInFuture : record { ledger_time : nat64 }; 381 | Duplicate : record { duplicate_of : Icrc1BlockIndex }; 382 | TemporarilyUnavailable; 383 | GenericError : record { error_code : nat; message : text }; 384 | }; 385 | 386 | type ApproveResult = variant { 387 | Ok : Icrc1BlockIndex; 388 | Err : ApproveError; 389 | }; 390 | 391 | type AllowanceArgs = record { 392 | account : Account; 393 | spender : Account; 394 | }; 395 | 396 | type Allowance = record { 397 | allowance : Icrc1Tokens; 398 | expires_at : opt Icrc1Timestamp; 399 | }; 400 | 401 | type TransferFromArgs = record { 402 | spender_subaccount : opt SubAccount; 403 | from : Account; 404 | to : Account; 405 | amount : Icrc1Tokens; 406 | fee : opt Icrc1Tokens; 407 | memo : opt blob; 408 | created_at_time: opt Icrc1Timestamp; 409 | }; 410 | 411 | type TransferFromResult = variant { 412 | Ok : Icrc1BlockIndex; 413 | Err : TransferFromError; 414 | }; 415 | 416 | type TransferFromError = variant { 417 | BadFee : record { expected_fee : Icrc1Tokens }; 418 | BadBurn : record { min_burn_amount : Icrc1Tokens }; 419 | InsufficientFunds : record { balance : Icrc1Tokens }; 420 | InsufficientAllowance : record { allowance : Icrc1Tokens }; 421 | TooOld; 422 | CreatedInFuture : record { ledger_time : Icrc1Timestamp }; 423 | Duplicate : record { duplicate_of : Icrc1BlockIndex }; 424 | TemporarilyUnavailable; 425 | GenericError : record { error_code : nat; message : text }; 426 | }; 427 | 428 | type icrc21_consent_message_metadata = record { 429 | language: text; 430 | utc_offset_minutes: opt int16; 431 | }; 432 | 433 | type icrc21_consent_message_spec = record { 434 | metadata: icrc21_consent_message_metadata; 435 | device_spec: opt variant { 436 | GenericDisplay; 437 | LineDisplay: record { 438 | characters_per_line: nat16; 439 | lines_per_page: nat16; 440 | }; 441 | }; 442 | }; 443 | 444 | type icrc21_consent_message_request = record { 445 | method: text; 446 | arg: blob; 447 | user_preferences: icrc21_consent_message_spec; 448 | }; 449 | 450 | type icrc21_consent_message = variant { 451 | GenericDisplayMessage: text; 452 | LineDisplayMessage: record { 453 | pages: vec record { 454 | lines: vec text; 455 | }; 456 | }; 457 | }; 458 | 459 | type icrc21_consent_info = record { 460 | consent_message: icrc21_consent_message; 461 | metadata: icrc21_consent_message_metadata; 462 | }; 463 | 464 | type icrc21_error_info = record { 465 | description: text; 466 | }; 467 | 468 | type icrc21_error = variant { 469 | UnsupportedCanisterCall: icrc21_error_info; 470 
| ConsentMessageUnavailable: icrc21_error_info; 471 | InsufficientPayment: icrc21_error_info; 472 | 473 | // Any error not covered by the above variants. 474 | GenericError: record { 475 | error_code: nat; 476 | description: text; 477 | }; 478 | }; 479 | 480 | type icrc21_consent_message_response = variant { 481 | Ok: icrc21_consent_info; 482 | Err: icrc21_error; 483 | }; 484 | 485 | service: (LedgerCanisterPayload) -> { 486 | // Transfers tokens from a subaccount of the caller to the destination address. 487 | // The source address is computed from the principal of the caller and the specified subaccount. 488 | // When successful, returns the index of the block containing the transaction. 489 | transfer : (TransferArgs) -> (TransferResult); 490 | 491 | // Returns the amount of Tokens on the specified account. 492 | account_balance : (AccountBalanceArgs) -> (Tokens) query; 493 | 494 | // Returns the account identifier for the given Principal and subaccount. 495 | account_identifier : (Account) -> (AccountIdentifier) query; 496 | 497 | // Returns the current transfer_fee. 498 | transfer_fee : (TransferFeeArg) -> (TransferFee) query; 499 | 500 | // Queries blocks in the specified range. 501 | query_blocks : (GetBlocksArgs) -> (QueryBlocksResponse) query; 502 | 503 | // Queries encoded blocks in the specified range 504 | query_encoded_blocks : (GetBlocksArgs) -> (QueryEncodedBlocksResponse) query; 505 | 506 | // Returns token symbol. 507 | symbol : () -> (record { symbol: text }) query; 508 | 509 | // Returns token name. 510 | name : () -> (record { name: text }) query; 511 | 512 | // Returns token decimals. 513 | decimals : () -> (record { decimals: nat32 }) query; 514 | 515 | // Returns the existing archive canisters information. 516 | archives : () -> (Archives) query; 517 | 518 | send_dfx : (SendArgs) -> (BlockIndex); 519 | account_balance_dfx : (AccountBalanceArgsDfx) -> (Tokens) query; 520 | 521 | // The following methods implement the ICRC-1 Token Standard. 
522 | // https://github.com/dfinity/ICRC-1/tree/main/standards/ICRC-1
523 | icrc1_name : () -> (text) query;
524 | icrc1_symbol : () -> (text) query;
525 | icrc1_decimals : () -> (nat8) query;
526 | icrc1_metadata : () -> (vec record { text; Value }) query;
527 | icrc1_total_supply : () -> (Icrc1Tokens) query;
528 | icrc1_fee : () -> (Icrc1Tokens) query;
529 | icrc1_minting_account : () -> (opt Account) query;
530 | icrc1_balance_of : (Account) -> (Icrc1Tokens) query;
531 | icrc1_transfer : (TransferArg) -> (Icrc1TransferResult);
532 | icrc1_supported_standards : () -> (vec record { name : text; url : text }) query;
533 | icrc2_approve : (ApproveArgs) -> (ApproveResult);
534 | icrc2_allowance : (AllowanceArgs) -> (Allowance) query;
535 | icrc2_transfer_from : (TransferFromArgs) -> (TransferFromResult);
536 |
537 | icrc21_canister_call_consent_message: (icrc21_consent_message_request) -> (icrc21_consent_message_response);
538 | icrc10_supported_standards : () -> (vec record { name : text; url : text }) query;
539 | }
540 | -------------------------------------------------------------------------------- /src/main.rs: --------------------------------------------------------------------------------
1 | use clap::Parser;
2 | use ic_agent::Agent;
3 | use rustyline::error::ReadlineError;
4 | use rustyline::CompletionType;
5 |
6 | mod account_identifier;
7 | mod command;
8 | mod error;
9 | mod exp;
10 | mod grammar;
11 | mod helper;
12 | mod offline;
13 | mod profiling;
14 | mod selector;
15 | mod token;
16 | mod utils;
17 | use crate::command::Command;
18 | use crate::error::pretty_parse;
19 | use crate::helper::{MyHelper, OfflineOutput};
20 |
21 | fn unwrap<T, E, F>(v: Result<T, E>, f: F)
22 | where
23 | E: std::fmt::Debug,
24 | F: FnOnce(T),
25 | {
26 | match v {
27 | Ok(res) => f(res),
28 | Err(e) => eprintln!("Error: {e:?}"),
29 | }
30 | }
31 |
32 | fn repl(opts: Opts) -> anyhow::Result<()> {
33 | let mut replica = opts.replica.unwrap_or_else(|| "local".to_string());
34 | let offline = if opts.offline {
35 | replica = "ic".to_string();
36 | let send_url = opts
37 | .url
38 | .unwrap_or_else(|| "https://qhmh2-niaaa-aaaab-qadta-cai.raw.icp0.io/?msg=".to_string());
39 | Some(match opts.format.as_deref() {
40 | None | Some("json") => OfflineOutput::Json,
41 | Some("ascii") => OfflineOutput::Ascii(send_url),
42 | Some("png") => OfflineOutput::Png(send_url),
43 | Some("png_no_url") => OfflineOutput::PngNoUrl,
44 | Some("ascii_no_url") => OfflineOutput::AsciiNoUrl,
45 | _ => unreachable!(),
46 | })
47 | } else {
48 | None
49 | };
50 | let url = match replica.as_str() {
51 | "local" => "http://localhost:4943/",
52 | "ic" => "https://icp0.io",
53 | url => url,
54 | };
55 | println!("Ping {url}...");
56 | let agent = Agent::builder()
57 | .with_url(url)
58 | .with_max_tcp_error_retries(2)
59 | .with_max_polling_time(std::time::Duration::from_secs(60 * 10))
60 | .build()?;
61 |
62 | println!("Canister REPL");
63 | let config = rustyline::Config::builder()
64 | .history_ignore_space(true)
65 | .completion_type(CompletionType::List)
66 | .build();
67 | let h = MyHelper::new(agent, url.to_string(), offline, opts.verbose);
68 | if let Some(file) = opts.send {
69 | use crate::offline::{send_messages, Messages};
70 | let json = std::fs::read_to_string(file)?;
71 | let msgs = serde_json::from_str::<Messages>(&json)?;
72 | send_messages(&h, &msgs)?;
73 | return Ok(());
74 | }
75 | let mut rl = rustyline::Editor::with_config(config)?;
76 | rl.set_helper(Some(h));
77 | let _ = rl.load_history("./.history");
78 | if let Some(file) = opts.config {
79 | let config = std::fs::read_to_string(file)?;
80 | rl.helper_mut().unwrap().config = config.parse()?;
81 | }
82 |
83 | let enter_repl = opts.script.is_none() || opts.interactive;
84 | if let Some(file) = opts.script {
85 | let cmd = Command::Load(exp::Exp::Text(file));
86 | let helper = rl.helper_mut().unwrap();
87 | cmd.run(helper)?;
88 | if helper.func_env.0.contains_key("__main") {
89 | let mut args = Vec::new();
90 | for arg in opts.extra_args {
91 | let v = candid_parser::parse_idl_value(&arg).unwrap_or(candid::IDLValue::Text(arg));
92 | args.push(v);
93 | }
94 | exp::apply_func(helper, "__main", args)?;
95 | }
96 | }
97 | if enter_repl {
98 | rl.helper_mut().unwrap().verbose = true;
99 | let mut count = 1;
100 | loop {
101 | let identity = &rl.helper().unwrap().current_identity;
102 | let p = format!("{identity}@{replica} {count}> ");
103 | rl.helper_mut().unwrap().colored_prompt =
104 | format!("{}", console::style(&p).green().bold());
105 | let input = rl.readline(&p);
106 | match input {
107 | Ok(line) => {
108 | rl.add_history_entry(&line)?;
109 | unwrap(pretty_parse::<Command>("stdin", &line), |cmd| {
110 | let helper = rl.helper_mut().unwrap();
111 | unwrap(cmd.run(helper), |_| {});
112 | });
113 | }
114 | Err(ReadlineError::Interrupted) | Err(ReadlineError::Eof) => break,
115 | Err(err) => {
116 | eprintln!("Error: {err:?}");
117 | break;
118 | }
119 | }
120 | count += 1;
121 | }
122 | rl.save_history("./.history")?;
123 | }
124 | if opts.offline {
125 | let helper = rl.helper().unwrap();
126 | if !helper.messages.borrow().is_empty() {
127 | helper.dump_ingress()?;
128 | }
129 | }
130 | Ok(())
131 | }
132 |
133 | #[derive(Parser)]
134 | #[clap(version, author)]
135 | struct Opts {
136 | #[clap(short, long)]
137 | /// Specifies replica URL, possible values: local, ic, URL
138 | replica: Option<String>,
139 | #[clap(short, long, conflicts_with("replica"))]
140 | /// Offline mode to be run in air-gapped machines. All signed messages will be stored in messages.json
141 | offline: bool,
142 | #[clap(short, long, requires("offline"), value_parser = ["ascii", "json", "png", "ascii_no_url", "png_no_url"])]
143 | /// Offline output format
144 | format: Option<String>,
145 | #[clap(short, long, requires("offline"))]
146 | /// Offline URL embedded in the QR code, only used in ascii or png format. Default value: "https://qhmh2-niaaa-aaaab-qadta-cai.raw.ic0.app/?msg="
147 | url: Option<String>,
148 | #[clap(short, long)]
149 | /// Specifies config file for Candid random value generation
150 | config: Option<String>,
151 | /// ic-repl script file
152 | script: Option<String>,
153 | #[clap(short, long, requires("script"))]
154 | /// Enter repl once the script is finished
155 | interactive: bool,
156 | #[clap(short, long, conflicts_with("script"), conflicts_with("offline"))]
157 | /// Send signed messages
158 | send: Option<String>,
159 | #[clap(short, long)]
160 | /// Run script in verbose mode. Non-verbose mode will only output text values.
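    // A hypothetical invocation combining these options might look like
    //   `ic-repl --replica ic --verbose my_script.sh -- 42`
    // where `my_script.sh` and `42` are illustrative placeholders; everything after `--`
    // is forwarded to the script's __main function via `extra_args` below.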
161 | verbose: bool,
162 | #[clap(last = true)]
163 | /// Extra arguments passed to __main function when running a script
164 | extra_args: Vec<String>,
165 | }
166 |
167 | fn main() -> anyhow::Result<()> {
168 | let opts = Opts::parse();
169 | repl(opts)
170 | }
171 | -------------------------------------------------------------------------------- /src/offline.rs: --------------------------------------------------------------------------------
1 | use crate::helper::{MyHelper, OfflineOutput};
2 | use crate::utils::args_to_value;
3 | use anyhow::{anyhow, Context, Result};
4 | use candid::Principal;
5 | use candid::{types::Function, IDLArgs, TypeEnv};
6 | use ic_agent::{agent::CallResponse, Agent};
7 | use serde::{Deserialize, Serialize};
8 |
9 | #[derive(Serialize, Deserialize, Clone)]
10 | pub struct Ingress {
11 | pub call_type: String,
12 | pub request_id: Option<String>,
13 | pub content: String,
14 | }
15 | #[derive(Serialize, Deserialize, Clone)]
16 | pub struct RequestStatus {
17 | pub canister_id: Principal,
18 | pub request_id: String,
19 | pub content: String,
20 | }
21 | #[derive(Serialize, Deserialize, Clone)]
22 | pub struct IngressWithStatus {
23 | pub ingress: Ingress,
24 | pub request_status: Option<RequestStatus>,
25 | }
26 | #[derive(Serialize, Deserialize, Clone)]
27 | pub struct Messages(Vec<IngressWithStatus>);
28 |
29 | static mut PNG_COUNTER: u32 = 0;
30 |
31 | impl Ingress {
32 | pub fn parse(&self) -> Result<(Principal, Principal, String, Vec<u8>)> {
33 | use serde_cbor::Value;
34 | let cbor: Value = serde_cbor::from_slice(&hex::decode(&self.content)?)
35 | .context("Invalid cbor data in the content of the message.")?;
36 | if let Value::Map(m) = cbor {
37 | let cbor_content = m
38 | .get(&Value::Text("content".to_string()))
39 | .ok_or_else(|| anyhow!("Invalid cbor content"))?;
40 | if let Value::Map(m) = cbor_content {
41 | if let (
42 | Some(Value::Bytes(sender)),
43 | Some(Value::Bytes(canister_id)),
44 | Some(Value::Text(method_name)),
45 | Some(Value::Bytes(arg)),
46 | ) = (
47 | m.get(&Value::Text("sender".to_string())),
48 | m.get(&Value::Text("canister_id".to_string())),
49 | m.get(&Value::Text("method_name".to_string())),
50 | m.get(&Value::Text("arg".to_string())),
51 | ) {
52 | let sender = Principal::try_from(sender)?;
53 | let canister_id = Principal::try_from(canister_id)?;
54 | return Ok((sender, canister_id, method_name.to_string(), arg.to_vec()));
55 | }
56 | }
57 | }
58 | Err(anyhow!("Invalid cbor content"))
59 | }
60 | }
61 |
62 | #[allow(static_mut_refs)]
63 | pub fn output_message(json: String, format: &OfflineOutput) -> Result<()> {
64 | match format {
65 | OfflineOutput::Json => println!("{json}"),
66 | _ => {
67 | use base64::{
68 | engine::general_purpose::{STANDARD_NO_PAD, URL_SAFE_NO_PAD},
69 | Engine,
70 | };
71 | use libflate::gzip;
72 | use qrcode::{render::unicode, QrCode};
73 | use std::io::Write;
74 | eprintln!("json length: {}", json.len());
75 | let mut encoder = gzip::Encoder::new(Vec::new())?;
76 | encoder.write_all(json.as_bytes())?;
77 | let zipped = encoder.finish().into_result()?;
78 | let engine = if matches!(format, OfflineOutput::PngNoUrl | OfflineOutput::AsciiNoUrl) {
79 | STANDARD_NO_PAD
80 | } else {
81 | URL_SAFE_NO_PAD
82 | };
83 | let base64 = engine.encode(zipped);
84 | eprintln!("base64 length: {}", base64.len());
85 | let msg = match format {
86 | OfflineOutput::Ascii(url) | OfflineOutput::Png(url) => url.to_owned() + &base64,
87 | _ => base64,
88 | };
89 | let code = QrCode::new(msg)?;
90 | match format {
91 | OfflineOutput::Ascii(_) | OfflineOutput::AsciiNoUrl => {
92 | let img = code.render::<unicode::Dense1x2>().build();
93 | println!("{img}");
94 | }
95 | OfflineOutput::Png(_) | OfflineOutput::PngNoUrl => {
96 | let img = code.render::<image::Luma<u8>>().build();
97 | let filename = unsafe {
98 | PNG_COUNTER += 1;
99 | format!("msg{PNG_COUNTER}.png")
100 | };
101 | img.save(&filename)?;
102 | println!("QR code saved to {filename}");
103 | }
104 | _ => unreachable!(),
105 | }
106 | }
107 | };
108 | Ok(())
109 | }
110 | pub fn dump_ingress(msgs: &[IngressWithStatus]) -> Result<()> {
111 | use std::fs::File;
112 | use std::io::Write;
113 | let msgs = Messages(msgs.to_vec());
114 | let json = serde_json::to_string(&msgs)?;
115 | let mut file = File::create("messages.json")?;
116 | file.write_all(json.as_bytes())?;
117 | Ok(())
118 | }
119 |
120 | pub fn send_messages(helper: &MyHelper, msgs: &Messages) -> Result<IDLArgs> {
121 | let len = msgs.0.len();
122 | let mut res = Vec::with_capacity(len);
123 | println!("Sending {} messages to {}", len, helper.agent_url);
124 | for (i, msg) in msgs.0.iter().enumerate() {
125 | print!("[{}/{}] ", i + 1, len);
126 | let args = send(helper, msg)?;
127 | res.push(args_to_value(args))
128 | }
129 | Ok(IDLArgs::new(&res))
130 | }
131 | pub fn send(helper: &MyHelper, msg: &IngressWithStatus) -> Result<IDLArgs> {
132 | let message = &msg.ingress;
133 | let (sender, canister_id, method_name, bytes) = message.parse()?;
134 | let meth = crate::exp::Method {
135 | canister: canister_id.to_string(),
136 | method: method_name.clone(),
137 | };
138 | let opt_func = meth.get_info(helper, false)?.signature;
139 | let args = if let Some((env, func)) = &opt_func {
140 | IDLArgs::from_bytes_with_types(&bytes, env, &func.args)?
141 | } else {
142 | IDLArgs::from_bytes(&bytes)?
143 | };
144 | println!("Sending {} call as {}:", message.call_type, sender);
145 | println!(" call \"{}\".{}{};", canister_id, method_name, args);
146 | println!("Do you want to send this message? [y/N]");
147 | let mut input = String::new();
148 | std::io::stdin().read_line(&mut input)?;
149 | if !["y", "yes"].contains(&input.to_lowercase().trim()) {
150 | return Err(anyhow!("Send aborted"));
151 | }
152 | send_internal(&helper.agent, canister_id, msg, &opt_func)
153 | }
154 | #[tokio::main]
155 | async fn send_internal(
156 | agent: &Agent,
157 | canister_id: Principal,
158 | message: &IngressWithStatus,
159 | opt_func: &Option<(TypeEnv, Function)>,
160 | ) -> Result<IDLArgs> {
161 | let content = hex::decode(&message.ingress.content)?;
162 | let response = match message.ingress.call_type.as_str() {
163 | "query" => agent.query_signed(canister_id, content).await?,
164 | "update" => {
165 | let call_response = agent.update_signed(canister_id, content).await?;
166 | match call_response {
167 | CallResponse::Response(blob) => blob,
168 | CallResponse::Poll(request_id) => {
169 | println!("Request ID: 0x{}", String::from(request_id));
170 | let status = message
171 | .request_status
172 | .as_ref()
173 | .ok_or_else(|| anyhow!("Cannot get request status for update call"))?;
174 | if !(status.canister_id == canister_id
175 | && status.request_id == String::from(request_id))
176 | {
177 | return Err(anyhow!("request_id doesn't match, cannot request status"));
178 | }
179 | let status = hex::decode(&status.content)?;
180 | agent.wait_signed(&request_id, canister_id, status).await?.0
181 | }
182 | }
183 | }
184 | _ => unreachable!(),
185 | };
186 | let res = if let Some((env, func)) = &opt_func {
187 | IDLArgs::from_bytes_with_types(&response, env, &func.rets)?
188 | } else {
189 | IDLArgs::from_bytes(&response)?
190 | };
191 | println!("{}", res);
192 | Ok(res)
193 | }
194 | -------------------------------------------------------------------------------- /src/profiling.rs: --------------------------------------------------------------------------------
1 | use crate::exp::MethodInfo;
2 | use crate::helper::MyHelper;
3 | use anyhow::anyhow;
4 | use candid::{
5 | types::value::{IDLField, IDLValue},
6 | types::Label,
7 | Principal,
8 | };
9 | use ic_agent::Agent;
10 | use std::collections::BTreeMap;
11 | use std::path::PathBuf;
12 |
13 | pub fn ok_to_profile<'a>(helper: &'a MyHelper, info: &'a MethodInfo) -> bool {
14 | helper.offline.is_none()
15 | && info.profiling.is_some()
16 | && info.signature.as_ref().map(|s| s.1.is_query()) != Some(true)
17 | }
18 |
19 | #[tokio::main]
20 | pub async fn get_cycles(agent: &Agent, canister_id: &Principal) -> anyhow::Result<i64> {
21 | get_cycles_inner(agent, canister_id).await
22 | }
23 | async fn get_cycles_inner(agent: &Agent, canister_id: &Principal) -> anyhow::Result<i64> {
24 | use candid::{Decode, Encode};
25 | let builder = agent.query(canister_id, "__get_cycles");
26 | let bytes = builder
27 | .with_arg(Encode!()?)
28 | .with_effective_canister_id(*canister_id)
29 | .call()
30 | .await?;
31 | Ok(Decode!(&bytes, i64)?)
32 | }
33 |
34 | #[tokio::main]
35 | pub async fn get_profiling(
36 | agent: &Agent,
37 | canister_id: &Principal,
38 | names: &BTreeMap<u16, String>,
39 | title: &str,
40 | filename: PathBuf,
41 | ) -> anyhow::Result<u64> {
42 | use candid::{Decode, Encode};
43 | let mut idx = 0i32;
44 | let mut pairs = vec![];
45 | let mut cnt = 1;
46 | let builder = agent.query(canister_id, "__get_profiling");
47 | loop {
48 | let bytes = builder
49 | .clone()
50 | .with_arg(Encode!(&idx)?)
51 | .with_effective_canister_id(*canister_id)
52 | .call()
53 | .await?;
54 | let (mut trace, opt_idx) = Decode!(&bytes, Vec<(i32, i64)>, Option<i32>)?;
55 | pairs.append(&mut trace);
56 | if let Some(i) = opt_idx {
57 | idx = i;
58 | cnt += 1;
59 | } else {
60 | break;
61 | }
62 | }
63 | if cnt > 1 {
64 | eprintln!("large trace: {}MB", cnt * 2);
65 | }
66 | if !pairs.is_empty() {
67 | match render_profiling(pairs, names, title, filename)? {
68 | CostValue::Complete(cost) => Ok(cost),
69 | CostValue::StartCost(start) => {
70 | let end = get_cycles_inner(agent, canister_id).await? as u64;
71 | Ok(end - start)
72 | }
73 | }
74 | } else {
75 | eprintln!("empty trace");
76 | Ok(0)
77 | }
78 | }
79 |
80 | enum CostValue {
81 | Complete(u64),
82 | StartCost(u64),
83 | }
84 |
85 | fn render_profiling(
86 | input: Vec<(i32, i64)>,
87 | names: &BTreeMap<u16, String>,
88 | title: &str,
89 | filename: PathBuf,
90 | ) -> anyhow::Result<CostValue> {
91 | use inferno::flamegraph::{from_reader, Options};
92 | let mut stack = Vec::new();
93 | let mut prefix = Vec::new();
94 | let mut result = Vec::new();
95 | let mut total = 0;
96 | let mut prev = None;
97 | let start_cost = input.first().map(|(_, count)| *count);
98 | for (id, count) in input.into_iter() {
99 | if id >= 0 {
100 | stack.push((id, count, 0));
101 | let name = match names.get(&(id as u16)) {
102 | Some(name) => name.clone(),
103 | None => "func_".to_string() + &id.to_string(),
104 | };
105 | prefix.push(name);
106 | } else {
107 | match stack.pop() {
108 | None => return Err(anyhow!("pop empty stack")),
109 | Some((start_id, start, children)) => {
110 | if start_id != -id {
111 | return Err(anyhow!("func id mismatch"));
112 | }
113 | let cost = count - start;
114 | let frame = prefix.join(";");
115 | prefix.pop().unwrap();
116 | if let Some((parent, parent_cost, children_cost)) = stack.pop() {
117 | stack.push((parent, parent_cost, children_cost + cost));
118 | } else {
119 | total += cost as u64;
120 | }
121 | match prev {
122 | Some(prev) if prev == frame => {
123 | // Add an empty spacer to avoid collapsing adjacent same-named calls
124 | // See https://github.com/jonhoo/inferno/issues/185#issuecomment-671393504
125 | result.push(format!("{};spacer 0", prefix.join(";")));
126 | }
127 | _ => (),
128 | }
129 | result.push(format!("{} {}", frame, cost - children));
130 | prev = Some(frame);
131 | }
132 | }
133 | }
134 | }
135 | let cost = if !stack.is_empty() {
136 | eprintln!("A trap occurred or the trace is too large");
137 | CostValue::StartCost(start_cost.unwrap() as u64)
138 | } else {
139 | CostValue::Complete(total)
140 | };
141 | //println!("Cost: {} Wasm instructions", total);
142 | let mut opt = Options::default();
143 | opt.count_name = "instructions".to_string();
144 | let title = if matches!(cost, CostValue::StartCost(_)) {
145 | title.to_string() + " (incomplete)"
146 | } else {
147 | title.to_string()
148 | };
149 | opt.title = title;
150 | opt.image_width = Some(1024);
151 | opt.flame_chart = true;
152 | opt.no_sort = true;
153 | // Reverse result order to make flamegraph from left to right.
154 | // See https://github.com/jonhoo/inferno/issues/236
155 | result.reverse();
156 | let logs = result.join("\n");
157 | let reader = std::io::Cursor::new(logs);
158 | println!("Flamegraph written to {}", filename.display());
159 | let mut writer = std::fs::File::create(&filename)?;
160 | from_reader(&mut opt, reader, &mut writer)?;
161 | Ok(cost)
162 | }
163 |
164 | pub fn may_extract_profiling(result: IDLValue) -> (IDLValue, Option<i64>) {
165 | match result {
166 | IDLValue::Record(ref fs) => match fs.as_slice() {
167 | [IDLField {
168 | id: Label::Id(0),
169 | val,
170 | }, IDLField {
171 | id: Label::Id(1),
172 | val: IDLValue::Record(fs),
173 | }] => match fs.as_slice() {
174 | [IDLField {
175 | id: Label::Named(lab),
176 | val: IDLValue::Int64(cost),
177 | }] if lab == "__cost" => (val.clone(), Some(*cost)),
178 | _ => (result, None),
179 | },
180 | _ => (result, None),
181 | },
182 | _ => (result, None),
183 | }
184 | }
185 | -------------------------------------------------------------------------------- /src/selector.rs: --------------------------------------------------------------------------------
1 | use super::exp::Exp;
2 | use super::helper::MyHelper;
3 | use super::utils::as_u32;
4 | use anyhow::{anyhow, Result};
5 | use candid::{
6 | types::value::{IDLField, IDLValue, VariantValue},
7 | types::Label,
8 | utils::check_unique,
9 | };
10 |
11 | #[derive(Debug, Clone)]
12 | pub enum Selector {
13 | Index(Exp),
14 | Field(String),
15 | Option,
16 | Map(String),
17 | Filter(String),
18 | Fold(Exp, String),
19 | Size, // Size is not required, but it is faster than using fold
20 | }
21 | impl Selector {
22 | fn to_label(&self, helper: &MyHelper) -> Result