15 |     [-r <[org/]name(@slot|:tag)> | -o <code> | [-s <slot> -n <name>] | [-t <tag>]]
16 |
17 | FLAGS
18 | -f, --force Does not prompt before removing a squid or its version
19 | --[no-]interactive Disable interactive mode
20 |
21 | SQUID FLAGS
22 | -n, --name=<name> Name of the squid
23 | -r, --reference=<[org/]name(@slot|:tag)> Fully qualified reference of the squid. It can include the organization, name, slot, or tag
24 | -s, --slot=<slot> Slot of the squid
25 | -t, --tag=<tag> Tag of the squid
26 |
27 | ORG FLAGS
28 | -o, --org=<code> Code of the organization
29 |
30 | ALIASES
31 | $ sqd rm
32 | ```
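For example, to remove a squid deployment by its fully qualified reference without being prompted (the organization, name and slot are placeholders):

```bash
sqd rm -f -r my-org/my-squid@v1
```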
33 |
34 | _See code: [src/commands/rm.ts](https://github.com/subsquid/squid-cli/tree/master/src/commands/rm.ts)_
35 |
--------------------------------------------------------------------------------
/docs/squid-cli/restart.md:
--------------------------------------------------------------------------------
1 | `sqd restart`
2 | =============
3 |
4 | Restart a squid deployed to the Cloud
5 |
6 | * [`sqd restart`](#sqd-restart-1)
7 |
8 | ## `sqd restart`
9 |
10 | Restart a squid deployed to the Cloud
11 |
12 | ```
13 | USAGE
14 | $ sqd restart [--interactive]
15 | [-r <[org/]name(@slot|:tag)> | -o <code> | [-s <slot> -n <name>] | [-t <tag>]]
16 |
17 | FLAGS
18 | --[no-]interactive Disable interactive mode
19 |
20 | SQUID FLAGS
21 | -n, --name=<name> Name of the squid
22 | -r, --reference=<[org/]name(@slot|:tag)> Fully qualified reference of the squid.
23 | It can include the organization, name,
24 | slot, or tag
25 | -s, --slot=<slot> Slot of the squid
26 | -t, --tag=<tag> Tag of the squid
27 |
28 | ORG FLAGS
29 | -o, --org=<code> Code of the organization
30 | ```
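For example, to restart a squid by its fully qualified reference (the organization, name and slot are placeholders):

```bash
sqd restart -r my-org/my-squid@v1
```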
31 |
32 | _See code: [src/commands/restart.ts](https://github.com/subsquid/squid-cli/blob/master/src/commands/restart.ts)_
33 |
--------------------------------------------------------------------------------
/docs/squid-cli/run.md:
--------------------------------------------------------------------------------
1 | `sqd run`
2 | =========
3 |
4 | Run a squid locally according to the [deployment manifest](/cloud/reference/manifest).
5 |
6 | * [sqd run PATH](#sqd-run-path)
7 |
8 | Notes:
9 | - The command is especially useful for running [multichain squids](/sdk/resources/multichain), as it runs all services in the same terminal and handles failures gracefully.
10 |
11 | ## `sqd run PATH`
12 |
13 | Run a squid project locally
14 |
15 | ```
16 | USAGE
17 | $ sqd run PATH [--interactive] [-m <manifest>] [-f <envFile>] [-i <service>... | -e <service>...] [-r <retries>]
18 |
19 | FLAGS
20 | -e, --exclude=<service>... Do not run specified services
21 | -f, --envFile=<envFile> [default: .env] Relative path to an additional environment file
22 | -i, --include=<service>... Run only specified services
23 | -m, --manifest=<manifest> [default: squid.yaml] Relative path to a squid manifest file
24 | -r, --retries=<retries> [default: 5] Attempts to restart failed or stopped services
25 | --[no-]interactive Disable interactive mode
26 | ```
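For example, to run the squid in the current directory against a non-default environment file (the file name is hypothetical):

```bash
sqd run . -f .env.local
```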
27 |
28 | _See code: [src/commands/run.ts](https://github.com/subsquid/squid-cli/blob/master/src/commands/run.ts)_
29 |
--------------------------------------------------------------------------------
/docs/sdk/reference/openreader-server/api/sorting.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 60
3 | title: Sorting
4 | description: >-
5 | The orderBy argument
6 | ---
7 |
8 | # Sorting
9 |
10 | ## Sort order
11 |
12 | The sort order (ascending vs. descending) is set by specifying an `ASC` or `DESC` suffix for the column name in the `orderBy` input object, e.g. `title_DESC`.
13 |
14 | ### **Sorting entities**
15 |
16 | Example: Fetch a list of videos sorted by their titles in an ascending order:
17 |
18 | ```graphql
19 | query {
20 | videos(orderBy: title_ASC) {
21 | id
22 | title
23 | }
24 | }
25 | ```
26 | or
27 | ```graphql
28 | query {
29 | videos(orderBy: [title_ASC]) {
30 | id
31 | title
32 | }
33 | }
34 | ```
35 |
36 | ### **Sorting entities by multiple fields**
37 |
38 | The `orderBy` argument takes an array of fields to allow sorting by multiple columns.
39 |
40 | Example: Fetch a list of videos sorted by their titles (ascending) and then by their published date (descending):
41 |
42 | ```graphql
43 | query {
44 | videos(orderBy: [title_ASC, publishedOn_DESC]) {
45 | id
46 | title
47 | publishedOn
48 | }
49 | }
50 | ```
51 |
--------------------------------------------------------------------------------
/docs/sdk/reference/openreader-server/api/intro.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 10
3 | title: Intro
4 | description: >-
5 | GraphQL and its support in SQD
6 | ---
7 |
8 | :::info
9 | At the moment, [Squid SDK GraphQL server](/sdk/reference/openreader-server) can only be used with squids that use Postgres as their target database.
10 | :::
11 |
12 | GraphQL is an API query language, and a server-side runtime for executing queries using a custom type system. Head over to the [official documentation website](https://graphql.org/learn/) for more info.
13 |
14 | A GraphQL API served by the [GraphQL server](/sdk/reference/openreader-server) has two components:
15 |
16 | 1. The core API, defined by the [schema file](/sdk/reference/schema-file).
17 | 2. Extensions, added via [custom resolvers](/sdk/reference/openreader-server/configuration/custom-resolvers).
18 |
19 | In this section we cover the core GraphQL API, with short explanations on how to perform GraphQL queries, how to paginate and sort results. This functionality is supported via [OpenReader](https://github.com/subsquid/squid-sdk/tree/master/graphql/openreader), SQD's own implementation of [OpenCRUD](https://www.opencrud.org).
20 |
--------------------------------------------------------------------------------
/scripts/networksLists/bin/substrate.js:
--------------------------------------------------------------------------------
1 | const substrateJsonUrl = 'https://cdn.subsquid.io/archives/substrate.json'
2 |
3 | require('axios')
4 | .get(substrateJsonUrl)
5 | .then(
6 | resp => {
7 | console.log(substrateNetworksList(resp.data.archives))
8 | },
9 | e => {
10 | const errorDescription = e.code==='ERR_BAD_REQUEST' ? `${e.code} ${e.response.status}` : e.code
11 | console.error(`Retrieving ${substrateJsonUrl} failed with ${errorDescription}`)
12 | }
13 | )
14 |
15 | function substrateNetworksList(networksJson) {
16 | const nameMapping = arch => {
17 | switch (arch.network) {
18 | case 'asset-hub-kusama':
19 | case 'asset-hub-polkadot':
20 | return `${arch.chainName} (*)`
21 | case 'avail':
22 | return `${arch.chainName} (**)`
23 | default:
24 | return arch.chainName
25 | }
26 | }
27 | const rows = networksJson.map(a => ({
28 | network: nameMapping(a),
29 | url: a.providers.find(p => p.provider==='subsquid' && p.release==='ArrowSquid')?.dataSourceUrl
30 | }))
31 | const header = {
32 | network: 'Network',
33 | url: 'Gateway URL'
34 | }
35 | return require('../lib/formatTable')(rows, header, ['network', 'url'])
36 | }
37 |
--------------------------------------------------------------------------------
/docs/cloud/resources/production-alias.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_class_name: hidden
3 | ---
4 |
5 | :::danger
6 | Production aliasing feature is deprecated in `@subsquid/cli>=3.0.0`. Use [Slots and tags](/cloud/resources/slots-and-tags) instead.
7 | :::
8 |
9 | # Alias to the production endpoint
10 |
11 | Version aliasing is used to switch between squid versions without downtime or updates of the downstream clients.
12 | Each squid has a canonical production endpoint URL of the form
13 | ```bash
14 | https://<org>.subsquid.io/<name>/graphql
15 | ```
16 |
17 | To alias a squid version to the production endpoint, use [`sqd prod`](/squid-cli/prod):
18 | ```bash
19 | sqd prod <name>@<version>
20 | ```
21 |
22 | Note that after promoting to production, the version-specific endpoint URL of the form
23 | ```bash
24 | https://<org>.subsquid.io/<name>/v/v<version>/graphql
25 | ```
26 | remains available.
27 |
28 |
29 | ## Example
30 |
31 | Assuming your organization is called `my-org`, running
32 |
33 | ```bash
34 | sqd prod my-squid@v1
35 | ```
36 |
37 | will make the endpoint of the v1 of `my-squid` accessible at `https://my-org.subsquid.io/my-squid/graphql`.
38 |
--------------------------------------------------------------------------------
/src/components/tutorial-card.tsx:
--------------------------------------------------------------------------------
1 | import React from 'react';
2 | import clsx from 'clsx';
3 |
4 | import LaunchIcon from '/static/img/rocket_launch.svg'
5 |
6 | export type TutorialCard = React.PropsWithChildren<{
7 | description: string;
8 | path?: string;
9 | disabled?: boolean;
10 | }>
11 |
12 | export function TutorialCard(props: TutorialCard) {
13 | return (
14 | <>
15 |
25 | {props.children}
30 | {props.description}
31 |
32 |
33 | >
34 | );
35 | }
36 |
--------------------------------------------------------------------------------
/docs/squid-cli/list.md:
--------------------------------------------------------------------------------
1 | `sqd list`
2 | ========
3 |
4 | List squids deployed to the Cloud
5 |
6 | * [`sqd list`](#sqd-list-1)
7 |
8 | ## `sqd list`
9 |
10 | List squids deployed to the Cloud
11 |
12 | ```
13 | USAGE
14 | $ sqd list [--interactive] [--truncate]
15 | [-r <[org/]name(@slot|:tag)> | -o <code> | -n <name> | [-s <slot>] | [-t <tag>]]
16 |
17 | FLAGS
18 | --[no-]interactive Disable interactive mode
19 | --[no-]truncate Truncate data in columns: false by default
20 |
21 | SQUID FLAGS
22 | -n, --name=<name> Name of the squid
23 | -r, --reference=<[org/]name(@slot|:tag)> Fully qualified reference
24 | of the squid. It can include
25 | the organization, name, slot,
26 | or tag
27 | -s, --slot=<slot> Slot of the squid
28 | -t, --tag=<tag> Tag of the squid
29 |
30 | ORG FLAGS
31 | -o, --org=<code> Code of the organization
32 |
33 | ALIASES
34 | $ sqd ls
35 | ```
36 |
37 | _See code: [src/commands/ls.ts](https://github.com/subsquid/squid-cli/tree/master/src/commands/ls.ts)_
38 |
--------------------------------------------------------------------------------
/scripts/pageMigrationOps/rewriteLinks.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Usage: rewriteLinks.sh <migration-log-file>
4 |
5 | IFS=$'\n'
6 | for logLine in `cat $1`; do
7 | inPath=`echo $logLine | cut -d ' ' -f 1`
8 | outPath=`echo $logLine | cut -d ' ' -f 3`
9 | if [[ "$inPath" =~ .*"md"|"mdx"$ ]]; then
10 | inPathNoExt="${inPath%.*}"
11 | inPathGrepQuery="](/${inPathNoExt}/\?\(#[^()#]*\)\?)"
12 | for affectedDocPath in `grep -lr "$inPathGrepQuery" docs/`; do
13 | if [[ "$outPath" =~ .*"md"|"mdx"$ ]]; then
14 | outPathNoExt="${outPath%.*}"
15 | sed -i -e "s/](\/${inPathNoExt//\//\\\/}\(\/\?\)\(#[^()#]*\)\?)/](\/${outPathNoExt//\//\\\/}\1\2)/g" "$affectedDocPath"
16 | else
17 | echo WARNING: erasing section information for link to $inPath in file $affectedDocPath - file became a section
18 | sed -i -e "s/](\/${inPathNoExt//\//\\\/}\(\/\?\)\(#[^()#]*\)\?)/[(\/${outPathNoExt//\//\\\/})/g" "$affectedDocPath"
19 | fi
20 | done
21 | else
22 | inPathGrepQuery="](/${inPath}/\?)"
23 | for affectedDocPath in `grep -lr "$inPathGrepQuery" docs/`; do
24 | if [[ "$outPath" =~ .*"md"|"mdx"$ ]]; then
25 | outPathNoExt="${outPath%.*}"
26 | else
27 | outPathNoExt="$outPath"
28 | fi
29 | sed -i -e "s/](\/${inPath//\//\\\/}\(\/\?\))/](\/${outPathNoExt//\//\\\/}\1)/g" "$affectedDocPath"
30 | done
31 | fi
32 | done
33 |
--------------------------------------------------------------------------------
/docs/sdk/tutorials/case-studies.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Case studies
3 | description: >-
4 | Deep dives into larger projects built with SQD
5 | sidebar_position: 80
6 | ---
7 |
8 | Follow the links from the [SQD Medium blog](https://medium.com/subsquid) for deep dives into the development process of larger projects built from scratch. They use the FireSquid version of the framework, so much of the involved code is outdated. Still, the general approaches they illustrate endure.
9 |
10 | - [DeFi Dashboard](https://medium.com/subsquid/build-your-first-defi-dashboard-ad3ce1e9fc73). A React app for a real-time dashboard showing key statistics of the Moonwell lending protocol
11 | - [Rave name service indexing on Fantom](https://medium.com/subsquid/building-a-fast-and-scalable-web3-api-on-fantom-blockchain-94c79933b55).
12 | - [Indexing Uniswap data into parquets](https://medium.com/subsquid/how-to-scale-blockchain-data-science-for-large-datasets-b49d078c15eb). A step-by-step guide on how to extract, decode and analyze Uniswap trading data with SQD, Pandas and a Python notebook.
13 | - [Analyzing Lens Protocol](https://medium.com/subsquid/how-to-analyze-lens-protocol-activity-data-with-subsquid-e1ee3b7b43fa). An end-to-end tutorial on how to extract and analyze the social data of the Lens protocol. Includes an extra step on how to build a UI for the dashboards.
14 |
--------------------------------------------------------------------------------
/docs/sdk/resources/substrate/gear.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 30
3 | description: >-
4 | Additional support for indexing Gear programs
5 | ---
6 |
7 | # Gear support
8 |
9 | :::info
10 | SQD Network has gateways for two networks that use Gear Protocol: **Vara** and **Vara Testnet**. Here are their endpoint URLs:
11 | ```
12 | https://v2.archive.subsquid.io/network/vara
13 | ```
14 | ```
15 | https://v2.archive.subsquid.io/network/vara-testnet
16 | ```
17 | :::
18 |
19 | Indexing [Gear](https://gear-tech.io/) programs is supported with [`addGearMessageQueued()`](/sdk/reference/processors/substrate-batch/data-requests/#addgearmessagequeued) and [`addGearUserMessageSent()`](/sdk/reference/processors/substrate-batch/data-requests/#addgearusermessagesent) specialized data requests. These subscribe to the events [`Gear.MessageQueued`](https://wiki.gear-tech.io/docs/api/events/#messagequeued) and [`Gear.UserMessageSent`](https://wiki.gear-tech.io/docs/api/events/#usermessagesent) emitted by a specified Gear program.
20 |
21 | The processor can also subscribe to any other event with [`addEvent()`](/sdk/reference/processors/substrate-batch/data-requests/#events) and filter by program ID in the batch handler if necessary.
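For illustration, here is a minimal processor configuration sketch that subscribes to both events of a single program (the program ID is a placeholder; see the data request references above for the full option set):

```ts
import {SubstrateBatchProcessor} from '@subsquid/substrate-processor'

// placeholder ID of the Gear program to be indexed
const PROGRAM_ID = '0x...'

export const processor = new SubstrateBatchProcessor()
  // one of the Vara gateways listed above
  .setGateway('https://v2.archive.subsquid.io/network/vara')
  // Gear.MessageQueued events emitted for the program, with parent extrinsics
  .addGearMessageQueued({programId: [PROGRAM_ID], extrinsic: true})
  // Gear.UserMessageSent events emitted by the program
  .addGearUserMessageSent({programId: [PROGRAM_ID]})
```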
22 |
23 | An example of a squid indexing a Gear program (an NFT contract) can be found [here](https://github.com/subsquid/squid-sdk/tree/master/test/gear-nft).
24 |
--------------------------------------------------------------------------------
/docs/cloud/resources/monitoring.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 40
3 | title: Monitoring
4 | description: Prometheus endpoints for squid services
5 | ---
6 |
7 | # Monitoring
8 |
9 | Each deployed squid version exposes Prometheus metrics for external monitoring with e.g. Grafana.
10 |
11 | ## Processor metrics
12 |
13 | The processor metrics are available at
14 |
15 | - `https://${org}.squids.live/${name}@${slot}/processors/${processor}/metrics`, and at
16 | - `https://${org}.squids.live/${name}:${tag}/processors/${processor}/metrics` for each tag attached to the slot.
17 |
18 | See the [slots and tags guide](/cloud/resources/slots-and-tags).
19 |
20 | `${processor}` here is the processor name; it defaults to `processor` unless specified.
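For example, assuming an organization `my-org` and a squid `my-squid` deployed to slot `v1` with the default processor name (all names here are placeholders), the metrics can be fetched with:

```bash
curl https://my-org.squids.live/my-squid@v1/processors/processor/metrics
```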
21 |
22 | The metrics are documented inline. They include some values reflecting the squid health:
23 | - `sqd_processor_last_block`. The last processed block.
24 | - `sqd_processor_chain_height`. Current chain height as reported by the RPC endpoint (when [RPC ingestion](/sdk/resources/unfinalized-blocks) is enabled) or by [SQD Network](/subsquid-network) (when it is disabled).
25 |
26 | Inspect the metrics endpoint for a full list.
27 |
28 | ## Postgres metrics
29 |
30 | Postgres metrics will be available in the future SQD Cloud releases.
31 |
32 | ## API metrics
33 |
34 | API metrics will be available in the future SQD Cloud releases.
35 |
--------------------------------------------------------------------------------
/src/components/guide-card.tsx:
--------------------------------------------------------------------------------
1 | import React from 'react';
2 | import clsx from 'clsx';
3 |
4 | type BgColor =
5 | | 'bg-role--building'
6 | | 'bg-role--success'
7 | | 'bg-role--error'
8 | | 'bg-role--syncing'
9 | | 'bg-role--info'
10 | | 'bg-role--notice'
11 | | 'bg-role--warning';
12 |
13 | export type GuideCardProps = React.PropsWithChildren<{
14 | description: string;
15 | color: BgColor;
16 | path?: string;
17 | isDisabled?: boolean;
18 | isExternalLink?: boolean;
19 | }>
20 |
21 | export function GuideCard(props: GuideCardProps) {
22 | return (
23 | <>
24 |
35 |
36 | {props.children}
40 | {props.description}
41 |
42 |
43 | >
44 | );
45 | }
46 |
47 | export default GuideCard;
48 |
--------------------------------------------------------------------------------
/docs/sdk/resources/substrate/types-bundle-miniguide.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 70
3 | description: >-
4 | Sourcing Substrate metadata
5 | title: Substrate types bundles
6 | ---
7 |
8 | ### Where do I get a types bundle for my chain?
9 |
10 | A types bundle is only needed for pre-Metadata v14 blocks, and only if SQD does not offer built-in support for the chain in question.
11 |
12 | Most chains publish their type bundles as an npm package (for example: [Edgeware](https://www.npmjs.com/package/@edgeware/node-types)). One of the best places to check for the latest version is the [polkadot-js/app](https://github.com/polkadot-js/apps/tree/master/packages/apps-config/src/api/spec) and [polkadot-js/api](https://github.com/polkadot-js/api/tree/master/packages/types-known/src/spec) repositories.
13 |
14 | :::info
15 | **Note:** the type bundle format for typegen is slightly different from `OverrideBundleDefinition` of `polkadot.js`. The structure is as follows, all the fields are optional.
16 | :::
17 |
18 | ```javascript
19 | {
20 | types: {}, // top-level type definitions, as `.types` option of `ApiPromise`
21 | typesAlias: {}, // top-level type aliases, as `.typesAlias` option of `ApiPromise`
22 | versions: [ // spec version specific overrides, same as `OverrideBundleDefinition.types` of `polkadot.js`
23 | {
24 | minmax: [0, 1010], // spec range
25 | types: {}, // type overrides for the spec range
26 | typesAlias: {}, // type alias overrides for the spec range
27 | }
28 | ]
29 | }
30 | ```
31 |
--------------------------------------------------------------------------------
/docs/firesquid.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_class_name: hidden
3 | pagination_next: null
4 | pagination_prev: null
5 | ---
6 |
7 | # On the FireSquid release
8 |
9 | The previous major release of SQD was called FireSquid. It featured GraphQL-based archives ([EVM](https://github.com/subsquid/eth-archive), [Substrate](https://github.com/subsquid/substrate-archive-setup)) that were replaced by [SQD Network](/subsquid-network). Interfaces for data requests on the [SDK](/sdk) side were changed without keeping backwards compatibility.
10 |
11 | Actions are needed if:
12 |
13 | 1. You're relying on a squid that's using an older SDK version. One way to know that is to look at the signatures of the data requesting methods (`.addEvent()`, `.addLog()` etc): if call signatures are different from what you see in the docs ([EVM](/sdk/reference/processors/evm-batch), [Substrate](/sdk/reference/processors/substrate-batch)), then you need to migrate to the modern [ArrowSquid SDK](/sdk).
14 |
15 | 2. You're relying on a GraphQL API of an older archive. Your options are:
16 | - To rely on SQD Network instead. See its [reference documentation](/subsquid-network/reference) for info on available datasets and the API used to access them.
17 | - To fork an older archive setup and maintain it yourself. For EVM you can just fork [the repo](https://github.com/subsquid/eth-archive). If you would like to do the same for the Substrate archives, that'd require pulling some old code out of the repos' history. Ping us in the [SquidDevs TG chat](https://t.me/HydraDevs).
18 |
--------------------------------------------------------------------------------
/docs/squid-cli/commands-json.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 20
3 | title: commands.json
4 | description: Dynamic sqd commands
5 | ---
6 |
7 | # commands.json
8 |
9 | The `sqd` tool automatically discovers and loads any extra commands defined in the `commands.json` file. Here is a sample file demonstrating the available features:
10 |
11 | ```json
12 | { // comments are ok
13 | "$schema": "https://subsquid.io/schemas/commands.json",
14 | "commands": {
15 | "clean": {
16 | "description": "delete all build artifacts",
17 | "cmd": ["rm", "-rf", "lib"]
18 | },
19 | "build": {
20 | "description": "build the project",
21 | "deps": ["clean"], // commands to execute before
22 | "cmd": ["tsc"]
23 | },
24 | "typegen": {
25 | "hidden": true, // Don't show in the overview listing
26 | "workdir": "abi", // change working dir
27 | "command": [
28 | "squid-evm-typegen", // node_modules/.bin is in the PATH
29 | "../src/abi",
30 | {"glob": "*.json"} // cross-platform glob expansion
31 | ],
32 | "env": { // additional environment variables
33 | "DEBUG": "*"
34 | }
35 | }
36 | }
37 | }
38 | ```
39 | This functionality is managed by the [`@subsquid/commands`](https://github.com/subsquid/squid-sdk/tree/master/util/commands) package.
40 |
41 | All [squid templates](/sdk/how-to-start/squid-development/#templates) include such a file with a predefined set of useful shortcuts. See [Cheatsheet](/sdk/how-to-start/cli-cheatsheet).
42 |
--------------------------------------------------------------------------------
/docs/sdk/reference/schema-file/indexes-and-constraints.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 21
3 | title: Indexes and constraints
4 | description: Annotate indexed fields for faster queries
5 | ---
6 |
7 | # Indexes and unique constraints
8 |
9 | :::warning
10 | The lack of indices is the most common cause of slow API queries
11 | :::
12 |
13 | It is crucial to add database indexes to the entity fields on which one expects filtering and ordering. To add an index to a column, decorate the corresponding entity field with `@index`; the generated model field will then be decorated with [TypeORM `@Index()`](https://typeorm.io/indices#column-indices).
14 |
15 | One can additionally decorate the field with `@unique` to enforce uniqueness. It corresponds to the [`@Index({ unique: true })`](https://typeorm.io/indices#unique-indices) TypeORM decorator.
16 |
17 | ### Example
18 |
19 | ```graphql
20 | type Transfer @entity {
21 | id: ID!
22 | to: Account!
23 | amount: BigInt! @index
24 | fee: BigInt! @index @unique
25 | }
26 | ```
27 |
28 | ## Multi-column indices
29 |
30 | Multi-column indices are defined on the entity level, with an optional `unique` constraint.
31 |
32 | ### Example
33 |
34 | ```graphql
35 | type Foo @entity @index(fields: ["foo", "bar"]) @index(fields: ["bar", "baz"])
36 | {
37 | id: ID!
38 | bar: Int!
39 | baz: [Enum!]
40 | foo: String!
41 |
42 | type Extrinsic @entity @index(fields: ["hash", "block"], unique: true) {
43 | id: ID!
44 | hash: String! @unique
45 | block: String!
46 | }
47 | ```
48 |
--------------------------------------------------------------------------------
/docs/solana-indexing/sdk/solana-batch/balances.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 50
3 | description: >-
4 | Track balance changes with addBalance()
5 | ---
6 |
7 | # Balances
8 |
9 | #### `addBalance(options)` {#add-balance}
10 |
11 | This allows for tracking SOL account balances. `options` has the following structure:
12 |
13 | ```typescript
14 | {
15 | // data requests
16 | where?: {
17 | account?: string[]
18 | }
19 |
20 | // related data retrieval
21 | include?: {
22 | transaction?: boolean
23 | transactionInstructions?: boolean
24 | }
25 |
26 | range?: {
27 | from: number
28 | to?: number
29 | }
30 | }
31 | ```
32 |
33 | The data requests here are:
34 | - `account`: the set of accounts to track. Leave undefined to subscribe to balance updates of all accounts in the whole network.
35 |
36 | Related data retrieval flags:
37 | - `transaction = true`: retrieve the transaction that gave rise to the balance update
38 | - `transactionInstructions = true`: retrieve all instructions executed by the parent transaction
39 |
40 | The related data will be added to the appropriate iterables within the [block data](/solana-indexing/sdk/solana-batch/context-interfaces). You can also call `augmentBlock()` from `@subsquid/solana-objects` on the block data to populate the convenience reference fields like `instruction.inner`.
41 |
42 | Selection of the exact fields to be retrieved for each balance item and the related data is done with the `setFields()` method documented on the [Field selection](../field-selection) page.
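A minimal sketch of a data source configuration requesting balance updates of a single account (the account address, the starting block and the gateway URL are placeholders; see the Solana SDK guides for a complete setup):

```ts
import {DataSourceBuilder} from '@subsquid/solana-stream'

export const dataSource = new DataSourceBuilder()
  .setGateway('https://v2.archive.subsquid.io/network/solana-mainnet')
  .addBalance({
    // track balance updates of this account only
    where: {account: ['...account address...']},
    // also retrieve the parent transactions
    include: {transaction: true},
    range: {from: 250_000_000},
  })
  .build()
```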
43 |
--------------------------------------------------------------------------------
/docs/sdk/reference/openreader-server/api/json-queries.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 41
3 | description: >-
4 | Query entities with Object-typed fields
5 | ---
6 |
7 | # JSON queries
8 |
9 | The possibility of defining JSON objects as fields of a type in a GraphQL schema has been explained in the [schema reference](/sdk/reference/schema-file).
10 |
11 | This guide focuses on how to query such objects and how to fully leverage their potential. Let's take the example of this (non-crypto related, for once😁) schema:
12 |
13 | ```graphql title="schema.graphql"
14 | type Entity @entity {
15 | id: ID!
16 | a: A
17 | }
18 |
19 | type A {
20 | a: String
21 | b: B
22 | }
23 |
24 | type B {
25 | a: A
26 | b: String
27 | e: Entity
28 | }
29 | ```
30 |
31 | It's composed of one entity and two JSON object definitions, used in a "nested" way.
32 |
33 | Let's now look at a simple query:
34 |
35 | ```graphql
36 | query {
37 | entities(orderBy: id_ASC) {
38 | id
39 | a { a }
40 | }
41 | }
42 | ```
43 |
44 | This will return a result such as this one (imagining this data exists in the database):
45 |
46 | ```graphql
47 | {
48 | entities: [
49 | {id: '1', a: {a: 'a'}},
50 | {id: '2', a: {a: 'A'}},
51 | {id: '3', a: {a: null}},
52 | {id: '4', a: null}
53 | ]
54 | }
55 | ```
56 |
57 | Simply enough, the first two entities have an object of type `A` with some content inside, the third one has an object whose `a` field is `null`, and the fourth one does not have an `A` object at all.
58 |
--------------------------------------------------------------------------------
/docs/sdk/reference/schema-file/unions-and-typed-json.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 23
3 | title: Unions and typed JSON
4 | description: Union and JSON types
5 | ---
6 |
7 | # Unions and typed JSON
8 |
9 | Complex scalar types can be modelled using typed JSON fields together with union types, yielding type-safe unions.
10 |
11 | ## Typed JSON
12 |
13 | It is possible to define explicit types for JSON fields. The generated entity classes and the GraphQL API will respect the type definition of the field, enforcing the data integrity.
14 |
15 | **Example**
16 | ```graphql
17 | type Entity @entity {
18 | a: A
19 | }
20 |
21 | type A {
22 | a: String
23 | b: B
24 | c: JSON
25 | }
26 |
27 | type B {
28 | a: A
29 | b: String
30 | e: Entity
31 | }
32 | ```
33 |
34 | ## Union types
35 |
36 | One can leverage union types supported both by [Typescript](https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#union-types) and [GraphQL](https://graphql.org/learn/schema/#union-types). The union operator for `schema.graphql` supports only non-entity types, including typed JSON types described above. JSON types, however, are allowed to reference an entity type.
37 |
38 | **Example**
39 | ```graphql
40 | type User @entity {
41 | id: ID!
42 | login: String!
43 | }
44 |
45 | type Farmer {
46 | user: User!
47 | crop: Int
48 | }
49 |
50 | type Degen {
51 | user: User!
52 | bag: String
53 | }
54 |
55 | union Owner = Farmer | Degen
56 |
57 | type NFT @entity {
58 | name: String!
59 | owner: Owner!
60 | }
61 | ```
62 |
--------------------------------------------------------------------------------
/docs/fuel-indexing/fuel-datasource/context-interfaces.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 5
3 | description: >-
4 | Block data for Fuel
5 | ---
6 |
7 | # Block data for Fuel Network
8 |
9 | In Fuel Squid SDK, the data is processed by repeatedly calling the user-defined [batch handler](/sdk/reference/processors/architecture/#processorrun) function on batches of on-chain data. The sole argument of the batch handler is its context `ctx`, and `ctx.blocks` is an array of `Block` objects containing the data to be processed, aligned at the block level.
10 |
11 | For Fuel `DataSource` the `Block` interface is defined as follows:
12 |
13 | ```ts
14 | export interface Block {
15 | header: BlockHeader;
16 | transactions: Transaction[];
17 | inputs: TransactionInput[];
18 | outputs: TransactionOutput[];
19 | receipts: Receipt[];
20 | }
21 | ```
22 |
23 | `Block.header` contains the block header data. The rest of the fields are iterables containing the four kinds of blockchain data. The items within each iterable are ordered in the same way as they are within blocks.
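A sketch of a batch handler iterating over this data, assuming the squid is started with `run()` from `@subsquid/batch-processor` and that `dataSource` and `database` are defined elsewhere:

```ts
import {run} from '@subsquid/batch-processor'
import {augmentBlock} from '@subsquid/fuel-objects'

run(dataSource, database, async ctx => {
  // add convenience reference fields such as receipt.transaction
  let blocks = ctx.blocks.map(augmentBlock)
  for (let block of blocks) {
    ctx.log.info(`block ${block.header.height}: ${block.receipts.length} receipt(s)`)
    for (let transaction of block.transactions) {
      // process the requested transaction data here
    }
  }
})
```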
24 |
25 | The exact fields available in each data item type are inferred from the `setFields()` call argument. The method is documented on the [field selection](/fuel-indexing/fuel-datasource/field-selection) page:
26 |
27 | - [`Input` section](/fuel-indexing/fuel-datasource/field-selection#input);
28 | - [`Transaction` section](/fuel-indexing/fuel-datasource/field-selection#transaction);
29 | - [`Output` section](/fuel-indexing/fuel-datasource/field-selection#output);
30 | - [`Receipt` section](/fuel-indexing/fuel-datasource/field-selection#receipt).
31 |
--------------------------------------------------------------------------------
/docs/cloud/resources/logging.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 30
3 | title: Inspect logs
4 | description: Inspect the deployment logs
5 | ---
6 |
7 | # Logging
8 |
9 | SQD Cloud automatically collects the logs emitted by the squid processor, its API server and its database. Please use the [built-in SDK logger](/sdk/reference/logger) throughout your code when developing for SQD Cloud. You can set the severity flags for squids running in the Cloud via `SQD_DEBUG`, `SQD_TRACE` or `SQD_INFO` - see [Environment Variables](/cloud/resources/env-variables).
10 |
11 | To inspect and follow the squid logs from all the squid services, use [`sqd logs`](/squid-cli/logs):
12 | ```bash
13 | sqd logs -n <name> -s <slot> -f
14 | ```
15 | or
16 | ```bash
17 | sqd logs -n <name> -t <tag> -f
18 | ```
19 |
20 |
21 | For older version-based deployments...
22 |
23 | ...the slot string is `v${version}`, so use
24 | ```bash
25 | sqd logs -n <name> -s v<version> -f
26 | ```
27 | Check out the [Slots and tags guide](/cloud/resources/slots-and-tags) to learn more.
28 |
29 |
30 |
31 | There are additional flags to filter the logs:
32 | - `-f` to follow the logs
33 | - `-c` allows filtering by the container (can be `processor`, `db`, `db-migrate` and `query-node`)
34 | - `-l` allows filtering by the severity
35 | - `-p` number of lines to fetch (default: `100`)
36 | - `--since` cut off by the time (default: `1d`). Accepts the notation of the [`ms` library](https://www.npmjs.com/package/ms): `1d`, `10h`, `1m`.
37 |
38 | ### Example
39 |
40 | ```bash
41 | sqd logs squid-substrate-template@v1 -f -c processor -l info --since 1d
42 | ```
43 |
44 |
--------------------------------------------------------------------------------
/docs/cloud/reference/hasura.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 33
3 | title: addons.hasura section
4 | description: Run a Hasura instance
5 | ---
6 |
7 | # Hasura add-on
8 |
9 | ## Running Hasura
10 |
11 | To provision a [Hasura](https://hasura.io) instance, add an empty `deploy.addons.hasura` section to the [deployment manifest](/cloud/reference/manifest). Provide some basic configuration:
12 | ```yaml
13 | deploy:
14 | env:
15 | HASURA_GRAPHQL_ADMIN_SECRET: "${{ secrets.HASURA_SECRET }}"
16 | HASURA_GRAPHQL_UNAUTHORIZED_ROLE: user
17 | HASURA_GRAPHQL_STRINGIFY_NUMERIC_TYPES: "true"
18 | addons:
19 | postgres:
20 | hasura:
21 | ```
22 | Note the use of a [Cloud secret](/cloud/resources/env-variables/#secrets) for storing the admin password.
23 |
24 | ## Configuring a Hasura API
25 |
26 | ### For a squid
27 |
28 | Use the [Hasura configuration tool](/sdk/resources/tools/hasura-configuration) for squids running dedicated Hasura instances. To make Cloud initialize Hasura configuration on squid restarts, make sure that the tool runs on squid startup by adding a `deploy.init` section to the manifest, e.g. like this:
29 | ```yaml
30 | deploy:
31 | init:
32 | env:
33 | HASURA_GRAPHQL_ENDPOINT: 'http://hasura:8080'
34 | cmd:
35 | - npx
36 | - squid-hasura-configuration
37 | - apply
38 | ```
39 | See also the [Hasura section of the GraphQL guide](/sdk/resources/serving-graphql/#hasura) and the [complete squid example](https://github.com/subsquid-labs/squid-hasura-example).
40 |
41 | ### For a DipDup indexer
42 |
43 | [DipDup](https://dipdup.io) also configures Hasura automatically. See the [DipDup section](/external-tools/#dipdup) for details.
44 |
--------------------------------------------------------------------------------
/docs/squid-cli/installation.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 10
3 | description: Setup Squid CLI
4 | ---
5 |
6 | # Installation
7 |
8 | Squid CLI is a command line tool for
9 |
10 | - scaffolding new squids from templates
11 | - running SDK tools and scripts defined in `commands.json` in a cross-platform way
12 | - managing squid deployments in [SQD Cloud](/cloud) (formerly Aquarium)
13 |
14 | The CLI is distributed as an [`npm` package](https://www.npmjs.com/package/@subsquid/cli).
15 |
16 | To install Squid CLI, follow the steps below.
17 |
18 | ## 0. Install and setup Squid CLI
19 |
20 | First, install the latest version of Squid CLI as a global `npm` package:
21 | ```bash
22 | npm i -g @subsquid/cli@latest
23 | ```
24 |
25 | Check the version:
26 | ```bash
27 | sqd --version
28 | ```
29 | Make sure the output looks like `@subsquid/cli@<version>`.
30 |
31 | :::info
32 | The next steps are **optional** for building and running squids. A key is required to enable the CLI commands managing the [SQD Cloud](/cloud) deployments.
33 | :::
34 |
35 | ## 1. Obtain a SQD Cloud deployment key
36 |
37 | Sign in to [Cloud](https://app.subsquid.io/) and obtain (or refresh) the deployment key by clicking the profile picture > "Deployment key":
38 |
39 | 
40 |
41 | ## 2. Authenticate Squid CLI
42 |
43 | Open a terminal window and run
44 |
45 | ```bash
46 | sqd auth -k <key>
47 | ```
48 |
49 | ## 3. Explore with `--help`
50 |
51 | Use `sqd --help` to get a list of the available commands and `sqd <command> --help` to get help on the available options of a specific command, e.g.
52 |
53 | ```bash
54 | sqd deploy --help
55 | ```
56 |
--------------------------------------------------------------------------------
/orphaned-docs/evm-config-caveats.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 70
3 | description: >-
4 | Common processor configuration issues
5 | ---
6 |
7 | # Caveats
8 |
9 | - Processor data subscription methods guarantee that all data matching their data requests will be retrieved, but for technical reasons non-matching data may be added to the [batch context iterables](/evm-indexing/context-interfaces). As such, it is important to always filter the data within the batch handler. For example, a config like this
10 | ```ts title=src/processor.ts
11 | export const processor = new EvmBatchProcessor()
12 | .addLog({
13 | address: ['0xdac17f958d2ee523a2206206994597c13d831ec7'],
14 | topic0: [erc20abi.events.Transfer.topic]
15 | })
16 | ```
17 | must always be matched with a filter like this
18 | ```ts title=src/main.ts
19 | processor.run(database, async ctx => {
20 | // ...
21 | for (let block of ctx.blocks) {
22 | for (let log of block.logs) {
23 | // ⌄⌄⌄ this filter ⌄⌄⌄
24 | if (log.address === '0xdac17f958d2ee523a2206206994597c13d831ec7' &&
25 | log.topics[0] === erc20abi.events.Transfer.topic) {
26 | // ...
27 | }
28 | }
29 | }
30 | })
31 | ```
32 | *even if no other event types (including from other addresses) were requested*.
33 |
34 | - The meaning of passing `[]` as a set of parameter values has been changed in the ArrowSquid release: now it _selects no data_. Some data might still arrive (see above), but that's not guaranteed. Pass `undefined` for a wildcard selection:
35 | ```typescript
36 | .addStateDiff({address: []}) // selects no state diffs
37 | .addStateDiff({}) // selects all state diffs
38 | ```
39 |
--------------------------------------------------------------------------------
/docs/squid-cli/init.md:
--------------------------------------------------------------------------------
1 | `sqd init`
2 | ==========
3 |
4 | Setup a new squid project from a template or github repo
5 |
6 | * [`sqd init NAME`](#sqd-init-name)
7 |
8 | ## `sqd init NAME`
9 |
10 | Setup a new squid project from a template or github repo
11 |
12 | ```
13 | USAGE
14 | $ sqd init NAME [--interactive] [-t <template>] [-d <dir>] [-r]
15 |
16 | ARGUMENTS
17 | NAME The squid name. It must contain only alphanumeric or dash ("-") symbols and must not start with "-".
18 |
19 | FLAGS
20 | -d, --dir=<dir>
21 | The target location for the squid. If omitted, a new folder NAME is created.
22 |
23 | -r, --remove
24 | Clean up the target directory if it exists
25 |
26 | -t, --template=<template>
27 | A template for the squid. Accepts:
28 | - a github repository URL containing a valid squid.yaml manifest in the root folder
29 | or one of the pre-defined aliases:
30 | - evm A minimal squid template for indexing EVM data.
31 | - abi A template to auto-generate a squid indexing events and txs from a contract ABI
32 | - multichain A template for indexing data from multiple chains
33 | - gravatar A sample EVM squid indexing the Gravatar smart contract on Ethereum.
34 | - substrate A template squid for indexing Substrate-based chains.
35 | - ink A template for indexing Ink! smart contracts
36 | - ink-abi A template to auto-generate a squid from an ink! contract ABI
37 | - frontier-evm A template for indexing Frontier EVM chains, like Moonbeam and Astar.
38 |
39 | --[no-]interactive
40 | Disable interactive mode
41 | ```
42 |
43 | _See code: [src/commands/init.ts](https://github.com/subsquid/squid-cli/tree/master/src/commands/init.ts)_
44 |
--------------------------------------------------------------------------------
/.gitbook/assets/hydra-logo-horizontallockup.svg:
--------------------------------------------------------------------------------
1 |
7 |
--------------------------------------------------------------------------------
/src/components/docs-rating.tsx:
--------------------------------------------------------------------------------
1 | import ExecutionEnvironment from '@docusaurus/ExecutionEnvironment';
2 | import React, {useState} from 'react';
3 |
4 | const DocsRating = () => {
5 | if (!ExecutionEnvironment.canUseDOM) {
6 | return null;
7 | }
8 |
9 | const [haveVoted, setHaveVoted] = useState(!!(localStorage.getItem(window.location.href)));
10 | const giveFeedback = value => {
11 | // @ts-ignore
12 | if (window.gtag) {
13 | // @ts-ignore
14 | window.gtag('event', 'like', {
15 | event_label: "like",
16 | value: value
17 | });
18 | }
19 |
20 | setHaveVoted(true)
21 | localStorage.setItem(window.location.href, value)
22 | };
23 |
24 | return (
25 |
26 | {haveVoted ? (
27 | <>
28 |
29 | Thanks for letting us know!
30 |
31 |
32 | >
33 | ) : (
34 | <>
35 |
36 | Is this page useful?
37 |
38 | giveFeedback(1)}>
39 | giveFeedback(0)}>
40 |
41 |
42 | >
43 | )}
44 |
45 | );
46 | };
47 |
48 | export default DocsRating;
--------------------------------------------------------------------------------
/static/img/.gitbook/assets/hydra-logo-horizontallockup.svg:
--------------------------------------------------------------------------------
1 |
7 |
--------------------------------------------------------------------------------
/docs/tron-indexing/tron-batch-processor/internal-transactions.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 50
3 | description: >-
4 | Subscribe to internal txn data
5 | ---
6 |
7 | # Internal transactions
8 |
9 | #### `addInternalTransaction(options)` {#add-internal-transaction}
10 |
11 | Get some _or all_ internal transactions on the network. `options` has the following structure:
12 |
13 | ```typescript
14 | {
15 | where?: {
16 | caller?: string[]
17 | transferTo?: string[]
18 | }
19 | include?: {
20 | transaction?: boolean
21 | }
22 | range?: {
23 | from: number
24 | to?: number
25 | }
26 | }
27 | ```
28 |
29 | **Data requests** are located in the `where` field:
30 |
31 | - `caller` is the set of caller addresses responsible for the internal transactions. Leave it undefined to subscribe to internal txs from all callers.
32 | - `transferTo` is the set of receiver addresses that the internal txn is addressed to.
33 |
34 | Omit the `where` field to subscribe to all txs network-wide.
35 |
36 | **Related data** can be requested via the `include` field:
37 |
38 | - `transaction = true`: will retrieve parent transactions for each selected internal txn.
39 |
40 | The data will be added to the `.transactions` iterable within [block data](/tron-indexing/tron-batch-processor/context-interfaces) and made available via the `.transaction` field of each internal transaction item.
41 |
42 | Note that internal transactions can also be requested by the other `TronBatchProcessor` methods as related data.
43 |
44 | Selection of the exact fields to be retrieved for each transaction and the optional related data items is done with the `setFields()` method documented on the [Field selection](../field-selection) page.
45 |
46 | #### Example
47 |
48 | TBA
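A minimal configuration sketch requesting internal transactions made by a single caller, with parent transactions included (the gateway URL and the caller address are placeholders, and the exact processor setup may differ):

```ts
import {TronBatchProcessor} from '@subsquid/tron-processor'

export const processor = new TronBatchProcessor()
  .setGateway('https://v2.archive.subsquid.io/network/tron-mainnet')
  .addInternalTransaction({
    // only internal transactions initiated by this caller
    where: {caller: ['...caller address...']},
    // also retrieve the parent transactions
    include: {transaction: true},
  })
```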
49 |
--------------------------------------------------------------------------------
/docs/fuel-indexing/fuel-datasource/outputs.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 30
3 | description: >-
4 | Subscribe to outputs data with addOutput()
5 | ---
6 |
7 | # Outputs
8 |
9 | #### `addOutput(options)` {#add-output}
10 |
11 | Get some _or all_ outputs on the network. `options` has the following structure:
12 |
13 | ```typescript
14 | {
15 | // data requests
16 | type?: OutputType[]
17 |
18 | // related data retrieval
19 | transaction?: boolean
20 |
21 | range?: {
22 | from: number
23 | to?: number
24 | }
25 | }
26 | ```
27 |
28 | Data requests:
29 |
30 | - `type` sets the type of the output. Output type has the following options: `'CoinOutput' | 'ContractOutput' | 'ChangeOutput' | 'VariableOutput' | 'ContractCreated'`. Leave it undefined to subscribe to all outputs.
31 |
32 | Enabling the `transaction` flag will cause the processor to retrieve transactions where the selected outputs have occurred. The data will be added to the appropriate iterables within the [block data](/fuel-indexing/fuel-datasource/context-interfaces). You can also call `augmentBlock()` from `@subsquid/fuel-objects` on the block data to populate the convenience reference fields like `output.transaction`.
33 |
34 | Note that outputs can also be requested by the other `FuelDataSource` methods as related data.
35 |
36 | Selection of the exact fields to be retrieved for each transaction and the optional related data items is done with the `setFields()` method documented on the [Field selection](../field-selection) page.
37 |
38 | ## Examples
39 |
40 | Request all outputs with `ChangeOutput` type and include transactions:
41 |
42 | ```ts
43 | processor
44 | .addOutput({
45 | type: ["ChangeOutput"],
46 | transaction: true,
47 | })
48 | .build();
49 | ```
50 |
--------------------------------------------------------------------------------
/docs/tron-indexing/tron-batch-processor/context-interfaces.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 10
3 | description: >-
4 | Block data for Tron
5 | ---
6 |
7 | # Block data for Tron
8 |
9 | In Tron Squid SDK, the data is processed by repeatedly calling the user-defined [batch handler](/sdk/reference/processors/architecture/#processorrun) function on batches of on-chain data. The sole argument of the batch handler is its context `ctx`, and `ctx.blocks` is an array of `Block` objects containing the data to be processed, aligned at the block level.
10 |
11 | For `TronBatchProcessor` the `Block` interface is defined as follows:
12 |
13 | ```ts
14 | export interface Block {
15 | header: BlockHeader
16 | transactions: Transaction[]
17 | logs: Log[]
18 | internalTransactions: InternalTransaction[]
19 | }
20 | ```
21 | `F` here is the type of the argument of the [`setFields()`](/tron-indexing/tron-batch-processor/field-selection) processor method.
22 |
23 | `Block.header` contains the block header data. The rest of the fields are iterables containing the three kinds of blockchain data. The items within each iterable are ordered in the same way as they are within the block.
24 |
25 | The exact fields available in each data item type are inferred from the `setFields()` call argument. The method is documented on the [field selection](/tron-indexing/tron-batch-processor/field-selection) page:
26 |
27 | - [`Transaction` section](/tron-indexing/tron-batch-processor/field-selection/#transaction);
28 | - [`Log` section](/tron-indexing/tron-batch-processor/field-selection/#log);
29 | - [`InternalTransaction` section](/tron-indexing/tron-batch-processor/field-selection/#internal-transaction).
30 | - [`BlockHeader` section](/tron-indexing/tron-batch-processor/field-selection/#block-header)
31 |
--------------------------------------------------------------------------------
/docs/fuel-indexing/fuel-datasource/inputs.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 30
3 | description: >-
4 | Subscribe to Input data with addInput()
5 | ---
6 |
7 | # Inputs
8 |
9 | #### `addInput(options)` {#add-input}
10 |
11 | Get some _or all_ inputs on the network. `options` has the following structure:
12 |
13 | ```typescript
14 | {
15 | // data requests
16 | type?: InputType[]
17 | coinOwner?: string[]
18 | coinAssetId?: string[]
19 | contractContract?: string[]
20 | messageSender?: string[]
21 | messageRecipient?: string[]
22 |
23 | // related data retrieval
24 | transaction?: boolean
25 |
26 | range?: {
27 | from: number
28 | to?: number
29 | }
30 | }
31 | ```
32 |
33 | Data requests:
34 |
35 | - `type` sets the type of the input. You can request one or more of `'InputCoin' | 'InputContract' | 'InputMessage'`. Leave it undefined to subscribe to all inputs.
36 |
37 | Enabling the `transaction` flag will cause the processor to retrieve transactions where the selected inputs have occurred. The data will be added to the appropriate iterables within the [block data](/fuel-indexing/fuel-datasource/context-interfaces). You can also call `augmentBlock()` from `@subsquid/fuel-objects` on the block data to populate the convenience reference fields like `input.transaction`.
38 |
39 | Note that inputs can also be requested by the other `FuelDataSource` methods as related data.
40 |
41 | Selection of the exact fields to be retrieved for each transaction and the optional related data items is done with the `setFields()` method documented on the [Field selection](../field-selection) page.
42 |
43 | ## Examples
44 |
45 | Request all inputs with `InputCoin` type and include transactions:
46 |
47 | ```ts
48 | processor
49 | .addInput({
50 | type: ["InputCoin"],
51 | transaction: true,
52 | })
53 | .build();
54 | ```
55 |
--------------------------------------------------------------------------------
/docs/sdk/reference/schema-file/interfaces.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 24
3 | title: Interfaces
4 | description: Queriable interfaces
5 | ---
6 |
7 | # Interfaces
8 |
9 | The schema file supports [GraphQL Interfaces](https://graphql.org/learn/schema/#interfaces) for modelling complex types sharing common traits. Interfaces are annotated with `@query` at the type level and do not affect the database schema, only enriching the GraphQL API queries with [inline fragments](https://graphql.org/learn/queries/#inline-fragments).
10 |
11 | Currently, only [OpenReader](/sdk/reference/openreader-server) supports GraphQL interfaces defined in the schema file.
12 |
13 | ### Examples
14 |
15 |
16 | ```graphql
17 | interface MyEntity @query {
18 | id: ID!
19 | name: String
20 | ref: Ref
21 | }
22 |
23 | type Ref @entity {
24 | id: ID!
25 | name: String
26 | foo: Foo! @unique
27 | bar: Bar! @unique
28 | }
29 |
30 | type Foo implements MyEntity @entity {
31 | id: ID!
32 | name: String
33 | ref: Ref @derivedFrom(field: "foo")
34 | foo: Int
35 | }
36 |
37 | type Bar implements MyEntity @entity {
38 | id: ID!
39 | name: String
40 | ref: Ref @derivedFrom(field: "bar")
41 | bar: Int
42 | }
43 |
44 | type Baz implements MyEntity @entity {
45 | id: ID!
46 | name: String
47 | ref: Ref
48 | baz: Int
49 | }
50 | ```
51 |
52 | The `MyEntity` interface above enables `myEntities` and `myEntitiesConnection` [GraphQL API queries](/sdk/reference/openreader-server/api) with inline fragments and the `_type`, `__typename` [meta fields](https://graphql.org/learn/queries/#meta-fields):
53 |
54 | ```graphql
55 | query {
56 | myEntities(orderBy: [_type_DESC, id_ASC]) {
57 | id
58 | name
59 | ref {
60 | id
61 | name
62 | }
63 | __typename
64 | ... on Foo { foo }
65 | ... on Bar { bar }
66 | ... on Baz { baz }
67 | }
68 | }
69 | ```
70 |
--------------------------------------------------------------------------------
/docs/sdk/resources/migrate/migrate-to-hasura-configuration-tool-v2.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 5
3 | title: hasura-configuration tool v2
4 | description: Breaking change in the Hasura configuration tool
5 | ---
6 |
7 | # Migrating to Hasura configuration tool v2
8 |
9 | Pre-2.0.0 [`@subsquid/hasura-configuration`](/sdk/resources/tools/hasura-configuration) used a fixed naming schema for [relation fields](/sdk/reference/schema-file/entity-relations/):
10 |
11 | - Forward relation fields were _always_ named after the type they referred to. Type names were `snake_case`d. For example, any field referring to an entity called `BurnTest` was called `burn_test`.
12 | - Inverse relation fields names were also determined by the relation type:
13 | + In one-to-one relations their names were `to_snake_case(typeName)`, where `typeName` is the name of the entity that the inverse field is typed with.
14 | + In one-to-many inverse relations they were named `${to_snake_case(typeName)}s`.
15 |
16 | Any names given to the fields in the [schema](/sdk/reference/schema-file) were ignored.
17 |
18 | `@subsquid/hasura-configuration@2.0.0` introduces full support for in-schema field names. Now the fields will be called exactly as they are in the schema file. If these field names are different from what's described above, your API will have a breaking change. If you'd like to avoid it, please make sure that the fields in your `schema.graphql` are named exactly as described above.
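As an illustration (the entity and field names below are hypothetical), consider a schema with a named forward relation field:

```graphql
type BurnTest @entity {
  id: ID!
}

type Account @entity {
  id: ID!
  # pre-2.0.0 versions exposed this relation in Hasura as `burn_test`
  # regardless of the field name; since 2.0.0 it is exposed as `latestBurn`
  latestBurn: BurnTest
}
```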
19 |
20 | To update the tool and your Hasura config:
21 | ```bash
22 | npm i @subsquid/hasura-configuration@latest
23 | npx squid-hasura-configuration regenerate
24 | ```
25 | If you deployed to the Cloud, you can safely redeploy your squid in-place (barring the field name changes).
26 |
27 | If you opted to not change your schema to accommodate the old relation field names, please follow this up by revising GraphQL queries in any of your client apps.
28 |
--------------------------------------------------------------------------------
/docs/fuel-indexing/fuel-datasource/transactions.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 30
3 | description: >-
4 | Subscribe to txn data with addTransaction()
5 | ---
6 |
7 | # Transactions
8 |
9 | #### `addTransaction(options)` {#add-transaction}
10 |
11 | Get some _or all_ transactions on the network. `options` has the following structure:
12 |
13 | ```typescript
14 | {
15 | // data requests
16 | type?: TransactionType[]
17 |
18 | // related data retrieval
19 | receipts?: boolean
20 | inputs?: boolean
21 | outputs?: boolean
22 |
23 | range?: {
24 | from: number
25 | to?: number
26 | }
27 | }
28 | ```
29 |
30 | Data requests:
31 |
32 | - `type` sets the type of the transaction: `'Script' | 'Create' | 'Mint' | 'Upgrade' | 'Upload'`. Leave it undefined to subscribe to all transactions.
33 |
34 | Enabling the `receipts` and/or `inputs` and `outputs` flags will cause the processor to retrieve receipts, inputs and outputs that occurred as a result of each selected transaction. The data will be added to the appropriate iterables within the [block data](/fuel-indexing/fuel-datasource/context-interfaces). You can also call `augmentBlock()` from `@subsquid/fuel-objects` on the block data to populate the convenience reference fields like `transaction.receipts`.
35 |
36 | Note that transactions can also be requested by the other `FuelDataSource` methods as related data.
37 |
38 | Selection of the exact fields to be retrieved for each transaction and the optional related data items is done with the `setFields()` method documented on the [Field selection](../field-selection) page.
39 |
40 | ## Examples
41 |
42 | Request all transactions with `Create` and `Mint` types and include receipts, inputs and outputs:
43 |
44 | ```ts
45 | processor
46 | .addTransaction({
47 | type: ["Create", "Mint"],
48 | receipts: true,
49 | inputs: true,
50 | outputs: true,
51 | })
52 | .build();
53 | ```
54 |
--------------------------------------------------------------------------------
/docs/cloud/troubleshooting.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 60
3 | ---
4 |
5 | # Troubleshooting
6 |
7 | ### "Secrets outdated. Please restart the squid" warning
8 |
9 | This occurs when you have a squid deployed, then create, remove or change some [secrets](/squid-cli/secrets) of [relevance](/cloud/resources/organizations). Squids must be restarted manually for such changes to have effect. Navigate to the squid version page (e.g. by clicking on the warning sign) and click restart. The restart will not touch the database, so unless your new secret values cause the squid to crash this procedure should be quick and easy.
10 |
11 | ![Secrets outdated]()
12 |
13 | ### My squid is stuck in "Building", "Deploying" or "Starting" state
14 |
15 | - Run with `SQD_DEBUG=*` as explained on the [Logging](/sdk/reference/logger/#overriding-the-log-level) page
16 | - Update the squid CLI to the latest version with
17 | ```bash
18 | npm update -g @subsquid/cli
19 | ```
20 | - Update the Squid SDK dependencies:
21 | ```bash
22 | npm run update
23 | ```
24 | - Check that the squid adheres to the expected [structure](/sdk/how-to-start/layout)
25 | - Make sure you can [build and run Docker images locally](/sdk/resources/self-hosting)
26 |
27 | ### `Validation error` when releasing a squid
28 |
29 | Make sure the squid name contains only alphanumeric characters, underscores and hyphens. The squid version must also be alphanumeric.
30 | Since both the squid and version name become part of the squid API endpoint URL, slashes and dots are not accepted.
31 |
32 | ### My squid ran out of disk space
33 |
34 | Edit the [postgres addon](/cloud/reference/pg) section of `squid.yaml` and request more space for the database.
35 |
36 | ### My squid is behind the chain, but it shows that it is in sync
37 |
38 | Check that your processor uses both an RPC endpoint and a SQD Network gateway as its data sources.
39 |
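40 | For example, for an EVM squid the processor configuration should include both data sources. A minimal sketch (the gateway URL points at the Ethereum mainnet dataset; replace the RPC URL with your own endpoint):
41 |
42 | ```ts
43 | import {EvmBatchProcessor} from '@subsquid/evm-processor'
44 |
45 | const processor = new EvmBatchProcessor()
46 |   // SQD Network gateway: used for fast historical sync
47 |   .setGateway('https://v2.archive.subsquid.io/network/ethereum-mainnet')
48 |   // chain RPC: lets the squid follow the chain past the highest block served by the gateway
49 |   .setRpcEndpoint('https://eth-rpc.example.com')
50 | ```
51 |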
--------------------------------------------------------------------------------
/docs/sdk/reference/store/bigquery.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 25
3 | title: bigquery-store
4 | description: >-
5 | @subsquid/bigquery-store reference
6 | ---
7 |
8 | # `@subsquid/bigquery-store`
9 |
10 | See also the [BigQuery guide](/sdk/resources/persisting-data/bigquery).
11 |
12 | ## Column types
13 |
14 | | Column type | Value type | Dataset column type |
15 | |:------------------------------:|:--------------------------------:|:--------------------------------------------------------------------------------------------------------:|
16 | | `String()` | `string` | [STRING](https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#string_type) |
17 | | `Numeric(precision, scale)`     | `number` \| `bigint`             | [NUMERIC(P[, S])](https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#parameterized_decimal_type) |
18 | | `BigNumeric(precision, scale)`  | `number` \| `bigint`             | [BIGNUMERIC(P[, S])](https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#parameterized_decimal_type) |
19 | | `Bool()` | `boolean` | [BOOL](https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#boolean_type) |
20 | | `Timestamp()` | `Date` | [TIMESTAMP](https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#timestamp_type) |
21 | | `Float64()` | `number` | [FLOAT64](https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#floating_point_types) |
22 | | `Int64()`                       | `number` \| `bigint`             | [INT64](https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#integer_types)            |
23 |
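24 | A minimal sketch of defining a table with some of these column types, assuming the `Table`, `Column` and `Types` exports described in the [BigQuery guide](/sdk/resources/persisting-data/bigquery):
25 |
26 | ```ts
27 | import {Column, Table, Types} from '@subsquid/bigquery-store'
28 |
29 | export const transfersTable = new Table('transfers', {
30 |   from: Column(Types.String()),
31 |   to: Column(Types.String()),
32 |   value: Column(Types.BigNumeric(38)),
33 |   timestamp: Column(Types.Timestamp())
34 | })
35 | ```
36 |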
--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "subsquid-docs",
3 | "version": "0.0.0",
4 | "private": true,
5 | "scripts": {
6 | "docusaurus": "docusaurus",
7 | "start": "docusaurus start",
8 | "build": "docusaurus build",
9 | "swizzle": "docusaurus swizzle",
10 | "deploy": "docusaurus deploy",
11 | "clear": "docusaurus clear",
12 | "serve": "docusaurus serve",
13 | "write-translations": "docusaurus write-translations",
14 | "write-heading-ids": "docusaurus write-heading-ids",
15 | "typecheck": "tsc"
16 | },
17 | "dependencies": {
18 | "@docusaurus/core": "^3.3.2",
19 | "@docusaurus/plugin-client-redirects": "^3.3.2",
20 | "@docusaurus/plugin-google-gtag": "^3.3.2",
21 | "@docusaurus/plugin-google-tag-manager": "^3.3.2",
22 | "@docusaurus/preset-classic": "^3.3.2",
23 | "@mdx-js/react": "3.0.0",
24 | "autoprefixer": "^10.4.7",
25 | "axios": "^1.6.8",
26 | "clsx": "^1.1.1",
27 | "docusaurus-plugin-hotjar": "^0.0.2",
28 | "postcss": "^8.4.14",
29 | "prism-react-renderer": "^1.3.3",
30 | "react": "^18.2.0",
31 | "react-collapsed": "^3.3.2",
32 | "react-dom": "^18.2.0",
33 | "react-syntax-highlighter": "^15.5.0",
34 | "react-transition-group": "^4.4.5",
35 | "swiper": "^10.2.0",
36 | "tailwindcss": "^3.1.3"
37 | },
38 | "devDependencies": {
39 | "@docusaurus/module-type-aliases": "^3.3.2",
40 | "@docusaurus/types": "^3.3.2",
41 | "@tsconfig/docusaurus": "^1.0.5",
42 | "@types/react": "^18.0.14",
43 | "@types/react-syntax-highlighter": "^15.5.7",
44 | "path-browserify": "^1.0.1",
45 | "stream-http": "^3.2.0",
46 | "typescript": "^4.6.4"
47 | },
48 | "browserslist": {
49 | "production": [
50 | ">0.5%",
51 | "not dead",
52 | "not op_mini all"
53 | ],
54 | "development": [
55 | "last 1 chrome version",
56 | "last 1 firefox version",
57 | "last 1 safari version"
58 | ]
59 | }
60 | }
61 |
--------------------------------------------------------------------------------
/docs/fuel-indexing/fuel-datasource/receipts.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 30
3 | description: >-
4 |   Subscribe to receipt data with addReceipt()
5 | ---
6 |
7 | # Receipts
8 |
9 | #### `addReceipt(options)` {#add-receipt}
10 |
11 | Get some _or all_ receipts on the network. `options` has the following structure:
12 |
13 | ```typescript
14 | {
15 | // data requests
16 | type?: ReceiptType[]
17 | contract?: string[]
18 |
19 | // related data retrieval
20 | transaction?: boolean
21 |
22 | range?: {
23 | from: number
24 | to?: number
25 | }
26 | }
27 | ```
28 |
29 | Data requests:
30 |
31 | - `type` sets the types of the receipts to retrieve: `'CALL' | 'RETURN' | 'RETURN_DATA' | 'PANIC' | 'REVERT' | 'LOG' | 'LOG_DATA' | 'TRANSFER' | 'TRANSFER_OUT' | 'SCRIPT_RESULT' | 'MESSAGE_OUT' | 'MINT' | 'BURN'`. Leave it undefined to subscribe to receipts of all types.
32 | - `contract` sets the contract addresses to track. Leave it undefined to subscribe to receipts from all contracts.
33 |
34 | Enabling the `transaction` flag will cause the processor to retrieve transactions that gave rise to the matching receipts. The data will be added to the appropriate iterables within the [block data](/fuel-indexing/fuel-datasource/context-interfaces). You can also call `augmentBlock()` from `@subsquid/fuel-objects` on the block data to populate the convenience reference fields like `receipt.transaction`.
35 |
36 | Note that receipts can also be requested by the other `FuelDataSource` methods as related data.
37 |
38 | Selection of the exact fields to be retrieved for each receipt and the optional related data items is done with the `setFields()` method documented on the [Field selection](../field-selection) page.
39 |
40 | ## Examples
41 |
42 | Request all receipts of the `LOG_DATA` type and include parent transactions:
43 |
44 | ```ts
45 | processor
46 | .addReceipt({
47 | type: ["LOG_DATA"],
48 | transaction: true,
49 | })
50 | .build();
51 | ```
52 |
--------------------------------------------------------------------------------
/scripts/validateTagsJson.js:
--------------------------------------------------------------------------------
1 | var error = false
2 |
3 | function test(condition, message, exitOnFailure=false) {
4 | if (!condition) {
5 | console.error(message)
6 | error = true
7 | if (exitOnFailure) {
8 | process.exit(1)
9 | }
10 | }
11 | }
12 |
13 | test(process.argv[2]!=null, 'Please supply the path to the tags JSON', true)
14 | const tagsJson = JSON.parse(require('fs').readFileSync(process.argv[2]))
15 |
16 | // Extracting tags proper
17 |
18 | const tags = new Set(tagsJson.tags.map(r => r.id))
19 |
20 | const categoryTags = tagsJson.categories.map(cr => new Set(cr.tags))
21 | const allCategoryTags = categoryTags.reduce((a, v) => a.union(v), new Set())
22 |
23 | const cardsTags = new Set()
24 | for (let c of tagsJson.cards) {
25 | for (let t of c.tags) {
26 | cardsTags.add(t)
27 | }
28 | }
29 |
30 | // Each individual tag must have a description
31 |
32 | for (let t of tagsJson.tags) {
33 | test(t.description!=null, `Missing description for tag ${t.id}`)
34 | }
35 |
36 | // tags, cardsTags and allCategoryTags must all be equal
37 |
38 | test(tags.difference(allCategoryTags).size===0, `Base tags [${[...tags.difference(allCategoryTags)]}] are not listed in any categories`)
39 | test(allCategoryTags.difference(tags).size===0, `Category tags [${[...allCategoryTags.difference(tags)]}] are not listed as base tags`)
40 |
41 | test(tags.difference(cardsTags).size===0, `Base tags [${[...tags.difference(cardsTags)]}] are not listed on any card`)
42 | test(cardsTags.difference(tags).size===0, `Cards tags [${[...cardsTags.difference(tags)]}] are not listed as base tags`)
43 |
44 | // tags categories must be disjoint
45 |
46 | for (let cti of categoryTags) {
47 | for (let ctj of categoryTags) {
48 | if (cti!==ctj) { // as references
49 | test(cti.intersection(ctj).size===0, `Tags [${[...cti.intersection(ctj)]}] are present in more than one category`)
50 | }
51 | }
52 | }
53 |
54 | if (error) {
55 | process.exit(1)
56 | }
57 |
--------------------------------------------------------------------------------
/docs/sdk/resources/substrate/ink.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 10
3 | description: >-
4 | ink! WASM smart contracts support
5 | title: ink! contracts support
6 | ---
7 |
8 | # ink! contracts support
9 |
10 | This section describes additional options available for indexing [ink!-based WASM contracts](https://use.ink), supported by chains with a `Contracts` pallet. At the moment of writing, AlephZero, Shibuya (Astar testnet), Shiden (Kusama parachain) and Astar (Polkadot parachain) are the most popular chains for deploying ink! contracts.
11 |
12 | [Generate an ink! indexing squid automatically](/sdk/resources/tools/squid-gen), follow the [WASM squid tutorial](/sdk/tutorials/ink) for step-by-step instructions, or check out the [squid-wasm-template](https://github.com/subsquid-labs/squid-wasm-template) reference project.
13 |
14 | ## Processor configuration
15 |
16 | Request events by the contract address as described on the [`SubstrateBatchProcessor` reference page](/sdk/reference/processors/substrate-batch/data-requests/#addcontractscontractemitted), e.g.
17 |
18 | ```ts
19 | import * as ss58 from '@subsquid/ss58'
20 | import {toHex} from '@subsquid/util-internal-hex'
21 |
22 | const ADDRESS = toHex(ss58.decode('XnrLUQucQvzp5kaaWLG9Q3LbZw5DPwpGn69B5YcywSWVr5w').bytes)
23 |
24 | const processor = new SubstrateBatchProcessor()
25 | .setGateway('https://v2.archive.subsquid.io/network/shibuya-substrate')
26 | .setRpcEndpoint('https://shibuya.public.blastapi.io')
27 | .addContractsContractEmitted({
28 | contractAddress: [ADDRESS],
29 | extrinsic: true
30 | })
31 | .setFields({
32 | event: {
33 | phase: true
34 | }
35 | })
36 | ```
37 |
38 | Generate and use the facade classes for decoding ink! smart contract data as described in the [typegens reference](/sdk/resources/tools/typegen). You can also make [direct contract state queries](/sdk/resources/tools/typegen/state-queries/?typegen=ink) using the `Contract` class generated by `squid-ink-typegen`.
39 |
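40 | For example, decoding contract events in the batch handler may look roughly like this. This is a sketch: the `erc20` facade module and its `decodeEvent()` helper are assumed to have been generated by `squid-ink-typegen` from the contract metadata.
41 |
42 | ```ts
43 | import * as erc20 from './abi/erc20' // assumed output of squid-ink-typegen
44 |
45 | // inside the batch handler:
46 | for (let block of ctx.blocks) {
47 |   for (let event of block.events) {
48 |     if (event.name === 'Contracts.ContractEmitted' && event.args.contract === ADDRESS) {
49 |       // decodeEvent() is assumed to be exposed by the generated facade module
50 |       let decoded = erc20.decodeEvent(event.args.data)
51 |       if (decoded.__kind === 'Transfer') {
52 |         // handle decoded.from, decoded.to, decoded.value
53 |       }
54 |     }
55 |   }
56 | }
57 | ```
58 |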
--------------------------------------------------------------------------------
/docs/sdk/reference/openreader-server/api/nested-field-queries.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 30
3 | title: Nested field queries
4 | description: >-
5 | Query entities related to other entities
6 | ---
7 |
8 | # Nested field queries
9 |
10 | With OpenReader, fields of an entity that contain fields of their own are exposed as nested fields, and these can be filtered as well. GraphQL queries can traverse related objects and their fields, letting clients fetch large amounts of related data in a single request instead of making the multiple roundtrips that a classic REST architecture would require.
11 |
12 | As an example, this query searches for all `accounts` whose balance is bigger than a threshold value, fetching the `id` and `balance` simple fields, as well as the `historicalBalances` **nested field**.
13 |
14 | ```graphql
15 | query {
16 | accounts(orderBy: balance_ASC, where: {balance_gte: "250000000000000000"}) {
17 | id
18 | balance
19 | historicalBalances {
20 | balance
21 | date
22 | id
23 | }
24 | }
25 | }
26 |
27 | ```
28 |
29 | A nested field is a list (one account can have multiple `historicalBalances`) of objects with fields of their own. These objects can be filtered, too.
30 |
31 | In the following query the `historicalBalances` are filtered in order to only return the balances created after a certain date:
32 |
33 | ```graphql
34 | query {
35 | accounts(orderBy: balance_ASC, where: {balance_gte: "250000000000000000"}) {
36 | id
37 | balance
38 | historicalBalances(where: {date_lte: "2020-10-31T11:59:59.000Z"}, orderBy: balance_DESC) {
39 | balance
40 | date
41 | id
42 | }
43 | }
44 | }
45 |
46 | ```
47 | Note that the [newer](/sdk/reference/openreader-server/overview/#supported-queries) and [more advanced](/sdk/reference/openreader-server/api/paginate-query-results) `{entityName}sConnection` queries support exactly the same format of the `where` argument as the older `{entityName}s` queries used in the examples provided here.
48 |
--------------------------------------------------------------------------------
/docs/sdk/reference/openreader-server/api/and-or-filters.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 20
3 | title: AND/OR filters
4 | description: >-
5 | Basic logic operators for use in filters
6 | ---
7 |
8 | # AND/OR filters
9 |
10 | ## Overview
11 |
12 | Our GraphQL implementation offers a wide selection of tools for filtering and slicing query results. One of these is the `where` clause, very common in most database query languages and [explained here](/sdk/reference/openreader-server/api/queries/#filter-query-results--search-queries) in detail.
13 |
14 | Our GraphQL server implementation also supports logical operators in the `where` clause: multiple conditions can be grouped in the same `where` argument with the `AND` and `OR` operators to filter results by more than one criterion.
15 |
16 | Note that the [newer](/sdk/reference/openreader-server/overview/#supported-queries) and [more advanced](/sdk/reference/openreader-server/api/paginate-query-results) `{entityName}sConnection` queries support exactly the same format of the `where` argument as the older `{entityName}s` queries used in the examples provided here.
17 |
18 | ### Example of an `OR` clause:
19 |
20 | Fetch a list of `accounts` that either have a balance bigger than a certain amount, or have a specific id.
21 |
22 | ```graphql
23 | query {
24 | accounts(
25 | orderBy: balance_DESC,
26 | where: {
27 | OR: [
28 | {balance_gte: "240000000000000000"}
29 | {id_eq: "CksmaBx9rKUG9a7eXwc5c965cJ3QiiC8ELFsLtJMYZYuRWs"}
30 | ]
31 | }
32 | ) {
33 | balance
34 | id
35 | }
36 | }
37 |
38 | ```
39 |
40 | ### Example of an `AND` clause:
41 |
42 | Fetch a list of `accounts` that have a balance between two specific amounts:
43 |
44 | ```graphql
45 | query {
46 | accounts(
47 | orderBy: balance_DESC,
48 | where: {
49 | AND: [
50 | {balance_lte: "240000000000000000"}
51 | {balance_gte: "100000000000000"}
52 | ]
53 | }
54 | ) {
55 | balance
56 | id
57 | }
58 | }
59 |
60 | ```
61 |
--------------------------------------------------------------------------------
/docs/sdk/reference/store/file/json.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 40
3 | title: JSON support
4 | description: >-
5 | Table class for writing JSON and JSONL files
6 | ---
7 |
8 | # JSON format support
9 |
10 | ## `Table` Implementation
11 |
12 | The `@subsquid/file-store-json` package provides a `Table` implementation for writing to JSON and [JSONL](https://jsonlines.org) files. Use it by [supplying one or more of its instances via the `tables` field of the `Database` constructor argument](/sdk/resources/persisting-data/file/#database-options). The `Table` constructor has the following signature:
13 | ```typescript
14 | Table<S>(fileName: string, options?: {lines?: boolean})
15 | ```
16 | Here,
17 | * **`S`** is a Typescript type describing the schema of the table data.
18 | * **`fileName: string`** is the name of the output file in every dataset partition folder.
19 | * **`options?: {lines?: boolean}`** are table options. At the moment the only available setting is whether to use JSONL instead of a plain JSON array (default: false).
20 |
21 | ## Example
22 |
23 | This saves ERC20 `Transfer` events captured by the processor to a JSONL file where each line is a JSON serialization of a `{from: string, to: string, value: bigint}` object. Full squid code is available in [this repo](https://github.com/subsquid-labs/file-store-json-example).
24 |
25 | ```typescript
26 | import {Database, LocalDest} from '@subsquid/file-store'
27 | import {Table} from '@subsquid/file-store-json'
28 |
29 | ...
30 |
31 | const dbOptions = {
32 | tables: {
33 | TransfersTable: new Table<{
34 | from: string,
35 | to: string,
36 | value: bigint
37 | }>('transfers.jsonl', { lines: true })
38 | },
39 | dest: new LocalDest('./data'),
40 | chunkSizeMb: 10
41 | }
42 |
43 | processor.run(new Database(dbOptions), async (ctx) => {
44 | ...
45 | let from: string = ...
46 | let to: string = ...
47 | let value: bigint = ...
48 | ctx.store.TransfersTable.write({ from, to, value })
49 | ...
50 | })
51 | ```
52 |
--------------------------------------------------------------------------------
/docs/solana-indexing/sdk/solana-batch/logs.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 40
3 | description: >-
4 | Subscribe to log messages with addLog()
5 | ---
6 |
7 | # Log messages
8 |
9 | #### `addLog(options)` {#add-log}
10 |
11 | Get log messages emitted by some _or all_ programs in the network. `options` has the following structure:
12 |
13 | ```typescript
14 | {
15 | // data requests
16 | where?: {
17 | programId?: string[]
18 | kind?: ('log' | 'data' | 'other')[]
19 | }
20 |
21 | // related data retrieval
22 | include?: {
23 | transaction?: boolean
24 | instruction?: boolean
25 | }
26 |
27 | range?: {
28 | from: number,
29 | to?: number
30 | }
31 | }
32 | ```
33 |
34 | Data requests:
35 |
36 | - `programId`: the set of addresses of programs emitting the logs. Leave it undefined to subscribe to logs from all programs in the network.
37 | - `kind`: the set of log message kinds to retrieve: `'log'`, `'data'` and/or `'other'`. Leave it undefined to subscribe to log messages of all kinds.
38 |
39 | With `transaction = true` the processor will retrieve all parent transactions and add them to the `transactions` iterable within the [block data](/solana-indexing/sdk/solana-batch/context-interfaces). You can also call `augmentBlock()` from `@subsquid/solana-objects` on the block data to populate the convenience reference fields like `log.transaction`.
40 |
41 | Note that logs can also be requested by the other `SolanaDataSource` methods as related data.
42 |
43 | Selection of the exact fields to be retrieved for each log and its optional parent transaction is done with the `setFields()` method documented on the [Field selection](../field-selection) page.
44 |
45 | ## Examples
46 |
47 | Fetch all log messages emitted by the Pyth push oracle program starting at block 241,000,000, including the parent instructions:
48 |
49 | ```ts
50 | const dataSource = new DataSourceBuilder()
51 | .setGateway('https://v2.archive.subsquid.io/network/solana-mainnet')
52 | .addLog({
53 | where: {
54 | programId: [PYTH_PUSH_ORACLE_PROGRAM_ID]
55 | },
56 | include: {
57 | instruction: true
58 | },
59 | range: {
60 | from: 241_000_000
61 | }
62 | })
63 | .build()
64 | ```
65 |
--------------------------------------------------------------------------------
/scripts/networksLists/bin/rpcProxy.js:
--------------------------------------------------------------------------------
1 | const help =
2 | `This script expects the RPC chains JSON at its stdin. Supply it e.g. with\n
3 | $ curl -s -H 'Authorization: Bearer ' https://app.subsquid.io/api/v1/orgs//rpc/chains | node scripts/networksLists/bin/rpcProxy.js\n
4 | Consult your browser's dev console at https://app.subsquid.io/rpc to get the token.`
5 |
6 | if (process.argv[2] === '--help') {
7 | console.log(help)
8 | process.exit(0)
9 | }
10 |
11 | const inputChunks = []
12 |
13 | process.stdin.resume()
14 | process.stdin.setEncoding('utf8')
15 | process.stdin.on('data', chunk => {
16 | inputChunks.push(chunk)
17 | })
18 | process.stdin.on('end', () => {
19 | const inputJSON = inputChunks.join('')
20 | try {
21 | const parsedData = JSON.parse(inputJSON)
22 | console.log(makeRpcProxyTables(parsedData.payload))
23 | }
24 | catch (e) {
25 | console.error('Invalid JSON input, see --help', e)
26 | process.exit(1)
27 | }
28 | })
29 |
30 | function makeRpcProxyTables(allChains) {
31 | const uncategorizedNetworks = allChains.filter(r => r.type!='evm' && r.type!='substrate' && r.type!='solana')
32 | if (uncategorizedNetworks.length > 0) {
33 | console.error('Found uncategorized networks', uncategorizedNetworks)
34 | }
35 | return [
36 | makeRpcProxyTable(allChains.filter(r => r.type=='evm')),
37 | makeRpcProxyTable(allChains.filter(r => r.type=='substrate')),
38 | makeRpcProxyTable(allChains.filter(r => r.type=='solana'))
39 | ].join('\n\n')
40 | }
41 |
42 | function makeRpcProxyTable(chains) {
43 | const urlType = a => {
44 | if (a.url.startsWith('https')) {
45 | return 'http'
46 | }
47 | else {
48 | console.error('Got a network with an unsupported protocol', a)
49 | process.exit(1)
50 | }
51 | }
52 |
53 | const rows = chains.map(a => ({
54 | network: a.title,
55 | alias: `\`${a.id}.${urlType(a)}\``
56 | }))
57 | const header = {
58 | network: 'Network name',
59 | alias: 'network.protocol'
60 | }
61 | return require('../lib/formatTable')(rows, header, ['network', 'alias'])
62 | }
63 |
--------------------------------------------------------------------------------
/docs/solana-indexing/sdk/solana-batch/transactions.md:
--------------------------------------------------------------------------------
1 | ---
2 | sidebar_position: 30
3 | description: >-
4 | Subscribe to txn data with addTransaction()
5 | ---
6 |
7 | # Transactions
8 |
9 | #### `addTransaction(options)` {#add-transaction}
10 |
11 | Get some _or all_ transactions on the network. `options` has the following structure:
12 |
13 | ```typescript
14 | {
15 | // data requests
16 | where?: {
17 | feePayer?: string[]
18 | }
19 |
20 | // related data retrieval
21 | include?: {
22 | instructions?: boolean
23 | logs?: boolean
24 | }
25 |
26 | range?: {
27 | from: number
28 | to?: number
29 | }
30 | }
31 | ```
32 |
33 | Data requests:
34 | - `feePayer` sets the addresses of the fee payers. Leave it undefined to subscribe to all transactions.
35 |
36 | Enabling the `instructions` and/or `logs` flags will cause the processor to retrieve [instructions](/solana-indexing/sdk/solana-batch/instructions) and [logs](/solana-indexing/sdk/solana-batch/logs) that occurred as a result of each selected transaction. The data will be added to the appropriate iterables within the [block data](/solana-indexing/sdk/solana-batch/context-interfaces). You can also call `augmentBlock()` from `@subsquid/solana-objects` on the block data to populate the convenience reference fields like `transaction.logs`.
37 |
38 | Note that transactions can also be requested by the other `SolanaDataSource` methods as related data.
39 |
40 | Selection of the exact fields to be retrieved for each transaction and the optional related data items is done with the `setFields()` method documented on the [Field selection](../field-selection) page.
41 |
42 | ## Examples
43 |
44 | Request all transactions with fee payer `rec5EKMGg6MxZYaMdyBfgwp4d5rB9T1VQH5pJv5LtFJ` and include logs and instructions:
45 |
46 | ```ts
47 | processor
48 | .addTransaction({
49 | where: {
50 | feePayer: ['rec5EKMGg6MxZYaMdyBfgwp4d5rB9T1VQH5pJv5LtFJ'],
51 | },
52 | include: {
53 | logs: true,
54 | instructions: true
55 | }
56 | })
57 | .build()
58 | ```
59 |
--------------------------------------------------------------------------------
/scripts/networksLists/lib/getArchiveCapabilities.js:
--------------------------------------------------------------------------------
1 | const axios = require('axios')
2 |
3 | module.exports = getArchiveCapabilities
4 |
5 | /**
6 | * @param {string} archiveUrl
7 |  * @returns {Promise<Object>} Capabilities of the archive
8 | */
9 | function getArchiveCapabilities(archiveUrl) {
10 | return processArchiveHeight(archiveUrl, (height) =>
11 | processWorkerUrl(archiveUrl, height, (workerUrl, height) =>
12 | getWorkerCapabilities(workerUrl, height)
13 | )
14 | )
15 | }
16 |
17 | function processArchiveHeight(archiveUrl, callback) {
18 | return axios.get(`${archiveUrl}/height`).then(ahdata => callback(ahdata.data))
19 | }
20 |
21 | function processWorkerUrl(archiveUrl, height, callback) {
22 | return height>0 ?
23 | axios.get(`${archiveUrl}/${height}/worker`).then(wdata => callback(wdata.data, height)) :
24 | Promise.resolve(callback(undefined, height))
25 | }
26 |
27 | function getWorkerCapabilities(workerUrl, height) {
28 | const capabilities = ['transactions', 'logs', 'stateDiffs', 'traces']
29 | if (height<=0) {
30 | return Promise.resolve(Object.fromEntries(capabilities.map(c => [c, null])))
31 | }
32 |
33 | const postConfig = {
34 | headers: {
35 | 'content-type': 'application/json',
36 | 'accept': 'application/json'
37 | },
38 | validateStatus: null
39 | }
40 |
41 | function getWorkerCapability(capability) {
42 | return axios.post(workerUrl, `{"fromBlock": ${height}, "toBlock": ${height}, "${capability}": [{}]}`, postConfig)
43 | .then(response => {
44 | if (response.status===200) {
45 | return true
46 | }
47 | else if (response.status===400 && response.data.description===`"${capability.toLowerCase()}" data is not supported by this archive on requested block range`) {
48 | return false
49 | }
50 | else {
51 | console.error(response)
52 | throw new Error(`Unrecognized response to request on capability ${capability}`)
53 | }
54 | })
55 | }
56 |
57 | return Promise.all(capabilities.map(getWorkerCapability))
58 | .then(caps => Object.fromEntries(caps.map((cv, i) => [capabilities[i], cv])))
59 | }
60 |
61 |
--------------------------------------------------------------------------------
/docs/external-tools.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: External tools
3 | description: >-
4 | Third party tools and extensions
5 | sidebar_position: 90
6 | ---
7 |
8 | # External tools
9 |
10 | ## `@belopash/typeorm-store`
11 |
12 | [`@belopash/typeorm-store`](https://github.com/belopash/squid-typeorm-store) is a [fork](/sdk/resources/persisting-data/overview/#custom-database) of [`@subsquid/typeorm-store`](/sdk/reference/store/typeorm) that automates collecting read and write database requests into [batches](/sdk/resources/batch-processing) and caches the available entity records in RAM. Unlike the [standard `typeorm-store`](/sdk/resources/persisting-data/typeorm), @belopash's store is intended to be used with declarative code: it makes it easy to write mapping functions (e.g. event handlers) that explicitly define
13 |
14 | - what data you're going to need from the database
15 | - what code has to be executed once the data is available
16 | - how to save the results
17 |
18 | Data dependencies due to [entity relations](/sdk/reference/schema-file/entity-relations) are handled automatically, along with the caching of intermediate results and in-memory batching of database requests.
19 |
20 | See [this repository](https://github.com/subsquid-labs/belopash-typeorm-store-example) for a minimal example.
21 |
22 | ## DipDup
23 |
24 | [DipDup](https://dipdup.io) is a Python indexing framework that can use [SQD Network](/subsquid-network) as a data source. It offers
25 |
26 | * SQLite, PostgreSQL and TimescaleDB data sinks
27 | * GraphQL APIs based on Hasura
28 |
29 | The development workflow uses the `dipdup` tool to generate a stub project. Once that's done, all you have to do is define the data schema and the handlers. Take a look at their [quickstart](https://dipdup.io/docs/quickstart-evm) for more details.
30 |
31 | With its handler-based architecture and the choice of Python as the transform logic language, DipDup is easier to develop for than [Squid SDK](/sdk), but has higher requirements on database IO bandwidth and CPU. The IO bandwidth issue is partially solved by DipDup's caching layer used for database access.
32 |
--------------------------------------------------------------------------------
/docs/squid-cli/logs.md:
--------------------------------------------------------------------------------
1 | `sqd logs`
2 | ==========
3 |
4 | Fetch logs from a squid deployed to the Cloud
5 |
6 | * [`sqd logs`](#sqd-logs-1)
7 |
8 | ## `sqd logs`
9 |
10 | Fetch logs from a squid deployed to the Cloud
11 |
12 | ```
13 | USAGE
14 | $ sqd logs [--interactive] [--since ] [--search ] [-f | -p ]
15 | [-r [/](@|:) | -o | [-s -n ] | [-t ]]
16 | [-c processor|query-node|api|db-migrate|db...]
17 | [-l error|debug|info|warning...] [--since ]
18 |
19 | FLAGS
20 | -c, --container=