├── LICENSE
├── README.md
├── bigquery_to_gcs.sh
├── ethereum_bigquery_to_gcs.sh
├── ethereum_gcs_to_cloud_sql.sh
├── ethereumetl_postgres
│   └── gcs_compose.py
├── flatten_crypto_ethereum_blocks.sql
├── flatten_crypto_ethereum_contracts.sql
├── flatten_crypto_ethereum_logs.sql
├── flatten_crypto_ethereum_transactions.sql
├── gcs_to_cloud_sql.sh
├── indexes
│   ├── blocks.sql
│   ├── contracts.sql
│   ├── logs.sql
│   ├── token_transfers.sql
│   ├── tokens.sql
│   ├── traces.sql
│   └── transactions.sql
├── requirements.txt
└── schema
    ├── blocks.sql
    ├── contracts.sql
    ├── logs.sql
    ├── token_transfers.sql
    ├── tokens.sql
    ├── traces.sql
    └── transactions.sql

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2020 Blockchain ETL

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Ethereum ETL for PostgreSQL

The steps below will allow you to bootstrap a PostgreSQL database in GCP with full historical and real-time Ethereum data:
blocks, transactions, logs, token_transfers, traces, contracts, and tokens.

The whole process will take between 24 and 72 hours.

**Prerequisites**:

- Python 3.6+
- gcloud
- psql
- [Cloud SQL Proxy](https://cloud.google.com/sql/docs/mysql/sql-proxy#install)

### 1. Export Ethereum data from BigQuery to CSV files in GCS

- Install gcloud and run `gcloud auth login`
- Run

```bash
pip install -r requirements.txt
export BUCKET=<your_bucket>
bash ethereum_bigquery_to_gcs.sh $BUCKET
```

Optionally provide start and end dates: `bash ethereum_bigquery_to_gcs.sh $BUCKET 2020-01-01 2020-01-31`

Exporting to CSV files takes about 10 minutes.

### 2. Import data from CSV files to PostgreSQL database in Cloud SQL

- Create a new Cloud SQL instance:

```bash
export CLOUD_SQL_INSTANCE_ID=ethereum-0
export ROOT_PASSWORD=<your_password>
gcloud sql instances create $CLOUD_SQL_INSTANCE_ID --database-version=POSTGRES_11 --root-password=$ROOT_PASSWORD \
    --storage-type=SSD --storage-size=100 --cpu=4 --memory=6 \
    --database-flags=temp_file_limit=2147483647
```

Notice that the storage size is set to 100 GB. It will scale up automatically to around 1.5 TB as the data is loaded.
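The next step grants that instance's service account read access to the export bucket. A minimal sketch of the grant with `gsutil`, assuming `$BUCKET` is the bucket from step 1; the `SQL_SA` variable is a hypothetical placeholder for illustration:

```bash
# Hypothetical placeholder: paste the serviceAccountEmailAddress reported by
# `gcloud sql instances describe $CLOUD_SQL_INSTANCE_ID`.
export SQL_SA=<service_account_email>
gsutil iam ch serviceAccount:${SQL_SA}:objectViewer gs://${BUCKET}
```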

- Add the Cloud SQL service account to the GCS bucket as `objectViewer`.
  Run `gcloud sql instances describe $CLOUD_SQL_INSTANCE_ID`,
  then copy `serviceAccountEmailAddress` from the output and add it to the bucket.

- Create the database and the tables:

```bash
gcloud sql databases create ethereum --instance=$CLOUD_SQL_INSTANCE_ID

# Install Cloud SQL Proxy following the instructions here https://cloud.google.com/sql/docs/mysql/sql-proxy#install
# Replace myProject and us-central1 with your GCP project ID and the instance's region.
./cloud_sql_proxy -instances=myProject:us-central1:${CLOUD_SQL_INSTANCE_ID}=tcp:5433

cat schema/*.sql | psql -U postgres -d ethereum -h 127.0.0.1 --port 5433 -a
```

- Run the import from GCS to Cloud SQL:

```bash
bash ethereum_gcs_to_cloud_sql.sh $BUCKET $CLOUD_SQL_INSTANCE_ID
```

Importing to Cloud SQL will take between 12 and 24 hours.

A few performance optimization tips for the initial load of the data:

- Turn off fsync: https://www.postgresql.org/docs/11/runtime-config-wal.html.
- Use UNLOGGED tables.
- Turn off automatic backups and autovacuum on the Google Cloud SQL instance.

### 3. Apply indexes to the tables

NOTE: the indexes for the contracts table won't work due to the issue described here: https://github.com/blockchain-etl/ethereum-etl-postgres/pull/11#issuecomment-1107801061

- Run:

```bash
cat indexes/*.sql | psql -U postgres -d ethereum -h 127.0.0.1 --port 5433 -a
```

Creating the indexes will take between 12 and 24 hours. Depending on the queries you're going to run,
you may need to create more indexes or [partition](https://www.postgresql.org/docs/11/ddl-partitioning.html) the tables.

The Cloud SQL instance will cost between $200 and $500 per month, depending on
whether you use HDD or SSD and on the machine type.

### 4. Streaming

Use the `ethereumetl stream` command to continually pull data from an Ethereum node and insert it into the Postgres tables:
https://github.com/blockchain-etl/ethereum-etl/tree/develop/docs/commands.md#stream.

Follow the instructions here to deploy it to Kubernetes: https://github.com/blockchain-etl/blockchain-etl-streaming.

--------------------------------------------------------------------------------
/bigquery_to_gcs.sh:
--------------------------------------------------------------------------------
#set -o xtrace

usage() { echo "Usage: $0 <input_table> <output_bucket> [<output_folder>]" 1>&2; exit 1; }

# Read input params

input_table=$1
output_bucket=$2

if [ -z "${input_table}" ] || [ -z "${output_bucket}" ]; then
    usage
fi

output_folder=$3
if [ -z "${output_folder}" ]; then
    output_folder="${input_table}"
fi

# Call bq extract

output_folder_raw="${output_folder}_raw"
output_folder_composed="${output_folder}_composed"

output_uri="gs://${output_bucket}/${output_folder_raw}/*.gz"
echo "Extracting ${input_table} to ${output_uri} ..."
bq extract --noprint_header --destination_format=CSV --compression=GZIP ${input_table} ${output_uri}

# BigQuery exports partitioned tables to many small files. We combine them into bigger files so the import is faster.
python ethereumetl_postgres/gcs_compose.py -b ${output_bucket} -i "${output_folder_raw}/" -o "${output_folder_composed}/"
# GCS compose has a limit of 32 source files per compose operation. Compose once more to work around this limit.
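# (32 source files per pass and two passes means up to 32 * 32 = 1,024 raw export files can be merged.)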
python ethereumetl_postgres/gcs_compose.py -b ${output_bucket} -i "${output_folder_composed}/" -o "${output_folder}/"

# Cleanup

gsutil -m rm -r "gs://${output_bucket}/${output_folder_raw}"
gsutil -m rm -r "gs://${output_bucket}/${output_folder_composed}"

--------------------------------------------------------------------------------
/ethereum_bigquery_to_gcs.sh:
--------------------------------------------------------------------------------
#set -o xtrace

usage() { echo "Usage: $0 <output_bucket> [<start_date> <end_date>]" 1>&2; exit 1; }

output_bucket=$1

if [ -z "${output_bucket}" ]; then
    usage
fi

start_date=$2
end_date=$3
filter_date=false
if [ -n "${start_date}" ] && [ -n "${end_date}" ]; then
    filter_date=true
fi

# The tables below contain columns with type ARRAY<...>.
# BigQuery can't export them to CSV, so we need to flatten them first.
export_temp_dataset="export_temp_dataset"
export_temp_blocks_table="flattened_blocks"
export_temp_transactions_table="flattened_transactions"
export_temp_logs_table="flattened_logs"
export_temp_contracts_table="flattened_contracts"

bq rm -r -f ${export_temp_dataset}
bq mk ${export_temp_dataset}

flatten_table() {
    local sql_file=$1
    local temp_table_name=$2
    local timestamp_column=$3
    local sql=$(cat ./${sql_file} | awk -F '--' '{print $1}' | tr '\n' ' ')

    if [ "${filter_date}" = "true" ]; then
        sql="${sql} where date(${timestamp_column}) >= '${start_date}' and date(${timestamp_column}) <= '${end_date}'"
    fi

    echo "Executing query ${sql}"
    bq --location=US query --destination_table ${export_temp_dataset}.${temp_table_name} --use_legacy_sql=false "${sql}"
}

flatten_table "flatten_crypto_ethereum_blocks.sql" "${export_temp_blocks_table}" "timestamp"
flatten_table "flatten_crypto_ethereum_transactions.sql" "${export_temp_transactions_table}" "block_timestamp"
flatten_table "flatten_crypto_ethereum_logs.sql" "${export_temp_logs_table}" "block_timestamp"
flatten_table "flatten_crypto_ethereum_contracts.sql" "${export_temp_contracts_table}" "block_timestamp"

declare -a tables=(
    "${export_temp_dataset}.${export_temp_blocks_table}"
    "${export_temp_dataset}.${export_temp_transactions_table}"
    "bigquery-public-data:crypto_ethereum.token_transfers"
    "bigquery-public-data:crypto_ethereum.traces"
    "bigquery-public-data:crypto_ethereum.tokens"
    "${export_temp_dataset}.${export_temp_logs_table}"
    "${export_temp_dataset}.${export_temp_contracts_table}"
)

for table in "${tables[@]}"
do
    echo "Exporting BigQuery table ${table}"
    if [ "${filter_date}" = "true" ]; then
        query="select * from \`${table//:/.}\`"
        timestamp_column="block_timestamp"
        if [ "${table}" = "${export_temp_dataset}.${export_temp_blocks_table}" ]; then
            timestamp_column="timestamp"
        fi
        query="${query} where date(${timestamp_column}) >= '${start_date}' and date(${timestamp_column}) <= '${end_date}'"
        filtered_table_name="${table//[.:-]/_}_filtered"
        echo "Executing query ${query}"
        bq --location=US query --destination_table "${export_temp_dataset}.${filtered_table_name}" --use_legacy_sql=false "${query}"

        output_folder=${filtered_table_name}
        bash bigquery_to_gcs.sh "${export_temp_dataset}.${filtered_table_name}" ${output_bucket} ${output_folder}
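        # Move the filtered export under the original table's folder name (the flattened temp tables are renamed again below).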
        gsutil -m mv gs://${output_bucket}/${output_folder}/* gs://${output_bucket}/${table}/
    else
        output_folder=${table}
        bash bigquery_to_gcs.sh ${table} ${output_bucket} ${output_folder}
    fi
done

# Rename the output folders of the flattened tables so the import script finds them
gsutil -m mv gs://${output_bucket}/${export_temp_dataset}.${export_temp_blocks_table}/* gs://${output_bucket}/bigquery-public-data:crypto_ethereum.blocks/
gsutil -m mv gs://${output_bucket}/${export_temp_dataset}.${export_temp_transactions_table}/* gs://${output_bucket}/bigquery-public-data:crypto_ethereum.transactions/
gsutil -m mv gs://${output_bucket}/${export_temp_dataset}.${export_temp_logs_table}/* gs://${output_bucket}/bigquery-public-data:crypto_ethereum.logs/
gsutil -m mv gs://${output_bucket}/${export_temp_dataset}.${export_temp_contracts_table}/* gs://${output_bucket}/bigquery-public-data:crypto_ethereum.contracts/

# Cleanup
bq rm -r -f ${export_temp_dataset}

--------------------------------------------------------------------------------
/ethereum_gcs_to_cloud_sql.sh:
--------------------------------------------------------------------------------
#set -o xtrace

usage() { echo "Usage: $0 <input_bucket> <cloud_sql_instance_id>" 1>&2; exit 1; }

input_bucket=$1
cloud_sql_instance_id=$2

if [ -z "${input_bucket}" ] || [ -z "${cloud_sql_instance_id}" ]; then
    usage
fi

declare -a tables=(
    "blocks"
    "transactions"
    "token_transfers"
    "traces"
    "logs"
    "contracts"
    "tokens"
)

for table in "${tables[@]}"
do
    folder="bigquery-public-data:crypto_ethereum.${table}"
    uri="gs://${input_bucket}/${folder}"
    echo "Importing files from ${uri}"
    bash gcs_to_cloud_sql.sh ${uri} ${cloud_sql_instance_id} ${table}
done

--------------------------------------------------------------------------------
/ethereumetl_postgres/gcs_compose.py:
--------------------------------------------------------------------------------
import argparse

from google.cloud import storage

BYTES_IN_GB = 10 ** 9
GCS_MAX_FILE_SIZE = 2 * BYTES_IN_GB
GCS_MAX_FILES_FOR_COMBINE = 32

parser = argparse.ArgumentParser(description='Compose files in GCS folder.')
parser.add_argument('-b', '--bucket', required=True, type=str, help='Bucket name.')
parser.add_argument('-i', '--input-folder', required=True, type=str, help='Input folder.')
parser.add_argument('-o', '--output-folder', required=True, type=str, help='Output folder.')
parser.add_argument('-m', '--max-size-in-bytes', default=GCS_MAX_FILE_SIZE, type=int,
                    help='Maximum size of output files.')

args = parser.parse_args()


def compose(bucket_name, source_prefix, destination_prefix, max_size_in_bytes):
    print('Composing files in {} to {}'.format(f'gs://{bucket_name}/{source_prefix}', f'gs://{bucket_name}/{destination_prefix}'))
    storage_client = storage.Client()

    blobs = storage_client.list_blobs(bucket_name, prefix=source_prefix)

    # Group the blobs into batches of at most GCS_MAX_FILES_FOR_COMBINE files
    # and at most max_size_in_bytes total size.
    current_batch_size = 0
    current_batch = []
    all_batches = []
    for blob in blobs:
        if (current_batch_size + blob.size) < max_size_in_bytes and len(current_batch) < GCS_MAX_FILES_FOR_COMBINE:
            current_batch.append(blob)
            current_batch_size += blob.size
        else:
            all_batches.append(current_batch)
            current_batch = [blob]
            current_batch_size = blob.size
    if current_batch:
        all_batches.append(current_batch)

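    # Compose each batch (at most 32 blobs, at most ~max_size_in_bytes total) into a single output object.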
    for index, batch in enumerate(all_batches):
        padded_index = str(index).zfill(12)
        bucket = storage_client.bucket(bucket_name)
        blob_name = destination_prefix + padded_index + '.gz'
        blob = bucket.blob(blob_name)

        print('Composing {} files to {}'.format(len(batch), blob_name))
        blob.compose(batch)


compose(args.bucket,
        source_prefix=args.input_folder,
        destination_prefix=args.output_folder,
        max_size_in_bytes=args.max_size_in_bytes)

--------------------------------------------------------------------------------
/flatten_crypto_ethereum_blocks.sql:
--------------------------------------------------------------------------------
select
    timestamp,
    number,
    `hash`,
    parent_hash,
    nonce,
    sha3_uncles,
    logs_bloom,
    transactions_root,
    state_root,
    receipts_root,
    miner,
    difficulty,
    total_difficulty,
    size,
    extra_data,
    gas_limit,
    gas_used,
    transaction_count,
    base_fee_per_gas,
    withdrawals_root,
    blob_gas_used,
    excess_blob_gas
from `bigquery-public-data.crypto_ethereum.blocks`

--------------------------------------------------------------------------------
/flatten_crypto_ethereum_contracts.sql:
--------------------------------------------------------------------------------
select
    address,
    bytecode,
    -- convert bq array to array literal
    -- this allows us to export nested data to a csv and
    -- import it to postgres as a text array
    concat('{', array_to_string(function_sighashes, ','), '}') as function_sighashes,
    is_erc20,
    is_erc721,
    block_number,
    block_timestamp
from `bigquery-public-data.crypto_ethereum.contracts`

--------------------------------------------------------------------------------
/flatten_crypto_ethereum_logs.sql:
--------------------------------------------------------------------------------
select
    log_index,
    transaction_hash,
    transaction_index,
    address,
    data,
    topics[SAFE_OFFSET(0)] AS topic0,
    topics[SAFE_OFFSET(1)] AS topic1,
    topics[SAFE_OFFSET(2)] AS topic2,
    topics[SAFE_OFFSET(3)] AS topic3,
    block_timestamp,
    block_number,
    block_hash
from `bigquery-public-data.crypto_ethereum.logs`

--------------------------------------------------------------------------------
/flatten_crypto_ethereum_transactions.sql:
--------------------------------------------------------------------------------
select
    `hash`,
    nonce,
    transaction_index,
    from_address,
    to_address,
    value,
    gas,
    gas_price,
    input,
    receipt_cumulative_gas_used,
    receipt_gas_used,
    receipt_contract_address,
    receipt_root,
    receipt_status,
    block_timestamp,
    block_number,
    block_hash,
    max_fee_per_gas,
    max_priority_fee_per_gas,
    transaction_type,
    receipt_effective_gas_price,
    max_fee_per_blob_gas,
    ARRAY_TO_STRING(blob_versioned_hashes, ',') as blob_versioned_hashes,
    receipt_blob_gas_price,
    receipt_blob_gas_used
from `bigquery-public-data.crypto_ethereum.transactions`

--------------------------------------------------------------------------------
/gcs_to_cloud_sql.sh:
--------------------------------------------------------------------------------
#set -o xtrace

usage() { echo "Usage: $0 <input_gcs_uri> <cloud_sql_instance_id> <table_name>" 1>&2; exit 1; }

input_gcs_uri=$1
cloud_sql_instance_id=$2
table_name=$3

"${table_name}" ]; then 10 | usage 11 | fi 12 | 13 | database_name="ethereum" 14 | 15 | for gcs_file in $(gsutil ls ${input_gcs_uri}); do 16 | command="gcloud sql import csv ${cloud_sql_instance_id} ${gcs_file} --database=${database_name} --table=${table_name} --quiet --async" 17 | echo "Executing command ${command}" 18 | operation_url=$(${command}) 19 | operation_id="${operation_url: -36}" 20 | gcloud sql operations wait ${operation_id} --timeout unlimited 21 | sleep 10 22 | done 23 | -------------------------------------------------------------------------------- /indexes/blocks.sql: -------------------------------------------------------------------------------- 1 | alter table blocks add constraint blocks_pk primary key (hash); 2 | 3 | create index blocks_timestamp_index on blocks (timestamp desc); 4 | 5 | create unique index blocks_number_uindex on blocks (number desc); 6 | -------------------------------------------------------------------------------- /indexes/contracts.sql: -------------------------------------------------------------------------------- 1 | alter table contracts add constraint contracts_pk primary key (address, block_number); 2 | 3 | create index contracts_block_number_index on contracts (block_number desc); 4 | create index contracts_is_erc20_index on contracts (is_erc20, block_number desc); 5 | create index contracts_is_erc721_index on contracts (is_erc721, block_number desc); 6 | -------------------------------------------------------------------------------- /indexes/logs.sql: -------------------------------------------------------------------------------- 1 | alter table logs add constraint logs_pk primary key (transaction_hash, log_index); 2 | 3 | create index logs_block_timestamp_index on logs (block_timestamp desc); 4 | 5 | create index logs_address_block_timestamp_index on logs (address, block_timestamp desc); 6 | -------------------------------------------------------------------------------- /indexes/token_transfers.sql: -------------------------------------------------------------------------------- 1 | alter table token_transfers add constraint token_transfers_pk primary key (transaction_hash, log_index); 2 | 3 | create index token_transfers_block_timestamp_index on token_transfers (block_timestamp desc); 4 | 5 | create index token_transfers_token_address_block_timestamp_index on token_transfers (token_address, block_timestamp desc); 6 | create index token_transfers_from_address_block_timestamp_index on token_transfers (from_address, block_timestamp desc); 7 | create index token_transfers_to_address_block_timestamp_index on token_transfers (to_address, block_timestamp desc); 8 | -------------------------------------------------------------------------------- /indexes/tokens.sql: -------------------------------------------------------------------------------- 1 | alter table tokens add constraint tokens_pk primary key (address, block_number); 2 | 3 | create index tokens_block_number_index on tokens (block_number desc); 4 | -------------------------------------------------------------------------------- /indexes/traces.sql: -------------------------------------------------------------------------------- 1 | alter table traces add constraint traces_pk primary key (trace_id); 2 | 3 | create index traces_block_timestamp_index on traces (block_timestamp desc); 4 | 5 | create index traces_from_address_block_timestamp_index on traces (from_address, block_timestamp desc); 6 | create index traces_to_address_block_timestamp_index on traces (to_address, 
create index traces_to_address_block_timestamp_index on traces (to_address, block_timestamp desc);

--------------------------------------------------------------------------------
/indexes/transactions.sql:
--------------------------------------------------------------------------------
alter table transactions add constraint transactions_pk primary key (hash);

create index transactions_block_timestamp_index on transactions (block_timestamp desc);

create index transactions_from_address_block_timestamp_index on transactions (from_address, block_timestamp desc);
create index transactions_to_address_block_timestamp_index on transactions (to_address, block_timestamp desc);

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
google-cloud-storage==1.25.0

--------------------------------------------------------------------------------
/schema/blocks.sql:
--------------------------------------------------------------------------------
create table blocks
(
    timestamp timestamp,
    number bigint,
    hash varchar(66),
    parent_hash varchar(66),
    nonce varchar(42),
    sha3_uncles varchar(66),
    logs_bloom text,
    transactions_root varchar(66),
    state_root varchar(66),
    receipts_root varchar(66),
    miner varchar(42),
    difficulty numeric(38),
    total_difficulty numeric(38),
    size bigint,
    extra_data text,
    gas_limit bigint,
    gas_used bigint,
    transaction_count bigint,
    base_fee_per_gas bigint,
    withdrawals_root varchar(66),
    blob_gas_used bigint,
    excess_blob_gas bigint
);

--------------------------------------------------------------------------------
/schema/contracts.sql:
--------------------------------------------------------------------------------
create table contracts
(
    address varchar(42),
    bytecode text,
    function_sighashes text[],
    is_erc20 boolean,
    is_erc721 boolean,
    block_number bigint,
    block_hash varchar(66),
    block_timestamp timestamp
);

--------------------------------------------------------------------------------
/schema/logs.sql:
--------------------------------------------------------------------------------
create table logs
(
    log_index bigint,
    transaction_hash varchar(66),
    transaction_index bigint,
    address varchar(42),
    data text,
    topic0 varchar(66),
    topic1 varchar(66),
    topic2 varchar(66),
    topic3 varchar(66),
    block_timestamp timestamp,
    block_number bigint,
    block_hash varchar(66)
);

--------------------------------------------------------------------------------
/schema/token_transfers.sql:
--------------------------------------------------------------------------------
create table token_transfers
(
    token_address varchar(42),
    from_address varchar(42),
    to_address varchar(42),
    value numeric(78),
    transaction_hash varchar(66),
    log_index bigint,
    block_timestamp timestamp,
    block_number bigint,
    block_hash varchar(66)
);

--------------------------------------------------------------------------------
/schema/tokens.sql:
--------------------------------------------------------------------------------
create table tokens
(
    address varchar(42),
    name text,
    symbol text,
    decimals int,
    total_supply numeric(78),
    block_number bigint,
    block_hash varchar(66),
    block_timestamp timestamp
);
--------------------------------------------------------------------------------
/schema/traces.sql:
--------------------------------------------------------------------------------
create table traces
(
    transaction_hash varchar(66),
    transaction_index bigint,
    from_address varchar(42),
    to_address varchar(42),
    value numeric(38),
    input text,
    output text,
    trace_type varchar(16),
    call_type varchar(16),
    reward_type varchar(16),
    gas bigint,
    gas_used bigint,
    subtraces bigint,
    trace_address varchar(8192),
    error text,
    status int,
    block_timestamp timestamp,
    block_number bigint,
    block_hash varchar(66),
    trace_id text
);

--------------------------------------------------------------------------------
/schema/transactions.sql:
--------------------------------------------------------------------------------
create table transactions
(
    hash varchar(66),
    nonce bigint,
    transaction_index bigint,
    from_address varchar(42),
    to_address varchar(42),
    value numeric(38),
    gas bigint,
    gas_price bigint,
    input text,
    receipt_cumulative_gas_used bigint,
    receipt_gas_used bigint,
    receipt_contract_address varchar(42),
    receipt_root varchar(66),
    receipt_status bigint,
    block_timestamp timestamp,
    block_number bigint,
    block_hash varchar(66),
    max_fee_per_gas bigint,
    max_priority_fee_per_gas bigint,
    transaction_type bigint,
    receipt_effective_gas_price bigint,
    receipt_l1_fee bigint,
    receipt_l1_gas_used bigint,
    receipt_l1_gas_price bigint,
    receipt_l1_fee_scalar decimal,
    max_fee_per_blob_gas bigint,
    blob_versioned_hashes text,
    receipt_blob_gas_price bigint,
    receipt_blob_gas_used bigint
);

--------------------------------------------------------------------------------
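Once the data is loaded and indexed, queries that filter on an indexed address column and order by `block_timestamp desc` line up with the index definitions above. A minimal sketch, assuming the Cloud SQL Proxy from the README is still listening on port 5433; the address is purely illustrative:

```bash
# Example: latest 100 token transfers sent from a given (illustrative) address;
# this shape of query can be served by token_transfers_from_address_block_timestamp_index.
psql -U postgres -d ethereum -h 127.0.0.1 --port 5433 -c \
  "select token_address, to_address, value, block_timestamp
   from token_transfers
   where from_address = '0x0000000000000000000000000000000000000000'
   order by block_timestamp desc
   limit 100;"
```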