├── .gitignore
├── dags
│   └── dag.py
├── airflow_mlflow.md
├── README.md
├── src
│   ├── feature_extraction.py
│   ├── UNI_v3_funcs.py
│   ├── ML_Strategy.py
│   ├── ActiveStrategyFramework.py
│   ├── .ipynb_checkpoints
│   │   └── GetPoolData-checkpoint.py
│   └── GetPoolData.py
├── predict.py
├── train.py
├── requirements.txt
└── notebooks
    └── uncollected-fees.ipynb

/.gitignore:
--------------------------------------------------------------------------------
airflow
config
data
log
mlflow
myenv
.DS_Store
.idea/
--------------------------------------------------------------------------------

/dags/dag.py:
--------------------------------------------------------------------------------
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.utils.dates import days_ago

default_args = {
    "owner": "Amir",
    "start_date": days_ago(1),  # start one day in the past
    "retries": 5,  # try again on error
    # "retry_delay": datetime.timedelta(minutes=5),  # wait 5 minutes between retries
    "task_concurrency": 1  # 1 task at a time
}

pipelines = {'train': {"schedule": "0 0 * * 0"},  # retrain the model every week
             "predict": {"schedule": "*/15 * * * *"}}  # predict liquidity every 15 minutes


def init_dag(dag, task_id):
    with dag:
        t1 = BashOperator(
            task_id=f"{task_id}",
            bash_command=f'python3 /Users/amir1/PycharmProjects/uniswap-automation/{task_id}.py')
    return dag


for task_id, params in pipelines.items():
    dag = DAG(task_id,
              schedule_interval=params['schedule'],
              max_active_runs=1,
              default_args=default_args
              )
    init_dag(dag, task_id)
    globals()[task_id] = dag
--------------------------------------------------------------------------------

/airflow_mlflow.md:
--------------------------------------------------------------------------------
## How to run strategies
#### For effective and seamless execution, strategies can be launched with Airflow and MLflow

### VENV

1) Create a folder and copy the repo into it: `mkdir uniswap-strategies`
2) Create a venv (or conda env): `python -m venv myvenv`
3) Activate the venv: `source myvenv/bin/activate`
4) Install SQLite support for MLflow: `pip install pysqlite3`

### MLflow installation and launch

Installation:
`pip install mlflow`

Create a folder for MLflow files: `mkdir mlflow`

Set the path: `export MLFLOW_REGISTRY_URI=mlflow`

This might be helpful if you run into problems: https://www.mlflow.org/docs/latest/tracking.html#tracking-ui

Server launch (port 5050 is the tracking URI that `train.py` and `predict.py` expect):

`mlflow server --host localhost --port 5050 --backend-store-uri sqlite:///${MLFLOW_REGISTRY_URI}/mlflow.db --default-artifact-root ${MLFLOW_REGISTRY_URI}`

If you want to stop the server:

`ps -A | grep gunicorn`

`kill -9 $(ps aux | grep mlflow | awk '{print $2}')`

## Airflow installation and launch

Create a folder for Airflow files: `mkdir airflow`

Installation:

`pip install apache-airflow==2.0.1 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.0.1/constraints-3.7.txt"`

`export AIRFLOW_HOME=.`

DB initialisation: `airflow db init`
Also set in `airflow.cfg`:
`[webserver]
rbac = True
load_examples = False`

Create an Airflow user: `airflow users create --username Amir --firstname Amir --lastname Amir --role Admin --email ***@***.com`

Launch Airflow:

`airflow webserver -p 8080`

`airflow scheduler`

If you want to stop the server:

`ps -A | grep gunicorn`

`kill -9 $(ps aux | grep airflow | awk '{print $2}')`
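
### Quick check from Python

A minimal smoke test, as a sketch: it assumes the server above is running on port 5050 (the tracking URI used by `train.py` and `predict.py`), and the experiment name is arbitrary:

```python
import mlflow

mlflow.set_tracking_uri("http://localhost:5050")
mlflow.set_experiment("smoke-test")  # hypothetical experiment name
with mlflow.start_run():
    mlflow.log_metric("ping", 1.0)  # should appear in the UI at http://localhost:5050
```

If the run shows up in the MLflow UI, the backend store and artifact root are wired correctly and the DAGs can start logging.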
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
## Uniswap V3 strategy framework

This repository contains several Python scripts that simulate the performance of Uniswap v3 liquidity provision strategies and evaluate their risks. The main scripts of the package are:


1. [ActiveStrategyFramework.py](ActiveStrategyFramework.py) is the base code of the framework, which executes a strategy, conducting either back-testing simulations (the ```simulate_strategy``` function with historical swap data passed in) or a live implementation of the strategy.
2. [ML_Strategy.py](ML_Strategy.py) is a strategy that uses an AR(1)-GARCH(1,1) model to predict volatility alongside XGBoost to predict base-asset price movement.
3. [GetPoolData.py](GetPoolData.py) downloads the data necessary for the simulations.
4. [UNI_v3_funcs.py](UNI_v3_funcs.py) is a Python implementation of Uniswap v3's [liquidity math](https://github.com/Uniswap/uniswap-v3-periphery/blob/main/contracts/libraries/LiquidityAmounts.sol).

To illustrate potential usage, I have included a Jupyter notebook that shows how to use the framework:
- [4_ML_Strategy.ipynb](4_ML_Strategy_Example.ipynb)

I have constructed a flexible framework for active LP strategy simulations that uses **the full Uniswap v3 swap history** in order to improve the accuracy of fee-income estimates. Therefore, simulations are available for the period since Uniswap v3 was released (May 5th 2021 is when swap data starts to show up consistently).
## Live running with MLflow and Airflow

For live running I chose the following pattern:

- Every week the XGBoost model is retrained on newly arriving data
- Every 15 minutes we predict and reset the strategy

For instructions, see `airflow_mlflow.md`

## Data & simulating a different pool


**The Graph + Bitquery + Flipside Crypto**

The pattern to use these data sources can be seen in [2_AutoRegressive_Strategy_Example.ipynb](2_AutoRegressive_Strategy_Example.ipynb). The data sources are:

- **[The Graph](https://thegraph.com/legacy-explorer/subgraph/uniswap/uniswap-v3):** We obtain the full history of Uniswap v3 swaps from whatever pool we need, in order to accurately simulate the performance of the simulated strategy.
- **[Bitquery](https://graphql.bitquery.io/ide):** We obtain historical token prices from Uniswap v2 and v3.
- **[Flipside Crypto](https://app.flipsidecrypto.com/velocity):** We obtain the virtual liquidity of the pool at every block, which is used to approximate the fee income earned in the pool, as described in their [documentation](https://docs.flipsidecrypto.com/our-data/tables/uniswap-v3-tables/pool-stats).

*Instructions*
1. Obtain a free API key from [Bitquery](https://graphql.bitquery.io/ide).
2. Save it as a variable called ```BITQUERY_API_TOKEN``` in a ```config.py``` file in the directory where the ActiveStrategyFramework is stored (e.g. ```BITQUERY_API_TOKEN = XXXXXXXX```).
3. Generate a new Flipside Crypto query with the ```pool_address``` for the pair that you are interested in. Note that due to a 100,000-row limit, we generate two queries for the USDC/WETH 0.3% pool, which explains the ```BLOCK_ID``` condition used to split the data into reasonable chunks. A less active pool might not need this split. A minimal wiring example follows below.
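
## Minimal backtest example

A sketch of how the pieces fit together. The token addresses are mainnet USDC/WETH; the dates, cache file name, and strategy parameters are placeholders rather than recommendations, and the Flipside swap download is elided (see `train.py` for the full pipeline, including data cleaning):

```python
import src.GetPoolData as GetPoolData
import src.ML_Strategy as ML_Strategy
import src.ActiveStrategyFramework as ActiveStrategyFramework
from config import BITQUERY_API_TOKEN

USDC = '0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48'
WETH = '0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2'

# Minute-level prices from Bitquery ('usdc_weth_prices' is an arbitrary cache name)
price_data = GetPoolData.get_price_data_bitquery(USDC, WETH, '2022-01-15', '2022-01-28',
                                                 BITQUERY_API_TOKEN, 'usdc_weth_prices', True)

# Training the strategy fits the AR(1)-GARCH(1,1) and XGBoost models on price_data
strategy = ML_Strategy.ML_Strategy(price_data, alpha_param=1.0, tau_param=1.5,
                                   volatility_reset_ratio=0.7)  # placeholder parameters

# With Flipside swap data loaded (see train.py), a back-test is one call:
# sim = ActiveStrategyFramework.simulate_strategy(price_data['quotePrice'], swap_data, strategy,
#                                                 100000, 100000 * price_data['quotePrice'][0],
#                                                 0.003, 6, 18)  # USDC/WETH 0.3%: decimals 6 / 18
```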
--------------------------------------------------------------------------------

/src/feature_extraction.py:
--------------------------------------------------------------------------------
import pandas as pd


def calculateRSI(prices_data, n=14, today_price=None):
    """Calculate the Relative Strength Index of an asset.
    Args:
        prices_data (pandas dataframe object): prices data
        n (int, optional): look-back window length. Defaults to 14.
        today_price (float, optional): today's price, used to predict future RSI. Defaults to None.
    Returns:
        rsi (pandas series object): relative strength index
    """
    price = prices_data['quotePrice']

    # Append today's price if used for prediction
    # (Series.append works on the pinned pandas==1.2.2; it was removed in pandas 2.0)
    if today_price is not None:
        price = price.append(pd.Series({price.size: today_price}))

    delta = price.diff()
    delta = delta[1:]

    prices_up = delta.copy()
    prices_up[prices_up < 0] = 0
    prices_down = delta.copy()
    prices_down[prices_down > 0] = 0

    roll_up = prices_up.rolling(n).mean()
    roll_down = prices_down.abs().rolling(n).mean()

    relative_strength = roll_up / roll_down
    rsi = 100.0 - (100.0 / (1.0 + relative_strength))

    return rsi


def calculateMACD(prices_data):
    """Calculate the MACD of EMA15 and EMA30 of an asset
    Args:
        prices_data (dataframe): prices data
    Returns:
        macd (pandas series object): macd of the asset
        macd_signal (pandas series object): macd signal of the asset
    """
    ema15 = pd.Series(prices_data['quotePrice'].ewm(
        span=15, min_periods=15).mean())
    ema30 = pd.Series(prices_data['quotePrice'].ewm(
        span=30, min_periods=30).mean())

    macd = pd.Series(ema15 - ema30)
    macd_signal = pd.Series(macd.ewm(span=9, min_periods=9).mean())

    return macd, macd_signal


def extractAll(data):
    """Generate the most important technical indicators for an asset.
    Including -
        EMA9 - exponential moving average over 9 ticks
        SMA5 - simple moving average over 5 ticks
        SMA10 - simple moving average over 10 ticks
        SMA15 - simple moving average over 15 ticks
        SMA30 - simple moving average over 30 ticks
        RSI - Relative Strength Index
        MACD - Moving Average Convergence Divergence
        MACD Signal
    Args:
        data (pandas dataframe object): prices data
    Returns:
        pandas dataframe object: prices data with all the indicators
    """
    prices_data = data.copy()
    # Add moving averages
    prices_data['EMA_9'] = prices_data['quotePrice'].ewm(9).mean().shift()
    prices_data['SMA_5'] = prices_data['quotePrice'].rolling(5).mean().shift()
    prices_data['SMA_10'] = prices_data['quotePrice'].rolling(10).mean().shift()
    prices_data['SMA_15'] = prices_data['quotePrice'].rolling(15).mean().shift()
    prices_data['SMA_30'] = prices_data['quotePrice'].rolling(30).mean().shift()

    # RSI
    # prices_data['RSI'] = calculateRSI(prices_data).fillna(0)

    # MACD
    macd, macd_signal = calculateMACD(prices_data)
    prices_data['MACD'] = macd
    prices_data['MACD_signal'] = macd_signal

    # Shift the label (y) by one step so that today's indicators predict the next period's price
    prices_data['quotePrice'] = prices_data['quotePrice'].shift(-1)
    # Drop invalid samples - the rows where the moving averages exceed the available window
    prices_data = prices_data.iloc[33:]
    prices_data = prices_data[:-1]  # since we shifted by one
    # prices_data.index = range(len(prices_data))  # update indexes

    drop_cols = ['baseCurrency', 'quoteCurrency', 'time', 'quoteAmount', 'baseAmount', 'tradeAmount']
    prices_data = prices_data.drop(drop_cols, axis=1)
    return prices_data
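

if __name__ == "__main__":
    # Minimal sketch (synthetic data, not part of the pipeline): exercise the
    # indicator helpers on a random-walk price series shaped like Bitquery output.
    import numpy as np

    rng = np.random.default_rng(0)
    demo = pd.DataFrame({
        'quotePrice': 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 200))),
        'baseCurrency': 'WETH', 'quoteCurrency': 'USDC',
        'time': pd.date_range('2022-01-01', periods=200, freq='T'),
        'quoteAmount': 0.0, 'baseAmount': 0.0, 'tradeAmount': 0.0,
    })
    features = extractAll(demo)
    print(features.tail())  # EMA/SMA/MACD columns plus the shifted quotePrice label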
--------------------------------------------------------------------------------

/src/UNI_v3_funcs.py:
--------------------------------------------------------------------------------
# -*- coding: utf-8 -*-
"""
Created on Mon Jun 14 18:53:09 2021

@author: JNP
"""


"""liquiditymath"""
"""Python library to emulate the calculations done in liquiditymath.sol of the UNI_V3 periphery contract"""


# sqrtP: format X96 = int(1.0001**(tick/2)*(2**96))
# liquidity: int
# sqrtA = price for the lower tick
# sqrtB = price for the upper tick
"""get_amounts function"""
# Use the 'get_amounts' function to calculate amounts as a function of liquidity and price range


def get_amount0(sqrtA, sqrtB, liquidity, decimals):

    if sqrtA > sqrtB:
        (sqrtA, sqrtB) = (sqrtB, sqrtA)

    amount0 = (liquidity * 2 ** 96 * (sqrtB - sqrtA) / sqrtB / sqrtA) / 10 ** decimals

    return amount0


def get_amount1(sqrtA, sqrtB, liquidity, decimals):

    if sqrtA > sqrtB:
        (sqrtA, sqrtB) = (sqrtB, sqrtA)

    amount1 = liquidity * (sqrtB - sqrtA) / 2 ** 96 / 10 ** decimals

    return amount1


def get_amounts(tick, tickA, tickB, liquidity, decimal0, decimal1):

    sqrt = int(1.0001 ** (tick / 2) * (2 ** 96))
    sqrtA = int(1.0001 ** (tickA / 2) * (2 ** 96))
    sqrtB = int(1.0001 ** (tickB / 2) * (2 ** 96))

    if sqrtA > sqrtB:
        (sqrtA, sqrtB) = (sqrtB, sqrtA)

    if sqrt <= sqrtA:

        amount0 = get_amount0(sqrtA, sqrtB, liquidity, decimal0)
        return amount0, 0

    elif sqrtB > sqrt > sqrtA:
        amount0 = get_amount0(sqrt, sqrtB, liquidity, decimal0)
        amount1 = get_amount1(sqrtA, sqrt, liquidity, decimal1)
        return amount0, amount1

    else:
        amount1 = get_amount1(sqrtA, sqrtB, liquidity, decimal1)
        return 0, amount1


"""get token amounts relation"""
# Use this formula to calculate the amount of t0 based on the amount of t1 (required before calculating liquidity)
# relation = t1/t0


def amounts_relation(tick, tickA, tickB, decimals0, decimals1):

    sqrt = (1.0001 ** tick / 10 ** (decimals1 - decimals0)) ** (1 / 2)
    sqrtA = (1.0001 ** tickA / 10 ** (decimals1 - decimals0)) ** (1 / 2)
    sqrtB = (1.0001 ** tickB / 10 ** (decimals1 - decimals0)) ** (1 / 2)

    if sqrt == sqrtA or sqrt == sqrtB:
        relation = 0
        # print("There are 0 tokens on one side")
    else:
        relation = (sqrt - sqrtA) / ((1 / sqrt) - (1 / sqrtB))
    return relation


"""get_liquidity function"""
# Use the 'get_liquidity' function to calculate liquidity as a function of amounts and price range


def get_liquidity0(sqrtA, sqrtB, amount0, decimals):

    if sqrtA > sqrtB:
        (sqrtA, sqrtB) = (sqrtB, sqrtA)

    liquidity = int(
        amount0 / ((2 ** 96 * (sqrtB - sqrtA) / sqrtB / sqrtA) / 10 ** decimals)
    )
    return liquidity


def get_liquidity1(sqrtA, sqrtB, amount1, decimals):

    if sqrtA > sqrtB:
        (sqrtA, sqrtB) = (sqrtB, sqrtA)

    liquidity = int(amount1 / ((sqrtB - sqrtA) / 2 ** 96 / 10 ** decimals))
    return liquidity


def get_liquidity(tick, tickA, tickB, amount0, amount1, decimal0, decimal1):

    sqrt = int(1.0001 ** (tick / 2) * (2 ** 96))
    sqrtA = int(1.0001 ** (tickA / 2) * (2 ** 96))
    sqrtB = int(1.0001 ** (tickB / 2) * (2 ** 96))

    if sqrtA > sqrtB:
        (sqrtA, sqrtB) = (sqrtB, sqrtA)

    if sqrt <= sqrtA:
        liquidity0 = get_liquidity0(sqrtA, sqrtB, amount0, decimal0)
        return liquidity0
    elif sqrtB > sqrt > sqrtA:
        liquidity0 = get_liquidity0(sqrt, sqrtB, amount0, decimal0)

        liquidity1 = get_liquidity1(sqrtA, sqrt, amount1, decimal1)

        liquidity = liquidity0 if liquidity0 < liquidity1 else liquidity1
        return liquidity
    else:
        liquidity1 = get_liquidity1(sqrtA, sqrtB, amount1, decimal1)
        return liquidity1
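

if __name__ == "__main__":
    # Minimal sketch (hypothetical USDC/WETH-style pool, decimals 6/18): round-trip
    # amounts -> liquidity -> amounts for a position straddling the current tick.
    tick, tick_a, tick_b = 200000, 199000, 201000  # placeholder ticks
    liq = get_liquidity(tick, tick_a, tick_b, 1000.0, 1.0, 6, 18)
    amt0, amt1 = get_amounts(tick, tick_a, tick_b, liq, 6, 18)
    # Since get_liquidity takes the min of the two single-sided values, the
    # returned amounts are at most the inputs (1000.0 of token0, 1.0 of token1)
    print(liq, amt0, amt1)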
--------------------------------------------------------------------------------

/predict.py:
--------------------------------------------------------------------------------
import logging
import os

import numpy as np
import pandas as pd
import yaml

import mlflow
import src.GetPoolData as GetPoolData
import src.ML_Strategy as ML_Strategy
import src.ActiveStrategyFramework as ActiveStrategyFramework
from datetime import datetime, timedelta

config_path = os.path.join('/Users/amir1/PycharmProjects/uniswap-automation/config/params_all.yaml')
position_path = os.path.join('/Users/amir1/PycharmProjects/uniswap-automation/config/position.yaml')

config = yaml.load(open(config_path), Loader=yaml.Loader)['predict']
position_data = yaml.load(open(position_path), Loader=yaml.Loader)

os.chdir(config['dir_folder'])

logging.basicConfig(filename='log/app.log', filemode='w+', format='%(asctime)s : %(levelname)s : %(message)s',
                    level=logging.DEBUG)


def log_strategy_step(current_observation, model):
    mlflow.set_tracking_uri("http://localhost:5050")
    mlflow.set_experiment(config['name_experiment'])
    with mlflow.start_run():
        dict_metrics = model.dict_components(current_observation)
        dict_metrics.pop('time')
        dict_metrics.pop('reset_reason')
        mlflow.log_metrics(dict_metrics)
    mlflow.end_run()


def main():
    """
    Load fresh price data, run the strategy for the current timepoint,
    save the updated position state to position.yaml and log metrics to MLflow
    """

    # Load the most recently saved models from MLflow
    mlflow.set_tracking_uri("http://localhost:5050")
    #model_uri_xgb = f"models:/{config['xgboost_model']}/{config['version_xgb']}"

    #model_xgb = mlflow.xgboost.load_model(model_uri_xgb)
    #price_data = GetPoolData.get_price_data_bitquery(**config['price_data'])
    #price_data = price_data[config['price_data']['date_begin']+" 00:00:00+00:00": config['price_data']['date_end']+" 00:00:00+00:00"]
    # Get data for the required time window (the last day)
    now = datetime.now()  # - timedelta(days=0)
    price_date_end = now.strftime("%Y-%m-%d")  # TODO: add "%H:%M:%S" in the future
    day_before = datetime.now() - timedelta(days=1)
    price_date_begin = day_before.strftime("%Y-%m-%d")  # TODO: add "%H:%M:%S" in the future
    DOWNLOAD_DATA = True
    price_data = GetPoolData.get_price_data_bitquery(config['price_data']['token_0_address'], config['price_data']['token_1_address'], price_date_begin, price_date_end,
                                                     config['price_data']['api_token'], config['price_data']['file_name'], DOWNLOAD_DATA)
    #print(price_data)
    strategy = ML_Strategy.ML_Strategy(price_data, **config['model'])

    #yaml_file = yaml.load(open(config_path), Loader=yaml.Loader)
    #yaml_file['predict']['last_timepoint_updated'] = price_data_end
    #with open(config_path, 'w') as fp:
    #    yaml.dump(yaml_file, fp, encoding='UTF-8', allow_unicode=True, default_flow_style=False)

    # Every 15 minutes: pull fresh data, compute the current parameters (liquidity, price),
    # load them into a StrategyObservation and compute the new parameters

    # Fetching the current state from GraphQL
    #price_data = GetPoolData.get_current_state()
    #yaml_file = yaml.safe_load(open(config_path))
    #yaml_file['predict']['current_position_info']['current_price'] = ((int(price_data['data']['pool']['sqrtPrice']) /
    #    2**96)**2) * 10**(config['current_position_info']['decimals_0']-config['current_position_info']['decimals_1'])

    # Create the StrategyObservation

    #config = yaml.safe_load(open(config_path))['predict']
    #os.chdir(config['dir_folder'])

    current_observation = ActiveStrategyFramework.StrategyObservation(strategy_in=strategy, current_price=price_data['quotePrice'][-1],
                                                                      **position_data['current_position_info'], timepoint=price_data.index[-1])
    # Update the YAML position parameters with the new values
    res_dict = strategy.dict_components(current_observation)
    #print(current_observation.liquidity_ranges)
    #print(current_observation.strategy_info)
    yaml_file = yaml.load(open(position_path), Loader=yaml.Loader)
    yaml_file['current_position_info']['liquidity_in_0'] = current_observation.liquidity_in_0
    yaml_file['current_position_info']['liquidity_in_1'] = current_observation.liquidity_in_1
    yaml_file['current_position_info']['token_0_left_over'] = current_observation.token_0_left_over
    yaml_file['current_position_info']['token_1_left_over'] = current_observation.token_1_left_over
    yaml_file['current_position_info']['token_0_fees_uncollected'] = current_observation.token_0_fees_uncollected
    yaml_file['current_position_info']['token_1_fees_uncollected'] = current_observation.token_1_fees_uncollected
    yaml_file['current_position_info']['liquidity_ranges'] = current_observation.liquidity_ranges
    yaml_file['current_position_info']['strategy_info'] = current_observation.strategy_info
    yaml_file['last_timepoint_updated'] = str(price_data.index[-1])
    with open(position_path, 'w') as fp:
        yaml.dump(yaml_file, fp, encoding='UTF-8', allow_unicode=True, default_flow_style=False)

    log_strategy_step(current_observation, strategy)


if __name__ == "__main__":
    main()
--------------------------------------------------------------------------------

/train.py:
--------------------------------------------------------------------------------
import logging
import os

import yaml
from mlflow.tracking import MlflowClient
from mlflow.models.signature import infer_signature

import mlflow
import src.GetPoolData as GetPoolData
import src.ML_Strategy as ML_Strategy
import src.Strategy_Wrapper as Strategy_Wrapper
import src.ActiveStrategyFramework as ActiveStrategyFramework

import pandas as pd
import numpy as np

config_path = os.path.join('/Users/amir1/PycharmProjects/uniswap-automation/config/params_all.yaml')
config = yaml.safe_load(open(config_path))['train']
os.chdir(config['dir_folder'])

logging.basicConfig(filename='log/app.log', filemode='w+', format='%(asctime)s : %(levelname)s : %(message)s',
                    level=logging.DEBUG)


############
def get_version_model(config_name, client):
    """
    Get the latest version of the model from MLflow
    """
    dict_push = {}
    for count, value in enumerate(client.search_model_versions(f"name='{config_name}'")):
        # All versions of the model
        dict_push[count] = value
    return dict(list(dict_push.items())[-1][1])['version']
########


def main():

    swap_data = GetPoolData.get_pool_data_flipside(**config['swap_data'])
    price_data = GetPoolData.get_price_data_bitquery(**config['price_data'])

    # little preprocessing

    DATE_BEGIN = pd.to_datetime('2022-01-15 00:00PM', utc=True)
    DATE_END = pd.to_datetime('2022-01-28 00:00PM', utc=True)
    z_score_cutoff = 5
    window_size = 60

    # Clean the data used for the strategy simulation
    STRATEGY_FREQUENCY = 'M'
    simulate_data_filtered = ActiveStrategyFramework.aggregate_price_data(price_data, STRATEGY_FREQUENCY)
    simulate_data_filtered_roll = simulate_data_filtered.quotePrice.rolling(window=window_size)
    simulate_data_filtered['roll_median'] = simulate_data_filtered_roll.median()
    roll_dev = np.abs(simulate_data_filtered.quotePrice - simulate_data_filtered.roll_median)
    simulate_data_filtered['median_abs_dev'] = 1.4826 * roll_dev.rolling(window=window_size).median()
    outlier_indices = np.abs(simulate_data_filtered.quotePrice - simulate_data_filtered.roll_median) >= z_score_cutoff * \
                      simulate_data_filtered['median_abs_dev']
    simulate_data_price = simulate_data_filtered[~outlier_indices]['quotePrice'][DATE_BEGIN:DATE_END]

    # Data for statistical analysis (AGGREGATED_MINUTES frequency data)
    #STAT_MODEL_FREQUENCY = 'H'  # forecast returns at a daily frequency

    # Initial Position Details
    INITIAL_TOKEN_0 = 100000
    INITIAL_TOKEN_1 = INITIAL_TOKEN_0 * simulate_data_price[0]
    INITIAL_POSITION_VALUE = 2 * INITIAL_TOKEN_0
    FEE_TIER = 0.003

    # Set decimals according to your pool
    DECIMALS_0 = 6
    DECIMALS_1 = 18
    swap_data['virtual_liquidity'] = swap_data['VIRTUAL_LIQUIDITY_ADJUSTED'] * (10 ** ((DECIMALS_1 + DECIMALS_0) / 2))
    swap_data['traded_in'] = swap_data.apply(lambda x: -x['amount0'] if (x['amount0'] < 0) else -x['amount1'],
                                             axis=1).astype(float)
    swap_data['traded_out'] = swap_data.apply(lambda x: x['amount0'] if (x['amount0'] > 0) else x['amount1'],
                                              axis=1).astype(float)

    # little preprocessing

    # MLflow tracking
    mlflow.set_tracking_uri("http://localhost:5050")
    mlflow.set_experiment(config['name_experiment'])
    with mlflow.start_run():
        strategy = ML_Strategy.ML_Strategy(price_data, **config['model'])
        simulated_strategy = ActiveStrategyFramework.simulate_strategy(simulate_data_price, swap_data, strategy,
                                                                       INITIAL_TOKEN_0, INITIAL_TOKEN_1, FEE_TIER,
                                                                       DECIMALS_0, DECIMALS_1)
        sim_data = ActiveStrategyFramework.generate_simulation_series(simulated_strategy, strategy)
        strat_result = ActiveStrategyFramework.analyze_strategy(sim_data, frequency=STRATEGY_FREQUENCY)

        # logging model and params
        mlflow.log_params(config['model'])
        mlflow.log_metrics(strat_result)

        #wrapped_model = Strategy_Wrapper(strategy)
        #signature = infer_signature()

        #mlflow.pyfunc.log_model("ml_model", python_model=wrapped_model)

        mlflow.xgboost.log_model(strategy.model,
                                 artifact_path="model_xgb",
                                 registered_model_name=f"{config['model_xgb']}")

        mlflow.log_artifact(local_path='./train.py',
                            artifact_path='code')
    mlflow.end_run()

    # Get the latest model version
    client = MlflowClient()
    last_version_xgb = get_version_model(config['model_xgb'], client)

    yaml_file = yaml.safe_load(open(config_path))
    yaml_file['predict']["version_xgb"] = int(last_version_xgb)
    yaml_file['predict']["model"]["model_link"] = f"models:/{config['model_xgb']}/{last_version_xgb}"

    with open(config_path, 'w') as fp:
        yaml.dump(yaml_file, fp, encoding='UTF-8', allow_unicode=True)


if __name__ == "__main__":
    main()
--------------------------------------------------------------------------------

/requirements.txt:
--------------------------------------------------------------------------------
alembic==1.4.1
anyio==3.5.0
apache-airflow==2.0.1
apache-airflow-providers-ftp==1.0.1
apache-airflow-providers-google==1.0.0
apache-airflow-providers-http==1.1.0
apache-airflow-providers-imap==1.0.1
apache-airflow-providers-postgres==1.0.1
apache-airflow-providers-sqlite==1.0.1
apache-airflow-providers-ssh==1.3.0
apispec==3.3.2
appnope==0.1.3
argcomplete==1.12.2
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
astroid==2.5.6
attrs==20.3.0
Babel==2.9.0
backcall==0.2.0
bash==0.6
bcrypt==3.2.0
beautifulsoup4==4.11.1
bleach==5.0.0
cached-property==1.5.2
cachetools==4.2.1
cattrs==1.2.0
certifi==2020.12.5
cffi==1.14.5
chardet==3.0.4
click==8.1.2
clickclick==20.10.2
cloudpickle==1.6.0
colorama==0.4.4
colorlog==4.7.2
commonmark==0.9.1
connexion==2.7.0
croniter==0.3.37
cryptography==3.4.5
databricks-cli==0.14.3
dataclasses-json==0.5.2
debugpy==1.6.0
decorator==5.1.1
defusedxml==0.6.0
dill==0.3.3
dnspython==1.16.0
docker==5.0.0
docutils==0.16
email-validator==1.1.2
entrypoints==0.3
fastjsonschema==2.15.3
Flask==2.1.0
Flask-AppBuilder==3.1.1
Flask-Babel==1.0.0
Flask-Caching==1.9.0
Flask-JWT-Extended==3.25.1
Flask-Login==0.4.1
Flask-OpenID==1.2.5
Flask-SQLAlchemy==2.4.4
Flask-WTF==0.14.3
gitdb==4.0.7
GitPython==3.1.14
google-ads==7.0.0
google-api-core==1.26.3
google-api-python-client==1.12.8
google-auth==1.29.0
google-auth-httplib2==0.1.0
google-auth-oauthlib==0.4.4
google-cloud-automl==1.0.1
google-cloud-bigquery==2.13.1
google-cloud-bigquery-datatransfer==1.1.1
google-cloud-bigquery-storage==2.4.0
google-cloud-bigtable==1.7.0
google-cloud-container==1.0.1
google-cloud-core==1.6.0
google-cloud-datacatalog==0.7.0
google-cloud-dataproc==1.1.1
google-cloud-dlp==1.0.0
google-cloud-kms==1.4.0
google-cloud-language==1.3.0
google-cloud-logging==1.15.1
google-cloud-memcache==0.3.0
google-cloud-monitoring==1.1.0
google-cloud-os-login==1.0.0
google-cloud-pubsub==1.7.0
google-cloud-redis==1.0.0
google-cloud-secret-manager==1.0.0
google-cloud-spanner==1.19.1
google-cloud-speech==1.3.2
google-cloud-storage==1.37.1
google-cloud-tasks==1.5.0
google-cloud-texttospeech==1.0.1
google-cloud-translate==1.7.0
google-cloud-videointelligence==1.16.1
google-cloud-vision==1.0.0
google-crc32c==1.1.2
google-resumable-media==1.2.0
googleapis-common-protos==1.53.0
graphviz==0.16
greenlet==1.0.0
grpc-google-iam-v1==0.12.3
grpcio==1.37.0
grpcio-gcp==0.2.2
gunicorn==19.10.0
httplib2==0.19.1
idna==2.10
importlib-metadata==4.11.3
importlib-resources==1.5.0
inflection==0.5.1
iniconfig==1.1.1
ipykernel==6.13.0
ipython==7.32.0
ipython-genutils==0.2.0
iso8601==0.1.14
isodate==0.6.0
isort==5.8.0
itsdangerous==2.1.2
jedi==0.18.1
Jinja2==3.0.3
joblib==1.0.1
json-merge-patch==0.2
json5==0.9.6
jsonschema==3.2.0
jupyter-client==7.3.0
jupyter-core==4.10.0
jupyter-server==1.16.0
jupyterlab==3.3.4
jupyterlab-pygments==0.2.2
jupyterlab-server==2.13.0
lazy-object-proxy==1.4.3
libcst==0.3.18
lockfile==0.12.2
Mako==1.1.4
Markdown==3.3.3
MarkupSafe==2.1.1
marshmallow==3.10.0
marshmallow-enum==1.5.1
marshmallow-oneofschema==2.1.0
marshmallow-sqlalchemy==0.23.1
matplotlib-inline==0.1.3
mccabe==0.6.1
mistune==0.8.4
mlflow==1.15.0
mypy-extensions==0.4.3
natsort==7.1.1
nbclassic==0.3.7
nbclient==0.6.0
nbconvert==6.5.0
nbformat==5.3.0
nest-asyncio==1.5.5
nltk==3.6.2
notebook==6.4.11
notebook-shim==0.1.0
numpy==1.20.1
oauthlib==3.1.0
openapi-spec-validator==0.2.9
packaging==20.9
pandas==1.2.2
pandas-gbq==0.15.0
pandocfilters==1.5.0
paramiko==2.7.2
parso==0.8.3
pendulum==2.1.2
pexpect==4.8.0
pickleshare==0.7.5
pluggy==0.13.1
prison==0.1.3
prometheus-client==0.10.1
prometheus-flask-exporter==0.18.1
prompt-toolkit==3.0.29
proto-plus==1.18.1
protobuf==3.15.8
psutil==5.8.0
psycopg2-binary==2.8.6
ptyprocess==0.7.0
py==1.10.0
pyarrow==3.0.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
pydata-google-auth==1.2.0
Pygments==2.8.0
PyJWT==1.7.1
pylint==2.8.2
pymystem3==0.2.0
PyNaCl==1.4.0
pyOpenSSL==20.0.1
pyparsing==2.4.7
pyrsistent==0.17.3
pysftp==0.2.9
pytest==6.2.3
python-daemon==2.2.4
python-dateutil==2.8.2
python-editor==1.0.4
python-nvd3==0.15.0
python-slugify==4.0.1
python-youtube==0.8.0
python3-openid==3.2.0
pytz==2020.5
pytzdata==2020.1
PyYAML==5.4.1
pyzmq==22.3.0
querystring-parser==1.2.4
regex==2021.4.4
requests==2.25.1
requests-oauthlib==1.3.0
rich==9.2.0
rsa==4.7.2
scikit-learn==0.24.1
scipy==1.6.2
Send2Trash==1.8.0
setproctitle==1.2.2
six==1.15.0
smmap==4.0.0
sniffio==1.2.0
soupsieve==2.3.2.post1
SQLAlchemy==1.3.23
SQLAlchemy-JSONField==1.0.0
SQLAlchemy-Utils==0.36.8
sqlparse==0.4.1
sshtunnel==0.1.5
stringcase==1.2.0
swagger-ui-bundle==0.0.8
tabulate==0.8.7
tenacity==6.2.0
termcolor==1.1.0
terminado==0.13.3
text-unidecode==1.3
threadpoolctl==2.1.0
tinycss2==1.1.1
toml==0.10.2
tornado==6.1
tqdm==4.60.0
traitlets==5.1.1
typed-ast==1.4.3
typing-extensions==3.7.4.3
typing-inspect==0.6.0
unicodecsv==0.14.1
uritemplate==3.0.1
urllib3==1.25.11
wcwidth==0.2.5
webencodings==0.5.1
websocket-client==0.58.0
Werkzeug==2.0.0
wrapt==1.12.1
WTForms==2.3.3
zipp==3.4.0
--------------------------------------------------------------------------------

/src/ML_Strategy.py:
--------------------------------------------------------------------------------
import pandas as pd
import numpy as np
import math
import arch
from src import UNI_v3_funcs
from src import ActiveStrategyFramework
import copy
import xgboost as xgb
from src.feature_extraction import *
import mlflow


# ML strategy


class ML_Strategy():
    def __init__(self, model_data, alpha_param, tau_param, volatility_reset_ratio, tokens_outside_reset=.05,
                 data_frequency='M', default_width=.5, days_ar_model=180, return_forecast_cutoff=0.15,
                 z_score_cutoff=5, gamma=0.01, learning_rate=0.05, max_depth=8, n_estimators=400, load_model=False, model_link=None):

        # Allow for different input data frequencies, always get a 1-day-ahead forecast
        # Model data frequency is expressed in minutes

        if data_frequency == 'D':
            self.annualization_factor = 365 ** .5
            self.resample_option = '1D'
            self.window_size = 15
        elif data_frequency == 'H':
            self.annualization_factor = (24 * 365) ** .5
            self.resample_option = '1H'
            self.window_size = 30
        elif data_frequency == 'M':
            self.annualization_factor = (60 * 24 * 365) ** .5
            self.resample_option = '1 min'
            self.window_size = 60

        self.alpha_param = alpha_param
        self.tau_param = tau_param
        self.volatility_reset_ratio = volatility_reset_ratio
        self.data_frequency = data_frequency
        self.tokens_outside_reset = tokens_outside_reset
        self.default_width = default_width
        self.return_forecast_cutoff = return_forecast_cutoff
        self.days_ar_model = days_ar_model
        self.z_score_cutoff = z_score_cutoff
        self.window_size = 60  # NOTE: overrides the frequency-specific window_size set above

        self.gamma = gamma
        self.learning_rate = learning_rate
        self.max_depth = max_depth
        self.n_estimators = n_estimators

        self.garch_data = self.clean_data_for_garch(model_data)
        self.xgboost_data = self.clear_data_for_xgboost(model_data)
        if load_model:
            mlflow.set_tracking_uri("http://localhost:5050")
            self.model = mlflow.xgboost.load_model(model_link)
        else:
            self.model = self.train_model()

    #####################################
    # Train the XGBoost return-forecast model
    #####################################

    def train_model(self):
        X_train = self.xgboost_data.drop(['quotePrice'], axis=1)
        y_train = self.xgboost_data['quotePrice'].copy()
        parameters = {'gamma': self.gamma, 'learning_rate': self.learning_rate,
                      'max_depth': self.max_depth, 'n_estimators': self.n_estimators}
        model = xgb.XGBRegressor(**parameters, objective='reg:squarederror')
        model.fit(X_train, y_train, verbose=False)

        return model

    def clean_data_for_garch(self, data_in):
        data_filled = ActiveStrategyFramework.fill_time(data_in)

        # Filter according to the Median Absolute Deviation (MAD)
        # 1. Generate the rolling median
        data_filled_rolling = data_filled.quotePrice.rolling(window=self.window_size)
        data_filled['roll_median'] = data_filled_rolling.median()

        # 2. Compute the rolling median absolute deviation of the price from its rolling
        # median; 1.4826 scales the MAD to match the standard deviation under a Gaussian
        roll_dev = np.abs(data_filled.quotePrice - data_filled.roll_median)
        data_filled['median_abs_dev'] = 1.4826 * roll_dev.rolling(window=self.window_size).median()

        # 3. Identify outliers using MAD
        outlier_indices = np.abs(data_filled.quotePrice - data_filled.roll_median) >= self.z_score_cutoff * data_filled[
            'median_abs_dev']

        # impute
        # data_filled['quotePrice'] = np.where(outlier_indices.values == 0, data_filled['quotePrice'].values, data_filled['roll_median'].values)

        # drop
        data_filled = data_filled[~outlier_indices]

        return data_filled

    def clear_data_for_xgboost(self, data_in):
        data_filled = ActiveStrategyFramework.fill_time(data_in)
        data_filled = extractAll(data_filled)
        return data_filled

    def generate_model_forecast(self, timepoint):

        # Compute returns at data_frequency frequency, starting at the current timepoint and looking backwards
        current_data = self.garch_data.loc[:timepoint].resample(self.resample_option, closed='right', label='right',
                                                                origin=timepoint).last()
        current_data['price_return'] = current_data['quotePrice'].pct_change()
        current_data = current_data.dropna(axis=0, subset=['price_return'])
        ar_model = arch.univariate.ARX(current_data.price_return[(
            current_data.index >= (timepoint - pd.Timedelta(str(self.days_ar_model) + ' days')))].to_numpy(),
                                       lags=1, rescale=True)
        ar_model.volatility = arch.univariate.GARCH(p=1, q=1)

        res = ar_model.fit(update_freq=0, disp="off")
        scale = res.scale

        forecasts = res.forecast(horizon=1, reindex=False)

        return_forecast = forecasts.mean.to_numpy()[0][-1] / scale
        sd_forecast = (forecasts.variance.to_numpy()[0][-1] / np.power(res.scale, 2)) ** 0.5 * self.annualization_factor

        # XGBoost return forecast: overrides the ARX mean forecast above,
        # while the GARCH sd_forecast is kept as the volatility forecast
        current_data_xg = self.xgboost_data.loc[:timepoint].resample(self.resample_option, closed='right',
                                                                     label='right', origin=timepoint).last()

        X = current_data_xg.drop(['quotePrice'], axis=1)
        y = current_data_xg.iloc[-1]['quotePrice']
        features = X.iloc[-1:, :]
        predict_features = np.array(features).reshape(-1, 7)  # 7 features: EMA_9, SMA_5/10/15/30, MACD, MACD_signal
        return_forecast = self.model.predict(predict_features)[0] / y - 1.0

        result_dict = {'return_forecast': return_forecast, 'sd_forecast': sd_forecast}

        return result_dict

    #####################################
    # Check if a rebalance is necessary.
    # If it is, remove the liquidity and set new ranges
    #####################################

    def check_strategy(self, current_strat_obs):

        model_forecast = None
        LIMIT_ORDER_BALANCE = current_strat_obs.liquidity_ranges[1]['token_0'] + current_strat_obs.liquidity_ranges[1][
            'token_1'] / current_strat_obs.price
        BASE_ORDER_BALANCE = current_strat_obs.liquidity_ranges[0]['token_0'] + current_strat_obs.liquidity_ranges[0][
            'token_1'] / current_strat_obs.price

        if 'last_vol_check' not in current_strat_obs.strategy_info:
            current_strat_obs.strategy_info['last_vol_check'] = current_strat_obs.time

        #####################################
        #
        # This strategy rebalances in three scenarios:
        # 1. Leave Reset Range
        # 2. Volatility has dropped (volatility_reset_ratio)
        # 3. Tokens outside of pool greater than 5% of value of LP position
        #
        #####################################

        #######################
        # 1. Leave Reset Range
        #######################
        LEFT_RANGE_LOW = current_strat_obs.price < current_strat_obs.strategy_info['reset_range_lower']
        LEFT_RANGE_HIGH = current_strat_obs.price > current_strat_obs.strategy_info['reset_range_upper']

        #######################
        # 2. Volatility has dropped
        #######################
        # Rebalance if volatility has gone down significantly
        # When volatility increases the reset range will be hit
        # Check every hour (60 minutes)

        ar_check_frequency = 60
        time_since_reset = current_strat_obs.time - current_strat_obs.strategy_info['last_vol_check']

        VOL_REBALANCE = False
        if (time_since_reset.total_seconds() / 60) >= ar_check_frequency:

            current_strat_obs.strategy_info['last_vol_check'] = current_strat_obs.time
            model_forecast = self.generate_model_forecast(current_strat_obs.time)

            if model_forecast['sd_forecast'] / current_strat_obs.liquidity_ranges[0][
                'volatility'] <= self.volatility_reset_ratio:
                VOL_REBALANCE = True
            else:
                VOL_REBALANCE = False

        #######################
        # 3. Tokens outside of pool greater than 5% of value of LP position
        #######################

        left_over_balance = current_strat_obs.token_0_left_over + current_strat_obs.token_1_left_over / current_strat_obs.price

        if (left_over_balance > self.tokens_outside_reset * (LIMIT_ORDER_BALANCE + BASE_ORDER_BALANCE)):
            TOKENS_OUTSIDE_LARGE = True
        else:
            TOKENS_OUTSIDE_LARGE = False

        if 'force_initial_reset' in current_strat_obs.strategy_info:
            if current_strat_obs.strategy_info['force_initial_reset']:
                INITIAL_RESET = True
                current_strat_obs.strategy_info['force_initial_reset'] = False
            else:
                INITIAL_RESET = False
        else:
            INITIAL_RESET = False

        # If a reset is necessary
        if ((((LEFT_RANGE_LOW | LEFT_RANGE_HIGH) | VOL_REBALANCE) | TOKENS_OUTSIDE_LARGE) | INITIAL_RESET):
            current_strat_obs.reset_point = True

            if (LEFT_RANGE_LOW | LEFT_RANGE_HIGH):
                current_strat_obs.reset_reason = 'exited_range'
            elif VOL_REBALANCE:
                current_strat_obs.reset_reason = 'vol_rebalance'
            elif TOKENS_OUTSIDE_LARGE:
                current_strat_obs.reset_reason = 'tokens_outside_large'
            elif INITIAL_RESET:
                current_strat_obs.reset_reason = 'initial_reset'

            # Remove liquidity and claim fees
            current_strat_obs.remove_liquidity()

            # Reset liquidity
            liq_range, strategy_info = self.set_liquidity_ranges(current_strat_obs, model_forecast)
            return liq_range, strategy_info
        else:
            return current_strat_obs.liquidity_ranges, current_strat_obs.strategy_info

    def set_liquidity_ranges(self, current_strat_obs, model_forecast=None):

        ###########################################################
        # STEP 1: Do the calculations required to determine the base liquidity bounds
        ###########################################################

        # Fit model
        if model_forecast is None:
            model_forecast = self.generate_model_forecast(current_strat_obs.time)

        if current_strat_obs.strategy_info is None:
            strategy_info_here = dict()
        else:
            strategy_info_here = copy.deepcopy(current_strat_obs.strategy_info)

        # Limit the return prediction to a return_forecast_cutoff % change
        if np.abs(model_forecast['return_forecast']) > self.return_forecast_cutoff:
            model_forecast['return_forecast'] = np.sign(model_forecast['return_forecast']) * self.return_forecast_cutoff

        # If there was an error in the volatility computation, use the last value or the overall standard deviation of returns
        if np.isnan(model_forecast['sd_forecast']):
            if hasattr(current_strat_obs, 'liquidity_ranges'):
                model_forecast['sd_forecast'] = current_strat_obs.liquidity_ranges[0]['volatility']
            else:
                model_forecast['sd_forecast'] = self.garch_data.quotePrice.pct_change().std()

        target_price = (1 + model_forecast['return_forecast']) * current_strat_obs.price

        # Set the base range
        base_range_lower = current_strat_obs.price * (
                1 + model_forecast['return_forecast'] - self.alpha_param * model_forecast['sd_forecast'])
        base_range_upper = current_strat_obs.price * (
                1 + model_forecast['return_forecast'] + self.alpha_param * model_forecast['sd_forecast'])

        # Set the reset range
        strategy_info_here['reset_range_lower'] = current_strat_obs.price * (
                1 + model_forecast['return_forecast'] - self.tau_param * self.alpha_param * model_forecast[
            'sd_forecast'])
        strategy_info_here['reset_range_upper'] = current_strat_obs.price * (
                1 + model_forecast['return_forecast'] + self.tau_param * self.alpha_param * model_forecast[
            'sd_forecast'])

        # If volatility is high enough that the lower reset bound is negative, set it at default_width of the current price
        if strategy_info_here['reset_range_lower'] < 0.0:
            strategy_info_here['reset_range_lower'] = self.default_width * current_strat_obs.price

        save_ranges = []

        ###########################################################
        # STEP 2: Set Base Liquidity
        ###########################################################

        # Store each token amount supplied to the pool
        total_token_0_amount = current_strat_obs.liquidity_in_0
        total_token_1_amount = current_strat_obs.liquidity_in_1

        # Lower Range
        if base_range_lower > 0.0:
            TICK_A_PRE = math.log(current_strat_obs.decimal_adjustment * base_range_lower, 1.0001)
            TICK_A = int(math.floor(TICK_A_PRE / current_strat_obs.tickSpacing) * current_strat_obs.tickSpacing)
        else:
            # If the lower end of the base range is negative, fix it at 0.0
            base_range_lower = 0.0
            TICK_A = math.ceil(
                math.log((2 ** -128), 1.0001) / current_strat_obs.tickSpacing) * current_strat_obs.tickSpacing

        # Upper Range
        TICK_B_PRE = math.log(current_strat_obs.decimal_adjustment * base_range_upper, 1.0001)
        TICK_B = int(math.floor(TICK_B_PRE / current_strat_obs.tickSpacing) * current_strat_obs.tickSpacing)

        # Make sure TICK_A < TICK_B. If not, make the range one tick wide
        if TICK_A == TICK_B:
            TICK_B = TICK_A + current_strat_obs.tickSpacing

        liquidity_placed_base = int(UNI_v3_funcs.get_liquidity(current_strat_obs.price_tick_current, TICK_A, TICK_B,
                                                               current_strat_obs.liquidity_in_0,
                                                               current_strat_obs.liquidity_in_1,
                                                               current_strat_obs.decimals_0,
                                                               current_strat_obs.decimals_1))

        base_amount_0_placed, base_amount_1_placed = UNI_v3_funcs.get_amounts(current_strat_obs.price_tick_current,
                                                                              TICK_A, TICK_B, liquidity_placed_base,
                                                                              current_strat_obs.decimals_0,
                                                                              current_strat_obs.decimals_1)

        total_token_0_amount -= base_amount_0_placed
        total_token_1_amount -= base_amount_1_placed

        base_liq_range = {'price': current_strat_obs.price,
                          'target_price': target_price,
                          'lower_bin_tick': TICK_A,
                          'upper_bin_tick': TICK_B,
                          'lower_bin_price': base_range_lower,
                          'upper_bin_price': base_range_upper,
                          'time': current_strat_obs.time,
                          'token_0': base_amount_0_placed,
                          'token_1': base_amount_1_placed,
                          'position_liquidity': liquidity_placed_base,
                          'volatility': model_forecast['sd_forecast'],
                          'reset_time': current_strat_obs.time,
                          'return_forecast': model_forecast['return_forecast']}

        save_ranges.append(base_liq_range)

        ###########################
        # Step 3: Set Limit Position
        ############################

        limit_amount_0 = total_token_0_amount
        limit_amount_1 = total_token_1_amount

        token_0_limit = limit_amount_0 * current_strat_obs.price > limit_amount_1
        # Place a single-sided limit order with whichever token has the higher value
        if token_0_limit:
            # Place Token 0
            limit_amount_1 = 0.0
            limit_range_lower = current_strat_obs.price
            limit_range_upper = base_range_upper
        else:
            # Place Token 1
            limit_amount_0 = 0.0
            limit_range_lower = base_range_lower
            limit_range_upper = current_strat_obs.price

        if limit_range_lower > 0.0:
            TICK_A_PRE = math.log(current_strat_obs.decimal_adjustment * limit_range_lower, 1.0001)
            TICK_A = int(math.floor(TICK_A_PRE / current_strat_obs.tickSpacing) * current_strat_obs.tickSpacing)
        else:
            limit_range_lower = 0.0
            TICK_A = math.ceil(
                math.log((2 ** -128), 1.0001) / current_strat_obs.tickSpacing) * current_strat_obs.tickSpacing

        TICK_B_PRE = math.log(current_strat_obs.decimal_adjustment * limit_range_upper, 1.0001)
        TICK_B = int(math.floor(TICK_B_PRE / current_strat_obs.tickSpacing) * current_strat_obs.tickSpacing)

        if token_0_limit:
            # If token 0 is in the limit order, make sure the lower tick is above the active tick
            if TICK_A <= current_strat_obs.price_tick_current:
                TICK_A = TICK_A + current_strat_obs.tickSpacing
        else:
            # If token 1 is in the limit order, make sure the upper tick is below the active tick
            if TICK_B >= current_strat_obs.price_tick_current:
                TICK_B = TICK_B - current_strat_obs.tickSpacing

        # Make sure TICK_A < TICK_B. If not, make the range one tick wide
        if TICK_A == TICK_B:
            if token_0_limit:
                TICK_A += current_strat_obs.tickSpacing
            else:
                TICK_B -= current_strat_obs.tickSpacing

        liquidity_placed_limit = int(UNI_v3_funcs.get_liquidity(current_strat_obs.price_tick_current, TICK_A, TICK_B,
                                                                limit_amount_0, limit_amount_1,
                                                                current_strat_obs.decimals_0,
                                                                current_strat_obs.decimals_1))
        limit_amount_0_placed, limit_amount_1_placed = UNI_v3_funcs.get_amounts(current_strat_obs.price_tick_current,
                                                                                TICK_A, TICK_B,
                                                                                liquidity_placed_limit,
                                                                                current_strat_obs.decimals_0,
                                                                                current_strat_obs.decimals_1)

        limit_liq_range = {'price': current_strat_obs.price,
                           'target_price': target_price,
                           'lower_bin_tick': TICK_A,
                           'upper_bin_tick': TICK_B,
                           'lower_bin_price': limit_range_lower,
                           'upper_bin_price': limit_range_upper,
                           'time': current_strat_obs.time,
                           'token_0': limit_amount_0_placed,
                           'token_1': limit_amount_1_placed,
                           'position_liquidity': liquidity_placed_limit,
                           'volatility': model_forecast['sd_forecast'],
                           'reset_time': current_strat_obs.time,
                           'return_forecast': model_forecast['return_forecast']}

        save_ranges.append(limit_liq_range)

        # Update the token amounts supplied to the pool
        total_token_0_amount -= limit_amount_0_placed
        total_token_1_amount -= limit_amount_1_placed

        # How much liquidity is not allocated to ranges
        current_strat_obs.token_0_left_over = max([total_token_0_amount, 0.0])
        current_strat_obs.token_1_left_over = max([total_token_1_amount, 0.0])

        # Since liquidity was allocated, set to 0
        current_strat_obs.liquidity_in_0 = 0.0
        current_strat_obs.liquidity_in_1 = 0.0

        return save_ranges, strategy_info_here

    ########################################################
    # Extract strategy parameters
    ########################################################
    def dict_components(self, strategy_observation):
        this_data = dict()

        # General variables
        this_data['time'] = strategy_observation.time
        this_data['price'] = float(strategy_observation.price)
        this_data['reset_point'] = int(strategy_observation.reset_point)
        this_data['reset_reason'] = strategy_observation.reset_reason
        this_data['volatility'] = float(strategy_observation.liquidity_ranges[0]['volatility'])
        this_data['return_forecast'] = float(strategy_observation.liquidity_ranges[0]['return_forecast'])

        # Range Variables
        this_data['base_range_lower'] = float(strategy_observation.liquidity_ranges[0]['lower_bin_price'])
        this_data['base_range_upper'] = float(strategy_observation.liquidity_ranges[0]['upper_bin_price'])
        this_data['limit_range_lower'] = float(strategy_observation.liquidity_ranges[1]['lower_bin_price'])
        this_data['limit_range_upper'] = float(strategy_observation.liquidity_ranges[1]['upper_bin_price'])
        this_data['reset_range_lower'] = float(strategy_observation.strategy_info['reset_range_lower'])
        this_data['reset_range_upper'] = float(strategy_observation.strategy_info['reset_range_upper'])
        this_data['price_at_reset'] = float(strategy_observation.liquidity_ranges[0]['price'])

        # Fee Variables
        this_data['token_0_fees'] = float(strategy_observation.token_0_fees)
        this_data['token_1_fees'] = float(strategy_observation.token_1_fees)
        this_data['token_0_fees_uncollected'] = float(strategy_observation.token_0_fees_uncollected)
        this_data['token_1_fees_uncollected'] = float(strategy_observation.token_1_fees_uncollected)

        # Asset Variables
        this_data['token_0_left_over'] = float(strategy_observation.token_0_left_over)
        this_data['token_1_left_over'] = float(strategy_observation.token_1_left_over)

        total_token_0 = 0.0
        total_token_1 = 0.0
        for i in range(len(strategy_observation.liquidity_ranges)):
            total_token_0 += strategy_observation.liquidity_ranges[i]['token_0']
            total_token_1 += strategy_observation.liquidity_ranges[i]['token_1']

        this_data['token_0_allocated'] = float(total_token_0)
        this_data['token_1_allocated'] = float(total_token_1)
        this_data['token_0_total'] = float(total_token_0 + strategy_observation.token_0_left_over + strategy_observation.token_0_fees_uncollected)
        this_data['token_1_total'] = float(total_token_1 + strategy_observation.token_1_left_over + strategy_observation.token_1_fees_uncollected)

        # Value Variables
        this_data['value_position_in_token_0'] = this_data['token_0_total'] + this_data['token_1_total'] / this_data['price']
        this_data['value_allocated_in_token_0'] = this_data['token_0_allocated'] + this_data['token_1_allocated'] / this_data['price']
        this_data['value_left_over_in_token_0'] = this_data['token_0_left_over'] + this_data['token_1_left_over'] / this_data['price']

        this_data['base_position_value_in_token_0'] = strategy_observation.liquidity_ranges[0]['token_0'] + \
                                                      strategy_observation.liquidity_ranges[0]['token_1'] / this_data['price']
        this_data['limit_position_value_in_token_0'] = strategy_observation.liquidity_ranges[1]['token_0'] + \
                                                       strategy_observation.liquidity_ranges[1]['token_1'] / this_data['price']

        return this_data
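

if __name__ == "__main__":
    # Minimal sketch (synthetic data, not part of the pipeline): the MAD-based
    # outlier rule from clean_data_for_garch, shown standalone. A single injected
    # spike should be flagged while ordinary noise passes the filter.
    rng = np.random.default_rng(0)
    demo = pd.Series(100 + rng.normal(0, 0.1, 300))
    demo.iloc[200] = 120.0  # inject an outlier
    roll_median = demo.rolling(window=60).median()
    mad = 1.4826 * (demo - roll_median).abs().rolling(window=60).median()
    outliers = (demo - roll_median).abs() >= 5 * mad  # z_score_cutoff = 5
    print(f"flagged {int(outliers.sum())} outlier(s) out of {len(demo)} points")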
--------------------------------------------------------------------------------

/src/ActiveStrategyFramework.py:
--------------------------------------------------------------------------------
import pandas as pd
import numpy as np
import math
from src import UNI_v3_funcs
import copy


class StrategyObservation:
    def __init__(self, timepoint,
                 current_price,
                 strategy_in,
                 liquidity_in_0,
                 liquidity_in_1,
                 fee_tier,
                 decimals_0,
                 decimals_1,
                 token_0_left_over=0.0,
                 token_1_left_over=0.0,
                 token_0_fees_uncollected=0.0,
                 token_1_fees_uncollected=0.0,
                 liquidity_ranges=None,
                 strategy_info=None,
                 swaps=None,
                 simulate_strat=True):

        ######################################
        # 1. Store current values
        ######################################
        self.time = timepoint
        self.price = current_price
        self.liquidity_in_0 = liquidity_in_0
        self.liquidity_in_1 = liquidity_in_1
        self.fee_tier = fee_tier
        self.decimals_0 = decimals_0
        self.decimals_1 = decimals_1
        self.token_0_left_over = token_0_left_over
        self.token_1_left_over = token_1_left_over
        self.token_0_fees_uncollected = token_0_fees_uncollected
        self.token_1_fees_uncollected = token_1_fees_uncollected
        self.reset_point = False
        self.reset_reason = ''
        self.decimal_adjustment = 10**(self.decimals_1 - self.decimals_0)
        self.tickSpacing = int(self.fee_tier*2*10000)
        self.token_0_fees = 0.0
        self.token_1_fees = 0.0
        self.simulate_strat = simulate_strat
        self.strategy_info = copy.deepcopy(strategy_info)
        TICK_P_PRE = math.log(self.decimal_adjustment*self.price, 1.0001)
        self.price_tick = math.floor(TICK_P_PRE/self.tickSpacing)*self.tickSpacing
        self.price_tick_current = math.floor(TICK_P_PRE)

        ######################################
        # 2. Execute the strategy
        #    If this is the first observation, we need to generate ranges.
        #    Otherwise, check whether a rebalance is required and execute it.
        #    If swap data has been fed in, it is used to estimate fee income (for back-testing simulations).
        #    If no swap data is fed in (in a live environment), only the ranges are updated.
        ######################################
        if liquidity_ranges is None:
            self.liquidity_ranges, self.strategy_info = strategy_in.set_liquidity_ranges(self)
        else:
            self.liquidity_ranges = copy.deepcopy(liquidity_ranges)

            # Update the amounts in each position according to the current pool price
            for i in range(len(self.liquidity_ranges)):
                self.liquidity_ranges[i]['time'] = self.time

                if self.simulate_strat:
                    amount_0, amount_1 = UNI_v3_funcs.get_amounts(self.price_tick_current,
                                                                  self.liquidity_ranges[i]['lower_bin_tick'],
                                                                  self.liquidity_ranges[i]['upper_bin_tick'],
                                                                  self.liquidity_ranges[i]['position_liquidity'],
                                                                  self.decimals_0,
                                                                  self.decimals_1)

                    self.liquidity_ranges[i]['token_0'] = amount_0
                    self.liquidity_ranges[i]['token_1'] = amount_1

            # If back-testing swaps, accrue the fees earned in the provided period
            if swaps is not None:
                fees_token_0, fees_token_1 = self.accrue_fees(swaps)
                self.token_0_fees = fees_token_0
                self.token_1_fees = fees_token_1

            # Check the strategy and potentially reset the ranges
            self.liquidity_ranges, self.strategy_info = strategy_in.check_strategy(self)

    ########################################################
    # Accrue earned fees (not supplied into the LP yet)
    ########################################################

    def accrue_fees(self, relevant_swaps):

        fees_earned_token_0 = 0.0
        fees_earned_token_1 = 0.0

        if len(relevant_swaps) > 0:

            # For every swap in this time period
            for s in range(len(relevant_swaps)):
                for i in range(len(self.liquidity_ranges)):
                    in_range = (self.liquidity_ranges[i]['lower_bin_tick'] <= relevant_swaps.iloc[s]['tick_swap']) and \
                               (self.liquidity_ranges[i]['upper_bin_tick'] >= relevant_swaps.iloc[s]['tick_swap'])

                    token_0_in = relevant_swaps.iloc[s]['token_in'] == 'token0'

                    # Low-liquidity pools can have zero virtual liquidity after a swap
                    if relevant_swaps.iloc[s]['virtual_liquidity'] < 1e-9:
fraction_fees_earned_position = 1
110 |                     else:
111 |                         fraction_fees_earned_position = self.liquidity_ranges[i]['position_liquidity']/(self.liquidity_ranges[i]['position_liquidity'] + relevant_swaps.iloc[s]['virtual_liquidity'])
112 | 
113 |                     fees_earned_token_0 += in_range * token_0_in * self.fee_tier * fraction_fees_earned_position * relevant_swaps.iloc[s]['traded_in']
114 |                     fees_earned_token_1 += in_range * (1-token_0_in) * self.fee_tier * fraction_fees_earned_position * relevant_swaps.iloc[s]['traded_in']
115 | 
116 |         self.token_0_fees_uncollected += fees_earned_token_0
117 |         self.token_1_fees_uncollected += fees_earned_token_1
118 | 
119 |         return fees_earned_token_0, fees_earned_token_1
120 | 
121 |     ########################################################
122 |     # Rebalance: Remove all liquidity positions
123 |     # Not dependent on strategy
124 |     ########################################################
125 |     def remove_liquidity(self):
126 | 
127 |         removed_amount_0 = 0.0
128 |         removed_amount_1 = 0.0
129 | 
130 |         # For every bin, get the amounts you currently have and withdraw
131 |         for i in range(len(self.liquidity_ranges)):
132 | 
133 |             position_liquidity = self.liquidity_ranges[i]['position_liquidity']
134 | 
135 |             TICK_A = self.liquidity_ranges[i]['lower_bin_tick']
136 |             TICK_B = self.liquidity_ranges[i]['upper_bin_tick']
137 | 
138 |             token_amounts = UNI_v3_funcs.get_amounts(self.price_tick,TICK_A,TICK_B,
139 |                                                      position_liquidity,self.decimals_0,self.decimals_1)
140 |             removed_amount_0 += token_amounts[0]
141 |             removed_amount_1 += token_amounts[1]
142 | 
143 |         self.liquidity_in_0 = removed_amount_0 + self.token_0_left_over + self.token_0_fees_uncollected
144 |         self.liquidity_in_1 = removed_amount_1 + self.token_1_left_over + self.token_1_fees_uncollected
145 | 
146 |         self.token_0_left_over = 0.0
147 |         self.token_1_left_over = 0.0
148 | 
149 |         self.token_0_fees_uncollected = 0.0
150 |         self.token_1_fees_uncollected = 0.0
151 | 
152 | 
153 | 
154 | 
155 | 
156 | 
157 | 
158 | 
159 | 
160 | ########################################################
161 | # Simulate strategy using a pandas Series called price_data, which has as an index
162 | # the time point, and contains the pool price (token 1 per token 0)
163 | ########################################################
164 | 
165 | def simulate_strategy(price_data,swap_data,strategy_in,
166 |                       liquidity_in_0,liquidity_in_1,fee_tier,decimals_0,decimals_1):
167 | 
168 |     strategy_results = []
169 | 
170 |     # Go through every time period in the data that was passed
171 |     for i in range(len(price_data)):
172 |         # Strategy Initialization
173 |         if i == 0:
174 |             strategy_results.append(StrategyObservation(price_data.index[i],
175 |                                                         price_data[i],
176 |                                                         strategy_in,
177 |                                                         liquidity_in_0,liquidity_in_1,
178 |                                                         fee_tier,decimals_0,decimals_1))
179 |         # After initialization
180 |         else:
181 | 
182 |             relevant_swaps = swap_data[price_data.index[i-1]:price_data.index[i]]
183 |             strategy_results.append(StrategyObservation(price_data.index[i],
184 |                                                         price_data[i],
185 |                                                         strategy_in,
186 |                                                         strategy_results[i-1].liquidity_in_0,
187 |                                                         strategy_results[i-1].liquidity_in_1,
188 |                                                         strategy_results[i-1].fee_tier,
189 |                                                         strategy_results[i-1].decimals_0,
190 |                                                         strategy_results[i-1].decimals_1,
191 |                                                         strategy_results[i-1].token_0_left_over,
192 |                                                         strategy_results[i-1].token_1_left_over,
193 |                                                         strategy_results[i-1].token_0_fees_uncollected,
194 |                                                         strategy_results[i-1].token_1_fees_uncollected,
195 |                                                         strategy_results[i-1].liquidity_ranges,
196 |                                                         strategy_results[i-1].strategy_info,
197 |                                                         relevant_swaps))
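            # Each iteration threads the previous observation's state (leftover
            # tokens, uncollected fees, liquidity ranges, strategy info) into the
            # next StrategyObservation; the swaps between the two consecutive
            # timestamps are passed in so accrue_fees can estimate fee income.
            #
            # Minimal usage sketch (illustrative assumptions only: a USDC/WETH-style
            # pool with decimals 6 and 18, the 0.3% fee tier, and a `strategy` object
            # implementing set_liquidity_ranges / check_strategy / dict_components):
            #
            #   sims = simulate_strategy(price_df['quotePrice'], swap_df, strategy,
            #                            liquidity_in_0=10_000.0, liquidity_in_1=5.0,
            #                            fee_tier=0.003, decimals_0=6, decimals_1=18)
            #   results = generate_simulation_series(sims, strategy)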
198 | 
199 |     return strategy_results
200 | 
201 | ########################################################
202 | # Extract Strategy Data
203 | ########################################################
204 | 
205 | def generate_simulation_series(simulations,strategy_in,token_0_usd_data = None):
206 | 
207 |     # token_0_usd_data contains, in its quotePrice column,
208 |     # the token_0 / USD price for each index timestamp
209 | 
210 |     data_strategy = pd.DataFrame([strategy_in.dict_components(i) for i in simulations])
211 |     data_strategy = data_strategy.set_index('time',drop=False)
212 |     data_strategy = data_strategy.sort_index()
213 | 
214 |     token_0_initial = simulations[0].liquidity_ranges[0]['token_0'] + simulations[0].liquidity_ranges[1]['token_0'] + simulations[0].token_0_left_over
215 |     token_1_initial = simulations[0].liquidity_ranges[0]['token_1'] + simulations[0].liquidity_ranges[1]['token_1'] + simulations[0].token_1_left_over
216 | 
217 |     if token_0_usd_data is None:  # without USD data, token 0 serves as the numeraire for the *_usd columns
218 |         data_strategy['value_position_usd'] = data_strategy['value_position_in_token_0']
219 |         data_strategy['base_position_value_usd'] = data_strategy['base_position_value_in_token_0']
220 |         data_strategy['limit_position_value_usd'] = data_strategy['limit_position_value_in_token_0']
221 |         data_strategy['cum_fees_usd'] = data_strategy['token_0_fees'].cumsum() + (data_strategy['token_1_fees'] / data_strategy['price']).cumsum()
222 |         data_strategy['token_0_hold_usd'] = token_0_initial
223 |         data_strategy['token_1_hold_usd'] = token_1_initial / data_strategy['price']
224 |         data_strategy['value_hold_usd'] = data_strategy['token_0_hold_usd'] + data_strategy['token_1_hold_usd']
225 |         data_return = data_strategy
226 |     else:
227 |         # Merge in usd price data
228 |         token_0_usd_data['price_0_usd'] = 1/token_0_usd_data['quotePrice']
229 |         token_0_usd_data['time_pd'] = token_0_usd_data.index
230 |         token_0_usd_data = token_0_usd_data.set_index('time_pd').sort_index()
231 | 
232 |         data_strategy['time_pd'] = pd.to_datetime(data_strategy['time'],utc=True)
233 |         data_strategy = data_strategy.set_index('time_pd').sort_index()
234 |         data_return = pd.merge_asof(data_strategy,token_0_usd_data['price_0_usd'],on='time_pd',direction='backward',allow_exact_matches = True)
235 | 
236 |         # Generate usd position values
237 |         data_return['value_position_usd'] = data_return['value_position_in_token_0']*data_return['price_0_usd']
238 |         data_return['base_position_value_usd'] = data_return['base_position_value_in_token_0']*data_return['price_0_usd']
239 |         data_return['limit_position_value_usd'] = data_return['limit_position_value_in_token_0']*data_return['price_0_usd']
240 |         data_return['cum_fees_0'] = data_return['token_0_fees'].cumsum() + (data_return['token_1_fees'] / data_return['price']).cumsum()
241 |         data_return['cum_fees_usd'] = data_return['cum_fees_0']*data_return['price_0_usd']
242 |         data_return['token_0_hold_usd'] = token_0_initial * data_return['price_0_usd']
243 |         data_return['token_1_hold_usd'] = token_1_initial * data_return['price_0_usd'] / data_return['price']
244 |         data_return['value_hold_usd'] = data_return['token_0_hold_usd'] + data_return['token_1_hold_usd']
245 | 
246 |     return data_return
247 | 
248 | 
249 | ########################################################
250 | # Fill gaps to a 1-minute grid and aggregate price / swap data, computing % returns
251 | ########################################################
252 | 
253 | def fill_time(data):
254 |     price_range = pd.DataFrame({'time_pd': pd.date_range(data.index.min(),data.index.max(),freq='1 min',tz='UTC')})
255 |     price_range = price_range.set_index('time_pd')
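    # The merge below left-joins onto the complete 1-minute UTC grid, leaving
    # NaNs wherever no observation exists; ffill() then carries the last known
    # row forward. A hypothetical illustration of the effect:
    #   input:   12:00 -> 100.0, 12:03 -> 101.0
    #   output:  12:00 -> 100.0, 12:01 -> 100.0, 12:02 -> 100.0, 12:03 -> 101.0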
256 | new_data = price_range.merge(data,left_index=True,right_index=True,how='left').ffill() 257 | return new_data 258 | 259 | def aggregate_price_data(data,frequency): 260 | 261 | if frequency == 'M': 262 | resample_option = '1 min' 263 | elif frequency == 'H': 264 | resample_option = '1H' 265 | elif frequency == 'D': 266 | resample_option = '1D' 267 | 268 | data_floored_min = data.copy() 269 | data_floored_min.index = data_floored_min.index.floor('Min') 270 | price_range = pd.DataFrame({'time_pd': pd.date_range(data_floored_min.index.min(),data_floored_min.index.max(),freq='1 min',tz='UTC')}) 271 | price_range = price_range.set_index('time_pd') 272 | new_data = price_range.merge(data_floored_min, left_index=True, right_index=True, how='left') 273 | new_data['quotePrice'] = new_data['quotePrice'].ffill() 274 | price_data_aggregated = new_data.resample(resample_option).last().copy() 275 | price_data_aggregated['price_return'] = price_data_aggregated['quotePrice'].pct_change() 276 | return price_data_aggregated 277 | 278 | def aggregate_swap_data(data, frequency): 279 | 280 | if frequency == 'M': 281 | resample_option = '1 min' 282 | elif frequency == 'H': 283 | resample_option = '1H' 284 | elif frequency == 'D': 285 | resample_option = '1D' 286 | 287 | swap_data_tmp = data[['amount0_adj', 'amount1_adj', 'virtual_liquidity_adj']].resample(resample_option).agg( 288 | {'amount0_adj': np.sum, 'amount1_adj': np.sum, 'virtual_liquidity_adj': np.median}) 289 | 290 | return swap_data_tmp.ffill() 291 | 292 | def analyze_strategy(data_usd,frequency = 'M'): 293 | 294 | if frequency == 'M': 295 | annualization_factor = 365*24*60 296 | elif frequency == 'H': 297 | annualization_factor = 365*24 298 | elif frequency == 'D': 299 | annualization_factor = 365 300 | 301 | days_strategy = (data_usd['time'].max()-data_usd['time'].min()).days 302 | strategy_last_obs = data_usd.tail(1) 303 | strategy_last_obs = strategy_last_obs.reset_index(drop=True) 304 | initial_position_value = data_usd.iloc[0]['value_hold_usd'] 305 | net_apr = float((strategy_last_obs['value_position_usd']/initial_position_value - 1) * 365 / days_strategy) 306 | 307 | 308 | summary_strat = { 309 | 'days_strategy' : days_strategy, 310 | 'gross_fee_apr' : float((strategy_last_obs['cum_fees_usd']/initial_position_value) * 365 / days_strategy), 311 | 'gross_fee_return' : float(strategy_last_obs['cum_fees_usd']/initial_position_value), 312 | 'net_apr' : net_apr, 313 | 'net_return' : float(strategy_last_obs['value_position_usd']/initial_position_value - 1), 314 | 'rebalances' : data_usd['reset_point'].sum(), 315 | 'max_drawdown' : ( data_usd['value_position_usd'].max() - data_usd['value_position_usd'].min() ) / data_usd['value_position_usd'].max(), 316 | 'volatility' : ((data_usd['value_position_usd'].pct_change().var())**(0.5)) * ((annualization_factor)**(0.5)), 317 | 'sharpe_ratio' : float(net_apr / (((data_usd['value_position_usd'].pct_change().var())**(0.5)) * ((annualization_factor)**(0.5)))), 318 | 'impermanent_loss' : ((strategy_last_obs['value_position_usd'] - strategy_last_obs['value_hold_usd']) / strategy_last_obs['value_hold_usd'])[0], 319 | 'mean_base_position' : (data_usd['base_position_value_in_token_0']/ \ 320 | (data_usd['base_position_value_in_token_0']+data_usd['limit_position_value_in_token_0']+data_usd['value_left_over_in_token_0'])).mean(), 321 | 'median_base_position' : (data_usd['base_position_value_in_token_0']/ \ 322 | 
(data_usd['base_position_value_in_token_0']+data_usd['limit_position_value_in_token_0']+data_usd['value_left_over_in_token_0'])).median(), 323 | 'mean_base_width' : ((data_usd['base_range_upper']-data_usd['base_range_lower'])/data_usd['price_at_reset']).mean(), 324 | 'median_base_width' : ((data_usd['base_range_upper']-data_usd['base_range_lower'])/data_usd['price_at_reset']).median(), 325 | 'final_value' : data_usd['value_position_usd'].iloc[-1] 326 | } 327 | 328 | return summary_strat 329 | 330 | 331 | 332 | 333 | 334 | 335 | 336 | ######################################################## 337 | # Plotting strategies results 338 | ######################################################## 339 | 340 | 341 | def plot_strategy(data_strategy, y_axis_label, base_color='#ff0000', flip_price_axis=False): 342 | import plotly.graph_objects as go 343 | CHART_SIZE = 300 344 | 345 | if flip_price_axis: 346 | data_strategy_here = data_strategy.copy() 347 | data_strategy_here.base_range_lower = 1/data_strategy_here.base_range_lower 348 | data_strategy_here.base_range_upper = 1/data_strategy_here.base_range_upper 349 | data_strategy_here.limit_range_lower = 1/data_strategy_here.limit_range_lower 350 | data_strategy_here.limit_range_upper = 1/data_strategy_here.limit_range_upper 351 | data_strategy_here.reset_range_lower = 1/data_strategy_here.reset_range_lower 352 | data_strategy_here.reset_range_upper = 1/data_strategy_here.reset_range_upper 353 | data_strategy_here.price = 1/data_strategy_here.price 354 | else: 355 | data_strategy_here = data_strategy.copy() 356 | 357 | fig_strategy = go.Figure() 358 | fig_strategy.add_trace(go.Scatter( 359 | x=data_strategy_here['time'], 360 | y=data_strategy_here['base_range_lower'], 361 | fill=None, 362 | mode='lines', 363 | showlegend = False, 364 | line_color=base_color, 365 | )) 366 | fig_strategy.add_trace(go.Scatter( 367 | x=data_strategy_here['time'], 368 | y=data_strategy_here['base_range_upper'], 369 | name='Base Position', 370 | fill='tonexty', # fill area between trace0 and trace1 371 | mode='lines', line_color=base_color)) 372 | 373 | fig_strategy.add_trace(go.Scatter( 374 | x=data_strategy_here['time'], 375 | y=data_strategy_here['limit_range_lower'], 376 | fill=None, 377 | mode='lines', 378 | showlegend = False, 379 | line_color='#6f6f6f')) 380 | 381 | fig_strategy.add_trace(go.Scatter( 382 | x=data_strategy_here['time'], 383 | y=data_strategy_here['limit_range_upper'], 384 | name='Base + Limit Position', 385 | fill='tonexty', # fill area between trace0 and trace1 386 | mode='lines', line_color='#6f6f6f',)) 387 | 388 | fig_strategy.add_trace(go.Scatter( 389 | x=data_strategy_here['time'], 390 | y=data_strategy_here['reset_range_lower'], 391 | name='Strategy Reset Bound', 392 | line=dict(width=2,dash='dot',color='black'))) 393 | 394 | fig_strategy.add_trace(go.Scatter( 395 | x=data_strategy_here['time'], 396 | y=data_strategy_here['reset_range_upper'], 397 | showlegend = False, 398 | line=dict(width=2,dash='dot',color='black',))) 399 | 400 | fig_strategy.add_trace(go.Scatter( 401 | x=data_strategy_here['time'], 402 | y=data_strategy_here['price'], 403 | name='Price', 404 | line=dict(width=2,color='black'))) 405 | 406 | fig_strategy.update_layout( 407 | margin=dict(l=20, r=20, t=40, b=20), 408 | height= CHART_SIZE, 409 | title = 'Strategy Simulation', 410 | xaxis_title="Date", 411 | yaxis_title=y_axis_label, 412 | ) 413 | 414 | fig_strategy.show(renderer="png", engine='kaleido') 415 | 416 | return fig_strategy 417 | 418 | 419 | def 
plot_position_value(data_strategy): 420 | import plotly.graph_objects as go 421 | CHART_SIZE = 300 422 | 423 | fig_strategy = go.Figure() 424 | fig_strategy.add_trace(go.Scatter( 425 | x=data_strategy['time'], 426 | y=data_strategy['value_position_usd'], 427 | name='Value of LP Position', 428 | line=dict(width=2, color='red'))) 429 | 430 | fig_strategy.add_trace(go.Scatter( 431 | x=data_strategy['time'], 432 | y=data_strategy['value_hold_usd'], 433 | name='Value of Holding', 434 | line=dict(width=2, color='blue'))) 435 | 436 | fig_strategy.update_layout( 437 | margin=dict(l=20, r=20, t=40, b=20), 438 | height= CHART_SIZE, 439 | title = 'Strategy Simulation — LP Position vs. Holding', 440 | xaxis_title="Date", 441 | yaxis_title='Position Value', 442 | ) 443 | 444 | fig_strategy.show(renderer="png") 445 | 446 | return fig_strategy 447 | 448 | 449 | def plot_asset_composition(data_strategy,token_0_name,token_1_name): 450 | import plotly.graph_objects as go 451 | CHART_SIZE = 300 452 | # 3 - Asset Composition 453 | fig_composition = go.Figure() 454 | fig_composition.add_trace(go.Scatter( 455 | x=data_strategy['time'], y=data_strategy['token_0_total'], 456 | mode='lines', 457 | name=token_0_name, 458 | line=dict(width=0.5, color='#ff0000'), 459 | stackgroup='one', # define stack group 460 | groupnorm='percent' 461 | )) 462 | fig_composition.add_trace(go.Scatter( 463 | x=data_strategy['time'], y=data_strategy['token_1_total']/data_strategy['price'], 464 | mode='lines', 465 | name=token_1_name, 466 | line=dict(width=0.5, color='#f4f4f4'), 467 | stackgroup='one' 468 | )) 469 | 470 | fig_composition.update_layout( 471 | showlegend=True, 472 | xaxis_type='date', 473 | yaxis=dict( 474 | type='linear', 475 | range=[1, 100], 476 | ticksuffix='%')) 477 | 478 | fig_composition.update_layout( 479 | margin=dict(l=20, r=20, t=40, b=20), 480 | height= CHART_SIZE, 481 | title = 'Position Asset Composition', 482 | xaxis_title="Date", 483 | yaxis_title="Position %", 484 | legend_title='Token' 485 | ) 486 | 487 | fig_composition.show(renderer="png") 488 | 489 | return fig_composition 490 | 491 | def plot_position_return_decomposition(data_strategy): 492 | import plotly.graph_objects as go 493 | INITIAL_POSITION_VALUE = data_strategy.iloc[0]['value_position_usd'] 494 | CHART_SIZE = 300 495 | 496 | fig_income = go.Figure() 497 | fig_income.add_trace(go.Scatter( 498 | x=data_strategy['time'], 499 | y=data_strategy['cum_fees_usd']/INITIAL_POSITION_VALUE, 500 | fill=None, 501 | mode='lines', 502 | line_color='blue', 503 | name='Accumulated Fees', 504 | )) 505 | 506 | fig_income.add_trace(go.Scatter( 507 | x=data_strategy['time'], 508 | y=(data_strategy['value_hold_usd']-data_strategy['value_position_usd'])/INITIAL_POSITION_VALUE, 509 | fill=None, 510 | mode='lines', 511 | line_color='black', 512 | name='Value Hold - Position', 513 | )) 514 | 515 | fig_income.add_trace(go.Scatter( 516 | x=data_strategy['time'], 517 | y=(data_strategy['value_hold_usd'])/INITIAL_POSITION_VALUE - 1, 518 | fill=None, 519 | mode='lines', 520 | line_color='green', 521 | name='Value Hold', 522 | )) 523 | 524 | fig_income.add_trace(go.Scatter( 525 | x=data_strategy['time'], 526 | y=data_strategy['value_position_usd']/INITIAL_POSITION_VALUE-1, 527 | fill=None, 528 | mode='lines', 529 | line_color='#ff0000', 530 | name='Net Position Value' 531 | )) 532 | 533 | fig_income.update_layout( 534 | margin=dict(l=20, r=20, t=40, b=20), 535 | height= CHART_SIZE, 536 | title = 'Position Value Change Decomposition', 537 | xaxis_title="Date", 538 | 
yaxis_title="Position %", 539 | legend_title='Token', 540 | yaxis=dict(tickformat = "%"), 541 | ) 542 | 543 | fig_income.show(renderer="png") 544 | 545 | return fig_income 546 | 547 | 548 | def plot_position_composition(data_strategy): 549 | import plotly.graph_objects as go 550 | CHART_SIZE = 300 551 | fig_position_composition = go.Figure() 552 | fig_position_composition.add_trace(go.Scatter( 553 | x=data_strategy['time'], y=data_strategy['base_position_value_usd'], 554 | mode='lines', 555 | name='Base Position', 556 | line=dict(width=0.5, color='#ff0000'), 557 | stackgroup='one', # define stack group 558 | # groupnorm='percent' 559 | )) 560 | fig_position_composition.add_trace(go.Scatter( 561 | x=data_strategy['time'], y=data_strategy['limit_position_value_usd'], 562 | mode='lines', 563 | name='Limit Position', 564 | line=dict(width=0.5, color='#6f6f6f'), 565 | stackgroup='one' 566 | )) 567 | 568 | fig_position_composition.update_layout( 569 | margin=dict(l=20, r=20, t=40, b=20), 570 | height= CHART_SIZE, 571 | title = 'Base / Limit Values', 572 | xaxis_title="Date", 573 | yaxis_title="USD Value", 574 | legend_title='Value' 575 | ) 576 | 577 | fig_position_composition.show(renderer="png") 578 | 579 | return fig_position_composition -------------------------------------------------------------------------------- /src/.ipynb_checkpoints/GetPoolData-checkpoint.py: -------------------------------------------------------------------------------- 1 | import pandas as pd 2 | from datetime import datetime, timedelta 3 | import requests 4 | import pickle 5 | import importlib 6 | from itertools import compress 7 | import time 8 | import os 9 | import math 10 | 11 | ############################################################## 12 | # Pull Uniswap v3 pool data from Google Bigquery 13 | # Have options for Ethereum Mainnet and Polygon 14 | ############################################################## 15 | 16 | 17 | def download_bigquery_price_mainnet( 18 | contract_address, date_begin, date_end, block_start 19 | ): 20 | """ 21 | Internal function to query Google Bigquery for the swap history of a Uniswap v3 pool between two dates starting from a particular block from Ethereum Mainnet. 22 | Use GetPoolData.get_pool_data_bigquery which preprocesses the data in order to conduct simualtions with the Active Strategy Framework. 23 | """ 24 | 25 | from google.cloud import bigquery 26 | 27 | client = bigquery.Client() 28 | 29 | query = ( 30 | """ 31 | SELECT * 32 | FROM blockchain-etl.ethereum_uniswap.UniswapV3Pool_event_Swap 33 | where contract_address = lower('""" 34 | + contract_address.lower() 35 | + """') and 36 | block_timestamp >= '""" 37 | + str(date_begin) 38 | + """' and block_timestamp <= '""" 39 | + str(date_end) 40 | + """' and block_number >= """ 41 | + str(block_start) 42 | + """ 43 | """ 44 | ) 45 | query_job = client.query(query) # Make an API request. 46 | return query_job.to_dataframe(create_bqstorage_client=False) 47 | 48 | 49 | def download_bigquery_price_polygon( 50 | contract_address, date_begin, date_end, block_start 51 | ): 52 | """ 53 | Internal function to query Google Bigquery for the swap history of a Uniswap v3 pool between two dates starting from a particular block from Polygon. 54 | Use GetPoolData.get_pool_data_bigquery which preprocesses the data in order to conduct simualtions with the Active Strategy Framework. 
55 | """ 56 | 57 | from google.cloud import bigquery 58 | 59 | client = bigquery.Client() 60 | query = ( 61 | '''SELECT 62 | block_number, 63 | transaction_index, 64 | log_index, 65 | block_hash, 66 | transaction_hash, 67 | address, 68 | block_timestamp, 69 | '0x' || RIGHT(topics[SAFE_OFFSET(1)],40) AS sender, 70 | '0x' || RIGHT(topics[SAFE_OFFSET(1)],40) AS recipient, 71 | '0x' || SUBSTR(DATA, 3, 64) AS amount0, 72 | '0x' || SUBSTR(DATA, 67, 64) AS amount1, 73 | '0x' || SUBSTR(DATA,131,64) AS sqrtPriceX96, 74 | '0x' || SUBSTR(DATA,195,64) AS liquidity, 75 | '0x' || SUBSTR(DATA,259,64) AS tick 76 | FROM 77 | public-data-finance.crypto_polygon.logs 78 | WHERE 79 | topics[SAFE_OFFSET(0)] = '0xc42079f94a6350d7e6235f29174924f928cc2ac818eb64fed8004e115fbcca67' 80 | AND DATE(block_timestamp) >= DATE("''' 81 | + date_begin 82 | + '''") 83 | AND DATE(block_timestamp) <= DATE("''' 84 | + date_end 85 | + """") 86 | AND block_number >= """ 87 | + str(block_start) 88 | + ''' 89 | AND address = "''' 90 | + contract_address 91 | + """" 92 | """ 93 | ) 94 | query_job = client.query(query) # Make an API request. 95 | 96 | result = query_job.to_dataframe(create_bqstorage_client=False) 97 | result["amount0"] = result["amount0"].apply(signed_int) 98 | result["amount1"] = result["amount1"].apply(signed_int) 99 | result["sqrtPriceX96"] = result["sqrtPriceX96"].apply(signed_int) 100 | result["liquidity"] = result["liquidity"].apply(signed_int) 101 | result["tick"] = result["tick"].apply(signed_int) 102 | 103 | return result 104 | 105 | 106 | def get_pool_data_bigquery( 107 | contract_address, 108 | date_begin, 109 | date_end, 110 | decimals_0, 111 | decimals_1, 112 | network="mainnet", 113 | block_start=0, 114 | ): 115 | 116 | """ 117 | Queries Google Bigquery for the swap history of Uniswap v3 pool between two dates starting from a particular block from either Ethereum Mainnet or Polygon. 118 | Preprocesses data to have decimal adjusted amounts and liquidity values. 
119 | """ 120 | 121 | if network == "mainnet": 122 | resulting_data = download_bigquery_price_mainnet( 123 | contract_address.lower(), date_begin, date_end, block_start 124 | ) 125 | elif network == "polygon": 126 | resulting_data = download_bigquery_price_polygon( 127 | contract_address.lower(), date_begin, date_end, block_start 128 | ) 129 | else: 130 | raise ValueError("Unsupported Network:" + network) 131 | 132 | DECIMAL_ADJ = 10 ** (decimals_1 - decimals_0) 133 | resulting_data["sqrtPriceX96_float"] = resulting_data["sqrtPriceX96"].astype(float) 134 | resulting_data["quotePrice"] = ( 135 | (resulting_data["sqrtPriceX96_float"] / 2**96) ** 2 136 | ) / DECIMAL_ADJ 137 | resulting_data["block_date"] = pd.to_datetime(resulting_data["block_timestamp"]) 138 | resulting_data = resulting_data.set_index("block_date", drop=False).sort_index() 139 | 140 | resulting_data["tick_swap"] = resulting_data["tick"].astype(int) 141 | resulting_data["amount0"] = resulting_data["amount0"].astype(float) 142 | resulting_data["amount1"] = resulting_data["amount1"].astype(float) 143 | resulting_data["amount0_adj"] = ( 144 | resulting_data["amount0"].astype(float) / 10**decimals_0 145 | ) 146 | resulting_data["amount1_adj"] = ( 147 | resulting_data["amount1"].astype(float) / 10**decimals_1 148 | ) 149 | resulting_data["virtual_liquidity"] = resulting_data["liquidity"].astype(float) 150 | resulting_data["virtual_liquidity_adj"] = resulting_data["liquidity"].astype( 151 | float 152 | ) / (10 ** ((decimals_0 + decimals_1) / 2)) 153 | resulting_data["token_in"] = resulting_data.apply( 154 | lambda x: "token0" if (x["amount0_adj"] < 0) else "token1", axis=1 155 | ) 156 | resulting_data["traded_in"] = resulting_data.apply( 157 | lambda x: -x["amount0_adj"] if (x["amount0_adj"] < 0) else -x["amount1_adj"], 158 | axis=1, 159 | ).astype(float) 160 | 161 | return resulting_data 162 | 163 | 164 | def signed_int(h): 165 | """ 166 | Converts hex values to signed integers. 167 | """ 168 | s = bytes.fromhex(h[2:]) 169 | i = int.from_bytes(s, "big", signed=True) 170 | return i 171 | 172 | 173 | ############################################################## 174 | # Get Swaps from Uniswap v3's subgraph, and liquidity at each swap from Flipside Crypto 175 | ############################################################## 176 | 177 | 178 | def query_univ3_graph(query: str, variables=None, network="mainnet") -> dict: 179 | """ 180 | Internal function to query The Graph's Uniswap v3 subgraph on either mainnet or arbitrum. 181 | Use GetPoolData.get_pool_data_flipside which preprocesses the data in order to conduct simualtions with the Active Strategy Framework. 182 | """ 183 | 184 | if network == "mainnet": 185 | univ3_graph_url = "https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v3" 186 | elif network == "arbitrum": 187 | univ3_graph_url = ( 188 | "https://api.thegraph.com/subgraphs/name/ianlapham/uniswap-arbitrum-one" 189 | ) 190 | 191 | if variables: 192 | params = {"query": query, "variables": variables} 193 | else: 194 | params = {"query": query} 195 | 196 | response = requests.post(univ3_graph_url, json=params) 197 | return response.json() 198 | 199 | 200 | def get_swap_data(contract_address, file_name, DOWNLOAD_DATA=True, network="mainnet"): 201 | """ 202 | Internal function to query full history of swap data from Uniswap v3's subgraph. 203 | Use GetPoolData.get_pool_data_flipside which preprocesses the data in order to conduct simualtions with the Active Strategy Framework. 
204 | """ 205 | 206 | request_swap = [] 207 | 208 | if DOWNLOAD_DATA: 209 | 210 | current_payload = generate_first_event_payload("swaps", contract_address) 211 | current_id = query_univ3_graph(current_payload, network=network)["data"][ 212 | "pool" 213 | ]["swaps"][0]["id"] 214 | finished = False 215 | 216 | while not finished: 217 | current_payload = generate_event_payload( 218 | "swaps", contract_address, str(1000) 219 | ) 220 | response = query_univ3_graph( 221 | current_payload, variables={"paginateId": current_id}, network=network 222 | )["data"]["pool"]["swaps"] 223 | 224 | if len(response) == 0: 225 | finished = True 226 | else: 227 | current_id = response[-1]["id"] 228 | request_swap.extend(response) 229 | 230 | with open("./data/" + file_name + "_swap.pkl", "wb") as output: 231 | pickle.dump(request_swap, output, pickle.HIGHEST_PROTOCOL) 232 | else: 233 | with open("./data/" + file_name + "_swap.pkl", "rb") as input: 234 | request_swap = pickle.load(input) 235 | 236 | return pd.DataFrame(request_swap) 237 | 238 | 239 | def get_liquidity_flipside(flipside_query, file_name, DOWNLOAD_DATA=True): 240 | """ 241 | Internal function to query full history of liquidity values from Flipside Crypto's Uniswap v3's databases. 242 | Use GetPoolData.get_pool_data_flipside which preprocesses the data in order to conduct simualtions with the Active Strategy Framework. 243 | """ 244 | 245 | if DOWNLOAD_DATA: 246 | request_stats = [pd.DataFrame(requests.get(x).json()) for x in flipside_query] 247 | with open("./data/" + file_name + "_liquidity.pkl", "wb") as output: 248 | pickle.dump(request_stats, output, pickle.HIGHEST_PROTOCOL) 249 | else: 250 | with open("./data/" + file_name + "_liquidity.pkl", "rb") as input: 251 | request_stats = pickle.load(input) 252 | 253 | stats_data = pd.concat(request_stats) 254 | 255 | return stats_data 256 | 257 | 258 | def get_pool_data_flipside( 259 | contract_address, flipside_query, file_name, DOWNLOAD_DATA=True 260 | ): 261 | """ 262 | Queries Uniswap v3's subgraph for swap data and Flipside Crypto's queries to find liquidity in order to conduct simulations using the Active Strategy Framework. 
263 | """ 264 | 265 | # Download events 266 | swap_data = get_swap_data(contract_address, file_name, DOWNLOAD_DATA) 267 | swap_data["time_pd"] = pd.to_datetime( 268 | swap_data["timestamp"], unit="s", origin="unix", utc=True 269 | ) 270 | swap_data = swap_data.set_index("time_pd") 271 | swap_data["tick_swap"] = swap_data["tick"] 272 | swap_data = swap_data.sort_index() 273 | 274 | # Download pool liquidity data 275 | stats_data = get_liquidity_flipside(flipside_query, file_name, DOWNLOAD_DATA) 276 | stats_data["time_pd"] = pd.to_datetime( 277 | stats_data["BLOCK_TIMESTAMP"], origin="unix", utc=True 278 | ) 279 | stats_data = stats_data.set_index("time_pd") 280 | stats_data = stats_data.sort_index() 281 | stats_data["tick_pool"] = stats_data["TICK"] 282 | 283 | full_data = pd.merge_asof( 284 | swap_data, 285 | stats_data[["VIRTUAL_LIQUIDITY_ADJUSTED", "tick_pool"]], 286 | on="time_pd", 287 | direction="backward", 288 | allow_exact_matches=False, 289 | ) 290 | full_data = full_data.set_index("time_pd") 291 | # token with negative amounts is the token being swapped in 292 | full_data["tick_swap"] = full_data["tick_swap"].astype(int) 293 | full_data["amount0"] = full_data["amount0"].astype(float) 294 | full_data["amount1"] = full_data["amount1"].astype(float) 295 | full_data["token_in"] = full_data.apply( 296 | lambda x: "token0" if (x["amount0"] < 0) else "token1", axis=1 297 | ) 298 | 299 | return full_data 300 | 301 | 302 | def generate_event_payload(event, address, n_query): 303 | payload = ( 304 | ''' 305 | query($paginateId: String!){ 306 | pool(id:"''' 307 | + address 308 | + """"){ 309 | """ 310 | + event 311 | + """( 312 | first: """ 313 | + n_query 314 | + """ 315 | orderBy: id 316 | orderDirection: asc 317 | where: { 318 | id_gt: $paginateId 319 | } 320 | ) { 321 | id 322 | timestamp 323 | tick 324 | amount0 325 | amount1 326 | amountUSD 327 | } 328 | } 329 | }""" 330 | ) 331 | return payload 332 | 333 | 334 | def generate_first_event_payload(event, address): 335 | payload = ( 336 | '''query{ 337 | pool(id:"''' 338 | + address 339 | + """"){ 340 | """ 341 | + event 342 | + """( 343 | first: 1 344 | orderBy: id 345 | orderDirection: asc 346 | ) { 347 | id 348 | timestamp 349 | tick 350 | amount0 351 | amount1 352 | amountUSD 353 | } 354 | } 355 | }""" 356 | ) 357 | return payload 358 | 359 | 360 | ########################## 361 | # Uniswap v2 362 | ########################## 363 | 364 | 365 | def query_univ2_graph(query: str, variables=None) -> dict: 366 | """ 367 | Internal function to query The Graph's Uniswap v2 subgraph on mainnet. 368 | Use GetPoolData.get_swap_data_univ2 which preprocesses the data in order to conduct simualtions with the Active Strategy Framework. 369 | """ 370 | 371 | univ2_graph_url = "https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2" 372 | 373 | if variables: 374 | params = {"query": query, "variables": variables} 375 | else: 376 | params = {"query": query} 377 | 378 | response = requests.post(univ2_graph_url, json=params) 379 | 380 | return response.json() 381 | 382 | 383 | def download_swap_univ2_subgraph( 384 | contract_address, file_name, date_begin, date_end, DOWNLOAD_DATA=True 385 | ): 386 | """ 387 | Internal function to query the history of swap data from Uniswap v2's subgraph between begin_date and end_date. 388 | Use GetPoolData.get_swap_data_univ2 which preprocesses the data in order to conduct simualtions with the Active Strategy Framework. 
389 | """ 390 | 391 | request_swap = [] 392 | 393 | if DOWNLOAD_DATA: 394 | 395 | current_payload = generate_first_swap_univ2_payload( 396 | contract_address, date_begin, date_end 397 | ) 398 | current_id = query_univ2_graph(current_payload)["data"]["swaps"][0]["id"] 399 | finished = False 400 | 401 | while not finished: 402 | current_payload = generate_swap_univ2_payload( 403 | contract_address, date_begin, date_end, str(1000) 404 | ) 405 | response = query_univ2_graph( 406 | current_payload, variables={"paginateId": current_id} 407 | )["data"]["swaps"] 408 | 409 | if len(response) == 0: 410 | finished = True 411 | else: 412 | current_id = response[-1]["id"] 413 | request_swap.extend(response) 414 | 415 | with open("./data/" + file_name + "_swap_v2.pkl", "wb") as output: 416 | pickle.dump(request_swap, output, pickle.HIGHEST_PROTOCOL) 417 | else: 418 | with open("./data/" + file_name + "_swap_v2.pkl", "rb") as input: 419 | request_swap = pickle.load(input) 420 | 421 | return pd.DataFrame(request_swap) 422 | 423 | 424 | def get_swap_data_univ2( 425 | contract_address, file_name, date_begin, date_end, DOWNLOAD_DATA=True 426 | ): 427 | """ 428 | Queries Uniswap v2's subgraph for swap data in order to conduct simulations using the Active Strategy Framework. 429 | """ 430 | 431 | swap_data = download_swap_univ2_subgraph( 432 | contract_address, file_name, date_begin, date_end, DOWNLOAD_DATA 433 | ) 434 | swap_data["time_pd"] = pd.to_datetime( 435 | swap_data["timestamp"], unit="s", origin="unix", utc=True 436 | ) 437 | swap_data = swap_data.set_index("time_pd", drop=False) 438 | swap_data = swap_data.sort_index() 439 | 440 | swap_data["token_in"] = swap_data.apply( 441 | lambda x: "token0" if float(x["amount0In"]) > 0 else "token1", axis=1 442 | ) 443 | swap_data["amount0"] = swap_data.apply( 444 | lambda x: -float(x["amount0In"]) 445 | if x["token_in"] == "token0" 446 | else float(x["amount0Out"]), 447 | axis=1, 448 | ) 449 | swap_data["amount1"] = swap_data.apply( 450 | lambda x: float(x["amount1Out"]) 451 | if x["token_in"] == "token0" 452 | else -float(x["amount1In"]), 453 | axis=1, 454 | ) 455 | swap_data["traded_in"] = swap_data.apply( 456 | lambda x: -x["amount0"] if (x["amount0"] < 0) else -x["amount1"], axis=1 457 | ).astype(float) 458 | 459 | return swap_data 460 | 461 | 462 | def generate_swap_univ2_payload(address, date_begin, date_end, n_query): 463 | """ 464 | Internal function that generates GraphQL queries to to query The Graph's Uniswap v2 subgraph on mainnet. 465 | """ 466 | 467 | date_begin_fmt = str(int(pd.Timestamp(date_begin).timestamp())) 468 | date_end_fmt = str(int(pd.Timestamp(date_end).timestamp())) 469 | 470 | payload = ( 471 | """ 472 | query($paginateId: String!){ 473 | swaps( 474 | first: """ 475 | + n_query 476 | + ''' 477 | orderBy: id 478 | orderDirection: asc 479 | where:{ 480 | pair:"''' 481 | + address 482 | + '''", 483 | id_gt: $paginateId, 484 | timestamp_gte:"''' 485 | + date_begin_fmt 486 | + '''", 487 | timestamp_lte:"''' 488 | + date_end_fmt 489 | + """" 490 | } 491 | ) { 492 | id 493 | timestamp 494 | amount0In 495 | amount1In 496 | amount0Out 497 | amount1Out 498 | amountUSD 499 | } 500 | }""" 501 | ) 502 | 503 | return payload 504 | 505 | 506 | def generate_first_swap_univ2_payload(address, date_begin, date_end): 507 | """ 508 | Internal function that generates GraphQL queries to to query The Graph's Uniswap v2 subgraph on mainnet. 
509 | """ 510 | 511 | date_begin_fmt = str(int(pd.Timestamp(date_begin).timestamp())) 512 | date_end_fmt = str(int(pd.Timestamp(date_end).timestamp())) 513 | 514 | payload = ( 515 | '''query{ 516 | swaps( 517 | first: 1 518 | orderBy: id 519 | orderDirection: asc 520 | where:{pair:"''' 521 | + address 522 | + """", 523 | timestamp_gte:""" 524 | + date_begin_fmt 525 | + """, 526 | timestamp_lte:""" 527 | + date_end_fmt 528 | + """} 529 | ) { 530 | id 531 | timestamp 532 | amount0In 533 | amount1In 534 | amount0Out 535 | amount1Out 536 | amountUSD 537 | } 538 | }""" 539 | ) 540 | 541 | return payload 542 | 543 | 544 | ############################################################## 545 | # Get Price Data from Bitquery 546 | ############################################################## 547 | def get_price_data_bitquery( 548 | token_0_address, 549 | token_1_address, 550 | date_begin, 551 | date_end, 552 | api_token, 553 | file_name, 554 | DOWNLOAD_DATA=True, 555 | RATE_LIMIT=False, 556 | exchange_to_query="Uniswap", 557 | ): 558 | """ 559 | Queries the price history of a pair of ERC20's (located at token_0_address and token_1_address) in exchange_to_query (defaults to all Uniswap versions on mainnet) between begin_date and end_date on Bitquery. 560 | """ 561 | request = [] 562 | max_rows_bitquery = 10000 563 | 564 | if DOWNLOAD_DATA: 565 | # Paginate using limit and an offset 566 | offset = 0 567 | current_request = run_bitquery_query( 568 | generate_price_payload( 569 | token_0_address, 570 | token_1_address, 571 | date_begin, 572 | date_end, 573 | offset, 574 | exchange_to_query, 575 | ), 576 | api_token, 577 | ) 578 | request.append(current_request) 579 | 580 | # When a request has less than 10,000 rows we are at the last one 581 | while ( 582 | len(current_request["data"]["ethereum"]["dexTrades"]) == max_rows_bitquery 583 | ): 584 | current_request = run_bitquery_query( 585 | generate_price_payload( 586 | token_0_address, 587 | token_1_address, 588 | date_begin, 589 | date_end, 590 | offset, 591 | exchange_to_query, 592 | ), 593 | api_token, 594 | ) 595 | request.append(current_request) 596 | offset += max_rows_bitquery 597 | if RATE_LIMIT: 598 | time.sleep(5) 599 | 600 | with open("./data/" + file_name + "_1min.pkl", "wb") as output: 601 | pickle.dump(request, output, pickle.HIGHEST_PROTOCOL) 602 | 603 | else: 604 | with open("./data/" + file_name + "_1min.pkl", "rb") as input: 605 | request = pickle.load(input) 606 | 607 | # Prepare data for strategy: 608 | # Collect json data and add to a pandas Data Frame 609 | 610 | requests_with_data = [len(x["data"]["ethereum"]["dexTrades"]) > 0 for x in request] 611 | relevant_requests = list(compress(request, requests_with_data)) 612 | 613 | price_data = pd.concat( 614 | [ 615 | pd.DataFrame( 616 | { 617 | "time": [ 618 | x["timeInterval"]["minute"] 619 | for x in request_price["data"]["ethereum"]["dexTrades"] 620 | ], 621 | "baseCurrency": [ 622 | x["baseCurrency"]["symbol"] 623 | for x in request_price["data"]["ethereum"]["dexTrades"] 624 | ], 625 | "quoteCurrency": [ 626 | x["quoteCurrency"]["symbol"] 627 | for x in request_price["data"]["ethereum"]["dexTrades"] 628 | ], 629 | "quoteAmount": [ 630 | x["quoteAmount"] 631 | for x in request_price["data"]["ethereum"]["dexTrades"] 632 | ], 633 | "baseAmount": [ 634 | x["baseAmount"] 635 | for x in request_price["data"]["ethereum"]["dexTrades"] 636 | ], 637 | "tradeAmount": [ 638 | x["tradeAmount"] 639 | for x in request_price["data"]["ethereum"]["dexTrades"] 640 | ], 641 | "quotePrice": [ 642 | 
x["quotePrice"] 643 | for x in request_price["data"]["ethereum"]["dexTrades"] 644 | ], 645 | } 646 | ) 647 | for request_price in relevant_requests 648 | ] 649 | ) 650 | 651 | price_data["time"] = pd.to_datetime(price_data["time"], format="%Y-%m-%d %H:%M:%S") 652 | 653 | #cutting df to [begin:end] 654 | #brakets_dates = pd.DataFrame(data=[date_begin, date_end]) 655 | #brakets_dates['time'] = pd.to_datetime(brakets_dates[0], format="%Y-%m-%d %H:%M:%S") 656 | #brakets_dates["time_pd"] = pd.to_datetime(brakets_dates["time"], utc=True) 657 | #date_begin = brakets_dates['time_pd'][0] 658 | #date_end = brakets_dates['time_pd'][1] 659 | 660 | price_data["time_pd"] = pd.to_datetime(price_data["time"], utc=True) 661 | price_data = price_data.set_index("time_pd") 662 | 663 | return price_data[date_begin+" 00:00:00+00:00": date_end+" 00:00:00+00:00"] 664 | 665 | 666 | def get_price_usd_data_bitquery( 667 | token_address, 668 | date_begin, 669 | date_end, 670 | api_token, 671 | file_name, 672 | DOWNLOAD_DATA=True, 673 | RATE_LIMIT=False, 674 | exchange_to_query="Uniswap", 675 | ): 676 | """ 677 | Queries the price history of an ERC20 + USD Stablecoins (located at token_address) in exchange_to_query (defaults to all Uniswap versions on mainnet) between begin_date and end_date on Bitquery. 678 | """ 679 | 680 | request = [] 681 | max_rows_bitquery = 10000 682 | 683 | if DOWNLOAD_DATA: 684 | # Paginate using limit and an offset 685 | offset = 0 686 | current_request = run_bitquery_query( 687 | generate_usd_price_payload( 688 | token_address, date_begin, date_end, offset, exchange_to_query 689 | ), 690 | api_token, 691 | ) 692 | request.append(current_request) 693 | 694 | # When a request has less than 10,000 rows we are at the last one 695 | while ( 696 | len(current_request["data"]["ethereum"]["dexTrades"]) == max_rows_bitquery 697 | ): 698 | current_request = run_bitquery_query( 699 | generate_usd_price_payload( 700 | token_address, date_begin, date_end, offset, exchange_to_query 701 | ), 702 | api_token, 703 | ) 704 | request.append(current_request) 705 | offset += max_rows_bitquery 706 | if RATE_LIMIT: 707 | time.sleep(5) 708 | 709 | with open("./data/" + file_name + "_1min.pkl", "wb") as output: 710 | pickle.dump(request, output, pickle.HIGHEST_PROTOCOL) 711 | else: 712 | with open("./data/" + file_name + "_1min.pkl", "rb") as input: 713 | request = pickle.load(input) 714 | 715 | # Prepare data for strategy: 716 | # Collect json data and add to a pandas Data Frame 717 | 718 | requests_with_data = [len(x["data"]["ethereum"]["dexTrades"]) > 0 for x in request] 719 | relevant_requests = list(compress(request, requests_with_data)) 720 | 721 | price_data = pd.concat( 722 | [ 723 | pd.DataFrame( 724 | { 725 | "time": [ 726 | x["timeInterval"]["minute"] 727 | for x in request_price["data"]["ethereum"]["dexTrades"] 728 | ], 729 | "baseCurrency": [ 730 | x["baseCurrency"]["symbol"] 731 | for x in request_price["data"]["ethereum"]["dexTrades"] 732 | ], 733 | "quoteCurrency": [ 734 | x["quoteCurrency"]["symbol"] 735 | for x in request_price["data"]["ethereum"]["dexTrades"] 736 | ], 737 | "quoteAmount": [ 738 | x["quoteAmount"] 739 | for x in request_price["data"]["ethereum"]["dexTrades"] 740 | ], 741 | "baseAmount": [ 742 | x["baseAmount"] 743 | for x in request_price["data"]["ethereum"]["dexTrades"] 744 | ], 745 | "quotePrice": [ 746 | x["quotePrice"] 747 | for x in request_price["data"]["ethereum"]["dexTrades"] 748 | ], 749 | } 750 | ) 751 | for request_price in relevant_requests 752 | ] 753 | ) 754 | 755 | 
price_data["time"] = pd.to_datetime(price_data["time"], format="%Y-%m-%d %H:%M:%S") 756 | price_data["time_pd"] = pd.to_datetime(price_data["time"], utc=True) 757 | price_data = price_data.set_index("time_pd") 758 | 759 | return price_data 760 | 761 | 762 | def generate_price_payload( 763 | token_0_address, 764 | token_1_address, 765 | date_begin, 766 | date_end, 767 | offset, 768 | exchange_to_query="Uniswap", 769 | ): 770 | payload = ( 771 | """{ 772 | ethereum(network: ethereum) { 773 | dexTrades( 774 | options: {asc: "timeInterval.minute", limit: 10000, offset:""" 775 | + str(offset) 776 | + '''} 777 | date: {between: ["''' 778 | + date_begin 779 | + '''","''' 780 | + date_end 781 | + '''"]} 782 | exchangeName: {is: "''' 783 | + exchange_to_query 784 | + '''"} 785 | baseCurrency: {is: "''' 786 | + token_0_address 787 | + '''"} 788 | quoteCurrency: {is: "''' 789 | + token_1_address 790 | + """"} 791 | 792 | ) { 793 | timeInterval { 794 | minute(count: 1) 795 | } 796 | baseCurrency { 797 | symbol 798 | address 799 | } 800 | baseAmount 801 | quoteCurrency { 802 | symbol 803 | address 804 | } 805 | tradeAmount(in: USD) 806 | quoteAmount 807 | quotePrice 808 | } 809 | } 810 | }""" 811 | ) 812 | 813 | return payload 814 | 815 | 816 | def generate_usd_price_payload( 817 | token_address, date_begin, date_end, offset, exchange_to_query="Uniswap" 818 | ): 819 | payload = ( 820 | """{ 821 | ethereum(network: ethereum) { 822 | dexTrades( 823 | options: {asc: "timeInterval.minute", limit: 10000, offset:""" 824 | + str(offset) 825 | + '''} 826 | date: {between: ["''' 827 | + date_begin 828 | + '''","''' 829 | + date_end 830 | + '''"]} 831 | exchangeName: {is: "''' 832 | + exchange_to_query 833 | + '''"} 834 | any: [{baseCurrency: {is: "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48"}, 835 | quoteCurrency:{is: "''' 836 | + token_address 837 | + '''"}}, 838 | {baseCurrency: {is: "0xdac17f958d2ee523a2206206994597c13d831ec7"}, 839 | quoteCurrency:{is: "''' 840 | + token_address 841 | + """"}}] 842 | 843 | ) { 844 | timeInterval { 845 | minute(count: 1) 846 | } 847 | baseCurrency { 848 | symbol 849 | address 850 | } 851 | baseAmount 852 | quoteCurrency { 853 | symbol 854 | address 855 | } 856 | quoteAmount 857 | quotePrice 858 | } 859 | } 860 | }""" 861 | ) 862 | 863 | return payload 864 | 865 | 866 | def run_bitquery_query(query, api_token): 867 | """ 868 | Internal function that runs a GraphQL query on Bitquery. 869 | """ 870 | url = "https://graphql.bitquery.io/" 871 | headers = {"X-API-KEY": api_token} 872 | request = requests.post(url, json={"query": query}, headers=headers) 873 | if request.status_code == 200: 874 | return request.json() 875 | else: 876 | raise Exception( 877 | "Query failed and return code is {}. 
{}".format( 878 | request.status_code, query 879 | ) 880 | ) 881 | -------------------------------------------------------------------------------- /src/GetPoolData.py: -------------------------------------------------------------------------------- 1 | import pandas as pd 2 | from datetime import datetime, timedelta 3 | import requests 4 | import pickle 5 | import importlib 6 | from itertools import compress 7 | import time 8 | import os 9 | import math 10 | 11 | ############################################################## 12 | # Pull Uniswap v3 pool data from Google Bigquery 13 | # Have options for Ethereum Mainnet and Polygon 14 | ############################################################## 15 | 16 | 17 | def download_bigquery_price_mainnet( 18 | contract_address, date_begin, date_end, block_start 19 | ): 20 | """ 21 | Internal function to query Google Bigquery for the swap history of a Uniswap v3 pool between two dates starting from a particular block from Ethereum Mainnet. 22 | Use GetPoolData.get_pool_data_bigquery which preprocesses the data in order to conduct simualtions with the Active Strategy Framework. 23 | """ 24 | 25 | from google.cloud import bigquery 26 | 27 | client = bigquery.Client() 28 | 29 | query = ( 30 | """ 31 | SELECT * 32 | FROM blockchain-etl.ethereum_uniswap.UniswapV3Pool_event_Swap 33 | where contract_address = lower('""" 34 | + contract_address.lower() 35 | + """') and 36 | block_timestamp >= '""" 37 | + str(date_begin) 38 | + """' and block_timestamp <= '""" 39 | + str(date_end) 40 | + """' and block_number >= """ 41 | + str(block_start) 42 | + """ 43 | """ 44 | ) 45 | query_job = client.query(query) # Make an API request. 46 | return query_job.to_dataframe(create_bqstorage_client=False) 47 | 48 | 49 | def download_bigquery_price_polygon( 50 | contract_address, date_begin, date_end, block_start 51 | ): 52 | """ 53 | Internal function to query Google Bigquery for the swap history of a Uniswap v3 pool between two dates starting from a particular block from Polygon. 54 | Use GetPoolData.get_pool_data_bigquery which preprocesses the data in order to conduct simualtions with the Active Strategy Framework. 55 | """ 56 | 57 | from google.cloud import bigquery 58 | 59 | client = bigquery.Client() 60 | query = ( 61 | '''SELECT 62 | block_number, 63 | transaction_index, 64 | log_index, 65 | block_hash, 66 | transaction_hash, 67 | address, 68 | block_timestamp, 69 | '0x' || RIGHT(topics[SAFE_OFFSET(1)],40) AS sender, 70 | '0x' || RIGHT(topics[SAFE_OFFSET(1)],40) AS recipient, 71 | '0x' || SUBSTR(DATA, 3, 64) AS amount0, 72 | '0x' || SUBSTR(DATA, 67, 64) AS amount1, 73 | '0x' || SUBSTR(DATA,131,64) AS sqrtPriceX96, 74 | '0x' || SUBSTR(DATA,195,64) AS liquidity, 75 | '0x' || SUBSTR(DATA,259,64) AS tick 76 | FROM 77 | public-data-finance.crypto_polygon.logs 78 | WHERE 79 | topics[SAFE_OFFSET(0)] = '0xc42079f94a6350d7e6235f29174924f928cc2ac818eb64fed8004e115fbcca67' 80 | AND DATE(block_timestamp) >= DATE("''' 81 | + date_begin 82 | + '''") 83 | AND DATE(block_timestamp) <= DATE("''' 84 | + date_end 85 | + """") 86 | AND block_number >= """ 87 | + str(block_start) 88 | + ''' 89 | AND address = "''' 90 | + contract_address 91 | + """" 92 | """ 93 | ) 94 | query_job = client.query(query) # Make an API request. 
95 | 
96 |     result = query_job.to_dataframe(create_bqstorage_client=False)
97 |     result["amount0"] = result["amount0"].apply(signed_int)
98 |     result["amount1"] = result["amount1"].apply(signed_int)
99 |     result["sqrtPriceX96"] = result["sqrtPriceX96"].apply(signed_int)
100 |     result["liquidity"] = result["liquidity"].apply(signed_int)
101 |     result["tick"] = result["tick"].apply(signed_int)
102 | 
103 |     return result
104 | 
105 | 
106 | def get_pool_data_bigquery(
107 |     contract_address,
108 |     date_begin,
109 |     date_end,
110 |     decimals_0,
111 |     decimals_1,
112 |     network="mainnet",
113 |     block_start=0,
114 | ):
115 | 
116 |     """
117 |     Queries Google Bigquery for the swap history of a Uniswap v3 pool between two dates starting from a particular block from either Ethereum Mainnet or Polygon.
118 |     Preprocesses data to have decimal adjusted amounts and liquidity values.
119 |     """
120 | 
121 |     if network == "mainnet":
122 |         resulting_data = download_bigquery_price_mainnet(
123 |             contract_address.lower(), date_begin, date_end, block_start
124 |         )
125 |     elif network == "polygon":
126 |         resulting_data = download_bigquery_price_polygon(
127 |             contract_address.lower(), date_begin, date_end, block_start
128 |         )
129 |     else:
130 |         raise ValueError("Unsupported Network: " + network)
131 | 
132 |     DECIMAL_ADJ = 10 ** (decimals_1 - decimals_0)
133 |     resulting_data["sqrtPriceX96_float"] = resulting_data["sqrtPriceX96"].astype(float)
134 |     resulting_data["quotePrice"] = (
135 |         (resulting_data["sqrtPriceX96_float"] / 2**96) ** 2
136 |     ) / DECIMAL_ADJ
137 |     resulting_data["block_date"] = pd.to_datetime(resulting_data["block_timestamp"])
138 |     resulting_data = resulting_data.set_index("block_date", drop=False).sort_index()
139 | 
140 |     resulting_data["tick_swap"] = resulting_data["tick"].astype(int)
141 |     resulting_data["amount0"] = resulting_data["amount0"].astype(float)
142 |     resulting_data["amount1"] = resulting_data["amount1"].astype(float)
143 |     resulting_data["amount0_adj"] = (
144 |         resulting_data["amount0"].astype(float) / 10**decimals_0
145 |     )
146 |     resulting_data["amount1_adj"] = (
147 |         resulting_data["amount1"].astype(float) / 10**decimals_1
148 |     )
149 |     resulting_data["virtual_liquidity"] = resulting_data["liquidity"].astype(float)
150 |     resulting_data["virtual_liquidity_adj"] = resulting_data["liquidity"].astype(
151 |         float
152 |     ) / (10 ** ((decimals_0 + decimals_1) / 2))
153 |     resulting_data["token_in"] = resulting_data.apply(
154 |         lambda x: "token0" if (x["amount0_adj"] < 0) else "token1", axis=1
155 |     )
156 |     resulting_data["traded_in"] = resulting_data.apply(
157 |         lambda x: -x["amount0_adj"] if (x["amount0_adj"] < 0) else -x["amount1_adj"],
158 |         axis=1,
159 |     ).astype(float)
160 | 
161 |     return resulting_data
162 | 
163 | 
164 | def signed_int(h):
165 |     """
166 |     Converts hex values to signed integers.
167 |     """
168 |     s = bytes.fromhex(h[2:])
169 |     i = int.from_bytes(s, "big", signed=True)
170 |     return i
171 | 
172 | 
173 | ##############################################################
174 | # Get Swaps from Uniswap v3's subgraph, and liquidity at each swap from Flipside Crypto
175 | ##############################################################
176 | 
177 | 
178 | def query_univ3_graph(query: str, variables=None, network="mainnet") -> dict:
179 |     """
180 |     Internal function to query The Graph's Uniswap v3 subgraph on either mainnet or arbitrum.
181 |     Use GetPoolData.get_pool_data_flipside which preprocesses the data in order to conduct simulations with the Active Strategy Framework.
182 |     """
183 | 
184 |     if network == "mainnet":
185 |         univ3_graph_url = "https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v3"
186 |     elif network == "arbitrum":
187 |         univ3_graph_url = (
188 |             "https://api.thegraph.com/subgraphs/name/ianlapham/uniswap-arbitrum-one"
189 |         )
190 | 
191 |     if variables:
192 |         params = {"query": query, "variables": variables}
193 |     else:
194 |         params = {"query": query}
195 | 
196 |     response = requests.post(univ3_graph_url, json=params)
197 |     return response.json()
198 | 
199 | 
200 | def get_swap_data(contract_address, file_name, DOWNLOAD_DATA=True, network="mainnet"):
201 |     """
202 |     Internal function to query full history of swap data from Uniswap v3's subgraph.
203 |     Use GetPoolData.get_pool_data_flipside which preprocesses the data in order to conduct simulations with the Active Strategy Framework.
204 |     """
205 | 
206 |     request_swap = []
207 | 
208 |     if DOWNLOAD_DATA:
209 | 
210 |         current_payload = generate_first_event_payload("swaps", contract_address)
211 |         current_id = query_univ3_graph(current_payload, network=network)["data"][
212 |             "pool"
213 |         ]["swaps"][0]["id"]
214 |         finished = False
215 | 
216 |         while not finished:
217 |             current_payload = generate_event_payload(
218 |                 "swaps", contract_address, str(1000)
219 |             )
220 |             response = query_univ3_graph(
221 |                 current_payload, variables={"paginateId": current_id}, network=network
222 |             )["data"]["pool"]["swaps"]
223 | 
224 |             if len(response) == 0:
225 |                 finished = True
226 |             else:
227 |                 current_id = response[-1]["id"]
228 |                 request_swap.extend(response)
229 | 
230 |         with open("./data/" + file_name + "_swap.pkl", "wb") as output:
231 |             pickle.dump(request_swap, output, pickle.HIGHEST_PROTOCOL)
232 |     else:
233 |         with open("./data/" + file_name + "_swap.pkl", "rb") as input:
234 |             request_swap = pickle.load(input)
235 | 
236 |     return pd.DataFrame(request_swap)
237 | 
238 | 
239 | def get_liquidity_flipside(flipside_query, file_name, DOWNLOAD_DATA=True):
240 |     """
241 |     Internal function to query full history of liquidity values from Flipside Crypto's Uniswap v3 databases.
242 |     Use GetPoolData.get_pool_data_flipside which preprocesses the data in order to conduct simulations with the Active Strategy Framework.
243 |     """
244 | 
245 |     if DOWNLOAD_DATA:
246 |         request_stats = [pd.DataFrame(requests.get(x).json()) for x in flipside_query]
247 |         with open("./data/" + file_name + "_liquidity.pkl", "wb") as output:
248 |             pickle.dump(request_stats, output, pickle.HIGHEST_PROTOCOL)
249 |     else:
250 |         with open("./data/" + file_name + "_liquidity.pkl", "rb") as input:
251 |             request_stats = pickle.load(input)
252 | 
253 |     stats_data = pd.concat(request_stats)
254 | 
255 |     return stats_data
256 | 
257 | 
258 | def get_pool_data_flipside(
259 |     contract_address, flipside_query, file_name, DOWNLOAD_DATA=True
260 | ):
261 |     """
262 |     Queries Uniswap v3's subgraph for swap data and Flipside Crypto's queries to find liquidity in order to conduct simulations using the Active Strategy Framework.
263 |     """
264 | 
265 |     # Download events
266 |     swap_data = get_swap_data(contract_address, file_name, DOWNLOAD_DATA)
267 |     swap_data["time_pd"] = pd.to_datetime(
268 |         swap_data["timestamp"], unit="s", origin="unix", utc=True
269 |     )
270 |     swap_data = swap_data.set_index("time_pd")
271 |     swap_data["tick_swap"] = swap_data["tick"]
272 |     swap_data = swap_data.sort_index()
273 | 
274 |     # Download pool liquidity data
275 |     stats_data = get_liquidity_flipside(flipside_query, file_name, DOWNLOAD_DATA)
276 |     stats_data["time_pd"] = pd.to_datetime(
277 |         stats_data["BLOCK_TIMESTAMP"], origin="unix", utc=True
278 |     )
279 |     stats_data = stats_data.set_index("time_pd")
280 |     stats_data = stats_data.sort_index()
281 |     stats_data["tick_pool"] = stats_data["TICK"]
282 | 
283 |     full_data = pd.merge_asof(
284 |         swap_data,
285 |         stats_data[["VIRTUAL_LIQUIDITY_ADJUSTED", "tick_pool"]],
286 |         on="time_pd",
287 |         direction="backward",
288 |         allow_exact_matches=False,
289 |     )
290 |     full_data = full_data.set_index("time_pd")
291 |     # token with negative amounts is the token being swapped in
292 |     full_data["tick_swap"] = full_data["tick_swap"].astype(int)
293 |     full_data["amount0"] = full_data["amount0"].astype(float)
294 |     full_data["amount1"] = full_data["amount1"].astype(float)
295 |     full_data["token_in"] = full_data.apply(
296 |         lambda x: "token0" if (x["amount0"] < 0) else "token1", axis=1
297 |     )
298 | 
299 |     return full_data
300 | 
301 | 
302 | def generate_event_payload(event, address, n_query):
303 |     payload = (
304 |         '''
305 | query($paginateId: String!){
306 |     pool(id:"'''
307 |         + address
308 |         + """"){
309 |         """
310 |         + event
311 |         + """(
312 |         first: """
313 |         + n_query
314 |         + """
315 |         orderBy: id
316 |         orderDirection: asc
317 |         where: {
318 |             id_gt: $paginateId
319 |         }
320 |         ) {
321 |             id
322 |             timestamp
323 |             tick
324 |             amount0
325 |             amount1
326 |             amountUSD
327 |         }
328 |     }
329 | }"""
330 |     )
331 |     return payload
332 | 
333 | 
334 | def generate_first_event_payload(event, address):
335 |     payload = (
336 |         '''query{
337 |     pool(id:"'''
338 |         + address
339 |         + """"){
340 |         """
341 |         + event
342 |         + """(
343 |         first: 1
344 |         orderBy: id
345 |         orderDirection: asc
346 |         ) {
347 |             id
348 |             timestamp
349 |             tick
350 |             amount0
351 |             amount1
352 |             amountUSD
353 |         }
354 |     }
355 | }"""
356 |     )
357 |     return payload
358 | 
359 | 
360 | ##########################
361 | # Uniswap v2
362 | ##########################
363 | 
364 | 
365 | def query_univ2_graph(query: str, variables=None) -> dict:
366 |     """
367 |     Internal function to query The Graph's Uniswap v2 subgraph on mainnet.
368 |     Use GetPoolData.get_swap_data_univ2 which preprocesses the data in order to conduct simulations with the Active Strategy Framework.
369 |     """
370 | 
371 |     univ2_graph_url = "https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2"
372 | 
373 |     if variables:
374 |         params = {"query": query, "variables": variables}
375 |     else:
376 |         params = {"query": query}
377 | 
378 |     response = requests.post(univ2_graph_url, json=params)
379 | 
380 |     return response.json()
381 | 
382 | 
383 | def download_swap_univ2_subgraph(
384 |     contract_address, file_name, date_begin, date_end, DOWNLOAD_DATA=True
385 | ):
386 |     """
387 |     Internal function to query the history of swap data from Uniswap v2's subgraph between begin_date and end_date.
388 |     Use GetPoolData.get_swap_data_univ2 which preprocesses the data in order to conduct simulations with the Active Strategy Framework.
389 | """ 390 | 391 | request_swap = [] 392 | 393 | if DOWNLOAD_DATA: 394 | 395 | current_payload = generate_first_swap_univ2_payload( 396 | contract_address, date_begin, date_end 397 | ) 398 | current_id = query_univ2_graph(current_payload)["data"]["swaps"][0]["id"] 399 | finished = False 400 | 401 | while not finished: 402 | current_payload = generate_swap_univ2_payload( 403 | contract_address, date_begin, date_end, str(1000) 404 | ) 405 | response = query_univ2_graph( 406 | current_payload, variables={"paginateId": current_id} 407 | )["data"]["swaps"] 408 | 409 | if len(response) == 0: 410 | finished = True 411 | else: 412 | current_id = response[-1]["id"] 413 | request_swap.extend(response) 414 | 415 | with open("./data/" + file_name + "_swap_v2.pkl", "wb") as output: 416 | pickle.dump(request_swap, output, pickle.HIGHEST_PROTOCOL) 417 | else: 418 | with open("./data/" + file_name + "_swap_v2.pkl", "rb") as input: 419 | request_swap = pickle.load(input) 420 | 421 | return pd.DataFrame(request_swap) 422 | 423 | 424 | def get_swap_data_univ2( 425 | contract_address, file_name, date_begin, date_end, DOWNLOAD_DATA=True 426 | ): 427 | """ 428 | Queries Uniswap v2's subgraph for swap data in order to conduct simulations using the Active Strategy Framework. 429 | """ 430 | 431 | swap_data = download_swap_univ2_subgraph( 432 | contract_address, file_name, date_begin, date_end, DOWNLOAD_DATA 433 | ) 434 | swap_data["time_pd"] = pd.to_datetime( 435 | swap_data["timestamp"], unit="s", origin="unix", utc=True 436 | ) 437 | swap_data = swap_data.set_index("time_pd", drop=False) 438 | swap_data = swap_data.sort_index() 439 | 440 | swap_data["token_in"] = swap_data.apply( 441 | lambda x: "token0" if float(x["amount0In"]) > 0 else "token1", axis=1 442 | ) 443 | swap_data["amount0"] = swap_data.apply( 444 | lambda x: -float(x["amount0In"]) 445 | if x["token_in"] == "token0" 446 | else float(x["amount0Out"]), 447 | axis=1, 448 | ) 449 | swap_data["amount1"] = swap_data.apply( 450 | lambda x: float(x["amount1Out"]) 451 | if x["token_in"] == "token0" 452 | else -float(x["amount1In"]), 453 | axis=1, 454 | ) 455 | swap_data["traded_in"] = swap_data.apply( 456 | lambda x: -x["amount0"] if (x["amount0"] < 0) else -x["amount1"], axis=1 457 | ).astype(float) 458 | 459 | return swap_data 460 | 461 | 462 | def generate_swap_univ2_payload(address, date_begin, date_end, n_query): 463 | """ 464 | Internal function that generates GraphQL queries to to query The Graph's Uniswap v2 subgraph on mainnet. 465 | """ 466 | 467 | date_begin_fmt = str(int(pd.Timestamp(date_begin).timestamp())) 468 | date_end_fmt = str(int(pd.Timestamp(date_end).timestamp())) 469 | 470 | payload = ( 471 | """ 472 | query($paginateId: String!){ 473 | swaps( 474 | first: """ 475 | + n_query 476 | + ''' 477 | orderBy: id 478 | orderDirection: asc 479 | where:{ 480 | pair:"''' 481 | + address 482 | + '''", 483 | id_gt: $paginateId, 484 | timestamp_gte:"''' 485 | + date_begin_fmt 486 | + '''", 487 | timestamp_lte:"''' 488 | + date_end_fmt 489 | + """" 490 | } 491 | ) { 492 | id 493 | timestamp 494 | amount0In 495 | amount1In 496 | amount0Out 497 | amount1Out 498 | amountUSD 499 | } 500 | }""" 501 | ) 502 | 503 | return payload 504 | 505 | 506 | def generate_first_swap_univ2_payload(address, date_begin, date_end): 507 | """ 508 | Internal function that generates GraphQL queries to to query The Graph's Uniswap v2 subgraph on mainnet. 
509 | """ 510 | 511 | date_begin_fmt = str(int(pd.Timestamp(date_begin).timestamp())) 512 | date_end_fmt = str(int(pd.Timestamp(date_end).timestamp())) 513 | 514 | payload = ( 515 | '''query{ 516 | swaps( 517 | first: 1 518 | orderBy: id 519 | orderDirection: asc 520 | where:{pair:"''' 521 | + address 522 | + """", 523 | timestamp_gte:""" 524 | + date_begin_fmt 525 | + """, 526 | timestamp_lte:""" 527 | + date_end_fmt 528 | + """} 529 | ) { 530 | id 531 | timestamp 532 | amount0In 533 | amount1In 534 | amount0Out 535 | amount1Out 536 | amountUSD 537 | } 538 | }""" 539 | ) 540 | 541 | return payload 542 | 543 | 544 | ############################################################## 545 | # Get Price Data from Bitquery 546 | ############################################################## 547 | def get_price_data_bitquery( 548 | token_0_address, 549 | token_1_address, 550 | date_begin, 551 | date_end, 552 | api_token, 553 | file_name, 554 | DOWNLOAD_DATA=True, 555 | RATE_LIMIT=False, 556 | exchange_to_query="Uniswap", 557 | ): 558 | """ 559 | Queries the price history of a pair of ERC20's (located at token_0_address and token_1_address) in exchange_to_query (defaults to all Uniswap versions on mainnet) between begin_date and end_date on Bitquery. 560 | """ 561 | request = [] 562 | max_rows_bitquery = 10000 563 | 564 | if DOWNLOAD_DATA: 565 | # Paginate using limit and an offset 566 | offset = 0 567 | current_request = run_bitquery_query( 568 | generate_price_payload( 569 | token_0_address, 570 | token_1_address, 571 | date_begin, 572 | date_end, 573 | offset, 574 | exchange_to_query, 575 | ), 576 | api_token, 577 | ) 578 | request.append(current_request) 579 | 580 | # When a request has less than 10,000 rows we are at the last one 581 | while ( 582 | len(current_request["data"]["ethereum"]["dexTrades"]) == max_rows_bitquery 583 | ): 584 | current_request = run_bitquery_query( 585 | generate_price_payload( 586 | token_0_address, 587 | token_1_address, 588 | date_begin, 589 | date_end, 590 | offset, 591 | exchange_to_query, 592 | ), 593 | api_token, 594 | ) 595 | request.append(current_request) 596 | offset += max_rows_bitquery 597 | if RATE_LIMIT: 598 | time.sleep(5) 599 | 600 | with open("./data/" + file_name + "_1min.pkl", "wb") as output: 601 | pickle.dump(request, output, pickle.HIGHEST_PROTOCOL) 602 | 603 | else: 604 | with open("./data/" + file_name + "_1min.pkl", "rb") as input: 605 | request = pickle.load(input) 606 | 607 | # Prepare data for strategy: 608 | # Collect json data and add to a pandas Data Frame 609 | 610 | requests_with_data = [len(x["data"]["ethereum"]["dexTrades"]) > 0 for x in request] 611 | relevant_requests = list(compress(request, requests_with_data)) 612 | 613 | price_data = pd.concat( 614 | [ 615 | pd.DataFrame( 616 | { 617 | "time": [ 618 | x["timeInterval"]["minute"] 619 | for x in request_price["data"]["ethereum"]["dexTrades"] 620 | ], 621 | "baseCurrency": [ 622 | x["baseCurrency"]["symbol"] 623 | for x in request_price["data"]["ethereum"]["dexTrades"] 624 | ], 625 | "quoteCurrency": [ 626 | x["quoteCurrency"]["symbol"] 627 | for x in request_price["data"]["ethereum"]["dexTrades"] 628 | ], 629 | "quoteAmount": [ 630 | x["quoteAmount"] 631 | for x in request_price["data"]["ethereum"]["dexTrades"] 632 | ], 633 | "baseAmount": [ 634 | x["baseAmount"] 635 | for x in request_price["data"]["ethereum"]["dexTrades"] 636 | ], 637 | "tradeAmount": [ 638 | x["tradeAmount"] 639 | for x in request_price["data"]["ethereum"]["dexTrades"] 640 | ], 641 | "quotePrice": [ 642 | 
x["quotePrice"] 643 | for x in request_price["data"]["ethereum"]["dexTrades"] 644 | ], 645 | } 646 | ) 647 | for request_price in relevant_requests 648 | ] 649 | ) 650 | 651 | price_data["time"] = pd.to_datetime(price_data["time"], format="%Y-%m-%d %H:%M:%S") 652 | price_data["time_pd"] = pd.to_datetime(price_data["time"], utc=True) 653 | price_data = price_data.set_index("time_pd") 654 | 655 | return price_data#[date_begin+" 00:00:00+00:00": date_end+" 00:00:00+00:00"] 656 | 657 | 658 | def get_price_usd_data_bitquery( 659 | token_address, 660 | date_begin, 661 | date_end, 662 | api_token, 663 | file_name, 664 | DOWNLOAD_DATA=True, 665 | RATE_LIMIT=False, 666 | exchange_to_query="Uniswap", 667 | ): 668 | """ 669 | Queries the price history of an ERC20 + USD Stablecoins (located at token_address) in exchange_to_query (defaults to all Uniswap versions on mainnet) between begin_date and end_date on Bitquery. 670 | """ 671 | 672 | request = [] 673 | max_rows_bitquery = 10000 674 | 675 | if DOWNLOAD_DATA: 676 | # Paginate using limit and an offset 677 | offset = 0 678 | current_request = run_bitquery_query( 679 | generate_usd_price_payload( 680 | token_address, date_begin, date_end, offset, exchange_to_query 681 | ), 682 | api_token, 683 | ) 684 | request.append(current_request) 685 | 686 | # When a request has less than 10,000 rows we are at the last one 687 | while ( 688 | len(current_request["data"]["ethereum"]["dexTrades"]) == max_rows_bitquery 689 | ): 690 | current_request = run_bitquery_query( 691 | generate_usd_price_payload( 692 | token_address, date_begin, date_end, offset, exchange_to_query 693 | ), 694 | api_token, 695 | ) 696 | request.append(current_request) 697 | offset += max_rows_bitquery 698 | if RATE_LIMIT: 699 | time.sleep(5) 700 | 701 | with open("./data/" + file_name + "_1min.pkl", "wb") as output: 702 | pickle.dump(request, output, pickle.HIGHEST_PROTOCOL) 703 | else: 704 | with open("./data/" + file_name + "_1min.pkl", "rb") as input: 705 | request = pickle.load(input) 706 | 707 | # Prepare data for strategy: 708 | # Collect json data and add to a pandas Data Frame 709 | 710 | requests_with_data = [len(x["data"]["ethereum"]["dexTrades"]) > 0 for x in request] 711 | relevant_requests = list(compress(request, requests_with_data)) 712 | 713 | price_data = pd.concat( 714 | [ 715 | pd.DataFrame( 716 | { 717 | "time": [ 718 | x["timeInterval"]["minute"] 719 | for x in request_price["data"]["ethereum"]["dexTrades"] 720 | ], 721 | "baseCurrency": [ 722 | x["baseCurrency"]["symbol"] 723 | for x in request_price["data"]["ethereum"]["dexTrades"] 724 | ], 725 | "quoteCurrency": [ 726 | x["quoteCurrency"]["symbol"] 727 | for x in request_price["data"]["ethereum"]["dexTrades"] 728 | ], 729 | "quoteAmount": [ 730 | x["quoteAmount"] 731 | for x in request_price["data"]["ethereum"]["dexTrades"] 732 | ], 733 | "baseAmount": [ 734 | x["baseAmount"] 735 | for x in request_price["data"]["ethereum"]["dexTrades"] 736 | ], 737 | "quotePrice": [ 738 | x["quotePrice"] 739 | for x in request_price["data"]["ethereum"]["dexTrades"] 740 | ], 741 | } 742 | ) 743 | for request_price in relevant_requests 744 | ] 745 | ) 746 | 747 | price_data["time"] = pd.to_datetime(price_data["time"], format="%Y-%m-%d %H:%M:%S") 748 | price_data["time_pd"] = pd.to_datetime(price_data["time"], utc=True) 749 | price_data = price_data.set_index("time_pd") 750 | 751 | return price_data 752 | 753 | 754 | def generate_price_payload( 755 | token_0_address, 756 | token_1_address, 757 | date_begin, 758 | date_end, 759 | 
754 | def generate_price_payload(
755 |     token_0_address,
756 |     token_1_address,
757 |     date_begin,
758 |     date_end,
759 |     offset,
760 |     exchange_to_query="Uniswap",
761 | ):
762 |     payload = (
763 |         """{
764 | ethereum(network: ethereum) {
765 |     dexTrades(
766 |         options: {asc: "timeInterval.minute", limit: 10000, offset:"""
767 |         + str(offset)
768 |         + '''}
769 |         date: {between: ["'''
770 |         + date_begin
771 |         + '''","'''
772 |         + date_end
773 |         + '''"]}
774 |         exchangeName: {is: "'''
775 |         + exchange_to_query
776 |         + '''"}
777 |         baseCurrency: {is: "'''
778 |         + token_0_address
779 |         + '''"}
780 |         quoteCurrency: {is: "'''
781 |         + token_1_address
782 |         + """"}
783 | 
784 |     ) {
785 |         timeInterval {
786 |             minute(count: 1)
787 |         }
788 |         baseCurrency {
789 |             symbol
790 |             address
791 |         }
792 |         baseAmount
793 |         quoteCurrency {
794 |             symbol
795 |             address
796 |         }
797 |         tradeAmount(in: USD)
798 |         quoteAmount
799 |         quotePrice
800 |     }
801 | }
802 | }"""
803 |     )
804 | 
805 |     return payload
806 | 
807 | 
808 | def generate_usd_price_payload(
809 |     token_address, date_begin, date_end, offset, exchange_to_query="Uniswap"
810 | ):
811 |     payload = (
812 |         """{
813 | ethereum(network: ethereum) {
814 |     dexTrades(
815 |         options: {asc: "timeInterval.minute", limit: 10000, offset:"""
816 |         + str(offset)
817 |         + '''}
818 |         date: {between: ["'''
819 |         + date_begin
820 |         + '''","'''
821 |         + date_end
822 |         + '''"]}
823 |         exchangeName: {is: "'''
824 |         + exchange_to_query
825 |         + '''"}
826 |         any: [{baseCurrency: {is: "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48"}, # USDC
827 |         quoteCurrency:{is: "'''
828 |         + token_address
829 |         + '''"}},
830 |         {baseCurrency: {is: "0xdac17f958d2ee523a2206206994597c13d831ec7"}, # USDT
831 |         quoteCurrency:{is: "'''
832 |         + token_address
833 |         + """"}}]
834 | 
835 |     ) {
836 |         timeInterval {
837 |             minute(count: 1)
838 |         }
839 |         baseCurrency {
840 |             symbol
841 |             address
842 |         }
843 |         baseAmount
844 |         quoteCurrency {
845 |             symbol
846 |             address
847 |         }
848 |         quoteAmount
849 |         quotePrice
850 |     }
851 | }
852 | }"""
853 |     )
854 | 
855 |     return payload
856 | 
857 | 
{}".format( 870 | request.status_code, query 871 | ) 872 | ) 873 | 874 | 875 | def get_current_state() -> dict: 876 | query = '''{ 877 | pool(id: "0x8ad599c3a0ff1de082011efddc58f1908eb6e6d8") { 878 | tick 879 | token0 880 | { 881 | symbol 882 | id 883 | decimals 884 | } 885 | token1 886 | { 887 | symbol 888 | id 889 | decimals 890 | } 891 | feeTier 892 | sqrtPrice 893 | liquidity 894 | } 895 | }''' 896 | univ3_graph_url = "https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v3" 897 | params = {"query": query} 898 | response = requests.post(univ3_graph_url, json=params) 899 | return response.json() -------------------------------------------------------------------------------- /notebooks/uncollected-fees.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 2, 6 | "id": "4fe3b105-0615-4f1d-a2e8-0d7a9ef4cb98", 7 | "metadata": {}, 8 | "outputs": [], 9 | "source": [ 10 | "import pandas as pd\n", 11 | "import numpy as np\n", 12 | "from google.cloud import bigquery\n", 13 | "import os" 14 | ] 15 | }, 16 | { 17 | "cell_type": "code", 18 | "execution_count": 2, 19 | "id": "bb5b92da-be40-4726-83cf-4e544131b131", 20 | "metadata": {}, 21 | "outputs": [], 22 | "source": [ 23 | "BITQUERY_API_TOKEN = 'BQY0mSI2IadezoknHO3JMzexNwCpugm5'\n", 24 | "import config\n", 25 | "os.environ[\"GOOGLE_APPLICATION_CREDENTIALS\"] = config.GOOGLE_SERVICE_AUTH_JSON" 26 | ] 27 | }, 28 | { 29 | "cell_type": "code", 30 | "execution_count": null, 31 | "id": "31de123d-c891-4145-a145-cedf454b38e8", 32 | "metadata": {}, 33 | "outputs": [], 34 | "source": [ 35 | "Query = \"\"\"query ($network: EthereumNetwork!, $address: String!) {\n", 36 | " ethereum(network: $network) {\n", 37 | " address(address: {is: $address}) {\n", 38 | " smartContract {\n", 39 | " attributes {\n", 40 | " name\n", 41 | " type\n", 42 | " address {\n", 43 | " address\n", 44 | " annotation\n", 45 | " }\n", 46 | " value\n", 47 | " }\n", 48 | " }\n", 49 | " }\n", 50 | " }\n", 51 | "}\n", 52 | "\"\"\"" 53 | ] 54 | }, 55 | { 56 | "cell_type": "code", 57 | "execution_count": null, 58 | "id": "14aa76d4-4f3b-4e62-8fb4-ec23f2341802", 59 | "metadata": {}, 60 | "outputs": [], 61 | "source": [ 62 | "r = requests.post(endpoint, json={\"query\": Query})\n", 63 | "print(json.dumps(r.json(), indent=2))" 64 | ] 65 | }, 66 | { 67 | "cell_type": "code", 68 | "execution_count": 3, 69 | "id": "bb272c04-c614-43a0-a2bf-7a7bea5eb06a", 70 | "metadata": {}, 71 | "outputs": [], 72 | "source": [ 73 | "def download_bigquery_price_mainnet(\n", 74 | " contract_address, date_begin, date_end, block_start\n", 75 | "):\n", 76 | " \"\"\"\n", 77 | " Internal function to query Google Bigquery for the swap history of a Uniswap v3 pool between two dates starting from a particular block from Ethereum Mainnet.\n", 78 | " Use GetPoolData.get_pool_data_bigquery which| preprocesses the data in order to conduct simualtions with the Active Strategy Framework.\n", 79 | " \"\"\"\n", 80 | "\n", 81 | " from google.cloud import bigquery\n", 82 | "\n", 83 | " client = bigquery.Client()\n", 84 | "\n", 85 | " query = (\n", 86 | " \"\"\"\n", 87 | " SELECT *\n", 88 | " FROM blockchain-etl.ethereum_uniswap.UniswapV3Pool_event_Swap\n", 89 | " where contract_address = lower('\"\"\"\n", 90 | " + contract_address.lower()\n", 91 | " + \"\"\"') and\n", 92 | " block_timestamp >= '\"\"\"\n", 93 | " + str(date_begin)\n", 94 | " + \"\"\"' and block_timestamp <= '\"\"\"\n", 95 | " + str(date_end)\n", 96 | " + 
\"\"\"' and block_number >= \"\"\"\n", 97 | " + str(block_start)\n", 98 | " + \"\"\"\n", 99 | " \"\"\"\n", 100 | " )\n", 101 | " query_job = client.query(query) # Make an API request.\n", 102 | " return query_job.to_dataframe(create_bqstorage_client=False)" 103 | ] 104 | }, 105 | { 106 | "cell_type": "code", 107 | "execution_count": 4, 108 | "id": "2e0e5875-7f6c-4244-b04e-c227b80db1c0", 109 | "metadata": {}, 110 | "outputs": [ 111 | { 112 | "ename": "Forbidden", 113 | "evalue": "403 POST https://bigquery.googleapis.com/bigquery/v2/projects/imposing-ring-349623/jobs?prettyPrint=false: Access Denied: Project imposing-ring-349623: User does not have bigquery.jobs.create permission in project imposing-ring-349623.\n\nLocation: None\nJob ID: 37e9eec6-9d05-441f-ba3e-1dd76c08a9cd\n", 114 | "output_type": "error", 115 | "traceback": [ 116 | "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", 117 | "\u001b[0;31mForbidden\u001b[0m Traceback (most recent call last)", 118 | "Input \u001b[0;32mIn [4]\u001b[0m, in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[43mdownload_bigquery_price_mainnet\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43m0x8ad599c3a0ff1de082011efddc58f1908eb6e6d8\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43m2022-04-30\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43m2022-05-05\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m100\u001b[39;49m\u001b[43m)\u001b[49m\n", 119 | "Input \u001b[0;32mIn [3]\u001b[0m, in \u001b[0;36mdownload_bigquery_price_mainnet\u001b[0;34m(contract_address, date_begin, date_end, block_start)\u001b[0m\n\u001b[1;32m 11\u001b[0m client \u001b[38;5;241m=\u001b[39m bigquery\u001b[38;5;241m.\u001b[39mClient()\n\u001b[1;32m 13\u001b[0m query \u001b[38;5;241m=\u001b[39m (\n\u001b[1;32m 14\u001b[0m \u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m 15\u001b[0m \u001b[38;5;124;03m SELECT *\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 27\u001b[0m \u001b[38;5;124m \u001b[39m\u001b[38;5;124m\"\"\"\u001b[39m\n\u001b[1;32m 28\u001b[0m )\n\u001b[0;32m---> 29\u001b[0m query_job \u001b[38;5;241m=\u001b[39m \u001b[43mclient\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mquery\u001b[49m\u001b[43m(\u001b[49m\u001b[43mquery\u001b[49m\u001b[43m)\u001b[49m \u001b[38;5;66;03m# Make an API request.\u001b[39;00m\n\u001b[1;32m 30\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m query_job\u001b[38;5;241m.\u001b[39mto_dataframe(create_bqstorage_client\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mFalse\u001b[39;00m)\n", 120 | "File \u001b[0;32m~/PycharmProjects/uniswap-automation/venv/lib/python3.8/site-packages/google/cloud/bigquery/client.py:3331\u001b[0m, in \u001b[0;36mClient.query\u001b[0;34m(self, query, job_config, job_id, job_id_prefix, location, project, retry, timeout, job_retry, api_method)\u001b[0m\n\u001b[1;32m 3320\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m _job_helpers\u001b[38;5;241m.\u001b[39mquery_jobs_query(\n\u001b[1;32m 3321\u001b[0m \u001b[38;5;28mself\u001b[39m,\n\u001b[1;32m 3322\u001b[0m query,\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 3328\u001b[0m job_retry,\n\u001b[1;32m 3329\u001b[0m )\n\u001b[1;32m 3330\u001b[0m \u001b[38;5;28;01melif\u001b[39;00m api_method \u001b[38;5;241m==\u001b[39m 
enums\u001b[38;5;241m.\u001b[39mQueryApiMethod\u001b[38;5;241m.\u001b[39mINSERT:\n\u001b[0;32m-> 3331\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43m_job_helpers\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mquery_jobs_insert\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 3332\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3333\u001b[0m \u001b[43m \u001b[49m\u001b[43mquery\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3334\u001b[0m \u001b[43m \u001b[49m\u001b[43mjob_config\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3335\u001b[0m \u001b[43m \u001b[49m\u001b[43mjob_id\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3336\u001b[0m \u001b[43m \u001b[49m\u001b[43mjob_id_prefix\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3337\u001b[0m \u001b[43m \u001b[49m\u001b[43mlocation\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3338\u001b[0m \u001b[43m \u001b[49m\u001b[43mproject\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3339\u001b[0m \u001b[43m \u001b[49m\u001b[43mretry\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3340\u001b[0m \u001b[43m \u001b[49m\u001b[43mtimeout\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3341\u001b[0m \u001b[43m \u001b[49m\u001b[43mjob_retry\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 3342\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 3343\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 3344\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mGot unexpected value for api_method: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mrepr\u001b[39m(api_method)\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m)\n", 121 | "File \u001b[0;32m~/PycharmProjects/uniswap-automation/venv/lib/python3.8/site-packages/google/cloud/bigquery/_job_helpers.py:114\u001b[0m, in \u001b[0;36mquery_jobs_insert\u001b[0;34m(client, query, job_config, job_id, job_id_prefix, location, project, retry, timeout, job_retry)\u001b[0m\n\u001b[1;32m 111\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 112\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m query_job\n\u001b[0;32m--> 114\u001b[0m future \u001b[38;5;241m=\u001b[39m \u001b[43mdo_query\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 115\u001b[0m \u001b[38;5;66;03m# The future might be in a failed state now, but if it's\u001b[39;00m\n\u001b[1;32m 116\u001b[0m \u001b[38;5;66;03m# unrecoverable, we'll find out when we ask for it's result, at which\u001b[39;00m\n\u001b[1;32m 117\u001b[0m \u001b[38;5;66;03m# point, we may retry.\u001b[39;00m\n\u001b[1;32m 118\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m job_id_given:\n", 122 | "File \u001b[0;32m~/PycharmProjects/uniswap-automation/venv/lib/python3.8/site-packages/google/cloud/bigquery/_job_helpers.py:91\u001b[0m, in \u001b[0;36mquery_jobs_insert..do_query\u001b[0;34m()\u001b[0m\n\u001b[1;32m 88\u001b[0m query_job \u001b[38;5;241m=\u001b[39m job\u001b[38;5;241m.\u001b[39mQueryJob(job_ref, query, client\u001b[38;5;241m=\u001b[39mclient, job_config\u001b[38;5;241m=\u001b[39mjob_config)\n\u001b[1;32m 90\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m---> 91\u001b[0m \u001b[43mquery_job\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_begin\u001b[49m\u001b[43m(\u001b[49m\u001b[43mretry\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mretry\u001b[49m\u001b[43m,\u001b[49m\u001b[43m 
\u001b[49m\u001b[43mtimeout\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mtimeout\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 92\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m core_exceptions\u001b[38;5;241m.\u001b[39mConflict \u001b[38;5;28;01mas\u001b[39;00m create_exc:\n\u001b[1;32m 93\u001b[0m \u001b[38;5;66;03m# The thought is if someone is providing their own job IDs and they get\u001b[39;00m\n\u001b[1;32m 94\u001b[0m \u001b[38;5;66;03m# their job ID generation wrong, this could end up returning results for\u001b[39;00m\n\u001b[1;32m 95\u001b[0m \u001b[38;5;66;03m# the wrong query. We thus only try to recover if job ID was not given.\u001b[39;00m\n\u001b[1;32m 96\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m job_id_given:\n", 123 | "File \u001b[0;32m~/PycharmProjects/uniswap-automation/venv/lib/python3.8/site-packages/google/cloud/bigquery/job/query.py:1298\u001b[0m, in \u001b[0;36mQueryJob._begin\u001b[0;34m(self, client, retry, timeout)\u001b[0m\n\u001b[1;32m 1278\u001b[0m \u001b[38;5;124;03m\"\"\"API call: begin the job via a POST request\u001b[39;00m\n\u001b[1;32m 1279\u001b[0m \n\u001b[1;32m 1280\u001b[0m \u001b[38;5;124;03mSee\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 1294\u001b[0m \u001b[38;5;124;03m ValueError: If the job has already begun.\u001b[39;00m\n\u001b[1;32m 1295\u001b[0m \u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m 1297\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m-> 1298\u001b[0m \u001b[38;5;28;43msuper\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43mQueryJob\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[43m)\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_begin\u001b[49m\u001b[43m(\u001b[49m\u001b[43mclient\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mclient\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mretry\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mretry\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtimeout\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mtimeout\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 1299\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m exceptions\u001b[38;5;241m.\u001b[39mGoogleAPICallError \u001b[38;5;28;01mas\u001b[39;00m exc:\n\u001b[1;32m 1300\u001b[0m exc\u001b[38;5;241m.\u001b[39mmessage \u001b[38;5;241m=\u001b[39m _EXCEPTION_FOOTER_TEMPLATE\u001b[38;5;241m.\u001b[39mformat(\n\u001b[1;32m 1301\u001b[0m message\u001b[38;5;241m=\u001b[39mexc\u001b[38;5;241m.\u001b[39mmessage, location\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mlocation, job_id\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mjob_id\n\u001b[1;32m 1302\u001b[0m )\n", 124 | "File \u001b[0;32m~/PycharmProjects/uniswap-automation/venv/lib/python3.8/site-packages/google/cloud/bigquery/job/base.py:510\u001b[0m, in \u001b[0;36m_AsyncJob._begin\u001b[0;34m(self, client, retry, timeout)\u001b[0m\n\u001b[1;32m 507\u001b[0m \u001b[38;5;66;03m# jobs.insert is idempotent because we ensure that every new\u001b[39;00m\n\u001b[1;32m 508\u001b[0m \u001b[38;5;66;03m# job has an ID.\u001b[39;00m\n\u001b[1;32m 509\u001b[0m span_attributes \u001b[38;5;241m=\u001b[39m {\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mpath\u001b[39m\u001b[38;5;124m\"\u001b[39m: path}\n\u001b[0;32m--> 510\u001b[0m api_response \u001b[38;5;241m=\u001b[39m \u001b[43mclient\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_call_api\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 511\u001b[0m \u001b[43m 
\u001b[49m\u001b[43mretry\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 512\u001b[0m \u001b[43m \u001b[49m\u001b[43mspan_name\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mBigQuery.job.begin\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 513\u001b[0m \u001b[43m \u001b[49m\u001b[43mspan_attributes\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mspan_attributes\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 514\u001b[0m \u001b[43m \u001b[49m\u001b[43mjob_ref\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 515\u001b[0m \u001b[43m \u001b[49m\u001b[43mmethod\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mPOST\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 516\u001b[0m \u001b[43m \u001b[49m\u001b[43mpath\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mpath\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 517\u001b[0m \u001b[43m \u001b[49m\u001b[43mdata\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mto_api_repr\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 518\u001b[0m \u001b[43m \u001b[49m\u001b[43mtimeout\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mtimeout\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 519\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 520\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_set_properties(api_response)\n", 125 | "File \u001b[0;32m~/PycharmProjects/uniswap-automation/venv/lib/python3.8/site-packages/google/cloud/bigquery/client.py:756\u001b[0m, in \u001b[0;36mClient._call_api\u001b[0;34m(self, retry, span_name, span_attributes, job_ref, headers, **kwargs)\u001b[0m\n\u001b[1;32m 752\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m span_name \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m 753\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m create_span(\n\u001b[1;32m 754\u001b[0m name\u001b[38;5;241m=\u001b[39mspan_name, attributes\u001b[38;5;241m=\u001b[39mspan_attributes, client\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m, job_ref\u001b[38;5;241m=\u001b[39mjob_ref\n\u001b[1;32m 755\u001b[0m ):\n\u001b[0;32m--> 756\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mcall\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 758\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m call()\n", 126 | "File \u001b[0;32m~/PycharmProjects/uniswap-automation/venv/lib/python3.8/site-packages/google/api_core/retry.py:283\u001b[0m, in \u001b[0;36mRetry.__call__..retry_wrapped_func\u001b[0;34m(*args, **kwargs)\u001b[0m\n\u001b[1;32m 279\u001b[0m target \u001b[38;5;241m=\u001b[39m functools\u001b[38;5;241m.\u001b[39mpartial(func, \u001b[38;5;241m*\u001b[39margs, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n\u001b[1;32m 280\u001b[0m sleep_generator \u001b[38;5;241m=\u001b[39m exponential_sleep_generator(\n\u001b[1;32m 281\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_initial, \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_maximum, multiplier\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_multiplier\n\u001b[1;32m 282\u001b[0m )\n\u001b[0;32m--> 283\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m 
\u001b[43mretry_target\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 284\u001b[0m \u001b[43m \u001b[49m\u001b[43mtarget\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 285\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_predicate\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 286\u001b[0m \u001b[43m \u001b[49m\u001b[43msleep_generator\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 287\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_deadline\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 288\u001b[0m \u001b[43m \u001b[49m\u001b[43mon_error\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mon_error\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 289\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n", 127 | "File \u001b[0;32m~/PycharmProjects/uniswap-automation/venv/lib/python3.8/site-packages/google/api_core/retry.py:190\u001b[0m, in \u001b[0;36mretry_target\u001b[0;34m(target, predicate, sleep_generator, deadline, on_error)\u001b[0m\n\u001b[1;32m 188\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m sleep \u001b[38;5;129;01min\u001b[39;00m sleep_generator:\n\u001b[1;32m 189\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 190\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mtarget\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 192\u001b[0m \u001b[38;5;66;03m# pylint: disable=broad-except\u001b[39;00m\n\u001b[1;32m 193\u001b[0m \u001b[38;5;66;03m# This function explicitly must deal with broad exceptions.\u001b[39;00m\n\u001b[1;32m 194\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m exc:\n", 128 | "File \u001b[0;32m~/PycharmProjects/uniswap-automation/venv/lib/python3.8/site-packages/google/cloud/_http/__init__.py:494\u001b[0m, in \u001b[0;36mJSONConnection.api_request\u001b[0;34m(self, method, path, query_params, data, content_type, headers, api_base_url, api_version, expect_json, _target_object, timeout, extra_api_info)\u001b[0m\n\u001b[1;32m 482\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_make_request(\n\u001b[1;32m 483\u001b[0m method\u001b[38;5;241m=\u001b[39mmethod,\n\u001b[1;32m 484\u001b[0m url\u001b[38;5;241m=\u001b[39murl,\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 490\u001b[0m extra_api_info\u001b[38;5;241m=\u001b[39mextra_api_info,\n\u001b[1;32m 491\u001b[0m )\n\u001b[1;32m 493\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;241m200\u001b[39m \u001b[38;5;241m<\u001b[39m\u001b[38;5;241m=\u001b[39m response\u001b[38;5;241m.\u001b[39mstatus_code \u001b[38;5;241m<\u001b[39m \u001b[38;5;241m300\u001b[39m:\n\u001b[0;32m--> 494\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m exceptions\u001b[38;5;241m.\u001b[39mfrom_http_response(response)\n\u001b[1;32m 496\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m expect_json \u001b[38;5;129;01mand\u001b[39;00m response\u001b[38;5;241m.\u001b[39mcontent:\n\u001b[1;32m 497\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m response\u001b[38;5;241m.\u001b[39mjson()\n", 129 | "\u001b[0;31mForbidden\u001b[0m: 403 POST https://bigquery.googleapis.com/bigquery/v2/projects/imposing-ring-349623/jobs?prettyPrint=false: Access Denied: Project imposing-ring-349623: User does not have bigquery.jobs.create permission in project imposing-ring-349623.\n\nLocation: None\nJob ID: 37e9eec6-9d05-441f-ba3e-1dd76c08a9cd\n" 130 | ] 131 | } 132 | ], 133 | 
"source": [ 134 | "download_bigquery_price_mainnet('0x8ad599c3a0ff1de082011efddc58f1908eb6e6d8', '2022-04-30', '2022-05-05', 100)" 135 | ] 136 | }, 137 | { 138 | "cell_type": "code", 139 | "execution_count": null, 140 | "id": "3c23cb89-c74e-4b39-a40b-bca95805736b", 141 | "metadata": {}, 142 | "outputs": [], 143 | "source": [ 144 | "def get_pool_data_bigquery(\n", 145 | " contract_address,\n", 146 | " date_begin,\n", 147 | " date_end,\n", 148 | " decimals_0,\n", 149 | " decimals_1,\n", 150 | " block_start=0,\n", 151 | "):\n", 152 | "\n", 153 | " \"\"\"\n", 154 | " Queries Google Bigquery for the swap history of Uniswap v3 pool between two dates starting from a particular block of Ethereum Mainnet.\n", 155 | " Preprocesses data to have decimal adjusted amounts and liquidity values.\n", 156 | " \"\"\"\n", 157 | "\n", 158 | " resulting_data = download_bigquery_price_mainnet(\n", 159 | " contract_address.lower(), date_begin, date_end, block_start\n", 160 | "\n", 161 | "\n", 162 | " DECIMAL_ADJ = 10 ** (decimals_1 - decimals_0)\n", 163 | " resulting_data[\"sqrtPriceX96_float\"] = resulting_data[\"sqrtPriceX96\"].astype(float)\n", 164 | " resulting_data[\"quotePrice\"] = (\n", 165 | " (resulting_data[\"sqrtPriceX96_float\"] / 2**96) ** 2\n", 166 | " ) / DECIMAL_ADJ\n", 167 | " resulting_data[\"block_date\"] = pd.to_datetime(resulting_data[\"block_timestamp\"])\n", 168 | " resulting_data = resulting_data.set_index(\"block_date\", drop=False).sort_index()\n", 169 | "\n", 170 | " resulting_data[\"tick_swap\"] = resulting_data[\"tick\"].astype(int)\n", 171 | " resulting_data[\"amount0\"] = resulting_data[\"amount0\"].astype(float)\n", 172 | " resulting_data[\"amount1\"] = resulting_data[\"amount1\"].astype(float)\n", 173 | " resulting_data[\"amount0_adj\"] = (\n", 174 | " resulting_data[\"amount0\"].astype(float) / 10**decimals_0\n", 175 | " )\n", 176 | " resulting_data[\"amount1_adj\"] = (\n", 177 | " resulting_data[\"amount1\"].astype(float) / 10**decimals_1\n", 178 | " )\n", 179 | " resulting_data[\"virtual_liquidity\"] = resulting_data[\"liquidity\"].astype(float)\n", 180 | " resulting_data[\"virtual_liquidity_adj\"] = resulting_data[\"liquidity\"].astype(\n", 181 | " float\n", 182 | " ) / (10 ** ((decimals_0 + decimals_1) / 2))\n", 183 | " resulting_data[\"token_in\"] = resulting_data.apply(\n", 184 | " lambda x: \"token0\" if (x[\"amount0_adj\"] < 0) else \"token1\", axis=1\n", 185 | " )\n", 186 | " resulting_data[\"traded_in\"] = resulting_data.apply(\n", 187 | " lambda x: -x[\"amount0_adj\"] if (x[\"amount0_adj\"] < 0) else -x[\"amount1_adj\"],\n", 188 | " axis=1,\n", 189 | " ).astype(float)\n", 190 | "\n", 191 | " return resulting_data" 192 | ] 193 | }, 194 | { 195 | "cell_type": "code", 196 | "execution_count": 25, 197 | "id": "32646865-f182-4521-a7aa-4f31ce8540bd", 198 | "metadata": {}, 199 | "outputs": [ 200 | { 201 | "data": { 202 | "text/plain": [ 203 | "(0, 88425.77342867808)" 204 | ] 205 | }, 206 | "execution_count": 25, 207 | "metadata": {}, 208 | "output_type": "execute_result" 209 | } 210 | ], 211 | "source": [ 212 | "get_amounts(tick, tickLower, tickUpper, liquidity, dec_0, dec_1)" 213 | ] 214 | }, 215 | { 216 | "cell_type": "code", 217 | "execution_count": 3, 218 | "id": "cfd982ec-8413-4f83-b1e0-7fa317084d0d", 219 | "metadata": {}, 220 | "outputs": [], 221 | "source": [ 222 | "#decimals\n", 223 | "\n", 224 | "dec_0 = 6\n", 225 | "dec_1 = 18" 226 | ] 227 | }, 228 | { 229 | "cell_type": "code", 230 | "execution_count": 4, 231 | "id": "53d99175-0b4d-46a9-a494-2ca71f9cc0e6", 232 | 
"metadata": {}, 233 | "outputs": [], 234 | "source": [ 235 | "def get_amount0(sqrtA, sqrtB, liquidity, decimals):\n", 236 | "\n", 237 | " if sqrtA > sqrtB:\n", 238 | " (sqrtA, sqrtB) = (sqrtB, sqrtA)\n", 239 | "\n", 240 | " amount0 = (liquidity * 2 ** 96 * (sqrtB - sqrtA) / sqrtB / sqrtA) / 10 ** decimals\n", 241 | "\n", 242 | " return amount0\n", 243 | "\n", 244 | "\n", 245 | "def get_amount1(sqrtA, sqrtB, liquidity, decimals):\n", 246 | "\n", 247 | " if sqrtA > sqrtB:\n", 248 | " (sqrtA, sqrtB) = (sqrtB, sqrtA)\n", 249 | "\n", 250 | " amount1 = liquidity * (sqrtB - sqrtA) / 2 ** 96 / 10 ** decimals\n", 251 | "\n", 252 | " return amount1\n", 253 | "\n", 254 | "def get_amounts(tick, tickA, tickB, liquidity, decimal0, decimal1):\n", 255 | "\n", 256 | " sqrt = int(1.0001 ** (tick / 2) * (2 ** 96))\n", 257 | " sqrtA = int(1.0001 ** (tickA / 2) * (2 ** 96))\n", 258 | " sqrtB = int(1.0001 ** (tickB / 2) * (2 ** 96))\n", 259 | "\n", 260 | " if sqrtA > sqrtB:\n", 261 | " (sqrtA, sqrtB) = (sqrtB, sqrtA)\n", 262 | "\n", 263 | " if sqrt <= sqrtA:\n", 264 | "\n", 265 | " amount0 = get_amount0(sqrtA, sqrtB, liquidity, decimal0)\n", 266 | " return amount0, 0\n", 267 | "\n", 268 | " elif sqrtB > sqrt > sqrtA:\n", 269 | " amount0 = get_amount0(sqrt, sqrtB, liquidity, decimal0)\n", 270 | " amount1 = get_amount1(sqrtA, sqrt, liquidity, decimal1)\n", 271 | " return amount0, amount1\n", 272 | "\n", 273 | " else:\n", 274 | " amount1 = get_amount1(sqrtA, sqrtB, liquidity, decimal1)\n", 275 | " return 0, amount1\n", 276 | " \n", 277 | " \n", 278 | "def amounts_relation(tick, tickA, tickB, decimals0, decimals1):\n", 279 | "\n", 280 | " sqrt = (1.0001 ** tick / 10 ** (decimals1 - decimals0)) ** (1 / 2)\n", 281 | " sqrtA = (1.0001 ** tickA / 10 ** (decimals1 - decimals0)) ** (1 / 2)\n", 282 | " sqrtB = (1.0001 ** tickB / 10 ** (decimals1 - decimals0)) ** (1 / 2)\n", 283 | "\n", 284 | " if sqrt == sqrtA or sqrt == sqrtB:\n", 285 | " relation = 0\n", 286 | " # print(\"There is 0 tokens on one side\")\n", 287 | " else:\n", 288 | " relation = (sqrt - sqrtA) / ((1 / sqrt) - (1 / sqrtB))\n", 289 | " return relation" 290 | ] 291 | }, 292 | { 293 | "cell_type": "code", 294 | "execution_count": 50, 295 | "id": "1e129d0f-9b61-495c-915c-edc79bf1fb96", 296 | "metadata": {}, 297 | "outputs": [], 298 | "source": [ 299 | "# Position https://etherscan.io/address/0xc36442b4a4522e871399cd717abdd847ab11fe88#readContract\n", 300 | "# TokenId 236595\n", 301 | "# Transaction - https://etherscan.io/tx/0x9b173baa9442fcb53cdde9876eaabeac9a71ff93d822e5398b5cfd6a5d38a199\n", 302 | "\n", 303 | "token0 = 0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48\n", 304 | "token1 = 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2\n", 305 | "fee = 3000\n", 306 | "tickLower = 196200\n", 307 | "tickUpper = 201660\n", 308 | "pos_liquidity = 26938023944267617\n", 309 | "feeGrowthInside0LastX128 = 1146045437454411223706317130808195\n", 310 | "feeGrowthInside1LastX128 = 472868971286049499087619319314568638950194\n", 311 | "tokensOwed0 = 863372171\n", 312 | "tokensOwed1 = 362821509667276570" 313 | ] 314 | }, 315 | { 316 | "cell_type": "code", 317 | "execution_count": 6, 318 | "id": "ef059f54-15d7-429a-a9cc-7492c997de24", 319 | "metadata": {}, 320 | "outputs": [], 321 | "source": [ 322 | "# ticks i_l, i_u info\n", 323 | "\n", 324 | "#Lower\n", 325 | "L_liquidityGross = 18216572431750930\n", 326 | "L_liquidityNet = 18216572431750930\n", 327 | "L_feeGrowthOutside0X128 = 771898924715078716122457993949735\n", 328 | "L_feeGrowthOutside1X128 = 
299686152818024558401586591746381951788873\n",
329 | "L_tickCumulativeOutside = 874561673429\n",
330 | "L_secondsPerLiquidityOutsideX128 = 198044337547991777424606226079348331092728\n",
331 | "L_secondsOutside = 1624605540\n",
332 | "L_initialized = True\n",
333 | "\n",
334 | "#Upper\n",
335 | "U_liquidityGross = 1683230461217016778\n",
336 | "U_liquidityNet = 1299234642981930080\n",
337 | "U_feeGrowthOutside0X128 = 1226433780500214190158000189806691\n",
338 | "U_feeGrowthOutside1X128 = 365746732484003642662975858100178521758045\n",
339 | "U_tickCumulativeOutside = 4519733975844\n",
340 | "U_secondsPerLiquidityOutsideX128 = 198044337547992220998400984601510297703566\n",
341 | "U_secondsOutside = 1643365229\n",
342 | "U_initialized = True\n",
343 | "\n"
344 | ]
345 | },
346 | {
347 | "cell_type": "code",
348 | "execution_count": 16,
349 | "id": "3c8ac672-e494-45df-bc36-53d4e5fd9ae5",
350 | "metadata": {},
351 | "outputs": [],
352 | "source": [
353 | "# Pool state - https://etherscan.io/address/0x8ad599c3a0ff1de082011efddc58f1908eb6e6d8#readContract\n",
354 | "\n",
355 | "sqrtPriceX96 = 1617871362335671645645002850471718\n",
356 | "tick = 198495\n",
357 | "observationIndex = 976\n",
358 | "observationCardinality = 1440\n",
359 | "observationCardinalityNext = 1440\n",
360 | "feeProtocol = 0\n",
361 | "\n",
362 | "liquidity = 10200180983978284202\n",
363 | "\n",
364 | "feeGrowthGlobal0X128 = 2046477727527315981651672382110787\n",
365 | "feeGrowthGlobal1X128 = 724331803708946607581226850148167065366051"
366 | ]
367 | },
368 | {
369 | "cell_type": "code",
370 | "execution_count": 36,
371 | "id": "fb2fa79a-93a0-4b7a-a216-bd945d7ef799",
372 | "metadata": {},
373 | "outputs": [
374 | {
375 | "data": {
376 | "text/plain": [
377 | "0.0004169930415991001"
378 | ]
379 | },
380 | "execution_count": 36,
381 | "metadata": {},
382 | "output_type": "execute_result"
383 | }
384 | ],
385 | "source": [
386 | "(sqrtPriceX96/2**96)**2 * 10**(dec_0-dec_1)"
387 | ]
388 | },
389 | {
390 | "cell_type": "markdown",
391 | "id": "aca21a35-0d18-4eb8-844a-96c046a7c989",
392 | "metadata": {},
393 | "source": [
394 | "**To find the position's uncollected fees**, compute the fee growth inside its tick range:"
395 | ]
396 | },
397 | {
398 | "cell_type": "code",
399 | "execution_count": 14,
400 | "id": "dac9ee95-1d1a-4b74-837d-5d8c113c561f",
401 | "metadata": {},
402 | "outputs": [],
403 | "source": [
404 | "# All in 1 token"
405 | ]
406 | },
407 | {
408 | "cell_type": "code",
409 | "execution_count": 98,
410 | "id": "783b2f0e-e5d4-4b2b-986c-0d62b97c72b0",
411 | "metadata": {},
412 | "outputs": [],
413 | "source": [
414 | "# Fee growth inside a tick range, following the Uniswap v3 whitepaper bookkeeping\n",
415 | "def get_fees_for_range(tick, tickA, tickB, feeGrowthGlobal, U_feeGrowthOutside, L_feeGrowthOutside):\n",
416 | "    if tick >= tickB:\n",
417 | "        f_a = feeGrowthGlobal - U_feeGrowthOutside\n",
418 | "    else:\n",
419 | "        f_a = U_feeGrowthOutside\n",
420 | "    \n",
421 | "    if tick >= tickA:\n",
422 | "        f_b = L_feeGrowthOutside\n",
423 | "    else:\n",
424 | "        f_b = feeGrowthGlobal - L_feeGrowthOutside\n",
425 | "\n",
426 | "    f_range = feeGrowthGlobal - f_a - f_b\n",
427 | "    \n",
428 | "    return f_range / 2**128"
429 | ]
430 | },
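{
"cell_type": "markdown",
"id": "fee-growth-formula-note",
"metadata": {},
"source": [
"The function above mirrors the fee-growth accounting of the Uniswap v3 whitepaper: with global fee growth $f_g$, current tick $i_c$, and per-tick \"outside\" growth $f_o(i)$, the growth above and below a tick $i$ are\n",
"\n",
"$$f_a(i) = \\begin{cases} f_g - f_o(i) & i_c \\ge i \\\\ f_o(i) & i_c < i \\end{cases}, \\qquad f_b(i) = \\begin{cases} f_o(i) & i_c \\ge i \\\\ f_g - f_o(i) & i_c < i \\end{cases}$$\n",
"\n",
"and the growth inside the range $[i_l, i_u]$ is $f_r = f_g - f_b(i_l) - f_a(i_u)$. The division by $2^{128}$ undoes the X128 fixed-point encoding."
]
},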
431 | {
432 | "cell_type": "code",
433 | "execution_count": 99,
434 | "id": "1875fe35-6832-42f7-b032-f58f65483b6f",
435 | "metadata": {},
436 | "outputs": [
437 | {
438 | "name": "stdout",
439 | "output_type": "stream",
440 | "text": [
441 | "173.0883646421886\n",
442 | "1.414855043699903e-07\n"
443 | ]
444 | }
445 | ],
446 | "source": [
447 | "print(get_fees_for_range(tick, tickLower, tickUpper, feeGrowthGlobal1X128, U_feeGrowthOutside1X128, L_feeGrowthOutside1X128))\n",
448 | "print(get_fees_for_range(tick, tickLower, tickUpper, feeGrowthGlobal0X128, U_feeGrowthOutside0X128, L_feeGrowthOutside0X128))"
449 | ]
450 | },
451 | {
452 | "cell_type": "code",
453 | "execution_count": 91,
454 | "id": "541138bd-e2bc-42eb-acfc-0b0c82cbda82",
455 | "metadata": {},
456 | "outputs": [
457 | {
458 | "data": {
459 | "text/plain": [
460 | "1389.6370110647442"
461 | ]
462 | },
463 | "execution_count": 91,
464 | "metadata": {},
465 | "output_type": "execute_result"
466 | }
467 | ],
468 | "source": [
469 | "feeGrowthInside1LastX128/2**128"
470 | ]
471 | },
472 | {
473 | "cell_type": "code",
474 | "execution_count": 100,
475 | "id": "c3f58027-1f5a-46e6-a23d-83f08d24defb",
476 | "metadata": {},
477 | "outputs": [],
478 | "source": [
479 | "def fees_uncollected_position(tick, tickA, tickB, feeGrowthGlobal, U_feeGrowthOutside, L_feeGrowthOutside, feeGrowthInside, pos_liquidity, decimals):  # NOTE: decimals is unused here\n",
480 | "    f_range_uncollected = get_fees_for_range(tick, tickA, tickB, feeGrowthGlobal, U_feeGrowthOutside, L_feeGrowthOutside)\n",
481 | "    return pos_liquidity * (f_range_uncollected - feeGrowthInside/2**128)"
482 | ]
483 | },
484 | {
485 | "cell_type": "code",
486 | "execution_count": 101,
487 | "id": "6cc051c8-0c67-4627-bf6a-07a5f17e9c77",
488 | "metadata": {},
489 | "outputs": [
490 | {
491 | "data": {
492 | "text/plain": [
493 | "-3.2771416566697157e+19"
494 | ]
495 | },
496 | "execution_count": 101,
497 | "metadata": {},
498 | "output_type": "execute_result"
499 | }
500 | ],
501 | "source": [
502 | "fees_uncollected_position(tick, tickLower, tickUpper, feeGrowthGlobal1X128, U_feeGrowthOutside1X128, L_feeGrowthOutside1X128, feeGrowthInside1LastX128, pos_liquidity, dec_1)\n",
503 | "\n"
504 | ]
505 | },
506 | {
507 | "cell_type": "code",
508 | "execution_count": 102,
509 | "id": "a5445592-351a-4c62-b25b-64d4ccad36aa",
510 | "metadata": {},
511 | "outputs": [
512 | {
513 | "data": {
514 | "text/plain": [
515 | "-86913900179.84258"
516 | ]
517 | },
518 | "execution_count": 102,
519 | "metadata": {},
520 | "output_type": "execute_result"
521 | }
522 | ],
523 | "source": [
524 | "fees_uncollected_position(tick, tickLower, tickUpper, feeGrowthGlobal0X128, U_feeGrowthOutside0X128, L_feeGrowthOutside0X128, feeGrowthInside0LastX128, pos_liquidity, dec_0)\n",
525 | "\n"
526 | ]
527 | },
528 | {
529 | "cell_type": "code",
530 | "execution_count": 76,
531 | "id": "6693b350-08ed-4b23-ac26-1e72ceb9ef0f",
532 | "metadata": {},
533 | "outputs": [
534 | {
535 | "data": {
536 | "text/plain": [
537 | "1389.6370110647442"
538 | ]
539 | },
540 | "execution_count": 76,
541 | "metadata": {},
542 | "output_type": "execute_result"
543 | }
544 | ],
545 | "source": [
546 | "feeGrowthInside1LastX128 / 2**128"
547 | ]
548 | },
549 | {
550 | "cell_type": "code",
551 | "execution_count": 103,
552 | "id": "7aae4ab3-8c33-45d2-98dc-f956f9be9023",
553 | "metadata": {},
554 | "outputs": [
555 | {
556 | "data": {
557 | "text/plain": [
558 | "(193076.88153517284, 59.630328294076946)"
559 | ]
560 | },
561 | "execution_count": 103,
562 | "metadata": {},
563 | "output_type": "execute_result"
564 | }
565 | ],
566 | "source": [
567 | "get_amounts(tick,tickLower, tickUpper, pos_liquidity, dec_0, dec_1)"
568 | ]
569 | },
570 | {
571 | "cell_type": "code",
572 | "execution_count": null,
573 | "id": "b353d7e9-36bb-4331-8337-a02a8b7dd6d7",
574 | "metadata": {},
575 | "outputs": [],
576 | "source": []
577 | },
578 | {
579 | "cell_type": "code",
580 | "execution_count": 49,
581 | "id": 
"58e6ffe8-57d8-445b-a1e2-6cfd1abcc882", 582 | "metadata": {}, 583 | "outputs": [], 584 | "source": [ 585 | "def to_float(x,e):\n", 586 | " c = abs(x)\n", 587 | " sign = 1 \n", 588 | " if x < 0:\n", 589 | " # convert back from two's complement\n", 590 | " c = x - 1 \n", 591 | " c = ~c\n", 592 | " sign = -1\n", 593 | " f = (1.0 * c) / (2 ** e)\n", 594 | " f = f * sign\n", 595 | " return f" 596 | ] 597 | }, 598 | { 599 | "cell_type": "code", 600 | "execution_count": 60, 601 | "id": "a415227b-fa7f-4a81-9d43-ef66f089d23c", 602 | "metadata": {}, 603 | "outputs": [ 604 | { 605 | "data": { 606 | "text/plain": [ 607 | "1731.764049365379" 608 | ] 609 | }, 610 | "execution_count": 60, 611 | "metadata": {}, 612 | "output_type": "execute_result" 613 | } 614 | ], 615 | "source": [ 616 | "to_float(589288769666640050413360360828235182981321, 128)" 617 | ] 618 | }, 619 | { 620 | "cell_type": "code", 621 | "execution_count": 63, 622 | "id": "2040117b-5427-45cc-9a9d-ee97d2b73ec2", 623 | "metadata": {}, 624 | "outputs": [ 625 | { 626 | "data": { 627 | "text/plain": [ 628 | "4.515442969079665e-06" 629 | ] 630 | }, 631 | "execution_count": 63, 632 | "metadata": {}, 633 | "output_type": "execute_result" 634 | } 635 | ], 636 | "source": [ 637 | "to_float(1536525621214938325098269939695336, 128)" 638 | ] 639 | }, 640 | { 641 | "cell_type": "code", 642 | "execution_count": 2, 643 | "id": "e6b712c0-7fdc-4dc1-9641-30de13608467", 644 | "metadata": {}, 645 | "outputs": [], 646 | "source": [ 647 | "import pickle" 648 | ] 649 | }, 650 | { 651 | "cell_type": "code", 652 | "execution_count": 6, 653 | "id": "b74807b3-7e87-4525-8cf9-bc8aa4030d65", 654 | "metadata": {}, 655 | "outputs": [], 656 | "source": [ 657 | "with open('data/eth_usdc_swap.pkl', \"rb\") as fh:\n", 658 | " data = pickle.load(fh)" 659 | ] 660 | }, 661 | { 662 | "cell_type": "code", 663 | "execution_count": 7, 664 | "id": "2b47c53d-dfcc-4e01-a8ae-8e3d1bf9fa77", 665 | "metadata": {}, 666 | "outputs": [], 667 | "source": [ 668 | "import pandas as pd" 669 | ] 670 | }, 671 | { 672 | "cell_type": "code", 673 | "execution_count": 8, 674 | "id": "9282902a-e739-44b6-b084-01bbe4e602e1", 675 | "metadata": {}, 676 | "outputs": [ 677 | { 678 | "data": { 679 | "text/html": [ 680 | "
\n", 681 | "\n", 694 | "\n", 695 | " \n", 696 | " \n", 697 | " \n", 698 | " \n", 699 | " \n", 700 | " \n", 701 | " \n", 702 | " \n", 703 | " \n", 704 | " \n", 705 | " \n", 706 | " \n", 707 | " \n", 708 | " \n", 709 | " \n", 710 | " \n", 711 | " \n", 712 | " \n", 713 | " \n", 714 | " \n", 715 | " \n", 716 | " \n", 717 | " \n", 718 | " \n", 719 | " \n", 720 | " \n", 721 | " \n", 722 | " \n", 723 | " \n", 724 | " \n", 725 | " \n", 726 | " \n", 727 | " \n", 728 | " \n", 729 | " \n", 730 | " \n", 731 | " \n", 732 | " \n", 733 | " \n", 734 | " \n", 735 | " \n", 736 | " \n", 737 | " \n", 738 | " \n", 739 | " \n", 740 | " \n", 741 | " \n", 742 | " \n", 743 | " \n", 744 | " \n", 745 | " \n", 746 | " \n", 747 | " \n", 748 | " \n", 749 | " \n", 750 | " \n", 751 | " \n", 752 | " \n", 753 | " \n", 754 | " \n", 755 | " \n", 756 | " \n", 757 | " \n", 758 | " \n", 759 | " \n", 760 | " \n", 761 | " \n", 762 | " \n", 763 | " \n", 764 | " \n", 765 | " \n", 766 | " \n", 767 | " \n", 768 | " \n", 769 | " \n", 770 | " \n", 771 | " \n", 772 | " \n", 773 | " \n", 774 | " \n", 775 | " \n", 776 | " \n", 777 | " \n", 778 | " \n", 779 | " \n", 780 | " \n", 781 | " \n", 782 | " \n", 783 | " \n", 784 | " \n", 785 | " \n", 786 | " \n", 787 | " \n", 788 | " \n", 789 | " \n", 790 | " \n", 791 | " \n", 792 | " \n", 793 | " \n", 794 | " \n", 795 | " \n", 796 | " \n", 797 | " \n", 798 | " \n", 799 | " \n", 800 | " \n", 801 | " \n", 802 | " \n", 803 | " \n", 804 | " \n", 805 | " \n", 806 | " \n", 807 | "
idtimestamptickamount0amount1amountUSD
00x000035d5e2bacbcdfad84b1b97113f2c4a83534bc7e4...1621348552195353-341098.711504104.1156341744.6271658624201755352726503601
10x0000cbd95c55d590cfd68a9c593957aca9de75824e94...1638835892192536-744.6085660.171619960261869437745.729040061016820805721675871721
20x0000d28bd990d3232d2c72b10ad5b99c2803c4648eaf...162323951119802226582.946537-10.54105389451065121726542.96632908660648949465857012522
30x00012ce2df8aacdf8c0ec30631170845704b2249f7b0...1627390084198938200000-86.928111599666651722199677.3127029941910943241438634473
40x000130477fe48e93859ae71712e540327aa48b18c54f...1630237508195615-1116978.5238083501119078.488894339722287168798626164
.....................
129950x0e0ce49275d594edb858c1b14dd0f9028510292a68d1...1637763822192886557850.263723-132.461619620591735566556805.023914860124348187291632405
129960x0e0d874a7581194f18f6d61ea061de18831c66b96352...1646414027197358157600.45-58.500015831458282413157338.818080755112285765395126147
129970x0e0db9c4a365760b5716092966a546c49151fe9bcef5...1649103273194750-46501.3373043089402301884657.004947780983065107394209136967
129980x0e0e743f4a1a329f9bb493a40c86d07cb8722a8e7a72...1624859298200453-393200.386124200393801.5121945686181011115428126334
129990x0e0edbf7febbe6bf72fdceb476b48171fe723ab3d392...1649706615196250200-0.066425846649440457199.699978390770123356691868503756
\n", 808 | "

13000 rows × 6 columns

\n", 809 | "
" 810 | ], 811 | "text/plain": [ 812 | " id timestamp tick \\\n", 813 | "0 0x000035d5e2bacbcdfad84b1b97113f2c4a83534bc7e4... 1621348552 195353 \n", 814 | "1 0x0000cbd95c55d590cfd68a9c593957aca9de75824e94... 1638835892 192536 \n", 815 | "2 0x0000d28bd990d3232d2c72b10ad5b99c2803c4648eaf... 1623239511 198022 \n", 816 | "3 0x00012ce2df8aacdf8c0ec30631170845704b2249f7b0... 1627390084 198938 \n", 817 | "4 0x000130477fe48e93859ae71712e540327aa48b18c54f... 1630237508 195615 \n", 818 | "... ... ... ... \n", 819 | "12995 0x0e0ce49275d594edb858c1b14dd0f9028510292a68d1... 1637763822 192886 \n", 820 | "12996 0x0e0d874a7581194f18f6d61ea061de18831c66b96352... 1646414027 197358 \n", 821 | "12997 0x0e0db9c4a365760b5716092966a546c49151fe9bcef5... 1649103273 194750 \n", 822 | "12998 0x0e0e743f4a1a329f9bb493a40c86d07cb8722a8e7a72... 1624859298 200453 \n", 823 | "12999 0x0e0edbf7febbe6bf72fdceb476b48171fe723ab3d392... 1649706615 196250 \n", 824 | "\n", 825 | " amount0 amount1 \\\n", 826 | "0 -341098.711504 104.1156 \n", 827 | "1 -744.608566 0.171619960261869437 \n", 828 | "2 26582.946537 -10.541053894510651217 \n", 829 | "3 200000 -86.928111599666651722 \n", 830 | "4 -1116978.523808 350 \n", 831 | "... ... ... \n", 832 | "12995 557850.263723 -132.461619620591735566 \n", 833 | "12996 157600.45 -58.500015831458282413 \n", 834 | "12997 -4650 1.337304308940230188 \n", 835 | "12998 -393200.386124 200 \n", 836 | "12999 200 -0.066425846649440457 \n", 837 | "\n", 838 | " amountUSD \n", 839 | "0 341744.6271658624201755352726503601 \n", 840 | "1 745.729040061016820805721675871721 \n", 841 | "2 26542.96632908660648949465857012522 \n", 842 | "3 199677.3127029941910943241438634473 \n", 843 | "4 1119078.488894339722287168798626164 \n", 844 | "... ... \n", 845 | "12995 556805.023914860124348187291632405 \n", 846 | "12996 157338.818080755112285765395126147 \n", 847 | "12997 4657.004947780983065107394209136967 \n", 848 | "12998 393801.5121945686181011115428126334 \n", 849 | "12999 199.699978390770123356691868503756 \n", 850 | "\n", 851 | "[13000 rows x 6 columns]" 852 | ] 853 | }, 854 | "execution_count": 8, 855 | "metadata": {}, 856 | "output_type": "execute_result" 857 | } 858 | ], 859 | "source": [ 860 | "pd.DataFrame(data)" 861 | ] 862 | }, 863 | { 864 | "cell_type": "code", 865 | "execution_count": 9, 866 | "id": "87c21fd6-f5ef-4a89-8009-05b7950cfcb1", 867 | "metadata": {}, 868 | "outputs": [ 869 | { 870 | "data": { 871 | "text/plain": [ 872 | "'1.4.2'" 873 | ] 874 | }, 875 | "execution_count": 9, 876 | "metadata": {}, 877 | "output_type": "execute_result" 878 | } 879 | ], 880 | "source": [ 881 | "pd.__version__" 882 | ] 883 | }, 884 | { 885 | "cell_type": "code", 886 | "execution_count": null, 887 | "id": "40cbaec1-c1ad-4a84-b595-c1c78977da3d", 888 | "metadata": {}, 889 | "outputs": [], 890 | "source": [] 891 | } 892 | ], 893 | "metadata": { 894 | "kernelspec": { 895 | "display_name": "Python 3 (ipykernel)", 896 | "language": "python", 897 | "name": "python3" 898 | }, 899 | "language_info": { 900 | "codemirror_mode": { 901 | "name": "ipython", 902 | "version": 3 903 | }, 904 | "file_extension": ".py", 905 | "mimetype": "text/x-python", 906 | "name": "python", 907 | "nbconvert_exporter": "python", 908 | "pygments_lexer": "ipython3", 909 | "version": "3.8.9" 910 | } 911 | }, 912 | "nbformat": 4, 913 | "nbformat_minor": 5 914 | } 915 | --------------------------------------------------------------------------------