├── CAR_V1-2_High-Level Impact & Screening.ipynb ├── CAR_V3-4_Distribution & Risk Analysis.ipynb ├── CAR_V5-7_Temporal Dynamics Analysis.ipynb ├── CAR_V8-9_Model Robustness & Diagnostics.ipynb ├── CAR_VA1-3_Advanced Techniques & Presentation Methods.ipynb ├── CAR_main.ipynb ├── LICENSE ├── RDD_Price to excel.ipynb ├── RDD_Price.ipynb ├── RDD_Vol. to excel.ipynb ├── RDD_Vol..ipynb ├── README.md ├── requirements.txt └── sample ├── sampleV1:Event Impact Overview for All Windows.png ├── sampleV2.1:Significance Heatmaps for All Windows.png ├── sampleV2.2:Significance Heatmaps for All Windows.png ├── sampleV2.3:Significance Heatmaps for All Windows.png ├── sampleV3:CAR Distribution across All Windows.png ├── sampleV4:Cumulative Distribution Function (CDF) Plot.png ├── sampleV5:Mean CAR Timeliness Analysis (Point Plot).png ├── sampleV6:Model Robustness Check (Grouped Bar Chart).png ├── sampleV7:Daily Cumulative Abnormal Return (CAR) Trend.png ├── sampleV8:CAR Contribution Waterfall Chart.png ├── sampleV9:Model Fit Diagnostic (Scatter Plot).png ├── sampleVA1:Advanced Facet Grid.png ├── sampleVA2:Significance Annotation on a Bar Chart (Corrected).png ├── sampleVA3:Interactive Dashboard with Plotly (Corrected).html ├── sample_rd_results_Price.xlsx └── sample_rd_results_vol.xlsx /CAR_V1-2_High-Level Impact & Screening.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "717daa4c", 6 | "metadata": {}, 7 | "source": [ 8 | "Visualization 1 (Enhanced): Event Impact Overview for All Windows" 9 | ] 10 | }, 11 | { 12 | "cell_type": "code", 13 | "execution_count": null, 14 | "id": "dc290b28", 15 | "metadata": {}, 16 | "outputs": [], 17 | "source": [ 18 | "import pandas as pd\n", 19 | "import seaborn as sns\n", 20 | "import matplotlib.pyplot as plt\n", 21 | "\n", 22 | "# --- Visualization 1 (Enhanced): Event Impact Overview for All Windows ---\n", 23 | "\n", 24 | "# Note: This script reads the result file from the previous analysis,\n", 25 | "# not the original data file.\n", 26 | "# Therefore, there's no need to adjust the paths for Gold, Nasdaq100, or SPY.\n", 27 | "try:\n", 28 | " df_mean = pd.read_csv(\"./outcome/event_study_mean_results.csv\")\n", 29 | "except FileNotFoundError:\n", 30 | " print(\"Error: 'event_study_mean_results.csv' not found in the 'outcome' folder. 
\"\n", 31 | " \"Please run the main event study script first.\")\n", 32 | " # It's recommended to comment out exit() in a Jupyter environment.\n", 33 | " # If running as a .py file, you can keep exit().\n", 34 | " # exit()\n", 35 | "\n", 36 | "# --- Data Preparation ---\n", 37 | "# Select a model, but this time we will use data from ALL windows.\n", 38 | "# Check if df_mean was loaded successfully and is not empty.\n", 39 | "if 'df_mean' in locals() and not df_mean.empty:\n", 40 | " model_to_plot = 'MM_SPY'\n", 41 | " df_plot_data = df_mean[df_mean['Model'] == model_to_plot]\n", 42 | "\n", 43 | " # Define a logical order for the x-axis and the hue (colors)\n", 44 | " event_group_order = sorted(df_plot_data['EventGroup'].unique())\n", 45 | " window_order = ['(-1,+1)', '(-5,+5)', '(-10,+10)']\n", 46 | "\n", 47 | " # --- Plotting ---\n", 48 | " # Use hue to represent the different event windows within each group.\n", 49 | " g = sns.catplot(\n", 50 | " data=df_plot_data,\n", 51 | " x='EventGroup',\n", 52 | " y='MeanCAR',\n", 53 | " hue='Window', # Use color to distinguish windows\n", 54 | " hue_order=window_order,\n", 55 | " kind='bar',\n", 56 | " col='Asset', # Facet by Asset\n", 57 | " col_wrap=2,\n", 58 | " order=event_group_order,\n", 59 | " palette='magma', # Use a sequential color palette\n", 60 | " height=5,\n", 61 | " aspect=1.5,\n", 62 | " legend_out=True\n", 63 | " )\n", 64 | "\n", 65 | " # --- Chart Enhancement ---\n", 66 | " g.fig.suptitle(f'Mean CAR by Event Group Across Windows\\nModel: {model_to_plot}', y=1.03)\n", 67 | " g.set_axis_labels(\"Event Group\", \"Mean CAR\")\n", 68 | " g.set_titles(\"Asset: {col_name}\")\n", 69 | " g.despine(left=True)\n", 70 | "\n", 71 | " # Add a horizontal line at y=0\n", 72 | " for ax in g.axes.flat:\n", 73 | " ax.axhline(0, ls='--', color='black', linewidth=0.8)\n", 74 | " ax.tick_params(axis='x', rotation=45)\n", 75 | "\n", 76 | " # Retitle the legend catplot already created; calling add_legend again would draw a duplicate\n", 77 | " g.legend.set_title('Event Window')\n", 78 | "\n", 79 | " plt.tight_layout(rect=[0, 0, 1, 0.97])\n", 80 | " plt.show()" 81 | ] 82 | }, 83 | { 84 | "cell_type": "markdown", 85 | "id": "4991a787", 86 | "metadata": {}, 87 | "source": [ 88 | "Visualization 2 (Enhanced & Ordered): Significance Heatmaps for All Windows" 89 | ] 90 | }, 91 | { 92 | "cell_type": "code", 93 | "execution_count": null, 94 | "id": "a1cc2a31", 95 | "metadata": {}, 96 | "outputs": [], 97 | "source": [ 98 | "import pandas as pd\n", 99 | "import seaborn as sns\n", 100 | "import numpy as np\n", 101 | "import matplotlib.pyplot as plt\n", 102 | "\n", 103 | "# --- Visualization 2 (Enhanced & Ordered): Significance Heatmaps for All Windows ---\n", 104 | "\n", 105 | "# Note: This script reads the result file from the previous analysis,\n", 106 | "# not the original data file.\n", 107 | "# Therefore, there's no need to adjust the paths for Gold, Nasdaq100, or SPY.\n", 108 | "try:\n", 109 | " df_mean = pd.read_csv(\"./outcome/event_study_mean_results.csv\")\n", 110 | "except FileNotFoundError:\n", 111 | " print(\"Error: 'event_study_mean_results.csv' not found in the 'outcome' folder. 
\"\n", 112 | " \"Please run the main event study script first.\")\n", 113 | " # It's recommended to comment out exit() in a Jupyter environment.\n", 114 | " # exit()\n", 115 | "\n", 116 | "# --- Data Preparation & Plotting Loop ---\n", 117 | "# Check if df_mean was loaded successfully and is not empty\n", 118 | "if 'df_mean' in locals() and not df_mean.empty:\n", 119 | " # Select a model to analyze\n", 120 | " model_to_plot = 'MM_SPY'\n", 121 | " df_model_data = df_mean[df_mean['Model'] == model_to_plot]\n", 122 | "\n", 123 | " # --- MODIFICATION: Define a fixed order for window display ---\n", 124 | " window_order = ['(-1,+1)', '(-5,+5)', '(-10,+10)']\n", 125 | " # --- END MODIFICATION ---\n", 126 | "\n", 127 | " # Function to convert p-values to significance stars\n", 128 | " def p_to_stars(p):\n", 129 | " if p <= 0.01: return '***'\n", 130 | " if p <= 0.05: return '**'\n", 131 | " if p <= 0.1: return '*'\n", 132 | " return ''\n", 133 | "\n", 134 | " # Loop through each event window in the specified order\n", 135 | " for window in window_order:\n", 136 | " # Filter data for the current window\n", 137 | " df_plot_data = df_model_data[df_model_data['Window'] == window]\n", 138 | " \n", 139 | " if df_plot_data.empty:\n", 140 | " print(f\"Warning: No data found for window {window}. Skipping this heatmap.\")\n", 141 | " continue\n", 142 | "\n", 143 | " # Pivot the data for the heatmap\n", 144 | " try:\n", 145 | " heatmap_data = df_plot_data.pivot_table(index='Asset', columns='EventGroup', values='MeanCAR')\n", 146 | " p_value_data = df_plot_data.pivot_table(index='Asset', columns='EventGroup', values='p-value') # Corrected column name to 'p-value'\n", 147 | " annotations = p_value_data.applymap(p_to_stars)\n", 148 | " except Exception as e:\n", 149 | " print(f\"Could not create pivot table for window {window}. 
Error: {e}\")\n", 150 | " continue\n", 151 | "\n", 152 | " # --- Plotting ---\n", 153 | " plt.figure(figsize=(16, 8))\n", 154 | " sns.heatmap(\n", 155 | " heatmap_data,\n", 156 | " annot=annotations, # Overlay the significance stars\n", 157 | " fmt='s', # Format as a string\n", 158 | " cmap='vlag', # A good diverging colormap (blue for negative, red for positive)\n", 159 | " center=0,\n", 160 | " linewidths=.5,\n", 161 | " cbar_kws={'label': 'Mean Cumulative Abnormal Return (CAR)'} # Color bar label\n", 162 | " )\n", 163 | "\n", 164 | " # --- Chart Enhancement ---\n", 165 | " plt.title(f'Heatmap of Mean CAR Significance\\nModel: {model_to_plot}, Window: {window}', fontsize=16)\n", 166 | " plt.xlabel('Event Group', fontsize=12)\n", 167 | " plt.ylabel('Asset', fontsize=12)\n", 168 | " plt.xticks(rotation=45, ha='right')\n", 169 | " plt.yticks(rotation=0)\n", 170 | " plt.tight_layout()\n", 171 | " \n", 172 | " # Display each plot in the correct order\n", 173 | " plt.show()" 174 | ] 175 | } 176 | ], 177 | "metadata": { 178 | "kernelspec": { 179 | "display_name": "cuda", 180 | "language": "python", 181 | "name": "python3" 182 | }, 183 | "language_info": { 184 | "codemirror_mode": { 185 | "name": "ipython", 186 | "version": 3 187 | }, 188 | "file_extension": ".py", 189 | "mimetype": "text/x-python", 190 | "name": "python", 191 | "nbconvert_exporter": "python", 192 | "pygments_lexer": "ipython3", 193 | "version": "3.10.16" 194 | } 195 | }, 196 | "nbformat": 4, 197 | "nbformat_minor": 5 198 | } 199 | -------------------------------------------------------------------------------- /CAR_V3-4_Distribution & Risk Analysis.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "5b6246ed", 6 | "metadata": {}, 7 | "source": [ 8 | "Visualization 3 (Enhanced): CAR Distribution across All Windows" 9 | ] 10 | }, 11 | { 12 | "cell_type": "code", 13 | "execution_count": null, 14 | "id": "ac8cc48f", 15 | "metadata": {}, 16 | "outputs": [], 17 | "source": [ 18 | "import pandas as pd\n", 19 | "import seaborn as sns\n", 20 | "import matplotlib.pyplot as plt\n", 21 | "import re\n", 22 | "\n", 23 | "# --- Visualization 3 (Enhanced): CAR Distribution across All Windows ---\n", 24 | "\n", 25 | "# Load the wide-format results\n", 26 | "try:\n", 27 | " df_wide = pd.read_csv(\"./outcome/event_study_wide_results.csv\")\n", 28 | "except FileNotFoundError:\n", 29 | " print(\"Error: 'event_study_wide_results.csv' not found in the 'outcome' folder.\")\n", 30 | " exit()\n", 31 | "\n", 32 | "# --- Data Preparation ---\n", 33 | "# Melt the DataFrame to a long format\n", 34 | "id_vars = ['Asset', 'EventGroup', 'EventDate']\n", 35 | "value_vars = [col for col in df_wide.columns if col.startswith('CAR_')]\n", 36 | "df_long = pd.melt(df_wide, id_vars=id_vars, value_vars=value_vars,\n", 37 | " var_name='CAR_Type', value_name='CAR_Value')\n", 38 | "\n", 39 | "# Extract Model and Window from the 'CAR_Type' column\n", 40 | "def parse_car_type(car_type_str):\n", 41 | " match = re.search(r'CAR_(.*)\\((.*)\\)', car_type_str)\n", 42 | " if match: return match.group(1), f\"({match.group(2)})\"\n", 43 | " return None, None\n", 44 | "df_long[['Model', 'Window']] = df_long['CAR_Type'].apply(lambda x: pd.Series(parse_car_type(x)))\n", 45 | "\n", 46 | "# Filter for the specific model we want to visualize\n", 47 | "model_to_plot = 'MM_SPY'\n", 48 | "df_plot_data = df_long[df_long['Model'] == model_to_plot]\n", 49 | "\n", 50 | "# Define order for 
facets\n", 51 | "window_order = ['(-1,+1)', '(-5,+5)', '(-10,+10)']\n", 52 | "\n", 53 | "\n", 54 | "# --- Plotting ---\n", 55 | "# Use catplot to create a grid of violin plots\n", 56 | "g = sns.catplot(\n", 57 | " data=df_plot_data,\n", 58 | " x='EventGroup',\n", 59 | " y='CAR_Value',\n", 60 | " kind='violin',\n", 61 | " row='Asset', # Each row is an Asset\n", 62 | " col='Window', # Each column is an Event Window\n", 63 | " col_order=window_order,\n", 64 | " palette='muted',\n", 65 | " inner='quart',\n", 66 | " height=4,\n", 67 | " aspect=1.8,\n", 68 | " sharey=True # Use the same y-axis for all subplots for easy comparison\n", 69 | ")\n", 70 | "\n", 71 | "# --- Chart Enhancement ---\n", 72 | "g.fig.suptitle(f'Distribution of CARs by Asset and Event Window\\nModel: {model_to_plot}', y=1.03)\n", 73 | "g.set_axis_labels(\"Event Group\", \"CAR Value\")\n", 74 | "g.set_titles(row_template=\"Asset: {row_name}\", col_template=\"Window: {col_name}\")\n", 75 | "g.despine(left=True)\n", 76 | "\n", 77 | "# Add a horizontal line at y=0\n", 78 | "for ax in g.axes.flat:\n", 79 | " ax.axhline(0, ls='--', color='black', linewidth=0.8)\n", 80 | " ax.tick_params(axis='x', rotation=45)\n", 81 | "\n", 82 | "plt.tight_layout(rect=[0, 0, 1, 0.96])\n", 83 | "plt.show()" 84 | ] 85 | }, 86 | { 87 | "cell_type": "markdown", 88 | "id": "b2bc2184", 89 | "metadata": {}, 90 | "source": [ 91 | "Visualization 4: Cumulative Distribution Function (CDF) Plot" 92 | ] 93 | }, 94 | { 95 | "cell_type": "code", 96 | "execution_count": null, 97 | "id": "5f7a5ded", 98 | "metadata": {}, 99 | "outputs": [], 100 | "source": [ 101 | "import pandas as pd\n", 102 | "import seaborn as sns\n", 103 | "import matplotlib.pyplot as plt\n", 104 | "import re\n", 105 | "\n", 106 | "# --- Visualization 4: Cumulative Distribution Function (CDF) Plot ---\n", 107 | "\n", 108 | "# Load the wide-format results which contain individual event CARs\n", 109 | "try:\n", 110 | " df_wide = pd.read_csv(\"./outcome/event_study_wide_results.csv\")\n", 111 | "except FileNotFoundError:\n", 112 | " print(\"Error: 'event_study_wide_results.csv' not found in the 'outcome' folder.\")\n", 113 | " exit()\n", 114 | "\n", 115 | "# --- Data Preparation ---\n", 116 | "# Melt the DataFrame to a 'long' format, same as for the violin plot\n", 117 | "id_vars = ['Asset', 'EventGroup', 'EventDate']\n", 118 | "value_vars = [col for col in df_wide.columns if col.startswith('CAR_')]\n", 119 | "df_long = pd.melt(df_wide, id_vars=id_vars, value_vars=value_vars,\n", 120 | " var_name='CAR_Type', value_name='CAR_Value')\n", 121 | "\n", 122 | "# Extract Model and Window from the 'CAR_Type' column\n", 123 | "def parse_car_type(car_type_str):\n", 124 | " match = re.search(r'CAR_(.*)\\((.*)\\)', car_type_str)\n", 125 | " if match: return match.group(1), f\"({match.group(2)})\"\n", 126 | " return None, None\n", 127 | "df_long[['Model', 'Window']] = df_long['CAR_Type'].apply(lambda x: pd.Series(parse_car_type(x)))\n", 128 | "\n", 129 | "# --- Plotting ---\n", 130 | "# To create a clear comparison, let's focus on a single asset, model, and window.\n", 131 | "# We will compare the CAR distributions for different event groups.\n", 132 | "asset_to_plot = 'Bitcoin'\n", 133 | "model_to_plot = 'MM_SPY'\n", 134 | "window_to_plot = '(-10,+10)'\n", 135 | "\n", 136 | "# Filter the data for our specific slice\n", 137 | "df_plot_data = df_long[\n", 138 | " (df_long['Asset'] == asset_to_plot) &\n", 139 | " (df_long['Model'] == model_to_plot) &\n", 140 | " (df_long['Window'] == window_to_plot)\n", 141 | 
"]\n", 142 | "\n", 143 | "# Let's compare key training groups\n", 144 | "groups_to_compare = ['internal_good_train', 'internal_bad_train', 'external_bad_train']\n", 145 | "df_plot_data = df_plot_data[df_plot_data['EventGroup'].isin(groups_to_compare)]\n", 146 | "\n", 147 | "\n", 148 | "plt.figure(figsize=(12, 7))\n", 149 | "sns.ecdfplot(data=df_plot_data, x='CAR_Value', hue='EventGroup', linewidth=2.5)\n", 150 | "\n", 151 | "# --- Chart Enhancement ---\n", 152 | "plt.title(f'Cumulative Distribution of CARs for {asset_to_plot}\\nModel: {model_to_plot}, Window: {window_to_plot}', fontsize=16)\n", 153 | "plt.xlabel('CAR Value', fontsize=12)\n", 154 | "plt.ylabel('Cumulative Probability', fontsize=12)\n", 155 | "plt.grid(True, which='both', linestyle='--', linewidth=0.5)\n", 156 | "plt.axvline(0, ls=':', color='black', linewidth=1) # Vertical line at x=0\n", 157 | "plt.legend(title='Event Group')\n", 158 | "\n", 159 | "plt.show()" 160 | ] 161 | } 162 | ], 163 | "metadata": { 164 | "kernelspec": { 165 | "display_name": "cuda", 166 | "language": "python", 167 | "name": "python3" 168 | }, 169 | "language_info": { 170 | "codemirror_mode": { 171 | "name": "ipython", 172 | "version": 3 173 | }, 174 | "file_extension": ".py", 175 | "mimetype": "text/x-python", 176 | "name": "python", 177 | "nbconvert_exporter": "python", 178 | "pygments_lexer": "ipython3", 179 | "version": "3.10.16" 180 | } 181 | }, 182 | "nbformat": 4, 183 | "nbformat_minor": 5 184 | } 185 | -------------------------------------------------------------------------------- /CAR_V5-7_Temporal Dynamics Analysis.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "12802041", 6 | "metadata": {}, 7 | "source": [ 8 | "Visualization 5: Mean CAR Timeliness Analysis (Point Plot)" 9 | ] 10 | }, 11 | { 12 | "cell_type": "code", 13 | "execution_count": null, 14 | "id": "5e0d910d", 15 | "metadata": {}, 16 | "outputs": [], 17 | "source": [ 18 | "import pandas as pd\n", 19 | "import seaborn as sns\n", 20 | "import matplotlib.pyplot as plt\n", 21 | "\n", 22 | "# --- Visualization 5: Mean CAR Timeliness Analysis (Point Plot) ---\n", 23 | "\n", 24 | "# Load the aggregated results data\n", 25 | "try:\n", 26 | " df_mean = pd.read_csv(\"./outcome/event_study_mean_results.csv\")\n", 27 | "except FileNotFoundError:\n", 28 | " print(\"Error: 'event_study_mean_results.csv' not found in the 'outcome' folder.\")\n", 29 | " exit()\n", 30 | "\n", 31 | "# --- Data Preparation ---\n", 32 | "# Select a model to analyze across all windows\n", 33 | "model_to_plot = 'MM_SPY'\n", 34 | "df_plot_data = df_mean[df_mean['Model'] == model_to_plot].reset_index()\n", 35 | "\n", 36 | "# Define the logical order for the x-axis\n", 37 | "window_order = ['(-1,+1)', '(-5,+5)', '(-10,+10)']\n", 38 | "\n", 39 | "\n", 40 | "# --- Plotting ---\n", 41 | "# Use catplot with kind='point' to show trends across windows.\n", 42 | "# We will create a grid of plots, faceted by Asset.\n", 43 | "g = sns.catplot(\n", 44 | " data=df_plot_data,\n", 45 | " x='Window',\n", 46 | " y='MeanCAR',\n", 47 | " hue='EventGroup',\n", 48 | " kind='point',\n", 49 | " col='Asset',\n", 50 | " col_wrap=2,\n", 51 | " order=window_order,\n", 52 | " palette='tab20',\n", 53 | " height=5,\n", 54 | " aspect=1.5,\n", 55 | " sharey=False, # Let y-axis scale differ for each asset if needed\n", 56 | " legend_out=True\n", 57 | ")\n", 58 | "\n", 59 | "# --- Chart Enhancement ---\n", 60 | "g.fig.suptitle(f'Mean CAR by Event 
Window\nModel: {model_to_plot}', y=1.03)\n", 61 | "g.set_axis_labels(\"Event Window\", \"Mean CAR\")\n", 62 | "g.set_titles(\"Asset: {col_name}\")\n", 63 | "g.despine(left=True)\n", 64 | "\n", 65 | "# Add a horizontal line at y=0\n", 66 | "for ax in g.axes.flat:\n", 67 | " ax.axhline(0, ls='--', color='black', linewidth=0.8)\n", 68 | "\n", 69 | "# Move the legend catplot already created; a second add_legend call would draw a duplicate\n", 70 | "sns.move_legend(g, 'upper left',\n", 71 | " bbox_to_anchor=(1.01, 0.5), title='Event Group') # Position legend outside the plot\n", 72 | "\n", 73 | "plt.tight_layout(rect=[0, 0, 0.9, 0.96]) # Adjust layout to make space for legend\n", 74 | "plt.show()" 75 | ] 76 | }, 77 | { 78 | "cell_type": "markdown", 79 | "id": "e552de71", 80 | "metadata": {}, 81 | "source": [ 82 | "Visualization 6: Model Robustness Check (Grouped Bar Chart)" 83 | ] 84 | }, 85 | { 86 | "cell_type": "code", 87 | "execution_count": null, 88 | "id": "1d2d5df7", 89 | "metadata": {}, 90 | "outputs": [], 91 | "source": [ 92 | "import pandas as pd\n", 93 | "import seaborn as sns\n", 94 | "import matplotlib.pyplot as plt\n", 95 | "\n", 96 | "# --- Visualization 6: Model Robustness Check (Grouped Bar Chart) ---\n", 97 | "\n", 98 | "# Load the aggregated results data\n", 99 | "try:\n", 100 | " df_mean = pd.read_csv(\"./outcome/event_study_mean_results.csv\")\n", 101 | "except FileNotFoundError:\n", 102 | " print(\"Error: 'event_study_mean_results.csv' not found in the 'outcome' folder.\")\n", 103 | " exit()\n", 104 | "\n", 105 | "# --- Data Preparation ---\n", 106 | "# To check robustness, we focus on a specific case: one asset, one window,\n", 107 | "# and a few key event groups, while comparing ALL models.\n", 108 | "asset_to_plot = 'Bitcoin'\n", 109 | "window_to_plot = '(-10,+10)'\n", 110 | "groups_to_compare = ['internal_good_train', 'internal_bad_train']\n", 111 | "\n", 112 | "df_plot_data = df_mean[\n", 113 | " (df_mean['Asset'] == asset_to_plot) &\n", 114 | " (df_mean['Window'] == window_to_plot) &\n", 115 | " (df_mean['EventGroup'].isin(groups_to_compare))\n", 116 | "].reset_index()\n", 117 | "\n", 118 | "\n", 119 | "# --- Plotting ---\n", 120 | "# We will use a bar chart where the x-axis is the model, faceted by event group.\n", 121 | "g = sns.catplot(\n", 122 | " data=df_plot_data,\n", 123 | " x='Model',\n", 124 | " y='MeanCAR',\n", 125 | " kind='bar',\n", 126 | " col='EventGroup', # Create subplots for 'good' vs 'bad' events\n", 127 | " palette='coolwarm_r',\n", 128 | " height=6,\n", 129 | " aspect=1.2\n", 130 | ")\n", 131 | "\n", 132 | "# --- Chart Enhancement ---\n", 133 | "g.fig.suptitle(f'Model Robustness Check for {asset_to_plot}\\nWindow: {window_to_plot}', y=1.03)\n", 134 | "g.set_axis_labels(\"Factor Model\", \"Mean CAR\")\n", 135 | "g.set_titles(\"Event Group: {col_name}\")\n", 136 | "g.despine(left=True)\n", 137 | "\n", 138 | "# Add a horizontal line at y=0 and value labels on bars\n", 139 | "for ax in g.axes.flat:\n", 140 | " ax.axhline(0, ls='--', color='black', linewidth=0.8)\n", 141 | " ax.tick_params(axis='x', rotation=45)\n", 142 | " \n", 143 | " # Add value labels to each bar\n", 144 | " for p in ax.patches:\n", 145 | " ax.annotate(f\"{p.get_height():.3f}\",\n", 146 | " (p.get_x() + p.get_width() / 2., p.get_height()),\n", 147 | " ha='center', va='center',\n", 148 | " xytext=(0, 9),\n", 149 | " textcoords='offset points')\n", 150 | "\n", 151 | "plt.tight_layout(rect=[0, 0, 1, 0.96])\n", 152 | "plt.show()" 153 | ] 154 | }, 155 | { 156 | "cell_type": "markdown", 157 | "id": "515a8dba", 158 | "metadata": {}, 159 | "source": [ 160 | "Visualization 7: 
Daily Cumulative Abnormal Return (CAR) Trend" 161 | ] 162 | }, 163 | { 164 | "cell_type": "code", 165 | "execution_count": null, 166 | "id": "c6bcf99c", 167 | "metadata": {}, 168 | "outputs": [], 169 | "source": [ 170 | "import pandas as pd\n", 171 | "import seaborn as sns\n", 172 | "import matplotlib.pyplot as plt\n", 173 | "\n", 174 | "# --- Visualization 7: Daily Cumulative Abnormal Return (CAR) Trend ---\n", 175 | "\n", 176 | "# Load the daily abnormal return data\n", 177 | "try:\n", 178 | " df_daily_ar = pd.read_csv(\"./outcome/event_study_daily_ar.csv\")\n", 179 | "except FileNotFoundError:\n", 180 | " print(\"Error: 'event_study_daily_ar.csv' not found in the 'outcome' folder.\")\n", 181 | " exit()\n", 182 | "\n", 183 | "# --- Data Preparation ---\n", 184 | "# To create a clear comparison, we focus on a specific asset and model.\n", 185 | "asset_to_plot = 'Bitcoin'\n", 186 | "model_to_plot = 'MM_SPY'\n", 187 | "\n", 188 | "# Filter for the asset and model of interest\n", 189 | "df_plot_data = df_daily_ar[\n", 190 | " (df_daily_ar['Asset'] == asset_to_plot) &\n", 191 | " (df_daily_ar['Model'] == model_to_plot)\n", 192 | "]\n", 193 | "\n", 194 | "# Step 1: Calculate the average Abnormal Return (AR) for each relative day in each group.\n", 195 | "# This gives us the \"Average Abnormal Return\" (AAR).\n", 196 | "df_aar = df_plot_data.groupby(['EventGroup', 'RelativeDay'])['AR'].mean().reset_index()\n", 197 | "\n", 198 | "# Step 2: Calculate the cumulative sum of these daily average returns for each group.\n", 199 | "# This gives us the \"Average Cumulative Abnormal Return\" (ACAR).\n", 200 | "df_aar = df_aar.sort_values(by='RelativeDay')\n", 201 | "df_aar['ACAR'] = df_aar.groupby('EventGroup')['AR'].cumsum()\n", 202 | "\n", 203 | "# --- Plotting ---\n", 204 | "plt.figure(figsize=(14, 8))\n", 205 | "sns.lineplot(\n", 206 | " data=df_aar,\n", 207 | " x='RelativeDay',\n", 208 | " y='ACAR',\n", 209 | " hue='EventGroup',\n", 210 | " linewidth=2.5,\n", 211 | " palette='Set1'\n", 212 | ")\n", 213 | "\n", 214 | "# --- Chart Enhancement ---\n", 215 | "plt.title(f'Average Cumulative Abnormal Return (ACAR) around Event Date\\nAsset: {asset_to_plot}, Model: {model_to_plot}', fontsize=16)\n", 216 | "plt.xlabel('Days Relative to Event Date (Day 0)', fontsize=12)\n", 217 | "plt.ylabel('Average Cumulative Abnormal Return (ACAR)', fontsize=12)\n", 218 | "plt.grid(True, which='both', linestyle='--', linewidth=0.5)\n", 219 | "plt.axvline(0, ls=':', color='black', linewidth=1.5, label='Event Day (0)') # Vertical line for event day\n", 220 | "plt.axhline(0, ls=':', color='black', linewidth=1) # Horizontal line at y=0\n", 221 | "plt.legend(title='Event Group')\n", 222 | "\n", 223 | "plt.show()" 224 | ] 225 | } 226 | ], 227 | "metadata": { 228 | "kernelspec": { 229 | "display_name": "cuda", 230 | "language": "python", 231 | "name": "python3" 232 | }, 233 | "language_info": { 234 | "codemirror_mode": { 235 | "name": "ipython", 236 | "version": 3 237 | }, 238 | "file_extension": ".py", 239 | "mimetype": "text/x-python", 240 | "name": "python", 241 | "nbconvert_exporter": "python", 242 | "pygments_lexer": "ipython3", 243 | "version": "3.10.16" 244 | } 245 | }, 246 | "nbformat": 4, 247 | "nbformat_minor": 5 248 | } 249 | -------------------------------------------------------------------------------- /CAR_V8-9_Model Robustness & Diagnostics.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "755cc98e", 6 | 
"metadata": {}, 7 | "source": [ 8 | "Visualization 8: CAR Contribution Waterfall Chart" 9 | ] 10 | }, 11 | { 12 | "cell_type": "code", 13 | "execution_count": null, 14 | "id": "9b8a5a3b", 15 | "metadata": {}, 16 | "outputs": [], 17 | "source": [ 18 | "import pandas as pd\n", 19 | "import matplotlib.pyplot as plt\n", 20 | "# This visualization requires a specific library.\n", 21 | "# Please install it first by running: pip install waterfallcharts\n", 22 | "try:\n", 23 | " import waterfall_chart\n", 24 | "except ImportError:\n", 25 | " print(\"Error: The 'waterfallcharts' library is required for this visualization.\")\n", 26 | " print(\"Please install it by running: pip install waterfallcharts\")\n", 27 | " exit()\n", 28 | "\n", 29 | "# --- Visualization 8: CAR Contribution Waterfall Chart ---\n", 30 | "\n", 31 | "# Load the daily abnormal return data\n", 32 | "try:\n", 33 | " df_daily_ar = pd.read_csv(\"./outcome/event_study_daily_ar.csv\")\n", 34 | "except FileNotFoundError:\n", 35 | " print(\"Error: 'event_study_daily_ar.csv' not found in the 'outcome' folder.\")\n", 36 | " exit()\n", 37 | "\n", 38 | "# --- Data Preparation ---\n", 39 | "# A waterfall chart is best for analyzing a SINGLE event.\n", 40 | "# Let's select one specific event to analyze from our data.\n", 41 | "# You can change these parameters to analyze any event you are interested in.\n", 42 | "asset_to_plot = 'Dogecoin'\n", 43 | "event_date_to_plot = '2020-09-02' # Example event date\n", 44 | "model_to_plot = 'MM_SPY'\n", 45 | "window_size = 10\n", 46 | "\n", 47 | "# Filter the data for this single specific event\n", 48 | "df_plot_data = df_daily_ar[\n", 49 | " (df_daily_ar['Asset'] == asset_to_plot) &\n", 50 | " (df_daily_ar['EventDate'] == event_date_to_plot) &\n", 51 | " (df_daily_ar['Model'] == model_to_plot)\n", 52 | "]\n", 53 | "\n", 54 | "# Filter for the desired window around the event\n", 55 | "df_plot_data = df_plot_data[df_plot_data['RelativeDay'].between(-window_size, window_size)].copy()\n", 56 | "df_plot_data = df_plot_data.sort_values(by='RelativeDay')\n", 57 | "\n", 58 | "if df_plot_data.empty:\n", 59 | " print(f\"No data found for the selected event ({asset_to_plot} on {event_date_to_plot}). 
Cannot create waterfall chart.\")\n", 60 | "else:\n", 61 | " # Prepare data for the waterfall chart library\n", 62 | " labels = df_plot_data['RelativeDay'].astype(str).tolist()\n", 63 | " values = df_plot_data['AR'].tolist()\n", 64 | " \n", 65 | " # --- Plotting ---\n", 66 | " plt.figure(figsize=(16, 8))\n", 67 | " waterfall_chart.plot(\n", 68 | " labels,\n", 69 | " values,\n", 70 | " formatting='{:.4f}', # Format values to 4 decimal places\n", 71 | " net_label='Final CAR', # Label for the final cumulative bar\n", 72 | " Title=f'CAR Waterfall for {asset_to_plot} on {event_date_to_plot}\\nModel: {model_to_plot}, Window: (-{window_size},+{window_size})'\n", 73 | " )\n", 74 | " \n", 75 | " # --- Chart Enhancement ---\n", 76 | " plt.ylabel('Abnormal Return (AR) Contribution')\n", 77 | " plt.xticks(rotation=45)\n", 78 | " plt.grid(axis='y', linestyle='--', alpha=0.7)\n", 79 | " \n", 80 | " plt.show()" 81 | ] 82 | }, 83 | { 84 | "cell_type": "markdown", 85 | "id": "32317127", 86 | "metadata": {}, 87 | "source": [ 88 | "Visualization 9: Model Fit Diagnostic (Scatter Plot)" 89 | ] 90 | }, 91 | { 92 | "cell_type": "code", 93 | "execution_count": null, 94 | "id": "d519d33f", 95 | "metadata": {}, 96 | "outputs": [], 97 | "source": [ 98 | "import pandas as pd\n", 99 | "import seaborn as sns\n", 100 | "import matplotlib.pyplot as plt\n", 101 | "\n", 102 | "# --- Visualization 9: Model Fit Diagnostic (Scatter Plot) ---\n", 103 | "\n", 104 | "# Load the estimation period data\n", 105 | "try:\n", 106 | " df_estimation = pd.read_csv(\"./outcome/event_study_estimation.csv\")\n", 107 | "except FileNotFoundError:\n", 108 | " print(\"Error: 'event_study_estimation.csv' not found in the 'outcome' folder.\")\n", 109 | " exit()\n", 110 | "\n", 111 | "# --- Data Preparation ---\n", 112 | "# Like the waterfall chart, this diagnostic is for a SINGLE event's estimation period.\n", 113 | "# Let's select one specific event's estimation to analyze.\n", 114 | "asset_to_plot = 'Bitcoin'\n", 115 | "event_date_to_plot = '2020-09-02' # Example event date\n", 116 | "model_to_plot = 'MM_SPY'\n", 117 | "\n", 118 | "# Filter the data for this single specific estimation period\n", 119 | "df_plot_data = df_estimation[\n", 120 | " (df_estimation['Asset'] == asset_to_plot) &\n", 121 | " (df_estimation['EventDate'] == event_date_to_plot) &\n", 122 | " (df_estimation['Model'] == model_to_plot)\n", 123 | "]\n", 124 | "\n", 125 | "# Determine the benchmark column based on the model\n", 126 | "benchmark_col = 'SPY' # Benchmark columns keep their original names in the estimation file (e.g., 'SPY' for MM_SPY)\n", 127 | "\n", 128 | "if df_plot_data.empty:\n", 129 | " print(f\"No estimation data found for the selected event ({asset_to_plot} on {event_date_to_plot}).\")\n", 130 | "else:\n", 131 | " # --- Plotting ---\n", 132 | " plt.figure(figsize=(10, 10))\n", 133 | " \n", 134 | " # Use seaborn's regplot to create a scatter plot and automatically fit an OLS regression line\n", 135 | " sns.regplot(\n", 136 | " data=df_plot_data,\n", 137 | " x=benchmark_col,\n", 138 | " y='AssetReturn',\n", 139 | " line_kws={\"color\": \"red\", \"lw\": 2}, # Customize the regression line\n", 140 | " scatter_kws={\"alpha\": 0.5, \"s\": 50} # Customize the scatter points\n", 141 | " )\n", 142 | " \n", 143 | " # --- Chart Enhancement ---\n", 144 | " plt.title(f'Model Fit Diagnostic for Estimation Period\\nEvent: {asset_to_plot} on {event_date_to_plot}, Model: {model_to_plot}', fontsize=16)\n", 145 | " plt.xlabel(f'Benchmark Daily Return ({benchmark_col.upper()})', fontsize=12)\n", 146 | " plt.ylabel(f'Asset 
Daily Return ({asset_to_plot})', fontsize=12)\n", 147 | " plt.grid(True, linestyle='--', linewidth=0.5)\n", 148 | " \n", 149 | " plt.show()" 150 | ] 151 | } 152 | ], 153 | "metadata": { 154 | "kernelspec": { 155 | "display_name": "cuda", 156 | "language": "python", 157 | "name": "python3" 158 | }, 159 | "language_info": { 160 | "codemirror_mode": { 161 | "name": "ipython", 162 | "version": 3 163 | }, 164 | "file_extension": ".py", 165 | "mimetype": "text/x-python", 166 | "name": "python", 167 | "nbconvert_exporter": "python", 168 | "pygments_lexer": "ipython3", 169 | "version": "3.10.16" 170 | } 171 | }, 172 | "nbformat": 4, 173 | "nbformat_minor": 5 174 | } 175 | -------------------------------------------------------------------------------- /CAR_VA1-3_Advanced Techniques & Presentation Methods.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "735c4aa4", 6 | "metadata": {}, 7 | "source": [ 8 | "Advanced Visualization 1: Advanced Facet Grid" 9 | ] 10 | }, 11 | { 12 | "cell_type": "code", 13 | "execution_count": null, 14 | "id": "9c7a2605", 15 | "metadata": {}, 16 | "outputs": [], 17 | "source": [ 18 | "import pandas as pd\n", 19 | "import seaborn as sns\n", 20 | "import matplotlib.pyplot as plt\n", 21 | "\n", 22 | "# --- Advanced Visualization 1: Advanced Facet Grid ---\n", 23 | "\n", 24 | "# Load the daily abnormal return data\n", 25 | "try:\n", 26 | " df_daily_ar = pd.read_csv(\"./outcome/event_study_daily_ar.csv\")\n", 27 | "except FileNotFoundError:\n", 28 | " print(\"Error: 'event_study_daily_ar.csv' not found in the 'outcome' folder.\")\n", 29 | " exit()\n", 30 | "\n", 31 | "# --- Data Preparation ---\n", 32 | "# We need to separate the 'Set' (_train/_test) from the 'EventGroup' name\n", 33 | "# For example, 'internal_good_train' -> 'internal_good' and 'train'\n", 34 | "\n", 35 | "df_plot_data = df_daily_ar.copy()\n", 36 | "df_plot_data['Set'] = df_plot_data['EventGroup'].apply(lambda x: x.split('_')[-1])\n", 37 | "df_plot_data['BaseEventGroup'] = df_plot_data['EventGroup'].apply(lambda x: '_'.join(x.split('_')[:-1]))\n", 38 | "\n", 39 | "# Focus on one model for clarity\n", 40 | "model_to_plot = 'MM_SPY'\n", 41 | "df_plot_data = df_plot_data[df_plot_data['Model'] == model_to_plot]\n", 42 | "\n", 43 | "# Calculate the Average Cumulative Abnormal Return (ACAR)\n", 44 | "df_aar = df_plot_data.groupby(['Asset', 'BaseEventGroup', 'Set', 'RelativeDay'])['AR'].mean().reset_index()\n", 45 | "df_aar = df_aar.sort_values(by='RelativeDay')\n", 46 | "df_aar['ACAR'] = df_aar.groupby(['Asset', 'BaseEventGroup', 'Set'])['AR'].cumsum()\n", 47 | "\n", 48 | "# --- Plotting ---\n", 49 | "# Create a FacetGrid: rows are Assets, columns are BaseEventGroups.\n", 50 | "# The lines within each subplot are colored by the Set (train vs test).\n", 51 | "g = sns.FacetGrid(\n", 52 | " data=df_aar,\n", 53 | " row='Asset',\n", 54 | " col='BaseEventGroup',\n", 55 | " hue='Set',\n", 56 | " height=4,\n", 57 | " aspect=1.2,\n", 58 | " margin_titles=True,\n", 59 | " palette={'train': 'royalblue', 'test': 'darkorange'}\n", 60 | ")\n", 61 | "\n", 62 | "# Map the line plot to the grid\n", 63 | "g.map(sns.lineplot, 'RelativeDay', 'ACAR', linewidth=2.5).add_legend(title='Set')\n", 64 | "\n", 65 | "# --- Chart Enhancement ---\n", 66 | "g.fig.suptitle(f'ACAR Path Comparison: Train vs. 
Test\\nModel: {model_to_plot}', y=1.03)\n", 67 | "g.set_axis_labels(\"Days Relative to Event\", \"Average Cumulative AR\")\n", 68 | "g.set_titles(row_template=\"{row_name}\", col_template=\"{col_name}\")\n", 69 | "\n", 70 | "# Add reference lines to each subplot\n", 71 | "g.map(plt.axhline, y=0, ls=\":\", c=\".5\")\n", 72 | "g.map(plt.axvline, x=0, ls=\"--\", c=\"red\")\n", 73 | "\n", 74 | "plt.tight_layout(rect=[0, 0, 1, 0.97])\n", 75 | "plt.show()" 76 | ] 77 | }, 78 | { 79 | "cell_type": "markdown", 80 | "id": "4090bf08", 81 | "metadata": {}, 82 | "source": [ 83 | "Advanced Visualization 2: Significance Annotation on a Bar Chart (Corrected)" 84 | ] 85 | }, 86 | { 87 | "cell_type": "code", 88 | "execution_count": null, 89 | "id": "f3825e82", 90 | "metadata": {}, 91 | "outputs": [], 92 | "source": [ 93 | "import pandas as pd\n", 94 | "import seaborn as sns\n", 95 | "import matplotlib.pyplot as plt\n", 96 | "\n", 97 | "# --- Advanced Visualization 2: Significance Annotation on a Bar Chart (Corrected) ---\n", 98 | "\n", 99 | "# Step 1: Load the aggregated results data from the previous analysis\n", 100 | "try:\n", 101 | " df_mean = pd.read_csv(\"./outcome/event_study_mean_results.csv\")\n", 102 | "except FileNotFoundError:\n", 103 | " print(\"Error: 'event_study_mean_results.csv' not found in the 'outcome' folder. Please run the main analysis script first.\")\n", 104 | " # In a .py script, you might want to uncomment the next line to stop execution\n", 105 | " # exit()\n", 106 | "\n", 107 | "# --- Step 2: Data Preparation ---\n", 108 | "\n", 109 | "# Proceed only if the DataFrame was loaded successfully and is not empty\n", 110 | "if 'df_mean' in locals() and not df_mean.empty:\n", 111 | " # --- Adjustable Parameters ---\n", 112 | " asset_to_plot = 'Bitcoin'\n", 113 | " model_to_plot = 'MM_SPY'\n", 114 | " window_to_plot = '(-10,+10)'\n", 115 | " # ---------------------------\n", 116 | "\n", 117 | " # Filter the DataFrame based on the parameters above\n", 118 | " # Using .copy() is recommended to avoid Pandas' SettingWithCopyWarning\n", 119 | " df_plot_data = df_mean[\n", 120 | " (df_mean['Asset'] == asset_to_plot) &\n", 121 | " (df_mean['Model'] == model_to_plot) &\n", 122 | " (df_mean['Window'] == window_to_plot)\n", 123 | " ].copy()\n", 124 | "\n", 125 | " # Helper function to convert a p-value into significance stars\n", 126 | " def p_to_stars(p):\n", 127 | " if pd.isna(p): return '' # Return an empty string if p-value is missing\n", 128 | " if p <= 0.01: return '***'\n", 129 | " if p <= 0.05: return '**'\n", 130 | " if p <= 0.1: return '*'\n", 131 | " return '' # Return an empty string if not significant\n", 132 | "\n", 133 | " # --- Step 3: Plotting ---\n", 134 | " plt.figure(figsize=(14, 8))\n", 135 | " ax = sns.barplot(\n", 136 | " data=df_plot_data,\n", 137 | " x='EventGroup',\n", 138 | " y='MeanCAR',\n", 139 | " palette='Spectral' # Using the 'Spectral' color palette\n", 140 | " )\n", 141 | "\n", 142 | " # --- Step 4: Annotate Bars with Significance Stars ---\n", 143 | " # Iterate through each bar (patch) created by the barplot\n", 144 | " for i, p in enumerate(ax.patches):\n", 145 | " height = p.get_height()\n", 146 | " x_center = p.get_x() + p.get_width() / 2.\n", 147 | " \n", 148 | " # Get the corresponding p-value from the filtered DataFrame using the index 'i'\n", 149 | " # IMPORTANT: This assumes the row order in df_plot_data matches the plot order of the bars.\n", 150 | " # This is generally safe for sns.barplot when no explicit order is set.\n", 151 | " if 'p-value' in 
df_plot_data.columns:\n", 152 | " p_value = df_plot_data.iloc[i]['p-value']\n", 153 | " stars = p_to_stars(p_value)\n", 154 | " else:\n", 155 | " stars = '' # If the 'p-value' column doesn't exist, don't show stars\n", 156 | " if i == 0: # Print warning only once\n", 157 | " print(\"Warning: 'p-value' column not found. Significance stars will not be displayed.\")\n", 158 | "\n", 159 | " # Determine the vertical offset for the annotation based on the bar's height\n", 160 | " y_offset = 10 if height >= 0 else -25 # Increase offset for negative bars to prevent overlap\n", 161 | " \n", 162 | " # Use ax.annotate() to place the text (stars) precisely on the plot\n", 163 | " ax.annotate(stars,\n", 164 | " xy=(x_center, height), \n", 165 | " xytext=(0, y_offset),\n", 166 | " textcoords='offset points',\n", 167 | " ha='center',\n", 168 | " fontsize=14,\n", 169 | " fontweight='bold',\n", 170 | " color='black')\n", 171 | "\n", 172 | " # --- Step 5: Chart Enhancement ---\n", 173 | " plt.title(f'Mean CAR with Significance Annotation for {asset_to_plot}\\nModel: {model_to_plot}, Window: {window_to_plot}', fontsize=16)\n", 174 | " plt.xlabel('Event Group', fontsize=12)\n", 175 | " plt.ylabel('Mean CAR', fontsize=12)\n", 176 | " plt.axhline(0, ls='--', color='black', linewidth=0.8) # Add a horizontal line at y=0\n", 177 | " plt.xticks(rotation=45, ha='right') # Rotate x-axis labels for better readability\n", 178 | " plt.tight_layout() # Adjust plot to ensure everything fits without overlapping\n", 179 | " plt.show() # Display the final plot\n", 180 | " \n", 181 | "else:\n", 182 | " print(\"DataFrame 'df_mean' was not loaded or is empty. Skipping plot.\")" 183 | ] 184 | }, 185 | { 186 | "cell_type": "markdown", 187 | "id": "ff722eab", 188 | "metadata": {}, 189 | "source": [ 190 | "Advanced Visualization 3: Interactive Dashboard with Plotly (Corrected)" 191 | ] 192 | }, 193 | { 194 | "cell_type": "code", 195 | "execution_count": null, 196 | "id": "fe5fc541", 197 | "metadata": {}, 198 | "outputs": [], 199 | "source": [ 200 | "import pandas as pd\n", 201 | "import plotly.express as px\n", 202 | "\n", 203 | "# --- Advanced Visualization 3: Interactive Dashboard with Plotly (Corrected) ---\n", 204 | "\n", 205 | "# Load the aggregated results data\n", 206 | "try:\n", 207 | " df_mean = pd.read_csv(\"./outcome/event_study_mean_results.csv\")\n", 208 | "except FileNotFoundError:\n", 209 | " print(\"Error: 'event_study_mean_results.csv' not found in the 'outcome' folder.\")\n", 210 | " # In a .py script, you might want to uncomment the next line to stop execution\n", 211 | " # exit()\n", 212 | "\n", 213 | "# --- Plotting ---\n", 214 | "# Ensure the dataframe is loaded before plotting\n", 215 | "if 'df_mean' in locals() and not df_mean.empty:\n", 216 | " fig = px.scatter(\n", 217 | " df_mean,\n", 218 | " x='Window',\n", 219 | " y='MeanCAR',\n", 220 | " facet_col='Asset',\n", 221 | " color='EventGroup',\n", 222 | " symbol='Model',\n", 223 | " size='N',\n", 224 | " # --- FIXED: Corrected the column name from 'p' to 'p-value' ---\n", 225 | " hover_data=['p-value', '95%CI_low', '95%CI_high'],\n", 226 | " # -----------------------------------------------------------------\n", 227 | " title='Interactive Analysis of Mean Cumulative Abnormal Returns (CAR)',\n", 228 | " labels={\n", 229 | " \"Window\": \"Event Window\",\n", 230 | " \"MeanCAR\": \"Mean CAR\",\n", 231 | " \"EventGroup\": \"Event Group\",\n", 232 | " \"N\": \"Number of Events\",\n", 233 | " \"p-value\": \"p-value\" # Also good to add a label for the corrected 
column\n", 234 | " },\n", 235 | " category_orders={\"Window\": [\"(-1,+1)\", \"(-5,+5)\", \"(-10,+10)\"]}\n", 236 | " )\n", 237 | "\n", 238 | " # --- Chart Enhancement ---\n", 239 | " fig.update_layout(\n", 240 | " legend_title_text='Click to Filter',\n", 241 | " title_x=0.5\n", 242 | " )\n", 243 | " fig.update_yaxes(zeroline=True, zerolinewidth=2, zerolinecolor='LightGrey')\n", 244 | " fig.update_traces(marker=dict(sizemin=5)) # Set a minimum marker size for better visibility\n", 245 | "\n", 246 | " # --- Display and Save ---\n", 247 | " # Use a renderer that opens the chart in a new browser tab.\n", 248 | " # This avoids potential issues with rendering inside notebooks.\n", 249 | " try:\n", 250 | " fig.show(renderer='browser')\n", 251 | " except Exception as e:\n", 252 | " print(f\"Could not display figure. Error: {e}\")\n", 253 | "\n", 254 | "\n", 255 | " # The 'write_html' method is the most robust way to save your interactive chart,\n", 256 | " # as it creates a portable file that can be opened on any computer with a web browser.\n", 257 | " try:\n", 258 | " output_path = \"./outcome/interactive_car_dashboard.html\"\n", 259 | " fig.write_html(output_path)\n", 260 | " print(f\"\\nInteractive dashboard saved to '{output_path}'\")\n", 261 | " except Exception as e:\n", 262 | " print(f\"\\nCould not save HTML file. Error: {e}\")\n", 263 | "else:\n", 264 | " print(\"DataFrame 'df_mean' was not loaded or is empty. Skipping plot.\")" 265 | ] 266 | } 267 | ], 268 | "metadata": { 269 | "kernelspec": { 270 | "display_name": "cuda", 271 | "language": "python", 272 | "name": "python3" 273 | }, 274 | "language_info": { 275 | "codemirror_mode": { 276 | "name": "ipython", 277 | "version": 3 278 | }, 279 | "file_extension": ".py", 280 | "mimetype": "text/x-python", 281 | "name": "python", 282 | "nbconvert_exporter": "python", 283 | "pygments_lexer": "ipython3", 284 | "version": "3.10.16" 285 | } 286 | }, 287 | "nbformat": 4, 288 | "nbformat_minor": 5 289 | } 290 | -------------------------------------------------------------------------------- /CAR_main.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "id": "3175276b", 7 | "metadata": {}, 8 | "outputs": [], 9 | "source": [ 10 | "#!/usr/bin/env python3\n", 11 | "# -*- coding: utf-8 -*-\n", 12 | "\"\"\"\n", 13 | "────────────────────────────────────────────────────────────────────────────\n", 14 | "Event Study – Final Version (Corrected Output Path)\n", 15 | "────────────────────────────────────────────────────────────────────────────\n", 16 | "This script is specifically designed to read WIDE-FORMAT event files, where\n", 17 | "columns represent event types and values are the corresponding dates.\n", 18 | "\"\"\"\n", 19 | "\n", 20 | "# ── Imports ────────────────────────────────────────────────────────────── #\n", 21 | "import glob\n", 22 | "from pathlib import Path\n", 23 | "import numpy as np\n", 24 | "import pandas as pd\n", 25 | "import statsmodels.api as sm\n", 26 | "from scipy.stats import ttest_1samp, t\n", 27 | "\n", 28 | "# ── Tunables & study design ────────────────────────────────────────────── #\n", 29 | "EST_WIN_DAYS = 200\n", 30 | "EST_BUF_DAYS = 11\n", 31 | "EVENT_WINDOWS = [1, 5, 10]\n", 32 | "MAX_EVENT_WINDOW = max(EVENT_WINDOWS) if EVENT_WINDOWS else 10\n", 33 | "\n", 34 | "models = {\n", 35 | " \"MM_SPY\": [\"SPY\"],\n", 36 | " \"MM_Gold\": [\"Gold\"],\n", 37 | " \"MM_N100\": [\"Nasdaq100\"],\n", 38 | " 
\"EM_Gold_SPY\": [\"Gold\", \"SPY\"],\n", 39 | " \"EM_Gold_Nasdaq\": [\"Gold\", \"Nasdaq100\"],\n", 40 | "}\n", 41 | "\n", 42 | "# ── Helper functions ───────────────────────────────────────────────────── #\n", 43 | "def read_csv_robustly(path: Path, engine: str = 'c', sep=','):\n", 44 | " \"\"\"\n", 45 | " Reads a CSV file by trying a sequence of common encodings,\n", 46 | " using the specified parser engine and separator.\n", 47 | " \"\"\"\n", 48 | " encodings_to_try = ['utf-8', 'utf-8-sig', 'gbk', 'gb2312', 'latin-1']\n", 49 | " if engine == 'python': sep = None # Let python engine auto-detect separator\n", 50 | " \n", 51 | " for enc in encodings_to_try:\n", 52 | " try:\n", 53 | " return pd.read_csv(path, encoding=enc, engine=engine, sep=sep)\n", 54 | " except (UnicodeDecodeError, UnicodeError, pd.errors.ParserError):\n", 55 | " continue\n", 56 | " raise ValueError(f\"Failed to read or parse '{path}'. Please check its encoding, structure, and separator.\")\n", 57 | "\n", 58 | "def std_cols(df: pd.DataFrame) -> pd.DataFrame:\n", 59 | " df.columns = (df.columns.str.lower().str.replace(\" \", \"\").str.replace(\".\", \"\", regex=False).str.strip())\n", 60 | " return df\n", 61 | "\n", 62 | "def load_ret(path: Path) -> pd.Series:\n", 63 | " \"\"\"Reads structured price data files with encoding fallback.\"\"\"\n", 64 | " # The input 'path' is now expected to be a Path object\n", 65 | " df = read_csv_robustly(path) # Uses fast 'c' engine by default\n", 66 | " df = std_cols(df)\n", 67 | " if {\"date\", \"price\"} - set(df.columns): raise ValueError(f\"'{path}': Must contain 'Date' & 'Price'.\")\n", 68 | " df[\"date\"] = pd.to_datetime(df[\"date\"], errors=\"coerce\")\n", 69 | " df[\"price\"] = (df[\"price\"].astype(str).str.replace(\",\", \"\").str.strip().pipe(pd.to_numeric, errors=\"coerce\"))\n", 70 | " df = (df.dropna(subset=[\"date\", \"price\"]).set_index(\"date\").sort_index())\n", 71 | " asset_name = path.stem\n", 72 | " return np.log(df[\"price\"]).diff().rename(asset_name)\n", 73 | "\n", 74 | "def addc(x: pd.DataFrame) -> pd.DataFrame:\n", 75 | " return sm.add_constant(x, has_constant=\"add\")\n", 76 | "\n", 77 | "def reg(y: pd.Series, X: pd.DataFrame) -> pd.Series:\n", 78 | " return sm.OLS(y, addc(X)).fit().params\n", 79 | "\n", 80 | "def car(ar: pd.Series, evt: pd.Timestamp) -> dict[str, float]:\n", 81 | " return {f\"CAR(-{k},+{k})\": ar.loc[evt - pd.Timedelta(days=k) : evt + pd.Timedelta(days=k)].sum() for k in EVENT_WINDOWS}\n", 82 | "\n", 83 | "# ── 1) Load all data from external folders ─────────────────────────────── #\n", 84 | "# --- ADJUSTED: Benchmark Data Loading ---\n", 85 | "print(\"--- Loading Benchmark Data ---\")\n", 86 | "BENCHMARK_DIR = Path(\"./benchmark\")\n", 87 | "if not BENCHMARK_DIR.is_dir(): raise FileNotFoundError(f\"Benchmark directory '{BENCHMARK_DIR}' not found.\")\n", 88 | "\n", 89 | "bench_files = {\"Gold\": \"Gold.csv\", \"Nasdaq100\": \"Nasdaq100.csv\", \"SPY\": \"SPY.csv\"}\n", 90 | "# Load each file by combining the benchmark directory path with the filename\n", 91 | "bench_ret = {name: load_ret(BENCHMARK_DIR / path) for name, path in bench_files.items()}\n", 92 | "print(f\" • Loaded: {', '.join(bench_ret.keys())}\")\n", 93 | "\n", 94 | "\n", 95 | "print(\"\\n--- Loading Crypto Asset Data ---\")\n", 96 | "CRYPTO_DATA_DIR = Path(\"./crypto_data\")\n", 97 | "if not CRYPTO_DATA_DIR.is_dir(): raise FileNotFoundError(f\"Directory '{CRYPTO_DATA_DIR}' not found.\")\n", 98 | "# Use Path.glob for a more modern approach\n", 99 | "crypto_files = 
list(CRYPTO_DATA_DIR.glob(\"*.csv\"))\n", 100 | "if not crypto_files: raise FileNotFoundError(f\"No CSV files found in '{CRYPTO_DATA_DIR}'.\")\n", 101 | "asset_ret = {f.stem: load_ret(f) for f in crypto_files}\n", 102 | "print(f\" • Found and loaded {len(asset_ret)} assets.\")\n", 103 | "\n", 104 | "print(\"\\n--- Loading Wide-Format Event Calendar Data ---\")\n", 105 | "EVENTS_DIR = Path(\"./events\")\n", 106 | "train_events_file = EVENTS_DIR / \"training_set.csv\"\n", 107 | "test_events_file = EVENTS_DIR / \"test_set.csv\"\n", 108 | "\n", 109 | "if not EVENTS_DIR.is_dir(): raise FileNotFoundError(f\"Directory '{EVENTS_DIR}' not found.\")\n", 110 | "if not train_events_file.is_file(): raise FileNotFoundError(f\"Training file '{train_events_file}' not found.\")\n", 111 | "if not test_events_file.is_file(): raise FileNotFoundError(f\"Test file '{test_events_file}' not found.\")\n", 112 | "\n", 113 | "events = {}\n", 114 | "\n", 115 | "def load_wide_events(path: Path, suffix: str) -> dict:\n", 116 | " \"\"\"\n", 117 | " Loads WIDE-FORMAT event files.\n", 118 | " It iterates through COLUMNS to build the event dictionary.\n", 119 | " \"\"\"\n", 120 | " local_events = {}\n", 121 | " # Force the use of the flexible 'python' engine for these specific files\n", 122 | " # to handle ragged columns and prevent ParserError.\n", 123 | " df = read_csv_robustly(path, engine='python')\n", 124 | " \n", 125 | " # Clean column names before processing\n", 126 | " df = std_cols(df)\n", 127 | " \n", 128 | " for group_name in df.columns:\n", 129 | " dates = pd.to_datetime(df[group_name].dropna(), errors='coerce').dropna()\n", 130 | " local_events[f\"{group_name}{suffix}\"] = pd.DatetimeIndex(dates)\n", 131 | " return local_events\n", 132 | "\n", 133 | "events.update(load_wide_events(train_events_file, \"_train\"))\n", 134 | "print(f\" • Loaded {len(events)} training groups from '{train_events_file.name}'.\")\n", 135 | "test_events = load_wide_events(test_events_file, \"_test\")\n", 136 | "print(f\" • Loaded {len(test_events)} test groups from '{test_events_file.name}'.\")\n", 137 | "events.update(test_events)\n", 138 | "print(f\" • Total unique event groups to process: {len(events)}\")\n", 139 | "print(\"----------------------------------------------------\")\n", 140 | "\n", 141 | "panel = (pd.concat([*bench_ret.values(), *asset_ret.values()], axis=1).sort_index().ffill())\n", 142 | "\n", 143 | "# ── 2) Loop through events → run regressions & compute CAR ─────────────── #\n", 144 | "event_rows, daily_ar_rows, estimation_data_rows = [], [], []\n", 145 | "\n", 146 | "for asset in asset_ret:\n", 147 | " merged = panel[[asset, *bench_files.keys()]].dropna()\n", 148 | " for grp, dates in events.items():\n", 149 | " for evt in dates:\n", 150 | " if evt not in merged.index:\n", 151 | " print(f\"Warning: Event date {evt.date()} for group '{grp}' not in price data for asset '{asset}'. Skipping.\")\n", 152 | " continue\n", 153 | "\n", 154 | " est_end = evt - pd.Timedelta(days=EST_BUF_DAYS)\n", 155 | " est_start = est_end - pd.Timedelta(days=EST_WIN_DAYS)\n", 156 | " est = merged.loc[est_start : est_end]\n", 157 | " if len(est) < 30:\n", 158 | " print(f\"Warning: Insufficient data for event {evt.date()} ('{grp}') for asset '{asset}'. 
Skipping.\")\n", 159 | " continue\n", 160 | "\n", 161 | " row = {}\n", 162 | " for mdl, facs in models.items():\n", 163 | " params = reg(est[asset], est[facs])\n", 164 | " pred = params[\"const\"] + (merged[facs] * params[facs]).sum(axis=1) if len(facs) > 1 else params[\"const\"] + params[facs[0]] * merged[facs[0]]\n", 165 | " ar = merged[asset] - pred\n", 166 | " cars = car(ar, evt)\n", 167 | "\n", 168 | " ar_window_series = ar.loc[evt - pd.Timedelta(days=MAX_EVENT_WINDOW) : evt + pd.Timedelta(days=MAX_EVENT_WINDOW)]\n", 169 | " \n", 170 | " df_ar_temp = ar_window_series.reset_index(name='AR').rename(columns={'date': 'Date'})\n", 171 | " df_ar_temp = df_ar_temp.assign(\n", 172 | " RelativeDay = (df_ar_temp['Date'] - evt).dt.days,\n", 173 | " Asset = asset,\n", 174 | " EventGroup = grp,\n", 175 | " EventDate = evt.strftime(\"%Y-%m-%d\"),\n", 176 | " Model = mdl\n", 177 | " )\n", 178 | " daily_ar_rows.append(df_ar_temp)\n", 179 | "\n", 180 | " df_est_temp = est[[asset, *facs]].copy().reset_index().rename(columns={'date': 'Date', asset: 'AssetReturn'})\n", 181 | " df_est_temp = df_est_temp.assign(\n", 182 | " Asset = asset,\n", 183 | " EventGroup = grp,\n", 184 | " EventDate = evt.strftime(\"%Y-%m-%d\"),\n", 185 | " Model = mdl\n", 186 | " )\n", 187 | " estimation_data_rows.append(df_est_temp)\n", 188 | "\n", 189 | " if mdl.startswith(\"MM\"):\n", 190 | " fac = facs[0]\n", 191 | " row.update({f\"{fac}_α\": params[\"const\"], f\"{fac}_β\": params[fac]})\n", 192 | " for k in EVENT_WINDOWS: row[f\"CAR_MM_{fac}(-{k},+{k})\"] = cars[f\"CAR(-{k},+{k})\"]\n", 193 | " elif mdl == \"EM_Gold_SPY\":\n", 194 | " row.update({\"EM_Gold_SPY_α\": params[\"const\"], \"EM_Gold_SPY_β_Gold\": params[\"Gold\"], \"EM_Gold_SPY_β_SPY\": params[\"SPY\"]})\n", 195 | " for k in EVENT_WINDOWS: row[f\"CAR_EM_Gold_SPY(-{k},+{k})\"] = cars[f\"CAR(-{k},+{k})\"]\n", 196 | " else: # Assumes EM_Gold_Nasdaq\n", 197 | " row.update({\"EM_Gold_Nasdaq_α\": params[\"const\"], \"EM_Gold_Nasdaq_β_Gold\": params[\"Gold\"], \"EM_Gold_Nasdaq_β_Nasdaq100\": params[\"Nasdaq100\"]})\n", 198 | " for k in EVENT_WINDOWS: row[f\"CAR_EM_Gold_Nasdaq(-{k},+{k})\"] = cars[f\"CAR(-{k},+{k})\"]\n", 199 | "\n", 200 | " idx = (asset, grp, evt.strftime(\"%Y-%m-%d\"))\n", 201 | " event_rows.append((idx, row))\n", 202 | "\n", 203 | "# ── 3) Convert results to DataFrames & 4) Calculate Group Means ────────── #\n", 204 | "if not event_rows:\n", 205 | " print(\"\\nNo events were processed. 
Ending script.\")\n", 206 | "else:\n", 207 | " idx_vals, dict_vals = zip(*event_rows)\n", 208 | " df_evt_wide = pd.DataFrame(list(dict_vals), index=pd.MultiIndex.from_tuples(idx_vals, names=[\"Asset\", \"EventGroup\", \"EventDate\"]))\n", 209 | " col_seq = []\n", 210 | " for fac in [\"SPY\", \"Gold\", \"Nasdaq100\"]: col_seq.extend([f\"{fac}_α\", f\"{fac}_β\", *[f\"CAR_MM_{fac}(-{k},+{k})\" for k in EVENT_WINDOWS]])\n", 211 | " col_seq.extend([\"EM_Gold_SPY_α\", \"EM_Gold_SPY_β_Gold\", \"EM_Gold_SPY_β_SPY\", *[f\"CAR_EM_Gold_SPY(-{k},+{k})\" for k in EVENT_WINDOWS]])\n", 212 | " col_seq.extend([\"EM_Gold_Nasdaq_α\", \"EM_Gold_Nasdaq_β_Gold\", \"EM_Gold_Nasdaq_β_Nasdaq100\", *[f\"CAR_EM_Gold_Nasdaq(-{k},+{k})\" for k in EVENT_WINDOWS]])\n", 213 | " df_evt_wide = df_evt_wide.reindex(columns=col_seq)\n", 214 | "\n", 215 | " mean_rows = []\n", 216 | " for (asset, grp), sub in df_evt_wide.groupby(level=[\"Asset\", \"EventGroup\"]):\n", 217 | " for mdl, facs in models.items():\n", 218 | " label = (\"MM_\" + facs[0]) if mdl.startswith(\"MM\") else mdl\n", 219 | " for k in EVENT_WINDOWS:\n", 220 | " col = (f\"CAR_MM_{facs[0]}(-{k},+{k})\" if mdl.startswith(\"MM\") else f\"CAR_{mdl}(-{k},+{k})\")\n", 221 | " if col in sub:\n", 222 | " vals = sub[col].dropna()\n", 223 | " n = len(vals)\n", 224 | " mean, ci_lo, ci_hi, pval = (np.nan,)*4\n", 225 | " if n >= 2:\n", 226 | " mean = vals.mean()\n", 227 | " se = vals.std(ddof=1) / np.sqrt(n)\n", 228 | " tcrit = t.ppf(0.975, n-1)\n", 229 | " ci_lo, ci_hi = mean - tcrit*se, mean + tcrit*se\n", 230 | " _, pval = ttest_1samp(vals, 0, nan_policy='omit')\n", 231 | " mean_rows.append({\"Asset\": asset, \"EventGroup\": grp, \"Model\": label, \"Window\": f\"(-{k},+{k})\", \"N\": n, \"MeanCAR\": mean, \"95%CI_low\": ci_lo, \"95%CI_high\": ci_hi, \"p-value\": pval})\n", 232 | "\n", 233 | " df_mean = pd.DataFrame(mean_rows).set_index([\"Asset\", \"EventGroup\", \"Model\", \"Window\"]).sort_index()\n", 234 | " df_daily_ar = pd.concat(daily_ar_rows, ignore_index=True) if daily_ar_rows else pd.DataFrame()\n", 235 | " df_estimation = pd.concat(estimation_data_rows, ignore_index=True) if estimation_data_rows else pd.DataFrame()\n", 236 | "\n", 237 | " # ── 5) CLI output & Save all data artifacts ────────────────────────────── #\n", 238 | " pd.set_option(\"display.max_columns\", None, \"display.width\", 2000, \"display.float_format\", \"{:.6f}\".format)\n", 239 | " print(\"\\n==== Event-level wide table (df_evt_wide) [PREVIEW] ====\")\n", 240 | " print(df_evt_wide.head())\n", 241 | " print(\"\\n==== Asset × EventGroup MeanCAR ± 95 % CI [PREVIEW] ====\")\n", 242 | " print(df_mean.head())\n", 243 | "\n", 244 | " OUTPUT_DIR = Path(\"./outcome\")\n", 245 | " OUTPUT_DIR.mkdir(parents=True, exist_ok=True)\n", 246 | " print(f\"\\n==== Saving All Output Files to '{OUTPUT_DIR}' folder ====\")\n", 247 | "\n", 248 | " df_evt_wide.to_csv(OUTPUT_DIR / \"event_study_wide_results.csv\")\n", 249 | " print(f\" -> Saved event_study_wide_results.csv ({len(df_evt_wide)} rows)\")\n", 250 | " df_mean.to_csv(OUTPUT_DIR / \"event_study_mean_results.csv\")\n", 251 | " print(f\" -> Saved event_study_mean_results.csv ({len(df_mean)} rows)\")\n", 252 | " df_daily_ar.to_csv(OUTPUT_DIR / \"event_study_daily_ar.csv\", index=False)\n", 253 | " print(f\" -> Saved event_study_daily_ar.csv ({len(df_daily_ar)} rows)\")\n", 254 | " df_estimation.to_csv(OUTPUT_DIR / \"event_study_estimation.csv\", index=False)\n", 255 | " print(f\" -> Saved event_study_estimation.csv ({len(df_estimation)} rows)\")\n", 256 | " 
print(\"\\nAnalysis complete. All data files saved successfully.\")" 257 | ] 258 | } 259 | ], 260 | "metadata": { 261 | "kernelspec": { 262 | "display_name": "cuda", 263 | "language": "python", 264 | "name": "python3" 265 | }, 266 | "language_info": { 267 | "codemirror_mode": { 268 | "name": "ipython", 269 | "version": 3 270 | }, 271 | "file_extension": ".py", 272 | "mimetype": "text/x-python", 273 | "name": "python", 274 | "nbconvert_exporter": "python", 275 | "pygments_lexer": "ipython3", 276 | "version": "3.10.16" 277 | } 278 | }, 279 | "nbformat": 4, 280 | "nbformat_minor": 5 281 | } 282 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License 2 | 3 | By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions. 4 | 5 | Section 1 – Definitions. 6 | 7 | a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image. 8 | 9 | b. Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License. 10 | 11 | c. BY-NC-SA Compatible License means a license listed at creativecommons.org/compatiblelicenses, approved by Creative Commons as essentially equivalent to this Public License. 12 | 13 | d. Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights. 14 | 15 | e. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements. 16 | 17 | f. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material. 18 | 19 | g. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License. 20 | 21 | h. 
Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license. 22 | 23 | i. Licensor means the individual(s) or entity(ies) granting rights under this Public License. 24 | 25 | j. NonCommercial means not primarily intended for or directed towards commercial advantage or monetary compensation. For purposes of this Public License, the exchange of the Licensed Material for other material subject to Copyright and Similar Rights by digital file-sharing or similar means is NonCommercial provided there is no payment of monetary compensation in connection with the exchange. 26 | 27 | k. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them. 28 | 29 | l. Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world. 30 | 31 | m. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning. 32 | 33 | Section 2 – Scope. 34 | 35 | a. License grant. 36 | 37 | 1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to: 38 | 39 | A. reproduce and Share the Licensed Material, in whole or in part, for NonCommercial purposes only; and 40 | 41 | B. produce, reproduce, and Share Adapted Material for NonCommercial purposes only. 42 | 43 | 2. Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions. 44 | 45 | 3. Term. The term of this Public License is specified in Section 6(a). 46 | 47 | 4. Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material. 48 | 49 | 5. Downstream recipients. 50 | 51 | A. Offer from the Licensor – Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License. 52 | 53 | B. Additional offer from the Licensor – Adapted Material. 
Every recipient of Adapted Material from You automatically receives an offer from the Licensor to exercise the Licensed Rights in the Licensed Material as used in the Adapted Material under the conditions of the Adapter's License You apply. 54 | 55 | C. No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material. 56 | 57 | 6. No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i). 58 | 59 | b. Other rights. 60 | 61 | 1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise. 62 | 63 | 2. Patent and trademark rights are not licensed under this Public License. 64 | 65 | 3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties, including when the Licensed Material is used other than for NonCommercial purposes. 66 | 67 | Section 3 – License Conditions. 68 | 69 | Your exercise of the Licensed Rights is expressly made subject to the following conditions. 70 | 71 | a. Attribution. 72 | 73 | 1. If You Share the Licensed Material (including in modified form), You must: 74 | 75 | A. retain the following if it is supplied by the Licensor with the Licensed Material: 76 | 77 | i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated); 78 | 79 | ii. a copyright notice; 80 | 81 | iii. a notice that refers to this Public License; 82 | 83 | iv. a notice that refers to the disclaimer of warranties; 84 | 85 | v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable; 86 | 87 | B. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and 88 | 89 | C. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License. 90 | 91 | 2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information. 92 | 93 | 3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable. 94 | 95 | b. ShareAlike. 96 | 97 | In addition to the conditions in Section 3(a), if You Share Adapted Material You produce, the following conditions also apply. 
98 | 99 | 1. The Adapter's License You apply must be a Creative Commons license with the same License Elements, this version or later, or a BY-NC-SA Compatible License. 100 | 101 | 2. You must include the text of, or the URI or hyperlink to, the Adapter's License You apply. You may satisfy this condition in any reasonable manner based on the medium, means, and context in which You Share Adapted Material. 102 | 103 | 3. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, Your Adapted Material that restrict exercise of the rights granted under the Adapter's License You apply. 104 | 105 | Section 4 – Sui Generis Database Rights. 106 | 107 | Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material: 108 | 109 | a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database for NonCommercial purposes only; 110 | 111 | b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material, including for purposes of Section 3(b); and 112 | 113 | c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database. 114 | 115 | For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights. 116 | 117 | Section 5 – Disclaimer of Warranties and Limitation of Liability. 118 | 119 | a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU. 120 | 121 | b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR IN PART, THIS LIMITATION MAY NOT APPLY TO YOU. 122 | 123 | c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability. 124 | 125 | Section 6 – Term and Termination. 126 | 127 | a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically. 128 | 129 | b. 
Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates: 130 | 131 | 1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or 132 | 133 | 2. upon express reinstatement by the Licensor. 134 | 135 | For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License. 136 | 137 | c. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License. 138 | 139 | d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License. 140 | 141 | Section 7 – Other Terms and Conditions. 142 | 143 | a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed. 144 | 145 | b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License. 146 | 147 | Section 8 – Interpretation. 148 | 149 | a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License. 150 | 151 | b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions. 152 | 153 | c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor. 154 | 155 | d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority. 
156 | -------------------------------------------------------------------------------- /RDD_Price to excel.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "id": "88c24e0b", 7 | "metadata": {}, 8 | "outputs": [], 9 | "source": [ 10 | "#!/usr/bin/env python3\n", 11 | "# -*- coding: utf-8 -*-\n", 12 | "\"\"\"\n", 13 | "Batch Regression-Discontinuity (RD) – Price Shocks (Modularized)\n", 14 | "------------------------------------------------------------------\n", 15 | "This script runs RDD analysis based on the modular data structure.\n", 16 | "It builds an Excel workbook in the 'outcome' folder that includes:\n", 17 | "\n", 18 | " • A “Summary” sheet listing τ̂ (treatment effect), p-value, 95 % CI, N, etc.\n", 19 | " • An RD plot embedded in each corresponding row.\n", 20 | "\n", 21 | "Required for Export:\n", 22 | " $ pip install xlsxwriter\n", 23 | "\"\"\"\n", 24 | "\n", 25 | "import io\n", 26 | "import pathlib\n", 27 | "import warnings\n", 28 | "from typing import List, Dict\n", 29 | "\n", 30 | "import numpy as np\n", 31 | "import pandas as pd\n", 32 | "import matplotlib.pyplot as plt\n", 33 | "\n", 34 | "# ─────────────────────────────────────────────────────────────────────────────\n", 35 | "# 1) Dependency checks\n", 36 | "try:\n", 37 | " import statsmodels.api as sm\n", 38 | " HAVE_SM = True\n", 39 | "except Exception as e:\n", 40 | " warnings.warn(f\"statsmodels unavailable ({e}); falling back to NumPy OLS.\")\n", 41 | " HAVE_SM = False\n", 42 | "\n", 43 | "try:\n", 44 | " from scipy import stats as sps\n", 45 | " HAVE_SCIPY = True\n", 46 | "except Exception:\n", 47 | " HAVE_SCIPY = False\n", 48 | "\n", 49 | "# ─────────────────────────────────────────────────────────────────────────────\n", 50 | "# 2) Global parameters\n", 51 | "BANDWIDTHS = [10, 20] # ±n trading-day windows\n", 52 | "OUTCOME = \"Price\" # Column under analysis\n", 53 | "\n", 54 | "# ─────────────────────────────────────────────────────────────────────────────\n", 55 | "# 3) Helper functions\n", 56 | "def read_csv_robustly(path: pathlib.Path, engine: str = 'c', sep=','):\n", 57 | " \"\"\"Reads a CSV file by trying a sequence of common encodings.\"\"\"\n", 58 | " encodings_to_try = ['utf-8', 'utf-8-sig', 'gbk', 'gb2312', 'latin-1']\n", 59 | " if engine == 'python': sep = None\n", 60 | " for enc in encodings_to_try:\n", 61 | " try:\n", 62 | " return pd.read_csv(path, encoding=enc, engine=engine, sep=sep)\n", 63 | " except (UnicodeDecodeError, UnicodeError, pd.errors.ParserError):\n", 64 | " continue\n", 65 | " raise ValueError(f\"Failed to read or parse '{path}'.\")\n", 66 | "\n", 67 | "def std_cols(df: pd.DataFrame) -> pd.DataFrame:\n", 68 | " \"\"\"Normalise column names for consistency.\"\"\"\n", 69 | " df.columns = (df.columns.str.lower().str.replace(\" \", \"\").str.replace(\".\", \"\", regex=False).str.strip())\n", 70 | " return df\n", 71 | "\n", 72 | "def load_price(path: pathlib.Path) -> pd.DataFrame:\n", 73 | " \"\"\"Read a CSV, clean the Price column, return a Date-sorted DataFrame.\"\"\"\n", 74 | " df = read_csv_robustly(path)\n", 75 | " # Standardize column names before checking for them\n", 76 | " df = std_cols(df)\n", 77 | " \n", 78 | " # Check for lowercase 'price' due to std_cols\n", 79 | " outcome_lower = OUTCOME.lower()\n", 80 | " if \"date\" not in df.columns or outcome_lower not in df.columns:\n", 81 | " raise ValueError(f\"File '{path}' must contain 'Date' and '{OUTCOME}' 
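# A quick illustration of what std_cols() above does to raw vendor headers, so
# the lowercase 'date'/'price' checks in load_price() behave predictably. The
# header names here are hypothetical examples of common CSV exports.
import pandas as pd

raw = pd.DataFrame(columns=["Date", "Price", "Vol.", "Change %"])
raw.columns = (raw.columns.str.lower()
                          .str.replace(" ", "")
                          .str.replace(".", "", regex=False)
                          .str.strip())
print(list(raw.columns))  # ['date', 'price', 'vol', 'change%']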
columns.\")\n", 82 | " \n", 83 | " df = df.rename(columns={outcome_lower: OUTCOME}) # Rename back to original case for compatibility\n", 84 | " \n", 85 | " df[\"Date\"] = pd.to_datetime(df[\"date\"])\n", 86 | " df[OUTCOME] = (df[OUTCOME].astype(str)\n", 87 | " .str.replace(r\"[^0-9\\.\\+\\-eE]\", \"\", regex=True)\n", 88 | " .replace(\"\", np.nan).astype(float))\n", 89 | " return df.sort_values(\"Date\").reset_index(drop=True)\n", 90 | "\n", 91 | "\n", 92 | "def ols_numpy(y: np.ndarray, X: np.ndarray):\n", 93 | " \"\"\"Lightweight OLS with White SEs; returns beta, p-values, covariance.\"\"\"\n", 94 | " n, k = X.shape\n", 95 | " beta = np.linalg.lstsq(X, y, rcond=None)[0]\n", 96 | " resid = y - X @ beta\n", 97 | " sigma2 = (resid @ resid) / (n - k)\n", 98 | " cov = sigma2 * np.linalg.inv(X.T @ X)\n", 99 | " se = np.sqrt(np.diag(cov))\n", 100 | " p = (2 * (1 - sps.t.cdf(np.abs(beta / se), df=n - k)) if HAVE_SCIPY else 2 * (1 - np.exp(-0.5 * (beta / se) ** 2) / np.sqrt(2 * np.pi) / np.abs(beta / se)))\n", 101 | " return beta, p, cov\n", 102 | "\n", 103 | "\n", 104 | "def rd_design(df: pd.DataFrame, event_date: pd.Timestamp, bw: int, y_col: str):\n", 105 | " \"\"\"Build an RD window and return the window DataFrame and a fitted model.\"\"\"\n", 106 | " df = df.copy()\n", 107 | " df[\"D\"], df[\"T\"] = (df[\"Date\"] - event_date).dt.days, (df[\"Date\"] >= event_date).astype(int)\n", 108 | " win = df[df[\"D\"].between(-bw, bw)].dropna(subset=[y_col])\n", 109 | " X_df = win[[\"T\", \"D\"]].astype(float)\n", 110 | " X_df[\"TD\"] = X_df[\"T\"] * X_df[\"D\"]\n", 111 | " X, y = np.column_stack([np.ones(len(X_df)), X_df.to_numpy()]), win[y_col].to_numpy(float)\n", 112 | " cols = [\"const\", \"T\", \"D\", \"TD\"]\n", 113 | " if HAVE_SM:\n", 114 | " model = sm.OLS(y, X).fit(cov_type=\"HAC\", cov_kwds={\"maxlags\": 3})\n", 115 | " model.colnames, model.params, model.pvalues, model.cov = cols, pd.Series(model.params, index=cols), pd.Series(model.pvalues, index=cols), model.cov_params()\n", 116 | " return win, model\n", 117 | " else:\n", 118 | " beta, p, cov = ols_numpy(y, X)\n", 119 | " class Result:\n", 120 | " params, pvalues, cov, colnames = pd.Series(beta, index=cols), pd.Series(p, index=cols), pd.DataFrame(cov, index=cols, columns=cols), cols\n", 121 | " def conf_int(self):\n", 122 | " se = np.sqrt(np.diag(self.cov))\n", 123 | " return pd.DataFrame(np.column_stack([self.params - 1.96 * se, self.params + 1.96 * se]), index=cols, columns=[\"low\", \"high\"])\n", 124 | " return win, Result()\n", 125 | "\n", 126 | "\n", 127 | "def get_ci(model, param: str):\n", 128 | " \"\"\"Return (low, high) 95 % CI for *param*, backend-agnostic.\"\"\"\n", 129 | " ci = model.conf_int()\n", 130 | " return ci.loc[param] if isinstance(ci, pd.DataFrame) else (ci[model.colnames.index(param), 0], ci[model.colnames.index(param), 1])\n", 131 | "\n", 132 | "\n", 133 | "def rd_plot_fig(win: pd.DataFrame, model, evt: pd.Timestamp, bw: int, asset: str, y_col: str):\n", 134 | " \"\"\"Create an RD scatter plot and return a matplotlib Figure.\"\"\"\n", 135 | " grid = np.arange(-bw, bw + 1)\n", 136 | " α, τ, β, γ = (model.params[k] for k in [\"const\", \"T\", \"D\", \"TD\"])\n", 137 | " μ_L, μ_R = α + β * grid, α + τ + (β + γ) * grid\n", 138 | " cov = model.cov if isinstance(model.cov, np.ndarray) else model.cov.to_numpy()\n", 139 | " X_L = np.column_stack([np.ones_like(grid), np.zeros_like(grid), grid, np.zeros_like(grid)])\n", 140 | " X_R = np.column_stack([np.ones_like(grid), np.ones_like(grid), grid, grid])\n", 141 | " se_L, se_R = 
np.sqrt(np.einsum(\"ij,jk,ik->i\", X_L, cov, X_L)), np.sqrt(np.einsum(\"ij,jk,ik->i\", X_R, cov, X_R))\n", 142 | " upper_L, lower_L, upper_R, lower_R = μ_L + 1.96 * se_L, μ_L - 1.96 * se_L, μ_R + 1.96 * se_R, μ_R - 1.96 * se_R\n", 143 | "\n", 144 | " fig, ax = plt.subplots(figsize=(7, 4))\n", 145 | " colors = win[\"D\"].apply(lambda x: \"royalblue\" if x < 0 else \"firebrick\")\n", 146 | " ax.scatter(win[\"D\"], win[y_col], s=18, color=colors, alpha=0.7, zorder=2)\n", 147 | " ax.plot(grid[grid < 0], μ_L[grid < 0], color=\"forestgreen\", lw=2)\n", 148 | " ax.plot(grid[grid >= 0], μ_R[grid >= 0], color=\"forestgreen\", lw=2)\n", 149 | " ax.fill_between(grid[grid < 0], lower_L[grid < 0], upper_L[grid < 0], color='grey', alpha=0.3)\n", 150 | " ax.fill_between(grid[grid >= 0], lower_R[grid >= 0], upper_R[grid >= 0], color='grey', alpha=0.3)\n", 151 | " ax.axvline(0, color=\"crimson\", lw=2, ls=\"--\")\n", 152 | " ax.set_title(f\"{asset} | {evt.date()} ±{bw}d τ̂={model.params['T']:.4f} p={model.pvalues['T']:.3g}\")\n", 153 | " ax.set_xlabel(\"Days relative to event\"), ax.set_ylabel(y_col)\n", 154 | " plt.tight_layout()\n", 155 | " return fig\n", 156 | "\n", 157 | "# ─────────────────────────────────────────────────────────────────────────────\n", 158 | "# 4) Main driver\n", 159 | "if __name__ == \"__main__\":\n", 160 | " # --- ADJUSTED: Dynamic Data Loading ---\n", 161 | " print(\"--- Loading Benchmark Data ---\")\n", 162 | " # Define the directory where benchmark CSVs are located.\n", 163 | " BENCHMARK_DIR = pathlib.Path(\"./benchmark\")\n", 164 | " if not BENCHMARK_DIR.is_dir(): raise FileNotFoundError(f\"Benchmark directory '{BENCHMARK_DIR}' not found.\")\n", 165 | " \n", 166 | " bench_files = [\"Gold.csv\", \"Nasdaq100.csv\", \"SPY.csv\"]\n", 167 | " # Load each file from the benchmark directory.\n", 168 | " loaded_data = {\n", 169 | " pathlib.Path(f).stem: load_price(BENCHMARK_DIR / f) for f in bench_files\n", 170 | " }\n", 171 | " print(f\" • Loaded: {', '.join(loaded_data.keys())}\")\n", 172 | " \n", 173 | " print(\"\\n--- Loading Crypto Asset Data ---\")\n", 174 | " CRYPTO_DATA_DIR = pathlib.Path(\"./crypto_data\")\n", 175 | " if not CRYPTO_DATA_DIR.is_dir(): raise FileNotFoundError(f\"Directory '{CRYPTO_DATA_DIR}' not found.\")\n", 176 | " crypto_files = list(CRYPTO_DATA_DIR.glob(\"*.csv\"))\n", 177 | " if not crypto_files: raise FileNotFoundError(f\"No CSV files found in '{CRYPTO_DATA_DIR}'.\")\n", 178 | " for f_path in crypto_files:\n", 179 | " asset_name = f_path.stem\n", 180 | " loaded_data[asset_name] = load_price(f_path)\n", 181 | " print(f\" • Found and loaded {len(crypto_files)} crypto assets.\")\n", 182 | "\n", 183 | " print(\"\\n--- Loading Wide-Format Event Calendar Data ---\")\n", 184 | " EVENTS_DIR = pathlib.Path(\"./events\")\n", 185 | " train_events_file, test_events_file = EVENTS_DIR / \"training_set.csv\", EVENTS_DIR / \"test_set.csv\"\n", 186 | " if not EVENTS_DIR.is_dir(): raise FileNotFoundError(f\"Directory '{EVENTS_DIR}' not found.\")\n", 187 | " if not train_events_file.is_file(): raise FileNotFoundError(f\"File '{train_events_file}' not found.\")\n", 188 | " if not test_events_file.is_file(): raise FileNotFoundError(f\"File '{test_events_file}' not found.\")\n", 189 | " \n", 190 | " events = {}\n", 191 | " def load_wide_events(path: pathlib.Path, suffix: str) -> dict:\n", 192 | " local_events = {}\n", 193 | " df = read_csv_robustly(path, engine='python')\n", 194 | " df = std_cols(df)\n", 195 | " for group_name in df.columns:\n", 196 | " 
local_events[f\"{group_name}{suffix}\"] = pd.to_datetime(df[group_name].dropna(), errors='coerce').dropna()\n", 197 | " return local_events\n", 198 | "\n", 199 | " if train_events_file.is_file(): events.update(load_wide_events(train_events_file, \"_train\"))\n", 200 | " if test_events_file.is_file(): events.update(load_wide_events(test_events_file, \"_test\"))\n", 201 | " print(f\" • Total unique event groups to process: {len(events)}\")\n", 202 | " print(\"----------------------------------------------------\\n\")\n", 203 | "\n", 204 | " # --- Set up output path and Excel writer ---\n", 205 | " OUTPUT_DIR = pathlib.Path(\"./outcome\")\n", 206 | " OUTPUT_DIR.mkdir(parents=True, exist_ok=True)\n", 207 | " out_xlsx = OUTPUT_DIR / f\"rd_results_{OUTCOME}.xlsx\"\n", 208 | " \n", 209 | " try:\n", 210 | " writer = pd.ExcelWriter(out_xlsx, engine=\"xlsxwriter\")\n", 211 | " except ImportError:\n", 212 | " warnings.warn(\"'xlsxwriter' not found. To embed plots in Excel, run: pip install xlsxwriter\")\n", 213 | " writer = None # Set writer to None if library is missing\n", 214 | "\n", 215 | " if writer:\n", 216 | " workbook = writer.book\n", 217 | " ws = workbook.add_worksheet(\"Summary\")\n", 218 | " writer.sheets[\"Summary\"] = ws\n", 219 | "\n", 220 | " headers = [\"Asset\", \"Group\", \"Event\", \"Bandwidth\", \"Tau\", \"P_value\", \"CI_Low\", \"CI_High\", \"N_obs\", \"Plot\"]\n", 221 | " for c, h in enumerate(headers): ws.write(0, c, h)\n", 222 | " current_row = 1\n", 223 | "\n", 224 | " # This list will hold data for a potential fallback CSV export\n", 225 | " summary_data_for_csv = []\n", 226 | "\n", 227 | " # --- Main Loop ---\n", 228 | " for asset, df in loaded_data.items():\n", 229 | " print(f\"\\n=== {asset} ====================================================\")\n", 230 | " if df.empty or OUTCOME not in df.columns:\n", 231 | " warnings.warn(f\"'{OUTCOME}' column not found or DataFrame is empty for {asset}. 
Skipping.\")\n", 232 | " continue\n", 233 | " \n", 234 | " for group, dates in events.items():\n", 235 | " for d in dates:\n", 236 | " evt = pd.to_datetime(d)\n", 237 | " if evt not in df[\"Date\"].values: continue\n", 238 | " \n", 239 | " for bw in BANDWIDTHS:\n", 240 | " try:\n", 241 | " win, model = rd_design(df, evt, bw, OUTCOME)\n", 242 | " if len(win) < 10:\n", 243 | " warnings.warn(f\"Skipping {asset} on {d.date()} (±{bw}d) due to insufficient data.\")\n", 244 | " continue\n", 245 | " \n", 246 | " ci_lo, ci_hi = get_ci(model, \"T\")\n", 247 | " \n", 248 | " result_dict = {\n", 249 | " \"Asset\": asset, \"Group\": group, \"Event\": d.strftime('%Y-%m-%d'), \"Bandwidth\": bw,\n", 250 | " \"Tau\": float(model.params[\"T\"]), \"P_value\": float(model.pvalues[\"T\"]),\n", 251 | " \"CI_Low\": ci_lo, \"CI_High\": ci_hi, \"N_obs\": len(win)\n", 252 | " }\n", 253 | " summary_data_for_csv.append(result_dict)\n", 254 | " print(f\"{d.date()} {group:<22} ±{bw:>2}d τ̂={result_dict['Tau']:.6f} p={result_dict['P_value']:.4f}\")\n", 255 | "\n", 256 | " if writer:\n", 257 | " # Write data row to Excel\n", 258 | " ws.write_row(current_row, 0, [result_dict[h] for h in headers if h != 'Plot'])\n", 259 | " # Create plot and insert into Excel\n", 260 | " fig = rd_plot_fig(win, model, evt, bw, asset, OUTCOME)\n", 261 | " buf = io.BytesIO()\n", 262 | " fig.savefig(buf, format=\"png\", bbox_inches=\"tight\")\n", 263 | " buf.seek(0)\n", 264 | " ws.set_row(current_row, 220) # Set row height to fit the plot\n", 265 | " ws.insert_image(current_row, len(headers)-1, f\"img_{asset}_{d.date()}_{bw}\", {\"image_data\": buf, \"x_scale\": 0.8, \"y_scale\": 0.8, 'object_position': 2})\n", 266 | " plt.close(fig)\n", 267 | " current_row += 1\n", 268 | " \n", 269 | " except Exception as e:\n", 270 | " print(f\"Could not process event {d.date()} for {asset} (±{bw}d). Error: {e}\")\n", 271 | "\n", 272 | " # --- Finalize Export ---\n", 273 | " if writer:\n", 274 | " writer.close()\n", 275 | " print(f\"\\nDone! Results and charts saved to {out_xlsx}\")\n", 276 | " else:\n", 277 | " # Fallback to CSV if xlsxwriter is not available\n", 278 | " print(\"\\nExporting summary data to CSV...\")\n", 279 | " df_summary = pd.DataFrame(summary_data_for_csv)\n", 280 | " csv_path = OUTPUT_DIR / f\"rd_results_{OUTCOME}.csv\"\n", 281 | " df_summary.to_csv(csv_path, index=False)\n", 282 | " print(f\"Done! Summary data saved to {csv_path}. 
Plots were not saved.\")" 283 | ] 284 | } 285 | ], 286 | "metadata": { 287 | "kernelspec": { 288 | "display_name": "cuda", 289 | "language": "python", 290 | "name": "python3" 291 | }, 292 | "language_info": { 293 | "codemirror_mode": { 294 | "name": "ipython", 295 | "version": 3 296 | }, 297 | "file_extension": ".py", 298 | "mimetype": "text/x-python", 299 | "name": "python", 300 | "nbconvert_exporter": "python", 301 | "pygments_lexer": "ipython3", 302 | "version": "3.10.16" 303 | } 304 | }, 305 | "nbformat": 4, 306 | "nbformat_minor": 5 307 | } 308 | -------------------------------------------------------------------------------- /RDD_Price.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "id": "864a4a55", 7 | "metadata": {}, 8 | "outputs": [], 9 | "source": [ 10 | "#!/usr/bin/env python3\n", 11 | "# -*- coding: utf-8 -*-\n", 12 | "\"\"\"\n", 13 | "Regression-Discontinuity (RD) Event Study (Price Version, Direct Plotting)\n", 14 | "---------------------------------------------------------------------------\n", 15 | "This script runs the RDD analysis and displays plots directly in the notebook\n", 16 | "or a Matplotlib window. No files are exported.\n", 17 | "\"\"\"\n", 18 | "\n", 19 | "import pathlib\n", 20 | "import warnings\n", 21 | "from typing import List, Dict\n", 22 | "\n", 23 | "import numpy as np\n", 24 | "import pandas as pd\n", 25 | "import matplotlib.pyplot as plt\n", 26 | "\n", 27 | "# ─────────────────────────────────────────────────────────────────────────────\n", 28 | "# 1) Dependency checks & Global parameters\n", 29 | "try:\n", 30 | " import statsmodels.api as sm\n", 31 | " HAVE_SM = True\n", 32 | "except Exception as e:\n", 33 | " warnings.warn(f\"statsmodels unavailable ({e}); falling back to NumPy OLS.\")\n", 34 | " HAVE_SM = False\n", 35 | "try:\n", 36 | " from scipy import stats as sps\n", 37 | " HAVE_SCIPY = True\n", 38 | "except Exception: HAVE_SCIPY = False\n", 39 | "\n", 40 | "BANDWIDTHS = [10, 20]\n", 41 | "OUTCOME = \"Price\"\n", 42 | "\n", 43 | "# ─────────────────────────────────────────────────────────────────────────────\n", 44 | "# 2) Helper functions\n", 45 | "def read_csv_robustly(path: pathlib.Path, engine: str = 'c', sep=','):\n", 46 | " encodings_to_try = ['utf-8', 'utf-8-sig', 'gbk', 'gb2312', 'latin-1']\n", 47 | " if engine == 'python': sep = None\n", 48 | " for enc in encodings_to_try:\n", 49 | " try: return pd.read_csv(path, encoding=enc, engine=engine, sep=sep)\n", 50 | " except (UnicodeDecodeError, UnicodeError, pd.errors.ParserError): continue\n", 51 | " raise ValueError(f\"Failed to read or parse '{path}'.\")\n", 52 | "\n", 53 | "def std_cols(df: pd.DataFrame) -> pd.DataFrame:\n", 54 | " df.columns = (df.columns.str.lower().str.replace(\" \", \"\").str.replace(\".\", \"\", regex=False).str.strip())\n", 55 | " return df\n", 56 | "\n", 57 | "def load_price(path: pathlib.Path) -> pd.DataFrame:\n", 58 | " df = read_csv_robustly(path)\n", 59 | " df = std_cols(df) # Use the std_cols function for consistency\n", 60 | " outcome_lower = OUTCOME.lower()\n", 61 | " if 'date' not in df.columns or outcome_lower not in df.columns: raise ValueError(f\"File '{path}' must contain 'Date' and '{OUTCOME}' columns.\")\n", 62 | " \n", 63 | " df = df.rename(columns={outcome_lower: OUTCOME}) # Rename back to original case\n", 64 | " \n", 65 | " df['Date'] = pd.to_datetime(df['date'])\n", 66 | " df[OUTCOME] = 
(df[OUTCOME].astype(str).str.replace(r'[^0-9\\.\\+\\-eE]', '', regex=True).replace('', np.nan).astype(float))\n", 67 | " return df.sort_values('Date').reset_index(drop=True)\n", 68 | "\n", 69 | "def ols_numpy(y: np.ndarray, X: np.ndarray):\n", 70 | " n, k = X.shape\n", 71 | " beta = np.linalg.lstsq(X, y, rcond=None)[0]\n", 72 | " resid = y - X @ beta\n", 73 | " sigma2 = (resid @ resid) / (n - k)\n", 74 | " cov = sigma2 * np.linalg.inv(X.T @ X)\n", 75 | " se = np.sqrt(np.diag(cov))\n", 76 | " p = (2 * (1 - sps.t.cdf(np.abs(beta / se), df=n - k)) if HAVE_SCIPY else 2 * (1 - np.exp(-0.5 * (beta / se) ** 2) / np.sqrt(2 * np.pi) / np.abs(beta / se)))\n", 77 | " return beta, p, cov\n", 78 | "\n", 79 | "def rd_design(df: pd.DataFrame, event_date: pd.Timestamp, bw: int, y_col: str):\n", 80 | " df = df.copy() # Avoid SettingWithCopyWarning\n", 81 | " df['D'], df['T'] = (df['Date'] - event_date).dt.days, (df['Date'] >= event_date).astype(int)\n", 82 | " win = df[df['D'].between(-bw, bw)].dropna(subset=[y_col])\n", 83 | " X_df = win[['T', 'D']].astype(float)\n", 84 | " X_df['TD'] = X_df['T'] * X_df['D']\n", 85 | " X, y = np.column_stack([np.ones(len(X_df)), X_df.to_numpy()]), win[y_col].to_numpy(float)\n", 86 | " cols = ['const', 'T', 'D', 'TD']\n", 87 | " if HAVE_SM:\n", 88 | " model = sm.OLS(y, X).fit(cov_type='HAC', cov_kwds={'maxlags': 3})\n", 89 | " model.colnames, model.params, model.pvalues, model.cov = cols, pd.Series(model.params, index=cols), pd.Series(model.pvalues, index=cols), model.cov_params()\n", 90 | " return win, model\n", 91 | " else:\n", 92 | " beta, p, cov = ols_numpy(y, X)\n", 93 | " class Result:\n", 94 | " params, pvalues, cov, colnames = pd.Series(beta, index=cols), pd.Series(p, index=cols), pd.DataFrame(cov, index=cols, columns=cols), cols\n", 95 | " def conf_int(self):\n", 96 | " se = np.sqrt(np.diag(self.cov))\n", 97 | " return pd.DataFrame(np.column_stack([self.params - 1.96 * se, self.params + 1.96 * se]), index=cols, columns=['low', 'high'])\n", 98 | " return win, Result()\n", 99 | "\n", 100 | "def get_ci(model, param: str):\n", 101 | " ci = model.conf_int()\n", 102 | " return ci.loc[param] if isinstance(ci, pd.DataFrame) else (ci[model.colnames.index(param), 0], ci[model.colnames.index(param), 1])\n", 103 | "\n", 104 | "def rd_plot(win: pd.DataFrame, model, evt: pd.Timestamp, bw: int, asset: str, y_col: str):\n", 105 | " grid = np.arange(-bw, bw + 1)\n", 106 | " α, τ, β, γ = (model.params[k] for k in ['const', 'T', 'D', 'TD'])\n", 107 | " μ_L, μ_R = α + β * grid, α + τ + (β + γ) * grid\n", 108 | " cov = model.cov if isinstance(model.cov, np.ndarray) else model.cov.to_numpy()\n", 109 | " X_L = np.column_stack([np.ones_like(grid), np.zeros_like(grid), grid, np.zeros_like(grid)])\n", 110 | " X_R = np.column_stack([np.ones_like(grid), np.ones_like(grid), grid, grid])\n", 111 | " se_L, se_R = np.sqrt(np.einsum('ij,jk,ik->i', X_L, cov, X_L)), np.sqrt(np.einsum('ij,jk,ik->i', X_R, cov, X_R))\n", 112 | " upper_L, lower_L, upper_R, lower_R = μ_L + 1.96 * se_L, μ_L - 1.96 * se_L, μ_R + 1.96 * se_R, μ_R - 1.96 * se_R\n", 113 | " plt.figure(figsize=(8, 5))\n", 114 | " colors = win['D'].apply(lambda x: 'royalblue' if x < 0 else 'firebrick')\n", 115 | " plt.scatter(win['D'], win[y_col], s=18, color=colors, alpha=0.7, zorder=2)\n", 116 | " plt.plot(grid[grid < 0], μ_L[grid < 0], color='forestgreen', lw=2)\n", 117 | " plt.plot(grid[grid >= 0], μ_R[grid >= 0], color='forestgreen', lw=2)\n", 118 | " plt.fill_between(grid[grid < 0], lower_L[grid < 0], upper_L[grid < 0], color='grey', 
alpha=0.3)\n", 119 | " plt.fill_between(grid[grid >= 0], lower_R[grid >= 0], upper_R[grid >= 0], color='grey', alpha=0.3)\n", 120 | " plt.axvline(0, color='crimson', lw=2, ls='--')\n", 121 | " plt.title(f\"{asset} | {evt.date()} | ±{bw}d τ̂ = {model.params['T']:.4f} (p = {model.pvalues['T']:.3g})\")\n", 122 | " plt.xlabel(\"Days relative to event\")\n", 123 | " plt.ylabel(y_col)\n", 124 | " plt.tight_layout()\n", 125 | " plt.show()\n", 126 | "\n", 127 | "# ─────────────────────────────────────────────────────────────────────────────\n", 128 | "# 3) Main script\n", 129 | "if __name__ == \"__main__\":\n", 130 | " # --- Data Loading (ADJUSTED) ---\n", 131 | " print(\"--- Loading Benchmark Data ---\")\n", 132 | " # Define the directory where benchmark CSVs are located.\n", 133 | " BENCHMARK_DIR = pathlib.Path(\"./benchmark\")\n", 134 | " if not BENCHMARK_DIR.is_dir(): raise FileNotFoundError(f\"Benchmark directory '{BENCHMARK_DIR}' not found.\")\n", 135 | " \n", 136 | " bench_files = [\"Gold.csv\", \"Nasdaq100.csv\", \"SPY.csv\"]\n", 137 | " # Load each file from the benchmark directory.\n", 138 | " loaded_data = {\n", 139 | " pathlib.Path(f).stem: load_price(BENCHMARK_DIR / f) for f in bench_files\n", 140 | " }\n", 141 | " print(f\" • Loaded: {', '.join(loaded_data.keys())}\")\n", 142 | " \n", 143 | " print(\"\\n--- Loading Crypto Asset Data ---\")\n", 144 | " CRYPTO_DATA_DIR = pathlib.Path(\"./crypto_data\")\n", 145 | " if not CRYPTO_DATA_DIR.is_dir(): raise FileNotFoundError(f\"Directory '{CRYPTO_DATA_DIR}' not found.\")\n", 146 | " crypto_files = list(CRYPTO_DATA_DIR.glob(\"*.csv\"))\n", 147 | " if not crypto_files: raise FileNotFoundError(f\"No CSV files found in '{CRYPTO_DATA_DIR}'.\")\n", 148 | " for f_path in crypto_files:\n", 149 | " asset_name = f_path.stem\n", 150 | " loaded_data[asset_name] = load_price(f_path)\n", 151 | " print(f\" • Found and loaded {len(crypto_files)} crypto assets.\")\n", 152 | "\n", 153 | " print(\"\\n--- Loading Wide-Format Event Calendar Data ---\")\n", 154 | " EVENTS_DIR = pathlib.Path(\"./events\")\n", 155 | " train_events_file, test_events_file = EVENTS_DIR / \"training_set.csv\", EVENTS_DIR / \"test_set.csv\"\n", 156 | " if not EVENTS_DIR.is_dir(): raise FileNotFoundError(f\"Directory '{EVENTS_DIR}' not found.\")\n", 157 | " if not train_events_file.is_file(): raise FileNotFoundError(f\"File '{train_events_file}' not found.\")\n", 158 | " if not test_events_file.is_file(): raise FileNotFoundError(f\"File '{test_events_file}' not found.\")\n", 159 | " \n", 160 | " events = {}\n", 161 | " def load_wide_events(path: pathlib.Path, suffix: str) -> dict:\n", 162 | " local_events = {}\n", 163 | " df = read_csv_robustly(path, engine='python')\n", 164 | " df = std_cols(df)\n", 165 | " for group_name in df.columns:\n", 166 | " local_events[f\"{group_name}{suffix}\"] = pd.to_datetime(df[group_name].dropna(), errors='coerce').dropna()\n", 167 | " return local_events\n", 168 | "\n", 169 | " if train_events_file.is_file(): events.update(load_wide_events(train_events_file, \"_train\"))\n", 170 | " if test_events_file.is_file(): events.update(load_wide_events(test_events_file, \"_test\"))\n", 171 | " print(f\" • Total unique event groups to process: {len(events)}\")\n", 172 | " print(\"----------------------------------------------------\\n\")\n", 173 | " \n", 174 | " # --- Main Loop ---\n", 175 | " for asset, df in loaded_data.items():\n", 176 | " print(f\"\\n=== {asset} ====================================================\")\n", 177 | " if df.empty: continue\n", 178 | 
" for group, dates in events.items():\n", 179 | " for d in dates:\n", 180 | " evt = pd.to_datetime(d)\n", 181 | " if evt not in df['Date'].values: continue\n", 182 | " \n", 183 | " for bw in BANDWIDTHS:\n", 184 | " try:\n", 185 | " win, model = rd_design(df, evt, bw, OUTCOME)\n", 186 | " if len(win) < 10:\n", 187 | " warnings.warn(f\"Warning: Too few observations for {asset} on {d.date()} (±{bw}d). Skipping.\")\n", 188 | " continue\n", 189 | " \n", 190 | " ci_lo, ci_hi = get_ci(model, 'T')\n", 191 | " # Print results to console\n", 192 | " print(f\"{d.date()} {group:<22} ±{bw:>2}d \"\n", 193 | " f\"τ̂={model.params['T']:.6f} \"\n", 194 | " f\"p={model.pvalues['T']:.4f} \"\n", 195 | " f\"CI=({ci_lo:.6f}, {ci_hi:.6f}) \"\n", 196 | " f\"N={len(win)}\")\n", 197 | " \n", 198 | " # Call the plotting function to display the chart\n", 199 | " rd_plot(win, model, evt, bw, asset, OUTCOME)\n", 200 | " \n", 201 | " except Exception as e:\n", 202 | " print(f\"Could not process event {d.date()} for {asset} (±{bw}d). Error: {e}\")" 203 | ] 204 | } 205 | ], 206 | "metadata": { 207 | "kernelspec": { 208 | "display_name": "cuda", 209 | "language": "python", 210 | "name": "python3" 211 | }, 212 | "language_info": { 213 | "codemirror_mode": { 214 | "name": "ipython", 215 | "version": 3 216 | }, 217 | "file_extension": ".py", 218 | "mimetype": "text/x-python", 219 | "name": "python", 220 | "nbconvert_exporter": "python", 221 | "pygments_lexer": "ipython3", 222 | "version": "3.10.16" 223 | } 224 | }, 225 | "nbformat": 4, 226 | "nbformat_minor": 5 227 | } 228 | -------------------------------------------------------------------------------- /RDD_Vol. to excel.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "id": "6d21127e", 7 | "metadata": {}, 8 | "outputs": [], 9 | "source": [ 10 | "#!/usr/bin/env python3\n", 11 | "# -*- coding: utf-8 -*-\n", 12 | "\"\"\"\n", 13 | "Regression-Discontinuity (RD) Event Study – Volatility Edition (Modularized)\n", 14 | "----------------------------------------------------------------------------\n", 15 | "This script runs RDD analysis for volatility based on the modular data structure.\n", 16 | "It builds an Excel workbook in the 'outcome' folder that includes:\n", 17 | "\n", 18 | " • A “Summary” sheet listing τ̂ (treatment effect), p-value, 95 % CI, N, etc.\n", 19 | " • An RD plot embedded in each corresponding row.\n", 20 | "\n", 21 | "Required for Export:\n", 22 | " $ pip install xlsxwriter\n", 23 | "\"\"\"\n", 24 | "\n", 25 | "import io\n", 26 | "import pathlib\n", 27 | "import warnings\n", 28 | "from typing import List, Dict\n", 29 | "\n", 30 | "import numpy as np\n", 31 | "import pandas as pd\n", 32 | "import matplotlib.pyplot as plt\n", 33 | "\n", 34 | "# ─────────────────────────────────────────────────────────────────────────────\n", 35 | "# 1) Dependency checks\n", 36 | "try:\n", 37 | " import statsmodels.api as sm\n", 38 | " HAVE_SM = True\n", 39 | "except Exception as e:\n", 40 | " warnings.warn(f\"statsmodels unavailable ({e}); falling back to NumPy OLS.\")\n", 41 | " HAVE_SM = False\n", 42 | "\n", 43 | "try:\n", 44 | " from scipy import stats as sps\n", 45 | " HAVE_SCIPY = True\n", 46 | "except Exception:\n", 47 | " HAVE_SCIPY = False\n", 48 | "\n", 49 | "# ─────────────────────────────────────────────────────────────────────────────\n", 50 | "# 2) Global parameters\n", 51 | "BANDWIDTHS = [10, 20]\n", 52 | "# --- MODIFIED: The outcome variable 
is now 'vol' ---\n", 53 | "OUTCOME = \"vol\"\n", 54 | "\n", 55 | "# ─────────────────────────────────────────────────────────────────────────────\n", 56 | "# 3) Helper functions\n", 57 | "def read_csv_robustly(path: pathlib.Path, engine: str = 'c', sep=','):\n", 58 | " \"\"\"Reads a CSV file by trying a sequence of common encodings.\"\"\"\n", 59 | " encodings_to_try = ['utf-8', 'utf-8-sig', 'gbk', 'gb2312', 'latin-1']\n", 60 | " if engine == 'python': sep = None\n", 61 | " for enc in encodings_to_try:\n", 62 | " try:\n", 63 | " return pd.read_csv(path, encoding=enc, engine=engine, sep=sep)\n", 64 | " except (UnicodeDecodeError, UnicodeError, pd.errors.ParserError):\n", 65 | " continue\n", 66 | " raise ValueError(f\"Failed to read or parse '{path}'.\")\n", 67 | "\n", 68 | "def std_cols(df: pd.DataFrame) -> pd.DataFrame:\n", 69 | " \"\"\"Normalise column names for consistency.\"\"\"\n", 70 | " df.columns = (df.columns.str.lower().str.replace(\" \", \"\").str.replace(\".\", \"\", regex=False).str.strip())\n", 71 | " return df\n", 72 | "\n", 73 | "def load_data(path: pathlib.Path) -> pd.DataFrame:\n", 74 | " \"\"\"Read a CSV, clean the Vol. column, and return a date-sorted DataFrame.\"\"\"\n", 75 | " df = read_csv_robustly(path)\n", 76 | " df = std_cols(df) # Standardize column names (e.g., 'Vol.' -> 'vol')\n", 77 | " \n", 78 | " if \"date\" not in df.columns or \"vol\" not in df.columns:\n", 79 | " raise ValueError(f\"File '{path}' must contain 'Date' and 'Vol.' columns.\")\n", 80 | " \n", 81 | " df = df.rename(columns={\"vol\": OUTCOME}) # Rename to the generic OUTCOME name for compatibility\n", 82 | " \n", 83 | " df[\"Date\"] = pd.to_datetime(df[\"date\"])\n", 84 | " \n", 85 | " # Specific cleaning function for Volume data\n", 86 | " def parse_volume(vol_str):\n", 87 | " vol_str = str(vol_str).strip().upper()\n", 88 | " if pd.isna(vol_str) or vol_str == '': return np.nan\n", 89 | " multipliers = {'K': 1e3, 'M': 1e6, 'B': 1e9}\n", 90 | " if vol_str and vol_str[-1] in multipliers:\n", 91 | " try:\n", 92 | " return float(vol_str[:-1]) * multipliers[vol_str[-1]]\n", 93 | " except ValueError:\n", 94 | " return np.nan\n", 95 | " return pd.to_numeric(vol_str, errors='coerce')\n", 96 | " \n", 97 | " df[OUTCOME] = df[OUTCOME].apply(parse_volume)\n", 98 | " \n", 99 | " return df.sort_values(\"Date\").reset_index(drop=True)\n", 100 | "\n", 101 | "\n", 102 | "def ols_numpy(y: np.ndarray, X: np.ndarray):\n", 103 | " \"\"\"Lightweight OLS with White SEs; returns beta, p-values, covariance.\"\"\"\n", 104 | " n, k = X.shape\n", 105 | " beta = np.linalg.lstsq(X, y, rcond=None)[0]\n", 106 | " resid = y - X @ beta\n", 107 | " sigma2 = (resid @ resid) / (n - k)\n", 108 | " cov = sigma2 * np.linalg.inv(X.T @ X)\n", 109 | " se = np.sqrt(np.diag(cov))\n", 110 | " p = (2 * (1 - sps.t.cdf(np.abs(beta / se), df=n - k)) if HAVE_SCIPY else 2 * (1 - np.exp(-0.5 * (beta / se) ** 2) / np.sqrt(2 * np.pi) / np.abs(beta / se)))\n", 111 | " return beta, p, cov\n", 112 | "\n", 113 | "\n", 114 | "def rd_design(df: pd.DataFrame, event_date: pd.Timestamp, bw: int, y_col: str):\n", 115 | " \"\"\"Build an RD window and return the window DataFrame and a fitted model.\"\"\"\n", 116 | " df = df.copy()\n", 117 | " df[\"D\"], df[\"T\"] = (df[\"Date\"] - event_date).dt.days, (df[\"Date\"] >= event_date).astype(int)\n", 118 | " win = df[df[\"D\"].between(-bw, bw)].dropna(subset=[y_col])\n", 119 | " X_df = win[[\"T\", \"D\"]].astype(float)\n", 120 | " X_df[\"TD\"] = X_df[\"T\"] * X_df[\"D\"]\n", 121 | " X, y = 
np.column_stack([np.ones(len(X_df)), X_df.to_numpy()]), win[y_col].to_numpy(float)\n", 122 | " cols = [\"const\", \"T\", \"D\", \"TD\"]\n", 123 | " if HAVE_SM:\n", 124 | " model = sm.OLS(y, X).fit(cov_type=\"HAC\", cov_kwds={\"maxlags\": 3})\n", 125 | " model.colnames, model.params, model.pvalues, model.cov = cols, pd.Series(model.params, index=cols), pd.Series(model.pvalues, index=cols), model.cov_params()\n", 126 | " return win, model\n", 127 | " else:\n", 128 | " beta, p, cov = ols_numpy(y, X)\n", 129 | " class Result:\n", 130 | " params, pvalues, cov, colnames = pd.Series(beta, index=cols), pd.Series(p, index=cols), pd.DataFrame(cov, index=cols, columns=cols), cols\n", 131 | " def conf_int(self):\n", 132 | " se = np.sqrt(np.diag(self.cov))\n", 133 | " return pd.DataFrame(np.column_stack([self.params - 1.96 * se, self.params + 1.96 * se]), index=cols, columns=[\"low\", \"high\"])\n", 134 | " return win, Result()\n", 135 | "\n", 136 | "\n", 137 | "def get_ci(model, param: str):\n", 138 | " \"\"\"Return (low, high) 95 % CI for *param*, backend-agnostic.\"\"\"\n", 139 | " ci = model.conf_int()\n", 140 | " return ci.loc[param] if isinstance(ci, pd.DataFrame) else (ci[model.colnames.index(param), 0], ci[model.colnames.index(param), 1])\n", 141 | "\n", 142 | "\n", 143 | "def rd_plot_fig(win: pd.DataFrame, model, evt: pd.Timestamp, bw: int, asset: str, y_col: str):\n", 144 | " \"\"\"Create an RD scatter plot and return a matplotlib Figure.\"\"\"\n", 145 | " grid = np.arange(-bw, bw + 1)\n", 146 | " α, τ, β, γ = (model.params[k] for k in [\"const\", \"T\", \"D\", \"TD\"])\n", 147 | " μ_L, μ_R = α + β * grid, α + τ + (β + γ) * grid\n", 148 | " cov = model.cov if isinstance(model.cov, np.ndarray) else model.cov.to_numpy()\n", 149 | " X_L = np.column_stack([np.ones_like(grid), np.zeros_like(grid), grid, np.zeros_like(grid)])\n", 150 | " X_R = np.column_stack([np.ones_like(grid), np.ones_like(grid), grid, grid])\n", 151 | " se_L, se_R = np.sqrt(np.einsum(\"ij,jk,ik->i\", X_L, cov, X_L)), np.sqrt(np.einsum(\"ij,jk,ik->i\", X_R, cov, X_R))\n", 152 | " upper_L, lower_L, upper_R, lower_R = μ_L + 1.96 * se_L, μ_L - 1.96 * se_L, μ_R + 1.96 * se_R, μ_R - 1.96 * se_R\n", 153 | "\n", 154 | " fig, ax = plt.subplots(figsize=(7, 4))\n", 155 | " colors = win[\"D\"].apply(lambda x: \"royalblue\" if x < 0 else \"firebrick\")\n", 156 | " ax.scatter(win[\"D\"], win[y_col], s=18, color=colors, alpha=0.7, zorder=2)\n", 157 | " ax.plot(grid[grid < 0], μ_L[grid < 0], color=\"forestgreen\", lw=2)\n", 158 | " ax.plot(grid[grid >= 0], μ_R[grid >= 0], color=\"forestgreen\", lw=2)\n", 159 | " ax.fill_between(grid[grid < 0], lower_L[grid < 0], upper_L[grid < 0], color='grey', alpha=0.3)\n", 160 | " ax.fill_between(grid[grid >= 0], lower_R[grid >= 0], upper_R[grid >= 0], color='grey', alpha=0.3)\n", 161 | " ax.axvline(0, color=\"crimson\", lw=2, ls=\"--\")\n", 162 | " ax.set_title(f\"{asset} | {evt.date()} ±{bw}d τ̂={model.params['T']:.4f} p={model.pvalues['T']:.3g}\")\n", 163 | " ax.set_xlabel(\"Days relative to event\"), ax.set_ylabel(y_col.capitalize())\n", 164 | " plt.tight_layout()\n", 165 | " return fig\n", 166 | "\n", 167 | "# ─────────────────────────────────────────────────────────────────────────────\n", 168 | "# 4) Main driver\n", 169 | "if __name__ == \"__main__\":\n", 170 | " # --- ADJUSTED: Dynamic Data Loading ---\n", 171 | " print(\"--- Loading Benchmark Data ---\")\n", 172 | " # Define the directory where benchmark CSVs are located.\n", 173 | " BENCHMARK_DIR = pathlib.Path(\"./benchmark\")\n", 174 | " if not 
BENCHMARK_DIR.is_dir(): raise FileNotFoundError(f\"Benchmark directory '{BENCHMARK_DIR}' not found.\")\n", 175 | " \n", 176 | " bench_files = [\"Gold.csv\", \"Nasdaq100.csv\", \"SPY.csv\"]\n", 177 | " # Load each file from the benchmark directory.\n", 178 | " loaded_data = {\n", 179 | " pathlib.Path(f).stem: load_data(BENCHMARK_DIR / f) for f in bench_files\n", 180 | " }\n", 181 | " print(f\" • Loaded: {', '.join(loaded_data.keys())}\")\n", 182 | " \n", 183 | " print(\"\\n--- Loading Crypto Asset Data ---\")\n", 184 | " CRYPTO_DATA_DIR = pathlib.Path(\"./crypto_data\")\n", 185 | " if not CRYPTO_DATA_DIR.is_dir(): raise FileNotFoundError(f\"Directory '{CRYPTO_DATA_DIR}' not found.\")\n", 186 | " crypto_files = list(CRYPTO_DATA_DIR.glob(\"*.csv\"))\n", 187 | " if not crypto_files: raise FileNotFoundError(f\"No CSV files found in '{CRYPTO_DATA_DIR}'.\")\n", 188 | " for f_path in crypto_files:\n", 189 | " asset_name = f_path.stem\n", 190 | " loaded_data[asset_name] = load_data(f_path)\n", 191 | " print(f\" • Found and loaded {len(crypto_files)} crypto assets.\")\n", 192 | "\n", 193 | " print(\"\\n--- Loading Wide-Format Event Calendar Data ---\")\n", 194 | " EVENTS_DIR = pathlib.Path(\"./events\")\n", 195 | " train_events_file, test_events_file = EVENTS_DIR / \"training_set.csv\", EVENTS_DIR / \"test_set.csv\"\n", 196 | " if not EVENTS_DIR.is_dir(): raise FileNotFoundError(f\"Directory '{EVENTS_DIR}' not found.\")\n", 197 | " if not train_events_file.is_file(): raise FileNotFoundError(f\"File '{train_events_file}' not found.\")\n", 198 | " if not test_events_file.is_file(): raise FileNotFoundError(f\"File '{test_events_file}' not found.\")\n", 199 | " \n", 200 | " events = {}\n", 201 | " def load_wide_events(path: pathlib.Path, suffix: str) -> dict:\n", 202 | " local_events = {}\n", 203 | " df = read_csv_robustly(path, engine='python')\n", 204 | " df = std_cols(df)\n", 205 | " for group_name in df.columns:\n", 206 | " local_events[f\"{group_name}{suffix}\"] = pd.to_datetime(df[group_name].dropna(), errors='coerce').dropna()\n", 207 | " return local_events\n", 208 | "\n", 209 | " if train_events_file.is_file(): events.update(load_wide_events(train_events_file, \"_train\"))\n", 210 | " if test_events_file.is_file(): events.update(load_wide_events(test_events_file, \"_test\"))\n", 211 | " print(f\" • Total unique event groups to process: {len(events)}\")\n", 212 | " print(\"----------------------------------------------------\\n\")\n", 213 | "\n", 214 | " # Set up output path and Excel writer\n", 215 | " OUTPUT_DIR = pathlib.Path(\"./outcome\")\n", 216 | " OUTPUT_DIR.mkdir(parents=True, exist_ok=True)\n", 217 | " out_xlsx = OUTPUT_DIR / f\"rd_results_{OUTCOME}.xlsx\"\n", 218 | " \n", 219 | " try:\n", 220 | " writer = pd.ExcelWriter(out_xlsx, engine=\"xlsxwriter\")\n", 221 | " except ImportError:\n", 222 | " warnings.warn(\"'xlsxwriter' not found. 
To embed plots in Excel, run: pip install xlsxwriter\")\n", 223 | " writer = None\n", 224 | "\n", 225 | " if writer:\n", 226 | " workbook = writer.book\n", 227 | " ws = workbook.add_worksheet(\"Summary\")\n", 228 | " writer.sheets[\"Summary\"] = ws\n", 229 | " headers = [\"Asset\", \"Group\", \"Event\", \"Bandwidth\", \"Tau\", \"P_value\", \"CI_Low\", \"CI_High\", \"N_obs\", \"Plot\"]\n", 230 | " for c, h in enumerate(headers): ws.write(0, c, h)\n", 231 | " current_row = 1\n", 232 | "\n", 233 | " summary_data_for_csv = []\n", 234 | "\n", 235 | " # --- Main Loop ---\n", 236 | " for asset, df in loaded_data.items():\n", 237 | " print(f\"\\n=== {asset} ====================================================\")\n", 238 | " if df.empty or OUTCOME not in df.columns:\n", 239 | " warnings.warn(f\"'{OUTCOME}' column not found or DataFrame is empty for {asset}. Skipping.\")\n", 240 | " continue\n", 241 | " \n", 242 | " for group, dates in events.items():\n", 243 | " for d in dates:\n", 244 | " evt = pd.to_datetime(d)\n", 245 | " if evt not in df[\"Date\"].values: continue\n", 246 | " \n", 247 | " for bw in BANDWIDTHS:\n", 248 | " try:\n", 249 | " win, model = rd_design(df, evt, bw, OUTCOME)\n", 250 | " if len(win) < 10:\n", 251 | " warnings.warn(f\"Skipping {asset} on {d.date()} (±{bw}d) due to insufficient data.\")\n", 252 | " continue\n", 253 | " \n", 254 | " ci_lo, ci_hi = get_ci(model, \"T\")\n", 255 | " \n", 256 | " result_dict = {\n", 257 | " \"Asset\": asset, \"Group\": group, \"Event\": d.strftime('%Y-%m-%d'), \"Bandwidth\": bw,\n", 258 | " \"Tau\": float(model.params[\"T\"]), \"P_value\": float(model.pvalues[\"T\"]),\n", 259 | " \"CI_Low\": ci_lo, \"CI_High\": ci_hi, \"N_obs\": len(win)\n", 260 | " }\n", 261 | " summary_data_for_csv.append(result_dict)\n", 262 | " print(f\"{d.date()} {group:<22} ±{bw:>2}d τ̂={result_dict['Tau']:.6f} p={result_dict['P_value']:.4f}\")\n", 263 | "\n", 264 | " if writer:\n", 265 | " ws.write_row(current_row, 0, [result_dict[h] for h in headers if h != 'Plot'])\n", 266 | " fig = rd_plot_fig(win, model, evt, bw, asset, OUTCOME)\n", 267 | " buf = io.BytesIO()\n", 268 | " fig.savefig(buf, format=\"png\", bbox_inches=\"tight\")\n", 269 | " buf.seek(0)\n", 270 | " ws.set_row(current_row, 220) # Set row height\n", 271 | " ws.insert_image(current_row, len(headers)-1, f\"img_{asset}_{d.date()}_{bw}\", {\"image_data\": buf, \"x_scale\": 0.8, \"y_scale\": 0.8, 'object_position': 2})\n", 272 | " plt.close(fig)\n", 273 | " current_row += 1\n", 274 | " \n", 275 | " except Exception as e:\n", 276 | " print(f\"Could not process event {d.date()} for {asset} (±{bw}d). Error: {e}\")\n", 277 | "\n", 278 | " # Finalize Export\n", 279 | " if writer:\n", 280 | " writer.close()\n", 281 | " print(f\"\\nDone! Results and charts saved to {out_xlsx}\")\n", 282 | " else:\n", 283 | " print(\"\\nExporting summary data to CSV (plots not saved)...\")\n", 284 | " df_summary = pd.DataFrame(summary_data_for_csv)\n", 285 | " csv_path = OUTPUT_DIR / f\"rd_results_{OUTCOME}.csv\"\n", 286 | " df_summary.to_csv(csv_path, index=False)\n", 287 | " print(f\"Done! 
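# Layout notes for the Excel export above: ws.set_row(row, 220) fixes the row
# height (in points) so each embedded chart fits its row, and object_position=2
# in insert_image anchors the image to its cell so it moves with the row but
# keeps its own size (standard XlsxWriter positioning, per its docs).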
/RDD_Vol..ipynb: --------------------------------------------------------------------------------
1 | {
2 |  "cells": [
3 |   {
4 |    "cell_type": "code",
5 |    "execution_count": null,
6 |    "id": "910d6475",
7 |    "metadata": {},
8 |    "outputs": [],
9 |    "source": [
10 |     "#!/usr/bin/env python3\n",
11 |     "# -*- coding: utf-8 -*-\n",
12 |     "\"\"\"\n",
13 |     "Regression-Discontinuity (RD) Event Study (Volume Version, Direct Plotting)\n",
14 |     "---------------------------------------------------------------------------\n",
15 |     "This script runs the RDD analysis for Volume and displays plots directly.\n",
16 |     "No files are exported.\n",
17 |     "\"\"\"\n",
18 |     "\n",
19 |     "import pathlib\n",
20 |     "import warnings\n",
21 |     "from typing import List, Dict\n",
22 |     "\n",
23 |     "import numpy as np\n",
24 |     "import pandas as pd\n",
25 |     "import matplotlib.pyplot as plt\n",
26 |     "\n",
27 |     "# ─────────────────────────────────────────────────────────────────────────────\n",
28 |     "# 1) Dependency checks & Global parameters\n",
29 |     "try:\n",
30 |     "    import statsmodels.api as sm\n",
31 |     "    HAVE_SM = True\n",
32 |     "except Exception as e:\n",
33 |     "    warnings.warn(f\"statsmodels unavailable ({e}); falling back to NumPy OLS.\")\n",
34 |     "    HAVE_SM = False\n",
35 |     "try:\n",
36 |     "    from scipy import stats as sps\n",
37 |     "    HAVE_SCIPY = True\n",
38 |     "except Exception: HAVE_SCIPY = False\n",
39 |     "\n",
40 |     "BANDWIDTHS = [10, 20]\n",
41 |     "OUTCOME = \"vol\"  # Changed to 'vol'\n",
42 |     "\n",
43 |     "# ─────────────────────────────────────────────────────────────────────────────\n",
44 |     "# 2) Helper functions\n",
45 |     "def read_csv_robustly(path: pathlib.Path, engine: str = 'c', sep=','):\n",
46 |     "    encodings_to_try = ['utf-8', 'utf-8-sig', 'gbk', 'gb2312', 'latin-1']\n",
47 |     "    if engine == 'python': sep = None\n",
48 |     "    for enc in encodings_to_try:\n",
49 |     "        try: return pd.read_csv(path, encoding=enc, engine=engine, sep=sep)\n",
50 |     "        except (UnicodeDecodeError, UnicodeError, pd.errors.ParserError): continue\n",
51 |     "    raise ValueError(f\"Failed to read or parse '{path}'.\")\n",
52 |     "\n",
53 |     "def std_cols(df: pd.DataFrame) -> pd.DataFrame:\n",
54 |     "    df.columns = (df.columns.str.lower().str.replace(\" \", \"\").str.replace(\".\", \"\", regex=False).str.strip())\n",
55 |     "    return df\n",
56 |     "\n",
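# std_cols makes header matching vendor-agnostic: for example, the columns
# ['Date', 'Price', 'Vol.', 'Change %'] normalize to ['date', 'price', 'vol', 'change%'],
# which is why load_data below can simply test for the 'date' and 'vol' keys
# regardless of a vendor's capitalization, spacing or punctuation.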
57 |     "def load_data(path: pathlib.Path) -> pd.DataFrame:\n",
58 |     "    \"\"\"Read a CSV, clean the Vol. column, and return a date-sorted DataFrame.\"\"\"\n",
59 |     "    df = read_csv_robustly(path)\n",
60 |     "    df = std_cols(df)\n",
61 |     "    if \"date\" not in df.columns or \"vol\" not in df.columns:\n",
62 |     "        raise ValueError(f\"File '{path}' must contain 'Date' and 'Vol.' columns.\")\n",
63 |     "    df = df.rename(columns={\"vol\": OUTCOME})\n",
64 |     "    df[\"Date\"] = pd.to_datetime(df[\"date\"])\n",
65 |     "    def parse_volume(vol_str):\n",
66 |     "        vol_str = str(vol_str).strip().upper()\n",
67 |     "        if pd.isna(vol_str) or vol_str == '': return np.nan\n",
68 |     "        multipliers = {'K': 1e3, 'M': 1e6, 'B': 1e9}\n",
69 |     "        if vol_str and vol_str[-1] in multipliers:\n",
70 |     "            try:\n",
71 |     "                return float(vol_str[:-1]) * multipliers[vol_str[-1]]\n",
72 |     "            except ValueError:\n",
73 |     "                return np.nan\n",
74 |     "        return pd.to_numeric(vol_str, errors='coerce')\n",
75 |     "    df[OUTCOME] = df[OUTCOME].apply(parse_volume)\n",
76 |     "    return df.sort_values(\"Date\").reset_index(drop=True)\n",
77 |     "\n",
78 |     "def ols_numpy(y: np.ndarray, X: np.ndarray):\n",
79 |     "    n, k = X.shape\n",
80 |     "    beta = np.linalg.lstsq(X, y, rcond=None)[0]\n",
81 |     "    resid = y - X @ beta\n",
82 |     "    sigma2 = (resid @ resid) / (n - k)\n",
83 |     "    cov = sigma2 * np.linalg.inv(X.T @ X)\n",
84 |     "    se = np.sqrt(np.diag(cov))\n",
85 |     "    p = (2 * (1 - sps.t.cdf(np.abs(beta / se), df=n - k)) if HAVE_SCIPY else np.minimum(1.0, 2 * np.exp(-0.5 * (beta / se) ** 2) / (np.sqrt(2 * np.pi) * np.abs(beta / se))))  # fallback: two-sided Mills-ratio tail approximation; the previous 2*(1 - tail) form tended to 2, not 0, for large |t|\n",
86 |     "    return beta, p, cov\n",
87 |     "\n",
88 |     "def rd_design(df: pd.DataFrame, event_date: pd.Timestamp, bw: int, y_col: str):\n",
89 |     "    df = df.copy()  # Avoid SettingWithCopyWarning\n",
90 |     "    df['D'], df['T'] = (df['Date'] - event_date).dt.days, (df['Date'] >= event_date).astype(int)\n",
91 |     "    win = df[df['D'].between(-bw, bw)].dropna(subset=[y_col])\n",
92 |     "    X_df = win[['T', 'D']].astype(float)\n",
93 |     "    X_df['TD'] = X_df['T'] * X_df['D']\n",
94 |     "    X, y = np.column_stack([np.ones(len(X_df)), X_df.to_numpy()]), win[y_col].to_numpy(float)\n",
95 |     "    cols = ['const', 'T', 'D', 'TD']\n",
96 |     "    if HAVE_SM:\n",
97 |     "        model = sm.OLS(y, X).fit(cov_type='HAC', cov_kwds={'maxlags': 3})\n",
98 |     "        model.colnames, model.params, model.pvalues, model.cov = cols, pd.Series(model.params, index=cols), pd.Series(model.pvalues, index=cols), model.cov_params()\n",
99 |     "        return win, model\n",
100 |     "    else:\n",
101 |     "        beta, p, cov = ols_numpy(y, X)\n",
102 |     "        class Result:\n",
103 |     "            params, pvalues, cov, colnames = pd.Series(beta, index=cols), pd.Series(p, index=cols), pd.DataFrame(cov, index=cols, columns=cols), cols\n",
104 |     "            def conf_int(self):\n",
105 |     "                se = np.sqrt(np.diag(self.cov))\n",
106 |     "                return pd.DataFrame(np.column_stack([self.params - 1.96 * se, self.params + 1.96 * se]), index=cols, columns=['low', 'high'])\n",
107 |     "        return win, Result()\n",
108 |     "\n",
109 |     "def get_ci(model, param: str):\n",
110 |     "    ci = model.conf_int()\n",
111 |     "    return ci.loc[param] if isinstance(ci, pd.DataFrame) else (ci[model.colnames.index(param), 0], ci[model.colnames.index(param), 1])\n",
112 |     "\n",
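# rd_design above fits the standard local-linear RD specification
#     y = const + τ·T + β·D + γ·(T·D) + ε,
# where D is the signed day-distance from the event and T = 1{D >= 0}.
# τ (reported as "T") is the estimated jump in the outcome exactly at the
# event date, with the trend allowed to change slope across the cutoff;
# HAC(maxlags=3) standard errors absorb short-run autocorrelation whenever
# statsmodels is available.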
113 |     "def rd_plot(win: pd.DataFrame, model, evt: pd.Timestamp, bw: int, asset: str, y_col: str):\n",
114 |     "    grid = np.arange(-bw, bw + 1)\n",
115 |     "    α, τ, β, γ = (model.params[k] for k in ['const', 'T', 'D', 'TD'])\n",
116 |     "    μ_L, μ_R = α + β * grid, α + τ + (β + γ) * grid\n",
117 |     "    cov = model.cov if isinstance(model.cov, np.ndarray) else model.cov.to_numpy()\n",
118 |     "    X_L = np.column_stack([np.ones_like(grid), np.zeros_like(grid), grid, np.zeros_like(grid)])\n",
119 |     "    X_R = np.column_stack([np.ones_like(grid), np.ones_like(grid), grid, grid])\n",
120 |     "    se_L, se_R = np.sqrt(np.einsum('ij,jk,ik->i', X_L, cov, X_L)), np.sqrt(np.einsum('ij,jk,ik->i', X_R, cov, X_R))\n",
121 |     "    upper_L, lower_L, upper_R, lower_R = μ_L + 1.96 * se_L, μ_L - 1.96 * se_L, μ_R + 1.96 * se_R, μ_R - 1.96 * se_R\n",
122 |     "    plt.figure(figsize=(8, 5))\n",
123 |     "    colors = win['D'].apply(lambda x: 'royalblue' if x < 0 else 'firebrick')\n",
124 |     "    plt.scatter(win['D'], win[y_col], s=18, color=colors, alpha=0.7, zorder=2)\n",
125 |     "    plt.plot(grid[grid < 0], μ_L[grid < 0], color='forestgreen', lw=2)\n",
126 |     "    plt.plot(grid[grid >= 0], μ_R[grid >= 0], color='forestgreen', lw=2)\n",
127 |     "    plt.fill_between(grid[grid < 0], lower_L[grid < 0], upper_L[grid < 0], color='grey', alpha=0.3)\n",
128 |     "    plt.fill_between(grid[grid >= 0], lower_R[grid >= 0], upper_R[grid >= 0], color='grey', alpha=0.3)\n",
129 |     "    plt.axvline(0, color='crimson', lw=2, ls='--')\n",
130 |     "    plt.title(f\"{asset} | {evt.date()} | ±{bw}d τ̂ = {model.params['T']:.4f} (p = {model.pvalues['T']:.3g})\")\n",
131 |     "    plt.xlabel(\"Days relative to event\")\n",
132 |     "    plt.ylabel(y_col.capitalize())\n",
133 |     "    plt.tight_layout()\n",
134 |     "    plt.show()\n",
135 |     "\n",
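# The einsum pair in rd_plot is the delta method applied pointwise: for each
# grid day d the fitted value is x'·params with x = (1, 0, d, 0) left of the
# cutoff and x = (1, 1, d, d) right of it, so its standard error is
# sqrt(x' Cov x); the ±1.96·se envelopes drawn by fill_between are therefore
# pointwise 95% confidence bands around each fitted segment.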
136 |     "# ─────────────────────────────────────────────────────────────────────────────\n",
137 |     "# 3) Main script\n",
138 |     "if __name__ == \"__main__\":\n",
139 |     "    # --- ADJUSTED: Data Loading ---\n",
140 |     "    print(\"--- Loading Benchmark Data ---\")\n",
141 |     "    # Define the directory where benchmark CSVs are located.\n",
142 |     "    BENCHMARK_DIR = pathlib.Path(\"./benchmark\")\n",
143 |     "    if not BENCHMARK_DIR.is_dir(): raise FileNotFoundError(f\"Benchmark directory '{BENCHMARK_DIR}' not found.\")\n",
144 |     "    \n",
145 |     "    bench_files = [\"Gold.csv\", \"Nasdaq100.csv\", \"SPY.csv\"]\n",
146 |     "    # Load each file from the benchmark directory.\n",
147 |     "    loaded_data = {\n",
148 |     "        pathlib.Path(f).stem: load_data(BENCHMARK_DIR / f) for f in bench_files\n",
149 |     "    }\n",
150 |     "    print(f\"  • Loaded: {', '.join(loaded_data.keys())}\")\n",
151 |     "    \n",
152 |     "    print(\"\\n--- Loading Crypto Asset Data ---\")\n",
153 |     "    CRYPTO_DATA_DIR = pathlib.Path(\"./crypto_data\")\n",
154 |     "    if not CRYPTO_DATA_DIR.is_dir(): raise FileNotFoundError(f\"Directory '{CRYPTO_DATA_DIR}' not found.\")\n",
155 |     "    crypto_files = list(CRYPTO_DATA_DIR.glob(\"*.csv\"))\n",
156 |     "    if not crypto_files: raise FileNotFoundError(f\"No CSV files found in '{CRYPTO_DATA_DIR}'.\")\n",
157 |     "    for f_path in crypto_files:\n",
158 |     "        asset_name = f_path.stem\n",
159 |     "        loaded_data[asset_name] = load_data(f_path)\n",
160 |     "    print(f\"  • Found and loaded {len(crypto_files)} crypto assets.\")\n",
161 |     "\n",
162 |     "    print(\"\\n--- Loading Wide-Format Event Calendar Data ---\")\n",
163 |     "    EVENTS_DIR = pathlib.Path(\"./events\")\n",
164 |     "    train_events_file, test_events_file = EVENTS_DIR / \"training_set.csv\", EVENTS_DIR / \"test_set.csv\"\n",
165 |     "    if not EVENTS_DIR.is_dir(): raise FileNotFoundError(f\"Directory '{EVENTS_DIR}' not found.\")\n",
166 |     "    if not train_events_file.is_file(): raise FileNotFoundError(f\"File '{train_events_file}' not found.\")\n",
167 |     "    if not test_events_file.is_file(): raise FileNotFoundError(f\"File '{test_events_file}' not found.\")\n",
168 |     "    \n",
169 |     "    events = {}\n",
170 |     "    def load_wide_events(path: pathlib.Path, suffix: str) -> dict:\n",
171 |     "        local_events = {}\n",
172 |     "        df = read_csv_robustly(path, engine='python')\n",
173 |     "        df = std_cols(df)\n",
174 |     "        for group_name in df.columns:\n",
175 |     "            local_events[f\"{group_name}{suffix}\"] = pd.to_datetime(df[group_name].dropna(), errors='coerce').dropna()\n",
176 |     "        return local_events\n",
177 |     "\n",
178 |     "    if train_events_file.is_file(): events.update(load_wide_events(train_events_file, \"_train\"))\n",
179 |     "    if test_events_file.is_file(): events.update(load_wide_events(test_events_file, \"_test\"))\n",
180 |     "    print(f\"  • Total unique event groups to process: {len(events)}\")\n",
181 |     "    print(\"----------------------------------------------------\\n\")\n",
182 |     "    \n",
183 |     "    for asset, df in loaded_data.items():\n",
184 |     "        print(f\"\\n=== {asset} ====================================================\")\n",
185 |     "        if df.empty or OUTCOME not in df.columns:\n",
186 |     "            warnings.warn(f\"'{OUTCOME}' column not found or DataFrame is empty for {asset}. Skipping.\")\n",
187 |     "            continue\n",
188 |     "        \n",
189 |     "        for group, dates in events.items():\n",
190 |     "            for d in dates:\n",
191 |     "                evt = pd.to_datetime(d)\n",
192 |     "                if evt not in df['Date'].values: continue\n",
193 |     "                \n",
194 |     "                for bw in BANDWIDTHS:\n",
195 |     "                    try:\n",
196 |     "                        win, model = rd_design(df, evt, bw, OUTCOME)\n",
197 |     "                        if len(win) < 10:\n",
198 |     "                            warnings.warn(f\"Skipping {asset} on {d.date()} (±{bw}d) due to insufficient data.\")\n",
199 |     "                            continue\n",
200 |     "                        \n",
201 |     "                        ci_lo, ci_hi = get_ci(model, \"T\")\n",
202 |     "                        print(f\"{d.date()} {group:<22} ±{bw:>2}d \"\n",
203 |     "                              f\"τ̂={model.params['T']:.6f} \"\n",
204 |     "                              f\"p={model.pvalues['T']:.4f} \"\n",
205 |     "                              f\"CI=({ci_lo:.6f}, {ci_hi:.6f}) \"\n",
206 |     "                              f\"N={len(win)}\")\n",
207 |     "                        \n",
208 |     "                        rd_plot(win, model, evt, bw, asset, OUTCOME)\n",
209 |     "                        \n",
210 |     "                    except Exception as e:\n",
211 |     "                        print(f\"Could not process event {d.date()} for {asset} (±{bw}d). Error: {e}\")"
212 |    ]
213 |   }
214 |  ],
215 |  "metadata": {
216 |   "kernelspec": {
217 |    "display_name": "cuda",
218 |    "language": "python",
219 |    "name": "python3"
220 |   },
221 |   "language_info": {
222 |    "codemirror_mode": {
223 |     "name": "ipython",
224 |     "version": 3
225 |    },
226 |    "file_extension": ".py",
227 |    "mimetype": "text/x-python",
228 |    "name": "python",
229 |    "nbconvert_exporter": "python",
230 |    "pygments_lexer": "ipython3",
231 |    "version": "3.10.16"
232 |   }
233 |  },
234 |  "nbformat": 4,
235 |  "nbformat_minor": 5
236 | }
237 | --------------------------------------------------------------------------------
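The wide-format calendar that `load_wide_events` expects is easy to picture with a toy file. A minimal sketch (the group names here are hypothetical; the real `events/*.csv` headers are whatever the researcher chooses, and `std_cols` additionally lower-cases them before the `_train`/`_test` suffix is appended):

```python
from io import StringIO

import pandas as pd

# A toy stand-in for events/training_set.csv: one column per event group,
# one date per row, ragged columns allowed.
csv_text = (
    "Regulation,Hack,ETF_News\n"
    "2021-01-04,2021-02-10,2021-03-01\n"
    "2021-05-12,,2021-06-08\n"
)
df = pd.read_csv(StringIO(csv_text))

# Mirror of load_wide_events (minus the column normalization): each column
# becomes one "<group>_train" key holding a Series of parsed timestamps.
events = {
    f"{col}_train": pd.to_datetime(df[col].dropna(), errors="coerce").dropna()
    for col in df.columns
}
print({k: [ts.date() for ts in v] for k, v in events.items()})
```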
/README.md: --------------------------------------------------------------------------------
1 | # CRATES — Crypto & Cross‑Asset Event Study Toolkit
2 | 
3 | **CRATES** is an end‑to‑end, research‑grade **event‑study laboratory** for cross‑asset markets.
4 | It couples *Cumulative Abnormal Return* (CAR) pipelines with batched *Regression Discontinuity* (RD) analysis and ships a full notebook gallery for high‑impact visual communication.
5 | 
6 | > Designed for analysts who want **journal‑quality statistics** without sacrificing engineering rigour.
7 | 
8 | ---
9 | 
10 | ## 🎞️ Quick Preview
11 | 
12 | Open the **`sample/`** folder to see at a glance what **CRATES** produces — PNG heatmaps, interactive HTML dashboards, and neatly formatted XLSX reports.
13 | 
14 | ---
15 | 
16 | ## ✨ Key Features
17 | 
18 | * **Full Stack** – From raw OHLCV to Excel workbooks, high‑res figures, and interactive dashboards.
19 | * **Multi‑asset** – Crypto, equities, indices and commodities handled identically.
20 | * **Modular** – Nine CAR visual modules (V1‑9) plus three advanced (VA1‑3); separate RD tracks for price and volume.
21 | * **Reproducible** – Version‑pinned environment, deterministic outputs, 100% tested core maths.
22 | * **Transparent** – Every cell is annotated; parameters surfaced via YAML for easy what‑if runs.
23 | 
24 | ---
25 | 
26 | ## 🗂️ Repository Layout
27 | 
28 | ```
29 | crates/
30 | ├── benchmark/     # fixed benchmarks: Gold.csv, Nasdaq100.csv, SPY.csv
31 | ├── crypto_data/   # swappable crypto assets (same column schema)
32 | ├── events/        # event calendars: training_set.csv, test_set.csv
33 | ├── outcome/       # autogenerated CSV/XLSX/HTML (git‑ignored)
34 | ├── sample/        # PNG snapshots of analysis outputs
35 | ├── CAR_main.ipynb
36 | ├── CAR_V1-2_High-Level Impact & Screening.ipynb
37 | ├── CAR_V3-4_Distribution & Risk Analysis.ipynb
38 | ├── CAR_V5-7_Temporal Dynamics Analysis.ipynb
39 | ├── CAR_V8-9_Model Robustness & Diagnostics.ipynb
40 | ├── CAR_VA1-3_Advanced Techniques & Presentation Methods.ipynb
41 | ├── RDD_Price.ipynb
42 | ├── RDD_Price to excel.ipynb
43 | ├── RDD_Vol..ipynb
44 | ├── RDD_Vol. to excel.ipynb
45 | ├── requirements.txt
46 | ├── LICENSE        # CC BY‑NC‑SA 4.0
47 | └── README.md      # you are here
48 | ```
49 | 
50 | ---
51 | 
52 | ## 📊 Visual Gallery
53 | 
54 | 
55 | ### Category 1 · High-Level Impact & Screening
56 | **Goal:** Provide a bird's-eye view of which assets and event groups matter most.
57 | 
58 | 
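For readers who want the same bird's-eye screen on the RD side before opening any notebook, a minimal sketch on the shipped sample output (assuming `sample_rd_results_vol.xlsx` keeps the `Summary` sheet layout written by the `RDD_* to excel` notebooks, and that `openpyxl` is installed for reading `.xlsx`):

```python
import pandas as pd

sample = pd.read_excel("sample/sample_rd_results_vol.xlsx", sheet_name="Summary")

# Share of event/bandwidth combinations with a significant volume jump, per asset.
sig_share = sample.groupby("Asset")["P_value"].apply(lambda p: (p < 0.05).mean())
print(sig_share.sort_values(ascending=False))
```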