├── .gitignore
├── HLSM-BlackScholes-American.ipynb
├── HLSM-Heston-American.ipynb
├── MC-BlackScholes-European.ipynb
├── MC-Heston-European.ipynb
├── MonteCarlo.py
├── Notes for Lattice-based Methods.md
├── Potters OHMC 2001.pdf
├── R implementation
│   ├── FE800 final code.R
│   └── example.csv
├── README.md
├── README.pdf
├── bak
│   ├── MC-Heston-American.ipynb
│   ├── MonteCarlo copy.py
│   ├── OHLSMC-Heston-American.ipynb
│   └── OHMC.ipynb
├── binomial.py
├── presentation.md
└── trinomial.py
/.gitignore:
--------------------------------------------------------------------------------
1 | # Specific files #
2 | ###################
3 | cn_*.csv
4 | *-checkpoint.ipynb
5 | 
6 | # Specific directories #
7 | ########################
8 | .idea/
9 | .ipython_checkpoints/
10 | __pycache__/
11 | 
12 | # Backup #
13 | ###################
14 | *.pyo
15 | *.pyc
16 | *~
17 | *.bak
18 | *.swp
19 | *#
20 | 
21 | # Images #
22 | ###################
23 | *.jpg
24 | *.gif
25 | *.png
26 | *.svg
27 | *.ico
28 | 
29 | # Compiled source #
30 | ###################
31 | *.com
32 | *.class
33 | *.dll
34 | *.exe
35 | *.o
36 | *.so
37 | 
38 | # Packages #
39 | ############
40 | # it's better to unpack these files and commit the raw source
41 | # git has its own built in compression methods
42 | *.7z
43 | *.dmg
44 | *.gz
45 | *.iso
46 | *.jar
47 | *.rar
48 | *.tar
49 | *.zip
50 | 
51 | # Logs and databases #
52 | ######################
53 | *.log
54 | *.sql
55 | *.sqlite
56 | 
57 | # OS generated files #
58 | ######################
59 | .DS_Store
60 | .DS_Store?
61 | ._*
62 | .Spotlight-V100
63 | .Trashes
64 | ehthumbs.db
65 | Thumbs.db
66 | 
--------------------------------------------------------------------------------
/HLSM-Heston-American.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# How to Price and Hedge American Options Using Regression-based Methods\n",
8 | "\n",
9 | "## Hedged Least Square Method\n",
10 | "\n",
11 | "Author: Jerry Xia\n",
12 | "\n",
13 | "Date: 2018/08/27"
14 | ]
15 | },
16 | {
17 | "cell_type": "markdown",
18 | "metadata": {},
19 | "source": [
20 | "## 1. Introduction\n",
21 | "\n",
22 | "This is a Python notebook about an innovative Monte Carlo method, Optimal Hedged Least Square Monte Carlo (OHLSMC), used to price and hedge American-type options. In this script, I implemented the following variance reduction methods as well as their antithetic-variates versions:\n",
23 | "\n",
24 | "* regular Monte Carlo\n",
25 | "* least square Monte Carlo\n",
26 | "* Monte Carlo with delta-based control variates\n",
27 | "* optimal hedged Monte Carlo\n",
28 | "* optimal hedged least square Monte Carlo\n",
29 | "\n",
30 | "Due to its significant efficiency and robustness, I mainly focus on the optimal hedged least square Monte Carlo (OHLSMC) in option pricing. 
We invoke this method to price American options and compare the results with the original least square Monte Carlo (LSM).\n",
31 | "\n",
32 | "### 1.1 Facts\n",
33 | "* The option price is not simply the average value of the discounted future pay-off over the objective (or historical) probability distribution\n",
34 | "* The requirement of absence of arbitrage opportunities is equivalent to the existence of a \"risk-neutral measure\", such that the price is indeed its average discounted future pay-off.\n",
35 | "* Risk in option trading cannot be eliminated\n",
36 | "\n",
37 | "### 1.2 Objective\n",
38 | "* It would be satisfactory to have an option theory where the objective stochastic process of the underlying is used to calculate the option price, the hedge strategy and the *residual risk*.\n",
39 | "\n",
40 | "### 1.3 Advantages\n",
41 | "* It is a versatile method to price complicated path-dependent options.\n",
42 | "* A considerable variance reduction scheme for Monte Carlo\n",
43 | "* It provides not only a numerical estimate of the option price, but also of the optimal hedge strategy and of the residual risk.\n",
44 | "* This method does not rely on the notion of a risk-neutral measure, and can be applied to any model of the true dynamics of the underlying"
45 | ]
46 | },
47 | {
48 | "cell_type": "markdown",
49 | "metadata": {},
50 | "source": [
51 | "## 2 Underlying dynamics\n",
52 | "\n",
53 | "### Black-Scholes Model\n",
54 | "$$dS = \\mu S dt + \\sigma S dW_t$$\n",
55 | "$$\\log S_{t+1} = \\log S_t +(\\mu - \\frac{\\sigma^2}{2})\\Delta t + \\sigma \\sqrt{\\Delta t} \\epsilon$$\n",
56 | "where\n",
57 | "$$\\epsilon \\sim N(0,1)$$\n",
58 | "Under the risk-neutral measure, $\\mu = r - q$. \n",
59 | "### Heston Model\n",
60 | "The basic Heston model assumes that $S_t$, the price of the asset, is determined by a stochastic process:\n",
61 | "$$\n",
62 | "dS_t = \\mu S_t dt + \\sqrt{v_t} S_t d W_t^S\\\\\n",
63 | "dv_t = \\kappa (\\theta - v_t) dt + \\xi \\sqrt{v_t} d W_t^v\n",
64 | "$$\n",
65 | "where \n",
66 | "$$E[dW_t^S dW_t^v]=\\rho dt$$\n",
67 | "Under the risk-neutral measure, $\\mu = r - q$. "
68 | ]
69 | },
70 | {
71 | "cell_type": "markdown",
72 | "metadata": {},
73 | "source": [
74 | "## 3 Methodology\n",
75 | "\n",
76 | "### 3.1 Notations\n",
77 | "Option pricing always works backward, because the option price is known exactly at maturity. As with other schemes, we determine the option price step by step from the maturity $t=K\\tau=T$ to the present time $t=0$. The unit of time is $\\tau$, for example one day. We simulate $N$ trajectories. In trajectory $i$, the price of the underlying asset at time $k\\tau$ is denoted as $S_k^{(i)}$. The price of the derivative at time $k\\tau$ is denoted as $C_k$, and the hedge function is $H_k$. 
We define an optimal hedged portfolio as\n", 78 | "$$W_k^{(i)} = C_k(S_k^{(i)}) + H_k(S_k^{(i)})S_k^{(i)}$$\n", 79 | "The one-step change of our portfolio is\n", 80 | "$$\\Delta W_k^{(i)}= df(k,k+1) C_{k+1}(S_{k+1}^{(i)}) - C_k(S_k^{(i)}) + H_k(S_{k}^{(i)}) (df2(k,k+1) S_{k+1}^{(i)} - S_{k}^{(i)})$$\n", 81 | "Where $df(k,k+1)$ is the discounted factor from time $k\\tau$ to $(k+1) \\tau$, $df2(k,k+1)$ is the discounted factor considering dividend $e^{-(r-q)(t_{k+1}-t_k)}$\n", 82 | "\n", 83 | "### 3.2 Objective\n", 84 | "The optimal hedged algorithm can be interpreted as the following optimal problem\n", 85 | "\n", 86 | "\\begin{align}\n", 87 | "\\mbox{minimize}\\quad & \\quad Var[\\Delta W_k]\\\\\n", 88 | "\\mbox{subject to}\\quad & \\quad E[\\Delta W_k]=0\n", 89 | "\\end{align}\n", 90 | "\n", 91 | "It means we should try to minimize the realized volatility of hedged portfolio while maintaining the expected value of portfolio unchanged.\n", 92 | "\n", 93 | "### 3.3 Basis Functions\n", 94 | "The original optimization is very difficult to solve. Thus we assume a set of basis function and solved it in such subspace. We use $N_C$and $N_H$ to denote the number of basis functions for price and hedge.\n", 95 | "\n", 96 | "\\begin{align}\n", 97 | "C_k(\\cdot) &= \\sum_{i=0}^{N_C} a_{k,i} A_i(\\cdot)\\\\\n", 98 | "H_k(\\cdot) &= \\sum_{i=0}^{N_H} b_{k,i} B_i(\\cdot)\n", 99 | "\\end{align}\n", 100 | "\n", 101 | "The basis functions $A_i$ and $B_i$ are priori determined and need not to be identical. The coefficients $a_i$ and $b_i$ can be calibrated by solving the optimal problem.\n", 102 | "\n", 103 | "### 3.4 Numerical Solution\n", 104 | "\n", 105 | "\\begin{align}\n", 106 | "\\mbox{minimize}\\quad & \\quad \\frac{1}{N} \\sum_{i=1}^N \\Delta W_k^{(i)2}\\\\\n", 107 | "\\mbox{subject to}\\quad & \\quad \\frac{1}{N} \\sum_{i=1}^N \\Delta W_k^{(i)}=0\n", 108 | "\\end{align}\n", 109 | "\n", 110 | "Denote the discounted forward underlying price change at time $k\\tau$ as\n", 111 | "\n", 112 | "$$\\Delta S_k = df2(k,k+1) S_{k+1} - S_k$$\n", 113 | "\n", 114 | "Define\n", 115 | "\n", 116 | "\\begin{align}\n", 117 | "Q_k &= \\begin{bmatrix}\n", 118 | " -A_{k,1}(S_k^{(1)}) & \\cdots & -A_{k,N_C}(S_k^{(1)}) & B_{k,1}(S_k^{(1)})\\Delta S_k^{(1)}& \\cdots & B_{k,N_H}(S_k^{(1)})\\Delta S_k^{(1)} \\\\\n", 119 | " -A_{k,1}(S_k^{(2)}) & \\cdots & -A_{k,N_C}(S_k^{(2)}) & B_{k,1}(S_k^{(2)})\\Delta S_k^{(2)}& \\cdots & B_{k,N_H}(S_k^{(1)})\\Delta S_k^{(2)} \\\\\n", 120 | " \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots\\\\\n", 121 | " -A_{k,1}(S_k^{(N)}) & \\cdots & -A_{k,N_C}(S_k^{(N)}) & B_{k,1}(S_k^{(N)})\\Delta S_k^{(N)}& \\cdots & B_{k,N_H}(S_k^{(N)})\\Delta S_k^{(N)}\n", 122 | " \\end{bmatrix}\\\\\\\\\n", 123 | "c_k &= (a_{k,1}, \\cdots a_{k,N_C}, b_{k,1}, \\cdots, b_{k,N_H})^T\\\\\\\\\n", 124 | "v_{k} &= df(k,k+1) C_{k+1}(S_{k+1}^{})\n", 125 | "\\end{align}\n", 126 | "\n", 127 | "As for $v_k$, note that we know the exact value at maturity, which means there is no need to approximate price in terms of basis functions, that is\n", 128 | "\n", 129 | "\\begin{align}\n", 130 | "v_k = \\begin{cases}\n", 131 | "df(N-1,N)\\ payoff(S_N),\\quad & k=N-1\\\\\n", 132 | "df(k,k+1)\\ \\sum_{i=1}^{N_C} a_{k+1,i} A_i(S_{k+1}), \\quad & 0 0 155 | OTM_filter = exercise_values_t <= 0 156 | n_sub_trials, n_sub_steps = sub_price_matrix.shape 157 | holding_values_t = np.zeros(n_sub_trials) # simulated samples: y 158 | exp_holding_values_t = np.zeros(n_sub_trials) # regressed results: E[y] 159 | 160 | itemindex = 
np.where(sub_exercise_matrix==1) 161 | # print(sub_exercise_matrix) 162 | for trial_i in range(n_sub_trials): 163 | first = next(itemindex[1][i] for i,x in enumerate(itemindex[0]) if x==trial_i) 164 | payoff_i = payoff_fun(sub_price_matrix[trial_i, first]) 165 | df_i = df**(n_sub_steps-first) 166 | holding_values_t[trial_i] = payoff_i*df_i 167 | 168 | A_matrix = np.array([func(sub_price_matrix[:,0]) for func in func_list]).T 169 | b_matrix = holding_values_t[:, np.newaxis] # g_tau|Fi 170 | ITM_A_matrix = A_matrix[ITM_filter, :] 171 | ITM_b_matrix = b_matrix[ITM_filter, :] 172 | lr = LinearRegression(fit_intercept=False) 173 | lr.fit(ITM_A_matrix, ITM_b_matrix) 174 | exp_holding_values_t[ITM_filter] = np.dot(ITM_A_matrix, lr.coef_.T)[:, 0] # E[g_tau|Fi] only ITM 175 | 176 | 177 | if np.sum(OTM_filter): # if no trial falls into the OTM region it would cause empty OTM_A_Matrix and OTM_b_Matrix, and only ITM was applicable. In this step, we are going to estimate the OTM American values E[g_tau|Fi]. 178 | if onlyITM: 179 | # Original LSM 180 | exp_holding_values_t[OTM_filter] = np.nan 181 | else: 182 | # non-conformed approximation: do not assure the continuity of the approximation (regression in two region without iterpolation) 183 | OTM_A_matrix = A_matrix[OTM_filter, :] 184 | OTM_b_matrix = b_matrix[OTM_filter, :] 185 | lr.fit(OTM_A_matrix, OTM_b_matrix) 186 | exp_holding_values_t[OTM_filter] = np.dot(OTM_A_matrix, lr.coef_.T)[:, 0] # E[g_tau|Fi] only OTM 187 | 188 | 189 | sub_exercise_matrix[:,0] = ITM_filter & (exercise_values_t>exp_holding_values_t) 190 | american_values_t = np.maximum(exp_holding_values_t,exercise_values_t) 191 | return american_values_t 192 | 193 | if (option_type == "c"): 194 | payoff_fun = lambda x: np.maximum(x - K, 0) 195 | elif (option_type == "p"): 196 | payoff_fun = lambda x: np.maximum(K - x, 0) 197 | 198 | # when contract is at the maturity 199 | stock_prices_t = price_matrix[:, -1] 200 | exercise_values_t = payoff_fun(stock_prices_t) 201 | holding_values_t = exercise_values_t 202 | american_values_matrix[:,-1] = exercise_values_t 203 | exercise_matrix[:,-1] = 1 204 | 205 | # before maturaty 206 | for i in np.arange(n_steps)[:0:-1]: 207 | sub_price_matrix = price_matrix[:,i:] 208 | sub_exercise_matrix = exercise_matrix[:,i:] 209 | american_values_t = __calc_american_values(payoff_fun,func_list,sub_price_matrix,sub_exercise_matrix,df,onlyITM) 210 | american_values_matrix[:,i] = american_values_t 211 | 212 | 213 | 214 | # obtain the optimal policies at the inception 215 | holding_matrix = np.zeros(exercise_matrix.shape, dtype=bool) 216 | for i in np.arange(n_trials): 217 | exercise_row = exercise_matrix[i, :] 218 | if (exercise_row.any()): 219 | exercise_idx = np.where(exercise_row == 1)[0][0] 220 | exercise_row[exercise_idx + 1:] = 0 221 | holding_matrix[i,:exercise_idx+1] = 1 222 | else: 223 | exercise_row[-1] = 1 224 | holding_matrix[i,:] = 1 225 | 226 | if onlyITM==False: 227 | # i=0 228 | # regular martingale pricing: LSM 229 | american_value1 = american_values_matrix[:,1].mean() * df 230 | # with delta hedging: OHMC 231 | v0 = matrix((american_values_matrix[:,1] * df)[:,np.newaxis]) 232 | S0 = price_matrix[:, 0] 233 | S1 = price_matrix[:, 1] 234 | dS0 = df2 * S1 * (1-sell_cost) - S0*(1+buy_cost) 235 | Q0 = np.concatenate((-np.ones(n_trials)[:, np.newaxis], dS0[:, np.newaxis]), axis=1) 236 | Q0 = matrix(Q0) 237 | P = Q0.T * Q0 238 | q = Q0.T * v0 239 | A = matrix(np.ones(n_trials, dtype=np.float64)).T * Q0 240 | b = - matrix(np.ones(n_trials, 
dtype=np.float64)).T * v0 241 | sol = solvers.coneqp(P=P, q=q, A=A, b=b) 242 | self.sol = sol 243 | residual_risk = (v0.T * v0 + 2 * sol["primal objective"]) / n_trials 244 | self.residual_risk = residual_risk[0] # the value of unit matrix 245 | american_value2 = sol["x"][0] 246 | delta_hedge = sol["x"][1] 247 | american_values_matrix[:,0] = american_value2 248 | self.american_values_matrix = american_values_matrix 249 | self.HLSM_price = american_value2 250 | self.HLSM_delta = - delta_hedge 251 | print("price: {}, delta-hedge: {}".format(american_value2,delta_hedge)) 252 | 253 | self.holding_matrix = holding_matrix 254 | self.exercise_matrix = exercise_matrix 255 | 256 | pass 257 | 258 | def LSM2(self, option_type="c", func_list=[lambda x: x ** 0, lambda x: x],onlyITM=False,buy_cost=0,sell_cost=0): 259 | dt = self.T / self.n_steps 260 | df = np.exp(-self.r * dt) 261 | df2 = np.exp(-(self.r - self.q) * dt) 262 | K = self.K 263 | price_matrix = self.price_matrix 264 | n_trials = self.n_trials 265 | n_steps = self.n_steps 266 | exercise_matrix = np.zeros(price_matrix.shape,dtype=bool) 267 | american_values_matrix = np.zeros(price_matrix.shape) 268 | 269 | 270 | def __calc_american_values(payoff_fun,func_list, prices_t, american_values_tp1,df): 271 | exercise_values_t = payoff_fun(prices_t[:]) 272 | ITM_filter = exercise_values_t > 0 273 | OTM_filter = exercise_values_t <= 0 274 | n_sub_trials = len(prices_t) 275 | holding_values_t = df*american_values_tp1 # simulated samples: y 276 | exp_holding_values_t = np.zeros(n_sub_trials) # regressed results: E[y] 277 | 278 | 279 | A_matrix = np.array([func(prices_t[:]) for func in func_list]).T 280 | b_matrix = holding_values_t[:, np.newaxis] # g_tau|Fi 281 | ITM_A_matrix = A_matrix[ITM_filter, :] 282 | ITM_b_matrix = b_matrix[ITM_filter, :] 283 | lr = LinearRegression(fit_intercept=False) 284 | lr.fit(ITM_A_matrix, ITM_b_matrix) 285 | exp_holding_values_t[ITM_filter] = np.dot(ITM_A_matrix, lr.coef_.T)[:, 0] # E[g_tau|Fi] only ITM 286 | 287 | OTM_A_matrix = A_matrix[OTM_filter, :] 288 | OTM_b_matrix = b_matrix[OTM_filter, :] 289 | lr.fit(OTM_A_matrix, OTM_b_matrix) 290 | exp_holding_values_t[OTM_filter] = np.dot(OTM_A_matrix, lr.coef_.T)[:, 0] # E[g_tau|Fi] only OTM 291 | 292 | american_values_t = np.maximum(exp_holding_values_t,exercise_values_t) 293 | return american_values_t 294 | 295 | if (option_type == "c"): 296 | payoff_fun = lambda x: np.maximum(x - K, 0) 297 | elif (option_type == "p"): 298 | payoff_fun = lambda x: np.maximum(K - x, 0) 299 | 300 | # when contract is at the maturity 301 | exercise_values_t = payoff_fun(price_matrix[:,-1]) 302 | american_values_matrix[:,-1] = exercise_values_t 303 | american_values_t = exercise_values_t 304 | 305 | # before maturaty 306 | for i in np.arange(n_steps)[:0:-1]: 307 | prices_t = price_matrix[:,i] 308 | american_values_tp1 = american_values_t 309 | american_values_t = __calc_american_values(payoff_fun,func_list,prices_t, american_values_tp1,df) 310 | american_values_matrix[:,i] = american_values_t 311 | 312 | 313 | 314 | # obtain the optimal policies at the inception 315 | 316 | 317 | 318 | # i=0 319 | # regular martingale pricing: LSM 320 | american_value1 = american_values_matrix[:,1].mean() * df 321 | # with delta hedging: OHMC 322 | v0 = matrix((american_values_matrix[:,1] * df)[:,np.newaxis]) 323 | S0 = price_matrix[:, 0] 324 | S1 = price_matrix[:, 1] 325 | dS0 = df2 * S1 * (1-sell_cost) - S0*(1+buy_cost) 326 | Q0 = np.concatenate((-np.ones(n_trials)[:, np.newaxis], dS0[:, np.newaxis]), axis=1) 
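# The optimal-hedge step below is a small equality-constrained quadratic program.
# With x = (C0, H0) and Q0[i] = (-1, dS0[i]), the hedged one-step P&L on path i is
#   dW_i = Q0[i] @ x + v0[i] = df*C1_i - C0 + H0*dS0_i,
# and we minimize sum_i dW_i**2 subject to sum_i dW_i = 0.
# cvxopt.solvers.coneqp minimizes (1/2) x'Px + q'x subject to A x = b, so
# P = Q0'Q0, q = Q0'v0, A = 1'Q0 and b = -1'v0 reproduce exactly that problem
# (up to the constant v0'v0, which is added back when computing residual_risk).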
327 | Q0 = matrix(Q0) 328 | P = Q0.T * Q0 329 | q = Q0.T * v0 330 | A = matrix(np.ones(n_trials, dtype=np.float64)).T * Q0 331 | b = - matrix(np.ones(n_trials, dtype=np.float64)).T * v0 332 | sol = solvers.coneqp(P=P, q=q, A=A, b=b) 333 | self.sol = sol 334 | residual_risk = (v0.T * v0 + 2 * sol["primal objective"]) / n_trials 335 | self.residual_risk = residual_risk[0] # the value of unit matrix 336 | american_value2 = sol["x"][0] 337 | delta_hedge = sol["x"][1] 338 | american_values_matrix[:,0] = american_value2 339 | self.american_values_matrix = american_values_matrix 340 | self.HLSM_price = american_value2 341 | self.HLSM_delta = - delta_hedge 342 | print("price: {}, delta-hedge: {}".format(american_value2,delta_hedge)) 343 | 344 | pass 345 | 346 | 347 | 348 | def LSM3(self, option_type="c", func_list=[lambda x: x ** 0, lambda x: x],onlyITM=False,buy_cost=0,sell_cost=0): 349 | dt = self.T / self.n_steps 350 | df = np.exp(-self.r * dt) 351 | df2 = np.exp(-(self.r - self.q) * dt) 352 | K = self.K 353 | price_matrix = self.price_matrix 354 | n_trials = self.n_trials 355 | n_steps = self.n_steps 356 | exercise_matrix = np.zeros(price_matrix.shape,dtype=bool) 357 | american_values_matrix = np.zeros(price_matrix.shape) 358 | 359 | 360 | def __calc_american_values(payoff_fun,func_list, sub_price_matrix,sub_exercise_matrix,df,onlyITM=False): 361 | exercise_values_t = payoff_fun(sub_price_matrix[:,0]) 362 | ITM_filter = exercise_values_t > 0 363 | OTM_filter = exercise_values_t <= 0 364 | n_sub_trials, n_sub_steps = sub_price_matrix.shape 365 | holding_values_t = np.zeros(n_sub_trials) # simulated samples: y 366 | exp_holding_values_t = np.zeros(n_sub_trials) # regressed results: E[y] 367 | 368 | itemindex = np.where(sub_exercise_matrix==1) 369 | # print(sub_exercise_matrix) 370 | for trial_i in range(n_sub_trials): 371 | first = next(itemindex[1][i] for i,x in enumerate(itemindex[0]) if x==trial_i) 372 | payoff_i = payoff_fun(sub_price_matrix[trial_i, first]) 373 | df_i = df**(n_sub_steps-first) 374 | holding_values_t[trial_i] = payoff_i*df_i 375 | 376 | A_matrix = np.array([func(sub_price_matrix[:,0]) for func in func_list]).T 377 | b_matrix = holding_values_t[:, np.newaxis] # g_tau|Fi 378 | ITM_A_matrix = A_matrix[ITM_filter, :] 379 | ITM_b_matrix = b_matrix[ITM_filter, :] 380 | lr = LinearRegression(fit_intercept=False) 381 | lr.fit(ITM_A_matrix, ITM_b_matrix) 382 | exp_holding_values_t[ITM_filter] = np.dot(ITM_A_matrix, lr.coef_.T)[:, 0] # E[g_tau|Fi] only ITM 383 | 384 | if onlyITM: 385 | # Original LSM 386 | exp_holding_values_t[OTM_filter] = np.nan 387 | else: 388 | # non-conformed approximation: do not assure the continuity of the approximation. 
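# Because the OTM paths are fitted in a separate regression, the estimated
# continuation value E[g_tau|F_i] is allowed to jump at the strike (the ITM and
# OTM fits are not forced to agree there); the original Longstaff-Schwartz
# scheme would instead regress on the ITM paths only (the onlyITM branch above).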
389 | OTM_A_matrix = A_matrix[OTM_filter, :] 390 | OTM_b_matrix = b_matrix[OTM_filter, :] 391 | lr.fit(OTM_A_matrix, OTM_b_matrix) 392 | exp_holding_values_t[OTM_filter] = np.dot(OTM_A_matrix, lr.coef_.T)[:, 0] # E[g_tau|Fi] only OTM 393 | 394 | 395 | sub_exercise_matrix[:,0] = ITM_filter & (exercise_values_t>exp_holding_values_t) 396 | american_values_t = np.maximum(exp_holding_values_t,exercise_values_t) 397 | return american_values_t 398 | 399 | if (option_type == "c"): 400 | payoff_fun = lambda x: np.maximum(x - K, 0) 401 | elif (option_type == "p"): 402 | payoff_fun = lambda x: np.maximum(K - x, 0) 403 | 404 | # when contract is at the maturity 405 | stock_prices_t = price_matrix[:, -1] 406 | exercise_values_t = payoff_fun(stock_prices_t) 407 | holding_values_t = exercise_values_t 408 | american_values_matrix[:,-1] = exercise_values_t 409 | exercise_matrix[:,-1] = 1 410 | 411 | # before maturaty 412 | for i in np.arange(n_steps)[:0:-1]: 413 | sub_price_matrix = price_matrix[:,i:] 414 | sub_exercise_matrix = exercise_matrix[:,i:] 415 | american_values_t = __calc_american_values(payoff_fun,func_list,sub_price_matrix,sub_exercise_matrix,df,onlyITM) 416 | american_values_matrix[:,i] = american_values_t 417 | 418 | 419 | 420 | # obtain the optimal policies at the inception 421 | holding_matrix = np.zeros(exercise_matrix.shape, dtype=bool) 422 | for i in np.arange(n_trials): 423 | exercise_row = exercise_matrix[i, :] 424 | if (exercise_row.any()): 425 | exercise_idx = np.where(exercise_row == 1)[0][0] 426 | exercise_row[exercise_idx + 1:] = 0 427 | holding_matrix[i,:exercise_idx+1] = 1 428 | else: 429 | exercise_row[-1] = 1 430 | holding_matrix[i,:] = 1 431 | 432 | if onlyITM==False: 433 | # i=0 434 | # regular martingale pricing: LSM 435 | american_value1 = american_values_matrix[:,1].mean() * df 436 | # with delta hedging: OHMC 437 | 438 | # min dP0.T*dP0 + delta dS0.T dS0 delta + 2*dP0.T*delta*dS0 439 | # subject to: e.T * (dP0 + delta dS0) = 0 440 | # P = Q.T * Q 441 | # Q = dS0 442 | # q = 2*dP0.T*dS0 443 | # A = e.T * dS0 444 | # b = - e.T * dP0 445 | 446 | 447 | 448 | v0 = matrix((american_values_matrix[:,1] * df)[:,np.newaxis]) 449 | S0 = price_matrix[:, 0] 450 | S1 = price_matrix[:, 1] 451 | dS0 = df2 * S1 * (1-sell_cost) - S0*(1+buy_cost) 452 | dP0 = american_values_matrix[:,1] * df - american_value1 453 | 454 | Q0 = dS0[:, np.newaxis] 455 | Q0 = matrix(Q0) 456 | P = Q0.T * Q0 457 | q = 2*matrix(dP0[:,np.newaxis]).T*Q0 458 | 459 | A = matrix(np.ones(n_trials, dtype=np.float64)).T * Q0 460 | b = - matrix(np.ones(n_trials, dtype=np.float64)).T * matrix(dP0[:,np.newaxis]) 461 | 462 | sol = solvers.coneqp(P=P, q=q, A=A, b=b) 463 | self.sol = sol 464 | residual_risk = (v0.T * v0 + 2 * sol["primal objective"]) / n_trials 465 | self.residual_risk = residual_risk[0] # the value of unit matrix 466 | 467 | delta_hedge = sol["x"][0] 468 | american_values_matrix[:,0] = american_value1 469 | 470 | self.american_values_matrix = american_values_matrix 471 | self.HLSM_price = american_value1 472 | self.HLSM_delta = - delta_hedge 473 | print("price: {}, delta-hedge: {}".format(american_value1,delta_hedge)) 474 | 475 | self.holding_matrix = holding_matrix 476 | self.exercise_matrix = exercise_matrix 477 | 478 | pass 479 | 480 | def BlackScholesPricer(self, option_type='c'): 481 | S = self.S0 482 | K = self.K 483 | T = self.T 484 | r = self.r 485 | q = self.q 486 | sigma = self.sigma 487 | d1 = (np.log(S / K) + (r - q) * T + 0.5 * sigma ** 2 * T) / (sigma * np.sqrt(T)) 488 | d2 = d1 - sigma * 
np.sqrt(T) 489 | N = lambda x: sp.stats.norm.cdf(x) 490 | call = np.exp(-q * T) * S * N(d1) - np.exp(-r * T) * K * N(d2) 491 | put = call - np.exp(-q * T) * S + K * np.exp(-r * T) 492 | 493 | if (option_type == "c"): 494 | self.BSDelta = N(d1) 495 | self.BSPrice = call 496 | return call 497 | elif (option_type == "p"): 498 | self.BSDelta = -N(-d1) 499 | self.BSPrice = put 500 | return put 501 | else: 502 | print("please enter the option type: (c/p)") 503 | 504 | pass 505 | 506 | def MCPricer(self, option_type='c', isAmerican=False): 507 | price_matrix = self.price_matrix 508 | n_steps = self.n_steps 509 | n_trials = self.n_trials 510 | strike = self.K 511 | risk_free_rate = self.r 512 | time_to_maturity = self.T 513 | dt = time_to_maturity / n_steps 514 | if (option_type == "c"): 515 | payoff_fun = lambda x: np.maximum(x-strike,0) 516 | elif (option_type == "p"): 517 | payoff_fun = lambda x: np.maximum(strike-x, 0) 518 | else: 519 | print("please enter the option type: (c/p)") 520 | return 521 | 522 | if (isAmerican == False): 523 | 524 | payoff = payoff_fun(price_matrix[:, n_steps]) 525 | # vk = payoff*df 526 | value_results = payoff * np.exp(-risk_free_rate * time_to_maturity) 527 | self.payoff = payoff 528 | else: 529 | exercise_matrix = self.exercise_matrix 530 | t_exercise_array = dt * np.where(exercise_matrix == 1)[1] 531 | value_results = payoff_fun(price_matrix[np.where(exercise_matrix == 1)]) * np.exp(-risk_free_rate * t_exercise_array) 532 | 533 | regular_mc_price = np.average(value_results) 534 | self.mc_price = regular_mc_price 535 | self.value_results = value_results 536 | return regular_mc_price 537 | 538 | def BSDeltaHedgedPricer(self, option_type="c"): 539 | 540 | regular_mc_price = self.MCPricer(option_type=option_type) 541 | dt = self.T / self.n_steps 542 | df2 = np.exp(-(self.r - self.q) * dt) 543 | 544 | # Delta hedged cash flow 545 | def Delta_fun(x, tau, option_type): 546 | d1 = (np.log(x / self.K) + (self.r - self.q) * tau + self.sigma ** 2 * tau / 2) / ( 547 | self.sigma * np.sqrt(tau)) 548 | if (option_type == 'c'): 549 | return sp.stats.norm.cdf(d1) 550 | elif (option_type == 'p'): 551 | return -sp.stats.norm.cdf(-d1) 552 | 553 | discounted_hedge_cash_flow = np.zeros(self.n_trials) 554 | for i in range(self.n_trials): 555 | Sk_array = self.price_matrix[i, :] 556 | bi_diag_matrix = np.diag([-1] * (self.n_steps), 0) + np.diag([df2] * (self.n_steps - 1), 1) 557 | # (Sk+1 exp(-r dt) - Sk) exp(-r*(tk-t0)) 558 | discounted_stock_price_change = np.dot(bi_diag_matrix, Sk_array[:-1]) 559 | discounted_stock_price_change[-1] += Sk_array[-1] * df2 560 | discounted_stock_price_change *= np.exp(-self.r * np.arange(self.n_steps) * dt) 561 | tau_array = dt * np.arange(self.n_steps, 0, -1) 562 | Delta_array = np.array([Delta_fun(Sk, tau, option_type) for Sk, tau in zip(Sk_array[:-1], tau_array)]) 563 | discounted_hedge_cash_flow[i] = np.dot(Delta_array, discounted_stock_price_change) 564 | 565 | BSDeltaBased_mc_price = regular_mc_price - discounted_hedge_cash_flow.mean() 566 | # print("The average discounted hedge cash flow: {}".format(discounted_hedge_cash_flow.mean())) 567 | 568 | value_results = self.payoff * np.exp(-self.r * self.T) - discounted_hedge_cash_flow 569 | # print("Sanity check {} = {}".format(value_results.mean(),BSDeltaBased_mc_price)) 570 | self.value_results = value_results 571 | 572 | return BSDeltaBased_mc_price 573 | 574 | def OHMCPricer(self, option_type='c', isAmerican=False, func_list=[lambda x: x ** 0, lambda x: x]): 575 | def _calculate_Q_matrix(S_k, 
S_kp1, df, df2, func_list): 576 | dS = df2 * S_kp1 - S_k 577 | A = np.array([func(S_k) for func in func_list]).T 578 | B = (np.array([func(S_k) for func in func_list]) * dS).T 579 | return np.concatenate((-A, B), axis=1) 580 | 581 | price_matrix = self.price_matrix 582 | # k = n_steps 583 | dt = self.T / self.n_steps 584 | df = np.exp(- self.r * dt) 585 | df2 = np.exp(-(self.r - self.q) * dt) 586 | n_basis = len(func_list) 587 | n_trials = self.n_trials 588 | n_steps = self.n_steps 589 | strike = self.K 590 | 591 | if (option_type == "c"): 592 | payoff_fun = lambda x: np.maximum(x-strike,0) 593 | # payoff = (price_matrix[:, n_steps] - strike) 594 | elif (option_type == "p"): 595 | payoff_fun = lambda x: np.maximum(strike-x,0) 596 | # payoff = (strike - price_matrix[:, n_steps]) 597 | else: 598 | print("please enter the option type: (c/p)") 599 | return 600 | 601 | if isAmerican is True: 602 | holding_matrix = self.holding_matrix 603 | else: 604 | holding_matrix = np.ones(price_matrix.shape,dtype=bool) 605 | 606 | # At maturity 607 | holding_filter_k = holding_matrix[:, n_steps] 608 | payoff = matrix(payoff_fun(price_matrix[holding_filter_k,n_steps])) 609 | vk = payoff * df 610 | Sk = price_matrix[holding_filter_k,n_steps] 611 | # print("regular MC price",regular_mc_price) 612 | 613 | # k = n_steps-1,...,1 614 | for k in range(n_steps - 1, 0, -1): 615 | 616 | holding_filter_kp1 = holding_filter_k 617 | holding_filter_k = holding_matrix[:, k] 618 | Skp1 = price_matrix[holding_filter_kp1, k+1] 619 | Sk = price_matrix[holding_filter_kp1, k] 620 | Qk = matrix(_calculate_Q_matrix(Sk, Skp1, df, df2, func_list)) 621 | P = Qk.T * Qk 622 | q = Qk.T * vk 623 | A = matrix(np.ones(holding_filter_kp1.sum(), dtype=np.float64)).T * Qk 624 | b = - matrix(np.ones(holding_filter_kp1.sum(), dtype=np.float64)).T * vk 625 | # print(Sk) 626 | # print(Skp1) 627 | 628 | sol = solvers.coneqp(P=P, q=q, A=A, b=b) 629 | ak = sol["x"][:n_basis] 630 | bk = sol["x"][n_basis:] 631 | vk = matrix(np.array([func(price_matrix[holding_filter_k, k]) for func in func_list])).T * ak * df 632 | # break 633 | 634 | # k = 0 635 | v0 = vk 636 | holding_filter_1 = holding_filter_k 637 | holding_filter_0 = holding_matrix[:, 0] 638 | S0 = price_matrix[holding_filter_1, 0] 639 | S1 = price_matrix[holding_filter_1, 1] 640 | dS0 = df2 * S1 - S0 641 | Q0 = np.concatenate((-np.ones(holding_filter_1.sum())[:, np.newaxis], dS0[:, np.newaxis]), axis=1) 642 | Q0 = matrix(Q0) 643 | P = Q0.T * Q0 644 | q = Q0.T * v0 645 | A = matrix(np.ones(holding_filter_1.sum(), dtype=np.float64)).T * Q0 646 | b = - matrix(np.ones(holding_filter_1.sum(), dtype=np.float64)).T * v0 647 | C1 = matrix(ak).T * np.array([func(S1) for func in func_list]).T 648 | sol = solvers.coneqp(P=P, q=q, A=A, b=b) 649 | self.sol = sol 650 | residual_risk = (v0.T * v0 + 2 * sol["primal objective"]) / holding_filter_1.sum() 651 | self.residual_risk = residual_risk[0] # the value of unit matrix 652 | 653 | return sol["x"][0] 654 | 655 | def standard_error(self): 656 | # can not apply to the OHMC since its result is not obtained by averaging 657 | # sample variance 658 | sample_var = np.var(self.value_results, ddof=1) 659 | std_estimate = np.sqrt(sample_var) 660 | standard_err = std_estimate / np.sqrt(self.n_trials) 661 | return standard_err 662 | 663 | def pricing(self, option_type='c', func_list=[lambda x: x ** 0, lambda x: x]): 664 | OHMC_price = self.OHMCPricer(option_type=option_type, func_list=func_list) 665 | regular_mc_price = self.MCPricer(option_type=option_type) 666 | 
black_sholes_price = self.BlackScholesPricer(option_type)
667 | return ({"OHMC": OHMC_price, "regular MC": regular_mc_price, "Black-Scholes": black_sholes_price})
668 | 
669 | def hedging(self):
670 | S = self.S0
671 | K = self.K
672 | T = self.T
673 | r = self.r
674 | q = self.q
675 | sigma = self.sigma
676 | d1 = (np.log(S / K) + (r - q) * T + 0.5 * sigma ** 2 * T) / (sigma * np.sqrt(T))
677 | d2 = d1 - sigma * np.sqrt(T)
678 | N = lambda x: sp.stats.norm.cdf(x)
679 | return ({"OHMC optimal hedge": -self.sol["x"][1], "Black-Scholes delta hedge": N(d1),
680 | "OHMC residual risk": self.residual_risk})
681 | 
--------------------------------------------------------------------------------
/Notes for Lattice-based Methods.md:
--------------------------------------------------------------------------------
1 | # FE621 Notes
2 | 
3 | ## Lattice-based Methods
4 | 
5 | Tree method equations are derived by moment matching for the mean and variance. Note that real-world probabilities do not play any role in valuing an option in a tree method.
6 | 
7 | ### Binomial Trees
8 | 
9 | * Additive tree ($\log S_t$)
10 | 
11 |   * Trigeorgis tree: $\Delta x_u = \Delta x_d$
12 | $$
13 | p_u \Delta x - p_d \Delta x = \mu \Delta t\\
14 | p_u \Delta x^2 + p_d \Delta x ^2 - (\mu \Delta t)^2 = \sigma^2 \Delta t\\
15 | p_u + p_d = 1\\
16 | \mu = r-\frac{\sigma^2}{2}
17 | $$
18 | 
19 |   * Jarrow-Rudd tree: $p_u = p_d$
20 | 
21 | * Multiplicative tree ($S_t$)
22 | 
23 |   * Cox-Ross-Rubinstein (CRR) tree: similar to the Trigeorgis tree, but different in the first moment and involves an approximation. $u = e^{\sigma \sqrt{\Delta t}}$
24 | $$
25 | p_u u + p_d d = e^{r \Delta t}\\
26 | p_u \Delta x^2 + p_d \Delta x ^2 - (\mu \Delta t)^2 = \sigma^2 \Delta t\\
27 | u = \frac{1}{d}=e^{\Delta x}\\
28 | p_u + p_d = 1\\
29 | \mu = r-\frac{\sigma^2}{2}
30 | $$
31 | 
32 | 
33 | 
34 | 
35 | 
36 | 
37 | 
38 | 
39 | 
40 | 
41 | ### Trinomial Trees
42 | 
43 | $$
44 | p_u \Delta x - p_d \Delta x = \mu \Delta t\\
45 | p_u \Delta x^2 + p_d \Delta x ^2 - (\mu \Delta t)^2 = \sigma^2 \Delta t\\
46 | p_u +p_m+ p_d = 1\\
47 | \mu = r-\frac{\sigma^2}{2}
48 | $$
49 | 
50 | To make all probabilities lie between 0 and 1, we have a sufficient condition:
51 | $$
52 | \Delta x > \sigma \sqrt{3 \Delta t}
53 | $$
54 | Any $\Delta x$ with this property produces a convergent tree.
55 | 
56 | ### Finite Difference Method
57 | 
58 | More general than the additive trinomial tree method.
59 | 
60 | ### Convergence Comparison
61 | 
62 | | method | rate of convergence | convergence condition |
63 | | :----------------- | ------------------------------------------------------------ | ---------------------------------------- |
64 | | binomial tree | $O((\Delta x)^2 + \Delta t)$ | NA |
65 | | trinomial tree | $O((\Delta x)^2 + \Delta t)$ | $\Delta x \geq \sigma \sqrt{3 \Delta t}$ |
66 | | explicit FDM | $O((\Delta x)^2 + \Delta t)$ | $\Delta x \geq \sigma \sqrt{3 \Delta t}$ |
67 | | implicit FDM | $O((\Delta x)^2 + \Delta t)$ | stable |
68 | | Crank-Nicolson FDM | $O((\Delta x)^2 + (\frac{\Delta t}{2})^2)$ | stable |
69 | | Monte Carlo | $O\left(\max\left(\Delta t, \frac{\sigma}{\sqrt{N_x}}\right)\right)$ | stable |
70 | 
71 | 
72 | 
73 | ## Variance Reduction
74 | 
75 | | method | explanation |
76 | | :-----------------:| :-----------------------------------------: |
77 | | antithetic variates| the payoffs of the antithetic pair $(X_1,X_2)$, i.e. $f(X_1)$ and $f(X_2)$, are negatively correlated; 
a sufficient condition is a monotone payoff function |
78 | | delta-based control variates | subtract the discounted cash flow of a Black-Scholes delta hedge from the discounted payoff (see BSDeltaHedgedPricer) |
79 | 
80 | 
81 | 
82 | 
83 | 
84 | ## Risk-Neutral Measure
85 | 
86 | Risk-neutral measures make it easy to express the value of a derivative in a formula.
87 | 
88 | $$H_0 = P(0,T) E_Q[H_T]$$
89 | 
90 | where the risk-neutral measure is denoted by $Q$. This can be re-stated in terms of the physical measure $P$ as
91 | 
92 | $$H_0 = P(0,T)E_P[\frac{dQ}{dP} H_T]$$
93 | 
94 | Another name for the risk-neutral measure is the equivalent martingale measure. If there is just one unique risk-neutral measure in the market, then there is a unique arbitrage-free price for each asset in the market. This is the fundamental theorem of arbitrage-free pricing. If there are more such measures, then there is an interval of prices within which no arbitrage is possible. If no equivalent martingale measure exists, arbitrage opportunities do.
95 | 
96 | Suppose our economy consists of two assets, a stock and a risk-free bond, and that we use the Black-Scholes model. In this model the evolution of the stock price can be described by geometric Brownian motion:
97 | 
98 | $$dS_t = \alpha S_t dt + \sigma S_t dW_t$$
99 | 
100 | where $W_t$ is a standard Brownian motion with respect to the physical measure. If we define
101 | 
102 | $$\tilde{W}_t = W_t + \frac{\alpha-r}{\sigma}t$$
103 | 
104 | Girsanov's theorem states that there exists a measure $Q$ under which $\tilde{W}_t$ is a Brownian motion. Putting this back into the original equation:
105 | 
106 | $$dS_t = r S_t dt + \sigma S_t d\tilde{W}_t$$
107 | 
108 | Then the discounted stock price $\tilde{S}_t$ is a $Q$-martingale:
109 | 
110 | $$d \tilde{S}_t = \sigma \tilde{S}_t d\tilde{W}_t$$
111 | 
112 | Note that the risk-neutral measure is powerful because you do not need to construct a replicating portfolio to obtain an arbitrage-free price relative to the risk-free bond. Under this measure, the expected value (first moment) of a security at maturity equals its current value rolled into a deposit account (inflated to the maturity date at the riskless rate of interest).
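The pricing identity $H_0 = P(0,T) E_Q[H_T]$ is easy to check numerically. The sketch below is a minimal illustration (not part of the repository code), assuming Black-Scholes dynamics for a non-dividend-paying stock with hypothetical parameters: the terminal price is simulated under the risk-neutral drift $r$, and the discounted average payoff of a European call is compared with the closed-form Black-Scholes value.

```python
import numpy as np
from scipy.stats import norm

# Illustrative parameters (hypothetical, not taken from the repository)
S0, K, r, sigma, T = 100.0, 100.0, 0.04, 0.2, 0.25
n_paths = 200000

rng = np.random.default_rng(0)
# Under Q the log-price drifts at r - sigma^2/2 (no dividends assumed here)
Z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * Z)

# H_0 = P(0,T) E_Q[H_T]: discount the average payoff of a European call
mc_price = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

# Closed-form Black-Scholes price for comparison
d1 = (np.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(mc_price, bs_price)  # the two values agree up to Monte Carlo error
```

Under the physical measure $P$ the same average would have to be weighted by the Radon-Nikodym derivative $\frac{dQ}{dP}$, which is exactly the second formula above.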
-------------------------------------------------------------------------------- /Potters OHMC 2001.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jerryxyx/MonteCarlo/04c0c21aef28cfdae606951d5df9d394e98c90e5/Potters OHMC 2001.pdf -------------------------------------------------------------------------------- /R implementation/FE800 final code.R: -------------------------------------------------------------------------------- 1 | # use sample data------------ 2 | S = 1 3 | r = 0.06 4 | dt = 1 5 | K = 1.1 6 | data = read.csv('/Users/wantengxi/Stevens/american option/example.csv',header = FALSE) 7 | data 8 | Ncol = Ncols = ncol(data) 9 | Nrows = Nrow = nrow(data) 10 | #class(data) 11 | cashflow = matrix(0,nrow = Nrow,ncol = Ncol) 12 | cashflow[,Ncol] = ifelse(K > data[,Ncol], K - data[,Ncol], 0) 13 | #j = Ncols 14 | 15 | for(j in Ncols : 2){ 16 | comb_table = cbind(data[,j - 1], cashflow[, j] * exp(-r * dt)) 17 | # stock price at time = j - 1 (X) and value of time j discount back to j - 1 (Y) 18 | colnames(comb_table) = c('X','Y') 19 | itm_table = comb_table[comb_table[,'X'] < K,] 20 | # select only ITM option 21 | lr = lm(data = as.data.frame(itm_table),formula = Y~X + I(X ^ 2) + I(X ^ 3)) 22 | # regression on constant, X and (X ^ 2) term 23 | EY = lr$fitted.values 24 | # fitted.value = E[Y|X] 25 | Z = c() 26 | k = 0 27 | for(i in 1 : Nrow){ 28 | # select in-the-money option paths 29 | if(comb_table[i,'X'] < K){ 30 | k = k + 1 31 | Z[i] = EY[k] 32 | # assgin EY as Z (for only ITM) 33 | } 34 | else{ 35 | Z[i] = comb_table[i,'Y'] 36 | # for OTM, option will not be exercise, so assign continuation value to Z 37 | } 38 | } 39 | C = ifelse(K-comb_table[,'X'] > 0, K-comb_table[,'X'],0) 40 | # C is in the money exercise value 41 | comb_2 = cbind(comb_table,Z,C) 42 | # payoff_temp = cbind(ifelse(K > itm_table[,1],K - itm_table[,1],0),lr$fitted.values) 43 | cashflow[,j - 1] = ifelse(comb_2[,'C'] < comb_2[,'Z'],comb_2[,'Y'],comb_2[,'C']) 44 | # cashflow is the greater between C(exercise value) and Z (continuation value) 45 | } 46 | #plot(itm_table) 47 | #points(itm_table[,1],EY,col = 'red') 48 | 49 | # use self define parameters under Black-Scholes model------------ 50 | S = 36 51 | sigma = 0.2 52 | tau = 1 53 | K = 40 54 | r = 0.06 55 | path = Nrow = 50000 56 | n = 50 57 | Ncol = n + 1 58 | dt = tau / n 59 | 60 | # simulation 61 | simu_data = matrix(0,nrow = Nrow,ncol = Ncol) 62 | colnames(simu_data) = paste("step", 0:n, sep="=") 63 | head(simu_data) 64 | # simulate Black Scholes ---------- 65 | simu_data[,1] = S 66 | for(i in 2:Ncol){ 67 | epsilon = rnorm(path,0,1) 68 | simu_data[,i] = simu_data[,i - 1] + r * simu_data[,i - 1] * dt + sigma * simu_data[,i - 1] * sqrt(dt) * epsilon 69 | } 70 | # some other parameter sets start-------- 71 | S = 100 72 | K = 100 73 | r = 0.04 74 | tau = 3 / 12 75 | kappa = 1.15 76 | theta = 0.0348 77 | sigma = 0.39 78 | rho = -0.64 79 | v_0 = 0.1866 ^ 2 80 | # some other parameter sets-------- 81 | path = Nrow = 50000 82 | n = 50 83 | Ncol = n + 1 84 | dt = tau / n 85 | 86 | simu_data = simu_vol = matrix(0,nrow = Nrow,ncol = Ncol) 87 | simu_data[,1] = S 88 | simu_vol[,1] = v_0 89 | v = v_bar = rep(v_0, Nrow) 90 | full = function(v,v_bar,epsilon){ 91 | # full truncation scheme 92 | v_bar = (v_bar - kappa * dt * (ifelse(v_bar > 0, v_bar, 0) - theta) + 93 | sigma * sqrt(ifelse(v_bar > 0, v_bar, 0)) * sqrt(dt) * epsilon) 94 | v = ifelse(v_bar > 0,v_bar,0) 95 | result = cbind(v, v_bar) 96 | 
return(result) 97 | } 98 | # stochasitic volatility regression---- 99 | sv_mc = function(data, v_data){ 100 | t_1 = Sys.time() 101 | # we can add time consumption record 102 | Ncol = Ncols = ncol(data) 103 | Nrows = Nrow = nrow(data) 104 | option_value = matrix(0,nrow = Nrow,ncol = Ncol) 105 | option_value[,Ncol] = ifelse(K > data[,Ncol], K - data[,Ncol], 0) 106 | for(j in Ncols : 2){ 107 | # change 3 to 2 108 | X = data[, j - 1] 109 | # stock price 110 | Y = option_value[, j] * exp(-r * dt) 111 | # discounted cashflow 112 | vol = v_data[, j - 1] 113 | # volatility 114 | comb_table = cbind(X, Y, vol) 115 | # stock price at time = j - 1 and value of time j discount back to j - 1 and volatility 116 | 117 | if(sum(comb_table[comb_table[,'X'] < K,]) < 2){ 118 | EY = Y 119 | }else{ 120 | itm_table = as.data.frame(comb_table[comb_table[,'X'] < K,]) 121 | # In-the_Money option 122 | lr = lm(data = itm_table,formula = Y~ X + vol) 123 | # regression with quadratic term Y~ X + vol 124 | EY = lr$fitted.values 125 | # regression fitted expectation of Y 126 | } 127 | Z = c() 128 | k = 0 129 | for(i in 1 : Nrow){ 130 | if(comb_table[i,'X'] < K){ 131 | k = k + 1 132 | Z[i] = EY[k] 133 | } 134 | else{ 135 | Z[i] = comb_table[i,'Y'] 136 | } 137 | 138 | } 139 | # Z is the option value combining 1. exercise, the expectation EY; 2 not exercise, the discounted value Y 140 | C = ifelse(K-comb_table[,'X'] > 0, K-comb_table[,'X'],0) 141 | # compare whether to exercise 142 | comb_2 = cbind(comb_table,Z,C) 143 | option_value[,j - 1] = ifelse(comb_2[,'C'] < comb_2[,'Z'],comb_2[,'Y'],comb_2[,'C']) 144 | # estimate option value one step backward 145 | 146 | } 147 | #price = mean(option_value[,2]) * exp(-r*dt) 148 | price = mean(option_value[,1]) 149 | price = ifelse(price > K - S, price, K - S) 150 | # consider exercise at start 151 | se = sd(option_value[,2] * exp(-r*dt)) / sqrt(Nrows) 152 | t_2 = Sys.time() 153 | time = difftime(t_2,t_1,units = 'secs') 154 | # output regression_table = itm_table,EY = EY if needed 155 | result = list(price = price, se = se, time = time) 156 | return(result) 157 | } 158 | sv_mc_result = sv_mc(data = simu_data, v_data = simu_vol) 159 | sv_mc_result 160 | 161 | # check with EFD and crank nicolson method------- 162 | crank.nicolson.method = function(S,K,tao,sigma,r,step,dx, first, 163 | div,type1=c('American','European'),type2=c('Call','Put')){ 164 | payoff=function(S,K,expect,type1=c('American','Europrean'),type2=c('Call','Put')){ 165 | 166 | temp1=ifelse(type1=='European',0,expect) 167 | temp2=ifelse(type2=='Call',1,-1) 168 | result=max(temp2*(S-K),temp1) 169 | 170 | return(result) 171 | } 172 | v=r-div-0.5*sigma^2 173 | dt=tao/step 174 | pu=-0.25*dt*(sigma^2/dx^2+v/dx) 175 | pm=1+0.5*dt*sigma^2/dx^2+r*dt/2 176 | pd=-0.25*dt*(sigma^2/dx^2-v/dx) 177 | # First we calculate parameters we need 178 | # not the pm,pu,pd is different from implicit method 179 | firstRow = firstCol = 1 180 | nRows = lastRow = 2*step+1 181 | middleRow = step+1 182 | nCols = lastCol = step+1 183 | # Some variables we need to help us understand the position in tree. 
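# Grid layout: nRows = 2*step+1 log-price levels, indexed from +step net up-moves
# (top row) down to -step (bottom row); nCols = step+1 time layers from T = 0 to maturity.
# S.data will hold the stock prices on this grid and V.data the option values.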
184 | 185 | V.data = S.data = matrix(0, nrow=nRows, ncol=nCols, dimnames=list( 186 | paste("NumUps", step:-step, sep="="), paste("T", 0:step, sep="="))) 187 | S.data[step+1, 1] = S 188 | # Set the data table and initial stock value 189 | 190 | for (j in 1:(nCols-1)) { 191 | for(i in (nCols-(j-1)):(nCols+(j-1))) { 192 | S.data [i-1, j+1] = S.data [i, j]*exp(dx) 193 | # up case 194 | S.data [i , j+1] = S.data [i, j] 195 | # middle case 196 | S.data [i+1, j+1] = S.data [i, j]*exp(-dx) 197 | # down case 198 | } 199 | } 200 | # Calculating all stock prices. 201 | 202 | for (i in 1:nRows) { 203 | V.data[i, lastCol] = payoff(S=S.data[i,lastCol],K=K,type1 = 'European',type2 = type2) 204 | } 205 | # Calculating the option price at maturity. 206 | 207 | lambda.up = ifelse(type2=='Call',1 * (S.data[1, lastCol] - S.data[2,lastCol]),0) 208 | lambda.low = ifelse(type2=='Call',0,-1 * (S.data[lastRow-1, lastCol] - S.data[lastRow,lastCol])) 209 | # Boundary condition, same as in implicit method 210 | 211 | solve.crank.nicolson.tridiagnoal=function(V.data,pu,pm,pd,lambda.up,lambda.low,colI){ 212 | lastRow = nrow(V.data) 213 | lastCol = ncol(V.data) 214 | p.prime = c() 215 | pm.prime = c() 216 | # we define p.prime and pm.prime for intermediate steps in the iterations 217 | pm.prime[lastRow-1] = pm + pd 218 | p.prime[lastRow-1] = (-pu*V.data[lastRow-2,lastCol] 219 | -(pm-2)*V.data[lastRow-1,lastCol] 220 | -pd*V.data[lastRow,lastCol]+pd*lambda.low) 221 | 222 | # wo start from the last row (where the boundary took place) 223 | 224 | for (j in (lastRow-2):2) { 225 | pm.prime[j] = pm - pu*pd/pm.prime[j+1] 226 | p.prime[j] = (-pu*V.data[j-1,colI+1] 227 | -(pm-2)*V.data[j,colI+1] 228 | -pd*V.data[j+1,colI+1] 229 | - p.prime[j+1]*pd/pm.prime[j+1]) 230 | } 231 | # solve all of the p.prime and pm.price 232 | 233 | V.data[1, colI] = (p.prime[2] + pm.prime[2]*lambda.up)/(pu + pm.prime[2]) 234 | V.data[2, colI] = V.data[1,colI] - lambda.up 235 | # we get the first two option values 236 | 237 | # And then go back the rest of them 238 | for(j in 3:(lastRow-1)) { 239 | V.data[j, colI] = (p.prime[j] -pu*V.data[j-1, colI])/pm.prime[j] 240 | } 241 | V.data[lastRow, colI] = V.data[lastRow-1, colI] - lambda.low 242 | 243 | # Out put the V.data(option table) 244 | 245 | list(V.data=V.data) 246 | } 247 | 248 | for(j in (nCols-1):1){ 249 | V.data[, j] = solve.crank.nicolson.tridiagnoal(V.data,pu,pm,pd,lambda.up,lambda.low,colI=j)$V.data[,j] 250 | if(type1=='American'){ 251 | for(i in 1:nRows){ 252 | V.data[i, j] = payoff(S=S.data[i,lastCol],K=K,type1 = 'American',type2 = type2, 253 | expect=V.data[i, j]) 254 | } 255 | # consider American option can be exercised early 256 | } 257 | } 258 | list(Type = paste(type1,type2), probability=c(pu,pm,pd), 259 | Price = V.data[step+1,1], 260 | S.first.steps=S.data[(step+1-first):(step+1+first),1:(1+first)], 261 | V.first.steps=V.data[(step+1-first):(step+1+first),1:(1+first)] 262 | ) 263 | # output result including Type, Option price, probability 264 | # and first steps of Stock and Opton. 
265 | } 266 | cn_result = crank.nicolson.method(S = S, K = K, tao = tau, sigma = sigma, 267 | r = r, step = 50, dx = 0.05, first = 3, div = 0, type1 = 'American',type2 = 'Put') 268 | cn_result$Price 269 | sigma * sqrt(3 * dt) 270 | 271 | explicit.method = function(S,K,tao,sigma,r,step,dx, first, 272 | div,type1=c('American','European'),type2=c('Call','Put')){ 273 | payoff=function(S,K,expect,type1=c('American','Europrean'),type2=c('Call','Put')){ 274 | 275 | temp1=ifelse(type1=='European',0,expect) 276 | temp2=ifelse(type2=='Call',1,-1) 277 | result=max(temp2*(S-K),temp1) 278 | 279 | return(result) 280 | } 281 | 282 | v=r-div-0.5*sigma^2 283 | dt=tao/step 284 | pu=0.5*dt*(sigma^2/dx^2+v/dx) 285 | pm=1-dt*sigma^2/dx^2-r*dt 286 | pd=0.5*dt*(sigma^2/dx^2-v/dx) 287 | # First we calculate parameters we need 288 | 289 | firstRow = firstCol = 1 290 | nRows = lastRow = 2*step+1 291 | middleRow = step+1 292 | nCols = lastCol = step+1 293 | # Some variables we need to help us understand the position in tree. 294 | 295 | V.data = S.data = matrix(0, nrow=nRows, ncol=nCols, dimnames=list( 296 | paste("NumUps", step:-step, sep="="), paste("T", 0:step, sep="="))) 297 | S.data[step+1, 1] = S 298 | # Set the data table and initial stock value 299 | 300 | for (j in 1:(nCols-1)) { 301 | for(i in (nCols-(j-1)):(nCols+(j-1))) { 302 | S.data [i-1, j+1] = S.data [i, j]*exp(dx) 303 | # up case 304 | S.data [i , j+1] = S.data [i, j] 305 | # middle case 306 | S.data [i+1, j+1] = S.data [i, j]*exp(-dx) 307 | # down case 308 | } 309 | } 310 | # Calculating all stock prices. 311 | 312 | for (i in 1:nRows) { 313 | V.data[i, lastCol] = payoff(S=S.data[i,lastCol],K=K,type1 = 'European',type2 = type2) 314 | } 315 | # Calculating the option price at maturity. 316 | 317 | for (j in (nCols-1):1) { 318 | for(i in (middleRow+(step-1)):(middleRow-(step-1))) { 319 | V.data[i, j] = (pu*V.data[i-1,j+1] + pm*V.data[i, j+1] + pd*V.data[i+1,j+1]) 320 | 321 | } 322 | # Boundary Condition 323 | stockTerm = ifelse(type2=='Call', (S.data[1,lastCol]-S.data[2,lastCol]), 324 | (S.data[nRows-1,lastCol]-S.data[nRows,lastCol])) 325 | V.data[firstRow, j] = V.data[firstRow+1,j] + ifelse(type2=='Call', stockTerm, 0) 326 | V.data[lastRow , j] = V.data[lastRow-1, j] + ifelse(type2=='Call', 0, stockTerm) 327 | # That is for Call, when stock price is high, dV/dS = 1 328 | # when stock price is low, dV/dS = 0 329 | # For put, the when stock price is high, dV/dS = 0 330 | # when stock price is low, dV/dS = -1 331 | 332 | # Then we will add up American option case, deciding whether to exercise 333 | if(type1=='American') { 334 | for(i in lastRow:firstRow){ 335 | V.data[i, j] = payoff(S=S.data[i,lastCol],K=K,type1 = 'American',type2 = type2, 336 | expect=V.data[i, j]) 337 | } 338 | } 339 | } 340 | ## Step backwards through the trinomial tree 341 | 342 | list(Type = paste(type1,type2), probability=c(pu,pm,pd), 343 | Price = V.data[step+1,1], 344 | S.first.steps=S.data[(step+1-first):(step+1+first),1:(1+first)], 345 | V.first.steps=V.data[(step+1-first):(step+1+first),1:(1+first)] 346 | ## output result including Type, Option price, probability 347 | ## and first steps of Stock and Opton. 
348 | ) 349 | } 350 | explicit.method(S = S, K = K, tao = tau, sigma = sigma, 351 | r = r, step = 100, dx = 0.05, first = 3, div = 0, type1 = 'American',type2 = 'Put') 352 | 353 | # Bakshi parameters------ 354 | S = 100 355 | K = 100 356 | r = 0.04 357 | tau = 3 / 12 358 | kappa = 1.15 359 | theta = 0.0348 360 | sigma = 0.39 361 | rho = -0.64 362 | v_0 = 0.1866 ^ 2 363 | path = Nrow = 1000 364 | n = 100 365 | Ncol = n + 1 366 | dt = tau / n 367 | 368 | # Simulation with QE scheme ------------- 369 | 370 | QE_scheme = function(v){ 371 | #browser() 372 | len = length(v) 373 | m = theta + (v - theta) * exp(-kappa * dt) 374 | s = sqrt(v * sigma ^ 2 * exp(-kappa * dt) / kappa * (1 - exp(-kappa * dt)) + theta * sigma ^ 2 / (2 * kappa) * (1 - exp(-kappa * dt)) ^ 2) 375 | 376 | psi = s ^ 2 / m ^ 2 377 | psi_c = 1.5 378 | u = runif(len) 379 | epsilon = qnorm(u) 380 | # for psi <= psi_c 381 | b = sqrt(2 / psi - 1 + sqrt(2 / psi * (2 / psi - 1))) 382 | a = m / (1 + b ^ 2) 383 | # for psi > psi_c 384 | p = (psi - 1) / (psi + 1) 385 | beta = (1 - p) / m 386 | psi_inv = ifelse((u <= p), 0, 1 / beta * log((1 - p) / (1 - u))) 387 | v_bar = ifelse(psi <= psi_c, a * (b + epsilon) ^ 2, psi_inv) 388 | return(v_bar) 389 | } 390 | 391 | QE_mc = function(data, v_data){ 392 | t_1 = Sys.time() 393 | # we can add time consumption record 394 | Ncol = Ncols = ncol(data) 395 | Nrows = Nrow = nrow(data) 396 | option_value = matrix(0,nrow = Nrow,ncol = Ncol) 397 | option_value[,Ncol] = ifelse(K > data[,Ncol], K - data[,Ncol], 0) 398 | for(j in Ncols : 2){ 399 | # change 3 to 2 400 | X = data[, j - 1] 401 | # stock price 402 | Y = option_value[, j] * exp(-r * dt) 403 | # discounted cashflow 404 | vol = v_data[, j - 1] 405 | # volatility 406 | comb_table = cbind(X, Y, vol) 407 | # stock price at time = j - 1 and value of time j discount back to j - 1 and volatility 408 | 409 | if(sum(comb_table[comb_table[,'X'] < K,]) < 2){ 410 | EY = Y 411 | }else{ 412 | itm_table = as.data.frame(comb_table[comb_table[,'X'] < K,]) 413 | # In-the_Money option 414 | lr = lm(data = itm_table,formula = Y~ X + vol) 415 | # regression with quadratic term Y~ X + vol 416 | EY = lr$fitted.values 417 | # regression fitted expectation of Y 418 | } 419 | Z = c() 420 | k = 0 421 | for(i in 1 : Nrow){ 422 | if(comb_table[i,'X'] < K){ 423 | k = k + 1 424 | Z[i] = EY[k] 425 | } 426 | else{ 427 | Z[i] = comb_table[i,'Y'] 428 | } 429 | 430 | } 431 | # Z is the option value combining 1. 
exercise, the expectation EY; 2 not exercise, the discounted value Y 432 | C = ifelse(K-comb_table[,'X'] > 0, K-comb_table[,'X'],0) 433 | # compare whether to exercise 434 | comb_2 = cbind(comb_table,Z,C) 435 | option_value[,j - 1] = ifelse(comb_2[,'C'] < comb_2[,'Z'],comb_2[,'Y'],comb_2[,'C']) 436 | # estimate option value one step backward 437 | 438 | } 439 | #price = mean(option_value[,2]) * exp(-r*dt) 440 | price = mean(option_value[,1]) 441 | price = ifelse(price > K - S, price, K - S) 442 | # consider exercise at start 443 | se = sd(option_value[,2] * exp(-r*dt)) / sqrt(Nrows) 444 | t_2 = Sys.time() 445 | time = difftime(t_2,t_1,units = 'secs') 446 | # output regression_table = itm_table,EY = EY if needed 447 | result = list(price = price, se = se,time = time) 448 | return(result) 449 | } 450 | 451 | QE_lsm = function(path){ 452 | #browser() 453 | Nrow = path 454 | t_1 = Sys.time() 455 | simu_data = simu_vol = matrix(0,nrow = Nrow,ncol = Ncol) 456 | colnames(simu_data) = paste("step", 0:n, sep="=") 457 | simu_data[,1] = S 458 | simu_vol[,1] = v_0 459 | 460 | 461 | # Heston simulation with Euler Discretization 462 | for(i in 2 : Ncol){ 463 | 464 | bm_s = rnorm(Nrow,0,1) 465 | bm_s2 = rnorm(Nrow,0,1) 466 | bm_v = rho * bm_s + sqrt(1 - rho ^ 2) * bm_s2 467 | drift = (r - 0.5 * simu_vol[,i - 1]) * dt 468 | diffusion = sqrt(simu_vol[,i - 1]) * sqrt(dt) * bm_s 469 | simu_data[,i] = exp(log(simu_data[,i - 1]) + drift + diffusion) 470 | 471 | simu_vol[,i] = QE_scheme(simu_vol[,i - 1]) 472 | } 473 | 474 | 475 | lsmc_result = QE_mc(simu_data, simu_vol) 476 | price = lsmc_result[[1]] 477 | se = lsmc_result[[2]] 478 | t_2 = Sys.time() 479 | time = difftime(t_2,t_1,units = 'secs') 480 | result = list(price = price, se = se, time = time) 481 | return(result) 482 | } 483 | 484 | QE_lsm_result = QE_lsm(5000) 485 | QE_lsm_result 486 | 487 | # weighted Laguerre polynomials-------- 488 | laguerre_0 = function(x){ 489 | y = exp(-x/2) 490 | return(y) 491 | } 492 | laguerre_1 = function(x){ 493 | y = exp(-x/2) * (1 - x) 494 | return(y) 495 | } 496 | laguerre_2 = function(x){ 497 | y = exp(-x/2) * (1 - 2 * x + x ^ 2 / 2) 498 | return(y) 499 | } 500 | laguerre_3 = function(x){ 501 | y = exp(-x/2) * (2 * x + 3 / 2 * x ^ 2 - 1 / 6 * x ^ 3) 502 | # - 3*x + 1.5*x^2 - 0.1666667*x^3 503 | return(y) 504 | } 505 | laguerre_4 = function(x){ 506 | y = exp(-x/2) * (1 - 4 * x + 3 * x ^ 2 - 1 / 3 * x ^ 3 + 1 / 24 * x ^ 4) 507 | # - 3*x + 1.5*x^2 - 0.1666667*x^3 508 | return(y) 509 | } 510 | laguerre.polynomials(3) 511 | 512 | lsmc_laguerre = function(data){ 513 | #browser() 514 | Ncol = Ncols = ncol(data) 515 | Nrows = Nrow = nrow(data) 516 | option_value = matrix(0,nrow = Nrow,ncol = Ncol) 517 | option_value[,Ncol] = ifelse(K > data[,Ncol], K - data[,Ncol], 0) 518 | for(j in Ncols : 2){ 519 | # change 3 to 2 520 | X = data[,j - 1] 521 | Y = option_value[, j] * exp(-r * dt) 522 | 523 | 524 | # insert laguerre polynomial 525 | L_0 = laguerre_0(X) 526 | L_1 = laguerre_1(X) 527 | L_2 = laguerre_2(X) 528 | L_3 = laguerre_3(X) 529 | L_4 = laguerre_4(X) 530 | comb_table = cbind(X, L_0, L_1, L_2, L_3, L_4, Y) 531 | # stock price at time = j - 1 and value of time j discount back to j - 1 532 | 533 | if(nrow(comb_table[comb_table[,'X'] < K,]) < 2){ 534 | EY = Y 535 | }else{ 536 | itm_table = as.data.frame(comb_table[comb_table[,'X'] < K,]) 537 | lr = lm(data = itm_table,formula = Y ~ L_0 + L_1 + L_2 + L_3 + L_4) 538 | # regression with quadratic term Y ~ L_0 + L_1 + L_2 + L_3 + L_4 539 | EY = lr$fitted.values 540 | # regression 
fitted expectation of Y
541 | }
542 | Z = c()
543 | k = 0
544 | for(i in 1 : Nrow){
545 | if(comb_table[i,'X'] < K){
546 | k = k + 1
547 | Z[i] = EY[k]
548 | }
549 | else{
550 | Z[i] = comb_table[i,'Y']
551 | }
552 | }
553 | # Z is the continuation value: the regression estimate EY for ITM paths, the discounted value Y for OTM paths
554 | C = ifelse(K-comb_table[,'X'] > 0, K-comb_table[,'X'],0)
555 | # compare whether to exercise
556 | comb_2 = cbind(comb_table,Z,C)
557 | option_value[,j - 1] = ifelse(comb_2[,'C'] < comb_2[,'Z'],comb_2[,'Y'],comb_2[,'C'])
558 | # estimate option value one step backward
559 | 
560 | }
561 | #price = mean(option_value[,2]) * exp(-r*dt)
562 | price = mean(option_value[,1])
563 | price = ifelse(price > K - S, price, K - S)
564 | # consider exercise at start
565 | se = sd(option_value[,2] * exp(-r*dt)) / sqrt(Nrows)
566 | # output regression_table = itm_table,EY = EY if needed
567 | result = c(price, se)
568 | return(result)
569 | }
570 | 
571 | 
572 | lsmc_laguerre_result = lsmc_laguerre(simu_data)
573 | lsmc_laguerre_result
574 | 
575 | # path as input---------
576 | BS_lsm_laguerre = function(path){
577 | Nrow = path
578 | t_1 = Sys.time()
579 | 
580 | 
581 | simu_data = matrix(0,nrow = Nrow,ncol = Ncol)
582 | colnames(simu_data) = paste("step", 0:n, sep="=")
583 | # simulate Black Scholes ----------
584 | simu_data[,1] = S
585 | for(i in 2:Ncol){
586 | epsilon = rnorm(path,0,1)
587 | simu_data[,i] = simu_data[,i - 1] + r * simu_data[,i - 1] * dt + sigma * simu_data[,i - 1] * sqrt(dt) * epsilon
588 | }
589 | 
590 | lsmc_laguerre_result = lsmc_laguerre(simu_data)
591 | price = lsmc_laguerre_result[1]
592 | se = lsmc_laguerre_result[2]
593 | t_2 = Sys.time()
594 | time = difftime(t_2,t_1,units = 'secs')
595 | result = list(price = price, se = se, time = time)
596 | return(result)
597 | }
598 | BS_lsm_laguerre_result = BS_lsm_laguerre(2000)
599 | BS_lsm_laguerre_result
600 | 
601 | 
602 | 
--------------------------------------------------------------------------------
/R implementation/example.csv:
--------------------------------------------------------------------------------
1 | 1,1.09,1.08,1.34 1,1.16,1.26,1.54 1,1.22,1.07,1.03 1,0.93,0.97,0.92 1,1.11,1.56,1.52 1,0.76,0.77,0.9 1,0.92,0.84,1.01 1,0.88,1.22,1.34
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Read Me
2 | 
3 | Author: Jerry Xia
4 | 
5 | Date: 2018/06/19
6 | 
7 | *Note: Advanced Markdown features such as math expressions may not render on GitHub; please see README.pdf if you want more details.*
8 | 
9 | ## Implementation
10 | 
11 | Please feel free to see the Monte Carlo engine: MonteCarlo.py
12 | 
13 | ### Classification
14 | 
15 | * regular Monte Carlo simulation
16 | * optimal hedged Monte Carlo simulation
17 | * delta-based Monte Carlo simulation
18 | * Monte Carlo with antithetic variates
19 | * Least square method of Longstaff and Schwartz (LSM)
20 | * Hedged Least Square method (HLSM)
21 | 
22 | ### Underlying Process
23 | 
24 | * geometric Brownian motion
25 | * CIR model
26 | * Heston model
27 | 
28 | ### Boundary Scheme (CIR model)
29 | 
30 | * absorption
31 | * reflection
32 | * Higham and Mao
33 | * partial truncation
34 | * full truncation
35 | 
36 | 
37 | 
38 | # Optimal Hedged Monte Carlo
39 | 
40 | Model Inventors: Marc Potters, Jean-Philippe Bouchaud, Dragan Sestovic
41 | 
42 | 
43 | 
44 | ## 1 Introduction
45 | 
46 | This is a Python Notebook about variance
reduction Monte Carlo simulations. In this script, I implemented the following variance reduction methods as well as their antithetic variates' versions: 47 | 48 | * regular Monte Carlo 49 | * Monte Carlo with delta-based control variates 50 | * optimal hedged Monte Carlo 51 | 52 | Due to its significant efficiency and robustness, I mainly focus on the optimal hedged Monte Carlo (OHMC) method in option pricing. We invoke this method to price European options and compare it with the other methods. 53 | 54 | ### 1.1 Facts 55 | * The option price is not simply the average value of the discounted future pay-off over the objective (or historical) probability distribution 56 | * The requirement of absence of arbitrage opportunities is equivalent to the existence of a "risk-neutral measure", such that the price is indeed its average discounted future pay-off. 57 | * Risk in option trading cannot be eliminated 58 | 59 | ### 1.2 Objective 60 | * It would be satisfactory to have an option theory where the objective stochastic process of the underlying is used to calculate the option price, the hedge strategy and the *residual risk*. 61 | 62 | ### 1.3 Advantages 63 | * It is a versatile method for pricing complicated path-dependent options. 64 | * It is a considerable variance reduction scheme for Monte Carlo. 65 | * It provides not only a numerical estimate of the option price, but also of the optimal hedge strategy and of the residual risk. 66 | * This method does not rely on the notion of a risk-neutral measure, and can be applied to any model of the true dynamics of the underlying. 67 | 68 | ## 2 Underlying dynamics 69 | 70 | ### Black-Scholes Model 71 | $$dS = \mu S dt + \sigma S dW_t$$ 72 | $$\log S_{t+1} = \log S_t +(\mu - \frac{\sigma^2}{2})\Delta t + \sigma \sqrt{\Delta t} \epsilon$$ 73 | where 74 | $$\epsilon \sim N(0,1)$$ 75 | In the risk-neutral measure, $\mu = r - q$. 76 | ### Heston Model 77 | The basic Heston model assumes that $S_t$, the price of the asset, is determined by a stochastic process: 78 | $$ 79 | dS_t = \mu S_t dt + \sqrt{v_t} S_t d W_t^S\\ 80 | dv_t = \kappa (\theta - v_t) dt + \xi \sqrt{v_t} d W_t^v 81 | $$ 82 | where 83 | $$E[dW_t^S dW_t^v]=\rho dt$$ 84 | In the risk-neutral measure, $\mu = r - q$. 85 | 86 | ## 3 Methodology 87 | 88 | ### 3.1 Symbol Definition 89 | Option pricing always requires working backward, because the option price is known exactly at maturity. As with other schemes, we determine the option price step by step from the maturity $t=K\tau=T$ back to the present time $t=0$, the unit of time being $\tau$ (for example, one day). We simulate $N$ trajectories. In trajectory $i$, the price of the underlying asset at time $k\tau$ is denoted as $S_k^{(i)}$. The price of the derivative at time $k\tau$ is denoted as $C_k$, and the hedge function is $H_k$. 
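To make the trajectory notation concrete, here is a minimal sketch (not the implementation in `MonteCarlo.py`; the function name `simulate_gbm_paths` and its arguments are illustrative) of how the matrix of simulated prices $S_k^{(i)}$ could be generated with the log-Euler Black-Scholes scheme of Section 2:

```python
import numpy as np

def simulate_gbm_paths(S0, mu, sigma, T, n_steps, n_trials, seed=None):
    """Return an (n_trials, n_steps + 1) matrix whose row i holds one trajectory S_k^(i)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # log S_{k+1} = log S_k + (mu - sigma^2 / 2) dt + sigma sqrt(dt) eps,  eps ~ N(0, 1)
    eps = rng.standard_normal((n_trials, n_steps))
    log_increments = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * eps
    paths = np.empty((n_trials, n_steps + 1))
    paths[:, 0] = S0
    paths[:, 1:] = S0 * np.exp(np.cumsum(log_increments, axis=1))
    return paths

# Risk-neutral drift mu = r - q; e.g. 5000 trajectories of 252 daily steps over one year
price_matrix = simulate_gbm_paths(S0=100.0, mu=0.02, sigma=0.2, T=1.0,
                                  n_steps=252, n_trials=5000, seed=0)
```
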
We define an optimal hedged portfolio as 90 | $$W_k^{(i)} = C_k(S_k^{(i)}) + H_k(S_k^{(i)})S_k^{(i)}$$ 91 | The one-step change of our portfolio is 92 | $$\Delta W_k^{(i)}= df(k,k+1) C_{k+1}(S_{k+1}^{(i)}) - C_k(S_k^{(i)}) + H_k(S_{k}^{(i)}) (df(k,k+1) S_{k+1}^{(i)} - S_{k}^{(i)})$$ 93 | Where $df(k,k+1)$ is the discounted factor from time $k\tau$ to $(k+1) \tau$, $df2(k,k+1)$ is the discounted factor considering dividend $e^{-(r-q)(t_{k+1}-t_k)}$ 94 | 95 | ### 3.2 Objective 96 | The optimal hedged algorithm can be interpreted as the following optimal problem 97 | $$ 98 | \begin{align} 99 | \mbox{minimize}\quad & \quad Var[\Delta W_k]\\ 100 | \mbox{subject to}\quad & \quad E[\Delta W_k]=0 101 | \end{align} 102 | $$ 103 | It means we should try to minimize the realized volatility of hedged portfolio while maintaining the expected value of portfolio unchanged. 104 | 105 | ### 3.3 Basis Functions 106 | The original optimization is very difficult to solve. Thus we assume a set of basis function and solved it in such subspace. We use $N_C$and $N_H$ to denote the number of basis functions for price and hedge. 107 | $$ 108 | \begin{align} 109 | C_k(\cdot) &= \sum_{i=0}^{N_C} a_{k,i} A_i(\cdot)\\ 110 | H_k(\cdot) &= \sum_{i=0}^{N_H} b_{k,i} B_i(\cdot) 111 | \end{align} 112 | $$ 113 | The basis functions $A_i$ and $B_i$ are priori determined and need not to be identical. The coefficients $a_i$ and $b_i$ can be calibrated by solving the optimal problem. 114 | 115 | ### 3.4 Numerical Solution 116 | $$ 117 | \begin{align} 118 | \mbox{minimize}\quad & \quad \frac{1}{N} \sum_{i=1}^N \Delta W_k^{(i)2}\\ 119 | \mbox{subject to}\quad & \quad \frac{1}{N} \sum_{i=1}^N \Delta W_k^{(i)}=0 120 | \end{align} 121 | $$ 122 | 123 | Denote the discounted forward underlying price change at time $k\tau$ as 124 | 125 | $$\Delta S_k = df2(k,k+1) S_{k+1} - S_k$$ 126 | 127 | Define 128 | $$ 129 | \begin{align} 130 | Q_k &= \begin{bmatrix} 131 | -A_{k,1}(S_k^{(1)}) & \cdots & -A_{k,N_C}(S_k^{(1)}) & B_{k,1}(S_k^{(1)})\Delta S_k^{(1)}& \cdots & B_{k,N_H}(S_k^{(1)})\Delta S_k^{(1)} \\ 132 | -A_{k,1}(S_k^{(2)}) & \cdots & -A_{k,N_C}(S_k^{(2)}) & B_{k,1}(S_k^{(2)})\Delta S_k^{(2)}& \cdots & B_{k,N_H}(S_k^{(1)})\Delta S_k^{(2)} \\ 133 | \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ 134 | -A_{k,1}(S_k^{(N)}) & \cdots & -A_{k,N_C}(S_k^{(N)}) & B_{k,1}(S_k^{(N)})\Delta S_k^{(N)}& \cdots & B_{k,N_H}(S_k^{(N)})\Delta S_k^{(N)} 135 | \end{bmatrix}\\\\ 136 | c_k &= (a_{k,1}, \cdots a_{k,N_C}, b_{k,1}, \cdots, b_{k,N_H})^T\\\\ 137 | v_{k} &= df(k,k+1) C_{k+1}(S_{k+1}^{}) 138 | \end{align} 139 | $$ 140 | As for $v_k$, note that we know the exact value at maturity, which means there is no need to approximate price in terms of basis functions, that is 141 | $$ 142 | \begin{align} 143 | v_k = \begin{cases} 144 | df(N-1,N)\ payoff(S_N),\quad & k=N-1\\ 145 | df(k,k+1)\ \sum_{i=1}^{N_C} a_{k+1,i} A_i(S_{k+1}), \quad & 0 0 # ITM 161 | A_matrix = np.array([func(stock_prices_t) for func in func_list]).T 162 | b_matrix = np.maximum(holding_values_tp1,exercise_values_tp1)[:, np.newaxis] * df 163 | A_prime_matrix = A_matrix[ITM_filter, :] 164 | b_prime_matrix = b_matrix[ITM_filter, :] 165 | lr = LinearRegression(fit_intercept=False) 166 | lr.fit(A_prime_matrix, b_prime_matrix) 167 | holding_values_t = np.dot(A_matrix, lr.coef_.T)[:, 0] 168 | exercise_values_t = payoff_fun(stock_prices_t) 169 | # american_values_matrix[:,i] = np.maximum(holding_values_t,exercise_values_t) 170 | exercise_filter = (exercise_values_t > holding_values_t) & 
ITM_filter 171 | exercise_matrix[exercise_filter, i] = 1 172 | 173 | # i=0 174 | # holding_values_tp1 = holding_values_t 175 | # 176 | # stock_price_0 = price_matrix[0, 0] 177 | # holding_value_0 = np.mean(holding_values_tp1 * df) 178 | # exercise_value_0 = payoff_fun(stock_price_0) 179 | # exercise_matrix[:, 0] = exercise_value_0 > holding_value_0 180 | 181 | # redefine the exercise matrix since we should only exercise once 182 | holding_matrix = np.zeros(exercise_matrix.shape, dtype=bool) 183 | for i in np.arange(n_trials): 184 | exercise_row = exercise_matrix[i, :] 185 | if (exercise_row.any()): 186 | exercise_idx = np.where(exercise_row == 1)[0][0] 187 | exercise_row[exercise_idx + 1:] = 0 188 | holding_matrix[i,:exercise_idx+1] = 1 189 | else: 190 | exercise_row[-1] = 1 191 | holding_matrix[i,:] = 1 192 | 193 | self.holding_matrix = holding_matrix 194 | self.exercise_matrix = exercise_matrix 195 | return exercise_matrix 196 | # LSM with only ITM approximation 197 | def LSM2(self, option_type="c", func_list=[lambda x: x ** 0, lambda x: x]): 198 | dt = self.T / self.n_steps 199 | df = np.exp(-self.r * dt) 200 | df2 = np.exp(-(self.r - self.q) * dt) 201 | K = self.K 202 | price_matrix = self.price_matrix 203 | n_trials = self.n_trials 204 | n_steps = self.n_steps 205 | exercise_matrix = np.zeros(price_matrix.shape,dtype=bool) 206 | american_values_matrix = np.zeros(price_matrix.shape) 207 | 208 | def __calc_american_values(payoff_fun,sub_price_matrix,sub_exercise_matrix,df): 209 | exercise_values_t = payoff_fun(sub_price_matrix[:,0]) 210 | ITM_filter = exercise_values_t > 0 211 | n_sub_trials, n_sub_steps = sub_price_matrix.shape 212 | holding_values_t = np.zeros(n_sub_trials) 213 | itemindex = np.where(sub_exercise_matrix==1) 214 | for trial_i in range(n_sub_trials): 215 | first = next(itemindex[1][i] for i,x in enumerate(itemindex[0]) if x==trial_i) 216 | payoff_i = payoff_fun(sub_price_matrix[trial_i, first]) 217 | df_i = df**(n_sub_steps-first) 218 | holding_values_t[trial_i] = payoff_i*df_i 219 | 220 | A_matrix = np.array([func(sub_price_matrix[:,0]) for func in func_list]).T 221 | b_matrix = holding_values_t[:, np.newaxis] # g_tau|Fi 222 | A_prime_matrix = A_matrix[ITM_filter, :] 223 | b_prime_matrix = b_matrix[ITM_filter, :] 224 | lr = LinearRegression(fit_intercept=False) 225 | lr.fit(A_prime_matrix, b_prime_matrix) 226 | exp_holding_values_t = np.dot(A_matrix, lr.coef_.T)[:, 0] # E[g_tau|Fi] only ITM 227 | exp_holding_values_t[np.invert(ITM_filter)] = np.nan 228 | sub_exercise_matrix[:,0] = ITM_filter & (exercise_values_t>exp_holding_values_t) 229 | american_values_t = np.maximum(exp_holding_values_t,exercise_values_t) 230 | return american_values_t 231 | 232 | if (option_type == "c"): 233 | payoff_fun = lambda x: np.maximum(x - K, 0) 234 | elif (option_type == "p"): 235 | payoff_fun = lambda x: np.maximum(K - x, 0) 236 | 237 | # when contract is at the maturity 238 | stock_prices_t = price_matrix[:, -1] 239 | exercise_values_t = payoff_fun(stock_prices_t) 240 | holding_values_t = exercise_values_t 241 | american_values_matrix[:,-1] = exercise_values_t 242 | exercise_matrix[:,-1] = 1 243 | 244 | # before maturaty 245 | for i in np.arange(n_steps)[:0:-1]: 246 | # A1 only ITM 247 | sub_price_matrix = price_matrix[:,i:] 248 | sub_exercise_matrix = exercise_matrix[:,i:] 249 | american_values_t = __calc_american_values(payoff_fun,sub_price_matrix,sub_exercise_matrix,df) 250 | american_values_matrix[:,i] = american_values_t 251 | 252 | # i=0 253 | # regular martingale pricing: LSM 254 
| american_value1 = american_values_matrix[:,1].mean() * df 255 | # with delta hedging: OHMC 256 | v0 = matrix((american_values_matrix[:,1] * df)[:,np.newaxis]) 257 | S0 = price_matrix[:, 0] 258 | S1 = price_matrix[:, 1] 259 | dS0 = df * S1 - S0 260 | Q0 = np.concatenate((-np.ones(n_trials)[:, np.newaxis], dS0[:, np.newaxis]), axis=1) 261 | Q0 = matrix(Q0) 262 | P = Q0.T * Q0 263 | q = Q0.T * v0 264 | A = matrix(np.ones(n_trials, dtype=np.float64)).T * Q0 265 | b = - matrix(np.ones(n_trials, dtype=np.float64)).T * v0 266 | sol = solvers.coneqp(P=P, q=q, A=A, b=b) 267 | self.sol = sol 268 | residual_risk = (v0.T * v0 + 2 * sol["primal objective"]) / n_trials 269 | self.residual_risk = residual_risk[0] # the value of unit matrix 270 | american_value2 = sol["x"][0] 271 | delta_hedge = sol["x"][1] 272 | american_values_matrix[:,0] = american_value2 273 | 274 | # obtain the optimal policies at the inception 275 | holding_matrix = np.zeros(exercise_matrix.shape, dtype=bool) 276 | for i in np.arange(n_trials): 277 | exercise_row = exercise_matrix[i, :] 278 | if (exercise_row.any()): 279 | exercise_idx = np.where(exercise_row == 1)[0][0] 280 | exercise_row[exercise_idx + 1:] = 0 281 | holding_matrix[i,:exercise_idx+1] = 1 282 | else: 283 | exercise_row[-1] = 1 284 | holding_matrix[i,:] = 1 285 | 286 | self.holding_matrix = holding_matrix 287 | self.exercise_matrix = exercise_matrix 288 | self.american_values_matrix = american_values_matrix 289 | 290 | self.american_price = american_value2 291 | self.american_delta = delta_hedge 292 | return american_value2, delta_hedge 293 | 294 | # LSM with non-conformed approximation 295 | def LSM3(self, option_type="c", func_list=[lambda x: x ** 0, lambda x: x]): 296 | dt = self.T / self.n_steps 297 | df = np.exp(-self.r * dt) 298 | df2 = np.exp(-(self.r - self.q) * dt) 299 | K = self.K 300 | price_matrix = self.price_matrix 301 | n_trials = self.n_trials 302 | n_steps = self.n_steps 303 | exercise_matrix = np.zeros(price_matrix.shape,dtype=bool) 304 | american_values_matrix = np.zeros(price_matrix.shape) 305 | 306 | def __calc_american_values(payoff_fun,func_list, sub_price_matrix,sub_exercise_matrix,df): 307 | exercise_values_t = payoff_fun(sub_price_matrix[:,0]) 308 | ITM_filter = exercise_values_t > 0 309 | OTM_filter = exercise_values_t <= 0 310 | n_sub_trials, n_sub_steps = sub_price_matrix.shape 311 | holding_values_t = np.zeros(n_sub_trials) # simulated samples: y 312 | exp_holding_values_t = np.zeros(n_sub_trials) # regressed results: E[y] 313 | 314 | itemindex = np.where(sub_exercise_matrix==1) 315 | for trial_i in range(n_sub_trials): 316 | first = next(itemindex[1][i] for i,x in enumerate(itemindex[0]) if x==trial_i) 317 | payoff_i = payoff_fun(sub_price_matrix[trial_i, first]) 318 | df_i = df**(n_sub_steps-first) 319 | holding_values_t[trial_i] = payoff_i*df_i 320 | 321 | A_matrix = np.array([func(sub_price_matrix[:,0]) for func in func_list]).T 322 | b_matrix = holding_values_t[:, np.newaxis] # g_tau|Fi 323 | ITM_A_matrix = A_matrix[ITM_filter, :] 324 | ITM_b_matrix = b_matrix[ITM_filter, :] 325 | OTM_A_matrix = A_matrix[OTM_filter, :] 326 | OTM_b_matrix = b_matrix[OTM_filter, :] 327 | lr = LinearRegression(fit_intercept=False) 328 | # non-conformed approximation: do not assure the continuity of the approximation. 
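# NOTE (added commentary, not in the original source): "non-conformed" means the continuation
# value E[g_tau | F_i] is fitted with two separate regressions below -- one on the in-the-money
# paths and one on the out-of-the-money paths -- so the two fitted pieces are not required to
# join continuously at the exercise boundary, unlike LSM2, which regresses on ITM paths only
# and leaves the OTM continuation values undefined.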
329 | lr.fit(ITM_A_matrix, ITM_b_matrix) 330 | exp_holding_values_t[ITM_filter] = np.dot(ITM_A_matrix, lr.coef_.T)[:, 0] # E[g_tau|Fi] only ITM 331 | 332 | lr.fit(OTM_A_matrix, OTM_b_matrix) 333 | exp_holding_values_t[OTM_filter] = np.dot(OTM_A_matrix, lr.coef_.T)[:, 0] # E[g_tau|Fi] only OTM 334 | 335 | sub_exercise_matrix[:,0] = ITM_filter & (exercise_values_t>exp_holding_values_t) 336 | american_values_t = np.maximum(exp_holding_values_t,exercise_values_t) 337 | return american_values_t 338 | 339 | if (option_type == "c"): 340 | payoff_fun = lambda x: np.maximum(x - K, 0) 341 | elif (option_type == "p"): 342 | payoff_fun = lambda x: np.maximum(K - x, 0) 343 | 344 | # when contract is at the maturity 345 | stock_prices_t = price_matrix[:, -1] 346 | exercise_values_t = payoff_fun(stock_prices_t) 347 | holding_values_t = exercise_values_t 348 | american_values_matrix[:,-1] = exercise_values_t 349 | exercise_matrix[:,-1] = 1 350 | 351 | # before maturaty 352 | for i in np.arange(n_steps)[:0:-1]: 353 | sub_price_matrix = price_matrix[:,i:] 354 | sub_exercise_matrix = exercise_matrix[:,i:] 355 | american_values_t = __calc_american_values(payoff_fun,sub_price_matrix,sub_exercise_matrix,df) 356 | american_values_matrix[:,i] = american_values_t 357 | 358 | # i=0 359 | # regular martingale pricing: LSM 360 | american_value1 = american_values_matrix[:,1].mean() * df 361 | # with delta hedging: OHMC 362 | v0 = matrix((american_values_matrix[:,1] * df)[:,np.newaxis]) 363 | S0 = price_matrix[:, 0] 364 | S1 = price_matrix[:, 1] 365 | dS0 = df * S1 - S0 366 | Q0 = np.concatenate((-np.ones(n_trials)[:, np.newaxis], dS0[:, np.newaxis]), axis=1) 367 | Q0 = matrix(Q0) 368 | P = Q0.T * Q0 369 | q = Q0.T * v0 370 | A = matrix(np.ones(n_trials, dtype=np.float64)).T * Q0 371 | b = - matrix(np.ones(n_trials, dtype=np.float64)).T * v0 372 | sol = solvers.coneqp(P=P, q=q, A=A, b=b) 373 | self.sol = sol 374 | residual_risk = (v0.T * v0 + 2 * sol["primal objective"]) / n_trials 375 | self.residual_risk = residual_risk[0] # the value of unit matrix 376 | american_value2 = sol["x"][0] 377 | delta_hedge = sol["x"][1] 378 | american_values_matrix[:,0] = american_value2 379 | 380 | # obtain the optimal policies at the inception 381 | holding_matrix = np.zeros(exercise_matrix.shape, dtype=bool) 382 | for i in np.arange(n_trials): 383 | exercise_row = exercise_matrix[i, :] 384 | if (exercise_row.any()): 385 | exercise_idx = np.where(exercise_row == 1)[0][0] 386 | exercise_row[exercise_idx + 1:] = 0 387 | holding_matrix[i,:exercise_idx+1] = 1 388 | else: 389 | exercise_row[-1] = 1 390 | holding_matrix[i,:] = 1 391 | 392 | self.holding_matrix = holding_matrix 393 | self.exercise_matrix = exercise_matrix 394 | self.american_values_matrix = american_values_matrix 395 | 396 | self.american_price = american_value2 397 | self.american_delta = delta_hedge 398 | return american_value2, delta_hedge 399 | 400 | def BlackScholesPricer(self, option_type='c'): 401 | S = self.S0 402 | K = self.K 403 | T = self.T 404 | r = self.r 405 | q = self.q 406 | sigma = self.sigma 407 | d1 = (np.log(S / K) + (r - q) * T + 0.5 * sigma ** 2 * T) / (sigma * np.sqrt(T)) 408 | d2 = d1 - sigma * np.sqrt(T) 409 | N = lambda x: sp.stats.norm.cdf(x) 410 | call = np.exp(-q * T) * S * N(d1) - np.exp(-r * T) * K * N(d2) 411 | put = call - np.exp(-q * T) * S + K * np.exp(-r * T) 412 | if (option_type == "c"): 413 | return call 414 | elif (option_type == "p"): 415 | return put 416 | else: 417 | print("please enter the option type: (c/p)") 418 | pass 419 | 420 | 
def MCPricer(self, option_type='c', isAmerican=False): 421 | price_matrix = self.price_matrix 422 | n_steps = self.n_steps 423 | n_trials = self.n_trials 424 | strike = self.K 425 | risk_free_rate = self.r 426 | time_to_maturity = self.T 427 | dt = time_to_maturity / n_steps 428 | if (option_type == "c"): 429 | payoff_fun = lambda x: np.maximum(x-strike,0) 430 | elif (option_type == "p"): 431 | payoff_fun = lambda x: np.maximum(strike-x, 0) 432 | else: 433 | print("please enter the option type: (c/p)") 434 | return 435 | 436 | if (isAmerican == False): 437 | 438 | payoff = payoff_fun(price_matrix[:, n_steps]) 439 | # vk = payoff*df 440 | value_results = payoff * np.exp(-risk_free_rate * time_to_maturity) 441 | self.payoff = payoff 442 | else: 443 | exercise_matrix = self.exercise_matrix 444 | t_exercise_array = dt * np.where(exercise_matrix == 1)[1] 445 | value_results = payoff_fun(price_matrix[np.where(exercise_matrix == 1)]) * np.exp(-risk_free_rate * t_exercise_array) 446 | 447 | regular_mc_price = np.average(value_results) 448 | self.mc_price = regular_mc_price 449 | self.value_results = value_results 450 | return regular_mc_price 451 | 452 | def BSDeltaHedgedPricer(self, option_type="c"): 453 | 454 | regular_mc_price = self.MCPricer(option_type=option_type) 455 | dt = self.T / self.n_steps 456 | df2 = np.exp(-(self.r - self.q) * dt) 457 | 458 | # Delta hedged cash flow 459 | def Delta_fun(x, tau, option_type): 460 | d1 = (np.log(x / self.K) + (self.r - self.q) * tau + self.sigma ** 2 * tau / 2) / ( 461 | self.sigma * np.sqrt(tau)) 462 | if (option_type == 'c'): 463 | return sp.stats.norm.cdf(d1) 464 | elif (option_type == 'p'): 465 | return -sp.stats.norm.cdf(-d1) 466 | 467 | discounted_hedge_cash_flow = np.zeros(self.n_trials) 468 | for i in range(self.n_trials): 469 | Sk_array = self.price_matrix[i, :] 470 | bi_diag_matrix = np.diag([-1] * (self.n_steps), 0) + np.diag([df2] * (self.n_steps - 1), 1) 471 | # (Sk+1 exp(-r dt) - Sk) exp(-r*(tk-t0)) 472 | discounted_stock_price_change = np.dot(bi_diag_matrix, Sk_array[:-1]) 473 | discounted_stock_price_change[-1] += Sk_array[-1] * df2 474 | discounted_stock_price_change *= np.exp(-self.r * np.arange(self.n_steps) * dt) 475 | tau_array = dt * np.arange(self.n_steps, 0, -1) 476 | Delta_array = np.array([Delta_fun(Sk, tau, option_type) for Sk, tau in zip(Sk_array[:-1], tau_array)]) 477 | discounted_hedge_cash_flow[i] = np.dot(Delta_array, discounted_stock_price_change) 478 | 479 | BSDeltaBased_mc_price = regular_mc_price - discounted_hedge_cash_flow.mean() 480 | # print("The average discounted hedge cash flow: {}".format(discounted_hedge_cash_flow.mean())) 481 | 482 | value_results = self.payoff * np.exp(-self.r * self.T) - discounted_hedge_cash_flow 483 | # print("Sanity check {} = {}".format(value_results.mean(),BSDeltaBased_mc_price)) 484 | self.value_results = value_results 485 | 486 | return BSDeltaBased_mc_price 487 | 488 | def OHMCPricer(self, option_type='c', isAmerican=False, func_list=[lambda x: x ** 0, lambda x: x]): 489 | def _calculate_Q_matrix(S_k, S_kp1, df, df2, func_list): 490 | dS = df2 * S_kp1 - S_k 491 | A = np.array([func(S_k) for func in func_list]).T 492 | B = (np.array([func(S_k) for func in func_list]) * dS).T 493 | return np.concatenate((-A, B), axis=1) 494 | 495 | price_matrix = self.price_matrix 496 | # k = n_steps 497 | dt = self.T / self.n_steps 498 | df = np.exp(- self.r * dt) 499 | df2 = np.exp(-(self.r - self.q) * dt) 500 | n_basis = len(func_list) 501 | n_trials = self.n_trials 502 | n_steps = self.n_steps 
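# NOTE (added commentary, not in the original source): the backward induction below implements
# the OHMC scheme described in the README. At each step k the hedged one-step P&L is written as
# Delta_W = Q_k c_k + v_k, where the columns of Q_k hold -A_i(S_k) (price basis) and
# B_i(S_k) * dS_k (hedge basis), and v_k is the discounted next-step option value.
# Minimising sum(Delta_W ** 2) subject to mean(Delta_W) = 0 is the quadratic program handed to
# cvxopt's solvers.coneqp with P = Q_k.T * Q_k, q = Q_k.T * v_k, A = ones.T * Q_k and
# b = -ones.T * v_k, exactly as constructed in the loop that follows.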
503 | strike = self.K 504 | 505 | if (option_type == "c"): 506 | payoff_fun = lambda x: np.maximum(x-strike,0) 507 | # payoff = (price_matrix[:, n_steps] - strike) 508 | elif (option_type == "p"): 509 | payoff_fun = lambda x: np.maximum(strike-x,0) 510 | # payoff = (strike - price_matrix[:, n_steps]) 511 | else: 512 | print("please enter the option type: (c/p)") 513 | return 514 | 515 | if isAmerican is True: 516 | holding_matrix = self.holding_matrix 517 | else: 518 | holding_matrix = np.ones(price_matrix.shape,dtype=bool) 519 | 520 | # At maturity 521 | holding_filter_k = holding_matrix[:, n_steps] 522 | payoff = matrix(payoff_fun(price_matrix[holding_filter_k,n_steps])) 523 | vk = payoff * df 524 | Sk = price_matrix[holding_filter_k,n_steps] 525 | # print("regular MC price",regular_mc_price) 526 | 527 | # k = n_steps-1,...,1 528 | for k in range(n_steps - 1, 0, -1): 529 | 530 | holding_filter_kp1 = holding_filter_k 531 | holding_filter_k = holding_matrix[:, k] 532 | Skp1 = price_matrix[holding_filter_kp1, k+1] 533 | Sk = price_matrix[holding_filter_kp1, k] 534 | Qk = matrix(_calculate_Q_matrix(Sk, Skp1, df, df2, func_list)) 535 | P = Qk.T * Qk 536 | q = Qk.T * vk 537 | A = matrix(np.ones(holding_filter_kp1.sum(), dtype=np.float64)).T * Qk 538 | b = - matrix(np.ones(holding_filter_kp1.sum(), dtype=np.float64)).T * vk 539 | # print(Sk) 540 | # print(Skp1) 541 | 542 | sol = solvers.coneqp(P=P, q=q, A=A, b=b) 543 | ak = sol["x"][:n_basis] 544 | bk = sol["x"][n_basis:] 545 | vk = matrix(np.array([func(price_matrix[holding_filter_k, k]) for func in func_list])).T * ak * df 546 | # break 547 | 548 | # k = 0 549 | v0 = vk 550 | holding_filter_1 = holding_filter_k 551 | holding_filter_0 = holding_matrix[:, 0] 552 | S0 = price_matrix[holding_filter_1, 0] 553 | S1 = price_matrix[holding_filter_1, 1] 554 | dS0 = df2 * S1 - S0 555 | Q0 = np.concatenate((-np.ones(holding_filter_1.sum())[:, np.newaxis], dS0[:, np.newaxis]), axis=1) 556 | Q0 = matrix(Q0) 557 | P = Q0.T * Q0 558 | q = Q0.T * v0 559 | A = matrix(np.ones(holding_filter_1.sum(), dtype=np.float64)).T * Q0 560 | b = - matrix(np.ones(holding_filter_1.sum(), dtype=np.float64)).T * v0 561 | C1 = matrix(ak).T * np.array([func(S1) for func in func_list]).T 562 | sol = solvers.coneqp(P=P, q=q, A=A, b=b) 563 | self.sol = sol 564 | residual_risk = (v0.T * v0 + 2 * sol["primal objective"]) / holding_filter_1.sum() 565 | self.residual_risk = residual_risk[0] # the value of unit matrix 566 | 567 | return sol["x"][0] 568 | 569 | def standard_error(self): 570 | # can not apply to the OHMC since its result is not obtained by averaging 571 | # sample variance 572 | sample_var = np.var(self.value_results, ddof=1) 573 | std_estimate = np.sqrt(sample_var) 574 | standard_err = std_estimate / np.sqrt(self.n_trials) 575 | return standard_err 576 | 577 | def pricing(self, option_type='c', func_list=[lambda x: x ** 0, lambda x: x]): 578 | OHMC_price = self.OHMCPricer(option_type=option_type, func_list=func_list) 579 | regular_mc_price = self.MCPricer(option_type=option_type) 580 | black_sholes_price = self.BlackScholesPricer(option_type) 581 | return ({"OHMC": OHMC_price, "regular MC": regular_mc_price, "Black-Scholes": black_sholes_price}) 582 | 583 | def hedging(self): 584 | S = self.S0 585 | K = self.K 586 | T = self.T 587 | r = self.r 588 | q = self.q 589 | sigma = self.sigma 590 | d1 = (np.log(S / K) + (r - q) * T + 0.5 * sigma ** 2 * T) / (sigma * np.sqrt(T)) 591 | d2 = d1 - sigma * np.sqrt(T) 592 | N = lambda x: sp.stats.norm.cdf(x) 593 | return ({"OHMC 
optimal hedge": -self.sol["x"][1], "Black-Scholes delta hedge": N(d1), 594 | "OHMC residual risk": self.residual_risk}) 595 | -------------------------------------------------------------------------------- /bak/OHLSMC-Heston-American.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Optimal Hedged Least Square Monte Carlo Simulation\n", 8 | "\n", 9 | "Author: Jerry Xia\n", 10 | "\n", 11 | "Date: 2018/08/27" 12 | ] 13 | }, 14 | { 15 | "cell_type": "markdown", 16 | "metadata": {}, 17 | "source": [ 18 | "## 1. Introduction\n", 19 | "\n", 20 | "This is a Python Notebook about an inovative method of the Monte Carlo simulation, Optimal Hedged Least Square Monte Carlo (OHLSMC) in order to price and hedge American type options. In this script, I implemented the following variance reduction methods as well as their antithetic variates' version:\n", 21 | "\n", 22 | "* regular Monte Carlo\n", 23 | "* least square Monte Carlo\n", 24 | "* Monte Carlo with delta-based control variates\n", 25 | "* optimal hedged Monte Carlo\n", 26 | "* optimal hedged least square Monte Carlo\n", 27 | "\n", 28 | "Due to the significant efficience and robustness, I mainly focus on the optimal hedged least square Monte Carlo (OHLSMC) in option pricing. We invoke this method to price American options and make comparison with the original least square monte carlo (LSM).\n", 29 | "\n", 30 | "### 1.1 Facts\n", 31 | "* The option price is not simply the average value of the discounted future pay-off over the objective (or historical) probability distribution\n", 32 | "* The requirement of absence of arbitrage opportunities is equivalent to the existence of \"risk-neutral measure\", such that the price is indeed its average discounted future pay-off.\n", 33 | "* Risk in option trading cannot be eliminated\n", 34 | "\n", 35 | "### 1.2 Objective\n", 36 | "* It would be satisfactory to have an option theory where the objective stochastic process of the underlying is used to calculate the option price, the hedge strategy and the *residual risk*.\n", 37 | "\n", 38 | "### 1.3 Advantages\n", 39 | "* It is a versatile methods to price complicated path-dependent options.\n", 40 | "* Considerable variance reduction scheme for Monte Carlo\n", 41 | "* It provide not only a numerical estimate of the option price, but also of the optimal hedge strategy and of the residual risk.\n", 42 | "* This method does not rely on the notion of risk-neutral measure, and can be used to any model of the true dynamics of the underlying" 43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "metadata": {}, 48 | "source": [ 49 | "## 2 Underlying dynamics\n", 50 | "\n", 51 | "### Black-Scholes Model\n", 52 | "$$dS = \\mu S dt + \\sigma S dW_t$$\n", 53 | "$$log S_{t+1} = log S_t +(\\mu - \\frac{\\sigma^2}{2})\\Delta t + \\sigma \\sqrt{\\Delta t} \\epsilon$$\n", 54 | "where\n", 55 | "$$\\epsilon \\sim N(0,1)$$\n", 56 | "In risk neutral measure, $\\mu = r - q$. \n", 57 | "### Heston Model\n", 58 | "The basic Heston model assumes that $S_t$, the price of the asset, is determined by a stochastic process:\n", 59 | "$$\n", 60 | "dS_t = \\mu S_t dt + \\sqrt{v_t} S_t d W_t^S\\\\\n", 61 | "dv_t = \\kappa (\\theta - v_t) dt + \\xi \\sqrt{v_t} d W_t^v\n", 62 | "$$\n", 63 | "where \n", 64 | "$$E[dW_t^S,dW_t^v]=\\rho dt$$\n", 65 | "In risk neutral measure, $\\mu = r - q$. 
" 66 | ] 67 | }, 68 | { 69 | "cell_type": "markdown", 70 | "metadata": {}, 71 | "source": [ 72 | "## 3 Methodology\n", 73 | "\n", 74 | "### 3.1 Simbol Definition\n", 75 | "Option price always requires to work backward. That is because the option price is known exactly at the maturity. As with other schemes, we determine the option price step by step from the maturity $t=K\\tau=T$ to the present time $t=0$. The unit of time being $\\tau$, for example, one day. We simulate $N$ trajectories. In trajectory i, the price of the underlying asset at time $k\\tau$ is denoted as $S_k^{(i)}$. The price of the derivative at time $k\\tau$ is denoted as $C_k$, and the hedge function is $H_k$. We define an optimal hedged portfolio as\n", 76 | "$$W_k^{(i)} = C_k(S_k^{(i)}) + H_k(S_k^{(i)})S_k^{(i)}$$\n", 77 | "The one-step change of our portfolio is\n", 78 | "$$\\Delta W_k^{(i)}= df(k,k+1) C_{k+1}(S_{k+1}^{(i)}) - C_k(S_k^{(i)}) + H_k(S_{k}^{(i)}) (df2(k,k+1) S_{k+1}^{(i)} - S_{k}^{(i)})$$\n", 79 | "Where $df(k,k+1)$ is the discounted factor from time $k\\tau$ to $(k+1) \\tau$, $df2(k,k+1)$ is the discounted factor considering dividend $e^{-(r-q)(t_{k+1}-t_k)}$\n", 80 | "\n", 81 | "### 3.2 Objective\n", 82 | "The optimal hedged algorithm can be interpreted as the following optimal problem\n", 83 | "\n", 84 | "\\begin{align}\n", 85 | "\\mbox{minimize}\\quad & \\quad Var[\\Delta W_k]\\\\\n", 86 | "\\mbox{subject to}\\quad & \\quad E[\\Delta W_k]=0\n", 87 | "\\end{align}\n", 88 | "\n", 89 | "It means we should try to minimize the realized volatility of hedged portfolio while maintaining the expected value of portfolio unchanged.\n", 90 | "\n", 91 | "### 3.3 Basis Functions\n", 92 | "The original optimization is very difficult to solve. Thus we assume a set of basis function and solved it in such subspace. We use $N_C$and $N_H$ to denote the number of basis functions for price and hedge.\n", 93 | "\n", 94 | "\\begin{align}\n", 95 | "C_k(\\cdot) &= \\sum_{i=0}^{N_C} a_{k,i} A_i(\\cdot)\\\\\n", 96 | "H_k(\\cdot) &= \\sum_{i=0}^{N_H} b_{k,i} B_i(\\cdot)\n", 97 | "\\end{align}\n", 98 | "\n", 99 | "The basis functions $A_i$ and $B_i$ are priori determined and need not to be identical. 
The coefficients $a_i$ and $b_i$ can be calibrated by solving the optimal problem.\n", 100 | "\n", 101 | "### 3.4 Numerical Solution\n", 102 | "\n", 103 | "\\begin{align}\n", 104 | "\\mbox{minimize}\\quad & \\quad \\frac{1}{N} \\sum_{i=1}^N \\Delta W_k^{(i)2}\\\\\n", 105 | "\\mbox{subject to}\\quad & \\quad \\frac{1}{N} \\sum_{i=1}^N \\Delta W_k^{(i)}=0\n", 106 | "\\end{align}\n", 107 | "\n", 108 | "Denote the discounted forward underlying price change at time $k\\tau$ as\n", 109 | "\n", 110 | "$$\\Delta S_k = df2(k,k+1) S_{k+1} - S_k$$\n", 111 | "\n", 112 | "Define\n", 113 | "\n", 114 | "\\begin{align}\n", 115 | "Q_k &= \\begin{bmatrix}\n", 116 | " -A_{k,1}(S_k^{(1)}) & \\cdots & -A_{k,N_C}(S_k^{(1)}) & B_{k,1}(S_k^{(1)})\\Delta S_k^{(1)}& \\cdots & B_{k,N_H}(S_k^{(1)})\\Delta S_k^{(1)} \\\\\n", 117 | " -A_{k,1}(S_k^{(2)}) & \\cdots & -A_{k,N_C}(S_k^{(2)}) & B_{k,1}(S_k^{(2)})\\Delta S_k^{(2)}& \\cdots & B_{k,N_H}(S_k^{(1)})\\Delta S_k^{(2)} \\\\\n", 118 | " \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots\\\\\n", 119 | " -A_{k,1}(S_k^{(N)}) & \\cdots & -A_{k,N_C}(S_k^{(N)}) & B_{k,1}(S_k^{(N)})\\Delta S_k^{(N)}& \\cdots & B_{k,N_H}(S_k^{(N)})\\Delta S_k^{(N)}\n", 120 | " \\end{bmatrix}\\\\\\\\\n", 121 | "c_k &= (a_{k,1}, \\cdots a_{k,N_C}, b_{k,1}, \\cdots, b_{k,N_H})^T\\\\\\\\\n", 122 | "v_{k} &= df(k,k+1) C_{k+1}(S_{k+1}^{})\n", 123 | "\\end{align}\n", 124 | "\n", 125 | "As for $v_k$, note that we know the exact value at maturity, which means there is no need to approximate price in terms of basis functions, that is\n", 126 | "\n", 127 | "\\begin{align}\n", 128 | "v_k = \\begin{cases}\n", 129 | "df(N-1,N)\\ payoff(S_N),\\quad & k=N-1\\\\\n", 130 | "df(k,k+1)\\ \\sum_{i=1}^{N_C} a_{k+1,i} A_i(S_{k+1}), \\quad & 0\n", 496 | "\n", 509 | "\n", 510 | " \n", 511 | " \n", 512 | " \n", 513 | " \n", 514 | " \n", 515 | " \n", 516 | " \n", 517 | " \n", 518 | " \n", 519 | " \n", 520 | " \n", 521 | " \n", 522 | " \n", 523 | " \n", 524 | " \n", 525 | " \n", 526 | " \n", 527 | " \n", 528 | " \n", 529 | " \n", 530 | " \n", 531 | " \n", 532 | " \n", 533 | " \n", 534 | " \n", 535 | " \n", 536 | " \n", 537 | " \n", 538 | " \n", 539 | " \n", 540 | " \n", 541 | " \n", 542 | " \n", 543 | " \n", 544 | " \n", 545 | " \n", 546 | " \n", 547 | " \n", 548 | " \n", 549 | " \n", 550 | " \n", 551 | " \n", 552 | " \n", 553 | " \n", 554 | " \n", 555 | " \n", 556 | " \n", 557 | " \n", 558 | " \n", 559 | " \n", 560 | " \n", 561 | " \n", 562 | " \n", 563 | " \n", 564 | " \n", 565 | " \n", 566 | " \n", 567 | " \n", 568 | " \n", 569 | " \n", 570 | "
\n", 571 | "" 572 | ], 573 | "text/plain": [ 574 | " err method n_steps n_trails runtime\n", 575 | "0 0.000000 Black Scholes 200 5000 0.003281\n", 576 | "1 0.399365 MC 200 5000 0.146768\n", 577 | "2 0.138306 OHMC 200 5000 0.303353\n", 578 | "3 0.000000 Black Scholes 200 100 0.000299\n", 579 | "4 2.151938 MC 200 100 0.001599\n", 580 | "5 0.542016 OHMC 200 100 0.042877" 581 | ] 582 | }, 583 | "execution_count": 46, 584 | "metadata": {}, 585 | "output_type": "execute_result" 586 | } 587 | ], 588 | "source": [ 589 | "import pandas as pd\n", 590 | "df1 = pd.DataFrame(efficiency_comparison(5000,200))\n", 591 | "df2 = pd.DataFrame(efficiency_comparison(100,200))\n", 592 | "pd.concat((df1,df2),axis=0,ignore_index=True)" 593 | ] 594 | }, 595 | { 596 | "cell_type": "code", 597 | "execution_count": 53, 598 | "metadata": {}, 599 | "outputs": [ 600 | { 601 | "data": { 602 | "text/plain": [ 603 | "array([ 100, 600, 1100, 1600, 2100, 2600, 3100, 3600, 4100, 4600, 5100,\n", 604 | " 5600, 6100, 6600, 7100, 7600, 8100, 8600, 9100, 9600])" 605 | ] 606 | }, 607 | "execution_count": 53, 608 | "metadata": {}, 609 | "output_type": "execute_result" 610 | } 611 | ], 612 | "source": [ 613 | "np.arange(100,10000,500)" 614 | ] 615 | }, 616 | { 617 | "cell_type": "code", 618 | "execution_count": 69, 619 | "metadata": {}, 620 | "outputs": [], 621 | "source": [ 622 | "n_trails_list = np.arange(100,10000,500)\n", 623 | "efficiency_df = pd.DataFrame()\n", 624 | "for n_trails in n_trails_list:\n", 625 | " new_df = pd.DataFrame(efficiency_comparison(n_trails,n_steps))\n", 626 | " efficiency_df = pd.concat((efficiency_df,new_df))" 627 | ] 628 | }, 629 | { 630 | "cell_type": "code", 631 | "execution_count": 70, 632 | "metadata": {}, 633 | "outputs": [ 634 | { 635 | "data": { 636 | "text/html": [ 637 | "
\n", 638 | "\n", 651 | "\n", 652 | " \n", 653 | " \n", 654 | " \n", 655 | " \n", 656 | " \n", 657 | " \n", 658 | " \n", 659 | " \n", 660 | " \n", 661 | " \n", 662 | " \n", 663 | " \n", 664 | " \n", 665 | " \n", 666 | " \n", 667 | " \n", 668 | " \n", 669 | " \n", 670 | " \n", 671 | " \n", 672 | " \n", 673 | " \n", 674 | " \n", 675 | " \n", 676 | " \n", 677 | " \n", 678 | " \n", 679 | " \n", 680 | " \n", 681 | " \n", 682 | " \n", 683 | " \n", 684 | " \n", 685 | " \n", 686 | " \n", 687 | " \n", 688 | " \n", 689 | " \n", 690 | " \n", 691 | " \n", 692 | " \n", 693 | " \n", 694 | " \n", 695 | " \n", 696 | " \n", 697 | " \n", 698 | " \n", 699 | " \n", 700 | " \n", 701 | " \n", 702 | " \n", 703 | " \n", 704 | "
\n", 705 | "
" 706 | ], 707 | "text/plain": [ 708 | " err method n_steps n_trails runtime\n", 709 | "0 0.000000 Black Scholes 200 100 0.000560\n", 710 | "1 1.505011 MC 200 100 0.003374\n", 711 | "2 0.617521 OHMC 200 100 0.060418\n", 712 | "0 0.000000 Black Scholes 200 600 0.000310\n", 713 | "1 1.338692 MC 200 600 0.015934" 714 | ] 715 | }, 716 | "execution_count": 70, 717 | "metadata": {}, 718 | "output_type": "execute_result" 719 | } 720 | ], 721 | "source": [ 722 | "efficiency_df.head()" 723 | ] 724 | }, 725 | { 726 | "cell_type": "markdown", 727 | "metadata": {}, 728 | "source": [ 729 | "### 4.1 Variance Reduction Test" 730 | ] 731 | }, 732 | { 733 | "cell_type": "code", 734 | "execution_count": 75, 735 | "metadata": {}, 736 | "outputs": [ 737 | { 738 | "data": { 739 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYwAAAELCAYAAADKjLEqAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAH1xJREFUeJzt3X+UXGWd5/H3Z0I73fwwraH1kARI0CYYQoTQIiyywxBZ\nAsgSEGMAhwlHFxAYXAaRRM8g67iKgysOAmaRYYBdJEQMIfJjwggoLhwcOgmGBEiI/Eo3jLTBBIHO\nEOJ3/6jqptJ00rf79u1bt+rzOqdPVz331r3fp6qTbz0/7nMVEZiZmQ3kz/IOwMzMisEJw8zMEnHC\nMDOzRJwwzMwsEScMMzNLxAnDzMwSccIwM7NEnDDMzCwRJwwzM0tkp7wDGKzdd989JkyYkHcYZmaF\nsmzZst9HREuaYxQuYUyYMIH29va8wzAzKxRJL6Q9hrukzMwsEScMMzNLxAnDzMwSKdwYhpkV05Yt\nW+jo6GDz5s15h1LTGhsbGT9+PA0NDcN+bCcMMxsRHR0d7LbbbkyYMAFJeYdTkyKCDRs20NHRwcSJ\nE4f9+O6SMrMRsXnzZsaMGeNkkSFJjBkzJrNWnBOGmY0YJ4vsZfke112X1OIVnVyxdA0vbexmbHMT\nFx8ziZkHjcs7LDOzqldXLYzFKzqZt+gJOjd2E0Dnxm7mLXqCxSs68w7NzApu1113TfX6I488kr32\n2ouI6C2bOXPmNsddu3Ytxx13HK2trUybNo1Zs2bxu9/9LtV5B6OuEsYVS9fQvWXrNmXdW7ZyxdI1\nOUVkZnmICP70pz9V3fmbm5t5+OGHAdi4cSMvv/xy77bNmzdz/PHH88UvfpFnnnmG5cuXc+6559LV\n1TVicddVwnhpY/egys0sP4tXdHL45Q8wce7dHH75A6l7Ap5//nkmTZrEGWecwZQpU1i/fj333Xcf\nhx12GNOmTeMzn/kMr7/+OgD33HMP++23HwcffDAXXHABn/rUpwC47LLL+O53v9t7zClTpvD8889v\nc57XX3+d6dOnM23aNA444ADuvPPO7Z6/r9mzZ7NgwQIAFi1axMknn9y77cc//jGHHXYYJ5xwQm/Z\nkUceyZQpU1K9L4NRVwljbHPToMrNLB9ZdR8/88wznHvuuaxevZpddtmFb37zm/z85z9n+fLltLW1\n8b3vfY/Nmzdz9tlnc++997Js2bJBf4NvbGzkjjvuYPny5Tz44INcdNFFvd1Mleffe++93/Xa6dOn\n89BDD7F161YWLFjAZz/72d5tq1at4uCDD05V/7TqKmFcfMwkmhpGbVPW1DCKi4+ZlFNEZtafrLqP\n9957bw499FAAHn30UZ588kkOP/xwDjzwQG666SZeeOEFnn76afbZZ5/e6xhOPfXUQZ0jIvjqV7/K\n1KlT+eQnP0lnZ2fvOEPl+fszatQoPvGJT7BgwQK6u7uptpW562qWVM9sKM+SMqtuWXUf77LLLr2P\nI4Kjjz6aW2+9dZt9Hn/88e2+fqeddtpm7KG/6x1uueUWurq6WLZsGQ0NDUyYMKF3v8rzb8/s2bM5\n6aSTuOyyy7Yp33///fnlL3854OuzVFctDCgljYfnHsVzlx/Pw3OPcrIwq0Ij0X186KGH8vDDD7Nu\n3ToA3njjDdauXcukSZN49tlne8cmbrvttt7XTJgwgeXLlwOwfPlynnvuuXcdd9OmTXzgAx+goaGB\nBx98kBdeGNyq4kcccQTz5s17V8vmtNNO45FHHuHuu+/uLXvooYdYtWrVoI6fRt0lDDOrfiPRfdzS\n0sKNN97IqaeeytSpUznssMN4+umnaWpq4tprr2XGjBkcfPDB7LbbbowePRqAT3/607z66qvsv//+\nXH311ey7777vOu7pp59Oe3s7BxxwADfffDP77bffoOKSxJe//GV23333bcqbmpq46667+MEPfkBr\nayuTJ0/m2muvpaUl1T2RBhdb5ZzfImhrawvfQMmseJ566ik+8pGPJN4/z4tsX3/9dXbddVcigvPO\nO4/W1lYuvPDCETn3cOjvvZa0LCLa0hy3rsYwzKw4Zh40Lrcu4x/96EfcdNNNvPXWWxx00EGcffbZ\nucRRbZwwzMz6uPDCCwvVohgpHsMwM7NEMksYkm6Q9IqkHQ7hS/qYpLclnZJVLGZmll6WLYwbgRk7\n2kHSKOA7wH0ZxmFmZsMgs4QREQ8Brw6w298APwVeySoOMzMbHrmNYUgaB5wE/DCvGMys/nR0dHDi\niSfS2trKhz70Ib70pS/x1ltv8Ytf/KJ3kcEec+bM4fbbbweKsfx41vIc9P4+cElEDLjGsKSzJLVL\nah/JpXzNrLZEBCeffDIzZ87kmWeeYe3atbz++ut87WtfS/T6al9+PGt5Jow2YIGk54FTgGslzexv\nx4i4LiLaIqJtJK9qNLMcrVwIV06By5pLv1cuTH3IBx54gMbGRs4880ygtNjflVdeyQ033MCbb745\n4OurffnxrOWWMCJiYkRMiIgJwO3AuRGxOK94zKyKrFwIP7sANq0HovT7ZxekThqrV69+1xLh733v\ne9lrr71Yt24dv/rVrzjwwAN7f5YsWbLNvtW+/HjWMrtwT9KtwJHA7pI6gK8DDQARMT+r85pZDbj/\nG7Clz8q0W7pL5VNnZXbaI444grvuuqv3+Zw5c7bZXu3Lj2cts4QREYkXkY+IOVnFMdzyXN/GrG5s\n6hhceUKTJ0/uHcTu8dprr/Hiiy/y4Q9/mPvuG3iGf
zUvP541X+k9CFndBczM+hg9fnDlCU2fPp03\n33yTm2++GYCtW7dy0UUXMWfOHHbeeedEx6jm5cez5oQxCFndBczM+ph+KTT0ufdFQ1OpPAVJ3HHH\nHfzkJz+htbWVfffdl8bGRr71rW8N6hjVuvx41ry8+SBMnHs3/b1bAp67/PiRDsesUAa7vDkrF5bG\nLDZ1lFoW0y/NdPyilnh58yowtrmJzn5uETmcdwEzs7Kps5wgqoy7pAZhJO4CZmZWrdzCGISe2VCe\nJWU2NBGBpLzDqGlZDjM4YQxSnncBMyuyxsZGNmzYwJgxY5w0MhIRbNiwgcbGxkyO74RhZiNi/Pjx\ndHR01NTaStWosbGR8ePTTT/eHicMMxsRDQ0NTJw4Me8wLAUPepuZWSJOGGZmlogThpmZJeKEYWZm\niThhmJlZIk4YZmaWiBOGmZkl4oRhZmaJZJYwJN0g6RVJ/d49RNLpklZKekLSI5I+mlUsZmaWXpYt\njBuBGTvY/hzwFxFxAPD3wHUZxmJmZilleU/vhyRN2MH2RyqePgpks/iJmZkNi2oZw/g8cO/2Nko6\nS1K7pHYvXGZmlo/cE4akv6SUMC7Z3j4RcV1EtEVEWy3dH9fMrEhyXa1W0lTgeuDYiNiQZyxmZrZj\nubUwJO0FLAL+KiLW5hWHmZklk1kLQ9KtwJHA7pI6gK8DDQARMR+4FBgDXFu++9bbEdGWVTxmZpZO\nlrOkTh1g+xeAL2R1fjMzG16+494IW7yikyuWruGljd2MbW7i4mMm+R7hZlYIThgjaPGKTuYteoLu\nLVsB6NzYzbxFTwA4aZhZ1ct9Wm09uWLpmt5k0aN7y1auWLomp4jMzJJzwhhBL23sHlS5mVk1ccIY\nQWObmwZVbmZWTZwwRtDFx0yiqWHUNmVNDaO4+JhJOUVkZpacB71HUM/AtmdJmVkROWGMsJkHjXOC\nMLNCcpeUmZkl4oRhZmaJOGGYmVkiThhmZpaIE4aZmSXihGFmZok4YZiZWSJOGGZmlkhmCUPSDZJe\nkbRqO9sl6SpJ6yStlDQtq1jMzCy9LFsYNwIzdrD9WKC1/HMW8MMMYzEzs5QySxgR8RDw6g52ORG4\nOUoeBZol7ZFVPGZmlk6eYxjjgPUVzzvKZWZmVoUKMegt6SxJ7ZLau7q68g7HzKwu5ZkwOoE9K56P\nL5e9S0RcFxFtEdHW0tIyIsGZmdm28kwYS4AzyrOlDgU2RcTLOcZjZmY7kNn9MCTdChwJ7C6pA/g6\n0AAQEfOBe4DjgHXAm8CZWcViZmbpZZYwIuLUAbYHcF5W5zczs+FViEFvMzPLnxOGmZkl4oRhZmaJ\nOGGYmVkiThhmZpaIE4aZmSXihGFmZok4YZiZWSJOGGZmlogThpmZJeKEYWZmiThhmJlZIk4YZmaW\niBOGmZkl4oRhZmaJOGGYmVkiThhmZpZIpglD0gxJayStkzS3n+2jJf1M0m8krZbk27SamVWpzBKG\npFHANcCxwGTgVEmT++x2HvBkRHyU0v2//5ek92QVk5mZDV2WLYxDgHUR8WxEvAUsAE7ss08Au0kS\nsCvwKvB2hjGZmdkQZZkwxgHrK553lMsqXQ18BHgJeAL4UkT8KcOYzMxsiPIe9D4GeBwYCxwIXC3p\nvX13knSWpHZJ7V1dXSMdo5mZkSBhSPozSbOGcOxOYM+K5+PLZZXOBBZFyTrgOWC/vgeKiOsioi0i\n2lpaWoYQipmZpTVgwih3EX1lCMd+DGiVNLE8kD0bWNJnnxeB6QCSPghMAp4dwrnMzCxjOyXc7+eS\nvgzcBrzRUxgRr27vBRHxtqTzgaXAKOCGiFgt6Zzy9vnA3wM3SnoCEHBJRPx+aFUxM7MsKSIG3kl6\nrp/iiIh9hj+kHWtra4v29vaRPq2ZWaFJWhYRbWmOMWALQ9KfAZ+LiIfTnMjMzIot6RjG1SMQi5mZ\nVbGk02rvl/Tp8gV2ZmZWh5ImjLOBhcB/SHpN0h8lvZZhXGZmVmWSzpIaDZwOTIyIb0jaC9gju7DM\nzKzaJG1hXAMcCpxafv5HPK5hZlZXkrYwPh4R0yStAIiIP3hVWTOz+pK0hbGlvFx5AEhqAbxIoJlZ\nHUmaMK4C7gA+IOl/Av8P+FZmUZmZWdVJ1CUVEbdIWkZp3ScBMyPiqUwjMzOzqpJ0DIOIeBp4OsNY\nzMysiuV9PwwzMysIJwwzM0vECcPMzBJxwjAzs0TqL2GsXAhXToHLmku/Vy7MOyIzs0JIPEuqJqxc\nCD+7ALZ0l55vWl96DjB1KLctNzOrH5m2MCTNkLRG0jpJc7ezz5GSHpe0WtIvs4yH+7/xTrLosaW7\nVG5mZjuUWQujvJTINcDRQAfwmKQlEfFkxT7NwLXAjIh4UdIHsooHgE0dgys3M7NeWbYwDgHWRcSz\nEfEWsAA4sc8+pwGLIuJFgIh4JcN4YPT4wZWbmVmvLBPGOGB9xfOOclmlfYH3SfqFpGWSzujvQJLO\nktQuqb2rq2voEU2/FBqati1raCqVm5nZDuU9S2on4GDgeOAY4O8k7dt3p4i4LiLaIqKtpaVl6Geb\nOgtOuApG7wmo9PuEqzzgbWaWQJazpDqBPSuejy+XVeoANkTEG8Abkh4CPgqszSyqqbOcIMzMhiDL\nFsZjQKukieWbLc0GlvTZ507gE5J2krQz8HHAq+CamVWhzFoYEfG2pPOBpcAo4IaIWC3pnPL2+RHx\nlKR/AVZSuiHT9RGxKquYzMxs6BQReccwKG1tbdHe3p53GFYwi1d0csXSNby0sZuxzU1cfMwkZh7U\ndw6GWe2StCwi2tIco76u9La6tHhFJ/MWPUH3lq0AdG7sZt6iJwCcNMwGIe9ZUmaZu2Lpmt5k0aN7\ny1auWLomp4jMiskJw2reSxu7B1VuZv1zwrCaN7a5aVDlZtY/JwyreRcfM4mmhlHblDU1jOLiYybl\nFJFZMXnQ22pez8C2Z0mZpeOEYXVh5kHjnCDMUnLCMEvA13GYOWGYDSjtdRxONlYrPOhtNoA013H0\nJJvOjd0E7ySbxSv6rsNpVv2cMMwGkOY6Dl80aLXECcNsAGmu4/BFg1ZLnDDMBpDmOg5fNGi1xAnD\nbAAzDxrHt08+gHHNTQgY19zEt08+INHAtS8atFriWVJmCQz1Og5fNGi1xAnDLGO+aNBqRaZdUpJm\nSFojaZ2kuTvY72OS3pZ0SpbxFN3iFZ0cfvkDTJx7N4df/oCnZprZiMqshSFpFHANcDTQATwmaUlE\nPNnPft8B7ssqllrgmwCZWd6ybGEcAqyLiGcj4i1gAXBiP/v9DfBT4JUMYyk8z+c3s7xlmTDGAesr\nnneUy3pJGgecBPwwwzhqgufzm1ne8p5W+33gkoj40452knSWpHZJ7V1dXSMUWnXxfH4zy1uWCaMT\n2LPi+fhyWaU2YIGk54FTgGslzex7oIi4LiLaIqKtpaUlq3irmufzm1nespxW+xjQKmkipUQxGzit\ncoeImNjz
WNKNwF0RsTjDmPK3ciHc/w3Y1AGjx8P0S2HqrAFf5vn8ZsWSdpXialzlOLOEERFvSzof\nWAqMAm6IiNWSzilvn5/VuavWyoXwswtgS3ncYdP60nNInDTy/oMxs4ENx5L41TgrUhGR28mHoq2t\nLdrb2/MOY2iunFJKEn2N3hMuXDXy8ZhZJg6//AE6+5mQMq65iYfnHpX56/sjaVlEtA3pxWV5D3rX\nl00dgyuvNSsXlpLmZc2l3ysX5h2RWSbSzmqs1lmRThgjafT4wZXXkp7uuE3rgXinO85Jw2pQ2lmN\n1Tor0gljJE2/FBr6fOANTaXyWnf/N94Zu+mxpbtUblZj0s5qrNZZkV58cCT1DGwPYZZU4dV7d5zV\nlbSzGqt1VqQHvW1kpBzwr8YphmZF4kFvK44U3XE9Uww7N3YTvDPF0Kv1mo0sJwwbGVNnwQlXlVoU\nqPT7hKsSdcd54UWz6uAxjHoyxKvMh83UWUM6X7VOMbTsuSuyujhh1IuUV5nnaWxzU78XMeU9xdCy\nVa1XO9czd0nViwJPa63WKYaWLXdFVh+3MOpFgae1VusUw6IoareOuyKrjxNGvRg9fjvTWotxlbkX\nXhyaInfrDEdXZFGTZbVyl1S9qOerzOtYkbt10nZFejr28HPCqBcpprVacRW5W2fmQeP49skHMK65\nCVFaqfXbJx+QuIVQ5GRZrdwlVU+GOK3ViqvoM8zSdEUWOVlWK7cwzGpYPc8wq9YVX4vMCcMSW7yi\nk8Mvf4CJc+/m8MsfcF9wAaTt1imyek6WWcm0S0rSDOAfKd2i9fqIuLzP9tOBSwABfwS+GBG/yTIm\nG5oiz7apd/U6w8zTsYdfZglD0ijgGuBooAN4TNKSiHiyYrfngL+IiD9IOha4Dvh4VjHZ0O1oANH/\nAK1a1WuyzEqWXVKHAOsi4tmIeAtYAJxYuUNEPBIRfyg/fRQoxkUBdcgDiGaWZZfUOKDySrEOdtx6\n+Dxwb4bxWApFn21jQ+eL36xHVQx6S/pLSgnjku1sP0tSu6T2rq6ukQ3OgCoYQFy5sHQTpsuaS799\nL/ARkfbiN0+UqC1ZJoxOYM+K5+PLZduQNBW4HjgxIjb0d6CIuC4i2iKiraWlJZNgbcdynW3Ts9Lu\npvVAvLPSrpNG5tJc/OYrrWtPll1SjwGtkiZSShSzgdMqd5C0F7AI+KuIWJthLDYMchtA3NFKu74Q\nMVNpxq48UaL2ZJYwIuJtSecDSylNq70hIlZLOqe8fT5wKTAGuFYSwNtp7zlrNajAK+0C+d+4KoU0\nY1eeKFF7Mr0OIyLuAe7pUza/4vEXgC9kGYNVjyEPnhZ5pd0C37gKSmNXldffQPKxq6JPlPBg/7tV\nxaC31b5U/dlFXmm3wDeugnRjV7lPlEjB4y/98+KDNiJS9Wf3fBMvYrdO0bvTGPrYVZGvtPb4S/+c\nMAarwP3ReUrdn13UlXaL3J02DIp6pbXHX/rnLqnB8PTOISv8yqFDvQ6kyN1pdazwf68ZccIYjIL3\nR+epyP3Zqb4o+MZVhVTov9cMuUtqMGqgPzovRe7PTn0dSFG703rUYTdsof9eM+SEMRgF74/Oe5pg\nrv3Zaf7Tq+cvCgWfFpxGUcdfsuQuqcEocH90XU8TTDv2tL0vBAX5opCKu2GtghPGYBS4PzrNmkC9\niroAYNr/9HL+opB6Ab80n1s9t67sXdwlNVgF7Y9OPU2wyF0Taf/Ty/E6kNR3Okz7uaXthq3D8Y9a\n5hZGnUg9TbDIXRPD0aU0dRZcuAou21j6PUL/6aVuGebZuvI09JrjhFEnUk8TLHLXRIHHnlK3DIej\ndTXUbthq+JJR1G7UKuUuqSJJ0bxPPU2wyDPECry0yNjmJg5+7V/5yk4LGavf81Lszj+8PYtl7z06\n2QGG43Mbajds3l8yityNWqWcMIpiGP74U00TnH7ptueHwnxLBwo79vT9yc8wZdn1NOktAMbr93yn\n4XpWTZ4AHDXwAfL83PL+kuH7qAw7d0kVRd7N+wLPECuyj/32B73JokeT3uJjv/1BsgPk+bnl3RWY\ndwunBrmFURTV8Mdf0G/phTYcn3ten9twdAWmmWXlGV7DzgmjKPJu3ls+iv65p0lWabth03THDcf4\nRw0mnEy7pCTNkLRG0jpJc/vZLklXlbevlDQty3gKLe/mveWjnj/3tN2wec7wGo4pxVU4wyuzFoak\nUcA1wNFAB/CYpCUR8WTFbscCreWfjwM/LP+2vgo808dSqOfPPc/uuLTnTjvgXqUzvLLskjoEWBcR\nzwJIWgCcCFQmjBOBmyMigEclNUvaIyJezjCu4vIYQn2q1889z+64tOfOO+FkJMsuqXFA5TveUS4b\n7D5IOktSu6T2rq6uYQ/UzKpQnt1xac+ddnWBapjk0o9CTKuNiOsioi0i2lpaWvIOx8xGQp5TgtOe\nO++Ek5Esu6Q6gT0rno8vlw12HzOrV3l2x6U5d9qxpyq9UDbLhPEY0CppIqUkMBs4rc8+S4Dzy+Mb\nHwc2efzCzGpCngknI5kljIh4W9L5wFJgFHBDRKyWdE55+3zgHuA4YB3wJnBmVvGYmRVKFU52yPTC\nvYi4h1JSqCybX/E4gPOyjMHMzIZHIQa9zcwsf04YZmaWiBOGmZkl4oRhZmaJOGGYmVkiThhmZpaI\nE4aZmSWi0qUQxSGpC3hhiC/fHfj9MIZTJK57fXLd68/26r13RKRajK9wCSMNSe0R0ZZ3HHlw3V33\nelOvdc+y3u6SMjOzRJwwzMwskXpLGNflHUCOXPf65LrXn8zqXVdjGGZmNnT11sIwM7MhqpuEIWmG\npDWS1kmam3c8aUnaU9KDkp6UtFrSl8rl75f0r5KeKf9+X8Vr5pXrv0bSMRXlB0t6orztKknKo06D\nJWmUpBWS7io/r4u6S2qWdLukpyU9Jemweqi7pAvLf+urJN0qqbGW6y3pBkmvSFpVUTZs9ZX055Ju\nK5f/WtKEAYOKiJr/oXQDp98C+wDvAX4DTM47rpR12gOYVn68G7AWmAz8AzC3XD4X+E758eRyvf8c\nmFh+P0aVt/0bcCgg4F7g2Lzrl/A9+Fvgx8Bd5ed1UXfgJuAL5cfvAZprve7AOOA5oKn8fCEwp5br\nDfxnYBqwqqJs2OoLnAvMLz+eDdw2YEx5vykj9MYfBiyteD4PmJd3XMNcxzuBo4E1wB7lsj2ANf3V\nmdKdEA8r7/N0RfmpwP/Ouz4J6jseuB84qiJh1HzdgdHl/zjVp7ym615OGOuB91O68dtdwH+pg3pP\n6JMwhq2+PfuUH+9E6WI/7SieeumS6vlj69FRLqsJ5abkQcCvgQ/GO/dF/3fgg+XH23sPxpUf9y2v\ndt8HvgL8qaKsHuo+EegC/rncHXe9pF2o8bpHRCfwXeBF4GVgU0TcR43Xux/DWd/e10TE28AmYMyO\nTl4vCaNmSdoV+Cnw3yPitcptUfrqUHPT4CR9CnglIpZtb59arTulb
4LTgB9GxEHAG5S6JnrVYt3L\nffUnUkqYY4FdJH2ucp9arPeO5FHfekkYncCeFc/Hl8sKTVIDpWRxS0QsKhf/TtIe5e17AK+Uy7f3\nHnSWH/ctr2aHA/9V0vPAAuAoSf+X+qh7B9AREb8uP7+dUgKp9bp/EnguIroiYguwCPhP1H69+xrO\n+va+RtJOlLo7N+zo5PWSMB4DWiVNlPQeSgM8S3KOKZXyTId/Ap6KiO9VbFoC/HX58V9TGtvoKZ9d\nnhkxEWgF/q3cvH1N0qHlY55R8ZqqFBHzImJ8REyg9Fk+EBGfoz7q/u/AekmTykXTgSep/bq/CBwq\naedyvNOBp6j9evc1nPWtPNYplP4d7bjFkvegzggOHh1HaSbRb4Gv5R3PMNTnE5SaoyuBx8s/x1Hq\ng7wfeAb4OfD+itd8rVz/NVTMDAHagFXlbVczwMBXNf0AR/LOoHdd1B04EGgvf/aLgffVQ92B/wE8\nXY75/1CaEVSz9QZupTRes4VSy/Lzw1lfoBH4CbCO0kyqfQaKyVd6m5lZIvXSJWVmZik5YZiZWSJO\nGGZmlogThpmZJeKEYWZmiThhmJlZIk4YZoMkaY6ksUN43TmSzig/vlHSKcMfnVl2dso7ALMCmkPp\nQqiX+m6QNCoitvb3ooiYn3FcZplyC8OM0oq/5ZsR/ah8k577JDX1s98plK6cvUXS45KaJD0v6TuS\nlgOfkfTfJD0m6TeSfipp5/JrL5P05X6OeblKN8JaKem7mVfWbIicMMze0QpcExH7AxuBT/fdISJu\np7Qsx+kRcWBEdJc3bYiIaRGxAFgUER+LiI9SWu/o89s7oaQxwEnA/hExFfjm8FbJbPg4YZi947mI\neLz8eBmlm9ckdVvF4ymSfiXpCeB0YP8dvG4TsBn4J0knA28O4pxmI8oJw+wd/1HxeCuDG+N7o+Lx\njcD5EXEApQXzGrf3oijduOYQSsuUfwr4l0Gc02xEedDbbPD+SOk+6tuzG/By+X4lp7OD+y2Ub4C1\nc0TcI+lh4NlhjdRsGDlhmA3ejcB8Sd2U7pvc199Rul1uV/n3QMnlTkmNgIC/Hd5QzYaPlzc3M7NE\nPIZhZmaJuEvKbDskXUPp/uGV/jEi/jmPeMzy5i4pMzNLxF1SZmaWiBOGmZkl4oRhZmaJOGGYmVki\nThhmZpbI/wch7vMz8kR/FQAAAABJRU5ErkJggg==\n", 740 | "text/plain": [ 741 | "" 742 | ] 743 | }, 744 | "metadata": {}, 745 | "output_type": "display_data" 746 | } 747 | ], 748 | "source": [ 749 | "import matplotlib.pyplot as plt\n", 750 | "fig,ax = plt.subplots()\n", 751 | "\n", 752 | "mc_trails = efficiency_df[efficiency_df[\"method\"]==\"MC\"][\"n_trails\"]\n", 753 | "mc_err = efficiency_df[efficiency_df[\"method\"]==\"MC\"][\"err\"]\n", 754 | "ax.scatter(mc_trails,mc_err,label=\"regular MC\")\n", 755 | "\n", 756 | "ohmc_trails = efficiency_df[efficiency_df[\"method\"]==\"OHMC\"][\"n_trails\"]\n", 757 | "ohmc_err = efficiency_df[efficiency_df[\"method\"]==\"OHMC\"][\"err\"]\n", 758 | "ax.scatter(ohmc_trails,ohmc_err,label=\"OHMC\")\n", 759 | "ax.legend()\n", 760 | "\n", 761 | "plt.xlabel(\"n_trails\")\n", 762 | "plt.ylabel(\"err\")\n", 763 | "\n", 764 | "plt.show()" 765 | ] 766 | }, 767 | { 768 | "cell_type": "markdown", 769 | "metadata": {}, 770 | "source": [ 771 | "### 4.2 Runtime Efficiency Test" 772 | ] 773 | }, 774 | { 775 | "cell_type": "code", 776 | "execution_count": 71, 777 | "metadata": {}, 778 | "outputs": [ 779 | { 780 | "data": { 781 | "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAYUAAAEKCAYAAAD9xUlFAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAHzdJREFUeJzt3X2QVPW95/H3N+PcmvEhTArGG2XAwWQEEVBgNBLjXgxx\nRY0XfAgB3WvhJqVGjbmWumJSayiTm2vWbMwSJRSxKLSukZCIRBGDN2oki5ddBzA8KOAsKsxoxQkG\nDDqEAb/7R/ccmmYeTj+cPqd7Pq8qaqZP//r098xh+ju/Z3N3REREAD4RdwAiIpIcSgoiIhJQUhAR\nkYCSgoiIBJQUREQkoKQgIiIBJQUREQkoKYiISEBJQUREAsfEHUCuhgwZ4o2NjXGHISJSVtatW/dn\nd6/vr1zZJYXGxkZaWlriDkNEpKyY2dthyqn5SEREAkoKIiISUFIQEZFA2fUpiEh56urqoq2tjf37\n98cdSkWrqamhoaGB6urqvF6vpCAiJdHW1sYJJ5xAY2MjZhZ3OBXJ3dm9ezdtbW2MGDEir3Oo+UhE\nSmL//v0MHjxYCSFCZsbgwYMLqo0pKYhIySghRK/Qn/GAbT5avqGd+1dt4509nZxcV8udF41k+vih\ncYclIhKrAVlTWL6hnbuXbaJ9TycOtO/p5O5lm1i+oT3u0ESkTB1//PEFvX7y5MkMHz4cdw+OTZ8+\n/Yjzbt++nUsuuYSmpiYmTJjAjBkz+NOf/lTQ+2YbkEnh/lXb6Ow6dMSxzq5D3L9qW0wRiUgpuTsf\nf/xx4t6/rq6ONWvWALBnzx7efffd4Ln9+/dz6aWX8o1vfIM33niD9evXc9NNN9HR0VHU2AZkUnhn\nT2dOx0Wk9JZvaOe8+15gxJxnOO++Fwquyb/11luMHDmSa6+9ljFjxrBr1y6ee+45Jk2axIQJE/jK\nV77Cvn37AFi5ciWjRo1i4sSJ3HrrrXz5y18GYO7cufzoRz8KzjlmzBjeeuutI95n3759TJkyhQkT\nJjB27Fh+85vf9Pr+2WbOnMmSJUsAWLZsGVdccUXw3C9+8QsmTZrEZZddFhybPHkyY8aMKejnkm1A\nJoWT62pzOi4ipRVVE+8bb7zBTTfdxJYtWzjuuOP4/ve/z+9+9zvWr19Pc3MzP/7xj9m/fz833HAD\nzz77LOvWrcv5L/GamhqefPJJ1q9fz4svvsjtt98eNAllvv8pp5xy1GunTJnC6tWrOXToEEuWLOGr\nX/1q8NzmzZuZOHFiQdcfxoBMCndeNJLa6qojjtVWV3HnRSNjikhEMkXVxHvKKadw7rnnArB27Vpe\ne+01zjvvPM466yweeeQR3n77bbZu3cqpp54ajPOfNWtWTu/h7nz7299m3LhxfOlLX6K9vT1o9898\n/55UVVXxhS98gSVLltDZ2UkcK0IPyNFH3aOMNPpIJJmiauI97rjjgu/dnQsvvJDHH3/8iDKvvvpq\nr68/5phjjugL6Gk+wGOPPUZHRwfr1q2jurqaxsbGoFzm+/dm5syZXH755cydO/eI42eccQYvvfRS\nv68v1ICsKUAqMayZ80XevO9S1sz5ohKCSIKUoon33HPPZc2aNbS2tgLw4Ycfsn37dkaOHMmOHTuC\nvoJf/vKXwWsaGxtZv349AOvXr+fNN9886rx79+7lxBNPpLq6mhdffJG33w61YnXg/PPP5+677z6q\nhnL11Vfz8ssv88wzzwTHVq9ezebNm3M6f38GbFIQkeQqRRNvfX09ixcvZtasWYwbN45JkyaxdetW\namtrmT9/PlOnTmXixImccMIJDBo0CIArr7yS999/nzPOOIMHH3yQ00477ajzXnPNNbS0tDB27Fge\nffRRRo0alVNcZsYdd9zBkCFDjjheW1vLihUr+OlPf0pTUxOjR49m/vz51Nf3u29Obu+fOSa2HDQ3\nN7s22REpP6+//jqnn3566PJxTjDdt28fxx9/PO7OzTffTFNTE7fddltJ3rsYevpZm9k6d2/u77UD\nsk9BRJJv+vihsTXr/vznP+eRRx7hwIEDjB8/nhtuuCGWOOKgpCAikuW2224rq5pBMalPQUREApEl\nBTNbZGbvmVmfXeNmdraZHTSzq6KKRUREwomyprAYmNpXATOrAn4IPBdhHCIiElJkScHdVwPv91Ps\nm8ATwHtRxSEiIuHF1qdgZkOBy4GfhSh7vZm1mFlLsVcEFJGBpa2tjWnTptHU1MRnPvMZvvWtb3Hg\nwAF+//vfBwvfdZs9eza//vWvgeQsbR21ODuafwLc5e79rl/r7gvdvdndm4s9UUNEBg5354orrmD6\n9Om88cYbbN++nX379vGd73wn1OuTsLR11OJMCs3AEjN7C7gKmG9m02OMR0SSZONSeGAMzK1Lfd24\ntOBTvvDCC9TU1HDdddcBqQXoHnjgARYtWsRHH33U7+uTsLR11GJLCu4+wt0b3b0R+DVwk7svjyse\nEUmQjUvh6Vth7y7AU1+fvrXgxLBly5ajlp/+5Cc/yfDhw2ltbeUPf/gDZ511VvDvqaeeOqJsEpa2\njlpkk9fM7HFgMjDEzNqA7wLVAO6+IKr3FZEK8Py90JW1ImpXZ+r4uBmRve3555/PihUrgsezZ88+\n4vkkLG0dtciSgruHXoTc3WdHFUc+4lxzRUSAvW25HQ9p9OjRQcdxtw8++ICdO3fy2c9+luee6390\nfNxLW0dNM5qzRLXjk4jkYFBDbsdDmjJlCh999BGPPvooAIcOHeL2229n9uzZHHvssaHOEffS1lFT\nUsgS1Y5PIpKDKfdAddbeCdW1qeMFMDOefPJJfvWrX9HU1MRpp51GTU0NP/jBD3I6R5xLW0dNS2dn\nGTHnGXr6iRjw5n2XRva+IpUu16Wz2bg01Yewty1VQ5hyT6T9CZVES2cX0cl1tbT3sOVfMXd8EpEQ\nxs1QEoiBmo+ylGLHJxGRpFJNIUv3KCONPhIpPnfHzOIOo6IV2iWgpNCDOHd8EqlUNTU17N69m8GD\nBysxRMTd2b17NzU1NXmfQ0lBREqioaGBtra2slsLqNzU1NTQ0JD/0F0lBREpierqakaMGBF3GNIP\ndTSLiEhASUFERAJKCiIiElBSEBGRgJKCiIgElBRERCSgpCAiIgElBRERCUSWFMxskZm9Z2Y97jBh\nZteY2UYz22RmL5vZmVHFIiIi4URZU1gMTO3j+TeBf3D3scD3gIURxiIiIiFEuUfzajNr7OP5lzMe\nrgUK22dPREQKlpQ+ha8Bz/b2pJldb2YtZtaixbRERKITe1IwswtIJYW7eivj7gvdvdndm8ttv1MR\nkXIS6yqpZjYOeBi42N13xxmLiIjEWFMws+HAMuCf3H17XHGIiMhhkdUUzOxxYDIwxMzagO8C1QDu\nvgC4BxgMzE/vwnTQ3ZujikdERPoX5eijWf08/3Xg61G9f1SWb2jX/s0iUrG081oOlm9o5+5lm+js\nOgRA+55O7l62CUCJQUQqQuyjj8rJ/au2BQmhW2fXIe5f
tS2miEREiktJIQfv7OnM6biISLlRUsjB\nyXW1OR0XESk3Sgo5uPOikdRWVx1xrLa6ijsvGhlTRCIixaWO5hx0dyZr9JGIVColhRxNHz9USUBE\nKpaaj0REJKCkICIiASUFEREJKCmIiEhASUFERAJKCiIiElBSEBGRgJKCiIgElBRERCQQWVIws0Vm\n9p6Zbe7leTOzeWbWamYbzWxCVLGIiEg4UdYUFgNT+3j+YqAp/e964GcRxiIiIiFElhTcfTXwfh9F\npgGPespaoM7MTooqHhER6V+cfQpDgV0Zj9vSx0REJCZl0dFsZtebWYuZtXR0dMQdjohIxYozKbQD\nwzIeN6SPHcXdF7p7s7s319fXlyQ4EZGBKM6k8BRwbXoU0rnAXnd/N8Z4REQGvMg22TGzx4HJwBAz\nawO+C1QDuPsCYCVwCdAKfARcF1UsIiISTmRJwd1n9fO8AzdH9f4iIpK7suhoFhGR0lBSEBGRgJKC\niIgElBRERCSgpCAiIgElBRERCSgpiIhIQElBREQCSgoiIhJQUhARkYCSgoiIBJQUREQkoKQgIiIB\nJQUREQkoKYiISEBJQUREAkoKIiISiDQpmNlUM9tmZq1mNqeH5weZ2dNm9kcz22Jm2pJTRCRGkSUF\nM6sCHgIuBkYDs8xsdFaxm4HX3P1MUvs5/08z+7uoYhIRkb5FWVM4B2h19x3ufgBYAkzLKuPACWZm\nwPHA+8DBCGMSEZE+RJkUhgK7Mh63pY9lehA4HXgH2AR8y90/jjAmERHpQ9wdzRcBrwInA2cBD5rZ\nJ7MLmdn1ZtZiZi0dHR2ljlFEZMDoNymY2SfMbEYe524HhmU8bkgfy3QdsMxTWoE3gVHZJ3L3he7e\n7O7N9fX1eYQiIiJh9JsU0s05/y2Pc78CNJnZiHTn8UzgqawyO4EpAGb298BIYEce7yUiIkVwTMhy\nvzOzO4BfAh92H3T393t7gbsfNLNbgFVAFbDI3beY2Y3p5xcA3wMWm9kmwIC73P3P+V2KiIgUyty9\n/0Jmb/Zw2N391OKH1Lfm5mZvaWkp9duKiJQ1M1vn7s39leu3pmBmnwD+i7uvKUpkIiKSWGH7FB4s\nQSwiIhKzsENSnzezK9OTzEREpEKFTQo3AEuBv5nZB2b2VzP7IMK4REQkBmFHHw0CrgFGuPu9ZjYc\nOCm6sEREJA5hawoPAecCs9KP/4r6GUREKk7YmsLn3H2CmW0AcPe/aDVTEZHKE7am0JVeCtsBzKwe\n0MJ1IiIVJmxSmAc8CZxoZv8C/G/gB5FFJSIisQjVfOTuj5nZOlLrFBkw3d1fjzQyEREpubB9Crj7\nVmBrhLGIiEjM4t5PQUREEkRJQUREAkoKIiISUFIQEZGAkkKpbVwKD4yBuXWprxuXxh2RiEgg9Ogj\nKYKNS+HpW6GrM/V4767UY4Bx+WyDLSJSXJHWFMxsqpltM7NWM5vTS5nJZvaqmW0xs5eijCd2z997\nOCF06+pMHRcRSYDIagrpZTEeAi4E2oBXzOwpd38to0wdMB+Y6u47zezEqOJJhL1tuR0XESmxKGsK\n5wCt7r7D3Q8AS4BpWWWuBpa5+04Ad38vwnjiN6ght+MiIiUWZVIYCuzKeNyWPpbpNOBTZvZ7M1tn\nZtf2dCIzu97MWsyspaOjI6JwS2DKPVBde+Sx6trUcRGRBIh79NExwETgUuAi4L+b2WnZhdx9obs3\nu3tzfX19qWMsnnEz4LJ5MGgYYKmvl81TJ7OIJEaUo4/agWEZjxvSxzK1Abvd/UPgQzNbDZwJbI8w\nrniNm6EkICKJFWVN4RWgycxGpDfkmQk8lVXmN8AXzOwYMzsW+Byg1VdFRGISWU3B3Q+a2S3AKqAK\nWOTuW8zsxvTzC9z9dTP7LbCR1KY9D7v75qhiEhGRvpm7xx1DTpqbm72lpSXuMAak5RvauX/VNt7Z\n08nJdbXcedFIpo/PHjsgIklkZuvcvbm/cprRLKEs39DO3cs20dl1CID2PZ3cvWwTgBKDSAWJe/SR\nlIn7V20LEkK3zq5D3L9qW0wRiUgUlBQklHf2dOZ0XETKk5KChHJyXW1Ox0WkPCkpSCh3XjSS2uqq\nI47VVldx50UjY4pIRKKgjmYJpbszWaOPRCqbkoKENn38UCUBkQqnpFAhNIdARIpBSaEMZSeAC0bV\n88S6ds0hEJGCqaO5zHRPImvf04mTSgCPrd2pOQQiUhRKCmWmp0lkvS1UojkEIpIrJYUyk8sHveYQ\niEiulBTKTG8f9Jb1WHMIRCQfSgplprdJZNecO5yhdbUYMLSuln+9Yqw6mUUkZxp9VGY0iUxEoqSk\nUIY0iUxEohJp85GZTTWzbWbWamZz+ih3tpkdNLOroownbss3tHPefS8wYs4znHffCyzfkL1ltYhI\nvCKrKZhZFfAQcCHQBrxiZk+5+2s9lPsh8FxUsSSBNqkRkXIQZU3hHKDV3Xe4+wFgCTCth3LfBJ4A\n3oswlthpkxoRKQdRJoWhwK6Mx23pYwEzGwpcDvwswjgSQZvUiEg5iHtI6k+Au9z9474Kmdn1ZtZi\nZi0dHR0lCq24tEmNiJSDKJNCOzAs43FD+limZmCJmb0FXAXMN7Pp2Sdy94Xu3uzuzfX19VHFG6mB\ntEmNOtRFyleUQ1JfAZrMbASpZDATuDqzgLuP6P7ezBYDK9x9eYQxxaas5hdsXArP3wt722BQA0y5\nB8bNCPXSYneoa0lwkdKKLCm4+0EzuwVYBVQBi9x9i5ndmH5+QVTvnVRlMb9g41J4+lboSvd17N2V\negyhEkNfHeq5XrtGbImUXqST19x9JbAy61iPycDdZ0cZi4T0/L2HE0K3rs7U8RBJoZgd6sVMMCIS\nTtwdzZI0e9tyO56lmB3qBSWYjUvhgTEwty71dePSnN9fZCBSUpAjDWrI7XiWYnao551gupvA9u4C\n/HATmBKDSL+UFORIU+6B6qwP3era1PEQpo8fyr9eMbYoK7bmnWD6agITkT5pQTw5Une/QZ6jj6B4\nHep5j9gqsAlMZCBTUigTJR2aOW5GTkkgSnklmEEN6aajHo6LSJ/UfFQGuodmtu/pxDk8NFOTwnpR\nYBOYyECmpFAGtJhejsbNgMvmwaBhgKW+XjYvMbUfkSRT81EZKOfF9Hpt9ipg1nQoCWoCEyknSgpl\n4OS6Wtp7SABJX0yvtxnJQ3et4OxN38171rSIREfNR2WgXBfT663Za9j6+zVkVCShVFMoA0leTC+7\neeiCUfW8uLWDd9Kd4j050TvAenhCQ0ZFYqekUCaSuJheT81D/7Z2Z7+ve8/q+TQ97IuhIaMisVPz\nkeStp+ah/tRWV7Frwp0aMiqSUKopSN5yGf1kEDR7nT1+KjR+KtrRRyKSFyUFyVtvo6KyDa2rZc2c\nLx55UENGRRJJzUeSt55GRWUrh1FSInKYagoJUY7bTvY0Kipz9FG5XIeIHBZpUjCzqcD/IrUd58Pu\nfl/W89cAd5F
<remainder of base64-encoded PNG output omitted: scatter plot of pricing error vs. runtime for regular MC and OHMC>\n",
    782 |       "text/plain": [
    783 |        ""
    784 |       ]
    785 |      },
    786 |      "metadata": {},
    787 |      "output_type": "display_data"
    788 |     }
    789 |    ],
    790 |    "source": [
    791 |     "import matplotlib.pyplot as plt\n",
    792 |     "fig,ax = plt.subplots()\n",
    793 |     "\n",
    794 |     "mc_runtime = efficiency_df[efficiency_df[\"method\"]==\"MC\"][\"runtime\"]\n",
    795 |     "mc_err = efficiency_df[efficiency_df[\"method\"]==\"MC\"][\"err\"]\n",
    796 |     "ax.scatter(mc_runtime,mc_err,label=\"regular MC\")\n",
    797 |     "\n",
    798 |     "ohmc_runtime = efficiency_df[efficiency_df[\"method\"]==\"OHMC\"][\"runtime\"]\n",
    799 |     "ohmc_err = efficiency_df[efficiency_df[\"method\"]==\"OHMC\"][\"err\"]\n",
    800 |     "ax.scatter(ohmc_runtime,ohmc_err,label=\"OHMC\")\n",
    801 |     "ax.legend()\n",
    802 |     "\n",
    803 |     "plt.xlabel(\"runtime\")\n",
    804 |     "plt.ylabel(\"err\")\n",
    805 |     "\n",
    806 |     "plt.show()"
    807 |    ]
    808 |   },
    809 |   {
    810 |    "cell_type": "markdown",
    811 |    "metadata": {},
    812 |    "source": [
    813 |     "### 4.3 Conclusion\n",
    814 |     "\n",
    815 |     "From the test, one can see that OHMC reduces the variance considerably, which makes it more attractive when the number of trials is limited. For example, if we can only run fewer than 1000 trials, OHMC is clearly better than regular MC. However, the runtime efficiency is similar in our observation: OHMC seems slightly more stable than regular MC, but not by enough to affirm that."
    816 |    ]
    817 |   },
    818 |   {
    819 |    "cell_type": "code",
    820 |    "execution_count": null,
    821 |    "metadata": {
    822 |     "collapsed": true
    823 |    },
    824 |    "outputs": [],
    825 |    "source": []
    826 |   }
    827 |  ],
    828 |  "metadata": {
    829 |   "kernelspec": {
    830 |    "display_name": "Python 3",
    831 |    "language": "python",
    832 |    "name": "python3"
    833 |   },
    834 |   "language_info": {
    835 |    "codemirror_mode": {
    836 |     "name": "ipython",
    837 |     "version": 3
    838 |    },
    839 |    "file_extension": ".py",
    840 |    "mimetype": "text/x-python",
    841 |    "name": "python",
    842 |    "nbconvert_exporter": "python",
    843 |    "pygments_lexer": "ipython3",
    844 |    "version": "3.6.5"
    845 |   }
    846 |  },
    847 |  "nbformat": 4,
    848 |  "nbformat_minor": 2
    849 | }
    850 |
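The scatter-plot cell above assumes a DataFrame `efficiency_df` with columns `"method"`, `"runtime"` and `"err"` (that is how the cell indexes it). As a rough sketch of how such a table could be assembled, under stated assumptions: `collect_efficiency`, `price_once` and `true_price` below are hypothetical names, not functions defined in this repository.

```python
import time
import pandas as pd

def collect_efficiency(price_once, true_price, n_trials=20, n_paths=1000):
    """Build an efficiency table with one row per pricing run."""
    rows = []
    for method in ("MC", "OHMC"):
        for _ in range(n_trials):
            start = time.time()
            estimate = price_once(method, n_paths)           # hypothetical pricing call
            rows.append({"method": method,
                         "runtime": time.time() - start,     # seconds per pricing run
                         "err": abs(estimate - true_price)})  # absolute pricing error
    return pd.DataFrame(rows)
```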
--------------------------------------------------------------------------------
/binomial.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import math
3 | 
4 | 
5 | def call(S, K, sigma, r, t, div=0, n=100, am=False):
6 |     """
7 |     Price a call option using the Binomial Options Pricing model
8 |     S: initial spot price of stock
9 |     K: strike price of option
10 |     sigma: volatility
11 |     r: risk-free interest rate
12 |     t: time to maturity (in years)
13 |     div: dividend yield (continuous compounding)
14 |     n: binomial steps
15 |     am: True for American option, False for European option
16 |     %timeit results: 0 loops, best of 3: 49.6 ms per loop
17 |     """
18 |     return option(S, K, sigma, r, t, div, 1, n, am)
19 | 
20 | 
21 | def put(S, K, sigma, r, t, div=0, n=100, am=False):
22 |     """
23 |     Price a put option using the Binomial Options Pricing model
24 |     S: initial spot price of stock
25 |     K: strike price of option
26 |     sigma: volatility
27 |     r: risk-free interest rate
28 |     t: time to maturity (in years)
29 |     div: dividend yield (continuous compounding)
30 |     n: binomial steps
31 |     am: True for American option, False for European option
32 |     %timeit results: 10 loops, best of 3: 50.5 ms per loop
33 |     """
34 |     return option(S, K, sigma, r, t, div, -1, n, am)
35 | 
36 | 
37 | def option(S, K, sigma, r, t, div=0, call=1, n=100, am=False):
38 |     """
39 |     Price an option using the Binomial Options Pricing model
40 |     S: initial spot price of stock
41 |     K: strike price of option
42 |     sigma: volatility
43 |     r: risk-free interest rate
44 |     t: time to maturity (in years)
45 |     div: dividend yield (continuous compounding)
46 |     call: 1 if call option, -1 if put
47 |     n: binomial steps
48 |     am: True for American option, False for European option
49 |     """
50 |     delta = float(t) / n
51 |     u = math.exp(sigma * math.sqrt(delta))
52 |     d = float(1) / u
53 |     q = float((math.exp((r - div) * delta) - d)) / (u - d)  # Prob. of up step
54 |     stock_val = np.zeros((n + 1, n + 1))
55 |     opt_val = np.zeros((n + 1, n + 1))
56 | 
57 |     # Calculate stock value at maturity
58 |     stock_val[0, 0] = S
59 |     for i in range(1, n + 1):
60 |         stock_val[i, 0] = stock_val[i - 1, 0] * u
61 |         for j in range(1, i + 1):
62 |             stock_val[i, j] = stock_val[i - 1, j - 1] * d
63 | 
64 |     # Recursion for option price
65 |     for j in range(n + 1):
66 |         opt_val[n, j] = max(0, call*(stock_val[n, j] - K))
67 |     for i in range(n - 1, -1, -1):
68 |         for j in range(i + 1):
69 |             opt_val[i, j] = \
70 |                 (q * opt_val[i + 1, j] + (1 - q) * opt_val[i + 1, j + 1]) \
71 |                 / math.exp(r * delta)
72 |             if am:
73 |                 opt_val[i, j] = max(opt_val[i, j], call*(stock_val[i, j] - K))
74 |     return opt_val[0, 0]
75 | 
76 | 
77 | def call2(S, K, sigma, r, t, div=0, n=100, am=False):
78 |     """
79 |     Price a call option using the Binomial Options Pricing model
80 |     S: initial spot price of stock
81 |     K: strike price of option
82 |     sigma: volatility
83 |     r: risk-free interest rate
84 |     t: time to maturity (in years)
85 |     div: dividend yield (continuous compounding)
86 |     n: binomial steps
87 |     %timeit results: 10000 loops, best of 3: 139 us per loop
88 |     """
89 |     return option2(S, K, sigma, r, t, div, 1, n, am)
90 | 
91 | 
92 | def put2(S, K, sigma, r, t, div=0, n=100, am=False):
93 |     """
94 |     Price a put option using the Binomial Options Pricing model
95 |     S: initial spot price of stock
96 |     K: strike price of option
97 |     sigma: volatility
98 |     r: risk-free interest rate
99 |     t: time to maturity (in years)
100 |     div: dividend yield (continuous compounding)
101 |     n: binomial steps
102 |     %timeit results: 10000 loops, best of 3: 136 us per loop
103 |     """
104 |     return option2(S, K, sigma, r, t, div, -1, n, am)
105 | 
106 | 
107 | def option2(S, K, sigma, r, t, div=0, call=1, n=100, am=False):
108 |     """
109 |     Price an option using the Binomial Options Pricing model
110 |     S: initial spot price of stock
111 |     K: strike price of option
112 |     sigma: volatility
113 |     r: risk-free interest rate
114 |     t: time to maturity (in years)
115 |     div: dividend yield (continuous compounding)
116 |     n: binomial steps
117 |     Note: this version sums over the terminal distribution only, so it prices European options; am is accepted but early exercise is ignored.
118 |     """
119 |     delta = float(t) / n
120 |     u = math.exp(sigma * math.sqrt(delta))
121 |     d = float(1) / u
122 |     pu = float((math.exp((r - div) * delta) - d)) / (u - d)  # Prob. of up step
123 |     pd = 1 - pu  # Prob. of down step
124 |     u_squared = u * u
125 |     S = S * pow(d, n)  # stock price at bottom node at last date
126 |     prob = pow(pd, n)  # prob. of bottom node at last date
127 |     opt_val = prob * max(0, call*(S - K))
128 |     # Walk up the remaining terminal nodes; the top node (i = n) must be included
129 |     for i in range(1, n + 1):
130 |         S = S * u_squared
131 |         prob = prob * (float(pu) / pd) * (n - i + 1) / i
132 |         opt_val = opt_val + prob * max(call*(S - K), 0)
133 |     return math.exp(-r * t) * opt_val
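For quick reference, here is a minimal usage sketch of the pricers defined in binomial.py. The parameter values are illustrative only, and the import assumes the file can be used as a module named `binomial` from the repository root.

```python
import binomial

# European call: full backward-induction tree vs. the faster terminal-sum version
c_tree = binomial.call(S=100, K=100, sigma=0.2, r=0.05, t=1.0, div=0.0, n=500)
c_fast = binomial.call2(S=100, K=100, sigma=0.2, r=0.05, t=1.0, div=0.0, n=500)

# American put: only the backward-induction pricer (option/call/put) honours am=True
p_amer = binomial.put(S=100, K=100, sigma=0.2, r=0.05, t=1.0, div=0.0, n=500, am=True)

print(c_tree, c_fast, p_amer)
```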
--------------------------------------------------------------------------------
/presentation.md:
--------------------------------------------------------------------------------
1 | # Presentation
2 | 
3 | ## Prologue
4 | ### Greeting
5 | Hello guys, thank you for coming. Today I'm going to show you an awesome topic: how to price and hedge American-style options. Really a well-defined and classic problem, right? So yeah, with all of the new and improved techniques, that is our project.
6 | 
7 | ### Gratitude
8 | At the start of my report, I want to first thank Prof. Cui for his continuous guidance and interest in my research. I also want to thank Wan, Tengxi, one of my friends; some of you may know him, as he took this course last semester, and my research project is in part an extension of his interest in LSM.
9 | 
10 | ## Introduction
11 | ### Outline
12 | OK, back to our report. Today, I will go through some basic theoretical details about the problem, then show you some striking methods to address it, and in addition select several potential extensions or research directions to further hone this project.
13 | 
14 | ### Background
15 | As we know, option contracts play a prominent role in the financial industry, including equity, fixed income, forex, commodities and economic indices. They can be divided into two main types by exercise rule: European options and American options. Today, we are going to focus on American options and how to price and hedge them step by step. I know the subject today is a literature review, but I don't want it to be boring, and nobody can convey the ideas without proper clarification.
16 | 
17 | ### Pricing
18 | Here is the pricing formula under the optimization problem (a compact version is written out just after the Methods Classification part below). For the sake of simplicity, I set the risk-free rate to 0 here; otherwise, one can view the price as a forward price standing at inception. Note that the price process is a supermartingale (intuitively, the earlier you are, the more chances you have to find a better exercise point).
19 | 
20 | ### Hedging
21 | Besides pricing, hedging is increasingly important nowadays, especially for the ubiquitous American options. But the problem is that there is no explicit formula for such an option. That means we cannot obtain the hedging strategy by direct differentiation. Here, the Doob-Meyer decomposition of the supermartingale representation stands out and gives a way to calculate the hedging strategy with respect to the underlying asset.
22 | 
23 | ### Phase transition
24 | In particular, a question arises: if we assert that the American option's price is actually a supermartingale instead of a martingale, does that contradict martingale pricing theory? Well, as far as I'm concerned, it does not. Before the underlying reaches the exercise boundary ($s\in[t,\tau]$), the price process is a martingale. After that, it is a supermartingale, because you have given up value in terms of probability (you are not supposed to keep holding). Fair enough, right? Correct me if I'm wrong.
25 | 
26 | ### Methods Classification
27 | In this project, we focus on the Monte Carlo simulation method, as it has many advantages compared to other approaches, which I will explain in the following parts. In recent research, Bouchard and Warin divide Monte Carlo methods for pricing and hedging into two approaches: the Malliavin-based approach and the regression-based approach.
28 | 
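For reference, one standard way to write down the objects the Pricing and Hedging sections above refer to; the notation here is added for this write-up and is not taken from the slides. With the risk-free rate set to 0, payoff $\Psi$, and expectations under the pricing measure, the American option price at time $t$ is the Snell envelope

$$
V_t = \operatorname*{ess\,sup}_{\tau \in \mathcal{T}_{[t,T]}} \mathbb{E}\left[\, \Psi(S_\tau) \,\middle|\, \mathcal{F}_t \right],
$$

which is a supermartingale. Its Doob-Meyer decomposition

$$
V_t = V_0 + M_t - A_t, \qquad M \text{ a martingale}, \quad A \text{ non-decreasing and predictable},
$$

isolates the martingale part; assuming a complete market, writing $dM_t = \phi_t \, dS_t$ (martingale representation) gives the hedge ratio $\phi_t$ mentioned in the Hedging section.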
29 | ### Purpose
30 | As I mentioned, this project focuses on the Monte Carlo method. In order to make it more original and profound, it is not enough to just replicate other people's work, so there are several ideas I want to tackle. Basically, these are still rough ideas, and I hope we can achieve them in the foreseeable future.
31 | 
32 | ## Regression Based Methods
33 | 
34 | ### LSM
35 | * Longstaff & Schwartz (2001)
36 | 
37 | ### LSM improvements
38 | * CIR (1985)
39 | * Anderson (2007)
40 | * Heston (1993)
41 | 
42 | ### OHMC
43 | * Potters, Bouchaud & Sestovic (2001)
44 | 
45 | ### Other extensions
46 | * Bouchard & Warin (2012)
47 | * Davis & Zariphopoulou (1995)
48 | * Broadie (2011)
49 | 
50 | ## Conclusion
51 | pros:
52 | 
53 | * Simple (algorithm) yet powerful (expansibility)
54 | * Compatible with any model (nonparametric)
55 | * Copes with the curse of dimensionality better than lattice methods
56 | 
57 | cons:
58 | 
59 | * bias
60 | * computational complexity
61 | 
62 | ## Thank you
63 | 
64 | These are my references so far, in case you want to follow this topic. So, yeah, that's it. Thank you for your attention. Any questions?
65 | 
66 | 
--------------------------------------------------------------------------------
/trinomial.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import math as m
3 | 
4 | 
5 | class american_pricer:
6 |     """Price an American option on a trinomial grid in log-price
7 |     (an explicit scheme with early exercise applied at every time step)."""
8 | 
9 |     def __init__(self, S=1, T=60, K=1, r=0, q=0, sigma=.1, call=True):
10 |         self.S = S
11 |         self.T = T
12 |         self.K = K
13 |         self.r = r
14 |         self.q = q
15 |         self.sigma = sigma
16 |         self.call = call
17 | 
18 |     def price(self, N):
19 |         # time step; cast T first so integer T and N do not truncate
20 |         dt = float(self.T) / N
21 | 
22 |         # mu is r - q - (sigma^2)/2
23 |         mu = self.r - self.q - (self.sigma**2) / 2.0
24 |         # set sigma max (log-space grid spacing) for stability requirements
25 |         smax = 2 * abs(mu) * dt**.5
26 |         smax = max(smax, self.sigma * (2**.5))
27 |         if smax == 0:
28 |             # degenerate case (no drift and no volatility): signal failure
29 |             return -9999
30 |         # set up arrays to keep track of steps; the grid has 2*M+1 price nodes
31 |         M = int(5 * (N**.5))
32 |         C_ = np.empty(2 * M + 1, dtype=np.float64)
33 |         pC_ = np.empty(2 * M + 1, dtype=np.float64)
34 |         S_ = np.empty(2 * M + 1, dtype=np.float64)
35 |         # set probs up, down, and same
36 |         p = float(0.5 * (self.sigma**2)) / (smax**2)
37 |         p_u = p + 0.5 * mu * dt**.5 / float(smax)
38 |         p_m = 1 - 2 * p
39 |         p_d = p - 0.5 * mu * dt**.5 / float(smax)
40 |         # one-step discount factor and up-move multiplier; init payoff
41 |         D = 1.0 / (1 + self.r * dt)
42 |         E = m.exp(smax * dt**.5)
43 | 
44 |         for j in range(0, len(S_)):
45 |             if j == 0:
46 |                 S_[j] = self.S * m.exp(-M * smax * dt**.5)
47 |             else:
48 |                 S_[j] = S_[j - 1] * E
49 |             if self.call:
50 |                 C_[j] = max(S_[j] - self.K, 0)
51 |             else:
52 |                 C_[j] = max(self.K - S_[j], 0)
53 | 
54 |         for k in range(0, N):
55 |             for j in range(1, 2 * M):
56 |                 pC_[j] = (p_u * C_[j + 1] + p_m * C_[j] + p_d * C_[j - 1]) * D
57 |             # set boundaries by linear extrapolation
58 |             pC_[0] = 2 * pC_[1] - pC_[2]
59 |             pC_[2 * M] = 2 * pC_[2 * M - 1] - pC_[2 * M - 2]
60 | 
61 |             # early exercise: take the larger of continuation and intrinsic value
62 |             for n in range(0, 2 * M + 1):
63 |                 if self.call:
64 |                     C_[n] = max(pC_[n], max(S_[n] - self.K, 0))
65 |                 else:
66 |                     C_[n] = max(pC_[n], max(self.K - S_[n], 0))
67 |         ret = C_[M]  # value at the centre node, i.e. at the initial spot
68 |         return ret
--------------------------------------------------------------------------------
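Finally, a minimal usage sketch of the trinomial pricer above. The parameter values are illustrative only, and the import assumes trinomial.py is importable from the repository root.

```python
from trinomial import american_pricer

# One-year at-the-money American put (illustrative parameters)
pricer = american_pricer(S=100.0, T=1.0, K=100.0, r=0.05, q=0.0, sigma=0.2, call=False)
print(pricer.price(500))  # more time steps N -> slower but more accurate
```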