├── .other
│   └── cover.png
├── Appendix
│   └── Appendix.ipynb
├── Chapter 04
│   └── Chapter 4.ipynb
├── Chapter 05
│   └── Chapter 5.ipynb
├── Chapter 06
│   └── Chapter 6.ipynb
├── Chapter 08
│   └── Chapter 8.ipynb
├── Chapter 09
│   └── Chapter 9.ipynb
├── Chapter 11
│   └── Chapter 11.ipynb
├── Chapter 12
│   └── Chapter 12.ipynb
├── Chapter 13
│   └── Chapter 13.ipynb
├── LICENSE
└── README.md

/.other/cover.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PacktPublishing/Algorithmic-Short-Selling-with-Python/7ad7711b5236355e4850d422ad1d542d448be0c0/.other/cover.png
--------------------------------------------------------------------------------
/Appendix/Appendix.ipynb:
--------------------------------------------------------------------------------
 1 | {"cells":[{"metadata":{},"cell_type":"markdown","source":"# Preliminary instruction\n\nTo follow the code in this chapter, the `yfinance` package must be installed in your environment. If you do not have this installed yet, review Chapter 4 for instructions on how to do so."},{"metadata":{},"cell_type":"markdown","source":"# SCREENING ACROSS THE S&P 500 INDEX"},{"metadata":{},"cell_type":"markdown","source":"## Processing order\n\n1. Import Libraries \n2. Define functions:\n 1. Relative\n 2. Regime definitions\n 1. Range Breakout\n 2. Turtle for dummies\n 3. Moving Average Crossover\n 4. Floor/Ceiling\n 3. Data handling\n3. Control Panel: parameters \n4. Download investment universe from website: Wikipedia\n5. Batch download from yfinance\n6. Process batch:\n 1. Loop through batch\n 2. droplevel\n 3. Process individual ticker absolute & relative:\n 1. Calculate relative series\n 2. Regime breakout\n 3. Turtle for dummies\n 4. Moving average crossover\n 5. Swing detection & Floor/Ceiling regime\n 6. Boolean save to csv\n 7. Create dictionary from last row\n7. Append list of last row dictionary\n8. Create dataframe from list\n9. 
Boolean save to csv"},{"metadata":{},"cell_type":"markdown","source":"## Import Libraries\n\nLet's import all the libraries we will need at the beginning. "},{"metadata":{"trusted":true},"cell_type":"code","source":"# Appendix \n\n# Data manipulation\nimport pandas as pd\nimport numpy as np\nfrom scipy.signal import *\n# import pathlib\n\n# Data download\nimport yfinance as yf\n\n# Data visualization\n%matplotlib inline\nimport matplotlib.pyplot as plt","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Relative function and arguments declaration\n\n1. relative: The relative function converts absolute OHLC prices into relative, currency-adjusted prices. The default is rebased to the beginning of the absolute series.\n\n2. lower_upper_OHLC: this instantiates _o,_h,_l,_c in capital or small letters. When the relative boolean is set to True, it adds 'r' at the beginning.\n\n3. regime_args: instantiates floor/ceiling regime arguments"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n### RELATIVE\ndef relative(df,_o,_h,_l,_c, bm_df, bm_col, ccy_df, ccy_col, dgt, start, end,rebase=True):\n '''\n df: df\n bm_df, bm_col: df benchmark dataframe & column name\n ccy_df,ccy_col: currency dataframe & column name\n dgt: rounding decimal\n start/end: string or offset\n rebase: boolean rebase to beginning or continuous series\n '''\n # Slice df dataframe from start to end period: either offset or datetime\n df = df[start:end] \n \n # inner join of benchmark & currency: only common values are preserved\n df = df.join(bm_df[[bm_col]],how='inner') \n df = df.join(ccy_df[[ccy_col]],how='inner')\n\n # rename benchmark name as bm and currency as ccy\n df.rename(columns={bm_col:'bm', ccy_col:'ccy'},inplace=True)\n\n # Adjustment factor: calculate the scalar product of benchmark and currency\n df['bmfx'] = round(df['bm'].mul(df['ccy']),dgt).fillna(method='ffill')\n if rebase == True:\n df['bmfx'] = 
df['bmfx'].div(df['bmfx'][0])\n\n # Divide absolute price by fxcy adjustment factor and rebase to first value\n df['r' + str(_o)] = round(df[_o].div(df['bmfx']),dgt)\n df['r' + str(_h)] = round(df[_h].div(df['bmfx']),dgt)\n df['r'+ str(_l)] = round(df[_l].div(df['bmfx']),dgt)\n df['r'+ str(_c)] = round(df[_c].div(df['bmfx']),dgt)\n df = df.drop(['bm','ccy','bmfx'],axis=1)\n \n return (df)\n\n### RELATIVE ###\n\ndef lower_upper_OHLC(df,relative = False):\n if relative==True:\n rel = 'r'\n else:\n rel= '' \n if 'Open' in df.columns:\n ohlc = [rel+'Open',rel+'High',rel+'Low',rel+'Close'] \n elif 'open' in df.columns:\n ohlc = [rel+'open',rel+'high',rel+'low',rel+'close']\n \n try:\n _o,_h,_l,_c = [ohlc[h] for h in range(len(ohlc))]\n except:\n _o=_h=_l=_c= np.nan\n return _o,_h,_l,_c\n\ndef regime_args(df,lvl,relative= False):\n if ('Low' in df.columns) & (relative == False):\n reg_val = ['Lo1','Hi1','Lo'+str(lvl),'Hi'+str(lvl),'rg','clg','flr','rg_ch']\n elif ('low' in df.columns) & (relative == False):\n reg_val = ['lo1','hi1','lo'+str(lvl),'hi'+str(lvl),'rg','clg','flr','rg_ch']\n elif ('Low' in df.columns) & (relative == True):\n reg_val = ['rL1','rH1','rL'+str(lvl),'rH'+str(lvl),'rrg','rclg','rflr','rrg_ch']\n elif ('low' in df.columns) & (relative == True):\n reg_val = ['rl1','rh1','rl'+str(lvl),'rh'+str(lvl),'rrg','rclg','rflr','rrg_ch']\n \n try: \n rt_lo,rt_hi,slo,shi,rg,clg,flr,rg_ch = [reg_val[s] for s in range(len(reg_val))]\n except:\n rt_lo=rt_hi=slo=shi=rg=clg=flr=rg_ch= np.nan\n return rt_lo,rt_hi,slo,shi,rg,clg,flr,rg_ch","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Popular regime definition methodologies\nThese are traditional regime definition techniques. Python does a remarkable job of condensing concepts into a few lines of code. \n\n#### regime_breakout:\n1. Bullish: if df[_h] == df[_h].rolling(window).max()\n2. Bearish: if df[_l] == df[_l].rolling(window).min()\n3. 
Bullish condition reverses when Bearish condition is met and vice versa\n \n#### turtle_trader:\nSame as regime breakout but asymmetric entry/exit.\n1. Entry on slow range. \n2. Exit on fast range\nThis protects profits and reduces drawdowns\n\n#### regime_sma:\nSimple moving average crossover: 2 moving averages\n\n#### regime_ema:\nExponential moving average crossover: 2 moving averages"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n#### regime_breakout(df,_h,_l,window) ####\ndef regime_breakout(df,_h,_l,window):\n hl = np.where(df[_h] == df[_h].rolling(window).max(),1,\n np.where(df[_l] == df[_l].rolling(window).min(), -1,np.nan))\n roll_hl = pd.Series(index= df.index, data= hl).fillna(method= 'ffill')\n return roll_hl\n#### regime_breakout(df,_h,_l,window) ####\n\n#### turtle_trader(df, _h, _l, slow, fast) ####\ndef turtle_trader(df, _h, _l, slow, fast):\n '''\n _slow: Long/Short direction\n _fast: trailing stop loss\n '''\n _slow = regime_breakout(df,_h,_l,window = slow)\n _fast = regime_breakout(df,_h,_l,window = fast)\n turtle = pd. 
Series(index= df.index, \n data = np.where(_slow == 1,np.where(_fast == 1,1,0), \n np.where(_slow == -1, np.where(_fast ==-1,-1,0),0)))\n return turtle\n#### turtle_trader(df, _h, _l, slow, fast) ####\n\n#### regime_sma(df,_c,st,lt) ####\ndef regime_sma(df,_c,st,lt):\n '''\n bull +1: sma_st >= sma_lt , bear -1: sma_st <= sma_lt\n '''\n sma_lt = df[_c].rolling(lt).mean()\n sma_st = df[_c].rolling(st).mean()\n rg_sma = np.sign(sma_st - sma_lt)\n return rg_sma\n#### regime_sma(df,_c,st,lt) ####\n\n#### regime_ema(df,_c,st,lt) ####\ndef regime_ema(df,_c,st,lt):\n '''\n bull +1: ema_st >= ema_lt , bear -1: ema_st <= ema_lt\n '''\n ema_st = df[_c].ewm(span=st,min_periods = st).mean()\n ema_lt = df[_c].ewm(span=lt,min_periods = lt).mean()\n rg_ema = np.sign(ema_st - ema_lt)\n return rg_ema\n#### regime_ema(df,_c,st,lt) ####","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Floor/Ceiling regime definition\n\nUnlike the previous methodologies, the floor/ceiling regime definitionis computationally intense. It is in two parts:\n\n1. Swing detection is a succession of functions. It is broadly split in two parts:\n 1. historical swing detection: all the swings leading up to the latest one\n 1. historical_swings: this uses find_peaks to find small peaks. This function loops over series coming from the hilo alternation function to zoom out\n 2. hilo alternation: this function simply reduces series to alternate highs and lows \n 2. latest swing detection: rapid fire functions to detect the latest swing in real time\n 1. cleanup_latest_swing: Eliminate false positives Swing High/Low last swing\n 2. latest_swing_variables: Set-up arguments for latest swing High or Low\n 3. test_distance: noise filter: removes short amplitude noise\n 4. average_true_range: classic volatility in 1 line of code\n 5. retest_swing: retest method\n 6. retracement_swing: alternative swing detection: retracement from high/low\n \n \n2. 
Floor/ceiling regime definition uses swings detected above\n 1. Classic regime definition:\n 1. Bullish: (swing low - bottom)/std > threshold\n 2. Bearish: (swing high - top)/std < threshold\n 2. Handling exception:\n 1. Bearish: Low < swing low\n 2. Bullish: High > swing high"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n#### hilo_alternation(hilo, dist= None, hurdle= None) ####\ndef hilo_alternation(hilo, dist= None, hurdle= None):\n i=0 \n while (np.sign(hilo.shift(1)) == np.sign(hilo)).any(): # runs until duplicates are eliminated\n\n # removes swing lows > swing highs\n hilo.loc[(np.sign(hilo.shift(1)) != np.sign(hilo)) & # hilo alternation test\n (hilo.shift(1)<0) & # previous datapoint: high\n (np.abs(hilo.shift(1)) < np.abs(hilo) )] = np.nan # high[-1] < low, eliminate low \n\n hilo.loc[(np.sign(hilo.shift(1)) != np.sign(hilo)) & # hilo alternation\n (hilo.shift(1)>0) & # previous swing: low\n (np.abs(hilo ) < hilo.shift(1))] = np.nan # swing high < swing low[-1]\n\n # alternation test: removes duplicate swings & keep extremes\n hilo.loc[(np.sign(hilo.shift(1)) == np.sign(hilo)) & # same sign\n (hilo.shift(1) < hilo )] = np.nan # keep lower one\n\n hilo.loc[(np.sign(hilo.shift(-1)) == np.sign(hilo)) & # same sign, forward looking \n (hilo.shift(-1) < hilo )] = np.nan # keep forward one\n\n # removes noisy swings: distance test\n if pd.notnull(dist):\n hilo.loc[(np.sign(hilo.shift(1)) != np.sign(hilo))&\\\n (np.abs(hilo + hilo.shift(1)).div(dist, fill_value=1)< hurdle)] = np.nan\n\n # reduce hilo after each pass\n hilo = hilo.dropna().copy() \n i+=1\n if i == 4: # breaks infinite loop\n break \n return hilo\n#### hilo_alternation(hilo, dist= None, hurdle= None) ####\n\n#### historical_swings(df,_o,_h,_l,_c, dist= None, hurdle= None) #### \ndef historical_swings(df,_o,_h,_l,_c, dist= None, hurdle= None):\n \n reduction = df[[_o,_h,_l,_c]].copy() \n reduction['avg_px'] = 
round(reduction[[_h,_l,_c]].mean(axis=1),2)\n highs = reduction['avg_px'].values\n lows = - reduction['avg_px'].values\n reduction_target = len(reduction) // 100\n# print(reduction_target )\n\n n = 0\n while len(reduction) >= reduction_target: \n highs_list = find_peaks(highs, distance = 1, width = 0)\n lows_list = find_peaks(lows, distance = 1, width = 0)\n hilo = reduction.iloc[lows_list[0]][_l].sub(reduction.iloc[highs_list[0]][_h],fill_value=0)\n\n # Reduction dataframe and alternation loop\n hilo_alternation(hilo, dist= None, hurdle= None)\n reduction['hilo'] = hilo\n\n # Populate reduction df\n n += 1 \n reduction[str(_h)[:2]+str(n)] = reduction.loc[reduction['hilo']<0 ,_h]\n reduction[str(_l)[:2]+str(n)] = reduction.loc[reduction['hilo']>0 ,_l]\n\n # Populate main dataframe\n df[str(_h)[:2]+str(n)] = reduction.loc[reduction['hilo']<0 ,_h]\n df[str(_l)[:2]+str(n)] = reduction.loc[reduction['hilo']>0 ,_l]\n \n # Reduce reduction\n reduction = reduction.dropna(subset= ['hilo'])\n reduction.fillna(method='ffill', inplace = True)\n highs = reduction[str(_h)[:2]+str(n)].values\n lows = -reduction[str(_l)[:2]+str(n)].values\n \n if n >= 9:\n break\n \n return df\n#### historical_swings(df,_o,_h,_l,_c, dist= None, hurdle= None) ####\n\n#### cleanup_latest_swing(df, shi, slo, rt_hi, rt_lo) ####\ndef cleanup_latest_swing(df, shi, slo, rt_hi, rt_lo): \n '''\n removes false positives\n '''\n # latest swing\n shi_dt = df.loc[pd.notnull(df[shi]), shi].index[-1]\n s_hi = df.loc[pd.notnull(df[shi]), shi][-1]\n slo_dt = df.loc[pd.notnull(df[slo]), slo].index[-1] \n s_lo = df.loc[pd.notnull(df[slo]), slo][-1] \n len_shi_dt = len(df[:shi_dt])\n len_slo_dt = len(df[:slo_dt])\n \n\n # Reset false positives to np.nan\n for i in range(2):\n \n if (len_shi_dt > len_slo_dt) & ((df.loc[shi_dt:,rt_hi].max()> s_hi) | (s_hi < s_lo)):\n df.loc[shi_dt, shi] = np.nan\n len_shi_dt = 0\n elif (len_slo_dt > len_shi_dt) & ((df.loc[slo_dt:,rt_lo].min()< s_lo)| (s_hi < s_lo)):\n df.loc[slo_dt, slo] = np.nan\n len_slo_dt = 0\n else:\n pass\n return df\n#### cleanup_latest_swing(df, shi, slo, rt_hi, rt_lo) ####\n\n#### latest_swing_variables(df, shi, slo, rt_hi, rt_lo, _h, _l, _c) ####\ndef latest_swing_variables(df, shi, slo, rt_hi, rt_lo, _h, _l, _c):\n '''\n Latest swings dates & values\n '''\n shi_dt = df.loc[pd.notnull(df[shi]), shi].index[-1]\n slo_dt = df.loc[pd.notnull(df[slo]), slo].index[-1]\n s_hi = df.loc[pd.notnull(df[shi]), shi][-1]\n s_lo = df.loc[pd.notnull(df[slo]), slo][-1]\n \n if slo_dt > shi_dt: \n swg_var = [1,s_lo,slo_dt,rt_lo,shi, df.loc[slo_dt:,_h].max(), df.loc[slo_dt:, _h].idxmax()] \n 
elif shi_dt > slo_dt: \n swg_var = [-1,s_hi,shi_dt,rt_hi,slo, df.loc[shi_dt:, _l].min(),df.loc[shi_dt:, _l].idxmin()] \n else: \n ud = 0\n ud, bs, bs_dt, _rt, _swg, hh_ll, hh_ll_dt = [swg_var[h] for h in range(len(swg_var))] \n \n return ud, bs, bs_dt, _rt, _swg, hh_ll, hh_ll_dt\n#### latest_swings(df, shi, slo, rt_hi, rt_lo, _h, _l, _c, _vol) ####\n\n#### test_distance(ud, bs, hh_ll, vlty, dist_vol, dist_pct) ####\ndef test_distance(ud,bs, hh_ll, dist_vol, dist_pct): \n \n # priority: 1. Vol 2. pct 3. dflt\n if (dist_vol > 0): \n distance_test = np.sign(abs(hh_ll - bs) - dist_vol)\n elif (dist_pct > 0):\n distance_test = np.sign(abs(hh_ll / bs - 1) - dist_pct)\n else:\n distance_test = np.sign(dist_pct)\n \n return int(max(distance_test,0) * ud)\n#### test_distance(ud, bs, hh_ll, vlty, dist_vol, dist_pct) ####\n\n#### ATR ####\ndef average_true_range(df, _h, _l, _c, n):\n '''\n http://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:average_true_range_atr\n '''\n atr = (df[_h].combine(df[_c].shift(), max) - df[_l].combine(df[_c].shift(), min)).rolling(window=n).mean()\n return atr\n\n#### ATR ####\n\n#### retest_swing(df, _sign, _rt, hh_ll_dt, hh_ll, _c, _swg) ####\ndef retest_swing(df, _sign, _rt, hh_ll_dt, hh_ll, _c, _swg):\n rt_sgmt = df.loc[hh_ll_dt:, _rt] \n\n if (rt_sgmt.count() > 0) & (_sign != 0): # Retests exist and distance test met \n if _sign == 1: # \n rt_list = [rt_sgmt.idxmax(),rt_sgmt.max(),df.loc[rt_sgmt.idxmax():, _c].cummin()]\n \n elif _sign == -1:\n rt_list = [rt_sgmt.idxmin(), rt_sgmt.min(), df.loc[rt_sgmt.idxmin():, _c].cummax()]\n rt_dt,rt_hurdle, rt_px = [rt_list[h] for h in range(len(rt_list))]\n\n if str(_c)[0] == 'r':\n df.loc[rt_dt,'rrt'] = rt_hurdle\n elif str(_c)[0] != 'r':\n df.loc[rt_dt,'rt'] = rt_hurdle \n\n if (np.sign(rt_px - rt_hurdle) == - np.sign(_sign)).any():\n df.at[hh_ll_dt, _swg] = hh_ll \n return df\n#### retest_swing(df, _sign, _rt, hh_ll_dt, hh_ll, _c, _swg) ####\n\n#### retracement_swing(df, 
_sign, _swg, _c, hh_ll_dt, hh_ll, vlty, retrace_vol, retrace_pct) ####\ndef retracement_swing(df, _sign, _swg, _c, hh_ll_dt, hh_ll, vlty, retrace_vol, retrace_pct):\n if _sign == 1: #\n retracement = df.loc[hh_ll_dt:, _c].min() - hh_ll\n\n if (vlty > 0) & (retrace_vol > 0) & ((abs(retracement / vlty) - retrace_vol) > 0):\n df.at[hh_ll_dt, _swg] = hh_ll\n elif (retrace_pct > 0) & ((abs(retracement / hh_ll) - retrace_pct) > 0):\n df.at[hh_ll_dt, _swg] = hh_ll\n\n elif _sign == -1:\n retracement = df.loc[hh_ll_dt:, _c].max() - hh_ll\n if (vlty > 0) & (retrace_vol > 0) & ((round(retracement / vlty ,1) - retrace_vol) > 0):\n df.at[hh_ll_dt, _swg] = hh_ll\n elif (retrace_pct > 0) & ((round(retracement / hh_ll , 4) - retrace_pct) > 0):\n df.at[hh_ll_dt, _swg] = hh_ll\n else:\n retracement = 0\n return df\n#### retracement_swing(df, _sign, _swg, _c, hh_ll_dt, hh_ll, vlty, retrace_vol, retrace_pct) ####\n\n\n# CHAPTER 5: Regime Definition \n\n#### regime_floor_ceiling(df, hi,lo,cl, slo, shi,flr,clg,rg,rg_ch,stdev,threshold) ####\ndef regime_floor_ceiling(df, _h,_l,_c,slo, shi,flr,clg,rg,rg_ch,stdev,threshold):\n # Lists instantiation\n threshold_test,rg_ch_ix_list,rg_ch_list = [],[], []\n floor_ix_list, floor_list, ceiling_ix_list, ceiling_list = [],[],[],[]\n\n ### Range initialisation to 1st swing\n floor_ix_list.append(df.index[0])\n ceiling_ix_list.append(df.index[0])\n \n ### Boolean variables\n ceiling_found = floor_found = breakdown = breakout = False\n\n ### Swings lists\n swing_highs = list(df[pd.notnull(df[shi])][shi])\n swing_highs_ix = list(df[pd.notnull(df[shi])].index)\n swing_lows = list(df[pd.notnull(df[slo])][slo])\n swing_lows_ix = list(df[pd.notnull(df[slo])].index)\n loop_size = np.maximum(len(swing_highs),len(swing_lows))\n\n ### Loop through swings\n for i in range(loop_size): \n\n ### asymetric swing list: default to last swing if shorter list\n try:\n s_lo_ix = swing_lows_ix[i]\n s_lo = swing_lows[i]\n except:\n s_lo_ix = swing_lows_ix[-1]\n s_lo = 
swing_lows[-1]\n\n try:\n s_hi_ix = swing_highs_ix[i]\n s_hi = swing_highs[i]\n except:\n s_hi_ix = swing_highs_ix[-1]\n s_hi = swing_highs[-1]\n\n swing_max_ix = np.maximum(s_lo_ix,s_hi_ix) # latest swing index\n\n ### CLASSIC CEILING DISCOVERY\n if (ceiling_found == False): \n top = df[floor_ix_list[-1] : s_hi_ix][_h].max()\n ceiling_test = round((s_hi - top) / stdev[s_hi_ix] ,1) \n\n ### Classic ceiling test\n if ceiling_test <= -threshold: \n ### Boolean flags reset\n ceiling_found = True \n floor_found = breakdown = breakout = False \n threshold_test.append(ceiling_test)\n\n ### Append lists\n ceiling_list.append(top)\n ceiling_ix_list.append(df[floor_ix_list[-1]: s_hi_ix][_h].idxmax()) \n rg_ch_ix_list.append(s_hi_ix)\n rg_ch_list.append(s_hi) \n\n ### EXCEPTION HANDLING: price penetrates discovery swing\n ### 1. if ceiling found, calculate regime since rg_ch_ix using close.cummin\n elif (ceiling_found == True):\n close_high = df[rg_ch_ix_list[-1] : swing_max_ix][_c].cummax()\n df.loc[rg_ch_ix_list[-1] : swing_max_ix, rg] = np.sign(close_high - rg_ch_list[-1])\n\n ### 2. if price.cummax penetrates swing high: regime turns bullish, breakout\n if (df.loc[rg_ch_ix_list[-1] : swing_max_ix, rg] >0).any():\n ### Boolean flags reset\n floor_found = ceiling_found = breakdown = False\n breakout = True\n\n ### 3. 
if breakout, test for bearish pullback from highest high since rg_ch_ix\n if (breakout == True):\n brkout_high_ix = df.loc[rg_ch_ix_list[-1] : swing_max_ix, _c].idxmax()\n brkout_low = df[brkout_high_ix : swing_max_ix][_c].cummin()\n df.loc[brkout_high_ix : swing_max_ix, rg] = np.sign(brkout_low - rg_ch_list[-1])\n\n\n ### CLASSIC FLOOR DISCOVERY \n if (floor_found == False): \n bottom = df[ceiling_ix_list[-1] : s_lo_ix][_l].min()\n floor_test = round((s_lo - bottom) / stdev[s_lo_ix],1)\n\n ### Classic floor test\n if (floor_test >= threshold): \n \n ### Boolean flags reset\n floor_found = True\n ceiling_found = breakdown = breakout = False\n threshold_test.append(floor_test)\n\n ### Append lists\n floor_list.append(bottom)\n floor_ix_list.append(df[ceiling_ix_list[-1] : s_lo_ix][_l].idxmin()) \n rg_ch_ix_list.append(s_lo_ix)\n rg_ch_list.append(s_lo)\n\n ### EXCEPTION HANDLING: price penetrates discovery swing\n ### 1. if floor found, calculate regime since rg_ch_ix using close.cummin\n elif(floor_found == True): \n close_low = df[rg_ch_ix_list[-1] : swing_max_ix][_c].cummin()\n df.loc[rg_ch_ix_list[-1] : swing_max_ix, rg] = np.sign(close_low - rg_ch_list[-1])\n\n ### 2. if price.cummin penetrates swing low: regime turns bearish, breakdown\n if (df.loc[rg_ch_ix_list[-1] : swing_max_ix, rg] <0).any():\n floor_found = floor_found = breakout = False\n breakdown = True \n\n ### 3. 
if breakdown,test for bullish rebound from lowest low since rg_ch_ix\n if (breakdown == True):\n brkdwn_low_ix = df.loc[rg_ch_ix_list[-1] : swing_max_ix, _c].idxmin() # lowest low \n breakdown_rebound = df[brkdwn_low_ix : swing_max_ix][_c].cummax() # rebound\n df.loc[brkdwn_low_ix : swing_max_ix, rg] = np.sign(breakdown_rebound - rg_ch_list[-1])\n# breakdown = False\n# breakout = True \n\n ### POPULATE FLOOR,CEILING, RG CHANGE COLUMNS\n df.loc[floor_ix_list[1:], flr] = floor_list\n df.loc[ceiling_ix_list[1:], clg] = ceiling_list\n df.loc[rg_ch_ix_list, rg_ch] = rg_ch_list\n df[rg_ch] = df[rg_ch].fillna(method='ffill')\n\n ### regime from last swing\n df.loc[swing_max_ix:,rg] = np.where(ceiling_found, # if ceiling found, highest high since rg_ch_ix\n np.sign(df[swing_max_ix:][_c].cummax() - rg_ch_list[-1]),\n np.where(floor_found, # if floor found, lowest low since rg_ch_ix\n np.sign(df[swing_max_ix:][_c].cummin() - rg_ch_list[-1]),\n np.sign(df[swing_max_ix:][_c].rolling(5).mean() - rg_ch_list[-1]))) \n df[rg] = df[rg].fillna(method='ffill')\n# df[rg+'_no_fill'] = df[rg]\n return df\n\n#### regime_floor_ceiling(df, hi,lo,cl, slo, shi,flr,clg,rg,rg_ch,stdev,threshold) ####","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Swings and Regime summary functions\n\nThese two functions call the functions necessary to calculate the floor/ceiling.\n\n1. swings: A simple toggle rel calculates either the absolute (rel = False) or the relative swings\n\n2. regime: lvl refers to the swing levels used to calculate regime. 
For example, Hi2/Lo2 refers to level 2 swings (reduced twice), rH3/rL3 to relative level 3 etc."},{"metadata":{"trusted":true},"cell_type":"code","source":"\n \ndef swings(df,rel = False):\n _o,_h,_l,_c = lower_upper_OHLC(df,relative= False)\n if rel == True:\n df = relative(df=df,_o=_o,_h=_h,_l=_l,_c=_c, bm_df=bm_df, bm_col= bm_col, ccy_df=bm_df, \n ccy_col=ccy_col, dgt= dgt, start=start, end= end,rebase=True)\n _o,_h,_l,_c = lower_upper_OHLC(df,relative= True) \n rt_lo,rt_hi,slo,shi,rg,clg,flr,rg_ch = regime_args(df,lvl,relative= True)\n else :\n rt_lo,rt_hi,slo,shi,rg,clg,flr,rg_ch = regime_args(df,lvl,relative= False)\n df= historical_swings(df,_o,_h,_l,_c, dist= None, hurdle= None)\n df= cleanup_latest_swing(df,shi,slo,rt_hi,rt_lo)\n ud, bs, bs_dt, _rt, _swg, hh_ll, hh_ll_dt = latest_swing_variables(df, shi,slo,rt_hi,rt_lo,_h,_l, _c)\n vlty = round(average_true_range(df,_h,_l,_c, n= vlty_n)[hh_ll_dt],dgt)\n dist_vol = d_vol * vlty\n _sign = test_distance(ud,bs, hh_ll, dist_vol, dist_pct)\n df = retest_swing(df, _sign, _rt, hh_ll_dt, hh_ll, _c, _swg)\n retrace_vol = r_vol * vlty\n df = retracement_swing(df, _sign, _swg, _c, hh_ll_dt, hh_ll, vlty, retrace_vol, retrace_pct)\n \n return df\n\n\ndef regime(df,lvl,rel=False): \n _o,_h,_l,_c = lower_upper_OHLC(df,relative= rel) \n rt_lo,rt_hi,slo,shi,rg,clg,flr,rg_ch = regime_args(df,lvl,relative= rel)\n stdev = df[_c].rolling(vlty_n).std(ddof=0)\n df = regime_floor_ceiling(df,_h,_l,_c,slo, shi,flr,clg,rg,rg_ch,stdev,threshold) \n \n return df","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Graph combo\n\nThis verbose function visualises all the above regime definition methodologies"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n### Graph Regimes ###\ndef graph_regime_combo(ticker,df,_c,rg,lo,hi,slo,shi,clg,flr,rg_ch,\n ma_st,ma_mt,ma_lt,lt_lo,lt_hi,st_lo,st_hi):\n \n '''\n https://www.color-hex.com/color-names.html\n 
ticker,df,_c: _c is closing price\n rg: regime -1/0/1 using floor/ceiling method\n lo,hi: small, noisy highs/lows\n slo,shi: swing lows/highs\n clg,flr: ceiling/floor\n \n rg_ch: regime change base\n ma_st,ma_mt,ma_lt: moving averages ST/MT/LT\n lt_lo,lt_hi: range breakout High/Low LT \n st_lo,st_hi: range breakout High/Low ST \n '''\n fig = plt.figure(figsize=(20,8))\n ax1 = plt.subplot2grid((1,1), (0,0))\n date = df.index\n close = df[_c]\n ax1.plot_date(df.index, close,'-', color='k', label=ticker.upper()) \n try:\n if pd.notnull(rg): \n base = df[rg_ch]\n regime = df[rg]\n\n if df[lo].count()>0:\n ax1.plot(df.index, df[lo],'.' ,color='r', label= 'swing low',alpha= 0.6)\n if df[hi].count()>0:\n ax1.plot(df.index, df[hi],'.' ,color='g', label= 'swing high',alpha= 0.6) \n if df[slo].count()>0:\n ax1.plot(df.index, df[slo],'o' ,color='r', label= 'swing low',alpha= 0.8)\n if df[shi].count()>0:\n ax1.plot(df.index, df[shi],'o' ,color='g', label= 'swing high',alpha= 0.8)\n if df[flr].count()>0:\n plt.scatter(df.index, df[flr],c='k',marker='^',label='floor')\n if df[clg].count() >0:\n plt.scatter(df.index, df[clg],c='k',marker='v',label='ceiling')\n\n ax1.plot([],[],linewidth=5, label= 'bear', color='m',alpha=0.1)\n ax1.plot([],[],linewidth=5 , label= 'bull', color='b',alpha=0.1)\n ax1.fill_between(date, close, base,where=((regime==1)&(close > base)), facecolor='b', alpha=0.1)\n ax1.fill_between(date, close, base,where=((regime==1)&(close < base)), facecolor='b', alpha=0.4)\n ax1.fill_between(date, close, base,where=((regime==-1)&(close < base)), facecolor='m', alpha=0.1)\n ax1.fill_between(date, close, base,where=((regime==-1)&(close > base)), facecolor='m', alpha=0.4)\n\n if np.sum(ma_st) >0 :\n ax1.plot(df.index,ma_st,'-' ,color='lime', label= 'ST MA')\n ax1.plot(df.index,ma_mt,'-' ,color='green', label= 'MT MA')\n ax1.plot(df.index,ma_lt,'-' ,color='red', label= 'LT MA')\n\n if pd.notnull(rg): # floor/ceiling regime present\n # Profitable conditions\n 
ax1.fill_between(date,close, ma_mt,where=((regime==1)&(ma_mt >= ma_lt)&(ma_st>=ma_mt)), \n facecolor='green', alpha=0.5) \n ax1.fill_between(date,close, ma_mt,where=((regime==-1)&(ma_mt <= ma_lt)&(ma_st <= ma_mt)), \n facecolor='red', alpha=0.5)\n # Unprofitable conditions\n ax1.fill_between(date,close, ma_mt,where=((regime==1)&(ma_mt>=ma_lt)&(ma_st>=ma_mt)&(close<ma_mt)), \n facecolor='darkgreen', alpha=1) \n ax1.fill_between(date,close, ma_mt,where=((regime==-1)&(ma_mt<=ma_lt)&(ma_st<=ma_mt)&(close>=ma_mt)), \n facecolor='darkred', alpha=1)\n\n elif pd.isnull(rg): # floor/ceiling regime absent\n # Profitable conditions\n ax1.fill_between(date,close, ma_mt,where=((ma_mt >= ma_lt)&(ma_st>=ma_mt)), \n facecolor='green', alpha=0.4) \n ax1.fill_between(date,close, ma_mt,where=((ma_mt <= ma_lt)&(ma_st <= ma_mt)), \n facecolor='red', alpha=0.4)\n # Unprofitable conditions\n ax1.fill_between(date,close, ma_mt,where=((ma_mt >= ma_lt)&(ma_st >= ma_mt)&(close < ma_mt)), \n facecolor='darkgreen', alpha=1) \n ax1.fill_between(date,close, ma_mt,where=((ma_mt <= ma_lt)&(ma_st <= ma_mt)&(close >= ma_mt)), \n facecolor='darkred', alpha=1)\n\n if (np.sum(lt_hi) > 0): # LT range breakout\n ax1.plot([],[],linewidth=5, label= ' LT High', color='m',alpha=0.2)\n ax1.plot([],[],linewidth=5, label= ' LT Low', color='b',alpha=0.2)\n\n if pd.notnull(rg): # floor/ceiling regime present\n ax1.fill_between(date, close, lt_lo,\n where=((regime ==1) & (close > lt_lo) ), \n facecolor='b', alpha=0.2)\n ax1.fill_between(date,close, lt_hi,\n where=((regime ==-1) & (close < lt_hi)), \n facecolor='m', alpha=0.2)\n if (np.sum(st_hi) > 0): # ST range breakout\n ax1.fill_between(date, close, st_lo,\n where=((regime ==1)&(close > st_lo) ), \n facecolor='b', alpha=0.3)\n ax1.fill_between(date,close, st_hi,\n where=((regime ==-1) & (close < st_hi)), \n facecolor='m', alpha=0.3)\n\n elif pd.isnull(rg): # floor/ceiling regime absent \n ax1.fill_between(date, close, lt_lo,\n where=((close > lt_lo) ), facecolor='b', alpha=0.2)\n ax1.fill_between(date,close, lt_hi,\n where=((close < lt_hi)), facecolor='m', alpha=0.2)\n if (np.sum(st_hi) > 0): # ST 
range breakout\n ax1.fill_between(date, close, st_lo,\n where=((close > st_lo) & (st_lo >= lt_lo)), facecolor='b', alpha=0.3)\n ax1.fill_between(date,close, st_hi,\n where=((close < st_hi)& (st_hi <= lt_hi)), facecolor='m', alpha=0.3)\n\n if (np.sum(st_hi) > 0): # ST range breakout\n ax1.plot([],[],linewidth=5, label= ' ST High', color='m',alpha=0.3)\n ax1.plot([],[],linewidth=5, label= ' ST Low', color='b',alpha=0.3)\n\n ax1.plot(df.index, lt_lo,'-.' ,color='b', label= 'LT low',alpha=0.2)\n ax1.plot(df.index, lt_hi,'-.' ,color='m', label= 'LT high',alpha=0.2)\n except:\n pass\n \n for label in ax1.xaxis.get_ticklabels():\n label.set_rotation(45)\n ax1.grid(True)\n ax1.xaxis.label.set_color('k')\n ax1.yaxis.label.set_color('k')\n plt.xlabel('Date')\n plt.ylabel(str.upper(ticker) + ' Price')\n plt.title(str.upper(ticker))\n plt.legend()\n### Graph Regimes Combo ###","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### New Functions to process data\n\nThese two functions were not published earlier in the book.\n\n1. yf_droplevel(batch_download,ticker)\nThe batch download returns a multi-index dataframe. This drops a level from the multi-index df down to the single-stock level. \n \n2. last_row_dictionary(df)\n 1. creates dictionary with last row\n 2. 
when last row is N/A, creates additional column with date"},{"metadata":{"trusted":true},"cell_type":"code","source":"def yf_droplevel(batch_download,ticker):\n df = batch_download.iloc[:, batch_download.columns.get_level_values(1)==ticker]\n df.columns = df.columns.droplevel(1)\n df = df.dropna()\n return df\n\ndef last_row_dictionary(df):\n \n df_cols = list(df.columns)\n col_dict = {'Symbol':str.upper(ticker),'date':df.index.max().strftime('%Y%m%d')}\n for i, col_name in enumerate(df_cols):\n if pd.isnull(df.iloc[-1,i]):\n try:\n last_index = df[pd.notnull(df.iloc[:,i])].index[-1]\n len_last_index = len(df[:last_index]) - 1\n col_dict.update({col_name + '_dt': last_index.strftime('%Y%m%d')})\n col_dict.update({col_name : df.iloc[len_last_index,i]})\n except:\n col_dict.update({col_name + '_dt':np.nan})\n col_dict.update({col_name : np.nan})\n else:\n col_dict.update({col_name : df.iloc[-1,i]})\n return col_dict","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Control Panel\n\nThis is where all arguments and variables are centralised. 
Disseminating variables and arguments throughout the file is a common source of error, hence the centralisation."},{"metadata":{"trusted":true},"cell_type":"code","source":"\nwebsite = 'https://en.wikipedia.org/wiki/List_of_S%26P_500_companies'\n\nparams = ['2014-12-31', None, 63, 0.05, 0.05, 1.5, 2,5,2.5,3]\nstart,end,vlty_n,dist_pct,retrace_pct,threshold,dgt,d_vol,r_vol,lvl= [params[h] for h in range(len(params))]\n\nrel_var = ['^GSPC','SP500', 'USD']\nbm_ticker, bm_col, ccy_col = [rel_var[h] for h in range(len(rel_var))]\n\nwindow = 100\nst= fast = 50\nlt = slow = 200\n\nbatch_size = 20\nshow_batch = True\nsave_ticker_df = False\nsave_last_row_df = False\nsave_regime_df = False\n\nweb_df_cols = ['Symbol','Security','GICS Sector','GICS Sub-Industry']\nregime_cols = ['rg','rrg',\n 'smaC'+str(st)+str(lt),'smar'+str(st)+str(lt), 'boHL'+str(slow),\n 'borr'+str(slow),'ttH'+str(fast)+'L'+str(slow),'ttr'+str(fast)+'r'+str(slow)]\nswings_cols = ['flr_dt','flr','clg_dt', 'clg', 'rg_ch', \n 'Hi'+str(lvl)+'_dt','Hi'+str(lvl),'Lo'+str(lvl)+'_dt','Lo'+str(lvl) ,\n 'rflr_dt', 'rflr', 'rclg_dt', 'rclg', 'rrg_ch',\n 'rH'+str(lvl)+'_dt','rH'+str(lvl),'rL'+str(lvl)+'_dt','rL'+str(lvl) ]\nsymbol_cols = ['Symbol','date','Close']\n\nlast_row_df_cols = symbol_cols+['score']+regime_cols+swings_cols","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Download investment universe from website: Wikipedia\n\nRead the S&P 500 page from Wikipedia and extract a tickers list.\nThe line (tickers_list = tickers_list[:]) is used to slice the tickers list and avoid downloading data for the entire set each time"},{"metadata":{"trusted":true},"cell_type":"code","source":"web_df = pd.read_html(website)[0]\ntickers_list = list(web_df['Symbol'])\ntickers_list = tickers_list[:]\nprint('tickers_list',len(tickers_list))\nweb_df.head()","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Download & Process: the engine room\n\n1. 
Benchmark download closing price & currency adjustment\n2. dataframes and lists instantiation\n3. loop size: number of iterations necessary to loop over the tickers_list\n4. Outer loop:\n 1. m,n: index along the batch_list\n 2. batch_download: download using yfinance:\n 1. print batch tickers\n 2. donwload batch\n 3. try/except: append failed list\n 3. Second loop for every batch:\n 1. droplevel to ticker level\n 2. Calculate swings and regime: abs/rel\n 3. Third loop: absolute/relative series:\n 1. process regimes in absolute series\n 2. reset variables to relative series and process regimes second time\n 5. boolean save_ticker_df\n 4. Create a dictionary with last row values. When last row value are N/A, add another key for the date of the last value, then find value\n 5. append list of dictionary rows\n5. create a dataframe last_row_df from dictionary\n6. 'score' column: lateral sum of regime methods absolute & relative\n7. join last_row_df with web_df\n8. boolean save_regime_df\n \n "},{"metadata":{"trusted":true},"cell_type":"code","source":"# Appendix: The Engine Room\n\nbm_df = pd.DataFrame()\nbm_df[bm_col] = round(yf.download(tickers= bm_ticker,start= start, end = end,interval = \"1d\",\n group_by = 'column',auto_adjust = True, prepost = True, \n treads = True, proxy = None)['Close'],dgt)\nbm_df[ccy_col] = 1\nprint('benchmark',bm_df.tail(1))\n\nregime_df = pd.DataFrame()\nlast_row_df = pd.DataFrame()\nlast_row_list = []\nfailed = []\n\nloop_size = int(len(tickers_list) // batch_size) + 2\nfor t in range(1,loop_size): \n m = (t - 1) * batch_size\n n = t * batch_size\n batch_list = tickers_list[m:n]\n if show_batch:\n print(batch_list,m,n)\n \n try:\n batch_download = round(yf.download(tickers= batch_list,start= start, end = end, \n interval = \"1d\",group_by = 'column',auto_adjust = True, \n prepost = True, treads = True, proxy = None),dgt) \n \n for flat, ticker in enumerate(batch_list):\n df = yf_droplevel(batch_download,ticker) \n df = swings(df,rel = 
False)\n df = regime(df,lvl=2,rel = False)\n df = swings(df,rel = True)\n df = regime(df,lvl=2,rel= True) \n _o,_h,_l,_c = lower_upper_OHLC(df,relative = False)\n\n for a in range(2): \n df['sma'+str(_c)[:1]+str(st)+str(lt)] = regime_sma(df,_c,st,lt)\n df['bo'+str(_h)[:1]+str(_l)[:1]+ str(slow)] = regime_breakout(df,_h,_l,window)\n df['tt'+str(_h)[:1]+str(fast)+str(_l)[:1]+ str(slow)] = turtle_trader(df, _h, _l, slow, fast)\n _o,_h,_l,_c = lower_upper_OHLC(df,relative = True) \n try: \n last_row_list.append(last_row_dictionary(df))\n except:\n failed.append(ticker) \n except:\n failed.append(ticker)\nlast_row_df = pd.DataFrame.from_dict(last_row_list)\n\nif save_last_row_df:\n last_row_df.to_csv('last_row_df_'+ str(last_row_df['date'].max())+'.csv', date_format='%Y%m%d')\nprint('failed',failed)\n\nlast_row_df['score']= last_row_df[regime_cols].sum(axis=1)\nregime_df = web_df[web_df_cols].set_index('Symbol').join(\n last_row_df[last_row_df_cols].set_index('Symbol'), how='inner').sort_values(by='score')\n\nif save_regime_df:\n regime_df.to_csv('regime_df_'+ str(last_row_df['date'].max())+'.csv', date_format='%Y%m%d')\n\n","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Regime df heatmaps by sector and sub-industry\n\nThe Wikipedia page displays GICS Sectors and Sub-Industry. We group the regime_df by sectors, sub-industry and display\n\n1. Top down bird's eye view GICS Sectors\n2. Bottom up sub-industry\n3. 
Sector (alphabetical order) and score (ascending order) to arbitrage sub-industries within sectors\n"},{"metadata":{"scrolled":false,"trusted":true},"cell_type":"code","source":"groupby_cols = ['score'] + regime_cols\nsort_key = ['GICS Sector']\nregime_df.groupby(sort_key)[groupby_cols].mean().sort_values(\n by= 'score').style.background_gradient(\n subset= groupby_cols,cmap= 'RdYlGn').format('{:.1g}')","execution_count":null,"outputs":[]},{"metadata":{"scrolled":false,"trusted":true},"cell_type":"code","source":"groupby_cols = ['score'] + regime_cols\nsort_key = ['GICS Sub-Industry']\nregime_df.groupby(sort_key)[groupby_cols].mean().sort_values(\n by= 'score').style.background_gradient(\n subset= groupby_cols,cmap= 'RdYlGn').format('{:.1g}')","execution_count":null,"outputs":[]},{"metadata":{"scrolled":false,"trusted":true},"cell_type":"code","source":"groupby_cols = ['score'] + regime_cols\nsort_key = ['GICS Sector','GICS Sub-Industry']\nregime_df.groupby(sort_key)[groupby_cols].mean().sort_values(\n by= ['GICS Sector','score']).style.background_gradient(\n subset= groupby_cols,cmap= 'RdYlGn').format('{:.1g}')","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Single stock visualisation\nBenchmark needs to be processed only once"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \nbm_ticker= '^GSPC'\nbm_df = pd.DataFrame()\nbm_df[bm_col] = round(yf.download(tickers= bm_ticker,start= start, end = end,interval = \"1d\",\n group_by = 'column',auto_adjust = True, prepost = True, \n treads = True, proxy = None)['Close'],dgt)\nbm_df[ccy_col] = 1","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Single stock visualisation\n\nThis block of code calculates regimes for a single stock and visualises it\n\n1. ticker: select a ticker\n2. 
lvl: these are the swing levels used for the calculation of the floor/ceiling regime. Use lvl 2 or 3\n"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \nticker = 'FMC'\nlvl = 3 # Try different levels to see\n\ndf = round(yf.download(tickers= ticker,start= start, end = end,interval = \"1d\",\n group_by = 'column',auto_adjust = True, prepost = True, \n treads = True, proxy = None),dgt)\n\ndf = swings(df,rel = False)\ndf = regime(df,lvl=3,rel = False)\ndf = swings(df,rel = True)\ndf = regime(df,lvl=3,rel= True)\n_o,_h,_l,_c = lower_upper_OHLC(df,relative = False)\n\nfor a in range(2): \n df['sma'+str(_c)[:1]+str(st)+str(lt)] = regime_sma(df,_c,st,lt)\n df['bo'+str(_h)[:1]+str(_l)[:1]+ str(slow)] = regime_breakout(df,_h,_l,window)\n df['tt'+str(_h)[:1]+str(fast)+str(_l)[:1]+ str(slow)] = turtle_trader(df, _h, _l, slow, fast)\n _o,_h,_l,_c = lower_upper_OHLC(df,relative = True)\n df[['Close','rClose']].plot(figsize=(20,5),style=['k','grey'],title = str.upper(ticker)+ ' Relative & Absolute')\n\nplot_abs_cols = ['Close','Hi'+str(lvl), 'Lo'+str(lvl),'clg','flr','rg_ch','rg']\nplot_abs_style = ['k', 'ro', 'go', 'kv', 'k^','b:','b--']\ny2_abs = ['rg']\nplot_rel_cols = ['rClose','rH'+str(lvl), 'rL'+str(lvl),'rclg','rflr','rrg_ch','rrg']\nplot_rel_style = ['grey', 'ro', 'go', 'kv', 'k^','m:','m--']\ny2_rel = ['rrg']\ndf[plot_abs_cols].plot(secondary_y= y2_abs,figsize=(20,8),\n title = str.upper(ticker)+ ' Absolute, level:'+str(lvl),# grid=True,\n style=plot_abs_style)\n\ndf[plot_rel_cols].plot(secondary_y=y2_rel,figsize=(20,8),# grid=True,\n title = str.upper(ticker)+ ' Relative'+str.upper(bm_ticker)+', level:'+str(lvl),\n style=plot_rel_style)\n\ndf[plot_rel_cols + plot_abs_cols].plot(secondary_y=y2_rel + y2_abs,figsize=(20,8),# grid=True,\n title = str.upper(ticker)+ ' Absolute & Relative '+str.upper(bm_ticker)+', level:'+str(lvl),\n style=plot_rel_style + 
plot_abs_style)","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Regime combo visualisation\nThis final block of code plots the data in a more visually appealing way"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \nmav = [fast, slow, 200]\nbo = [fast, slow]\n# ma_st = ma_mt = ma_lt = lt_lo = lt_hi = st_lo = st_hi = 0\n\n_o,_h,_l,_c = lower_upper_OHLC(df,relative = False)\nma_st,ma_mt,ma_lt = [df[_c].rolling(mav[t]).mean() for t in range(len(mav))]\nst_lo,lt_lo = [df[_l].rolling(bo[t]).min() for t in range(len(bo))]\nst_hi,lt_hi = [df[_h].rolling(bo[t]).max() for t in range(len(bo))]\n\nrg_combo = ['Close','rg','Lo'+str(lvl),'Hi'+str(lvl),'Lo'+str(lvl),'Hi'+str(lvl),'clg','flr','rg_ch']\n_c,rg,lo,hi,slo,shi,clg,flr,rg_ch =[rg_combo[r] for r in range(len(rg_combo)) ]\ngraph_regime_combo(ticker,df,_c,rg,lo,hi,slo,shi,clg,flr,rg_ch,ma_st,ma_mt,ma_lt,lt_lo,lt_hi,st_lo,st_hi)\n\n_o,_h,_l,_c = lower_upper_OHLC(df,relative = True)\nma_st,ma_mt,ma_lt = [df[_c].rolling(mav[t]).mean() for t in range(len(mav))]\nst_lo,lt_lo = [df[_l].rolling(bo[t]).min() for t in range(len(bo))]\nst_hi,lt_hi = [df[_h].rolling(bo[t]).max() for t in range(len(bo))]\n\nrrg_combo = ['rClose','rrg','rL'+str(lvl),'rH'+str(lvl),'rL'+str(lvl),'rH'+str(lvl),'rclg','rflr','rrg_ch']\n_c,rg,lo,hi,slo,shi,clg,flr,rg_ch =[rrg_combo[r] for r in range(len(rrg_combo)) 
]\ngraph_regime_combo(ticker,df,_c,rg,lo,hi,slo,shi,clg,flr,rg_ch,ma_st,ma_mt,ma_lt,lt_lo,lt_hi,st_lo,st_hi)","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]}],"metadata":{"kernelspec":{"name":"python3","display_name":"Python 3","language":"python"},"language_info":{"name":"python","version":"3.6.13","mimetype":"text/x-python","codemirror_mode":{"name":"ipython","version":3},"pygments_lexer":"ipython3","nbconvert_exporter":"python","file_extension":".py"}},"nbformat":4,"nbformat_minor":4} -------------------------------------------------------------------------------- /Chapter 05/Chapter 5.ipynb: -------------------------------------------------------------------------------- 1 | {"cells":[{"metadata":{},"cell_type":"markdown","source":""},{"metadata":{},"cell_type":"markdown","source":"# Preliminary instruction\n\nTo follow the code in this chapter, the `yfinance` package must be installed in your environment. 
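As an aside, the rolling high/low breakout at the heart of this chapter can be sketched offline, with no yfinance download. This is a minimal sketch mirroring the `regime_breakout` function defined later in the chapter; the synthetic random-walk prices are an assumption standing in for downloaded OHLC data:

```python
# Offline sketch of the chapter's rolling high/low breakout regime.
# Synthetic prices (hypothetical data) stand in for a yfinance download.
import numpy as np
import pandas as pd

def regime_breakout(df, _h, _l, window):
    # +1 on a new rolling-window high, -1 on a new rolling-window low,
    # carried forward (ffill) until the opposite signal prints
    hl = np.where(df[_h] == df[_h].rolling(window).max(), 1,
                  np.where(df[_l] == df[_l].rolling(window).min(), -1, np.nan))
    return pd.Series(index=df.index, data=hl).fillna(method='ffill')

rng = np.random.default_rng(42)
close = pd.Series(100 + np.cumsum(rng.normal(0.1, 1.0, 500)))  # drifting random walk
df = pd.DataFrame({'High': close + 1.0, 'Low': close - 1.0})
df['bo_50'] = regime_breakout(df, 'High', 'Low', window=50)
print(df['bo_50'].dropna().unique())  # only +/-1 once a regime has printed
```

With real data, the synthetic frame would be replaced by the output of `yf.download(...)`, as the chapter does for Softbank (9984.T).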
If you do not have this installed yet, review Chapter 4 for instructions on how to do so."},{"metadata":{},"cell_type":"markdown","source":"# CHAPTER 5: Regime Definition \n"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5:Regime Definition \n\n# Import Libraries\nimport pandas as pd\nimport numpy as np\nimport yfinance as yf\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom scipy.signal import find_peaks","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Plot multiple regime methodologies in a colorful chart"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n### Graph Regimes ###\ndef graph_regime_combo(ticker,df,_c,rg,lo,hi,slo,shi,clg,flr,rg_ch,\n ma_st,ma_mt,ma_lt,lt_lo,lt_hi,st_lo,st_hi):\n \n '''\n https://www.color-hex.com/color-names.html\n ticker,df,_c: _c is closing price\n rg: regime -1/0/1 using floor/ceiling method\n lo,hi: small, noisy highs/lows\n slo,shi: swing lows/highs\n clg,flr: ceiling/floor\n \n rg_ch: regime change base\n ma_st,ma_mt,ma_lt: moving averages ST/MT/LT\n lt_lo,lt_hi: range breakout High/Low LT \n st_lo,st_hi: range breakout High/Low ST \n '''\n fig = plt.figure(figsize=(20,8))\n ax1 = plt.subplot2grid((1,1), (0,0))\n date = df.index\n close = df[_c]\n ax1.plot_date(df.index, close,'-', color='k', label=ticker.upper()) \n try:\n if pd.notnull(rg): \n base = df[rg_ch]\n regime = df[rg]\n\n if df[lo].count()>0:\n ax1.plot(df.index, df[lo],'.' ,color='r', label= 'swing low',alpha= 0.6)\n if df[hi].count()>0:\n ax1.plot(df.index, df[hi],'.' 
,color='g', label= 'swing high',alpha= 0.6) \n        if df[slo].count()>0:\n            ax1.plot(df.index, df[slo],'o' ,color='r', label= 'swing low',alpha= 0.8)\n        if df[shi].count()>0:\n            ax1.plot(df.index, df[shi],'o' ,color='g', label= 'swing high',alpha= 0.8)\n        if df[flr].count()>0:\n            plt.scatter(df.index, df[flr],c='k',marker='^',label='floor')\n        if df[clg].count() >0:\n            plt.scatter(df.index, df[clg],c='k',marker='v',label='ceiling')\n\n        ax1.plot([],[],linewidth=5, label= 'bear', color='m',alpha=0.1)\n        ax1.plot([],[],linewidth=5 , label= 'bull', color='b',alpha=0.1)\n        ax1.fill_between(date, close, base,where=((regime==1)&(close > base)), facecolor='b', alpha=0.1)\n        ax1.fill_between(date, close, base,where=((regime==1)&(close < base)), facecolor='b', alpha=0.4)\n        ax1.fill_between(date, close, base,where=((regime==-1)&(close < base)), facecolor='m', alpha=0.1)\n        ax1.fill_between(date, close, base,where=((regime==-1)&(close > base)), facecolor='m', alpha=0.4)\n\n    if np.sum(ma_st) >0 :\n        ax1.plot(df.index,ma_st,'-' ,color='lime', label= 'ST MA')\n        ax1.plot(df.index,ma_mt,'-' ,color='green', label= 'MT MA')\n        ax1.plot(df.index,ma_lt,'-' ,color='red', label= 'LT MA')\n\n        if pd.notnull(rg): # floor/ceiling regime present\n            # Profitable conditions\n            ax1.fill_between(date,close, ma_mt,where=((regime==1)&(ma_mt >= ma_lt)&(ma_st>=ma_mt)), \n                             facecolor='green', alpha=0.5) \n            ax1.fill_between(date,close, ma_mt,where=((regime==-1)&(ma_mt <= ma_lt)&(ma_st <= ma_mt)), \n                             facecolor='red', alpha=0.5)\n            # Unprofitable conditions\n            ax1.fill_between(date,close, ma_mt,where=((regime==1)&(ma_mt>=ma_lt)&(ma_st>=ma_mt)&(close<ma_mt)), \n                             facecolor='darkgreen', alpha=1) \n            ax1.fill_between(date,close, ma_mt,where=((regime==-1)&(ma_mt<=ma_lt)&(ma_st<=ma_mt)&(close>=ma_mt)), \n                             facecolor='darkred', alpha=1)\n\n        elif pd.isnull(rg): # floor/ceiling regime absent\n            # Profitable conditions\n            ax1.fill_between(date,close, ma_mt,where=((ma_mt >= ma_lt)&(ma_st>=ma_mt)), \n                             facecolor='green', alpha=0.4) \n            ax1.fill_between(date,close, ma_mt,where=((ma_mt <= ma_lt)&(ma_st <= ma_mt)), \n                             facecolor='red', alpha=0.4)\n            # Unprofitable conditions\n            
ax1.fill_between(date,close, ma_mt,where=((ma_mt >= ma_lt)&(ma_st >= ma_mt)&(close < ma_mt)), \n facecolor='darkgreen', alpha=1) \n ax1.fill_between(date,close, ma_mt,where=((ma_mt <= ma_lt)&(ma_st <= ma_mt)&(close >= ma_mt)), \n facecolor='darkred', alpha=1)\n\n if (np.sum(lt_hi) > 0): # LT range breakout\n ax1.plot([],[],linewidth=5, label= ' LT High', color='m',alpha=0.2)\n ax1.plot([],[],linewidth=5, label= ' LT Low', color='b',alpha=0.2)\n\n if pd.notnull(rg): # floor/ceiling regime present\n ax1.fill_between(date, close, lt_lo,\n where=((regime ==1) & (close > lt_lo) ), \n facecolor='b', alpha=0.2)\n ax1.fill_between(date,close, lt_hi,\n where=((regime ==-1) & (close < lt_hi)), \n facecolor='m', alpha=0.2)\n if (np.sum(st_hi) > 0): # ST range breakout\n ax1.fill_between(date, close, st_lo,\n where=((regime ==1)&(close > st_lo) ), \n facecolor='b', alpha=0.3)\n ax1.fill_between(date,close, st_hi,\n where=((regime ==-1) & (close < st_hi)), \n facecolor='m', alpha=0.3)\n\n elif pd.isnull(rg): # floor/ceiling regime absent \n ax1.fill_between(date, close, lt_lo,\n where=((close > lt_lo) ), facecolor='b', alpha=0.2)\n ax1.fill_between(date,close, lt_hi,\n where=((close < lt_hi)), facecolor='m', alpha=0.2)\n if (np.sum(st_hi) > 0): # ST range breakout\n ax1.fill_between(date, close, st_lo,\n where=((close > st_lo) & (st_lo >= lt_lo)), facecolor='b', alpha=0.3)\n ax1.fill_between(date,close, st_hi,\n where=((close < st_hi)& (st_hi <= lt_hi)), facecolor='m', alpha=0.3)\n\n if (np.sum(st_hi) > 0): # ST range breakout\n ax1.plot([],[],linewidth=5, label= ' ST High', color='m',alpha=0.3)\n ax1.plot([],[],linewidth=5, label= ' ST Low', color='b',alpha=0.3)\n\n ax1.plot(df.index, lt_lo,'-.' ,color='b', label= 'LT low',alpha=0.2)\n ax1.plot(df.index, lt_hi,'-.' 
,color='m', label= 'LT high',alpha=0.2)\n except:\n pass\n \n for label in ax1.xaxis.get_ticklabels():\n label.set_rotation(45)\n ax1.grid(True)\n ax1.xaxis.label.set_color('k')\n ax1.yaxis.label.set_color('k')\n plt.xlabel('Date')\n plt.ylabel(str.upper(ticker) + ' Price')\n plt.title(str.upper(ticker))\n plt.legend()\n### Graph Regimes Combo ###\n\n\n\n","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Range breakout regime methodology\n1. Function definition\n2. OHLC download using yfinance\n3. define regime\n4. Plot: Softbank one year high/low regime breakout definition "},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n\ndef regime_breakout(df,_h,_l,window):\n hl = np.where(df[_h] == df[_h].rolling(window).max(),1,\n np.where(df[_l] == df[_l].rolling(window).min(), -1,np.nan))\n roll_hl = pd.Series(index= df.index, data= hl).fillna(method= 'ffill')\n return roll_hl\n\nticker = '9984.T' # Softbank\nstart= '2016-12-31'\nend = None\ndf = yf.download(tickers= ticker,start= start, end = end,interval = \"1d\",\n group_by = 'column',auto_adjust = True, prepost = True, \n treads = True, proxy = None)\n\nwindow = 252\ndf['hi_'+str(window)] = df['High'].rolling(window).max()\ndf['lo_'+str(window)] = df['Low'].rolling(window).min()\ndf['bo_'+ str(window)]= regime_breakout(df= df,_h= 'High',_l= 'Low',window= window)\ndf[['Close','hi_'+str(window),'lo_'+str(window),'bo_'+ str(window)]].plot(secondary_y= ['bo_'+ str(window)],\n figsize=(20,5), style=['k','g:','r:','b-.'],\n title = str.upper(ticker)+' '+str(window)+' days high/low')\n","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Utilities functions\n1. lower_upper_OHLC: return _o,_h,_l,_c in small caps or title, absolute or relative\n2. 
regime_args: returns regime definition arguments"},{"metadata":{"trusted":true},"cell_type":"code","source":"def lower_upper_OHLC(df,relative = False):\n if relative==True:\n rel = 'r'\n else:\n rel= '' \n if 'Open' in df.columns:\n ohlc = [rel+'Open',rel+'High',rel+'Low',rel+'Close'] \n elif 'open' in df.columns:\n ohlc = [rel+'open',rel+'high',rel+'low',rel+'close']\n \n try:\n _o,_h,_l,_c = [ohlc[h] for h in range(len(ohlc))]\n except:\n _o=_h=_l=_c= np.nan\n return _o,_h,_l,_c\n\ndef regime_args(df,lvl,relative= False):\n if ('Low' in df.columns) & (relative == False):\n reg_val = ['Lo1','Hi1','Lo'+str(lvl),'Hi'+str(lvl),'rg','clg','flr','rg_ch']\n elif ('low' in df.columns) & (relative == False):\n reg_val = ['lo1','hi1','lo'+str(lvl),'hi'+str(lvl),'rg','clg','flr','rg_ch']\n elif ('Low' in df.columns) & (relative == True):\n reg_val = ['rL1','rH1','rL'+str(lvl),'rH'+str(lvl),'rrg','rclg','rflr','rrg_ch']\n elif ('low' in df.columns) & (relative == True):\n reg_val = ['rl1','rh1','rl'+str(lvl),'rh'+str(lvl),'rrg','rclg','rflr','rrg_ch']\n \n try: \n rt_lo,rt_hi,slo,shi,rg,clg,flr,rg_ch = [reg_val[s] for s in range(len(reg_val))]\n except:\n rt_lo=rt_hi=slo=shi=rg=clg=flr=rg_ch= np.nan\n return rt_lo,rt_hi,slo,shi,rg,clg,flr,rg_ch","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Turtle for dummies\nTurtle is an asymmetrical range breakout strategy:\n1. Enter on longer duration: slow\n2. Exit on faster duration: fast\n\nPlot: Softbank with asymmetrical regime breakout duration (turtle traders for dummies)"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n\ndef turtle_trader(df, _h, _l, slow, fast):\n '''\n _slow: Long/Short direction\n _fast: trailing stop loss\n '''\n _slow = regime_breakout(df,_h,_l,window = slow)\n _fast = regime_breakout(df,_h,_l,window = fast)\n turtle = pd. 
Series(index= df.index, \n data = np.where(_slow == 1,np.where(_fast == 1,1,0), \n np.where(_slow == -1, np.where(_fast ==-1,-1,0),0)))\n return turtle\n \n_o,_h,_l,_c = lower_upper_OHLC(df,relative = False)\n\nfast = 20\nslow = 50\n\ndf['bo_'+ str(slow)] = regime_breakout(df,_h,_l,window = slow)\ndf['bo_'+ str(fast)] = regime_breakout(df,_h,_l,window = fast)\ndf['turtle_'+ str(slow)+str(fast)] = turtle_trader(df, _h, _l, slow, fast)\nrg_cols = ['bo_'+str(slow),'bo_'+ str(fast),'turtle_'+ str(slow)+str(fast)]\n\ndf[['Close','bo_'+str(slow),'bo_'+ str(fast),'turtle_'+ str(slow)+str(fast)] ].plot(\n secondary_y= rg_cols,figsize=(20,5), style=['k','r','g:','b-.'],\n title = str.upper(ticker)+' '+str(rg_cols))","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"Plot Softbank regime using Turtle Trader methodology using the graph_regime_combo function. The darker shade is the shorter timeframe"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n# EDWARD, this is for you bo_lt, bo_st\n\n_o,_h,_l,_c = lower_upper_OHLC(df,relative = False)\n\nma_st = ma_mt = ma_lt = 0\nrg=lo=hi=slo=shi=clg=flr=rg_ch = None\n\nbo = [50, 200]\nst_lo,lt_lo = [df[_l].rolling(window = bo[t]).min() for t in range(len(bo))]\nst_hi,lt_hi = [df[_h].rolling(window = bo[t]).max() for t in range(len(bo))]\n\ngraph_regime_combo(ticker,df,_c,rg,lo,hi,slo,shi,clg,flr,rg_ch,ma_st,ma_mt,ma_lt,lt_lo,lt_hi,st_lo,st_hi)","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Moving average crossover\n1. regime_sma: simple moving average crossover strategy\n 1. Bullish: st > mt = 1\n 2. Bearish: st < mt = -1\n2. regime_ema: exponential moving average crossover strategy\n 1. Bullish: st > mt = 1\n 2. Bearish: st < mt = -1\n \n3. 
Plot: Softbank regimes using turtle breakout, SMA, and EMA"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n#### Regime SMA EMA ####\ndef regime_sma(df,_c,st,lt):\n    '''\n    bull +1: sma_st >= sma_lt , bear -1: sma_st <= sma_lt\n    '''\n    sma_lt = df[_c].rolling(lt).mean()\n    sma_st = df[_c].rolling(st).mean()\n    rg_sma = np.sign(sma_st - sma_lt)\n    return rg_sma\n\ndef regime_ema(df,_c,st,lt):\n    '''\n    bull +1: ema_st >= ema_lt , bear -1: ema_st <= ema_lt\n    '''\n    ema_st = df[_c].ewm(span=st,min_periods = st).mean()\n    ema_lt = df[_c].ewm(span=lt,min_periods = lt).mean()\n    rg_ema = np.sign(ema_st - ema_lt)\n    return rg_ema\n\nst = 50\nlt = 200\ndf['sma_' + str(st) + str(lt)] = regime_sma(df, _c='Close', st= st, lt= lt)\ndf['ema_' + str(st) + str(lt)] = regime_ema(df, _c='Close', st= st, lt= lt)\n\n_o,_h,_l,_c = lower_upper_OHLC(df,relative = False)\n\nrgme_cols = ['sma_' + str(st) + str(lt), 'ema_' + str(st) + str(lt),'turtle_'+ str(slow)+str(fast) ]\ndf[['Close','sma_' + str(st) + str(lt), 'ema_' + str(st) + str(lt),'turtle_'+ str(slow)+str(fast)] ].plot(\n    secondary_y= rgme_cols,figsize=(20,8), style=['k','orange','m--','b-.'],\n    title = str.upper(ticker)+' '+str(rgme_cols))","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Colorful chart plot\nPlot: Crossover on Softbank; darker zones are loss-making areas"},{"metadata":{"scrolled":false,"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n_o,_h,_l,_c = lower_upper_OHLC(df,relative = False)\n\nrg=lo=hi=slo=shi=clg=flr=rg_ch = None\nlt_lo = lt_hi = st_lo = st_hi = 0\n\nma_st = df[_c].rolling(window=50).mean()\nma_mt = df[_c].rolling(window=200).mean()\nma_lt = df[_c].rolling(window=200).mean()\n\ngraph_regime_combo(ticker,df,_c,rg,lo,hi,slo,shi,clg,flr,rg_ch,ma_st,ma_mt,ma_lt,lt_lo,lt_hi,st_lo,st_hi)","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"Same chart using a list 
comprehension to define moving averages"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n \nrg=lo=hi=slo=shi=clg=flr=rg_ch = None\nlt_lo = lt_hi = st_lo = st_hi = 0\n_o,_h,_l,_c = lower_upper_OHLC(df,relative = False)\n\nmav = [50, 200, 200]\nma_st,ma_mt,ma_lt = [df[_c].rolling(mav[t]).mean() for t in range(len(mav))]\n\ngraph_regime_combo(ticker,df,_c,rg,lo,hi,slo,shi,clg,flr,rg_ch,ma_st,ma_mt,ma_lt,lt_lo,lt_hi,st_lo,st_hi)","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Combine multiple regime methodologies into a visual graph\nPlot: Softbank crossover imposed on the Turtle for dummies"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n_o,_h,_l,_c = lower_upper_OHLC(df,relative = False)\n\nmav = [50, 200, 200]\nma_st,ma_mt,ma_lt = [df[_c].rolling(mav[t]).mean() for t in range(len(mav))]\n\nbo = [50, 252]\nst_lo,lt_lo = [df[_l].rolling(bo[t]).min() for t in range(len(bo))]\nst_hi,lt_hi = [df[_h].rolling(bo[t]).max() for t in range(len(bo))]\n\ngraph_regime_combo(ticker,df,_c,rg,lo,hi,slo,shi,clg,flr,rg_ch,ma_st,ma_mt,ma_lt,lt_lo,lt_hi,st_lo,st_hi)","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Triple moving average regime definition \nThe graph_regime_combo accommodates up to 3 moving averages.\n\nPlot: Softbank triple moving average crossover"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n_o,_h,_l,_c = lower_upper_OHLC(df,relative = False)\n \nrg=lo=hi=slo=shi=clg=flr=rg_ch = None\nlt_lo = lt_hi = st_lo = st_hi = 0\n\nmav = [20, 50, 200]\nma_st,ma_mt,ma_lt = [df[_c].rolling(mav[t]).mean() for t in range(len(mav))]\n\ngraph_regime_combo(ticker,df,_c,rg,lo,hi,slo,shi,clg,flr,rg_ch,ma_st,ma_mt,ma_lt,lt_lo,lt_hi,st_lo,st_hi)","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Floor/Ceiling Methodology\n\nThat 
method is originally a variation on the higher high/higher low method. Unlike the higher high/higher low method, only one of the two following conditions has to be fulfilled for the regime to change:\n    1. Bearish: A swing high has to be materially lower than the peak.\n    2. Bullish: A swing low has to be materially higher than the bottom.\n\nThe swings do not even have to be consecutive for the regime to change. \n\nThe floor/ceiling methodology is conceptually simple. It is however not easy to calculate. It is a two-step process:\n    1. Swing detection\n    2. Regime definition\n\nDownload raw data: SPY, a proxy Exchange-Traded Fund (ETF) for the S&P 500"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Swing detection \nticker = 'SPY' \n\nstart= '2016-12-31'\nend = None\nraw_data = round(yf.download(tickers= ticker,start= start, end = end,interval = \"1d\",\n                 group_by = 'column',auto_adjust = True, prepost = True, \n                 treads = True, proxy = None),2)\n\n_o,_h,_l,_c = lower_upper_OHLC(df,relative = False)","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Import scipy.signal"},{"metadata":{"trusted":true},"cell_type":"code","source":"\nfrom scipy.signal import *","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Historical swings\n1. hilo_alternation: reduces a dataframe to a succession of highs & lows. \n    1. It eliminates same-side consecutive highs and lows: highs are assigned a minus sign & lows a positive sign.\n    2. It keeps the value that marks the extreme point.\n2. historical_swings: This is the fractal part of the algorithm where we look for the same pattern while zooming out. Creates multiple level columns of highs & lows. 
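The peak-finding building block behind the swing detection can be illustrated in isolation. This is a minimal sketch on a toy price array (an assumption, not the book's data): highs come from `scipy.signal.find_peaks` on the series, and lows from `find_peaks` on its negation, the same trick `historical_swings` uses with the averaged price:

```python
# Toy illustration (hypothetical data) of scipy.signal.find_peaks,
# the building block used by historical_swings.
import numpy as np
from scipy.signal import find_peaks

px = np.array([10.0, 12.0, 11.0, 9.0, 11.5, 10.5, 13.0, 12.0])

highs, _ = find_peaks(px)    # indices of local maxima (strictly above both neighbours)
lows, _ = find_peaks(-px)    # negate the series to surface local minima

print(highs.tolist(), lows.tolist())  # [1, 4, 6] [3, 5]
```

Note that the endpoints are never flagged — `find_peaks` only marks interior points above both neighbours, which is one reason the latest, still-unconfirmed swing gets separate treatment further down.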
At the end of every iteration the hilo df is reduced using the dropna method.\n"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n#### hilo_alternation(hilo, dist= None, hurdle= None) ####\ndef hilo_alternation(hilo, dist= None, hurdle= None):\n i=0 \n while (np.sign(hilo.shift(1)) == np.sign(hilo)).any(): # runs until duplicates are eliminated\n\n # removes swing lows > swing highs\n hilo.loc[(np.sign(hilo.shift(1)) != np.sign(hilo)) & # hilo alternation test\n (hilo.shift(1)<0) & # previous datapoint: high\n (np.abs(hilo.shift(1)) < np.abs(hilo) )] = np.nan # high[-1] < low, eliminate low \n\n hilo.loc[(np.sign(hilo.shift(1)) != np.sign(hilo)) & # hilo alternation\n (hilo.shift(1)>0) & # previous swing: low\n (np.abs(hilo ) < hilo.shift(1))] = np.nan # swing high < swing low[-1]\n\n # alternation test: removes duplicate swings & keep extremes\n hilo.loc[(np.sign(hilo.shift(1)) == np.sign(hilo)) & # same sign\n (hilo.shift(1) < hilo )] = np.nan # keep lower one\n\n hilo.loc[(np.sign(hilo.shift(-1)) == np.sign(hilo)) & # same sign, forward looking \n (hilo.shift(-1) < hilo )] = np.nan # keep forward one\n\n # removes noisy swings: distance test\n if pd.notnull(dist):\n hilo.loc[(np.sign(hilo.shift(1)) != np.sign(hilo))&\\\n (np.abs(hilo + hilo.shift(1)).div(dist, fill_value=1)< hurdle)] = np.nan\n\n # reduce hilo after each pass\n hilo = hilo.dropna().copy() \n i+=1\n if i == 4: # breaks infinite loop\n break \n return hilo\n#### hilo_alternation(hilo, dist= None, hurdle= None) ####\n\n#### historical_swings(df,_o,_h,_l,_c, dist= None, hurdle= None) #### \ndef historical_swings(df,_o,_h,_l,_c, dist= None, hurdle= None):\n \n reduction = df[[_o,_h,_l,_c]].copy() \n reduction['avg_px'] = round(reduction[[_h,_l,_c]].mean(axis=1),2)\n highs = reduction['avg_px'].values\n lows = - reduction['avg_px'].values\n reduction_target = len(reduction) // 100\n# print(reduction_target )\n\n n = 0\n while len(reduction) >= 
reduction_target: \n highs_list = find_peaks(highs, distance = 1, width = 0)\n lows_list = find_peaks(lows, distance = 1, width = 0)\n hilo = reduction.iloc[lows_list[0]][_l].sub(reduction.iloc[highs_list[0]][_h],fill_value=0)\n\n # Reduction dataframe and alternation loop\n hilo_alternation(hilo, dist= None, hurdle= None)\n reduction['hilo'] = hilo\n\n # Populate reduction df\n n += 1 \n reduction[str(_h)[:2]+str(n)] = reduction.loc[reduction['hilo']<0 ,_h]\n reduction[str(_l)[:2]+str(n)] = reduction.loc[reduction['hilo']>0 ,_l]\n\n # Populate main dataframe\n df[str(_h)[:2]+str(n)] = reduction.loc[reduction['hilo']<0 ,_h]\n df[str(_l)[:2]+str(n)] = reduction.loc[reduction['hilo']>0 ,_l]\n \n # Reduce reduction\n reduction = reduction.dropna(subset= ['hilo'])\n reduction.fillna(method='ffill', inplace = True)\n highs = reduction[str(_h)[:2]+str(n)].values\n lows = -reduction[str(_l)[:2]+str(n)].values\n \n if n >= 9:\n break\n \n return df\n#### historical_swings(df,_o,_h,_l,_c, dist= None, hurdle= None) ####\n\n\ndf = raw_data.copy()\n_o,_h,_l,_c = lower_upper_OHLC(df,relative = False)\n# ohlc = ['Open','High','Low','Close']\n# _o,_h,_l,_c = [ohlc[h] for h in range(len(ohlc))]\nrhs = ['Hi1', 'Lo1','Hi2', 'Lo2', 'Hi3', 'Lo3']\nrt_hi,rt_lo,_hi,_lo,shi,slo = [rhs[h] for h in range(len(rhs))]\n\ndf= historical_swings(df,_o,_h,_l,_c,dist= None, hurdle= None)\n\ndf[[_c,rt_hi,rt_lo,_hi,_lo,shi,slo ]].plot(\n style=['grey','y.', 'c.','r.', 'g.', 'rv', 'g^'],\n figsize=(20,5),grid=True, title = str.upper(ticker))\ndf[[_c,shi,slo]].plot(style=['grey','rv', 'g^'],\n figsize=(20,5),grid=True, title = str.upper(ticker))","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Eliminate false positives\n\nThe cleanup_latest_swing() function removes false positives from the latest swing high and low.\n\nThe code takes the following steps:\n1. The code identifies the latest swing low and high\n2. Identify the most recent swing\n3. 
If a false positive, assign N/A"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n#### cleanup_latest_swing(df, shi, slo, rt_hi, rt_lo) ####\ndef cleanup_latest_swing(df, shi, slo, rt_hi, rt_lo): \n    '''\n    removes false positives\n    '''\n    # latest swing\n    shi_dt = df.loc[pd.notnull(df[shi]), shi].index[-1]\n    s_hi = df.loc[pd.notnull(df[shi]), shi][-1]\n    slo_dt = df.loc[pd.notnull(df[slo]), slo].index[-1] \n    s_lo = df.loc[pd.notnull(df[slo]), slo][-1] \n    len_shi_dt = len(df[:shi_dt])\n    len_slo_dt = len(df[:slo_dt])\n    \n\n    # Reset false positives to np.nan\n    for i in range(2):\n        \n        if (len_shi_dt > len_slo_dt) & ((df.loc[shi_dt:,rt_hi].max()> s_hi) | (s_hi < s_lo)):\n            df.loc[shi_dt, shi] = np.nan\n            len_shi_dt = 0\n        elif (len_slo_dt > len_shi_dt) & ((df.loc[slo_dt:,rt_lo].min()< s_lo)| (s_hi < s_lo)):\n            df.loc[slo_dt, slo] = np.nan\n            len_slo_dt = 0\n        else:\n            pass\n    \n    return df\n#### cleanup_latest_swing(df, shi, slo, rt_hi, rt_lo) ####\n\ndf = cleanup_latest_swing(df, shi, slo, rt_hi, rt_lo)\ndf[[_c, rt_hi, rt_lo, shi, slo]].plot(style=['grey', 'c.','y.', 'rv', 'g^'],\n        figsize=(20,5),grid=True, title = str.upper(ticker))","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Latest swing variables\n\nThe latest_swing_variables() function instantiates the variables used by the distance and retest/retracement tests: direction (ud), base (bs) and its date (bs_dt), the retest (_rt) and swing (_swg) columns, and the highest high/lowest low since the latest swing (hh_ll) with its date (hh_ll_dt)."},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n#### latest_swing_variables(df, shi, slo, rt_hi, rt_lo, _h, _l, _c) ####\ndef latest_swing_variables(df, shi, slo, rt_hi, rt_lo, _h, _l, _c):\n    '''\n    Latest swings dates & values\n    '''\n    shi_dt = df.loc[pd.notnull(df[shi]), shi].index[-1]\n    slo_dt = df.loc[pd.notnull(df[slo]), slo].index[-1]\n    s_hi = df.loc[pd.notnull(df[shi]), shi][-1]\n    s_lo = df.loc[pd.notnull(df[slo]), slo][-1]\n    \n    if slo_dt > shi_dt: \n        swg_var = [1,s_lo,slo_dt,rt_lo,shi, df.loc[slo_dt:,_h].max(), df.loc[slo_dt:, _h].idxmax()] \n    elif shi_dt > slo_dt: \n        swg_var = [-1,s_hi,shi_dt,rt_hi,slo, df.loc[shi_dt:, _l].min(),df.loc[shi_dt:, _l].idxmin()] \n    else: \n        ud = 0\n    ud, bs, bs_dt, _rt, _swg, hh_ll, hh_ll_dt = [swg_var[h] for h in range(len(swg_var))] \n    \n    return ud, bs, bs_dt, _rt, _swg, hh_ll, hh_ll_dt\n#### latest_swings(df, shi, slo, rt_hi, rt_lo, _h, _l, _c, _vol) ####\n \nud,bs,bs_dt,_rt,_swg,hh_ll,hh_ll_dt = latest_swing_variables(df,shi,slo,rt_hi,rt_lo,_h,_l,_c)\n\nud, bs, bs_dt, _rt, _swg, hh_ll, hh_ll_dt","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Distance test \nThe last swing validation is in two parts:\n1. distance test: sufficient distance from the last swing. This distance test acts as a filter. This function has two built-in tests:\n    1. Distance expressed as a multiple of volatility. We use the Average True Range (ATR) as the measure of volatility\n    2. Distance as a fixed percentage\n2. 
Retest swing"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n#### test_distance(ud, bs, hh_ll, vlty, dist_vol, dist_pct) ####\ndef test_distance(ud,bs, hh_ll, dist_vol, dist_pct): \n \n # priority: 1. Vol 2. pct 3. dflt\n if (dist_vol > 0): \n distance_test = np.sign(abs(hh_ll - bs) - dist_vol)\n elif (dist_pct > 0):\n distance_test = np.sign(abs(hh_ll / bs - 1) - dist_pct)\n else:\n distance_test = np.sign(dist_pct)\n \n return int(max(distance_test,0) * ud)\n#### test_distance(ud, bs, hh_ll, vlty, dist_vol, dist_pct) ####\n\n#### ATR ####\ndef average_true_range(df, _h, _l, _c, n):\n '''\n http://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:average_true_range_atr\n '''\n atr = (df[_h].combine(df[_c].shift(), max) - df[_l].combine(df[_c].shift(), min)).rolling(window=n).mean()\n return atr\n\n#### ATR ####\n\ndist_vol = round(average_true_range(df,_h,_l,_c,n=63)[hh_ll_dt] * 2,2)\ndist_pct = 0.05\n_sign = test_distance(ud,bs, hh_ll, dist_vol, dist_pct)\n_sign","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Retest swing\n\nThis little function packs a surprisingly good punch. The logic is symmetrical for a swing high or low. \n1. Swing high:\n 1. Detect the highest high from swing low\n 2. From the highest high, identify the highest retest low\n 3. When the price closes below the highest retest low: swing high = highest high.\n2. Swing lows:\n 1. Detect the lowest low from the swing high\n 2. From the lowest low, identify the lowest retest high\n 3. 
When the price closes above the lowest retest high: swing low = lowest low.\n \nNote: retest resets automatically when finding a new highest high/lowest low"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n#### retest_swing(df, _sign, _rt, hh_ll_dt, hh_ll, _c, _swg) ####\ndef retest_swing(df, _sign, _rt, hh_ll_dt, hh_ll, _c, _swg):\n rt_sgmt = df.loc[hh_ll_dt:, _rt] \n\n if (rt_sgmt.count() > 0) & (_sign != 0): # Retests exist and distance test met \n if _sign == 1: # \n rt_list = [rt_sgmt.idxmax(),rt_sgmt.max(),df.loc[rt_sgmt.idxmax():, _c].cummin()]\n \n elif _sign == -1:\n rt_list = [rt_sgmt.idxmin(), rt_sgmt.min(), df.loc[rt_sgmt.idxmin():, _c].cummax()]\n rt_dt,rt_hurdle, rt_px = [rt_list[h] for h in range(len(rt_list))]\n\n if str(_c)[0] == 'r':\n df.loc[rt_dt,'rrt'] = rt_hurdle\n elif str(_c)[0] != 'r':\n df.loc[rt_dt,'rt'] = rt_hurdle \n\n if (np.sign(rt_px - rt_hurdle) == - np.sign(_sign)).any():\n df.at[hh_ll_dt, _swg] = hh_ll \n return df\n#### retest_swing(df, _sign, _rt, hh_ll_dt, hh_ll, _c, _swg) ####\ndf = retest_swing(df, _sign, _rt, hh_ll_dt, hh_ll, _c, _swg)\ntry:\n df['rt '] = df['rt'].fillna(method='ffill')\n df[bs_dt:][[_c, rt_hi, rt_lo,\n shi, slo,'rt']].plot(style=['grey', 'c.','y.',\n 'rv', 'g^', 'ko'],figsize=(20,5),grid=True, title = str.upper(ticker))\nexcept:\n df[bs_dt:][[_c, rt_hi, rt_lo,\n shi, slo]].plot(style=['grey', 'c.','y.',\n 'rv', 'g^', 'ko'],figsize=(20,5),grid=True, title = str.upper(ticker))","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Retracement swing\nThis function is an alternative to the retest method. Once the price has moved far enough in the opposite direction, then it is usually safe to conclude that a swing has been printed.\n1. Calculate the retracement from the extreme value, either the minimum from the top or the maximum from the bottom\n2. 
Distance test in units of volatility or in percentage points\n"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n#### retracement_swing(df, _sign, _swg, _c, hh_ll_dt, hh_ll, vlty, retrace_vol, retrace_pct) ####\ndef retracement_swing(df, _sign, _swg, _c, hh_ll_dt, hh_ll, vlty, retrace_vol, retrace_pct):\n if _sign == 1: #\n retracement = df.loc[hh_ll_dt:, _c].min() - hh_ll\n\n if (vlty > 0) & (retrace_vol > 0) & ((abs(retracement / vlty) - retrace_vol) > 0):\n df.at[hh_ll_dt, _swg] = hh_ll\n elif (retrace_pct > 0) & ((abs(retracement / hh_ll) - retrace_pct) > 0):\n df.at[hh_ll_dt, _swg] = hh_ll\n\n elif _sign == -1:\n retracement = df.loc[hh_ll_dt:, _c].max() - hh_ll\n if (vlty > 0) & (retrace_vol > 0) & ((round(retracement / vlty ,1) - retrace_vol) > 0):\n df.at[hh_ll_dt, _swg] = hh_ll\n elif (retrace_pct > 0) & ((round(retracement / hh_ll , 4) - retrace_pct) > 0):\n df.at[hh_ll_dt, _swg] = hh_ll\n else:\n retracement = 0\n return df\n#### retracement_swing(df, _sign, _swg, _c, hh_ll_dt, hh_ll, vlty, retrace_vol, retrace_pct) ####\n\n# ohlc = ['Open','High','Low','Close'] \n# _o,_h,_l,_c = [ohlc[h] for h in range(len(ohlc))]\n_o,_h,_l,_c = lower_upper_OHLC(df,relative = False)\n\nvlty = round(average_true_range(df=df, _h= _h, _l= _l, _c= _c , n=63)[hh_ll_dt],2)\ndist_vol = 5 * vlty\ndist_pct = 0.05\n_sign = test_distance(ud,bs, hh_ll, dist_vol, dist_pct)\ndf = retest_swing(df, _sign, _rt, hh_ll_dt, hh_ll, _c, _swg)\nretrace_vol = 2.5 * vlty\nretrace_pct = 0.05\ndf = retracement_swing(df,_sign,_swg,_c,hh_ll_dt,hh_ll, vlty,retrace_vol, retrace_pct)\n\ndf[[_c,_hi,_lo,shi,slo]].plot(\n style=['grey','r.', 'g.', 'rv', 'g^'],\n figsize=(20,5),grid=True, title = str.upper(ticker))\n\ndf[[_c,shi,slo]].plot(style=['grey','rv', 'g^'],\n figsize=(20,5),grid=True, title = str.upper(ticker))","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Relative function \nWe saw this function in chapter 
4"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n### RELATIVE\ndef relative(df,_o,_h,_l,_c, bm_df, bm_col, ccy_df, ccy_col, dgt, start, end,rebase=True):\n '''\n df: df\n bm_df, bm_col: df benchmark dataframe & column name\n ccy_df,ccy_col: currency dataframe & column name\n dgt: rounding decimal\n start/end: string or offset\n rebase: boolean rebase to beginning or continuous series\n '''\n # Slice df dataframe from start to end period: either offset or datetime\n df = df[start:end] \n \n # inner join of benchmark & currency: only common values are preserved\n df = df.join(bm_df[[bm_col]],how='inner') \n df = df.join(ccy_df[[ccy_col]],how='inner')\n\n # rename benchmark name as bm and currency as ccy\n df.rename(columns={bm_col:'bm', ccy_col:'ccy'},inplace=True)\n\n # Adjustment factor: calculate the product of benchmark and currency\n df['bmfx'] = round(df['bm'].mul(df['ccy']),dgt).fillna(method='ffill')\n if rebase == True:\n df['bmfx'] = df['bmfx'].div(df['bmfx'][0])\n\n # Divide absolute price by fxcy adjustment factor and rebase to first value\n df['r' + str(_o)] = round(df[_o].div(df['bmfx']),dgt)\n df['r' + str(_h)] = round(df[_h].div(df['bmfx']),dgt)\n df['r'+ str(_l)] = round(df[_l].div(df['bmfx']),dgt)\n df['r'+ str(_c)] = round(df[_c].div(df['bmfx']),dgt)\n df = df.drop(['bm','ccy','bmfx'],axis=1)\n \n return (df)\n\n### RELATIVE ###","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### S&P 500 vs Nasdaq via ETF"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\nbm_df = pd.DataFrame()\nbm_col = 'ONEQ'\nccy_col = 'USD'\ndgt= 3\nbm_df[bm_col] = round(yf.download(tickers= bm_col,start= start, end = end,interval = \"1d\",\n group_by = 'column',auto_adjust = True, prepost = True, \n threads = True, proxy = None)['Close'],2)\nbm_df[ccy_col] = 1\n\ndf = raw_data.copy()\n# ohlc = ['Open','High','Low','Close']\n# _o,_h,_l,_c = 
[ohlc[h] for h in range(len(ohlc))]\n_o,_h,_l,_c = lower_upper_OHLC(df,relative = False)\nrhs = ['Hi1', 'Lo1','Hi2', 'Lo2', 'Hi3', 'Lo3']\nrt_hi,rt_lo,_hi,_lo,shi,slo = [rhs[h] for h in range(len(rhs))]\ndf= relative(df,_o,_h,_l,_c, bm_df, bm_col, ccy_df=bm_df, \n ccy_col=ccy_col, dgt= dgt, start=start, end= end,rebase=True)\n \nfor a in np.arange(0,2): \n df = historical_swings(df,_o,_h,_l,_c, dist= None, hurdle= None)\n df = cleanup_latest_swing(df, shi, slo, rt_hi, rt_lo)\n ud, bs, bs_dt, _rt, _swg, hh_ll, hh_ll_dt = latest_swing_variables(df, shi, slo,rt_hi,rt_lo,_h, _l,_c)\n vlty = round(average_true_range(df=df, _h= _h, _l= _l, _c= _c , n=63)[hh_ll_dt],2)\n dist_vol = 5 * vlty\n dist_pct = 0.05\n _sign = test_distance(ud,bs, hh_ll, dist_vol, dist_pct)\n df = retest_swing(df, _sign, _rt, hh_ll_dt, hh_ll, _c, _swg)\n retrace_vol = 2.5 * vlty\n retrace_pct = 0.05\n df = retracement_swing(df,_sign,_swg,_c,hh_ll_dt,hh_ll, vlty,retrace_vol, retrace_pct)\n _o,_h,_l,_c = lower_upper_OHLC(df,relative = True)\n rrhs = ['rH1', 'rL1','rH2', 'rL2', 'rH3', 'rL3']\n rt_hi,rt_lo,_hi,_lo,shi,slo = [rrhs[h] for h in range(len(rrhs))]\n\n","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Plot data"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n_o,_h,_l,_c = lower_upper_OHLC(df,relative = False)\n\ndf[['Close','Hi1','Lo1','Hi2','Lo2','Hi3','Lo3']].plot(style=['grey','y.', 'c.','r.', 'g.', 'rv', 'g^'],\n figsize=(20,5),grid=True, title = str.upper(ticker))\n\ndf[['Close','Hi3','Lo3']].plot(\n style=['grey', 'rv', 'g^'],\n figsize=(20,5),grid=True, title = str.upper(ticker))\n\ndf[['Close','Hi3','Lo3','rClose','rH3','rL3']].plot(\n style=['grey','rv', 'g^','k:','mv','b^'],\n figsize=(20,5),grid=True, title = str.upper(ticker)+' vs '+str.upper(bm_col))\n\ndf[['rClose','rH3','rL3']].plot(\n style=['k:','mv','b^'],\n figsize=(20,5),grid=True, title = str.upper(ticker)+' vs 
'+str.upper(bm_col))","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Regime Definition\n\n\nThe formula is a z-score of the distance from peak/trough to subsequent swing highs/lows. The z-score is a delta expressed in units of volatility (ATR, standard deviations, realized or implied). \n1. Classic bull regime:\n 1. Look for a ceiling: The search window starts from the floor\n 2. z-score: ceiling_test = (swing_high[i]-top)/stdev[i]\n 3. If ceiling_test <= -x standard deviations, the regime has turned bearish\n2. Classic bear regime:\n 1. Look for a floor: The search window starts from the ceiling\n 2. z-score: floor_test = (swing_low[i]-bottom)/stdev[i]\n 3. If floor_test >= x standard deviations, the regime has turned bullish\n3. Exception handling: This happens when price penetrates discovery swings:\n 1. Initial penetration: \n 1. For a floor, we look for the lowest low since the discovery swing low. \n 2. For a ceiling, we look for the highest high since the discovery swing high.\n 2. The regime is reset to the previously dominant one. \n 3. Reversion: prices sometimes whipsaw back through the penetrated swing; this step keeps the regime responsive to such randomness.\n 4. 
Once the loop is over, columns are populated."},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n#### regime_floor_ceiling(df, hi,lo,cl, slo, shi,flr,clg,rg,rg_ch,stdev,threshold) ####\ndef regime_floor_ceiling(df, _h,_l,_c,slo, shi,flr,clg,rg,rg_ch,stdev,threshold):\n # Lists instantiation\n threshold_test,rg_ch_ix_list,rg_ch_list = [],[], []\n floor_ix_list, floor_list, ceiling_ix_list, ceiling_list = [],[],[],[]\n\n ### Range initialisation to 1st swing\n floor_ix_list.append(df.index[0])\n ceiling_ix_list.append(df.index[0])\n \n ### Boolean variables\n ceiling_found = floor_found = breakdown = breakout = False\n\n ### Swings lists\n swing_highs = list(df[pd.notnull(df[shi])][shi])\n swing_highs_ix = list(df[pd.notnull(df[shi])].index)\n swing_lows = list(df[pd.notnull(df[slo])][slo])\n swing_lows_ix = list(df[pd.notnull(df[slo])].index)\n loop_size = np.maximum(len(swing_highs),len(swing_lows))\n\n ### Loop through swings\n for i in range(loop_size): \n\n ### asymmetric swing list: default to last swing if shorter list\n try:\n s_lo_ix = swing_lows_ix[i]\n s_lo = swing_lows[i]\n except:\n s_lo_ix = swing_lows_ix[-1]\n s_lo = swing_lows[-1]\n\n try:\n s_hi_ix = swing_highs_ix[i]\n s_hi = swing_highs[i]\n except:\n s_hi_ix = swing_highs_ix[-1]\n s_hi = swing_highs[-1]\n\n swing_max_ix = np.maximum(s_lo_ix,s_hi_ix) # latest swing index\n\n ### CLASSIC CEILING DISCOVERY\n if (ceiling_found == False): \n top = df[floor_ix_list[-1] : s_hi_ix][_h].max()\n ceiling_test = round((s_hi - top) / stdev[s_hi_ix] ,1) \n\n ### Classic ceiling test\n if ceiling_test <= -threshold: \n ### Boolean flags reset\n ceiling_found = True \n floor_found = breakdown = breakout = False \n threshold_test.append(ceiling_test)\n\n ### Append lists\n ceiling_list.append(top)\n ceiling_ix_list.append(df[floor_ix_list[-1]: s_hi_ix][_h].idxmax()) \n rg_ch_ix_list.append(s_hi_ix)\n rg_ch_list.append(s_hi) \n\n ### EXCEPTION HANDLING: price penetrates 
discovery swing\n ### 1. if ceiling found, calculate regime since rg_ch_ix using close.cummax\n elif (ceiling_found == True):\n close_high = df[rg_ch_ix_list[-1] : swing_max_ix][_c].cummax()\n df.loc[rg_ch_ix_list[-1] : swing_max_ix, rg] = np.sign(close_high - rg_ch_list[-1])\n\n ### 2. if price.cummax penetrates swing high: regime turns bullish, breakout\n if (df.loc[rg_ch_ix_list[-1] : swing_max_ix, rg] >0).any():\n ### Boolean flags reset\n floor_found = ceiling_found = breakdown = False\n breakout = True\n\n ### 3. if breakout, test for bearish pullback from highest high since rg_ch_ix\n if (breakout == True):\n brkout_high_ix = df.loc[rg_ch_ix_list[-1] : swing_max_ix, _c].idxmax()\n brkout_low = df[brkout_high_ix : swing_max_ix][_c].cummin()\n df.loc[brkout_high_ix : swing_max_ix, rg] = np.sign(brkout_low - rg_ch_list[-1])\n\n\n ### CLASSIC FLOOR DISCOVERY \n if (floor_found == False): \n bottom = df[ceiling_ix_list[-1] : s_lo_ix][_l].min()\n floor_test = round((s_lo - bottom) / stdev[s_lo_ix],1)\n\n ### Classic floor test\n if (floor_test >= threshold): \n \n ### Boolean flags reset\n floor_found = True\n ceiling_found = breakdown = breakout = False\n threshold_test.append(floor_test)\n\n ### Append lists\n floor_list.append(bottom)\n floor_ix_list.append(df[ceiling_ix_list[-1] : s_lo_ix][_l].idxmin()) \n rg_ch_ix_list.append(s_lo_ix)\n rg_ch_list.append(s_lo)\n\n ### EXCEPTION HANDLING: price penetrates discovery swing\n ### 1. if floor found, calculate regime since rg_ch_ix using close.cummin\n elif (floor_found == True): \n close_low = df[rg_ch_ix_list[-1] : swing_max_ix][_c].cummin()\n df.loc[rg_ch_ix_list[-1] : swing_max_ix, rg] = np.sign(close_low - rg_ch_list[-1])\n\n ### 2. if price.cummin penetrates swing low: regime turns bearish, breakdown\n if (df.loc[rg_ch_ix_list[-1] : swing_max_ix, rg] <0).any():\n floor_found = ceiling_found = breakout = False\n breakdown = True \n\n ### 3. 
if breakdown, test for bullish rebound from lowest low since rg_ch_ix\n if (breakdown == True):\n brkdwn_low_ix = df.loc[rg_ch_ix_list[-1] : swing_max_ix, _c].idxmin() # lowest low \n breakdown_rebound = df[brkdwn_low_ix : swing_max_ix][_c].cummax() # rebound\n df.loc[brkdwn_low_ix : swing_max_ix, rg] = np.sign(breakdown_rebound - rg_ch_list[-1])\n# breakdown = False\n# breakout = True \n\n ### POPULATE FLOOR,CEILING, RG CHANGE COLUMNS\n df.loc[floor_ix_list[1:], flr] = floor_list\n df.loc[ceiling_ix_list[1:], clg] = ceiling_list\n df.loc[rg_ch_ix_list, rg_ch] = rg_ch_list\n df[rg_ch] = df[rg_ch].fillna(method='ffill')\n\n ### regime from last swing\n df.loc[swing_max_ix:,rg] = np.where(ceiling_found, # if ceiling found, highest high since rg_ch_ix\n np.sign(df[swing_max_ix:][_c].cummax() - rg_ch_list[-1]),\n np.where(floor_found, # if floor found, lowest low since rg_ch_ix\n np.sign(df[swing_max_ix:][_c].cummin() - rg_ch_list[-1]),\n np.sign(df[swing_max_ix:][_c].rolling(5).mean() - rg_ch_list[-1]))) \n df[rg] = df[rg].fillna(method='ffill')\n# df[rg+'_no_fill'] = df[rg]\n return df\n\n#### regime_floor_ceiling(df, hi,lo,cl, slo, shi,flr,clg,rg,rg_ch,stdev,threshold) ####","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Plot data"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n_o,_h,_l,_c = lower_upper_OHLC(df,relative = False)\n\nstdev = df[_c].rolling(63).std(ddof=0)\nrg_val = ['Hi3','Lo3','flr','clg','rg','rg_ch',1.5]\nslo, shi,flr,clg,rg,rg_ch,threshold = [rg_val[s] for s in range(len(rg_val))]\ndf = regime_floor_ceiling(df,_h,_l,_c,slo, shi,flr,clg,rg,rg_ch,stdev,threshold)\n\ndf[['Close','Hi3', 'Lo3','clg','flr','rg_ch','rg']].plot(style=['grey', 'ro', 'go', 'kv', 'k^','c:','y-.'], \n secondary_y= ['rg'],figsize=(20,5),grid=True, title = str.upper(ticker))","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n_o,_h,_l,_c = lower_upper_OHLC(df,relative = False)\n\nstdev = df[_c].rolling(63).std(ddof=0)\nrg_val = ['Hi2','Lo2','flr','clg','rg','rg_ch',0.5]\nslo, shi,flr,clg,rg,rg_ch,threshold = [rg_val[s] for s in range(len(rg_val))]\ndf = regime_floor_ceiling(df,_h,_l,_c,slo, shi,flr,clg,rg,rg_ch,stdev,threshold)\n\ndf[['Close','Hi2', 'Lo2','clg','flr','rg_ch','rg']].plot(\n style=['grey', 'ro', 'go', 'kv', 'k^','c:','y-.'], \n secondary_y= ['rg'],figsize=(20,5),grid=True, title = str.upper(ticker))","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\n# ohlc = ['Open','High','Low','Close']\n# _o,_h,_l,_c = [ohlc[h] for h in range(len(ohlc))]\n_o,_h,_l,_c = lower_upper_OHLC(df,relative = False)\n\nmav = [20, 50, 200]\nma_st,ma_mt,ma_lt = [df[_c].rolling(mav[t]).mean() for t in range(len(mav))]\n\nbo = [50, 252]\nst_lo,lt_lo = [df[_l].rolling(bo[t]).min() for t in range(len(bo))]\nst_hi,lt_hi = [df[_h].rolling(bo[t]).max() for t in range(len(bo))]\n\nrg=lo=hi=slo=shi=clg=flr=rg_ch = 
None\ngraph_regime_combo(ticker,df,_c,rg,lo,hi,slo,shi,clg,flr,rg_ch,ma_st,ma_mt,ma_lt,lt_lo,lt_hi,st_lo,st_hi)\n\nrg_combo = ['Close','rg','Lo3','Hi3','Lo3','Hi3','clg','flr','rg_ch']\n_c,rg,lo,hi,slo,shi,clg,flr,rg_ch =[rg_combo[r] for r in range(len(rg_combo)) ]\n\ngraph_regime_combo(ticker,df,_c,rg,lo,hi,slo,shi,clg,flr,rg_ch,ma_st,ma_mt,ma_lt,lt_lo,lt_hi,st_lo,st_hi)","execution_count":null,"outputs":[]},{"metadata":{"scrolled":false},"cell_type":"markdown","source":"#### Wells Fargo \n1. Download benchmark & ticker data\n2. Process relative function\n3. Plot Close and relative Close"},{"metadata":{"scrolled":false,"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\nparams = ['2014-12-31', None, 63, 0.05, 0.05, 1.5, 2]\nstart, end, vlty_n,dist_pct,retrace_pct,threshold,dgt= [params[h] for h in range(len(params))]\n\nrel_var = ['^GSPC','SP500', 'USD']\nbm_ticker, bm_col, ccy_col = [rel_var[h] for h in range(len(rel_var))]\nbm_df = pd.DataFrame()\nbm_df[bm_col] = round(yf.download(tickers= bm_ticker,start= start, end = end,interval = \"1d\",\n group_by = 'column',auto_adjust = True, prepost = True, \n threads = True, proxy = None)['Close'],dgt)\nbm_df[ccy_col] = 1\n\nticker = 'WFC'\ndf = round(yf.download(tickers= ticker,start= start, end = end,interval = \"1d\",\n group_by = 'column',auto_adjust = True, prepost = True, \n threads = True, proxy = None),2)\n# ohlc = ['Open','High','Low','Close']\n# _o,_h,_l,_c = [ohlc[h] for h in range(len(ohlc))]\n_o,_h,_l,_c = lower_upper_OHLC(df,relative = False)\ndf= relative(df=df,_o=_o,_h=_h,_l=_l,_c=_c, bm_df=bm_df, bm_col= bm_col, ccy_df=bm_df, \n ccy_col=ccy_col, dgt= dgt, start=start, end= end,rebase=True)\n\ndf[['Close','rClose']].plot(figsize=(20,5),style=['k','grey'],\n title = str.upper(ticker)+ ' Relative & Absolute')\n","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Calculate floor/ceiling regime in absolute and 
relative"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \n\nswing_val = ['rg','Lo1','Hi1','Lo3','Hi3','clg','flr','rg_ch']\nrg,rt_lo,rt_hi,slo,shi,clg,flr,rg_ch = [swing_val[s] for s in range(len(swing_val))]\n\nfor a in np.arange(0,2): \n df = round(historical_swings(df,_o,_h,_l,_c, dist= None, hurdle= None),2)\n df = cleanup_latest_swing(df,shi,slo,rt_hi,rt_lo)\n ud, bs, bs_dt, _rt, _swg, hh_ll, hh_ll_dt = latest_swing_variables(df, \n shi,slo,rt_hi,rt_lo,_h,_l, _c)\n vlty = round(average_true_range(df,_h,_l,_c, n= vlty_n)[hh_ll_dt],2)\n dist_vol = 5 * vlty\n _sign = test_distance(ud,bs, hh_ll, dist_vol, dist_pct)\n df = retest_swing(df, _sign, _rt, hh_ll_dt, hh_ll, _c, _swg)\n retrace_vol = 2.5 * vlty\n df = retracement_swing(df, _sign, _swg, _c, hh_ll_dt, hh_ll, vlty, retrace_vol, retrace_pct)\n stdev = df[_c].rolling(vlty_n).std(ddof=0)\n df = regime_floor_ceiling(df,_h,_l,_c,slo, shi,flr,clg,rg,rg_ch,stdev,threshold) \n \n _o,_h,_l,_c = lower_upper_OHLC(df,relative = True)\n rswing_val = ['rrg','rL1','rH1','rL3','rH3','rclg','rflr','rrg_ch']\n rg,rt_lo,rt_hi,slo,shi,clg,flr,rg_ch = [rswing_val[s] for s in range(len(rswing_val))]\n\n","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Plot data"},{"metadata":{"scrolled":false,"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \nma_st = ma_mt = ma_lt = lt_lo = lt_hi = st_lo = st_hi = 0\n\nrg_combo = ['Close','rg','Lo3','Hi3','Lo3','Hi3','clg','flr','rg_ch']\n_c,rg,lo,hi,slo,shi,clg,flr,rg_ch =[rg_combo[r] for r in range(len(rg_combo)) ]\ngraph_regime_combo(ticker,df,_c,rg,lo,hi,slo,shi,clg,flr,rg_ch,ma_st,ma_mt,ma_lt,lt_lo,lt_hi,st_lo,st_hi)\n\nrrg_combo = ['rClose','rrg','rL3','rH3','rL3','rH3','rclg','rflr','rrg_ch']\n_c,rg,lo,hi,slo,shi,clg,flr,rg_ch =[rrg_combo[r] for r in range(len(rrg_combo)) 
]\ngraph_regime_combo(ticker,df,_c,rg,lo,hi,slo,shi,clg,flr,rg_ch,ma_st,ma_mt,ma_lt,lt_lo,lt_hi,st_lo,st_hi)","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 5: Regime Definition \nplot_abs_cols = ['Close','Hi3', 'Lo3','clg','flr','rg_ch','rg']\nplot_abs_style = ['k', 'ro', 'go', 'kv', 'k^','b:','b--']\ny2_abs = ['rg']\nplot_rel_cols = ['rClose','rH3', 'rL3','rclg','rflr','rrg_ch','rrg']\nplot_rel_style = ['grey', 'ro', 'go', 'yv', 'y^','m:','m--']\ny2_rel = ['rrg']\ndf[plot_abs_cols].plot(secondary_y= y2_abs,figsize=(20,8),\n title = str.upper(ticker)+ ' Absolute',# grid=True,\n style=plot_abs_style)\n\ndf[plot_rel_cols].plot(secondary_y=y2_rel,figsize=(20,8),\n title = str.upper(ticker)+ ' Relative',# grid=True,\n style=plot_rel_style)\n\ndf[plot_rel_cols + plot_abs_cols].plot(secondary_y=y2_rel + y2_abs,figsize=(20,8),\n title = str.upper(ticker)+ ' Relative & Absolute',# grid=True,\n style=plot_rel_style + plot_abs_style)","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]}],"metadata":{"kernelspec":{"name":"python3","display_name":"Python 3","language":"python"},"language_info":{"name":"python","version":"3.6.13","mimetype":"text/x-python","codemirror_mode":{"name":"ipython","version":3},"pygments_lexer":"ipython3","nbconvert_exporter":"python","file_extension":".py"}},"nbformat":4,"nbformat_minor":4} -------------------------------------------------------------------------------- /Chapter 06/Chapter 6.ipynb: -------------------------------------------------------------------------------- 1 | {"cells":[{"metadata":{},"cell_type":"markdown","source":"# Preliminary instruction\n\nTo follow the code in this chapter, the `yfinance` package must be installed in your environment. 
If you do not have this installed yet, review Chapter 4 for instructions on how to do so."},{"metadata":{},"cell_type":"markdown","source":"# CHAPTER 6: The Trading Edge is a Number, and Here is the Formula "},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 6: The Trading Edge is a Number, and Here is the Formula \n\nimport pandas as pd\nimport numpy as np\nimport yfinance as yf\n%matplotlib inline\nimport matplotlib.pyplot as plt","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### A trading edge is not a story\n1. Arithmetic gain expectancy: When talking about trading edges or gain expectancy, market participants default to the arithmetic gain expectancy. It is present in every middle school introduction to statistics and absent in a Finance MBA. \n\n2. Geometric gain expectancy (George): Profits and losses compound geometrically. Geometric gain expectancy is mathematically closer to the expected robustness of a strategy.\n\n3. The Kelly criterion is a position sizing algorithm that optimizes the geometric growth rate of a portfolio. 
"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 6: The Trading Edge is a Number, and Here is the Formula \n\n# Gain expectancies and Kelly criterion \ndef expectancy(win_rate,avg_win,avg_loss): \n # win% * avg_win% - loss% * abs(avg_loss%) \n return win_rate * avg_win + (1-win_rate) * avg_loss \n \ndef george(win_rate,avg_win,avg_loss): \n # (1+ avg_win%)** win% * (1- abs(avg_loss%)) ** loss% -1 \n return (1+avg_win) ** win_rate * (1 + avg_loss) ** (1 - win_rate) - 1 \n \ndef kelly(win_rate,avg_win,avg_loss): \n # Kelly = win% / abs(avg_loss%) - loss% / avg_win% \n return win_rate / np.abs(avg_loss) - (1-win_rate) / avg_win ","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Mock strategy\nTurtle for dummies is for educational purposes only"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 6: The Trading Edge is a Number, and Here is the Formula \n\ndef regime_breakout(df,_h,_l,window):\n hl = np.where(df[_h] == df[_h].rolling(window).max(),1,\n np.where(df[_l] == df[_l].rolling(window).min(), -1,np.nan))\n roll_hl = pd.Series(index= df.index, data= hl).fillna(method= 'ffill')\n return roll_hl\n\ndef turtle_trader(df, _h, _l, slow, fast):\n '''\n _slow: Long/Short direction\n _fast: trailing stop loss\n '''\n _slow = regime_breakout(df,_h,_l,window = slow)\n _fast = regime_breakout(df,_h,_l,window = fast)\n turtle = pd.Series(index= df.index, \n data = np.where(_slow == 1,np.where(_fast == 1,1,0), \n np.where(_slow == -1, np.where(_fast ==-1,-1,0),0)))\n return turtle","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"\n### Turtle for dummies: long and short entries, cumulative returns for Softbank (9984.T)\n\n1. Log returns are easier to manipulate than arithmetic ones. Arithmetic returns do not compound when summed, whereas logarithmic ones do. The cumulative returns are calculated with the apply(np.exp) method.\n2. 
Strategy entry/exit: Entries and exits are delayed by one bar using the shift() method.\n3. Long/short positions: -1 for short and +1 for long. The solid blue line is the cumulative returns:"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 6: The Trading Edge is a Number, and Here is the Formula \n\nticker = '9984.T' # Softbank\nstart = '2017-12-31'\nend = None\ndf = round(yf.download(tickers= ticker,start= start, end = end, \n interval = \"1d\",group_by = 'column',auto_adjust = True, \n prepost = True, threads = True, proxy = None),0)\nslow = 50\nfast = 20 \ndf['tt'] = turtle_trader(df, _h= 'High', _l= 'Low', slow= slow,fast= fast)\ndf['stop_loss'] = np.where(df['tt'] == 1, df['Low'].rolling(fast).min(),\n np.where(df['tt'] == -1, df['High'].rolling(fast).max(),np.nan))\n\ndf['tt_chg1D'] = df['Close'].diff() * df['tt'].shift()\ndf['tt_PL_cum'] = df['tt_chg1D'].cumsum()\n\ndf['tt_returns'] = df['Close'].pct_change() * df['tt'].shift()\ntt_log_returns = np.log(df['Close']/df['Close'].shift()) * df['tt'].shift()\ndf['tt_cumul'] = tt_log_returns.cumsum().apply(np.exp) - 1 \n\n\ndf[['Close','stop_loss','tt','tt_cumul']].plot(secondary_y=['tt','tt_cumul'],\n figsize=(20,8),style= ['k','r--','b:','b'],\n title= str(ticker)+' Close Price, Turtle L/S entries, cumulative returns')\n\ndf[['tt_PL_cum','tt_chg1D']].plot(secondary_y=['tt_chg1D'],\n figsize=(20,8),style= ['b','c:'],\n title= str(ticker) +' Daily P&L & Cumulative P&L')","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Calculate rolling profits, losses, and expectancies, and plot them in a graph"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 6: The Trading Edge is a Number, and Here is the Formula \n\n# Separate profits from losses\nloss_roll = tt_log_returns.copy()\nloss_roll[loss_roll > 0] = np.nan\nwin_roll = tt_log_returns.copy()\nwin_roll[win_roll < 0] = np.nan\n\n# Calculate rolling win/loss rates and averages\nwindow= 
100\nwin_rate = win_roll.rolling(window).count() / window\nloss_rate = loss_roll.rolling(window).count() / window\navg_win = win_roll.fillna(0).rolling(window).mean()\navg_loss = loss_roll.fillna(0).rolling(window).mean()\n\n# Calculate expectancies\ndf['trading_edge'] = expectancy(win_rate,avg_win,avg_loss).fillna(method='ffill')\ndf['geometric_expectancy'] = george(win_rate,avg_win,avg_loss).fillna(method='ffill')\ndf['kelly'] = kelly(win_rate,avg_win,avg_loss).fillna(method='ffill')\n\ndf[window*2:][['trading_edge', 'geometric_expectancy', 'kelly']].plot(\n secondary_y = ['kelly'], figsize=(20,6),style=['b','y','g'], \n title= 'trading_edge, geometric_expectancy, kelly')","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"All of these formulae can be decomposed into two modules: \n1. Signal module: Win/loss rate. These are the returns generated from entry and exit signals\n2. Money management module: Average profit/loss. Contributions from returns * bet sizes\n\nPlot: Softbank cumulative returns and profit ratios: rolling and cumulative\n\n#### Risk metric for trend following strategies: profit ratio"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 6: The Trading Edge is a Number, and Here is the Formula \n\ndef rolling_profits(returns,window):\n profit_roll = returns.copy()\n profit_roll[profit_roll < 0] = 0\n profit_roll_sum = profit_roll.rolling(window).sum().fillna(method='ffill')\n return profit_roll_sum\n\ndef rolling_losses(returns,window):\n loss_roll = returns.copy()\n loss_roll[loss_roll > 0] = 0\n loss_roll_sum = loss_roll.rolling(window).sum().fillna(method='ffill')\n return loss_roll_sum\n\ndef expanding_profits(returns): \n profit_roll = returns.copy() \n profit_roll[profit_roll < 0] = 0 \n profit_roll_sum = profit_roll.expanding().sum().fillna(method='ffill') \n return profit_roll_sum \n \ndef expanding_losses(returns): \n loss_roll = returns.copy() \n loss_roll[loss_roll > 0] = 0 \n 
loss_roll_sum = loss_roll.expanding().sum().fillna(method='ffill') \n return loss_roll_sum \n\ndef profit_ratio(profits, losses): \n pr = profits.fillna(method='ffill') / abs(losses.fillna(method='ffill'))\n return pr\n\nwindow = 252\n\ndf['pr_roll'] = profit_ratio(profits= rolling_profits(returns = tt_log_returns,window = window), \n losses= rolling_losses(returns = tt_log_returns,window = window))\ndf['pr'] = profit_ratio(profits= expanding_profits(returns= tt_log_returns), \n losses= expanding_losses(returns = tt_log_returns))\n\ndf[window:] [['tt_cumul','pr_roll','pr'] ].plot(figsize = (20,8),secondary_y= ['tt_cumul'],\n style = ['b','m-.','m'], \n title= str(ticker)+' cumulative returns, Profit Ratio, cumulative & rolling '+str(window)+' days')","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Risk metric for mean reversion strategies\n\nPlot: Softbank, cumulative returns, and tail ratios: rolling and cumulative"},{"metadata":{"trusted":true},"cell_type":"code","source":"# CHAPTER 6: The Trading Edge is a Number, and Here is the Formula \n\ndef rolling_tail_ratio(cumul_returns, window, percentile,limit):\n left_tail = np.abs(cumul_returns.rolling(window).quantile(percentile))\n right_tail = cumul_returns.rolling(window).quantile(1-percentile)\n np.seterr(all='ignore')\n tail = np.maximum(np.minimum(right_tail / left_tail,limit),-limit)\n return tail\n\ndef expanding_tail_ratio(cumul_returns, percentile,limit):\n left_tail = np.abs(cumul_returns.expanding().quantile(percentile))\n right_tail = cumul_returns.expanding().quantile(1 - percentile)\n np.seterr(all='ignore')\n tail = np.maximum(np.minimum(right_tail / left_tail,limit),-limit)\n return tail\n\ndf['tr_roll'] = rolling_tail_ratio(cumul_returns= df['tt_cumul'], \n window= window, percentile= 0.05,limit=5)\ndf['tr'] = expanding_tail_ratio(cumul_returns= df['tt_cumul'], percentile= 0.05,limit=5)\n\ndf[window:] [['tt_cumul','tr_roll','tr'] ].plot(secondary_y= 
['tt_cumul'],style = ['b','g-.','g'], figsize = (20,8),\n title= str(ticker)+' cumulative returns, Tail Ratios: cumulative & rolling '+str(window)+ ' days')","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]}],"metadata":{"kernelspec":{"name":"python3","display_name":"Python 3","language":"python"},"language_info":{"name":"python","version":"3.6.13","mimetype":"text/x-python","codemirror_mode":{"name":"ipython","version":3},"pygments_lexer":"ipython3","nbconvert_exporter":"python","file_extension":".py"}},"nbformat":4,"nbformat_minor":4} -------------------------------------------------------------------------------- /Chapter 08/Chapter 8.ipynb: -------------------------------------------------------------------------------- 1 | {"cells":[{"metadata":{},"cell_type":"markdown","source":"# Preliminary instruction\n\nTo follow the code in this chapter, the `yfinance` package must be installed in your environment. 
If you do not have this installed yet, review Chapter 4 for instructions on how to do so."},{"metadata":{},"cell_type":"markdown","source":"# Chapter 8: Position Sizing: Money is Made in the Money Management Module"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 8: Position Sizing: Money is Made in the Money Management Module\n\nimport pandas as pd\nimport numpy as np\nimport yfinance as yf\n%matplotlib inline\nimport matplotlib.pyplot as plt","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Convexity and concavity accelerate both drop and recovery as shown in the theoretical example below: "},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 8: Position Sizing: Money is Made in the Money Management Module\n\ndef concave(ddr, floor):\n '''\n For demo purpose only\n '''\n if floor == 0:\n concave = ddr\n else:\n concave = ddr ** (floor)\n return concave\n\n# obtuse \ndef convex(ddr, floor):\n '''\n obtuse = 1 - acute\n '''\n if floor == 0:\n convex = ddr\n else:\n convex = ddr ** (1/floor)\n return convex\n\n# instantiate minimum Kapital \nfloor = np.arange(0,1,0.125)\n# print('floor', floor)\n\nx = -np.linspace(0, 1, 100)\n\nfig, ax = plt.subplots()\nfor i,f in enumerate(floor):\n y = concave(ddr=-x, floor=f)\n current_label = f' concave f = {f:.3}'\n ax.plot(x, y, linewidth=2, alpha=0.6, label=current_label)\n\nax.legend()\nplt.ylabel('Concave Oscillator')\nplt.xlabel('Equity Curve From Trailing Trough To Peak')\nax.set_ylim(ax.get_ylim()[::-1])\nplt.show()\n\nfig, ax = plt.subplots()\nfor i,f in enumerate(floor):\n y = convex(ddr=-x, floor=f)\n current_label = f' convex f = {f*10:.3}'\n ax.plot(x, y, linewidth=2, alpha=0.6, label=current_label)\nax.legend()\n\nplt.ylabel('Convex Oscillator')\nplt.xlabel('Equity Curve From Trailing Trough To Peak')\nax.set_ylim(ax.get_ylim()[::-1])\nplt.show()","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Equity curve simulation using DAX\n\nPlot: The equity curve, peak equity, drawdown, and drawdown tolerance band"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 8: Position Sizing: Money is Made in the Money Management Module\n\nticker = '^GDAXI'\ndd_tolerance = -0.1\n\nequity = pd.DataFrame()\nstart = '2017-12-31'\nend = None\nequity['equity'] = yf.download(tickers= ticker,start= start, end = end, \n interval = \"1d\",group_by = 'column',auto_adjust = True, \n prepost = True, threads = True, proxy = None)['Close']\n\nequity['peak_equity'] = equity['equity'].cummax()\nequity['tolerance'] = equity['peak_equity'] * (1 + dd_tolerance )\nequity['drawdown'] = equity['equity'] /equity['equity'].cummax() -1\n\nequity.plot(style = ['k','g-.','r-.','m:'] ,\n secondary_y=['drawdown'], figsize=(20,8),grid=True)\nequity.columns","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Equity Risk Oscillator\n\n1. Calculate peak equity, watermark, from the eqty series.\n2. Calculate drawdown and rebased drawdown using drawdown tolerance. Smooth the average rebased drawdown using an exponential moving average.\n3. Choose the shape of the curve: concave (-1), convex (1), or linear (anything else).\n4. 
Calculate the risk appetite oscillator."},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 8: Position Sizing: Money is Made in the Money Management Module\n\ndef risk_appetite(eqty, tolerance, mn, mx, span, shape):\n '''\n eqty: equity curve series\n tolerance: tolerance for drawdown (<0)\n mn: min risk\n mx: max risk\n span: exponential moving average to smoothe the risk_appetite\n shape: convex (>45 deg diagonal) = 1, concave (<45 deg diagonal) = -1, else linear\n '''\n # 1. peak equity: all-time-high watermark\n watermark = eqty.expanding().max()\n # 2. drawdown from peak, rebased to the drawdown tolerance band,\n # smoothed with an exponential moving average\n drawdown = eqty / watermark - 1\n ddr = 1 - np.minimum(drawdown / tolerance, 1)\n avg_ddr = ddr.ewm(span=span).mean()\n # 3. shape of the curve\n if shape == 1: # convex\n _power = mx / mn\n elif shape == -1: # concave\n _power = mn / mx\n else: # linear\n _power = 1\n ddr_power = avg_ddr ** _power\n # 4. risk appetite oscillator: min risk + adjusted delta\n risk = mn + (mx - mn) * ddr_power\n return risk","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 8: Position Sizing: Money is Made in the Money Management Module\n\neqty= equity['equity']\ntolerance= dd_tolerance\nmn= -0.0025\nmx= -0.0075\nspan= 5\n\nequity['ccv'] = risk_appetite(eqty, tolerance, mn, mx, span, shape= -1)\nequity['cvx'] = risk_appetite(eqty, tolerance, mn, mx, span, shape= 1)\n\nequity[['equity','ccv','cvx']].plot(secondary_y= ['ccv','cvx'], figsize=(20,8), grid=True,\n style=['k','m-.','g-.'],\n title='equity curve, concave & convex risk appetite oscillators')","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Mock strategy "},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 8: Position Sizing: Money is Made in the Money Management Module\n\ndef regime_breakout(df,_h,_l,window):\n hl = np.where(df[_h] == df[_h].rolling(window).max(),1,\n np.where(df[_l] == df[_l].rolling(window).min(), -1,np.nan))\n roll_hl = pd.Series(index= df.index, data= hl).fillna(method= 'ffill')\n return roll_hl\n\ndef turtle_trader(df, _h, _l, slow, fast):\n '''\n _slow: Long/Short direction\n _fast: trailing stop loss\n '''\n _slow = 
regime_breakout(df,_h,_l,window = slow)\n _fast = regime_breakout(df,_h,_l,window = fast)\n turtle = pd.Series(index= df.index, \n data = np.where(_slow == 1,np.where(_fast == 1,1,0), \n np.where(_slow == -1, np.where(_fast ==-1,-1,0),0)))\n return turtle","execution_count":7,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Softbank closing price, long/short positions, using Turtle for dummies on absolute series\n\n1. Softbank closing price, long/short positions, using Turtle for dummies on absolute series\n2. Strategy daily profit and loss in local currency and USD\n3. Strategy cumulative profit and loss in local currency and USD\n"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 8: Position Sizing: Money is Made in the Money Management Module\n\nticker = '9984.T' # Softbank\nstart = '2017-12-31'\nend = None\ndf = round(yf.download(tickers= ticker,start= start, end = end, \n interval = \"1d\",group_by = 'column',auto_adjust = True, \n prepost = True, threads = True, proxy = None),0)\nslow = 50\nfast = 20 \ndf['tt'] = turtle_trader(df, _h= 'High', _l= 'Low', slow= slow,fast= fast)\ndf['stop_loss'] = np.where(df['tt'] == 1, df['Low'].rolling(fast).min(),\n np.where(df['tt'] == -1, df['High'].rolling(fast).max(),np.nan))\n\ndf['tt_chg1D'] = df['Close'].diff() * df['tt'].shift()\ndf['tt_PL_cum'] = df['tt_chg1D'].cumsum()\n\ndf['tt_returns'] = df['Close'].pct_change() * df['tt'].shift()\ntt_log_returns = np.log(df['Close']/df['Close'].shift()) * df['tt'].shift()\ndf['tt_cumul'] = tt_log_returns.cumsum().apply(np.exp) - 1 \n\n\ndf[['Close','stop_loss','tt','tt_cumul']].plot(secondary_y=['tt','tt_cumul'],\n figsize=(20,8),style= ['k','r--','b:','b'],\n title= str(ticker)+' Close Price, Turtle L/S entries, cumulative returns')\n\ndf[['tt_PL_cum','tt_chg1D']].plot(secondary_y=['tt_chg1D'],\n figsize=(20,8),style= ['b','c:'],\n title= str(ticker) +' Daily P&L & Cumulative 
P&L')","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Equity at risk shares"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 8: Position Sizing: Money is Made in the Money Management Module\n\ndef eqty_risk_shares(px,sl,eqty,risk,fx,lot):\n r = sl - px\n if fx > 0:\n budget = eqty * risk * fx\n else:\n budget = eqty * risk\n shares = round(budget // (r *lot) * lot,0)\n# print(r,budget,round(budget/r,0))\n return shares\n\npx = 2000\nsl = 2222\n\neqty = 100000\nrisk = -0.005\nfx = 110\nlot = 
100\n\neqty_risk_shares(px,sl,eqty,risk,fx,lot)","execution_count":9,"outputs":[{"data":{"text/plain":"-300.0"},"execution_count":9,"metadata":{},"output_type":"execute_result"}]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 8: Position Sizing: Money is Made in the Money Management Module\n\n# NEW CODE WITH WORKING LIBRARY\nticker = '9984.T' # Softbank\nstart = '2017-12-31'\nend = None\ndf = round(yf.download(tickers= ticker,start= start, end = end, \n interval = \"1d\",group_by = 'column',auto_adjust = True, \n prepost = True, threads = True, proxy = None),0)\n\nccy_ticker = 'USDJPY=X'\nccy_name = 'JPY'\nccy_df = np.nan\n\ndf[ccy_name] = round(yf.download(tickers= ccy_ticker,start= start, end = end, \n interval = \"1d\",group_by = 'column',auto_adjust = True, \n prepost = True, threads = True, proxy = None)['Close'],2)\ndf[ccy_name] = df[ccy_name].fillna(method='ffill')\nslow = 50\nfast = 20 \ndf['tt'] = turtle_trader(df, _h= 'High', _l= 'Low', slow= slow,fast= fast)\ndf['tt_chg1D'] = df['Close'].diff() * df['tt'].shift()\ndf['tt_chg1D_fx'] = df['Close'].diff() * df['tt'].shift() / df[ccy_name]\n\ndf['tt_returns'] = df['Close'].pct_change() * df['tt'].shift()\ndf['tt_log_returns'] = np.log(df['Close'] / df['Close'].shift()) * df['tt'].shift()\ndf['tt_cumul_returns'] = df['tt_log_returns'].cumsum().apply(np.exp) - 1 \n\ndf['stop_loss'] = np.where(df['tt'] == 1, df['Low'].rolling(fast).min(),\n np.where(df['tt'] == -1, df['High'].rolling(fast).max(),np.nan))# / df[ccy_name]\ndf['tt_PL_cum'] = df['tt_chg1D'].cumsum()\ndf['tt_PL_cum_fx'] = df['tt_chg1D_fx'].cumsum()\n\n\ndf[['Close','stop_loss','tt','tt_cumul_returns']].plot(secondary_y=['tt','tt_cumul_returns'],\n figsize=(20,10),style= ['k','r--','b:','b'],\n title= str(ticker)+' Close Price, Turtle L/S entries')\n\ndf[['tt_chg1D','tt_chg1D_fx']].plot(secondary_y=['tt_chg1D_fx'],\n 
figsize=(20,10),style= ['b','c'],\n title= str(ticker) +' Daily P&L Local & USD')\n\ndf[['tt_PL_cum','tt_PL_cum_fx']].plot(secondary_y=['tt_PL_cum_fx'],\n figsize=(20,10),style= ['b','c'],\n title= str(ticker) +' Cumulative P&L Local & USD')","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Comparing position-sizing algorithms\n\nThe code takes the following steps:\n1. Instantiate parameters: starting capital, currency, minimum and maximum risk, drawdown tolerance and equal weight\n2. Initialize the number of shares & starting capital for each posSizer\n3. Loop through every bar to recalculate every equity curve by adding the previous value to the current number of shares times daily profit.\n4. 
Recalculate the concave and convex risk oscillator at each bar.\n5. If there is an entry signal, calculate the number of shares for each posSizer. "},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 8: Position Sizing: Money is Made in the Money Management Module\n\nstarting_capital = 1000000\nlot = 100\nmn = -0.0025\nmx = -0.0075\navg = (mn + mx) / 2\ntolerance= -0.1\nequal_weight = 0.05\nshs_fxd = shs_ccv = shs_cvx = shs_eql = 0\ndf.loc[df.index[0],'constant'] = df.loc[df.index[0],'concave'] = starting_capital\ndf.loc[df.index[0],'convex'] = df.loc[df.index[0],'equal_weight'] = starting_capital\n\nfor i in range(1,len(df)):\n df['equal_weight'].iat[i] = df['equal_weight'].iat[i-1] + df['tt_chg1D_fx'][i] * shs_eql\n df['constant'].iat[i] = df['constant'].iat[i-1] + df['tt_chg1D_fx'][i] * shs_fxd\n df['concave'].iat[i] = df['concave'].iat[i-1] + df['tt_chg1D_fx'][i] * shs_ccv\n df['convex'].iat[i] = df['convex'].iat[i-1] + df['tt_chg1D_fx'][i] * shs_cvx\n \n ccv = risk_appetite(eqty= df['concave'][:i], tolerance=tolerance, \n mn= mn, mx=mx, span=5, shape=-1)\n cvx = risk_appetite(eqty= df['convex'][:i], tolerance=tolerance, \n mn= mn, mx=mx, span=5, shape=1)\n\n if (df['tt'][i-1] ==0) & (df['tt'][i] !=0):\n px = df['Close'][i]\n sl = df['stop_loss'][i]\n fx = df[ccy_name][i]\n shs_eql = (df['equal_weight'].iat[i] * equal_weight *fx//(px * lot)) * lot\n if px != sl:\n shs_fxd = eqty_risk_shares(px,sl,eqty= df['constant'].iat[i],\n risk= avg,fx=fx,lot=100)\n shs_ccv = eqty_risk_shares(px,sl,eqty= df['concave'].iat[i],\n risk= ccv[-1],fx=fx,lot=100)\n shs_cvx = eqty_risk_shares(px,sl,eqty= df['convex'].iat[i],\n risk= cvx[-1],fx=fx,lot=100)\n\ndf[['constant','concave','convex','equal_weight', 'tt_PL_cum_fx']].plot(figsize = (20,10), grid=True,\n style=['y.-','m--','g-.','b:', 'b'],secondary_y='tt_PL_cum_fx',\ntitle= 'cumulative P&L, concave, convex, constant equity at risk, equal weight 
')","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Risk amortization"},{"metadata":{"trusted":true},"cell_type":"code","source":"def pyramid(position, root=2): \n ''' \n position is the number of positions \n power is root n. \n\n Conservative = 1, aggressive = position, default = 2 \n ''' \n return 1 / (1+position) ** (1/root) \n \ndef amortized_weight(raw_weight, amortization): \n ''' \n raw_weight is the initial position size \n amortization is pyramid(position,root=2) \n ''' \n return raw_weight * amortization ","execution_count":12,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"\nweight = 0.05\nposition = np.arange(1,4)\nprint('position',position)\nprint('linear',pyramid(position, root=1)* weight)\nprint('square root',pyramid(position, root=2)* weight)\nprint('position n',pyramid(position, root=position)*weight)","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":""},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]}],"metadata":{"kernelspec":{"name":"python3","display_name":"Python 3","language":"python"},"language_info":{"name":"python","version":"3.6.13","mimetype":"text/x-python","codemirror_mode":{"name":"ipython","version":3},"pygments_lexer":"ipython3","nbconvert_exporter":"python","file_extension":".py"}},"nbformat":4,"nbformat_minor":4} -------------------------------------------------------------------------------- /Chapter 09/Chapter 9.ipynb: -------------------------------------------------------------------------------- 1 | {"cells":[{"metadata":{},"cell_type":"markdown","source":"# Preliminary instruction\n\nTo follow the code in this chapter, the `yfinance` package must be installed in your environment. If you do not have this installed yet, review Chapter 4 for instructions on how to do so."},{"metadata":{},"cell_type":"markdown","source":"# Chapter 9: Risk is a Number"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 9: Risk is a Number\n\nimport pandas as pd\nimport numpy as np\nimport yfinance as yf\n%matplotlib inline\nimport matplotlib.pyplot as plt","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Mock Strategy: Turtle for dummies"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 9: Risk is a Number\n\ndef regime_breakout(df,_h,_l,window):\n hl = np.where(df[_h] == df[_h].rolling(window).max(),1,\n np.where(df[_l] == df[_l].rolling(window).min(), -1,np.nan))\n roll_hl = pd.Series(index= df.index, data= hl).fillna(method= 'ffill')\n return roll_hl\n\ndef turtle_trader(df, _h, _l, slow, fast):\n '''\n _slow: Long/Short direction\n _fast: trailing stop loss\n '''\n _slow = regime_breakout(df,_h,_l,window = slow)\n _fast = regime_breakout(df,_h,_l,window = fast)\n turtle = pd. 
Series(index= df.index, \n data = np.where(_slow == 1,np.where(_fast == 1,1,0), \n np.where(_slow == -1, np.where(_fast ==-1,-1,0),0)))\n return turtle","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Run the strategy with Softbank in absolute\nPlot: Softbank turtle for dummies, positions, and returns\nPlot: Softbank cumulative returns and Sharpe ratios: rolling and cumulative"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 9: Risk is a Number\n\nticker = '9984.T' # Softbank\nstart = '2017-12-31'\nend = None\ndf = round(yf.download(tickers= ticker,start= start, end = end, \n interval = \"1d\",group_by = 'column',auto_adjust = True, \n prepost = True, threads = True, proxy = None),0)\nslow = 50\nfast = 20 \ndf['tt'] = turtle_trader(df, _h= 'High', _l= 'Low', slow= slow,fast= fast)\ndf['stop_loss'] = np.where(df['tt'] == 1, df['Low'].rolling(fast).min(),\n np.where(df['tt'] == -1, df['High'].rolling(fast).max(),np.nan))\n\ndf['tt_chg1D'] = df['Close'].diff() * df['tt'].shift()\ndf['tt_PL_cum'] = df['tt_chg1D'].cumsum()\n\ndf['tt_returns'] = df['Close'].pct_change() * df['tt'].shift()\ntt_log_returns = np.log(df['Close']/df['Close'].shift()) * df['tt'].shift()\ndf['tt_cumul'] = tt_log_returns.cumsum().apply(np.exp) - 1 \n\n\ndf[['Close','stop_loss','tt','tt_cumul']].plot(secondary_y=['tt','tt_cumul'],\n figsize=(20,8),style= ['k','r--','b:','b'],\n title= str(ticker)+' Close Price, Turtle L/S entries, cumulative returns')\n\ndf[['tt_PL_cum','tt_chg1D']].plot(secondary_y=['tt_chg1D'],\n figsize=(20,8),style= ['b','c:'],\n title= str(ticker) +' Daily P&L & Cumulative P&L')","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Sharpe ratio: the right mathematical answer to the wrong question\nPlot: Softbank cumulative returns and Sharpe ratios: rolling and cumulative"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 9: Risk is a Number\n\nr_f = 0.00001 
# risk-free rate\n\ndef rolling_sharpe(returns, r_f, window):\n avg_returns = returns.rolling(window).mean()\n std_returns = returns.rolling(window).std(ddof=0)\n return (avg_returns - r_f) / std_returns\n\ndef expanding_sharpe(returns, r_f):\n avg_returns = returns.expanding().mean()\n std_returns = returns.expanding().std(ddof=0)\n return (avg_returns - r_f) / std_returns\n\nwindow= 252\ndf['sharpe_roll'] = rolling_sharpe(returns= tt_log_returns, r_f= r_f, window= window) * 252**0.5\n\ndf['sharpe']= expanding_sharpe(returns=tt_log_returns,r_f= r_f) * 252**0.5\n\ndf[window:][['tt_cumul','sharpe_roll','sharpe'] ].plot(figsize = (20,8),style = ['b','c-.','c'],grid=True,\n title = str(ticker)+' cumulative returns, Sharpe ratios: rolling & cumulative') \n\n","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Grit Index\n\nThis formula was originally invented by Peter G. Martin in 1987 and published as the Ulcer Index in his book The Investor's Guide to Fidelity Funds. Legendary trader Ed Seykota recycled it into the Seykota Lake ratio.\n\nInvestors react to drawdowns in three ways:\n1. Magnitude: never test the stomach of your investors\n2. Frequency: never test the nerves of your investors\n3. Duration: never test the patience of your investors\n\nThe Grit calculation sequence is as follows:\n1. Calculate the peak cumulative returns using rolling().max() or expanding().max()\n2. Calculate the drawdowns from the peak and square them\n3. Calculate the Ulcer Index: the square root of the sum of squared drawdowns \n4. 
Divide the cumulative returns by the surface of losses\n\nPlot: Softbank cumulative returns and Grit ratios: rolling and cumulative"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 9: Risk is a Number\n\ndef rolling_grit(cumul_returns, window):\n tt_rolling_peak = cumul_returns.rolling(window).max()\n drawdown_squared = (cumul_returns - tt_rolling_peak) ** 2\n ulcer = drawdown_squared.rolling(window).sum() ** 0.5\n return cumul_returns / ulcer\n\ndef expanding_grit(cumul_returns):\n tt_peak = cumul_returns.expanding().max()\n drawdown_squared = (cumul_returns - tt_peak) ** 2\n ulcer = drawdown_squared.expanding().sum() ** 0.5\n return cumul_returns / ulcer\n\nwindow = 252\ndf['grit_roll'] = rolling_grit(cumul_returns= df['tt_cumul'] , window = window)\ndf['grit'] = expanding_grit(cumul_returns= df['tt_cumul'])\ndf[window:][['tt_cumul','grit_roll', 'grit'] ].plot(figsize = (20,8), \n secondary_y = 'tt_cumul',style = ['b','g-.','g'],grid=True,\n title = str(ticker) + ' cumulative returns & Grit Ratios: rolling & cumulative '+ str(window) + ' days') \n\n","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Common Sense Ratio\n\n1. Risk metric for trend following strategies: profit ratio, gain-to-pain ratio\n2. Risk metric for trend following strategies: tail ratio\n3. 
Combined risk metric: profit ratio * tail ratio\n\nPlot: Cumulative returns and common sense ratios: cumulative and rolling"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 9: Risk is a Number\n\ndef rolling_profits(returns,window):\n profit_roll = returns.copy()\n profit_roll[profit_roll < 0] = 0\n profit_roll_sum = profit_roll.rolling(window).sum().fillna(method='ffill')\n return profit_roll_sum\n\ndef rolling_losses(returns,window):\n loss_roll = returns.copy()\n loss_roll[loss_roll > 0] = 0\n loss_roll_sum = loss_roll.rolling(window).sum().fillna(method='ffill')\n return loss_roll_sum\n\ndef expanding_profits(returns): \n profit_roll = returns.copy() \n profit_roll[profit_roll < 0] = 0 \n profit_roll_sum = profit_roll.expanding().sum().fillna(method='ffill') \n return profit_roll_sum \n \ndef expanding_losses(returns): \n loss_roll = returns.copy() \n loss_roll[loss_roll > 0] = 0 \n loss_roll_sum = loss_roll.expanding().sum().fillna(method='ffill') \n return loss_roll_sum \n\ndef profit_ratio(profits, losses): \n pr = profits.fillna(method='ffill') / abs(losses.fillna(method='ffill'))\n return pr\n\n\ndef rolling_tail_ratio(cumul_returns, window, percentile,limit):\n left_tail = np.abs(cumul_returns.rolling(window).quantile(percentile))\n right_tail = cumul_returns.rolling(window).quantile(1-percentile)\n np.seterr(all='ignore')\n tail = np.maximum(np.minimum(right_tail / left_tail,limit),-limit)\n return tail\n\ndef expanding_tail_ratio(cumul_returns, percentile,limit):\n left_tail = np.abs(cumul_returns.expanding().quantile(percentile))\n right_tail = cumul_returns.expanding().quantile(1 - percentile)\n np.seterr(all='ignore')\n tail = np.maximum(np.minimum(right_tail / left_tail,limit),-limit)\n return tail\n\ndef common_sense_ratio(pr,tr):\n return pr * tr \n\n\n","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Plot: Cumulative returns and profit ratios: cumulative and 
rolling"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 9: Risk is a Number\n\nwindow = 252\ndf['pr_roll'] = profit_ratio(profits= rolling_profits(returns = tt_log_returns,window = window), \n losses= rolling_losses(returns = tt_log_returns,window = window))\ndf['pr'] = profit_ratio(profits= expanding_profits(returns= tt_log_returns), \n losses= expanding_losses(returns = tt_log_returns))\n\ndf[window:] [['tt_cumul','pr_roll','pr'] ].plot(figsize = (20,8),secondary_y= ['tt_cumul'], \n style = ['r','y','y:'],grid=True) ","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Plot: Cumulative returns and common sense ratios: cumulative and rolling"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 9: Risk is a Number\n\nwindow = 252\n\ndf['tr_roll'] = rolling_tail_ratio(cumul_returns= df['tt_cumul'], \n window= window, percentile= 0.05,limit=5)\ndf['tr'] = expanding_tail_ratio(cumul_returns= df['tt_cumul'], percentile= 0.05,limit=5)\n\ndf['csr_roll'] = common_sense_ratio(pr= df['pr_roll'],tr= df['tr_roll'])\ndf['csr'] = common_sense_ratio(pr= df['pr'],tr= df['tr'])\n\ndf[window:] [['tt_cumul','csr_roll','csr'] ].plot(secondary_y= ['tt_cumul'],style = ['b','r-.','r'], figsize = (20,8),\n title= str(ticker)+' cumulative returns, Common Sense Ratios: cumulative & rolling '+str(window)+ ' days')\n\n\n","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### T-stat of gain expectancy, Van Tharp's System Quality Number (SQN)\n\nPlot: Softbank cumulative returns and t-stat (Van Tharp's SQN): cumulative and rolling"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 9: Risk is a Number\n\ndef expectancy(win_rate,avg_win,avg_loss): \n # win% * avg_win% - loss% * abs(avg_loss%) \n return win_rate * avg_win + (1-win_rate) * avg_loss \n\ndef t_stat(signal_count, trading_edge): \n sqn = (signal_count ** 0.5) * trading_edge / trading_edge.std(ddof=0) \n 
return sqn \n\n# Trade Count\ndf['trades'] = df.loc[(df['tt'].diff() !=0) & (pd.notnull(df['tt'])),'tt'].abs().cumsum()\nsignal_count = df['trades'].fillna(method='ffill')\nsignal_roll = signal_count.diff(window)\n\n# Rolling t_stat\nwindow = 252\nwin_roll = tt_log_returns.copy()\nwin_roll[win_roll < 0] = np.nan\nwin_rate_roll = win_roll.rolling(window,min_periods=0).count() / window\navg_win_roll = rolling_profits(returns = tt_log_returns,window = window) / window\navg_loss_roll = rolling_losses(returns = tt_log_returns,window = window) / window\n\nedge_roll= expectancy(win_rate= win_rate_roll,avg_win=avg_win_roll,avg_loss=avg_loss_roll)\ndf['sqn_roll'] = t_stat(signal_count= signal_roll, trading_edge=edge_roll)\n\n# Cumulative t-stat\ntt_win_count = tt_log_returns[tt_log_returns>0].expanding().count().fillna(method='ffill')\ntt_count = tt_log_returns[tt_log_returns!=0].expanding().count().fillna(method='ffill')\n\nwin_rate = (tt_win_count / tt_count).fillna(method='ffill')\navg_win = expanding_profits(returns= tt_log_returns) / tt_count\navg_loss = expanding_losses(returns= tt_log_returns) / tt_count\ntrading_edge = expectancy(win_rate,avg_win,avg_loss).fillna(method='ffill')\ndf['sqn'] = t_stat(signal_count, trading_edge)\n\ndf[window:][['tt_cumul','sqn','sqn_roll'] ].plot(figsize = (20,8),\n secondary_y= ['tt_cumul'], grid= True,style = ['b','y','y-.'], \n title= str(ticker)+' Cumulative Returns and SQN: cumulative & rolling'+ str(window)+' days')","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Robustness score\n\nCombined risk metric:\n1. The Grit Index integrates losses throughout the period\n2. The CSR combines risks endemic to the two types of strategies in a single measure\n3. 
The t-stat SQN incorporates trading frequency into the trading edge formula to show the most efficient use of capital."},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 9: Risk is a Number\n\ndef robustness_score(grit,csr,sqn): \n start_date = max(grit[pd.notnull(grit)].index[0],\n csr[pd.notnull(csr)].index[0],\n sqn[pd.notnull(sqn)].index[0])\n score = grit * csr * sqn / (grit[start_date] * csr[start_date] * sqn[start_date])\n return score\n\ndf['score_roll'] = robustness_score(grit = df['grit_roll'], csr = df['csr_roll'],sqn= df['sqn_roll'])\ndf['score'] = robustness_score(grit = df['grit'],csr = df['csr'],sqn = df['sqn'])\ndf[window:][['tt_cumul','score','score_roll']].plot(\n secondary_y= ['score'],figsize=(20,6),style = ['b','k','k-.'], \n title= str(ticker)+' Cumulative Returns and Robustness Score: cumulative & rolling '+ str(window)+' days')","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]}],"metadata":{"kernelspec":{"name":"python3","display_name":"Python 3","language":"python"},"language_info":{"name":"python","version":"3.6.13","mimetype":"text/x-python","codemirror_mode":{"name":"ipython","version":3},"pygments_lexer":"ipython3","nbconvert_exporter":"python","file_extension":".py"}},"nbformat":4,"nbformat_minor":4} -------------------------------------------------------------------------------- /Chapter 11/Chapter 11.ipynb: -------------------------------------------------------------------------------- 1 | {"cells":[{"metadata":{},"cell_type":"markdown","source":"# Preliminary instruction\n\nTo follow the code in this chapter, the `yfinance` package must be installed in your environment. 
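The robustness score above multiplies the Grit, CSR, and SQN series and rebases the product to the first date on which all three metrics are defined. A minimal sketch on toy numbers (the series values and the `first_valid_index` shortcut are illustrative assumptions, not the book's data):

```python
import pandas as pd
import numpy as np

# Stand-ins for the grit, CSR and SQN columns; values are illustrative only
idx = pd.date_range('2021-01-04', periods=5, freq='B')
grit = pd.Series([np.nan, 2.0, 2.2, 2.5, 2.4], index=idx)
csr = pd.Series([1.0, 1.1, 1.2, 1.3, 1.2], index=idx)
sqn = pd.Series([np.nan, np.nan, 1.5, 1.8, 2.0], index=idx)

def robustness_score(grit, csr, sqn):
    # Rebase the product to 1 on the first date where all three metrics exist
    start_date = max(s.first_valid_index() for s in (grit, csr, sqn))
    prod = grit * csr * sqn
    return prod / prod[start_date]

score = robustness_score(grit, csr, sqn)
```

The score therefore reads as a multiple of the strategy's initial combined quality: above 1 means Grit, CSR, and SQN have jointly improved since the first measurable date.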
If you do not have this installed yet, review Chapter 4 for instructions on how to do so."},{"metadata":{},"cell_type":"markdown","source":"# Chapter 11: The Long/Short Toolbox"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 11: The Long/Short Toolbox\n\nimport pandas as pd\nimport numpy as np\nimport yfinance as yf\n%matplotlib inline\nimport matplotlib.pyplot as plt","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":" # Convexity configuration\n We saw the risk appetite function in chapter 8. What works at the stock level for each entry also works for open risk as an aggregate"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 8: Position Sizing: Money is Made in the Money Management Module\n\ndef risk_appetite(eqty, tolerance, mn, mx, span, shape):\n '''\n eqty: equity curve series\n tolerance: tolerance for drawdown (<0)\n mn: min risk\n mx: max risk\n span: exponential moving average to smoothe the risk_appetite\n shape: convex (>45 deg diagonal) = 1, concave (0]\nport_short = port[port['Side']<0]\n\nconcentration = (port_long['Side'].count()-port_short['Side'].count())/port['Side'].count()\ngross = round(abs(MV).sum() / K,3) \nnet = round(MV.sum()/abs(MV).sum(),3)\nnet_Beta = round((MV* port['Beta']).sum()/abs(MV).sum(),2)\nprint('Gross Exposure',gross,'Net Exposure',net,'Net Beta',net_Beta,'concentration',concentration)\nrnet = round(rMV.sum()/abs(rMV).sum(),3)\nrnet_Beta = round((rMV* port['Beta']).sum()/abs(rMV).sum(),2)\nprint('rGross Exposure',gross,'rNet Exposure',rnet,'rNet Beta',rnet_Beta)\n","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Portfolio\n\n1. R (distance from cost to stop loss) is a measure popularised by Dr Van Tharp. Here, we will be using the relative version of R, or rR\n2. Weight: MV in fund currency (USD) divided by the absolute sum total of MV\n3. rRisk is the weighted relative risk to the equity. 
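The exposure aggregates above reduce to a few lines of arithmetic. A toy book (hypothetical shares, prices, and betas, not from the book) makes the gross, net, and net-beta calculations concrete:

```python
import pandas as pd

# Toy long/short book: all numbers are hypothetical
port = pd.DataFrame({
    'Shares': [1000, 500, -800, -300],   # longs positive, shorts negative
    'Price': [10.0, 20.0, 15.0, 30.0],
    'Beta': [1.2, 0.9, 1.1, 0.7],
})
K = 100_000  # fund equity in USD

MV = port['Shares'] * port['Price']                 # signed market values
gross = round(abs(MV).sum() / K, 3)                 # gross exposure vs equity
net = round(MV.sum() / abs(MV).sum(), 3)            # net exposure vs gross
net_beta = round((MV * port['Beta']).sum() / abs(MV).sum(), 2)
print('Gross', gross, 'Net', net, 'Net Beta', net_beta)
```

Here the book is 41% gross invested, roughly market-neutral in dollar terms, yet still carries a small positive beta tilt: the two measures answer different questions.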
As soon as the stop loss is reset beyond cost, this turns negative. This ensures that the open risk remains negative\n4. rRAR is the relative returns expressed in units of initial relative risks. This is the truest and simplest risk-adjusted returns measure, our workhorse \n5. rCTR and CTR are relative and absolute contributions, or P&L divided by equity."},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 11: The Long/Short Toolbox\n\nport[['Side','Weight', 'rRisk', 'rRAR', 'rCTR', 'CTR']].sort_values(by=['Side','rRAR'] )","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Aggregates by side"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 11: The Long/Short Toolbox\n\nprint(port[['Side', 'Weight', 'rRisk', 'rRAR', 'rCTR', 'CTR']].groupby('Side').sum())","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"### Pro rated risk adjustment\nWe will pro-rate open risk by side and divide by risk-adjusted returns. This will rank positions by side and open risk-adjusted returns. Those that have contributed the least are by definition the riskiest ones. \n1. Factor the risk reduction of -1% into the pro rata to calculate the number of shares\n2. Multiply by the capital and divide by the relative distance between cost and stop loss, rR, to obtain the exact number of shares\n3. 

The risk reduction cannot be larger than the existing number of shares"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 11: The Long/Short Toolbox\n\nadjust_long = adjust_short = -0.01 \n\npro_rata_long = port_long['rRisk'] / (port_long['rRisk'].sum() * port_long['rRAR'])\nrisk_adj_long = (abs(adjust_long) * pro_rata_long * K / port_long['rR'] // lot) * lot\nshares_adj_long = np.minimum(risk_adj_long, port_long['Shares'])*np.sign(adjust_long)\n\npro_rata_short = port_short['rRisk'] / (port_short['rRisk'].sum() * port_short['rRAR'])\nrisk_adj_short = (abs(adjust_short) * pro_rata_short * K / port_short['rR'] // lot)*lot\nshares_adj_short = np.maximum(risk_adj_short,port_short['Shares'])*np.sign(adjust_short)\n\nport['Qty_adj'] = shares_adj_short.append(shares_adj_long)\nport['Shares_adj'] = port['Shares'] + port['Qty_adj']\nport['rRisk_adj'] = -round(np.maximum(0,(port['rR'] * port['Shares_adj'])/K),4)\nMV_adj= port['Shares_adj'] * port['Price']\nrMV_adj = port['Shares_adj'] * port['rPrice']\nport['Weight_adj'] = round(MV_adj.div(abs(MV_adj).sum()),3)\n\nprint(port[['Side','rRAR','rRisk','rRisk_adj','Shares','Qty_adj', 'Shares_adj', 'Weight','Weight_adj']].groupby('Side').sum())\n","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Aggregates"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 11: The Long/Short Toolbox\n\nport[['Side','rRAR','rRisk','rRisk_adj','Shares','Qty_adj', 'Shares_adj', 'Weight','Weight_adj']].groupby('Side').sum()","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Portfolio before and after adjustment"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 11: The Long/Short Toolbox\n\nport[['Side','rRAR','rRisk','rRisk_adj','Shares','Qty_adj', 'Shares_adj', 'Weight','Weight_adj']].sort_values(\n by=['Side','rRisk_adj' ], 
ascending=[True,False])","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Aggregates before and after adjustment"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 11: The Long/Short Toolbox\n\nprint('Gross Exposure',gross,'Net Exposure',net,'Net Beta',net_Beta,'concentration',concentration)\ngross_adj = round(abs(MV_adj).sum() / K,3) \nnet_adj = round(MV_adj.sum()/abs(MV_adj).sum(),3)\nnet_Beta_adj = round((MV_adj* port['Beta']).sum()/abs(MV_adj).sum(),2)\nnet_pos_adj = port.loc[port['Shares_adj'] >0,'Shares_adj'].count()-port.loc[port['Shares_adj'] <0,'Shares_adj'].count()\nprint('Gross Exposure adj',gross_adj,'Net Exposure_adj',net_adj,\n 'Net Beta_adj',net_Beta_adj,'concentration adj',net_pos_adj)\nrnet_adj = round(rMV_adj.sum()/abs(rMV_adj).sum(),3)\nrnet_Beta_adj = round((rMV_adj* port['Beta']).sum()/abs(rMV_adj).sum(),2)\nprint('Gross Exposure adj',gross_adj,'rNet Exposure adj',rnet_adj,'rNet Beta adj',rnet_Beta_adj)\n","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]}],"metadata":{"kernelspec":{"name":"python3","display_name":"Python 3","language":"python"},"language_info":{"name":"python","version":"3.6.13","mimetype":"text/x-python","codemirror_mode":{"name":"ipython","version":3},"pygments_lexer":"ipython3","nbconvert_exporter":"python","file_extension":".py"}},"nbformat":4,"nbformat_minor":4} -------------------------------------------------------------------------------- /Chapter 12/Chapter 12.ipynb: -------------------------------------------------------------------------------- 1 | {"cells":[{"metadata":{},"cell_type":"markdown","source":"# Preliminary instruction\n\nTo follow the code in this chapter, the `yfinance` package must be installed in your environment. 
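The pro-rata trim above can be checked on a two-position toy long book (all numbers hypothetical): positions with larger open risk and lower risk-adjusted returns absorb more of the -1% risk reduction.

```python
import numpy as np
import pandas as pd

# Hypothetical two-position long book: open risk (rRisk < 0), risk-adjusted
# returns (rRAR) and relative cost-to-stop distance (rR) are made-up numbers
port_long = pd.DataFrame({
    'rRisk': [-0.004, -0.002],
    'rRAR': [0.5, 2.0],
    'rR': [0.5, 0.4],
    'Shares': [4000, 1500],
})
K, lot = 100_000, 100
adjust_long = -0.01  # trim one percent of equity worth of open risk

# Laggards (low rRAR) with large open risk absorb more of the cut
pro_rata = port_long['rRisk'] / (port_long['rRisk'].sum() * port_long['rRAR'])
risk_adj = (abs(adjust_long) * pro_rata * K / port_long['rR'] // lot) * lot
shares_adj = np.minimum(risk_adj, port_long['Shares']) * np.sign(adjust_long)
print(shares_adj.tolist())
```

The first position (twice the open risk, a quarter of the risk-adjusted return) gives up 2,600 shares against 400 for the second, rounded down to the lot size and capped at the existing position.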
If you do not have this installed yet, review Chapter 4 for instructions on how to do so."},{"metadata":{},"cell_type":"markdown","source":"# Chapter 12: Signals and Execution "},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 12: Signals and Execution \n\nimport pandas as pd\nimport numpy as np\nimport yfinance as yf\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom scipy.signal import find_peaks","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Target price and scale out quantity\n\n1. Target price is not an exercise in subjective fair valuation. It is a risk management tool\n2. Partial exit is how much of the position should be closed for the remainder to go on as a free carry"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 12: Signals and Execution \n\ndef target_price(price, stop_loss, r_multiplier):\n r = price - stop_loss\n return price + r * r_multiplier\n\ndef partial_exit(qty, r_multiplier):\n if (qty * r_multiplier)!= 0:\n fraction = qty / r_multiplier\n else:\n fraction = 0\n return fraction\n\nprice = 100 \nstop_loss = 110 \nqty = 2000 \nr_multiplier = 2 \n\npt = target_price(price, stop_loss, r_multiplier) \nexit_qty = partial_exit(qty, r_multiplier) \nprint('target price', pt,'exit_quantity',exit_qty) ","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Position sizing functions\nOne stop shop to size positions"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 12: Signals and Execution \n\ndef risk_appetite(eqty, tolerance, mn, mx, span, shape):\n '''\n eqty: equity curve series\n tolerance: tolerance for drawdown (<0)\n mn: min risk\n mx: max risk\n span: exponential moving average to smoothe the risk_appetite\n shape: convex (>45 deg diagonal) = 1, concave (0:\n ax1.plot(df.index, df[lo],'.' ,color='r', label= 'swing low',alpha= 0.6)\n if df[hi].count()>0:\n ax1.plot(df.index, df[hi],'.' 
,color='g', label= 'swing high',alpha= 0.6) \n if df[slo].count()>0:\n ax1.plot(df.index, df[slo],'o' ,color='r', label= 'swing low',alpha= 0.8)\n if df[shi].count()>0:\n ax1.plot(df.index, df[shi],'o' ,color='g', label= 'swing high',alpha= 0.8)\n if df[flr].count()>0:\n plt.scatter(df.index, df[flr],c='k',marker='^',label='floor')\n if df[clg].count() >0:\n plt.scatter(df.index, df[clg],c='k',marker='v',label='ceiling')\n\n ax1.plot([],[],linewidth=5, label= 'bear', color='m',alpha=0.1)\n ax1.plot([],[],linewidth=5 , label= 'bull', color='b',alpha=0.1)\n ax1.fill_between(date, close, base,where=((regime==1)&(close > base)), facecolor='b', alpha=0.1)\n ax1.fill_between(date, close, base,where=((regime==1)&(close < base)), facecolor='b', alpha=0.4)\n ax1.fill_between(date, close, base,where=((regime==-1)&(close < base)), facecolor='m', alpha=0.1)\n ax1.fill_between(date, close, base,where=((regime==-1)&(close > base)), facecolor='m', alpha=0.4)\n\n if np.sum(ma_st) >0 :\n ax1.plot(df.index,ma_st,'-' ,color='lime', label= 'ST MA')\n ax1.plot(df.index,ma_mt,'-' ,color='green', label= 'MT MA')\n ax1.plot(df.index,ma_lt,'-' ,color='red', label= 'LT MA')\n\n if pd.notnull(rg): # floor/ceiling regime present\n # Profitable conditions\n ax1.fill_between(date,close, ma_mt,where=((regime==1)&(ma_mt >= ma_lt)&(ma_st>=ma_mt)), \n facecolor='green', alpha=0.5) \n ax1.fill_between(date,close, ma_mt,where=((regime==-1)&(ma_mt <= ma_lt)&(ma_st <= ma_mt)), \n facecolor='red', alpha=0.5)\n # Unprofitable conditions\n ax1.fill_between(date,close, ma_mt,where=((regime==1)&(ma_mt>=ma_lt)&(ma_st>=ma_mt)&(close<ma_mt)), \n facecolor='darkgreen', alpha=1) \n ax1.fill_between(date,close, ma_mt,where=((regime==-1)&(ma_mt<=ma_lt)&(ma_st<=ma_mt)&(close>=ma_mt)), \n facecolor='darkred', alpha=1)\n\n elif pd.isnull(rg): # floor/ceiling regime absent\n # Profitable conditions\n ax1.fill_between(date,close, ma_mt,where=((ma_mt >= ma_lt)&(ma_st>=ma_mt)), \n facecolor='green', alpha=0.4) \n ax1.fill_between(date,close, ma_mt,where=((ma_mt <= ma_lt)&(ma_st <= ma_mt)), \n facecolor='red', alpha=0.4)\n # Unprofitable conditions\n 
ax1.fill_between(date,close, ma_mt,where=((ma_mt >= ma_lt)&(ma_st >= ma_mt)&(close < ma_mt)), \n facecolor='darkgreen', alpha=1) \n ax1.fill_between(date,close, ma_mt,where=((ma_mt <= ma_lt)&(ma_st <= ma_mt)&(close >= ma_mt)), \n facecolor='darkred', alpha=1)\n\n if (np.sum(lt_hi) > 0): # LT range breakout\n ax1.plot([],[],linewidth=5, label= ' LT High', color='m',alpha=0.2)\n ax1.plot([],[],linewidth=5, label= ' LT Low', color='b',alpha=0.2)\n\n if pd.notnull(rg): # floor/ceiling regime present\n ax1.fill_between(date, close, lt_lo,\n where=((regime ==1) & (close > lt_lo) ), \n facecolor='b', alpha=0.2)\n ax1.fill_between(date,close, lt_hi,\n where=((regime ==-1) & (close < lt_hi)), \n facecolor='m', alpha=0.2)\n if (np.sum(st_hi) > 0): # ST range breakout\n ax1.fill_between(date, close, st_lo,\n where=((regime ==1)&(close > st_lo) ), \n facecolor='b', alpha=0.3)\n ax1.fill_between(date,close, st_hi,\n where=((regime ==-1) & (close < st_hi)), \n facecolor='m', alpha=0.3)\n\n elif pd.isnull(rg): # floor/ceiling regime absent \n ax1.fill_between(date, close, lt_lo,\n where=((close > lt_lo) ), facecolor='b', alpha=0.2)\n ax1.fill_between(date,close, lt_hi,\n where=((close < lt_hi)), facecolor='m', alpha=0.2)\n if (np.sum(st_hi) > 0): # ST range breakout\n ax1.fill_between(date, close, st_lo,\n where=((close > st_lo) & (st_lo >= lt_lo)), facecolor='b', alpha=0.3)\n ax1.fill_between(date,close, st_hi,\n where=((close < st_hi)& (st_hi <= lt_hi)), facecolor='m', alpha=0.3)\n\n if (np.sum(st_hi) > 0): # ST range breakout\n ax1.plot([],[],linewidth=5, label= ' ST High', color='m',alpha=0.3)\n ax1.plot([],[],linewidth=5, label= ' ST Low', color='b',alpha=0.3)\n\n ax1.plot(df.index, lt_lo,'-.' ,color='b', label= 'LT low',alpha=0.2)\n ax1.plot(df.index, lt_hi,'-.' 
,color='m', label= 'LT high',alpha=0.2)\n except:\n pass\n \n for label in ax1.xaxis.get_ticklabels():\n label.set_rotation(45)\n ax1.grid(True)\n ax1.xaxis.label.set_color('k')\n ax1.yaxis.label.set_color('k')\n plt.xlabel('Date')\n plt.ylabel(str.upper(ticker) + ' Price')\n plt.title(str.upper(ticker))\n plt.legend()\n### Graph Regimes Combo ###\n\n\n\n","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"#### Putting everything together\n\n1. Relative function\n2. Calculate swings for the floor/ceiling method:\n 1. import scipy.signals library\n 2. hilo_alternation\n 3. historical_swings\n 4. cleanup_latest_swing\n 5. latest_swing_variables\n 6. test_distance\n 7. average_true_range\n 8. retest_swing\n 9. retracement_swing\n3. regime_floor_ceiling"},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 12: Signals and Execution \n\n### RELATIVE\ndef relative(df,_o,_h,_l,_c, bm_df, bm_col, ccy_df, ccy_col, dgt, start, end,rebase=True):\n '''\n df: df\n bm_df, bm_col: df benchmark dataframe & column name\n ccy_df,ccy_col: currency dataframe & column name\n dgt: rounding decimal\n start/end: string or offset\n rebase: boolean rebase to beginning or continuous series\n '''\n # Slice df dataframe from start to end period: either offset or datetime\n df = df[start:end] \n \n # inner join of benchmark & currency: only common values are preserved\n df = df.join(bm_df[[bm_col]],how='inner') \n df = df.join(ccy_df[[ccy_col]],how='inner')\n\n # rename benchmark name as bm and currency as ccy\n df.rename(columns={bm_col:'bm', ccy_col:'ccy'},inplace=True)\n\n # Adjustment factor: calculate the scalar product of benchmark and currency\n df['bmfx'] = round(df['bm'].mul(df['ccy']),dgt).fillna(method='ffill')\n if rebase == True:\n df['bmfx'] = df['bmfx'].div(df['bmfx'][0])\n\n # Divide absolute price by fxcy adjustment factor and rebase to first value\n df['r' + str(_o)] = round(df[_o].div(df['bmfx']),dgt)\n df['r' + str(_h)] = 
round(df[_h].div(df['bmfx']),dgt)\n df['r'+ str(_l)] = round(df[_l].div(df['bmfx']),dgt)\n df['r'+ str(_c)] = round(df[_c].div(df['bmfx']),dgt)\n df = df.drop(['bm','ccy','bmfx'],axis=1)\n \n return (df)\n\n### RELATIVE ###\n\n","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 12: Signals and Execution \n\nfrom scipy.signal import *","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 12: Signals and Execution \n\n#### hilo_alternation(hilo, dist= None, hurdle= None) ####\ndef hilo_alternation(hilo, dist= None, hurdle= None):\n i=0 \n while (np.sign(hilo.shift(1)) == np.sign(hilo)).any(): # runs until duplicates are eliminated\n\n # removes swing lows > swing highs\n hilo.loc[(np.sign(hilo.shift(1)) != np.sign(hilo)) & # hilo alternation test\n (hilo.shift(1)<0) & # previous datapoint: high\n (np.abs(hilo.shift(1)) < np.abs(hilo) )] = np.nan # high[-1] < low, eliminate low \n\n hilo.loc[(np.sign(hilo.shift(1)) != np.sign(hilo)) & # hilo alternation\n (hilo.shift(1)>0) & # previous swing: low\n (np.abs(hilo ) < hilo.shift(1))] = np.nan # swing high < swing low[-1]\n\n # alternation test: removes duplicate swings & keep extremes\n hilo.loc[(np.sign(hilo.shift(1)) == np.sign(hilo)) & # same sign\n (hilo.shift(1) < hilo )] = np.nan # keep lower one\n\n hilo.loc[(np.sign(hilo.shift(-1)) == np.sign(hilo)) & # same sign, forward looking \n (hilo.shift(-1) < hilo )] = np.nan # keep forward one\n\n # removes noisy swings: distance test\n if pd.notnull(dist):\n hilo.loc[(np.sign(hilo.shift(1)) != np.sign(hilo))&\\\n (np.abs(hilo + hilo.shift(1)).div(dist, fill_value=1)< hurdle)] = np.nan\n\n # reduce hilo after each pass\n hilo = hilo.dropna().copy() \n i+=1\n if i == 4: # breaks infinite loop\n break \n return hilo\n#### hilo_alternation(hilo, dist= None, hurdle= None) ####\n\n#### historical_swings(df,_o,_h,_l,_c, dist= None, hurdle= None) #### \ndef 
historical_swings(df,_o,_h,_l,_c, dist= None, hurdle= None):\n \n reduction = df[[_o,_h,_l,_c]].copy() \n reduction['avg_px'] = round(reduction[[_h,_l,_c]].mean(axis=1),2)\n highs = reduction['avg_px'].values\n lows = - reduction['avg_px'].values\n reduction_target = len(reduction) // 100\n# print(reduction_target )\n\n n = 0\n while len(reduction) >= reduction_target: \n highs_list = find_peaks(highs, distance = 1, width = 0)\n lows_list = find_peaks(lows, distance = 1, width = 0)\n hilo = reduction.iloc[lows_list[0]][_l].sub(reduction.iloc[highs_list[0]][_h],fill_value=0)\n\n # Reduction dataframe and alternation loop\n hilo_alternation(hilo, dist= None, hurdle= None)\n reduction['hilo'] = hilo\n\n # Populate reduction df\n n += 1 \n reduction[str(_h)[:2]+str(n)] = reduction.loc[reduction['hilo']<0 ,_h]\n reduction[str(_l)[:2]+str(n)] = reduction.loc[reduction['hilo']>0 ,_l]\n\n # Populate main dataframe\n df[str(_h)[:2]+str(n)] = reduction.loc[reduction['hilo']<0 ,_h]\n df[str(_l)[:2]+str(n)] = reduction.loc[reduction['hilo']>0 ,_l]\n \n # Reduce reduction\n reduction = reduction.dropna(subset= ['hilo'])\n reduction.fillna(method='ffill', inplace = True)\n highs = reduction[str(_h)[:2]+str(n)].values\n lows = -reduction[str(_l)[:2]+str(n)].values\n \n if n >= 9:\n break\n \n return df\n#### historical_swings(df,_o,_h,_l,_c, dist= None, hurdle= None) ####\n\n","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 12: Signals and Execution \n\n#### cleanup_latest_swing(df, shi, slo, rt_hi, rt_lo) ####\ndef cleanup_latest_swing(df, shi, slo, rt_hi, rt_lo): \n '''\n removes false positives\n '''\n # latest swing\n shi_dt = df.loc[pd.notnull(df[shi]), shi].index[-1]\n s_hi = df.loc[pd.notnull(df[shi]), shi][-1]\n slo_dt = df.loc[pd.notnull(df[slo]), slo].index[-1] \n s_lo = df.loc[pd.notnull(df[slo]), slo][-1] \n 
len_shi_dt = len(df[:shi_dt])\n len_slo_dt = len(df[:slo_dt])\n \n\n # Reset false positives to np.nan\n for i in range(2):\n \n if (len_shi_dt > len_slo_dt) & ((df.loc[shi_dt:,rt_hi].max()> s_hi) | (s_hi<s_lo)):\n df.loc[shi_dt, shi] = np.nan\n len_shi_dt = 0\n elif (len_slo_dt > len_shi_dt) & ((df.loc[slo_dt:,rt_lo].min()< s_lo)| (s_hi<s_lo)):\n df.loc[slo_dt, slo] = np.nan\n len_slo_dt = 0\n else:\n pass\n \n return df\n#### cleanup_latest_swing(df, shi, slo, rt_hi, rt_lo) ####\n\n#### latest_swings(df, shi, slo, rt_hi, rt_lo, _h, _l, _c, _vol) ####\ndef latest_swing_variables(df, shi, slo, rt_hi, rt_lo, _h, _l, _c, _vol):\n '''\n Latest swings dates & values\n '''\n shi_dt = df.loc[pd.notnull(df[shi]), shi].index[-1]\n slo_dt = df.loc[pd.notnull(df[slo]), slo].index[-1]\n s_hi = df.loc[pd.notnull(df[shi]), shi][-1]\n s_lo = df.loc[pd.notnull(df[slo]), slo][-1]\n \n if slo_dt > shi_dt: \n swg_var = [1,s_lo,slo_dt,rt_lo,shi, df.loc[slo_dt:,_h].max(), df.loc[slo_dt:, _h].idxmax()] \n elif shi_dt > slo_dt: \n swg_var = [-1,s_hi,shi_dt,rt_hi,slo, df.loc[shi_dt:, _l].min(),df.loc[shi_dt:, _l].idxmin()] \n else: \n ud = 0\n ud, bs, bs_dt, _rt, _swg, hh_ll, hh_ll_dt = [swg_var[h] for h in range(len(swg_var))] \n \n return ud, bs, bs_dt, _rt, _swg, hh_ll, hh_ll_dt\n#### latest_swings(df, shi, slo, rt_hi, rt_lo, _h, _l, _c, _vol) ####","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"\n#### test_distance(ud, bs, hh_ll, vlty, dist_vol, dist_pct) ####\ndef test_distance(ud,bs, hh_ll, dist_vol, dist_pct): \n \n # priority: 1. Vol 2. pct 3. 
dflt\n if (dist_vol > 0): \n distance_test = np.sign(abs(hh_ll - bs) - dist_vol)\n elif (dist_pct > 0):\n distance_test = np.sign(abs(hh_ll / bs - 1) - dist_pct)\n else:\n distance_test = np.sign(dist_pct)\n \n return int(max(distance_test,0) * ud)\n#### test_distance(ud, bs, hh_ll, vlty, dist_vol, dist_pct) ####\n\n#### ATR ####\ndef average_true_range(df, _h, _l, _c, n):\n '''\n http://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:average_true_range_atr\n '''\n atr = (df[_h].combine(df[_c].shift(), max) - df[_l].combine(df[_c].shift(), min)).rolling(window=n).mean()\n return atr\n\n#### ATR ####\n","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 12: Signals and Execution \n\n#### retest_swing(df, _sign, _rt, hh_ll_dt, hh_ll, _c, _swg) ####\ndef retest_swing(df, _sign, _rt, hh_ll_dt, hh_ll, _c, _swg):\n rt_sgmt = df.loc[hh_ll_dt:, _rt] \n\n if (rt_sgmt.count() > 0) & (_sign != 0): # Retests exist and distance test met \n if _sign == 1: # swing high\n rt_list = [rt_sgmt.idxmax(), rt_sgmt.max(), df.loc[rt_sgmt.idxmax():, _c].cummin()]\n \n elif _sign == -1: # swing low\n rt_list = [rt_sgmt.idxmin(), rt_sgmt.min(), df.loc[rt_sgmt.idxmin():, _c].cummax()]\n rt_dt,rt_hurdle, rt_px = [rt_list[h] for h in range(len(rt_list))]\n\n if str(_c)[0] == 'r':\n df.loc[rt_dt,'rrt'] = rt_hurdle\n elif str(_c)[0] != 'r':\n df.loc[rt_dt,'rt'] = rt_hurdle \n\n if (np.sign(rt_px - rt_hurdle) == - np.sign(_sign)).any():\n df.at[hh_ll_dt, _swg] = hh_ll \n return df\n#### retest_swing(df, _sign, _rt, hh_ll_dt, hh_ll, _c, _swg) ####\n","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 12: Signals and Execution \n\n#### retracement_swing(df, _sign, _swg, _c, 
hh_ll_dt, hh_ll, vlty, retrace_vol, retrace_pct) ####\ndef retracement_swing(df, _sign, _swg, _c, hh_ll_dt, hh_ll, vlty, retrace_vol, retrace_pct):\n if _sign == 1: # swing high\n retracement = df.loc[hh_ll_dt:, _c].min() - hh_ll\n\n if (vlty > 0) & (retrace_vol > 0) & ((abs(retracement / vlty) - retrace_vol) > 0):\n df.at[hh_ll_dt, _swg] = hh_ll\n elif (retrace_pct > 0) & ((abs(retracement / hh_ll) - retrace_pct) > 0):\n df.at[hh_ll_dt, _swg] = hh_ll\n\n elif _sign == -1: # swing low\n retracement = df.loc[hh_ll_dt:, _c].max() - hh_ll\n if (vlty > 0) & (retrace_vol > 0) & ((round(retracement / vlty ,1) - retrace_vol) > 0):\n df.at[hh_ll_dt, _swg] = hh_ll\n elif (retrace_pct > 0) & ((round(retracement / hh_ll , 4) - retrace_pct) > 0):\n df.at[hh_ll_dt, _swg] = hh_ll\n else:\n retracement = 0\n return df\n#### retracement_swing(df, _sign, _swg, _c, hh_ll_dt, hh_ll, vlty, retrace_vol, retrace_pct) ####\n\n","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 12: Signals and Execution \n\n#### regime_floor_ceiling(df, hi,lo,cl, slo, shi,flr,clg,rg,rg_ch,stdev,threshold) ####\ndef regime_floor_ceiling(df, _h,_l,_c,slo, shi,flr,clg,rg,rg_ch,stdev,threshold):\n # Lists instantiation\n threshold_test,rg_ch_ix_list,rg_ch_list = [],[], []\n floor_ix_list, floor_list, ceiling_ix_list, ceiling_list = [],[],[],[]\n\n ### Range initialisation to 1st swing\n floor_ix_list.append(df.index[0])\n ceiling_ix_list.append(df.index[0])\n \n ### Boolean variables\n ceiling_found = floor_found = breakdown = breakout = False\n\n ### Swings lists\n swing_highs = list(df[pd.notnull(df[shi])][shi])\n swing_highs_ix = list(df[pd.notnull(df[shi])].index)\n swing_lows = list(df[pd.notnull(df[slo])][slo])\n swing_lows_ix = list(df[pd.notnull(df[slo])].index)\n loop_size = np.maximum(len(swing_highs),len(swing_lows))\n\n ### Loop through swings\n 
for i in range(loop_size): \n\n        ### asymmetric swing lists: default to last swing if one list is shorter\n        try:\n            s_lo_ix = swing_lows_ix[i]\n            s_lo = swing_lows[i]\n        except IndexError:\n            s_lo_ix = swing_lows_ix[-1]\n            s_lo = swing_lows[-1]\n\n        try:\n            s_hi_ix = swing_highs_ix[i]\n            s_hi = swing_highs[i]\n        except IndexError:\n            s_hi_ix = swing_highs_ix[-1]\n            s_hi = swing_highs[-1]\n\n        swing_max_ix = np.maximum(s_lo_ix,s_hi_ix) # latest swing index\n\n        ### CLASSIC CEILING DISCOVERY\n        if (ceiling_found == False): \n            top = df[floor_ix_list[-1] : s_hi_ix][_h].max()\n            ceiling_test = round((s_hi - top) / stdev[s_hi_ix] ,1) \n\n            ### Classic ceiling test\n            if ceiling_test <= -threshold: \n                ### Boolean flags reset\n                ceiling_found = True \n                floor_found = breakdown = breakout = False \n                threshold_test.append(ceiling_test)\n\n                ### Append lists\n                ceiling_list.append(top)\n                ceiling_ix_list.append(df[floor_ix_list[-1]: s_hi_ix][_h].idxmax()) \n                rg_ch_ix_list.append(s_hi_ix)\n                rg_ch_list.append(s_hi) \n\n        ### EXCEPTION HANDLING: price penetrates discovery swing\n        ### 1. if ceiling found, calculate regime since rg_ch_ix using close.cummax\n        elif (ceiling_found == True):\n            close_high = df[rg_ch_ix_list[-1] : swing_max_ix][_c].cummax()\n            df.loc[rg_ch_ix_list[-1] : swing_max_ix, rg] = np.sign(close_high - rg_ch_list[-1])\n\n            ### 2. if price.cummax penetrates swing high: regime turns bullish, breakout\n            if (df.loc[rg_ch_ix_list[-1] : swing_max_ix, rg] >0).any():\n                ### Boolean flags reset\n                floor_found = ceiling_found = breakdown = False\n                breakout = True\n\n        ### 3. 
if breakout, test for bearish pullback from highest high since rg_ch_ix\n        if (breakout == True):\n            brkout_high_ix = df.loc[rg_ch_ix_list[-1] : swing_max_ix, _c].idxmax()\n            brkout_low = df[brkout_high_ix : swing_max_ix][_c].cummin()\n            df.loc[brkout_high_ix : swing_max_ix, rg] = np.sign(brkout_low - rg_ch_list[-1])\n\n\n        ### CLASSIC FLOOR DISCOVERY \n        if (floor_found == False): \n            bottom = df[ceiling_ix_list[-1] : s_lo_ix][_l].min()\n            floor_test = round((s_lo - bottom) / stdev[s_lo_ix],1)\n\n            ### Classic floor test\n            if (floor_test >= threshold): \n                \n                ### Boolean flags reset\n                floor_found = True\n                ceiling_found = breakdown = breakout = False\n                threshold_test.append(floor_test)\n\n                ### Append lists\n                floor_list.append(bottom)\n                floor_ix_list.append(df[ceiling_ix_list[-1] : s_lo_ix][_l].idxmin()) \n                rg_ch_ix_list.append(s_lo_ix)\n                rg_ch_list.append(s_lo)\n\n        ### EXCEPTION HANDLING: price penetrates discovery swing\n        ### 1. if floor found, calculate regime since rg_ch_ix using close.cummin\n        elif (floor_found == True): \n            close_low = df[rg_ch_ix_list[-1] : swing_max_ix][_c].cummin()\n            df.loc[rg_ch_ix_list[-1] : swing_max_ix, rg] = np.sign(close_low - rg_ch_list[-1])\n\n            ### 2. if price.cummin penetrates swing low: regime turns bearish, breakdown\n            if (df.loc[rg_ch_ix_list[-1] : swing_max_ix, rg] <0).any():\n                ceiling_found = floor_found = breakout = False\n                breakdown = True \n\n        ### 3. 
if breakdown, test for bullish rebound from lowest low since rg_ch_ix\n        if (breakdown == True):\n            brkdwn_low_ix = df.loc[rg_ch_ix_list[-1] : swing_max_ix, _c].idxmin() # lowest low \n            breakdown_rebound = df[brkdwn_low_ix : swing_max_ix][_c].cummax() # rebound\n            df.loc[brkdwn_low_ix : swing_max_ix, rg] = np.sign(breakdown_rebound - rg_ch_list[-1])\n# breakdown = False\n# breakout = True \n\n    ### POPULATE FLOOR, CEILING, RG CHANGE COLUMNS\n    df.loc[floor_ix_list[1:], flr] = floor_list\n    df.loc[ceiling_ix_list[1:], clg] = ceiling_list\n    df.loc[rg_ch_ix_list, rg_ch] = rg_ch_list\n    df[rg_ch] = df[rg_ch].fillna(method='ffill')\n\n    ### regime from last swing\n    df.loc[swing_max_ix:,rg] = np.where(ceiling_found, # if ceiling found, highest high since rg_ch_ix\n                                        np.sign(df[swing_max_ix:][_c].cummax() - rg_ch_list[-1]),\n                                        np.where(floor_found, # if floor found, lowest low since rg_ch_ix\n                                                 np.sign(df[swing_max_ix:][_c].cummin() - rg_ch_list[-1]),\n                                                 np.sign(df[swing_max_ix:][_c].rolling(5).mean() - rg_ch_list[-1]))) \n    df[rg] = df[rg].fillna(method='ffill')\n#     df[rg+'_no_fill'] = df[rg]\n    return df\n\n#### regime_floor_ceiling(df, hi,lo,cl, slo, shi,flr,clg,rg,rg_ch,stdev,threshold) ####","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"scrolled":false,"trusted":true},"cell_type":"code","source":"# Chapter 12: Signals and Execution \n\nparams = ['2014-12-31', None, 63, 0.05, 0.05, 1.5, 2]\nstart, end, vlty_n,dist_pct,retrace_pct,threshold,dgt= [params[h] for h in range(len(params))]\n\nrel_var = ['^GSPC','SP500', 'USD']\nbm_ticker, bm_col, ccy_col = [rel_var[h] for h in range(len(rel_var))]\nbm_df = pd.DataFrame()\nbm_df[bm_col] = round(yf.download(tickers= bm_ticker,start= start, end = end,interval = \"1d\",\n                    group_by = 'column',auto_adjust = True, prepost = True, \n                    threads = True, proxy = None)['Close'],dgt)\nbm_df[ccy_col] = 1\n\nticker = 'WFC'\ndf = round(yf.download(tickers= 
ticker,start= start, end = end,interval = \"1d\",\n                    group_by = 'column',auto_adjust = True, prepost = True, \n                    threads = True, proxy = None),2)\nohlc = ['Open','High','Low','Close']\n_o,_h,_l,_c = [ohlc[h] for h in range(len(ohlc))]\ndf= relative(df=df,_o=_o,_h=_h,_l=_l,_c=_c, bm_df=bm_df, bm_col= bm_col, ccy_df=bm_df, \n             ccy_col=ccy_col, dgt= dgt, start=start, end= end,rebase=True)\n\ndf[['Close','rClose']].plot(figsize=(20,5),style=['k','grey'],\n                            title = str.upper(ticker)+ ' Relative & Absolute')\n\nswing_val = ['rg','Lo1','Hi1','Lo3','Hi3','clg','flr','rg_ch']\nrg,rt_lo,rt_hi,slo,shi,clg,flr,rg_ch = [swing_val[s] for s in range(len(swing_val))]\n\nfor a in np.arange(0,2): \n    df = round(historical_swings(df,_o,_h,_l,_c, dist= None, hurdle= None),2)\n    df = cleanup_latest_swing(df,shi,slo,rt_hi,rt_lo)\n    ud, bs, bs_dt, _rt, _swg, hh_ll, hh_ll_dt = latest_swing_variables(df, \n                                            shi,slo,rt_hi,rt_lo,_h,_l, _c)\n    vlty = round(average_true_range(df,_h,_l,_c, n= vlty_n)[hh_ll_dt],2)\n    dist_vol = 5 * vlty\n    _sign = test_distance(ud,bs, hh_ll, dist_vol, dist_pct)\n    df = retest_swing(df, _sign, _rt, hh_ll_dt, hh_ll, _c, _swg)\n    retrace_vol = 2.5 * vlty\n    df = retracement_swing(df, _sign, _swg, _c, hh_ll_dt, hh_ll, vlty, retrace_vol, retrace_pct)\n    stdev = df[_c].rolling(vlty_n).std(ddof=0)\n    df = regime_floor_ceiling(df,_h,_l,_c,slo, shi,flr,clg,rg,rg_ch,stdev,threshold)\n    \n    rohlc = ['rOpen','rHigh','rLow','rClose']\n    _o,_h,_l,_c = [rohlc[h] for h in range(len(rohlc)) ]\n    rswing_val = ['rrg','rL1','rH1','rL3','rH3','rclg','rflr','rrg_ch']\n    rg,rt_lo,rt_hi,slo,shi,clg,flr,rg_ch = [rswing_val[s] for s in range(len(rswing_val))]\n\n","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"scrolled":false,"trusted":true},"cell_type":"code","source":"# Chapter 12: Signals and Execution \n\nplot_abs_cols = ['Close','Hi3', 'Lo3','clg','flr','rg_ch','rg']\nplot_abs_style = ['k', 'ro', 'go', 
'kv', 'k^','b:','b--']\ny2_abs = ['rg']\nplot_rel_cols = ['rClose','rH3', 'rL3','rclg','rflr','rrg_ch','rrg']\nplot_rel_style = ['grey', 'ro', 'go', 'yv', 'y^','m:','m--']\ny2_rel = ['rrg']\ndf[plot_abs_cols].plot(secondary_y= y2_abs,figsize=(20,8),\n title = str.upper(ticker)+ ' Absolute',# grid=True,\n style=plot_abs_style)\n\ndf[plot_rel_cols].plot(secondary_y=y2_rel,figsize=(20,8),\n title = str.upper(ticker)+ ' Relative',# grid=True,\n style=plot_rel_style)\n\ndf[plot_rel_cols + plot_abs_cols].plot(secondary_y=y2_rel + y2_abs,figsize=(20,8),\n title = str.upper(ticker)+ ' Relative & Absolute',# grid=True,\n style=plot_rel_style + plot_abs_style)","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"# Chapter 12: Signals and Execution \n\nma_st = ma_mt = ma_lt = lt_lo = lt_hi = st_lo = st_hi = 0\n\nrg_combo = ['Close','rg','Lo3','Hi3','Lo3','Hi3','clg','flr','rg_ch']\n_c,rg,lo,hi,slo,shi,clg,flr,rg_ch =[rg_combo[r] for r in range(len(rg_combo)) ]\ngraph_regime_combo(ticker,df,_c,rg,lo,hi,slo,shi,clg,flr,rg_ch,ma_st,ma_mt,ma_lt,lt_lo,lt_hi,st_lo,st_hi)\n\nrrg_combo = ['rClose','rrg','rL3','rH3','rL3','rH3','rclg','rflr','rrg_ch']\n_c,rg,lo,hi,slo,shi,clg,flr,rg_ch =[rrg_combo[r] for r in range(len(rrg_combo)) ]\ngraph_regime_combo(ticker,df,_c,rg,lo,hi,slo,shi,clg,flr,rg_ch,ma_st,ma_mt,ma_lt,lt_lo,lt_hi,st_lo,st_hi)","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"","execution_count":null,"outputs":[]}],"metadata":{"kernelspec":{"name":"python3","display_name":"Python 3","language":"python"},"language_info":{"name":"python","version":"3.6.13","mimetype":"text/x-python","codemirror_mode":{"name":"ipython","version":3},"pygments_lexer":"ipython3","nbconvert_exporter":"python","file_extension":".py"}},"nbformat":4,"nbformat_minor":4} 
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2021 Packt

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------


## Machine Learning Summit 2025
**Bridging Theory and Practice: ML Solutions for Today’s Challenges**

3 days, 20+ experts, and 25+ tech sessions and talks covering critical aspects of:
- **Agentic and Generative AI**
- **Applied Machine Learning in the Real World**
- **ML Engineering and Optimization**

👉 [Book your ticket now >>](https://packt.link/mlsumgh)

---

## Join Our Newsletters 📬

### DataPro
*The future of AI is unfolding. Don’t fall behind.*


Stay ahead with [**DataPro**](https://landing.packtpub.com/subscribe-datapronewsletter/?link_from_packtlink=yes), the free weekly newsletter for data scientists, AI/ML researchers, and data engineers.
From trending tools like **PyTorch**, **scikit-learn**, **XGBoost**, and **BentoML** to hands-on insights on **database optimization** and real-world **ML workflows**, you’ll get what matters, fast.

> Stay sharp with [DataPro](https://landing.packtpub.com/subscribe-datapronewsletter/?link_from_packtlink=yes). Join **115K+ data professionals** who never miss a beat.

---

### BIPro
*Business runs on data. Make sure yours tells the right story.*


[**BIPro**](https://landing.packtpub.com/subscribe-bipro-newsletter/?link_from_packtlink=yes) is your free weekly newsletter for BI professionals, analysts, and data leaders.
Get practical tips on **dashboarding**, **data visualization**, and **analytics strategy** with tools like **Power BI**, **Tableau**, **Looker**, **SQL**, and **dbt**.

> Get smarter with [BIPro](https://landing.packtpub.com/subscribe-bipro-newsletter/?link_from_packtlink=yes). Trusted by **35K+ BI professionals**, see what you’re missing.

# Algorithmic-Short-Selling-with-Python
Algorithmic Short Selling with Python, Published by Packt

## Links

* [Amazon](https://www.amazon.com/Algorithmic-Short-Selling-Python-algorithmic-consistently-dp-1801815194/dp/1801815194/ref=mt_other?_encoding=UTF8&me=&qid=1632924207)
* [Packt Publishing](https://www.packtpub.com/product/algorithmic-short-selling-with-python/9781801815192)

## Key Features

- Understand techniques such as trend following, mean reversion, position sizing, and risk management in a short-selling context
- Implement Python source code to explore and develop your own investment strategy
- Test your trading strategies to limit risk and increase profits
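The source code in this repository expresses each stock both in absolute prices and relative to a currency-adjusted benchmark (see *Long/Short Methodologies: Absolute and Relative* in the Table of Contents). As a taste, here is a minimal, self-contained sketch of that rebasing arithmetic on synthetic data; the values and column names below are illustrative, not taken from the book's notebooks:

```python
import pandas as pd

# Synthetic close prices and benchmark levels (illustrative values)
idx = pd.date_range('2021-01-01', periods=5, freq='D')
df = pd.DataFrame({'Close': [100.0, 102.0, 101.0, 105.0, 107.0]}, index=idx)
bm = pd.Series([3700.0, 3737.0, 3700.0, 3774.0, 3811.0], index=idx)
ccy = pd.Series(1.0, index=idx)  # USD-denominated stock: currency factor of 1

# Currency-adjusted benchmark, rebased to the first bar of the window
bmfx = bm * ccy
bmfx = bmfx / bmfx.iloc[0]

# Relative series: absolute price divided by the rebased benchmark
df['rClose'] = (df['Close'] / bmfx).round(2)
print(df['rClose'].tolist())
```

The notebooks' `relative()` helper applies the same division to all four OHLC columns and prefixes the results with `r` (for example, `rClose`), so regimes can be scored on both absolute and relative series.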
## What you will learn

- Develop the mindset required to win the infinite, complex, random game called the stock market
- Demystify short selling in order to generate alpha in bull, bear, and sideways markets
- Generate ideas consistently on both sides of the portfolio
- Implement Python source code to engineer a statistically robust trading edge
- Develop superior risk management habits
- Build a long/short product that investors will find appealing

## Who This Book Is For

This is a book by a practitioner for practitioners. It is designed to benefit a wide range of people, including long/short market participants, quantitative participants, proprietary traders, commodity trading advisors, retail investors (pro retailers, students, and retail quants), and long-only investors.

At least 2 years of active trading experience, intermediate-level experience of the Python programming language, and basic mathematical literacy (basic statistics and algebra) are expected.

## Table of Contents

1. The Stock Market Game
1. 10 Classic Myths About Short-Selling
1. Take a Walk on the Wild Short-Side
1. Long/Short Methodologies: Absolute and Relative
1. Regime Definition
1. The Trading Edge is a Number, and Here is the Formula
1. Improve Your Trading Edge
1. Position Sizing: Money is Made in the Money Management Module
1. Risk is a Number
1. Refining the Investment Universe
1. The Long/Short Toolbox
1. Signals and Execution
1. Portfolio Management System
1. Appendix: Stock Screening

### Download a free PDF

If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost.
Simply click on the link to claim your free PDF.

https://packt.link/free-ebook/9781801815192

--------------------------------------------------------------------------------