├── .gitignore
├── Formula_generator.xlsm
├── README.md
├── TODO.md
├── backtest
    ├── __init__.py
    ├── analyseReport.py
    ├── backtester.py
    ├── consolidate_reports.py
    ├── examples.py
    ├── get_historical_data.py
    ├── publicHolidays.py
    ├── setTimeFrameStraddle.py
    ├── setTimeFrameStrangle.py
    ├── setTimeStraddleIndexSL.py
    └── setTimeStrangleIndexSL.py
├── requirements.txt
└── setup.py
/.gitignore:
--------------------------------------------------------------------------------
1 | __pycache__
2 | *.csv
3 | backtest.egg-info
4 | build
5 | *temp.py
6 | 
--------------------------------------------------------------------------------
/Formula_generator.xlsm:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PrajwalShenoy/backtest/da263e3b24e945985b03a93fbc17d0365d55502b/Formula_generator.xlsm
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # back-test
2 | Repository to perform backtests on CSV datasets of Indian indices
3 | 
4 | ## Installation
5 | 
6 | ### Prerequisites
7 | * Python version 3.6 or above is required
8 | 
9 | ### Getting the code / Cloning the repository
10 | * Clone the repository from github.com
11 | Git is required if you want to clone the repository
12 | ```
13 | git clone https://github.com/PrajwalShenoy/backtest.git
14 | ```
15 | or
16 | * Download the zip file from https://github.com/PrajwalShenoy/backtest
17 | To download the zip file, click the green `Code` button and then click `Download ZIP`
18 | 
19 | ### Install required modules and libraries
20 | * Make sure you are in the directory where `requirements.txt` is present (backtest/)
21 | * Open a terminal in that location and run the following command
22 | ```bash
23 | pip install .
24 | ```
25 | 
26 | With this, the code should be ready to use
27 | 
28 | ## Fetching historical data
29 | * Open a python terminal and run the following command with your custom values.
30 | * If you do not have an account on maticalgos, you can create one at http://historical.maticalgos.com/
31 | * `index` can be given the value of `banknifty` or `nifty`
32 | * Make sure the directory mentioned in `file_path` is created before running the commands
33 | ```python
34 | from backtest.get_historical_data import get_historical_data
35 | 
36 | get_historical_data(index="banknifty", email="abc@xyz.com", password="password", start_date="2020-01-01", end_date="2020-01-31", file_path="/home/user/Desktop/historicalData")
37 | ```
38 | 
39 | ## Running an Index Spot based SL straddle with specified entry and exit time
40 | * Open a python terminal and run the following command with your custom values.
41 | ```python
42 | from backtest.setTimeStraddleIndexSL import setTimeStraddleIndexSL
43 | 
44 | trade1 = setTimeStraddleIndexSL(index="BANKNIFTY", start_date="2020-01-01", end_date="2020-01-10", entry_time="09:20:00", exit_time="15:24:00", stop_loss_p=0.009, historical_data_path="/home/prajwal/Desktop/backtest-documentation/back/backtest/", number_of_lots=1, csv_out_file="trade1_report.csv", days_to_run=[2,3])
45 | trade1.runBackTest()
46 | ```
47 | 
48 | ## How to create consolidated reports
49 | * Open a python terminal and run the following command with your custom values.
50 | ```python
51 | from backtest.consolidate_reports import consolidate_reports
52 | from backtest.setTimeStraddleIndexSL import setTimeStraddleIndexSL
53 | trade1 = setTimeStraddleIndexSL(index="BANKNIFTY", start_date="2019-01-01", end_date="2021-12-31", entry_time="09:20:00", exit_time="15:24:00", stop_loss_p=0.009, historical_data_path="/home/prajwal/Documents/Repositories/kotak/historical_data/", number_of_lots=1, csv_out_file="trade1_report.csv", days_to_run=[2,3])
54 | trade1.runBackTest()
55 | 
56 | trade2 = setTimeStraddleIndexSL(index="BANKNIFTY", start_date="2019-01-01", end_date="2021-12-31", entry_time="09:20:00", exit_time="15:24:00", stop_loss_p=0.007, historical_data_path="/home/prajwal/Documents/Repositories/kotak/historical_data/", number_of_lots=1, csv_out_file="trade2_report.csv", days_to_run=[0,1,2,3,4])
57 | trade2.runBackTest()
58 | 
59 | # The above commands generate trade1_report.csv and trade2_report.csv. The next command creates the consolidated report
60 | consolidate_reports(csv_file_names=["trade1_report.csv", "trade2_report.csv"], consolidated_report="consolidated.csv")
61 | ```
62 | 
63 | ## How to use the analysis tool
64 | * Open a python terminal and run the following command with your custom CSV file
65 | ```python
66 | from backtest.analyseReport import m2mPlot, perWeekdayPieChart, lossesSplit, profitsSplit
67 | import pandas as pd
68 | 
69 | df = pd.read_csv("Path to report file")
70 | profitsSplit(df)
71 | lossesSplit(df)
72 | perWeekdayPieChart(df)
73 | m2mPlot(df)
74 | ```
75 | 
76 | ## Refer to examples.py for more examples of how to use these tools in a Python script
77 | 
78 | ## Additional information
79 | * `days_to_run` indicates the days on which the backtest will run. The mapping is as follows
80 | ```
81 | 0 - Monday
82 | 1 - Tuesday
83 | 2 - Wednesday
84 | 3 - Thursday
85 | 4 - Friday
86 | 5 - Saturday
87 | 6 - Sunday
88 | ```
89 | * Even when specified, the backtest will not run on `5` and `6`. (After all, the backtest also needs a holiday XD)
90 | * `Formula_generator.xlsm` is a community-developed Excel sheet that helps you generate the `python` command for the respective straddles.
91 | 
92 | Special thanks to Himanshu for helping test this new tool
93 | 
94 | 
--------------------------------------------------------------------------------
/TODO.md:
--------------------------------------------------------------------------------
1 | # TODO
2 | - [x] add days on which the code needs to run - Done
3 | - [x] combined report for all trades - Done
4 | - [ ] adding a GUI
5 | - [ ] enable github
6 | - [ ] check data quality
7 | - [ ] Create a GUI charter to create charts for specific days
8 | - [ ] Create tool to analyse the results
9 | 
10 | 
11 | 
12 | ## BUG FIXES (Aug 1st 2022, 3:10 AM)
13 | * Fixed the commands present in examples.py (trade1.run() was replaced with trade1.runBackTest())
14 | * Fixed the issues with "backtest execution date"
15 | * Fixed the issue with the "first date"/"start_date" being present in all reports (This was causing problems in consolidated reports)
16 | * Consolidated reports are now sorted by default. Previously the date order was jumbled
17 | * Added historical_data_script, made additions to examples.py as well.
18 | * Added Formula_generator.xlsm for the convenience of non-coders
19 | 
20 | ## UPDATES AND BUG FIXES (Aug 2nd 2022, 3:16 AM)
21 | * Brought in a few breaking changes. Going forward the tool will have to be installed before being used. This was done because using it with relative references was causing issues with fetching historical data.
22 | * Updated requirements.txt
23 | * Added README.md for usage instructions
24 | * Updated the examples.py with the new usage
25 | * Fixed the bug in consolidate_reports.py where previously the string was being sorted; now the respective dataframe is sorted
26 | * Changed backtest.py to backtester.py to conform with Python packaging norms
27 | 
28 | ## UPDATES (Aug 3rd 2022, 3:37 AM)
29 | * Added `analyseReport.py`; this will help visualize the CSV reports generated
30 | * Updated the `publicHolidays.py` to include holidays from 2018 to 2022
31 | * Updated the `setTimeFrameStraddle` to bring it on par with `setTimeStraddleIndexSL`
32 | * Updated `requirements.txt`
--------------------------------------------------------------------------------
/backtest/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PrajwalShenoy/backtest/da263e3b24e945985b03a93fbc17d0365d55502b/backtest/__init__.py
--------------------------------------------------------------------------------
/backtest/analyseReport.py:
--------------------------------------------------------------------------------
1 | from matplotlib.widgets import Slider
2 | import matplotlib.ticker as tick
3 | import matplotlib.pyplot as plt
4 | import pandas as pd
5 | import numpy as np
6 | import sys
7 | 
8 | def profitsSplit(df):
9 |     days = {"Mon": 0, "Tue": 0, "Wed": 0, "Thu": 0, "Fri": 0}
10 |     for i in range(len(df)):
11 |         if df.loc[i]["Net"] > 0:
12 |             days[df.loc[i]["Day"]] = days[df.loc[i]["Day"]] + df.loc[i]["Net"]
13 |     pie_day = []
14 |     pie_profits = []
15 |     for day, net in days.items():
16 |         pie_day.append(day)
17 |         pie_profits.append(net)
18 |     plt.subplot(121)
19 |     plt.pie(pie_profits, labels=pie_day, autopct='%1.2f%%')
20 |     plt.subplot(122)
21 |     bars = plt.barh(pie_day, pie_profits)
22 |     plt.bar_label(bars)
23 |     plt.title("Profits Pie Chart for weekdays")
24 |     plt.show()
25 | 
26 | def lossesSplit(df):
27 |     days = {"Mon": 0, "Tue": 0, "Wed": 0, "Thu": 0, "Fri": 0}
28 |     for i in range(len(df)):
29 |         if df.loc[i]["Net"] < 0:
30 |             days[df.loc[i]["Day"]] = days[df.loc[i]["Day"]] + abs(df.loc[i]["Net"])
31 |     pie_day = []
32 |     pie_losses = []
33 |     for day, net in days.items():
34 |         pie_day.append(day)
35 |         pie_losses.append(net)
36 |     plt.subplot(121)
37 |     plt.pie(pie_losses, labels=pie_day, autopct='%1.2f%%')
38 |     plt.subplot(122)
39 |     bars = plt.barh(pie_day, pie_losses)
40 |     plt.bar_label(bars)
41 |     plt.title("Losses Pie Chart for weekdays")
42 |     plt.show()
43 | 
44 | def netSplit(df):
45 |     days = {"Mon": 0, "Tue": 0, "Wed": 0, "Thu": 0, "Fri": 0}
46 |     for i in range(len(df)):
47 |         days[df.loc[i]["Day"]] = days[df.loc[i]["Day"]] + df.loc[i]["Net"]
48 |     pie_day = []
49 |     pie_net = []
50 |     for day, net in days.items():
51 |         pie_day.append(day)
52 |         pie_net.append(net)
53 |     plt.subplot(121)
54 |     plt.pie(pie_net, labels=pie_day, autopct='%1.2f%%')
55 |     plt.subplot(122)
56 |     bars = plt.barh(pie_day, pie_net)
57 |     plt.bar_label(bars)
58 |     plt.title("Net Pie Chart for weekdays")
59 |     plt.show()
60 | 
61 | def perWeekdayPieChart(df):
62 |     days = {"Mon": [0,0],
63 |             "Tue": [0,0],
64 |             "Wed": [0,0],
65 |             "Thu": [0,0],
66 |             "Fri": [0,0]}
67 |     for i in range(len(df)):
68 |         if df.loc[i]["Net"] < 0:
69 |             days[df.loc[i]["Day"]][1] = days[df.loc[i]["Day"]][1] + 1
70 |         if df.loc[i]["Net"] > 0:
71 |             days[df.loc[i]["Day"]][0] = days[df.loc[i]["Day"]][0] + 1
72 |     labs = "Profit", "Loss"
73 |     fig, (ax1, ax2, ax3, ax4, ax5) = plt.subplots(1, 5)
74 |     ax1.pie(days["Mon"], labels=labs, 
autopct='%1.1f%%') 75 | ax1.title.set_text("Mon") 76 | ax2.pie(days["Tue"], labels=labs, autopct='%1.1f%%') 77 | ax2.title.set_text("Tue") 78 | ax3.pie(days["Wed"], labels=labs, autopct='%1.1f%%') 79 | ax3.title.set_text("Wed") 80 | ax4.pie(days["Thu"], labels=labs, autopct='%1.1f%%') 81 | ax4.title.set_text("Thu") 82 | ax5.pie(days["Fri"], labels=labs, autopct='%1.1f%%') 83 | ax5.title.set_text("Fri") 84 | plt.show() 85 | 86 | def m2mPlot(df): 87 | net = 0 88 | yaxis = [] 89 | xaxis = [] 90 | for i in range(len(df)): 91 | net = net + df.loc[i]["Net"] 92 | xaxis.append(df.loc[i]["Date"]) 93 | yaxis.append(net) 94 | dump = fig, ax = plt.subplots() 95 | dump = ax.plot(xaxis, yaxis) 96 | dump = ax.get_yaxis().set_major_formatter(tick.FuncFormatter(lambda x, p: format(int(x), ','))) 97 | dump = plt.xlabel('Date') 98 | dump = plt.xticks(rotation = 90) 99 | dump = plt.ylabel('Net M2M') 100 | dump = plt.title('M2M graph') 101 | dump = plt.grid() 102 | plt.show() 103 | 104 | def calculateDrawDown(df): 105 | net = 0 106 | yaxis = [] 107 | xaxis = [] 108 | for i in range(len(df)): 109 | net = net + df.loc[i]["Net"] 110 | xaxis.append(df.loc[i]["Date"]) 111 | yaxis.append(net) 112 | dump = fig, ax = plt.subplots() 113 | dump = ax.plot(xaxis, yaxis) 114 | dump = ax.get_yaxis().set_major_formatter(tick.FuncFormatter(lambda x, p: format(int(x), ','))) 115 | dump = plt.xlabel('Date') 116 | dump = plt.xticks(rotation = 90) 117 | dump = plt.ylabel('Net M2M') 118 | dump = plt.title('M2M graph') 119 | xs = df["Net"].cumsum() 120 | i = np.argmax(np.maximum.accumulate(xs) - xs) 121 | j = np.argmax(xs[:i]) 122 | # plt.plot(xs) 123 | plt.plot([i, j], [xs[i], xs[j]], 'o', color='Red', markersize=10) 124 | print("Max DD is", xs[j] - xs[i]) 125 | plt.show() 126 | 127 | def showAllGraphs(df): 128 | profitsSplit(df) 129 | lossesSplit(df) 130 | netSplit(df) 131 | perWeekdayPieChart(df) 132 | calculateDrawDown(df) 133 | 134 | def main(): 135 | file_name = sys.argv[1] 136 | df = pd.read_csv(file_name) 137 | showAllGraphs(df) 138 | 139 | if __name__ == "__main__": 140 | main() -------------------------------------------------------------------------------- /backtest/backtester.py: -------------------------------------------------------------------------------- 1 | from backtest.publicHolidays import public_holidays 2 | import pandas as pd 3 | import datetime 4 | import logging 5 | import csv 6 | 7 | 8 | class Backtester(): 9 | 10 | months = { 11 | "01": "Jan", "1": "Jan", 12 | "02": "Feb", "2": "Feb", 13 | "03": "Mar", "3": "Mar", 14 | "04": "Apr", "4": "Apr", 15 | "05": "May", "5": "May", 16 | "06": "Jun", "6": "Jun", 17 | "07": "Jul", "7": "Jul", 18 | "08": "Aug", 19 | "09": "Sep", 20 | "10": "Oct", 21 | "11": "Nov", 22 | "12": "Dec" 23 | } 24 | 25 | def __init__(self, index="BANKNIFTY", start_date="", end_date="", **kwargs): 26 | self.log = logging.getLogger(__name__) 27 | self.log.addHandler(logging.NullHandler()) 28 | self.index = index 29 | self.start_date = start_date 30 | self.end_date = end_date 31 | self.get_base_vars(kwargs) 32 | self.get_trading_holidays() 33 | 34 | def get_base_vars(self, kwargs): 35 | self.days_to_run = kwargs.get("days_to_run", [0,1,2,3,4]) 36 | self.slippage = kwargs.get("slippage", 1) 37 | 38 | def get_trading_holidays(self): 39 | self.public_holidays = public_holidays 40 | 41 | def create_scrip_symbol(self, option_type, strike_price, index = "BANKNIFTY"): 42 | return self.df.iloc[10]["symbol"][:16] + str(strike_price) + option_type.upper() 43 | 44 | def calculate_result(self, ce_price, 
current_ce_price, pe_price, current_pe_price): 45 | net = ce_price - current_ce_price + pe_price - current_pe_price 46 | return (net * self.number_of_lots * self.quantity_per_lot) 47 | 48 | def next_weekday(self, d, weekday): 49 | days_ahead = weekday - d.weekday() 50 | if days_ahead <= 0: 51 | days_ahead += 7 52 | return d + datetime.timedelta(days_ahead) 53 | 54 | def next_thursday(self, d): 55 | days_ahead = 3 - d.weekday() 56 | if days_ahead < 0: 57 | days_ahead += 7 58 | return d + datetime.timedelta(days_ahead) 59 | 60 | def increment_date(self, d): 61 | if type(d) != datetime.date: 62 | d = self.create_date(d) 63 | d = d + datetime.timedelta(1) 64 | if d.weekday() == 5 or d.weekday() == 6 or d.weekday() not in self.days_to_run: 65 | d = self.increment_date(d) 66 | return d 67 | 68 | def increment_time(self, dt, interval): 69 | if type(dt) != datetime.time: 70 | strt = str(dt).split(":") 71 | dt = datetime.datetime(100,1,1,int(strt[0]),int(strt[1]),int(strt[2])) 72 | dt = dt + datetime.timedelta(minutes=interval) 73 | return dt.time() 74 | 75 | def find_strike_price(self, cur_price): 76 | return (round(int(cur_price)/100)*100) 77 | 78 | def deci2(self, value): 79 | return (round(value*100)/100) 80 | 81 | def create_date(self, d): 82 | if type(d) != datetime.date and type(d) == str: 83 | return datetime.date(int(d.split('-')[0]), int(d.split('-')[1]), int(d.split('-')[2])) 84 | elif type(d) == datetime.date: 85 | return d 86 | else: 87 | self.log.error("Date is not in the required format, follow the 'yyyy-mm-dd' format") 88 | 89 | def get_day(self, d): 90 | days_of_the_week = { 91 | 0: "Mon", 92 | 1: "Tue", 93 | 2: "Wed", 94 | 3: "Thu", 95 | 4: "Fri", 96 | 5: "Sat", 97 | 6: "Sun" 98 | } 99 | if type(d) != datetime.date and type(d) == str: 100 | d = datetime.date(int(d.split('-')[0]), int(d.split('-')[1]), int(d.split('-')[2])) 101 | elif type(d) == datetime.date: 102 | pass 103 | return days_of_the_week[d.weekday()] 104 | 105 | def create_time(self, t): 106 | if type(t) != datetime.time and type(t) == str: 107 | return datetime.time(int(t.split(":")[0]), int(t.split(":")[1]), int(t.split(":")[2])) 108 | elif type(t) == datetime.time: 109 | return t 110 | else: 111 | self.log.error("Time is not in the required format, follow the 'hh:mm:ss' format") 112 | 113 | def create_date_time(self, d_t): 114 | d_t = d_t.split("") 115 | d = d_t[0] 116 | t = d_t[1] 117 | return datetime.datetime(int(d.split('-')[0]), int(d.split('-')[1]), int(d.split('-')[2]), 118 | int(t.split(":")[0]), int(t.split(":")[1]), int(t.split(":")[2])) 119 | 120 | def get_price_for_nearest_time(self, df, t): 121 | if len(df.loc[df["time"] == t].values) == 1: 122 | return df.loc[df["time"] == t]["open"].values[0], t 123 | else: 124 | while len(df.loc[df["time"] == t].values) == 0 and str(t) < "16:00:00": 125 | t = str(self.increment_time(t, 1)) 126 | return df.loc[df["time"] == t]["open"].values[0], t 127 | 128 | def create_monthly_result_dict(self): 129 | current_date_format = self.create_date(self.start_date) 130 | end_date_format = self.create_date(self.end_date) 131 | monthly_results = {} 132 | while (end_date_format - current_date_format).days >= 0: 133 | monthly_results[str(current_date_format)[:-3]] = 0 134 | current_date_format = self.increment_date(current_date_format) 135 | return monthly_results 136 | 137 | def read_csv_data(self, file_path = ""): 138 | if file_path: 139 | return pd.read_csv(file_path) 140 | else: 141 | return pd.read_csv(self.historical_data_path + str(self.current_date_format).replace("-", 
"") + ".csv") 142 | -------------------------------------------------------------------------------- /backtest/consolidate_reports.py: -------------------------------------------------------------------------------- 1 | import pandas as pd 2 | import numpy as np 3 | 4 | def consolidate_reports(csv_file_names = [], consolidated_report = "consolidated.csv"): 5 | report_df = pd.DataFrame(np.empty((0, 3))) 6 | report_df.columns = ["Date", "Day", "Net"] 7 | for csv_report in csv_file_names: 8 | csv_df = pd.read_csv(csv_report) 9 | for i in range(len(csv_df)): 10 | if csv_df["Date"][i] in list(report_df["Date"]): 11 | report_df.loc[report_df["Date"] == csv_df["Date"][i], "Net"] = report_df.loc[report_df["Date"] == csv_df["Date"][i], "Net"] + csv_df["Net"][i] 12 | else: 13 | temp_df = pd.DataFrame({"Date":[csv_df["Date"][i]], "Day":[csv_df["Day"][i]], "Net":[csv_df["Net"][i]]}) 14 | report_df = pd.concat([report_df, temp_df], ignore_index = True, axis=0) 15 | report_df = report_df.sort_values(by=['Date'])[['Date', 'Day', 'Net']] 16 | report_df.to_csv(consolidated_report) 17 | -------------------------------------------------------------------------------- /backtest/examples.py: -------------------------------------------------------------------------------- 1 | from backtest.setTimeStraddleIndexSL import setTimeStraddleIndexSL 2 | from backtest.consolidate_reports import consolidate_reports 3 | from backtest.get_historical_data import get_historical_data 4 | 5 | # Historical data courtesy of http://historical.maticalgos.com/ 6 | get_historical_data(index="banknifty", email="abc@xyz.com", password="password", start_date="2020-01-01", end_date="2020-01-31", file_path="/home/user/Desktop/historicalData") 7 | 8 | trade1 = setTimeStraddleIndexSL(index="BANKNIFTY", start_date="2019-01-01", end_date="2021-12-31", entry_time="09:20:00", exit_time="15:24:00", stop_loss_p=0.009, historical_data_path="/home/prajwal/Documents/Repositories/kotak/historical_data/", number_of_lots=1, csv_out_file="trade1_report.csv", days_to_run=[2,3]) 9 | trade1.runBackTest() 10 | 11 | trade2 = setTimeStraddleIndexSL(index="BANKNIFTY", start_date="2019-01-01", end_date="2021-12-31", entry_time="09:20:00", exit_time="15:24:00", stop_loss_p=0.007, historical_data_path="/home/prajwal/Documents/Repositories/kotak/historical_data/", number_of_lots=1, csv_out_file="trade1_report.csv", days_to_run=[2,3]) 12 | trade2.runBackTest() 13 | 14 | consolidate_reports(csv_file_names=["trade1_report.csv", "trade2_report.csv"], consolidated_report="consolidated.csv") 15 | -------------------------------------------------------------------------------- /backtest/get_historical_data.py: -------------------------------------------------------------------------------- 1 | # Historical data curtosy of http://historical.maticalgos.com/ 2 | from maticalgos.historical import historical 3 | import datetime 4 | import os 5 | 6 | 7 | def increment_date(d): 8 | d = d + datetime.timedelta(1) 9 | if d.weekday() == 5 or d.weekday() == 6: 10 | d = increment_date(d) 11 | return d 12 | 13 | def create_date(d): 14 | return datetime.date(int(d.split('-')[0]), int(d.split('-')[1]), int(d.split('-')[2])) 15 | 16 | def get_historical_data(index, email, password, start_date, end_date, file_path): 17 | ma = historical(email) 18 | ma.login(password) 19 | current_date_format = create_date(start_date) 20 | end_date_format = create_date(end_date) 21 | faulty_dates = [] 22 | while (end_date_format - current_date_format).days >= 0: 23 | try: 24 | data = ma.get_data(index, 
current_date_format) 25 | csv_path = os.path.join(file_path, str(current_date_format).replace("-", "") + ".csv") 26 | data.to_csv(csv_path) 27 | print("\033[1;92mFinished processing the following date", str(current_date_format), "\033[0m") 28 | except Exception as err: 29 | faulty_dates.append(str(current_date_format)) 30 | print("\033[1;91mCould not process the following date", str(current_date_format), "\033[0m") 31 | print(err) 32 | current_date_format = increment_date(current_date_format) 33 | print("Failed to get data for the following dates", faulty_dates) 34 | -------------------------------------------------------------------------------- /backtest/publicHolidays.py: -------------------------------------------------------------------------------- 1 | public_holidays = [ 2 | "2018-01-26", "2018-02-13", 3 | "2018-03-02", "2018-03-29", 4 | "2018-03-30", "2018-05-01", 5 | "2018-08-15", "2018-08-22", 6 | "2018-09-13", "2018-09-20", 7 | "2018-10-02", "2018-10-18", 8 | "2018-11-07", "2018-11-08", 9 | "2018-11-23", "2018-12-25", 10 | "2019-03-04", "2019-03-21", 11 | "2019-04-17", "2019-04-19", 12 | "2019-05-01", "2019-06-05", 13 | "2019-08-12", "2019-08-15", 14 | "2019-09-02", "2019-09-10", 15 | "2019-10-02", "2019-10-08", 16 | "2019-10-28", "2019-11-12", 17 | "2019-12-25", "2020-02-21", 18 | "2020-03-10", "2020-04-02", 19 | "2020-04-06", "2020-04-10", 20 | "2020-04-14", "2020-05-01", 21 | "2020-05-25", "2020-10-02", 22 | "2020-11-16", "2020-11-30", 23 | "2020-12-25", "2021-01-26", 24 | "2021-03-11", "2021-03-29", 25 | "2021-04-02", "2021-04-14", 26 | "2021-04-21", "2021-05-13", 27 | "2021-07-21", "2021-08-19", 28 | "2021-09-10", "2021-10-15", 29 | "2021-11-04", "2021-11-05", 30 | "2021-11-19", "2022-01-26", 31 | "2022-03-01", "2022-03-18", 32 | "2022-04-14", "2022-04-15", 33 | "2022-05-03", "2022-08-09", 34 | "2022-08-15", "2022-08-31", 35 | "2022-10-05", "2022-10-24", 36 | "2022-10-26", "2022-11-08" 37 | ] -------------------------------------------------------------------------------- /backtest/setTimeFrameStraddle.py: -------------------------------------------------------------------------------- 1 | from backtest.backtester import Backtester 2 | from pprint import pprint 3 | import pandas as pd 4 | 5 | class setTimeFrameStraddle(Backtester): 6 | def __init__(self, index, start_date, end_date, entry_time, exit_time, stop_loss_p, **kwargs): 7 | self.entry_time = entry_time 8 | self.exit_time = exit_time 9 | self.stop_loss_p = stop_loss_p 10 | super(setTimeFrameStraddle, self).__init__(index, start_date, end_date, **kwargs) 11 | self.get_additional_vars(kwargs) 12 | 13 | def get_additional_vars(self, kwargs): 14 | self.historical_data_path = kwargs.get("historical_data_path", "./hostoricalData/") 15 | self.number_of_lots = kwargs.get("number_of_lots", 1) 16 | self.csv_out_file = kwargs.get("csv_out_file", "backtestFromCsv.csv") 17 | self.max_loss_per_lot = kwargs.get("max_loss_per_lot", -10000000000) 18 | if kwargs.get("quantity_per_lot", False): 19 | self.quantity_per_lot = kwargs.get("quantity_per_lot") 20 | elif self.index == "BANKNIFTY": 21 | self.quantity_per_lot = 25 22 | elif self.index == "NIFTY": 23 | self.quantity_per_lot = 50 24 | 25 | def initialise_for_csv_backtest(self): 26 | current_date = str(self.current_date_format) 27 | index_df = self.df.loc[self.df["symbol"] == self.index].sort_values(by = "time") 28 | self.index_price, self.banknifty_time = self.get_price_for_nearest_time(index_df, self.entry_time) 29 | index_strike_price = self.find_strike_price(self.index_price) 
30 | print(index_strike_price, current_date) 31 | thursday = self.next_thursday(self.current_date_format) 32 | self.ce_symbol = self.create_scrip_symbol("CE", index_strike_price, self.index) 33 | self.pe_symbol = self.create_scrip_symbol("PE", index_strike_price, self.index) 34 | print(self.ce_symbol, self.pe_symbol) 35 | self.ce_df = self.df.loc[self.df["symbol"] == self.ce_symbol].sort_values(by = "time") 36 | self.ce_price, self.ce_initial_time = self.get_price_for_nearest_time(self.ce_df, self.entry_time) 37 | self.pe_df = self.df.loc[self.df["symbol"] == self.pe_symbol].sort_values(by = "time") 38 | self.pe_price, self.pe_initial_time = self.get_price_for_nearest_time(self.pe_df, self.entry_time) 39 | self.ce_sl = self.ce_price * self.stop_loss_p 40 | self.pe_sl = self.pe_price * self.stop_loss_p 41 | print(self.ce_symbol, self.ce_price, self.ce_sl, self.ce_initial_time) 42 | print(self.pe_symbol, self.pe_price, self.pe_sl, self.pe_initial_time) 43 | 44 | def csv_backtest_for_day(self): 45 | ce_sl_hit = False 46 | pe_sl_hit = False 47 | self.ce_df = self.ce_df.sort_values(by = "time") 48 | self.pe_df = self.pe_df.sort_values(by = "time") 49 | for i in range(len(self.ce_df)): 50 | if self.create_time(self.entry_time) <= self.create_time(self.ce_df.iloc[i]["time"]) <= self.create_time(self.exit_time): 51 | if not ce_sl_hit: 52 | self.current_ce_price, self.current_ce_time = self.ce_df.iloc[i]["high"], self.ce_df.iloc[i]["time"] 53 | if self.current_ce_price >= self.ce_sl: 54 | ce_sl_hit = True 55 | self.current_ce_price = self.ce_sl 56 | print("\033[1;91mstoploss hit for CE\033[0m") 57 | if not pe_sl_hit: 58 | self.current_pe_price, self.current_pe_time = self.pe_df.iloc[i]["high"], self.pe_df.iloc[i]["time"] 59 | if self.current_pe_price >= self.pe_sl: 60 | pe_sl_hit = True 61 | self.current_pe_price = self.pe_sl 62 | print("\033[1;91mstoploss hit for PE\033[0m") 63 | if self.calculate_result(self.ce_price, self.current_ce_price, self.pe_price, self.current_pe_price) < self.max_loss_per_lot * self.number_of_lots: 64 | break 65 | self.sl_hit = ce_sl_hit + pe_sl_hit 66 | self.result = self.calculate_result(self.ce_price, self.current_ce_price, self.pe_price, self.current_pe_price) 67 | 68 | def runBackTest(self): 69 | self.success = 0 70 | self.failure = 0 71 | self.max_profit = 0 72 | self.total_profit = 0 73 | self.total_profit_days = 0 74 | self.max_days_with_profit = 0 75 | self.max_days_with_profit_temp = 0 76 | self.max_loss = 0 77 | self.total_loss = 0 78 | self.total_loss_days = 0 79 | self.max_days_with_loss = 0 80 | self.max_days_with_loss_temp = 0 81 | self.overall_result = 0 82 | self.monthly_results = self.create_monthly_result_dict() 83 | self.csvFile = open(self.csv_out_file, "w") 84 | self.buffer = "Date,Day,Index,CE,CE Time,CE Price,CE SL,CE LTP,PE,PE Time,PE Price,PE SL,PE LTP,SL hit,Net\n" 85 | self.csvFile.write(self.buffer) 86 | self.current_date_format = self.create_date(self.start_date) 87 | self.end_date_format = self.create_date(self.end_date) 88 | failed_backtests = {} 89 | while (self.end_date_format - self.current_date_format).days >= 0: 90 | while self.create_date(self.current_date_format).weekday() not in self.days_to_run: 91 | self.current_date_format = self.increment_date(self.current_date_format) 92 | if str(self.current_date_format) not in self.public_holidays: 93 | try: 94 | self.df = self.read_csv_data() 95 | self.initialise_for_csv_backtest() 96 | self.csv_backtest_for_day() 97 | self.overall_result += self.result 98 | 
self.monthly_results[str(self.current_date_format)[:-3]] += self.result 99 | print("==========================================") 100 | if self.result >= 0: 101 | self.max_days_with_loss = max(self.max_days_with_loss, self.max_days_with_loss_temp) 102 | self.max_days_with_loss_temp = 0 103 | self.max_days_with_profit_temp = self.max_days_with_profit_temp + 1 104 | self.total_profit_days = self.total_profit_days + 1 105 | self.total_profit = self.total_profit + self.result 106 | self.max_profit = max(self.max_profit, self.result) 107 | print("\033[1;92m",self.deci2(self.result), "\n\033[0m") 108 | else: 109 | self.max_days_with_profit = max(self.max_days_with_profit, self.max_days_with_profit_temp) 110 | self.max_days_with_profit_temp = 0 111 | self.max_days_with_loss_temp = self.max_days_with_loss_temp + 1 112 | self.total_loss_days = self.total_loss_days + 1 113 | self.total_loss = self.total_loss + self.result 114 | self.max_loss = min(self.max_loss, self.result) 115 | print("\033[1;91m",self.deci2(self.result), "\n\033[0m") 116 | self.buffer = [str(self.current_date_format), self.get_day(self.current_date_format), str(float(self.index_price)), self.ce_symbol, self.ce_initial_time, str(self.ce_price), str(self.ce_sl), str(self.current_ce_price), \ 117 | self.pe_symbol, self.pe_initial_time, str(self.pe_price), str(self.pe_sl), str(self.current_pe_price), str(self.sl_hit), str(self.deci2(self.result))] 118 | self.csvFile.write(",".join(self.buffer) + "\n") 119 | self.success = self.success + 1 120 | except Exception as e: 121 | self.log.error(str(e)) 122 | self.log.error("Could not backtest for " + str(self.current_date_format) + "\n") 123 | if "No such file or directory" not in str(e): 124 | self.failure = self.failure + 1 125 | failed_backtests[str(self.current_date_format)] = str(e) 126 | else: 127 | pass 128 | self.current_date_format = self.increment_date(self.current_date_format) 129 | self.csvFile.close() 130 | self.max_days_with_profit = max(self.max_days_with_profit, self.max_days_with_profit_temp) 131 | self.max_days_with_loss = max(self.max_days_with_loss, self.max_days_with_loss_temp) 132 | print("Entry time:", self.entry_time, "Exit time:", self.exit_time, "Sl:", self.stop_loss_p) 133 | print("Overall result", self.overall_result) 134 | print("Average result", self.overall_result/(self.success + self.failure)) 135 | print("Max profit was", self.max_profit) 136 | print("Average profit on profit making days was", self.total_profit/self.total_profit_days) 137 | print("Winning streak:", self.max_days_with_profit) 138 | print("Win percentage", self.total_profit_days/self.success) 139 | print("Max loss was", self.max_loss) 140 | print("Average loss on loss making days was", self.total_loss/self.total_loss_days) 141 | print("Loosing streak:", self.max_days_with_loss) 142 | print("Win percentage", self.total_loss_days/self.success) 143 | print("Successfull back tests:", self.success) 144 | print("Failed back tests:", self.failure) 145 | print("Monthly wise resport is given below") 146 | for i, j in self.monthly_results.items(): 147 | print(i, ": ", j) 148 | -------------------------------------------------------------------------------- /backtest/setTimeFrameStrangle.py: -------------------------------------------------------------------------------- 1 | from backtest.backtester import Backtester 2 | from pprint import pprint 3 | import pandas as pd 4 | 5 | class setTimeFrameStrangle(Backtester): 6 | def __init__(self, index, start_date, end_date, entry_time, exit_time, 
point_deviation, stop_loss_p, **kwargs): 7 | self.entry_time = entry_time 8 | self.exit_time = exit_time 9 | self.stop_loss_p = stop_loss_p 10 | self.point_deviation = point_deviation 11 | super(setTimeFrameStrangle, self).__init__(index, start_date, end_date, **kwargs) 12 | self.get_additional_vars(kwargs) 13 | 14 | def get_additional_vars(self, kwargs): 15 | self.historical_data_path = kwargs.get("historical_data_path", "./hostoricalData/") 16 | self.number_of_lots = kwargs.get("number_of_lots", 1) 17 | self.csv_out_file = kwargs.get("csv_out_file", "backtestFromCsv.csv") 18 | self.max_loss_per_lot = kwargs.get("max_loss_per_lot", -10000000000) 19 | if kwargs.get("quantity_per_lot", False): 20 | self.quantity_per_lot = kwargs.get("quantity_per_lot") 21 | elif self.index == "BANKNIFTY": 22 | self.quantity_per_lot = 25 23 | elif self.index == "NIFTY": 24 | self.quantity_per_lot = 50 25 | 26 | def initialise_for_csv_backtest(self): 27 | current_date = str(self.current_date_format) 28 | index_df = self.df.loc[self.df["symbol"] == self.index].sort_values(by = "time") 29 | self.index_price, self.banknifty_time = self.get_price_for_nearest_time(index_df, self.entry_time) 30 | index_strike_price = self.find_strike_price(self.index_price) 31 | print(index_strike_price, current_date) 32 | thursday = self.next_thursday(self.current_date_format) 33 | self.ce_symbol = self.create_scrip_symbol("CE", index_strike_price + self.point_deviation, self.index) 34 | self.pe_symbol = self.create_scrip_symbol("PE", index_strike_price - self.point_deviation, self.index) 35 | print(self.ce_symbol, self.pe_symbol) 36 | self.ce_df = self.df.loc[self.df["symbol"] == self.ce_symbol].sort_values(by = "time") 37 | self.ce_price, self.ce_initial_time = self.get_price_for_nearest_time(self.ce_df, self.entry_time) 38 | self.pe_df = self.df.loc[self.df["symbol"] == self.pe_symbol].sort_values(by = "time") 39 | self.pe_price, self.pe_initial_time = self.get_price_for_nearest_time(self.pe_df, self.entry_time) 40 | self.ce_sl = self.ce_price * self.stop_loss_p 41 | self.pe_sl = self.pe_price * self.stop_loss_p 42 | print(self.ce_symbol, self.ce_price, self.ce_sl, self.ce_initial_time) 43 | print(self.pe_symbol, self.pe_price, self.pe_sl, self.pe_initial_time) 44 | 45 | def csv_backtest_for_day(self): 46 | ce_sl_hit = False 47 | pe_sl_hit = False 48 | self.ce_df = self.ce_df.sort_values(by = "time") 49 | self.pe_df = self.pe_df.sort_values(by = "time") 50 | for i in range(len(self.ce_df)): 51 | if self.create_time(self.entry_time) <= self.create_time(self.ce_df.iloc[i]["time"]) <= self.create_time(self.exit_time): 52 | if not ce_sl_hit: 53 | self.current_ce_price, self.current_ce_time = self.ce_df.iloc[i]["high"], self.ce_df.iloc[i]["time"] 54 | if self.current_ce_price >= self.ce_sl: 55 | ce_sl_hit = True 56 | self.current_ce_price = self.ce_sl 57 | print("\033[1;91mstoploss hit for CE\033[0m") 58 | if not pe_sl_hit: 59 | self.current_pe_price, self.current_pe_time = self.pe_df.iloc[i]["high"], self.pe_df.iloc[i]["time"] 60 | if self.current_pe_price >= self.pe_sl: 61 | pe_sl_hit = True 62 | self.current_pe_price = self.pe_sl 63 | print("\033[1;91mstoploss hit for PE\033[0m") 64 | if self.calculate_result(self.ce_price, self.current_ce_price, self.pe_price, self.current_pe_price) < self.max_loss_per_lot * self.number_of_lots: 65 | break 66 | self.sl_hit = ce_sl_hit + pe_sl_hit 67 | self.result = self.calculate_result(self.ce_price, self.current_ce_price, self.pe_price, self.current_pe_price) 68 | 69 | def runBackTest(self): 
70 | self.success = 0 71 | self.failure = 0 72 | self.max_profit = 0 73 | self.total_profit = 0 74 | self.total_profit_days = 0 75 | self.max_days_with_profit = 0 76 | self.max_days_with_profit_temp = 0 77 | self.max_loss = 0 78 | self.total_loss = 0 79 | self.total_loss_days = 0 80 | self.max_days_with_loss = 0 81 | self.max_days_with_loss_temp = 0 82 | self.overall_result = 0 83 | self.monthly_results = self.create_monthly_result_dict() 84 | self.csvFile = open(self.csv_out_file, "w") 85 | self.buffer = "Date,Day,Index,CE,CE Time,CE Price,CE SL,CE LTP,PE,PE Time,PE Price,PE SL,PE LTP,SL hit,Net\n" 86 | self.csvFile.write(self.buffer) 87 | self.current_date_format = self.create_date(self.start_date) 88 | self.end_date_format = self.create_date(self.end_date) 89 | failed_backtests = {} 90 | while (self.end_date_format - self.current_date_format).days >= 0: 91 | while self.create_date(self.current_date_format).weekday() not in self.days_to_run: 92 | self.current_date_format = self.increment_date(self.current_date_format) 93 | if str(self.current_date_format) not in self.public_holidays: 94 | try: 95 | self.df = self.read_csv_data() 96 | self.initialise_for_csv_backtest() 97 | self.csv_backtest_for_day() 98 | self.overall_result += self.result 99 | self.monthly_results[str(self.current_date_format)[:-3]] += self.result 100 | print("==========================================") 101 | if self.result >= 0: 102 | self.max_days_with_loss = max(self.max_days_with_loss, self.max_days_with_loss_temp) 103 | self.max_days_with_loss_temp = 0 104 | self.max_days_with_profit_temp = self.max_days_with_profit_temp + 1 105 | self.total_profit_days = self.total_profit_days + 1 106 | self.total_profit = self.total_profit + self.result 107 | self.max_profit = max(self.max_profit, self.result) 108 | print("\033[1;92m",self.deci2(self.result), "\n\033[0m") 109 | else: 110 | self.max_days_with_profit = max(self.max_days_with_profit, self.max_days_with_profit_temp) 111 | self.max_days_with_profit_temp = 0 112 | self.max_days_with_loss_temp = self.max_days_with_loss_temp + 1 113 | self.total_loss_days = self.total_loss_days + 1 114 | self.total_loss = self.total_loss + self.result 115 | self.max_loss = min(self.max_loss, self.result) 116 | print("\033[1;91m",self.deci2(self.result), "\n\033[0m") 117 | self.buffer = [str(self.current_date_format), self.get_day(self.current_date_format), str(float(self.index_price)), self.ce_symbol, self.ce_initial_time, str(self.ce_price), str(self.ce_sl), str(self.current_ce_price), \ 118 | self.pe_symbol, self.pe_initial_time, str(self.pe_price), str(self.pe_sl), str(self.current_pe_price), str(self.sl_hit), str(self.deci2(self.result))] 119 | self.csvFile.write(",".join(self.buffer) + "\n") 120 | self.success = self.success + 1 121 | except Exception as e: 122 | self.log.error(str(e)) 123 | self.log.error("Could not backtest for " + str(self.current_date_format) + "\n") 124 | if "No such file or directory" not in str(e): 125 | self.failure = self.failure + 1 126 | failed_backtests[str(self.current_date_format)] = str(e) 127 | else: 128 | pass 129 | self.current_date_format = self.increment_date(self.current_date_format) 130 | self.csvFile.close() 131 | self.max_days_with_profit = max(self.max_days_with_profit, self.max_days_with_profit_temp) 132 | self.max_days_with_loss = max(self.max_days_with_loss, self.max_days_with_loss_temp) 133 | print("Entry time:", self.entry_time, "Exit time:", self.exit_time, "Sl:", self.stop_loss_p) 134 | print("Overall result", self.overall_result) 
135 | print("Average result", self.overall_result/(self.success + self.failure)) 136 | print("Max profit was", self.max_profit) 137 | print("Average profit on profit making days was", self.total_profit/self.total_profit_days) 138 | print("Winning streak:", self.max_days_with_profit) 139 | print("Win percentage", self.total_profit_days/self.success) 140 | print("Max loss was", self.max_loss) 141 | print("Average loss on loss making days was", self.total_loss/self.total_loss_days) 142 | print("Loosing streak:", self.max_days_with_loss) 143 | print("Win percentage", self.total_loss_days/self.success) 144 | print("Successfull back tests:", self.success) 145 | print("Failed back tests:", self.failure) 146 | print("Monthly wise resport is given below") 147 | for i, j in self.monthly_results.items(): 148 | print(i, ": ", j) 149 | -------------------------------------------------------------------------------- /backtest/setTimeStraddleIndexSL.py: -------------------------------------------------------------------------------- 1 | from backtest.backtester import Backtester 2 | from pprint import pprint 3 | import pandas as pd 4 | 5 | class setTimeStraddleIndexSL(Backtester): 6 | def __init__(self, index, start_date, end_date, entry_time, exit_time, stop_loss_p, **kwargs): 7 | self.entry_time = entry_time 8 | self.exit_time = exit_time 9 | self.stop_loss_p = stop_loss_p 10 | super(setTimeStraddleIndexSL, self).__init__(index, start_date, end_date, **kwargs) 11 | self.get_additional_vars(kwargs) 12 | 13 | def get_additional_vars(self, kwargs): 14 | self.historical_data_path = kwargs.get("historical_data_path", "./hostoricalData/") 15 | self.number_of_lots = kwargs.get("number_of_lots", 1) 16 | self.csv_out_file = kwargs.get("csv_out_file", "backtestFromCsv.csv") 17 | self.max_loss_per_lot = kwargs.get("max_loss_per_lot", -10000000000) 18 | if kwargs.get("quantity_per_lot", False): 19 | self.quantity_per_lot = kwargs.get("quantity_per_lot") 20 | elif self.index == "BANKNIFTY": 21 | self.quantity_per_lot = 25 22 | elif self.index == "NIFTY": 23 | self.quantity_per_lot = 50 24 | 25 | def initialise_for_csv_backtest(self): 26 | current_date = str(self.current_date_format) 27 | self.index_df = self.df.loc[self.df["symbol"] == self.index].sort_values(by = "time") 28 | self.index_price, self.index_time = self.get_price_for_nearest_time(self.index_df, self.entry_time) 29 | index_strike_price = self.find_strike_price(self.index_price) 30 | print(index_strike_price, current_date) 31 | thursday = self.next_thursday(self.current_date_format) 32 | self.ce_symbol = self.create_scrip_symbol("CE", index_strike_price, self.index) 33 | self.pe_symbol = self.create_scrip_symbol("PE", index_strike_price, self.index) 34 | print(self.ce_symbol, self.pe_symbol) 35 | self.ce_df = self.df.loc[self.df["symbol"] == self.ce_symbol].sort_values(by = "time") 36 | self.ce_price, self.ce_initial_time = self.get_price_for_nearest_time(self.ce_df, self.entry_time) 37 | self.pe_df = self.df.loc[self.df["symbol"] == self.pe_symbol].sort_values(by = "time") 38 | self.pe_price, self.pe_initial_time = self.get_price_for_nearest_time(self.pe_df, self.entry_time) 39 | self.ce_sl = self.index_price + (self.index_price * self.stop_loss_p) 40 | self.pe_sl = self.index_price - (self.index_price * self.stop_loss_p) 41 | print(self.ce_symbol, self.ce_price, self.ce_sl, self.ce_initial_time) 42 | print(self.pe_symbol, self.pe_price, self.pe_sl, self.pe_initial_time) 43 | 44 | def check_and_set_sl_to_cost(self, ce_sl_hit, pe_sl_hit): 45 | if 
ce_sl_hit + pe_sl_hit == 1: 46 | if ce_sl_hit: 47 | self.pe_sl = self.index_price 48 | elif pe_sl_hit: 49 | self.ce_sl = self.index_price 50 | if ce_sl_hit + pe_sl_hit == 2: 51 | self.log.debug("No adjustment to SL as this is the second SL hit") 52 | pass 53 | 54 | def select_acted_sl_price(self, sl_line, index_high, index_low, premium_high, premium_low, slippage=1): 55 | sl1 = (sl_line / index_low) * premium_low 56 | sl2 = (sl_line / index_high) * premium_high 57 | acted_sl = ((sl1 + sl2) / 2) * slippage 58 | return acted_sl 59 | 60 | def csv_backtest_for_day(self): 61 | ce_sl_hit = False 62 | pe_sl_hit = False 63 | self.index_df = self.index_df.sort_values(by = "time") 64 | self.ce_df = self.ce_df.sort_values(by = "time") 65 | self.pe_df = self.pe_df.sort_values(by = "time") 66 | for i in range(len(self.index_df)): 67 | if self.create_time(self.entry_time) <= self.create_time(self.index_df.iloc[i]["time"]) <= self.create_time(self.exit_time): 68 | self.current_index_price_ce, self.current_index_time = self.index_df.iloc[i]["high"], self.index_df.iloc[i]["time"] 69 | self.current_index_price_pe, self.current_index_time = self.index_df.iloc[i]["low"], self.index_df.iloc[i]["time"] 70 | if not ce_sl_hit: 71 | self.current_ce_price, self.current_ce_time = self.ce_df.iloc[i]["high"], self.ce_df.iloc[i]["time"] 72 | if self.current_index_price_ce >= self.ce_sl: 73 | ce_sl_hit = True 74 | self.current_ce_price = self.select_acted_sl_price(self.ce_sl, self.index_df.iloc[i]["high"], self.index_df.iloc[i]["low"], 75 | self.ce_df.iloc[i]["high"], self.ce_df.iloc[i]["low"], slippage=self.slippage) 76 | self.check_and_set_sl_to_cost(ce_sl_hit, pe_sl_hit) 77 | print("\033[1;91mstoploss hit for CE\033[0m") 78 | if not pe_sl_hit: 79 | self.current_pe_price, self.current_pe_time = self.pe_df.iloc[i]["high"], self.pe_df.iloc[i]["time"] 80 | if self.current_index_price_pe <= self.pe_sl: 81 | pe_sl_hit = True 82 | self.current_pe_price = self.select_acted_sl_price(self.pe_sl, self.index_df.iloc[i]["high"], self.index_df.iloc[i]["low"], 83 | self.pe_df.iloc[i]["high"], self.pe_df.iloc[i]["low"], slippage=self.slippage) 84 | self.check_and_set_sl_to_cost(ce_sl_hit, pe_sl_hit) 85 | print("\033[1;91mstoploss hit for PE\033[0m") 86 | if self.calculate_result(self.ce_price, self.current_ce_price, self.pe_price, self.current_pe_price) < self.max_loss_per_lot * self.number_of_lots: 87 | break 88 | self.sl_hit = ce_sl_hit + pe_sl_hit 89 | self.result = self.calculate_result(self.ce_price, self.current_ce_price, self.pe_price, self.current_pe_price) 90 | 91 | def runBackTest(self): 92 | self.success = 0 93 | self.failure = 0 94 | self.max_profit = 0 95 | self.total_profit = 0 96 | self.total_profit_days = 0 97 | self.max_days_with_profit = 0 98 | self.max_days_with_profit_temp = 0 99 | self.max_loss = 0 100 | self.total_loss = 0 101 | self.total_loss_days = 0 102 | self.max_days_with_loss = 0 103 | self.max_days_with_loss_temp = 0 104 | self.overall_result = 0 105 | self.monthly_results = self.create_monthly_result_dict() 106 | self.csvFile = open(self.csv_out_file, "w") 107 | self.buffer = "Date,Day,Index,CE,CE Time,CE Price,CE SL,CE LTP,PE,PE Time,PE Price,PE SL,PE LTP,SL hit,Net\n" 108 | self.csvFile.write(self.buffer) 109 | self.current_date_format = self.create_date(self.start_date) 110 | self.end_date_format = self.create_date(self.end_date) 111 | failed_backtests = {} 112 | while (self.end_date_format - self.current_date_format).days >= 0: 113 | while self.create_date(self.current_date_format).weekday() not in 
self.days_to_run: 114 | self.current_date_format = self.increment_date(self.current_date_format) 115 | if str(self.current_date_format) not in self.public_holidays: 116 | try: 117 | self.df = self.read_csv_data() 118 | self.initialise_for_csv_backtest() 119 | self.csv_backtest_for_day() 120 | self.overall_result += self.result 121 | self.monthly_results[str(self.current_date_format)[:-3]] += self.result 122 | print("==========================================") 123 | if self.result >= 0: 124 | self.max_days_with_loss = max(self.max_days_with_loss, self.max_days_with_loss_temp) 125 | self.max_days_with_loss_temp = 0 126 | self.max_days_with_profit_temp = self.max_days_with_profit_temp + 1 127 | self.total_profit_days = self.total_profit_days + 1 128 | self.total_profit = self.total_profit + self.result 129 | self.max_profit = max(self.max_profit, self.result) 130 | print("\033[1;92m",self.deci2(self.result), "\n\033[0m") 131 | else: 132 | self.max_days_with_profit = max(self.max_days_with_profit, self.max_days_with_profit_temp) 133 | self.max_days_with_profit_temp = 0 134 | self.max_days_with_loss_temp = self.max_days_with_loss_temp + 1 135 | self.total_loss_days = self.total_loss_days + 1 136 | self.total_loss = self.total_loss + self.result 137 | self.max_loss = min(self.max_loss, self.result) 138 | print("\033[1;91m",self.deci2(self.result), "\n\033[0m") 139 | self.buffer = [str(self.current_date_format), self.get_day(self.current_date_format), str(float(self.index_price)), self.ce_symbol, self.ce_initial_time, str(self.ce_price), str(self.ce_sl), str(self.current_ce_price), \ 140 | self.pe_symbol, self.pe_initial_time, str(self.pe_price), str(self.pe_sl), str(self.current_pe_price), str(self.sl_hit), str(self.deci2(self.result))] 141 | self.csvFile.write(",".join(self.buffer) + "\n") 142 | self.success = self.success + 1 143 | except Exception as e: 144 | self.log.error(str(e)) 145 | self.log.error("Could not backtest for " + str(self.current_date_format) + "\n") 146 | if "No such file or directory" not in str(e): 147 | self.failure = self.failure + 1 148 | failed_backtests[str(self.current_date_format)] = str(e) 149 | else: 150 | pass 151 | self.current_date_format = self.increment_date(self.current_date_format) 152 | self.csvFile.close() 153 | self.max_days_with_profit = max(self.max_days_with_profit, self.max_days_with_profit_temp) 154 | self.max_days_with_loss = max(self.max_days_with_loss, self.max_days_with_loss_temp) 155 | print("Entry time:", self.entry_time, "Exit time:", self.exit_time, "Sl:", self.stop_loss_p) 156 | print("Overall result", self.overall_result) 157 | print("Average result", self.overall_result/(self.success + self.failure)) 158 | print("Max profit was", self.max_profit) 159 | print("Average profit on profit making days was", self.total_profit/self.total_profit_days) 160 | print("Winning streak:", self.max_days_with_profit) 161 | print("Win percentage", self.total_profit_days/self.success) 162 | print("Max loss was", self.max_loss) 163 | print("Average loss on loss making days was", self.total_loss/self.total_loss_days) 164 | print("Loosing streak:", self.max_days_with_loss) 165 | print("Win percentage", self.total_loss_days/self.success) 166 | print("Successfull back tests:", self.success) 167 | print("Failed back tests:", self.failure) 168 | print("Monthly wise resport is given below") 169 | for i, j in self.monthly_results.items(): 170 | print(i, ": ", j) 171 | -------------------------------------------------------------------------------- 
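The index-based stop loss in `setTimeStraddleIndexSL` above (and in `setTimeStrangleIndexSL` below) is easier to follow with concrete numbers. The sketch below re-implements `select_acted_sl_price` on its own and walks one hypothetical BANKNIFTY minute-candle through it; every price in it is invented purely for illustration.

```python
# Illustration of the index-SL -> premium conversion used by the *IndexSL classes.
# All prices below are hypothetical.

def select_acted_sl_price(sl_line, index_high, index_low, premium_high, premium_low, slippage=1):
    # Scale the index SL level into premium terms using both extremes of the
    # candle, then average the two estimates and apply the slippage factor.
    sl1 = (sl_line / index_low) * premium_low
    sl2 = (sl_line / index_high) * premium_high
    return ((sl1 + sl2) / 2) * slippage

index_entry = 36000.0                      # index price at entry_time
stop_loss_p = 0.009                        # same meaning as the constructor argument
ce_sl = index_entry * (1 + stop_loss_p)    # CE leg exits if the index rises to 36324.0
pe_sl = index_entry * (1 - stop_loss_p)    # PE leg exits if the index falls to 35676.0

# A later candle whose index high pierces ce_sl:
index_high, index_low = 36350.0, 36280.0
ce_high, ce_low = 310.0, 285.0             # CE premium range in the same candle

exit_premium = select_acted_sl_price(ce_sl, index_high, index_low, ce_high, ce_low)
print(round(exit_premium, 2))              # ~297.56, the estimated CE exit price
```

Once one leg's stop loss is hit, `check_and_set_sl_to_cost` moves the surviving leg's SL line to the entry index price, i.e. to cost in index terms; a second hit leaves the lines untouched.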
/backtest/setTimeStrangleIndexSL.py: -------------------------------------------------------------------------------- 1 | from backtest.backtester import Backtester 2 | from pprint import pprint 3 | import pandas as pd 4 | 5 | class setTimeStrangleIndexSL(Backtester): 6 | def __init__(self, index, start_date, end_date, entry_time, exit_time, point_deviation, stop_loss_p, **kwargs): 7 | self.entry_time = entry_time 8 | self.exit_time = exit_time 9 | self.point_deviation = point_deviation 10 | self.stop_loss_p = stop_loss_p 11 | super(setTimeStrangleIndexSL, self).__init__(index, start_date, end_date, **kwargs) 12 | self.get_additional_vars(kwargs) 13 | 14 | def get_additional_vars(self, kwargs): 15 | self.historical_data_path = kwargs.get("historical_data_path", "./hostoricalData/") 16 | self.number_of_lots = kwargs.get("number_of_lots", 1) 17 | self.csv_out_file = kwargs.get("csv_out_file", "backtestFromCsv.csv") 18 | self.max_loss_per_lot = kwargs.get("max_loss_per_lot", -10000000000) 19 | if kwargs.get("quantity_per_lot", False): 20 | self.quantity_per_lot = kwargs.get("quantity_per_lot") 21 | elif self.index == "BANKNIFTY": 22 | self.quantity_per_lot = 25 23 | elif self.index == "NIFTY": 24 | self.quantity_per_lot = 50 25 | 26 | def initialise_for_csv_backtest(self): 27 | current_date = str(self.current_date_format) 28 | self.index_df = self.df.loc[self.df["symbol"] == self.index].sort_values(by = "time") 29 | self.index_price, self.index_time = self.get_price_for_nearest_time(self.index_df, self.entry_time) 30 | index_strike_price = self.find_strike_price(self.index_price) 31 | print(index_strike_price, current_date) 32 | thursday = self.next_thursday(self.current_date_format) 33 | self.ce_symbol = self.create_scrip_symbol("CE", index_strike_price + self.point_deviation, self.index) 34 | self.pe_symbol = self.create_scrip_symbol("PE", index_strike_price - self.point_deviation, self.index) 35 | print(self.ce_symbol, self.pe_symbol) 36 | self.ce_df = self.df.loc[self.df["symbol"] == self.ce_symbol].sort_values(by = "time") 37 | self.ce_price, self.ce_initial_time = self.get_price_for_nearest_time(self.ce_df, self.entry_time) 38 | self.pe_df = self.df.loc[self.df["symbol"] == self.pe_symbol].sort_values(by = "time") 39 | self.pe_price, self.pe_initial_time = self.get_price_for_nearest_time(self.pe_df, self.entry_time) 40 | self.ce_sl = self.index_price + (self.index_price * self.stop_loss_p) 41 | self.pe_sl = self.index_price - (self.index_price * self.stop_loss_p) 42 | print(self.ce_symbol, self.ce_price, self.ce_sl, self.ce_initial_time) 43 | print(self.pe_symbol, self.pe_price, self.pe_sl, self.pe_initial_time) 44 | 45 | def check_and_set_sl_to_cost(self, ce_sl_hit, pe_sl_hit): 46 | if ce_sl_hit + pe_sl_hit == 1: 47 | if ce_sl_hit: 48 | self.pe_sl = self.index_price 49 | elif pe_sl_hit: 50 | self.ce_sl = self.index_price 51 | if ce_sl_hit + pe_sl_hit == 2: 52 | self.log.debug("No adjustment to SL as this is the second SL hit") 53 | pass 54 | 55 | def select_acted_sl_price(self, sl_line, index_high, index_low, premium_high, premium_low, slippage=1): 56 | sl1 = (sl_line / index_low) * premium_low 57 | sl2 = (sl_line / index_high) * premium_high 58 | acted_sl = ((sl1 + sl2) / 2) * slippage 59 | return acted_sl 60 | 61 | def csv_backtest_for_day(self): 62 | ce_sl_hit = False 63 | pe_sl_hit = False 64 | self.index_df = self.index_df.sort_values(by = "time") 65 | self.ce_df = self.ce_df.sort_values(by = "time") 66 | self.pe_df = self.pe_df.sort_values(by = "time") 67 | for i in 
range(len(self.index_df)): 68 | if self.create_time(self.entry_time) <= self.create_time(self.index_df.iloc[i]["time"]) <= self.create_time(self.exit_time): 69 | self.current_index_price_ce, self.current_index_time = self.index_df.iloc[i]["high"], self.index_df.iloc[i]["time"] 70 | self.current_index_price_pe, self.current_index_time = self.index_df.iloc[i]["low"], self.index_df.iloc[i]["time"] 71 | if not ce_sl_hit: 72 | self.current_ce_price, self.current_ce_time = self.ce_df.iloc[i]["high"], self.ce_df.iloc[i]["time"] 73 | if self.current_index_price_ce >= self.ce_sl: 74 | ce_sl_hit = True 75 | self.current_ce_price = self.select_acted_sl_price(self.ce_sl, self.index_df.iloc[i]["high"], self.index_df.iloc[i]["low"], 76 | self.ce_df.iloc[i]["high"], self.ce_df.iloc[i]["low"], slippage=self.slippage) 77 | self.check_and_set_sl_to_cost(ce_sl_hit, pe_sl_hit) 78 | print("\033[1;91mstoploss hit for CE\033[0m") 79 | if not pe_sl_hit: 80 | self.current_pe_price, self.current_pe_time = self.pe_df.iloc[i]["high"], self.pe_df.iloc[i]["time"] 81 | if self.current_index_price_pe <= self.pe_sl: 82 | pe_sl_hit = True 83 | self.current_pe_price = self.select_acted_sl_price(self.pe_sl, self.index_df.iloc[i]["high"], self.index_df.iloc[i]["low"], 84 | self.pe_df.iloc[i]["high"], self.pe_df.iloc[i]["low"], slippage=self.slippage) 85 | self.check_and_set_sl_to_cost(ce_sl_hit, pe_sl_hit) 86 | print("\033[1;91mstoploss hit for PE\033[0m") 87 | if self.calculate_result(self.ce_price, self.current_ce_price, self.pe_price, self.current_pe_price) < self.max_loss_per_lot * self.number_of_lots: 88 | break 89 | self.sl_hit = ce_sl_hit + pe_sl_hit 90 | self.result = self.calculate_result(self.ce_price, self.current_ce_price, self.pe_price, self.current_pe_price) 91 | 92 | def runBackTest(self): 93 | self.success = 0 94 | self.failure = 0 95 | self.max_profit = 0 96 | self.total_profit = 0 97 | self.total_profit_days = 0 98 | self.max_days_with_profit = 0 99 | self.max_days_with_profit_temp = 0 100 | self.max_loss = 0 101 | self.total_loss = 0 102 | self.total_loss_days = 0 103 | self.max_days_with_loss = 0 104 | self.max_days_with_loss_temp = 0 105 | self.overall_result = 0 106 | self.monthly_results = self.create_monthly_result_dict() 107 | self.csvFile = open(self.csv_out_file, "w") 108 | self.buffer = "Date,Day,Index,CE,CE Time,CE Price,CE SL,CE LTP,PE,PE Time,PE Price,PE SL,PE LTP,SL hit,Net\n" 109 | self.csvFile.write(self.buffer) 110 | self.current_date_format = self.create_date(self.start_date) 111 | self.end_date_format = self.create_date(self.end_date) 112 | failed_backtests = {} 113 | while (self.end_date_format - self.current_date_format).days >= 0: 114 | while self.create_date(self.current_date_format).weekday() not in self.days_to_run: 115 | self.current_date_format = self.increment_date(self.current_date_format) 116 | if str(self.current_date_format) not in self.public_holidays: 117 | try: 118 | self.df = self.read_csv_data() 119 | self.initialise_for_csv_backtest() 120 | self.csv_backtest_for_day() 121 | self.overall_result += self.result 122 | self.monthly_results[str(self.current_date_format)[:-3]] += self.result 123 | print("==========================================") 124 | if self.result >= 0: 125 | self.max_days_with_loss = max(self.max_days_with_loss, self.max_days_with_loss_temp) 126 | self.max_days_with_loss_temp = 0 127 | self.max_days_with_profit_temp = self.max_days_with_profit_temp + 1 128 | self.total_profit_days = self.total_profit_days + 1 129 | self.total_profit = self.total_profit + 
self.result 130 | self.max_profit = max(self.max_profit, self.result) 131 | print("\033[1;92m",self.deci2(self.result), "\n\033[0m") 132 | else: 133 | self.max_days_with_profit = max(self.max_days_with_profit, self.max_days_with_profit_temp) 134 | self.max_days_with_profit_temp = 0 135 | self.max_days_with_loss_temp = self.max_days_with_loss_temp + 1 136 | self.total_loss_days = self.total_loss_days + 1 137 | self.total_loss = self.total_loss + self.result 138 | self.max_loss = min(self.max_loss, self.result) 139 | print("\033[1;91m",self.deci2(self.result), "\n\033[0m") 140 | self.buffer = [str(self.current_date_format), self.get_day(self.current_date_format), str(float(self.index_price)), self.ce_symbol, self.ce_initial_time, str(self.ce_price), str(self.ce_sl), str(self.current_ce_price), \ 141 | self.pe_symbol, self.pe_initial_time, str(self.pe_price), str(self.pe_sl), str(self.current_pe_price), str(self.sl_hit), str(self.deci2(self.result))] 142 | self.csvFile.write(",".join(self.buffer) + "\n") 143 | self.success = self.success + 1 144 | except Exception as e: 145 | self.log.error(str(e)) 146 | self.log.error("Could not backtest for " + str(self.current_date_format) + "\n") 147 | if "No such file or directory" not in str(e): 148 | self.failure = self.failure + 1 149 | failed_backtests[str(self.current_date_format)] = str(e) 150 | else: 151 | pass 152 | self.current_date_format = self.increment_date(self.current_date_format) 153 | self.csvFile.close() 154 | self.max_days_with_profit = max(self.max_days_with_profit, self.max_days_with_profit_temp) 155 | self.max_days_with_loss = max(self.max_days_with_loss, self.max_days_with_loss_temp) 156 | print("Entry time:", self.entry_time, "Exit time:", self.exit_time, "Sl:", self.stop_loss_p) 157 | print("Overall result", self.overall_result) 158 | print("Average result", self.overall_result/(self.success + self.failure)) 159 | print("Max profit was", self.max_profit) 160 | print("Average profit on profit making days was", self.total_profit/self.total_profit_days) 161 | print("Winning streak:", self.max_days_with_profit) 162 | print("Win percentage", self.total_profit_days/self.success) 163 | print("Max loss was", self.max_loss) 164 | print("Average loss on loss making days was", self.total_loss/self.total_loss_days) 165 | print("Loosing streak:", self.max_days_with_loss) 166 | print("Win percentage", self.total_loss_days/self.success) 167 | print("Successfull back tests:", self.success) 168 | print("Failed back tests:", self.failure) 169 | print("Monthly wise resport is given below") 170 | for i, j in self.monthly_results.items(): 171 | print(i, ": ", j) 172 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | pandas==1.4.3 2 | maticalgos 3 | matplotlib 4 | numpy 5 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | from setuptools import setup, find_packages 2 | 3 | with open("requirements.txt") as f: 4 | requirements = f.read().splitlines() 5 | 6 | description = "A python package to run backtests on Indian Indexes CSV data" 7 | 8 | setup( 9 | name = "backtest", 10 | version = "0.0.1", 11 | description = description, 12 | author = "Prajwal Shenoy", 13 | author_email = "prajwalkpshenoy@gmail.com", 14 | url = "https://github.com/PrajwalShenoy/backtest", 15 | install_requires = 
requirements, 16 | ) --------------------------------------------------------------------------------
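All four strategy modules above follow the same shape: the constructor stores the strategy-specific parameters and hands the shared ones (index, dates, `days_to_run`, `slippage`) to `Backtester.__init__`, `initialise_for_csv_backtest` picks the option legs for the current day, `csv_backtest_for_day` walks that day's candles and sets `self.sl_hit` and `self.result`, and `runBackTest` drives the date loop and writes one CSV row per traded day. A rough skeleton for wiring a new strategy into this structure is sketched below; the class name and the placeholder bodies are hypothetical, not part of the package.

```python
from backtest.backtester import Backtester


class myCustomStrategy(Backtester):
    """Hypothetical strategy skeleton following the pattern of the bundled classes."""

    def __init__(self, index, start_date, end_date, entry_time, exit_time, **kwargs):
        # Store strategy-specific parameters before the shared setup runs.
        self.entry_time = entry_time
        self.exit_time = exit_time
        super(myCustomStrategy, self).__init__(index, start_date, end_date, **kwargs)
        # kwargs such as historical_data_path, number_of_lots, csv_out_file and
        # quantity_per_lot would be read here, as the bundled classes do in
        # get_additional_vars.

    def initialise_for_csv_backtest(self):
        # Select the legs for self.current_date_format from self.df,
        # the day's CSV loaded via Backtester.read_csv_data().
        raise NotImplementedError

    def csv_backtest_for_day(self):
        # Walk the day's candles between entry_time and exit_time and
        # set self.sl_hit and self.result.
        raise NotImplementedError

    def runBackTest(self):
        # Date loop: skip holidays and days outside days_to_run, read the CSV,
        # simulate the day, and append one report row per day (see the bundled
        # classes for the full bookkeeping).
        raise NotImplementedError
```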