├── LICENSE ├── README.md ├── UPDATES.md ├── main.py ├── market_data_extraction_tool ├── __init__.py └── market_extraction_tool.py └── requirements.txt /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2019 Quentin 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # market_data_extraction_tool 2 | market_data_extraction_tool is a script that downloads: 3 | - intraday (past 5 days) 4 | - daily (past 5 years) 5 | - active calls/puts of publicly traded companies. 6 | 7 | ### Installation 8 | market_data_extraction_tool requires Python 3.x to run. 
Install the required modules with: 9 | 10 | > cd market_data_extraction_tool 11 | 12 | > pip install --user --requirement requirements.txt 13 | 14 | Go to ``market_extraction_tool.py`` and replace: 15 | 16 | > ``YOUR_API_KEY`` in the function ``import_web_intraday`` with your working AlphaVantage API key 17 | 18 | > ``YOUR_API_KEY`` in the function ``import_web_daily`` with your working IEX API key 19 | 20 | ### How it works 21 | The script will automatically extract the following tickers' data: 22 | 1. Oil & Gas: XOM, CVX, COP, EOG, OXY 23 | 2. Tech: AAPL, GOOGL, GOOG, FB, MSFT 24 | 3. Banking: JPM, BAC, C, WFC, GS 25 | 4. Recent IPOs: LYFT, PINS 26 | 27 | with: 28 | 29 | > main.py 30 | 31 | You can also run the script with the ``extraction`` subcommand and command line arguments: 32 | 33 | > main.py extraction [--concurrent] [--replace] ``company tickers`` 34 | 35 | 1. ``--concurrent`` enables concurrent processes to download multiple tickers' data in parallel 36 | 2. ``--replace`` replaces the default list of companies with the ticker(s) passed as command line argument(s) 37 | 38 | As the script ends, it plots the past 5 days' intraday market data of Goldman Sachs (ticker: GS) -------------------------------------------------------------------------------- /UPDATES.md: -------------------------------------------------------------------------------- 1 | ### Past Updates 2 | Update on May 11th, 2019: 3 | 1. Clarified the comments in market_extraction_tool.py 4 | 2. Implemented concurrency for a faster run 5 | 3. Updated the README.md to reflect changes 6 | 7 | Update on June 3rd, 2019: 8 | 1. Patched the short_term_analysis() function: it would crash when searching for a folder of intraday data that does not exist 9 | 2. Clarified/simplified the error messaging when the program checks for nonexistent folders 10 | 3. Patched the extraction of option data: Yahoo seems to throttle requests, so a time.sleep(60) was added after every 10 company requests 11 | 4. 
Updated the README.md to reflect changes 12 | 13 | Update on June 4th, 2019: 14 | 1. Clarified/simplified the error messaging when the program fails to request option data 15 | 2. Updated ongoing issues 16 | 3. Updated the README.md to reflect changes 17 | 18 | Update on August 24th, 2019: 19 | 1. Updated the code for readability 20 | 2. The IEX data provider now requires an API token to work 21 | 22 | Update on August 28th, 2019: 23 | 1. Updated the code structure 24 | 25 | Update on September 12th, 2019: 26 | 1. Implemented a command line parameter so users can choose whether or not to extract data concurrently: 27 | 28 | > $main.py True 29 | 30 | "True" is case sensitive. 31 | 32 | Update on September 17th, 2019: 33 | 1. Added argparse 34 | 2. Reviewed the README.md 35 | 3. Included a requirements.txt -------------------------------------------------------------------------------- /main.py: -------------------------------------------------------------------------------- 1 | import market_data_extraction_tool as mdet 2 | import argparse 3 | 4 | def extraction(args, companies): 5 | """ 6 | Launches extraction(s). 
7 | ----- 8 | :param args: Namespace ; arguments parsed from the command line 9 | :param companies: list ; list of company tickers 10 | """ 11 | if len(args.firms) != 0: 12 | if args.replace: 13 | companies = args.firms 14 | else: 15 | companies.extend(args.firms) 16 | 17 | if args.concurrent: 18 | print("> Launch of concurrent extraction") 19 | mdet.main(companies, True) 20 | print("> Concurrent extraction performed") 21 | else: 22 | print("> Launch of linear extraction") 23 | mdet.main(companies, False) 24 | print("> Linear extraction performed") 25 | 26 | # implementation of a command line parser 27 | parser = argparse.ArgumentParser() 28 | subparsers = parser.add_subparsers() 29 | concurrent_parser = subparsers.add_parser("extraction") 30 | concurrent_parser.add_argument("firms", type = str, nargs = "*", 31 | help = "list of firm tickers to add to, or to replace, the default list stored in the variable.") 32 | concurrent_parser.add_argument("--concurrent", action = "store_true", default = False, 33 | help = "Indicates the extraction is to be performed concurrently.") 34 | concurrent_parser.add_argument("--replace", action = "store_true", default = False, 35 | help = "Indicates that the arguments will replace the existing list stored in the variable.") 36 | concurrent_parser.set_defaults(func = extraction) 37 | 38 | if __name__ == '__main__': 39 | 40 | companies = ['XOM','CVX','COP','EOG','OXY','AAPL','GOOGL','GOOG','FB','MSFT','JPM','BAC','C','WFC','GS','LYFT','PINS'] 41 | 42 | args = parser.parse_args() 43 | try: 44 | func = args.func 45 | func(args, companies) 46 | except AttributeError: 47 | print("> Too few arguments detected") 48 | print("> Defaulting to linear extraction") 49 | print("> Launch of linear extraction") 50 | mdet.main(companies, False) 51 | print("> Linear extraction performed") -------------------------------------------------------------------------------- /market_data_extraction_tool/__init__.py: 
-------------------------------------------------------------------------------- 1 | from .market_extraction_tool import main -------------------------------------------------------------------------------- /market_data_extraction_tool/market_extraction_tool.py: -------------------------------------------------------------------------------- 1 | import arrow 2 | import json 3 | import os 4 | import requests 5 | import time 6 | 7 | from multiprocessing import Process 8 | from datetime import datetime 9 | from yahoo_fin import options 10 | 11 | import numpy as np 12 | import pandas as pd 13 | import pandas_datareader.data as web 14 | import matplotlib.pyplot as plt 15 | import matplotlib as mpl 16 | 17 | # Register Pandas Formatters and Converters with matplotlib 18 | # This function modifies the global matplotlib.units.registry dictionary 19 | from pandas.plotting import register_matplotlib_converters 20 | register_matplotlib_converters() 21 | 22 | def import_web_intraday(ticker): 23 | """ 24 | Queries the website of the stock market data provider AlphaVantage (AV). AV provides stock, 25 | forex, and cryptocurrency data. AV limits access for free users as such: 26 | 1. maximum of: 5 unique queries per minute, and 500 unique queries per 24h period 27 | 2. Intraday history is capped at the past five days (current + 4) 28 | 3. After-hour data is not available 29 | The provided data is JSON formatted. The data is a series of 'ticks' for each minute during 30 | trading hours: open (09:30am) to closing (04:00pm). 
Each tick lists the following stock data: open, close, low, high, average, volume 32 | -------- 33 | :param ticker: String ; ticker of a company traded on the financial markets 34 | """ 35 | website = 'https://www.alphavantage.co/query?function=TIME_SERIES_INTRADAY&symbol=' + ticker + '&interval=1min&apikey=YOUR_API_KEY&outputsize=full&datatype=json' 36 | raw_json_intraday_data = requests.get(website) 37 | return raw_json_intraday_data.json() 38 | 39 | def import_web_daily(ticker): 40 | """ 41 | Queries the API of the stock market data provider IEX. IEX provides stock, forex, 42 | and cryptocurrency data. IEX limits access for free users as such: 43 | i. maximum of: 5 years of daily data (/!\ standard in finance is usually 10) 44 | The provided data is formatted as a pandas DataFrame. 45 | -------- 46 | :param ticker: String ; ticker of a company traded on the financial markets 47 | """ 48 | end = datetime.today() 49 | start = end.replace(year=end.year-5) 50 | daily_data = web.DataReader(ticker, 'iex', start, end, access_key = "YOUR_API_KEY") 51 | return daily_data 52 | 53 | def partition_save_intraday(ticker,json_extract): 54 | """ 55 | Saves a JSON array containing a company's intraday data in a folder named after 56 | the company's ticker. It does: 57 | 1. Creates a dictionary that contains the list of existing day dates in the JSON 58 | array, formatted as "yyyy-mm-dd", as keys 59 | 2. Splits the JSON array into separate JSON dictionaries (one for each covered day). 60 | Each created dictionary is saved as a single JSON file in the folder mentioned above. 61 | If a file shares the same name, both are merged: 62 | 2.1 Checks if a directory named <ticker>/intraday_data exists 63 | 2.2 Checks if a file named <ticker>_<day> exists in the directory. 
If so: merges 64 | the created dictionary with the data stored in the existing file 65 | 2.3 Saves the data (merged when applicable) in the folder under the name 66 | <ticker>_<day> 67 | -------- 68 | :param ticker: String ; ticker of a company traded on the financial markets 69 | :param json_extract: Dictionary ; JSON array of intraday data of the company 70 | """ 71 | # Step 1 72 | date = {} 73 | for item in json_extract: date[item[:10]] = "date" 74 | 75 | # Step 2 76 | for day in date.keys(): 77 | daily_time_series = {} 78 | 79 | # Step 2.1 80 | for item in json_extract: 81 | if(item[:10] == day): daily_time_series[item] = json_extract[item] 82 | 83 | # Step 2.2 84 | path = os.path.join(ticker, "intraday_data") 85 | if not os.path.isdir(path): os.makedirs(path) 86 | data_file_name = ticker + "_" + day 87 | 88 | # Step 2.3 89 | try: 90 | with open(os.path.join(path,data_file_name),'r') as file: 91 | existing_data_in_file = json.load(file) 92 | for item in existing_data_in_file: 93 | daily_time_series[item] = existing_data_in_file[item] 94 | 95 | except Exception as e: 96 | print(f"{ticker}: {e}") 97 | 98 | with open(os.path.join(path,data_file_name), 'w') as f: 99 | json.dump(daily_time_series, f) 100 | 101 | def partition_save_daily(ticker, data_extract): 102 | """ 103 | Saves the retrieved dataframe in a folder named after the company's ticker. It does: 104 | 1. Checks if a folder named <ticker> exists and creates it if not. Checks if a 105 | file named <ticker> exists in the folder. If so: merges the retrieved and existing 106 | data. 107 | 2. 
Saves the data (merged when applicable) in the folder under the name <ticker> 108 | -------- 109 | :param ticker: String ; ticker of a company traded on the financial markets 110 | :param data_extract: DataFrame ; DataFrame of daily data of the company 111 | """ 112 | # Step 1 113 | if not os.path.isdir(ticker): os.mkdir(ticker) 114 | 115 | data_file_name = ticker 116 | data_extract_dictionary = data_extract.to_dict(orient="index") 117 | 118 | try: 119 | with open(os.path.join(ticker,data_file_name),'r') as file: 120 | existing_data_in_file = json.load(file) 121 | for item in existing_data_in_file: 122 | data_extract_dictionary[item] = existing_data_in_file[item] 123 | 124 | except Exception as e: 125 | print(f"{ticker}: {e}") 126 | 127 | # Step 2 128 | with open(os.path.join(ticker,data_file_name), 'w') as f: 129 | json.dump(data_extract_dictionary, f) 130 | 131 | def save_intraday(ticker): 132 | """ 133 | Saves AV's available intraday data for the company <ticker>. 134 | -------- 135 | :param ticker: String ; ticker of a company traded on the financial markets 136 | """ 137 | raw_json = import_web_intraday(ticker) 138 | time_series_json = raw_json['Time Series (1min)'] 139 | partition_save_intraday(ticker,time_series_json) 140 | 141 | def save_daily(ticker): 142 | """ 143 | Saves IEX's available daily data for the company <ticker>. 144 | -------- 145 | :param ticker: String ; ticker of a company traded on the financial markets 146 | """ 147 | raw_dataframe = import_web_daily(ticker) 148 | partition_save_daily(ticker,raw_dataframe) 149 | 150 | def extract_save_option_data(ticker): 151 | """ 152 | Imports a company's option data from the Yahoo Finance database and saves it. 153 | The option data is split per type (call or put), expiration date, and day. 154 | It does: 155 | 1. Checks for nested directories: 156 | <ticker>/options_data_<ticker>/<expiration date>_<ticker>_options 157 | Creates them if non-existent. 158 | 2. Checks in the folder for a file named: 159 | <expiration date>_<ticker>_<option type>_as-at_<extraction date> 160 | If so: merges the existing and newly extracted data. 161 | 3. 
Saves the data (merged when applicable) in the folder under the company's 162 | ticker. 163 | -------- 164 | :param ticker: String ; ticker of a company traded on the financial markets 165 | """ 166 | extract_dates = options.get_expiration_dates(ticker) 167 | today = datetime.today().strftime("%Y-%m-%d") 168 | 169 | for expiration_date in extract_dates: 170 | format_date = arrow.get(expiration_date, 'MMMM D, YYYY').format('YYYY-MM-DD') 171 | extract_date = arrow.get(expiration_date, 'MMMM D, YYYY').format('MM/DD/YYYY') 172 | 173 | try: 174 | extract = options.get_options_chain(ticker, extract_date) 175 | path = os.path.join(ticker, "options_data_" + ticker, format_date + "_" + ticker + "_options") 176 | option_types = ["calls", "puts"] 177 | 178 | for option in option_types: 179 | extract_chain = extract[option] 180 | extract_chain = extract_chain.to_dict(orient="index") 181 | data_file_name = format_date + "_" + ticker + "_" + option + "_as-at_" + today 182 | 183 | # Step 1 184 | if not os.path.exists(path): os.makedirs(path) 185 | 186 | # Step 2 187 | if os.path.isfile(os.path.join(path,data_file_name)): 188 | try: 189 | with open(os.path.join(path,data_file_name),'r') as file: 190 | existing_data_in_file = json.load(file) 191 | for item in existing_data_in_file: 192 | extract_chain[item] = existing_data_in_file[item] 193 | 194 | except Exception as e: 195 | print(f"{ticker}: {e}") 196 | 197 | # Step 3 198 | with open(os.path.join(path,data_file_name), 'w') as f: 199 | json.dump(extract_chain, f) 200 | print(f"{ticker}: {format_date} {option} options data retrieved successfully!\n") 201 | 202 | except Exception as e: 203 | print(f"{ticker}: {format_date} options data could not be retrieved: {e}\n") 204 | 205 | def extract_info_intraday(company_list): 206 | """ 207 | Calls the extract and save functions above for each input company. 
208 | -------- 209 | :param company_list: List ; list of publicly traded companies' tickers 210 | """ 211 | try: 212 | for position, company in enumerate(company_list, start = 1): 213 | save_intraday(company) 214 | print(f"{company}: intraday market data retrieved successfully!\n") 215 | if (position % 5 == 0 and position != len(company_list)): 216 | print("ALPHAVANTAGE REQUEST LIMIT REACHED - WAITING FOR 1 MINUTE\n") 217 | time.sleep(60) 218 | print("1 MINUTE PASSED - RETURN TO REQUESTING ALPHAVANTAGE\n") 219 | 220 | except Exception as e: 221 | print(f"{company}: {e}") 222 | 223 | def extract_info_daily_and_options(company_list): 224 | """ 225 | Calls the extract and save functions above for each input company. 226 | -------- 227 | :param company_list: List ; list of publicly traded companies' tickers 228 | """ 229 | try: 230 | for position, company in enumerate(company_list, start = 1): 231 | if (position % 10 == 0 and position != len(company_list)): 232 | print("YAHOO FINANCE REQUEST LIMIT REACHED - WAITING FOR 1 MINUTE\n") 233 | time.sleep(60) 234 | print("1 MINUTE PASSED - RETURN TO REQUESTING YAHOO FINANCE\n") 235 | save_daily(company) 236 | print(f"{company} daily market data retrieved successfully!\n") 237 | extract_save_option_data(company) 238 | except Exception as e: 239 | print(f"{company}: {e}") 240 | 241 | def extract_info_all(company_list): 242 | """ 243 | Calls the extract and save functions above for each input company. 
244 | -------- 245 | :param company_list: List ; list of publicly traded companies' tickers 246 | """ 247 | try: 248 | for position, company in enumerate(company_list, start = 1): 249 | save_intraday(company) 250 | print(f"{company} intraday market data retrieved successfully!\n") 251 | save_daily(company) 252 | print(f"{company} daily market data retrieved successfully!\n") 253 | extract_save_option_data(company) 254 | if (position % 5 == 0 and position != len(company_list)): 255 | print("ALPHAVANTAGE REQUEST LIMIT REACHED - WAITING FOR 1 MINUTE\n") 256 | time.sleep(60) 257 | print("1 MINUTE PASSED - RETURN TO REQUESTING ALPHAVANTAGE\n") 258 | 259 | except Exception as e: 260 | print(f"{company}: {e}") 261 | 262 | def short_term_analysis(ticker): 263 | """ 264 | Extracts intraday data of a single company from AV's website or from an existing 265 | local file. It does: 266 | 1. Checks if data for the company exists locally. If so: retrieves it. If not: 267 | requests it from AV's website. The data is extracted as a JSON array and formatted 268 | as a DataFrame. 269 | 2. Formats the dataframe to fit the following format: 270 | DATE (index)| MARKET DATA 271 | date 1 | open | low | high | etc. 272 | date 2 | open | low | high | etc. 273 | etc. | etc. 274 | 2.1 Modifies the type of each instance of the MARKET DATA from a string to 275 | a float. 276 | 2.2 Creates a sorted list of the single days covered in the data, formatted as 277 | "yyyy-mm-dd". 278 | 2.3 Modifies the type of each instance of the DATE (index) from a string or 279 | integer to a datetime. 280 | 2.4 Formats the data into an array to be plotted. FYI, the minute data provided by 281 | AV runs from 09:31:00am to 04:00:00pm. The opening bell tick (09:30:00am) is 282 | missing. AV's data is actually lagged by a minute, i.e. the opening value of a 283 | stock at 09:30:00am corresponds to the "1. open" value linked to the index 284 | "09:31:00am". 
The actual daily data per minute of a stock can be approximated 285 | as such: "1. open" 09:31:00am datum + "4. close" 09:31:00am to 04:00:00pm data. 286 | 3. Plots the data 287 | -------- 288 | :param ticker: String ; ticker of a company traded on the financial markets 289 | """ 290 | # Step 1 291 | path = os.path.join(ticker, "intraday_data") 292 | market_data = {} 293 | try: 294 | covered_date_filenames = os.listdir(path)[-5:] 295 | for filename in covered_date_filenames: 296 | with open(os.path.join(ticker,"intraday_data",filename),'r') as file: 297 | load_dict = json.load(file) 298 | market_data = {**market_data, **load_dict} 299 | market_data_minute_time_series = pd.DataFrame(market_data).transpose() 300 | print("Data loaded from existing local file.") 301 | except Exception as e: 302 | print(f"{ticker}: {e}") 303 | market_data = import_web_intraday(ticker) 304 | market_data_minute_time_series = pd.DataFrame(market_data["Time Series (1min)"]).transpose() 305 | market_data_minute_time_series = market_data_minute_time_series.reindex(index=market_data_minute_time_series.index[::-1]) 306 | print("Data loaded from data repository online.") 307 | 308 | # Step 2 309 | # Step 2.1 310 | for column in market_data_minute_time_series.keys(): 311 | market_data_minute_time_series[column] = market_data_minute_time_series[column].astype('float64') 312 | 313 | # Step 2.2 314 | dates_in_time_series = sorted(set([x[:10] for x in market_data_minute_time_series.index.tolist()])) 315 | 316 | # Step 2.3 317 | market_data_minute_time_series.index = pd.to_datetime(market_data_minute_time_series.index, format = '%Y-%m-%d %H:%M:%S') 318 | 319 | # Step 2.4 320 | for counter, index in enumerate(market_data_minute_time_series.index): 321 | if str(index)[-8:] == "09:31:00" and str(market_data_minute_time_series.index[counter-1])[-8:] != "09:30:00": 322 | market_data_minute_time_series.loc[pd.Timestamp(str(index)[:11]+"09:30:00")] = [0,0,0,market_data_minute_time_series.iloc[counter, 0],0] 323 | 
market_data_minute_time_series = market_data_minute_time_series.sort_index() 324 | 325 | # Step 3 326 | plt.style.use('ggplot') 327 | fig, ax = plt.subplots(1, len(dates_in_time_series), figsize=(16,7), squeeze=False) 328 | plt.suptitle(ticker, size = 20, y=0.93) 329 | i = 0 330 | for counter, (group_name, df_group) in enumerate(market_data_minute_time_series.groupby(pd.Grouper(freq='D'))): 331 | if not df_group.empty: 332 | ax[0][counter-i].plot(df_group['4. close'], color = "blue") 333 | xfmt = mpl.dates.DateFormatter('%m-%d %H:%M') 334 | ax[0][counter-i].xaxis.set_major_locator(mpl.dates.MinuteLocator(byminute=[30], interval = 1)) 335 | ax[0][counter-i].xaxis.set_major_formatter(xfmt) 336 | ax[0][counter-i].get_xaxis().set_tick_params(which='major', pad=4) 337 | ax[0][counter-i].set_ylim(min(market_data_minute_time_series['4. close'])- 338 | round(0.005*min(market_data_minute_time_series['4. close']),0), 339 | max(market_data_minute_time_series['4. close'])+ 340 | round(0.005*max(market_data_minute_time_series['4. close']),0)) 341 | else: 342 | i += 1 343 | fig.autofmt_xdate() 344 | plt.show() 345 | 346 | def main(company_list, concurrency = False): 347 | """ 348 | Main function call. 
349 | -------- 350 | :param company_list: List ; list of publicly traded companies' tickers 351 | :param concurrency: Boolean ; if True, runs the intraday and the daily/options extractions in parallel 352 | """ 353 | 354 | # Concurrent running 355 | if concurrency: 356 | try: 357 | process_1 = Process(target = extract_info_intraday, args=(company_list,)) 358 | process_1.start() 359 | process_2 = Process(target = extract_info_daily_and_options, args=(company_list,)) 360 | process_2.start() 361 | except Exception as e: 362 | print(e) 363 | else: 364 | # Sequential running 365 | extract_info_intraday(company_list) 366 | extract_info_daily_and_options(company_list) 367 | #extract_info_all(company_list) 368 | 369 | single_company_to_analyze = "GS" 370 | short_term_analysis(single_company_to_analyze) 371 | 372 | 373 | 374 | 375 | 376 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | aiohttp==3.5.4 2 | appdirs==1.4.3 3 | arrow==0.13.1 4 | async-timeout==3.0.1 5 | binaryornot==0.4.4 6 | bs4==0.0.1 7 | certifi==2019.3.9 8 | chardet==3.0.4 9 | Click==7.0 10 | cssselect==1.0.3 11 | cycler==0.10.0 12 | fake-useragent==0.1.11 13 | future==0.17.1 14 | idna==2.8 15 | imageio==2.5.0 16 | jinja2-time==0.2.0 17 | joblib==0.13.2 18 | lupa==1.8 19 | mpmath==1.1.0 20 | multidict==4.5.2 21 | parse==1.12.0 22 | poyo==0.4.2 23 | pyee==6.0.0 24 | pyparsing==2.4.0 25 | PyPDF2==1.26.0 26 | pyppeteer==0.0.25 27 | pyquery==1.4.0 28 | pytz==2019.1 29 | requests==2.21.0 30 | soupsieve==1.9.1 31 | sympy==1.4 32 | tabula==1.0.5 33 | tqdm==4.31.1 34 | urllib3==1.24.2 35 | w3lib==1.20.0 36 | websockets==6.0 37 | whichcraft==0.5.2 38 | wincertstore==0.2 39 | wrapt==1.11.1 40 | yahoo-fin==0.8.2 41 | yarl==1.3.0 42 | matplotlib 43 | numpy 44 | pandas 45 | pandas-datareader 46 | --------------------------------------------------------------------------------
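The per-day partition performed by `partition_save_intraday` can be sketched standalone. This is an illustrative rewrite, not the module's code: `split_by_day` is a hypothetical helper name and the sample ticks are made up, but the grouping mirrors the `item[:10]` date-prefix trick the script uses on AlphaVantage's `"YYYY-MM-DD HH:MM:SS"` keys:

```python
def split_by_day(time_series):
    """Group minute ticks keyed by 'YYYY-MM-DD HH:MM:SS' strings into
    one dictionary per covered day (keyed by 'YYYY-MM-DD')."""
    days = {}
    for timestamp, tick in time_series.items():
        # The first 10 characters of the key are the 'YYYY-MM-DD' date
        days.setdefault(timestamp[:10], {})[timestamp] = tick
    return days

# Hypothetical two-day AlphaVantage-style extract
sample = {
    "2019-06-03 09:31:00": {"1. open": "100.0", "4. close": "100.2"},
    "2019-06-03 09:32:00": {"1. open": "100.2", "4. close": "100.5"},
    "2019-06-04 09:31:00": {"1. open": "101.0", "4. close": "101.1"},
}
per_day = split_by_day(sample)
```

Each value of `per_day` would then be merged with any existing file of the same name and written with `json.dump`, as in steps 2.2 and 2.3 of the script.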
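The rate-limit pacing in `extract_info_intraday` (sleep 60 seconds after every 5 AlphaVantage requests, except when the batch ends on the last company) can be isolated into a small generator. This is a sketch, not part of the package: `paced` is a hypothetical name, and the `pause` callable stands in for `time.sleep(60)` so the logic can be exercised without waiting:

```python
import time

def paced(tickers, batch_size=5, pause=lambda: time.sleep(60)):
    """Yield each ticker, calling pause() after every batch_size tickers
    unless the batch boundary falls exactly on the last ticker."""
    for position, ticker in enumerate(tickers, start=1):
        yield ticker
        if position % batch_size == 0 and position != len(tickers):
            pause()

# Count pauses with a stub instead of actually sleeping
pauses = []
order = list(paced(["XOM", "CVX", "COP", "EOG", "OXY", "AAPL"],
                   batch_size=5, pause=lambda: pauses.append(1)))
```

With the default `pause`, `for ticker in paced(company_list): save_intraday(ticker)` would reproduce the script's 5-requests-per-minute pacing.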