├── .gitignore ├── README.md ├── backend ├── .env.example ├── Apple_90_5weeks.csv ├── Microsoft_30_5weeks.csv ├── Pipfile ├── Pipfile.lock ├── README.md ├── Tesla_30_5weeks.csv ├── Tesla_90_5weeks.csv ├── best_weights │ ├── AAPL_wsize30_sc_dict.p │ ├── AAPL_wsize60_sc_dict.p │ ├── AAPL_wsize90_sc_dict.p │ ├── AMZN_wsize30_sc_dict.p │ ├── AMZN_wsize60_sc_dict.p │ ├── AMZN_wsize90_sc_dict.p │ ├── GOOGL_wsize30_sc_dict.p │ ├── GOOGL_wsize60_sc_dict.p │ ├── GOOGL_wsize90_sc_dict.p │ ├── MSFT_wsize30_sc_dict.p │ ├── MSFT_wsize60_sc_dict.p │ ├── MSFT_wsize90_sc_dict.p │ ├── TSLA_wsize30_sc_dict.p │ ├── TSLA_wsize60_sc_dict.p │ ├── TSLA_wsize90_sc_dict.p │ ├── best_weights_AAPL_wsize30.hdf5 │ ├── best_weights_AAPL_wsize60.hdf5 │ ├── best_weights_AAPL_wsize90.hdf5 │ ├── best_weights_AMZN_wsize30.hdf5 │ ├── best_weights_AMZN_wsize60.hdf5 │ ├── best_weights_AMZN_wsize90.hdf5 │ ├── best_weights_GOOGL_wsize30.hdf5 │ ├── best_weights_GOOGL_wsize60.hdf5 │ ├── best_weights_GOOGL_wsize90.hdf5 │ ├── best_weights_MSFT_wsize30.hdf5 │ ├── best_weights_MSFT_wsize60.hdf5 │ ├── best_weights_MSFT_wsize90.hdf5 │ ├── best_weights_TSLA_wsize30.hdf5 │ ├── best_weights_TSLA_wsize60.hdf5 │ └── best_weights_TSLA_wsize90.hdf5 ├── data_proc.py ├── extrapolate_backend.py ├── extrapolate_predict_keras_old.py ├── finBert │ ├── model │ │ └── config.json │ └── sentiment.py ├── main_keras.py ├── models_keras.py ├── predict_keras.py ├── runs.sh ├── server.py ├── starter_code.py └── utils.py └── frontend ├── README.md ├── package.json ├── public ├── favicon.ico ├── index.html └── manifest.json ├── src ├── App.css ├── App.js ├── components │ ├── Financial │ │ ├── Financial.css │ │ ├── Financial.js │ │ └── combined1.csv │ ├── Misc │ │ └── Landing.js │ ├── Prediction │ │ ├── Prediction.css │ │ └── Prediction.js │ └── Sentiment │ │ ├── Sentiment.css │ │ └── Sentiment.js ├── index.css └── index.js └── yarn.lock /.gitignore: -------------------------------------------------------------------------------- 1 | # dependencies 2 | frontend/node_modules 3 | /.pnp 4 | .pnp.js 5 | backend/reddit 6 | backend/stock_data 7 | backend/twitter 8 | 9 | # testing 10 | /coverage 11 | 12 | # production 13 | /build 14 | 15 | # misc 16 | .DS_Store 17 | *.bin 18 | 19 | 20 | npm-debug.log* 21 | yarn-debug.log* 22 | yarn-error.log* 23 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Stock Visualization 2 | 3 | # React + D3 + Flask Project 4 | 5 | CSE 6242 project Stock price prediction tool 6 | 7 | React + D3 + Flask project. 8 | Note: you will need to download files from the provided links in order to make the code run. 9 | 10 |  11 | 12 | ## Description 13 | 14 | This package contains all the code necessary to run our Stock Visualization UI. It contains a backend written in Python Flask that contains our machine learning models and weights. It also contains a frontend written in React that has our visualizations. 15 | 16 | The backend contains our trained machine learning models which are LSTMs with different window sizes trained on financial and sentiment data for five stocks. We extract sentiment with the help of finBERT. 17 | 18 | The frontend contains visualizations that will help someone analyze stock price versus sentiment, predict future stock prices by giving future predictions of sentiment, and help determine sentiment over the past week on Twitter for a certain stock. 
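To give a concrete picture of the sentiment step mentioned above, the snippet below is a minimal sketch (not the project's `backend/finBert/sentiment.py`) of how finBERT-style scoring with the Hugging Face `transformers` library typically looks once `pytorch_model.bin` and `config.json` are in place (see Installation below). The tokenizer choice (`bert-base-uncased`) and the label order are assumptions here; check `config.json` for the actual id-to-label mapping.

```python
# Hedged sketch of finBERT-style sentiment scoring. Assumes the weights under
# backend/finBert/model/ are a BertForSequenceClassification checkpoint with
# three labels; the label order below is an assumption, not taken from the repo.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("backend/finBert/model")
model.eval()

def sentiment_scores(text):
    # Tokenize one tweet/headline and return softmax probabilities per label.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze().tolist()
    return dict(zip(["positive", "negative", "neutral"], probs))

print(sentiment_scores("Tesla shares surge after record quarterly deliveries."))
```

The backend applies this kind of scoring to Reddit and Twitter text before feeding aggregated sentiment features into the LSTM models.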
19 | 20 | ## Installation 21 | 22 | ### Backend 23 | 24 | 1. Download the pytorch_model.bin file from here 25 | https://drive.google.com/file/d/1BZjW13BIMty_WhPx7uabzC7Kp6wPGtpK/view?usp=sharing 26 | 27 | 2. Place the downloaded file in the backend/finBert/model/ directory (relative path backend/finBert/model). This model is around 400 MB. 28 | This directory should now contain 2 files: config.json and pytorch_model.bin. 29 | 3. Download the archive.zip file from here 30 | https://drive.google.com/file/d/1ApRVntmOTog9XyfVavP7SuaFFlGOMUul/view?usp=sharing 31 | 32 | 4. Unzip the archive and move the 3 directories twitter, reddit, and stock_data 33 | into the backend directory. These 3 directories are around 1.3 GB total. 34 | 35 | 5. The backend server is run using pipenv, so please ensure you have 36 | Python 3.7 and pipenv installed on your computer: 37 | https://www.python.org/downloads/release/python-370/ 38 | https://pypi.org/project/pipenv/ 39 | 40 | 6. Enter the backend directory: `cd backend` 41 | 7. Rename .env.example to .env: `mv .env.example .env` 42 | 8. Run `pipenv --python 3.7` 43 | 9. Run `pipenv install` 44 | 10. Run `pipenv run dev` (ignore any errors related to the GPU) 45 | 11. The backend server is now running on http://localhost:5000 46 | 47 | #### NOTE 48 | 49 | This repo uses a .env file in the backend folder which contains my credentials for the Twitter API. 50 | These credentials will be valid until May 7, 2021. After this date you will need to apply for a Twitter developer 51 | account, create a project, and generate your own credentials to store in the .env file. 52 | https://cran.r-project.org/web/packages/rtweet/vignettes/auth.html 53 | 54 | ### Frontend 55 | 56 | Please make sure you have Node.js with npm and yarn installed and updated to the most 57 | recent stable release. 58 | 59 | Open a terminal. 60 | 61 | #### With yarn 62 | 63 | 1. Enter the frontend directory: `cd frontend` 64 | 2. Run `yarn install` 65 | 3. Run `yarn start` 66 | 4. It might take a couple of minutes to start; if a browser window does not open automatically, navigate to localhost:3000. 67 | 68 | ## Execution 69 | 70 | The first graph is the Financial dashboard. Here the user can select a company and view detailed information about the stock price and sentiment on a particular day. They can see the breakdown of sentiments from tweets that day and can adjust the time period to get a detailed view. 71 | 72 | The second graph is the Prediction graph. Here the user can experiment with different sentiment-related settings to analyze predictions about the future price of a stock. The user can fine-tune sentiment on a week-by-week basis for a certain company and see our predicted average price for that week given the user inputs. The user can also view a graph of our model's predictions over the past year, to inspire confidence in the model, before viewing the forecast. This graph simulates the current date as being the start of January 2020. 73 | 74 | The third graph is the Recent Sentiment graph. It allows the user to enter any stock ticker and see what the sentiment about that ticker has been on Twitter over the past 7 days, giving users an idea of how sentiment looks right now. This graph is limited because we are using the free tier of the Twitter API, which only returns 100 results per request, and we filter by Twitter's definition of “popular” tweets, which have a certain number of favorites. This is done to ensure we are not only analyzing 100 tweets from the current day with little to no activity.
However, expanding this functionality with the correct Twitter API credentials (a pro or educational account) would increase the number of results returned by these queries, though getting the sentiment using our model would then take longer. 75 | 76 | #### Note: to use the third graph, please use UPPERCASE stock symbols with no special characters such as '$' 77 | 78 | ## Instructions for loading data and running training 79 | 80 | All the data is available at this link: https://drive.google.com/drive/folders/1RQlCXTDjg-_fbt9_nhTIWSsGnP4_sh4K?usp=sharing 81 | 82 | To get the data: 83 | 84 | 1. Download the twitter (`company_tweets_2015_to_2019.csv`, e.g. `TSLA_tweets_2015_to_2019.csv`) and reddit data (`company_reddit_2015_to_2019.csv`, e.g. `TSLA_reddit_2015_to_2019.csv`) from Google Drive and put them in the same path as starter_code. 85 | 2. Run `starter_code.py company`, replacing `company` with the stock name. It creates two directories (`twitter/company/` and `reddit/company/`) and adds the data for each year to those directories. 86 | 3. Run `load_data.py` with the following arguments: `load_data(stock_name, stock_path, reddit_path, twitter_path, year_list, return_columns_list)` 87 | 88 | - For example, to get TSLA data for the years 2015 to 2018 with the desired output columns listed below, run `load_data("TSLA", "stock_data/tesla/", "reddit/TSLA/", "twitter/TSLA/", [2015, 2016, 2017, 2018], ['Close','positive_reddit','negative_reddit', 'neutral_reddit', 'count_reddit','comment_num_reddit', 'positive_twitter', 'negative_twitter','neutral_twitter', 'count_twitter', 'comment_num_twitter','retweet_num_twitter', 'like_num_twitter'])` 89 | - If you want all the columns in the returned array, pass an empty list `[]` as `return_columns_list` in `load_data()`. 90 | 91 | Once the data is set up and you know the stock data path, reddit data path, and twitter data path for the companies you want to train models on: 92 | 93 | 1. Use the training command: `python3 main_keras.py --window_size 90 --company TSLA --stockdir stock_data/tesla/ --redditdir reddit/TSLA/ --twitterdir twitter/TSLA/ --trainyears 2015 2016 2017 2018 --testyears 2019`. This example trains a model on TSLA with window size 90 for 2015 - 2018 and evaluates on 2019. 94 | 2. Use the prediction command: `python3 predict_keras.py --window_size 90 --company TSLA --modelpath weights/best_weights_TSLA_wsize90.hdf5 --scpath weights/TSLA_wsize90_sc_dict.p --stockdir stock_data/tesla/ --redditdir reddit/TSLA/ --twitterdir twitter/TSLA/ --years 2019`. Make sure to give the right model path and scaler path for the arguments provided. 95 | 3. 
For predicting into the future, you can use the extrapolate_backend script: `python3 extrapolate_backend.py --window_size 90 --company TSLA --modelpath weights/best_weights_TSLA_wsize90.hdf5 --scpath weights/TSLA_wsize90_sc_dict.p --fdpath Tesla_5_weeks.csv --stockdir stock_data/tesla/ --redditdir reddit/TSLA/ --twitterdir twitter/TSLA/ --years 2019` 96 | 97 | #### Note 98 | 99 | Our D3 visualizations uses code from 100 | https://bl.ocks.org/mbostock/34f08d5e11952a80609169b7917d4172 101 | https://gist.github.com/EfratVil/92f894ac0ba265192411e73f633a3e2f 102 | http://bl.ocks.org/Potherca/b9f8b3d0a24e0b20f16d 103 | http://bl.ocks.org/williaster/10ef968ccfdc71c30ef8 104 | https://github.com/arnauddri/d3-stock 105 | https://www.d3-graph-gallery.com/graph/line_basic.html 106 | https://bl.ocks.org/ProQuestionAsker/8382f70af7f4a7355827c6dc4ee8817d 107 | -------------------------------------------------------------------------------- /backend/.env.example: -------------------------------------------------------------------------------- 1 | consumer_key= 'LYdupuQBh5JIc4jCr32frQoKQ' 2 | consumer_secret= '2PKkzIFBE16W1OaCQweQqBdoAH7xe1LhAuwWhod762w8pFKYub' 3 | access_token= '2410009376-kCT02w7N2Tzarqwcqj65wNrJYXWsPRUfT3LabuT' 4 | access_token_secret= 'ZVMZAw6jJCNB7Y9LAO37IPN72apZPMFxjJe9aaKB61o5L' 5 | TOKENIZERS_PARALLELISM=(true | false) -------------------------------------------------------------------------------- /backend/Apple_90_5weeks.csv: -------------------------------------------------------------------------------- 1 | Week,Reddit Sentiment,Number of Reddit Posts,Twitter Sentiment,Number of Tweets,Tweet Activity 2 | 1,0.5,0.5,1,0.5,0.5 3 | 2,0.5,1,0.5,0.5,0.5 4 | 3,0.5,0.5,1,0.5,0.5 5 | 4,0.5,0.5,0.5,1,1 6 | 5,1,1,0.5,0.5,0.5 7 | -------------------------------------------------------------------------------- /backend/Microsoft_30_5weeks.csv: -------------------------------------------------------------------------------- 1 | Week,Reddit Sentiment,Number of Reddit Posts,Twitter Sentiment,Number of Tweets,Tweet Activity 2 | 1,0.5,0.5,0.5,0.5,0.5 3 | 2,0.5,0.5,0.5,0.5,0.5 4 | 3,0.5,0.5,0.5,0.5,0.5 5 | 4,0.5,0.5,0.5,0.5,0.5 6 | 5,0.5,0.5,0.5,0.5,0.5 7 | -------------------------------------------------------------------------------- /backend/Pipfile: -------------------------------------------------------------------------------- 1 | [[source]] 2 | url = "https://pypi.org/simple" 3 | verify_ssl = true 4 | name = "pypi" 5 | 6 | [packages] 7 | flask = "*" 8 | flask-cors = "*" 9 | tweepy = "*" 10 | pandas = "*" 11 | python-dotenv = "*" 12 | nltk = "*" 13 | transformers = "*" 14 | torch = "*" 15 | matplotlib = "*" 16 | keras = "*" 17 | numpy = "*" 18 | tensorflow = "*" 19 | scikit-learn = "*" 20 | 21 | [dev-packages] 22 | 23 | [requires] 24 | python_version = "3.7" 25 | 26 | [scripts] 27 | dev = "python server.py" 28 | -------------------------------------------------------------------------------- /backend/Pipfile.lock: -------------------------------------------------------------------------------- 1 | { 2 | "_meta": { 3 | "hash": { 4 | "sha256": "9be6c6c709af891552639d272071fe3191ef893671b5044f793e8aeeebd8b8a3" 5 | }, 6 | "pipfile-spec": 6, 7 | "requires": { 8 | "python_version": "3.7" 9 | }, 10 | "sources": [ 11 | { 12 | "name": "pypi", 13 | "url": "https://pypi.org/simple", 14 | "verify_ssl": true 15 | } 16 | ] 17 | }, 18 | "default": { 19 | "absl-py": { 20 | "hashes": [ 21 | "sha256:afe94e3c751ff81aad55d33ab6e630390da32780110b5af72ae81ecff8418d9e", 22 | 
"sha256:b44f68984a5ceb2607d135a615999b93924c771238a63920d17d3387b0d229d5" 23 | ], 24 | "version": "==0.12.0" 25 | }, 26 | "astunparse": { 27 | "hashes": [ 28 | "sha256:5ad93a8456f0d084c3456d059fd9a92cce667963232cbf763eac3bc5b7940872", 29 | "sha256:c2652417f2c8b5bb325c885ae329bdf3f86424075c4fd1a128674bc6fba4b8e8" 30 | ], 31 | "version": "==1.6.3" 32 | }, 33 | "cachetools": { 34 | "hashes": [ 35 | "sha256:1d9d5f567be80f7c07d765e21b814326d78c61eb0c3a637dffc0e5d1796cb2e2", 36 | "sha256:f469e29e7aa4cff64d8de4aad95ce76de8ea1125a16c68e0d93f65c3c3dc92e9" 37 | ], 38 | "markers": "python_version ~= '3.5'", 39 | "version": "==4.2.1" 40 | }, 41 | "certifi": { 42 | "hashes": [ 43 | "sha256:1a4995114262bffbc2413b159f2a1a480c969de6e6eb13ee966d470af86af59c", 44 | "sha256:719a74fb9e33b9bd44cc7f3a8d94bc35e4049deebe19ba7d8e108280cfd59830" 45 | ], 46 | "version": "==2020.12.5" 47 | }, 48 | "chardet": { 49 | "hashes": [ 50 | "sha256:0d6f53a15db4120f2b08c94f11e7d93d2c911ee118b6b30a04ec3ee8310179fa", 51 | "sha256:f864054d66fd9118f2e67044ac8981a54775ec5b67aed0441892edb553d21da5" 52 | ], 53 | "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'", 54 | "version": "==4.0.0" 55 | }, 56 | "click": { 57 | "hashes": [ 58 | "sha256:d2b5255c7c6349bc1bd1e59e08cd12acbbd63ce649f2588755783aa94dfb6b1a", 59 | "sha256:dacca89f4bfadd5de3d7489b7c8a566eee0d3676333fbb50030263894c38c0dc" 60 | ], 61 | "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'", 62 | "version": "==7.1.2" 63 | }, 64 | "cycler": { 65 | "hashes": [ 66 | "sha256:1d8a5ae1ff6c5cf9b93e8811e581232ad8920aeec647c37316ceac982b08cb2d", 67 | "sha256:cd7b2d1018258d7247a71425e9f26463dfb444d411c39569972f4ce586b0c9d8" 68 | ], 69 | "version": "==0.10.0" 70 | }, 71 | "filelock": { 72 | "hashes": [ 73 | "sha256:18d82244ee114f543149c66a6e0c14e9c4f8a1044b5cdaadd0f82159d6a6ff59", 74 | "sha256:929b7d63ec5b7d6b71b0fa5ac14e030b3f70b75747cef1b10da9b879fef15836" 75 | ], 76 | "version": "==3.0.12" 77 | }, 78 | "flask": { 79 | "hashes": [ 80 | "sha256:4efa1ae2d7c9865af48986de8aeb8504bf32c7f3d6fdc9353d34b21f4b127060", 81 | "sha256:8a4fdd8936eba2512e9c85df320a37e694c93945b33ef33c89946a340a238557" 82 | ], 83 | "index": "pypi", 84 | "version": "==1.1.2" 85 | }, 86 | "flask-cors": { 87 | "hashes": [ 88 | "sha256:74efc975af1194fc7891ff5cd85b0f7478be4f7f59fe158102e91abb72bb4438", 89 | "sha256:b60839393f3b84a0f3746f6cdca56c1ad7426aa738b70d6c61375857823181de" 90 | ], 91 | "index": "pypi", 92 | "version": "==3.0.10" 93 | }, 94 | "flatbuffers": { 95 | "hashes": [ 96 | "sha256:63bb9a722d5e373701913e226135b28a6f6ac200d5cc7b4d919fa38d73b44610", 97 | "sha256:9e9ef47fa92625c4721036e7c4124182668dc6021d9e7c73704edd395648deb9" 98 | ], 99 | "version": "==1.12" 100 | }, 101 | "gast": { 102 | "hashes": [ 103 | "sha256:8f46f5be57ae6889a4e16e2ca113b1703ef17f2b0abceb83793eaba9e1351a45", 104 | "sha256:b881ef288a49aa81440d2c5eb8aeefd4c2bb8993d5f50edae7413a85bfdb3b57" 105 | ], 106 | "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'", 107 | "version": "==0.3.3" 108 | }, 109 | "google-auth": { 110 | "hashes": [ 111 | "sha256:010f011c4e27d3d5eb01106fba6aac39d164842dfcd8709955c4638f5b11ccf8", 112 | "sha256:f30a672a64d91cc2e3137765d088c5deec26416246f7a9e956eaf69a8d7ed49c" 113 | ], 114 | "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4, 3.5'", 115 | "version": "==1.29.0" 116 | }, 117 | "google-auth-oauthlib": { 118 | "hashes": [ 119 | 
"sha256:09832c6e75032f93818edf1affe4746121d640c625a5bef9b5c96af676e98eee", 120 | "sha256:0e92aacacfb94978de3b7972cf4b0f204c3cd206f74ddd0dc0b31e91164e6317" 121 | ], 122 | "markers": "python_version >= '3.6'", 123 | "version": "==0.4.4" 124 | }, 125 | "google-pasta": { 126 | "hashes": [ 127 | "sha256:4612951da876b1a10fe3960d7226f0c7682cf901e16ac06e473b267a5afa8954", 128 | "sha256:b32482794a366b5366a32c92a9a9201b107821889935a02b3e51f6b432ea84ed", 129 | "sha256:c9f2c8dfc8f96d0d5808299920721be30c9eec37f2389f28904f454565c8a16e" 130 | ], 131 | "version": "==0.2.0" 132 | }, 133 | "grpcio": { 134 | "hashes": [ 135 | "sha256:01d3046fe980be25796d368f8fc5ff34b7cf5e1444f3789a017a7fe794465639", 136 | "sha256:07b430fa68e5eecd78e2ad529ab80f6a234b55fc1b675fe47335ccbf64c6c6c8", 137 | "sha256:0e3edd8cdb71809d2455b9dbff66b4dd3d36c321e64bfa047da5afdfb0db332b", 138 | "sha256:0f3f09269ffd3fded430cd89ba2397eabbf7e47be93983b25c187cdfebb302a7", 139 | "sha256:1376a60f9bfce781b39973f100b5f67e657b5be479f2fd8a7d2a408fc61c085c", 140 | "sha256:14c0f017bfebbc18139551111ac58ecbde11f4bc375b73a53af38927d60308b6", 141 | "sha256:182c64ade34c341398bf71ec0975613970feb175090760ab4f51d1e9a5424f05", 142 | "sha256:1ada89326a364a299527c7962e5c362dbae58c67b283fe8383c4d952b26565d5", 143 | "sha256:1ce6f5ff4f4a548c502d5237a071fa617115df58ea4b7bd41dac77c1ab126e9c", 144 | "sha256:1d384a61f96a1fc6d5d3e0b62b0a859abc8d4c3f6d16daba51ebf253a3e7df5d", 145 | "sha256:25959a651420dd4a6fd7d3e8dee53f4f5fd8c56336a64963428e78b276389a59", 146 | "sha256:28677f057e2ef11501860a7bc15de12091d40b95dd0fddab3c37ff1542e6b216", 147 | "sha256:378fe80ec5d9353548eb2a8a43ea03747a80f2e387c4f177f2b3ff6c7d898753", 148 | "sha256:3afb058b6929eba07dba9ae6c5b555aa1d88cb140187d78cc510bd72d0329f28", 149 | "sha256:4396b1d0f388ae875eaf6dc05cdcb612c950fd9355bc34d38b90aaa0665a0d4b", 150 | "sha256:4775bc35af9cd3b5033700388deac2e1d611fa45f4a8dcb93667d94cb25f0444", 151 | "sha256:5bddf9d53c8df70061916c3bfd2f468ccf26c348bb0fb6211531d895ed5e4c72", 152 | "sha256:6d869a3e8e62562b48214de95e9231c97c53caa7172802236cd5d60140d7cddd", 153 | "sha256:6f7947dad606c509d067e5b91a92b250aa0530162ab99e4737090f6b17eb12c4", 154 | "sha256:7cda998b7b551503beefc38db9be18c878cfb1596e1418647687575cdefa9273", 155 | "sha256:99bac0e2c820bf446662365df65841f0c2a55b0e2c419db86eaf5d162ddae73e", 156 | "sha256:9c0d8f2346c842088b8cbe3e14985b36e5191a34bf79279ba321a4bf69bd88b7", 157 | "sha256:a8004b34f600a8a51785e46859cd88f3386ef67cccd1cfc7598e3d317608c643", 158 | "sha256:ac7028d363d2395f3d755166d0161556a3f99500a5b44890421ccfaaf2aaeb08", 159 | "sha256:be98e3198ec765d0a1e27f69d760f69374ded8a33b953dcfe790127731f7e690", 160 | "sha256:c31e8a219650ddae1cd02f5a169e1bffe66a429a8255d3ab29e9363c73003b62", 161 | "sha256:c4966d746dccb639ef93f13560acbe9630681c07f2b320b7ec03fe2c8f0a1f15", 162 | "sha256:c58825a3d8634cd634d8f869afddd4d5742bdb59d594aea4cea17b8f39269a55", 163 | "sha256:ce617e1c4a39131f8527964ac9e700eb199484937d7a0b3e52655a3ba50d5fb9", 164 | "sha256:e28e4c0d4231beda5dee94808e3a224d85cbaba3cfad05f2192e6f4ec5318053", 165 | "sha256:e467af6bb8f5843f5a441e124b43474715cfb3981264e7cd227343e826dcc3ce", 166 | "sha256:e6786f6f7be0937614577edcab886ddce91b7c1ea972a07ef9972e9f9ecbbb78", 167 | "sha256:e811ce5c387256609d56559d944a974cc6934a8eea8c76e7c86ec388dc06192d", 168 | "sha256:ec10d5f680b8e95a06f1367d73c5ddcc0ed04a3f38d6e4c9346988fb0cea2ffa", 169 | "sha256:ef9bd7fdfc0a063b4ed0efcab7906df5cae9bbcf79d05c583daa2eba56752b00", 170 | "sha256:f03dfefa9075dd1c6c5cc27b1285c521434643b09338d8b29e1d6a27b386aa82", 171 | 
"sha256:f12900be4c3fd2145ba94ab0d80b7c3d71c9e6414cfee2f31b1c20188b5c281f", 172 | "sha256:f53f2dfc8ff9a58a993e414a016c8b21af333955ae83960454ad91798d467c7b", 173 | "sha256:f7d508691301027033215d3662dab7e178f54d5cca2329f26a71ae175d94b83f" 174 | ], 175 | "version": "==1.32.0" 176 | }, 177 | "h5py": { 178 | "hashes": [ 179 | "sha256:063947eaed5f271679ed4ffa36bb96f57bc14f44dd4336a827d9a02702e6ce6b", 180 | "sha256:13c87efa24768a5e24e360a40e0bc4c49bcb7ce1bb13a3a7f9902cec302ccd36", 181 | "sha256:16ead3c57141101e3296ebeed79c9c143c32bdd0e82a61a2fc67e8e6d493e9d1", 182 | "sha256:3dad1730b6470fad853ef56d755d06bb916ee68a3d8272b3bab0c1ddf83bb99e", 183 | "sha256:51ae56894c6c93159086ffa2c94b5b3388c0400548ab26555c143e7cfa05b8e5", 184 | "sha256:54817b696e87eb9e403e42643305f142cd8b940fe9b3b490bbf98c3b8a894cf4", 185 | "sha256:549ad124df27c056b2e255ea1c44d30fb7a17d17676d03096ad5cd85edb32dc1", 186 | "sha256:64f74da4a1dd0d2042e7d04cf8294e04ddad686f8eba9bb79e517ae582f6668d", 187 | "sha256:6998be619c695910cb0effe5eb15d3a511d3d1a5d217d4bd0bebad1151ec2262", 188 | "sha256:6ef7ab1089e3ef53ca099038f3c0a94d03e3560e6aff0e9d6c64c55fb13fc681", 189 | "sha256:769e141512b54dee14ec76ed354fcacfc7d97fea5a7646b709f7400cf1838630", 190 | "sha256:79b23f47c6524d61f899254f5cd5e486e19868f1823298bc0c29d345c2447172", 191 | "sha256:7be5754a159236e95bd196419485343e2b5875e806fe68919e087b6351f40a70", 192 | "sha256:84412798925dc870ffd7107f045d7659e60f5d46d1c70c700375248bf6bf512d", 193 | "sha256:86868dc07b9cc8cb7627372a2e6636cdc7a53b7e2854ad020c9e9d8a4d3fd0f5", 194 | "sha256:8bb1d2de101f39743f91512a9750fb6c351c032e5cd3204b4487383e34da7f75", 195 | "sha256:a5f82cd4938ff8761d9760af3274acf55afc3c91c649c50ab18fcff5510a14a5", 196 | "sha256:aac4b57097ac29089f179bbc2a6e14102dd210618e94d77ee4831c65f82f17c0", 197 | "sha256:bffbc48331b4a801d2f4b7dac8a72609f0b10e6e516e5c480a3e3241e091c878", 198 | "sha256:c0d4b04bbf96c47b6d360cd06939e72def512b20a18a8547fa4af810258355d5", 199 | "sha256:c54a2c0dd4957776ace7f95879d81582298c5daf89e77fb8bee7378f132951de", 200 | "sha256:cbf28ae4b5af0f05aa6e7551cee304f1d317dbed1eb7ac1d827cee2f1ef97a99", 201 | "sha256:d35f7a3a6cefec82bfdad2785e78359a0e6a5fbb3f605dd5623ce88082ccd681", 202 | "sha256:d3c59549f90a891691991c17f8e58c8544060fdf3ccdea267100fa5f561ff62f", 203 | "sha256:d7ae7a0576b06cb8e8a1c265a8bc4b73d05fdee6429bffc9a26a6eb531e79d72", 204 | "sha256:ecf4d0b56ee394a0984de15bceeb97cbe1fe485f1ac205121293fc44dcf3f31f", 205 | "sha256:f0e25bb91e7a02efccb50aba6591d3fe2c725479e34769802fcdd4076abfa917", 206 | "sha256:f23951a53d18398ef1344c186fb04b26163ca6ce449ebd23404b153fd111ded9", 207 | "sha256:ff7d241f866b718e4584fa95f520cb19405220c501bd3a53ee11871ba5166ea2" 208 | ], 209 | "version": "==2.10.0" 210 | }, 211 | "idna": { 212 | "hashes": [ 213 | "sha256:b307872f855b18632ce0c21c5e45be78c0ea7ae4c15c828c20788b26921eb3f6", 214 | "sha256:b97d804b1e9b523befed77c48dacec60e6dcb0b5391d57af6a65a312a90648c0" 215 | ], 216 | "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'", 217 | "version": "==2.10" 218 | }, 219 | "importlib-metadata": { 220 | "hashes": [ 221 | "sha256:8c501196e49fb9df5df43833bdb1e4328f64847763ec8a50703148b73784d581", 222 | "sha256:d7eb1dea6d6a6086f8be21784cc9e3bcfa55872b52309bc5fad53a8ea444465d" 223 | ], 224 | "markers": "python_version < '3.8' and python_version < '3.8'", 225 | "version": "==4.0.1" 226 | }, 227 | "itsdangerous": { 228 | "hashes": [ 229 | "sha256:321b033d07f2a4136d3ec762eac9f16a10ccd60f53c0c91af90217ace7ba1f19", 230 | 
"sha256:b12271b2047cb23eeb98c8b5622e2e5c5e9abd9784a153e9d8ef9cb4dd09d749" 231 | ], 232 | "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'", 233 | "version": "==1.1.0" 234 | }, 235 | "jinja2": { 236 | "hashes": [ 237 | "sha256:03e47ad063331dd6a3f04a43eddca8a966a26ba0c5b7207a9a9e4e08f1b29419", 238 | "sha256:a6d58433de0ae800347cab1fa3043cebbabe8baa9d29e668f1c768cb87a333c6" 239 | ], 240 | "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'", 241 | "version": "==2.11.3" 242 | }, 243 | "joblib": { 244 | "hashes": [ 245 | "sha256:9c17567692206d2f3fb9ecf5e991084254fe631665c450b443761c4186a613f7", 246 | "sha256:feeb1ec69c4d45129954f1b7034954241eedfd6ba39b5e9e4b6883be3332d5e5" 247 | ], 248 | "markers": "python_version >= '3.6'", 249 | "version": "==1.0.1" 250 | }, 251 | "keras": { 252 | "hashes": [ 253 | "sha256:05e2faf6885f7899482a7d18fc00ba9655fe2c9296a35ad96949a07a9c27d1bb", 254 | "sha256:fedd729b52572fb108a98e3d97e1bac10a81d3917d2103cc20ab2a5f03beb973" 255 | ], 256 | "index": "pypi", 257 | "version": "==2.4.3" 258 | }, 259 | "keras-preprocessing": { 260 | "hashes": [ 261 | "sha256:7b82029b130ff61cc99b55f3bd27427df4838576838c5b2f65940e4fcec99a7b", 262 | "sha256:add82567c50c8bc648c14195bf544a5ce7c1f76761536956c3d2978970179ef3" 263 | ], 264 | "version": "==1.1.2" 265 | }, 266 | "kiwisolver": { 267 | "hashes": [ 268 | "sha256:0cd53f403202159b44528498de18f9285b04482bab2a6fc3f5dd8dbb9352e30d", 269 | "sha256:1e1bc12fb773a7b2ffdeb8380609f4f8064777877b2225dec3da711b421fda31", 270 | "sha256:225e2e18f271e0ed8157d7f4518ffbf99b9450fca398d561eb5c4a87d0986dd9", 271 | "sha256:232c9e11fd7ac3a470d65cd67e4359eee155ec57e822e5220322d7b2ac84fbf0", 272 | "sha256:31dfd2ac56edc0ff9ac295193eeaea1c0c923c0355bf948fbd99ed6018010b72", 273 | "sha256:33449715e0101e4d34f64990352bce4095c8bf13bed1b390773fc0a7295967b3", 274 | "sha256:401a2e9afa8588589775fe34fc22d918ae839aaaf0c0e96441c0fdbce6d8ebe6", 275 | "sha256:44a62e24d9b01ba94ae7a4a6c3fb215dc4af1dde817e7498d901e229aaf50e4e", 276 | "sha256:50af681a36b2a1dee1d3c169ade9fdc59207d3c31e522519181e12f1b3ba7000", 277 | "sha256:563c649cfdef27d081c84e72a03b48ea9408c16657500c312575ae9d9f7bc1c3", 278 | "sha256:5989db3b3b34b76c09253deeaf7fbc2707616f130e166996606c284395da3f18", 279 | "sha256:5a7a7dbff17e66fac9142ae2ecafb719393aaee6a3768c9de2fd425c63b53e21", 280 | "sha256:5c3e6455341008a054cccee8c5d24481bcfe1acdbc9add30aa95798e95c65621", 281 | "sha256:5f6ccd3dd0b9739edcf407514016108e2280769c73a85b9e59aa390046dbf08b", 282 | "sha256:72c99e39d005b793fb7d3d4e660aed6b6281b502e8c1eaf8ee8346023c8e03bc", 283 | "sha256:78751b33595f7f9511952e7e60ce858c6d64db2e062afb325985ddbd34b5c131", 284 | "sha256:834ee27348c4aefc20b479335fd422a2c69db55f7d9ab61721ac8cd83eb78882", 285 | "sha256:8be8d84b7d4f2ba4ffff3665bcd0211318aa632395a1a41553250484a871d454", 286 | "sha256:950a199911a8d94683a6b10321f9345d5a3a8433ec58b217ace979e18f16e248", 287 | "sha256:a357fd4f15ee49b4a98b44ec23a34a95f1e00292a139d6015c11f55774ef10de", 288 | "sha256:a53d27d0c2a0ebd07e395e56a1fbdf75ffedc4a05943daf472af163413ce9598", 289 | "sha256:acef3d59d47dd85ecf909c359d0fd2c81ed33bdff70216d3956b463e12c38a54", 290 | "sha256:b38694dcdac990a743aa654037ff1188c7a9801ac3ccc548d3341014bc5ca278", 291 | "sha256:b9edd0110a77fc321ab090aaa1cfcaba1d8499850a12848b81be2222eab648f6", 292 | "sha256:c08e95114951dc2090c4a630c2385bef681cacf12636fb0241accdc6b303fd81", 293 | "sha256:c5518d51a0735b1e6cee1fdce66359f8d2b59c3ca85dc2b0813a8aa86818a030", 294 | 
"sha256:c8fd0f1ae9d92b42854b2979024d7597685ce4ada367172ed7c09edf2cef9cb8", 295 | "sha256:ca3820eb7f7faf7f0aa88de0e54681bddcb46e485beb844fcecbcd1c8bd01689", 296 | "sha256:cf8b574c7b9aa060c62116d4181f3a1a4e821b2ec5cbfe3775809474113748d4", 297 | "sha256:d3155d828dec1d43283bd24d3d3e0d9c7c350cdfcc0bd06c0ad1209c1bbc36d0", 298 | "sha256:f8d6f8db88049a699817fd9178782867bf22283e3813064302ac59f61d95be05", 299 | "sha256:fd34fbbfbc40628200730bc1febe30631347103fc8d3d4fa012c21ab9c11eca9" 300 | ], 301 | "markers": "python_version >= '3.6'", 302 | "version": "==1.3.1" 303 | }, 304 | "markdown": { 305 | "hashes": [ 306 | "sha256:31b5b491868dcc87d6c24b7e3d19a0d730d59d3e46f4eea6430a321bed387a49", 307 | "sha256:96c3ba1261de2f7547b46a00ea8463832c921d3f9d6aba3f255a6f71386db20c" 308 | ], 309 | "markers": "python_version >= '3.6'", 310 | "version": "==3.3.4" 311 | }, 312 | "markupsafe": { 313 | "hashes": [ 314 | "sha256:00bc623926325b26bb9605ae9eae8a215691f33cae5df11ca5424f06f2d1f473", 315 | "sha256:09027a7803a62ca78792ad89403b1b7a73a01c8cb65909cd876f7fcebd79b161", 316 | "sha256:09c4b7f37d6c648cb13f9230d847adf22f8171b1ccc4d5682398e77f40309235", 317 | "sha256:1027c282dad077d0bae18be6794e6b6b8c91d58ed8a8d89a89d59693b9131db5", 318 | "sha256:13d3144e1e340870b25e7b10b98d779608c02016d5184cfb9927a9f10c689f42", 319 | "sha256:195d7d2c4fbb0ee8139a6cf67194f3973a6b3042d742ebe0a9ed36d8b6f0c07f", 320 | "sha256:22c178a091fc6630d0d045bdb5992d2dfe14e3259760e713c490da5323866c39", 321 | "sha256:24982cc2533820871eba85ba648cd53d8623687ff11cbb805be4ff7b4c971aff", 322 | "sha256:29872e92839765e546828bb7754a68c418d927cd064fd4708fab9fe9c8bb116b", 323 | "sha256:2beec1e0de6924ea551859edb9e7679da6e4870d32cb766240ce17e0a0ba2014", 324 | "sha256:3b8a6499709d29c2e2399569d96719a1b21dcd94410a586a18526b143ec8470f", 325 | "sha256:43a55c2930bbc139570ac2452adf3d70cdbb3cfe5912c71cdce1c2c6bbd9c5d1", 326 | "sha256:46c99d2de99945ec5cb54f23c8cd5689f6d7177305ebff350a58ce5f8de1669e", 327 | "sha256:500d4957e52ddc3351cabf489e79c91c17f6e0899158447047588650b5e69183", 328 | "sha256:535f6fc4d397c1563d08b88e485c3496cf5784e927af890fb3c3aac7f933ec66", 329 | "sha256:596510de112c685489095da617b5bcbbac7dd6384aeebeda4df6025d0256a81b", 330 | "sha256:62fe6c95e3ec8a7fad637b7f3d372c15ec1caa01ab47926cfdf7a75b40e0eac1", 331 | "sha256:6788b695d50a51edb699cb55e35487e430fa21f1ed838122d722e0ff0ac5ba15", 332 | "sha256:6dd73240d2af64df90aa7c4e7481e23825ea70af4b4922f8ede5b9e35f78a3b1", 333 | "sha256:6f1e273a344928347c1290119b493a1f0303c52f5a5eae5f16d74f48c15d4a85", 334 | "sha256:6fffc775d90dcc9aed1b89219549b329a9250d918fd0b8fa8d93d154918422e1", 335 | "sha256:717ba8fe3ae9cc0006d7c451f0bb265ee07739daf76355d06366154ee68d221e", 336 | "sha256:79855e1c5b8da654cf486b830bd42c06e8780cea587384cf6545b7d9ac013a0b", 337 | "sha256:7c1699dfe0cf8ff607dbdcc1e9b9af1755371f92a68f706051cc8c37d447c905", 338 | "sha256:7fed13866cf14bba33e7176717346713881f56d9d2bcebab207f7a036f41b850", 339 | "sha256:84dee80c15f1b560d55bcfe6d47b27d070b4681c699c572af2e3c7cc90a3b8e0", 340 | "sha256:88e5fcfb52ee7b911e8bb6d6aa2fd21fbecc674eadd44118a9cc3863f938e735", 341 | "sha256:8defac2f2ccd6805ebf65f5eeb132adcf2ab57aa11fdf4c0dd5169a004710e7d", 342 | "sha256:98bae9582248d6cf62321dcb52aaf5d9adf0bad3b40582925ef7c7f0ed85fceb", 343 | "sha256:98c7086708b163d425c67c7a91bad6e466bb99d797aa64f965e9d25c12111a5e", 344 | "sha256:9add70b36c5666a2ed02b43b335fe19002ee5235efd4b8a89bfcf9005bebac0d", 345 | "sha256:9bf40443012702a1d2070043cb6291650a0841ece432556f784f004937f0f32c", 346 | 
"sha256:a6a744282b7718a2a62d2ed9d993cad6f5f585605ad352c11de459f4108df0a1", 347 | "sha256:acf08ac40292838b3cbbb06cfe9b2cb9ec78fce8baca31ddb87aaac2e2dc3bc2", 348 | "sha256:ade5e387d2ad0d7ebf59146cc00c8044acbd863725f887353a10df825fc8ae21", 349 | "sha256:b00c1de48212e4cc9603895652c5c410df699856a2853135b3967591e4beebc2", 350 | "sha256:b1282f8c00509d99fef04d8ba936b156d419be841854fe901d8ae224c59f0be5", 351 | "sha256:b1dba4527182c95a0db8b6060cc98ac49b9e2f5e64320e2b56e47cb2831978c7", 352 | "sha256:b2051432115498d3562c084a49bba65d97cf251f5a331c64a12ee7e04dacc51b", 353 | "sha256:b7d644ddb4dbd407d31ffb699f1d140bc35478da613b441c582aeb7c43838dd8", 354 | "sha256:ba59edeaa2fc6114428f1637ffff42da1e311e29382d81b339c1817d37ec93c6", 355 | "sha256:bf5aa3cbcfdf57fa2ee9cd1822c862ef23037f5c832ad09cfea57fa846dec193", 356 | "sha256:c8716a48d94b06bb3b2524c2b77e055fb313aeb4ea620c8dd03a105574ba704f", 357 | "sha256:caabedc8323f1e93231b52fc32bdcde6db817623d33e100708d9a68e1f53b26b", 358 | "sha256:cd5df75523866410809ca100dc9681e301e3c27567cf498077e8551b6d20e42f", 359 | "sha256:cdb132fc825c38e1aeec2c8aa9338310d29d337bebbd7baa06889d09a60a1fa2", 360 | "sha256:d53bc011414228441014aa71dbec320c66468c1030aae3a6e29778a3382d96e5", 361 | "sha256:d73a845f227b0bfe8a7455ee623525ee656a9e2e749e4742706d80a6065d5e2c", 362 | "sha256:d9be0ba6c527163cbed5e0857c451fcd092ce83947944d6c14bc95441203f032", 363 | "sha256:e249096428b3ae81b08327a63a485ad0878de3fb939049038579ac0ef61e17e7", 364 | "sha256:e8313f01ba26fbbe36c7be1966a7b7424942f670f38e666995b88d012765b9be", 365 | "sha256:feb7b34d6325451ef96bc0e36e1a6c0c1c64bc1fbec4b854f4529e51887b1621" 366 | ], 367 | "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'", 368 | "version": "==1.1.1" 369 | }, 370 | "matplotlib": { 371 | "hashes": [ 372 | "sha256:1f83a32e4b6045191f9d34e4dc68c0a17c870b57ef9cca518e516da591246e79", 373 | "sha256:2eee37340ca1b353e0a43a33da79d0cd4bcb087064a0c3c3d1329cdea8fbc6f3", 374 | "sha256:53ceb12ef44f8982b45adc7a0889a7e2df1d758e8b360f460e435abe8a8cd658", 375 | "sha256:574306171b84cd6854c83dc87bc353cacc0f60184149fb00c9ea871eca8c1ecb", 376 | "sha256:7561fd541477d41f3aa09457c434dd1f7604f3bd26d7858d52018f5dfe1c06d1", 377 | "sha256:7a54efd6fcad9cb3cd5ef2064b5a3eeb0b63c99f26c346bdcf66e7c98294d7cc", 378 | "sha256:7f16660edf9a8bcc0f766f51c9e1b9d2dc6ceff6bf636d2dbd8eb925d5832dfd", 379 | "sha256:81e6fe8b18ef5be67f40a1d4f07d5a4ed21d3878530193898449ddef7793952f", 380 | "sha256:84a10e462120aa7d9eb6186b50917ed5a6286ee61157bfc17c5b47987d1a9068", 381 | "sha256:84d4c4f650f356678a5d658a43ca21a41fca13f9b8b00169c0b76e6a6a948908", 382 | "sha256:86dc94e44403fa0f2b1dd76c9794d66a34e821361962fe7c4e078746362e3b14", 383 | "sha256:90dbc007f6389bcfd9ef4fe5d4c78c8d2efe4e0ebefd48b4f221cdfed5672be2", 384 | "sha256:9f374961a3996c2d1b41ba3145462c3708a89759e604112073ed6c8bdf9f622f", 385 | "sha256:a18cc1ab4a35b845cf33b7880c979f5c609fd26c2d6e74ddfacb73dcc60dd956", 386 | "sha256:a97781453ac79409ddf455fccf344860719d95142f9c334f2a8f3fff049ffec3", 387 | "sha256:a989022f89cda417f82dbf65e0a830832afd8af743d05d1414fb49549287ff04", 388 | "sha256:ac2a30a09984c2719f112a574b6543ccb82d020fd1b23b4d55bf4759ba8dd8f5", 389 | "sha256:be4430b33b25e127fc4ea239cc386389de420be4d63e71d5359c20b562951ce1", 390 | "sha256:c45e7bf89ea33a2adaef34774df4e692c7436a18a48bcb0e47a53e698a39fa39" 391 | ], 392 | "index": "pypi", 393 | "version": "==3.4.1" 394 | }, 395 | "nltk": { 396 | "hashes": [ 397 | "sha256:240e23ab1ab159ef9940777d30c7c72d7e76d91877099218a7585370c11f6b9e", 398 | 
"sha256:57d556abed621ab9be225cc6d2df1edce17572efb67a3d754630c9f8381503eb" 399 | ], 400 | "index": "pypi", 401 | "version": "==3.6.2" 402 | }, 403 | "numpy": { 404 | "hashes": [ 405 | "sha256:012426a41bc9ab63bb158635aecccc7610e3eff5d31d1eb43bc099debc979d94", 406 | "sha256:06fab248a088e439402141ea04f0fffb203723148f6ee791e9c75b3e9e82f080", 407 | "sha256:0eef32ca3132a48e43f6a0f5a82cb508f22ce5a3d6f67a8329c81c8e226d3f6e", 408 | "sha256:1ded4fce9cfaaf24e7a0ab51b7a87be9038ea1ace7f34b841fe3b6894c721d1c", 409 | "sha256:2e55195bc1c6b705bfd8ad6f288b38b11b1af32f3c8289d6c50d47f950c12e76", 410 | "sha256:2ea52bd92ab9f768cc64a4c3ef8f4b2580a17af0a5436f6126b08efbd1838371", 411 | "sha256:36674959eed6957e61f11c912f71e78857a8d0604171dfd9ce9ad5cbf41c511c", 412 | "sha256:384ec0463d1c2671170901994aeb6dce126de0a95ccc3976c43b0038a37329c2", 413 | "sha256:39b70c19ec771805081578cc936bbe95336798b7edf4732ed102e7a43ec5c07a", 414 | "sha256:400580cbd3cff6ffa6293df2278c75aef2d58d8d93d3c5614cd67981dae68ceb", 415 | "sha256:43d4c81d5ffdff6bae58d66a3cd7f54a7acd9a0e7b18d97abb255defc09e3140", 416 | "sha256:50a4a0ad0111cc1b71fa32dedd05fa239f7fb5a43a40663269bb5dc7877cfd28", 417 | "sha256:603aa0706be710eea8884af807b1b3bc9fb2e49b9f4da439e76000f3b3c6ff0f", 418 | "sha256:6149a185cece5ee78d1d196938b2a8f9d09f5a5ebfbba66969302a778d5ddd1d", 419 | "sha256:759e4095edc3c1b3ac031f34d9459fa781777a93ccc633a472a5468587a190ff", 420 | "sha256:7fb43004bce0ca31d8f13a6eb5e943fa73371381e53f7074ed21a4cb786c32f8", 421 | "sha256:811daee36a58dc79cf3d8bdd4a490e4277d0e4b7d103a001a4e73ddb48e7e6aa", 422 | "sha256:8b5e972b43c8fc27d56550b4120fe6257fdc15f9301914380b27f74856299fea", 423 | "sha256:99abf4f353c3d1a0c7a5f27699482c987cf663b1eac20db59b8c7b061eabd7fc", 424 | "sha256:a0d53e51a6cb6f0d9082decb7a4cb6dfb33055308c4c44f53103c073f649af73", 425 | "sha256:a12ff4c8ddfee61f90a1633a4c4afd3f7bcb32b11c52026c92a12e1325922d0d", 426 | "sha256:a4646724fba402aa7504cd48b4b50e783296b5e10a524c7a6da62e4a8ac9698d", 427 | "sha256:a76f502430dd98d7546e1ea2250a7360c065a5fdea52b2dffe8ae7180909b6f4", 428 | "sha256:a9d17f2be3b427fbb2bce61e596cf555d6f8a56c222bd2ca148baeeb5e5c783c", 429 | "sha256:ab83f24d5c52d60dbc8cd0528759532736b56db58adaa7b5f1f76ad551416a1e", 430 | "sha256:aeb9ed923be74e659984e321f609b9ba54a48354bfd168d21a2b072ed1e833ea", 431 | "sha256:c843b3f50d1ab7361ca4f0b3639bf691569493a56808a0b0c54a051d260b7dbd", 432 | "sha256:cae865b1cae1ec2663d8ea56ef6ff185bad091a5e33ebbadd98de2cfa3fa668f", 433 | "sha256:cc6bd4fd593cb261332568485e20a0712883cf631f6f5e8e86a52caa8b2b50ff", 434 | "sha256:cf2402002d3d9f91c8b01e66fbb436a4ed01c6498fffed0e4c7566da1d40ee1e", 435 | "sha256:d051ec1c64b85ecc69531e1137bb9751c6830772ee5c1c426dbcfe98ef5788d7", 436 | "sha256:d6631f2e867676b13026e2846180e2c13c1e11289d67da08d71cacb2cd93d4aa", 437 | "sha256:dbd18bcf4889b720ba13a27ec2f2aac1981bd41203b3a3b27ba7a33f88ae4827", 438 | "sha256:df609c82f18c5b9f6cb97271f03315ff0dbe481a2a02e56aeb1b1a985ce38e60" 439 | ], 440 | "index": "pypi", 441 | "version": "==1.19.5" 442 | }, 443 | "oauthlib": { 444 | "hashes": [ 445 | "sha256:bee41cc35fcca6e988463cacc3bcb8a96224f470ca547e697b604cc697b2f889", 446 | "sha256:df884cd6cbe20e32633f1db1072e9356f53638e4361bef4e8b03c9127c9328ea" 447 | ], 448 | "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'", 449 | "version": "==3.1.0" 450 | }, 451 | "opt-einsum": { 452 | "hashes": [ 453 | "sha256:2455e59e3947d3c275477df7f5205b30635e266fe6dc300e3d9f9646bfcea147", 454 | "sha256:59f6475f77bbc37dcf7cd748519c0ec60722e91e63ca114e68821c0c54a46549" 455 | ], 456 | 
"markers": "python_version >= '3.5'", 457 | "version": "==3.3.0" 458 | }, 459 | "packaging": { 460 | "hashes": [ 461 | "sha256:5b327ac1320dc863dca72f4514ecc086f31186744b84a230374cc1fd776feae5", 462 | "sha256:67714da7f7bc052e064859c05c595155bd1ee9f69f76557e21f051443c20947a" 463 | ], 464 | "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'", 465 | "version": "==20.9" 466 | }, 467 | "pandas": { 468 | "hashes": [ 469 | "sha256:167693a80abc8eb28051fbd184c1b7afd13ce2c727a5af47b048f1ea3afefff4", 470 | "sha256:2111c25e69fa9365ba80bbf4f959400054b2771ac5d041ed19415a8b488dc70a", 471 | "sha256:298f0553fd3ba8e002c4070a723a59cdb28eda579f3e243bc2ee397773f5398b", 472 | "sha256:2b063d41803b6a19703b845609c0b700913593de067b552a8b24dd8eeb8c9895", 473 | "sha256:2cb7e8f4f152f27dc93f30b5c7a98f6c748601ea65da359af734dd0cf3fa733f", 474 | "sha256:52d2472acbb8a56819a87aafdb8b5b6d2b3386e15c95bde56b281882529a7ded", 475 | "sha256:612add929bf3ba9d27b436cc8853f5acc337242d6b584203f207e364bb46cb12", 476 | "sha256:649ecab692fade3cbfcf967ff936496b0cfba0af00a55dfaacd82bdda5cb2279", 477 | "sha256:68d7baa80c74aaacbed597265ca2308f017859123231542ff8a5266d489e1858", 478 | "sha256:8d4c74177c26aadcfb4fd1de6c1c43c2bf822b3e0fc7a9b409eeaf84b3e92aaa", 479 | "sha256:971e2a414fce20cc5331fe791153513d076814d30a60cd7348466943e6e909e4", 480 | "sha256:9db70ffa8b280bb4de83f9739d514cd0735825e79eef3a61d312420b9f16b758", 481 | "sha256:b730add5267f873b3383c18cac4df2527ac4f0f0eed1c6cf37fcb437e25cf558", 482 | "sha256:bd659c11a4578af740782288cac141a322057a2e36920016e0fc7b25c5a4b686", 483 | "sha256:c601c6fdebc729df4438ec1f62275d6136a0dd14d332fc0e8ce3f7d2aadb4dd6", 484 | "sha256:d0877407359811f7b853b548a614aacd7dea83b0c0c84620a9a643f180060950" 485 | ], 486 | "index": "pypi", 487 | "version": "==1.2.4" 488 | }, 489 | "pillow": { 490 | "hashes": [ 491 | "sha256:01425106e4e8cee195a411f729cff2a7d61813b0b11737c12bd5991f5f14bcd5", 492 | "sha256:031a6c88c77d08aab84fecc05c3cde8414cd6f8406f4d2b16fed1e97634cc8a4", 493 | "sha256:083781abd261bdabf090ad07bb69f8f5599943ddb539d64497ed021b2a67e5a9", 494 | "sha256:0d19d70ee7c2ba97631bae1e7d4725cdb2ecf238178096e8c82ee481e189168a", 495 | "sha256:0e04d61f0064b545b989126197930807c86bcbd4534d39168f4aa5fda39bb8f9", 496 | "sha256:12e5e7471f9b637762453da74e390e56cc43e486a88289995c1f4c1dc0bfe727", 497 | "sha256:22fd0f42ad15dfdde6c581347eaa4adb9a6fc4b865f90b23378aa7914895e120", 498 | "sha256:238c197fc275b475e87c1453b05b467d2d02c2915fdfdd4af126145ff2e4610c", 499 | "sha256:3b570f84a6161cf8865c4e08adf629441f56e32f180f7aa4ccbd2e0a5a02cba2", 500 | "sha256:463822e2f0d81459e113372a168f2ff59723e78528f91f0bd25680ac185cf797", 501 | "sha256:4d98abdd6b1e3bf1a1cbb14c3895226816e666749ac040c4e2554231068c639b", 502 | "sha256:5afe6b237a0b81bd54b53f835a153770802f164c5570bab5e005aad693dab87f", 503 | "sha256:5b70110acb39f3aff6b74cf09bb4169b167e2660dabc304c1e25b6555fa781ef", 504 | "sha256:5cbf3e3b1014dddc45496e8cf38b9f099c95a326275885199f427825c6522232", 505 | "sha256:624b977355cde8b065f6d51b98497d6cd5fbdd4f36405f7a8790e3376125e2bb", 506 | "sha256:63728564c1410d99e6d1ae8e3b810fe012bc440952168af0a2877e8ff5ab96b9", 507 | "sha256:66cc56579fd91f517290ab02c51e3a80f581aba45fd924fcdee01fa06e635812", 508 | "sha256:6c32cc3145928c4305d142ebec682419a6c0a8ce9e33db900027ddca1ec39178", 509 | "sha256:8bb1e155a74e1bfbacd84555ea62fa21c58e0b4e7e6b20e4447b8d07990ac78b", 510 | "sha256:95d5ef984eff897850f3a83883363da64aae1000e79cb3c321915468e8c6add5", 511 | "sha256:a013cbe25d20c2e0c4e85a9daf438f85121a4d0344ddc76e33fd7e3965d9af4b", 512 | 
"sha256:a787ab10d7bb5494e5f76536ac460741788f1fbce851068d73a87ca7c35fc3e1", 513 | "sha256:a7d5e9fad90eff8f6f6106d3b98b553a88b6f976e51fce287192a5d2d5363713", 514 | "sha256:aac00e4bc94d1b7813fe882c28990c1bc2f9d0e1aa765a5f2b516e8a6a16a9e4", 515 | "sha256:b91c36492a4bbb1ee855b7d16fe51379e5f96b85692dc8210831fbb24c43e484", 516 | "sha256:c03c07ed32c5324939b19e36ae5f75c660c81461e312a41aea30acdd46f93a7c", 517 | "sha256:c5236606e8570542ed424849f7852a0ff0bce2c4c8d0ba05cc202a5a9c97dee9", 518 | "sha256:c6b39294464b03457f9064e98c124e09008b35a62e3189d3513e5148611c9388", 519 | "sha256:cb7a09e173903541fa888ba010c345893cd9fc1b5891aaf060f6ca77b6a3722d", 520 | "sha256:d68cb92c408261f806b15923834203f024110a2e2872ecb0bd2a110f89d3c602", 521 | "sha256:dc38f57d8f20f06dd7c3161c59ca2c86893632623f33a42d592f097b00f720a9", 522 | "sha256:e98eca29a05913e82177b3ba3d198b1728e164869c613d76d0de4bde6768a50e", 523 | "sha256:f217c3954ce5fd88303fc0c317af55d5e0204106d86dea17eb8205700d47dec2" 524 | ], 525 | "markers": "python_version >= '3.6'", 526 | "version": "==8.2.0" 527 | }, 528 | "protobuf": { 529 | "hashes": [ 530 | "sha256:0277f62b1e42210cafe79a71628c1d553348da81cbd553402a7f7549c50b11d0", 531 | "sha256:07eec4e2ccbc74e95bb9b3afe7da67957947ee95bdac2b2e91b038b832dd71f0", 532 | "sha256:1c0e9e56202b9dccbc094353285a252e2b7940b74fdf75f1b4e1b137833fabd7", 533 | "sha256:1f0b5d156c3df08cc54bc2c8b8b875648ea4cd7ebb2a9a130669f7547ec3488c", 534 | "sha256:2dc0e8a9e4962207bdc46a365b63a3f1aca6f9681a5082a326c5837ef8f4b745", 535 | "sha256:3053f13207e7f13dc7be5e9071b59b02020172f09f648e85dc77e3fcb50d1044", 536 | "sha256:4a054b0b5900b7ea7014099e783fb8c4618e4209fffcd6050857517b3f156e18", 537 | "sha256:510e66491f1a5ac5953c908aa8300ec47f793130097e4557482803b187a8ee05", 538 | "sha256:5ff9fa0e67fcab442af9bc8d4ec3f82cb2ff3be0af62dba047ed4187f0088b7d", 539 | "sha256:90270fe5732c1f1ff664a3bd7123a16456d69b4e66a09a139a00443a32f210b8", 540 | "sha256:a0a08c6b2e6d6c74a6eb5bf6184968eefb1569279e78714e239d33126e753403", 541 | "sha256:c5566f956a26cda3abdfacc0ca2e21db6c9f3d18f47d8d4751f2209d6c1a5297", 542 | "sha256:dab75b56a12b1ceb3e40808b5bd9dfdaef3a1330251956e6744e5b6ed8f8830b", 543 | "sha256:efa4c4d4fc9ba734e5e85eaced70e1b63fb3c8d08482d839eb838566346f1737", 544 | "sha256:f17b352d7ce33c81773cf81d536ca70849de6f73c96413f17309f4b43ae7040b", 545 | "sha256:f42c2f5fb67da5905bfc03733a311f72fa309252bcd77c32d1462a1ad519521e", 546 | "sha256:f6077db37bfa16494dca58a4a02bfdacd87662247ad6bc1f7f8d13ff3f0013e1", 547 | "sha256:f80afc0a0ba13339bbab25ca0409e9e2836b12bb012364c06e97c2df250c3343", 548 | "sha256:f9cadaaa4065d5dd4d15245c3b68b967b3652a3108e77f292b58b8c35114b56c", 549 | "sha256:fad4f971ec38d8df7f4b632c819bf9bbf4f57cfd7312cf526c69ce17ef32436a" 550 | ], 551 | "version": "==3.15.8" 552 | }, 553 | "pyasn1": { 554 | "hashes": [ 555 | "sha256:014c0e9976956a08139dc0712ae195324a75e142284d5f87f1a87ee1b068a359", 556 | "sha256:03840c999ba71680a131cfaee6fab142e1ed9bbd9c693e285cc6aca0d555e576", 557 | "sha256:0458773cfe65b153891ac249bcf1b5f8f320b7c2ce462151f8fa74de8934becf", 558 | "sha256:08c3c53b75eaa48d71cf8c710312316392ed40899cb34710d092e96745a358b7", 559 | "sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d", 560 | "sha256:5c9414dcfede6e441f7e8f81b43b34e834731003427e5b09e4e00e3172a10f00", 561 | "sha256:6e7545f1a61025a4e58bb336952c5061697da694db1cae97b116e9c46abcf7c8", 562 | "sha256:78fa6da68ed2727915c4767bb386ab32cdba863caa7dbe473eaae45f9959da86", 563 | "sha256:7ab8a544af125fb704feadb008c99a88805126fb525280b2270bb25cc1d78a12", 564 | 
"sha256:99fcc3c8d804d1bc6d9a099921e39d827026409a58f2a720dcdb89374ea0c776", 565 | "sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba", 566 | "sha256:e89bf84b5437b532b0803ba5c9a5e054d21fec423a89952a74f87fa2c9b7bce2", 567 | "sha256:fec3e9d8e36808a28efb59b489e4528c10ad0f480e57dcc32b4de5c9d8c9fdf3" 568 | ], 569 | "version": "==0.4.8" 570 | }, 571 | "pyasn1-modules": { 572 | "hashes": [ 573 | "sha256:0845a5582f6a02bb3e1bde9ecfc4bfcae6ec3210dd270522fee602365430c3f8", 574 | "sha256:0fe1b68d1e486a1ed5473f1302bd991c1611d319bba158e98b106ff86e1d7199", 575 | "sha256:15b7c67fabc7fc240d87fb9aabf999cf82311a6d6fb2c70d00d3d0604878c811", 576 | "sha256:426edb7a5e8879f1ec54a1864f16b882c2837bfd06eee62f2c982315ee2473ed", 577 | "sha256:65cebbaffc913f4fe9e4808735c95ea22d7a7775646ab690518c056784bc21b4", 578 | "sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e", 579 | "sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74", 580 | "sha256:a99324196732f53093a84c4369c996713eb8c89d360a496b599fb1a9c47fc3eb", 581 | "sha256:b80486a6c77252ea3a3e9b1e360bc9cf28eaac41263d173c032581ad2f20fe45", 582 | "sha256:c29a5e5cc7a3f05926aff34e097e84f8589cd790ce0ed41b67aed6857b26aafd", 583 | "sha256:cbac4bc38d117f2a49aeedec4407d23e8866ea4ac27ff2cf7fb3e5b570df19e0", 584 | "sha256:f39edd8c4ecaa4556e989147ebf219227e2cd2e8a43c7e7fcb1f1c18c5fd6a3d", 585 | "sha256:fe0644d9ab041506b62782e92b06b8c68cca799e1a9636ec398675459e031405" 586 | ], 587 | "version": "==0.2.8" 588 | }, 589 | "pyparsing": { 590 | "hashes": [ 591 | "sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1", 592 | "sha256:ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b" 593 | ], 594 | "markers": "python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2'", 595 | "version": "==2.4.7" 596 | }, 597 | "pysocks": { 598 | "hashes": [ 599 | "sha256:08e69f092cc6dbe92a0fdd16eeb9b9ffbc13cadfe5ca4c7bd92ffb078b293299", 600 | "sha256:2725bd0a9925919b9b51739eea5f9e2bae91e83288108a9ad338b2e3a4435ee5", 601 | "sha256:3f8804571ebe159c380ac6de37643bb4685970655d3bba243530d6558b799aa0" 602 | ], 603 | "version": "==1.7.1" 604 | }, 605 | "python-dateutil": { 606 | "hashes": [ 607 | "sha256:73ebfe9dbf22e832286dafa60473e4cd239f8592f699aa5adaf10050e6e1823c", 608 | "sha256:75bb3f31ea686f1197762692a9ee6a7550b59fc6ca3a1f4b5d7e32fb98e2da2a" 609 | ], 610 | "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2'", 611 | "version": "==2.8.1" 612 | }, 613 | "python-dotenv": { 614 | "hashes": [ 615 | "sha256:471b782da0af10da1a80341e8438fca5fadeba2881c54360d5fd8d03d03a4f4a", 616 | "sha256:49782a97c9d641e8a09ae1d9af0856cc587c8d2474919342d5104d85be9890b2" 617 | ], 618 | "index": "pypi", 619 | "version": "==0.17.0" 620 | }, 621 | "pytz": { 622 | "hashes": [ 623 | "sha256:83a4a90894bf38e243cf052c8b58f381bfe9a7a483f6a9cab140bc7f702ac4da", 624 | "sha256:eb10ce3e7736052ed3623d49975ce333bcd712c7bb19a58b9e2089d4057d0798" 625 | ], 626 | "version": "==2021.1" 627 | }, 628 | "pyyaml": { 629 | "hashes": [ 630 | "sha256:08682f6b72c722394747bddaf0aa62277e02557c0fd1c42cb853016a38f8dedf", 631 | "sha256:0f5f5786c0e09baddcd8b4b45f20a7b5d61a7e7e99846e3c799b05c7c53fa696", 632 | "sha256:129def1b7c1bf22faffd67b8f3724645203b79d8f4cc81f674654d9902cb4393", 633 | "sha256:294db365efa064d00b8d1ef65d8ea2c3426ac366c0c4368d930bf1c5fb497f77", 634 | "sha256:3b2b1824fe7112845700f815ff6a489360226a5609b96ec2190a45e62a9fc922", 635 | "sha256:3bd0e463264cf257d1ffd2e40223b197271046d09dadf73a0fe82b9c1fc385a5", 636 | 
"sha256:4465124ef1b18d9ace298060f4eccc64b0850899ac4ac53294547536533800c8", 637 | "sha256:49d4cdd9065b9b6e206d0595fee27a96b5dd22618e7520c33204a4a3239d5b10", 638 | "sha256:4e0583d24c881e14342eaf4ec5fbc97f934b999a6828693a99157fde912540cc", 639 | "sha256:5accb17103e43963b80e6f837831f38d314a0495500067cb25afab2e8d7a4018", 640 | "sha256:607774cbba28732bfa802b54baa7484215f530991055bb562efbed5b2f20a45e", 641 | "sha256:6c78645d400265a062508ae399b60b8c167bf003db364ecb26dcab2bda048253", 642 | "sha256:72a01f726a9c7851ca9bfad6fd09ca4e090a023c00945ea05ba1638c09dc3347", 643 | "sha256:74c1485f7707cf707a7aef42ef6322b8f97921bd89be2ab6317fd782c2d53183", 644 | "sha256:895f61ef02e8fed38159bb70f7e100e00f471eae2bc838cd0f4ebb21e28f8541", 645 | "sha256:8c1be557ee92a20f184922c7b6424e8ab6691788e6d86137c5d93c1a6ec1b8fb", 646 | "sha256:bb4191dfc9306777bc594117aee052446b3fa88737cd13b7188d0e7aa8162185", 647 | "sha256:bfb51918d4ff3d77c1c856a9699f8492c612cde32fd3bcd344af9be34999bfdc", 648 | "sha256:c20cfa2d49991c8b4147af39859b167664f2ad4561704ee74c1de03318e898db", 649 | "sha256:cb333c16912324fd5f769fff6bc5de372e9e7a202247b48870bc251ed40239aa", 650 | "sha256:d2d9808ea7b4af864f35ea216be506ecec180628aced0704e34aca0b040ffe46", 651 | "sha256:d483ad4e639292c90170eb6f7783ad19490e7a8defb3e46f97dfe4bacae89122", 652 | "sha256:dd5de0646207f053eb0d6c74ae45ba98c3395a571a2891858e87df7c9b9bd51b", 653 | "sha256:e1d4970ea66be07ae37a3c2e48b5ec63f7ba6804bdddfdbd3cfd954d25a82e63", 654 | "sha256:e4fac90784481d221a8e4b1162afa7c47ed953be40d31ab4629ae917510051df", 655 | "sha256:fa5ae20527d8e831e8230cbffd9f8fe952815b2b7dae6ffec25318803a7528fc", 656 | "sha256:fd7f6999a8070df521b6384004ef42833b9bd62cfee11a09bda1079b4b704247", 657 | "sha256:fdc842473cd33f45ff6bce46aea678a54e3d21f1b61a7750ce3c498eedfe25d6", 658 | "sha256:fe69978f3f768926cfa37b867e3843918e012cf83f680806599ddce33c2c68b0" 659 | ], 660 | "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4, 3.5'", 661 | "version": "==5.4.1" 662 | }, 663 | "regex": { 664 | "hashes": [ 665 | "sha256:01afaf2ec48e196ba91b37451aa353cb7eda77efe518e481707e0515025f0cd5", 666 | "sha256:11d773d75fa650cd36f68d7ca936e3c7afaae41b863b8c387a22aaa78d3c5c79", 667 | "sha256:18c071c3eb09c30a264879f0d310d37fe5d3a3111662438889ae2eb6fc570c31", 668 | "sha256:1e1c20e29358165242928c2de1482fb2cf4ea54a6a6dea2bd7a0e0d8ee321500", 669 | "sha256:281d2fd05555079448537fe108d79eb031b403dac622621c78944c235f3fcf11", 670 | "sha256:314d66636c494ed9c148a42731b3834496cc9a2c4251b1661e40936814542b14", 671 | "sha256:32e65442138b7b76dd8173ffa2cf67356b7bc1768851dded39a7a13bf9223da3", 672 | "sha256:339456e7d8c06dd36a22e451d58ef72cef293112b559010db3d054d5560ef439", 673 | "sha256:3916d08be28a1149fb97f7728fca1f7c15d309a9f9682d89d79db75d5e52091c", 674 | "sha256:3a9cd17e6e5c7eb328517969e0cb0c3d31fd329298dd0c04af99ebf42e904f82", 675 | "sha256:47bf5bf60cf04d72bf6055ae5927a0bd9016096bf3d742fa50d9bf9f45aa0711", 676 | "sha256:4c46e22a0933dd783467cf32b3516299fb98cfebd895817d685130cc50cd1093", 677 | "sha256:4c557a7b470908b1712fe27fb1ef20772b78079808c87d20a90d051660b1d69a", 678 | "sha256:52ba3d3f9b942c49d7e4bc105bb28551c44065f139a65062ab7912bef10c9afb", 679 | "sha256:563085e55b0d4fb8f746f6a335893bda5c2cef43b2f0258fe1020ab1dd874df8", 680 | "sha256:598585c9f0af8374c28edd609eb291b5726d7cbce16be6a8b95aa074d252ee17", 681 | "sha256:619d71c59a78b84d7f18891fe914446d07edd48dc8328c8e149cbe0929b4e000", 682 | "sha256:67bdb9702427ceddc6ef3dc382455e90f785af4c13d495f9626861763ee13f9d", 683 | 
"sha256:6d1b01031dedf2503631d0903cb563743f397ccaf6607a5e3b19a3d76fc10480", 684 | "sha256:741a9647fcf2e45f3a1cf0e24f5e17febf3efe8d4ba1281dcc3aa0459ef424dc", 685 | "sha256:7c2a1af393fcc09e898beba5dd59196edaa3116191cc7257f9224beaed3e1aa0", 686 | "sha256:7d9884d86dd4dd489e981d94a65cd30d6f07203d90e98f6f657f05170f6324c9", 687 | "sha256:90f11ff637fe8798933fb29f5ae1148c978cccb0452005bf4c69e13db951e765", 688 | "sha256:919859aa909429fb5aa9cf8807f6045592c85ef56fdd30a9a3747e513db2536e", 689 | "sha256:96fcd1888ab4d03adfc9303a7b3c0bd78c5412b2bfbe76db5b56d9eae004907a", 690 | "sha256:97f29f57d5b84e73fbaf99ab3e26134e6687348e95ef6b48cfd2c06807005a07", 691 | "sha256:980d7be47c84979d9136328d882f67ec5e50008681d94ecc8afa8a65ed1f4a6f", 692 | "sha256:a91aa8619b23b79bcbeb37abe286f2f408d2f2d6f29a17237afda55bb54e7aac", 693 | "sha256:ade17eb5d643b7fead300a1641e9f45401c98eee23763e9ed66a43f92f20b4a7", 694 | "sha256:b9c3db21af35e3b3c05764461b262d6f05bbca08a71a7849fd79d47ba7bc33ed", 695 | "sha256:bd28bc2e3a772acbb07787c6308e00d9626ff89e3bfcdebe87fa5afbfdedf968", 696 | "sha256:bf5824bfac591ddb2c1f0a5f4ab72da28994548c708d2191e3b87dd207eb3ad7", 697 | "sha256:c0502c0fadef0d23b128605d69b58edb2c681c25d44574fc673b0e52dce71ee2", 698 | "sha256:c38c71df845e2aabb7fb0b920d11a1b5ac8526005e533a8920aea97efb8ec6a4", 699 | "sha256:ce15b6d103daff8e9fee13cf7f0add05245a05d866e73926c358e871221eae87", 700 | "sha256:d3029c340cfbb3ac0a71798100ccc13b97dddf373a4ae56b6a72cf70dfd53bc8", 701 | "sha256:e512d8ef5ad7b898cdb2d8ee1cb09a8339e4f8be706d27eaa180c2f177248a10", 702 | "sha256:e8e5b509d5c2ff12f8418006d5a90e9436766133b564db0abaec92fd27fcee29", 703 | "sha256:ee54ff27bf0afaf4c3b3a62bcd016c12c3fdb4ec4f413391a90bd38bc3624605", 704 | "sha256:fa4537fb4a98fe8fde99626e4681cc644bdcf2a795038533f9f711513a862ae6", 705 | "sha256:fd45ff9293d9274c5008a2054ecef86a9bfe819a67c7be1afb65e69b405b3042" 706 | ], 707 | "version": "==2021.4.4" 708 | }, 709 | "requests": { 710 | "extras": [ 711 | "socks" 712 | ], 713 | "hashes": [ 714 | "sha256:27973dd4a904a4f13b263a19c866c13b92a39ed1c964655f025f3f8d3d75b804", 715 | "sha256:c210084e36a42ae6b9219e00e48287def368a26d03a048ddad7bfee44f75871e" 716 | ], 717 | "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'", 718 | "version": "==2.25.1" 719 | }, 720 | "requests-oauthlib": { 721 | "hashes": [ 722 | "sha256:7f71572defaecd16372f9006f33c2ec8c077c3cfa6f5911a9a90202beb513f3d", 723 | "sha256:b4261601a71fd721a8bd6d7aa1cc1d6a8a93b4a9f5e96626f8e4d91e8beeaa6a", 724 | "sha256:fa6c47b933f01060936d87ae9327fead68768b69c6c9ea2109c48be30f2d4dbc" 725 | ], 726 | "version": "==1.3.0" 727 | }, 728 | "rsa": { 729 | "hashes": [ 730 | "sha256:78f9a9bf4e7be0c5ded4583326e7461e3a3c5aae24073648b4bdfa797d78c9d2", 731 | "sha256:9d689e6ca1b3038bc82bf8d23e944b6b6037bc02301a574935b2dd946e0353b9" 732 | ], 733 | "markers": "python_version >= '3.6'", 734 | "version": "==4.7.2" 735 | }, 736 | "sacremoses": { 737 | "hashes": [ 738 | "sha256:58176cc28391830789b763641d0f458819bebe88681dac72b41a19c0aedc07e9", 739 | "sha256:fa93db44bc04542553ba6090818b892f603d02aa0d681e6c5c3023baf17e8564" 740 | ], 741 | "version": "==0.0.45" 742 | }, 743 | "scikit-learn": { 744 | "hashes": [ 745 | "sha256:0567a2d29ad08af98653300c623bd8477b448fe66ced7198bef4ed195925f082", 746 | "sha256:087dfede39efb06ab30618f9ab55a0397f29c38d63cd0ab88d12b500b7d65fd7", 747 | "sha256:1adf483e91007a87171d7ce58c34b058eb5dab01b5fee6052f15841778a8ecd8", 748 | "sha256:259ec35201e82e2db1ae2496f229e63f46d7f1695ae68eef9350b00dc74ba52f", 749 | 
"sha256:3c4f07f47c04e81b134424d53c3f5e16dfd7f494e44fd7584ba9ce9de2c5e6c1", 750 | "sha256:4562dcf4793e61c5d0f89836d07bc37521c3a1889da8f651e2c326463c4bd697", 751 | "sha256:4ddd2b6f7449a5d539ff754fa92d75da22de261fd8fdcfb3596799fadf255101", 752 | "sha256:54be0a60a5a35005ad69c75902e0f5c9f699db4547ead427e97ef881c3242e6f", 753 | "sha256:5580eba7345a4d3b097be2f067cc71a306c44bab19e8717a30361f279c929bea", 754 | "sha256:7b04691eb2f41d2c68dbda8d1bd3cb4ef421bdc43aaa56aeb6c762224552dfb6", 755 | "sha256:826b92bf45b8ad80444814e5f4ac032156dd481e48d7da33d611f8fe96d5f08b", 756 | "sha256:83b21ff053b1ff1c018a2d24db6dd3ea339b1acfbaa4d9c881731f43748d8b3b", 757 | "sha256:8772b99d683be8f67fcc04789032f1b949022a0e6880ee7b75a7ec97dbbb5d0b", 758 | "sha256:895dbf2030aa7337649e36a83a007df3c9811396b4e2fa672a851160f36ce90c", 759 | "sha256:8aa1b3ac46b80eaa552b637eeadbbce3be5931e4b5002b964698e33a1b589e1e", 760 | "sha256:9599a3f3bf33f73fed0fe06d1dfa4e6081365a58c1c807acb07271be0dce9733", 761 | "sha256:99349d77f54e11f962d608d94dfda08f0c9e5720d97132233ebdf35be2858b2d", 762 | "sha256:9a24d1ccec2a34d4cd3f2a1f86409f3f5954cc23d4d2270ba0d03cf018aa4780", 763 | "sha256:9bed8a1ef133c8e2f13966a542cb8125eac7f4b67dcd234197c827ba9c7dd3e0", 764 | "sha256:9c6097b6a9b2bafc5e0f31f659e6ab5e131383209c30c9e978c5b8abdac5ed2a", 765 | "sha256:9dfa564ef27e8e674aa1cc74378416d580ac4ede1136c13dd555a87996e13422", 766 | "sha256:a0334a1802e64d656022c3bfab56a73fbd6bf4b1298343f3688af2151810bbdf", 767 | "sha256:a29460499c1e62b7a830bb57ca42e615375a6ab1bcad053cd25b493588348ea8", 768 | "sha256:a36e159a0521e13bbe15ca8c8d038b3a1dd4c7dad18d276d76992e03b92cf643", 769 | "sha256:abe835a851610f87201819cb315f8d554e1a3e8128912783a31e87264ba5ffb7", 770 | "sha256:c13ebac42236b1c46397162471ea1c46af68413000e28b9309f8c05722c65a09", 771 | "sha256:c3deb3b19dd9806acf00cf0d400e84562c227723013c33abefbbc3cf906596e9", 772 | "sha256:c658432d8a20e95398f6bb95ff9731ce9dfa343fdf21eea7ec6a7edfacd4b4d9", 773 | "sha256:c7f4eb77504ac586d8ac1bde1b0c04b504487210f95297235311a0ab7edd7e38", 774 | "sha256:d54dbaadeb1425b7d6a66bf44bee2bb2b899fe3e8850b8e94cfb9c904dcb46d0", 775 | "sha256:ddb52d088889f5596bc4d1de981f2eca106b58243b6679e4782f3ba5096fd645", 776 | "sha256:ed9d65594948678827f4ff0e7ae23344e2f2b4cabbca057ccaed3118fdc392ca", 777 | "sha256:fab31f48282ebf54dd69f6663cd2d9800096bad1bb67bbc9c9ac84eb77b41972" 778 | ], 779 | "index": "pypi", 780 | "version": "==0.24.1" 781 | }, 782 | "scipy": { 783 | "hashes": [ 784 | "sha256:03f1fd3574d544456325dae502facdf5c9f81cbfe12808a5e67a737613b7ba8c", 785 | "sha256:0c81ea1a95b4c9e0a8424cf9484b7b8fa7ef57169d7bcc0dfcfc23e3d7c81a12", 786 | "sha256:1fba8a214c89b995e3721670e66f7053da82e7e5d0fe6b31d8e4b19922a9315e", 787 | "sha256:37f4c2fb904c0ba54163e03993ce3544c9c5cde104bcf90614f17d85bdfbb431", 788 | "sha256:50e5bcd9d45262725e652611bb104ac0919fd25ecb78c22f5282afabd0b2e189", 789 | "sha256:6ca1058cb5bd45388041a7c3c11c4b2bd58867ac9db71db912501df77be2c4a4", 790 | "sha256:77f7a057724545b7e097bfdca5c6006bed8580768cd6621bb1330aedf49afba5", 791 | "sha256:816951e73d253a41fa2fd5f956f8e8d9ac94148a9a2039e7db56994520582bf2", 792 | "sha256:96620240b393d155097618bcd6935d7578e85959e55e3105490bbbf2f594c7ad", 793 | "sha256:993c86513272bc84c451349b10ee4376652ab21f312b0554fdee831d593b6c02", 794 | "sha256:adf7cee8e5c92b05f2252af498f77c7214a2296d009fc5478fc432c2f8fb953b", 795 | "sha256:bc52d4d70863141bb7e2f8fd4d98e41d77375606cde50af65f1243ce2d7853e8", 796 | "sha256:c1d3f771c19af00e1a36f749bd0a0690cc64632783383bc68f77587358feb5a4", 797 | 
"sha256:d744657c27c128e357de2f0fd532c09c84cd6e4933e8232895a872e67059ac37", 798 | "sha256:e3e9742bad925c421d39e699daa8d396c57535582cba90017d17f926b61c1552", 799 | "sha256:e547f84cd52343ac2d56df0ab08d3e9cc202338e7d09fafe286d6c069ddacb31", 800 | "sha256:e89091e6a8e211269e23f049473b2fde0c0e5ae0dd5bd276c3fc91b97da83480", 801 | "sha256:e9da33e21c9bc1b92c20b5328adb13e5f193b924c9b969cd700c8908f315aa59", 802 | "sha256:ffdfb09315896c6e9ac739bb6e13a19255b698c24e6b28314426fd40a1180822" 803 | ], 804 | "markers": "python_version < '3.10' and python_version >= '3.7'", 805 | "version": "==1.6.2" 806 | }, 807 | "six": { 808 | "hashes": [ 809 | "sha256:30639c035cdb23534cd4aa2dd52c3bf48f06e5f4a941509c8bafd8ce11080259", 810 | "sha256:8b74bedcbbbaca38ff6d7491d76f2b06b3592611af620f8426e82dddb04a5ced" 811 | ], 812 | "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2'", 813 | "version": "==1.15.0" 814 | }, 815 | "tensorboard": { 816 | "hashes": [ 817 | "sha256:e167460085b6528956b33bab1c970c989cdce47a6616273880733f5e7bde452e" 818 | ], 819 | "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1'", 820 | "version": "==2.5.0" 821 | }, 822 | "tensorboard-data-server": { 823 | "hashes": [ 824 | "sha256:2d723d73e3a3b0a4498f56c64c39e2e26ac192414891df22c9f152b7058fd6bc", 825 | "sha256:a4b8e1c3fc85237b3afeef450db06c9a9b25f5854ad27c21667a90808acd1822", 826 | "sha256:b620e520d3d535ceb896557acca0029fd7fd2f9f408af35abc2d2dad91f0345d" 827 | ], 828 | "markers": "python_version >= '3.6'", 829 | "version": "==0.6.0" 830 | }, 831 | "tensorboard-plugin-wit": { 832 | "hashes": [ 833 | "sha256:2a80d1c551d741e99b2f197bb915d8a133e24adb8da1732b840041860f91183a" 834 | ], 835 | "version": "==1.8.0" 836 | }, 837 | "tensorflow": { 838 | "hashes": [ 839 | "sha256:0e427b1350be6dbe572f971947c5596fdbb152081f227808d8becd894bf40282", 840 | "sha256:22723b8e1fa83b34f56c349b16a57aaff913b404451fcf70981f2b1d6e0c64fc", 841 | "sha256:2357112319303da1b5459a621fd0503c2b2cd97b6c33c4903abd46b3c3e380e2", 842 | "sha256:36d5acd60aac48e34bd545d0ce1fb8b3fceebff6b8782436defd0f71c12203bd", 843 | "sha256:4a04081647b89a8fb602895b29ffc559e3c20aac8bde1d4c5ecd2a65adce5d35", 844 | "sha256:55368ba0bedb513ba0e36a2543a588b5276e9b2ca99fa3232a9a176601a7bab5", 845 | "sha256:e1f2799cc86861680d8515167f103e2207a8cab92a4afe5471e4839330591f08", 846 | "sha256:eedcf578afde5e6e69c75d796bed41093451cd1ab54afb438760e40fb74a09de", 847 | "sha256:efa9daa4b3701a4e439b24b74c1e4b66844aee8ae5263fb3cc12281ac9cc9f67" 848 | ], 849 | "index": "pypi", 850 | "version": "==2.4.1" 851 | }, 852 | "tensorflow-estimator": { 853 | "hashes": [ 854 | "sha256:5b7b7bf2debe19a8794adacc43e8ba6459daa4efaf54d3302623994a359b17f0" 855 | ], 856 | "version": "==2.4.0" 857 | }, 858 | "termcolor": { 859 | "hashes": [ 860 | "sha256:1d6d69ce66211143803fbc56652b41d73b4a400a2891d7bf7a1cdf4c02de613b" 861 | ], 862 | "version": "==1.1.0" 863 | }, 864 | "threadpoolctl": { 865 | "hashes": [ 866 | "sha256:38b74ca20ff3bb42caca8b00055111d74159ee95c4370882bbff2b93d24da725", 867 | "sha256:ddc57c96a38beb63db45d6c159b5ab07b6bced12c45a1f07b2b92f272aebfa6b" 868 | ], 869 | "markers": "python_version >= '3.5'", 870 | "version": "==2.1.0" 871 | }, 872 | "tokenizers": { 873 | "hashes": [ 874 | "sha256:03056431783e72df80de68648573f97a70701d17fa22336c6d761b5d4b7be9ff", 875 | "sha256:05c90ade1b9cc41aaee6056c5e460dc5150f12b602bdc6bfa3758fb965ca7788", 876 | "sha256:15d9b959fd3b9e9e7c6d6d7d909bca5d7397a170a50d99ac8ce4e2ab590b137a", 877 | 
"sha256:1a22bf899728eeb74ee2bb1ba9eff61898ec02e623a690ed28002762d19ab9b4", 878 | "sha256:1ab8c467e4fe16bba33022feefcd6322642a58e4c8c123fd692c20e17f339964", 879 | "sha256:2094eb8e3608858eb4bd29c32c39969ae63ad9749d8aca9b34e82cba852acaf1", 880 | "sha256:2b592146caff20c283dadf2da99520b1dfde4af8ce964a8adcb4e990923fa423", 881 | "sha256:39e24555d5a2d9df87fd75303e1fd9ba3f995ac8aeb543c511d601d26a54726a", 882 | "sha256:3fb22df976701452db3ba652bd647518a043e58d4209d18273163fbc53252a3b", 883 | "sha256:419bb33bb3690239b93b76b06eba1eb822aa72f4e63293d2f15c60505f6ee0d0", 884 | "sha256:474883e8e0be431394e0ccfb70e97c1856e8c5bc80536f7b2faa3b0785d59afd", 885 | "sha256:4865d34d4897eed4ca4a758971fb14911cf5022e270b53c028fa9312fe440e2b", 886 | "sha256:52c2479975fd5025d399493403c7aedce853da20cec04a32a829c1c12c28e2f1", 887 | "sha256:6229fcc8473fd225e8e09742c354dacacd57dbdc73075e4c9d71f925cd171090", 888 | "sha256:77c4c41f2147c930c66014ca43b6935133781ae1923d62e70c797e71b0ee2598", 889 | "sha256:79119578bcd1d8ec836ddd3dbb305f32084d60e9f67e93a10ca33c67eeaa89fc", 890 | "sha256:7ba26369bc30f9d28d9ff42dcb1b57d9995157a9bb2975b95acda4220195d7aa", 891 | "sha256:86077426c615a814f7456569eade33c12c93131d02fdf548994dcedf41bdbbf1", 892 | "sha256:8a575022e066878bede82bb5d5244b17c6ebda15dbb50229d86f9e8267ddd40e", 893 | "sha256:8b4ae84fb410b5f5abb3a604b3274e2d6994b21f07c379b1c1659561e026bad8", 894 | "sha256:9124a1f77e176cb2a2571bae4c3bf8d4c40975c1681e2ba346fdca5d6a3aa843", 895 | "sha256:953a4e483524fd37fd66208e21dce85d4829bfe294d8b6224d2f00f61aa9950c", 896 | "sha256:9619846026b16967465e5221206f86bdc58cf65b0f92548d048e97925361121e", 897 | "sha256:9e8f32a2ef1902f769da6215ae8beabd632676a1551fb171b5aa6d4c11fd3a02", 898 | "sha256:a323d93fd5e57060428fecb6d73ab13223822f8ffa1ede282070b47a4bda2cea", 899 | "sha256:a3de6ecfbd739ee3d59280c0c930c0c5a716df1cf0cdf68beb379066931866bd", 900 | "sha256:aadcf38b97114d035e389f5aee4edf59e81666ad65de26c06592d76f184bb66c", 901 | "sha256:bac17cceebb2a6947d380e1b7bce8fc33098a979071a1291adc456fb25434924", 902 | "sha256:bb58ad982f8f72052362a5384145e87559899dcc0b06264e71dd137869037e6e", 903 | "sha256:be25827c0506d92927dc0ef4d2ce0c4653a351735546f8b22548535c3d2f7a6c", 904 | "sha256:bed7c5c2c786a2e9b3265006f15a13d8e04dcdfcf9ba13add0d7194a50346393", 905 | "sha256:c0f5bbc2e614468bcb605f2aa4a6dbcdf21629c6ff6ae81f1d9fa9683934ce8e", 906 | "sha256:c429c25c3dfe1ea9ad6e21a49d648910335ef4188c5e8226e5aa2ba2bd13921c", 907 | "sha256:cd408266f13856dc648ed2dcc889ac17feffc28da2ebb03f1977b88935e86c9a", 908 | "sha256:cf7f1aad957fed36e4a90fc094e3adc03fdd45fbb058c1cde25721e3e66235f8", 909 | "sha256:e3e74a9fa40b92a9817fe05ea91bf20075f45ad8cf7c0d3eb738170a27059508", 910 | "sha256:ea54eb0071f13fa7c6c3b88997a843d01c067158b994115759c27827e683fb82", 911 | "sha256:f1553e029f326eb74f36d67a38ef77a7f03068a494a0faa4e16d0d832f25b760", 912 | "sha256:f7b62497dce161babdb9197fa4b26e401bac9541b62fe0d0957134fefeb1b01c", 913 | "sha256:f9fe9c5556ccab03c9d42ed299bd8901c95d22373676437bfeb4656c2b5e42bc", 914 | "sha256:fb1dae213c8531d6af071dd021c7225be73803a0cbe609aed5074be04118aa6c" 915 | ], 916 | "version": "==0.10.2" 917 | }, 918 | "torch": { 919 | "hashes": [ 920 | "sha256:1388b30fbd262c1a053d6c9ace73bb0bd8f5871b4892b6f3e02d1d7bc9768563", 921 | "sha256:16f2630d9604c4ee28ea7d6e388e2264cd7bc6031c6ecd796bae3f56b5efa9a3", 922 | "sha256:225ee4238c019b28369c71977327deeeb2bd1c6b8557e6fcf631b8866bdc5447", 923 | "sha256:3e4190c04dfd89c59bad06d5fe451446643a65e6d2607cc989eb1001ee76e12f", 924 | 
"sha256:4ace9c5bb94d5a7b9582cd089993201658466e9c59ff88bd4e9e08f6f072d1cf", 925 | "sha256:55137feb2f5a0dc7aced5bba690dcdb7652054ad3452b09a2bbb59f02a11e9ff", 926 | "sha256:5c2e9a33d44cdb93ebd739b127ffd7da786bf5f740539539195195b186a05f6c", 927 | "sha256:6ffa1e7ae079c7cb828712cb0cdaae5cc4fb87c16a607e6d14526b62c20bcc17", 928 | "sha256:8ad2252bf09833dcf46a536a78544e349b8256a370e03a98627ebfb118d9555b", 929 | "sha256:95b7bbbacc3f28fe438f418392ceeae146a01adc03b29d44917d55214ac234c9", 930 | "sha256:a50ea8ed900927fb30cadb63aa7a32fdd59c7d7abe5012348dfbe35a8355c083", 931 | "sha256:c6ede2ae4dcd8214b63e047efabafa92493605205a947574cf358216ca4e440a", 932 | "sha256:ce7d435426f3dd14f95710d779aa46e9cd5e077d512488e813f7589fdc024f78", 933 | "sha256:dac4d10494e74f7e553c92d7263e19ea501742c4825ddd26c4decfa27be95981", 934 | "sha256:e7ad1649adb7dc2a450e70a3e51240b84fa4746c69c8f98989ce0c254f9fba3a", 935 | "sha256:f23eeb1a48cc39209d986c418ad7e02227eee973da45c0c42d36b1aec72f4940" 936 | ], 937 | "index": "pypi", 938 | "version": "==1.8.1" 939 | }, 940 | "tqdm": { 941 | "hashes": [ 942 | "sha256:daec693491c52e9498632dfbe9ccfc4882a557f5fa08982db1b4d3adbe0887c3", 943 | "sha256:ebdebdb95e3477ceea267decfc0784859aa3df3e27e22d23b83e9b272bf157ae" 944 | ], 945 | "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'", 946 | "version": "==4.60.0" 947 | }, 948 | "transformers": { 949 | "hashes": [ 950 | "sha256:0a57d1cd9301a617c7015d7184228984abdfb1ae2158c29cfb32582219756d23", 951 | "sha256:3508e3b032cf0f5342c67836de4b121aa5c435c959472a28054ba895ea59cca7" 952 | ], 953 | "index": "pypi", 954 | "version": "==4.5.1" 955 | }, 956 | "tweepy": { 957 | "hashes": [ 958 | "sha256:5e22003441a11f6f4c2ea4d05ec5532f541e9f5d874c3908270f0c28e649b53a", 959 | "sha256:76e6954b806ca470dda877f57db8792fff06a0beba0ed43efc3805771e39f06a" 960 | ], 961 | "index": "pypi", 962 | "version": "==3.10.0" 963 | }, 964 | "typing-extensions": { 965 | "hashes": [ 966 | "sha256:7cb407020f00f7bfc3cb3e7881628838e69d8f3fcab2f64742a5e76b2f841918", 967 | "sha256:99d4073b617d30288f569d3f13d2bd7548c3a7e4c8de87db09a9d29bb3a4a60c", 968 | "sha256:dafc7639cde7f1b6e1acc0f457842a83e722ccca8eef5270af2d74792619a89f" 969 | ], 970 | "markers": "python_version < '3.8'", 971 | "version": "==3.7.4.3" 972 | }, 973 | "urllib3": { 974 | "hashes": [ 975 | "sha256:2f4da4594db7e1e110a944bb1b551fdf4e6c136ad42e4234131391e21eb5b0df", 976 | "sha256:e7b021f7241115872f92f43c6508082facffbd1c048e3c6e2bb9c2a157e28937" 977 | ], 978 | "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4' and python_version < '4'", 979 | "version": "==1.26.4" 980 | }, 981 | "werkzeug": { 982 | "hashes": [ 983 | "sha256:2de2a5db0baeae7b2d2664949077c2ac63fbd16d98da0ff71837f7d1dea3fd43", 984 | "sha256:6c80b1e5ad3665290ea39320b91e1be1e0d5f60652b964a3070216de83d2e47c" 985 | ], 986 | "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'", 987 | "version": "==1.0.1" 988 | }, 989 | "wheel": { 990 | "hashes": [ 991 | "sha256:78b5b185f0e5763c26ca1e324373aadd49182ca90e825f7853f4b2509215dc0e", 992 | "sha256:e11eefd162658ea59a60a0f6c7d493a7190ea4b9a85e335b33489d9f17e0245e" 993 | ], 994 | "markers": "python_version >= '3'", 995 | "version": "==0.36.2" 996 | }, 997 | "wrapt": { 998 | "hashes": [ 999 | "sha256:b62ffa81fb85f4332a4f609cab4ac40709470da05643a082ec1eb88e6d9b97d7" 1000 | ], 1001 | "version": "==1.12.1" 1002 | }, 1003 | "zipp": { 1004 | "hashes": [ 1005 | "sha256:3607921face881ba3e026887d8150cca609d517579abe052ac81fc5aeffdbd76", 
1006 | "sha256:51cb66cc54621609dd593d1787f286ee42a5c0adbb4b29abea5a63edc3e03098" 1007 | ], 1008 | "markers": "python_version >= '3.6'", 1009 | "version": "==3.4.1" 1010 | } 1011 | }, 1012 | "develop": {} 1013 | } 1014 | -------------------------------------------------------------------------------- /backend/README.md: -------------------------------------------------------------------------------- 1 | # cse6242-project 2 | All the data will now be at this link: https://drive.google.com/drive/folders/1RQlCXTDjg-_fbt9_nhTIWSsGnP4_sh4K?usp=sharing 3 | 4 | To get the data: 5 | 1. Download the twitter(`company_tweets_2015_to_2019.csv`, e.g. `TSLA_tweets_2015_to_2019.csv`) and reddit data(`company_reddit_2015_to_2019.csv`, e.g. `TSLA_reddit_2015_to_2019.csv`) from google drive and put it in the same path as starter_code. 6 | 2. Run `starter_code.py company` or any other company (should enter the stock name). It creates two directories (`twitter/company/` and `reddit/company/`) and adds the data for each year to those directories. 7 | 3. Run `load_data.py` with the following arguments: `load_data(stock_name, stock_path, reddit_path, twitter_path, year_list, return_columns_list)` 8 | - For example, for getting data of TSLA for years 2015 to 1018 and the desired outputs listed below, run ```load_data("TSLA", "stock_data/tesla/", "reddit/TSLA/", "twitter/TSLA/", [2015, 2016, 2017, 2018], ['Close','positive_reddit','negative_reddit', 'neutral_reddit', 'count_reddit','comment_num_reddit', 'positive_twitter', 'negative_twitter','neutral_twitter', 'count_twitter', 'comment_num_twitter','retweet_num_twitter', 'like_num_twitter'])``` 9 | - If you want all the columns in the returned array, pass an empty list `[]` as `return_columns_list` in `load_data()`. 10 | -------------------------------------------------------------------------------- /backend/Tesla_30_5weeks.csv: -------------------------------------------------------------------------------- 1 | Week,Reddit Sentiment,Number of Reddit Posts,Twitter Sentiment,Number of Tweets,Tweet Activity 2 | 1,0.5,0.5,0.5,0.5,0.5 3 | 2,0.96,0.5,0.5,0.5,0.5 4 | 3,0.5,0.84,0.5,0.5,0.5 5 | 4,0.5,0.5,0.93,0.5,0.98 6 | 5,0.81,0.87,0.5,0.96,0.5 7 | -------------------------------------------------------------------------------- /backend/Tesla_90_5weeks.csv: -------------------------------------------------------------------------------- 1 | Week,Reddit Sentiment,Number of Reddit Posts,Twitter Sentiment,Number of Tweets,Tweet Activity 2 | 1,0.5,0.5,0.5,0.5,0.5 3 | 2,1,0.5,0.5,1,0.5 4 | 3,0.5,1,0.5,0.5,1 5 | 4,1,0.5,1,0.5,0.5 6 | 5,0.5,1,0.5,0.5,1 7 | -------------------------------------------------------------------------------- /backend/best_weights/AAPL_wsize30_sc_dict.p: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/AAPL_wsize30_sc_dict.p -------------------------------------------------------------------------------- /backend/best_weights/AAPL_wsize60_sc_dict.p: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/AAPL_wsize60_sc_dict.p -------------------------------------------------------------------------------- /backend/best_weights/AAPL_wsize90_sc_dict.p: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/AAPL_wsize90_sc_dict.p -------------------------------------------------------------------------------- /backend/best_weights/AMZN_wsize30_sc_dict.p: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/AMZN_wsize30_sc_dict.p -------------------------------------------------------------------------------- /backend/best_weights/AMZN_wsize60_sc_dict.p: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/AMZN_wsize60_sc_dict.p -------------------------------------------------------------------------------- /backend/best_weights/AMZN_wsize90_sc_dict.p: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/AMZN_wsize90_sc_dict.p -------------------------------------------------------------------------------- /backend/best_weights/GOOGL_wsize30_sc_dict.p: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/GOOGL_wsize30_sc_dict.p -------------------------------------------------------------------------------- /backend/best_weights/GOOGL_wsize60_sc_dict.p: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/GOOGL_wsize60_sc_dict.p -------------------------------------------------------------------------------- /backend/best_weights/GOOGL_wsize90_sc_dict.p: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/GOOGL_wsize90_sc_dict.p -------------------------------------------------------------------------------- /backend/best_weights/MSFT_wsize30_sc_dict.p: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/MSFT_wsize30_sc_dict.p -------------------------------------------------------------------------------- /backend/best_weights/MSFT_wsize60_sc_dict.p: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/MSFT_wsize60_sc_dict.p -------------------------------------------------------------------------------- /backend/best_weights/MSFT_wsize90_sc_dict.p: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/MSFT_wsize90_sc_dict.p -------------------------------------------------------------------------------- /backend/best_weights/TSLA_wsize30_sc_dict.p: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/TSLA_wsize30_sc_dict.p -------------------------------------------------------------------------------- /backend/best_weights/TSLA_wsize60_sc_dict.p: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/TSLA_wsize60_sc_dict.p -------------------------------------------------------------------------------- /backend/best_weights/TSLA_wsize90_sc_dict.p: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/TSLA_wsize90_sc_dict.p -------------------------------------------------------------------------------- /backend/best_weights/best_weights_AAPL_wsize30.hdf5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/best_weights_AAPL_wsize30.hdf5 -------------------------------------------------------------------------------- /backend/best_weights/best_weights_AAPL_wsize60.hdf5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/best_weights_AAPL_wsize60.hdf5 -------------------------------------------------------------------------------- /backend/best_weights/best_weights_AAPL_wsize90.hdf5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/best_weights_AAPL_wsize90.hdf5 -------------------------------------------------------------------------------- /backend/best_weights/best_weights_AMZN_wsize30.hdf5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/best_weights_AMZN_wsize30.hdf5 -------------------------------------------------------------------------------- /backend/best_weights/best_weights_AMZN_wsize60.hdf5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/best_weights_AMZN_wsize60.hdf5 -------------------------------------------------------------------------------- /backend/best_weights/best_weights_AMZN_wsize90.hdf5: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/best_weights_AMZN_wsize90.hdf5 -------------------------------------------------------------------------------- /backend/best_weights/best_weights_GOOGL_wsize30.hdf5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/best_weights_GOOGL_wsize30.hdf5 -------------------------------------------------------------------------------- /backend/best_weights/best_weights_GOOGL_wsize60.hdf5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/best_weights_GOOGL_wsize60.hdf5 -------------------------------------------------------------------------------- /backend/best_weights/best_weights_GOOGL_wsize90.hdf5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/best_weights_GOOGL_wsize90.hdf5 -------------------------------------------------------------------------------- /backend/best_weights/best_weights_MSFT_wsize30.hdf5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/best_weights_MSFT_wsize30.hdf5 -------------------------------------------------------------------------------- /backend/best_weights/best_weights_MSFT_wsize60.hdf5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/best_weights_MSFT_wsize60.hdf5 -------------------------------------------------------------------------------- /backend/best_weights/best_weights_MSFT_wsize90.hdf5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/best_weights_MSFT_wsize90.hdf5 -------------------------------------------------------------------------------- /backend/best_weights/best_weights_TSLA_wsize30.hdf5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/best_weights_TSLA_wsize30.hdf5 -------------------------------------------------------------------------------- /backend/best_weights/best_weights_TSLA_wsize60.hdf5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/best_weights_TSLA_wsize60.hdf5 -------------------------------------------------------------------------------- /backend/best_weights/best_weights_TSLA_wsize90.hdf5: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/backend/best_weights/best_weights_TSLA_wsize90.hdf5 -------------------------------------------------------------------------------- /backend/data_proc.py: -------------------------------------------------------------------------------- 1 | import pandas as pd 2 | import datetime as dt 3 | import sys 4 | 5 | def get_grouped_df(source, df): 6 | # print('\n-- get_grouped_df -- ') 7 | 8 | df = df.loc[:, ~df.columns.str.contains('^Unnamed')] 9 | df.loc[:, 'post_date'] = pd.to_datetime(df['post_date'], unit='s').dt.date 10 | 11 | cols = ['positive_'+source, 'negative_'+source, 'neutral_'+source, 'count_'+source] 12 | 13 | grouped_df = pd.DataFrame([], columns=cols) 14 | 15 | grouped_df['count_'+source] = df['prediction'].groupby(df['post_date']).count() 16 | 17 | grouped_df[['negative_'+source,'neutral_'+source,'positive_'+source]] = \ 18 | df.groupby(['post_date','prediction'], as_index=False)\ 19 | .size()\ 20 | .pivot(index='post_date', columns='prediction', values='size')\ 21 | [['negative','neutral','positive']]\ 22 | .fillna(0) 23 | 24 | if source == 'twitter': 25 | grouped_df['comment_num_'+source] = df.groupby(['post_date']).agg({'comment_num':'sum'})['comment_num'] 26 | grouped_df['retweet_num_'+source] = df.groupby(['post_date']).agg({'retweet_num':'sum'})['retweet_num'] 27 | grouped_df['like_num_'+source] = df.groupby(['post_date']).agg({'like_num':'sum'})['like_num'] 28 | 29 | if source == 'reddit': 30 | grouped_df['comment_num_'+source] = df.groupby(['post_date']).agg({'comment_num':'sum'})['comment_num'] 31 | 32 | grouped_df[['positive_'+source, 'negative_'+source, 'neutral_'+source]] = \ 33 | grouped_df[['positive_'+source, 'negative_'+source, 'neutral_'+source]].div(grouped_df['count_'+source],0) 34 | # grouped_df.loc[:,grouped_df.columns!='count_'+source] = grouped_df.loc[:, grouped_df.columns!='count_'+source].div(grouped_df['count_'+source],0) 35 | 36 | return grouped_df 37 | 38 | 39 | def load_data(company, stock_path, reddit_path, twitter_path, year_list, return_cols): 40 | # print('\n-- load_data -- ') 41 | 42 | fin_df = pd.DataFrame([]) 43 | 44 | if stock_path != False: 45 | for y in year_list: 46 | df = pd.read_csv(stock_path + company+"_"+str(y)+'.csv') 47 | fin_df = fin_df.append(df) 48 | 49 | fin_df = fin_df.drop('Adj Close', 1) 50 | fin_df['Date'] = pd.to_datetime(fin_df['Date']) 51 | fin_df.set_index('Date', inplace=True) 52 | 53 | # adding financial dataframe to final_df, then we're gonna add twitter and reddit data to 54 | # final_df (if they exist) 55 | final_df = fin_df.copy() 56 | 57 | 58 | # adding reddit data 59 | if reddit_path != False: 60 | reddit_df = pd.DataFrame([]) 61 | cols = ['post_date','comment_num','prediction'] 62 | for y in year_list: 63 | df = pd.read_csv(reddit_path + str(company) + "_reddit_" + str(y)+'.csv', usecols=cols) 64 | reddit_df = reddit_df.append(df) 65 | 66 | grouped_reddit_df = get_grouped_df('reddit', reddit_df) 67 | final_df = final_df.merge(grouped_reddit_df, 68 | left_index=True, 69 | right_index=True, 70 | how='left') 71 | 72 | # adding twitter data 73 | if twitter_path != False: 74 | twitter_df = pd.DataFrame([]) 75 | cols = [ 'post_date','comment_num', 'retweet_num', 'like_num', 'prediction'] 76 | for y in year_list: 77 | df = pd.read_csv(twitter_path + str(company) +"_tweets_" + str(y)+'.csv', usecols=cols) 78 | twitter_df = twitter_df.append(df) 79 | 80 | grouped_twitter_df = 
get_grouped_df('twitter', twitter_df) 81 | final_df = final_df.merge(grouped_twitter_df, 82 | left_index=True, 83 | right_index=True, 84 | how='left') 85 | 86 | # fillna with 0 87 | final_df.fillna(0, inplace=True) 88 | if len(return_cols) == 0: 89 | return_cols = final_df.columns 90 | 91 | 92 | return final_df.index.tolist(), final_df[return_cols].to_numpy() 93 | 94 | # if __name__ == '__main__': 95 | # print(load_data("TSLA", "stock_data/tesla/", "reddit/TSLA/","twitter/TSLA/", [2015,2016,2017,2018], [])) 96 | 97 | 98 | 99 | -------------------------------------------------------------------------------- /backend/extrapolate_backend.py: -------------------------------------------------------------------------------- 1 | from models_keras import get_LSTM_model 2 | from data_proc import load_data 3 | from utils import init_scalers, fit_scalers, transform_scalers, viz_test_vs_pred 4 | 5 | import math 6 | import matplotlib.pyplot as plt 7 | import keras as keras 8 | import pandas as pd 9 | import numpy as np 10 | import pickle 11 | import random 12 | 13 | from sklearn.preprocessing import MinMaxScaler 14 | import argparse 15 | 16 | 17 | def extrapolate_func(window_size, company, modelpath, scpath, fdpath, stockdir, redditdir, twitterdir, years): 18 | 19 | ############### Model loading #################### 20 | 21 | model = get_LSTM_model(window_size, 11) 22 | model.load_weights(modelpath) 23 | 24 | sc_dict = pickle.load(open(scpath, "rb")) 25 | 26 | #### Get last entry from train data to kick start test data 27 | dates_pre_test, X_pre_test_indi = load_data(company, stockdir, redditdir, twitterdir, [years[0] - 1], ['Close','positive_reddit','negative_reddit', 'neutral_reddit', 'count_reddit', 'positive_twitter', 'negative_twitter','neutral_twitter', 'count_twitter', 'retweet_num_twitter', 'like_num_twitter']) 28 | 29 | ############### Evaluate on test set (remove this later) #################### 30 | dates_test, X_test_indi = load_data(company, stockdir, redditdir, twitterdir, years, ['Close','positive_reddit','negative_reddit', 'neutral_reddit', 'count_reddit', 'positive_twitter', 'negative_twitter','neutral_twitter', 'count_twitter', 'retweet_num_twitter', 'like_num_twitter']) 31 | 32 | dates_pre_test_win = dates_pre_test[-window_size:] 33 | X_pre_test_indi_win = X_pre_test_indi[-window_size:] 34 | 35 | dates_test = dates_pre_test_win + dates_test 36 | X_test_indi = np.concatenate((X_pre_test_indi_win, X_test_indi)) 37 | 38 | 39 | 40 | # print(dates_test) 41 | print(len(dates_test)) 42 | print(X_test_indi.shape) 43 | # exit() 44 | 45 | ## test: Scale data, combine data into sequences then send into the model (remove this later) 46 | X_test_indi = transform_scalers(X_test_indi, sc_dict) 47 | 48 | X_test_orig = [] 49 | y_test_orig = [] 50 | for i in range(window_size, X_test_indi.shape[0]): 51 | X_test_orig.append(X_test_indi[i-window_size:i, :]) 52 | y_test_orig.append(X_test_indi[i, :][0]) # just closing price 53 | 54 | ### All test elements 55 | X_test_prev, y_test_prev = np.array(X_test_orig), np.array(y_test_orig) 56 | predicted_stock_price_prev = model.predict(X_test_prev) 57 | predicted_stock_price_prev = sc_dict["Close"].inverse_transform(predicted_stock_price_prev.reshape(-1, 1)).reshape(-1) 58 | y_test_prev = sc_dict["Close"].inverse_transform(y_test_prev.reshape(-1, 1)).reshape(-1) 59 | 60 | ## Get the last element for extrapolating 61 | X_test_elem, y_test_elem = np.array(X_test_orig[-1]), np.array(y_test_orig[-1]) 62 | X_test_elem = np.expand_dims(X_test_elem, axis=0) 63 | 
y_test_elem = np.expand_dims(y_test_elem, axis=0) 64 | print(X_test_elem.shape, y_test_elem.shape) 65 | print(X_test_elem, y_test_elem) 66 | 67 | X_test_curr = X_test_elem 68 | curr_date = dates_test[-1] 69 | 70 | last_entry = X_test_curr[0][-1].reshape(1, 1, -1) 71 | 72 | future_dates = [] 73 | y_future_preds = [] 74 | 75 | future_df = pd.read_csv(fdpath, sep=',').values 76 | num_weeks_future = future_df.shape[0] 77 | 78 | def calc_ratios_from_score(sent_score): 79 | 80 | pos_score, neutral_score, neg_score = 0.05, 0.05, 0.05 81 | sent_score = max(sent_score, 0) 82 | 83 | if sent_score > 0.75: 84 | pos_score = (sent_score - 0.75) / 0.25 85 | neutral_score = (3/4) * (1 - pos_score) 86 | neg_score = (1/4) * (1 - pos_score) 87 | 88 | elif sent_score < 0.25: 89 | neg_score = (0.25 - sent_score) / 0.25 90 | neutral_score = (3/4) * (1 - neg_score) 91 | pos_score = (1/4) * (1 - neg_score) 92 | 93 | else: 94 | if sent_score < 0.5: 95 | neutral_score = (sent_score - 0.25) / 0.25 96 | neg_score = (3/4) * (1 - neutral_score) 97 | pos_score = (1/4) * (1 - neutral_score) 98 | else: 99 | neutral_score = (0.75 - sent_score) / 0.25 100 | pos_score = (3/4) * (1 - neutral_score) 101 | neg_score = (1/4) * (1 - neutral_score) 102 | 103 | pos_score += (np.random.rand(1)[0] / 20) 104 | neutral_score += (np.random.rand(1)[0] / 20) 105 | neg_score += (np.random.rand(1)[0] / 20) 106 | 107 | sum_total = pos_score + neutral_score + neg_score 108 | final_ratios = [pos_score / sum_total, neg_score / sum_total, neutral_score / sum_total] 109 | 110 | return final_ratios 111 | 112 | 113 | for i in range(num_weeks_future - 1): 114 | 115 | curr_date = curr_date + pd.to_timedelta(2, unit='d') 116 | curr_week_future = future_df[i] 117 | next_week_future = future_df[i+1] 118 | 119 | reddit_sents = np.linspace(curr_week_future[1], next_week_future[1], 5) 120 | reddit_counts = np.linspace(curr_week_future[2], next_week_future[2], 5) 121 | twitter_sents = np.linspace(curr_week_future[3], next_week_future[3], 5) 122 | twitter_counts = np.linspace(curr_week_future[4], next_week_future[4], 5) 123 | twitter_activity = np.linspace(curr_week_future[5], next_week_future[5], 5) 124 | 125 | for day in range(5): 126 | # get random noise 127 | rand_val = random.uniform(-1, 1) / 10 128 | rand_val_2 = random.uniform(-1, 1) / 10 129 | rand_val_3 = random.uniform(-1, 1) / 10 130 | rand_val_4 = random.uniform(-1, 1) / 10 131 | 132 | # create new entry 133 | curr_date = curr_date + pd.to_timedelta(1, unit='d') 134 | 135 | predicted_stock_price_norm = model.predict(X_test_curr) 136 | predicted_stock_price_curr = sc_dict["Close"].inverse_transform(predicted_stock_price_norm.reshape(-1, 1)).reshape(-1) 137 | 138 | future_dates.append(curr_date) 139 | y_future_preds.append(predicted_stock_price_curr) 140 | 141 | print(curr_date, predicted_stock_price_curr) 142 | 143 | # Add predicted close price at 0th position 144 | new_entry = last_entry.copy() 145 | new_entry[0][0][0] = predicted_stock_price_norm[0][0] 146 | 147 | # Add reddit pos, neg, neutral 148 | reddit_sent_ratios_day = calc_ratios_from_score(reddit_sents[day]) 149 | # pos 150 | new_entry[0][0][1] = reddit_sent_ratios_day[0] 151 | # neg 152 | new_entry[0][0][2] = reddit_sent_ratios_day[1] 153 | # neutral 154 | new_entry[0][0][3] = reddit_sent_ratios_day[2] 155 | 156 | # Add count reddit 157 | reddit_ct_day = reddit_counts[day] 158 | new_entry[0][0][4] = reddit_ct_day + rand_val 159 | 160 | # Add twitter pos, neg, neutral 161 | twitter_sent_ratios_day = 
calc_ratios_from_score(twitter_sents[day]) 162 | # pos 163 | new_entry[0][0][5] = twitter_sent_ratios_day[0] 164 | # neg 165 | new_entry[0][0][6] = twitter_sent_ratios_day[1] 166 | # neutral 167 | new_entry[0][0][7] = twitter_sent_ratios_day[2] 168 | 169 | # Add count twitter 170 | twitter_ct_day = twitter_counts[day] 171 | new_entry[0][0][8] = twitter_ct_day + rand_val_2 172 | 173 | # Add retweet twitter 174 | twitter_retweet_day = twitter_activity[day] 175 | new_entry[0][0][9] = twitter_retweet_day + rand_val_3 176 | 177 | # Add likes twitter 178 | twitter_likes_day = twitter_activity[day] 179 | new_entry[0][0][10] = twitter_likes_day + rand_val_4 180 | 181 | 182 | print(new_entry) 183 | 184 | 185 | X_test_curr = np.concatenate((X_test_curr, new_entry), axis=1)[:, 1:, :] 186 | 187 | 188 | # Plotting 189 | all_dates = dates_test[window_size:] + future_dates 190 | # print(all_dates) 191 | all_preds = np.concatenate((predicted_stock_price_prev, np.array(y_future_preds).reshape(-1))) 192 | all_labels = np.concatenate((y_test_prev, np.array(y_future_preds).reshape(-1))) 193 | print(len(all_dates)) 194 | print(len(all_preds)) 195 | print(len(all_labels)) 196 | result = {"dates": all_dates, "actual": y_test_prev.tolist(), "pred": all_preds.tolist()} 197 | # for i in range(len(all_dates)): 198 | # result(all_dates[i], all_preds[i], all_labels[i])) 199 | return result 200 | # plt.plot(all_dates, all_labels, color = "red", label = "Real Stock Price") 201 | # plt.plot(all_dates, all_preds, color = "blue", label = "Predicted Stock Price") 202 | 203 | # plt.xlabel('Time') 204 | # plt.ylabel('Stock Price') 205 | # plt.legend() 206 | # plt.show() 207 | 208 | # print(extrapolate_func(90, "TSLA", "weights/best_weights_TSLA_wsize90.hdf5", 209 | # "weights/TSLA_wsize90_sc_dict.p", "Tesla_5_weeks.csv", "stock_data/tesla/", 210 | # "reddit/TSLA/", "twitter/TSLA/", [2019])) 211 | # Parser 212 | # parser = argparse.ArgumentParser() 213 | # parser.add_argument('--window_size', default=100, type=int, help='window size') 214 | 215 | # parser.add_argument('--company', type=str, help='company', required=True) 216 | # parser.add_argument('--modelpath', type=str, help='model path', required=True) 217 | # parser.add_argument('--scpath', type=str, help='scaler path', required=True) 218 | # parser.add_argument('--fdpath', type=str, help='future data csv path', required=True) 219 | 220 | # # Remove this later and just use one entry in a csv 221 | # parser.add_argument('--stockdir', type=str, help='stock') 222 | # parser.add_argument('--redditdir', type=str, help='reddit') 223 | # parser.add_argument('--twitterdir', type=str, help='twitter') 224 | # parser.add_argument('--years', nargs='+', help='Years') 225 | 226 | # args = parser.parse_args() 227 | # args.years = [int(year) for year in args.years] 228 | 229 | # extrapolate_func(args.window_size, args.company, args.modelpath, args.scpath, args.fdpath, args.stockdir, args.redditdir, args.twitterdir, args.years) 230 | 231 | 232 | 233 | -------------------------------------------------------------------------------- /backend/extrapolate_predict_keras_old.py: -------------------------------------------------------------------------------- 1 | from models_keras import get_LSTM_model 2 | from data_proc import load_data 3 | from utils import init_scalers, fit_scalers, transform_scalers, viz_test_vs_pred 4 | 5 | import math 6 | import matplotlib.pyplot as plt 7 | import keras as keras 8 | import pandas as pd 9 | import numpy as np 10 | import pickle 11 | 12 | from 
sklearn.preprocessing import MinMaxScaler 13 | import argparse 14 | 15 | # Parser 16 | parser = argparse.ArgumentParser() 17 | parser.add_argument('--window_size', default=100, type=int, help='window size') 18 | parser.add_argument('--futuredays', default=100, type=int, help='window size') 19 | parser.add_argument('--futuresentiment', type=str, help='positive | negative | neutral', required=True) 20 | 21 | parser.add_argument('--company', type=str, help='company', required=True) 22 | parser.add_argument('--modelpath', type=str, help='model path', required=True) 23 | parser.add_argument('--scpath', type=str, help='scaler path', required=True) 24 | 25 | # Remove this later and just use one entry in a csv 26 | parser.add_argument('--stockdir', type=str, help='stock') 27 | parser.add_argument('--redditdir', type=str, help='reddit') 28 | parser.add_argument('--twitterdir', type=str, help='twitter') 29 | parser.add_argument('--years', nargs='+', help='Years') 30 | 31 | args = parser.parse_args() 32 | args.years = [int(year) for year in args.years] 33 | 34 | ############### Model loading #################### 35 | 36 | model = get_LSTM_model(args.window_size, 11) 37 | model.load_weights(args.modelpath) 38 | 39 | sc_dict = pickle.load(open(args.scpath, "rb")) 40 | 41 | ############### Evaluate on test set (remove this later) #################### 42 | dates_test, X_test_indi = load_data(args.company, args.stockdir, args.redditdir, args.twitterdir, args.years, ['Close','positive_reddit','negative_reddit', 'neutral_reddit', 'count_reddit', 'positive_twitter', 'negative_twitter','neutral_twitter', 'count_twitter', 'retweet_num_twitter', 'like_num_twitter']) 43 | 44 | ## test: Scale data, combine data into sequences then send into the model (remove this later) 45 | X_test_indi = transform_scalers(X_test_indi, sc_dict) 46 | 47 | X_test_orig = [] 48 | y_test_orig = [] 49 | for i in range(args.window_size, X_test_indi.shape[0]): 50 | X_test_orig.append(X_test_indi[i-args.window_size:i, :]) 51 | y_test_orig.append(X_test_indi[i, :][0]) # just closing price 52 | 53 | ### All test elements 54 | X_test_prev, y_test_prev = np.array(X_test_orig), np.array(y_test_orig) 55 | predicted_stock_price_prev = model.predict(X_test_prev) 56 | predicted_stock_price_prev = sc_dict["Close"].inverse_transform(predicted_stock_price_prev.reshape(-1, 1)).reshape(-1) 57 | y_test_prev = sc_dict["Close"].inverse_transform(y_test_prev.reshape(-1, 1)).reshape(-1) 58 | 59 | 60 | # viz_test_vs_pred("trial", dates_test[args.window_size:], y_test_prev, predicted_stock_price_prev, True) 61 | # print(dates_test) 62 | # import pdb; pdb.set_trace() 63 | 64 | ## Get the last element for extrapolating 65 | X_test_elem, y_test_elem = np.array(X_test_orig[-1]), np.array(y_test_orig[-1]) 66 | X_test_elem = np.expand_dims(X_test_elem, axis=0) 67 | y_test_elem = np.expand_dims(y_test_elem, axis=0) 68 | print(X_test_elem.shape, y_test_elem.shape) 69 | print(X_test_elem, y_test_elem) 70 | 71 | X_test_curr = X_test_elem 72 | y_test_curr = y_test_elem 73 | curr_date = dates_test[-1] 74 | 75 | last_entry = X_test_curr[0][-1].reshape(1, 1, -1) 76 | 77 | future_dates = [] 78 | y_future_preds = [] 79 | 80 | trial = args.futuresentiment 81 | 82 | for i in range(args.futuredays): 83 | 84 | # ignoring weekends at the moment 85 | curr_date = curr_date + pd.to_timedelta(1, unit='d') 86 | 87 | predicted_stock_price_norm = model.predict(X_test_curr) 88 | predicted_stock_price_curr = sc_dict["Close"].inverse_transform(predicted_stock_price_norm.reshape(-1, 
1)).reshape(-1) 89 | 90 | future_dates.append(curr_date) 91 | y_future_preds.append(predicted_stock_price_curr) 92 | 93 | print(curr_date, predicted_stock_price_curr) 94 | 95 | new_entry = last_entry.copy() 96 | new_entry[0][0][0] = predicted_stock_price_norm[0][0] 97 | 98 | if trial == "positive": 99 | rand_val = np.random.rand(1)[0] / 5 100 | rand_val2 = np.random.rand(1)[0] / 5 101 | 102 | new_entry[0][0][1] = 0.7 + rand_val 103 | new_entry[0][0][2] = 0.15 - (rand_val / 2) 104 | new_entry[0][0][3] = 0.15 - (rand_val / 2) 105 | new_entry[0][0][5] = 0.7 + rand_val2 106 | new_entry[0][0][6] = 0.15 - (rand_val2 / 2) 107 | new_entry[0][0][7] = 0.15 - (rand_val2 / 2) 108 | 109 | elif trial == "negative": 110 | 111 | rand_val = np.random.rand(1)[0] / 5 112 | rand_val2 = np.random.rand(1)[0] / 5 113 | 114 | new_entry[0][0][1] = 0.15 + (rand_val / 2) 115 | new_entry[0][0][2] = 0.7 - rand_val 116 | new_entry[0][0][3] = 0.15 - (rand_val / 2) 117 | new_entry[0][0][5] = 0.15 + (rand_val2 / 2) 118 | new_entry[0][0][6] = 0.7 - rand_val2 119 | new_entry[0][0][7] = 0.15 - (rand_val2 / 2) 120 | 121 | elif trial == "neutral": 122 | 123 | rand_val = np.random.rand(1)[0] / 5 124 | rand_val2 = np.random.rand(1)[0] / 5 125 | 126 | new_entry[0][0][1] = 0.15 + (rand_val / 2) 127 | new_entry[0][0][2] = 0.15 - (rand_val / 2) 128 | new_entry[0][0][3] = 0.7 - rand_val 129 | 130 | new_entry[0][0][5] = 0.15 + (rand_val2 / 2) 131 | new_entry[0][0][6] = 0.15 - (rand_val2 / 2) 132 | new_entry[0][0][7] = 0.7 - rand_val2 133 | 134 | 135 | X_test_curr = np.concatenate((X_test_curr, new_entry), axis=1)[:, 1:, :] 136 | 137 | 138 | # Plotting 139 | # import pdb; pdb.set_trace() 140 | 141 | all_dates = dates_test[args.window_size:] + future_dates 142 | all_preds = np.concatenate((predicted_stock_price_prev, np.array(y_future_preds).reshape(-1))) 143 | all_labels = np.concatenate((y_test_prev, np.array(y_future_preds).reshape(-1))) 144 | 145 | plt.plot(all_dates, all_labels, color = "red", label = "Real Stock Price") 146 | plt.plot(all_dates, all_preds, color = "blue", label = "Predicted Stock Price") 147 | 148 | # plt.title(title) 149 | plt.xlabel('Time') 150 | plt.ylabel('Stock Price') 151 | plt.legend() 152 | plt.show() 153 | 154 | 155 | 156 | 157 | 158 | -------------------------------------------------------------------------------- /backend/finBert/model/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "_name_or_path": "/home/ubuntu/finbert/models/language_model/finbertTRC2", 3 | "architectures": [ 4 | "BertForSequenceClassification" 5 | ], 6 | "attention_probs_dropout_prob": 0.1, 7 | "gradient_checkpointing": false, 8 | "hidden_act": "gelu", 9 | "hidden_dropout_prob": 0.1, 10 | "hidden_size": 768, 11 | "id2label": { 12 | "0": "positive", 13 | "1": "negative", 14 | "2": "neutral" 15 | }, 16 | "initializer_range": 0.02, 17 | "intermediate_size": 3072, 18 | "label2id": { 19 | "positive": 0, 20 | "negative": 1, 21 | "neutral": 2 22 | }, 23 | "layer_norm_eps": 1e-12, 24 | "max_position_embeddings": 512, 25 | "model_type": "bert", 26 | "num_attention_heads": 12, 27 | "num_hidden_layers": 12, 28 | "pad_token_id": 0, 29 | "position_embedding_type": "absolute", 30 | "type_vocab_size": 2, 31 | "vocab_size": 30522 32 | } -------------------------------------------------------------------------------- /backend/finBert/sentiment.py: -------------------------------------------------------------------------------- 1 | from nltk.tokenize import sent_tokenize 2 | from transformers 
import AutoTokenizer, AutoModelForSequenceClassification 3 | import pandas as pd 4 | import numpy as np 5 | import torch 6 | import re 7 | 8 | 9 | # UTILITY FUNCTIONS 10 | class InputExample(object): 11 | """A single training/test example for simple sequence classification.""" 12 | 13 | def __init__(self, guid, text, label=None, agree=None): 14 | """ 15 | Constructs an InputExample 16 | Parameters 17 | ---------- 18 | guid: str 19 | Unique id for the examples 20 | text: str 21 | Text for the first sequence. 22 | label: str, optional 23 | Label for the example. 24 | agree: str, optional 25 | For FinBERT , inter-annotator agreement level. 26 | """ 27 | self.guid = guid 28 | self.text = text 29 | self.label = label 30 | self.agree = agree 31 | 32 | class InputFeatures(object): 33 | """ 34 | A single set of features for the data. 35 | """ 36 | 37 | def __init__(self, input_ids, attention_mask, token_type_ids, label_id, agree=None): 38 | self.input_ids = input_ids 39 | self.attention_mask = attention_mask 40 | self.token_type_ids = token_type_ids 41 | self.label_id = label_id 42 | self.agree = agree 43 | 44 | def chunks(l, n): 45 | """ 46 | Simple utility function to split a list into fixed-length chunks. 47 | Parameters 48 | ---------- 49 | l: list 50 | given list 51 | n: int 52 | length of the sequence 53 | """ 54 | for i in range(0, len(l), n): 55 | # Create an index range for l of n items: 56 | yield l[i:i + n] 57 | 58 | def softmax(x): 59 | """Compute softmax values for each sets of scores in x.""" 60 | e_x = np.exp(x - np.max(x, axis=1)[:, None]) 61 | return e_x / np.sum(e_x, axis=1)[:, None] 62 | 63 | def convert_examples_to_features(examples, label_list, max_seq_length, tokenizer, mode='classification'): 64 | """ 65 | Loads a data file into a list of InputBatch's. With this function, the InputExample's are converted to features 66 | that can be used for the model. Text is tokenized, converted to ids and zero-padded. Labels are mapped to integers. 67 | 68 | Parameters 69 | ---------- 70 | examples: list 71 | A list of InputExample's. 72 | label_list: list 73 | The list of labels. 74 | max_seq_length: int 75 | The maximum sequence length. 76 | tokenizer: BertTokenizer 77 | The tokenizer to be used. 78 | mode: str, optional 79 | The task type: 'classification' or 'regression'. Default is 'classification' 80 | 81 | Returns 82 | ------- 83 | features: list 84 | A list of InputFeature's, which is an InputBatch. 
85 | """ 86 | 87 | if mode == 'classification': 88 | label_map = {label: i for i, label in enumerate(label_list)} 89 | label_map[None] = 9090 90 | 91 | features = [] 92 | for (ex_index, example) in enumerate(examples): 93 | tokens = tokenizer.tokenize(example.text) 94 | 95 | if len(tokens) > max_seq_length - 2: 96 | tokens = tokens[:(max_seq_length // 4) - 1] + tokens[ 97 | len(tokens) - (3 * max_seq_length // 4) + 1:] 98 | 99 | tokens = ["[CLS]"] + tokens + ["[SEP]"] 100 | 101 | token_type_ids = [0] * len(tokens) 102 | 103 | input_ids = tokenizer.convert_tokens_to_ids(tokens) 104 | 105 | attention_mask = [1] * len(input_ids) 106 | 107 | padding = [0] * (max_seq_length - len(input_ids)) 108 | input_ids += padding 109 | attention_mask += padding 110 | 111 | 112 | token_type_ids += padding 113 | 114 | assert len(input_ids) == max_seq_length 115 | assert len(attention_mask) == max_seq_length 116 | assert len(token_type_ids) == max_seq_length 117 | 118 | if mode == 'classification': 119 | label_id = label_map[example.label] 120 | elif mode == 'regression': 121 | label_id = float(example.label) 122 | else: 123 | raise ValueError("The mode should either be classification or regression. You entered: " + mode) 124 | 125 | agree = example.agree 126 | mapagree = {'0.5': 1, '0.66': 2, '0.75': 3, '1.0': 4} 127 | try: 128 | agree = mapagree[agree] 129 | except: 130 | agree = 0 131 | 132 | features.append( 133 | InputFeatures(input_ids=input_ids, 134 | attention_mask=attention_mask, 135 | token_type_ids=token_type_ids, 136 | label_id=label_id, 137 | agree=agree)) 138 | return features 139 | 140 | def get_prediction(text_list): 141 | # Load model 142 | tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 143 | model = AutoModelForSequenceClassification.from_pretrained('finBert/model/',num_labels=3,cache_dir=None) 144 | 145 | 146 | # Get prediction 147 | model.eval() 148 | label_list = ['positive', 'negative', 'neutral'] 149 | label_dict = {0: 'positive', 1: 'negative', 2: 'neutral'} 150 | sentiment = [] 151 | 152 | for text in text_list: 153 | # Preprocess text 154 | text = re.sub(r'[^\w\s]', '', text) 155 | text += '.' 
156 | sentences = sent_tokenize(text) 157 | examples = [InputExample(str(i), sentence) for i, sentence in enumerate(sentences)] 158 | features = convert_examples_to_features(examples, label_list, 64, tokenizer) 159 | all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long) 160 | all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long) 161 | all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long) 162 | with torch.no_grad(): 163 | logits = model(all_input_ids, all_attention_mask, all_token_type_ids)[0] 164 | logits = softmax(np.array(logits)) 165 | predictions = np.squeeze(np.argmax(logits, axis=1)) 166 | sentiment.append(label_dict[int(predictions)]) 167 | return sentiment 168 | 169 | #text = ["Dogecoin Breaks Past The 20 Cents Mark In Dizzying Rally With 'Earnings Report' Achievement Unlocked $DOGE $BTC… https://t.co/DUQt8r2m70", "hi"] 170 | #print(get_prediction(text)) 171 | 172 | -------------------------------------------------------------------------------- /backend/main_keras.py: -------------------------------------------------------------------------------- 1 | from models_keras import get_LSTM_model 2 | from data_proc import load_data 3 | from utils import init_scalers, fit_scalers, transform_scalers, viz_test_vs_pred 4 | 5 | import math 6 | import matplotlib.pyplot as plt 7 | import keras as keras 8 | import pandas as pd 9 | import numpy as np 10 | import pickle 11 | 12 | from sklearn.preprocessing import MinMaxScaler 13 | import argparse 14 | 15 | # Parser 16 | parser = argparse.ArgumentParser() 17 | parser.add_argument('--window_size', default=100, type=int, help='window size') 18 | parser.add_argument('--num_epochs', default=100, type=int, help='num epochs') 19 | 20 | parser.add_argument('--company', type=str, help='company', required=True) 21 | parser.add_argument('--stockdir', type=str, help='stock', required=True) 22 | parser.add_argument('--redditdir', type=str, help='reddit', required=True) 23 | parser.add_argument('--twitterdir', type=str, help='twitter', required=True) 24 | parser.add_argument('--trainyears', nargs='+', help='Years', required=True) 25 | parser.add_argument('--testyears', nargs='+', help='Years', required=True) 26 | 27 | args = parser.parse_args() 28 | 29 | ### Default parameters ### 30 | window_size = int(args.window_size) 31 | title = "TSLA_wsize" + str(window_size) 32 | args.trainyears = [int(year) for year in args.trainyears] 33 | args.testyears = [int(year) for year in args.testyears] 34 | 35 | print(title) 36 | print(args.trainyears) 37 | print(args.testyears) 38 | 39 | ############### Dataloading #################### 40 | ## Implement load_data function 41 | ## If you are changing fields, you need to change the indices in utils.py fit scalers as well as the label index when making label sets 42 | 43 | dates_train, X_train_indi = load_data(args.company, args.stockdir, args.redditdir, args.twitterdir, args.trainyears, ['Close','positive_reddit','negative_reddit', 'neutral_reddit', 'count_reddit', 'positive_twitter', 'negative_twitter','neutral_twitter', 'count_twitter', 'retweet_num_twitter', 'like_num_twitter']) 44 | 45 | dates_test, X_test_indi = load_data(args.company, args.stockdir, args.redditdir, args.twitterdir, args.testyears, ['Close','positive_reddit','negative_reddit', 'neutral_reddit', 'count_reddit', 'positive_twitter', 'negative_twitter','neutral_twitter', 'count_twitter', 'retweet_num_twitter', 'like_num_twitter']) 46 | 47 | # dates_train, 
X_train_indi = load_data("TSLA", 'stock_data/tesla/', "reddit/TSLA/","twitter/TSLA/", [2015, 2016, 2017, 2018], ['Close','positive_reddit','negative_reddit', 'neutral_reddit', 'count_reddit', 'positive_twitter', 'negative_twitter','neutral_twitter', 'count_twitter', 'retweet_num_twitter', 'like_num_twitter']) 48 | 49 | # dates_test, X_test_indi = load_data("TSLA", 'stock_data/tesla/', "reddit/TSLA/","twitter/TSLA/", [2019], ['Close','positive_reddit','negative_reddit', 'neutral_reddit', 'count_reddit', 'positive_twitter', 'negative_twitter','neutral_twitter', 'count_twitter', 'retweet_num_twitter', 'like_num_twitter']) 50 | 51 | 52 | ## train: Scale data, combine data into sequences then send into the model 53 | sc_dict = init_scalers() 54 | X_train_indi, sc_dict = fit_scalers(X_train_indi, sc_dict) 55 | 56 | X_train = [] 57 | y_train = [] 58 | for i in range(window_size, X_train_indi.shape[0]): 59 | X_train.append(X_train_indi[i-window_size:i, :]) 60 | y_train.append(X_train_indi[i, :][0]) # just closing price 61 | 62 | X_train, y_train = np.array(X_train), np.array(y_train) 63 | print(X_train.shape, y_train.shape) 64 | 65 | ## test: Scale data, combine data into sequences then send into the model 66 | X_test_indi = transform_scalers(X_test_indi, sc_dict) 67 | 68 | X_test = [] 69 | y_test = [] 70 | for i in range(window_size, X_test_indi.shape[0]): 71 | X_test.append(X_test_indi[i-window_size:i, :]) 72 | y_test.append(X_test_indi[i, :][0]) # just closing price 73 | 74 | X_test, y_test = np.array(X_test), np.array(y_test) 75 | print(X_test.shape, y_test.shape) 76 | 77 | ############### Model setup and train #################### 78 | 79 | model = get_LSTM_model(X_train.shape[1], X_train.shape[2]) 80 | 81 | # Callbacks 82 | best_weights_filepath = './weights/best_weights_' + str(title) + '.hdf5' 83 | earlyStopping = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, verbose=1, mode='auto') 84 | saveBestModel = keras.callbacks.ModelCheckpoint(best_weights_filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='auto') 85 | 86 | # Fitting the model to the Training set 87 | hist_train = model.fit(X_train, y_train, epochs = args.num_epochs, batch_size = 16, shuffle=True, validation_data=(X_test, y_test), callbacks=[earlyStopping, saveBestModel]) 88 | 89 | pickle.dump(sc_dict, open('./weights/' + title + "_sc_dict.p", "wb")) 90 | 91 | ############### Testing and Viz #################### 92 | #reload best weights 93 | model.load_weights(best_weights_filepath) 94 | sc_dict = pickle.load(open('./weights/' + title + "_sc_dict.p", "rb" ) ) 95 | 96 | predicted_stock_price = model.predict(X_test) 97 | hist_test = model.evaluate(X_test, y_test) 98 | 99 | #Rescale stock prices 100 | predicted_stock_price = sc_dict["Close"].inverse_transform(predicted_stock_price.reshape(-1, 1)).reshape(-1) 101 | y_test = sc_dict["Close"].inverse_transform(y_test.reshape(-1, 1)).reshape(-1) 102 | 103 | RMSE_score = (np.sum(np.power(y_test - predicted_stock_price, 2))) / y_test.shape[0] 104 | RMSE_score = np.sqrt(RMSE_score) 105 | 106 | print("RMSE is {}".format(RMSE_score)) 107 | 108 | viz_test_vs_pred(title + " -- RMSE: {:.2f}".format(RMSE_score), dates_test[window_size:], y_test, predicted_stock_price) 109 | -------------------------------------------------------------------------------- /backend/models_keras.py: -------------------------------------------------------------------------------- 1 | import keras 2 | from keras.models import Sequential 3 | from keras.layers import Dense 4 | from 
keras.layers import LSTM 5 | from keras.layers import Dropout 6 | from keras.layers import * 7 | 8 | def get_LSTM_model(window_size, dim_size): 9 | 10 | model = Sequential() 11 | 12 | model.add(LSTM(units = 50, return_sequences = True, input_shape = (window_size, dim_size))) # X_train.shape[1] 13 | model.add(Dropout(0.2)) 14 | 15 | model.add(LSTM(units = 50, return_sequences = True)) 16 | model.add(Dropout(0.2)) 17 | 18 | model.add(LSTM(units = 50, return_sequences = True)) 19 | model.add(Dropout(0.2)) 20 | 21 | model.add(LSTM(units = 50)) 22 | model.add(Dropout(0.2)) 23 | 24 | model.add(Dense(units = 1)) 25 | 26 | model.compile(optimizer = 'adam', loss = 'mean_squared_error', metrics=[keras.metrics.MeanSquaredError()]) 27 | 28 | return model -------------------------------------------------------------------------------- /backend/predict_keras.py: -------------------------------------------------------------------------------- 1 | from models_keras import get_LSTM_model 2 | from data_proc import load_data 3 | from utils import init_scalers, fit_scalers, transform_scalers, viz_test_vs_pred 4 | 5 | import math 6 | import matplotlib.pyplot as plt 7 | import keras as keras 8 | import pandas as pd 9 | import numpy as np 10 | import pickle 11 | 12 | from sklearn.preprocessing import MinMaxScaler 13 | import argparse 14 | 15 | # Parser 16 | parser = argparse.ArgumentParser() 17 | parser.add_argument('--window_size', default=100, type=int, help='window size') 18 | parser.add_argument('--company', type=str, help='company', required=True) 19 | parser.add_argument('--modelpath', type=str, help='model path', required=True) 20 | parser.add_argument('--scpath', type=str, help='scaler path', required=True) 21 | 22 | parser.add_argument('--stockdir', type=str, help='stock') 23 | parser.add_argument('--redditdir', type=str, help='reddit') 24 | parser.add_argument('--twitterdir', type=str, help='twitter') 25 | parser.add_argument('--years', nargs='+', help='Years') 26 | 27 | args = parser.parse_args() 28 | args.years = [int(year) for year in args.years] 29 | 30 | ############### Model loading #################### 31 | 32 | model = get_LSTM_model(args.window_size, 11) 33 | model.load_weights(args.modelpath) 34 | 35 | sc_dict = pickle.load(open(args.scpath, "rb")) 36 | 37 | #### Get last entry from train data to kick start test data 38 | dates_pre_test, X_pre_test_indi = load_data(args.company, args.stockdir, args.redditdir, args.twitterdir, [args.years[0] - 1], ['Close','positive_reddit','negative_reddit', 'neutral_reddit', 'count_reddit', 'positive_twitter', 'negative_twitter','neutral_twitter', 'count_twitter', 'retweet_num_twitter', 'like_num_twitter']) 39 | 40 | ############### Evaluate on test set #################### 41 | dates_test, X_test_indi = load_data(args.company, args.stockdir, args.redditdir, args.twitterdir, args.years, ['Close','positive_reddit','negative_reddit', 'neutral_reddit', 'count_reddit', 'positive_twitter', 'negative_twitter','neutral_twitter', 'count_twitter', 'retweet_num_twitter', 'like_num_twitter']) 42 | 43 | dates_pre_test_win = dates_pre_test[-args.window_size:] 44 | X_pre_test_indi_win = X_pre_test_indi[-args.window_size:] 45 | 46 | dates_test = dates_pre_test_win + dates_test 47 | X_test_indi = np.concatenate((X_pre_test_indi_win, X_test_indi)) 48 | 49 | ## test: Scale data, combine data into sequences then send into the model 50 | X_test_indi = transform_scalers(X_test_indi, sc_dict) 51 | 52 | X_test = [] 53 | y_test = [] 54 | for i in range(args.window_size, 
X_test_indi.shape[0]): 55 | X_test.append(X_test_indi[i-args.window_size:i, :]) 56 | y_test.append(X_test_indi[i, :][0]) # just closing price 57 | 58 | X_test, y_test = np.array(X_test), np.array(y_test) 59 | print(X_test.shape, y_test.shape) 60 | 61 | predicted_stock_price = model.predict(X_test) 62 | hist_test = model.evaluate(X_test, y_test) 63 | 64 | predicted_stock_price = sc_dict["Close"].inverse_transform(predicted_stock_price.reshape(-1, 1)).reshape(-1) 65 | y_test = sc_dict["Close"].inverse_transform(y_test.reshape(-1, 1)).reshape(-1) 66 | 67 | RMSE_score = (np.sum(np.power(y_test - predicted_stock_price, 2))) / y_test.shape[0] 68 | RMSE_score = np.sqrt(RMSE_score) 69 | 70 | print("RMSE is {}".format(RMSE_score)) 71 | 72 | viz_test_vs_pred("RMSE: {:.2f}".format(RMSE_score), dates_test[args.window_size:], y_test, predicted_stock_price) 73 | 74 | 75 | -------------------------------------------------------------------------------- /backend/runs.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | python3 main_keras.py --window_size 90 --company TSLA --stockdir stock_data/tesla/ --redditdir reddit/TSLA/ --twitterdir twitter/TSLA/ --trainyears 2015 2016 2017 2018 --testyears 2019 4 | 5 | python3 predict_keras.py --window_size 90 --company TSLA --modelpath weights/best_weights_TSLA_wsize90.hdf5 --scpath weights/TSLA_wsize90_sc_dict.p --stockdir stock_data/tesla/ --redditdir reddit/TSLA/ --twitterdir twitter/TSLA/ --years 2019 6 | 7 | python3 extrapolate_backend.py --window_size 90 --company TSLA --modelpath weights/best_weights_TSLA_wsize90.hdf5 --scpath weights/TSLA_wsize90_sc_dict.p --fdpath Tesla_5_weeks.csv --stockdir stock_data/tesla/ --redditdir reddit/TSLA/ --twitterdir twitter/TSLA/ --years 2019 -------------------------------------------------------------------------------- /backend/server.py: -------------------------------------------------------------------------------- 1 | # Imports 2 | import os 3 | import tweepy as tw 4 | import pandas as pd 5 | from extrapolate_backend import extrapolate_func 6 | import csv 7 | from flask import Flask, request, jsonify 8 | from flask_cors import CORS, cross_origin 9 | from datetime import datetime, timedelta 10 | from dotenv import load_dotenv 11 | from finBert.sentiment import get_prediction 12 | 13 | # Need to do one time 14 | #import nltk 15 | #import ssl 16 | #try: 17 | #_create_unverified_https_context = ssl._create_unverified_context 18 | #except AttributeError: 19 | #pass 20 | #else: 21 | #ssl._create_default_https_context = _create_unverified_https_context 22 | #nltk.download('punkt') 23 | 24 | # Load env variables 25 | load_dotenv() 26 | 27 | # Flask configuration 28 | app = Flask(__name__) 29 | cors = CORS(app) 30 | app.config['CORS_HEADERS'] = 'Content-Type' 31 | 32 | # API for getting sentiment of popular tweets 33 | # in the last 7 days with finBert 34 | @app.route('/get_sentiment', methods=['POST']) 35 | @cross_origin() 36 | def get_sentiment(): 37 | date_since = str(datetime.now() - timedelta(days=7)).split(' ')[0] 38 | curr_date = str(datetime.now()).split(' ')[0] 39 | search_words = request.json['query'] + ' -filter:retweets until:'+curr_date 40 | 41 | # Initialize tweepy isntance 42 | auth = tw.OAuthHandler(os.getenv('consumer_key'), os.getenv('consumer_secret')) 43 | auth.set_access_token(os.getenv('access_token'), os.getenv('access_token_secret')) 44 | api = tw.API(auth, wait_on_rate_limit=True) 45 | 46 | # Get tweets 47 | tweets = tw.Cursor(api.search, 48 | 
q=search_words, 49 | lang="en", 50 | since=date_since, 51 | result_type="popular").items(100) 52 | 53 | tweet_list = [tweet for tweet in tweets] 54 | data = {'created_at': [], 'text': [], 'favorite_count': [], 'sentiment': []} 55 | created_at, text, favorite_count = [], [], [] 56 | for tweet in tweet_list: 57 | data['created_at'].append(str(tweet.created_at).split(' ')[0]) 58 | data['text'].append(tweet.text) 59 | data['favorite_count'].append(tweet.favorite_count) 60 | 61 | sentiment_list = get_prediction(data['text']) 62 | for sentiment in sentiment_list: 63 | data['sentiment'].append(sentiment) 64 | 65 | # Create dataframe 66 | df = pd.DataFrame(data) 67 | if (len(df.index) == 0) : 68 | return jsonify({'error': 'Could not find any tweets or input had invalid character such as "$"'}) 69 | keys1, keys2 = [], [] 70 | 71 | # Get data for graphs 72 | positive_df = df[df['sentiment']=='positive'] 73 | negative_df = df[df['sentiment']=='negative'] 74 | neutral_df = df[df['sentiment']=='neutral'] 75 | if len(positive_df.index) > 0: 76 | keys1.append('positive_count') 77 | keys2.append('positive') 78 | if len(negative_df.index) > 0: 79 | keys1.append('negative_count') 80 | keys2.append('negative') 81 | if len(neutral_df.index) > 0: 82 | keys1.append('neutral_count') 83 | keys2.append('neutral') 84 | 85 | 86 | cols = ['count', 'negative_count', 'neutral_count', 'positive_count'] 87 | grouped_df = pd.DataFrame([], columns=cols) 88 | grouped_df['count'] = df['sentiment'].groupby(df['created_at']).count() 89 | grouped_df[keys1] = \ 90 | df.reset_index().groupby(['created_at','sentiment'], as_index=False)\ 91 | .size()\ 92 | .pivot(index='created_at', columns='sentiment', values='size')\ 93 | [keys2]\ 94 | .fillna(0) 95 | idx_negative = negative_df.groupby(['created_at'])['favorite_count'].transform(max) == negative_df['favorite_count'] 96 | top_negative = negative_df[idx_negative] 97 | idx_positive = positive_df.groupby(['created_at'])['favorite_count'].transform(max) == positive_df['favorite_count'] 98 | top_positive = positive_df[idx_positive] 99 | idx_neutral = neutral_df.groupby(['created_at'])['favorite_count'].transform(max) == neutral_df['favorite_count'] 100 | top_neutral = neutral_df[idx_neutral] 101 | x = pd.merge(grouped_df, top_negative, on='created_at', how='outer').fillna('N/A') 102 | x = x.drop(['sentiment'], axis=1) 103 | x = x.rename(columns={"text": "negative_text", "favorite_count": "negative_favorites"}) 104 | x = pd.merge(x, top_positive, on='created_at', how='outer').fillna('N/A') 105 | x = x.drop(['sentiment'], axis=1) 106 | x = x.rename(columns={"text": "positive_text", "favorite_count": "positive_favorites"}) 107 | x = pd.merge(x, top_neutral, on='created_at', how='outer').fillna('N/A') 108 | x = x.drop(['sentiment'], axis=1) 109 | x = x.rename(columns={"text": "neutral_text", "favorite_count": "neutral_favorites"}) 110 | 111 | output = {} 112 | for index, row in x.iterrows(): 113 | row_data = {} 114 | row_data['count'] = row['count'] 115 | row_data['negative_count'] = row['negative_count'] 116 | row_data['neutral_count'] = row['neutral_count'] 117 | row_data['positive_count'] = row['positive_count'] 118 | row_data['negative_text'] = row['negative_text'] 119 | row_data['negative_favorites'] = row['negative_favorites'] 120 | row_data['positive_text'] = row['positive_text'] 121 | row_data['positive_favorites'] = row['positive_favorites'] 122 | row_data['neutral_text'] = row['neutral_text'] 123 | row_data['neutral_favorites'] = row['neutral_favorites'] 124 | 
output[row['created_at']] = row_data 125 | 126 | return jsonify(output) 127 | 128 | @app.route('/', methods=['POST']) 129 | @cross_origin() 130 | def get_plots(): 131 | with open(f'{request.json["company"]}_{request.json["window"]}_5weeks.csv', 'w', newline='') as csvfile: 132 | spamwriter = csv.writer(csvfile, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL) 133 | for row in request.json["data"]: 134 | spamwriter.writerow(row) 135 | if request.json["company"] == "Tesla": 136 | if request.json["window"] == "30": 137 | result = extrapolate_func(int(request.json["window"]), "TSLA", "best_weights/best_weights_TSLA_wsize30.hdf5", 138 | "best_weights/TSLA_wsize30_sc_dict.p", f'{request.json["company"]}_{request.json["window"]}_5weeks.csv', "stock_data/tesla/", 139 | "reddit/TSLA/", "twitter/TSLA/", [2019]) 140 | elif request.json["window"] == "60": 141 | result = extrapolate_func(int(request.json["window"]), "TSLA", "best_weights/best_weights_TSLA_wsize60.hdf5", 142 | "best_weights/TSLA_wsize60_sc_dict.p", f'{request.json["company"]}_{request.json["window"]}_5weeks.csv', "stock_data/tesla/", 143 | "reddit/TSLA/", "twitter/TSLA/", [2019]) 144 | elif request.json["window"] == "90": 145 | result = extrapolate_func(int(request.json["window"]), "TSLA", "best_weights/best_weights_TSLA_wsize90.hdf5", 146 | "best_weights/TSLA_wsize90_sc_dict.p", f'{request.json["company"]}_{request.json["window"]}_5weeks.csv', "stock_data/tesla/", 147 | "reddit/TSLA/", "twitter/TSLA/", [2019]) 148 | elif request.json["company"] == "Amazon": 149 | if request.json["window"] == "30": 150 | result = extrapolate_func(int(request.json["window"]), "AMZN", "best_weights/best_weights_AMZN_wsize30.hdf5", 151 | "best_weights/AMZN_wsize30_sc_dict.p", f'{request.json["company"]}_{request.json["window"]}_5weeks.csv', "stock_data/amazon/", 152 | "reddit/AMZN/", "twitter/AMZN/", [2019]) 153 | elif request.json["window"] == "60": 154 | result = extrapolate_func(int(request.json["window"]), "AMZN", "best_weights/best_weights_AMZN_wsize60.hdf5", 155 | "best_weights/AMZN_wsize60_sc_dict.p", f'{request.json["company"]}_{request.json["window"]}_5weeks.csv', "stock_data/amazon/", 156 | "reddit/AMZN/", "twitter/AMZN/", [2019]) 157 | elif request.json["window"] == "90": 158 | result = extrapolate_func(int(request.json["window"]), "AMZN", "best_weights/best_weights_AMZN_wsize90.hdf5", 159 | "best_weights/AMZN_wsize90_sc_dict.p", f'{request.json["company"]}_{request.json["window"]}_5weeks.csv', "stock_data/amazon/", 160 | "reddit/AMZN/", "twitter/AMZN/", [2019]) 161 | elif request.json["company"] == "Apple": 162 | if request.json["window"] == "30": 163 | result = extrapolate_func(int(request.json["window"]), "AAPL", "best_weights/best_weights_AAPL_wsize30.hdf5", 164 | "best_weights/AAPL_wsize30_sc_dict.p", f'{request.json["company"]}_{request.json["window"]}_5weeks.csv', "stock_data/apple/", 165 | "reddit/AAPL/", "twitter/AAPL/", [2019]) 166 | elif request.json["window"] == "60": 167 | result = extrapolate_func(int(request.json["window"]), "AAPL", "best_weights/best_weights_AAPL_wsize60.hdf5", 168 | "best_weights/AAPL_wsize60_sc_dict.p", f'{request.json["company"]}_{request.json["window"]}_5weeks.csv', "stock_data/apple/", 169 | "reddit/AAPL/", "twitter/AAPL/", [2019]) 170 | elif request.json["window"] == "90": 171 | result = extrapolate_func(int(request.json["window"]), "AAPL", "best_weights/best_weights_AAPL_wsize90.hdf5", 172 | "best_weights/AAPL_wsize90_sc_dict.p", f'{request.json["company"]}_{request.json["window"]}_5weeks.csv', 
"stock_data/apple/", 173 | "reddit/AAPL/", "twitter/AAPL/", [2019]) 174 | elif request.json["company"] == "Microsoft": 175 | if request.json["window"] == "30": 176 | result = extrapolate_func(int(request.json["window"]), "MSFT", "best_weights/best_weights_MSFT_wsize30.hdf5", 177 | "best_weights/MSFT_wsize30_sc_dict.p", f'{request.json["company"]}_{request.json["window"]}_5weeks.csv', "stock_data/microsoft/", 178 | "reddit/MSFT/", "twitter/MSFT/", [2019]) 179 | elif request.json["window"] == "60": 180 | result = extrapolate_func(int(request.json["window"]), "MSFT", "best_weights/best_weights_MSFT_wsize60.hdf5", 181 | "best_weights/MSFT_wsize60_sc_dict.p", f'{request.json["company"]}_{request.json["window"]}_5weeks.csv', "stock_data/microsoft/", 182 | "reddit/MSFT/", "twitter/MSFT/", [2019]) 183 | elif request.json["window"] == "90": 184 | result = extrapolate_func(int(request.json["window"]), "MSFT", "best_weights/best_weights_MSFT_wsize90.hdf5", 185 | "best_weights/MSFT_wsize90_sc_dict.p", f'{request.json["company"]}_{request.json["window"]}_5weeks.csv', "stock_data/microsoft/", 186 | "reddit/MSFT/", "twitter/MSFT/", [2019]) 187 | elif request.json["company"] == "Google": 188 | if request.json["window"] == "30": 189 | result = extrapolate_func(int(request.json["window"]), "GOOGL", "best_weights/best_weights_GOOGL_wsize30.hdf5", 190 | "best_weights/GOOGL_wsize30_sc_dict.p", f'{request.json["company"]}_{request.json["window"]}_5weeks.csv', "stock_data/google/", 191 | "reddit/GOOGL/", "twitter/GOOGL/", [2019]) 192 | elif request.json["window"] == "60": 193 | result = extrapolate_func(int(request.json["window"]), "GOOGL", "best_weights/best_weights_GOOGL_wsize60.hdf5", 194 | "best_weights/GOOGL_wsize60_sc_dict.p", f'{request.json["company"]}_{request.json["window"]}_5weeks.csv', "stock_data/google/", 195 | "reddit/GOOGL/", "twitter/GOOGL/", [2019]) 196 | elif request.json["window"] == "90": 197 | result = extrapolate_func(int(request.json["window"]), "GOOGL", "best_weights/best_weights_GOOGL_wsize90.hdf5", 198 | "best_weights/GOOGL_wsize90_sc_dict.p", f'{request.json["company"]}_{request.json["window"]}_5weeks.csv', "stock_data/google/", 199 | "reddit/GOOGL/", "twitter/GOOGL/", [2019]) 200 | return jsonify(result) 201 | 202 | if __name__ == "__main__": 203 | app.run(host="0.0.0.0", port=5000, debug=True) -------------------------------------------------------------------------------- /backend/starter_code.py: -------------------------------------------------------------------------------- 1 | import pandas as pd 2 | import numpy as np 3 | import datetime 4 | import os, sys 5 | 6 | 7 | def starter_code(): 8 | company = sys.argv[1].upper() 9 | 10 | reddit_path = company + '_reddit_2015_to_2019.csv' 11 | twitter_path = company + '_tweets_2015_to_2019.csv' 12 | 13 | reddit_df = pd.read_csv(reddit_path) 14 | twitter_df = pd.read_csv(twitter_path) 15 | 16 | reddit_df = reddit_df.loc[:, ~reddit_df.columns.str.contains('^Unnamed')] 17 | twitter_df = twitter_df.loc[:, ~twitter_df.columns.str.contains('^Unnamed')] 18 | 19 | print('Creating the data for each year ...') 20 | 21 | for year in pd.to_datetime(twitter_df['post_date'], unit='s').dt.year.unique(): 22 | 23 | tw_df = twitter_df[pd.to_datetime(twitter_df['post_date'], unit='s').dt.year==year] 24 | red_df = reddit_df[pd.to_datetime(reddit_df['post_date'], unit='s').dt.year==year] 25 | 26 | tw_out_name = str(company) + '_tweets_' + str(year) + '.csv' 27 | outdir = './twitter/' + company 28 | if not os.path.exists(outdir): 29 | os.mkdir(outdir) 30 | 
fullname = os.path.join(outdir, tw_out_name) 31 | tw_df.to_csv(fullname) 32 | 33 | red_out_name = str(company) + '_reddit_' + str(year) + '.csv' 34 | outdir = './reddit/'+ company 35 | if not os.path.exists(outdir): 36 | os.mkdir(outdir) 37 | fullname = os.path.join(outdir, red_out_name) 38 | red_df.to_csv(fullname) 39 | 40 | print('year ' + str(year) + ' done') 41 | 42 | if __name__ == '__main__': 43 | starter_code() -------------------------------------------------------------------------------- /backend/utils.py: -------------------------------------------------------------------------------- 1 | import matplotlib.pyplot as plt 2 | import numpy as np 3 | from sklearn.preprocessing import MinMaxScaler 4 | 5 | def init_scalers(): 6 | 7 | return { 8 | "Close": MinMaxScaler(feature_range = (0, 1)), 9 | "count_reddit": MinMaxScaler(feature_range = (0, 1)), 10 | "count_twitter": MinMaxScaler(feature_range = (0, 1)), 11 | "retweet_num_twitter": MinMaxScaler(feature_range = (0, 1)), 12 | "like_num_twitter": MinMaxScaler(feature_range = (0, 1)), 13 | } 14 | 15 | 16 | def fit_scalers(X_train_indi, sc_dict): 17 | 18 | X_train_indi[:, 0] = sc_dict["Close"].fit_transform(X_train_indi[:, 0].reshape(-1, 1)).reshape(-1) 19 | X_train_indi[:, 4] = sc_dict["count_reddit"].fit_transform(X_train_indi[:, 4].reshape(-1, 1)).reshape(-1) 20 | X_train_indi[:, 8] = sc_dict["count_twitter"].fit_transform(X_train_indi[:, 8].reshape(-1, 1)).reshape(-1) 21 | X_train_indi[:, 9] = sc_dict["retweet_num_twitter"].fit_transform(X_train_indi[:, 9].reshape(-1, 1)).reshape(-1) 22 | X_train_indi[:, 10] = sc_dict["like_num_twitter"].fit_transform(X_train_indi[:, 10].reshape(-1, 1)).reshape(-1) 23 | 24 | return X_train_indi, sc_dict 25 | 26 | def transform_scalers(X_train_indi, sc_dict): 27 | 28 | X_train_indi[:, 0] = sc_dict["Close"].transform(X_train_indi[:, 0].reshape(-1, 1)).reshape(-1) 29 | X_train_indi[:, 4] = sc_dict["count_reddit"].transform(X_train_indi[:, 4].reshape(-1, 1)).reshape(-1) 30 | X_train_indi[:, 8] = sc_dict["count_twitter"].transform(X_train_indi[:, 8].reshape(-1, 1)).reshape(-1) 31 | X_train_indi[:, 9] = sc_dict["retweet_num_twitter"].transform(X_train_indi[:, 9].reshape(-1, 1)).reshape(-1) 32 | X_train_indi[:, 10] = sc_dict["like_num_twitter"].transform(X_train_indi[:, 10].reshape(-1, 1)).reshape(-1) 33 | 34 | return X_train_indi 35 | 36 | def viz_test_vs_pred(title, dates, dataset_test, predicted_stock_price, show=True): 37 | 38 | # Visualising the results 39 | plt.plot(dates, dataset_test, color = "red", label = "Real Stock Price") 40 | plt.plot(dates, predicted_stock_price, color = "blue", label = "Predicted Stock Price") 41 | 42 | plt.title(title) 43 | plt.xlabel('Time') 44 | plt.ylabel('Stock Price') 45 | plt.legend() 46 | 47 | if show: 48 | plt.show() 49 | else: 50 | plt.savefig('./plots/' + title + '.png') -------------------------------------------------------------------------------- /frontend/README.md: -------------------------------------------------------------------------------- 1 | # Getting Started with Create React App 2 | 3 | This project was bootstrapped with [Create React App](https://github.com/facebook/create-react-app). 4 | 5 | ## Available Scripts 6 | 7 | In the project directory, you can run: 8 | 9 | ### `yarn start` 10 | 11 | Runs the app in the development mode.\ 12 | Open [http://localhost:3000](http://localhost:3000) to view it in the browser. 13 | 14 | The page will reload if you make edits.\ 15 | You will also see any lint errors in the console. 
16 | 17 | ### `yarn test` 18 | 19 | Launches the test runner in the interactive watch mode.\ 20 | See the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information. 21 | 22 | ### `yarn build` 23 | 24 | Builds the app for production to the `build` folder.\ 25 | It correctly bundles React in production mode and optimizes the build for the best performance. 26 | 27 | The build is minified and the filenames include the hashes.\ 28 | Your app is ready to be deployed! 29 | 30 | See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information. 31 | 32 | ### `yarn eject` 33 | 34 | **Note: this is a one-way operation. Once you `eject`, you can’t go back!** 35 | 36 | If you aren’t satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project. 37 | 38 | Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own. 39 | 40 | You don’t have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it. 41 | 42 | ## Learn More 43 | 44 | You can learn more in the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started). 45 | 46 | To learn React, check out the [React documentation](https://reactjs.org/). 
47 | 48 | ### Code Splitting 49 | 50 | This section has moved here: [https://facebook.github.io/create-react-app/docs/code-splitting](https://facebook.github.io/create-react-app/docs/code-splitting) 51 | 52 | ### Analyzing the Bundle Size 53 | 54 | This section has moved here: [https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size](https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size) 55 | 56 | ### Making a Progressive Web App 57 | 58 | This section has moved here: [https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app](https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app) 59 | 60 | ### Advanced Configuration 61 | 62 | This section has moved here: [https://facebook.github.io/create-react-app/docs/advanced-configuration](https://facebook.github.io/create-react-app/docs/advanced-configuration) 63 | 64 | ### Deployment 65 | 66 | This section has moved here: [https://facebook.github.io/create-react-app/docs/deployment](https://facebook.github.io/create-react-app/docs/deployment) 67 | 68 | ### `yarn build` fails to minify 69 | 70 | This section has moved here: [https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify](https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify) 71 | -------------------------------------------------------------------------------- /frontend/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "client", 3 | "version": "0.1.0", 4 | "private": true, 5 | "dependencies": { 6 | "@fortawesome/fontawesome-svg-core": "^1.2.35", 7 | "@fortawesome/free-solid-svg-icons": "^5.15.3", 8 | "@fortawesome/react-fontawesome": "^0.1.14", 9 | "@testing-library/jest-dom": "^5.11.4", 10 | "@testing-library/react": "^11.1.0", 11 | "@testing-library/user-event": "^12.1.10", 12 | "d3": "^6.7.0", 13 | "d32": "npm:d3@3.5.17", 14 | "html-loader": "^2.1.2", 15 | "react": "^17.0.2", 16 | "react-bootstrap": "^1.5.2", 17 | "react-d3-library": "^1.1.8", 18 | "react-dom": "^17.0.2", 19 | "react-pro-sidebar": "^0.6.0", 20 | "react-router-bootstrap": "^0.25.0", 21 | "react-router-dom": "^5.2.0", 22 | "react-scripts": "4.0.3", 23 | "react-spinners": "^0.10.6", 24 | "web-vitals": "^1.0.1" 25 | }, 26 | "scripts": { 27 | "start": "react-scripts start", 28 | "build": "react-scripts build", 29 | "test": "react-scripts test", 30 | "eject": "react-scripts eject" 31 | }, 32 | "eslintConfig": { 33 | "extends": [ 34 | "react-app", 35 | "react-app/jest" 36 | ] 37 | }, 38 | "browserslist": { 39 | "production": [ 40 | ">0.2%", 41 | "not dead", 42 | "not op_mini all" 43 | ], 44 | "development": [ 45 | "last 1 chrome version", 46 | "last 1 firefox version", 47 | "last 1 safari version" 48 | ] 49 | } 50 | } 51 | -------------------------------------------------------------------------------- /frontend/public/favicon.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rohitgajawada/Stock-Price-Prediction-Visualization/a5bc88e6a6a0718a27ac185df688e7a72472d026/frontend/public/favicon.ico -------------------------------------------------------------------------------- /frontend/public/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 |
4 | 5 | 6 | 7 | 8 | 9 | 13 | 14 | 23 | 29 |1155 | This graph allows you to view stock and social media sentiment data 1156 | for 5 companies from 2015 to 2020. 1157 |
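A quick way to see what data sits behind this dashboard is to open the bundled CSV directly. This is a minimal sketch, assuming the view is driven by the combined1.csv file shipped next to Financial.js; the column layout is not documented here, so the snippet prints it rather than assuming any names.

```python
# Minimal sketch: inspect the CSV that backs the Financial dashboard.
# Assumption: combined1.csv (bundled under frontend/src/components/Financial/)
# is in the current directory; no column names are assumed.
import pandas as pd

df = pd.read_csv("combined1.csv")
print(df.columns.tolist())  # which price / sentiment fields are available
print(df.head())            # first few daily rows
```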
1158 |17 | This tool uses React + D3 + Flask to generate visualizations involving 18 | sentiment analysis of social media posts about stocks, financial data 19 | about stocks, and more! Click on a graph on the left to get started. 20 |
21 |286 | This graph allows you to adjust different social media sentiment 287 | values and see the impact on a company's closing stock price. 288 |
289 |294 | | Reddit Sentiment | 295 |Number of Reddit Posts | 296 |Twitter Sentiment | 297 |Number of Tweets | 298 |Tweet Activity (Retweets and Likes) | 299 |
---|---|---|---|---|---|
Week 1 | 304 |305 | 306 | 310 | | 311 |312 | 313 | 317 | | 318 |319 | 320 | 324 | | 325 |326 | 327 | 331 | | 332 |333 | 334 | 338 | | 339 |
Week 2 | 342 |343 | 344 | 348 | | 349 |350 | 351 | 355 | | 356 |357 | 358 | 362 | | 363 |364 | 365 | 369 | | 370 |371 | 372 | 376 | | 377 |
Week 3 | 380 |381 | 382 | 386 | | 387 |388 | 389 | 393 | | 394 |395 | 396 | 400 | | 401 |402 | 403 | 407 | | 408 |409 | 410 | 414 | | 415 |
Week 4 | 418 |419 | 420 | 424 | | 425 |426 | 427 | 431 | | 432 |433 | 434 | 438 | | 439 |440 | 441 | 445 | | 446 |447 | 448 | 452 | | 453 |
Week 5 | 456 |457 | 458 | 462 | | 463 |464 | 465 | 469 | | 470 |471 | 472 | 476 | | 477 |478 | 479 | 483 | | 484 |485 | 486 | 490 | | 491 |
496 | Please select which company and window size you want to predict the 497 | next month for. A smaller window size means accepting more 498 | volatility in the predicted price. 499 |
500 |503 | We are only predicting for January 2020 due to data limitations. We 504 | are confident in our model's ability to predict real-time stock prices 505 | given real-time access to stock + social media data. 506 |
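For reference, the Prediction page ultimately submits the weekly settings table above to the Flask backend's "/" route (see server.py), which writes the rows to a {company}_{window}_5weeks.csv file and runs extrapolate_func with the matching weights. Below is a hedged sketch of that request made by hand, assuming the backend is running locally; the per-row layout of "data" is an assumption that should mirror one row of the bundled *_5weeks.csv files, so the values are illustrative only.

```python
# Hedged sketch of the request the Prediction page sends to the Flask "/" route.
# Assumptions: backend running on localhost:5000, and each entry in "data" mirrors
# one row of the bundled *_5weeks.csv files; the numbers below are illustrative only.
import requests

payload = {
    "company": "Tesla",   # one of: Tesla, Amazon, Apple, Microsoft, Google
    "window": "90",       # window size as a string: "30", "60", or "90"
    "data": [
        # one row of user-chosen sentiment settings per week (5 weeks)
        ["0.5", "120", "0.6", "800", "15000"],
        ["0.4", "110", "0.5", "750", "14000"],
        ["0.6", "130", "0.7", "900", "16000"],
        ["0.5", "125", "0.6", "850", "15500"],
        ["0.7", "140", "0.8", "950", "17000"],
    ],
}

resp = requests.post("http://localhost:5000/", json=payload)
print(resp.json())  # predicted prices produced by extrapolate_func
```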
507 |Top ${type} Tweet
184 |${tweet}
185 |Number of Favorites: ${favorites}
186 |Date Posted: ${d.data.date}
187 |210 | This graph pulls the latest "popular" tweets from the past 7 days 211 | according to the Twitter API and analyzes the sentiment of these 212 | tweets with finBert 213 |
214 |
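Finally, the recent-sentiment view is backed by the /get_sentiment route in server.py, which pulls up to 100 "popular" tweets from the past 7 days, classifies each one with finBERT, and returns per-day counts plus the most-favorited tweet of each sentiment. A minimal sketch of calling it directly, assuming the backend is running locally with valid Twitter credentials in .env:

```python
# Minimal sketch: query the backend's /get_sentiment route directly.
# Assumptions: Flask server running on localhost:5000 and valid Twitter credentials in .env.
import requests

resp = requests.post("http://localhost:5000/get_sentiment", json={"query": "TSLA"})
result = resp.json()

if "error" in result:
    print(result["error"])
else:
    # keys are dates within the past 7 days; each value carries per-sentiment counts
    # plus the most-favorited positive/negative/neutral tweet for that day
    for date, row in sorted(result.items()):
        print(date, row["positive_count"], row["negative_count"], row["neutral_count"])
```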