├── LICENSE ├── README.md ├── history.txt ├── live.txt ├── no_dash.py ├── static_tpo.py ├── tpo_helper.py ├── tpo_project.py └── version_0.2.0 ├── changes.txt ├── static_tpo_slider.py ├── static_tpo_v2.py ├── tpo_helper2.py └── tpo_project_v2.py /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2020 beinghorizontal 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # TPO project in Python 2 | This is a pure Python project to calculate various Market profile related operations in python and visualize them interactively with the help of plotly and dash components. 3 | The motivation behind this project is to feed price distribution (TPOs with alphabets) and other visual cues to convolution neural networks and see if NN learns from the visualization. Therefore this project will be heavy with all the visual cues, colours and different shapes. 4 | While working on the project realized that it is possible to use this entire framework for regular charting as well. 5 | 6 | Okay, let's get started. 7 | ## Dependencies 8 | pip install plotly (update if previously installed) 9 | 10 | pip install dash (update if previously installed) 11 | 12 | pip install pandas 13 | 14 | Finally, download all the files or clone this repo as zip and extract in local folder. 15 | 16 | ## Steps for live chart 17 | 18 | 1. Open tpo_project.py from the folder in python editor 19 | 20 | 2. Run the code 21 | 22 | 3. In the console, you will see the message something like Running on http://127.0.0.1:8050/ copy the server address your console printed and paste it in your browser 23 | 24 | 4. That's it, you will get the TPO chart in browser with sample file provided in the folder. 25 | 26 | 5. About sample file: There is live.txt file in the folder. Add data manually and you should see new updated price in the chart. 27 | 28 | 6. Marker shows the last closing price. The market color changes to green if price >= POC else it is red. 29 | 30 | 7. If the test runs successfully replace live and historical data with your desired data feed. 31 | 32 | 8. Stop the server by pressing ctrl + c in the console. 33 | 34 | ## For static chart 35 | 36 | Simply run static_tpo.py and the chart will open in your browser. It uses a single file as a source for generating charts (unlike dash version which uses history.txt and live.txt), you can replace history.txt with your own data while making sure header names are matching. 37 | 38 | ## For static chart with slider 39 | 40 | Run 'static_tpo_slider.py'. Copy or click the IP address and chart will open in the browser. It uses Dash for dynamically updating Y-axis as you move the slider on X-axis. Minimum number of days to display set to 5, you can increase if you want by sliding to the left. 41 | The reason using Dash for slider because PLotly's zoomin option do not update Y-axis dynamically which makes chart really squeezed up on larger range. 42 | 43 | ## Important 44 | 45 | 1. Match data format and headers exactly with given sample file. 46 | 47 | 2. It is not necessary to have the same data source for historical and live data as long as the format matches. It wouldn't matter even if symbol name is different, or there is price overlap the algorithm will drop the duplicate entries automatically. 48 | 2. Algorithm uses historical data for generating context. In tpo_project.py at the start, there is 'avglen' parameter. That number should be lower or equal to the sample size of your historical data. If you have 31 days of historical data then use avglen = 30 49 | 50 | 3. Make sure your historical file is up to date, especially entries representing day 1 and day 2. The algorithm automatically calculates session time and Initial Balance period using that info. IB is of 60 minutes. It assumes you have 1-minute data. I haven't tested on daily data. 51 | 52 | 4. If your live feed is of 1 minute instead of a tick, you may want to increase the refresh time 'refresh_int' which is set to default 1 second. 53 | 54 | 5. Chart update will not work until live file gets at least 30 minutes worth of data. To remove the limitation reduce frequency-time 'freq' from default 30 to 5 minutes or lower. But it will affect TPO chart calculations. So if possible don't change the parameter. 55 | 56 | ## Do you have a dash compatability issue? 57 | 58 | Run no_dash.py The flow is exactly the same as live version but without dash. 59 | 60 | ## General overview of the TPO chart produced by the algo 61 | 62 | ![How to read TPO chart generated by our algo-1](https://user-images.githubusercontent.com/28746824/89121455-a7b87a00-d4dc-11ea-9df1-6bc1bf897ac6.png) 63 | 64 | ## Bubbles at the top: 65 | ![How to read charts. Hover menu -2](https://user-images.githubusercontent.com/28746824/89723894-e341cf80-da19-11ea-84cd-a575f0a83bcc.png) 66 | 67 | ## Square boxes at the bottom: 68 | ![How to read the charts: square boxes -3](https://user-images.githubusercontent.com/28746824/89723608-65c89000-da16-11ea-9153-d6c7b11d1ed5.png) 69 | 70 | ## About Insights in square boxes and bubbles: 71 | eg for square box. IB>yPOC 72 | Explanation: It means Mid of IB range is above Yesterday's POC ( point of control ) but below yesterday's value area high (VAH)For that the algorithm gave the weightage of '1' 73 | RF means Rotational factor. RF Calculated on 1 minute bar. For a single bar RF can be at maximum 3 when there is higher high, lower low and higher close. Minimum value is -3 for a single bar. Then we sup up all the values for the day for daily RF and for IB RF summation for the 1st 1 hr. 74 | If you observe the breakdown for IB values, it doesn't have a lookahead bias so whatever the prediction or the strength of the IB at the end of 1st hour of the session it will remain the same and won't get updated. If size and color of squares matches with bubbles (which gets updated as day progress) IB predictions were right on the track. 75 | 76 | ## New static TPO with slider window: 77 | ![Slider window](https://user-images.githubusercontent.com/28746824/89724057-d7570d00-da1b-11ea-9d8a-6a5c93b36c2e.png) 78 | 79 | 80 | Everything cannot be explained here, if you're interested do read this small handbook on Auction Market Theory 81 | CBOT Market Profile handbook (it is 13 mb pdf file) https://t.co/L8DfNkLNi5?amp=1 82 | 83 | -------------------------------------------------------------------------------- /live.txt: -------------------------------------------------------------------------------- 1 | symbol,datetime,Open,High,Low,Close,Volume 2 | Q_NF1,20200731 09:15:00,11123.65,11136.0,11064.45,11076.2,227100 3 | Q_NF1,20200731 09:16:00,11074.25,11085.45,11070.35,11071.0,357150 4 | Q_NF1,20200731 09:17:00,11071.0,11078.0,11070.0,11076.9,454725 5 | Q_NF1,20200731 09:18:00,11076.1,11081.95,11067.0,11068.4,529725 6 | Q_NF1,20200731 09:19:00,11068.65,11073.0,11066.0,11070.35,605100 7 | Q_NF1,20200731 09:20:00,11068.75,11068.95,11054.85,11062.5,727275 8 | Q_NF1,20200731 09:21:00,11060.85,11078.95,11060.75,11078.5,801600 9 | Q_NF1,20200731 09:22:00,11077.0,11078.35,11066.1,11072.45,860400 10 | Q_NF1,20200731 09:23:00,11074.95,11087.95,11073.95,11086.0,927975 11 | Q_NF1,20200731 09:24:00,11086.0,11097.8,11083.0,11092.25,1008075 12 | Q_NF1,20200731 09:25:00,11092.0,11095.0,11086.25,11092.0,1062225 13 | Q_NF1,20200731 09:26:00,11092.95,11115.6,11091.65,11115.1,1146450 14 | Q_NF1,20200731 09:27:00,11113.1,11126.0,11112.5,11121.7,1245225 15 | Q_NF1,20200731 09:28:00,11120.0,11122.15,11111.8,11117.5,1291875 16 | Q_NF1,20200731 09:29:00,11116.85,11124.9,11115.85,11120.2,1323375 17 | Q_NF1,20200731 09:30:00,11118.65,11122.0,11106.6,11110.15,1388550 18 | Q_NF1,20200731 09:31:00,11113.5,11115.6,11097.0,11100.5,1447050 19 | Q_NF1,20200731 09:32:00,11101.2,11103.95,11093.5,11098.5,1495875 20 | Q_NF1,20200731 09:33:00,11100.75,11101.0,11087.0,11089.0,1549350 21 | Q_NF1,20200731 09:34:00,11087.25,11103.2,11087.25,11099.0,1593075 22 | Q_NF1,20200731 09:35:00,11100.9,11108.0,11093.3,11106.2,1625250 23 | Q_NF1,20200731 09:36:00,11109.0,11111.95,11102.55,11109.45,1662000 24 | Q_NF1,20200731 09:37:00,11108.25,11116.95,11106.65,11111.45,1696650 25 | Q_NF1,20200731 09:38:00,11111.45,11113.0,11105.0,11109.0,1733175 26 | Q_NF1,20200731 09:39:00,11109.0,11114.9,11108.6,11112.5,1768125 27 | Q_NF1,20200731 09:40:00,11112.25,11128.7,11112.25,11120.0,1851750 28 | Q_NF1,20200731 09:41:00,11120.0,11129.95,11113.9,11125.35,1909800 29 | Q_NF1,20200731 09:42:00,11127.75,11145.0,11123.85,11141.75,2029125 30 | Q_NF1,20200731 09:43:00,11142.0,11143.0,11127.2,11127.65,2091525 31 | Q_NF1,20200731 09:44:00,11126.3,11138.85,11125.0,11132.0,2124525 32 | Q_NF1,20200731 09:45:00,11132.45,11150.0,11132.45,11147.9,2230125 33 | Q_NF1,20200731 09:46:00,11146.15,11148.45,11138.25,11141.0,2283150 34 | Q_NF1,20200731 09:47:00,11141.1,11145.8,11133.15,11138.0,2318550 35 | Q_NF1,20200731 09:48:00,11136.45,11139.1,11130.55,11132.75,2349600 36 | Q_NF1,20200731 09:49:00,11133.0,11138.8,11130.25,11137.0,2376600 37 | Q_NF1,20200731 09:50:00,11135.95,11139.0,11131.6,11136.35,2395575 38 | Q_NF1,20200731 09:51:00,11136.35,11137.45,11123.05,11126.45,2433000 39 | Q_NF1,20200731 09:52:00,11127.1,11129.65,11125.05,11126.0,2450400 40 | Q_NF1,20200731 09:53:00,11125.8,11130.0,11123.9,11127.4,2469825 41 | Q_NF1,20200731 09:54:00,11127.0,11136.1,11126.95,11136.1,2493300 42 | Q_NF1,20200731 09:55:00,11135.0,11140.0,11129.0,11130.2,2520600 43 | Q_NF1,20200731 09:56:00,11129.65,11130.55,11118.55,11123.5,2568975 44 | Q_NF1,20200731 09:57:00,11121.25,11126.0,11115.7,11123.35,2595750 45 | Q_NF1,20200731 09:58:00,11123.05,11123.85,11112.0,11114.0,2636325 46 | Q_NF1,20200731 09:59:00,11114.4,11119.0,11112.5,11115.0,2666700 47 | Q_NF1,20200731 10:00:00,11115.0,11117.7,11112.3,11116.0,2702775 48 | Q_NF1,20200731 10:01:00,11116.0,11128.5,11115.9,11125.0,2736525 49 | Q_NF1,20200731 10:02:00,11124.8,11129.0,11122.0,11123.4,2755350 50 | Q_NF1,20200731 10:03:00,11122.9,11123.45,11114.4,11114.75,2779500 51 | Q_NF1,20200731 10:04:00,11113.95,11113.95,11101.9,11105.3,2857800 52 | Q_NF1,20200731 10:05:00,11107.4,11107.85,11101.0,11101.2,2902950 53 | Q_NF1,20200731 10:06:00,11101.0,11102.0,11081.9,11084.7,3036750 54 | Q_NF1,20200731 10:07:00,11084.4,11097.7,11083.75,11094.85,3107850 55 | Q_NF1,20200731 10:08:00,11094.5,11097.0,11088.0,11095.0,3163725 56 | Q_NF1,20200731 10:09:00,11093.75,11097.0,11087.0,11096.05,3212775 57 | Q_NF1,20200731 10:10:00,11096.3,11096.35,11080.25,11080.25,3264375 58 | Q_NF1,20200731 10:11:00,11080.1,11086.05,11075.25,11085.5,3320700 59 | Q_NF1,20200731 10:12:00,11086.0,11092.0,11084.55,11090.95,3353175 60 | Q_NF1,20200731 10:13:00,11090.75,11091.4,11082.4,11091.0,3373725 61 | Q_NF1,20200731 10:14:00,11092.0,11093.6,11085.85,11089.7,3403275 62 | Q_NF1,20200731 10:15:00,11090.15,11093.0,11081.85,11093.0,3440100 63 | Q_NF1,20200731 10:16:00,11092.0,11107.0,11091.9,11099.8,3493725 64 | Q_NF1,20200731 10:17:00,11099.8,11103.3,11094.0,11102.7,3513675 65 | Q_NF1,20200731 10:18:00,11101.0,11102.75,11095.0,11096.6,3528150 66 | Q_NF1,20200731 10:19:00,11097.8,11107.45,11096.05,11104.8,3547575 67 | Q_NF1,20200731 10:20:00,11103.9,11110.0,11102.0,11106.2,3592650 68 | Q_NF1,20200731 10:21:00,11106.2,11107.05,11097.55,11098.55,3611850 69 | Q_NF1,20200731 10:22:00,11098.55,11098.6,11095.0,11098.0,3628425 70 | Q_NF1,20200731 10:23:00,11097.0,11111.0,11097.0,11110.25,3660450 71 | Q_NF1,20200731 10:24:00,11112.3,11127.05,11110.5,11119.8,3740100 72 | Q_NF1,20200731 10:25:00,11118.0,11132.4,11117.9,11127.4,3802875 73 | Q_NF1,20200731 10:26:00,11124.5,11125.9,11117.25,11118.75,3836775 74 | Q_NF1,20200731 10:27:00,11121.0,11121.0,11108.9,11112.0,3874575 75 | Q_NF1,20200731 10:28:00,11110.2,11118.45,11108.0,11112.1,3903225 76 | Q_NF1,20200731 10:29:00,11112.1,11117.75,11112.1,11115.9,3913800 77 | Q_NF1,20200731 10:30:00,11117.5,11125.0,11115.25,11120.0,3933000 78 | Q_NF1,20200731 10:31:00,11121.0,11124.75,11115.05,11117.35,3948600 79 | Q_NF1,20200731 10:32:00,11115.55,11124.75,11115.55,11119.9,3962925 80 | Q_NF1,20200731 10:33:00,11119.95,11121.35,11110.2,11111.0,3978375 81 | Q_NF1,20200731 10:34:00,11110.0,11121.1,11110.0,11119.0,4047900 82 | Q_NF1,20200731 10:35:00,11119.0,11119.0,11100.1,11102.75,4082700 83 | Q_NF1,20200731 10:36:00,11102.65,11103.75,11097.65,11101.0,4111950 84 | Q_NF1,20200731 10:37:00,11101.85,11112.05,11100.0,11111.95,4230300 85 | Q_NF1,20200731 10:38:00,11112.0,11112.85,11105.0,11109.85,4272600 86 | Q_NF1,20200731 10:39:00,11110.1,11110.1,11105.0,11106.0,4283250 87 | Q_NF1,20200731 10:40:00,11105.85,11113.0,11101.0,11112.9,4310475 88 | Q_NF1,20200731 10:41:00,11113.05,11118.0,11108.7,11115.0,4334250 89 | Q_NF1,20200731 10:42:00,11115.55,11118.7,11111.25,11116.3,4362600 90 | Q_NF1,20200731 10:43:00,11116.3,11120.0,11112.45,11116.0,4406400 91 | Q_NF1,20200731 10:44:00,11116.0,11120.0,11115.8,11118.0,4417050 92 | Q_NF1,20200731 10:45:00,11118.5,11125.0,11115.3,11121.2,4445700 93 | Q_NF1,20200731 10:46:00,11122.65,11123.35,11117.0,11117.55,4458075 94 | Q_NF1,20200731 10:47:00,11117.0,11123.0,11116.0,11116.55,4472775 95 | Q_NF1,20200731 10:48:00,11116.7,11116.7,11106.95,11109.0,4498275 96 | Q_NF1,20200731 10:49:00,11110.0,11113.35,11103.55,11113.35,4520475 97 | Q_NF1,20200731 10:50:00,11111.75,11115.3,11110.0,11113.1,4534275 98 | Q_NF1,20200731 10:51:00,11111.4,11119.0,11111.0,11119.0,4544025 99 | Q_NF1,20200731 10:52:00,11116.35,11122.0,11116.15,11118.85,4560975 100 | Q_NF1,20200731 10:53:00,11118.85,11123.0,11118.85,11122.7,4584825 101 | Q_NF1,20200731 10:54:00,11123.0,11124.0,11115.0,11121.0,4607625 102 | Q_NF1,20200731 10:55:00,11121.0,11124.0,11118.0,11119.15,4620675 103 | Q_NF1,20200731 10:56:00,11118.0,11121.25,11115.0,11116.75,4640625 104 | Q_NF1,20200731 10:57:00,11116.75,11117.3,11110.85,11113.4,4660125 105 | Q_NF1,20200731 10:58:00,11114.0,11120.95,11112.65,11115.5,4672725 106 | Q_NF1,20200731 10:59:00,11115.5,11119.0,11108.0,11109.9,4686975 107 | Q_NF1,20200731 11:00:00,11108.5,11115.5,11108.0,11111.95,4703400 108 | Q_NF1,20200731 11:01:00,11110.9,11119.75,11110.9,11115.2,4715025 109 | Q_NF1,20200731 11:02:00,11115.0,11115.2,11106.45,11108.0,4726500 110 | Q_NF1,20200731 11:03:00,11107.85,11116.35,11107.65,11115.0,4746150 111 | Q_NF1,20200731 11:04:00,11114.75,11118.95,11114.0,11116.0,4753575 112 | Q_NF1,20200731 11:05:00,11115.0,11117.0,11108.4,11117.0,4763325 113 | Q_NF1,20200731 11:06:00,11116.0,11117.75,11111.0,11111.0,4771275 114 | Q_NF1,20200731 11:07:00,11111.0,11114.75,11108.0,11112.85,4794975 115 | Q_NF1,20200731 11:08:00,11111.7,11115.0,11109.5,11111.5,4800825 116 | Q_NF1,20200731 11:09:00,11111.5,11111.5,11105.0,11105.0,4823850 117 | Q_NF1,20200731 11:10:00,11105.0,11106.0,11090.05,11096.15,4929300 118 | Q_NF1,20200731 11:11:00,11096.25,11103.0,11094.45,11100.1,4967925 119 | Q_NF1,20200731 11:12:00,11099.95,11103.05,11094.3,11101.85,4983825 120 | Q_NF1,20200731 11:13:00,11100.35,11105.65,11098.35,11099.0,5005950 121 | Q_NF1,20200731 11:14:00,11099.0,11101.45,11096.25,11100.0,5025750 122 | Q_NF1,20200731 11:15:00,11099.55,11110.0,11099.55,11106.0,5043600 123 | Q_NF1,20200731 11:16:00,11106.0,11106.5,11103.75,11103.8,5052975 124 | Q_NF1,20200731 11:17:00,11103.8,11108.35,11100.6,11104.45,5074575 125 | Q_NF1,20200731 11:18:00,11104.45,11105.35,11099.65,11101.3,5098725 126 | Q_NF1,20200731 11:19:00,11101.3,11104.5,11096.85,11097.0,5121300 127 | Q_NF1,20200731 11:20:00,11096.0,11102.0,11095.3,11095.3,5134425 128 | Q_NF1,20200731 11:21:00,11096.25,11101.45,11095.0,11098.9,5146125 129 | Q_NF1,20200731 11:22:00,11099.25,11099.25,11091.05,11091.05,5173275 130 | Q_NF1,20200731 11:23:00,11090.15,11094.4,11090.0,11091.8,5209875 131 | Q_NF1,20200731 11:24:00,11090.3,11095.35,11090.3,11092.6,5249175 132 | Q_NF1,20200731 11:25:00,11091.15,11094.9,11084.0,11085.7,5284350 133 | Q_NF1,20200731 11:26:00,11085.7,11092.2,11081.3,11082.95,5334675 134 | Q_NF1,20200731 11:27:00,11081.0,11081.5,11072.05,11075.15,5440125 135 | Q_NF1,20200731 11:28:00,11073.0,11078.8,11071.05,11073.0,5484300 136 | Q_NF1,20200731 11:29:00,11073.0,11077.0,11070.9,11075.0,5544900 137 | Q_NF1,20200731 11:30:00,11075.0,11075.0,11056.55,11064.7,5696250 138 | Q_NF1,20200731 11:31:00,11065.2,11072.7,11063.0,11071.0,5734800 139 | Q_NF1,20200731 11:32:00,11070.65,11075.7,11068.15,11073.7,5776050 140 | Q_NF1,20200731 11:33:00,11074.0,11079.0,11071.85,11077.25,5806200 141 | Q_NF1,20200731 11:34:00,11075.9,11078.0,11072.0,11073.85,5825775 142 | Q_NF1,20200731 11:35:00,11073.85,11084.8,11073.85,11084.0,5852850 143 | Q_NF1,20200731 11:36:00,11082.9,11086.8,11078.0,11080.85,5883900 144 | Q_NF1,20200731 11:37:00,11080.85,11083.0,11073.2,11073.2,5898300 145 | Q_NF1,20200731 11:38:00,11072.1,11081.0,11071.7,11080.0,5912925 146 | Q_NF1,20200731 11:39:00,11080.35,11084.9,11080.0,11083.6,5927775 147 | Q_NF1,20200731 11:40:00,11082.0,11085.3,11081.0,11082.2,5945400 148 | Q_NF1,20200731 11:41:00,11082.4,11089.0,11081.5,11081.5,5965875 149 | Q_NF1,20200731 11:42:00,11081.5,11085.05,11073.2,11077.05,5982900 150 | Q_NF1,20200731 11:43:00,11078.6,11079.8,11075.0,11076.1,5992050 151 | Q_NF1,20200731 11:44:00,11077.25,11077.7,11067.95,11068.0,6023475 152 | Q_NF1,20200731 11:45:00,11067.55,11069.3,11061.65,11066.45,6067650 153 | Q_NF1,20200731 11:46:00,11066.7,11071.95,11061.6,11069.0,6099525 154 | Q_NF1,20200731 11:47:00,11070.0,11078.0,11068.15,11076.0,6119100 155 | Q_NF1,20200731 11:48:00,11075.35,11079.9,11071.15,11079.9,6135600 156 | Q_NF1,20200731 11:49:00,11079.9,11082.55,11075.2,11076.2,6157575 157 | Q_NF1,20200731 11:50:00,11075.9,11081.0,11074.0,11081.0,6174825 158 | Q_NF1,20200731 11:51:00,11079.95,11082.0,11077.7,11081.95,6185550 159 | Q_NF1,20200731 11:52:00,11081.95,11082.0,11078.0,11080.65,6193425 160 | Q_NF1,20200731 11:53:00,11081.5,11099.9,11081.0,11098.0,6263400 161 | Q_NF1,20200731 11:54:00,11097.25,11099.0,11091.0,11093.25,6285450 162 | Q_NF1,20200731 11:55:00,11092.95,11098.0,11091.8,11097.0,6302175 163 | Q_NF1,20200731 11:56:00,11097.0,11097.0,11091.0,11094.15,6322500 164 | Q_NF1,20200731 11:57:00,11094.1,11098.0,11093.1,11097.35,6333300 165 | Q_NF1,20200731 11:58:00,11097.35,11097.35,11091.0,11091.2,6348825 166 | Q_NF1,20200731 11:59:00,11091.2,11093.1,11087.65,11089.35,6363900 167 | Q_NF1,20200731 12:00:00,11089.4,11092.0,11082.55,11082.55,6387000 168 | Q_NF1,20200731 12:01:00,11083.95,11093.9,11082.4,11093.9,6400950 169 | Q_NF1,20200731 12:02:00,11094.0,11097.0,11092.15,11092.8,6412275 170 | Q_NF1,20200731 12:03:00,11092.8,11098.65,11092.8,11097.35,6428925 171 | Q_NF1,20200731 12:04:00,11094.5,11106.95,11094.4,11106.0,6493125 172 | Q_NF1,20200731 12:05:00,11105.25,11112.9,11103.4,11111.0,6562950 173 | Q_NF1,20200731 12:06:00,11111.0,11111.25,11103.6,11104.75,6592950 174 | Q_NF1,20200731 12:07:00,11103.3,11105.9,11100.0,11100.3,6610800 175 | Q_NF1,20200731 12:08:00,11102.0,11110.0,11100.3,11108.0,6644325 176 | Q_NF1,20200731 12:09:00,11108.0,11110.0,11102.0,11102.0,6660750 177 | Q_NF1,20200731 12:10:00,11101.9,11108.2,11101.9,11106.4,6677625 178 | Q_NF1,20200731 12:11:00,11106.7,11111.7,11105.0,11107.0,6705375 179 | Q_NF1,20200731 12:12:00,11105.05,11110.0,11105.0,11110.0,6723450 180 | Q_NF1,20200731 12:13:00,11110.0,11110.0,11096.9,11097.1,6750000 181 | Q_NF1,20200731 12:14:00,11100.0,11100.7,11092.0,11095.85,6775950 182 | Q_NF1,20200731 12:15:00,11095.75,11101.95,11095.75,11099.8,6788700 183 | Q_NF1,20200731 12:16:00,11099.8,11100.0,11088.6,11092.0,6821775 184 | Q_NF1,20200731 12:17:00,11092.0,11093.9,11088.6,11093.0,6845625 185 | Q_NF1,20200731 12:18:00,11093.0,11095.6,11092.0,11095.6,6855525 186 | Q_NF1,20200731 12:19:00,11097.9,11103.5,11095.0,11103.5,6884025 187 | Q_NF1,20200731 12:20:00,11105.95,11114.0,11102.75,11110.15,6923475 188 | Q_NF1,20200731 12:21:00,11110.1,11111.85,11106.0,11110.6,6951525 189 | Q_NF1,20200731 12:22:00,11110.6,11110.75,11106.25,11107.8,6969750 190 | Q_NF1,20200731 12:23:00,11107.8,11115.0,11106.1,11115.0,6986925 191 | Q_NF1,20200731 12:24:00,11115.0,11115.0,11098.6,11099.6,7020075 192 | Q_NF1,20200731 12:25:00,11099.1,11102.75,11081.1,11092.55,7076625 193 | Q_NF1,20200731 12:26:00,11091.0,11091.75,11081.0,11089.45,7110300 194 | Q_NF1,20200731 12:27:00,11087.8,11089.45,11081.0,11084.0,7127250 195 | Q_NF1,20200731 12:28:00,11084.85,11088.75,11082.6,11084.55,7142700 196 | Q_NF1,20200731 12:29:00,11084.55,11092.7,11084.55,11088.1,7154775 197 | Q_NF1,20200731 12:30:00,11087.6,11103.75,11084.0,11095.0,7182300 198 | Q_NF1,20200731 12:31:00,11096.0,11099.9,11095.0,11098.0,7197675 199 | Q_NF1,20200731 12:32:00,11099.45,11099.45,11092.0,11096.3,7205700 200 | Q_NF1,20200731 12:33:00,11092.9,11096.3,11085.85,11091.0,7221600 201 | Q_NF1,20200731 12:34:00,11091.1,11092.45,11085.0,11087.45,7234275 202 | Q_NF1,20200731 12:35:00,11087.45,11095.0,11083.0,11095.0,7252425 203 | Q_NF1,20200731 12:36:00,11094.0,11095.9,11089.05,11090.55,7270800 204 | Q_NF1,20200731 12:37:00,11091.45,11091.45,11085.0,11086.0,7289550 205 | Q_NF1,20200731 12:38:00,11087.9,11090.0,11082.65,11083.5,7303275 206 | Q_NF1,20200731 12:39:00,11083.8,11091.15,11082.55,11083.0,7316775 207 | Q_NF1,20200731 12:40:00,11084.8,11088.0,11081.8,11085.4,7325850 208 | Q_NF1,20200731 12:41:00,11086.5,11087.5,11082.3,11082.75,7340550 209 | Q_NF1,20200731 12:42:00,11082.75,11087.0,11075.85,11079.0,7387800 210 | Q_NF1,20200731 12:43:00,11076.3,11078.0,11070.55,11071.0,7424700 211 | Q_NF1,20200731 12:44:00,11074.1,11080.0,11072.0,11072.0,7448850 212 | Q_NF1,20200731 12:45:00,11073.0,11073.8,11063.8,11067.0,7515150 213 | Q_NF1,20200731 12:46:00,11067.0,11072.4,11063.9,11072.4,7558875 214 | Q_NF1,20200731 12:47:00,11071.7,11076.0,11068.35,11073.5,7590900 215 | Q_NF1,20200731 12:48:00,11073.0,11075.85,11068.45,11070.7,7613325 216 | Q_NF1,20200731 12:49:00,11070.7,11072.0,11066.3,11066.3,7642725 217 | Q_NF1,20200731 12:50:00,11066.45,11067.05,11060.0,11065.0,7695450 218 | Q_NF1,20200731 12:51:00,11063.25,11074.7,11062.95,11069.15,7733175 219 | Q_NF1,20200731 12:52:00,11068.0,11079.2,11067.4,11078.25,7756650 220 | Q_NF1,20200731 12:53:00,11077.25,11082.8,11076.25,11078.0,7785375 221 | Q_NF1,20200731 12:54:00,11079.2,11086.95,11076.4,11079.05,7824825 222 | Q_NF1,20200731 12:55:00,11080.0,11086.35,11079.05,11081.0,7842900 223 | Q_NF1,20200731 12:56:00,11081.4,11083.65,11075.0,11075.0,7862025 224 | Q_NF1,20200731 12:57:00,11075.0,11079.2,11072.8,11076.45,7877100 225 | Q_NF1,20200731 12:58:00,11076.45,11091.45,11074.75,11091.45,7901100 226 | Q_NF1,20200731 12:59:00,11092.45,11092.55,11084.3,11084.3,7928325 227 | Q_NF1,20200731 13:00:00,11086.45,11093.95,11083.6,11092.5,7951500 228 | Q_NF1,20200731 13:01:00,11093.3,11098.2,11091.0,11092.0,7986300 229 | Q_NF1,20200731 13:02:00,11092.0,11096.0,11087.0,11096.0,8001825 230 | Q_NF1,20200731 13:03:00,11095.0,11095.0,11087.0,11088.0,8031525 231 | Q_NF1,20200731 13:04:00,11089.0,11090.75,11077.45,11080.15,8051850 232 | Q_NF1,20200731 13:05:00,11082.05,11089.0,11077.0,11087.5,8074425 233 | Q_NF1,20200731 13:06:00,11087.5,11092.0,11082.25,11084.6,8090100 234 | Q_NF1,20200731 13:07:00,11084.6,11088.9,11082.6,11083.05,8098200 235 | Q_NF1,20200731 13:08:00,11084.55,11084.55,11078.6,11081.0,8111100 236 | Q_NF1,20200731 13:09:00,11081.0,11083.05,11076.85,11083.05,8123400 237 | Q_NF1,20200731 13:10:00,11083.45,11091.75,11082.7,11085.05,8146275 238 | Q_NF1,20200731 13:11:00,11087.4,11092.8,11083.25,11091.45,8155500 239 | Q_NF1,20200731 13:12:00,11090.25,11092.65,11086.45,11091.6,8165925 240 | Q_NF1,20200731 13:13:00,11091.6,11091.6,11084.0,11088.2,8176425 241 | Q_NF1,20200731 13:14:00,11088.2,11090.0,11084.7,11085.85,8183250 242 | Q_NF1,20200731 13:15:00,11087.5,11089.5,11081.25,11083.9,8197125 243 | Q_NF1,20200731 13:16:00,11083.9,11083.9,11071.65,11074.35,8225775 244 | Q_NF1,20200731 13:17:00,11077.4,11078.75,11070.0,11075.0,8253225 245 | Q_NF1,20200731 13:18:00,11075.0,11079.0,11072.8,11074.3,8266725 246 | Q_NF1,20200731 13:19:00,11074.3,11082.7,11073.8,11078.55,8285325 247 | Q_NF1,20200731 13:20:00,11079.0,11081.3,11072.0,11072.7,8310075 248 | Q_NF1,20200731 13:21:00,11073.95,11076.5,11070.0,11073.05,8332575 249 | Q_NF1,20200731 13:22:00,11072.5,11078.0,11072.0,11074.7,8351025 250 | Q_NF1,20200731 13:23:00,11072.6,11077.0,11066.45,11076.45,8406825 251 | Q_NF1,20200731 13:24:00,11077.0,11079.7,11072.75,11078.5,8418825 252 | Q_NF1,20200731 13:25:00,11077.9,11077.9,11073.75,11075.05,8429025 253 | Q_NF1,20200731 13:26:00,11075.05,11075.05,11070.0,11074.9,8443275 254 | Q_NF1,20200731 13:27:00,11074.9,11080.7,11073.0,11077.0,8460300 255 | Q_NF1,20200731 13:28:00,11077.0,11079.6,11074.7,11077.3,8472150 256 | Q_NF1,20200731 13:29:00,11077.15,11082.45,11074.5,11079.55,8488425 257 | Q_NF1,20200731 13:30:00,11078.85,11080.9,11075.45,11078.15,8501175 258 | Q_NF1,20200731 13:31:00,11077.9,11078.55,11066.6,11067.0,8526975 259 | Q_NF1,20200731 13:32:00,11067.95,11070.0,11060.05,11065.05,8589750 260 | Q_NF1,20200731 13:33:00,11066.0,11070.9,11063.2,11068.8,8612625 261 | Q_NF1,20200731 13:34:00,11067.75,11069.45,11041.2,11044.35,8861100 262 | Q_NF1,20200731 13:35:00,11046.3,11046.95,11034.0,11046.95,9022350 263 | Q_NF1,20200731 13:36:00,11047.15,11047.95,11031.75,11038.4,9131850 264 | Q_NF1,20200731 13:37:00,11036.15,11043.9,11032.15,11041.95,9204900 265 | Q_NF1,20200731 13:38:00,11043.0,11049.0,11040.1,11044.0,9271350 266 | Q_NF1,20200731 13:39:00,11041.1,11052.0,11041.1,11050.6,9341625 267 | Q_NF1,20200731 13:40:00,11048.75,11049.6,11035.0,11036.35,9425475 268 | Q_NF1,20200731 13:41:00,11035.6,11040.0,11025.0,11034.0,9538650 269 | Q_NF1,20200731 13:42:00,11032.05,11037.0,11030.0,11032.2,9580725 270 | Q_NF1,20200731 13:43:00,11033.8,11041.0,11031.2,11040.6,9612375 271 | Q_NF1,20200731 13:44:00,11039.85,11043.85,11035.1,11036.3,9646800 272 | Q_NF1,20200731 13:45:00,11036.6,11038.65,11031.25,11038.3,9679575 273 | Q_NF1,20200731 13:46:00,11039.0,11039.0,11032.0,11034.0,9699525 274 | Q_NF1,20200731 13:47:00,11036.2,11047.7,11033.7,11041.3,9743250 275 | Q_NF1,20200731 13:48:00,11041.3,11047.95,11040.5,11046.3,9767100 276 | Q_NF1,20200731 13:49:00,11046.45,11048.0,11036.5,11037.6,9810675 277 | Q_NF1,20200731 13:50:00,11037.15,11038.45,11028.0,11033.9,9870000 278 | Q_NF1,20200731 13:51:00,11035.0,11045.0,11033.1,11045.0,9891600 279 | Q_NF1,20200731 13:52:00,11043.25,11054.3,11043.1,11051.45,9955875 280 | Q_NF1,20200731 13:53:00,11051.45,11053.75,11046.45,11046.75,9980550 281 | Q_NF1,20200731 13:54:00,11047.0,11047.0,11042.0,11042.95,9994575 282 | Q_NF1,20200731 13:55:00,11042.95,11048.1,11042.7,11047.0,10020975 283 | Q_NF1,20200731 13:56:00,11047.0,11059.0,11045.1,11059.0,10061250 284 | Q_NF1,20200731 13:57:00,11063.5,11104.3,11059.45,11098.6,10377600 285 | Q_NF1,20200731 13:58:00,11098.5,11119.0,11098.25,11100.25,10547325 286 | Q_NF1,20200731 13:59:00,11101.0,11101.0,11086.0,11089.6,10620375 287 | Q_NF1,20200731 14:00:00,11092.0,11106.0,11090.25,11099.75,10684800 288 | Q_NF1,20200731 14:01:00,11099.75,11101.0,11087.25,11087.85,10725000 289 | Q_NF1,20200731 14:02:00,11087.3,11094.55,11076.5,11076.95,10772175 290 | Q_NF1,20200731 14:03:00,11079.0,11089.0,11077.4,11086.0,10798275 291 | Q_NF1,20200731 14:04:00,11085.6,11089.65,11078.4,11088.15,10818000 292 | Q_NF1,20200731 14:05:00,11088.0,11093.5,11083.65,11087.9,10844100 293 | Q_NF1,20200731 14:06:00,11087.9,11088.25,11081.65,11085.9,10860075 294 | Q_NF1,20200731 14:07:00,11085.9,11092.0,11076.0,11087.8,10899000 295 | Q_NF1,20200731 14:08:00,11088.5,11094.0,11086.3,11090.0,10913850 296 | Q_NF1,20200731 14:09:00,11089.65,11095.0,11087.55,11087.55,10949400 297 | Q_NF1,20200731 14:10:00,11090.1,11099.25,11087.55,11098.65,10985625 298 | Q_NF1,20200731 14:11:00,11098.85,11110.05,11098.85,11107.8,11059950 299 | Q_NF1,20200731 14:12:00,11108.55,11114.5,11106.0,11113.9,11118300 300 | Q_NF1,20200731 14:13:00,11113.8,11114.0,11100.0,11102.35,11146950 301 | Q_NF1,20200731 14:14:00,11102.45,11108.55,11101.6,11105.0,11165625 302 | Q_NF1,20200731 14:15:00,11105.9,11114.5,11105.05,11110.25,11206050 303 | Q_NF1,20200731 14:16:00,11110.25,11111.0,11103.0,11110.0,11229975 304 | Q_NF1,20200731 14:17:00,11108.85,11111.0,11100.0,11102.15,11258025 305 | Q_NF1,20200731 14:18:00,11102.15,11102.55,11091.1,11092.8,11298225 306 | Q_NF1,20200731 14:19:00,11093.7,11100.2,11092.55,11096.5,11323125 307 | Q_NF1,20200731 14:20:00,11096.5,11096.5,11082.6,11085.0,11365200 308 | Q_NF1,20200731 14:21:00,11085.2,11096.7,11085.05,11093.7,11380950 309 | Q_NF1,20200731 14:22:00,11093.7,11094.05,11085.05,11089.75,11394300 310 | Q_NF1,20200731 14:23:00,11090.0,11100.0,11088.55,11097.6,11412450 311 | Q_NF1,20200731 14:24:00,11098.0,11100.0,11091.0,11094.55,11426475 312 | Q_NF1,20200731 14:25:00,11094.55,11094.85,11084.55,11086.1,11448675 313 | Q_NF1,20200731 14:26:00,11087.35,11091.0,11086.15,11090.0,11460600 314 | Q_NF1,20200731 14:27:00,11090.0,11095.5,11086.0,11095.0,11473725 315 | Q_NF1,20200731 14:28:00,11095.0,11105.0,11094.8,11104.45,11517450 316 | Q_NF1,20200731 14:29:00,11101.05,11105.0,11095.2,11099.7,11534400 317 | Q_NF1,20200731 14:30:00,11099.75,11099.75,11092.0,11092.95,11561475 318 | Q_NF1,20200731 14:31:00,11093.15,11103.0,11089.0,11098.0,11589225 319 | Q_NF1,20200731 14:32:00,11098.0,11098.35,11091.55,11096.65,11607225 320 | Q_NF1,20200731 14:33:00,11094.9,11097.35,11092.8,11094.5,11618850 321 | Q_NF1,20200731 14:34:00,11095.5,11095.5,11071.05,11073.05,11682750 322 | Q_NF1,20200731 14:35:00,11074.45,11075.4,11060.0,11067.6,11793225 323 | Q_NF1,20200731 14:36:00,11069.05,11072.3,11066.05,11072.0,11831775 324 | Q_NF1,20200731 14:37:00,11072.35,11081.9,11071.0,11078.05,11870400 325 | Q_NF1,20200731 14:38:00,11077.25,11081.3,11075.0,11075.5,11889150 326 | Q_NF1,20200731 14:39:00,11077.0,11085.0,11076.0,11078.05,11917125 327 | Q_NF1,20200731 14:40:00,11076.35,11082.0,11075.25,11080.3,11935500 328 | Q_NF1,20200731 14:41:00,11080.3,11083.0,11077.7,11080.0,11950800 329 | Q_NF1,20200731 14:42:00,11080.0,11082.05,11078.0,11081.8,11961750 330 | Q_NF1,20200731 14:43:00,11081.8,11081.8,11077.0,11078.25,11975025 331 | Q_NF1,20200731 14:44:00,11077.1,11083.0,11075.0,11075.85,11994750 332 | Q_NF1,20200731 14:45:00,11077.1,11077.1,11067.8,11072.6,12043125 333 | Q_NF1,20200731 14:46:00,11072.65,11078.25,11069.6,11072.65,12078750 334 | Q_NF1,20200731 14:47:00,11071.5,11072.65,11064.7,11064.85,12126450 335 | Q_NF1,20200731 14:48:00,11063.2,11063.65,11051.2,11062.45,12211500 336 | Q_NF1,20200731 14:49:00,11062.15,11062.65,11052.0,11057.75,12255375 337 | Q_NF1,20200731 14:50:00,11056.75,11064.4,11055.0,11055.0,12277425 338 | Q_NF1,20200731 14:51:00,11055.0,11061.45,11053.0,11057.95,12317025 339 | Q_NF1,20200731 14:52:00,11057.95,11063.0,11055.15,11061.3,12333600 340 | Q_NF1,20200731 14:53:00,11062.6,11069.4,11060.4,11062.35,12360450 341 | Q_NF1,20200731 14:54:00,11061.5,11062.95,11053.05,11058.45,12392325 342 | Q_NF1,20200731 14:55:00,11058.4,11063.9,11054.7,11062.85,12415125 343 | Q_NF1,20200731 14:56:00,11062.85,11084.0,11062.85,11076.0,12476250 344 | Q_NF1,20200731 14:57:00,11076.0,11093.45,11073.9,11093.1,12529725 345 | Q_NF1,20200731 14:58:00,11092.45,11093.3,11082.85,11090.15,12565575 346 | Q_NF1,20200731 14:59:00,11090.0,11091.9,11080.05,11084.0,12608100 347 | Q_NF1,20200731 15:00:00,11082.45,11086.7,11072.15,11072.2,12643800 348 | Q_NF1,20200731 15:01:00,11072.0,11080.75,11072.0,11076.9,12678375 349 | Q_NF1,20200731 15:02:00,11078.1,11093.65,11076.1,11087.9,12726900 350 | Q_NF1,20200731 15:03:00,11089.0,11090.0,11080.0,11082.0,12753150 351 | Q_NF1,20200731 15:04:00,11080.0,11089.45,11078.45,11085.85,12776700 352 | Q_NF1,20200731 15:05:00,11083.8,11090.55,11082.95,11087.75,12802800 353 | Q_NF1,20200731 15:06:00,11087.75,11095.0,11087.75,11094.2,12832800 354 | Q_NF1,20200731 15:07:00,11094.0,11100.95,11090.5,11092.0,12879150 355 | Q_NF1,20200731 15:08:00,11092.5,11095.8,11089.35,11094.95,12916800 356 | Q_NF1,20200731 15:09:00,11095.8,11099.55,11094.0,11094.2,12943050 357 | Q_NF1,20200731 15:10:00,11094.2,11100.0,11092.0,11093.95,12986700 358 | Q_NF1,20200731 15:11:00,11094.75,11094.85,11088.85,11094.05,13033575 359 | Q_NF1,20200731 15:12:00,11094.75,11097.0,11090.65,11097.0,13064400 360 | Q_NF1,20200731 15:13:00,11096.9,11098.0,11091.4,11093.75,13080450 361 | Q_NF1,20200731 15:14:00,11095.0,11095.95,11090.0,11090.0,13111125 362 | Q_NF1,20200731 15:15:00,11091.9,11094.65,11088.0,11090.0,13144275 363 | Q_NF1,20200731 15:16:00,11090.0,11090.75,11083.25,11084.5,13183350 364 | Q_NF1,20200731 15:17:00,11085.25,11090.85,11083.05,11090.85,13219650 365 | Q_NF1,20200731 15:18:00,11091.4,11091.5,11085.0,11089.0,13253100 366 | Q_NF1,20200731 15:19:00,11089.0,11097.75,11087.8,11096.1,13300800 367 | Q_NF1,20200731 15:20:00,11093.55,11098.55,11092.05,11096.75,13349700 368 | Q_NF1,20200731 15:21:00,11097.95,11105.0,11096.35,11099.5,13450275 369 | Q_NF1,20200731 15:22:00,11098.6,11106.0,11098.15,11106.0,13477425 370 | Q_NF1,20200731 15:23:00,11105.0,11108.4,11103.1,11107.0,13523625 371 | Q_NF1,20200731 15:24:00,11107.9,11108.7,11104.95,11104.95,13555800 372 | Q_NF1,20200731 15:25:00,11104.95,11111.0,11103.05,11109.05,13621425 373 | Q_NF1,20200731 15:26:00,11109.75,11109.9,11105.0,11106.0,13659900 374 | Q_NF1,20200731 15:27:00,11106.0,11106.55,11102.55,11104.45,13712700 375 | Q_NF1,20200731 15:28:00,11104.6,11107.0,11103.8,11105.0,13762350 376 | -------------------------------------------------------------------------------- /no_dash.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | Created on Sat Aug 2 05:17:43 2020 4 | 5 | @author: alex1 6 | twitter.com/beinghorizontal 7 | 8 | This is for checking whether the issue is due to dash dependency. It generates static TPO chart using just plotly 9 | """ 10 | 11 | import pandas as pd 12 | import plotly.graph_objects as go 13 | from tpo_helper import get_ticksize, abc, get_mean, get_rf, get_context, get_contextnow 14 | import numpy as np 15 | from datetime import timedelta 16 | from plotly.offline import plot 17 | 18 | # from transform import transform_live, transform_hist 19 | # from alpha_dataframe import get_data 20 | 21 | # refresh_int = 1 # refresh interval in seconds for live updates 22 | freq = 30 23 | avglen = 10 # num days mean to get values 24 | days_to_display = 10 # Number of last n days you want on the screen to display 25 | mode = 'tpo' # for volume --> 'vol' 26 | 27 | # 1 min historical data in symbol,datetime,open,high,low,close,volume 28 | dfhist = pd.read_csv('history.txt') 29 | 30 | # Check the sample file. Match the format exactly else code will not run. 31 | 32 | dfhist.iloc[:, 2:] = dfhist.iloc[:, 2:].apply(pd.to_numeric) 33 | 34 | # # It calculates tick size for TPO based on mean and standard deviation. 35 | ticksz = get_ticksize(dfhist, freq=freq) 36 | symbol = dfhist.symbol[0] 37 | 38 | 39 | def datetime(dfhist): 40 | """ 41 | dfhist : pandas series 42 | Convert date time to pandas date time 43 | Returns dataframe with datetime index 44 | """ 45 | dfhist['datetime2'] = pd.to_datetime(dfhist['datetime'], format='%Y%m%d %H:%M:%S') 46 | dfhist = dfhist.set_index(dfhist['datetime2'], drop=True, inplace=False) 47 | return(dfhist) 48 | 49 | 50 | dfhist = datetime(dfhist) 51 | # Get mean values for context and also get daily trading hours 52 | mean_val = get_mean(dfhist, avglen=avglen, freq=freq) 53 | trading_hr = mean_val['session_hr'] 54 | # !!! get rotational factor 55 | dfhist = get_rf(dfhist.copy()) 56 | # !!! resample to desire time frequency. For TPO charts 30 min is optimal 57 | dfhist = dfhist.resample(str(freq)+'min').agg({'symbol': 'last', 'datetime': 'first', 'Open': 'first', 'High': 'max', 58 | 'Low': 'min', 'Close': 'last', 'Volume': 'sum', 'rf': 'sum'}) 59 | dfhist = dfhist.dropna() 60 | 61 | # slice df based on days_to_display parameter 62 | dt1 = dfhist.index[-1] 63 | sday1 = dt1 - timedelta(days_to_display) 64 | dfhist = dfhist[(dfhist.index.date > sday1.date())] 65 | # !!! concat current data to avoid insufficient bar num error 66 | 67 | 68 | def live_merge(dfli): 69 | """ 70 | dfli: pandas dataframe with live quotes. 71 | 72 | This is the live data, and will continue to refresh. Since it merges with historical data keep the format same though source 73 | can be different.. 74 | For this we only need small sample and if there are duplicate quotes duplicate values will get droped keeping the original value. 75 | """ 76 | 77 | dflive = datetime(dfli) 78 | 79 | dflive = get_rf(dflive.copy()) 80 | dflive = dflive.resample(str(freq)+'min').agg({'symbol': 'last', 'datetime': 'first', 'Open': 'first', 'High': 'max', 81 | 'Low': 'min', 'Close': 'last', 'Volume': 'sum', 'rf': 'sum'}) 82 | 83 | df_final = pd.concat([dfhist, dflive]) 84 | df_final = df_final.drop_duplicates() 85 | 86 | return (df_final) 87 | 88 | 89 | # get live data from external source, it is not inside loop so it gets called 1 time only to make sure historical data is in sync 90 | # provided the sample file. To check live updates are working, add data in live.txt 91 | dfli = pd.read_csv('live.txt') 92 | df_final = live_merge(dfli) 93 | df_final.iloc[:, 2:] = df_final.iloc[:, 2:].apply(pd.to_numeric) 94 | 95 | dfli = pd.read_csv('live.txt') # This is live file in loop to check updates every n seconds 96 | df = live_merge(dfli) 97 | df.iloc[:, 2:] = df.iloc[:, 2:].apply(pd.to_numeric) 98 | 99 | # !!! split the dataframe with new date 100 | DFList = [group[1] for group in df.groupby(df.index.date)] 101 | # !!! for context based bubbles at the top with text hovers 102 | dfcontext = get_context(df, freq=freq, ticksize=ticksz, style=mode, session_hr=trading_hr) 103 | # get market profile DataFrame and ranking as a series for each day. 104 | # @todo: IN next version, display the ranking DataFrame with drop-down menu 105 | dfmp_list = dfcontext[0] 106 | ranking = dfcontext[1] 107 | # !!! get context based on IB It is predictive value caculated by using various IB stats and previous day's value area 108 | # IB is 1st 1 hour of the session. Not useful for scrips with global 24 x 7 session 109 | context_ibdf = get_contextnow(mean_val, ranking) 110 | ibpower = context_ibdf.power # Non-normalised IB strength 111 | ibpower1 = context_ibdf.power1 # Normalised IB strength for dynamic shape size for markers at bottom 112 | 113 | 114 | fig = go.Figure() 115 | 116 | fig = go.Figure(data=[go.Candlestick(x=df['datetime'], 117 | open=df['Open'], 118 | high=df['High'], 119 | low=df['Low'], 120 | close=df['Close'], 121 | showlegend=True, 122 | name=symbol, opacity=0.3)]) # To make candlesticks more prominent increase the opacity 123 | 124 | # !!! get TPO for each day 125 | for i in range(len(dfmp_list)): # test the loop with i=1 126 | 127 | # df1 is used for datetime axis, other dataframe we have is df_mp but it is not a timeseries 128 | df1 = DFList[i].copy() 129 | df_mp = dfmp_list[i] 130 | irank = ranking.iloc[i] 131 | # df_mp['i_date'] = df1['datetime'][0] 132 | df_mp['i_date'] = irank.date 133 | # # @todo: background color for text 134 | df_mp['color'] = np.where(np.logical_and( 135 | df_mp['close'] > irank.vallist, df_mp['close'] < irank.vahlist), 'green', 'white') 136 | 137 | df_mp = df_mp.set_index('i_date', inplace=False) 138 | 139 | fig.add_trace(go.Scatter(x=df_mp.index, y=df_mp.close, mode="text", name=str(df_mp.index[0]), text=df_mp.alphabets, 140 | showlegend=False, textposition="top right", textfont=dict(family="verdana", size=6, color=df_mp.color))) 141 | power = int(irank['power1']) 142 | if power < 0: 143 | my_rgb = 'rgba({power}, 3, 252, 0.5)'.format(power=abs(165)) 144 | else: 145 | my_rgb = 'rgba(23, {power}, 3, 0.5)'.format(power=abs(252)) 146 | 147 | fig.add_trace(go.Scatter( 148 | # x=[df1.iloc[4]['datetime']], 149 | x=[irank.date], 150 | y=[df['High'].max()], 151 | mode="markers", 152 | marker=dict(color=my_rgb, size=0.40*abs(power), 153 | line=dict(color='rgb(17, 17, 17)', width=2)), 154 | # marker_symbol='square', 155 | hovertext=['VAH:{}, POC:{}, VAL:{}, Balance Target:{}, Day Type:{}'.format(irank.vahlist, irank.poclist, irank.vallist, 156 | irank.btlist, irank.daytype)], showlegend=False 157 | )) 158 | # !!! we will use this for hover text at bottom for developing day 159 | if ibpower1[i] < 0: 160 | ib_rgb = 'rgba(165, 3, 252, 0.5)' 161 | else: 162 | ib_rgb = 'rgba(23, 252, 3, 0.5)' 163 | 164 | fig.add_trace(go.Scatter( 165 | # x=[df1.iloc[4]['datetime']], 166 | x=[irank.date], 167 | y=[df['Low'].min()], 168 | mode="markers", 169 | marker=dict(color=ib_rgb, size=0.40 * \ 170 | abs(ibpower[i]), line=dict(color='rgb(17, 17, 17)', width=2)), 171 | marker_symbol='square', 172 | hovertext=['Vol_mean:{}, Vol_Daily:{}, RF_mean:{}, RF_daily:{}, IBvol_mean:{}, IBvol_day:{}, IB_RFmean:{}, IB_RFday:{}'.format(round(mean_val['volume_mean'], 2), 173 | round(irank.volumed, 2), round(mean_val['rf_mean'], 2), round( 174 | irank.rfd, 2), round(mean_val['volib_mean'], 2), 175 | round(irank.ibvol, 2), round(mean_val['ibrf_mean'], 2), round(irank.ibrf, 2))], showlegend=False 176 | )) 177 | 178 | lvns = irank.lvnlist 179 | 180 | for lvn in lvns: 181 | fig.add_shape( 182 | # Line Horizontal 183 | type="line", 184 | x0=df1.iloc[0]['datetime'], 185 | y0=lvn, 186 | x1=df1.iloc[5]['datetime'], 187 | y1=lvn, 188 | line=dict( 189 | color="darksalmon", 190 | width=2, 191 | dash="dashdot",),) 192 | 193 | excess = irank.excesslist 194 | if excess > 0: 195 | fig.add_shape( 196 | # Line Horizontal 197 | type="line", 198 | x0=df1.iloc[0]['datetime'], 199 | y0=excess, 200 | x1=df1.iloc[5]['datetime'], 201 | y1=excess, 202 | line=dict( 203 | color="cyan", 204 | width=2, 205 | dash="dashdot",),) 206 | 207 | # @todo: last price marker. Color code as per close above poc or below 208 | ltp = df1.iloc[-1]['Close'] 209 | if ltp >= irank.poclist: 210 | ltp_color = 'green' 211 | else: 212 | ltp_color = 'red' 213 | 214 | fig.add_trace(go.Scatter( 215 | x=[df1.iloc[-1]['datetime']], 216 | y=[df1.iloc[-1]['Close']], 217 | mode="text", 218 | name="last traded price", 219 | text=['last '+str(df1.iloc[-1]['Close'])], 220 | textposition="bottom right", 221 | textfont=dict(size=11, color=ltp_color), 222 | showlegend=False 223 | )) 224 | 225 | fig.layout.xaxis.color = 'white' 226 | fig.layout.yaxis.color = 'white' 227 | fig.layout.autosize = True 228 | fig["layout"]["height"] = 800 229 | # fig.layout.hovermode = 'x' 230 | 231 | fig.update_xaxes(title_text='Time', title_font=dict(size=18, color='white'), 232 | tickangle=45, tickfont=dict(size=8, color='white'), showgrid=False, dtick=len(dfmp_list)) 233 | 234 | fig.update_yaxes(title_text=symbol, title_font=dict(size=18, color='white'), 235 | tickfont=dict(size=12, color='white'), showgrid=False) 236 | fig.layout.update(template="plotly_dark", title="@"+abc()[1], autosize=True, 237 | xaxis=dict(showline=True, color='white'), yaxis=dict(showline=True, color='white')) 238 | 239 | fig["layout"]["xaxis"]["rangeslider"]["visible"] = False 240 | fig["layout"]["xaxis"]["tickformat"] = "%H:%M:%S" 241 | 242 | plot(fig, auto_open=True) 243 | fig.show() 244 | -------------------------------------------------------------------------------- /static_tpo.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | Created on Sun Aug 2 07:02:43 2020 4 | 5 | @author: alex1 6 | twitter.com/beinghorizontal 7 | 8 | """ 9 | 10 | import pandas as pd 11 | import plotly.graph_objects as go 12 | from tpo_helper import get_ticksize, abc, get_mean, get_rf, get_context, get_contextnow 13 | import numpy as np 14 | from datetime import timedelta 15 | from plotly.offline import plot 16 | 17 | freq = 30 18 | avglen = 10 # num days mean to get values 19 | days_to_display = 10 # Number of last n days you want on the screen to display 20 | mode = 'tpo' # for volume --> 'vol' 21 | 22 | # 1 min historical data. For static plots it needs to be up to date. 23 | dfhist = pd.read_csv('history.txt') 24 | 25 | # Check the sample file. Match the format exactly else code will not run. 26 | 27 | dfhist.iloc[:, 2:] = dfhist.iloc[:, 2:].apply(pd.to_numeric) 28 | 29 | # # It calculates tick size for TPO based on mean and standard deviation. 30 | ticksz = get_ticksize(dfhist, freq=freq) 31 | symbol = dfhist.symbol[0] 32 | 33 | 34 | def datetime(dfhist): 35 | """ 36 | dfhist : pandas series 37 | Convert date time to pandas date time 38 | Returns dataframe with datetime index 39 | """ 40 | dfhist['datetime2'] = pd.to_datetime(dfhist['datetime'], format='%Y%m%d %H:%M:%S') 41 | dfhist = dfhist.set_index(dfhist['datetime2'], drop=True, inplace=False) 42 | return(dfhist) 43 | 44 | 45 | dfhist = datetime(dfhist) 46 | # Get mean values for context and also get daily trading hours 47 | mean_val = get_mean(dfhist, avglen=avglen, freq=freq) 48 | trading_hr = mean_val['session_hr'] 49 | # !!! get rotational factor 50 | dfhist = get_rf(dfhist.copy()) 51 | # !!! resample to desire time frequency. For TPO charts 30 min is optimal 52 | dfhist = dfhist.resample(str(freq)+'min').agg({'symbol': 'last', 'datetime': 'first', 'Open': 'first', 'High': 'max', 53 | 'Low': 'min', 'Close': 'last', 'Volume': 'sum', 'rf': 'sum'}) 54 | dfhist = dfhist.dropna() 55 | 56 | # slice df based on days_to_display parameter 57 | dt1 = dfhist.index[-1] 58 | sday1 = dt1 - timedelta(days_to_display) 59 | dfhist = dfhist[(dfhist.index.date > sday1.date())] 60 | 61 | # !!! split the dataframe with new date 62 | DFList = [group[1] for group in dfhist.groupby(dfhist.index.date)] 63 | # !!! for context based bubbles at the top with text hovers 64 | dfcontext = get_context(dfhist, freq=freq, ticksize=ticksz, style=mode, session_hr=trading_hr) 65 | # get market profile DataFrame and ranking as a series for each day. 66 | # @todo: IN next version, display the ranking DataFrame with drop-down menu 67 | dfmp_list = dfcontext[0] 68 | ranking = dfcontext[1] 69 | # !!! get context based on IB It is predictive value caculated by using various IB stats and previous day's value area 70 | # IB is 1st 1 hour of the session. Not useful for scrips with global 24 x 7 session 71 | context_ibdf = get_contextnow(mean_val, ranking) 72 | ibpower = context_ibdf.power # Non-normalised IB strength 73 | ibpower1 = context_ibdf.power1 # Normalised IB strength for dynamic shape size for markers at bottom 74 | 75 | 76 | fig = go.Figure() 77 | 78 | fig = go.Figure(data=[go.Candlestick(x=dfhist['datetime'], 79 | 80 | open=dfhist['Open'], 81 | high=dfhist['High'], 82 | low=dfhist['Low'], 83 | close=dfhist['Close'], 84 | showlegend=True, 85 | name=symbol, opacity=0.3)]) # To make candlesticks more prominent increase the opacity 86 | 87 | # !!! get TPO for each day 88 | for i in range(len(dfmp_list)): # test the loop with i=1 89 | 90 | # df1 is used for datetime axis, other dataframe we have is df_mp but it is not a timeseries 91 | df1 = DFList[i].copy() 92 | df_mp = dfmp_list[i] 93 | irank = ranking.iloc[i] 94 | # df_mp['i_date'] = df1['datetime'][0] 95 | df_mp['i_date'] = irank.date 96 | # # @todo: background color for text 97 | df_mp['color'] = np.where(np.logical_and( 98 | df_mp['close'] > irank.vallist, df_mp['close'] < irank.vahlist), 'green', 'white') 99 | 100 | df_mp = df_mp.set_index('i_date', inplace=False) 101 | 102 | fig.add_trace(go.Scatter(x=df_mp.index, y=df_mp.close, mode="text", name=str(df_mp.index[0]), text=df_mp.alphabets, 103 | showlegend=False, textposition="top right", textfont=dict(family="verdana", size=6, color=df_mp.color))) 104 | power = int(irank['power1']) 105 | if power < 0: 106 | my_rgb = 'rgba({power}, 3, 252, 0.5)'.format(power=abs(165)) 107 | else: 108 | my_rgb = 'rgba(23, {power}, 3, 0.5)'.format(power=abs(252)) 109 | 110 | fig.add_trace(go.Scatter( 111 | # x=[df1.iloc[4]['datetime']], 112 | x=[irank.date], 113 | y=[dfhist['High'].max()], 114 | mode="markers", 115 | marker=dict(color=my_rgb, size=0.40*abs(power), 116 | line=dict(color='rgb(17, 17, 17)', width=2)), 117 | # marker_symbol='square', 118 | hovertext=['VAH:{}, POC:{}, VAL:{}, Balance Target:{}, Day Type:{}'.format(irank.vahlist, irank.poclist, irank.vallist, 119 | irank.btlist, irank.daytype)], showlegend=False 120 | )) 121 | # !!! we will use this for hover text at bottom for developing day 122 | if ibpower1[i] < 0: 123 | ib_rgb = 'rgba(165, 3, 252, 0.5)' 124 | else: 125 | ib_rgb = 'rgba(23, 252, 3, 0.5)' 126 | 127 | fig.add_trace(go.Scatter( 128 | # x=[df1.iloc[4]['datetime']], 129 | x=[irank.date], 130 | y=[dfhist['Low'].min()], 131 | mode="markers", 132 | marker=dict(color=ib_rgb, size=0.40 * \ 133 | abs(ibpower[i]), line=dict(color='rgb(17, 17, 17)', width=2)), 134 | marker_symbol='square', 135 | hovertext=['Vol_mean:{}, Vol_Daily:{}, RF_mean:{}, RF_daily:{}, IBvol_mean:{}, IBvol_day:{}, IB_RFmean:{}, IB_RFday:{}'.format(round(mean_val['volume_mean'], 2), 136 | round(irank.volumed, 2), round(mean_val['rf_mean'], 2), round( 137 | irank.rfd, 2), round(mean_val['volib_mean'], 2), 138 | round(irank.ibvol, 2), round(mean_val['ibrf_mean'], 2), round(irank.ibrf, 2))], showlegend=False 139 | )) 140 | 141 | lvns = irank.lvnlist 142 | 143 | for lvn in lvns: 144 | fig.add_shape( 145 | # Line Horizontal 146 | type="line", 147 | x0=df1.iloc[0]['datetime'], 148 | y0=lvn, 149 | x1=df1.iloc[5]['datetime'], 150 | y1=lvn, 151 | line=dict( 152 | color="darksalmon", 153 | width=2, 154 | dash="dashdot",),) 155 | 156 | excess = irank.excesslist 157 | if excess > 0: 158 | fig.add_shape( 159 | # Line Horizontal 160 | type="line", 161 | x0=df1.iloc[0]['datetime'], 162 | y0=excess, 163 | x1=df1.iloc[5]['datetime'], 164 | y1=excess, 165 | line=dict( 166 | color="cyan", 167 | width=2, 168 | dash="dashdot",),) 169 | 170 | ltp = df1.iloc[-1]['Close'] 171 | if ltp >= irank.poclist: 172 | ltp_color = 'green' 173 | else: 174 | ltp_color = 'red' 175 | 176 | fig.add_trace(go.Scatter( 177 | x=[df1.iloc[-1]['datetime']], 178 | y=[df1.iloc[-1]['Close']], 179 | mode="text", 180 | name="last traded price", 181 | text=['last '+str(df1.iloc[-1]['Close'])], 182 | textposition="bottom right", 183 | textfont=dict(size=11, color=ltp_color), 184 | showlegend=False 185 | )) 186 | 187 | fig.layout.xaxis.color = 'white' 188 | fig.layout.yaxis.color = 'white' 189 | fig.layout.autosize = True 190 | fig["layout"]["height"] = 800 191 | # fig.layout.hovermode = 'x' 192 | 193 | fig.update_xaxes(title_text='Time', title_font=dict(size=18, color='white'), 194 | tickangle=45, tickfont=dict(size=8, color='white'), showgrid=False, dtick=len(dfmp_list)) 195 | 196 | fig.update_yaxes(title_text=symbol, title_font=dict(size=18, color='white'), 197 | tickfont=dict(size=12, color='white'), showgrid=False) 198 | fig.layout.update(template="plotly_dark", title="@"+abc()[1], autosize=True, 199 | xaxis=dict(showline=True, color='white'), yaxis=dict(showline=True, color='white')) 200 | 201 | fig["layout"]["xaxis"]["rangeslider"]["visible"] = False 202 | fig["layout"]["xaxis"]["tickformat"] = "%H:%M:%S" 203 | 204 | # fig.write_html('tpo.html') # uncomment to save as html 205 | 206 | # To save as png 207 | # from kaleido.scopes.plotly import PlotlyScope # pip install kaleido 208 | # scope = PlotlyScope() 209 | # with open("figure.png", "wb") as f: 210 | # f.write(scope.transform(fig, format="png")) 211 | 212 | plot(fig, auto_open=True) 213 | fig.show() 214 | -------------------------------------------------------------------------------- /tpo_helper.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | Created on Mon Jul 27 13:26:04 2020 4 | 5 | @author: alex1 6 | """ 7 | import pandas as pd 8 | import numpy as np 9 | import math 10 | # import itertools 11 | 12 | 13 | def get_ticksize(data, freq=30): 14 | # data = df 15 | numlen = int(len(data)/2) 16 | # sample size for calculating ticksize = 50% of most recent data 17 | tztail = data.tail(numlen).copy() 18 | tztail['tz'] = tztail.Close.rolling(freq).std() # std. dev of 30 period rolling 19 | tztail = tztail.dropna() 20 | ticksize = np.ceil(tztail['tz'].mean()*0.25) # 1/4 th of mean std. dev is our ticksize 21 | 22 | if ticksize < 0.2: 23 | ticksize = 0.2 # minimum ticksize limit 24 | 25 | return int(ticksize) 26 | 27 | 28 | def abc(session_hr=6.5, freq=30): 29 | 30 | caps = [' A', ' B', ' C', ' D', ' E', ' F', ' G', ' H', ' I', ' J', ' K', ' L', ' M', 31 | ' N', ' O', ' P', ' Q', ' R', ' S', ' T', ' U', ' V', ' W', ' X', ' Y', ' Z'] 32 | abc_lw = [x.lower() for x in caps] 33 | Aa = caps + abc_lw 34 | alimit = math.ceil(session_hr * (60 / freq)) + 3 35 | if alimit > 52: 36 | alphabets = Aa * int( 37 | (np.ceil((alimit - 52) / 52)) + 1) # if bar frequency is less than 30 minutes then multiply list 38 | else: 39 | alphabets = Aa[0:alimit] 40 | bk = [28, 31, 35, 40, 33, 34, 41, 44, 35, 52, 41, 40, 46, 27, 38] 41 | ti = [] 42 | for s1 in bk: 43 | ti.append(Aa[s1 - 1]) 44 | tt = (''.join(ti)) 45 | 46 | return (alphabets, tt) 47 | 48 | 49 | def tpo(dft_rs, freq=30, ticksize=10, style='tpo', session_hr=6.5): 50 | 51 | if len(dft_rs) > int(60/freq): 52 | dft_rs = dft_rs.drop_duplicates('datetime') 53 | dft_rs = dft_rs.reset_index(inplace=False, drop=True) 54 | dft_rs['rol_mx'] = dft_rs['High'].cummax() 55 | dft_rs['rol_mn'] = dft_rs['Low'].cummin() 56 | dft_rs['ext_up'] = dft_rs['rol_mn'] > dft_rs['rol_mx'].shift(2) 57 | dft_rs['ext_dn'] = dft_rs['rol_mx'] < dft_rs['rol_mn'].shift(2) 58 | 59 | alphabets = abc(session_hr, freq)[0] 60 | alphabets = alphabets[0:len(dft_rs)] 61 | hh = dft_rs['High'].max() 62 | ll = dft_rs['Low'].min() 63 | day_range = hh - ll 64 | dft_rs['abc'] = alphabets 65 | # place represents total number of steps to take to compare the TPO count 66 | place = int(np.ceil((hh - ll) / ticksize)) 67 | # kk = 0 68 | abl_bg = [] 69 | tpo_countbg = [] 70 | pricel = [] 71 | volcountbg = [] 72 | # datel = [] 73 | for u in range(place): 74 | abl = [] 75 | tpoc = [] 76 | volcount = [] 77 | p = ll + (u*ticksize) 78 | for lenrs in range(len(dft_rs)): 79 | if p >= dft_rs['Low'][lenrs] and p < dft_rs['High'][lenrs]: 80 | abl.append(dft_rs['abc'][lenrs]) 81 | tpoc.append(1) 82 | volcount.append((dft_rs['Volume'][lenrs]) / freq) 83 | abl_bg.append(''.join(abl)) 84 | tpo_countbg.append(sum(tpoc)) 85 | volcountbg.append(sum(volcount)) 86 | pricel.append(p) 87 | 88 | dftpo = pd.DataFrame({'close': pricel, 'alphabets': abl_bg, 89 | 'tpocount': tpo_countbg, 'volsum': volcountbg}) 90 | # drop empty rows 91 | dftpo['alphabets'].replace('', np.nan, inplace=True) 92 | dftpo = dftpo.dropna() 93 | dftpo = dftpo.reset_index(inplace=False, drop=True) 94 | dftpo = dftpo.sort_index(ascending=False) 95 | dftpo = dftpo.reset_index(inplace=False, drop=True) 96 | 97 | if style == 'tpo': 98 | column = 'tpocount' 99 | else: 100 | column = 'volsum' 101 | 102 | dfmx = dftpo[dftpo[column] == dftpo[column].max()] 103 | 104 | mid = ll + ((hh - ll) / 2) 105 | dfmax = dfmx.copy() 106 | dfmax['poc-mid'] = abs(dfmax['close'] - mid) 107 | pocidx = dfmax['poc-mid'].idxmin() 108 | poc = dfmax['close'][pocidx] 109 | poctpo = dftpo[column].max() 110 | tpo_updf = dftpo[dftpo['close'] > poc] 111 | tpo_updf = tpo_updf.sort_index(ascending=False) 112 | tpo_updf = tpo_updf.reset_index(inplace=False, drop=True) 113 | 114 | tpo_dndf = dftpo[dftpo['close'] < poc] 115 | tpo_dndf = tpo_dndf.reset_index(inplace=False, drop=True) 116 | 117 | valtpo = (dftpo[column].sum()) * 0.70 118 | 119 | abovepoc = tpo_updf[column].to_list() 120 | belowpoc = tpo_dndf[column].to_list() 121 | 122 | 123 | if (len(abovepoc)/2).is_integer() is False: 124 | abovepoc = abovepoc+[0] 125 | 126 | if (len(belowpoc)/2).is_integer() is False: 127 | belowpoc = belowpoc+[0] 128 | 129 | bel2 = np.array(belowpoc).reshape(-1, 2) 130 | bel3 = bel2.sum(axis=1) 131 | bel4 = list(bel3) 132 | abv2 = np.array(abovepoc).reshape(-1, 2) 133 | abv3 = abv2.sum(axis=1) 134 | abv4 = list(abv3) 135 | # cum = poctpo 136 | # up_i = 0 137 | # dn_i = 0 138 | df_va = pd.DataFrame({'abv': pd.Series(abv4), 'bel': pd.Series(bel4)}) 139 | df_va = df_va.fillna(0) 140 | df_va['abv_idx'] = np.where(df_va.abv > df_va.bel, 1, 0) 141 | df_va['bel_idx'] = np.where(df_va.bel > df_va.abv, 1, 0) 142 | df_va['cum_tpo'] = np.where(df_va.abv > df_va.bel, df_va.abv, 0) 143 | df_va['cum_tpo'] = np.where(df_va.bel > df_va.abv, df_va.bel, df_va.cum_tpo) 144 | 145 | df_va['cum_tpo'] = np.where(df_va.abv == df_va.bel, df_va.abv+df_va.bel, df_va.cum_tpo) 146 | df_va['abv_idx'] = np.where(df_va.abv == df_va.bel, 1, df_va.abv_idx) 147 | df_va['bel_idx'] = np.where(df_va.abv == df_va.bel, 1, df_va.bel_idx) 148 | df_va['cum_tpo_cumsum'] = df_va.cum_tpo.cumsum() 149 | # haven't add poc tpo because loop cuts off way before 70% so it gives same effect 150 | df_va_cut = df_va[df_va.cum_tpo_cumsum + poctpo <= valtpo] 151 | vah_idx = (df_va_cut.abv_idx.sum())*2 152 | val_idx = (df_va_cut.bel_idx.sum())*2 153 | 154 | if vah_idx >= len(tpo_updf) and vah_idx != 0: 155 | vah_idx = vah_idx - 2 156 | 157 | if val_idx >= len(tpo_dndf) and val_idx != 0: 158 | val_idx = val_idx - 2 159 | 160 | vah = tpo_updf.close[vah_idx] 161 | val = tpo_dndf.close[val_idx] 162 | 163 | 164 | tpoval = dftpo[ticksize * 2:-(ticksize * 2)]['tpocount'] # take mid section 165 | exhandle_index = np.where(tpoval <= 2, tpoval.index, None) # get index where TPOs are 2 166 | exhandle_index = list(filter(None, exhandle_index)) 167 | distance = ticksize * 3 # distance b/w two ex handles / lvn 168 | lvn_list = [] 169 | for ex in exhandle_index[0:-1:distance]: 170 | lvn_list.append(dftpo['close'][ex]) 171 | 172 | excess_h = dftpo[0:ticksize]['tpocount'].sum() / ticksize # take top tail 173 | excess_l = dftpo[-(ticksize):]['tpocount'].sum() / ticksize # take lower tail 174 | excess = 0 175 | if excess_h == 1 and dftpo.iloc[-1]['close'] < poc: 176 | excess = dftpo['close'][ticksize] 177 | 178 | if excess_l == 1 and dftpo.iloc[-1]['close'] >= poc: 179 | excess = dftpo.iloc[-ticksize]['close'] 180 | 181 | 182 | area_above_poc = dft_rs.High.max() - poc 183 | area_below_poc = poc - dft_rs.Low.min() 184 | if area_above_poc == 0: 185 | area_above_poc = 1 186 | if area_below_poc == 0: 187 | area_below_poc = 1 188 | balance = area_above_poc/area_below_poc 189 | 190 | if balance >= 0: 191 | bal_target = poc - area_above_poc 192 | else: 193 | bal_target = poc + area_below_poc 194 | 195 | mp = {'df': dftpo, 'vah': round(vah, 2), 'poc': round(poc, 2), 'val': round(val, 2), 'lvn': lvn_list, 'excess': round(excess, 2), 196 | 'bal_target': round(bal_target, 2)} 197 | 198 | else: 199 | print('not enough bars for date {}'.format(dft_rs['datetime'][0])) 200 | mp = {} 201 | 202 | return mp 203 | 204 | # !!! fetch all MP derived results here with date and do extra context analysis 205 | 206 | 207 | def get_context(df_hi, freq=30, ticksize=5, style='tpo', session_hr=6.5): 208 | # df_hi=df.copy() 209 | try: 210 | 211 | DFcontext = [group[1] for group in df_hi.groupby(df_hi.index.date)] 212 | dfmp_l = [] 213 | i_poctpo_l = [] 214 | i_tposum = [] 215 | vah_l = [] 216 | poc_l = [] 217 | val_l = [] 218 | bt_l = [] 219 | lvn_l = [] 220 | excess_l = [] 221 | date_l = [] 222 | volume_l = [] 223 | rf_l = [] 224 | ibv_l = [] 225 | ibrf_l = [] 226 | ibh_l = [] 227 | ib_l = [] 228 | close_l = [] 229 | hh_l = [] 230 | ll_l = [] 231 | range_l = [] 232 | 233 | for c in range(len(DFcontext)): # c=1 for testing 234 | dfc1 = DFcontext[c].copy() 235 | dfc1.iloc[:, 2:6] = dfc1.iloc[:, 2:6].apply(pd.to_numeric) 236 | 237 | dfc1 = dfc1.reset_index(inplace=False, drop=True) 238 | mpc = tpo(dfc1, freq, ticksize, style, session_hr) 239 | dftmp = mpc['df'] 240 | dfmp_l.append(dftmp) 241 | # for day types 242 | i_poctpo_l.append(dftmp['tpocount'].max()) 243 | i_tposum.append(dftmp['tpocount'].sum()) 244 | # !!! get value areas 245 | vah_l.append(mpc['vah']) 246 | poc_l.append(mpc['poc']) 247 | val_l.append(mpc['val']) 248 | 249 | bt_l.append(mpc['bal_target']) 250 | lvn_l.append(mpc['lvn']) 251 | excess_l.append(mpc['excess']) 252 | 253 | # !!! operatio of non profile stats 254 | date_l.append(dfc1.datetime[0]) 255 | close_l.append(dfc1.iloc[-1]['Close']) 256 | ll_l.append(dfc1.High.max()) 257 | hh_l.append(dfc1.Low.min()) 258 | range_l.append(dfc1.High.max() - dfc1.Low.min()) 259 | 260 | volume_l.append(dfc1.Volume.sum()) 261 | rf_l.append(dfc1.rf.sum()) 262 | # !!! get IB 263 | dfc1['cumsumvol'] = dfc1.Volume.cumsum() 264 | dfc1['cumsumrf'] = dfc1.rf.cumsum() 265 | dfc1['cumsumhigh'] = dfc1.High.cummax() 266 | dfc1['cumsummin'] = dfc1.Low.cummin() 267 | # !!! append ib values 268 | # 60 min = 1 hr divide by time frame to get number of bars 269 | ibv_l.append(dfc1.cumsumvol[int(60/freq)]) 270 | ibrf_l.append(dfc1.cumsumrf[int(60/freq)]) 271 | ib_l.append(dfc1.cumsummin[int(60/freq)]) 272 | ibh_l.append(dfc1.cumsumhigh[int(60/freq)]) 273 | 274 | # dffin = pd.concat(dfcon_l) 275 | max_po = max(i_poctpo_l) 276 | min_po = min(i_poctpo_l) 277 | 278 | dist_df = pd.DataFrame({'date': date_l, 'maxtpo': i_poctpo_l, 'tpocount': i_tposum, 'vahlist': vah_l, 279 | 'poclist': poc_l, 'vallist': val_l, 'btlist': bt_l, 'lvnlist': lvn_l, 'excesslist': excess_l, 280 | 'volumed': volume_l, 'rfd': rf_l, 'highd': hh_l, 'lowd': ll_l, 'ranged': range_l, 'ibh': ibh_l, 281 | 'ibl': ib_l, 'ibvol': ibv_l, 'ibrf': ibrf_l, 'close': close_l}) 282 | 283 | dist_df['distr'] = dist_df.tpocount/dist_df.maxtpo 284 | dismean = math.floor(dist_df.distr.mean()) 285 | dissig = math.floor(dist_df.distr.std()) 286 | 287 | dist_df['daytype'] = np.where(np.logical_and(dist_df.distr >= dismean, 288 | dist_df.distr < dismean + (dissig)), 'Trend Distribution Day', '') 289 | 290 | dist_df['daytype'] = np.where(np.logical_and(dist_df.distr < dismean, 291 | dist_df.distr >= dismean - (dissig)), 'Normal Variation Day', dist_df['daytype']) 292 | 293 | dist_df['daytype'] = np.where(dist_df.distr < dismean - (dissig), 294 | 'Neutral Day', dist_df['daytype']) 295 | 296 | dist_df['daytype'] = np.where(dist_df.distr > dismean + (dissig), 297 | 'Trend Day', dist_df['daytype']) 298 | daytypes = dist_df['daytype'].to_list() 299 | 300 | # !!! get ranking based on distribution data frame aka dist_df 301 | ranking_df = dist_df.copy() 302 | ranking_df['vahtrend'] = np.where(ranking_df.vahlist >= ranking_df.vahlist.shift(), 1, -1) 303 | ranking_df['valtrend'] = np.where(ranking_df.vallist >= ranking_df.vallist.shift(), 1, -1) 304 | ranking_df['poctrend'] = np.where(ranking_df.poclist >= ranking_df.poclist.shift(), 1, -1) 305 | ranking_df['hhtrend'] = np.where(ranking_df.highd >= ranking_df.highd.shift(), 1, -1) 306 | ranking_df['lltrend'] = np.where(ranking_df.lowd >= ranking_df.lowd.shift(), 1, -1) 307 | ranking_df['closetrend'] = np.where(ranking_df.close >= ranking_df.close.shift(), 1, -1) 308 | ranking_df['cl_poc'] = np.where(ranking_df.close >= ranking_df.poclist, 1, -1) 309 | ranking_df['cl_vah'] = np.where(ranking_df.close >= ranking_df.vahlist, 2, 0) # Max is 2 310 | ranking_df['cl_val'] = np.where(ranking_df.close <= ranking_df.vallist, -2, 0) # Min is -2 311 | # !!! total 9 rankings, even though 2 of them have max score of +2 and -2 their else score set to 0 so wont exceed 100% 312 | ranking_df['power1'] = 100*((ranking_df.vahtrend + ranking_df.valtrend+ranking_df.poctrend+ranking_df.hhtrend + 313 | ranking_df.lltrend+ranking_df['closetrend']+ranking_df['cl_poc']+ranking_df['cl_vah']+ranking_df['cl_val'])/9) 314 | 315 | a, b = 70, 100 316 | x, y = ranking_df.power1.min(), ranking_df.power1.max() 317 | ranking_df['power'] = (ranking_df.power1 - x) / (y - x) * (b - a) + a 318 | 319 | except Exception as e: 320 | print(str(e)) 321 | ranking_df = [] 322 | dfmp_l = [] 323 | 324 | return(dfmp_l, ranking_df) 325 | 326 | 327 | def get_contextnow(mean_val, ranking): 328 | ibrankdf = ranking.copy() 329 | ibvol_mean = mean_val['volib_mean'] 330 | ibrf_mean = mean_val['ibrf_mean'] 331 | rf_mean = mean_val['rf_mean'] 332 | vol_mean = mean_val['volume_mean'] 333 | 334 | ibrankdf['ibmid'] = ibrankdf.ibl+((ibrankdf.ibh-ibrankdf.ibl)/2) 335 | 336 | ibrankdf['ib_poc'] = np.where(ibrankdf.ibmid >= ibrankdf.poclist.shift(), 1, -1) 337 | ibrankdf['ib_vah'] = np.where(ibrankdf.ibmid >= ibrankdf.vahlist.shift(), 2, 0) 338 | ibrankdf['ib_val'] = np.where(ibrankdf.ibmid <= ibrankdf.vallist.shift(), -2, 0) 339 | 340 | ibrankdf['ibvol_rise'] = ibrankdf.ibvol/ibvol_mean 341 | ibrankdf['ibvol_rise'] = ibrankdf.ibvol_rise * ibrankdf.ib_poc 342 | 343 | ibrankdf['power1'] = 100*((ibrankdf.ib_poc+ibrankdf.ib_vah + 344 | ibrankdf.ib_val+ibrankdf.ibvol_rise)/4) 345 | 346 | # normalize manually instead of sklearn minmax scaler to avoid dependency 347 | 348 | a, b = 50, 100 349 | x, y = ibrankdf.power1.min(), ibrankdf.power1.max() 350 | ibrankdf['power'] = (ibrankdf.power1 - x) / (y - x) * (b - a) + a 351 | 352 | return ibrankdf 353 | 354 | 355 | def get_rf(df): 356 | df['cup'] = np.where(df['Close'] >= df['Close'].shift(), 1, -1) 357 | df['hup'] = np.where(df['High'] >= df['High'].shift(), 1, -1) 358 | df['lup'] = np.where(df['Low'] >= df['Low'].shift(), 1, -1) 359 | 360 | df['rf'] = df['cup'] + df['hup'] + df['lup'] 361 | df = df.drop(['cup', 'lup', 'hup'], axis=1) 362 | return df 363 | 364 | 365 | def get_mean(dfhist, avglen=30, freq=30): 366 | dfhist = get_rf(dfhist.copy()) 367 | dfhistd = dfhist.resample("D").agg( 368 | {'symbol': 'last', 'Open': 'first', 'High': 'max', 'Low': 'min', 'Close': 'last', 'Volume': 'sum', 369 | 'rf': 'sum', }) 370 | dfhistd = dfhistd.dropna() 371 | comp_days = len(dfhistd) 372 | 373 | vm30 = dfhistd['Volume'].rolling(avglen).mean() 374 | volume_mean = vm30[len(vm30) - 1] 375 | rf30 = (dfhistd['rf']).rolling(avglen).mean() 376 | rf_mean = rf30[len(rf30) - 1] 377 | 378 | date2 = dfhistd.index[1].date() 379 | mask = dfhist.index.date < date2 380 | dfsession = dfhist.loc[mask] 381 | session_hr = math.ceil(len(dfsession)/60) 382 | "get IB volume mean" 383 | ib_start = dfhist.index.time[0] 384 | ib_end = dfhist.index.time[int(freq*(60/freq))] 385 | dfib = dfhist.between_time(ib_start, ib_end) 386 | # dfib = df.head(int(60/freq)) 387 | # dfib['Volume'].plot() 388 | dfib = dfib.resample("D").agg( 389 | {'symbol': 'last', 'Open': 'first', 'High': 'max', 'Low': 'min', 'Close': 'last', 'Volume': 'sum', 390 | 'rf': 'sum', }) 391 | dfib = dfib.dropna() 392 | vib = dfib['Volume'].rolling(avglen).mean() 393 | volib_mean = vib[len(vib) - 1] 394 | ibrf30 = (dfib['rf']).rolling(avglen).mean() 395 | ibrf_mean = ibrf30[len(ibrf30) - 1] 396 | 397 | all_val = dict(volume_mean=volume_mean, rf_mean=rf_mean, volib_mean=volib_mean, 398 | ibrf_mean=ibrf_mean, session_hr=session_hr) 399 | 400 | return all_val 401 | -------------------------------------------------------------------------------- /tpo_project.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | Created on Thu Jul 30 06:34:11 2020 4 | 5 | @author: alex1 6 | 7 | twitter.com/beinghorizontal 8 | 9 | beta version 10 | 11 | """ 12 | 13 | import pandas as pd 14 | import plotly.graph_objects as go 15 | import dash 16 | import dash_core_components as dcc 17 | import dash_html_components as html 18 | from dash.dependencies import Input, Output 19 | from tpo_helper import get_ticksize, abc, get_mean, get_rf, get_context, get_contextnow 20 | import numpy as np 21 | from datetime import timedelta 22 | # from transform import transform_live, transform_hist 23 | # from alpha_dataframe import get_data 24 | 25 | app = dash.Dash(__name__) 26 | 27 | # ticksz = 5 28 | # trading_hr = 7 29 | 30 | refresh_int = 1 # refresh interval in seconds for live updates 31 | freq = 30 32 | avglen = 10 # num days mean to get values 33 | days_to_display = 10 # Number of last n days you want on the screen to display 34 | mode = 'tpo' # for volume --> 'vol' 35 | 36 | dfhist = pd.read_csv('history.txt') # 1 min historical data in symbol,datetime,open,high,low,close,volume 37 | 38 | # Check the sample file. Match the format exactly else code will not run. 39 | 40 | dfhist.iloc[:, 2:] = dfhist.iloc[:, 2:].apply(pd.to_numeric) 41 | 42 | ticksz = get_ticksize(dfhist, freq=freq) # # It calculates tick size for TPO based on mean and standard deviation. 43 | symbol = dfhist.symbol[0] 44 | 45 | 46 | def datetime(dfhist): 47 | """ 48 | dfhist : pandas series 49 | Convert date time to pandas date time 50 | Returns dataframe with datetime index 51 | """ 52 | dfhist['datetime2'] = pd.to_datetime(dfhist['datetime'], format='%Y%m%d %H:%M:%S') 53 | dfhist = dfhist.set_index(dfhist['datetime2'], drop=True, inplace=False) 54 | return(dfhist) 55 | 56 | 57 | dfhist = datetime(dfhist) 58 | mean_val = get_mean(dfhist, avglen=avglen, freq=freq) # Get mean values for context and also get daily trading hours 59 | trading_hr = mean_val['session_hr'] 60 | # !!! get rotational factor 61 | dfhist = get_rf(dfhist.copy()) 62 | # !!! resample to desire time frequency. For TPO charts 30 min is optimal 63 | dfhist = dfhist.resample(str(freq)+'min').agg({'symbol': 'last', 'datetime': 'first', 'Open': 'first', 'High': 'max', 64 | 'Low': 'min', 'Close': 'last', 'Volume': 'sum', 'rf': 'sum'}) 65 | dfhist = dfhist.dropna() 66 | 67 | # slice df based on days_to_display parameter 68 | dt1 = dfhist.index[-1] 69 | sday1 = dt1 - timedelta(days_to_display) 70 | dfhist = dfhist[(dfhist.index.date > sday1.date())] 71 | # !!! concat current data to avoid insufficient bar num error 72 | 73 | 74 | def live_merge(dfli): 75 | """ 76 | dfli: pandas dataframe with live quotes. 77 | 78 | This is the live data, and will continue to refresh. Since it merges with historical data keep the format same though source 79 | can be different.. 80 | For this we only need small sample and if there are duplicate quotes duplicate values will get droped keeping the original value. 81 | """ 82 | 83 | dflive = datetime(dfli) 84 | 85 | dflive = get_rf(dflive.copy()) 86 | dflive = dflive.resample(str(freq)+'min').agg({'symbol': 'last', 'datetime': 'first', 'Open': 'first', 'High': 'max', 87 | 'Low': 'min', 'Close': 'last', 'Volume': 'sum', 'rf': 'sum'}) 88 | 89 | df_final = pd.concat([dfhist, dflive]) 90 | df_final = df_final.drop_duplicates() 91 | 92 | return (df_final) 93 | 94 | 95 | # get live data from external source, it is not inside loop so it gets called 1 time only to make sure historical data is in sync 96 | dfli = pd.read_csv('live.txt') # provided the sample file. To check live updates are working, add data in live.txt 97 | df_final = live_merge(dfli) 98 | df_final.iloc[:, 2:] = df_final.iloc[:, 2:].apply(pd.to_numeric) 99 | 100 | # warning: do not chnage the settings below. There are HTML tags get triggered for live updates. Took me a while to figure out 101 | # ------------------------------------------------------------------------------ 102 | # App layout 103 | app.layout = html.Div( 104 | html.Div([ 105 | dcc.Location(id='url', refresh=False), 106 | dcc.Link('For questions, ping me on Twitter', href='https://twitter.com/beinghorizontal'), 107 | html.Br(), 108 | dcc.Link('FAQ and python source code', href='http://www.github.com/beinghorizontal/tpo_project'), 109 | html.H4('@beinghorizontal'), 110 | dcc.Graph(id='beinghorizontal'), 111 | dcc.Interval( 112 | id='interval-component', 113 | interval=refresh_int*1000, # in milliseconds 114 | n_intervals=0 115 | ) 116 | ]) 117 | ) 118 | 119 | # ------------------------------------------------------------------------------ 120 | # Connect the Plotly graphs with Dash Components 121 | 122 | @app.callback(Output(component_id='beinghorizontal', component_property='figure'), 123 | [Input('interval-component', 'n_intervals')]) 124 | def update_graph(n): 125 | """ 126 | main loop for resfreshing the data and to display the chart. It gets triggered every n second as per our 127 | settings. 128 | """ 129 | dfli = pd.read_csv('live.txt') # This is live file in loop to check updates every n seconds 130 | df = live_merge(dfli) 131 | df.iloc[:, 2:] = df.iloc[:, 2:].apply(pd.to_numeric) 132 | 133 | # !!! split the dataframe with new date 134 | DFList = [group[1] for group in df.groupby(df.index.date)] 135 | # !!! for context based bubbles at the top with text hovers 136 | dfcontext = get_context(df, freq=freq, ticksize=ticksz, style=mode, session_hr=trading_hr) 137 | # get market profile DataFrame and ranking as a series for each day. 138 | # @todo: IN next version, display the ranking DataFrame with drop-down menu 139 | dfmp_list = dfcontext[0] 140 | ranking = dfcontext[1] 141 | # !!! get context based on IB It is predictive value caculated by using various IB stats and previous day's value area 142 | # IB is 1st 1 hour of the session. Not useful for scrips with global 24 x 7 session 143 | context_ibdf = get_contextnow(mean_val, ranking) 144 | ibpower = context_ibdf.power # Non-normalised IB strength 145 | ibpower1 = context_ibdf.power1 # Normalised IB strength for dynamic shape size for markers at bottom 146 | 147 | fig = go.Figure() 148 | fig = go.Figure(data=[go.Candlestick(x=df['datetime'], 149 | open=df['Open'], 150 | high=df['High'], 151 | low=df['Low'], 152 | close=df['Close'], 153 | showlegend=True, 154 | name=symbol, opacity=0.3)]) # To make candlesticks more prominent increase the opacity 155 | 156 | # !!! get TPO for each day 157 | for i in range(len(dfmp_list)): # test the loop with i=1 158 | 159 | # df1 is used for datetime axis, other dataframe we have is df_mp but it is not a timeseries 160 | df1 = DFList[i].copy() 161 | df_mp = dfmp_list[i] 162 | irank = ranking.iloc[i] 163 | # df_mp['i_date'] = df1['datetime'][0] 164 | df_mp['i_date'] = irank.date 165 | # # @todo: background color for text 166 | df_mp['color'] = np.where(np.logical_and( 167 | df_mp['close'] > irank.vallist, df_mp['close'] < irank.vahlist), 'green', 'white') 168 | 169 | df_mp = df_mp.set_index('i_date', inplace=False) 170 | 171 | fig.add_trace(go.Scatter(x=df_mp.index, y=df_mp.close, mode="text", name=str(df_mp.index[0]), text=df_mp.alphabets, 172 | showlegend=False, textposition="top right", textfont=dict(family="verdana", size=6, color=df_mp.color))) 173 | power = int(irank['power1']) 174 | if power < 0: 175 | my_rgb = 'rgba({power}, 3, 252, 0.5)'.format(power=abs(165)) 176 | else: 177 | my_rgb = 'rgba(23, {power}, 3, 0.5)'.format(power=abs(252)) 178 | 179 | fig.add_trace(go.Scatter( 180 | # x=[df1.iloc[4]['datetime']], 181 | x=[irank.date], 182 | y=[df['High'].max()], 183 | mode="markers", 184 | marker=dict(color=my_rgb, size=0.40*abs(power), 185 | line=dict(color='rgb(17, 17, 17)', width=2)), 186 | # marker_symbol='square', 187 | hovertext=['VAH:{}, POC:{}, VAL:{}, Balance Target:{}, Day Type:{}'.format(irank.vahlist, irank.poclist, irank.vallist, 188 | irank.btlist, irank.daytype)], showlegend=False 189 | )) 190 | # !!! we will use this for hover text at bottom for developing day 191 | if ibpower1[i] < 0: 192 | ib_rgb = 'rgba(165, 3, 252, 0.5)' 193 | else: 194 | ib_rgb = 'rgba(23, 252, 3, 0.5)' 195 | 196 | fig.add_trace(go.Scatter( 197 | # x=[df1.iloc[4]['datetime']], 198 | x=[irank.date], 199 | y=[df['Low'].min()], 200 | mode="markers", 201 | marker=dict(color=ib_rgb, size=0.40 * \ 202 | abs(ibpower[i]), line=dict(color='rgb(17, 17, 17)', width=2)), 203 | marker_symbol='square', 204 | hovertext=['Vol_mean:{}, Vol_Daily:{}, RF_mean:{}, RF_daily:{}, IBvol_mean:{}, IBvol_day:{}, IB_RFmean:{}, IB_RFday:{}'.format(round(mean_val['volume_mean'], 2), 205 | round(irank.volumed, 2), round(mean_val['rf_mean'], 2), round( 206 | irank.rfd, 2), round(mean_val['volib_mean'], 2), 207 | round(irank.ibvol, 2), round(mean_val['ibrf_mean'], 2), round(irank.ibrf, 2))], showlegend=False 208 | )) 209 | 210 | lvns = irank.lvnlist 211 | 212 | for lvn in lvns: 213 | fig.add_shape( 214 | # Line Horizontal 215 | type="line", 216 | x0=df1.iloc[0]['datetime'], 217 | y0=lvn, 218 | x1=df1.iloc[5]['datetime'], 219 | y1=lvn, 220 | line=dict( 221 | color="darksalmon", 222 | width=2, 223 | dash="dashdot",),) 224 | 225 | excess = irank.excesslist 226 | if excess > 0: 227 | fig.add_shape( 228 | # Line Horizontal 229 | type="line", 230 | x0=df1.iloc[0]['datetime'], 231 | y0=excess, 232 | x1=df1.iloc[5]['datetime'], 233 | y1=excess, 234 | line=dict( 235 | color="cyan", 236 | width=2, 237 | dash="dashdot",),) 238 | 239 | # @todo: last price marker. Color code as per close above poc or below 240 | ltp = df1.iloc[-1]['Close'] 241 | if ltp >= irank.poclist: 242 | ltp_color = 'green' 243 | else: 244 | ltp_color = 'red' 245 | 246 | fig.add_trace(go.Scatter( 247 | x=[df1.iloc[-1]['datetime']], 248 | y=[df1.iloc[-1]['Close']], 249 | mode="text", 250 | name="last traded price", 251 | text=['last '+str(df1.iloc[-1]['Close'])], 252 | textposition="bottom right", 253 | textfont=dict(size=12, color=ltp_color), 254 | showlegend=False 255 | )) 256 | 257 | fig.layout.xaxis.color = 'white' 258 | fig.layout.yaxis.color = 'white' 259 | fig.layout.autosize = True 260 | fig["layout"]["height"] = 800 261 | # fig.layout.hovermode = 'x' 262 | 263 | fig.update_xaxes(title_text='Time', title_font=dict(size=18, color='white'), 264 | tickangle=45, tickfont=dict(size=8, color='white'), showgrid=False, dtick=len(dfmp_list)) 265 | 266 | fig.update_yaxes(title_text=symbol, title_font=dict(size=18, color='white'), 267 | tickfont=dict(size=12, color='white'), showgrid=False) 268 | fig.layout.update(template="plotly_dark", title="@"+abc()[1], autosize=True, 269 | xaxis=dict(showline=True, color='white'), yaxis=dict(showline=True, color='white')) 270 | 271 | fig["layout"]["xaxis"]["rangeslider"]["visible"] = False 272 | fig["layout"]["xaxis"]["tickformat"] = "%H:%M:%S" 273 | 274 | return fig 275 | 276 | 277 | # ------------------------------------------------------------------------------ 278 | if __name__ == '__main__': 279 | app.run_server(debug=False) 280 | -------------------------------------------------------------------------------- /version_0.2.0/changes.txt: -------------------------------------------------------------------------------- 1 | Redefined insights (Hover text in bubbles and squares) 2 | 3 | Insights now pops up vertically instead of horizontally 4 | 5 | Insights shows market strength i.e. total weightage and wightage breakdown 6 | 7 | For faster uploads using big data used plotly's GPU rendering (scattergl) 8 | 9 | For live streaming charts historical calculations done only for the first time when chart loads and calculations for the live data appended as new data arrives. 10 | 11 | Added a new type of static TPO chart. Static TPO with range slider. Unlike PLotly's default slider bar, this one uses Dash's range slider. The big difference is it dynamically updates Y-axis so after the zooming chart wouldn't look dwarfed. 12 | 13 | -------------------------------------------------------------------------------- /version_0.2.0/static_tpo_slider.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | Created on Sun Aug 2 07:02:43 2020 4 | 5 | @author: alex1 6 | twitter.com/beinghorizontal 7 | 8 | """ 9 | 10 | import pandas as pd 11 | import plotly.graph_objects as go 12 | from tpo_helper2 import get_ticksize, abc, get_mean, get_rf, get_context, get_dayrank, get_ibrank 13 | import numpy as np 14 | import dash 15 | import dash_core_components as dcc 16 | import dash_html_components as html 17 | from dash.dependencies import Input, Output 18 | 19 | app = dash.Dash(__name__) 20 | 21 | display = 5 22 | freq = 30 23 | avglen = 10 # num days mean to get values 24 | mode = 'tpo' # for volume --> 'vol' 25 | dfhist = pd.read_csv(r'C:\Users\alex1\Dropbox\scripts\tpo_v2\history.txt') 26 | dfhist.iloc[:, 2:] = dfhist.iloc[:, 2:].apply(pd.to_numeric) 27 | 28 | 29 | def datetime(data): 30 | dfhist = data.copy() 31 | dfhist['datetime2'] = pd.to_datetime(dfhist['datetime'], format='%Y%m%d %H:%M:%S') 32 | dfhist = dfhist.set_index(dfhist['datetime2'], drop=True, inplace=False) 33 | return dfhist 34 | 35 | 36 | dfhist = datetime(dfhist) 37 | 38 | ticksz = get_ticksize(dfhist, freq=freq) 39 | symbol = dfhist.symbol[0] 40 | mean_val = get_mean(dfhist, avglen=avglen, freq=freq) 41 | trading_hr = mean_val['session_hr'] 42 | 43 | # !!! get rotational factor again for 30 min resampled data 44 | dfhist = get_rf(dfhist.copy()) 45 | dfhist = dfhist.resample(str(freq) + 'min').agg({'symbol': 'last', 'datetime': 'first', 'Open': 'first', 'High': 'max', 46 | 'Low': 'min', 'Close': 'last', 'Volume': 'sum', 'rf': 'sum'}) 47 | dfhist = dfhist.dropna() 48 | 49 | dfnflist = [group[1] for group in dfhist.groupby(dfhist.index.date)] 50 | dates = [] 51 | for d in range(0, len(dfnflist)): 52 | dates.append(dfnflist[d].datetime[-1]) 53 | 54 | date_mark = {str(h): {'label': str(h), 'style': {'color': 'blue', 'fontsize': '4', 'text-orientation': 'upright'}} 55 | for h in range(0, len(dates))} 56 | 57 | app.layout = html.Div([ 58 | # adding a plot 59 | dcc.Graph(id = 'beinghorizontal'), 60 | # range slider 61 | html.P([ 62 | html.Label("Days to display"), 63 | dcc.RangeSlider(id = 'slider', 64 | pushable=display, 65 | marks = date_mark, 66 | min = 0, 67 | max = len(dates)-1, 68 | step = None, 69 | value = [len(dates)-(display+1), len(dates)-1]) 70 | ], style = {'width' : '80%', 71 | 'fontSize' : '14px', 72 | 'padding-left' : '100px', 73 | 'display': 'inline-block'}) 74 | ]) 75 | 76 | @app.callback(Output('beinghorizontal', 'figure'), 77 | [Input('slider', 'value')]) 78 | 79 | def update_figure(value): 80 | 81 | dfresample = dfhist[(dfhist.datetime > dates[value[0]]) & (dfhist.datetime < dates[value[1]])] 82 | DFList = [group[1] for group in dfresample.groupby(dfresample.index.date)] 83 | # !!! for context based bubbles at the top with text hovers 84 | dfcontext = get_context(dfresample, freq=freq, ticksize=ticksz, style=mode, session_hr=trading_hr) 85 | # get market profile DataFrame and ranking as a series for each day. 86 | # @todo: IN next version, display the ranking DataFrame with drop-down menu 87 | dfmp_list = dfcontext[0] 88 | df_distribution = dfcontext[1] 89 | df_ranking = get_dayrank(df_distribution.copy(), mean_val) 90 | ranking = df_ranking[0] 91 | power1 = ranking.power1 # Non-normalised IB strength 92 | power = ranking.power # Normalised IB strength for dynamic shape size for markers at bottom 93 | breakdown = df_ranking[1] 94 | dh_list = ranking.highd 95 | dl_list = ranking.lowd 96 | # IB is 1st 1 hour of the session. Not useful for scrips with global 24 x 7 session 97 | context_ibdf = get_ibrank(mean_val, ranking) 98 | ibpower1 = context_ibdf[0].ibpower1 # Non-normalised IB strength 99 | ibpower = context_ibdf[0].IB_power # Normalised IB strength for dynamic shape size for markers at bottom 100 | ibbreakdown = context_ibdf[1] 101 | ib_high_list = context_ibdf[0].ibh 102 | ib_low_list = context_ibdf[0].ibl 103 | 104 | fig = go.Figure(data=[go.Candlestick(x=dfresample['datetime'], 105 | 106 | open=dfresample['Open'], 107 | high=dfresample['High'], 108 | low=dfresample['Low'], 109 | close=dfresample['Close'], 110 | showlegend=True, 111 | name=symbol, opacity=0.3)]) # To make candlesticks more prominent increase the opacity 112 | 113 | for i in range(len(dfmp_list)): # test the loop with i=1 114 | 115 | df1 = DFList[i].copy() 116 | df_mp = dfmp_list[i] 117 | irank = ranking.iloc[i] # select single row from ranking df 118 | df_mp['i_date'] = irank.date 119 | # # @todo: background color for text 120 | df_mp['color'] = np.where(np.logical_and( 121 | df_mp['close'] > irank.vallist, df_mp['close'] < irank.vahlist), 'green', 'white') 122 | 123 | df_mp = df_mp.set_index('i_date', inplace=False) 124 | 125 | fig.add_trace(go.Scattergl(x=df_mp.index, y=df_mp.close, mode="text", name=str(df_mp.index[0]), text=df_mp.alphabets, 126 | showlegend=False, textposition="top right", textfont=dict(family="verdana", size=6, color=df_mp.color))) 127 | 128 | if power1[i] < 0: 129 | my_rgb = 'rgba({power}, 3, 252, 0.5)'.format(power=abs(165)) 130 | else: 131 | my_rgb = 'rgba(23, {power}, 3, 0.5)'.format(power=abs(252)) 132 | 133 | 134 | brk_f_list_maj = [] 135 | f = 0 136 | for f in range(len(breakdown.columns)): 137 | brk_f_list_min=[] 138 | for index, rows in breakdown.iterrows(): 139 | brk_f_list_min.append(index+str(': ')+str(rows[f])+'
') 140 | brk_f_list_maj.append(brk_f_list_min) 141 | 142 | breakdown_values ='' # for bubbles 143 | for st in brk_f_list_maj[i]: 144 | breakdown_values += st 145 | 146 | 147 | # ......................... 148 | ibrk_f_list_maj = [] 149 | g = 0 150 | for g in range(len(ibbreakdown.columns)): 151 | ibrk_f_list_min=[] 152 | for index, rows in ibbreakdown.iterrows(): 153 | ibrk_f_list_min.append(index+str(': ')+str(rows[g])+'
') 154 | ibrk_f_list_maj.append(ibrk_f_list_min) 155 | 156 | ibreakdown_values = '' # for squares 157 | for ist in ibrk_f_list_maj[i]: 158 | ibreakdown_values += ist 159 | # .................................. 160 | fig.add_trace(go.Scattergl( 161 | # x=[df1.iloc[4]['datetime']], 162 | x=[irank.date], 163 | y=[dfresample['High'].max()], 164 | mode="markers", 165 | marker=dict(color=my_rgb, size=0.90*power[i], 166 | line=dict(color='rgb(17, 17, 17)', width=2)), 167 | # marker_symbol='square', 168 | hovertext=['
Insights:
VAH: {}
POC: {}
VAL: {}
Balance Target: {}
Day Type: {}
strength: {}
BreakDown: {}
{}
{}'.format(irank.vahlist, 169 | irank.poclist, irank.vallist,irank.btlist, irank.daytype, irank.power,'','-------------------',breakdown_values)], showlegend=False)) 170 | if ibpower1[i] < 0: 171 | ib_rgb = 'rgba(165, 3, 252, 0.5)' 172 | else: 173 | ib_rgb = 'rgba(23, 252, 3, 0.5)' 174 | 175 | fig.add_trace(go.Scattergl( 176 | # x=[df1.iloc[4]['datetime']], 177 | x=[irank.date], 178 | y=[dfresample['Low'].min()], 179 | mode="markers", 180 | marker=dict(color=ib_rgb, size=0.40 * ibpower[i], line=dict(color='rgb(17, 17, 17)', width=2)), 181 | marker_symbol='square', 182 | hovertext=['
Insights:
Vol_mean: {}
Vol_Daily: {}
RF_mean: {}
RF_daily: {}
IBvol_mean: {}
IBvol_day: {}
IB_RFmean: {}
IB_RFday: {}
strength: {}
BreakDown: {}
{}
{}'.format(mean_val['volume_mean'],irank.volumed, mean_val['rf_mean'],irank.rfd, 183 | mean_val['volib_mean'], irank.ibvol, mean_val['ibrf_mean'],irank.ibrf, ibpower[i],'','......................',ibreakdown_values)],showlegend=False)) 184 | 185 | 186 | lvns = irank.lvnlist 187 | 188 | for lvn in lvns: 189 | if lvn > irank.vallist and lvn < irank.vahlist: 190 | 191 | fig.add_shape( 192 | # Line Horizontal 193 | type="line", 194 | x0=df1.iloc[0]['datetime'], 195 | y0=lvn, 196 | x1=df1.iloc[5]['datetime'], 197 | y1=lvn, 198 | line=dict( 199 | color="darksalmon", 200 | width=2, 201 | dash="dashdot",),) 202 | 203 | fig.add_shape( 204 | # Line Horizontal 205 | type="line", 206 | x0=df1.iloc[0]['datetime'], 207 | y0=ib_low_list[i], 208 | x1=df1.iloc[0]['datetime'], 209 | y1=ib_high_list[i], 210 | line=dict( 211 | color="cyan", 212 | width=3, 213 | ),) 214 | # day high and low 215 | fig.add_shape( 216 | # Line Horizontal 217 | type="line", 218 | x0=df1.iloc[0]['datetime'], 219 | y0=dl_list[i], 220 | x1=df1.iloc[0]['datetime'], 221 | y1=dh_list[i], 222 | line=dict( 223 | color="gray", 224 | width=1, 225 | dash="dashdot",),) 226 | 227 | 228 | 229 | ltp = dfresample.iloc[-1]['Close'] 230 | if ltp >= irank.poclist: 231 | ltp_color = 'green' 232 | else: 233 | ltp_color = 'red' 234 | 235 | fig.add_trace(go.Scatter( 236 | x=[df1.iloc[-1]['datetime']], 237 | y=[df1.iloc[-1]['Close']], 238 | mode="text", 239 | name="last traded price", 240 | text=['last '+str(df1.iloc[-1]['Close'])], 241 | textposition="bottom right", 242 | textfont=dict(size=11, color=ltp_color), 243 | showlegend=False 244 | )) 245 | 246 | fig.layout.xaxis.color = 'white' 247 | fig.layout.yaxis.color = 'white' 248 | fig.layout.autosize = True 249 | fig["layout"]["height"] = 550 250 | 251 | fig.update_xaxes(title_text='Time', title_font=dict(size=18, color='white'), 252 | tickangle=45, tickfont=dict(size=8, color='white'), showgrid=False, dtick=len(dfmp_list)) 253 | 254 | fig.update_yaxes(title_text=symbol, title_font=dict(size=18, color='white'), 255 | tickfont=dict(size=12, color='white'), showgrid=False) 256 | fig.layout.update(template="plotly_dark", title="@"+abc()[1], autosize=True, 257 | xaxis=dict(showline=True, color='white'), yaxis=dict(showline=True, color='white',autorange= True,fixedrange=False)) 258 | 259 | fig["layout"]["xaxis"]["rangeslider"]["visible"] = False 260 | fig["layout"]["xaxis"]["tickformat"] = "%H:%M:%S" 261 | 262 | return fig 263 | 264 | if __name__ == '__main__': 265 | app.run_server(debug = False) 266 | -------------------------------------------------------------------------------- /version_0.2.0/static_tpo_v2.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | Created on Sun Aug 2 07:02:43 2020 4 | 5 | @author: alex1 6 | twitter.com/beinghorizontal 7 | 8 | """ 9 | 10 | import pandas as pd 11 | import plotly.graph_objects as go 12 | from tpo_helper2 import get_ticksize, abc, get_mean, get_rf, get_context, get_dayrank, get_ibrank 13 | import numpy as np 14 | from datetime import timedelta 15 | from plotly.offline import plot 16 | 17 | # read file and set datetime index 18 | dfhist = pd.read_csv(r'C:\Users\alex1\Dropbox\scripts\tpo_v2\history.txt') 19 | dfhist.iloc[:, 2:] = dfhist.iloc[:, 2:].apply(pd.to_numeric) 20 | def datetime(data): 21 | """ 22 | dfhist : pandas series 23 | Convert date time to pandas date time 24 | Returns dataframe with datetime index 25 | """ 26 | dfhist = data.copy() 27 | dfhist['datetime2'] = pd.to_datetime(dfhist['datetime'], format='%Y%m%d %H:%M:%S') 28 | dfhist = dfhist.set_index(dfhist['datetime2'], drop=True, inplace=False) 29 | return dfhist 30 | dfhist = datetime(dfhist) 31 | 32 | # manual parameters 33 | freq = 30 34 | avglen = 10 # num days mean to get values 35 | days_to_display = 10 # Number of last n days you want on the screen to display 36 | mode = 'tpo' # for volume --> 'vol' 37 | # dynamic parameters based on data & mean values for volume and Rotational Factor (RF) 38 | ticksz = get_ticksize(dfhist, freq=freq) 39 | symbol = dfhist.symbol[0] 40 | mean_val = get_mean(dfhist, avglen=avglen, freq=freq) 41 | trading_hr = mean_val['session_hr'] 42 | 43 | # !!! get rotational factor again for 30 min resampled data 44 | dfhist = get_rf(dfhist.copy()) 45 | # !!! resample to desire time frequency. For TPO charts 30 min is optimal 46 | dfresample = dfhist.copy() # create seperate resampled data frame and preserve old 1 min file 47 | dfresample = dfresample.resample(str(freq)+'min').agg({'symbol': 'last', 'datetime': 'last', 'Open': 'first', 'High': 'max', 48 | 'Low': 'min', 'Close': 'last', 'Volume': 'sum', 'rf': 'sum'}) 49 | dfresample = dfresample.dropna() 50 | 51 | # slice df based on days_to_display parameter 52 | dt1 = dfresample.index[-1] 53 | sday1 = dt1 - timedelta(days_to_display) 54 | dfresample = dfresample[(dfresample.index.date > sday1.date())] 55 | 56 | # !!! split the dataframe with new date 57 | DFList = [group[1] for group in dfresample.groupby(dfresample.index.date)] 58 | # !!! for context based bubbles at the top with text hovers 59 | dfcontext = get_context(dfresample, freq=freq, ticksize=ticksz, style=mode, session_hr=trading_hr) 60 | # get market profile DataFrame and ranking as a series for each day. 61 | # @todo: IN next version, display the ranking DataFrame with drop-down menu 62 | dfmp_list = dfcontext[0] 63 | df_distribution = dfcontext[1] 64 | df_ranking = get_dayrank(df_distribution.copy(), mean_val) 65 | ranking = df_ranking[0] 66 | power1 = ranking.power1 # Non-normalised IB strength 67 | power = ranking.power # Normalised IB strength for dynamic shape size for markers at bottom 68 | breakdown = df_ranking[1] 69 | dh_list = ranking.highd 70 | dl_list = ranking.lowd 71 | # !!! get context based on IB It is predictive value caculated by using various IB stats and previous day's value area 72 | # IB is 1st 1 hour of the session. Not useful for scrips with global 24 x 7 session 73 | context_ibdf = get_ibrank(mean_val, ranking) 74 | ibpower1 = context_ibdf[0].ibpower1 # Non-normalised IB strength 75 | ibpower = context_ibdf[0].IB_power # Normalised IB strength for dynamic shape size for markers at bottom 76 | ibbreakdown = context_ibdf[1] 77 | ib_high_list = context_ibdf[0].ibh 78 | ib_low_list = context_ibdf[0].ibl 79 | 80 | fig = go.Figure(data=[go.Candlestick(x=dfresample['datetime'], 81 | 82 | open=dfresample['Open'], 83 | high=dfresample['High'], 84 | low=dfresample['Low'], 85 | close=dfresample['Close'], 86 | showlegend=True, 87 | name=symbol, opacity=0.3)]) # To make candlesticks more prominent increase the opacity 88 | 89 | # !!! get TPO for each day 90 | for i in range(len(dfmp_list)): # test the loop with i=1 91 | 92 | # df1 is used for datetime axis, other dataframe we have is df_mp but it is not a timeseries 93 | df1 = DFList[i].copy() 94 | df_mp = dfmp_list[i] 95 | irank = ranking.iloc[i] # select single row from ranking df 96 | # df_mp['i_date'] = df1['datetime'][0] 97 | df_mp['i_date'] = irank.date 98 | # # @todo: background color for text 99 | df_mp['color'] = np.where(np.logical_and( 100 | df_mp['close'] > irank.vallist, df_mp['close'] < irank.vahlist), 'green', 'white') 101 | 102 | df_mp = df_mp.set_index('i_date', inplace=False) 103 | 104 | fig.add_trace(go.Scattergl(x=df_mp.index, y=df_mp.close, mode="text", name=str(df_mp.index[0]), text=df_mp.alphabets, 105 | showlegend=False, textposition="top right", textfont=dict(family="verdana", size=6, color=df_mp.color))) 106 | 107 | #power1 = int(irank['power1']) # non normalized strength 108 | #power = int(irank['power']) 109 | if power1[i] < 0: 110 | my_rgb = 'rgba({power}, 3, 252, 0.5)'.format(power=abs(165)) 111 | else: 112 | my_rgb = 'rgba(23, {power}, 3, 0.5)'.format(power=abs(252)) 113 | 114 | 115 | brk_f_list_maj = [] 116 | f = 0 117 | for f in range(len(breakdown.columns)): 118 | brk_f_list_min=[] 119 | for index, rows in breakdown.iterrows(): 120 | brk_f_list_min.append(index+str(': ')+str(rows[f])+'
') 121 | brk_f_list_maj.append(brk_f_list_min) 122 | 123 | breakdown_values ='' # for bubbles 124 | for st in brk_f_list_maj[i]: 125 | breakdown_values += st 126 | 127 | 128 | # ......................... 129 | ibrk_f_list_maj = [] 130 | g = 0 131 | for g in range(len(ibbreakdown.columns)): 132 | ibrk_f_list_min=[] 133 | for index, rows in ibbreakdown.iterrows(): 134 | ibrk_f_list_min.append(index+str(': ')+str(rows[g])+'
') 135 | ibrk_f_list_maj.append(ibrk_f_list_min) 136 | 137 | ibreakdown_values = '' # for squares 138 | for ist in ibrk_f_list_maj[i]: 139 | ibreakdown_values += ist 140 | # irank.power1 141 | # .................................. 142 | fig.add_trace(go.Scattergl( 143 | # x=[df1.iloc[4]['datetime']], 144 | x=[irank.date], 145 | y=[dfresample['High'].max()], 146 | mode="markers", 147 | marker=dict(color=my_rgb, size=0.90*power[i], 148 | line=dict(color='rgb(17, 17, 17)', width=2)), 149 | # marker_symbol='square', 150 | hovertext=['
Insights:
VAH: {}
POC: {}
VAL: {}
Balance Target: {}
Day Type: {}
strength: {}
BreakDown: {}
{}
{}'.format(irank.vahlist, 151 | irank.poclist, irank.vallist,irank.btlist, irank.daytype, irank.power,'','-------------------',breakdown_values)], showlegend=False)) 152 | # !!! we will use this for hover text at bottom for developing day 153 | if ibpower1[i] < 0: 154 | ib_rgb = 'rgba(165, 3, 252, 0.5)' 155 | else: 156 | ib_rgb = 'rgba(23, 252, 3, 0.5)' 157 | 158 | fig.add_trace(go.Scattergl( 159 | # x=[df1.iloc[4]['datetime']], 160 | x=[irank.date], 161 | y=[dfresample['Low'].min()], 162 | mode="markers", 163 | marker=dict(color=ib_rgb, size=0.40 * ibpower[i], line=dict(color='rgb(17, 17, 17)', width=2)), 164 | marker_symbol='square', 165 | hovertext=['
Insights:
Vol_mean: {}
Vol_Daily: {}
RF_mean: {}
RF_daily: {}
IBvol_mean: {}
IBvol_day: {}
IB_RFmean: {}
IB_RFday: {}
strength: {}
BreakDown: {}
{}
{}'.format(mean_val['volume_mean'],irank.volumed, mean_val['rf_mean'],irank.rfd, 166 | mean_val['volib_mean'], irank.ibvol, mean_val['ibrf_mean'],irank.ibrf, ibpower[i],'','......................',ibreakdown_values)],showlegend=False)) 167 | 168 | # @todo: add ib high, low, hd, hl as vertical line at start of each day's start just before above TPOs ib_high_list[i],ib_low_list[i],dh_list[i], dl_list[i] 169 | 170 | 171 | lvns = irank.lvnlist 172 | 173 | for lvn in lvns: 174 | if lvn > irank.vallist and lvn < irank.vahlist: 175 | 176 | fig.add_shape( 177 | # Line Horizontal 178 | type="line", 179 | x0=df1.iloc[0]['datetime'], 180 | y0=lvn, 181 | x1=df1.iloc[5]['datetime'], 182 | y1=lvn, 183 | line=dict( 184 | color="darksalmon", 185 | width=2, 186 | dash="dashdot",),) 187 | 188 | fig.add_shape( 189 | # Line Horizontal 190 | type="line", 191 | x0=df1.iloc[0]['datetime'], 192 | y0=ib_low_list[i], 193 | x1=df1.iloc[0]['datetime'], 194 | y1=ib_high_list[i], 195 | line=dict( 196 | color="cyan", 197 | width=3, 198 | ),) 199 | # day high and low 200 | fig.add_shape( 201 | # Line Horizontal 202 | type="line", 203 | x0=df1.iloc[0]['datetime'], 204 | y0=dl_list[i], 205 | x1=df1.iloc[0]['datetime'], 206 | y1=dh_list[i], 207 | line=dict( 208 | color="gray", 209 | width=1, 210 | dash="dashdot",),) 211 | 212 | 213 | 214 | ltp = dfresample.iloc[-1]['Close'] 215 | if ltp >= irank.poclist: 216 | ltp_color = 'green' 217 | else: 218 | ltp_color = 'red' 219 | 220 | fig.add_trace(go.Scatter( 221 | x=[df1.iloc[-1]['datetime']], 222 | y=[df1.iloc[-1]['Close']], 223 | mode="text", 224 | name="last traded price", 225 | text=['last '+str(df1.iloc[-1]['Close'])], 226 | textposition="bottom right", 227 | textfont=dict(size=11, color=ltp_color), 228 | showlegend=False 229 | )) 230 | 231 | fig.layout.xaxis.color = 'white' 232 | fig.layout.yaxis.color = 'white' 233 | fig.layout.autosize = True 234 | fig["layout"]["height"] = 650 235 | # fig.layout.hovermode = 'x' 236 | 237 | fig.update_xaxes(title_text='Time', title_font=dict(size=18, color='white'), 238 | tickangle=45, tickfont=dict(size=8, color='white'), showgrid=False, dtick=len(dfmp_list)) 239 | 240 | fig.update_yaxes(title_text=symbol, title_font=dict(size=18, color='white'), 241 | tickfont=dict(size=12, color='white'), showgrid=False) 242 | fig.layout.update(template="plotly_dark", title="@"+abc()[1], autosize=True, 243 | xaxis=dict(showline=True, color='white'), yaxis=dict(showline=True, color='white',autorange= True,fixedrange=False)) 244 | 245 | fig["layout"]["xaxis"]["rangeslider"]["visible"] = False 246 | fig["layout"]["xaxis"]["tickformat"] = "%H:%M:%S" 247 | 248 | # fig.write_html('tpo.html') # uncomment to save as html 249 | 250 | # To save as png 251 | # from kaleido.scopes.plotly import PlotlyScope # pip install kaleido 252 | # scope = PlotlyScope() 253 | # with open("figure.png", "wb") as f: 254 | # f.write(scope.transform(fig, format="png")) 255 | 256 | plot(fig, auto_open=True) 257 | fig.show() 258 | -------------------------------------------------------------------------------- /version_0.2.0/tpo_helper2.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | Created on Mon Jul 27 13:26:04 2020 4 | 5 | @author: alex1 6 | """ 7 | import math 8 | import numpy as np 9 | import pandas as pd 10 | 11 | 12 | # import itertools 13 | 14 | 15 | def get_ticksize(data, freq=30): 16 | # data = df 17 | numlen = int(len(data) / 2) 18 | # sample size for calculating ticksize = 50% of most recent data 19 | tztail = data.tail(numlen).copy() 20 | tztail['tz'] = tztail.Close.rolling(freq).std() # std. dev of 30 period rolling 21 | tztail = tztail.dropna() 22 | ticksize = np.ceil(tztail['tz'].mean() * 0.50) # 1/4 th of mean std. dev is our ticksize 23 | 24 | if ticksize < 0.2: 25 | ticksize = 0.2 # minimum ticksize limit 26 | 27 | return int(ticksize) 28 | 29 | 30 | def abc(session_hr=6.5, freq=30): 31 | caps = [' A', ' B', ' C', ' D', ' E', ' F', ' G', ' H', ' I', ' J', ' K', ' L', ' M', 32 | ' N', ' O', ' P', ' Q', ' R', ' S', ' T', ' U', ' V', ' W', ' X', ' Y', ' Z'] 33 | abc_lw = [x.lower() for x in caps] 34 | Aa = caps + abc_lw 35 | alimit = math.ceil(session_hr * (60 / freq)) + 3 36 | if alimit > 52: 37 | alphabets = Aa * int( 38 | (np.ceil((alimit - 52) / 52)) + 1) # if bar frequency is less than 30 minutes then multiply list 39 | else: 40 | alphabets = Aa[0:alimit] 41 | bk = [28, 31, 35, 40, 33, 34, 41, 44, 35, 52, 41, 40, 46, 27, 38] 42 | ti = [] 43 | for s1 in bk: 44 | ti.append(Aa[s1 - 1]) 45 | tt = (''.join(ti)) 46 | 47 | return alphabets, tt 48 | 49 | 50 | def tpo(dft_rs, freq=30, ticksize=10, style='tpo', session_hr=6.5): 51 | if len(dft_rs) > int(60 / freq): 52 | dft_rs = dft_rs.drop_duplicates('datetime') 53 | dft_rs = dft_rs.reset_index(inplace=False, drop=True) 54 | dft_rs['rol_mx'] = dft_rs['High'].cummax() 55 | dft_rs['rol_mn'] = dft_rs['Low'].cummin() 56 | dft_rs['ext_up'] = dft_rs['rol_mn'] > dft_rs['rol_mx'].shift(2) 57 | dft_rs['ext_dn'] = dft_rs['rol_mx'] < dft_rs['rol_mn'].shift(2) 58 | 59 | alphabets = abc(session_hr, freq)[0] 60 | alphabets = alphabets[0:len(dft_rs)] 61 | hh = dft_rs['High'].max() 62 | ll = dft_rs['Low'].min() 63 | day_range = hh - ll 64 | dft_rs['abc'] = alphabets 65 | # place represents total number of steps to take to compare the TPO count 66 | place = int(np.ceil((hh - ll) / ticksize)) 67 | # kk = 0 68 | abl_bg = [] 69 | tpo_countbg = [] 70 | pricel = [] 71 | volcountbg = [] 72 | # datel = [] 73 | for u in range(place): 74 | abl = [] 75 | tpoc = [] 76 | volcount = [] 77 | p = ll + (u * ticksize) 78 | for lenrs in range(len(dft_rs)): 79 | if p >= dft_rs['Low'][lenrs] and p < dft_rs['High'][lenrs]: 80 | abl.append(dft_rs['abc'][lenrs]) 81 | tpoc.append(1) 82 | volcount.append((dft_rs['Volume'][lenrs]) / freq) 83 | abl_bg.append(''.join(abl)) 84 | tpo_countbg.append(sum(tpoc)) 85 | volcountbg.append(sum(volcount)) 86 | pricel.append(p) 87 | 88 | dftpo = pd.DataFrame({'close': pricel, 'alphabets': abl_bg, 89 | 'tpocount': tpo_countbg, 'volsum': volcountbg}) 90 | # drop empty rows 91 | dftpo['alphabets'].replace('', np.nan, inplace=True) 92 | dftpo = dftpo.dropna() 93 | dftpo = dftpo.reset_index(inplace=False, drop=True) 94 | dftpo = dftpo.sort_index(ascending=False) 95 | dftpo = dftpo.reset_index(inplace=False, drop=True) 96 | 97 | if style == 'tpo': 98 | column = 'tpocount' 99 | else: 100 | column = 'volsum' 101 | 102 | dfmx = dftpo[dftpo[column] == dftpo[column].max()] 103 | 104 | mid = ll + ((hh - ll) / 2) 105 | dfmax = dfmx.copy() 106 | dfmax['poc-mid'] = abs(dfmax['close'] - mid) 107 | pocidx = dfmax['poc-mid'].idxmin() 108 | poc = dfmax['close'][pocidx] 109 | poctpo = dftpo[column].max() 110 | tpo_updf = dftpo[dftpo['close'] > poc] 111 | tpo_updf = tpo_updf.sort_index(ascending=False) 112 | tpo_updf = tpo_updf.reset_index(inplace=False, drop=True) 113 | 114 | tpo_dndf = dftpo[dftpo['close'] < poc] 115 | tpo_dndf = tpo_dndf.reset_index(inplace=False, drop=True) 116 | 117 | valtpo = (dftpo[column].sum()) * 0.70 118 | 119 | abovepoc = tpo_updf[column].to_list() 120 | belowpoc = tpo_dndf[column].to_list() 121 | 122 | if (len(abovepoc) / 2).is_integer() is False: 123 | abovepoc = abovepoc + [0] 124 | 125 | if (len(belowpoc) / 2).is_integer() is False: 126 | belowpoc = belowpoc + [0] 127 | 128 | bel2 = np.array(belowpoc).reshape(-1, 2) 129 | bel3 = bel2.sum(axis=1) 130 | bel4 = list(bel3) 131 | abv2 = np.array(abovepoc).reshape(-1, 2) 132 | abv3 = abv2.sum(axis=1) 133 | abv4 = list(abv3) 134 | # cum = poctpo 135 | # up_i = 0 136 | # dn_i = 0 137 | df_va = pd.DataFrame({'abv': pd.Series(abv4), 'bel': pd.Series(bel4)}) 138 | df_va = df_va.fillna(0) 139 | df_va['abv_idx'] = np.where(df_va.abv > df_va.bel, 1, 0) 140 | df_va['bel_idx'] = np.where(df_va.bel > df_va.abv, 1, 0) 141 | df_va['cum_tpo'] = np.where(df_va.abv > df_va.bel, df_va.abv, 0) 142 | df_va['cum_tpo'] = np.where(df_va.bel > df_va.abv, df_va.bel, df_va.cum_tpo) 143 | 144 | df_va['cum_tpo'] = np.where(df_va.abv == df_va.bel, df_va.abv + df_va.bel, df_va.cum_tpo) 145 | df_va['abv_idx'] = np.where(df_va.abv == df_va.bel, 1, df_va.abv_idx) 146 | df_va['bel_idx'] = np.where(df_va.abv == df_va.bel, 1, df_va.bel_idx) 147 | df_va['cum_tpo_cumsum'] = df_va.cum_tpo.cumsum() 148 | # haven't add poc tpo because loop cuts off way before 70% so it gives same effect 149 | df_va_cut = df_va[df_va.cum_tpo_cumsum + poctpo <= valtpo] 150 | vah_idx = (df_va_cut.abv_idx.sum()) * 2 151 | val_idx = (df_va_cut.bel_idx.sum()) * 2 152 | 153 | if vah_idx >= len(tpo_updf) and vah_idx != 0: 154 | vah_idx = vah_idx - 2 155 | 156 | if val_idx >= len(tpo_dndf) and val_idx != 0: 157 | val_idx = val_idx - 2 158 | 159 | vah = tpo_updf.close[vah_idx] 160 | val = tpo_dndf.close[val_idx] 161 | 162 | tpoval = dftpo[ticksize:-(ticksize)]['tpocount'] # take mid section 163 | exhandle_index = np.where(tpoval <= 2, tpoval.index, None) # get index where TPOs are 2 164 | exhandle_index = list(filter(None, exhandle_index)) 165 | distance = ticksize * 3 # distance b/w two ex handles / lvn 166 | lvn_list = [] 167 | for ex in exhandle_index[0:-1:distance]: 168 | lvn_list.append(dftpo['close'][ex]) 169 | 170 | # excess_h = dftpo[0:int(ticksize)+int(math.ceil(ticksize*0.5))]['tpocount'].sum() / ticksize # take top tail 171 | # excess_l = dftpo[-1:-1*(ticksize+int(math.ceil(ticksize*0.5)))]['tpocount'].sum() / ticksize # take lower tail 172 | # excess = 0 173 | # if excess_h == 0 and dftpo.iloc[-1]['close'] < poc: 174 | # excess = dftpo['close'][int(ticksize)+int(math.ceil(ticksize*0.5))] 175 | # 176 | # if excess_l == 0 and dftpo.iloc[-1]['close'] >= poc: 177 | # excess = dftpo.iloc[-1*(ticksize+int(math.ceil(ticksize*0.5)))]['close'] 178 | 179 | area_above_poc = dft_rs.High.max() - poc 180 | area_below_poc = poc - dft_rs.Low.min() 181 | if area_above_poc == 0: 182 | area_above_poc = 1 183 | if area_below_poc == 0: 184 | area_below_poc = 1 185 | balance = area_above_poc / area_below_poc 186 | 187 | if balance >= 0: 188 | bal_target = poc - area_above_poc 189 | else: 190 | bal_target = poc + area_below_poc 191 | 192 | mp = {'df': dftpo, 'vah': round(vah, 2), 'poc': round(poc, 2), 'val': round(val, 2), 'lvn': lvn_list, 193 | 'bal_target': round(bal_target, 2)} 194 | 195 | else: 196 | print('not enough bars for date {}'.format(dft_rs['datetime'][0])) 197 | mp = {} 198 | 199 | return mp 200 | 201 | 202 | # !!! fetch all MP derived results here with date and do extra context analysis 203 | 204 | 205 | def get_context(df_hi, freq=30, ticksize=5, style='tpo', session_hr=6.5): 206 | """ 207 | df_hi: resampled DataFrame 208 | mean_val: mean dily values 209 | 210 | return: 1) list of dataframes with TPOs 2) DataFrame with ranking for bubbles 3) DataFrame with a ranking breakdown for hover text 211 | """ 212 | # df_hi=dfresample.copy() 213 | # df_hi = dflive.copy() 214 | try: 215 | 216 | DFcontext = [group[1] for group in df_hi.groupby(df_hi.index.date)] 217 | dfmp_l = [] 218 | i_poctpo_l = [] 219 | i_tposum = [] 220 | vah_l = [] 221 | poc_l = [] 222 | val_l = [] 223 | bt_l = [] 224 | lvn_l = [] 225 | # excess_l = [] 226 | date_l = [] 227 | volume_l = [] 228 | rf_l = [] 229 | ibv_l = [] 230 | ibrf_l = [] 231 | ibh_l = [] 232 | ib_l = [] 233 | close_l = [] 234 | hh_l = [] 235 | ll_l = [] 236 | range_l = [] 237 | 238 | for c in range(len(DFcontext)): # c=0 for testing 239 | dfc1 = DFcontext[c].copy() 240 | dfc1.iloc[:, 2:6] = dfc1.iloc[:, 2:6].apply(pd.to_numeric) 241 | 242 | dfc1 = dfc1.reset_index(inplace=False, drop=True) 243 | mpc = tpo(dfc1, freq, ticksize, style, session_hr) 244 | dftmp = mpc['df'] 245 | dfmp_l.append(dftmp) 246 | # for day types 247 | i_poctpo_l.append(dftmp['tpocount'].max()) 248 | i_tposum.append(dftmp['tpocount'].sum()) 249 | # !!! get value areas 250 | vah_l.append(mpc['vah']) 251 | poc_l.append(mpc['poc']) 252 | val_l.append(mpc['val']) 253 | 254 | bt_l.append(mpc['bal_target']) 255 | lvn_l.append(mpc['lvn']) 256 | # excess_l.append(mpc['excess']) 257 | 258 | # !!! operatio of non profile stats 259 | date_l.append(dfc1.datetime[0]) 260 | close_l.append(dfc1.iloc[-1]['Close']) 261 | ll_l.append(dfc1.High.max()) 262 | hh_l.append(dfc1.Low.min()) 263 | range_l.append(dfc1.High.max() - dfc1.Low.min()) 264 | 265 | volume_l.append(dfc1.Volume.sum()) 266 | rf_l.append(dfc1.rf.sum()) 267 | # !!! get IB 268 | dfc1['cumsumvol'] = dfc1.Volume.cumsum() 269 | dfc1['cumsumrf'] = dfc1.rf.cumsum() 270 | dfc1['cumsumhigh'] = dfc1.High.cummax() 271 | dfc1['cumsummin'] = dfc1.Low.cummin() 272 | # !!! append ib values 273 | # 60 min = 1 hr divide by time frame to get number of bars 274 | ibv_l.append(dfc1.cumsumvol[int(60 / freq)]) 275 | ibrf_l.append(dfc1.cumsumrf[int(60 / freq)]) 276 | ib_l.append(dfc1.cumsummin[int(60 / freq)]) 277 | ibh_l.append(dfc1.cumsumhigh[int(60 / freq)]) 278 | 279 | # dffin = pd.concat(dfcon_l) 280 | # max_po = max(i_poctpo_l) 281 | # min_po = min(i_poctpo_l) 282 | 283 | dist_df = pd.DataFrame({'date': date_l, 'maxtpo': i_poctpo_l, 'tpocount': i_tposum, 'vahlist': vah_l, 284 | 'poclist': poc_l, 'vallist': val_l, 'btlist': bt_l, 'lvnlist': lvn_l, 285 | 'volumed': volume_l, 'rfd': rf_l, 'highd': hh_l, 'lowd': ll_l, 'ranged': range_l, 286 | 'ibh': ibh_l, 287 | 'ibl': ib_l, 'ibvol': ibv_l, 'ibrf': ibrf_l, 'close': close_l}) 288 | 289 | except Exception as e: 290 | print(str(e)) 291 | ranking_df = [] 292 | dfmp_l = [] 293 | dist_df = [] 294 | 295 | return (dfmp_l, dist_df) 296 | 297 | def get_dayrank(dist_df, mean_val): 298 | 299 | # LVNs 300 | 301 | lvnlist = dist_df['lvnlist'].to_list() 302 | cllist = dist_df['close'].to_list() 303 | lvn_powerlist = [] 304 | total_lvns = 0 305 | for c,llist in zip(cllist, lvnlist): 306 | if len(llist) == 0: 307 | delta_lvn = 0 308 | total_lvns = 0 309 | # print('total_lvns 0 ') 310 | lvn_powerlist.append(total_lvns) 311 | else: 312 | for l in llist: 313 | delta_lvn = c - l 314 | if delta_lvn >= 0: 315 | lvn_i = 1 316 | else: 317 | lvn_i = -1 318 | total_lvns = total_lvns + lvn_i 319 | lvn_powerlist.append(total_lvns) 320 | total_lvns = 0 321 | 322 | dist_df['Single_Prints'] = lvn_powerlist 323 | 324 | dist_df['distr'] = dist_df.tpocount / dist_df.maxtpo 325 | dismean = math.floor(dist_df.distr.mean()) 326 | dissig = math.floor(dist_df.distr.std()) 327 | 328 | # Assign day types based on TPO distribution and give numerical value for each day types for calculating total strength at the end 329 | 330 | dist_df['daytype'] = np.where(np.logical_and(dist_df.distr >= dismean, 331 | dist_df.distr < dismean + dissig), 'Trend Distribution Day', '') 332 | 333 | dist_df['daytype_num'] = np.where(np.logical_and(dist_df.distr >= dismean, 334 | dist_df.distr < dismean + dissig), 3, 0) 335 | 336 | dist_df['daytype'] = np.where(np.logical_and(dist_df.distr < dismean, 337 | dist_df.distr >= dismean - dissig), 'Normal Variation Day', 338 | dist_df['daytype']) 339 | 340 | dist_df['daytype_num'] = np.where(np.logical_and(dist_df.distr < dismean, 341 | dist_df.distr >= dismean - dissig), 2, dist_df['daytype_num']) 342 | 343 | dist_df['daytype'] = np.where(dist_df.distr < dismean - dissig, 344 | 'Neutral Day', dist_df['daytype']) 345 | 346 | dist_df['daytype_num'] = np.where(dist_df.distr < dismean - dissig, 347 | 1, dist_df['daytype_num']) 348 | 349 | dist_df['daytype'] = np.where(dist_df.distr > dismean + dissig, 350 | 'Trend Day', dist_df['daytype']) 351 | dist_df['daytype_num'] = np.where(dist_df.distr > dismean + dissig, 352 | 4, dist_df['daytype_num']) 353 | dist_df['daytype_num'] = np.where(dist_df.close >= dist_df.poclist, dist_df.daytype_num * 1, 354 | dist_df.daytype_num * -1) # assign signs as per bias 355 | 356 | daytypes = dist_df['daytype'].to_list() 357 | 358 | # volume comparison with mean 359 | rf_mean = mean_val['rf_mean'] 360 | vol_mean = mean_val['volume_mean'] 361 | 362 | dist_df['vold_zscore'] = (dist_df.volumed - vol_mean) / dist_df.volumed.std(ddof=0) 363 | dist_df['rfd_zscore'] = (abs(dist_df.rfd) - rf_mean) / abs(dist_df.rfd).std(ddof=0) 364 | a, b = 1, 4 365 | x, y = dist_df.rfd_zscore.min(), dist_df.rfd_zscore.max() 366 | dist_df['norm_rf'] = (dist_df.rfd_zscore - x) / (y - x) * (b - a) + a 367 | 368 | p, q = dist_df.vold_zscore.min(), dist_df.vold_zscore.max() 369 | dist_df['norm_volume'] = (dist_df.vold_zscore - p) / (q - p) * (b - a) + a 370 | 371 | dist_df['Volume_Factor'] = np.where(dist_df.close >= dist_df.poclist, dist_df.norm_volume * 1, 372 | dist_df.norm_volume * -1) 373 | dist_df['Rotation_Factor'] = np.where(dist_df.rfd >= 0, dist_df.norm_rf * 1, dist_df.norm_rf * -1) 374 | 375 | # !!! get ranking based on distribution data frame aka dist_df 376 | ranking_df = dist_df.copy() 377 | ranking_df['VAH_vs_yVAH'] = np.where(ranking_df.vahlist >= ranking_df.vahlist.shift(), 1, -1) 378 | ranking_df['VAL_vs_yVAL'] = np.where(ranking_df.vallist >= ranking_df.vallist.shift(), 1, -1) 379 | ranking_df['POC_vs_yPOC'] = np.where(ranking_df.poclist >= ranking_df.poclist.shift(), 1, -1) 380 | ranking_df['H_vs_yH'] = np.where(ranking_df.highd >= ranking_df.highd.shift(), 1, -1) 381 | ranking_df['L_vs_yL'] = np.where(ranking_df.lowd >= ranking_df.lowd.shift(), 1, -1) 382 | ranking_df['Close_vs_yCL'] = np.where(ranking_df.close >= ranking_df.close.shift(), 1, -1) 383 | ranking_df['CL>POC= ranking_df.poclist, ranking_df.close < ranking_df.vahlist), 1, 0) 385 | ranking_df['CLval'] = np.where( 386 | np.logical_and(ranking_df.close < ranking_df.poclist, ranking_df.close >= ranking_df.vallist), -1, 387 | 0) # Max is 2 388 | ranking_df['CL=VAH'] = np.where(ranking_df.close >= ranking_df.vahlist, 2, 0) 390 | 391 | ranking_df['power1'] = 100 * ( 392 | (ranking_df.VAH_vs_yVAH + ranking_df.VAL_vs_yVAL + ranking_df.POC_vs_yPOC + ranking_df.H_vs_yH + 393 | ranking_df.L_vs_yL + ranking_df['Close_vs_yCL'] + ranking_df['CL>POCval'] +ranking_df.Single_Prints+ 395 | ranking_df['CL=VAH'] + ranking_df.Volume_Factor + ranking_df.Rotation_Factor + ranking_df.daytype_num) / 14) 397 | 398 | c, d = 25, 100 399 | r, s = abs(ranking_df.power1).min(), abs(ranking_df.power1).max() 400 | ranking_df['power'] = (abs(ranking_df.power1) - r) / (s - r) * (d - c) + c 401 | ranking_df = ranking_df.round(2) 402 | # ranking_df['power'] = abs(ranking_df['power1']) 403 | 404 | breakdown_df = ranking_df[ 405 | ['Single_Prints', 'daytype_num', 'Volume_Factor', 'Rotation_Factor', 'VAH_vs_yVAH', 'VAL_vs_yVAL', 'POC_vs_yPOC', 'H_vs_yH', 406 | 'L_vs_yL', 'Close_vs_yCL', 'CL>POCval', 'CL=VAH']].transpose() 407 | 408 | breakdown_df = breakdown_df.round(2) 409 | 410 | return (ranking_df, breakdown_df) 411 | 412 | 413 | def get_ibrank(mean_val, ranking): 414 | ibrankdf = ranking.copy() 415 | ibvol_mean = mean_val['volib_mean'] 416 | ibrf_mean = mean_val['ibrf_mean'] 417 | rf_mean = mean_val['rf_mean'] 418 | vol_mean = mean_val['volume_mean'] 419 | 420 | ibrankdf['ibrf_zscore'] = (abs(ibrankdf.ibrf) - ibrf_mean) / abs(ibrankdf.ibrf).std(ddof=0) 421 | a, b = 1, 4 422 | e, f = ibrankdf.ibrf_zscore.min(), ibrankdf.ibrf_zscore.max() 423 | ibrankdf['ibnorm_rf'] = (ibrankdf.ibrf_zscore - e) / (f - e) * (b - a) + a 424 | ibrankdf['IB_RotationFactor'] = np.where(ibrankdf.ibrf >= 0, ibrankdf.ibnorm_rf * 1, ibrankdf.ibnorm_rf * -1) 425 | ibrankdf['IB_RotationFactor'] = ibrankdf['IB_RotationFactor'].round(2) 426 | ibrankdf['ibvol_zscore'] = (ibrankdf.ibvol - ibvol_mean) / ibrankdf.ibvol.std(ddof=0) 427 | a, b = 1, 4 428 | g, h = ibrankdf.ibvol_zscore.min(), ibrankdf.ibvol_zscore.max() 429 | ibrankdf['ibnorm_vol'] = (ibrankdf.ibvol_zscore - g) / (h - g) * (b - a) + a 430 | ibrankdf['IB_Volume_factor'] = np.where(ibrankdf.close >= ibrankdf.poclist, ibrankdf.ibnorm_vol * 1, 431 | ibrankdf.ibnorm_vol * -1) 432 | ibrankdf['IB_Volume_factor'] = ibrankdf['IB_Volume_factor'].round() 433 | ibrankdf['ibmid'] = ibrankdf.ibl + ((ibrankdf.ibh - ibrankdf.ibl) / 2) 434 | """ 435 | IB mid inside value area neutral 1 -1, above value area inside prev day high 2 -2, above prev day high low 3 -3 436 | assign sign to volum based on close > poc 437 | """ 438 | ibrankdf['IB>yPOC= ibrankdf.poclist.shift(), ibrankdf.ibmid < ibrankdf.vahlist.shift()), 1, 0) 440 | ibrankdf['IByVAL'] = np.where( 441 | np.logical_and(ibrankdf.ibmid >= ibrankdf.vallist.shift(), ibrankdf.ibmid < ibrankdf.poclist.shift()), -1, 0) 442 | ibrankdf['IB>yVAH= ibrankdf.vahlist.shift(), ibrankdf.ibmid < ibrankdf.highd.shift()), 2, 0) 444 | ibrankdf['IByL'] = np.where( 445 | np.logical_and(ibrankdf.ibmid < ibrankdf.vallist.shift(), ibrankdf.ibmid > ibrankdf.lowd.shift()), -2, 0) 446 | ibrankdf['IB>yH'] = np.where(ibrankdf.ibmid >= ibrankdf.highd.shift(), 4, 0) 447 | ibrankdf['IByPOCyVAL'] + 450 | ibrankdf['IB>yVAHyL'] + ibrankdf['IB>yH'] + ibrankdf[ 451 | 'IByPOCyVAL', 'IB>yVAHyL', 'IB>yH', 'IB= df['Close'].shift(), 1, -1) 468 | df['hup'] = np.where(df['High'] >= df['High'].shift(), 1, -1) 469 | df['lup'] = np.where(df['Low'] >= df['Low'].shift(), 1, -1) 470 | 471 | df['rf'] = df['cup'] + df['hup'] + df['lup'] 472 | df = df.drop(['cup', 'lup', 'hup'], axis=1) 473 | return df 474 | 475 | 476 | def get_mean(dfhist, avglen=30, freq=30): 477 | """ 478 | dfhist: pandas dataframe 1 min frequency 479 | avglen: Length for mean values 480 | freq: timeframe for the candlestick & TPOs 481 | 482 | return: a) daily mean for volume, rotational factor (absolute value), IB volume, IB RF b) session length 483 | 484 | """ 485 | 486 | dfhist = get_rf(dfhist.copy()) 487 | dfhistd = dfhist.resample("D").agg( 488 | {'symbol': 'last', 'Open': 'first', 'High': 'max', 'Low': 'min', 'Close': 'last', 'Volume': 'sum', 489 | 'rf': 'sum', }) 490 | dfhistd = dfhistd.dropna() 491 | comp_days = len(dfhistd) 492 | 493 | vm30 = dfhistd['Volume'].rolling(avglen).mean() 494 | volume_mean = vm30[len(vm30) - 1] 495 | rf30 = abs((dfhistd['rf'])).rolling(avglen).mean() # it is abs mean to get meaningful value to compare daily values 496 | rf_mean = rf30[len(rf30) - 1] 497 | 498 | date2 = dfhistd.index[1].date() 499 | mask = dfhist.index.date < date2 500 | dfsession = dfhist.loc[mask] 501 | session_hr = math.ceil(len(dfsession) / 60) 502 | "get IB volume mean" 503 | ib_start = dfhist.index.time[0] 504 | ib_end = dfhist.index.time[int(freq * (60 / freq))] 505 | dfib = dfhist.between_time(ib_start, ib_end) 506 | # dfib = df.head(int(60/freq)) 507 | # dfib['Volume'].plot() 508 | dfib = dfib.resample("D").agg( 509 | {'symbol': 'last', 'Open': 'first', 'High': 'max', 'Low': 'min', 'Close': 'last', 'Volume': 'sum', 510 | 'rf': 'sum', }) 511 | dfib = dfib.dropna() 512 | vib = dfib['Volume'].rolling(avglen).mean() 513 | volib_mean = vib[len(vib) - 1] 514 | ibrf30 = abs((dfib['rf'])).rolling(avglen).mean() 515 | ibrf_mean = ibrf30[len(ibrf30) - 1] 516 | 517 | all_val = dict(volume_mean=volume_mean, rf_mean=rf_mean, volib_mean=volib_mean, 518 | ibrf_mean=ibrf_mean, session_hr=session_hr) 519 | 520 | return all_val 521 | -------------------------------------------------------------------------------- /version_0.2.0/tpo_project_v2.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """ 3 | Updated on Mon Sep 15 15:34:11 2020 4 | 5 | @author: alex 6 | 7 | twitter.com/beinghorizontal 8 | 9 | beta version 10 | 11 | """ 12 | 13 | import pandas as pd 14 | import plotly.graph_objects as go 15 | import dash 16 | import dash_core_components as dcc 17 | import dash_html_components as html 18 | from dash.dependencies import Input, Output 19 | from tpo_helper2 import get_ticksize, abc, get_mean, get_rf, get_context, get_dayrank, get_ibrank 20 | import numpy as np 21 | from datetime import timedelta 22 | 23 | app = dash.Dash(__name__) 24 | 25 | # ticksz = 5 26 | # trading_hr = 7 27 | refresh_int = 20 # refresh interval in seconds for live updates 28 | freq = 30 29 | avglen = 10 # num days mean to get values 30 | days_to_display = 10 # Number of last n days you want on the screen to display 31 | mode = 'tpo' # for volume --> 'vol' 32 | 33 | dfhist = pd.read_csv('history.txt') # 1 min historical data in symbol,datetime,open,high,low,close,volume 34 | 35 | # Check the sample file. Match the format exactly else code will not run. 36 | 37 | dfhist.iloc[:, 2:] = dfhist.iloc[:, 2:].apply(pd.to_numeric) 38 | 39 | ticksz = get_ticksize(dfhist, freq=freq) # # It calculates tick size for TPO based on mean and standard deviation. 40 | symbol = dfhist.symbol[0] 41 | 42 | 43 | def datetime(data): 44 | dfhist = data.copy() 45 | dfhist['datetime2'] = pd.to_datetime(dfhist['datetime'], format='%Y%m%d %H:%M:%S') 46 | dfhist = dfhist.set_index(dfhist['datetime2'], drop=True, inplace=False) 47 | return dfhist 48 | 49 | # set index as pandas datetime 50 | dfhist = datetime(dfhist) 51 | mean_val = get_mean(dfhist, avglen=avglen, freq=freq) # Get mean values for comparison 52 | trading_hr = mean_val['session_hr'] 53 | # !!! get rotational factor 54 | dfhist = get_rf(dfhist.copy()) 55 | 56 | # !!! resample to desire time frequency. For TPO charts 30 min is optimal 57 | dfresample = dfhist.copy() # create seperate resampled data frame and preserve old 1 min file 58 | 59 | dfresample = dfresample.resample(str(freq)+'min').agg({'symbol': 'last', 'datetime': 'first', 'Open': 'first', 'High': 'max', 60 | 'Low': 'min', 'Close': 'last', 'Volume': 'sum', 'rf': 'sum'}) 61 | dfresample = dfresample.dropna() 62 | 63 | # slice df based on days_to_display parameter 64 | dt1 = dfresample.index[-1] 65 | sday1 = dt1 - timedelta(days_to_display) 66 | dfresample = dfresample[(dfresample.index.date > sday1.date())] 67 | # to save memory do all the calculations for context outside the loop 68 | dfcontext = get_context(dfresample.copy(), freq=freq, ticksize=ticksz, style=mode, session_hr=trading_hr) 69 | 70 | # warning: do not chnage the settings below. There are HTML tags get triggered for live updates. Took me a while to figure out 71 | # ------------------------------------------------------------------------------ 72 | # App layout 73 | app.layout = html.Div( 74 | html.Div([ 75 | dcc.Location(id='url', refresh=False), 76 | dcc.Link('For questions, ping me on Twitter', href='https://twitter.com/beinghorizontal'), 77 | html.Br(), 78 | dcc.Link('FAQ and python source code', href='http://www.github.com/beinghorizontal/tpo_project'), 79 | html.H4('@beinghorizontal'), 80 | dcc.Graph(id='beinghorizontal'), 81 | dcc.Interval( 82 | id='interval-component', 83 | interval=refresh_int*1000, # in milliseconds 84 | n_intervals=0 85 | ) 86 | ]) 87 | ) 88 | 89 | # ------------------------------------------------------------------------------ 90 | # Connect the Plotly graphs with Dash Components 91 | 92 | @app.callback(Output(component_id='beinghorizontal', component_property='figure'), 93 | [Input('interval-component', 'n_intervals')]) 94 | def update_graph(n, df=dfresample.copy(), dfcontext=dfcontext): 95 | 96 | """ 97 | main loop for resfreshing the data and to display the chart. It gets triggered every n second as per our 98 | settings. 99 | """ 100 | dfmp_list = dfcontext[0] 101 | 102 | df_distribution = dfcontext[1] 103 | distribution_hist = df_distribution.copy() 104 | 105 | dflive = pd.read_csv('live.txt') 106 | dflive.iloc[:, 2:] = dflive.iloc[:, 2:].apply(pd.to_numeric) 107 | dflive = datetime(dflive) 108 | dflive = get_rf(dflive.copy()) 109 | dflive = dflive.resample(str(freq) + 'min').agg( 110 | {'symbol': 'last', 'datetime': 'last', 'Open': 'first', 'High': 'max', 111 | 'Low': 'min', 'Close': 'last', 'Volume': 'sum', 'rf': 'sum'}) 112 | dflive = dflive.dropna() 113 | 114 | dfcontext_live = get_context(dflive.copy(), freq=freq, ticksize=ticksz, style=mode, session_hr=trading_hr) 115 | dfmp_live = dfcontext_live[0] # it will be in list format so take [0] slice for current day MP data frame 116 | df_distribution_live = dfcontext_live[1] 117 | df_distribution_concat = pd.concat([distribution_hist, df_distribution_live]) 118 | df_distribution_concat = df_distribution_concat.reset_index(inplace=False, drop=True) 119 | df_updated_rank = get_dayrank(df_distribution_concat, mean_val) 120 | 121 | ranking = df_updated_rank[0] 122 | power1 = ranking.power1 # Non-normalised IB strength 123 | power = ranking.power # Normalised IB strength for dynamic shape size for markers at bottom 124 | breakdown = df_updated_rank[1] 125 | dh_list = ranking.highd 126 | dl_list = ranking.lowd 127 | 128 | # !!! get context based on IB It is predictive value caculated by using various IB stats and previous day's value area 129 | # IB is 1st 1 hour of the session. Not useful for scrips with global 24 x 7 session 130 | context_ibdf = get_ibrank(mean_val, ranking) 131 | ibpower1 = context_ibdf[0].ibpower1 # Non-normalised IB strength 132 | ibpower = context_ibdf[0].IB_power # Normalised IB strength for dynamic shape size for markers at bottom 133 | ibbreakdown = context_ibdf[1] 134 | ib_high_list = context_ibdf[0].ibh 135 | ib_low_list = context_ibdf[0].ibl 136 | dfmp_list = dfmp_list + dfmp_live 137 | df_merge = pd.concat([df, dflive]) 138 | df_merge2 = df_merge.drop_duplicates('datetime') 139 | df = df_merge2.copy() 140 | 141 | fig = go.Figure(data=[go.Candlestick(x=df['datetime'], 142 | open=df['Open'], 143 | high=df['High'], 144 | low=df['Low'], 145 | close=df['Close'], 146 | showlegend=True, 147 | name=symbol, opacity=0.3)]) # To make candlesticks more prominent increase the opacity 148 | 149 | # !!! get TPO for each day 150 | DFList = [group[1] for group in df.groupby(df.index.date)] 151 | 152 | if trading_hr >= 12: 153 | day_loop = len(dfmp_list) - 1 154 | else: 155 | day_loop = len(dfmp_list) 156 | 157 | for i in range(day_loop): # test the loop with i=1 158 | # print(i) 159 | # df1 is used for datetime axis, other dataframe we have is df_mp but it is not a timeseries 160 | df1 = DFList[i].copy() 161 | df_mp = dfmp_list[i] 162 | irank = ranking.iloc[i] 163 | # df_mp['i_date'] = df1['datetime'][0] 164 | df_mp['i_date'] = irank.date 165 | # # @todo: background color for text 166 | df_mp['color'] = np.where(np.logical_and( 167 | df_mp['close'] > irank.vallist, df_mp['close'] < irank.vahlist), 'green', 'white') 168 | 169 | df_mp = df_mp.set_index('i_date', inplace=False) 170 | 171 | fig.add_trace(go.Scattergl(x=df_mp.index, y=df_mp.close, mode="text", name=str(df_mp.index[0]), text=df_mp.alphabets, 172 | showlegend=False, textposition="top right", textfont=dict(family="verdana", size=6, color=df_mp.color))) 173 | if power1[i] < 0: 174 | my_rgb = 'rgba({power}, 3, 252, 0.5)'.format(power=abs(165)) 175 | else: 176 | my_rgb = 'rgba(23, {power}, 3, 0.5)'.format(power=abs(252)) 177 | 178 | brk_f_list_maj = [] 179 | f = 0 180 | for f in range(len(breakdown.columns)): 181 | brk_f_list_min = [] 182 | for index, rows in breakdown.iterrows(): 183 | brk_f_list_min.append(index + str(': ') + str(rows[f]) + '
') 184 | brk_f_list_maj.append(brk_f_list_min) 185 | 186 | breakdown_values = '' # for bubbles 187 | for st in brk_f_list_maj[i]: 188 | breakdown_values += st 189 | 190 | # ......................... 191 | ibrk_f_list_maj = [] 192 | g = 0 193 | for g in range(len(ibbreakdown.columns)): 194 | ibrk_f_list_min = [] 195 | for index, rows in ibbreakdown.iterrows(): 196 | ibrk_f_list_min.append(index + str(': ') + str(rows[g]) + '
') 197 | ibrk_f_list_maj.append(ibrk_f_list_min) 198 | 199 | ibreakdown_values = '' # for squares 200 | for ist in ibrk_f_list_maj[i]: 201 | ibreakdown_values += ist 202 | # irank.power1 203 | # .................................. 204 | 205 | fig.add_trace(go.Scattergl( 206 | x=[irank.date], 207 | y=[dfresample['High'].max()], 208 | mode="markers", 209 | marker=dict(color=my_rgb, size=0.90 * power[i], 210 | line=dict(color='rgb(17, 17, 17)', width=2)), 211 | # marker_symbol='square', 212 | hovertext=[ 213 | '
Insights:
VAH: {}
POC: {}
VAL: {}
Balance Target: {}
Day Type: {}
strength: {}
BreakDown: {}
{}
{}'.format( 214 | irank.vahlist, 215 | irank.poclist, irank.vallist, irank.btlist, irank.daytype, irank.power, '', '-------------------', 216 | breakdown_values)], showlegend=False)) 217 | 218 | # !!! we will use this for hover text at bottom for developing day 219 | if ibpower1[i] < 0: 220 | ib_rgb = 'rgba(165, 3, 252, 0.5)' 221 | else: 222 | ib_rgb = 'rgba(23, 252, 3, 0.5)' 223 | 224 | fig.add_trace(go.Scattergl( 225 | x=[irank.date], 226 | y=[dfresample['Low'].min()], 227 | mode="markers", 228 | marker=dict(color=ib_rgb, size=0.40 * ibpower[i], line=dict(color='rgb(17, 17, 17)', width=2)), 229 | marker_symbol='square', 230 | hovertext=[ 231 | '
Insights:
Vol_mean: {}
Vol_Daily: {}
RF_mean: {}
RF_daily: {}
IBvol_mean: {}
IBvol_day: {}
IB_RFmean: {}
IB_RFday: {}
strength: {}
BreakDown: {}
{}
{}'.format( 232 | mean_val['volume_mean'], irank.volumed, mean_val['rf_mean'], irank.rfd, 233 | mean_val['volib_mean'], irank.ibvol, mean_val['ibrf_mean'], irank.ibrf, ibpower[i], '', 234 | '......................', ibreakdown_values)], showlegend=False)) 235 | 236 | lvns = irank.lvnlist 237 | 238 | for lvn in lvns: 239 | if lvn > irank.vallist and lvn < irank.vahlist: 240 | fig.add_shape( 241 | type="line", 242 | x0=df1.iloc[0]['datetime'], 243 | y0=lvn, 244 | x1=df1.iloc[5]['datetime'], 245 | y1=lvn, 246 | line=dict( 247 | color="darksalmon", 248 | width=2, 249 | dash="dashdot", ), ) 250 | # ib high and low 251 | fig.add_shape( 252 | type="line", 253 | x0=df.iloc[0]['datetime'], 254 | y0=ib_low_list[i], 255 | x1=df.iloc[0]['datetime'], 256 | y1=ib_high_list[i], 257 | line=dict( 258 | color="cyan", 259 | width=3, 260 | ), ) 261 | # day high and low 262 | fig.add_shape( 263 | type="line", 264 | x0=df.iloc[0]['datetime'], 265 | y0=dl_list[i], 266 | x1=df.iloc[0]['datetime'], 267 | y1=dh_list[i], 268 | line=dict( 269 | color="gray", 270 | width=1, 271 | dash="dashdot", ), ) 272 | # ltp marker 273 | ltp = df1.iloc[-1]['Close'] 274 | if ltp >= irank.poclist: 275 | ltp_color = 'green' 276 | else: 277 | ltp_color = 'red' 278 | 279 | fig.add_trace(go.Scattergl( 280 | x=[df.iloc[-1]['datetime']], 281 | y=[df.iloc[-1]['Close']], 282 | mode="text", 283 | name="last traded price", 284 | text=['last '+str(df1.iloc[-1]['Close'])], 285 | textposition="bottom right", 286 | textfont=dict(size=12, color=ltp_color), 287 | showlegend=False 288 | 289 | )) 290 | 291 | fig.layout.xaxis.color = 'white' 292 | fig.layout.yaxis.color = 'white' 293 | fig.layout.autosize = True 294 | fig["layout"]["height"] = 900 295 | fig["layout"]["width"] = 1900 296 | # fig.layout.hovermode = 'x' # UNcomment this if you want to see insights for both squares and bubbles 297 | 298 | fig.update_xaxes(title_text='Time', title_font=dict(size=18, color='white'), 299 | tickangle=45, tickfont=dict(size=8, color='white'), showgrid=False, dtick=len(dfmp_list)) 300 | 301 | fig.update_yaxes(title_text=symbol, title_font=dict(size=18, color='white'), 302 | tickfont=dict(size=12, color='white'), showgrid=False) 303 | fig.layout.update(template="plotly_dark", title="@" + abc()[1], autosize=True, 304 | xaxis=dict(showline=True, color='white'), 305 | yaxis=dict(showline=True, color='white', autorange=True, fixedrange=False)) 306 | 307 | fig["layout"]["xaxis"]["rangeslider"]["visible"] = False 308 | fig["layout"]["xaxis"]["tickformat"] = "%H:%M:%S" 309 | fig.update_layout(yaxis_tickformat='d') 310 | 311 | return fig 312 | 313 | # from plotly.offline import plot 314 | # plot(fig, auto_open=True) 315 | 316 | # ------------------------------------------------------------------------------ 317 | if __name__ == '__main__': 318 | app.run_server(debug=True) # To run the code from ipython based IDEs such as spyder, pycharm 319 | --------------------------------------------------------------------------------