├── .gitignore
├── Dockerfile
├── Dockerfile-dev
├── LICENSE.md
├── README.md
├── asciiart.py
├── defs.py
├── dev.yml
├── docker-compose.yml
├── docs
└── index.md
├── main.py
├── mkdocs.yml
├── readthedocs
├── Makefile
├── conf.py
├── index.rst
└── make.bat
├── requirements.txt
├── resources
└── concept.png
├── server_message_handler.py
├── shared_memory_obj.py
└── version.py
/.gitignore:
--------------------------------------------------------------------------------
1 | venv*/*
2 | .idea/*
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM python:3.8-slim
2 |
3 | WORKDIR /opt/project
4 |
5 | COPY . /opt/project/
6 | RUN pip install -r requirements.txt
7 |
8 | CMD python main.py
9 |
10 | # Set some environment variables.
11 | # PYTHONUNBUFFERED keeps Python from buffering our standard output stream,
12 | # which means that logs can be delivered to the user quickly.
13 | # PYTHONDONTWRITEBYTECODE keeps Python from writing .pyc files, which are
14 | # unnecessary in this case.
15 |
16 | ENV PYTHONUNBUFFERED=TRUE
17 | ENV PYTHONDONTWRITEBYTECODE=TRUE
18 |
19 |
20 |
21 |
--------------------------------------------------------------------------------
/Dockerfile-dev:
--------------------------------------------------------------------------------
1 | FROM python:3.8-slim
2 |
3 | WORKDIR /opt/project
4 |
5 | COPY requirements.txt /requirements.txt
6 | RUN pip install -r /requirements.txt
7 |
8 | # Set some environment variables.
9 | # PYTHONUNBUFFERED keeps Python from buffering our standard output stream,
10 | # which means that logs can be delivered to the user quickly.
11 | # PYTHONDONTWRITEBYTECODE keeps Python from writing .pyc files, which are
12 | # unnecessary in this case.
13 |
14 | ENV PYTHONUNBUFFERED=TRUE
15 | ENV PYTHONDONTWRITEBYTECODE=TRUE
16 |
17 |
18 |
19 |
--------------------------------------------------------------------------------
/LICENSE.md:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2022 camelpac
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ```py
2 |
3 | _ _ ___ _ _
4 | /_\ | |_ __ __ _ ___ __ _ / _ \_ __ _____ ___ _ /_\ __ _ ___ _ __ | |_
5 | //_\\| | '_ \ / _` |/ __/ _` | / /_)/ '__/ _ \ \/ / | | | //_\\ / _` |/ _ \ '_ \| __|
6 | / _ \ | |_) | (_| | (_| (_| | / ___/| | | (_) > <| |_| | / _ \ (_| | __/ | | | |_
7 | \_/ \_/_| .__/ \__,_|\___\__,_| \/ |_| \___/_/\_\\__, | \_/ \_/\__, |\___|_| |_|\__|
8 | |_| |___/ |___/
9 |
10 | ```
11 |
12 | This is a project that helps users of [Alpaca](https://alpaca.markets) run more than one data websocket connection against their servers.
13 |
14 | Right now you can only open one websocket connection with your user credentials, so you can't run more than one algorithm at a time.
15 | This project helps you achieve that. Look at this illustration to get the concept:
16 |
17 | 
18 |
19 | It doesn't matter which SDK you are using (Python, JS, Go, C#); you can use this proxy with any of them.
20 |
21 | ## How to execute the docker container
22 | You have 2 options:
23 | - Using docker with the image from docker hub (easiest)
24 | - Cloning the repo and using docker/docker-compose to build and run this project locally (a bit more powerful if you want to edit the proxy code)
25 |
26 |
27 | ### Directly from docker hub
28 | Nothing easier than that.
29 | - Make sure you have docker installed
30 | - Make sure you have the latest image version by running: `docker pull camelpac/alpaca-proxy-agent`
31 | - Execute this command (it uses the free-account data stream (iex); read further to learn how to enable the sip data stream): `docker run -it -p 8765:8765 camelpac/alpaca-proxy-agent`
32 | Note: you can change the port you're listening on with `-p xxxx:8765`
33 | ### Executing a local copy
34 | - Clone the repo: `git clone https://github.com/camelpac/alpaca-proxy-agent.git`
35 | - Run this command locally: `docker-compose up`
36 | It will build the image and run the container using docker-compose.
37 | Note: if you want to execute in edit mode, run `docker-compose -f dev.yml up` instead.
38 | You can then edit `main.py` and run it locally.
39 |
40 | ## Available Arguments
41 | There are a few environment variables you can pass to your proxy-agent instance:
42 | * Data stream: sip (paid account) or iex (free account)
43 |
44 | You can select it like this:
45 | - IS_PRO: if true, use the sip data stream; if false, use the iex data stream. The default is false.
46 |
47 | So for the paid data stream, execute:
48 | ```sh
49 | docker run -p 8765:8765 -it -e IS_PRO=true camelpac/alpaca-proxy-agent
50 | ```
51 | and you should execute this for the free data stream:
52 |
53 | ```sh
54 | docker run -p 8765:8765 -it -e IS_PRO=false camelpac/alpaca-proxy-agent
55 | ```
56 |
57 | ## Security
58 | You are running a local websocket server. Make sure your IP is not publicly accessible when you do (you probably shouldn't run this on public networks).
59 | SSL between your algo and the proxy is not supported (it runs locally). Between the proxy-agent and the Alpaca servers we use WSS.
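
## Talking to the proxy directly

The wire protocol the proxy speaks is small (see `serve()` in `main.py`): msgpack-encoded dicts with an `action` key for auth and subscription requests. Below is a minimal, hypothetical client sketch; the helper function names are illustrative (not part of this repo), and it needs the same third-party `websockets` and `msgpack` packages the proxy itself uses:

```python
# Minimal sketch of a client for the proxy's msgpack protocol.
# Message shapes are taken from serve() in main.py; the helper
# names below are illustrative, not part of this repo.
import asyncio


def auth_message(key: str, secret: str) -> dict:
    # serve() expects {"action": "auth", "key": ..., "secret": ...}
    return {"action": "auth", "key": key, "secret": secret}


def subscribe_message(trades=(), quotes=(), bars=()) -> dict:
    # serve() pops "action" and treats the remaining keys as channel lists
    return {
        "action": "subscribe",
        "trades": list(trades),
        "quotes": list(quotes),
        "bars": list(bars),
    }


async def run(uri="ws://localhost:8765", key="KEY", secret="SECRET"):
    # Needs the third-party `websockets` and `msgpack` packages
    # (the same ones listed in requirements.txt).
    import msgpack
    import websockets

    async with websockets.connect(uri) as ws:
        print(msgpack.unpackb(await ws.recv()))  # "connected" ack
        await ws.send(msgpack.packb(auth_message(key, secret)))
        print(msgpack.unpackb(await ws.recv()))  # "authenticated" ack
        await ws.send(msgpack.packb(subscribe_message(trades=["AAPL"])))
        async for raw in ws:
            print(msgpack.unpackb(raw))  # forwarded market data
```

Once the container is up, call `asyncio.run(run())` with your credentials; the two `success` acks mirror what the proxy sends from `serve()`.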
60 |
--------------------------------------------------------------------------------
/asciiart.py:
--------------------------------------------------------------------------------
1 | ascii_art = r"""
2 |
3 |
4 | _ _ ___
5 | /_\ | |_ __ __ _ ___ __ _ / _ \_ __ _____ ___ _
6 | //_\\| | '_ \ / _` |/ __/ _` | / /_)/ '__/ _ \ \/ / | | |
7 | / _ \ | |_) | (_| | (_| (_| | / ___/| | | (_) > <| |_| |
8 | \_/ \_/_| .__/ \__,_|\___\__,_| \/ |_| \___/_/\_\\__, |
9 | |_| |___/
10 |
11 | _ _
12 | /_\ __ _ ___ _ __ | |_
13 | //_\\ / _` |/ _ \ '_ \| __|
14 | / _ \ (_| | __/ | | | |_
15 | \_/ \_/\__, |\___|_| |_|\__|
16 | |___/
17 |
18 |
19 | """
--------------------------------------------------------------------------------
/defs.py:
--------------------------------------------------------------------------------
1 | from enum import Enum
2 |
3 | from alpaca_trade_api.entity import quote_mapping, agg_mapping, trade_mapping
4 |
5 | QUOTE_PREFIX = "alpacadatav1/Q."
6 | TRADE_PREFIX = "alpacadatav1/T."
7 | MINUTE_AGG_PREFIX = "AM."
8 | SECOND_AGG_PREFIX = "A."
9 |
10 | reverse_quote_mapping = {v: k for k, v in quote_mapping.items()}
11 |
12 | reverse_trade_mapping = {v: k for k, v in trade_mapping.items()}
13 |
14 | reverse_minute_agg_mapping = {v: k for k, v in agg_mapping.items()}
15 |
16 |
17 | class MessageType(Enum):
18 | Quote = 1
19 | MinuteAgg = 2
20 | SecondAgg = 3 # only with polygon
21 | Trade = 4
22 |
--------------------------------------------------------------------------------
/dev.yml:
--------------------------------------------------------------------------------
1 | version: '3'
2 |
3 | services:
4 | alpaca-proxy:
5 | build:
6 | context: ./
7 | dockerfile: ./Dockerfile-dev
8 | volumes:
9 | - ./:/opt/project
10 | command: python main.py
11 | ports:
12 | - "8765:8765"
--------------------------------------------------------------------------------
/docker-compose.yml:
--------------------------------------------------------------------------------
1 | version: '3'
2 |
3 | services:
4 | alpaca-proxy:
5 | build:
6 | context: ./
7 | dockerfile: ./Dockerfile
8 | command: python main.py
9 | ports:
10 | - "8765:8765"
--------------------------------------------------------------------------------
/docs/index.md:
--------------------------------------------------------------------------------
1 | # Welcome to Alpaca Proxy Agent
2 |
3 | ```py
4 |
5 | _ _ ___ _ _
6 | /_\ | |_ __ __ _ ___ __ _ / _ \_ __ _____ ___ _ /_\ __ _ ___ _ __ | |_
7 | //_\\| | '_ \ / _` |/ __/ _` | / /_)/ '__/ _ \ \/ / | | | //_\\ / _` |/ _ \ '_ \| __|
8 | / _ \ | |_) | (_| | (_| (_| | / ___/| | | (_) > <| |_| | / _ \ (_| | __/ | | | |_
9 | \_/ \_/_| .__/ \__,_|\___\__,_| \/ |_| \___/_/\_\\__, | \_/ \_/\__, |\___|_| |_|\__|
10 | |_| |___/ |___/
11 |
12 | ```
13 |
14 | This is a project that helps users of [Alpaca](https://alpaca.markets) run more than one data websocket connection against their servers.
15 |
16 | Right now you can only open one websocket connection with your user credentials, so you can't run more than one algorithm at a time.
17 | This project helps you achieve that. Look at this illustration to get the concept:
18 |
19 | 
20 |
21 | It doesn't matter which SDK you are using (Python, JS, Go, C#); you can use this proxy with any of them.
22 |
23 | ## How to execute the docker container
24 | You have 2 options:
25 | - Using docker with the image from docker hub (easiest)
26 | - Cloning the repo and using docker/docker-compose to build and run this project locally (a bit more powerful if you want to edit the proxy code)
27 |
28 |
29 | ### Directly from docker hub
30 | Nothing easier than that.
31 | - Make sure you have docker installed
32 | - Make sure you have the latest image version by running: `docker pull shlomik/alpaca-proxy-agent`
33 | - Execute this command: `docker run -it -p 8765:8765 shlomik/alpaca-proxy-agent`
34 | Note: you can change the port you're listening on with `-p xxxx:8765`
35 | ### Executing a local copy
36 | - Clone the repo: `git clone https://github.com/shlomikushchi/alpaca-proxy-agent.git`
37 | - Run this command locally: `docker-compose up`
38 | It will build the image and run the container using docker-compose.
39 | Note: if you want to execute in edit mode, run `docker-compose -f dev.yml up` instead.
40 | You can then edit `main.py` and run it locally.
41 |
42 | ## Selecting the data stream source
43 | Alpaca supports 2 data streams:
44 | * Alpaca data stream
45 | * Polygon data stream
46 |
47 | If you are using this project, I assume you know what these are and how they differ.
48 | The default data stream is Alpaca. To select the Polygon data stream you need to set an environment variable called `USE_POLYGON` like so:
49 | >`docker run -p 8765:8765 -it -e USE_POLYGON=true shlomik/alpaca-proxy-agent`
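
If you prefer docker-compose over `docker run`, the same variable can be passed through the `environment` key. This is a sketch based on this repo's `docker-compose.yml`; the `USE_POLYGON` value shown is the one documented above:

```yaml
version: '3'

services:
  alpaca-proxy:
    build:
      context: ./
      dockerfile: ./Dockerfile
    command: python main.py
    environment:
      USE_POLYGON: "true"
    ports:
      - "8765:8765"
```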
50 |
51 | ## Security
52 | You are running a local websocket server. Make sure your IP is not publicly accessible when you do (you probably shouldn't run this on public networks).
53 | SSL between your algo and the proxy is not supported (it runs locally). Between the proxy-agent and the Alpaca servers we use WSS.
54 |
55 |
56 |
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | import logging
3 | import os
3 |
4 | import nest_asyncio
5 |
6 | nest_asyncio.apply()
7 | import asyncio
8 | from collections import defaultdict
9 |
10 | import msgpack
11 |
12 | import websockets
13 | import threading
14 | import alpaca_trade_api as tradeapi
15 | from alpaca_trade_api.common import URL
16 | from websockets import protocol
17 |
18 | from server_message_handler import on_message
19 | from shared_memory_obj import subscribers, response_queue
20 | from version import VERSION
21 | from asciiart import ascii_art
22 | from threading import Lock
23 |
24 | lock = Lock()
25 |
26 | conn: tradeapi.Stream = None
27 | _key_id = None
28 | _secret_key = None
29 | _authenticated = False
30 | _base_url = ("https://api.alpaca.markets" if os.getenv("IS_LIVE")
31 |              else "https://paper-api.alpaca.markets")
32 | _pro_subscription = 'sip' if os.getenv("IS_PRO", "false").lower() == 'true' else 'iex'
32 | CONSUMER_STARTED = False
33 |
34 |
35 | def consumer_thread(channels):
36 | try:
37 | # make sure we have an event loop, if not create a new one
38 | loop = asyncio.get_event_loop()
39 | # loop.set_debug(True)
40 | except RuntimeError:
41 | asyncio.set_event_loop(asyncio.new_event_loop())
42 |
43 | global conn
44 | if not conn:
45 | conn = tradeapi.Stream(_key_id,
46 | _secret_key,
47 | base_url=URL(_base_url),
48 | data_feed=_pro_subscription,
49 | raw_data=True)
50 | subscribe(channels)
51 | conn.run()
52 |
53 |
54 | def subscribe(channels):
55 | logging.info(f"Subscribing to: {channels}")
56 | conn.subscribe_trades(on_message, *channels['trades'])
57 | conn.subscribe_quotes(on_message, *channels['quotes'])
58 | conn.subscribe_bars(on_message, *channels['bars'])
59 | conn.subscribe_statuses(on_message, *channels['statuses'])
60 | conn.subscribe_daily_bars(on_message, *channels['dailyBars'])
61 |
62 |
63 | def unsubscribe(channels):
64 | logging.info(f"Unsubscribing from: {channels}")
65 | try:
66 | conn.unsubscribe_trades(*channels['trades'])
67 | conn.unsubscribe_quotes(*channels['quotes'])
68 | conn.unsubscribe_bars(*channels['bars'])
69 | conn.unsubscribe_statuses(*channels['statuses'])
70 | conn.unsubscribe_daily_bars(*channels['dailyBars'])
71 | except Exception as e:
72 | logging.warning(f"error unsubscribing from {channels}. {e}")
73 |
74 |
75 | def get_current_channels():
76 | result = defaultdict(set)
77 | for sub, chans in subscribers.items():
78 | if chans:
79 | for _type in chans:
80 | result[_type].update(set(chans[_type]))
81 | return result
82 |
83 |
84 | def clear_dead_subscribers():
85 | # copy to be able to remove closed connections
86 | subs = dict(subscribers.items())
87 | for sub, chans in subs.items():
88 | if sub.state == protocol.State.CLOSED:
89 | del subscribers[sub]
90 |
91 |
92 | async def serve(sub, path):
93 | connected = [{"T": "success", "msg": "connected"}]
94 | await sub.send(msgpack.packb(connected, use_bin_type=True))
95 | global conn, _key_id, _secret_key
96 | global CONSUMER_STARTED
97 | try:
98 | async for msg in sub:
99 | # msg = await sub.recv()
100 | try:
101 | data = msgpack.unpackb(msg)
102 | # print(f"< {data}")
103 | except Exception as e:
104 |                 print(e)
105 |                 continue  # skip messages that could not be decoded
106 |
106 | if sub not in subscribers.keys():
107 | if data.get("action"):
108 | if data.get("action") == "auth":
109 | if not _key_id:
110 | _key_id = data.get("key")
111 | _secret_key = data.get("secret")
112 | # not really authorized yet.
113 | # but sending because it's expected
114 | authenticated = [{"T": "success", "msg": "authenticated"}]
115 | await sub.send(msgpack.packb(authenticated,
116 | use_bin_type=True))
117 | subscribers[sub] = defaultdict(list)
118 |
119 | else:
120 | new_channels = {}
121 | if data.get("action") == "subscribe":
122 | data.pop("action")
123 | new_channels = data
124 | else:
125 |                         raise Exception(f"unexpected action: {data.get('action')}")
126 |
127 | previous_channels = get_current_channels()
128 | if previous_channels:
129 | # it is easier to first unsubscribe from previous channels
130 | # and then subscribe again. this way we make sure we clean
131 | # dead connections.
132 | unsubscribe(previous_channels)
133 | clear_dead_subscribers()
134 | for _type in new_channels:
135 | subscribers[sub][_type].extend(new_channels[_type])
136 | subscribers[sub][_type] = list(
137 | set(subscribers[sub][_type]))
138 | with lock:
139 | if not CONSUMER_STARTED:
140 | CONSUMER_STARTED = True
141 | threading.Thread(target=consumer_thread,
142 | args=(new_channels,)).start()
143 | else:
144 | channels = get_current_channels()
145 | subscribe(channels)
146 | except Exception as e:
147 | # traceback.print_exc()
148 | print(e)
149 | # we clean up subscriptions upon disconnection and subscribe again
150 | # for still active clients
151 | current = get_current_channels()
152 | clear_dead_subscribers()
153 | unsubscribe(current)
154 | current = get_current_channels()
155 | if current:
156 | subscribe(current)
157 |
158 | print("Done")
159 |
160 |
161 | async def send_response_to_client():
162 | """
163 | The messages sent back to the clients should be sent from the same thread
164 | that accepted the connection. it a websocket issue.
165 | messages from the server are received via a different thread and passed to
166 | this thread(the main thread) using a queue. then this thread( the main
167 | thread) is passing the messages to the clients.
168 | :return:
169 | """
170 |     while True:
171 | try:
172 | if response_queue.empty():
173 | await asyncio.sleep(0.05)
174 | continue
175 | response = response_queue.get()
176 | # print(f"send {response['response']}")
177 | await response["subscriber"].send(
178 | msgpack.packb([response["response"]]))
179 |         except Exception as e:
180 |             logging.warning(f"failed sending a response to a client: {e}")
181 |
182 |
183 | if __name__ == '__main__':
184 | import logging
185 |
186 | logging.basicConfig(format='%(asctime)s %(name)s %(message)s',
187 | level=logging.INFO)
188 | logging.getLogger("asyncio").setLevel(logging.WARNING)
189 | logging.getLogger('websockets').setLevel(logging.INFO)
190 |
191 | #
192 | print(ascii_art)
193 | logging.info(f"Alpaca Proxy Agent v{VERSION}")
194 | logging.info("Using the Alpaca Websocket")
195 | if os.getenv("IS_LIVE"):
196 | logging.info("Connecting to the real account endpoint")
197 | else:
198 | logging.info("Connecting to the paper account endpoint")
199 | if _pro_subscription == 'sip':
200 | logging.info("Using the pro-subscription plan(sip)")
201 | else:
202 | logging.info("Using the free subscription plan(iex)")
203 | start_server = websockets.serve(serve, "0.0.0.0", 8765)
204 |
205 | asyncio.get_event_loop().run_until_complete(asyncio.gather(
206 | start_server,
207 | send_response_to_client(),
208 | return_exceptions=True,
209 | ))
210 | asyncio.get_event_loop().run_forever()
211 |
--------------------------------------------------------------------------------
/mkdocs.yml:
--------------------------------------------------------------------------------
1 | site_name: Alpaca Proxy Agent
2 | nav:
3 | - Home: index.md
--------------------------------------------------------------------------------
/readthedocs/Makefile:
--------------------------------------------------------------------------------
1 | # Minimal makefile for Sphinx documentation
2 | #
3 |
4 | # You can set these variables from the command line, and also
5 | # from the environment for the first two.
6 | SPHINXOPTS ?=
7 | SPHINXBUILD ?= sphinx-build
8 | SOURCEDIR = ../
9 | BUILDDIR = _build
10 |
11 | # Put it first so that "make" without argument is like "make help".
12 | help:
13 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
14 |
15 | .PHONY: help Makefile
16 |
17 | # Catch-all target: route all unknown targets to Sphinx using the new
18 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
19 | %: Makefile
20 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
21 |
--------------------------------------------------------------------------------
/readthedocs/conf.py:
--------------------------------------------------------------------------------
1 | # Configuration file for the Sphinx documentation builder.
2 | #
3 | # This file only contains a selection of the most common options. For a full
4 | # list see the documentation:
5 | # https://www.sphinx-doc.org/en/master/usage/configuration.html
6 |
7 | # -- Path setup --------------------------------------------------------------
8 |
9 | # If extensions (or modules to document with autodoc) are in another directory,
10 | # add these directories to sys.path here. If the directory is relative to the
11 | # documentation root, use os.path.abspath to make it absolute, like shown here.
12 | #
13 | # import os
14 | # import sys
15 | # sys.path.insert(0, os.path.abspath('.'))
16 |
17 |
18 | # -- Project information -----------------------------------------------------
19 |
20 | project = 'Alpaca Proxy Agent'
21 | copyright = '2020, Shlomi Kushchi'
22 | author = 'Shlomi Kushchi'
23 |
24 | # The full version, including alpha/beta/rc tags
25 | release = '0.5.5'
26 |
27 |
28 | # -- General configuration ---------------------------------------------------
29 |
30 | # Add any Sphinx extension module names here, as strings. They can be
31 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
32 | # ones.
33 | extensions = [
34 | ]
35 |
36 | # Add any paths that contain templates here, relative to this directory.
37 | templates_path = ['_templates']
38 |
39 | # List of patterns, relative to source directory, that match files and
40 | # directories to ignore when looking for source files.
41 | # This pattern also affects html_static_path and html_extra_path.
42 | exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
43 |
44 |
45 | # -- Options for HTML output -------------------------------------------------
46 |
47 | # The theme to use for HTML and HTML Help pages. See the documentation for
48 | # a list of builtin themes.
49 | #
50 | html_theme = 'sphinx_rtd_theme'
51 |
52 | # Add any paths that contain custom static files (such as style sheets) here,
53 | # relative to this directory. They are copied after the builtin static files,
54 | # so a file named "default.css" will overwrite the builtin "default.css".
55 | html_static_path = ['_static']
--------------------------------------------------------------------------------
/readthedocs/index.rst:
--------------------------------------------------------------------------------
1 | .. Alpaca Proxy Agent documentation master file, created by
2 | sphinx-quickstart on Tue Nov 10 14:36:10 2020.
3 | You can adapt this file completely to your liking, but it should at least
4 | contain the root `toctree` directive.
5 |
6 | Welcome to Alpaca Proxy Agent's documentation!
7 | ==============================================
8 |
9 | .. toctree::
10 | :maxdepth: 2
11 | :caption: Contents:
12 |
13 |
14 |
15 | Indices and tables
16 | ==================
17 |
18 | * :ref:`genindex`
19 | * :ref:`modindex`
20 | * :ref:`search`
21 |
22 |
23 | This is a project that helps users of Alpaca run more than one data websocket connection against their servers.
24 |
25 | Right now you can only open one websocket connection with your user credentials, so you can't run more than one algorithm at a time.
26 | This project helps you achieve that. Look at this illustration to get the concept.
--------------------------------------------------------------------------------
/readthedocs/make.bat:
--------------------------------------------------------------------------------
1 | @ECHO OFF
2 |
3 | pushd %~dp0
4 |
5 | REM Command file for Sphinx documentation
6 |
7 | if "%SPHINXBUILD%" == "" (
8 | set SPHINXBUILD=sphinx-build
9 | )
10 | set SOURCEDIR=.
11 | set BUILDDIR=_build
12 |
13 | if "%1" == "" goto help
14 |
15 | %SPHINXBUILD% >NUL 2>NUL
16 | if errorlevel 9009 (
17 | echo.
18 | echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
19 | echo.installed, then set the SPHINXBUILD environment variable to point
20 | echo.to the full path of the 'sphinx-build' executable. Alternatively you
21 | echo.may add the Sphinx directory to PATH.
22 | echo.
23 | echo.If you don't have Sphinx installed, grab it from
24 | echo.http://sphinx-doc.org/
25 | exit /b 1
26 | )
27 |
28 | %SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
29 | goto end
30 |
31 | :help
32 | %SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
33 |
34 | :end
35 | popd
36 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | aiohttp==3.7.4
2 | requests==2.26.0
3 | alpaca-trade-api==1.2.3
4 | asyncio-nats-client==0.10.0
5 | msgpack==1.0.2
6 | numpy==1.19.0
7 | pandas==1.0.5
8 | python-dateutil==2.8.1
9 | pytz==2020.1
10 | urllib3==1.26.5
11 | websocket-client==0.57.0
12 | websockets==9.1
13 | nest_asyncio==1.5.1
--------------------------------------------------------------------------------
/resources/concept.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/camelpac/alpaca-proxy-agent/a44bf7e574c6138fefb969bb9f04440e52fd1e91/resources/concept.png
--------------------------------------------------------------------------------
/server_message_handler.py:
--------------------------------------------------------------------------------
1 | import traceback
2 |
3 | from shared_memory_obj import subscribers, response_queue
4 |
5 |
6 | async def on_message(msg):
7 | """
8 |     This is the handler for server messages. We iterate over the subscribers
9 |     and send each message through their open websockets.
10 |     Messages arrive with msgpack short type codes, but subscriptions use
11 |     long names (e.g. 't' vs 'trades'), which is how we track them, so we
12 |     translate the type first and then dispatch the message.
13 | """
14 | msg_type = msg.get('T')
15 | symbol = msg.get('S')
16 | if msg_type == 't':
17 | _type = 'trades'
18 | elif msg_type == 'q':
19 | _type = 'quotes'
20 | elif msg_type == 'b':
21 | _type = 'bars'
22 | elif msg_type == 'd':
23 | _type = 'dailyBars'
24 | elif msg_type == 's':
25 | _type = 'statuses'
26 | else:
27 | return
28 | try:
29 | for sub, channels in subscribers.items():
30 | if symbol in channels[_type]:
31 | response_queue.put({"subscriber": sub,
32 | "response": msg})
33 | except Exception as e:
34 | print(e)
35 | traceback.print_exc()
36 |
--------------------------------------------------------------------------------
/shared_memory_obj.py:
--------------------------------------------------------------------------------
1 | import queue
2 | from collections import defaultdict
3 |
4 | subscribers = defaultdict(list)
5 | q_mapping = {}
6 | register_queue = queue.Queue()
7 | response_queue = queue.Queue()
8 |
--------------------------------------------------------------------------------
/version.py:
--------------------------------------------------------------------------------
1 | VERSION = "1.0.1"
--------------------------------------------------------------------------------