├── LICENSE ├── README.md ├── config.py ├── log └── logfile.log ├── mining_libs ├── __init__.py ├── client_service.py ├── jobs.py ├── multicast_responder.py ├── stratum_listener.py └── version.py ├── stratum ├── __init__.py ├── config_default.py ├── connection_registry.py ├── custom_exceptions.py ├── event_handler.py ├── example_service.py ├── helpers.py ├── http_transport.py ├── irc.py ├── jsonical.py ├── logger.py ├── protocol.py ├── pubsub.py ├── semaphore.py ├── server.py ├── services.py ├── settings.py ├── signature.py ├── socket_transport.py ├── socksclient.py ├── stats.py ├── storage.py ├── version.py └── websocket_transport.py └── xmr-proxy.py /LICENSE: -------------------------------------------------------------------------------- 1 | Monero stratum proxy - for monero-pools using stratum protocol RPCv2 2 | 3 | Copyright (C) 2014 Atrides 4 | 5 | # Stratum proxy 6 | 7 | Copyright (C) slush0 8 | https://github.com/slush0/stratum-mining-proxy 9 | 10 | # Stratum protocol 11 | https://github.com/slush0/stratum 12 | 13 | Copyright (C) 2012 Marek Palatinus 14 | 15 | This program is free software: you can redistribute it and/or modify 16 | it under the terms of the GNU Affero General Public License as 17 | published by the Free Software Foundation, either version 3 of the 18 | License, or any later version. 19 | 20 | This program is distributed in the hope that it will be useful, 21 | but WITHOUT ANY WARRANTY; without even the implied warranty of 22 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 23 | GNU Affero General Public License for more details. 24 | 25 | You should have received a copy of the GNU Affero General Public License 26 | along with this program. If not, see . 27 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # XMR Proxy 2 | 3 | 4 | This is Stratum Proxy for Monero-pools (RPCv2) using asynchronous networking written in Python Twisted. 5 | 6 | **NOTE:** This fork is still in development. Some features may be broken. Please report any broken features or issues. 7 | 8 | 9 | ## Features 10 | 11 | * XMR stratum proxy 12 | * Central Wallet configuration, miners doesn't need wallet as username 13 | * Support mining to exchange 14 | * Support monitoring via email 15 | * Bypass worker_id for detailed statistic and per rig monitoring 16 | * Only one connection to the pool 17 | * Individually Vardiff for workers. 18 | 19 | ## Installation and Configuration 20 | 21 | ### 1. Get pre-reqs and clone repository 22 | 23 | ``` 24 | pip install python-twisted 25 | git clone 26 | cd xmr-proxy 27 | ``` 28 | 29 | ### 2. Configure Settings 30 | 31 | Edit the ```config.py``` file. Modify: 32 | 33 | * ```WALLET``` Enter your wallet address. This is where your monero will be stored when you mine. 34 | * ```MONITORING_EMAIL``` Enter your email if you wish to monitor your server. 35 | * ```POOL_HOST``` Change to a pool you wish to join. 36 | 37 | ### 3. Start the Proxy 38 | 39 | ``` 40 | ./xmr-proxy.py 41 | ``` 42 | This will start the proxy. 43 | 44 | 45 | ### 4. Start miner 46 | 47 | The proxy by default opens port ```8080``` to proxy tcp connections to https connections using twisted. 48 | 49 | You can use several different miners. 
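If you want to check that the proxy is answering before you point real miners at it, a short Python snippet can send a raw stratum (JSON-RPC 2.0) ```login``` call. This is only an illustrative sketch: it assumes the default ```STRATUM_PORT``` (8080) on localhost, that the proxy has already reached its upstream pool, and the worker id ```123456``` is just an example value.

```
#!/usr/bin/env python
# Minimal smoke test for the proxy's line-delimited JSON-RPC interface.
import json
import socket

sock = socket.create_connection(("127.0.0.1", 8080), timeout=10)
try:
    # 'login' is the first call a miner makes; the proxy answers with the first job.
    request = {"id": 1, "jsonrpc": "2.0", "method": "login",
               "params": {"login": "123456", "pass": "1"}}
    sock.sendall((json.dumps(request) + "\n").encode("ascii"))
    print(json.loads(sock.makefile().readline()))
finally:
    sock.close()
```
Any stratum-compatible miner can then be pointed at the same host and port.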
For example: 50 | 51 | ``` 52 | ./minerd -a cryptonight -o stratum+tcp://127.0.0.1:8080 -u 123456 -p 1 53 | ``` 54 | Or you can use [cpuminer-easy](https://github.com/luisvasquez/cpuminer-easy) 55 | 56 | ``` 57 | ./cpuminer -a cryptonight -o stratum+tcp://127.0.0.1:8080 -u 47kgvGgng2ZGjc7Ey3Vk9J3NTN2hEavkEixeUmgTh8NDJ1FQBCxXPM6Yi5VPmWf5WeTR712voQUvh6qwNUnrZJr9B7v4X66 -p myemail.com 58 | ``` 59 | This will then forward traffic through the proxy and allow you to start mining. 60 | 61 | ## Donations 62 | 63 | * XMR: ```466KoUjvbFE2SduDyiZQUb5QviKo6qnbyDGDB46C6UcTDi5XmVtSXuRYJDmgd6mhYPU92xJHsTQyrSjLbsxdzKQc3Z1PZQM``` 64 | 65 | ## Requirements 66 | 67 | xmr-proxy is built in Python. I have been testing it with 2.7.3, but it should work with other Python 2.7 releases. The requirements for running the software are below. 68 | 69 | * Python 2.7+ 70 | * python-twisted 71 | * Pool with support for this proxy 72 | 73 | 74 | ## Troubleshooting 75 | 76 | If you lose connections to your proxy and have a lot of users, check the limits of your system in ```/etc/security/limits.conf``` 77 | 78 | The best way to increase the limit of open files: 79 | 80 | ``` 81 | <username> hard nofile 1048576 82 | <username> soft nofile 1048576 83 | ``` 84 | Where ```<username>``` is your user name, e.g. ```ubuntu```. 85 | 86 | ## TODO 87 | 88 | * Automatic failover via proxy, also for non-supported miners (ccminer) 89 | 90 | ## Contact 91 | 92 | * I am available via admin@dwarfpool.com 93 | 94 | 95 | ## Credits 96 | 97 | * Original version by Slush0 (original stratum code) 98 | * More features added by GeneralFault, Wadee Womersley and Moopless 99 | 100 | ## License 101 | 102 | * This software is provided AS-IS without any warranties of any kind. Please use at your own risk. 103 | -------------------------------------------------------------------------------- /config.py: -------------------------------------------------------------------------------- 1 | 2 | # Ports for your workers 3 | STRATUM_HOST = "0.0.0.0" 4 | STRATUM_PORT = 8080 5 | 6 | # Coin address where the money goes. If you mine directly to an exchange, you MUST specify PAYMENT_ID together with the exchange's wallet. 7 | WALLET = '466KoUjvbFE2SduDyiZQUb5QviKo6qnbyDGDB46C6UcTDi5XmVtSXuRYJDmgd6mhYPU92xJHsTQyrSjLbsxdzKQc3Z1PZQM' 8 | # Only if you mine directly to an exchange 9 | PAYMENT_ID = '' 10 | 11 | # It's useful for individual monitoring and statistics. 12 | # In your workers you have to use any number as the username (without the wallet!) 13 | ENABLE_WORKER_ID = True 14 | WORKER_ID_FROM_IP = False 15 | 16 | # On DwarfPool you have the option to monitor your workers via email. 17 | # If WORKER_ID is enabled, you can monitor every worker/rig separately.
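#
# Illustrative note on how those worker ids are formed (based on the login
# handling in mining_libs/stratum_listener.py, these are not extra settings):
# a worker connecting with "-u 123456" is logged in upstream roughly as
# <WALLET>.123456, and with WORKER_ID_FROM_IP = True a miner at e.g. 10.0.0.5
# gets the numeric worker id 167772165, i.e. its IPv4 address packed into one
# integer (10*2**24 + 0*2**16 + 0*2**8 + 5).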
18 | MONITORING = True 19 | MONITORING_EMAIL = 'mail@example.com' 20 | 21 | # Main pool 22 | POOL_HOST = 'xmr-us.dwarfpool.com' 23 | POOL_PORT = 8050 24 | 25 | # Failover pool 26 | POOL_FAILOVER_ENABLE = False 27 | POOL_HOST_FAILOVER = 'xmr-eu.dwarfpool.com' 28 | POOL_PORT_FAILOVER = 8050 29 | 30 | # ERROR, INFO, DEBUG 31 | LOGLEVEL = 'DEBUG' 32 | DEBUG = True 33 | LOGFILE = "logfile.log" 34 | -------------------------------------------------------------------------------- /log/logfile.log: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Atrides/xmr-proxy/7440191a8d1b66061d0ff38b10a1b684ef59340a/log/logfile.log -------------------------------------------------------------------------------- /mining_libs/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Atrides/xmr-proxy/7440191a8d1b66061d0ff38b10a1b684ef59340a/mining_libs/__init__.py -------------------------------------------------------------------------------- /mining_libs/client_service.py: -------------------------------------------------------------------------------- 1 | from twisted.internet import reactor 2 | 3 | from stratum.event_handler import GenericEventHandler 4 | from jobs import Job 5 | import version as _version 6 | 7 | import stratum_listener 8 | 9 | import stratum.logger 10 | log = stratum.logger.get_logger('proxy') 11 | 12 | class ClientMiningService(GenericEventHandler): 13 | job_registry = None # Reference to JobRegistry instance 14 | timeout = None # Reference to IReactorTime object 15 | 16 | @classmethod 17 | def reset_timeout(cls): 18 | if cls.timeout != None: 19 | if not cls.timeout.called: 20 | cls.timeout.cancel() 21 | cls.timeout = None 22 | 23 | cls.timeout = reactor.callLater(960, cls.on_timeout) 24 | 25 | @classmethod 26 | def on_timeout(cls): 27 | ''' 28 | Try to reconnect to the pool after 16 minutes of no activity on the connection. 29 | It will also drop all Stratum connections to sub-miners 30 | to indicate connection issues. 31 | ''' 32 | log.error("Connection to upstream pool timed out") 33 | cls.reset_timeout() 34 | cls.job_registry.f.reconnect() 35 | 36 | def handle_event(self, method, params, connection_ref): 37 | '''Handle RPC calls and notifications from the pool''' 38 | # Yay, we received something from the pool, 39 | # let's restart the timeout. 
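        #
        # For reference, the params of a 'job' notification carry exactly the
        # fields unpacked below; roughly (values are illustrative):
        #   {"blob": "<hex hashing blob>", "job_id": "...", "target": "<hex target>",
        #    "id": "<pool session/user id>", "height": 2000000, "seed_hash": "<hex seed>"}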
40 | self.reset_timeout() 41 | 42 | if method == 'job': 43 | '''Proxy just received information about new mining job''' 44 | 45 | (blob, job_id, target, user_id, height, seed_hash) = params["blob"],params["job_id"],params["target"],params["id"],params["height"],params["seed_hash"] 46 | 47 | # Broadcast to Stratum client 48 | stratum_listener.MiningSubscription.on_template(job_id, blob, target, user_id, height, seed_hash) 49 | 50 | # Broadcast to getwork clients 51 | job = Job.build_from_pool(job_id, blob, target, height, seed_hash) 52 | log.info("New job %s for %s on height %s" % (job_id, user_id, height)) 53 | 54 | self.job_registry.add_job(job, True) 55 | 56 | else: 57 | '''Pool just asked us for something which we don't support...''' 58 | log.error("Unhandled method %s with params %s" % (method, params)) 59 | 60 | -------------------------------------------------------------------------------- /mining_libs/jobs.py: -------------------------------------------------------------------------------- 1 | from twisted.internet import defer 2 | 3 | import stratum.logger 4 | log = stratum.logger.get_logger('proxy') 5 | 6 | class Job(object): 7 | def __init__(self): 8 | self.job_id = '' 9 | self.blob = '' 10 | self.target = '' 11 | self.height = '' 12 | self.seed_hash = '' 13 | 14 | @classmethod 15 | def build_from_pool(cls, job_id, blob, target, height, seed_hash): 16 | '''Build job object from Stratum server broadcast''' 17 | job = Job() 18 | job.job_id = job_id 19 | job.blob = blob 20 | job.target = target 21 | job.height = height 22 | job.seed_hash = seed_hash 23 | return job 24 | 25 | class JobRegistry(object): 26 | def __init__(self, f): 27 | self.f = f 28 | self.jobs = [] 29 | # Hook for LP broadcasts 30 | self.on_block = defer.Deferred() 31 | 32 | def add_job(self, template, clean_jobs): #????clean 33 | if clean_jobs: 34 | # Pool asked us to stop submitting shares from previous jobs 35 | self.jobs = [] 36 | 37 | self.jobs.append(template) 38 | 39 | if clean_jobs: 40 | # Force miners to reload jobs 41 | on_block = self.on_block 42 | self.on_block = defer.Deferred() 43 | on_block.callback(True) 44 | -------------------------------------------------------------------------------- /mining_libs/multicast_responder.py: -------------------------------------------------------------------------------- 1 | import json 2 | from twisted.internet.protocol import DatagramProtocol 3 | 4 | import stratum.logger 5 | log = stratum.logger.get_logger('proxy') 6 | 7 | class MulticastResponder(DatagramProtocol): 8 | def __init__(self, pool_host, stratum_port): 9 | # Upstream Stratum host/port 10 | # Used for identifying the pool which we're connected to. 
11 | # Some load balancing strategies can change the host/port 12 | # during the mining session (by mining.reconnect()), but this points 13 | # to initial host/port provided by user on cmdline or by X-Stratum 14 | self.pool_host = pool_host 15 | 16 | self.stratum_port = stratum_port 17 | 18 | def startProtocol(self): 19 | # 239.0.0.0/8 are for private use within an organization 20 | self.transport.joinGroup("239.3.3.3") 21 | self.transport.setTTL(5) 22 | 23 | def writeResponse(self, address, msg_id, result, error=None): 24 | self.transport.write(json.dumps({"id": msg_id, "result": result, "error": error}), address) 25 | 26 | def datagramReceived(self, datagram, address): 27 | log.info("Received local discovery request from %s:%d" % address) 28 | 29 | try: 30 | data = json.loads(datagram) 31 | except: 32 | # Skip response if datagram is not parsable 33 | log.error("Unparsable datagram") 34 | return 35 | 36 | msg_id = data.get('id') 37 | msg_method = data.get('method') 38 | #msg_params = data.get('params') 39 | 40 | if msg_method == 'mining.get_upstream': 41 | self.writeResponse(address, msg_id, (self.pool_host, self.stratum_port)) -------------------------------------------------------------------------------- /mining_libs/stratum_listener.py: -------------------------------------------------------------------------------- 1 | import time 2 | import binascii 3 | import struct 4 | import re 5 | 6 | from twisted.internet import defer 7 | 8 | from stratum.services import GenericService 9 | from stratum.pubsub import Pubsub, Subscription 10 | from stratum.custom_exceptions import ServiceException, RemoteServiceException 11 | from stratum.socket_transport import SocketTransportFactory, SocketTransportClientFactory 12 | from mining_libs import client_service 13 | from mining_libs.jobs import Job 14 | 15 | import stratum.logger 16 | log = stratum.logger.get_logger('proxy') 17 | 18 | def var_int(i): 19 | if i <= 0xff: 20 | return struct.pack('>B', i) 21 | elif i <= 0xffff: 22 | return struct.pack('>H', i) 23 | raise Exception("number is too big") 24 | 25 | class UpstreamServiceException(ServiceException): 26 | code = -2 27 | 28 | class SubmitException(ServiceException): 29 | code = -2 30 | 31 | class MiningSubscription(Subscription): 32 | '''This subscription object implements 33 | logic for broadcasting new jobs to the clients.''' 34 | 35 | event = 'job' 36 | subscribers = {} 37 | 38 | @classmethod 39 | def disconnect_all(cls): 40 | StratumProxyService.registered_tails = [] 41 | for subs in cls.subscribers: 42 | try: 43 | cls.subscribers[subs].connection_ref().transport.abortConnection() 44 | except Exception: 45 | pass 46 | #for subs in Pubsub.iterate_subscribers(cls.event): 47 | # if subs.connection_ref().transport != None: 48 | # subs.connection_ref().transport.loseConnection() 49 | 50 | @classmethod 51 | def add_user_id(cls, subsc, user_id): 52 | cls.subscribers[user_id] = subsc 53 | 54 | @classmethod 55 | def on_template(cls, job_id, blob, target, user_id, height, seed_hash): 56 | '''Push new job to subscribed clients''' 57 | #cls.last_broadcast = (job_id, blob, target) 58 | #if user_id: 59 | # cls.user_id = user_id 60 | if cls.subscribers.has_key(user_id): 61 | subscr = cls.subscribers[user_id] 62 | subscr.emit_single({'job_id':job_id, 'blob':blob, 'target':target, 'height':height, 'seed_hash':seed_hash}) 63 | 64 | def _finish_after_subscribe(self, result): 65 | '''Send new job to newly subscribed client''' 66 | #try: 67 | # (job_id, blob, target) = self.last_broadcast 68 | #except 
Exception: 69 | # log.error("Template not ready yet") 70 | # return result 71 | 72 | #self.emit_single({'job_id':job_id, 'blob':blob, 'target':target}) 73 | return result 74 | 75 | def after_subscribe(self, *args): 76 | '''This will send new job to the client *after* he receive subscription details. 77 | on_finish callback solve the issue that job is broadcasted *during* 78 | the subscription request and client receive messages in wrong order.''' 79 | #self.add_user_id(self, user_id) 80 | self.connection_ref().on_finish.addCallback(self._finish_after_subscribe) 81 | 82 | class StratumProxyService(GenericService): 83 | service_type = 'mining' 84 | service_vendor = 'mining_proxy' 85 | is_default = True 86 | 87 | _f = None # Factory of upstream Stratum connection 88 | custom_user = None 89 | custom_password = None 90 | enable_worker_id = False 91 | worker_id_from_ip = False 92 | tail_iterator = 0 93 | registered_tails = [] 94 | 95 | @classmethod 96 | def _set_upstream_factory(cls, f): 97 | cls._f = f 98 | 99 | @classmethod 100 | def _set_custom_user(cls, custom_user, custom_password, enable_worker_id, worker_id_from_ip): 101 | cls.custom_user = custom_user 102 | cls.custom_password = custom_password 103 | cls.enable_worker_id = enable_worker_id 104 | cls.worker_id_from_ip = worker_id_from_ip 105 | 106 | @classmethod 107 | def _is_in_tail(cls, tail): 108 | if tail in cls.registered_tails: 109 | return True 110 | return False 111 | 112 | @classmethod 113 | def _get_unused_tail(cls): 114 | '''Currently adds up to two bytes to extranonce1, 115 | limiting proxy for up to 65535 connected clients.''' 116 | 117 | for _ in range(0, 0xffff): # 0-65535 118 | cls.tail_iterator += 1 119 | cls.tail_iterator %= 0xffff 120 | 121 | # Zero extranonce is reserved for getwork connections 122 | if cls.tail_iterator == 0: 123 | cls.tail_iterator += 1 124 | 125 | # var_int throws an exception when input is >= 0xffff 126 | tail = var_int(cls.tail_iterator) 127 | 128 | if tail not in cls.registered_tails: 129 | cls.registered_tails.append(tail) 130 | return binascii.hexlify(tail) 131 | 132 | raise Exception("Extranonce slots are full, please disconnect some miners!") 133 | 134 | def _drop_tail(self, result, tail): 135 | tail = binascii.unhexlify(tail) 136 | if tail in self.registered_tails: 137 | self.registered_tails.remove(tail) 138 | else: 139 | log.error("Given extranonce is not registered1") 140 | return result 141 | 142 | @defer.inlineCallbacks 143 | def login(self, params, *args): 144 | if self._f.client == None or not self._f.client.connected: 145 | yield self._f.on_connect 146 | 147 | if self._f.client == None or not self._f.client.connected: 148 | raise UpstreamServiceException("Upstream not connected") 149 | 150 | tail = self._get_unused_tail() 151 | 152 | session = self.connection_ref().get_session() 153 | session['tail'] = tail 154 | 155 | custom_user = self.custom_user 156 | if self.enable_worker_id and params.has_key("login"): 157 | if self.worker_id_from_ip: 158 | ip_login = self.connection_ref()._get_ip() 159 | ip_temp = ip_login.split('.') 160 | ip_int = int(ip_temp[0])*16777216 + int(ip_temp[1])*65536 + int(ip_temp[2])*256 + int(ip_temp[3]) 161 | custom_user = "%s.%s" % (custom_user, ip_int) 162 | else: 163 | params_login = re.sub(r'[^\d]', '', params["login"]) 164 | if params_login and int(params_login)>0: 165 | custom_user = "%s.%s" % (custom_user, params_login) 166 | 167 | first_job = (yield self._f.rpc('login', {"login":custom_user, "pass":self.custom_password})) 168 | 169 | try: 170 | 
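            # Release this miner's extranonce tail when it disconnects
            # (see _drop_tail above); failing to register the callback is non-fatal.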
self.connection_ref().on_disconnect.addCallback(self._drop_tail, tail) 171 | except Exception: 172 | pass 173 | subs = Pubsub.subscribe(self.connection_ref(), MiningSubscription())[0] 174 | 175 | MiningSubscription.add_user_id(subs[2], first_job['id']) 176 | 177 | defer.returnValue(first_job) 178 | 179 | @defer.inlineCallbacks 180 | def submit(self, params, *args): 181 | if self._f.client == None or not self._f.client.connected: 182 | self.connection_ref().transport.abortConnection() 183 | raise SubmitException("Upstream not connected") 184 | 185 | session = self.connection_ref().get_session() 186 | tail = session.get('tail') 187 | if tail == None: 188 | raise SubmitException("Connection is not subscribed") 189 | 190 | ip = self.connection_ref()._get_ip() 191 | start = time.time() 192 | 193 | try: 194 | result = (yield self._f.rpc('submit', params)) 195 | except RemoteServiceException as exc: 196 | response_time = (time.time() - start) * 1000 197 | log.info("[%dms] Share from '%s' REJECTED: %s" % (response_time, ip, str(exc))) 198 | raise SubmitException(*exc.args) 199 | 200 | response_time = (time.time() - start) * 1000 201 | log.info("[%dms] Share from '%s' accepted" % (response_time, ip)) 202 | defer.returnValue(result) 203 | 204 | @defer.inlineCallbacks 205 | def get_job(self, params, *args): 206 | if self._f.client == None or not self._f.client.connected: 207 | raise SubmitException("Upstream not connected") 208 | 209 | session = self.connection_ref().get_session() 210 | tail = session.get('tail') 211 | if tail == None: 212 | raise SubmitException("Connection is not subscribed") 213 | 214 | ip = self.connection_ref()._get_ip() 215 | start = time.time() 216 | 217 | try: 218 | result = (yield self._f.rpc('get_job', params)) 219 | except RemoteServiceException as exc: 220 | response_time = (time.time() - start) * 1000 221 | log.info("[%dms] GetJob to '%s' ERROR: %s" % (response_time, ip, str(exc))) 222 | raise SubmitException(*exc.args) 223 | 224 | response_time = (time.time() - start) * 1000 225 | log.info("[%dms] send GetJob to '%s'" % (response_time, ip)) 226 | defer.returnValue(result) 227 | 228 | @defer.inlineCallbacks 229 | def keepalived(self, params, *args): 230 | if self._f.client == None or not self._f.client.connected: 231 | raise SubmitException("Upstream not connected") 232 | 233 | session = self.connection_ref().get_session() 234 | tail = session.get('tail') 235 | if tail == None: 236 | raise SubmitException("Connection is not subscribed") 237 | -------------------------------------------------------------------------------- /mining_libs/version.py: -------------------------------------------------------------------------------- 1 | VERSION='3.0.0' 2 | -------------------------------------------------------------------------------- /stratum/__init__.py: -------------------------------------------------------------------------------- 1 | from server import setup 2 | -------------------------------------------------------------------------------- /stratum/config_default.py: -------------------------------------------------------------------------------- 1 | ''' 2 | This is example configuration for Stratum server. 3 | Please rename it to config.py and fill correct values. 4 | ''' 5 | 6 | # ******************** GENERAL SETTINGS *************** 7 | 8 | # Enable some verbose debug (logging requests and responses). 9 | DEBUG = True 10 | 11 | # Destination for application logs, files rotated once per day. 12 | LOGDIR = 'log/' 13 | 14 | # Main application log file. 
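# (Opened relative to LOGDIR above, see stratum/logger.py; set it to a filename
# such as 'stratum.log' to log to a file in addition to the console, or keep
# None for console-only logging.)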
15 | LOGFILE = None #'stratum.log' 16 | 17 | # Possible values: DEBUG, INFO, WARNING, ERROR, CRITICAL 18 | LOGLEVEL = 'DEBUG' 19 | 20 | # How many threads use for synchronous methods (services). 21 | # 30 is enough for small installation, for real usage 22 | # it should be slightly more, say 100-300. 23 | THREAD_POOL_SIZE = 30 24 | 25 | # RPC call throws TimeoutServiceException once total time since request has been 26 | # placed (time to delivery to client + time for processing on the client) 27 | # crosses _TOTAL (in second). 28 | # _TOTAL reflects the fact that not all transports deliver RPC requests to the clients 29 | # instantly, so request can wait some time in the buffer on server side. 30 | # NOT IMPLEMENTED YET 31 | #RPC_TIMEOUT_TOTAL = 600 32 | 33 | # RPC call throws TimeoutServiceException once client is processing request longer 34 | # than _PROCESS (in second) 35 | # NOT IMPLEMENTED YET 36 | #RPC_TIMEOUT_PROCESS = 30 37 | 38 | # Do you want to expose "example" service in server? 39 | # Useful for learning the server,you probably want to disable 40 | # this on production 41 | ENABLE_EXAMPLE_SERVICE = True 42 | 43 | # ******************** TRANSPORTS ********************* 44 | 45 | # Hostname or external IP to expose 46 | HOSTNAME = 'stratum.example.com' 47 | 48 | # Port used for Socket transport. Use 'None' for disabling the transport. 49 | LISTEN_SOCKET_TRANSPORT = 3333 50 | 51 | # Port used for HTTP Poll transport. Use 'None' for disabling the transport 52 | LISTEN_HTTP_TRANSPORT = 8000 53 | 54 | # Port used for HTTPS Poll transport 55 | LISTEN_HTTPS_TRANSPORT = 8001 56 | 57 | # Port used for WebSocket transport, 'None' for disabling WS 58 | LISTEN_WS_TRANSPORT = 8002 59 | 60 | # Port used for secure WebSocket, 'None' for disabling WSS 61 | LISTEN_WSS_TRANSPORT = 8003 62 | 63 | # ******************** SSL SETTINGS ****************** 64 | 65 | # Private key and certification file for SSL protected transports 66 | # You can find howto for generating self-signed certificate in README file 67 | SSL_PRIVKEY = 'server.key' 68 | SSL_CACERT = 'server.crt' 69 | 70 | # ******************** TCP SETTINGS ****************** 71 | 72 | # Enables support for socket encapsulation, which is compatible 73 | # with haproxy 1.5+. By enabling this, first line of received 74 | # data will represent some metadata about proxied stream: 75 | # PROXY \n 76 | # 77 | # Full specification: http://haproxy.1wt.eu/download/1.5/doc/proxy-protocol.txt 78 | TCP_PROXY_PROTOCOL = False 79 | 80 | # ******************** HTTP SETTINGS ***************** 81 | 82 | # Keepalive for HTTP transport sessions (at this time for both poll and push) 83 | # High value leads to higher memory usage (all sessions are stored in memory ATM). 84 | # Low value leads to more frequent session reinitializing (like downloading address history). 85 | HTTP_SESSION_TIMEOUT = 3600 # in seconds 86 | 87 | # Maximum number of messages (notifications, responses) waiting to delivery to HTTP Poll clients. 88 | # Buffer length is PER CONNECTION. High value will consume a lot of RAM, 89 | # short history will cause that in some edge cases clients won't receive older events. 90 | HTTP_BUFFER_LIMIT = 10000 91 | 92 | # User agent used in HTTP requests (for both HTTP transports and for proxy calls from services) 93 | #USER_AGENT = 'Stratum/0.1' 94 | USER_AGENT = 'PoolServer' 95 | 96 | # Provide human-friendly user interface on HTTP transports for browsing exposed services. 
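# (Served by Root.render_GET in stratum/http_transport.py; at the moment it only
# returns a placeholder page, the full service browser is still a TODO there.)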
97 | BROWSER_ENABLE = True 98 | 99 | # ******************** BITCOIND SETTINGS ************ 100 | 101 | # Hostname and credentials for one trusted Bitcoin node ("Satoshi's client"). 102 | # Stratum uses both P2P port (which is 8333 everytime) and RPC port 103 | BITCOIN_TRUSTED_HOST = '127.0.0.1' 104 | BITCOIN_TRUSTED_PORT = 8332 # RPC port 105 | BITCOIN_TRUSTED_USER = 'stratum' 106 | BITCOIN_TRUSTED_PASSWORD = '***somepassword***' 107 | 108 | # ******************** OTHER CORE SETTINGS ********************* 109 | # Use "echo -n '' | sha256sum | cut -f1 -d' ' " 110 | # for calculating SHA256 of your preferred password 111 | ADMIN_PASSWORD_SHA256 = None # Admin functionality is disabled 112 | #ADMIN_PASSWORD_SHA256 = '9e6c0c1db1e0dfb3fa5159deb4ecd9715b3c8cd6b06bd4a3ad77e9a8c5694219' # SHA256 of the password 113 | 114 | # IP from which admin calls are allowed. 115 | # Set None to allow admin calls from all IPs 116 | ADMIN_RESTRICT_INTERFACE = '127.0.0.1' 117 | 118 | # Use "./signature.py > signing_key.pem" to generate unique signing key for your server 119 | SIGNING_KEY = None # Message signing is disabled 120 | #SIGNING_KEY = 'signing_key.pem' 121 | 122 | # Origin of signed messages. Provide some unique string, 123 | # ideally URL where users can find some information about your identity 124 | SIGNING_ID = None 125 | #SIGNING_ID = 'stratum.somedomain.com' # Use custom string 126 | #SIGNING_ID = HOSTNAME # Use hostname as the signing ID 127 | 128 | # *********************** IRC / PEER CONFIGURATION ************* 129 | 130 | IRC_NICK = None # Skip IRC registration 131 | #IRC_NICK = "stratum" # Use nickname of your choice 132 | 133 | # Which hostname / external IP expose in IRC room 134 | # This should be official HOSTNAME for normal operation. 135 | IRC_HOSTNAME = HOSTNAME 136 | 137 | # Don't change this unless you're creating private Stratum cloud. 138 | IRC_SERVER = 'irc.freenode.net' 139 | IRC_ROOM = '#stratum-nodes' 140 | IRC_PORT = 6667 141 | 142 | # Hardcoded list of Stratum nodes for clients to switch when this node is not available. 143 | PEERS = [ 144 | { 145 | 'hostname': 'stratum.bitcoin.cz', 146 | 'trusted': True, # This node is trustworthy 147 | 'weight': -1, # Higher number means higher priority for selection. 148 | # -1 will work mostly as a backup when other servers won't work. 149 | # (IRC peers have weight=0 automatically). 
150 | }, 151 | ] 152 | 153 | 154 | ''' 155 | DATABASE_DRIVER = 'MySQLdb' 156 | DATABASE_HOST = 'palatinus.cz' 157 | DATABASE_DBNAME = 'marekp_bitcointe' 158 | DATABASE_USER = 'marekp_bitcointe' 159 | DATABASE_PASSWORD = '**empty**' 160 | ''' 161 | -------------------------------------------------------------------------------- /stratum/connection_registry.py: -------------------------------------------------------------------------------- 1 | import weakref 2 | from twisted.internet import reactor 3 | from services import GenericService 4 | 5 | class ConnectionRegistry(object): 6 | __connections = weakref.WeakKeyDictionary() 7 | 8 | @classmethod 9 | def add_connection(cls, conn): 10 | cls.__connections[conn] = True 11 | 12 | @classmethod 13 | def remove_connection(cls, conn): 14 | try: 15 | del cls.__connections[conn] 16 | except: 17 | print "Warning: Cannot remove connection from ConnectionRegistry" 18 | 19 | @classmethod 20 | def get_session(cls, conn): 21 | if isinstance(conn, weakref.ref): 22 | conn = conn() 23 | 24 | if isinstance(conn, GenericService): 25 | conn = conn.connection_ref() 26 | 27 | if conn == None: 28 | return None 29 | 30 | return conn.get_session() 31 | 32 | @classmethod 33 | def iterate(cls): 34 | return cls.__connections.iterkeyrefs() 35 | 36 | def dump_connections(): 37 | for x in ConnectionRegistry.iterate(): 38 | c = x() 39 | c.transport.write('cus') 40 | reactor.callLater(5, dump_connections) 41 | 42 | #reactor.callLater(0, dump_connections) 43 | -------------------------------------------------------------------------------- /stratum/custom_exceptions.py: -------------------------------------------------------------------------------- 1 | class ProtocolException(Exception): 2 | pass 3 | 4 | class TransportException(Exception): 5 | pass 6 | 7 | class ServiceException(Exception): 8 | code = -2 9 | 10 | class UnauthorizedException(ServiceException): 11 | pass 12 | 13 | class SignatureException(ServiceException): 14 | code = -21 15 | 16 | class PubsubException(ServiceException): 17 | pass 18 | 19 | class AlreadySubscribedException(PubsubException): 20 | pass 21 | 22 | class IrcClientException(Exception): 23 | pass 24 | 25 | class SigningNotAvailableException(SignatureException): 26 | code = -21 27 | 28 | class UnknownSignatureIdException(SignatureException): 29 | code = -22 30 | 31 | class UnknownSignatureAlgorithmException(SignatureException): 32 | code = -22 33 | 34 | class SignatureVerificationFailedException(SignatureException): 35 | code = -23 36 | 37 | class MissingServiceTypeException(ServiceException): 38 | code = -2 39 | 40 | class MissingServiceVendorException(ServiceException): 41 | code = -2 42 | 43 | class MissingServiceIsDefaultException(ServiceException): 44 | code = -2 45 | 46 | class DefaultServiceAlreadyExistException(ServiceException): 47 | code = -2 48 | 49 | class ServiceNotFoundException(ServiceException): 50 | code = -2 51 | 52 | class MethodNotFoundException(ServiceException): 53 | code = -3 54 | 55 | class FeeRequiredException(ServiceException): 56 | code = -10 57 | 58 | class TimeoutServiceException(ServiceException): 59 | pass 60 | 61 | class RemoteServiceException(Exception): 62 | pass -------------------------------------------------------------------------------- /stratum/event_handler.py: -------------------------------------------------------------------------------- 1 | import custom_exceptions 2 | from twisted.internet import defer 3 | from services import wrap_result_object 4 | 5 | class GenericEventHandler(object): 6 | def 
_handle_event(self, msg_method, msg_params, connection_ref): 7 | return defer.maybeDeferred(wrap_result_object, self.handle_event(msg_method, msg_params, connection_ref)) 8 | 9 | def handle_event(self, msg_method, msg_params, connection_ref): 10 | '''In most cases you'll only need to overload this method.''' 11 | print "Other side called method", msg_method, "with params", msg_params 12 | raise custom_exceptions.MethodNotFoundException("Method '%s' not implemented" % msg_method) -------------------------------------------------------------------------------- /stratum/example_service.py: -------------------------------------------------------------------------------- 1 | from twisted.internet import defer 2 | from twisted.internet import reactor 3 | from twisted.names import client 4 | import random 5 | import time 6 | 7 | from services import GenericService, signature, synchronous 8 | import pubsub 9 | 10 | import logger 11 | log = logger.get_logger('example') 12 | 13 | class ExampleService(GenericService): 14 | service_type = 'example' 15 | service_vendor = 'Stratum' 16 | is_default = True 17 | 18 | def hello_world(self): 19 | return "Hello world!" 20 | hello_world.help_text = "Returns string 'Hello world!'" 21 | hello_world.params = [] 22 | 23 | @signature 24 | def ping(self, payload): 25 | return payload 26 | ping.help_text = "Returns signed message with the payload given by the client." 27 | ping.params = [('payload', 'mixed', 'This payload will be sent back to the client.'),] 28 | 29 | @synchronous 30 | def synchronous(self, how_long): 31 | '''This can use blocking calls, because it runs in separate thread''' 32 | for _ in range(int(how_long)): 33 | time.sleep(1) 34 | return 'Request finished in %d seconds' % how_long 35 | synchronous.help_text = "Run time consuming algorithm in server's threadpool and return the result when it finish." 36 | synchronous.params = [('how_long', 'int', 'For how many seconds the algorithm should run.'),] 37 | 38 | def throw_exception(self): 39 | raise Exception("Some error") 40 | throw_exception.help_text = "Throw an exception and send error result to the client." 41 | throw_exception.params = [] 42 | 43 | @signature 44 | def throw_signed_exception(self): 45 | raise Exception("Some error") 46 | throw_signed_exception.help_text = "Throw an exception and send signed error result to the client." 47 | throw_signed_exception.params = [] 48 | 49 | class TimeSubscription(pubsub.Subscription): 50 | event = 'example.pubsub.time_event' 51 | 52 | def process(self, t): 53 | # Process must return list of parameters for notification 54 | # or None if notification should not be send 55 | if t % self.params.get('period', 1) == 0: 56 | return (t,) 57 | 58 | def after_subscribe(self, _): 59 | # Some objects want to fire up notification or other 60 | # action directly after client subscribes. 61 | # after_subscribe is the right place for such logic 62 | pass 63 | 64 | class PubsubExampleService(GenericService): 65 | service_type = 'example.pubsub' 66 | service_vendor = 'Stratum' 67 | is_default = True 68 | 69 | def _setup(self): 70 | self._emit_time_event() 71 | 72 | @pubsub.subscribe 73 | def subscribe(self, period): 74 | return TimeSubscription(period=period) 75 | subscribe.help_text = "Subscribe client for receiving current server's unix timestamp." 76 | subscribe.params = [('period', 'int', 'Broadcast to the client only if timestamp%period==0. 
Use 1 for receiving an event in every second.'),] 77 | 78 | @pubsub.unsubscribe 79 | def unsubscribe(self, subscription_key):#period): 80 | return subscription_key 81 | unsubscribe.help_text = "Stop broadcasting unix timestampt to the client." 82 | unsubscribe.params = [('subscription_key', 'string', 'Key obtained by calling of subscribe method.'),] 83 | 84 | def _emit_time_event(self): 85 | # This will emit a publish event, 86 | # so all subscribed clients will receive 87 | # the notification 88 | 89 | t = time.time() 90 | TimeSubscription.emit(int(t)) 91 | reactor.callLater(1, self._emit_time_event) 92 | 93 | # Let's print some nice stats 94 | cnt = pubsub.Pubsub.get_subscription_count('example.pubsub.time_event') 95 | if cnt: 96 | log.info("Example event emitted in %.03f sec to %d subscribers" % (time.time() - t, cnt)) -------------------------------------------------------------------------------- /stratum/helpers.py: -------------------------------------------------------------------------------- 1 | from zope.interface import implements 2 | from twisted.internet import defer 3 | from twisted.internet import reactor 4 | from twisted.internet.protocol import Protocol 5 | from twisted.web.iweb import IBodyProducer 6 | from twisted.web.client import Agent 7 | from twisted.web.http_headers import Headers 8 | 9 | import settings 10 | 11 | class ResponseCruncher(Protocol): 12 | '''Helper for get_page()''' 13 | def __init__(self, finished): 14 | self.finished = finished 15 | self.response = "" 16 | 17 | def dataReceived(self, data): 18 | self.response += data 19 | 20 | def connectionLost(self, reason): 21 | self.finished.callback(self.response) 22 | 23 | class StringProducer(object): 24 | '''Helper for get_page()''' 25 | implements(IBodyProducer) 26 | 27 | def __init__(self, body): 28 | self.body = body 29 | self.length = len(body) 30 | 31 | def startProducing(self, consumer): 32 | consumer.write(self.body) 33 | return defer.succeed(None) 34 | 35 | def pauseProducing(self): 36 | pass 37 | 38 | def stopProducing(self): 39 | pass 40 | 41 | @defer.inlineCallbacks 42 | def get_page(url, method='GET', payload=None, headers=None): 43 | '''Downloads the page from given URL, using asynchronous networking''' 44 | agent = Agent(reactor) 45 | 46 | producer = None 47 | if payload: 48 | producer = StringProducer(payload) 49 | 50 | _headers = {'User-Agent': [settings.USER_AGENT,]} 51 | if headers: 52 | for key, value in headers.items(): 53 | _headers[key] = [value,] 54 | 55 | response = (yield agent.request( 56 | method, 57 | str(url), 58 | Headers(_headers), 59 | producer)) 60 | 61 | #for h in response.headers.getAllRawHeaders(): 62 | # print h 63 | 64 | try: 65 | finished = defer.Deferred() 66 | (yield response).deliverBody(ResponseCruncher(finished)) 67 | except: 68 | raise Exception("Downloading page '%s' failed" % url) 69 | 70 | defer.returnValue((yield finished)) 71 | 72 | @defer.inlineCallbacks 73 | def ask_old_server(method, *args): 74 | '''Perform request in old protocol to electrum servers. 
75 | This is deprecated, used only for proxying some calls.''' 76 | import urllib 77 | import ast 78 | 79 | # Hack for methods without arguments 80 | if not len(args): 81 | args = ['',] 82 | 83 | res = (yield get_page('http://electrum.bitcoin.cz/electrum.php', method='POST', 84 | headers={"Content-type": "application/x-www-form-urlencoded", "Accept": "text/plain"}, 85 | payload=urllib.urlencode({'q': repr([method,] + list(args))}))) 86 | 87 | try: 88 | data = ast.literal_eval(res) 89 | except SyntaxError: 90 | print "Data received from server:", res 91 | raise Exception("Corrupted data from old electrum server") 92 | defer.returnValue(data) 93 | -------------------------------------------------------------------------------- /stratum/http_transport.py: -------------------------------------------------------------------------------- 1 | from twisted.web.resource import Resource 2 | from twisted.web.server import Request, Session, NOT_DONE_YET 3 | from twisted.internet import defer 4 | from twisted.python.failure import Failure 5 | import hashlib 6 | import json 7 | import string 8 | 9 | import helpers 10 | import semaphore 11 | #from storage import Storage 12 | from protocol import Protocol, RequestCounter 13 | from event_handler import GenericEventHandler 14 | import settings 15 | 16 | import logger 17 | log = logger.get_logger('http_transport') 18 | 19 | class Transport(object): 20 | def __init__(self, session_id, lock): 21 | self.buffer = [] 22 | self.session_id = session_id 23 | self.lock = lock 24 | self.push_url = None # None or full URL for HTTP Push 25 | self.peer = None 26 | 27 | # For compatibility with generic transport, not used in HTTP transport 28 | self.disconnecting = False 29 | 30 | def getPeer(self): 31 | return self.peer 32 | 33 | def write(self, data): 34 | if len(self.buffer) >= settings.HTTP_BUFFER_LIMIT: 35 | # Drop first (oldest) item in buffer 36 | # if buffer crossed allowed limit. 37 | # This isn't totally exact, because one record in buffer 38 | # can teoretically contains more than one message (divided by \n), 39 | # but current server implementation don't store responses in this way, 40 | # so counting exact number of messages will lead to unnecessary overhead. 41 | self.buffer.pop(0) 42 | 43 | self.buffer.append(data) 44 | 45 | if not self.lock.is_locked() and self.push_url: 46 | # Push the buffer to callback URL 47 | # TODO: Buffer responses and perform callbgitacks in batches 48 | self.push_buffer() 49 | 50 | def push_buffer(self): 51 | '''Push the content of the buffer into callback URL''' 52 | if not self.push_url: 53 | return 54 | 55 | # FIXME: Don't expect any response 56 | helpers.get_page(self.push_url, method='POST', 57 | headers={"content-type": "application/stratum", 58 | "x-session-id": self.session_id}, 59 | payload=self.fetch_buffer()) 60 | 61 | def fetch_buffer(self): 62 | ret = ''.join(self.buffer) 63 | self.buffer = [] 64 | return ret 65 | 66 | def set_push_url(self, url): 67 | self.push_url = url 68 | 69 | def monkeypatch_method(cls): 70 | '''Perform monkey patch for given class.''' 71 | def decorator(func): 72 | setattr(cls, func.__name__, func) 73 | return func 74 | return decorator 75 | 76 | @monkeypatch_method(Request) 77 | def getSession(self, sessionInterface=None, cookie_prefix='TWISTEDSESSION'): 78 | '''Monkey patch for Request object, providing backward-compatible 79 | getSession method which can handle custom cookie as a session ID 80 | (which is necessary for following Stratum protocol specs). 
81 | Unfortunately twisted developers rejected named-cookie feature, 82 | which is pressing me into this ugly solution... 83 | 84 | TODO: Especially this would deserve some unit test to be sure it doesn't break 85 | in future twisted versions. 86 | ''' 87 | # Session management 88 | if not self.session: 89 | cookiename = string.join([cookie_prefix] + self.sitepath, "_") 90 | sessionCookie = self.getCookie(cookiename) 91 | if sessionCookie: 92 | try: 93 | self.session = self.site.getSession(sessionCookie) 94 | except KeyError: 95 | pass 96 | # if it still hasn't been set, fix it up. 97 | if not self.session: 98 | self.session = self.site.makeSession() 99 | self.addCookie(cookiename, self.session.uid, path='/') 100 | self.session.touch() 101 | if sessionInterface: 102 | return self.session.getComponent(sessionInterface) 103 | return self.session 104 | 105 | class HttpSession(Session): 106 | sessionTimeout = settings.HTTP_SESSION_TIMEOUT 107 | 108 | def __init__(self, *args, **kwargs): 109 | Session.__init__(self, *args, **kwargs) 110 | #self.storage = Storage() 111 | 112 | # Reference to connection object (Protocol instance) 113 | self.protocol = None 114 | 115 | # Synchronizing object for avoiding race condition on session 116 | self.lock = semaphore.Semaphore(1) 117 | 118 | # Output buffering 119 | self.transport = Transport(self.uid, self.lock) 120 | 121 | # Setup cleanup method on session expiration 122 | self.notifyOnExpire(lambda: HttpSession.on_expire(self)) 123 | 124 | @classmethod 125 | def on_expire(cls, sess_obj): 126 | # FIXME: Close protocol connection 127 | print "EXPIRING SESSION", sess_obj 128 | 129 | if sess_obj.protocol: 130 | sess_obj.protocol.connectionLost(Failure(Exception("HTTP session closed"))) 131 | 132 | sess_obj.protocol = None 133 | 134 | class Root(Resource): 135 | isLeaf = True 136 | 137 | def __init__(self, debug=False, signing_key=None, signing_id=None, 138 | event_handler=GenericEventHandler): 139 | Resource.__init__(self) 140 | self.signing_key = signing_key 141 | self.signing_id = signing_id 142 | self.debug = debug # This class acts as a 'factory', debug is used by Protocol 143 | self.event_handler = event_handler 144 | 145 | def render_GET(self, request): 146 | if not settings.BROWSER_ENABLE: 147 | return "Welcome to %s server. Use HTTP POST to talk with the server." 
% settings.USER_AGENT 148 | 149 | # TODO: Web browser 150 | return "Web browser not implemented yet" 151 | 152 | def render_OPTIONS(self, request): 153 | session = request.getSession(cookie_prefix='STRATUM_SESSION') 154 | 155 | request.setHeader('server', settings.USER_AGENT) 156 | request.setHeader('x-session-timeout', session.sessionTimeout) 157 | request.setHeader('access-control-allow-origin', '*') # Allow access from any other domain 158 | request.setHeader('access-control-allow-methods', 'POST, OPTIONS') 159 | request.setHeader('access-control-allow-headers', 'Content-Type') 160 | return '' 161 | 162 | def render_POST(self, request): 163 | session = request.getSession(cookie_prefix='STRATUM_SESSION') 164 | 165 | l = session.lock.acquire() 166 | l.addCallback(self._perform_request, request, session) 167 | return NOT_DONE_YET 168 | 169 | def _perform_request(self, _, request, session): 170 | request.setHeader('content-type', 'application/stratum') 171 | request.setHeader('server', settings.USER_AGENT) 172 | request.setHeader('x-session-timeout', session.sessionTimeout) 173 | request.setHeader('access-control-allow-origin', '*') # Allow access from any other domain 174 | 175 | # Update client's IP address 176 | session.transport.peer = request.getHost() 177 | 178 | # Although it isn't intuitive at all, request.getHeader reads request headers, 179 | # but request.setHeader (few lines above) writes response headers... 180 | if 'application/stratum' not in request.getHeader('content-type'): 181 | session.transport.write("%s\n" % json.dumps({'id': None, 'result': None, 'error': (-1, "Content-type must be 'application/stratum'. See http://stratum.bitcoin.cz for more info.", "")})) 182 | self._finish(None, request, session.transport, session.lock) 183 | return 184 | 185 | if not session.protocol: 186 | # Build a "protocol connection" 187 | proto = Protocol() 188 | proto.transport = session.transport 189 | proto.factory = self 190 | proto.connectionMade() 191 | session.protocol = proto 192 | else: 193 | proto = session.protocol 194 | 195 | # Update callback URL if presented 196 | callback_url = request.getHeader('x-callback-url') 197 | if callback_url != None: 198 | if callback_url == '': 199 | # Blank value of callback URL switches HTTP Push back to HTTP Poll 200 | session.transport.push_url = None 201 | else: 202 | session.transport.push_url = callback_url 203 | 204 | data = request.content.read() 205 | if data: 206 | counter = RequestCounter() 207 | counter.on_finish.addCallback(self._finish, request, session.transport, session.lock) 208 | proto.dataReceived(data, request_counter=counter) 209 | else: 210 | # Ping message (empty request) of HTTP Polling 211 | self._finish(None, request, session.transport, session.lock) 212 | 213 | 214 | @classmethod 215 | def _finish(cls, _, request, transport, lock): 216 | # First parameter is callback result; not used here 217 | data = transport.fetch_buffer() 218 | request.setHeader('content-length', len(data)) 219 | request.setHeader('content-md5', hashlib.md5(data).hexdigest()) 220 | request.setHeader('x-content-sha256', hashlib.sha256(data).hexdigest()) 221 | request.write(data) 222 | request.finish() 223 | lock.release() 224 | -------------------------------------------------------------------------------- /stratum/irc.py: -------------------------------------------------------------------------------- 1 | from twisted.words.protocols import irc 2 | from twisted.internet import reactor, protocol 3 | import random 4 | import string 5 | 6 | import 
custom_exceptions 7 | import logger 8 | log = logger.get_logger('irc') 9 | 10 | # Reference to open IRC connection 11 | _connection = None 12 | 13 | def get_connection(): 14 | if _connection: 15 | return _connection 16 | 17 | raise custom_exceptions.IrcClientException("IRC not connected") 18 | 19 | class IrcLurker(irc.IRCClient): 20 | def connectionMade(self): 21 | irc.IRCClient.connectionMade(self) 22 | self.peers = {} 23 | 24 | global _connection 25 | _connection = self 26 | 27 | def get_peers(self): 28 | return self.peers.values() 29 | 30 | def connectionLost(self, reason): 31 | irc.IRCClient.connectionLost(self, reason) 32 | 33 | global _connection 34 | _connection = None 35 | 36 | def signedOn(self): 37 | self.join(self.factory.channel) 38 | 39 | def joined(self, channel): 40 | log.info('Joined %s' % channel) 41 | 42 | #def dataReceived(self, data): 43 | # print data 44 | # irc.IRCClient.dataReceived(self, data.replace('\r', '')) 45 | 46 | def privmsg(self, user, channel, msg): 47 | user = user.split('!', 1)[0] 48 | 49 | if channel == self.nickname or msg.startswith(self.nickname + ":"): 50 | log.info("'%s': %s" % (user, msg)) 51 | return 52 | 53 | #def action(self, user, channel, msg): 54 | # user = user.split('!', 1)[0] 55 | # print user, channel, msg 56 | 57 | def register(self, nickname, *args, **kwargs): 58 | self.setNick(nickname) 59 | self.sendLine("USER %s 0 * :%s" % (self.nickname, self.factory.hostname)) 60 | 61 | def irc_RPL_NAMREPLY(self, prefix, params): 62 | for nick in params[3].split(' '): 63 | if not nick.startswith('S_'): 64 | continue 65 | 66 | if nick == self.nickname: 67 | continue 68 | 69 | self.sendLine("WHO %s" % nick) 70 | 71 | def irc_RPL_WHOREPLY(self, prefix, params): 72 | nickname = params[5] 73 | hostname = params[7].split(' ', 1)[1] 74 | log.debug("New peer '%s' (%s)" % (hostname, nickname)) 75 | self.peers[nickname] = hostname 76 | 77 | def userJoined(self, nickname, channel): 78 | self.sendLine("WHO %s" % nickname) 79 | 80 | def userLeft(self, nickname, channel): 81 | self.userQuit(nickname) 82 | 83 | def userKicked(self, nickname, *args, **kwargs): 84 | self.userQuit(nickname) 85 | 86 | def userQuit(self, nickname, *args, **kwargs): 87 | try: 88 | hostname = self.peers[nickname] 89 | del self.peers[nickname] 90 | log.info("Peer '%s' (%s) disconnected" % (hostname, nickname)) 91 | except: 92 | pass 93 | 94 | #def irc_unknown(self, prefix, command, params): 95 | # print "UNKNOWN", prefix, command, params 96 | 97 | class IrcLurkerFactory(protocol.ClientFactory): 98 | def __init__(self, channel, nickname, hostname): 99 | self.channel = channel 100 | self.nickname = nickname 101 | self.hostname = hostname 102 | 103 | def _random_string(self, N): 104 | return ''.join(random.choice(string.ascii_uppercase + string.digits) for x in range(N)) 105 | 106 | def buildProtocol(self, addr): 107 | p = IrcLurker() 108 | p.factory = self 109 | p.nickname = "S_%s_%s" % (self.nickname, self._random_string(5)) 110 | log.info("Using nickname '%s'" % p.nickname) 111 | return p 112 | 113 | def clientConnectionLost(self, connector, reason): 114 | """If we get disconnected, reconnect to server.""" 115 | log.error("Connection lost") 116 | reactor.callLater(10, connector.connect) 117 | 118 | def clientConnectionFailed(self, connector, reason): 119 | log.error("Connection failed") 120 | reactor.callLater(10, connector.connect) 121 | 122 | if __name__ == '__main__': 123 | # Example of using IRC bot 124 | reactor.connectTCP("irc.freenode.net", 6667, 
IrcLurkerFactory('#stratum-nodes', 'test', 'example.com')) 125 | reactor.run() -------------------------------------------------------------------------------- /stratum/jsonical.py: -------------------------------------------------------------------------------- 1 | # Copyright 2009 New England Biolabs 2 | # 3 | # This file is part of the nebgbhist package released under the MIT license. 4 | # 5 | r"""Canonical JSON serialization. 6 | 7 | Basic approaches for implementing canonical JSON serialization. 8 | 9 | Encoding basic Python object hierarchies:: 10 | 11 | >>> import jsonical 12 | >>> jsonical.dumps(['foo', {'bar': ('baz', None, 1.0, 2)}]) 13 | '["foo",{"bar":["baz",null,1.0,2]}]' 14 | >>> print jsonical.dumps("\"foo\bar") 15 | "\"foo\bar" 16 | >>> print jsonical.dumps(u'\u1234') 17 | "\u1234" 18 | >>> print jsonical.dumps('\\') 19 | "\\" 20 | >>> print jsonical.dumps({"c": 0, "b": 0, "a": 0}) 21 | {"a":0,"b":0,"c":0} 22 | >>> from StringIO import StringIO 23 | >>> io = StringIO() 24 | >>> json.dump(['streaming API'], io) 25 | >>> io.getvalue() 26 | '["streaming API"]' 27 | 28 | Decoding JSON:: 29 | 30 | >>> import jsonical 31 | >>> jsonical.loads('["foo", {"bar":["baz", null, 1.0, 2]}]') 32 | [u'foo', {u'bar': [u'baz', None, Decimal('1.0'), 2]}] 33 | >>> jsonical.loads('"\\"foo\\bar"') 34 | u'"foo\x08ar' 35 | >>> from StringIO import StringIO 36 | >>> io = StringIO('["streaming API"]') 37 | >>> jsonical.load(io) 38 | [u'streaming API'] 39 | 40 | Using jsonical from the shell to canonicalize: 41 | 42 | $ echo '{"json":"obj","bar":2.333333}' | python -mjsonical 43 | {"bar":2.333333,"json":"obj"} 44 | $ echo '{1.2:3.4}' | python -mjson.tool 45 | Expecting property name: line 1 column 2 (char 2) 46 | 47 | """ 48 | import datetime 49 | import decimal 50 | import sys 51 | import types 52 | import unicodedata 53 | 54 | try: 55 | import json 56 | except ImportError: 57 | import simplejson as json 58 | 59 | class Encoder(json.JSONEncoder): 60 | def __init__(self, *args, **kwargs): 61 | kwargs.pop("sort_keys", None) 62 | super(Encoder, self).__init__(sort_keys=True, *args, **kwargs) 63 | 64 | def default(self, obj): 65 | """This is slightly different than json.JSONEncoder.default(obj) 66 | in that it should returned the serialized representation of the 67 | passed object, not a serializable representation. 
68 | """ 69 | if isinstance(obj, (datetime.date, datetime.time, datetime.datetime)): 70 | return '"%s"' % obj.isoformat() 71 | elif isinstance(obj, unicode): 72 | return '"%s"' % unicodedata.normalize('NFD', obj).encode('utf-8') 73 | elif isinstance(obj, decimal.Decimal): 74 | return str(obj) 75 | return super(Encoder, self).default(obj) 76 | 77 | def _iterencode_default(self, o, markers=None): 78 | yield self.default(o) 79 | 80 | def dump(obj, fp, indent=None): 81 | return json.dump(obj, fp, separators=(',', ':'), indent=indent, cls=Encoder) 82 | 83 | def dumps(obj, indent=None): 84 | return json.dumps(obj, separators=(',', ':'), indent=indent, cls=Encoder) 85 | 86 | class Decoder(json.JSONDecoder): 87 | def raw_decode(self, s, **kw): 88 | obj, end = super(Decoder, self).raw_decode(s, **kw) 89 | if isinstance(obj, types.StringTypes): 90 | obj = unicodedata.normalize('NFD', unicode(obj)) 91 | return obj, end 92 | 93 | def load(fp): 94 | return json.load(fp, cls=Decoder, parse_float=decimal.Decimal) 95 | 96 | def loads(s): 97 | return json.loads(s, cls=Decoder, parse_float=decimal.Decimal) 98 | 99 | def tool(): 100 | infile = sys.stdin 101 | outfile = sys.stdout 102 | if len(sys.argv) > 1: 103 | infile = open(sys.argv[1], 'rb') 104 | if len(sys.argv) > 2: 105 | outfile = open(sys.argv[2], 'wb') 106 | if len(sys.argv) > 3: 107 | raise SystemExit("{0} [infile [outfile]]".format(sys.argv[0])) 108 | try: 109 | obj = load(infile) 110 | except ValueError, e: 111 | raise SystemExit(e) 112 | dump(obj, outfile) 113 | outfile.write('\n') 114 | 115 | if __name__ == '__main__': 116 | tool() -------------------------------------------------------------------------------- /stratum/logger.py: -------------------------------------------------------------------------------- 1 | '''Simple wrapper around python's logging package''' 2 | 3 | import os 4 | import logging 5 | from twisted.python import log as twisted_log 6 | 7 | import settings 8 | 9 | ''' 10 | class Logger(object): 11 | def debug(self, msg): 12 | twisted_log.msg(msg) 13 | 14 | def info(self, msg): 15 | twisted_log.msg(msg) 16 | 17 | def warning(self, msg): 18 | twisted_log.msg(msg) 19 | 20 | def error(self, msg): 21 | twisted_log.msg(msg) 22 | 23 | def critical(self, msg): 24 | twisted_log.msg(msg) 25 | ''' 26 | 27 | def get_logger(name): 28 | logger = logging.getLogger(name) 29 | logger.addHandler(stream_handler) 30 | logger.setLevel(getattr(logging, settings.LOGLEVEL)) 31 | 32 | if settings.LOGFILE != None: 33 | logger.addHandler(file_handler) 34 | 35 | logger.debug("Logging initialized") 36 | return logger 37 | #return Logger() 38 | 39 | if settings.DEBUG: 40 | fmt = logging.Formatter("%(asctime)s %(levelname)s %(name)s %(module)s.%(funcName)s # %(message)s") 41 | else: 42 | fmt = logging.Formatter("%(asctime)s %(levelname)s %(name)s # %(message)s") 43 | 44 | if settings.LOGFILE != None: 45 | file_handler = logging.FileHandler(os.path.join(settings.LOGDIR, settings.LOGFILE)) 46 | file_handler.setFormatter(fmt) 47 | 48 | stream_handler = logging.StreamHandler() 49 | stream_handler.setFormatter(fmt) -------------------------------------------------------------------------------- /stratum/protocol.py: -------------------------------------------------------------------------------- 1 | import json 2 | #import jsonical 3 | import time 4 | import socket 5 | 6 | from twisted.protocols.basic import LineOnlyReceiver 7 | from twisted.internet import defer, reactor, error 8 | from twisted.python.failure import Failure 9 | 10 | #import services 11 | 
import stats 12 | import signature 13 | import custom_exceptions 14 | import connection_registry 15 | import settings 16 | 17 | import logger 18 | log = logger.get_logger('protocol') 19 | 20 | class RequestCounter(object): 21 | def __init__(self): 22 | self.on_finish = defer.Deferred() 23 | self.counter = 0 24 | 25 | def set_count(self, cnt): 26 | self.counter = cnt 27 | 28 | def decrease(self): 29 | self.counter -= 1 30 | if self.counter <= 0: 31 | self.finish() 32 | 33 | def finish(self): 34 | if not self.on_finish.called: 35 | self.on_finish.callback(True) 36 | 37 | class Protocol(LineOnlyReceiver): 38 | delimiter = '\n' 39 | 40 | def _get_id(self): 41 | self.request_id += 1 42 | return self.request_id 43 | 44 | def _get_ip(self): 45 | return self.proxied_ip or self.transport.getPeer().host 46 | 47 | def get_ident(self): 48 | # Get global unique ID of connection 49 | return "%s:%s" % (self.proxied_ip or self.transport.getPeer().host, "%x" % id(self)) 50 | 51 | def get_session(self): 52 | return self.session 53 | 54 | def connectionMade(self): 55 | try: 56 | self.transport.setTcpNoDelay(True) 57 | self.transport.setTcpKeepAlive(True) 58 | self.transport.socket.setsockopt(socket.SOL_TCP, socket.TCP_KEEPIDLE, 120) # Seconds before sending keepalive probes 59 | self.transport.socket.setsockopt(socket.SOL_TCP, socket.TCP_KEEPINTVL, 1) # Interval in seconds between keepalive probes 60 | self.transport.socket.setsockopt(socket.SOL_TCP, socket.TCP_KEEPCNT, 5) # Failed keepalive probles before declaring other end dead 61 | except: 62 | # Supported only by the socket transport, 63 | # but there's really no better place in code to trigger this. 64 | pass 65 | 66 | # Read settings.TCP_PROXY_PROTOCOL documentation 67 | self.expect_tcp_proxy_protocol_header = self.factory.__dict__.get('tcp_proxy_protocol_enable', False) 68 | self.proxied_ip = None # IP obtained from TCP proxy protocol 69 | 70 | self.request_id = 0 71 | self.lookup_table = {} 72 | self.event_handler = self.factory.event_handler() 73 | self.on_disconnect = defer.Deferred() 74 | self.on_finish = None # Will point to defer which is called 75 | # once all client requests are processed 76 | 77 | # Initiate connection session 78 | self.session = {} 79 | 80 | stats.PeerStats.client_connected(self._get_ip()) 81 | log.debug("Connected %s" % self.transport.getPeer().host) 82 | connection_registry.ConnectionRegistry.add_connection(self) 83 | 84 | def transport_write(self, data): 85 | '''Overwrite this if transport needs some extra care about data written 86 | to the socket, like adding message format in websocket.''' 87 | try: 88 | self.transport.write(data) 89 | except AttributeError: 90 | # Transport is disconnected 91 | pass 92 | 93 | def connectionLost(self, reason): 94 | if self.on_disconnect != None and not self.on_disconnect.called: 95 | self.on_disconnect.callback(self) 96 | self.on_disconnect = None 97 | 98 | stats.PeerStats.client_disconnected(self._get_ip()) 99 | connection_registry.ConnectionRegistry.remove_connection(self) 100 | self.transport = None # Fixes memory leak (cyclic reference) 101 | 102 | def writeJsonRequest(self, method, params, is_notification=False): 103 | request_id = None if is_notification else self._get_id() 104 | serialized = json.dumps({'id': request_id, 'method': method, 'params': params, 'jsonrpc':'2.0'}) 105 | 106 | if self.factory.debug: 107 | log.debug("< %s" % serialized) 108 | 109 | self.transport_write("%s\n" % serialized) 110 | return request_id 111 | 112 | def writeJsonResponse(self, data, message_id, 
use_signature=False, sign_method='', sign_params=[]): 113 | if not data: 114 | return 115 | if use_signature: 116 | serialized = signature.jsonrpc_dumps_sign(self.factory.signing_key, self.factory.signing_id, False,\ 117 | message_id, sign_method, sign_params, data, None) 118 | else: 119 | serialized = json.dumps({'id': message_id, 'result': data, 'error': None, 'jsonrpc':'2.0'}) 120 | 121 | if self.factory.debug: 122 | log.debug("< %s" % serialized) 123 | 124 | self.transport_write("%s\n" % serialized) 125 | 126 | def writeJsonError(self, code, message, traceback, message_id, use_signature=False, sign_method='', sign_params=[]): 127 | if use_signature: 128 | serialized = signature.jsonrpc_dumps_sign(self.factory.signing_key, self.factory.signing_id, False,\ 129 | message_id, sign_method, sign_params, None, (code, message, traceback)) 130 | else: 131 | serialized = json.dumps({'id': message_id, 'result': None, 'error': (code, message, traceback)}) 132 | 133 | self.transport_write("%s\n" % serialized) 134 | 135 | def writeGeneralError(self, message, code=-1): 136 | log.error(message) 137 | return self.writeJsonError(code, message, None, None) 138 | 139 | def process_response(self, data, message_id, sign_method, sign_params, request_counter): 140 | self.writeJsonResponse(data.result, message_id, data.sign, sign_method, sign_params) 141 | request_counter.decrease() 142 | 143 | 144 | def process_failure(self, failure, message_id, sign_method, sign_params, request_counter): 145 | if not isinstance(failure.value, custom_exceptions.ServiceException): 146 | # All handled exceptions should inherit from the ServiceException class. 147 | # Throwing any other exception class means it is an unhandled error 148 | # and we should log it. 149 | log.exception(failure) 150 | 151 | sign = False 152 | code = getattr(failure.value, 'code', -1) 153 | 154 | #if isinstance(failure.value, services.ResultObject): 155 | # # Strip ResultObject 156 | # sign = failure.value.sign 157 | # failure.value = failure.value.result 158 | 159 | if message_id != None: 160 | # The other party doesn't care about the error state of notifications 161 | if settings.DEBUG: 162 | tb = failure.getBriefTraceback() 163 | else: 164 | tb = None 165 | self.writeJsonError(code, failure.getErrorMessage(), tb, message_id, sign, sign_method, sign_params) 166 | 167 | request_counter.decrease() 168 | 169 | def dataReceived(self, data, request_counter=None): 170 | '''Original code from Twisted, hacked for request_counter proxying. 171 | request_counter is a hack for the HTTP transport; no cleaner way was found 172 | to indicate the end of request processing in an asynchronous manner. 
173 | 174 | TODO: This would deserve some unit test to be sure that future twisted versions 175 | will work nicely with this.''' 176 | 177 | if request_counter == None: 178 | request_counter = RequestCounter() 179 | 180 | lines = (self._buffer+data).split(self.delimiter) 181 | self._buffer = lines.pop(-1) 182 | request_counter.set_count(len(lines)) 183 | self.on_finish = request_counter.on_finish 184 | 185 | for line in lines: 186 | if self.transport.disconnecting: 187 | request_counter.finish() 188 | return 189 | if len(line) > self.MAX_LENGTH: 190 | request_counter.finish() 191 | return self.lineLengthExceeded(line) 192 | elif line: 193 | try: 194 | self.lineReceived(line, request_counter) 195 | except Exception as exc: 196 | request_counter.finish() 197 | #log.exception("Processing of message failed") 198 | log.warning("Failed message: %s from %s" % (str(exc), self._get_ip())) 199 | return error.ConnectionLost('Processing of message failed') 200 | 201 | if len(self._buffer) > self.MAX_LENGTH: 202 | request_counter.finish() 203 | return self.lineLengthExceeded(self._buffer) 204 | 205 | def lineReceived(self, line, request_counter): 206 | if self.expect_tcp_proxy_protocol_header: 207 | # This flag may be set only for TCP transport AND when TCP_PROXY_PROTOCOL 208 | # is enabled in server config. Then we expect the first line of the stream 209 | # may contain proxy metadata. 210 | 211 | # We don't expect this header during this session anymore 212 | self.expect_tcp_proxy_protocol_header = False 213 | 214 | if line.startswith('PROXY'): 215 | self.proxied_ip = line.split()[2] 216 | 217 | # Let's process next line 218 | request_counter.decrease() 219 | return 220 | 221 | try: 222 | message = json.loads(line) 223 | except: 224 | #self.writeGeneralError("Cannot decode message '%s'" % line) 225 | request_counter.finish() 226 | raise custom_exceptions.ProtocolException("Cannot decode message '%s'" % line.strip()) 227 | 228 | if self.factory.debug: 229 | log.debug("> %s" % message) 230 | 231 | msg_id = message.get('id', 0) 232 | msg_method = message.get('method') 233 | msg_params = message.get('params') 234 | msg_result = message.get('result') 235 | msg_error = message.get('error') 236 | 237 | if msg_method: 238 | # It's a RPC call or notification 239 | try: 240 | result = self.event_handler._handle_event(msg_method, msg_params, connection_ref=self) 241 | if result == None and msg_id != None: 242 | # event handler must return Deferred object or raise an exception for RPC request 243 | raise custom_exceptions.MethodNotFoundException("Event handler cannot process method '%s'" % msg_method) 244 | except: 245 | failure = Failure() 246 | self.process_failure(failure, msg_id, msg_method, msg_params, request_counter) 247 | 248 | else: 249 | if msg_id == None: 250 | # It's notification, don't expect the response 251 | request_counter.decrease() 252 | else: 253 | # It's a RPC call 254 | result.addCallback(self.process_response, msg_id, msg_method, msg_params, request_counter) 255 | result.addErrback(self.process_failure, msg_id, msg_method, msg_params, request_counter) 256 | 257 | elif msg_id: 258 | # It's a RPC response 259 | # Perform lookup to the table of waiting requests. 260 | request_counter.decrease() 261 | 262 | try: 263 | meta = self.lookup_table[msg_id] 264 | del self.lookup_table[msg_id] 265 | except KeyError: 266 | # When deferred object for given message ID isn't found, it's an error 267 | raise custom_exceptions.ProtocolException("Lookup for deferred object for message ID '%s' failed." 
% msg_id) 268 | 269 | # If there's an error, handle it as errback 270 | # If both result and error are null, handle it as a success with blank result 271 | if msg_error != None: 272 | meta['defer'].errback(custom_exceptions.RemoteServiceException(msg_error)) 273 | else: 274 | meta['defer'].callback(msg_result) 275 | 276 | else: 277 | request_counter.decrease() 278 | raise custom_exceptions.ProtocolException("Cannot handle message '%s'" % line) 279 | 280 | def rpc(self, method, params, is_notification=False): 281 | ''' 282 | This method performs a remote RPC call. 283 | 284 | If the method expects a response, it stores the 285 | request ID in the lookup table and waits for the corresponding 286 | response message. 287 | ''' 288 | 289 | request_id = self.writeJsonRequest(method, params, is_notification) 290 | 291 | if is_notification: 292 | return 293 | 294 | d = defer.Deferred() 295 | self.lookup_table[request_id] = {'defer': d, 'method': method, 'params': params} 296 | return d 297 | 298 | class ClientProtocol(Protocol): 299 | def connectionMade(self): 300 | Protocol.connectionMade(self) 301 | self.factory.client = self 302 | 303 | if self.factory.timeout_handler: 304 | self.factory.timeout_handler.cancel() 305 | self.factory.timeout_handler = None 306 | 307 | if isinstance(getattr(self.factory, 'after_connect', None), list): 308 | log.debug("Resuming connection: %s" % self.factory.after_connect) 309 | for cmd in self.factory.after_connect: 310 | self.rpc(cmd[0], cmd[1]) 311 | 312 | if not self.factory.on_connect.called: 313 | d = self.factory.on_connect 314 | self.factory.on_connect = defer.Deferred() 315 | d.callback(self.factory) 316 | 317 | 318 | #d = self.rpc('node.get_peers', []) 319 | #d.addCallback(self.factory.add_peers) 320 | 321 | def connectionLost(self, reason): 322 | self.factory.client = None 323 | 324 | if self.factory.timeout_handler: 325 | self.factory.timeout_handler.cancel() 326 | self.factory.timeout_handler = None 327 | 328 | if not self.factory.on_disconnect.called: 329 | d = self.factory.on_disconnect 330 | self.factory.on_disconnect = defer.Deferred() 331 | d.callback(self.factory) 332 | 333 | Protocol.connectionLost(self, reason) 334 | -------------------------------------------------------------------------------- /stratum/pubsub.py: -------------------------------------------------------------------------------- 1 | import weakref 2 | from connection_registry import ConnectionRegistry 3 | import custom_exceptions 4 | import hashlib 5 | 6 | def subscribe(func): 7 | '''Decorator that detects a Subscription object in the result and subscribes the connection''' 8 | def inner(self, *args, **kwargs): 9 | subs = func(self, *args, **kwargs) 10 | return Pubsub.subscribe(self.connection_ref(), subs) 11 | return inner 12 | 13 | def unsubscribe(func): 14 | '''Decorator that detects a Subscription object in the result and unsubscribes the connection''' 15 | def inner(self, *args, **kwargs): 16 | subs = func(self, *args, **kwargs) 17 | if isinstance(subs, Subscription): 18 | return Pubsub.unsubscribe(self.connection_ref(), subscription=subs) 19 | else: 20 | return Pubsub.unsubscribe(self.connection_ref(), key=subs) 21 | return inner 22 | 23 | class Subscription(object): 24 | def __init__(self, event=None, **params): 25 | if hasattr(self, 'event'): 26 | if event: 27 | raise Exception("Event name already defined in Subscription object") 28 | else: 29 | if not event: 30 | raise Exception("Please define event name in constructor") 31 | else: 32 | self.event = event 33 | 34 | self.params = params # Internal parameters for 
subscription object 35 | self.connection_ref = None 36 | 37 | def process(self, *args, **kwargs): 38 | return args 39 | 40 | def get_key(self): 41 | '''This is an identifier for current subscription. It is sent to the client, 42 | so result should not contain any sensitive information.''' 43 | #return hashlib.md5(str((self.event, self.params))).hexdigest() 44 | return "%s" % int(hashlib.md5( str((self.event, self.params)) ).hexdigest()[:12], 16) 45 | 46 | def get_session(self): 47 | '''Connection session may be useful in filter or process functions''' 48 | return self.connection_ref().get_session() 49 | 50 | @classmethod 51 | def emit(cls, *args, **kwargs): 52 | '''Shortcut for emiting this event to all subscribers.''' 53 | if not hasattr(cls, 'event'): 54 | raise Exception("Subscription.emit() can be used only for subclasses with filled 'event' class variable.") 55 | return Pubsub.emit(cls.event, *args, **kwargs) 56 | 57 | def emit_single(self, *args, **kwargs): 58 | '''Perform emit of this event just for current subscription.''' 59 | conn = self.connection_ref() 60 | if conn == None: 61 | # Connection is closed 62 | return 63 | 64 | payload = self.process(*args, **kwargs) 65 | if payload != None: 66 | if isinstance(payload, (tuple, list)): 67 | if len(payload)==1 and isinstance(payload[0], dict): 68 | payload = payload[0] 69 | conn.writeJsonRequest(self.event, payload, is_notification=True) 70 | self.after_emit(*args, **kwargs) 71 | else: 72 | raise Exception("Return object from process() method must be list or None") 73 | 74 | def after_emit(self, *args, **kwargs): 75 | pass 76 | 77 | # Once function is defined, it will be called every time 78 | #def after_subscribe(self, _): 79 | # pass 80 | 81 | def __eq__(self, other): 82 | return (isinstance(other, Subscription) and other.get_key() == self.get_key()) 83 | 84 | def __ne__(self, other): 85 | return not self.__eq__(other) 86 | 87 | class Pubsub(object): 88 | __subscriptions = {} 89 | 90 | @classmethod 91 | def subscribe(cls, connection, subscription): 92 | if connection == None: 93 | raise custom_exceptions.PubsubException("Subscriber not connected") 94 | 95 | key = subscription.get_key() 96 | session = ConnectionRegistry.get_session(connection) 97 | if session == None: 98 | raise custom_exceptions.PubsubException("No session found") 99 | 100 | subscription.connection_ref = weakref.ref(connection) 101 | session.setdefault('subscriptions', {}) 102 | 103 | if key in session['subscriptions']: 104 | raise custom_exceptions.AlreadySubscribedException("This connection is already subscribed for such event.") 105 | 106 | session['subscriptions'][key] = subscription 107 | 108 | cls.__subscriptions.setdefault(subscription.event, weakref.WeakKeyDictionary()) 109 | cls.__subscriptions[subscription.event][subscription] = None 110 | 111 | if hasattr(subscription, 'after_subscribe'): 112 | if connection.on_finish != None: 113 | # If subscription is processed during the request, wait to 114 | # finish and then process the callback 115 | connection.on_finish.addCallback(subscription.after_subscribe) 116 | else: 117 | # If subscription is NOT processed during the request (any real use case?), 118 | # process callback instantly (better now than never). 
119 | subscription.after_subscribe(True) 120 | 121 | # List of 2-tuples is prepared for future multi-subscriptions 122 | return ((subscription.event, key, subscription),) 123 | 124 | @classmethod 125 | def unsubscribe(cls, connection, subscription=None, key=None): 126 | if connection == None: 127 | raise custom_exceptions.PubsubException("Subscriber not connected") 128 | 129 | session = ConnectionRegistry.get_session(connection) 130 | if session == None: 131 | raise custom_exceptions.PubsubException("No session found") 132 | 133 | if subscription: 134 | key = subscription.get_key() 135 | 136 | try: 137 | # Subscription don't need to be removed from cls.__subscriptions, 138 | # because it uses weak reference there. 139 | del session['subscriptions'][key] 140 | except KeyError: 141 | print "Warning: Cannot remove subscription from connection session" 142 | return False 143 | 144 | return True 145 | 146 | @classmethod 147 | def get_subscription_count(cls, event): 148 | return len(cls.__subscriptions.get(event, {})) 149 | 150 | @classmethod 151 | def get_subscription(cls, connection, event, key=None): 152 | '''Return subscription object for given connection and event''' 153 | session = ConnectionRegistry.get_session(connection) 154 | if session == None: 155 | raise custom_exceptions.PubsubException("No session found") 156 | 157 | if key == None: 158 | sub = [ sub for sub in session.get('subscriptions', {}).values() if sub.event == event ] 159 | try: 160 | return sub[0] 161 | except IndexError: 162 | raise custom_exceptions.PubsubException("Not subscribed for event %s" % event) 163 | 164 | else: 165 | raise Exception("Searching subscriptions by key is not implemented yet") 166 | 167 | @classmethod 168 | def iterate_subscribers(cls, event): 169 | for subscription in cls.__subscriptions.get(event, weakref.WeakKeyDictionary()).iterkeyrefs(): 170 | subscription = subscription() 171 | if subscription == None: 172 | # Subscriber is no more connected 173 | continue 174 | 175 | yield subscription 176 | 177 | @classmethod 178 | def emit(cls, event, *args, **kwargs): 179 | for subscription in cls.iterate_subscribers(event): 180 | subscription.emit_single(*args, **kwargs) -------------------------------------------------------------------------------- /stratum/semaphore.py: -------------------------------------------------------------------------------- 1 | from twisted.internet import defer 2 | 3 | class Semaphore: 4 | """A semaphore for event driven systems.""" 5 | 6 | def __init__(self, tokens): 7 | self.waiting = [] 8 | self.tokens = tokens 9 | self.limit = tokens 10 | 11 | def is_locked(self): 12 | return (bool)(not self.tokens) 13 | 14 | def acquire(self): 15 | """Attempt to acquire the token. 16 | 17 | @return Deferred which returns on token acquisition. 18 | """ 19 | assert self.tokens >= 0 20 | d = defer.Deferred() 21 | if not self.tokens: 22 | self.waiting.append(d) 23 | else: 24 | self.tokens = self.tokens - 1 25 | d.callback(self) 26 | return d 27 | 28 | def release(self): 29 | """Release the token. 30 | 31 | Should be called by whoever did the acquire() when the shared 32 | resource is free. 33 | """ 34 | assert self.tokens < self.limit 35 | self.tokens = self.tokens + 1 36 | if self.waiting: 37 | # someone is waiting to acquire token 38 | self.tokens = self.tokens - 1 39 | d = self.waiting.pop(0) 40 | d.callback(self) 41 | 42 | def _releaseAndReturn(self, r): 43 | self.release() 44 | return r 45 | 46 | def run(self, f, *args, **kwargs): 47 | """Acquire token, run function, release token. 
48 | 49 | @return Deferred of function result. 50 | """ 51 | d = self.acquire() 52 | d.addCallback(lambda r: defer.maybeDeferred(f, *args, 53 | **kwargs).addBoth(self._releaseAndReturn)) 54 | return d 55 | -------------------------------------------------------------------------------- /stratum/server.py: -------------------------------------------------------------------------------- 1 | def setup(setup_event=None): 2 | try: 3 | from twisted.internet import epollreactor 4 | epollreactor.install() 5 | except ImportError: 6 | print "Failed to install epoll reactor, default reactor will be used instead." 7 | 8 | try: 9 | import settings 10 | except ImportError: 11 | print "***** Is configs.py missing? Maybe you want to copy and customize config_default.py?" 12 | 13 | from twisted.application import service 14 | application = service.Application("stratum-server") 15 | 16 | # Setting up logging 17 | from twisted.python.log import ILogObserver, FileLogObserver 18 | from twisted.python.logfile import DailyLogFile 19 | 20 | #logfile = DailyLogFile(settings.LOGFILE, settings.LOGDIR) 21 | #application.setComponent(ILogObserver, FileLogObserver(logfile).emit) 22 | 23 | if settings.ENABLE_EXAMPLE_SERVICE: 24 | import stratum.example_service 25 | 26 | if setup_event == None: 27 | setup_finalize(None, application) 28 | else: 29 | setup_event.addCallback(setup_finalize, application) 30 | 31 | return application 32 | 33 | def setup_finalize(event, application): 34 | 35 | from twisted.application import service, internet 36 | from twisted.internet import reactor, ssl 37 | from twisted.web.server import Site 38 | from twisted.python import log 39 | #from twisted.enterprise import adbapi 40 | import OpenSSL.SSL 41 | 42 | from services import ServiceEventHandler 43 | 44 | import socket_transport 45 | import http_transport 46 | import websocket_transport 47 | import irc 48 | 49 | from stratum import settings 50 | 51 | try: 52 | import signature 53 | signing_key = signature.load_privkey_pem(settings.SIGNING_KEY) 54 | except: 55 | print "Loading of signing key '%s' failed, protocol messages cannot be signed." % settings.SIGNING_KEY 56 | signing_key = None 57 | 58 | # Attach HTTPS Poll Transport service to application 59 | try: 60 | sslContext = ssl.DefaultOpenSSLContextFactory(settings.SSL_PRIVKEY, settings.SSL_CACERT) 61 | except OpenSSL.SSL.Error: 62 | sslContext = None 63 | print "Cannot initiate SSL context, are SSL_PRIVKEY or SSL_CACERT missing?" 64 | print "This will skip all SSL-based transports." 
65 | 66 | # Set up thread pool size for service threads 67 | reactor.suggestThreadPoolSize(settings.THREAD_POOL_SIZE) 68 | 69 | if settings.LISTEN_SOCKET_TRANSPORT: 70 | # Attach Socket Transport service to application 71 | socket = internet.TCPServer(settings.LISTEN_SOCKET_TRANSPORT, 72 | socket_transport.SocketTransportFactory(debug=settings.DEBUG, 73 | signing_key=signing_key, 74 | signing_id=settings.SIGNING_ID, 75 | event_handler=ServiceEventHandler, 76 | tcp_proxy_protocol_enable=settings.TCP_PROXY_PROTOCOL)) 77 | socket.setServiceParent(application) 78 | 79 | # Build the HTTP interface 80 | httpsite = Site(http_transport.Root(debug=settings.DEBUG, signing_key=signing_key, signing_id=settings.SIGNING_ID, 81 | event_handler=ServiceEventHandler)) 82 | httpsite.sessionFactory = http_transport.HttpSession 83 | 84 | if settings.LISTEN_HTTP_TRANSPORT: 85 | # Attach HTTP Poll Transport service to application 86 | http = internet.TCPServer(settings.LISTEN_HTTP_TRANSPORT, httpsite) 87 | http.setServiceParent(application) 88 | 89 | if settings.LISTEN_HTTPS_TRANSPORT and sslContext: 90 | https = internet.SSLServer(settings.LISTEN_HTTPS_TRANSPORT, httpsite, contextFactory = sslContext) 91 | https.setServiceParent(application) 92 | 93 | if settings.LISTEN_WS_TRANSPORT: 94 | from autobahn.websocket import listenWS 95 | log.msg("Starting WS transport on %d" % settings.LISTEN_WS_TRANSPORT) 96 | ws = websocket_transport.WebsocketTransportFactory(settings.LISTEN_WS_TRANSPORT, 97 | debug=settings.DEBUG, 98 | signing_key=signing_key, 99 | signing_id=settings.SIGNING_ID, 100 | event_handler=ServiceEventHandler, 101 | tcp_proxy_protocol_enable=settings.TCP_PROXY_PROTOCOL) 102 | listenWS(ws) 103 | 104 | if settings.LISTEN_WSS_TRANSPORT and sslContext: 105 | from autobahn.websocket import listenWS 106 | log.msg("Starting WSS transport on %d" % settings.LISTEN_WSS_TRANSPORT) 107 | wss = websocket_transport.WebsocketTransportFactory(settings.LISTEN_WSS_TRANSPORT, is_secure=True, 108 | debug=settings.DEBUG, 109 | signing_key=signing_key, 110 | signing_id=settings.SIGNING_ID, 111 | event_handler=ServiceEventHandler) 112 | listenWS(wss, contextFactory=sslContext) 113 | 114 | if settings.IRC_NICK: 115 | reactor.connectTCP(settings.IRC_SERVER, settings.IRC_PORT, irc.IrcLurkerFactory(settings.IRC_ROOM, settings.IRC_NICK, settings.IRC_HOSTNAME)) 116 | 117 | return event 118 | 119 | if __name__ == '__main__': 120 | print "This is not executable script. Try 'twistd -ny launcher.tac instead!" 
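The message above refers to a `launcher.tac` file that is not part of this proxy repository (here, `xmr-proxy.py` drives the reactor directly). As a minimal sketch only, assuming the upstream stratum layout, such a launcher could be as small as this, since `setup()` already returns a fully configured Application:

```python
# launcher.tac -- hypothetical, not shipped with this repository.
# A minimal sketch of driving stratum.server.setup() via twistd, e.g.:
#   twistd -ny launcher.tac
from stratum import server

# setup() attaches every transport enabled in settings (socket, HTTP poll,
# HTTPS, WS/WSS, IRC lurker) to an Application object; twistd picks up the
# module-level name 'application' and runs the reactor for it.
application = server.setup()
```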
-------------------------------------------------------------------------------- /stratum/services.py: -------------------------------------------------------------------------------- 1 | from twisted.internet import defer, threads 2 | from twisted.python import log 3 | import hashlib 4 | import weakref 5 | import re 6 | 7 | import custom_exceptions 8 | 9 | VENDOR_RE = re.compile(r'\[(.*)\]') 10 | 11 | class ServiceEventHandler(object): # reimplements event_handler.GenericEventHandler 12 | def _handle_event(self, msg_method, msg_params, connection_ref): 13 | return ServiceFactory.call(msg_method, msg_params, connection_ref=connection_ref) 14 | 15 | class ResultObject(object): 16 | def __init__(self, result=None, sign=False, sign_algo=None, sign_id=None): 17 | self.result = result 18 | self.sign = sign 19 | self.sign_algo = sign_algo 20 | self.sign_id = sign_id 21 | 22 | def wrap_result_object(obj): 23 | def _wrap(o): 24 | if isinstance(o, ResultObject): 25 | return o 26 | return ResultObject(result=o) 27 | 28 | if isinstance(obj, defer.Deferred): 29 | # We don't have the result yet; wait for it and wrap it later 30 | obj.addCallback(_wrap) 31 | return obj 32 | 33 | return _wrap(obj) 34 | 35 | class ServiceFactory(object): 36 | registry = {} # Mapping service_type -> vendor -> cls 37 | 38 | @classmethod 39 | def _split_method(cls, method): 40 | '''Parses a "some.service[vendor].method" string 41 | and returns a 3-tuple with (service_type, vendor, rpc_method)''' 42 | 43 | # Splits the service type and method name 44 | (service_type, method_name) = method.rsplit('.', 1) 45 | vendor = None 46 | 47 | if '[' in service_type: 48 | # Use the regular expression only when brackets are found 49 | try: 50 | vendor = VENDOR_RE.search(service_type).group(1) 51 | service_type = service_type.replace('[%s]' % vendor, '') 52 | except: 53 | raise 54 | #raise custom_exceptions.ServiceNotFoundException("Invalid syntax in service name '%s'" % type_name[0]) 55 | 56 | return (service_type, vendor, method_name) 57 | 58 | @classmethod 59 | def call(cls, method, params, connection_ref=None): 60 | if method in ['submit','login']: 61 | method = 'mining.%s' % method 62 | params = [params,] 63 | try: 64 | (service_type, vendor, func_name) = cls._split_method(method) 65 | except ValueError: 66 | raise custom_exceptions.MethodNotFoundException("Method name parsing failed. You *must* use format <service name>.<method name>, e.g. 
'example.ping'") 67 | 68 | try: 69 | if func_name.startswith('_'): 70 | raise 71 | 72 | _inst = cls.lookup(service_type, vendor=vendor)() 73 | _inst.connection_ref = weakref.ref(connection_ref) 74 | func = _inst.__getattribute__(func_name) 75 | if not callable(func): 76 | raise 77 | except: 78 | raise custom_exceptions.MethodNotFoundException("Method '%s' not found for service '%s'" % (func_name, service_type)) 79 | 80 | def _run(func, *params): 81 | return wrap_result_object(func(*params)) 82 | 83 | # Returns Defer which will lead to ResultObject sometimes 84 | return defer.maybeDeferred(_run, func, *params) 85 | 86 | @classmethod 87 | def lookup(cls, service_type, vendor=None): 88 | # Lookup for service type provided by specific vendor 89 | if vendor: 90 | try: 91 | return cls.registry[service_type][vendor] 92 | except KeyError: 93 | raise custom_exceptions.ServiceNotFoundException("Class for given service type and vendor isn't registered") 94 | 95 | # Lookup for any vendor, prefer default one 96 | try: 97 | vendors = cls.registry[service_type] 98 | except KeyError: 99 | raise custom_exceptions.ServiceNotFoundException("Class for given service type isn't registered") 100 | 101 | last_found = None 102 | for _, _cls in vendors.items(): 103 | last_found = _cls 104 | if last_found.is_default: 105 | return last_found 106 | 107 | if not last_found: 108 | raise custom_exceptions.ServiceNotFoundException("Class for given service type isn't registered") 109 | 110 | return last_found 111 | 112 | @classmethod 113 | def register_service(cls, _cls, meta): 114 | # Register service class to ServiceFactory 115 | service_type = meta.get('service_type') 116 | service_vendor = meta.get('service_vendor') 117 | is_default = meta.get('is_default') 118 | 119 | if str(_cls.__name__) in ('GenericService',): 120 | # str() is ugly hack, but it is avoiding circular references 121 | return 122 | 123 | if not service_type: 124 | raise custom_exceptions.MissingServiceTypeException("Service class '%s' is missing 'service_type' property." % _cls) 125 | 126 | if not service_vendor: 127 | raise custom_exceptions.MissingServiceVendorException("Service class '%s' is missing 'service_vendor' property." % _cls) 128 | 129 | if is_default == None: 130 | raise custom_exceptions.MissingServiceIsDefaultException("Service class '%s' is missing 'is_default' property." % _cls) 131 | 132 | if is_default: 133 | # Check if there's not any other default service 134 | 135 | try: 136 | current = cls.lookup(service_type) 137 | if current.is_default: 138 | raise custom_exceptions.DefaultServiceAlreadyExistException("Default service already exists for type '%s'" % service_type) 139 | except custom_exceptions.ServiceNotFoundException: 140 | pass 141 | 142 | setup_func = meta.get('_setup', None) 143 | if setup_func != None: 144 | _cls()._setup() 145 | 146 | ServiceFactory.registry.setdefault(service_type, {}) 147 | ServiceFactory.registry[service_type][service_vendor] = _cls 148 | 149 | log.msg("Registered %s for service '%s', vendor '%s' (default: %s)" % (_cls, service_type, service_vendor, is_default)) 150 | 151 | def signature(func): 152 | '''Decorate RPC method result with server's signature. 153 | This decorator can be chained with Deferred or inlineCallbacks, thanks to _sign_generator() hack.''' 154 | 155 | def _sign_generator(iterator): 156 | '''Iterate thru generator object, detects BaseException 157 | and inject signature into exception's value (=result of inner method). 
158 | This is black magic because of decorating inlineCallbacks methods. 159 | See returnValue documentation for understanding this: 160 | http://twistedmatrix.com/documents/11.0.0/api/twisted.internet.defer.html#returnValue''' 161 | 162 | for i in iterator: 163 | try: 164 | iterator.send((yield i)) 165 | except BaseException as exc: 166 | exc.value = wrap_result_object(exc.value) 167 | exc.value.sign = True 168 | raise 169 | 170 | def _sign_deferred(res): 171 | obj = wrap_result_object(res) 172 | obj.sign = True 173 | return obj 174 | 175 | def _sign_failure(fail): 176 | fail.value = wrap_result_object(fail.value) 177 | fail.value.sign = True 178 | return fail 179 | 180 | def inner(*args, **kwargs): 181 | ret = defer.maybeDeferred(func, *args, **kwargs) 182 | #if isinstance(ret, defer.Deferred): 183 | ret.addCallback(_sign_deferred) 184 | ret.addErrback(_sign_failure) 185 | return ret 186 | # return ret 187 | #elif isinstance(ret, types.GeneratorType): 188 | # return _sign_generator(ret) 189 | #else: 190 | # ret = wrap_result_object(ret) 191 | # ret.sign = True 192 | # return ret 193 | return inner 194 | 195 | def synchronous(func): 196 | '''Run given method synchronously in separate thread and return the result.''' 197 | def inner(*args, **kwargs): 198 | return threads.deferToThread(func, *args, **kwargs) 199 | return inner 200 | 201 | def admin(func): 202 | '''Requires an extra first parameter with superadministrator password''' 203 | import settings 204 | def inner(*args, **kwargs): 205 | if not len(args): 206 | raise custom_exceptions.UnauthorizedException("Missing password") 207 | 208 | if settings.ADMIN_RESTRICT_INTERFACE != None: 209 | ip = args[0].connection_ref()._get_ip() 210 | if settings.ADMIN_RESTRICT_INTERFACE != ip: 211 | raise custom_exceptions.UnauthorizedException("RPC call not allowed from your IP") 212 | 213 | if not settings.ADMIN_PASSWORD_SHA256: 214 | raise custom_exceptions.UnauthorizedException("Admin password not set, RPC call disabled") 215 | 216 | (password, args) = (args[1], [args[0],] + list(args[2:])) 217 | 218 | if hashlib.sha256(password).hexdigest() != settings.ADMIN_PASSWORD_SHA256: 219 | raise custom_exceptions.UnauthorizedException("Wrong password") 220 | 221 | return func(*args, **kwargs) 222 | return inner 223 | 224 | class ServiceMetaclass(type): 225 | def __init__(cls, name, bases, _dict): 226 | super(ServiceMetaclass, cls).__init__(name, bases, _dict) 227 | ServiceFactory.register_service(cls, _dict) 228 | 229 | class GenericService(object): 230 | __metaclass__ = ServiceMetaclass 231 | service_type = None 232 | service_vendor = None 233 | is_default = None 234 | 235 | # Keep weak reference to connection which asked for current 236 | # RPC call. Useful for pubsub mechanism, but use it with care. 237 | # It does not need to point to actual and valid data, so 238 | # you have to check if connection still exists every time. 239 | connection_ref = None 240 | 241 | class ServiceDiscovery(GenericService): 242 | service_type = 'discovery' 243 | service_vendor = 'Stratum' 244 | is_default = True 245 | 246 | def list_services(self): 247 | return ServiceFactory.registry.keys() 248 | 249 | def list_vendors(self, service_type): 250 | return ServiceFactory.registry[service_type].keys() 251 | 252 | def list_methods(self, service_name): 253 | # Accepts also vendors in square brackets: firstbits[firstbits.com] 254 | 255 | # Parse service type and vendor. We don't care about the method name, 256 | # but _split_method needs full path to some RPC method. 
257 | (service_type, vendor, _) = ServiceFactory._split_method("%s.foo" % service_name) 258 | service = ServiceFactory.lookup(service_type, vendor) 259 | out = [] 260 | 261 | for name, obj in service.__dict__.items(): 262 | 263 | if name.startswith('_'): 264 | continue 265 | 266 | if not callable(obj): 267 | continue 268 | 269 | out.append(name) 270 | 271 | return out 272 | 273 | def list_params(self, method): 274 | (service_type, vendor, meth) = ServiceFactory._split_method(method) 275 | service = ServiceFactory.lookup(service_type, vendor) 276 | 277 | # Load params and helper text from method attributes 278 | func = service.__dict__[meth] 279 | params = getattr(func, 'params', None) 280 | help_text = getattr(func, 'help_text', None) 281 | 282 | return (help_text, params) 283 | list_params.help_text = "Accepts name of method and returns its description and available parameters. Example: 'firstbits.resolve'" 284 | list_params.params = [('method', 'string', 'Method to lookup for description and parameters.'),] -------------------------------------------------------------------------------- /stratum/settings.py: -------------------------------------------------------------------------------- 1 | def setup(): 2 | ''' 3 | This will import modules config_default and config and move their variables 4 | into current module (variables in config have higher priority than config_default). 5 | Thanks to this, you can import settings anywhere in the application and you'll get 6 | actual application settings. 7 | 8 | This config is related to server side. You don't need config.py if you 9 | want to use client part only. 10 | ''' 11 | 12 | def read_values(cfg): 13 | for varname in cfg.__dict__.keys(): 14 | if varname.startswith('__'): 15 | continue 16 | 17 | value = getattr(cfg, varname) 18 | yield (varname, value) 19 | 20 | import config_default 21 | 22 | try: 23 | import config 24 | except ImportError: 25 | # Custom config not presented, but we can still use defaults 26 | config = None 27 | 28 | import sys 29 | module = sys.modules[__name__] 30 | 31 | for name,value in read_values(config_default): 32 | module.__dict__[name] = value 33 | 34 | changes = {} 35 | if config: 36 | for name,value in read_values(config): 37 | if value != module.__dict__.get(name, None): 38 | changes[name] = value 39 | module.__dict__[name] = value 40 | 41 | if module.__dict__['DEBUG'] and changes: 42 | print "----------------" 43 | print "Custom settings:" 44 | for k, v in changes.items(): 45 | if 'passw' in k.lower(): 46 | print k, ": ********" 47 | else: 48 | print k, ":", v 49 | print "----------------" 50 | 51 | setup() 52 | -------------------------------------------------------------------------------- /stratum/signature.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | try: 3 | import ecdsa 4 | from ecdsa import curves 5 | except ImportError: 6 | print "ecdsa package not installed. Signing of messages not available." 
7 | ecdsa = None 8 | 9 | import base64 10 | import hashlib 11 | import time 12 | 13 | import jsonical 14 | import json 15 | import custom_exceptions 16 | 17 | if ecdsa: 18 | # secp256k1, http://www.oid-info.com/get/1.3.132.0.10 19 | _p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2FL 20 | _r = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141L 21 | _b = 0x0000000000000000000000000000000000000000000000000000000000000007L 22 | _a = 0x0000000000000000000000000000000000000000000000000000000000000000L 23 | _Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798L 24 | _Gy = 0x483ada7726a3c4655da4fbfc0e1108a8fd17b448a68554199c47d08ffb10d4b8L 25 | curve_secp256k1 = ecdsa.ellipticcurve.CurveFp(_p, _a, _b) 26 | generator_secp256k1 = ecdsa.ellipticcurve.Point(curve_secp256k1, _Gx, _Gy, _r) 27 | oid_secp256k1 = (1,3,132,0,10) 28 | SECP256k1 = ecdsa.curves.Curve("SECP256k1", curve_secp256k1, generator_secp256k1, oid_secp256k1 ) 29 | 30 | # Register SECP256k1 to ecdsa library 31 | curves.curves.append(SECP256k1) 32 | 33 | def generate_keypair(): 34 | if not ecdsa: 35 | raise custom_exceptions.SigningNotAvailableException("ecdsa not installed") 36 | 37 | private_key = ecdsa.SigningKey.generate(curve=SECP256k1) 38 | public_key = private_key.get_verifying_key() 39 | return (private_key, public_key) 40 | 41 | def load_privkey_pem(filename): 42 | return ecdsa.SigningKey.from_pem(open(filename, 'r').read().strip()) 43 | 44 | def sign(privkey, data): 45 | if not ecdsa: 46 | raise custom_exceptions.SigningNotAvailableException("ecdsa not installed") 47 | 48 | hash = hashlib.sha256(data).digest() 49 | signature = privkey.sign_digest(hash, sigencode=ecdsa.util.sigencode_der) 50 | return base64.b64encode(signature) 51 | 52 | def verify(pubkey, signature, data): 53 | if not ecdsa: 54 | raise custom_exceptions.SigningNotAvailableException("ecdsa not installed") 55 | 56 | hash = hashlib.sha256(data).digest() 57 | sign = base64.b64decode(signature) 58 | try: 59 | return pubkey.verify_digest(sign, hash, sigdecode=ecdsa.util.sigdecode_der) 60 | except: 61 | return False 62 | 63 | def jsonrpc_dumps_sign(privkey, privkey_id, is_request, message_id, method='', params=[], result=None, error=None): 64 | '''Create the signature for given json-rpc data and returns signed json-rpc text stream''' 65 | 66 | # Build data object to sign 67 | sign_time = int(time.time()) 68 | data = {'method': method, 'params': params, 'result': result, 'error': error, 'sign_time': sign_time} 69 | 70 | # Serialize data to sign and perform signing 71 | txt = jsonical.dumps(data) 72 | signature = sign(privkey, txt) 73 | 74 | # Reconstruct final data object and put signature 75 | if is_request: 76 | data = {'id': message_id, 'method': method, 'params': params, 77 | 'sign': signature, 'sign_algo': 'ecdsa;SECP256k1', 'sign_id': privkey_id, 'sign_time': sign_time} 78 | else: 79 | data = {'id': message_id, 'result': result, 'error': error, 80 | 'sign': signature, 'sign_algo': 'ecdsa;SECP256k1', 'sign_id': privkey_id, 'sign_time': sign_time} 81 | 82 | # Return original data extended with signature 83 | return jsonical.dumps(data) 84 | 85 | def jsonrpc_loads_verify(pubkeys, txt): 86 | ''' 87 | Pubkeys is mapping (dict) of sign_id -> ecdsa public key. 88 | This method deserialize provided json-encoded data, load signature ID, perform the lookup for public key 89 | and check stored signature of the message. If signature is OK, returns message data. 
90 | ''' 91 | data = json.loads(txt) 92 | signature_algo = data['sign_algo'] 93 | signature_id = data['sign_id'] 94 | signature_time = data['sign_time'] 95 | 96 | if signature_algo != 'ecdsa;SECP256k1': 97 | raise custom_exceptions.UnknownSignatureAlgorithmException("%s is not supported" % signature_algo) 98 | 99 | try: 100 | pubkey = pubkeys[signature_id] 101 | except KeyError: 102 | raise custom_exceptions.UnknownSignatureIdException("Public key for '%s' not found" % signature_id) 103 | 104 | signature = data['sign'] 105 | message_id = data['id'] 106 | method = data.get('method', '') 107 | params = data.get('params', []) 108 | result = data.get('result', None) 109 | error = data.get('error', None) 110 | 111 | # Build data object to verify 112 | data = {'method': method, 'params': params, 'result': result, 'error': error, 'sign_time': signature_time} 113 | txt = jsonical.dumps(data) 114 | 115 | if not verify(pubkey, signature, txt): 116 | raise custom_exceptions.SignatureVerificationFailedException("Signature doesn't match to given data") 117 | 118 | if method: 119 | # It's a request 120 | return {'id': message_id, 'method': method, 'params': params} 121 | 122 | else: 123 | # It's aresponse 124 | return {'id': message_id, 'result': result, 'error': error} 125 | 126 | if __name__ == '__main__': 127 | (private, public) = generate_keypair() 128 | print private.to_pem() 129 | -------------------------------------------------------------------------------- /stratum/socket_transport.py: -------------------------------------------------------------------------------- 1 | from twisted.internet.protocol import ServerFactory 2 | from twisted.internet.protocol import ReconnectingClientFactory 3 | from twisted.internet import reactor, defer, endpoints 4 | 5 | import socksclient 6 | import custom_exceptions 7 | from protocol import Protocol, ClientProtocol 8 | from event_handler import GenericEventHandler 9 | 10 | import logger 11 | log = logger.get_logger('socket_transport') 12 | 13 | def sockswrapper(proxy, dest): 14 | endpoint = endpoints.TCP4ClientEndpoint(reactor, dest[0], dest[1]) 15 | return socksclient.SOCKSWrapper(reactor, proxy[0], proxy[1], endpoint) 16 | 17 | class SocketTransportFactory(ServerFactory): 18 | def __init__(self, debug=False, signing_key=None, signing_id=None, event_handler=GenericEventHandler, 19 | tcp_proxy_protocol_enable=False): 20 | self.debug = debug 21 | self.signing_key = signing_key 22 | self.signing_id = signing_id 23 | self.event_handler = event_handler 24 | self.protocol = Protocol 25 | 26 | # Read settings.TCP_PROXY_PROTOCOL documentation 27 | self.tcp_proxy_protocol_enable = tcp_proxy_protocol_enable 28 | 29 | class SocketTransportClientFactory(ReconnectingClientFactory): 30 | def __init__(self, host, port, allow_trusted=True, allow_untrusted=False, 31 | debug=False, signing_key=None, signing_id=None, 32 | is_reconnecting=True, proxy=None, 33 | event_handler=GenericEventHandler): 34 | self.debug = debug 35 | self.is_reconnecting = is_reconnecting 36 | self.signing_key = signing_key 37 | self.signing_id = signing_id 38 | self.client = None # Reference to open connection 39 | self.on_disconnect = defer.Deferred() 40 | self.on_connect = defer.Deferred() 41 | self.peers_trusted = {} 42 | self.peers_untrusted = {} 43 | self.main_host = (host, port) 44 | self.new_host = None 45 | self.proxy = proxy 46 | 47 | self.event_handler = event_handler 48 | self.protocol = ClientProtocol 49 | self.after_connect = [] 50 | 51 | self.connect() 52 | 53 | def connect(self): 54 | if 
self.proxy: 55 | self.timeout_handler = reactor.callLater(60, self.connection_timeout) 56 | sw = sockswrapper(self.proxy, self.main_host) 57 | sw.connect(self) 58 | else: 59 | self.timeout_handler = reactor.callLater(30, self.connection_timeout) 60 | reactor.connectTCP(self.main_host[0], self.main_host[1], self) 61 | 62 | ''' 63 | This shouldn't be a part of transport layer 64 | def add_peers(self, peers): 65 | # FIXME: Use this list when current connection fails 66 | for peer in peers: 67 | hash = "%s%s%s" % (peer['hostname'], peer['ipv4'], peer['ipv6']) 68 | 69 | which = self.peers_trusted if peer['trusted'] else self.peers_untrusted 70 | which[hash] = peer 71 | 72 | #print self.peers_trusted 73 | #print self.peers_untrusted 74 | ''' 75 | 76 | def connection_timeout(self): 77 | self.timeout_handler = None 78 | 79 | if self.client: 80 | return 81 | 82 | e = custom_exceptions.TransportException("SocketTransportClientFactory connection timed out") 83 | if not self.on_connect.called: 84 | d = self.on_connect 85 | self.on_connect = defer.Deferred() 86 | d.errback(e) 87 | 88 | else: 89 | raise e 90 | 91 | def rpc(self, method, params, *args, **kwargs): 92 | if not self.client: 93 | raise custom_exceptions.TransportException("Not connected") 94 | 95 | return self.client.rpc(method, params, *args, **kwargs) 96 | 97 | def subscribe(self, method, params, *args, **kwargs): 98 | ''' 99 | This is like standard RPC call, except that parameters are stored 100 | into after_connect list, so the same command will perform again 101 | on restored connection. 102 | ''' 103 | if not self.client: 104 | raise custom_exceptions.TransportException("Not connected") 105 | 106 | self.after_connect.append((method, params)) 107 | return self.client.rpc(method, params, *args, **kwargs) 108 | 109 | def reconnect(self, host=None, port=None, wait=None): 110 | '''Close current connection and start new one. 
111 | If host or port specified, it will be used for new connection.''' 112 | 113 | new = list(self.main_host) 114 | if host: 115 | new[0] = host 116 | if port: 117 | new[1] = port 118 | self.new_host = tuple(new) 119 | 120 | if self.client and self.client.connected: 121 | if wait != None: 122 | self.delay = wait 123 | self.client.transport.connector.disconnect() 124 | 125 | def retry(self, connector=None): 126 | if not self.is_reconnecting: 127 | return 128 | 129 | if connector is None: 130 | if self.connector is None: 131 | raise ValueError("no connector to retry") 132 | else: 133 | connector = self.connector 134 | 135 | if self.new_host: 136 | # Switch to new host if any 137 | connector.host = self.new_host[0] 138 | connector.port = self.new_host[1] 139 | self.main_host = self.new_host 140 | self.new_host = None 141 | 142 | return ReconnectingClientFactory.retry(self, connector) 143 | 144 | def buildProtocol(self, addr): 145 | self.resetDelay() 146 | #if not self.is_reconnecting: raise 147 | return ReconnectingClientFactory.buildProtocol(self, addr) 148 | 149 | def clientConnectionLost(self, connector, reason): 150 | if self.is_reconnecting: 151 | log.debug(reason) 152 | ReconnectingClientFactory.clientConnectionLost(self, connector, reason) 153 | 154 | def clientConnectionFailed(self, connector, reason): 155 | if self.is_reconnecting: 156 | log.debug(reason) 157 | ReconnectingClientFactory.clientConnectionFailed(self, connector, reason) 158 | 159 | -------------------------------------------------------------------------------- /stratum/socksclient.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) 2011-2012, Linus Nordberg 2 | # Taken from https://github.com/ln5/twisted-socks/ 3 | 4 | import socket 5 | import struct 6 | from zope.interface import implements 7 | from twisted.internet import defer 8 | from twisted.internet.interfaces import IStreamClientEndpoint 9 | from twisted.internet.protocol import Protocol, ClientFactory 10 | from twisted.internet.endpoints import _WrappingFactory 11 | 12 | class SOCKSError(Exception): 13 | def __init__(self, val): 14 | self.val = val 15 | def __str__(self): 16 | return repr(self.val) 17 | 18 | class SOCKSv4ClientProtocol(Protocol): 19 | buf = '' 20 | 21 | def SOCKSConnect(self, host, port): 22 | # only socksv4a for now 23 | ver = 4 24 | cmd = 1 # stream connection 25 | user = '\x00' 26 | dnsname = '' 27 | try: 28 | addr = socket.inet_aton(host) 29 | except socket.error: 30 | addr = '\x00\x00\x00\x01' 31 | dnsname = '%s\x00' % host 32 | msg = struct.pack('!BBH', ver, cmd, port) + addr + user + dnsname 33 | self.transport.write(msg) 34 | 35 | def verifySocksReply(self, data): 36 | """ 37 | Return True on success, False on need-more-data. 38 | Raise SOCKSError on request rejected or failed. 39 | """ 40 | if len(data) < 8: 41 | return False 42 | if ord(data[0]) != 0: 43 | self.transport.loseConnection() 44 | raise SOCKSError((1, "bad data")) 45 | status = ord(data[1]) 46 | if status != 0x5a: 47 | self.transport.loseConnection() 48 | raise SOCKSError((status, "request not granted: %d" % status)) 49 | return True 50 | 51 | def isSuccess(self, data): 52 | self.buf += data 53 | return self.verifySocksReply(self.buf) 54 | 55 | def connectionMade(self): 56 | self.SOCKSConnect(self.postHandshakeEndpoint._host, 57 | self.postHandshakeEndpoint._port) 58 | 59 | def dataReceived(self, data): 60 | if self.isSuccess(data): 61 | # Build protocol from provided factory and transfer control to it. 
62 | self.transport.protocol = self.postHandshakeFactory.buildProtocol( 63 | self.transport.getHost()) 64 | self.transport.protocol.transport = self.transport 65 | self.transport.protocol.connected = 1 66 | self.transport.protocol.connectionMade() 67 | self.handshakeDone.callback(self.transport.getPeer()) 68 | 69 | class SOCKSv4ClientFactory(ClientFactory): 70 | protocol = SOCKSv4ClientProtocol 71 | 72 | def buildProtocol(self, addr): 73 | r=ClientFactory.buildProtocol(self, addr) 74 | r.postHandshakeEndpoint = self.postHandshakeEndpoint 75 | r.postHandshakeFactory = self.postHandshakeFactory 76 | r.handshakeDone = self.handshakeDone 77 | return r 78 | 79 | class SOCKSWrapper(object): 80 | implements(IStreamClientEndpoint) 81 | factory = SOCKSv4ClientFactory 82 | 83 | def __init__(self, reactor, host, port, endpoint): 84 | self._host = host 85 | self._port = port 86 | self._reactor = reactor 87 | self._endpoint = endpoint 88 | 89 | def connect(self, protocolFactory): 90 | """ 91 | Return a deferred firing when the SOCKS connection is established. 92 | """ 93 | 94 | try: 95 | # Connect with an intermediate SOCKS factory/protocol, 96 | # which then hands control to the provided protocolFactory 97 | # once a SOCKS connection has been established. 98 | f = self.factory() 99 | f.postHandshakeEndpoint = self._endpoint 100 | f.postHandshakeFactory = protocolFactory 101 | f.handshakeDone = defer.Deferred() 102 | wf = _WrappingFactory(f) 103 | self._reactor.connectTCP(self._host, self._port, wf) 104 | return f.handshakeDone 105 | except: 106 | return defer.fail() -------------------------------------------------------------------------------- /stratum/stats.py: -------------------------------------------------------------------------------- 1 | import time 2 | import logger 3 | log = logger.get_logger('stats') 4 | 5 | class PeerStats(object): 6 | '''Stub for server statistics''' 7 | counter = 0 8 | changes = 0 9 | 10 | @classmethod 11 | def client_connected(cls, ip): 12 | cls.counter += 1 13 | cls.changes += 1 14 | 15 | cls.print_stats() 16 | 17 | @classmethod 18 | def client_disconnected(cls, ip): 19 | cls.counter -= 1 20 | cls.changes += 1 21 | 22 | cls.print_stats() 23 | 24 | @classmethod 25 | def print_stats(cls): 26 | if cls.counter and float(cls.changes) / cls.counter < 0.05: 27 | # Print connection stats only when more than 28 | # 5% connections change to avoid log spam 29 | return 30 | 31 | log.info("%d peers connected, state changed %d times" % (cls.counter, cls.changes)) 32 | cls.changes = 0 33 | 34 | @classmethod 35 | def get_connected_clients(cls): 36 | return cls.counter 37 | 38 | ''' 39 | class CpuStats(object): 40 | start_time = time.time() 41 | 42 | @classmethod 43 | def get_time(cls): 44 | diff = time.time() - cls.start_time 45 | return resource.getrusage(resource.RUSAGE_SELF)[0] / diff 46 | ''' -------------------------------------------------------------------------------- /stratum/storage.py: -------------------------------------------------------------------------------- 1 | #class StorageFactory(object): 2 | 3 | class Storage(object): 4 | #def __new__(self, session_id): 5 | # pass 6 | 7 | def __init__(self): 8 | self.__services = {} 9 | self.session = None 10 | 11 | def get(self, service_type, vendor, default_object): 12 | self.__services.setdefault(service_type, {}) 13 | self.__services[service_type].setdefault(vendor, default_object) 14 | return self.__services[service_type][vendor] 15 | 16 | def __repr__(self): 17 | return str(self.__services) 
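`Storage.get()` above is a lazy, two-level cache: the first call for a `(service_type, vendor)` pair stores `default_object`, and every later call returns that same cached object regardless of the default passed in. A small illustration (the keys and dict below are made up for the example):

```python
# Illustration only -- not part of the repository.
from stratum.storage import Storage

store = Storage()
counters = store.get('example', 'Stratum', {'shares': 0})  # stored on first access
counters['shares'] += 1

# The second call returns the same dict; the new default is ignored.
assert store.get('example', 'Stratum', {'shares': 0}) is counters
```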
-------------------------------------------------------------------------------- /stratum/version.py: -------------------------------------------------------------------------------- 1 | VERSION='0.2.13' 2 | -------------------------------------------------------------------------------- /stratum/websocket_transport.py: -------------------------------------------------------------------------------- 1 | from autobahn.websocket import WebSocketServerProtocol, WebSocketServerFactory 2 | from protocol import Protocol 3 | from event_handler import GenericEventHandler 4 | 5 | class WebsocketServerProtocol(WebSocketServerProtocol, Protocol): 6 | def connectionMade(self): 7 | WebSocketServerProtocol.connectionMade(self) 8 | Protocol.connectionMade(self) 9 | 10 | def connectionLost(self, reason): 11 | WebSocketServerProtocol.connectionLost(self, reason) 12 | Protocol.connectionLost(self, reason) 13 | 14 | def onMessage(self, msg, is_binary): 15 | Protocol.dataReceived(self, msg) 16 | 17 | def transport_write(self, data): 18 | self.sendMessage(data, False) 19 | 20 | class WebsocketTransportFactory(WebSocketServerFactory): 21 | def __init__(self, port, is_secure=False, debug=False, signing_key=None, signing_id=None, 22 | event_handler=GenericEventHandler): 23 | self.debug = debug 24 | self.signing_key = signing_key 25 | self.signing_id = signing_id 26 | self.protocol = WebsocketServerProtocol 27 | self.event_handler = event_handler 28 | 29 | if is_secure: 30 | uri = "wss://0.0.0.0:%d" % port 31 | else: 32 | uri = "ws://0.0.0.0:%d" % port 33 | 34 | WebSocketServerFactory.__init__(self, uri) 35 | 36 | # P.S. There's not Websocket client implementation yet 37 | # P.P.S. And it probably won't be for long time...' -------------------------------------------------------------------------------- /xmr-proxy.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding:utf-8 -*- 3 | 4 | import time 5 | import os 6 | import socket 7 | 8 | from stratum import settings 9 | import stratum.logger 10 | log = stratum.logger.get_logger('proxy') 11 | 12 | if __name__ == '__main__': 13 | if (settings.PAYMENT_ID and len(settings.PAYMENT_ID)!=64) or (len(settings.WALLET)!=95 and len(settings.WALLET)!=106): 14 | log.error("Wrong PAYMENT_ID or WALLET !!!") 15 | quit() 16 | settings.CUSTOM_USER = settings.WALLET if not settings.PAYMENT_ID else "%s.%s" % (settings.WALLET, settings.PAYMENT_ID) 17 | settings.CUSTOM_PASSWORD = settings.MONITORING_EMAIL if settings.MONITORING_EMAIL and settings.MONITORING else "1" 18 | 19 | from twisted.internet import reactor, defer, protocol 20 | from twisted.internet import reactor as reactor2 21 | from stratum.socket_transport import SocketTransportFactory, SocketTransportClientFactory 22 | from stratum.services import ServiceEventHandler 23 | from twisted.web.server import Site 24 | from stratum.custom_exceptions import TransportException 25 | 26 | from mining_libs import stratum_listener 27 | from mining_libs import client_service 28 | from mining_libs import jobs 29 | from mining_libs import multicast_responder 30 | from mining_libs import version 31 | from mining_libs.jobs import Job 32 | 33 | def on_shutdown(f): 34 | '''Clean environment properly''' 35 | log.info("Shutting down proxy...") 36 | if os.path.isfile('xmr-proxy.pid'): 37 | os.remove('xmr-proxy.pid') 38 | f.is_reconnecting = False # Don't let stratum factory to reconnect again 39 | 40 | # Support main connection 41 | @defer.inlineCallbacks 42 | def ping(f, id): 
43 | if not f.is_reconnecting: 44 | return 45 | try: 46 | yield (f.rpc('getjob', {"id":id,})) 47 | reactor.callLater(300, ping, f, id) 48 | except Exception: 49 | pass 50 | 51 | @defer.inlineCallbacks 52 | def on_connect(f): 53 | '''Callback when proxy get connected to the pool''' 54 | log.info("Connected to Stratum pool at %s:%d" % f.main_host) 55 | #reactor.callLater(30, f.client.transport.loseConnection) 56 | 57 | # Hook to on_connect again 58 | f.on_connect.addCallback(on_connect) 59 | 60 | # Get first job and user_id 61 | initial_job = (yield f.rpc('login', {"login":settings.CUSTOM_USER, "pass":settings.CUSTOM_PASSWORD, "agent":"proxy3"})) 62 | 63 | reactor.callLater(300, ping, f, initial_job['id']) 64 | 65 | defer.returnValue(f) 66 | 67 | def on_disconnect(f): 68 | '''Callback when proxy get disconnected from the pool''' 69 | log.info("Disconnected from Stratum pool at %s:%d" % f.main_host) 70 | f.on_disconnect.addCallback(on_disconnect) 71 | 72 | stratum_listener.MiningSubscription.disconnect_all() 73 | 74 | # Prepare to failover, currently works very bad 75 | #if f.main_host==(settings.POOL_HOST, settings.POOL_PORT): 76 | # main() 77 | #else: 78 | # f.is_reconnecting = False 79 | #return f 80 | 81 | @defer.inlineCallbacks 82 | def main(): 83 | reactor.disconnectAll() 84 | failover = False 85 | if settings.POOL_FAILOVER_ENABLE: 86 | failover = settings.failover_pool 87 | settings.failover_pool = not settings.failover_pool 88 | 89 | pool_host = settings.POOL_HOST 90 | pool_port = settings.POOL_PORT 91 | if failover and settings.POOL_FAILOVER_ENABLE: 92 | pool_host = settings.POOL_HOST_FAILOVER 93 | pool_port = settings.POOL_PORT_FAILOVER 94 | 95 | log.warning("Monero Stratum proxy version: %s" % version.VERSION) 96 | log.warning("Trying to connect to Stratum pool at %s:%d" % (pool_host, pool_port)) 97 | 98 | # Connect to Stratum pool, main monitoring connection 99 | f = SocketTransportClientFactory(pool_host, pool_port, 100 | debug=settings.DEBUG, proxy=None, 101 | event_handler=client_service.ClientMiningService) 102 | 103 | job_registry = jobs.JobRegistry(f) 104 | client_service.ClientMiningService.job_registry = job_registry 105 | client_service.ClientMiningService.reset_timeout() 106 | 107 | f.on_connect.addCallback(on_connect) 108 | f.on_disconnect.addCallback(on_disconnect) 109 | # Cleanup properly on shutdown 110 | reactor.addSystemEventTrigger('before', 'shutdown', on_shutdown, f) 111 | 112 | # Block until proxy connect to the pool 113 | try: 114 | yield f.on_connect 115 | except TransportException: 116 | log.warning("First pool server must be online first time to start failover") 117 | return 118 | 119 | # Setup stratum listener 120 | stratum_listener.StratumProxyService._set_upstream_factory(f) 121 | stratum_listener.StratumProxyService._set_custom_user(settings.CUSTOM_USER, settings.CUSTOM_PASSWORD, settings.ENABLE_WORKER_ID, settings.WORKER_ID_FROM_IP) 122 | reactor.listenTCP(settings.STRATUM_PORT, SocketTransportFactory(debug=settings.DEBUG, event_handler=ServiceEventHandler), interface=settings.STRATUM_HOST) 123 | 124 | # Setup multicast responder 125 | reactor.listenMulticast(3333, multicast_responder.MulticastResponder((pool_host, pool_port), settings.STRATUM_PORT), listenMultiple=True) 126 | 127 | log.warning("-----------------------------------------------------------------------") 128 | if settings.STRATUM_HOST == '0.0.0.0': 129 | log.warning("PROXY IS LISTENING ON ALL IPs ON PORT %d (stratum)" % settings.STRATUM_PORT) 130 | else: 131 | log.warning("LISTENING FOR 
MINERS ON stratum+tcp://%s:%d (stratum)" % \ 132 | (settings.STRATUM_HOST, settings.STRATUM_PORT)) 133 | log.warning("-----------------------------------------------------------------------") 134 | 135 | if __name__ == '__main__': 136 | fp = file("xmr-proxy.pid", 'w') 137 | fp.write(str(os.getpid())) 138 | fp.close() 139 | settings.failover_pool = False 140 | main() 141 | reactor.run() 142 | --------------------------------------------------------------------------------
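Putting the pieces together: `on_connect()` above logs in to the upstream pool with `f.rpc('login', ...)`, and `Protocol.writeJsonRequest()` turns that call into a single newline-terminated JSON-RPC 2.0 line. The sketch below only illustrates that wire format; the credential values are placeholders, not real configuration:

```python
# Illustration of the upstream 'login' line produced by writeJsonRequest();
# the login/pass values are placeholders (CUSTOM_USER is WALLET or
# WALLET.PAYMENT_ID, CUSTOM_PASSWORD is the monitoring e-mail or "1").
import json

line = json.dumps({'id': 1,                      # first request id from _get_id()
                   'method': 'login',
                   'params': {'login': '<WALLET or WALLET.PAYMENT_ID>',
                              'pass': '<monitoring e-mail or "1">',
                              'agent': 'proxy3'},
                   'jsonrpc': '2.0'}) + '\n'     # LineOnlyReceiver delimiter
```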