upstart.conf.
51 |
52 | ### Building Frontend
53 |
54 | Install nodejs. I suggest using an LTS version >= 4.x from https://github.com/nodesource/distributions, from your Linux distribution, or simply the nodejs package that ships with Ubuntu Xenial 16.04.
55 |
56 | The frontend is a single-page Ember.js application that polls the pool API to render miner stats.
57 |
58 | cd www
59 |
60 | Change ApiUrl: '//example.net/' in www/config/environment.js to match your domain name. Also don't forget to adjust other options.
61 |
62 | npm install -g ember-cli@2.9.1
63 | npm install -g bower
64 | npm install
65 | bower install
66 | ./build.sh
67 |
68 | Configure nginx to serve the API on the /api subdirectory.
69 | Configure nginx to serve www/dist as a static website.
70 |
71 | #### Serving API using nginx
72 |
73 | Create an upstream for API:
74 |
75 | upstream api {
76 | server 127.0.0.1:8080;
77 | }
78 |
79 | and add this setting after location /:
80 |
81 | location /api {
82 | proxy_pass http://api;
83 | }
84 |
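After reloading nginx, you can check that the API is reachable through the proxy (hypothetical domain shown; /api/stats is one of the endpoints the frontend polls):

    curl http://example.net/api/stats
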
85 | #### Customization
86 |
87 | You can customize the layout using built-in web server with live reload:
88 |
89 | ember server --port 8082 --environment development
90 |
91 | **Don't use built-in web server in production**.
92 |
93 | Check out the www/app/templates directory and edit these templates
94 | in order to customise the frontend.
95 |
96 | ### Configuration
97 |
98 | Configuration is actually simple: just read it twice and think twice before changing defaults.
99 |
100 | **Don't copy config directly from this manual. Use the example config from the package,
101 | otherwise you will get errors on start because of JSON comments.**
102 |
103 | ```javascript
104 | {
105 | // Set to the number of CPU cores of your server
106 | "threads": 2,
107 | // Prefix for keys in redis store
108 | "coin": "eth",
109 | // Give unique name to each instance
110 | "name": "main",
111 |
112 | "proxy": {
113 | "enabled": true,
114 |
115 | // Bind HTTP mining endpoint to this IP:PORT
116 | "listen": "0.0.0.0:8888",
117 |
118 | // Allow only this header and body size of HTTP request from miners
119 | "limitHeadersSize": 1024,
120 | "limitBodySize": 256,
121 |
122 | /* Set to true if you are behind CloudFlare (not recommended) or behind http-reverse
123 | proxy to enable IP detection from X-Forwarded-For header.
124 | Advanced users only. It's tricky to make it right and secure.
125 | */
126 | "behindReverseProxy": false,
127 |
128 | // Stratum mining endpoint
129 | "stratum": {
130 | "enabled": true,
131 | // Bind stratum mining socket to this IP:PORT
132 | "listen": "0.0.0.0:8008",
133 | "timeout": "120s",
134 | "maxConn": 8192
135 | },
136 |
137 | // Try to get new job from geth in this interval
138 | "blockRefreshInterval": "120ms",
139 | "stateUpdateInterval": "3s",
140 | // Require this share difficulty from miners
141 | "difficulty": 2000000000,
142 |
143 | /* Reply with an error to miners instead of a job if redis is unavailable.
144 | Should save electricity for miners if the pool is sick and they didn't set up failovers.
145 | */
146 | "healthCheck": true,
147 | // Mark pool sick after this number of redis failures.
148 | "maxFails": 100,
149 | // TTL for workers stats, usually should be equal to large hashrate window from API section
150 | "hashrateExpiration": "3h",
151 |
152 | "policy": {
153 | "workers": 8,
154 | "resetInterval": "60m",
155 | "refreshInterval": "1m",
156 |
157 | "banning": {
158 | "enabled": false,
159 | /* Name of ipset for banning.
160 | Check http://ipset.netfilter.org/ documentation.
161 | */
162 | "ipset": "blacklist",
163 | // Remove ban after this amount of time
164 | "timeout": 1800,
165 | // Percentage of invalid shares (out of all shares) required to ban a miner
166 | "invalidPercent": 30,
167 | // Check only after a miner has submitted this number of shares
168 | "checkThreshold": 30,
169 | // Ban a miner after this number of malformed requests
170 | "malformedLimit": 5
171 | },
172 | // Connection rate limit
173 | "limits": {
174 | "enabled": false,
175 | // Number of initial connections
176 | "limit": 30,
177 | "grace": "5m",
178 | // Increase allowed number of connections on each valid share
179 | "limitJump": 10
180 | }
181 | }
182 | },
183 |
184 | // Provides JSON data for the frontend, which is a static website
185 | "api": {
186 | "enabled": true,
187 | "listen": "0.0.0.0:8080",
188 | // Collect miners stats (hashrate, ...) in this interval
189 | "statsCollectInterval": "5s",
190 | // Purge stale stats interval
191 | "purgeInterval": "10m",
192 | // Fast hashrate estimation window for each miner from its shares
193 | "hashrateWindow": "30m",
194 | // Long and precise hashrate from shares, 3h is cool, keep it
195 | "hashrateLargeWindow": "3h",
196 | // Collect stats for shares/diff ratio for this number of blocks
197 | "luckWindow": [64, 128, 256],
198 | // Max number of payments to display in frontend
199 | "payments": 50,
200 | // Max number of blocks to display in frontend
201 | "blocks": 50,
202 |
203 | /* If you are running the API module on a different server and this module
204 | is reading data from a writeable redis slave, you must run an API instance with this option enabled in order to purge hashrate stats from the main redis node.
205 | Only a writeable redis slave will work properly if you are distributing the pool using redis slaves.
206 | Very advanced. Usually all modules should share the same redis instance.
207 | */
208 | "purgeOnly": false
209 | },
210 |
211 | // Check health of each geth node in this interval
212 | "upstreamCheckInterval": "5s",
213 |
214 | /* List of geth nodes to poll for new jobs. The pool will try to get work from the
215 | first alive node and keep checking the failed ones in the background so they can serve as backups.
216 | The pool's current block template is always cached in RAM.
217 | */
218 | "upstream": [
219 | {
220 | "name": "main",
221 | "url": "http://127.0.0.1:8545",
222 | "timeout": "10s"
223 | },
224 | {
225 | "name": "backup",
226 | "url": "http://127.0.0.2:8545",
227 | "timeout": "10s"
228 | }
229 | ],
230 |
231 | // These are standard redis connection options
232 | "redis": {
233 | // Where your redis instance is listening for commands
234 | "endpoint": "127.0.0.1:6379",
235 | "poolSize": 10,
236 | "database": 0,
237 | "password": ""
238 | },
239 |
240 | // This module periodically remits ether to miners
241 | "unlocker": {
242 | "enabled": false,
243 | // Pool fee percentage
244 | "poolFee": 1.0,
245 | // Pool fees beneficiary address (leave it blank to disable fee withdrawals)
246 | "poolFeeAddress": "",
247 | // Donate 10% from pool fees to developers
248 | "donate": true,
249 | // Unlock a block only after this number of blocks has been mined on top of it
250 | "depth": 120,
251 | // Simply don't touch this option
252 | "immatureDepth": 20,
253 | // Keep mined transaction fees as pool fees
254 | "keepTxFees": false,
255 | // Run unlocker in this interval
256 | "interval": "10m",
257 | // Geth instance node rpc endpoint for unlocking blocks
258 | "daemon": "http://127.0.0.1:8545",
259 | // Raise an error if geth can't be reached within this amount of time
260 | "timeout": "10s"
261 | },
262 |
263 | // Pay out miners using this module
264 | "payouts": {
265 | "enabled": false,
266 | // Require minimum number of peers on node
267 | "requirePeers": 25,
268 | // Run payouts in this interval
269 | "interval": "12h",
270 | // Geth instance node rpc endpoint for payouts processing
271 | "daemon": "http://127.0.0.1:8545",
272 | // Raise an error if geth can't be reached within this amount of time
273 | "timeout": "10s",
274 | // Address with pool balance
275 | "address": "0x0",
277 | // Let geth determine gas and gasPrice
277 | "autoGas": true,
278 | // Gas amount and price for payout tx (advanced users only)
279 | "gas": "21000",
280 | "gasPrice": "50000000000",
281 | // Send payment only if miner's balance is >= 0.5 Ether
282 | "threshold": 500000000,
283 | // Perform BGSAVE on Redis after successful payouts session
284 | "bgsave": false
285 | }
286 | }
287 | ```
288 |
289 | If you are distributing your pool deployment to several servers or processes,
290 | create several configs and disable unneeded modules on each server (advanced users); a minimal example follows the list below.
291 |
292 | I recommend this deployment strategy:
293 |
294 | * Mining instance - 1x (it depends, you can run one node for EU, one for US, one for Asia)
295 | * Unlocker and payouts instance - 1x each (strict!)
296 | * API instance - 1x
297 |
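For example, a minimal sketch of a dedicated API-only instance (assuming the mining, unlocker and payouts modules run elsewhere with their own configs; values mirror the example above, and the comments are for readability only, as noted earlier):

```javascript
{
  "threads": 2,
  "coin": "eth",
  "name": "api",

  // Mining proxy and stratum are handled by other instances
  "proxy": { "enabled": false },

  // Only the API module runs here
  "api": {
    "enabled": true,
    "listen": "0.0.0.0:8080",
    "statsCollectInterval": "5s",
    "purgeInterval": "10m",
    "hashrateWindow": "30m",
    "hashrateLargeWindow": "3h",
    "luckWindow": [64, 128, 256],
    "payments": 50,
    "blocks": 50,
    "purgeOnly": false
  },

  // Every instance must point at the same redis backend
  "redis": {
    "endpoint": "127.0.0.1:6379",
    "poolSize": 10,
    "database": 0,
    "password": ""
  },

  // Unlocking and payouts run on their own instances
  "unlocker": { "enabled": false },
  "payouts": { "enabled": false }
}
```
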
298 | ### Notes
299 |
300 | * Unlocking and payouts are sequential: the 1st tx goes out, the 2nd waits for the 1st to confirm, and so on. You can disable that in code. Carefully read `docs/PAYOUTS.md`.
301 | * Also, keep in mind that **unlocking and payouts will halt in case of backend or node RPC errors**. In that case check everything and restart.
302 | * You must restart the module if you see errors with the word *suspended*.
303 | * Don't run the payouts and unlocker modules as part of the mining node. Create separate configs for both, launch them independently and make sure you have a single instance of each module running.
304 | * If `poolFeeAddress` is not specified, all pool profit will remain on the coinbase address. If it is specified, make sure to periodically send some dust back to the coinbase address, since it is required for payments.
305 |
306 | ### Alternative Ethereum Implementations
307 |
308 | This pool is tested to work with [Ethcore's Parity](https://github.com/ethcore/parity). Mining and block unlocking work, but I am not sure about payouts and suggest running the *official* geth node for payments.
309 |
310 | ### Credits
311 |
312 | Made by sammy007. Licensed under GPLv3.
313 |
314 | #### Contributors
315 |
316 | [Alex Leverington](https://github.com/subtly)
317 |
318 | ### Donations
319 |
320 | ETH/ETC: 0xb85150eb365e7df0941f0cf08235f987ba91506a
321 |
322 | 
323 |
324 | Highly appreciated.
325 |
--------------------------------------------------------------------------------
/api/server.go:
--------------------------------------------------------------------------------
1 | package api
2 |
3 | import (
4 | "encoding/json"
5 | "log"
6 | "net/http"
7 | "sort"
8 | "strings"
9 | "sync"
10 | "sync/atomic"
11 | "time"
12 |
13 | "github.com/gorilla/mux"
14 |
15 | "github.com/sammy007/open-ethereum-pool/storage"
16 | "github.com/sammy007/open-ethereum-pool/util"
17 | )
18 |
19 | type ApiConfig struct {
20 | Enabled bool `json:"enabled"`
21 | Listen string `json:"listen"`
22 | StatsCollectInterval string `json:"statsCollectInterval"`
23 | HashrateWindow string `json:"hashrateWindow"`
24 | HashrateLargeWindow string `json:"hashrateLargeWindow"`
25 | LuckWindow []int `json:"luckWindow"`
26 | Payments int64 `json:"payments"`
27 | Blocks int64 `json:"blocks"`
28 | PurgeOnly bool `json:"purgeOnly"`
29 | PurgeInterval string `json:"purgeInterval"`
30 | }
31 |
32 | type ApiServer struct {
33 | config *ApiConfig
34 | backend *storage.RedisClient
35 | hashrateWindow time.Duration
36 | hashrateLargeWindow time.Duration
37 | stats atomic.Value
38 | miners map[string]*Entry
39 | minersMu sync.RWMutex
40 | statsIntv time.Duration
41 | }
42 |
43 | type Entry struct {
44 | stats map[string]interface{}
45 | updatedAt int64
46 | }
47 |
48 | func NewApiServer(cfg *ApiConfig, backend *storage.RedisClient) *ApiServer {
49 | hashrateWindow := util.MustParseDuration(cfg.HashrateWindow)
50 | hashrateLargeWindow := util.MustParseDuration(cfg.HashrateLargeWindow)
51 | return &ApiServer{
52 | config: cfg,
53 | backend: backend,
54 | hashrateWindow: hashrateWindow,
55 | hashrateLargeWindow: hashrateLargeWindow,
56 | miners: make(map[string]*Entry),
57 | }
58 | }
59 |
60 | func (s *ApiServer) Start() {
61 | if s.config.PurgeOnly {
62 | log.Printf("Starting API in purge-only mode")
63 | } else {
64 | log.Printf("Starting API on %v", s.config.Listen)
65 | }
66 |
67 | s.statsIntv = util.MustParseDuration(s.config.StatsCollectInterval)
68 | statsTimer := time.NewTimer(s.statsIntv)
69 | log.Printf("Set stats collect interval to %v", s.statsIntv)
70 |
71 | purgeIntv := util.MustParseDuration(s.config.PurgeInterval)
72 | purgeTimer := time.NewTimer(purgeIntv)
73 | log.Printf("Set purge interval to %v", purgeIntv)
74 |
75 | sort.Ints(s.config.LuckWindow)
76 |
77 | if s.config.PurgeOnly {
78 | s.purgeStale()
79 | } else {
80 | s.purgeStale()
81 | s.collectStats()
82 | }
83 |
84 | go func() {
85 | for {
86 | select {
87 | case <-statsTimer.C:
88 | if !s.config.PurgeOnly {
89 | s.collectStats()
90 | }
91 | statsTimer.Reset(s.statsIntv)
92 | case <-purgeTimer.C:
93 | s.purgeStale()
94 | purgeTimer.Reset(purgeIntv)
95 | }
96 | }
97 | }()
98 |
99 | if !s.config.PurgeOnly {
100 | s.listen()
101 | }
102 | }
103 |
104 | func (s *ApiServer) listen() {
105 | r := mux.NewRouter()
106 | r.HandleFunc("/api/stats", s.StatsIndex)
107 | r.HandleFunc("/api/miners", s.MinersIndex)
108 | r.HandleFunc("/api/blocks", s.BlocksIndex)
109 | r.HandleFunc("/api/payments", s.PaymentsIndex)
110 | r.HandleFunc("/api/accounts/{login:0x[0-9a-fA-F]{40}}", s.AccountIndex)
111 | r.NotFoundHandler = http.HandlerFunc(notFound)
112 | err := http.ListenAndServe(s.config.Listen, r)
113 | if err != nil {
114 | log.Fatalf("Failed to start API: %v", err)
115 | }
116 | }
117 |
118 | func notFound(w http.ResponseWriter, r *http.Request) {
119 | w.Header().Set("Content-Type", "application/json; charset=UTF-8")
120 | w.Header().Set("Access-Control-Allow-Origin", "*")
121 | w.Header().Set("Cache-Control", "no-cache")
122 | w.WriteHeader(http.StatusNotFound)
123 | }
124 |
125 | func (s *ApiServer) purgeStale() {
126 | start := time.Now()
127 | total, err := s.backend.FlushStaleStats(s.hashrateWindow, s.hashrateLargeWindow)
128 | if err != nil {
129 | log.Println("Failed to purge stale data from backend:", err)
130 | } else {
131 | log.Printf("Purged stale stats from backend, %v shares affected, elapsed time %v", total, time.Since(start))
132 | }
133 | }
134 |
135 | func (s *ApiServer) collectStats() {
136 | start := time.Now()
137 | stats, err := s.backend.CollectStats(s.hashrateWindow, s.config.Blocks, s.config.Payments)
138 | if err != nil {
139 | log.Printf("Failed to fetch stats from backend: %v", err)
140 | return
141 | }
142 | if len(s.config.LuckWindow) > 0 {
143 | stats["luck"], err = s.backend.CollectLuckStats(s.config.LuckWindow)
144 | if err != nil {
145 | log.Printf("Failed to fetch luck stats from backend: %v", err)
146 | return
147 | }
148 | }
149 | s.stats.Store(stats)
150 | log.Printf("Stats collection finished %s", time.Since(start))
151 | }
152 |
153 | func (s *ApiServer) StatsIndex(w http.ResponseWriter, r *http.Request) {
154 | w.Header().Set("Content-Type", "application/json; charset=UTF-8")
155 | w.Header().Set("Access-Control-Allow-Origin", "*")
156 | w.Header().Set("Cache-Control", "no-cache")
157 | w.WriteHeader(http.StatusOK)
158 |
159 | reply := make(map[string]interface{})
160 | nodes, err := s.backend.GetNodeStates()
161 | if err != nil {
162 | log.Printf("Failed to get nodes stats from backend: %v", err)
163 | }
164 | reply["nodes"] = nodes
165 |
166 | stats := s.getStats()
167 | if stats != nil {
168 | reply["now"] = util.MakeTimestamp()
169 | reply["stats"] = stats["stats"]
170 | reply["hashrate"] = stats["hashrate"]
171 | reply["minersTotal"] = stats["minersTotal"]
172 | reply["maturedTotal"] = stats["maturedTotal"]
173 | reply["immatureTotal"] = stats["immatureTotal"]
174 | reply["candidatesTotal"] = stats["candidatesTotal"]
175 | }
176 |
177 | err = json.NewEncoder(w).Encode(reply)
178 | if err != nil {
179 | log.Println("Error serializing API response: ", err)
180 | }
181 | }
182 |
183 | func (s *ApiServer) MinersIndex(w http.ResponseWriter, r *http.Request) {
184 | w.Header().Set("Content-Type", "application/json; charset=UTF-8")
185 | w.Header().Set("Access-Control-Allow-Origin", "*")
186 | w.Header().Set("Cache-Control", "no-cache")
187 | w.WriteHeader(http.StatusOK)
188 |
189 | reply := make(map[string]interface{})
190 | stats := s.getStats()
191 | if stats != nil {
192 | reply["now"] = util.MakeTimestamp()
193 | reply["miners"] = stats["miners"]
194 | reply["hashrate"] = stats["hashrate"]
195 | reply["minersTotal"] = stats["minersTotal"]
196 | }
197 |
198 | err := json.NewEncoder(w).Encode(reply)
199 | if err != nil {
200 | log.Println("Error serializing API response: ", err)
201 | }
202 | }
203 |
204 | func (s *ApiServer) BlocksIndex(w http.ResponseWriter, r *http.Request) {
205 | w.Header().Set("Content-Type", "application/json; charset=UTF-8")
206 | w.Header().Set("Access-Control-Allow-Origin", "*")
207 | w.Header().Set("Cache-Control", "no-cache")
208 | w.WriteHeader(http.StatusOK)
209 |
210 | reply := make(map[string]interface{})
211 | stats := s.getStats()
212 | if stats != nil {
213 | reply["matured"] = stats["matured"]
214 | reply["maturedTotal"] = stats["maturedTotal"]
215 | reply["immature"] = stats["immature"]
216 | reply["immatureTotal"] = stats["immatureTotal"]
217 | reply["candidates"] = stats["candidates"]
218 | reply["candidatesTotal"] = stats["candidatesTotal"]
219 | reply["luck"] = stats["luck"]
220 | }
221 |
222 | err := json.NewEncoder(w).Encode(reply)
223 | if err != nil {
224 | log.Println("Error serializing API response: ", err)
225 | }
226 | }
227 |
228 | func (s *ApiServer) PaymentsIndex(w http.ResponseWriter, r *http.Request) {
229 | w.Header().Set("Content-Type", "application/json; charset=UTF-8")
230 | w.Header().Set("Access-Control-Allow-Origin", "*")
231 | w.Header().Set("Cache-Control", "no-cache")
232 | w.WriteHeader(http.StatusOK)
233 |
234 | reply := make(map[string]interface{})
235 | stats := s.getStats()
236 | if stats != nil {
237 | reply["payments"] = stats["payments"]
238 | reply["paymentsTotal"] = stats["paymentsTotal"]
239 | }
240 |
241 | err := json.NewEncoder(w).Encode(reply)
242 | if err != nil {
243 | log.Println("Error serializing API response: ", err)
244 | }
245 | }
246 |
247 | func (s *ApiServer) AccountIndex(w http.ResponseWriter, r *http.Request) {
248 | w.Header().Set("Content-Type", "application/json; charset=UTF-8")
249 | w.Header().Set("Access-Control-Allow-Origin", "*")
250 | w.Header().Set("Cache-Control", "no-cache")
251 |
252 | login := strings.ToLower(mux.Vars(r)["login"])
253 | s.minersMu.Lock()
254 | defer s.minersMu.Unlock()
255 |
256 | reply, ok := s.miners[login]
257 | now := util.MakeTimestamp()
258 | cacheIntv := int64(s.statsIntv / time.Millisecond)
259 | // Refresh stats if stale
260 | if !ok || reply.updatedAt < now-cacheIntv {
261 | exist, err := s.backend.IsMinerExists(login)
262 | if err != nil {
263 | w.WriteHeader(http.StatusInternalServerError)
264 | log.Printf("Failed to fetch stats from backend: %v", err)
265 | return
266 | }
267 | if !exist {
268 | w.WriteHeader(http.StatusNotFound)
269 | return
270 | }
271 |
272 | stats, err := s.backend.GetMinerStats(login, s.config.Payments)
273 | if err != nil {
274 | w.WriteHeader(http.StatusInternalServerError)
275 | log.Printf("Failed to fetch stats from backend: %v", err)
276 | return
277 | }
278 | workers, err := s.backend.CollectWorkersStats(s.hashrateWindow, s.hashrateLargeWindow, login)
279 | if err != nil {
280 | w.WriteHeader(http.StatusInternalServerError)
281 | log.Printf("Failed to fetch stats from backend: %v", err)
282 | return
283 | }
284 | for key, value := range workers {
285 | stats[key] = value
286 | }
287 | stats["pageSize"] = s.config.Payments
288 | reply = &Entry{stats: stats, updatedAt: now}
289 | s.miners[login] = reply
290 | }
291 |
292 | w.WriteHeader(http.StatusOK)
293 | err := json.NewEncoder(w).Encode(reply.stats)
294 | if err != nil {
295 | log.Println("Error serializing API response: ", err)
296 | }
297 | }
298 |
299 | func (s *ApiServer) getStats() map[string]interface{} {
300 | stats := s.stats.Load()
301 | if stats != nil {
302 | return stats.(map[string]interface{})
303 | }
304 | return nil
305 | }
306 |
--------------------------------------------------------------------------------
/build/env.sh:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 |
3 | set -e
4 |
5 | if [ ! -f "build/env.sh" ]; then
6 | echo "$0 must be run from the root of the repository."
7 | exit 2
8 | fi
9 |
10 | # Create fake Go workspace if it doesn't exist yet.
11 | workspace="$PWD/build/_workspace"
12 | root="$PWD"
13 | ethdir="$workspace/src/github.com/sammy007"
14 | if [ ! -L "$ethdir/open-ethereum-pool" ]; then
15 | mkdir -p "$ethdir"
16 | cd "$ethdir"
17 | ln -s ../../../../../. open-ethereum-pool
18 | cd "$root"
19 | fi
20 |
21 | # Set up the environment to use the workspace.
22 | # Also add Godeps workspace so we build using canned dependencies.
23 | GOPATH="$workspace"
24 | GOBIN="$PWD/build/bin"
25 | export GOPATH GOBIN
26 |
27 | # Run the command inside the workspace.
28 | cd "$ethdir/open-ethereum-pool"
29 | PWD="$ethdir/open-ethereum-pool"
30 |
31 | # Launch the arguments with the configured environment.
32 | exec "$@"
33 |
--------------------------------------------------------------------------------
/config.example.json:
--------------------------------------------------------------------------------
1 | {
2 | "threads": 2,
3 | "coin": "eth",
4 | "name": "main",
5 |
6 | "proxy": {
7 | "enabled": true,
8 | "listen": "0.0.0.0:8888",
9 | "limitHeadersSize": 1024,
10 | "limitBodySize": 256,
11 | "behindReverseProxy": false,
12 | "blockRefreshInterval": "120ms",
13 | "stateUpdateInterval": "3s",
14 | "difficulty": 2000000000,
15 | "hashrateExpiration": "3h",
16 |
17 | "healthCheck": true,
18 | "maxFails": 100,
19 |
20 | "stratum": {
21 | "enabled": true,
22 | "listen": "0.0.0.0:8008",
23 | "timeout": "120s",
24 | "maxConn": 8192
25 | },
26 |
27 | "policy": {
28 | "workers": 8,
29 | "resetInterval": "60m",
30 | "refreshInterval": "1m",
31 |
32 | "banning": {
33 | "enabled": false,
34 | "ipset": "blacklist",
35 | "timeout": 1800,
36 | "invalidPercent": 30,
37 | "checkThreshold": 30,
38 | "malformedLimit": 5
39 | },
40 | "limits": {
41 | "enabled": false,
42 | "limit": 30,
43 | "grace": "5m",
44 | "limitJump": 10
45 | }
46 | }
47 | },
48 |
49 | "api": {
50 | "enabled": true,
51 | "purgeOnly": false,
52 | "purgeInterval": "10m",
53 | "listen": "0.0.0.0:8080",
54 | "statsCollectInterval": "5s",
55 | "hashrateWindow": "30m",
56 | "hashrateLargeWindow": "3h",
57 | "luckWindow": [64, 128, 256],
58 | "payments": 30,
59 | "blocks": 50
60 | },
61 |
62 | "upstreamCheckInterval": "5s",
63 | "upstream": [
64 | {
65 | "name": "main",
66 | "url": "http://127.0.0.1:8545",
67 | "timeout": "10s"
68 | },
69 | {
70 | "name": "backup",
71 | "url": "http://127.0.0.2:8545",
72 | "timeout": "10s"
73 | }
74 | ],
75 |
76 | "redis": {
77 | "endpoint": "127.0.0.1:6379",
78 | "poolSize": 10,
79 | "database": 0,
80 | "password": ""
81 | },
82 |
83 | "unlocker": {
84 | "enabled": false,
85 | "poolFee": 1.0,
86 | "poolFeeAddress": "",
87 | "donate": true,
88 | "depth": 120,
89 | "immatureDepth": 20,
90 | "keepTxFees": false,
91 | "interval": "10m",
92 | "daemon": "http://127.0.0.1:8545",
93 | "timeout": "10s"
94 | },
95 |
96 | "payouts": {
97 | "enabled": false,
98 | "requirePeers": 25,
99 | "interval": "120m",
100 | "daemon": "http://127.0.0.1:8545",
101 | "timeout": "10s",
102 | "address": "0x0",
103 | "gas": "21000",
104 | "gasPrice": "50000000000",
105 | "autoGas": true,
106 | "threshold": 500000000,
107 | "bgsave": false
108 | },
109 |
110 | "newrelicEnabled": false,
111 | "newrelicName": "MyEtherProxy",
112 | "newrelicKey": "SECRET_KEY",
113 | "newrelicVerbose": false
114 | }
115 |
--------------------------------------------------------------------------------
/docs/PAYOUTS.md:
--------------------------------------------------------------------------------
1 | **First of all, make sure your Redis instance and backups are configured properly: http://redis.io/topics/persistence.**
2 |
3 | Keep in mind that the pool maintains all balances in **Shannon**.
4 |
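As a quick reference, here is a minimal sketch (not part of the pool code) of the conversion the payer performs with `util.Shannon` when it turns a Shannon balance into Wei:

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// 1 Shannon (Gwei) = 10^9 Wei; 1 Ether = 10^18 Wei.
	shannon := big.NewInt(1000000000)

	// The example config's payout threshold, kept in Shannon like all balances.
	threshold := big.NewInt(500000000)

	// Convert Shannon to Wei the same way payouts/payer.go does.
	inWei := new(big.Int).Mul(threshold, shannon)
	fmt.Println(inWei.String(), "Wei") // 500000000000000000 Wei = 0.5 Ether
}
```
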
5 | # Processing and Resolving Payouts
6 |
7 | **You MUST run the payouts module in a separate process.** Ideally, don't run it as a daemon: process payouts 2-3 times per day and watch how it goes. **You must configure logging**, otherwise failures can lead to big problems.
8 |
9 | The module will fetch accounts and sequentially process payouts.
10 |
11 | For every account that has reached the minimum threshold:
12 |
13 | * Check if we have enough peers on a node
14 | * Check that account is unlocked
15 |
16 | If any of these checks fails, the module will not even try to continue.
17 |
18 | * Check if we have enough money for payout (should not happen under normal circumstances)
19 | * Lock payments
20 |
21 | If payments can't be locked (another lock exists, usually after a failure), the module will halt payouts.
22 |
23 | * Deduct balance of a miner and log pending payment
24 | * Submit a transaction to a node via `eth_sendTransaction`
25 |
26 | **If transaction submission fails, payouts will remain locked and halted in erroneous state.**
27 |
28 | If transaction submission was successful, we have a TX hash:
29 |
30 | * Write this TX hash to a database
31 | * Unlock payouts
32 |
33 | And so on. Repeat for every account.
34 |
35 | After a payout session, the payment module will perform `BGSAVE` (background saving) on Redis if you have enabled the `bgsave` option.
36 |
37 | ## Resolving Failed Payments (automatic)
38 |
39 | If your payout is not logged and not confirmed by the Ethereum network, you can resolve it automatically. You need to run payouts in maintenance mode by setting the `RESOLVE_PAYOUT=1` or `RESOLVE_PAYOUT=True` environment variable:
40 |
41 | `RESOLVE_PAYOUT=1 ./build/bin/open-ethereum-pool payouts.json`.
42 |
43 | The payout module will fetch all rows from Redis with key `eth:payments:pending` and credit the balances back to miners. Usually you will have only a single entry there.
44 |
45 | If you see `No pending payments to resolve`, there is no data about failed debits.
46 |
47 | If a debit operation was performed that was not followed by an actual money transfer (because `eth_sendTransaction` returned an error), you will likely see:
48 |
49 | ```
50 | Will credit back following balances:
51 | Address: 0xb85150eb365e7df0941f0cf08235f987ba91506a, Amount: 166798415 Shannon, 2016-05-11 08:14:34
52 | ```
53 |
54 | followed by
55 |
56 | ```
57 | Credited 166798415 Shannon back to 0xb85150eb365e7df0941f0cf08235f987ba91506a
58 | ```
59 |
60 | Usually every maintenance run ends with the following message and a halt:
61 |
62 | ```
63 | Payouts unlocked
64 | Now you have to restart payouts module with RESOLVE_PAYOUT=0 for normal run
65 | ```
66 |
67 | Unset `RESOLVE_PAYOUT=1` or run payouts with `RESOLVE_PAYOUT=0`.
68 |
69 | ## Resolving Failed Payment (manual)
70 |
71 | You can perform manual maintenance using `geth` and `redis-cli` utilities.
72 |
73 | ### Check For Failed Transactions:
74 |
75 | Perform the following command in a `redis-cli`:
76 |
77 | ```
78 | ZREVRANGE "eth:payments:pending" 0 -1 WITHSCORES
79 | ```
80 |
81 | The result will look like this:
82 |
83 | > 1) "0xb85150eb365e7df0941f0cf08235f987ba91506a:25000000"
84 |
85 | It's a pair of `LOGIN:AMOUNT`.
86 |
87 | > 2) "1462920526"
88 |
89 | It's a `UNIXTIME`.
90 |
91 | ### Manual Payment Submission
92 |
93 | **Make sure no TX was sent, using a block explorer. Skip this step if the payment actually exists in the blockchain.**
94 |
95 | ```javascript
96 | eth.sendTransaction({
97 | from: eth.coinbase,
98 | to: '0xb85150eb365e7df0941f0cf08235f987ba91506a',
99 | value: web3.toWei(25000000, 'shannon')
100 | })
101 |
102 | // => 0xe670ec64341771606e55d6b4ca35a1a6b75ee3d5145a99d05921026d1527331
103 | ```
104 |
105 | **Write down tx hash**.
106 |
107 | ### Store Payment in Redis
108 |
109 | Also usable for fixing missing payment entries.
110 |
111 | ```
112 | ZADD "eth:payments:all" 1462920526 0xe670ec64341771606e55d6b4ca35a1a6b75ee3d5145a99d05921026d1527331:0xb85150eb365e7df0941f0cf08235f987ba91506a:25000000
113 | ```
114 |
115 | ```
116 | ZADD "eth:payments:0xb85150eb365e7df0941f0cf08235f987ba91506a" 1462920526 0xe670ec64341771606e55d6b4ca35a1a6b75ee3d5145a99d05921026d1527331:25000000
117 | ```
118 |
119 | ### Delete Erroneous Payment Entry
120 |
121 | ```
122 | ZREM "eth:payments:pending" "0xb85150eb365e7df0941f0cf08235f987ba91506a:25000000"
123 | ```
124 |
125 | ### Update Internal Stats
126 |
127 | ```
128 | HINCRBY "eth:finances" pending -25000000
129 | HINCRBY "eth:finances" paid 25000000
130 | ```
131 |
132 | ### Unlock Payouts
133 |
134 | ```
135 | DEL "eth:payments:lock"
136 | ```
137 |
138 | ## Resolving Missing Payment Entries
139 |
140 | If the pool actually paid but didn't log the transaction, scroll up to the `Store Payment in Redis` section. You should have a transaction hash from a block explorer.
141 |
142 | ## Transaction Didn't Confirm
143 |
144 | If you are sure the transaction didn't confirm, just repeat it manually; you should have all the logs.
145 |
--------------------------------------------------------------------------------
/docs/POLICIES.md:
--------------------------------------------------------------------------------
1 | # Enforcing Policies
2 |
3 | The pool policy server collects several stats on a per-IP basis. There are two banning options: `iptables+ipset` or simple application-level bans. Banning is disabled by default.
4 |
5 | ## Firewall Banning
6 |
7 | First you need to configure your firewall to use `ipset`, read [this article](https://wiki.archlinux.org/index.php/Ipset).
8 |
9 | Specify the `ipset` name for banning in the `policy` section. The timeout argument (in seconds) will be passed to this `ipset`. Stratum will use an `os/exec` command like `sudo ipset add banlist x.x.x.x 1800` for banning, so you have to configure `sudo` properly and make sure that your system will never ask for a password:
10 |
11 | Example `/etc/sudoers.d/pool`, where `pool` is the username under which the pool runs:
12 |
13 | pool ALL=NOPASSWD: /sbin/ipset
14 |
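The pool is only expected to add entries to the set; creating it and wiring it into your firewall is up to you. One possible setup for the `blacklist` set name from the example config (an illustration only, see the ipset documentation linked above):

    ipset create blacklist hash:ip timeout 0
    iptables -I INPUT -m set --match-set blacklist src -j DROP
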
15 | If you need something simple, just set the `ipset` name to a blank string and simple application-level banning will be used instead.
16 |
17 | ## Limiting
18 |
19 | Under some weird circumstances you can enforce limits to prevent a connection flood to stratum. There are two settings: `limit` and `limitJump`. The policy server will increase the number of allowed connections per IP address on each valid share submission, so with `limit: 30` and `limitJump: 10` an IP may open 30 connections initially and 10 more for each valid share it submits. Stratum will not enforce this policy for the `grace` period after stratum starts.
20 |
--------------------------------------------------------------------------------
/docs/STRATUM.md:
--------------------------------------------------------------------------------
1 | # Stratum Mining Protocol
2 |
3 | This is a description of the stratum protocol used in this pool.
4 |
5 | Stratum defines simple exception handling. An example of a rejected share looks like:
6 |
7 | ```javascript
8 | { "id": 1, "jsonrpc": "2.0", "result": null, "error": { code: 23, message: "Invalid share" } }
9 | ```
10 |
11 | Each response with an exception is followed by a disconnect.
12 |
13 | ## Authentication
14 |
15 | Request looks like:
16 |
17 | ```javascript
18 | {
19 | "id": 1,
20 | "jsonrpc": "2.0",
21 | "method": "eth_submitLogin",
22 | "params": ["0xb85150eb365e7df0941f0cf08235f987ba91506a"]
23 | }
24 | ```
25 |
26 | The request can include an additional 2nd param (an email, for example):
27 |
28 | ```javascript
29 | {
30 | "id": 1,
31 | "jsonrpc": "2.0",
32 | "method": "eth_submitLogin",
33 | "params": ["0xb85150eb365e7df0941f0cf08235f987ba91506a", "admin@example.net"]
34 | }
35 | ```
36 |
37 | Successful response:
38 |
39 | ```javascript
40 | { "id": 1, "jsonrpc": "2.0", "result": true }
41 | ```
42 |
43 | Exceptions:
44 |
45 | ```javascript
46 | { "id": 1, "jsonrpc": "2.0", "result": null, "error": { code: -1, message: "Invalid login" } }
47 | ```
48 |
49 | ## Request For Job
50 |
51 | Request looks like:
52 |
53 | ```javascript
54 | { "id": 1, "jsonrpc": "2.0", "method": "eth_getWork" }
55 | ```
56 |
57 | Successful response:
58 |
59 | ```javascript
60 | {
61 | "id": 1,
62 | "jsonrpc": "2.0",
63 | "result": [
64 | "0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef",
65 | "0x5eed00000000000000000000000000005eed0000000000000000000000000000",
66 | "0xd1ff1c01710000000000000000000000d1ff1c01710000000000000000000000"
67 | ]
68 | }
69 | ```
70 |
71 | Exceptions:
72 |
73 | ```javascript
74 | { "id": 10, "result": null, "error": { code: 0, message: "Work not ready" } }
75 | ```
76 |
77 | ## New Job Notification
78 |
79 | The server sends a job to peers when a new job is available:
80 |
81 | ```javascript
82 | {
83 | "jsonrpc": "2.0",
84 | "result": [
85 | "0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef",
86 | "0x5eed00000000000000000000000000005eed0000000000000000000000000000",
87 | "0xd1ff1c01710000000000000000000000d1ff1c01710000000000000000000000"
88 | ]
89 | }
90 | ```
91 |
92 | ## Share Submission
93 |
94 | Request looks like:
95 |
96 | ```javascript
97 | {
98 | "id": 1,
99 | "jsonrpc": "2.0",
100 | "method": "eth_submitWork",
101 | "params": [
102 | "0xe05d1fd4002d962f",
103 | "0x6c872e2304cd1e64b553a65387d7383470f22331aff288cbce5748dc430f016a",
104 | "0x2b20a6c641ed155b893ee750ef90ec3be5d24736d16838b84759385b6724220d"
105 | ]
106 | }
107 | ```
108 |
109 | The request can include an optional `worker` param:
110 |
111 | ```javascript
112 | { "id": 1, "worker": "rig-1" /* ... */ }
113 | ```
114 |
115 | Response:
116 |
117 | ```javascript
118 | { "id": 1, "jsonrpc": "2.0", "result": true }
119 | { "id": 1, "jsonrpc": "2.0", "result": false }
120 | ```
121 |
122 | Exceptions:
123 |
124 | The pool MAY return an exception on invalid share submission, usually followed by a temporary ban.
125 |
126 | ```javascript
127 | { "id": 1, "jsonrpc": "2.0", "result": null, "error": { code: 23, message: "Invalid share" } }
128 | ```
129 |
130 | ```javascript
131 | { "id": 1, "jsonrpc": "2.0", "result": null, "error": { code: 22, message: "Duplicate share" } }
132 | { "id": 1, "jsonrpc": "2.0", "result": null, "error": { code: -1, message: "High rate of invalid shares" } }
133 | { "id": 1, "jsonrpc": "2.0", "result": null, "error": { code: 25, message: "Not subscribed" } }
134 | { "id": 1, "jsonrpc": "2.0", "result": null, "error": { code: -1, message: "Malformed PoW result" } }
135 | ```
136 |
137 | ## Submit Hashrate
138 |
139 | `eth_submitHashrate` is a nonsense method. The pool ignores it and the reply is always:
140 |
141 | ```javascript
142 | { "id": 1, "jsonrpc": "2.0", "result": true }
143 | ```
144 |
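For reference, here is a minimal client sketch (an illustration only, assuming the stratum endpoint from the example config, `127.0.0.1:8008`, and newline-delimited JSON framing; error handling is omitted):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"net"
)

func main() {
	// Connect to the stratum endpoint from the example config.
	conn, err := net.Dial("tcp", "127.0.0.1:8008")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	enc := json.NewEncoder(conn) // Encode appends the trailing newline
	reader := bufio.NewReader(conn)

	// Authenticate as described in the Authentication section.
	enc.Encode(map[string]interface{}{
		"id": 1, "jsonrpc": "2.0", "method": "eth_submitLogin",
		"params": []string{"0xb85150eb365e7df0941f0cf08235f987ba91506a"},
	})
	reply, _ := reader.ReadString('\n')
	fmt.Print("login reply: ", reply)

	// Request a job; the result is [headerHash, seedHash, target].
	enc.Encode(map[string]interface{}{
		"id": 2, "jsonrpc": "2.0", "method": "eth_getWork",
	})
	reply, _ = reader.ReadString('\n')
	fmt.Print("job: ", reply)
}
```
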
--------------------------------------------------------------------------------
/main.go:
--------------------------------------------------------------------------------
1 | // +build go1.9
2 |
3 | package main
4 |
5 | import (
6 | "encoding/json"
7 | "log"
8 | "math/rand"
9 | "os"
10 | "path/filepath"
11 | "runtime"
12 | "time"
13 |
14 | "github.com/yvasiyarov/gorelic"
15 |
16 | "github.com/sammy007/open-ethereum-pool/api"
17 | "github.com/sammy007/open-ethereum-pool/payouts"
18 | "github.com/sammy007/open-ethereum-pool/proxy"
19 | "github.com/sammy007/open-ethereum-pool/storage"
20 | )
21 |
22 | var cfg proxy.Config
23 | var backend *storage.RedisClient
24 |
25 | func startProxy() {
26 | s := proxy.NewProxy(&cfg, backend)
27 | s.Start()
28 | }
29 |
30 | func startApi() {
31 | s := api.NewApiServer(&cfg.Api, backend)
32 | s.Start()
33 | }
34 |
35 | func startBlockUnlocker() {
36 | u := payouts.NewBlockUnlocker(&cfg.BlockUnlocker, backend)
37 | u.Start()
38 | }
39 |
40 | func startPayoutsProcessor() {
41 | u := payouts.NewPayoutsProcessor(&cfg.Payouts, backend)
42 | u.Start()
43 | }
44 |
45 | func startNewrelic() {
46 | if cfg.NewrelicEnabled {
47 | nr := gorelic.NewAgent()
48 | nr.Verbose = cfg.NewrelicVerbose
49 | nr.NewrelicLicense = cfg.NewrelicKey
50 | nr.NewrelicName = cfg.NewrelicName
51 | nr.Run()
52 | }
53 | }
54 |
55 | func readConfig(cfg *proxy.Config) {
56 | configFileName := "config.json"
57 | if len(os.Args) > 1 {
58 | configFileName = os.Args[1]
59 | }
60 | configFileName, _ = filepath.Abs(configFileName)
61 | log.Printf("Loading config: %v", configFileName)
62 |
63 | configFile, err := os.Open(configFileName)
64 | if err != nil {
65 | log.Fatal("File error: ", err.Error())
66 | }
67 | defer configFile.Close()
68 | jsonParser := json.NewDecoder(configFile)
69 | if err := jsonParser.Decode(&cfg); err != nil {
70 | log.Fatal("Config error: ", err.Error())
71 | }
72 | }
73 |
74 | func main() {
75 | readConfig(&cfg)
76 | rand.Seed(time.Now().UnixNano())
77 |
78 | if cfg.Threads > 0 {
79 | runtime.GOMAXPROCS(cfg.Threads)
80 | log.Printf("Running with %v threads", cfg.Threads)
81 | }
82 |
83 | startNewrelic()
84 |
85 | backend = storage.NewRedisClient(&cfg.Redis, cfg.Coin)
86 | pong, err := backend.Check()
87 | if err != nil {
88 | log.Printf("Can't establish connection to backend: %v", err)
89 | } else {
90 | log.Printf("Backend check reply: %v", pong)
91 | }
92 |
93 | if cfg.Proxy.Enabled {
94 | go startProxy()
95 | }
96 | if cfg.Api.Enabled {
97 | go startApi()
98 | }
99 | if cfg.BlockUnlocker.Enabled {
100 | go startBlockUnlocker()
101 | }
102 | if cfg.Payouts.Enabled {
103 | go startPayoutsProcessor()
104 | }
105 | quit := make(chan bool)
106 | <-quit
107 | }
108 |
--------------------------------------------------------------------------------
/misc/nginx-default.conf:
--------------------------------------------------------------------------------
1 | upstream api {
2 | server 127.0.0.1:8080;
3 | }
4 |
5 | server {
6 | listen 0.0.0.0:80;
7 | root /path/to/pool/www/dist;
8 | index index.html index.htm;
9 |
10 | server_name localhost;
11 |
12 | location /api {
13 | proxy_pass http://api;
14 | }
15 |
16 | location / {
17 | try_files $uri $uri/ /index.html;
18 | }
19 | }
20 |
--------------------------------------------------------------------------------
/misc/upstart.conf:
--------------------------------------------------------------------------------
1 | # Open-Ethereum-Pool
2 | description "Open-Ethereum-Pool"
3 |
4 | env DAEMON=/home/main/src/open-ethereum-pool/build/bin/open-ethereum-pool
5 | env CONFIG=/home/main/src/open-ethereum-pool/config.json
6 |
7 | start on filesystem or runlevel [2345]
8 | stop on runlevel [!2345]
9 |
10 | setuid main
11 | setgid main
12 |
13 | kill signal INT
14 |
15 | respawn
16 | respawn limit 10 5
17 | umask 022
18 |
19 | pre-start script
20 | test -x $DAEMON || { stop; exit 0; }
21 | end script
22 |
23 | # Start
24 | script
25 | exec $DAEMON $CONFIG
26 | end script
27 |
--------------------------------------------------------------------------------
/payouts/payer.go:
--------------------------------------------------------------------------------
1 | package payouts
2 |
3 | import (
4 | "fmt"
5 | "log"
6 | "math/big"
7 | "os"
8 | "strconv"
9 | "time"
10 |
11 | "github.com/ethereum/go-ethereum/common/hexutil"
12 |
13 | "github.com/sammy007/open-ethereum-pool/rpc"
14 | "github.com/sammy007/open-ethereum-pool/storage"
15 | "github.com/sammy007/open-ethereum-pool/util"
16 | )
17 |
18 | const txCheckInterval = 5 * time.Second
19 |
20 | type PayoutsConfig struct {
21 | Enabled bool `json:"enabled"`
22 | RequirePeers int64 `json:"requirePeers"`
23 | Interval string `json:"interval"`
24 | Daemon string `json:"daemon"`
25 | Timeout string `json:"timeout"`
26 | Address string `json:"address"`
27 | Gas string `json:"gas"`
28 | GasPrice string `json:"gasPrice"`
29 | AutoGas bool `json:"autoGas"`
30 | // In Shannon
31 | Threshold int64 `json:"threshold"`
32 | BgSave bool `json:"bgsave"`
33 | }
34 |
35 | func (self PayoutsConfig) GasHex() string {
36 | x := util.String2Big(self.Gas)
37 | return hexutil.EncodeBig(x)
38 | }
39 |
40 | func (self PayoutsConfig) GasPriceHex() string {
41 | x := util.String2Big(self.GasPrice)
42 | return hexutil.EncodeBig(x)
43 | }
44 |
45 | type PayoutsProcessor struct {
46 | config *PayoutsConfig
47 | backend *storage.RedisClient
48 | rpc *rpc.RPCClient
49 | halt bool
50 | lastFail error
51 | }
52 |
53 | func NewPayoutsProcessor(cfg *PayoutsConfig, backend *storage.RedisClient) *PayoutsProcessor {
54 | u := &PayoutsProcessor{config: cfg, backend: backend}
55 | u.rpc = rpc.NewRPCClient("PayoutsProcessor", cfg.Daemon, cfg.Timeout)
56 | return u
57 | }
58 |
59 | func (u *PayoutsProcessor) Start() {
60 | log.Println("Starting payouts")
61 |
62 | if u.mustResolvePayout() {
63 | log.Println("Running with env RESOLVE_PAYOUT=1, now trying to resolve locked payouts")
64 | u.resolvePayouts()
65 | log.Println("Now you have to restart payouts module with RESOLVE_PAYOUT=0 for normal run")
66 | return
67 | }
68 |
69 | intv := util.MustParseDuration(u.config.Interval)
70 | timer := time.NewTimer(intv)
71 | log.Printf("Set payouts interval to %v", intv)
72 |
73 | payments := u.backend.GetPendingPayments()
74 | if len(payments) > 0 {
75 | log.Printf("Previous payout failed, you have to resolve it. List of failed payments:\n %v",
76 | formatPendingPayments(payments))
77 | return
78 | }
79 |
80 | locked, err := u.backend.IsPayoutsLocked()
81 | if err != nil {
82 | log.Println("Unable to start payouts:", err)
83 | return
84 | }
85 | if locked {
86 | log.Println("Unable to start payouts because they are locked")
87 | return
88 | }
89 |
90 | // Immediately process payouts after start
91 | u.process()
92 | timer.Reset(intv)
93 |
94 | go func() {
95 | for {
96 | select {
97 | case <-timer.C:
98 | u.process()
99 | timer.Reset(intv)
100 | }
101 | }
102 | }()
103 | }
104 |
105 | func (u *PayoutsProcessor) process() {
106 | if u.halt {
107 | log.Println("Payments suspended due to last critical error:", u.lastFail)
108 | return
109 | }
110 | mustPay := 0
111 | minersPaid := 0
112 | totalAmount := big.NewInt(0)
113 | payees, err := u.backend.GetPayees()
114 | if err != nil {
115 | log.Println("Error while retrieving payees from backend:", err)
116 | return
117 | }
118 |
119 | for _, login := range payees {
120 | amount, _ := u.backend.GetBalance(login)
121 | amountInShannon := big.NewInt(amount)
122 |
123 | // 1 Shannon = 10^9 Wei, so multiplying by util.Shannon converts the Shannon balance to Wei
124 | amountInWei := new(big.Int).Mul(amountInShannon, util.Shannon)
125 |
126 | if !u.reachedThreshold(amountInShannon) {
127 | continue
128 | }
129 | mustPay++
130 |
131 | // Require active peers before processing
132 | if !u.checkPeers() {
133 | break
134 | }
135 | // Require unlocked account
136 | if !u.isUnlockedAccount() {
137 | break
138 | }
139 |
140 | // Check if we have enough funds
141 | poolBalance, err := u.rpc.GetBalance(u.config.Address)
142 | if err != nil {
143 | u.halt = true
144 | u.lastFail = err
145 | break
146 | }
147 | if poolBalance.Cmp(amountInWei) < 0 {
148 | err := fmt.Errorf("Not enough balance for payment, need %s Wei, pool has %s Wei",
149 | amountInWei.String(), poolBalance.String())
150 | u.halt = true
151 | u.lastFail = err
152 | break
153 | }
154 |
155 | // Lock payments for current payout
156 | err = u.backend.LockPayouts(login, amount)
157 | if err != nil {
158 | log.Printf("Failed to lock payment for %s: %v", login, err)
159 | u.halt = true
160 | u.lastFail = err
161 | break
162 | }
163 | log.Printf("Locked payment for %s, %v Shannon", login, amount)
164 |
165 | // Debit miner's balance and update stats
166 | err = u.backend.UpdateBalance(login, amount)
167 | if err != nil {
168 | log.Printf("Failed to update balance for %s, %v Shannon: %v", login, amount, err)
169 | u.halt = true
170 | u.lastFail = err
171 | break
172 | }
173 |
174 | value := hexutil.EncodeBig(amountInWei)
175 | txHash, err := u.rpc.SendTransaction(u.config.Address, login, u.config.GasHex(), u.config.GasPriceHex(), value, u.config.AutoGas)
176 | if err != nil {
177 | log.Printf("Failed to send payment to %s, %v Shannon: %v. Check outgoing tx for %s in block explorer and docs/PAYOUTS.md",
178 | login, amount, err, login)
179 | u.halt = true
180 | u.lastFail = err
181 | break
182 | }
183 |
184 | // Log transaction hash
185 | err = u.backend.WritePayment(login, txHash, amount)
186 | if err != nil {
187 | log.Printf("Failed to log payment data for %s, %v Shannon, tx: %s: %v", login, amount, txHash, err)
188 | u.halt = true
189 | u.lastFail = err
190 | break
191 | }
192 |
193 | minersPaid++
194 | totalAmount.Add(totalAmount, big.NewInt(amount))
195 | log.Printf("Paid %v Shannon to %v, TxHash: %v", amount, login, txHash)
196 |
197 | // Wait for TX confirmation before further payouts
198 | for {
199 | log.Printf("Waiting for tx confirmation: %v", txHash)
200 | time.Sleep(txCheckInterval)
201 | receipt, err := u.rpc.GetTxReceipt(txHash)
202 | if err != nil {
203 | log.Printf("Failed to get tx receipt for %v: %v", txHash, err)
204 | continue
205 | }
206 | // Tx has been mined
207 | if receipt != nil && receipt.Confirmed() {
208 | if receipt.Successful() {
209 | log.Printf("Payout tx successful for %s: %s", login, txHash)
210 | } else {
211 | log.Printf("Payout tx failed for %s: %s. Address contract throws on incoming tx.", login, txHash)
212 | }
213 | break
214 | }
215 | }
216 | }
217 |
218 | if mustPay > 0 {
219 | log.Printf("Paid total %v Shannon to %v of %v payees", totalAmount, minersPaid, mustPay)
220 | } else {
221 | log.Println("No payees that have reached payout threshold")
222 | }
223 |
224 | // Save redis state to disk
225 | if minersPaid > 0 && u.config.BgSave {
226 | u.bgSave()
227 | }
228 | }
229 |
230 | func (self PayoutsProcessor) isUnlockedAccount() bool {
231 | _, err := self.rpc.Sign(self.config.Address, "0x0")
232 | if err != nil {
233 | log.Println("Unable to process payouts:", err)
234 | return false
235 | }
236 | return true
237 | }
238 |
239 | func (self PayoutsProcessor) checkPeers() bool {
240 | n, err := self.rpc.GetPeerCount()
241 | if err != nil {
242 | log.Println("Unable to start payouts, failed to retrieve number of peers from node:", err)
243 | return false
244 | }
245 | if n < self.config.RequirePeers {
246 | log.Println("Unable to start payouts, number of peers on a node is less than required", self.config.RequirePeers)
247 | return false
248 | }
249 | return true
250 | }
251 |
252 | func (self PayoutsProcessor) reachedThreshold(amount *big.Int) bool {
253 | return big.NewInt(self.config.Threshold).Cmp(amount) < 0
254 | }
255 |
256 | func formatPendingPayments(list []*storage.PendingPayment) string {
257 | var s string
258 | for _, v := range list {
259 | s += fmt.Sprintf("\tAddress: %s, Amount: %v Shannon, %v\n", v.Address, v.Amount, time.Unix(v.Timestamp, 0))
260 | }
261 | return s
262 | }
263 |
264 | func (self PayoutsProcessor) bgSave() {
265 | result, err := self.backend.BgSave()
266 | if err != nil {
267 | log.Println("Failed to perform BGSAVE on backend:", err)
268 | return
269 | }
270 | log.Println("Saving backend state to disk:", result)
271 | }
272 |
273 | func (self PayoutsProcessor) resolvePayouts() {
274 | payments := self.backend.GetPendingPayments()
275 |
276 | if len(payments) > 0 {
277 | log.Printf("Will credit back following balances:\n%s", formatPendingPayments(payments))
278 |
279 | for _, v := range payments {
280 | err := self.backend.RollbackBalance(v.Address, v.Amount)
281 | if err != nil {
282 | log.Printf("Failed to credit %v Shannon back to %s, error is: %v", v.Amount, v.Address, err)
283 | return
284 | }
285 | log.Printf("Credited %v Shannon back to %s", v.Amount, v.Address)
286 | }
287 | err := self.backend.UnlockPayouts()
288 | if err != nil {
289 | log.Println("Failed to unlock payouts:", err)
290 | return
291 | }
292 | } else {
293 | log.Println("No pending payments to resolve")
294 | }
295 |
296 | if self.config.BgSave {
297 | self.bgSave()
298 | }
299 | log.Println("Payouts unlocked")
300 | }
301 |
302 | func (self PayoutsProcessor) mustResolvePayout() bool {
303 | v, _ := strconv.ParseBool(os.Getenv("RESOLVE_PAYOUT"))
304 | return v
305 | }
306 |
--------------------------------------------------------------------------------
/payouts/unlocker.go:
--------------------------------------------------------------------------------
1 | package payouts
2 |
3 | import (
4 | "fmt"
5 | "log"
6 | "math/big"
7 | "strconv"
8 | "strings"
9 | "time"
10 |
11 | "github.com/ethereum/go-ethereum/common/math"
12 |
13 | "github.com/sammy007/open-ethereum-pool/rpc"
14 | "github.com/sammy007/open-ethereum-pool/storage"
15 | "github.com/sammy007/open-ethereum-pool/util"
16 | )
17 |
18 | type UnlockerConfig struct {
19 | Enabled bool `json:"enabled"`
20 | PoolFee float64 `json:"poolFee"`
21 | PoolFeeAddress string `json:"poolFeeAddress"`
22 | Donate bool `json:"donate"`
23 | Depth int64 `json:"depth"`
24 | ImmatureDepth int64 `json:"immatureDepth"`
25 | KeepTxFees bool `json:"keepTxFees"`
26 | Interval string `json:"interval"`
27 | Daemon string `json:"daemon"`
28 | Timeout string `json:"timeout"`
29 | }
30 |
31 | const minDepth = 16
32 | const byzantiumHardForkHeight = 4370000
33 |
34 | var homesteadReward = math.MustParseBig256("5000000000000000000")
35 | var byzantiumReward = math.MustParseBig256("3000000000000000000")
36 |
37 | // Donate 10% from pool fees to developers
38 | const donationFee = 10.0
39 | const donationAccount = "0xb85150eb365e7df0941f0cf08235f987ba91506a"
40 |
41 | type BlockUnlocker struct {
42 | config *UnlockerConfig
43 | backend *storage.RedisClient
44 | rpc *rpc.RPCClient
45 | halt bool
46 | lastFail error
47 | }
48 |
49 | func NewBlockUnlocker(cfg *UnlockerConfig, backend *storage.RedisClient) *BlockUnlocker {
50 | if len(cfg.PoolFeeAddress) != 0 && !util.IsValidHexAddress(cfg.PoolFeeAddress) {
51 | log.Fatalln("Invalid poolFeeAddress", cfg.PoolFeeAddress)
52 | }
53 | if cfg.Depth < minDepth*2 {
54 | log.Fatalf("Block maturity depth can't be < %v, your depth is %v", minDepth*2, cfg.Depth)
55 | }
56 | if cfg.ImmatureDepth < minDepth {
57 | log.Fatalf("Immature depth can't be < %v, your depth is %v", minDepth, cfg.ImmatureDepth)
58 | }
59 | u := &BlockUnlocker{config: cfg, backend: backend}
60 | u.rpc = rpc.NewRPCClient("BlockUnlocker", cfg.Daemon, cfg.Timeout)
61 | return u
62 | }
63 |
64 | func (u *BlockUnlocker) Start() {
65 | log.Println("Starting block unlocker")
66 | intv := util.MustParseDuration(u.config.Interval)
67 | timer := time.NewTimer(intv)
68 | log.Printf("Set block unlock interval to %v", intv)
69 |
70 | // Immediately unlock after start
71 | u.unlockPendingBlocks()
72 | u.unlockAndCreditMiners()
73 | timer.Reset(intv)
74 |
75 | go func() {
76 | for {
77 | select {
78 | case <-timer.C:
79 | u.unlockPendingBlocks()
80 | u.unlockAndCreditMiners()
81 | timer.Reset(intv)
82 | }
83 | }
84 | }()
85 | }
86 |
87 | type UnlockResult struct {
88 | maturedBlocks []*storage.BlockData
89 | orphanedBlocks []*storage.BlockData
90 | orphans int
91 | uncles int
92 | blocks int
93 | }
94 |
95 | /* Geth does not provide consistent state when you need both new height and new job,
96 | * so in redis I am logging just what I have in a pool state on the moment when block found.
97 | * Having very likely incorrect height in database results in a weird block unlocking scheme,
98 | * when I have to check what the hell we actually found and traversing all the blocks with height-N and height+N
99 | * to make sure we will find it. We can't rely on round height here, it's just a reference point.
100 | * ISSUE: https://github.com/ethereum/go-ethereum/issues/2333
101 | */
102 | func (u *BlockUnlocker) unlockCandidates(candidates []*storage.BlockData) (*UnlockResult, error) {
103 | result := &UnlockResult{}
104 |
105 | // Data row is: "height:nonce:powHash:mixDigest:timestamp:diff:totalShares"
106 | for _, candidate := range candidates {
107 | orphan := true
108 |
109 | /* Search for a normal block with wrong height here by traversing 16 blocks back and forward.
110 | * Also we are searching for a block that can include this one as uncle.
111 | */
112 | for i := int64(minDepth * -1); i < minDepth; i++ {
113 | height := candidate.Height + i
114 |
115 | if height < 0 {
116 | continue
117 | }
118 |
119 | block, err := u.rpc.GetBlockByHeight(height)
120 | if err != nil {
121 | log.Printf("Error while retrieving block %v from node: %v", height, err)
122 | return nil, err
123 | }
124 | if block == nil {
125 | return nil, fmt.Errorf("Error while retrieving block %v from node, wrong node height", height)
126 | }
127 |
128 | if matchCandidate(block, candidate) {
129 | orphan = false
130 | result.blocks++
131 |
132 | err = u.handleBlock(block, candidate)
133 | if err != nil {
134 | u.halt = true
135 | u.lastFail = err
136 | return nil, err
137 | }
138 | result.maturedBlocks = append(result.maturedBlocks, candidate)
139 | log.Printf("Mature block %v with %v tx, hash: %v", candidate.Height, len(block.Transactions), candidate.Hash[0:10])
140 | break
141 | }
142 |
143 | if len(block.Uncles) == 0 {
144 | continue
145 | }
146 |
147 | // Trying to find uncle in current block during our forward check
148 | for uncleIndex, uncleHash := range block.Uncles {
149 | uncle, err := u.rpc.GetUncleByBlockNumberAndIndex(height, uncleIndex)
150 | if err != nil {
151 | return nil, fmt.Errorf("Error while retrieving uncle of block %v from node: %v", uncleHash, err)
152 | }
153 | if uncle == nil {
154 | return nil, fmt.Errorf("Error while retrieving uncle of block %v from node", height)
155 | }
156 |
157 | // Found uncle
158 | if matchCandidate(uncle, candidate) {
159 | orphan = false
160 | result.uncles++
161 |
162 | err := handleUncle(height, uncle, candidate)
163 | if err != nil {
164 | u.halt = true
165 | u.lastFail = err
166 | return nil, err
167 | }
168 | result.maturedBlocks = append(result.maturedBlocks, candidate)
169 | log.Printf("Mature uncle %v/%v of reward %v with hash: %v", candidate.Height, candidate.UncleHeight,
170 | util.FormatReward(candidate.Reward), uncle.Hash[0:10])
171 | break
172 | }
173 | }
174 | // Found block or uncle
175 | if !orphan {
176 | break
177 | }
178 | }
179 | // Block is lost, we didn't find any valid block or uncle matching our data in a blockchain
180 | if orphan {
181 | result.orphans++
182 | candidate.Orphan = true
183 | result.orphanedBlocks = append(result.orphanedBlocks, candidate)
184 | log.Printf("Orphaned block %v:%v", candidate.RoundHeight, candidate.Nonce)
185 | }
186 | }
187 | return result, nil
188 | }
189 |
190 | func matchCandidate(block *rpc.GetBlockReply, candidate *storage.BlockData) bool {
191 | // Just compare hash if block is unlocked as immature
192 | if len(candidate.Hash) > 0 && strings.EqualFold(candidate.Hash, block.Hash) {
193 | return true
194 | }
195 | // Geth-style candidate matching
196 | if len(block.Nonce) > 0 {
197 | return strings.EqualFold(block.Nonce, candidate.Nonce)
198 | }
199 | // Parity's EIP: https://github.com/ethereum/EIPs/issues/95
200 | if len(block.SealFields) == 2 {
201 | return strings.EqualFold(candidate.Nonce, block.SealFields[1])
202 | }
203 | return false
204 | }
205 |
206 | func (u *BlockUnlocker) handleBlock(block *rpc.GetBlockReply, candidate *storage.BlockData) error {
207 | correctHeight, err := strconv.ParseInt(strings.Replace(block.Number, "0x", "", -1), 16, 64)
208 | if err != nil {
209 | return err
210 | }
211 | candidate.Height = correctHeight
212 | reward := getConstReward(candidate.Height)
213 |
214 | // Add TX fees
215 | extraTxReward, err := u.getExtraRewardForTx(block)
216 | if err != nil {
217 | return fmt.Errorf("Error while fetching TX receipt: %v", err)
218 | }
219 | if u.config.KeepTxFees {
220 | candidate.ExtraReward = extraTxReward
221 | } else {
222 | reward.Add(reward, extraTxReward)
223 | }
224 |
225 | // Add reward for including uncles
226 | uncleReward := getRewardForUncle(candidate.Height)
227 | rewardForUncles := big.NewInt(0).Mul(uncleReward, big.NewInt(int64(len(block.Uncles))))
228 | reward.Add(reward, rewardForUncles)
229 |
230 | candidate.Orphan = false
231 | candidate.Hash = block.Hash
232 | candidate.Reward = reward
233 | return nil
234 | }
235 |
236 | func handleUncle(height int64, uncle *rpc.GetBlockReply, candidate *storage.BlockData) error {
237 | uncleHeight, err := strconv.ParseInt(strings.Replace(uncle.Number, "0x", "", -1), 16, 64)
238 | if err != nil {
239 | return err
240 | }
241 | reward := getUncleReward(uncleHeight, height)
242 | candidate.Height = height
243 | candidate.UncleHeight = uncleHeight
244 | candidate.Orphan = false
245 | candidate.Hash = uncle.Hash
246 | candidate.Reward = reward
247 | return nil
248 | }
249 |
250 | func (u *BlockUnlocker) unlockPendingBlocks() {
251 | if u.halt {
252 | log.Println("Unlocking suspended due to last critical error:", u.lastFail)
253 | return
254 | }
255 |
256 | current, err := u.rpc.GetPendingBlock()
257 | if err != nil {
258 | u.halt = true
259 | u.lastFail = err
260 | log.Printf("Unable to get current blockchain height from node: %v", err)
261 | return
262 | }
263 | currentHeight, err := strconv.ParseInt(strings.Replace(current.Number, "0x", "", -1), 16, 64)
264 | if err != nil {
265 | u.halt = true
266 | u.lastFail = err
267 | log.Printf("Can't parse pending block number: %v", err)
268 | return
269 | }
270 |
271 | candidates, err := u.backend.GetCandidates(currentHeight - u.config.ImmatureDepth)
272 | if err != nil {
273 | u.halt = true
274 | u.lastFail = err
275 | log.Printf("Failed to get block candidates from backend: %v", err)
276 | return
277 | }
278 |
279 | if len(candidates) == 0 {
280 | log.Println("No block candidates to unlock")
281 | return
282 | }
283 |
284 | result, err := u.unlockCandidates(candidates)
285 | if err != nil {
286 | u.halt = true
287 | u.lastFail = err
288 | log.Printf("Failed to unlock blocks: %v", err)
289 | return
290 | }
291 | log.Printf("Immature %v blocks, %v uncles, %v orphans", result.blocks, result.uncles, result.orphans)
292 |
293 | err = u.backend.WritePendingOrphans(result.orphanedBlocks)
294 | if err != nil {
295 | u.halt = true
296 | u.lastFail = err
297 | log.Printf("Failed to insert orphaned blocks into backend: %v", err)
298 | return
299 | } else {
300 | log.Printf("Inserted %v orphaned blocks to backend", result.orphans)
301 | }
302 |
303 | totalRevenue := new(big.Rat)
304 | totalMinersProfit := new(big.Rat)
305 | totalPoolProfit := new(big.Rat)
306 |
307 | for _, block := range result.maturedBlocks {
308 | revenue, minersProfit, poolProfit, roundRewards, err := u.calculateRewards(block)
309 | if err != nil {
310 | u.halt = true
311 | u.lastFail = err
312 | log.Printf("Failed to calculate rewards for round %v: %v", block.RoundKey(), err)
313 | return
314 | }
315 | err = u.backend.WriteImmatureBlock(block, roundRewards)
316 | if err != nil {
317 | u.halt = true
318 | u.lastFail = err
319 | log.Printf("Failed to credit rewards for round %v: %v", block.RoundKey(), err)
320 | return
321 | }
322 | totalRevenue.Add(totalRevenue, revenue)
323 | totalMinersProfit.Add(totalMinersProfit, minersProfit)
324 | totalPoolProfit.Add(totalPoolProfit, poolProfit)
325 |
326 | logEntry := fmt.Sprintf(
327 | "IMMATURE %v: revenue %v, miners profit %v, pool profit: %v",
328 | block.RoundKey(),
329 | util.FormatRatReward(revenue),
330 | util.FormatRatReward(minersProfit),
331 | util.FormatRatReward(poolProfit),
332 | )
333 | entries := []string{logEntry}
334 | for login, reward := range roundRewards {
335 | entries = append(entries, fmt.Sprintf("\tREWARD %v: %v: %v Shannon", block.RoundKey(), login, reward))
336 | }
337 | log.Println(strings.Join(entries, "\n"))
338 | }
339 |
340 | log.Printf(
341 | "IMMATURE SESSION: revenue %v, miners profit %v, pool profit: %v",
342 | util.FormatRatReward(totalRevenue),
343 | util.FormatRatReward(totalMinersProfit),
344 | util.FormatRatReward(totalPoolProfit),
345 | )
346 | }
347 |
348 | func (u *BlockUnlocker) unlockAndCreditMiners() {
349 | if u.halt {
350 | log.Println("Unlocking suspended due to last critical error:", u.lastFail)
351 | return
352 | }
353 |
354 | current, err := u.rpc.GetPendingBlock()
355 | if err != nil {
356 | u.halt = true
357 | u.lastFail = err
358 | log.Printf("Unable to get current blockchain height from node: %v", err)
359 | return
360 | }
361 | currentHeight, err := strconv.ParseInt(strings.Replace(current.Number, "0x", "", -1), 16, 64)
362 | if err != nil {
363 | u.halt = true
364 | u.lastFail = err
365 | log.Printf("Can't parse pending block number: %v", err)
366 | return
367 | }
368 |
369 | immature, err := u.backend.GetImmatureBlocks(currentHeight - u.config.Depth)
370 | if err != nil {
371 | u.halt = true
372 | u.lastFail = err
373 | 		log.Printf("Failed to get immature blocks from backend: %v", err)
374 | return
375 | }
376 |
377 | if len(immature) == 0 {
378 | log.Println("No immature blocks to credit miners")
379 | return
380 | }
381 |
382 | result, err := u.unlockCandidates(immature)
383 | if err != nil {
384 | u.halt = true
385 | u.lastFail = err
386 | log.Printf("Failed to unlock blocks: %v", err)
387 | return
388 | }
389 | log.Printf("Unlocked %v blocks, %v uncles, %v orphans", result.blocks, result.uncles, result.orphans)
390 |
391 | for _, block := range result.orphanedBlocks {
392 | err = u.backend.WriteOrphan(block)
393 | if err != nil {
394 | u.halt = true
395 | u.lastFail = err
396 | log.Printf("Failed to insert orphaned block into backend: %v", err)
397 | return
398 | }
399 | }
400 | log.Printf("Inserted %v orphaned blocks to backend", result.orphans)
401 |
402 | totalRevenue := new(big.Rat)
403 | totalMinersProfit := new(big.Rat)
404 | totalPoolProfit := new(big.Rat)
405 |
406 | for _, block := range result.maturedBlocks {
407 | revenue, minersProfit, poolProfit, roundRewards, err := u.calculateRewards(block)
408 | if err != nil {
409 | u.halt = true
410 | u.lastFail = err
411 | log.Printf("Failed to calculate rewards for round %v: %v", block.RoundKey(), err)
412 | return
413 | }
414 | err = u.backend.WriteMaturedBlock(block, roundRewards)
415 | if err != nil {
416 | u.halt = true
417 | u.lastFail = err
418 | log.Printf("Failed to credit rewards for round %v: %v", block.RoundKey(), err)
419 | return
420 | }
421 | totalRevenue.Add(totalRevenue, revenue)
422 | totalMinersProfit.Add(totalMinersProfit, minersProfit)
423 | totalPoolProfit.Add(totalPoolProfit, poolProfit)
424 |
425 | logEntry := fmt.Sprintf(
426 | "MATURED %v: revenue %v, miners profit %v, pool profit: %v",
427 | block.RoundKey(),
428 | util.FormatRatReward(revenue),
429 | util.FormatRatReward(minersProfit),
430 | util.FormatRatReward(poolProfit),
431 | )
432 | entries := []string{logEntry}
433 | for login, reward := range roundRewards {
434 | entries = append(entries, fmt.Sprintf("\tREWARD %v: %v: %v Shannon", block.RoundKey(), login, reward))
435 | }
436 | log.Println(strings.Join(entries, "\n"))
437 | }
438 |
439 | log.Printf(
440 | "MATURE SESSION: revenue %v, miners profit %v, pool profit: %v",
441 | util.FormatRatReward(totalRevenue),
442 | util.FormatRatReward(totalMinersProfit),
443 | util.FormatRatReward(totalPoolProfit),
444 | )
445 | }
446 |
447 | func (u *BlockUnlocker) calculateRewards(block *storage.BlockData) (*big.Rat, *big.Rat, *big.Rat, map[string]int64, error) {
448 | revenue := new(big.Rat).SetInt(block.Reward)
449 | minersProfit, poolProfit := chargeFee(revenue, u.config.PoolFee)
450 |
451 | shares, err := u.backend.GetRoundShares(block.RoundHeight, block.Nonce)
452 | if err != nil {
453 | return nil, nil, nil, nil, err
454 | }
455 |
456 | rewards := calculateRewardsForShares(shares, block.TotalShares, minersProfit)
457 |
458 | if block.ExtraReward != nil {
459 | extraReward := new(big.Rat).SetInt(block.ExtraReward)
460 | poolProfit.Add(poolProfit, extraReward)
461 | revenue.Add(revenue, extraReward)
462 | }
463 |
464 | if u.config.Donate {
465 | var donation = new(big.Rat)
466 | poolProfit, donation = chargeFee(poolProfit, donationFee)
467 | login := strings.ToLower(donationAccount)
468 | rewards[login] += weiToShannonInt64(donation)
469 | }
470 |
471 | if len(u.config.PoolFeeAddress) != 0 {
472 | address := strings.ToLower(u.config.PoolFeeAddress)
473 | rewards[address] += weiToShannonInt64(poolProfit)
474 | }
475 |
476 | return revenue, minersProfit, poolProfit, rewards, nil
477 | }
478 |
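// Proportional split: each login receives reward * (its shares / total round shares),
// converted to Shannon. Rounding happens per miner, so the credited sum can differ from
// the exact total by a few Shannon.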
479 | func calculateRewardsForShares(shares map[string]int64, total int64, reward *big.Rat) map[string]int64 {
480 | rewards := make(map[string]int64)
481 |
482 | for login, n := range shares {
483 | percent := big.NewRat(n, total)
484 | workerReward := new(big.Rat).Mul(reward, percent)
485 | rewards[login] += weiToShannonInt64(workerReward)
486 | }
487 | return rewards
488 | }
489 |
490 | // Returns new value after fee deduction and fee value.
491 | func chargeFee(value *big.Rat, fee float64) (*big.Rat, *big.Rat) {
492 | feePercent := new(big.Rat).SetFloat64(fee / 100)
493 | feeValue := new(big.Rat).Mul(value, feePercent)
494 | return new(big.Rat).Sub(value, feeValue), feeValue
495 | }
496 |
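// 1 Shannon (Gwei) = 1e9 wei. The value is rounded to the nearest whole Shannon
// before being returned as int64.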
497 | func weiToShannonInt64(wei *big.Rat) int64 {
498 | shannon := new(big.Rat).SetInt(util.Shannon)
499 | inShannon := new(big.Rat).Quo(wei, shannon)
500 | value, _ := strconv.ParseInt(inShannon.FloatString(0), 10, 64)
501 | return value
502 | }
503 |
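// Static block subsidy: homesteadReward (5 ETH) before the Byzantium fork height,
// byzantiumReward (3 ETH) afterwards. Both constants are defined earlier in this file.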
504 | func getConstReward(height int64) *big.Int {
505 | if height >= byzantiumHardForkHeight {
506 | return new(big.Int).Set(byzantiumReward)
507 | }
508 | return new(big.Int).Set(homesteadReward)
509 | }
510 |
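// Inclusion bonus paid to the block miner: 1/32 of the block subsidy per uncle.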
511 | func getRewardForUncle(height int64) *big.Int {
512 | reward := getConstReward(height)
513 | return new(big.Int).Div(reward, new(big.Int).SetInt64(32))
514 | }
515 |
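// Reward for the uncle miner: (8 - depth) / 8 of the block subsidy, where
// depth = includingHeight - uncleHeight (1..7). E.g. depth 1 on a 5 ETH subsidy pays 4.375 ETH.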
516 | func getUncleReward(uHeight, height int64) *big.Int {
517 | reward := getConstReward(height)
518 | k := height - uHeight
519 | reward.Mul(big.NewInt(8-k), reward)
520 | reward.Div(reward, big.NewInt(8))
521 | return reward
522 | }
523 |
524 | func (u *BlockUnlocker) getExtraRewardForTx(block *rpc.GetBlockReply) (*big.Int, error) {
525 | amount := new(big.Int)
526 |
527 | for _, tx := range block.Transactions {
528 | receipt, err := u.rpc.GetTxReceipt(tx.Hash)
529 | if err != nil {
530 | return nil, err
531 | }
532 | if receipt != nil {
533 | gasUsed := util.String2Big(receipt.GasUsed)
534 | gasPrice := util.String2Big(tx.GasPrice)
535 | fee := new(big.Int).Mul(gasUsed, gasPrice)
536 | amount.Add(amount, fee)
537 | }
538 | }
539 | return amount, nil
540 | }
541 |
--------------------------------------------------------------------------------
/payouts/unlocker_test.go:
--------------------------------------------------------------------------------
1 | package payouts
2 |
3 | import (
4 | "math/big"
5 | "os"
6 | "testing"
7 |
8 | "github.com/sammy007/open-ethereum-pool/rpc"
9 | "github.com/sammy007/open-ethereum-pool/storage"
10 | )
11 |
12 | func TestMain(m *testing.M) {
13 | os.Exit(m.Run())
14 | }
15 |
16 | func TestCalculateRewards(t *testing.T) {
17 | blockReward, _ := new(big.Rat).SetString("5000000000000000000")
18 | shares := map[string]int64{"0x0": 1000000, "0x1": 20000, "0x2": 5000, "0x3": 10, "0x4": 1}
19 | expectedRewards := map[string]int64{"0x0": 4877996431, "0x1": 97559929, "0x2": 24389982, "0x3": 48780, "0x4": 4878}
20 | totalShares := int64(1025011)
21 |
22 | rewards := calculateRewardsForShares(shares, totalShares, blockReward)
23 | expectedTotalAmount := int64(5000000000)
24 |
25 | totalAmount := int64(0)
26 | for login, amount := range rewards {
27 | totalAmount += amount
28 |
29 | if expectedRewards[login] != amount {
30 | t.Errorf("Amount for %v must be equal to %v vs %v", login, expectedRewards[login], amount)
31 | }
32 | }
33 | if totalAmount != expectedTotalAmount {
34 | t.Errorf("Total reward must be equal to block reward in Shannon: %v vs %v", expectedTotalAmount, totalAmount)
35 | }
36 | }
37 |
38 | func TestChargeFee(t *testing.T) {
39 | orig, _ := new(big.Rat).SetString("5000000000000000000")
40 | value, _ := new(big.Rat).SetString("5000000000000000000")
41 | expectedNewValue, _ := new(big.Rat).SetString("3750000000000000000")
42 | expectedFee, _ := new(big.Rat).SetString("1250000000000000000")
43 | newValue, fee := chargeFee(orig, 25.0)
44 |
45 | if orig.Cmp(value) != 0 {
46 | t.Error("Must not change original value")
47 | }
48 | if newValue.Cmp(expectedNewValue) != 0 {
49 | t.Error("Must charge and deduct correct fee")
50 | }
51 | if fee.Cmp(expectedFee) != 0 {
52 | t.Error("Must charge fee")
53 | }
54 | }
55 |
56 | func TestWeiToShannonInt64(t *testing.T) {
57 | wei, _ := new(big.Rat).SetString("1000000000000000000")
58 | origWei, _ := new(big.Rat).SetString("1000000000000000000")
59 | shannon := int64(1000000000)
60 |
61 | if weiToShannonInt64(wei) != shannon {
62 | t.Error("Must convert to Shannon")
63 | }
64 | if wei.Cmp(origWei) != 0 {
65 | 		t.Error("Must not change original value")
66 | }
67 | }
68 |
69 | func TestGetUncleReward(t *testing.T) {
70 | rewards := make(map[int64]string)
71 | expectedRewards := map[int64]string{
72 | 1: "4375000000000000000",
73 | 2: "3750000000000000000",
74 | 3: "3125000000000000000",
75 | 4: "2500000000000000000",
76 | 5: "1875000000000000000",
77 | 6: "1250000000000000000",
78 | 7: "625000000000000000",
79 | }
80 | for i := int64(1); i < 8; i++ {
81 | rewards[i] = getUncleReward(1, i+1).String()
82 | }
83 | for i, reward := range rewards {
84 | if expectedRewards[i] != rewards[i] {
85 | t.Errorf("Incorrect uncle reward for %v, expected %v vs %v", i, expectedRewards[i], reward)
86 | }
87 | }
88 | }
89 |
90 | func TestGetByzantiumUncleReward(t *testing.T) {
91 | rewards := make(map[int64]string)
92 | expectedRewards := map[int64]string{
93 | 1: "2625000000000000000",
94 | 2: "2250000000000000000",
95 | 3: "1875000000000000000",
96 | 4: "1500000000000000000",
97 | 5: "1125000000000000000",
98 | 6: "750000000000000000",
99 | 7: "375000000000000000",
100 | }
101 | for i := int64(1); i < 8; i++ {
102 | rewards[i] = getUncleReward(byzantiumHardForkHeight, byzantiumHardForkHeight+i).String()
103 | }
104 | for i, reward := range rewards {
105 | if expectedRewards[i] != rewards[i] {
106 | t.Errorf("Incorrect uncle reward for %v, expected %v vs %v", i, expectedRewards[i], reward)
107 | }
108 | }
109 | }
110 |
111 | func TestGetRewardForUncle(t *testing.T) {
112 | reward := getRewardForUncle(1).String()
113 | expectedReward := "156250000000000000"
114 | if expectedReward != reward {
115 | t.Errorf("Incorrect uncle bonus for height %v, expected %v vs %v", 1, expectedReward, reward)
116 | }
117 | }
118 |
119 | func TestGetByzantiumRewardForUncle(t *testing.T) {
120 | reward := getRewardForUncle(byzantiumHardForkHeight).String()
121 | expectedReward := "93750000000000000"
122 | if expectedReward != reward {
123 | t.Errorf("Incorrect uncle bonus for height %v, expected %v vs %v", byzantiumHardForkHeight, expectedReward, reward)
124 | }
125 | }
126 |
127 | func TestMatchCandidate(t *testing.T) {
128 | gethBlock := &rpc.GetBlockReply{Hash: "0x12345A", Nonce: "0x1A"}
129 | parityBlock := &rpc.GetBlockReply{Hash: "0x12345A", SealFields: []string{"0x0A", "0x1A"}}
130 | candidate := &storage.BlockData{Nonce: "0x1a"}
131 | orphan := &storage.BlockData{Nonce: "0x1abc"}
132 |
133 | if !matchCandidate(gethBlock, candidate) {
134 | t.Error("Must match with nonce")
135 | }
136 | if !matchCandidate(parityBlock, candidate) {
137 | t.Error("Must match with seal fields")
138 | }
139 | if matchCandidate(gethBlock, orphan) {
140 | t.Error("Must not match with orphan with nonce")
141 | }
142 | if matchCandidate(parityBlock, orphan) {
143 | t.Error("Must not match orphan with seal fields")
144 | }
145 |
146 | block := &rpc.GetBlockReply{Hash: "0x12345A"}
147 | immature := &storage.BlockData{Hash: "0x12345a", Nonce: "0x0"}
148 | if !matchCandidate(block, immature) {
149 | t.Error("Must match with hash")
150 | }
151 | }
152 |
--------------------------------------------------------------------------------
/policy/policy.go:
--------------------------------------------------------------------------------
1 | package policy
2 |
3 | import (
4 | "fmt"
5 | "log"
6 | "os/exec"
7 | "strings"
8 | "sync"
9 | "sync/atomic"
10 | "time"
11 |
12 | "github.com/sammy007/open-ethereum-pool/storage"
13 | "github.com/sammy007/open-ethereum-pool/util"
14 | )
15 |
16 | type Config struct {
17 | Workers int `json:"workers"`
18 | Banning Banning `json:"banning"`
19 | Limits Limits `json:"limits"`
20 | ResetInterval string `json:"resetInterval"`
21 | RefreshInterval string `json:"refreshInterval"`
22 | }
23 |
24 | type Limits struct {
25 | Enabled bool `json:"enabled"`
26 | Limit int32 `json:"limit"`
27 | Grace string `json:"grace"`
28 | LimitJump int32 `json:"limitJump"`
29 | }
30 |
31 | type Banning struct {
32 | Enabled bool `json:"enabled"`
33 | IPSet string `json:"ipset"`
34 | Timeout int64 `json:"timeout"`
35 | InvalidPercent float32 `json:"invalidPercent"`
36 | CheckThreshold int32 `json:"checkThreshold"`
37 | MalformedLimit int32 `json:"malformedLimit"`
38 | }
39 |
40 | type Stats struct {
41 | sync.Mutex
42 | // We are using atomic with LastBeat,
43 | // so moving it before the rest in order to avoid alignment issue
44 | LastBeat int64
45 | BannedAt int64
46 | ValidShares int32
47 | InvalidShares int32
48 | Malformed int32
49 | ConnLimit int32
50 | Banned int32
51 | }
52 |
53 | type PolicyServer struct {
54 | sync.RWMutex
55 | statsMu sync.Mutex
56 | config *Config
57 | stats map[string]*Stats
58 | banChannel chan string
59 | startedAt int64
60 | grace int64
61 | timeout int64
62 | blacklist []string
63 | whitelist []string
64 | storage *storage.RedisClient
65 | }
66 |
67 | func Start(cfg *Config, storage *storage.RedisClient) *PolicyServer {
68 | s := &PolicyServer{config: cfg, startedAt: util.MakeTimestamp()}
69 | grace := util.MustParseDuration(cfg.Limits.Grace)
70 | s.grace = int64(grace / time.Millisecond)
71 | s.banChannel = make(chan string, 64)
72 | s.stats = make(map[string]*Stats)
73 | s.storage = storage
74 | s.refreshState()
75 |
76 | timeout := util.MustParseDuration(s.config.ResetInterval)
77 | s.timeout = int64(timeout / time.Millisecond)
78 |
79 | resetIntv := util.MustParseDuration(s.config.ResetInterval)
80 | resetTimer := time.NewTimer(resetIntv)
81 | log.Printf("Set policy stats reset every %v", resetIntv)
82 |
83 | refreshIntv := util.MustParseDuration(s.config.RefreshInterval)
84 | refreshTimer := time.NewTimer(refreshIntv)
85 | log.Printf("Set policy state refresh every %v", refreshIntv)
86 |
87 | go func() {
88 | for {
89 | select {
90 | case <-resetTimer.C:
91 | s.resetStats()
92 | resetTimer.Reset(resetIntv)
93 | case <-refreshTimer.C:
94 | s.refreshState()
95 | refreshTimer.Reset(refreshIntv)
96 | }
97 | }
98 | }()
99 |
100 | for i := 0; i < s.config.Workers; i++ {
101 | s.startPolicyWorker()
102 | }
103 | log.Printf("Running with %v policy workers", s.config.Workers)
104 | return s
105 | }
106 |
107 | func (s *PolicyServer) startPolicyWorker() {
108 | go func() {
109 | for {
110 | select {
111 | case ip := <-s.banChannel:
112 | s.doBan(ip)
113 | }
114 | }
115 | }()
116 | }
117 |
118 | func (s *PolicyServer) resetStats() {
119 | now := util.MakeTimestamp()
120 | banningTimeout := s.config.Banning.Timeout * 1000
121 | total := 0
122 | s.statsMu.Lock()
123 | defer s.statsMu.Unlock()
124 |
125 | for key, m := range s.stats {
126 | lastBeat := atomic.LoadInt64(&m.LastBeat)
127 | bannedAt := atomic.LoadInt64(&m.BannedAt)
128 |
129 | if now-bannedAt >= banningTimeout {
130 | atomic.StoreInt64(&m.BannedAt, 0)
131 | if atomic.CompareAndSwapInt32(&m.Banned, 1, 0) {
132 | log.Printf("Ban dropped for %v", key)
133 | delete(s.stats, key)
134 | total++
135 | }
136 | }
137 | if now-lastBeat >= s.timeout {
138 | delete(s.stats, key)
139 | total++
140 | }
141 | }
142 | log.Printf("Flushed stats for %v IP addresses", total)
143 | }
144 |
145 | func (s *PolicyServer) refreshState() {
146 | s.Lock()
147 | defer s.Unlock()
148 | var err error
149 |
150 | s.blacklist, err = s.storage.GetBlacklist()
151 | if err != nil {
152 | log.Printf("Failed to get blacklist from backend: %v", err)
153 | }
154 | s.whitelist, err = s.storage.GetWhitelist()
155 | if err != nil {
156 | log.Printf("Failed to get whitelist from backend: %v", err)
157 | }
158 | log.Println("Policy state refresh complete")
159 | }
160 |
161 | func (s *PolicyServer) NewStats() *Stats {
162 | x := &Stats{
163 | ConnLimit: s.config.Limits.Limit,
164 | }
165 | x.heartbeat()
166 | return x
167 | }
168 |
169 | func (s *PolicyServer) Get(ip string) *Stats {
170 | s.statsMu.Lock()
171 | defer s.statsMu.Unlock()
172 |
173 | if x, ok := s.stats[ip]; !ok {
174 | x = s.NewStats()
175 | s.stats[ip] = x
176 | return x
177 | } else {
178 | x.heartbeat()
179 | return x
180 | }
181 | }
182 |
183 | func (s *PolicyServer) BanClient(ip string) {
184 | x := s.Get(ip)
185 | s.forceBan(x, ip)
186 | }
187 |
188 | func (s *PolicyServer) IsBanned(ip string) bool {
189 | x := s.Get(ip)
190 | return atomic.LoadInt32(&x.Banned) > 0
191 | }
192 |
193 | func (s *PolicyServer) ApplyLimitPolicy(ip string) bool {
194 | if !s.config.Limits.Enabled {
195 | return true
196 | }
197 | now := util.MakeTimestamp()
198 | if now-s.startedAt > s.grace {
199 | return s.Get(ip).decrLimit() > 0
200 | }
201 | return true
202 | }
203 |
204 | func (s *PolicyServer) ApplyLoginPolicy(addy, ip string) bool {
205 | if s.InBlackList(addy) {
206 | x := s.Get(ip)
207 | s.forceBan(x, ip)
208 | return false
209 | }
210 | return true
211 | }
212 |
213 | func (s *PolicyServer) ApplyMalformedPolicy(ip string) bool {
214 | x := s.Get(ip)
215 | n := x.incrMalformed()
216 | if n >= s.config.Banning.MalformedLimit {
217 | s.forceBan(x, ip)
218 | return false
219 | }
220 | return true
221 | }
222 |
223 | func (s *PolicyServer) ApplySharePolicy(ip string, validShare bool) bool {
224 | x := s.Get(ip)
225 | x.Lock()
226 |
227 | if validShare {
228 | x.ValidShares++
229 | if s.config.Limits.Enabled {
230 | x.incrLimit(s.config.Limits.LimitJump)
231 | }
232 | } else {
233 | x.InvalidShares++
234 | }
235 |
236 | totalShares := x.ValidShares + x.InvalidShares
237 | if totalShares < s.config.Banning.CheckThreshold {
238 | x.Unlock()
239 | return true
240 | }
241 | validShares := float32(x.ValidShares)
242 | invalidShares := float32(x.InvalidShares)
243 | x.resetShares()
244 | x.Unlock()
245 |
246 | ratio := invalidShares / validShares
247 |
248 | if ratio >= s.config.Banning.InvalidPercent/100.0 {
249 | s.forceBan(x, ip)
250 | return false
251 | }
252 | return true
253 | }
254 |
255 | func (x *Stats) resetShares() {
256 | x.ValidShares = 0
257 | x.InvalidShares = 0
258 | }
259 |
260 | func (s *PolicyServer) forceBan(x *Stats, ip string) {
261 | if !s.config.Banning.Enabled || s.InWhiteList(ip) {
262 | return
263 | }
264 | atomic.StoreInt64(&x.BannedAt, util.MakeTimestamp())
265 |
266 | if atomic.CompareAndSwapInt32(&x.Banned, 0, 1) {
267 | if len(s.config.Banning.IPSet) > 0 {
268 | s.banChannel <- ip
269 | } else {
270 | log.Println("Banned peer", ip)
271 | }
272 | }
273 | }
274 |
275 | func (x *Stats) incrLimit(n int32) {
276 | atomic.AddInt32(&x.ConnLimit, n)
277 | }
278 |
279 | func (x *Stats) incrMalformed() int32 {
280 | return atomic.AddInt32(&x.Malformed, 1)
281 | }
282 |
283 | func (x *Stats) decrLimit() int32 {
284 | return atomic.AddInt32(&x.ConnLimit, -1)
285 | }
286 |
287 | func (s *PolicyServer) InBlackList(addy string) bool {
288 | s.RLock()
289 | defer s.RUnlock()
290 | return util.StringInSlice(addy, s.blacklist)
291 | }
292 |
293 | func (s *PolicyServer) InWhiteList(ip string) bool {
294 | s.RLock()
295 | defer s.RUnlock()
296 | return util.StringInSlice(ip, s.whitelist)
297 | }
298 |
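// Shells out to ipset, so the pool user needs passwordless sudo for "ipset add" and the
// configured set must already exist with timeout support (e.g. "ipset create blacklist hash:ip timeout 0").
// The "-!" flag makes ipset ignore "element already added" errors.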
299 | func (s *PolicyServer) doBan(ip string) {
300 | set, timeout := s.config.Banning.IPSet, s.config.Banning.Timeout
301 | cmd := fmt.Sprintf("sudo ipset add %s %s timeout %v -!", set, ip, timeout)
302 | args := strings.Fields(cmd)
303 | head := args[0]
304 | args = args[1:]
305 |
306 | log.Printf("Banned %v with timeout %v on ipset %s", ip, timeout, set)
307 |
308 | _, err := exec.Command(head, args...).Output()
309 | if err != nil {
310 | log.Printf("CMD Error: %s", err)
311 | }
312 | }
313 |
314 | func (x *Stats) heartbeat() {
315 | now := util.MakeTimestamp()
316 | atomic.StoreInt64(&x.LastBeat, now)
317 | }
318 |
--------------------------------------------------------------------------------
/proxy/blocks.go:
--------------------------------------------------------------------------------
1 | package proxy
2 |
3 | import (
4 | "log"
5 | "math/big"
6 | "strconv"
7 | "strings"
8 | "sync"
9 |
10 | "github.com/ethereum/go-ethereum/common"
11 |
12 | "github.com/sammy007/open-ethereum-pool/rpc"
13 | "github.com/sammy007/open-ethereum-pool/util"
14 | )
15 |
16 | const maxBacklog = 3
17 |
18 | type heightDiffPair struct {
19 | diff *big.Int
20 | height uint64
21 | }
22 |
23 | type BlockTemplate struct {
24 | sync.RWMutex
25 | Header string
26 | Seed string
27 | Target string
28 | Difficulty *big.Int
29 | Height uint64
30 | GetPendingBlockCache *rpc.GetBlockReplyPart
31 | nonces map[string]bool
32 | headers map[string]heightDiffPair
33 | }
34 |
35 | type Block struct {
36 | difficulty *big.Int
37 | hashNoNonce common.Hash
38 | nonce uint64
39 | mixDigest common.Hash
40 | number uint64
41 | }
42 |
43 | func (b Block) Difficulty() *big.Int { return b.difficulty }
44 | func (b Block) HashNoNonce() common.Hash { return b.hashNoNonce }
45 | func (b Block) Nonce() uint64 { return b.nonce }
46 | func (b Block) MixDigest() common.Hash { return b.mixDigest }
47 | func (b Block) NumberU64() uint64 { return b.number }
48 |
49 | func (s *ProxyServer) fetchBlockTemplate() {
50 | rpc := s.rpc()
51 | t := s.currentBlockTemplate()
52 | pendingReply, height, diff, err := s.fetchPendingBlock()
53 | if err != nil {
54 | log.Printf("Error while refreshing pending block on %s: %s", rpc.Name, err)
55 | return
56 | }
57 | reply, err := rpc.GetWork()
58 | if err != nil {
59 | log.Printf("Error while refreshing block template on %s: %s", rpc.Name, err)
60 | return
61 | }
62 | // No need to update, we have fresh job
63 | if t != nil && t.Header == reply[0] {
64 | return
65 | }
66 |
67 | pendingReply.Difficulty = util.ToHex(s.config.Proxy.Difficulty)
68 |
69 | newTemplate := BlockTemplate{
70 | Header: reply[0],
71 | Seed: reply[1],
72 | Target: reply[2],
73 | Height: height,
74 | Difficulty: big.NewInt(diff),
75 | GetPendingBlockCache: pendingReply,
76 | headers: make(map[string]heightDiffPair),
77 | }
78 | // Copy job backlog and add current one
79 | newTemplate.headers[reply[0]] = heightDiffPair{
80 | diff: util.TargetHexToDiff(reply[2]),
81 | height: height,
82 | }
83 | if t != nil {
84 | for k, v := range t.headers {
85 | if v.height > height-maxBacklog {
86 | newTemplate.headers[k] = v
87 | }
88 | }
89 | }
90 | s.blockTemplate.Store(&newTemplate)
91 | log.Printf("New block to mine on %s at height %d / %s", rpc.Name, height, reply[0][0:10])
92 |
93 | // Stratum
94 | if s.config.Proxy.Stratum.Enabled {
95 | go s.broadcastNewJobs()
96 | }
97 | }
98 |
99 | func (s *ProxyServer) fetchPendingBlock() (*rpc.GetBlockReplyPart, uint64, int64, error) {
100 | rpc := s.rpc()
101 | reply, err := rpc.GetPendingBlock()
102 | if err != nil {
103 | log.Printf("Error while refreshing pending block on %s: %s", rpc.Name, err)
104 | return nil, 0, 0, err
105 | }
106 | blockNumber, err := strconv.ParseUint(strings.Replace(reply.Number, "0x", "", -1), 16, 64)
107 | if err != nil {
108 | log.Println("Can't parse pending block number")
109 | return nil, 0, 0, err
110 | }
111 | blockDiff, err := strconv.ParseInt(strings.Replace(reply.Difficulty, "0x", "", -1), 16, 64)
112 | if err != nil {
113 | log.Println("Can't parse pending block difficulty")
114 | return nil, 0, 0, err
115 | }
116 | return reply, blockNumber, blockDiff, nil
117 | }
118 |
--------------------------------------------------------------------------------
/proxy/config.go:
--------------------------------------------------------------------------------
1 | package proxy
2 |
3 | import (
4 | "github.com/sammy007/open-ethereum-pool/api"
5 | "github.com/sammy007/open-ethereum-pool/payouts"
6 | "github.com/sammy007/open-ethereum-pool/policy"
7 | "github.com/sammy007/open-ethereum-pool/storage"
8 | )
9 |
10 | type Config struct {
11 | Name string `json:"name"`
12 | Proxy Proxy `json:"proxy"`
13 | Api api.ApiConfig `json:"api"`
14 | Upstream []Upstream `json:"upstream"`
15 | UpstreamCheckInterval string `json:"upstreamCheckInterval"`
16 |
17 | Threads int `json:"threads"`
18 |
19 | Coin string `json:"coin"`
20 | Redis storage.Config `json:"redis"`
21 |
22 | BlockUnlocker payouts.UnlockerConfig `json:"unlocker"`
23 | Payouts payouts.PayoutsConfig `json:"payouts"`
24 |
25 | NewrelicName string `json:"newrelicName"`
26 | NewrelicKey string `json:"newrelicKey"`
27 | NewrelicVerbose bool `json:"newrelicVerbose"`
28 | NewrelicEnabled bool `json:"newrelicEnabled"`
29 | }
30 |
31 | type Proxy struct {
32 | Enabled bool `json:"enabled"`
33 | Listen string `json:"listen"`
34 | LimitHeadersSize int `json:"limitHeadersSize"`
35 | LimitBodySize int64 `json:"limitBodySize"`
36 | BehindReverseProxy bool `json:"behindReverseProxy"`
37 | BlockRefreshInterval string `json:"blockRefreshInterval"`
38 | Difficulty int64 `json:"difficulty"`
39 | StateUpdateInterval string `json:"stateUpdateInterval"`
40 | HashrateExpiration string `json:"hashrateExpiration"`
41 |
42 | Policy policy.Config `json:"policy"`
43 |
44 | MaxFails int64 `json:"maxFails"`
45 | HealthCheck bool `json:"healthCheck"`
46 |
47 | Stratum Stratum `json:"stratum"`
48 | }
49 |
50 | type Stratum struct {
51 | Enabled bool `json:"enabled"`
52 | Listen string `json:"listen"`
53 | Timeout string `json:"timeout"`
54 | MaxConn int `json:"maxConn"`
55 | }
56 |
57 | type Upstream struct {
58 | Name string `json:"name"`
59 | Url string `json:"url"`
60 | Timeout string `json:"timeout"`
61 | }
62 |
--------------------------------------------------------------------------------
/proxy/handlers.go:
--------------------------------------------------------------------------------
1 | package proxy
2 |
3 | import (
4 | "log"
5 | "regexp"
6 | "strings"
7 |
8 | "github.com/sammy007/open-ethereum-pool/rpc"
9 | "github.com/sammy007/open-ethereum-pool/util"
10 | )
11 |
12 | // Allow only lowercase hexadecimal with 0x prefix
13 | var noncePattern = regexp.MustCompile("^0x[0-9a-f]{16}$")
14 | var hashPattern = regexp.MustCompile("^0x[0-9a-f]{64}$")
15 | var workerPattern = regexp.MustCompile("^[0-9a-zA-Z-_]{1,8}$")
16 |
17 | // Stratum
18 | func (s *ProxyServer) handleLoginRPC(cs *Session, params []string, id string) (bool, *ErrorReply) {
19 | if len(params) == 0 {
20 | return false, &ErrorReply{Code: -1, Message: "Invalid params"}
21 | }
22 |
23 | login := strings.ToLower(params[0])
24 | if !util.IsValidHexAddress(login) {
25 | return false, &ErrorReply{Code: -1, Message: "Invalid login"}
26 | }
27 | if !s.policy.ApplyLoginPolicy(login, cs.ip) {
28 | return false, &ErrorReply{Code: -1, Message: "You are blacklisted"}
29 | }
30 | cs.login = login
31 | s.registerSession(cs)
32 | log.Printf("Stratum miner connected %v@%v", login, cs.ip)
33 | return true, nil
34 | }
35 |
36 | func (s *ProxyServer) handleGetWorkRPC(cs *Session) ([]string, *ErrorReply) {
37 | t := s.currentBlockTemplate()
38 | if t == nil || len(t.Header) == 0 || s.isSick() {
39 | return nil, &ErrorReply{Code: 0, Message: "Work not ready"}
40 | }
41 | return []string{t.Header, t.Seed, s.diff}, nil
42 | }
43 |
44 | // Stratum
45 | func (s *ProxyServer) handleTCPSubmitRPC(cs *Session, id string, params []string) (bool, *ErrorReply) {
46 | s.sessionsMu.RLock()
47 | _, ok := s.sessions[cs]
48 | s.sessionsMu.RUnlock()
49 |
50 | if !ok {
51 | return false, &ErrorReply{Code: 25, Message: "Not subscribed"}
52 | }
53 | return s.handleSubmitRPC(cs, cs.login, id, params)
54 | }
55 |
56 | func (s *ProxyServer) handleSubmitRPC(cs *Session, login, id string, params []string) (bool, *ErrorReply) {
57 | if !workerPattern.MatchString(id) {
58 | id = "0"
59 | }
60 | if len(params) != 3 {
61 | s.policy.ApplyMalformedPolicy(cs.ip)
62 | log.Printf("Malformed params from %s@%s %v", login, cs.ip, params)
63 | return false, &ErrorReply{Code: -1, Message: "Invalid params"}
64 | }
65 |
66 | if !noncePattern.MatchString(params[0]) || !hashPattern.MatchString(params[1]) || !hashPattern.MatchString(params[2]) {
67 | s.policy.ApplyMalformedPolicy(cs.ip)
68 | log.Printf("Malformed PoW result from %s@%s %v", login, cs.ip, params)
69 | return false, &ErrorReply{Code: -1, Message: "Malformed PoW result"}
70 | }
71 | t := s.currentBlockTemplate()
72 | exist, validShare := s.processShare(login, id, cs.ip, t, params)
73 | ok := s.policy.ApplySharePolicy(cs.ip, !exist && validShare)
74 |
75 | if exist {
76 | log.Printf("Duplicate share from %s@%s %v", login, cs.ip, params)
77 | return false, &ErrorReply{Code: 22, Message: "Duplicate share"}
78 | }
79 |
80 | if !validShare {
81 | log.Printf("Invalid share from %s@%s", login, cs.ip)
82 | // Bad shares limit reached, return error and close
83 | if !ok {
84 | return false, &ErrorReply{Code: 23, Message: "Invalid share"}
85 | }
86 | return false, nil
87 | }
88 | log.Printf("Valid share from %s@%s", login, cs.ip)
89 |
90 | if !ok {
91 | return true, &ErrorReply{Code: -1, Message: "High rate of invalid shares"}
92 | }
93 | return true, nil
94 | }
95 |
96 | func (s *ProxyServer) handleGetBlockByNumberRPC() *rpc.GetBlockReplyPart {
97 | t := s.currentBlockTemplate()
98 | var reply *rpc.GetBlockReplyPart
99 | if t != nil {
100 | reply = t.GetPendingBlockCache
101 | }
102 | return reply
103 | }
104 |
105 | func (s *ProxyServer) handleUnknownRPC(cs *Session, m string) *ErrorReply {
106 | log.Printf("Unknown request method %s from %s", m, cs.ip)
107 | s.policy.ApplyMalformedPolicy(cs.ip)
108 | return &ErrorReply{Code: -3, Message: "Method not found"}
109 | }
110 |
--------------------------------------------------------------------------------
/proxy/miner.go:
--------------------------------------------------------------------------------
1 | package proxy
2 |
3 | import (
4 | "log"
5 | "math/big"
6 | "strconv"
7 | "strings"
8 |
9 | "github.com/ethereum/ethash"
10 | "github.com/ethereum/go-ethereum/common"
11 | )
12 |
13 | var hasher = ethash.New()
14 |
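// processShare returns (exist, valid): exist flags a duplicate submission, valid reports
// whether the PoW meets the pool share difficulty. If the solution also meets the network
// difficulty of the matched job, it is submitted upstream as a block candidate.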
15 | func (s *ProxyServer) processShare(login, id, ip string, t *BlockTemplate, params []string) (bool, bool) {
16 | nonceHex := params[0]
17 | hashNoNonce := params[1]
18 | mixDigest := params[2]
19 | nonce, _ := strconv.ParseUint(strings.Replace(nonceHex, "0x", "", -1), 16, 64)
20 | shareDiff := s.config.Proxy.Difficulty
21 |
22 | h, ok := t.headers[hashNoNonce]
23 | if !ok {
24 | log.Printf("Stale share from %v@%v", login, ip)
25 | return false, false
26 | }
27 |
28 | share := Block{
29 | number: h.height,
30 | hashNoNonce: common.HexToHash(hashNoNonce),
31 | difficulty: big.NewInt(shareDiff),
32 | nonce: nonce,
33 | mixDigest: common.HexToHash(mixDigest),
34 | }
35 |
36 | block := Block{
37 | number: h.height,
38 | hashNoNonce: common.HexToHash(hashNoNonce),
39 | difficulty: h.diff,
40 | nonce: nonce,
41 | mixDigest: common.HexToHash(mixDigest),
42 | }
43 |
44 | if !hasher.Verify(share) {
45 | return false, false
46 | }
47 |
48 | if hasher.Verify(block) {
49 | ok, err := s.rpc().SubmitBlock(params)
50 | if err != nil {
51 | log.Printf("Block submission failure at height %v for %v: %v", h.height, t.Header, err)
52 | } else if !ok {
53 | log.Printf("Block rejected at height %v for %v", h.height, t.Header)
54 | return false, false
55 | } else {
56 | s.fetchBlockTemplate()
57 | exist, err := s.backend.WriteBlock(login, id, params, shareDiff, h.diff.Int64(), h.height, s.hashrateExpiration)
58 | if exist {
59 | return true, false
60 | }
61 | if err != nil {
62 | log.Println("Failed to insert block candidate into backend:", err)
63 | } else {
64 | log.Printf("Inserted block %v to backend", h.height)
65 | }
66 | log.Printf("Block found by miner %v@%v at height %d", login, ip, h.height)
67 | }
68 | } else {
69 | exist, err := s.backend.WriteShare(login, id, params, shareDiff, h.height, s.hashrateExpiration)
70 | if exist {
71 | return true, false
72 | }
73 | if err != nil {
74 | log.Println("Failed to insert share data into backend:", err)
75 | }
76 | }
77 | return false, true
78 | }
79 |
--------------------------------------------------------------------------------
/proxy/proto.go:
--------------------------------------------------------------------------------
1 | package proxy
2 |
3 | import "encoding/json"
4 |
5 | type JSONRpcReq struct {
6 | Id json.RawMessage `json:"id"`
7 | Method string `json:"method"`
8 | Params json.RawMessage `json:"params"`
9 | }
10 |
11 | type StratumReq struct {
12 | JSONRpcReq
13 | Worker string `json:"worker"`
14 | }
15 |
16 | // Stratum
17 | type JSONPushMessage struct {
18 | // FIXME: Temporarily add ID for Claymore compliance
19 | Id int64 `json:"id"`
20 | Version string `json:"jsonrpc"`
21 | Result interface{} `json:"result"`
22 | }
23 |
24 | type JSONRpcResp struct {
25 | Id json.RawMessage `json:"id"`
26 | Version string `json:"jsonrpc"`
27 | Result interface{} `json:"result"`
28 | Error interface{} `json:"error,omitempty"`
29 | }
30 |
31 | type SubmitReply struct {
32 | Status string `json:"status"`
33 | }
34 |
35 | type ErrorReply struct {
36 | Code int `json:"code"`
37 | Message string `json:"message"`
38 | }
39 |
--------------------------------------------------------------------------------
/proxy/proxy.go:
--------------------------------------------------------------------------------
1 | package proxy
2 |
3 | import (
4 | "encoding/json"
5 | "io"
6 | "log"
7 | "net"
8 | "net/http"
9 | "strings"
10 | "sync"
11 | "sync/atomic"
12 | "time"
13 |
14 | "github.com/gorilla/mux"
15 |
16 | "github.com/sammy007/open-ethereum-pool/policy"
17 | "github.com/sammy007/open-ethereum-pool/rpc"
18 | "github.com/sammy007/open-ethereum-pool/storage"
19 | "github.com/sammy007/open-ethereum-pool/util"
20 | )
21 |
22 | type ProxyServer struct {
23 | config *Config
24 | blockTemplate atomic.Value
25 | upstream int32
26 | upstreams []*rpc.RPCClient
27 | backend *storage.RedisClient
28 | diff string
29 | policy *policy.PolicyServer
30 | hashrateExpiration time.Duration
31 | failsCount int64
32 |
33 | // Stratum
34 | sessionsMu sync.RWMutex
35 | sessions map[*Session]struct{}
36 | timeout time.Duration
37 | }
38 |
39 | type Session struct {
40 | ip string
41 | enc *json.Encoder
42 |
43 | // Stratum
44 | sync.Mutex
45 | conn *net.TCPConn
46 | login string
47 | }
48 |
49 | func NewProxy(cfg *Config, backend *storage.RedisClient) *ProxyServer {
50 | if len(cfg.Name) == 0 {
51 | log.Fatal("You must set instance name")
52 | }
53 | policy := policy.Start(&cfg.Proxy.Policy, backend)
54 |
55 | proxy := &ProxyServer{config: cfg, backend: backend, policy: policy}
56 | proxy.diff = util.GetTargetHex(cfg.Proxy.Difficulty)
57 |
58 | proxy.upstreams = make([]*rpc.RPCClient, len(cfg.Upstream))
59 | for i, v := range cfg.Upstream {
60 | proxy.upstreams[i] = rpc.NewRPCClient(v.Name, v.Url, v.Timeout)
61 | log.Printf("Upstream: %s => %s", v.Name, v.Url)
62 | }
63 | log.Printf("Default upstream: %s => %s", proxy.rpc().Name, proxy.rpc().Url)
64 |
65 | if cfg.Proxy.Stratum.Enabled {
66 | proxy.sessions = make(map[*Session]struct{})
67 | go proxy.ListenTCP()
68 | }
69 |
70 | proxy.fetchBlockTemplate()
71 |
72 | proxy.hashrateExpiration = util.MustParseDuration(cfg.Proxy.HashrateExpiration)
73 |
74 | refreshIntv := util.MustParseDuration(cfg.Proxy.BlockRefreshInterval)
75 | refreshTimer := time.NewTimer(refreshIntv)
76 | log.Printf("Set block refresh every %v", refreshIntv)
77 |
78 | checkIntv := util.MustParseDuration(cfg.UpstreamCheckInterval)
79 | checkTimer := time.NewTimer(checkIntv)
80 |
81 | stateUpdateIntv := util.MustParseDuration(cfg.Proxy.StateUpdateInterval)
82 | stateUpdateTimer := time.NewTimer(stateUpdateIntv)
83 |
84 | go func() {
85 | for {
86 | select {
87 | case <-refreshTimer.C:
88 | proxy.fetchBlockTemplate()
89 | refreshTimer.Reset(refreshIntv)
90 | }
91 | }
92 | }()
93 |
94 | go func() {
95 | for {
96 | select {
97 | case <-checkTimer.C:
98 | proxy.checkUpstreams()
99 | checkTimer.Reset(checkIntv)
100 | }
101 | }
102 | }()
103 |
104 | go func() {
105 | for {
106 | select {
107 | case <-stateUpdateTimer.C:
108 | t := proxy.currentBlockTemplate()
109 | if t != nil {
110 | err := backend.WriteNodeState(cfg.Name, t.Height, t.Difficulty)
111 | if err != nil {
112 | log.Printf("Failed to write node state to backend: %v", err)
113 | proxy.markSick()
114 | } else {
115 | proxy.markOk()
116 | }
117 | }
118 | stateUpdateTimer.Reset(stateUpdateIntv)
119 | }
120 | }
121 | }()
122 |
123 | return proxy
124 | }
125 |
126 | func (s *ProxyServer) Start() {
127 | log.Printf("Starting proxy on %v", s.config.Proxy.Listen)
128 | r := mux.NewRouter()
129 | r.Handle("/{login:0x[0-9a-fA-F]{40}}/{id:[0-9a-zA-Z-_]{1,8}}", s)
130 | r.Handle("/{login:0x[0-9a-fA-F]{40}}", s)
131 | srv := &http.Server{
132 | Addr: s.config.Proxy.Listen,
133 | Handler: r,
134 | MaxHeaderBytes: s.config.Proxy.LimitHeadersSize,
135 | }
136 | err := srv.ListenAndServe()
137 | if err != nil {
138 | log.Fatalf("Failed to start proxy: %v", err)
139 | }
140 | }
141 |
142 | func (s *ProxyServer) rpc() *rpc.RPCClient {
143 | i := atomic.LoadInt32(&s.upstream)
144 | return s.upstreams[i]
145 | }
146 |
147 | func (s *ProxyServer) checkUpstreams() {
148 | candidate := int32(0)
149 | backup := false
150 |
151 | for i, v := range s.upstreams {
152 | if v.Check() && !backup {
153 | candidate = int32(i)
154 | backup = true
155 | }
156 | }
157 |
158 | 	if atomic.LoadInt32(&s.upstream) != candidate {
159 | log.Printf("Switching to %v upstream", s.upstreams[candidate].Name)
160 | atomic.StoreInt32(&s.upstream, candidate)
161 | }
162 | }
163 |
164 | func (s *ProxyServer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
165 | if r.Method != "POST" {
166 | s.writeError(w, 405, "rpc: POST method required, received "+r.Method)
167 | return
168 | }
169 | ip := s.remoteAddr(r)
170 | if !s.policy.IsBanned(ip) {
171 | s.handleClient(w, r, ip)
172 | }
173 | }
174 |
175 | func (s *ProxyServer) remoteAddr(r *http.Request) string {
176 | if s.config.Proxy.BehindReverseProxy {
177 | ip := r.Header.Get("X-Forwarded-For")
178 | if len(ip) > 0 && net.ParseIP(ip) != nil {
179 | return ip
180 | }
181 | }
182 | ip, _, _ := net.SplitHostPort(r.RemoteAddr)
183 | return ip
184 | }
185 |
186 | func (s *ProxyServer) handleClient(w http.ResponseWriter, r *http.Request, ip string) {
187 | if r.ContentLength > s.config.Proxy.LimitBodySize {
188 | log.Printf("Socket flood from %s", ip)
189 | s.policy.ApplyMalformedPolicy(ip)
190 | http.Error(w, "Request too large", http.StatusExpectationFailed)
191 | return
192 | }
193 | r.Body = http.MaxBytesReader(w, r.Body, s.config.Proxy.LimitBodySize)
194 | defer r.Body.Close()
195 |
196 | cs := &Session{ip: ip, enc: json.NewEncoder(w)}
197 | dec := json.NewDecoder(r.Body)
198 | for {
199 | var req JSONRpcReq
200 | if err := dec.Decode(&req); err == io.EOF {
201 | break
202 | } else if err != nil {
203 | log.Printf("Malformed request from %v: %v", ip, err)
204 | s.policy.ApplyMalformedPolicy(ip)
205 | return
206 | }
207 | cs.handleMessage(s, r, &req)
208 | }
209 | }
210 |
211 | func (cs *Session) handleMessage(s *ProxyServer, r *http.Request, req *JSONRpcReq) {
212 | if req.Id == nil {
213 | log.Printf("Missing RPC id from %s", cs.ip)
214 | s.policy.ApplyMalformedPolicy(cs.ip)
215 | return
216 | }
217 |
218 | vars := mux.Vars(r)
219 | login := strings.ToLower(vars["login"])
220 |
221 | if !util.IsValidHexAddress(login) {
222 | errReply := &ErrorReply{Code: -1, Message: "Invalid login"}
223 | cs.sendError(req.Id, errReply)
224 | return
225 | }
226 | if !s.policy.ApplyLoginPolicy(login, cs.ip) {
227 | errReply := &ErrorReply{Code: -1, Message: "You are blacklisted"}
228 | cs.sendError(req.Id, errReply)
229 | return
230 | }
231 |
232 | // Handle RPC methods
233 | switch req.Method {
234 | case "eth_getWork":
235 | reply, errReply := s.handleGetWorkRPC(cs)
236 | if errReply != nil {
237 | cs.sendError(req.Id, errReply)
238 | break
239 | }
240 | cs.sendResult(req.Id, &reply)
241 | case "eth_submitWork":
242 | if req.Params != nil {
243 | var params []string
244 | 			err := json.Unmarshal(req.Params, &params)
245 | if err != nil {
246 | log.Printf("Unable to parse params from %v", cs.ip)
247 | s.policy.ApplyMalformedPolicy(cs.ip)
248 | break
249 | }
250 | reply, errReply := s.handleSubmitRPC(cs, login, vars["id"], params)
251 | if errReply != nil {
252 | cs.sendError(req.Id, errReply)
253 | break
254 | }
255 | cs.sendResult(req.Id, &reply)
256 | } else {
257 | s.policy.ApplyMalformedPolicy(cs.ip)
258 | errReply := &ErrorReply{Code: -1, Message: "Malformed request"}
259 | cs.sendError(req.Id, errReply)
260 | }
261 | case "eth_getBlockByNumber":
262 | reply := s.handleGetBlockByNumberRPC()
263 | cs.sendResult(req.Id, reply)
264 | case "eth_submitHashrate":
265 | cs.sendResult(req.Id, true)
266 | default:
267 | errReply := s.handleUnknownRPC(cs, req.Method)
268 | cs.sendError(req.Id, errReply)
269 | }
270 | }
271 |
272 | func (cs *Session) sendResult(id json.RawMessage, result interface{}) error {
273 | message := JSONRpcResp{Id: id, Version: "2.0", Error: nil, Result: result}
274 | return cs.enc.Encode(&message)
275 | }
276 |
277 | func (cs *Session) sendError(id json.RawMessage, reply *ErrorReply) error {
278 | message := JSONRpcResp{Id: id, Version: "2.0", Error: reply}
279 | return cs.enc.Encode(&message)
280 | }
281 |
282 | func (s *ProxyServer) writeError(w http.ResponseWriter, status int, msg string) {
283 | 	w.Header().Set("Content-Type", "text/plain; charset=utf-8")
284 | 	w.WriteHeader(status)
285 | }
286 |
287 | func (s *ProxyServer) currentBlockTemplate() *BlockTemplate {
288 | t := s.blockTemplate.Load()
289 | if t != nil {
290 | return t.(*BlockTemplate)
291 | } else {
292 | return nil
293 | }
294 | }
295 |
296 | func (s *ProxyServer) markSick() {
297 | atomic.AddInt64(&s.failsCount, 1)
298 | }
299 |
300 | func (s *ProxyServer) isSick() bool {
301 | x := atomic.LoadInt64(&s.failsCount)
302 | if s.config.Proxy.HealthCheck && x >= s.config.Proxy.MaxFails {
303 | return true
304 | }
305 | return false
306 | }
307 |
308 | func (s *ProxyServer) markOk() {
309 | atomic.StoreInt64(&s.failsCount, 0)
310 | }
311 |
--------------------------------------------------------------------------------
/proxy/stratum.go:
--------------------------------------------------------------------------------
1 | package proxy
2 |
3 | import (
4 | "bufio"
5 | "encoding/json"
6 | "errors"
7 | "io"
8 | "log"
9 | "net"
10 | "time"
11 |
12 | "github.com/sammy007/open-ethereum-pool/util"
13 | )
14 |
15 | const (
16 | MaxReqSize = 1024
17 | )
18 |
19 | func (s *ProxyServer) ListenTCP() {
20 | timeout := util.MustParseDuration(s.config.Proxy.Stratum.Timeout)
21 | s.timeout = timeout
22 |
23 | addr, err := net.ResolveTCPAddr("tcp", s.config.Proxy.Stratum.Listen)
24 | if err != nil {
25 | log.Fatalf("Error: %v", err)
26 | }
27 | server, err := net.ListenTCP("tcp", addr)
28 | if err != nil {
29 | log.Fatalf("Error: %v", err)
30 | }
31 | defer server.Close()
32 |
33 | log.Printf("Stratum listening on %s", s.config.Proxy.Stratum.Listen)
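	// Buffered channel used as a counting semaphore: at most maxConn stratum connections are handled concurrently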
34 | var accept = make(chan int, s.config.Proxy.Stratum.MaxConn)
35 | n := 0
36 |
37 | for {
38 | conn, err := server.AcceptTCP()
39 | if err != nil {
40 | continue
41 | }
42 | conn.SetKeepAlive(true)
43 |
44 | ip, _, _ := net.SplitHostPort(conn.RemoteAddr().String())
45 |
46 | if s.policy.IsBanned(ip) || !s.policy.ApplyLimitPolicy(ip) {
47 | conn.Close()
48 | continue
49 | }
50 | n += 1
51 | cs := &Session{conn: conn, ip: ip}
52 |
53 | accept <- n
54 | go func(cs *Session) {
55 | err = s.handleTCPClient(cs)
56 | if err != nil {
57 | s.removeSession(cs)
58 | conn.Close()
59 | }
60 | <-accept
61 | }(cs)
62 | }
63 | }
64 |
65 | func (s *ProxyServer) handleTCPClient(cs *Session) error {
66 | cs.enc = json.NewEncoder(cs.conn)
67 | connbuff := bufio.NewReaderSize(cs.conn, MaxReqSize)
68 | s.setDeadline(cs.conn)
69 |
70 | for {
71 | data, isPrefix, err := connbuff.ReadLine()
72 | if isPrefix {
73 | log.Printf("Socket flood detected from %s", cs.ip)
74 | s.policy.BanClient(cs.ip)
75 | return err
76 | } else if err == io.EOF {
77 | log.Printf("Client %s disconnected", cs.ip)
78 | s.removeSession(cs)
79 | break
80 | } else if err != nil {
81 | log.Printf("Error reading from socket: %v", err)
82 | return err
83 | }
84 |
85 | if len(data) > 1 {
86 | var req StratumReq
87 | err = json.Unmarshal(data, &req)
88 | if err != nil {
89 | s.policy.ApplyMalformedPolicy(cs.ip)
90 | log.Printf("Malformed stratum request from %s: %v", cs.ip, err)
91 | return err
92 | }
93 | s.setDeadline(cs.conn)
94 | err = cs.handleTCPMessage(s, &req)
95 | if err != nil {
96 | return err
97 | }
98 | }
99 | }
100 | return nil
101 | }
102 |
103 | func (cs *Session) handleTCPMessage(s *ProxyServer, req *StratumReq) error {
104 | // Handle RPC methods
105 | switch req.Method {
106 | case "eth_submitLogin":
107 | var params []string
108 | 		err := json.Unmarshal(req.Params, &params)
109 | if err != nil {
110 | log.Println("Malformed stratum request params from", cs.ip)
111 | return err
112 | }
113 | reply, errReply := s.handleLoginRPC(cs, params, req.Worker)
114 | if errReply != nil {
115 | return cs.sendTCPError(req.Id, errReply)
116 | }
117 | return cs.sendTCPResult(req.Id, reply)
118 | case "eth_getWork":
119 | reply, errReply := s.handleGetWorkRPC(cs)
120 | if errReply != nil {
121 | return cs.sendTCPError(req.Id, errReply)
122 | }
123 | return cs.sendTCPResult(req.Id, &reply)
124 | case "eth_submitWork":
125 | var params []string
126 | 		err := json.Unmarshal(req.Params, &params)
127 | if err != nil {
128 | log.Println("Malformed stratum request params from", cs.ip)
129 | return err
130 | }
131 | reply, errReply := s.handleTCPSubmitRPC(cs, req.Worker, params)
132 | if errReply != nil {
133 | return cs.sendTCPError(req.Id, errReply)
134 | }
135 | return cs.sendTCPResult(req.Id, &reply)
136 | case "eth_submitHashrate":
137 | return cs.sendTCPResult(req.Id, true)
138 | default:
139 | errReply := s.handleUnknownRPC(cs, req.Method)
140 | return cs.sendTCPError(req.Id, errReply)
141 | }
142 | }
143 |
144 | func (cs *Session) sendTCPResult(id json.RawMessage, result interface{}) error {
145 | cs.Lock()
146 | defer cs.Unlock()
147 |
148 | message := JSONRpcResp{Id: id, Version: "2.0", Error: nil, Result: result}
149 | return cs.enc.Encode(&message)
150 | }
151 |
152 | func (cs *Session) pushNewJob(result interface{}) error {
153 | cs.Lock()
154 | defer cs.Unlock()
155 | // FIXME: Temporarily add ID for Claymore compliance
156 | message := JSONPushMessage{Version: "2.0", Result: result, Id: 0}
157 | return cs.enc.Encode(&message)
158 | }
159 |
160 | func (cs *Session) sendTCPError(id json.RawMessage, reply *ErrorReply) error {
161 | cs.Lock()
162 | defer cs.Unlock()
163 |
164 | message := JSONRpcResp{Id: id, Version: "2.0", Error: reply}
165 | err := cs.enc.Encode(&message)
166 | if err != nil {
167 | return err
168 | }
169 | return errors.New(reply.Message)
170 | }
171 |
172 | func (self *ProxyServer) setDeadline(conn *net.TCPConn) {
173 | conn.SetDeadline(time.Now().Add(self.timeout))
174 | }
175 |
176 | func (s *ProxyServer) registerSession(cs *Session) {
177 | s.sessionsMu.Lock()
178 | defer s.sessionsMu.Unlock()
179 | s.sessions[cs] = struct{}{}
180 | }
181 |
182 | func (s *ProxyServer) removeSession(cs *Session) {
183 | s.sessionsMu.Lock()
184 | defer s.sessionsMu.Unlock()
185 | delete(s.sessions, cs)
186 | }
187 |
188 | func (s *ProxyServer) broadcastNewJobs() {
189 | t := s.currentBlockTemplate()
190 | if t == nil || len(t.Header) == 0 || s.isSick() {
191 | return
192 | }
193 | reply := []string{t.Header, t.Seed, s.diff}
194 |
195 | s.sessionsMu.RLock()
196 | defer s.sessionsMu.RUnlock()
197 |
198 | count := len(s.sessions)
199 | log.Printf("Broadcasting new job to %v stratum miners", count)
200 |
201 | start := time.Now()
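	// bcast acts as a semaphore, limiting in-flight job pushes to 1024 goroutines at a time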
202 | bcast := make(chan int, 1024)
203 | n := 0
204 |
205 | for m, _ := range s.sessions {
206 | n++
207 | bcast <- n
208 |
209 | go func(cs *Session) {
210 | err := cs.pushNewJob(&reply)
211 | <-bcast
212 | if err != nil {
213 | log.Printf("Job transmit error to %v@%v: %v", cs.login, cs.ip, err)
214 | s.removeSession(cs)
215 | } else {
216 | s.setDeadline(cs.conn)
217 | }
218 | }(m)
219 | }
220 | log.Printf("Jobs broadcast finished %s", time.Since(start))
221 | }
222 |
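For orientation, a rough sketch of the line-delimited JSON this stratum endpoint speaks (illustrative address and placeholder hashes; each request must fit on a single line within MaxReqSize, and a missing or invalid worker name falls back to "0" on submit):

```javascript
// miner -> pool: authorize with a payout address (eth-proxy style login)
{"id": 1, "jsonrpc": "2.0", "worker": "rig-1", "method": "eth_submitLogin", "params": ["0xb85150eb365e7df0941f0cf08235f987ba91506a"]}
// pool -> miner
{"id": 1, "jsonrpc": "2.0", "result": true}

// miner -> pool: request a job; the reply is [header hash, seed hash, share target]
{"id": 2, "jsonrpc": "2.0", "method": "eth_getWork", "params": []}

// miner -> pool: submit a solution as [nonce, header hash, mix digest]
{"id": 3, "jsonrpc": "2.0", "worker": "rig-1", "method": "eth_submitWork", "params": ["0x4bd1b2c63d499a0d", "0x<64 hex chars>", "0x<64 hex chars>"]}
```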
--------------------------------------------------------------------------------
/rpc/rpc.go:
--------------------------------------------------------------------------------
1 | package rpc
2 |
3 | import (
4 | "bytes"
5 | "crypto/sha256"
6 | "encoding/json"
7 | "errors"
8 | "fmt"
9 | "math/big"
10 | "net/http"
11 | "strconv"
12 | "strings"
13 | "sync"
14 |
15 | "github.com/ethereum/go-ethereum/common"
16 |
17 | "github.com/sammy007/open-ethereum-pool/util"
18 | )
19 |
20 | type RPCClient struct {
21 | sync.RWMutex
22 | Url string
23 | Name string
24 | sick bool
25 | sickRate int
26 | successRate int
27 | client *http.Client
28 | }
29 |
30 | type GetBlockReply struct {
31 | Number string `json:"number"`
32 | Hash string `json:"hash"`
33 | Nonce string `json:"nonce"`
34 | Miner string `json:"miner"`
35 | Difficulty string `json:"difficulty"`
36 | GasLimit string `json:"gasLimit"`
37 | GasUsed string `json:"gasUsed"`
38 | Transactions []Tx `json:"transactions"`
39 | Uncles []string `json:"uncles"`
40 | // https://github.com/ethereum/EIPs/issues/95
41 | SealFields []string `json:"sealFields"`
42 | }
43 |
44 | type GetBlockReplyPart struct {
45 | Number string `json:"number"`
46 | Difficulty string `json:"difficulty"`
47 | }
48 |
49 | const receiptStatusSuccessful = "0x1"
50 |
51 | type TxReceipt struct {
52 | TxHash string `json:"transactionHash"`
53 | GasUsed string `json:"gasUsed"`
54 | BlockHash string `json:"blockHash"`
55 | Status string `json:"status"`
56 | }
57 |
58 | func (r *TxReceipt) Confirmed() bool {
59 | return len(r.BlockHash) > 0
60 | }
61 |
62 | // Use with previous method
63 | func (r *TxReceipt) Successful() bool {
64 | if len(r.Status) > 0 {
65 | return r.Status == receiptStatusSuccessful
66 | }
67 | return true
68 | }
69 |
70 | type Tx struct {
71 | Gas string `json:"gas"`
72 | GasPrice string `json:"gasPrice"`
73 | Hash string `json:"hash"`
74 | }
75 |
76 | type JSONRpcResp struct {
77 | Id *json.RawMessage `json:"id"`
78 | Result *json.RawMessage `json:"result"`
79 | Error map[string]interface{} `json:"error"`
80 | }
81 |
82 | func NewRPCClient(name, url, timeout string) *RPCClient {
83 | rpcClient := &RPCClient{Name: name, Url: url}
84 | timeoutIntv := util.MustParseDuration(timeout)
85 | rpcClient.client = &http.Client{
86 | Timeout: timeoutIntv,
87 | }
88 | return rpcClient
89 | }
90 |
91 | func (r *RPCClient) GetWork() ([]string, error) {
92 | rpcResp, err := r.doPost(r.Url, "eth_getWork", []string{})
93 | if err != nil {
94 | return nil, err
95 | }
96 | var reply []string
97 | err = json.Unmarshal(*rpcResp.Result, &reply)
98 | return reply, err
99 | }
100 |
101 | func (r *RPCClient) GetPendingBlock() (*GetBlockReplyPart, error) {
102 | rpcResp, err := r.doPost(r.Url, "eth_getBlockByNumber", []interface{}{"pending", false})
103 | if err != nil {
104 | return nil, err
105 | }
106 | if rpcResp.Result != nil {
107 | var reply *GetBlockReplyPart
108 | err = json.Unmarshal(*rpcResp.Result, &reply)
109 | return reply, err
110 | }
111 | return nil, nil
112 | }
113 |
114 | func (r *RPCClient) GetBlockByHeight(height int64) (*GetBlockReply, error) {
115 | params := []interface{}{fmt.Sprintf("0x%x", height), true}
116 | return r.getBlockBy("eth_getBlockByNumber", params)
117 | }
118 |
119 | func (r *RPCClient) GetBlockByHash(hash string) (*GetBlockReply, error) {
120 | params := []interface{}{hash, true}
121 | return r.getBlockBy("eth_getBlockByHash", params)
122 | }
123 |
124 | func (r *RPCClient) GetUncleByBlockNumberAndIndex(height int64, index int) (*GetBlockReply, error) {
125 | params := []interface{}{fmt.Sprintf("0x%x", height), fmt.Sprintf("0x%x", index)}
126 | return r.getBlockBy("eth_getUncleByBlockNumberAndIndex", params)
127 | }
128 |
129 | func (r *RPCClient) getBlockBy(method string, params []interface{}) (*GetBlockReply, error) {
130 | rpcResp, err := r.doPost(r.Url, method, params)
131 | if err != nil {
132 | return nil, err
133 | }
134 | if rpcResp.Result != nil {
135 | var reply *GetBlockReply
136 | err = json.Unmarshal(*rpcResp.Result, &reply)
137 | return reply, err
138 | }
139 | return nil, nil
140 | }
141 |
142 | func (r *RPCClient) GetTxReceipt(hash string) (*TxReceipt, error) {
143 | rpcResp, err := r.doPost(r.Url, "eth_getTransactionReceipt", []string{hash})
144 | if err != nil {
145 | return nil, err
146 | }
147 | if rpcResp.Result != nil {
148 | var reply *TxReceipt
149 | err = json.Unmarshal(*rpcResp.Result, &reply)
150 | return reply, err
151 | }
152 | return nil, nil
153 | }
154 |
155 | func (r *RPCClient) SubmitBlock(params []string) (bool, error) {
156 | rpcResp, err := r.doPost(r.Url, "eth_submitWork", params)
157 | if err != nil {
158 | return false, err
159 | }
160 | var reply bool
161 | err = json.Unmarshal(*rpcResp.Result, &reply)
162 | return reply, err
163 | }
164 |
165 | func (r *RPCClient) GetBalance(address string) (*big.Int, error) {
166 | rpcResp, err := r.doPost(r.Url, "eth_getBalance", []string{address, "latest"})
167 | if err != nil {
168 | return nil, err
169 | }
170 | var reply string
171 | err = json.Unmarshal(*rpcResp.Result, &reply)
172 | if err != nil {
173 | return nil, err
174 | }
175 | return util.String2Big(reply), err
176 | }
177 |
178 | func (r *RPCClient) Sign(from string, s string) (string, error) {
179 | hash := sha256.Sum256([]byte(s))
180 | rpcResp, err := r.doPost(r.Url, "eth_sign", []string{from, common.ToHex(hash[:])})
181 | var reply string
182 | if err != nil {
183 | return reply, err
184 | }
185 | err = json.Unmarshal(*rpcResp.Result, &reply)
186 | if err != nil {
187 | return reply, err
188 | }
189 | if util.IsZeroHash(reply) {
190 | err = errors.New("Can't sign message, perhaps account is locked")
191 | }
192 | return reply, err
193 | }
194 |
195 | func (r *RPCClient) GetPeerCount() (int64, error) {
196 | rpcResp, err := r.doPost(r.Url, "net_peerCount", nil)
197 | if err != nil {
198 | return 0, err
199 | }
200 | var reply string
201 | err = json.Unmarshal(*rpcResp.Result, &reply)
202 | if err != nil {
203 | return 0, err
204 | }
205 | return strconv.ParseInt(strings.Replace(reply, "0x", "", -1), 16, 64)
206 | }
207 |
208 | func (r *RPCClient) SendTransaction(from, to, gas, gasPrice, value string, autoGas bool) (string, error) {
209 | params := map[string]string{
210 | "from": from,
211 | "to": to,
212 | "value": value,
213 | }
214 | if !autoGas {
215 | params["gas"] = gas
216 | params["gasPrice"] = gasPrice
217 | }
218 | rpcResp, err := r.doPost(r.Url, "eth_sendTransaction", []interface{}{params})
219 | var reply string
220 | if err != nil {
221 | return reply, err
222 | }
223 | err = json.Unmarshal(*rpcResp.Result, &reply)
224 | if err != nil {
225 | return reply, err
226 | }
227 | 	/* There is an inconsistency in the "standard": geth returns an error if it can't unlock the signer account,
228 | 	 * but Parity returns a zero hash 0x000... if it can't send the tx, so we must handle this case.
229 | * https://github.com/ethereum/wiki/wiki/JSON-RPC#returns-22
230 | */
231 | if util.IsZeroHash(reply) {
232 | err = errors.New("transaction is not yet available")
233 | }
234 | return reply, err
235 | }
236 |
237 | func (r *RPCClient) doPost(url string, method string, params interface{}) (*JSONRpcResp, error) {
238 | jsonReq := map[string]interface{}{"jsonrpc": "2.0", "method": method, "params": params, "id": 0}
239 | data, _ := json.Marshal(jsonReq)
240 |
241 | req, err := http.NewRequest("POST", url, bytes.NewBuffer(data))
242 | 	req.Header.Set("Content-Length", strconv.Itoa(len(data)))
243 | req.Header.Set("Content-Type", "application/json")
244 | req.Header.Set("Accept", "application/json")
245 |
246 | resp, err := r.client.Do(req)
247 | if err != nil {
248 | r.markSick()
249 | return nil, err
250 | }
251 | defer resp.Body.Close()
252 |
253 | var rpcResp *JSONRpcResp
254 | err = json.NewDecoder(resp.Body).Decode(&rpcResp)
255 | if err != nil {
256 | r.markSick()
257 | return nil, err
258 | }
259 | if rpcResp.Error != nil {
260 | r.markSick()
261 | return nil, errors.New(rpcResp.Error["message"].(string))
262 | }
263 | return rpcResp, err
264 | }
265 |
266 | func (r *RPCClient) Check() bool {
267 | _, err := r.GetWork()
268 | if err != nil {
269 | return false
270 | }
271 | r.markAlive()
272 | return !r.Sick()
273 | }
274 |
275 | func (r *RPCClient) Sick() bool {
276 | r.RLock()
277 | defer r.RUnlock()
278 | return r.sick
279 | }
280 |
281 | func (r *RPCClient) markSick() {
282 | r.Lock()
283 | r.sickRate++
284 | r.successRate = 0
285 | if r.sickRate >= 5 {
286 | r.sick = true
287 | }
288 | r.Unlock()
289 | }
290 |
291 | func (r *RPCClient) markAlive() {
292 | r.Lock()
293 | r.successRate++
294 | if r.successRate >= 5 {
295 | r.sick = false
296 | r.sickRate = 0
297 | r.successRate = 0
298 | }
299 | r.Unlock()
300 | }
301 |
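For reference, a minimal sketch of how the health-checking surface of this client (Check, Sick, GetPeerCount) might be driven from a monitoring goroutine. This is illustrative only: it assumes it compiles alongside the client in the same rpc package, and the polling interval is arbitrary; only the methods and the Url field come from the code above.

```go
// health.go (hypothetical, not part of the repository; assumes package rpc)
package rpc

import (
	"log"
	"time"
)

// monitorUpstream periodically re-checks a node. Check() issues eth_getWork and,
// via markAlive/markSick above, flips the client back to healthy only after
// enough consecutive successful checks.
func monitorUpstream(r *RPCClient, interval time.Duration) {
	for range time.Tick(interval) {
		if !r.Check() {
			log.Printf("upstream %s is sick, skipping", r.Url)
			continue
		}
		peers, err := r.GetPeerCount()
		if err != nil {
			log.Printf("net_peerCount on %s failed: %v", r.Url, err)
			continue
		}
		log.Printf("upstream %s healthy, %d peers", r.Url, peers)
	}
}
```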
--------------------------------------------------------------------------------
/storage/redis_test.go:
--------------------------------------------------------------------------------
1 | package storage
2 |
3 | import (
4 | "os"
5 | "reflect"
6 | "strconv"
7 | "testing"
8 |
9 | "gopkg.in/redis.v3"
10 | )
11 |
12 | var r *RedisClient
13 |
14 | const prefix = "test"
15 |
16 | func TestMain(m *testing.M) {
17 | r = NewRedisClient(&Config{Endpoint: "127.0.0.1:6379"}, prefix)
18 | reset()
19 | c := m.Run()
20 | reset()
21 | os.Exit(c)
22 | }
23 |
24 | func TestWriteShareCheckExist(t *testing.T) {
25 | reset()
26 |
27 | exist, _ := r.WriteShare("x", "x", []string{"0x0", "0x0", "0x0"}, 10, 1008, 0)
28 | if exist {
29 | t.Error("PoW must not exist")
30 | }
31 | exist, _ = r.WriteShare("x", "x", []string{"0x0", "0x1", "0x0"}, 10, 1008, 0)
32 | if exist {
33 | t.Error("PoW must not exist")
34 | }
35 | exist, _ = r.WriteShare("x", "x", []string{"0x0", "0x0", "0x1"}, 100, 1010, 0)
36 | if exist {
37 | t.Error("PoW must not exist")
38 | }
39 | exist, _ = r.WriteShare("z", "x", []string{"0x0", "0x0", "0x1"}, 100, 1016, 0)
40 | if !exist {
41 | t.Error("PoW must exist")
42 | }
43 | exist, _ = r.WriteShare("x", "x", []string{"0x0", "0x0", "0x1"}, 100, 1025, 0)
44 | if exist {
45 | t.Error("PoW must not exist")
46 | }
47 | }
48 |
49 | func TestGetPayees(t *testing.T) {
50 | reset()
51 |
52 | n := 256
53 | for i := 0; i < n; i++ {
54 | r.client.HSet(r.formatKey("miners", strconv.Itoa(i)), "balance", strconv.Itoa(i))
55 | }
56 |
57 | var payees []string
58 | payees, _ = r.GetPayees()
59 | if len(payees) != n {
60 | t.Error("Must return all payees")
61 | }
62 | m := make(map[string]struct{})
63 | for _, v := range payees {
64 | m[v] = struct{}{}
65 | }
66 | if len(m) != n {
67 | t.Error("Must be unique list")
68 | }
69 | }
70 |
71 | func TestGetBalance(t *testing.T) {
72 | reset()
73 |
74 | r.client.HSet(r.formatKey("miners:x"), "balance", "750")
75 |
76 | v, _ := r.GetBalance("x")
77 | if v != 750 {
78 | t.Error("Must return balance")
79 | }
80 |
81 | v, err := r.GetBalance("z")
82 | if v != 0 {
83 | t.Error("Must return 0 if account does not exist")
84 | }
85 | if err != nil {
86 | t.Error("Must not return error if account does not exist")
87 | }
88 | }
89 |
90 | func TestLockPayouts(t *testing.T) {
91 | reset()
92 |
93 | r.LockPayouts("x", 1000)
94 | v := r.client.Get("test:payments:lock").Val()
95 | if v != "x:1000" {
96 | t.Errorf("Invalid lock amount: %v", v)
97 | }
98 |
99 | err := r.LockPayouts("x", 100)
100 | if err == nil {
101 | t.Errorf("Must not overwrite lock")
102 | }
103 | }
104 |
105 | func TestUnlockPayouts(t *testing.T) {
106 | reset()
107 |
108 | r.client.Set(r.formatKey("payments:lock"), "x:1000", 0)
109 |
110 | r.UnlockPayouts()
111 | err := r.client.Get(r.formatKey("payments:lock")).Err()
112 | if err != redis.Nil {
113 | t.Errorf("Must release lock")
114 | }
115 | }
116 |
117 | func TestIsPayoutsLocked(t *testing.T) {
118 | reset()
119 |
120 | r.LockPayouts("x", 1000)
121 | if locked, _ := r.IsPayoutsLocked(); !locked {
122 | t.Errorf("Payouts must be locked")
123 | }
124 | }
125 |
126 | func TestUpdateBalance(t *testing.T) {
127 | reset()
128 |
129 | r.client.HMSetMap(
130 | r.formatKey("miners:x"),
131 | map[string]string{"paid": "50", "balance": "1000"},
132 | )
133 | r.client.HMSetMap(
134 | r.formatKey("finances"),
135 | map[string]string{"paid": "500", "balance": "10000"},
136 | )
137 |
138 | amount := int64(250)
139 | r.UpdateBalance("x", amount)
140 | result := r.client.HGetAllMap(r.formatKey("miners:x")).Val()
141 | if result["pending"] != "250" {
142 | t.Error("Must set pending amount")
143 | }
144 | if result["balance"] != "750" {
145 | t.Error("Must deduct balance")
146 | }
147 | if result["paid"] != "50" {
148 | t.Error("Must not touch paid")
149 | }
150 |
151 | result = r.client.HGetAllMap(r.formatKey("finances")).Val()
152 | if result["pending"] != "250" {
153 | t.Error("Must set pool pending amount")
154 | }
155 | if result["balance"] != "9750" {
156 | t.Error("Must deduct pool balance")
157 | }
158 | if result["paid"] != "500" {
159 | t.Error("Must not touch pool paid")
160 | }
161 |
162 | rank := r.client.ZRank(r.formatKey("payments:pending"), join("x", amount)).Val()
163 | if rank != 0 {
164 | t.Error("Must add pending payment")
165 | }
166 | }
167 |
168 | func TestRollbackBalance(t *testing.T) {
169 | reset()
170 |
171 | r.client.HMSetMap(
172 | r.formatKey("miners:x"),
173 | map[string]string{"paid": "100", "balance": "750", "pending": "250"},
174 | )
175 | r.client.HMSetMap(
176 | r.formatKey("finances"),
177 | map[string]string{"paid": "500", "balance": "10000", "pending": "250"},
178 | )
179 | r.client.ZAdd(r.formatKey("payments:pending"), redis.Z{Score: 1, Member: "xx"})
180 |
181 | amount := int64(250)
182 | r.RollbackBalance("x", amount)
183 | result := r.client.HGetAllMap(r.formatKey("miners:x")).Val()
184 | if result["paid"] != "100" {
185 | t.Error("Must not touch paid")
186 | }
187 | if result["balance"] != "1000" {
188 | t.Error("Must increase balance")
189 | }
190 | if result["pending"] != "0" {
191 | t.Error("Must deduct pending")
192 | }
193 |
194 | result = r.client.HGetAllMap(r.formatKey("finances")).Val()
195 | if result["paid"] != "500" {
196 | t.Error("Must not touch pool paid")
197 | }
198 | if result["balance"] != "10250" {
199 | t.Error("Must increase pool balance")
200 | }
201 | if result["pending"] != "0" {
202 | t.Error("Must deduct pool pending")
203 | }
204 |
205 | err := r.client.ZRank(r.formatKey("payments:pending"), join("x", amount)).Err()
206 | if err != redis.Nil {
207 | t.Errorf("Must remove pending payment")
208 | }
209 | }
210 |
211 | func TestWritePayment(t *testing.T) {
212 | reset()
213 |
214 | r.client.HMSetMap(
215 | r.formatKey("miners:x"),
216 | map[string]string{"paid": "50", "balance": "1000", "pending": "250"},
217 | )
218 | r.client.HMSetMap(
219 | r.formatKey("finances"),
220 | map[string]string{"paid": "500", "balance": "10000", "pending": "250"},
221 | )
222 |
223 | amount := int64(250)
224 | r.WritePayment("x", "0x0", amount)
225 | result := r.client.HGetAllMap(r.formatKey("miners:x")).Val()
226 | if result["pending"] != "0" {
227 | t.Error("Must unset pending amount")
228 | }
229 | if result["balance"] != "1000" {
230 | t.Error("Must not touch balance")
231 | }
232 | if result["paid"] != "300" {
233 | t.Error("Must increase paid")
234 | }
235 |
236 | result = r.client.HGetAllMap(r.formatKey("finances")).Val()
237 | if result["pending"] != "0" {
238 | t.Error("Must deduct pool pending amount")
239 | }
240 | if result["balance"] != "10000" {
241 | t.Error("Must not touch pool balance")
242 | }
243 | if result["paid"] != "750" {
244 | t.Error("Must increase pool paid")
245 | }
246 |
247 | err := r.client.Get(r.formatKey("payments:lock")).Err()
248 | if err != redis.Nil {
249 | t.Errorf("Must release lock")
250 | }
251 |
252 | err = r.client.ZRank(r.formatKey("payments:pending"), join("x", amount)).Err()
253 | if err != redis.Nil {
254 | t.Error("Must remove pending payment")
255 | }
256 | err = r.client.ZRank(r.formatKey("payments:all"), join("0x0", "x", amount)).Err()
257 | if err == redis.Nil {
258 | t.Error("Must add payment to set")
259 | }
260 | err = r.client.ZRank(r.formatKey("payments:x"), join("0x0", amount)).Err()
261 | if err == redis.Nil {
262 | t.Error("Must add payment to set")
263 | }
264 | }
265 |
266 | func TestGetPendingPayments(t *testing.T) {
267 | reset()
268 |
269 | r.client.HMSetMap(
270 | r.formatKey("miners:x"),
271 | map[string]string{"paid": "100", "balance": "750", "pending": "250"},
272 | )
273 |
274 | amount := int64(1000)
275 | r.UpdateBalance("x", amount)
276 | pending := r.GetPendingPayments()
277 |
278 | if len(pending) != 1 {
279 | t.Error("Must return pending payment")
280 | }
281 | if pending[0].Amount != amount {
282 | 		t.Error("Must have correct amount")
283 | }
284 | if pending[0].Address != "x" {
285 | 		t.Error("Must have correct account")
286 | }
287 | if pending[0].Timestamp <= 0 {
288 | t.Error("Must have timestamp")
289 | }
290 | }
291 |
292 | func TestCollectLuckStats(t *testing.T) {
293 | reset()
294 |
295 | members := []redis.Z{
296 | redis.Z{Score: 0, Member: "1:0:0x0:0x0:0:100:100:0"},
297 | }
298 | r.client.ZAdd(r.formatKey("blocks:immature"), members...)
299 | members = []redis.Z{
300 | redis.Z{Score: 1, Member: "1:0:0x2:0x0:0:50:100:0"},
301 | redis.Z{Score: 2, Member: "0:1:0x1:0x0:0:100:100:0"},
302 | redis.Z{Score: 3, Member: "0:0:0x3:0x0:0:200:100:0"},
303 | }
304 | r.client.ZAdd(r.formatKey("blocks:matured"), members...)
305 |
306 | stats, _ := r.CollectLuckStats([]int{1, 2, 5, 10})
307 | expectedStats := map[string]interface{}{
308 | "1": map[string]float64{
309 | "luck": 1, "uncleRate": 1, "orphanRate": 0,
310 | },
311 | "2": map[string]float64{
312 | "luck": 0.75, "uncleRate": 0.5, "orphanRate": 0,
313 | },
314 | "4": map[string]float64{
315 | "luck": 1.125, "uncleRate": 0.5, "orphanRate": 0.25,
316 | },
317 | }
318 |
319 | if !reflect.DeepEqual(stats, expectedStats) {
320 | t.Error("Stats != expected stats")
321 | }
322 | }
323 |
324 | func reset() {
325 | keys := r.client.Keys(r.prefix + ":*").Val()
326 | for _, k := range keys {
327 | r.client.Del(k)
328 | }
329 | }
330 |
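A note on these tests: TestMain connects to a live Redis at 127.0.0.1:6379 and reset() deletes every key under the "test:" prefix, so they should not be pointed at a production instance. In TestCollectLuckStats the expected keys are "1", "2" and "4" even though windows of 1, 2, 5 and 10 are requested, presumably because only four blocks exist: the 5-block window collapses to the four available blocks and wider windows are dropped. A back-of-the-envelope check of the "4" row, under the assumption (inferred from the expected values, not from redis.go) that each member's sixth and seventh colon-separated fields are block difficulty and round shares, with newer blocks considered first:

```go
// Hypothetical arithmetic check for the window "4" expectations above.
package main

import "fmt"

func main() {
	diffs := []float64{100, 200, 100, 50} // newest block first (assumed layout)
	shares := []float64{100, 100, 100, 100}
	uncles, orphans := 2.0, 1.0

	var luck float64
	for i := range diffs {
		luck += shares[i] / diffs[i]
	}
	n := float64(len(diffs))
	fmt.Println(luck/n, uncles/n, orphans/n) // 1.125 0.5 0.25
}
```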
--------------------------------------------------------------------------------
/util/util.go:
--------------------------------------------------------------------------------
1 | package util
2 |
3 | import (
4 | "math/big"
5 | "regexp"
6 | "strconv"
7 | "time"
8 |
9 | "github.com/ethereum/go-ethereum/common"
10 | "github.com/ethereum/go-ethereum/common/math"
11 | )
12 |
13 | var Ether = math.BigPow(10, 18)
14 | var Shannon = math.BigPow(10, 9)
15 |
16 | var pow256 = math.BigPow(2, 256)
17 | var addressPattern = regexp.MustCompile("^0x[0-9a-fA-F]{40}$")
18 | var zeroHash = regexp.MustCompile("^0?x?0+$")
19 |
20 | func IsValidHexAddress(s string) bool {
21 | if IsZeroHash(s) || !addressPattern.MatchString(s) {
22 | return false
23 | }
24 | return true
25 | }
26 |
27 | func IsZeroHash(s string) bool {
28 | return zeroHash.MatchString(s)
29 | }
30 |
31 | func MakeTimestamp() int64 {
32 | return time.Now().UnixNano() / int64(time.Millisecond)
33 | }
34 |
35 | func GetTargetHex(diff int64) string {
36 | difficulty := big.NewInt(diff)
37 | diff1 := new(big.Int).Div(pow256, difficulty)
38 | return string(common.ToHex(diff1.Bytes()))
39 | }
40 |
41 | func TargetHexToDiff(targetHex string) *big.Int {
42 | targetBytes := common.FromHex(targetHex)
43 | return new(big.Int).Div(pow256, new(big.Int).SetBytes(targetBytes))
44 | }
45 |
46 | func ToHex(n int64) string {
47 | return "0x0" + strconv.FormatInt(n, 16)
48 | }
49 |
50 | func FormatReward(reward *big.Int) string {
51 | return reward.String()
52 | }
53 |
54 | func FormatRatReward(reward *big.Rat) string {
55 | wei := new(big.Rat).SetInt(Ether)
56 | reward = reward.Quo(reward, wei)
57 | return reward.FloatString(8)
58 | }
59 |
60 | func StringInSlice(a string, list []string) bool {
61 | for _, b := range list {
62 | if b == a {
63 | return true
64 | }
65 | }
66 | return false
67 | }
68 |
69 | func MustParseDuration(s string) time.Duration {
70 | value, err := time.ParseDuration(s)
71 | if err != nil {
72 | panic("util: Can't parse duration `" + s + "`: " + err.Error())
73 | }
74 | return value
75 | }
76 |
77 | func String2Big(num string) *big.Int {
78 | n := new(big.Int)
79 | n.SetString(num, 0)
80 | return n
81 | }
82 |
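The two target helpers above are inverses over the pool's difficulty range: GetTargetHex encodes 2^256 / difficulty as a hex string, and TargetHexToDiff divides 2^256 by the decoded target. A small illustrative round-trip test (the file name is made up and this test is not part of the repository):

```go
// util_roundtrip_test.go (illustrative only, not part of the repository)
package util

import "testing"

func TestTargetRoundTrip(t *testing.T) {
	for _, diff := range []int64{1000000, 2000000000, 5000000000} {
		target := GetTargetHex(diff) // "0x..." = hex of 2^256 / diff
		if got := TargetHexToDiff(target).Int64(); got != diff {
			t.Errorf("round trip for %d gave %d (target %s)", diff, got, target)
		}
	}
}
```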
--------------------------------------------------------------------------------
/www/.bowerrc:
--------------------------------------------------------------------------------
1 | {
2 | "directory": "bower_components",
3 | "analytics": false
4 | }
5 |
--------------------------------------------------------------------------------
/www/.editorconfig:
--------------------------------------------------------------------------------
1 | # EditorConfig helps developers define and maintain consistent
2 | # coding styles between different editors and IDEs
3 | # editorconfig.org
4 |
5 | root = true
6 |
7 |
8 | [*]
9 | end_of_line = lf
10 | charset = utf-8
11 | trim_trailing_whitespace = true
12 | insert_final_newline = true
13 | indent_style = space
14 | indent_size = 2
15 |
16 | [*.js]
17 | indent_style = space
18 | indent_size = 2
19 |
20 | [*.hbs]
21 | insert_final_newline = false
22 | indent_style = space
23 | indent_size = 2
24 |
25 | [*.css]
26 | indent_style = space
27 | indent_size = 2
28 |
29 | [*.html]
30 | indent_style = space
31 | indent_size = 2
32 |
33 | [*.{diff,md}]
34 | trim_trailing_whitespace = false
35 |
--------------------------------------------------------------------------------
/www/.ember-cli:
--------------------------------------------------------------------------------
1 | {
2 | /**
3 | Ember CLI sends analytics information by default. The data is completely
4 | anonymous, but there are times when you might want to disable this behavior.
5 |
6 | Setting `disableAnalytics` to true will prevent any data from being sent.
7 | */
8 | "disableAnalytics": false
9 | }
10 |
--------------------------------------------------------------------------------
/www/.gitignore:
--------------------------------------------------------------------------------
1 | # See http://help.github.com/ignore-files/ for more about ignoring files.
2 |
3 | # compiled output
4 | /dist
5 | /tmp
6 |
7 | # dependencies
8 | /node_modules
9 | /bower_components
10 |
11 | # misc
12 | /.sass-cache
13 | /connect.lock
14 | /coverage/*
15 | /libpeerconnection.log
16 | npm-debug.log
17 | testem.log
18 |
--------------------------------------------------------------------------------
/www/.jshintrc:
--------------------------------------------------------------------------------
1 | {
2 | "predef": [
3 | "document",
4 | "window",
5 | "-Promise",
6 | "moment"
7 | ],
8 | "browser": true,
9 | "boss": true,
10 | "curly": true,
11 | "debug": false,
12 | "devel": true,
13 | "eqeqeq": true,
14 | "evil": true,
15 | "forin": false,
16 | "immed": false,
17 | "laxbreak": false,
18 | "newcap": true,
19 | "noarg": true,
20 | "noempty": false,
21 | "nonew": false,
22 | "nomen": false,
23 | "onevar": false,
24 | "plusplus": false,
25 | "regexp": false,
26 | "undef": true,
27 | "sub": true,
28 | "strict": false,
29 | "white": false,
30 | "eqnull": true,
31 | "esnext": true,
32 | "unused": true
33 | }
34 |
--------------------------------------------------------------------------------
/www/.travis.yml:
--------------------------------------------------------------------------------
1 | ---
2 | language: node_js
3 | node_js:
4 | - "0.12"
5 |
6 | sudo: false
7 |
8 | cache:
9 | directories:
10 | - node_modules
11 |
12 | before_install:
13 | - export PATH=/usr/local/phantomjs-2.0.0/bin:$PATH
14 | - "npm config set spin false"
15 | - "npm install -g npm@^2"
16 |
17 | install:
18 | - npm install -g bower
19 | - npm install
20 | - bower install
21 |
22 | script:
23 | - npm test
24 |
--------------------------------------------------------------------------------
/www/.watchmanconfig:
--------------------------------------------------------------------------------
1 | {
2 | "ignore_dirs": ["tmp"]
3 | }
4 |
--------------------------------------------------------------------------------
/www/README.md:
--------------------------------------------------------------------------------
1 | # Pool
2 |
3 | This README outlines the details of collaborating on this Ember application.
4 | A short introduction of this app could easily go here.
5 |
6 | ## Prerequisites
7 |
8 | You will need the following things properly installed on your computer.
9 |
10 | * [Git](http://git-scm.com/)
11 | * [Node.js](http://nodejs.org/) (with NPM)
12 | * [Bower](http://bower.io/)
13 | * [Ember CLI](http://www.ember-cli.com/)
14 | * [PhantomJS](http://phantomjs.org/)
15 |
16 | ## Installation
17 |
18 | * `git clone` … (the rest of /www/README.md was lost during extraction; what follows is residue from the Handlebars templates)

--------------------------------------------------------------------------------
/www/app/templates/ (Handlebars .hbs templates, garbled during extraction):
--------------------------------------------------------------------------------
The templates lost their HTML tags and per-file separators during extraction;
only fragments survive. Recoverable text:

* Help page: point an ethminer installation to
  {{config.HttpHost}}:{{config.HttpPort}}/YOUR_ETH_ADDRESS/RIG_ID, for example
  ethminer -F {{config.HttpHost}}:{{config.HttpPort}}/0xb85150eb365e7df0941f0cf08235f987ba91506a/myfarm -G --farm-recheck 200
  and, when compiling ethminer from latest source, add the extra
  --disable-submit-hashrate option. For proxy mining, grab the proxy from the
  eth-proxy GitHub repo (use a stable release), then set the pool's HOST
  {{config.StratumHost}}, PORT {{config.StratumPort}} and your WALLET in
  eth-proxy.conf. CPU mining is not recommended.
* Risk notice: "By using the pool you accept all possible risks related to
  experimental software usage. Pool owner can't compensate any irreversible
  losses, but will do his best to prevent the worst case."
* Payout notes: the pool pays the full block reward including TX fees and uncle
  rewards, currently pays tx fees from its own pocket, and block maturity
  requires up to 520 blocks (usually less).
* Stats views: tables for workers (ID, rough and accurate hashrate, last share),
  blocks (height, block hash, time found, variance, reward), miners (login,
  hashrate, last beat), payments (time, amount, address, tx ID) and luck
  (blocks, shares/diff, uncle rate, orphan rate); account stats appear only
  after at least one submitted share.