├── .gitignore ├── .travis.yml ├── CHANGELOG.md ├── README.md ├── cluster.go ├── doc.go ├── docs └── algorithm.md ├── integration_test.go ├── leafset.go ├── leafset_test.go ├── message.go ├── neighborhood.go ├── neighborhood_test.go ├── node.go ├── node_test.go ├── nodeid.go ├── nodeid_test.go ├── table.go ├── table_test.go └── wendy.go /.gitignore: -------------------------------------------------------------------------------- 1 | *.8l 2 | *.out 3 | *.swp 4 | .DS_Store 5 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: go 2 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | # Changelog 2 | 3 | ## Beta1 4 | 5 | The first beta release introduces a few changes from the Alpha release: 6 | 7 | * Wendy now uses the concept of a neighborhood set to limit the number of Nodes each Node keeps track of. (See [#10](https://github.com/secondbit/wendy/issues/10)) 8 | * Wendy now has a better join algorithm, preventing erroneous race condition warnings. (See [#13](https://github.com/secondbit/wendy/issues/13)) 9 | * Wendy now has some end-to-end integration tests that cover the joining algorithm for Nodes. (See [#16](https://github.com/secondbit/wendy/issues/16)) 10 | * Wendy now uses state table versioning instead of timestamps to detect race conditions, removing the dependency on the Nodes' clocks being in sync. (See [#4](https://github.com/secondbit/wendy/issues/4)) 11 | * Wendy now keeps track of the bound port, when ports are auto-assigned. (See [#17](https://github.com/secondbit/wendy/issues/17)) (Courtesy of [Graeme Humphries](https://github.com/unit3)) 12 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Wendy 2 | 3 | An open source, pure-[Go](http://www.golang.org "Pretty much the best programming language ever") implementation of the [Pastry Distributed Hash Table](http://en.wikipedia.org/wiki/Pastry_(DHT\) "Pastry on Wikipedia"). 4 | 5 | ## Status 6 | [![Build Status](https://secure.travis-ci.org/secondbit/wendy.png)](http://travis-ci.org/secondbit/wendy) 7 | 8 | **Beta1**: Wendy is still in active development. It should not be used for mission-critical software. It has been beat on a little, but there are probably still bugs we haven't found or fixed. 9 | 10 | ## Requirements 11 | 12 | This implementation of Wendy is written to be compatible with Go 1. It uses nothing outside of the Go standard library. Nodes in the network must be able to communicate using TCP over a configurable port. Nodes also must be able to have long-running processes. 13 | 14 | Wendy was developed on OS X 10.8.1, using Go 1.0.3. It has been verified to work as expected running under Ubuntu 12.04 LTS (64-bit), using Go 1.0.3. 15 | 16 | ## Installation 17 | 18 | The typical `go get secondbit.org/wendy` will install Wendy. 19 | 20 | ## Documentation 21 | 22 | We took pains to try and follow [the guidelines](http://golang.org/doc/articles/godoc_documenting_go_code.html "Godoc guidelines on golang.org") on writing good documentation for `godoc`. You can view the generated documentation on the excellent [godoc.org](http://godoc.org/secondbit.org/wendy "Wendy's documentation on godoc.org"). 
23 | 24 | ## Use 25 | 26 | ### Initialising the Cluster 27 | 28 | The "Cluster" represents your network of nodes. The first thing you should do in any application that uses Wendy is initialise the cluster. 29 | 30 | First, you need to create the local Node—because Wendy is a peer-to-peer algorithm, there's no such thing as a server or client; instead, everything is a "Node", and only Nodes can connect to the Cluster. 31 | 32 | ```go 33 | hostname, err := os.Hostname() 34 | if err != nil { 35 | panic(err.Error()) 36 | } 37 | id, err := wendy.NodeIDFromBytes([]byte(hostname)) 38 | if err != nil { 39 | panic(err.Error()) 40 | } 41 | node := wendy.NewNode(id, "your_local_ip_address", "your_global_ip_address", "your_region", 8080) 42 | ``` 43 | 44 | NewNode expects five parameters: 45 | 46 | 1. The ID of the new Node. We created one in the code sample above. The ID can be any unique string—it is used to identify the Node to the network. The ID string has to be at least 16 bytes long to contain enough data to form an ID, or NodeIDFromBytes will return an error. 47 | 2. Your local IP address. This IP address only needs to be accessible to your Region (a concept that will be explained below). 48 | 3. Your global IP address. This IP address should be accessible to any Node in your network—the entire Internet should be able to reach the IP. 49 | 4. Your Region. Your Region is a string that helps segment your Wendy network to keep bandwidth minimal. For cloud providers (e.g., EC2), network traffic within a region is free. To take advantage of this, we modified the Wendy algorithm to use the local IP address when two Nodes are in the same Region, and the global IP address the rest of the time, while heavily favouring Nodes that are in the same Region. This allows you to have Nodes in multiple Regions in the same Cluster while minimising your bandwidth costs. 50 | 5. The port this Node should listen on, as an int. Should be an open port you have permission to listen on. If you use `0`, Wendy will automatically use a randomly chosen open port. 51 | 52 | Once you have a Node, you can join the Cluster. 53 | 54 | ```go 55 | cluster := wendy.NewCluster(node, credentials) 56 | ``` 57 | 58 | NewCluster just creates a Cluster object, initialises the state tables and channels used to keep the algorithm concurrency-safe, and returns it. It requires that you specify the current Node and supply [Credentials](http://godoc.org/secondbit.org/wendy#Credentials) for the Cluster. 59 | 60 | Credentials are an interface that Wendy defines to help control access to your clusters. Credentials could be whatever you want them to be: public/private keys, a single word or phrase, a rather large number... anything at all is fair game. The only rules for Credentials are as follows: 61 | 62 | 1. Calling `Marshal()` on any implementation of Credentials must return a slice of bytes. 63 | 2. Calling `Valid([]byte)` on any implementation of Credentials must decide whether the specified slice of bytes should grant access to the Cluster (return true) or not (return false). The recommended way to do that is to attempt to unmarshal the byte slice into your Credentials implementation (returning false on error) and then compare the resulting instance with your local instance. But there's nothing stopping you from just returning true, granting anyone who cares to connect full access to your Cluster. Like [PSN](http://en.wikipedia.org/wiki/PlayStation_Network_outage) does (*Zing!*) 64 | 65 | In the event that `Valid([]byte)` returns false for *any reason*, the Node will not be added to the state tables of the current Node. It will not be notified that its attempt failed; it simply will not receive any messages from the Cluster. 66 | 
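To make the interface concrete, here is a minimal sketch of a Credentials implementation built around a shared passphrase. The `SharedSecret` type is purely illustrative and not part of Wendy; the library's own `Passphrase` type provides similar behaviour out of the box.

```go
// SharedSecret is an illustrative Credentials implementation: a
// passphrase that is marshalled as raw bytes and compared verbatim.
type SharedSecret string

// Marshal returns the bytes that will accompany every message this
// Node sends to the Cluster.
func (s SharedSecret) Marshal() []byte {
	return []byte(s)
}

// Valid reports whether the supplied bytes match our secret, i.e.
// whether the sending Node should be granted access to the Cluster.
func (s SharedSecret) Valid(b []byte) bool {
	return string(b) == string(s)
}
```

A production implementation would probably compare in constant time and layer on transport security, but the two methods above are all the interface asks for.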
67 | ### Listening For Messages 68 | 69 | To participate in the Cluster, you need to listen for messages. Your Node will either pass messages along to the correct Node or receive messages intended for it. 70 | 71 | ```go 72 | cluster.Listen() 73 | defer cluster.Stop() 74 | ``` 75 | 76 | `Listen()` is a blocking call, so if you need it to be asynchronous, throw it in a goroutine. **Note**: If you listen twice on the same Cluster in two different goroutines, concurrency-safety **is compromised**. You should only ever have one goroutine Listen to any given Cluster. 77 | 78 | `Stop()` ends the Listen call on a Cluster. You'll no longer receive messages, and will stop participating in the Cluster. It is the graceful way for a Node to exit the Cluster. 79 | 80 | ### Registering Handlers For Your Application 81 | 82 | Wendy offers several callbacks at various points in the process of exchanging messages within your Cluster. You can use these callbacks to register listeners within your application. These callbacks are simply instances of a type that fulfills the [wendy.Application](http://godoc.org/secondbit.org/wendy#Application) interface and are subsequently registered to a cluster. 83 | 84 | ```go 85 | type WendyApplication struct { 86 | } 87 | 88 | func (app *WendyApplication) OnError(err error) { 89 | panic(err.Error()) 90 | } 91 | 92 | func (app *WendyApplication) OnDeliver(msg wendy.Message) { 93 | fmt.Println("Received message: ", msg) 94 | } 95 | 96 | func (app *WendyApplication) OnForward(msg *wendy.Message, next wendy.NodeID) bool { 97 | fmt.Printf("Forwarding message %s to Node %s.", msg.Key, next) 98 | return true // return false if you don't want the message forwarded 99 | } 100 | 101 | func (app *WendyApplication) OnNewLeaves(leaves []*wendy.Node) { 102 | fmt.Println("Leaf set changed: ", leaves) 103 | } 104 | 105 | func (app *WendyApplication) OnNodeJoin(node wendy.Node) { 106 | fmt.Println("Node joined: ", node.ID) 107 | } 108 | 109 | func (app *WendyApplication) OnNodeExit(node wendy.Node) { 110 | fmt.Println("Node left: ", node.ID) 111 | } 112 | 113 | func (app *WendyApplication) OnHeartbeat(node wendy.Node) { 114 | fmt.Println("Received heartbeat from ", node.ID) 115 | } 116 | 117 | app := &WendyApplication{} 118 | cluster.RegisterCallback(app) 119 | ``` 120 | 121 | The methods will be invoked at the appropriate points in the lifecycle of the cluster. You should consult [the documentation](http://godoc.org/secondbit.org/wendy#Application) for more information. 122 | 123 | ### Announcing Your Presence 124 | 125 | Finally, to join a Cluster that has already been formed (which you'll want to do, unless this is the first server in the group you're standing up), you're going to need to use the `Join` method to announce your presence and initialise your state tables. The `Join` method is simple: 126 | 127 | ```go 128 | cluster.Join("127.0.0.1", 8080) 129 | ``` 130 | 131 | The first parameter is simply the IP address of a known Node in the Cluster, as a string. The second is just the port, as an int. 132 | 133 | When `Join()` is called, the Node will contact the specified Node and announce its presence. 
The specified Node will send the joining Node its state tables and route the join message to the other Nodes in the Cluster, which will also send the joining Node their state tables. These state tables will initialise the joining Node's state tables, allowing it to participate in the Cluster. 134 | 135 | ### Sending Messages 136 | 137 | Sending a message in Wendy is a little weird. Each message has an ID associated with it, which you can generate based on the contents of the message or some other key. Wendy doesn't care what the relationship between the message and the ID is (Wendy is perfectly happy with random message IDs, in fact), but applications built on Wendy sometimes dictate the terms of the message ID. All Wendy requires is that your message ID, like your Node IDs, has at least 16 bytes worth of data in it. 138 | 139 | Messages in Wendy aren't sent *to* something, they're sent *towards* something--their message ID. When a Node receives a Message, it checks to see if it knows about any Node with a NodeID closer to the MessageID than its own NodeID. If it does, it forwards the Message on to that Node. If it doesn't, it considers the Message to be delivered. There are all sorts of algorithms in place to help the Message reach its destination more quickly, but they're not really the important bit. The important bit is that messages aren't sent *to* Nodes, they're sent *towards* their MessageID. 140 | 141 | Here's an example of routing a Message with a randomly generated ID (based on the `crypto/rand` package) through a Cluster: 142 | 143 | ```go 144 | b := make([]byte, 16) 145 | _, err := rand.Read(b) 146 | if err != nil { 147 | panic(err.Error()) 148 | } 149 | id, err := wendy.NodeIDFromBytes(b) 150 | if err != nil { 151 | panic(err.Error()) 152 | } 153 | purpose := byte(16) 154 | msg := cluster.NewMessage(purpose, id, []byte("This is the body of the message.")) 155 | err = cluster.Send(msg) 156 | if err != nil { 157 | panic(err.Error()) 158 | } 159 | ``` 160 | 161 | You'll notice we set `purpose` in there to `byte(16)`. Purpose is a way of distinguishing between different types of Messages, and is useful when handling them. We only guarantee that bytes with values 16 and above will go unused by Wendy's own messages. To avoid collisions, you should only use bytes with values of 16 and above when defining your messages. 162 | 163 | We repeated that because it's kind of important. 164 | 165 | ## Contributing 166 | 167 | We'd love to see Wendy improve. There's a lot that can still be done with it, and we'd love some help figuring out how to automate some more complete tests for it. 168 | 169 | To contribute to Wendy: 170 | 171 | * **Fork** the repository 172 | * **Modify** your fork 173 | * Ensure your fork **passes all tests** 174 | * **Send** a pull request 175 | * Bonus points if the pull request explains *what* you changed and *why* you changed it, and *has unit tests* attached. 176 | * For the love of all that is holy, please use `go fmt` *before* you send the pull request. 177 | 178 | We'll review it and merge it in if it's appropriate. 179 | 180 | ## Implementation Details 181 | 182 | We approached this pragmatically, so there are some differences between the Pastry specification (as we understand it) and our implementation. The end result should not be materially changed. 183 | 184 | * We introduced the concept of Regions. Regions are used to partition your Cluster and give preference to Nodes that are within the same Region. 
It is useful on cloud providers like EC2 to minimise traffic between regions, which tends to cost more than traffic on the local network. This is implemented as a raw multiplier on the proximity score of nodes, based on whether the regions match. It should not materially affect the algorithm, outside the intended bias towards local traffic over global traffic. 185 | 186 | ## Known Bugs 187 | 188 | * In the event that: 1) a Node is added, 2) the Node receives a message *before* it has finished initialising its state tables, and 3) the Node, based on its partially-initialised state tables, is the closest Node to the message ID, that Node will incorrectly assume it is the destination for the message when there *may* be a better suited Node in the network. Depending on network speeds and the size of the cluster, this period of potential-for-message-swallowing is expected to last, at most, a few seconds, and will only occur when a Node is added to the cluster. 189 | * In the event that one of the two immediate neighbours (in the NodeID space) of the current Node leaves the cluster, the Node will have a hole in its leaf set until it next receives (or has a reason to request) state information from another Node. This should not affect the outcome of the routing process, but may lead to sub-optimal routing times. 190 | * We currently rely on the system clock for a few of our functions. If you (or NTP) change the clock in unexpected and significant ways, you will run into problems. Please see [issue 4](https://github.com/secondbit/wendy/issues/4) for more information. 191 | * Our Credentials implementation is currently vulnerable to man-in-the-middle and replay attacks. We are considering the best method for adding a handshake to the low-level TCP connection to better secure your traffic. See [issue 3](https://github.com/secondbit/wendy/issues/3) for more information or to weigh in on the discussion. 192 | 193 | ## Authors 194 | 195 | The following people contributed code that found its way into Wendy: 196 | 197 | * Paddy Foran ([paddyforan](https://github.com/paddyforan)) 198 | * Jesse McNelis ([jessta](https://github.com/jessta)) 199 | * Evan Shaw ([edsrzf](https://github.com/edsrzf)) 200 | * Alec Thomas ([alecthomas](https://github.com/alecthomas)) 201 | * Graeme Humphries ([unit3](https://github.com/unit3)) 202 | 203 | ## Contributors 204 | 205 | The following people contributed to the creation of Wendy through advice and support, not through code: 206 | 207 | * [Matthew Turland](http://www.matthewturland.com) offered support and advice, and has been invaluable in bringing the software to fruition. 208 | * [Chris Hartjes](http://www.littlehart.net/atthekeyboard) offered feedback and advice on our testing strategies. 209 | * [Jesse McNelis](http://jessta.id.au) provided his services both as a bug-hunter and as a rubber duck. 210 | * [Dr. Steven Ko](http://www.cse.buffalo.edu/people/?u=stevko) of the University at Buffalo offered valuable feedback on Pastry and Distributed Hash Tables in general. 211 | * [Jan Newmarch's excellent guide to writing networking code in Go](http://jan.newmarch.name/go/) gave us valuable information. 212 | * [The Go Community](https://groups.google.com/group/go-nuts) (which is superb) offered advice and feedback throughout the creation of this software. 
213 | 214 | ## License 215 | 216 | Copyright (c) 2012 Second Bit LLC 217 | 218 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 219 | 220 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 221 | 222 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 223 | -------------------------------------------------------------------------------- /cluster.go: -------------------------------------------------------------------------------- 1 | package wendy 2 | 3 | import ( 4 | "encoding/json" 5 | "errors" 6 | "io" 7 | "log" 8 | "net" 9 | "os" 10 | "strconv" 11 | "strings" 12 | "sync" 13 | "time" 14 | ) 15 | 16 | type StateMask struct { 17 | Mask byte 18 | Rows []int 19 | Cols []int 20 | } 21 | 22 | const ( 23 | rT = byte(1 << iota) 24 | lS 25 | nS 26 | all = rT | lS | nS 27 | ) 28 | 29 | func (m StateMask) includeRT() bool { 30 | return m.Mask == (m.Mask | rT) 31 | } 32 | 33 | func (m StateMask) includeLS() bool { 34 | return m.Mask == (m.Mask | lS) 35 | } 36 | 37 | func (m StateMask) includeNS() bool { 38 | return m.Mask == (m.Mask | nS) 39 | } 40 | 41 | type stateTables struct { 42 | RoutingTable *[32][16]*Node `json:"rt,omitempty"` 43 | LeafSet *[2][16]*Node `json:"ls,omitempty"` 44 | NeighborhoodSet *[32]*Node `json:"ns,omitempty"` 45 | EOL bool `json:"eol,omitempty"` 46 | } 47 | 48 | type proximityCache struct { 49 | cache map[NodeID]int64 50 | ticker <-chan time.Time 51 | *sync.RWMutex 52 | } 53 | 54 | func newProximityCache() *proximityCache { 55 | return &proximityCache{ 56 | cache: map[NodeID]int64{}, 57 | ticker: time.Tick(1 * time.Hour), 58 | RWMutex: new(sync.RWMutex), 59 | } 60 | } 61 | 62 | // Cluster holds the information about the state of the network. It is the main interface to the distributed network of Nodes. 
63 | type Cluster struct { 64 | self *Node 65 | table *routingTable 66 | leafset *leafSet 67 | neighborhoodset *neighborhoodSet 68 | kill chan bool 69 | lastStateUpdate time.Time 70 | applications []Application 71 | log *log.Logger 72 | logLevel int 73 | heartbeatFrequency int 74 | networkTimeout int 75 | credentials Credentials 76 | joined bool 77 | lock *sync.RWMutex 78 | proximityCache *proximityCache 79 | } 80 | 81 | func (c *Cluster) newLeaves(leaves []*Node) { 82 | c.lock.RLock() 83 | defer c.lock.RUnlock() 84 | c.debug("Sending newLeaves notifications.") 85 | for i, app := range c.applications { 86 | app.OnNewLeaves(leaves) 87 | c.debug("Sent newLeaves notification %d of %d.", i+1, len(c.applications)) 88 | } 89 | c.debug("Sent newLeaves notifications.") 90 | } 91 | 92 | func (c *Cluster) fanOutJoin(node Node) { 93 | c.lock.RLock() 94 | defer c.lock.RUnlock() 95 | for _, app := range c.applications { 96 | c.debug("Announcing node join.") 97 | app.OnNodeJoin(node) 98 | c.debug("Announced node join.") 99 | } 100 | } 101 | 102 | func (c *Cluster) forward(msg Message, id NodeID) bool { 103 | c.lock.RLock() 104 | defer c.lock.RUnlock() 105 | forward := true 106 | for _, app := range c.applications { 107 | f := app.OnForward(&msg, id) 108 | if forward { 109 | forward = f 110 | } 111 | } 112 | return forward 113 | } 114 | 115 | func (c *Cluster) marshalCredentials() []byte { 116 | c.lock.RLock() 117 | defer c.lock.RUnlock() 118 | if c.credentials == nil { 119 | return []byte{} 120 | } 121 | return c.credentials.Marshal() 122 | } 123 | 124 | func (c *Cluster) getNetworkTimeout() int { 125 | c.lock.RLock() 126 | defer c.lock.RUnlock() 127 | return c.networkTimeout 128 | } 129 | 130 | func (c *Cluster) cacheProximity(id NodeID, proximity int64) { 131 | c.proximityCache.Lock() 132 | defer c.proximityCache.Unlock() 133 | c.proximityCache.cache[id] = proximity 134 | } 135 | 136 | func (c *Cluster) getCachedProximity(id NodeID) int64 { 137 | c.proximityCache.RLock() 138 | defer c.proximityCache.RUnlock() 139 | if proximity, set := c.proximityCache.cache[id]; set { 140 | return proximity 141 | } 142 | return -1 143 | } 144 | 145 | func (c *Cluster) clearProximityCache() { 146 | c.proximityCache.Lock() 147 | defer c.proximityCache.Unlock() 148 | c.proximityCache.cache = map[NodeID]int64{} 149 | } 150 | 151 | func (c *Cluster) isJoined() bool { 152 | c.lock.RLock() 153 | defer c.lock.RUnlock() 154 | return c.joined 155 | } 156 | 157 | // ID returns an identifier for the Cluster. It uses the ID of the current Node. 158 | func (c *Cluster) ID() NodeID { 159 | return c.self.ID 160 | } 161 | 162 | // String returns a string representation of the Cluster, in the form of its ID. 163 | func (c *Cluster) String() string { 164 | return c.ID().String() 165 | } 166 | 167 | // GetIP returns the IP address to use when communicating with a Node. 168 | func (c *Cluster) GetIP(node Node) string { 169 | return c.self.GetIP(node) 170 | } 171 | 172 | // SetLogger sets the log.Logger that the Cluster, along with its child routingTable and leafSet, will write to. 173 | func (c *Cluster) SetLogger(l *log.Logger) { 174 | c.log = l 175 | c.table.log = l 176 | c.leafset.log = l 177 | } 178 | 179 | // SetLogLevel sets the level of logging that will be written to the Logger. It will be mirrored to the child routingTable and leafSet. 180 | // 181 | // Use wendy.LogLevelDebug to write to the most verbose level of logging, helpful for debugging. 
182 | // 183 | // Use wendy.LogLevelWarn (the default) to write on events that may, but do not necessarily, indicate an error. 184 | // 185 | // Use wendy.LogLevelError to write only when an event occurs that is undoubtedly an error. 186 | func (c *Cluster) SetLogLevel(level int) { 187 | c.logLevel = level 188 | c.table.logLevel = level 189 | c.leafset.logLevel = level 190 | } 191 | 192 | // SetHeartbeatFrequency sets the frequency in seconds with which heartbeats will be sent from this Node to test the health of other Nodes in the Cluster. 193 | func (c *Cluster) SetHeartbeatFrequency(freq int) { 194 | c.heartbeatFrequency = freq 195 | } 196 | 197 | // SetNetworkTimeout sets the number of seconds after which network requests will be considered timed out and killed. 198 | func (c *Cluster) SetNetworkTimeout(timeout int) { 199 | c.networkTimeout = timeout 200 | } 201 | 202 | // NewCluster creates a new instance of a connection to the network and initialises the state tables and channels it requires. 203 | func NewCluster(self *Node, credentials Credentials) *Cluster { 204 | return &Cluster{ 205 | self: self, 206 | table: newRoutingTable(self), 207 | leafset: newLeafSet(self), 208 | neighborhoodset: newNeighborhoodSet(self), 209 | kill: make(chan bool), 210 | lastStateUpdate: time.Now(), 211 | applications: []Application{}, 212 | log: log.New(os.Stdout, "wendy("+self.ID.String()+") ", log.LstdFlags), 213 | logLevel: LogLevelWarn, 214 | heartbeatFrequency: 300, 215 | networkTimeout: 10, 216 | credentials: credentials, 217 | joined: false, 218 | lock: new(sync.RWMutex), 219 | proximityCache: newProximityCache(), 220 | } 221 | } 222 | 223 | // Stop gracefully shuts down the local connection to the Cluster, removing the local Node from the Cluster and preventing it from receiving or sending further messages. 224 | // 225 | // Before it disconnects the Node, Stop contacts every Node it knows of to warn them of its departure. If a graceful disconnect is not necessary, Kill should be used instead. Nodes will remove the Node from their state tables next time they attempt to contact it. 226 | func (c *Cluster) Stop() { 227 | c.debug("Sending graceful exit message.") 228 | msg := c.NewMessage(NODE_EXIT, c.self.ID, []byte{}) 229 | nodes := c.table.list([]int{}, []int{}) 230 | nodes = append(nodes, c.leafset.list()...) 231 | nodes = append(nodes, c.neighborhoodset.list()...) 232 | for _, node := range nodes { 233 | err := c.send(msg, node) 234 | if err != nil { 235 | c.fanOutError(err) 236 | } 237 | } 238 | c.Kill() 239 | } 240 | 241 | // Kill shuts down the local connection to the Cluster, removing the local Node from the Cluster and preventing it from receiving or sending further messages. 242 | // 243 | // Unlike Stop, Kill immediately disconnects the Node without sending a message to let other Nodes know of its exit. 244 | func (c *Cluster) Kill() { 245 | c.debug("Exiting the cluster.") 246 | c.kill <- true 247 | } 248 | 249 | // RegisterCallback allows anything that fulfills the Application interface to be hooked into Wendy's callbacks. 250 | func (c *Cluster) RegisterCallback(app Application) { 251 | c.lock.Lock() 252 | defer c.lock.Unlock() 253 | c.applications = append(c.applications, app) 254 | } 255 | 256 | // Listen starts the Cluster listening for events, including all the individual listeners for each state sub-object. 257 | // 258 | // Note that Listen does *not* join a Node to the Cluster. The Node must announce its presence before the Node is considered active in the Cluster. 
259 | func (c *Cluster) Listen() error { 260 | portstr := strconv.Itoa(c.self.Port) 261 | c.debug("Listening on port %d", c.self.Port) 262 | ln, err := net.Listen("tcp", ":"+portstr) 263 | if err != nil { 264 | return err 265 | } 266 | defer ln.Close() 267 | // save bound port back to Node in case where port is autoconfigured by OS 268 | if c.self.Port == 0 { 269 | c.debug("Port set to 0") 270 | colonPos := strings.LastIndex(ln.Addr().String(), ":") 271 | if colonPos == -1 { 272 | c.debug("OS returned an address without a port.") 273 | return errors.New("OS returned an address without a port.") 274 | } 275 | port, err := strconv.ParseInt(ln.Addr().String()[colonPos+1:], 10, 32) 276 | if err != nil { 277 | c.debug("Couldn't record autoconfigured port: %s", err.Error()) 278 | return errors.New("Couldn't record autoconfigured port: " + err.Error()) 279 | } 280 | c.debug("Setting port to %d", port) 281 | c.self.Port = int(port) 282 | } 283 | connections := make(chan net.Conn) 284 | go func(ln net.Listener, ch chan net.Conn) { 285 | for { 286 | conn, err := ln.Accept() 287 | if err != nil { 288 | c.fanOutError(err) 289 | return 290 | } 291 | c.debug("Connection received.") 292 | ch <- conn 293 | } 294 | }(ln, connections) 295 | for { 296 | select { 297 | case <-c.kill: 298 | return nil 299 | case <-time.After(time.Duration(c.heartbeatFrequency) * time.Second): 300 | c.debug("Sending heartbeats.") 301 | go c.sendHeartbeats() 302 | break 303 | case conn := <-connections: 304 | c.debug("Handling connection.") 305 | go c.handleClient(conn) 306 | break 307 | case <-c.proximityCache.ticker: 308 | c.debug("Emptying proximity cache...") 309 | go c.clearProximityCache() 310 | break 311 | } 312 | } 313 | return nil 314 | } 315 | 316 | // Send routes a message through the Cluster. 317 | func (c *Cluster) Send(msg Message) error { 318 | c.debug("Getting target for message %s", msg.Key) 319 | target, err := c.Route(msg.Key) 320 | if err != nil { 321 | return err 322 | } 323 | if target == nil { 324 | c.debug("Couldn't find a target. Delivering message %s", msg.Key) 325 | if msg.Purpose > NODE_ANN { 326 | c.deliver(msg) 327 | } 328 | return nil 329 | } 330 | forward := c.forward(msg, target.ID) 331 | if forward { 332 | err = c.send(msg, target) 333 | if err == deadNodeError { 334 | err = c.remove(target.ID) 335 | } 336 | return err 337 | } 338 | c.debug("Message %s wasn't forwarded because callback terminated it.", msg.Key) 339 | return nil 340 | } 341 | 342 | // Route checks the leafSet and routingTable to see if there's an appropriate match for the NodeID. If there is a better match than the current Node, a pointer to that Node is returned. Otherwise, nil is returned (and the message should be delivered). 343 | func (c *Cluster) Route(key NodeID) (*Node, error) { 344 | target, err := c.leafset.route(key) 345 | if err != nil { 346 | if _, ok := err.(IdentityError); ok { 347 | c.debug("I'm the target. Delivering message %s", key) 348 | return nil, nil 349 | } 350 | if err != nodeNotFoundError { 351 | return nil, err 352 | } 353 | } 354 | if target != nil { 355 | c.debug("Target acquired in leafset.") 356 | return target, nil 357 | } 358 | c.debug("Target not found in leaf set, checking routing table.") 359 | target, err = c.table.route(key) 360 | if err != nil { 361 | if _, ok := err.(IdentityError); ok { 362 | c.debug("I'm the target. 
Delivering message %s", key) 363 | return nil, nil 364 | } 365 | if err != nodeNotFoundError { 366 | return nil, err 367 | } 368 | } 369 | if target != nil { 370 | c.debug("Target acquired in routing table.") 371 | return target, nil 372 | } 373 | return nil, nil 374 | } 375 | 376 | // Join expresses a Node's desire to join the Cluster, kicking off a process that will populate its child leafSet, neighborhoodSet and routingTable. Once that process is complete, the Node can be said to be fully participating in the Cluster. 377 | // 378 | // The IP and port passed to Join should be those of a known Node in the Cluster. The algorithm assumes that the known Node is close in proximity to the current Node, but that is not a hard requirement. 379 | func (c *Cluster) Join(ip string, port int) error { 380 | credentials := c.marshalCredentials() 381 | c.debug("Sending join message to %s:%d", ip, port) 382 | msg := c.NewMessage(NODE_JOIN, c.self.ID, credentials) 383 | address := ip + ":" + strconv.Itoa(port) 384 | return c.SendToIP(msg, address) 385 | } 386 | 387 | func (c *Cluster) fanOutError(err error) { 388 | c.debug(err.Error()) 389 | c.lock.RLock() 390 | defer c.lock.RUnlock() 391 | c.err(err.Error()) 392 | for _, app := range c.applications { 393 | app.OnError(err) 394 | } 395 | } 396 | 397 | func (c *Cluster) sendHeartbeats() { 398 | msg := c.NewMessage(HEARTBEAT, c.self.ID, []byte{}) 399 | nodes := c.table.list([]int{}, []int{}) 400 | nodes = append(nodes, c.leafset.list()...) 401 | nodes = append(nodes, c.neighborhoodset.list()...) 402 | sent := map[NodeID]bool{} 403 | for _, node := range nodes { 404 | if node == nil { 405 | continue 406 | } 407 | if _, set := sent[node.ID]; set { 408 | continue 409 | } 410 | c.debug("Sending heartbeat to %s", node.ID) 411 | err := c.send(msg, node) 412 | if err == deadNodeError { 413 | err = c.remove(node.ID) 414 | if err != nil { 415 | c.fanOutError(err) 416 | } 417 | continue 418 | } 419 | sent[node.ID] = true 420 | } 421 | } 422 | 423 | func (c *Cluster) deliver(msg Message) { 424 | if msg.Purpose <= NODE_ANN { 425 | c.warn("Received utility message %s to the deliver function. Purpose was %d.", msg.Key, msg.Purpose) 426 | return 427 | } 428 | c.lock.RLock() 429 | defer c.lock.RUnlock() 430 | for _, app := range c.applications { 431 | app.OnDeliver(msg) 432 | } 433 | } 434 | 435 | func (c *Cluster) handleClient(conn net.Conn) { 436 | defer conn.Close() 437 | var msg Message 438 | decoder := json.NewDecoder(conn) 439 | err := decoder.Decode(&msg) 440 | if err != nil { 441 | c.fanOutError(err) 442 | return 443 | } 444 | valid := c.credentials == nil 445 | if !valid { 446 | valid = c.credentials.Valid(msg.Credentials) 447 | } 448 | if !valid { 449 | c.warn("Credentials did not match. 
Supplied credentials: %s", msg.Credentials) 450 | return 451 | } 452 | if msg.Purpose != NODE_JOIN { 453 | node, _ := c.get(msg.Sender.ID) 454 | if node != nil { 455 | node.updateLastHeardFrom() 456 | } 457 | } 458 | conn.Write([]byte(`{"status": "Received."}`)) 459 | c.debug("Got message with purpose %v", msg.Purpose) 460 | msg.Hop = msg.Hop + 1 461 | switch msg.Purpose { 462 | case NODE_JOIN: 463 | c.onNodeJoin(msg) 464 | break 465 | case NODE_ANN: 466 | c.onNodeAnnounce(msg) 467 | break 468 | case NODE_EXIT: 469 | c.onNodeExit(msg) 470 | break 471 | case HEARTBEAT: 472 | c.lock.RLock() 473 | defer c.lock.RUnlock() 474 | for _, app := range c.applications { 475 | app.OnHeartbeat(msg.Sender) 476 | } 477 | break 478 | case STAT_DATA: 479 | c.onStateReceived(msg) 480 | break 481 | case STAT_REQ: 482 | c.onStateRequested(msg) 483 | break 484 | case NODE_RACE: 485 | c.onRaceCondition(msg) 486 | break 487 | case NODE_REPR: 488 | c.onRepairRequest(msg) 489 | break 490 | default: 491 | c.onMessageReceived(msg) 492 | } 493 | } 494 | 495 | func (c *Cluster) send(msg Message, destination *Node) error { 496 | if destination == nil { 497 | return errors.New("Can't send to a nil node.") 498 | } 499 | if c.self == nil { 500 | return errors.New("Can't send from a nil node.") 501 | } 502 | address := c.GetIP(*destination) 503 | c.debug("Sending message %s with purpose %d to %s", msg.Key, msg.Purpose, address) 504 | start := time.Now() 505 | err := c.SendToIP(msg, address) 506 | if err == nil { 507 | proximity := time.Since(start) 508 | destination.setProximity(int64(proximity)) 509 | destination.updateLastHeardFrom() 510 | } 511 | return err 512 | } 513 | 514 | // SendToIP sends a message directly to an IP using the Wendy networking logic. 515 | func (c *Cluster) SendToIP(msg Message, address string) error { 516 | c.debug("Sending message %s", string(msg.Value)) 517 | conn, err := net.DialTimeout("tcp", address, time.Duration(c.getNetworkTimeout())*time.Second) 518 | if err != nil { 519 | c.debug(err.Error()) 520 | return deadNodeError 521 | } 522 | defer conn.Close() 523 | conn.SetDeadline(time.Now().Add(time.Duration(c.getNetworkTimeout()) * time.Second)) 524 | encoder := json.NewEncoder(conn) 525 | err = encoder.Encode(msg) 526 | if err != nil { 527 | return err 528 | } 529 | c.debug("Sent message %s with purpose %d to %s", msg.Key, msg.Purpose, address) 530 | _, err = conn.Read(nil) 531 | if err != nil { 532 | if neterr, ok := err.(net.Error); ok && neterr.Timeout() { 533 | return deadNodeError 534 | } 535 | if err == io.EOF { 536 | err = nil 537 | } 538 | } 539 | return err 540 | } 541 | 542 | // Our message handlers! 543 | 544 | // A node wants to join the cluster. We need to route its message as we normally would, but we should also send it our state tables as appropriate. 
545 | func (c *Cluster) onNodeJoin(msg Message) { 546 | c.debug("\033[4;31mNode %s joined!\033[0m", msg.Key) 547 | mask := StateMask{ 548 | Mask: rT, 549 | Rows: []int{}, 550 | Cols: []int{}, 551 | } 552 | row := c.self.ID.CommonPrefixLen(msg.Key) 553 | if msg.Hop == 1 { 554 | // send only the matching routing table rows 555 | for i := 0; i < row; i++ { 556 | mask.Rows = append(mask.Rows, i) 557 | msg.Hop++ 558 | } 559 | // also send neighborhood set, if I'm the first node to get the message 560 | mask.Mask = mask.Mask | nS 561 | } else { 562 | // send only the routing table rows that match the hop 563 | if msg.Hop < row { 564 | mask.Rows = append(mask.Rows, msg.Hop) 565 | } 566 | } 567 | next, err := c.Route(msg.Key) 568 | if err != nil { 569 | c.fanOutError(err) 570 | } 571 | eol := false 572 | if next == nil { 573 | // also send leaf set, if I'm the last node to get the message 574 | mask.Mask = mask.Mask | lS 575 | eol = true 576 | } 577 | err = c.sendStateTables(msg.Sender, mask, eol) 578 | if err != nil { 579 | if err != deadNodeError { 580 | c.fanOutError(err) 581 | } 582 | } 583 | // forward the message on to the next destination 584 | err = c.Send(msg) 585 | if err != nil { 586 | c.fanOutError(err) 587 | } 588 | } 589 | 590 | // A node has joined the cluster. We need to decide if it belongs in our state tables and if the nodes in the state tables it sends us belong in our state tables. If the version of our state tables it sends to us doesn't match our local version, we need to resend our state tables to prevent a race condition. 591 | func (c *Cluster) onNodeAnnounce(msg Message) { 592 | c.debug("\033[4;31mNode %s announced its presence!\033[0m", msg.Key) 593 | conflicts := byte(0) 594 | if c.self.leafsetVersion > msg.LSVersion { 595 | c.debug("Expected LSVersion %d, got %d", c.self.leafsetVersion, msg.LSVersion) 596 | conflicts = conflicts | lS 597 | } 598 | if c.self.routingTableVersion > msg.RTVersion { 599 | c.debug("Expected RTVersion %d, got %d", c.self.routingTableVersion, msg.RTVersion) 600 | conflicts = conflicts | rT 601 | } 602 | if c.self.neighborhoodSetVersion > msg.NSVersion { 603 | c.debug("Expected NSVersion %d, got %d", c.self.neighborhoodSetVersion, msg.NSVersion) 604 | conflicts = conflicts | nS 605 | } 606 | if conflicts > 0 { 607 | c.debug("Uh oh, %s hit a race condition. Resending state.", msg.Key) 608 | err := c.sendRaceNotification(msg.Sender, StateMask{Mask: conflicts}) 609 | if err != nil { 610 | c.fanOutError(err) 611 | } 612 | return 613 | } 614 | c.debug("No conflicts!") 615 | err := c.insertMessage(msg) 616 | if err != nil { 617 | c.fanOutError(err) 618 | } 619 | c.debug("About to fan out join messages...") 620 | c.fanOutJoin(msg.Sender) 621 | } 622 | 623 | func (c *Cluster) onNodeExit(msg Message) { 624 | c.debug("Node %s left. :(", msg.Sender.ID) 625 | err := c.remove(msg.Sender.ID) 626 | if err != nil { 627 | c.fanOutError(err) 628 | return 629 | } 630 | } 631 | 632 | func (c *Cluster) onStateReceived(msg Message) { 633 | err := c.insertMessage(msg) 634 | if err != nil { 635 | c.debug(err.Error()) 636 | c.fanOutError(err) 637 | } 638 | var state stateTables 639 | err = json.Unmarshal(msg.Value, &state) 640 | if err != nil { 641 | c.debug(err.Error()) 642 | c.fanOutError(err) 643 | return 644 | } 645 | c.debug("State received. EOL is %v, isJoined is %v.", state.EOL, c.isJoined()) 646 | if !c.isJoined() && state.EOL { 647 | c.debug("Haven't announced presence yet... 
waiting %d seconds", (2 * c.getNetworkTimeout())) 648 | time.Sleep(time.Duration(2*c.getNetworkTimeout()) * time.Second) 649 | err = c.announcePresence() 650 | if err != nil { 651 | c.fanOutError(err) 652 | } 653 | } else if !state.EOL { 654 | c.debug("Not end of line.") 655 | } else { 656 | c.debug("Already announced presence.") 657 | } 658 | } 659 | 660 | func (c *Cluster) onStateRequested(msg Message) { 661 | c.debug("%s wants to know about my state tables!", msg.Sender.ID) 662 | var mask StateMask 663 | err := json.Unmarshal(msg.Value, &mask) 664 | if err != nil { 665 | c.fanOutError(err) 666 | return 667 | } 668 | c.sendStateTables(msg.Sender, mask, false) 669 | } 670 | 671 | func (c *Cluster) onRaceCondition(msg Message) { 672 | c.debug("Race condition. Awkward.") 673 | err := c.insertMessage(msg) 674 | if err != nil { 675 | c.fanOutError(err) 676 | } 677 | err = c.announcePresence() 678 | if err != nil { 679 | c.fanOutError(err) 680 | } 681 | } 682 | 683 | func (c *Cluster) onRepairRequest(msg Message) { 684 | c.debug("Helping to repair %s", msg.Sender.ID) 685 | var mask StateMask 686 | err := json.Unmarshal(msg.Value, &mask) 687 | if err != nil { 688 | c.fanOutError(err) 689 | return 690 | } 691 | c.sendStateTables(msg.Sender, mask, false) 692 | } 693 | 694 | func (c *Cluster) onMessageReceived(msg Message) { 695 | c.debug("Received message %s", msg.Key) 696 | err := c.Send(msg) 697 | if err != nil { 698 | c.fanOutError(err) 699 | } 700 | } 701 | 702 | func (c *Cluster) dumpStateTables(tables StateMask) (stateTables, error) { 703 | var state stateTables 704 | if tables.includeRT() { 705 | routingTable := c.table.export(tables.Rows, tables.Cols) 706 | state.RoutingTable = &routingTable 707 | } 708 | if tables.includeLS() { 709 | leafSet := c.leafset.export() 710 | state.LeafSet = &leafSet 711 | } 712 | if tables.includeNS() { 713 | neighborhoodSet := c.neighborhoodset.export() 714 | state.NeighborhoodSet = &neighborhoodSet 715 | } 716 | return state, nil 717 | } 718 | 719 | func (c *Cluster) sendStateTables(node Node, tables StateMask, eol bool) error { 720 | state, err := c.dumpStateTables(tables) 721 | if err != nil { 722 | return err 723 | } 724 | state.EOL = eol 725 | data, err := json.Marshal(state) 726 | if err != nil { 727 | return err 728 | } 729 | msg := c.NewMessage(STAT_DATA, c.self.ID, data) 730 | target, err := c.get(node.ID) 731 | if err != nil { 732 | if _, ok := err.(IdentityError); !ok && err != nodeNotFoundError { 733 | return err 734 | } else if err == nodeNotFoundError { 735 | return c.send(msg, &node) 736 | } 737 | } 738 | c.debug("Sending state tables to %s", node.ID) 739 | return c.send(msg, target) 740 | } 741 | 742 | func (c *Cluster) sendRaceNotification(node Node, tables StateMask) error { 743 | state, err := c.dumpStateTables(tables) 744 | if err != nil { 745 | return err 746 | } 747 | data, err := json.Marshal(state) 748 | if err != nil { 749 | return err 750 | } 751 | msg := c.NewMessage(NODE_RACE, c.self.ID, data) 752 | target, err := c.get(node.ID) 753 | if err != nil { 754 | if _, ok := err.(IdentityError); !ok && err != nodeNotFoundError { 755 | return err 756 | } else if err == nodeNotFoundError { 757 | return c.send(msg, &node) 758 | } 759 | } 760 | c.debug("Sending state tables to %s to fix race condition", node.ID) 761 | return c.send(msg, target) 762 | } 763 | 764 | func (c *Cluster) announcePresence() error { 765 | c.debug("Announcing presence...") 766 | state, err := c.dumpStateTables(StateMask{Mask: all}) 767 | if err != nil { 768 | return 
err 769 | } 770 | data, err := json.Marshal(state) 771 | if err != nil { 772 | return err 773 | } 774 | msg := c.NewMessage(NODE_ANN, c.self.ID, data) 775 | nodes := c.table.list([]int{}, []int{}) 776 | nodes = append(nodes, c.leafset.list()...) 777 | nodes = append(nodes, c.neighborhoodset.list()...) 778 | sent := map[NodeID]bool{} 779 | for _, node := range nodes { 780 | if node == nil { 781 | continue 782 | } 783 | c.debug("Saw node %s. rtVersion: %d\tlsVersion: %d\tnsVersion: %d", node.ID.String(), node.routingTableVersion, node.leafsetVersion, node.neighborhoodSetVersion) 784 | if _, set := sent[node.ID]; set { 785 | c.debug("Skipping node %s, already sent announcement there.", node.ID.String()) 786 | continue 787 | } 788 | c.debug("Announcing presence to %s", node.ID) 789 | c.debug("Node: %s\trt: %d\tls: %d\tns: %d", node.ID.String(), node.routingTableVersion, node.leafsetVersion, node.neighborhoodSetVersion) 790 | msg.LSVersion = node.leafsetVersion 791 | msg.RTVersion = node.routingTableVersion 792 | msg.NSVersion = node.neighborhoodSetVersion 793 | err := c.send(msg, node) 794 | if err == deadNodeError { 795 | err = c.remove(node.ID) 796 | if err != nil { 797 | c.fanOutError(err) 798 | } 799 | continue 800 | } 801 | sent[node.ID] = true 802 | } 803 | c.lock.Lock() 804 | defer c.lock.Unlock() 805 | c.joined = true 806 | return nil 807 | } 808 | 809 | func (c *Cluster) repairLeafset(id NodeID) error { 810 | target, err := c.leafset.getNextNode(id) 811 | if err != nil { 812 | if err == nodeNotFoundError { 813 | c.warn("No node found when trying to repair the leafset. Was there a catastrophe?") 814 | } else { 815 | return err 816 | } 817 | } 818 | mask := StateMask{Mask: lS} 819 | data, err := json.Marshal(mask) 820 | if err != nil { 821 | return err 822 | } 823 | msg := c.NewMessage(NODE_REPR, id, data) 824 | return c.send(msg, target) 825 | } 826 | 827 | func (c *Cluster) repairTable(id NodeID) error { 828 | row := c.self.ID.CommonPrefixLen(id) 829 | reqRow := row 830 | col := int(id.Digit(row)) 831 | targets := []*Node{} 832 | for len(targets) < 1 && row < len(c.table.nodes) { 833 | targets = c.table.list([]int{row}, []int{}) 834 | if len(targets) < 1 { 835 | row = row + 1 836 | } 837 | } 838 | mask := StateMask{Mask: rT, Rows: []int{reqRow}, Cols: []int{col}} 839 | data, err := json.Marshal(mask) 840 | if err != nil { 841 | return err 842 | } 843 | msg := c.NewMessage(NODE_REPR, c.self.ID, data) 844 | for _, target := range targets { 845 | err = c.send(msg, target) 846 | if err != nil { 847 | return err 848 | } 849 | } 850 | return nil 851 | } 852 | 853 | func (c *Cluster) repairNeighborhood() error { 854 | targets := c.neighborhoodset.list() 855 | mask := StateMask{Mask: nS} 856 | data, err := json.Marshal(mask) 857 | if err != nil { 858 | return err 859 | } 860 | msg := c.NewMessage(NODE_REPR, c.self.ID, data) 861 | for _, target := range targets { 862 | err = c.send(msg, target) 863 | if err != nil { 864 | return err 865 | } 866 | } 867 | return nil 868 | } 869 | 870 | func (c *Cluster) updateProximity(node *Node) error { 871 | proximity := c.getCachedProximity(node.ID) 872 | if proximity < 0 { 873 | msg := c.NewMessage(HEARTBEAT, c.self.ID, []byte{}) 874 | c.debug("Checking proximity to %s", node.ID) 875 | err := c.send(msg, node) 876 | if err != nil { 877 | return err 878 | } 879 | c.debug("Proximity to %s checked.", node.ID) 880 | c.cacheProximity(node.ID, node.getRawProximity()) 881 | c.debug("Proximity to %s cached.", node.ID) 882 | } 883 | return nil 884 | } 885 | 886 | 
func (c *Cluster) insertMessage(msg Message) error { 887 | var state stateTables 888 | err := json.Unmarshal(msg.Value, &state) 889 | if err != nil { 890 | c.debug("Error unmarshalling JSON: %s", err.Error()) 891 | return err 892 | } 893 | sender := &msg.Sender 894 | c.debug("Updating versions for %s. RT: %d, LS: %d, NS: %d.", sender.ID.String(), msg.RTVersion, msg.LSVersion, msg.NSVersion) 895 | sender.updateVersions(msg.RTVersion, msg.LSVersion, msg.NSVersion) 896 | err = c.insert(*sender, StateMask{Mask: all}) 897 | if err != nil { 898 | return err 899 | } 900 | if state.NeighborhoodSet != nil { 901 | for _, node := range state.NeighborhoodSet { 902 | if node == nil { 903 | continue 904 | } 905 | err = c.insert(*node, StateMask{Mask: nS}) 906 | if err != nil { 907 | return err 908 | } 909 | } 910 | } 911 | if state.LeafSet != nil { 912 | for _, side := range state.LeafSet { 913 | for _, node := range side { 914 | if node == nil { 915 | continue 916 | } 917 | err = c.insert(*node, StateMask{Mask: lS | nS}) 918 | if err != nil { 919 | return err 920 | } 921 | } 922 | } 923 | } 924 | if state.RoutingTable != nil { 925 | for _, row := range state.RoutingTable { 926 | for _, node := range row { 927 | if node == nil { 928 | continue 929 | } 930 | err = c.insert(*node, StateMask{Mask: rT | nS}) 931 | if err != nil { 932 | return err 933 | } 934 | } 935 | } 936 | } 937 | return nil 938 | } 939 | 940 | func (c *Cluster) insert(node Node, tables StateMask) error { 941 | if node.IsZero() { 942 | return nil 943 | } 944 | if node.ID.Equals(c.self.ID) { 945 | c.debug("Skipping inserting myself.") 946 | return nil 947 | } 948 | c.debug("Inserting node %s", node.ID) 949 | if node.getRawProximity() <= 0 && (tables.includeNS() || tables.includeRT()) { 950 | c.debug("Updating proximity") 951 | c.updateProximity(&node) 952 | c.debug("Updated proximity") 953 | c.debug("Inserting node %s in routing table.", node.ID) 954 | resp, err := c.table.insertNode(node, node.getRawProximity()) 955 | if err != nil && err != rtDuplicateInsertError { 956 | c.err("Error inserting node: %s", err.Error()) 957 | return err 958 | } 959 | if resp != nil && err != rtDuplicateInsertError { 960 | c.debug("Inserted node %s in routing table.", resp.ID) 961 | } 962 | if err == rtDuplicateInsertError { 963 | c.debug(err.Error()) 964 | } 965 | } 966 | if tables.includeLS() { 967 | c.debug("Inserting node %s in leaf set.", node.ID) 968 | resp, err := c.leafset.insertNode(node) 969 | if err != nil && err != lsDuplicateInsertError { 970 | return err 971 | } 972 | if resp != nil && err != lsDuplicateInsertError { 973 | c.debug("Inserted node %s in leaf set.", resp.ID) 974 | c.newLeaves(c.leafset.list()) 975 | } 976 | c.debug("At the end of the leafset insert block.") 977 | if err == lsDuplicateInsertError { 978 | c.debug(err.Error()) 979 | } 980 | } 981 | if tables.includeNS() { 982 | c.debug("Inserting node %s in neighborhood set.", node.ID) 983 | resp, err := c.neighborhoodset.insertNode(node, node.getRawProximity()) 984 | if err != nil && err != nsDuplicateInsertError { 985 | return err 986 | } 987 | if resp != nil && err != nsDuplicateInsertError { 988 | c.debug("Inserted node %s in neighborhood set.", resp.ID) 989 | } 990 | if err == nsDuplicateInsertError { 991 | c.debug(err.Error()) 992 | } 993 | } 994 | return nil 995 | } 996 | 997 | func (c *Cluster) remove(id NodeID) error { 998 | resp, err := c.table.removeNode(id) 999 | if err != nil { 1000 | return err 1001 | } 1002 | if resp != nil { 1003 | err = c.repairTable(resp.ID) 1004 | 
if err != nil { 1005 | return err 1006 | } 1007 | } 1008 | resp, err = c.leafset.removeNode(id) 1009 | if err != nil { 1010 | return err 1011 | } 1012 | if resp != nil { 1013 | err = c.repairLeafset(resp.ID) 1014 | if err != nil { 1015 | return err 1016 | } 1017 | c.newLeaves(c.leafset.list()) 1018 | } 1019 | resp, err = c.neighborhoodset.removeNode(id) 1020 | if err != nil { 1021 | return err 1022 | } 1023 | if resp != nil { 1024 | err = c.repairNeighborhood() 1025 | if err != nil { 1026 | return err 1027 | } 1028 | } 1029 | return nil 1030 | } 1031 | 1032 | func (c *Cluster) get(id NodeID) (*Node, error) { 1033 | node, err := c.neighborhoodset.getNode(id) 1034 | if err == nodeNotFoundError { 1035 | node, err = c.leafset.getNode(id) 1036 | if err == nodeNotFoundError { 1037 | node, err = c.table.getNode(id) 1038 | return node, err 1039 | } 1040 | return node, err 1041 | } 1042 | return node, err 1043 | } 1044 | 1045 | func (c *Cluster) debug(format string, v ...interface{}) { 1046 | if c.logLevel <= LogLevelDebug { 1047 | c.log.Printf(format, v...) 1048 | } 1049 | } 1050 | 1051 | func (c *Cluster) warn(format string, v ...interface{}) { 1052 | if c.logLevel <= LogLevelWarn { 1053 | c.log.Printf(format, v...) 1054 | } 1055 | } 1056 | 1057 | func (c *Cluster) err(format string, v ...interface{}) { 1058 | if c.logLevel <= LogLevelError { 1059 | c.log.Printf(format, v...) 1060 | } 1061 | } 1062 | -------------------------------------------------------------------------------- /doc.go: -------------------------------------------------------------------------------- 1 | /* Package wendy implements a fault-tolerant, concurrency-safe distributed hash table. 2 | 3 | Self-Organising Services 4 | 5 | Wendy is a package to help make your Go programs self-organising. It makes communicating between a variable number of machines easy and reliable. Machines are referred to as Nodes, which create a Cluster together. Messages can then be routed throughout the Cluster. 6 | 7 | Getting Started 8 | 9 | Getting your own Cluster running is easy. Just create a Node, build a Cluster around it, and announce your presence. 10 | 11 | hostname, err := os.Hostname() 12 | if err != nil { 13 | panic(err.Error()) 14 | } 15 | id, err := wendy.NodeIDFromBytes([]byte(hostname+" test server")) 16 | if err != nil { 17 | panic(err.Error()) 18 | } 19 | node := wendy.NewNode(id, "your_local_ip_address", "your_global_ip_address", "your_region", 8080) 20 | 21 | credentials := wendy.Passphrase("I <3 Gophers.") 22 | cluster := wendy.NewCluster(node, credentials) 23 | go func() { 24 | defer cluster.Stop() 25 | err := cluster.Listen() 26 | if err != nil { 27 | panic(err.Error()) 28 | } 29 | }() 30 | cluster.Join("ip of another Node", 8080) // ports can be different for each Node 31 | select {} 32 | 33 | About Credentials 34 | 35 | Credentials are an interface that is used to control access to your Cluster. Wendy provides the Passphrase implementation, which limits access to Nodes that set their Credentials to the same string. You can feel free to make your own--the only requirements are that Marshal() return a slice of bytes and that Valid([]byte) return a boolean, which should be true if the supplied slice of bytes can be unmarshaled to a valid instance of your Credentials implementation AND that valid instance should be granted access to this Cluster. 
36 | */ 37 | package wendy 38 | -------------------------------------------------------------------------------- /docs/algorithm.md: -------------------------------------------------------------------------------- 1 | This document exists to offer a plain-English explanation of how Wendy works. Wendy is based heavily on [Pastry](http://research.microsoft.com/en-us/um/people/antr/PAST/pastry.pdf), but there is no guarantee that Wendy does not stray from the Pastry paper. This document is the only canonical spec for how Wendy functions. 2 | 3 | ## Metrics 4 | 5 | ### Node and Message IDs 6 | 7 | IDs are simply 128 bits that can be used to uniquely identify a message or node. Note that node and message IDs are in the same format. They can be visualised as a single circle, like a clock; at the top of the circle is 0, and it counts up until it gets to (2^128)-1 (which is the maximum possible value an ID can hold), which sits next to 0. Thus the number line loops around, ensuring that there is always an ID which is modularly greater than the current ID and an ID that is modularly less than the current ID. 8 | 9 | IDs are the only metric used when actually routing a Message, as will be explained. 10 | 11 | ### Proximity 12 | 13 | Each node in the cluster keeps a "proximity metric" for every other node it knows about in the cluster. The proximity metric is simply a measurement of how close in the network topology one node is to another. Currently, this is measured by the time a single request takes between two nodes, and is updated every time the nodes communicate. For performance reasons, there is also a basic cache of proximity scores for nodes (both known and unknown) that the current node has encountered. This cache empties itself every hour, but is unbounded in the memory it can consume. 14 | 15 | Proximity is only used when populating state tables. The proximity is never used during routing itself. 16 | 17 | ## State Tables 18 | 19 | Wendy maintains three state tables, arrays of known nodes in the cluster that are used either for routing or maintaining the other state tables. 20 | 21 | ### Routing Table 22 | 23 | The routing table is a two-dimensional array, consisting of 32 rows of 16 columns each. Each cell is capable of holding a single node. The routing table exists to keep a representative portion of the cluster, for the purposes of routing messages. 24 | 25 | The routing table is populated by dividing the ID of a node into 32 digits, each with 16 possible values. To determine which row a node belongs in, the common prefix is calculated between the current node and the node being inserted. For example, if the current node has an ID of `1A2BC3D..` and the inserted node has an ID of `1A2BC3E..`, the common prefix is `1A2BC3`. The length of this common prefix is the row the node will be inserted into in the routing table. To determine which column a node belongs in, take the value of the first different digit in the ID (`E` in our example) as a base 16 number (14). So our example node would be inserted into row 6, column 14 of the routing table. 26 | 27 | When a node leaves the cluster, the routing table is repaired by finding another suitable node to take its place. Assume the node in row 6, column 14 of the routing table has failed. To find another suitable node, the other nodes in the row are asked for the node at that position in _their_ routing tables. Because those nodes all share the same prefix length, any node they have in that position would be appropriate for the current node's empty position. If there are no known nodes in the same row, or if none of them have a node at the empty position, then the process is repeated on the next row in the routing table (row 7, in our example). Because each node in this row has a longer common prefix than the one our empty position requires, they too will all have an appropriate value for our empty position. This is repeated until the end of the routing table is reached. If a node exists that is a suitable replacement, this process is highly likely to find it. 28 | 
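The row and column arithmetic is compact enough to show directly. Here is a sketch using the `CommonPrefixLen` and `Digit` helpers that Wendy's own cluster.go relies on; `nodeA` and `nodeB` are hypothetical nodes carrying the IDs from the example above:

```go
// Where does nodeB belong in nodeA's routing table?
row := nodeA.ID.CommonPrefixLen(nodeB.ID) // 6: the shared prefix "1A2BC3" is six digits long
col := int(nodeB.ID.Digit(row))           // 14: the first differing digit is E, i.e. 0xE
// nodeB is a candidate for row 6, column 14 of nodeA's routing table.
```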
26 | 27 | When a node leaves the cluster, the routing table is repaired by finding another suitable node to take its place. Assume the node in row 6, column 14 of the routing table has failed. To find another suitable node, the other nodes in the row are asked for the node at that position in _their_ routing tables. Because those nodes all share a common prefix of the same length with the current node, any node they have in that position would be appropriate for the current node's empty position. If there are no known nodes in the same row, or if none of them have a node at the empty position, the process is repeated on the next row in the routing table (row 7, in our example). Because each node in that row shares a longer common prefix than the empty position requires, they too will all have an appropriate value for the empty position. This is repeated until the end of the routing table is reached. If a suitable replacement node exists, this process is highly likely to find it. 28 | 29 | When two nodes are both equally suited to fill a position in the routing table, the neighborhood set is consulted to determine which node has a closer proximity in the network topology to the current node. This ensures that routing has good locality properties and favours nodes that will take less time to communicate with. 30 | 31 | ### Leaf Set 32 | 33 | The leaf set can be visualised as two arrays of 16 nodes each. The leaf set exists to keep a list of the current node's immediate neighbours in the node ID space, for the purposes of routing. One array of 16 nodes (the "left" array) contains nodes that have lower IDs than the current node's ID. The other (the "right" array) contains nodes that have greater IDs than the current node's ID. 34 | 35 | The leaf set is populated by determining whether the inserted node's ID is greater than or less than the current node's ID. Once this is determined, the appropriate array is selected. Each node in the array is checked against the inserted node's ID. If the inserted ID falls between the current node's ID and the ID of the node being checked, the inserted node is placed at that location in the leaf set, and the rest of the nodes in the leaf set are pushed back by a single position. If, during this check, an unfilled position is encountered in the leaf set, the inserted node assumes that position. After this process is done, the leaf set is limited to the 16 closest IDs on each side: the 16 nodes whose IDs are the closest to the current node's while being greater, and the 16 whose IDs are the closest while being lesser. 36 | 37 | The leaf set is repaired by choosing the furthest node on the same side as the removed node, asking for its leaf set, and recalculating the array based on that information. Unless all 16 nodes on a single side of the leaf set depart the cluster before the leaf set can be repaired, this process is guaranteed to keep the leaf set repaired. 38 | 39 | ### Neighborhood Set 40 | 41 | The neighborhood set is simply an array of 32 nodes. It exists to keep a list of the nodes that are closest to the current node in the network topology, ensuring that the collection of known nodes will have a wide representation of IDs. The neighborhood set is used when populating and repairing the routing table, but is never used during routing. 42 | 43 | The neighborhood set is populated by calculating the proximity metric between the inserted node and the current node, then comparing it to the scores of the other nodes in the neighborhood set. The neighborhood set is sorted by proximity metric score, from lowest (best) to highest (worst). When a new node is inserted, the 32 nodes with the lowest scores are retained, and any extras are discarded.
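That insertion can be pictured as a sorted insert that truncates to 32 entries. The following is a rough, non-normative sketch (the `neighbor` type is invented for the example; the real logic lives in neighborhood.go and operates on Wendy's `Node` type):

```go
type neighbor struct {
	id        string
	proximity int64 // lower is closer, as described above
}

// insertNeighbor places candidate into a set kept sorted by proximity
// score (lowest/best first), then truncates the set to max entries,
// discarding the worst-scoring extras. Wendy's neighborhood set uses
// max = 32.
func insertNeighbor(set []neighbor, candidate neighbor, max int) []neighbor {
	i := 0
	for i < len(set) && set[i].proximity <= candidate.proximity {
		i++ // walk past every node at least as close as the candidate
	}
	set = append(set[:i], append([]neighbor{candidate}, set[i:]...)...)
	if len(set) > max {
		set = set[:max]
	}
	return set
}
```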
44 | 45 | The neighborhood set is repaired by requesting the neighborhood set of every other node in the neighborhood set, then calculating the proximity metric for the received nodes. These nodes are all in close proximity to the current node, so the nodes in close proximity to _them_ are likely to contain a suitable replacement. 46 | 47 | ## Routing 48 | 49 | Routing a message through the cluster is a simple process of finding a suitable node in the state tables, then forwarding the message to that node. Should no suitable node be found, the message has reached its destination and is considered "delivered". 50 | 51 | The message ID is the key tool used in routing the message through the cluster. The node with the ID closest to the message ID is the destination for the message, and each routing step should bring the message closer to that destination. 52 | 53 | The first state table consulted when routing a message is the leaf set. If the message ID falls within the range covered by the leaf set, the current node knows the destination of the message, and forwards the message there. 54 | 55 | If the message ID falls outside the leaf set, the routing table is consulted. The shared prefix between the message ID and the current node's ID is calculated, which determines the row to consult in the routing table. The column is the value of the first differing digit in the message ID. If a node exists at that column in the routing table, the message is forwarded to that node. 56 | 57 | If no node exists at that row and column in the routing table, the rest of the row is searched for a node whose ID is numerically closer to the message ID than the current node's ID is. If such a node is found, the message is forwarded to that node. 58 | 59 | If no node in the row is closer to the message ID than the current node, lower rows (higher indices) are searched for a node closer to the message ID than the current node. If such a node is found, the message is forwarded to that node. 60 | 61 | If no such node can be found, the current node is the most appropriate node in the cluster, and should be considered the destination for the message. At this point, the message is considered "delivered".
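The procedure above can be compressed into a short, non-normative sketch (the helper names — `leafSetRoute`, `commonPrefixLen`, `digit`, and `idDistance` — are assumed for readability and are not Wendy's actual API; bounds checks are omitted):

```go
// nextHop returns the node a message with the given key should be
// forwarded to, or nil when the current node is the destination.
func nextHop(key NodeID) *Node {
	// 1. Leaf set: if the key falls within its range, route directly.
	if n := leafSetRoute(key); n != nil {
		return n
	}
	// 2. Routing table: exact row/column match.
	row := commonPrefixLen(self.ID, key)
	col := digit(key, row)
	if n := table[row][col]; n != nil {
		return n
	}
	// 3. Fall back to any known node numerically closer to the key
	// than the current node, scanning this row and then lower rows.
	for r := row; r < 32; r++ {
		for c := 0; c < 16; c++ {
			n := table[r][c]
			if n != nil && idDistance(n.ID, key) < idDistance(self.ID, key) {
				return n
			}
		}
	}
	return nil // no closer node: the message is "delivered" here
}
```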
62 | 63 | ## Joining the Cluster 64 | 65 | When a node wishes to join the cluster, it needs to know the IP and port of another node in the cluster. This node is assumed to be the closest to the joining node in the network topology; if a sub-optimal node is chosen, only the locality properties of routing will be affected. Essentially, Wendy will be a little slower, but everything should still work. 66 | 67 | The joining node crafts a message with a message ID equal to its node ID. This special "join" message is then sent to the specified node, which routes it like any other message. Each node that receives the message sends its routing table to the joining node. The node the message was originally sent to, because it is assumed to be the closest node in the network topology, sends its neighborhood set to the joining node, as nodes close to it should be close to the joining node. Finally, the destination node for the message also sends its leaf set to the joining node. 68 | 69 | When the node receives routing table information, it attempts to insert the nodes in the received routing table into its own routing table. For nodes whose proximity is unknown, it first checks its local proximity cache (to reduce the number of repeat requests made). If no cached score is found, it makes a request to the inserted node to determine its proximity, then stores the result in its proximity cache. Each node is also evaluated for inclusion in the neighborhood set as it is being inserted into the routing table. 70 | 71 | When the node receives leaf set information, it uses that leaf set as the basis of its own leaf set. The node the message is delivered to is the node with the ID closest to the joining node's own, so the nodes closest to that node in the node ID space are also the nodes closest to the joining node, and therefore appropriate choices for its leaf set. 72 | 73 | When the node receives neighborhood set information, it uses that neighborhood set as the basis of its own neighborhood set. The node the neighborhood set comes from _should_ be the closest node in the network topology, and assuming the proximity metric behaves like a Euclidean distance (i.e., if point A is close to point B and point C is close to point A, then point C is also close to point B; this holds true in the current implementation), the nodes closest to that node should be the nodes closest to this node. Wendy makes a cursory effort to correct mistakes here by gauging the appropriateness of every node it encounters for the neighborhood set, but it is still possible to end up with a sub-optimal neighborhood set in larger clusters. Wendy will still continue to function, though its routing paths will be less optimal than they would be with a proper neighborhood set. 74 | 75 | Once the node receives the leaf set information, it waits for twice the configured network timeout. This is to prevent a race condition in which the last node to receive the join message is not the last node to contact the joining node with its state tables, which is possible if other nodes are running slowly. After this delay to wait for straggling state messages, the node sends a special message announcing its presence to every node it knows about, along with its state tables. 76 | 77 | The announcement includes the versions of each state table for the node it is being sent to. These versions are included when a node sends its state tables to the joining node, and serve to prevent race conditions in which a node transmits its state tables but updates them again before the joining node has joined the cluster. The joining node stores these versions locally, then sends them back as part of announcing its presence. If a node's local versions are higher than the versions included in the message, a race condition warning will be sent, containing the node's new state. The joining node will use this state to update its own state tables, then re-announce its presence. 78 | 79 | Upon receiving a presence announcement from a joining node, each node updates its state tables with the joining node and the state tables it carries.
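As a minimal sketch of that version check (the function and parameter names are illustrative, not Wendy's actual API):

```go
// raceDetected reports whether a node's state tables changed after it
// sent them to the joining node: if any local version is now higher
// than the version echoed back in the announcement, the announcing
// node's view is stale and a race condition warning (carrying the
// fresh state) should be sent.
func raceDetected(localRT, localLS, localNS, echoedRT, echoedLS, echoedNS uint64) bool {
	return localRT > echoedRT || localLS > echoedLS || localNS > echoedNS
}
```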
-------------------------------------------------------------------------------- /integration_test.go: -------------------------------------------------------------------------------- 1 | package wendy 2 | 3 | import ( 4 | "testing" 5 | "time" 6 | ) 7 | 8 | type forwardData struct { 9 | next NodeID 10 | msg *Message 11 | } 12 | 13 | type testCallback struct { 14 | t *testing.T 15 | onDeliver chan Message 16 | onForward chan forwardData 17 | onNewLeaves chan []*Node 18 | onNodeJoin chan Node 19 | onNodeExit chan Node 20 | onHeartbeat chan Node 21 | } 22 | 23 | func newTestCallback(t *testing.T) *testCallback { 24 | return &testCallback{ 25 | t: t, 26 | onDeliver: make(chan Message, 10), 27 | onForward: make(chan forwardData, 10), 28 | onNewLeaves: make(chan []*Node, 10), 29 | onNodeJoin: make(chan Node, 10), 30 | onNodeExit: make(chan Node, 10), 31 | onHeartbeat: make(chan Node, 10), 32 | } 33 | } 34 | 35 | func (t *testCallback) OnError(err error) { 36 | t.t.Fatalf(err.Error()) 37 | } 38 | 39 | func (t *testCallback) OnDeliver(msg Message) { 40 | select { 41 | case t.onDeliver <- msg: 42 | default: 43 | } 44 | } 45 | 46 | func (t *testCallback) OnForward(msg *Message, next NodeID) bool { 47 | select { 48 | case t.onForward <- forwardData{next: next, msg: msg}: 49 | default: 50 | } 51 | return true 52 | } 53 | 54 | func (t *testCallback) OnNewLeaves(leaves []*Node) { 55 | select { 56 | case t.onNewLeaves <- leaves: 57 | default: 58 | } 59 | } 60 | 61 | func (t *testCallback) OnNodeJoin(node Node) { 62 | select { 63 | case t.onNodeJoin <- node: 64 | default: 65 | } 66 | } 67 | 68 | func (t *testCallback) OnNodeExit(node Node) { 69 | select { 70 | case t.onNodeExit <- node: 71 | default: 72 | } 73 | } 74 | 75 | func (t *testCallback) OnHeartbeat(node Node) { 76 | select { 77 | case t.onHeartbeat <- node: 78 | default: 79 | } 80 | } 81 | 82 | func makeCluster(idBytes string) (*Cluster, error) { 83 | id, err := NodeIDFromBytes([]byte(idBytes)) 84 | if err != nil { 85 | return nil, err 86 | } 87 | node := NewNode(id, "127.0.0.1", "127.0.0.1", "testing", 0) 88 | cluster := NewCluster(node, nil) 89 | cluster.SetHeartbeatFrequency(10) 90 | cluster.SetNetworkTimeout(1) 91 | cluster.SetLogLevel(LogLevelDebug) 92 | return cluster, nil 93 | } 94 | 95 | // Test joining two nodes 96 | func TestClusterJoinTwo(t *testing.T) { 97 | if testing.Short() { 98 | return 99 | } 100 | one, err := makeCluster("this is a test Node for testing purposes only.") 101 | if err != nil { 102 | t.Fatalf(err.Error()) 103 | } 104 | one.debug("One is %s", one.self.ID) 105 | oneCB := newTestCallback(t) 106 | one.RegisterCallback(oneCB) 107 | two, err := makeCluster("this is some other Node for testing purposes only.") 108 | if err != nil { 109 | t.Fatalf(err.Error()) 110 | } 111 | two.debug("Two is %s", two.self.ID) 112 | twoCB := newTestCallback(t) 113 | two.RegisterCallback(twoCB) 114 | go func() { 115 | defer one.Kill() 116 | err := one.Listen() 117 | if err != nil { 118 | t.Fatalf(err.Error()) 119 | } 120 | }() 121 | go func() { 122 | defer two.Kill() 123 | err := two.Listen() 124 | if err != nil { 125 | t.Fatalf(err.Error()) 126 | } 127 | }() 128 | time.Sleep(2 * time.Millisecond) 129 | err = two.Join(one.self.LocalIP, one.self.Port) 130 | if err != nil { 131 | t.Fatalf(err.Error()) 132 | } 133 | ticker := time.NewTicker(3 * time.Duration(one.getNetworkTimeout()) * time.Second) 134 | defer ticker.Stop() 135 | select { 136 | case <-ticker.C: 137 | t.Fatalf("Timeout waiting on join. 
Waited %d seconds.", 3*one.getNetworkTimeout()) 138 | return 139 | case <-oneCB.onNodeJoin: 140 | _, err = one.table.getNode(two.self.ID) 141 | if err != nil { 142 | t.Fatalf(err.Error()) 143 | } 144 | _, err = two.table.getNode(one.self.ID) 145 | if err != nil { 146 | t.Fatalf(err.Error()) 147 | } 148 | _, err = one.leafset.getNode(two.self.ID) 149 | if err != nil { 150 | t.Fatalf(err.Error()) 151 | } 152 | _, err = two.leafset.getNode(one.self.ID) 153 | if err != nil { 154 | t.Fatalf(err.Error()) 155 | } 156 | _, err = one.neighborhoodset.getNode(two.self.ID) 157 | if err != nil { 158 | t.Fatalf(err.Error()) 159 | } 160 | _, err = two.neighborhoodset.getNode(one.self.ID) 161 | if err != nil { 162 | t.Fatalf(err.Error()) 163 | } 164 | } 165 | ticker.Stop() 166 | } 167 | 168 | // Test joining three nodes 169 | func TestClusterJoinThreeToTwo(t *testing.T) { 170 | if testing.Short() { 171 | return 172 | } 173 | one, err := makeCluster("A test Node for testing purposes only.") 174 | if err != nil { 175 | t.Fatalf(err.Error()) 176 | } 177 | one.debug("One is %s", one.self.ID) 178 | oneCB := newTestCallback(t) 179 | one.RegisterCallback(oneCB) 180 | two, err := makeCluster("just some other Node for testing purposes only.") 181 | if err != nil { 182 | t.Fatalf(err.Error()) 183 | } 184 | two.debug("Two is %s", two.self.ID) 185 | twoCB := newTestCallback(t) 186 | two.RegisterCallback(twoCB) 187 | three, err := makeCluster("yet a third Node for testing purposes only.") 188 | if err != nil { 189 | t.Fatalf(err.Error()) 190 | } 191 | three.debug("Three is %s", three.self.ID) 192 | threeCB := newTestCallback(t) 193 | three.RegisterCallback(threeCB) 194 | go func() { 195 | defer one.Kill() 196 | err := one.Listen() 197 | if err != nil { 198 | t.Fatalf(err.Error()) 199 | } 200 | }() 201 | go func() { 202 | defer two.Kill() 203 | err := two.Listen() 204 | if err != nil { 205 | t.Fatalf(err.Error()) 206 | } 207 | }() 208 | go func() { 209 | defer three.Kill() 210 | err := three.Listen() 211 | if err != nil { 212 | t.Fatalf(err.Error()) 213 | } 214 | }() 215 | time.Sleep(2 * time.Millisecond) 216 | err = two.Join(one.self.LocalIP, one.self.Port) 217 | if err != nil { 218 | t.Fatalf(err.Error()) 219 | } 220 | ticker := time.NewTicker(5 * time.Duration(one.getNetworkTimeout()) * time.Second) 221 | defer ticker.Stop() 222 | select { 223 | case <-ticker.C: 224 | t.Fatalf("Timeout waiting on two join. Waited %d seconds.", 5*one.getNetworkTimeout()) 225 | return 226 | case <-oneCB.onNodeJoin: 227 | _, err = one.table.getNode(two.self.ID) 228 | if err != nil { 229 | t.Fatalf(err.Error()) 230 | } 231 | _, err = two.table.getNode(one.self.ID) 232 | if err != nil { 233 | t.Fatalf(err.Error()) 234 | } 235 | _, err = one.leafset.getNode(two.self.ID) 236 | if err != nil { 237 | t.Fatalf(err.Error()) 238 | } 239 | _, err = two.leafset.getNode(one.self.ID) 240 | if err != nil { 241 | t.Fatalf(err.Error()) 242 | } 243 | _, err = one.neighborhoodset.getNode(two.self.ID) 244 | if err != nil { 245 | t.Fatalf(err.Error()) 246 | } 247 | _, err = two.neighborhoodset.getNode(one.self.ID) 248 | if err != nil { 249 | t.Fatalf(err.Error()) 250 | } 251 | } 252 | ticker.Stop() 253 | err = three.Join(two.self.LocalIP, two.self.Port) 254 | if err != nil { 255 | t.Fatalf(err.Error()) 256 | } 257 | returns := 0 258 | ticker = time.NewTicker(120 * time.Second) 259 | defer ticker.Stop() 260 | L: 261 | for { 262 | select { 263 | case <-ticker.C: 264 | t.Fatalf("Timeout waiting on three to join. 
Waited %d seconds.", 120) 265 | return 266 | case <-twoCB.onNodeJoin: 267 | t.Logf("Got node join callback from twoCB") 268 | if returns < 1 { 269 | t.Logf("Waiting on first Node join callback") 270 | returns = returns + 1 271 | continue 272 | } 273 | break L 274 | case <-oneCB.onNodeJoin: 275 | t.Logf("Got Node join callback from oneCB") 276 | if returns < 1 { 277 | t.Logf("Waiting on second Node join callback") 278 | returns = returns + 1 279 | continue 280 | } 281 | break L 282 | } 283 | } 284 | _, err = one.table.getNode(three.self.ID) 285 | if err != nil { 286 | t.Logf("Error getting three from one's table") 287 | t.Errorf(err.Error()) 288 | } 289 | _, err = two.table.getNode(three.self.ID) 290 | if err != nil { 291 | t.Logf("Error getting three from two's table") 292 | t.Errorf(err.Error()) 293 | } 294 | _, err = three.table.getNode(one.self.ID) 295 | if err != nil { 296 | t.Logf("Error getting one from three's table") 297 | t.Errorf(err.Error()) 298 | } 299 | _, err = three.table.getNode(two.self.ID) 300 | if err != nil { 301 | t.Logf("Error getting two from three's table") 302 | t.Errorf(err.Error()) 303 | } 304 | _, err = one.leafset.getNode(three.self.ID) 305 | if err != nil { 306 | t.Logf("Error getting three from one's leaf set") 307 | t.Errorf(err.Error()) 308 | } 309 | _, err = two.leafset.getNode(three.self.ID) 310 | if err != nil { 311 | t.Logf("Error getting three from two's leaf set") 312 | t.Errorf(err.Error()) 313 | } 314 | _, err = three.leafset.getNode(one.self.ID) 315 | if err != nil { 316 | t.Logf("Error getting one from three's leaf set") 317 | t.Errorf(err.Error()) 318 | } 319 | _, err = three.leafset.getNode(two.self.ID) 320 | if err != nil { 321 | t.Logf("Error getting two from three's leaf set") 322 | t.Errorf(err.Error()) 323 | } 324 | _, err = one.neighborhoodset.getNode(three.self.ID) 325 | if err != nil { 326 | t.Logf("Error getting three from one's neighborhood set") 327 | t.Errorf(err.Error()) 328 | } 329 | _, err = two.neighborhoodset.getNode(three.self.ID) 330 | if err != nil { 331 | t.Logf("Error getting three from two's neighborhood set") 332 | t.Errorf(err.Error()) 333 | } 334 | _, err = three.neighborhoodset.getNode(one.self.ID) 335 | if err != nil { 336 | t.Logf("Error getting one from three's neighborhood set") 337 | t.Errorf(err.Error()) 338 | } 339 | _, err = three.neighborhoodset.getNode(two.self.ID) 340 | if err != nil { 341 | t.Logf("Error getting two from three's neighborhood set") 342 | t.Errorf(err.Error()) 343 | } 344 | return 345 | } 346 | -------------------------------------------------------------------------------- /leafset.go: -------------------------------------------------------------------------------- 1 | package wendy 2 | 3 | import ( 4 | "errors" 5 | "log" 6 | "os" 7 | "sync" 8 | ) 9 | 10 | type leafSet struct { 11 | self *Node 12 | left [16]*Node 13 | right [16]*Node 14 | log *log.Logger 15 | logLevel int 16 | lock *sync.RWMutex 17 | } 18 | 19 | func newLeafSet(self *Node) *leafSet { 20 | return &leafSet{ 21 | self: self, 22 | left: [16]*Node{}, 23 | right: [16]*Node{}, 24 | log: log.New(os.Stdout, "wendy#leafSet("+self.ID.String()+")", log.LstdFlags), 25 | logLevel: LogLevelWarn, 26 | lock: new(sync.RWMutex), 27 | } 28 | } 29 | 30 | var lsDuplicateInsertError = errors.New("Node already exists in leaf set.") 31 | 32 | func (l *leafSet) insertNode(node Node) (*Node, error) { 33 | return l.insertValues(node.ID, node.LocalIP, node.GlobalIP, node.Region, node.Port, node.routingTableVersion, node.leafsetVersion, 
node.neighborhoodSetVersion) 34 | } 35 | 36 | func (l *leafSet) insertValues(id NodeID, localIP, globalIP, region string, port int, rTVersion, lSVersion, nSVersion uint64) (*Node, error) { 37 | l.lock.Lock() 38 | defer l.lock.Unlock() 39 | node := NewNode(id, localIP, globalIP, region, port) 40 | node.updateVersions(rTVersion, lSVersion, nSVersion) 41 | side := l.self.ID.RelPos(node.ID) 42 | var inserted, contained bool 43 | if side == -1 { 44 | l.left, contained, inserted = node.insertIntoArray(l.left, l.self) 45 | if !contained { 46 | return nil, nil 47 | } else if !inserted { 48 | return nil, lsDuplicateInsertError 49 | } else { 50 | l.self.incrementLSVersion() 51 | return node, nil 52 | } 53 | } else if side == 1 { 54 | l.right, contained, inserted = node.insertIntoArray(l.right, l.self) 55 | if !contained { 56 | return nil, nil 57 | } else if !inserted { 58 | return nil, lsDuplicateInsertError 59 | } else { 60 | l.self.incrementLSVersion() 61 | return node, nil 62 | } 63 | } 64 | return nil, throwIdentityError("insert", "into", "leaf set") 65 | } 66 | 67 | func (l *leafSet) getNode(id NodeID) (*Node, error) { 68 | l.lock.RLock() 69 | defer l.lock.RUnlock() 70 | side := l.self.ID.RelPos(id) 71 | if side == -1 { 72 | for _, node := range l.left { 73 | if node == nil { 74 | break 75 | } 76 | if id.Equals(node.ID) { 77 | return node, nil 78 | } 79 | } 80 | } else if side == 1 { 81 | for _, node := range l.right { 82 | if node == nil { 83 | break 84 | } 85 | if id.Equals(node.ID) { 86 | return node, nil 87 | } 88 | } 89 | } else { 90 | return nil, throwIdentityError("get", "from", "leaf set") 91 | } 92 | return nil, nodeNotFoundError 93 | } 94 | 95 | func (l *leafSet) getNextNode(id NodeID) (*Node, error) { 96 | l.lock.RLock() 97 | defer l.lock.RUnlock() 98 | side := l.self.ID.RelPos(id) 99 | last := -1 100 | if side == -1 { 101 | for pos, node := range l.left { 102 | if node == nil { 103 | continue 104 | } else { 105 | last = pos 106 | if node.ID.Less(id) { 107 | return node, nil 108 | } 109 | continue 110 | } 111 | } 112 | if last > -1 { 113 | return l.left[last], nil 114 | } 115 | return nil, nodeNotFoundError 116 | } else if side == 1 { 117 | for pos, node := range l.right { 118 | if node == nil { 119 | continue 120 | } else { 121 | last = pos 122 | if id.Less(node.ID) { 123 | return node, nil 124 | } 125 | continue 126 | } 127 | } 128 | if last > -1 { 129 | return l.right[last], nil 130 | } 131 | return nil, nodeNotFoundError 132 | } else { 133 | return nil, throwIdentityError("get next", "from", "leaf set") 134 | } 135 | return nil, nodeNotFoundError 136 | } 137 | 138 | func (l *leafSet) route(key NodeID) (*Node, error) { 139 | l.lock.RLock() 140 | defer l.lock.RUnlock() 141 | side := l.self.ID.RelPos(key) 142 | best_score := l.self.ID.Diff(key) 143 | best := l.self 144 | biggest := l.self.ID 145 | if side == -1 { 146 | for _, node := range l.left { 147 | if node == nil { 148 | break 149 | } 150 | diff := key.Diff(node.ID) 151 | if diff.Cmp(best_score) == -1 || (diff.Cmp(best_score) == 0 && node.ID.Less(best.ID)) { 152 | best = node 153 | best_score = diff 154 | } 155 | biggest = node.ID 156 | } 157 | } else { 158 | for _, node := range l.right { 159 | if node == nil { 160 | break 161 | } 162 | diff := key.Diff(node.ID) 163 | if diff.Cmp(best_score) == -1 || (diff.Cmp(best_score) == 0 && node.ID.Less(best.ID)) { 164 | best = node 165 | best_score = diff 166 | } 167 | biggest = node.ID 168 | } 169 | } 170 | if biggest.Less(key) { 171 | return nil, nodeNotFoundError 172 | } 173 | if 
!best.ID.Equals(l.self.ID) { 174 | return best, nil 175 | } else { 176 | return nil, throwIdentityError("route to", "in", "leaf set") 177 | } 178 | return nil, nodeNotFoundError 179 | } 180 | 181 | func (l *leafSet) export() [2][16]*Node { 182 | l.lock.RLock() 183 | defer l.lock.RUnlock() 184 | return [2][16]*Node{l.left, l.right} 185 | } 186 | 187 | func (l *leafSet) list() []*Node { 188 | l.lock.RLock() 189 | defer l.lock.RUnlock() 190 | nodes := []*Node{} 191 | for _, node := range l.left { 192 | if node != nil { 193 | nodes = append(nodes, node) 194 | } 195 | } 196 | for _, node := range l.right { 197 | if node != nil { 198 | nodes = append(nodes, node) 199 | } 200 | } 201 | return nodes 202 | } 203 | 204 | func (node *Node) insertIntoArray(array [16]*Node, center *Node) ([16]*Node, bool, bool) { 205 | var result [16]*Node 206 | result_index := 0 207 | src_index := 0 208 | pos := -1 209 | inserted := false 210 | for result_index < len(result) { 211 | result[result_index] = array[src_index] 212 | if array[src_index] == nil { 213 | if pos < 0 { 214 | result[result_index] = node 215 | pos = result_index 216 | inserted = true 217 | } 218 | break 219 | } 220 | if node.ID.Equals(array[src_index].ID) { 221 | node.updateVersions(array[src_index].routingTableVersion, array[src_index].leafsetVersion, array[src_index].neighborhoodSetVersion) 222 | pos = result_index 223 | result_index += 1 224 | src_index += 1 225 | continue 226 | } 227 | if center.ID.Diff(node.ID).Cmp(center.ID.Diff(result[result_index].ID)) < 0 && pos < 0 { 228 | result[result_index] = node 229 | pos = result_index 230 | inserted = true 231 | } else { 232 | src_index += 1 233 | } 234 | result_index += 1 235 | } 236 | return result, pos > -1, inserted 237 | } 238 | 239 | func (l *leafSet) removeNode(id NodeID) (*Node, error) { 240 | l.lock.Lock() 241 | defer l.lock.Unlock() 242 | side := l.self.ID.RelPos(id) 243 | if side == 0 { 244 | return nil, throwIdentityError("remove", "from", "leaf set") 245 | } 246 | pos := -1 247 | var n *Node 248 | if side == -1 { 249 | for index, node := range l.left { 250 | if node == nil || node.ID.Equals(id) { 251 | pos = index 252 | n = node 253 | break 254 | } 255 | } 256 | } else { 257 | for index, node := range l.right { 258 | if node == nil || node.ID.Equals(id) { 259 | pos = index 260 | n = node 261 | break 262 | } 263 | } 264 | } 265 | if pos == -1 || (side == -1 && pos > len(l.left)) || (side == 1 && pos > len(l.right)) { 266 | return nil, nodeNotFoundError 267 | } 268 | var slice []*Node 269 | if side == -1 { 270 | if len(l.left) == 1 { 271 | slice = []*Node{} 272 | } else if pos+1 == len(l.left) { 273 | slice = l.left[:pos] 274 | } else if pos == 0 { 275 | slice = l.left[1:] 276 | } else { 277 | slice = append(l.left[:pos], l.left[pos+1:]...) 278 | } 279 | for i, _ := range l.left { 280 | if i < len(slice) { 281 | l.left[i] = slice[i] 282 | } else { 283 | l.left[i] = nil 284 | } 285 | } 286 | } else { 287 | if len(l.right) == 1 { 288 | slice = []*Node{} 289 | } else if pos+1 == len(l.right) { 290 | slice = l.right[:pos] 291 | } else if pos == 0 { 292 | slice = l.right[1:] 293 | } else { 294 | slice = append(l.right[:pos], l.right[pos+1:]...) 
295 | } 296 | for i, _ := range l.right { 297 | if i < len(slice) { 298 | l.right[i] = slice[i] 299 | } else { 300 | l.right[i] = nil 301 | } 302 | } 303 | } 304 | l.self.incrementLSVersion() 305 | return n, nil 306 | } 307 | 308 | func (l *leafSet) debug(format string, v ...interface{}) { 309 | if l.logLevel <= LogLevelDebug { 310 | l.log.Printf(format, v...) 311 | } 312 | } 313 | 314 | func (l *leafSet) warn(format string, v ...interface{}) { 315 | if l.logLevel <= LogLevelWarn { 316 | l.log.Printf(format, v...) 317 | } 318 | } 319 | 320 | func (l *leafSet) err(format string, v ...interface{}) { 321 | if l.logLevel <= LogLevelError { 322 | l.log.Printf(format, v...) 323 | } 324 | } 325 | -------------------------------------------------------------------------------- /leafset_test.go: -------------------------------------------------------------------------------- 1 | package wendy 2 | 3 | import ( 4 | "testing" 5 | ) 6 | 7 | // Test insertion of a node into the leaf set 8 | func TestLeafSetinsertNode(t *testing.T) { 9 | self_id, err := NodeIDFromBytes([]byte("this is a test Node for testing purposes only.")) 10 | if err != nil { 11 | t.Fatalf(err.Error()) 12 | } 13 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 55555) 14 | t.Logf("%s\n", self_id.String()) 15 | 16 | other_id, err := NodeIDFromBytes([]byte("this is some other Node for testing purposes only.")) 17 | if err != nil { 18 | t.Fatalf(err.Error()) 19 | } 20 | other := NewNode(other_id, "127.0.0.2", "127.0.0.2", "testing", 55555) 21 | t.Logf("%s\n", other_id.String()) 22 | t.Logf("Diff: %v\n", self_id.Diff(other_id)) 23 | leafset := newLeafSet(self) 24 | r, err := leafset.insertNode(*other) 25 | if err != nil { 26 | t.Fatalf(err.Error()) 27 | } 28 | if r == nil { 29 | t.Fatalf("Nil response returned.") 30 | } 31 | r2, err := leafset.getNode(other_id) 32 | if err != nil { 33 | t.Fatalf(err.Error()) 34 | } 35 | if r2 == nil { 36 | t.Fatalf("Nil response returned.") 37 | } 38 | if !r2.ID.Equals(other_id) { 39 | t.Fatalf("Expected Node %s, got Node %s instead.", other_id, r2.ID) 40 | } 41 | } 42 | 43 | // Test deleting the only node from the leafset 44 | func TestLeafSetDeleteOnly(t *testing.T) { 45 | self_id, err := NodeIDFromBytes([]byte("this is a test Node for testing purposes only.")) 46 | if err != nil { 47 | t.Fatalf(err.Error()) 48 | } 49 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 55555) 50 | 51 | other_id, err := NodeIDFromBytes([]byte("this is some other Node for testing purposes only.")) 52 | if err != nil { 53 | t.Fatalf(err.Error()) 54 | } 55 | other := NewNode(other_id, "127.0.0.2", "127.0.0.2", "testing", 55555) 56 | leafset := newLeafSet(self) 57 | r, err := leafset.insertNode(*other) 58 | if err != nil { 59 | t.Fatalf(err.Error()) 60 | } 61 | if r == nil { 62 | t.Fatalf("Nil response returned.") 63 | } 64 | _, err = leafset.removeNode(other_id) 65 | if err != nil { 66 | t.Fatalf(err.Error()) 67 | } 68 | r3, err := leafset.getNode(other_id) 69 | if err != nodeNotFoundError { 70 | if err != nil { 71 | t.Fatalf(err.Error()) 72 | } else { 73 | t.Fatal("Expected nodeNotFoundError, got nil error.") 74 | } 75 | } 76 | if r3 != nil { 77 | t.Errorf("Expected nil response, got Node %s instead.", r3.ID) 78 | } 79 | } 80 | 81 | // Test deleting the first of two nodes from the leafset 82 | func TestLeafSetDeleteFirst(t *testing.T) { 83 | self_id, err := NodeIDFromBytes([]byte("1234567890abcdef")) 84 | if err != nil { 85 | t.Fatalf(err.Error()) 86 | } 87 | self := NewNode(self_id, "127.0.0.1", 
"127.0.0.1", "testing", 55555) 88 | 89 | other_id, err := NodeIDFromBytes([]byte("1234557890abcdef")) 90 | if err != nil { 91 | t.Fatalf(err.Error()) 92 | } 93 | other := NewNode(other_id, "127.0.0.2", "127.0.0.2", "testing", 55555) 94 | second_id, err := NodeIDFromBytes([]byte("1234557890abbdef")) 95 | if err != nil { 96 | t.Fatalf(err.Error()) 97 | } 98 | second := NewNode(second_id, "127.0.0.3", "127.0.0.3", "testing", 55555) 99 | first_side := self.ID.RelPos(other_id) 100 | second_side := self.ID.RelPos(second_id) 101 | if first_side != second_side { 102 | t.Fatalf("Expected %v, got %v.", first_side, second_side) 103 | } 104 | leafset := newLeafSet(self) 105 | r, err := leafset.insertNode(*other) 106 | if err != nil { 107 | t.Fatalf(err.Error()) 108 | } 109 | if r == nil { 110 | t.Fatalf("Nil response returned.") 111 | } 112 | r2, err := leafset.insertNode(*second) 113 | if err != nil { 114 | t.Fatal(err.Error()) 115 | } 116 | if r2 == nil { 117 | t.Fatal("Nil response returned.") 118 | } 119 | var firstnode, secondnode *Node 120 | first_dist := self.ID.Diff(other_id) 121 | second_dist := self.ID.Diff(second_id) 122 | if first_dist.Cmp(second_dist) < 0 { 123 | firstnode = r 124 | secondnode = r2 125 | } else { 126 | secondnode = r 127 | firstnode = r2 128 | } 129 | _, err = leafset.removeNode(firstnode.ID) 130 | if err != nil { 131 | t.Fatalf(err.Error()) 132 | } 133 | r3, err := leafset.getNode(firstnode.ID) 134 | if err != nodeNotFoundError { 135 | if err != nil { 136 | t.Fatalf(err.Error()) 137 | } else { 138 | t.Fatal("Expected nodeNotFoundError, got nil error instead.") 139 | } 140 | } 141 | if r3 != nil { 142 | t.Errorf("Expected nil response, got Node %s instead.", r3.ID) 143 | } 144 | r4, err := leafset.getNode(secondnode.ID) 145 | if err != nil { 146 | t.Fatal(err.Error()) 147 | } 148 | if r4 == nil { 149 | t.Fatalf("Got nil response when querying for second insert.") 150 | } 151 | } 152 | 153 | // Test deleting the last of multiple nodes from the leafset 154 | func TestLeafSetDeleteLast(t *testing.T) { 155 | self_id, err := NodeIDFromBytes([]byte("1234567890abcdef")) 156 | if err != nil { 157 | t.Fatalf(err.Error()) 158 | } 159 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 55555) 160 | 161 | other_id, err := NodeIDFromBytes([]byte("1234557890abcdef")) 162 | if err != nil { 163 | t.Fatalf(err.Error()) 164 | } 165 | other := NewNode(other_id, "127.0.0.2", "127.0.0.2", "testing", 55555) 166 | second_id, err := NodeIDFromBytes([]byte("1234557890abbdef")) 167 | if err != nil { 168 | t.Fatalf(err.Error()) 169 | } 170 | second := NewNode(second_id, "127.0.0.3", "127.0.0.3", "testing", 55555) 171 | first_side := self.ID.RelPos(other_id) 172 | second_side := self.ID.RelPos(second_id) 173 | if first_side != second_side { 174 | t.Fatalf("Expected %v, got %v.", first_side, second_side) 175 | } 176 | leafset := newLeafSet(self) 177 | r, err := leafset.insertNode(*other) 178 | if err != nil { 179 | t.Fatalf(err.Error()) 180 | } 181 | if r == nil { 182 | t.Fatalf("Nil response returned.") 183 | } 184 | r2, err := leafset.insertNode(*second) 185 | if err != nil { 186 | t.Fatal(err.Error()) 187 | } 188 | if r2 == nil { 189 | t.Fatal("Nil response returned.") 190 | } 191 | var firstnode, secondnode *Node 192 | first_dist := self.ID.Diff(other_id) 193 | second_dist := self.ID.Diff(second_id) 194 | if first_dist.Cmp(second_dist) < 0 { 195 | firstnode = r 196 | secondnode = r2 197 | } else { 198 | secondnode = r 199 | firstnode = r2 200 | } 201 | _, err = 
leafset.removeNode(secondnode.ID) 202 | if err != nil { 203 | t.Fatalf(err.Error()) 204 | } 205 | r3, err := leafset.getNode(secondnode.ID) 206 | if err != nodeNotFoundError { 207 | if err != nil { 208 | t.Fatalf(err.Error()) 209 | } else { 210 | t.Fatal("Expected nodeNotFoundError, got nil error instead.") 211 | } 212 | } 213 | if r3 != nil { 214 | t.Errorf("Expected nil response, got Node %s instead.", r3.ID) 215 | } 216 | r4, err := leafset.getNode(firstnode.ID) 217 | if err != nil { 218 | t.Fatal(err.Error()) 219 | } 220 | if r4 == nil { 221 | t.Fatalf("Got nil response when querying for first insert.") 222 | } 223 | } 224 | 225 | // Test deleting the middle of multiple nodes from the leafset 226 | func TestLeafSetDeleteMiddle(t *testing.T) { 227 | self_id, err := NodeIDFromBytes([]byte("1234567890abcdef")) 228 | if err != nil { 229 | t.Fatalf(err.Error()) 230 | } 231 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 55555) 232 | 233 | first_id, err := NodeIDFromBytes([]byte("1234557890abcdef")) 234 | if err != nil { 235 | t.Fatalf(err.Error()) 236 | } 237 | first := NewNode(first_id, "127.0.0.2", "127.0.0.2", "testing", 55555) 238 | second_id, err := NodeIDFromBytes([]byte("1234557890abbdef")) 239 | if err != nil { 240 | t.Fatalf(err.Error()) 241 | } 242 | second := NewNode(second_id, "127.0.0.3", "127.0.0.3", "testing", 55555) 243 | third_id, err := NodeIDFromBytes([]byte("1234557890accdef")) 244 | if err != nil { 245 | t.Fatalf(err.Error()) 246 | } 247 | third := NewNode(third_id, "127.0.0.4", "127.0.0.4", "testing", 55555) 248 | first_side := self.ID.RelPos(first_id) 249 | second_side := self.ID.RelPos(second_id) 250 | third_side := self.ID.RelPos(third_id) 251 | if first_side != second_side || second_side != third_side { 252 | t.Fatalf("Nodes not all on same side. 
%v, %v, %v", first_side, second_side, third_side) 253 | } 254 | leafset := newLeafSet(self) 255 | r, err := leafset.insertNode(*first) 256 | if err != nil { 257 | t.Fatalf(err.Error()) 258 | } 259 | if r == nil { 260 | t.Fatalf("Nil response returned.") 261 | } 262 | r2, err := leafset.insertNode(*second) 263 | if err != nil { 264 | t.Fatal(err.Error()) 265 | } 266 | if r2 == nil { 267 | t.Fatal("Nil response returned.") 268 | } 269 | r3, err := leafset.insertNode(*third) 270 | if err != nil { 271 | t.Fatal(err.Error()) 272 | } 273 | if r3 == nil { 274 | t.Fatal("Nil response returned.") 275 | } 276 | var zero, one, two NodeID 277 | first_dist := self.ID.Diff(first_id) 278 | second_dist := self.ID.Diff(second_id) 279 | third_dist := self.ID.Diff(third_id) 280 | if first_dist.Cmp(second_dist) < 0 && first_dist.Cmp(third_dist) < 0 { 281 | zero = first_id 282 | if second_dist.Cmp(third_dist) < 0 { 283 | one = second_id 284 | two = third_id 285 | } else { 286 | one = third_id 287 | two = second_id 288 | } 289 | } else if first_dist.Cmp(second_dist) < 0 && first_dist.Cmp(third_dist) > 0 { 290 | zero = third_id 291 | one = first_id 292 | two = second_id 293 | } else if first_dist.Cmp(second_dist) > 0 && first_dist.Cmp(third_dist) < 0 { 294 | zero = second_id 295 | one = first_id 296 | two = third_id 297 | } else { 298 | if second_dist.Cmp(third_dist) < 0 { 299 | zero = second_id 300 | one = third_id 301 | two = first_id 302 | } else { 303 | zero = third_id 304 | one = second_id 305 | two = first_id 306 | } 307 | } 308 | r4, err := leafset.removeNode(one) 309 | if err != nil { 310 | t.Fatalf(err.Error()) 311 | } 312 | if r4 == nil { 313 | t.Fatal("Expected node, got nil instead.") 314 | } 315 | r5, err := leafset.getNode(one) 316 | if err != nodeNotFoundError { 317 | if err != nil { 318 | t.Fatalf(err.Error()) 319 | } else { 320 | t.Fatal("Expected nodeNotFoundError, got nil error.") 321 | } 322 | } 323 | if r5 != nil { 324 | t.Errorf("Expected nil response, got Node %s instead.", r5.ID) 325 | } 326 | r6, err := leafset.getNode(zero) 327 | if err != nil { 328 | t.Fatal(err.Error()) 329 | } 330 | if r6 == nil { 331 | t.Fatalf("Got nil response when querying for first insert.") 332 | } 333 | r7, err := leafset.getNode(two) 334 | if err != nil { 335 | t.Fatal(err.Error()) 336 | } 337 | if r7 == nil { 338 | t.Fatalf("Got nil response when querying for third insert.") 339 | } 340 | } 341 | 342 | // Test scanning the leafset when the key falls in between two nodes 343 | func TestLeafSetScanSplit(t *testing.T) { 344 | self_id, err := NodeIDFromBytes([]byte("1234560890abcdef")) 345 | if err != nil { 346 | t.Fatal(err.Error()) 347 | } 348 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 55555) 349 | 350 | leafset := newLeafSet(self) 351 | 352 | first_id, err := NodeIDFromBytes([]byte("12345677890abcde")) 353 | if err != nil { 354 | t.Fatal(err.Error()) 355 | } 356 | first := NewNode(first_id, "127.0.0.2", "127.0.0.2", "testing", 55555) 357 | r, err := leafset.insertNode(*first) 358 | if err != nil { 359 | t.Fatal(err.Error()) 360 | } 361 | if r == nil { 362 | t.Fatal("First insert returned nil.") 363 | } 364 | second_id, err := NodeIDFromBytes([]byte("12345637890abcde")) 365 | if err != nil { 366 | t.Fatal(err.Error()) 367 | } 368 | second := NewNode(second_id, "127.0.0.3", "127.0.0.3", "testing", 55555) 369 | r2, err := leafset.insertNode(*second) 370 | if err != nil { 371 | t.Fatal(err.Error()) 372 | } 373 | if r2 == nil { 374 | t.Fatal("Second insert returned nil") 375 | } 376 | 
first_side := self.ID.RelPos(first_id) 377 | second_side := self.ID.RelPos(second_id) 378 | if first_side != second_side { 379 | t.Fatalf("Nodes not inserted on the same side. %v vs. %v.", first_side, second_side) 380 | } 381 | message_id, err := NodeIDFromBytes([]byte("12345657890abcde")) 382 | if err != nil { 383 | t.Fatal(err.Error()) 384 | } 385 | msg_side := self.ID.RelPos(message_id) 386 | if msg_side != first_side { 387 | t.Fatalf("Message not on the same side as the nodes. %v vs. %v.", msg_side, first_side) 388 | } 389 | d1 := message_id.Diff(first_id) 390 | d2 := message_id.Diff(second_id) 391 | if d1.Cmp(d2) != 0 { 392 | t.Fatalf("IDs not equidistant. Expected %v, got %v.", d1, d2) 393 | } 394 | if !second_id.Less(first_id) { 395 | t.Fatalf("%v is not lower than %v.", second_id.Base10(), first_id.Base10()) 396 | } 397 | r3, err := leafset.route(message_id) 398 | if err != nil { 399 | t.Fatal(err.Error()) 400 | } 401 | if r3 == nil { 402 | t.Fatal("Scan returned nil.") 403 | } 404 | if !second_id.Equals(r3.ID) { 405 | t.Errorf("Wrong Node returned. Expected %s, got %s.", second_id, r3.ID) 406 | } 407 | } 408 | 409 | // Test routing to the only node in the leafset 410 | func TestLeafSetRouteOnly(t *testing.T) { 411 | self_id, err := NodeIDFromBytes([]byte("1234567890abcdeg")) 412 | if err != nil { 413 | t.Fatal(err.Error()) 414 | } 415 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 55555) 416 | 417 | leafset := newLeafSet(self) 418 | 419 | first_id, err := NodeIDFromBytes([]byte("1234567890acdefg")) 420 | if err != nil { 421 | t.Fatal(err.Error()) 422 | } 423 | first := NewNode(first_id, "127.0.0.2", "127.0.0.2", "testing", 55555) 424 | r, err := leafset.insertNode(*first) 425 | if err != nil { 426 | t.Fatal(err.Error()) 427 | } 428 | if r == nil { 429 | t.Fatal("Insert returned nil.") 430 | } 431 | message_id, err := NodeIDFromBytes([]byte("1234567890acdeff")) 432 | if err != nil { 433 | t.Fatal(err.Error()) 434 | } 435 | msg_side := self.ID.RelPos(message_id) 436 | first_side := self.ID.RelPos(first_id) 437 | if msg_side != first_side { 438 | t.Fatalf("Message and node not on same side. %v vs. 
%v.", msg_side, first_side) 439 | } 440 | r3, err := leafset.route(message_id) 441 | if err != nil { 442 | t.Fatal(err.Error()) 443 | } 444 | if r3 == nil { 445 | t.Fatal("Route returned nil.") 446 | } 447 | if !r3.ID.Equals(first_id) { 448 | t.Fatalf("Expected Node %s, got Node %s instead.", first_id, r3.ID) 449 | } 450 | } 451 | 452 | // Test routing to a direct match in the leafset 453 | func TestLeafSetRouteMatch(t *testing.T) { 454 | self_id, err := NodeIDFromBytes([]byte("1234567890abcdeg")) 455 | if err != nil { 456 | t.Fatal(err.Error()) 457 | } 458 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 55555) 459 | 460 | leafset := newLeafSet(self) 461 | 462 | first_id, err := NodeIDFromBytes([]byte("1234567890acdefg")) 463 | if err != nil { 464 | t.Fatal(err.Error()) 465 | } 466 | first := NewNode(first_id, "127.0.0.2", "127.0.0.2", "testing", 55555) 467 | r, err := leafset.insertNode(*first) 468 | if err != nil { 469 | t.Fatal(err.Error()) 470 | } 471 | if r == nil { 472 | t.Fatal("Insert returned nil.") 473 | } 474 | message_id, err := NodeIDFromBytes([]byte("1234567890acdefg")) 475 | if err != nil { 476 | t.Fatal(err.Error()) 477 | } 478 | if !message_id.Equals(first_id) { 479 | t.Fatalf("Expected ID of %s, got %s instead.", first_id, message_id) 480 | } 481 | r3, err := leafset.route(message_id) 482 | if err != nil { 483 | t.Fatal(err.Error()) 484 | } 485 | if r3 == nil { 486 | t.Fatal("Route returned nil.") 487 | } 488 | if !r3.ID.Equals(first_id) { 489 | t.Fatalf("Expected Node %s, got Node %s instead.", first_id, r3.ID) 490 | } 491 | } 492 | 493 | // Test routing when the message is not within the leafset 494 | func TestLeafSetRouteNoneContained(t *testing.T) { 495 | self_id, err := NodeIDFromBytes([]byte("1234567890abcdeg")) 496 | if err != nil { 497 | t.Fatal(err.Error()) 498 | } 499 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 55555) 500 | 501 | leafset := newLeafSet(self) 502 | 503 | first_id, err := NodeIDFromBytes([]byte("1234567890abcdeh")) 504 | if err != nil { 505 | t.Fatal(err.Error()) 506 | } 507 | first := NewNode(first_id, "127.0.0.2", "127.0.0.2", "testing", 55555) 508 | r, err := leafset.insertNode(*first) 509 | if err != nil { 510 | t.Fatal(err.Error()) 511 | } 512 | if r == nil { 513 | t.Fatal("Insert returned nil.") 514 | } 515 | message_id, err := NodeIDFromBytes([]byte("123456789abcdefg")) 516 | if err != nil { 517 | t.Fatal(err.Error()) 518 | } 519 | r3, err := leafset.route(message_id) 520 | if err != nodeNotFoundError { 521 | if err != nil { 522 | t.Fatal(err.Error()) 523 | } else { 524 | t.Fatal("Expected nodeNotFoundError, got nil error instead.") 525 | } 526 | } 527 | if r3 != nil { 528 | t.Fatalf("Expected nil result, got %s instead.", r3.ID) 529 | } 530 | } 531 | 532 | // Test routing when there are no nodes in the leafset closer than the current node 533 | func TestLeafSetRouteNoneCloser(t *testing.T) { 534 | self_id, err := NodeIDFromBytes([]byte("1234567890abcdef")) 535 | if err != nil { 536 | t.Fatal(err.Error()) 537 | } 538 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 55555) 539 | 540 | leafset := newLeafSet(self) 541 | 542 | first_id, err := NodeIDFromBytes([]byte("1234567890abcdez")) 543 | if err != nil { 544 | t.Fatal(err.Error()) 545 | } 546 | first := NewNode(first_id, "127.0.0.2", "127.0.0.2", "testing", 55555) 547 | r, err := leafset.insertNode(*first) 548 | if err != nil { 549 | t.Fatal(err.Error()) 550 | } 551 | if r == nil { 552 | t.Fatal("Insert returned nil.") 553 | } 554 | 
message_id, err := NodeIDFromBytes([]byte("1234567890abcdeg")) 555 | if err != nil { 556 | t.Fatal(err.Error()) 557 | } 558 | self_diff := self_id.Diff(message_id) 559 | node_diff := first_id.Diff(message_id) 560 | node_closer := self_diff.Cmp(node_diff) == 1 561 | if node_closer { 562 | t.Fatalf("Node is closer.") 563 | } 564 | r3, err := leafset.route(message_id) 565 | if err != nil { 566 | if _, ok := err.(IdentityError); !ok { 567 | t.Fatal(err.Error()) 568 | } 569 | } else { 570 | t.Fatal("Expected an IdentityError, but got a nil error instead.") 571 | } 572 | if r3 != nil { 573 | t.Fatalf("Expected nil result, got %s instead.", r3.ID) 574 | } 575 | } 576 | 577 | ////////////////////////////////////////////////////////////////////////// 578 | ////////////////////////// Benchmarks //////////////////////////////////// 579 | ////////////////////////////////////////////////////////////////////////// 580 | 581 | // How fast can we insert nodes 582 | func BenchmarkLeafSetInsert(b *testing.B) { 583 | b.StopTimer() 584 | selfId, err := NodeIDFromBytes([]byte("this is a test Node for testing purposes only.")) 585 | if err != nil { 586 | b.Fatalf(err.Error()) 587 | } 588 | self := NewNode(selfId, "127.0.0.1", "127.0.0.1", "testing", 55555) 589 | 590 | leafset := newLeafSet(self) 591 | benchRand.Seed(randSeed) 592 | 593 | b.StartTimer() 594 | for i := 0; i < b.N; i++ { 595 | otherId := randomNodeID() 596 | other := *NewNode(otherId, "127.0.0.1", "127.0.0.2", "testing", 55555) 597 | _, err = leafset.insertNode(other) 598 | } 599 | } 600 | 601 | // How fast can we retrieve nodes by ID 602 | func BenchmarkLeafSetGetByID(b *testing.B) { 603 | b.StopTimer() 604 | selfId, err := NodeIDFromBytes([]byte("this is a test Node for testing purposes only.")) 605 | if err != nil { 606 | b.Fatalf(err.Error()) 607 | } 608 | self := NewNode(selfId, "127.0.0.1", "127.0.0.1", "testing", 55555) 609 | 610 | leafset := newLeafSet(self) 611 | benchRand.Seed(randSeed) 612 | 613 | otherId := randomNodeID() 614 | other := *NewNode(otherId, "127.0.0.2", "127.0.0.2", "testing", 55555) 615 | _, err = leafset.insertNode(other) 616 | if err != nil { 617 | b.Fatalf(err.Error()) 618 | } 619 | 620 | b.StartTimer() 621 | for i := 0; i < b.N; i++ { 622 | leafset.getNode(other.ID) 623 | } 624 | } 625 | 626 | var benchLeafSet *leafSet 627 | 628 | func initBenchLeafSet(b *testing.B) { 629 | selfId, err := NodeIDFromBytes([]byte("this is a test Node for testing purposes only.")) 630 | if err != nil { 631 | b.Fatalf(err.Error()) 632 | } 633 | self := NewNode(selfId, "127.0.0.1", "127.0.0.1", "testing", 55555) 634 | benchLeafSet = newLeafSet(self) 635 | benchRand.Seed(randSeed) 636 | 637 | for i := 0; i < 100000; i++ { 638 | id := randomNodeID() 639 | node := NewNode(id, "127.0.0.1", "127.0.0.1", "testing", 55555) 640 | _, err = benchLeafSet.insertNode(*node) 641 | if err != nil { 642 | b.Fatal(err.Error()) 643 | } 644 | } 645 | } 646 | 647 | // How fast can we route messages 648 | func BenchmarkLeafSetRoute(b *testing.B) { 649 | b.StopTimer() 650 | if benchLeafSet == nil { 651 | initBenchLeafSet(b) 652 | } 653 | benchRand.Seed(randSeed) 654 | b.StartTimer() 655 | 656 | for i := 0; i < b.N; i++ { 657 | id := randomNodeID() 658 | _, err := benchLeafSet.route(id) 659 | if err != nil && err != nodeNotFoundError { 660 | if _, ok := err.(IdentityError); !ok { 661 | b.Fatalf(err.Error()) 662 | } 663 | } 664 | } 665 | } 666 | 667 | // How fast can we dump the leafset 668 | func BenchmarkLeafSetDump(b *testing.B) { 669 | b.StopTimer() 670 | if 
benchLeafSet == nil { 671 | initBenchLeafSet(b) 672 | } 673 | benchRand.Seed(randSeed) 674 | b.StartTimer() 675 | 676 | for i := 0; i < b.N; i++ { 677 | benchLeafSet.list() 678 | } 679 | } 680 | 681 | // How fast can we export the leafset 682 | func BenchmarkLeafSetExport(b *testing.B) { 683 | b.StopTimer() 684 | if benchLeafSet == nil { 685 | initBenchLeafSet(b) 686 | } 687 | benchRand.Seed(randSeed) 688 | b.StartTimer() 689 | 690 | for i := 0; i < b.N; i++ { 691 | benchLeafSet.export() 692 | } 693 | } 694 | -------------------------------------------------------------------------------- /message.go: -------------------------------------------------------------------------------- 1 | package wendy 2 | 3 | // Message represents the messages that are sent through the cluster of Nodes 4 | type Message struct { 5 | Purpose byte 6 | Sender Node // The Node a message originated at 7 | Key NodeID // The message's ID 8 | Value []byte // The message being passed 9 | Credentials []byte // The Credentials used to authenticate the Message 10 | LSVersion uint64 // The version of the leaf set, for join messages 11 | RTVersion uint64 // The version of the routing table, for join messages 12 | NSVersion uint64 // The version of the neighborhood set, for join messages 13 | Hop int // The number of hops the message has taken 14 | } 15 | 16 | const ( 17 | NODE_JOIN = byte(iota) // Used when a Node wishes to join the cluster 18 | NODE_EXIT // Used when a Node leaves the cluster 19 | HEARTBEAT // Used when a Node is being tested 20 | STAT_DATA // Used when a Node broadcasts state info 21 | STAT_REQ // Used when a Node is requesting state info 22 | NODE_RACE // Used when a Node hits a race condition 23 | NODE_REPR // Used when a Node needs to repair its LeafSet 24 | NODE_ANN // Used when a Node broadcasts its presence 25 | ) 26 | 27 | // String returns a string representation of a message. 
28 | func (m *Message) String() string { 29 | return m.Key.String() + ": " + string(m.Value) 30 | } 31 | 32 | func (c *Cluster) NewMessage(purpose byte, key NodeID, value []byte) Message { 33 | var credentials []byte 34 | if c.credentials != nil { 35 | credentials = c.credentials.Marshal() 36 | } 37 | return Message{ 38 | Purpose: purpose, 39 | Sender: *c.self, 40 | Key: key, 41 | Value: value, 42 | Credentials: credentials, 43 | LSVersion: c.self.leafsetVersion, 44 | RTVersion: c.self.routingTableVersion, 45 | NSVersion: c.self.neighborhoodSetVersion, 46 | Hop: 0, 47 | } 48 | } 49 | -------------------------------------------------------------------------------- /neighborhood.go: -------------------------------------------------------------------------------- 1 | package wendy 2 | 3 | import ( 4 | "errors" 5 | "log" 6 | "os" 7 | "sync" 8 | ) 9 | 10 | type neighborhoodSet struct { 11 | self *Node 12 | nodes [32]*Node 13 | log *log.Logger 14 | logLevel int 15 | lock *sync.RWMutex 16 | } 17 | 18 | func newNeighborhoodSet(self *Node) *neighborhoodSet { 19 | return &neighborhoodSet{ 20 | self: self, 21 | nodes: [32]*Node{}, 22 | log: log.New(os.Stdout, "wendy#neighborhoodSet("+self.ID.String()+")", log.LstdFlags), 23 | logLevel: LogLevelWarn, 24 | lock: new(sync.RWMutex), 25 | } 26 | } 27 | 28 | var nsDuplicateInsertError = errors.New("Node already exists in neighborhood set.") 29 | 30 | func (n *neighborhoodSet) insertNode(node Node, proximity int64) (*Node, error) { 31 | return n.insertValues(node.ID, node.LocalIP, node.GlobalIP, node.Region, node.Port, node.routingTableVersion, node.leafsetVersion, node.neighborhoodSetVersion, proximity) 32 | } 33 | 34 | func (n *neighborhoodSet) insertValues(id NodeID, localIP, globalIP, region string, port int, rTVersion, lSVersion, nSVersion uint64, proximity int64) (*Node, error) { 35 | n.lock.Lock() 36 | defer n.lock.Unlock() 37 | if id.Equals(n.self.ID) { 38 | return nil, throwIdentityError("insert", "into", "neighborhood set") 39 | } 40 | insertNode := NewNode(id, localIP, globalIP, region, port) 41 | insertNode.updateVersions(rTVersion, lSVersion, nSVersion) 42 | insertNode.setProximity(proximity) 43 | newNS := [32]*Node{} 44 | newNSpos := 0 45 | score := n.self.Proximity(insertNode) 46 | inserted := false 47 | dup := false 48 | for _, node := range n.nodes { 49 | if newNSpos > 31 { 50 | break 51 | } 52 | if node == nil && !inserted && !dup { 53 | newNS[newNSpos] = insertNode 54 | newNSpos++ 55 | inserted = true 56 | continue 57 | } 58 | if node != nil && insertNode.ID.Equals(node.ID) { 59 | insertNode.updateVersions(node.routingTableVersion, node.leafsetVersion, node.neighborhoodSetVersion) 60 | newNS[newNSpos] = insertNode 61 | newNSpos++ 62 | dup = true 63 | continue 64 | } 65 | if node != nil && n.self.Proximity(node) > score && !inserted && !dup { 66 | newNS[newNSpos] = insertNode 67 | newNSpos++ 68 | inserted = true 69 | continue 70 | } 71 | if newNSpos <= 31 { 72 | newNS[newNSpos] = node 73 | newNSpos++ 74 | } 75 | } 76 | n.nodes = newNS 77 | if dup { 78 | return nil, nsDuplicateInsertError 79 | } 80 | if inserted { 81 | n.self.incrementNSVersion() 82 | return insertNode, nil 83 | } 84 | return nil, nil 85 | } 86 | 87 | func (n *neighborhoodSet) getNode(id NodeID) (*Node, error) { 88 | n.lock.RLock() 89 | defer n.lock.RUnlock() 90 | if id.Equals(n.self.ID) { 91 | return nil, throwIdentityError("get", "from", "neighborhood set") 92 | } 93 | for _, node := range n.nodes { 94 | if node == nil { 95 | break 96 | } 97 | if id.Equals(node.ID) { 98 | 
return node, nil 99 | } 100 | } 101 | return nil, nodeNotFoundError 102 | } 103 | 104 | func (n *neighborhoodSet) export() [32]*Node { 105 | n.lock.RLock() 106 | defer n.lock.RUnlock() 107 | return n.nodes 108 | } 109 | 110 | func (n *neighborhoodSet) list() []*Node { 111 | n.lock.RLock() 112 | defer n.lock.RUnlock() 113 | nodes := []*Node{} 114 | for _, node := range n.nodes { 115 | if node != nil { 116 | nodes = append(nodes, node) 117 | } 118 | } 119 | return nodes 120 | } 121 | 122 | func (n *neighborhoodSet) removeNode(id NodeID) (*Node, error) { 123 | n.lock.Lock() 124 | defer n.lock.Unlock() 125 | if id.Equals(n.self.ID) { 126 | return nil, throwIdentityError("remove", "from", "neighborhood set") 127 | } 128 | pos := -1 129 | var node *Node 130 | for index, entry := range n.nodes { 131 | if entry == nil || entry.ID.Equals(id) { 132 | pos = index 133 | node = entry 134 | break 135 | } 136 | } 137 | if pos == -1 || pos > len(n.nodes) { 138 | return nil, nodeNotFoundError 139 | } 140 | var slice []*Node 141 | if len(n.nodes) == 1 { 142 | slice = []*Node{} 143 | } else if pos+1 == len(n.nodes) { 144 | slice = n.nodes[:pos] 145 | } else if pos == 0 { 146 | slice = n.nodes[1:] 147 | } else { 148 | slice = append(n.nodes[:pos], n.nodes[pos+1:]...) 149 | } 150 | for i, _ := range n.nodes { 151 | if i < len(slice) { 152 | n.nodes[i] = slice[i] 153 | } else { 154 | n.nodes[i] = nil 155 | } 156 | } 157 | n.self.incrementNSVersion() 158 | return node, nil 159 | } 160 | 161 | func (n *neighborhoodSet) debug(format string, v ...interface{}) { 162 | if n.logLevel <= LogLevelDebug { 163 | n.log.Printf(format, v...) 164 | } 165 | } 166 | 167 | func (n *neighborhoodSet) warn(format string, v ...interface{}) { 168 | if n.logLevel <= LogLevelWarn { 169 | n.log.Printf(format, v...) 170 | } 171 | } 172 | 173 | func (n *neighborhoodSet) err(format string, v ...interface{}) { 174 | if n.logLevel <= LogLevelError { 175 | n.log.Printf(format, v...) 
176 | }
177 | }
178 | 
--------------------------------------------------------------------------------
/neighborhood_test.go:
--------------------------------------------------------------------------------
1 | package wendy
2 | 
3 | import (
4 | "testing"
5 | )
6 | 
7 | // Test insertion of a node into the neighborhood set
8 | func TestNeighborhoodSetInsertNode(t *testing.T) {
9 | self_id, err := NodeIDFromBytes([]byte("this is just a test Node for testing purposes only."))
10 | if err != nil {
11 | t.Fatalf(err.Error())
12 | }
13 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 0)
14 | t.Logf("%s\n", self_id.String())
15 | 
16 | other_id, err := NodeIDFromBytes([]byte("this is some other Node for testing purposes only."))
17 | if err != nil {
18 | t.Fatalf(err.Error())
19 | }
20 | other := NewNode(other_id, "127.0.0.2", "127.0.0.2", "testing", 0)
21 | t.Logf("%s\n", other_id.String())
22 | neighborhood := newNeighborhoodSet(self)
23 | r, err := neighborhood.insertNode(*other, 0)
24 | if err != nil {
25 | t.Fatalf(err.Error())
26 | }
27 | if r == nil {
28 | t.Fatalf("Nil response returned.")
29 | }
30 | r2, err := neighborhood.getNode(other_id)
31 | if err != nil {
32 | t.Fatalf(err.Error())
33 | }
34 | if r2 == nil {
35 | t.Fatalf("Nil response returned.")
36 | }
37 | if !r2.ID.Equals(other_id) {
38 | t.Fatalf("Expected Node %s, got Node %s instead.", other_id, r2.ID)
39 | }
40 | }
41 | 
42 | // Test deleting the only node from the neighborhood set
43 | func TestNeighborhoodSetDeleteOnly(t *testing.T) {
44 | self_id, err := NodeIDFromBytes([]byte("this is just a test Node for testing purposes only."))
45 | if err != nil {
46 | t.Fatalf(err.Error())
47 | }
48 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 0)
49 | t.Logf("%s\n", self_id.String())
50 | 
51 | other_id, err := NodeIDFromBytes([]byte("this is some other Node for testing purposes only."))
52 | if err != nil {
53 | t.Fatalf(err.Error())
54 | }
55 | other := NewNode(other_id, "127.0.0.2", "127.0.0.2", "testing", 0)
56 | t.Logf("%s\n", other_id.String())
57 | neighborhood := newNeighborhoodSet(self)
58 | r, err := neighborhood.insertNode(*other, 0)
59 | if err != nil {
60 | t.Fatalf(err.Error())
61 | }
62 | if r == nil {
63 | t.Fatalf("Nil response returned.")
64 | }
65 | _, err = neighborhood.removeNode(other_id)
66 | if err != nil {
67 | t.Fatalf(err.Error())
68 | }
69 | r3, err := neighborhood.getNode(other_id)
70 | if err != nodeNotFoundError {
71 | if err != nil {
72 | t.Fatalf(err.Error())
73 | } else {
74 | t.Fatal("Expected nodeNotFoundError, got nil error.")
75 | }
76 | }
77 | if r3 != nil {
78 | t.Errorf("Expected nil response, got Node %s instead.", r3.ID)
79 | }
80 | }
81 | 
82 | // Test deleting the first of two nodes from the neighborhood set
83 | func TestNeighborhoodSetDeleteFirst(t *testing.T) {
84 | self_id, err := NodeIDFromBytes([]byte("this is just a test Node for testing purposes only."))
85 | if err != nil {
86 | t.Fatalf(err.Error())
87 | }
88 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 0)
89 | t.Logf("%s\n", self_id.String())
90 | 
91 | other_id, err := NodeIDFromBytes([]byte("this is some other Node for testing purposes only."))
92 | if err != nil {
93 | t.Fatalf(err.Error())
94 | }
95 | other := NewNode(other_id, "127.0.0.2", "127.0.0.2", "testing", 0)
96 | t.Logf("%s\n", other_id.String())
97 | neighborhood := newNeighborhoodSet(self)
98 | r, err := neighborhood.insertNode(*other, 0)
99 | if err != nil {
100 | t.Fatalf(err.Error())
101 | }
102 | if r == nil {
103 | t.Fatalf("Nil response returned.")
104 | }
105 | second_id, err := NodeIDFromBytes([]byte("just a third Node for testing purposes only."))
106 | if err != nil {
107 | t.Fatalf(err.Error())
108 | }
109 | second := NewNode(second_id, "127.0.0.3", "127.0.0.3", "testing", 0)
110 | r = nil
111 | r, err = neighborhood.insertNode(*second, 10)
112 | if err != nil {
113 | t.Fatal(err.Error())
114 | }
115 | if r == nil {
116 | t.Fatal("Nil response returned")
117 | }
118 | _, err = neighborhood.removeNode(other_id)
119 | if err != nil {
120 | t.Fatal(err.Error())
121 | }
122 | r = nil
123 | r, err = neighborhood.getNode(other_id)
124 | if err != nodeNotFoundError {
125 | if err != nil {
126 | t.Fatal(err.Error())
127 | } else {
128 | t.Fatal("Expected nodeNotFoundError, got nil error instead.")
129 | }
130 | }
131 | if r != nil {
132 | t.Errorf("Expected nil response, got Node %s instead.", r.ID)
133 | }
134 | r = nil
135 | r, err = neighborhood.getNode(second_id)
136 | if err != nil {
137 | t.Fatal(err.Error())
138 | }
139 | if r == nil {
140 | t.Fatalf("Got nil response when I expected to get Node %s", second_id)
141 | }
142 | if !r.ID.Equals(second_id) {
143 | t.Fatalf("Expected %s, got %s.", second_id, r.ID)
144 | }
145 | }
146 | 
147 | // Test deleting the last of two nodes from the neighborhood set
148 | func TestNeighborhoodSetDeleteLast(t *testing.T) {
149 | self_id, err := NodeIDFromBytes([]byte("this is just a test Node for testing purposes only."))
150 | if err != nil {
151 | t.Fatalf(err.Error())
152 | }
153 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 0)
154 | t.Logf("%s\n", self_id.String())
155 | 
156 | other_id, err := NodeIDFromBytes([]byte("this is some other Node for testing purposes only."))
157 | if err != nil {
158 | t.Fatalf(err.Error())
159 | }
160 | other := NewNode(other_id, "127.0.0.2", "127.0.0.2", "testing", 0)
161 | t.Logf("%s\n", other_id.String())
162 | neighborhood := newNeighborhoodSet(self)
163 | r, err := neighborhood.insertNode(*other, 10)
164 | if err != nil {
165 | t.Fatalf(err.Error())
166 | }
167 | if r == nil {
168 | t.Fatalf("Nil response returned.")
169 | }
170 | second_id, err := NodeIDFromBytes([]byte("just a third Node for testing purposes only."))
171 | if err != nil {
172 | t.Fatalf(err.Error())
173 | }
174 | second := NewNode(second_id, "127.0.0.3", "127.0.0.3", "testing", 0)
175 | r = nil
176 | r, err = neighborhood.insertNode(*second, 0)
177 | if err != nil {
178 | t.Fatal(err.Error())
179 | }
180 | if r == nil {
181 | t.Fatal("Nil response returned")
182 | }
183 | _, err = neighborhood.removeNode(other_id)
184 | if err != nil {
185 | t.Fatal(err.Error())
186 | }
187 | r = nil
188 | r, err = neighborhood.getNode(other_id)
189 | if err != nodeNotFoundError {
190 | if err != nil {
191 | t.Fatal(err.Error())
192 | } else {
193 | t.Fatal("Expected nodeNotFoundError, got nil error instead.")
194 | }
195 | }
196 | if r != nil {
197 | t.Errorf("Expected nil response, got Node %s instead.", r.ID)
198 | }
199 | r = nil
200 | r, err = neighborhood.getNode(second_id)
201 | if err != nil {
202 | t.Fatal(err.Error())
203 | }
204 | if r == nil {
205 | t.Fatalf("Got nil response when I expected to get Node %s", second_id)
206 | }
207 | if !r.ID.Equals(second_id) {
208 | t.Fatalf("Expected %s, got %s.", second_id, r.ID)
209 | }
210 | }
211 | 
212 | // Test deleting the middle of three nodes from the neighborhood set
213 | func TestNeighborhoodSetDeleteMiddle(t *testing.T) {
214 | self_id, err := NodeIDFromBytes([]byte("this is just a test Node for testing purposes only."))
215 | if err != nil {
216 | t.Fatalf(err.Error())
217 | }
218 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 0)
219 | t.Logf("%s\n", self_id.String())
220 | 
221 | other_id, err := NodeIDFromBytes([]byte("this is some other Node for testing purposes only."))
222 | if err != nil {
223 | t.Fatalf(err.Error())
224 | }
225 | other := NewNode(other_id, "127.0.0.2", "127.0.0.2", "testing", 0)
226 | t.Logf("%s\n", other_id.String())
227 | neighborhood := newNeighborhoodSet(self)
228 | r, err := neighborhood.insertNode(*other, 0)
229 | if err != nil {
230 | t.Fatalf(err.Error())
231 | }
232 | if r == nil {
233 | t.Fatalf("Nil response returned.")
234 | }
235 | second_id, err := NodeIDFromBytes([]byte("just a third Node for testing purposes only."))
236 | if err != nil {
237 | t.Fatalf(err.Error())
238 | }
239 | second := NewNode(second_id, "127.0.0.3", "127.0.0.3", "testing", 0)
240 | r = nil
241 | r, err = neighborhood.insertNode(*second, 10)
242 | if err != nil {
243 | t.Fatal(err.Error())
244 | }
245 | if r == nil {
246 | t.Fatal("Nil response returned")
247 | }
248 | third_id, err := NodeIDFromBytes([]byte("just a fourth Node for testing purposes only."))
249 | if err != nil {
250 | t.Fatalf(err.Error())
251 | }
252 | third := NewNode(third_id, "127.0.0.4", "127.0.0.4", "testing", 0)
253 | r = nil
254 | r, err = neighborhood.insertNode(*third, 20)
255 | if err != nil {
256 | t.Fatal(err.Error())
257 | }
258 | if r == nil {
259 | t.Fatal("Nil response returned")
260 | }
261 | _, err = neighborhood.removeNode(second_id)
262 | if err != nil {
263 | t.Fatal(err.Error())
264 | }
265 | r = nil
266 | r, err = neighborhood.getNode(second_id)
267 | if err != nodeNotFoundError {
268 | if err != nil {
269 | t.Fatal(err.Error())
270 | } else {
271 | t.Fatal("Expected nodeNotFoundError, got nil error instead.")
272 | }
273 | }
274 | if r != nil {
275 | t.Errorf("Expected nil response, got Node %s instead.", r.ID)
276 | }
277 | r = nil
278 | r, err = neighborhood.getNode(other_id)
279 | if err != nil {
280 | t.Fatal(err.Error())
281 | }
282 | if r == nil {
283 | t.Fatal("Got nil response when querying for first insert.")
284 | }
285 | r = nil
286 | r, err = neighborhood.getNode(third_id)
287 | if err != nil {
288 | t.Fatal(err.Error())
289 | }
290 | if r == nil {
291 | t.Fatal("Got nil response when querying for third insert.")
292 | }
293 | }
294 | 
295 | //////////////////////////////////////////////////////////////////////////
296 | ////////////////////////// Benchmarks ////////////////////////////////////
297 | //////////////////////////////////////////////////////////////////////////
298 | 
299 | // How fast can we insert nodes
300 | func BenchmarkNeighborhoodSetInsert(b *testing.B) {
301 | b.StopTimer()
302 | selfId, err := NodeIDFromBytes([]byte("this is a test Node for testing purposes only."))
303 | if err != nil {
304 | b.Fatalf(err.Error())
305 | }
306 | self := NewNode(selfId, "127.0.0.1", "127.0.0.1", "testing", 55555)
307 | 
308 | neighborhood := newNeighborhoodSet(self)
309 | benchRand.Seed(randSeed)
310 | 
311 | b.StartTimer()
312 | for i := 0; i < b.N; i++ {
313 | otherId := randomNodeID()
314 | other := *NewNode(otherId, "127.0.0.1", "127.0.0.2", "testing", 55555)
315 | _, err = neighborhood.insertNode(other, int64(i%len(neighborhood.nodes)))
316 | }
317 | }
318 | 
319 | // How fast can we retrieve nodes by ID
320 | func BenchmarkNeighborhoodSetGetByID(b *testing.B) {
321 | b.StopTimer()
322 | selfId, err := NodeIDFromBytes([]byte("this is a
test Node for testing purposes only.")) 323 | if err != nil { 324 | b.Fatalf(err.Error()) 325 | } 326 | self := NewNode(selfId, "127.0.0.1", "127.0.0.1", "testing", 55555) 327 | 328 | neighborhood := newNeighborhoodSet(self) 329 | benchRand.Seed(randSeed) 330 | 331 | for i := 0; i < len(neighborhood.nodes); i++ { 332 | otherId := randomNodeID() 333 | other := *NewNode(otherId, "127.0.0.2", "127.0.0.2", "testing", 55555) 334 | _, err = neighborhood.insertNode(other, int64(i)) 335 | if err != nil { 336 | b.Fatalf(err.Error()) 337 | } 338 | } 339 | 340 | b.StartTimer() 341 | for i := 0; i < b.N; i++ { 342 | neighborhood.getNode(neighborhood.nodes[i%len(neighborhood.nodes)].ID) 343 | } 344 | } 345 | 346 | var benchNeighborhood *neighborhoodSet 347 | 348 | func initBenchNeighborhoodSet(b *testing.B) { 349 | selfId, err := NodeIDFromBytes([]byte("this is a test Node for testing purposes only.")) 350 | if err != nil { 351 | b.Fatalf(err.Error()) 352 | } 353 | self := NewNode(selfId, "127.0.0.1", "127.0.0.1", "testing", 55555) 354 | benchNeighborhood = newNeighborhoodSet(self) 355 | benchRand.Seed(randSeed) 356 | 357 | for i := 0; i < len(benchNeighborhood.nodes); i++ { 358 | id := randomNodeID() 359 | node := NewNode(id, "127.0.0.1", "127.0.0.1", "testing", 55555) 360 | _, err = benchNeighborhood.insertNode(*node, int64(i)) 361 | if err != nil { 362 | b.Fatal(err.Error()) 363 | } 364 | } 365 | } 366 | 367 | // How fast can we dump the neighborhood set 368 | func BenchmarkNeighborhoodSetDump(b *testing.B) { 369 | b.StopTimer() 370 | if benchNeighborhood == nil { 371 | initBenchNeighborhoodSet(b) 372 | } 373 | benchRand.Seed(randSeed) 374 | b.StartTimer() 375 | 376 | for i := 0; i < b.N; i++ { 377 | benchNeighborhood.list() 378 | } 379 | } 380 | 381 | // How fast can we export the neighborhood set 382 | func BenchmarkNeighborhoodSetExport(b *testing.B) { 383 | b.StopTimer() 384 | if benchNeighborhood == nil { 385 | initBenchNeighborhoodSet(b) 386 | } 387 | benchRand.Seed(randSeed) 388 | b.StartTimer() 389 | 390 | for i := 0; i < b.N; i++ { 391 | benchNeighborhood.export() 392 | } 393 | } 394 | -------------------------------------------------------------------------------- /node.go: -------------------------------------------------------------------------------- 1 | package wendy 2 | 3 | import ( 4 | "strconv" 5 | "sync" 6 | "sync/atomic" 7 | "time" 8 | ) 9 | 10 | // Node represents a specific machine in the cluster. 11 | type Node struct { 12 | LocalIP string // The IP through which the Node should be accessed by other Nodes with an identical Region 13 | GlobalIP string // The IP through which the Node should be accessed by other Nodes whose Region differs 14 | Port int // The port the Node is listening on 15 | Region string // A string that allows you to intelligently route between local and global requests for, e.g., EC2 regions 16 | ID NodeID 17 | proximity int64 18 | mutex *sync.RWMutex // lock and unlock a Node for concurrency safety 19 | lastHeardFrom time.Time // The last time we heard from this node 20 | leafsetVersion uint64 // the version number of the leafset 21 | routingTableVersion uint64 // the version number of the routing table 22 | neighborhoodSetVersion uint64 // the version number of the neighborhood set 23 | } 24 | 25 | // NewNode initialises a new Node and its associated mutexes. It does *not* update the proximity of the Node. 
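// As a hypothetical illustration (the addresses, Region, and port here are
// placeholder values, not defaults):
//
//	n := NewNode(id, "10.0.1.5", "203.0.113.7", "us-east-1", 4747)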
26 | func NewNode(id NodeID, local, global, region string, port int) *Node {
27 | return &Node{
28 | ID: id,
29 | LocalIP: local,
30 | GlobalIP: global,
31 | Port: port,
32 | Region: region,
33 | proximity: -1,
34 | mutex: new(sync.RWMutex),
35 | lastHeardFrom: time.Now(),
36 | leafsetVersion: 0,
37 | routingTableVersion: 0,
38 | neighborhoodSetVersion: 0,
39 | }
40 | }
41 | 
42 | // IsZero reports whether the given Node is an empty, uninitialised Node struct. IsZero returns true if the Node is an empty struct, false if it has been initialised.
43 | func (self Node) IsZero() bool {
44 | return self.LocalIP == "" && self.GlobalIP == "" && self.Port == 0
45 | }
46 | 
47 | // GetIP returns the IP and port that should be used when communicating with a Node, to respect Regions.
48 | func (self Node) GetIP(other Node) string {
49 | self.mutex.RLock()
50 | defer self.mutex.RUnlock()
51 | if other.mutex != nil {
52 | other.mutex.RLock()
53 | defer other.mutex.RUnlock()
54 | }
55 | ip := ""
56 | if self.Region == other.Region {
57 | ip = other.LocalIP
58 | } else {
59 | ip = other.GlobalIP
60 | }
61 | ip = ip + ":" + strconv.Itoa(other.Port)
62 | return ip
63 | }
64 | 
65 | // Proximity returns the proximity score for the Node, adjusted for the Region. The proximity score of a Node reflects how close it is to the current Node; a lower proximity score means a closer Node. Nodes outside the current Region are penalised by a multiplier.
66 | func (self *Node) Proximity(n *Node) int64 {
67 | if n == nil {
68 | return -1
69 | }
70 | if n.mutex == nil { // n may have been created without a mutex, e.g. by deserialisation
71 | n.mutex = new(sync.RWMutex)
72 | }
73 | n.mutex.RLock()
74 | defer n.mutex.RUnlock()
75 | multiplier := int64(1)
76 | if n.Region != self.Region {
77 | multiplier = 5
78 | }
79 | score := n.proximity * multiplier
80 | return score
81 | }
82 | 
83 | func (self *Node) getRawProximity() int64 {
84 | if self.mutex == nil {
85 | self.mutex = new(sync.RWMutex)
86 | }
87 | self.mutex.RLock()
88 | defer self.mutex.RUnlock()
89 | return self.proximity
90 | }
91 | 
92 | func (self *Node) setProximity(proximity int64) {
93 | if self.mutex == nil {
94 | self.mutex = new(sync.RWMutex)
95 | }
96 | self.mutex.Lock()
97 | defer self.mutex.Unlock()
98 | self.proximity = proximity
99 | }
100 | 
101 | func (self *Node) updateLastHeardFrom() {
102 | if self.mutex == nil {
103 | self.mutex = new(sync.RWMutex)
104 | }
105 | self.mutex.Lock()
106 | defer self.mutex.Unlock()
107 | self.lastHeardFrom = time.Now()
108 | }
109 | 
110 | func (self *Node) LastHeardFrom() time.Time {
111 | if self.mutex == nil {
112 | self.mutex = new(sync.RWMutex)
113 | }
114 | self.mutex.RLock()
115 | defer self.mutex.RUnlock()
116 | return self.lastHeardFrom
117 | }
118 | 
119 | func (self *Node) incrementLSVersion() {
120 | atomic.AddUint64(&self.leafsetVersion, 1)
121 | }
122 | 
123 | func (self *Node) incrementRTVersion() {
124 | atomic.AddUint64(&self.routingTableVersion, 1)
125 | }
126 | 
127 | func (self *Node) incrementNSVersion() {
128 | atomic.AddUint64(&self.neighborhoodSetVersion, 1)
129 | }
130 | 
131 | func (self *Node) updateVersions(RTVersion, LSVersion, NSVersion uint64) {
132 | for self.routingTableVersion < RTVersion {
133 | self.incrementRTVersion()
134 | }
135 | for self.leafsetVersion < LSVersion {
136 | self.incrementLSVersion()
137 | }
138 | for self.neighborhoodSetVersion < NSVersion {
139 | self.incrementNSVersion()
140 | }
141 | }
142 | 
--------------------------------------------------------------------------------
/node_test.go:
--------------------------------------------------------------------------------
1 | package wendy
2 | 
3 | import (
4 | "testing"
5 | )
6 | 
7 | // Test that node versions are correctly updated
8 | func TestNodeVersionUpdate(t *testing.T) {
9 | self_id, err := NodeIDFromBytes([]byte("this is a test Node for testing purposes only."))
10 | if err != nil {
11 | t.Fatalf(err.Error())
12 | }
13 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 0)
14 | self.updateVersions(2, 3, 4)
15 | if self.routingTableVersion != 2 {
16 | t.Errorf("Routing table version was supposed to be %d, was %d instead.", 2, self.routingTableVersion)
17 | }
18 | if self.leafsetVersion != 3 {
19 | t.Errorf("Leafset version was supposed to be %d, was %d instead.", 3, self.leafsetVersion)
20 | }
21 | if self.neighborhoodSetVersion != 4 {
22 | t.Errorf("Neighborhood Set version was supposed to be %d, was %d instead.", 4, self.neighborhoodSetVersion)
23 | }
24 | }
25 | 
26 | // Test that node versions are updated even when one version is lower
27 | func TestNodeVersionUpdateMixed(t *testing.T) {
28 | self_id, err := NodeIDFromBytes([]byte("this is a test Node for testing purposes only."))
29 | if err != nil {
30 | t.Fatalf(err.Error())
31 | }
32 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 0)
33 | self.updateVersions(2, 3, 4)
34 | if self.routingTableVersion != 2 {
35 | t.Errorf("Routing table version was supposed to be %d, was %d instead.", 2, self.routingTableVersion)
36 | }
37 | if self.leafsetVersion != 3 {
38 | t.Errorf("Leafset version was supposed to be %d, was %d instead.", 3, self.leafsetVersion)
39 | }
40 | if self.neighborhoodSetVersion != 4 {
41 | t.Errorf("Neighborhood Set version was supposed to be %d, was %d instead.", 4, self.neighborhoodSetVersion)
42 | }
43 | self.updateVersions(3, 3, 3)
44 | if self.routingTableVersion != 3 {
45 | t.Errorf("Routing table version was supposed to be %d, was %d instead.", 3, self.routingTableVersion)
46 | }
47 | if self.leafsetVersion != 3 {
48 | t.Errorf("Leafset version was supposed to be %d, was %d instead.", 3, self.leafsetVersion)
49 | }
50 | if self.neighborhoodSetVersion != 4 {
51 | t.Errorf("Neighborhood Set version was supposed to be %d, was %d instead.", 4, self.neighborhoodSetVersion)
52 | }
53 | }
54 | 
--------------------------------------------------------------------------------
/nodeid.go:
--------------------------------------------------------------------------------
1 | package wendy
2 | 
3 | import (
4 | "encoding/binary"
5 | "encoding/hex"
6 | "encoding/json"
7 | "errors"
8 | "fmt"
9 | "math"
10 | "math/big"
11 | )
12 | 
13 | const idLen = 32
14 | 
15 | // NodeID is a unique address for a node in the network.
16 | type NodeID [2]uint64
17 | 
18 | // NodeIDFromBytes creates a NodeID from an array of bytes.
19 | // It returns the created NodeID, trimmed to the first 32 digits (16 bytes), or a zero NodeID and an error if there are not enough bytes to yield 32 digits.
20 | func NodeIDFromBytes(source []byte) (NodeID, error) {
21 | var result NodeID
22 | if len(source) < 16 {
23 | return result, errors.New("Not enough bytes to create a NodeID.")
24 | }
25 | result[0] = binary.BigEndian.Uint64(source)
26 | result[1] = binary.BigEndian.Uint64(source[8:])
27 | return result, nil
28 | }
29 | 
30 | // String returns the hexadecimal string encoding of the NodeID.
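// As an illustrative sketch (the value is hypothetical):
//
//	id := NodeID{0x0123456789abcdef, 0xfedcba9876543210}
//	id.String() // "0123456789abcdeffedcba9876543210"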
31 | func (id NodeID) String() string {
32 | return fmt.Sprintf("%016x%016x", id[0], id[1])
33 | }
34 | 
35 | // Equals tests two NodeIDs for equality and returns true if they are considered equal, false if they are considered unequal. NodeIDs are considered equal if each digit of the NodeID is equal.
36 | func (id NodeID) Equals(other NodeID) bool {
37 | return id[0] == other[0] && id[1] == other[1]
38 | }
39 | 
40 | // Less tests two NodeIDs to determine if the ID the method is called on is less than the ID passed as an argument. Less is based on RelPos, so it reflects the IDs' relative position in the circular nodespace rather than a simple comparison of the first unequal digit.
41 | func (id NodeID) Less(other NodeID) bool {
42 | return id.RelPos(other) < 0
43 | }
44 | 
45 | // absLess returns true if id < other, disregarding modular arithmetic.
46 | func (id NodeID) absLess(other NodeID) bool {
47 | return id[0] < other[0] || id[0] == other[0] && id[1] < other[1]
48 | }
49 | 
50 | // TODO(eds): this could be faster and smaller with a little assembly, but not
51 | // sure if we want to go there.
52 | 
53 | // digitSet returns the index of the first 4-bit digit with any bits set.
54 | // The most significant digit is digit 0; the least significant is digit 15.
55 | func digitSet(x uint64) int {
56 | if x&0xffffffff00000000 != 0 {
57 | if x&0xffff000000000000 != 0 {
58 | if x&0xff00000000000000 != 0 {
59 | if x&0xf000000000000000 != 0 {
60 | return 0
61 | }
62 | return 1
63 | }
64 | if x&0x00f0000000000000 != 0 {
65 | return 2
66 | }
67 | return 3
68 | }
69 | if x&0x0000ff0000000000 != 0 {
70 | if x&0x0000f00000000000 != 0 {
71 | return 4
72 | }
73 | return 5
74 | }
75 | if x&0x000000f000000000 != 0 {
76 | return 6
77 | }
78 | return 7
79 | }
80 | if x&0x00000000ffff0000 != 0 {
81 | if x&0x00000000ff000000 != 0 {
82 | if x&0x00000000f0000000 != 0 {
83 | return 8
84 | }
85 | return 9
86 | }
87 | if x&0x0000000000f00000 != 0 { // digit 10's mask; was 0x00000000f0000000, which can never be set on this branch
88 | return 10
89 | }
90 | return 11
91 | }
92 | if x&0x000000000000ff00 != 0 {
93 | if x&0x000000000000f000 != 0 {
94 | return 12
95 | }
96 | return 13
97 | }
98 | if x&0x00000000000000f0 != 0 {
99 | return 14
100 | }
101 | return 15
102 | }
103 | 
104 | // CommonPrefixLen returns the number of leading digits that are equal in the two NodeIDs.
105 | func (id NodeID) CommonPrefixLen(other NodeID) int {
106 | if xor := id[0] ^ other[0]; xor != 0 {
107 | return digitSet(xor)
108 | }
109 | if xor := id[1] ^ other[1]; xor != 0 {
110 | return digitSet(xor) | 16
111 | }
112 | return idLen
113 | }
114 | 
115 | // differences returns the difference between the two NodeIDs in both directions.
116 | func (id NodeID) differences(other NodeID) (NodeID, NodeID) {
117 | var d1, d2 NodeID
118 | if id.absLess(other) {
119 | d1[1] = other[1] - id[1]
120 | // check for borrow
121 | b := 0
122 | if d1[1] > other[1] {
123 | b = 1
124 | }
125 | d1[0] = other[0] - (id[0] + uint64(b))
126 | d2[0], d2[1] = math.MaxUint64-d1[0], math.MaxUint64-d1[1]+1; if d1[1] == 0 { d2[0]++ } // carry: the +1 wraps the low word to zero when d1[1] is 0
127 | } else {
128 | d2[1] = id[1] - other[1]
129 | // check for borrow
130 | b := 0
131 | if d2[1] > id[1] {
132 | b = 1
133 | }
134 | d2[0] = id[0] - (other[0] + uint64(b))
135 | d1[0], d1[1] = math.MaxUint64-d2[0], math.MaxUint64-d2[1]+1; if d2[1] == 0 { d1[0]++ } // carry: the +1 wraps the low word to zero when d2[1] is 0
136 | }
137 | return d2, d1
138 | }
139 | 
140 | // Diff returns the difference between two NodeIDs as an absolute value. It performs the modular arithmetic necessary to find the shortest distance between the IDs in the 2^128-item nodespace.
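// To illustrate the wraparound (mirroring TestNodeIDDiffWrap in
// nodeid_test.go): the zero ID and the all-ones ID are adjacent on the ring,
// so their Diff is 1 rather than 2^128 - 1.
//
//	zero := NodeID{0, 0}
//	ones := NodeID{math.MaxUint64, math.MaxUint64}
//	zero.Diff(ones) // big.Int value 1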
141 | func (id NodeID) Diff(other NodeID) *big.Int { 142 | d1, d2 := id.differences(other) 143 | if d1.absLess(d2) { 144 | return d1.Base10() 145 | } 146 | return d2.Base10() 147 | } 148 | 149 | // RelPos uses modular arithmetic to determine whether the NodeID passed as an argument is to the left of the NodeID it is called on (-1), the same as the NodeID it is called on (0), or to the right of the NodeID it is called on (1) in the circular node space. 150 | func (id NodeID) RelPos(other NodeID) int { 151 | if id.Equals(other) { 152 | return 0 153 | } 154 | d1, d2 := id.differences(other) 155 | if d1.absLess(d2) { 156 | return 1 157 | } 158 | return -1 159 | } 160 | 161 | var one = big.NewInt(1) 162 | 163 | // Base10 returns the NodeID as a base 10 number, translating each base 16 digit. 164 | func (id NodeID) Base10() *big.Int { 165 | var result big.Int 166 | if id[0] > math.MaxInt64 { 167 | result.SetInt64(math.MaxInt64) 168 | result.Add(&result, one) 169 | result.Lsh(&result, 64) 170 | id[0] -= math.MaxInt64 + 1 171 | } 172 | var tmp big.Int 173 | tmp.SetInt64(int64(id[0])) 174 | tmp.Lsh(&tmp, 64) 175 | result.Add(&result, &tmp) 176 | if id[1] > math.MaxInt64 { 177 | tmp.SetInt64(math.MaxInt64) 178 | result.Add(&result, &tmp) 179 | result.Add(&result, one) 180 | id[1] -= math.MaxInt64 + 1 181 | } 182 | tmp.SetInt64(int64(id[1])) 183 | result.Add(&result, &tmp) 184 | return &result 185 | } 186 | 187 | // MarshalJSON fulfills the Marshaler interface, allowing NodeIDs to be serialised to JSON safely. 188 | func (id NodeID) MarshalJSON() ([]byte, error) { 189 | return []byte(`"` + id.String() + `"`), nil 190 | } 191 | 192 | // UnmarshalJSON fulfills the Unmarshaler interface, allowing NodeIDs to be unserialised from JSON safely. 193 | func (id *NodeID) UnmarshalJSON(source []byte) error { 194 | if id == nil { 195 | return errors.New("UnmarshalJSON on nil NodeID.") 196 | } 197 | var str string 198 | err := json.Unmarshal(source, &str) 199 | if err != nil { 200 | return err 201 | } 202 | dec, err := hex.DecodeString(str) 203 | if err != nil { 204 | return err 205 | } 206 | new_id, err := NodeIDFromBytes([]byte(dec)) 207 | if err != nil { 208 | return err 209 | } 210 | *id = new_id 211 | return nil 212 | } 213 | 214 | // Digit returns the ith 4-bit digit in the NodeID. If i >= 32, Digit panics. 
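// For instance (values match TestNodeIDIterDigit in nodeid_test.go): for the
// NodeID whose String() form is "0123456789abcdeffedcba9876543210", Digit(0)
// is 0x0, Digit(1) is 0x1, and Digit(16) is 0xf.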
215 | func (id NodeID) Digit(i int) byte { 216 | if uint(i) >= 32 { 217 | panic("invalid digit index") 218 | } 219 | n := id[0] 220 | if i >= 16 { 221 | n = id[1] 222 | i &= 15 223 | } 224 | k := 4 * uint(15-i) 225 | return byte((n >> k) & 0xf) 226 | } 227 | -------------------------------------------------------------------------------- /nodeid_test.go: -------------------------------------------------------------------------------- 1 | package wendy 2 | 3 | import ( 4 | "bytes" 5 | "math/big" 6 | "testing" 7 | ) 8 | 9 | func TestNodeIDString(t *testing.T) { 10 | tests := [...]struct { 11 | bytes []byte 12 | str string 13 | }{ 14 | { 15 | make([]byte, 16), 16 | "00000000000000000000000000000000", 17 | }, 18 | { 19 | bytes.Repeat([]byte{0xff}, 16), 20 | "ffffffffffffffffffffffffffffffff", 21 | }, 22 | } 23 | for i, test := range tests { 24 | id, err := NodeIDFromBytes(test.bytes) 25 | if err != nil { 26 | t.Errorf("test %v: unexpected error %v", i, err) 27 | } 28 | str := id.String() 29 | if str != test.str { 30 | t.Errorf("test %v: expected %q, got %q", i, test.str, str) 31 | } 32 | } 33 | } 34 | 35 | func TestNodeIDRelPos(t *testing.T) { 36 | tests := [...]struct { 37 | bytes1, bytes2 []byte 38 | relpos int 39 | }{ 40 | { 41 | make([]byte, 16), 42 | make([]byte, 16), 43 | 0, 44 | }, 45 | { 46 | make([]byte, 16), 47 | bytes.Repeat([]byte{0x11}, 16), 48 | -1, 49 | }, 50 | { 51 | bytes.Repeat([]byte{0x11}, 16), 52 | make([]byte, 16), 53 | 1, 54 | }, 55 | { 56 | make([]byte, 16), 57 | bytes.Repeat([]byte{0xff}, 16), 58 | 1, 59 | }, 60 | { 61 | bytes.Repeat([]byte{0xff}, 16), 62 | make([]byte, 16), 63 | -1, 64 | }, 65 | { 66 | []byte{0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xf9, 0x00, 0x00, 0xf7, 0x31, 0x01, 0x01, 0x01, 0x01, 0x01}, 67 | []byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x06, 0xff, 0xff, 0x08, 0xce, 0xfe, 0xfe, 0xfe, 0xfe, 0xff}, 68 | -1, 69 | }, 70 | } 71 | for i, test := range tests { 72 | id1, err := NodeIDFromBytes(test.bytes1) 73 | if err != nil { 74 | t.Errorf("test %v: unexpected error %v", i, err) 75 | } 76 | id2, err := NodeIDFromBytes(test.bytes2) 77 | if err != nil { 78 | t.Errorf("test %v: unexpected error %v", i, err) 79 | } 80 | relpos := id1.RelPos(id2) 81 | if relpos != test.relpos { 82 | t.Errorf("test %v: expected %v, got %v", i, test.relpos, relpos) 83 | } 84 | } 85 | } 86 | 87 | func TestNodeIDBase10(t *testing.T) { 88 | tests := [...]struct { 89 | bytes []byte 90 | base10 *big.Int 91 | }{ 92 | { 93 | make([]byte, 16), 94 | big.NewInt(0), 95 | }, 96 | { 97 | append(make([]byte, 15), 1), 98 | big.NewInt(1), 99 | }, 100 | { 101 | bytes.Repeat([]byte{0xff}, 16), 102 | new(big.Int).SetBytes(bytes.Repeat([]byte{0xff}, 16)), 103 | }, 104 | } 105 | for i, test := range tests { 106 | id, err := NodeIDFromBytes(test.bytes) 107 | if err != nil { 108 | t.Errorf("test %v: unexpected error %v", i, err) 109 | } 110 | base10 := id.Base10() 111 | if base10.Cmp(test.base10) != 0 { 112 | t.Errorf("test %v: expected %v, got %v", i, test.base10, base10) 113 | } 114 | } 115 | } 116 | 117 | func TestNodeIDLess(t *testing.T) { 118 | tests := []struct { 119 | bytes1, bytes2 []byte 120 | less bool 121 | }{ 122 | { 123 | make([]byte, 16), 124 | make([]byte, 16), 125 | false, 126 | }, 127 | { 128 | make([]byte, 16), 129 | bytes.Repeat([]byte{0x11}, 16), 130 | true, 131 | }, 132 | { 133 | bytes.Repeat([]byte{0x11}, 16), 134 | make([]byte, 16), 135 | false, 136 | }, 137 | { 138 | make([]byte, 16), 139 | bytes.Repeat([]byte{0xff}, 16), 140 | false, 141 | }, 142 | { 143 | 
bytes.Repeat([]byte{0xff}, 16),
144 | make([]byte, 16),
145 | true,
146 | },
147 | }
148 | for i, test := range tests {
149 | id1, err := NodeIDFromBytes(test.bytes1)
150 | if err != nil {
151 | t.Errorf("test %v: unexpected error %v", i, err)
152 | }
153 | id2, err := NodeIDFromBytes(test.bytes2)
154 | if err != nil {
155 | t.Errorf("test %v: unexpected error %v", i, err)
156 | }
157 | less := id1.Less(id2)
158 | if less != test.less {
159 | t.Errorf("test %v: expected %v, got %v", i, test.less, less)
160 | }
161 | }
162 | }
163 | 
164 | // Make sure that iterating over digits works correctly.
165 | func TestNodeIDIterDigit(t *testing.T) {
166 | id, err := NodeIDFromBytes([]byte{0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef, 0xfe, 0xdc, 0xba, 0x98, 0x76, 0x54, 0x32, 0x10})
167 | if err != nil {
168 | t.Fatal("unexpected error", err)
169 | }
170 | for i := 0; i < 16; i++ {
171 | if digit := id.Digit(i); digit != byte(i) {
172 | t.Errorf("expected digit %#x, got %#x", i, digit)
173 | }
174 | }
175 | for i := 0; i < 16; i++ {
176 | if digit := id.Digit(16 + i); digit != byte(15-i) {
177 | t.Errorf("expected digit %#x, got %#x", 15-i, digit)
178 | }
179 | }
180 | }
181 | 
182 | // Make sure an error is thrown if a NodeID is created from fewer than 16 bytes
183 | func TestNodeIDFromBytesWithInsufficientBytes(t *testing.T) {
184 | bytes := []byte("123456789012345")
185 | id, err := NodeIDFromBytes(bytes)
186 | if err == nil {
187 | t.Errorf("Source length of %v bytes, but no error thrown. Instead returned NodeID of %v", len(bytes), id)
188 | }
189 | }
190 | 
191 | // Make sure an error is *not* thrown if enough bytes are passed in.
192 | func TestNodeIDFromBytesWithSufficientBytes(t *testing.T) {
193 | bytes := []byte("1234567890123456")
194 | _, err := NodeIDFromBytes(bytes)
195 | if err != nil {
196 | t.Errorf("Source length of %v bytes threw an error when no error should have been thrown.", len(bytes))
197 | t.Logf(err.Error())
198 | }
199 | }
200 | 
201 | // Make sure the correct common prefix length is reported for two NodeIDs
202 | func TestNodeIDCommonPrefixLen(t *testing.T) {
203 | n1 := NodeID{0xfdfdfdfdfdfdfdfd, 0xfdfdfdfdfdfdfdfd}
204 | n2 := NodeID{0xfdfdddfdfdfdfdfd, 0xfdfdfdfdfdfdfdfd}
205 | diff1 := 4
206 | 
207 | n3 := NodeID{0xdfdfdfdfdfdfdfdf, 0xdfdfdfdfdfdfdfdf}
208 | n4 := NodeID{0xdfdfdfafdfdfdfdf, 0xdfdfdfdfdfdfdfdf}
209 | diff2 := 6
210 | 
211 | if n1.CommonPrefixLen(n2) != diff1 {
212 | t.Errorf("Common prefix length should be %v, is %v instead.", diff1, n1.CommonPrefixLen(n2))
213 | t.Log(n1)
214 | t.Log(n2)
215 | if len(n1) > n1.CommonPrefixLen(n2) && len(n2) > n1.CommonPrefixLen(n2) {
216 | t.Logf("First significant digit: %v vs. %v", n1[n1.CommonPrefixLen(n2)], n2[n1.CommonPrefixLen(n2)])
217 | }
218 | }
219 | if n2.CommonPrefixLen(n3) != 0 {
220 | t.Errorf("Common prefix length should be %v, is %v instead.", 0, n2.CommonPrefixLen(n3))
221 | t.Log(n2)
222 | t.Log(n3)
223 | if len(n2) > n2.CommonPrefixLen(n3) && len(n3) > n2.CommonPrefixLen(n3) {
224 | t.Logf("First significant digit: %v vs. %v", n2[n2.CommonPrefixLen(n3)], n3[n2.CommonPrefixLen(n3)])
225 | }
226 | }
227 | if n3.CommonPrefixLen(n4) != diff2 {
228 | t.Errorf("Common prefix length should be %v, is %v instead.", diff2, n3.CommonPrefixLen(n4))
229 | t.Log(n3)
230 | t.Log(n4)
231 | if len(n3) > n3.CommonPrefixLen(n4) && len(n4) > n3.CommonPrefixLen(n4) {
232 | t.Logf("First significant digit: %v vs.
%v", n3[n3.CommonPrefixLen(n4)], n4[n3.CommonPrefixLen(n4)]) 233 | } 234 | } 235 | if n4.CommonPrefixLen(n4) != idLen { 236 | t.Errorf("Common prefix length should be %v, is %v instead.", len(n4), n4.CommonPrefixLen(n4)) 237 | if n4.CommonPrefixLen(n4) < idLen { 238 | t.Logf("First significant digit: %v vs. %v", n4[n4.CommonPrefixLen(n4)], n4[n4.CommonPrefixLen(n4)]) 239 | } 240 | } 241 | } 242 | 243 | // Make sure the correct difference is reported between NodeIDs 244 | func TestNodeIDDiff(t *testing.T) { 245 | n1 := NodeID{0xfdfdfdfdfdfdfdfd, 0xfdfdfdfdfdfdfdfd} 246 | n2 := NodeID{0xfdfdfdfdfdfdfdfd, 0xfdfdfdfdfdfdfdfb} 247 | diff1 := n1.Diff(n2) 248 | if diff1.Cmp(big.NewInt(2)) != 0 { 249 | t.Errorf("Difference should be 2, was %v instead", diff1) 250 | } 251 | diff2 := n2.Diff(n1) 252 | if diff2.Cmp(big.NewInt(2)) != 0 { 253 | t.Errorf("Difference should be 2, was %v instead", diff2) 254 | } 255 | diff3 := n2.Diff(n2) 256 | if diff3.Cmp(big.NewInt(0)) != 0 { 257 | t.Errorf("Difference should be 0, was %v instead", diff3) 258 | } 259 | } 260 | 261 | // Make sure NodeID comparisons wrap around the circle 262 | func TestNodeIDDiffWrap(t *testing.T) { 263 | n1, err := NodeIDFromBytes(make([]byte, 16)) 264 | if err != nil { 265 | t.Fatalf(err.Error()) 266 | } 267 | n2, err := NodeIDFromBytes([]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}) 268 | if err != nil { 269 | t.Fatalf(err.Error()) 270 | } 271 | diff1 := n1.Diff(n2) 272 | if diff1.Cmp(big.NewInt(1)) != 0 { 273 | t.Errorf("Difference should be 1, was %v instead", diff1) 274 | } 275 | diff2 := n2.Diff(n1) 276 | if diff2.Cmp(big.NewInt(1)) != 0 { 277 | t.Errorf("Difference should be 1, was %v instead", diff2) 278 | } 279 | diff3 := n2.Diff(n2) 280 | if diff3.Cmp(big.NewInt(0)) != 0 { 281 | t.Errorf("Difference should be 0, was %v instead", diff3) 282 | } 283 | } 284 | 285 | // Quick benchmark to test how expensive diffing nodes is 286 | func BenchmarkNodeIDDiff(b *testing.B) { 287 | b.StopTimer() 288 | n1, err := NodeIDFromBytes(make([]byte, 16)) 289 | if err != nil { 290 | b.Fatalf(err.Error()) 291 | } 292 | n2, err := NodeIDFromBytes([]byte{255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255}) 293 | if err != nil { 294 | b.Fatalf(err.Error()) 295 | } 296 | b.StartTimer() 297 | 298 | for i := 0; i < b.N; i++ { 299 | n1.Diff(n2) 300 | } 301 | } 302 | -------------------------------------------------------------------------------- /table.go: -------------------------------------------------------------------------------- 1 | package wendy 2 | 3 | import ( 4 | "errors" 5 | "log" 6 | "os" 7 | "sync" 8 | ) 9 | 10 | type routingTable struct { 11 | self *Node 12 | nodes [32][16]*Node 13 | log *log.Logger 14 | logLevel int 15 | lock *sync.RWMutex 16 | } 17 | 18 | func newRoutingTable(self *Node) *routingTable { 19 | return &routingTable{ 20 | self: self, 21 | nodes: [32][16]*Node{}, 22 | log: log.New(os.Stdout, "wendy#routingTable("+self.ID.String()+")", log.LstdFlags), 23 | logLevel: LogLevelWarn, 24 | lock: new(sync.RWMutex), 25 | } 26 | } 27 | 28 | var rtDuplicateInsertError = errors.New("Node already exists in routing table.") 29 | 30 | func (t *routingTable) insertNode(node Node, proximity int64) (*Node, error) { 31 | return t.insertValues(node.ID, node.LocalIP, node.GlobalIP, node.Region, node.Port, node.routingTableVersion, node.leafsetVersion, node.neighborhoodSetVersion, proximity) 32 | } 33 | 34 | func (t *routingTable) insertValues(id NodeID, localIP, globalIP, 
region string, port int, rtVersion, lsVersion, nsVersion uint64, proximity int64) (*Node, error) { 35 | t.lock.Lock() 36 | defer t.lock.Unlock() 37 | node := NewNode(id, localIP, globalIP, region, port) 38 | node.updateVersions(rtVersion, lsVersion, nsVersion) 39 | node.setProximity(proximity) 40 | row := t.self.ID.CommonPrefixLen(node.ID) 41 | if row >= len(t.nodes) { 42 | return nil, throwIdentityError("insert", "into", "routing table") 43 | } 44 | col := int(node.ID.Digit(row)) 45 | if col >= len(t.nodes[row]) { 46 | return nil, impossibleError 47 | } 48 | if t.nodes[row][col] != nil { 49 | if node.ID.Equals(t.nodes[row][col].ID) { 50 | t.debug("Node %s already in routing table. Versions before insert:\nrouting table: %d\nleaf set: %d\nneighborhood set: %d\n", t.nodes[row][col].ID.String(), t.nodes[row][col].routingTableVersion, t.nodes[row][col].leafsetVersion, t.nodes[row][col].neighborhoodSetVersion) 51 | node.updateVersions(t.nodes[row][col].routingTableVersion, t.nodes[row][col].leafsetVersion, t.nodes[row][col].neighborhoodSetVersion) 52 | t.nodes[row][col] = node 53 | t.debug("Versions after insert:\nrouting table: %d\nleaf set: %d\nneighborhood set: %d\n", t.nodes[row][col].routingTableVersion, t.nodes[row][col].leafsetVersion, t.nodes[row][col].neighborhoodSetVersion) 54 | return nil, rtDuplicateInsertError 55 | } 56 | // keep the node that has the closest proximity 57 | if t.self.Proximity(t.nodes[row][col]) > t.self.Proximity(node) { 58 | t.nodes[row][col] = node 59 | t.debug("Inserted node %s into routing table.", node.ID.String()) 60 | return node, nil 61 | } 62 | } else { 63 | t.nodes[row][col] = node 64 | t.debug("Inserted node %s into routing table.", node.ID.String()) 65 | t.self.incrementRTVersion() 66 | return node, nil 67 | } 68 | return nil, nil 69 | } 70 | 71 | func (t *routingTable) getNode(id NodeID) (*Node, error) { 72 | t.lock.RLock() 73 | defer t.lock.RUnlock() 74 | row := t.self.ID.CommonPrefixLen(id) 75 | if row >= idLen { 76 | return nil, throwIdentityError("get", "from", "routing table") 77 | } 78 | col := int(id.Digit(row)) 79 | if col >= len(t.nodes[row]) { 80 | return nil, impossibleError 81 | } 82 | if t.nodes[row][col] == nil { 83 | return nil, nodeNotFoundError 84 | } 85 | if !t.nodes[row][col].ID.Equals(id) { 86 | t.debug("Node not found. 
Expected %s, got %s.", id.String(), t.nodes[row][col].ID.String())
87 | return nil, nodeNotFoundError
88 | }
89 | return t.nodes[row][col], nil
90 | }
91 | 
92 | func (t *routingTable) route(id NodeID) (*Node, error) {
93 | t.lock.RLock()
94 | defer t.lock.RUnlock()
95 | row := t.self.ID.CommonPrefixLen(id)
96 | if row >= idLen {
97 | return nil, throwIdentityError("route to", "in", "routing table")
98 | }
99 | col := int(id.Digit(row))
100 | if col >= len(t.nodes[row]) {
101 | return nil, impossibleError
102 | }
103 | if t.nodes[row][col] != nil {
104 | return t.nodes[row][col], nil
105 | }
106 | diff := t.self.ID.Diff(id)
107 | for scan_row := row; scan_row < len(t.nodes); scan_row++ {
108 | for c, n := range t.nodes[scan_row] {
109 | if c == int(t.self.ID.Digit(scan_row)) { // this slot can never be occupied; was Digit(row), which wrongly skipped a valid column in deeper rows
110 | continue
111 | }
112 | if n == nil {
113 | continue
114 | }
115 | entry_diff := n.ID.Diff(id).Cmp(diff)
116 | if entry_diff == -1 || (entry_diff == 0 && !t.self.ID.Less(n.ID)) {
117 | return n, nil
118 | }
119 | }
120 | }
121 | return nil, nodeNotFoundError
122 | }
123 | 
124 | func (t *routingTable) removeNode(id NodeID) (*Node, error) {
125 | t.lock.Lock()
126 | defer t.lock.Unlock()
127 | row := t.self.ID.CommonPrefixLen(id)
128 | if row >= idLen {
129 | return nil, throwIdentityError("remove", "from", "routing table")
130 | }
131 | col := int(id.Digit(row))
132 | if col >= len(t.nodes[row]) { // was col >, matching the bounds check used in getNode
133 | return nil, impossibleError
134 | }
135 | if t.nodes[row][col] != nil && t.nodes[row][col].ID.Equals(id) {
136 | resp := t.nodes[row][col]
137 | t.nodes[row][col] = nil
138 | t.self.incrementRTVersion()
139 | return resp, nil
140 | } else {
141 | return nil, nodeNotFoundError
142 | }
144 | }
145 | 
146 | func (t *routingTable) list(rows, cols []int) []*Node {
147 | t.lock.RLock()
148 | defer t.lock.RUnlock()
149 | nodes := []*Node{}
150 | if len(rows) > 0 {
151 | for _, row := range rows {
152 | if len(cols) > 0 {
153 | for _, col := range cols {
154 | if t.nodes[row][col] != nil {
155 | nodes = append(nodes, t.nodes[row][col])
156 | }
157 | }
158 | } else {
159 | for _, col := range t.nodes[row] {
160 | if col != nil {
161 | nodes = append(nodes, col)
162 | }
163 | }
164 | }
165 | }
166 | } else {
167 | for _, row := range t.nodes {
168 | for _, col := range row {
169 | if col != nil {
170 | nodes = append(nodes, col)
171 | }
172 | }
173 | }
174 | }
175 | return nodes
176 | }
177 | 
178 | func (t *routingTable) export(rows, cols []int) [32][16]*Node {
179 | t.lock.RLock()
180 | defer t.lock.RUnlock()
181 | nodes := [32][16]*Node{}
182 | if len(rows) > 0 {
183 | for _, row := range rows {
184 | if len(cols) > 0 {
185 | for _, col := range cols {
186 | if t.nodes[row][col] != nil {
187 | nodes[row][col] = t.nodes[row][col]
188 | }
189 | }
190 | } else {
191 | for col, node := range t.nodes[row] {
192 | if node != nil {
193 | nodes[row][col] = node
194 | }
195 | }
196 | }
197 | }
198 | } else {
199 | for rowNo, row := range t.nodes {
200 | for colNo, node := range row {
201 | if node != nil {
202 | nodes[rowNo][colNo] = node
203 | }
204 | }
205 | }
206 | }
207 | return nodes
208 | }
209 | 
210 | func (t *routingTable) debug(format string, v ...interface{}) {
211 | if t.logLevel <= LogLevelDebug {
212 | t.log.Printf(format, v...)
213 | }
214 | }
215 | 
216 | func (t *routingTable) warn(format string, v ...interface{}) {
217 | if t.logLevel <= LogLevelWarn {
218 | t.log.Printf(format, v...)
219 | }
220 | }
221 | 
222 | func (t *routingTable) err(format string, v ...interface{}) {
223 | if t.logLevel <= LogLevelError {
224 | t.log.Printf(format, v...)
225 | }
226 | }
227 | 
--------------------------------------------------------------------------------
/table_test.go:
--------------------------------------------------------------------------------
1 | package wendy
2 | 
3 | import (
4 | "math/rand"
5 | "testing"
6 | )
7 | 
8 | // Test insertion of a node into the routing table
9 | func TestRoutingTableInsert(t *testing.T) {
10 | self_id, err := NodeIDFromBytes([]byte("this is a test Node for testing purposes only."))
11 | if err != nil {
12 | t.Fatalf(err.Error())
13 | }
14 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 55555)
15 | t.Logf("%s\n", self_id.String())
16 | 
17 | other_id, err := NodeIDFromBytes([]byte("this is some other Node for testing purposes only."))
18 | if err != nil {
19 | t.Fatalf(err.Error())
20 | }
21 | other := NewNode(other_id, "127.0.0.2", "127.0.0.2", "testing", 55555)
22 | row := self_id.CommonPrefixLen(other_id)
23 | col := other_id.Digit(row)
24 | t.Logf("%s\n", other_id.String())
25 | t.Logf("%v\n", row)
26 | t.Logf("%v\n", int(col))
27 | table := newRoutingTable(self)
28 | r, err := table.insertNode(*other, self.Proximity(other))
29 | if err != nil {
30 | t.Fatalf(err.Error())
31 | }
32 | if r == nil {
33 | t.Fatalf("Nil response returned.")
34 | }
35 | r2, err := table.getNode(other_id)
36 | if err != nil {
37 | t.Fatalf(err.Error())
38 | }
39 | if r2 == nil {
40 | t.Fatalf("Nil response returned.")
41 | }
42 | if !r2.ID.Equals(r.ID) {
43 | t.Fatalf("Expected %s, got %s.", r.ID, r2.ID)
44 | }
45 | }
46 | 
47 | // Test deleting the only node from a column of the routing table
48 | func TestRoutingTableDeleteOnly(t *testing.T) {
49 | self_id, err := NodeIDFromBytes([]byte("this is a test Node for testing purposes only."))
50 | if err != nil {
51 | t.Fatalf(err.Error())
52 | }
53 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 55555)
54 | 
55 | other_id, err := NodeIDFromBytes([]byte("this is some other Node for testing purposes only."))
56 | if err != nil {
57 | t.Fatalf(err.Error())
58 | }
59 | other := NewNode(other_id, "127.0.0.2", "127.0.0.2", "testing", 55555)
60 | table := newRoutingTable(self)
61 | r, err := table.insertNode(*other, self.Proximity(other))
62 | if err != nil {
63 | t.Fatalf(err.Error())
64 | }
65 | if r == nil {
66 | t.Fatalf("Nil response returned.")
67 | }
68 | _, err = table.removeNode(other_id)
69 | if err != nil {
70 | t.Fatalf(err.Error())
71 | }
72 | _, err = table.getNode(r.ID)
73 | if err != nodeNotFoundError {
74 | if err != nil {
75 | t.Fatalf(err.Error())
76 | } else {
77 | t.Fatal("Expected nodeNotFoundError, got nil instead.")
78 | }
79 | }
80 | }
81 | 
82 | // Test routing when the key falls between two nodes
83 | func TestRoutingTableScanSplit(t *testing.T) {
84 | self_id, err := NodeIDFromBytes([]byte("1234560890abcdef"))
85 | if err != nil {
86 | t.Fatal(err.Error())
87 | }
88 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 55555)
89 | 
90 | table := newRoutingTable(self)
91 | 
92 | first_id, err := NodeIDFromBytes([]byte("12345677890abcde"))
93 | if err != nil {
94 | t.Fatal(err.Error())
95 | }
96 | first := NewNode(first_id, "127.0.0.2", "127.0.0.2", "testing", 55555)
97 | r, err := table.insertNode(*first, self.Proximity(first))
98 | if err != nil {
99 | t.Fatal(err.Error())
100 | }
101 | if r == nil {
102 | t.Fatal("First insert returned nil.")
103 | }
104 | 
second_id, err := NodeIDFromBytes([]byte("12345637890abcde")) 105 | if err != nil { 106 | t.Fatal(err.Error()) 107 | } 108 | second := NewNode(second_id, "127.0.0.3", "127.0.0.3", "testing", 55555) 109 | r2, err := table.insertNode(*second, self.Proximity(second)) 110 | if err != nil { 111 | t.Fatal(err.Error()) 112 | } 113 | if r2 == nil { 114 | t.Fatal("Second insert returned nil") 115 | } 116 | message_id, err := NodeIDFromBytes([]byte("12345657890abcde")) 117 | if err != nil { 118 | t.Fatal(err.Error()) 119 | } 120 | d1 := message_id.Diff(first_id) 121 | d2 := message_id.Diff(second_id) 122 | if d1.Cmp(d2) != 0 { 123 | t.Fatalf("IDs not equidistant. Expected %s, got %s.", d1, d2) 124 | } 125 | r3, err := table.route(message_id) 126 | if err != nil { 127 | t.Fatal(err.Error()) 128 | } 129 | if r3 == nil { 130 | t.Fatal("Scan returned nil.") 131 | } 132 | if !second_id.Equals(r3.ID) { 133 | t.Errorf("Wrong Node returned. Expected %s, got %s.", second_id, r3.ID) 134 | } 135 | } 136 | 137 | // Test routing when there are no suitable matches 138 | func TestRoutingTableRouteNone(t *testing.T) { 139 | self_id, err := NodeIDFromBytes([]byte("1234560890abcdeg")) 140 | if err != nil { 141 | t.Fatal(err.Error()) 142 | } 143 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 55555) 144 | 145 | table := newRoutingTable(self) 146 | 147 | first_id, err := NodeIDFromBytes([]byte("12345657890abcde")) 148 | if err != nil { 149 | t.Fatal(err.Error()) 150 | } 151 | row := self_id.CommonPrefixLen(first_id) 152 | first := NewNode(first_id, "127.0.0.2", "127.0.0.2", "testing", 55555) 153 | r, err := table.insertNode(*first, self.Proximity(first)) 154 | if err != nil { 155 | t.Fatal(err.Error()) 156 | } 157 | if r == nil { 158 | t.Fatal("Insert returned nil.") 159 | } 160 | message_id, err := NodeIDFromBytes([]byte("1234560890abcdef")) 161 | if err != nil { 162 | t.Fatal(err.Error()) 163 | } 164 | m_row := message_id.CommonPrefixLen(self_id) 165 | if row >= m_row { 166 | t.Fatalf("Node would be picked up by scan.") 167 | } 168 | r3, err := table.route(message_id) 169 | if err != nodeNotFoundError { 170 | if err != nil { 171 | t.Fatal(err.Error()) 172 | } else { 173 | t.Fatal("Expected nodeNotFoundError, didn't get an error.") 174 | } 175 | } 176 | if r3 != nil { 177 | t.Errorf("Scan was supposed to return nil, returned %s instead.", r3.ID) 178 | } 179 | } 180 | 181 | // Test routing over multiple rows in the routing table 182 | func TestRoutingTableScanMultipleRows(t *testing.T) { 183 | self_id, err := NodeIDFromBytes([]byte("1234567890abcdef")) 184 | if err != nil { 185 | t.Fatal(err.Error()) 186 | } 187 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 55555) 188 | 189 | table := newRoutingTable(self) 190 | 191 | first_id, err := NodeIDFromBytes([]byte("1234567890abdefg")) 192 | if err != nil { 193 | t.Fatal(err.Error()) 194 | } 195 | first := NewNode(first_id, "127.0.0.2", "127.0.0.2", "testing", 55555) 196 | r, err := table.insertNode(*first, self.Proximity(first)) 197 | if err != nil { 198 | t.Fatal(err.Error()) 199 | } 200 | if r == nil { 201 | t.Fatal("Insert returned nil.") 202 | } 203 | 204 | second_id, err := NodeIDFromBytes([]byte("1234567890abcdff")) 205 | if err != nil { 206 | t.Fatal(err.Error()) 207 | } 208 | second := NewNode(second_id, "127.0.0.2", "127.0.0.2", "testing", 55555) 209 | r2, err := table.insertNode(*second, self.Proximity(second)) 210 | if err != nil { 211 | t.Fatal(err.Error()) 212 | } 213 | if r2 == nil { 214 | t.Fatal("Second insert returned nil.") 
215 | }
216 | message_id, err := NodeIDFromBytes([]byte("1234567890accdef"))
217 | if err != nil {
218 | t.Fatal(err.Error())
219 | }
220 | first_row := first_id.CommonPrefixLen(self_id)
221 | second_row := second_id.CommonPrefixLen(self_id)
222 | m_row := message_id.CommonPrefixLen(self_id)
223 | if first_row < m_row || second_row < m_row {
224 | t.Fatalf("Node wouldn't be picked up by scan.")
225 | }
226 | if first_row == m_row || second_row == m_row {
227 | t.Fatalf("Node inserted into the same row.\nNode one: %d\nNode two: %d\nMessage: %d\n", first_row, second_row, m_row)
228 | }
229 | r3, err := table.route(message_id)
230 | if err != nil {
231 | t.Fatal(err.Error())
232 | }
233 | if r3 == nil {
234 | t.Fatalf("Scan returned nil.")
235 | }
236 | if !r3.ID.Equals(first_id) {
237 | t.Errorf("Scan was supposed to return %s, returned %s instead.", first_id, r3.ID)
238 | }
239 | }
240 | 
241 | // Test routing to the only node in the routing table
242 | func TestRoutingTableRouteOnly(t *testing.T) {
243 | self_id, err := NodeIDFromBytes([]byte("1234567890abcdeg"))
244 | if err != nil {
245 | t.Fatal(err.Error())
246 | }
247 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 55555)
248 | 
249 | table := newRoutingTable(self)
250 | 
251 | first_id, err := NodeIDFromBytes([]byte("1234567890acdefg"))
252 | if err != nil {
253 | t.Fatal(err.Error())
254 | }
255 | row := self_id.CommonPrefixLen(first_id)
256 | first := NewNode(first_id, "127.0.0.2", "127.0.0.2", "testing", 55555)
257 | r, err := table.insertNode(*first, self.Proximity(first))
258 | if err != nil {
259 | t.Fatal(err.Error())
260 | }
261 | if r == nil {
262 | t.Fatal("Insert returned nil.")
263 | }
264 | message_id, err := NodeIDFromBytes([]byte("1234567890adefgh"))
265 | if err != nil {
266 | t.Fatal(err.Error())
267 | }
268 | m_row := message_id.CommonPrefixLen(self_id)
269 | if row < m_row {
270 | t.Fatalf("Node wouldn't be picked up by routing.")
271 | }
272 | r3, err := table.route(message_id)
273 | if err != nil {
274 | t.Fatal(err.Error())
275 | }
276 | if r3 == nil {
277 | t.Fatal("Route returned nil Node.")
278 | }
279 | if !r3.ID.Equals(first_id) {
280 | t.Fatalf("Expected Node %s, got Node %s instead.", first_id, r3.ID)
281 | }
282 | }
283 | 
284 | // Test routing to a direct match in the routing table
285 | func TestRoutingTableRouteMatch(t *testing.T) {
286 | self_id, err := NodeIDFromBytes([]byte("1234567890abcdeg"))
287 | if err != nil {
288 | t.Fatal(err.Error())
289 | }
290 | self := NewNode(self_id, "127.0.0.1", "127.0.0.1", "testing", 55555)
291 | 
292 | table := newRoutingTable(self)
293 | 
294 | first_id, err := NodeIDFromBytes([]byte("1234567890acdefg"))
295 | if err != nil {
296 | t.Fatal(err.Error())
297 | }
298 | first := NewNode(first_id, "127.0.0.2", "127.0.0.2", "testing", 55555)
299 | r, err := table.insertNode(*first, self.Proximity(first))
300 | if err != nil {
301 | t.Fatal(err.Error())
302 | }
303 | if r == nil {
304 | t.Fatal("Insert returned nil.")
305 | }
306 | message_id, err := NodeIDFromBytes([]byte("1234567890acdefg"))
307 | if err != nil {
308 | t.Fatal(err.Error())
309 | }
310 | if !message_id.Equals(first_id) {
311 | t.Fatalf("Expected ID of %s, got %s instead.", first_id, message_id)
312 | }
313 | r3, err := table.route(message_id)
314 | if err != nil {
315 | t.Fatal(err.Error())
316 | }
317 | if r3 == nil {
318 | t.Fatal("Route returned nil Node.")
319 | }
323 | if !r3.ID.Equals(first_id) {
324 | t.Fatalf("Expected
Node %s, got Node %s instead.", first_id, r3.ID) 325 | } 326 | } 327 | 328 | ////////////////////////////////////////////////////////////////////////// 329 | ////////////////////////// Benchmarks //////////////////////////////////// 330 | ////////////////////////////////////////////////////////////////////////// 331 | 332 | // seed used for random number generator in all benchmarks 333 | const randSeed = 42 334 | 335 | var benchRand = rand.New(rand.NewSource(0)) 336 | 337 | func randomNodeID() NodeID { 338 | r := benchRand 339 | lo := uint64(r.Uint32())<<32 | uint64(r.Uint32()) 340 | hi := uint64(r.Uint32())<<32 | uint64(r.Uint32()) 341 | return NodeID{lo, hi} 342 | } 343 | 344 | // How fast can we insert nodes 345 | func BenchmarkRoutingTableInsert(b *testing.B) { 346 | b.StopTimer() 347 | selfId, err := NodeIDFromBytes([]byte("this is a test Node for testing purposes only.")) 348 | if err != nil { 349 | b.Fatalf(err.Error()) 350 | } 351 | self := NewNode(selfId, "127.0.0.1", "127.0.0.1", "testing", 55555) 352 | 353 | table := newRoutingTable(self) 354 | benchRand.Seed(randSeed) 355 | 356 | b.StartTimer() 357 | for i := 0; i < b.N; i++ { 358 | otherId := randomNodeID() 359 | other := *NewNode(otherId, "127.0.0.2", "127.0.0.2", "testing", 55555) 360 | _, err = table.insertNode(other, self.Proximity(&other)) 361 | } 362 | } 363 | 364 | // How fast can we retrieve nodes by ID 365 | func BenchmarkRoutingTableGetByID(b *testing.B) { 366 | b.StopTimer() 367 | selfId, err := NodeIDFromBytes([]byte("this is a test Node for testing purposes only.")) 368 | if err != nil { 369 | b.Fatalf(err.Error()) 370 | } 371 | self := NewNode(selfId, "127.0.0.1", "127.0.0.1", "testing", 55555) 372 | 373 | table := newRoutingTable(self) 374 | benchRand.Seed(randSeed) 375 | 376 | otherId := randomNodeID() 377 | other := *NewNode(otherId, "127.0.0.2", "127.0.0.2", "testing", 55555) 378 | _, err = table.insertNode(other, self.Proximity(&other)) 379 | if err != nil { 380 | b.Fatalf(err.Error()) 381 | } 382 | b.StartTimer() 383 | for i := 0; i < b.N; i++ { 384 | table.getNode(other.ID) 385 | } 386 | } 387 | 388 | var benchTable *routingTable 389 | 390 | func initBenchTable(b *testing.B) { 391 | selfId, err := NodeIDFromBytes([]byte("this is a test Node for testing purposes only.")) 392 | if err != nil { 393 | b.Fatalf(err.Error()) 394 | } 395 | self := NewNode(selfId, "127.0.0.1", "127.0.0.1", "testing", 55555) 396 | benchTable = newRoutingTable(self) 397 | benchRand.Seed(randSeed) 398 | 399 | for i := 0; i < 100000; i++ { 400 | id := randomNodeID() 401 | node := NewNode(id, "127.0.0.1", "127.0.0.1", "testing", 55555) 402 | _, err = benchTable.insertNode(*node, self.Proximity(node)) 403 | if err != nil { 404 | b.Fatal(err.Error()) 405 | } 406 | } 407 | } 408 | 409 | // How fast can we route messages 410 | func BenchmarkRoutingTableRoute(b *testing.B) { 411 | b.StopTimer() 412 | if benchTable == nil { 413 | initBenchTable(b) 414 | } 415 | benchRand.Seed(randSeed) 416 | b.StartTimer() 417 | 418 | for i := 0; i < b.N; i++ { 419 | id := randomNodeID() 420 | _, err := benchTable.route(id) 421 | if err != nil && err != nodeNotFoundError { 422 | b.Fatalf(err.Error()) 423 | } 424 | } 425 | } 426 | 427 | // How fast can we dump the nodes in the table 428 | func BenchmarkRoutingTableDump(b *testing.B) { 429 | b.StopTimer() 430 | if benchTable == nil { 431 | initBenchTable(b) 432 | } 433 | b.StartTimer() 434 | for i := 0; i < b.N; i++ { 435 | benchTable.list([]int{}, []int{}) 436 | } 437 | } 438 | 439 | func 
BenchmarkRoutingTableDumpPartial(b *testing.B) { 440 | b.StopTimer() 441 | if benchTable == nil { 442 | initBenchTable(b) 443 | } 444 | b.StartTimer() 445 | for i := 0; i < b.N; i++ { 446 | benchTable.list([]int{0, 1, 2, 3, 4, 5, 6}, []int{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}) 447 | } 448 | } 449 | 450 | func BenchmarkRoutingTableExport(b *testing.B) { 451 | b.StopTimer() 452 | if benchTable == nil { 453 | initBenchTable(b) 454 | } 455 | b.StartTimer() 456 | for i := 0; i < b.N; i++ { 457 | benchTable.export([]int{}, []int{}) 458 | } 459 | } 460 | 461 | func BenchmarkRoutingTableExportPartial(b *testing.B) { 462 | b.StopTimer() 463 | if benchTable == nil { 464 | initBenchTable(b) 465 | } 466 | b.StartTimer() 467 | for i := 0; i < b.N; i++ { 468 | benchTable.export([]int{0, 1, 2, 3, 4, 5, 6}, []int{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}) 469 | } 470 | } 471 | -------------------------------------------------------------------------------- /wendy.go: -------------------------------------------------------------------------------- 1 | package wendy 2 | 3 | import ( 4 | "errors" 5 | "fmt" 6 | ) 7 | 8 | const ( 9 | LogLevelDebug = iota 10 | LogLevelWarn 11 | LogLevelError 12 | ) 13 | 14 | // Application is an interface that other packages can fulfill to hook into Wendy. 15 | // 16 | // OnError is called on errors that are even remotely recoverable, passing the error that was raised. 17 | // 18 | // OnDeliver is called when the current Node is determined to be the final destination of a Message. It passes the Message that was received. 19 | // 20 | // OnForward is called immediately before a Message is forwarded to the next Node in its route through the Cluster. The function receives a pointer to the Message, which can be modified before it is sent, and the ID of the next step in the Message's route. The function must return a boolean; true if the Message should continue its way through the Cluster, false if the Message should be prematurely terminated instead of forwarded. 21 | // 22 | // OnNewLeaves is called when the current Node's leafSet is updated. The function receives a dump of the leafSet. 23 | // 24 | // OnNodeJoin is called when the current Node learns of a new Node in the Cluster. It receives the Node that just joined. 25 | // 26 | // OnNodeExit is called when a Node is discovered to no longer be participating in the Cluster. It is passed the Node that just left the Cluster. Note that by the time this method is called, the Node is no longer reachable. 27 | // 28 | // OnHeartbeat is called when the current Node receives a heartbeat from another Node. Heartbeats are sent at a configurable interval, if no messages have been sent between the Nodes, and serve the purpose of a health check. 29 | type Application interface { 30 | OnError(err error) 31 | OnDeliver(msg Message) 32 | OnForward(msg *Message, nextId NodeID) bool // return False if Wendy should not forward 33 | OnNewLeaves(leafset []*Node) 34 | OnNodeJoin(node Node) 35 | OnNodeExit(node Node) 36 | OnHeartbeat(node Node) 37 | } 38 | 39 | // Credentials is an interface that can be fulfilled to limit access to the Cluster. 
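// As a hypothetical example, using the Passphrase implementation below:
//
//	cluster := NewCluster(node, Passphrase("a shared secret"))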
40 | type Credentials interface { 41 | Valid([]byte) bool 42 | Marshal() []byte 43 | } 44 | 45 | // Passphrase is an implementation of Credentials that grants access to the Cluster if the Node has the same Passphrase set 46 | type Passphrase string 47 | 48 | func (p Passphrase) Valid(supplied []byte) bool { 49 | return string(supplied) == string(p) 50 | } 51 | 52 | func (p Passphrase) Marshal() []byte { 53 | return []byte(p) 54 | } 55 | 56 | // Errors! 57 | var deadNodeError = errors.New("Node did not respond to heartbeat.") 58 | var nodeNotFoundError = errors.New("Node not found.") 59 | var impossibleError = errors.New("This error should never be reached. It's logically impossible.") 60 | 61 | // IdentityError represents an error that was raised when a Node attempted to perform actions on its state tables using its own ID, which is problematic. It is its own type for the purposes of handling the error. 62 | type IdentityError struct { 63 | Action string 64 | Preposition string 65 | Container string 66 | } 67 | 68 | // Error returns the IdentityError as a string and fulfills the error interface. 69 | func (e IdentityError) Error() string { 70 | return fmt.Sprintf("IdentityError: Tried to %s myself %s the %s.", e.Action, e.Preposition, e.Container) 71 | } 72 | 73 | func throwIdentityError(action, prep, container string) IdentityError { 74 | return IdentityError{ 75 | Action: action, 76 | Preposition: prep, 77 | Container: container, 78 | } 79 | } 80 | 81 | // InvalidArgumentError represents an error that is raised when arguments that are invalid are passed to a function that depends on those arguments. It is its own type for the purposes of handling the error. 82 | type InvalidArgumentError string 83 | 84 | func (e InvalidArgumentError) Error() string { 85 | return fmt.Sprintf("InvalidArgumentError: %s", e) 86 | } 87 | 88 | func throwInvalidArgumentError(msg string) InvalidArgumentError { 89 | return InvalidArgumentError(msg) 90 | } 91 | --------------------------------------------------------------------------------
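For reference, here is a minimal sketch of a type satisfying the Application interface defined in wendy.go above. The `logApp` type, its package name, and its log output are hypothetical illustrations; only the method set is dictated by the interface.

```go
package example

import (
	"log"

	"secondbit.org/wendy"
)

// logApp is a hypothetical Application implementation that logs every
// callback and always lets Messages continue through the Cluster.
type logApp struct{}

func (a *logApp) OnError(err error)           { log.Println("wendy error:", err) }
func (a *logApp) OnDeliver(msg wendy.Message) { log.Println("message delivered") }

// OnForward must return true for the Message to continue on its route.
func (a *logApp) OnForward(msg *wendy.Message, nextId wendy.NodeID) bool {
	log.Println("forwarding message towards", nextId)
	return true
}

func (a *logApp) OnNewLeaves(leafset []*wendy.Node) { log.Println("leaf set updated") }
func (a *logApp) OnNodeJoin(node wendy.Node)        { log.Println("node joined:", node.ID) }
func (a *logApp) OnNodeExit(node wendy.Node)        { log.Println("node left:", node.ID) }
func (a *logApp) OnHeartbeat(node wendy.Node)       { log.Println("heartbeat from", node.ID) }

// compile-time check that *logApp satisfies wendy.Application
var _ wendy.Application = (*logApp)(nil)
```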