├── .gitignore ├── README.md ├── bin └── .gitignore ├── p2.pdf ├── pkg └── .gitignore ├── sols ├── darwin_amd64 │ ├── crunner │ ├── lrunner │ ├── srunner │ └── trunner └── linux_amd64 │ ├── crunner │ ├── lrunner │ ├── srunner │ └── trunner ├── src └── github.com │ └── cmu440 │ └── tribbler │ ├── libstore │ ├── libstore_api.go │ └── libstore_impl.go │ ├── rpc │ ├── librpc │ │ └── rpc.go │ ├── storagerpc │ │ ├── proto.go │ │ └── rpc.go │ └── tribrpc │ │ ├── proto.go │ │ └── rpc.go │ ├── runners │ ├── crunner │ │ └── crunner.go │ ├── lrunner │ │ └── lrunner.go │ ├── srunner │ │ └── srunner.go │ └── trunner │ │ └── trunner.go │ ├── storageserver │ ├── storageserver_api.go │ └── storageserver_impl.go │ ├── tests │ ├── libtest │ │ └── libtest.go │ ├── proxycounter │ │ └── proxycounter.go │ ├── storagetest │ │ └── storagetest.go │ ├── stresstest │ │ └── stresstest.go │ └── tribtest │ │ └── tribtest.go │ ├── tribclient │ ├── tribclient_api.go │ └── tribclient_impl.go │ └── tribserver │ ├── tribserver_api.go │ └── tribserver_impl.go └── tests ├── libtest.sh ├── libtest2.sh ├── runall.sh ├── storagetest.sh ├── storagetest2.sh ├── stresstest.sh └── tribtest.sh /.gitignore: -------------------------------------------------------------------------------- 1 | *.o 2 | *.a 3 | *.so 4 | 5 | # Folders 6 | _obj 7 | _test 8 | 9 | # Architecture specific extensions/prefixes 10 | *.[568vq] 11 | [568vq].out 12 | 13 | *.cgo1.go 14 | *.cgo2.c 15 | _cgo_defun.c 16 | _cgo_gotypes.go 17 | _cgo_export.* 18 | 19 | _testmain.go 20 | 21 | *.exe 22 | *.test -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | p2 2 | == 3 | 4 | This repository contains the starter code for project 2 (15-440, Spring 2014). 5 | These instructions assume you have set your `GOPATH` to point to the repository's 6 | root `p2/` directory. 
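For example, you might set `GOPATH` as follows (the path shown is a placeholder; substitute wherever you cloned the repository):

```bash
# Point GOPATH at the repository's root p2/ directory
# (adjust the path to match your own checkout location).
export GOPATH=/path/to/p2
```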
7 | 8 | This project was designed for, and tested on, AFS cluster machines, though you may choose to 9 | write and build your code locally as well. 10 | 11 | ## Starter Code 12 | 13 | The starter code for this project is organized roughly as follows: 14 | 15 | ``` 16 | bin/                               Student-compiled binaries 17 | 18 | sols/                              Staff-compiled binaries 19 |   darwin_amd64/                    Staff-compiled Mac OS X executables 20 |     crunner                        Staff-compiled TribClient-runner 21 |     trunner                        Staff-compiled TribServer-runner 22 |     lrunner                        Staff-compiled Libstore-runner 23 |     srunner                        Staff-compiled StorageServer-runner 24 | 25 |   linux_amd64/                     Staff-compiled Linux executables 26 |     (see above) 27 | 28 | src/github.com/cmu440/tribbler/ 29 |   tribclient/                      TribClient implementation 30 |   tribserver/                      TODO: implement the TribServer 31 |   libstore/                        TODO: implement the Libstore 32 |   storageserver/                   TODO: implement the StorageServer 33 | 34 |   tests/                           Source code for official tests 35 |     proxycounter/                  Utility package used by the official tests 36 |     tribtest/                      Tests the TribServer 37 |     libtest/                       Tests the Libstore 38 |     storagetest/                   Tests the StorageServer 39 |     stresstest/                    Tests everything 40 | 41 |   rpc/ 42 |     tribrpc/                       TribServer RPC helpers/constants 43 |     librpc/                        Libstore RPC helpers/constants 44 |     storagerpc/                    StorageServer RPC helpers/constants 45 | 46 | tests/                             Shell scripts to run the tests 47 | ``` 48 | 49 | ## Instructions 50 | 51 | ### Compiling your code 52 | 53 | To compile your code, execute one or more of the following commands (the 54 | resulting binaries will be located in the `$GOPATH/bin` directory): 55 | 56 | ```bash 57 | go install github.com/cmu440/tribbler/runners/srunner 58 | go install github.com/cmu440/tribbler/runners/lrunner 59 | go install github.com/cmu440/tribbler/runners/trunner 60 | go install github.com/cmu440/tribbler/runners/crunner 61 | ``` 62 | 63 | To simply check that your code compiles (i.e.
without creating the binaries), 64 | you can use the `go build` subcommand to compile an individual package as shown below: 65 | 66 | ```bash 67 | # Build/compile the "tribserver" package. 68 | go build path/to/tribserver 69 | 70 | # A different way to build/compile the "tribserver" package. 71 | go build github.com/cmu440/tribbler/tribserver 72 | ``` 73 | 74 | ##### How to Write Go Code 75 | 76 | If at any point you have any trouble with building, installing, or testing your code, the article 77 | titled [How to Write Go Code](http://golang.org/doc/code.html) is a great resource for understanding 78 | how Go workspaces are built and organized. You might also find the documentation for the 79 | [`go` command](http://golang.org/cmd/go/) to be helpful. As always, feel free to post your questions 80 | on Piazza. 81 | 82 | ### Running your code 83 | 84 | To run and test the individual components that make up the Tribbler system, we have provided 85 | four simple programs that aim to simplify the process. The programs are located in the 86 | `p2/src/github.com/cmu440/tribbler/runners/` directory and may be executed from anywhere on your system. 87 | Each program is discussed individually below: 88 | 89 | ##### The `srunner` program 90 | 91 | The `srunner` (`StorageServer`-runner) program creates and runs an instance of your 92 | `StorageServer` implementation. Some example usage is provided below: 93 | 94 | ```bash 95 | # Start a single master storage server on port 9009. 96 | ./srunner -port=9009 97 | 98 | # Start the master on port 9009 and run two additional slaves. 99 | ./srunner -port=9009 -N=3 100 | ./srunner -master="localhost:9009" 101 | ./srunner -master="localhost:9009" 102 | ``` 103 | 104 | Note that in the above example you do not need to specify a port for your slave storage servers. 105 | For additional usage instructions, please execute `./srunner -help` or consult the `srunner.go` source code. 
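If you start several storage servers in the background this way, note that they keep running until you stop them; your shell's job control can clean them up. A short sketch using only the flags documented above:

```bash
# Start a master that expects three storage servers in total,
# plus two slaves, all in the background.
./srunner -port=9009 -N=3 &
./srunner -master="localhost:9009" &
./srunner -master="localhost:9009" &

# ... interact with the cluster on localhost:9009 ...

# Kill all background storage servers started from this shell.
kill $(jobs -p)
```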
106 | 107 | ##### The `lrunner` program 108 | 109 | The `lrunner` (`Libstore`-runner) program creates and runs an instance of your `Libstore` 110 | implementation. It enables you to execute `Libstore` methods from the command line, as shown 111 | in the example below: 112 | 113 | ```bash 114 | # Create one (or more) storage servers in the background. 115 | ./srunner -port=9009 & 116 | 117 | # Execute Put("thom", "yorke") 118 | ./lrunner -port=9009 p thom yorke 119 | OK 120 | 121 | # Execute Get("thom") 122 | ./lrunner -port=9009 g thom 123 | yorke 124 | 125 | # Execute Get("jonny") 126 | ./lrunner -port=9009 g jonny 127 | ERROR: Get operation failed with status KeyNotFound 128 | ``` 129 | 130 | Note that the exact error messages that are output by the `lrunner` program may differ 131 | depending on how your `Libstore` is implemented. For additional usage instructions, please 132 | execute `./lrunner -help` or consult the `lrunner.go` source code. 133 | 134 | ##### The `trunner` program 135 | 136 | The `trunner` (`TribServer`-runner) program creates and runs an instance of your 137 | `TribServer` implementation. For usage instructions, please execute `./trunner -help` or consult the 138 | `trunner.go` source code. In order to use this program for your own personal testing, 139 | your `Libstore` implementation must function properly and one or more storage servers 140 | (i.e. `srunner` programs) must be running in the background. 141 | 142 | ##### The `crunner` program 143 | 144 | The `crunner` (`TribClient`-runner) program creates and runs an instance of the 145 | `TribClient` implementation we have provided as part of the starter code. 146 | For usage instructions, please execute `./crunner -help` or consult the 147 | `crunner.go` source code. As with the above programs, you'll need to start one or 148 | more Tribbler servers and storage servers beforehand so that the `TribClient` 149 | will have someone to communicate with.
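As a concrete sketch, assuming a TribServer is already listening on `crunner`'s default port (9010, per `crunner.go`), a session might look like the following. The user IDs are made up, the commands are the ones listed in `crunner.go`'s usage message, and the exact output depends on your implementation:

```bash
# Create two users.
./crunner uc thom
./crunner uc jonny

# Subscribe thom to jonny, then post a tribble and read thom's feed.
./crunner sa thom jonny
./crunner tp jonny "in rainbows"
./crunner ts thom
```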
150 | 151 | ##### Staff-compiled binaries 152 | 153 | Last but not least, we have also provided pre-compiled binaries (i.e. binaries compiled from our own 154 | reference solutions) for each of the programs discussed above. 155 | The binaries are located in the `p2/sols/` directory and have been compiled for both 64-bit Mac OS X 156 | and Linux machines. Similar to the staff-compiled binaries we provided in project 1, 157 | we hope these will help you test the individual components of your Tribbler system. 158 | 159 | ### Executing the official tests 160 | 161 | The tests for this project are provided as bash shell scripts in the `p2/tests` directory. 162 | The scripts may be run from anywhere on your system (assuming your `GOPATH` has been set and 163 | they are being executed on a 64-bit Mac OS X or Linux machine). For example, to run the 164 | `libtest.sh` test, simply execute the following: 165 | 166 | ```bash 167 | $GOPATH/tests/libtest.sh 168 | ``` 169 | 170 | Note that these bash scripts link against both your own implementations as well as the test 171 | code located in the `p2/src/github.com/cmu440/tribbler/tests/` directory. What's more, a few of these tests 172 | will also run against the staff-solution binaries discussed above, 173 | thus enabling us to test the correctness of individual components of your system 174 | as opposed to your entire Tribbler system as a whole. 175 | 176 | If you and your partner are still confused about the behavior of the testing scripts (even 177 | after you've analyzed their source code), please don't hesitate to ask us a question on Piazza!
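The `p2/tests` directory also contains a `runall.sh` script; as its name suggests, you can use it to run all of the official tests in one shot:

```bash
$GOPATH/tests/runall.sh
```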
178 | 179 | ### Submitting to Autolab 180 | 181 | To submit your code to Autolab, create a `tribbler.tar` file containing your implementation as follows: 182 | 183 | ```sh 184 | cd $GOPATH/src/github.com/cmu440 185 | tar -cvf tribbler.tar tribbler/ 186 | ``` 187 | 188 | ## Miscellaneous 189 | 190 | ### Reading the starter code documentation 191 | 192 | Before you begin the project, you should read and understand all of the starter code we provide. 193 | To make this experience a little less traumatic, fire up a web server and read the 194 | documentation in a browser by executing the following command: 195 | 196 | ```sh 197 | godoc -http=:6060 & 198 | ``` 199 | 200 | Then, navigate to [localhost:6060/pkg/github.com/cmu440/tribbler](http://localhost:6060/pkg/github.com/cmu440/tribbler) 201 | in a browser (note that you can execute this command from anywhere in your system, assuming your `GOPATH` 202 | is set correctly). 203 | 204 | ### Using Go on AFS 205 | 206 | For those students who wish to write their Go code on AFS (either in a cluster or remotely), you will 207 | need to set the `GOROOT` environment variable as follows (this is required because Go is installed 208 | in a custom location on AFS machines): 209 | 210 | ```bash 211 | export GOROOT=/usr/local/lib/go 212 | ``` 213 | -------------------------------------------------------------------------------- /bin/.gitignore: -------------------------------------------------------------------------------- 1 | *runner 2 | *test 3 | -------------------------------------------------------------------------------- /p2.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cmu440/p2/f9cf26ba49a387f7c3eb1e91336144d640b99d36/p2.pdf -------------------------------------------------------------------------------- /pkg/.gitignore: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/cmu440/p2/f9cf26ba49a387f7c3eb1e91336144d640b99d36/pkg/.gitignore -------------------------------------------------------------------------------- /sols/darwin_amd64/crunner: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cmu440/p2/f9cf26ba49a387f7c3eb1e91336144d640b99d36/sols/darwin_amd64/crunner -------------------------------------------------------------------------------- /sols/darwin_amd64/lrunner: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cmu440/p2/f9cf26ba49a387f7c3eb1e91336144d640b99d36/sols/darwin_amd64/lrunner -------------------------------------------------------------------------------- /sols/darwin_amd64/srunner: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cmu440/p2/f9cf26ba49a387f7c3eb1e91336144d640b99d36/sols/darwin_amd64/srunner -------------------------------------------------------------------------------- /sols/darwin_amd64/trunner: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cmu440/p2/f9cf26ba49a387f7c3eb1e91336144d640b99d36/sols/darwin_amd64/trunner -------------------------------------------------------------------------------- /sols/linux_amd64/crunner: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cmu440/p2/f9cf26ba49a387f7c3eb1e91336144d640b99d36/sols/linux_amd64/crunner -------------------------------------------------------------------------------- /sols/linux_amd64/lrunner: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cmu440/p2/f9cf26ba49a387f7c3eb1e91336144d640b99d36/sols/linux_amd64/lrunner -------------------------------------------------------------------------------- 
/sols/linux_amd64/srunner: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cmu440/p2/f9cf26ba49a387f7c3eb1e91336144d640b99d36/sols/linux_amd64/srunner -------------------------------------------------------------------------------- /sols/linux_amd64/trunner: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cmu440/p2/f9cf26ba49a387f7c3eb1e91336144d640b99d36/sols/linux_amd64/trunner -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/libstore/libstore_api.go: -------------------------------------------------------------------------------- 1 | // DO NOT MODIFY! 2 | 3 | package libstore 4 | 5 | import ( 6 | "hash/fnv" 7 | 8 | "github.com/cmu440/tribbler/rpc/storagerpc" 9 | ) 10 | 11 | // LeaseMode is a debugging flag that determines how the Libstore should 12 | // request/handle leases. 13 | type LeaseMode int 14 | 15 | const ( 16 | Never LeaseMode = iota // Never request leases. 17 | Normal // Behave as normal. 18 | Always // Always request leases. 19 | ) 20 | 21 | // Libstore defines the set of methods that a TribServer can call on its 22 | // local cache. 23 | type Libstore interface { 24 | Get(key string) (string, error) 25 | Put(key, value string) error 26 | GetList(key string) ([]string, error) 27 | AppendToList(key, newItem string) error 28 | RemoveFromList(key, removeItem string) error 29 | } 30 | 31 | // LeaseCallbacks defines the set of methods that a StorageServer can call 32 | // on a TribServer's local cache. 33 | type LeaseCallbacks interface { 34 | 35 | // RevokeLease is a callback RPC method that is invoked by storage 36 | // servers when a lease is revoked. It should reply with status OK 37 | // if the key was successfully revoked, or with status KeyNotFound 38 | // if the key did not exist in the cache. 
39 | RevokeLease(*storagerpc.RevokeLeaseArgs, *storagerpc.RevokeLeaseReply) error 40 | } 41 | 42 | // StoreHash hashes a string key and returns a 32-bit integer. This function 43 | // is provided here so that all implementations use the same hashing mechanism 44 | // (both the Libstore and StorageServer should use this function to hash keys). 45 | func StoreHash(key string) uint32 { 46 | hasher := fnv.New32() 47 | hasher.Write([]byte(key)) 48 | return hasher.Sum32() 49 | } 50 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/libstore/libstore_impl.go: -------------------------------------------------------------------------------- 1 | package libstore 2 | 3 | import ( 4 | "errors" 5 | 6 | "github.com/cmu440/tribbler/rpc/storagerpc" 7 | ) 8 | 9 | type libstore struct { 10 | // TODO: implement this! 11 | } 12 | 13 | // NewLibstore creates a new instance of a TribServer's libstore. masterServerHostPort 14 | // is the master storage server's host:port. myHostPort is this Libstore's host:port 15 | // (i.e. the callback address that the storage servers should use to send back 16 | // notifications when leases are revoked). 17 | // 18 | // The mode argument is a debugging flag that determines how the Libstore should 19 | // request/handle leases. If mode is Never, then the Libstore should never request 20 | // leases from the storage server (i.e. the GetArgs.WantLease field should always 21 | // be set to false). If mode is Always, then the Libstore should always request 22 | // leases from the storage server (i.e. the GetArgs.WantLease field should always 23 | // be set to true). If mode is Normal, then the Libstore should make its own 24 | // decisions on whether or not a lease should be requested from the storage server, 25 | // based on the requirements specified in the project PDF handout. 
Note that the 26 | // value of the mode flag may also determine whether or not the Libstore should 27 | // register to receive RPCs from the storage servers. 28 | // 29 | // To register the Libstore to receive RPCs from the storage servers, the following 30 | // line of code should suffice: 31 | // 32 | // rpc.RegisterName("LeaseCallbacks", librpc.Wrap(libstore)) 33 | // 34 | // Note that unlike in the NewTribServer and NewStorageServer functions, there is no 35 | // need to create a brand new HTTP handler to serve the requests (the Libstore may 36 | // simply reuse the TribServer's HTTP handler since the two run in the same process). 37 | func NewLibstore(masterServerHostPort, myHostPort string, mode LeaseMode) (Libstore, error) { 38 | return nil, errors.New("not implemented") 39 | } 40 | 41 | func (ls *libstore) Get(key string) (string, error) { 42 | return "", errors.New("not implemented") 43 | } 44 | 45 | func (ls *libstore) Put(key, value string) error { 46 | return errors.New("not implemented") 47 | } 48 | 49 | func (ls *libstore) GetList(key string) ([]string, error) { 50 | return nil, errors.New("not implemented") 51 | } 52 | 53 | func (ls *libstore) RemoveFromList(key, removeItem string) error { 54 | return errors.New("not implemented") 55 | } 56 | 57 | func (ls *libstore) AppendToList(key, newItem string) error { 58 | return errors.New("not implemented") 59 | } 60 | 61 | func (ls *libstore) RevokeLease(args *storagerpc.RevokeLeaseArgs, reply *storagerpc.RevokeLeaseReply) error { 62 | return errors.New("not implemented") 63 | } 64 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/rpc/librpc/rpc.go: -------------------------------------------------------------------------------- 1 | // This file provides a type-safe wrapper that should be used to register 2 | // the libstore to receive RPCs from the storage server. DO NOT MODIFY! 
3 | 4 | package librpc 5 | 6 | import "github.com/cmu440/tribbler/rpc/storagerpc" 7 | 8 | // STAFF USE ONLY! Students should not use this interface in their code. 9 | type RemoteLeaseCallbacks interface { 10 | RevokeLease(*storagerpc.RevokeLeaseArgs, *storagerpc.RevokeLeaseReply) error 11 | } 12 | 13 | type LeaseCallbacks struct { 14 | // Embed all methods into the struct. See the Effective Go section about 15 | // embedding for more details: golang.org/doc/effective_go.html#embedding 16 | RemoteLeaseCallbacks 17 | } 18 | 19 | // Wrap wraps l in a type-safe wrapper struct to ensure that only the desired 20 | // LeaseCallbacks methods are exported to receive RPCs. 21 | func Wrap(l RemoteLeaseCallbacks) RemoteLeaseCallbacks { 22 | return &LeaseCallbacks{l} 23 | } 24 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/rpc/storagerpc/proto.go: -------------------------------------------------------------------------------- 1 | // This file contains constants and arguments used to perform RPCs between 2 | // a TribServer's local Libstore and the storage servers. DO NOT MODIFY! 3 | 4 | package storagerpc 5 | 6 | // Status represents the status of a RPC's reply. 7 | type Status int 8 | 9 | const ( 10 | OK Status = iota + 1 // The RPC was a success. 11 | KeyNotFound // The specified key does not exist. 12 | ItemNotFound // The specified item does not exist. 13 | WrongServer // The specified key does not fall in the server's hash range. 14 | ItemExists // The item already exists in the list. 15 | NotReady // The storage servers are still getting ready. 16 | ) 17 | 18 | // Lease constants. 19 | const ( 20 | QueryCacheSeconds = 10 // Time period used for tracking queries/determining whether to request leases. 21 | QueryCacheThresh = 3 // If QueryCacheThresh queries in last QueryCacheSeconds, then request a lease. 22 | LeaseSeconds = 10 // Number of seconds a lease should remain valid. 
23 | LeaseGuardSeconds = 2 // Additional seconds a server should wait before invalidating a lease. 24 | ) 25 | 26 | // Lease stores information about a lease sent from the storage servers. 27 | type Lease struct { 28 | Granted bool 29 | ValidSeconds int 30 | } 31 | 32 | type Node struct { 33 | HostPort string // The host:port address of the storage server node. 34 | NodeID uint32 // The ID identifying this storage server node. 35 | } 36 | 37 | type RegisterArgs struct { 38 | ServerInfo Node 39 | } 40 | 41 | type RegisterReply struct { 42 | Status Status 43 | Servers []Node 44 | } 45 | 46 | type GetServersArgs struct { 47 | // Intentionally left empty. 48 | } 49 | 50 | type GetServersReply struct { 51 | Status Status 52 | Servers []Node 53 | } 54 | 55 | type GetArgs struct { 56 | Key string 57 | WantLease bool 58 | HostPort string // The Libstore's callback host:port. 59 | } 60 | 61 | type GetReply struct { 62 | Status Status 63 | Value string 64 | Lease Lease 65 | } 66 | 67 | type GetListReply struct { 68 | Status Status 69 | Value []string 70 | Lease Lease 71 | } 72 | 73 | type PutArgs struct { 74 | Key string 75 | Value string 76 | } 77 | 78 | type PutReply struct { 79 | Status Status 80 | } 81 | 82 | type RevokeLeaseArgs struct { 83 | Key string 84 | } 85 | 86 | type RevokeLeaseReply struct { 87 | Status Status 88 | } 89 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/rpc/storagerpc/rpc.go: -------------------------------------------------------------------------------- 1 | // This file provides a type-safe wrapper that should be used to register the 2 | // storage server to receive RPCs from a TribServer's libstore. DO NOT MODIFY! 3 | 4 | package storagerpc 5 | 6 | // STAFF USE ONLY! Students should not use this interface in their code. 
7 | type RemoteStorageServer interface { 8 | 	RegisterServer(*RegisterArgs, *RegisterReply) error 9 | 	GetServers(*GetServersArgs, *GetServersReply) error 10 | 	Get(*GetArgs, *GetReply) error 11 | 	GetList(*GetArgs, *GetListReply) error 12 | 	Put(*PutArgs, *PutReply) error 13 | 	AppendToList(*PutArgs, *PutReply) error 14 | 	RemoveFromList(*PutArgs, *PutReply) error 15 | } 16 | 17 | type StorageServer struct { 18 | 	// Embed all methods into the struct. See the Effective Go section about 19 | 	// embedding for more details: golang.org/doc/effective_go.html#embedding 20 | 	RemoteStorageServer 21 | } 22 | 23 | // Wrap wraps s in a type-safe wrapper struct to ensure that only the desired 24 | // StorageServer methods are exported to receive RPCs. 25 | func Wrap(s RemoteStorageServer) RemoteStorageServer { 26 | 	return &StorageServer{s} 27 | } 28 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/rpc/tribrpc/proto.go: -------------------------------------------------------------------------------- 1 | // This file contains constants and arguments used to perform RPCs between 2 | // a TribClient and TribServer. DO NOT MODIFY! 3 | 4 | package tribrpc 5 | 6 | import "time" 7 | 8 | // Status represents the status of a RPC's reply. 9 | type Status int 10 | 11 | const ( 12 | 	OK               Status = iota + 1 // The RPC was a success. 13 | 	NoSuchUser                         // The specified UserID does not exist. 14 | 	NoSuchTargetUser                   // The specified TargetUserID does not exist. 15 | 	Exists                             // The specified UserID or TargetUserID already exists. 16 | ) 17 | 18 | // Tribble stores the contents and information identifying a unique 19 | // tribble message. 20 | type Tribble struct { 21 | 	UserID   string    // The user who created the tribble. 22 | 	Posted   time.Time // The exact time the tribble was posted. 23 | 	Contents string    // The text/contents of the tribble message.
24 | } 25 | 26 | type CreateUserArgs struct { 27 | UserID string 28 | } 29 | 30 | type CreateUserReply struct { 31 | Status Status 32 | } 33 | 34 | type SubscriptionArgs struct { 35 | UserID string // The subscribing user. 36 | TargetUserID string // The user being subscribed to. 37 | } 38 | 39 | type SubscriptionReply struct { 40 | Status Status 41 | } 42 | 43 | type PostTribbleArgs struct { 44 | UserID string 45 | Contents string 46 | } 47 | 48 | type PostTribbleReply struct { 49 | Status Status 50 | } 51 | 52 | type GetSubscriptionsArgs struct { 53 | UserID string 54 | } 55 | 56 | type GetSubscriptionsReply struct { 57 | Status Status 58 | UserIDs []string 59 | } 60 | 61 | type GetTribblesArgs struct { 62 | UserID string 63 | } 64 | 65 | type GetTribblesReply struct { 66 | Status Status 67 | Tribbles []Tribble 68 | } 69 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/rpc/tribrpc/rpc.go: -------------------------------------------------------------------------------- 1 | // This file provides a type-safe wrapper that should be used to register 2 | // the TribServer to receive RPCs from the TribClient. DO NOT MODIFY! 3 | 4 | package tribrpc 5 | 6 | // STAFF USE ONLY! Students should not use this interface in their code. 7 | type RemoteTribServer interface { 8 | CreateUser(args *CreateUserArgs, reply *CreateUserReply) error 9 | AddSubscription(args *SubscriptionArgs, reply *SubscriptionReply) error 10 | RemoveSubscription(args *SubscriptionArgs, reply *SubscriptionReply) error 11 | GetSubscriptions(args *GetSubscriptionsArgs, reply *GetSubscriptionsReply) error 12 | PostTribble(args *PostTribbleArgs, reply *PostTribbleReply) error 13 | GetTribbles(args *GetTribblesArgs, reply *GetTribblesReply) error 14 | GetTribblesBySubscription(args *GetTribblesArgs, reply *GetTribblesReply) error 15 | } 16 | 17 | type TribServer struct { 18 | // Embed all methods into the struct. 
See the Effective Go section about 19 | 	// embedding for more details: golang.org/doc/effective_go.html#embedding 20 | 	RemoteTribServer 21 | } 22 | 23 | // Wrap wraps t in a type-safe wrapper struct to ensure that only the desired 24 | // TribServer methods are exported to receive RPCs. You should Wrap your TribServer 25 | // before registering it for RPC in your TribServer's NewTribServer function 26 | // like so: 27 | // 28 | //     tribServer := new(tribServer) 29 | // 30 | //     // Create the server socket that will listen for incoming RPCs. 31 | //     listener, err := net.Listen("tcp", myHostPort) 32 | //     if err != nil { 33 | //         return nil, err 34 | //     } 35 | // 36 | //     // Wrap the tribServer before registering it for RPC. 37 | //     err = rpc.RegisterName("TribServer", tribrpc.Wrap(tribServer)) 38 | //     if err != nil { 39 | //         return nil, err 40 | //     } 41 | // 42 | //     // Setup the HTTP handler that will serve incoming RPCs and 43 | //     // serve requests in a background goroutine. 44 | //     rpc.HandleHTTP() 45 | //     go http.Serve(listener, nil) 46 | // 47 | //     return tribServer, nil 48 | func Wrap(t RemoteTribServer) RemoteTribServer { 49 | 	return &TribServer{t} 50 | } 51 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/runners/crunner/crunner.go: -------------------------------------------------------------------------------- 1 | // A simple program that you may use to test your TribServer. DO NOT MODIFY!
2 | 3 | package main 4 | 5 | import ( 6 | 	"flag" 7 | 	"fmt" 8 | 	"log" 9 | 	"os" 10 | 	"strings" 11 | 12 | 	"github.com/cmu440/tribbler/rpc/tribrpc" 13 | 	"github.com/cmu440/tribbler/tribclient" 14 | ) 15 | 16 | var port = flag.Int("port", 9010, "TribServer port number") 17 | 18 | type cmdInfo struct { 19 | 	cmdline  string 20 | 	funcname string 21 | 	nargs    int 22 | } 23 | 24 | func init() { 25 | 	log.SetFlags(log.Lshortfile | log.Lmicroseconds) 26 | 	flag.Usage = func() { 27 | 		fmt.Fprintln(os.Stderr, "The crunner program is a testing tool that creates and runs an instance") 28 | 		fmt.Fprintln(os.Stderr, "of the TribClient. You may use it to test the correctness of your TribServer.\n") 29 | 		fmt.Fprintln(os.Stderr, "Usage:") 30 | 		flag.PrintDefaults() 31 | 		fmt.Fprintln(os.Stderr) 32 | 		fmt.Fprintln(os.Stderr, "Possible commands:") 33 | 		fmt.Fprintln(os.Stderr, "  CreateUser:                uc userID") 34 | 		fmt.Fprintln(os.Stderr, "  GetSubscriptions:          sl userID") 35 | 		fmt.Fprintln(os.Stderr, "  AddSubscriptions:          sa userID targetUserID") 36 | 		fmt.Fprintln(os.Stderr, "  RemoveSubscriptions:       sr userID targetUserID") 37 | 		fmt.Fprintln(os.Stderr, "  GetTribbles:               tl userID") 38 | 		fmt.Fprintln(os.Stderr, "  PostTribbles:              tp userID contents") 39 | 		fmt.Fprintln(os.Stderr, "  GetTribblesBySubscription: ts userID") 40 | 	} 41 | } 42 | 43 | func main() { 44 | 	flag.Parse() 45 | 	if flag.NArg() < 2 { 46 | 		flag.Usage() 47 | 		os.Exit(1) 48 | 	} 49 | 	cmd := flag.Arg(0) 50 | 	client, err := tribclient.NewTribClient("localhost", *port) 51 | 	if err != nil { 52 | 		log.Fatalln("Failed to create TribClient:", err) 53 | 	} 54 | 55 | 	cmdlist := []cmdInfo{ 56 | 		{"uc", "TribServer.CreateUser", 1}, 57 | 		{"sl", "TribServer.GetSubscriptions", 1}, 58 | 		{"sa", "TribServer.AddSubscription", 2}, 59 | 		{"sr", "TribServer.RemoveSubscription", 2}, 60 | 		{"tl", "TribServer.GetTribbles", 1}, 61 | 		{"tp", "TribServer.AddTribble", 2}, 62 | 		{"ts", "TribServer.GetTribblesBySubscription", 1}, 63 | 	} 64 | 65 | 	cmdmap := make(map[string]cmdInfo) 66 | 	for
_, j := range cmdlist { 67 | cmdmap[j.cmdline] = j 68 | } 69 | 70 | ci, found := cmdmap[cmd] 71 | if !found { 72 | flag.Usage() 73 | os.Exit(1) 74 | } 75 | if flag.NArg() < (ci.nargs + 1) { 76 | flag.Usage() 77 | os.Exit(1) 78 | } 79 | 80 | switch cmd { 81 | case "uc": // user create 82 | status, err := client.CreateUser(flag.Arg(1)) 83 | printStatus(ci.funcname, status, err) 84 | case "sl": // subscription list 85 | subs, status, err := client.GetSubscriptions(flag.Arg(1)) 86 | printStatus(ci.funcname, status, err) 87 | if err == nil && status == tribrpc.OK { 88 | fmt.Println(strings.Join(subs, " ")) 89 | } 90 | case "sa": 91 | status, err := client.AddSubscription(flag.Arg(1), flag.Arg(2)) 92 | printStatus(ci.funcname, status, err) 93 | case "sr": // subscription remove 94 | status, err := client.RemoveSubscription(flag.Arg(1), flag.Arg(2)) 95 | printStatus(ci.funcname, status, err) 96 | case "tl": // tribble list 97 | tribbles, status, err := client.GetTribbles(flag.Arg(1)) 98 | printStatus(ci.funcname, status, err) 99 | if err == nil && status == tribrpc.OK { 100 | printTribbles(tribbles) 101 | } 102 | case "ts": // tribbles by subscription 103 | tribbles, status, err := client.GetTribblesBySubscription(flag.Arg(1)) 104 | printStatus(ci.funcname, status, err) 105 | if err == nil && status == tribrpc.OK { 106 | printTribbles(tribbles) 107 | } 108 | case "tp": // tribble post 109 | status, err := client.PostTribble(flag.Arg(1), flag.Arg(2)) 110 | printStatus(ci.funcname, status, err) 111 | } 112 | } 113 | 114 | func tribStatusToString(status tribrpc.Status) (s string) { 115 | switch status { 116 | case tribrpc.OK: 117 | s = "OK" 118 | case tribrpc.NoSuchUser: 119 | s = "NoSuchUser" 120 | case tribrpc.NoSuchTargetUser: 121 | s = "NoSuchTargetUser" 122 | case tribrpc.Exists: 123 | s = "Exists" 124 | } 125 | return 126 | } 127 | 128 | func printStatus(cmdName string, status tribrpc.Status, err error) { 129 | if err != nil { 130 | fmt.Println("ERROR:", cmdName, "got 
error:", err) 131 | } else if status != tribrpc.OK { 132 | fmt.Println("ERROR:", cmdName, "replied with status", tribStatusToString(status)) 133 | } else { 134 | fmt.Println(cmdName, "OK") 135 | } 136 | } 137 | 138 | func printTribble(t tribrpc.Tribble) { 139 | fmt.Printf("%16.16s - %s - %s\n", t.UserID, t.Posted.String(), t.Contents) 140 | } 141 | 142 | func printTribbles(tribbles []tribrpc.Tribble) { 143 | for _, t := range tribbles { 144 | printTribble(t) 145 | } 146 | } 147 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/runners/lrunner/lrunner.go: -------------------------------------------------------------------------------- 1 | // A simple program that the staff tests use to test your libstore 2 | // implementation. You may use this program for your own testing purposes 3 | // if you wish. DO NOT MODIFY! 4 | 5 | package main 6 | 7 | import ( 8 | "flag" 9 | "fmt" 10 | "log" 11 | "net" 12 | "net/http" 13 | "net/rpc" 14 | "os" 15 | "strconv" 16 | "time" 17 | 18 | "github.com/cmu440/tribbler/libstore" 19 | ) 20 | 21 | var ( 22 | forceLease = flag.Bool("fl", false, "Create libstore in 'Always' mode (default is 'Normal')") 23 | serverAddress = flag.String("host", "localhost", "master storage server host (our tests will always use localhost)") 24 | port = flag.Int("port", 9009, "master storage server port number") 25 | numTimes = flag.Int("n", 1, "number of times to execute the command") 26 | handleLeases = flag.Bool("l", false, "run persistently, requesting leases, and reporting lease revocation requests") 27 | ) 28 | 29 | func init() { 30 | log.SetFlags(log.Lshortfile | log.Lmicroseconds) 31 | flag.Usage = func() { 32 | fmt.Fprintln(os.Stderr, "The lrunner program is a testing tool that creates and runs an instance") 33 | fmt.Fprintln(os.Stderr, "of your Libstore. 
You may use it to test the correctness of your storage server.\n") 34 | fmt.Fprintln(os.Stderr, "Usage:") 35 | flag.PrintDefaults() 36 | fmt.Fprintln(os.Stderr) 37 | fmt.Fprintln(os.Stderr, "Possible commands:") 38 | fmt.Fprintln(os.Stderr, " Put: p key value") 39 | fmt.Fprintln(os.Stderr, " Get: g key") 40 | fmt.Fprintln(os.Stderr, " GetList: lg key") 41 | fmt.Fprintln(os.Stderr, " AddToList: la key value") 42 | fmt.Fprintln(os.Stderr, " RemoveFromList: lr key value") 43 | } 44 | } 45 | 46 | type cmdInfo struct { 47 | cmdline string 48 | nargs int 49 | } 50 | 51 | var cmdList = map[string]int{ 52 | "p": 2, 53 | "g": 1, 54 | "la": 2, 55 | "lr": 2, 56 | "lg": 1, 57 | } 58 | 59 | func main() { 60 | flag.Parse() 61 | if flag.NArg() < 2 { 62 | flag.Usage() 63 | os.Exit(1) 64 | } 65 | 66 | cmdmap := make(map[string]cmdInfo) 67 | for k, v := range cmdList { 68 | cmdmap[k] = cmdInfo{cmdline: k, nargs: v} 69 | } 70 | 71 | cmd := flag.Arg(0) 72 | ci, found := cmdmap[cmd] 73 | if !found { 74 | flag.Usage() 75 | os.Exit(1) 76 | } 77 | if flag.NArg() < (ci.nargs + 1) { 78 | flag.Usage() 79 | os.Exit(1) 80 | } 81 | 82 | var leaseCallbackAddr string 83 | if *handleLeases { 84 | // Set up an HTTP handler to receive remote lease revocation requests. 85 | // The student's libstore implementation is responsible for calling 86 | // rpc.RegisterName("LeaseCallbacks", librpc.Wrap(libstore)) to finish 87 | // the setup. 
88 | l, err := net.Listen("tcp", ":0") 89 | if err != nil { 90 | log.Fatalln("Failed to listen:", err) 91 | } 92 | _, listenPort, _ := net.SplitHostPort(l.Addr().String()) 93 | leaseCallbackAddr = net.JoinHostPort("localhost", listenPort) 94 | rpc.HandleHTTP() 95 | go http.Serve(l, nil) 96 | } 97 | 98 | var leaseMode libstore.LeaseMode 99 | if *handleLeases && *forceLease { 100 | leaseMode = libstore.Always 101 | } else if leaseCallbackAddr == "" { 102 | leaseMode = libstore.Never 103 | } else { 104 | leaseMode = libstore.Normal 105 | } 106 | 107 | masterHostPort := net.JoinHostPort(*serverAddress, strconv.Itoa(*port)) 108 | ls, err := libstore.NewLibstore(masterHostPort, leaseCallbackAddr, leaseMode) 109 | if err != nil { 110 | log.Fatalln("Failed to create libstore:", err) 111 | } 112 | 113 | for i := 0; i < *numTimes; i++ { 114 | switch cmd { 115 | case "g": 116 | val, err := ls.Get(flag.Arg(1)) 117 | if err != nil { 118 | fmt.Println("ERROR:", err) 119 | } else { 120 | fmt.Println(val) 121 | } 122 | case "lg": 123 | val, err := ls.GetList(flag.Arg(1)) 124 | if err != nil { 125 | fmt.Println("ERROR:", err) 126 | } else { 127 | for _, i := range val { 128 | fmt.Println(i) 129 | } 130 | } 131 | case "p", "la", "lr": 132 | var err error 133 | switch cmd { 134 | case "p": 135 | err = ls.Put(flag.Arg(1), flag.Arg(2)) 136 | case "la": 137 | err = ls.AppendToList(flag.Arg(1), flag.Arg(2)) 138 | case "lr": 139 | err = ls.RemoveFromList(flag.Arg(1), flag.Arg(2)) 140 | } 141 | if err == nil { 142 | fmt.Println("OK") 143 | } else { 144 | fmt.Println("ERROR:", err) 145 | } 146 | } 147 | } 148 | 149 | if *handleLeases { 150 | fmt.Println("Waiting 20 seconds for lease callbacks...") 151 | time.Sleep(20 * time.Second) 152 | } 153 | } 154 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/runners/srunner/srunner.go: -------------------------------------------------------------------------------- 1 | // DO NOT 
MODIFY! 2 | 3 | package main 4 | 5 | import ( 6 | crand "crypto/rand" 7 | "flag" 8 | "log" 9 | "math" 10 | "math/big" 11 | "math/rand" 12 | 13 | "github.com/cmu440/tribbler/storageserver" 14 | ) 15 | 16 | const defaultMasterPort = 9009 17 | 18 | var ( 19 | port = flag.Int("port", defaultMasterPort, "port number to listen on") 20 | masterHostPort = flag.String("master", "", "master storage server host port (if non-empty then this storage server is a slave)") 21 | numNodes = flag.Int("N", 1, "the number of nodes in the ring (including the master)") 22 | nodeID = flag.Uint("id", 0, "a 32-bit unsigned node ID to use for consistent hashing") 23 | ) 24 | 25 | func init() { 26 | log.SetFlags(log.Lshortfile | log.Lmicroseconds) 27 | } 28 | 29 | func main() { 30 | flag.Parse() 31 | if *masterHostPort == "" && *port == 0 { 32 | // If masterHostPort string is empty, then this storage server is the master. 33 | *port = defaultMasterPort 34 | } 35 | 36 | // If nodeID is 0, then assign a random 32-bit integer instead. 37 | randID := uint32(*nodeID) 38 | if randID == 0 { 39 | randint, _ := crand.Int(crand.Reader, big.NewInt(math.MaxInt64)) 40 | rand.Seed(randint.Int64()) 41 | randID = rand.Uint32() 42 | } 43 | 44 | // Create and start the StorageServer. 45 | _, err := storageserver.NewStorageServer(*masterHostPort, *numNodes, *port, randID) 46 | if err != nil { 47 | log.Fatalln("Failed to create storage server:", err) 48 | } 49 | 50 | // Run the storage server forever. 51 | select {} 52 | } 53 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/runners/trunner/trunner.go: -------------------------------------------------------------------------------- 1 | // DO NOT MODIFY! 
2 | 3 | package main 4 | 5 | import ( 6 | "flag" 7 | "log" 8 | "net" 9 | "strconv" 10 | 11 | "github.com/cmu440/tribbler/tribserver" 12 | ) 13 | 14 | var port = flag.Int("port", 9010, "port number to listen on") 15 | 16 | func init() { 17 | log.SetFlags(log.Lshortfile | log.Lmicroseconds) 18 | } 19 | 20 | func main() { 21 | flag.Parse() 22 | if flag.NArg() < 1 { 23 | log.Fatalln("Usage: trunner ") 24 | } 25 | 26 | // Create and start the TribServer. 27 | hostPort := net.JoinHostPort("localhost", strconv.Itoa(*port)) 28 | _, err := tribserver.NewTribServer(flag.Arg(0), hostPort) 29 | if err != nil { 30 | log.Fatalln("Server could not be created:", err) 31 | } 32 | 33 | // Run the Tribbler server forever. 34 | select {} 35 | } 36 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/storageserver/storageserver_api.go: -------------------------------------------------------------------------------- 1 | // DO NOT MODIFY! 2 | 3 | package storageserver 4 | 5 | import "github.com/cmu440/tribbler/rpc/storagerpc" 6 | 7 | // StorageServer defines the set of methods that can be invoked remotely via RPCs. 8 | type StorageServer interface { 9 | 10 | // RegisterServer adds a storage server to the ring. It replies with 11 | // status NotReady if not all nodes in the ring have joined. Once 12 | // all nodes have joined, it should reply with status OK and a list 13 | // of all connected nodes in the ring. 14 | RegisterServer(*storagerpc.RegisterArgs, *storagerpc.RegisterReply) error 15 | 16 | // GetServers retrieves a list of all connected nodes in the ring. It 17 | // replies with status NotReady if not all nodes in the ring have joined. 18 | GetServers(*storagerpc.GetServersArgs, *storagerpc.GetServersReply) error 19 | 20 | // Get retrieves the specified key from the data store and replies with 21 | // the key's value and a lease if one was requested. 
If the key does not 22 | // fall within the storage server's range, it should reply with status 23 | // WrongServer. If the key is not found, it should reply with status 24 | // KeyNotFound. 25 | Get(*storagerpc.GetArgs, *storagerpc.GetReply) error 26 | 27 | // GetList retrieves the specified key from the data store and replies with 28 | // the key's list value and a lease if one was requested. If the key does not 29 | // fall within the storage server's range, it should reply with status 30 | // WrongServer. If the key is not found, it should reply with status 31 | // KeyNotFound. 32 | GetList(*storagerpc.GetArgs, *storagerpc.GetListReply) error 33 | 34 | // Put inserts the specified key/value pair into the data store. If 35 | // the key does not fall within the storage server's range, it should 36 | // reply with status WrongServer. 37 | Put(*storagerpc.PutArgs, *storagerpc.PutReply) error 38 | 39 | // AppendToList retrieves the specified key from the data store and appends 40 | // the specified value to its list. If the key does not fall within the 41 | // receiving server's range, it should reply with status WrongServer. If 42 | // the specified value is already contained in the list, it should reply 43 | // with status ItemExists. 44 | AppendToList(*storagerpc.PutArgs, *storagerpc.PutReply) error 45 | 46 | // RemoveFromList retrieves the specified key from the data store and removes 47 | // the specified value from its list. If the key does not fall within the 48 | // receiving server's range, it should reply with status WrongServer. If 49 | // the specified value is not already contained in the list, it should reply 50 | // with status ItemNotFound. 
51 | RemoveFromList(*storagerpc.PutArgs, *storagerpc.PutReply) error 52 | } 53 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/storageserver/storageserver_impl.go: -------------------------------------------------------------------------------- 1 | package storageserver 2 | 3 | import ( 4 | "errors" 5 | 6 | "github.com/cmu440/tribbler/rpc/storagerpc" 7 | ) 8 | 9 | type storageServer struct { 10 | // TODO: implement this! 11 | } 12 | 13 | // NewStorageServer creates and starts a new StorageServer. masterServerHostPort 14 | // is the master storage server's host:port address. If empty, then this server 15 | // is the master; otherwise, this server is a slave. numNodes is the total number of 16 | // servers in the ring. port is the port number that this server should listen on. 17 | // nodeID is a random, unsigned 32-bit ID identifying this server. 18 | // 19 | // This function should return only once all storage servers have joined the ring, 20 | // and should return a non-nil error if the storage server could not be started. 
21 | func NewStorageServer(masterServerHostPort string, numNodes, port int, nodeID uint32) (StorageServer, error) { 22 | return nil, errors.New("not implemented") 23 | } 24 | 25 | func (ss *storageServer) RegisterServer(args *storagerpc.RegisterArgs, reply *storagerpc.RegisterReply) error { 26 | return errors.New("not implemented") 27 | } 28 | 29 | func (ss *storageServer) GetServers(args *storagerpc.GetServersArgs, reply *storagerpc.GetServersReply) error { 30 | return errors.New("not implemented") 31 | } 32 | 33 | func (ss *storageServer) Get(args *storagerpc.GetArgs, reply *storagerpc.GetReply) error { 34 | return errors.New("not implemented") 35 | } 36 | 37 | func (ss *storageServer) GetList(args *storagerpc.GetArgs, reply *storagerpc.GetListReply) error { 38 | return errors.New("not implemented") 39 | } 40 | 41 | func (ss *storageServer) Put(args *storagerpc.PutArgs, reply *storagerpc.PutReply) error { 42 | return errors.New("not implemented") 43 | } 44 | 45 | func (ss *storageServer) AppendToList(args *storagerpc.PutArgs, reply *storagerpc.PutReply) error { 46 | return errors.New("not implemented") 47 | } 48 | 49 | func (ss *storageServer) RemoveFromList(args *storagerpc.PutArgs, reply *storagerpc.PutReply) error { 50 | return errors.New("not implemented") 51 | } 52 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/tests/libtest/libtest.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "flag" 5 | "fmt" 6 | "log" 7 | "net" 8 | "net/http" 9 | "net/rpc" 10 | "os" 11 | "regexp" 12 | "runtime" 13 | "strings" 14 | "time" 15 | 16 | "github.com/cmu440/tribbler/libstore" 17 | "github.com/cmu440/tribbler/rpc/storagerpc" 18 | "github.com/cmu440/tribbler/tests/proxycounter" 19 | ) 20 | 21 | type testFunc struct { 22 | name string 23 | f func() 24 | } 25 | 26 | var ( 27 | portnum = flag.Int("port", 9010, "port to listen on") 28 | 
testRegex = flag.String("t", "", "test to run") 29 | ) 30 | 31 | var ( 32 | pc proxycounter.ProxyCounter 33 | ls libstore.Libstore 34 | revokeConn *rpc.Client 35 | passCount int 36 | failCount int 37 | ) 38 | 39 | var LOGE = log.New(os.Stderr, "", log.Lshortfile|log.Lmicroseconds) 40 | 41 | // Initialize proxy and libstore 42 | func initLibstore(storage, server, myhostport string, alwaysLease bool) (net.Listener, error) { 43 | l, err := net.Listen("tcp", server) 44 | if err != nil { 45 | LOGE.Println("Failed to listen:", err) 46 | return nil, err 47 | } 48 | 49 | // The ProxyServer acts like a "StorageServer" in the system, but also has some 50 | // additional functionalities that allow us to enforce the number of RPCs made 51 | // to the storage server, etc. 52 | proxyCounter, err := proxycounter.NewProxyCounter(storage, server) 53 | if err != nil { 54 | LOGE.Println("Failed to setup test:", err) 55 | return nil, err 56 | } 57 | pc = proxyCounter 58 | 59 | // Normally the StorageServer would register itself to receive RPCs, 60 | // but since we don't call NewStorageServer here, we need to do it here instead. 61 | rpc.RegisterName("StorageServer", storagerpc.Wrap(pc)) 62 | 63 | // Normally the TribServer would call the two methods below when it is first 64 | // created, but these tests mock out the TribServer altogether, so we do 65 | // it here instead. 66 | rpc.HandleHTTP() 67 | go http.Serve(l, nil) 68 | 69 | var leaseMode libstore.LeaseMode 70 | if alwaysLease { 71 | leaseMode = libstore.Always 72 | } else if myhostport == "" { 73 | leaseMode = libstore.Never 74 | } else { 75 | leaseMode = libstore.Normal 76 | } 77 | 78 | // Create and start the Libstore. 
79 | libstore, err := libstore.NewLibstore(server, myhostport, leaseMode) 80 | if err != nil { 81 | LOGE.Println("Failed to create Libstore:", err) 82 | return nil, err 83 | } 84 | ls = libstore 85 | return l, nil 86 | } 87 | 88 | // Cleanup libstore and rpc hooks 89 | func cleanupLibstore(l net.Listener) { 90 | // Close listener to stop http serve thread 91 | if l != nil { 92 | l.Close() 93 | } 94 | // Recreate default http serve mux 95 | http.DefaultServeMux = http.NewServeMux() 96 | // Recreate default rpc server 97 | rpc.DefaultServer = rpc.NewServer() 98 | // Unset libstore just in case 99 | ls = nil 100 | } 101 | 102 | // Force key into cache by requesting 2 * QUERY_CACHE_THRESH gets 103 | func forceCacheGet(key string, value string) { 104 | ls.Put(key, value) 105 | for i := 0; i < 2*storagerpc.QueryCacheThresh; i++ { 106 | ls.Get(key) 107 | } 108 | } 109 | 110 | // Force key into cache by requesting 2 * QUERY_CACHE_THRESH get lists 111 | func forceCacheGetList(key string, value string) { 112 | ls.AppendToList(key, value) 113 | for i := 0; i < 2*storagerpc.QueryCacheThresh; i++ { 114 | ls.GetList(key) 115 | } 116 | } 117 | 118 | // Revoke lease 119 | func revokeLease(key string) (error, storagerpc.Status) { 120 | args := &storagerpc.RevokeLeaseArgs{Key: key} 121 | var reply storagerpc.RevokeLeaseReply 122 | err := revokeConn.Call("LeaseCallbacks.RevokeLease", args, &reply) 123 | return err, reply.Status 124 | } 125 | 126 | // Check rpc and byte count limits 127 | func checkLimits(rpcCountLimit, byteCountLimit uint32) bool { 128 | if pc.GetRpcCount() > rpcCountLimit { 129 | LOGE.Println("FAIL: using too many RPCs") 130 | failCount++ 131 | return true 132 | } 133 | if pc.GetByteCount() > byteCountLimit { 134 | LOGE.Println("FAIL: transferring too much data") 135 | failCount++ 136 | return true 137 | } 138 | return false 139 | } 140 | 141 | // Check error 142 | func checkError(err error, expectError bool) bool { 143 | if expectError { 144 | if err == nil { 145 | 
LOGE.Println("FAIL: error should be returned") 146 | failCount++ 147 | return true 148 | } 149 | } else { 150 | if err != nil { 151 | LOGE.Println("FAIL: unexpected error returned:", err) 152 | failCount++ 153 | return true 154 | } 155 | } 156 | return false 157 | } 158 | 159 | // Test libstore returns nil when it cannot connect to the server 160 | func testNonexistentServer() { 161 | if l, err := libstore.NewLibstore(fmt.Sprintf("localhost:%d", *portnum), fmt.Sprintf("localhost:%d", *portnum), libstore.Normal); l == nil || err != nil { 162 | fmt.Println("PASS") 163 | passCount++ 164 | } else { 165 | LOGE.Println("FAIL: libstore does not return a non-nil error when it cannot connect to nonexistent storage server") 166 | failCount++ 167 | } 168 | cleanupLibstore(nil) 169 | } 170 | 171 | // Never request leases when myhostport is "" 172 | func testNoLeases() { 173 | l, err := initLibstore(flag.Arg(0), fmt.Sprintf("localhost:%d", *portnum), "", false) 174 | if err != nil { 175 | LOGE.Println("FAIL:", err) 176 | failCount++ 177 | return 178 | } 179 | defer cleanupLibstore(l) 180 | pc.Reset() 181 | forceCacheGet("key:", "value") 182 | if pc.GetLeaseRequestCount() > 0 { 183 | LOGE.Println("FAIL: should not request leases when myhostport is \"\"") 184 | failCount++ 185 | return 186 | } 187 | fmt.Println("PASS") 188 | passCount++ 189 | } 190 | 191 | // Always request leases when alwaysLease is true 192 | func testAlwaysLeases() { 193 | l, err := initLibstore(flag.Arg(0), fmt.Sprintf("localhost:%d", *portnum), fmt.Sprintf("localhost:%d", *portnum), true) 194 | if err != nil { 195 | LOGE.Println("FAIL:", err) 196 | failCount++ 197 | return 198 | } 199 | defer cleanupLibstore(l) 200 | pc.Reset() 201 | ls.Put("key:", "value") 202 | ls.Get("key:") 203 | if pc.GetLeaseRequestCount() == 0 { 204 | LOGE.Println("FAIL: should always request leases when alwaysLease is true") 205 | failCount++ 206 | return 207 | } 208 | fmt.Println("PASS") 209 | passCount++ 210 | } 211 | 212 | // 
Handle get error 213 | func testGetError() { 214 | pc.Reset() 215 | pc.OverrideErr() 216 | defer pc.OverrideOff() 217 | _, err := ls.Get("key:1") 218 | if checkError(err, true) { 219 | return 220 | } 221 | if checkLimits(5, 50) { 222 | return 223 | } 224 | fmt.Println("PASS") 225 | passCount++ 226 | } 227 | 228 | // Handle get error reply status 229 | func testGetErrorStatus() { 230 | pc.Reset() 231 | pc.OverrideStatus(storagerpc.KeyNotFound) 232 | defer pc.OverrideOff() 233 | _, err := ls.Get("key:2") 234 | if checkError(err, true) { 235 | return 236 | } 237 | if checkLimits(5, 50) { 238 | return 239 | } 240 | fmt.Println("PASS") 241 | passCount++ 242 | } 243 | 244 | // Handle valid get 245 | func testGetValid() { 246 | ls.Put("key:3", "value") 247 | pc.Reset() 248 | v, err := ls.Get("key:3") 249 | if checkError(err, false) { 250 | return 251 | } 252 | if v != "value" { 253 | LOGE.Println("FAIL: got wrong value") 254 | failCount++ 255 | return 256 | } 257 | if checkLimits(5, 50) { 258 | return 259 | } 260 | fmt.Println("PASS") 261 | passCount++ 262 | } 263 | 264 | // Handle put error 265 | func testPutError() { 266 | pc.Reset() 267 | pc.OverrideErr() 268 | defer pc.OverrideOff() 269 | err := ls.Put("key:4", "value") 270 | if checkError(err, true) { 271 | return 272 | } 273 | if checkLimits(5, 50) { 274 | return 275 | } 276 | fmt.Println("PASS") 277 | passCount++ 278 | } 279 | 280 | // Handle put error reply status 281 | func testPutErrorStatus() { 282 | pc.Reset() 283 | pc.OverrideStatus(storagerpc.WrongServer /* use arbitrary status */) 284 | defer pc.OverrideOff() 285 | err := ls.Put("key:5", "value") 286 | if checkError(err, true) { 287 | return 288 | } 289 | if checkLimits(5, 50) { 290 | return 291 | } 292 | fmt.Println("PASS") 293 | passCount++ 294 | } 295 | 296 | // Handle valid put 297 | func testPutValid() { 298 | pc.Reset() 299 | err := ls.Put("key:6", "value") 300 | if checkError(err, false) { 301 | return 302 | } 303 | if checkLimits(5, 50) { 304 | 
return 305 | } 306 | v, err := ls.Get("key:6") 307 | if checkError(err, false) { 308 | return 309 | } 310 | if v != "value" { 311 | LOGE.Println("FAIL: got wrong value") 312 | failCount++ 313 | return 314 | } 315 | fmt.Println("PASS") 316 | passCount++ 317 | } 318 | 319 | // Handle get list error 320 | func testGetListError() { 321 | pc.Reset() 322 | pc.OverrideErr() 323 | defer pc.OverrideOff() 324 | _, err := ls.GetList("keylist:1") 325 | if checkError(err, true) { 326 | return 327 | } 328 | if checkLimits(5, 50) { 329 | return 330 | } 331 | fmt.Println("PASS") 332 | passCount++ 333 | } 334 | 335 | // Handle get list error reply status 336 | func testGetListErrorStatus() { 337 | pc.Reset() 338 | pc.OverrideStatus(storagerpc.ItemNotFound) 339 | defer pc.OverrideOff() 340 | _, err := ls.GetList("keylist:2") 341 | if checkError(err, true) { 342 | return 343 | } 344 | if checkLimits(5, 50) { 345 | return 346 | } 347 | fmt.Println("PASS") 348 | passCount++ 349 | } 350 | 351 | // Handle valid get list 352 | func testGetListValid() { 353 | ls.AppendToList("keylist:3", "value") 354 | pc.Reset() 355 | v, err := ls.GetList("keylist:3") 356 | if checkError(err, false) { 357 | return 358 | } 359 | if len(v) != 1 || v[0] != "value" { 360 | LOGE.Println("FAIL: got wrong value") 361 | failCount++ 362 | return 363 | } 364 | if checkLimits(5, 50) { 365 | return 366 | } 367 | fmt.Println("PASS") 368 | passCount++ 369 | } 370 | 371 | // Handle append to list error 372 | func testAppendToListError() { 373 | pc.Reset() 374 | pc.OverrideErr() 375 | defer pc.OverrideOff() 376 | err := ls.AppendToList("keylist:4", "value") 377 | if checkError(err, true) { 378 | return 379 | } 380 | if checkLimits(5, 50) { 381 | return 382 | } 383 | fmt.Println("PASS") 384 | passCount++ 385 | } 386 | 387 | // Handle append to list error reply status 388 | func testAppendToListErrorStatus() { 389 | pc.Reset() 390 | pc.OverrideStatus(storagerpc.ItemExists) 391 | defer pc.OverrideOff() 392 | err := 
ls.AppendToList("keylist:5", "value") 393 | if checkError(err, true) { 394 | return 395 | } 396 | if checkLimits(5, 50) { 397 | return 398 | } 399 | fmt.Println("PASS") 400 | passCount++ 401 | } 402 | 403 | // Handle valid append to list 404 | func testAppendToListValid() { 405 | pc.Reset() 406 | err := ls.AppendToList("keylist:6", "value") 407 | if checkError(err, false) { 408 | return 409 | } 410 | if checkLimits(5, 50) { 411 | return 412 | } 413 | v, err := ls.GetList("keylist:6") 414 | if checkError(err, false) { 415 | return 416 | } 417 | if len(v) != 1 || v[0] != "value" { 418 | LOGE.Println("FAIL: got wrong value") 419 | failCount++ 420 | return 421 | } 422 | fmt.Println("PASS") 423 | passCount++ 424 | } 425 | 426 | // Handle remove from list error 427 | func testRemoveFromListError() { 428 | pc.Reset() 429 | pc.OverrideErr() 430 | defer pc.OverrideOff() 431 | err := ls.RemoveFromList("keylist:7", "value") 432 | if checkError(err, true) { 433 | return 434 | } 435 | if checkLimits(5, 50) { 436 | return 437 | } 438 | fmt.Println("PASS") 439 | passCount++ 440 | } 441 | 442 | // Handle remove from list error reply status 443 | func testRemoveFromListErrorStatus() { 444 | pc.Reset() 445 | pc.OverrideStatus(storagerpc.ItemNotFound) 446 | defer pc.OverrideOff() 447 | err := ls.RemoveFromList("keylist:8", "value") 448 | if checkError(err, true) { 449 | return 450 | } 451 | if checkLimits(5, 50) { 452 | return 453 | } 454 | fmt.Println("PASS") 455 | passCount++ 456 | } 457 | 458 | // Handle valid remove from list 459 | func testRemoveFromListValid() { 460 | err := ls.AppendToList("keylist:9", "value1") 461 | if checkError(err, false) { 462 | return 463 | } 464 | err = ls.AppendToList("keylist:9", "value2") 465 | if checkError(err, false) { 466 | return 467 | } 468 | pc.Reset() 469 | err = ls.RemoveFromList("keylist:9", "value1") 470 | if checkError(err, false) { 471 | return 472 | } 473 | if checkLimits(5, 50) { 474 | return 475 | } 476 | v, err := 
ls.GetList("keylist:9") 477 | if checkError(err, false) { 478 | return 479 | } 480 | if len(v) != 1 || v[0] != "value2" { 481 | LOGE.Println("FAIL: got wrong value") 482 | failCount++ 483 | return 484 | } 485 | fmt.Println("PASS") 486 | passCount++ 487 | } 488 | 489 | // Cache < limit test for get 490 | func testCacheGetLimit() { 491 | pc.Reset() 492 | ls.Put("keycacheget:1", "value") 493 | for i := 0; i < storagerpc.QueryCacheThresh-1; i++ { 494 | ls.Get("keycacheget:1") 495 | } 496 | if pc.GetLeaseRequestCount() > 0 { 497 | LOGE.Println("FAIL: should not request lease") 498 | failCount++ 499 | return 500 | } 501 | fmt.Println("PASS") 502 | passCount++ 503 | } 504 | 505 | // Cache > limit test for get 506 | func testCacheGetLimit2() { 507 | pc.Reset() 508 | forceCacheGet("keycacheget:2", "value") 509 | if pc.GetLeaseRequestCount() == 0 { 510 | LOGE.Println("FAIL: should have requested lease") 511 | failCount++ 512 | return 513 | } 514 | fmt.Println("PASS") 515 | passCount++ 516 | } 517 | 518 | // Doesn't call server when using cache for get 519 | func testCacheGetCorrect() { 520 | forceCacheGet("keycacheget:3", "value") 521 | pc.Reset() 522 | for i := 0; i < 100*storagerpc.QueryCacheThresh; i++ { 523 | v, err := ls.Get("keycacheget:3") 524 | if checkError(err, false) { 525 | return 526 | } 527 | if v != "value" { 528 | LOGE.Println("FAIL: got wrong value from cache") 529 | failCount++ 530 | return 531 | } 532 | } 533 | if pc.GetRpcCount() > 0 { 534 | LOGE.Println("FAIL: should not contact server when using cache") 535 | failCount++ 536 | return 537 | } 538 | fmt.Println("PASS") 539 | passCount++ 540 | } 541 | 542 | // Cache respects granted flag for get 543 | func testCacheGetLeaseNotGranted() { 544 | pc.DisableLease() 545 | defer pc.EnableLease() 546 | forceCacheGet("keycacheget:4", "value") 547 | pc.Reset() 548 | v, err := ls.Get("keycacheget:4") 549 | if checkError(err, false) { 550 | return 551 | } 552 | if v != "value" { 553 | LOGE.Println("FAIL: got wrong 
value") 554 | failCount++ 555 | return 556 | } 557 | if pc.GetRpcCount() == 0 { 558 | LOGE.Println("FAIL: not respecting lease granted flag") 559 | failCount++ 560 | return 561 | } 562 | fmt.Println("PASS") 563 | passCount++ 564 | } 565 | 566 | // Cache requests leases until granted for get 567 | func testCacheGetLeaseNotGranted2() { 568 | pc.DisableLease() 569 | defer pc.EnableLease() 570 | forceCacheGet("keycacheget:5", "value") 571 | pc.Reset() 572 | forceCacheGet("keycacheget:5", "value") 573 | if pc.GetLeaseRequestCount() == 0 { 574 | LOGE.Println("FAIL: not requesting leases after lease wasn't granted") 575 | failCount++ 576 | return 577 | } 578 | fmt.Println("PASS") 579 | passCount++ 580 | } 581 | 582 | // Cache respects lease timeout for get 583 | func testCacheGetLeaseTimeout() { 584 | pc.OverrideLeaseSeconds(1) 585 | defer pc.OverrideLeaseSeconds(0) 586 | forceCacheGet("keycacheget:6", "value") 587 | time.Sleep(2 * time.Second) 588 | pc.Reset() 589 | v, err := ls.Get("keycacheget:6") 590 | if checkError(err, false) { 591 | return 592 | } 593 | if v != "value" { 594 | LOGE.Println("FAIL: got wrong value") 595 | failCount++ 596 | return 597 | } 598 | if pc.GetRpcCount() == 0 { 599 | LOGE.Println("FAIL: not respecting lease timeout") 600 | failCount++ 601 | return 602 | } 603 | fmt.Println("PASS") 604 | passCount++ 605 | } 606 | 607 | // Cache memory leak for get 608 | func testCacheGetMemoryLeak() { 609 | pc.OverrideLeaseSeconds(1) 610 | defer pc.OverrideLeaseSeconds(0) 611 | 612 | var memstats runtime.MemStats 613 | var initAlloc, finalAlloc uint64 614 | longValue := strings.Repeat("this sentence is 30 char long\n", 3000) 615 | 616 | // Run garbage collection and get memory stats. 617 | runtime.GC() 618 | runtime.ReadMemStats(&memstats) 619 | initAlloc = memstats.Alloc 620 | 621 | // Cache a lot of data. 
622 | for i := 0; i < 1000; i++ { 623 | key := fmt.Sprintf("keymemleakget:%d", i) 624 | pc.Reset() 625 | forceCacheGet(key, longValue) 626 | if pc.GetLeaseRequestCount() == 0 { 627 | LOGE.Println("FAIL: not requesting leases") 628 | failCount++ 629 | return 630 | } 631 | pc.Reset() 632 | v, err := ls.Get(key) 633 | if checkError(err, false) { 634 | return 635 | } 636 | if v != longValue { 637 | LOGE.Println("FAIL: got wrong value") 638 | failCount++ 639 | return 640 | } 641 | if pc.GetRpcCount() > 0 { 642 | LOGE.Println("FAIL: not caching data") 643 | failCount++ 644 | return 645 | } 646 | } 647 | 648 | runtime.GC() 649 | runtime.ReadMemStats(&memstats) 650 | 651 | // Wait for data to expire and someone to cleanup. 652 | time.Sleep(20 * time.Second) 653 | 654 | // Run garbage collection and get memory stats. 655 | runtime.GC() 656 | runtime.ReadMemStats(&memstats) 657 | finalAlloc = memstats.Alloc 658 | 659 | // The maximum number of bytes allowed to be allocated since the beginning 660 | // of this test until now (currently 5,000,000). 
661 | const maxBytes = 5000000 662 | if finalAlloc < initAlloc || (finalAlloc-initAlloc) < maxBytes { 663 | fmt.Println("PASS") 664 | passCount++ 665 | } else { 666 | LOGE.Printf("FAIL: Libstore not cleaning expired/cached data (bytes still in use: %d, max allowed: %d)\n", 667 | finalAlloc-initAlloc, maxBytes) 668 | failCount++ 669 | } 670 | } 671 | 672 | // Revoke valid lease for get 673 | func testRevokeGetValid() { 674 | forceCacheGet("keyrevokeget:1", "value") 675 | err, status := revokeLease("keyrevokeget:1") 676 | if checkError(err, false) { 677 | return 678 | } 679 | if status != storagerpc.OK { 680 | LOGE.Println("FAIL: revoke should return OK on success") 681 | failCount++ 682 | return 683 | } 684 | pc.Reset() 685 | v, err := ls.Get("keyrevokeget:1") 686 | if checkError(err, false) { 687 | return 688 | } 689 | if v != "value" { 690 | LOGE.Println("FAIL: got wrong value") 691 | failCount++ 692 | return 693 | } 694 | if pc.GetRpcCount() == 0 { 695 | LOGE.Println("FAIL: not respecting lease revoke") 696 | failCount++ 697 | return 698 | } 699 | fmt.Println("PASS") 700 | passCount++ 701 | } 702 | 703 | // Revoke nonexistent lease for get 704 | func testRevokeGetNonexistent() { 705 | ls.Put("keyrevokeget:2", "value") 706 | // Just shouldn't die or cause future issues 707 | revokeLease("keyrevokeget:2") 708 | pc.Reset() 709 | v, err := ls.Get("keyrevokeget:2") 710 | if checkError(err, false) { 711 | return 712 | } 713 | if v != "value" { 714 | LOGE.Println("FAIL: got wrong value") 715 | failCount++ 716 | return 717 | } 718 | if pc.GetRpcCount() == 0 { 719 | LOGE.Println("FAIL: should not be cached") 720 | failCount++ 721 | return 722 | } 723 | fmt.Println("PASS") 724 | passCount++ 725 | } 726 | 727 | // Revoke lease update for get 728 | func testRevokeGetUpdate() { 729 | forceCacheGet("keyrevokeget:3", "value") 730 | pc.Reset() 731 | forceCacheGet("keyrevokeget:3", "value2") 732 | if pc.GetRpcCount() <= 1 || pc.GetLeaseRequestCount() == 0 { 733 | 
LOGE.Println("FAIL: not respecting lease revoke") 734 | failCount++ 735 | return 736 | } 737 | pc.Reset() 738 | v, err := ls.Get("keyrevokeget:3") 739 | if checkError(err, false) { 740 | return 741 | } 742 | if v != "value2" { 743 | LOGE.Println("FAIL: got wrong value") 744 | failCount++ 745 | return 746 | } 747 | if pc.GetRpcCount() > 0 { 748 | LOGE.Println("FAIL: should be cached") 749 | failCount++ 750 | return 751 | } 752 | fmt.Println("PASS") 753 | passCount++ 754 | } 755 | 756 | // Cache < limit test for get list 757 | func testCacheGetListLimit() { 758 | pc.Reset() 759 | ls.AppendToList("keycachegetlist:1", "value") 760 | for i := 0; i < storagerpc.QueryCacheThresh-1; i++ { 761 | ls.GetList("keycachegetlist:1") 762 | } 763 | if pc.GetLeaseRequestCount() > 0 { 764 | LOGE.Println("FAIL: should not request lease") 765 | failCount++ 766 | return 767 | } 768 | fmt.Println("PASS") 769 | passCount++ 770 | } 771 | 772 | // Cache > limit test for get list 773 | func testCacheGetListLimit2() { 774 | pc.Reset() 775 | forceCacheGetList("keycachegetlist:2", "value") 776 | if pc.GetLeaseRequestCount() == 0 { 777 | LOGE.Println("FAIL: should have requested lease") 778 | failCount++ 779 | return 780 | } 781 | fmt.Println("PASS") 782 | passCount++ 783 | } 784 | 785 | // Doesn't call server when using cache for get list 786 | func testCacheGetListCorrect() { 787 | forceCacheGetList("keycachegetlist:3", "value") 788 | pc.Reset() 789 | for i := 0; i < 100*storagerpc.QueryCacheThresh; i++ { 790 | v, err := ls.GetList("keycachegetlist:3") 791 | if checkError(err, false) { 792 | return 793 | } 794 | if len(v) != 1 || v[0] != "value" { 795 | LOGE.Println("FAIL: got wrong value from cache") 796 | failCount++ 797 | return 798 | } 799 | } 800 | if pc.GetRpcCount() > 0 { 801 | LOGE.Println("FAIL: should not contact server when using cache") 802 | failCount++ 803 | return 804 | } 805 | fmt.Println("PASS") 806 | passCount++ 807 | } 808 | 809 | // Cache respects granted flag for get list 
810 | func testCacheGetListLeaseNotGranted() { 811 | pc.DisableLease() 812 | defer pc.EnableLease() 813 | forceCacheGetList("keycachegetlist:4", "value") 814 | pc.Reset() 815 | v, err := ls.GetList("keycachegetlist:4") 816 | if checkError(err, false) { 817 | return 818 | } 819 | if len(v) != 1 || v[0] != "value" { 820 | LOGE.Println("FAIL: got wrong value") 821 | failCount++ 822 | return 823 | } 824 | if pc.GetRpcCount() == 0 { 825 | LOGE.Println("FAIL: not respecting lease granted flag") 826 | failCount++ 827 | return 828 | } 829 | fmt.Println("PASS") 830 | passCount++ 831 | } 832 | 833 | // Cache requests leases until granted for get list 834 | func testCacheGetListLeaseNotGranted2() { 835 | pc.DisableLease() 836 | defer pc.EnableLease() 837 | forceCacheGetList("keycachegetlist:5", "value") 838 | pc.Reset() 839 | forceCacheGetList("keycachegetlist:5", "value") 840 | if pc.GetLeaseRequestCount() == 0 { 841 | LOGE.Println("FAIL: not requesting leases after lease wasn't granted") 842 | failCount++ 843 | return 844 | } 845 | fmt.Println("PASS") 846 | passCount++ 847 | } 848 | 849 | // Cache respects lease timeout for get list 850 | func testCacheGetListLeaseTimeout() { 851 | pc.OverrideLeaseSeconds(1) 852 | defer pc.OverrideLeaseSeconds(0) 853 | forceCacheGetList("keycachegetlist:6", "value") 854 | time.Sleep(2 * time.Second) 855 | pc.Reset() 856 | v, err := ls.GetList("keycachegetlist:6") 857 | if checkError(err, false) { 858 | return 859 | } 860 | if len(v) != 1 || v[0] != "value" { 861 | LOGE.Println("FAIL: got wrong value") 862 | failCount++ 863 | return 864 | } 865 | if pc.GetRpcCount() == 0 { 866 | LOGE.Println("FAIL: not respecting lease timeout") 867 | failCount++ 868 | return 869 | } 870 | fmt.Println("PASS") 871 | passCount++ 872 | } 873 | 874 | // Cache memory leak for get list 875 | func testCacheGetListMemoryLeak() { 876 | pc.OverrideLeaseSeconds(1) 877 | defer pc.OverrideLeaseSeconds(0) 878 | 879 | var memstats runtime.MemStats 880 | var initAlloc 
uint64 881 | var finalAlloc uint64 882 | longValue := strings.Repeat("this sentence is 30 char long\n", 3000) 883 | 884 | // Run garbage collection and get memory stats. 885 | runtime.GC() 886 | runtime.ReadMemStats(&memstats) 887 | initAlloc = memstats.Alloc 888 | 889 | // Cache a lot of data. 890 | for i := 0; i < 1000; i++ { 891 | key := fmt.Sprintf("keymemleakgetlist:%d", i) 892 | pc.Reset() 893 | forceCacheGetList(key, longValue) 894 | if pc.GetLeaseRequestCount() == 0 { 895 | LOGE.Println("FAIL: not requesting leases") 896 | failCount++ 897 | return 898 | } 899 | pc.Reset() 900 | v, err := ls.GetList(key) 901 | if checkError(err, false) { 902 | return 903 | } 904 | if len(v) != 1 || v[0] != longValue { 905 | LOGE.Println("FAIL: got wrong value") 906 | failCount++ 907 | return 908 | } 909 | if pc.GetRpcCount() > 0 { 910 | LOGE.Println("FAIL: not caching data") 911 | failCount++ 912 | return 913 | } 914 | } 915 | 916 | runtime.GC() 917 | runtime.ReadMemStats(&memstats) 918 | 919 | // Wait for data to expire and someone to cleanup. 920 | time.Sleep(20 * time.Second) 921 | 922 | // Run garbage collection and get memory stats. 923 | runtime.GC() 924 | runtime.ReadMemStats(&memstats) 925 | finalAlloc = memstats.Alloc 926 | 927 | // The maximum number of bytes allowed to be allocated since the beginning 928 | // of this test until now (currently 5,000,000). 
929 | const maxBytes = 5000000 930 | if finalAlloc < initAlloc || (finalAlloc-initAlloc) < maxBytes { 931 | fmt.Println("PASS") 932 | passCount++ 933 | } else { 934 | LOGE.Printf("FAIL: Libstore not cleaning expired/cached data (bytes still in use: %d, max allowed: %d)\n", 935 | finalAlloc-initAlloc, maxBytes) 936 | failCount++ 937 | } 938 | } 939 | 940 | // Revoke valid lease for get list 941 | func testRevokeGetListValid() { 942 | forceCacheGetList("keyrevokegetlist:1", "value") 943 | err, status := revokeLease("keyrevokegetlist:1") 944 | if checkError(err, false) { 945 | return 946 | } 947 | if status != storagerpc.OK { 948 | LOGE.Println("FAIL: revoke should return OK on success") 949 | failCount++ 950 | return 951 | } 952 | pc.Reset() 953 | v, err := ls.GetList("keyrevokegetlist:1") 954 | if checkError(err, false) { 955 | return 956 | } 957 | if len(v) != 1 || v[0] != "value" { 958 | LOGE.Println("FAIL: got wrong value") 959 | failCount++ 960 | return 961 | } 962 | if pc.GetRpcCount() == 0 { 963 | LOGE.Println("FAIL: not respecting lease revoke") 964 | failCount++ 965 | return 966 | } 967 | fmt.Println("PASS") 968 | passCount++ 969 | } 970 | 971 | // Revoke nonexistent lease for get list 972 | func testRevokeGetListNonexistent() { 973 | ls.AppendToList("keyrevokegetlist:2", "value") 974 | // Just shouldn't die or cause future issues 975 | revokeLease("keyrevokegetlist:2") 976 | pc.Reset() 977 | v, err := ls.GetList("keyrevokegetlist:2") 978 | if checkError(err, false) { 979 | return 980 | } 981 | if len(v) != 1 || v[0] != "value" { 982 | LOGE.Println("FAIL: got wrong value") 983 | failCount++ 984 | return 985 | } 986 | if pc.GetRpcCount() == 0 { 987 | LOGE.Println("FAIL: should not be cached") 988 | failCount++ 989 | return 990 | } 991 | fmt.Println("PASS") 992 | passCount++ 993 | } 994 | 995 | // Revoke lease update for get list 996 | func testRevokeGetListUpdate() { 997 | forceCacheGetList("keyrevokegetlist:3", "value") 998 | 
ls.RemoveFromList("keyrevokegetlist:3", "value") 999 | pc.Reset() 1000 | forceCacheGetList("keyrevokegetlist:3", "value2") 1001 | if pc.GetRpcCount() <= 1 || pc.GetLeaseRequestCount() == 0 { 1002 | LOGE.Println("FAIL: not respecting lease revoke") 1003 | failCount++ 1004 | return 1005 | } 1006 | pc.Reset() 1007 | v, err := ls.GetList("keyrevokegetlist:3") 1008 | if checkError(err, false) { 1009 | return 1010 | } 1011 | if len(v) != 1 || v[0] != "value2" { 1012 | LOGE.Println("FAIL: got wrong value") 1013 | failCount++ 1014 | return 1015 | } 1016 | if pc.GetRpcCount() > 0 { 1017 | LOGE.Println("FAIL: should be cached") 1018 | failCount++ 1019 | return 1020 | } 1021 | fmt.Println("PASS") 1022 | passCount++ 1023 | } 1024 | 1025 | func main() { 1026 | initTests := []testFunc{ 1027 | {"testNonexistentServer", testNonexistentServer}, 1028 | {"testNoLeases", testNoLeases}, 1029 | {"testAlwaysLeases", testAlwaysLeases}, 1030 | } 1031 | tests := []testFunc{ 1032 | {"testGetError", testGetError}, 1033 | {"testGetErrorStatus", testGetErrorStatus}, 1034 | {"testGetValid", testGetValid}, 1035 | {"testPutError", testPutError}, 1036 | {"testPutErrorStatus", testPutErrorStatus}, 1037 | {"testPutValid", testPutValid}, 1038 | {"testGetListError", testGetListError}, 1039 | {"testGetListErrorStatus", testGetListErrorStatus}, 1040 | {"testGetListValid", testGetListValid}, 1041 | {"testAppendToListError", testAppendToListError}, 1042 | {"testAppendToListErrorStatus", testAppendToListErrorStatus}, 1043 | {"testAppendToListValid", testAppendToListValid}, 1044 | {"testRemoveFromListError", testRemoveFromListError}, 1045 | {"testRemoveFromListErrorStatus", testRemoveFromListErrorStatus}, 1046 | {"testRemoveFromListValid", testRemoveFromListValid}, 1047 | {"testCacheGetLimit", testCacheGetLimit}, 1048 | {"testCacheGetLimit2", testCacheGetLimit2}, 1049 | {"testCacheGetCorrect", testCacheGetCorrect}, 1050 | {"testCacheGetLeaseNotGranted", testCacheGetLeaseNotGranted}, 1051 | 
{"testCacheGetLeaseNotGranted2", testCacheGetLeaseNotGranted2}, 1052 | {"testCacheGetLeaseTimeout", testCacheGetLeaseTimeout}, 1053 | {"testCacheGetMemoryLeak", testCacheGetMemoryLeak}, 1054 | {"testRevokeGetValid", testRevokeGetValid}, 1055 | {"testRevokeGetNonexistent", testRevokeGetNonexistent}, 1056 | {"testRevokeGetUpdate", testRevokeGetUpdate}, 1057 | {"testCacheGetListLimit", testCacheGetListLimit}, 1058 | {"testCacheGetListLimit2", testCacheGetListLimit2}, 1059 | {"testCacheGetListCorrect", testCacheGetListCorrect}, 1060 | {"testCacheGetListLeaseNotGranted", testCacheGetListLeaseNotGranted}, 1061 | {"testCacheGetListLeaseNotGranted2", testCacheGetListLeaseNotGranted2}, 1062 | {"testCacheGetListLeaseTimeout", testCacheGetListLeaseTimeout}, 1063 | {"testCacheGetListMemoryLeak", testCacheGetListMemoryLeak}, 1064 | {"testRevokeGetListValid", testRevokeGetListValid}, 1065 | {"testRevokeGetListNonexistent", testRevokeGetListNonexistent}, 1066 | {"testRevokeGetListUpdate", testRevokeGetListUpdate}, 1067 | } 1068 | 1069 | flag.Parse() 1070 | if flag.NArg() < 1 { 1071 | LOGE.Fatalln("Usage: libtest <storage master node>") 1072 | } 1073 | 1074 | var err error 1075 | 1076 | // Run init tests 1077 | for _, t := range initTests { 1078 | if b, err := regexp.MatchString(*testRegex, t.name); b && err == nil { 1079 | fmt.Printf("Running %s:\n", t.name) 1080 | t.f() 1081 | } 1082 | // Give the current Listener some time to close before creating 1083 | // a new Libstore. 
1084 | time.Sleep(time.Duration(500) * time.Millisecond) 1085 | } 1086 | 1087 | _, err = initLibstore(flag.Arg(0), fmt.Sprintf("localhost:%d", *portnum), fmt.Sprintf("localhost:%d", *portnum), false) 1088 | if err != nil { 1089 | return 1090 | } 1091 | revokeConn, err = rpc.DialHTTP("tcp", fmt.Sprintf("localhost:%d", *portnum)) 1092 | if err != nil { 1093 | LOGE.Println("Failed to connect to Libstore RPC:", err) 1094 | return 1095 | } 1096 | 1097 | // Run tests 1098 | for _, t := range tests { 1099 | if b, err := regexp.MatchString(*testRegex, t.name); b && err == nil { 1100 | fmt.Printf("Running %s:\n", t.name) 1101 | t.f() 1102 | } 1103 | } 1104 | 1105 | fmt.Printf("Passed (%d/%d) tests\n", passCount, passCount+failCount) 1106 | } 1107 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/tests/proxycounter/proxycounter.go: -------------------------------------------------------------------------------- 1 | // DO NOT MODIFY! 
2 | 3 | package proxycounter 4 | 5 | import ( 6 | "errors" 7 | "log" 8 | "net/rpc" 9 | "sync/atomic" 10 | 11 | "github.com/cmu440/tribbler/rpc/storagerpc" 12 | "github.com/cmu440/tribbler/storageserver" 13 | ) 14 | 15 | type ProxyCounter interface { 16 | storageserver.StorageServer 17 | Reset() 18 | OverrideLeaseSeconds(leaseSeconds int) 19 | DisableLease() 20 | EnableLease() 21 | OverrideErr() 22 | OverrideStatus(status storagerpc.Status) 23 | OverrideOff() 24 | GetRpcCount() uint32 25 | GetByteCount() uint32 26 | GetLeaseRequestCount() uint32 27 | GetLeaseGrantedCount() uint32 28 | } 29 | 30 | type proxyCounter struct { 31 | srv *rpc.Client 32 | myhostport string 33 | rpcCount uint32 34 | byteCount uint32 35 | leaseRequestCount uint32 36 | leaseGrantedCount uint32 37 | override bool 38 | overrideErr error 39 | overrideStatus storagerpc.Status 40 | disableLease bool 41 | overrideLeaseSeconds int 42 | } 43 | 44 | func init() { 45 | log.SetFlags(log.Lshortfile | log.Lmicroseconds) 46 | } 47 | 48 | func NewProxyCounter(serverHostPort, myHostPort string) (ProxyCounter, error) { 49 | pc := new(proxyCounter) 50 | pc.myhostport = myHostPort 51 | // Create RPC connection to storage server. 
52 | srv, err := rpc.DialHTTP("tcp", serverHostPort) 53 | if err != nil { 54 | return nil, err 55 | } 56 | pc.srv = srv 57 | return pc, nil 58 | } 59 | 60 | func (pc *proxyCounter) Reset() { 61 | pc.rpcCount = 0 62 | pc.byteCount = 0 63 | pc.leaseRequestCount = 0 64 | pc.leaseGrantedCount = 0 65 | } 66 | 67 | func (pc *proxyCounter) OverrideLeaseSeconds(leaseSeconds int) { 68 | pc.overrideLeaseSeconds = leaseSeconds 69 | } 70 | 71 | func (pc *proxyCounter) DisableLease() { 72 | pc.disableLease = true 73 | } 74 | 75 | func (pc *proxyCounter) EnableLease() { 76 | pc.disableLease = false 77 | } 78 | 79 | func (pc *proxyCounter) OverrideErr() { 80 | pc.overrideErr = errors.New("error") 81 | pc.override = true 82 | } 83 | 84 | func (pc *proxyCounter) OverrideStatus(status storagerpc.Status) { 85 | pc.overrideStatus = status 86 | pc.override = true 87 | } 88 | 89 | func (pc *proxyCounter) OverrideOff() { 90 | pc.override = false 91 | pc.overrideErr = nil 92 | pc.overrideStatus = storagerpc.OK 93 | } 94 | 95 | func (pc *proxyCounter) GetRpcCount() uint32 { 96 | return pc.rpcCount 97 | } 98 | 99 | func (pc *proxyCounter) GetByteCount() uint32 { 100 | return pc.byteCount 101 | } 102 | 103 | func (pc *proxyCounter) GetLeaseRequestCount() uint32 { 104 | return pc.leaseRequestCount 105 | } 106 | 107 | func (pc *proxyCounter) GetLeaseGrantedCount() uint32 { 108 | return pc.leaseGrantedCount 109 | } 110 | 111 | // RPC methods. 
112 | 113 | func (pc *proxyCounter) RegisterServer(args *storagerpc.RegisterArgs, reply *storagerpc.RegisterReply) error { 114 | return nil 115 | } 116 | 117 | func (pc *proxyCounter) GetServers(args *storagerpc.GetServersArgs, reply *storagerpc.GetServersReply) error { 118 | err := pc.srv.Call("StorageServer.GetServers", args, reply) 119 | // Modify the reply so the node points to this proxy 120 | if len(reply.Servers) > 1 { 121 | panic("ProxyCounter only works with 1 storage node") 122 | } else if len(reply.Servers) == 1 { 123 | reply.Servers[0].HostPort = pc.myhostport 124 | } 125 | return err 126 | } 127 | 128 | func (pc *proxyCounter) Get(args *storagerpc.GetArgs, reply *storagerpc.GetReply) error { 129 | if pc.override { 130 | reply.Status = pc.overrideStatus 131 | return pc.overrideErr 132 | } 133 | byteCount := len(args.Key) 134 | if args.WantLease { 135 | atomic.AddUint32(&pc.leaseRequestCount, 1) 136 | } 137 | if pc.disableLease { 138 | args.WantLease = false 139 | } 140 | err := pc.srv.Call("StorageServer.Get", args, reply) 141 | byteCount += len(reply.Value) 142 | if reply.Lease.Granted { 143 | if pc.overrideLeaseSeconds > 0 { 144 | reply.Lease.ValidSeconds = pc.overrideLeaseSeconds 145 | } 146 | atomic.AddUint32(&pc.leaseGrantedCount, 1) 147 | } 148 | atomic.AddUint32(&pc.rpcCount, 1) 149 | atomic.AddUint32(&pc.byteCount, uint32(byteCount)) 150 | return err 151 | } 152 | 153 | func (pc *proxyCounter) GetList(args *storagerpc.GetArgs, reply *storagerpc.GetListReply) error { 154 | if pc.override { 155 | reply.Status = pc.overrideStatus 156 | return pc.overrideErr 157 | } 158 | byteCount := len(args.Key) 159 | if args.WantLease { 160 | atomic.AddUint32(&pc.leaseRequestCount, 1) 161 | } 162 | if pc.disableLease { 163 | args.WantLease = false 164 | } 165 | err := pc.srv.Call("StorageServer.GetList", args, reply) 166 | for _, s := range reply.Value { 167 | byteCount += len(s) 168 | } 169 | if reply.Lease.Granted { 170 | if pc.overrideLeaseSeconds > 0 { 171 | 
reply.Lease.ValidSeconds = pc.overrideLeaseSeconds 172 | } 173 | atomic.AddUint32(&pc.leaseGrantedCount, 1) 174 | } 175 | atomic.AddUint32(&pc.rpcCount, 1) 176 | atomic.AddUint32(&pc.byteCount, uint32(byteCount)) 177 | return err 178 | } 179 | 180 | func (pc *proxyCounter) Put(args *storagerpc.PutArgs, reply *storagerpc.PutReply) error { 181 | if pc.override { 182 | reply.Status = pc.overrideStatus 183 | return pc.overrideErr 184 | } 185 | byteCount := len(args.Key) + len(args.Value) 186 | err := pc.srv.Call("StorageServer.Put", args, reply) 187 | atomic.AddUint32(&pc.rpcCount, 1) 188 | atomic.AddUint32(&pc.byteCount, uint32(byteCount)) 189 | return err 190 | } 191 | 192 | func (pc *proxyCounter) AppendToList(args *storagerpc.PutArgs, reply *storagerpc.PutReply) error { 193 | if pc.override { 194 | reply.Status = pc.overrideStatus 195 | return pc.overrideErr 196 | } 197 | byteCount := len(args.Key) + len(args.Value) 198 | err := pc.srv.Call("StorageServer.AppendToList", args, reply) 199 | atomic.AddUint32(&pc.rpcCount, 1) 200 | atomic.AddUint32(&pc.byteCount, uint32(byteCount)) 201 | return err 202 | } 203 | 204 | func (pc *proxyCounter) RemoveFromList(args *storagerpc.PutArgs, reply *storagerpc.PutReply) error { 205 | if pc.override { 206 | reply.Status = pc.overrideStatus 207 | return pc.overrideErr 208 | } 209 | byteCount := len(args.Key) + len(args.Value) 210 | err := pc.srv.Call("StorageServer.RemoveFromList", args, reply) 211 | atomic.AddUint32(&pc.rpcCount, 1) 212 | atomic.AddUint32(&pc.byteCount, uint32(byteCount)) 213 | return err 214 | } 215 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/tests/storagetest/storagetest.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "flag" 5 | "fmt" 6 | "log" 7 | "net" 8 | "net/http" 9 | "net/rpc" 10 | "os" 11 | "regexp" 12 | "time" 13 | 14 | "github.com/cmu440/tribbler/rpc/librpc" 
15 | "github.com/cmu440/tribbler/rpc/storagerpc" 16 | ) 17 | 18 | type storageTester struct { 19 | srv *rpc.Client 20 | myhostport string 21 | recvRevoke map[string]bool // whether we have received a RevokeLease for key x 22 | compRevoke map[string]bool // whether we have replied to the RevokeLease for key x 23 | delay float32 // how long to delay the reply of RevokeLease 24 | } 25 | 26 | type testFunc struct { 27 | name string 28 | f func() 29 | } 30 | 31 | var ( 32 | portnum = flag.Int("port", 9019, "port # to listen on") 33 | testType = flag.Int("type", 1, "type of test, 1: jtest, 2: btest") 34 | numServer = flag.Int("N", 1, "(jtest only) total # of storage servers") 35 | myID = flag.Int("id", 1, "(jtest only) my id") 36 | testRegex = flag.String("t", "", "test to run") 37 | passCount int 38 | failCount int 39 | st *storageTester 40 | ) 41 | 42 | var LOGE = log.New(os.Stderr, "", log.Lshortfile|log.Lmicroseconds) 43 | 44 | var statusMap = map[storagerpc.Status]string{ 45 | storagerpc.OK: "OK", 46 | storagerpc.KeyNotFound: "KeyNotFound", 47 | storagerpc.ItemNotFound: "ItemNotFound", 48 | storagerpc.WrongServer: "WrongServer", 49 | storagerpc.ItemExists: "ItemExists", 50 | storagerpc.NotReady: "NotReady", 51 | 0: "Unknown", 52 | } 53 | 54 | func initStorageTester(server, myhostport string) (*storageTester, error) { 55 | tester := new(storageTester) 56 | tester.myhostport = myhostport 57 | tester.recvRevoke = make(map[string]bool) 58 | tester.compRevoke = make(map[string]bool) 59 | 60 | // Create RPC connection to storage server. 
61 | srv, err := rpc.DialHTTP("tcp", server) 62 | if err != nil { 63 | return nil, fmt.Errorf("could not connect to server %s", server) 64 | } 65 | 66 | rpc.RegisterName("LeaseCallbacks", librpc.Wrap(tester)) 67 | rpc.HandleHTTP() 68 | 69 | l, err := net.Listen("tcp", fmt.Sprintf(":%d", *portnum)) 70 | if err != nil { 71 | LOGE.Fatalln("Failed to listen:", err) 72 | } 73 | go http.Serve(l, nil) 74 | tester.srv = srv 75 | return tester, nil 76 | } 77 | 78 | func (st *storageTester) ResetDelay() { 79 | st.delay = 0 80 | } 81 | 82 | func (st *storageTester) SetDelay(f float32) { 83 | st.delay = f * (storagerpc.LeaseSeconds + storagerpc.LeaseGuardSeconds) 84 | } 85 | 86 | func (st *storageTester) RevokeLease(args *storagerpc.RevokeLeaseArgs, reply *storagerpc.RevokeLeaseReply) error { 87 | st.recvRevoke[args.Key] = true 88 | st.compRevoke[args.Key] = false 89 | time.Sleep(time.Duration(st.delay*1000) * time.Millisecond) 90 | st.compRevoke[args.Key] = true 91 | reply.Status = storagerpc.OK 92 | return nil 93 | } 94 | 95 | func (st *storageTester) RegisterServer() (*storagerpc.RegisterReply, error) { 96 | node := storagerpc.Node{HostPort: st.myhostport, NodeID: uint32(*myID)} 97 | args := &storagerpc.RegisterArgs{ServerInfo: node} 98 | var reply storagerpc.RegisterReply 99 | err := st.srv.Call("StorageServer.RegisterServer", args, &reply) 100 | return &reply, err 101 | } 102 | 103 | func (st *storageTester) GetServers() (*storagerpc.GetServersReply, error) { 104 | args := &storagerpc.GetServersArgs{} 105 | var reply storagerpc.GetServersReply 106 | err := st.srv.Call("StorageServer.GetServers", args, &reply) 107 | return &reply, err 108 | } 109 | 110 | func (st *storageTester) Put(key, value string) (*storagerpc.PutReply, error) { 111 | args := &storagerpc.PutArgs{Key: key, Value: value} 112 | var reply storagerpc.PutReply 113 | err := st.srv.Call("StorageServer.Put", args, &reply) 114 | return &reply, err 115 | } 116 | 117 | func (st *storageTester) Get(key string, 
wantlease bool) (*storagerpc.GetReply, error) { 118 | args := &storagerpc.GetArgs{Key: key, WantLease: wantlease, HostPort: st.myhostport} 119 | var reply storagerpc.GetReply 120 | err := st.srv.Call("StorageServer.Get", args, &reply) 121 | return &reply, err 122 | } 123 | 124 | func (st *storageTester) GetList(key string, wantlease bool) (*storagerpc.GetListReply, error) { 125 | args := &storagerpc.GetArgs{Key: key, WantLease: wantlease, HostPort: st.myhostport} 126 | var reply storagerpc.GetListReply 127 | err := st.srv.Call("StorageServer.GetList", args, &reply) 128 | return &reply, err 129 | } 130 | 131 | func (st *storageTester) RemoveFromList(key, removeitem string) (*storagerpc.PutReply, error) { 132 | args := &storagerpc.PutArgs{Key: key, Value: removeitem} 133 | var reply storagerpc.PutReply 134 | err := st.srv.Call("StorageServer.RemoveFromList", args, &reply) 135 | return &reply, err 136 | } 137 | 138 | func (st *storageTester) AppendToList(key, newitem string) (*storagerpc.PutReply, error) { 139 | args := &storagerpc.PutArgs{Key: key, Value: newitem} 140 | var reply storagerpc.PutReply 141 | err := st.srv.Call("StorageServer.AppendToList", args, &reply) 142 | return &reply, err 143 | } 144 | 145 | // Check error and status 146 | func checkErrorStatus(err error, status, expectedStatus storagerpc.Status) bool { 147 | if err != nil { 148 | LOGE.Println("FAIL: unexpected error returned:", err) 149 | failCount++ 150 | return true 151 | } 152 | if status != expectedStatus { 153 | LOGE.Printf("FAIL: incorrect status %s, expected status %s\n", statusMap[status], statusMap[expectedStatus]) 154 | failCount++ 155 | return true 156 | } 157 | return false 158 | } 159 | 160 | // Check error 161 | func checkError(err error, expectError bool) bool { 162 | if expectError { 163 | if err == nil { 164 | LOGE.Println("FAIL: non-nil error should be returned") 165 | failCount++ 166 | return true 167 | } 168 | } else { 169 | if err != nil { 170 | LOGE.Println("FAIL: unexpected 
error returned:", err) 171 | failCount++ 172 | return true 173 | } 174 | } 175 | return false 176 | } 177 | 178 | // Check list 179 | func checkList(list []string, expectedList []string) bool { 180 | if len(list) != len(expectedList) { 181 | LOGE.Printf("FAIL: incorrect list %v, expected list %v\n", list, expectedList) 182 | failCount++ 183 | return true 184 | } 185 | m := make(map[string]bool) 186 | for _, s := range list { 187 | m[s] = true 188 | } 189 | for _, s := range expectedList { 190 | if m[s] == false { 191 | LOGE.Printf("FAIL: incorrect list %v, expected list %v\n", list, expectedList) 192 | failCount++ 193 | return true 194 | } 195 | } 196 | return false 197 | } 198 | 199 | // We treat an RPC call finished in 0.5 seconds as OK 200 | func isTimeOK(d time.Duration) bool { 201 | return d < 500*time.Millisecond 202 | } 203 | 204 | // Cache a key 205 | func cacheKey(key string) bool { 206 | replyP, err := st.Put(key, "old-value") 207 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 208 | return true 209 | } 210 | 211 | // get and cache key 212 | replyG, err := st.Get(key, true) 213 | if checkErrorStatus(err, replyG.Status, storagerpc.OK) { 214 | return true 215 | } 216 | if !replyG.Lease.Granted { 217 | LOGE.Println("FAIL: Failed to get lease") 218 | failCount++ 219 | return true 220 | } 221 | return false 222 | } 223 | 224 | // Cache a list key 225 | func cacheKeyList(key string) bool { 226 | replyP, err := st.AppendToList(key, "old-value") 227 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 228 | return true 229 | } 230 | 231 | // get and cache key 232 | replyL, err := st.GetList(key, true) 233 | if checkErrorStatus(err, replyL.Status, storagerpc.OK) { 234 | return true 235 | } 236 | if !replyL.Lease.Granted { 237 | LOGE.Println("FAIL: Failed to get lease") 238 | failCount++ 239 | return true 240 | } 241 | return false 242 | } 243 | 244 | ///////////////////////////////////////////// 245 | // test storage server initialization 246 | 
///////////////////////////////////////////// 247 | 248 | // make sure to run N-1 servers in shell before entering this function 249 | func testInitStorageServers() { 250 | // test get server 251 | replyGS, err := st.GetServers() 252 | if checkError(err, false) { 253 | return 254 | } 255 | if replyGS.Status == storagerpc.OK { 256 | LOGE.Println("FAIL: storage system should not be ready:", err) 257 | failCount++ 258 | return 259 | } 260 | 261 | // test register 262 | replyR, err := st.RegisterServer() 263 | if checkError(err, false) { 264 | return 265 | } 266 | if replyR.Status != storagerpc.OK || replyR.Servers == nil { 267 | LOGE.Println("FAIL: storage system should be ready and Servers field should be non-nil:", err) 268 | failCount++ 269 | return 270 | } 271 | if len(replyR.Servers) != (*numServer) { 272 | LOGE.Println("FAIL: storage system returned wrong server list:", err) 273 | failCount++ 274 | return 275 | } 276 | 277 | // test key range 278 | replyG, err := st.Get("wrongkey:1", false) 279 | if checkErrorStatus(err, replyG.Status, storagerpc.WrongServer) { 280 | return 281 | } 282 | fmt.Println("PASS") 283 | passCount++ 284 | } 285 | 286 | ///////////////////////////////////////////// 287 | // test basic storage operations 288 | ///////////////////////////////////////////// 289 | 290 | // Get keys without and with wantlease 291 | func testPutGet() { 292 | // get an invalid key 293 | replyG, err := st.Get("nullkey:1", false) 294 | if checkErrorStatus(err, replyG.Status, storagerpc.KeyNotFound) { 295 | return 296 | } 297 | 298 | replyP, err := st.Put("keyputget:1", "value") 299 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 300 | return 301 | } 302 | 303 | // without asking for a lease 304 | replyG, err = st.Get("keyputget:1", false) 305 | if checkErrorStatus(err, replyG.Status, storagerpc.OK) { 306 | return 307 | } 308 | if replyG.Value != "value" { 309 | LOGE.Println("FAIL: got wrong value") 310 | failCount++ 311 | return 312 | } 313 | if 
replyG.Lease.Granted { 314 | LOGE.Println("FAIL: did not apply for lease") 315 | failCount++ 316 | return 317 | } 318 | 319 | // now I want a lease this time 320 | replyG, err = st.Get("keyputget:1", true) 321 | if checkErrorStatus(err, replyG.Status, storagerpc.OK) { 322 | return 323 | } 324 | if replyG.Value != "value" { 325 | LOGE.Println("FAIL: got wrong value") 326 | failCount++ 327 | return 328 | } 329 | if !replyG.Lease.Granted { 330 | LOGE.Println("FAIL: did not get lease") 331 | failCount++ 332 | return 333 | } 334 | 335 | fmt.Println("PASS") 336 | passCount++ 337 | } 338 | 339 | // list related operations 340 | func testAppendGetRemoveList() { 341 | // test AppendToList 342 | replyP, err := st.AppendToList("keylist:1", "value1") 343 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 344 | return 345 | } 346 | 347 | // test GetList 348 | replyL, err := st.GetList("keylist:1", false) 349 | if checkErrorStatus(err, replyL.Status, storagerpc.OK) { 350 | return 351 | } 352 | if len(replyL.Value) != 1 || replyL.Value[0] != "value1" { 353 | LOGE.Println("FAIL: got wrong value") 354 | failCount++ 355 | return 356 | } 357 | 358 | // test AppendToList for a duplicated item 359 | replyP, err = st.AppendToList("keylist:1", "value1") 360 | if checkErrorStatus(err, replyP.Status, storagerpc.ItemExists) { 361 | return 362 | } 363 | 364 | // test AppendToList for a different item 365 | replyP, err = st.AppendToList("keylist:1", "value2") 366 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 367 | return 368 | } 369 | 370 | // test RemoveFromList for the first item 371 | replyP, err = st.RemoveFromList("keylist:1", "value1") 372 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 373 | return 374 | } 375 | 376 | // test RemoveFromList for removed item 377 | replyP, err = st.RemoveFromList("keylist:1", "value1") 378 | if checkErrorStatus(err, replyP.Status, storagerpc.ItemNotFound) { 379 | return 380 | } 381 | 382 | // test GetList after 
RemoveFromList 383 | replyL, err = st.GetList("keylist:1", false) 384 | if checkErrorStatus(err, replyL.Status, storagerpc.OK) { 385 | return 386 | } 387 | if len(replyL.Value) != 1 || replyL.Value[0] != "value2" { 388 | LOGE.Println("FAIL: got wrong value") 389 | failCount++ 390 | return 391 | } 392 | 393 | fmt.Println("PASS") 394 | passCount++ 395 | } 396 | 397 | ///////////////////////////////////////////// 398 | // test revoke related 399 | ///////////////////////////////////////////// 400 | 401 | // Without leasing, we should not expect revoke 402 | func testUpdateWithoutLease() { 403 | key := "revokekey:0" 404 | 405 | replyP, err := st.Put(key, "value") 406 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 407 | return 408 | } 409 | 410 | // get without caching this item 411 | replyG, err := st.Get(key, false) 412 | if checkErrorStatus(err, replyG.Status, storagerpc.OK) { 413 | return 414 | } 415 | 416 | // update this key 417 | replyP, err = st.Put(key, "value1") 418 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 419 | return 420 | } 421 | 422 | // get without caching this item 423 | replyG, err = st.Get(key, false) 424 | if checkErrorStatus(err, replyG.Status, storagerpc.OK) { 425 | return 426 | } 427 | 428 | if st.recvRevoke[key] { 429 | LOGE.Println("FAIL: expect no revoke") 430 | failCount++ 431 | return 432 | } 433 | 434 | fmt.Println("PASS") 435 | passCount++ 436 | } 437 | 438 | // updating a key before its lease expires 439 | // expect a revoke msg from storage server 440 | func testUpdateBeforeLeaseExpire() { 441 | key := "revokekey:1" 442 | 443 | if cacheKey(key) { 444 | return 445 | } 446 | 447 | // update this key 448 | replyP, err := st.Put(key, "value1") 449 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 450 | return 451 | } 452 | 453 | // read it back 454 | replyG, err := st.Get(key, false) 455 | if checkErrorStatus(err, replyG.Status, storagerpc.OK) { 456 | return 457 | } 458 | if replyG.Value != "value1" { 459 
| LOGE.Println("FAIL: got wrong value") 460 | failCount++ 461 | return 462 | } 463 | 464 | // expect a revoke msg, check if we receive it 465 | if !st.recvRevoke[key] { 466 | LOGE.Println("FAIL: did not receive revoke") 467 | failCount++ 468 | return 469 | } 470 | 471 | fmt.Println("PASS") 472 | passCount++ 473 | } 474 | 475 | // updating a key after its lease expires 476 | // expect no revoke msg received from storage server 477 | func testUpdateAfterLeaseExpire() { 478 | key := "revokekey:2" 479 | 480 | if cacheKey(key) { 481 | return 482 | } 483 | 484 | // sleep until lease expires 485 | time.Sleep((storagerpc.LeaseSeconds + storagerpc.LeaseGuardSeconds + 1) * time.Second) 486 | 487 | // update this key 488 | replyP, err := st.Put(key, "value1") 489 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 490 | return 491 | } 492 | 493 | // read back this item 494 | replyG, err := st.Get(key, false) 495 | if checkErrorStatus(err, replyG.Status, storagerpc.OK) { 496 | return 497 | } 498 | if replyG.Value != "value1" { 499 | LOGE.Println("FAIL: got wrong value") 500 | failCount++ 501 | return 502 | } 503 | 504 | // expect no revoke msg, check if we receive any 505 | if st.recvRevoke[key] { 506 | LOGE.Println("FAIL: should not receive revoke") 507 | failCount++ 508 | return 509 | } 510 | 511 | fmt.Println("PASS") 512 | passCount++ 513 | } 514 | 515 | // helper function for delayed revoke tests 516 | func delayedRevoke(key string, f func() bool) bool { 517 | if cacheKey(key) { 518 | return true 519 | } 520 | 521 | // trigger a delayed revocation in background 522 | var replyP *storagerpc.PutReply 523 | var err error 524 | putCh := make(chan bool) 525 | doneCh := make(chan bool) 526 | go func() { 527 | // put key1 again to trigger a revoke 528 | replyP, err = st.Put(key, "new-value") 529 | putCh <- true 530 | }() 531 | // ensure Put has gotten to server 532 | time.Sleep(100 * time.Millisecond) 533 | 534 | // run rest of function in go routine to allow for timeouts 
535 | go func() { 536 | // run rest of test function 537 | ret := f() 538 | // wait for put to complete 539 | <-putCh 540 | // check for failures 541 | if ret { 542 | doneCh <- true 543 | return 544 | } 545 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 546 | doneCh <- true 547 | return 548 | } 549 | doneCh <- false 550 | }() 551 | 552 | // wait for test completion or timeout 553 | select { 554 | case ret := <-doneCh: 555 | return ret 556 | case <-time.After((storagerpc.LeaseSeconds + storagerpc.LeaseGuardSeconds + 1) * time.Second): 557 | break 558 | } 559 | LOGE.Println("FAIL: timeout, may erroneously increase test count") 560 | failCount++ 561 | return true 562 | } 563 | 564 | // when revoking leases for key "x", 565 | // storage server should not block queries for other keys 566 | func testDelayedRevokeWithoutBlocking() { 567 | st.SetDelay(0.5) 568 | defer st.ResetDelay() 569 | 570 | key1 := "revokekey:3" 571 | key2 := "revokekey:4" 572 | 573 | // function called during revoke of key1 574 | f := func() bool { 575 | ts := time.Now() 576 | // put key2, this should not block 577 | replyP, err := st.Put(key2, "value") 578 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 579 | return true 580 | } 581 | if !isTimeOK(time.Since(ts)) { 582 | LOGE.Println("FAIL: concurrent Put got blocked") 583 | failCount++ 584 | return true 585 | } 586 | 587 | ts = time.Now() 588 | // get key2, this should not block 589 | replyG, err := st.Get(key2, false) 590 | if checkErrorStatus(err, replyG.Status, storagerpc.OK) { 591 | return true 592 | } 593 | if replyG.Value != "value" { 594 | LOGE.Println("FAIL: get got wrong value") 595 | failCount++ 596 | return true 597 | } 598 | if !isTimeOK(time.Since(ts)) { 599 | LOGE.Println("FAIL: concurrent Get got blocked") 600 | failCount++ 601 | return true 602 | } 603 | return false 604 | } 605 | 606 | if delayedRevoke(key1, f) { 607 | return 608 | } 609 | fmt.Println("PASS") 610 | passCount++ 611 | } 612 | 613 | // when 
revoking leases for key "x", 614 | // storage server should stop leasing for "x" 615 | // before revoking completes or old lease expires. 616 | // this function tests the former case 617 | func testDelayedRevokeWithLeaseRequest1() { 618 | st.SetDelay(0.5) // Revoke finishes before lease expires 619 | defer st.ResetDelay() 620 | 621 | key1 := "revokekey:5" 622 | 623 | // function called during revoke of key1 624 | f := func() bool { 625 | ts := time.Now() 626 | // get key1 and want a lease 627 | replyG, err := st.Get(key1, true) 628 | if checkErrorStatus(err, replyG.Status, storagerpc.OK) { 629 | return true 630 | } 631 | if isTimeOK(time.Since(ts)) { 632 | // in this case, server should reply old value and refuse lease 633 | if replyG.Lease.Granted || replyG.Value != "old-value" { 634 | LOGE.Println("FAIL: server should return old value and not grant lease") 635 | failCount++ 636 | return true 637 | } 638 | } else { 639 | if !st.compRevoke[key1] || (!replyG.Lease.Granted || replyG.Value != "new-value") { 640 | LOGE.Println("FAIL: server should return new value and grant lease") 641 | failCount++ 642 | return true 643 | } 644 | } 645 | return false 646 | } 647 | 648 | if delayedRevoke(key1, f) { 649 | return 650 | } 651 | fmt.Println("PASS") 652 | passCount++ 653 | } 654 | 655 | // when revoking leases for key "x", 656 | // storage server should stop leasing for "x" 657 | // before revoking completes or old lease expires. 
658 | // this function tests the latter case 659 | // The diff from the previous test is 660 | // st.compRevoke[key1] in the else case 661 | func testDelayedRevokeWithLeaseRequest2() { 662 | st.SetDelay(2) // Lease expires before revoking finishes 663 | defer st.ResetDelay() 664 | 665 | key1 := "revokekey:15" 666 | 667 | // function called during revoke of key1 668 | f := func() bool { 669 | ts := time.Now() 670 | // get key1 and want a lease 671 | replyG, err := st.Get(key1, true) 672 | if checkErrorStatus(err, replyG.Status, storagerpc.OK) { 673 | return true 674 | } 675 | if isTimeOK(time.Since(ts)) { 676 | // in this case, server should reply old value and refuse lease 677 | if replyG.Lease.Granted || replyG.Value != "old-value" { 678 | LOGE.Println("FAIL: server should return old value and not grant lease") 679 | failCount++ 680 | return true 681 | } 682 | } else { 683 | if st.compRevoke[key1] || (!replyG.Lease.Granted || replyG.Value != "new-value") { 684 | LOGE.Println("FAIL: server should return new value and grant lease") 685 | failCount++ 686 | return true 687 | } 688 | } 689 | return false 690 | } 691 | 692 | if delayedRevoke(key1, f) { 693 | return 694 | } 695 | fmt.Println("PASS") 696 | passCount++ 697 | } 698 | 699 | // when revoking leases for key "x", 700 | // storage server should hold upcoming updates for "x", 701 | // until either all revocations complete or the lease expires 702 | // this function tests the former case 703 | func testDelayedRevokeWithUpdate1() { 704 | st.SetDelay(0.5) // revocation takes longer, but still completes before lease expires 705 | defer st.ResetDelay() 706 | 707 | key1 := "revokekey:6" 708 | 709 | // function called during revoke of key1 710 | f := func() bool { 711 | // put key1, this should block 712 | replyP, err := st.Put(key1, "newnew-value") 713 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 714 | return true 715 | } 716 | if !st.compRevoke[key1] { 717 | LOGE.Println("FAIL: storage server should hold 
modifications to key x until all lease holders of x have been revoked") 718 | failCount++ 719 | return true 720 | } 721 | replyG, err := st.Get(key1, false) 722 | if checkErrorStatus(err, replyG.Status, storagerpc.OK) { 723 | return true 724 | } 725 | if replyG.Value != "newnew-value" { 726 | LOGE.Println("FAIL: got wrong value") 727 | failCount++ 728 | return true 729 | } 730 | return false 731 | } 732 | 733 | if delayedRevoke(key1, f) { 734 | return 735 | } 736 | fmt.Println("PASS") 737 | passCount++ 738 | } 739 | 740 | // when revoking leases for key "x", 741 | // storage server should hold upcoming updates for "x", 742 | // until either all revocations complete or the lease expires 743 | // this function tests the latter case 744 | func testDelayedRevokeWithUpdate2() { 745 | st.SetDelay(2) // lease expires before revocation completes 746 | defer st.ResetDelay() 747 | 748 | key1 := "revokekey:7" 749 | 750 | // function called during revoke of key1 751 | f := func() bool { 752 | ts := time.Now() 753 | // put key1, this should block 754 | replyP, err := st.Put(key1, "newnew-value") 755 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 756 | return true 757 | } 758 | d := time.Since(ts) 759 | if d < (storagerpc.LeaseSeconds+storagerpc.LeaseGuardSeconds-1)*time.Second { 760 | LOGE.Println("FAIL: storage server should hold this Put until the lease on key1 expires") 761 | failCount++ 762 | return true 763 | } 764 | if st.compRevoke[key1] { 765 | LOGE.Println("FAIL: storage server should not hold this Put until revocation of key1 completes") 766 | failCount++ 767 | return true 768 | } 769 | replyG, err := st.Get(key1, false) 770 | if checkErrorStatus(err, replyG.Status, storagerpc.OK) { 771 | return true 772 | } 773 | if replyG.Value != "newnew-value" { 774 | LOGE.Println("FAIL: got wrong value") 775 | failCount++ 776 | return true 777 | } 778 | return false 779 | } 780 | 781 | if delayedRevoke(key1, f) { 782 | return 783 | } 784 | fmt.Println("PASS") 785 | 
passCount++ 786 | } 787 | 788 | // remote libstores may not reply to all RevokeLease RPC calls. 789 | // in this case, the service should continue once the lease expires 790 | func testDelayedRevokeWithUpdate3() { 791 | st.SetDelay(2) // lease expires before revocation completes 792 | defer st.ResetDelay() 793 | 794 | key1 := "revokekey:8" 795 | 796 | // function called during revoke of key1 797 | f := func() bool { 798 | // sleep here until lease expires on the remote server 799 | time.Sleep((storagerpc.LeaseSeconds + storagerpc.LeaseGuardSeconds) * time.Second) 800 | 801 | // put key1, this should not block 802 | ts := time.Now() 803 | replyP, err := st.Put(key1, "newnew-value") 804 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 805 | return true 806 | } 807 | if !isTimeOK(time.Since(ts)) { 808 | LOGE.Println("FAIL: storage server should not block this Put") 809 | failCount++ 810 | return true 811 | } 812 | // get key1 and want lease, this should not block 813 | ts = time.Now() 814 | replyG, err := st.Get(key1, true) 815 | if checkErrorStatus(err, replyG.Status, storagerpc.OK) { 816 | return true 817 | } 818 | if replyG.Value != "newnew-value" { 819 | LOGE.Println("FAIL: got wrong value") 820 | failCount++ 821 | return true 822 | } 823 | if !isTimeOK(time.Since(ts)) { 824 | LOGE.Println("FAIL: storage server should not block this Get") 825 | failCount++ 826 | return true 827 | } 828 | return false 829 | } 830 | 831 | if delayedRevoke(key1, f) { 832 | return 833 | } 834 | fmt.Println("PASS") 835 | passCount++ 836 | } 837 | 838 | // Without leasing, we should not expect a revoke 839 | func testUpdateListWithoutLease() { 840 | key := "revokelistkey:0" 841 | 842 | replyP, err := st.AppendToList(key, "value") 843 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 844 | return 845 | } 846 | 847 | // get without caching this item 848 | replyL, err := st.GetList(key, false) 849 | if checkErrorStatus(err, replyL.Status, storagerpc.OK) { 850 | return 851 | }
852 | 853 | // update this key 854 | replyP, err = st.AppendToList(key, "value1") 855 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 856 | return 857 | } 858 | replyP, err = st.RemoveFromList(key, "value1") 859 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 860 | return 861 | } 862 | 863 | // get without caching this item 864 | replyL, err = st.GetList(key, false) 865 | if checkErrorStatus(err, replyL.Status, storagerpc.OK) { 866 | return 867 | } 868 | 869 | if st.recvRevoke[key] { 870 | LOGE.Println("FAIL: expect no revoke") 871 | failCount++ 872 | return 873 | } 874 | 875 | fmt.Println("PASS") 876 | passCount++ 877 | } 878 | 879 | // updating a key before its lease expires 880 | // expect a revoke msg from storage server 881 | func testUpdateListBeforeLeaseExpire() { 882 | key := "revokelistkey:1" 883 | 884 | if cacheKeyList(key) { 885 | return 886 | } 887 | 888 | // update this key 889 | replyP, err := st.AppendToList(key, "value1") 890 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 891 | return 892 | } 893 | replyP, err = st.RemoveFromList(key, "value1") 894 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 895 | return 896 | } 897 | 898 | // read it back 899 | replyL, err := st.GetList(key, false) 900 | if checkErrorStatus(err, replyL.Status, storagerpc.OK) { 901 | return 902 | } 903 | if len(replyL.Value) != 1 || replyL.Value[0] != "old-value" { 904 | LOGE.Println("FAIL: got wrong value") 905 | failCount++ 906 | return 907 | } 908 | 909 | // expect a revoke msg, check if we receive it 910 | if !st.recvRevoke[key] { 911 | LOGE.Println("FAIL: did not receive revoke") 912 | failCount++ 913 | return 914 | } 915 | 916 | fmt.Println("PASS") 917 | passCount++ 918 | } 919 | 920 | // updating a key after its lease expires 921 | // expect no revoke msg received from storage server 922 | func testUpdateListAfterLeaseExpire() { 923 | key := "revokelistkey:2" 924 | 925 | if cacheKeyList(key) { 926 | return 927 | } 928 | 929 | 
// sleep until lease expires 930 | time.Sleep((storagerpc.LeaseSeconds + storagerpc.LeaseGuardSeconds + 1) * time.Second) 931 | 932 | // update this key 933 | replyP, err := st.AppendToList(key, "value1") 934 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 935 | return 936 | } 937 | replyP, err = st.RemoveFromList(key, "value1") 938 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 939 | return 940 | } 941 | 942 | // read back this item 943 | replyL, err := st.GetList(key, false) 944 | if checkErrorStatus(err, replyL.Status, storagerpc.OK) { 945 | return 946 | } 947 | if len(replyL.Value) != 1 || replyL.Value[0] != "old-value" { 948 | LOGE.Println("FAIL: got wrong value") 949 | failCount++ 950 | return 951 | } 952 | 953 | // expect no revoke msg, check if we receive any 954 | if st.recvRevoke[key] { 955 | LOGE.Println("FAIL: should not receive revoke") 956 | failCount++ 957 | return 958 | } 959 | 960 | fmt.Println("PASS") 961 | passCount++ 962 | } 963 | 964 | // helper function for delayed revoke tests 965 | func delayedRevokeList(key string, f func() bool) bool { 966 | if cacheKeyList(key) { 967 | return true 968 | } 969 | 970 | // trigger a delayed revocation in background 971 | var replyP *storagerpc.PutReply 972 | var err error 973 | appendCh := make(chan bool) 974 | doneCh := make(chan bool) 975 | go func() { 976 | // append key to trigger a revoke 977 | replyP, err = st.AppendToList(key, "new-value") 978 | appendCh <- true 979 | }() 980 | // ensure the Append has reached the server 981 | time.Sleep(100 * time.Millisecond) 982 | 983 | // run rest of function in a goroutine to allow for timeouts 984 | go func() { 985 | // run rest of test function 986 | ret := f() 987 | // wait for append to complete 988 | <-appendCh 989 | // check for failures 990 | if ret { 991 | doneCh <- true 992 | return 993 | } 994 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 995 | doneCh <- true 996 | return 997 | } 998 | doneCh <- false 999 | }() 1000 | 1001 | //
wait for test completion or timeout 1002 | select { 1003 | case ret := <-doneCh: 1004 | return ret 1005 | case <-time.After((storagerpc.LeaseSeconds + storagerpc.LeaseGuardSeconds + 1) * time.Second): 1006 | break 1007 | } 1008 | LOGE.Println("FAIL: timeout, may erroneously increase test count") 1009 | failCount++ 1010 | return true 1011 | } 1012 | 1013 | // when revoking leases for key "x", 1014 | // storage server should not block queries for other keys 1015 | func testDelayedRevokeListWithoutBlocking() { 1016 | st.SetDelay(0.5) 1017 | defer st.ResetDelay() 1018 | 1019 | key1 := "revokelistkey:3" 1020 | key2 := "revokelistkey:4" 1021 | 1022 | // function called during revoke of key1 1023 | f := func() bool { 1024 | ts := time.Now() 1025 | // put key2, this should not block 1026 | replyP, err := st.AppendToList(key2, "value") 1027 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 1028 | return true 1029 | } 1030 | if !isTimeOK(time.Since(ts)) { 1031 | LOGE.Println("FAIL: concurrent Append got blocked") 1032 | failCount++ 1033 | return true 1034 | } 1035 | 1036 | ts = time.Now() 1037 | // get key2, this should not block 1038 | replyL, err := st.GetList(key2, false) 1039 | if checkErrorStatus(err, replyL.Status, storagerpc.OK) { 1040 | return true 1041 | } 1042 | if len(replyL.Value) != 1 || replyL.Value[0] != "value" { 1043 | LOGE.Println("FAIL: GetList got wrong value") 1044 | failCount++ 1045 | return true 1046 | } 1047 | if !isTimeOK(time.Since(ts)) { 1048 | LOGE.Println("FAIL: concurrent GetList got blocked") 1049 | failCount++ 1050 | return true 1051 | } 1052 | return false 1053 | } 1054 | 1055 | if delayedRevokeList(key1, f) { 1056 | return 1057 | } 1058 | fmt.Println("PASS") 1059 | passCount++ 1060 | } 1061 | 1062 | // when revoking leases for key "x", 1063 | // storage server should stop leasing for "x" 1064 | // before revoking completes or old lease expires. 
1065 | // this function tests the former case 1066 | func testDelayedRevokeListWithLeaseRequest1() { 1067 | st.SetDelay(0.5) // Revoke finishes before lease expires 1068 | defer st.ResetDelay() 1069 | 1070 | key1 := "revokelistkey:5" 1071 | 1072 | // function called during revoke of key1 1073 | f := func() bool { 1074 | ts := time.Now() 1075 | // get key1 and want a lease 1076 | replyL, err := st.GetList(key1, true) 1077 | if checkErrorStatus(err, replyL.Status, storagerpc.OK) { 1078 | return true 1079 | } 1080 | if isTimeOK(time.Since(ts)) { 1081 | // in this case, server should reply old value and refuse lease 1082 | if replyL.Lease.Granted || len(replyL.Value) != 1 || replyL.Value[0] != "old-value" { 1083 | LOGE.Println("FAIL: server should return old value and not grant lease") 1084 | failCount++ 1085 | return true 1086 | } 1087 | } else { 1088 | if checkList(replyL.Value, []string{"old-value", "new-value"}) { 1089 | return true 1090 | } 1091 | if !st.compRevoke[key1] || !replyL.Lease.Granted { 1092 | LOGE.Println("FAIL: server should grant lease in this case") 1093 | failCount++ 1094 | return true 1095 | } 1096 | } 1097 | return false 1098 | } 1099 | 1100 | if delayedRevokeList(key1, f) { 1101 | return 1102 | } 1103 | fmt.Println("PASS") 1104 | passCount++ 1105 | } 1106 | 1107 | // when revoking leases for key "x", 1108 | // storage server should stop leasing for "x" 1109 | // before revoking completes or old lease expires. 
1110 | // this function tests the latter case 1111 | // The diff from the previous test is 1112 | // st.compRevoke[key1] in the else case 1113 | func testDelayedRevokeListWithLeaseRequest2() { 1114 | st.SetDelay(2) // Lease expires before revoking finishes 1115 | defer st.ResetDelay() 1116 | 1117 | key1 := "revokelistkey:15" 1118 | 1119 | // function called during revoke of key1 1120 | f := func() bool { 1121 | ts := time.Now() 1122 | // get key1 and want a lease 1123 | replyL, err := st.GetList(key1, true) 1124 | if checkErrorStatus(err, replyL.Status, storagerpc.OK) { 1125 | return true 1126 | } 1127 | if isTimeOK(time.Since(ts)) { 1128 | // in this case, server should reply old value and refuse lease 1129 | if replyL.Lease.Granted || len(replyL.Value) != 1 || replyL.Value[0] != "old-value" { 1130 | LOGE.Println("FAIL: server should return old value and not grant lease") 1131 | failCount++ 1132 | return true 1133 | } 1134 | } else { 1135 | if checkList(replyL.Value, []string{"old-value", "new-value"}) { 1136 | return true 1137 | } 1138 | if st.compRevoke[key1] || !replyL.Lease.Granted { 1139 | LOGE.Println("FAIL: server should grant lease in this case") 1140 | failCount++ 1141 | return true 1142 | } 1143 | } 1144 | return false 1145 | } 1146 | 1147 | if delayedRevokeList(key1, f) { 1148 | return 1149 | } 1150 | fmt.Println("PASS") 1151 | passCount++ 1152 | } 1153 | 1154 | // when revoking leases for key "x", 1155 | // storage server should hold upcoming updates for "x", 1156 | // until either all revocations complete or the lease expires 1157 | // this function tests the former case 1158 | func testDelayedRevokeListWithUpdate1() { 1159 | st.SetDelay(0.5) // revocation takes longer, but still completes before lease expires 1160 | defer st.ResetDelay() 1161 | 1162 | key1 := "revokelistkey:6" 1163 | 1164 | // function called during revoke of key1 1165 | f := func() bool { 1166 | // put key1, this should block 1167 | replyP, err := st.AppendToList(key1, 
"newnew-value") 1168 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 1169 | return true 1170 | } 1171 | if !st.compRevoke[key1] { 1172 | LOGE.Println("FAIL: storage server should hold modification to key x during finishing revocating all lease holders of x") 1173 | failCount++ 1174 | return true 1175 | } 1176 | replyL, err := st.GetList(key1, false) 1177 | if checkErrorStatus(err, replyL.Status, storagerpc.OK) { 1178 | return true 1179 | } 1180 | if checkList(replyL.Value, []string{"old-value", "new-value", "newnew-value"}) { 1181 | return true 1182 | } 1183 | return false 1184 | } 1185 | 1186 | if delayedRevokeList(key1, f) { 1187 | return 1188 | } 1189 | fmt.Println("PASS") 1190 | passCount++ 1191 | } 1192 | 1193 | // when revoking leases for key "x", 1194 | // storage server should hold upcoming updates for "x", 1195 | // until either all revocations complete or the lease expires 1196 | // this function tests the latter case 1197 | func testDelayedRevokeListWithUpdate2() { 1198 | st.SetDelay(2) // lease expires before revocation completes 1199 | defer st.ResetDelay() 1200 | 1201 | key1 := "revokelistkey:7" 1202 | 1203 | // function called during revoke of key1 1204 | f := func() bool { 1205 | ts := time.Now() 1206 | // put key1, this should block 1207 | replyP, err := st.AppendToList(key1, "newnew-value") 1208 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 1209 | return true 1210 | } 1211 | d := time.Since(ts) 1212 | if d < (storagerpc.LeaseSeconds+storagerpc.LeaseGuardSeconds-1)*time.Second { 1213 | LOGE.Println("FAIL: storage server should hold this Put until leases expires key1") 1214 | failCount++ 1215 | return true 1216 | } 1217 | if st.compRevoke[key1] { 1218 | LOGE.Println("FAIL: storage server should not block this Put till the lease revoke of key1") 1219 | failCount++ 1220 | return true 1221 | } 1222 | replyL, err := st.GetList(key1, false) 1223 | if checkErrorStatus(err, replyL.Status, storagerpc.OK) { 1224 | return true 1225 | } 
1226 | if checkList(replyL.Value, []string{"old-value", "new-value", "newnew-value"}) { 1227 | return true 1228 | } 1229 | return false 1230 | } 1231 | 1232 | if delayedRevokeList(key1, f) { 1233 | return 1234 | } 1235 | fmt.Println("PASS") 1236 | passCount++ 1237 | } 1238 | 1239 | // remote libstores may not reply to all RevokeLease RPC calls. 1240 | // in this case, the service should continue once the lease expires 1241 | func testDelayedRevokeListWithUpdate3() { 1242 | st.SetDelay(2) // lease expires before revocation completes 1243 | defer st.ResetDelay() 1244 | 1245 | key1 := "revokelistkey:8" 1246 | 1247 | // function called during revoke of key1 1248 | f := func() bool { 1249 | // sleep here until lease expires on the remote server 1250 | time.Sleep((storagerpc.LeaseSeconds + storagerpc.LeaseGuardSeconds) * time.Second) 1251 | 1252 | // put key1, this should not block 1253 | ts := time.Now() 1254 | replyP, err := st.AppendToList(key1, "newnew-value") 1255 | if checkErrorStatus(err, replyP.Status, storagerpc.OK) { 1256 | return true 1257 | } 1258 | if !isTimeOK(time.Since(ts)) { 1259 | LOGE.Println("FAIL: storage server should not block this Put") 1260 | failCount++ 1261 | return true 1262 | } 1263 | // get key1 and want lease, this should not block 1264 | ts = time.Now() 1265 | replyL, err := st.GetList(key1, true) 1266 | if checkErrorStatus(err, replyL.Status, storagerpc.OK) { 1267 | return true 1268 | } 1269 | if checkList(replyL.Value, []string{"old-value", "new-value", "newnew-value"}) { 1270 | return true 1271 | } 1272 | if !isTimeOK(time.Since(ts)) { 1273 | LOGE.Println("FAIL: storage server should not block this Get") 1274 | failCount++ 1275 | return true 1276 | } 1277 | return false 1278 | } 1279 | 1280 | if delayedRevokeList(key1, f) { 1281 | return 1282 | } 1283 | fmt.Println("PASS") 1284 | passCount++ 1285 | } 1286 | 1287 | func main() { 1288 | jtests := []testFunc{{"testInitStorageServers", testInitStorageServers}} 1289 | btests := []testFunc{ 1290 | 
{"testPutGet", testPutGet}, 1291 | {"testAppendGetRemoveList", testAppendGetRemoveList}, 1292 | {"testUpdateWithoutLease", testUpdateWithoutLease}, 1293 | {"testUpdateBeforeLeaseExpire", testUpdateBeforeLeaseExpire}, 1294 | {"testUpdateAfterLeaseExpire", testUpdateAfterLeaseExpire}, 1295 | {"testDelayedRevokeWithoutBlocking", testDelayedRevokeWithoutBlocking}, 1296 | {"testDelayedRevokeWithLeaseRequest1", testDelayedRevokeWithLeaseRequest1}, 1297 | {"testDelayedRevokeWithLeaseRequest2", testDelayedRevokeWithLeaseRequest2}, 1298 | {"testDelayedRevokeWithUpdate1", testDelayedRevokeWithUpdate1}, 1299 | {"testDelayedRevokeWithUpdate2", testDelayedRevokeWithUpdate2}, 1300 | {"testDelayedRevokeWithUpdate3", testDelayedRevokeWithUpdate3}, 1301 | {"testUpdateListWithoutLease", testUpdateListWithoutLease}, 1302 | {"testUpdateListBeforeLeaseExpire", testUpdateListBeforeLeaseExpire}, 1303 | {"testUpdateListAfterLeaseExpire", testUpdateListAfterLeaseExpire}, 1304 | {"testDelayedRevokeListWithoutBlocking", testDelayedRevokeListWithoutBlocking}, 1305 | {"testDelayedRevokeListWithLeaseRequest1", testDelayedRevokeListWithLeaseRequest1}, 1306 | {"testDelayedRevokeListWithLeaseRequest2", testDelayedRevokeListWithLeaseRequest2}, 1307 | {"testDelayedRevokeListWithUpdate1", testDelayedRevokeListWithUpdate1}, 1308 | {"testDelayedRevokeListWithUpdate2", testDelayedRevokeListWithUpdate2}, 1309 | {"testDelayedRevokeListWithUpdate3", testDelayedRevokeListWithUpdate3}, 1310 | } 1311 | 1312 | flag.Parse() 1313 | if flag.NArg() < 1 { 1314 | LOGE.Fatalln("Usage: storagetest ") 1315 | } 1316 | 1317 | // Run the tests with a single tester 1318 | storageTester, err := initStorageTester(flag.Arg(0), fmt.Sprintf("localhost:%d", *portnum)) 1319 | if err != nil { 1320 | LOGE.Fatalln("Failed to initialize test:", err) 1321 | } 1322 | st = storageTester 1323 | 1324 | switch *testType { 1325 | case 1: 1326 | for _, t := range jtests { 1327 | if b, err := regexp.MatchString(*testRegex, t.name); b && err 
== nil { 1328 | fmt.Printf("Running %s:\n", t.name) 1329 | t.f() 1330 | } 1331 | } 1332 | case 2: 1333 | for _, t := range btests { 1334 | if b, err := regexp.MatchString(*testRegex, t.name); b && err == nil { 1335 | fmt.Printf("Running %s:\n", t.name) 1336 | t.f() 1337 | } 1338 | } 1339 | } 1340 | 1341 | fmt.Printf("Passed (%d/%d) tests\n", passCount, passCount+failCount) 1342 | } 1343 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/tests/stresstest/stresstest.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "flag" 5 | "fmt" 6 | "log" 7 | "math/rand" 8 | "os" 9 | "strconv" 10 | "strings" 11 | "time" 12 | 13 | "github.com/cmu440/tribbler/rpc/tribrpc" 14 | "github.com/cmu440/tribbler/tribclient" 15 | ) 16 | 17 | const ( 18 | GetSubscription = iota 19 | AddSubscription 20 | RemoveSubscription 21 | GetTribbles 22 | PostTribble 23 | GetTribblesBySubscription 24 | ) 25 | 26 | var ( 27 | portnum = flag.Int("port", 9010, "server port # to connect to") 28 | clientId = flag.String("clientId", "0", "client id for user") 29 | numCmds = flag.Int("numCmds", 1000, "number of random commands to execute") 30 | seed = flag.Int64("seed", 0, "seed for random number generator used to execute commands") 31 | ) 32 | 33 | var LOGE = log.New(os.Stderr, "", log.Lshortfile|log.Lmicroseconds) 34 | 35 | var statusMap = map[tribrpc.Status]string{ 36 | tribrpc.OK: "OK", 37 | tribrpc.NoSuchUser: "NoSuchUser", 38 | tribrpc.NoSuchTargetUser: "NoSuchTargetUser", 39 | tribrpc.Exists: "Exists", 40 | 0: "Unknown", 41 | } 42 | 43 | var ( 44 | // Debugging information (counts the total number of operations performed). 45 | gs, as, rs, gt, pt, gtbs int 46 | // Set this to true to print debug information. 
47 | debug bool 48 | ) 49 | 50 | func main() { 51 | flag.Parse() 52 | if flag.NArg() < 2 { 53 | LOGE.Fatalln("Usage: ./stressclient ") 54 | } 55 | 56 | client, err := tribclient.NewTribClient("localhost", *portnum) 57 | if err != nil { 58 | LOGE.Fatalln("FAIL: NewTribClient returned error:", err) 59 | } 60 | 61 | user := flag.Arg(0) 62 | userNum, err := strconv.Atoi(user) 63 | if err != nil { 64 | LOGE.Fatalf("FAIL: user %s not an integer\n", user) 65 | } 66 | numTargets, err := strconv.Atoi(flag.Arg(1)) 67 | if err != nil { 68 | LOGE.Fatalf("FAIL: numTargets invalid %s\n", flag.Arg(1)) 69 | } 70 | 71 | time.Sleep(1 * time.Second) 72 | _, err = client.CreateUser(user) 73 | if err != nil { 74 | LOGE.Fatalf("FAIL: error when creating userID '%s': %s\n", user, err) 75 | } 76 | 77 | tribIndex := 0 78 | if *seed == 0 { 79 | rand.Seed(time.Now().UnixNano()) 80 | } else { 81 | rand.Seed(*seed) 82 | } 83 | 84 | cmds := make([]int, *numCmds) 85 | for i := 0; i < *numCmds; i++ { 86 | cmds[i] = rand.Intn(6) 87 | switch cmds[i] { 88 | case GetSubscription: 89 | gs++ 90 | case AddSubscription: 91 | as++ 92 | case RemoveSubscription: 93 | rs++ 94 | case GetTribbles: 95 | gt++ 96 | case PostTribble: 97 | pt++ 98 | case GetTribblesBySubscription: 99 | gtbs++ 100 | } 101 | } 102 | 103 | if debug { 104 | // Prints out the total number of operations that will be performed. 
105 | fmt.Println("GetSubscriptions:", gs) 106 | fmt.Println("AddSubscription:", as) 107 | fmt.Println("RemoveSubscription:", rs) 108 | fmt.Println("GetTribbles:", gt) 109 | fmt.Println("PostTribble:", pt) 110 | fmt.Println("GetTribblesBySubscription:", gtbs) 111 | } 112 | 113 | for _, cmd := range cmds { 114 | switch cmd { 115 | case GetSubscription: 116 | subscriptions, status, err := client.GetSubscriptions(user) 117 | if err != nil { 118 | LOGE.Fatalf("FAIL: GetSubscriptions returned error '%s'\n", err) 119 | } 120 | if status == 0 || status == tribrpc.NoSuchUser { 121 | LOGE.Fatalf("FAIL: GetSubscriptions returned error status '%s'\n", statusMap[status]) 122 | } 123 | if !validateSubscriptions(&subscriptions) { 124 | LOGE.Fatalln("FAIL: failed while validating returned subscriptions") 125 | } 126 | case AddSubscription: 127 | target := rand.Intn(numTargets) 128 | status, err := client.AddSubscription(user, strconv.Itoa(target)) 129 | if err != nil { 130 | LOGE.Fatalf("FAIL: AddSubscription returned error '%s'\n", err) 131 | } 132 | if status == 0 || status == tribrpc.NoSuchUser { 133 | LOGE.Fatalf("FAIL: AddSubscription returned error status '%s'\n", statusMap[status]) 134 | } 135 | case RemoveSubscription: 136 | target := rand.Intn(numTargets) 137 | status, err := client.RemoveSubscription(user, strconv.Itoa(target)) 138 | if err != nil { 139 | LOGE.Fatalf("FAIL: RemoveSubscription returned error '%s'\n", err) 140 | } 141 | if status == 0 || status == tribrpc.NoSuchUser { 142 | LOGE.Fatalf("FAIL: RemoveSubscription returned error status '%s'\n", statusMap[status]) 143 | } 144 | case GetTribbles: 145 | target := rand.Intn(numTargets) 146 | tribbles, status, err := client.GetTribbles(strconv.Itoa(target)) 147 | if err != nil { 148 | LOGE.Fatalf("FAIL: GetTribbles returned error '%s'\n", err) 149 | } 150 | if status == 0 { 151 | LOGE.Fatalf("FAIL: GetTribbles returned error status '%s'\n", statusMap[status]) 152 | } 153 | if !validateTribbles(&tribbles, 
numTargets) { 154 | LOGE.Fatalln("FAIL: failed while validating returned tribbles") 155 | } 156 | case PostTribble: 157 | tribVal := userNum + tribIndex*numTargets 158 | msg := fmt.Sprintf("%d;%s", tribVal, *clientId) 159 | status, err := client.PostTribble(user, msg) 160 | if err != nil { 161 | LOGE.Fatalf("FAIL: PostTribble returned error '%s'\n", err) 162 | } 163 | if status == 0 || status == tribrpc.NoSuchUser { 164 | LOGE.Fatalf("FAIL: PostTribble returned error status '%s'\n", statusMap[status]) 165 | } 166 | tribIndex++ 167 | case GetTribblesBySubscription: 168 | tribbles, status, err := client.GetTribblesBySubscription(user) 169 | if err != nil { 170 | LOGE.Fatalf("FAIL: GetTribblesBySubscription returned error '%s'\n", err) 171 | } 172 | if status == 0 || status == tribrpc.NoSuchUser { 173 | LOGE.Fatalf("FAIL: GetTribblesBySubscription returned error status '%s'\n", statusMap[status]) 174 | } 175 | if !validateTribbles(&tribbles, numTargets) { 176 | LOGE.Fatalln("FAIL: failed while validating returned tribbles") 177 | } 178 | } 179 | } 180 | fmt.Println("PASS") 181 | os.Exit(7) 182 | } 183 | 184 | func validateSubscriptions(subscriptions *[]string) bool { 185 | subscriptionSet := make(map[string]bool, len(*subscriptions)) 186 | for _, subscription := range *subscriptions { 187 | if subscriptionSet[subscription] == true { 188 | return false 189 | } 190 | subscriptionSet[subscription] = true 191 | } 192 | return true 193 | } 194 | 195 | func validateTribbles(tribbles *[]tribrpc.Tribble, numTargets int) bool { 196 | userIdToLastVal := make(map[string]int, len(*tribbles)) 197 | for _, tribble := range *tribbles { 198 | valAndId := strings.Split(tribble.Contents, ";") 199 | val, err := strconv.Atoi(valAndId[0]) 200 | if err != nil { 201 | return false 202 | } 203 | user, err := strconv.Atoi(tribble.UserID) 204 | if err != nil { 205 | return false 206 | } 207 | userClientId := fmt.Sprintf("%s;%s", tribble.UserID, valAndId[1]) 208 | lastVal := 
userIdToLastVal[userClientId] 209 | if val%numTargets == user && (lastVal == 0 || lastVal == val+numTargets) { 210 | userIdToLastVal[userClientId] = val 211 | } else { 212 | return false 213 | } 214 | } 215 | return true 216 | } 217 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/tests/tribtest/tribtest.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "flag" 5 | "fmt" 6 | "log" 7 | "net" 8 | "net/http" 9 | "net/rpc" 10 | "os" 11 | "regexp" 12 | "strconv" 13 | "strings" 14 | 15 | "github.com/cmu440/tribbler/rpc/storagerpc" 16 | "github.com/cmu440/tribbler/rpc/tribrpc" 17 | "github.com/cmu440/tribbler/tests/proxycounter" 18 | "github.com/cmu440/tribbler/tribserver" 19 | ) 20 | 21 | type testFunc struct { 22 | name string 23 | f func() 24 | } 25 | 26 | var ( 27 | port = flag.Int("port", 9010, "TribServer port number") 28 | testRegex = flag.String("t", "", "test to run") 29 | passCount int 30 | failCount int 31 | pc proxycounter.ProxyCounter 32 | ts tribserver.TribServer 33 | ) 34 | 35 | var statusMap = map[tribrpc.Status]string{ 36 | tribrpc.OK: "OK", 37 | tribrpc.NoSuchUser: "NoSuchUser", 38 | tribrpc.NoSuchTargetUser: "NoSuchTargetUser", 39 | tribrpc.Exists: "Exists", 40 | 0: "Unknown", 41 | } 42 | 43 | var LOGE = log.New(os.Stderr, "", log.Lshortfile|log.Lmicroseconds) 44 | 45 | func initTribServer(masterServerHostPort string, tribServerPort int) error { 46 | tribServerHostPort := net.JoinHostPort("localhost", strconv.Itoa(tribServerPort)) 47 | proxyCounter, err := proxycounter.NewProxyCounter(masterServerHostPort, tribServerHostPort) 48 | if err != nil { 49 | LOGE.Println("Failed to setup test:", err) 50 | return err 51 | } 52 | pc = proxyCounter 53 | rpc.RegisterName("StorageServer", storagerpc.Wrap(pc)) 54 | 55 | // Create and start the TribServer. 
56 | tribServer, err := tribserver.NewTribServer(masterServerHostPort, tribServerHostPort) 57 | if err != nil { 58 | LOGE.Println("Failed to create TribServer:", err) 59 | return err 60 | } 61 | ts = tribServer 62 | return nil 63 | } 64 | 65 | // Cleanup tribserver and rpc hooks 66 | func cleanupTribServer(l net.Listener) { 67 | // Close listener to stop http serve thread 68 | if l != nil { 69 | l.Close() 70 | } 71 | // Recreate default http serve mux 72 | http.DefaultServeMux = http.NewServeMux() 73 | // Recreate default rpc server 74 | rpc.DefaultServer = rpc.NewServer() 75 | // Unset tribserver just in case 76 | ts = nil 77 | } 78 | 79 | // Check rpc and byte count limits. 80 | func checkLimits(rpcCountLimit, byteCountLimit uint32) bool { 81 | if pc.GetRpcCount() > rpcCountLimit { 82 | LOGE.Println("FAIL: using too many RPCs") 83 | failCount++ 84 | return true 85 | } 86 | if pc.GetByteCount() > byteCountLimit { 87 | LOGE.Println("FAIL: transferring too much data") 88 | failCount++ 89 | return true 90 | } 91 | return false 92 | } 93 | 94 | // Check error and status 95 | func checkErrorStatus(err error, status, expectedStatus tribrpc.Status) bool { 96 | if err != nil { 97 | LOGE.Println("FAIL: unexpected error returned:", err) 98 | failCount++ 99 | return true 100 | } 101 | if status != expectedStatus { 102 | LOGE.Printf("FAIL: incorrect status %s, expected status %s\n", statusMap[status], statusMap[expectedStatus]) 103 | failCount++ 104 | return true 105 | } 106 | return false 107 | } 108 | 109 | // Check subscriptions 110 | func checkSubscriptions(subs, expectedSubs []string) bool { 111 | if len(subs) != len(expectedSubs) { 112 | LOGE.Printf("FAIL: incorrect subscriptions %v, expected subscriptions %v\n", subs, expectedSubs) 113 | failCount++ 114 | return true 115 | } 116 | m := make(map[string]bool) 117 | for _, s := range subs { 118 | m[s] = true 119 | } 120 | for _, s := range expectedSubs { 121 | if m[s] == false { 122 | LOGE.Printf("FAIL: incorrect 
subscriptions %v, expected subscriptions %v\n", subs, expectedSubs) 123 | failCount++ 124 | return true 125 | } 126 | } 127 | return false 128 | } 129 | 130 | // Check tribbles 131 | func checkTribbles(tribbles, expectedTribbles []tribrpc.Tribble) bool { 132 | if len(tribbles) != len(expectedTribbles) { 133 | LOGE.Printf("FAIL: incorrect tribbles %v, expected tribbles %v\n", tribbles, expectedTribbles) 134 | failCount++ 135 | return true 136 | } 137 | lastTime := int64(0) 138 | for i := len(tribbles) - 1; i >= 0; i-- { 139 | if tribbles[i].UserID != expectedTribbles[i].UserID { 140 | LOGE.Printf("FAIL: incorrect tribbles %v, expected tribbles %v\n", tribbles, expectedTribbles) 141 | failCount++ 142 | return true 143 | } 144 | if tribbles[i].Contents != expectedTribbles[i].Contents { 145 | LOGE.Printf("FAIL: incorrect tribbles %v, expected tribbles %v\n", tribbles, expectedTribbles) 146 | failCount++ 147 | return true 148 | } 149 | if tribbles[i].Posted.UnixNano() < lastTime { 150 | LOGE.Println("FAIL: tribble timestamps not in reverse chronological order") 151 | failCount++ 152 | return true 153 | } 154 | lastTime = tribbles[i].Posted.UnixNano() 155 | } 156 | return false 157 | } 158 | 159 | // Helper functions 160 | func createUser(user string) (error, tribrpc.Status) { 161 | args := &tribrpc.CreateUserArgs{UserID: user} 162 | var reply tribrpc.CreateUserReply 163 | err := ts.CreateUser(args, &reply) 164 | return err, reply.Status 165 | } 166 | 167 | func addSubscription(user, target string) (error, tribrpc.Status) { 168 | args := &tribrpc.SubscriptionArgs{UserID: user, TargetUserID: target} 169 | var reply tribrpc.SubscriptionReply 170 | err := ts.AddSubscription(args, &reply) 171 | return err, reply.Status 172 | } 173 | 174 | func removeSubscription(user, target string) (error, tribrpc.Status) { 175 | args := &tribrpc.SubscriptionArgs{UserID: user, TargetUserID: target} 176 | var reply tribrpc.SubscriptionReply 177 | err := ts.RemoveSubscription(args, &reply) 
178 | return err, reply.Status 179 | } 180 | 181 | func getSubscription(user string) (error, tribrpc.Status, []string) { 182 | args := &tribrpc.GetSubscriptionsArgs{UserID: user} 183 | var reply tribrpc.GetSubscriptionsReply 184 | err := ts.GetSubscriptions(args, &reply) 185 | return err, reply.Status, reply.UserIDs 186 | } 187 | 188 | func postTribble(user, contents string) (error, tribrpc.Status) { 189 | args := &tribrpc.PostTribbleArgs{UserID: user, Contents: contents} 190 | var reply tribrpc.PostTribbleReply 191 | err := ts.PostTribble(args, &reply) 192 | return err, reply.Status 193 | } 194 | 195 | func getTribbles(user string) (error, tribrpc.Status, []tribrpc.Tribble) { 196 | args := &tribrpc.GetTribblesArgs{UserID: user} 197 | var reply tribrpc.GetTribblesReply 198 | err := ts.GetTribbles(args, &reply) 199 | return err, reply.Status, reply.Tribbles 200 | } 201 | 202 | func getTribblesBySubscription(user string) (error, tribrpc.Status, []tribrpc.Tribble) { 203 | args := &tribrpc.GetTribblesArgs{UserID: user} 204 | var reply tribrpc.GetTribblesReply 205 | err := ts.GetTribblesBySubscription(args, &reply) 206 | return err, reply.Status, reply.Tribbles 207 | } 208 | 209 | // Create valid user 210 | func testCreateUserValid() { 211 | pc.Reset() 212 | err, status := createUser("user") 213 | if checkErrorStatus(err, status, tribrpc.OK) { 214 | return 215 | } 216 | if checkLimits(10, 1000) { 217 | return 218 | } 219 | fmt.Println("PASS") 220 | passCount++ 221 | } 222 | 223 | // Create duplicate user 224 | func testCreateUserDuplicate() { 225 | createUser("user") 226 | pc.Reset() 227 | err, status := createUser("user") 228 | if checkErrorStatus(err, status, tribrpc.Exists) { 229 | return 230 | } 231 | if checkLimits(10, 1000) { 232 | return 233 | } 234 | fmt.Println("PASS") 235 | passCount++ 236 | } 237 | 238 | // Add subscription with invalid user 239 | func testAddSubscriptionInvalidUser() { 240 | createUser("user") 241 | pc.Reset() 242 | err, status := 
addSubscription("invalidUser", "user") 243 | if checkErrorStatus(err, status, tribrpc.NoSuchUser) { 244 | return 245 | } 246 | if checkLimits(10, 1000) { 247 | return 248 | } 249 | fmt.Println("PASS") 250 | passCount++ 251 | } 252 | 253 | // Add subscription with invalid target user 254 | func testAddSubscriptionInvalidTargetUser() { 255 | createUser("user") 256 | pc.Reset() 257 | err, status := addSubscription("user", "invalidUser") 258 | if checkErrorStatus(err, status, tribrpc.NoSuchTargetUser) { 259 | return 260 | } 261 | if checkLimits(10, 1000) { 262 | return 263 | } 264 | fmt.Println("PASS") 265 | passCount++ 266 | } 267 | 268 | // Add valid subscription 269 | func testAddSubscriptionValid() { 270 | createUser("user1") 271 | createUser("user2") 272 | pc.Reset() 273 | err, status := addSubscription("user1", "user2") 274 | if checkErrorStatus(err, status, tribrpc.OK) { 275 | return 276 | } 277 | if checkLimits(10, 1000) { 278 | return 279 | } 280 | fmt.Println("PASS") 281 | passCount++ 282 | } 283 | 284 | // Add duplicate subscription 285 | func testAddSubscriptionDuplicate() { 286 | createUser("user1") 287 | createUser("user2") 288 | addSubscription("user1", "user2") 289 | pc.Reset() 290 | err, status := addSubscription("user1", "user2") 291 | if checkErrorStatus(err, status, tribrpc.Exists) { 292 | return 293 | } 294 | if checkLimits(10, 1000) { 295 | return 296 | } 297 | fmt.Println("PASS") 298 | passCount++ 299 | } 300 | 301 | // Remove subscription with invalid user 302 | func testRemoveSubscriptionInvalidUser() { 303 | createUser("user") 304 | pc.Reset() 305 | err, status := removeSubscription("invalidUser", "user") 306 | if checkErrorStatus(err, status, tribrpc.NoSuchUser) { 307 | return 308 | } 309 | if checkLimits(10, 1000) { 310 | return 311 | } 312 | fmt.Println("PASS") 313 | passCount++ 314 | } 315 | 316 | // Remove valid subscription 317 | func testRemoveSubscriptionValid() { 318 | createUser("user1") 319 | createUser("user2") 320 |
addSubscription("user1", "user2") 321 | pc.Reset() 322 | err, status := removeSubscription("user1", "user2") 323 | if checkErrorStatus(err, status, tribrpc.OK) { 324 | return 325 | } 326 | if checkLimits(10, 1000) { 327 | return 328 | } 329 | fmt.Println("PASS") 330 | passCount++ 331 | } 332 | 333 | // Remove subscription with missing target user 334 | func testRemoveSubscriptionMissingTarget() { 335 | createUser("user1") 336 | createUser("user2") 337 | removeSubscription("user1", "user2") 338 | pc.Reset() 339 | err, status := removeSubscription("user1", "user2") 340 | if checkErrorStatus(err, status, tribrpc.NoSuchTargetUser) { 341 | return 342 | } 343 | if checkLimits(10, 1000) { 344 | return 345 | } 346 | fmt.Println("PASS") 347 | passCount++ 348 | } 349 | 350 | // Get subscription with invalid user 351 | func testGetSubscriptionInvalidUser() { 352 | pc.Reset() 353 | err, status, _ := getSubscription("invalidUser") 354 | if checkErrorStatus(err, status, tribrpc.NoSuchUser) { 355 | return 356 | } 357 | if checkLimits(10, 1000) { 358 | return 359 | } 360 | fmt.Println("PASS") 361 | passCount++ 362 | } 363 | 364 | // Get valid subscription 365 | func testGetSubscriptionValid() { 366 | createUser("user1") 367 | createUser("user2") 368 | createUser("user3") 369 | createUser("user4") 370 | addSubscription("user1", "user2") 371 | addSubscription("user1", "user3") 372 | addSubscription("user1", "user4") 373 | pc.Reset() 374 | err, status, subs := getSubscription("user1") 375 | if checkErrorStatus(err, status, tribrpc.OK) { 376 | return 377 | } 378 | if checkSubscriptions(subs, []string{"user2", "user3", "user4"}) { 379 | return 380 | } 381 | if checkLimits(10, 1000) { 382 | return 383 | } 384 | fmt.Println("PASS") 385 | passCount++ 386 | } 387 | 388 | // Post tribble with invalid user 389 | func testPostTribbleInvalidUser() { 390 | pc.Reset() 391 | err, status := postTribble("invalidUser", "contents") 392 | if checkErrorStatus(err, status, tribrpc.NoSuchUser) { 393 | 
return 394 | } 395 | if checkLimits(10, 1000) { 396 | return 397 | } 398 | fmt.Println("PASS") 399 | passCount++ 400 | } 401 | 402 | // Post valid tribble 403 | func testPostTribbleValid() { 404 | createUser("user") 405 | pc.Reset() 406 | err, status := postTribble("user", "contents") 407 | if checkErrorStatus(err, status, tribrpc.OK) { 408 | return 409 | } 410 | if checkLimits(10, 1000) { 411 | return 412 | } 413 | fmt.Println("PASS") 414 | passCount++ 415 | } 416 | 417 | // Get tribbles invalid user 418 | func testGetTribblesInvalidUser() { 419 | pc.Reset() 420 | err, status, _ := getTribbles("invalidUser") 421 | if checkErrorStatus(err, status, tribrpc.NoSuchUser) { 422 | return 423 | } 424 | if checkLimits(10, 1000) { 425 | return 426 | } 427 | fmt.Println("PASS") 428 | passCount++ 429 | } 430 | 431 | // Get tribbles 0 tribbles 432 | func testGetTribblesZeroTribbles() { 433 | createUser("tribUser") 434 | pc.Reset() 435 | err, status, tribbles := getTribbles("tribUser") 436 | if checkErrorStatus(err, status, tribrpc.OK) { 437 | return 438 | } 439 | if checkTribbles(tribbles, []tribrpc.Tribble{}) { 440 | return 441 | } 442 | if checkLimits(10, 1000) { 443 | return 444 | } 445 | fmt.Println("PASS") 446 | passCount++ 447 | } 448 | 449 | // Get tribbles < 100 tribbles 450 | func testGetTribblesFewTribbles() { 451 | createUser("tribUser") 452 | expectedTribbles := []tribrpc.Tribble{} 453 | for i := 0; i < 5; i++ { 454 | expectedTribbles = append(expectedTribbles, tribrpc.Tribble{UserID: "tribUser", Contents: fmt.Sprintf("contents%d", i)}) 455 | } 456 | for i := len(expectedTribbles) - 1; i >= 0; i-- { 457 | postTribble(expectedTribbles[i].UserID, expectedTribbles[i].Contents) 458 | } 459 | pc.Reset() 460 | err, status, tribbles := getTribbles("tribUser") 461 | if checkErrorStatus(err, status, tribrpc.OK) { 462 | return 463 | } 464 | if checkTribbles(tribbles, expectedTribbles) { 465 | return 466 | } 467 | if checkLimits(50, 5000) { 468 | return 469 | } 470 | 
fmt.Println("PASS") 471 | passCount++ 472 | } 473 | 474 | // Get tribbles > 100 tribbles 475 | func testGetTribblesManyTribbles() { 476 | createUser("tribUser") 477 | postTribble("tribUser", "should not see this old msg") 478 | expectedTribbles := []tribrpc.Tribble{} 479 | for i := 0; i < 100; i++ { 480 | expectedTribbles = append(expectedTribbles, tribrpc.Tribble{UserID: "tribUser", Contents: fmt.Sprintf("contents%d", i)}) 481 | } 482 | for i := len(expectedTribbles) - 1; i >= 0; i-- { 483 | postTribble(expectedTribbles[i].UserID, expectedTribbles[i].Contents) 484 | } 485 | pc.Reset() 486 | err, status, tribbles := getTribbles("tribUser") 487 | if checkErrorStatus(err, status, tribrpc.OK) { 488 | return 489 | } 490 | if checkTribbles(tribbles, expectedTribbles) { 491 | return 492 | } 493 | if checkLimits(200, 30000) { 494 | return 495 | } 496 | fmt.Println("PASS") 497 | passCount++ 498 | } 499 | 500 | // Get tribbles by subscription invalid user 501 | func testGetTribblesBySubscriptionInvalidUser() { 502 | pc.Reset() 503 | err, status, _ := getTribblesBySubscription("invalidUser") 504 | if checkErrorStatus(err, status, tribrpc.NoSuchUser) { 505 | return 506 | } 507 | if checkLimits(10, 1000) { 508 | return 509 | } 510 | fmt.Println("PASS") 511 | passCount++ 512 | } 513 | 514 | // Get tribbles by subscription no subscriptions 515 | func testGetTribblesBySubscriptionNoSubscriptions() { 516 | createUser("tribUser") 517 | postTribble("tribUser", "contents") 518 | pc.Reset() 519 | err, status, tribbles := getTribblesBySubscription("tribUser") 520 | if checkErrorStatus(err, status, tribrpc.OK) { 521 | return 522 | } 523 | if checkTribbles(tribbles, []tribrpc.Tribble{}) { 524 | return 525 | } 526 | if checkLimits(10, 1000) { 527 | return 528 | } 529 | fmt.Println("PASS") 530 | passCount++ 531 | } 532 | 533 | // Get tribbles by subscription 0 tribbles 534 | func testGetTribblesBySubscriptionZeroTribbles() { 535 | createUser("tribUser1") 536 | createUser("tribUser2") 537 | 
addSubscription("tribUser1", "tribUser2") 538 | pc.Reset() 539 | err, status, tribbles := getTribblesBySubscription("tribUser1") 540 | if checkErrorStatus(err, status, tribrpc.OK) { 541 | return 542 | } 543 | if checkTribbles(tribbles, []tribrpc.Tribble{}) { 544 | return 545 | } 546 | if checkLimits(10, 1000) { 547 | return 548 | } 549 | fmt.Println("PASS") 550 | passCount++ 551 | } 552 | 553 | // Get tribbles by subscription < 100 tribbles 554 | func testGetTribblesBySubscriptionFewTribbles() { 555 | createUser("tribUser1") 556 | createUser("tribUser2") 557 | createUser("tribUser3") 558 | createUser("tribUser4") 559 | addSubscription("tribUser1", "tribUser2") 560 | addSubscription("tribUser1", "tribUser3") 561 | addSubscription("tribUser1", "tribUser4") 562 | postTribble("tribUser1", "should not see this unsubscribed msg") 563 | expectedTribbles := []tribrpc.Tribble{tribrpc.Tribble{UserID: "tribUser2", Contents: "contents"}, tribrpc.Tribble{UserID: "tribUser4", Contents: "contents"}} 564 | for i := len(expectedTribbles) - 1; i >= 0; i-- { 565 | postTribble(expectedTribbles[i].UserID, expectedTribbles[i].Contents) 566 | } 567 | pc.Reset() 568 | err, status, tribbles := getTribblesBySubscription("tribUser1") 569 | if checkErrorStatus(err, status, tribrpc.OK) { 570 | return 571 | } 572 | if checkTribbles(tribbles, expectedTribbles) { 573 | return 574 | } 575 | if checkLimits(20, 2000) { 576 | return 577 | } 578 | fmt.Println("PASS") 579 | passCount++ 580 | } 581 | 582 | // Get tribbles by subscription > 100 tribbles 583 | func testGetTribblesBySubscriptionManyTribbles() { 584 | createUser("tribUser1") 585 | createUser("tribUser2") 586 | createUser("tribUser3") 587 | createUser("tribUser4") 588 | addSubscription("tribUser1", "tribUser2") 589 | addSubscription("tribUser1", "tribUser3") 590 | addSubscription("tribUser1", "tribUser4") 591 | postTribble("tribUser1", "should not see this old msg") 592 | postTribble("tribUser2", "should not see this old msg") 593 |
postTribble("tribUser3", "should not see this old msg") 594 | postTribble("tribUser4", "should not see this old msg") 595 | expectedTribbles := []tribrpc.Tribble{} 596 | for i := 0; i < 100; i++ { 597 | expectedTribbles = append(expectedTribbles, tribrpc.Tribble{UserID: fmt.Sprintf("tribUser%d", (i%3)+2), Contents: fmt.Sprintf("contents%d", i)}) 598 | } 599 | for i := len(expectedTribbles) - 1; i >= 0; i-- { 600 | postTribble(expectedTribbles[i].UserID, expectedTribbles[i].Contents) 601 | } 602 | pc.Reset() 603 | err, status, tribbles := getTribblesBySubscription("tribUser1") 604 | if checkErrorStatus(err, status, tribrpc.OK) { 605 | return 606 | } 607 | if checkTribbles(tribbles, expectedTribbles) { 608 | return 609 | } 610 | if checkLimits(200, 30000) { 611 | return 612 | } 613 | fmt.Println("PASS") 614 | passCount++ 615 | } 616 | 617 | // Get tribbles by subscription all recent tribbles by one subscription 618 | func testGetTribblesBySubscriptionManyTribbles2() { 619 | createUser("tribUser1b") 620 | createUser("tribUser2b") 621 | createUser("tribUser3b") 622 | createUser("tribUser4b") 623 | addSubscription("tribUser1b", "tribUser2b") 624 | addSubscription("tribUser1b", "tribUser3b") 625 | addSubscription("tribUser1b", "tribUser4b") 626 | postTribble("tribUser1b", "should not see this old msg") 627 | postTribble("tribUser2b", "should not see this old msg") 628 | postTribble("tribUser3b", "should not see this old msg") 629 | postTribble("tribUser4b", "should not see this old msg") 630 | expectedTribbles := []tribrpc.Tribble{} 631 | for i := 0; i < 100; i++ { 632 | expectedTribbles = append(expectedTribbles, tribrpc.Tribble{UserID: fmt.Sprintf("tribUser3b"), Contents: fmt.Sprintf("contents%d", i)}) 633 | } 634 | for i := len(expectedTribbles) - 1; i >= 0; i-- { 635 | postTribble(expectedTribbles[i].UserID, expectedTribbles[i].Contents) 636 | } 637 | pc.Reset() 638 | err, status, tribbles := getTribblesBySubscription("tribUser1b") 639 | if checkErrorStatus(err, 
status, tribrpc.OK) { 640 | return 641 | } 642 | if checkTribbles(tribbles, expectedTribbles) { 643 | return 644 | } 645 | if checkLimits(200, 30000) { 646 | return 647 | } 648 | fmt.Println("PASS") 649 | passCount++ 650 | } 651 | 652 | // Get tribbles by subscription test not performing too many RPCs or transferring too much data 653 | func testGetTribblesBySubscriptionManyTribbles3() { 654 | createUser("tribUser1c") 655 | createUser("tribUser2c") 656 | createUser("tribUser3c") 657 | createUser("tribUser4c") 658 | createUser("tribUser5c") 659 | createUser("tribUser6c") 660 | createUser("tribUser7c") 661 | createUser("tribUser8c") 662 | createUser("tribUser9c") 663 | addSubscription("tribUser1c", "tribUser2c") 664 | addSubscription("tribUser1c", "tribUser3c") 665 | addSubscription("tribUser1c", "tribUser4c") 666 | addSubscription("tribUser1c", "tribUser5c") 667 | addSubscription("tribUser1c", "tribUser6c") 668 | addSubscription("tribUser1c", "tribUser7c") 669 | addSubscription("tribUser1c", "tribUser8c") 670 | addSubscription("tribUser1c", "tribUser9c") 671 | postTribble("tribUser1c", "should not see this old msg") 672 | postTribble("tribUser2c", "should not see this old msg") 673 | postTribble("tribUser3c", "should not see this old msg") 674 | postTribble("tribUser4c", "should not see this old msg") 675 | postTribble("tribUser5c", "should not see this old msg") 676 | postTribble("tribUser6c", "should not see this old msg") 677 | postTribble("tribUser7c", "should not see this old msg") 678 | postTribble("tribUser8c", "should not see this old msg") 679 | postTribble("tribUser9c", "should not see this old msg") 680 | longContents := strings.Repeat("this sentence is 30 char long\n", 30) 681 | for i := 0; i < 100; i++ { 682 | for j := 1; j <= 9; j++ { 683 | postTribble(fmt.Sprintf("tribUser%dc", j), longContents) 684 | } 685 | } 686 | expectedTribbles := []tribrpc.Tribble{} 687 | for i := 0; i < 100; i++ { 688 | expectedTribbles = append(expectedTribbles, 
tribrpc.Tribble{UserID: fmt.Sprintf("tribUser%dc", (i%8)+2), Contents: fmt.Sprintf("contents%d", i)}) 689 | } 690 | for i := len(expectedTribbles) - 1; i >= 0; i-- { 691 | postTribble(expectedTribbles[i].UserID, expectedTribbles[i].Contents) 692 | } 693 | pc.Reset() 694 | err, status, tribbles := getTribblesBySubscription("tribUser1c") 695 | if checkErrorStatus(err, status, tribrpc.OK) { 696 | return 697 | } 698 | if checkTribbles(tribbles, expectedTribbles) { 699 | return 700 | } 701 | if checkLimits(200, 200000) { 702 | return 703 | } 704 | fmt.Println("PASS") 705 | passCount++ 706 | } 707 | 708 | func main() { 709 | tests := []testFunc{ 710 | {"testCreateUserValid", testCreateUserValid}, 711 | {"testCreateUserDuplicate", testCreateUserDuplicate}, 712 | {"testAddSubscriptionInvalidUser", testAddSubscriptionInvalidUser}, 713 | {"testAddSubscriptionInvalidTargetUser", testAddSubscriptionInvalidTargetUser}, 714 | {"testAddSubscriptionValid", testAddSubscriptionValid}, 715 | {"testAddSubscriptionDuplicate", testAddSubscriptionDuplicate}, 716 | {"testRemoveSubscriptionInvalidUser", testRemoveSubscriptionInvalidUser}, 717 | {"testRemoveSubscriptionValid", testRemoveSubscriptionValid}, 718 | {"testRemoveSubscriptionMissingTarget", testRemoveSubscriptionMissingTarget}, 719 | {"testGetSubscriptionInvalidUser", testGetSubscriptionInvalidUser}, 720 | {"testGetSubscriptionValid", testGetSubscriptionValid}, 721 | {"testPostTribbleInvalidUser", testPostTribbleInvalidUser}, 722 | {"testPostTribbleValid", testPostTribbleValid}, 723 | {"testGetTribblesInvalidUser", testGetTribblesInvalidUser}, 724 | {"testGetTribblesZeroTribbles", testGetTribblesZeroTribbles}, 725 | {"testGetTribblesFewTribbles", testGetTribblesFewTribbles}, 726 | {"testGetTribblesManyTribbles", testGetTribblesManyTribbles}, 727 | {"testGetTribblesBySubscriptionInvalidUser", testGetTribblesBySubscriptionInvalidUser}, 728 | {"testGetTribblesBySubscriptionNoSubscriptions", 
testGetTribblesBySubscriptionNoSubscriptions}, 729 | {"testGetTribblesBySubscriptionZeroTribbles", testGetTribblesBySubscriptionZeroTribbles}, 731 | {"testGetTribblesBySubscriptionFewTribbles", testGetTribblesBySubscriptionFewTribbles}, 732 | {"testGetTribblesBySubscriptionManyTribbles", testGetTribblesBySubscriptionManyTribbles}, 733 | {"testGetTribblesBySubscriptionManyTribbles2", testGetTribblesBySubscriptionManyTribbles2}, 734 | {"testGetTribblesBySubscriptionManyTribbles3", testGetTribblesBySubscriptionManyTribbles3}, 735 | } 736 | 737 | flag.Parse() 738 | if flag.NArg() < 1 { 739 | LOGE.Fatal("Usage: tribtest <master storage server host:port>") 740 | } 741 | 742 | if err := initTribServer(flag.Arg(0), *port); err != nil { 743 | LOGE.Fatalln("Failed to setup TribServer:", err) 744 | } 745 | 746 | // Run tests. 747 | for _, t := range tests { 748 | if b, err := regexp.MatchString(*testRegex, t.name); b && err == nil { 749 | fmt.Printf("Running %s:\n", t.name) 750 | t.f() 751 | } 752 | } 753 | 754 | fmt.Printf("Passed (%d/%d) tests\n", passCount, passCount+failCount) 755 | } 756 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/tribclient/tribclient_api.go: -------------------------------------------------------------------------------- 1 | // This is the API for a TribClient that we have written for you as 2 | // an example. DO NOT MODIFY! 3 | 4 | package tribclient 5 | 6 | import "github.com/cmu440/tribbler/rpc/tribrpc" 7 | 8 | // TribClient defines the set of methods for one possible Tribbler 9 | // client implementation.
10 | type TribClient interface { 11 | CreateUser(userID string) (tribrpc.Status, error) 12 | GetSubscriptions(userID string) ([]string, tribrpc.Status, error) 13 | AddSubscription(userID, targetUser string) (tribrpc.Status, error) 14 | RemoveSubscription(userID, targetUser string) (tribrpc.Status, error) 15 | GetTribbles(userID string) ([]tribrpc.Tribble, tribrpc.Status, error) 16 | GetTribblesBySubscription(userID string) ([]tribrpc.Tribble, tribrpc.Status, error) 17 | PostTribble(userID, contents string) (tribrpc.Status, error) 18 | Close() error 19 | } 20 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/tribclient/tribclient_impl.go: -------------------------------------------------------------------------------- 1 | // This is the implementation of a TribClient that we have written for you as 2 | // an example. This code also serves as a good reference for understanding 3 | // how RPC works in Go. DO NOT MODIFY! 4 | 5 | package tribclient 6 | 7 | import ( 8 | "net" 9 | "net/rpc" 10 | "strconv" 11 | 12 | "github.com/cmu440/tribbler/rpc/tribrpc" 13 | ) 14 | 15 | // The TribClient uses an 'rpc.Client' in order to perform RPCs to the 16 | // TribServer. The TribServer must register to receive RPCs and setup 17 | // an HTTP handler to serve the requests. The client may then perform RPCs 18 | // to the TribServer using the rpc.Client's Call method (see the code below). 
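// As an illustrative sketch only (not part of this starter code), the
// server-side counterpart that this client expects is typically set up as
// follows; 'tribServer' and 'myHostPort' are hypothetical names:
//
//	// Register the receiver under the name the client dials ("TribServer"),
//	// expose the RPCs over HTTP, and serve them on a TCP listener.
//	rpc.RegisterName("TribServer", tribServer)
//	rpc.HandleHTTP()
//	ln, err := net.Listen("tcp", myHostPort)
//	if err != nil {
//		return nil, err
//	}
//	go http.Serve(ln, nil)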
19 | type tribClient struct { 20 | client *rpc.Client 21 | } 22 | 23 | func NewTribClient(serverHost string, serverPort int) (TribClient, error) { 24 | cli, err := rpc.DialHTTP("tcp", net.JoinHostPort(serverHost, strconv.Itoa(serverPort))) 25 | if err != nil { 26 | return nil, err 27 | } 28 | return &tribClient{client: cli}, nil 29 | } 30 | 31 | func (tc *tribClient) CreateUser(userID string) (tribrpc.Status, error) { 32 | args := &tribrpc.CreateUserArgs{UserID: userID} 33 | var reply tribrpc.CreateUserReply 34 | if err := tc.client.Call("TribServer.CreateUser", args, &reply); err != nil { 35 | return 0, err 36 | } 37 | return reply.Status, nil 38 | } 39 | 40 | func (tc *tribClient) GetSubscriptions(userID string) ([]string, tribrpc.Status, error) { 41 | args := &tribrpc.GetSubscriptionsArgs{UserID: userID} 42 | var reply tribrpc.GetSubscriptionsReply 43 | if err := tc.client.Call("TribServer.GetSubscriptions", args, &reply); err != nil { 44 | return nil, 0, err 45 | } 46 | return reply.UserIDs, reply.Status, nil 47 | } 48 | 49 | func (tc *tribClient) AddSubscription(userID, targetUserID string) (tribrpc.Status, error) { 50 | return tc.doSub("TribServer.AddSubscription", userID, targetUserID) 51 | } 52 | 53 | func (tc *tribClient) RemoveSubscription(userID, targetUserID string) (tribrpc.Status, error) { 54 | return tc.doSub("TribServer.RemoveSubscription", userID, targetUserID) 55 | } 56 | 57 | func (tc *tribClient) doSub(funcName, userID, targetUserID string) (tribrpc.Status, error) { 58 | args := &tribrpc.SubscriptionArgs{UserID: userID, TargetUserID: targetUserID} 59 | var reply tribrpc.SubscriptionReply 60 | if err := tc.client.Call(funcName, args, &reply); err != nil { 61 | return 0, err 62 | } 63 | return reply.Status, nil 64 | } 65 | 66 | func (tc *tribClient) GetTribbles(userID string) ([]tribrpc.Tribble, tribrpc.Status, error) { 67 | return tc.doTrib("TribServer.GetTribbles", userID) 68 | } 69 | 70 | func (tc *tribClient) GetTribblesBySubscription(userID 
string) ([]tribrpc.Tribble, tribrpc.Status, error) { 71 | return tc.doTrib("TribServer.GetTribblesBySubscription", userID) 72 | } 73 | 74 | func (tc *tribClient) doTrib(funcName, userID string) ([]tribrpc.Tribble, tribrpc.Status, error) { 75 | args := &tribrpc.GetTribblesArgs{UserID: userID} 76 | var reply tribrpc.GetTribblesReply 77 | if err := tc.client.Call(funcName, args, &reply); err != nil { 78 | return nil, 0, err 79 | } 80 | return reply.Tribbles, reply.Status, nil 81 | } 82 | 83 | func (tc *tribClient) PostTribble(userID, contents string) (tribrpc.Status, error) { 84 | args := &tribrpc.PostTribbleArgs{UserID: userID, Contents: contents} 85 | var reply tribrpc.PostTribbleReply 86 | if err := tc.client.Call("TribServer.PostTribble", args, &reply); err != nil { 87 | return 0, err 88 | } 89 | return reply.Status, nil 90 | } 91 | 92 | func (tc *tribClient) Close() error { 93 | return tc.client.Close() 94 | } 95 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/tribserver/tribserver_api.go: -------------------------------------------------------------------------------- 1 | // DO NOT MODIFY! 2 | 3 | package tribserver 4 | 5 | import "github.com/cmu440/tribbler/rpc/tribrpc" 6 | 7 | // TribServer defines the set of methods that a TribClient can invoke remotely via RPCs. 8 | type TribServer interface { 9 | 10 | // CreateUser creates a user with the specified UserID. 11 | // Replies with status Exists if the user has previously been created. 12 | CreateUser(args *tribrpc.CreateUserArgs, reply *tribrpc.CreateUserReply) error 13 | 14 | // AddSubscription adds TargetUserID to UserID's list of subscriptions. 15 | // Replies with status NoSuchUser if the specified UserID does not exist, and NoSuchTargetUser 16 | // if the specified TargetUserID does not exist.
17 | AddSubscription(args *tribrpc.SubscriptionArgs, reply *tribrpc.SubscriptionReply) error 18 | 19 | // RemoveSubscription removes TargetUserID from UserID's list of subscriptions. 20 | // Replies with status NoSuchUser if the specified UserID does not exist, and NoSuchTargetUser 21 | // if the specified TargetUserID does not exist. 22 | RemoveSubscription(args *tribrpc.SubscriptionArgs, reply *tribrpc.SubscriptionReply) error 23 | 24 | // GetSubscriptions retrieves a list of all users to whom the user subscribes. 25 | // Replies with status NoSuchUser if the specified UserID does not exist. 26 | GetSubscriptions(args *tribrpc.GetSubscriptionsArgs, reply *tribrpc.GetSubscriptionsReply) error 27 | 28 | // PostTribble posts a tribble on behalf of the specified UserID. The TribServer 29 | // should timestamp the entry before inserting the Tribble into its local Libstore. 30 | // Replies with status NoSuchUser if the specified UserID does not exist. 31 | PostTribble(args *tribrpc.PostTribbleArgs, reply *tribrpc.PostTribbleReply) error 32 | 33 | // GetTribbles retrieves a list of at most 100 tribbles posted by the specified 34 | // UserID in reverse chronological order (most recent first). 35 | // Replies with status NoSuchUser if the specified UserID does not exist. 36 | GetTribbles(args *tribrpc.GetTribblesArgs, reply *tribrpc.GetTribblesReply) error 37 | 38 | // GetTribblesBySubscription retrieves a list of at most 100 tribbles posted by 39 | // all users to which the specified UserID is subscribed in reverse chronological 40 | // order (most recent first). Replies with status NoSuchUser if the specified UserID 41 | // does not exist.
42 | GetTribblesBySubscription(args *tribrpc.GetTribblesArgs, reply *tribrpc.GetTribblesReply) error 43 | } 44 | -------------------------------------------------------------------------------- /src/github.com/cmu440/tribbler/tribserver/tribserver_impl.go: -------------------------------------------------------------------------------- 1 | package tribserver 2 | 3 | import ( 4 | "errors" 5 | 6 | "github.com/cmu440/tribbler/rpc/tribrpc" 7 | ) 8 | 9 | type tribServer struct { 10 | // TODO: implement this! 11 | } 12 | 13 | // NewTribServer creates, starts and returns a new TribServer. masterServerHostPort 14 | // is the master storage server's host:port, and myHostPort is the host:port on which 15 | // the TribServer should listen. A non-nil error should be returned if the TribServer 16 | // could not be started. 17 | // 18 | // For hints on how to properly setup RPC, see the rpc/tribrpc package. 19 | func NewTribServer(masterServerHostPort, myHostPort string) (TribServer, error) { 20 | return nil, errors.New("not implemented") 21 | } 22 | 23 | func (ts *tribServer) CreateUser(args *tribrpc.CreateUserArgs, reply *tribrpc.CreateUserReply) error { 24 | return errors.New("not implemented") 25 | } 26 | 27 | func (ts *tribServer) AddSubscription(args *tribrpc.SubscriptionArgs, reply *tribrpc.SubscriptionReply) error { 28 | return errors.New("not implemented") 29 | } 30 | 31 | func (ts *tribServer) RemoveSubscription(args *tribrpc.SubscriptionArgs, reply *tribrpc.SubscriptionReply) error { 32 | return errors.New("not implemented") 33 | } 34 | 35 | func (ts *tribServer) GetSubscriptions(args *tribrpc.GetSubscriptionsArgs, reply *tribrpc.GetSubscriptionsReply) error { 36 | return errors.New("not implemented") 37 | } 38 | 39 | func (ts *tribServer) PostTribble(args *tribrpc.PostTribbleArgs, reply *tribrpc.PostTribbleReply) error { 40 | return errors.New("not implemented") 41 | } 42 | 43 | func (ts *tribServer) GetTribbles(args *tribrpc.GetTribblesArgs, reply
*tribrpc.GetTribblesReply) error { 44 | return errors.New("not implemented") 45 | } 46 | 47 | func (ts *tribServer) GetTribblesBySubscription(args *tribrpc.GetTribblesArgs, reply *tribrpc.GetTribblesReply) error { 48 | return errors.New("not implemented") 49 | } 50 | -------------------------------------------------------------------------------- /tests/libtest.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [ -z "$GOPATH" ]; then 4 | echo "FAIL: GOPATH environment variable is not set" 5 | exit 1 6 | fi 7 | 8 | if [ -n "$(go version | grep 'darwin/amd64')" ]; then 9 | GOOS="darwin_amd64" 10 | elif [ -n "$(go version | grep 'linux/amd64')" ]; then 11 | GOOS="linux_amd64" 12 | else 13 | echo "FAIL: only 64-bit Mac OS X and Linux operating systems are supported" 14 | exit 1 15 | fi 16 | 17 | # Build the test binary used to test the student's libstore implementation. 18 | # Exit immediately if there was a compile-time error. 19 | go install github.com/cmu440/tribbler/tests/libtest 20 | if [ $? -ne 0 ]; then 21 | echo "FAIL: code does not compile" 22 | exit 1 23 | fi 24 | 25 | # Pick random ports between [10000, 20000). 26 | STORAGE_PORT=$(((RANDOM % 10000) + 10000)) 27 | LIB_PORT=$(((RANDOM % 10000) + 10000)) 28 | STORAGE_SERVER=$GOPATH/sols/$GOOS/srunner 29 | LIB_TEST=$GOPATH/bin/libtest 30 | 31 | # Start an instance of the staff's official storage server implementation. 32 | ${STORAGE_SERVER} -port=${STORAGE_PORT} 2> /dev/null & 33 | STORAGE_SERVER_PID=$! 34 | sleep 5 35 | 36 | # Start the test. 37 | ${LIB_TEST} -port=${LIB_PORT} "localhost:${STORAGE_PORT}" 38 | 39 | # Kill the storage server. 
40 | kill -9 ${STORAGE_SERVER_PID} 41 | wait ${STORAGE_SERVER_PID} 2> /dev/null 42 | -------------------------------------------------------------------------------- /tests/libtest2.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [ -z "$GOPATH" ]; then 4 | echo "FAIL: GOPATH environment variable is not set" 5 | exit 1 6 | fi 7 | 8 | if [ -n "$(go version | grep 'darwin/amd64')" ]; then 9 | GOOS="darwin_amd64" 10 | elif [ -n "$(go version | grep 'linux/amd64')" ]; then 11 | GOOS="linux_amd64" 12 | else 13 | echo "FAIL: only 64-bit Mac OS X and Linux operating systems are supported" 14 | exit 1 15 | fi 16 | 17 | # Build the lrunner binary used to test the student's libstore implementation. 18 | # Exit immediately if there was a compile-time error. 19 | go install github.com/cmu440/tribbler/runners/lrunner 20 | if [ $? -ne 0 ]; then 21 | echo "FAIL: code does not compile" 22 | exit 1 23 | fi 24 | 25 | # Pick random port between [10000, 20000). 26 | STORAGE_PORT=$(((RANDOM % 10000) + 10000)) 27 | STORAGE_SERVER=$GOPATH/sols/$GOOS/srunner 28 | LRUNNER=$GOPATH/bin/lrunner 29 | 30 | function startStorageServers { 31 | N=${#STORAGE_ID[@]} 32 | # Start master storage server. 33 | ${STORAGE_SERVER} -N=${N} -id=${STORAGE_ID[0]} -port=${STORAGE_PORT} 2> /dev/null & 34 | STORAGE_SERVER_PID[0]=$! 35 | # Start slave storage servers. 36 | if [ "$N" -gt 1 ] 37 | then 38 | for i in `seq 1 $((N-1))` 39 | do 40 | STORAGE_SLAVE_PORT=$(((RANDOM % 10000) + 10000)) 41 | ${STORAGE_SERVER} -id=${STORAGE_ID[$i]} -port=${STORAGE_SLAVE_PORT} -master="localhost:${STORAGE_PORT}" 2> /dev/null & 42 | STORAGE_SERVER_PID[$i]=$! 43 | done 44 | fi 45 | sleep 5 46 | } 47 | 48 | function stopStorageServers { 49 | N=${#STORAGE_ID[@]} 50 | for i in `seq 0 $((N-1))` 51 | do 52 | kill -9 ${STORAGE_SERVER_PID[$i]} 53 | wait ${STORAGE_SERVER_PID[$i]} 2> /dev/null 54 | done 55 | } 56 | 57 | # Testing delayed start. 
58 | function testDelayedStart { 59 | echo "Running testDelayedStart:" 60 | 61 | # Start master storage server. 62 | ${STORAGE_SERVER} -N=2 -port=${STORAGE_PORT} 2> /dev/null & 63 | STORAGE_SERVER_PID1=$! 64 | sleep 5 65 | 66 | # Run lrunner. 67 | ${LRUNNER} -port=${STORAGE_PORT} p "key:" value &> /dev/null & 68 | sleep 3 69 | 70 | # Start second storage server. 71 | STORAGE_SLAVE_PORT=$(((RANDOM % 10000) + 10000)) 72 | ${STORAGE_SERVER} -master="localhost:${STORAGE_PORT}" -port=${STORAGE_SLAVE_PORT} 2> /dev/null & 73 | STORAGE_SERVER_PID2=$! 74 | sleep 5 75 | 76 | # Run lrunner. 77 | PASS=`${LRUNNER} -port=${STORAGE_PORT} g "key:" | grep value | wc -l` 78 | if [ "$PASS" -eq 1 ] 79 | then 80 | echo "PASS" 81 | PASS_COUNT=$((PASS_COUNT + 1)) 82 | else 83 | echo "FAIL" 84 | FAIL_COUNT=$((FAIL_COUNT + 1)) 85 | fi 86 | 87 | # Kill storage servers. 88 | kill -9 ${STORAGE_SERVER_PID1} 89 | kill -9 ${STORAGE_SERVER_PID2} 90 | wait ${STORAGE_SERVER_PID1} 2> /dev/null 91 | wait ${STORAGE_SERVER_PID2} 2> /dev/null 92 | } 93 | 94 | function testRouting { 95 | startStorageServers 96 | for KEY in "${KEYS[@]}" 97 | do 98 | ${LRUNNER} -port=${STORAGE_PORT} p ${KEY} value > /dev/null 99 | PASS=`${LRUNNER} -port=${STORAGE_PORT} g ${KEY} | grep value | wc -l` 100 | if [ "$PASS" -ne 1 ] 101 | then 102 | break 103 | fi 104 | done 105 | if [ "$PASS" -eq 1 ] 106 | then 107 | echo "PASS" 108 | PASS_COUNT=$((PASS_COUNT + 1)) 109 | else 110 | echo "FAIL" 111 | FAIL_COUNT=$((FAIL_COUNT + 1)) 112 | fi 113 | stopStorageServers 114 | } 115 | 116 | # Testing routing general. 117 | function testRoutingGeneral { 118 | echo "Running testRoutingGeneral:" 119 | STORAGE_ID=('3000000000' '4000000000' '2000000000') 120 | KEYS=('bubble:' 'insertion:' 'merge:' 'heap:' 'quick:' 'radix:') 121 | testRouting 122 | } 123 | 124 | # Testing routing wraparound. 
125 | function testRoutingWraparound { 126 | echo "Running testRoutingWraparound:" 127 | STORAGE_ID=('2000000000' '2500000000' '3000000000') 128 | KEYS=('bubble:' 'insertion:' 'merge:' 'heap:' 'quick:' 'radix:') 129 | testRouting 130 | } 131 | 132 | # Testing routing equal. 133 | function testRoutingEqual { 134 | echo "Running testRoutingEqual:" 135 | STORAGE_ID=('3835649095' '1581790440' '2373009399' '3448274451' '1666346102' '2548238361') 136 | KEYS=('bubble:' 'insertion:' 'merge:' 'heap:' 'quick:' 'radix:') 137 | testRouting 138 | } 139 | 140 | # Run tests. 141 | PASS_COUNT=0 142 | FAIL_COUNT=0 143 | testDelayedStart 144 | testRoutingGeneral 145 | testRoutingWraparound 146 | testRoutingEqual 147 | 148 | echo "Passed (${PASS_COUNT}/$((PASS_COUNT + FAIL_COUNT))) tests" 149 | -------------------------------------------------------------------------------- /tests/runall.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [ -z "$GOPATH" ]; then 4 | echo "FAIL: GOPATH environment variable is not set" 
5 | exit 1 6 | fi 7 | 8 | if [ -n "$(go version | grep 'darwin/amd64')" ]; then 9 | GOOS="darwin_amd64" 10 | elif [ -n "$(go version | grep 'linux/amd64')" ]; then 11 | GOOS="linux_amd64" 12 | else 13 | echo "FAIL: only 64-bit Mac OS X and Linux operating systems are supported" 14 | exit 1 15 | fi 16 | 17 | $GOPATH/tests/tribtest.sh 18 | $GOPATH/tests/libtest.sh 19 | $GOPATH/tests/libtest2.sh 20 | $GOPATH/tests/storagetest.sh 21 | $GOPATH/tests/storagetest2.sh 22 | $GOPATH/tests/stresstest.sh -------------------------------------------------------------------------------- /tests/storagetest.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [ -z "$GOPATH" ]; then 4 | echo "FAIL: GOPATH environment variable is not set" 5 | exit 1 6 | fi 7 | 8 | if [ -n "$(go version | grep 'darwin/amd64')" ]; then 9 | GOOS="darwin_amd64" 10 | elif [ -n "$(go version | grep 'linux/amd64')" ]; then 11 | GOOS="linux_amd64" 12 | else 13 | echo "FAIL: only 64-bit Mac OS X and Linux operating systems are supported" 14 | exit 1 15 | fi 16 | 17 | # Build the student's storage server implementation. 18 | # Exit immediately if there was a compile-time error. 19 | go install github.com/cmu440/tribbler/runners/srunner 20 | if [ $? -ne 0 ]; then 21 | echo "FAIL: code does not compile" 22 | exit 1 23 | fi 24 | 25 | # Build the test binary used to test the student's storage server implementation. 26 | # Exit immediately if there was a compile-time error. 27 | go install github.com/cmu440/tribbler/tests/storagetest 28 | if [ $? -ne 0 ]; then 29 | echo "FAIL: code does not compile" 30 | exit 1 31 | fi 32 | 33 | # Pick random ports between [10000, 20000). 34 | STORAGE_PORT=$(((RANDOM % 10000) + 10000)) 35 | TESTER_PORT=$(((RANDOM % 10000) + 10000)) 36 | STORAGE_TEST=$GOPATH/bin/storagetest 37 | STORAGE_SERVER=$GOPATH/bin/srunner 38 | 39 | ################################################## 40 | 41 | # Start storage server. 
42 | ${STORAGE_SERVER} -port=${STORAGE_PORT} 2> /dev/null & 43 | STORAGE_SERVER_PID=$! 44 | sleep 5 45 | 46 | # Start storagetest. 47 | ${STORAGE_TEST} -port=${TESTER_PORT} -type=2 "localhost:${STORAGE_PORT}" 48 | 49 | # Kill storage server. 50 | kill -9 ${STORAGE_SERVER_PID} 51 | wait ${STORAGE_SERVER_PID} 2> /dev/null 52 | 53 | ################################################## 54 | 55 | # Start storage server. 56 | ${STORAGE_SERVER} -port=${STORAGE_PORT} -N=2 -id=900 2> /dev/null & 57 | STORAGE_SERVER_PID=$! 58 | sleep 5 59 | 60 | # Start storagetest. 61 | ${STORAGE_TEST} -port=${TESTER_PORT} -type=1 -N=2 -id=800 "localhost:${STORAGE_PORT}" 62 | 63 | # Kill storage server. 64 | kill -9 ${STORAGE_SERVER_PID} 65 | wait ${STORAGE_SERVER_PID} 2> /dev/null 66 | -------------------------------------------------------------------------------- /tests/storagetest2.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [ -z "$GOPATH" ]; then 4 | echo "FAIL: GOPATH environment variable is not set" 5 | exit 1 6 | fi 7 | 8 | if [ -n "$(go version | grep 'darwin/amd64')" ]; then 9 | GOOS="darwin_amd64" 10 | elif [ -n "$(go version | grep 'linux/amd64')" ]; then 11 | GOOS="linux_amd64" 12 | else 13 | echo "FAIL: only 64-bit Mac OS X and Linux operating systems are supported" 14 | exit 1 15 | fi 16 | 17 | # Build the srunner binary used to test the student's storage server implementation. 18 | # Exit immediately if there was a compile-time error. 19 | go install github.com/cmu440/tribbler/runners/srunner 20 | if [ $? -ne 0 ]; then 21 | echo "FAIL: code does not compile" 22 | exit 1 23 | fi 24 | 25 | # Pick random port between [10000, 20000). 26 | STORAGE_PORT=$(((RANDOM % 10000) + 10000)) 27 | STORAGE_SERVER=$GOPATH/bin/srunner 28 | LRUNNER=$GOPATH/sols/$GOOS/lrunner 29 | 30 | function startStorageServers { 31 | N=${#STORAGE_ID[@]} 32 | # Start master storage server. 
33 | ${STORAGE_SERVER} -N=${N} -id=${STORAGE_ID[0]} -port=${STORAGE_PORT} 2> /dev/null & 34 | STORAGE_SERVER_PID[0]=$! 35 | # Start slave storage servers. 36 | if [ "$N" -gt 1 ] 37 | then 38 | for i in `seq 1 $((N-1))` 39 | do 40 | STORAGE_SLAVE_PORT=$(((RANDOM % 10000) + 10000)) 41 | ${STORAGE_SERVER} -port=${STORAGE_SLAVE_PORT} -id=${STORAGE_ID[$i]} -master="localhost:${STORAGE_PORT}" 2> /dev/null & 42 | STORAGE_SERVER_PID[$i]=$! 43 | done 44 | fi 45 | sleep 5 46 | } 47 | 48 | function stopStorageServers { 49 | N=${#STORAGE_ID[@]} 50 | for i in `seq 0 $((N-1))` 51 | do 52 | kill -9 ${STORAGE_SERVER_PID[$i]} 53 | wait ${STORAGE_SERVER_PID[$i]} 2> /dev/null 54 | done 55 | } 56 | 57 | # Testing delayed start. 58 | function testDelayedStart { 59 | echo "Running testDelayedStart:" 60 | 61 | # Start master storage server. 62 | ${STORAGE_SERVER} -N=2 -port=${STORAGE_PORT} 2> /dev/null & 63 | STORAGE_SERVER_PID1=$! 64 | sleep 5 65 | 66 | # Run lrunner. 67 | ${LRUNNER} -port=${STORAGE_PORT} p "key:" value &> /dev/null & 68 | sleep 3 69 | 70 | # Start second storage server. 71 | STORAGE_SLAVE_PORT=$(((RANDOM % 10000) + 10000)) 72 | ${STORAGE_SERVER} -master="localhost:${STORAGE_PORT}" -port=${STORAGE_SLAVE_PORT} 2> /dev/null & 73 | STORAGE_SERVER_PID2=$! 74 | sleep 5 75 | 76 | # Run lrunner. 77 | PASS=`${LRUNNER} -port=${STORAGE_PORT} g "key:" | grep value | wc -l` 78 | if [ "$PASS" -eq 1 ] 79 | then 80 | echo "PASS" 81 | PASS_COUNT=$((PASS_COUNT + 1)) 82 | else 83 | echo "FAIL" 84 | FAIL_COUNT=$((FAIL_COUNT + 1)) 85 | fi 86 | 87 | # Kill storage servers. 
88 | kill -9 ${STORAGE_SERVER_PID1} 89 | kill -9 ${STORAGE_SERVER_PID2} 90 | wait ${STORAGE_SERVER_PID1} 2> /dev/null 91 | wait ${STORAGE_SERVER_PID2} 2> /dev/null 92 | } 93 | 94 | function testRouting { 95 | startStorageServers 96 | for KEY in "${KEYS[@]}" 97 | do 98 | ${LRUNNER} -port=${STORAGE_PORT} p ${KEY} value > /dev/null 99 | PASS=`${LRUNNER} -port=${STORAGE_PORT} g ${KEY} | grep value | wc -l` 100 | if [ "$PASS" -ne 1 ] 101 | then 102 | break 103 | fi 104 | done 105 | if [ "$PASS" -eq 1 ] 106 | then 107 | echo "PASS" 108 | PASS_COUNT=$((PASS_COUNT + 1)) 109 | else 110 | echo "FAIL" 111 | FAIL_COUNT=$((FAIL_COUNT + 1)) 112 | fi 113 | stopStorageServers 114 | } 115 | 116 | # Testing routing general. 117 | function testRoutingGeneral { 118 | echo "Running testRoutingGeneral:" 119 | STORAGE_ID=('3000000000' '4000000000' '2000000000') 120 | KEYS=('bubble:' 'insertion:' 'merge:' 'heap:' 'quick:' 'radix:') 121 | testRouting 122 | } 123 | 124 | # Testing routing wraparound. 125 | function testRoutingWraparound { 126 | echo "Running testRoutingWraparound:" 127 | STORAGE_ID=('2000000000' '2500000000' '3000000000') 128 | KEYS=('bubble:' 'insertion:' 'merge:' 'heap:' 'quick:' 'radix:') 129 | testRouting 130 | } 131 | 132 | # Testing routing equal. 133 | function testRoutingEqual { 134 | echo "Running testRoutingEqual:" 135 | STORAGE_ID=('3835649095' '1581790440' '2373009399' '3448274451' '1666346102' '2548238361') 136 | KEYS=('bubble:' 'insertion:' 'merge:' 'heap:' 'quick:' 'radix:') 137 | testRouting 138 | } 139 | 140 | # Run tests. 
141 | PASS_COUNT=0 142 | FAIL_COUNT=0 143 | testDelayedStart 144 | testRoutingGeneral 145 | testRoutingWraparound 146 | testRoutingEqual 147 | 148 | echo "Passed (${PASS_COUNT}/$((PASS_COUNT + FAIL_COUNT))) tests" 149 | -------------------------------------------------------------------------------- /tests/stresstest.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [ -z "$GOPATH" ]; then 4 | echo "FAIL: GOPATH environment variable is not set" 5 | exit 1 6 | fi 7 | 8 | if [ -n "$(go version | grep 'darwin/amd64')" ]; then 9 | GOOS="darwin_amd64" 10 | elif [ -n "$(go version | grep 'linux/amd64')" ]; then 11 | GOOS="linux_amd64" 12 | else 13 | echo "FAIL: only 64-bit Mac OS X and Linux operating systems are supported" 14 | exit 1 15 | fi 16 | 17 | 18 | # Build student binaries. Exit immediately if there was a compile-time error. 19 | go install github.com/cmu440/tribbler/runners/trunner 20 | if [ $? -ne 0 ]; then 21 | echo "FAIL: code does not compile" 22 | exit 1 23 | fi 24 | go install github.com/cmu440/tribbler/runners/srunner 25 | if [ $? -ne 0 ]; then 26 | echo "FAIL: code does not compile" 27 | exit 1 28 | fi 29 | go install github.com/cmu440/tribbler/tests/stresstest 30 | if [ $? -ne 0 ]; then 31 | echo "FAIL: code does not compile" 32 | exit 1 33 | fi 34 | 35 | # Pick random port between [10000, 20000). 36 | STORAGE_PORT=$(((RANDOM % 10000) + 10000)) 37 | STORAGE_SERVER=$GOPATH/bin/srunner 38 | STRESS_CLIENT=$GOPATH/bin/stresstest 39 | TRIB_SERVER=$GOPATH/bin/trunner 40 | 41 | function startStorageServers { 42 | N=${#STORAGE_ID[@]} 43 | # Start master storage server. 44 | ${STORAGE_SERVER} -N=${N} -id=${STORAGE_ID[0]} -port=${STORAGE_PORT} &> /dev/null & 45 | STORAGE_SERVER_PID[0]=$! 46 | # Start slave storage servers. 
47 | if [ "$N" -gt 1 ] 48 | then 49 | for i in `seq 1 $((N - 1))` 50 | do 51 | STORAGE_SLAVE_PORT=$(((RANDOM % 10000) + 10000)) 52 | ${STORAGE_SERVER} -port=${STORAGE_SLAVE_PORT} -id=${STORAGE_ID[$i]} -master="localhost:${STORAGE_PORT}" &> /dev/null & 53 | STORAGE_SERVER_PID[$i]=$! 54 | done 55 | fi 56 | sleep 5 57 | } 58 | 59 | function stopStorageServers { 60 | N=${#STORAGE_ID[@]} 61 | for i in `seq 0 $((N - 1))` 62 | do 63 | kill -9 ${STORAGE_SERVER_PID[$i]} 64 | wait ${STORAGE_SERVER_PID[$i]} 2> /dev/null 65 | done 66 | } 67 | 68 | function startTribServers { 69 | for i in `seq 0 $((M - 1))` 70 | do 71 | # Pick random port between [10000, 20000). 72 | TRIB_PORT[$i]=$(((RANDOM % 10000) + 10000)) 73 | ${TRIB_SERVER} -port=${TRIB_PORT[$i]} "localhost:${STORAGE_PORT}" &> /dev/null & 74 | TRIB_SERVER_PID[$i]=$! 75 | done 76 | sleep 5 77 | } 78 | 79 | function stopTribServers { 80 | for i in `seq 0 $((M - 1))` 81 | do 82 | kill -9 ${TRIB_SERVER_PID[$i]} 83 | wait ${TRIB_SERVER_PID[$i]} 2> /dev/null 84 | done 85 | } 86 | 87 | function testStress { 88 | echo "Starting ${#STORAGE_ID[@]} storage server(s)..." 89 | startStorageServers 90 | echo "Starting ${M} Tribble server(s)..." 91 | startTribServers 92 | # Start stress clients 93 | C=0 94 | K=${#CLIENT_COUNT[@]} 95 | for USER in `seq 0 $((K - 1))` 96 | do 97 | for CLIENT in `seq 0 $((CLIENT_COUNT[$USER] - 1))` 98 | do 99 | ${STRESS_CLIENT} -port=${TRIB_PORT[$((C % M))]} -clientId=${CLIENT} ${USER} ${K} & 100 | STRESS_CLIENT_PID[$C]=$! 101 | # Setup background thread to kill client upon timeout. 102 | sleep ${TIMEOUT} && kill -9 ${STRESS_CLIENT_PID[$C]} &> /dev/null & 103 | C=$((C + 1)) 104 | done 105 | done 106 | echo "Running ${C} client(s)..." 107 | 108 | # Check exit status. 109 | FAIL=0 110 | for i in `seq 0 $((C - 1))` 111 | do 112 | wait ${STRESS_CLIENT_PID[$i]} 2> /dev/null 113 | if [ "$?" 
-ne 7 ] 114 | then 115 | FAIL=$((FAIL + 1)) 116 | fi 117 | done 118 | if [ "$FAIL" -eq 0 ] 119 | then 120 | echo "PASS" 121 | PASS_COUNT=$((PASS_COUNT + 1)) 122 | else 123 | echo "FAIL: ${FAIL} clients failed" 124 | FAIL_COUNT=$((FAIL_COUNT + 1)) 125 | fi 126 | stopTribServers 127 | stopStorageServers 128 | sleep 1 129 | } 130 | 131 | # Testing single client, single tribserver, single storageserver. 132 | function testStressSingleClientSingleTribSingleStorage { 133 | echo "Running testStressSingleClientSingleTribSingleStorage:" 134 | STORAGE_ID=('0') 135 | M=1 136 | CLIENT_COUNT=('1') 137 | TIMEOUT=15 138 | testStress 139 | } 140 | 141 | # Testing single client, single tribserver, multiple storageserver. 142 | function testStressSingleClientSingleTribMultipleStorage { 143 | echo "Running testStressSingleClientSingleTribMultipleStorage:" 144 | STORAGE_ID=('0' '0' '0') 145 | M=1 146 | CLIENT_COUNT=('1') 147 | TIMEOUT=15 148 | testStress 149 | } 150 | 151 | # Testing multiple client, single tribserver, single storageserver. 152 | function testStressMultipleClientSingleTribSingleStorage { 153 | echo "Running testStressMultipleClientSingleTribSingleStorage:" 154 | STORAGE_ID=('0') 155 | M=1 156 | CLIENT_COUNT=('1' '1' '1') 157 | TIMEOUT=15 158 | testStress 159 | } 160 | 161 | # Testing multiple client, single tribserver, multiple storageserver. 162 | function testStressMultipleClientSingleTribMultipleStorage { 163 | echo "Running testStressMultipleClientSingleTribMultipleStorage:" 164 | STORAGE_ID=('0' '0' '0' '0' '0' '0') 165 | M=1 166 | CLIENT_COUNT=('1' '1' '1') 167 | TIMEOUT=15 168 | testStress 169 | } 170 | 171 | # Testing multiple client, multiple tribserver, single storageserver. 
172 | function testStressMultipleClientMultipleTribSingleStorage { 173 | echo "Running testStressMultipleClientMultipleTribSingleStorage:" 174 | STORAGE_ID=('0') 175 | M=2 176 | CLIENT_COUNT=('1' '1') 177 | TIMEOUT=30 178 | testStress 179 | } 180 | 181 | # Testing multiple client, multiple tribserver, multiple storageserver. 182 | function testStressMultipleClientMultipleTribMultipleStorage { 183 | echo "Running testStressMultipleClientMultipleTribMultipleStorage:" 184 | STORAGE_ID=('0' '0' '0' '0' '0' '0' '0') 185 | M=3 186 | CLIENT_COUNT=('1' '1' '1') 187 | TIMEOUT=30 188 | testStress 189 | } 190 | 191 | # Testing 2x more clients than tribservers, multiple tribserver, multiple storageserver. 192 | function testStressDoubleClientMultipleTribMultipleStorage { 193 | echo "Running testStressDoubleClientMultipleTribMultipleStorage:" 194 | STORAGE_ID=('0' '0' '0' '0' '0' '0') 195 | M=2 196 | CLIENT_COUNT=('1' '1' '1' '1') 197 | TIMEOUT=30 198 | testStress 199 | } 200 | 201 | 202 | # Testing duplicate users, multiple tribserver, single storageserver. 203 | function testStressDupUserMultipleTribSingleStorage { 204 | echo "Running testStressDupUserMultipleTribSingleStorage:" 205 | STORAGE_ID=('0') 206 | M=2 207 | CLIENT_COUNT=('2') 208 | TIMEOUT=30 209 | testStress 210 | } 211 | 212 | # Testing duplicate users, multiple tribserver, multiple storageserver. 213 | function testStressDupUserMultipleTribMultipleStorage { 214 | echo "Running testStressDupUserMultipleTribMultipleStorage:" 215 | STORAGE_ID=('0' '0' '0') 216 | M=2 217 | CLIENT_COUNT=('2') 218 | TIMEOUT=30 219 | testStress 220 | } 221 | 222 | # Run tests. 
223 | PASS_COUNT=0 224 | FAIL_COUNT=0 225 | testStressSingleClientSingleTribSingleStorage 226 | testStressSingleClientSingleTribMultipleStorage 227 | testStressMultipleClientSingleTribSingleStorage 228 | testStressMultipleClientSingleTribMultipleStorage 229 | testStressMultipleClientMultipleTribSingleStorage 230 | testStressMultipleClientMultipleTribMultipleStorage 231 | testStressDoubleClientMultipleTribMultipleStorage 232 | testStressDupUserMultipleTribSingleStorage 233 | testStressDupUserMultipleTribMultipleStorage 234 | 235 | echo "Passed (${PASS_COUNT}/$((PASS_COUNT + FAIL_COUNT))) tests" 236 | -------------------------------------------------------------------------------- /tests/tribtest.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [ -z "$GOPATH" ]; then 4 | echo "FAIL: GOPATH environment variable is not set" 5 | exit 1 6 | fi 7 | 8 | if [ -n "$(go version | grep 'darwin/amd64')" ]; then 9 | GOOS="darwin_amd64" 10 | elif [ -n "$(go version | grep 'linux/amd64')" ]; then 11 | GOOS="linux_amd64" 12 | else 13 | echo "FAIL: only 64-bit Mac OS X and Linux operating systems are supported" 14 | exit 1 15 | fi 16 | 17 | # Build the test binary used to test the student's tribble server implementation. 18 | # Exit immediately if there was a compile-time error. 19 | go install github.com/cmu440/tribbler/tests/tribtest 20 | if [ $? -ne 0 ]; then 21 | echo "FAIL: code does not compile" 22 | exit 1 23 | fi 24 | 25 | # Pick random ports between [10000, 20000). 26 | STORAGE_PORT=$(((RANDOM % 10000) + 10000)) 27 | TRIB_PORT=$(((RANDOM % 10000) + 10000)) 28 | STORAGE_SERVER=$GOPATH/sols/$GOOS/srunner 29 | TRIBTEST=$GOPATH/bin/tribtest 30 | 31 | # Start an instance of the staff's official storage server implementation. 32 | ${STORAGE_SERVER} -port=${STORAGE_PORT} 2> /dev/null & 33 | STORAGE_SERVER_PID=$! 34 | sleep 5 35 | 36 | # Start the test. 
37 | ${TRIBTEST} -port=${TRIB_PORT} "localhost:${STORAGE_PORT}" 38 | 39 | # Kill the storage server. 40 | kill -9 ${STORAGE_SERVER_PID} 41 | wait ${STORAGE_SERVER_PID} 2> /dev/null 42 | --------------------------------------------------------------------------------
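Every test script above wraps its checks in the same supervise-and-teardown idiom: pick a random port in [10000, 20000), start a server in the background, remember its PID, and later kill and reap it. A minimal standalone sketch of that idiom, with `sleep` standing in for the real server binary (the variable names `PORT` and `SERVER_PID` are illustrative):

```shell
#!/bin/bash

# Pick a random port in [10000, 20000): RANDOM % 10000 lies in [0, 10000),
# and adding 10000 shifts the range up.
PORT=$(((RANDOM % 10000) + 10000))

# Start the "server" (a sleep here) in the background and remember its PID.
sleep 100 &
SERVER_PID=$!

# ... the real scripts run a test binary against localhost:${PORT} here ...

# Tear the server down; wait reaps the process so no stray job remains,
# and redirecting stderr hides any diagnostics if the PID is already gone.
kill -9 ${SERVER_PID}
wait ${SERVER_PID} 2> /dev/null
echo "server on port ${PORT} stopped"
```

The `wait` after `kill -9` matters: without it, the dead server lingers as a zombie until the script exits, and bash may print a "Killed" job notice in the middle of the test output.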