├── .gitignore
├── README.md
├── assignment1-1
│   ├── README.md
│   ├── common.go
│   ├── declaration_of_independence.txt
│   ├── q1.go
│   ├── q1_test.go
│   ├── q2.go
│   ├── q2_test.go
│   ├── q2_test1.txt
│   ├── q2_test2.txt
│   └── simple.txt
├── assignment1-2
│   ├── README.md
│   └── src
│       ├── main
│       │   ├── ii.go
│       │   ├── mr-challenge.txt
│       │   ├── mr-testout.txt
│       │   ├── pg-being_ernest.txt
│       │   ├── pg-dorian_gray.txt
│       │   ├── pg-dracula.txt
│       │   ├── pg-emma.txt
│       │   ├── pg-frankenstein.txt
│       │   ├── pg-great_expectations.txt
│       │   ├── pg-grimm.txt
│       │   ├── pg-huckleberry_finn.txt
│       │   ├── pg-les_miserables.txt
│       │   ├── pg-metamorphosis.txt
│       │   ├── pg-moby_dick.txt
│       │   ├── pg-sherlock_holmes.txt
│       │   ├── pg-tale_of_two_cities.txt
│       │   ├── pg-tom_sawyer.txt
│       │   ├── pg-ulysses.txt
│       │   ├── pg-war_and_peace.txt
│       │   ├── test-ii.sh
│       │   ├── test-mr.sh
│       │   ├── test-wc.sh
│       │   └── wc.go
│       └── mapreduce
│           ├── common.go
│           ├── common_map.go
│           ├── common_reduce.go
│           ├── common_rpc.go
│           ├── master.go
│           ├── master_rpc.go
│           ├── master_splitmerge.go
│           ├── readme.go
│           ├── schedule.go
│           ├── test_test.go
│           └── worker.go
├── assignment1-3
│   ├── README.md
│   └── src
│       └── .gitignore
├── assignment2
│   ├── README.md
│   └── src
│       ├── .gitignore
│       └── chandy-lamport
│           ├── common.go
│           ├── logger.go
│           ├── queue.go
│           ├── server.go
│           ├── simulator.go
│           ├── snapshot_test.go
│           ├── syncmap.go
│           ├── test_common.go
│           └── test_data
│               ├── 10nodes.events
│               ├── 10nodes.top
│               ├── 10nodes0.snap
│               ├── 10nodes1.snap
│               ├── 10nodes2.snap
│               ├── 10nodes3.snap
│               ├── 10nodes4.snap
│               ├── 10nodes5.snap
│               ├── 10nodes6.snap
│               ├── 10nodes7.snap
│               ├── 10nodes8.snap
│               ├── 10nodes9.snap
│               ├── 2nodes-message.events
│               ├── 2nodes-message.snap
│               ├── 2nodes-simple.events
│               ├── 2nodes-simple.snap
│               ├── 2nodes.top
│               ├── 3nodes-bidirectional-messages.events
│               ├── 3nodes-bidirectional-messages.snap
│               ├── 3nodes-simple.events
│               ├── 3nodes-simple.snap
│               ├── 3nodes.top
│               ├── 8nodes-concurrent-snapshots.events
│               ├── 8nodes-concurrent-snapshots0.snap
│               ├── 8nodes-concurrent-snapshots1.snap
│               ├── 8nodes-concurrent-snapshots2.snap
│               ├── 8nodes-concurrent-snapshots3.snap
│               ├── 8nodes-concurrent-snapshots4.snap
│               ├── 8nodes-sequential-snapshots.events
│               ├── 8nodes-sequential-snapshots0.snap
│               ├── 8nodes-sequential-snapshots1.snap
│               └── 8nodes.top
├── assignment3
│   ├── README.md
│   └── src
│       ├── labrpc
│       │   ├── labrpc.go
│       │   └── test_test.go
│       └── raft
│           ├── config.go
│           ├── persister.go
│           ├── raft.go
│           ├── test_test.go
│           └── util.go
├── assignment4
│   ├── README.md
│   └── src
│       └── .gitignore
├── assignment5
│   ├── README.md
│   ├── pkg
│   │   ├── darwin_amd64
│   │   │   └── raft.a
│   │   ├── linux_386
│   │   │   └── raft.a
│   │   ├── linux_amd64
│   │   │   └── raft.a
│   │   ├── windows_386
│   │   │   └── raft.a
│   │   └── windows_amd64
│   │       └── raft.a
│   └── src
│       ├── kvraft
│       │   ├── client.go
│       │   ├── common.go
│       │   ├── config.go
│       │   ├── server.go
│       │   └── test_test.go
│       ├── labrpc
│       │   ├── labrpc.go
│       │   └── test_test.go
│       └── raft
│           ├── config.go
│           ├── persister.go
│           ├── raft.go
│           ├── test_test.go
│           └── util.go
└── setup.md
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | .idea
2 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Assignments for COS 418
2 |
3 | ### Environment Setup
4 |
5 | Please follow these instructions for setting up your Go environment for the assignments, as well as pointers to some necessary/useful tools.
6 |
7 | ### Coding Style
8 |
9 |
All of the code you turn in for this course should have good style. 10 | Make sure that your code has proper indentation, descriptive comments, 11 | and a comment header at the beginning of each file, which includes 12 | your name, userid, and a description of the file.
13 | 14 | A portion of credit for each assignment is determined by code 15 | quality tests, using the standard tools gofmt and go 16 | vet. You will receive full credit for this portion if all files 17 | submitted conform to the style standards set by gofmt and the 18 | report from go vet is clean (that is, produces no errors). 19 | If your code does not pass the gofmt test, you should 20 | reformat your code using the tool. You can also use the Go Checkstyle tool for 22 | advice to improve your code's style, if applicable. Additionally, 23 | though not part of the graded checks, it would also be advisable to 24 | produce code that complies with Golint where possible.
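For reference, the checks above can be run locally like so (this is a self-contained demonstration assuming a standard Go installation; the scratch file is invented for illustration):

```bash
# Make a scratch file whose formatting deviates from the standard.
dir=$(mktemp -d)
printf 'package main\nfunc main(){ }\n' > "$dir/main.go"

gofmt -l "$dir"          # lists main.go: its formatting differs
gofmt -w "$dir/main.go"  # rewrites the file in place
gofmt -l "$dir"          # prints nothing: the file now conforms
```

`go vet` is run analogously on a package directory and prints a report of suspicious constructs; an empty report is what the grading expects.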
26 | 27 | The basic git workflow in the shell (assuming you already have a repo set up): 32 |
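The basic workflow can be sketched as follows (a self-contained demonstration in a throwaway repository; the file name and commit message are made up):

```bash
# Demonstrate the add/commit cycle in a scratch repository.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "student@example.com"
git config user.name "Student"

echo 'package cos418_hw1_1' > q1.go
git add q1.go                        # stage the new or modified file
git commit -qm "implement topWords"  # record a snapshot locally
git log --oneline                    # the new commit appears here
# In your real repo you would then run: git push origin master
```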
Finally, Bitbucket 101 is another good resource.
43 | 44 | 45 | All programming assignments require Git for submission.
We are using GitHub for distributing and collecting your assignments. By the time you read this, you should have already joined the [COS418F18](https://github.com/orgs/COS418F18) organization on GitHub and forked your private repository; a link to it should appear on your GitHub page. You will need to develop in a *nix environment, i.e., Linux or OS X. Normally, you only need to clone the repository once, and you will have everything you need for all the assignments in this class.
46 |
47 | ```bash
48 | $ git clone https://github.com/COS418F18/assignments-myusername.git 418
49 | $ cd 418
50 | $ ls
51 | assignment1-1 assignment1-2 assignment1-3 assignment2 assignment3 assignment4 assignment5 README.md setup.md
52 | $
53 | ```
54 |
55 | Now you have everything you need for doing all assignments, i.e., instructions and starter code. Git allows you to keep track of the changes you make to the code. For example, if you want to checkpoint your progress, you can commit your changes with `git commit -am "checkpoint"` and push them to GitHub with `git push origin master`.
--------------------------------------------------------------------------------
/assignment1-1/README.md:
--------------------------------------------------------------------------------
5 | In this assignment you will solve two short problems as a way to familiarize 6 | yourself with the Go programming language. We expect you to already have a 7 | basic knowledge of the language. If you're starting from nothing, we highly 8 | recommend going through the Golang tour 9 | before you begin this assignment. Get started by 10 | installing Go on your machine. 11 |
12 | 13 | 15 | You will find the code in the same directory. The two problems that you need to solve are in q1.go 16 | and q2.go. You should only add code to places that say TODO: implement me. 17 | Do not change any of the function signatures as our testing framework uses them. 18 |
19 | 20 | 21 | Q1 - Top K words: The task is to find the K most common words in a 22 | given document. To exclude common words such as "a" and "the", the user of your program 23 | should be able to specify the minimum character threshold for a word. Word matching is 24 | case insensitive and punctuation should be removed. You can find more details on what 25 | qualifies as a word in the comments in the code. 26 |
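To make the tokenization rules concrete, here is a small standalone sketch; the helper `countWords` and the sample sentence are our own illustration, not part of the starter code:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// countWords applies the rules described above: case-insensitive matching,
// non-alphanumeric characters stripped, and tokens shorter than
// charThreshold discarded.
func countWords(text string, charThreshold int) map[string]int {
	nonAlnum := regexp.MustCompile("[^0-9a-zA-Z]+")
	counts := make(map[string]int)
	for _, tok := range strings.Fields(text) {
		word := nonAlnum.ReplaceAllString(strings.ToLower(tok), "")
		if len(word) >= charThreshold {
			counts[word]++
		}
	}
	return counts
}

func main() {
	counts := countWords("Don't stop -- don't ever stop!", 4)
	// "Don't" normalizes to "dont"; "--" normalizes to "" and is dropped.
	fmt.Println(counts["dont"], counts["stop"], counts["ever"]) // 2 2 1
}
```

Ranking the resulting map by count (the `sortWordCounts` helper in q1.go does exactly this) and truncating to K entries completes the task.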
27 | 28 |29 | Q2 - Parallel sum: The task is to implement a function that sums a list of 30 | numbers in a file in parallel. For this problem you are required to use goroutines (the 31 | go keyword) and channels to pass messages across the goroutines. While it is 32 | possible to just sum all the numbers sequentially, the point of this problem is to 33 | familiarize yourself with the synchronization mechanisms in Go. 34 |
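The channel pattern this problem asks for can be sketched as follows; this is a generic fan-out/fan-in demonstration with invented names (`addWorker`, `parallelSum`), not the assignment solution:

```go
package main

import "fmt"

// addWorker drains `in`, then sends its partial sum on `out` exactly once.
func addWorker(in chan int, out chan int) {
	total := 0
	for v := range in { // loop ends once `in` is closed and drained
		total += v
	}
	out <- total
}

// parallelSum fans the numbers out to numWorkers goroutines and
// fans their partial sums back in.
func parallelSum(nums []int, numWorkers int) int {
	in := make(chan int, len(nums)) // buffered, so sends never block
	out := make(chan int, numWorkers)
	for i := 0; i < numWorkers; i++ {
		go addWorker(in, out)
	}
	for _, v := range nums {
		in <- v
	}
	close(in) // signal workers that no more numbers are coming
	total := 0
	for i := 0; i < numWorkers; i++ {
		total += <-out // collect one partial sum per worker
	}
	return total
}

func main() {
	fmt.Println(parallelSum([]int{1, 2, 3, 4, 5}, 2)) // 15
}
```

Note that closing the input channel is what lets each worker's `range` loop terminate, and reading exactly one result per worker avoids leaking goroutines.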
35 | 36 |39 | Our grading uses the tests in q1_test.go and q2_test.go provided to you. 40 | To test the correctness of your code, run the following: 41 |
42 |43 | $ cd assignment1-1 44 | $ go test 45 |46 |
47 | If all tests pass, you should see the following output: 48 |
49 |50 | $ go test 51 | PASS 52 | ok /path/to/assignment1-1 0.009s 53 |54 | 55 | 56 | 57 | 58 | ### Submitting Assignment 59 |
Now you need to submit your assignment. Commit your changes and push them to the remote repository by doing the following:
60 |
61 | ```bash
62 | $ git commit -am "[you fill me in]"
63 | $ git tag -a -m "i finished assignment 1-1" a11-handin
64 | $ git push origin master
65 | $ git push origin a11-handin
66 | ```
67 |
68 |
--------------------------------------------------------------------------------
/assignment1-1/common.go:
--------------------------------------------------------------------------------
1 | package cos418_hw1_1
2 |
3 | import "log"
4 |
5 | // Terminate the program if err is non-nil
6 | func checkError(err error) {
7 | if err != nil {
8 | log.Fatal(err)
9 | }
10 | }
11 |
--------------------------------------------------------------------------------
/assignment1-1/declaration_of_independence.txt:
--------------------------------------------------------------------------------
1 | Declaration of Independence
2 |
3 | [Adopted in Congress 4 July 1776]
4 |
5 |
6 |
7 | The Unanimous Declaration of the Thirteen United States of America
8 |
9 | When, in the course of human events, it becomes necessary for one people to
10 | dissolve the political bands which have connected them with another, and to
11 | assume among the powers of the earth, the separate and equal station to
12 | which the laws of nature and of nature's God entitle them, a decent respect
13 | to the opinions of mankind requires that they should declare the causes
14 | which impel them to the separation.
15 |
16 | We hold these truths to be self-evident, that all men are created equal,
17 | that they are endowed by their Creator with certain unalienable rights, that
18 | among these are life, liberty and the pursuit of happiness. That to secure
19 | these rights, governments are instituted among men, deriving their just
20 | powers from the consent of the governed. That whenever any form of
21 | government becomes destructive of these ends, it is the right of the people
22 | to alter or to abolish it, and to institute new government, laying its
23 | foundation on such principles and organizing its powers in such form, as to
24 | them shall seem most likely to effect their safety and happiness. Prudence,
25 | indeed, will dictate that governments long established should not be changed
26 | for light and transient causes; and accordingly all experience hath shown
27 | that mankind are more disposed to suffer, while evils are sufferable, than
28 | to right themselves by abolishing the forms to which they are accustomed.
29 | But when a long train of abuses and usurpations, pursuing invariably the
30 | same object evinces a design to reduce them under absolute despotism, it is
31 | their right, it is their duty, to throw off such government, and to provide
32 | new guards for their future security. -- Such has been the patient
33 | sufferance of these colonies; and such is now the necessity which constrains
34 | them to alter their former systems of government. The history of the present
35 | King of Great Britain is a history of repeated injuries and usurpations, all
36 | having in direct object the establishment of an absolute tyranny over these
37 | states. To prove this, let facts be submitted to a candid world.
38 |
39 | He has refused his assent to laws, the most wholesome and
40 | necessary for the public good.
41 |
42 | He has forbidden his governors to pass laws of immediate
43 | and pressing importance, unless suspended in their
44 | operation till his assent should be obtained; and when so
45 | suspended, he has utterly neglected to attend to them.
46 |
47 | He has refused to pass other laws for the accommodation
48 | of large districts of people, unless those people would
49 | relinquish the right of representation in the legislature, a
50 | right inestimable to them and formidable to tyrants only.
51 |
52 | He has called together legislative bodies at places unusual,
53 | uncomfortable, and distant from the depository of their
54 | public records, for the sole purpose of fatiguing them into
55 | compliance with his measures.
56 |
57 | He has dissolved representative houses repeatedly, for
58 | opposing with manly firmness his invasions on the rights of
59 | the people.
60 |
61 | He has refused for a long time, after such dissolutions, to
62 | cause others to be elected; whereby the legislative powers,
63 | incapable of annihilation, have returned to the people at
64 | large for their exercise; the state remaining in the meantime
65 | exposed to all the dangers of invasion from without, and
66 | convulsions within.
67 |
68 | He has endeavored to prevent the population of these
69 | states; for that purpose obstructing the laws for
70 | naturalization of foreigners; refusing to pass others to
71 | encourage their migration hither, and raising the conditions
72 | of new appropriations of lands.
73 |
74 | He has obstructed the administration of justice, by refusing
75 | his assent to laws for establishing judiciary powers.
76 |
77 | He has made judges dependent on his will alone, for the
78 | tenure of their offices, and the amount and payment of their
79 | salaries.
80 |
81 | He has erected a multitude of new offices, and sent hither
82 | swarms of officers to harass our people, and eat out their
83 | substance.
84 |
85 | He has kept among us, in times of peace, standing armies
86 | without the consent of our legislature.
87 |
88 | He has affected to render the military independent of and
89 | superior to civil power.
90 |
91 | He has combined with others to subject us to a jurisdiction
92 | foreign to our constitution, and unacknowledged by our
93 | laws; giving his assent to their acts of pretended legislation:
94 |
95 | For quartering large bodies of armed troops among us:
96 |
97 | For protecting them, by mock trial, from punishment for
98 | any murders which they should commit on the inhabitants
99 | of these states:
100 |
101 | For cutting off our trade with all parts of the world:
102 |
103 | For imposing taxes on us without our consent:
104 |
105 | For depriving us in many cases, of the benefits of trial by
106 | jury:
107 |
108 | For transporting us beyond seas to be tried for pretended
109 | offenses:
110 |
111 | For abolishing the free system of English laws in a
112 | neighboring province, establishing therein an arbitrary
113 | government, and enlarging its boundaries so as to render it
114 | at once an example and fit instrument for introducing the
115 | same absolute rule in these colonies:
116 |
117 | For taking away our charters, abolishing our most valuable
118 | laws, and altering fundamentally the forms of our
119 | governments:
120 |
121 | For suspending our own legislatures, and declaring
122 | themselves invested with power to legislate for us in all
123 | cases whatsoever.
124 |
125 | He has abdicated government here, by declaring us out of
126 | his protection and waging war against us.
127 |
128 | He has plundered our seas, ravaged our coasts, burned
129 | our towns, and destroyed the lives of our people.
130 |
131 | He is at this time transporting large armies of foreign
132 | mercenaries to complete the works of death, desolation
133 | and tyranny, already begun with circumstances of cruelty
134 | and perfidy scarcely paralleled in the most barbarous ages,
135 | and totally unworthy of the head of a civilized nation.
136 |
137 | He has constrained our fellow citizens taken captive on the
138 | high seas to bear arms against their country, to become the
139 | executioners of their friends and brethren, or to fall
140 | themselves by their hands.
141 |
142 | He has excited domestic insurrections amongst us, and has
143 | endeavored to bring on the inhabitants of our frontiers, the
144 | merciless Indian savages, whose known rule of warfare, is
145 | undistinguished destruction of all ages, sexes and
146 | conditions.
147 |
148 | In every stage of these oppressions we have petitioned for redress in the
149 | most humble terms: our repeated petitions have been answered only by
150 | repeated injury. A prince, whose character is thus marked by every act which
151 | may define a tyrant, is unfit to be the ruler of a free people.
152 |
153 | Nor have we been wanting in attention to our British brethren. We have
154 | warned them from time to time of attempts by their legislature to extend an
155 | unwarrantable jurisdiction over us. We have reminded them of the
156 | circumstances of our emigration and settlement here. We have appealed to
157 | their native justice and magnanimity, and we have conjured them by the ties
158 | of our common kindred to disavow these usurpations, which, would inevitably
159 | interrupt our connections and correspondence. They too have been deaf to the
160 | voice of justice and of consanguinity. We must, therefore, acquiesce
161 | in the necessity, which denounces our separation, and hold them, as we hold
162 | the rest of mankind, enemies in war, in peace friends.
163 |
164 | We, therefore, the representatives of the United States of America, in
165 | General Congress, assembled, appealing to the Supreme Judge of the world for
166 | the rectitude of our intentions, do, in the name, and by the authority of
167 | the good people of these colonies, solemnly publish and declare, that these
168 | united colonies are, and of right ought to be free and independent states;
169 | that they are absolved from all allegiance to the British Crown, and that
170 | all political connection between them and the state of Great Britain, is and
171 | ought to be totally dissolved; and that as free and independent states, they
172 | have full power to levey war, conclude peace, contract alliances, establish
173 | commerce, and to do all other acts and things which independent states may
174 | of right do. And for the support of this declaration, with a firm reliance
175 | on the protection of Divine Providence, we mutually pledge to each other our
176 | lives, our fortunes and our sacred honor.
177 |
--------------------------------------------------------------------------------
/assignment1-1/q1.go:
--------------------------------------------------------------------------------
1 | package cos418_hw1_1
2 |
3 | import (
4 | "fmt"
5 | "sort"
6 | )
7 |
8 | // Find the top K most common words in a text document.
9 | // path: location of the document
10 | // numWords: number of words to return (i.e. k)
11 | // charThreshold: character threshold for whether a token qualifies as a word,
12 | // e.g. charThreshold = 5 means "apple" is a word but "pear" is not.
13 | // Matching is case insensitive, e.g. "Orange" and "orange" is considered the same word.
14 | // A word comprises alphanumeric characters only. All punctuation and other characters
15 | // are removed, e.g. "don't" becomes "dont".
16 | // You should use `checkError` to handle potential errors.
17 | func topWords(path string, numWords int, charThreshold int) []WordCount {
18 | // TODO: implement me
19 | // HINT: You may find the `strings.Fields` and `strings.ToLower` functions helpful
20 | // HINT: To keep only alphanumeric characters, use the regex "[^0-9a-zA-Z]+"
21 | return nil
22 | }
23 |
24 | // A struct that represents how many times a word is observed in a document
25 | type WordCount struct {
26 | Word string
27 | Count int
28 | }
29 |
30 | func (wc WordCount) String() string {
31 | return fmt.Sprintf("%v: %v", wc.Word, wc.Count)
32 | }
33 |
34 | // Helper function to sort a list of word counts in place.
35 | // This sorts by the count in decreasing order, breaking ties using the word.
36 | // DO NOT MODIFY THIS FUNCTION!
37 | func sortWordCounts(wordCounts []WordCount) {
38 | sort.Slice(wordCounts, func(i, j int) bool {
39 | wc1 := wordCounts[i]
40 | wc2 := wordCounts[j]
41 | if wc1.Count == wc2.Count {
42 | return wc1.Word < wc2.Word
43 | }
44 | return wc1.Count > wc2.Count
45 | })
46 | }
47 |
--------------------------------------------------------------------------------
/assignment1-1/q1_test.go:
--------------------------------------------------------------------------------
1 | package cos418_hw1_1
2 |
3 | import (
4 | "fmt"
5 | "testing"
6 | )
7 |
8 | func equal(counts1, counts2 []WordCount) bool {
9 | if len(counts1) != len(counts2) {
10 | return false
11 | }
12 | for i := range counts1 {
13 | if counts1[i] != counts2[i] {
14 | return false
15 | }
16 | }
17 | return true
18 | }
19 |
20 | func assertEqual(t *testing.T, answer, expected []WordCount) {
21 | if !equal(answer, expected) {
22 | t.Fatal(fmt.Sprintf(
23 | "Word counts did not match...\nExpected: %v\nActual: %v",
24 | expected,
25 | answer))
26 | }
27 | }
28 |
29 | func TestSimple(t *testing.T) {
30 | answer1 := topWords("simple.txt", 4, 0)
31 | answer2 := topWords("simple.txt", 5, 4)
32 | expected1 := []WordCount{
33 | {"hello", 5},
34 | {"you", 3},
35 | {"and", 2},
36 | {"dont", 2},
37 | }
38 | expected2 := []WordCount{
39 | {"hello", 5},
40 | {"dont", 2},
41 | {"everyone", 2},
42 | {"look", 2},
43 | {"again", 1},
44 | }
45 | assertEqual(t, answer1, expected1)
46 | assertEqual(t, answer2, expected2)
47 | }
48 |
49 | func TestDeclarationOfIndependence(t *testing.T) {
50 | answer := topWords("declaration_of_independence.txt", 5, 6)
51 | expected := []WordCount{
52 | {"people", 10},
53 | {"states", 8},
54 | {"government", 6},
55 | {"powers", 5},
56 | {"assent", 4},
57 | }
58 | assertEqual(t, answer, expected)
59 | }
60 |
--------------------------------------------------------------------------------
/assignment1-1/q2.go:
--------------------------------------------------------------------------------
1 | package cos418_hw1_1
2 |
3 | import (
4 | "bufio"
5 | "io"
6 | "strconv"
7 | )
8 |
9 | // Sum numbers from channel `nums` and output sum to `out`.
10 | // You should only output to `out` once.
11 | // Do NOT modify function signature.
12 | func sumWorker(nums chan int, out chan int) {
13 | // TODO: implement me
14 | // HINT: use for loop over `nums`
15 | }
16 |
17 | // Read integers from the file `fileName` and return sum of all values.
18 | // This function must launch `num` go routines running
19 | // `sumWorker` to find the sum of the values concurrently.
20 | // You should use `checkError` to handle potential errors.
21 | // Do NOT modify function signature.
22 | func sum(num int, fileName string) int {
23 | // TODO: implement me
24 | // HINT: use `readInts` and `sumWorker`
25 | // HINT: use buffered channels for splitting numbers between workers
26 | return 0
27 | }
28 |
29 | // Read a list of integers separated by whitespace from `r`.
30 | // Return the integers successfully read with no error, or
31 | // an empty slice of integers and the error that occurred.
32 | // Do NOT modify this function.
33 | func readInts(r io.Reader) ([]int, error) {
34 | scanner := bufio.NewScanner(r)
35 | scanner.Split(bufio.ScanWords)
36 | var elems []int
37 | for scanner.Scan() {
38 | val, err := strconv.Atoi(scanner.Text())
39 | if err != nil {
40 | return elems, err
41 | }
42 | elems = append(elems, val)
43 | }
44 | return elems, nil
45 | }
46 |
--------------------------------------------------------------------------------
/assignment1-1/q2_test.go:
--------------------------------------------------------------------------------
1 | package cos418_hw1_1
2 |
3 | import (
4 | "fmt"
5 | "testing"
6 | )
7 |
8 | func test(t *testing.T, fileName string, num int, expected int) {
9 | result := sum(num, fileName)
10 | if result != expected {
11 | t.Fatal(fmt.Sprintf(
12 | "Sum of %s failed: got %d, expected %d\n", fileName, result, expected))
13 | }
14 | }
15 |
16 | func Test1(t *testing.T) {
17 | test(t, "q2_test1.txt", 1, 499500)
18 | }
19 |
20 | func Test2(t *testing.T) {
21 | test(t, "q2_test1.txt", 10, 499500)
22 | }
23 |
24 | func Test3(t *testing.T) {
25 | test(t, "q2_test2.txt", 1, 117652)
26 | }
27 |
28 | func Test4(t *testing.T) {
29 | test(t, "q2_test2.txt", 10, 117652)
30 | }
31 |
--------------------------------------------------------------------------------
/assignment1-1/q2_test1.txt:
--------------------------------------------------------------------------------
1 | 0
2 | 1
3 | 2
4 | 3
5 | 4
6 | 5
7 | 6
8 | 7
9 | 8
10 | 9
11 | 10
12 | 11
13 | 12
14 | 13
15 | 14
16 | 15
17 | 16
18 | 17
19 | 18
20 | 19
21 | 20
22 | 21
23 | 22
24 | 23
25 | 24
26 | 25
27 | 26
28 | 27
29 | 28
30 | 29
31 | 30
32 | 31
33 | 32
34 | 33
35 | 34
36 | 35
37 | 36
38 | 37
39 | 38
40 | 39
41 | 40
42 | 41
43 | 42
44 | 43
45 | 44
46 | 45
47 | 46
48 | 47
49 | 48
50 | 49
51 | 50
52 | 51
53 | 52
54 | 53
55 | 54
56 | 55
57 | 56
58 | 57
59 | 58
60 | 59
61 | 60
62 | 61
63 | 62
64 | 63
65 | 64
66 | 65
67 | 66
68 | 67
69 | 68
70 | 69
71 | 70
72 | 71
73 | 72
74 | 73
75 | 74
76 | 75
77 | 76
78 | 77
79 | 78
80 | 79
81 | 80
82 | 81
83 | 82
84 | 83
85 | 84
86 | 85
87 | 86
88 | 87
89 | 88
90 | 89
91 | 90
92 | 91
93 | 92
94 | 93
95 | 94
96 | 95
97 | 96
98 | 97
99 | 98
100 | 99
101 | 100
102 | 101
103 | 102
104 | 103
105 | 104
106 | 105
107 | 106
108 | 107
109 | 108
110 | 109
111 | 110
112 | 111
113 | 112
114 | 113
115 | 114
116 | 115
117 | 116
118 | 117
119 | 118
120 | 119
121 | 120
122 | 121
123 | 122
124 | 123
125 | 124
126 | 125
127 | 126
128 | 127
129 | 128
130 | 129
131 | 130
132 | 131
133 | 132
134 | 133
135 | 134
136 | 135
137 | 136
138 | 137
139 | 138
140 | 139
141 | 140
142 | 141
143 | 142
144 | 143
145 | 144
146 | 145
147 | 146
148 | 147
149 | 148
150 | 149
151 | 150
152 | 151
153 | 152
154 | 153
155 | 154
156 | 155
157 | 156
158 | 157
159 | 158
160 | 159
161 | 160
162 | 161
163 | 162
164 | 163
165 | 164
166 | 165
167 | 166
168 | 167
169 | 168
170 | 169
171 | 170
172 | 171
173 | 172
174 | 173
175 | 174
176 | 175
177 | 176
178 | 177
179 | 178
180 | 179
181 | 180
182 | 181
183 | 182
184 | 183
185 | 184
186 | 185
187 | 186
188 | 187
189 | 188
190 | 189
191 | 190
192 | 191
193 | 192
194 | 193
195 | 194
196 | 195
197 | 196
198 | 197
199 | 198
200 | 199
201 | 200
202 | 201
203 | 202
204 | 203
205 | 204
206 | 205
207 | 206
208 | 207
209 | 208
210 | 209
211 | 210
212 | 211
213 | 212
214 | 213
215 | 214
216 | 215
217 | 216
218 | 217
219 | 218
220 | 219
221 | 220
222 | 221
223 | 222
224 | 223
225 | 224
226 | 225
227 | 226
228 | 227
229 | 228
230 | 229
231 | 230
232 | 231
233 | 232
234 | 233
235 | 234
236 | 235
237 | 236
238 | 237
239 | 238
240 | 239
241 | 240
242 | 241
243 | 242
244 | 243
245 | 244
246 | 245
247 | 246
248 | 247
249 | 248
250 | 249
251 | 250
252 | 251
253 | 252
254 | 253
255 | 254
256 | 255
257 | 256
258 | 257
259 | 258
260 | 259
261 | 260
262 | 261
263 | 262
264 | 263
265 | 264
266 | 265
267 | 266
268 | 267
269 | 268
270 | 269
271 | 270
272 | 271
273 | 272
274 | 273
275 | 274
276 | 275
277 | 276
278 | 277
279 | 278
280 | 279
281 | 280
282 | 281
283 | 282
284 | 283
285 | 284
286 | 285
287 | 286
288 | 287
289 | 288
290 | 289
291 | 290
292 | 291
293 | 292
294 | 293
295 | 294
296 | 295
297 | 296
298 | 297
299 | 298
300 | 299
301 | 300
302 | 301
303 | 302
304 | 303
305 | 304
306 | 305
307 | 306
308 | 307
309 | 308
310 | 309
311 | 310
312 | 311
313 | 312
314 | 313
315 | 314
316 | 315
317 | 316
318 | 317
319 | 318
320 | 319
321 | 320
322 | 321
323 | 322
324 | 323
325 | 324
326 | 325
327 | 326
328 | 327
329 | 328
330 | 329
331 | 330
332 | 331
333 | 332
334 | 333
335 | 334
336 | 335
337 | 336
338 | 337
339 | 338
340 | 339
341 | 340
342 | 341
343 | 342
344 | 343
345 | 344
346 | 345
347 | 346
348 | 347
349 | 348
350 | 349
351 | 350
352 | 351
353 | 352
354 | 353
355 | 354
356 | 355
357 | 356
358 | 357
359 | 358
360 | 359
361 | 360
362 | 361
363 | 362
364 | 363
365 | 364
366 | 365
367 | 366
368 | 367
369 | 368
370 | 369
371 | 370
372 | 371
373 | 372
374 | 373
375 | 374
376 | 375
377 | 376
378 | 377
379 | 378
380 | 379
381 | 380
382 | 381
383 | 382
384 | 383
385 | 384
386 | 385
387 | 386
388 | 387
389 | 388
390 | 389
391 | 390
392 | 391
393 | 392
394 | 393
395 | 394
396 | 395
397 | 396
398 | 397
399 | 398
400 | 399
401 | 400
402 | 401
403 | 402
404 | 403
405 | 404
406 | 405
407 | 406
408 | 407
409 | 408
410 | 409
411 | 410
412 | 411
413 | 412
414 | 413
415 | 414
416 | 415
417 | 416
418 | 417
419 | 418
420 | 419
421 | 420
422 | 421
423 | 422
424 | 423
425 | 424
426 | 425
427 | 426
428 | 427
429 | 428
430 | 429
431 | 430
432 | 431
433 | 432
434 | 433
435 | 434
436 | 435
437 | 436
438 | 437
439 | 438
440 | 439
441 | 440
442 | 441
443 | 442
444 | 443
445 | 444
446 | 445
447 | 446
448 | 447
449 | 448
450 | 449
451 | 450
452 | 451
453 | 452
454 | 453
455 | 454
456 | 455
457 | 456
458 | 457
459 | 458
460 | 459
461 | 460
462 | 461
463 | 462
464 | 463
465 | 464
466 | 465
467 | 466
468 | 467
469 | 468
470 | 469
471 | 470
472 | 471
473 | 472
474 | 473
475 | 474
476 | 475
477 | 476
478 | 477
479 | 478
480 | 479
481 | 480
482 | 481
483 | 482
484 | 483
485 | 484
486 | 485
487 | 486
488 | 487
489 | 488
490 | 489
491 | 490
492 | 491
493 | 492
494 | 493
495 | 494
496 | 495
497 | 496
498 | 497
499 | 498
500 | 499
501 | 500
502 | 501
503 | 502
504 | 503
505 | 504
506 | 505
507 | 506
508 | 507
509 | 508
510 | 509
511 | 510
512 | 511
513 | 512
514 | 513
515 | 514
516 | 515
517 | 516
518 | 517
519 | 518
520 | 519
521 | 520
522 | 521
523 | 522
524 | 523
525 | 524
526 | 525
527 | 526
528 | 527
529 | 528
530 | 529
531 | 530
532 | 531
533 | 532
534 | 533
535 | 534
536 | 535
537 | 536
538 | 537
539 | 538
540 | 539
541 | 540
542 | 541
543 | 542
544 | 543
545 | 544
546 | 545
547 | 546
548 | 547
549 | 548
550 | 549
551 | 550
552 | 551
553 | 552
554 | 553
555 | 554
556 | 555
557 | 556
558 | 557
559 | 558
560 | 559
561 | 560
562 | 561
563 | 562
564 | 563
565 | 564
566 | 565
567 | 566
568 | 567
569 | 568
570 | 569
571 | 570
572 | 571
573 | 572
574 | 573
575 | 574
576 | 575
577 | 576
578 | 577
579 | 578
580 | 579
581 | 580
582 | 581
583 | 582
584 | 583
585 | 584
586 | 585
587 | 586
588 | 587
589 | 588
590 | 589
591 | 590
592 | 591
593 | 592
594 | 593
595 | 594
596 | 595
597 | 596
598 | 597
599 | 598
600 | 599
601 | 600
602 | 601
603 | 602
604 | 603
605 | 604
606 | 605
607 | 606
608 | 607
609 | 608
610 | 609
611 | 610
612 | 611
613 | 612
614 | 613
615 | 614
616 | 615
617 | 616
618 | 617
619 | 618
620 | 619
621 | 620
622 | 621
623 | 622
624 | 623
625 | 624
626 | 625
627 | 626
628 | 627
629 | 628
630 | 629
631 | 630
632 | 631
633 | 632
634 | 633
635 | 634
636 | 635
637 | 636
638 | 637
639 | 638
640 | 639
641 | 640
642 | 641
643 | 642
644 | 643
645 | 644
646 | 645
647 | 646
648 | 647
649 | 648
650 | 649
651 | 650
652 | 651
653 | 652
654 | 653
655 | 654
656 | 655
657 | 656
658 | 657
659 | 658
660 | 659
661 | 660
662 | 661
663 | 662
664 | 663
665 | 664
666 | 665
667 | 666
668 | 667
669 | 668
670 | 669
671 | 670
672 | 671
673 | 672
674 | 673
675 | 674
676 | 675
677 | 676
678 | 677
679 | 678
680 | 679
681 | 680
682 | 681
683 | 682
684 | 683
685 | 684
686 | 685
687 | 686
688 | 687
689 | 688
690 | 689
691 | 690
692 | 691
693 | 692
694 | 693
695 | 694
696 | 695
697 | 696
698 | 697
699 | 698
700 | 699
701 | 700
702 | 701
703 | 702
704 | 703
705 | 704
706 | 705
707 | 706
708 | 707
709 | 708
710 | 709
711 | 710
712 | 711
713 | 712
714 | 713
715 | 714
716 | 715
717 | 716
718 | 717
719 | 718
720 | 719
721 | 720
722 | 721
723 | 722
724 | 723
725 | 724
726 | 725
727 | 726
728 | 727
729 | 728
730 | 729
731 | 730
732 | 731
733 | 732
734 | 733
735 | 734
736 | 735
737 | 736
738 | 737
739 | 738
740 | 739
741 | 740
742 | 741
743 | 742
744 | 743
745 | 744
746 | 745
747 | 746
748 | 747
749 | 748
750 | 749
751 | 750
752 | 751
753 | 752
754 | 753
755 | 754
756 | 755
757 | 756
758 | 757
759 | 758
760 | 759
761 | 760
762 | 761
763 | 762
764 | 763
765 | 764
766 | 765
767 | 766
768 | 767
769 | 768
770 | 769
771 | 770
772 | 771
773 | 772
774 | 773
775 | 774
776 | 775
777 | 776
778 | 777
779 | 778
780 | 779
781 | 780
782 | 781
783 | 782
784 | 783
785 | 784
786 | 785
787 | 786
788 | 787
789 | 788
790 | 789
791 | 790
792 | 791
793 | 792
794 | 793
795 | 794
796 | 795
797 | 796
798 | 797
799 | 798
800 | 799
801 | 800
802 | 801
803 | 802
804 | 803
805 | 804
806 | 805
807 | 806
808 | 807
809 | 808
810 | 809
811 | 810
812 | 811
813 | 812
814 | 813
815 | 814
816 | 815
817 | 816
818 | 817
819 | 818
820 | 819
821 | 820
822 | 821
823 | 822
824 | 823
825 | 824
826 | 825
827 | 826
828 | 827
829 | 828
830 | 829
831 | 830
832 | 831
833 | 832
834 | 833
835 | 834
836 | 835
837 | 836
838 | 837
839 | 838
840 | 839
841 | 840
842 | 841
843 | 842
844 | 843
845 | 844
846 | 845
847 | 846
848 | 847
849 | 848
850 | 849
851 | 850
852 | 851
853 | 852
854 | 853
855 | 854
856 | 855
857 | 856
858 | 857
859 | 858
860 | 859
861 | 860
862 | 861
863 | 862
864 | 863
865 | 864
866 | 865
867 | 866
868 | 867
869 | 868
870 | 869
871 | 870
872 | 871
873 | 872
874 | 873
875 | 874
876 | 875
877 | 876
878 | 877
879 | 878
880 | 879
881 | 880
882 | 881
883 | 882
884 | 883
885 | 884
886 | 885
887 | 886
888 | 887
889 | 888
890 | 889
891 | 890
892 | 891
893 | 892
894 | 893
895 | 894
896 | 895
897 | 896
898 | 897
899 | 898
900 | 899
901 | 900
902 | 901
903 | 902
904 | 903
905 | 904
906 | 905
907 | 906
908 | 907
909 | 908
910 | 909
911 | 910
912 | 911
913 | 912
914 | 913
915 | 914
916 | 915
917 | 916
918 | 917
919 | 918
920 | 919
921 | 920
922 | 921
923 | 922
924 | 923
925 | 924
926 | 925
927 | 926
928 | 927
929 | 928
930 | 929
931 | 930
932 | 931
933 | 932
934 | 933
935 | 934
936 | 935
937 | 936
938 | 937
939 | 938
940 | 939
941 | 940
942 | 941
943 | 942
944 | 943
945 | 944
946 | 945
947 | 946
948 | 947
949 | 948
950 | 949
951 | 950
952 | 951
953 | 952
954 | 953
955 | 954
956 | 955
957 | 956
958 | 957
959 | 958
960 | 959
961 | 960
962 | 961
963 | 962
964 | 963
965 | 964
966 | 965
967 | 966
968 | 967
969 | 968
970 | 969
971 | 970
972 | 971
973 | 972
974 | 973
975 | 974
976 | 975
977 | 976
978 | 977
979 | 978
980 | 979
981 | 980
982 | 981
983 | 982
984 | 983
985 | 984
986 | 985
987 | 986
988 | 987
989 | 988
990 | 989
991 | 990
992 | 991
993 | 992
994 | 993
995 | 994
996 | 995
997 | 996
998 | 997
999 | 998
1000 | 999
1001 |
--------------------------------------------------------------------------------
/assignment1-1/q2_test2.txt:
--------------------------------------------------------------------------------
1 | 213
2 | -210
3 | 477
4 | 395
5 | -126
6 | 53
7 | 13
8 | -466
9 | 9
10 | 694
11 | 43
12 | 256
13 | -315
14 | 69
15 | 28
16 | 254
17 | -469
18 | 170
19 | -122
20 | 64
21 | -183
22 | -285
23 | -205
24 | -41
25 | -114
26 | -45
27 | -272
28 | -361
29 | -310
30 | -39
31 | 199
32 | -231
33 | 237
34 | 361
35 | 665
36 | 46
37 | -257
38 | 549
39 | -306
40 | 436
41 | 558
42 | 123
43 | 84
44 | 726
45 | -305
46 | 143
47 | 222
48 | 515
49 | -152
50 | -43
51 | -57
52 | -352
53 | 461
54 | 218
55 | 569
56 | -88
57 | 719
58 | 739
59 | 70
60 | -481
61 | 291
62 | -158
63 | -84
64 | 526
65 | 602
66 | 91
67 | 677
68 | -149
69 | 539
70 | -16
71 | -495
72 | -173
73 | 97
74 | -472
75 | 107
76 | -251
77 | 749
78 | -118
79 | -296
80 | -468
81 | -178
82 | 672
83 | 44
84 | 41
85 | 213
86 | -385
87 | -189
88 | 462
89 | 308
90 | 731
91 | -141
92 | -128
93 | -101
94 | -53
95 | -176
96 | -262
97 | -165
98 | -420
99 | -263
100 | -145
101 | 692
102 | 394
103 | -11
104 | 98
105 | -168
106 | -131
107 | -63
108 | 147
109 | 642
110 | 412
111 | 736
112 | -227
113 | 29
114 | 149
115 | 699
116 | 41
117 | -327
118 | 269
119 | -449
120 | -106
121 | 543
122 | 519
123 | -156
124 | -8
125 | 450
126 | 131
127 | -126
128 | -186
129 | 54
130 | -302
131 | 678
132 | 215
133 | 251
134 | 139
135 | 255
136 | -162
137 | 291
138 | -178
139 | 526
140 | -112
141 | 688
142 | 451
143 | 300
144 | 101
145 | 445
146 | 269
147 | 698
148 | -131
149 | -130
150 | 615
151 | -23
152 | -202
153 | 524
154 | -131
155 | -262
156 | 501
157 | 395
158 | -453
159 | -400
160 | -299
161 | 744
162 | 589
163 | 701
164 | -468
165 | 463
166 | 384
167 | 353
168 | 282
169 | 745
170 | 363
171 | 82
172 | -435
173 | -79
174 | -241
175 | 114
176 | 721
177 | -176
178 | -332
179 | 457
180 | -275
181 | 538
182 | 421
183 | -280
184 | -271
185 | -435
186 | -190
187 | 438
188 | -174
189 | 21
190 | 613
191 | -20
192 | 18
193 | -376
194 | 390
195 | 2
196 | 93
197 | 103
198 | -342
199 | 206
200 | 672
201 | 362
202 | -332
203 | 150
204 | -133
205 | 185
206 | 439
207 | -401
208 | 461
209 | -266
210 | -134
211 | -472
212 | 455
213 | -26
214 | 163
215 | -185
216 | 173
217 | -27
218 | 158
219 | -173
220 | -399
221 | 189
222 | -19
223 | -350
224 | 386
225 | 583
226 | 459
227 | -67
228 | 215
229 | -85
230 | -407
231 | -227
232 | -81
233 | 159
234 | 721
235 | -41
236 | -205
237 | 501
238 | 544
239 | 143
240 | 190
241 | -84
242 | 209
243 | 303
244 | -18
245 | 703
246 | 80
247 | -232
248 | 702
249 | 467
250 | -42
251 | -300
252 | 715
253 | 641
254 | -24
255 | 269
256 | -410
257 | -213
258 | 234
259 | 558
260 | -98
261 | 120
262 | -34
263 | -22
264 | 525
265 | -130
266 | 250
267 | 57
268 | -423
269 | 730
270 | 439
271 | -479
272 | -318
273 | 198
274 | -72
275 | 0
276 | 282
277 | 636
278 | -232
279 | 328
280 | -201
281 | 394
282 | 274
283 | -281
284 | -47
285 | -9
286 | -110
287 | 56
288 | -98
289 | -262
290 | 650
291 | -115
292 | 215
293 | -415
294 | 541
295 | -220
296 | 633
297 | -293
298 | 33
299 | -423
300 | 428
301 | 742
302 | 298
303 | 207
304 | 287
305 | 517
306 | 654
307 | -120
308 | -319
309 | 390
310 | -168
311 | 449
312 | -198
313 | 696
314 | 134
315 | -297
316 | -341
317 | 491
318 | -37
319 | -54
320 | 662
321 | 351
322 | 697
323 | 702
324 | 29
325 | -12
326 | -89
327 | 226
328 | 52
329 | -18
330 | -95
331 | 24
332 | -57
333 | 443
334 | -187
335 | -214
336 | 294
337 | -352
338 | -442
339 | 0
340 | 275
341 | 455
342 | 77
343 | 651
344 | -350
345 | -18
346 | 34
347 | -276
348 | -266
349 | -11
350 | -493
351 | -390
352 | 419
353 | 476
354 | -472
355 | -388
356 | -435
357 | -220
358 | 490
359 | 749
360 | 105
361 | 138
362 | -358
363 | 64
364 | 192
365 | -120
366 | 713
367 | 279
368 | 361
369 | -334
370 | 74
371 | -264
372 | -126
373 | -65
374 | 83
375 | -336
376 | 710
377 | 408
378 | 202
379 | 88
380 | -143
381 | 327
382 | -6
383 | -104
384 | 110
385 | -495
386 | -488
387 | -415
388 | 432
389 | -54
390 | -138
391 | -157
392 | 609
393 | 336
394 | -451
395 | -40
396 | 431
397 | 257
398 | 76
399 | 643
400 | -221
401 | -24
402 | 607
403 | -3
404 | 725
405 | 52
406 | -184
407 | 622
408 | 641
409 | 591
410 | -393
411 | 171
412 | 543
413 | -284
414 | -147
415 | 99
416 | -446
417 | -24
418 | 187
419 | 525
420 | 431
421 | -367
422 | 541
423 | 199
424 | -471
425 | -345
426 | -307
427 | 102
428 | 107
429 | -383
430 | -469
431 | 616
432 | 44
433 | 473
434 | -206
435 | 284
436 | 139
437 | 99
438 | 191
439 | 132
440 | -269
441 | -131
442 | 309
443 | 159
444 | 102
445 | 666
446 | -299
447 | -449
448 | 520
449 | 420
450 | 481
451 | -153
452 | 418
453 | 154
454 | -205
455 | 128
456 | -21
457 | -190
458 | -440
459 | -328
460 | -194
461 | -400
462 | 362
463 | 146
464 | 628
465 | 487
466 | 686
467 | 610
468 | -201
469 | 542
470 | 134
471 | 123
472 | 235
473 | 216
474 | 447
475 | -410
476 | -325
477 | 129
478 | 601
479 | 552
480 | 563
481 | 70
482 | 514
483 | 259
484 | -226
485 | -293
486 | -161
487 | -228
488 | -456
489 | 209
490 | -331
491 | -172
492 | -485
493 | 296
494 | -104
495 | -88
496 | 221
497 | -415
498 | 250
499 | 327
500 | -54
501 | 250
502 | -296
503 | -489
504 | 553
505 | 115
506 | 293
507 | 65
508 | 722
509 | 344
510 | 287
511 | -106
512 | -45
513 | -255
514 | 353
515 | -58
516 | 710
517 | 340
518 | -222
519 | 237
520 | 727
521 | 446
522 | 746
523 | 358
524 | 488
525 | -334
526 | 731
527 | 356
528 | 446
529 | 289
530 | 311
531 | -6
532 | 384
533 | -320
534 | 354
535 | 74
536 | 21
537 | -129
538 | 361
539 | -315
540 | 730
541 | -458
542 | 627
543 | 529
544 | 552
545 | -216
546 | 458
547 | -243
548 | 715
549 | -272
550 | 634
551 | -142
552 | 450
553 | -312
554 | 376
555 | -272
556 | 269
557 | -440
558 | 482
559 | -402
560 | 707
561 | 276
562 | 128
563 | 87
564 | -356
565 | 374
566 | 573
567 | 658
568 | 671
569 | -120
570 | 145
571 | -184
572 | -292
573 | 447
574 | 457
575 | 466
576 | -93
577 | -378
578 | -363
579 | 290
580 | 0
581 | 54
582 | 470
583 | 746
584 | -462
585 | 446
586 | -87
587 | 211
588 | -102
589 | -128
590 | -274
591 | -38
592 | 613
593 | 243
594 | -230
595 | 206
596 | 529
597 | -453
598 | -185
599 | 657
600 | 482
601 | -9
602 | 406
603 | -184
604 | 459
605 | 730
606 | 438
607 | -406
608 | -413
609 | -362
610 | 511
611 | 630
612 | 567
613 | -140
614 | 591
615 | -488
616 | 224
617 | -38
618 | 556
619 | 473
620 | -266
621 | 225
622 | -321
623 | 328
624 | 183
625 | -340
626 | -169
627 | 445
628 | 381
629 | 170
630 | -193
631 | -411
632 | 706
633 | 745
634 | 86
635 | 317
636 | -272
637 | 600
638 | 618
639 | 286
640 | 309
641 | 180
642 | 308
643 | 674
644 | 234
645 | 204
646 | -182
647 | 101
648 | -193
649 | -294
650 | 546
651 | -410
652 | -2
653 | 642
654 | 684
655 | -11
656 | 80
657 | -499
658 | -489
659 | -469
660 | -314
661 | -198
662 | -246
663 | 642
664 | 130
665 | 698
666 | 594
667 | -291
668 | -488
669 | 111
670 | -172
671 | -402
672 | -320
673 | 701
674 | -78
675 | 447
676 | 342
677 | 391
678 | -72
679 | -428
680 | 724
681 | 644
682 | 499
683 | 489
684 | 386
685 | 377
686 | -488
687 | 322
688 | 33
689 | -61
690 | 435
691 | -337
692 | -309
693 | 225
694 | 508
695 | -191
696 | -444
697 | -1
698 | 702
699 | 293
700 | 421
701 | 658
702 | 279
703 | -20
704 | 211
705 | -285
706 | -18
707 | 188
708 | 306
709 | 229
710 | 283
711 | -318
712 | 379
713 | 709
714 | 526
715 | -105
716 | -366
717 | 206
718 | -479
719 | -357
720 | 399
721 | 420
722 | -491
723 | 628
724 | 202
725 | 593
726 | 322
727 | -490
728 | 673
729 | 235
730 | -131
731 | 632
732 | -233
733 | 134
734 | 721
735 | -332
736 | -385
737 | -300
738 | -425
739 | -435
740 | -403
741 | -374
742 | 532
743 | -234
744 | 78
745 | 474
746 | 724
747 | 126
748 | 688
749 | 690
750 | 508
751 | 164
752 | -471
753 | -369
754 | -9
755 | 611
756 | 312
757 | -32
758 | -33
759 | -211
760 | 217
761 | -6
762 | 336
763 | -311
764 | -191
765 | 591
766 | -435
767 | -181
768 | 117
769 | 731
770 | 129
771 | 282
772 | 15
773 | 287
774 | 240
775 | 21
776 | -192
777 | 448
778 | 119
779 | -49
780 | 163
781 | -429
782 | 461
783 | 629
784 | 229
785 | 379
786 | -458
787 | 153
788 | 703
789 | 456
790 | 96
791 | 331
792 | 575
793 | -469
794 | 174
795 | 5
796 | -235
797 | -323
798 | 478
799 | 195
800 | 719
801 | -278
802 | 71
803 | 18
804 | 677
805 | -299
806 | -407
807 | 301
808 | 599
809 | -105
810 | 69
811 | 530
812 | 453
813 | 424
814 | -74
815 | -55
816 | -454
817 | 347
818 | -133
819 | 406
820 | 577
821 | -237
822 | 394
823 | 275
824 | -492
825 | 684
826 | -284
827 | 450
828 | -79
829 | 612
830 | 544
831 | 232
832 | -92
833 | 446
834 | 691
835 | 403
836 | 418
837 | 698
838 | 37
839 | 12
840 | -41
841 | -345
842 | 78
843 | -338
844 | -459
845 | -271
846 | -186
847 | 332
848 | 181
849 | 460
850 | 737
851 | 355
852 | 121
853 | -142
854 | 303
855 | -206
856 | 326
857 | 99
858 | 227
859 | 504
860 | 280
861 | 395
862 | -76
863 | 444
864 | -284
865 | 279
866 | 508
867 | -377
868 | -197
869 | 103
870 | 405
871 | 590
872 | 613
873 | -297
874 | 86
875 | 590
876 | 677
877 | -229
878 | 253
879 | 522
880 | 693
881 | 502
882 | 500
883 | -148
884 | -267
885 | 657
886 | 583
887 | -325
888 | 617
889 | 654
890 | 580
891 | 295
892 | 303
893 | 564
894 | 578
895 | 232
896 | -455
897 | -382
898 | -125
899 | 495
900 | -333
901 | 264
902 | -267
903 | 350
904 | 121
905 | 22
906 | 43
907 | 107
908 | 573
909 | -340
910 | -372
911 | 57
912 | -165
913 | 225
914 | 42
915 | -362
916 | 678
917 | 107
918 | -110
919 | -71
920 | 554
921 | 611
922 | 199
923 | 699
924 | 111
925 | 321
926 | 287
927 | -105
928 | -104
929 | -16
930 | -141
931 | -277
932 | 472
933 | 693
934 | 201
935 | 547
936 | -151
937 | 316
938 | -22
939 | 105
940 | 549
941 | 191
942 | -272
943 | -481
944 | -116
945 | 640
946 | 376
947 | -473
948 | 153
949 | -451
950 | -244
951 | 200
952 | 684
953 | -107
954 | 611
955 | -64
956 | 748
957 | -195
958 | 588
959 | -127
960 | -122
961 | -465
962 | 382
963 | 408
964 | 378
965 | 328
966 | 643
967 | -283
968 | -73
969 | -298
970 | 285
971 | -310
972 | -82
973 | 382
974 | -233
975 | 189
976 | 3
977 | -379
978 | 345
979 | -374
980 | -377
981 | 28
982 | 667
983 | 45
984 | -294
985 | 232
986 | 410
987 | 221
988 | 475
989 | -215
990 | -258
991 | -237
992 | -180
993 | 51
994 | 646
995 | 181
996 | 747
997 | 62
998 | 180
999 | 200
1000 | 309
1001 |
--------------------------------------------------------------------------------
/assignment1-1/simple.txt:
--------------------------------------------------------------------------------
1 | Hello everyone how is everyone doing? I mean hello my dear you look amazing today.
2 | Hello sunshine. Hello blue skies. What a wonderful day! Don't you look me in the
3 | eye and tell me you don't see the same things that I do. Hello again and goodbye.
4 |
5 |
--------------------------------------------------------------------------------
/assignment1-2/src/main/ii.go:
--------------------------------------------------------------------------------
1 | package main
2 |
3 | import "os"
4 | import "fmt"
5 | import "mapreduce"
6 |
7 | // The mapping function is called once for each piece of the input.
8 | // In this framework, the key is the name of the file that is being processed,
9 | // and the value is the file's contents. The return value should be a slice of
10 | // key/value pairs, each represented by a mapreduce.KeyValue.
11 | func mapF(document string, value string) (res []mapreduce.KeyValue) {
12 | // TODO: you should complete this to do the inverted index challenge
13 | }
14 |
15 | // The reduce function is called once for each key generated by Map, with a
16 | // list of that key's string value (merged across all inputs). The return value
17 | // should be a single output value for that key.
18 | func reduceF(key string, values []string) string {
19 | // TODO: you should complete this to do the inverted index challenge
20 | }
21 |
22 | // Can be run in 3 ways:
23 | // 1) Sequential (e.g., go run ii.go master sequential x1.txt .. xN.txt)
24 | // 2) Master (e.g., go run ii.go master localhost:7777 x1.txt .. xN.txt)
25 | // 3) Worker (e.g., go run ii.go worker localhost:7777 localhost:7778 &)
26 | func main() {
27 | if len(os.Args) < 4 {
28 | fmt.Printf("%s: see usage comments in file\n", os.Args[0])
29 | } else if os.Args[1] == "master" {
30 | var mr *mapreduce.Master
31 | if os.Args[2] == "sequential" {
32 | mr = mapreduce.Sequential("iiseq", os.Args[3:], 3, mapF, reduceF)
33 | } else {
34 | mr = mapreduce.Distributed("iiseq", os.Args[3:], 3, os.Args[2])
35 | }
36 | mr.Wait()
37 | } else {
38 | mapreduce.RunWorker(os.Args[2], os.Args[3], mapF, reduceF, 100)
39 | }
40 | }
41 |
--------------------------------------------------------------------------------
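The `mapF` and `reduceF` stubs in ii.go above are deliberately left for the student. The following is a hedged, self-contained sketch of one possible shape for the inverted-index functions — not the assignment's required solution. The `KeyValue` type here is a local stand-in for `mapreduce.KeyValue`, and splitting on non-letter runes is just one plausible tokenization.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
	"unicode"
)

// KeyValue is a local stand-in for mapreduce.KeyValue from the framework.
type KeyValue struct {
	Key   string
	Value string
}

// mapF emits one (word, document) pair for each distinct word in the document.
func mapF(document string, value string) (res []KeyValue) {
	seen := make(map[string]bool)
	for _, w := range strings.FieldsFunc(value, func(r rune) bool { return !unicode.IsLetter(r) }) {
		if !seen[w] {
			seen[w] = true
			res = append(res, KeyValue{w, document})
		}
	}
	return
}

// reduceF merges the document names seen for one word into "count doc1,doc2,...",
// deduplicated and sorted so the output is deterministic.
func reduceF(key string, values []string) string {
	docs := make(map[string]bool)
	for _, v := range values {
		docs[v] = true
	}
	names := make([]string, 0, len(docs))
	for d := range docs {
		names = append(names, d)
	}
	sort.Strings(names)
	return fmt.Sprintf("%d %s", len(names), strings.Join(names, ","))
}

func main() {
	fmt.Println(mapF("a.txt", "hello world hello"))
	fmt.Println(reduceF("hello", []string{"b.txt", "a.txt", "a.txt"})) // 2 a.txt,b.txt
}
```

The `count doc1,doc2,...` return value appears to line up with the `word: N file1,file2,...` rows in mr-challenge.txt, assuming the framework renders each merged result as `key: value`.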
/assignment1-2/src/main/mr-challenge.txt:
--------------------------------------------------------------------------------
1 | women: 15 pg-being_ernest.txt,pg-dorian_gray.txt,pg-dracula.txt,pg-emma.txt,pg-frankenstein.txt,pg-great_expectations.txt,pg-huckleberry_finn.txt,pg-les_miserables.txt,pg-metamorphosis.txt,pg-moby_dick.txt,pg-sherlock_holmes.txt,pg-tale_of_two_cities.txt,pg-tom_sawyer.txt,pg-ulysses.txt,pg-war_and_peace.txt
2 | won: 15 pg-being_ernest.txt,pg-dorian_gray.txt,pg-dracula.txt,pg-frankenstein.txt,pg-great_expectations.txt,pg-grimm.txt,pg-huckleberry_finn.txt,pg-les_miserables.txt,pg-metamorphosis.txt,pg-moby_dick.txt,pg-sherlock_holmes.txt,pg-tale_of_two_cities.txt,pg-tom_sawyer.txt,pg-ulysses.txt,pg-war_and_peace.txt
3 | wonderful: 15 pg-being_ernest.txt,pg-dorian_gray.txt,pg-dracula.txt,pg-emma.txt,pg-frankenstein.txt,pg-great_expectations.txt,pg-grimm.txt,pg-huckleberry_finn.txt,pg-les_miserables.txt,pg-moby_dick.txt,pg-sherlock_holmes.txt,pg-tale_of_two_cities.txt,pg-tom_sawyer.txt,pg-ulysses.txt,pg-war_and_peace.txt
4 | words: 15 pg-dorian_gray.txt,pg-dracula.txt,pg-emma.txt,pg-frankenstein.txt,pg-great_expectations.txt,pg-grimm.txt,pg-huckleberry_finn.txt,pg-les_miserables.txt,pg-metamorphosis.txt,pg-moby_dick.txt,pg-sherlock_holmes.txt,pg-tale_of_two_cities.txt,pg-tom_sawyer.txt,pg-ulysses.txt,pg-war_and_peace.txt
5 | worked: 15 pg-dorian_gray.txt,pg-dracula.txt,pg-emma.txt,pg-frankenstein.txt,pg-great_expectations.txt,pg-grimm.txt,pg-huckleberry_finn.txt,pg-les_miserables.txt,pg-metamorphosis.txt,pg-moby_dick.txt,pg-sherlock_holmes.txt,pg-tale_of_two_cities.txt,pg-tom_sawyer.txt,pg-ulysses.txt,pg-war_and_peace.txt
6 | worse: 15 pg-being_ernest.txt,pg-dorian_gray.txt,pg-dracula.txt,pg-emma.txt,pg-frankenstein.txt,pg-great_expectations.txt,pg-grimm.txt,pg-huckleberry_finn.txt,pg-les_miserables.txt,pg-moby_dick.txt,pg-sherlock_holmes.txt,pg-tale_of_two_cities.txt,pg-tom_sawyer.txt,pg-ulysses.txt,pg-war_and_peace.txt
7 | wounded: 15 pg-being_ernest.txt,pg-dorian_gray.txt,pg-dracula.txt,pg-emma.txt,pg-frankenstein.txt,pg-great_expectations.txt,pg-grimm.txt,pg-huckleberry_finn.txt,pg-les_miserables.txt,pg-moby_dick.txt,pg-sherlock_holmes.txt,pg-tale_of_two_cities.txt,pg-tom_sawyer.txt,pg-ulysses.txt,pg-war_and_peace.txt
8 | yes: 15 pg-being_ernest.txt,pg-dorian_gray.txt,pg-dracula.txt,pg-emma.txt,pg-great_expectations.txt,pg-grimm.txt,pg-huckleberry_finn.txt,pg-les_miserables.txt,pg-metamorphosis.txt,pg-moby_dick.txt,pg-sherlock_holmes.txt,pg-tale_of_two_cities.txt,pg-tom_sawyer.txt,pg-ulysses.txt,pg-war_and_peace.txt
9 | younger: 15 pg-being_ernest.txt,pg-dorian_gray.txt,pg-dracula.txt,pg-emma.txt,pg-frankenstein.txt,pg-great_expectations.txt,pg-grimm.txt,pg-huckleberry_finn.txt,pg-les_miserables.txt,pg-moby_dick.txt,pg-sherlock_holmes.txt,pg-tale_of_two_cities.txt,pg-tom_sawyer.txt,pg-ulysses.txt,pg-war_and_peace.txt
10 | yours: 15 pg-being_ernest.txt,pg-dorian_gray.txt,pg-dracula.txt,pg-emma.txt,pg-frankenstein.txt,pg-great_expectations.txt,pg-grimm.txt,pg-huckleberry_finn.txt,pg-les_miserables.txt,pg-moby_dick.txt,pg-sherlock_holmes.txt,pg-tale_of_two_cities.txt,pg-tom_sawyer.txt,pg-ulysses.txt,pg-war_and_peace.txt
11 |
--------------------------------------------------------------------------------
/assignment1-2/src/main/mr-testout.txt:
--------------------------------------------------------------------------------
1 | he: 34077
2 | was: 37044
3 | that: 37495
4 | I: 44502
5 | in: 46092
6 | a: 60558
7 | to: 74357
8 | of: 79727
9 | and: 93990
10 | the: 154024
11 |
--------------------------------------------------------------------------------
/assignment1-2/src/main/test-ii.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | go run ii.go master sequential pg-*.txt
3 | sort -k1,1 mrtmp.iiseq | sort -snk2,2 | grep -v '16' | tail -10 | diff - mr-challenge.txt > diff.out
4 | if [ -s diff.out ]
5 | then
6 | echo "Failed test. Output should be as in mr-challenge.txt. Your output differs as follows (from diff.out):" > /dev/stderr
7 | cat diff.out
8 | else
9 | echo "Passed test" > /dev/stderr
10 | fi
11 |
12 |
--------------------------------------------------------------------------------
/assignment1-2/src/main/test-mr.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | here=$(dirname "$0")
3 | [[ "$here" = /* ]] || here="$PWD/$here"
4 | export GOPATH="$here/../../"
5 | echo ""
6 | echo "==> Part I"
7 | go test -run Sequential mapreduce/...
8 | echo ""
9 | echo "==> Part II"
10 | (cd "$here" && ./test-wc.sh > /dev/null)
11 | echo ""
12 | echo "==> Part III"
13 | go test -run TestBasic mapreduce/...
14 | echo ""
15 | echo "==> Part IV"
16 | go test -run Failure mapreduce/...
17 | echo ""
18 | echo "==> Part V (challenge)"
19 | (cd "$here" && ./test-ii.sh > /dev/null)
20 |
21 | rm "$here"/mrtmp.* "$here"/diff.out
22 |
--------------------------------------------------------------------------------
/assignment1-2/src/main/test-wc.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | go run wc.go master sequential pg-*.txt
3 | sort -n -k2 mrtmp.wcseq | tail -10 | diff - mr-testout.txt > diff.out
4 | if [ -s diff.out ]
5 | then
6 | echo "Failed test. Output should be as in mr-testout.txt. Your output differs as follows (from diff.out):" > /dev/stderr
7 | cat diff.out
8 | else
9 | echo "Passed test" > /dev/stderr
10 | fi
11 |
12 |
--------------------------------------------------------------------------------
/assignment1-2/src/main/wc.go:
--------------------------------------------------------------------------------
1 | package main
2 |
3 | import (
4 | "fmt"
5 | "mapreduce"
6 | "os"
7 | )
8 |
9 | // The mapping function is called once for each piece of the input.
10 | // In this framework, the key is the name of the file that is being processed,
11 | // and the value is the file's contents. The return value should be a slice of
12 | // key/value pairs, each represented by a mapreduce.KeyValue.
13 | func mapF(document string, value string) (res []mapreduce.KeyValue) {
14 | // TODO: you have to write this function
15 | }
16 |
17 | // The reduce function is called once for each key generated by Map, with a
18 | // list of that key's string value (merged across all inputs). The return value
19 | // should be a single output value for that key.
20 | func reduceF(key string, values []string) string {
21 | // TODO: you also have to write this function
22 | }
23 |
24 | // Can be run in 3 ways:
25 | // 1) Sequential (e.g., go run wc.go master sequential x1.txt .. xN.txt)
26 | // 2) Master (e.g., go run wc.go master localhost:7777 x1.txt .. xN.txt)
27 | // 3) Worker (e.g., go run wc.go worker localhost:7777 localhost:7778 &)
28 | func main() {
29 | if len(os.Args) < 4 {
30 | fmt.Printf("%s: see usage comments in file\n", os.Args[0])
31 | } else if os.Args[1] == "master" {
32 | var mr *mapreduce.Master
33 | if os.Args[2] == "sequential" {
34 | mr = mapreduce.Sequential("wcseq", os.Args[3:], 3, mapF, reduceF)
35 | } else {
36 | mr = mapreduce.Distributed("wcseq", os.Args[3:], 3, os.Args[2])
37 | }
38 | mr.Wait()
39 | } else {
40 | mapreduce.RunWorker(os.Args[2], os.Args[3], mapF, reduceF, 100)
41 | }
42 | }
43 |
--------------------------------------------------------------------------------
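The wc.go stubs above follow the classic MapReduce word-count pattern: map emits a `("word", "1")` pair per occurrence, reduce sums the counts. A hedged, self-contained sketch of one way to fill them in (the `KeyValue` type is a local stand-in for `mapreduce.KeyValue`; this is not presented as the official solution):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"unicode"
)

// KeyValue is a local stand-in for mapreduce.KeyValue from the framework.
type KeyValue struct {
	Key   string
	Value string
}

// mapF splits the contents into words and emits one ("word", "1") pair per occurrence.
func mapF(document string, value string) (res []KeyValue) {
	for _, w := range strings.FieldsFunc(value, func(r rune) bool { return !unicode.IsLetter(r) }) {
		res = append(res, KeyValue{w, "1"})
	}
	return
}

// reduceF sums the per-occurrence counts emitted by mapF for one word.
func reduceF(key string, values []string) string {
	total := 0
	for _, v := range values {
		if n, err := strconv.Atoi(v); err == nil {
			total += n
		}
	}
	return strconv.Itoa(total)
}

func main() {
	fmt.Println(len(mapF("doc", "the cat and the hat"))) // 5
	fmt.Println(reduceF("the", []string{"1", "1"}))      // 2
}
```

Per-word totals in this shape would merge into `word: count` rows like those in mr-testout.txt (e.g. `the: 154024`).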
/assignment1-2/src/mapreduce/common.go:
--------------------------------------------------------------------------------
1 | package mapreduce
2 |
3 | import (
4 | "fmt"
5 | "log"
6 | "strconv"
7 | )
8 |
9 | // Debugging enabled?
10 | const debugEnabled = false
11 |
12 | // DPrintf will only print if the debugEnabled const has been set to true
13 | func debug(format string, a ...interface{}) (n int, err error) {
14 | if debugEnabled {
15 | n, err = fmt.Printf(format, a...)
16 | }
17 | return
18 | }
19 |
20 | // Terminate if the error is non-nil
21 | func checkError(err error) {
22 | if err != nil {
23 | log.Fatal(err)
24 | }
25 | }
26 |
27 | // jobPhase indicates whether a task is scheduled as a map or reduce task.
28 | type jobPhase string
29 |
30 | const (
31 | mapPhase jobPhase = "Map"
32 | reducePhase = "Reduce"
33 | )
34 |
35 | // KeyValue is a type used to hold the key/value pairs passed to the map and
36 | // reduce functions.
37 | type KeyValue struct {
38 | Key string
39 | Value string
40 | }
41 |
42 | // reduceName constructs the name of the intermediate file which map task
43 | // <mapTask> produces for reduce task <reduceTask>.
44 | func reduceName(jobName string, mapTask int, reduceTask int) string {
45 | 	return "mrtmp." + jobName + "-" + strconv.Itoa(mapTask) + "-" + strconv.Itoa(reduceTask)
46 | }
47 |
48 | // mergeName constructs the name of the output file of reduce task <reduceTask>
49 | func mergeName(jobName string, reduceTask int) string {
50 | 	return "mrtmp." + jobName + "-res-" + strconv.Itoa(reduceTask)
51 | }
--------------------------------------------------------------------------------
/assignment2/README.md:
--------------------------------------------------------------------------------
Introduction
4 |
5 | In this assignment you will implement the
6 | Chandy-Lamport algorithm for distributed snapshots.
7 | Your snapshot algorithm will be implemented on top of a token passing system, similar
8 | to the ones presented in Precept 4 and in
9 | the Chandy-Lamport paper.
10 |
11 | The algorithm makes the following assumptions:
13 |
20 |
24 | You will find the code under this directory. The code is organized
25 | as follows:
26 |
39 | Of these files, you only need to turn in server.go and simulator.go. However, some other
40 | files also contain information that will be useful for your implementation or debugging, such as the debug
41 | flag in common.go and the thread-safe map in syncmap.go. Your task is to implement the functions
42 | that say TODO: IMPLEMENT ME, adding fields to the surrounding classes if necessary.
43 |
44 |
45 |
48 | Our grading uses the tests in snapshot_test.go provided to you. Test cases can be found in
49 | test_data/. To test the correctness of your code, simply run the following command:
50 |
51 |
52 | $ cd chandy-lamport/
53 | $ go test
54 | Running test '2nodes.top', '2nodes-simple.events'
55 | Running test '2nodes.top', '2nodes-message.events'
56 | Running test '3nodes.top', '3nodes-simple.events'
57 | Running test '3nodes.top', '3nodes-bidirectional-messages.events'
58 | Running test '8nodes.top', '8nodes-sequential-snapshots.events'
59 | Running test '8nodes.top', '8nodes-concurrent-snapshots.events'
60 | Running test '10nodes.top', '10nodes.events'
61 | PASS
62 | ok _/path/to/chandy-lamport 0.012s
63 |
64 |
65 | To run individual tests, you can look up the test names in snapshot_test.go and run:
66 |
67 |68 | $ go test -run 2Node 69 | Running test '2nodes.top', '2nodes-simple.events' 70 | Running test '2nodes.top', '2nodes-message.events' 71 | PASS 72 | ok chandy-lamport 0.006s 73 |74 | 75 | ## Submitting Assignment 76 | 77 | You hand in your assignment as before. 78 | 79 | ```bash 80 | $ git commit -am "[you fill me in]" 81 | $ git tag -a -m "i finished assignment 2" a2-handin 82 | $ git push origin master 83 | $ git push origin a2-handin 84 | $ 85 | ``` 86 | 87 | You should verify that you are able to see your final commit and tags 88 | on the Github page of your repository for this assignment. 89 | 90 | 91 | -------------------------------------------------------------------------------- /assignment2/src/.gitignore: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/COS418F18/assignments_template/e9e55ad69f23bafc835aa856258a380d8edd5398/assignment2/src/.gitignore -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/common.go: -------------------------------------------------------------------------------- 1 | package chandy_lamport 2 | 3 | import ( 4 | "fmt" 5 | "log" 6 | "reflect" 7 | "sort" 8 | ) 9 | 10 | const debug = false 11 | 12 | // ==================================== 13 | // Messages exchanged between servers 14 | // ==================================== 15 | 16 | // An event that represents the sending of a message. 17 | // This is expected to be queued in `link.events`. 18 | type SendMessageEvent struct { 19 | src string 20 | dest string 21 | message interface{} 22 | // The message will be received by the server at or after this time step 23 | receiveTime int 24 | } 25 | 26 | // A message sent from one server to another for token passing. 27 | // This is expected to be encapsulated within a `sendMessageEvent`. 
28 | type TokenMessage struct { 29 | numTokens int 30 | } 31 | 32 | func (m TokenMessage) String() string { 33 | return fmt.Sprintf("token(%v)", m.numTokens) 34 | } 35 | 36 | // A message sent from one server to another during the chandy-lamport algorithm. 37 | // This is expected to be encapsulated within a `sendMessageEvent`. 38 | type MarkerMessage struct { 39 | snapshotId int 40 | } 41 | 42 | func (m MarkerMessage) String() string { 43 | return fmt.Sprintf("marker(%v)", m.snapshotId) 44 | } 45 | 46 | // ======================= 47 | // Events used by logger 48 | // ======================= 49 | 50 | // A message that signifies receiving of a message on a particular server 51 | // This is used only for debugging that is not sent between servers 52 | type ReceivedMessageEvent struct { 53 | src string 54 | dest string 55 | message interface{} 56 | } 57 | 58 | func (m ReceivedMessageEvent) String() string { 59 | switch msg := m.message.(type) { 60 | case TokenMessage: 61 | return fmt.Sprintf("%v received %v tokens from %v", m.dest, msg.numTokens, m.src) 62 | case MarkerMessage: 63 | return fmt.Sprintf("%v received marker(%v) from %v", m.dest, msg.snapshotId, m.src) 64 | } 65 | return fmt.Sprintf("Unrecognized message: %v", m.message) 66 | } 67 | 68 | // A message that signifies sending of a message on a particular server 69 | // This is used only for debugging that is not sent between servers 70 | type SentMessageEvent struct { 71 | src string 72 | dest string 73 | message interface{} 74 | } 75 | 76 | func (m SentMessageEvent) String() string { 77 | switch msg := m.message.(type) { 78 | case TokenMessage: 79 | return fmt.Sprintf("%v sent %v tokens to %v", m.src, msg.numTokens, m.dest) 80 | case MarkerMessage: 81 | return fmt.Sprintf("%v sent marker(%v) to %v", m.src, msg.snapshotId, m.dest) 82 | } 83 | return fmt.Sprintf("Unrecognized message: %v", m.message) 84 | } 85 | 86 | // A message that signifies the beginning of the snapshot process on a particular server. 
87 | // This is used only for debugging that is not sent between servers. 88 | type StartSnapshot struct { 89 | serverId string 90 | snapshotId int 91 | } 92 | 93 | func (m StartSnapshot) String() string { 94 | return fmt.Sprintf("%v startSnapshot(%v)", m.serverId, m.snapshotId) 95 | } 96 | 97 | // A message that signifies the end of the snapshot process on a particular server. 98 | // This is used only for debugging that is not sent between servers. 99 | type EndSnapshot struct { 100 | serverId string 101 | snapshotId int 102 | } 103 | 104 | func (m EndSnapshot) String() string { 105 | return fmt.Sprintf("%v endSnapshot(%v)", m.serverId, m.snapshotId) 106 | } 107 | 108 | // ================================================ 109 | // Events injected to the system by the simulator 110 | // ================================================ 111 | 112 | // An event parsed from the .event files that represent the passing of tokens 113 | // from one server to another 114 | type PassTokenEvent struct { 115 | src string 116 | dest string 117 | tokens int 118 | } 119 | 120 | // An event parsed from the .event files that represent the initiation of the 121 | // chandy-lamport snapshot algorithm 122 | type SnapshotEvent struct { 123 | serverId string 124 | } 125 | 126 | // A message recorded during the snapshot process 127 | type SnapshotMessage struct { 128 | src string 129 | dest string 130 | message interface{} 131 | } 132 | 133 | // State recorded during the snapshot process 134 | type SnapshotState struct { 135 | id int 136 | tokens map[string]int // key = server ID, value = num tokens 137 | messages []*SnapshotMessage 138 | } 139 | 140 | // ===================== 141 | // Misc helper methods 142 | // ===================== 143 | 144 | // If the error is not nil, terminate 145 | func checkError(err error) { 146 | if err != nil { 147 | log.Fatal(err) 148 | } 149 | } 150 | 151 | // Return the keys of the given map in sorted order. 
152 | // Note: The argument passed in MUST be a map, otherwise an error will be thrown. 153 | func getSortedKeys(m interface{}) []string { 154 | v := reflect.ValueOf(m) 155 | if v.Kind() != reflect.Map { 156 | log.Fatal("Attempted to access sorted keys of a non-map: ", m) 157 | } 158 | keys := make([]string, 0) 159 | for _, k := range v.MapKeys() { 160 | keys = append(keys, k.String()) 161 | } 162 | sort.Strings(keys) 163 | return keys 164 | } 165 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/logger.go: -------------------------------------------------------------------------------- 1 | package chandy_lamport 2 | 3 | import ( 4 | "fmt" 5 | "log" 6 | ) 7 | 8 | // ================================= 9 | // Event logger, internal use only 10 | // ================================= 11 | 12 | type Logger struct { 13 | // index = time step 14 | // value = events that occurred at that time step 15 | events [][]LogEvent 16 | } 17 | 18 | type LogEvent struct { 19 | serverId string 20 | // Number of tokens before execution of event 21 | serverTokens int 22 | event interface{} 23 | } 24 | 25 | func (event LogEvent) String() string { 26 | prependWithTokens := false 27 | switch evt := event.event.(type) { 28 | case SentMessageEvent: 29 | switch evt.message.(type) { 30 | case TokenMessage: 31 | prependWithTokens = true 32 | } 33 | case ReceivedMessageEvent: 34 | switch evt.message.(type) { 35 | case TokenMessage: 36 | prependWithTokens = true 37 | } 38 | case StartSnapshot: 39 | prependWithTokens = true 40 | case EndSnapshot: 41 | default: 42 | log.Fatal("Attempted to log unrecognized event: ", event.event) 43 | } 44 | if prependWithTokens { 45 | return fmt.Sprintf("%v has %v token(s)\n\t%v", 46 | event.serverId, 47 | event.serverTokens, 48 | event.event) 49 | } else { 50 | return fmt.Sprintf("%v", event.event) 51 | } 52 | } 53 | 54 | func NewLogger() *Logger { 55 | return &Logger{make([][]LogEvent, 0)} 56 | } 57 
| 58 | func (logger *Logger) PrettyPrint() { 59 | for epoch, events := range logger.events { 60 | if len(events) != 0 { 61 | fmt.Printf("Time %v:\n", epoch) 62 | } 63 | for _, event := range events { 64 | fmt.Printf("\t%v\n", event) 65 | } 66 | } 67 | } 68 | 69 | func (logger *Logger) NewEpoch() { 70 | logger.events = append(logger.events, make([]LogEvent, 0)) 71 | } 72 | 73 | func (logger *Logger) RecordEvent(server *Server, event interface{}) { 74 | mostRecent := len(logger.events) - 1 75 | events := logger.events[mostRecent] 76 | events = append(events, LogEvent{server.Id, server.Tokens, event}) 77 | logger.events[mostRecent] = events 78 | } 79 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/queue.go: -------------------------------------------------------------------------------- 1 | package chandy_lamport 2 | 3 | import "container/list" 4 | 5 | // A simple FIFO queue implemented on top of container/list.List 6 | type Queue struct { 7 | elements *list.List 8 | } 9 | 10 | func NewQueue() *Queue { 11 | return &Queue{list.New()} 12 | } 13 | 14 | func (q *Queue) Empty() bool { 15 | return (q.elements.Len() == 0) 16 | } 17 | 18 | func (q *Queue) Push(v interface{}) { 19 | q.elements.PushFront(v) 20 | } 21 | 22 | func (q *Queue) Pop() interface{} { 23 | return q.elements.Remove(q.elements.Back()) 24 | } 25 | 26 | func (q *Queue) Peek() interface{} { 27 | return q.elements.Back().Value 28 | } 29 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/server.go: -------------------------------------------------------------------------------- 1 | package chandy_lamport 2 | 3 | import "log" 4 | 5 | // The main participant of the distributed snapshot protocol. 6 | // Servers exchange token messages and marker messages among each other. 7 | // Token messages represent the transfer of tokens from one server to another.
8 | // Marker messages represent the progress of the snapshot process. The bulk of 9 | // the distributed protocol is implemented in `HandlePacket` and `StartSnapshot`. 10 | type Server struct { 11 | Id string 12 | Tokens int 13 | sim *Simulator 14 | outboundLinks map[string]*Link // key = link.dest 15 | inboundLinks map[string]*Link // key = link.src 16 | // TODO: ADD MORE FIELDS HERE 17 | } 18 | 19 | // A unidirectional communication channel between two servers 20 | // Each link contains an event queue (as opposed to a packet queue) 21 | type Link struct { 22 | src string 23 | dest string 24 | events *Queue 25 | } 26 | 27 | func NewServer(id string, tokens int, sim *Simulator) *Server { 28 | return &Server{ 29 | id, 30 | tokens, 31 | sim, 32 | make(map[string]*Link), 33 | make(map[string]*Link), 34 | } 35 | } 36 | 37 | // Add a unidirectional link to the destination server 38 | func (server *Server) AddOutboundLink(dest *Server) { 39 | if server == dest { 40 | return 41 | } 42 | l := Link{server.Id, dest.Id, NewQueue()} 43 | server.outboundLinks[dest.Id] = &l 44 | dest.inboundLinks[server.Id] = &l 45 | } 46 | 47 | // Send a message on all of the server's outbound links 48 | func (server *Server) SendToNeighbors(message interface{}) { 49 | for _, serverId := range getSortedKeys(server.outboundLinks) { 50 | link := server.outboundLinks[serverId] 51 | server.sim.logger.RecordEvent( 52 | server, 53 | SentMessageEvent{server.Id, link.dest, message}) 54 | link.events.Push(SendMessageEvent{ 55 | server.Id, 56 | link.dest, 57 | message, 58 | server.sim.GetReceiveTime()}) 59 | } 60 | } 61 | 62 | // Send a number of tokens to a neighbor attached to this server 63 | func (server *Server) SendTokens(numTokens int, dest string) { 64 | if server.Tokens < numTokens { 65 | log.Fatalf("Server %v attempted to send %v tokens when it only has %v\n", 66 | server.Id, numTokens, server.Tokens) 67 | } 68 | message := TokenMessage{numTokens} 69 | server.sim.logger.RecordEvent(server, 
SentMessageEvent{server.Id, dest, message}) 70 | // Update local state before sending the tokens 71 | server.Tokens -= numTokens 72 | link, ok := server.outboundLinks[dest] 73 | if !ok { 74 | log.Fatalf("Unknown dest ID %v from server %v\n", dest, server.Id) 75 | } 76 | link.events.Push(SendMessageEvent{ 77 | server.Id, 78 | dest, 79 | message, 80 | server.sim.GetReceiveTime()}) 81 | } 82 | 83 | // Callback for when a message is received on this server. 84 | // When the snapshot algorithm completes on this server, this function 85 | // should notify the simulator by calling `sim.NotifySnapshotComplete`. 86 | func (server *Server) HandlePacket(src string, message interface{}) { 87 | // TODO: IMPLEMENT ME 88 | } 89 | 90 | // Start the chandy-lamport snapshot algorithm on this server. 91 | // This should be called only once per server. 92 | func (server *Server) StartSnapshot(snapshotId int) { 93 | // TODO: IMPLEMENT ME 94 | } 95 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/simulator.go: -------------------------------------------------------------------------------- 1 | package chandy_lamport 2 | 3 | import ( 4 | "log" 5 | "math/rand" 6 | ) 7 | 8 | // Max random delay added to packet delivery 9 | const maxDelay = 5 10 | 11 | // Simulator is the entry point to the distributed snapshot application. 12 | // 13 | // It is a discrete time simulator, i.e. events that happen at time t + 1 come 14 | // strictly after events that happen at time t. At each time step, the simulator 15 | // examines messages queued up across all the links in the system and decides 16 | // which ones to deliver to the destination. 17 | // 18 | // The simulator is responsible for starting the snapshot process, inducing servers 19 | // to pass tokens to each other, and collecting the snapshot state after the process 20 | // has terminated. 
21 | type Simulator struct { 22 | time int 23 | nextSnapshotId int 24 | servers map[string]*Server // key = server ID 25 | logger *Logger 26 | // TODO: ADD MORE FIELDS HERE 27 | } 28 | 29 | func NewSimulator() *Simulator { 30 | return &Simulator{ 31 | 0, 32 | 0, 33 | make(map[string]*Server), 34 | NewLogger(), 35 | } 36 | } 37 | 38 | // Return the receive time of a message after adding a random delay. 39 | // Note: since we only deliver one message to a given server at each time step, 40 | // the message may be received *after* the time step returned in this function. 41 | func (sim *Simulator) GetReceiveTime() int { 42 | return sim.time + 1 + rand.Intn(maxDelay) 43 | } 44 | 45 | // Add a server to this simulator with the specified number of starting tokens 46 | func (sim *Simulator) AddServer(id string, tokens int) { 47 | server := NewServer(id, tokens, sim) 48 | sim.servers[id] = server 49 | } 50 | 51 | // Add a unidirectional link between two servers 52 | func (sim *Simulator) AddForwardLink(src string, dest string) { 53 | server1, ok1 := sim.servers[src] 54 | server2, ok2 := sim.servers[dest] 55 | if !ok1 { 56 | log.Fatalf("Server %v does not exist\n", src) 57 | } 58 | if !ok2 { 59 | log.Fatalf("Server %v does not exist\n", dest) 60 | } 61 | server1.AddOutboundLink(server2) 62 | } 63 | 64 | // Run an event in the system 65 | func (sim *Simulator) InjectEvent(event interface{}) { 66 | switch event := event.(type) { 67 | case PassTokenEvent: 68 | src := sim.servers[event.src] 69 | src.SendTokens(event.tokens, event.dest) 70 | case SnapshotEvent: 71 | sim.StartSnapshot(event.serverId) 72 | default: 73 | log.Fatal("Unknown event: ", event) 74 | } 75 | } 76 | 77 | // Advance the simulator time forward by one step, handling all send message events 78 | // that expire at the new time step, if any.
79 | func (sim *Simulator) Tick() { 80 | sim.time++ 81 | sim.logger.NewEpoch() 82 | // Note: to ensure deterministic ordering of packet delivery across the servers, 83 | // we must also iterate through the servers and the links in a deterministic way 84 | for _, serverId := range getSortedKeys(sim.servers) { 85 | server := sim.servers[serverId] 86 | for _, dest := range getSortedKeys(server.outboundLinks) { 87 | link := server.outboundLinks[dest] 88 | // Deliver at most one packet per server at each time step to 89 | // establish total ordering of packet delivery to each server 90 | if !link.events.Empty() { 91 | e := link.events.Peek().(SendMessageEvent) 92 | if e.receiveTime <= sim.time { 93 | link.events.Pop() 94 | sim.logger.RecordEvent( 95 | sim.servers[e.dest], 96 | ReceivedMessageEvent{e.src, e.dest, e.message}) 97 | sim.servers[e.dest].HandlePacket(e.src, e.message) 98 | break 99 | } 100 | } 101 | } 102 | } 103 | } 104 | 105 | // Start a new snapshot process at the specified server 106 | func (sim *Simulator) StartSnapshot(serverId string) { 107 | snapshotId := sim.nextSnapshotId 108 | sim.nextSnapshotId++ 109 | sim.logger.RecordEvent(sim.servers[serverId], StartSnapshot{serverId, snapshotId}) 110 | // TODO: IMPLEMENT ME 111 | } 112 | 113 | // Callback for servers to notify the simulator that the snapshot process has 114 | // completed on a particular server 115 | func (sim *Simulator) NotifySnapshotComplete(serverId string, snapshotId int) { 116 | sim.logger.RecordEvent(sim.servers[serverId], EndSnapshot{serverId, snapshotId}) 117 | // TODO: IMPLEMENT ME 118 | } 119 | 120 | // Collect and merge snapshot state from all the servers. 121 | // This function blocks until the snapshot process has completed on all servers. 
122 | func (sim *Simulator) CollectSnapshot(snapshotId int) *SnapshotState { 123 | // TODO: IMPLEMENT ME 124 | snap := SnapshotState{snapshotId, make(map[string]int), make([]*SnapshotMessage, 0)} 125 | return &snap 126 | } 127 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/snapshot_test.go: -------------------------------------------------------------------------------- 1 | package chandy_lamport 2 | 3 | import ( 4 | "fmt" 5 | "math/rand" 6 | "testing" 7 | ) 8 | 9 | func runTest(t *testing.T, topFile string, eventsFile string, snapFiles []string) { 10 | startMessage := fmt.Sprintf("Running test '%v', '%v'", topFile, eventsFile) 11 | if debug { 12 | bars := "==================================================================" 13 | startMessage = fmt.Sprintf("%v\n%v\n%v\n", bars, startMessage, bars) 14 | } 15 | fmt.Println(startMessage) 16 | 17 | // Initialize simulator 18 | rand.Seed(8053172852482175524) 19 | sim := NewSimulator() 20 | readTopology(topFile, sim) 21 | actualSnaps := injectEvents(eventsFile, sim) 22 | if len(actualSnaps) != len(snapFiles) { 23 | t.Fatalf("Expected %v snapshot(s), got %v\n", len(snapFiles), len(actualSnaps)) 24 | } 25 | // Optionally print events for debugging 26 | if debug { 27 | sim.logger.PrettyPrint() 28 | fmt.Println() 29 | } 30 | // Verify that the number of tokens are preserved in the snapshots 31 | checkTokens(sim, actualSnaps) 32 | // Verify against golden files 33 | expectedSnaps := make([]*SnapshotState, 0) 34 | for _, snapFile := range snapFiles { 35 | expectedSnaps = append(expectedSnaps, readSnapshot(snapFile)) 36 | } 37 | sortSnapshots(actualSnaps) 38 | sortSnapshots(expectedSnaps) 39 | for i := 0; i < len(actualSnaps); i++ { 40 | assertEqual(expectedSnaps[i], actualSnaps[i]) 41 | } 42 | } 43 | 44 | func Test2NodesSimple(t *testing.T) { 45 | runTest(t, "2nodes.top", "2nodes-simple.events", []string{"2nodes-simple.snap"}) 46 | } 47 | 48 | func 
Test2NodesSingleMessage(t *testing.T) { 49 | runTest(t, "2nodes.top", "2nodes-message.events", []string{"2nodes-message.snap"}) 50 | } 51 | 52 | func Test3NodesMultipleMessages(t *testing.T) { 53 | runTest(t, "3nodes.top", "3nodes-simple.events", []string{"3nodes-simple.snap"}) 54 | } 55 | 56 | func Test3NodesMultipleBidirectionalMessages(t *testing.T) { 57 | runTest( 58 | t, 59 | "3nodes.top", 60 | "3nodes-bidirectional-messages.events", 61 | []string{"3nodes-bidirectional-messages.snap"}) 62 | } 63 | 64 | func Test8NodesSequentialSnapshots(t *testing.T) { 65 | runTest( 66 | t, 67 | "8nodes.top", 68 | "8nodes-sequential-snapshots.events", 69 | []string{ 70 | "8nodes-sequential-snapshots0.snap", 71 | "8nodes-sequential-snapshots1.snap", 72 | }) 73 | } 74 | 75 | func Test8NodesConcurrentSnapshots(t *testing.T) { 76 | runTest( 77 | t, 78 | "8nodes.top", 79 | "8nodes-concurrent-snapshots.events", 80 | []string{ 81 | "8nodes-concurrent-snapshots0.snap", 82 | "8nodes-concurrent-snapshots1.snap", 83 | "8nodes-concurrent-snapshots2.snap", 84 | "8nodes-concurrent-snapshots3.snap", 85 | "8nodes-concurrent-snapshots4.snap", 86 | }) 87 | } 88 | 89 | func Test10NodesDirectedEdges(t *testing.T) { 90 | runTest( 91 | t, 92 | "10nodes.top", 93 | "10nodes.events", 94 | []string{ 95 | "10nodes0.snap", 96 | "10nodes1.snap", 97 | "10nodes2.snap", 98 | "10nodes3.snap", 99 | "10nodes4.snap", 100 | "10nodes5.snap", 101 | "10nodes6.snap", 102 | "10nodes7.snap", 103 | "10nodes8.snap", 104 | "10nodes9.snap", 105 | }) 106 | } 107 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/syncmap.go: -------------------------------------------------------------------------------- 1 | package chandy_lamport 2 | 3 | import "sync" 4 | 5 | // An implementation of a map that synchronizes read and write accesses. 
6 | // Note: This type intentionally mirrors the interface of `sync.Map`, 7 | // which was introduced in Go 1.9. It provides a simplified version of 8 | // the same functionality without requiring the user to upgrade their 9 | // Go installation. 10 | type SyncMap struct { 11 | internalMap map[interface{}]interface{} 12 | lock sync.RWMutex 13 | } 14 | 15 | func NewSyncMap() *SyncMap { 16 | m := SyncMap{} 17 | m.internalMap = make(map[interface{}]interface{}) 18 | return &m 19 | } 20 | 21 | func (m *SyncMap) Load(key interface{}) (value interface{}, ok bool) { 22 | m.lock.RLock() 23 | defer m.lock.RUnlock() 24 | value, ok = m.internalMap[key] 25 | return 26 | } 27 | 28 | func (m *SyncMap) Store(key, value interface{}) { 29 | m.lock.Lock() 30 | defer m.lock.Unlock() 31 | m.internalMap[key] = value 32 | } 33 | 34 | func (m *SyncMap) LoadOrStore(key, value interface{}) (interface{}, bool) { 35 | m.lock.Lock() 36 | defer m.lock.Unlock() 37 | existingValue, ok := m.internalMap[key] 38 | if ok { 39 | return existingValue, true 40 | } 41 | m.internalMap[key] = value 42 | return value, false 43 | } 44 | 45 | func (m *SyncMap) Delete(key interface{}) { 46 | m.lock.Lock() 47 | defer m.lock.Unlock() 48 | delete(m.internalMap, key) 49 | } 50 | 51 | func (m *SyncMap) Range(f func(key, value interface{}) bool) { 52 | m.lock.RLock() 53 | defer m.lock.RUnlock() 54 | for k, v := range m.internalMap { 55 | if !f(k, v) { 56 | break 57 | } 58 | } 59 | } 60 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_common.go: -------------------------------------------------------------------------------- 1 | package chandy_lamport 2 | 3 | import ( 4 | "fmt" 5 | "io/ioutil" 6 | "log" 7 | "path" 8 | "reflect" 9 | "regexp" 10 | "sort" 11 | "strconv" 12 | "strings" 13 | ) 14 | 15 | // ================================== 16 | // Helper methods used in test code 17 | // ================================== 18 |
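Editor's aside: the `LoadOrStore` semantics above are easy to misread — when the key is already present it returns the *existing* value with `true`, and only stores (returning the new value with `false`) when the key is absent, matching `sync.Map`. A minimal standalone sketch of that behavior (the relevant `SyncMap` subset is duplicated here so the snippet compiles on its own; the `"snapshot-0"` key is purely illustrative, not part of the assignment):

```go
package main

import (
	"fmt"
	"sync"
)

// Minimal copy of the SyncMap subset above, so this sketch runs standalone.
type SyncMap struct {
	internalMap map[interface{}]interface{}
	lock        sync.RWMutex
}

func NewSyncMap() *SyncMap {
	return &SyncMap{internalMap: make(map[interface{}]interface{})}
}

// LoadOrStore returns the existing value (and true) if the key is present;
// otherwise it stores the given value and returns it (and false).
func (m *SyncMap) LoadOrStore(key, value interface{}) (interface{}, bool) {
	m.lock.Lock()
	defer m.lock.Unlock()
	if existing, ok := m.internalMap[key]; ok {
		return existing, true
	}
	m.internalMap[key] = value
	return value, false
}

func main() {
	m := NewSyncMap()
	v, loaded := m.LoadOrStore("snapshot-0", 100) // key absent: stores 100
	fmt.Println(v, loaded)                        // 100 false
	v, loaded = m.LoadOrStore("snapshot-0", 999)  // key present: keeps 100
	fmt.Println(v, loaded)                        // 100 true
}
```

This is why callers can use `LoadOrStore` to initialize per-snapshot state exactly once even when several goroutines race on the same key.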
19 | // Directory containing all the test files 20 | const testDir = "test_data" 21 | 22 | // Read the topology from a ".top" file. 23 | // The expected format of the file is as follows: 24 | // - The first line contains the number of servers N (e.g. "2") 25 | // - The next N lines each contain the server ID and the number of tokens on 26 | // that server, in the form "[serverId] [numTokens]" (e.g. "N1 1") 27 | // - The rest of the lines represent unidirectional links in the form "[src] [dest]" 28 | // (e.g. "N1 N2") 29 | func readTopology(fileName string, sim *Simulator) { 30 | b, err := ioutil.ReadFile(path.Join(testDir, fileName)) 31 | checkError(err) 32 | lines := strings.FieldsFunc(string(b), func(r rune) bool { return r == '\n' }) 33 | 34 | // Must call this before we start logging 35 | sim.logger.NewEpoch() 36 | 37 | // Parse topology from lines 38 | numServersLeft := -1 39 | for _, line := range lines { 40 | // Ignore comments 41 | if strings.HasPrefix(line, "#") { 42 | continue 43 | } 44 | if numServersLeft < 0 { 45 | numServersLeft, err = strconv.Atoi(line) 46 | checkError(err) 47 | continue 48 | } 49 | // Otherwise, always expect 2 whitespace-separated fields 50 | parts := strings.Fields(line) 51 | if len(parts) != 2 { 52 | log.Fatal("Expected 2 fields in line: ", line) 53 | } 54 | if numServersLeft > 0 { 55 | // This is a server 56 | serverId := parts[0] 57 | numTokens, err := strconv.Atoi(parts[1]) 58 | checkError(err) 59 | sim.AddServer(serverId, numTokens) 60 | numServersLeft-- 61 | } else { 62 | // This is a link 63 | src := parts[0] 64 | dest := parts[1] 65 | sim.AddForwardLink(src, dest) 66 | } 67 | } 68 | } 69 | 70 | // Read the events from a ".events" file and inject the events into the simulator. 
71 | // The expected format of the file is as follows: 72 | // - "tick N" indicates N time steps have elapsed (default N = 1) 73 | // - "send N1 N2 1" indicates that N1 sends 1 token to N2 74 | // - "snapshot N2" indicates the beginning of the snapshot process, starting on N2 75 | // Note that concurrent events are indicated by the lack of ticks between the events. 76 | // This function waits until all the snapshot processes have terminated before returning 77 | // the snapshots collected. 78 | func injectEvents(fileName string, sim *Simulator) []*SnapshotState { 79 | b, err := ioutil.ReadFile(path.Join(testDir, fileName)) 80 | checkError(err) 81 | 82 | snapshots := make([]*SnapshotState, 0) 83 | getSnapshots := make(chan *SnapshotState, 100) 84 | numSnapshots := 0 85 | 86 | lines := strings.FieldsFunc(string(b), func(r rune) bool { return r == '\n' }) 87 | for _, line := range lines { 88 | // Ignore comments 89 | if strings.HasPrefix(line, "#") { 90 | continue 91 | } 92 | parts := strings.Fields(line) 93 | switch parts[0] { 94 | case "send": 95 | src := parts[1] 96 | dest := parts[2] 97 | tokens, err := strconv.Atoi(parts[3]) 98 | checkError(err) 99 | sim.InjectEvent(PassTokenEvent{src, dest, tokens}) 100 | case "snapshot": 101 | numSnapshots++ 102 | serverId := parts[1] 103 | snapshotId := sim.nextSnapshotId 104 | sim.InjectEvent(SnapshotEvent{serverId}) 105 | go func(id int) { 106 | getSnapshots <- sim.CollectSnapshot(id) 107 | }(snapshotId) 108 | case "tick": 109 | numTicks := 1 110 | if len(parts) > 1 { 111 | numTicks, err = strconv.Atoi(parts[1]) 112 | checkError(err) 113 | } 114 | for i := 0; i < numTicks; i++ { 115 | sim.Tick() 116 | } 117 | default: 118 | log.Fatal("Unknown event command: ", parts[0]) 119 | } 120 | } 121 | 122 | // Keep ticking until snapshots complete 123 | for numSnapshots > 0 { 124 | select { 125 | case snap := <-getSnapshots: 126 | snapshots = append(snapshots, snap) 127 | numSnapshots-- 128 | default: 129 | sim.Tick() 130 | } 131 | }
132 | 133 | // Keep ticking until we're sure that the last message has been delivered 134 | for i := 0; i < maxDelay + 1; i++ { 135 | sim.Tick() 136 | } 137 | 138 | return snapshots 139 | } 140 | 141 | // Read the state of snapshot from a ".snap" file. 142 | // The expected format of the file is as follows: 143 | // - The first line contains the snapshot ID (e.g. "0") 144 | // - The next N lines contains the server ID and the number of tokens on that server, 145 | // in the form "[serverId] [numTokens]" (e.g. "N1 0"), one line per server 146 | // - The rest of the lines represent messages exchanged between the servers, 147 | // in the form "[src] [dest] [message]" (e.g. "N1 N2 token(1)") 148 | func readSnapshot(fileName string) *SnapshotState { 149 | b, err := ioutil.ReadFile(path.Join(testDir, fileName)) 150 | checkError(err) 151 | snapshot := SnapshotState{0, make(map[string]int), make([]*SnapshotMessage, 0)} 152 | lines := strings.FieldsFunc(string(b), func(r rune) bool { return r == '\n' }) 153 | for _, line := range lines { 154 | // Ignore comments 155 | if strings.HasPrefix(line, "#") { 156 | continue 157 | } 158 | parts := strings.Fields(line) 159 | if len(parts) == 1 { 160 | // Snapshot ID 161 | snapshot.id, err = strconv.Atoi(line) 162 | checkError(err) 163 | } else if len(parts) == 2 { 164 | // Server and its tokens 165 | serverId := parts[0] 166 | numTokens, err := strconv.Atoi(parts[1]) 167 | checkError(err) 168 | snapshot.tokens[serverId] = numTokens 169 | } else if len(parts) == 3 { 170 | // Src, dest and message 171 | src := parts[0] 172 | dest := parts[1] 173 | messageString := parts[2] 174 | var message interface{} 175 | if strings.Contains(messageString, "token") { 176 | pattern := regexp.MustCompile(`[0-9]+`) 177 | matches := pattern.FindStringSubmatch(messageString) 178 | if len(matches) != 1 { 179 | log.Fatal("Unable to parse token message: ", messageString) 180 | } 181 | numTokens, err := strconv.Atoi(matches[0]) 182 | checkError(err) 183 | 
message = TokenMessage{numTokens} 184 | } else { 185 | log.Fatal("Unknown message: ", messageString) 186 | } 187 | snapshot.messages = 188 | append(snapshot.messages, &SnapshotMessage{src, dest, message}) 189 | } 190 | } 191 | return &snapshot 192 | } 193 | 194 | // Helper function to pretty print the tokens in the given snapshot state 195 | func tokensString(tokens map[string]int, prefix string) string { 196 | str := make([]string, 0) 197 | for _, serverId := range getSortedKeys(tokens) { 198 | numTokens := tokens[serverId] 199 | maybeS := "s" 200 | if numTokens == 1 { 201 | maybeS = "" 202 | } 203 | str = append(str, fmt.Sprintf( 204 | "%v%v: %v token%v", prefix, serverId, numTokens, maybeS)) 205 | } 206 | return strings.Join(str, "\n") 207 | } 208 | 209 | // Helper function to pretty print the messages in the given snapshot state 210 | func messagesString(messages []*SnapshotMessage, prefix string) string { 211 | str := make([]string, 0) 212 | for _, msg := range messages { 213 | str = append(str, fmt.Sprintf( 214 | "%v%v -> %v: %v", prefix, msg.src, msg.dest, msg.message)) 215 | } 216 | return strings.Join(str, "\n") 217 | } 218 | 219 | // Assert that the two snapshot states are equal. 220 | // If they are not equal, throw an error with a helpful message. 
221 | func assertEqual(expected, actual *SnapshotState) { 222 | if expected.id != actual.id { 223 | log.Fatalf("Snapshot IDs do not match: %v != %v\n", expected.id, actual.id) 224 | } 225 | if len(expected.tokens) != len(actual.tokens) { 226 | log.Fatalf( 227 | "Snapshot %v: Number of tokens do not match."+ 228 | "\nExpected:\n%v\nActual:\n%v\n", 229 | expected.id, 230 | tokensString(expected.tokens, "\t"), 231 | tokensString(actual.tokens, "\t")) 232 | } 233 | if len(expected.messages) != len(actual.messages) { 234 | log.Fatalf( 235 | "Snapshot %v: Number of messages do not match."+ 236 | "\nExpected:\n%v\nActual:\n%v\n", 237 | expected.id, 238 | messagesString(expected.messages, "\t"), 239 | messagesString(actual.messages, "\t")) 240 | } 241 | for id, tok := range expected.tokens { 242 | if actual.tokens[id] != tok { 243 | log.Fatalf( 244 | "Snapshot %v: Tokens on %v do not match."+ 245 | "\nExpected:\n%v\nActual:\n%v\n", 246 | expected.id, 247 | id, 248 | tokensString(expected.tokens, "\t"), 249 | tokensString(actual.tokens, "\t")) 250 | } 251 | } 252 | // Ensure message order is preserved per destination 253 | // Note that we don't require ordering of messages across all servers to match 254 | expectedMessages := make(map[string][]*SnapshotMessage) 255 | actualMessages := make(map[string][]*SnapshotMessage) 256 | for i := 0; i < len(expected.messages); i++ { 257 | em := expected.messages[i] 258 | am := actual.messages[i] 259 | _, ok1 := expectedMessages[em.dest] 260 | _, ok2 := actualMessages[am.dest] 261 | if !ok1 { 262 | expectedMessages[em.dest] = make([]*SnapshotMessage, 0) 263 | } 264 | if !ok2 { 265 | actualMessages[am.dest] = make([]*SnapshotMessage, 0) 266 | } 267 | expectedMessages[em.dest] = append(expectedMessages[em.dest], em) 268 | actualMessages[am.dest] = append(actualMessages[am.dest], am) 269 | } 270 | // Test message order per destination 271 | for dest := range expectedMessages { 272 | ems := expectedMessages[dest] 273 | ams := 
actualMessages[dest] 274 | if !reflect.DeepEqual(ems, ams) { 275 | log.Fatalf( 276 | "Snapshot %v: Messages received at %v do not match."+ 277 | "\nExpected:\n%v\nActual:\n%v\n", 278 | expected.id, 279 | dest, 280 | messagesString(ems, "\t"), 281 | messagesString(ams, "\t")) 282 | } 283 | } 284 | } 285 | 286 | // Helper function to sort the snapshot states by ID. 287 | func sortSnapshots(snaps []*SnapshotState) { 288 | sort.Slice(snaps, func(i, j int) bool { 289 | s1 := snaps[i] 290 | s2 := snaps[j] 291 | return s2.id > s1.id 292 | }) 293 | } 294 | 295 | // Verify that the total number of tokens recorded in the snapshot preserves 296 | // the number of tokens in the system 297 | func checkTokens(sim *Simulator, snapshots []*SnapshotState) { 298 | expectedTokens := 0 299 | for _, server := range sim.servers { 300 | expectedTokens += server.Tokens 301 | } 302 | for _, snap := range snapshots { 303 | snapTokens := 0 304 | // Add tokens recorded on servers 305 | for _, tok := range snap.tokens { 306 | snapTokens += tok 307 | } 308 | // Add tokens from messages in-flight 309 | for _, message := range snap.messages { 310 | switch msg := message.message.(type) { 311 | case TokenMessage: 312 | snapTokens += msg.numTokens 313 | } 314 | } 315 | if expectedTokens != snapTokens { 316 | log.Fatalf("Snapshot %v: simulator has %v tokens, snapshot has %v:\n%v\n%v", 317 | snap.id, 318 | expectedTokens, 319 | snapTokens, 320 | tokensString(snap.tokens, "\t"), 321 | messagesString(snap.messages, "\t")) 322 | } 323 | } 324 | } 325 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/10nodes.events: -------------------------------------------------------------------------------- 1 | send N1 N2 10 2 | send N2 N3 10 3 | send N3 N4 10 4 | send N4 N5 10 5 | send N5 N6 10 6 | send N6 N7 10 7 | send N7 N8 10 8 | send N8 N9 10 9 | send N9 N10 10 10 | send N10 N1 10 11 | snapshot N1 12 | tick 13 | send N1 N2 10 14 | send 
N2 N3 10 15 | send N3 N4 10 16 | send N4 N5 10 17 | send N5 N6 10 18 | send N6 N7 10 19 | send N7 N8 10 20 | send N8 N9 10 21 | send N9 N10 10 22 | send N10 N1 10 23 | snapshot N2 24 | tick 25 | send N1 N2 10 26 | send N2 N3 10 27 | send N3 N4 10 28 | send N4 N5 10 29 | send N5 N6 10 30 | send N6 N7 10 31 | send N7 N8 10 32 | send N8 N9 10 33 | send N9 N10 10 34 | send N10 N1 10 35 | snapshot N3 36 | tick 37 | send N1 N2 10 38 | send N2 N3 10 39 | send N3 N4 10 40 | send N4 N5 10 41 | send N5 N6 10 42 | send N6 N7 10 43 | send N7 N8 10 44 | send N8 N9 10 45 | send N9 N10 10 46 | send N10 N1 10 47 | snapshot N4 48 | tick 49 | send N1 N2 10 50 | send N2 N3 10 51 | send N3 N4 10 52 | send N4 N5 10 53 | send N5 N6 10 54 | send N6 N7 10 55 | send N7 N8 10 56 | send N8 N9 10 57 | send N9 N10 10 58 | send N10 N1 10 59 | snapshot N5 60 | tick 61 | send N1 N2 10 62 | send N2 N3 10 63 | send N3 N4 10 64 | send N4 N5 10 65 | send N5 N6 10 66 | send N6 N7 10 67 | send N7 N8 10 68 | send N8 N9 10 69 | send N9 N10 10 70 | send N10 N1 10 71 | snapshot N6 72 | tick 73 | send N1 N2 10 74 | send N2 N3 10 75 | send N3 N4 10 76 | send N4 N5 10 77 | send N5 N6 10 78 | send N6 N7 10 79 | send N7 N8 10 80 | send N8 N9 10 81 | send N9 N10 10 82 | send N10 N1 10 83 | snapshot N7 84 | tick 85 | send N1 N2 10 86 | send N2 N3 10 87 | send N3 N4 10 88 | send N4 N5 10 89 | send N5 N6 10 90 | send N6 N7 10 91 | send N7 N8 10 92 | send N8 N9 10 93 | send N9 N10 10 94 | send N10 N1 10 95 | snapshot N8 96 | tick 97 | send N1 N2 10 98 | send N2 N3 10 99 | send N3 N4 10 100 | send N4 N5 10 101 | send N5 N6 10 102 | send N6 N7 10 103 | send N7 N8 10 104 | send N8 N9 10 105 | send N9 N10 10 106 | send N10 N1 10 107 | snapshot N9 108 | tick 109 | send N1 N2 10 110 | send N2 N3 10 111 | send N3 N4 10 112 | send N4 N5 10 113 | send N5 N6 10 114 | send N6 N7 10 115 | send N7 N8 10 116 | send N8 N9 10 117 | send N9 N10 10 118 | send N10 N1 10 119 | snapshot N10 120 | tick 121 | 
-------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/10nodes.top: -------------------------------------------------------------------------------- 1 | 10 2 | N1 100 3 | N2 100 4 | N3 100 5 | N4 100 6 | N5 100 7 | N6 100 8 | N7 100 9 | N8 100 10 | N9 100 11 | N10 100 12 | N1 N2 13 | N2 N3 14 | N3 N4 15 | N4 N5 16 | N5 N6 17 | N6 N7 18 | N7 N8 19 | N8 N9 20 | N9 N10 21 | N10 N1 22 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/10nodes0.snap: -------------------------------------------------------------------------------- 1 | 0 2 | N1 90 3 | N10 100 4 | N2 60 5 | N3 60 6 | N4 90 7 | N5 100 8 | N6 100 9 | N7 100 10 | N8 100 11 | N9 100 12 | N10 N1 token(10) 13 | N10 N1 token(10) 14 | N10 N1 token(10) 15 | N10 N1 token(10) 16 | N10 N1 token(10) 17 | N10 N1 token(10) 18 | N10 N1 token(10) 19 | N10 N1 token(10) 20 | N10 N1 token(10) 21 | N10 N1 token(10) 22 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/10nodes1.snap: -------------------------------------------------------------------------------- 1 | 1 2 | N1 100 3 | N10 100 4 | N2 80 5 | N3 70 6 | N4 50 7 | N5 100 8 | N6 100 9 | N7 100 10 | N8 100 11 | N9 100 12 | N1 N2 token(10) 13 | N1 N2 token(10) 14 | N1 N2 token(10) 15 | N1 N2 token(10) 16 | N1 N2 token(10) 17 | N1 N2 token(10) 18 | N1 N2 token(10) 19 | N1 N2 token(10) 20 | N1 N2 token(10) 21 | N1 N2 token(10) 22 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/10nodes2.snap: -------------------------------------------------------------------------------- 1 | 2 2 | N1 100 3 | N10 100 4 | N2 100 5 | N3 70 6 | N4 60 7 | N5 70 8 | N6 100 9 | N7 100 10 | N8 100 11 | N9 100 12 | N2 N3 token(10) 13 | N2 N3 token(10) 14 | N2 N3 token(10) 15 | N2 N3 token(10) 16 | N2 N3 
token(10) 17 | N2 N3 token(10) 18 | N2 N3 token(10) 19 | N2 N3 token(10) 20 | N2 N3 token(10) 21 | N2 N3 token(10) 22 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/10nodes3.snap: -------------------------------------------------------------------------------- 1 | 3 2 | N1 100 3 | N10 100 4 | N2 100 5 | N3 100 6 | N4 60 7 | N5 60 8 | N6 80 9 | N7 100 10 | N8 100 11 | N9 100 12 | N3 N4 token(10) 13 | N3 N4 token(10) 14 | N3 N4 token(10) 15 | N3 N4 token(10) 16 | N3 N4 token(10) 17 | N3 N4 token(10) 18 | N3 N4 token(10) 19 | N3 N4 token(10) 20 | N3 N4 token(10) 21 | N3 N4 token(10) 22 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/10nodes4.snap: -------------------------------------------------------------------------------- 1 | 4 2 | N1 100 3 | N10 100 4 | N2 100 5 | N3 100 6 | N4 100 7 | N5 70 8 | N6 50 9 | N7 100 10 | N8 100 11 | N9 100 12 | N4 N5 token(10) 13 | N4 N5 token(10) 14 | N4 N5 token(10) 15 | N4 N5 token(10) 16 | N4 N5 token(10) 17 | N4 N5 token(10) 18 | N4 N5 token(10) 19 | N4 N5 token(10) 20 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/10nodes5.snap: -------------------------------------------------------------------------------- 1 | 5 2 | N1 100 3 | N10 100 4 | N2 100 5 | N3 100 6 | N4 100 7 | N5 100 8 | N6 60 9 | N7 60 10 | N8 100 11 | N9 100 12 | N5 N6 token(10) 13 | N5 N6 token(10) 14 | N5 N6 token(10) 15 | N5 N6 token(10) 16 | N5 N6 token(10) 17 | N5 N6 token(10) 18 | N5 N6 token(10) 19 | N5 N6 token(10) 20 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/10nodes6.snap: -------------------------------------------------------------------------------- 1 | 6 2 | N1 100 3 | N10 100 4 | N2 100 5 | N3 100 6 | N4 100 7 | N5 100 8 | N6 100 9 | N7 
50 10 | N8 70 11 | N9 100 12 | N6 N7 token(10) 13 | N6 N7 token(10) 14 | N6 N7 token(10) 15 | N6 N7 token(10) 16 | N6 N7 token(10) 17 | N6 N7 token(10) 18 | N6 N7 token(10) 19 | N6 N7 token(10) 20 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/10nodes7.snap: -------------------------------------------------------------------------------- 1 | 7 2 | N1 100 3 | N10 100 4 | N2 100 5 | N3 100 6 | N4 100 7 | N5 100 8 | N6 100 9 | N7 100 10 | N8 70 11 | N9 80 12 | N7 N8 token(10) 13 | N7 N8 token(10) 14 | N7 N8 token(10) 15 | N7 N8 token(10) 16 | N7 N8 token(10) 17 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/10nodes8.snap: -------------------------------------------------------------------------------- 1 | 8 2 | N1 100 3 | N10 90 4 | N2 100 5 | N3 100 6 | N4 100 7 | N5 100 8 | N6 100 9 | N7 100 10 | N8 100 11 | N9 50 12 | N8 N9 token(10) 13 | N8 N9 token(10) 14 | N8 N9 token(10) 15 | N8 N9 token(10) 16 | N8 N9 token(10) 17 | N8 N9 token(10) 18 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/10nodes9.snap: -------------------------------------------------------------------------------- 1 | 9 2 | N1 100 3 | N10 50 4 | N2 100 5 | N3 100 6 | N4 100 7 | N5 100 8 | N6 100 9 | N7 100 10 | N8 100 11 | N9 100 12 | N9 N10 token(10) 13 | N9 N10 token(10) 14 | N9 N10 token(10) 15 | N9 N10 token(10) 16 | N9 N10 token(10) 17 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/2nodes-message.events: -------------------------------------------------------------------------------- 1 | send N1 N2 1 2 | snapshot N2 3 | tick 4 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/2nodes-message.snap: 
-------------------------------------------------------------------------------- 1 | 0 2 | N1 0 3 | N2 0 4 | N1 N2 token(1) 5 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/2nodes-simple.events: -------------------------------------------------------------------------------- 1 | snapshot N2 2 | tick 3 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/2nodes-simple.snap: -------------------------------------------------------------------------------- 1 | 0 2 | N1 1 3 | N2 0 4 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/2nodes.top: -------------------------------------------------------------------------------- 1 | 2 2 | N1 1 3 | N2 0 4 | N1 N2 5 | N2 N1 6 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/3nodes-bidirectional-messages.events: -------------------------------------------------------------------------------- 1 | send N1 N2 3 2 | send N2 N3 2 3 | snapshot N2 4 | tick 5 | send N1 N2 2 6 | tick 7 | send N1 N2 1 8 | tick 9 | send N2 N1 1 10 | tick 3 11 | send N3 N2 1 12 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/3nodes-bidirectional-messages.snap: -------------------------------------------------------------------------------- 1 | 0 2 | N1 4 3 | N2 1 4 | N3 2 5 | N1 N2 token(3) 6 | N1 N2 token(2) 7 | N1 N2 token(1) 8 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/3nodes-simple.events: -------------------------------------------------------------------------------- 1 | send N1 N2 3 2 | send N2 N3 2 3 | snapshot N2 4 | tick 5 | send N1 N2 2 6 | tick 4 7 | send N2 N3 1 8 | 
-------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/3nodes-simple.snap: -------------------------------------------------------------------------------- 1 | 0 2 | N1 5 3 | N2 1 4 | N3 2 5 | N1 N2 token(3) 6 | N1 N2 token(2) 7 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/3nodes.top: -------------------------------------------------------------------------------- 1 | 3 2 | N1 10 3 | N2 3 4 | N3 0 5 | N1 N2 6 | N2 N1 7 | N1 N3 8 | N3 N1 9 | N2 N3 10 | N3 N2 11 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/8nodes-concurrent-snapshots.events: -------------------------------------------------------------------------------- 1 | send N1 N2 1 2 | tick 3 | send N2 N3 2 4 | snapshot N3 5 | tick 6 | send N3 N4 3 7 | snapshot N1 8 | tick 9 | send N4 N5 4 10 | snapshot N8 11 | tick 10 12 | send N5 N6 2 13 | send N5 N8 1 14 | snapshot N6 15 | snapshot N2 16 | tick 10 17 | send N6 N7 1 18 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/8nodes-concurrent-snapshots0.snap: -------------------------------------------------------------------------------- 1 | 0 2 | N1 9 3 | N2 9 4 | N3 10 5 | N4 6 6 | N5 4 7 | N6 0 8 | N7 0 9 | N8 0 10 | N2 N3 token(2) 11 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/8nodes-concurrent-snapshots1.snap: -------------------------------------------------------------------------------- 1 | 1 2 | N1 9 3 | N2 9 4 | N3 9 5 | N4 6 6 | N5 4 7 | N6 0 8 | N7 0 9 | N8 0 10 | N3 N4 token(3) 11 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/8nodes-concurrent-snapshots2.snap: 
-------------------------------------------------------------------------------- 1 | 2 2 | N1 9 3 | N2 9 4 | N3 9 5 | N4 9 6 | N5 0 7 | N6 0 8 | N7 0 9 | N8 0 10 | N4 N5 token(4) 11 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/8nodes-concurrent-snapshots3.snap: -------------------------------------------------------------------------------- 1 | 3 2 | N1 9 3 | N2 9 4 | N3 9 5 | N4 9 6 | N5 1 7 | N6 0 8 | N7 0 9 | N8 1 10 | N5 N6 token(2) 11 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/8nodes-concurrent-snapshots4.snap: -------------------------------------------------------------------------------- 1 | 4 2 | N1 9 3 | N2 9 4 | N3 9 5 | N4 9 6 | N5 1 7 | N6 1 8 | N7 1 9 | N8 1 10 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/8nodes-sequential-snapshots.events: -------------------------------------------------------------------------------- 1 | send N1 N2 1 2 | tick 10 3 | send N2 N3 2 4 | snapshot N3 5 | tick 10 6 | send N3 N4 3 7 | tick 10 8 | send N4 N5 4 9 | tick 10 10 | send N5 N6 2 11 | snapshot N6 12 | tick 10 13 | send N6 N7 1 14 | tick 10 15 | send N5 N8 1 16 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/8nodes-sequential-snapshots0.snap: -------------------------------------------------------------------------------- 1 | 0 2 | N1 9 3 | N2 9 4 | N3 10 5 | N4 10 6 | N5 0 7 | N6 0 8 | N7 0 9 | N8 0 10 | N2 N3 token(2) 11 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/8nodes-sequential-snapshots1.snap: -------------------------------------------------------------------------------- 1 | 1 2 | N1 9 3 | N2 9 4 | N3 9 5 | N4 9 6 | N5 2 7 | N6 0 8 | N7 0 9 | N8 0 10 | N5 N6 
token(2) 11 | -------------------------------------------------------------------------------- /assignment2/src/chandy-lamport/test_data/8nodes.top: -------------------------------------------------------------------------------- 1 | 8 2 | N1 10 3 | N2 10 4 | N3 10 5 | N4 10 6 | N5 0 7 | N6 0 8 | N7 0 9 | N8 0 10 | # N1 - N2 11 | # | | 12 | # N4 - N3 13 | # | 14 | # N5 - N6 15 | # | | 16 | # N8 - N7 17 | N1 N2 18 | N2 N1 19 | N2 N3 20 | N3 N2 21 | N3 N4 22 | N4 N3 23 | N4 N1 24 | N1 N4 25 | N4 N5 26 | N5 N4 27 | N5 N6 28 | N6 N5 29 | N6 N7 30 | N7 N6 31 | N7 N8 32 | N8 N7 33 | N8 N5 34 | N5 N8 35 | -------------------------------------------------------------------------------- /assignment3/README.md: -------------------------------------------------------------------------------- 1 | # COS418 Assignment 3: Raft Leader Election 2 | 3 |
6 | This is the first in a series of assignments in which you'll build a 7 | fault-tolerant key/value storage system. You'll start in this 8 | assignment by implementing the leader election features of Raft, 9 | a replicated state machine protocol. In Assignment 4 you will complete 10 | Raft's log consensus agreement features. You will implement Raft as a 11 | Go object with associated methods, available to be used as a module in 12 | a larger service. Once you have completed Raft, the course assignments 13 | will conclude with such a service: a key/value service built on top of Raft. 14 |
15 | 16 |18 | The Raft protocol is used to manage replica servers for services 19 | that must continue operation in the face of failure (e.g. 20 | server crashes, broken or flaky networks). The challenge is that, 21 | in the face of these failures, the replicas won't always hold identical data. 22 | The Raft protocol helps sort out what the correct data is. 23 |
24 | 25 |26 | Raft's basic approach for this is to implement a replicated state 27 | machine. Raft organizes client requests into a sequence, called 28 | the log, and ensures that all the replicas agree on the 29 | contents of the log. Each replica executes the client requests 30 | in the log in the order they appear in the log, applying those 31 | requests to the service's state. Since all the live replicas 32 | see the same log contents, they all execute the same requests 33 | in the same order, and thus continue to have identical service 34 | state. If a server fails but later recovers, Raft takes care of 35 | bringing its log up to date. Raft will continue to operate as 36 | long as at least a majority of the servers are alive and can 37 | talk to each other. If there is no such majority, Raft will 38 | make no progress, but will pick up where it left off as soon as 39 | a majority is alive again. 40 |
41 | 42 |43 | You should consult the 44 | extended Raft paper 45 | and the Raft lecture notes. You may also find this 46 | illustrated Raft guide 47 | useful to get a sense of the high-level workings of Raft. For a 48 | wider perspective, have a look at Paxos, Chubby, Paxos Made 49 | Live, Spanner, Zookeeper, Harp, Viewstamped Replication, and 50 | Bolosky et al. 51 |
52 | 53 |55 | For this assignment, we will focus primarily on the code and tests for the Raft implementation in 56 | src/raft and the simple RPC-like system in src/labrpc. It is worth your while to 57 | read and digest the code in these packages. 58 |
59 | 60 |61 | Before you have implemented anything, your raft tests will fail, but this behavior is a sign that you 62 | have everything properly configured and are ready to begin: 63 |
64 | # Go needs $GOPATH to be set to the directory containing "src" 65 | $ cd 418/assignment3 66 | $ export GOPATH="$PWD" 67 | $ cd "$GOPATH/src/raft" 68 | $ go test -run Election 69 | Test: initial election ... 70 | --- FAIL: TestInitialElection (5.00s) 71 | config.go:286: expected one leader, got none 72 | Test: election after network failure ... 73 | --- FAIL: TestReElection (5.00s) 74 | config.go:286: expected one leader, got none 75 | FAIL 76 | exit status 1 77 | 78 | 79 |
80 | You should implement Raft by adding code to 81 | raft/raft.go (only). In that file you'll find a bit of 82 | skeleton code, plus some examples of how to send and receive 83 | RPCs, and examples of how to save and restore persistent state. 84 |
85 | 86 | 87 |90 | You should start by reading the code to determine which 91 | functions are responsible for conducting Raft leader election, if 92 | you haven't already. 93 |
94 | 95 |96 | The natural first task is to fill in the RequestVoteArgs and 97 | RequestVoteReply structs, and modify 98 | Make() to create a background goroutine that 99 | starts an election (by sending out RequestVote 100 | RPCs) when it hasn't heard from another peer for a 101 | while. For election to work, you will also need to 102 | implement the RequestVote() RPC handler so 103 | that servers will vote for one another. 104 |
105 | 106 |107 | To implement heartbeats, you will need to define an 108 | AppendEntries RPC struct (though you will not need 109 | any real payload yet), and have the leader send 110 | them out periodically. You will also have to write an 111 | AppendEntries RPC handler method that resets 112 | the election timeout so that other servers don't step 113 | forward as leaders when one has already been elected. 114 |
115 | 116 |117 | Make sure the timers in different Raft peers are not 118 | synchronized. In particular, make sure the election 119 | timeouts don't always fire at the same time, or else 120 | all peers will vote for themselves and no one will 121 | become leader. 122 |
123 | 124 |125 | Your Raft implementation must support the following interface, which 126 | the tester and (eventually) your key/value server will use. 127 | You'll find more details in comments in raft.go. 128 | 129 |
130 | // create a new Raft server instance: 131 | rf := Make(peers, me, persister, applyCh) 132 | 133 | // start agreement on a new log entry: 134 | rf.Start(command interface{}) (index, term, isleader) 135 | 136 | // ask a Raft for its current term, and whether it thinks it is leader 137 | rf.GetState() (term, isLeader) 138 | 139 | // each time a new entry is committed to the log, each Raft peer 140 | // should send an ApplyMsg to the service (or tester). 141 | type ApplyMsg142 | 143 |
144 | A service calls Make(peers,me,…) to create a 145 | Raft peer. The peers argument is an array of established RPC 146 | connections, one to each Raft peer (including this one). The 147 | me argument is the index of this peer in the peers 148 | array. Start(command) asks Raft to start the processing 149 | to append the command to the replicated log. Start() 150 | should return immediately, without waiting for this process 151 | to complete. The service expects your implementation to send an 152 | ApplyMsg for each new committed log entry to the 153 | applyCh argument to Make(). 154 | 155 |
156 | Your Raft peers should exchange RPCs using the labrpc Go 157 | package that we provide to you. It is modeled after Go's 158 | rpc library, but 159 | internally uses Go channels rather than sockets. 160 | raft.go contains some example code that sends an RPC 161 | (sendRequestVote()) and that handles an incoming RPC 162 | (RequestVote()). 163 |
164 | 165 |166 | Implementing leader election and heartbeats (empty 167 | AppendEntries calls) should be sufficient for a 168 | single leader to be elected and -- in the absence of failures -- stay the leader, 169 | as well as redetermine leadership after failures. 170 | Once you have this working, you should be 171 | able to pass the two Election "go test" tests: 172 |
173 | $ go test -run Election 174 | Test: initial election ... 175 | ... Passed 176 | Test: election after network failure ... 177 | ... Passed 178 | PASS 179 | ok raft 7.008s 180 | 181 | 182 |
227 | Before submitting, please run the full tests given above for both parts one final time. 228 | You will receive full credit for the leader election component if your software passes 229 | the Election tests (as run by the go test commands above) on the CS servers. 230 |
231 | 232 |233 | The final portion of your credit is determined by code quality tests, using the standard tools gofmt and go vet. 234 | You will receive full credit for this portion if all files submitted conform to the style standards set by gofmt and the report from go vet is clean for your raft package (that is, produces no errors). 235 | If your code does not pass the gofmt test, you should reformat your code using the tool. You can also use the Go Checkstyle tool for advice to improve your code's style, if applicable. Additionally, though not part of the graded checks, it would also be advisable to produce code that complies with Golint where possible. 236 |
237 | 238 |This assignment is adapted from MIT's 6.824 course. Thanks to Frans Kaashoek, Robert Morris, and Nickolai Zeldovich for their support.
240 | -------------------------------------------------------------------------------- /assignment3/src/labrpc/test_test.go: -------------------------------------------------------------------------------- 1 | package labrpc 2 | 3 | import "testing" 4 | import "strconv" 5 | import "sync" 6 | import "runtime" 7 | import "time" 8 | import "fmt" 9 | 10 | type JunkArgs struct { 11 | X int 12 | } 13 | type JunkReply struct { 14 | X string 15 | } 16 | 17 | type JunkServer struct { 18 | mu sync.Mutex 19 | log1 []string 20 | log2 []int 21 | } 22 | 23 | func (js *JunkServer) Handler1(args string, reply *int) { 24 | js.mu.Lock() 25 | defer js.mu.Unlock() 26 | js.log1 = append(js.log1, args) 27 | *reply, _ = strconv.Atoi(args) 28 | } 29 | 30 | func (js *JunkServer) Handler2(args int, reply *string) { 31 | js.mu.Lock() 32 | defer js.mu.Unlock() 33 | js.log2 = append(js.log2, args) 34 | *reply = "handler2-" + strconv.Itoa(args) 35 | } 36 | 37 | func (js *JunkServer) Handler3(args int, reply *int) { 38 | js.mu.Lock() 39 | defer js.mu.Unlock() 40 | time.Sleep(20 * time.Second) 41 | *reply = -args 42 | } 43 | 44 | // args is a pointer 45 | func (js *JunkServer) Handler4(args *JunkArgs, reply *JunkReply) { 46 | reply.X = "pointer" 47 | } 48 | 49 | // args is a not pointer 50 | func (js *JunkServer) Handler5(args JunkArgs, reply *JunkReply) { 51 | reply.X = "no pointer" 52 | } 53 | 54 | func TestBasic(t *testing.T) { 55 | runtime.GOMAXPROCS(4) 56 | 57 | rn := MakeNetwork() 58 | 59 | e := rn.MakeEnd("end1-99") 60 | 61 | js := &JunkServer{} 62 | svc := MakeService(js) 63 | 64 | rs := MakeServer() 65 | rs.AddService(svc) 66 | rn.AddServer("server99", rs) 67 | 68 | rn.Connect("end1-99", "server99") 69 | rn.Enable("end1-99", true) 70 | 71 | { 72 | reply := "" 73 | e.Call("JunkServer.Handler2", 111, &reply) 74 | if reply != "handler2-111" { 75 | t.Fatal("wrong reply from Handler2") 76 | } 77 | } 78 | 79 | { 80 | reply := 0 81 | e.Call("JunkServer.Handler1", "9099", &reply) 82 | if reply 
!= 9099 { 83 | t.Fatal("wrong reply from Handler1") 84 | } 85 | } 86 | } 87 | 88 | func TestTypes(t *testing.T) { 89 | runtime.GOMAXPROCS(4) 90 | 91 | rn := MakeNetwork() 92 | 93 | e := rn.MakeEnd("end1-99") 94 | 95 | js := &JunkServer{} 96 | svc := MakeService(js) 97 | 98 | rs := MakeServer() 99 | rs.AddService(svc) 100 | rn.AddServer("server99", rs) 101 | 102 | rn.Connect("end1-99", "server99") 103 | rn.Enable("end1-99", true) 104 | 105 | { 106 | var args JunkArgs 107 | var reply JunkReply 108 | // args must match type (pointer or not) of handler. 109 | e.Call("JunkServer.Handler4", &args, &reply) 110 | if reply.X != "pointer" { 111 | t.Fatal("wrong reply from Handler4") 112 | } 113 | } 114 | 115 | { 116 | var args JunkArgs 117 | var reply JunkReply 118 | // args must match type (pointer or not) of handler. 119 | e.Call("JunkServer.Handler5", args, &reply) 120 | if reply.X != "no pointer" { 121 | t.Fatal("wrong reply from Handler5") 122 | } 123 | } 124 | } 125 | 126 | // 127 | // does net.Enable(endname, false) really disconnect a client? 
128 | // 129 | func TestDisconnect(t *testing.T) { 130 | runtime.GOMAXPROCS(4) 131 | 132 | rn := MakeNetwork() 133 | 134 | e := rn.MakeEnd("end1-99") 135 | 136 | js := &JunkServer{} 137 | svc := MakeService(js) 138 | 139 | rs := MakeServer() 140 | rs.AddService(svc) 141 | rn.AddServer("server99", rs) 142 | 143 | rn.Connect("end1-99", "server99") 144 | 145 | { 146 | reply := "" 147 | e.Call("JunkServer.Handler2", 111, &reply) 148 | if reply != "" { 149 | t.Fatal("unexpected reply from Handler2") 150 | } 151 | } 152 | 153 | rn.Enable("end1-99", true) 154 | 155 | { 156 | reply := 0 157 | e.Call("JunkServer.Handler1", "9099", &reply) 158 | if reply != 9099 { 159 | t.Fatal("wrong reply from Handler1") 160 | } 161 | } 162 | } 163 | 164 | // 165 | // test net.GetCount() 166 | // 167 | func TestCounts(t *testing.T) { 168 | runtime.GOMAXPROCS(4) 169 | 170 | rn := MakeNetwork() 171 | 172 | e := rn.MakeEnd("end1-99") 173 | 174 | js := &JunkServer{} 175 | svc := MakeService(js) 176 | 177 | rs := MakeServer() 178 | rs.AddService(svc) 179 | rn.AddServer(99, rs) 180 | 181 | rn.Connect("end1-99", 99) 182 | rn.Enable("end1-99", true) 183 | 184 | for i := 0; i < 17; i++ { 185 | reply := "" 186 | e.Call("JunkServer.Handler2", i, &reply) 187 | wanted := "handler2-" + strconv.Itoa(i) 188 | if reply != wanted { 189 | t.Fatalf("wrong reply %v from Handler1, expecting %v\n", reply, wanted) 190 | } 191 | } 192 | 193 | n := rn.GetCount(99) 194 | if n != 17 { 195 | t.Fatalf("wrong GetCount() %v, expected 17\n", n) 196 | } 197 | } 198 | 199 | // 200 | // test RPCs from concurrent ClientEnds 201 | // 202 | func TestConcurrentMany(t *testing.T) { 203 | runtime.GOMAXPROCS(4) 204 | 205 | rn := MakeNetwork() 206 | 207 | js := &JunkServer{} 208 | svc := MakeService(js) 209 | 210 | rs := MakeServer() 211 | rs.AddService(svc) 212 | rn.AddServer(1000, rs) 213 | 214 | ch := make(chan int) 215 | 216 | nclients := 20 217 | nrpcs := 10 218 | for ii := 0; ii < nclients; ii++ { 219 | go func(i int) { 220 | 
n := 0 221 | defer func() { ch <- n }() 222 | 223 | e := rn.MakeEnd(i) 224 | rn.Connect(i, 1000) 225 | rn.Enable(i, true) 226 | 227 | for j := 0; j < nrpcs; j++ { 228 | arg := i*100 + j 229 | reply := "" 230 | e.Call("JunkServer.Handler2", arg, &reply) 231 | wanted := "handler2-" + strconv.Itoa(arg) 232 | if reply != wanted { 233 | t.Fatalf("wrong reply %v from Handler1, expecting %v\n", 234 | reply, wanted) 235 | } 236 | n += 1 237 | } 238 | }(ii) 239 | } 240 | 241 | total := 0 242 | for ii := 0; ii < nclients; ii++ { 243 | x := <-ch 244 | total += x 245 | } 246 | 247 | if total != nclients*nrpcs { 248 | t.Fatalf("wrong number of RPCs completed, got %v, expected %v\n", 249 | total, nclients*nrpcs) 250 | } 251 | 252 | n := rn.GetCount(1000) 253 | if n != total { 254 | t.Fatalf("wrong GetCount() %v, expected %v\n", n, total) 255 | } 256 | } 257 | 258 | // 259 | // test unreliable 260 | // 261 | func TestUnreliable(t *testing.T) { 262 | runtime.GOMAXPROCS(4) 263 | 264 | rn := MakeNetwork() 265 | rn.Reliable(false) 266 | 267 | js := &JunkServer{} 268 | svc := MakeService(js) 269 | 270 | rs := MakeServer() 271 | rs.AddService(svc) 272 | rn.AddServer(1000, rs) 273 | 274 | ch := make(chan int) 275 | 276 | nclients := 300 277 | for ii := 0; ii < nclients; ii++ { 278 | go func(i int) { 279 | n := 0 280 | defer func() { ch <- n }() 281 | 282 | e := rn.MakeEnd(i) 283 | rn.Connect(i, 1000) 284 | rn.Enable(i, true) 285 | 286 | arg := i * 100 287 | reply := "" 288 | ok := e.Call("JunkServer.Handler2", arg, &reply) 289 | if ok { 290 | wanted := "handler2-" + strconv.Itoa(arg) 291 | if reply != wanted { 292 | t.Fatalf("wrong reply %v from Handler1, expecting %v\n", 293 | reply, wanted) 294 | } 295 | n += 1 296 | } 297 | }(ii) 298 | } 299 | 300 | total := 0 301 | for ii := 0; ii < nclients; ii++ { 302 | x := <-ch 303 | total += x 304 | } 305 | 306 | if total == nclients || total == 0 { 307 | t.Fatal("all RPCs succeeded despite unreliable") 308 | } 309 | } 310 | 311 | // 312 | // 
test concurrent RPCs from a single ClientEnd 313 | // 314 | func TestConcurrentOne(t *testing.T) { 315 | runtime.GOMAXPROCS(4) 316 | 317 | rn := MakeNetwork() 318 | 319 | js := &JunkServer{} 320 | svc := MakeService(js) 321 | 322 | rs := MakeServer() 323 | rs.AddService(svc) 324 | rn.AddServer(1000, rs) 325 | 326 | e := rn.MakeEnd("c") 327 | rn.Connect("c", 1000) 328 | rn.Enable("c", true) 329 | 330 | ch := make(chan int) 331 | 332 | nrpcs := 20 333 | for ii := 0; ii < nrpcs; ii++ { 334 | go func(i int) { 335 | n := 0 336 | defer func() { ch <- n }() 337 | 338 | arg := 100 + i 339 | reply := "" 340 | e.Call("JunkServer.Handler2", arg, &reply) 341 | wanted := "handler2-" + strconv.Itoa(arg) 342 | if reply != wanted { 343 | t.Fatalf("wrong reply %v from Handler2, expecting %v\n", 344 | reply, wanted) 345 | } 346 | n += 1 347 | }(ii) 348 | } 349 | 350 | total := 0 351 | for ii := 0; ii < nrpcs; ii++ { 352 | x := <-ch 353 | total += x 354 | } 355 | 356 | if total != nrpcs { 357 | t.Fatalf("wrong number of RPCs completed, got %v, expected %v\n", 358 | total, nrpcs) 359 | } 360 | 361 | js.mu.Lock() 362 | defer js.mu.Unlock() 363 | if len(js.log2) != nrpcs { 364 | t.Fatal("wrong number of RPCs delivered") 365 | } 366 | 367 | n := rn.GetCount(1000) 368 | if n != total { 369 | t.Fatalf("wrong GetCount() %v, expected %v\n", n, total) 370 | } 371 | } 372 | 373 | // 374 | // regression: an RPC that's delayed during Enabled=false 375 | // should not delay subsequent RPCs (e.g. after Enabled=true). 376 | // 377 | func TestRegression1(t *testing.T) { 378 | runtime.GOMAXPROCS(4) 379 | 380 | rn := MakeNetwork() 381 | 382 | js := &JunkServer{} 383 | svc := MakeService(js) 384 | 385 | rs := MakeServer() 386 | rs.AddService(svc) 387 | rn.AddServer(1000, rs) 388 | 389 | e := rn.MakeEnd("c") 390 | rn.Connect("c", 1000) 391 | 392 | // start some RPCs while the ClientEnd is disabled. 393 | // they'll be delayed. 
394 | rn.Enable("c", false) 395 | ch := make(chan bool) 396 | nrpcs := 20 397 | for ii := 0; ii < nrpcs; ii++ { 398 | go func(i int) { 399 | ok := false 400 | defer func() { ch <- ok }() 401 | 402 | arg := 100 + i 403 | reply := "" 404 | // this call ought to return false. 405 | e.Call("JunkServer.Handler2", arg, &reply) 406 | ok = true 407 | }(ii) 408 | } 409 | 410 | time.Sleep(100 * time.Millisecond) 411 | 412 | // now enable the ClientEnd and check that an RPC completes quickly. 413 | t0 := time.Now() 414 | rn.Enable("c", true) 415 | { 416 | arg := 99 417 | reply := "" 418 | e.Call("JunkServer.Handler2", arg, &reply) 419 | wanted := "handler2-" + strconv.Itoa(arg) 420 | if reply != wanted { 421 | t.Fatalf("wrong reply %v from Handler2, expecting %v\n", reply, wanted) 422 | } 423 | } 424 | dur := time.Since(t0).Seconds() 425 | 426 | if dur > 0.03 { 427 | t.Fatalf("RPC took too long (%v) after Enable\n", dur) 428 | } 429 | 430 | for ii := 0; ii < nrpcs; ii++ { 431 | <-ch 432 | } 433 | 434 | js.mu.Lock() 435 | defer js.mu.Unlock() 436 | if len(js.log2) != 1 { 437 | t.Fatalf("wrong number (%v) of RPCs delivered, expected 1\n", len(js.log2)) 438 | } 439 | 440 | n := rn.GetCount(1000) 441 | if n != 1 { 442 | t.Fatalf("wrong GetCount() %v, expected %v\n", n, 1) 443 | } 444 | } 445 | 446 | // 447 | // if an RPC is stuck in a server, and the server 448 | // is killed with DeleteServer(), does the RPC 449 | // get un-stuck? 
450 | // 451 | func TestKilled(t *testing.T) { 452 | runtime.GOMAXPROCS(4) 453 | 454 | rn := MakeNetwork() 455 | 456 | e := rn.MakeEnd("end1-99") 457 | 458 | js := &JunkServer{} 459 | svc := MakeService(js) 460 | 461 | rs := MakeServer() 462 | rs.AddService(svc) 463 | rn.AddServer("server99", rs) 464 | 465 | rn.Connect("end1-99", "server99") 466 | rn.Enable("end1-99", true) 467 | 468 | doneCh := make(chan bool) 469 | go func() { 470 | reply := 0 471 | ok := e.Call("JunkServer.Handler3", 99, &reply) 472 | doneCh <- ok 473 | }() 474 | 475 | time.Sleep(1000 * time.Millisecond) 476 | 477 | select { 478 | case <-doneCh: 479 | t.Fatal("Handler3 should not have returned yet") 480 | case <-time.After(100 * time.Millisecond): 481 | } 482 | 483 | rn.DeleteServer("server99") 484 | 485 | select { 486 | case x := <-doneCh: 487 | if x != false { 488 | t.Fatal("Handler3 returned successfully despite DeleteServer()") 489 | } 490 | case <-time.After(100 * time.Millisecond): 491 | t.Fatal("Handler3 should return after DeleteServer()") 492 | } 493 | } 494 | 495 | func TestBenchmark(t *testing.T) { 496 | runtime.GOMAXPROCS(4) 497 | 498 | rn := MakeNetwork() 499 | 500 | e := rn.MakeEnd("end1-99") 501 | 502 | js := &JunkServer{} 503 | svc := MakeService(js) 504 | 505 | rs := MakeServer() 506 | rs.AddService(svc) 507 | rn.AddServer("server99", rs) 508 | 509 | rn.Connect("end1-99", "server99") 510 | rn.Enable("end1-99", true) 511 | 512 | t0 := time.Now() 513 | n := 100000 514 | for iters := 0; iters < n; iters++ { 515 | reply := "" 516 | e.Call("JunkServer.Handler2", 111, &reply) 517 | if reply != "handler2-111" { 518 | t.Fatal("wrong reply from Handler2") 519 | } 520 | } 521 | fmt.Printf("%v for %v\n", time.Since(t0), n) 522 | // march 2016, rtm laptop, 22 microseconds per RPC 523 | } 524 | -------------------------------------------------------------------------------- /assignment3/src/raft/config.go: -------------------------------------------------------------------------------- 1 | 
package raft 2 | 3 | // 4 | // support for Raft tester. 5 | // 6 | // we will use the original config.go to test your code for grading. 7 | // so, while you can modify this code to help you debug, please 8 | // test with the original before submitting. 9 | // 10 | 11 | import "labrpc" 12 | import "log" 13 | import "sync" 14 | import "testing" 15 | import "runtime" 16 | import crand "crypto/rand" 17 | import "encoding/base64" 18 | import "sync/atomic" 19 | import "time" 20 | import "fmt" 21 | 22 | func randstring(n int) string { 23 | b := make([]byte, 2*n) 24 | crand.Read(b) 25 | s := base64.URLEncoding.EncodeToString(b) 26 | return s[0:n] 27 | } 28 | 29 | type config struct { 30 | mu sync.Mutex 31 | t *testing.T 32 | net *labrpc.Network 33 | n int 34 | done int32 // tell internal threads to die 35 | rafts []*Raft 36 | applyErr []string // from apply channel readers 37 | connected []bool // whether each server is on the net 38 | saved []*Persister 39 | endnames [][]string // the port file names each sends to 40 | logs []map[int]int // copy of each server's committed entries 41 | } 42 | 43 | func make_config(t *testing.T, n int, unreliable bool) *config { 44 | runtime.GOMAXPROCS(4) 45 | cfg := &config{} 46 | cfg.t = t 47 | cfg.net = labrpc.MakeNetwork() 48 | cfg.n = n 49 | cfg.applyErr = make([]string, cfg.n) 50 | cfg.rafts = make([]*Raft, cfg.n) 51 | cfg.connected = make([]bool, cfg.n) 52 | cfg.saved = make([]*Persister, cfg.n) 53 | cfg.endnames = make([][]string, cfg.n) 54 | cfg.logs = make([]map[int]int, cfg.n) 55 | 56 | cfg.setunreliable(unreliable) 57 | 58 | cfg.net.LongDelays(true) 59 | 60 | // create a full set of Rafts. 61 | for i := 0; i < cfg.n; i++ { 62 | cfg.logs[i] = map[int]int{} 63 | cfg.start1(i) 64 | } 65 | 66 | // connect everyone 67 | for i := 0; i < cfg.n; i++ { 68 | cfg.connect(i) 69 | } 70 | 71 | return cfg 72 | } 73 | 74 | // shut down a Raft server but save its persistent state. 
75 | func (cfg *config) crash1(i int) { 76 | cfg.disconnect(i) 77 | cfg.net.DeleteServer(i) // disable client connections to the server. 78 | 79 | cfg.mu.Lock() 80 | defer cfg.mu.Unlock() 81 | 82 | // a fresh persister, in case old instance 83 | // continues to update the Persister. 84 | // but copy old persister's content so that we always 85 | // pass Make() the last persisted state. 86 | if cfg.saved[i] != nil { 87 | cfg.saved[i] = cfg.saved[i].Copy() 88 | } 89 | 90 | rf := cfg.rafts[i] 91 | if rf != nil { 92 | cfg.mu.Unlock() 93 | rf.Kill() 94 | cfg.mu.Lock() 95 | cfg.rafts[i] = nil 96 | } 97 | 98 | if cfg.saved[i] != nil { 99 | raftlog := cfg.saved[i].ReadRaftState() 100 | cfg.saved[i] = &Persister{} 101 | cfg.saved[i].SaveRaftState(raftlog) 102 | } 103 | } 104 | 105 | // 106 | // start or re-start a Raft. 107 | // if one already exists, "kill" it first. 108 | // allocate new outgoing port file names, and a new 109 | // state persister, to isolate previous instance of 110 | // this server. since we cannot really kill it. 111 | // 112 | func (cfg *config) start1(i int) { 113 | cfg.crash1(i) 114 | 115 | // a fresh set of outgoing ClientEnd names. 116 | // so that old crashed instance's ClientEnds can't send. 117 | cfg.endnames[i] = make([]string, cfg.n) 118 | for j := 0; j < cfg.n; j++ { 119 | cfg.endnames[i][j] = randstring(20) 120 | } 121 | 122 | // a fresh set of ClientEnds. 123 | ends := make([]*labrpc.ClientEnd, cfg.n) 124 | for j := 0; j < cfg.n; j++ { 125 | ends[j] = cfg.net.MakeEnd(cfg.endnames[i][j]) 126 | cfg.net.Connect(cfg.endnames[i][j], j) 127 | } 128 | 129 | cfg.mu.Lock() 130 | 131 | // a fresh persister, so old instance doesn't overwrite 132 | // new instance's persisted state. 133 | // but copy old persister's content so that we always 134 | // pass Make() the last persisted state. 
135 | if cfg.saved[i] != nil { 136 | cfg.saved[i] = cfg.saved[i].Copy() 137 | } else { 138 | cfg.saved[i] = MakePersister() 139 | } 140 | 141 | cfg.mu.Unlock() 142 | 143 | // listen to messages from Raft indicating newly committed messages. 144 | applyCh := make(chan ApplyMsg) 145 | go func() { 146 | for m := range applyCh { 147 | err_msg := "" 148 | if m.UseSnapshot { 149 | // ignore the snapshot 150 | } else if v, ok := (m.Command).(int); ok { 151 | cfg.mu.Lock() 152 | for j := 0; j < len(cfg.logs); j++ { 153 | if old, oldok := cfg.logs[j][m.Index]; oldok && old != v { 154 | // some server has already committed a different value for this entry! 155 | err_msg = fmt.Sprintf("commit index=%v server=%v %v != server=%v %v", 156 | m.Index, i, m.Command, j, old) 157 | } 158 | } 159 | _, prevok := cfg.logs[i][m.Index-1] 160 | cfg.logs[i][m.Index] = v 161 | cfg.mu.Unlock() 162 | 163 | if m.Index > 1 && prevok == false { 164 | err_msg = fmt.Sprintf("server %v apply out of order %v", i, m.Index) 165 | } 166 | } else { 167 | err_msg = fmt.Sprintf("committed command %v is not an int", m.Command) 168 | } 169 | 170 | if err_msg != "" { 171 | log.Fatalf("apply error: %v\n", err_msg) 172 | cfg.applyErr[i] = err_msg 173 | // keep reading after error so that Raft doesn't block 174 | // holding locks... 175 | } 176 | } 177 | }() 178 | 179 | rf := Make(ends, i, cfg.saved[i], applyCh) 180 | 181 | cfg.mu.Lock() 182 | cfg.rafts[i] = rf 183 | cfg.mu.Unlock() 184 | 185 | svc := labrpc.MakeService(rf) 186 | srv := labrpc.MakeServer() 187 | srv.AddService(svc) 188 | cfg.net.AddServer(i, srv) 189 | } 190 | 191 | func (cfg *config) cleanup() { 192 | for i := 0; i < len(cfg.rafts); i++ { 193 | if cfg.rafts[i] != nil { 194 | cfg.rafts[i].Kill() 195 | } 196 | } 197 | atomic.StoreInt32(&cfg.done, 1) 198 | } 199 | 200 | // attach server i to the net. 
201 | func (cfg *config) connect(i int) { 202 | // fmt.Printf("connect(%d)\n", i) 203 | 204 | cfg.connected[i] = true 205 | 206 | // outgoing ClientEnds 207 | for j := 0; j < cfg.n; j++ { 208 | if cfg.connected[j] { 209 | endname := cfg.endnames[i][j] 210 | cfg.net.Enable(endname, true) 211 | } 212 | } 213 | 214 | // incoming ClientEnds 215 | for j := 0; j < cfg.n; j++ { 216 | if cfg.connected[j] { 217 | endname := cfg.endnames[j][i] 218 | cfg.net.Enable(endname, true) 219 | } 220 | } 221 | } 222 | 223 | // detach server i from the net. 224 | func (cfg *config) disconnect(i int) { 225 | // fmt.Printf("disconnect(%d)\n", i) 226 | 227 | cfg.connected[i] = false 228 | 229 | // outgoing ClientEnds 230 | for j := 0; j < cfg.n; j++ { 231 | if cfg.endnames[i] != nil { 232 | endname := cfg.endnames[i][j] 233 | cfg.net.Enable(endname, false) 234 | } 235 | } 236 | 237 | // incoming ClientEnds 238 | for j := 0; j < cfg.n; j++ { 239 | if cfg.endnames[j] != nil { 240 | endname := cfg.endnames[j][i] 241 | cfg.net.Enable(endname, false) 242 | } 243 | } 244 | } 245 | 246 | func (cfg *config) rpcCount(server int) int { 247 | return cfg.net.GetCount(server) 248 | } 249 | 250 | func (cfg *config) setunreliable(unrel bool) { 251 | cfg.net.Reliable(!unrel) 252 | } 253 | 254 | func (cfg *config) setlongreordering(longrel bool) { 255 | cfg.net.LongReordering(longrel) 256 | } 257 | 258 | // check that there's exactly one leader. 259 | // try a few times in case re-elections are needed. 
260 | func (cfg *config) checkOneLeader() int { 261 | for iters := 0; iters < 10; iters++ { 262 | time.Sleep(500 * time.Millisecond) 263 | leaders := make(map[int][]int) 264 | for i := 0; i < cfg.n; i++ { 265 | if cfg.connected[i] { 266 | if t, leader := cfg.rafts[i].GetState(); leader { 267 | leaders[t] = append(leaders[t], i) 268 | } 269 | } 270 | } 271 | 272 | lastTermWithLeader := -1 273 | for t, leaders := range leaders { 274 | if len(leaders) > 1 { 275 | cfg.t.Fatalf("term %d has %d (>1) leaders\n", t, len(leaders)) 276 | } 277 | if t > lastTermWithLeader { 278 | lastTermWithLeader = t 279 | } 280 | } 281 | 282 | if len(leaders) != 0 { 283 | return leaders[lastTermWithLeader][0] 284 | } 285 | } 286 | cfg.t.Fatal("expected one leader, got none") 287 | return -1 288 | } 289 | 290 | // check that everyone agrees on the term. 291 | func (cfg *config) checkTerms() int { 292 | term := -1 293 | for i := 0; i < cfg.n; i++ { 294 | if cfg.connected[i] { 295 | xterm, _ := cfg.rafts[i].GetState() 296 | if term == -1 { 297 | term = xterm 298 | } else if term != xterm { 299 | cfg.t.Fatal("servers disagree on term") 300 | } 301 | } 302 | } 303 | return term 304 | } 305 | 306 | // check that there's no leader 307 | func (cfg *config) checkNoLeader() { 308 | for i := 0; i < cfg.n; i++ { 309 | if cfg.connected[i] { 310 | _, is_leader := cfg.rafts[i].GetState() 311 | if is_leader { 312 | cfg.t.Fatalf("expected no leader, but %v claims to be leader\n", i) 313 | } 314 | } 315 | } 316 | } 317 | 318 | // how many servers think a log entry is committed? 
319 | func (cfg *config) nCommitted(index int) (int, interface{}) { 320 | count := 0 321 | cmd := -1 322 | for i := 0; i < len(cfg.rafts); i++ { 323 | if cfg.applyErr[i] != "" { 324 | cfg.t.Fatal(cfg.applyErr[i]) 325 | } 326 | 327 | cfg.mu.Lock() 328 | cmd1, ok := cfg.logs[i][index] 329 | cfg.mu.Unlock() 330 | 331 | if ok { 332 | if count > 0 && cmd != cmd1 { 333 | cfg.t.Fatalf("committed values do not match: index %v, %v, %v\n", 334 | index, cmd, cmd1) 335 | } 336 | count += 1 337 | cmd = cmd1 338 | } 339 | } 340 | return count, cmd 341 | } 342 | 343 | // wait for at least n servers to commit. 344 | // but don't wait forever. 345 | func (cfg *config) wait(index int, n int, startTerm int) interface{} { 346 | to := 10 * time.Millisecond 347 | for iters := 0; iters < 30; iters++ { 348 | nd, _ := cfg.nCommitted(index) 349 | if nd >= n { 350 | break 351 | } 352 | time.Sleep(to) 353 | if to < time.Second { 354 | to *= 2 355 | } 356 | if startTerm > -1 { 357 | for _, r := range cfg.rafts { 358 | if t, _ := r.GetState(); t > startTerm { 359 | // someone has moved on 360 | // can no longer guarantee that we'll "win" 361 | return -1 362 | } 363 | } 364 | } 365 | } 366 | nd, cmd := cfg.nCommitted(index) 367 | if nd < n { 368 | cfg.t.Fatalf("only %d decided for index %d; wanted %d\n", 369 | nd, index, n) 370 | } 371 | return cmd 372 | } 373 | 374 | // do a complete agreement. 375 | // it might choose the wrong leader initially, 376 | // and have to re-submit after giving up. 377 | // entirely gives up after about 10 seconds. 378 | // indirectly checks that the servers agree on the 379 | // same value, since nCommitted() checks this, 380 | // as do the threads that read from applyCh. 381 | // returns index. 382 | func (cfg *config) one(cmd int, expectedServers int) int { 383 | t0 := time.Now() 384 | starts := 0 385 | for time.Since(t0).Seconds() < 10 { 386 | // try all the servers, maybe one is the leader. 
387 | index := -1 388 | for si := 0; si < cfg.n; si++ { 389 | starts = (starts + 1) % cfg.n 390 | var rf *Raft 391 | cfg.mu.Lock() 392 | if cfg.connected[starts] { 393 | rf = cfg.rafts[starts] 394 | } 395 | cfg.mu.Unlock() 396 | if rf != nil { 397 | index1, _, ok := rf.Start(cmd) 398 | if ok { 399 | index = index1 400 | break 401 | } 402 | } 403 | } 404 | 405 | if index != -1 { 406 | // somebody claimed to be the leader and to have 407 | // submitted our command; wait a while for agreement. 408 | t1 := time.Now() 409 | for time.Since(t1).Seconds() < 2 { 410 | nd, cmd1 := cfg.nCommitted(index) 411 | if nd > 0 && nd >= expectedServers { 412 | // committed 413 | if cmd2, ok := cmd1.(int); ok && cmd2 == cmd { 414 | // and it was the command we submitted. 415 | return index 416 | } 417 | } 418 | time.Sleep(20 * time.Millisecond) 419 | } 420 | } else { 421 | time.Sleep(50 * time.Millisecond) 422 | } 423 | } 424 | cfg.t.Fatalf("one(%v) failed to reach agreement\n", cmd) 425 | return -1 426 | } 427 | -------------------------------------------------------------------------------- /assignment3/src/raft/persister.go: -------------------------------------------------------------------------------- 1 | package raft 2 | 3 | // 4 | // support for Raft and kvraft to save persistent 5 | // Raft state (log &c) and k/v server snapshots. 6 | // 7 | // we will use the original persister.go to test your code for grading. 8 | // so, while you can modify this code to help you debug, please 9 | // test with the original before submitting. 
10 | // 11 | 12 | import "sync" 13 | 14 | type Persister struct { 15 | mu sync.Mutex 16 | raftstate []byte 17 | snapshot []byte 18 | } 19 | 20 | func MakePersister() *Persister { 21 | return &Persister{} 22 | } 23 | 24 | func (ps *Persister) Copy() *Persister { 25 | ps.mu.Lock() 26 | defer ps.mu.Unlock() 27 | np := MakePersister() 28 | np.raftstate = ps.raftstate 29 | np.snapshot = ps.snapshot 30 | return np 31 | } 32 | 33 | func (ps *Persister) SaveRaftState(data []byte) { 34 | ps.mu.Lock() 35 | defer ps.mu.Unlock() 36 | ps.raftstate = data 37 | } 38 | 39 | func (ps *Persister) ReadRaftState() []byte { 40 | ps.mu.Lock() 41 | defer ps.mu.Unlock() 42 | return ps.raftstate 43 | } 44 | 45 | func (ps *Persister) RaftStateSize() int { 46 | ps.mu.Lock() 47 | defer ps.mu.Unlock() 48 | return len(ps.raftstate) 49 | } 50 | 51 | func (ps *Persister) SaveSnapshot(snapshot []byte) { 52 | ps.mu.Lock() 53 | defer ps.mu.Unlock() 54 | ps.snapshot = snapshot 55 | } 56 | 57 | func (ps *Persister) ReadSnapshot() []byte { 58 | ps.mu.Lock() 59 | defer ps.mu.Unlock() 60 | return ps.snapshot 61 | } 62 | -------------------------------------------------------------------------------- /assignment3/src/raft/raft.go: -------------------------------------------------------------------------------- 1 | package raft 2 | 3 | // 4 | // this is an outline of the API that raft must expose to 5 | // the service (or tester). see comments below for 6 | // each of these functions for more details. 7 | // 8 | // rf = Make(...) 9 | // create a new Raft server. 10 | // rf.Start(command interface{}) (index, term, isleader) 11 | // start agreement on a new log entry 12 | // rf.GetState() (term, isLeader) 13 | // ask a Raft for its current term, and whether it thinks it is leader 14 | // ApplyMsg 15 | // each time a new entry is committed to the log, each Raft peer 16 | // should send an ApplyMsg to the service (or tester) 17 | // in the same server. 
18 | // 19 | 20 | import "sync" 21 | import "labrpc" 22 | 23 | // import "bytes" 24 | // import "encoding/gob" 25 | 26 | 27 | 28 | // 29 | // as each Raft peer becomes aware that successive log entries are 30 | // committed, the peer should send an ApplyMsg to the service (or 31 | // tester) on the same server, via the applyCh passed to Make(). 32 | // 33 | type ApplyMsg struct { 34 | Index int 35 | Command interface{} 36 | UseSnapshot bool // ignore for lab2; only used in lab3 37 | Snapshot []byte // ignore for lab2; only used in lab3 38 | } 39 | 40 | // 41 | // A Go object implementing a single Raft peer. 42 | // 43 | type Raft struct { 44 | mu sync.Mutex 45 | peers []*labrpc.ClientEnd 46 | persister *Persister 47 | me int // index into peers[] 48 | 49 | // Your data here. 50 | // Look at the paper's Figure 2 for a description of what 51 | // state a Raft server must maintain. 52 | 53 | } 54 | 55 | // return currentTerm and whether this server 56 | // believes it is the leader. 57 | func (rf *Raft) GetState() (int, bool) { 58 | 59 | var term int 60 | var isleader bool 61 | // Your code here. 62 | return term, isleader 63 | } 64 | 65 | // 66 | // save Raft's persistent state to stable storage, 67 | // where it can later be retrieved after a crash and restart. 68 | // see paper's Figure 2 for a description of what should be persistent. 69 | // 70 | func (rf *Raft) persist() { 71 | // Your code here. 72 | // Example: 73 | // w := new(bytes.Buffer) 74 | // e := gob.NewEncoder(w) 75 | // e.Encode(rf.xxx) 76 | // e.Encode(rf.yyy) 77 | // data := w.Bytes() 78 | // rf.persister.SaveRaftState(data) 79 | } 80 | 81 | // 82 | // restore previously persisted state. 83 | // 84 | func (rf *Raft) readPersist(data []byte) { 85 | // Your code here. 86 | // Example: 87 | // r := bytes.NewBuffer(data) 88 | // d := gob.NewDecoder(r) 89 | // d.Decode(&rf.xxx) 90 | // d.Decode(&rf.yyy) 91 | } 92 | 93 | 94 | 95 | 96 | // 97 | // example RequestVote RPC arguments structure. 
98 | // 99 | type RequestVoteArgs struct { 100 | // Your data here. 101 | } 102 | 103 | // 104 | // example RequestVote RPC reply structure. 105 | // 106 | type RequestVoteReply struct { 107 | // Your data here. 108 | } 109 | 110 | // 111 | // example RequestVote RPC handler. 112 | // 113 | func (rf *Raft) RequestVote(args RequestVoteArgs, reply *RequestVoteReply) { 114 | // Your code here. 115 | } 116 | 117 | // 118 | // example code to send a RequestVote RPC to a server. 119 | // server is the index of the target server in rf.peers[]. 120 | // expects RPC arguments in args. 121 | // fills in *reply with RPC reply, so caller should 122 | // pass &reply. 123 | // the types of the args and reply passed to Call() must be 124 | // the same as the types of the arguments declared in the 125 | // handler function (including whether they are pointers). 126 | // 127 | // returns true if labrpc says the RPC was delivered. 128 | // 129 | // if you're having trouble getting RPC to work, check that you've 130 | // capitalized all field names in structs passed over RPC, and 131 | // that the caller passes the address of the reply struct with &, not 132 | // the struct itself. 133 | // 134 | func (rf *Raft) sendRequestVote(server int, args RequestVoteArgs, reply *RequestVoteReply) bool { 135 | ok := rf.peers[server].Call("Raft.RequestVote", args, reply) 136 | return ok 137 | } 138 | 139 | 140 | // 141 | // the service using Raft (e.g. a k/v server) wants to start 142 | // agreement on the next command to be appended to Raft's log. if this 143 | // server isn't the leader, returns false. otherwise start the 144 | // agreement and return immediately. there is no guarantee that this 145 | // command will ever be committed to the Raft log, since the leader 146 | // may fail or lose an election. 147 | // 148 | // the first return value is the index that the command will appear at 149 | // if it's ever committed. the second return value is the current 150 | // term. 
the third return value is true if this server believes it is 151 | // the leader. 152 | // 153 | func (rf *Raft) Start(command interface{}) (int, int, bool) { 154 | index := -1 155 | term := -1 156 | isLeader := true 157 | 158 | 159 | return index, term, isLeader 160 | } 161 | 162 | // 163 | // the tester calls Kill() when a Raft instance won't 164 | // be needed again. you are not required to do anything 165 | // in Kill(), but it might be convenient to (for example) 166 | // turn off debug output from this instance. 167 | // 168 | func (rf *Raft) Kill() { 169 | // Your code here, if desired. 170 | } 171 | 172 | // 173 | // the service or tester wants to create a Raft server. the ports 174 | // of all the Raft servers (including this one) are in peers[]. this 175 | // server's port is peers[me]. all the servers' peers[] arrays 176 | // have the same order. persister is a place for this server to 177 | // save its persistent state, and also initially holds the most 178 | // recent saved state, if any. applyCh is a channel on which the 179 | // tester or service expects Raft to send ApplyMsg messages. 180 | // Make() must return quickly, so it should start goroutines 181 | // for any long-running work. 182 | // 183 | func Make(peers []*labrpc.ClientEnd, me int, 184 | persister *Persister, applyCh chan ApplyMsg) *Raft { 185 | rf := &Raft{} 186 | rf.peers = peers 187 | rf.persister = persister 188 | rf.me = me 189 | 190 | // Your initialization code here. 
191 | 192 | // initialize from state persisted before a crash 193 | rf.readPersist(persister.ReadRaftState()) 194 | 195 | 196 | return rf 197 | } 198 | -------------------------------------------------------------------------------- /assignment3/src/raft/util.go: -------------------------------------------------------------------------------- 1 | package raft 2 | 3 | import "log" 4 | 5 | // Debugging 6 | const Debug = 0 7 | 8 | func DPrintf(format string, a ...interface{}) (n int, err error) { 9 | if Debug > 0 { 10 | log.Printf(format, a...) 11 | } 12 | return 13 | } 14 | -------------------------------------------------------------------------------- /assignment4/README.md: -------------------------------------------------------------------------------- 1 | # COS418 Assignment 4: Raft Log Consensus 2 | 3 |
6 | This is the second in the series of assignments in which you'll build a 7 | fault-tolerant key/value storage system. You started off in Assignment 3 8 | by implementing the leader election features of Raft. In this assignment, 9 | you will implement Raft's core feature: log consensus. From here, Assignment 5 10 | will be a key/value service that uses this Raft implementation as a foundational module. 11 |
12 | 13 |14 | While being able to elect a leader is useful, we want to use 15 | Raft to keep a consistent, replicated log of operations. To do 16 | so, we need to have the servers accept client operations 17 | through Start(), and insert them into the log. In 18 | Raft, only the leader is allowed to append to the log, and 19 | should disseminate new entries to other servers by including 20 | them in its outgoing AppendEntries RPCs. 21 |
22 | 23 |24 | If this sounds only vaguely familiar (or even if it's crystal clear), you are 25 | highly encouraged to go back to reread the 26 | extended Raft paper, 27 | the Raft lecture notes, and the 28 | illustrated Raft guide. 29 | You should, of course, also review your work from Assignment 3, as this assignment 30 | directly builds off that. 31 |
32 | 33 |36 | You will continue to use the same cos418 code bundle from the previous assignments. 37 | For this assignment, we will focus primarily on the code and tests for the Raft implementation in 38 | src/raft and the simple RPC-like system in src/labrpc. It is worth your while to 39 | read and digest the code in these packages again, including your implementation from Assignment 3. 40 |
41 | 42 |45 | In this lab you'll implement most of the Raft design 46 | described in the extended paper, including saving 47 | persistent state and reading it after a node fails and 48 | then restarts. You will not implement cluster 49 | membership changes (Section 6) or log compaction / 50 | snapshotting (Section 7). 51 |
52 | 53 |54 | A set of Raft instances talk to each other with 55 | RPC to maintain replicated logs. Your Raft interface will 56 | support an indefinite sequence of numbered commands, also 57 | called log entries. The entries are numbered with index numbers. 58 | The log entry with a given index will eventually 59 | be committed. At that point, your Raft should send the log 60 | entry to the larger service for it to execute. 61 |
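To make the last step concrete, here is a minimal, self-contained sketch of an applier: once `commitIndex` advances past `lastApplied`, each newly committed entry is sent on the apply channel in log order. Only `ApplyMsg` mirrors the provided skeleton; the `entry` type and `applyCommitted` helper are hypothetical names for illustration.

```go
package main

import "fmt"

// ApplyMsg mirrors the struct in raft.go.
type ApplyMsg struct {
	Index   int
	Command interface{}
}

// entry is a hypothetical log-entry type; your raft.go defines its own.
type entry struct {
	Term    int
	Command interface{}
}

// applyCommitted sends every newly committed entry on applyCh in index
// order, advancing lastApplied up to commitIndex, and returns the new
// lastApplied.
func applyCommitted(log []entry, lastApplied, commitIndex int, applyCh chan ApplyMsg) int {
	for lastApplied < commitIndex {
		lastApplied++
		applyCh <- ApplyMsg{Index: lastApplied, Command: log[lastApplied].Command}
	}
	return lastApplied
}

func main() {
	log := []entry{{}, {1, 100}, {1, 200}, {2, 300}} // index 0 is a placeholder
	applyCh := make(chan ApplyMsg, 8)                // buffered so this sketch doesn't block
	lastApplied := applyCommitted(log, 0, 2, applyCh)
	fmt.Println(lastApplied, len(applyCh))
}
```

In a real implementation this loop runs under your own locking discipline; the point is only the ordering invariant that the tester's apply-channel reader checks.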
62 | 63 |
64 | Your first major task is to implement the leader and follower code 65 | to append new log entries. 66 | This will involve implementing Start(), completing the 67 | AppendEntries RPC structs, sending them, and fleshing 68 | out the AppendEntries RPC handler. Your goal should 69 | first be to pass the TestBasicAgree() test (in 70 | test_test.go). Once you have that working, you should 71 | try to get all the tests before the "basic persistence" test to 72 | pass before moving on. 73 |
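One possible shape for the AppendEntries types, transcribed from Figure 2 of the extended Raft paper, is sketched below. The field names are illustrative assumptions rather than part of the provided skeleton; whatever names you pick must start with capital letters, or labrpc will silently drop them.

```go
package main

import "fmt"

// LogEntry is an assumed entry type; your raft.go defines its own.
type LogEntry struct {
	Term    int
	Command interface{}
}

// AppendEntriesArgs follows Figure 2 of the extended paper.
type AppendEntriesArgs struct {
	Term         int        // leader's term
	LeaderId     int        // so followers can learn who the leader is
	PrevLogIndex int        // index of the entry immediately preceding Entries
	PrevLogTerm  int        // term of the PrevLogIndex entry
	Entries      []LogEntry // empty for heartbeats
	LeaderCommit int        // leader's commitIndex
}

type AppendEntriesReply struct {
	Term    int  // follower's currentTerm, so a stale leader can step down
	Success bool // true iff follower matched PrevLogIndex and PrevLogTerm
}

func main() {
	// a heartbeat: the consistency-check fields are set, but no entries ride along
	hb := AppendEntriesArgs{Term: 2, LeaderId: 0, PrevLogIndex: 5, PrevLogTerm: 1, LeaderCommit: 4}
	fmt.Println(len(hb.Entries) == 0)
}
```

A heartbeat is then just an AppendEntries RPC with an empty Entries slice, so the follower still runs the same consistency check and commit-index update.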
74 | 75 |76 | Only RPC may be used for interaction between different Raft 77 | instances. For example, different instances of your Raft 78 | implementation are not allowed to share Go variables. 79 | Your implementation should not use files at all. 80 |
81 | 82 | 83 |85 | The next major task is to handle the fault tolerant aspects of the Raft protocol, 86 | making your implementation robust against various kinds of failures. These failures 87 | could include servers not receiving some RPCs and servers that crash and restart. 88 |
89 | 90 |91 | A Raft-based server must be able to pick up where it left off, 92 | and continue if the computer it is running on reboots. This requires 93 | that Raft keep persistent state that survives a reboot (the 94 | paper's Figure 2 mentions which state should be persistent). 95 |
96 | 97 |98 | A “real” implementation would do this by writing 99 | Raft's persistent state to disk each time it changes, and reading the latest saved 100 | state from 101 | disk when restarting after a reboot. Your implementation won't use 102 | the disk; instead, it will save and restore persistent state 103 | from a Persister object (see persister.go). 104 | Whoever calls Make() supplies a Persister 105 | that initially holds Raft's most recently persisted state (if 106 | any). Raft should initialize its state from that 107 | Persister, and should use it to save its persistent 108 | state each time the state changes. You can use the 109 | ReadRaftState() and SaveRaftState() methods 110 | for this respectively. 111 |
112 | 113 |114 | Implement persistence by first adding code to serialize any 115 | state that needs persisting in persist(), and to 116 | unserialize that same state in readPersist(). You now 117 | need to determine at what points in the Raft protocol your 118 | servers are required to persist their state, and insert calls 119 | to persist() in those places. Once this code is 120 | complete, you should pass the remaining tests. You may want to 121 | first try and pass the "basic persistence" test (go test 122 | -run 'TestPersist1$'), and then tackle the remaining ones. 123 |
124 | 125 |126 | You will need to encode the state as an array of bytes in order 127 | to pass it to the Persister; raft.go contains 128 | some example code for this in persist() and 129 | readPersist(). 130 |
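As a self-contained sketch of that encoding, the gob round trip below mirrors what persist() and readPersist() might do for the Figure 2 persistent fields. The helper names encodeState/decodeState are our own, and Command is simplified to int here (the tester commits ints); your actual entry type may differ.

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// LogEntry stands in for whatever entry type your raft.go defines.
type LogEntry struct {
	Term    int
	Command int
}

// encodeState gob-encodes the persistent fields into a byte slice
// suitable for Persister.SaveRaftState().
func encodeState(currentTerm, votedFor int, log []LogEntry) []byte {
	w := new(bytes.Buffer)
	e := gob.NewEncoder(w)
	e.Encode(currentTerm)
	e.Encode(votedFor)
	e.Encode(log)
	return w.Bytes()
}

// decodeState recovers the fields in exactly the order they were
// encoded, as readPersist() must.
func decodeState(data []byte) (currentTerm, votedFor int, log []LogEntry) {
	d := gob.NewDecoder(bytes.NewBuffer(data))
	d.Decode(&currentTerm)
	d.Decode(&votedFor)
	d.Decode(&log)
	return
}

func main() {
	data := encodeState(3, 1, []LogEntry{{Term: 1, Command: 100}, {Term: 3, Command: 200}})
	term, voted, log := decodeState(data)
	fmt.Println(term, voted, len(log))
}
```

The order of Encode and Decode calls must match exactly; gob has no field names at this level, so swapping two calls silently corrupts the recovered state.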
131 | 132 |133 | In order to pass some of the challenging tests towards the end, such as 134 | those marked "unreliable", you will need to implement the optimization to 135 | allow a follower to back up the leader's nextIndex by more than one entry 136 | at a time. See the description in the 137 | extended Raft paper starting at 138 | the bottom of page 7 and top of page 8 (marked by a gray line). 139 |
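One common way to realize this optimization is sketched below; it is one design under assumed names (ConflictTerm/ConflictIndex are not defined by the skeleton), not the only correct one. The follower reports the term of the conflicting entry and the first index it holds for that term, and the leader uses that to move nextIndex back a whole term at a time instead of one entry per rejected RPC.

```go
package main

import "fmt"

type entry struct{ Term int }

// conflictInfo is the follower side: on a PrevLogTerm mismatch, report
// the conflicting term and the first index the follower stores for it.
// Index 0 is treated as a placeholder, as in the lab's 1-indexed log.
func conflictInfo(log []entry, prevLogIndex int) (conflictTerm, conflictIndex int) {
	conflictTerm = log[prevLogIndex].Term
	conflictIndex = prevLogIndex
	for conflictIndex > 1 && log[conflictIndex-1].Term == conflictTerm {
		conflictIndex--
	}
	return
}

// backedUpNextIndex is the leader side: on rejection, probe just past
// the leader's own last entry of the conflicting term, or jump to the
// follower's first index of that term if the leader never stored it.
func backedUpNextIndex(leaderLog []entry, conflictTerm, conflictIndex int) int {
	for i := len(leaderLog) - 1; i >= 1; i-- {
		if leaderLog[i].Term == conflictTerm {
			return i + 1
		}
	}
	return conflictIndex
}

func main() {
	follower := []entry{{}, {1}, {2}, {2}, {2}} // index 0 is a placeholder
	ct, ci := conflictInfo(follower, 4)
	leader := []entry{{}, {1}, {1}, {3}}
	fmt.Println(ct, ci, backedUpNextIndex(leader, ct, ci))
}
```

Either way nextIndex only falls back toward the last point of agreement; correctness still rests on the AppendEntries consistency check, and this only reduces the number of round trips under heavy message loss.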
140 | 141 | 142 |
201 | You will receive full credit for Part I if your software passes the tests mentioned for that section on the CS servers, 202 | and likewise full credit for Part II if it passes the tests mentioned for that section on the CS servers. 203 |
204 | 205 |
206 | The final portion of your credit is determined by code quality tests, using the standard tools gofmt and go vet. 207 | You will receive full credit for this portion if all files submitted conform to the style standards set by gofmt and the report from go vet is clean for your raft package (that is, produces no errors). 208 | If your code does not pass the gofmt test, you should reformat your code using the tool. You can also use the Go Checkstyle tool for advice to improve your code's style, if applicable. Additionally, though not part of the graded checks, it is advisable to produce code that complies with Golint where possible. 209 |
210 | 211 |This assignment is adapted from MIT's 6.824 course. Thanks to Frans Kaashoek, Robert Morris, and Nickolai Zeldovich for their support.
213 | -------------------------------------------------------------------------------- /assignment4/src/.gitignore: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/COS418F18/assignments_template/e9e55ad69f23bafc835aa856258a380d8edd5398/assignment4/src/.gitignore -------------------------------------------------------------------------------- /assignment5/pkg/darwin_amd64/raft.a: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/COS418F18/assignments_template/e9e55ad69f23bafc835aa856258a380d8edd5398/assignment5/pkg/darwin_amd64/raft.a -------------------------------------------------------------------------------- /assignment5/pkg/linux_386/raft.a: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/COS418F18/assignments_template/e9e55ad69f23bafc835aa856258a380d8edd5398/assignment5/pkg/linux_386/raft.a -------------------------------------------------------------------------------- /assignment5/pkg/linux_amd64/raft.a: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/COS418F18/assignments_template/e9e55ad69f23bafc835aa856258a380d8edd5398/assignment5/pkg/linux_amd64/raft.a -------------------------------------------------------------------------------- /assignment5/pkg/windows_386/raft.a: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/COS418F18/assignments_template/e9e55ad69f23bafc835aa856258a380d8edd5398/assignment5/pkg/windows_386/raft.a -------------------------------------------------------------------------------- /assignment5/pkg/windows_amd64/raft.a: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/COS418F18/assignments_template/e9e55ad69f23bafc835aa856258a380d8edd5398/assignment5/pkg/windows_amd64/raft.a -------------------------------------------------------------------------------- /assignment5/src/kvraft/client.go: -------------------------------------------------------------------------------- 1 | package raftkv 2 | 3 | import "labrpc" 4 | import "crypto/rand" 5 | import "math/big" 6 | 7 | 8 | type Clerk struct { 9 | servers []*labrpc.ClientEnd 10 | // You will have to modify this struct. 11 | } 12 | 13 | func nrand() int64 { 14 | max := big.NewInt(int64(1) << 62) 15 | bigx, _ := rand.Int(rand.Reader, max) 16 | x := bigx.Int64() 17 | return x 18 | } 19 | 20 | func MakeClerk(servers []*labrpc.ClientEnd) *Clerk { 21 | ck := new(Clerk) 22 | ck.servers = servers 23 | // You'll have to add code here. 24 | return ck 25 | } 26 | 27 | // 28 | // fetch the current value for a key. 29 | // returns "" if the key does not exist. 30 | // keeps trying forever in the face of all other errors. 31 | // 32 | // you can send an RPC with code like this: 33 | // ok := ck.servers[i].Call("RaftKV.Get", &args, &reply) 34 | // 35 | // the types of args and reply (including whether they are pointers) 36 | // must match the declared types of the RPC handler function's 37 | // arguments. and reply must be passed as a pointer. 38 | // 39 | func (ck *Clerk) Get(key string) string { 40 | 41 | // You will have to modify this function. 42 | return "" 43 | } 44 | 45 | // 46 | // shared by Put and Append. 47 | // 48 | // you can send an RPC with code like this: 49 | // ok := ck.servers[i].Call("RaftKV.PutAppend", &args, &reply) 50 | // 51 | // the types of args and reply (including whether they are pointers) 52 | // must match the declared types of the RPC handler function's 53 | // arguments. and reply must be passed as a pointer. 54 | // 55 | func (ck *Clerk) PutAppend(key string, value string, op string) { 56 | // You will have to modify this function. 
57 | } 58 | 59 | func (ck *Clerk) Put(key string, value string) { 60 | ck.PutAppend(key, value, "Put") 61 | } 62 | func (ck *Clerk) Append(key string, value string) { 63 | ck.PutAppend(key, value, "Append") 64 | } 65 | -------------------------------------------------------------------------------- /assignment5/src/kvraft/common.go: -------------------------------------------------------------------------------- 1 | package raftkv 2 | 3 | const ( 4 | OK = "OK" 5 | ErrNoKey = "ErrNoKey" 6 | ) 7 | 8 | type Err string 9 | 10 | // Put or Append 11 | type PutAppendArgs struct { 12 | // You'll have to add definitions here. 13 | Key string 14 | Value string 15 | Op string // "Put" or "Append" 16 | // You'll have to add definitions here. 17 | // Field names must start with capital letters, 18 | // otherwise RPC will break. 19 | } 20 | 21 | type PutAppendReply struct { 22 | WrongLeader bool 23 | Err Err 24 | } 25 | 26 | type GetArgs struct { 27 | Key string 28 | // You'll have to add definitions here. 
29 | } 30 | 31 | type GetReply struct { 32 | WrongLeader bool 33 | Err Err 34 | Value string 35 | } 36 | -------------------------------------------------------------------------------- /assignment5/src/kvraft/config.go: -------------------------------------------------------------------------------- 1 | package raftkv 2 | 3 | import "labrpc" 4 | import "testing" 5 | import "os" 6 | 7 | // import "log" 8 | import crand "crypto/rand" 9 | import "math/rand" 10 | import "encoding/base64" 11 | import "sync" 12 | import "runtime" 13 | import "raft" 14 | 15 | func randstring(n int) string { 16 | b := make([]byte, 2*n) 17 | crand.Read(b) 18 | s := base64.URLEncoding.EncodeToString(b) 19 | return s[0:n] 20 | } 21 | 22 | // Randomize server handles 23 | func random_handles(kvh []*labrpc.ClientEnd) []*labrpc.ClientEnd { 24 | sa := make([]*labrpc.ClientEnd, len(kvh)) 25 | copy(sa, kvh) 26 | for i := range sa { 27 | j := rand.Intn(i + 1) 28 | sa[i], sa[j] = sa[j], sa[i] 29 | } 30 | return sa 31 | } 32 | 33 | type config struct { 34 | mu sync.Mutex 35 | t *testing.T 36 | tag string 37 | net *labrpc.Network 38 | n int 39 | kvservers []*RaftKV 40 | saved []*raft.Persister 41 | endnames [][]string // names of each server's sending ClientEnds 42 | clerks map[*Clerk][]string 43 | nextClientId int 44 | maxraftstate int 45 | } 46 | 47 | func (cfg *config) cleanup() { 48 | cfg.mu.Lock() 49 | defer cfg.mu.Unlock() 50 | for i := 0; i < len(cfg.kvservers); i++ { 51 | if cfg.kvservers[i] != nil { 52 | cfg.kvservers[i].Kill() 53 | } 54 | } 55 | } 56 | 57 | // Maximum log size across all servers 58 | func (cfg *config) LogSize() int { 59 | logsize := 0 60 | for i := 0; i < cfg.n; i++ { 61 | n := cfg.saved[i].RaftStateSize() 62 | if n > logsize { 63 | logsize = n 64 | } 65 | } 66 | return logsize 67 | } 68 | 69 | // attach server i to servers listed in to 70 | // caller must hold cfg.mu 71 | func (cfg *config) connectUnlocked(i int, to []int) { 72 | // log.Printf("connect peer %d to %v\n", i, 
to) 73 | 74 | // outgoing socket files 75 | for j := 0; j < len(to); j++ { 76 | endname := cfg.endnames[i][to[j]] 77 | cfg.net.Enable(endname, true) 78 | } 79 | 80 | // incoming socket files 81 | for j := 0; j < len(to); j++ { 82 | endname := cfg.endnames[to[j]][i] 83 | cfg.net.Enable(endname, true) 84 | } 85 | } 86 | 87 | func (cfg *config) connect(i int, to []int) { 88 | cfg.mu.Lock() 89 | defer cfg.mu.Unlock() 90 | cfg.connectUnlocked(i, to) 91 | } 92 | 93 | // detach server i from the servers listed in from 94 | // caller must hold cfg.mu 95 | func (cfg *config) disconnectUnlocked(i int, from []int) { 96 | // log.Printf("disconnect peer %d from %v\n", i, from) 97 | 98 | // outgoing socket files 99 | for j := 0; j < len(from); j++ { 100 | if cfg.endnames[i] != nil { 101 | endname := cfg.endnames[i][from[j]] 102 | cfg.net.Enable(endname, false) 103 | } 104 | } 105 | 106 | // incoming socket files 107 | for j := 0; j < len(from); j++ { 108 | if cfg.endnames[j] != nil { 109 | endname := cfg.endnames[from[j]][i] 110 | cfg.net.Enable(endname, false) 111 | } 112 | } 113 | } 114 | 115 | func (cfg *config) disconnect(i int, from []int) { 116 | cfg.mu.Lock() 117 | defer cfg.mu.Unlock() 118 | cfg.disconnectUnlocked(i, from) 119 | } 120 | 121 | func (cfg *config) All() []int { 122 | all := make([]int, cfg.n) 123 | for i := 0; i < cfg.n; i++ { 124 | all[i] = i 125 | } 126 | return all 127 | } 128 | 129 | func (cfg *config) ConnectAll() { 130 | cfg.mu.Lock() 131 | defer cfg.mu.Unlock() 132 | for i := 0; i < cfg.n; i++ { 133 | cfg.connectUnlocked(i, cfg.All()) 134 | } 135 | } 136 | 137 | // Sets up 2 partitions with connectivity between servers in each partition. 
138 | func (cfg *config) partition(p1 []int, p2 []int) { 139 | cfg.mu.Lock() 140 | defer cfg.mu.Unlock() 141 | // log.Printf("partition servers into: %v %v\n", p1, p2) 142 | for i := 0; i < len(p1); i++ { 143 | cfg.disconnectUnlocked(p1[i], p2) 144 | cfg.connectUnlocked(p1[i], p1) 145 | } 146 | for i := 0; i < len(p2); i++ { 147 | cfg.disconnectUnlocked(p2[i], p1) 148 | cfg.connectUnlocked(p2[i], p2) 149 | } 150 | } 151 | 152 | // Create a clerk with clerk specific server names. 153 | // Give it connections to all of the servers, but for 154 | // now enable only connections to servers in to[]. 155 | func (cfg *config) makeClient(to []int) *Clerk { 156 | cfg.mu.Lock() 157 | defer cfg.mu.Unlock() 158 | 159 | // a fresh set of ClientEnds. 160 | ends := make([]*labrpc.ClientEnd, cfg.n) 161 | endnames := make([]string, cfg.n) 162 | for j := 0; j < cfg.n; j++ { 163 | endnames[j] = randstring(20) 164 | ends[j] = cfg.net.MakeEnd(endnames[j]) 165 | cfg.net.Connect(endnames[j], j) 166 | } 167 | 168 | ck := MakeClerk(random_handles(ends)) 169 | cfg.clerks[ck] = endnames 170 | cfg.nextClientId++ 171 | cfg.ConnectClientUnlocked(ck, to) 172 | return ck 173 | } 174 | 175 | func (cfg *config) deleteClient(ck *Clerk) { 176 | cfg.mu.Lock() 177 | defer cfg.mu.Unlock() 178 | 179 | v := cfg.clerks[ck] 180 | for i := 0; i < len(v); i++ { 181 | os.Remove(v[i]) 182 | } 183 | delete(cfg.clerks, ck) 184 | } 185 | 186 | // caller should hold cfg.mu 187 | func (cfg *config) ConnectClientUnlocked(ck *Clerk, to []int) { 188 | // log.Printf("ConnectClient %v to %v\n", ck, to) 189 | endnames := cfg.clerks[ck] 190 | for j := 0; j < len(to); j++ { 191 | s := endnames[to[j]] 192 | cfg.net.Enable(s, true) 193 | } 194 | } 195 | 196 | func (cfg *config) ConnectClient(ck *Clerk, to []int) { 197 | cfg.mu.Lock() 198 | defer cfg.mu.Unlock() 199 | cfg.ConnectClientUnlocked(ck, to) 200 | } 201 | 202 | // caller should hold cfg.mu 203 | func (cfg *config) DisconnectClientUnlocked(ck *Clerk, from []int) { 204 
| // log.Printf("DisconnectClient %v from %v\n", ck, from) 205 | endnames := cfg.clerks[ck] 206 | for j := 0; j < len(from); j++ { 207 | s := endnames[from[j]] 208 | cfg.net.Enable(s, false) 209 | } 210 | } 211 | 212 | func (cfg *config) DisconnectClient(ck *Clerk, from []int) { 213 | cfg.mu.Lock() 214 | defer cfg.mu.Unlock() 215 | cfg.DisconnectClientUnlocked(ck, from) 216 | } 217 | 218 | // Shut down a server by isolating it. 219 | func (cfg *config) ShutdownServer(i int) { 220 | cfg.mu.Lock() 221 | defer cfg.mu.Unlock() 222 | 223 | cfg.disconnectUnlocked(i, cfg.All()) 224 | 225 | // disable client connections to the server. 226 | // it's important to do this before creating 227 | // the new Persister in saved[i], to avoid 228 | // the possibility of the server returning a 229 | // positive reply to an Append but persisting 230 | // the result in the superseded Persister. 231 | cfg.net.DeleteServer(i) 232 | 233 | // a fresh persister, in case old instance 234 | // continues to update the Persister. 235 | // but copy old persister's content so that we always 236 | // pass Make() the last persisted state. 237 | if cfg.saved[i] != nil { 238 | cfg.saved[i] = cfg.saved[i].Copy() 239 | } 240 | 241 | kv := cfg.kvservers[i] 242 | if kv != nil { 243 | cfg.mu.Unlock() 244 | kv.Kill() 245 | cfg.mu.Lock() 246 | cfg.kvservers[i] = nil 247 | } 248 | } 249 | 250 | // To restart a server, call ShutdownServer() first. 251 | func (cfg *config) StartServer(i int) { 252 | cfg.mu.Lock() 253 | 254 | // a fresh set of outgoing ClientEnd names. 255 | cfg.endnames[i] = make([]string, cfg.n) 256 | for j := 0; j < cfg.n; j++ { 257 | cfg.endnames[i][j] = randstring(20) 258 | } 259 | 260 | // a fresh set of ClientEnds.
261 | ends := make([]*labrpc.ClientEnd, cfg.n) 262 | for j := 0; j < cfg.n; j++ { 263 | ends[j] = cfg.net.MakeEnd(cfg.endnames[i][j]) 264 | cfg.net.Connect(cfg.endnames[i][j], j) 265 | } 266 | 267 | // a fresh persister, so old instance doesn't overwrite 268 | // new instance's persisted state. 269 | // give the fresh persister a copy of the old persister's 270 | // state, so that the spec is that we pass StartKVServer() 271 | // the last persisted state. 272 | if cfg.saved[i] != nil { 273 | cfg.saved[i] = cfg.saved[i].Copy() 274 | } else { 275 | cfg.saved[i] = raft.MakePersister() 276 | } 277 | cfg.mu.Unlock() 278 | 279 | cfg.kvservers[i] = StartKVServer(ends, i, cfg.saved[i], cfg.maxraftstate) 280 | 281 | kvsvc := labrpc.MakeService(cfg.kvservers[i]) 282 | rfsvc := labrpc.MakeService(cfg.kvservers[i].rf) 283 | srv := labrpc.MakeServer() 284 | srv.AddService(kvsvc) 285 | srv.AddService(rfsvc) 286 | cfg.net.AddServer(i, srv) 287 | } 288 | 289 | func (cfg *config) Leader() (bool, int) { 290 | cfg.mu.Lock() 291 | defer cfg.mu.Unlock() 292 | 293 | for i := 0; i < cfg.n; i++ { 294 | _, is_leader := cfg.kvservers[i].rf.GetState() 295 | if is_leader { 296 | return true, i 297 | } 298 | } 299 | return false, 0 300 | } 301 | 302 | // Partition servers into 2 groups and put current leader in minority 303 | func (cfg *config) make_partition() ([]int, []int) { 304 | _, l := cfg.Leader() 305 | p1 := make([]int, cfg.n/2+1) 306 | p2 := make([]int, cfg.n/2) 307 | j := 0 308 | for i := 0; i < cfg.n; i++ { 309 | if i != l { 310 | if j < len(p1) { 311 | p1[j] = i 312 | } else { 313 | p2[j-len(p1)] = i 314 | } 315 | j++ 316 | } 317 | } 318 | p2[len(p2)-1] = l 319 | return p1, p2 320 | } 321 | 322 | func make_config(t *testing.T, tag string, n int, unreliable bool, maxraftstate int) *config { 323 | runtime.GOMAXPROCS(4) 324 | cfg := &config{} 325 | cfg.t = t 326 | cfg.tag = tag 327 | cfg.net = labrpc.MakeNetwork() 328 | cfg.n = n 329 | cfg.kvservers = make([]*RaftKV, cfg.n) 330 | 
cfg.saved = make([]*raft.Persister, cfg.n) 331 | cfg.endnames = make([][]string, cfg.n) 332 | cfg.clerks = make(map[*Clerk][]string) 333 | cfg.nextClientId = cfg.n + 1000 // client ids start 1000 above the highest serverid 334 | cfg.maxraftstate = maxraftstate 335 | 336 | // create a full set of KV servers. 337 | for i := 0; i < cfg.n; i++ { 338 | cfg.StartServer(i) 339 | } 340 | 341 | cfg.ConnectAll() 342 | 343 | cfg.net.Reliable(!unreliable) 344 | 345 | return cfg 346 | } 347 | -------------------------------------------------------------------------------- /assignment5/src/kvraft/server.go: -------------------------------------------------------------------------------- 1 | package raftkv 2 | 3 | import ( 4 | "encoding/gob" 5 | "labrpc" 6 | "log" 7 | "raft" 8 | "sync" 9 | ) 10 | 11 | const Debug = 0 12 | 13 | func DPrintf(format string, a ...interface{}) (n int, err error) { 14 | if Debug > 0 { 15 | log.Printf(format, a...) 16 | } 17 | return 18 | } 19 | 20 | 21 | type Op struct { 22 | // Your definitions here. 23 | // Field names must start with capital letters, 24 | // otherwise RPC will break. 25 | } 26 | 27 | type RaftKV struct { 28 | mu sync.Mutex 29 | me int 30 | rf *raft.Raft 31 | applyCh chan raft.ApplyMsg 32 | 33 | maxraftstate int // snapshot if log grows this big 34 | 35 | // Your definitions here. 36 | } 37 | 38 | 39 | func (kv *RaftKV) Get(args *GetArgs, reply *GetReply) { 40 | // Your code here. 41 | } 42 | 43 | func (kv *RaftKV) PutAppend(args *PutAppendArgs, reply *PutAppendReply) { 44 | // Your code here. 45 | } 46 | 47 | // 48 | // the tester calls Kill() when a RaftKV instance won't 49 | // be needed again. you are not required to do anything 50 | // in Kill(), but it might be convenient to (for example) 51 | // turn off debug output from this instance. 52 | // 53 | func (kv *RaftKV) Kill() { 54 | kv.rf.Kill() 55 | // Your code here, if desired. 
56 | } 57 | 58 | // 59 | // servers[] contains the ports of the set of 60 | // servers that will cooperate via Raft to 61 | // form the fault-tolerant key/value service. 62 | // me is the index of the current server in servers[]. 63 | // the k/v server should store snapshots with persister.SaveSnapshot(), 64 | // and Raft should save its state (including log) with persister.SaveRaftState(). 65 | // the k/v server should snapshot when Raft's saved state exceeds maxraftstate bytes, 66 | // in order to allow Raft to garbage-collect its log. if maxraftstate is -1, 67 | // you don't need to snapshot. 68 | // StartKVServer() must return quickly, so it should start goroutines 69 | // for any long-running work. 70 | // 71 | func StartKVServer(servers []*labrpc.ClientEnd, me int, persister *raft.Persister, maxraftstate int) *RaftKV { 72 | // call gob.Register on structures you want 73 | // Go's RPC library to marshal/unmarshal. 74 | gob.Register(Op{}) 75 | 76 | kv := new(RaftKV) 77 | kv.me = me 78 | kv.maxraftstate = maxraftstate 79 | 80 | // Your initialization code here. 81 | 82 | kv.applyCh = make(chan raft.ApplyMsg) 83 | kv.rf = raft.Make(servers, me, persister, kv.applyCh) 84 | 85 | 86 | return kv 87 | } 88 | -------------------------------------------------------------------------------- /assignment5/src/raft/config.go: -------------------------------------------------------------------------------- 1 | package raft 2 | 3 | // 4 | // support for Raft tester. 5 | // 6 | // we will use the original config.go to test your code for grading. 7 | // so, while you can modify this code to help you debug, please 8 | // test with the original before submitting.
9 | // 10 | 11 | import "labrpc" 12 | import "log" 13 | import "sync" 14 | import "testing" 15 | import "runtime" 16 | import crand "crypto/rand" 17 | import "encoding/base64" 18 | import "sync/atomic" 19 | import "time" 20 | import "fmt" 21 | 22 | func randstring(n int) string { 23 | b := make([]byte, 2*n) 24 | crand.Read(b) 25 | s := base64.URLEncoding.EncodeToString(b) 26 | return s[0:n] 27 | } 28 | 29 | type config struct { 30 | mu sync.Mutex 31 | t *testing.T 32 | net *labrpc.Network 33 | n int 34 | done int32 // tell internal threads to die 35 | rafts []*Raft 36 | applyErr []string // from apply channel readers 37 | connected []bool // whether each server is on the net 38 | saved []*Persister 39 | endnames [][]string // the port file names each sends to 40 | logs []map[int]int // copy of each server's committed entries 41 | } 42 | 43 | func make_config(t *testing.T, n int, unreliable bool) *config { 44 | runtime.GOMAXPROCS(4) 45 | cfg := &config{} 46 | cfg.t = t 47 | cfg.net = labrpc.MakeNetwork() 48 | cfg.n = n 49 | cfg.applyErr = make([]string, cfg.n) 50 | cfg.rafts = make([]*Raft, cfg.n) 51 | cfg.connected = make([]bool, cfg.n) 52 | cfg.saved = make([]*Persister, cfg.n) 53 | cfg.endnames = make([][]string, cfg.n) 54 | cfg.logs = make([]map[int]int, cfg.n) 55 | 56 | cfg.setunreliable(unreliable) 57 | 58 | cfg.net.LongDelays(true) 59 | 60 | // create a full set of Rafts. 61 | for i := 0; i < cfg.n; i++ { 62 | cfg.logs[i] = map[int]int{} 63 | cfg.start1(i) 64 | } 65 | 66 | // connect everyone 67 | for i := 0; i < cfg.n; i++ { 68 | cfg.connect(i) 69 | } 70 | 71 | return cfg 72 | } 73 | 74 | // shut down a Raft server but save its persistent state. 75 | func (cfg *config) crash1(i int) { 76 | cfg.disconnect(i) 77 | cfg.net.DeleteServer(i) // disable client connections to the server. 78 | 79 | cfg.mu.Lock() 80 | defer cfg.mu.Unlock() 81 | 82 | // a fresh persister, in case old instance 83 | // continues to update the Persister. 
84 | // but copy old persister's content so that we always 85 | // pass Make() the last persisted state. 86 | if cfg.saved[i] != nil { 87 | cfg.saved[i] = cfg.saved[i].Copy() 88 | } 89 | 90 | rf := cfg.rafts[i] 91 | if rf != nil { 92 | cfg.mu.Unlock() 93 | rf.Kill() 94 | cfg.mu.Lock() 95 | cfg.rafts[i] = nil 96 | } 97 | 98 | if cfg.saved[i] != nil { 99 | raftlog := cfg.saved[i].ReadRaftState() 100 | cfg.saved[i] = &Persister{} 101 | cfg.saved[i].SaveRaftState(raftlog) 102 | } 103 | } 104 | 105 | // 106 | // start or re-start a Raft. 107 | // if one already exists, "kill" it first. 108 | // allocate new outgoing port file names, and a new 109 | // state persister, to isolate previous instance of 110 | // this server. since we cannot really kill it. 111 | // 112 | func (cfg *config) start1(i int) { 113 | cfg.crash1(i) 114 | 115 | // a fresh set of outgoing ClientEnd names. 116 | // so that old crashed instance's ClientEnds can't send. 117 | cfg.endnames[i] = make([]string, cfg.n) 118 | for j := 0; j < cfg.n; j++ { 119 | cfg.endnames[i][j] = randstring(20) 120 | } 121 | 122 | // a fresh set of ClientEnds. 123 | ends := make([]*labrpc.ClientEnd, cfg.n) 124 | for j := 0; j < cfg.n; j++ { 125 | ends[j] = cfg.net.MakeEnd(cfg.endnames[i][j]) 126 | cfg.net.Connect(cfg.endnames[i][j], j) 127 | } 128 | 129 | cfg.mu.Lock() 130 | 131 | // a fresh persister, so old instance doesn't overwrite 132 | // new instance's persisted state. 133 | // but copy old persister's content so that we always 134 | // pass Make() the last persisted state. 135 | if cfg.saved[i] != nil { 136 | cfg.saved[i] = cfg.saved[i].Copy() 137 | } else { 138 | cfg.saved[i] = MakePersister() 139 | } 140 | 141 | cfg.mu.Unlock() 142 | 143 | // listen to messages from Raft indicating newly committed messages. 
144 | applyCh := make(chan ApplyMsg) 145 | go func() { 146 | for m := range applyCh { 147 | err_msg := "" 148 | if m.UseSnapshot { 149 | // ignore the snapshot 150 | } else if v, ok := (m.Command).(int); ok { 151 | cfg.mu.Lock() 152 | for j := 0; j < len(cfg.logs); j++ { 153 | if old, oldok := cfg.logs[j][m.Index]; oldok && old != v { 154 | // some server has already committed a different value for this entry! 155 | err_msg = fmt.Sprintf("commit index=%v server=%v %v != server=%v %v", 156 | m.Index, i, m.Command, j, old) 157 | } 158 | } 159 | _, prevok := cfg.logs[i][m.Index-1] 160 | cfg.logs[i][m.Index] = v 161 | cfg.mu.Unlock() 162 | 163 | if m.Index > 1 && prevok == false { 164 | err_msg = fmt.Sprintf("server %v apply out of order %v", i, m.Index) 165 | } 166 | } else { 167 | err_msg = fmt.Sprintf("committed command %v is not an int", m.Command) 168 | } 169 | 170 | if err_msg != "" { 171 | cfg.applyErr[i] = err_msg 172 | log.Printf("apply error: %v\n", err_msg) 173 | // keep reading after error so that Raft doesn't block 174 | // holding locks... 175 | } 176 | } 177 | }() 178 | 179 | rf := Make(ends, i, cfg.saved[i], applyCh) 180 | 181 | cfg.mu.Lock() 182 | cfg.rafts[i] = rf 183 | cfg.mu.Unlock() 184 | 185 | svc := labrpc.MakeService(rf) 186 | srv := labrpc.MakeServer() 187 | srv.AddService(svc) 188 | cfg.net.AddServer(i, srv) 189 | } 190 | 191 | func (cfg *config) cleanup() { 192 | for i := 0; i < len(cfg.rafts); i++ { 193 | if cfg.rafts[i] != nil { 194 | cfg.rafts[i].Kill() 195 | } 196 | } 197 | atomic.StoreInt32(&cfg.done, 1) 198 | } 199 | 200 | // attach server i to the net.
201 | func (cfg *config) connect(i int) { 202 | // fmt.Printf("connect(%d)\n", i) 203 | 204 | cfg.connected[i] = true 205 | 206 | // outgoing ClientEnds 207 | for j := 0; j < cfg.n; j++ { 208 | if cfg.connected[j] { 209 | endname := cfg.endnames[i][j] 210 | cfg.net.Enable(endname, true) 211 | } 212 | } 213 | 214 | // incoming ClientEnds 215 | for j := 0; j < cfg.n; j++ { 216 | if cfg.connected[j] { 217 | endname := cfg.endnames[j][i] 218 | cfg.net.Enable(endname, true) 219 | } 220 | } 221 | } 222 | 223 | // detach server i from the net. 224 | func (cfg *config) disconnect(i int) { 225 | // fmt.Printf("disconnect(%d)\n", i) 226 | 227 | cfg.connected[i] = false 228 | 229 | // outgoing ClientEnds 230 | for j := 0; j < cfg.n; j++ { 231 | if cfg.endnames[i] != nil { 232 | endname := cfg.endnames[i][j] 233 | cfg.net.Enable(endname, false) 234 | } 235 | } 236 | 237 | // incoming ClientEnds 238 | for j := 0; j < cfg.n; j++ { 239 | if cfg.endnames[j] != nil { 240 | endname := cfg.endnames[j][i] 241 | cfg.net.Enable(endname, false) 242 | } 243 | } 244 | } 245 | 246 | func (cfg *config) rpcCount(server int) int { 247 | return cfg.net.GetCount(server) 248 | } 249 | 250 | func (cfg *config) setunreliable(unrel bool) { 251 | cfg.net.Reliable(!unrel) 252 | } 253 | 254 | func (cfg *config) setlongreordering(longrel bool) { 255 | cfg.net.LongReordering(longrel) 256 | } 257 | 258 | // check that there's exactly one leader. 259 | // try a few times in case re-elections are needed. 
260 | func (cfg *config) checkOneLeader() int { 261 | for iters := 0; iters < 10; iters++ { 262 | time.Sleep(500 * time.Millisecond) 263 | leaders := make(map[int][]int) 264 | for i := 0; i < cfg.n; i++ { 265 | if cfg.connected[i] { 266 | if t, leader := cfg.rafts[i].GetState(); leader { 267 | leaders[t] = append(leaders[t], i) 268 | } 269 | } 270 | } 271 | 272 | lastTermWithLeader := -1 273 | for t, leaders := range leaders { 274 | if len(leaders) > 1 { 275 | cfg.t.Fatalf("term %d has %d (>1) leaders\n", t, len(leaders)) 276 | } 277 | if t > lastTermWithLeader { 278 | lastTermWithLeader = t 279 | } 280 | } 281 | 282 | if len(leaders) != 0 { 283 | return leaders[lastTermWithLeader][0] 284 | } 285 | } 286 | cfg.t.Fatal("expected one leader, got none") 287 | return -1 288 | } 289 | 290 | // check that everyone agrees on the term. 291 | func (cfg *config) checkTerms() int { 292 | term := -1 293 | for i := 0; i < cfg.n; i++ { 294 | if cfg.connected[i] { 295 | xterm, _ := cfg.rafts[i].GetState() 296 | if term == -1 { 297 | term = xterm 298 | } else if term != xterm { 299 | cfg.t.Fatal("servers disagree on term") 300 | } 301 | } 302 | } 303 | return term 304 | } 305 | 306 | // check that there's no leader 307 | func (cfg *config) checkNoLeader() { 308 | for i := 0; i < cfg.n; i++ { 309 | if cfg.connected[i] { 310 | _, is_leader := cfg.rafts[i].GetState() 311 | if is_leader { 312 | cfg.t.Fatalf("expected no leader, but %v claims to be leader\n", i) 313 | } 314 | } 315 | } 316 | } 317 | 318 | // how many servers think a log entry is committed? 
319 | func (cfg *config) nCommitted(index int) (int, interface{}) { 320 | count := 0 321 | cmd := -1 322 | for i := 0; i < len(cfg.rafts); i++ { 323 | if cfg.applyErr[i] != "" { 324 | cfg.t.Fatal(cfg.applyErr[i]) 325 | } 326 | 327 | cfg.mu.Lock() 328 | cmd1, ok := cfg.logs[i][index] 329 | cfg.mu.Unlock() 330 | 331 | if ok { 332 | if count > 0 && cmd != cmd1 { 333 | cfg.t.Fatalf("committed values do not match: index %v, %v, %v\n", 334 | index, cmd, cmd1) 335 | } 336 | count += 1 337 | cmd = cmd1 338 | } 339 | } 340 | return count, cmd 341 | } 342 | 343 | // wait for at least n servers to commit. 344 | // but don't wait forever. 345 | func (cfg *config) wait(index int, n int, startTerm int) interface{} { 346 | to := 10 * time.Millisecond 347 | for iters := 0; iters < 30; iters++ { 348 | nd, _ := cfg.nCommitted(index) 349 | if nd >= n { 350 | break 351 | } 352 | time.Sleep(to) 353 | if to < time.Second { 354 | to *= 2 355 | } 356 | if startTerm > -1 { 357 | for _, r := range cfg.rafts { 358 | if t, _ := r.GetState(); t > startTerm { 359 | // someone has moved on 360 | // can no longer guarantee that we'll "win" 361 | return -1 362 | } 363 | } 364 | } 365 | } 366 | nd, cmd := cfg.nCommitted(index) 367 | if nd < n { 368 | cfg.t.Fatalf("only %d decided for index %d; wanted %d\n", 369 | nd, index, n) 370 | } 371 | return cmd 372 | } 373 | 374 | // do a complete agreement. 375 | // it might choose the wrong leader initially, 376 | // and have to re-submit after giving up. 377 | // entirely gives up after about 10 seconds. 378 | // indirectly checks that the servers agree on the 379 | // same value, since nCommitted() checks this, 380 | // as do the threads that read from applyCh. 381 | // returns index. 382 | func (cfg *config) one(cmd int, expectedServers int) int { 383 | t0 := time.Now() 384 | starts := 0 385 | for time.Since(t0).Seconds() < 10 { 386 | // try all the servers, maybe one is the leader. 
387 | index := -1 388 | for si := 0; si < cfg.n; si++ { 389 | starts = (starts + 1) % cfg.n 390 | var rf *Raft 391 | cfg.mu.Lock() 392 | if cfg.connected[starts] { 393 | rf = cfg.rafts[starts] 394 | } 395 | cfg.mu.Unlock() 396 | if rf != nil { 397 | index1, _, ok := rf.Start(cmd) 398 | if ok { 399 | index = index1 400 | break 401 | } 402 | } 403 | } 404 | 405 | if index != -1 { 406 | // somebody claimed to be the leader and to have 407 | // submitted our command; wait a while for agreement. 408 | t1 := time.Now() 409 | for time.Since(t1).Seconds() < 2 { 410 | nd, cmd1 := cfg.nCommitted(index) 411 | if nd > 0 && nd >= expectedServers { 412 | // committed 413 | if cmd2, ok := cmd1.(int); ok && cmd2 == cmd { 414 | // and it was the command we submitted. 415 | return index 416 | } 417 | } 418 | time.Sleep(20 * time.Millisecond) 419 | } 420 | } else { 421 | time.Sleep(50 * time.Millisecond) 422 | } 423 | } 424 | cfg.t.Fatalf("one(%v) failed to reach agreement\n", cmd) 425 | return -1 426 | } 427 | -------------------------------------------------------------------------------- /assignment5/src/raft/persister.go: -------------------------------------------------------------------------------- 1 | package raft 2 | 3 | // 4 | // support for Raft and kvraft to save persistent 5 | // Raft state (log &c) and k/v server snapshots. 6 | // 7 | // we will use the original persister.go to test your code for grading. 8 | // so, while you can modify this code to help you debug, please 9 | // test with the original before submitting. 
10 | // 11 | 12 | import "sync" 13 | 14 | type Persister struct { 15 | mu sync.Mutex 16 | raftstate []byte 17 | snapshot []byte 18 | } 19 | 20 | func MakePersister() *Persister { 21 | return &Persister{} 22 | } 23 | 24 | func (ps *Persister) Copy() *Persister { 25 | ps.mu.Lock() 26 | defer ps.mu.Unlock() 27 | np := MakePersister() 28 | np.raftstate = ps.raftstate 29 | np.snapshot = ps.snapshot 30 | return np 31 | } 32 | 33 | func (ps *Persister) SaveRaftState(data []byte) { 34 | ps.mu.Lock() 35 | defer ps.mu.Unlock() 36 | ps.raftstate = data 37 | } 38 | 39 | func (ps *Persister) ReadRaftState() []byte { 40 | ps.mu.Lock() 41 | defer ps.mu.Unlock() 42 | return ps.raftstate 43 | } 44 | 45 | func (ps *Persister) RaftStateSize() int { 46 | ps.mu.Lock() 47 | defer ps.mu.Unlock() 48 | return len(ps.raftstate) 49 | } 50 | 51 | func (ps *Persister) SaveSnapshot(snapshot []byte) { 52 | ps.mu.Lock() 53 | defer ps.mu.Unlock() 54 | ps.snapshot = snapshot 55 | } 56 | 57 | func (ps *Persister) ReadSnapshot() []byte { 58 | ps.mu.Lock() 59 | defer ps.mu.Unlock() 60 | return ps.snapshot 61 | } 62 | -------------------------------------------------------------------------------- /assignment5/src/raft/raft.go: -------------------------------------------------------------------------------- 1 | //go:binary-only-package 2 | 3 | package raft -------------------------------------------------------------------------------- /assignment5/src/raft/util.go: -------------------------------------------------------------------------------- 1 | package raft 2 | 3 | import "log" 4 | 5 | // Debugging 6 | const Debug = 0 7 | 8 | func DPrintf(format string, a ...interface{}) (n int, err error) { 9 | if Debug > 0 { 10 | log.Printf(format, a...) 
11 | } 12 | return 13 | } 14 | -------------------------------------------------------------------------------- /setup.md: -------------------------------------------------------------------------------- 1 | # COS418 Assignment Setup 2 | 3 | ### Go Installation 4 | 5 | You will need a working Go environment for the assignments. 6 | Your version should be at least Go 1.9, which is the version the grading scripts will use. 7 | The latest version as of the Fall 2018 semester is Go 1.11; things that work in 1.9 should also work in 1.11. Learn more about semantic versioning [here](https://semver.org/). 8 | 9 | 11 | The CS servers (cycles.cs.princeton.edu) are one option, if you have a CS account. 12 |
13 | spin:~$ which go 14 | /usr/bin/go 15 | spin:~$ go version 16 | go version go1.6.3 linux/amd64 17 | We have tested that all the infrastructure for the course works on these machines. 18 | 19 | 20 |
22 | The Courselab servers (courselab.cs.princeton.edu) are another option, accessible with your 23 | Princeton netID. 24 |
25 | courselab:~$ which go 26 | /usr/bin/go 27 | courselab:~$ go version 28 | go version go1.8.3 linux/amd64 29 | 30 | We only support the above methods for using Go. For help with the courselab servers, see here. 31 | 32 | 33 |
35 | Another option is to install Go on your own machine manually. There are instructions to install from source or with a 36 | package installer for several operating systems at Google's Go site: golang.org. 37 |
38 | 39 |41 | Finally, for Macs many people use package management software, the two most common of which are 42 | Homebrew and 43 | MacPorts 44 | (these links include installation instructions for the package managers themselves). 45 | Here is a walkthrough of installing Go using each of these: 46 |
47 | dustpuppy:~$ brew --version 48 | 0.9.5 49 | dustpuppy:~$ go 50 | -bash: go: command not found 51 | dustpuppy:~$ brew install go 52 | ==> Downloading https://homebrew.bintray.com/bottles/go-1.7.1.el_capitan.bottle. 53 | ######################################################################## 100.0% 54 | ==> Pouring go-1.7.1.el_capitan.bottle.tar.gz 55 | ==> Caveats 56 | As of go 1.2, a valid GOPATH is required to use the `go get` command: 57 | https://golang.org/doc/code.html#GOPATH 58 | 59 | You may wish to add the GOROOT-based install location to your PATH: 60 | export PATH=$PATH:/usr/local/opt/go/libexec/bin 61 | ==> Summary 62 | 🍺 /usr/local/Cellar/go/1.7.1: 6,436 files, 250.6M 63 | dustpuppy:~$ go version 64 | go version go1.7.1 darwin/amd6465 | NB: if brew install go attempts to install an ancient version (e.g. 1.3) you will have to do brew update first to refresh your list of packages that Homebrew knows about. 66 |
67 | dustpuppy:~$ port version 68 | Version: 2.3.4 69 | dustpuppy:~$ go 70 | -bash: go: command not found 71 | dustpuppy:~$ sudo port install go 72 | Password: 73 | Warning: The Xcode Command Line Tools don't appear to be installed; most ports will likely fail to build. 74 | Warning: Install them by running `xcode-select --install'. 75 | ---> Computing dependencies for go 76 | ---> Dependencies to be installed: go-1.4 77 | ---> Fetching archive for go-1.4 78 | ---> Attempting to fetch go-1.4-1.4.3_0.darwin_15.x86_64.tbz2 from https://packages.macports.org/go-1.4 79 | ---> Attempting to fetch go-1.4-1.4.3_0.darwin_15.x86_64.tbz2.rmd160 from https://packages.macports.org/go-1.4 80 | ---> Installing go-1.4 @1.4.3_0 81 | ---> Activating go-1.4 @1.4.3_0 82 | ---> Cleaning go-1.4 83 | ---> Fetching archive for go 84 | ---> Attempting to fetch go-1.7_0.darwin_15.x86_64.tbz2 from https://packages.macports.org/go 85 | ---> Attempting to fetch go-1.7_0.darwin_15.x86_64.tbz2.rmd160 from https://packages.macports.org/go 86 | ---> Installing go @1.7_0 87 | ---> Activating go @1.7_0 88 | ---> Cleaning go 89 | ---> Updating database of binaries 90 | ---> Scanning binaries for linking errors 91 | ---> No broken files found. 92 | dustpuppy:~$ go version 93 | go version go1.7 darwin/amd6494 | 95 | 96 |
98 | There are many commonly used tools in the Go ecosystem. The three most useful starting out are: 99 | Go fmt and Go vet, which are built-ins, and Golint, which is similar to the splint tool you used in COS217. 100 |
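To see concretely what go fmt does, note that the standard library exposes the same formatter as the `go/format` package, so you can apply it to source held in a string. A minimal sketch (the helper name `gofmtString` is ours, not a course API):

```go
package main

import (
	"fmt"
	"go/format"
)

// gofmtString applies gofmt's canonical formatting to a string of
// Go source. The input must be syntactically valid Go.
func gofmtString(src string) (string, error) {
	out, err := format.Source([]byte(src))
	if err != nil {
		return "", err
	}
	return string(out), nil
}

func main() {
	messy := "package main\nimport   \"fmt\"\nfunc main( ){fmt.Println(\"hi\" )}"
	clean, err := gofmtString(messy)
	if err != nil {
		panic(err)
	}
	fmt.Print(clean) // canonical spacing and declaration layout
}
```

gofmt's rewrites are purely syntactic, which is why the course can require formatted submissions without changing program behavior.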
101 | 102 |104 | For those of you in touch with your systems side (this is Distributed Systems, after all), there are quite a few resources for Go development in both emacs (additional information available here) and vim (additional resources here). 105 |
106 | 107 | 108 | As many Princeton COS students have become attached to Sublime, here are the two indispensable Sublime packages for Go development: GoSublime and Sublime-Build. And -- learning from the ancient emacs-vi holy war -- it would be inviting trouble to offer Sublime information without likewise dispensing the must-have Atom plugin: Go-Plus (walkthrough and additional info here). 109 |
110 | --------------------------------------------------------------------------------