├── .gitignore
├── LICENSE
├── README.md
├── advance
│   └── MIT6.824
│       ├── Makefile
│       ├── README.md
│       ├── lab
│       │   └── lab1
│       │       ├── 6.5840 Lab 1_ MapReduce.pdf
│       │       └── mapreduce(2004).pdf
│       └── src
│           ├── .gitignore
│           ├── go.mod
│           ├── go.sum
│           ├── kvraft
│           │   ├── client.go
│           │   ├── common.go
│           │   ├── config.go
│           │   ├── server.go
│           │   └── test_test.go
│           ├── labgob
│           │   ├── labgob.go
│           │   └── test_test.go
│           ├── labrpc
│           │   ├── labrpc.go
│           │   └── test_test.go
│           ├── main
│           │   ├── diskvd.go
│           │   ├── lockc.go
│           │   ├── lockd.go
│           │   ├── mr-out-0
│           │   ├── mrcoordinator.go
│           │   ├── mrsequential.go
│           │   ├── mrworker.go
│           │   ├── pbc.go
│           │   ├── pbd.go
│           │   ├── pg-being_ernest.txt
│           │   ├── pg-dorian_gray.txt
│           │   ├── pg-frankenstein.txt
│           │   ├── pg-grimm.txt
│           │   ├── pg-huckleberry_finn.txt
│           │   ├── pg-metamorphosis.txt
│           │   ├── pg-sherlock_holmes.txt
│           │   ├── pg-tom_sawyer.txt
│           │   ├── test-mr-many.sh
│           │   ├── test-mr.sh
│           │   └── viewd.go
│           ├── models
│           │   └── kv.go
│           ├── mr
│           │   ├── coordinator.go
│           │   ├── rpc.go
│           │   └── worker.go
│           ├── mrapps
│           │   ├── crash.go
│           │   ├── early_exit.go
│           │   ├── indexer.go
│           │   ├── jobcount.go
│           │   ├── mtiming.go
│           │   ├── nocrash.go
│           │   ├── rtiming.go
│           │   └── wc.go
│           ├── porcupine
│           │   ├── bitset.go
│           │   ├── checker.go
│           │   ├── model.go
│           │   ├── porcupine.go
│           │   └── visualization.go
│           ├── raft
│           │   ├── config.go
│           │   ├── persister.go
│           │   ├── raft.go
│           │   ├── test_test.go
│           │   └── util.go
│           ├── shardctrler
│           │   ├── client.go
│           │   ├── common.go
│           │   ├── config.go
│           │   ├── server.go
│           │   └── test_test.go
│           └── shardkv
│               ├── client.go
│               ├── common.go
│               ├── config.go
│               ├── server.go
│               └── test_test.go
├── docs
│   ├── 0-推荐资料.md
│   ├── 1-基础语法.md
│   ├── 2-爬虫.md
│   ├── 3-备忘录.md
│   ├── 4-大作品.md
│   ├── 5(2025)-微服务.md
│   ├── 5-简单提升.md
│   ├── 6(2025)-部署与监控.md
│   ├── 6-微服务.md
│   ├── 7-6.824.md
│   ├── 8-合作.md
│   ├── README.md
│   └── deprecated
│       └── 7-底层实现.md
├── etc
│   ├── README.md
│   └── etc.md
└── img
    ├── mindmap-grammer.png
    ├── mindmap-spider.png
    └── mindmap-study.png
/.gitignore:
--------------------------------------------------------------------------------
1 | .idea
2 | *.exe
3 | *.exe~
4 | *.dll
5 | *.so
6 | *.dylib
7 | 
8 | .DS_Store
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # west2-online Golang Assessment Guide
2 | 
3 | This is the assessment guide for the Go track at the west2-online studio. It aims to give beginners a step-by-step route for learning Golang.
4 | 
5 | ## Copyright
6 | 
7 | This project is released under the GPL-3.0 License. If you redistribute it, please cite this repository's address.
8 | 
9 | ## Getting Started
10 | 
11 | Open the `docs` folder; there is a further introduction inside.
12 | 
13 | Our approach combines document-guided study with self-directed learning: we will show you how to pick up the language quickly, but to learn it well and understand it deeply, you will need to go beyond our guidance on your own.
14 | 
15 | From a purely practical standpoint: if you want to join the west2-online studio, you need to complete a reasonable amount of **Bonus content** on top of the required material, since the defense will probe your breadth of knowledge.
16 | 
17 | ## Q&A
18 | 
19 | Answers to common questions are collected in this repository's [Discussions](https://github.com/west2-online/learn-go/discussions); you can also reach them via the button at the top of this page.
20 | 
21 | ## Overview
22 | 
23 | ### Basics
24 | | Stage | Content | Expected duration | Defense required |
25 | |-----------------------------|---------------------------------------------|----------| ------------- |
26 | | [Basics](docs/1-基础语法.md) | Basic syntax, arrays, slices, map, chan, intro to GitHub | 30 days (1 month) | x |
27 | | [Crawler](docs/2-爬虫.md) | HTTP requests/responses, the http package, concurrency | 30 days (1 month) | x |
28 | | [Memo](docs/3-备忘录.md) | Naming/structure conventions; using basic frameworks (Hertz, GORM) | 30 days (1 month) | ✔ |
29 | | [Capstone](docs/4-大作品.md) | Project structure design, three-tier architecture, Docker, Redis, etc.; build an entry-level demo | 60 days (2 months) | ✔ |
30 | | [Step-up](docs/5(2025)-微服务.md) | Microservice architecture, service registration | 45 days (1.5 months) | ✔ |
31 | | [Microservices](docs/6-微服务.md) | TBD | 30 days (1 month) | ✔ |
32 | | [6.824](docs/7-6.824.md) | Distributed system design, MapReduce, the Raft algorithm | 45 days (1.5 months) | ✔ |
33 | | [Collaboration](docs/8-合作.md) | Build a first relatively mature product together with front-end/client teammates; learn project integration, development, and testing | 60 days (2 months) | ✔ |
34 | 
35 | The expected durations assume a full-time university student starting from zero. If you can devote yourself to the language full time, or already know another language, each stage will take less than the estimate.
36 | 
37 | For more, see the `README.md` inside the `docs` folder; it covers study advice, what the defense examines, and other finer-grained arrangements.
38 | 
39 | ### Advanced
40 | 
41 | We want to draw on the labs and ideas of leading courses to build well-rounded Go skills, while steering clear of aimless development (we do not encourage becoming a pure API engineer).
42 | 
43 | Coursework adaptations we are **currently working on**:
44 | 
45 | 1. MIT 6.824 Distributed Systems (2023 Spring)
46 | 2. MIT 6.031 Software Construction
47 | 
48 | ## Schedule
49 | 
50 | Taking end-of-term exams and similar factors into account, the content is scheduled by semester as follows:
51 | 
52 | | Period | Content |
53 | | -------- | ------------------------ |
54 | | Semester 1 | Basics, crawler, memo |
55 | | Winter break | Capstone |
56 | | Semester 2 | Chat room, microservices, low-level source code |
57 | | Summer break | Collaboration / start the advanced track |
58 | 
59 | ## Assessment Design
60 | 
61 | Each assessment round is usually divided into the following parts:
62 | 
63 | | Part | Meaning |
64 | | ----- | ---------------------------------------------- |
65 | | Purpose | What this round is meant to teach |
66 | | Background | (some stages) a story that adds a bit of fun |
67 | | Task | A concrete description of the task |
68 | | Bonus | Deeper functionality/features built on top of the base task |
69 | | Requirements | (some stages) detailed requirements for the task |
70 | | References | A selection of reference material |
71 | | Hints | (some stages) hints about the assessment, or about learning the language |
72 | 
73 | ## Assessment Goals
74 | 
75 | Our goal is to quickly give beginners a **relatively broad knowledge base**. That is, a student who completes each stage of the assessment should be familiar with everyday Golang business development and have basic engineering-project skills.
76 | 
77 | 
78 | 
79 | Obviously, though, **a broad but shallow knowledge base is not a strong advantage when competing for jobs or graduate admission**, so starting with the 2023 cohort we added `Bonus` items. These extra-credit tasks guide you toward deeper material; that depth will give you low-level thinking skills for future graduate study and make job interviews go more smoothly.
80 | 
81 | 
82 | 
83 | If you intend to make Golang your primary language, we strongly recommend completing `all the content` of every round carefully and responsibly, rather than studying only to pass the assessment.
84 | 
85 | ## Project Structure
86 | 
87 | ```
88 | .
89 | ├── LICENSE
90 | ├── README.md
91 | ├── advance   // advanced study
92 | ├── etc       // recommended articles/resources
93 | └── docs      // assessment content
94 |     ├── 0-推荐资料.md
95 |     ├── 1-基础语法.md
96 |     ├── 2-爬虫.md
97 |     ├── 3-备忘录.md
98 |     ├── 4-大作品.md
99 |     ├── 5-简单提升.md
100 |     ├── 6-微服务.md
101 |     ├── 7-6.824.md
102 |     └── 8-合作.md
103 | ```
104 | 
--------------------------------------------------------------------------------
/advance/MIT6.824/Makefile:
--------------------------------------------------------------------------------
1 | # This is the Makefile helping you submit the labs.
2 | # Just create 6.5840/api.key with your API key in it,
3 | # and submit your lab with the following command:
4 | #     $ make [lab1|lab2a|lab2b|lab2c|lab2d|lab3a|lab3b|lab4a|lab4b]
5 | 
6 | LABS=" lab1 lab2a lab2b lab2c lab2d lab3a lab3b lab4a lab4b "
7 | 
8 | %: check-%
9 | 	@echo "Preparing $@-handin.tar.gz"
10 | 	@if echo $(LABS) | grep -q " $@ " ; then \
11 | 		echo "Tarring up your submission..." ; \
12 | 		COPYFILE_DISABLE=1 tar cvzf $@-handin.tar.gz \
13 | 			"--exclude=src/main/pg-*.txt" \
14 | 			"--exclude=src/main/diskvd" \
15 | 			"--exclude=src/mapreduce/824-mrinput-*.txt" \
16 | 			"--exclude=src/mapreduce/5840-mrinput-*.txt" \
17 | 			"--exclude=src/main/mr-*" \
18 | 			"--exclude=mrtmp.*" \
19 | 			"--exclude=src/main/diff.out" \
20 | 			"--exclude=src/main/mrcoordinator" \
21 | 			"--exclude=src/main/mrsequential" \
22 | 			"--exclude=src/main/mrworker" \
23 | 			"--exclude=*.so" \
24 | 			Makefile src; \
25 | 		if test `stat -c "%s" "$@-handin.tar.gz" 2>/dev/null || stat -f "%z" "$@-handin.tar.gz"` -ge 20971520 ; then echo "File exceeds 20MB."; rm $@-handin.tar.gz; exit; fi; \
26 | 		echo "$@-handin.tar.gz successfully created. Please upload the tarball manually on Gradescope."; \
27 | 	else \
28 | 		echo "Bad target $@. Usage: make [$(LABS)]"; \
29 | 	fi
30 | 
31 | .PHONY: check-%
32 | check-%:
33 | 	@echo "Checking that your submission builds correctly..."
34 | 	@./.check-build git://g.csail.mit.edu/6.5840-golabs-2023 $(patsubst check-%,%,$@)
35 | 
--------------------------------------------------------------------------------
/advance/MIT6.824/README.md:
--------------------------------------------------------------------------------
1 | # MIT 6.824
2 | 
3 | This course has you read a large number of classic papers in the distributed-systems field, so that you come to understand the key principles and techniques behind designing and implementing distributed systems.
4 | 
5 | The assignments are fairly demanding, but there is plenty of material online, and after working through them you will have a reasonably clear understanding of distributed systems.
6 | 
7 | **Not yet complete; material is being uploaded step by step**
8 | 
9 | 
10 | ## Resources
11 | 
12 | - Course website: <https://pdos.csail.mit.edu/6.824/schedule.html>
13 | - Course videos:
14 |   - Bilibili: <https://www.bilibili.com/video/BV1R7411t71W>
15 |   - You can also follow the course's YouTube lectures
16 | 
17 | 
18 | ## Tips
19 | 
20 | 1. Take good notes. Take good notes. Take good notes.
21 | 2. Do not copy code from the internet: it is not necessarily correct, and you will gain nothing from it.
22 | 3. Do not upload your lab source code to the internet (e.g. GitHub). Keep a measure of academic integrity, for your own sake and for the students who come after you.
23 | 
24 | ## Layout
25 | 
26 | - src: source code for working on the course labs
27 | - lab: assignment handouts, requirements, and resources
--------------------------------------------------------------------------------
/advance/MIT6.824/lab/lab1/6.5840 Lab 1_ MapReduce.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/west2-online/learn-go/aa241bf083195d0637c034fbaf81420d982d695f/advance/MIT6.824/lab/lab1/6.5840 Lab 1_ MapReduce.pdf
--------------------------------------------------------------------------------
/advance/MIT6.824/lab/lab1/mapreduce(2004).pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/west2-online/learn-go/aa241bf083195d0637c034fbaf81420d982d695f/advance/MIT6.824/lab/lab1/mapreduce(2004).pdf
--------------------------------------------------------------------------------
/advance/MIT6.824/src/.gitignore:
--------------------------------------------------------------------------------
1 | *.*/
2 | main/mr-tmp/
3 | mrtmp.*
4 | 824-mrinput-*.txt
5 | /main/diff.out
6 | /mapreduce/x.txt
7 | /pbservice/x.txt
8 | /kvpaxos/x.txt
9 | *.so
10 | /main/mrcoordinator
11 | /main/mrsequential
12 | /main/mrworker
13 | 
--------------------------------------------------------------------------------
/advance/MIT6.824/src/go.mod:
--------------------------------------------------------------------------------
1 | module 6.5840
2 | 
3 | go 1.15
--------------------------------------------------------------------------------
/advance/MIT6.824/src/go.sum:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/west2-online/learn-go/aa241bf083195d0637c034fbaf81420d982d695f/advance/MIT6.824/src/go.sum
--------------------------------------------------------------------------------
/advance/MIT6.824/src/kvraft/client.go:
--------------------------------------------------------------------------------
1 | package kvraft
2 | 
3 | import "6.5840/labrpc"
4 | import "crypto/rand"
5 | import "math/big"
6 | 
7 | 
8 | type Clerk struct {
9 | 	servers []*labrpc.ClientEnd
10 | 	// You will have to modify this struct.
11 | }
12 | 
13 | func nrand() int64 {
14 | 	max := big.NewInt(int64(1) << 62)
15 | 	bigx, _ := rand.Int(rand.Reader, max)
16 | 	x := bigx.Int64()
17 | 	return x
18 | }
19 | 
20 | func MakeClerk(servers []*labrpc.ClientEnd) *Clerk {
21 | 	ck := new(Clerk)
22 | 	ck.servers = servers
23 | 	// You'll have to add code here.
24 | 	return ck
25 | }
26 | 
27 | // fetch the current value for a key.
28 | // returns "" if the key does not exist.
29 | // keeps trying forever in the face of all other errors.
30 | // 31 | // you can send an RPC with code like this: 32 | // ok := ck.servers[i].Call("KVServer.Get", &args, &reply) 33 | // 34 | // the types of args and reply (including whether they are pointers) 35 | // must match the declared types of the RPC handler function's 36 | // arguments. and reply must be passed as a pointer. 37 | func (ck *Clerk) Get(key string) string { 38 | 39 | // You will have to modify this function. 40 | return "" 41 | } 42 | 43 | // shared by Put and Append. 44 | // 45 | // you can send an RPC with code like this: 46 | // ok := ck.servers[i].Call("KVServer.PutAppend", &args, &reply) 47 | // 48 | // the types of args and reply (including whether they are pointers) 49 | // must match the declared types of the RPC handler function's 50 | // arguments. and reply must be passed as a pointer. 51 | func (ck *Clerk) PutAppend(key string, value string, op string) { 52 | // You will have to modify this function. 53 | } 54 | 55 | func (ck *Clerk) Put(key string, value string) { 56 | ck.PutAppend(key, value, "Put") 57 | } 58 | func (ck *Clerk) Append(key string, value string) { 59 | ck.PutAppend(key, value, "Append") 60 | } 61 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/kvraft/common.go: -------------------------------------------------------------------------------- 1 | package kvraft 2 | 3 | const ( 4 | OK = "OK" 5 | ErrNoKey = "ErrNoKey" 6 | ErrWrongLeader = "ErrWrongLeader" 7 | ) 8 | 9 | type Err string 10 | 11 | // Put or Append 12 | type PutAppendArgs struct { 13 | Key string 14 | Value string 15 | Op string // "Put" or "Append" 16 | // You'll have to add definitions here. 17 | // Field names must start with capital letters, 18 | // otherwise RPC will break. 19 | } 20 | 21 | type PutAppendReply struct { 22 | Err Err 23 | } 24 | 25 | type GetArgs struct { 26 | Key string 27 | // You'll have to add definitions here. 
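	// Illustrative note, not part of the lab handout: solutions commonly add
	// request-identification fields here, e.g. a ClientId int64 plus a SeqNum
	// int64, so the server can recognize duplicate retries; these names are
	// hypothetical. As noted above, field names must start with capital
	// letters or RPC will break.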
28 | } 29 | 30 | type GetReply struct { 31 | Err Err 32 | Value string 33 | } 34 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/kvraft/config.go: -------------------------------------------------------------------------------- 1 | package kvraft 2 | 3 | import "6.5840/labrpc" 4 | import "testing" 5 | import "os" 6 | 7 | // import "log" 8 | import crand "crypto/rand" 9 | import "math/big" 10 | import "math/rand" 11 | import "encoding/base64" 12 | import "sync" 13 | import "runtime" 14 | import "6.5840/raft" 15 | import "fmt" 16 | import "time" 17 | import "sync/atomic" 18 | 19 | func randstring(n int) string { 20 | b := make([]byte, 2*n) 21 | crand.Read(b) 22 | s := base64.URLEncoding.EncodeToString(b) 23 | return s[0:n] 24 | } 25 | 26 | func makeSeed() int64 { 27 | max := big.NewInt(int64(1) << 62) 28 | bigx, _ := crand.Int(crand.Reader, max) 29 | x := bigx.Int64() 30 | return x 31 | } 32 | 33 | // Randomize server handles 34 | func random_handles(kvh []*labrpc.ClientEnd) []*labrpc.ClientEnd { 35 | sa := make([]*labrpc.ClientEnd, len(kvh)) 36 | copy(sa, kvh) 37 | for i := range sa { 38 | j := rand.Intn(i + 1) 39 | sa[i], sa[j] = sa[j], sa[i] 40 | } 41 | return sa 42 | } 43 | 44 | type config struct { 45 | mu sync.Mutex 46 | t *testing.T 47 | net *labrpc.Network 48 | n int 49 | kvservers []*KVServer 50 | saved []*raft.Persister 51 | endnames [][]string // names of each server's sending ClientEnds 52 | clerks map[*Clerk][]string 53 | nextClientId int 54 | maxraftstate int 55 | start time.Time // time at which make_config() was called 56 | // begin()/end() statistics 57 | t0 time.Time // time at which test_test.go called cfg.begin() 58 | rpcs0 int // rpcTotal() at start of test 59 | ops int32 // number of clerk get/put/append method calls 60 | } 61 | 62 | func (cfg *config) checkTimeout() { 63 | // enforce a two minute real-time limit on each test 64 | if !cfg.t.Failed() && time.Since(cfg.start) > 120*time.Second { 65 | cfg.t.Fatal("test took longer than 120 seconds") 66 | } 67 | } 68 | 69 | func (cfg *config) cleanup() { 70 | cfg.mu.Lock() 71 | defer cfg.mu.Unlock() 72 | for i := 0; i < len(cfg.kvservers); i++ { 73 | if cfg.kvservers[i] != nil { 74 | cfg.kvservers[i].Kill() 75 | } 76 | } 77 | cfg.net.Cleanup() 78 | cfg.checkTimeout() 79 | } 80 | 81 | // Maximum log size across all servers 82 | func (cfg *config) LogSize() int { 83 | logsize := 0 84 | for i := 0; i < cfg.n; i++ { 85 | n := cfg.saved[i].RaftStateSize() 86 | if n > logsize { 87 | logsize = n 88 | } 89 | } 90 | return logsize 91 | } 92 | 93 | // Maximum snapshot size across all servers 94 | func (cfg *config) SnapshotSize() int { 95 | snapshotsize := 0 96 | for i := 0; i < cfg.n; i++ { 97 | n := cfg.saved[i].SnapshotSize() 98 | if n > snapshotsize { 99 | snapshotsize = n 100 | } 101 | } 102 | return snapshotsize 103 | } 104 | 105 | // attach server i to servers listed in to 106 | // caller must hold cfg.mu 107 | func (cfg *config) connectUnlocked(i int, to []int) { 108 | // log.Printf("connect peer %d to %v\n", i, to) 109 | 110 | // outgoing socket files 111 | for j := 0; j < len(to); j++ { 112 | endname := cfg.endnames[i][to[j]] 113 | cfg.net.Enable(endname, true) 114 | } 115 | 116 | // incoming socket files 117 | for j := 0; j < len(to); j++ { 118 | endname := cfg.endnames[to[j]][i] 119 | cfg.net.Enable(endname, true) 120 | } 121 | } 122 | 123 | func (cfg *config) connect(i int, to []int) { 124 | cfg.mu.Lock() 125 | defer cfg.mu.Unlock() 126 | cfg.connectUnlocked(i, to) 
127 | } 128 | 129 | // detach server i from the servers listed in from 130 | // caller must hold cfg.mu 131 | func (cfg *config) disconnectUnlocked(i int, from []int) { 132 | // log.Printf("disconnect peer %d from %v\n", i, from) 133 | 134 | // outgoing socket files 135 | for j := 0; j < len(from); j++ { 136 | if cfg.endnames[i] != nil { 137 | endname := cfg.endnames[i][from[j]] 138 | cfg.net.Enable(endname, false) 139 | } 140 | } 141 | 142 | // incoming socket files 143 | for j := 0; j < len(from); j++ { 144 | if cfg.endnames[j] != nil { 145 | endname := cfg.endnames[from[j]][i] 146 | cfg.net.Enable(endname, false) 147 | } 148 | } 149 | } 150 | 151 | func (cfg *config) disconnect(i int, from []int) { 152 | cfg.mu.Lock() 153 | defer cfg.mu.Unlock() 154 | cfg.disconnectUnlocked(i, from) 155 | } 156 | 157 | func (cfg *config) All() []int { 158 | all := make([]int, cfg.n) 159 | for i := 0; i < cfg.n; i++ { 160 | all[i] = i 161 | } 162 | return all 163 | } 164 | 165 | func (cfg *config) ConnectAll() { 166 | cfg.mu.Lock() 167 | defer cfg.mu.Unlock() 168 | for i := 0; i < cfg.n; i++ { 169 | cfg.connectUnlocked(i, cfg.All()) 170 | } 171 | } 172 | 173 | // Sets up 2 partitions with connectivity between servers in each partition. 174 | func (cfg *config) partition(p1 []int, p2 []int) { 175 | cfg.mu.Lock() 176 | defer cfg.mu.Unlock() 177 | // log.Printf("partition servers into: %v %v\n", p1, p2) 178 | for i := 0; i < len(p1); i++ { 179 | cfg.disconnectUnlocked(p1[i], p2) 180 | cfg.connectUnlocked(p1[i], p1) 181 | } 182 | for i := 0; i < len(p2); i++ { 183 | cfg.disconnectUnlocked(p2[i], p1) 184 | cfg.connectUnlocked(p2[i], p2) 185 | } 186 | } 187 | 188 | // Create a clerk with clerk specific server names. 189 | // Give it connections to all of the servers, but for 190 | // now enable only connections to servers in to[]. 191 | func (cfg *config) makeClient(to []int) *Clerk { 192 | cfg.mu.Lock() 193 | defer cfg.mu.Unlock() 194 | 195 | // a fresh set of ClientEnds. 
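	// each clerk gets its own set of randomly named ends, one per server, so
	// the test network can enable and disable this client's connections
	// independently of every other clerk.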
196 | ends := make([]*labrpc.ClientEnd, cfg.n) 197 | endnames := make([]string, cfg.n) 198 | for j := 0; j < cfg.n; j++ { 199 | endnames[j] = randstring(20) 200 | ends[j] = cfg.net.MakeEnd(endnames[j]) 201 | cfg.net.Connect(endnames[j], j) 202 | } 203 | 204 | ck := MakeClerk(random_handles(ends)) 205 | cfg.clerks[ck] = endnames 206 | cfg.nextClientId++ 207 | cfg.ConnectClientUnlocked(ck, to) 208 | return ck 209 | } 210 | 211 | func (cfg *config) deleteClient(ck *Clerk) { 212 | cfg.mu.Lock() 213 | defer cfg.mu.Unlock() 214 | 215 | v := cfg.clerks[ck] 216 | for i := 0; i < len(v); i++ { 217 | os.Remove(v[i]) 218 | } 219 | delete(cfg.clerks, ck) 220 | } 221 | 222 | // caller should hold cfg.mu 223 | func (cfg *config) ConnectClientUnlocked(ck *Clerk, to []int) { 224 | // log.Printf("ConnectClient %v to %v\n", ck, to) 225 | endnames := cfg.clerks[ck] 226 | for j := 0; j < len(to); j++ { 227 | s := endnames[to[j]] 228 | cfg.net.Enable(s, true) 229 | } 230 | } 231 | 232 | func (cfg *config) ConnectClient(ck *Clerk, to []int) { 233 | cfg.mu.Lock() 234 | defer cfg.mu.Unlock() 235 | cfg.ConnectClientUnlocked(ck, to) 236 | } 237 | 238 | // caller should hold cfg.mu 239 | func (cfg *config) DisconnectClientUnlocked(ck *Clerk, from []int) { 240 | // log.Printf("DisconnectClient %v from %v\n", ck, from) 241 | endnames := cfg.clerks[ck] 242 | for j := 0; j < len(from); j++ { 243 | s := endnames[from[j]] 244 | cfg.net.Enable(s, false) 245 | } 246 | } 247 | 248 | func (cfg *config) DisconnectClient(ck *Clerk, from []int) { 249 | cfg.mu.Lock() 250 | defer cfg.mu.Unlock() 251 | cfg.DisconnectClientUnlocked(ck, from) 252 | } 253 | 254 | // Shutdown a server by isolating it 255 | func (cfg *config) ShutdownServer(i int) { 256 | cfg.mu.Lock() 257 | defer cfg.mu.Unlock() 258 | 259 | cfg.disconnectUnlocked(i, cfg.All()) 260 | 261 | // disable client connections to the server. 262 | // it's important to do this before creating 263 | // the new Persister in saved[i], to avoid 264 | // the possibility of the server returning a 265 | // positive reply to an Append but persisting 266 | // the result in the superseded Persister. 267 | cfg.net.DeleteServer(i) 268 | 269 | // a fresh persister, in case old instance 270 | // continues to update the Persister. 271 | // but copy old persister's content so that we always 272 | // pass Make() the last persisted state. 273 | if cfg.saved[i] != nil { 274 | cfg.saved[i] = cfg.saved[i].Copy() 275 | } 276 | 277 | kv := cfg.kvservers[i] 278 | if kv != nil { 279 | cfg.mu.Unlock() 280 | kv.Kill() 281 | cfg.mu.Lock() 282 | cfg.kvservers[i] = nil 283 | } 284 | } 285 | 286 | // If restart servers, first call ShutdownServer 287 | func (cfg *config) StartServer(i int) { 288 | cfg.mu.Lock() 289 | 290 | // a fresh set of outgoing ClientEnd names. 291 | cfg.endnames[i] = make([]string, cfg.n) 292 | for j := 0; j < cfg.n; j++ { 293 | cfg.endnames[i][j] = randstring(20) 294 | } 295 | 296 | // a fresh set of ClientEnds. 297 | ends := make([]*labrpc.ClientEnd, cfg.n) 298 | for j := 0; j < cfg.n; j++ { 299 | ends[j] = cfg.net.MakeEnd(cfg.endnames[i][j]) 300 | cfg.net.Connect(cfg.endnames[i][j], j) 301 | } 302 | 303 | // a fresh persister, so old instance doesn't overwrite 304 | // new instance's persisted state. 305 | // give the fresh persister a copy of the old persister's 306 | // state, so that the spec is that we pass StartKVServer() 307 | // the last persisted state. 
308 | if cfg.saved[i] != nil { 309 | cfg.saved[i] = cfg.saved[i].Copy() 310 | } else { 311 | cfg.saved[i] = raft.MakePersister() 312 | } 313 | cfg.mu.Unlock() 314 | 315 | cfg.kvservers[i] = StartKVServer(ends, i, cfg.saved[i], cfg.maxraftstate) 316 | 317 | kvsvc := labrpc.MakeService(cfg.kvservers[i]) 318 | rfsvc := labrpc.MakeService(cfg.kvservers[i].rf) 319 | srv := labrpc.MakeServer() 320 | srv.AddService(kvsvc) 321 | srv.AddService(rfsvc) 322 | cfg.net.AddServer(i, srv) 323 | } 324 | 325 | func (cfg *config) Leader() (bool, int) { 326 | cfg.mu.Lock() 327 | defer cfg.mu.Unlock() 328 | 329 | for i := 0; i < cfg.n; i++ { 330 | _, is_leader := cfg.kvservers[i].rf.GetState() 331 | if is_leader { 332 | return true, i 333 | } 334 | } 335 | return false, 0 336 | } 337 | 338 | // Partition servers into 2 groups and put current leader in minority 339 | func (cfg *config) make_partition() ([]int, []int) { 340 | _, l := cfg.Leader() 341 | p1 := make([]int, cfg.n/2+1) 342 | p2 := make([]int, cfg.n/2) 343 | j := 0 344 | for i := 0; i < cfg.n; i++ { 345 | if i != l { 346 | if j < len(p1) { 347 | p1[j] = i 348 | } else { 349 | p2[j-len(p1)] = i 350 | } 351 | j++ 352 | } 353 | } 354 | p2[len(p2)-1] = l 355 | return p1, p2 356 | } 357 | 358 | var ncpu_once sync.Once 359 | 360 | func make_config(t *testing.T, n int, unreliable bool, maxraftstate int) *config { 361 | ncpu_once.Do(func() { 362 | if runtime.NumCPU() < 2 { 363 | fmt.Printf("warning: only one CPU, which may conceal locking bugs\n") 364 | } 365 | rand.Seed(makeSeed()) 366 | }) 367 | runtime.GOMAXPROCS(4) 368 | cfg := &config{} 369 | cfg.t = t 370 | cfg.net = labrpc.MakeNetwork() 371 | cfg.n = n 372 | cfg.kvservers = make([]*KVServer, cfg.n) 373 | cfg.saved = make([]*raft.Persister, cfg.n) 374 | cfg.endnames = make([][]string, cfg.n) 375 | cfg.clerks = make(map[*Clerk][]string) 376 | cfg.nextClientId = cfg.n + 1000 // client ids start 1000 above the highest serverid 377 | cfg.maxraftstate = maxraftstate 378 | cfg.start = time.Now() 379 | 380 | // create a full set of KV servers. 381 | for i := 0; i < cfg.n; i++ { 382 | cfg.StartServer(i) 383 | } 384 | 385 | cfg.ConnectAll() 386 | 387 | cfg.net.Reliable(!unreliable) 388 | 389 | return cfg 390 | } 391 | 392 | func (cfg *config) rpcTotal() int { 393 | return cfg.net.GetTotalCount() 394 | } 395 | 396 | // start a Test. 397 | // print the Test message. 398 | // e.g. cfg.begin("Test (2B): RPC counts aren't too high") 399 | func (cfg *config) begin(description string) { 400 | fmt.Printf("%s ...\n", description) 401 | cfg.t0 = time.Now() 402 | cfg.rpcs0 = cfg.rpcTotal() 403 | atomic.StoreInt32(&cfg.ops, 0) 404 | } 405 | 406 | func (cfg *config) op() { 407 | atomic.AddInt32(&cfg.ops, 1) 408 | } 409 | 410 | // end a Test -- the fact that we got here means there 411 | // was no failure. 412 | // print the Passed message, 413 | // and some performance numbers. 414 | func (cfg *config) end() { 415 | cfg.checkTimeout() 416 | if cfg.t.Failed() == false { 417 | t := time.Since(cfg.t0).Seconds() // real time 418 | npeers := cfg.n // number of Raft peers 419 | nrpc := cfg.rpcTotal() - cfg.rpcs0 // number of RPC sends 420 | ops := atomic.LoadInt32(&cfg.ops) // number of clerk get/put/append calls 421 | 422 | fmt.Printf(" ... 
Passed --") 423 | fmt.Printf(" %4.1f %d %5d %4d\n", t, npeers, nrpc, ops) 424 | } 425 | } 426 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/kvraft/server.go: -------------------------------------------------------------------------------- 1 | package kvraft 2 | 3 | import ( 4 | "6.5840/labgob" 5 | "6.5840/labrpc" 6 | "6.5840/raft" 7 | "log" 8 | "sync" 9 | "sync/atomic" 10 | ) 11 | 12 | const Debug = false 13 | 14 | func DPrintf(format string, a ...interface{}) (n int, err error) { 15 | if Debug { 16 | log.Printf(format, a...) 17 | } 18 | return 19 | } 20 | 21 | 22 | type Op struct { 23 | // Your definitions here. 24 | // Field names must start with capital letters, 25 | // otherwise RPC will break. 26 | } 27 | 28 | type KVServer struct { 29 | mu sync.Mutex 30 | me int 31 | rf *raft.Raft 32 | applyCh chan raft.ApplyMsg 33 | dead int32 // set by Kill() 34 | 35 | maxraftstate int // snapshot if log grows this big 36 | 37 | // Your definitions here. 38 | } 39 | 40 | 41 | func (kv *KVServer) Get(args *GetArgs, reply *GetReply) { 42 | // Your code here. 43 | } 44 | 45 | func (kv *KVServer) PutAppend(args *PutAppendArgs, reply *PutAppendReply) { 46 | // Your code here. 47 | } 48 | 49 | // the tester calls Kill() when a KVServer instance won't 50 | // be needed again. for your convenience, we supply 51 | // code to set rf.dead (without needing a lock), 52 | // and a killed() method to test rf.dead in 53 | // long-running loops. you can also add your own 54 | // code to Kill(). you're not required to do anything 55 | // about this, but it may be convenient (for example) 56 | // to suppress debug output from a Kill()ed instance. 57 | func (kv *KVServer) Kill() { 58 | atomic.StoreInt32(&kv.dead, 1) 59 | kv.rf.Kill() 60 | // Your code here, if desired. 61 | } 62 | 63 | func (kv *KVServer) killed() bool { 64 | z := atomic.LoadInt32(&kv.dead) 65 | return z == 1 66 | } 67 | 68 | // servers[] contains the ports of the set of 69 | // servers that will cooperate via Raft to 70 | // form the fault-tolerant key/value service. 71 | // me is the index of the current server in servers[]. 72 | // the k/v server should store snapshots through the underlying Raft 73 | // implementation, which should call persister.SaveStateAndSnapshot() to 74 | // atomically save the Raft state along with the snapshot. 75 | // the k/v server should snapshot when Raft's saved state exceeds maxraftstate bytes, 76 | // in order to allow Raft to garbage-collect its log. if maxraftstate is -1, 77 | // you don't need to snapshot. 78 | // StartKVServer() must return quickly, so it should start goroutines 79 | // for any long-running work. 80 | func StartKVServer(servers []*labrpc.ClientEnd, me int, persister *raft.Persister, maxraftstate int) *KVServer { 81 | // call labgob.Register on structures you want 82 | // Go's RPC library to marshall/unmarshall. 83 | labgob.Register(Op{}) 84 | 85 | kv := new(KVServer) 86 | kv.me = me 87 | kv.maxraftstate = maxraftstate 88 | 89 | // You may need initialization code here. 90 | 91 | kv.applyCh = make(chan raft.ApplyMsg) 92 | kv.rf = raft.Make(servers, me, persister, kv.applyCh) 93 | 94 | // You may need initialization code here. 
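	// Illustrative note, not part of the skeleton: initialization here
	// typically means creating the server's key/value state and starting a
	// long-running goroutine that reads committed entries from kv.applyCh and
	// applies them, since StartKVServer() itself must return quickly; the
	// exact design is an assumption left to your implementation.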
95 | 96 | return kv 97 | } 98 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/labgob/labgob.go: -------------------------------------------------------------------------------- 1 | package labgob 2 | 3 | // 4 | // trying to send non-capitalized fields over RPC produces a range of 5 | // misbehavior, including both mysterious incorrect computation and 6 | // outright crashes. so this wrapper around Go's encoding/gob warns 7 | // about non-capitalized field names. 8 | // 9 | 10 | import "encoding/gob" 11 | import "io" 12 | import "reflect" 13 | import "fmt" 14 | import "sync" 15 | import "unicode" 16 | import "unicode/utf8" 17 | 18 | var mu sync.Mutex 19 | var errorCount int // for TestCapital 20 | var checked map[reflect.Type]bool 21 | 22 | type LabEncoder struct { 23 | gob *gob.Encoder 24 | } 25 | 26 | func NewEncoder(w io.Writer) *LabEncoder { 27 | enc := &LabEncoder{} 28 | enc.gob = gob.NewEncoder(w) 29 | return enc 30 | } 31 | 32 | func (enc *LabEncoder) Encode(e interface{}) error { 33 | checkValue(e) 34 | return enc.gob.Encode(e) 35 | } 36 | 37 | func (enc *LabEncoder) EncodeValue(value reflect.Value) error { 38 | checkValue(value.Interface()) 39 | return enc.gob.EncodeValue(value) 40 | } 41 | 42 | type LabDecoder struct { 43 | gob *gob.Decoder 44 | } 45 | 46 | func NewDecoder(r io.Reader) *LabDecoder { 47 | dec := &LabDecoder{} 48 | dec.gob = gob.NewDecoder(r) 49 | return dec 50 | } 51 | 52 | func (dec *LabDecoder) Decode(e interface{}) error { 53 | checkValue(e) 54 | checkDefault(e) 55 | return dec.gob.Decode(e) 56 | } 57 | 58 | func Register(value interface{}) { 59 | checkValue(value) 60 | gob.Register(value) 61 | } 62 | 63 | func RegisterName(name string, value interface{}) { 64 | checkValue(value) 65 | gob.RegisterName(name, value) 66 | } 67 | 68 | func checkValue(value interface{}) { 69 | checkType(reflect.TypeOf(value)) 70 | } 71 | 72 | func checkType(t reflect.Type) { 73 | k := t.Kind() 74 | 75 | mu.Lock() 76 | // only complain once, and avoid recursion. 77 | if checked == nil { 78 | checked = map[reflect.Type]bool{} 79 | } 80 | if checked[t] { 81 | mu.Unlock() 82 | return 83 | } 84 | checked[t] = true 85 | mu.Unlock() 86 | 87 | switch k { 88 | case reflect.Struct: 89 | for i := 0; i < t.NumField(); i++ { 90 | f := t.Field(i) 91 | rune, _ := utf8.DecodeRuneInString(f.Name) 92 | if unicode.IsUpper(rune) == false { 93 | // ta da 94 | fmt.Printf("labgob error: lower-case field %v of %v in RPC or persist/snapshot will break your Raft\n", 95 | f.Name, t.Name()) 96 | mu.Lock() 97 | errorCount += 1 98 | mu.Unlock() 99 | } 100 | checkType(f.Type) 101 | } 102 | return 103 | case reflect.Slice, reflect.Array, reflect.Ptr: 104 | checkType(t.Elem()) 105 | return 106 | case reflect.Map: 107 | checkType(t.Elem()) 108 | checkType(t.Key()) 109 | return 110 | default: 111 | return 112 | } 113 | } 114 | 115 | // 116 | // warn if the value contains non-default values, 117 | // as it would if one sent an RPC but the reply 118 | // struct was already modified. if the RPC reply 119 | // contains default values, GOB won't overwrite 120 | // the non-default value. 
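// for example, decoding into a reply struct whose string field already holds
// "x" leaves "x" in place when the sender encoded the empty string, silently
// mixing state from two different RPCs.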
121 | // 122 | func checkDefault(value interface{}) { 123 | if value == nil { 124 | return 125 | } 126 | checkDefault1(reflect.ValueOf(value), 1, "") 127 | } 128 | 129 | func checkDefault1(value reflect.Value, depth int, name string) { 130 | if depth > 3 { 131 | return 132 | } 133 | 134 | t := value.Type() 135 | k := t.Kind() 136 | 137 | switch k { 138 | case reflect.Struct: 139 | for i := 0; i < t.NumField(); i++ { 140 | vv := value.Field(i) 141 | name1 := t.Field(i).Name 142 | if name != "" { 143 | name1 = name + "." + name1 144 | } 145 | checkDefault1(vv, depth+1, name1) 146 | } 147 | return 148 | case reflect.Ptr: 149 | if value.IsNil() { 150 | return 151 | } 152 | checkDefault1(value.Elem(), depth+1, name) 153 | return 154 | case reflect.Bool, 155 | reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, 156 | reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, 157 | reflect.Uintptr, reflect.Float32, reflect.Float64, 158 | reflect.String: 159 | if reflect.DeepEqual(reflect.Zero(t).Interface(), value.Interface()) == false { 160 | mu.Lock() 161 | if errorCount < 1 { 162 | what := name 163 | if what == "" { 164 | what = t.Name() 165 | } 166 | // this warning typically arises if code re-uses the same RPC reply 167 | // variable for multiple RPC calls, or if code restores persisted 168 | // state into variable that already have non-default values. 169 | fmt.Printf("labgob warning: Decoding into a non-default variable/field %v may not work\n", 170 | what) 171 | } 172 | errorCount += 1 173 | mu.Unlock() 174 | } 175 | return 176 | } 177 | } 178 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/labgob/test_test.go: -------------------------------------------------------------------------------- 1 | package labgob 2 | 3 | import "testing" 4 | 5 | import "bytes" 6 | 7 | type T1 struct { 8 | T1int0 int 9 | T1int1 int 10 | T1string0 string 11 | T1string1 string 12 | } 13 | 14 | type T2 struct { 15 | T2slice []T1 16 | T2map map[int]*T1 17 | T2t3 interface{} 18 | } 19 | 20 | type T3 struct { 21 | T3int999 int 22 | } 23 | 24 | // test that we didn't break GOB. 
25 | func TestGOB(t *testing.T) { 26 | e0 := errorCount 27 | 28 | w := new(bytes.Buffer) 29 | 30 | Register(T3{}) 31 | 32 | { 33 | x0 := 0 34 | x1 := 1 35 | t1 := T1{} 36 | t1.T1int1 = 1 37 | t1.T1string1 = "6.5840" 38 | t2 := T2{} 39 | t2.T2slice = []T1{T1{}, t1} 40 | t2.T2map = map[int]*T1{} 41 | t2.T2map[99] = &T1{1, 2, "x", "y"} 42 | t2.T2t3 = T3{999} 43 | 44 | e := NewEncoder(w) 45 | e.Encode(x0) 46 | e.Encode(x1) 47 | e.Encode(t1) 48 | e.Encode(t2) 49 | } 50 | data := w.Bytes() 51 | 52 | { 53 | var x0 int 54 | var x1 int 55 | var t1 T1 56 | var t2 T2 57 | 58 | r := bytes.NewBuffer(data) 59 | d := NewDecoder(r) 60 | if d.Decode(&x0) != nil || 61 | d.Decode(&x1) != nil || 62 | d.Decode(&t1) != nil || 63 | d.Decode(&t2) != nil { 64 | t.Fatalf("Decode failed") 65 | } 66 | 67 | if x0 != 0 { 68 | t.Fatalf("wrong x0 %v\n", x0) 69 | } 70 | if x1 != 1 { 71 | t.Fatalf("wrong x1 %v\n", x1) 72 | } 73 | if t1.T1int0 != 0 { 74 | t.Fatalf("wrong t1.T1int0 %v\n", t1.T1int0) 75 | } 76 | if t1.T1int1 != 1 { 77 | t.Fatalf("wrong t1.T1int1 %v\n", t1.T1int1) 78 | } 79 | if t1.T1string0 != "" { 80 | t.Fatalf("wrong t1.T1string0 %v\n", t1.T1string0) 81 | } 82 | if t1.T1string1 != "6.5840" { 83 | t.Fatalf("wrong t1.T1string1 %v\n", t1.T1string1) 84 | } 85 | if len(t2.T2slice) != 2 { 86 | t.Fatalf("wrong t2.T2slice len %v\n", len(t2.T2slice)) 87 | } 88 | if t2.T2slice[1].T1int1 != 1 { 89 | t.Fatalf("wrong slice value\n") 90 | } 91 | if len(t2.T2map) != 1 { 92 | t.Fatalf("wrong t2.T2map len %v\n", len(t2.T2map)) 93 | } 94 | if t2.T2map[99].T1string1 != "y" { 95 | t.Fatalf("wrong map value\n") 96 | } 97 | t3 := (t2.T2t3).(T3) 98 | if t3.T3int999 != 999 { 99 | t.Fatalf("wrong t2.T2t3.T3int999\n") 100 | } 101 | } 102 | 103 | if errorCount != e0 { 104 | t.Fatalf("there were errors, but should not have been") 105 | } 106 | } 107 | 108 | type T4 struct { 109 | Yes int 110 | no int 111 | } 112 | 113 | // make sure we check capitalization 114 | // labgob prints one warning during this test. 115 | func TestCapital(t *testing.T) { 116 | e0 := errorCount 117 | 118 | v := []map[*T4]int{} 119 | 120 | w := new(bytes.Buffer) 121 | e := NewEncoder(w) 122 | e.Encode(v) 123 | data := w.Bytes() 124 | 125 | var v1 []map[T4]int 126 | r := bytes.NewBuffer(data) 127 | d := NewDecoder(r) 128 | d.Decode(&v1) 129 | 130 | if errorCount != e0+1 { 131 | t.Fatalf("failed to warn about lower-case field") 132 | } 133 | } 134 | 135 | // check that we warn when someone sends a default value over 136 | // RPC but the target into which we're decoding holds a non-default 137 | // value, which GOB seems not to overwrite as you'd expect. 138 | // 139 | // labgob does not print a warning. 140 | func TestDefault(t *testing.T) { 141 | e0 := errorCount 142 | 143 | type DD struct { 144 | X int 145 | } 146 | 147 | // send a default value... 148 | dd1 := DD{} 149 | 150 | w := new(bytes.Buffer) 151 | e := NewEncoder(w) 152 | e.Encode(dd1) 153 | data := w.Bytes() 154 | 155 | // and receive it into memory that already 156 | // holds non-default values. 
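	// DD{99} deliberately seeds the decode target with a non-default value;
	// gob will not overwrite it with the encoded zero value, which is exactly
	// the situation labgob is expected to warn about.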
157 | reply := DD{99} 158 | 159 | r := bytes.NewBuffer(data) 160 | d := NewDecoder(r) 161 | d.Decode(&reply) 162 | 163 | if errorCount != e0+1 { 164 | t.Fatalf("failed to warn about decoding into non-default value") 165 | } 166 | } 167 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/labrpc/test_test.go: -------------------------------------------------------------------------------- 1 | package labrpc 2 | 3 | import "testing" 4 | import "strconv" 5 | import "sync" 6 | import "runtime" 7 | import "time" 8 | import "fmt" 9 | 10 | type JunkArgs struct { 11 | X int 12 | } 13 | type JunkReply struct { 14 | X string 15 | } 16 | 17 | type JunkServer struct { 18 | mu sync.Mutex 19 | log1 []string 20 | log2 []int 21 | } 22 | 23 | func (js *JunkServer) Handler1(args string, reply *int) { 24 | js.mu.Lock() 25 | defer js.mu.Unlock() 26 | js.log1 = append(js.log1, args) 27 | *reply, _ = strconv.Atoi(args) 28 | } 29 | 30 | func (js *JunkServer) Handler2(args int, reply *string) { 31 | js.mu.Lock() 32 | defer js.mu.Unlock() 33 | js.log2 = append(js.log2, args) 34 | *reply = "handler2-" + strconv.Itoa(args) 35 | } 36 | 37 | func (js *JunkServer) Handler3(args int, reply *int) { 38 | js.mu.Lock() 39 | defer js.mu.Unlock() 40 | time.Sleep(20 * time.Second) 41 | *reply = -args 42 | } 43 | 44 | // args is a pointer 45 | func (js *JunkServer) Handler4(args *JunkArgs, reply *JunkReply) { 46 | reply.X = "pointer" 47 | } 48 | 49 | // args is a not pointer 50 | func (js *JunkServer) Handler5(args JunkArgs, reply *JunkReply) { 51 | reply.X = "no pointer" 52 | } 53 | 54 | func (js *JunkServer) Handler6(args string, reply *int) { 55 | js.mu.Lock() 56 | defer js.mu.Unlock() 57 | *reply = len(args) 58 | } 59 | 60 | func (js *JunkServer) Handler7(args int, reply *string) { 61 | js.mu.Lock() 62 | defer js.mu.Unlock() 63 | *reply = "" 64 | for i := 0; i < args; i++ { 65 | *reply = *reply + "y" 66 | } 67 | } 68 | 69 | func TestBasic(t *testing.T) { 70 | runtime.GOMAXPROCS(4) 71 | 72 | rn := MakeNetwork() 73 | defer rn.Cleanup() 74 | 75 | e := rn.MakeEnd("end1-99") 76 | 77 | js := &JunkServer{} 78 | svc := MakeService(js) 79 | 80 | rs := MakeServer() 81 | rs.AddService(svc) 82 | rn.AddServer("server99", rs) 83 | 84 | rn.Connect("end1-99", "server99") 85 | rn.Enable("end1-99", true) 86 | 87 | { 88 | reply := "" 89 | e.Call("JunkServer.Handler2", 111, &reply) 90 | if reply != "handler2-111" { 91 | t.Fatalf("wrong reply from Handler2") 92 | } 93 | } 94 | 95 | { 96 | reply := 0 97 | e.Call("JunkServer.Handler1", "9099", &reply) 98 | if reply != 9099 { 99 | t.Fatalf("wrong reply from Handler1") 100 | } 101 | } 102 | } 103 | 104 | func TestTypes(t *testing.T) { 105 | runtime.GOMAXPROCS(4) 106 | 107 | rn := MakeNetwork() 108 | defer rn.Cleanup() 109 | 110 | e := rn.MakeEnd("end1-99") 111 | 112 | js := &JunkServer{} 113 | svc := MakeService(js) 114 | 115 | rs := MakeServer() 116 | rs.AddService(svc) 117 | rn.AddServer("server99", rs) 118 | 119 | rn.Connect("end1-99", "server99") 120 | rn.Enable("end1-99", true) 121 | 122 | { 123 | var args JunkArgs 124 | var reply JunkReply 125 | // args must match type (pointer or not) of handler. 126 | e.Call("JunkServer.Handler4", &args, &reply) 127 | if reply.X != "pointer" { 128 | t.Fatalf("wrong reply from Handler4") 129 | } 130 | } 131 | 132 | { 133 | var args JunkArgs 134 | var reply JunkReply 135 | // args must match type (pointer or not) of handler. 
136 | e.Call("JunkServer.Handler5", args, &reply) 137 | if reply.X != "no pointer" { 138 | t.Fatalf("wrong reply from Handler5") 139 | } 140 | } 141 | } 142 | 143 | // 144 | // does net.Enable(endname, false) really disconnect a client? 145 | // 146 | func TestDisconnect(t *testing.T) { 147 | runtime.GOMAXPROCS(4) 148 | 149 | rn := MakeNetwork() 150 | defer rn.Cleanup() 151 | 152 | e := rn.MakeEnd("end1-99") 153 | 154 | js := &JunkServer{} 155 | svc := MakeService(js) 156 | 157 | rs := MakeServer() 158 | rs.AddService(svc) 159 | rn.AddServer("server99", rs) 160 | 161 | rn.Connect("end1-99", "server99") 162 | 163 | { 164 | reply := "" 165 | e.Call("JunkServer.Handler2", 111, &reply) 166 | if reply != "" { 167 | t.Fatalf("unexpected reply from Handler2") 168 | } 169 | } 170 | 171 | rn.Enable("end1-99", true) 172 | 173 | { 174 | reply := 0 175 | e.Call("JunkServer.Handler1", "9099", &reply) 176 | if reply != 9099 { 177 | t.Fatalf("wrong reply from Handler1") 178 | } 179 | } 180 | } 181 | 182 | // 183 | // test net.GetCount() 184 | // 185 | func TestCounts(t *testing.T) { 186 | runtime.GOMAXPROCS(4) 187 | 188 | rn := MakeNetwork() 189 | defer rn.Cleanup() 190 | 191 | e := rn.MakeEnd("end1-99") 192 | 193 | js := &JunkServer{} 194 | svc := MakeService(js) 195 | 196 | rs := MakeServer() 197 | rs.AddService(svc) 198 | rn.AddServer(99, rs) 199 | 200 | rn.Connect("end1-99", 99) 201 | rn.Enable("end1-99", true) 202 | 203 | for i := 0; i < 17; i++ { 204 | reply := "" 205 | e.Call("JunkServer.Handler2", i, &reply) 206 | wanted := "handler2-" + strconv.Itoa(i) 207 | if reply != wanted { 208 | t.Fatalf("wrong reply %v from Handler1, expecting %v", reply, wanted) 209 | } 210 | } 211 | 212 | n := rn.GetCount(99) 213 | if n != 17 { 214 | t.Fatalf("wrong GetCount() %v, expected 17\n", n) 215 | } 216 | } 217 | 218 | // 219 | // test net.GetTotalBytes() 220 | // 221 | func TestBytes(t *testing.T) { 222 | runtime.GOMAXPROCS(4) 223 | 224 | rn := MakeNetwork() 225 | defer rn.Cleanup() 226 | 227 | e := rn.MakeEnd("end1-99") 228 | 229 | js := &JunkServer{} 230 | svc := MakeService(js) 231 | 232 | rs := MakeServer() 233 | rs.AddService(svc) 234 | rn.AddServer(99, rs) 235 | 236 | rn.Connect("end1-99", 99) 237 | rn.Enable("end1-99", true) 238 | 239 | for i := 0; i < 17; i++ { 240 | args := "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" 241 | args = args + args 242 | args = args + args 243 | reply := 0 244 | e.Call("JunkServer.Handler6", args, &reply) 245 | wanted := len(args) 246 | if reply != wanted { 247 | t.Fatalf("wrong reply %v from Handler6, expecting %v", reply, wanted) 248 | } 249 | } 250 | 251 | n := rn.GetTotalBytes() 252 | if n < 4828 || n > 6000 { 253 | t.Fatalf("wrong GetTotalBytes() %v, expected about 5000\n", n) 254 | } 255 | 256 | for i := 0; i < 17; i++ { 257 | args := 107 258 | reply := "" 259 | e.Call("JunkServer.Handler7", args, &reply) 260 | wanted := args 261 | if len(reply) != wanted { 262 | t.Fatalf("wrong reply len=%v from Handler6, expecting %v", len(reply), wanted) 263 | } 264 | } 265 | 266 | nn := rn.GetTotalBytes() - n 267 | if nn < 1800 || nn > 2500 { 268 | t.Fatalf("wrong GetTotalBytes() %v, expected about 2000\n", nn) 269 | } 270 | } 271 | 272 | // 273 | // test RPCs from concurrent ClientEnds 274 | // 275 | func TestConcurrentMany(t *testing.T) { 276 | runtime.GOMAXPROCS(4) 277 | 278 | rn := MakeNetwork() 279 | defer rn.Cleanup() 280 | 281 | js := &JunkServer{} 282 | svc := MakeService(js) 283 | 284 | rs := MakeServer() 285 | rs.AddService(svc) 286 | 
rn.AddServer(1000, rs) 287 | 288 | ch := make(chan int) 289 | 290 | nclients := 20 291 | nrpcs := 10 292 | for ii := 0; ii < nclients; ii++ { 293 | go func(i int) { 294 | n := 0 295 | defer func() { ch <- n }() 296 | 297 | e := rn.MakeEnd(i) 298 | rn.Connect(i, 1000) 299 | rn.Enable(i, true) 300 | 301 | for j := 0; j < nrpcs; j++ { 302 | arg := i*100 + j 303 | reply := "" 304 | e.Call("JunkServer.Handler2", arg, &reply) 305 | wanted := "handler2-" + strconv.Itoa(arg) 306 | if reply != wanted { 307 | t.Fatalf("wrong reply %v from Handler1, expecting %v", reply, wanted) 308 | } 309 | n += 1 310 | } 311 | }(ii) 312 | } 313 | 314 | total := 0 315 | for ii := 0; ii < nclients; ii++ { 316 | x := <-ch 317 | total += x 318 | } 319 | 320 | if total != nclients*nrpcs { 321 | t.Fatalf("wrong number of RPCs completed, got %v, expected %v", total, nclients*nrpcs) 322 | } 323 | 324 | n := rn.GetCount(1000) 325 | if n != total { 326 | t.Fatalf("wrong GetCount() %v, expected %v\n", n, total) 327 | } 328 | } 329 | 330 | // 331 | // test unreliable 332 | // 333 | func TestUnreliable(t *testing.T) { 334 | runtime.GOMAXPROCS(4) 335 | 336 | rn := MakeNetwork() 337 | defer rn.Cleanup() 338 | rn.Reliable(false) 339 | 340 | js := &JunkServer{} 341 | svc := MakeService(js) 342 | 343 | rs := MakeServer() 344 | rs.AddService(svc) 345 | rn.AddServer(1000, rs) 346 | 347 | ch := make(chan int) 348 | 349 | nclients := 300 350 | for ii := 0; ii < nclients; ii++ { 351 | go func(i int) { 352 | n := 0 353 | defer func() { ch <- n }() 354 | 355 | e := rn.MakeEnd(i) 356 | rn.Connect(i, 1000) 357 | rn.Enable(i, true) 358 | 359 | arg := i * 100 360 | reply := "" 361 | ok := e.Call("JunkServer.Handler2", arg, &reply) 362 | if ok { 363 | wanted := "handler2-" + strconv.Itoa(arg) 364 | if reply != wanted { 365 | t.Fatalf("wrong reply %v from Handler1, expecting %v", reply, wanted) 366 | } 367 | n += 1 368 | } 369 | }(ii) 370 | } 371 | 372 | total := 0 373 | for ii := 0; ii < nclients; ii++ { 374 | x := <-ch 375 | total += x 376 | } 377 | 378 | if total == nclients || total == 0 { 379 | t.Fatalf("all RPCs succeeded despite unreliable") 380 | } 381 | } 382 | 383 | // 384 | // test concurrent RPCs from a single ClientEnd 385 | // 386 | func TestConcurrentOne(t *testing.T) { 387 | runtime.GOMAXPROCS(4) 388 | 389 | rn := MakeNetwork() 390 | defer rn.Cleanup() 391 | 392 | js := &JunkServer{} 393 | svc := MakeService(js) 394 | 395 | rs := MakeServer() 396 | rs.AddService(svc) 397 | rn.AddServer(1000, rs) 398 | 399 | e := rn.MakeEnd("c") 400 | rn.Connect("c", 1000) 401 | rn.Enable("c", true) 402 | 403 | ch := make(chan int) 404 | 405 | nrpcs := 20 406 | for ii := 0; ii < nrpcs; ii++ { 407 | go func(i int) { 408 | n := 0 409 | defer func() { ch <- n }() 410 | 411 | arg := 100 + i 412 | reply := "" 413 | e.Call("JunkServer.Handler2", arg, &reply) 414 | wanted := "handler2-" + strconv.Itoa(arg) 415 | if reply != wanted { 416 | t.Fatalf("wrong reply %v from Handler2, expecting %v", reply, wanted) 417 | } 418 | n += 1 419 | }(ii) 420 | } 421 | 422 | total := 0 423 | for ii := 0; ii < nrpcs; ii++ { 424 | x := <-ch 425 | total += x 426 | } 427 | 428 | if total != nrpcs { 429 | t.Fatalf("wrong number of RPCs completed, got %v, expected %v", total, nrpcs) 430 | } 431 | 432 | js.mu.Lock() 433 | defer js.mu.Unlock() 434 | if len(js.log2) != nrpcs { 435 | t.Fatalf("wrong number of RPCs delivered") 436 | } 437 | 438 | n := rn.GetCount(1000) 439 | if n != total { 440 | t.Fatalf("wrong GetCount() %v, expected %v\n", n, total) 441 | } 442 | } 443 | 444 | 
// 445 | // regression: an RPC that's delayed during Enabled=false 446 | // should not delay subsequent RPCs (e.g. after Enabled=true). 447 | // 448 | func TestRegression1(t *testing.T) { 449 | runtime.GOMAXPROCS(4) 450 | 451 | rn := MakeNetwork() 452 | defer rn.Cleanup() 453 | 454 | js := &JunkServer{} 455 | svc := MakeService(js) 456 | 457 | rs := MakeServer() 458 | rs.AddService(svc) 459 | rn.AddServer(1000, rs) 460 | 461 | e := rn.MakeEnd("c") 462 | rn.Connect("c", 1000) 463 | 464 | // start some RPCs while the ClientEnd is disabled. 465 | // they'll be delayed. 466 | rn.Enable("c", false) 467 | ch := make(chan bool) 468 | nrpcs := 20 469 | for ii := 0; ii < nrpcs; ii++ { 470 | go func(i int) { 471 | ok := false 472 | defer func() { ch <- ok }() 473 | 474 | arg := 100 + i 475 | reply := "" 476 | // this call ought to return false. 477 | e.Call("JunkServer.Handler2", arg, &reply) 478 | ok = true 479 | }(ii) 480 | } 481 | 482 | time.Sleep(100 * time.Millisecond) 483 | 484 | // now enable the ClientEnd and check that an RPC completes quickly. 485 | t0 := time.Now() 486 | rn.Enable("c", true) 487 | { 488 | arg := 99 489 | reply := "" 490 | e.Call("JunkServer.Handler2", arg, &reply) 491 | wanted := "handler2-" + strconv.Itoa(arg) 492 | if reply != wanted { 493 | t.Fatalf("wrong reply %v from Handler2, expecting %v", reply, wanted) 494 | } 495 | } 496 | dur := time.Since(t0).Seconds() 497 | 498 | if dur > 0.03 { 499 | t.Fatalf("RPC took too long (%v) after Enable", dur) 500 | } 501 | 502 | for ii := 0; ii < nrpcs; ii++ { 503 | <-ch 504 | } 505 | 506 | js.mu.Lock() 507 | defer js.mu.Unlock() 508 | if len(js.log2) != 1 { 509 | t.Fatalf("wrong number (%v) of RPCs delivered, expected 1", len(js.log2)) 510 | } 511 | 512 | n := rn.GetCount(1000) 513 | if n != 1 { 514 | t.Fatalf("wrong GetCount() %v, expected %v\n", n, 1) 515 | } 516 | } 517 | 518 | // 519 | // if an RPC is stuck in a server, and the server 520 | // is killed with DeleteServer(), does the RPC 521 | // get un-stuck? 
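// (Handler3 sleeps for 20 seconds, so the call below is reliably still in
// flight when DeleteServer() runs.)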
522 | // 523 | func TestKilled(t *testing.T) { 524 | runtime.GOMAXPROCS(4) 525 | 526 | rn := MakeNetwork() 527 | defer rn.Cleanup() 528 | 529 | e := rn.MakeEnd("end1-99") 530 | 531 | js := &JunkServer{} 532 | svc := MakeService(js) 533 | 534 | rs := MakeServer() 535 | rs.AddService(svc) 536 | rn.AddServer("server99", rs) 537 | 538 | rn.Connect("end1-99", "server99") 539 | rn.Enable("end1-99", true) 540 | 541 | doneCh := make(chan bool) 542 | go func() { 543 | reply := 0 544 | ok := e.Call("JunkServer.Handler3", 99, &reply) 545 | doneCh <- ok 546 | }() 547 | 548 | time.Sleep(1000 * time.Millisecond) 549 | 550 | select { 551 | case <-doneCh: 552 | t.Fatalf("Handler3 should not have returned yet") 553 | case <-time.After(100 * time.Millisecond): 554 | } 555 | 556 | rn.DeleteServer("server99") 557 | 558 | select { 559 | case x := <-doneCh: 560 | if x != false { 561 | t.Fatalf("Handler3 returned successfully despite DeleteServer()") 562 | } 563 | case <-time.After(100 * time.Millisecond): 564 | t.Fatalf("Handler3 should return after DeleteServer()") 565 | } 566 | } 567 | 568 | func TestBenchmark(t *testing.T) { 569 | runtime.GOMAXPROCS(4) 570 | 571 | rn := MakeNetwork() 572 | defer rn.Cleanup() 573 | 574 | e := rn.MakeEnd("end1-99") 575 | 576 | js := &JunkServer{} 577 | svc := MakeService(js) 578 | 579 | rs := MakeServer() 580 | rs.AddService(svc) 581 | rn.AddServer("server99", rs) 582 | 583 | rn.Connect("end1-99", "server99") 584 | rn.Enable("end1-99", true) 585 | 586 | t0 := time.Now() 587 | n := 100000 588 | for iters := 0; iters < n; iters++ { 589 | reply := "" 590 | e.Call("JunkServer.Handler2", 111, &reply) 591 | if reply != "handler2-111" { 592 | t.Fatalf("wrong reply from Handler2") 593 | } 594 | } 595 | fmt.Printf("%v for %v\n", time.Since(t0), n) 596 | // march 2016, rtm laptop, 22 microseconds per RPC 597 | } 598 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/main/diskvd.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | // 4 | // start a diskvd server. it's a member of some replica 5 | // group, which has other members, and it needs to know 6 | // how to talk to the members of the shardmaster service. 7 | // used by ../diskv/test_test.go 8 | // 9 | // arguments: 10 | // -g groupid 11 | // -m masterport1 -m masterport2 ... 12 | // -s replicaport1 -s replicaport2 ... 13 | // -i my-index-in-server-port-list 14 | // -u unreliable 15 | // -d directory 16 | // -r restart 17 | 18 | import "time" 19 | import "6.5840/diskv" 20 | import "os" 21 | import "fmt" 22 | import "strconv" 23 | import "runtime" 24 | 25 | func usage() { 26 | fmt.Printf("Usage: diskvd -g gid -m master... -s server... 
-i my-index -d dir\n") 27 | os.Exit(1) 28 | } 29 | 30 | func main() { 31 | var gid int64 = -1 // my replica group ID 32 | masters := []string{} // ports of shardmasters 33 | replicas := []string{} // ports of servers in my replica group 34 | me := -1 // my index in replicas[] 35 | unreliable := false 36 | dir := "" // store persistent data here 37 | restart := false 38 | 39 | for i := 1; i+1 < len(os.Args); i += 2 { 40 | a0 := os.Args[i] 41 | a1 := os.Args[i+1] 42 | if a0 == "-g" { 43 | gid, _ = strconv.ParseInt(a1, 10, 64) 44 | } else if a0 == "-m" { 45 | masters = append(masters, a1) 46 | } else if a0 == "-s" { 47 | replicas = append(replicas, a1) 48 | } else if a0 == "-i" { 49 | me, _ = strconv.Atoi(a1) 50 | } else if a0 == "-u" { 51 | unreliable, _ = strconv.ParseBool(a1) 52 | } else if a0 == "-d" { 53 | dir = a1 54 | } else if a0 == "-r" { 55 | restart, _ = strconv.ParseBool(a1) 56 | } else { 57 | usage() 58 | } 59 | } 60 | 61 | if gid < 0 || me < 0 || len(masters) < 1 || me >= len(replicas) || dir == "" { 62 | usage() 63 | } 64 | 65 | runtime.GOMAXPROCS(4) 66 | 67 | srv := diskv.StartServer(gid, masters, replicas, me, dir, restart) 68 | srv.Setunreliable(unreliable) 69 | 70 | // for safety, force quit after 10 minutes. 71 | time.Sleep(10 * 60 * time.Second) 72 | mep, _ := os.FindProcess(os.Getpid()) 73 | mep.Kill() 74 | } 75 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/main/lockc.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | // 4 | // see comments in lockd.go 5 | // 6 | 7 | import "6.5840/lockservice" 8 | import "os" 9 | import "fmt" 10 | 11 | func usage() { 12 | fmt.Printf("Usage: lockc -l|-u primaryport backupport lockname\n") 13 | os.Exit(1) 14 | } 15 | 16 | func main() { 17 | if len(os.Args) == 5 { 18 | ck := lockservice.MakeClerk(os.Args[2], os.Args[3]) 19 | var ok bool 20 | if os.Args[1] == "-l" { 21 | ok = ck.Lock(os.Args[4]) 22 | } else if os.Args[1] == "-u" { 23 | ok = ck.Unlock(os.Args[4]) 24 | } else { 25 | usage() 26 | } 27 | fmt.Printf("reply: %v\n", ok) 28 | } else { 29 | usage() 30 | } 31 | } 32 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/main/lockd.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | // export GOPATH=~/6.5840 4 | // go build lockd.go 5 | // go build lockc.go 6 | // ./lockd -p a b & 7 | // ./lockd -b a b & 8 | // ./lockc -l a b lx 9 | // ./lockc -u a b lx 10 | // 11 | // on Athena, use /tmp/myname-a and /tmp/myname-b 12 | // instead of a and b. 
13 | 14 | import "time" 15 | import "6.5840/lockservice" 16 | import "os" 17 | import "fmt" 18 | 19 | func main() { 20 | if len(os.Args) == 4 && os.Args[1] == "-p" { 21 | lockservice.StartServer(os.Args[2], os.Args[3], true) 22 | } else if len(os.Args) == 4 && os.Args[1] == "-b" { 23 | lockservice.StartServer(os.Args[2], os.Args[3], false) 24 | } else { 25 | fmt.Printf("Usage: lockd -p|-b primaryport backupport\n") 26 | os.Exit(1) 27 | } 28 | for { 29 | time.Sleep(100 * time.Second) 30 | } 31 | } 32 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/main/mrcoordinator.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | // 4 | // start the coordinator process, which is implemented 5 | // in ../mr/coordinator.go 6 | // 7 | // go run mrcoordinator.go pg*.txt 8 | // 9 | // Please do not change this file. 10 | // 11 | 12 | import "6.5840/mr" 13 | import "time" 14 | import "os" 15 | import "fmt" 16 | 17 | func main() { 18 | if len(os.Args) < 2 { 19 | fmt.Fprintf(os.Stderr, "Usage: mrcoordinator inputfiles...\n") 20 | os.Exit(1) 21 | } 22 | 23 | m := mr.MakeCoordinator(os.Args[1:], 10) 24 | for m.Done() == false { 25 | time.Sleep(time.Second) 26 | } 27 | 28 | time.Sleep(time.Second) 29 | } 30 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/main/mrsequential.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | // 4 | // simple sequential MapReduce. 5 | // 6 | // go run mrsequential.go wc.so pg*.txt 7 | // 8 | 9 | import "fmt" 10 | import "6.5840/mr" 11 | import "plugin" 12 | import "os" 13 | import "log" 14 | import "io/ioutil" 15 | import "sort" 16 | 17 | // for sorting by key. 18 | type ByKey []mr.KeyValue 19 | 20 | // for sorting by key. 21 | func (a ByKey) Len() int { return len(a) } 22 | func (a ByKey) Swap(i, j int) { a[i], a[j] = a[j], a[i] } 23 | func (a ByKey) Less(i, j int) bool { return a[i].Key < a[j].Key } 24 | 25 | func main() { 26 | if len(os.Args) < 3 { 27 | fmt.Fprintf(os.Stderr, "Usage: mrsequential xxx.so inputfiles...\n") 28 | os.Exit(1) 29 | } 30 | 31 | mapf, reducef := loadPlugin(os.Args[1]) 32 | 33 | // 34 | // read each input file, 35 | // pass it to Map, 36 | // accumulate the intermediate Map output. 37 | // 38 | intermediate := []mr.KeyValue{} 39 | for _, filename := range os.Args[2:] { 40 | file, err := os.Open(filename) 41 | if err != nil { 42 | log.Fatalf("cannot open %v", filename) 43 | } 44 | content, err := ioutil.ReadAll(file) 45 | if err != nil { 46 | log.Fatalf("cannot read %v", filename) 47 | } 48 | file.Close() 49 | kva := mapf(filename, string(content)) 50 | intermediate = append(intermediate, kva...) 51 | } 52 | 53 | // 54 | // a big difference from real MapReduce is that all the 55 | // intermediate data is in one place, intermediate[], 56 | // rather than being partitioned into NxM buckets. 57 | // 58 | 59 | sort.Sort(ByKey(intermediate)) 60 | 61 | oname := "mr-out-0" 62 | ofile, _ := os.Create(oname) 63 | 64 | // 65 | // call Reduce on each distinct key in intermediate[], 66 | // and print the result to mr-out-0. 
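	// the loop below walks the sorted slice with two indexes: i marks the
	// start of a run of equal keys and j advances to the first different key,
	// so intermediate[i:j] holds all of the values for one key.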
67 | // 68 | i := 0 69 | for i < len(intermediate) { 70 | j := i + 1 71 | for j < len(intermediate) && intermediate[j].Key == intermediate[i].Key { 72 | j++ 73 | } 74 | values := []string{} 75 | for k := i; k < j; k++ { 76 | values = append(values, intermediate[k].Value) 77 | } 78 | output := reducef(intermediate[i].Key, values) 79 | 80 | // this is the correct format for each line of Reduce output. 81 | fmt.Fprintf(ofile, "%v %v\n", intermediate[i].Key, output) 82 | 83 | i = j 84 | } 85 | 86 | ofile.Close() 87 | } 88 | 89 | // load the application Map and Reduce functions 90 | // from a plugin file, e.g. ../mrapps/wc.so 91 | func loadPlugin(filename string) (func(string, string) []mr.KeyValue, func(string, []string) string) { 92 | p, err := plugin.Open(filename) 93 | if err != nil { 94 | log.Fatalf("cannot load plugin %v", filename) 95 | } 96 | xmapf, err := p.Lookup("Map") 97 | if err != nil { 98 | log.Fatalf("cannot find Map in %v", filename) 99 | } 100 | mapf := xmapf.(func(string, string) []mr.KeyValue) 101 | xreducef, err := p.Lookup("Reduce") 102 | if err != nil { 103 | log.Fatalf("cannot find Reduce in %v", filename) 104 | } 105 | reducef := xreducef.(func(string, []string) string) 106 | 107 | return mapf, reducef 108 | } 109 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/main/mrworker.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | // 4 | // start a worker process, which is implemented 5 | // in ../mr/worker.go. typically there will be 6 | // multiple worker processes, talking to one coordinator. 7 | // 8 | // go run mrworker.go wc.so 9 | // 10 | // Please do not change this file. 11 | // 12 | 13 | import "6.5840/mr" 14 | import "plugin" 15 | import "os" 16 | import "fmt" 17 | import "log" 18 | 19 | func main() { 20 | if len(os.Args) != 2 { 21 | fmt.Fprintf(os.Stderr, "Usage: mrworker xxx.so\n") 22 | os.Exit(1) 23 | } 24 | 25 | mapf, reducef := loadPlugin(os.Args[1]) 26 | 27 | mr.Worker(mapf, reducef) 28 | } 29 | 30 | // load the application Map and Reduce functions 31 | // from a plugin file, e.g. ../mrapps/wc.so 32 | func loadPlugin(filename string) (func(string, string) []mr.KeyValue, func(string, []string) string) { 33 | p, err := plugin.Open(filename) 34 | if err != nil { 35 | log.Fatalf("cannot load plugin %v", filename) 36 | } 37 | xmapf, err := p.Lookup("Map") 38 | if err != nil { 39 | log.Fatalf("cannot find Map in %v", filename) 40 | } 41 | mapf := xmapf.(func(string, string) []mr.KeyValue) 42 | xreducef, err := p.Lookup("Reduce") 43 | if err != nil { 44 | log.Fatalf("cannot find Reduce in %v", filename) 45 | } 46 | reducef := xreducef.(func(string, []string) string) 47 | 48 | return mapf, reducef 49 | } 50 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/main/pbc.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | // 4 | // pbservice client application 5 | // 6 | // export GOPATH=~/6.5840 7 | // go build viewd.go 8 | // go build pbd.go 9 | // go build pbc.go 10 | // ./viewd /tmp/rtm-v & 11 | // ./pbd /tmp/rtm-v /tmp/rtm-1 & 12 | // ./pbd /tmp/rtm-v /tmp/rtm-2 & 13 | // ./pbc /tmp/rtm-v key1 value1 14 | // ./pbc /tmp/rtm-v key1 15 | // 16 | // change "rtm" to your user name. 17 | // start the pbd programs in separate windows and kill 18 | // and restart them to exercise fault tolerance. 
19 | // 20 | 21 | import "6.5840/pbservice" 22 | import "os" 23 | import "fmt" 24 | 25 | func usage() { 26 | fmt.Printf("Usage: pbc viewport key\n") 27 | fmt.Printf(" pbc viewport key value\n") 28 | os.Exit(1) 29 | } 30 | 31 | func main() { 32 | if len(os.Args) == 3 { 33 | // get 34 | ck := pbservice.MakeClerk(os.Args[1], "") 35 | v := ck.Get(os.Args[2]) 36 | fmt.Printf("%v\n", v) 37 | } else if len(os.Args) == 4 { 38 | // put 39 | ck := pbservice.MakeClerk(os.Args[1], "") 40 | ck.Put(os.Args[2], os.Args[3]) 41 | } else { 42 | usage() 43 | } 44 | } 45 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/main/pbd.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | // 4 | // see directions in pbc.go 5 | // 6 | 7 | import "time" 8 | import "6.5840/pbservice" 9 | import "os" 10 | import "fmt" 11 | 12 | func main() { 13 | if len(os.Args) != 3 { 14 | fmt.Printf("Usage: pbd viewport myport\n") 15 | os.Exit(1) 16 | } 17 | 18 | pbservice.StartServer(os.Args[1], os.Args[2]) 19 | 20 | for { 21 | time.Sleep(100 * time.Second) 22 | } 23 | } 24 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/main/test-mr-many.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | if [ $# -ne 1 ]; then 4 | echo "Usage: $0 numTrials" 5 | exit 1 6 | fi 7 | 8 | trap 'kill -INT -$pid; exit 1' INT 9 | 10 | # Note: because the socketID is based on the current userID, 11 | # ./test-mr.sh cannot be run in parallel 12 | runs=$1 13 | chmod +x test-mr.sh 14 | 15 | for i in $(seq 1 $runs); do 16 | timeout -k 2s 900s ./test-mr.sh & 17 | pid=$! 18 | if ! wait $pid; then 19 | echo '***' FAILED TESTS IN TRIAL $i 20 | exit 1 21 | fi 22 | done 23 | echo '***' PASSED ALL $i TESTING TRIALS 24 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/main/test-mr.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # 4 | # map-reduce tests 5 | # 6 | 7 | # un-comment this to run the tests with the Go race detector. 8 | # RACE=-race 9 | 10 | if [[ "$OSTYPE" = "darwin"* ]] 11 | then 12 | if go version | grep 'go1.17.[012345]' 13 | then 14 | # -race with plug-ins on x86 MacOS 12 with 15 | # go1.17 before 1.17.6 sometimes crash. 16 | RACE= 17 | echo '*** Turning off -race since it may not work on a Mac' 18 | echo ' with ' `go version` 19 | fi 20 | fi 21 | 22 | ISQUIET=$1 23 | maybe_quiet() { 24 | if [ "$ISQUIET" == "quiet" ]; then 25 | "$@" > /dev/null 2>&1 26 | else 27 | "$@" 28 | fi 29 | } 30 | 31 | 32 | TIMEOUT=timeout 33 | TIMEOUT2="" 34 | if timeout 2s sleep 1 > /dev/null 2>&1 35 | then 36 | : 37 | else 38 | if gtimeout 2s sleep 1 > /dev/null 2>&1 39 | then 40 | TIMEOUT=gtimeout 41 | else 42 | # no timeout command 43 | TIMEOUT= 44 | echo '*** Cannot find timeout command; proceeding without timeouts.' 45 | fi 46 | fi 47 | if [ "$TIMEOUT" != "" ] 48 | then 49 | TIMEOUT2=$TIMEOUT 50 | TIMEOUT2+=" -k 2s 120s " 51 | TIMEOUT+=" -k 2s 45s " 52 | fi 53 | 54 | # run the test in a fresh sub-directory. 55 | rm -rf mr-tmp 56 | mkdir mr-tmp || exit 1 57 | cd mr-tmp || exit 1 58 | rm -f mr-* 59 | 60 | # make sure software is freshly built. 61 | (cd ../../mrapps && go clean) 62 | (cd .. 
&& go clean) 63 | (cd ../../mrapps && go build $RACE -buildmode=plugin wc.go) || exit 1 64 | (cd ../../mrapps && go build $RACE -buildmode=plugin indexer.go) || exit 1 65 | (cd ../../mrapps && go build $RACE -buildmode=plugin mtiming.go) || exit 1 66 | (cd ../../mrapps && go build $RACE -buildmode=plugin rtiming.go) || exit 1 67 | (cd ../../mrapps && go build $RACE -buildmode=plugin jobcount.go) || exit 1 68 | (cd ../../mrapps && go build $RACE -buildmode=plugin early_exit.go) || exit 1 69 | (cd ../../mrapps && go build $RACE -buildmode=plugin crash.go) || exit 1 70 | (cd ../../mrapps && go build $RACE -buildmode=plugin nocrash.go) || exit 1 71 | (cd .. && go build $RACE mrcoordinator.go) || exit 1 72 | (cd .. && go build $RACE mrworker.go) || exit 1 73 | (cd .. && go build $RACE mrsequential.go) || exit 1 74 | 75 | failed_any=0 76 | 77 | ######################################################### 78 | # first word-count 79 | 80 | # generate the correct output 81 | ../mrsequential ../../mrapps/wc.so ../pg*txt || exit 1 82 | sort mr-out-0 > mr-correct-wc.txt 83 | rm -f mr-out* 84 | 85 | echo '***' Starting wc test. 86 | 87 | maybe_quiet $TIMEOUT ../mrcoordinator ../pg*txt & 88 | pid=$! 89 | 90 | # give the coordinator time to create the sockets. 91 | sleep 1 92 | 93 | # start multiple workers. 94 | (maybe_quiet $TIMEOUT ../mrworker ../../mrapps/wc.so) & 95 | (maybe_quiet $TIMEOUT ../mrworker ../../mrapps/wc.so) & 96 | (maybe_quiet $TIMEOUT ../mrworker ../../mrapps/wc.so) & 97 | 98 | # wait for the coordinator to exit. 99 | wait $pid 100 | 101 | # since workers are required to exit when a job is completely finished, 102 | # and not before, that means the job has finished. 103 | sort mr-out* | grep . > mr-wc-all 104 | if cmp mr-wc-all mr-correct-wc.txt 105 | then 106 | echo '---' wc test: PASS 107 | else 108 | echo '---' wc output is not the same as mr-correct-wc.txt 109 | echo '---' wc test: FAIL 110 | failed_any=1 111 | fi 112 | 113 | # wait for remaining workers and coordinator to exit. 114 | wait 115 | 116 | ######################################################### 117 | # now indexer 118 | rm -f mr-* 119 | 120 | # generate the correct output 121 | ../mrsequential ../../mrapps/indexer.so ../pg*txt || exit 1 122 | sort mr-out-0 > mr-correct-indexer.txt 123 | rm -f mr-out* 124 | 125 | echo '***' Starting indexer test. 126 | 127 | maybe_quiet $TIMEOUT ../mrcoordinator ../pg*txt & 128 | sleep 1 129 | 130 | # start multiple workers 131 | maybe_quiet $TIMEOUT ../mrworker ../../mrapps/indexer.so & 132 | maybe_quiet $TIMEOUT ../mrworker ../../mrapps/indexer.so 133 | 134 | sort mr-out* | grep . > mr-indexer-all 135 | if cmp mr-indexer-all mr-correct-indexer.txt 136 | then 137 | echo '---' indexer test: PASS 138 | else 139 | echo '---' indexer output is not the same as mr-correct-indexer.txt 140 | echo '---' indexer test: FAIL 141 | failed_any=1 142 | fi 143 | 144 | wait 145 | 146 | ######################################################### 147 | echo '***' Starting map parallelism test. 
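# the mtiming plugin makes each map task record how many mr-worker
# processes are alive while it runs; the checks below expect the two
# workers started here and require that they overlapped at least once.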
148 | 149 | rm -f mr-* 150 | 151 | maybe_quiet $TIMEOUT ../mrcoordinator ../pg*txt & 152 | sleep 1 153 | 154 | maybe_quiet $TIMEOUT ../mrworker ../../mrapps/mtiming.so & 155 | maybe_quiet $TIMEOUT ../mrworker ../../mrapps/mtiming.so 156 | 157 | NT=`cat mr-out* | grep '^times-' | wc -l | sed 's/ //g'` 158 | if [ "$NT" != "2" ] 159 | then 160 | echo '---' saw "$NT" workers rather than 2 161 | echo '---' map parallelism test: FAIL 162 | failed_any=1 163 | fi 164 | 165 | if cat mr-out* | grep '^parallel.* 2' > /dev/null 166 | then 167 | echo '---' map parallelism test: PASS 168 | else 169 | echo '---' map workers did not run in parallel 170 | echo '---' map parallelism test: FAIL 171 | failed_any=1 172 | fi 173 | 174 | wait 175 | 176 | 177 | ######################################################### 178 | echo '***' Starting reduce parallelism test. 179 | 180 | rm -f mr-* 181 | 182 | maybe_quiet $TIMEOUT ../mrcoordinator ../pg*txt & 183 | sleep 1 184 | 185 | maybe_quiet $TIMEOUT ../mrworker ../../mrapps/rtiming.so & 186 | maybe_quiet $TIMEOUT ../mrworker ../../mrapps/rtiming.so 187 | 188 | NT=`cat mr-out* | grep '^[a-z] 2' | wc -l | sed 's/ //g'` 189 | if [ "$NT" -lt "2" ] 190 | then 191 | echo '---' too few parallel reduces. 192 | echo '---' reduce parallelism test: FAIL 193 | failed_any=1 194 | else 195 | echo '---' reduce parallelism test: PASS 196 | fi 197 | 198 | wait 199 | 200 | ######################################################### 201 | echo '***' Starting job count test. 202 | 203 | rm -f mr-* 204 | 205 | maybe_quiet $TIMEOUT ../mrcoordinator ../pg*txt & 206 | sleep 1 207 | 208 | maybe_quiet $TIMEOUT ../mrworker ../../mrapps/jobcount.so & 209 | maybe_quiet $TIMEOUT ../mrworker ../../mrapps/jobcount.so 210 | maybe_quiet $TIMEOUT ../mrworker ../../mrapps/jobcount.so & 211 | maybe_quiet $TIMEOUT ../mrworker ../../mrapps/jobcount.so 212 | 213 | NT=`cat mr-out* | awk '{print $2}'` 214 | if [ "$NT" -eq "8" ] 215 | then 216 | echo '---' job count test: PASS 217 | else 218 | echo '---' map jobs ran incorrect number of times "($NT != 8)" 219 | echo '---' job count test: FAIL 220 | failed_any=1 221 | fi 222 | 223 | wait 224 | 225 | ######################################################### 226 | # test whether any worker or coordinator exits before the 227 | # task has completed (i.e., all output files have been finalized) 228 | rm -f mr-* 229 | 230 | echo '***' Starting early exit test. 231 | 232 | DF=anydone$$ 233 | rm -f $DF 234 | 235 | (maybe_quiet $TIMEOUT ../mrcoordinator ../pg*txt; touch $DF) & 236 | 237 | # give the coordinator time to create the sockets. 238 | sleep 1 239 | 240 | # start multiple workers. 241 | (maybe_quiet $TIMEOUT ../mrworker ../../mrapps/early_exit.so; touch $DF) & 242 | (maybe_quiet $TIMEOUT ../mrworker ../../mrapps/early_exit.so; touch $DF) & 243 | (maybe_quiet $TIMEOUT ../mrworker ../../mrapps/early_exit.so; touch $DF) & 244 | 245 | # wait for any of the coord or workers to exit. 246 | # `jobs` ensures that any completed old processes from other tests 247 | # are not waited upon. 248 | jobs &> /dev/null 249 | if [[ "$OSTYPE" = "darwin"* ]] 250 | then 251 | # bash on the Mac doesn't have wait -n 252 | while [ ! -e $DF ] 253 | do 254 | sleep 0.2 255 | done 256 | else 257 | # the -n causes wait to wait for just one child process, 258 | # rather than waiting for all to finish. 259 | wait -n 260 | fi 261 | 262 | rm -f $DF 263 | 264 | # a process has exited. 
this means that the output should be finalized 265 | # otherwise, either a worker or the coordinator exited early 266 | sort mr-out* | grep . > mr-wc-all-initial 267 | 268 | # wait for remaining workers and coordinator to exit. 269 | wait 270 | 271 | # compare initial and final outputs 272 | sort mr-out* | grep . > mr-wc-all-final 273 | if cmp mr-wc-all-final mr-wc-all-initial 274 | then 275 | echo '---' early exit test: PASS 276 | else 277 | echo '---' output changed after first worker exited 278 | echo '---' early exit test: FAIL 279 | failed_any=1 280 | fi 281 | rm -f mr-* 282 | 283 | ######################################################### 284 | echo '***' Starting crash test. 285 | 286 | # generate the correct output 287 | ../mrsequential ../../mrapps/nocrash.so ../pg*txt || exit 1 288 | sort mr-out-0 > mr-correct-crash.txt 289 | rm -f mr-out* 290 | 291 | rm -f mr-done 292 | ((maybe_quiet $TIMEOUT2 ../mrcoordinator ../pg*txt); touch mr-done ) & 293 | sleep 1 294 | 295 | # start multiple workers 296 | maybe_quiet $TIMEOUT2 ../mrworker ../../mrapps/crash.so & 297 | 298 | # mimic rpc.go's coordinatorSock() 299 | SOCKNAME=/var/tmp/5840-mr-`id -u` 300 | 301 | ( while [ -e $SOCKNAME -a ! -f mr-done ] 302 | do 303 | maybe_quiet $TIMEOUT2 ../mrworker ../../mrapps/crash.so 304 | sleep 1 305 | done ) & 306 | 307 | ( while [ -e $SOCKNAME -a ! -f mr-done ] 308 | do 309 | maybe_quiet $TIMEOUT2 ../mrworker ../../mrapps/crash.so 310 | sleep 1 311 | done ) & 312 | 313 | while [ -e $SOCKNAME -a ! -f mr-done ] 314 | do 315 | maybe_quiet $TIMEOUT2 ../mrworker ../../mrapps/crash.so 316 | sleep 1 317 | done 318 | 319 | wait 320 | 321 | rm $SOCKNAME 322 | sort mr-out* | grep . > mr-crash-all 323 | if cmp mr-crash-all mr-correct-crash.txt 324 | then 325 | echo '---' crash test: PASS 326 | else 327 | echo '---' crash output is not the same as mr-correct-crash.txt 328 | echo '---' crash test: FAIL 329 | failed_any=1 330 | fi 331 | 332 | ######################################################### 333 | if [ $failed_any -eq 0 ]; then 334 | echo '***' PASSED ALL TESTS 335 | else 336 | echo '***' FAILED SOME TESTS 337 | exit 1 338 | fi 339 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/main/viewd.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | // 4 | // see directions in pbc.go 5 | // 6 | 7 | import "time" 8 | import "6.5840/viewservice" 9 | import "os" 10 | import "fmt" 11 | 12 | func main() { 13 | if len(os.Args) != 2 { 14 | fmt.Printf("Usage: viewd port\n") 15 | os.Exit(1) 16 | } 17 | 18 | viewservice.StartServer(os.Args[1]) 19 | 20 | for { 21 | time.Sleep(100 * time.Second) 22 | } 23 | } 24 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/models/kv.go: -------------------------------------------------------------------------------- 1 | package models 2 | 3 | import "6.5840/porcupine" 4 | import "fmt" 5 | import "sort" 6 | 7 | type KvInput struct { 8 | Op uint8 // 0 => get, 1 => put, 2 => append 9 | Key string 10 | Value string 11 | } 12 | 13 | type KvOutput struct { 14 | Value string 15 | } 16 | 17 | var KvModel = porcupine.Model{ 18 | Partition: func(history []porcupine.Operation) [][]porcupine.Operation { 19 | m := make(map[string][]porcupine.Operation) 20 | for _, v := range history { 21 | key := v.Input.(KvInput).Key 22 | m[key] = append(m[key], v) 23 | } 24 | keys := make([]string, 0, len(m)) 25 | for k := range m { 26 | keys = 
append(keys, k) 27 | } 28 | sort.Strings(keys) 29 | ret := make([][]porcupine.Operation, 0, len(keys)) 30 | for _, k := range keys { 31 | ret = append(ret, m[k]) 32 | } 33 | return ret 34 | }, 35 | Init: func() interface{} { 36 | // note: we are modeling a single key's value here; 37 | // we're partitioning by key, so this is okay 38 | return "" 39 | }, 40 | Step: func(state, input, output interface{}) (bool, interface{}) { 41 | inp := input.(KvInput) 42 | out := output.(KvOutput) 43 | st := state.(string) 44 | if inp.Op == 0 { 45 | // get 46 | return out.Value == st, state 47 | } else if inp.Op == 1 { 48 | // put 49 | return true, inp.Value 50 | } else { 51 | // append 52 | return true, (st + inp.Value) 53 | } 54 | }, 55 | DescribeOperation: func(input, output interface{}) string { 56 | inp := input.(KvInput) 57 | out := output.(KvOutput) 58 | switch inp.Op { 59 | case 0: 60 | return fmt.Sprintf("get('%s') -> '%s'", inp.Key, out.Value) 61 | case 1: 62 | return fmt.Sprintf("put('%s', '%s')", inp.Key, inp.Value) 63 | case 2: 64 | return fmt.Sprintf("append('%s', '%s')", inp.Key, inp.Value) 65 | default: 66 | return "" 67 | } 68 | }, 69 | } 70 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/mr/coordinator.go: -------------------------------------------------------------------------------- 1 | package mr 2 | 3 | import ( 4 | "log" 5 | "net" 6 | "net/http" 7 | "net/rpc" 8 | "os" 9 | ) 10 | 11 | type Coordinator struct { 12 | // Your definitions here. 13 | 14 | } 15 | 16 | // Your code here -- RPC handlers for the worker to call. 17 | 18 | // an example RPC handler. 19 | // 20 | // the RPC argument and reply types are defined in rpc.go. 21 | func (c *Coordinator) Example(args *ExampleArgs, reply *ExampleReply) error { 22 | reply.Y = args.X + 1 23 | return nil 24 | } 25 | 26 | // start a thread that listens for RPCs from worker.go 27 | func (c *Coordinator) server() { 28 | rpc.Register(c) 29 | rpc.HandleHTTP() 30 | //l, e := net.Listen("tcp", ":1234") 31 | sockname := coordinatorSock() 32 | os.Remove(sockname) 33 | l, e := net.Listen("unix", sockname) 34 | if e != nil { 35 | log.Fatal("listen error:", e) 36 | } 37 | go http.Serve(l, nil) 38 | } 39 | 40 | // main/mrcoordinator.go calls Done() periodically to find out 41 | // if the entire job has finished. 42 | func (c *Coordinator) Done() bool { 43 | ret := true 44 | 45 | // Your code here. 46 | 47 | return ret 48 | } 49 | 50 | // create a Coordinator. 51 | // main/mrcoordinator.go calls this function. 52 | // nReduce is the number of reduce tasks to use. 53 | func MakeCoordinator(files []string, nReduce int) *Coordinator { 54 | c := Coordinator{} 55 | 56 | // Your code here. 57 | 58 | c.server() 59 | return &c 60 | } 61 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/mr/rpc.go: -------------------------------------------------------------------------------- 1 | package mr 2 | 3 | // 4 | // RPC definitions. 5 | // 6 | // remember to capitalize all names. 7 | // 8 | 9 | import "os" 10 | import "strconv" 11 | 12 | // 13 | // example to show how to declare the arguments 14 | // and reply for an RPC. 15 | // 16 | 17 | type ExampleArgs struct { 18 | X int 19 | } 20 | 21 | type ExampleReply struct { 22 | Y int 23 | } 24 | 25 | // Add your RPC definitions here. 26 | 27 | 28 | // Cook up a unique-ish UNIX-domain socket name 29 | // in /var/tmp, for the coordinator. 
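// embedding the user id keeps sockets distinct when several users
// share a machine.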
30 | // Can't use the current directory since 31 | // Athena AFS doesn't support UNIX-domain sockets. 32 | func coordinatorSock() string { 33 | s := "/var/tmp/5840-mr-" 34 | s += strconv.Itoa(os.Getuid()) 35 | return s 36 | } 37 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/mr/worker.go: -------------------------------------------------------------------------------- 1 | package mr 2 | 3 | import ( 4 | "fmt" 5 | "hash/fnv" 6 | "log" 7 | "net/rpc" 8 | ) 9 | 10 | // Map functions return a slice of KeyValue. 11 | type KeyValue struct { 12 | Key string 13 | Value string 14 | } 15 | 16 | // use ihash(key) % NReduce to choose the reduce 17 | // task number for each KeyValue emitted by Map. 18 | func ihash(key string) int { 19 | h := fnv.New32a() 20 | h.Write([]byte(key)) 21 | return int(h.Sum32() & 0x7fffffff) 22 | } 23 | 24 | // main/mrworker.go calls this function. 25 | func Worker(mapf func(string, string) []KeyValue, 26 | reducef func(string, []string) string) { 27 | 28 | // Your worker implementation here. 29 | 30 | // uncomment to send the Example RPC to the coordinator. 31 | CallExample() 32 | 33 | } 34 | 35 | // example function to show how to make an RPC call to the coordinator. 36 | // 37 | // the RPC argument and reply types are defined in rpc.go. 38 | func CallExample() { 39 | 40 | // declare an argument structure. 41 | args := ExampleArgs{} 42 | 43 | // fill in the argument(s). 44 | args.X = 99 45 | 46 | // declare a reply structure. 47 | reply := ExampleReply{} 48 | 49 | // send the RPC request, wait for the reply. 50 | // the "Coordinator.Example" tells the 51 | // receiving server that we'd like to call 52 | // the Example() method of struct Coordinator. 53 | ok := call("Coordinator.Example", &args, &reply) 54 | if ok { 55 | // reply.Y should be 100. 56 | fmt.Printf("reply.Y %v\n", reply.Y) 57 | } else { 58 | fmt.Printf("call failed!\n") 59 | } 60 | } 61 | 62 | // send an RPC request to the coordinator, wait for the response. 63 | // usually returns true. 64 | // returns false if something goes wrong. 65 | func call(rpcname string, args interface{}, reply interface{}) bool { 66 | // c, err := rpc.DialHTTP("tcp", "127.0.0.1"+":1234") 67 | sockname := coordinatorSock() 68 | c, err := rpc.DialHTTP("unix", sockname) 69 | if err != nil { 70 | log.Fatal("dialing:", err) 71 | } 72 | defer c.Close() 73 | 74 | err = c.Call(rpcname, args, reply) 75 | if err == nil { 76 | return true 77 | } 78 | 79 | fmt.Println(err) 80 | return false 81 | } 82 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/mrapps/crash.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | // 4 | // a MapReduce pseudo-application that sometimes crashes, 5 | // and sometimes takes a long time, 6 | // to test MapReduce's ability to recover. 7 | // 8 | // go build -buildmode=plugin crash.go 9 | // 10 | 11 | import "6.5840/mr" 12 | import crand "crypto/rand" 13 | import "math/big" 14 | import "strings" 15 | import "os" 16 | import "sort" 17 | import "strconv" 18 | import "time" 19 | 20 | func maybeCrash() { 21 | max := big.NewInt(1000) 22 | rr, _ := crand.Int(crand.Reader, max) 23 | if rr.Int64() < 330 { 24 | // crash! 25 | os.Exit(1) 26 | } else if rr.Int64() < 660 { 27 | // delay for a while. 
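// the delay is up to 10 seconds, long enough to look stalled to a
// coordinator that re-assigns slow tasks.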
28 | maxms := big.NewInt(10 * 1000) 29 | ms, _ := crand.Int(crand.Reader, maxms) 30 | time.Sleep(time.Duration(ms.Int64()) * time.Millisecond) 31 | } 32 | } 33 | 34 | func Map(filename string, contents string) []mr.KeyValue { 35 | maybeCrash() 36 | 37 | kva := []mr.KeyValue{} 38 | kva = append(kva, mr.KeyValue{"a", filename}) 39 | kva = append(kva, mr.KeyValue{"b", strconv.Itoa(len(filename))}) 40 | kva = append(kva, mr.KeyValue{"c", strconv.Itoa(len(contents))}) 41 | kva = append(kva, mr.KeyValue{"d", "xyzzy"}) 42 | return kva 43 | } 44 | 45 | func Reduce(key string, values []string) string { 46 | maybeCrash() 47 | 48 | // sort values to ensure deterministic output. 49 | vv := make([]string, len(values)) 50 | copy(vv, values) 51 | sort.Strings(vv) 52 | 53 | val := strings.Join(vv, " ") 54 | return val 55 | } 56 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/mrapps/early_exit.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | // 4 | // a word-count application "plugin" for MapReduce. 5 | // 6 | // go build -buildmode=plugin wc_long.go 7 | // 8 | 9 | import ( 10 | "strconv" 11 | "strings" 12 | "time" 13 | 14 | "6.5840/mr" 15 | ) 16 | 17 | // The map function is called once for each file of input. 18 | // This map function just returns 1 for each file 19 | func Map(filename string, contents string) []mr.KeyValue { 20 | kva := []mr.KeyValue{} 21 | kva = append(kva, mr.KeyValue{filename, "1"}) 22 | return kva 23 | } 24 | 25 | // The reduce function is called once for each key generated by the 26 | // map tasks, with a list of all the values created for that key by 27 | // any map task. 28 | func Reduce(key string, values []string) string { 29 | // some reduce tasks sleep for a long time; potentially seeing if 30 | // a worker will accidentally exit early 31 | if strings.Contains(key, "sherlock") || strings.Contains(key, "tom") { 32 | time.Sleep(time.Duration(3 * time.Second)) 33 | } 34 | // return the number of occurrences of this file. 35 | return strconv.Itoa(len(values)) 36 | } 37 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/mrapps/indexer.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | // 4 | // an indexing application "plugin" for MapReduce. 5 | // 6 | // go build -buildmode=plugin indexer.go 7 | // 8 | 9 | import "fmt" 10 | import "6.5840/mr" 11 | 12 | import "strings" 13 | import "unicode" 14 | import "sort" 15 | 16 | // The mapping function is called once for each piece of the input. 17 | // In this framework, the key is the name of the file that is being processed, 18 | // and the value is the file's contents. The return value should be a slice of 19 | // key/value pairs, each represented by a mr.KeyValue. 20 | func Map(document string, value string) (res []mr.KeyValue) { 21 | m := make(map[string]bool) 22 | words := strings.FieldsFunc(value, func(x rune) bool { return !unicode.IsLetter(x) }) 23 | for _, w := range words { 24 | m[w] = true 25 | } 26 | for w := range m { 27 | kv := mr.KeyValue{w, document} 28 | res = append(res, kv) 29 | } 30 | return 31 | } 32 | 33 | // The reduce function is called once for each key generated by Map, with a 34 | // list of that key's string value (merged across all inputs). The return value 35 | // should be a single output value for that key. 
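// Here that value is "<count> <comma-separated document list>"; for
// example, a word seen in two books reduces to
// "2 pg-grimm.txt,pg-tom_sawyer.txt".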
36 | func Reduce(key string, values []string) string { 37 | sort.Strings(values) 38 | return fmt.Sprintf("%d %s", len(values), strings.Join(values, ",")) 39 | } 40 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/mrapps/jobcount.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | // 4 | // a MapReduce pseudo-application that counts the number of times map/reduce 5 | // tasks are run, to test whether jobs are assigned multiple times even when 6 | // there is no failure. 7 | // 8 | // go build -buildmode=plugin crash.go 9 | // 10 | 11 | import "6.5840/mr" 12 | import "math/rand" 13 | import "strings" 14 | import "strconv" 15 | import "time" 16 | import "fmt" 17 | import "os" 18 | import "io/ioutil" 19 | 20 | var count int 21 | 22 | func Map(filename string, contents string) []mr.KeyValue { 23 | me := os.Getpid() 24 | f := fmt.Sprintf("mr-worker-jobcount-%d-%d", me, count) 25 | count++ 26 | err := ioutil.WriteFile(f, []byte("x"), 0666) 27 | if err != nil { 28 | panic(err) 29 | } 30 | time.Sleep(time.Duration(2000+rand.Intn(3000)) * time.Millisecond) 31 | return []mr.KeyValue{mr.KeyValue{"a", "x"}} 32 | } 33 | 34 | func Reduce(key string, values []string) string { 35 | files, err := ioutil.ReadDir(".") 36 | if err != nil { 37 | panic(err) 38 | } 39 | invocations := 0 40 | for _, f := range files { 41 | if strings.HasPrefix(f.Name(), "mr-worker-jobcount") { 42 | invocations++ 43 | } 44 | } 45 | return strconv.Itoa(invocations) 46 | } 47 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/mrapps/mtiming.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | // 4 | // a MapReduce pseudo-application to test that workers 5 | // execute map tasks in parallel. 6 | // 7 | // go build -buildmode=plugin mtiming.go 8 | // 9 | 10 | import "6.5840/mr" 11 | import "strings" 12 | import "fmt" 13 | import "os" 14 | import "syscall" 15 | import "time" 16 | import "sort" 17 | import "io/ioutil" 18 | 19 | func nparallel(phase string) int { 20 | // create a file so that other workers will see that 21 | // we're running at the same time as them. 22 | pid := os.Getpid() 23 | myfilename := fmt.Sprintf("mr-worker-%s-%d", phase, pid) 24 | err := ioutil.WriteFile(myfilename, []byte("x"), 0666) 25 | if err != nil { 26 | panic(err) 27 | } 28 | 29 | // are any other workers running? 30 | // find their PIDs by scanning directory for mr-worker-XXX files. 31 | dd, err := os.Open(".") 32 | if err != nil { 33 | panic(err) 34 | } 35 | names, err := dd.Readdirnames(1000000) 36 | if err != nil { 37 | panic(err) 38 | } 39 | ret := 0 40 | for _, name := range names { 41 | var xpid int 42 | pat := fmt.Sprintf("mr-worker-%s-%%d", phase) 43 | n, err := fmt.Sscanf(name, pat, &xpid) 44 | if n == 1 && err == nil { 45 | err := syscall.Kill(xpid, 0) 46 | if err == nil { 47 | // if err == nil, xpid is alive. 
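// (kill with signal 0 delivers nothing; it only reports whether the
// target process still exists.)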
48 | ret += 1 49 | } 50 | } 51 | } 52 | dd.Close() 53 | 54 | time.Sleep(1 * time.Second) 55 | 56 | err = os.Remove(myfilename) 57 | if err != nil { 58 | panic(err) 59 | } 60 | 61 | return ret 62 | } 63 | 64 | func Map(filename string, contents string) []mr.KeyValue { 65 | t0 := time.Now() 66 | ts := float64(t0.Unix()) + (float64(t0.Nanosecond()) / 1000000000.0) 67 | pid := os.Getpid() 68 | 69 | n := nparallel("map") 70 | 71 | kva := []mr.KeyValue{} 72 | kva = append(kva, mr.KeyValue{ 73 | fmt.Sprintf("times-%v", pid), 74 | fmt.Sprintf("%.1f", ts)}) 75 | kva = append(kva, mr.KeyValue{ 76 | fmt.Sprintf("parallel-%v", pid), 77 | fmt.Sprintf("%d", n)}) 78 | return kva 79 | } 80 | 81 | func Reduce(key string, values []string) string { 82 | //n := nparallel("reduce") 83 | 84 | // sort values to ensure deterministic output. 85 | vv := make([]string, len(values)) 86 | copy(vv, values) 87 | sort.Strings(vv) 88 | 89 | val := strings.Join(vv, " ") 90 | return val 91 | } 92 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/mrapps/nocrash.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | // 4 | // same as crash.go but doesn't actually crash. 5 | // 6 | // go build -buildmode=plugin nocrash.go 7 | // 8 | 9 | import "6.5840/mr" 10 | import crand "crypto/rand" 11 | import "math/big" 12 | import "strings" 13 | import "os" 14 | import "sort" 15 | import "strconv" 16 | 17 | func maybeCrash() { 18 | max := big.NewInt(1000) 19 | rr, _ := crand.Int(crand.Reader, max) 20 | if false && rr.Int64() < 500 { 21 | // crash! 22 | os.Exit(1) 23 | } 24 | } 25 | 26 | func Map(filename string, contents string) []mr.KeyValue { 27 | maybeCrash() 28 | 29 | kva := []mr.KeyValue{} 30 | kva = append(kva, mr.KeyValue{"a", filename}) 31 | kva = append(kva, mr.KeyValue{"b", strconv.Itoa(len(filename))}) 32 | kva = append(kva, mr.KeyValue{"c", strconv.Itoa(len(contents))}) 33 | kva = append(kva, mr.KeyValue{"d", "xyzzy"}) 34 | return kva 35 | } 36 | 37 | func Reduce(key string, values []string) string { 38 | maybeCrash() 39 | 40 | // sort values to ensure deterministic output. 41 | vv := make([]string, len(values)) 42 | copy(vv, values) 43 | sort.Strings(vv) 44 | 45 | val := strings.Join(vv, " ") 46 | return val 47 | } 48 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/mrapps/rtiming.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | // 4 | // a MapReduce pseudo-application to test that workers 5 | // execute reduce tasks in parallel. 6 | // 7 | // go build -buildmode=plugin rtiming.go 8 | // 9 | 10 | import "6.5840/mr" 11 | import "fmt" 12 | import "os" 13 | import "syscall" 14 | import "time" 15 | import "io/ioutil" 16 | 17 | func nparallel(phase string) int { 18 | // create a file so that other workers will see that 19 | // we're running at the same time as them. 20 | pid := os.Getpid() 21 | myfilename := fmt.Sprintf("mr-worker-%s-%d", phase, pid) 22 | err := ioutil.WriteFile(myfilename, []byte("x"), 0666) 23 | if err != nil { 24 | panic(err) 25 | } 26 | 27 | // are any other workers running? 28 | // find their PIDs by scanning directory for mr-worker-XXX files. 
29 | dd, err := os.Open(".") 30 | if err != nil { 31 | panic(err) 32 | } 33 | names, err := dd.Readdirnames(1000000) 34 | if err != nil { 35 | panic(err) 36 | } 37 | ret := 0 38 | for _, name := range names { 39 | var xpid int 40 | pat := fmt.Sprintf("mr-worker-%s-%%d", phase) 41 | n, err := fmt.Sscanf(name, pat, &xpid) 42 | if n == 1 && err == nil { 43 | err := syscall.Kill(xpid, 0) 44 | if err == nil { 45 | // if err == nil, xpid is alive. 46 | ret += 1 47 | } 48 | } 49 | } 50 | dd.Close() 51 | 52 | time.Sleep(1 * time.Second) 53 | 54 | err = os.Remove(myfilename) 55 | if err != nil { 56 | panic(err) 57 | } 58 | 59 | return ret 60 | } 61 | 62 | func Map(filename string, contents string) []mr.KeyValue { 63 | 64 | kva := []mr.KeyValue{} 65 | kva = append(kva, mr.KeyValue{"a", "1"}) 66 | kva = append(kva, mr.KeyValue{"b", "1"}) 67 | kva = append(kva, mr.KeyValue{"c", "1"}) 68 | kva = append(kva, mr.KeyValue{"d", "1"}) 69 | kva = append(kva, mr.KeyValue{"e", "1"}) 70 | kva = append(kva, mr.KeyValue{"f", "1"}) 71 | kva = append(kva, mr.KeyValue{"g", "1"}) 72 | kva = append(kva, mr.KeyValue{"h", "1"}) 73 | kva = append(kva, mr.KeyValue{"i", "1"}) 74 | kva = append(kva, mr.KeyValue{"j", "1"}) 75 | return kva 76 | } 77 | 78 | func Reduce(key string, values []string) string { 79 | n := nparallel("reduce") 80 | 81 | val := fmt.Sprintf("%d", n) 82 | 83 | return val 84 | } 85 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/mrapps/wc.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | // 4 | // a word-count application "plugin" for MapReduce. 5 | // 6 | // go build -buildmode=plugin wc.go 7 | // 8 | 9 | import "6.5840/mr" 10 | import "unicode" 11 | import "strings" 12 | import "strconv" 13 | 14 | // The map function is called once for each file of input. The first 15 | // argument is the name of the input file, and the second is the 16 | // file's complete contents. You should ignore the input file name, 17 | // and look only at the contents argument. The return value is a slice 18 | // of key/value pairs. 19 | func Map(filename string, contents string) []mr.KeyValue { 20 | // function to detect word separators. 21 | ff := func(r rune) bool { return !unicode.IsLetter(r) } 22 | 23 | // split contents into an array of words. 24 | words := strings.FieldsFunc(contents, ff) 25 | 26 | kva := []mr.KeyValue{} 27 | for _, w := range words { 28 | kv := mr.KeyValue{w, "1"} 29 | kva = append(kva, kv) 30 | } 31 | return kva 32 | } 33 | 34 | // The reduce function is called once for each key generated by the 35 | // map tasks, with a list of all the values created for that key by 36 | // any map task. 37 | func Reduce(key string, values []string) string { 38 | // return the number of occurrences of this word. 39 | return strconv.Itoa(len(values)) 40 | } 41 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/porcupine/bitset.go: -------------------------------------------------------------------------------- 1 | package porcupine 2 | 3 | import "math/bits" 4 | 5 | type bitset []uint64 6 | 7 | // data layout: 8 | // bits 0-63 are in data[0], the next are in data[1], etc. 
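// e.g. bit 67 lives in data[1] at bit position 3 (67/64 = 1, 67%64 = 3).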
9 |  10 | func newBitset(bits uint) bitset { 11 | extra := uint(0) 12 | if bits%64 != 0 { 13 | extra = 1 14 | } 15 | chunks := bits/64 + extra 16 | return bitset(make([]uint64, chunks)) 17 | } 18 |  19 | func (b bitset) clone() bitset { 20 | dataCopy := make([]uint64, len(b)) 21 | copy(dataCopy, b) 22 | return bitset(dataCopy) 23 | } 24 |  25 | func bitsetIndex(pos uint) (uint, uint) { 26 | return pos / 64, pos % 64 27 | } 28 |  29 | func (b bitset) set(pos uint) bitset { 30 | major, minor := bitsetIndex(pos) 31 | b[major] |= (1 << minor) 32 | return b 33 | } 34 |  35 | func (b bitset) clear(pos uint) bitset { 36 | major, minor := bitsetIndex(pos) 37 | b[major] &^= (1 << minor) 38 | return b 39 | } 40 |  41 | func (b bitset) get(pos uint) bool { 42 | major, minor := bitsetIndex(pos) 43 | return b[major]&(1<<minor) != 0 44 | } 45 |  46 | func (b bitset) popcnt() uint { 47 | total := 0 48 | for _, v := range b { 49 | total += bits.OnesCount64(v) 50 | } 51 | return uint(total) 52 | } 53 |  54 | func (b bitset) hash() uint64 { 55 | hash := uint64(14695981039346656037) 56 | for _, v := range b { 57 | hash = (hash ^ v) * 1099511628211 58 | } 59 | return hash 60 | } 61 |  62 | func (b bitset) equals(b2 bitset) bool { 63 | if len(b) != len(b2) { 64 | return false 65 | } 66 | for i, v := range b { 67 | if v != b2[i] { 68 | return false 69 | } 70 | } 71 | return true 72 | } 73 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/porcupine/checker.go: -------------------------------------------------------------------------------- 1 | package porcupine 2 |  3 | import ( 4 | "sort" 5 | "sync/atomic" 6 | "time" 7 | ) 8 |  9 | type entryKind bool 10 |  11 | const ( 12 | callEntry entryKind = false 13 | returnEntry entryKind = true 14 | ) 15 |  16 | type entry struct { 17 | kind entryKind 18 | value interface{} 19 | id int 20 | time int64 21 | clientId int 22 | } 23 |  24 | type linearizationInfo struct { 25 | history [][]entry // for each partition 26 | partialLinearizations [][][]int // for each partition, a set of histories 27 | } 28 |  29 | type byTime []entry 30 |  31 | func (a byTime) Len() int { 32 | return len(a) 33 | } 34 |  35 | func (a byTime) Swap(i, j int) { 36 | a[i], a[j] = a[j], a[i] 37 | } 38 |  39 | func (a byTime) Less(i, j int) bool { 40 | if a[i].time != a[j].time { 41 | return a[i].time < a[j].time 42 | } 43 | // if the timestamps are the same, we need to make sure we order calls 44 | // before returns 45 | return a[i].kind == callEntry && a[j].kind == returnEntry 46 | } 47 |  48 | func makeEntries(history []Operation) []entry { 49 | var entries []entry = nil 50 | id := 0 51 | for _, elem := range history { 52 | entries = append(entries, entry{ 53 | callEntry, elem.Input, id, elem.Call, elem.ClientId}) 54 | entries = append(entries, entry{ 55 | returnEntry, elem.Output, id, elem.Return, elem.ClientId}) 56 | id++ 57 | } 58 | sort.Stable(byTime(entries)) 59 | return entries 60 | } 61 |  62 | type node struct { 63 | value interface{} 64 | match *node // call if match is nil, otherwise return 65 | id int 66 | next *node 67 | prev *node 68 | } 69 |  70 | func insertBefore(n *node, mark *node) *node { 71 | if mark != nil { 72 | beforeMark := mark.prev 73 | mark.prev = n 74 | n.next = mark 75 | if beforeMark != nil { 76 | n.prev = beforeMark 77 | beforeMark.next = n 78 | } 79 | } 80 | return n 81 | } 82 |  83 | func length(n *node) int { 84 | l := 0 85 | for n != nil { 86 | n = n.next 87 | l++ 88 | } 89 | return l 90 | } 91 |  92 | func renumber(events []Event) []Event { 93 | var e []Event 94 | m := make(map[int]int) // renumbering 95 | id := 0 96 | for _, v := range events { 97 | if r, ok := m[v.Id]; ok { 98 | e = append(e, Event{v.ClientId, v.Kind, v.Value, r}) 99 | } else { 100 | e = append(e, Event{v.ClientId, v.Kind, v.Value, id}) 101 | m[v.Id] = id 102 | id++ 103 | } 104 | } 105 | return e 106 | } 107 |  108 | func convertEntries(events []Event) []entry { 109 | var entries []entry 110 | for i, elem := range events { 111 | kind := callEntry 112 | if elem.Kind == ReturnEvent { 113 | kind = returnEntry 114 | } 115 | // use the index as the time, since events are already ordered 116 | entries = append(entries, entry{kind, elem.Value, elem.Id, int64(i), elem.ClientId}) 117 | } 118 | return entries 119 | } 120 |  121 | func makeLinkedEntries(entries []entry) *node { 122 | var root *node = nil 123 | match := make(map[int]*node) 124 | for i := len(entries) - 1; i >= 0; i-- { 125 | elem := entries[i] 126 | if elem.kind == returnEntry { 127 | entry := &node{value: elem.value, match: nil, id: elem.id} 128 | match[elem.id] = entry 129 | insertBefore(entry, root) 130 | root = entry 131 | } else { 132 | entry := &node{value: elem.value, match: match[elem.id], id: elem.id} 133 | insertBefore(entry, root) 134 | root = entry 135 | } 136 | } 137 | return root 138 | } 139 |  140 | type cacheEntry struct { 141 | linearized bitset 142 | state interface{} 143 | } 144 |  145 | func cacheContains(model Model, cache map[uint64][]cacheEntry, entry cacheEntry) bool { 146 | for _, elem := range cache[entry.linearized.hash()] { 147 | if entry.linearized.equals(elem.linearized) && model.Equal(entry.state, elem.state) { 148 | return true 149 | } 150 | } 151 | return false 152 | } 153 |  154 | type callsEntry struct { 155 | entry *node 156 | state interface{} 157 | } 158 |  159 | func lift(entry *node) { 160 | entry.prev.next = entry.next 161 | entry.next.prev = entry.prev 162 | match := entry.match 163 | match.prev.next = match.next 164 | if match.next != nil { 165 | match.next.prev = match.prev 166 | } 167 | } 168 |  169 | func unlift(entry *node) { 170 | match := entry.match 171 | match.prev.next = match 172 | if match.next != nil { 173 | match.next.prev = match 174 | } 175 | entry.prev.next = entry 176 | entry.next.prev = entry 177 | } 178 |  179 | func checkSingle(model Model, history []entry, computePartial bool, kill *int32) (bool, []*[]int) { 180 | entry := makeLinkedEntries(history) 181 | n := length(entry) / 2 182 | linearized := newBitset(uint(n)) 183 | cache := make(map[uint64][]cacheEntry) // map from hash to cache entry 184 | var calls []callsEntry 185 | // longest linearizable prefix that includes the given entry 186 | longest := make([]*[]int, n) 187 |  188 | state := model.Init() 189 | headEntry := insertBefore(&node{value: nil, match: nil, id: -1}, entry) 190 | for headEntry.next != nil { 191 | if atomic.LoadInt32(kill) != 0 { 192 | return false, longest 193 | } 194 | if entry.match != nil { 195 | matching := entry.match // the return entry 196 | ok, newState := model.Step(state, entry.value, matching.value) 197 | if ok { 198 | newLinearized := linearized.clone().set(uint(entry.id)) 199 | newCacheEntry := cacheEntry{newLinearized, newState} 200 | if !cacheContains(model, cache, newCacheEntry) { 201 | hash := newLinearized.hash() 202 | cache[hash] = append(cache[hash], newCacheEntry) 203 | calls = append(calls, callsEntry{entry, state}) 204 | state = newState 205 | linearized.set(uint(entry.id)) 206 | lift(entry) 207 | entry = headEntry.next 208 | } else { 209 | entry = entry.next 210 | } 211 | } else { 212 | entry = entry.next 213 | } 214 | } else { 215 | if
len(calls) == 0 { 216 | return false, longest 217 | } 218 | // longest 219 | if computePartial { 220 | callsLen := len(calls) 221 | var seq []int = nil 222 | for _, v := range calls { 223 | if longest[v.entry.id] == nil || callsLen > len(*longest[v.entry.id]) { 224 | // create seq lazily 225 | if seq == nil { 226 | seq = make([]int, len(calls)) 227 | for i, v := range calls { 228 | seq[i] = v.entry.id 229 | } 230 | } 231 | longest[v.entry.id] = &seq 232 | } 233 | } 234 | } 235 | callsTop := calls[len(calls)-1] 236 | entry = callsTop.entry 237 | state = callsTop.state 238 | linearized.clear(uint(entry.id)) 239 | calls = calls[:len(calls)-1] 240 | unlift(entry) 241 | entry = entry.next 242 | } 243 | } 244 | // longest linearization is the complete linearization, which is calls 245 | seq := make([]int, len(calls)) 246 | for i, v := range calls { 247 | seq[i] = v.entry.id 248 | } 249 | for i := 0; i < n; i++ { 250 | longest[i] = &seq 251 | } 252 | return true, longest 253 | } 254 | 255 | func fillDefault(model Model) Model { 256 | if model.Partition == nil { 257 | model.Partition = NoPartition 258 | } 259 | if model.PartitionEvent == nil { 260 | model.PartitionEvent = NoPartitionEvent 261 | } 262 | if model.Equal == nil { 263 | model.Equal = ShallowEqual 264 | } 265 | if model.DescribeOperation == nil { 266 | model.DescribeOperation = DefaultDescribeOperation 267 | } 268 | if model.DescribeState == nil { 269 | model.DescribeState = DefaultDescribeState 270 | } 271 | return model 272 | } 273 | 274 | func checkParallel(model Model, history [][]entry, computeInfo bool, timeout time.Duration) (CheckResult, linearizationInfo) { 275 | ok := true 276 | timedOut := false 277 | results := make(chan bool, len(history)) 278 | longest := make([][]*[]int, len(history)) 279 | kill := int32(0) 280 | for i, subhistory := range history { 281 | go func(i int, subhistory []entry) { 282 | ok, l := checkSingle(model, subhistory, computeInfo, &kill) 283 | longest[i] = l 284 | results <- ok 285 | }(i, subhistory) 286 | } 287 | var timeoutChan <-chan time.Time 288 | if timeout > 0 { 289 | timeoutChan = time.After(timeout) 290 | } 291 | count := 0 292 | loop: 293 | for { 294 | select { 295 | case result := <-results: 296 | count++ 297 | ok = ok && result 298 | if !ok && !computeInfo { 299 | atomic.StoreInt32(&kill, 1) 300 | break loop 301 | } 302 | if count >= len(history) { 303 | break loop 304 | } 305 | case <-timeoutChan: 306 | timedOut = true 307 | atomic.StoreInt32(&kill, 1) 308 | break loop // if we time out, we might get a false positive 309 | } 310 | } 311 | var info linearizationInfo 312 | if computeInfo { 313 | // make sure we've waited for all goroutines to finish, 314 | // otherwise we might race on access to longest[] 315 | for count < len(history) { 316 | <-results 317 | count++ 318 | } 319 | // return longest linearizable prefixes that include each history element 320 | partialLinearizations := make([][][]int, len(history)) 321 | for i := 0; i < len(history); i++ { 322 | var partials [][]int 323 | // turn longest into a set of unique linearizations 324 | set := make(map[*[]int]struct{}) 325 | for _, v := range longest[i] { 326 | if v != nil { 327 | set[v] = struct{}{} 328 | } 329 | } 330 | for k := range set { 331 | arr := make([]int, len(*k)) 332 | for i, v := range *k { 333 | arr[i] = v 334 | } 335 | partials = append(partials, arr) 336 | } 337 | partialLinearizations[i] = partials 338 | } 339 | info.history = history 340 | info.partialLinearizations = partialLinearizations 341 | } 342 | var result 
CheckResult 343 | if !ok { 344 | result = Illegal 345 | } else { 346 | if timedOut { 347 | result = Unknown 348 | } else { 349 | result = Ok 350 | } 351 | } 352 | return result, info 353 | } 354 | 355 | func checkEvents(model Model, history []Event, verbose bool, timeout time.Duration) (CheckResult, linearizationInfo) { 356 | model = fillDefault(model) 357 | partitions := model.PartitionEvent(history) 358 | l := make([][]entry, len(partitions)) 359 | for i, subhistory := range partitions { 360 | l[i] = convertEntries(renumber(subhistory)) 361 | } 362 | return checkParallel(model, l, verbose, timeout) 363 | } 364 | 365 | func checkOperations(model Model, history []Operation, verbose bool, timeout time.Duration) (CheckResult, linearizationInfo) { 366 | model = fillDefault(model) 367 | partitions := model.Partition(history) 368 | l := make([][]entry, len(partitions)) 369 | for i, subhistory := range partitions { 370 | l[i] = makeEntries(subhistory) 371 | } 372 | return checkParallel(model, l, verbose, timeout) 373 | } 374 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/porcupine/model.go: -------------------------------------------------------------------------------- 1 | package porcupine 2 | 3 | import "fmt" 4 | 5 | type Operation struct { 6 | ClientId int // optional, unless you want a visualization; zero-indexed 7 | Input interface{} 8 | Call int64 // invocation time 9 | Output interface{} 10 | Return int64 // response time 11 | } 12 | 13 | type EventKind bool 14 | 15 | const ( 16 | CallEvent EventKind = false 17 | ReturnEvent EventKind = true 18 | ) 19 | 20 | type Event struct { 21 | ClientId int // optional, unless you want a visualization; zero-indexed 22 | Kind EventKind 23 | Value interface{} 24 | Id int 25 | } 26 | 27 | type Model struct { 28 | // Partition functions, such that a history is linearizable if and only 29 | // if each partition is linearizable. If you don't want to implement 30 | // this, you can always use the `NoPartition` functions implemented 31 | // below. 32 | Partition func(history []Operation) [][]Operation 33 | PartitionEvent func(history []Event) [][]Event 34 | // Initial state of the system. 35 | Init func() interface{} 36 | // Step function for the system. Returns whether or not the system 37 | // could take this step with the given inputs and outputs and also 38 | // returns the new state. This should not mutate the existing state. 39 | Step func(state interface{}, input interface{}, output interface{}) (bool, interface{}) 40 | // Equality on states. If you are using a simple data type for states, 41 | // you can use the `ShallowEqual` function implemented below. 42 | Equal func(state1, state2 interface{}) bool 43 | // For visualization, describe an operation as a string. 44 | // For example, "Get('x') -> 'y'". 45 | DescribeOperation func(input interface{}, output interface{}) string 46 | // For visualization purposes, describe a state as a string. 
47 | // For example, "{'x' -> 'y', 'z' -> 'w'}" 48 | DescribeState func(state interface{}) string 49 | } 50 | 51 | func NoPartition(history []Operation) [][]Operation { 52 | return [][]Operation{history} 53 | } 54 | 55 | func NoPartitionEvent(history []Event) [][]Event { 56 | return [][]Event{history} 57 | } 58 | 59 | func ShallowEqual(state1, state2 interface{}) bool { 60 | return state1 == state2 61 | } 62 | 63 | func DefaultDescribeOperation(input interface{}, output interface{}) string { 64 | return fmt.Sprintf("%v -> %v", input, output) 65 | } 66 | 67 | func DefaultDescribeState(state interface{}) string { 68 | return fmt.Sprintf("%v", state) 69 | } 70 | 71 | type CheckResult string 72 | 73 | const ( 74 | Unknown CheckResult = "Unknown" // timed out 75 | Ok = "Ok" 76 | Illegal = "Illegal" 77 | ) 78 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/porcupine/porcupine.go: -------------------------------------------------------------------------------- 1 | package porcupine 2 | 3 | import "time" 4 | 5 | func CheckOperations(model Model, history []Operation) bool { 6 | res, _ := checkOperations(model, history, false, 0) 7 | return res == Ok 8 | } 9 | 10 | // timeout = 0 means no timeout 11 | // if this operation times out, then a false positive is possible 12 | func CheckOperationsTimeout(model Model, history []Operation, timeout time.Duration) CheckResult { 13 | res, _ := checkOperations(model, history, false, timeout) 14 | return res 15 | } 16 | 17 | // timeout = 0 means no timeout 18 | // if this operation times out, then a false positive is possible 19 | func CheckOperationsVerbose(model Model, history []Operation, timeout time.Duration) (CheckResult, linearizationInfo) { 20 | return checkOperations(model, history, true, timeout) 21 | } 22 | 23 | func CheckEvents(model Model, history []Event) bool { 24 | res, _ := checkEvents(model, history, false, 0) 25 | return res == Ok 26 | } 27 | 28 | // timeout = 0 means no timeout 29 | // if this operation times out, then a false positive is possible 30 | func CheckEventsTimeout(model Model, history []Event, timeout time.Duration) CheckResult { 31 | res, _ := checkEvents(model, history, false, timeout) 32 | return res 33 | } 34 | 35 | // timeout = 0 means no timeout 36 | // if this operation times out, then a false positive is possible 37 | func CheckEventsVerbose(model Model, history []Event, timeout time.Duration) (CheckResult, linearizationInfo) { 38 | return checkEvents(model, history, true, timeout) 39 | } 40 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/raft/persister.go: -------------------------------------------------------------------------------- 1 | package raft 2 | 3 | // 4 | // support for Raft and kvraft to save persistent 5 | // Raft state (log &c) and k/v server snapshots. 6 | // 7 | // we will use the original persister.go to test your code for grading. 8 | // so, while you can modify this code to help you debug, please 9 | // test with the original before submitting. 
10 | // 11 | 12 | import "sync" 13 | 14 | type Persister struct { 15 | mu sync.Mutex 16 | raftstate []byte 17 | snapshot []byte 18 | } 19 | 20 | func MakePersister() *Persister { 21 | return &Persister{} 22 | } 23 | 24 | func clone(orig []byte) []byte { 25 | x := make([]byte, len(orig)) 26 | copy(x, orig) 27 | return x 28 | } 29 | 30 | func (ps *Persister) Copy() *Persister { 31 | ps.mu.Lock() 32 | defer ps.mu.Unlock() 33 | np := MakePersister() 34 | np.raftstate = ps.raftstate 35 | np.snapshot = ps.snapshot 36 | return np 37 | } 38 | 39 | func (ps *Persister) ReadRaftState() []byte { 40 | ps.mu.Lock() 41 | defer ps.mu.Unlock() 42 | return clone(ps.raftstate) 43 | } 44 | 45 | func (ps *Persister) RaftStateSize() int { 46 | ps.mu.Lock() 47 | defer ps.mu.Unlock() 48 | return len(ps.raftstate) 49 | } 50 | 51 | // Save both Raft state and K/V snapshot as a single atomic action, 52 | // to help avoid them getting out of sync. 53 | func (ps *Persister) Save(raftstate []byte, snapshot []byte) { 54 | ps.mu.Lock() 55 | defer ps.mu.Unlock() 56 | ps.raftstate = clone(raftstate) 57 | ps.snapshot = clone(snapshot) 58 | } 59 | 60 | func (ps *Persister) ReadSnapshot() []byte { 61 | ps.mu.Lock() 62 | defer ps.mu.Unlock() 63 | return clone(ps.snapshot) 64 | } 65 | 66 | func (ps *Persister) SnapshotSize() int { 67 | ps.mu.Lock() 68 | defer ps.mu.Unlock() 69 | return len(ps.snapshot) 70 | } 71 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/raft/raft.go: -------------------------------------------------------------------------------- 1 | package raft 2 | 3 | // 4 | // this is an outline of the API that raft must expose to 5 | // the service (or tester). see comments below for 6 | // each of these functions for more details. 7 | // 8 | // rf = Make(...) 9 | // create a new Raft server. 10 | // rf.Start(command interface{}) (index, term, isleader) 11 | // start agreement on a new log entry 12 | // rf.GetState() (term, isLeader) 13 | // ask a Raft for its current term, and whether it thinks it is leader 14 | // ApplyMsg 15 | // each time a new entry is committed to the log, each Raft peer 16 | // should send an ApplyMsg to the service (or tester) 17 | // in the same server. 18 | // 19 | 20 | import ( 21 | // "bytes" 22 | "math/rand" 23 | "sync" 24 | "sync/atomic" 25 | "time" 26 | 27 | // "6.5840/labgob" 28 | "6.5840/labrpc" 29 | ) 30 | 31 | 32 | // as each Raft peer becomes aware that successive log entries are 33 | // committed, the peer should send an ApplyMsg to the service (or 34 | // tester) on the same server, via the applyCh passed to Make(). set 35 | // CommandValid to true to indicate that the ApplyMsg contains a newly 36 | // committed log entry. 37 | // 38 | // in part 2D you'll want to send other kinds of messages (e.g., 39 | // snapshots) on the applyCh, but set CommandValid to false for these 40 | // other uses. 41 | type ApplyMsg struct { 42 | CommandValid bool 43 | Command interface{} 44 | CommandIndex int 45 | 46 | // For 2D: 47 | SnapshotValid bool 48 | Snapshot []byte 49 | SnapshotTerm int 50 | SnapshotIndex int 51 | } 52 | 53 | // A Go object implementing a single Raft peer. 54 | type Raft struct { 55 | mu sync.Mutex // Lock to protect shared access to this peer's state 56 | peers []*labrpc.ClientEnd // RPC end points of all peers 57 | persister *Persister // Object to hold this peer's persisted state 58 | me int // this peer's index into peers[] 59 | dead int32 // set by Kill() 60 | 61 | // Your data here (2A, 2B, 2C). 
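// One possible starting point (a sketch; the names are not required):
// currentTerm, votedFor, and a slice of log entries, plus the volatile
// commitIndex and lastApplied once 2B is underway.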
62 | // Look at the paper's Figure 2 for a description of what 63 | // state a Raft server must maintain. 64 | 65 | } 66 | 67 | // return currentTerm and whether this server 68 | // believes it is the leader. 69 | func (rf *Raft) GetState() (int, bool) { 70 | 71 | var term int 72 | var isleader bool 73 | // Your code here (2A). 74 | return term, isleader 75 | } 76 | 77 | // save Raft's persistent state to stable storage, 78 | // where it can later be retrieved after a crash and restart. 79 | // see paper's Figure 2 for a description of what should be persistent. 80 | // before you've implemented snapshots, you should pass nil as the 81 | // second argument to persister.Save(). 82 | // after you've implemented snapshots, pass the current snapshot 83 | // (or nil if there's not yet a snapshot). 84 | func (rf *Raft) persist() { 85 | // Your code here (2C). 86 | // Example: 87 | // w := new(bytes.Buffer) 88 | // e := labgob.NewEncoder(w) 89 | // e.Encode(rf.xxx) 90 | // e.Encode(rf.yyy) 91 | // raftstate := w.Bytes() 92 | // rf.persister.Save(raftstate, nil) 93 | } 94 | 95 | 96 | // restore previously persisted state. 97 | func (rf *Raft) readPersist(data []byte) { 98 | if data == nil || len(data) < 1 { // bootstrap without any state? 99 | return 100 | } 101 | // Your code here (2C). 102 | // Example: 103 | // r := bytes.NewBuffer(data) 104 | // d := labgob.NewDecoder(r) 105 | // var xxx 106 | // var yyy 107 | // if d.Decode(&xxx) != nil || 108 | // d.Decode(&yyy) != nil { 109 | // error... 110 | // } else { 111 | // rf.xxx = xxx 112 | // rf.yyy = yyy 113 | // } 114 | } 115 | 116 | 117 | // the service says it has created a snapshot that has 118 | // all info up to and including index. this means the 119 | // service no longer needs the log through (and including) 120 | // that index. Raft should now trim its log as much as possible. 121 | func (rf *Raft) Snapshot(index int, snapshot []byte) { 122 | // Your code here (2D). 123 | 124 | } 125 | 126 | 127 | // example RequestVote RPC arguments structure. 128 | // field names must start with capital letters! 129 | type RequestVoteArgs struct { 130 | // Your data here (2A, 2B). 131 | } 132 | 133 | // example RequestVote RPC reply structure. 134 | // field names must start with capital letters! 135 | type RequestVoteReply struct { 136 | // Your data here (2A). 137 | } 138 | 139 | // example RequestVote RPC handler. 140 | func (rf *Raft) RequestVote(args *RequestVoteArgs, reply *RequestVoteReply) { 141 | // Your code here (2A, 2B). 142 | } 143 | 144 | // example code to send a RequestVote RPC to a server. 145 | // server is the index of the target server in rf.peers[]. 146 | // expects RPC arguments in args. 147 | // fills in *reply with RPC reply, so caller should 148 | // pass &reply. 149 | // the types of the args and reply passed to Call() must be 150 | // the same as the types of the arguments declared in the 151 | // handler function (including whether they are pointers). 152 | // 153 | // The labrpc package simulates a lossy network, in which servers 154 | // may be unreachable, and in which requests and replies may be lost. 155 | // Call() sends a request and waits for a reply. If a reply arrives 156 | // within a timeout interval, Call() returns true; otherwise 157 | // Call() returns false. Thus Call() may not return for a while. 158 | // A false return can be caused by a dead server, a live server that 159 | // can't be reached, a lost request, or a lost reply. 
160 | // 161 | // Call() is guaranteed to return (perhaps after a delay) *except* if the 162 | // handler function on the server side does not return. Thus there 163 | // is no need to implement your own timeouts around Call(). 164 | // 165 | // look at the comments in ../labrpc/labrpc.go for more details. 166 | // 167 | // if you're having trouble getting RPC to work, check that you've 168 | // capitalized all field names in structs passed over RPC, and 169 | // that the caller passes the address of the reply struct with &, not 170 | // the struct itself. 171 | func (rf *Raft) sendRequestVote(server int, args *RequestVoteArgs, reply *RequestVoteReply) bool { 172 | ok := rf.peers[server].Call("Raft.RequestVote", args, reply) 173 | return ok 174 | } 175 | 176 | 177 | // the service using Raft (e.g. a k/v server) wants to start 178 | // agreement on the next command to be appended to Raft's log. if this 179 | // server isn't the leader, returns false. otherwise start the 180 | // agreement and return immediately. there is no guarantee that this 181 | // command will ever be committed to the Raft log, since the leader 182 | // may fail or lose an election. even if the Raft instance has been killed, 183 | // this function should return gracefully. 184 | // 185 | // the first return value is the index that the command will appear at 186 | // if it's ever committed. the second return value is the current 187 | // term. the third return value is true if this server believes it is 188 | // the leader. 189 | func (rf *Raft) Start(command interface{}) (int, int, bool) { 190 | index := -1 191 | term := -1 192 | isLeader := true 193 | 194 | // Your code here (2B). 195 | 196 | 197 | return index, term, isLeader 198 | } 199 | 200 | // the tester doesn't halt goroutines created by Raft after each test, 201 | // but it does call the Kill() method. your code can use killed() to 202 | // check whether Kill() has been called. the use of atomic avoids the 203 | // need for a lock. 204 | // 205 | // the issue is that long-running goroutines use memory and may chew 206 | // up CPU time, perhaps causing later tests to fail and generating 207 | // confusing debug output. any goroutine with a long-running loop 208 | // should call killed() to check whether it should stop. 209 | func (rf *Raft) Kill() { 210 | atomic.StoreInt32(&rf.dead, 1) 211 | // Your code here, if desired. 212 | } 213 | 214 | func (rf *Raft) killed() bool { 215 | z := atomic.LoadInt32(&rf.dead) 216 | return z == 1 217 | } 218 | 219 | func (rf *Raft) ticker() { 220 | for rf.killed() == false { 221 | 222 | // Your code here (2A) 223 | // Check if a leader election should be started. 224 | 225 | 226 | // pause for a random amount of time between 50 and 350 227 | // milliseconds. 228 | ms := 50 + (rand.Int63() % 300) 229 | time.Sleep(time.Duration(ms) * time.Millisecond) 230 | } 231 | } 232 | 233 | // the service or tester wants to create a Raft server. the ports 234 | // of all the Raft servers (including this one) are in peers[]. this 235 | // server's port is peers[me]. all the servers' peers[] arrays 236 | // have the same order. persister is a place for this server to 237 | // save its persistent state, and also initially holds the most 238 | // recent saved state, if any. applyCh is a channel on which the 239 | // tester or service expects Raft to send ApplyMsg messages. 240 | // Make() must return quickly, so it should start goroutines 241 | // for any long-running work. 
242 | func Make(peers []*labrpc.ClientEnd, me int, 243 | persister *Persister, applyCh chan ApplyMsg) *Raft { 244 | rf := &Raft{} 245 | rf.peers = peers 246 | rf.persister = persister 247 | rf.me = me 248 | 249 | // Your initialization code here (2A, 2B, 2C). 250 | 251 | // initialize from state persisted before a crash 252 | rf.readPersist(persister.ReadRaftState()) 253 | 254 | // start ticker goroutine to start elections 255 | go rf.ticker() 256 | 257 | 258 | return rf 259 | } 260 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/raft/util.go: -------------------------------------------------------------------------------- 1 | package raft 2 | 3 | import "log" 4 | 5 | // Debugging 6 | const Debug = false 7 | 8 | func DPrintf(format string, a ...interface{}) (n int, err error) { 9 | if Debug { 10 | log.Printf(format, a...) 11 | } 12 | return 13 | } 14 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/shardctrler/client.go: -------------------------------------------------------------------------------- 1 | package shardctrler 2 | 3 | // 4 | // Shardctrler clerk. 5 | // 6 | 7 | import "6.5840/labrpc" 8 | import "time" 9 | import "crypto/rand" 10 | import "math/big" 11 | 12 | type Clerk struct { 13 | servers []*labrpc.ClientEnd 14 | // Your data here. 15 | } 16 | 17 | func nrand() int64 { 18 | max := big.NewInt(int64(1) << 62) 19 | bigx, _ := rand.Int(rand.Reader, max) 20 | x := bigx.Int64() 21 | return x 22 | } 23 | 24 | func MakeClerk(servers []*labrpc.ClientEnd) *Clerk { 25 | ck := new(Clerk) 26 | ck.servers = servers 27 | // Your code here. 28 | return ck 29 | } 30 | 31 | func (ck *Clerk) Query(num int) Config { 32 | args := &QueryArgs{} 33 | // Your code here. 34 | args.Num = num 35 | for { 36 | // try each known server. 37 | for _, srv := range ck.servers { 38 | var reply QueryReply 39 | ok := srv.Call("ShardCtrler.Query", args, &reply) 40 | if ok && reply.WrongLeader == false { 41 | return reply.Config 42 | } 43 | } 44 | time.Sleep(100 * time.Millisecond) 45 | } 46 | } 47 | 48 | func (ck *Clerk) Join(servers map[int][]string) { 49 | args := &JoinArgs{} 50 | // Your code here. 51 | args.Servers = servers 52 | 53 | for { 54 | // try each known server. 55 | for _, srv := range ck.servers { 56 | var reply JoinReply 57 | ok := srv.Call("ShardCtrler.Join", args, &reply) 58 | if ok && reply.WrongLeader == false { 59 | return 60 | } 61 | } 62 | time.Sleep(100 * time.Millisecond) 63 | } 64 | } 65 | 66 | func (ck *Clerk) Leave(gids []int) { 67 | args := &LeaveArgs{} 68 | // Your code here. 69 | args.GIDs = gids 70 | 71 | for { 72 | // try each known server. 73 | for _, srv := range ck.servers { 74 | var reply LeaveReply 75 | ok := srv.Call("ShardCtrler.Leave", args, &reply) 76 | if ok && reply.WrongLeader == false { 77 | return 78 | } 79 | } 80 | time.Sleep(100 * time.Millisecond) 81 | } 82 | } 83 | 84 | func (ck *Clerk) Move(shard int, gid int) { 85 | args := &MoveArgs{} 86 | // Your code here. 87 | args.Shard = shard 88 | args.GID = gid 89 | 90 | for { 91 | // try each known server. 
92 | for _, srv := range ck.servers { 93 | var reply MoveReply 94 | ok := srv.Call("ShardCtrler.Move", args, &reply) 95 | if ok && reply.WrongLeader == false { 96 | return 97 | } 98 | } 99 | time.Sleep(100 * time.Millisecond) 100 | } 101 | } 102 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/shardctrler/common.go: -------------------------------------------------------------------------------- 1 | package shardctrler 2 | 3 | // 4 | // Shard controler: assigns shards to replication groups. 5 | // 6 | // RPC interface: 7 | // Join(servers) -- add a set of groups (gid -> server-list mapping). 8 | // Leave(gids) -- delete a set of groups. 9 | // Move(shard, gid) -- hand off one shard from current owner to gid. 10 | // Query(num) -> fetch Config # num, or latest config if num==-1. 11 | // 12 | // A Config (configuration) describes a set of replica groups, and the 13 | // replica group responsible for each shard. Configs are numbered. Config 14 | // #0 is the initial configuration, with no groups and all shards 15 | // assigned to group 0 (the invalid group). 16 | // 17 | // You will need to add fields to the RPC argument structs. 18 | // 19 | 20 | // The number of shards. 21 | const NShards = 10 22 | 23 | // A configuration -- an assignment of shards to groups. 24 | // Please don't change this. 25 | type Config struct { 26 | Num int // config number 27 | Shards [NShards]int // shard -> gid 28 | Groups map[int][]string // gid -> servers[] 29 | } 30 | 31 | const ( 32 | OK = "OK" 33 | ) 34 | 35 | type Err string 36 | 37 | type JoinArgs struct { 38 | Servers map[int][]string // new GID -> servers mappings 39 | } 40 | 41 | type JoinReply struct { 42 | WrongLeader bool 43 | Err Err 44 | } 45 | 46 | type LeaveArgs struct { 47 | GIDs []int 48 | } 49 | 50 | type LeaveReply struct { 51 | WrongLeader bool 52 | Err Err 53 | } 54 | 55 | type MoveArgs struct { 56 | Shard int 57 | GID int 58 | } 59 | 60 | type MoveReply struct { 61 | WrongLeader bool 62 | Err Err 63 | } 64 | 65 | type QueryArgs struct { 66 | Num int // desired config number 67 | } 68 | 69 | type QueryReply struct { 70 | WrongLeader bool 71 | Err Err 72 | Config Config 73 | } 74 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/shardctrler/config.go: -------------------------------------------------------------------------------- 1 | package shardctrler 2 | 3 | import "6.5840/labrpc" 4 | import "6.5840/raft" 5 | import "testing" 6 | import "os" 7 | 8 | // import "log" 9 | import crand "crypto/rand" 10 | import "math/rand" 11 | import "encoding/base64" 12 | import "sync" 13 | import "runtime" 14 | import "time" 15 | 16 | func randstring(n int) string { 17 | b := make([]byte, 2*n) 18 | crand.Read(b) 19 | s := base64.URLEncoding.EncodeToString(b) 20 | return s[0:n] 21 | } 22 | 23 | // Randomize server handles 24 | func random_handles(kvh []*labrpc.ClientEnd) []*labrpc.ClientEnd { 25 | sa := make([]*labrpc.ClientEnd, len(kvh)) 26 | copy(sa, kvh) 27 | for i := range sa { 28 | j := rand.Intn(i + 1) 29 | sa[i], sa[j] = sa[j], sa[i] 30 | } 31 | return sa 32 | } 33 | 34 | type config struct { 35 | mu sync.Mutex 36 | t *testing.T 37 | net *labrpc.Network 38 | n int 39 | servers []*ShardCtrler 40 | saved []*raft.Persister 41 | endnames [][]string // names of each server's sending ClientEnds 42 | clerks map[*Clerk][]string 43 | nextClientId int 44 | start time.Time // time at which make_config() was called 45 | } 46 | 47 | func (cfg *config) 
checkTimeout() { 48 | // enforce a two minute real-time limit on each test 49 | if !cfg.t.Failed() && time.Since(cfg.start) > 120*time.Second { 50 | cfg.t.Fatal("test took longer than 120 seconds") 51 | } 52 | } 53 | 54 | func (cfg *config) cleanup() { 55 | cfg.mu.Lock() 56 | defer cfg.mu.Unlock() 57 | for i := 0; i < len(cfg.servers); i++ { 58 | if cfg.servers[i] != nil { 59 | cfg.servers[i].Kill() 60 | } 61 | } 62 | cfg.net.Cleanup() 63 | cfg.checkTimeout() 64 | } 65 | 66 | // Maximum log size across all servers 67 | func (cfg *config) LogSize() int { 68 | logsize := 0 69 | for i := 0; i < cfg.n; i++ { 70 | n := cfg.saved[i].RaftStateSize() 71 | if n > logsize { 72 | logsize = n 73 | } 74 | } 75 | return logsize 76 | } 77 | 78 | // attach server i to servers listed in to 79 | // caller must hold cfg.mu 80 | func (cfg *config) connectUnlocked(i int, to []int) { 81 | // log.Printf("connect peer %d to %v\n", i, to) 82 | 83 | // outgoing socket files 84 | for j := 0; j < len(to); j++ { 85 | endname := cfg.endnames[i][to[j]] 86 | cfg.net.Enable(endname, true) 87 | } 88 | 89 | // incoming socket files 90 | for j := 0; j < len(to); j++ { 91 | endname := cfg.endnames[to[j]][i] 92 | cfg.net.Enable(endname, true) 93 | } 94 | } 95 | 96 | func (cfg *config) connect(i int, to []int) { 97 | cfg.mu.Lock() 98 | defer cfg.mu.Unlock() 99 | cfg.connectUnlocked(i, to) 100 | } 101 | 102 | // detach server i from the servers listed in from 103 | // caller must hold cfg.mu 104 | func (cfg *config) disconnectUnlocked(i int, from []int) { 105 | // log.Printf("disconnect peer %d from %v\n", i, from) 106 | 107 | // outgoing socket files 108 | for j := 0; j < len(from); j++ { 109 | if cfg.endnames[i] != nil { 110 | endname := cfg.endnames[i][from[j]] 111 | cfg.net.Enable(endname, false) 112 | } 113 | } 114 | 115 | // incoming socket files 116 | for j := 0; j < len(from); j++ { 117 | if cfg.endnames[j] != nil { 118 | endname := cfg.endnames[from[j]][i] 119 | cfg.net.Enable(endname, false) 120 | } 121 | } 122 | } 123 | 124 | func (cfg *config) disconnect(i int, from []int) { 125 | cfg.mu.Lock() 126 | defer cfg.mu.Unlock() 127 | cfg.disconnectUnlocked(i, from) 128 | } 129 | 130 | func (cfg *config) All() []int { 131 | all := make([]int, cfg.n) 132 | for i := 0; i < cfg.n; i++ { 133 | all[i] = i 134 | } 135 | return all 136 | } 137 | 138 | func (cfg *config) ConnectAll() { 139 | cfg.mu.Lock() 140 | defer cfg.mu.Unlock() 141 | for i := 0; i < cfg.n; i++ { 142 | cfg.connectUnlocked(i, cfg.All()) 143 | } 144 | } 145 | 146 | // Sets up 2 partitions with connectivity between servers in each partition. 147 | func (cfg *config) partition(p1 []int, p2 []int) { 148 | cfg.mu.Lock() 149 | defer cfg.mu.Unlock() 150 | // log.Printf("partition servers into: %v %v\n", p1, p2) 151 | for i := 0; i < len(p1); i++ { 152 | cfg.disconnectUnlocked(p1[i], p2) 153 | cfg.connectUnlocked(p1[i], p1) 154 | } 155 | for i := 0; i < len(p2); i++ { 156 | cfg.disconnectUnlocked(p2[i], p1) 157 | cfg.connectUnlocked(p2[i], p2) 158 | } 159 | } 160 | 161 | // Create a clerk with clerk specific server names. 162 | // Give it connections to all of the servers, but for 163 | // now enable only connections to servers in to[]. 164 | func (cfg *config) makeClient(to []int) *Clerk { 165 | cfg.mu.Lock() 166 | defer cfg.mu.Unlock() 167 | 168 | // a fresh set of ClientEnds. 
169 | ends := make([]*labrpc.ClientEnd, cfg.n) 170 | endnames := make([]string, cfg.n) 171 | for j := 0; j < cfg.n; j++ { 172 | endnames[j] = randstring(20) 173 | ends[j] = cfg.net.MakeEnd(endnames[j]) 174 | cfg.net.Connect(endnames[j], j) 175 | } 176 | 177 | ck := MakeClerk(random_handles(ends)) 178 | cfg.clerks[ck] = endnames 179 | cfg.nextClientId++ 180 | cfg.ConnectClientUnlocked(ck, to) 181 | return ck 182 | } 183 | 184 | func (cfg *config) deleteClient(ck *Clerk) { 185 | cfg.mu.Lock() 186 | defer cfg.mu.Unlock() 187 | 188 | v := cfg.clerks[ck] 189 | for i := 0; i < len(v); i++ { 190 | os.Remove(v[i]) 191 | } 192 | delete(cfg.clerks, ck) 193 | } 194 | 195 | // caller should hold cfg.mu 196 | func (cfg *config) ConnectClientUnlocked(ck *Clerk, to []int) { 197 | // log.Printf("ConnectClient %v to %v\n", ck, to) 198 | endnames := cfg.clerks[ck] 199 | for j := 0; j < len(to); j++ { 200 | s := endnames[to[j]] 201 | cfg.net.Enable(s, true) 202 | } 203 | } 204 | 205 | func (cfg *config) ConnectClient(ck *Clerk, to []int) { 206 | cfg.mu.Lock() 207 | defer cfg.mu.Unlock() 208 | cfg.ConnectClientUnlocked(ck, to) 209 | } 210 | 211 | // caller should hold cfg.mu 212 | func (cfg *config) DisconnectClientUnlocked(ck *Clerk, from []int) { 213 | // log.Printf("DisconnectClient %v from %v\n", ck, from) 214 | endnames := cfg.clerks[ck] 215 | for j := 0; j < len(from); j++ { 216 | s := endnames[from[j]] 217 | cfg.net.Enable(s, false) 218 | } 219 | } 220 | 221 | func (cfg *config) DisconnectClient(ck *Clerk, from []int) { 222 | cfg.mu.Lock() 223 | defer cfg.mu.Unlock() 224 | cfg.DisconnectClientUnlocked(ck, from) 225 | } 226 | 227 | // Shutdown a server by isolating it 228 | func (cfg *config) ShutdownServer(i int) { 229 | cfg.mu.Lock() 230 | defer cfg.mu.Unlock() 231 | 232 | cfg.disconnectUnlocked(i, cfg.All()) 233 | 234 | // disable client connections to the server. 235 | // it's important to do this before creating 236 | // the new Persister in saved[i], to avoid 237 | // the possibility of the server returning a 238 | // positive reply to an Append but persisting 239 | // the result in the superseded Persister. 240 | cfg.net.DeleteServer(i) 241 | 242 | // a fresh persister, in case old instance 243 | // continues to update the Persister. 244 | // but copy old persister's content so that we always 245 | // pass Make() the last persisted state. 246 | if cfg.saved[i] != nil { 247 | cfg.saved[i] = cfg.saved[i].Copy() 248 | } 249 | 250 | kv := cfg.servers[i] 251 | if kv != nil { 252 | cfg.mu.Unlock() 253 | kv.Kill() 254 | cfg.mu.Lock() 255 | cfg.servers[i] = nil 256 | } 257 | } 258 | 259 | // If restart servers, first call ShutdownServer 260 | func (cfg *config) StartServer(i int) { 261 | cfg.mu.Lock() 262 | 263 | // a fresh set of outgoing ClientEnd names. 264 | cfg.endnames[i] = make([]string, cfg.n) 265 | for j := 0; j < cfg.n; j++ { 266 | cfg.endnames[i][j] = randstring(20) 267 | } 268 | 269 | // a fresh set of ClientEnds. 270 | ends := make([]*labrpc.ClientEnd, cfg.n) 271 | for j := 0; j < cfg.n; j++ { 272 | ends[j] = cfg.net.MakeEnd(cfg.endnames[i][j]) 273 | cfg.net.Connect(cfg.endnames[i][j], j) 274 | } 275 | 276 | // a fresh persister, so old instance doesn't overwrite 277 | // new instance's persisted state. 278 | // give the fresh persister a copy of the old persister's 279 | // state, so that the spec is that we pass StartKVServer() 280 | // the last persisted state. 
281 | if cfg.saved[i] != nil { 282 | cfg.saved[i] = cfg.saved[i].Copy() 283 | } else { 284 | cfg.saved[i] = raft.MakePersister() 285 | } 286 | 287 | cfg.mu.Unlock() 288 | 289 | cfg.servers[i] = StartServer(ends, i, cfg.saved[i]) 290 | 291 | kvsvc := labrpc.MakeService(cfg.servers[i]) 292 | rfsvc := labrpc.MakeService(cfg.servers[i].rf) 293 | srv := labrpc.MakeServer() 294 | srv.AddService(kvsvc) 295 | srv.AddService(rfsvc) 296 | cfg.net.AddServer(i, srv) 297 | } 298 | 299 | func (cfg *config) Leader() (bool, int) { 300 | cfg.mu.Lock() 301 | defer cfg.mu.Unlock() 302 | 303 | for i := 0; i < cfg.n; i++ { 304 | if cfg.servers[i] != nil { 305 | _, is_leader := cfg.servers[i].rf.GetState() 306 | if is_leader { 307 | return true, i 308 | } 309 | } 310 | } 311 | return false, 0 312 | } 313 | 314 | // Partition servers into 2 groups and put current leader in minority 315 | func (cfg *config) make_partition() ([]int, []int) { 316 | _, l := cfg.Leader() 317 | p1 := make([]int, cfg.n/2+1) 318 | p2 := make([]int, cfg.n/2) 319 | j := 0 320 | for i := 0; i < cfg.n; i++ { 321 | if i != l { 322 | if j < len(p1) { 323 | p1[j] = i 324 | } else { 325 | p2[j-len(p1)] = i 326 | } 327 | j++ 328 | } 329 | } 330 | p2[len(p2)-1] = l 331 | return p1, p2 332 | } 333 | 334 | func make_config(t *testing.T, n int, unreliable bool) *config { 335 | runtime.GOMAXPROCS(4) 336 | cfg := &config{} 337 | cfg.t = t 338 | cfg.net = labrpc.MakeNetwork() 339 | cfg.n = n 340 | cfg.servers = make([]*ShardCtrler, cfg.n) 341 | cfg.saved = make([]*raft.Persister, cfg.n) 342 | cfg.endnames = make([][]string, cfg.n) 343 | cfg.clerks = make(map[*Clerk][]string) 344 | cfg.nextClientId = cfg.n + 1000 // client ids start 1000 above the highest serverid 345 | cfg.start = time.Now() 346 | 347 | // create a full set of KV servers. 348 | for i := 0; i < cfg.n; i++ { 349 | cfg.StartServer(i) 350 | } 351 | 352 | cfg.ConnectAll() 353 | 354 | cfg.net.Reliable(!unreliable) 355 | 356 | return cfg 357 | } 358 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/shardctrler/server.go: -------------------------------------------------------------------------------- 1 | package shardctrler 2 | 3 | 4 | import "6.5840/raft" 5 | import "6.5840/labrpc" 6 | import "sync" 7 | import "6.5840/labgob" 8 | 9 | 10 | type ShardCtrler struct { 11 | mu sync.Mutex 12 | me int 13 | rf *raft.Raft 14 | applyCh chan raft.ApplyMsg 15 | 16 | // Your data here. 17 | 18 | configs []Config // indexed by config num 19 | } 20 | 21 | 22 | type Op struct { 23 | // Your data here. 24 | } 25 | 26 | 27 | func (sc *ShardCtrler) Join(args *JoinArgs, reply *JoinReply) { 28 | // Your code here. 29 | } 30 | 31 | func (sc *ShardCtrler) Leave(args *LeaveArgs, reply *LeaveReply) { 32 | // Your code here. 33 | } 34 | 35 | func (sc *ShardCtrler) Move(args *MoveArgs, reply *MoveReply) { 36 | // Your code here. 37 | } 38 | 39 | func (sc *ShardCtrler) Query(args *QueryArgs, reply *QueryReply) { 40 | // Your code here. 41 | } 42 | 43 | 44 | // the tester calls Kill() when a ShardCtrler instance won't 45 | // be needed again. you are not required to do anything 46 | // in Kill(), but it might be convenient to (for example) 47 | // turn off debug output from this instance. 48 | func (sc *ShardCtrler) Kill() { 49 | sc.rf.Kill() 50 | // Your code here, if desired. 
51 | } 52 | 53 | // needed by shardkv tester 54 | func (sc *ShardCtrler) Raft() *raft.Raft { 55 | return sc.rf 56 | } 57 | 58 | // servers[] contains the ports of the set of 59 | // servers that will cooperate via Raft to 60 | // form the fault-tolerant shardctrler service. 61 | // me is the index of the current server in servers[]. 62 | func StartServer(servers []*labrpc.ClientEnd, me int, persister *raft.Persister) *ShardCtrler { 63 | sc := new(ShardCtrler) 64 | sc.me = me 65 | 66 | sc.configs = make([]Config, 1) 67 | sc.configs[0].Groups = map[int][]string{} 68 | 69 | labgob.Register(Op{}) 70 | sc.applyCh = make(chan raft.ApplyMsg) 71 | sc.rf = raft.Make(servers, me, persister, sc.applyCh) 72 | 73 | // Your code here. 74 | 75 | return sc 76 | } 77 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/shardctrler/test_test.go: -------------------------------------------------------------------------------- 1 | package shardctrler 2 | 3 | import ( 4 | "fmt" 5 | "sync" 6 | "testing" 7 | "time" 8 | ) 9 | 10 | // import "time" 11 | 12 | func check(t *testing.T, groups []int, ck *Clerk) { 13 | c := ck.Query(-1) 14 | if len(c.Groups) != len(groups) { 15 | t.Fatalf("wanted %v groups, got %v", len(groups), len(c.Groups)) 16 | } 17 | 18 | // are the groups as expected? 19 | for _, g := range groups { 20 | _, ok := c.Groups[g] 21 | if ok != true { 22 | t.Fatalf("missing group %v", g) 23 | } 24 | } 25 | 26 | // any un-allocated shards? 27 | if len(groups) > 0 { 28 | for s, g := range c.Shards { 29 | _, ok := c.Groups[g] 30 | if ok == false { 31 | t.Fatalf("shard %v -> invalid group %v", s, g) 32 | } 33 | } 34 | } 35 | 36 | // more or less balanced sharding? 37 | counts := map[int]int{} 38 | for _, g := range c.Shards { 39 | counts[g] += 1 40 | } 41 | min := 257 42 | max := 0 43 | for g, _ := range c.Groups { 44 | if counts[g] > max { 45 | max = counts[g] 46 | } 47 | if counts[g] < min { 48 | min = counts[g] 49 | } 50 | } 51 | if max > min+1 { 52 | t.Fatalf("max %v too much larger than min %v", max, min) 53 | } 54 | } 55 | 56 | func check_same_config(t *testing.T, c1 Config, c2 Config) { 57 | if c1.Num != c2.Num { 58 | t.Fatalf("Num wrong") 59 | } 60 | if c1.Shards != c2.Shards { 61 | t.Fatalf("Shards wrong") 62 | } 63 | if len(c1.Groups) != len(c2.Groups) { 64 | t.Fatalf("number of Groups is wrong") 65 | } 66 | for gid, sa := range c1.Groups { 67 | sa1, ok := c2.Groups[gid] 68 | if ok == false || len(sa1) != len(sa) { 69 | t.Fatalf("len(Groups) wrong") 70 | } 71 | if ok && len(sa1) == len(sa) { 72 | for j := 0; j < len(sa); j++ { 73 | if sa[j] != sa1[j] { 74 | t.Fatalf("Groups wrong") 75 | } 76 | } 77 | } 78 | } 79 | } 80 | 81 | func TestBasic(t *testing.T) { 82 | const nservers = 3 83 | cfg := make_config(t, nservers, false) 84 | defer cfg.cleanup() 85 | 86 | ck := cfg.makeClient(cfg.All()) 87 | 88 | fmt.Printf("Test: Basic leave/join ...\n") 89 | 90 | cfa := make([]Config, 6) 91 | cfa[0] = ck.Query(-1) 92 | 93 | check(t, []int{}, ck) 94 | 95 | var gid1 int = 1 96 | ck.Join(map[int][]string{gid1: []string{"x", "y", "z"}}) 97 | check(t, []int{gid1}, ck) 98 | cfa[1] = ck.Query(-1) 99 | 100 | var gid2 int = 2 101 | ck.Join(map[int][]string{gid2: []string{"a", "b", "c"}}) 102 | check(t, []int{gid1, gid2}, ck) 103 | cfa[2] = ck.Query(-1) 104 | 105 | cfx := ck.Query(-1) 106 | sa1 := cfx.Groups[gid1] 107 | if len(sa1) != 3 || sa1[0] != "x" || sa1[1] != "y" || sa1[2] != "z" { 108 | t.Fatalf("wrong servers for gid %v: %v\n", gid1, sa1) 109 | } 110 | sa2 := 
cfx.Groups[gid2] 111 | if len(sa2) != 3 || sa2[0] != "a" || sa2[1] != "b" || sa2[2] != "c" { 112 | t.Fatalf("wrong servers for gid %v: %v\n", gid2, sa2) 113 | } 114 | 115 | ck.Leave([]int{gid1}) 116 | check(t, []int{gid2}, ck) 117 | cfa[4] = ck.Query(-1) 118 | 119 | ck.Leave([]int{gid2}) 120 | cfa[5] = ck.Query(-1) 121 | 122 | fmt.Printf(" ... Passed\n") 123 | 124 | fmt.Printf("Test: Historical queries ...\n") 125 | 126 | for s := 0; s < nservers; s++ { 127 | cfg.ShutdownServer(s) 128 | for i := 0; i < len(cfa); i++ { 129 | c := ck.Query(cfa[i].Num) 130 | check_same_config(t, c, cfa[i]) 131 | } 132 | cfg.StartServer(s) 133 | cfg.ConnectAll() 134 | } 135 | 136 | fmt.Printf(" ... Passed\n") 137 | 138 | fmt.Printf("Test: Move ...\n") 139 | { 140 | var gid3 int = 503 141 | ck.Join(map[int][]string{gid3: []string{"3a", "3b", "3c"}}) 142 | var gid4 int = 504 143 | ck.Join(map[int][]string{gid4: []string{"4a", "4b", "4c"}}) 144 | for i := 0; i < NShards; i++ { 145 | cf := ck.Query(-1) 146 | if i < NShards/2 { 147 | ck.Move(i, gid3) 148 | if cf.Shards[i] != gid3 { 149 | cf1 := ck.Query(-1) 150 | if cf1.Num <= cf.Num { 151 | t.Fatalf("Move should increase Config.Num") 152 | } 153 | } 154 | } else { 155 | ck.Move(i, gid4) 156 | if cf.Shards[i] != gid4 { 157 | cf1 := ck.Query(-1) 158 | if cf1.Num <= cf.Num { 159 | t.Fatalf("Move should increase Config.Num") 160 | } 161 | } 162 | } 163 | } 164 | cf2 := ck.Query(-1) 165 | for i := 0; i < NShards; i++ { 166 | if i < NShards/2 { 167 | if cf2.Shards[i] != gid3 { 168 | t.Fatalf("expected shard %v on gid %v actually %v", 169 | i, gid3, cf2.Shards[i]) 170 | } 171 | } else { 172 | if cf2.Shards[i] != gid4 { 173 | t.Fatalf("expected shard %v on gid %v actually %v", 174 | i, gid4, cf2.Shards[i]) 175 | } 176 | } 177 | } 178 | ck.Leave([]int{gid3}) 179 | ck.Leave([]int{gid4}) 180 | } 181 | fmt.Printf(" ... Passed\n") 182 | 183 | fmt.Printf("Test: Concurrent leave/join ...\n") 184 | 185 | const npara = 10 186 | var cka [npara]*Clerk 187 | for i := 0; i < len(cka); i++ { 188 | cka[i] = cfg.makeClient(cfg.All()) 189 | } 190 | gids := make([]int, npara) 191 | ch := make(chan bool) 192 | for xi := 0; xi < npara; xi++ { 193 | gids[xi] = int((xi * 10) + 100) 194 | go func(i int) { 195 | defer func() { ch <- true }() 196 | var gid int = gids[i] 197 | var sid1 = fmt.Sprintf("s%da", gid) 198 | var sid2 = fmt.Sprintf("s%db", gid) 199 | cka[i].Join(map[int][]string{gid + 1000: []string{sid1}}) 200 | cka[i].Join(map[int][]string{gid: []string{sid2}}) 201 | cka[i].Leave([]int{gid + 1000}) 202 | }(xi) 203 | } 204 | for i := 0; i < npara; i++ { 205 | <-ch 206 | } 207 | check(t, gids, ck) 208 | 209 | fmt.Printf(" ... Passed\n") 210 | 211 | fmt.Printf("Test: Minimal transfers after joins ...\n") 212 | 213 | c1 := ck.Query(-1) 214 | for i := 0; i < 5; i++ { 215 | var gid = int(npara + 1 + i) 216 | ck.Join(map[int][]string{gid: []string{ 217 | fmt.Sprintf("%da", gid), 218 | fmt.Sprintf("%db", gid), 219 | fmt.Sprintf("%db", gid)}}) 220 | } 221 | c2 := ck.Query(-1) 222 | for i := int(1); i <= npara; i++ { 223 | for j := 0; j < len(c1.Shards); j++ { 224 | if c2.Shards[j] == i { 225 | if c1.Shards[j] != i { 226 | t.Fatalf("non-minimal transfer after Join()s") 227 | } 228 | } 229 | } 230 | } 231 | 232 | fmt.Printf(" ... 
Passed\n") 233 | 234 | fmt.Printf("Test: Minimal transfers after leaves ...\n") 235 | 236 | for i := 0; i < 5; i++ { 237 | ck.Leave([]int{int(npara + 1 + i)}) 238 | } 239 | c3 := ck.Query(-1) 240 | for i := int(1); i <= npara; i++ { 241 | for j := 0; j < len(c1.Shards); j++ { 242 | if c2.Shards[j] == i { 243 | if c3.Shards[j] != i { 244 | t.Fatalf("non-minimal transfer after Leave()s") 245 | } 246 | } 247 | } 248 | } 249 | 250 | fmt.Printf(" ... Passed\n") 251 | } 252 | 253 | func TestMulti(t *testing.T) { 254 | const nservers = 3 255 | cfg := make_config(t, nservers, false) 256 | defer cfg.cleanup() 257 | 258 | ck := cfg.makeClient(cfg.All()) 259 | 260 | fmt.Printf("Test: Multi-group join/leave ...\n") 261 | 262 | cfa := make([]Config, 6) 263 | cfa[0] = ck.Query(-1) 264 | 265 | check(t, []int{}, ck) 266 | 267 | var gid1 int = 1 268 | var gid2 int = 2 269 | ck.Join(map[int][]string{ 270 | gid1: []string{"x", "y", "z"}, 271 | gid2: []string{"a", "b", "c"}, 272 | }) 273 | check(t, []int{gid1, gid2}, ck) 274 | cfa[1] = ck.Query(-1) 275 | 276 | var gid3 int = 3 277 | ck.Join(map[int][]string{gid3: []string{"j", "k", "l"}}) 278 | check(t, []int{gid1, gid2, gid3}, ck) 279 | cfa[2] = ck.Query(-1) 280 | 281 | cfx := ck.Query(-1) 282 | sa1 := cfx.Groups[gid1] 283 | if len(sa1) != 3 || sa1[0] != "x" || sa1[1] != "y" || sa1[2] != "z" { 284 | t.Fatalf("wrong servers for gid %v: %v\n", gid1, sa1) 285 | } 286 | sa2 := cfx.Groups[gid2] 287 | if len(sa2) != 3 || sa2[0] != "a" || sa2[1] != "b" || sa2[2] != "c" { 288 | t.Fatalf("wrong servers for gid %v: %v\n", gid2, sa2) 289 | } 290 | sa3 := cfx.Groups[gid3] 291 | if len(sa3) != 3 || sa3[0] != "j" || sa3[1] != "k" || sa3[2] != "l" { 292 | t.Fatalf("wrong servers for gid %v: %v\n", gid3, sa3) 293 | } 294 | 295 | ck.Leave([]int{gid1, gid3}) 296 | check(t, []int{gid2}, ck) 297 | cfa[3] = ck.Query(-1) 298 | 299 | cfx = ck.Query(-1) 300 | sa2 = cfx.Groups[gid2] 301 | if len(sa2) != 3 || sa2[0] != "a" || sa2[1] != "b" || sa2[2] != "c" { 302 | t.Fatalf("wrong servers for gid %v: %v\n", gid2, sa2) 303 | } 304 | 305 | ck.Leave([]int{gid2}) 306 | 307 | fmt.Printf(" ... Passed\n") 308 | 309 | fmt.Printf("Test: Concurrent multi leave/join ...\n") 310 | 311 | const npara = 10 312 | var cka [npara]*Clerk 313 | for i := 0; i < len(cka); i++ { 314 | cka[i] = cfg.makeClient(cfg.All()) 315 | } 316 | gids := make([]int, npara) 317 | var wg sync.WaitGroup 318 | for xi := 0; xi < npara; xi++ { 319 | wg.Add(1) 320 | gids[xi] = int(xi + 1000) 321 | go func(i int) { 322 | defer wg.Done() 323 | var gid int = gids[i] 324 | cka[i].Join(map[int][]string{ 325 | gid: []string{ 326 | fmt.Sprintf("%da", gid), 327 | fmt.Sprintf("%db", gid), 328 | fmt.Sprintf("%dc", gid)}, 329 | gid + 1000: []string{fmt.Sprintf("%da", gid+1000)}, 330 | gid + 2000: []string{fmt.Sprintf("%da", gid+2000)}, 331 | }) 332 | cka[i].Leave([]int{gid + 1000, gid + 2000}) 333 | }(xi) 334 | } 335 | wg.Wait() 336 | check(t, gids, ck) 337 | 338 | fmt.Printf(" ... 
Passed\n") 339 | 340 | fmt.Printf("Test: Minimal transfers after multijoins ...\n") 341 | 342 | c1 := ck.Query(-1) 343 | m := make(map[int][]string) 344 | for i := 0; i < 5; i++ { 345 | var gid = npara + 1 + i 346 | m[gid] = []string{fmt.Sprintf("%da", gid), fmt.Sprintf("%db", gid)} 347 | } 348 | ck.Join(m) 349 | c2 := ck.Query(-1) 350 | for i := int(1); i <= npara; i++ { 351 | for j := 0; j < len(c1.Shards); j++ { 352 | if c2.Shards[j] == i { 353 | if c1.Shards[j] != i { 354 | t.Fatalf("non-minimal transfer after Join()s") 355 | } 356 | } 357 | } 358 | } 359 | 360 | fmt.Printf(" ... Passed\n") 361 | 362 | fmt.Printf("Test: Minimal transfers after multileaves ...\n") 363 | 364 | var l []int 365 | for i := 0; i < 5; i++ { 366 | l = append(l, npara+1+i) 367 | } 368 | ck.Leave(l) 369 | c3 := ck.Query(-1) 370 | for i := int(1); i <= npara; i++ { 371 | for j := 0; j < len(c1.Shards); j++ { 372 | if c2.Shards[j] == i { 373 | if c3.Shards[j] != i { 374 | t.Fatalf("non-minimal transfer after Leave()s") 375 | } 376 | } 377 | } 378 | } 379 | 380 | fmt.Printf(" ... Passed\n") 381 | 382 | fmt.Printf("Test: Check Same config on servers ...\n") 383 | 384 | isLeader, leader := cfg.Leader() 385 | if !isLeader { 386 | t.Fatalf("Leader not found") 387 | } 388 | c := ck.Query(-1) // Config leader claims 389 | 390 | cfg.ShutdownServer(leader) 391 | 392 | attempts := 0 393 | for isLeader, leader = cfg.Leader(); isLeader; time.Sleep(1 * time.Second) { 394 | if attempts++; attempts >= 3 { 395 | t.Fatalf("Leader not found") 396 | } 397 | } 398 | 399 | c1 = ck.Query(-1) 400 | check_same_config(t, c, c1) 401 | 402 | fmt.Printf(" ... Passed\n") 403 | } 404 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/shardkv/client.go: -------------------------------------------------------------------------------- 1 | package shardkv 2 | 3 | // 4 | // client code to talk to a sharded key/value service. 5 | // 6 | // the client first talks to the shardctrler to find out 7 | // the assignment of shards (keys) to groups, and then 8 | // talks to the group that holds the key's shard. 9 | // 10 | 11 | import "6.5840/labrpc" 12 | import "crypto/rand" 13 | import "math/big" 14 | import "6.5840/shardctrler" 15 | import "time" 16 | 17 | // which shard is a key in? 18 | // please use this function, 19 | // and please do not change it. 20 | func key2shard(key string) int { 21 | shard := 0 22 | if len(key) > 0 { 23 | shard = int(key[0]) 24 | } 25 | shard %= shardctrler.NShards 26 | return shard 27 | } 28 | 29 | func nrand() int64 { 30 | max := big.NewInt(int64(1) << 62) 31 | bigx, _ := rand.Int(rand.Reader, max) 32 | x := bigx.Int64() 33 | return x 34 | } 35 | 36 | type Clerk struct { 37 | sm *shardctrler.Clerk 38 | config shardctrler.Config 39 | make_end func(string) *labrpc.ClientEnd 40 | // You will have to modify this struct. 41 | } 42 | 43 | // the tester calls MakeClerk. 44 | // 45 | // ctrlers[] is needed to call shardctrler.MakeClerk(). 46 | // 47 | // make_end(servername) turns a server name from a 48 | // Config.Groups[gid][i] into a labrpc.ClientEnd on which you can 49 | // send RPCs. 50 | func MakeClerk(ctrlers []*labrpc.ClientEnd, make_end func(string) *labrpc.ClientEnd) *Clerk { 51 | ck := new(Clerk) 52 | ck.sm = shardctrler.MakeClerk(ctrlers) 53 | ck.make_end = make_end 54 | // You'll have to add code here. 55 | return ck 56 | } 57 | 58 | // fetch the current value for a key. 59 | // returns "" if the key does not exist. 
60 | // keeps trying forever in the face of all other errors. 61 | // You will have to modify this function. 62 | func (ck *Clerk) Get(key string) string { 63 | args := GetArgs{} 64 | args.Key = key 65 | 66 | for { 67 | shard := key2shard(key) 68 | gid := ck.config.Shards[shard] 69 | if servers, ok := ck.config.Groups[gid]; ok { 70 | // try each server for the shard. 71 | for si := 0; si < len(servers); si++ { 72 | srv := ck.make_end(servers[si]) 73 | var reply GetReply 74 | ok := srv.Call("ShardKV.Get", &args, &reply) 75 | if ok && (reply.Err == OK || reply.Err == ErrNoKey) { 76 | return reply.Value 77 | } 78 | if ok && (reply.Err == ErrWrongGroup) { 79 | break 80 | } 81 | // ... not ok, or ErrWrongLeader 82 | } 83 | } 84 | time.Sleep(100 * time.Millisecond) 85 | // ask controler for the latest configuration. 86 | ck.config = ck.sm.Query(-1) 87 | } 88 | 89 | return "" 90 | } 91 | 92 | // shared by Put and Append. 93 | // You will have to modify this function. 94 | func (ck *Clerk) PutAppend(key string, value string, op string) { 95 | args := PutAppendArgs{} 96 | args.Key = key 97 | args.Value = value 98 | args.Op = op 99 | 100 | 101 | for { 102 | shard := key2shard(key) 103 | gid := ck.config.Shards[shard] 104 | if servers, ok := ck.config.Groups[gid]; ok { 105 | for si := 0; si < len(servers); si++ { 106 | srv := ck.make_end(servers[si]) 107 | var reply PutAppendReply 108 | ok := srv.Call("ShardKV.PutAppend", &args, &reply) 109 | if ok && reply.Err == OK { 110 | return 111 | } 112 | if ok && reply.Err == ErrWrongGroup { 113 | break 114 | } 115 | // ... not ok, or ErrWrongLeader 116 | } 117 | } 118 | time.Sleep(100 * time.Millisecond) 119 | // ask controler for the latest configuration. 120 | ck.config = ck.sm.Query(-1) 121 | } 122 | } 123 | 124 | func (ck *Clerk) Put(key string, value string) { 125 | ck.PutAppend(key, value, "Put") 126 | } 127 | func (ck *Clerk) Append(key string, value string) { 128 | ck.PutAppend(key, value, "Append") 129 | } 130 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/shardkv/common.go: -------------------------------------------------------------------------------- 1 | package shardkv 2 | 3 | // 4 | // Sharded key/value server. 5 | // Lots of replica groups, each running Raft. 6 | // Shardctrler decides which group serves each shard. 7 | // Shardctrler may change shard assignment from time to time. 8 | // 9 | // You will have to modify these definitions. 10 | // 11 | 12 | const ( 13 | OK = "OK" 14 | ErrNoKey = "ErrNoKey" 15 | ErrWrongGroup = "ErrWrongGroup" 16 | ErrWrongLeader = "ErrWrongLeader" 17 | ) 18 | 19 | type Err string 20 | 21 | // Put or Append 22 | type PutAppendArgs struct { 23 | // You'll have to add definitions here. 24 | Key string 25 | Value string 26 | Op string // "Put" or "Append" 27 | // You'll have to add definitions here. 28 | // Field names must start with capital letters, 29 | // otherwise RPC will break. 30 | } 31 | 32 | type PutAppendReply struct { 33 | Err Err 34 | } 35 | 36 | type GetArgs struct { 37 | Key string 38 | // You'll have to add definitions here. 
39 | } 40 | 41 | type GetReply struct { 42 | Err Err 43 | Value string 44 | } 45 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/shardkv/config.go: -------------------------------------------------------------------------------- 1 | package shardkv 2 | 3 | import "6.5840/shardctrler" 4 | import "6.5840/labrpc" 5 | import "testing" 6 | import "os" 7 | 8 | // import "log" 9 | import crand "crypto/rand" 10 | import "math/big" 11 | import "math/rand" 12 | import "encoding/base64" 13 | import "sync" 14 | import "runtime" 15 | import "6.5840/raft" 16 | import "strconv" 17 | import "fmt" 18 | import "time" 19 | 20 | func randstring(n int) string { 21 | b := make([]byte, 2*n) 22 | crand.Read(b) 23 | s := base64.URLEncoding.EncodeToString(b) 24 | return s[0:n] 25 | } 26 | 27 | func makeSeed() int64 { 28 | max := big.NewInt(int64(1) << 62) 29 | bigx, _ := crand.Int(crand.Reader, max) 30 | x := bigx.Int64() 31 | return x 32 | } 33 | 34 | // Randomize server handles 35 | func random_handles(kvh []*labrpc.ClientEnd) []*labrpc.ClientEnd { 36 | sa := make([]*labrpc.ClientEnd, len(kvh)) 37 | copy(sa, kvh) 38 | for i := range sa { 39 | j := rand.Intn(i + 1) 40 | sa[i], sa[j] = sa[j], sa[i] 41 | } 42 | return sa 43 | } 44 | 45 | type group struct { 46 | gid int 47 | servers []*ShardKV 48 | saved []*raft.Persister 49 | endnames [][]string 50 | mendnames [][]string 51 | } 52 | 53 | type config struct { 54 | mu sync.Mutex 55 | t *testing.T 56 | net *labrpc.Network 57 | start time.Time // time at which make_config() was called 58 | 59 | nctrlers int 60 | ctrlerservers []*shardctrler.ShardCtrler 61 | mck *shardctrler.Clerk 62 | 63 | ngroups int 64 | n int // servers per k/v group 65 | groups []*group 66 | 67 | clerks map[*Clerk][]string 68 | nextClientId int 69 | maxraftstate int 70 | } 71 | 72 | func (cfg *config) checkTimeout() { 73 | // enforce a two minute real-time limit on each test 74 | if !cfg.t.Failed() && time.Since(cfg.start) > 120*time.Second { 75 | cfg.t.Fatal("test took longer than 120 seconds") 76 | } 77 | } 78 | 79 | func (cfg *config) cleanup() { 80 | for gi := 0; gi < cfg.ngroups; gi++ { 81 | cfg.ShutdownGroup(gi) 82 | } 83 | for i := 0; i < cfg.nctrlers; i++ { 84 | cfg.ctrlerservers[i].Kill() 85 | } 86 | cfg.net.Cleanup() 87 | cfg.checkTimeout() 88 | } 89 | 90 | // check that no server's log is too big. 91 | func (cfg *config) checklogs() { 92 | for gi := 0; gi < cfg.ngroups; gi++ { 93 | for i := 0; i < cfg.n; i++ { 94 | raft := cfg.groups[gi].saved[i].RaftStateSize() 95 | snap := len(cfg.groups[gi].saved[i].ReadSnapshot()) 96 | if cfg.maxraftstate >= 0 && raft > 8*cfg.maxraftstate { 97 | cfg.t.Fatalf("persister.RaftStateSize() %v, but maxraftstate %v", 98 | raft, cfg.maxraftstate) 99 | } 100 | if cfg.maxraftstate < 0 && snap > 0 { 101 | cfg.t.Fatalf("maxraftstate is -1, but snapshot is non-empty!") 102 | } 103 | } 104 | } 105 | } 106 | 107 | // controler server name for labrpc. 108 | func (cfg *config) ctrlername(i int) string { 109 | return "ctrler" + strconv.Itoa(i) 110 | } 111 | 112 | // shard server name for labrpc. 113 | // i'th server of group gid. 114 | func (cfg *config) servername(gid int, i int) string { 115 | return "server-" + strconv.Itoa(gid) + "-" + strconv.Itoa(i) 116 | } 117 | 118 | func (cfg *config) makeClient() *Clerk { 119 | cfg.mu.Lock() 120 | defer cfg.mu.Unlock() 121 | 122 | // ClientEnds to talk to controler service. 
123 | ends := make([]*labrpc.ClientEnd, cfg.nctrlers) 124 | endnames := make([]string, cfg.n) 125 | for j := 0; j < cfg.nctrlers; j++ { 126 | endnames[j] = randstring(20) 127 | ends[j] = cfg.net.MakeEnd(endnames[j]) 128 | cfg.net.Connect(endnames[j], cfg.ctrlername(j)) 129 | cfg.net.Enable(endnames[j], true) 130 | } 131 | 132 | ck := MakeClerk(ends, func(servername string) *labrpc.ClientEnd { 133 | name := randstring(20) 134 | end := cfg.net.MakeEnd(name) 135 | cfg.net.Connect(name, servername) 136 | cfg.net.Enable(name, true) 137 | return end 138 | }) 139 | cfg.clerks[ck] = endnames 140 | cfg.nextClientId++ 141 | return ck 142 | } 143 | 144 | func (cfg *config) deleteClient(ck *Clerk) { 145 | cfg.mu.Lock() 146 | defer cfg.mu.Unlock() 147 | 148 | v := cfg.clerks[ck] 149 | for i := 0; i < len(v); i++ { 150 | os.Remove(v[i]) 151 | } 152 | delete(cfg.clerks, ck) 153 | } 154 | 155 | // Shutdown i'th server of gi'th group, by isolating it 156 | func (cfg *config) ShutdownServer(gi int, i int) { 157 | cfg.mu.Lock() 158 | defer cfg.mu.Unlock() 159 | 160 | gg := cfg.groups[gi] 161 | 162 | // prevent this server from sending 163 | for j := 0; j < len(gg.servers); j++ { 164 | name := gg.endnames[i][j] 165 | cfg.net.Enable(name, false) 166 | } 167 | for j := 0; j < len(gg.mendnames[i]); j++ { 168 | name := gg.mendnames[i][j] 169 | cfg.net.Enable(name, false) 170 | } 171 | 172 | // disable client connections to the server. 173 | // it's important to do this before creating 174 | // the new Persister in saved[i], to avoid 175 | // the possibility of the server returning a 176 | // positive reply to an Append but persisting 177 | // the result in the superseded Persister. 178 | cfg.net.DeleteServer(cfg.servername(gg.gid, i)) 179 | 180 | // a fresh persister, in case old instance 181 | // continues to update the Persister. 182 | // but copy old persister's content so that we always 183 | // pass Make() the last persisted state. 184 | if gg.saved[i] != nil { 185 | gg.saved[i] = gg.saved[i].Copy() 186 | } 187 | 188 | kv := gg.servers[i] 189 | if kv != nil { 190 | cfg.mu.Unlock() 191 | kv.Kill() 192 | cfg.mu.Lock() 193 | gg.servers[i] = nil 194 | } 195 | } 196 | 197 | func (cfg *config) ShutdownGroup(gi int) { 198 | for i := 0; i < cfg.n; i++ { 199 | cfg.ShutdownServer(gi, i) 200 | } 201 | } 202 | 203 | // start i'th server in gi'th group 204 | func (cfg *config) StartServer(gi int, i int) { 205 | cfg.mu.Lock() 206 | 207 | gg := cfg.groups[gi] 208 | 209 | // a fresh set of outgoing ClientEnd names 210 | // to talk to other servers in this group. 211 | gg.endnames[i] = make([]string, cfg.n) 212 | for j := 0; j < cfg.n; j++ { 213 | gg.endnames[i][j] = randstring(20) 214 | } 215 | 216 | // and the connections to other servers in this group. 217 | ends := make([]*labrpc.ClientEnd, cfg.n) 218 | for j := 0; j < cfg.n; j++ { 219 | ends[j] = cfg.net.MakeEnd(gg.endnames[i][j]) 220 | cfg.net.Connect(gg.endnames[i][j], cfg.servername(gg.gid, j)) 221 | cfg.net.Enable(gg.endnames[i][j], true) 222 | } 223 | 224 | // ends to talk to shardctrler service 225 | mends := make([]*labrpc.ClientEnd, cfg.nctrlers) 226 | gg.mendnames[i] = make([]string, cfg.nctrlers) 227 | for j := 0; j < cfg.nctrlers; j++ { 228 | gg.mendnames[i][j] = randstring(20) 229 | mends[j] = cfg.net.MakeEnd(gg.mendnames[i][j]) 230 | cfg.net.Connect(gg.mendnames[i][j], cfg.ctrlername(j)) 231 | cfg.net.Enable(gg.mendnames[i][j], true) 232 | } 233 | 234 | // a fresh persister, so old instance doesn't overwrite 235 | // new instance's persisted state. 
236 | // give the fresh persister a copy of the old persister's 237 | // state, so that the spec is that we pass StartKVServer() 238 | // the last persisted state. 239 | if gg.saved[i] != nil { 240 | gg.saved[i] = gg.saved[i].Copy() 241 | } else { 242 | gg.saved[i] = raft.MakePersister() 243 | } 244 | cfg.mu.Unlock() 245 | 246 | gg.servers[i] = StartServer(ends, i, gg.saved[i], cfg.maxraftstate, 247 | gg.gid, mends, 248 | func(servername string) *labrpc.ClientEnd { 249 | name := randstring(20) 250 | end := cfg.net.MakeEnd(name) 251 | cfg.net.Connect(name, servername) 252 | cfg.net.Enable(name, true) 253 | return end 254 | }) 255 | 256 | kvsvc := labrpc.MakeService(gg.servers[i]) 257 | rfsvc := labrpc.MakeService(gg.servers[i].rf) 258 | srv := labrpc.MakeServer() 259 | srv.AddService(kvsvc) 260 | srv.AddService(rfsvc) 261 | cfg.net.AddServer(cfg.servername(gg.gid, i), srv) 262 | } 263 | 264 | func (cfg *config) StartGroup(gi int) { 265 | for i := 0; i < cfg.n; i++ { 266 | cfg.StartServer(gi, i) 267 | } 268 | } 269 | 270 | func (cfg *config) StartCtrlerserver(i int) { 271 | // ClientEnds to talk to other controler replicas. 272 | ends := make([]*labrpc.ClientEnd, cfg.nctrlers) 273 | for j := 0; j < cfg.nctrlers; j++ { 274 | endname := randstring(20) 275 | ends[j] = cfg.net.MakeEnd(endname) 276 | cfg.net.Connect(endname, cfg.ctrlername(j)) 277 | cfg.net.Enable(endname, true) 278 | } 279 | 280 | p := raft.MakePersister() 281 | 282 | cfg.ctrlerservers[i] = shardctrler.StartServer(ends, i, p) 283 | 284 | msvc := labrpc.MakeService(cfg.ctrlerservers[i]) 285 | rfsvc := labrpc.MakeService(cfg.ctrlerservers[i].Raft()) 286 | srv := labrpc.MakeServer() 287 | srv.AddService(msvc) 288 | srv.AddService(rfsvc) 289 | cfg.net.AddServer(cfg.ctrlername(i), srv) 290 | } 291 | 292 | func (cfg *config) shardclerk() *shardctrler.Clerk { 293 | // ClientEnds to talk to ctrler service. 294 | ends := make([]*labrpc.ClientEnd, cfg.nctrlers) 295 | for j := 0; j < cfg.nctrlers; j++ { 296 | name := randstring(20) 297 | ends[j] = cfg.net.MakeEnd(name) 298 | cfg.net.Connect(name, cfg.ctrlername(j)) 299 | cfg.net.Enable(name, true) 300 | } 301 | 302 | return shardctrler.MakeClerk(ends) 303 | } 304 | 305 | // tell the shardctrler that a group is joining. 306 | func (cfg *config) join(gi int) { 307 | cfg.joinm([]int{gi}) 308 | } 309 | 310 | func (cfg *config) joinm(gis []int) { 311 | m := make(map[int][]string, len(gis)) 312 | for _, g := range gis { 313 | gid := cfg.groups[g].gid 314 | servernames := make([]string, cfg.n) 315 | for i := 0; i < cfg.n; i++ { 316 | servernames[i] = cfg.servername(gid, i) 317 | } 318 | m[gid] = servernames 319 | } 320 | cfg.mck.Join(m) 321 | } 322 | 323 | // tell the shardctrler that a group is leaving. 
324 | func (cfg *config) leave(gi int) { 325 | cfg.leavem([]int{gi}) 326 | } 327 | 328 | func (cfg *config) leavem(gis []int) { 329 | gids := make([]int, 0, len(gis)) 330 | for _, g := range gis { 331 | gids = append(gids, cfg.groups[g].gid) 332 | } 333 | cfg.mck.Leave(gids) 334 | } 335 | 336 | var ncpu_once sync.Once 337 | 338 | func make_config(t *testing.T, n int, unreliable bool, maxraftstate int) *config { 339 | ncpu_once.Do(func() { 340 | if runtime.NumCPU() < 2 { 341 | fmt.Printf("warning: only one CPU, which may conceal locking bugs\n") 342 | } 343 | rand.Seed(makeSeed()) 344 | }) 345 | runtime.GOMAXPROCS(4) 346 | cfg := &config{} 347 | cfg.t = t 348 | cfg.maxraftstate = maxraftstate 349 | cfg.net = labrpc.MakeNetwork() 350 | cfg.start = time.Now() 351 | 352 | // controler 353 | cfg.nctrlers = 3 354 | cfg.ctrlerservers = make([]*shardctrler.ShardCtrler, cfg.nctrlers) 355 | for i := 0; i < cfg.nctrlers; i++ { 356 | cfg.StartCtrlerserver(i) 357 | } 358 | cfg.mck = cfg.shardclerk() 359 | 360 | cfg.ngroups = 3 361 | cfg.groups = make([]*group, cfg.ngroups) 362 | cfg.n = n 363 | for gi := 0; gi < cfg.ngroups; gi++ { 364 | gg := &group{} 365 | cfg.groups[gi] = gg 366 | gg.gid = 100 + gi 367 | gg.servers = make([]*ShardKV, cfg.n) 368 | gg.saved = make([]*raft.Persister, cfg.n) 369 | gg.endnames = make([][]string, cfg.n) 370 | gg.mendnames = make([][]string, cfg.nctrlers) 371 | for i := 0; i < cfg.n; i++ { 372 | cfg.StartServer(gi, i) 373 | } 374 | } 375 | 376 | cfg.clerks = make(map[*Clerk][]string) 377 | cfg.nextClientId = cfg.n + 1000 // client ids start 1000 above the highest serverid 378 | 379 | cfg.net.Reliable(!unreliable) 380 | 381 | return cfg 382 | } 383 | -------------------------------------------------------------------------------- /advance/MIT6.824/src/shardkv/server.go: -------------------------------------------------------------------------------- 1 | package shardkv 2 | 3 | 4 | import "6.5840/labrpc" 5 | import "6.5840/raft" 6 | import "sync" 7 | import "6.5840/labgob" 8 | 9 | 10 | 11 | type Op struct { 12 | // Your definitions here. 13 | // Field names must start with capital letters, 14 | // otherwise RPC will break. 15 | } 16 | 17 | type ShardKV struct { 18 | mu sync.Mutex 19 | me int 20 | rf *raft.Raft 21 | applyCh chan raft.ApplyMsg 22 | make_end func(string) *labrpc.ClientEnd 23 | gid int 24 | ctrlers []*labrpc.ClientEnd 25 | maxraftstate int // snapshot if log grows this big 26 | 27 | // Your definitions here. 28 | } 29 | 30 | 31 | func (kv *ShardKV) Get(args *GetArgs, reply *GetReply) { 32 | // Your code here. 33 | } 34 | 35 | func (kv *ShardKV) PutAppend(args *PutAppendArgs, reply *PutAppendReply) { 36 | // Your code here. 37 | } 38 | 39 | // the tester calls Kill() when a ShardKV instance won't 40 | // be needed again. you are not required to do anything 41 | // in Kill(), but it might be convenient to (for example) 42 | // turn off debug output from this instance. 43 | func (kv *ShardKV) Kill() { 44 | kv.rf.Kill() 45 | // Your code here, if desired. 46 | } 47 | 48 | 49 | // servers[] contains the ports of the servers in this group. 50 | // 51 | // me is the index of the current server in servers[]. 52 | // 53 | // the k/v server should store snapshots through the underlying Raft 54 | // implementation, which should call persister.SaveStateAndSnapshot() to 55 | // atomically save the Raft state along with the snapshot. 
56 | // 57 | // the k/v server should snapshot when Raft's saved state exceeds 58 | // maxraftstate bytes, in order to allow Raft to garbage-collect its 59 | // log. if maxraftstate is -1, you don't need to snapshot. 60 | // 61 | // gid is this group's GID, for interacting with the shardctrler. 62 | // 63 | // pass ctrlers[] to shardctrler.MakeClerk() so you can send 64 | // RPCs to the shardctrler. 65 | // 66 | // make_end(servername) turns a server name from a 67 | // Config.Groups[gid][i] into a labrpc.ClientEnd on which you can 68 | // send RPCs. You'll need this to send RPCs to other groups. 69 | // 70 | // look at client.go for examples of how to use ctrlers[] 71 | // and make_end() to send RPCs to the group owning a specific shard. 72 | // 73 | // StartServer() must return quickly, so it should start goroutines 74 | // for any long-running work. 75 | func StartServer(servers []*labrpc.ClientEnd, me int, persister *raft.Persister, maxraftstate int, gid int, ctrlers []*labrpc.ClientEnd, make_end func(string) *labrpc.ClientEnd) *ShardKV { 76 | // call labgob.Register on structures you want 77 | // Go's RPC library to marshall/unmarshall. 78 | labgob.Register(Op{}) 79 | 80 | kv := new(ShardKV) 81 | kv.me = me 82 | kv.maxraftstate = maxraftstate 83 | kv.make_end = make_end 84 | kv.gid = gid 85 | kv.ctrlers = ctrlers 86 | 87 | // Your initialization code here. 88 | 89 | // Use something like this to talk to the shardctrler: 90 | // kv.mck = shardctrler.MakeClerk(kv.ctrlers) 91 | 92 | kv.applyCh = make(chan raft.ApplyMsg) 93 | kv.rf = raft.Make(servers, me, persister, kv.applyCh) 94 | 95 | 96 | return kv 97 | } 98 | -------------------------------------------------------------------------------- /docs/0-推荐资料.md: -------------------------------------------------------------------------------- 1 | 这里只列举综合性较强的站点、博客、个人主页等内容,如果只想看文章/细节的请移步到[etc](../etc) 2 | 3 | # 站点/系列 4 | 5 | - go语言简明教程:https://geektutu.com/post/quick-golang.html 6 | - go语言高级编程:https://github.com/chai2010/advanced-go-programming-book 7 | - Go语言入门60题:https://blog.csdn.net/weixin_45304503/category_11294773.html 8 | - 极客兔兔手撕框架:https://geektutu.com/post/gee.html 9 | - 深入架构原理与实践:https://www.thebyte.com.cn/ 10 | - csdiy - wiki: https://csdiy.wiki/ 11 | - 7天从零实现:https://github.com/geektutu/7days-golang 12 | 13 | ## 个人/团队 14 | 15 | - halfrost - LeetCode-go:https://github.com/halfrost/LeetCode-Go 16 | - 腾讯技术工程:https://www.zhihu.com/org/teng-xun-ji-zhu-gong-cheng 17 | - 字节跳动技术团队:https://juejin.cn/team/6930545192860647431/posts 18 | - Bilibili 小生凡一:https://space.bilibili.com/291348098 19 | 20 | 21 | ## 学习路线 22 | ![mindmap](../img/mindmap-study.png) 23 | 24 | ## 语法学习思维导图 25 | 26 | ![mindmap](../img/mindmap-grammer.png) 27 | 28 | ## 爬虫思维导图 29 | 30 | ![mindmap](../img/mindmap-spider.png) 31 | -------------------------------------------------------------------------------- /docs/1-基础语法.md: -------------------------------------------------------------------------------- 1 | # Golang Lab1 2 | 3 | ## 目的 4 | 5 | Go语言基本语法 6 | 7 | - 条件,选择 8 | - 循环 9 | - 键值对 10 | - 切片,集合 11 | - 函数 12 | - 通道 Channel 13 | - Go协程 Goroutine 14 | 15 | ## 任务 16 | 17 | ## Task1.基础语法 18 | 请使用golang完成下列任务 19 | 20 | 1. 洛谷P1001:https://www.luogu.com.cn/problem/P1001 21 | 2. 洛谷P1046:https://www.luogu.com.cn/problem/P1046 22 | 3. 洛谷P5737:https://www.luogu.com.cn/problem/P5737 23 | 4. AtCoder ARC017A:https://www.luogu.com.cn/problem/AT_arc017_1 24 | - 对于这道题,请编写一个判断质数的函数`isPrime(x int) bool` ,并且在主函数中调用它 25 | 26 | 5. 
创建一个**切片(slice)**,使其元素为数字`1-50`,从切片删掉数字为`3`的倍数的数,并且在末尾再增加一个数`114514`,输出切片。
27 | 
28 | **输出示例**
29 | 
30 | ```go
31 | [1 2 4 5 7 8 10 11 13 14 16 17 19 20 22 23 25 26 28 29 31 32 34 35 37 38 40 41 43 44 46 47 49 50 114514]
32 | ```
33 | 
34 | ### Bonus
35 | 
36 | 1. 写一个99乘法表,并且把结果保存到同目录下的ninenine.txt,程序源文件命名为"6.go"。
37 | 
38 | 2. 回答问题:Go语言中的切片和数组的区别有哪些?答案越详细越好。Go中创建切片有几种方式?创建map
39 | 呢?
40 | 
41 | 3. 给定一个整数数组 nums 和一个整数目标值 target,请你在该数组中找出 和为目标值 target 的那
42 | 两个 整数,并返回它们的数组下标。
43 | 
44 | 你可以假设每种输入只会对应一个答案。但是,数组中同一个元素在答案里不能重复出现。
45 | 
46 | 你可以按任意顺序返回答案。
47 | 
48 | **示例 1:**
49 | 
50 | > 输入:nums = [2,7,11,15], target = 9
51 | > 输出:[0,1]
52 | > 解释:因为 nums[0] + nums[1] == 9 ,返回 [0, 1]
53 | 
54 | **示例 2:**
55 | 
56 | > 输入:nums = [3,2,4], target = 6
57 | > 输出:[1,2]
58 | 
59 | * 是否有复杂度`O(n)`的算法?
60 | 
61 | 4. 运行下面代码,在你认为重要的地方写好注释,同时回答下面这些问题
62 | - 这个代码实现了什么功能?
63 | - 这个代码利用了golang的什么特性?
64 | - 这个代码相较于普通写法,是否有性能上的提升?(性能提升:求解速度更快了)
65 | 
66 | 
67 | ```go
68 | package main
69 | 
70 | import (
71 | 	"fmt"
72 | )
73 | 
74 | func generate(ch chan int) {
75 | 	for i := 2; ; i++ {
76 | 		ch <- i
77 | 	}
78 | }
79 | 
80 | func filter(in chan int, out chan int, prime int) {
81 | 	for {
82 | 		num := <-in
83 | 		if num%prime != 0 {
84 | 			out <- num
85 | 		}
86 | 	}
87 | }
88 | 
89 | func main() {
90 | 	ch := make(chan int)
91 | 	go generate(ch)
92 | 	for i := 0; i < 6; i++ {
93 | 		prime := <-ch
94 | 		fmt.Printf("prime:%d\n", prime)
95 | 		out := make(chan int)
96 | 		go filter(ch, out, prime)
97 | 		ch = out
98 | 	}
99 | }
100 | ```
101 | 
102 | ## Task2. Git与Github
103 | 
104 | 现在我们来讨论一下Git和Github,这是一个计算机学生绕不开的话题。不论你计划是升学还是就业,掌握Git和Github对你的帮助都是莫大的。
105 | 
106 | 目前西二在线编写了如下的文档:[Git与Github的超容易入门](https://west2-online.feishu.cn/wiki/Lsz9w3CiGinXzgkevtmceHZknrf),这个文档简单地介绍了如何使用git和github,但是更多的功能仍然需要你自己去探索。同时,这个文档并没有编写完毕
107 | 
108 | 我们希望可以通过这个文档让你快速上手git,但光看文档肯定是没有用的,你还需要完成下列任务
109 | 
110 | - 如果你没有自己的Github账号,请创建一个自己的Github账号
111 | - 为你的Github账号添加一个头像
112 | - 为你的Github账号开启2FA(双因素认证)
113 | - 为你的Github账号写一个README(请去网上查阅如何美化自己的Github主页)
114 | - 访问这个仓库[[Github-Introduction](https://github.com/west2-online-reserve/Github-Introduction)],在这个仓库中添加一个新的issue,模板选择[Bug Report],按照模板内的要求填写内容(关于bug的复现部分可以随便写,比如click xxx),发布这个issue
115 | - fork上面这个仓库,在fork的仓库中新增一个README.md文件,在这个文件中填写你的Github ID
116 | - 将上一步的修改提交到你fork的仓库,并且给主仓库提交一个关于这个修改的pr
117 | 
118 | **注意:fork仓库并添加文件这个步骤,请在你的电脑本地操作,之后push到你fork的仓库。不要试图直接在Github上面操作,我们是可以区分出直接在Github上面操作和本地操作后push的**
119 | 
120 | ### Bonus
121 | 
122 | 1. 请注意你的commit message,你可以自行查找一些git commit规范,我们希望你可以先建立起一定的规范意识
123 | 2. 请创建一个仓库,这个仓库存放着你本次考核的代码。仓库名和其他内容不做要求,但是我们希望你的一切操作都看起来是**具有一定规范**的
124 | 
125 | 在未来的工程项目中,规范是非常重要但很难掌握的一个内容。你未来势必不会单打独斗,你会和其他不同的人一起创造、修缮你们的伟大工程。同时,一个工程可能会持续很长时间,最初的人可能因为各种原因被新来的人接替,这时候规范问题就更重要了:你该如何使用一定的规范,让新来的人可以很快地上手你们的伟大工程?这是一个值得思考的问题!
126 | 
127 | 请务必注意你的规范问题。以及,请务必学好git和github。
128 | 
129 | ## 要求
130 | 
131 | 1. 不要抄袭
132 | 2. 不要抄袭
133 | 3. 不要抄袭
134 | 4. 
遇到不会的地方时,首先尝试自己去解决,可以去百度、谷歌上寻求帮助,能用`搜索引擎`解决**自己**的问题是一项非常非常重要的能力。
135 | 
136 | ## 参考
137 | 
138 | - 菜鸟教程Go语言 https://www.runoob.com/go/go-tutorial.html
139 | - B站老男孩Go语言入门视频 https://www.bilibili.com/video/BV1fz4y1m7Pm
140 | - 七天入门Go语言 https://blog.csdn.net/weixin_45304503/category_11253946.html
141 | - go语言中文社区 https://learnku.com/go
142 | 
143 | 
--------------------------------------------------------------------------------
/docs/2-爬虫.md:
--------------------------------------------------------------------------------
1 | # Golang Lab2
2 | 
3 | ## 目的
4 | 
5 | - 学习并使用go module进行第三方库的安装
6 | - 了解http协议和web的工作原理
7 | - 静态数据与动态数据的爬取
8 | - 学习使用关系型数据库,如:MySQL(面向大二同学)
9 | 
10 | ## 任务
11 | 
12 | ### 爬取福大通知、文件系统
13 | 
14 | > 爬取福州大学通知、文件系统(https://info22.fzu.edu.cn/lm_list.jsp?wbtreeid=1460)
15 | 
16 | - 包含发布时间,作者,标题以及正文。
17 | - 可自动翻页(爬虫可以自动对后续页面进行爬取,而不需要我们指定第几页)
18 | - 范围:2020年1月1号 - 2021年9月1号(不要爬太多了)。
19 | 
20 | #### Bonus
21 | 
22 | 1. 使用并发爬取,同时给出加速比(加速比:相较于普通爬取,快了多少倍)
23 | 2. 搜集每个通知的访问人数
24 | 3. 将爬取的数据存入数据库,原生SQL或ORM映射都可以
25 | 
26 | ### 爬取Bilibili视频评论
27 | 
28 | > 爬取 https://www.bilibili.com/video/BV12341117rG 的全部评论
29 | 
30 | - 全部评论,包含子评论
31 | 
32 | #### Bonus
33 | 
34 | 1. 给出Bilibili爬虫检测阈值(请求频率高于这个阈值将会被ban。也可以是你被封时的请求频率)
35 | 2. 给出爬取的流程图,使用mermaid或者excalidraw
36 | 3. 给出接口返回的json中每个参数所代表的意义
37 | 
38 | ## 参考
39 | 
40 | - **请求库**
41 |   - `net/http`
42 | - **解析库**
43 |   - `github.com/PuerkitoBio/goquery`
44 |   - `github.com/antchfx/htmlquery`
45 |   - `regexp`(标准库正则)
46 | - **数据库驱动**
47 |   - `github.com/go-sql-driver/mysql`
48 | 
49 | - 抓包:Fiddler、Proxyman、Charles、浏览器F12自带的网络抓包等
50 | - Go Module : https://www.bilibili.com/video/BV1w64y197wo?spm_id_from=333.999.0.0
51 | - 国内代理:https://goproxy.cn/
52 | - B站黑马程序员**Go爬虫**:https://www.bilibili.com/video/BV1Nt411H7sP?p=1
53 | - Go爬虫知识总结:https://blog.csdn.net/weixin_45304503/article/details/120390989
54 | - Go爬虫基础系列文章:
55 |   - https://cuiqingcai.com/5465.html
56 |   - https://cuiqingcai.com/5476.html
57 |   - https://cuiqingcai.com/5484.html
58 |   - https://cuiqingcai.com/5487.html
59 |   - https://cuiqingcai.com/5491.html
60 | - Go语言中文网:https://studygolang.com
61 | - 深入浅出BloomFilter原理:https://zhuanlan.zhihu.com/p/140545941
62 | 
63 | ## 提示
64 | 
65 | - 本次考核难度较大,请**尽早开始学习**
66 | - 已经完成的同学可以先预习一下**gin**和**RESTful API**以及**数据库**
67 | - 请多多参考网络资料,爬虫部分网络资料非常多
68 | 
69 | ### 推荐上手顺序
70 | 
71 | 1. 了解爬虫原理与网页结构(不需要了解太深)
72 | 2. 根据参考中给的几个库,查找对应的使用方法
73 | 3. 选择合适的库,或者择取其他你认为更优秀的库,来编写爬虫程序
74 | 
75 | 
--------------------------------------------------------------------------------
/docs/3-备忘录.md:
--------------------------------------------------------------------------------
1 | # Golang Lab3
2 | 本次第三轮考核将会进行分流,以下两项任选其一:
3 | - 实现 [2024 MIT6.5840(6.824)](https://pdos.csail.mit.edu/6.824/schedule.html) lab1
4 | - 完成以下内容
5 | ## 目的
6 | 
7 | - 掌握http协议和Web工作原理
8 | - 掌握Go语言的 Hertz/Gin Web框架
9 | - 掌握使用关系型数据库,如:MySQL
10 | - 学习**RESTful API**接口规范
11 | - 学习编写文档
12 | 
13 | ## 背景
14 | FanOne准备放寒假了,但是她是一个**摸鱼怪**,请你写一个**备忘录**(写 API 接口即可),让FanOne记录下寒假要完成的事项,能在寒假完成弯道超车!
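所谓"写 API 接口",本质上就是接收 HTTP 请求、处理业务逻辑,然后返回结构化数据(通常是 JSON)。下面用标准库 net/http 给出一个最小示意,帮助你先建立直观印象(其中的路由、返回字段均为随意假设,实际考核请使用下文推荐的框架):

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	// 注册一个最简单的接口:GET /ping,返回一段 JSON
	http.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		// 这里的返回结构仅为示意,备忘录实际的返回结构见下文"要求"部分
		json.NewEncoder(w).Encode(map[string]interface{}{
			"status": 200,
			"msg":    "pong",
		})
	})
	log.Fatal(http.ListenAndServe(":8080", nil)) // 监听 8080 端口
}
```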
15 | 16 | 因为字节和心脏只有一个能跳!我们希望你可以使用字节跳动开源社区 CloudWeGo 开源的 HTTP 框架——[Hertz](https://www.cloudwego.io/zh/docs/hertz/) 17 | 18 | 但是如果你感到吃力,你也可以先从 Gin 框架来写这一轮项目!可以看参考资料哦(os:其实感觉 hertz 写的会更快) 19 | 20 | ## 任务 21 | > 编写以下API,并编写接口文档 (推荐使用 Postman 和 Apifox) 22 | 23 | ### 用户模块 24 | 25 | - 实现基本的用户注册登录 ( 用token实现 ) 26 | 27 | **提醒一下:** FanOne创建的待办事务是不能让FanTwo看到的噢~ 28 | 29 | **注意事项:** 可以使用 jwt token,但是`github.com/dgrijalva/jwt-go` 这个包存在安全问题,请不要使用。可以通过寻找它的升级版本来替换它,例如`github.com/golang-jwt/jwt`(如果你用 hertz 框架,你可以阅读文档,他们的框架自带 jwt-auth) 30 | 31 | ### 事务模块 32 | 33 | 增 34 | 35 | - 添加一条新的待办事项 36 | 37 | 改 38 | 39 | - 将 一条/所有 待办事项设置为已完成 40 | - 将 一条/所有 已完成事项设置为待办 41 | 42 | 查 43 | 44 | - 查看所有 已完成/未完成/所有 事项。 (需要分页) 45 | - 输入**关键词**查询事项。(需要分页) 46 | 47 | 删 48 | 49 | - 删除 一条/所有已经完成/所有待办/所有 事项 50 | 51 | 52 | 53 | > 一条事务至少需要这些属性:id、标题、内容、完成状态、添加时间、截止时间 54 | 55 | ### Bonus 56 | 57 | 1. 自动生成接口文档 58 | 2. 使用三层架构设计 59 | 3. 考虑数据库交互安全性 60 | 4. 思考一个比要求中的结构更优秀的返回结构 61 | 5. 对项目使用Redis 62 | 63 | ## 要求 64 | 65 | 1. 接口满足**RESTful API**规范 66 | 2. 接口文档可以不写**参数描述** 67 | 3. 数据返回建议使用JSON格式。如下所示 68 | 69 | ``` 70 | { 71 | "status": 200, // 200 表示正常/成功,500 代表错误。自行了解HTTP状态码。 72 | "msg": "ok", // 返回信息 73 | "data": { // 业务数据。所有的业务信息都应该放到 data 对象上。 74 | "items": [ 75 | { 76 | "id": 1, // 待办事项ID 77 | "title": "更改好了!", // 主题 78 | "content": "好耶!", // 内容 79 | "view": 0, // 访问次数 80 | "status": 1, // 状态(正在进行/已完成/其他) 81 | "created_at": 1638257438, // 创建时间 82 | "start_time": 1638257437, // 开始时间 83 | "end_time": 0 // 结束时间 84 | } 85 | ], 86 | "total": 1 // 检索出的匹配全部条目数(不是items的len值) 87 | } 88 | } 89 | ``` 90 | 91 | 4. 写一份简要的文档说明一下你的项目结构(使用 markdown) 92 | 93 | ## 提示 94 | - hertz 的官方文档比较晦涩难懂(也就是说需要一定的基础才可以看的比较顺利,不过大家都是这么过来的),你可以结合网上的资料来进行学习,当然,他们有给样例库——[hertz-examples](https://github.com/cloudwego/hertz-examples) 95 | - hertz 需要安装命令行工具**hz!** (具体看hertz官方文档),不安装也可以,但是写起来会和 gin 几乎一致 96 | ``` 97 | go install github.com/cloudwego/hertz/cmd/hz@latest 98 | ``` 99 | - hertz的很多使用其实和gin差不多,可以先看看gin的一些简明教程,hertz的使用主要就是看官方文档 (**示例部分**可以多看看) 100 | 101 | ## 参考 102 | 103 | - Hertz中文文档 :https://www.cloudwego.io/zh/docs/hertz/overview 104 | - Hertz入门 :https://juejin.cn/post/7124337913352945672 105 | - B站Hertz入门 :https://www.bilibili.com/video/BV1ta411H7pe?t=1348.7 106 | - 使用 Gin 设计 RESTful API:https://blog.csdn.net/flysnow_org/article/details/103520881 107 | - B站Gin教程:https://www.bilibili.com/video/BV1fA411F7aM?p=1 108 | - Gin知识点总结:https://blog.csdn.net/weixin_45304503/article/details/120381359 109 | - Gorm中文文档:https://learnku.com/docs/gorm/v2 110 | - B站教程:https://www.bilibili.com/video/BV1GT4y1R7tX 111 | -------------------------------------------------------------------------------- /docs/4-大作品.md: -------------------------------------------------------------------------------- 1 | # Golang Lab4 2 | 3 | 在本Lab中,你拥有以下三个选择: 4 | 1. 参加 2024 年服务外包创新创业大赛(此项需求需同组长商量) 5 | 2. 完成MIT 6.824 分布式系统 (2024 Spring)的 lab2 6 | 3. 完成以下内容 7 | 8 | ## 目的 9 | 10 | - 掌握HTTP协议和Web工作原理 11 | - 掌握现代 HTTP 框架开发流程 12 | - 掌握数据库的增删改查(CRUD)及基础的数据库表设计 13 | - 入门简单的缓存引入和使用 14 | - 入门简单的项目设计模式 15 | 16 | ## 背景 17 | 18 | 金三银四来了!FanOne 正在准备面试,由于过于卷,于是想放松一下看一下番剧,可惜她又没有`大会员`,FanOne现在很苦恼 19 | 20 | ## 任务清单 21 | 22 | > 请你写一个**视频网站**(写 API 文档接口即可),让FanOne能在没有大会员的条件下开心的追番吧!
23 | 24 | 请遵照以下接口文档完成功能 25 | 26 | [https://doc.west2.online/](https://doc.west2.online/) 27 | 28 | 29 | 30 | 你不必完成以上的全部功能,以下是完成本次作业的最低要求(共计 17 个接口,已经非常少了) 31 | 32 | | 模块名 | 最低需要完成的接口 | 数量 | 33 | | ------ | -------------------------------------------- | ---- | 34 | | 用户 | 注册、登录、用户信息、上传头像 | 4 | 35 | | 视频 | 投稿、发布列表、搜索视频、热门排行榜 | 4 | 36 | | 互动 | 点赞操作、点赞列表、评论、评论列表、删除评论 | 5 | 37 | | 社交 | 关注操作、关注列表、粉丝列表、好友列表 | 4 | 38 | 39 | 别看很多,中间我们还砍掉了以下内容 40 | 41 | ## 你不需要完成的内容 42 | 43 | 除了上面没列出的接口外,你还不需要完成这些 44 | 45 | - 不需要考虑性能,只需要完成项目即可 46 | - 不需要考虑设计模式/项目结构,只需要完成项目即可 47 | - *不需要考虑其他七七八八的,只需要跑通接口即可* 48 | - 互动模块:评论接口只要求完成对视频的评论,即 comment_id 字段的功能不需要实现,**我们只需要你完成对视频的评论即可,不需要实现对评论进行评论** 49 | - 互动模块:点赞操作只要求完成对视频的点赞,不需要处理对评论的点赞 50 | - 视频模块:投稿接口不要求实现分片上传和分布式存储,你只需要做到可以正常接收文件,并保存到本地某个目录下即可 51 | - 社交模块:不需要完成 WebSocket 部分 52 | 53 | ## 提醒你需要完成的内容 54 | 55 | - 分页管理:如果参数带有 page_num 和 page_size,需要正确识别并进行分页 56 | - 视频搜索:考察简单的 SQL,因此搜索条件需要全部满足 57 | - 删除评论:不可删除其他人的评论 58 | - 需要支持双 Token 59 | - **不要再用 Gin 了,请使用 Hertz/Kratos/其他现代 HTTP 框架,并且要求使用自动生成开发脚手架(如 Hertz 提供的 hz 工具以及 kratos 中集成的 protobuf)** 60 | - 为你的项目提供一份**项目结构图(目录树)** 61 | - 完成**Docker部署**(编写Dockerfile并且利用这个文件成功部署你的项目,不要求传镜像到 hub 上) 62 | - **请求和返回结构必须遵循接口文档** 63 | 64 | ## Bonus 65 | 66 | - 实现全部接口的全部功能 67 | - 对点赞操作引入 Redis 缓存 68 | - 不使用文档中的**投稿**接口,改用自己设计接口,以实现视频的分片上传与存储 69 | - 实现 WebSocket 聊天功能 70 | - 引入 Elasticsearch,加强项目的日志管理(这里不做过多赘述,没有对日志管理做过高要求,如果你项目中有针对日志做一些处理,答辩的时候可以提出来) 71 | 72 | ## 提示 73 | 74 | - Apifox 里可以直接调试哦 75 | - HTTP 库建议使用 Hertz:https://www.cloudwego.io/zh/docs/hertz/ 76 | - 请关注项目的返回结构,我们特别设计了返回结构,你可以设计几个 Model 实现 User、Comment、Video 结构来进行复用 77 | - 请关注你项目的逻辑,尤其是社交部分 78 | - 请注意你的数据库表设计,尤其是互动和社交部分 79 | - 热门排行榜考察的是你的 Redis 引入和使用,只需要中间使用到了 Redis 就行(例如,用户请求一次后你将排行榜存在 redis,后续请求直接从 redis 取,不考虑过深的逻辑) 80 | - **下半年的全部作业,都会要求你在本次项目的基础上进行增添和修改**,请认真对待你的项目结构 81 | - 这次作业会考察关系型数据库表的设计以及你的设计模式、项目结构规范,**如果你认为你以上几个可能写的不理想,建议提前实现剩余接口和不要求完成的内容**,下一次答辩后会要求你修改你项目中不合理的结构和表设计 82 | 83 | 84 | 85 | 不要过度关注CRUD的内容,请将目标放在下面这些 86 | 87 | - 项目架构是否合理 88 | - 数据库设计是否合理 89 | - 是否对新技术(如Redis)的使用相对合理 90 | - 每一个接口的逻辑是否正确(例如,在上传视频时是否考虑到了用户是否登录?) 91 | - 你的接口能否支撑住多人访问? 92 | - 当你使用缓存后,是否能避免出现缓存穿透,缓存雪崩等情况? 93 | 94 | ## 之后我们做什么 95 | 96 | - 在你这次写的项目基础上,实现接口文档的全部功能 97 | - 在答辩的批斗大会后,依据设计模式/项目结构规范修改你的项目 98 | - 在答辩的批斗大会后,依据数据库性能优化实践优化你的数据库设计 99 | - 使用 CICD 强化你对这个项目的工作流 100 | - 中间件的引入、程序鲁棒性的提升 101 | - 了解 DevOps 理念 102 | - 其他需要做的事情 103 | 104 | 注意:之后我们所做的一切都是基于这个寒假的项目,我们会将侧重点放在更加现代且核心的要素上。 105 | 106 | ## 参考 107 | 108 | - [MinIO官网](https://min.io/) 109 | - [Redis官网](https://redis.io/) 110 | - [Elasticsearch官网](https://www.elastic.co/cn/) 111 | - [RabbitMQ官网](https://www.rabbitmq.com/) 112 | - [如何在Go语言中使用Websockets:最佳工具与行动指南](https://tonybai.com/2019/09/28/how-to-build-websockets-in-go/) 113 | -------------------------------------------------------------------------------- /docs/5(2025)-微服务.md: -------------------------------------------------------------------------------- 1 | # Golang Lab5(2025) 2 | 3 | 鉴于今年的同学们进度相比往常更快,因此我们将对之后要做的内容做一些调整 4 | 5 | 在本Lab中,你拥有以下两种选择 6 | 1. 为[fzuhelper-server](https://github.com/west2-online/fzuhelper-server)解决一些issue(具体咨询学长) 7 | 2. 完成以下内容 8 | 9 | ## 目的 10 | 11 | - 完成项目结构优化 12 | - 掌握 WebSocket 原理与实践 13 | - 掌握微服务架构和Web工作原理 14 | - 掌握HTTP协议和RPC调度方法 15 | - 完善文档 16 | 17 | ## 背景 18 | 19 | 由于疫情封校了,FanOne和小哥哥们外出Happy的计划泡汤了,只能在宿舍一起网聊。她的小哥哥们提出和她一起在手机上看片,为了让FanOne和她的小哥哥们可以快乐看片,请你写一个基于**微服务架构**的视频网站 **(使用[Kitex](http://www.cloudwego.cn/zh/docs/kitex/))**,让FanOne能够享受封校生活!
20 | 21 | ## 任务 22 | 23 | 请遵照以下接口文档完成功能(你不需要完成其中的社交模块,但需要实现websocket) 24 | 25 | [https://doc.west2.online/](https://doc.west2.online/) 26 | 27 | ### 接口 28 | - 对于参加了字节跳动青训营的同学,你不必完成以上的全部功能,以下是完成本次作业的**最低要求** 29 | 30 | | 模块名 | 最低需要完成的接口 | 数量 | 31 | | ------ | -------------------------------------------- | ---- | 32 | | 用户 | 注册、登录、用户信息、上传头像 | 4 | 33 | | 视频 | 投稿、发布列表、搜索视频、热门排行榜 | 4 | 34 | | 互动 | 点赞操作、点赞列表、评论、评论列表、删除评论 | 5 | 35 | | 社交 | 聊天 | 1 | 36 | 37 | - 未参加字节跳动青训营的同学,本次你需要完成新增需求 38 | 39 | | 模块名 | 最低需要完成的接口 | 数量 | 40 | | ------ | -------------------------------------------- | ---- | 41 | | 用户 | 获取 MFA qrcode、绑定 MFA | 2 | 42 | | 视频 | 视频流 | 1 | 43 | | 互动 | | 0 | 44 | | 社交 | 聊天 | 1 | 45 | 46 | 以下需求是**所有同学**都要完成的 47 | 1. 互动模块:评论接口需要实现**对评论进行评论**(即支持 comment_id 请求字段) 48 | 2. 互动模块:点赞接口需要处理对评论的点赞 49 | 3. 社交模块:完成基于 websocket 的聊天功能,考虑到聊天的实时性,请使用 Redis + MySQL 方式实现 50 | 51 | Hertz 框架内置 WebSocket 实现,请使用 Hertz 内置的 WebSocket([文档](https://www.cloudwego.io/zh/docs/hertz/tutorials/basic-feature/protocol/websocket/)) 52 | 53 | ### 微服务 54 | 55 | 鉴于大家的进度比较超出预期,本次要求大家对项目使用**微服务架构** 56 | 57 | 如果你还不了解微服务架构,可以阅读这篇文章[微服务架构 Intro](https://west2-online.feishu.cn/wiki/LfgfwMwZFibcvRkVlR2ctlEenBe?from=from_copylink) 58 | 59 | 此外,大家还需对比单体架构和微服务架构,引入微服务后会带来什么优缺点(解决单点故障,带来一致性问题等等),并在报告文档中体现出来。 60 | 61 | ### 服务注册与发现(约 5h+ , 含学习及使用过程) 62 | 63 | 服务注册与发现是一种用于管理和维护微服务架构中各个服务的地址和元数据的机制。 64 | 65 | 通过服务注册与发现,可以**动态地**发现和调用其他微服务,从而简化了系统的管理和维护。 66 | 67 | 在这一轮中,你需要在你的项目中实现服务注册和发现 68 | 69 | ### 目录结构 70 | 71 | 目录结构一定程度上决定了其他人理解你项目的难易程度,如果你的项目具有目录结构的提升可能性,请优化你的目录结构(**请重点关注这一点**) 72 | 73 | 如果你上一个项目的目录树不是用`tree`生成的,请使用`tree`命令来生成目录树(不需要精确到每一个文件,只需要到目录,以及一些关键文件) 74 | 75 | ### 源代码管理 76 | 77 | 首先,请修改你这个项目的仓库权限——**要求所有人不能直接推送到 main 分支**(请注意,如果你仍然在使用 master 分支,立即改为 main 分支作为主分支) 78 | 79 | 接下来,**以及未来的所有代码变更中,使用 Pull Request(pr)完成**,即使这个项目目前仍然只有你一个人维护 80 | 81 | 在这过程中,**请注意 pr 的规范性**,你可以自己给自己拟定一套规范,也可以参考一些开源社区的规范 82 | 83 | 尤其是需要注意 pr 的标题,**尽可能的保证可以通过标题直接知道你这个 pr 做了什么**,但 pr 的标题不宜过长 84 | 85 | **我们会检查你的 pr 记录**(不会检查时间,放心,可以在合适的时间范围内赶 deadline) 86 | 87 | 你还需要保证你的项目具备下述文件: 88 | 89 | 1. `.gitignore`:如果你项目有一些无关数据,请使用 gitignore 忽略掉,下一次答辩会检查各位的仓库干净程度 90 | 2. `.dockerignore`:与上一个类似,但是目的是为了减少打包过程中的无关数据 91 | 3. `.editorconfig`:EditorConfig 有助于让使用不同编辑器和 IDE 的多名开发人员,在同一项目中保持一致的编码风格。 92 | 4. `.gitattributes`:用于指定 Git 应该如何对待特定文件或路径中的文件 93 | 94 | 你需要自行利用搜索引擎完成这几个文件的简单学习 95 | 96 | ### 持续集成(CI) 97 | 98 | 你的项目需要引入 Github Action 工作流([Github Action 快速入门](https://docs.github.com/zh/enterprise-cloud@latest/actions/quickstart)) 99 | 100 | 要求至少实现以下几点: 101 | 102 | 1. 漏洞扫描:CodeQL([关于使用 Code QL 进行代码扫描](https://docs.github.com/zh/code-security/code-scanning/introduction-to-code-scanning/about-code-scanning-with-codeql)) 103 | 2. 代码规范:golangci-lint([Introduction - golangci-lint](https://golangci-lint.run/)) 104 | 105 | 其中,golangci-lint 是一个静态代码扫描检查工具,它有本地的 cli(命令行)帮助你快速找到哪些地方的代码不合规范,规范是一个合格的软件工程师必备的技能,因此你需要 106 | 107 | 1. 较为熟练地使用 golangci-lint 108 | 2. 在你项目的根目录中添加一个`.golangci.yml`文件,这个文件将会指定静态检查的严格程度([使用教程](https://golangci-lint.run/usage/configuration/)) 109 | 3. 在下一次答辩中,你需要说明你的这份配置文件开启了哪些检查,**开启过少的检查是不合适的** 110 | 111 | 不要直接套用现成的配置,你需要知道workflow的配置内容,答辩时会简单地问一下你 112 | 113 | ### 文档编写 114 | 115 | 我们鼓励你**使用飞书文档**来提升文档编写的效率 116 | 117 | - 对于所有使用了缓存(如 Redis )的接口,请在文档中**绘制一份流程图**(这很简单)来描述接口运作原理 118 | - 对于所有使用了消息队列的接口,同上 119 | - 将这份飞书文档粘贴到你项目的`README.md`上 120 | 121 | 自己编写接口的流程图,有助于我们快速了解整个流程,**同时也有助于你自己发现这个流程中的问题**(如有) 122 | 123 | 同时,请检查你的`README.md`文件,如果可能,可以做一些文档拆分(在根目录建立一个`docs`文档目录,里面存放子文档),`README.md`应该是尽可能的简单描述这个项目 124 | 125 | 最后,**请提供一个部署文档在仓库中**(可以写在 `README.md`),告诉用户你的项目应当如何部署到服务器(请注意这个需求,这隐含着一个要求:**你的项目自己部署到服务器上过**) 126 | 127 | 由于今年加快了进度,为了保证各位同学能够跟上,请大家从项目初期就写一个自用的记录,在项目中期时我们会组织一到两场会议来确认各位同学的进度 128 | 129 | ### 单元测试 130 | 单测是很重要的(这里省略 20000 个字),虽然这和性能优化无关,但是你仍然需要为你的项目添加一定的单元测试 131 | 132 | 你可以简单的进行单元测试入门([Golang 单元测试指引 | 腾讯技术工程](https://zhuanlan.zhihu.com/p/267341653)、[Golang 单元测试合集整理](https://zhuanlan.zhihu.com/p/656105651)) 133 | 134 | Hertz 同样也提供了单元测试能力([单测 | CloudWeGo](https://www.cloudwego.io/zh/docs/hertz/tutorials/basic-feature/unit-test/)) 135 | 136 | 你必须对你的项目添加一定量的单元测试(考虑大家能力、时间不同,不要求全部完成) 137 | 138 | 请在报告中提供: 139 | 140 | 1. 单元测试覆盖率(可以使用 go 自带的 `go test`命令行工具获取单元测试覆盖率) 141 | 2. 哪些部分使用了单元测试 142 | 3. 你的项目该如何进行单元测试 143 | 144 | **你需要在报告的结尾添加你的单元测试学习笔记**。 145 | 146 | 我们很少硬性规定一定要写笔记,但是这部分请认真对待,你可以写自己对单元测试的理解、`go test` 命令行工具的了解等。 147 | 148 | 字数不限,不需要贴很多字,**不需要套话(写的笔记人能看得懂就行)**,但是请保证是自己的产出。 149 | 150 | ## 配置 151 | 152 | 请独立一个 `config` 文件夹,并内置一个`config.yaml`(请勿使用 ini),该文件负责一些常量的配置,要求支持配置**热更新** 153 | 154 | 可以使用 Viper 库([spf13/viper](https://github.com/spf13/viper)) 155 | 156 | 同时,为了方便我们**检查你的数据库结构**,请在 `config` 文件夹内新建一个 `sql` 文件夹,在该文件夹内储存你数据库的建表语句(如`init.sql`) 157 | 158 | 159 | ## 报告 160 | 161 | 你需要编写一份报告用于答辩(使用飞书文档),在项目提交时提交(可以先提交文档链接,后续继续优化文档),**不限制报告格式**、内容,但需要拥有以下内容 162 | 163 | 1. (Problem Restatement)问题重述:用**最简短**的话复述你这次需要完成的内容 164 | 2. (問題が解決しました)问题解决:使用打勾(复选框)来示意你**全部完成的内容**,对于部分完成的内容,请不要打勾,而是描述你目前已经完成的内容 165 | 3. (如有)(Spotlight)项目亮点:这部分不是必须的,如果你认为你的项目**有一些巧思**,请写上 166 | 4. (如有)(Advancement)进阶:超出文档需求的完成量,比如实现了部分Bonus 内容,则在这部分描述 167 | 5. (如有)(Argument)抱怨:你对这个文档存在的不足的抱怨,请尽量写,**不要害羞**,最好不要写个无 168 | 169 | ## Bonus 170 | 171 | 1. 请考虑你的聊天系统的性能(例如使用Benchmark测试) 172 | 2. 考虑聊天传输的安全性(可以学习一下Telegram是如何保证传输安全性的,但是现阶段是做不到的,可以尝试做一些小的安全性优化) 173 | 3. 使用消息队列(RabbitMQ、RocketMQ、Kafka等) 174 | 4. 使用缓存(如redis) 175 | 5. 优化并发 176 | 6. 优化数据库 177 | 178 | ## 参考 179 | 180 | 常见的RPC框架 181 | 182 | | 公司 | 名称 | 地址 | 183 | | -------- | ------- | ------------------------------------ | 184 | | 谷歌 | grpc-go | https://github.com/grpc/grpc-go | 185 | | 七牛 | go-zero | https://github.com/zeromicro/go-zero | 186 | | Bilibili | Kratos | https://github.com/go-kratos/kratos | 187 | | 字节跳动 | Kitex | https://github.com/cloudwego/kitex | 188 | | Apache | Dubbo | https://github.com/apache/dubbo-go | 189 | | 腾讯 | Tars | https://github.com/TarsCloud/TarsGo | 190 | | 斗鱼 | jupiter | https://github.com/douyu/jupiter | 191 | 192 | 建议学习一些资料较为完善的RPC框架,比如grpc-go,这里不推荐go-micro,因为国内用的少且版本比较混乱。 193 | 194 | | 标题 | 地址 | 195 | | ------------------------------------------------------------ | ------------------------------------------------------------ | 196 | | Google API 设计指南 | https://cloud.google.com/apis/design | 197 | | 如何才能更好的学习MIT 6.824? | https://zhuanlan.zhihu.com/p/110168818 | 198 | | MIT 6.824课程中文学习资料 | https://mit-public-courses-cn-translatio.gitbook.io/mit6-824/ | 199 | | Sorosliu1029/6.824 | https://github.com/Sorosliu1029/6.824 | 200 | | 字节跳动自研高性能微服务框架Kitex的演进之旅 | https://juejin.cn/post/7098631562232594462 | 201 | | RPC框架Kitex实践入门:性能测试指南 | https://juejin.cn/post/7033972008257847304 | 202 | | 高性能RPC框架CloudWeGo-Kitex内外统一的开源实践 | https://juejin.cn/post/7148688078083915807 | 203 | | [译] Go 语言的整洁架构之道 —— 一个使用 gRPC 的 Go 项目整洁架构例子 | https://juejin.cn/post/6844903687463108616 | 204 | | 写给go开发者的gRPC教程-protobuf基础 | https://juejin.cn/post/7191008929986379836 | 205 | | go基于grpc构建微服务框架-服务注册与发现 | https://juejin.cn/post/6844903593758162958 | 206 | | 《gor入门grpc》第一章:什么是gRPC | https://segmentfault.com/a/1190000043343832 | 207 | | Raft算法动画演示 | https://github.com/klboke/raft-animation | 208 | 209 | 当然,这里面只列举了一部分内容,微服务的资料网上非常非常的多 210 | 211 | -------------------------------------------------------------------------------- /docs/5-简单提升.md: -------------------------------------------------------------------------------- 1 | # Golang Lab5 2 | 3 | 在上一次Lab中,部分同学进行了分流 4 | 5 | - 选择做 pr 的同学:联系负责人提供新的需求 6 | - 选择实现接口的同学:请完成下列内容 7 | 8 | ## 目的 9 | 10 | - 掌握Http协议和Web工作原理 11 | - 掌握 WebSocket 原理与实践 12 | - 掌握关系型数据库的基本操作 13 | - 完成项目结构优化 14 | - 完成一定的自动化流程 15 | - 完善上一个项目的文档 16 | 17 | ## 背景 18 | 19 | 众所周知,FanOne是个家喻户晓的**Aquaman**,她经常在社交软件上找小哥哥们聊天,以至于被多个平台封杀,请你写一个IM即时通信系统,让FanOne能聊天自由吧! 20 | 21 | ## 任务 22 | 23 | 请遵照以下接口文档完成功能 24 | 25 | [https://doc.west2.online/](https://doc.west2.online/) 26 | 27 | 本次新增的需求 28 | 29 | | 模块名 | 最低需要完成的接口 | 数量 | 30 | | ------ | -------------------------------------------- | ---- | 31 | | 用户 | 获取 MFA qrcode、绑定 MFA | 2 | 32 | | 视频 | 视频流 | 1 | 33 | | 互动 | | 0 | 34 | | 社交 | 聊天 | 1 | 35 | 36 | 这次作业只新增了 4 个接口,但是按照作业守恒定律,你会在其他地方付出时间 37 | 38 | ### 接口 39 | 1. 互动模块:评论接口需要实现**对评论进行评论**(即支持 comment_id 请求字段) 40 | 2. 互动模块:点赞接口需要处理对评论的点赞 41 | 3. 社交模块:完成基于 websocket 的聊天功能,考虑到聊天的实时性,请使用 Redis + MySQL 方式实现 42 | 43 | Hertz 框架内置 WebSocket 实现,请使用 Hertz 内置的 WebSocket([文档](https://www.cloudwego.io/zh/docs/hertz/tutorials/basic-feature/protocol/websocket/)) 44 | 45 | ### 目录结构 46 | 47 | 目录结构一定程度上决定了其他人理解你项目的难易程度,如果你的项目具有目录结构的提升可能性,请优化你的目录结构(**请重点关注这一点**) 48 | 49 | 如果你上一个项目的目录树不是用`tree`生成的,请使用`tree`命令来生成目录树,Windows 下也有相应的解决方案(不需要精确到每一个文件,只需要到目录,以及一些关键文件) 50 | 51 | ### 代码复用性 52 | 53 | 上一次作业没有复用性要求,这一次有了。 54 | 对于多次、重复出现的代码,请考虑整合、抽象、提取。 55 | 56 | **你需要在文档中说明你这部分的修改情况**,也可以留空(比如你认为已经没什么地方可以提升复用性了,不强制要求) 57 | 58 | ### 源代码管理 59 | 60 | 首先,请修改你这个项目的仓库权限——**要求所有人不能直接推送到 main 分支**(请注意,如果你仍然在使用 master 分支,立即改为 main 分支作为主分支) 61 | 62 | 接下来,**在你这个月,以及未来的所有代码变更中,使用 Pull Request(pr)完成**,即使这个项目目前仍然只有你一个人维护 63 | 64 | 在这过程中,**请注意 pr 的规范性**,你可以自己给自己拟定一套规范,也可以参考一些开源社区的规范 65 | 66 | 尤其是需要注意 pr 的标题,**尽可能的保证可以通过标题直接知道你这个 pr 做了什么**,但 pr 的标题不宜过长 67 | 68 | **我们会检查你的 pr 记录**(不会检查时间,放心,可以在合适的时间范围内赶 deadline) 69 | 70 | 你还需要保证你的项目具备下述文件: 71 | 72 | 1. `.gitignore`:如果你项目有一些无关数据,请使用 gitignore 忽略掉,下一次答辩会检查各位的仓库干净程度 73 | 2. `.dockerignore`:与上一个类似,但是目的是为了减少打包过程中的无关数据 74 | 3. `.editorconfig`:EditorConfig 有助于让使用不同编辑器和 IDE 的多名开发人员,在同一项目中保持一致的编码风格。 75 | 4. `.gitattributes`:用于指定 Git 应该如何对待特定文件或路径中的文件 76 | 77 | 你需要自行利用搜索引擎完成这几个文件的简单学习 78 | 79 | ### 持续集成(CI) 80 | 81 | 你的项目需要引入 Github Action 工作流([Github Action 快速入门](https://docs.github.com/zh/enterprise-cloud@latest/actions/quickstart)) 82 | 83 | 要求至少实现以下几点: 84 | 85 | 1. 漏洞扫描:CodeQL([关于使用 Code QL 进行代码扫描](https://docs.github.com/zh/code-security/code-scanning/introduction-to-code-scanning/about-code-scanning-with-codeql)) 86 | 2. 
代码规范:golangci-lint([Introduction - golangci-lint](https://golangci-lint.run/)) 87 | 88 | 其中,golangci-lint 是一个静态代码扫描检查工具,它有本地的 cli(命令行)帮助你快速找到哪些地方的代码不合规范,规范是一个合格的软件工程师必备的技能,因此你需要 89 | 90 | 1. 较为熟悉的使用 golangci-lint 91 | 2. 在你项目的根目录中添加一个`.golangci.yml`文件,这个文件将会指定静态检查的严格程度([使用教程](https://golangci-lint.run/usage/configuration/)) 92 | 3. 在下一次答辩中,你需要说明你的这份配置文件开启了哪些检查,**开启过少的检查是不合适的** 93 | 94 | 不要直接套用现成的配置,你需要知道workflow的配置内容,答辩时会对你简单了解一下 95 | 96 | ### 文档编写 97 | 98 | 我们鼓励你**使用飞书文档**来提升文档编写的效率 99 | 100 | - 对于所有使用了缓存(如 Redis )的接口,请在文档中**绘制一份流程图**(这很简单)来描述接口运作原理 101 | - 对于所有使用了消息队列的接口,同上 102 | - 将这份飞书文档粘贴到你项目的`README.md`上 103 | 104 | 自己编写接口的流程图,有助于我们快速了解整个流程,**同时也有助于你自己发现这个流程中的问题**(如有) 105 | 106 | 同时,请检查你的`README.md`文件,如果可能,可以做一些文档拆分(在根目录建立一个`docs`文档目录,里面存放子文档),`README.md`应该是尽可能的简单描述这个项目 107 | 108 | 最后,**请提供一个部署文档在仓库中**(可以写在 `README.md`),告诉用户你的项目应当如何部署到服务器(请注意这个需求,这隐含着一个要求:**你的项目自己部署到服务器上过**) 109 | 110 | ### 细节优化 111 | 112 | **错误处理** 113 | 114 | 请[参照这篇文章](https://juejin.cn/post/7246777406387306553)来检查你的项目对于错误的处理是否合适,如果不合适,请修改 115 | *** 116 | **参数校验** 117 | 118 | Hertz 支持参数校验([绑定与校验 | CloudWeGo](https://www.cloudwego.io/zh/docs/hertz/tutorials/basic-feature/binding-and-validate/)),请添加这个功能 119 | *** 120 | **流量治理** 121 | 122 | Hertz 集成了 Sentinel([Sentinel | CloudWeGo](https://www.cloudwego.io/zh/docs/hertz/tutorials/service-governance/sentinel/)),请集成这个功能。要求做到自定义配置(具体请参照 Hertz example 与 Sentinel 官方文档,这是阿里巴巴的项目,中文支持良好) 123 | *** 124 | **代码生成** 125 | 126 | 如果你在上一次作业中没有使用 hz,请使用 hz 代码生成工具自动生成项目 127 | *** 128 | **常量规范** 129 | 130 | 将常量统一进行管理,这部分可以参考github上的一些项目(请不要只对着一个项目借鉴),提升代码复用性,降低代码修改难度。常量不只是数字,还包含文本,对于一些大量重复的文本,也使用常量管理 131 | *** 132 | **配置** 133 | 134 | 请独立一个 `config` 文件夹,并内置一个`config.yaml`(请勿使用 ini),该文件负责一些常量的配置,要求支持配置**热更新** 135 | 136 | 可以使用 Viper 库([spf13/viper](https://github.com/spf13/viper)) 137 | 138 | 同时,为了方便我们**检查你的数据库结构**,请在 `config` 文件夹内新建一个 `sql` 文件夹,在该文件夹内储存你数据库的建表语句(如`init.sql`) 139 | 140 | ### 单元测试 141 | 单测是很重要的(这里省略 20000 个字),虽然这和性能优化无关,但是你仍然需要为你的项目添加一定的单元测试 142 | 143 | 你可以简单的进行单元测试入门([Golang 单元测试指引 | 腾讯技术工程](https://zhuanlan.zhihu.com/p/267341653)、[Golang 单元测试合集整理](https://zhuanlan.zhihu.com/p/656105651)) 144 | 145 | Hertz 同样也提供了单元测试能力([单测 | CloudWeGo](https://www.cloudwego.io/zh/docs/hertz/tutorials/basic-feature/unit-test/)) 146 | 147 | 你必须对你的项目添加一定量的单元测试(考虑大家能力、时间不同,不要求全部完成) 148 | 149 | 请在报告中提供: 150 | 151 | 1. 单元测试覆盖率(可以使用 go 自带的 `go test`命令行工具获取单元测试覆盖率) 152 | 2. 哪些部分使用了单元测试 153 | 3. 你的项目该如何进行单元测试 154 | 155 | **你需要在报告的结尾添加你的单元测试学习笔记**. 156 | 157 | 我们很少硬性规定一定要写笔记,但是这部分请认真对待,你可以写自己对单元测试的理解、`go test` 命令行工具的了解等。 158 | 159 | 字数不限,不需要贴很多字,**不需要套话(写的笔记人能看得懂就行)**,但是请保证是自己的产出。 160 | 161 | ### 性能优化 162 | 163 | **提示:请在完成以上所有内容后开始着手完成这部分的内容** 164 | 165 | 这部分主要是在已有基础上进行修改、更迭 166 | *** 167 | **缓存** 168 | 169 | 有一些接口是可以利用缓存提升接口相应效率的,请自行选择你认为需要完成优化的接口,并逐个进行优化(不要求全部完成) 170 | 171 | *** 172 | **数据库** 173 | 174 | 众所周知,在八股文中,有超级多涉及到数据库的内容。 175 | 176 | 但这里不是要你去背八股文,而是发挥你尽可能的努力,按照接口需求,对数据库**表的结构**进行优化 177 | *** 178 | **并发** 179 | 180 | 对于一些接口(例如上传头像),实际上可以并发操作的,请对你认为可以进行优化(使用 goroutine)的地方进行优化 181 | 182 | tips:实际上,对于一些非敏感性内容(例如点赞),**可以提前返回响应**,然后在服务端再进行点赞的落库 183 | *** 184 | **请在报告中展示你的优化内容**(需要提供优化前和优化后,不需要说明提升了多少效率) 185 | 186 | 这部分没有硬性工作量要求,但并不是说你可以直接略过或者就没做什么事(你敢略过试试!),因为**前面的文字看似多、实际工作量并不多** 187 | 188 | ## 报告 189 | 190 | 你需要编写一份报告用于答辩(使用飞书文档),在项目提交时提交(可以先提交文档链接,后续继续优化文档),**不限制报告格式**、内容,但需要拥有以下内容 191 | 192 | 1. (Problem Restatement)问题重述:用**最简短**的话复述你这次需要完成的内容 193 | 2. (問題が解決しました)问题解决:使用打勾(复选框)来示意你**全部完成的内容**,对于部分完成的内容,请不要打勾,而是描述你目前已经完成的内容 194 | 3. 
(如有)(Spotlight)项目亮点:这部分不是必须的,如果你认为你的项目**有一些巧思**,请写上 195 | 4. (如有)(Advancement)进阶:超出文档需求的完成量,比如实现了部分Bonus 内容,则在这部分描述 196 | 5. (如有)(Argument)抱怨:你对这个文档存在的不足的抱怨,请尽量写,**不要害羞**,最好不要写个无 197 | 198 | 199 | 本次作业不要求全部完成,**但是会衡量你的工作量**,请酌情注意任务需求 200 | 201 | 202 | ## Bonus 203 | 204 | 1. 项目使用 Kitex(不会用不要乱上哈) 205 | 2. 请考虑你的聊天系统的性能(例如使用Benchmark测试) 206 | 3. 考虑聊天传输的安全性(可以学习一下Telegram是如何保证传输安全性的,但是现阶段是做不到的,可以尝试做一些小的安全性优化) 207 | 4. 使用消息队列(RabbitMQ、RocketMQ、Kafka等) 208 | 209 | ## 预告 210 | 211 | 在未来的作业中,你需要完成一个以图搜图的接口,这个功能的实现基于一个非关系型数据库——向量数据库(Vector Database),可以先了解[Milvus](https://milvus.io/) 212 | 213 | **答辩时会询问你对上下文(Context)这个概念的理解** 214 | 215 | 216 | 可以了解一定的可观测性和治理特性,学有余力的可以开始学习链路追踪(Jaeger)、监控(Prometheus)等内容 217 | 218 | ## 参考 219 | 220 | 有啥好参考的?难道你现在还不会谷歌? 221 | 222 | - [WebSocket | CloudWeGo](https://www.cloudwego.io/zh/docs/hertz/tutorials/basic-feature/protocol/websocket/) 223 | - [慢聊Go之GoLang中使用Gorilla Websocket](https://juejin.cn/post/6946952376825675812) 224 | - [RabbitMQ Go语言客户端教程2——工作队列](https://www.liwenzhou.com/posts/Go/rabbitmq-2/) 225 | -------------------------------------------------------------------------------- /docs/6(2025)-部署与监控.md: -------------------------------------------------------------------------------- 1 | # Golang Lab6(2025) 2 | 3 | ## 目的 4 | 5 | - 深入理解Web工作原理 6 | - 掌握HTTP协议与RPC调度方法 7 | - 理解BASE、CAP理论 8 | - 掌握消息队列在微服务中的应用 9 | - 掌握Kubernetes集群部署与云原生实践 10 | - 理解监控与可观测性体系(**Prometheus**、**OpenTelemetry**) 11 | 12 | --- 13 | 14 | ## 背景 15 | 16 | 由于疫情封校,FanOne与小哥哥们外出Happy的计划泡汤,只能在宿舍一起网聊。他们决定一起在手机上看片。可是看片网站卡顿、缓冲漫长,为拯救观影之夜,他们决定把整套微服务搬上 Kubernetes,引入 Prometheus+Grafana 做全链路监控。为了让FanOne快乐看片,请你实现 **微服务架构** 的**集群部署与监控**。 17 | 18 | --- 19 | 20 | ## 任务 21 | 22 | ### 消息队列(Kafka)(约6h+) 23 | 24 | - 在核心业务流程中引入 **Kafka** 作为异步消息队列,例如: 25 | - 用户点赞、评论等事件异步写入统计服务 26 | #### 参考文档 27 | - https://west2-online.feishu.cn/wiki/DfrlwfG2LiCuw1kmhl5c8wXvnvy 28 | 29 | 30 | ### 链路追踪(约5h+) 31 | 链路追踪(Tracing)是一种跟踪和记录数据、信息或事件流经过的路径和变化的能力。可以帮助开发人员快速定位系统中的性能问题和故障。 32 | - 使用 **OpenTelemetry SDK** 采集 Trace,并通过 **Jaeger** 或 **Tempo** 可视化,**答辩时需展示** 33 | 34 | ### 监控与可观测性**(重点考核项)**(约10h+) 35 | 36 | | 组件 | 要求 | 37 | | ---- |--------------------------------------------------------------------------------------| 38 | | **Prometheus** | 统一采集服务、Kafka、MySQL、Redis、集群nodes相关信息等指标;自定义业务Metrics。Prometheus只是一个**时序数据库**,不具备采集数据的功能。 | 39 | | **Collector**| 部署 collector(promtail、alloy 等)用于采集应用的相关性能指标等。| 40 | | **Jaeger**| 用于实现请求在多个服务之间的链路追踪。| 41 | | **Grafana** | 构建 Dashboard:<br>• 总览面板(流量/错误率/延时)<br>• Kafka消费 Lag<br>• GoRuntime监控(gc、goroutines)。 | 42 | 43 | - fzuhelper 的监控技术栈有**Prometheus**、**alloy**、**loki**等,对监控方面感兴趣的可以去了解**VictoriaMetrics** 44 | 45 | #### 参考文档 46 | - Prometheus官方文档 47 | - OpenTelemetryGo文档 48 | - Jaeger官方文档 49 | 50 | ### Kubernetes(约 20h+ , 含项目部署调试过程) 51 | 52 | | 场景 | 说明 | 53 | |-------------------|--------------------------------------------------------------------------------------------------------------------------| 54 | | **本地集群** | 推荐使用 **kind**(Kubernetes in Docker)快速拉起测试集群。 | 55 | | **云端部署(Bonus)** | 将完整服务部署至公有云 Kubernetes。 | 56 | | **包管理** |(不做强制要求) 采用 **Helm Chart** 管理服务发布:<br>• 编写 values.yaml 配置<br>• 使用 `helm upgrade --install` 完成滚动更新<br>• 文档中附 `helm template` 渲染结果。 | 57 | 58 | 在本轮作业中,你需要在你的本地运行k8s,并且将你的项目部署在本地的k8s集群上,**答辩时需展示** 59 | 60 | - 先去了解 k8s 中的各个组件之间是如何工作的,重点学习**网络**部分,使用**一键启动**的 k8s(minikube、kind、k3s 等),将你的应用和相关依赖(数据库等)进行部署,在部署的过程中学习各组件的功能。 61 | - bonus:在熟悉各组件的基础上,使用 **kubeadm** 部署 k8s。 62 | 63 | 推荐使用[KubeSphere](https://kubesphere.io/zh/) 提供的KubeKey安装[KubeSphere](https://kubesphere.io/zh/) 和[Kubernetes](https://kubernetes.io/zh/) (使用all in one模式) 64 | 65 | 66 | #### 推荐阅读 67 | - [操作系统、容器和Kubernetes](https://west2-online.feishu.cn/wiki/NR0Iwp6mtij1oRkNKNXceeTknQL?from=from_copylink) 68 | - [k8s的一些基础概念](https://west2-online.feishu.cn/wiki/XKNfw1GFDiE1zokrwFKcZ7Dcn8c?from=from_copylink) 69 | - [使用KubeSphere部署MySQL](https://west2-online.feishu.cn/wiki/ExCRwIKrGiNPXQkfiBJcNdazn1d?from=from_copylink) 70 | - [KubeSphere安装记录(踩坑)](https://west2-online.feishu.cn/wiki/WG6IwpVEzikQkAkgrGXceB7yn6b?from=from_copylink) 71 | 72 | 我觉得这部分很有可能会坐牢,请留出至少**5小时**以上的时间 73 | 74 | PS:k8s会吃掉大量的内存(2G 以上,推荐预留 4G),我认为虚拟机的性能可能不够 75 | 76 | ### pprof 分析(约 7h+ ,仅包含学习和简单热点分析耗时) 77 | 78 | 在本轮作业中,你需要使用 pprof 进行程序热点分析,并在热点分析的基础上尝试对**高耗时部分**进行优化(整个使用 pprof、分析火焰图、进行具体的优化**需要写在答辩文档上**) 79 | 80 | [golang pprof 实战](https://blog.wolfogre.com/posts/go-ppof-practice/) 81 | 82 | 即使因具体的优化难度过高无法实现优化,也应该在文档中体现你的分析过程,**过程远比结果重要** 83 | 84 | 85 | ### 自动化测试报告(约 3h , 含学习及上手测试耗时) 86 | 87 | 在接口功能具体实现的基础上,这部分大约需要耗时 1 小时左右,耗时较短。 88 | 89 | 自动化测试可以帮助你快速回归验证接口,请在**你的报告文档中**提供利用 Apifox 实现的自动化测试报告链接。 90 | 91 | Apifox 提供了简单的自动化测试工具,其提供的测试报告可以反映测试通过率、总耗时、接口请求耗时和平均接口请求耗时。 92 | 93 | 94 | --- 95 | 96 | ### 性能优化(约 10h+ , 和个人有关) 97 | 98 | **数据库优化** 99 | 100 | 这一轮我们会重点关注你的数据库设计 101 | 102 | 要求: 103 | 104 | 1. 合理的数据表设计(可以在文档中阐述思路和遇到的问题) 105 | 2. 为每一个微服务使用单独的数据储存,**至少**不能共享一张表 106 | 3. 你需要做到一定的**数据库优化**,并且将你做的优化**写在文档上** 107 | - 为每张表设计合理的索引 108 | - 考虑使用[Sharding](https://gorm.io/zh_CN/docs/sharding.html) 109 | - 别的你能想到的优化,例如外键、trigger(注意触发器可能带来的性能问题)、定时任务等 110 | 111 | 112 | 113 | **调用优化** 114 | 115 | 这是引入微服务设计后可能出现的问题——你的 RPC 太多了,更有可能出现**循环调用**的情况 116 | 117 | 请重视调用优化,这里给几个可行的解决**循环调用**的方案: 118 | 119 | 1. 数据库层面进行优化,请针对计数类字段进行特别优化,例如,你可以利用定时任务来异步更新,而不需要同步更新(这里说的很简短,具体请结合接口文档及自己的项目分析) 120 | 2. 重新对你的微服务拆分方案进行设计 121 | 122 | 但是,可能还会遇到其他的调用性能问题,不仅仅只是循环调用,请对自己的接口认真负责 123 | 124 | 125 | 126 | **性能提升可视化** 127 | 128 | 在上一轮作业中,很多同学尝试很多性能优化的办法,但是有的优化可能不太合理 129 | 130 | 为了进一步理解性能优化的应用和效果,我们需要你进行一定的测试来显示出性能优化的**效果**(如使用**Benchmark**测试),并且在**文档中展示**出你的优化**内容**、优化**效果**以及描述接口逻辑的**流程图**(如有) 131 | 132 | 我们希望你的文档可以帮我们快速**定位**到项目中的代码位置(添加一些描述,或者给流程图添加一些提示内容) 133 | 134 | *** 135 | 136 | 137 | ### 文档优化 138 | 139 | 请尽量做到可以直接根据文档知道你的项目亮点,不要简单的进行简短描述 140 | 141 | 在上一轮作业中,大部分人的文档都写的不是很好,我们希望你进一步优化你的文档,**不限制格式**、内容,但需要拥有以下内容(和上一轮一样) 142 | 143 | 1. (Problem Restatement)问题重述:用**最简短**的话复述你这次需要完成的内容 144 | 145 | 2. (問題が解決しました)问题解决:使用打勾(复选框)来示意你**全部完成的内容**,对于部分完成的内容,请不要打勾,而是描述你目前已经完成的内容 146 | 147 | 3. (如有)(Spotlight)项目亮点:这部分不是必须的,如果你认为你的项目**有一些巧思**,请写上 148 | 149 | 4. (如有)(Advancement)进阶:超出文档需求的完成量,比如实现了部分Bonus 内容,则在这部分描述 150 | 151 | 5. (如有)(Argument)抱怨:你对这个文档存在的不足的抱怨,请尽量写,**不要害羞**,最好不要写个无 152 | 153 | 我非常建议大家在~~坐牢~~学习新知识的时候在文档里做一些笔记和记录,这有利于量化你的学习内容并且使知识更加系统,我们也可以更加了解你的学习过程~~(学不下去了就在文档里发电)~~ 154 | 155 | 本次作业会**综合衡量你的工作量**,请酌情注意任务需求 156 | 157 | --- 158 | 159 | ## Bonus 160 | 161 | 1. **云端 K8s 部署**(见上表)or 使用 **kubeadm** 部署 k8s 162 | 2. 项目支持负载均衡(Load Balance),实现轮询(Round-Robin)策略即可 163 | 3. 项目中集成熔断降级功能,推荐使用框架 [hystrix](https://github.com/afex/hystrix-go) 164 | 4. 数据库使用分库分表 165 | 5. 
添加一个上传视频的接口,具体实现为 166 | 167 | - 使用流式请求分片上传视频 168 | - 与用户绑定(即对于每个视频,都要求在数据库中设定上传者、上传时间等内容 169 | 170 | 171 | --- 172 | 173 | ## 参考资料 174 | 175 | | 主题 | 资料 | 176 | | ---- |-------------------------------------------------------------------| 177 | | **Kafka** | 官方文档 · 《Kafka 权威指南》(中文) | 178 | | **Prometheus & Grafana** | 书籍《Prometheus 权威指南》 · Grafana Labs 文档 | 179 | | **OpenTelemetry** | 官网 · CNCF 中文社区译文 | 180 | | **Helm** | 官方文档 · 《Helm3实战》(中文电子书) | 181 | | **kind** | GitHub README | 182 | | **Kubernetes** | 官方文档(中文) | 183 | | **Jaeger** | 官网 | 184 | | **Cloud Native Glossary(中文)** | | 185 | 186 | --- 187 | 188 | > 🚀 **祝你顺利完成 Lab 6,并在答辩中秀出你的可观测性大屏与分布式集群!** 189 | 190 | -------------------------------------------------------------------------------- /docs/6-微服务.md: -------------------------------------------------------------------------------- 1 | # Golang Lab6 2 | 3 | ## 目的 4 | 5 | - 掌握微服务架构和Web工作原理 6 | - 掌握HTTP协议和RPC调度方法 7 | - 掌握BASE、CAP理论 8 | 9 | ## 背景 10 | 11 | 由于疫情封校了,FanOne和小哥哥们外出Happy的计划泡汤了,只能在宿舍一起网聊。她的小哥哥们提出和她一起在手机上看片,为了让FanOne和她的小哥哥们可以快乐看片,请你写一个基于**微服务架构**的视频网站 **(使用[Kitex](http://www.cloudwego.cn/zh/docs/kitex/))**,让FanOne能够享受封校生活! 12 | 13 | ## 任务 14 | 15 | 请遵照以下接口文档完成功能 16 | 17 | [https://doc.west2.online/](https://doc.west2.online/) 18 | 19 | 本次作业新增了一个需要**与AI组同学合作**的接口 20 | 21 | ### 重构(约 20h+) 22 | 23 | 除了这个接口,你的主要任务是将上一轮的作业升级成**微服务架构** 24 | 25 | 如果你还不了解微服务架构,可以阅读这篇文章[微服务架构 Intro](https://west2-online.feishu.cn/wiki/LfgfwMwZFibcvRkVlR2ctlEenBe?from=from_copylink) 26 | 27 | **提醒** 28 | 29 | 1. 你是否在上一轮中有一些**没完成或者遗漏的内容**或者**做的不好的地方**(参考留档内容)?如果有,请在这一轮完成,并且在文档中提及 30 | 2. 在从单体式架构升级到微服务架构的过程中,请考虑以下问题,并**尽量**解决 31 | - 你的架构是否会导致**RPC循环调用**?例如:在RPC用户个人信息(user_info)时,需要获取用户的视频数量、点赞数量等内容,这时候需要分别的向interaction和video两个模块发RPC 32 | - 如何保证微服务之间的**安全性与权限管理** 33 | - 如何保证不同微服务之间的**数据一致性** 34 | - 接口逻辑可能会有很大的变化,请注意**接口逻辑的解耦** 35 | 36 | ### 接口(约 10h+,含学习及对接过程) 37 | 38 | **新增接口** 39 | 40 | 使用Milvus向量数据库实现以图搜图功能(AI模型由AI方向同学提供) 41 | 42 | **大致的流程** 43 | 44 | 1. 将图片或者图片集通过模型提取图片特征生成多维向量数据 45 | 46 | 2. 将向量数据存储到milvus数据库中 47 | 48 | 3. 后续搜索使用Milvus API进行相关性搜索得到符合条件的向量数据IDList 49 | 50 | 4. 
使用这个IDList去Mysql中查找到图片URL 51 | 52 | 更多细节,需要你们在4月28日后与AI同学进行详细对接 53 | 54 | [向量数据库Milvus入门及基本上手](https://west2-online.feishu.cn/wiki/Je2VwBjlvikY05k3pbncEfP3nv8) 55 | 56 | 57 | ### 服务注册与发现(约 5h+ , 含学习及使用过程) 58 | 59 | 服务注册与发现是一种机制,用于管理和维护微服务架构中各个服务的地址和元数据的组件。 60 | 61 | 通过服务注册与发现,可以**动态地**发现和调用其他微服务,从而简化了系统的管理和维护。 62 | 63 | 在这一轮中,你需要在你的项目中实现服务注册和发现 64 | 65 | 注册中心使用[Nacos](https://nacos.io/)或者[Etcd](https://etcd.io/),**答辩时需展示** 66 | 67 | ### 链路追踪(约 5h+ ,含学习及使用过程) 68 | 69 | 链路追踪(Traceability)是一种跟踪和记录数据、信息或事件流经过的路径和变化的能力。可以帮助开发人员快速定位系统中的性能问题和故障。 70 | 71 | 在这一轮中,你需要对项目使用链路追踪(例如[Jaeger](https://github.com/jaegertracing/jaeger)),**答辩时需展示** 72 | 73 | ### Kubernetes(约 15h+ , 含项目部署调试过程) 74 | 75 | 在本轮作业中,你需要在你的本地运行k8s,并且将你的项目部署在本地的k8s集群上,**答辩时需展示** 76 | 77 | 推荐使用[KubeSphere](https://kubesphere.io/zh/) 提供的KubeKey安装[KubeSphere](https://kubesphere.io/zh/) 和[Kubernetes](https://kubernetes.io/zh/) (使用all in one模式) 78 | 79 | 推荐阅读[操作系统、容器和Kubernetes](https://west2-online.feishu.cn/wiki/NR0Iwp6mtij1oRkNKNXceeTknQL?from=from_copylink) 80 | 81 | 以及我的踩坑记录和笔记[KubeSphere安装记录(踩坑)](https://west2-online.feishu.cn/wiki/WG6IwpVEzikQkAkgrGXceB7yn6b?from=from_copylink)和[使用KubeSphere部署MySQL](https://west2-online.feishu.cn/wiki/ExCRwIKrGiNPXQkfiBJcNdazn1d?from=from_copylink) 82 | 83 | 我觉得这部分很有可能会坐牢,请留出至少**3小时**以上的时间 84 | 85 | PS:k8s会吃掉大量的内存,我认为虚拟机的性能可能不够 86 | 87 | ### pprof 分析(约 7h+ ,仅包含学习和简单热点分析耗时) 88 | 89 | 在本轮作业中,你需要使用 pprof 进行程序热点分析,并在热点分析的基础上尝试对**高耗时部分**进行优化(整个使用 pprof、分析火焰图、进行具体的优化**需要写在答辩文档上**) 90 | 91 | [golang pprof 实战](https://blog.wolfogre.com/posts/go-ppof-practice/) 92 | 93 | 即使因具体的优化难度过高无法实现优化,也应该在文档中体现你的分析过程,**过程远比结果重要** 94 | 95 | ### 自动化测试报告(约 3h , 含学习及上手测试耗时) 96 | 97 | 在接口功能具体实现的基础上,这部分大约需要耗时 1 小时左右,耗时较短。 98 | 99 | 自动化测试可以帮助你,请在**你的报告文档中**提供利用 Apifox 实现的自动化测试报告链接。 100 | 101 | Apifox 提供了简单的自动化测试工具,其提供的测试报告可以反应测试通过率、总耗时、接口请求耗时和平均接口请求耗时。 102 | 103 | ### 性能优化(约 10h+ , 和个人有关) 104 | 105 | **数据库优化** 106 | 107 | 这一轮我们会重点关注你的数据库设计 108 | 109 | 要求: 110 | 111 | 1. 合理的数据表设计(可以在文档中阐述思路和遇到的问题) 112 | 2. 为每一个微服务使用单独的数据储存,**至少**不能共享一张表 113 | 3. 你需要做到一定的**数据库优化**,并且将你做的优化**写在文档上** 114 | - 为每张表设计合理的索引 115 | - 考虑使用[Sharding](https://gorm.io/zh_CN/docs/sharding.html) 116 | - 别的你能想到的优化,例如外键、trigger(注意触发器可能带来的性能问题)、定时任务等 117 | 118 | *** 119 | 120 | **调用优化** 121 | 122 | 这是引入微服务设计后可能出现的问题——你的 RPC 太多了,更有可能出现**循环调用**的情况 123 | 124 | 请重视调用优化,这里给几个可行的解决**循环调用**的方案: 125 | 126 | 1. 数据库层面进行优化,请针对计数类字段进行特别优化,例如,你可以利用定时任务来异步更新,而不需要同步更新(这里说的很简短,具体请结合接口文档及自己的项目分析) 127 | 2. 重新对你的微服务拆分方案进行设计 128 | 129 | 但是,可能还会遇到其他的调用性能问题,不仅仅只是循环调用,请对自己的接口负责认真 130 | 131 | *** 132 | 133 | **性能提升可视化** 134 | 135 | 在上一轮作业中,很多同学尝试很多性能优化的办法,但是有的优化可能不太合理 136 | 137 | 为了进一步理解性能优化的应用和效果,我们需要你进行一定的测试来显示出性能优化的**效果**(如使用Benchmark测试),并且在**文档中展示**出你的优化**内容**、优化**效果**以及描述接口逻辑的**流程图**(如有) 138 | 139 | 我们希望你的文档可以帮我们快速**定位**到项目中的代码位置(添加一些描述,或者给流程图添加一些提示内容) 140 | 141 | *** 142 | 143 | 同时,架构升级之后,也会有一些**新的**可优化的点出现 144 | 145 | ### 文档优化 146 | 147 | 请尽量做到可以直接根据文档知道你的项目亮点,不要简单的进行简短描述 148 | 149 | 在上一轮作业中,大部分人的文档都写的不是很好,我们希望你进一步优化你的文档,**不限制格式**、内容,但需要拥有以下内容(和上一轮一样) 150 | 151 | 1. (Problem Restatement)问题重述:用**最简短**的话复述你这次需要完成的内容 152 | 153 | 2. (問題が解決しました)问题解决:使用打勾(复选框)来示意你**全部完成的内容**,对于部分完成的内容,请不要打勾,而是描述你目前已经完成的内容 154 | 155 | 3. (如有)(Spotlight)项目亮点:这部分不是必须的,如果你认为你的项目**有一些巧思**,请写上 156 | 157 | 4. (如有)(Advancement)进阶:超出文档需求的完成量,比如实现了部分Bonus 内容,则在这部分描述 158 | 159 | 5. 
(如有)(Argument)抱怨:你对这个文档存在的不足的抱怨,请尽量写,**不要害羞**,最好不要写个无 160 | 161 | 我非常建议大家在~~坐牢~~学习新知识的时候在文档里做一些笔记和记录,这有利于量化你的学习内容并且使知识更加系统,我们也可以更加了解你的学习过程~~(学不下去了就在文档里发电)~~ 162 | 163 | 本次作业会**综合衡量你的工作量**,请酌情注意任务需求 164 | 165 | ### Bonus 166 | 167 | 1. 对项目提供监控特性(例如[Prometheus](https://github.com/prometheus/prometheus) 与 [Skywalking](https://skywalking.apache.org/)) 168 | 2. 项目支持负载均衡(Load Balance),实现轮询(Round-Robin)策略即可 169 | 3. 项目中集成熔断降级功能,推荐使用框架 [hystrix](https://github.com/afex/hystrix-go) 170 | 4. 数据库使用分库分表 171 | 5. 尝试使用另一个很热门的rpc框架[Kratos](https://github.com/go-kratos/kratos),写一份kitex 和 kratos 对比报告(对比往往要写一点Demo),报告内容包括 172 | 173 | - 框架的各种支持与扩展对比 174 | - 不同并发量下的吞吐率(每秒完成的调用数)和延迟(平均耗时) 175 | - 别的你的心得体会 176 | 5. 添加一个上传视频的接口,具体实现为 177 | 178 | - 使用流式请求分片上传视频 179 | - 与用户绑定(即对于每个视频,都要求在数据库中设定上传者、上传时间等内容 180 | 181 | 182 | 183 | ## 参考 184 | 185 | 常见的RPC框架 186 | 187 | | 公司 | 名称 | 地址 | 188 | | -------- | ------- | ------------------------------------ | 189 | | 谷歌 | grpc-go | https://github.com/grpc/grpc-go | 190 | | 七牛 | go-zero | https://github.com/zeromicro/go-zero | 191 | | Bilibili | Kratos | https://github.com/go-kratos/kratos | 192 | | 字节跳动 | Kitex | https://github.com/cloudwego/kitex | 193 | | Apache | Dubbo | https://github.com/apache/dubbo-go | 194 | | 腾讯 | Tars | https://github.com/TarsCloud/TarsGo | 195 | | 斗鱼 | jupiter | https://github.com/douyu/jupiter | 196 | 197 | 建议学习一些资料较为完善的RPC框架,比如grpc-go,这里不推荐go-micro,因为国内用的少且版本比较混乱。 198 | 199 | | 标题 | 地址 | 200 | | ------------------------------------------------------------ | ------------------------------------------------------------ | 201 | | Google API 设计指南 | https://cloud.google.com/apis/design | 202 | | 如何才能更好的学习MIT 6.824? | https://zhuanlan.zhihu.com/p/110168818 | 203 | | MIT 6.824课程中文学习资料 | https://mit-public-courses-cn-translatio.gitbook.io/mit6-824/ | 204 | | Sorosliu1029/6.824 | https://github.com/Sorosliu1029/6.824 | 205 | | 字节跳动自研高性能微服务框架Kitex的演进之旅 | https://juejin.cn/post/7098631562232594462 | 206 | | RPC框架Kitex实践入门:性能测试指南 | https://juejin.cn/post/7033972008257847304 | 207 | | 高性能RPC框架CloudWeGo-Kitex内外统一的开源实践 | https://juejin.cn/post/7148688078083915807 | 208 | | [译] Go 语言的整洁架构之道 —— 一个使用 gRPC 的 Go 项目整洁架构例子 | https://juejin.cn/post/6844903687463108616 | 209 | | 写给go开发者的gRPC教程-protobuf基础 | https://juejin.cn/post/7191008929986379836 | 210 | | go基于grpc构建微服务框架-服务注册与发现 | https://juejin.cn/post/6844903593758162958 | 211 | | 《gor入门grpc》第一章:什么是gRPC | https://segmentfault.com/a/1190000043343832 | 212 | | Raft算法动画演示 | https://github.com/klboke/raft-animation | 213 | 214 | 当然,这里面只列举了一部分内容,微服务的资料网上非常非常的多 215 | 216 | ## 提示 217 | 218 | 你可能需要先学习一定的云原生知识(不需要学习太深,我们现在只是做一个toy demo):[云原生资料库](https://lib.jimmysong.io/) 219 | 220 | - 每个厂都会有自己开源的RPC框架,选择哪个RPC框架都无所谓,主要是学习微服务的思想,本质都是一样的 221 | - 不过这些厂开源的RPC框架都非常的完善,以至于实现起来很简单,比如服务注册可以直接调用封装好的功能,虽然提高了开发流程,**但不建议初学者这样使用**,容易成为API工程师。建议使用相对原始一点的rpc框架如grpc-go来自己实现一个服务注册与发现的方法 222 | 223 | 224 | 225 | 如果你想深入学习分布式,可以参考下面这些提示,同时尝试**先**完成所有的Bonus 226 | 227 | 1. 学习MIT-6.824 228 | 2. 了解分布式注册中心,如 etcd , zookeeper , euruka 等,并在代码中封装分布式系统中的服务注册、服务发现功能。有兴趣还可以了解一下Raft算法,参考[hashicorp/raft](https://github.com/hashicorp/raft) 229 | 3. 可以学习一些分布式存储方面的技术,如MySQL主从复制、读写分离、高可用配置,Redis的分布式锁(Redlock 算法)、主从模式和哨兵模式,ELK日志系统等。 230 | 4. 
可学习Kubernetes基础,并提前了解云原生方向 https://www.cncf.io/ 231 | -------------------------------------------------------------------------------- /docs/7-6.824.md: -------------------------------------------------------------------------------- 1 | # Golang Lab7 2 | 3 | ## 目的 4 | 5 | - 掌握分布式系统设计与实现的重要原则和关键技术 6 | 7 | - 学习和实现MapReduce 8 | 9 | - 学习和实现Raft算法 10 | 11 | ## 任务 12 | 13 | 6.824(2023年后改名6.5840)包括4个编程作业 14 | 15 | - 6.5840 Lab 1: MapReduce 16 | 17 | - 6.5840 Lab 2: Key/Value Server 18 | 19 | - 6.5840 Lab 3: Raft 20 | 21 | - 6.5840 Lab 4: Fault-tolerant Key/Value Service 22 | 23 | 总课程表如下,里面包含了所有相关论文和作业(当然,纯英文的) 24 | 25 | https://pdos.csail.mit.edu/6.824/schedule.html 26 | 27 | 通过git获取实验初始的框架 28 | 29 | ```bash 30 | git clone git://g.csail.mit.edu/6.5840-golabs-2024 6.5840 31 | ``` 32 | 33 | - 6.824的lab不能在Windows下运行(WSL按照文档说明,无法正常运行) 34 | - 你的IDE可能会报很多错,这是正常的,它可以跑起来 35 | - 每个lab都有对应的测试脚本或代码,你可以从这些文件入手 36 | 37 | 38 | 39 | 本次作业,你只需要完成 Lab1,后续的3个Lab以周会的形式进行 40 | 41 | ## Lab1 42 | 43 | **`因为lab的内容并不好理解,以下内容旨在帮助你找到一个相对合理的主线去理解整个lab,但这并不代表你可以不去看原文(可以用GPT翻译),整个作业中充斥着大量的小细节,而这些细节本文无法涵盖,而它们可能会让你疑惑很久`** 44 | 45 | 在这个实验中,你将构建一个MapReduce系统用于计算多个txt文件的单词计数 46 | 47 | ### MapReduce简介 48 | 49 | 我相信你不会想看又臭又长的英文论文的,所以这里我给出一些核心概念的解释 50 | 51 | MapReduce 的名称来源于其两个主要步骤:Map 和 Reduce。 52 | 53 | 1. **Map 步骤**: 54 | - 输入数据被分割成若干小块(通常是键值对)。 55 | - 每个小块数据被传递给一个 Map 函数进行处理。 56 | - Map 函数生成一组中间结果(键值对)。 57 | 2. **Shuffle 和 Sort 步骤(隐式)**: 58 | - 中间结果根据键进行分组和排序,以便相同键的数据能被传递到同一个 Reduce 函数。 59 | - 这个步骤通常由框架自动处理,不需要用户显式编写代码。 60 | 3. **Reduce 步骤**: 61 | - 每个 Reduce 函数接收来自 Map 步骤的中间结果,并进行汇总、聚合或其他计算。 62 | - Reduce 函数生成最终的输出结果。 63 | 64 | ### 单体式实现 65 | 66 | 6.5840在`src/main/mrsequential.go`中提供了单体式的MapReduce实现 67 | 68 | ```Shell 69 | $ cd ~/6.5840 70 | $ cd src/main 71 | $ go build -buildmode=plugin ../mrapps/wc.go #编译插件 72 | $ rm mr-out* 73 | $ go run mrsequential.go wc.so pg*.txt 74 | $ more mr-out-0 75 | A 509 76 | ABOUT 2 77 | ACT 8 78 | ... 79 | ``` 80 | 81 | #### 使用Plugin加载Map函数和Reduce函数 82 | 83 | `src/mrapps/wc.go `中定义了Map函数和Reduce函数,逻辑也很简单 84 | 85 | Map函数为每个单词生成了一个key为单词内容,value为1的键值对 86 | 87 | ```Go 88 | func Map(filename string, contents string) []mr.KeyValue { 89 | // function to detect word separators. 90 | // 定义字符分隔函数 91 | ff := func(r rune) bool { return !unicode.IsLetter(r) } 92 | 93 | // split contents into an array of words. 94 | // 分割内容成单词数组 95 | words := strings.FieldsFunc(contents, ff) 96 | // 遍历每个单词,为每个单词生成一个键值对 mr.KeyValue{w, "1"} “1”表示这个单词出现过一次 97 | kva := []mr.KeyValue{} 98 | for _, w := range words { 99 | kv := mr.KeyValue{w, "1"} 100 | kva = append(kva, kv) 101 | } 102 | return kva 103 | } 104 | 105 | func Reduce(key string, values []string) string { 106 | // return the number of occurrences of this word. 107 | // 计算键的出现次数 108 | return strconv.Itoa(len(values)) 109 | } 110 | ``` 111 | 112 | 在运行单体式实例时,我们将这个文件编译成plugin 113 | 114 | ```Go 115 | go build -buildmode=plugin ../mrapps/wc.go 116 | ``` 117 | 118 | 然后在单体式MapReduce运行时加载它们,也就是mapf和reducef,它们的本质就是函数变量 119 | 120 | ```Go 121 | func main() { 122 | if len(os.Args) < 3 { 123 | fmt.Fprintf(os.Stderr, "Usage: mrsequential xxx.so inputfiles...\n") 124 | os.Exit(1) 125 | } 126 | 127 | mapf, reducef := loadPlugin(os.Args[1]) 128 | ... 
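	// 下面两行是补充注释:标注 loadPlugin 返回的两个函数变量的类型(与下文 loadPlugin 的签名一致)
	// mapf 的类型为 func(filename string, contents string) []mr.KeyValue:输入文件名和文件内容,输出中间键值对
	// reducef 的类型为 func(key string, values []string) string:输入某个 key 及其全部 value,输出归约结果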
129 | } 130 | 131 | func loadPlugin(filename string) (func(string, string) []mr.KeyValue, func(string, []string) string) { 132 | p, err := plugin.Open(filename) 133 | if err != nil { 134 | log.Fatalf("cannot load plugin %v", filename) 135 | } 136 | xmapf, err := p.Lookup("Map") 137 | if err != nil { 138 | log.Fatalf("cannot find Map in %v", filename) 139 | } 140 | mapf := xmapf.(func(string, string) []mr.KeyValue) // 类型断言 141 | xreducef, err := p.Lookup("Reduce") 142 | if err != nil { 143 | log.Fatalf("cannot find Reduce in %v", filename) 144 | } 145 | reducef := xreducef.(func(string, []string) string) // 类型断言 146 | 147 | return mapf, reducef 148 | } 149 | ``` 150 | 151 | #### main函数分别用Map函数和Reduce函数做了什么 152 | 153 | - 遍历每个txt文件进行Map,将获得的key value 切片合并 154 | - 对key value 切片进行排序,方便计数 155 | - 将相同key的key value进行合并进行Reduce,然后输出 156 | 157 | 你可能会觉得Reduce好像没干什么事情,在单体模式下,确实,但在分布式系统中,Reduce的作用就能体现出来了 158 | 159 | ```Go 160 | // for sorting by key. 161 | type ByKey []mr.KeyValue 162 | 163 | // for sorting by key. 164 | func (a ByKey) Len() int { return len(a) } 165 | func (a ByKey) Swap(i, j int) { a[i], a[j] = a[j], a[i] } 166 | func (a ByKey) Less(i, j int) bool { return a[i].Key < a[j].Key } 167 | 168 | func main() { 169 | if len(os.Args) < 3 { 170 | fmt.Fprintf(os.Stderr, "Usage: mrsequential xxx.so inputfiles...\n") 171 | os.Exit(1) 172 | } 173 | 174 | mapf, reducef := loadPlugin(os.Args[1]) 175 | // 遍历每个txt文件 176 | intermediate := []mr.KeyValue{} 177 | for _, filename := range os.Args[2:] { 178 | // 对每个txt文件进行Map,将获得的key value 切片合并 179 | file, err := os.Open(filename) 180 | if err != nil { 181 | log.Fatalf("cannot open %v", filename) 182 | } 183 | content, err := ioutil.ReadAll(file) 184 | if err != nil { 185 | log.Fatalf("cannot read %v", filename) 186 | } 187 | file.Close() 188 | kva := mapf(filename, string(content)) 189 | intermediate = append(intermediate, kva...) 190 | } 191 | 192 | // 193 | // a big difference from real MapReduce is that all the 194 | // intermediate data is in one place, intermediate[], 195 | // rather than being partitioned into NxM buckets. 196 | // 197 | 198 | // 按照key,也就是单词进行排序,将相同的单词聚集在一起 199 | sort.Sort(ByKey(intermediate)) 200 | 201 | // 创建输出文件 202 | oname := "mr-out-0" 203 | ofile, _ := os.Create(oname) 204 | 205 | // 206 | // call Reduce on each distinct key in intermediate[], 207 | // and print the result to mr-out-0. 208 | // 209 | i := 0 210 | for i < len(intermediate) { 211 | // 对相同的单词进行计数,保存到values切片,再进行Reduce 212 | j := i + 1 213 | for j < len(intermediate) && intermediate[j].Key == intermediate[i].Key { 214 | j++ 215 | } 216 | values := []string{} 217 | for k := i; k < j; k++ { 218 | values = append(values, intermediate[k].Value) 219 | } 220 | output := reducef(intermediate[i].Key, values) 221 | 222 | // this is the correct format for each line of Reduce output. 223 | fmt.Fprintf(ofile, "%v %v\n", intermediate[i].Key, output) 224 | 225 | i = j 226 | } 227 | 228 | ofile.Close() 229 | } 230 | ``` 231 | 232 | #### 单体式较分布式省略了什么步骤 233 | 234 | 单单看单体式的实现并不能帮助你理解分布式系统是怎么运作的,单体式实现省略了一些关键的东西 235 | 236 | 首先你要记住:分布式拥有多个节点同时进行工作 237 | 238 | 1. 单体式直接遍历了每个txt文件进行map任务,但是分布式的时候如何进行map任务的划分和分配? 239 | 2. 单体式直接把Map后的中间结果临时保存在了一个切片内,但是分布式显然不能这么做,分布式系统通过Map产生的中间结果一定不能相互干扰, 240 | 3. 单体式通过一个比较巧妙的循环分割了reduce任务,分布式的reduce任务又应该怎么划分? 241 | 4. 分布式不同节点之间是怎么通信的? 
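对于第2、3个问题,可以先看一个**示意性**的分桶小例子(下面的 KeyValue 和 ihash 是按 lab 原型抄写的演示版本,整段代码只是帮助理解,并不是 lab 要求的实现):

```Go
package main

import (
	"fmt"
	"hash/fnv"
)

// KeyValue 与 lab 中 mr.KeyValue 的结构一致
type KeyValue struct {
	Key   string
	Value string
}

// ihash 与 lab 在 mr/worker.go 中给出的函数相同
func ihash(key string) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32() & 0x7fffffff)
}

func main() {
	nReduce := 3 // 假设有 3 个 reduce 任务
	intermediate := []KeyValue{
		{"apple", "1"}, {"banana", "1"}, {"apple", "1"}, {"cherry", "1"},
	}

	// 桶号 = ihash(key) % nReduce,相同的 key 一定落在同一个桶里,
	// 因此每个 reduce 任务可以独立处理自己桶内的数据
	buckets := make([][]KeyValue, nReduce)
	for _, kv := range intermediate {
		r := ihash(kv.Key) % nReduce
		buckets[r] = append(buckets[r], kv)
	}

	for r, b := range buckets {
		fmt.Printf("reduce 任务 %d 分到:%v\n", r, b)
	}
}
```

可以看到,相同的 key 一定会被分到同一个桶,这正是后文 `mr-X-Y` 中间文件划分的基本思路。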
242 | 243 | 你肯定是一头雾水,别急,继续往下看 244 | 245 | ### 你的任务 246 | 247 | 当然,你的实现必须是分布式的,包括1个Coordinator(协调器)和多个Worker(工作节点) 248 | 249 | 其中,Coordinator的启动入口在`main/mrcoordinator.go` 250 | 251 | Worker的启动入口在`main/mrworker.go` (需要插件) 252 | 253 | MapReduce 系统通过分布式文件系统(DFS)来管理和存储数据,在这个lab中,**你可以默认所有节点共享当前目录的所有txt文件** 254 | 255 | #### Coordinator需要做什么 256 | 257 | 1. 将Map任务和Reduce任务的分解成多个小任务 258 | 259 | Map任务的分解比较简单,因为节点共享所有txt文件,你可以直接把Map任务通过文件名划分,Worker只需要拿到对应的文件名就可以开始工作了 260 | 261 | 关于Reduce任务的分解,lab1在`mr/worker.go`给出了一个关键的函数 262 | 263 | ```Go 264 | // 关键是注释 265 | // use ihash(key) % NReduce to choose the reduce 266 | // task number for each KeyValue emitted by Map. 267 | // 268 | func ihash(key string) int { 269 | h := fnv.New32a() 270 | h.Write([]byte(key)) 271 | return int(h.Sum32() & 0x7fffffff) 272 | } 273 | ``` 274 | 275 | 首先,lab1规定了一个nReduce参数,代表着Reduce任务的数量,同时,每个Map任务都需要为Recude任务创建nRecude个中间文件,我们约定一个合理的中间文件是`mr-X-Y`,其中X是Map任务号,Y是reduce任务号。 276 | 277 | 就像注释所说的那样,我们可以通过ihash(key)来决定Y的值,将中间键/值写入文件(lab1的推荐使用Go的`encoding/json`包写文件) 278 | 279 | 所以,我们可以在Map阶段结束后,通过检查当前目录下文件的文件名,整合出具有相同Y值的文件名作为一个Reduce任务 280 | 281 | 2. Worker会向Coordinator请求任务,Coordinator需要将分解的小任务分配出去 282 | 283 | 需要注意的点 284 | 285 | - 同时有多个节点向Coordinator请求任务,怎么保证任务不会被重复分配(答案是加合适的互斥锁)? 286 | - 我们不能保证Worker是可靠的,如果Worker崩了,Coordinator需要再次把任务分配出去,怎么实现(对每个任务进行超时检查)? 287 | 288 | 3. Coordinator需要**知道**并且**告诉Worker**现在进行到了程序的哪个阶段(Map,Reduce还是已经结束?怎么切换阶段才是合理的?) 289 | 290 | #### Worker需要做什么 291 | 292 | 不断向Coordinator请求任务,直到所有任务已完成 293 | 294 | 需要注意的点 295 | 296 | - Map和Recude的逻辑从哪里加载?它们究竟在做什么? 297 | - Worker怎么知道所有任务已经结束,可以退出了? 298 | 299 | ### 分布式节点之间的通信 300 | 301 | 这里我们只介绍通信的方法,其底层实现比较复杂,欢迎同学们研究 302 | 303 | 在`mr`文件夹的3个文件中有以下例子 304 | 305 | Work可以通过call方法,传入`Coordinator.方法名`,对应的`Args`和`Reply`进行通信 306 | 307 | **注意**,`RPC`仅发送名称以大写字母开头的结构字段。子结构也必须有大写的字段名称。 308 | 309 | ```Go 310 | func (c *Coordinator) Example(args *ExampleArgs, reply *ExampleReply) error { 311 | reply.Y = args.X + 1 312 | return nil 313 | } 314 | type ExampleArgs struct { 315 | X int 316 | } 317 | 318 | type ExampleReply struct { 319 | Y int 320 | } 321 | // 322 | // example function to show how to make an RPC call to the coordinator. 323 | // 324 | // the RPC argument and reply types are defined in rpc.go. 325 | // 326 | func CallExample() { 327 | 328 | // declare an argument structure. 329 | args := ExampleArgs{} 330 | 331 | // fill in the argument(s). 332 | args.X = 99 333 | 334 | // declare a reply structure. 335 | reply := ExampleReply{} 336 | 337 | // send the RPC request, wait for the reply. 338 | // the "Coordinator.Example" tells the 339 | // receiving server that we'd like to call 340 | // the Example() method of struct Coordinator. 341 | ok := call("Coordinator.Example", &args, &reply) 342 | if ok { 343 | // reply.Y should be 100. 344 | fmt.Printf("reply.Y %v\n", reply.Y) 345 | } else { 346 | fmt.Printf("call failed!\n") 347 | } 348 | } 349 | 350 | // 351 | // send an RPC request to the coordinator, wait for the response. 352 | // usually returns true. 353 | // returns false if something goes wrong. 
354 | // 355 | func call(rpcname string, args interface{}, reply interface{}) bool { 356 | // c, err := rpc.DialHTTP("tcp", "127.0.0.1"+":1234") 357 | sockname := coordinatorSock() 358 | c, err := rpc.DialHTTP("unix", sockname) 359 | if err != nil { 360 | log.Fatal("dialing:", err) 361 | } 362 | defer c.Close() 363 | 364 | err = c.Call(rpcname, args, reply) 365 | if err == nil { 366 | return true 367 | } 368 | 369 | fmt.Println(err) 370 | return false 371 | } 372 | ``` 373 | 374 | ### 你可以也应该修改的文件 375 | 376 | 1. `mr/coordinator.go ` 377 | 378 | 这里是你的Coordinator的实现 379 | 380 | 你需要完成 381 | 382 | - Coordinator结构体的定义(Coordinator struct)和初始化(MakeCoordinator) 383 | - 仿造Example函数定义你需要的RPC handler供Worker调用 384 | 385 | 2. `mr/rpc.go ` 386 | 387 | 这里你应该添加你的RPC handler的 Args 和 Reply 的定义,就像Example那样 388 | 389 | 3. `mr/worker.go ` 390 | 391 | 这里是你的Worker实现 392 | 393 | 你需要完成 394 | 395 | - Worker函数,每一个Worker的都会执行这个函数,一个基本思路是在函数中开启循环向Coordinator获取任务 396 | - RPC handler的Call函数,就像CallExample()那样 397 | 398 | ### 关于测试脚本 399 | 400 | lab1提供了一个测试脚本在`main/test-mr.sh`中。测试检查`wc`和`indexer` MapReduce应用程序在给定`pg-xxx.txt`文件作为输入时是否生成正确的输出。测试还检查你的实现是否并行运行Map和Reduce任务,以及你的实现是否能够从崩溃的工作进程中恢复。 401 | 402 | 如果你现在运行测试脚本,它将挂起,因为协调器从未完成: 403 | 404 | ```Bash 405 | bash 406 | 复制代码 407 | $ cd ~/6.5840/src/main 408 | $ bash test-mr.sh 409 | *** Starting wc test. 410 | ``` 411 | 412 | 你可以将`mr/coordinator.go`中的`Done`函数中的`ret := false`改为`true`,这样协调器会立即退出。然后: 413 | 414 | 测试脚本期望在名为`mr-out-X`的文件中看到输出,每个Reduce任务一个文件。`mr/coordinator.go`和`mr/worker.go`的空实现没有生成这些文件(或者做其他事情),所以测试失败。 415 | 416 | 当你完成后,测试脚本的输出应如下所示: 417 | 418 | 你可能会看到一些来自Go RPC包的错误信息,看起来像这样: 419 | 420 | ```CSS 421 | 2019/12/16 13:27:09 rpc.Register: method "Done" has 1 input parameters; needs exactly three 422 | ``` 423 | 424 | 忽略这些消息;将协调器注册为RPC服务器时,会检查所有方法是否适合用于RPC(有3个输入参数);我们知道`Done`不是通过RPC调用的。 425 | 426 | **`理解测试脚本的逻辑对你理解整个lab1很有帮助,你可以通过GPT等工具详细了解其逻辑`** 427 | 428 | -------------------------------------------------------------------------------- /docs/8-合作.md: -------------------------------------------------------------------------------- 1 | # Golang Lab8 2 | 3 | 这一轮通常是与工作室的其他方向组队开发一款产品,但是也可以选择其他,例如 4 | 5 | - 继续精读源码 6 | - 参加开源活动,例如开源之夏、GSoC(Google Summer of Code)等 7 | - 按需定制内容 8 | 9 | 这里我们推荐以下站点,可以关注一下: 10 | 1. 开源软件供应链点亮计划 (开源之夏) - [https://summer-ospp.ac.cn/](https://summer-ospp.ac.cn/) 11 | 2. Google Summber of Code (gsoc) - [https://summerofcode.withgoogle.com/](https://summerofcode.withgoogle.com/) 12 | 3. GLCC开源夏令营 - [https://opensource.alibaba.com/](https://opensource.alibaba.com/) 13 | 4. 腾讯犀牛鸟开源人才培养计划 - [https://opensource.tencent.com/summer-of-code](https://opensource.tencent.com/summer-of-code) 14 | 15 | 除此之外,可以关注一下一些大厂的开源网站 16 | 17 | 1. 阿里开源:[https://opensource.alibaba.com/](https://opensource.alibaba.com/) 18 | 2. 腾讯开源:[https://opensource.tencent.com/](https://opensource.tencent.com/) 19 | 3. Meta Open Source:[https://opensource.fb.com/](https://opensource.fb.com/) 20 | 4. Google Open Source:[https://opensource.google/](https://opensource.google/) 21 | 5. Uber Open Source:[https://uber.github.io/#/](https://uber.github.io/#/) 22 | 6. 
开源 - 美团技术团队:[https://tech.meituan.com/tags/%E5%BC%80%E6%BA%90.html](https://tech.meituan.com/tags/%E5%BC%80%E6%BA%90.html) 23 | 24 | 240630:做个锤子,全给我去做 6.824/6.828 去 25 | -------------------------------------------------------------------------------- /docs/README.md: -------------------------------------------------------------------------------- 1 | # docs 2 | 3 | 这里存放着我们的考核资料,除了推荐资料外,我们还在每一轮的考核中附赠了一定的资料。这些资料是为减轻你的学习负担而准备的。 4 | 5 | 除此之外,我们希望你可以明确以下几点 6 | 7 | 1. 不要过分追求CRUD(增删改查)的实现,除了第三阶段(大作品)外,其他几轮我们的目的是让你熟悉对应的框架/代码/实现。当然,第三轮也并不完全侧重CRUD。我们只是希望你有一个对中大型项目的基础认知 8 | 2. 我们从第三阶段(大作品)开始将会设置答辩环节,答辩时间与作品提交时间将会岔开至少1天 9 | 3. 如果有困难,可以联系考核寻求帮助,我们不是冰冷的Bot! 10 | 4. 如果你有时间上的困难,同样可以联系考核,我们会依据具体情况为你作出调整 11 | 12 | ## 提交作业 13 | 14 | 按照我们目前的考核, 只要你完成, 你可以在**任何时间**提交你的考核代码, 但你需要注意下面这几点 15 | 16 | 1. 你的代码必须上传至github仓库 17 | 2. 如果需要答辩,请对你的代码保持一定的熟练度,同时最好**准备一份介绍你这个代码仓库的文档** 18 | 3. 只要你完成了就可以去联系考核,不必等流程结束统一回收项目 19 | 20 | 我们推荐**学有余力**的同学提前提交你的项目, 可以提前得到考核对你项目的评价, 这样即使你的项目存在缺陷, 你也可以在截止日期(deadline)到来之前更新你的代码, 同时也会给考核对你留下较深的印象 21 | 22 | 提交作业的方法会在考核群内另行通知 23 | 24 | ## 关于答辩 25 | 26 | 对于每次答辩,你需要注意下面这些要点 27 | 28 | 答辩本质上就是你和考核的一次简单聊天,目的在于**确认你对你代码的熟练程度** 29 | 30 | 1. 不要准备PPT,或者简单的准备一下,我们不会关注你的PPT有多好看 31 | 2. 请对你的代码有拍胸脯的保证,我们会侧重于询问代码实现细节 32 | 3. 答辩不会很长,同时氛围十分轻松 33 | 4. 答辩有时候会设置**考核自问自答**环节,目的是让你了解一些你尚未弄懂/其他细节 34 | 5. 请提前5分钟进入会议室 35 | 6. 我们不会要求你完成所有的内容,如果你没有完成要求,你必须指出你哪些没完成,并且说明原因(包括但不限于:没时间/比较忙/其他原因) 36 | 7. 保持自信! 37 | 38 | ## 学习建议 39 | 40 | 每个人的学习方法都不一样,这里给出设计者认为的一个合适的学习路线 41 | 42 | 1. 认真阅读考核文档, 明确明确明确**考核需求** (注意不要过分解读考核要求, 自己给自己上难度) 43 | 2. 开一个github仓库,在上面存放你这一轮考核的代码和笔记(如果有的话) 44 | 3. 确定你将要学习的内容, 可以参考一下我们给出的资料 45 | 4. 对你学习的内容进行学习, 千万注意**不要纸上谈兵, 务必要自己动手**, 有些东西你现在看不懂, 自己写一遍就能明白了 46 | 5. 完成考核内容, 你不必完成全部的内容(如果你感到吃力, 或者这段时间你有其他需要忙的事情), 但是我们推荐你**尽量完成一些Bonus内容** 47 | 6. 将你完成的代码存放到github仓库(当然, 我们更推荐你 **利用好git, 每次作业有进展的时候就commit记录一下**) 48 | 7. 等待考核发布项目回收搜集表,将仓库填上去即可 49 | 50 | 关于资料, 除了每一轮考核的doc, 你也可以关注一下我们这个仓库的 `etc` 文件夹, 里面也存放了一些文章 51 | -------------------------------------------------------------------------------- /docs/deprecated/7-底层实现.md: -------------------------------------------------------------------------------- 1 | # Golang 第七轮考核 2 | ## 目的 3 | 4 | - 掌握Web底层工作原理 5 | - 掌握Orm库的底层工作原理 6 | - 掌握缓存库的设计原理 7 | 8 | ## 任务 9 | 10 | > 以下内容三选一进行完成即可 11 | 12 | ### 基于net/http库实现类似Gin框架的Gout Web框架 13 | 14 | 该Web框架具备以下功能 15 | 16 | - 路由支持GET、POST、DELETE、PUT功能 17 | - 实现Context功能 18 | - 嵌入log、cors、recovery等middleware 19 | 20 | 使用Gout库实现简单的HTTP请求与响应 21 | 22 | ### 基于database/sql库实现类似Gorm框架的go-orm框架 23 | 24 | 该orm框架具备以下功能 25 | 26 | - 能进行表的结构映射 27 | - 实现简单的create、update、find、delete等api接口 28 | - 支持事务 29 | 30 | 使用go-orm对数据库表进行处理 31 | 32 | ### 针对mysql的业务操作使用redis实现一个cache中间件 33 | 34 | 该cache用于提高mysql的查询速度,缓存库具备以下功能 35 | 36 | - 当查询数据在redis中存在时,就在redis中读取,否则在mysql中读取并写入redis 37 | - 当进行增加、删除和修改操作时,redis数据进行更新 38 | - 需要保证mysql与redis的双写一致性 39 | 40 | 使用benchmark进行测试,至少保证10k的并发读写量 41 | 42 | ## 提示 43 | 44 | 如果感到困难,可以选择退而求其次,完成对Gin、Gorm、Redis源码的学习,需要你提交一份学习报告 45 | 46 | 如果你选择提交学习报告,你可以选择的源码就很多了,下面列举了你可以选择的源码: 47 | grpc-go、kitex、kratos、redis-go、gin、gorm 48 | 49 | 你可以按照如下步骤阅读源码,并编写你的报告 50 | 51 | 1. 列出这个框架源码的目录结构,并对每个文件夹(或者重要的文件)进行注释 52 | 2. 找出这个框架的特性、优点 53 | 3. 为了实现这些特性、优点,这个框架做了什么 54 | 4. 为了实现这些特性、优点,这个框架放弃了什么 55 | 5. 这个框架涉及到了哪些数据结构/算法的内容 56 | 6. 挑选一个相同/类似的框架,并对二者进行对比,找出其中的区别 57 | 7. 
为这个框架编写一个简易demo(以gin为例,我们实现一个简单的HTTP接口),并在这个基础上,按照代码流程编写一份流程报告(以Gin为例,我们可以从接收HTTP请求开始,一直分析代码到发送响应请求结束,分析这中间的代码跳转、数据封装、语言特性使用等) 58 | 59 | 60 | 61 | 同时,多在群里提问,主打的就是一个`直言不讳` 62 | -------------------------------------------------------------------------------- /etc/README.md: -------------------------------------------------------------------------------- 1 | # etc 2 | 3 | 这里搜集了我们平时遇到的,认为比较有用的,或者说在考核群里分享的文章。 4 | 5 | 归类比较复杂,所以类型可能并不完整。 -------------------------------------------------------------------------------- /etc/etc.md: -------------------------------------------------------------------------------- 1 | ## 入门 2 | 1. [「2022 版」轻松搞定 Go 开发环境](https://polarisxu.studygolang.com/posts/go/2022-dev-env/) 3 | 4 | ## 基础 5 | 1. [掌握 Golang Interface:让你的代码如虎添翼](https://juejin.cn/post/7224654430921850936) 6 | 2. [Go sync.Once:简约而不简单的并发利器](https://juejin.cn/post/7220797267716358199) 7 | 3. [高效的 Go 编程 Effective Go(2020版)](https://learnku.com/docs/effective-go/2020) 8 | 4. [「一劳永逸」一张脑图带你掌握Git命令](https://juejin.cn/post/6869519303864123399) 9 | 5. [https://juejin.cn/post/7245184987531608124](https://juejin.cn/post/7245184987531608124) 10 | 6. [百度前端团队 - Git 工作流实践方案探索](https://juejin.cn/post/7050012586296737805) 11 | 7. [设计模式 Golang实现-《研磨设计模式》读书笔记](https://github.com/senghoo/golang-design-pattern) 12 | 8. [如何写好单元测试?](https://zhuanlan.zhihu.com/p/387540827) 13 | 9. [为go程序编写测试用例](https://lyc10031.github.io/2019/08/23/go-test.html) 14 | 10. [Go 每日一库之 testing](https://darjun.github.io/2021/08/03/godailylib/testing/) 15 | 11. [GoMock快速上手教程](https://zhuanlan.zhihu.com/p/410445621) 16 | 17 | ## 偏底层 18 | 1. [Go汇编详解](https://mp.weixin.qq.com/s/yPkAn3pRO5j9LKJGRxmaBg) 19 | 2. [dynamicgo 开源 :基于原始字节流的高性能+动态化 Go 数据处理](https://mp.weixin.qq.com/s/Cm7CXxhxRfACf4djhinDew) 20 | 21 | ## 优化 22 | 23 | 1. [美团外卖搜索基于Elasticsearch的优化实践](https://zhuanlan.zhihu.com/p/584648660) 24 | 25 | ## 分布式与云原生 26 | 27 | 1. [一文了解 - 云原生大数据知识地图](https://mp.weixin.qq.com/s/vua2g0_t1Y8KW4cNHSQdNw) 28 | 2. [全链路追踪与Jaeger入门](https://jckling.github.io/2021/04/02/Jaeger/%E5%85%A8%E9%93%BE%E8%B7%AF%E8%BF%BD%E8%B8%AA%E4%B8%8E%20Jaeger%20%E5%85%A5%E9%97%A8/) 29 | 3. [Kitex Proxyless之流量路由:配合 Istio 与 OpenTelemetry 实现全链路泳道](https://mp.weixin.qq.com/s/SAn-H5p53IfvSy_Y3Mcz_Q) 30 | 4. [大规模分布式链路分析计算在字节跳动的实践](https://mp.weixin.qq.com/s/A1iWAqvp8GhjouKg9-LnuA) 31 | 5. [Containers From Scratch • Liz Rice • GOTO 2018](https://youtu.be/8fi7uSYlOdc) 32 | 6. [分布式键值存储 etcd 原理与实现 · Analyze](https://wingsxdu.com/posts/database/etcd/) 33 | 7. [腾讯微服务平台 TSF 的敏捷开发流程](https://juejin.cn/post/6940993650432163877) 34 | 8. [Kubernetes 中文指南/云原生应用架构实战手册](https://jimmysong.io/kubernetes-handbook/) 35 | 9. [消息队列怎么能通俗点解释? - 腾讯技术工程的回答 - 知乎](https://www.zhihu.com/question/321144623/answer/3128270015) 36 | 10. [详解微服务之间3大通信方式:网关 API、RPC 和 SideCar](https://zhuanlan.zhihu.com/p/452558073) 37 | 38 | ## 大数据 39 | 40 | 1. [字节跳动大数据容器化构建与落地实践](https://mp.weixin.qq.com/s/aU8Mjmiz7eJRBE1owzileA) 41 | 42 | ## 网络环境 43 | 44 | 1. [我有特别的DNS配置技巧](https://blog.skk.moe/post/i-have-my-unique-dns-setup/#Zheng-Que-Pei-Zhi-SmartDNS) 45 | 2. [浅谈在代理环境中的DNS解析行为](https://blog.skk.moe/post/what-happend-to-dns-in-proxy/) 46 | 3. [TCP 三握四挥 重传机制 滑动窗口 流量控制 拥塞控制](https://blog.csdn.net/cjw0001/article/details/118273577) 47 | 4. [清华大学开源镜像站](https://mirrors.tuna.tsinghua.edu.cn) 48 | 5. [多链路传输技术在火山引擎 RTC 的探索和实践](https://mp.weixin.qq.com/s/ne7H0NOBETnD_MCQ3VU-wQ) 49 | 6. [(建议精读)HTTP灵魂之问,巩固你的 HTTP 知识体系](https://juejin.cn/post/6844904100035821575) 50 | 51 | ## 架构 52 | 53 | 这部分可以当小说看 54 | 55 | 1. 
[引入 CloudWeGo 后飞书管理后台平台化改造的演进史](https://zhuanlan.zhihu.com/p/544472909) 56 | 2. [深度 | 字节跳动微服务架构体系演进](https://zhuanlan.zhihu.com/p/382833278) -------------------------------------------------------------------------------- /img/mindmap-grammer.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/west2-online/learn-go/aa241bf083195d0637c034fbaf81420d982d695f/img/mindmap-grammer.png -------------------------------------------------------------------------------- /img/mindmap-spider.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/west2-online/learn-go/aa241bf083195d0637c034fbaf81420d982d695f/img/mindmap-spider.png -------------------------------------------------------------------------------- /img/mindmap-study.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/west2-online/learn-go/aa241bf083195d0637c034fbaf81420d982d695f/img/mindmap-study.png --------------------------------------------------------------------------------