├── .gitignore
├── README.md
└── list
    ├── coarse_grained_lock_list
    │   ├── Makefile
    │   ├── coarse_lock_list.cpp
    │   ├── coarse_lock_list.h
    │   └── coarse_lock_list_test.cpp
    ├── fine_grained_lock_list
    │   ├── Makefile
    │   ├── fine_grained_lock_list.cpp
    │   ├── fine_grained_lock_list.h
    │   └── fine_grained_lock_list_test.cpp
    ├── lock_free_list
    │   ├── Makefile
    │   ├── lock_free_list.cpp
    │   ├── lock_free_list.h
    │   ├── lock_free_list_test.cpp
    │   └── run_batch_test.sh
    ├── lock_free_rcu_list
    │   ├── Makefile
    │   ├── list_node.cpp
    │   ├── list_node.h
    │   ├── lock_free_list.cpp
    │   ├── lock_free_list.h
    │   ├── lock_free_list_test.cpp
    │   ├── rcu.cpp
    │   ├── rcu.h
    │   ├── rcu_test.cpp
    │   ├── run_batch_test_list.sh
    │   └── run_batch_test_rcu.sh
    └── result_report
        ├── Add_to_list_performance.png
        ├── Delete_to_list_performance.png
        ├── mixed_op_to_list_performance.png
        └── pic.py

/.gitignore:
--------------------------------------------------------------------------------
*/*/build
*/*/log
*/*/core
*/.vscode
.vscode

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
Lock-Free List
--------------

A comparison of how **different synchronization schemes** affect the performance of a data structure that supports multi-threaded operations on a single multi-core machine. The data structure is a **set implemented as a sorted linked list**, supporting insert, delete, and lookup, all of which are thread safe. The following synchronization schemes are compared:<br>
* Coarse-grained lock
* Fine-grained lock
* Lock-free
* Lock-free + garbage collection (RCU)

#### Test machine
8 cores, 8 hardware threads

#### ============================= Adding elements ==============================
![The Performance of Add](https://github.com/alwaysR9/lock_free_ds/blob/master/list/result_report/Add_to_list_performance.png)<br>

#### ============================= Deleting elements ==============================
![The Performance of Delete](https://github.com/alwaysR9/lock_free_ds/blob/master/list/result_report/Delete_to_list_performance.png)<br>

#### ====================== 40% insert, 40% lookup, 20% delete ======================
![The Performance of 40% Add, 40% Lookup, 20% Delete](https://github.com/alwaysR9/lock_free_ds/blob/master/list/result_report/mixed_op_to_list_performance.png)<br>

#### =============================== Summary ================================
1. Performing the same workload under concurrency:
    - The lock-free list has the shortest running time; it makes full use of all cores (CPU utilization reaches about 800%) and achieves the highest degree of parallelism.
    - The coarse-grained lock list takes longer; it can only use a single core (CPU utilization around 100%).
    - The fine-grained lock list is the slowest (even slower than the coarse-grained lock list, caused by the frequent lock/unlock operations); it can use multiple cores, but not fully (CPU utilization peaks at roughly 400%).
2. Adding garbage collection to the lock-free list:
    - The performance cost of garbage collection is very small compared with the running time of the lock-based lists.
    - The main garbage-collection overhead comes from the STL list operations and from the global lock that keeps reclamation thread safe (a minimal sketch of this scheme follows the references below).
3. Testing
    - Includes: correctness tests under concurrency, performance tests, and memory-leak checks.

#### ============================ Key references =============================
An introduction to several synchronization schemes (including garbage collection for lock-free data structures):<br>
https://people.eecs.berkeley.edu/~stephentu/presentations/workshop.pdf<br>
The lock-free linked list:<br>
https://www.cl.cam.ac.uk/research/srg/netos/papers/2001-caslists.pdf<br>
A **batch** garbage-collection scheme for lock-free data structures:<br>
https://en.wikipedia.org/wiki/Read-copy-update<br>
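
#### ======================= Batch reclamation sketch =======================
The summary above attributes the reclamation overhead to STL list operations and a global lock. The code below is a minimal, illustrative sketch of that kind of quiescent-state batch reclamation (in the spirit of RCU); it is **not** the implementation in `rcu.h`/`rcu.cpp`, and the names `RNode`, `retire`, `quiescent_state`, `reclaim_batch`, and `kMaxThreads` are assumptions made only for this example.

```cpp
#include <atomic>
#include <list>
#include <mutex>

// Hypothetical node type; the real one lives in list_node.h.
struct RNode { long val; RNode* next; };

const int kMaxThreads = 8;                                   // illustrative bound
static std::atomic<unsigned long> g_quiescent[kMaxThreads];  // per-thread quiescent counters

static std::mutex g_gc_mutex;        // global lock protecting the retire list
static std::list<RNode*> g_retired;  // nodes unlinked from the list but not yet freed

// A writer calls this after unlinking a node: new readers can no longer
// reach it, but old readers may still hold a pointer, so defer the free.
void retire(RNode* node) {
    std::lock_guard<std::mutex> guard(g_gc_mutex);
    g_retired.push_back(node);
}

// Every thread calls this between list operations, at a point where it
// holds no pointers into the list.
void quiescent_state(int tid) {
    g_quiescent[tid].fetch_add(1);
}

// Reclaimer: grab the current batch, snapshot each thread's counter, and
// wait until every thread has announced a new quiescent state. After that
// grace period, no reader can still reference the batch, so delete it.
void reclaim_batch() {
    std::list<RNode*> batch;
    {
        std::lock_guard<std::mutex> guard(g_gc_mutex);
        batch.swap(g_retired);
    }
    unsigned long snapshot[kMaxThreads];
    for (int i = 0; i < kMaxThreads; ++i) snapshot[i] = g_quiescent[i].load();
    for (int i = 0; i < kMaxThreads; ++i) {
        while (g_quiescent[i].load() == snapshot[i]) { /* spin: grace period */ }
    }
    for (RNode* n : batch) delete n;
}
```

The point of the sketch is only that every retire and reclaim funnels through one mutex-protected `std::list`, which matches the overhead pattern reported in the summary; readers themselves pay almost nothing beyond incrementing their own counter.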
-------------------------------------------------------------------------------- /list/coarse_grained_lock_list/Makefile: -------------------------------------------------------------------------------- 1 | x: coarse_lock_list_test.cpp coarse_lock_list.cpp coarse_lock_list.h 2 | g++ -o ./build/x -std=c++11 coarse_lock_list_test.cpp coarse_lock_list.cpp coarse_lock_list.h -lpthread 3 | 4 | clean: 5 | rm ./build/* 6 | -------------------------------------------------------------------------------- /list/coarse_grained_lock_list/coarse_lock_list.cpp: -------------------------------------------------------------------------------- 1 | #include "coarse_lock_list.h" 2 | 3 | struct Node { 4 | long val; 5 | Node* next; 6 | Node() { 7 | val = 0; 8 | next = NULL; 9 | } 10 | Node(long val, Node* next) { 11 | this->val = val; 12 | this->next = next; 13 | } 14 | }; 15 | 16 | CoarseLockList::CoarseLockList() { 17 | _head = new Node(); 18 | pthread_mutex_init(&_mutex, NULL); 19 | } 20 | 21 | CoarseLockList::~CoarseLockList() { 22 | pthread_mutex_destroy(&_mutex); 23 | 24 | Node* cur = _head; 25 | while (cur != NULL) { 26 | Node* next = cur->next; 27 | delete cur; 28 | cur = next; 29 | } 30 | } 31 | 32 | std::vector CoarseLockList::vectorize() { 33 | std::vector v; 34 | Node* cur = _head->next; 35 | while (cur != NULL) { 36 | v.push_back(cur->val); 37 | cur = cur->next; 38 | } 39 | return v; 40 | } 41 | 42 | bool CoarseLockList::add(const long val) { 43 | pthread_mutex_lock(&_mutex); 44 | Node** cur = &(_head->next); 45 | while (*cur != NULL && (*cur)->val < val) { 46 | cur = &((*cur)->next); 47 | } 48 | 49 | bool succ = true; 50 | if (*cur == NULL || (*cur)->val > val) { 51 | Node* node = new Node(val, *cur); 52 | *cur = node; 53 | } else { 54 | succ = false; 55 | } 56 | pthread_mutex_unlock(&_mutex); 57 | 58 | return succ; 59 | } 60 | 61 | bool CoarseLockList::rm(const long val) { 62 | pthread_mutex_lock(&_mutex); 63 | Node** cur = &(_head->next); 64 | while (*cur != NULL && (*cur)->val < val) { 65 | cur = &((*cur)->next); 66 | } 67 | 68 | bool succ = true; 69 | if (*cur == NULL) { 70 | succ = false; 71 | } else if ((*cur)->val == val) { 72 | Node* node = *cur; 73 | *cur = (*cur)->next; 74 | delete node; 75 | } 76 | pthread_mutex_unlock(&_mutex); 77 | 78 | return succ; 79 | } 80 | 81 | bool CoarseLockList::contains(const long val) { 82 | pthread_mutex_lock(&_mutex); 83 | Node** cur = &(_head->next); 84 | while (*cur != NULL && (*cur)->val < val) { 85 | cur = &((*cur)->next); 86 | } 87 | 88 | bool succ = true; 89 | if (*cur == NULL) { 90 | succ = false; 91 | } else if ((*cur)->val > val) { 92 | succ = false; 93 | } 94 | pthread_mutex_unlock(&_mutex); 95 | 96 | return succ; 97 | } 98 | 99 | //bool CoarseLockList::contains(const long val) { 100 | // pthread_mutex_lock(&_mutex); 101 | // Node* pre = _head; 102 | // Node* cur = _head->next; 103 | // 104 | // while (cur != NULL) { 105 | // if (val <= cur->val) { 106 | // break; 107 | // } 108 | // pre = cur; 109 | // cur = pre->next; 110 | // } 111 | // 112 | // if (cur == NULL) { 113 | // pthread_mutex_unlock(&_mutex); 114 | // return false; 115 | // } 116 | // 117 | // if (val == cur->val) { 118 | // pthread_mutex_unlock(&_mutex); 119 | // return true; 120 | // } else { 121 | // pthread_mutex_unlock(&_mutex); 122 | // return false; 123 | // } 124 | //} -------------------------------------------------------------------------------- /list/coarse_grained_lock_list/coarse_lock_list.h: 
-------------------------------------------------------------------------------- 1 | #ifndef _COARSE_LOCK_LIST_H 2 | #define _COARSE_LOCK_LIST_H 3 | 4 | #include 5 | #include 6 | 7 | struct Node; 8 | 9 | // Elements sorted from smallest to largest 10 | class CoarseLockList { 11 | public: 12 | CoarseLockList(); 13 | ~CoarseLockList(); 14 | 15 | /**************** Interface ****************/ 16 | // Thread safe 17 | bool add(const long val); 18 | bool rm(const long val); 19 | bool contains(const long val); 20 | 21 | /**************** Test ****************/ 22 | // Not thread safe 23 | std::vector vectorize(); 24 | 25 | private: 26 | // _head is an empty node, 27 | // _head->val is invalid 28 | Node* _head; 29 | pthread_mutex_t _mutex; 30 | }; 31 | 32 | #endif -------------------------------------------------------------------------------- /list/coarse_grained_lock_list/coarse_lock_list_test.cpp: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include 4 | #include 5 | #include 6 | #include 7 | #include 8 | 9 | #include "coarse_lock_list.h" 10 | 11 | struct ThreadArgv { 12 | CoarseLockList* pl; 13 | int b, e; 14 | std::vector v; 15 | ThreadArgv() {} 16 | ThreadArgv(CoarseLockList* pl, int b, int e) { 17 | this->pl = pl; 18 | this->b = b; 19 | this->e = e; 20 | } 21 | void init(CoarseLockList* pl, int b, int e) { 22 | this->pl = pl; 23 | this->b = b; 24 | this->e = e; 25 | } 26 | void add_rand_seq(const std::vector & v) { 27 | this->v = v; 28 | } 29 | }; 30 | 31 | void TEST_CORRECTNESS_SINGLE_THREAD() { 32 | CoarseLockList l; 33 | std::vector v; 34 | 35 | l.add(1), l.add(3), l.add(2), l.add(3); 36 | v = l.vectorize(); 37 | assert (v.size() == 3); 38 | assert (v[0] == 1); 39 | assert (v[1] == 2); 40 | assert (v[2] == 3); 41 | 42 | l.rm(1), l.rm(2), l.rm(2); 43 | v = l.vectorize(); 44 | assert (v.size() == 1); 45 | assert (v[0] == 3); 46 | 47 | bool res1 = l.contains(3); 48 | bool res2 = l.contains(1); 49 | assert(res1 == true); 50 | assert(res2 == false); 51 | 52 | std::cout << "Test single thread correctness successfully" << std::endl; 53 | 54 | std::cout << "--------------------------" << std::endl; 55 | } 56 | 57 | void* test_add(void* argv) { 58 | CoarseLockList* pl = ((ThreadArgv*) argv)->pl; 59 | int b = ((ThreadArgv*) argv)->b; 60 | int e = ((ThreadArgv*) argv)->e; 61 | assert (b != e); 62 | long dir = 1; 63 | if (b > e) { 64 | dir = -1; 65 | } 66 | for (int i = b; ; i += dir) { 67 | if (dir > 0) { 68 | if (i > e) break; 69 | } 70 | if (dir < 0) { 71 | if (i < e) break; 72 | } 73 | pl->add((long)i); 74 | } 75 | return NULL; 76 | } 77 | 78 | void* test_rm(void* argv) { 79 | CoarseLockList* pl = ((ThreadArgv*) argv)->pl; 80 | int b = ((ThreadArgv*) argv)->b; 81 | int e = ((ThreadArgv*) argv)->e; 82 | assert (b != e); 83 | long dir = 1; 84 | if (b > e) { 85 | dir = -1; 86 | } 87 | for (int i = b; ; i += dir) { 88 | if (dir > 0) { 89 | if (i > e) break; 90 | } 91 | if (dir < 0) { 92 | if (i < e) break; 93 | } 94 | pl->rm((long)i); 95 | } 96 | return NULL; 97 | } 98 | 99 | void* test_contains(void* argv) { 100 | CoarseLockList* pl = ((ThreadArgv*) argv)->pl; 101 | int b = ((ThreadArgv*) argv)->b; 102 | int e = ((ThreadArgv*) argv)->e; 103 | assert (b != e); 104 | long dir = 1; 105 | if (b > e) { 106 | dir = -1; 107 | } 108 | for (int i = b; ; i += dir) { 109 | if (dir > 0) { 110 | if (i > e) break; 111 | } 112 | if (dir < 0) { 113 | if (i < e) break; 114 | } 115 | pl->contains((long)i); 116 | } 117 | return NULL; 118 | } 119 | 120 
| void* rand_test_add(void* argv) { 121 | CoarseLockList* pl = ((ThreadArgv*) argv)->pl; 122 | int b = ((ThreadArgv*) argv)->b; 123 | int e = ((ThreadArgv*) argv)->e; 124 | std::vector v = ((ThreadArgv*) argv)->v; 125 | assert (b != e); 126 | for (int i = 0; i < v.size(); ++ i) { 127 | pl->add(v[i]); 128 | } 129 | return NULL; 130 | } 131 | 132 | void* rand_test_rm(void* argv) { 133 | CoarseLockList* pl = ((ThreadArgv*) argv)->pl; 134 | int b = ((ThreadArgv*) argv)->b; 135 | int e = ((ThreadArgv*) argv)->e; 136 | std::vector v = ((ThreadArgv*) argv)->v; 137 | assert (b != e); 138 | for (int i = 0; i < v.size(); ++ i) { 139 | pl->rm(v[i]); 140 | } 141 | return NULL; 142 | } 143 | 144 | void* rand_test_contains(void* argv) { 145 | CoarseLockList* pl = ((ThreadArgv*) argv)->pl; 146 | int b = ((ThreadArgv*) argv)->b; 147 | int e = ((ThreadArgv*) argv)->e; 148 | std::vector v = ((ThreadArgv*) argv)->v; 149 | assert (b != e); 150 | for (int i = 0; i < v.size(); ++ i) { 151 | pl->contains(v[i]); 152 | } 153 | return NULL; 154 | } 155 | 156 | bool validate_permutations(const std::vector & v) { 157 | return (v.size() == 0) || 158 | (v.size() == 1 && (v[0] == 1 || v[0] == 2)) || 159 | (v.size() == 2 && (v[0] == 1 && v[1] == 2)); 160 | } 161 | 162 | void Test_multi_thread_add() { 163 | CoarseLockList l; 164 | std::vector v; 165 | 166 | pthread_t tid[4]; 167 | ThreadArgv argv1 = ThreadArgv(&l, 1, 10000); 168 | ThreadArgv argv2 = ThreadArgv(&l, 10000, 1); 169 | ThreadArgv argv3 = ThreadArgv(&l, 1000, 8000); 170 | ThreadArgv argv4 = ThreadArgv(&l, 5000, 1); 171 | pthread_create(&tid[0], NULL, test_add, (void*)&argv1); 172 | pthread_create(&tid[1], NULL, test_add, (void*)&argv2); 173 | pthread_create(&tid[2], NULL, test_add, (void*)&argv3); 174 | pthread_create(&tid[3], NULL, test_add, (void*)&argv4); 175 | pthread_join(tid[0], NULL); 176 | pthread_join(tid[1], NULL); 177 | pthread_join(tid[2], NULL); 178 | pthread_join(tid[3], NULL); 179 | 180 | v = l.vectorize(); 181 | assert(v.size() == 10000); 182 | assert(v[0] == 1); 183 | assert(v[9999] == 10000); 184 | } 185 | 186 | void Test_multi_thread_rm() { 187 | CoarseLockList l; 188 | std::vector v; 189 | 190 | for (int i = 0; i < 10000; ++ i) { 191 | l.add((long)(i+1)); 192 | } 193 | 194 | pthread_t tid[4]; 195 | ThreadArgv argv1 = ThreadArgv(&l, 1, 5000); 196 | ThreadArgv argv2 = ThreadArgv(&l, 5000, 1); 197 | ThreadArgv argv3 = ThreadArgv(&l, 2000, 4000); 198 | ThreadArgv argv4 = ThreadArgv(&l, 4500, 100); 199 | pthread_create(&tid[0], NULL, test_rm, (void*)&argv1); 200 | pthread_create(&tid[1], NULL, test_rm, (void*)&argv2); 201 | pthread_create(&tid[2], NULL, test_rm, (void*)&argv3); 202 | pthread_create(&tid[3], NULL, test_rm, (void*)&argv4); 203 | pthread_join(tid[0], NULL); 204 | pthread_join(tid[1], NULL); 205 | pthread_join(tid[2], NULL); 206 | pthread_join(tid[3], NULL); 207 | 208 | v = l.vectorize(); 209 | assert(v.size() == 5000); 210 | assert(v[0] == 5001); 211 | assert(v[4999] == 10000); 212 | } 213 | 214 | void Test_multi_thread_add_and_rm_small() { 215 | CoarseLockList l; 216 | std::vector v; 217 | 218 | pthread_t tid[4]; 219 | ThreadArgv argv1 = ThreadArgv(&l, 1, 2); 220 | ThreadArgv argv2 = ThreadArgv(&l, 2, 1); 221 | pthread_create(&tid[0], NULL, test_add, (void*)&argv1); 222 | pthread_create(&tid[1], NULL, test_rm, (void*)&argv2); 223 | pthread_create(&tid[2], NULL, test_add, (void*)&argv2); 224 | pthread_create(&tid[3], NULL, test_rm, (void*)&argv1); 225 | pthread_join(tid[0], NULL); 226 | pthread_join(tid[1], NULL); 227 | 
pthread_join(tid[2], NULL); 228 | pthread_join(tid[3], NULL); 229 | 230 | v = l.vectorize(); 231 | assert(validate_permutations(v)); 232 | } 233 | 234 | void Test_multi_thread_add_and_rm_big() { 235 | CoarseLockList l; 236 | 237 | for (int i = 1; i <= 10000; ++ i) { 238 | l.add((long) i); 239 | } 240 | 241 | int n_add_thread = 3; 242 | int n_rm_thread = 3; 243 | int n_contains_thread = 2; 244 | int n_thread = n_add_thread + n_rm_thread + n_contains_thread; 245 | 246 | pthread_t* tid = new pthread_t[n_thread]; 247 | ThreadArgv* argv = new ThreadArgv[n_thread]; 248 | argv[0].init(&l, 1, 10000); 249 | argv[1].init(&l, 3000, 1); 250 | argv[2].init(&l, 6000, 3500); 251 | argv[3].init(&l, 2010, 8999); 252 | argv[4].init(&l, 3011, 7917); 253 | argv[5].init(&l, 7138, 1234); 254 | argv[6].init(&l, 10000, 1); 255 | argv[7].init(&l, 9216, 4289); 256 | 257 | for (int i = 0; i < n_thread; ++ i) { 258 | if (i < n_add_thread) { 259 | pthread_create(&tid[i], NULL, test_add, (void*)&argv[i]); 260 | } else if (i >= n_add_thread && i < n_add_thread + n_rm_thread) { 261 | pthread_create(&tid[i], NULL, test_rm, (void*)&argv[i]); 262 | } else { 263 | pthread_create(&tid[i], NULL, test_contains, (void*)&argv[i]); 264 | } 265 | } 266 | 267 | for (int i = 0; i < n_thread; ++ i) { 268 | pthread_join(tid[i], NULL); 269 | } 270 | 271 | delete [] tid; 272 | delete [] argv; 273 | } 274 | 275 | void TEST_CORRECTNESS_MULTI_THREAD() { 276 | Test_multi_thread_add(); 277 | std::cout << "Test multi thread add successfully" << std::endl; 278 | 279 | Test_multi_thread_rm(); 280 | std::cout << "Test multi thread rm successfully" << std::endl; 281 | 282 | Test_multi_thread_add_and_rm_small(); 283 | std::cout << "Test multi thread add & rm small successfully" << std::endl; 284 | Test_multi_thread_add_and_rm_big(); 285 | std::cout << "Test multi thread add & rm big successfully" << std::endl; 286 | 287 | std::cout << "--------------------------" << std::endl; 288 | } 289 | 290 | double time_diff(const timeval & b, const timeval & e) { 291 | return (e.tv_sec - b.tv_sec) + (e.tv_usec - b.tv_usec)*1.0 / 1000000.0; 292 | } 293 | 294 | void Test_performance_add(const int n_thread) { 295 | CoarseLockList l; 296 | std::vector v; 297 | 298 | pthread_t* tid = new pthread_t[n_thread]; 299 | ThreadArgv argv = ThreadArgv(&l, 1, 10000); 300 | 301 | timeval begin; 302 | timeval end; 303 | gettimeofday(&begin, NULL); 304 | 305 | for (int i = 0; i < n_thread; ++ i) { 306 | pthread_create(&tid[i], NULL, test_add, (void*)&argv); 307 | } 308 | for (int i = 0; i < n_thread; ++ i) { 309 | pthread_join(tid[i], NULL); 310 | } 311 | 312 | gettimeofday(&end, NULL); 313 | std::cout << "Test performance: add() with " << n_thread; 314 | std::cout << " threads, consuming " << time_diff(begin, end) << " s" << std::endl; 315 | 316 | delete [] tid; 317 | } 318 | 319 | void Test_performance_rm(const int n_thread) { 320 | CoarseLockList l; 321 | std::vector v; 322 | 323 | for (int i = 1; i <= 10000; ++ i) { 324 | l.add((long) i); 325 | } 326 | 327 | pthread_t* tid = new pthread_t[n_thread]; 328 | ThreadArgv argv = ThreadArgv(&l, 10000, 1); 329 | 330 | timeval begin; 331 | timeval end; 332 | gettimeofday(&begin, NULL); 333 | 334 | for (int i = 0; i < n_thread; ++ i) { 335 | pthread_create(&tid[i], NULL, test_rm, (void*)&argv); 336 | } 337 | for (int i = 0; i < n_thread; ++ i) { 338 | pthread_join(tid[i], NULL); 339 | } 340 | 341 | gettimeofday(&end, NULL); 342 | std::cout << "Test performance: rm() with " << n_thread; 343 | std::cout << " threads, consuming " << 
time_diff(begin, end) << " s" << std::endl; 344 | 345 | delete [] tid; 346 | } 347 | 348 | void Test_performance_contains(const int n_thread) { 349 | CoarseLockList l; 350 | std::vector v; 351 | 352 | pthread_t* tid = new pthread_t[n_thread]; 353 | ThreadArgv argv = ThreadArgv(&l, 1, 10000); 354 | 355 | for (int i = 0; i < 10000; ++ i) { 356 | l.add((long)i); 357 | } 358 | 359 | timeval begin; 360 | timeval end; 361 | gettimeofday(&begin, NULL); 362 | 363 | for (int i = 0; i < n_thread; ++ i) { 364 | pthread_create(&tid[i], NULL, test_contains, (void*)&argv); 365 | } 366 | for (int i = 0; i < n_thread; ++ i) { 367 | pthread_join(tid[i], NULL); 368 | } 369 | 370 | gettimeofday(&end, NULL); 371 | std::cout << "Test performance: contains() with " << n_thread; 372 | std::cout << " threads, consuming " << time_diff(begin, end) << " s" << std::endl; 373 | 374 | delete [] tid; 375 | } 376 | 377 | double Test_performance_multi_op(const int n_add_thread, 378 | const int n_rm_thread, 379 | const int n_contains_thread) { 380 | CoarseLockList l; 381 | std::vector v; 382 | 383 | int n_thread = n_add_thread + n_rm_thread + n_contains_thread; 384 | pthread_t* tid = new pthread_t[n_thread]; 385 | ThreadArgv* argv = new ThreadArgv[n_thread]; 386 | 387 | // each operation sequence contains 2000 op, 388 | // each operation range is [1, 10000] 389 | for (int i = 0; i < n_thread; ++ i) { 390 | std::vector v; 391 | for (int j = 0; j < 2000; ++ j) { 392 | int r = rand() % 10000 + 1; 393 | v.push_back((long) r); 394 | } 395 | argv[i].init(&l, 0, 1); 396 | argv[i].add_rand_seq(v); 397 | } 398 | 399 | for (int i = 0; i < 10000; ++ i) { 400 | l.add((long)i); 401 | } 402 | 403 | timeval begin; 404 | timeval end; 405 | gettimeofday(&begin, NULL); 406 | 407 | for (int i = 0; i < n_thread; ++ i) { 408 | if (i < n_add_thread) { 409 | pthread_create(&tid[i], NULL, rand_test_add, (void*)&argv[i]); 410 | } else if (i >= n_add_thread && i < n_add_thread + n_rm_thread) { 411 | pthread_create(&tid[i], NULL, rand_test_rm, (void*)&argv[i]); 412 | } else { 413 | pthread_create(&tid[i], NULL, rand_test_contains, (void*)&argv[i]); 414 | } 415 | } 416 | for (int i = 0; i < n_thread; ++ i) { 417 | pthread_join(tid[i], NULL); 418 | } 419 | 420 | gettimeofday(&end, NULL); 421 | 422 | delete [] tid; 423 | delete [] argv; 424 | 425 | return time_diff(begin, end); 426 | } 427 | 428 | void Test_performance_hybird(const int n_add_thread, 429 | const int n_rm_thread, 430 | const int n_contains_thread, 431 | const int n_exp) { 432 | double consuming = 0; 433 | for (int i = 0; i < n_exp; ++ i) { 434 | consuming += Test_performance_multi_op(n_add_thread, 435 | n_rm_thread, 436 | n_contains_thread); 437 | } 438 | 439 | int n_thread = n_add_thread + n_rm_thread + n_contains_thread; 440 | std::cout << "Test performance: hybird operation with " << n_thread << " thread, "; 441 | std::cout << " add_threads: " << n_add_thread << ","; 442 | std::cout << " rm_threads: " << n_rm_thread << ","; 443 | std::cout << " contains_threads: " << n_contains_thread << ","; 444 | std::cout << " avge consuming " << consuming / n_exp << " s" << std::endl; 445 | } 446 | 447 | void TEST_PERFORMANCE() { 448 | Test_performance_add(1); 449 | Test_performance_add(5); 450 | Test_performance_add(10); 451 | Test_performance_add(20); 452 | std::cout << "Test add performence successfully" << std::endl; 453 | 454 | Test_performance_rm(1); 455 | Test_performance_rm(5); 456 | Test_performance_rm(10); 457 | Test_performance_rm(20); 458 | std::cout << "Test rm performence 
successfully" << std::endl; 459 | 460 | Test_performance_contains(1); 461 | Test_performance_contains(5); 462 | Test_performance_contains(10); 463 | Test_performance_contains(20); 464 | std::cout << "Test contains performence successfully" << std::endl; 465 | 466 | Test_performance_hybird(2, 1, 2, 10); 467 | Test_performance_hybird(4, 2, 4, 10); 468 | Test_performance_hybird(6, 3, 6, 10); 469 | std::cout << "Test multi op performence successfully" << std::endl; 470 | 471 | std::cout << "--------------------------" << std::endl; 472 | } 473 | 474 | int main() { 475 | 476 | srand((unsigned int)time(NULL)); 477 | TEST_CORRECTNESS_SINGLE_THREAD(); 478 | TEST_CORRECTNESS_MULTI_THREAD(); 479 | TEST_PERFORMANCE(); 480 | 481 | /*CoarseLockList l; 482 | 483 | pthread_t tid[4]; 484 | ThreadArgv argv1 = ThreadArgv(&l, 1, 2); 485 | pthread_create(&tid[0], NULL, test_rm, (void*)&argv1); 486 | pthread_join(tid[0], NULL); 487 | 488 | std::vector v = l.vectorize(); 489 | for (int i = 0; i < v.size(); ++ i) { 490 | std::cout << v[i] << std::endl; 491 | }*/ 492 | 493 | return 0; 494 | } -------------------------------------------------------------------------------- /list/fine_grained_lock_list/Makefile: -------------------------------------------------------------------------------- 1 | x : fine_grained_lock_list_test.cpp fine_grained_lock_list.cpp fine_grained_lock_list.h 2 | g++ -std=c++11 -o ./build/x fine_grained_lock_list_test.cpp fine_grained_lock_list.cpp fine_grained_lock_list.h -lpthread 3 | 4 | clean : 5 | rm ./build/* -------------------------------------------------------------------------------- /list/fine_grained_lock_list/fine_grained_lock_list.cpp: -------------------------------------------------------------------------------- 1 | #include "fine_grained_lock_list.h" 2 | 3 | // 0: pthread_mutex_t 4 | // 1: pthread_spinlock_t 5 | int MUTEX_TYPE = 0; 6 | 7 | //------------------- lock -------------------// 8 | Mutex::Mutex() { 9 | pthread_mutex_init(&this->mu, NULL); 10 | } 11 | Mutex::~Mutex() { 12 | pthread_mutex_destroy(&this->mu); 13 | } 14 | void Mutex::lock() { 15 | assert(pthread_mutex_lock(&this->mu) == 0); 16 | } 17 | void Mutex::unlock() { 18 | assert(pthread_mutex_unlock(&this->mu) == 0); 19 | } 20 | 21 | SpinLock::SpinLock() { 22 | pthread_spin_init(&this->mu, PTHREAD_PROCESS_PRIVATE); 23 | } 24 | SpinLock::~SpinLock() { 25 | pthread_spin_destroy(&this->mu); 26 | } 27 | void SpinLock::lock() { 28 | assert(pthread_spin_lock(&this->mu) == 0); 29 | } 30 | void SpinLock::unlock() { 31 | assert(pthread_spin_unlock(&this->mu) == 0); 32 | } 33 | 34 | //--------------------- Node ---------------------// 35 | Node::Node() {} 36 | Node::Node(long val, Node* next, int mutex_type) { 37 | this->val = val; 38 | this->next = next; 39 | if (mutex_type == 0) { 40 | this->mutex = new Mutex(); 41 | } else if (mutex_type == 1){ 42 | this->mutex = new SpinLock(); 43 | } 44 | } 45 | Node::~Node() { 46 | delete this->mutex; 47 | } 48 | void Node::lock() { 49 | this->mutex->lock(); 50 | } 51 | void Node::unlock() { 52 | this->mutex->unlock(); 53 | } 54 | 55 | //--------------------- list ---------------------// 56 | FineGrainedLockList::FineGrainedLockList() { 57 | head_ = new Node(0, NULL, MUTEX_TYPE); 58 | } 59 | 60 | FineGrainedLockList::~FineGrainedLockList() { 61 | while (head_->next) { 62 | Node* node = head_->next; 63 | head_->next = node->next; 64 | delete node; 65 | } 66 | delete head_; 67 | } 68 | 69 | bool FineGrainedLockList::add(const long val) { 70 | Node* pre = head_; 71 | pre->lock(); 72 
| while (pre->next) { 73 | Node* cur = pre->next; 74 | cur->lock(); 75 | if (val == cur->val) { 76 | cur->unlock(); 77 | pre->unlock(); 78 | return false; 79 | } 80 | if (val < cur->val) { 81 | Node* node = new Node(val, cur, MUTEX_TYPE); 82 | pre->next = node; 83 | cur->unlock(); 84 | pre->unlock(); 85 | return true; 86 | } 87 | pre->unlock(); 88 | pre = cur; 89 | } 90 | 91 | Node* node = new Node(val, NULL, MUTEX_TYPE); 92 | pre->next = node; 93 | pre->unlock(); 94 | return true; 95 | } 96 | 97 | bool FineGrainedLockList::rm(const long val) { 98 | Node* pre = head_; 99 | pre->lock(); 100 | while (pre->next) { 101 | Node* cur = pre->next; 102 | cur->lock(); 103 | if (val == cur->val) { 104 | pre->next = cur->next; 105 | cur->unlock(); 106 | delete cur; 107 | pre->unlock(); 108 | return true; 109 | } 110 | if (val < cur->val) { 111 | cur->unlock(); 112 | pre->unlock(); 113 | return false; 114 | } 115 | pre->unlock(); 116 | pre = cur; 117 | } 118 | pre->unlock(); 119 | return false; 120 | } 121 | 122 | bool FineGrainedLockList::contains(const long val) { 123 | Node* pre = head_; 124 | pre->lock(); 125 | while (pre->next) { 126 | Node* cur = pre->next; 127 | cur->lock(); 128 | pre->unlock(); // it is safe 129 | if (val == cur->val) { 130 | cur->unlock(); 131 | return true; 132 | } 133 | if (val > cur->val) { 134 | pre = cur; 135 | } else { 136 | cur->unlock(); 137 | return false; 138 | } 139 | } 140 | pre->unlock(); 141 | return false; 142 | } 143 | 144 | std::vector FineGrainedLockList::vectorize() { 145 | std::vector v; 146 | Node* p = head_; 147 | while (p->next) { 148 | v.push_back(p->next->val); 149 | p = p->next; 150 | } 151 | return v; 152 | } -------------------------------------------------------------------------------- /list/fine_grained_lock_list/fine_grained_lock_list.h: -------------------------------------------------------------------------------- 1 | #ifndef _FINE_GRAINED_LOCK_LIST_H 2 | #define _FINE_GRAINED_LOCK_LIST_H 3 | 4 | #include 5 | #include 6 | #include 7 | #include 8 | 9 | extern int MUTEX_TYPE; 10 | 11 | class MyMutex { 12 | public: 13 | MyMutex() {}; 14 | virtual ~MyMutex() {}; 15 | 16 | //*********** interface ***********// 17 | virtual void lock() = 0; 18 | virtual void unlock() = 0; 19 | }; 20 | 21 | class Mutex : public MyMutex { 22 | public: 23 | Mutex(); 24 | virtual ~Mutex(); 25 | 26 | virtual void lock(); 27 | virtual void unlock(); 28 | private: 29 | pthread_mutex_t mu; 30 | }; 31 | 32 | class SpinLock : public MyMutex { 33 | public: 34 | SpinLock(); 35 | virtual ~SpinLock(); 36 | 37 | virtual void lock(); 38 | virtual void unlock(); 39 | private: 40 | pthread_spinlock_t mu; 41 | }; 42 | 43 | class Node { 44 | public: 45 | long val; 46 | Node* next; 47 | MyMutex* mutex; 48 | 49 | Node(); 50 | Node(long val, Node* next, int mutex_type); 51 | ~Node(); 52 | 53 | void lock(); 54 | void unlock(); 55 | }; 56 | 57 | class FineGrainedLockList { 58 | public: 59 | FineGrainedLockList(); 60 | ~FineGrainedLockList(); 61 | 62 | /**************** Interface ****************/ 63 | // Thread safe 64 | bool add(const long val); 65 | bool rm(const long val); 66 | bool contains(const long val); 67 | 68 | /**************** Test ****************/ 69 | // Not thread safe 70 | std::vector vectorize(); 71 | 72 | private: 73 | // _head is an empty node, 74 | // _head->val is invalid 75 | Node* head_; 76 | }; 77 | 78 | #endif -------------------------------------------------------------------------------- /list/fine_grained_lock_list/fine_grained_lock_list_test.cpp: 
-------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include 4 | #include 5 | #include 6 | #include 7 | 8 | #include "fine_grained_lock_list.h" 9 | 10 | struct ThreadArgv { 11 | FineGrainedLockList* pl; 12 | int b, e; 13 | std::vector v; 14 | ThreadArgv() {} 15 | ThreadArgv(FineGrainedLockList* pl, int b, int e) { 16 | this->pl = pl; 17 | this->b = b; 18 | this->e = e; 19 | } 20 | void init(FineGrainedLockList* pl, int b, int e) { 21 | this->pl = pl; 22 | this->b = b; 23 | this->e = e; 24 | } 25 | void add_rand_seq(const std::vector & v) { 26 | this->v = v; 27 | } 28 | }; 29 | 30 | void TEST_CORRECTNESS_SINGLE_THREAD() { 31 | FineGrainedLockList l; 32 | std::vector v; 33 | 34 | l.add(1), l.add(3), l.add(2), l.add(3); 35 | v = l.vectorize(); 36 | assert (v.size() == 3); 37 | assert (v[0] == 1); 38 | assert (v[1] == 2); 39 | assert (v[2] == 3); 40 | 41 | l.rm(1), l.rm(2), l.rm(2); 42 | v = l.vectorize(); 43 | assert (v.size() == 1); 44 | assert (v[0] == 3); 45 | 46 | bool res1 = l.contains(3); 47 | bool res2 = l.contains(1); 48 | assert(res1 == true); 49 | assert(res2 == false); 50 | 51 | std::cout << "Test single thread correctness successfully" << std::endl; 52 | 53 | std::cout << "--------------------------" << std::endl; 54 | } 55 | 56 | void* test_add(void* argv) { 57 | FineGrainedLockList* pl = ((ThreadArgv*) argv)->pl; 58 | int b = ((ThreadArgv*) argv)->b; 59 | int e = ((ThreadArgv*) argv)->e; 60 | assert (b != e); 61 | long dir = 1; 62 | if (b > e) { 63 | dir = -1; 64 | } 65 | for (int i = b; ; i += dir) { 66 | if (dir > 0) { 67 | if (i > e) break; 68 | } 69 | if (dir < 0) { 70 | if (i < e) break; 71 | } 72 | pl->add((long)i); 73 | } 74 | return NULL; 75 | } 76 | 77 | void* test_rm(void* argv) { 78 | FineGrainedLockList* pl = ((ThreadArgv*) argv)->pl; 79 | int b = ((ThreadArgv*) argv)->b; 80 | int e = ((ThreadArgv*) argv)->e; 81 | assert (b != e); 82 | long dir = 1; 83 | if (b > e) { 84 | dir = -1; 85 | } 86 | for (int i = b; ; i += dir) { 87 | if (dir > 0) { 88 | if (i > e) break; 89 | } 90 | if (dir < 0) { 91 | if (i < e) break; 92 | } 93 | pl->rm((long)i); 94 | } 95 | return NULL; 96 | } 97 | 98 | void* test_contains(void* argv) { 99 | FineGrainedLockList* pl = ((ThreadArgv*) argv)->pl; 100 | int b = ((ThreadArgv*) argv)->b; 101 | int e = ((ThreadArgv*) argv)->e; 102 | assert (b != e); 103 | long dir = 1; 104 | if (b > e) { 105 | dir = -1; 106 | } 107 | for (int i = b; ; i += dir) { 108 | if (dir > 0) { 109 | if (i > e) break; 110 | } 111 | if (dir < 0) { 112 | if (i < e) break; 113 | } 114 | pl->contains((long)i); 115 | } 116 | return NULL; 117 | } 118 | 119 | void* rand_test_add(void* argv) { 120 | FineGrainedLockList* pl = ((ThreadArgv*) argv)->pl; 121 | int b = ((ThreadArgv*) argv)->b; 122 | int e = ((ThreadArgv*) argv)->e; 123 | std::vector v = ((ThreadArgv*) argv)->v; 124 | assert (b != e); 125 | for (int i = 0; i < v.size(); ++ i) { 126 | pl->add(v[i]); 127 | } 128 | return NULL; 129 | } 130 | 131 | void* rand_test_rm(void* argv) { 132 | FineGrainedLockList* pl = ((ThreadArgv*) argv)->pl; 133 | int b = ((ThreadArgv*) argv)->b; 134 | int e = ((ThreadArgv*) argv)->e; 135 | std::vector v = ((ThreadArgv*) argv)->v; 136 | assert (b != e); 137 | for (int i = 0; i < v.size(); ++ i) { 138 | pl->rm(v[i]); 139 | } 140 | return NULL; 141 | } 142 | 143 | void* rand_test_contains(void* argv) { 144 | FineGrainedLockList* pl = ((ThreadArgv*) argv)->pl; 145 | int b = ((ThreadArgv*) argv)->b; 146 | int e = ((ThreadArgv*) argv)->e; 
147 | std::vector v = ((ThreadArgv*) argv)->v; 148 | assert (b != e); 149 | for (int i = 0; i < v.size(); ++ i) { 150 | pl->contains(v[i]); 151 | } 152 | return NULL; 153 | } 154 | 155 | bool validate_permutations(const std::vector & v) { 156 | return (v.size() == 0) || 157 | (v.size() == 1 && (v[0] == 1 || v[0] == 2)) || 158 | (v.size() == 2 && (v[0] == 1 && v[1] == 2)); 159 | } 160 | 161 | void Test_multi_thread_add() { 162 | FineGrainedLockList l; 163 | std::vector v; 164 | 165 | pthread_t tid[4]; 166 | ThreadArgv argv1 = ThreadArgv(&l, 1, 10000); 167 | ThreadArgv argv2 = ThreadArgv(&l, 10000, 1); 168 | ThreadArgv argv3 = ThreadArgv(&l, 1000, 8000); 169 | ThreadArgv argv4 = ThreadArgv(&l, 5000, 1); 170 | pthread_create(&tid[0], NULL, test_add, (void*)&argv1); 171 | pthread_create(&tid[1], NULL, test_add, (void*)&argv2); 172 | pthread_create(&tid[2], NULL, test_add, (void*)&argv3); 173 | pthread_create(&tid[3], NULL, test_add, (void*)&argv4); 174 | pthread_join(tid[0], NULL); 175 | pthread_join(tid[1], NULL); 176 | pthread_join(tid[2], NULL); 177 | pthread_join(tid[3], NULL); 178 | 179 | v = l.vectorize(); 180 | assert(v.size() == 10000); 181 | assert(v[0] == 1); 182 | assert(v[9999] == 10000); 183 | } 184 | 185 | void Test_multi_thread_rm() { 186 | FineGrainedLockList l; 187 | std::vector v; 188 | 189 | for (int i = 0; i < 10000; ++ i) { 190 | l.add((long)(i+1)); 191 | } 192 | 193 | pthread_t tid[4]; 194 | ThreadArgv argv1 = ThreadArgv(&l, 1, 5000); 195 | ThreadArgv argv2 = ThreadArgv(&l, 5000, 1); 196 | ThreadArgv argv3 = ThreadArgv(&l, 2000, 4000); 197 | ThreadArgv argv4 = ThreadArgv(&l, 4500, 100); 198 | pthread_create(&tid[0], NULL, test_rm, (void*)&argv1); 199 | pthread_create(&tid[1], NULL, test_rm, (void*)&argv2); 200 | pthread_create(&tid[2], NULL, test_rm, (void*)&argv3); 201 | pthread_create(&tid[3], NULL, test_rm, (void*)&argv4); 202 | pthread_join(tid[0], NULL); 203 | pthread_join(tid[1], NULL); 204 | pthread_join(tid[2], NULL); 205 | pthread_join(tid[3], NULL); 206 | 207 | v = l.vectorize(); 208 | assert(v.size() == 5000); 209 | assert(v[0] == 5001); 210 | assert(v[4999] == 10000); 211 | } 212 | 213 | void Test_multi_thread_add_and_rm_small() { 214 | FineGrainedLockList l; 215 | std::vector v; 216 | 217 | pthread_t tid[4]; 218 | ThreadArgv argv1 = ThreadArgv(&l, 1, 2); 219 | ThreadArgv argv2 = ThreadArgv(&l, 2, 1); 220 | pthread_create(&tid[0], NULL, test_add, (void*)&argv1); 221 | pthread_create(&tid[1], NULL, test_rm, (void*)&argv2); 222 | pthread_create(&tid[2], NULL, test_add, (void*)&argv2); 223 | pthread_create(&tid[3], NULL, test_rm, (void*)&argv1); 224 | pthread_join(tid[0], NULL); 225 | pthread_join(tid[1], NULL); 226 | pthread_join(tid[2], NULL); 227 | pthread_join(tid[3], NULL); 228 | 229 | v = l.vectorize(); 230 | assert(validate_permutations(v)); 231 | } 232 | 233 | void Test_multi_thread_add_and_rm_big() { 234 | FineGrainedLockList l; 235 | 236 | for (int i = 1; i <= 10000; ++ i) { 237 | l.add((long) i); 238 | } 239 | 240 | int n_add_thread = 3; 241 | int n_rm_thread = 3; 242 | int n_contains_thread = 2; 243 | int n_thread = n_add_thread + n_rm_thread + n_contains_thread; 244 | 245 | pthread_t* tid = new pthread_t[n_thread]; 246 | ThreadArgv* argv = new ThreadArgv[n_thread]; 247 | argv[0].init(&l, 1, 10000); 248 | argv[1].init(&l, 3000, 1); 249 | argv[2].init(&l, 6000, 3500); 250 | argv[3].init(&l, 2010, 8999); 251 | argv[4].init(&l, 3011, 7917); 252 | argv[5].init(&l, 7138, 1234); 253 | argv[6].init(&l, 10000, 1); 254 | argv[7].init(&l, 9216, 4289); 255 | 256 
| for (int i = 0; i < n_thread; ++ i) { 257 | if (i < n_add_thread) { 258 | pthread_create(&tid[i], NULL, test_add, (void*)&argv[i]); 259 | } else if (i >= n_add_thread && i < n_add_thread + n_rm_thread) { 260 | pthread_create(&tid[i], NULL, test_rm, (void*)&argv[i]); 261 | } else { 262 | pthread_create(&tid[i], NULL, test_contains, (void*)&argv[i]); 263 | } 264 | } 265 | 266 | for (int i = 0; i < n_thread; ++ i) { 267 | pthread_join(tid[i], NULL); 268 | } 269 | 270 | delete [] tid; 271 | delete [] argv; 272 | } 273 | 274 | void TEST_CORRECTNESS_MULTI_THREAD() { 275 | Test_multi_thread_add(); 276 | std::cout << "Test multi thread add successfully" << std::endl; 277 | 278 | Test_multi_thread_rm(); 279 | std::cout << "Test multi thread rm successfully" << std::endl; 280 | 281 | Test_multi_thread_add_and_rm_small(); 282 | std::cout << "Test multi thread add & rm small successfully" << std::endl; 283 | Test_multi_thread_add_and_rm_big(); 284 | std::cout << "Test multi thread add & rm big successfully" << std::endl; 285 | 286 | std::cout << "--------------------------" << std::endl; 287 | } 288 | 289 | double time_diff(const timeval & b, const timeval & e) { 290 | return (e.tv_sec - b.tv_sec) + (e.tv_usec - b.tv_usec)*1.0 / 1000000.0; 291 | } 292 | 293 | void Test_performance_add(const int n_thread) { 294 | FineGrainedLockList l; 295 | std::vector v; 296 | 297 | pthread_t* tid = new pthread_t[n_thread]; 298 | ThreadArgv argv = ThreadArgv(&l, 1, 10000); 299 | 300 | timeval begin; 301 | timeval end; 302 | gettimeofday(&begin, NULL); 303 | 304 | for (int i = 0; i < n_thread; ++ i) { 305 | pthread_create(&tid[i], NULL, test_add, (void*)&argv); 306 | } 307 | for (int i = 0; i < n_thread; ++ i) { 308 | pthread_join(tid[i], NULL); 309 | } 310 | 311 | gettimeofday(&end, NULL); 312 | std::cout << "Test performance: add() with " << n_thread; 313 | std::cout << " threads, consuming " << time_diff(begin, end) << " s" << std::endl; 314 | 315 | delete [] tid; 316 | } 317 | 318 | void Test_performance_rm(const int n_thread) { 319 | FineGrainedLockList l; 320 | std::vector v; 321 | 322 | for (int i = 1; i <= 10000; ++ i) { 323 | l.add((long) i); 324 | } 325 | 326 | pthread_t* tid = new pthread_t[n_thread]; 327 | ThreadArgv argv = ThreadArgv(&l, 10000, 1); 328 | 329 | timeval begin; 330 | timeval end; 331 | gettimeofday(&begin, NULL); 332 | 333 | for (int i = 0; i < n_thread; ++ i) { 334 | pthread_create(&tid[i], NULL, test_rm, (void*)&argv); 335 | } 336 | for (int i = 0; i < n_thread; ++ i) { 337 | pthread_join(tid[i], NULL); 338 | } 339 | 340 | gettimeofday(&end, NULL); 341 | std::cout << "Test performance: rm() with " << n_thread; 342 | std::cout << " threads, consuming " << time_diff(begin, end) << " s" << std::endl; 343 | 344 | delete [] tid; 345 | } 346 | 347 | void Test_performance_contains(const int n_thread) { 348 | FineGrainedLockList l; 349 | std::vector v; 350 | 351 | pthread_t* tid = new pthread_t[n_thread]; 352 | ThreadArgv argv = ThreadArgv(&l, 1, 10000); 353 | 354 | for (int i = 0; i < 10000; ++ i) { 355 | l.add((long)i); 356 | } 357 | 358 | timeval begin; 359 | timeval end; 360 | gettimeofday(&begin, NULL); 361 | 362 | for (int i = 0; i < n_thread; ++ i) { 363 | pthread_create(&tid[i], NULL, test_contains, (void*)&argv); 364 | } 365 | for (int i = 0; i < n_thread; ++ i) { 366 | pthread_join(tid[i], NULL); 367 | } 368 | 369 | gettimeofday(&end, NULL); 370 | std::cout << "Test performance: contains() with " << n_thread; 371 | std::cout << " threads, consuming " << time_diff(begin, end) << " s" << 
std::endl; 372 | 373 | delete [] tid; 374 | } 375 | 376 | double Test_performance_multi_op(const int n_add_thread, 377 | const int n_rm_thread, 378 | const int n_contains_thread) { 379 | FineGrainedLockList l; 380 | std::vector v; 381 | 382 | int n_thread = n_add_thread + n_rm_thread + n_contains_thread; 383 | pthread_t* tid = new pthread_t[n_thread]; 384 | ThreadArgv* argv = new ThreadArgv[n_thread]; 385 | 386 | // each operation sequence contains 2000 op, 387 | // each operation range is [1, 10000] 388 | for (int i = 0; i < n_thread; ++ i) { 389 | std::vector v; 390 | for (int j = 0; j < 2000; ++ j) { 391 | int r = rand() % 10000 + 1; 392 | v.push_back((long) r); 393 | } 394 | argv[i].init(&l, 0, 1); 395 | argv[i].add_rand_seq(v); 396 | } 397 | 398 | for (int i = 0; i < 10000; ++ i) { 399 | l.add((long)i); 400 | } 401 | 402 | timeval begin; 403 | timeval end; 404 | gettimeofday(&begin, NULL); 405 | 406 | for (int i = 0; i < n_thread; ++ i) { 407 | if (i < n_add_thread) { 408 | pthread_create(&tid[i], NULL, rand_test_add, (void*)&argv[i]); 409 | } else if (i >= n_add_thread && i < n_add_thread + n_rm_thread) { 410 | pthread_create(&tid[i], NULL, rand_test_rm, (void*)&argv[i]); 411 | } else { 412 | pthread_create(&tid[i], NULL, rand_test_contains, (void*)&argv[i]); 413 | } 414 | } 415 | for (int i = 0; i < n_thread; ++ i) { 416 | pthread_join(tid[i], NULL); 417 | } 418 | 419 | gettimeofday(&end, NULL); 420 | 421 | delete [] tid; 422 | delete [] argv; 423 | 424 | return time_diff(begin, end); 425 | } 426 | 427 | void Test_performance_hybird(const int n_add_thread, 428 | const int n_rm_thread, 429 | const int n_contains_thread, 430 | const int n_exp) { 431 | double consuming = 0; 432 | for (int i = 0; i < n_exp; ++ i) { 433 | consuming += Test_performance_multi_op(n_add_thread, 434 | n_rm_thread, 435 | n_contains_thread); 436 | } 437 | 438 | int n_thread = n_add_thread + n_rm_thread + n_contains_thread; 439 | std::cout << "Test performance: hybird operation with " << n_thread << " thread, "; 440 | std::cout << " add_threads: " << n_add_thread << ","; 441 | std::cout << " rm_threads: " << n_rm_thread << ","; 442 | std::cout << " contains_threads: " << n_contains_thread << ","; 443 | std::cout << " avge consuming " << consuming / n_exp << " s" << std::endl; 444 | } 445 | 446 | void TEST_PERFORMANCE() { 447 | Test_performance_add(1); 448 | Test_performance_add(5); 449 | Test_performance_add(10); 450 | Test_performance_add(20); 451 | std::cout << "Test add performence successfully" << std::endl; 452 | 453 | Test_performance_rm(1); 454 | Test_performance_rm(5); 455 | Test_performance_rm(10); 456 | Test_performance_rm(20); 457 | std::cout << "Test rm performence successfully" << std::endl; 458 | 459 | Test_performance_contains(1); 460 | Test_performance_contains(5); 461 | Test_performance_contains(10); 462 | Test_performance_contains(20); 463 | std::cout << "Test contains performence successfully" << std::endl; 464 | 465 | Test_performance_hybird(2, 1, 2, 10); 466 | Test_performance_hybird(4, 2, 4, 10); 467 | Test_performance_hybird(6, 3, 6, 10); 468 | std::cout << "Test multi op performence successfully" << std::endl; 469 | 470 | std::cout << "--------------------------" << std::endl; 471 | } 472 | 473 | int main() { 474 | 475 | srand((unsigned int)time(NULL)); 476 | TEST_CORRECTNESS_SINGLE_THREAD(); 477 | TEST_CORRECTNESS_MULTI_THREAD(); 478 | TEST_PERFORMANCE(); 479 | 480 | /*FineGrainedLockList l; 481 | 482 | pthread_t tid[4]; 483 | ThreadArgv argv1 = ThreadArgv(&l, 1, 2); 484 | 
pthread_create(&tid[0], NULL, test_rm, (void*)&argv1); 485 | pthread_join(tid[0], NULL); 486 | 487 | std::vector v = l.vectorize(); 488 | for (int i = 0; i < v.size(); ++ i) { 489 | std::cout << v[i] << std::endl; 490 | }*/ 491 | 492 | return 0; 493 | } -------------------------------------------------------------------------------- /list/lock_free_list/Makefile: -------------------------------------------------------------------------------- 1 | x : lock_free_list_test.cpp lock_free_list.cpp lock_free_list.h 2 | g++ -std=c++11 -g -o ./build/x lock_free_list_test.cpp lock_free_list.cpp lock_free_list.h -lpthread 3 | 4 | clean : 5 | rm ./build/* -------------------------------------------------------------------------------- /list/lock_free_list/lock_free_list.cpp: -------------------------------------------------------------------------------- 1 | #include "lock_free_list.h" 2 | 3 | #include 4 | 5 | Node::Node(long val, Node* next) { 6 | this->val = val; 7 | this->next_ = next; 8 | } 9 | 10 | bool Node::is_mark() { 11 | return (unsigned long)next_.load() & (unsigned long)0x1; 12 | } 13 | 14 | void Node::mark() { 15 | next_ = (Node*) ((unsigned long)next_.load() | (unsigned long)0x1); 16 | } 17 | 18 | Node* Node::get_next() { 19 | return (Node*) ((unsigned long)next_.load() & ~(unsigned long)0x1); 20 | } 21 | 22 | LockFreeList::LockFreeList() { 23 | head_ = new Node(-1, NULL); 24 | } 25 | 26 | LockFreeList::~LockFreeList() { 27 | while (head_->get_next() != NULL) { 28 | Node* node = head_->get_next(); 29 | head_->next_ = node->get_next(); 30 | delete node; 31 | } 32 | delete head_; 33 | } 34 | 35 | bool LockFreeList::add(const long val) { 36 | // insert node between pre and cur->val 37 | Node* pre = head_; 38 | Node* cur = head_->next_; 39 | 40 | while (cur != NULL) { 41 | if (cur->is_mark()) { 42 | Node* next = cur->get_next(); 43 | if (!std::atomic_compare_exchange_strong(&pre->next_, &cur, next)) { 44 | if (pre->is_mark()) { 45 | return false; 46 | } 47 | } 48 | cur = pre->get_next(); 49 | continue; 50 | } 51 | if (val <= cur->val) { 52 | break; 53 | } 54 | pre = cur; 55 | cur = pre->get_next(); 56 | } 57 | 58 | if (cur == NULL) { 59 | Node* node = new Node(val, NULL); 60 | if (!std::atomic_compare_exchange_strong(&pre->next_, &cur, node)) { 61 | delete node; 62 | return false; 63 | } 64 | return true; 65 | } 66 | 67 | if (val == cur->val) { 68 | return false; 69 | } else { // val < cur->val 70 | Node* node = new Node(val, cur); 71 | if (!std::atomic_compare_exchange_strong(&pre->next_, &cur, node)) { 72 | delete node; 73 | return false; 74 | } 75 | return true; 76 | } 77 | } 78 | 79 | bool LockFreeList::rm(const long val) { 80 | // delete node with val == cur->val 81 | Node* pre = head_; 82 | Node* cur = head_->next_; 83 | 84 | while (cur != NULL) { 85 | if (cur->is_mark()) { 86 | Node* next = cur->get_next(); 87 | if (!std::atomic_compare_exchange_strong(&pre->next_, &cur, next)) { 88 | // pre node has been delete logically 89 | if (pre->is_mark()) { 90 | return false; 91 | } 92 | // cur node has been delete physically 93 | // need do nothing 94 | } 95 | cur = pre->get_next(); 96 | continue; 97 | } 98 | if (val <= cur->val) { 99 | break; 100 | } 101 | pre = cur; 102 | cur = pre->get_next(); 103 | } 104 | 105 | if (cur == NULL) { 106 | return false; 107 | } 108 | 109 | if (val == cur->val) { 110 | cur->mark(); 111 | Node* next = cur->get_next(); 112 | if (!std::atomic_compare_exchange_strong(&pre->next_, &cur, next)) { 113 | //std::cout << "marked node has been deleted by other thread" 
<< std::endl; 114 | } 115 | return true; 116 | } else { // val < cur->val 117 | return false; 118 | } 119 | } 120 | 121 | bool LockFreeList::contains(const long val) { 122 | Node* cur = head_->next_; 123 | 124 | while (cur != NULL) { 125 | if (cur->is_mark()) { 126 | cur = cur->get_next(); 127 | continue; 128 | } 129 | if (val <= cur->val) { 130 | break; 131 | } 132 | cur = cur->get_next(); 133 | } 134 | 135 | if (cur == NULL) { 136 | return false; 137 | } 138 | 139 | if (val == cur->val) { 140 | return true; 141 | } else { 142 | return false; 143 | } 144 | } 145 | 146 | std::vector LockFreeList::vectorize() { 147 | std::vector v; 148 | 149 | Node* cur = head_->next_; 150 | while (cur) { 151 | if (cur->is_mark()) { 152 | cur = cur->get_next(); 153 | continue; 154 | } 155 | v.push_back(cur->val); 156 | cur = cur->get_next(); 157 | } 158 | return v; 159 | } -------------------------------------------------------------------------------- /list/lock_free_list/lock_free_list.h: -------------------------------------------------------------------------------- 1 | #ifndef _LOCK_FREE_LIST_H 2 | #define _LOCK_FREE_LIST_H 3 | 4 | #include 5 | #include 6 | #include 7 | 8 | class Node { 9 | public: 10 | Node() {} 11 | Node(long val, Node* next); 12 | ~Node() {} 13 | 14 | bool is_mark(); 15 | void mark(); 16 | Node* get_next(); 17 | 18 | long val; 19 | std::atomic next_; 20 | }; 21 | 22 | /* 23 | * In concurrent mode, multithreads call rm() function, 24 | * some threads will just do delete logically, **so 25 | * after all threads finish, there maybe some logical 26 | * deleted nodes in the list.** For example: 27 | * Head->A->B->C->D->NULL 28 | * timeline 1: thread[0] marks B as deleted 29 | * timeline 2: thread[1] marks A as deleted 30 | * timeline 3: thread[0] finds A has been deleted, 31 | * it returns without delete B physically 32 | * timeline 4: thread[1] delete A physically 33 | * After thread[0] and thread[1] finished, the list is: 34 | * Head->B(logical deleted)->C->D->NULL 35 | * So in any time, the list may contains logical deleted 36 | * nodes. 
37 | */ 38 | class LockFreeList { 39 | public: 40 | LockFreeList(); 41 | ~LockFreeList(); 42 | 43 | /**************** Interface ****************/ 44 | // Thread safe 45 | bool add(const long val); 46 | bool rm(const long val); 47 | bool contains(const long val); 48 | 49 | /**************** Test ****************/ 50 | // Not thread safe 51 | std::vector vectorize(); 52 | 53 | private: 54 | Node* head_; 55 | }; 56 | 57 | #endif -------------------------------------------------------------------------------- /list/lock_free_list/lock_free_list_test.cpp: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include 4 | #include 5 | #include 6 | #include 7 | #include 8 | 9 | #include "lock_free_list.h" 10 | 11 | // Test Node class 12 | //int main() { 13 | // 14 | // Node x(100, NULL); 15 | // std::cout << "is mark: " << x.is_mark() << std::endl; 16 | // 17 | // x.mark(); 18 | // std::cout << "after mark: " << x.is_mark() << std::endl; 19 | // std::cout << "get next: " << x.get_next() << std::endl; 20 | // 21 | // return 0; 22 | //} 23 | 24 | struct ThreadArgv { 25 | LockFreeList* pl; 26 | int b, e; 27 | std::vector v; 28 | ThreadArgv() {} 29 | ThreadArgv(LockFreeList* pl, int b, int e) { 30 | this->pl = pl; 31 | this->b = b; 32 | this->e = e; 33 | } 34 | void init(LockFreeList* pl, int b, int e) { 35 | this->pl = pl; 36 | this->b = b; 37 | this->e = e; 38 | } 39 | void add_rand_seq(const std::vector & v) { 40 | this->v = v; 41 | } 42 | }; 43 | 44 | void TEST_CORRECTNESS_SINGLE_THREAD() { 45 | LockFreeList l; 46 | std::vector v; 47 | 48 | l.add(1), l.add(3), l.add(2), l.add(3); 49 | v = l.vectorize(); 50 | assert (v.size() == 3); 51 | assert (v[0] == 1); 52 | assert (v[1] == 2); 53 | assert (v[2] == 3); 54 | 55 | l.rm(1), l.rm(2), l.rm(2); 56 | v = l.vectorize(); 57 | assert (v.size() == 1); 58 | assert (v[0] == 3); 59 | 60 | bool res1 = l.contains(3); 61 | bool res2 = l.contains(1); 62 | assert(res1 == true); 63 | assert(res2 == false); 64 | 65 | std::cout << "Test single thread correctness successfully" << std::endl; 66 | 67 | std::cout << "--------------------------" << std::endl; 68 | } 69 | 70 | void* test_add(void* argv) { 71 | LockFreeList* pl = ((ThreadArgv*) argv)->pl; 72 | int b = ((ThreadArgv*) argv)->b; 73 | int e = ((ThreadArgv*) argv)->e; 74 | assert (b != e); 75 | long dir = 1; 76 | if (b > e) { 77 | dir = -1; 78 | } 79 | for (int i = b; ; i += dir) { 80 | if (dir > 0) { 81 | if (i > e) break; 82 | } 83 | if (dir < 0) { 84 | if (i < e) break; 85 | } 86 | pl->add((long)i); 87 | } 88 | return NULL; 89 | } 90 | 91 | void* test_rm(void* argv) { 92 | LockFreeList* pl = ((ThreadArgv*) argv)->pl; 93 | int b = ((ThreadArgv*) argv)->b; 94 | int e = ((ThreadArgv*) argv)->e; 95 | assert (b != e); 96 | long dir = 1; 97 | if (b > e) { 98 | dir = -1; 99 | } 100 | for (int i = b; ; i += dir) { 101 | if (dir > 0) { 102 | if (i > e) break; 103 | } 104 | if (dir < 0) { 105 | if (i < e) break; 106 | } 107 | pl->rm((long)i); 108 | } 109 | return NULL; 110 | } 111 | 112 | void* test_contains(void* argv) { 113 | LockFreeList* pl = ((ThreadArgv*) argv)->pl; 114 | int b = ((ThreadArgv*) argv)->b; 115 | int e = ((ThreadArgv*) argv)->e; 116 | assert (b != e); 117 | long dir = 1; 118 | if (b > e) { 119 | dir = -1; 120 | } 121 | for (int i = b; ; i += dir) { 122 | if (dir > 0) { 123 | if (i > e) break; 124 | } 125 | if (dir < 0) { 126 | if (i < e) break; 127 | } 128 | pl->contains((long)i); 129 | } 130 | return NULL; 131 | } 132 | 133 | void* 
rand_test_add(void* argv) { 134 | LockFreeList* pl = ((ThreadArgv*) argv)->pl; 135 | int b = ((ThreadArgv*) argv)->b; 136 | int e = ((ThreadArgv*) argv)->e; 137 | std::vector v = ((ThreadArgv*) argv)->v; 138 | assert (b != e); 139 | for (int i = 0; i < v.size(); ++ i) { 140 | pl->add(v[i]); 141 | } 142 | return NULL; 143 | } 144 | 145 | void* rand_test_rm(void* argv) { 146 | LockFreeList* pl = ((ThreadArgv*) argv)->pl; 147 | int b = ((ThreadArgv*) argv)->b; 148 | int e = ((ThreadArgv*) argv)->e; 149 | std::vector v = ((ThreadArgv*) argv)->v; 150 | assert (b != e); 151 | for (int i = 0; i < v.size(); ++ i) { 152 | pl->rm(v[i]); 153 | } 154 | return NULL; 155 | } 156 | 157 | void* rand_test_contains(void* argv) { 158 | LockFreeList* pl = ((ThreadArgv*) argv)->pl; 159 | int b = ((ThreadArgv*) argv)->b; 160 | int e = ((ThreadArgv*) argv)->e; 161 | std::vector v = ((ThreadArgv*) argv)->v; 162 | assert (b != e); 163 | for (int i = 0; i < v.size(); ++ i) { 164 | pl->contains(v[i]); 165 | } 166 | return NULL; 167 | } 168 | 169 | bool validate_permutations(const std::vector & v) { 170 | return (v.size() == 0) || 171 | (v.size() == 1 && (v[0] == 1 || v[0] == 2)) || 172 | (v.size() == 2 && (v[0] == 1 && v[1] == 2)); 173 | } 174 | 175 | void Test_multi_thread_add() { 176 | LockFreeList l; 177 | std::vector v; 178 | 179 | pthread_t tid[4]; 180 | ThreadArgv argv1 = ThreadArgv(&l, 1, 10000); 181 | ThreadArgv argv2 = ThreadArgv(&l, 10000, 1); 182 | ThreadArgv argv3 = ThreadArgv(&l, 1000, 8000); 183 | ThreadArgv argv4 = ThreadArgv(&l, 5000, 1); 184 | pthread_create(&tid[0], NULL, test_add, (void*)&argv1); 185 | pthread_create(&tid[1], NULL, test_add, (void*)&argv2); 186 | pthread_create(&tid[2], NULL, test_add, (void*)&argv3); 187 | pthread_create(&tid[3], NULL, test_add, (void*)&argv4); 188 | pthread_join(tid[0], NULL); 189 | pthread_join(tid[1], NULL); 190 | pthread_join(tid[2], NULL); 191 | pthread_join(tid[3], NULL); 192 | 193 | v = l.vectorize(); 194 | assert(v.size() == 10000); 195 | assert(v[0] == 1); 196 | assert(v[9999] == 10000); 197 | } 198 | 199 | void Test_multi_thread_rm() { 200 | LockFreeList l; 201 | std::vector v; 202 | 203 | for (int i = 0; i < 10000; ++ i) { 204 | l.add((long)(i+1)); 205 | } 206 | 207 | pthread_t tid[4]; 208 | ThreadArgv argv1 = ThreadArgv(&l, 1, 5000); 209 | ThreadArgv argv2 = ThreadArgv(&l, 5000, 1); 210 | ThreadArgv argv3 = ThreadArgv(&l, 2000, 4000); 211 | ThreadArgv argv4 = ThreadArgv(&l, 4500, 100); 212 | pthread_create(&tid[0], NULL, test_rm, (void*)&argv1); 213 | pthread_create(&tid[1], NULL, test_rm, (void*)&argv2); 214 | pthread_create(&tid[2], NULL, test_rm, (void*)&argv3); 215 | pthread_create(&tid[3], NULL, test_rm, (void*)&argv4); 216 | pthread_join(tid[0], NULL); 217 | pthread_join(tid[1], NULL); 218 | pthread_join(tid[2], NULL); 219 | pthread_join(tid[3], NULL); 220 | 221 | v = l.vectorize(); 222 | assert(v.size() == 5000); 223 | assert(v[0] == 5001); 224 | assert(v[4999] == 10000); 225 | } 226 | 227 | void Test_multi_thread_add_and_rm_small() { 228 | LockFreeList l; 229 | std::vector v; 230 | 231 | pthread_t tid[4]; 232 | ThreadArgv argv1 = ThreadArgv(&l, 1, 2); 233 | ThreadArgv argv2 = ThreadArgv(&l, 2, 1); 234 | pthread_create(&tid[0], NULL, test_add, (void*)&argv1); 235 | pthread_create(&tid[1], NULL, test_rm, (void*)&argv2); 236 | pthread_create(&tid[2], NULL, test_add, (void*)&argv2); 237 | pthread_create(&tid[3], NULL, test_rm, (void*)&argv1); 238 | pthread_join(tid[0], NULL); 239 | pthread_join(tid[1], NULL); 240 | pthread_join(tid[2], NULL); 241 
| pthread_join(tid[3], NULL); 242 | 243 | v = l.vectorize(); 244 | assert(validate_permutations(v)); 245 | } 246 | 247 | void Test_multi_thread_add_and_rm_big() { 248 | LockFreeList l; 249 | 250 | for (int i = 1; i <= 10000; ++ i) { 251 | l.add((long) i); 252 | } 253 | 254 | int n_add_thread = 3; 255 | int n_rm_thread = 3; 256 | int n_contains_thread = 2; 257 | int n_thread = n_add_thread + n_rm_thread + n_contains_thread; 258 | 259 | pthread_t* tid = new pthread_t[n_thread]; 260 | ThreadArgv* argv = new ThreadArgv[n_thread]; 261 | argv[0].init(&l, 1, 10000); 262 | argv[1].init(&l, 3000, 1); 263 | argv[2].init(&l, 6000, 3500); 264 | argv[3].init(&l, 2010, 8999); 265 | argv[4].init(&l, 3011, 7917); 266 | argv[5].init(&l, 7138, 1234); 267 | argv[6].init(&l, 10000, 1); 268 | argv[7].init(&l, 9216, 4289); 269 | 270 | for (int i = 0; i < n_thread; ++ i) { 271 | if (i < n_add_thread) { 272 | pthread_create(&tid[i], NULL, test_add, (void*)&argv[i]); 273 | } else if (i >= n_add_thread && i < n_add_thread + n_rm_thread) { 274 | pthread_create(&tid[i], NULL, test_rm, (void*)&argv[i]); 275 | } else { 276 | pthread_create(&tid[i], NULL, test_contains, (void*)&argv[i]); 277 | } 278 | } 279 | 280 | for (int i = 0; i < n_thread; ++ i) { 281 | pthread_join(tid[i], NULL); 282 | } 283 | 284 | delete [] tid; 285 | delete [] argv; 286 | } 287 | 288 | void TEST_CORRECTNESS_MULTI_THREAD() { 289 | Test_multi_thread_add(); 290 | std::cout << "Test multi thread add successfully" << std::endl; 291 | 292 | Test_multi_thread_rm(); 293 | std::cout << "Test multi thread rm successfully" << std::endl; 294 | 295 | Test_multi_thread_add_and_rm_small(); 296 | std::cout << "Test multi thread add & rm small successfully" << std::endl; 297 | Test_multi_thread_add_and_rm_big(); 298 | std::cout << "Test multi thread add & rm big successfully" << std::endl; 299 | 300 | std::cout << "--------------------------" << std::endl; 301 | } 302 | 303 | double time_diff(const timeval & b, const timeval & e) { 304 | return (e.tv_sec - b.tv_sec) + (e.tv_usec - b.tv_usec)*1.0 / 1000000.0; 305 | } 306 | 307 | void Test_performance_add(const int n_thread) { 308 | LockFreeList l; 309 | std::vector v; 310 | 311 | pthread_t* tid = new pthread_t[n_thread]; 312 | ThreadArgv argv = ThreadArgv(&l, 1, 10000); 313 | 314 | timeval begin; 315 | timeval end; 316 | gettimeofday(&begin, NULL); 317 | 318 | for (int i = 0; i < n_thread; ++ i) { 319 | pthread_create(&tid[i], NULL, test_add, (void*)&argv); 320 | } 321 | for (int i = 0; i < n_thread; ++ i) { 322 | pthread_join(tid[i], NULL); 323 | } 324 | 325 | gettimeofday(&end, NULL); 326 | std::cout << "Test performance: add() with " << n_thread; 327 | std::cout << " threads, consuming " << time_diff(begin, end) << " s" << std::endl; 328 | 329 | delete [] tid; 330 | } 331 | 332 | void Test_performance_rm(const int n_thread) { 333 | LockFreeList l; 334 | std::vector v; 335 | 336 | for (int i = 1; i <= 10000; ++ i) { 337 | l.add((long) i); 338 | } 339 | 340 | pthread_t* tid = new pthread_t[n_thread]; 341 | ThreadArgv argv = ThreadArgv(&l, 10000, 1); 342 | 343 | timeval begin; 344 | timeval end; 345 | gettimeofday(&begin, NULL); 346 | 347 | for (int i = 0; i < n_thread; ++ i) { 348 | pthread_create(&tid[i], NULL, test_rm, (void*)&argv); 349 | } 350 | for (int i = 0; i < n_thread; ++ i) { 351 | pthread_join(tid[i], NULL); 352 | } 353 | 354 | gettimeofday(&end, NULL); 355 | std::cout << "Test performance: rm() with " << n_thread; 356 | std::cout << " threads, consuming " << time_diff(begin, end) << " s" << 
std::endl; 357 | 358 | delete [] tid; 359 | } 360 | 361 | void Test_performance_contains(const int n_thread) { 362 | LockFreeList l; 363 | std::vector v; 364 | 365 | pthread_t* tid = new pthread_t[n_thread]; 366 | ThreadArgv argv = ThreadArgv(&l, 1, 10000); 367 | 368 | for (int i = 0; i < 10000; ++ i) { 369 | l.add((long)i); 370 | } 371 | 372 | timeval begin; 373 | timeval end; 374 | gettimeofday(&begin, NULL); 375 | 376 | for (int i = 0; i < n_thread; ++ i) { 377 | pthread_create(&tid[i], NULL, test_contains, (void*)&argv); 378 | } 379 | for (int i = 0; i < n_thread; ++ i) { 380 | pthread_join(tid[i], NULL); 381 | } 382 | 383 | gettimeofday(&end, NULL); 384 | std::cout << "Test performance: contains() with " << n_thread; 385 | std::cout << " threads, consuming " << time_diff(begin, end) << " s" << std::endl; 386 | 387 | delete [] tid; 388 | } 389 | 390 | double Test_performance_multi_op(const int n_add_thread, 391 | const int n_rm_thread, 392 | const int n_contains_thread) { 393 | LockFreeList l; 394 | std::vector v; 395 | 396 | int n_thread = n_add_thread + n_rm_thread + n_contains_thread; 397 | pthread_t* tid = new pthread_t[n_thread]; 398 | ThreadArgv* argv = new ThreadArgv[n_thread]; 399 | 400 | // each operation sequence contains 2000 op, 401 | // each operation range is [1, 10000] 402 | for (int i = 0; i < n_thread; ++ i) { 403 | std::vector v; 404 | for (int j = 0; j < 2000; ++ j) { 405 | int r = rand() % 10000 + 1; 406 | v.push_back((long) r); 407 | } 408 | argv[i].init(&l, 0, 1); 409 | argv[i].add_rand_seq(v); 410 | } 411 | 412 | for (int i = 0; i < 10000; ++ i) { 413 | l.add((long)i); 414 | } 415 | 416 | timeval begin; 417 | timeval end; 418 | gettimeofday(&begin, NULL); 419 | 420 | for (int i = 0; i < n_thread; ++ i) { 421 | if (i < n_add_thread) { 422 | pthread_create(&tid[i], NULL, rand_test_add, (void*)&argv[i]); 423 | } else if (i >= n_add_thread && i < n_add_thread + n_rm_thread) { 424 | pthread_create(&tid[i], NULL, rand_test_rm, (void*)&argv[i]); 425 | } else { 426 | pthread_create(&tid[i], NULL, rand_test_contains, (void*)&argv[i]); 427 | } 428 | } 429 | for (int i = 0; i < n_thread; ++ i) { 430 | pthread_join(tid[i], NULL); 431 | } 432 | 433 | gettimeofday(&end, NULL); 434 | 435 | delete [] tid; 436 | delete [] argv; 437 | 438 | return time_diff(begin, end); 439 | } 440 | 441 | void Test_performance_hybird(const int n_add_thread, 442 | const int n_rm_thread, 443 | const int n_contains_thread, 444 | const int n_exp) { 445 | double consuming = 0; 446 | for (int i = 0; i < n_exp; ++ i) { 447 | consuming += Test_performance_multi_op(n_add_thread, 448 | n_rm_thread, 449 | n_contains_thread); 450 | } 451 | 452 | int n_thread = n_add_thread + n_rm_thread + n_contains_thread; 453 | std::cout << "Test performance: hybird operation with " << n_thread << " thread, "; 454 | std::cout << " add_threads: " << n_add_thread << ","; 455 | std::cout << " rm_threads: " << n_rm_thread << ","; 456 | std::cout << " contains_threads: " << n_contains_thread << ","; 457 | std::cout << " avge consuming " << consuming / n_exp << " s" << std::endl; 458 | } 459 | 460 | void TEST_PERFORMANCE() { 461 | Test_performance_add(1); 462 | Test_performance_add(5); 463 | Test_performance_add(10); 464 | Test_performance_add(20); 465 | std::cout << "Test add performence successfully" << std::endl; 466 | 467 | Test_performance_rm(1); 468 | Test_performance_rm(5); 469 | Test_performance_rm(10); 470 | Test_performance_rm(20); 471 | std::cout << "Test rm performence successfully" << std::endl; 472 | 473 | 
Test_performance_contains(1); 474 | Test_performance_contains(5); 475 | Test_performance_contains(10); 476 | Test_performance_contains(20); 477 | std::cout << "Test contains performance successfully" << std::endl; 478 | 479 | Test_performance_hybird(2, 1, 2, 10); 480 | Test_performance_hybird(4, 2, 4, 10); 481 | Test_performance_hybird(6, 3, 6, 10); 482 | std::cout << "Test multi op performance successfully" << std::endl; 483 | 484 | std::cout << "--------------------------" << std::endl; 485 | } 486 | 487 | int main() { 488 | 489 | srand((unsigned int)time(NULL)); 490 | TEST_CORRECTNESS_SINGLE_THREAD(); 491 | TEST_CORRECTNESS_MULTI_THREAD(); 492 | TEST_PERFORMANCE(); 493 | 494 | /*LockFreeList l; 495 | 496 | pthread_t tid[4]; 497 | ThreadArgv argv1 = ThreadArgv(&l, 1, 2); 498 | pthread_create(&tid[0], NULL, test_rm, (void*)&argv1); 499 | pthread_join(tid[0], NULL); 500 | 501 | std::vector<long> v = l.vectorize(); 502 | for (int i = 0; i < v.size(); ++ i) { 503 | std::cout << v[i] << std::endl; 504 | }*/ 505 | 506 | return 0; 507 | } -------------------------------------------------------------------------------- /list/lock_free_list/run_batch_test.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | rm -f log/output 4 | rm -f core 5 | 6 | NUM=1000 7 | for i in `seq 1 ${NUM}`; do 8 | echo "====== $i ======" > log/output 9 | ./build/x >> log/output 10 | N=$(cat log/output |grep "successfully" |wc -l) 11 | if [ ${N} -ne 5 ]; then 12 | echo "ERROR" 13 | exit 1 14 | fi 15 | done 16 | 17 | echo "----- BATCH TEST SUCCESS -----" -------------------------------------------------------------------------------- /list/lock_free_rcu_list/Makefile: -------------------------------------------------------------------------------- 1 | x : lock_free_list_test.cpp lock_free_list.cpp lock_free_list.h list_node.cpp list_node.h rcu.cpp rcu.h 2 | g++ -std=c++11 -g -o ./build/x lock_free_list_test.cpp \ 3 | lock_free_list.cpp lock_free_list.h \ 4 | list_node.cpp list_node.h \ 5 | rcu.cpp rcu.h \ 6 | -lpthread 7 | 8 | rcu : rcu_test.cpp rcu.cpp rcu.h list_node.cpp list_node.h 9 | g++ -std=c++11 -g -o ./build/rcu rcu_test.cpp \ 10 | rcu.cpp rcu.h \ 11 | list_node.cpp list_node.h \ 12 | -lpthread 13 | 14 | clean : 15 | rm ./build/* -------------------------------------------------------------------------------- /list/lock_free_rcu_list/list_node.cpp: -------------------------------------------------------------------------------- 1 | #include "list_node.h" 2 | 3 | Node::Node(long val, Node* next) { 4 | this->val = val; 5 | this->next_ = next; 6 | } 7 | 8 | bool Node::is_mark() { 9 | return (unsigned long)next_.load() & (unsigned long)0x1; 10 | } 11 | 12 | void Node::mark() { 13 | next_ = (Node*) ((unsigned long)next_.load() | (unsigned long)0x1); 14 | } 15 | 16 | Node* Node::get_next() { 17 | return (Node*) ((unsigned long)next_.load() & ~(unsigned long)0x1); 18 | } -------------------------------------------------------------------------------- /list/lock_free_rcu_list/list_node.h: -------------------------------------------------------------------------------- 1 | #ifndef _LIST_NODE_H 2 | #define _LIST_NODE_H 3 | 4 | #include <atomic> 5 | #include <cstddef> 6 | #include <iostream> 7 | 8 | class Node { 9 | public: 10 | Node() {} 11 | Node(long val, Node* next); 12 | ~Node() {} 13 | 14 | bool is_mark(); 15 | void mark(); 16 | Node* get_next(); 17 | 18 | long val; 19 | std::atomic<Node*> next_; 20 | }; 21 | 22 | #endif --------------------------------------------------------------------------------
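The `Node` class above packs the logical-deletion flag into the lowest bit of its `next_` pointer: `mark()` sets bit 0, `is_mark()` tests it, and `get_next()` masks it off to recover the real successor. Below is a minimal, self-contained sketch of that tagging trick. It is illustration only and not a file from the repository: the type and method names (`TaggedNode`, `is_marked`, `successor`) are made up for the demo, it is single-threaded (in the real list the mark and the unlink race with CAS), and like the repo it assumes heap-allocated nodes are at least 2-byte aligned so bit 0 of a node pointer is always free.

```cpp
// Standalone sketch of the pointer-tagging idea used by Node: bit 0 of the
// successor pointer doubles as the "logically deleted" mark.
#include <atomic>
#include <cassert>
#include <cstdint>
#include <iostream>

struct TaggedNode {
    long val;
    std::atomic<TaggedNode*> next;
    TaggedNode(long v, TaggedNode* n) : val(v), next(n) {}

    bool is_marked() const {
        return reinterpret_cast<std::uintptr_t>(next.load()) & 0x1;
    }
    void mark() {
        // set bit 0; the successor address itself stays recoverable
        next.store(reinterpret_cast<TaggedNode*>(
            reinterpret_cast<std::uintptr_t>(next.load()) | 0x1));
    }
    TaggedNode* successor() const {
        // clear bit 0 to get the untagged successor pointer
        return reinterpret_cast<TaggedNode*>(
            reinterpret_cast<std::uintptr_t>(next.load()) & ~std::uintptr_t(0x1));
    }
};

int main() {
    TaggedNode* b = new TaggedNode(2, NULL);
    TaggedNode* a = new TaggedNode(1, b);
    assert(!a->is_marked());
    a->mark();                    // logical deletion of a
    assert(a->is_marked());
    assert(a->successor() == b);  // the real link survives the mark
    std::cout << "mark bit set, successor preserved" << std::endl;
    delete a;
    delete b;
    return 0;
}
```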
/list/lock_free_rcu_list/lock_free_list.cpp: -------------------------------------------------------------------------------- 1 | #include "lock_free_list.h" 2 | 3 | #include 4 | 5 | LockFreeList::LockFreeList() { 6 | head_ = new Node(-1, NULL); 7 | 8 | rcu_ = new RCU(); 9 | rcu_->start_bg_reclaim_thread(); 10 | } 11 | 12 | LockFreeList::~LockFreeList() { 13 | while (head_->get_next() != NULL) { 14 | Node* node = head_->get_next(); 15 | head_->next_ = node->get_next(); 16 | delete node; 17 | } 18 | delete head_; 19 | 20 | delete rcu_; 21 | } 22 | 23 | bool LockFreeList::add(const long val) { 24 | // insert node between pre and cur->val 25 | unsigned int tid = pthread_self(); 26 | rcu_->add_thread(tid); 27 | 28 | Node* pre = head_; 29 | Node* cur = head_->next_; 30 | 31 | while (cur != NULL) { 32 | if (cur->is_mark()) { 33 | Node* next = cur->get_next(); 34 | if (!std::atomic_compare_exchange_strong(&pre->next_, &cur, next)) { 35 | if (pre->is_mark()) { 36 | rcu_->rm_thread(tid); 37 | return false; 38 | } 39 | } else { 40 | // remove cur node from list logically, 41 | // so add cur node for batch reclaimming 42 | rcu_->add_reclaim_resource(cur); 43 | } 44 | cur = pre->get_next(); 45 | continue; 46 | } 47 | if (val <= cur->val) { 48 | break; 49 | } 50 | pre = cur; 51 | cur = pre->get_next(); 52 | } 53 | 54 | if (cur == NULL) { 55 | Node* node = new Node(val, NULL); 56 | if (!std::atomic_compare_exchange_strong(&pre->next_, &cur, node)) { 57 | delete node; 58 | rcu_->rm_thread(tid); 59 | return false; 60 | } 61 | rcu_->rm_thread(tid); 62 | return true; 63 | } 64 | 65 | if (val == cur->val) { 66 | rcu_->rm_thread(tid); 67 | return false; 68 | } else { // val < cur->val 69 | Node* node = new Node(val, cur); 70 | if (!std::atomic_compare_exchange_strong(&pre->next_, &cur, node)) { 71 | delete node; 72 | rcu_->rm_thread(tid); 73 | return false; 74 | } 75 | rcu_->rm_thread(tid); 76 | return true; 77 | } 78 | } 79 | 80 | bool LockFreeList::rm(const long val) { 81 | // delete node with val == cur->val 82 | unsigned int tid = pthread_self(); 83 | rcu_->add_thread(tid); 84 | 85 | Node* pre = head_; 86 | Node* cur = head_->next_; 87 | 88 | while (cur != NULL) { 89 | if (cur->is_mark()) { 90 | Node* next = cur->get_next(); 91 | if (!std::atomic_compare_exchange_strong(&pre->next_, &cur, next)) { 92 | // pre node has been delete logically 93 | if (pre->is_mark()) { 94 | rcu_->rm_thread(tid); 95 | return false; 96 | } 97 | // cur node has been delete physically 98 | // need do nothing 99 | } else { 100 | // remove cur node from list logically, 101 | // so add cur node for batch reclaimming 102 | rcu_->add_reclaim_resource(cur); 103 | } 104 | cur = pre->get_next(); 105 | continue; 106 | } 107 | if (val <= cur->val) { 108 | break; 109 | } 110 | pre = cur; 111 | cur = pre->get_next(); 112 | } 113 | 114 | if (cur == NULL) { 115 | rcu_->rm_thread(tid); 116 | return false; 117 | } 118 | 119 | if (val == cur->val) { 120 | cur->mark(); 121 | Node* next = cur->get_next(); 122 | if (!std::atomic_compare_exchange_strong(&pre->next_, &cur, next)) { 123 | // cur node has been removed from list logically 124 | } else { 125 | // remove cur node from list logically, 126 | // so add cur node for batch reclaimming 127 | rcu_->add_reclaim_resource(cur); 128 | } 129 | rcu_->rm_thread(tid); 130 | return true; 131 | } else { // val < cur->val 132 | rcu_->rm_thread(tid); 133 | return false; 134 | } 135 | } 136 | 137 | bool LockFreeList::contains(const long val) { 138 | unsigned int tid = pthread_self(); 139 | 
rcu_->add_thread(tid); 140 | 141 | Node* cur = head_->next_; 142 | 143 | while (cur != NULL) { 144 | if (cur->is_mark()) { 145 | cur = cur->get_next(); 146 | continue; 147 | } 148 | if (val <= cur->val) { 149 | break; 150 | } 151 | cur = cur->get_next(); 152 | } 153 | 154 | if (cur == NULL) { 155 | rcu_->rm_thread(tid); 156 | return false; 157 | } 158 | 159 | if (val == cur->val) { 160 | rcu_->rm_thread(tid); 161 | return true; 162 | } else { 163 | rcu_->rm_thread(tid); 164 | return false; 165 | } 166 | } 167 | 168 | std::vector<long> LockFreeList::vectorize() { 169 | std::vector<long> v; 170 | 171 | Node* cur = head_->next_; 172 | while (cur) { 173 | if (cur->is_mark()) { 174 | //std::cout << -1 << " "; 175 | cur = cur->get_next(); 176 | continue; 177 | } 178 | //std::cout << cur->val << " "; 179 | v.push_back(cur->val); 180 | cur = cur->get_next(); 181 | } 182 | //std::cout << std::endl; 183 | return v; 184 | } -------------------------------------------------------------------------------- /list/lock_free_rcu_list/lock_free_list.h: -------------------------------------------------------------------------------- 1 | #ifndef _LOCK_FREE_LIST_H 2 | #define _LOCK_FREE_LIST_H 3 | 4 | #include <vector> 5 | #include <atomic> 6 | #include <pthread.h> 7 | 8 | #include "list_node.h" 9 | #include "rcu.h" 10 | 11 | /* 12 | * In concurrent mode, when multiple threads call the rm() function, 13 | * some threads will only delete logically, **so 14 | * after all threads finish, there may still be logically 15 | * deleted nodes in the list.** For example: 16 | * Head->A->B->C->D->NULL 17 | * timeline 1: thread[0] marks B as deleted 18 | * timeline 2: thread[1] marks A as deleted 19 | * timeline 3: thread[0] finds A has been deleted, 20 | * so it returns without deleting B physically 21 | * timeline 4: thread[1] deletes A physically 22 | * After thread[0] and thread[1] have finished, the list is: 23 | * Head->B(logically deleted)->C->D->NULL 24 | * So at any time, the list may contain logically 25 | * deleted nodes.
26 | */ 27 | class LockFreeList { 28 | public: 29 | LockFreeList(); 30 | ~LockFreeList(); 31 | 32 | /**************** Interface ****************/ 33 | // Thread safe 34 | bool add(const long val); 35 | bool rm(const long val); 36 | bool contains(const long val); 37 | 38 | /**************** Test ****************/ 39 | // Not thread safe 40 | std::vector vectorize(); 41 | 42 | private: 43 | Node* head_; 44 | RCU* rcu_; 45 | }; 46 | 47 | #endif -------------------------------------------------------------------------------- /list/lock_free_rcu_list/lock_free_list_test.cpp: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | #include 4 | #include 5 | #include 6 | #include 7 | #include 8 | 9 | #include "lock_free_list.h" 10 | 11 | // Test Node class 12 | //int main() { 13 | // 14 | // Node x(100, NULL); 15 | // std::cout << "is mark: " << x.is_mark() << std::endl; 16 | // 17 | // x.mark(); 18 | // std::cout << "after mark: " << x.is_mark() << std::endl; 19 | // std::cout << "get next: " << x.get_next() << std::endl; 20 | // 21 | // return 0; 22 | //} 23 | 24 | struct ThreadArgv { 25 | LockFreeList* pl; 26 | int b, e; 27 | std::vector v; 28 | ThreadArgv() {} 29 | ThreadArgv(LockFreeList* pl, int b, int e) { 30 | this->pl = pl; 31 | this->b = b; 32 | this->e = e; 33 | } 34 | void init(LockFreeList* pl, int b, int e) { 35 | this->pl = pl; 36 | this->b = b; 37 | this->e = e; 38 | } 39 | void add_rand_seq(const std::vector & v) { 40 | this->v = v; 41 | } 42 | }; 43 | 44 | void TEST_CORRECTNESS_SINGLE_THREAD() { 45 | LockFreeList l; 46 | std::vector v; 47 | 48 | l.add(1), l.add(3), l.add(2), l.add(3); 49 | v = l.vectorize(); 50 | assert (v.size() == 3); 51 | assert (v[0] == 1); 52 | assert (v[1] == 2); 53 | assert (v[2] == 3); 54 | 55 | l.rm(1), l.rm(2), l.rm(2); 56 | v = l.vectorize(); 57 | assert (v.size() == 1); 58 | assert (v[0] == 3); 59 | 60 | bool res1 = l.contains(3); 61 | bool res2 = l.contains(1); 62 | assert(res1 == true); 63 | assert(res2 == false); 64 | 65 | std::cout << "Test single thread correctness successfully" << std::endl; 66 | 67 | std::cout << "--------------------------" << std::endl; 68 | } 69 | 70 | void* test_add(void* argv) { 71 | LockFreeList* pl = ((ThreadArgv*) argv)->pl; 72 | int b = ((ThreadArgv*) argv)->b; 73 | int e = ((ThreadArgv*) argv)->e; 74 | assert (b != e); 75 | long dir = 1; 76 | if (b > e) { 77 | dir = -1; 78 | } 79 | for (int i = b; ; i += dir) { 80 | if (dir > 0) { 81 | if (i > e) break; 82 | } 83 | if (dir < 0) { 84 | if (i < e) break; 85 | } 86 | pl->add((long)i); 87 | } 88 | return NULL; 89 | } 90 | 91 | void* test_rm(void* argv) { 92 | LockFreeList* pl = ((ThreadArgv*) argv)->pl; 93 | int b = ((ThreadArgv*) argv)->b; 94 | int e = ((ThreadArgv*) argv)->e; 95 | assert (b != e); 96 | long dir = 1; 97 | if (b > e) { 98 | dir = -1; 99 | } 100 | for (int i = b; ; i += dir) { 101 | if (dir > 0) { 102 | if (i > e) break; 103 | } 104 | if (dir < 0) { 105 | if (i < e) break; 106 | } 107 | pl->rm((long)i); 108 | } 109 | return NULL; 110 | } 111 | 112 | void* test_contains(void* argv) { 113 | LockFreeList* pl = ((ThreadArgv*) argv)->pl; 114 | int b = ((ThreadArgv*) argv)->b; 115 | int e = ((ThreadArgv*) argv)->e; 116 | assert (b != e); 117 | long dir = 1; 118 | if (b > e) { 119 | dir = -1; 120 | } 121 | for (int i = b; ; i += dir) { 122 | if (dir > 0) { 123 | if (i > e) break; 124 | } 125 | if (dir < 0) { 126 | if (i < e) break; 127 | } 128 | pl->contains((long)i); 129 | } 130 | return NULL; 131 | } 132 | 
133 | void* rand_test_add(void* argv) { 134 | LockFreeList* pl = ((ThreadArgv*) argv)->pl; 135 | int b = ((ThreadArgv*) argv)->b; 136 | int e = ((ThreadArgv*) argv)->e; 137 | std::vector v = ((ThreadArgv*) argv)->v; 138 | assert (b != e); 139 | for (int i = 0; i < v.size(); ++ i) { 140 | pl->add(v[i]); 141 | } 142 | return NULL; 143 | } 144 | 145 | void* rand_test_rm(void* argv) { 146 | LockFreeList* pl = ((ThreadArgv*) argv)->pl; 147 | int b = ((ThreadArgv*) argv)->b; 148 | int e = ((ThreadArgv*) argv)->e; 149 | std::vector v = ((ThreadArgv*) argv)->v; 150 | assert (b != e); 151 | for (int i = 0; i < v.size(); ++ i) { 152 | pl->rm(v[i]); 153 | } 154 | return NULL; 155 | } 156 | 157 | void* rand_test_contains(void* argv) { 158 | LockFreeList* pl = ((ThreadArgv*) argv)->pl; 159 | int b = ((ThreadArgv*) argv)->b; 160 | int e = ((ThreadArgv*) argv)->e; 161 | std::vector v = ((ThreadArgv*) argv)->v; 162 | assert (b != e); 163 | for (int i = 0; i < v.size(); ++ i) { 164 | pl->contains(v[i]); 165 | } 166 | return NULL; 167 | } 168 | 169 | bool validate_permutations(const std::vector & v) { 170 | return (v.size() == 0) || 171 | (v.size() == 1 && (v[0] == 1 || v[0] == 2)) || 172 | (v.size() == 2 && (v[0] == 1 && v[1] == 2)); 173 | } 174 | 175 | void Test_multi_thread_add() { 176 | LockFreeList l; 177 | std::vector v; 178 | 179 | pthread_t tid[4]; 180 | ThreadArgv argv1 = ThreadArgv(&l, 1, 10000); 181 | ThreadArgv argv2 = ThreadArgv(&l, 10000, 1); 182 | ThreadArgv argv3 = ThreadArgv(&l, 1000, 8000); 183 | ThreadArgv argv4 = ThreadArgv(&l, 5000, 1); 184 | pthread_create(&tid[0], NULL, test_add, (void*)&argv1); 185 | pthread_create(&tid[1], NULL, test_add, (void*)&argv2); 186 | pthread_create(&tid[2], NULL, test_add, (void*)&argv3); 187 | pthread_create(&tid[3], NULL, test_add, (void*)&argv4); 188 | pthread_join(tid[0], NULL); 189 | pthread_join(tid[1], NULL); 190 | pthread_join(tid[2], NULL); 191 | pthread_join(tid[3], NULL); 192 | 193 | v = l.vectorize(); 194 | assert(v.size() == 10000); 195 | assert(v[0] == 1); 196 | assert(v[9999] == 10000); 197 | } 198 | 199 | void Test_multi_thread_rm() { 200 | LockFreeList l; 201 | std::vector v; 202 | 203 | for (int i = 0; i < 10000; ++ i) { 204 | l.add((long)(i+1)); 205 | } 206 | 207 | pthread_t tid[4]; 208 | ThreadArgv argv1 = ThreadArgv(&l, 1, 5000); 209 | ThreadArgv argv2 = ThreadArgv(&l, 5000, 1); 210 | ThreadArgv argv3 = ThreadArgv(&l, 2000, 4000); 211 | ThreadArgv argv4 = ThreadArgv(&l, 4500, 100); 212 | pthread_create(&tid[0], NULL, test_rm, (void*)&argv1); 213 | pthread_create(&tid[1], NULL, test_rm, (void*)&argv2); 214 | pthread_create(&tid[2], NULL, test_rm, (void*)&argv3); 215 | pthread_create(&tid[3], NULL, test_rm, (void*)&argv4); 216 | pthread_join(tid[0], NULL); 217 | pthread_join(tid[1], NULL); 218 | pthread_join(tid[2], NULL); 219 | pthread_join(tid[3], NULL); 220 | 221 | v = l.vectorize(); 222 | assert(v.size() == 5000); 223 | assert(v[0] == 5001); 224 | assert(v[4999] == 10000); 225 | } 226 | 227 | void Test_multi_thread_add_and_rm_small() { 228 | LockFreeList l; 229 | std::vector v; 230 | 231 | pthread_t tid[4]; 232 | ThreadArgv argv1 = ThreadArgv(&l, 1, 2); 233 | ThreadArgv argv2 = ThreadArgv(&l, 2, 1); 234 | pthread_create(&tid[0], NULL, test_add, (void*)&argv1); 235 | pthread_create(&tid[1], NULL, test_rm, (void*)&argv2); 236 | pthread_create(&tid[2], NULL, test_add, (void*)&argv2); 237 | pthread_create(&tid[3], NULL, test_rm, (void*)&argv1); 238 | pthread_join(tid[0], NULL); 239 | pthread_join(tid[1], NULL); 240 | pthread_join(tid[2], 
NULL); 241 | pthread_join(tid[3], NULL); 242 | 243 | v = l.vectorize(); 244 | assert(validate_permutations(v)); 245 | } 246 | 247 | void Test_multi_thread_add_and_rm_big() { 248 | LockFreeList l; 249 | 250 | for (int i = 1; i <= 10000; ++ i) { 251 | l.add((long) i); 252 | } 253 | 254 | int n_add_thread = 3; 255 | int n_rm_thread = 3; 256 | int n_contains_thread = 2; 257 | int n_thread = n_add_thread + n_rm_thread + n_contains_thread; 258 | 259 | pthread_t* tid = new pthread_t[n_thread]; 260 | ThreadArgv* argv = new ThreadArgv[n_thread]; 261 | argv[0].init(&l, 1, 10000); 262 | argv[1].init(&l, 3000, 1); 263 | argv[2].init(&l, 6000, 3500); 264 | argv[3].init(&l, 2010, 8999); 265 | argv[4].init(&l, 3011, 7917); 266 | argv[5].init(&l, 7138, 1234); 267 | argv[6].init(&l, 10000, 1); 268 | argv[7].init(&l, 9216, 4289); 269 | 270 | for (int i = 0; i < n_thread; ++ i) { 271 | if (i < n_add_thread) { 272 | pthread_create(&tid[i], NULL, test_add, (void*)&argv[i]); 273 | } else if (i >= n_add_thread && i < n_add_thread + n_rm_thread) { 274 | pthread_create(&tid[i], NULL, test_rm, (void*)&argv[i]); 275 | } else { 276 | pthread_create(&tid[i], NULL, test_contains, (void*)&argv[i]); 277 | } 278 | } 279 | 280 | for (int i = 0; i < n_thread; ++ i) { 281 | pthread_join(tid[i], NULL); 282 | } 283 | 284 | delete [] tid; 285 | delete [] argv; 286 | } 287 | 288 | void TEST_CORRECTNESS_MULTI_THREAD() { 289 | Test_multi_thread_add(); 290 | std::cout << "Test multi thread add successfully" << std::endl; 291 | 292 | Test_multi_thread_rm(); 293 | std::cout << "Test multi thread rm successfully" << std::endl; 294 | 295 | Test_multi_thread_add_and_rm_small(); 296 | std::cout << "Test multi thread add & rm small successfully" << std::endl; 297 | Test_multi_thread_add_and_rm_big(); 298 | std::cout << "Test multi thread add & rm big successfully" << std::endl; 299 | 300 | std::cout << "--------------------------" << std::endl; 301 | } 302 | 303 | double time_diff(const timeval & b, const timeval & e) { 304 | return (e.tv_sec - b.tv_sec) + (e.tv_usec - b.tv_usec)*1.0 / 1000000.0; 305 | } 306 | 307 | void Test_performance_add(const int n_thread) { 308 | LockFreeList l; 309 | std::vector v; 310 | 311 | pthread_t* tid = new pthread_t[n_thread]; 312 | ThreadArgv argv = ThreadArgv(&l, 1, 10000); 313 | 314 | timeval begin; 315 | timeval end; 316 | gettimeofday(&begin, NULL); 317 | 318 | for (int i = 0; i < n_thread; ++ i) { 319 | pthread_create(&tid[i], NULL, test_add, (void*)&argv); 320 | } 321 | for (int i = 0; i < n_thread; ++ i) { 322 | pthread_join(tid[i], NULL); 323 | } 324 | 325 | gettimeofday(&end, NULL); 326 | std::cout << "Test performance: add() with " << n_thread; 327 | std::cout << " threads, consuming " << time_diff(begin, end) << " s" << std::endl; 328 | 329 | delete [] tid; 330 | } 331 | 332 | void Test_performance_rm(const int n_thread) { 333 | LockFreeList l; 334 | std::vector v; 335 | 336 | for (int i = 1; i <= 10000; ++ i) { 337 | l.add((long) i); 338 | } 339 | 340 | pthread_t* tid = new pthread_t[n_thread]; 341 | ThreadArgv argv = ThreadArgv(&l, 10000, 1); 342 | 343 | timeval begin; 344 | timeval end; 345 | gettimeofday(&begin, NULL); 346 | 347 | for (int i = 0; i < n_thread; ++ i) { 348 | pthread_create(&tid[i], NULL, test_rm, (void*)&argv); 349 | } 350 | for (int i = 0; i < n_thread; ++ i) { 351 | pthread_join(tid[i], NULL); 352 | } 353 | 354 | gettimeofday(&end, NULL); 355 | std::cout << "Test performance: rm() with " << n_thread; 356 | std::cout << " threads, consuming " << time_diff(begin, end) << " s" 
<< std::endl; 357 | 358 | delete [] tid; 359 | } 360 | 361 | void Test_performance_contains(const int n_thread) { 362 | LockFreeList l; 363 | std::vector v; 364 | 365 | pthread_t* tid = new pthread_t[n_thread]; 366 | ThreadArgv argv = ThreadArgv(&l, 1, 10000); 367 | 368 | for (int i = 0; i < 10000; ++ i) { 369 | l.add((long)i); 370 | } 371 | 372 | timeval begin; 373 | timeval end; 374 | gettimeofday(&begin, NULL); 375 | 376 | for (int i = 0; i < n_thread; ++ i) { 377 | pthread_create(&tid[i], NULL, test_contains, (void*)&argv); 378 | } 379 | for (int i = 0; i < n_thread; ++ i) { 380 | pthread_join(tid[i], NULL); 381 | } 382 | 383 | gettimeofday(&end, NULL); 384 | std::cout << "Test performance: contains() with " << n_thread; 385 | std::cout << " threads, consuming " << time_diff(begin, end) << " s" << std::endl; 386 | 387 | delete [] tid; 388 | } 389 | 390 | double Test_performance_multi_op(const int n_add_thread, 391 | const int n_rm_thread, 392 | const int n_contains_thread) { 393 | LockFreeList l; 394 | std::vector v; 395 | 396 | int n_thread = n_add_thread + n_rm_thread + n_contains_thread; 397 | pthread_t* tid = new pthread_t[n_thread]; 398 | ThreadArgv* argv = new ThreadArgv[n_thread]; 399 | 400 | // each operation sequence contains 2000 op, 401 | // each operation range is [1, 10000] 402 | for (int i = 0; i < n_thread; ++ i) { 403 | std::vector v; 404 | for (int j = 0; j < 2000; ++ j) { 405 | int r = rand() % 10000 + 1; 406 | v.push_back((long) r); 407 | } 408 | argv[i].init(&l, 0, 1); 409 | argv[i].add_rand_seq(v); 410 | } 411 | 412 | for (int i = 0; i < 10000; ++ i) { 413 | l.add((long)i); 414 | } 415 | 416 | timeval begin; 417 | timeval end; 418 | gettimeofday(&begin, NULL); 419 | 420 | for (int i = 0; i < n_thread; ++ i) { 421 | if (i < n_add_thread) { 422 | pthread_create(&tid[i], NULL, rand_test_add, (void*)&argv[i]); 423 | } else if (i >= n_add_thread && i < n_add_thread + n_rm_thread) { 424 | pthread_create(&tid[i], NULL, rand_test_rm, (void*)&argv[i]); 425 | } else { 426 | pthread_create(&tid[i], NULL, rand_test_contains, (void*)&argv[i]); 427 | } 428 | } 429 | for (int i = 0; i < n_thread; ++ i) { 430 | pthread_join(tid[i], NULL); 431 | } 432 | 433 | gettimeofday(&end, NULL); 434 | 435 | delete [] tid; 436 | delete [] argv; 437 | 438 | return time_diff(begin, end); 439 | } 440 | 441 | void Test_performance_hybird(const int n_add_thread, 442 | const int n_rm_thread, 443 | const int n_contains_thread, 444 | const int n_exp) { 445 | double consuming = 0; 446 | for (int i = 0; i < n_exp; ++ i) { 447 | consuming += Test_performance_multi_op(n_add_thread, 448 | n_rm_thread, 449 | n_contains_thread); 450 | } 451 | 452 | int n_thread = n_add_thread + n_rm_thread + n_contains_thread; 453 | std::cout << "Test performance: hybird operation with " << n_thread << " thread, "; 454 | std::cout << " add_threads: " << n_add_thread << ","; 455 | std::cout << " rm_threads: " << n_rm_thread << ","; 456 | std::cout << " contains_threads: " << n_contains_thread << ","; 457 | std::cout << " avge consuming " << consuming / n_exp << " s" << std::endl; 458 | } 459 | 460 | void TEST_PERFORMANCE() { 461 | Test_performance_add(1); 462 | Test_performance_add(5); 463 | Test_performance_add(10); 464 | Test_performance_add(20); 465 | std::cout << "Test add performence successfully" << std::endl; 466 | 467 | Test_performance_rm(1); 468 | Test_performance_rm(5); 469 | Test_performance_rm(10); 470 | Test_performance_rm(20); 471 | std::cout << "Test rm performence successfully" << std::endl; 472 | 473 | 
Test_performance_contains(1); 474 | Test_performance_contains(5); 475 | Test_performance_contains(10); 476 | Test_performance_contains(20); 477 | std::cout << "Test contains performence successfully" << std::endl; 478 | 479 | Test_performance_hybird(2, 1, 2, 10); 480 | Test_performance_hybird(4, 2, 4, 10); 481 | Test_performance_hybird(6, 3, 6, 10); 482 | std::cout << "Test multi op performence successfully" << std::endl; 483 | 484 | std::cout << "--------------------------" << std::endl; 485 | } 486 | 487 | int main() { 488 | 489 | srand((unsigned int)time(NULL)); 490 | TEST_CORRECTNESS_SINGLE_THREAD(); 491 | TEST_CORRECTNESS_MULTI_THREAD(); 492 | TEST_PERFORMANCE(); 493 | 494 | /*LockFreeList l; 495 | 496 | pthread_t tid[4]; 497 | ThreadArgv argv1 = ThreadArgv(&l, 1, 2); 498 | pthread_create(&tid[0], NULL, test_rm, (void*)&argv1); 499 | pthread_join(tid[0], NULL); 500 | 501 | std::vector v = l.vectorize(); 502 | for (int i = 0; i < v.size(); ++ i) { 503 | std::cout << v[i] << std::endl; 504 | }*/ 505 | 506 | return 0; 507 | } -------------------------------------------------------------------------------- /list/lock_free_rcu_list/rcu.cpp: -------------------------------------------------------------------------------- 1 | #include 2 | 3 | #include "rcu.h" 4 | 5 | RCU::RCU() { 6 | epoch_ = 0; 7 | is_over_.store(false); 8 | should_over_.store(false); 9 | pthread_mutex_init(&mutex_, NULL); 10 | } 11 | RCU::~RCU() { 12 | // kill the background thread 13 | is_over_.store(true); 14 | pthread_join(bg_tid_, NULL); 15 | // reclaim resource 16 | while (true) { 17 | if (should_over_.load()) { 18 | // reclaim resource 19 | threads_.clear(); 20 | thread_index_.clear(); 21 | for (std::list::iterator i = nodes_.begin(); 22 | i != nodes_.end(); ++ i) { 23 | delete i->node; 24 | } 25 | nodes_.clear(); 26 | node_hash_.clear(); 27 | pthread_mutex_destroy(&mutex_); 28 | assert(this->get_thread_queue_size() == 0); 29 | assert(this->get_thread_index_size() == 0); 30 | assert(this->get_resource_queue_size() == 0); 31 | break; 32 | } else { 33 | usleep(1000 * 10); 34 | } 35 | } 36 | } 37 | 38 | void RCU::run_bg_reclaim_thread() { 39 | while (!is_over_.load()) { 40 | usleep(SleepInterval); 41 | ++ epoch_; 42 | 43 | assert(pthread_mutex_lock(&mutex_) == 0); 44 | //print_ds(); 45 | 46 | std::list::iterator earlist_thread = threads_.begin(); 47 | 48 | for (std::list::iterator node_item = nodes_.begin(); 49 | node_item != nodes_.end(); ) { 50 | if (earlist_thread != threads_.end()) { 51 | if (node_item->epoch < earlist_thread->epoch) { 52 | //std::cout << "[ rm node: " << node_item->node->val << " ]"; 53 | //std::cout << "[ node_epoch: " << node_item->epoch << ", thread_epoch: " << earlist_thread->epoch << " ]" << std::endl; 54 | delete node_item->node; 55 | node_hash_.erase(node_item->node); 56 | node_item = nodes_.erase(node_item); 57 | } else { 58 | break; 59 | } 60 | } else { 61 | //std::cout << "[[ there is no threads now, rm node " << node_item->node->val << " ]]" << std::endl; 62 | delete node_item->node; 63 | node_hash_.erase(node_item->node); 64 | node_item = nodes_.erase(node_item); 65 | } 66 | } 67 | assert(pthread_mutex_unlock(&mutex_) == 0); 68 | } 69 | should_over_.store(true); 70 | } 71 | 72 | void RCU::start_bg_reclaim_thread() { 73 | assert(pthread_create(&bg_tid_, NULL, &RCU::run_bg_reclaim_thread_wrapper, this) == 0); 74 | } 75 | 76 | void RCU::add_thread(const unsigned int tid) { 77 | assert(pthread_mutex_lock(&mutex_) == 0); 78 | threads_.push_back(ThreadItem(tid, epoch_)); 79 | 80 | 
std::list<ThreadItem>::iterator new_thread_item = threads_.end(); 81 | -- new_thread_item; 82 | thread_index_[new_thread_item->thread_id] = new_thread_item; 83 | assert(pthread_mutex_unlock(&mutex_) == 0); 84 | } 85 | 86 | void RCU::rm_thread(const unsigned int tid) { 87 | assert(pthread_mutex_lock(&mutex_) == 0); 88 | assert(thread_index_.find(tid) != thread_index_.end()); 89 | std::list<ThreadItem>::iterator del_thread_item = thread_index_[tid]; 90 | threads_.erase(del_thread_item); 91 | thread_index_.erase(tid); 92 | assert(pthread_mutex_unlock(&mutex_) == 0); 93 | } 94 | 95 | void RCU::add_reclaim_resource(Node* node) { 96 | assert(pthread_mutex_lock(&mutex_) == 0); 97 | if (node_hash_.find(node) == node_hash_.end()) { 98 | nodes_.push_back(NodeItem(node, epoch_)); 99 | node_hash_.insert(node); 100 | } 101 | assert(pthread_mutex_unlock(&mutex_) == 0); 102 | } 103 | 104 | // for debug 105 | void RCU::print_ds() { 106 | if (threads_.size() > 0) { 107 | std::cout << "Alive Threads: "; 108 | for (std::list<ThreadItem>::iterator i = threads_.begin(); 109 | i != threads_.end(); ++ i) { 110 | std::cout << "tid = " << i->thread_id << ": epoch = " << i->epoch << "\t"; 111 | } 112 | std::cout << std::endl; 113 | } 114 | 115 | if (nodes_.size() > 0) { 116 | std::cout << "Nodes awaiting reclaim: "; 117 | for (std::list<NodeItem>::iterator i = nodes_.begin(); 118 | i != nodes_.end(); ++ i) { 119 | std::cout << "node = " << i->node->val << ": epoch = " << i->epoch << "\t"; 120 | } 121 | std::cout << std::endl; 122 | } 123 | } 124 | 125 | unsigned long int RCU::get_thread_queue_size() { 126 | return threads_.size(); 127 | } 128 | unsigned long int RCU::get_thread_index_size() { 129 | return thread_index_.size(); 130 | } 131 | unsigned long int RCU::get_resource_queue_size() { 132 | return nodes_.size(); 133 | } -------------------------------------------------------------------------------- /list/lock_free_rcu_list/rcu.h: -------------------------------------------------------------------------------- 1 | /* 2 | * Please use valgrind to check for memory leaks 3 | */ 4 | #ifndef _RCU_H 5 | #define _RCU_H 6 | 7 | #include <pthread.h> 8 | #include <atomic> 9 | #include <list> 10 | #include <map> 11 | #include <unistd.h> // usleep 12 | #include <set> 13 | 14 | #include "list_node.h" 15 | 16 | struct ThreadItem { 17 | unsigned int thread_id; 18 | unsigned int epoch; 19 | ThreadItem() { 20 | thread_id = -1; 21 | epoch = -1; 22 | } 23 | ThreadItem(const unsigned int tid, const unsigned int epoch) { 24 | this->thread_id = tid; 25 | this->epoch = epoch; 26 | } 27 | }; 28 | 29 | struct NodeItem { 30 | Node* node; 31 | unsigned int epoch; 32 | NodeItem() { 33 | node = NULL; 34 | epoch = -1; 35 | } 36 | NodeItem(Node* node, unsigned int epoch) { 37 | this->node = node; 38 | this->epoch = epoch; 39 | } 40 | }; 41 | 42 | // head --> ... --> tail 43 | // oldest --> ... --> newest 44 | class RCU { 45 | public: 46 | RCU(); 47 | ~RCU(); 48 | 49 | /*----------- interface ------------*/ 50 | // not thread safe 51 | void start_bg_reclaim_thread(); 52 | void kill_bg_reclaim_thread(); 53 | 54 | /*----------- interface ------------*/ 55 | // thread safe 56 | void add_thread(const unsigned int tid); 57 | void rm_thread(const unsigned int tid); 58 | void add_reclaim_resource(Node* node); 59 | 60 | /*----------- debug ----------------*/ 61 | // not thread safe 62 | unsigned long int get_thread_queue_size(); 63 | unsigned long int get_thread_index_size(); 64 | unsigned long int get_resource_queue_size(); 65 | 66 | private: 67 | // do resource reclaiming 68 | // is_over_ controls when the background thread is killed 69 | void run_bg_reclaim_thread(); 70 | static void* run_bg_reclaim_thread_wrapper(void* argv) { 71 | ((RCU*)argv)->run_bg_reclaim_thread(); 72 | pthread_exit(NULL); 73 | } 74 | std::atomic<bool> is_over_; 75 | std::atomic<bool> should_over_; 76 | pthread_t bg_tid_; 77 | 78 | // debug 79 | // not thread safe 80 | void print_ds(); 81 | 82 | // the queue of running threads 83 | std::list<ThreadItem> threads_; 84 | std::map<unsigned int, std::list<ThreadItem>::iterator> thread_index_; 85 | 86 | // the queue of resources that need to be reclaimed 87 | std::list<NodeItem> nodes_; 88 | std::set<Node*> node_hash_; // remove duplication 89 | 90 | std::atomic<unsigned int> epoch_; // bug 91 | 92 | // this mutex protects threads_, thread_index_ and nodes_ 93 | pthread_mutex_t mutex_; 94 | 95 | const unsigned int SleepInterval = 1000 * 50; 96 | }; 97 | 98 | #endif -------------------------------------------------------------------------------- /list/lock_free_rcu_list/rcu_test.cpp: -------------------------------------------------------------------------------- 1 | #include <iostream> 2 | #include <cassert> 3 | #include <cstdlib> 4 | #include <ctime> 5 | #include <unistd.h> 6 | 7 | #include "list_node.h" 8 | #include "rcu.h" 9 | 10 | struct ThreadArgv { 11 | Node* node; 12 | RCU* rcu; 13 | pthread_mutex_t* mutex; 14 | ThreadArgv() { 15 | node = NULL; 16 | rcu = NULL; 17 | mutex = NULL; 18 | } 19 | void init(Node* node, RCU* rcu, pthread_mutex_t* mutex) { 20 | this->node = node; 21 | this->rcu = rcu; 22 | this->mutex = mutex; 23 | } 24 | }; 25 | 26 | void* del_thread(void* argv) { 27 | ThreadArgv* t_argv = (ThreadArgv*) argv; 28 | unsigned int tid = (unsigned int) pthread_self(); 29 | assert(pthread_mutex_lock(t_argv->mutex) == 0); 30 | int n = rand() % 1000; 31 | assert(pthread_mutex_unlock(t_argv->mutex) == 0); 32 | 33 | usleep(n*1000); 34 | t_argv->rcu->add_thread(tid); 35 | usleep(n*1000); 36 | t_argv->rcu->add_reclaim_resource(t_argv->node); 37 | usleep(n*1000); 38 | t_argv->rcu->rm_thread(tid); 39 | return NULL; } 40 | 41 | void* norm_thread(void* argv) { 42 | ThreadArgv* t_argv = (ThreadArgv*) argv; 43 | unsigned int tid = (unsigned int) pthread_self(); 44 | assert(pthread_mutex_lock(t_argv->mutex) == 0); 45 | int n = rand() % 1000; 46 | assert(pthread_mutex_unlock(t_argv->mutex) == 0); 47 | 48 | usleep(n*1000); 49 | t_argv->rcu->add_thread(tid); 50 | usleep(n*1000); 51 | t_argv->rcu->rm_thread(tid); 52 | return NULL; } 53 | 54 | void TEST(const int n_del_thread, const int n_norm_thread) { 55 | RCU rcu; 56 | rcu.start_bg_reclaim_thread(); 57 | 58 | pthread_mutex_t mutex; 59 | pthread_mutex_init(&mutex, NULL); 60 | 61 | ThreadArgv* argv1 = new ThreadArgv[n_del_thread]; 62 | for (int i = 0; i < n_del_thread; ++ i) { 63 | argv1[i].init(new Node((long)(i+1), NULL), &rcu, &mutex); 64 | } 65 | ThreadArgv* argv2 = new ThreadArgv[n_norm_thread]; 66 | for (int i = 0; i < n_norm_thread; ++ i) { 67 | argv2[i].init(NULL, &rcu, &mutex); 68 | }
69 | 70 | // create delete thread 71 | pthread_t* tid1 = new pthread_t[n_del_thread]; 72 | for (int i = 0; i < n_del_thread; ++ i) { 73 | assert(pthread_create(&tid1[i], NULL, del_thread, (void*)&argv1[i]) == 0); 74 | } 75 | 76 | // create empty thread 77 | pthread_t* tid2 = new pthread_t[n_norm_thread]; 78 | for (int i = 0; i < n_norm_thread; ++ i) { 79 | assert(pthread_create(&tid2[i], NULL, norm_thread, (void*)&argv2[i]) == 0); 80 | } 81 | 82 | for (int i = 0; i < n_norm_thread; ++ i) { 83 | assert(pthread_join(tid2[i], NULL) == 0); 84 | } 85 | for (int i = 0; i < n_del_thread; ++ i) { 86 | assert(pthread_join(tid1[i], NULL) == 0); 87 | } 88 | 89 | rcu.kill_bg_reclaim_thread(); 90 | 91 | delete [] argv1; 92 | delete [] tid1; 93 | delete [] argv2; 94 | delete [] tid2; 95 | pthread_mutex_destroy(&mutex); 96 | 97 | assert(rcu.get_thread_queue_size() == 0); 98 | assert(rcu.get_thread_index_size() == 0); 99 | assert(rcu.get_resource_queue_size() == 0); 100 | std::cout << "SUCCESS" << std::endl; 101 | } 102 | 103 | int main() { 104 | srand(time(NULL)); 105 | TEST(100, 100); 106 | return 0; 107 | } -------------------------------------------------------------------------------- /list/lock_free_rcu_list/run_batch_test_list.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | rm log/output 4 | rm core 5 | 6 | NUM=10000 7 | for i in `seq 1 ${NUM}`; do 8 | echo "====== $i ======" >> log/output 9 | ./build/x >> log/output 10 | #N=$(cat log/output |grep "successfully" |wc -l) 11 | #valgrind --leak-check=yes ./build/x >> log/output 12 | #N=$(cat log/output |grep "\-1" |wc -l) 13 | #if [ ${N} -gt 0 ]; then 14 | # echo "ERROR" 15 | # exit 16 | #fi 17 | done 18 | 19 | echo "----- BATCH TEST SUCCESS -----" -------------------------------------------------------------------------------- /list/lock_free_rcu_list/run_batch_test_rcu.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | rm log/output 4 | rm core 5 | 6 | #NUM=1000 7 | #for i in `seq 1 ${NUM}`; do 8 | # echo "====== $i ======" > log/output 9 | # ./build/x >> log/output 10 | # N=$(cat log/output |grep "successfully" |wc -l) 11 | # if [ ${N} -ne 5 ]; then 12 | # echo "ERROR" 13 | # exit 14 | # fi 15 | #done 16 | # 17 | #echo "----- BATCH TEST SUCCESS -----" 18 | 19 | NUM=1000 20 | for i in `seq 1 ${NUM}`; do 21 | echo "====== $i ======" > log/output 22 | ./build/rcu >> log/output 23 | N=$(cat log/output |grep "\[" |wc -l) 24 | M=$(cat log/output |grep "SUCCESS" |wc -l) 25 | if [ ${N} -ne 100 ] || [ ${M} -ne 1 ]; then 26 | echo "ERROR" 27 | exit 28 | else 29 | echo "===== $i: success =====" 30 | fi 31 | done 32 | 33 | echo "----- BATCH TEST SUCCESS -----" -------------------------------------------------------------------------------- /list/result_report/Add_to_list_performance.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alwaysR9/lock_free_ds/a81413f913c3c752152a281b212e46a6097a5c66/list/result_report/Add_to_list_performance.png -------------------------------------------------------------------------------- /list/result_report/Delete_to_list_performance.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alwaysR9/lock_free_ds/a81413f913c3c752152a281b212e46a6097a5c66/list/result_report/Delete_to_list_performance.png 
-------------------------------------------------------------------------------- /list/result_report/mixed_op_to_list_performance.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/alwaysR9/lock_free_ds/a81413f913c3c752152a281b212e46a6097a5c66/list/result_report/mixed_op_to_list_performance.png -------------------------------------------------------------------------------- /list/result_report/pic.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import matplotlib.pyplot as plt 3 | 4 | x = [1, 5, 10, 20] 5 | 6 | # add performance 7 | coarse = [0.3, 1.8, 4.0, 8.7] 8 | fine = [1.7, 2.2, 5.3, 14.1] 9 | lock_free = [1.19833, 1.28093, 1.81707, 3.13864] 10 | lock_free_rcu = [1.20253, 1.3221, 1.99714, 3.31791] 11 | title = 'The performance of add' 12 | fname = 'Add_to_list_performance.png' 13 | 14 | # delete performance 15 | #coarse = [0.332658, 0.946826, 1.65139, 4.7045] 16 | #fine = [1.68914, 2.69389, 4.9484, 12.8524] 17 | #lock_free = [1.1877, 1.17583, 1.17385, 2.52614] 18 | #lock_free_rcu = [1.25902, 1.25339, 1.3765, 3.09896] 19 | #title = 'The performance of delete' 20 | #fname = 'Delete_to_list_performance.png' 21 | 22 | # hybird performance 23 | #x = [5, 10, 15] 24 | #coarse = [0.4399, 0.9230, 1.2607] 25 | #fine = [0.4334, 1.0120, 1.6584] 26 | #lock_free = [0.2863, 0.3872, 0.4797] 27 | #lock_free_rcu = [0.2906, 0.3610, 0.4648] 28 | #title = 'The performance of mixed operation' 29 | #fname = 'mixed_op_to_list_performance.png' 30 | 31 | plt.plot(x, coarse, marker='o', label='coarse-grained') 32 | plt.plot(x, fine, marker='o', label='fine-grained') 33 | plt.plot(x, lock_free, marker='o', label='lock-free') 34 | plt.plot(x, lock_free_rcu, marker='o', label='lock-free-rcu') 35 | plt.legend(bbox_to_anchor=(1, 1), loc=1, borderaxespad=0.) 36 | plt.xlabel('the number of threads') 37 | plt.ylabel('the time consumed (second)') 38 | plt.title(title) 39 | plt.xticks(x) 40 | plt.savefig(fname) 41 | plt.show() 42 | --------------------------------------------------------------------------------
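The lock_free_rcu_list variant above exposes the whole reclamation protocol through three RCU calls that every list operation brackets itself with: add_thread() before touching shared nodes, add_reclaim_resource() for every node it unlinks, and rm_thread() once it no longer holds pointers into the list. The sketch below is a hypothetical usage example, not a file from the repository (the file name and compile command are illustrative); it simply mirrors the bracketing that lock_free_list.cpp performs inside add() and rm(), under the assumption that it is built inside list/lock_free_rcu_list/ against the repo's rcu.cpp and list_node.cpp.

```cpp
// Hypothetical usage sketch of the RCU class, e.g. saved as
// rcu_usage_sketch.cpp and built roughly like the Makefile's targets:
//   g++ -std=c++11 rcu_usage_sketch.cpp rcu.cpp list_node.cpp -lpthread
#include <pthread.h>

#include "list_node.h"
#include "rcu.h"

int main() {
    RCU rcu;
    rcu.start_bg_reclaim_thread();

    // 1. Register before touching shared nodes, so the reclaimer knows a
    //    reader/writer with the current epoch is active.
    unsigned int tid = (unsigned int) pthread_self();
    rcu.add_thread(tid);

    // 2. When a node has been unlinked (in the real list this is the CAS in
    //    rm()), hand it to the reclaimer instead of calling delete directly.
    Node* victim = new Node(42, NULL);
    rcu.add_reclaim_resource(victim);

    // 3. Deregister. The background thread frees 'victim' only once its
    //    retirement epoch is older than the epoch of every registered
    //    thread (or immediately, if no thread is registered).
    rcu.rm_thread(tid);

    return 0;  // ~RCU() stops the background thread and frees any leftovers
}
```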