├── AUTHORS
├── LICENSE
├── Makefile
├── NEWS
├── README
├── doc
│   ├── concurrentlua.png
│   ├── manual.html
│   └── stylesheet.css
├── samples
│   ├── example1.lua
│   ├── example2.lua
│   ├── example3.lua
│   ├── example4a.lua
│   ├── example4b.lua
│   ├── example5a.lua
│   └── example5b.lua
├── src
│   ├── Makefile
│   ├── clpmd
│   │   ├── Makefile
│   │   └── clpmd
│   ├── concurrent
│   │   ├── Makefile
│   │   ├── distributed
│   │   │   ├── Makefile
│   │   │   ├── cookie.lua
│   │   │   ├── link.lua
│   │   │   ├── message.lua
│   │   │   ├── monitor.lua
│   │   │   ├── network.lua
│   │   │   ├── node.lua
│   │   │   ├── process.lua
│   │   │   ├── register.lua
│   │   │   └── scheduler.lua
│   │   ├── init.lua
│   │   ├── link.lua
│   │   ├── message.lua
│   │   ├── monitor.lua
│   │   ├── option.lua
│   │   ├── process.lua
│   │   ├── register.lua
│   │   ├── root.lua
│   │   └── scheduler.lua
│   ├── daemon
│   │   ├── Makefile
│   │   └── daemon.c
│   └── time
│       ├── Makefile
│       └── time.c
└── test
    ├── concurrent.sh
    ├── concurrent
    │   ├── link1.lua
    │   ├── link2.lua
    │   ├── message1.lua
    │   ├── monitor1.lua
    │   ├── monitor2.lua
    │   ├── process1.lua
    │   ├── process2.lua
    │   ├── register1.lua
    │   ├── register2.lua
    │   ├── trapexit1.lua
    │   └── trapexit2.lua
    ├── distributed
    │   ├── cookie1a.lua
    │   ├── cookie1b.lua
    │   ├── cookie2a.lua
    │   ├── cookie2b.lua
    │   ├── link1a.lua
    │   ├── link1b.lua
    │   ├── link2a.lua
    │   ├── link2b.lua
    │   ├── link2c.lua
    │   ├── message1a.lua
    │   ├── message1b.lua
    │   ├── monitor1a.lua
    │   ├── monitor1b.lua
    │   ├── monitor2a.lua
    │   ├── monitor2b.lua
    │   ├── monitor2c.lua
    │   ├── node1a.lua
    │   ├── node1b.lua
    │   ├── process1a.lua
    │   ├── process1b.lua
    │   ├── process2a.lua
    │   ├── process2b.lua
    │   ├── register1a.lua
    │   ├── register1b.lua
    │   ├── register2a.lua
    │   ├── register2b.lua
    │   ├── register2c.lua
    │   ├── trapexit1a.lua
    │   ├── trapexit1b.lua
    │   ├── trapexit2a.lua
    │   ├── trapexit2b.lua
    │   └── trapexit2c.lua
    ├── distributed2a.sh
    ├── distributed2b.sh
    ├── distributed3a.sh
    ├── distributed3b.sh
    └── distributed3c.sh
/AUTHORS: -------------------------------------------------------------------------------- 1 | Lefteris Chatzimparmpas 2 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2007-2012 Eleftherios Chatzimparmpas 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy 4 | of this software and associated documentation files (the "Software"), to deal 5 | in the Software without restriction, including without limitation the rights 6 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 7 | copies of the Software, and to permit persons to whom the Software is 8 | furnished to do so, subject to the following conditions: 9 | 10 | The above copyright notice and this permission notice shall be included in 11 | all copies or substantial portions of the Software. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 14 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 15 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 16 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 17 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 18 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN 19 | THE SOFTWARE.
20 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | all install uninstall clean: 2 | cd src && $(MAKE) $@ 3 | -------------------------------------------------------------------------------- /NEWS: -------------------------------------------------------------------------------- 1 | ConcurrentLua 1.1 - 31 Mar 2012 2 | - Codebase ported to Lua 5.2, but Lua 5.1 is considered the default version 3 | to use for building it. 4 | - The module() function is not used anymore, and instead the main module is 5 | returned as a value by the require() function, while nothing is written to 6 | the global environment. 7 | - Removed prefix from the C module names, and made them submodules of the 8 | main module. 9 | * The way the module is to be loaded has changed, and now the return value of 10 | the require call has to be stored in a variable of choice, as can be seen 11 | in the documentation. 12 | 13 | ConcurrentLua 1.0.6 - 27 Feb 2011 14 | - Project moved to GitHub. 15 | - Updates to the documentation and other information files. 16 | 17 | ConcurrentLua 1.0.5 - 9 Mar 2010 18 | - Bug fix; process name registrations in distributed mode sometimes not 19 | working. 20 | 21 | ConcurrentLua 1.0.4 - 13 Feb 2010 22 | - Bug fix; cltime.time() problem affecting the scheduler in Mac OS X. 23 | 24 | ConcurrentLua 1.0.3 - 23 May 2009 25 | - Bug fix; time calculation for the root process while sleeping was wrong. 26 | - Minor enhancements to the serializer in the handling of tables. 27 | 28 | ConcurrentLua 1.0.2 - 21 Jun 2008 29 | - Minor enhancement to the serializer in the handling of strings. 30 | 31 | ConcurrentLua 1.0.1 - 24 Mar 2008 32 | - Bug fix; cltime.time() overflow in 32-bit architectures caused problems to 33 | the scheduler. 34 | 35 | ConcurrentLua 1.0 - 31 Dec 2007 36 | - Initial release. 37 | -------------------------------------------------------------------------------- /README: -------------------------------------------------------------------------------- 1 | ConcurrentLua 2 | 3 | Description 4 | 5 | ConcurrentLua is a system that implements a concurrency model for the Lua 6 | programming language. It is based on the share-nothing asynchronous 7 | message-passing model that is employed in the Erlang programming language. 8 | 9 | ConcurrentLua extends Lua's coroutines with message-passing primitives, in 10 | order to support concurrent programming. Distributed programming is supported 11 | transparently with the same message-passing primitives. 12 | 13 | ConcurrentLua is implemented as a collection of Lua modules that can be 14 | loaded by any Lua program. Most of the code is written in Lua itself, with 15 | minor parts written in C. 16 | 17 | 18 | Website 19 | 20 | http://github.com/lefcha/concurrentlua 21 | 22 | 23 | Changes 24 | 25 | All the changes in each new release up to the latest are in the NEWS file. 26 | 27 | 28 | Installation 29 | 30 | Lua version 5.1 or 5.2 is a compile-time requirement. 31 | 32 | The LuaSocket, Copas and Coxpcall modules are runtime dependencies. 33 | 34 | Compile and install the system: 35 | 36 | make all 37 | make install 38 | 39 | 40 | Documentation 41 | 42 | The detailed reference manual can be found in doc/manual.html. 43 | 44 | 45 | License 46 | 47 | Released under the terms and conditions of the MIT/X11 license, included in 48 | the LICENSE file. 49 | 50 | 51 | Authors 52 | 53 | See AUTHORS file.
54 | -------------------------------------------------------------------------------- /doc/concurrentlua.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/lefcha/concurrentlua/74405929dca29b2c61fcb9e94c6b6d8799bb148a/doc/concurrentlua.png -------------------------------------------------------------------------------- /doc/manual.html: -------------------------------------------------------------------------------- 1 | 3 | 4 | 5 | 6 | 7 | 8 | 10 | ConcurrentLua - Introduction 11 | 12 | 13 | 14 | 15 | 16 |
17 | ConcurrentLua logo 18 |

ConcurrentLua

19 |

Concurrency Oriented Programming in Lua

20 | 25 |
26 | 27 |
28 | 29 |

Overview

30 | 31 |

ConcurrentLua is a system that implements a concurrency model for the Lua 32 | programming language. It is based on the share-nothing asynchronous 33 | message-passing model that is employed in the Erlang programming 34 | language.

35 | 36 |

ConcurrentLua extends Lua's coroutines with message-passing primitives, in 37 | order to support concurrent programming. Distributed programming is supported 38 | transparently with the same message-passing primitives.

39 | 40 |

ConcurrentLua is implemented as a collection of Lua modules that can be 41 | loaded by any Lua program. Most of the code is written in Lua itself, with 42 | minor parts written in C.

43 | 44 |

Model description

45 | 46 |

One of the core elements of ConcurrentLua is the process. A 47 | process is a light-weight VM thread that plays the same role as processes do 48 | in an operating system; they don't share memory but instead they communicate 49 | through some kind of interprocess communication. These processes can be 50 | created and destroyed on demand, and a simple round-robin scheduler passes 51 | control to them.

52 | 53 |

Each process is associated with a mailbox, a message queue for the 54 | temporary storage of messages that were sent to the process. A process can 55 | check its mailbox for new messages at any time, and if there are any, they can 56 | be read in the order of arrival.

57 | 58 |

Each process is identified by a unique numeric process identifier, 59 | or PID for short. In addition, aliases or process names can be used 60 | instead of PIDs, in order to refer to processes. These aliases and their 61 | references are stored in a central repository, the registry. 62 | Processes can edit the registry by adding or deleting entries.

63 | 64 |

Error handling mechanisms are also provided in the form of 65 | monitors and links. With monitors processes can monitor 66 | other processes, and get notified if the monitored processes terminate 67 | abnormally. With links processes are bound together, and when one of them 68 | terminates abnormally the other one is signalled and terminates, too.

69 | 70 |

This system also supports distributed programming and all the properties 71 | that have been described map naturally onto a distributed system. Distributed 72 | processes communicate with the same primitives as local processes.

73 | 74 |

Distribution is based on a component that is called the node. A 75 | node represents a system runtime inside of which processes are executing. 76 | Nodes can be connected to each other and communicate, thus forming a 77 | virtual network. Distributed processes use this network in order to 78 | exchange messages.

79 | 80 |

Each node has a name associated with it. In order for nodes to 81 | connect to each other by using only this name, a port mapper daemon 82 | acts as a nameserver. The port mapper daemon has details about the nodes running 83 | under the network host that the daemon itself is bound to.

84 | 85 |
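The daemon listens on TCP port 9634 and speaks a small line-based protocol, as can be seen in src/clpmd/clpmd later in this tree: '+ node port' registers a node, '= node port' updates it, '- node' removes it, and '? node' asks for the port of a node, with '0' meaning unknown. A minimal lookup sketch using LuaSocket follows; the host and node name are only illustrative:

socket = require 'socket'

local pmd = socket.connect('gaia', 9634)   -- host where clpmd is running
pmd:send('? pong@gaia\r\n')                -- ask for the port of node pong@gaia
local port = pmd:receive()                 -- a port number, or '0' if unknown
print('pong@gaia is listening on port ' .. port)
pmd:close()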

As processes can be created locally, it is also possible to request the 86 | creation of processes on remote nodes. A remote process can then be handled as 87 | if it was a local process.

88 | 89 |

If the nodes that form the virtual network are fully connected (every node 90 | is connected bidirectionally to every other node), global aliases can be used for 91 | the processes. The nodes negotiate and maintain a virtual global 92 | registry and also keep updated local copies of the registry.

93 | 94 |

Monitors and links for distributed processes are supported with the same 95 | semantics as for local processes. Nodes take care of the task of transparently 96 | handling errors between distributed processes. In addition, it is possible for 97 | processes to monitor nodes as a whole.

98 | 99 |

Nodes are required to authenticate before they can communicate. An 100 | authenticated node can then be part of the virtual network that the nodes 101 | form. A simple security mechanism takes care of this task.

102 | 103 |

Implementation details

104 | 105 |

The implementation of ConcurrentLua is based on the Lua component system. 106 | The system is organized as a collection of Lua modules and submodules. There 107 | are two main modules, which provide the concurrent and distributed programming 108 | functionality respectively. One can load only the concurrency module, and 109 | for each module there is the option of not loading some of the submodules 110 | if the functionality they provide is not needed. A stand-alone port mapper 111 | daemon utility is also included.

112 | 113 |

The processes in the system are implemented with Lua coroutines. A process 114 | is actually a Lua coroutine that yields control when the process suspends its 115 | execution and resumes control when the process continues its execution.

116 | 117 |

The scheduling of the processes is still based on the cooperative 118 | multithreading model that Lua uses. Processes voluntarily suspend their 119 | execution and thus other processes get the chance to run. Nevertheless, the 120 | suspending and resuming of processes is partly hidden under a higher level 121 | mechanism; a process suspends its execution when waiting for a message to 122 | arrive and becomes ready to be resumed when new messages have arrived in its 123 | mailbox. A simple round-robin scheduler resumes the processes.

124 | 125 |

Any type of Lua data, with the exception of memory references, can be sent 126 | inside messages. Messages can be booleans, numbers, strings, tables or 127 | functions, and any combination of them. Data are automatically serialized on 128 | send and deserialized on receive, and everything is passed by value.

129 | 130 |
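As an illustration, a message may combine nested tables and even a function; what arrives at the destination is a deep copy of the data, so the receiver cannot mutate the sender's original. Here 'worker' stands for some hypothetical registered process:

concurrent.send('worker', {
    job = 'resize',
    sizes = { 100, 200, 400 },
    report = function(n) print(n .. ' images resized') end
})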

Interprocess communication between nodes, and subsequently between 131 | distributed processes, is based on an asynchronous socket handler. This 132 | translates to a networking model that uses non-blocking sockets and periodic 133 | polling. This is the approach mostly used today by Lua libraries. 134 | Non-blocking semantics should also be used for IO such as files, pipes, 135 | etc.

136 | 137 |

Introduction

138 | 139 |

Some examples will provide an introduction to the most essential 140 | properties of the system, from process creation and message passing to 141 | distributed programming and error handling.

142 | 143 |

Creating processes

144 | 145 |

Processes are created using the spawn() function. The 146 | spawn() function takes at least one argument; the function that 147 | contains the command set that the process will execute. Any additional 148 | arguments are passed directly as arguments of the function.

149 | 150 |

The following example demonstrates the creation of a process. The process 151 | just prints a message as many times as specified:

152 | 153 |
154 | concurrent = require 'concurrent'
155 | 
156 | function hello_world(times)
157 |     for i = 1, times do print('hello world') end
158 |     print('done')
159 | end
160 | 
161 | concurrent.spawn(hello_world, 3)
162 | 
163 | concurrent.loop()
164 | 165 |

The output would be:

166 | 167 |
168 | hello world
169 | hello world
170 | hello world
171 | done
172 | 173 |

First the system is loaded:

174 | 175 |
176 | concurrent = require 'concurrent'
177 | 178 |

The function that the process will execute is defined next:

179 | 180 |
181 | function hello_world(times)
182 |     for i = 1, times do print('hello world') end
183 |     print('done')
184 | end
185 | 186 |

A new process is created:

187 | 188 |
189 | concurrent.spawn(hello_world, 3)
190 | 191 |

The system's infinite loop is called last:

192 | 193 |
194 | concurrent.loop()
195 | 196 |

Exchanging messages

197 | 198 |

Processes can exchange messages by using the send() and 199 | receive() functions. Also, the self() function can 200 | be used to get the PID of the calling process.

201 | 202 |

The following program implements two processes that exchange messages and 203 | then terminate:

204 | 205 |
206 | concurrent = require 'concurrent'
207 | 
208 | function pong()
209 |     while true do
210 |         local msg = concurrent.receive()
211 |         if msg.body == 'finished' then
212 |             break
213 |         elseif msg.body == 'ping' then
214 |             print('pong received ping')
215 |             concurrent.send(msg.from, { body = 'pong' })
216 |         end
217 |     end
218 |     print('pong finished')
219 | end
220 | 
221 | function ping(n, pid)
222 |     for i = 1, n do
223 |         concurrent.send(pid, { from = concurrent.self(), body = 'ping' })
224 |         local msg = concurrent.receive()
225 |         if msg.body == 'pong' then print('ping received pong') end
226 |     end
227 |     concurrent.send(pid, { from = concurrent.self(), body = 'finished' })
228 |     print('ping finished')
229 | end
230 | 
231 | pid = concurrent.spawn(pong)
232 | concurrent.spawn(ping, 3, pid)
233 | 
234 | concurrent.loop()
235 | 236 |

The output would be:

237 | 238 |
239 | pong received ping
240 | ping received pong
241 | pong received ping
242 | ping received pong
243 | pong received ping
244 | ping received pong
245 | pong finished
246 | ping finished
247 | 248 |

After the pong process is created, the ping process is 249 | supplied with the PID of the pong process:

250 | 251 |
252 | pid = concurrent.spawn(pong)
253 | concurrent.spawn(ping, 3, pid)
254 | 255 |

The ping process sends a message:

256 | 257 |
258 | concurrent.send(pid, { from = concurrent.self(), body = 'ping' })
259 | 260 |

The pong process waits for a message to arrive and saves it in a 261 | variable when it does:

262 | 263 |
264 | local msg = concurrent.receive()
265 | 266 |

The pong process replies:

267 | 268 |
269 | concurrent.send(msg.from, { body = 'pong' })
270 | 271 |

The pong process terminates after having received a notification 272 | from the ping process.

273 | 274 |

Registering process names

275 | 276 |

Instead of using process PIDs for sending messages, process names can also 277 | be used. The register() function can be used to create an alias 278 | for a process in the registry:

279 | 280 |
281 |
282 | concurrent = require 'concurrent'
283 | 
284 | function pong()
285 |     while true do
286 |         local msg = concurrent.receive()
287 |         if msg.body == 'finished' then
288 |             break
289 |         elseif msg.body == 'ping' then
290 |             print('pong received ping')
291 |             concurrent.send(msg.from, { body = 'pong' })
292 |         end
293 |     end
294 |     print('pong finished')
295 | end
296 | 
297 | function ping(n)
298 |     for i = 1, n do
299 |         concurrent.send('pong', { from = concurrent.self(), body = 'ping' })
300 |         local msg = concurrent.receive()
301 |         if msg.body == 'pong' then print('ping received pong') end
302 |     end
303 |     concurrent.send('pong', { from = concurrent.self(), body = 'finished' })
304 |     print('ping finished')
305 | end
306 | 
307 | pid = concurrent.spawn(pong)
308 | concurrent.register('pong', pid)
309 | concurrent.spawn(ping, 3)
310 | 
311 | concurrent.loop()
312 | 313 |

The only change from the previous example is the destination that the 314 | ping process sends messages to:

315 | 316 |
317 | concurrent.send('pong', { from = concurrent.self(), body = 'ping' })
318 | 319 |

And:

320 | 321 |
322 | concurrent.send('pong', { from = concurrent.self(), body = 'finished' })
323 |
324 | 325 |

And the pong process now registers its name:

326 | 327 |
328 | concurrent.register('pong', pid)
329 | 330 |

Therefore the ping process isn't supplied with the PID of the 331 | pong process.

332 | 333 |

Distributed message passing

334 | 335 |

Processes in different nodes can still communicate with the same message 336 | passing primitives. Remote processes are denoted by their PID or alias and the 337 | node they are executing under. The previous example could be broken into two 338 | programs, one for each process.

339 | 340 |

The code for the pong process:

341 | 342 |
343 | concurrent = require 'concurrent'
344 | 
345 | function pong()
346 |     while true do
347 |         local msg = concurrent.receive()
348 |         if msg.body == 'finished' then
349 |             break
350 |         elseif msg.body == 'ping' then
351 |             print('pong received ping')
352 |             concurrent.send(msg.from, { body = 'pong' })
353 |         end
354 |     end
355 |     print('pong finished')
356 | end
357 | 
358 | concurrent.init('pong@gaia')
359 | 
360 | pid = concurrent.spawn(pong)
361 | 
362 | concurrent.register('pong', pid)
363 | concurrent.loop()
364 | concurrent.shutdown()
365 | 366 |

And the code for the ping process:

367 | 368 |
369 | concurrent = require 'concurrent'
370 | 
371 | function ping(n)
372 |     for i = 1, n do
373 |         concurrent.send({ 'pong', 'pong@gaia' },
374 |                         { from = { concurrent.self(), concurrent.node() },
375 |                           body = 'ping' })
376 |         local msg = concurrent.receive()
377 |         if msg.body == 'pong' then print('ping received pong') end
378 |     end
379 |     concurrent.send({ 'pong', 'pong@gaia' },
380 |                     { from = { concurrent.self(), concurrent.node() },
381 |                       body = 'finished' })
382 |     print('ping finished')
383 | end
384 | 
385 | concurrent.spawn(ping, 3)
386 | 
387 | concurrent.init('ping@selene')
388 | concurrent.loop()
389 | concurrent.shutdown()
390 | 391 |

The output of the pong process would be:

392 | 393 |
394 | pong received ping
395 | pong received ping
396 | pong received ping
397 | pong finished
398 | 399 |

And the output of the ping process would be:

400 | 401 |
402 | ping received pong
403 | ping received pong
404 | ping received pong
405 | ping finished
406 | 407 |

In this example the runtime system is running in distributed mode. In order 408 | for this to happen, first the port mapper daemon has to be started. This can 409 | be done by typing in a command line shell:

410 | 411 |
412 | $ clpmd
413 | 414 |

The code that initializes the node that the pong process is 415 | running on:

416 | 417 |
418 | concurrent.init('pong@gaia')
419 | 420 |

And the code for the ping process:

421 | 422 |
423 | concurrent.init('ping@selene')
424 | 425 |

The previous two code snippets register with the port mapper daemon the port 426 | that each node is listening on. Both nodes unregister their port with:

427 | 428 |
429 | concurrent.shutdown()
430 | 431 |

The only other changes in this example are the destination that the 432 | messages are sent to, along with the introduction of the node() 433 | function that returns the name of the node that the calling process is running 434 | on:

435 | 436 |
437 | concurrent.send({ 'pong', 'pong@gaia' },
438 |                 { from = { concurrent.self(), concurrent.node() },
439 |                   body = 'ping' })
440 | 441 |

And later:

442 | 443 |
444 | concurrent.send({ 'pong', 'pong@gaia' },
445 |                 { from = { concurrent.self(), concurrent.node() },
446 |                   body = 'finished' })
447 | 448 |

Handling errors

449 | 450 |

One approach to handle errors in processes is the notion of linked 451 | processes. Two processes are bound together and if one of them terminates 452 | abnormally the other one terminates, too. The link() function can 453 | be used to link processes:

454 | 455 |
456 | concurrent = require 'concurrent'
457 | 
458 | function ping(n, pid)
459 |     concurrent.link(pid)
460 |     for i = 1, n do
461 |         concurrent.send(pid, { from = concurrent.self(), body = 'ping' })
462 |         local msg = concurrent.receive()
463 |         if msg.body == 'pong' then print('ping received pong') end
464 |     end
465 |     print('ping finished')
466 |     concurrent.exit('finished')
467 | end
468 | 
469 | function pong()
470 |     while true do
471 |         local msg = concurrent.receive()
472 |         if msg.body == 'ping' then
473 |             print('pong received ping')
474 |             concurrent.send(msg.from, { body = 'pong' })
475 |         end
476 |     end
477 |     print('pong finished')
478 | end
479 | 
480 | pid = concurrent.spawn(pong)
481 | concurrent.spawn(ping, 3, pid)
482 | 
483 | concurrent.loop()
484 | 485 |

The output would be:

486 | 487 |
488 | pong received ping
489 | ping received pong
490 | pong received ping
491 | ping received pong
492 | pong received ping
493 | ping received pong
494 | pong finished
495 | 496 |

The pong process never reaches its last line, because it 497 | terminates when the ping process exits.

498 | 499 |

The code that links the processes is:

500 | 501 |
502 | concurrent.link(pid)
503 | 504 |

The exit() function is used to make the calling process quit 505 | abnormally:

506 | 507 |
508 | concurrent.exit('finished')
509 | 510 |

It is also possible to trap the exit signal of the terminating process. In 511 | this case a special message is received:

512 | 513 |
514 | concurrent = require 'concurrent'
515 | 
516 | concurrent.setoption('trapexit', true)
517 | 
518 | function pong()
519 |     while true do
520 |         local msg = concurrent.receive()
521 |         if msg.signal == 'EXIT' then
522 |             break
523 |         elseif msg.body == 'ping' then
524 |             print('pong received ping')
525 |             concurrent.send(msg.from, { body = 'pong' })
526 |         end
527 |     end
528 |     print('pong finished')
529 | end
530 | 
531 | function ping(n, pid)
532 |     concurrent.link(pid)
533 |     for i = 1, n do
534 |         concurrent.send(pid, { from = concurrent.self(), body = 'ping' })
535 |         local msg = concurrent.receive()
536 |         if msg.body == 'pong' then print('ping received pong') end
537 |     end
538 |     print('ping finished')
539 |     concurrent.exit('finished')
540 | end
541 | 
542 | pid = concurrent.spawn(pong)
543 | concurrent.spawn(ping, 3, pid)
544 | 
545 | concurrent.loop()
546 | 547 |

The output would be:

548 | 549 |
550 | pong received ping
551 | ping received pong
552 | pong received ping
553 | ping received pong
554 | pong received ping
555 | ping received pong
556 | pong finished
557 | ping finished
558 | 559 |

There is an option related to process linking that can be set with the 560 | setoption() function, specifically the trapexit 561 | option:

562 | 563 |
564 | concurrent.setoption('trapexit', true)
565 | 566 |

Then the pong process receives a special exit message:

567 | 568 |
569 | if msg.signal == 'EXIT' then
570 |     break
571 | 
572 | 573 |

Alternatively, monitors, which are based on notification messages, can 574 | also be used for error handling.

575 | 576 |
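A sketch of that alternative is shown below; judging from the monitor submodules, the notification is a message whose signal field is set to 'DOWN' and which carries the reason of the termination:

concurrent = require 'concurrent'

function watchdog()
    local pid = concurrent.spawnmonitor(worker)
    local msg = concurrent.receive()
    if msg.signal == 'DOWN' then print('worker died: ' .. msg.reason) end
end

function worker()
    concurrent.exit('crashed')
end

concurrent.spawn(watchdog)
concurrent.loop()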

Reference

577 | 578 |

A list of all the available functions and their descriptions.

579 | 580 |

Processes

581 | 582 |

spawn(body, ...)

583 | 584 |

Creates a process which will execute the body 585 | function. Any extra arguments can be passed to the executing function. The PID 586 | of the new process is returned. In case of error nil and an error 587 | message are returned.

588 | 589 |

spawn(node, body, ...)

590 | 591 |

Creates a process in a remote node which is a 592 | string in the format 'nodename@hostname' and the new process will 593 | execute the body function. The PID of the new process is returned. In case of 594 | error nil and an error message are returned.

595 | 596 |
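A brief sketch, reusing the node names of the earlier examples; since the function is serialized and shipped to the other node, it should not rely on upvalues of the sending program:

function work()
    print('running on ' .. concurrent.node())
end

pid, errmsg = concurrent.spawn('pong@gaia', work)
if not pid then print('remote spawn failed: ' .. errmsg) end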

self()

597 | 598 |

Returns the PID of the calling process.

599 | 600 |

isalive(process)

601 | 602 |

Checks if the process, which can be specified by 603 | PID or by its registered string name, is alive. Returns true if 604 | the process is alive, and false otherwise.

605 | 606 |

exit(reason)

607 | 608 |

Terminates the calling process abnormally, with the 609 | specified reason string as the cause of exit.

610 | 611 |

Messages

612 | 613 |

receive([timeout])

614 | 615 |

Receives the next message in the mailbox of the 616 | calling process. If the mailbox is empty it waits indefinitely for a message 617 | to arrive, unless a timeout number in milliseconds is specified. 618 | A message of any type, depending on what was sent, is returned.

619 | 620 |
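When the timeout expires and nothing has arrived, the call returns nothing; the keepalive code in the distributed submodules, for instance, treats a nil result as a missed reply. A bounded wait could thus look like:

local msg = concurrent.receive(1000)       -- wait for at most one second
if msg then
    print('got a message')
else
    print('timed out')
end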

send(process, message)

621 | 622 |

Sends to the destination process a 623 | message which can be one of: boolean, number, string, table, 624 | function. Returns true if the message was sent successfully, and 625 | false if not.

626 | 627 |

Scheduling

628 | 629 |

sleep(time)

630 | 631 |

Implicitly suspends the calling process for the 632 | specified time, in milliseconds.

633 | 634 |
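Sleeping suspends only the calling process; the scheduler keeps resuming any other processes in the meantime. For example:

function worker()
    print('start')
    concurrent.sleep(1000)    -- yield to the scheduler for about a second
    print('one second later')
end

concurrent.spawn(worker)
concurrent.loop()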

loop([timeout])

635 | 636 |

Calls the system's infinite loop which executes 637 | the process scheduler until all the processes have terminated, or until the 638 | specified timeout, in milliseconds, has 639 | expired.

640 | 641 |

interrupt()

642 | 643 |

Interrupts the infinite loop of the process 644 | scheduler.

645 | 646 |

step([timeout])

647 | 648 |

Executes one step of the process scheduler, unless 649 | the specified timeout, in milliseconds, has 650 | expired.

651 | 652 |

tick()

653 | 654 |

Forwards the system's clock by one tick.

655 | 656 |
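An application that owns its main loop can therefore drive the scheduler itself instead of surrendering control to loop(). A sketch, under the assumption that loop() amounts to repeatedly stepping the scheduler and forwarding the clock:

while true do
    concurrent.step()    -- run the ready processes once
    concurrent.tick()    -- forward the system's clock
    -- other application work can be interleaved here
end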

Options

657 | 658 |

setoption(key, value)

659 | 660 |

Sets the key string option to the 661 | specified value, the type of which depends on the 662 | option.

663 | 664 |

getoption(key)

665 | 666 |

Returns the value of the key string 667 | option.

668 | 669 |
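For example, the trapexit behaviour shown earlier is an option, and, judging from the distributed message submodule, there is also a debug option that prints the messages exchanged between nodes:

concurrent.setoption('trapexit', true)
concurrent.setoption('debug', true)        -- trace distributed traffic
print(concurrent.getoption('trapexit'))    --> true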

Node

670 | 671 |

init(node)

672 | 673 |
674 | 675 |

Makes the runtime system a distributed node. The 676 | first argument is the name string of the node, which can be 677 | either in 'nodename' or 'nodename@hostname' 678 | format.

679 | 680 |

If the 'shortnames' option is set to true, then 681 | short names are used instead of fully qualified domain names. If the 682 | 'connectall' option is set to false, then a fully 683 | connected virtual network between the nodes will not be maintained.

684 | 685 |
686 | 687 |

shutdown()

688 | 689 |

Makes the runtime system stop being a distributed 690 | node.

691 | 692 |

node()

693 | 694 |

Returns the name of the node the calling process 695 | is running on.

696 | 697 |

nodes()

698 | 699 |

Returns a table with the nodes connected to the 700 | node that the calling process is running on.

701 | 702 |

isnodealive()

703 | 704 |

Returns true if the local node has 705 | been initialized, and false otherwise.

706 | 707 |

monitornode(node)

708 | 709 |

The calling process starts monitoring the 710 | specified node, which is a string of the format 711 | 'nodename@hostname'.

712 | 713 |

demonitornode(node)

714 | 715 |

The calling process stops monitoring the specified 716 | node, which is a string of the format 717 | 'nodename@hostname'.

718 | 719 |

Security

720 | 721 |

setcookie(secret)

722 | 723 |

Sets the pre-shared secret key, a 724 | string, also known as the magic cookie, that will be used for node 725 | authentication.

726 | 727 |

getcookie()

728 | 729 |

Returns the pre-shared secret key, also known as 730 | the magic cookie, that is being used for node 731 | authentication.

732 | 733 |
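Judging from the cookie submodule, setcookie() only has an effect after the node has been initialized, so it should be called after init(); all nodes of a virtual network must share the same cookie. For example:

concurrent.init('ping@selene')
concurrent.setcookie('secret')
print(concurrent.getcookie())    --> secret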

Registering

734 | 735 |

register(name, pid)

736 | 737 |

Registers the name string for the 738 | given process pid.

739 | 740 |

unregister(name)

741 | 742 |

Unregisters the process with the name 743 | string.

744 | 745 |

whereis(name)

746 | 747 |

Returns the PID of the process with the registered 748 | name string.

749 | 750 |

registered()

751 | 752 |

Returns a table with all the registered process 753 | names.

754 | 755 |
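A short sketch that ties the registering functions together; the name 'logger' is only illustrative, and pid is assumed to come from an earlier spawn():

concurrent.register('logger', pid)
concurrent.send('logger', { body = 'hello' })    -- names work wherever PIDs do
print(concurrent.whereis('logger'))              -- the PID behind the name
for _, name in pairs(concurrent.registered()) do print(name) end
concurrent.unregister('logger')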

Linking

756 | 757 |

link(process)

758 | 759 |
760 | 761 |

The calling process gets linked with the specified 762 | process, which can be either a PID, a registered name, or a 763 | remote process. A remote process is a table with two elements, the remote 764 | process PID or registered name and the node's name in the format 765 | 'nodename@hostname'.

766 | 767 |

The 'trapexit' option can be set to true, if 768 | exit signals between linked processes are to be trapped.

769 | 770 |
771 | 772 |

unlink(process)

773 | 774 |

The calling process gets unlinked from the 775 | specified process, which can be either a PID, a registered name, 776 | or a remote process. A remote process is a table with two elements, the remote 777 | process PID or registered name and the node's name in the format 778 | 'nodename@hostname'.

779 | 780 |

spawnlink(body, ...)

781 | 782 |
783 | 784 |

Creates a process which will execute the body function and 785 | the calling process also gets linked to the new process. Any extra 786 | arguments can be passed to the executing function. The PID of the new 787 | process is returned. In case of error nil and an error message 788 | are returned.

789 | 790 |

The 'trapexit' option can be set to true, if 791 | exit signals between linked processes are to be trapped.

792 | 793 |
794 | 795 |

spawnlink(node, body, ...)

796 | 797 |
798 | 799 |

Creates a process in a remote node which is a string in the 800 | format 'nodename@hostname', the new process will execute the 801 | body function, and also the calling process gets linked to the 802 | newly created process. The PID of the new process is returned. In case of 803 | error nil and an error message are returned.

804 | 805 |

The 'trapexit' option can be set to true, if exit 806 | signals between linked processes are to be trapped.

807 | 808 |
809 | 810 |

Monitoring

811 | 812 |

monitor(process)

813 | 814 |

The calling process starts monitoring the 815 | specified process, which can be either a PID, a registered name, 816 | or a remote process. A remote process is a table with two elements, the remote 817 | process PID or registered name and the node's name in the format 818 | 'nodename@hostname'.

819 | 820 |

demonitor(process)

821 | 822 |

The calling process stops monitoring the specified 823 | process, which can be either a PID, a registered name, or a 824 | remote process. A remote process is a table with two elements, the remote 825 | process PID or registered name and the node's name in the format 826 | 'nodename@hostname'.

827 | 828 |

spawnmonitor(body, ...)

829 | 830 |

Creates a process which will execute the 831 | body function and the calling process also starts monitoring the new 832 | process. Any extra arguments can be passed to the executing function. The PID 833 | of the new process is returned. In case of error nil and an error 834 | message are returned.

835 | 836 |

spawnmonitor(node, body, ...)

837 | 838 |

Creates a process in a remote node 839 | which is a string in the format 'nodename@hostname', the new 840 | process will execute the body function, and also the calling 841 | process starts monitoring the newly created process. The PID of the new 842 | process is returned. In case of error nil and an error message 843 | are returned.

844 | 845 |
846 | 847 |
848 | 853 |
854 | 855 | 856 | 857 | 858 | 859 | -------------------------------------------------------------------------------- /doc/stylesheet.css: -------------------------------------------------------------------------------- 1 | body { 2 | margin: 2%; 3 | background-color: #ffffff; 4 | } 5 | 6 | img { 7 | border: 0px; 8 | } 9 | 10 | h2 { 11 | margin-left: -12px; 12 | margin-right: -12px; 13 | } 14 | 15 | h3 { 16 | margin-left: -12px; 17 | margin-right: -12px; 18 | } 19 | 20 | h4 { 21 | margin-left: -12px; 22 | margin-right: -12px; 23 | } 24 | 25 | 26 | div.description { 27 | margin-left: 48px; 28 | } 29 | 30 | div.verbatim { 31 | margin-left: 48px; 32 | } 33 | 34 | hr { 35 | color: #cccccc; 36 | background-color: #cccccc; 37 | } 38 | 39 | div.center { 40 | text-align: center 41 | } 42 | 43 | div.navigation { 44 | padding: 8px; 45 | border: 1px solid #cccccc; 46 | background-color: #eeeeee; 47 | } 48 | 49 | div.box { 50 | padding-left: 24px; 51 | padding-right: 24px; 52 | border: 1px solid #cccccc; 53 | } 54 | 55 | a.link:link { 56 | font-weight: bold; 57 | color: #00007f; 58 | text-decoration: none; 59 | } 60 | 61 | a.link:visited { 62 | font-weight: bold; 63 | color: #00007f; 64 | text-decoration: none; 65 | } 66 | 67 | a.link:hover { 68 | font-weight: bold; 69 | color: #00007f; 70 | text-decoration: underline; 71 | } 72 | -------------------------------------------------------------------------------- /samples/example1.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function hello_world(times) 4 | for i = 1, times do print('hello world') end 5 | print('done') 6 | end 7 | 8 | concurrent.spawn(hello_world, 3) 9 | 10 | concurrent.loop() 11 | -------------------------------------------------------------------------------- /samples/example2.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function pong() 4 | while true do 5 | local msg = concurrent.receive() 6 | if msg.body == 'finished' then 7 | break 8 | elseif msg.body == 'ping' then 9 | print('pong received ping') 10 | concurrent.send(msg.from, { body = 'pong' }) 11 | end 12 | end 13 | print('pong finished') 14 | end 15 | 16 | function ping(n, pid) 17 | for i = 1, n do 18 | concurrent.send(pid, { from = concurrent.self(), body = 'ping' }) 19 | local msg = concurrent.receive() 20 | if msg.body == 'pong' then print('ping received pong') end 21 | end 22 | concurrent.send(pid, { from = concurrent.self(), body = 'finished' }) 23 | print('ping finished') 24 | end 25 | 26 | pid = concurrent.spawn(pong) 27 | concurrent.spawn(ping, 3, pid) 28 | 29 | concurrent.loop() 30 | -------------------------------------------------------------------------------- /samples/example3.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function pong() 4 | while true do 5 | local msg = concurrent.receive() 6 | if msg.body == 'finished' then 7 | break 8 | elseif msg.body == 'ping' then 9 | print('pong received ping') 10 | concurrent.send(msg.from, { body = 'pong' }) 11 | end 12 | end 13 | print('pong finished') 14 | end 15 | 16 | function ping(n) 17 | for i = 1, n do 18 | concurrent.send('pong', { from = concurrent.self(), body = 'ping' }) 19 | local msg = concurrent.receive() 20 | if msg.body == 'pong' then print('ping received pong') end 21 | end 22 | concurrent.send('pong', { from = concurrent.self(), body = 'finished' }) 23 
| print('ping finished') 24 | end 25 | 26 | pid = concurrent.spawn(pong) 27 | concurrent.register('pong', pid) 28 | concurrent.spawn(ping, 3) 29 | 30 | concurrent.loop() 31 | -------------------------------------------------------------------------------- /samples/example4a.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function pong() 4 | while true do 5 | local msg = concurrent.receive() 6 | if msg.body == 'finished' then 7 | break 8 | elseif msg.body == 'ping' then 9 | print('pong received ping') 10 | concurrent.send(msg.from, { body = 'pong' }) 11 | end 12 | end 13 | print('pong finished') 14 | end 15 | 16 | concurrent.init('pong@gaia') 17 | 18 | pid = concurrent.spawn(pong) 19 | 20 | concurrent.register('pong', pid) 21 | concurrent.loop() 22 | concurrent.shutdown() 23 | -------------------------------------------------------------------------------- /samples/example4b.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function ping(n) 4 | for i = 1, n do 5 | concurrent.send({ 'pong', 'pong@gaia' }, 6 | { from = { concurrent.self(), concurrent.node() }, 7 | body = 'ping' }) 8 | local msg = concurrent.receive() 9 | if msg.body == 'pong' then print('ping received pong') end 10 | end 11 | concurrent.send({ 'pong', 'pong@gaia' }, 12 | { from = { concurrent.self(), concurrent.node() }, 13 | body = 'finished' }) 14 | print('ping finished') 15 | end 16 | 17 | concurrent.spawn(ping, 3) 18 | 19 | concurrent.init('ping@selene') 20 | concurrent.loop() 21 | concurrent.shutdown() 22 | -------------------------------------------------------------------------------- /samples/example5a.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function ping(n, pid) 4 | concurrent.link(pid) 5 | for i = 1, n do 6 | concurrent.send(pid, { from = concurrent.self(), body = 'ping' }) 7 | local msg = concurrent.receive() 8 | if msg.body == 'pong' then print('ping received pong') end 9 | end 10 | print('ping finished') 11 | concurrent.exit('finished') 12 | end 13 | 14 | function pong() 15 | while true do 16 | local msg = concurrent.receive() 17 | if msg.body == 'ping' then 18 | print('pong received ping') 19 | concurrent.send(msg.from, { body = 'pong' }) 20 | end 21 | end 22 | print('pong finished') 23 | end 24 | 25 | pid = concurrent.spawn(pong) 26 | concurrent.spawn(ping, 3, pid) 27 | 28 | concurrent.loop() 29 | -------------------------------------------------------------------------------- /samples/example5b.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | concurrent.setoption('trapexit', true) 4 | 5 | function pong() 6 | while true do 7 | local msg = concurrent.receive() 8 | if msg.signal == 'EXIT' then 9 | break 10 | elseif msg.body == 'ping' then 11 | print('pong received ping') 12 | concurrent.send(msg.from, { body = 'pong' }) 13 | end 14 | end 15 | print('pong finished') 16 | end 17 | 18 | function ping(n, pid) 19 | concurrent.link(pid) 20 | for i = 1, n do 21 | concurrent.send(pid, { from = concurrent.self(), body = 'ping' }) 22 | local msg = concurrent.receive() 23 | if msg.body == 'pong' then print('ping received pong') end 24 | end 25 | print('ping finished') 26 | concurrent.exit('finished') 27 | end 28 | 29 | pid = concurrent.spawn(pong) 30 | concurrent.spawn(ping, 3, pid) 31 | 
32 | concurrent.loop() 33 | -------------------------------------------------------------------------------- /src/Makefile: -------------------------------------------------------------------------------- 1 | all install uninstall clean: 2 | cd time && $(MAKE) $@ 3 | cd concurrent && $(MAKE) $@ 4 | cd daemon && $(MAKE) $@ 5 | cd clpmd && $(MAKE) $@ 6 | -------------------------------------------------------------------------------- /src/clpmd/Makefile: -------------------------------------------------------------------------------- 1 | DESTDIR = 2 | PREFIX = /usr/local 3 | BINDIR = $(PREFIX)/bin 4 | 5 | BIN = clpmd 6 | 7 | all: $(BIN) 8 | 9 | $(BIN): 10 | 11 | install: all 12 | mkdir -p $(DESTDIR)$(BINDIR) && \ 13 | cp -f $(BIN) $(DESTDIR)$(BINDIR) && \ 14 | chmod 0755 $(DESTDIR)$(BINDIR)/$(BIN) 15 | 16 | uninstall: 17 | cd $(DESTDIR)$(BINDIR) && \ 18 | rm -f $(BIN) 19 | 20 | clean: 21 | rm -f *~ 22 | -------------------------------------------------------------------------------- /src/clpmd/clpmd: -------------------------------------------------------------------------------- 1 | #!/usr/bin/lua 2 | 3 | socket = require 'socket' 4 | copas = require 'copas' 5 | 6 | daemon = require 'concurrent.daemon' 7 | 8 | database = {} 9 | 10 | server = socket.bind('*', 9634) 11 | 12 | function handler(socket) 13 | socket = copas.wrap(socket) 14 | while true do 15 | local data = socket:receive() 16 | if not data then 17 | break 18 | end 19 | 20 | local name, port = string.match(data, '^%+ ([%w_]+@[%w-.]+) (%d+)$') 21 | if name and port and not database[name] then 22 | database[name] = port 23 | print(name .. ' = ' .. port) 24 | end 25 | 26 | local name, port = string.match(data, '^%= ([%w_]+@[%w-.]+) (%d+)$') 27 | if name and port and database[name] then 28 | database[name] = port 29 | print(name .. ' = ' .. port) 30 | end 31 | 32 | local name = string.match(data, '^%- ([%w_]+@[%w-.]+)$') 33 | if name and database[name] then 34 | database[name] = nil 35 | print(name .. ' = 0') 36 | end 37 | 38 | local name = string.match(data, '^%? ([%w_]+@[%w-.]+)$') 39 | if name then 40 | if database[name] then 41 | socket:send(database[name] .. '\r\n') 42 | else 43 | socket:send('0\r\n') 44 | end 45 | end 46 | 47 | if string.find(data, '^%*$') then 48 | local s = '' 49 | for k, v in pairs(database) do 50 | s = s .. k .. '=' .. v .. ',' 51 | end 52 | socket:send(s .. 
'\r\n') 53 | end 54 | end 55 | end 56 | 57 | daemon.daemon() 58 | 59 | copas.addserver(server, handler) 60 | copas.loop() 61 | 62 | -------------------------------------------------------------------------------- /src/concurrent/Makefile: -------------------------------------------------------------------------------- 1 | DESTDIR = 2 | PREFIX = /usr/local 3 | SHAREDIR = $(PREFIX)/share/lua/$(LUAVERSION) 4 | MODDIR = $(SHAREDIR)/concurrent 5 | 6 | LUAVERSION = 5.1 7 | 8 | SHARE = init.lua \ 9 | option.lua \ 10 | process.lua \ 11 | message.lua \ 12 | scheduler.lua \ 13 | register.lua \ 14 | monitor.lua \ 15 | link.lua \ 16 | root.lua 17 | 18 | all: $(SHARE) 19 | 20 | $(SHARE): 21 | 22 | install: $(SHARE) 23 | mkdir -p $(DESTDIR)$(MODDIR) && \ 24 | cp -f $(SHARE) $(DESTDIR)$(MODDIR) && \ 25 | chmod 0644 $(DESTDIR)$(MODDIR)/$(SHARE) 26 | cd distributed && $(MAKE) install 27 | 28 | uninstall: 29 | cd $(DESTDIR)$(MODDIR) && \ 30 | rm -f $(SHARE) 31 | cd distributed && $(MAKE) uninstall 32 | 33 | clean: 34 | rm -f *~ 35 | cd distributed && $(MAKE) clean 36 | -------------------------------------------------------------------------------- /src/concurrent/distributed/Makefile: -------------------------------------------------------------------------------- 1 | DESTDIR = 2 | PREFIX = /usr/local 3 | SHAREDIR = $(PREFIX)/share/lua/$(LUAVERSION) 4 | MODDIR = $(SHAREDIR)/concurrent/distributed 5 | 6 | LUAVERSION = 5.1 7 | 8 | SHARE = network.lua \ 9 | node.lua \ 10 | cookie.lua \ 11 | process.lua \ 12 | message.lua \ 13 | scheduler.lua \ 14 | register.lua \ 15 | link.lua \ 16 | monitor.lua 17 | 18 | all: $(SHARE) 19 | 20 | $(SHARE): 21 | 22 | install: 23 | mkdir -p $(DESTDIR)$(MODDIR) && \ 24 | cp -f $(SHARE) $(DESTDIR)$(MODDIR) && \ 25 | chmod 0644 $(DESTDIR)$(MODDIR)/$(SHARE) 26 | 27 | uninstall: 28 | cd $(DESTDIR)$(MODDIR) && \ 29 | rm -f $(SHARE) 30 | 31 | clean: 32 | rm -f *~ 33 | -------------------------------------------------------------------------------- /src/concurrent/distributed/cookie.lua: -------------------------------------------------------------------------------- 1 | -- Submodule for setting the magic cookie. 2 | local concurrent 3 | 4 | local cookie = {} 5 | 6 | cookie.cookie = nil -- The magic cookie used for authentication. 7 | 8 | -- Sets the magic cookie. 9 | function cookie.setcookie(c) 10 | concurrent = concurrent or require 'concurrent' 11 | if concurrent.node() then cookie.cookie = c end 12 | end 13 | 14 | -- Returns the set magic cookie. 15 | function cookie.getcookie() 16 | return cookie.cookie 17 | end 18 | 19 | return cookie 20 | -------------------------------------------------------------------------------- /src/concurrent/distributed/link.lua: -------------------------------------------------------------------------------- 1 | -- Submodule for linking between distributed processes. 2 | local link = require 'concurrent.link' 3 | local network = require 'concurrent.distributed.network' 4 | local concurrent, process 5 | 6 | -- The existing versions of the linking related functions are renamed. 7 | link._link = link.link 8 | link._spawnlink = link.spawnlink 9 | link._unlink = link.unlink 10 | link._signal = link.signal 11 | 12 | -- Links the calling process with the specified process. If the destination 13 | -- process is local the old renamed version of the function is called, otherwise 14 | -- a linking request is sent to the node where the destination process is 15 | -- executing under. 
16 | function link.link(dest) 17 | concurrent = concurrent or require 'concurrent' 18 | if type(dest) ~= 'table' then 19 | return link._link(concurrent.whereis(dest)) 20 | end 21 | 22 | local s = concurrent.self() 23 | local pid, node = unpack(dest) 24 | if type(link.links[s]) == 'nil' then link.links[s] = {} end 25 | for _, v in pairs(link.links[s]) do 26 | if type(v) == 'table' and pid == v[1] and node == v[2] then return end 27 | end 28 | concurrent.send({ -1, node }, 29 | { subject = 'LINK', 30 | to = { pid = pid }, 31 | from = { pid = s, node = concurrent.node() } }) 32 | table.insert(link.links[s], dest) 33 | end 34 | 35 | -- Handles linking requests from a remote process. 36 | function link.controller_link(msg) 37 | concurrent = concurrent or require 'concurrent' 38 | local pid = concurrent.whereis(msg.to.pid) 39 | if not pid then return end 40 | if type(link.links[pid]) == 'nil' then link.links[pid] = {} end 41 | for _, v in pairs(link.links[pid]) do 42 | if type(v) == 'table' and msg.from.pid == v[1] and 43 | msg.from.node == v[2] then 44 | return 45 | end 46 | end 47 | table.insert(link.links[pid], { msg.from.pid, msg.from.node }) 48 | end 49 | 50 | -- Creates a process either local or remote which is also linked to the calling 51 | -- process. 52 | function link.spawnlink(...) 53 | concurrent = concurrent or require 'concurrent' 54 | local pid, errmsg = concurrent.spawn(...) 55 | if not pid then return nil, errmsg end 56 | concurrent.link(pid) 57 | return pid 58 | end 59 | 60 | -- Unlinks the calling process from the specified process. If the destination 61 | -- process is local the old renamed version of the function is called, otherwise 62 | -- an unlinking request is sent to the node where the destination process is 63 | -- executing under. 64 | function link.unlink(dest) 65 | concurrent = concurrent or require 'concurrent' 66 | if type(dest) ~= 'table' then 67 | return link._unlink(concurrent.whereis(dest)) 68 | end 69 | 70 | local s = concurrent.self() 71 | local pid, node = unpack(dest) 72 | if type(link.links[s]) == 'nil' then return end 73 | for k, v in pairs(link.links[s]) do 74 | if type(v) == 'table' and pid == v[1] and node == v[2] then 75 | table.remove(link.links[s], k) 76 | end 77 | end 78 | concurrent.send({ -1, node }, 79 | { subject = 'UNLINK', 80 | to = { pid = -1 }, 81 | from = { pid = s, node = concurrent.node() } }) 82 | end 83 | 84 | -- Handles unlinking requests from a remote process. 85 | function link.controller_unlink(msg) 86 | concurrent = concurrent or require 'concurrent' 87 | local pid = concurrent.whereis(msg.to.pid) 88 | if not pid then return end 89 | if type(link.links[pid]) == 'nil' then return end 90 | for k, v in pairs(link.links[pid]) do 91 | if type(v) == 'table' and msg.from.pid == v[1] and 92 | msg.from.node == v[2] then 93 | table.remove(link.links[pid], k) 94 | end 95 | end 96 | end 97 | 98 | -- Signals all processes that are linked to processes in a node to which the 99 | -- connection is lost. 100 | function link.signal_all(deadnode) 101 | for k, v in pairs(link.links) do 102 | if v[2] == deadnode then link.signal(k, v, 'noconnection') end 103 | end 104 | end 105 | 106 | -- Signals a single process that is linked to processes in a node to which the 107 | -- connection is lost.
108 | function link.signal(dest, dead, reason) 109 | concurrent = concurrent or require 'concurrent' 110 | if type(dest) ~= 'table' then 111 | return link._signal(concurrent.whereis(dest), dead, reason) 112 | end 113 | 114 | local pid, node = unpack(dest) 115 | concurrent.send({ -1, node }, 116 | { subject = 'EXIT', 117 | to = { pid = pid }, 118 | from = { dead, concurrent.node() }, reason = reason }) 119 | end 120 | 121 | -- Handles exit requests from distributed processes. 122 | function link.controller_exit(msg) 123 | concurrent = concurrent or require 'concurrent' 124 | process = process or require 'concurrent.process' 125 | if not concurrent.getoption('trapexit') then 126 | process.kill(concurrent.whereis(msg.to.pid), msg.reason) 127 | else 128 | concurrent.send(msg.to.pid, { signal = 'EXIT', 129 | from = msg.from, 130 | reason = msg.reason }) 131 | end 132 | end 133 | 134 | -- Controllers to handle link, unlink and exit requests. 135 | network.controllers['LINK'] = link.controller_link 136 | network.controllers['UNLINK'] = link.controller_unlink 137 | network.controllers['EXIT'] = link.controller_exit 138 | 139 | -- Signals all processes linked to processes in a node to which the connection 140 | -- is lost. 141 | table.insert(network.onfailure, link.signal_all) 142 | 143 | return link 144 | -------------------------------------------------------------------------------- /src/concurrent/distributed/message.lua: -------------------------------------------------------------------------------- 1 | -- Submodule for sending messages to remote processes. 2 | local mime = require 'mime' 3 | 4 | local message = require 'concurrent.message' 5 | local concurrent, network 6 | 7 | -- The existing version of this function for message sending is renamed. 8 | message._send = message.send 9 | 10 | -- Sends a message to local or remote processes. If the process is local the 11 | -- old renamed version of this function is used, otherwise the message is send 12 | -- through the network. The message is serialized and the magic cookie is also 13 | -- attached before sent. Returns true for success and false otherwise. 14 | function message.send(dest, mesg) 15 | concurrent = concurrent or require 'concurrent' 16 | network = network or require 'concurrent.distributed.network' 17 | if type(dest) ~= 'table' then 18 | return message._send(concurrent.whereis(dest), mesg) 19 | end 20 | 21 | local pid, node = unpack(dest) 22 | local socket = network.connect(node) 23 | if not socket then return false end 24 | 25 | local data 26 | if concurrent.getcookie() then 27 | data = concurrent.getcookie() .. ' ' .. tostring(pid) .. ' ' .. 28 | message.serialize(mesg) .. '\r\n' 29 | else 30 | data = tostring(pid) .. ' ' .. message.serialize(mesg) .. '\r\n' 31 | end 32 | local total = #data 33 | repeat 34 | local n, errmsg, _ = socket:send(data, total - #data + 1) 35 | if not n and errmsg == 'closed' then 36 | network.disconnect(node) 37 | return false 38 | end 39 | total = total - n 40 | until total == 0 41 | if concurrent.getoption('debug') then 42 | print('-> ' .. string.sub(data, 1, #data - 2)) 43 | end 44 | return true 45 | end 46 | 47 | -- Serializes an object that can be any of: nil, boolean, number, string, table, 48 | -- function. Returns the serialized object. 
49 | function message.serialize(obj) 50 | local t = type(obj) 51 | if t == 'nil' or t == 'boolean' or t == 'number' then 52 | return tostring(obj) 53 | elseif t == 'string' then 54 | return string.format("%q", obj) 55 | elseif t == 'function' then 56 | return 'loadstring((mime.unb64([[' .. (mime.b64(string.dump(obj))) .. 57 | ']])))' 58 | elseif t == 'table' then 59 | local t = '{' 60 | for k, v in pairs(obj) do 61 | if type(k) == 'number' or type(k) == 'boolean' then 62 | t = t .. ' [' .. tostring(k) .. '] = ' .. 63 | message.serialize(v) .. ',' 64 | else 65 | t = t .. ' ["' .. tostring(k) .. '"] = ' .. 66 | message.serialize(v) .. ',' 67 | end 68 | end 69 | t = t .. ' }' 70 | return t 71 | else 72 | error('cannot serialize a ' .. t) 73 | end 74 | end 75 | 76 | return message 77 | -------------------------------------------------------------------------------- /src/concurrent/distributed/monitor.lua: -------------------------------------------------------------------------------- 1 | -- Submodule for monitoring of distributed processes. 2 | local monitor = require 'concurrent.monitor' 3 | local network = require 'concurrent.distributed.network' 4 | local concurrent 5 | 6 | -- The existing versions of the monitoring related functions are renamed. 7 | monitor._monitor = monitor.monitor 8 | monitor._spawnmonitor = monitor.spawnmonitor 9 | monitor._demonitor = monitor.demonitor 10 | monitor._notify = monitor.notify 11 | 12 | -- Starts monitoring the specified process. If the destination process is local 13 | -- the old renamed version of the function is called, otherwise a monitor 14 | -- request is sent to the node where the destination process is executing under. 15 | function monitor.monitor(dest) 16 | concurrent = concurrent or require 'concurrent' 17 | if type(dest) ~= 'table' then 18 | return monitor._monitor(concurrent.whereis(dest)) 19 | end 20 | 21 | local s = concurrent.self() 22 | local pid, node = unpack(dest) 23 | concurrent.send({ -1, node }, { subject = 'MONITOR', to = { pid = pid }, 24 | from = { pid = s, node = concurrent.node() } }) 25 | end 26 | 27 | -- Handles monitor requests from a remote process. 28 | function monitor.controller_monitor(msg) 29 | concurrent = concurrent or require 'concurrent' 30 | local pid = concurrent.whereis(msg.to.pid) 31 | if not pid then 32 | return 33 | end 34 | if type(monitor.monitors[pid]) == 'nil' then 35 | monitor.monitors[pid] = {} 36 | end 37 | for _, v in pairs(monitor.monitors[pid]) do 38 | if type(v) == 'table' and msg.from.pid == v[1] and 39 | msg.from.node == v[2] then 40 | return 41 | end 42 | end 43 | table.insert(monitor.monitors[pid], { msg.from.pid, msg.from.node }) 44 | end 45 | 46 | -- Creates a process either local or remote which is also monitored by the 47 | -- calling process. 48 | function monitor.spawnmonitor(...) 49 | concurrent = concurrent or require 'concurrent' 50 | local pid, errmsg = concurrent.spawn(...) 51 | if not pid then 52 | return nil, errmsg 53 | end 54 | concurrent.monitor(pid) 55 | return pid 56 | end 57 | 58 | -- Stops monitoring the specified process. If the destination process is local 59 | -- the old version of the function is called, otherwise a demonitor request is 60 | -- sent to the node where the destination process is executing under. 
61 | function monitor.demonitor(dest)
62 |     concurrent = concurrent or require 'concurrent'
63 |     if type(dest) ~= 'table' then
64 |         return monitor._demonitor(concurrent.whereis(dest))
65 |     end
66 | 
67 |     local s = concurrent.self()
68 |     local pid, node = unpack(dest)
69 |     concurrent.send({ -1, node }, { subject = 'DEMONITOR', to = { pid = pid },
70 |                     from = { pid = s, node = concurrent.node() } })
71 | end
72 | 
73 | -- Handles demonitor requests from a remote process.
74 | function monitor.controller_demonitor(msg)
75 |     concurrent = concurrent or require 'concurrent'
76 |     local pid = concurrent.whereis(msg.to.pid)
77 |     if not pid then
78 |         return
79 |     end
80 |     if type(monitor.monitors[pid]) == 'nil' then
81 |         return
82 |     end
83 |     for k, v in pairs(monitor.monitors[pid]) do
84 |         if type(v) == 'table' and msg.from.pid == v[1] and
85 |            msg.from.node == v[2] then
86 |             table.remove(monitor.monitors[pid], k)
87 |         end
88 |     end
89 | end
90 | 
91 | -- Notifies all processes that are monitoring processes in a node to which the
92 | -- connection is lost.
93 | function monitor.notify_all(deadnode)
94 |     for k, v in pairs(monitor.monitors) do
95 |         if v[2] == deadnode then
96 |             monitor.notify(k, v, 'noconnection')
97 |         end
98 |     end
99 | end
100 | 
101 | -- Notifies a single process that is monitoring processes in a node to which
102 | -- the connection is lost.
103 | function monitor.notify(dest, dead, reason)
104 |     concurrent = concurrent or require 'concurrent'
105 |     if type(dest) ~= 'table' then
106 |         return monitor._notify(concurrent.whereis(dest), dead, reason)
107 |     end
108 | 
109 |     concurrent.send(dest, { signal = 'DOWN', from = { dead,
110 |                     concurrent.node() }, reason = reason })
111 | end
112 | 
113 | -- Controllers to handle monitor and demonitor requests.
114 | network.controllers['MONITOR'] = monitor.controller_monitor
115 | network.controllers['DEMONITOR'] = monitor.controller_demonitor
116 | 
117 | -- Notifies all processes that are monitoring processes in a node to which the
118 | -- connection is lost.
119 | table.insert(network.onfailure, monitor.notify_all)
120 | 
121 | return monitor
122 | 
--------------------------------------------------------------------------------
/src/concurrent/distributed/network.lua:
--------------------------------------------------------------------------------
1 | -- Submodule for handling all networking operations between nodes.
2 | local socket = require 'socket'
3 | local copas = require 'copas'
4 | local mime = require 'mime'
5 | 
6 | local time = require 'concurrent.time'
7 | local process = require 'concurrent.process'
8 | local message = require 'concurrent.message'
9 | local option = require 'concurrent.option'
10 | local concurrent, register, scheduler
11 | 
12 | local network = {}
13 | 
14 | network.nodename = nil -- The node's unique name.
15 | 
16 | network.connections = {} -- Active connections to other nodes.
17 | 
18 | network.controllers = {} -- Functions that handle incoming requests.
19 | 
20 | network.onfailure = {} -- Functions to execute on node failure.
21 | 
22 | process.processes[-1] = -1 -- The node is a process with PID of -1.
23 | message.mailboxes[-1] = {} -- The mailbox of the node.
24 | 
25 | option.options.shortnames = false -- Use short names instead of FQDNs.
26 | option.options.connectall = true -- All nodes fully connected.
27 | option.options.keepalive = false -- Keep the connections alive.
28 | option.options.keepalivetimeout = 60 * 1000 -- Keep alive timeout.
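-- For example, a node that should use short names and keep its connections
-- alive could set, before initialization:
--
--     concurrent.setoption('shortnames', true)
--     concurrent.setoption('keepalive', true)
--     concurrent.setoption('keepalivetimeout', 30 * 1000)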
29 | 
30 | -- Connects to a node by first finding out the port that the destination node
31 | -- is listening on, then establishing the connection and sending an initial
32 | -- handshake message that contains useful information about nodes. Returns a
33 | -- socket to the destination node.
34 | function network.connect(url)
35 |     concurrent = concurrent or require 'concurrent'
36 |     register = register or require 'concurrent.register'
37 |     local node, host = string.match(url, '^(%a[%w_]*)@(.+)$')
38 |     if not node or not host then return end
39 | 
40 |     if network.connections[url] then return network.connections[url] end
41 | 
42 |     local pmd = socket.connect(host, 9634)
43 |     if not pmd then return end
44 |     pmd:send('? ' .. url .. '\r\n')
45 |     local port = pmd:receive()
46 |     pmd:shutdown('both')
47 | 
48 |     if port then
49 |         local client = socket.connect(host, tonumber(port))
50 |         if not client then return end
51 | 
52 |         network.connections[url] = client
53 | 
54 |         concurrent.send({ -1, url }, { subject = 'HELLO',
55 |                                        from = { node = network.nodename },
56 |                                        nodes = concurrent.nodes(),
57 |                                        names = register.names })
58 | 
59 |         if concurrent.getoption('keepalive') then
60 |             process.spawn_system(network.keepalive_process, url)
61 |         end
62 | 
63 |         return client
64 |     end
65 | end
66 | 
67 | -- Continuously sends echo messages to a node and waits for echo replies. If
68 | -- no reply is received, the connection to that node is closed.
69 | function network.keepalive_process(name)
70 |     scheduler = scheduler or require 'concurrent.scheduler'
71 |     local timeouts = scheduler.timeouts
72 |     local timeout = concurrent.getoption('keepalivetimeout')
73 | 
74 |     while true do
75 |         local timer = time.time() + timeout
76 | 
77 |         if not network.connections[name] then break end
78 | 
79 |         if not concurrent.send({ -1, name },
80 |                                { subject = 'ECHO',
81 |                                  from = { pid = concurrent.self(),
82 |                                           node = concurrent.node() }
83 |                                })
84 |         then
85 |             break
86 |         end
87 | 
88 |         local msg = concurrent.receive(timeout)
89 |         if not msg then break end
90 | 
91 |         local diff = timer - time.time()
92 |         if diff > 0 then scheduler.sleep(diff) end
93 |     end
94 |     network.disconnect(name)
95 | end
96 | 
97 | -- Handles echo requests by sending back an echo reply.
98 | function network.controller_echo(msg)
99 |     concurrent = concurrent or require 'concurrent'
100 |     concurrent.send({ msg.from.pid, msg.from.node }, 'ECHO')
101 | end
102 | 
103 | -- Handles handshake messages by making use of the information that the
104 | -- connecting node sent, such as other known nodes and registered process
105 | -- names.
106 | function network.controller_hello(msg)
107 |     concurrent = concurrent or require 'concurrent'
108 |     register = register or require 'concurrent.register'
109 |     network.connect(msg.from.node)
110 |     if concurrent.getoption('connectall') then
111 |         for _, v in ipairs(msg.nodes) do
112 |             if v ~= concurrent.node() then network.connect(v) end
113 |         end
114 |         for k, v in pairs(msg.names) do
115 |             if not concurrent.whereis(k) then
116 |                 register._register(k, v)
117 |             else
118 |                 register._unregister(k)
119 |             end
120 |         end
121 |     end
122 | end
123 | 
124 | -- Disconnects from a node.
125 | function network.disconnect(url)
126 |     if not network.connections[url] then
127 |         return
128 |     end
129 |     network.connections[url]:shutdown('both')
130 |     network.connections[url] = nil
131 | 
132 |     for _, v in ipairs(network.onfailure) do v(url) end
133 | end
134 | 
135 | -- Handles bye messages by closing the connection to the source node.
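-- A bye message, as sent by network.shutdown(), has the form (node name
-- illustrative):
--
--     { subject = 'BYE', from = 'somenode@host.domain' }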
136 | function network.controller_bye(msg)
137 |     network.disconnect(msg.from)
138 | end
139 | 
140 | -- Main socket handler for any incoming data; it waits for data, checks
141 | -- whether the data is prefixed with the correct magic cookie, and then
142 | -- deserializes the message and forwards it to its recipient.
143 | function network.handler(socket)
144 |     concurrent = concurrent or require 'concurrent'
145 |     local s = copas.wrap(socket)
146 |     while true do
147 |         local data = s:receive()
148 |         if not data then break end
149 | 
150 |         if concurrent.getoption('debug') then print('<- ' .. data) end
151 | 
152 |         local recipient, mesg
153 |         if concurrent.getcookie() then
154 |             recipient, mesg = string.match(data, '^' ..
155 |                                            concurrent.getcookie() ..
156 |                                            ' ([%w%-_]+) (.+)$')
157 |         else
158 |             recipient, mesg = string.match(data, '^([%w%-_]+) (.+)$')
159 |         end
160 |         if recipient and mesg then
161 |             if tonumber(recipient) then
162 |                 recipient = tonumber(recipient)
163 |             end
164 |             local func = loadstring('return ' .. mesg)
165 |             if func then
166 |                 if pcall(func) then concurrent.send(recipient, func()) end
167 |             end
168 |         end
169 |     end
170 | end
171 | 
172 | -- Checks for and handles messages sent to the node itself, based on the
173 | -- controllers that have been defined.
174 | function network.controller()
175 |     while #message.mailboxes[-1] > 0 do
176 |         local msg = table.remove(message.mailboxes[-1], 1)
177 |         if network.controllers[msg.subject] then
178 |             network.controllers[msg.subject](msg)
179 |         end
180 |     end
181 | end
182 | 
183 | -- Returns the fully qualified domain name of the calling node.
184 | function network.getfqdn()
185 |     local hostname = socket.dns.gethostname()
186 |     local _, resolver = socket.dns.toip(hostname)
187 |     local fqdn
188 |     for _, v in pairs(resolver.ip) do
189 |         fqdn, _ = socket.dns.tohostname(v)
190 |         if string.find(fqdn, '%w+%.%w+') then break end
191 |     end
192 |     return fqdn
193 | end
194 | 
195 | -- Returns the short name of the calling node.
196 | function network.gethost()
197 |     return socket.dns.gethostname()
198 | end
199 | 
200 | -- Returns the node's name along with the fully qualified domain name.
201 | function network.hostname(node)
202 |     return network.dispatcher(node .. '@' .. network.getfqdn())
203 | end
204 | 
205 | -- Returns the node's name along with the short name.
206 | function network.shortname(node)
207 |     return network.dispatcher(node .. '@' .. network.gethost())
208 | end
209 | 
210 | -- Initializes a node.
211 | function network.init(node)
212 |     concurrent = concurrent or require 'concurrent'
213 |     if string.find(node, '@') then
214 |         return network.dispatcher(node)
215 |     else
216 |         if concurrent.getoption('shortnames') then
217 |             return network.shortname(node)
218 |         else
219 |             return network.hostname(node)
220 |         end
221 |     end
222 | end
223 | 
224 | -- The dispatcher takes care of the main networking operations during the
225 | -- initialization of a node. It creates a port to listen on for data,
226 | -- registers this port with the local port mapper daemon, sets the node's
227 | -- name, converts registered names to distributed form and adds a handler
228 | -- for any incoming data. Returns true if successful or false
229 | -- otherwise.
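-- The exchange with the port mapper daemon on port 9634 is line-based; for a
-- node named 'foo@host.domain' that was assigned the listening port 32768 it
-- would look roughly like:
--
--     + foo@host.domain 32768        (register the listening port)
--     ? foo@host.domain              (query; the daemon answers '32768')
--     = foo@host.domain 32768        (overwrite an existing registration)
--     - foo@host.domain              (unregister, on shutdown)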
230 | function network.dispatcher(name)
231 |     register = register or require 'concurrent.register'
232 |     local node, host = string.match(name, '^(%a[%w_]*)@(.+)$')
233 | 
234 |     local server = socket.bind('*', 0)
235 |     local _, port = server:getsockname()
236 | 
237 |     local client = socket.connect('127.0.0.1', 9634)
238 |     if not client then return false end
239 |     local answer
240 |     client:send('+ ' .. name .. ' ' .. port .. '\r\n')
241 |     client:send('? ' .. name .. '\r\n')
242 |     answer = client:receive()
243 |     if answer ~= tostring(port) then
244 |         client:send('= ' .. name .. ' ' .. port .. '\r\n')
245 |         client:send('? ' .. name .. '\r\n')
246 |         answer = client:receive()
247 |         if answer ~= tostring(port) then return false end
248 |     end
249 |     client:shutdown('both')
250 | 
251 |     network.nodename = name
252 | 
253 |     for n, p in pairs(register.names) do
254 |         if type(p) == 'number' then
255 |             register.names[n] = { p, network.nodename }
256 |         end
257 |     end
258 | 
259 |     copas.addserver(server, network.handler)
260 | 
261 |     return true
262 | end
263 | 
264 | -- Shuts down a node by unregistering the node's listening port from the port
265 | -- mapper daemon, closing all of its active connections to other nodes, and
266 | -- converting the registered names back to local form.
267 | function network.shutdown()
268 |     concurrent = concurrent or require 'concurrent'
269 |     register = register or require 'concurrent.register'
270 |     if not concurrent.node() then return true end
271 | 
272 |     local client = socket.connect('127.0.0.1', 9634)
273 |     if not client then return false end
274 |     client:send('- ' .. concurrent.node() .. '\r\n')
275 |     client:shutdown('both')
276 | 
277 |     for k, _ in pairs(network.connections) do
278 |         concurrent.send({ -1, k }, { subject = 'BYE',
279 |                                      from = concurrent.node() })
280 |         network.disconnect(k)
281 |     end
282 | 
283 |     for n, pid in pairs(register.names) do
284 |         if type(pid) == 'table' then
285 |             local p = unpack(pid)
286 |             register.names[n] = p
287 |         end
288 |     end
289 | 
290 |     network.nodename = nil
291 | 
292 |     return true
293 | end
294 | 
295 | -- Controllers to handle messages between the nodes.
296 | network.controllers['HELLO'] = network.controller_hello
297 | network.controllers['ECHO'] = network.controller_echo
298 | network.controllers['BYE'] = network.controller_bye
299 | 
300 | return network
301 | 
--------------------------------------------------------------------------------
/src/concurrent/distributed/node.lua:
--------------------------------------------------------------------------------
1 | -- Submodule for node related operations.
2 | local network = require 'concurrent.distributed.network'
3 | local concurrent
4 | 
5 | local node = {}
6 | 
7 | node.nodemonitors = {} -- Processes monitoring nodes.
8 | 
9 | -- Returns the node's name.
10 | function node.node()
11 |     return network.nodename
12 | end
13 | 
14 | -- Returns a table with the names of the nodes that the node is connected to.
15 | function node.nodes()
16 |     local t = {}
17 |     for k, _ in pairs(network.connections) do table.insert(t, k) end
18 |     return t
19 | end
20 | 
21 | -- Returns true if the node has been initialized or false otherwise.
22 | function node.isnodealive()
23 |     concurrent = concurrent or require 'concurrent'
24 |     return concurrent.node() ~= nil
25 | end
26 | 
27 | -- Starts monitoring the specified node.
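-- A process that calls, e.g., concurrent.monitornode('other@host.domain')
-- receives a message of the form
--
--     { signal = 'NODEDOWN', from = { ... }, reason = 'noconnection' }
--
-- if the connection to that node is lost (node name illustrative).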
28 | function node.monitornode(name)
29 |     concurrent = concurrent or require 'concurrent'
30 |     local s = concurrent.self()
31 |     if not node.nodemonitors[s] then node.nodemonitors[s] = {} end
32 |     table.insert(node.nodemonitors[s], name)
33 | end
34 | 
35 | -- Stops monitoring the specified node.
36 | function node.demonitornode(name)
37 |     concurrent = concurrent or require 'concurrent'
38 |     local s = concurrent.self()
39 |     if not node.nodemonitors[s] then return end
40 |     for k, v in pairs(node.nodemonitors[s]) do
41 |         if name == v then table.remove(node.nodemonitors[s], k) end
42 |     end
43 | end
44 | 
45 | -- Notifies all the monitoring processes about the status change of a node.
46 | function node.notify_all(deadnode)
47 |     for k, v in pairs(node.nodemonitors) do
48 |         for l, w in pairs(v) do
49 |             if w == deadnode then node.notify(k, w, 'noconnection') end
50 |         end
51 |     end
52 | end
53 | 
54 | -- Notifies a single process about the status of a node.
55 | function node.notify(dest, deadnode, reason)
56 |     concurrent = concurrent or require 'concurrent'
57 |     concurrent.send(dest, { signal = 'NODEDOWN',
58 |                             from = { deadnode, concurrent.node() },
59 |                             reason = reason })
60 | end
61 | 
62 | -- Monitoring processes should be notified when the connection with a node is
63 | -- lost.
64 | table.insert(network.onfailure, node.notify_all)
65 | 
66 | return node
67 | 
--------------------------------------------------------------------------------
/src/concurrent/distributed/process.lua:
--------------------------------------------------------------------------------
1 | -- Submodule for distributed processes.
2 | local process = require 'concurrent.process'
3 | local network = require 'concurrent.distributed.network'
4 | local concurrent, scheduler, message
5 | 
6 | process.last = -1 -- Counter for the last auxiliary process.
7 | 
8 | -- The existing version of this function for process creation is renamed.
9 | process._spawn = process.spawn
10 | 
11 | -- Creates a process, either local or remote. If the process is a local process
12 | -- the old renamed version of the function is used, otherwise an auxiliary
13 | -- system process takes care of the creation of a remote process. Returns
14 | -- either the local or the remote PID of the newly created process.
15 | function process.spawn(...)
16 |     concurrent = concurrent or require 'concurrent'
17 |     scheduler = scheduler or require 'concurrent.scheduler'
18 |     local args = { ... }
19 |     if type(args[1]) == 'function' then return process._spawn(unpack(args)) end
20 | 
21 |     local node = args[1]
22 |     table.remove(args, 1)
23 |     local func = args[1]
24 |     table.remove(args, 1)
25 | 
26 |     local pid, errmsg = process.spawn_system(process.spawn_process,
27 |                                              concurrent.self(),
28 |                                              node, func, args)
29 |     local msg = scheduler.wait()
30 |     if not msg.pid then return nil, msg.errmsg end
31 |     return { msg.pid, node }
32 | end
33 | 
34 | -- Auxiliary system process that creates a remote process.
35 | function process.spawn_process(parent, node, func, args)
36 |     concurrent = concurrent or require 'concurrent'
37 |     scheduler = scheduler or require 'concurrent.scheduler'
38 |     concurrent.send({ -1, node },
39 |                     { subject = 'SPAWN',
40 |                       from = { pid = concurrent.self(),
41 |                                node = concurrent.node() },
42 |                       func = func,
43 |                       args = args })
44 |     local msg = concurrent.receive()
45 |     scheduler.barriers[parent] = msg
46 | end
47 | 
48 | -- Handles spawn requests from a remote node.
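-- A spawn request, as sent by process.spawn_process() above, has the form:
--
--     { subject = 'SPAWN',
--       from = { pid = <pid>, node = <node> },
--       func = <serialized function>,
--       args = <table of arguments> }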
49 | function process.controller_spawn(msg)
50 |     concurrent = concurrent or require 'concurrent'
51 |     local func = loadstring('return ' .. msg.func)
52 |     if func then
53 |         local pid, errmsg = concurrent.spawn(func(), unpack(msg.args))
54 |         concurrent.send({ msg.from.pid, msg.from.node },
55 |                         { pid = pid, errmsg = errmsg })
56 |     end
57 | end
58 | 
59 | -- Creates auxiliary system processes, which are mostly similar to normal
60 | -- processes, but have a negative number as a PID and lack certain capabilities.
61 | function process.spawn_system(func, ...)
62 |     message = message or require 'concurrent.message'
63 |     scheduler = scheduler or require 'concurrent.scheduler'
64 |     local co = coroutine.create(
65 |         function (...)
66 |             coroutine.yield()
67 |             func(...)
68 |         end
69 |     )
70 | 
71 |     process.last = process.last - 1
72 |     local pid = process.last
73 | 
74 |     process.processes[pid] = co
75 |     message.mailboxes[pid] = {}
76 |     scheduler.timeouts[pid] = 0
77 | 
78 |     local status, errmsg = process.resume(co, ...)
79 |     if not status then return nil, errmsg end
80 |     return pid
81 | end
82 | 
83 | -- Controller to handle spawn requests.
84 | network.controllers['SPAWN'] = process.controller_spawn
85 | 
86 | return process
87 | 
--------------------------------------------------------------------------------
/src/concurrent/distributed/register.lua:
--------------------------------------------------------------------------------
1 | -- Submodule for process name registering in distributed mode.
2 | local time = require 'concurrent.time'
3 | local register = require 'concurrent.register'
4 | local option = require 'concurrent.option'
5 | local process = require 'concurrent.process'
6 | local network = require 'concurrent.distributed.network'
7 | local concurrent, scheduler
8 | 
9 | register.nameslocks = {} -- Locking during registration negotiations.
10 | 
11 | option.options.registertimeout = 10 * 1000 -- Registration timeout.
12 | option.options.registerlocktimeout = 30 * 1000 -- Lock timeout.
13 | 
14 | -- The existing versions of the functions for process registering are renamed.
15 | register._register = register.register
16 | register._unregister = register.unregister
17 | register._whereis = register.whereis
18 | 
19 | -- Registers a PID with the specified name. If the process is local the old
20 | -- renamed version of the function is called, otherwise an auxiliary system
21 | -- process is created to negotiate the name with the rest of the nodes.
22 | -- Returns true if successful or false otherwise.
23 | function register.register(name, pid)
24 |     concurrent = concurrent or require 'concurrent'
25 |     scheduler = scheduler or require 'concurrent.scheduler'
26 |     if not concurrent.node() or not concurrent.getoption('connectall') then
27 |         return register._register(name, pid)
28 |     end
29 | 
30 |     if concurrent.whereis(name) then return false end
31 |     if not pid then pid = concurrent.self() end
32 |     if #concurrent.nodes() == 0 then
33 |         register.names[name] = { pid, concurrent.node() }
34 |         return true
35 |     end
36 |     process.spawn_system(register.register_process, concurrent.self(), name,
37 |                          pid)
38 |     local msg = scheduler.wait()
39 |     if msg.status then register.names[name] = { pid, concurrent.node() } end
40 |     return msg.status, msg.errmsg
41 | end
42 | 
43 | -- The auxiliary system process that negotiates on registering a name with the
44 | -- rest of the nodes. The negotiation is based on a two-phase commit protocol.
45 | -- The coordinator role is played by the node that the register request
46 | -- originated from. First the coordinator asks all the nodes to lock the
47 | -- specific name, and if this succeeds, a commit message is then sent
48 | -- to all the nodes.
49 | function register.register_process(parent, name, pid)
50 |     concurrent = concurrent or require 'concurrent'
51 |     scheduler = scheduler or require 'concurrent.scheduler'
52 |     local locks = {}
53 |     local commits = {}
54 |     local n = 0
55 | 
56 |     for k, _ in pairs(network.connections) do
57 |         locks[k] = false
58 |         commits[k] = false
59 |         n = n + 1
60 |     end
61 | 
62 |     for k, _ in pairs(network.connections) do
63 |         concurrent.send({ -1, k },
64 |                         { subject = 'REGISTER',
65 |                           phase = 'LOCK',
66 |                           from = { pid = concurrent.self(),
67 |                                    node = concurrent.node() },
68 |                           name = name,
69 |                           pid = pid,
70 |                           node = concurrent.node() })
71 |     end
72 | 
73 |     local i = 0
74 |     local timer = time.time() + concurrent.getoption('registertimeout')
75 |     repeat
76 |         local msg = concurrent.receive(timer - time.time())
77 |         if msg and msg.phase == 'LOCK' then
78 |             locks[msg.from.node] = true
79 |             i = i + 1
80 |         end
81 |     until time.time() >= timer or i >= n
82 | 
83 |     for _, v in pairs(locks) do
84 |         if not v then
85 |             scheduler.barriers[parent] = { status = false,
86 |                                            errmsg = 'lock failed' }
87 |             return
88 |         end
89 |     end
90 | 
91 |     for k, _ in pairs(network.connections) do
92 |         concurrent.send({ -1, k },
93 |                         { subject = 'REGISTER',
94 |                           phase = 'COMMIT',
95 |                           from = { pid = concurrent.self(),
96 |                                    node = concurrent.node() },
97 |                           name = name,
98 |                           pid = pid,
99 |                           node = concurrent.node() })
100 |     end
101 | 
102 |     local i = 0
103 |     local timer = time.time() + concurrent.getoption('registertimeout')
104 |     repeat
105 |         local msg = concurrent.receive(timer - time.time())
106 |         if msg and msg.phase == 'COMMIT' then
107 |             commits[msg.from.node] = true
108 |             i = i + 1
109 |         end
110 |     until time.time() >= timer or i >= n
111 | 
112 |     for _, v in pairs(commits) do
113 |         if not v then
114 |             scheduler.barriers[parent] = { status = false,
115 |                                            errmsg = 'commit failed' }
116 |             return
117 |         end
118 |     end
119 | 
120 |     scheduler.barriers[parent] = { status = true }
121 | end
122 | 
123 | -- Handles register requests in distributed mode.
124 | function register.controller_register(msg)
125 |     concurrent = concurrent or require 'concurrent'
126 |     if msg.phase == 'LOCK' then
127 |         if not concurrent.whereis(msg.name) and
128 |            (not register.nameslocks[msg.name] or
129 |             time.time() - register.nameslocks[msg.name]['stamp'] <
130 |             concurrent.getoption('registerlocktimeout'))
131 |         then
132 |             register.nameslocks[msg.name] = { pid = msg.pid, node = msg.node,
133 |                                               stamp = time.time() }
134 |             concurrent.send({ msg.from.pid, msg.from.node },
135 |                             { phase = 'LOCK',
136 |                               from = { node = concurrent.node() } })
137 |         end
138 |     elseif msg.phase == 'COMMIT' then
139 |         if register.nameslocks[msg.name] and
140 |            register.nameslocks[msg.name]['pid'] == msg.pid and
141 |            register.nameslocks[msg.name]['node'] == msg.node
142 |         then
143 |             register._register(msg.name, { msg.pid, msg.node })
144 |             concurrent.send({ msg.from.pid, msg.from.node },
145 |                             { phase = 'COMMIT',
146 |                               from = { node = concurrent.node() } })
147 |             register.nameslocks[msg.name] = nil
148 |         end
149 |     end
150 | end
151 | 
152 | -- Unregisters a PID with the specified name. If the process is local the old
153 | -- renamed version of the function is called, otherwise an auxiliary system
154 | -- process is created to negotiate the name with the rest of the nodes.
155 | -- Returns true if successful or false otherwise.
156 | function register.unregister(name)
157 |     concurrent = concurrent or require 'concurrent'
158 |     scheduler = scheduler or require 'concurrent.scheduler'
159 | 
160 |     if not concurrent.node() or not concurrent.getoption('connectall') then
161 |         return register._unregister(name)
162 |     end
163 | 
164 |     for k, v in pairs(register.names) do
165 |         if name == k and concurrent.node() == v[2] then
166 |             if #concurrent.nodes() == 0 then
167 |                 register.names[name] = nil
168 |                 return
169 |             end
170 |             process.spawn_system(register.unregister_process,
171 |                                  concurrent.self(), k)
172 |             local msg = scheduler.wait()
173 |             if msg.status then register.names[name] = nil end
174 |             return msg.status, msg.errmsg
175 |         end
176 |     end
177 | end
178 | 
179 | -- The auxiliary system process that negotiates on unregistering a name with
180 | -- the rest of the nodes. The negotiation is similar to the register operation.
181 | function register.unregister_process(parent, name)
182 |     concurrent = concurrent or require 'concurrent'
183 |     scheduler = scheduler or require 'concurrent.scheduler'
184 |     local locks = {}
185 |     local commits = {}
186 |     local n = 0
187 | 
188 |     for k, _ in pairs(network.connections) do
189 |         locks[k] = false
190 |         commits[k] = false
191 |         n = n + 1
192 |     end
193 | 
194 |     for k, _ in pairs(network.connections) do
195 |         concurrent.send({ -1, k },
196 |                         { subject = 'UNREGISTER',
197 |                           phase = 'LOCK',
198 |                           from = { pid = concurrent.self(),
199 |                                    node = concurrent.node() },
200 |                           name = name })
201 |     end
202 | 
203 |     local i = 0
204 |     local timer = time.time() + concurrent.getoption('registertimeout')
205 |     repeat
206 |         local msg = concurrent.receive(timer - time.time())
207 |         if msg and msg.phase == 'LOCK' then
208 |             locks[msg.from.node] = true
209 |             i = i + 1
210 |         end
211 |     until time.time() >= timer or i >= n
212 | 
213 |     for _, v in pairs(locks) do
214 |         if not v then
215 |             scheduler.barriers[parent] = { status = false,
216 |                                            errmsg = 'lock failed' }
217 |             return
218 |         end
219 |     end
220 | 
221 |     for k, _ in pairs(network.connections) do
222 |         concurrent.send({ -1, k },
223 |                         { subject = 'UNREGISTER',
224 |                           phase = 'COMMIT',
225 |                           from = { pid = concurrent.self(),
226 |                                    node = concurrent.node() },
227 |                           name = name })
228 |     end
229 | 
230 |     local i = 0
231 |     local timer = time.time() + concurrent.getoption('registertimeout')
232 |     repeat
233 |         local msg = concurrent.receive(timer - time.time())
234 |         if msg and msg.phase == 'COMMIT' then
235 |             commits[msg.from.node] = true
236 |             i = i + 1
237 |         end
238 |     until time.time() >= timer or i >= n
239 | 
240 |     for _, v in pairs(commits) do
241 |         if not v then
242 |             scheduler.barriers[parent] = { status = false,
243 |                                            errmsg = 'commit failed' }
244 |             return
245 |         end
246 |     end
247 | 
248 |     scheduler.barriers[parent] = { status = true }
249 | end
250 | 
251 | -- Handles unregister requests in distributed mode.
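-- The negotiation messages handled here carry the phase 'LOCK' or 'COMMIT'
-- and have the form:
--
--     { subject = 'UNREGISTER', phase = 'LOCK',
--       from = { pid = <pid>, node = <node> }, name = <name> }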
252 | function register.controller_unregister(msg) 253 | concurrent = concurrent or require 'concurrent' 254 | if msg.phase == 'LOCK' then 255 | if concurrent.whereis(msg.name) and 256 | (not register.nameslocks[msg.name] or 257 | time.time() - register.nameslocks[msg.name]['stamp'] < 258 | concurrent.getoption('registerlocktimeout')) 259 | then 260 | register.nameslocks[msg.name] = { pid = msg.pid, node = msg.node, 261 | stamp = time.time() } 262 | concurrent.send({ msg.from.pid, msg.from.node }, 263 | { phase = 'LOCK', 264 | from = { node = concurrent.node() } }) 265 | end 266 | elseif msg.phase == 'COMMIT' then 267 | if register.nameslocks[msg.name] and 268 | register.nameslocks[msg.name]['pid'] == msg.pid and 269 | register.nameslocks[msg.name]['node'] == msg.node 270 | then 271 | register._unregister(msg.name) 272 | concurrent.send({ msg.from.pid, msg.from.node }, 273 | { phase = 'COMMIT', 274 | from = { node = concurrent.node() } }) 275 | register.nameslocks[msg.name] = nil 276 | end 277 | end 278 | end 279 | 280 | 281 | -- Deletes all registered names from processes in a node to which the connection 282 | -- is lost. 283 | function register.delete_all(deadnode) 284 | for k, v in pairs(register.names) do 285 | if type(v) == 'table' and v[2] == deadnode then register.delete(k) end 286 | end 287 | end 288 | 289 | -- Deletes a single registered name from processes in a node to which the 290 | -- connection is lost. 291 | function register.delete(name) 292 | register.names[name] = nil 293 | end 294 | 295 | -- Returns the PID of the process specified by its registered name. If the 296 | -- system is not in distributed mode or not fully connected, the old renamed 297 | -- version of the function is called. 298 | function register.whereis(name) 299 | concurrent = concurrent or require 'concurrent' 300 | if not concurrent.node() or not concurrent.getoption('connectall') then 301 | return register._whereis(name) 302 | end 303 | 304 | if type(name) == 'number' then return name end 305 | if not register.names[name] then return end 306 | if register.names[name][2] == concurrent.node() then 307 | return register.names[name][1] 308 | end 309 | return register.names[name] 310 | end 311 | 312 | -- Controllers to handle register and unregister requests. 313 | network.controllers['REGISTER'] = register.controller_register 314 | network.controllers['UNREGISTER'] = register.controller_unregister 315 | 316 | -- Overwrites the old unregister functions for terminated and aborted processes 317 | -- with the new versions of these functions. 318 | for k, v in ipairs(process.ondeath) do 319 | if v == register._unregister then 320 | process.ondeath[k] = register.unregister 321 | end 322 | end 323 | for k, v in ipairs(process.ondestruction) do 324 | if v == register._unregister then 325 | process.ondestruction[k] = register.unregister 326 | end 327 | end 328 | 329 | -- Deletes all registered names from processes in a node to which the 330 | -- connection is lost. 331 | table.insert(network.onfailure, register.delete_all) 332 | 333 | return register 334 | -------------------------------------------------------------------------------- /src/concurrent/distributed/scheduler.lua: -------------------------------------------------------------------------------- 1 | -- Submodule for the scheduling of processes in a distributed node. 
2 | local socket = require 'socket'
3 | local copas = require 'copas'
4 | 
5 | local time = require 'concurrent.time'
6 | local scheduler = require 'concurrent.scheduler'
7 | local concurrent, message, network
8 | 
9 | -- The existing versions of these functions for the scheduler's operation are
10 | -- renamed.
11 | scheduler._step = scheduler.step
12 | scheduler._tick = scheduler.tick
13 | scheduler._loop = scheduler.loop
14 | 
15 | -- In addition to the operations performed for local processes, the mailbox of
16 | -- the node itself is checked and any handlers are called to take care of the
17 | -- messages.
18 | function scheduler.step(timeout)
19 |     message = message or require 'concurrent.message'
20 |     network = network or require 'concurrent.distributed.network'
21 |     if #message.mailboxes[-1] > 0 then network.controller() end
22 |     return scheduler._step(timeout)
23 | end
24 | 
25 | -- Instead of calling the system's old tick function, one that also considers
26 | -- networking is called.
27 | function scheduler.tick()
28 |     concurrent = concurrent or require 'concurrent'
29 |     copas.step(concurrent.getoption('tick') / 1000)
30 | end
31 | 
32 | -- Infinite or finite loop for the scheduler of a node in distributed mode.
33 | function scheduler.loop(timeout)
34 |     concurrent = concurrent or require 'concurrent'
35 |     if not concurrent.node() then return scheduler._loop(timeout) end
36 |     if timeout then
37 |         local timer = time.time() + timeout
38 |         while concurrent.step(timeout) and concurrent.node() and
39 |               not scheduler.stop and timer > time.time()
40 |         do
41 |             concurrent.tick()
42 |         end
43 |     else
44 |         while concurrent.step(timeout) and concurrent.node() and
45 |               not scheduler.stop
46 |         do
47 |             concurrent.tick()
48 |         end
49 |     end
50 |     scheduler.stop = false
51 | end
52 | 
53 | return scheduler
54 | 
--------------------------------------------------------------------------------
/src/concurrent/init.lua:
--------------------------------------------------------------------------------
1 | -- Main module for concurrent programming that loads all the submodules.
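-- The module writes nothing to the global environment; the table built below
-- is returned by require() and has to be stored in a variable of choice:
--
--     local concurrent = require 'concurrent'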
2 | local concurrent = {} 3 | 4 | local mod 5 | 6 | mod = require 'concurrent.option' 7 | concurrent.setoption = mod.setoption 8 | concurrent.getoption = mod.getoption 9 | 10 | mod = require 'concurrent.process' 11 | concurrent.spawn = mod.spawn 12 | concurrent.self = mod.self 13 | concurrent.isalive = mod.isalive 14 | concurrent.exit = mod.exit 15 | concurrent.whereis = mod.whereis 16 | 17 | mod = require 'concurrent.message' 18 | concurrent.send = mod.send 19 | concurrent.receive = mod.receive 20 | 21 | mod = require 'concurrent.scheduler' 22 | concurrent.step = mod.step 23 | concurrent.tick = mod.tick 24 | concurrent.loop = mod.loop 25 | concurrent.interrupt = mod.interrupt 26 | concurrent.sleep = mod.sleep 27 | 28 | mod = require 'concurrent.register' 29 | concurrent.register = mod.register 30 | concurrent.unregister = mod.unregister 31 | concurrent.registered = mod.registered 32 | concurrent.whereis = mod.whereis 33 | 34 | mod = require 'concurrent.link' 35 | concurrent.link = mod.link 36 | concurrent.unlink = mod.unlink 37 | concurrent.spawnlink = mod.spawnlink 38 | 39 | mod = require 'concurrent.monitor' 40 | concurrent.monitor = mod.monitor 41 | concurrent.demonitor = mod.demonitor 42 | concurrent.spawnmonitor = mod.spawnmonitor 43 | 44 | mod = require 'concurrent.root' 45 | concurrent.self = mod.self 46 | concurrent.isalive = mod.isalive 47 | 48 | mod = require 'concurrent.distributed.network' 49 | concurrent.init = mod.init 50 | concurrent.shutdown = mod.shutdown 51 | 52 | mod = require 'concurrent.distributed.node' 53 | concurrent.node = mod.node 54 | concurrent.nodes = mod.nodes 55 | concurrent.isnodealive = mod.isnodealive 56 | concurrent.monitornode = mod.monitornode 57 | concurrent.demonitornode = mod.demonitornode 58 | 59 | mod = require 'concurrent.distributed.cookie' 60 | concurrent.setcookie = mod.setcookie 61 | concurrent.getcookie = mod.getcookie 62 | 63 | mod = require 'concurrent.distributed.process' 64 | concurrent.spawn = mod.spawn 65 | 66 | mod = require 'concurrent.distributed.message' 67 | concurrent.send = mod.send 68 | 69 | mod = require 'concurrent.distributed.scheduler' 70 | concurrent.step = mod.step 71 | concurrent.tick = mod.tick 72 | concurrent.loop = mod.loop 73 | 74 | mod = require 'concurrent.distributed.register' 75 | concurrent.register = mod.register 76 | concurrent.unregister = mod.unregister 77 | concurrent.whereis = mod.whereis 78 | 79 | mod = require 'concurrent.distributed.link' 80 | concurrent.link = mod.link 81 | concurrent.spawnlink = mod.spawnlink 82 | concurrent.unlink = mod.unlink 83 | 84 | mod = require 'concurrent.distributed.monitor' 85 | concurrent.monitor = mod.monitor 86 | concurrent.spawnmonitor = mod.spawnmonitor 87 | concurrent.demonitor = mod.demonitor 88 | 89 | return concurrent 90 | -------------------------------------------------------------------------------- /src/concurrent/link.lua: -------------------------------------------------------------------------------- 1 | --Submodule for process linking. 2 | local option = require 'concurrent.option' 3 | local process = require 'concurrent.process' 4 | local concurrent 5 | 6 | local link = {} 7 | 8 | link.links = {} -- Active links between processes. 9 | 10 | option.options.trapexit = false -- Option to trap exit signals. 11 | 12 | -- The calling process is linked with the specified process. 
13 | function link.link(dest)
14 |     concurrent = concurrent or require 'concurrent'
15 |     local t = type(dest)
16 |     local s = concurrent.self()
17 |     local pid = concurrent.whereis(dest)
18 |     if not pid then return end
19 |     if type(link.links[s]) == 'nil' then link.links[s] = {} end
20 |     if type(link.links[pid]) == 'nil' then link.links[pid] = {} end
21 |     for _, v in pairs(link.links[s]) do
22 |         if pid == v then return end
23 |     end
24 |     table.insert(link.links[s], pid)
25 |     table.insert(link.links[pid], s)
26 | end
27 | 
28 | -- Creates a new process which is also linked to the calling process.
29 | function link.spawnlink(...)
30 |     concurrent = concurrent or require 'concurrent'
31 |     local pid, errmsg = concurrent.spawn(...)
32 |     if not pid then return nil, errmsg end
33 |     concurrent.link(pid)
34 |     return pid
35 | end
36 | 
37 | -- The calling process is unlinked from the specified process.
38 | function link.unlink(dest)
39 |     concurrent = concurrent or require 'concurrent'
40 |     local t = type(dest)
41 |     local s = concurrent.self()
42 |     local pid = concurrent.whereis(dest)
43 |     if not pid then return end
44 |     if type(link.links[s]) == 'nil' or type(link.links[pid]) == 'nil' then
45 |         return
46 |     end
47 |     for key, value in pairs(link.links[s]) do
48 |         if pid == value then link.links[s][key] = nil end
49 |     end
50 |     for key, value in pairs(link.links[pid]) do
51 |         if s == value then link.links[pid][key] = nil end
52 |     end
53 | end
54 | 
55 | -- Unlinks the calling process from all other processes.
56 | function link.unlink_all()
57 |     concurrent = concurrent or require 'concurrent'
58 |     local s = concurrent.self()
59 |     if type(link.links[s]) == 'nil' then return end
60 |     for _, v in pairs(link.links[s]) do concurrent.unlink(v) end
61 |     link.links[s] = nil
62 | end
63 | 
64 | -- Signals all the linked processes due to an abnormal exit of a process.
65 | function link.signal_all(dead, reason)
66 |     if type(link.links[dead]) == 'nil' then return end
67 |     for _, v in pairs(link.links[dead]) do link.signal(v, dead, reason) end
68 |     link.links[dead] = nil
69 | end
70 | 
71 | -- Signals a single process due to an abnormal exit of a process.
72 | function link.signal(dest, dead, reason)
73 |     concurrent = concurrent or require 'concurrent'
74 |     if not concurrent.getoption('trapexit') then
75 |         process.kill(dest, reason)
76 |     else
77 |         concurrent.send(dest, { signal = 'EXIT', from = dead, reason = reason })
78 |     end
79 | end
80 | 
81 | -- Processes that are linked to terminated or aborted processes should be
82 | -- signaled.
83 | table.insert(process.ondeath, link.signal_all)
84 | table.insert(process.ondestruction, link.unlink_all)
85 | 
86 | return link
87 | 
--------------------------------------------------------------------------------
/src/concurrent/message.lua:
--------------------------------------------------------------------------------
1 | -- Submodule for passing messages.
2 | local time = require 'concurrent.time'
3 | local concurrent, scheduler
4 | 
5 | local message = {}
6 | 
7 | message.mailboxes = {} -- Mailboxes associated with processes.
8 | 
9 | -- Sends a message to a process; in practice, it inserts the message into the
10 | -- destination mailbox. Returns true if successful and false otherwise.
11 | function message.send(dest, mesg)
12 |     concurrent = concurrent or require 'concurrent'
13 |     local pid = concurrent.whereis(dest)
14 |     if not pid then return false end
15 |     table.insert(message.mailboxes[pid], mesg)
16 |     return true
17 | end
18 | 
19 | -- Receives the oldest unread message. If the mailbox is empty, it waits until
20 | -- the specified timeout has expired.
21 | function message.receive(timeout)
22 |     concurrent = concurrent or require 'concurrent'
23 |     scheduler = scheduler or require 'concurrent.scheduler'
24 |     local timeouts = scheduler.timeouts
25 |     local s = concurrent.self()
26 |     if type(timeout) == 'number' then timeouts[s] = time.time() + timeout end
27 |     if #message.mailboxes[s] == 0 then scheduler.sleep(timeout) end
28 |     if #message.mailboxes[s] > 0 then
29 |         return table.remove(message.mailboxes[s], 1)
30 |     end
31 | end
32 | 
33 | return message
34 | 
--------------------------------------------------------------------------------
/src/concurrent/monitor.lua:
--------------------------------------------------------------------------------
1 | -- Submodule for process monitoring.
2 | local process = require 'concurrent.process'
3 | local concurrent
4 | 
5 | local monitor = {}
6 | 
7 | monitor.monitors = {} -- Active monitors between processes.
8 | 
9 | -- The calling process starts monitoring the specified process.
10 | function monitor.monitor(dest)
11 |     concurrent = concurrent or require 'concurrent'
12 |     local s = concurrent.self()
13 |     local pid = concurrent.whereis(dest)
14 |     if not pid then return end
15 |     if type(monitor.monitors[pid]) == 'nil' then monitor.monitors[pid] = {} end
16 |     for _, v in pairs(monitor.monitors[pid]) do if s == v then return end end
17 |     table.insert(monitor.monitors[pid], s)
18 | end
19 | 
20 | -- Creates a new process which is also monitored by the calling process.
21 | function monitor.spawnmonitor(...)
22 |     concurrent = concurrent or require 'concurrent'
23 |     local pid, errmsg = concurrent.spawn(...)
24 |     if not pid then return nil, errmsg end
25 |     concurrent.monitor(pid)
26 |     return pid
27 | end
28 | 
29 | -- The calling process stops monitoring the specified process.
30 | function monitor.demonitor(dest)
31 |     concurrent = concurrent or require 'concurrent'
32 |     local s = concurrent.self()
33 |     local pid = concurrent.whereis(dest)
34 |     if not pid then return end
35 |     if type(monitor.monitors[pid]) == 'nil' then return end
36 |     for key, value in pairs(monitor.monitors[pid]) do
37 |         if s == value then
38 |             monitor.monitors[pid][key] = nil
39 |             return
40 |         end
41 |     end
42 | end
43 | 
44 | -- Notifies all the monitoring processes about the status change of the
45 | -- specified process.
46 | function monitor.notify_all(dead, reason)
47 |     if type(monitor.monitors[dead]) == 'nil' then return end
48 |     for _, v in pairs(monitor.monitors[dead]) do
49 |         monitor.notify(v, dead, reason)
50 |     end
51 |     monitor.monitors[dead] = nil
52 | end
53 | 
54 | -- Notifies a single process about the status change of the specified process.
55 | function monitor.notify(dest, dead, reason)
56 |     concurrent = concurrent or require 'concurrent'
57 |     concurrent.send(dest, { signal = 'DOWN', from = dead, reason = reason })
58 | end
59 | 
60 | -- Processes that monitor terminated or aborted processes should be notified.
61 | table.insert(process.ondeath, monitor.notify_all)
62 | table.insert(process.ondestruction, monitor.notify_all)
63 | 
64 | return monitor
65 | 
--------------------------------------------------------------------------------
/src/concurrent/option.lua:
--------------------------------------------------------------------------------
1 | -- Submodule for setting the system's options.
2 | local option = {}
3 | 
4 | option.options = {} -- System options.
5 | 
6 | option.options.debug = false -- Sets printing of debugging messages.
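-- When this option is enabled, the networking code prints every message that
-- crosses the wire, prefixed with '->' or '<-':
--
--     concurrent.setoption('debug', true)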
7 | 8 | -- Returns the value of the option. 9 | function option.getoption(key) 10 | return option.options[key] 11 | end 12 | 13 | -- Sets the value of the option. 14 | function option.setoption(key, value) 15 | option.options[key] = value 16 | end 17 | 18 | return option 19 | -------------------------------------------------------------------------------- /src/concurrent/process.lua: -------------------------------------------------------------------------------- 1 | -- Submodule for creating and destroying processes. 2 | local concurrent, message, scheduler 3 | 4 | local process = {} 5 | 6 | process.processes = {} -- All the processes in the system. 7 | 8 | process.ondeath = {} -- Functions to execute on abnormal exit. 9 | process.ondestruction = {} -- Functions to execute on termination. 10 | 11 | -- Creates a process and its mailbox, and initializes its sleep timeout to be 12 | -- used by the scheduler. Returns a PID or in case of error nil and an 13 | -- error message. 14 | function process.spawn(func, ...) 15 | message = message or require 'concurrent.message' 16 | scheduler = scheduler or require 'concurrent.scheduler' 17 | local co = coroutine.create( 18 | function (...) 19 | coroutine.yield() 20 | func(...) 21 | process.destroy() 22 | end) 23 | table.insert(process.processes, co) 24 | local pid = #process.processes 25 | message.mailboxes[pid] = {} 26 | scheduler.timeouts[pid] = 0 27 | local status, errmsg = process.resume(co, ...) 28 | if not status then return nil, errmsg end 29 | return pid 30 | end 31 | 32 | -- Resumes a suspended process. Returns its status and any coroutine related 33 | -- error messages. 34 | function process.resume(co, ...) 35 | if type(co) ~= 'thread' or coroutine.status(co) ~= 'suspended' then 36 | return 37 | end 38 | local status, errmsg = coroutine.resume(co, ...) 39 | if not status then 40 | local pid = process.whois(co) 41 | process.die(pid, errmsg) 42 | end 43 | return status, errmsg 44 | end 45 | 46 | -- Returns the PID of the calling process. 47 | function process.self() 48 | local co = coroutine.running() 49 | if co then return process.whois(co) end 50 | end 51 | 52 | -- Returns the PID of the specified coroutine. 53 | function process.whois(co) 54 | for k, v in pairs(process.processes) do 55 | if v == co then return k end 56 | end 57 | end 58 | 59 | -- Returns the status of a specific process, that can be either alive or dead. 60 | function process.isalive(pid) 61 | local co = process.processes[pid] 62 | if co and type(co) == 'thread' and coroutine.status(co) ~= 'dead' then 63 | return true 64 | else 65 | return false 66 | end 67 | end 68 | 69 | -- Causes abnormal exit of the calling process. 70 | function process.exit(reason) 71 | error(reason, 0) 72 | end 73 | 74 | -- Terminates the specified process. 75 | function process.kill(pid, reason) 76 | if type(process.processes[pid]) == 'thread' and 77 | coroutine.status(process.processes[pid]) == 'suspended' 78 | then 79 | local status, errmsg = coroutine.resume(process.processes[pid], 'EXIT') 80 | process.die(pid, errmsg) 81 | end 82 | end 83 | 84 | -- Executes the functions registered to be run upon process termination. 85 | function process.destroy() 86 | concurrent = concurrent or require 'concurrent' 87 | for _, v in ipairs(process.ondestruction) do 88 | v(concurrent.self(), 'normal') 89 | end 90 | end 91 | 92 | -- Executes the functions registered to be run upon process abnormal exit. 
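-- For example, a hypothetical cleanup hook could be registered with:
--
--     table.insert(process.ondeath,
--                  function (pid, reason)
--                      print('process ' .. tostring(pid) .. ' died: ' ..
--                            tostring(reason))
--                  end)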
93 | function process.die(pid, reason) 94 | for _, v in ipairs(process.ondeath) do v(pid, reason) end 95 | end 96 | 97 | -- Returns the PID of a process. 98 | function process.whereis(pid) 99 | return pid 100 | end 101 | 102 | return process 103 | -------------------------------------------------------------------------------- /src/concurrent/register.lua: -------------------------------------------------------------------------------- 1 | -- Submodule for process name registering. 2 | local process = require 'concurrent.process' 3 | local concurrent 4 | 5 | local register = {} 6 | 7 | register.names = {} -- Process names and PIDs associative table. 8 | 9 | -- Registers a PID with the specified name. Returns true if successful or false 10 | -- otherwise. 11 | function register.register(name, pid) 12 | concurrent = concurrent or require 'concurrent' 13 | if concurrent.whereis(name) then return false end 14 | if not pid then pid = concurrent.self() end 15 | register.names[name] = pid 16 | return true 17 | end 18 | 19 | -- Unregisters the specified process name. Returns true if successful or 20 | -- false otherwise. 21 | function register.unregister(name) 22 | concurrent = concurrent or require 'concurrent' 23 | if not name then name = concurrent.self() end 24 | for k, v in pairs(register.names) do 25 | if name == k or name == v then 26 | register.names[k] = nil 27 | return true 28 | end 29 | end 30 | return false 31 | end 32 | 33 | -- Returns a table with the names of all the registered processes. 34 | function register.registered() 35 | local n = {} 36 | for k, _ in pairs(register.names) do table.insert(n, k) end 37 | return n 38 | end 39 | 40 | -- Returns the PID of the process specified by its registered name. 41 | function register.whereis(name) 42 | if type(name) == 'number' then return name end 43 | if not register.names[name] then return end 44 | return register.names[name] 45 | end 46 | 47 | -- Terminated or aborted processes should not be registered anymore. 48 | table.insert(process.ondeath, register.unregister) 49 | table.insert(process.ondestruction, register.unregister) 50 | 51 | return register 52 | -------------------------------------------------------------------------------- /src/concurrent/root.lua: -------------------------------------------------------------------------------- 1 | -- Submodule for emulating the control of a script as a process. 2 | local time = require 'concurrent.time' 3 | local process = require 'concurrent.process' 4 | local message = require 'concurrent.message' 5 | local scheduler = require 'concurrent.scheduler' 6 | local concurrent 7 | 8 | process.processes[0] = 0 -- Root process has PID of 0. 9 | message.mailboxes[0] = {} -- Root process mailbox. 10 | 11 | -- The existing versions of these functions are renamed before replacing them. 12 | process._self = process.self 13 | process._isalive = process.isalive 14 | scheduler._wait_yield = scheduler.wait_yield 15 | scheduler._sleep_yield = scheduler.sleep_yield 16 | 17 | -- Returns 0 if the process is not a coroutine. 18 | function process.self() 19 | return process._self() or 0 20 | end 21 | 22 | -- The root process is always alive. 23 | function process.isalive(pid) 24 | if pid ~= 0 then return process._isalive(pid) end 25 | return true 26 | end 27 | 28 | -- Special care must be taken if the root process is blocked. 
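-- Since the root process is not a coroutine and cannot yield, the loops below
-- instead drive the scheduler directly, by repeatedly calling step() and
-- tick() until the barrier is set or the sleep timeout expires.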
29 | function scheduler.wait_yield()
30 |     concurrent = concurrent or require 'concurrent'
31 |     local s = concurrent.self()
32 | 
33 |     if s ~= 0 then return scheduler._wait_yield() end
34 | 
35 |     while true do
36 |         if scheduler.barriers[s] then break end
37 |         concurrent.step()
38 |         concurrent.tick()
39 |     end
40 | end
41 | 
42 | -- Special care must be taken if the root process is sleeping.
43 | function scheduler.sleep_yield()
44 |     concurrent = concurrent or require 'concurrent'
45 |     local timeouts = scheduler.timeouts
46 |     local mailboxes = message.mailboxes
47 |     local s = concurrent.self()
48 | 
49 |     if s ~= 0 then return scheduler._sleep_yield() end
50 | 
51 |     while true do
52 |         if #mailboxes[s] > 0 then break end
53 |         if timeouts[s] and time.time() - timeouts[s] >= 0 then
54 |             timeouts[s] = nil
55 |             return
56 |         end
57 |         concurrent.step()
58 |         concurrent.tick()
59 |     end
60 | end
61 | 
62 | return process
63 | 
--------------------------------------------------------------------------------
/src/concurrent/scheduler.lua:
--------------------------------------------------------------------------------
1 | -- Submodule for scheduling processes.
2 | local time = require 'concurrent.time'
3 | local option = require 'concurrent.option'
4 | local concurrent, process, message
5 | 
6 | local scheduler = {}
7 | 
8 | scheduler.timeouts = {} -- Timeouts for processes that are suspended.
9 | scheduler.barriers = {} -- Barriers for blocked processes.
10 | 
11 | scheduler.stop = false -- Flag to interrupt the scheduler.
12 | 
13 | option.options.tick = 10 -- Scheduler clock tick in milliseconds.
14 | 
15 | -- Performs a step of the scheduler's operations. Resumes processes that are no
16 | -- longer blocked, and then resumes processes that are waiting for a message
17 | -- and have one available. If all processes are dead, it instructs the
18 | -- scheduler loop about it.
19 | function scheduler.step(timeout)
20 |     process = process or require 'concurrent.process'
21 |     message = message or require 'concurrent.message'
22 | 
23 |     for k, v in pairs(scheduler.barriers) do
24 |         if v then process.resume(process.processes[k]) end
25 |     end
26 | 
27 |     for k, v in pairs(process.processes) do
28 |         if #message.mailboxes[k] > 0 or
29 |            (scheduler.timeouts[k] and time.time() - scheduler.timeouts[k] >= 0)
30 |         then
31 |             if scheduler.timeouts[k] then scheduler.timeouts[k] = nil end
32 |             if type(scheduler.barriers[k]) == 'nil' then process.resume(v) end
33 |         end
34 |     end
35 | 
36 |     if not timeout then
37 |         local alive = false
38 |         for _, v in ipairs(process.processes) do
39 |             if coroutine.status(v) ~= 'dead' then alive = true end
40 |         end
41 |         if not alive then return false end
42 |     end
43 | 
44 |     return true
45 | end
46 | 
47 | -- Advances the system clock by a tick.
48 | function scheduler.tick()
49 |     concurrent = concurrent or require 'concurrent'
50 |     time.sleep(concurrent.getoption('tick'))
51 | end
52 | 
53 | -- Infinite or finite loop of the scheduler. Continuously performs a scheduler
54 | -- step and advances the system clock by a tick. Checks for scheduler
55 | -- interrupts, or for a hint that all processes are dead.
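-- For example, concurrent.loop() runs until every spawned process has
-- finished, while concurrent.loop(5000) returns after roughly five seconds
-- even if some processes are still alive.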
56 | function scheduler.loop(timeout)
57 |     concurrent = concurrent or require 'concurrent'
58 |     if timeout then
59 |         local timer = time.time() + timeout
60 |         while concurrent.step(timeout) and not scheduler.stop and
61 |               timer > time.time()
62 |         do
63 |             concurrent.tick()
64 |         end
65 |     else
66 |         while concurrent.step(timeout) and not scheduler.stop do
67 |             concurrent.tick()
68 |         end
69 |     end
70 |     scheduler.stop = false
71 | end
72 | 
73 | -- Raises the flag to cause a scheduler interrupt.
74 | function scheduler.interrupt()
75 |     scheduler.stop = true
76 | end
77 | 
78 | -- Sets a barrier for the calling process.
79 | function scheduler.wait()
80 |     concurrent = concurrent or require 'concurrent'
81 |     local s = concurrent.self()
82 |     if not scheduler.barriers[s] then
83 |         scheduler.barriers[s] = false
84 |         scheduler.wait_yield()
85 |     end
86 |     local r = scheduler.barriers[s]
87 |     scheduler.barriers[s] = nil
88 |     return r
89 | end
90 | 
91 | -- Actions to be performed during a wait yield.
92 | function scheduler.wait_yield()
93 |     scheduler.yield()
94 | end
95 | 
96 | -- Sets a sleep timeout for the calling process.
97 | function scheduler.sleep(timeout)
98 |     concurrent = concurrent or require 'concurrent'
99 |     local s = concurrent.self()
100 |     if timeout then scheduler.timeouts[s] = time.time() + timeout end
101 |     scheduler.sleep_yield()
102 |     if timeout then scheduler.timeouts[s] = nil end
103 | end
104 | 
105 | -- Actions to be performed during a sleep yield.
106 | function scheduler.sleep_yield()
107 |     scheduler.yield()
108 | end
109 | 
110 | -- Yields a process, but first checks if the process is exiting intentionally.
111 | function scheduler.yield()
112 |     if coroutine.yield() == 'EXIT' then error('EXIT', 0) end
113 | end
114 | 
115 | return scheduler
116 | 
--------------------------------------------------------------------------------
/src/daemon/Makefile:
--------------------------------------------------------------------------------
1 | DESTDIR =
2 | PREFIX = /usr/local
3 | LIBDIR = $(PREFIX)/lib/lua/$(LUAVERSION)/concurrent
4 | 
5 | LUAVERSION = 5.1
6 | 
7 | MYCFLAGS =
8 | MYLDFLAGS =
9 | MYLIBS =
10 | 
11 | INCDIRS =
12 | LIBDIRS =
13 | 
14 | LIBLUA = -llua
15 | 
16 | CFLAGS = -Wall -O -fpic $(INCDIRS) $(MYCFLAGS)
17 | LDFLAGS = -shared -fpic $(LIBDIRS) $(MYLDFLAGS)
18 | LIBS = $(LIBLUA) $(MYLIBS)
19 | 
20 | LIB = daemon.so
21 | OBJ = daemon.o
22 | 
23 | all: $(LIB)
24 | 
25 | $(LIB): $(OBJ)
26 | 	$(CC) -o $(LIB) $(LDFLAGS) $(OBJ) $(LIBS)
27 | 
28 | $(OBJ):
29 | 
30 | install: all
31 | 	mkdir -p $(DESTDIR)$(LIBDIR) && \
32 | 	cp -f $(LIB) $(DESTDIR)$(LIBDIR)/$(LIB) && \
33 | 	chmod 0755 $(DESTDIR)$(LIBDIR)/$(LIB)
34 | 
35 | uninstall:
36 | 	cd $(DESTDIR)$(LIBDIR) && \
37 | 	rm -f $(LIB)
38 | 
39 | clean:
40 | 	rm -f $(OBJ) $(LIB) *~
41 | 
--------------------------------------------------------------------------------
/src/daemon/daemon.c:
--------------------------------------------------------------------------------
1 | #ifndef _WIN32
2 | #include <sys/types.h>
3 | #include <errno.h>
4 | #include <fcntl.h>
5 | #include <stdio.h>
6 | #include <stdlib.h>
7 | #include <string.h>
8 | #include <unistd.h>
9 | #endif
10 | 
11 | #include <lua.h>
12 | #include <lualib.h>
13 | #include <lauxlib.h>
14 | 
15 | /*
16 |  * Implements the BSD daemon() functionality, which turns a process into
17 |  * a daemon.
18 |  */
19 | static int
20 | daemon_daemon(lua_State *lua)
21 | {
22 | 
23 | #ifndef _WIN32
24 | 	switch (fork()) {
25 | 	case -1:
26 | 		fprintf(stderr, "forking; %s\n", strerror(errno));
27 | 		lua_pushboolean(lua, 0);
28 | 		return 1;
29 | 		/* NOTREACHED */
30 | 		break;
31 | 	case 0:
32 | 		break;
33 | 	default:
34 | 		exit(0);
35 | 		/* NOTREACHED */
36 | 		break;
37 | 	}
38 | 
39 | 	if (setsid() == -1) {
40 | 		fprintf(stderr, "creating session; %s\n", strerror(errno));
41 | 	}
42 | 
43 | 	chdir("/");
44 | 
45 | 	close(STDIN_FILENO);
46 | 	close(STDOUT_FILENO);
47 | 	close(STDERR_FILENO);
48 | 
49 | 	if (open("/dev/null", O_RDWR) == -1 ||
50 | 	    dup(STDIN_FILENO) == -1 ||
51 | 	    dup(STDIN_FILENO) == -1)
52 | 		fprintf(stderr, "duplicating file descriptors; %s\n",
53 | 		    strerror(errno));
54 | #endif
55 | 
56 | 	return 0;
57 | }
58 | 
59 | /* The daemon library. */
60 | static const luaL_Reg lib[] = {
61 | 	{ "daemon", daemon_daemon },
62 | 	{ NULL, NULL }
63 | };
64 | 
65 | /*
66 |  * Opens the daemon library.
67 |  */
68 | LUALIB_API int
69 | luaopen_concurrent_daemon(lua_State *lua)
70 | {
71 | 
72 | #if LUA_VERSION_NUM < 502
73 | 	luaL_register(lua, "daemon", lib);
74 | #else
75 | 	luaL_newlib(lua, lib);
76 | #endif
77 | 	return 1;
78 | }
79 | 
--------------------------------------------------------------------------------
/src/time/Makefile:
--------------------------------------------------------------------------------
1 | DESTDIR =
2 | PREFIX = /usr/local
3 | LIBDIR = $(PREFIX)/lib/lua/$(LUAVERSION)/concurrent
4 | 
5 | LUAVERSION = 5.1
6 | 
7 | MYCFLAGS =
8 | MYLDFLAGS =
9 | MYLIBS =
10 | 
11 | INCDIRS =
12 | LIBDIRS =
13 | 
14 | LIBLUA = -llua
15 | 
16 | CFLAGS = -Wall -O -fpic $(INCDIRS) $(MYCFLAGS)
17 | LDFLAGS = -shared -fpic $(LIBDIRS) $(MYLDFLAGS)
18 | LIBS = $(LIBLUA) $(MYLIBS)
19 | 
20 | LIB = time.so
21 | OBJ = time.o
22 | 
23 | all: $(LIB)
24 | 
25 | $(LIB): $(OBJ)
26 | 	$(CC) -o $(LIB) $(LDFLAGS) $(OBJ) $(LIBS)
27 | 
28 | $(OBJ):
29 | 
30 | install: all
31 | 	mkdir -p $(DESTDIR)$(LIBDIR) && \
32 | 	cp -f $(LIB) $(DESTDIR)$(LIBDIR)/$(LIB) && \
33 | 	chmod 0755 $(DESTDIR)$(LIBDIR)/$(LIB)
34 | 
35 | uninstall:
36 | 	cd $(DESTDIR)$(LIBDIR) && \
37 | 	rm -f $(LIB)
38 | 
39 | clean:
40 | 	rm -f $(OBJ) $(LIB) *~
41 | 
--------------------------------------------------------------------------------
/src/time/time.c:
--------------------------------------------------------------------------------
1 | #ifdef _WIN32
2 | #include <windows.h>
3 | #else
4 | #include <sys/time.h>
5 | #include <time.h>
6 | #include <unistd.h>
7 | #endif
8 | 
9 | #include <lua.h>
10 | #include <lualib.h>
11 | #include <lauxlib.h>
12 | 
13 | /*
14 |  * Returns the time elapsed since the epoch in milliseconds.
15 | */ 16 | static int 17 | time_time(lua_State *L) 18 | { 19 | #ifdef _WIN32 20 | SYSTEMTIME st, est; 21 | FILETIME ft, eft; 22 | ULARGE_INTEGER i, ei; 23 | 24 | GetLocalTime(&st); 25 | SystemTimeToFileTime(&st, &ft); 26 | i.HighPart = ft.dwHighDateTime; 27 | i.LowPart = ft.dwLowDateTime; 28 | 29 | est.wYear = 1970; 30 | est.wMonth = 1; 31 | est.wDay = 1; 32 | est.wHour = 0; 33 | est.wMinute = 0; 34 | est.wSecond = 0; 35 | est.wMilliseconds = 0; 36 | SystemTimeToFileTime(&est, &eft); 37 | ei.HighPart = eft.dwHighDateTime; 38 | ei.LowPart = eft.dwLowDateTime; 39 | 40 | lua_pushnumber(L, ((i.QuadPart - ei.QuadPart) / 10000)); 41 | #else 42 | struct timeval tv; 43 | 44 | if (gettimeofday(&tv, NULL) != 0) 45 | return 0; 46 | 47 | lua_pushnumber(L, (lua_Number)((unsigned long long int)(tv.tv_sec) * 1000 + 48 | (unsigned long long int)(tv.tv_usec) / 1000)); 49 | #endif 50 | return 1; 51 | } 52 | 53 | /* 54 | * Delays for the specified amount of time in milliseconds. 55 | */ 56 | static int 57 | time_sleep(lua_State *L) 58 | { 59 | 60 | #ifdef _WIN32 61 | Sleep((DWORD)(lua_tonumber(L, 1))); 62 | #else 63 | usleep((useconds_t)(lua_tonumber(L, 1) * 1000)); 64 | #endif 65 | 66 | lua_pop(L, 1); 67 | 68 | return 0; 69 | } 70 | 71 | /* The time library. */ 72 | static const luaL_Reg lib[] = { 73 | { "time", time_time }, 74 | { "sleep", time_sleep }, 75 | { NULL, NULL } 76 | }; 77 | 78 | /* 79 | * Opens the time library. 80 | */ 81 | LUALIB_API int 82 | luaopen_concurrent_time(lua_State *lua) 83 | { 84 | 85 | #if LUA_VERSION_NUM < 502 86 | luaL_register(lua, "time", lib); 87 | #else 88 | luaL_newlib(lua, lib); 89 | #endif 90 | return 1; 91 | } 92 | -------------------------------------------------------------------------------- /test/concurrent.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | for i in process1 process2 message1 register1 register2 monitor1 monitor2 \ 4 | link1 link2 trapexit1 trapexit2 5 | do 6 | sleep 1 7 | echo running $i.lua 8 | lua concurrent/$i.lua 9 | echo 10 | done 11 | -------------------------------------------------------------------------------- /test/concurrent/link1.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function pong(n) 4 | for i = 1, n do 5 | local msg = concurrent.receive() 6 | print('pong received message from ping') 7 | concurrent.send(msg.from, { from = concurrent.self(), body = 'pong' }) 8 | print('pong sent reply to ping') 9 | end 10 | print('pong exiting') 11 | concurrent.exit('test') 12 | end 13 | 14 | function ping(pid) 15 | concurrent.link(pid) 16 | while true do 17 | concurrent.send(pid, { from = concurrent.self(), body = 'ping' }) 18 | print('ping sent message to pong') 19 | local msg = concurrent.receive(1000) 20 | print('ping received reply from pong') 21 | end 22 | end 23 | 24 | pid = concurrent.spawn(pong, 3) 25 | concurrent.spawn(ping, pid) 26 | 27 | concurrent.loop() 28 | -------------------------------------------------------------------------------- /test/concurrent/link2.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function leaf(n) 4 | for i = 1, n do 5 | local msg = concurrent.receive() 6 | print('leaf received message from internal') 7 | end 8 | print('leaf exiting') 9 | concurrent.exit('test') 10 | end 11 | 12 | function internal(pid) 13 | concurrent.link(pid) 14 | while true do 15 | local msg = 
concurrent.receive(1000) 16 | if msg and msg.signal == 'EXIT' then break end 17 | print('internal received message from root') 18 | 19 | concurrent.send(pid, { from = concurrent.self(), body = 'ping' }) 20 | print('internal sent message to leaf') 21 | end 22 | end 23 | 24 | function root(pid) 25 | concurrent.link(pid) 26 | local self = concurrent.self() 27 | while true do 28 | concurrent.send(pid, { from = self, body = 'ping' }) 29 | print('root sent message to internal') 30 | 31 | local msg = concurrent.receive(10) 32 | if msg and msg.signal == 'EXIT' then break end 33 | end 34 | end 35 | 36 | pid = concurrent.spawn(leaf, 2) 37 | pid = concurrent.spawn(internal, pid) 38 | concurrent.spawn(root, pid) 39 | 40 | concurrent.loop() 41 | -------------------------------------------------------------------------------- /test/concurrent/message1.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function receiver() 4 | local msg = concurrent.receive() 5 | print('this is an integer: ' .. msg.integer) 6 | print('this is a float: ' .. msg.float) 7 | print('this is a string: ' .. msg.string) 8 | print('this is a ' .. tostring(msg.table)) 9 | print(' table[1] = ' .. msg.table[1]) 10 | print(" table['hello'] = " .. msg.table['hello']) 11 | print('this is a ' .. tostring(msg.callme)) 12 | print(' function() = ' .. msg.callme()) 13 | end 14 | 15 | function sender(pid) 16 | concurrent.send(pid, { from = concurrent.self(), 17 | integer = 9634, 18 | float = 96.34, 19 | string = 'hello world', 20 | table = { 'hello, world', hello = 'world' }, 21 | callme = function () return 'hello world!' end }) 22 | end 23 | 24 | pid = concurrent.spawn(receiver) 25 | concurrent.spawn(sender, pid) 26 | 27 | concurrent.loop() 28 | -------------------------------------------------------------------------------- /test/concurrent/monitor1.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function pong(n) 4 | for i = 1, n do 5 | local msg = concurrent.receive() 6 | print('pong received message from ping') 7 | concurrent.send(msg.from, { from = concurrent.self(), body = 'pong' }) 8 | print('pong sent reply to ping') 9 | end 10 | end 11 | 12 | function ping(pid) 13 | concurrent.monitor(pid) 14 | while true do 15 | concurrent.send(pid, { from = concurrent.self(), body = 'ping' }) 16 | print('ping sent message to pong') 17 | local msg = concurrent.receive(1000) 18 | if msg and msg.signal == 'DOWN' then break end 19 | print('ping received reply from pong') 20 | end 21 | print('ping received DOWN and exiting') 22 | end 23 | 24 | pid = concurrent.spawn(pong, 3) 25 | concurrent.spawn(ping, pid) 26 | 27 | concurrent.loop() 28 | -------------------------------------------------------------------------------- /test/concurrent/monitor2.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function leaf(n) 4 | for i = 1, n do 5 | local msg = concurrent.receive() 6 | print('leaf received message from internal') 7 | end 8 | print('leaf exiting') 9 | end 10 | 11 | function internal(pid) 12 | concurrent.monitor(pid) 13 | while true do 14 | local msg = concurrent.receive(1000) 15 | if msg and msg.signal == 'DOWN' then break end 16 | print('internal received message from root') 17 | 18 | concurrent.send(pid, { from = concurrent.self(), body = 'ping' }) 19 | print('internal sent message to leaf') 20 | end 21 | 
print('internal received DOWN and exiting') 22 | end 23 | 24 | function root(pid) 25 | concurrent.monitor(pid) 26 | local self = concurrent.self() 27 | while true do 28 | concurrent.send(pid, { from = self, body = 'ping' }) 29 | print('root sent message to internal') 30 | 31 | local msg = concurrent.receive(10) 32 | if msg and msg.signal == 'DOWN' then break end 33 | end 34 | print('root received DOWN and exiting') 35 | end 36 | 37 | pid = concurrent.spawn(leaf, 2) 38 | pid = concurrent.spawn(internal, pid) 39 | concurrent.spawn(root, pid) 40 | 41 | concurrent.loop() 42 | -------------------------------------------------------------------------------- /test/concurrent/process1.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function pong(n) 4 | for i = 1, n do 5 | local msg = concurrent.receive() 6 | print('pong received message from ping') 7 | concurrent.send(msg.from, { from = concurrent.self(), body = 'pong' }) 8 | print('pong sent reply to ping') 9 | end 10 | end 11 | 12 | function ping(pid) 13 | while true do 14 | concurrent.send(pid, { from = concurrent.self(), body = 'ping' }) 15 | print('ping sent message to pong') 16 | local msg = concurrent.receive(1000) 17 | if not msg and not concurrent.isalive(pid) then 18 | print('ping exiting because pong is not alive anymore') 19 | concurrent.exit() 20 | end 21 | print('ping received reply from pong') 22 | end 23 | end 24 | 25 | pid = concurrent.spawn(pong, 3) 26 | concurrent.spawn(ping, pid) 27 | 28 | concurrent.loop() 29 | -------------------------------------------------------------------------------- /test/concurrent/process2.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function leaf(n) 4 | for i = 1, n do 5 | local msg = concurrent.receive() 6 | print('leaf received message from internal') 7 | end 8 | print('leaf exiting') 9 | end 10 | 11 | function internal(pid) 12 | while concurrent.isalive(pid) do 13 | local msg = concurrent.receive(1000) 14 | print('internal received message from root') 15 | 16 | concurrent.send(pid, 'hey') 17 | print('internal sent message to leaf') 18 | end 19 | print('internal exiting') 20 | end 21 | 22 | function root(pid) 23 | while concurrent.isalive(pid) do 24 | concurrent.send(pid, 'hey') 25 | print('root sent message to internal') 26 | 27 | local msg = concurrent.receive(10) 28 | end 29 | print('root exiting') 30 | end 31 | 32 | pid = concurrent.spawn(leaf, 2) 33 | pid = concurrent.spawn(internal, pid) 34 | concurrent.spawn(root, pid) 35 | 36 | concurrent.loop() 37 | -------------------------------------------------------------------------------- /test/concurrent/register1.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function pong(self, n) 4 | for i = 1, n do 5 | local msg = concurrent.receive() 6 | print('pong received message from ping') 7 | concurrent.send(msg.from, { from = self, body = 'pong' }) 8 | print('pong sent reply to ping') 9 | end 10 | end 11 | 12 | function ping(self, name) 13 | while true do 14 | concurrent.send(name, { from = self, body = 'ping' }) 15 | print('ping sent message to pong') 16 | local msg = concurrent.receive(1000) 17 | if not msg and not concurrent.isalive(name) then 18 | print('ping exiting because pong is not alive anymore') 19 | concurrent.exit() 20 | end 21 | print('ping received reply from pong') 22 | end 23 | end 24 
| 25 | print('registered: ', unpack(concurrent.registered())) 26 | 27 | pid = concurrent.spawn(pong, 'pong', 3) 28 | concurrent.register('pong', pid) 29 | 30 | print('registered: ', unpack(concurrent.registered())) 31 | concurrent.unregister('pong') 32 | print('registered: ', unpack(concurrent.registered())) 33 | concurrent.register('pong', pid) 34 | print('registered: ', unpack(concurrent.registered())) 35 | 36 | pid = concurrent.spawn(ping, 'ping', 'pong') 37 | concurrent.register('ping', pid) 38 | 39 | print('registered: ', unpack(concurrent.registered())) 40 | 41 | concurrent.loop() 42 | 43 | print('registered: ', unpack(concurrent.registered())) 44 | -------------------------------------------------------------------------------- /test/concurrent/register2.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function leaf(n) 4 | for i = 1, n do 5 | local msg = concurrent.receive() 6 | print('leaf received message from internal') 7 | end 8 | print('leaf exiting') 9 | end 10 | 11 | function internal(name) 12 | while true do 13 | if not concurrent.isalive(concurrent.whereis(name)) then break end 14 | 15 | local msg = concurrent.receive(1000) 16 | print('internal received message from root') 17 | 18 | concurrent.send(name, { from = concurrent.self(), body = 'ping' }) 19 | print('internal sent message to leaf') 20 | end 21 | print('internal exiting') 22 | end 23 | 24 | function root(name) 25 | while true do 26 | if not concurrent.isalive(concurrent.whereis(name)) then break end 27 | 28 | concurrent.send(name, { from = concurrent.self(), body = 'ping' }) 29 | print('root sent message to internal') 30 | 31 | local msg = concurrent.receive(10) 32 | end 33 | print('root exiting') 34 | end 35 | 36 | print('registered: ', unpack(concurrent.registered())) 37 | 38 | pid = concurrent.spawn(leaf, 2) 39 | concurrent.register('leaf', pid) 40 | 41 | print('registered: ', unpack(concurrent.registered())) 42 | concurrent.unregister('leaf') 43 | print('registered: ', unpack(concurrent.registered())) 44 | concurrent.register('leaf', pid) 45 | print('registered: ', unpack(concurrent.registered())) 46 | 47 | pid = concurrent.spawn(internal, 'leaf') 48 | concurrent.register('internal', pid) 49 | 50 | pid = concurrent.spawn(root, 'internal') 51 | concurrent.register('root', pid) 52 | 53 | print('registered: ', unpack(concurrent.registered())) 54 | 55 | concurrent.loop() 56 | 57 | print('registered: ', unpack(concurrent.registered())) 58 | -------------------------------------------------------------------------------- /test/concurrent/trapexit1.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | concurrent.setoption('trapexit', true) 4 | 5 | function pong(n) 6 | for i = 1, n do 7 | local msg = concurrent.receive() 8 | print('pong received message from ping') 9 | concurrent.send(msg.from, { from = concurrent.self(), body = 'pong' }) 10 | print('pong sent reply to ping') 11 | end 12 | concurrent.exit('test') 13 | end 14 | 15 | function ping(pid) 16 | concurrent.link(pid) 17 | while true do 18 | concurrent.send(pid, { from = concurrent.self(), body = 'ping' }) 19 | print('ping sent message to pong') 20 | local msg = concurrent.receive(1000) 21 | if msg and msg.signal == 'EXIT' then break end 22 | print('ping received reply from pong') 23 | end 24 | print('ping received EXIT and exiting') 25 | end 26 | 27 | pid = concurrent.spawn(pong, 3) 28 | 
concurrent.spawn(ping, pid) 29 | 30 | concurrent.loop() 31 | -------------------------------------------------------------------------------- /test/concurrent/trapexit2.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | concurrent.setoption('trapexit', true) 4 | 5 | function leaf(n) 6 | for i = 1, n do 7 | local msg = concurrent.receive() 8 | print('leaf received message from internal') 9 | end 10 | print('leaf exiting') 11 | concurrent.exit('test') 12 | end 13 | 14 | function internal(pid) 15 | concurrent.link(pid) 16 | while true do 17 | local msg = concurrent.receive(1000) 18 | if msg and msg.signal == 'EXIT' then break end 19 | print('internal received message from root') 20 | 21 | concurrent.send(pid, { from = concurrent.self(), body = 'ping' }) 22 | print('internal sent message to leaf') 23 | end 24 | print('internal received EXIT and exiting') 25 | concurrent.exit('test') 26 | end 27 | 28 | function root(pid) 29 | concurrent.link(pid) 30 | local self = concurrent.self() 31 | while true do 32 | concurrent.send(pid, { from = self, body = 'ping' }) 33 | print('root sent message to internal') 34 | 35 | local msg = concurrent.receive(10) 36 | if msg and msg.signal == 'EXIT' then break end 37 | end 38 | print('root received EXIT and exiting') 39 | end 40 | 41 | pid = concurrent.spawn(leaf, 2) 42 | pid = concurrent.spawn(internal, pid) 43 | concurrent.spawn(root, pid) 44 | 45 | concurrent.loop() 46 | -------------------------------------------------------------------------------- /test/distributed/cookie1a.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function pong(n) 4 | concurrent.register('pong', concurrent.self()) 5 | for i = 1, n do 6 | local msg = concurrent.receive() 7 | print('pong received message from ping') 8 | concurrent.send(msg.from, { from = { 'pong', 'pong@localhost' }, 9 | body = 'pong' }) 10 | print('pong sent reply to ping') 11 | end 12 | print('pong exiting') 13 | end 14 | 15 | concurrent.spawn(pong, 3) 16 | 17 | concurrent.init('pong@localhost') 18 | concurrent.setcookie('secret') 19 | concurrent.loop() 20 | concurrent.shutdown() 21 | -------------------------------------------------------------------------------- /test/distributed/cookie1b.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function ping(pid) 4 | concurrent.register('ping', concurrent.self()) 5 | while true do 6 | concurrent.send(pid, { from = { 'ping', 'ping@localhost' }, 7 | body = 'ping' }) 8 | print('ping sent message to pong') 9 | local msg = concurrent.receive(1000) 10 | if not msg then break end 11 | print('ping received reply from pong') 12 | end 13 | print('ping exiting') 14 | end 15 | 16 | concurrent.spawn(ping, { 'pong', 'pong@localhost' }) 17 | 18 | concurrent.init('ping@localhost') 19 | concurrent.setcookie('secret') 20 | concurrent.loop() 21 | concurrent.shutdown() 22 | -------------------------------------------------------------------------------- /test/distributed/cookie2a.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function pong(n) 4 | concurrent.register('pong', concurrent.self()) 5 | for i = 1, n do 6 | local msg = concurrent.receive(10000) 7 | if not msg then break end 8 | print('pong received message from ping') 9 | concurrent.send(msg.from, { from = 
{ 'pong', 'pong@localhost' }, 10 | body = 'pong' }) 11 | print('pong sent reply to ping') 12 | end 13 | print('pong exiting') 14 | end 15 | 16 | concurrent.spawn(pong, 3) 17 | 18 | concurrent.init('pong@localhost') 19 | concurrent.setcookie('secret') 20 | concurrent.loop() 21 | concurrent.shutdown() 22 | -------------------------------------------------------------------------------- /test/distributed/cookie2b.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function ping(pid) 4 | concurrent.register('ping', concurrent.self()) 5 | while true do 6 | concurrent.send(pid, { from = { 'ping', 'ping@localhost' }, 7 | body = 'ping' }) 8 | print('ping sent message to pong') 9 | local msg = concurrent.receive(1000) 10 | if not msg then break end 11 | print('ping received reply from pong') 12 | end 13 | print('ping exiting') 14 | end 15 | 16 | concurrent.spawn(ping, { 'pong', 'pong@localhost' }) 17 | 18 | concurrent.init('ping@localhost') 19 | concurrent.setcookie('wrong') 20 | concurrent.loop() 21 | concurrent.shutdown() 22 | -------------------------------------------------------------------------------- /test/distributed/link1a.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function pong(n) 4 | concurrent.register('pong', concurrent.self()) 5 | for i = 1, n do 6 | local msg = concurrent.receive() 7 | print('pong received message from ping') 8 | concurrent.send(msg.from, { from = { 'pong', 'pong@localhost' }, 9 | body = 'pong' }) 10 | print('pong sent reply to ping') 11 | end 12 | print('pong exiting') 13 | concurrent.exit('test') 14 | end 15 | 16 | concurrent.spawn(pong, 3) 17 | 18 | concurrent.init('pong@localhost') 19 | concurrent.loop() 20 | concurrent.shutdown() 21 | -------------------------------------------------------------------------------- /test/distributed/link1b.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function ping(pid) 4 | concurrent.register('ping', concurrent.self()) 5 | concurrent.link(pid) 6 | while true do 7 | concurrent.send(pid, { from = { 'ping', 'ping@localhost' }, 8 | body = 'ping' }) 9 | print('ping sent message to pong') 10 | local msg = concurrent.receive(1000) 11 | print('ping received reply from pong') 12 | end 13 | print('ping exiting') 14 | end 15 | 16 | concurrent.spawn(ping, { 'pong', 'pong@localhost' }) 17 | 18 | concurrent.init('ping@localhost') 19 | concurrent.loop() 20 | concurrent.shutdown() 21 | -------------------------------------------------------------------------------- /test/distributed/link2a.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function leaf(n) 4 | concurrent.register('leaf', concurrent.self()) 5 | for i = 1, n do 6 | local msg = concurrent.receive() 7 | print('leaf received message from internal') 8 | end 9 | print('leaf exiting') 10 | concurrent.exit('test') 11 | end 12 | 13 | concurrent.spawn(leaf, 2) 14 | 15 | concurrent.init('leaf@localhost') 16 | concurrent.loop() 17 | concurrent.shutdown() 18 | -------------------------------------------------------------------------------- /test/distributed/link2b.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function internal(pid) 4 | concurrent.register('internal', 
concurrent.self()) 5 | concurrent.link(pid) 6 | while true do 7 | local msg = concurrent.receive() 8 | print('internal received message from root') 9 | 10 | concurrent.send(pid, { from = { concurrent.self(), 11 | 'internal@localhost' }, 12 | body = 'ping' }) 13 | print('internal sent message to leaf') 14 | end 15 | end 16 | 17 | concurrent.spawn(internal, { 'leaf', 'leaf@localhost' }) 18 | 19 | concurrent.init('internal@localhost') 20 | concurrent.loop() 21 | concurrent.shutdown() 22 | -------------------------------------------------------------------------------- /test/distributed/link2c.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function root(pid) 4 | local self = concurrent.self() 5 | concurrent.register('root', self) 6 | concurrent.link(pid) 7 | while true do 8 | concurrent.send(pid, { from = { self, 'root@localhost' }, 9 | body = 'ping' }) 10 | print('root sent message to internal') 11 | 12 | local msg = concurrent.receive(10) 13 | end 14 | end 15 | 16 | concurrent.spawn(root, { 'internal', 'internal@localhost' }) 17 | 18 | concurrent.init('root@localhost') 19 | concurrent.loop() 20 | concurrent.shutdown() 21 | -------------------------------------------------------------------------------- /test/distributed/message1a.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function receiver() 4 | concurrent.register('receiver', concurrent.self()) 5 | local msg = concurrent.receive() 6 | print('this is an integer: ' .. msg.integer) 7 | print('this is a float: ' .. msg.float) 8 | print('this is a string: ' .. msg.string) 9 | print('this is a ' .. tostring(msg.table)) 10 | print(' table[1] = ' .. msg.table[1]) 11 | print(" table['hello'] = " .. msg.table['hello']) 12 | print('this is a ' .. tostring(msg.callme)) 13 | print(' function() = ' .. msg.callme()) 14 | end 15 | 16 | concurrent.spawn(receiver) 17 | 18 | concurrent.init('receiver@localhost') 19 | concurrent.loop() 20 | concurrent.shutdown() 21 | -------------------------------------------------------------------------------- /test/distributed/message1b.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function sender(pid) 4 | concurrent.register('sender', concurrent.self()) 5 | concurrent.send(pid, { from = concurrent.self(), 6 | integer = 9634, 7 | float = 96.34, 8 | string = 'hello world', 9 | table = { 'hello, world', hello = 'world' }, 10 | callme = function () return 'hello world!' 
end }) 11 | end 12 | 13 | concurrent.spawn(sender, { 'receiver', 'receiver@localhost' }) 14 | 15 | concurrent.init('sender@localhost') 16 | concurrent.loop() 17 | concurrent.shutdown() 18 | -------------------------------------------------------------------------------- /test/distributed/monitor1a.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function pong(n) 4 | concurrent.register('pong', concurrent.self()) 5 | for i = 1, n do 6 | local msg = concurrent.receive() 7 | print('pong received message from ping') 8 | concurrent.send(msg.from, { from = { 'pong', 'pong@localhost' }, 9 | body = 'pong' }) 10 | print('pong sent reply to ping') 11 | end 12 | print('pong exiting') 13 | end 14 | 15 | concurrent.spawn(pong, 3) 16 | 17 | concurrent.init('pong@localhost') 18 | concurrent.loop() 19 | concurrent.shutdown() 20 | -------------------------------------------------------------------------------- /test/distributed/monitor1b.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function ping(pid) 4 | concurrent.register('ping', concurrent.self()) 5 | concurrent.monitor(pid) 6 | while true do 7 | concurrent.send(pid, { from = { 'ping', 'ping@localhost' }, 8 | body = 'ping' }) 9 | print('ping sent message to pong') 10 | local msg = concurrent.receive(1000) 11 | if msg and msg.signal == 'DOWN' then break end 12 | print('ping received reply from pong') 13 | end 14 | print('ping received DOWN and exiting') 15 | end 16 | 17 | concurrent.spawn(ping, { 'pong', 'pong@localhost' }) 18 | 19 | concurrent.init('ping@localhost') 20 | concurrent.loop() 21 | concurrent.shutdown() 22 | -------------------------------------------------------------------------------- /test/distributed/monitor2a.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function leaf(n) 4 | concurrent.register('leaf', concurrent.self()) 5 | for i = 1, n do 6 | local msg = concurrent.receive() 7 | print('leaf received message from internal') 8 | end 9 | print('leaf exiting') 10 | end 11 | 12 | concurrent.spawn(leaf, 2) 13 | 14 | concurrent.init('leaf@localhost') 15 | concurrent.loop() 16 | concurrent.shutdown() 17 | -------------------------------------------------------------------------------- /test/distributed/monitor2b.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function internal(pid) 4 | concurrent.register('internal', concurrent.self()) 5 | concurrent.monitor(pid) 6 | while true do 7 | local msg = concurrent.receive() 8 | if msg and msg.signal == 'DOWN' then break end 9 | print('internal received message from root') 10 | 11 | concurrent.send(pid, { from = { concurrent.self(), 12 | 'internal@localhost' }, 13 | body = 'ping' }) 14 | print('internal sent message to leaf') 15 | end 16 | print('internal received DOWN and exiting') 17 | end 18 | 19 | concurrent.spawn(internal, { 'leaf', 'leaf@localhost' }) 20 | 21 | concurrent.init('internal@localhost') 22 | concurrent.loop() 23 | concurrent.shutdown() 24 | -------------------------------------------------------------------------------- /test/distributed/monitor2c.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function root(pid) 4 | local self = concurrent.self() 5 | 
concurrent.register('root', self) 6 | concurrent.monitor(pid) 7 | while true do 8 | concurrent.send(pid, { from = { self, 'root@localhost' }, 9 | body = 'ping' }) 10 | print('root sent message to internal') 11 | 12 | local msg = concurrent.receive(10) 13 | if msg and msg.signal == 'DOWN' then break end 14 | end 15 | print('root received DOWN and exiting') 16 | end 17 | 18 | concurrent.spawn(root, { 'internal', 'internal@localhost' }) 19 | 20 | concurrent.init('root@localhost') 21 | concurrent.loop() 22 | concurrent.shutdown() 23 | -------------------------------------------------------------------------------- /test/distributed/node1a.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function pong(n) 4 | concurrent.register('pong', concurrent.self()) 5 | for i = 1, n do 6 | local msg = concurrent.receive() 7 | print('pong received message from ping') 8 | concurrent.send(msg.from, { from = { 'pong', 'pong@localhost' }, 9 | body = 'pong' }) 10 | print('pong sent reply to ping') 11 | end 12 | print('pong exiting') 13 | print('node(): ' .. concurrent.node()) 14 | print('isnodealive(): ' .. tostring(concurrent.isnodealive())) 15 | print('nodes(): ' .. unpack(concurrent.nodes())) 16 | end 17 | 18 | concurrent.spawn(pong, 3) 19 | 20 | concurrent.init('pong@localhost') 21 | concurrent.loop() 22 | concurrent.shutdown() 23 | -------------------------------------------------------------------------------- /test/distributed/node1b.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function ping(pid) 4 | concurrent.register('ping', concurrent.self()) 5 | concurrent.monitornode('pong@localhost') 6 | while true do 7 | concurrent.send(pid, { from = { 'ping', 'ping@localhost' }, 8 | body = 'ping' }) 9 | print('ping sent message to pong') 10 | local msg = concurrent.receive() 11 | if msg and msg.signal == 'NODEDOWN' then break end 12 | print('ping received reply from pong') 13 | end 14 | print('ping received NODEDOWN and exiting') 15 | end 16 | 17 | concurrent.spawn(ping, { 'pong', 'pong@localhost' }) 18 | 19 | concurrent.init('ping@localhost') 20 | concurrent.loop() 21 | concurrent.shutdown() 22 | -------------------------------------------------------------------------------- /test/distributed/process1a.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function pong(n) 4 | concurrent.register('pong', concurrent.self()) 5 | for i = 1, n do 6 | local msg = concurrent.receive() 7 | print('pong received message from ping') 8 | concurrent.send(msg.from, { from = { 'pong', 'pong@localhost' }, 9 | body = 'pong' }) 10 | print('pong sent reply to ping') 11 | end 12 | print('pong exiting') 13 | end 14 | 15 | concurrent.spawn(pong, 3) 16 | 17 | concurrent.init('pong@localhost') 18 | concurrent.loop() 19 | concurrent.shutdown() 20 | -------------------------------------------------------------------------------- /test/distributed/process1b.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function ping(pid) 4 | concurrent.register('ping', concurrent.self()) 5 | while true do 6 | concurrent.send(pid, { from = { 'ping', 'ping@localhost' }, 7 | body = 'ping' }) 8 | print('ping sent message to pong') 9 | local msg = concurrent.receive(1000) 10 | if not msg then break end 11 | print('ping received 
reply from pong') 12 | end 13 | print('ping exiting') 14 | end 15 | 16 | concurrent.spawn(ping, { 'pong', 'pong@localhost' }) 17 | 18 | concurrent.init('ping@localhost') 19 | concurrent.loop() 20 | concurrent.shutdown() 21 | -------------------------------------------------------------------------------- /test/distributed/process2a.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function pong(n) 4 | for i = 1, n do 5 | local msg = concurrent.receive() 6 | print('pong received message from ping') 7 | concurrent.send(msg.from, { from = concurrent.self(), body = 'pong' }) 8 | print('pong sent reply to ping') 9 | end 10 | end 11 | 12 | function ping(pid) 13 | while true do 14 | concurrent.send(pid, { from = concurrent.self(), body = 'ping' }) 15 | print('ping sent message to pong') 16 | local msg = concurrent.receive(1000) 17 | if not msg and not concurrent.isalive(pid) then 18 | break 19 | end 20 | print('ping received reply from pong') 21 | end 22 | print('ping exiting because pong is not alive anymore') 23 | end 24 | 25 | concurrent.init('remote@localhost') 26 | concurrent.loop(10000) 27 | concurrent.shutdown() 28 | -------------------------------------------------------------------------------- /test/distributed/process2b.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | concurrent.init('caller@localhost') 4 | 5 | pid = concurrent.spawn('remote@localhost', 'pong', 3) 6 | concurrent.spawn('remote@localhost', 'ping', pid) 7 | 8 | concurrent.loop() 9 | concurrent.shutdown() 10 | -------------------------------------------------------------------------------- /test/distributed/register1a.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function pong(n) 4 | print('registered: ', unpack(concurrent.registered())) 5 | concurrent.register('pong', concurrent.self()) 6 | print('registered: ', unpack(concurrent.registered())) 7 | for i = 1, n do 8 | local msg = concurrent.receive() 9 | print('pong received message from ping') 10 | concurrent.send(msg.from, { from = { 'pong', 'pong@localhost' }, 11 | body = 'pong' }) 12 | print('pong sent reply to ping') 13 | end 14 | print('registered: ', unpack(concurrent.registered())) 15 | concurrent.unregister('pong') 16 | print('registered: ', unpack(concurrent.registered())) 17 | concurrent.register('pong', concurrent.self()) 18 | print('registered: ', unpack(concurrent.registered())) 19 | print('pong exiting') 20 | end 21 | 22 | pid = concurrent.spawn(pong, 3) 23 | 24 | concurrent.init('pong@localhost') 25 | concurrent.loop() 26 | concurrent.shutdown() 27 | -------------------------------------------------------------------------------- /test/distributed/register1b.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function ping(pid) 4 | concurrent.register('ping', concurrent.self()) 5 | concurrent.monitor(pid) 6 | while true do 7 | concurrent.send(pid, { from = { 'ping', 'ping@localhost' }, 8 | body = 'ping' }) 9 | print('ping sent message to pong') 10 | local msg = concurrent.receive() 11 | if msg and msg.signal == 'DOWN' then break end 12 | print('ping received reply from pong') 13 | end 14 | print('ping received DOWN and exiting') 15 | end 16 | 17 | pid = concurrent.spawn(ping, { 'pong', 'pong@localhost' }) 18 | 19 | 
concurrent.init('ping@localhost') 20 | concurrent.loop() 21 | concurrent.shutdown() 22 | -------------------------------------------------------------------------------- /test/distributed/register2a.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function leaf(n) 4 | print('registered: ', unpack(concurrent.registered())) 5 | concurrent.register('leaf', concurrent.self()) 6 | print('registered: ', unpack(concurrent.registered())) 7 | for i = 1, n do 8 | local msg = concurrent.receive() 9 | print('leaf received message from internal') 10 | end 11 | print('registered: ', unpack(concurrent.registered())) 12 | concurrent.unregister('leaf') 13 | print('registered: ', unpack(concurrent.registered())) 14 | concurrent.register('leaf', concurrent.self()) 15 | print('registered: ', unpack(concurrent.registered())) 16 | print('leaf exiting') 17 | end 18 | 19 | concurrent.spawn(leaf, 2) 20 | 21 | concurrent.init('leaf@localhost') 22 | concurrent.loop() 23 | concurrent.shutdown() 24 | -------------------------------------------------------------------------------- /test/distributed/register2b.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function internal(pid) 4 | concurrent.register('internal', concurrent.self()) 5 | concurrent.monitor(pid) 6 | while true do 7 | local msg = concurrent.receive() 8 | if msg and msg.signal == 'DOWN' then break end 9 | print('internal received message from root') 10 | 11 | concurrent.send(pid, { from = { concurrent.self(), 12 | 'internal@localhost' }, body = 'ping' }) 13 | print('internal sent message to leaf') 14 | end 15 | print('internal received DOWN and exiting') 16 | end 17 | 18 | concurrent.spawn(internal, { 'leaf', 'leaf@localhost' }) 19 | 20 | concurrent.init('internal@localhost') 21 | concurrent.loop() 22 | concurrent.shutdown() 23 | -------------------------------------------------------------------------------- /test/distributed/register2c.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function root(pid) 4 | local self = concurrent.self() 5 | concurrent.register('root', self) 6 | concurrent.monitor(pid) 7 | while true do 8 | concurrent.send(pid, { from = { self, 'root@localhost' }, 9 | body = 'ping' }) 10 | print('root sent message to internal') 11 | 12 | local msg = concurrent.receive(10) 13 | if msg and msg.signal == 'DOWN' then break end 14 | end 15 | print('root received DOWN and exiting') 16 | end 17 | 18 | concurrent.spawn(root, { 'internal', 'internal@localhost' }) 19 | 20 | concurrent.init('root@localhost') 21 | concurrent.loop() 22 | concurrent.shutdown() 23 | -------------------------------------------------------------------------------- /test/distributed/trapexit1a.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function pong(n) 4 | concurrent.register('pong', concurrent.self()) 5 | for i = 1, n do 6 | local msg = concurrent.receive() 7 | print('pong received message from ping') 8 | concurrent.send(msg.from, { from = { 'pong', 'pong@localhost' }, 9 | body = 'pong' }) 10 | print('pong sent reply to ping') 11 | end 12 | print('pong exiting') 13 | concurrent.exit('test') 14 | end 15 | 16 | concurrent.spawn(pong, 3) 17 | 18 | concurrent.init('pong@localhost') 19 | concurrent.loop() 20 | concurrent.shutdown() 21 | 
-------------------------------------------------------------------------------- /test/distributed/trapexit1b.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | concurrent.setoption('trapexit', true) 4 | 5 | function ping(pid) 6 | concurrent.register('ping', concurrent.self()) 7 | concurrent.link(pid) 8 | while true do 9 | concurrent.send(pid, { from = { 'ping', 'ping@localhost' }, 10 | body = 'ping' }) 11 | print('ping sent message to pong') 12 | local msg = concurrent.receive(1000) 13 | if msg and msg.signal == 'EXIT' then break end 14 | print('ping received reply from pong') 15 | end 16 | print('ping received EXIT and exiting') 17 | end 18 | 19 | concurrent.spawn(ping, { 'pong', 'pong@localhost' }) 20 | 21 | concurrent.init('ping@localhost') 22 | concurrent.loop() 23 | concurrent.shutdown() 24 | -------------------------------------------------------------------------------- /test/distributed/trapexit2a.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | function leaf(n) 4 | concurrent.register('leaf', concurrent.self()) 5 | for i = 1, n do 6 | local msg = concurrent.receive() 7 | print('leaf received message from internal') 8 | end 9 | print('leaf exiting') 10 | concurrent.exit('test') 11 | end 12 | 13 | concurrent.spawn(leaf, 2) 14 | 15 | concurrent.init('leaf@localhost') 16 | concurrent.loop() 17 | concurrent.shutdown() 18 | -------------------------------------------------------------------------------- /test/distributed/trapexit2b.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | concurrent.setoption('trapexit', true) 4 | 5 | function internal(pid) 6 | concurrent.register('internal', concurrent.self()) 7 | concurrent.link(pid) 8 | while true do 9 | local msg = concurrent.receive() 10 | if msg and msg.signal == 'EXIT' then break end 11 | print('internal received message from root') 12 | 13 | concurrent.send(pid, { from = { concurrent.self(), 14 | 'internal@localhost' }, body = 'ping' }) 15 | print('internal sent message to leaf') 16 | end 17 | print('internal received EXIT and exiting') 18 | concurrent.exit('test') 19 | end 20 | 21 | concurrent.spawn(internal, { 'leaf', 'leaf@localhost' }) 22 | 23 | concurrent.init('internal@localhost') 24 | concurrent.loop() 25 | concurrent.shutdown() 26 | -------------------------------------------------------------------------------- /test/distributed/trapexit2c.lua: -------------------------------------------------------------------------------- 1 | concurrent = require 'concurrent' 2 | 3 | concurrent.setoption('trapexit', true) 4 | 5 | function root(pid) 6 | local self = concurrent.self() 7 | concurrent.register('root', self) 8 | concurrent.link(pid) 9 | while true do 10 | concurrent.send(pid, { from = { self, 'root@localhost' }, 11 | body = 'ping' }) 12 | print('root sent message to internal') 13 | 14 | local msg = concurrent.receive(10) 15 | if msg and msg.signal == 'EXIT' then break end 16 | end 17 | print('root received EXIT and exiting') 18 | end 19 | 20 | concurrent.spawn(root, { 'internal', 'internal@localhost' }) 21 | 22 | concurrent.init('root@localhost') 23 | concurrent.loop() 24 | concurrent.shutdown() 25 | -------------------------------------------------------------------------------- /test/distributed2a.sh: -------------------------------------------------------------------------------- 1 | 
#!/bin/sh 2 | 3 | for i in process1a process2a message1a register1a monitor1a link1a trapexit1a \ 4 | node1a cookie1a cookie2a 5 | do 6 | echo running $i.lua 7 | lua distributed/$i.lua 8 | echo 9 | done 10 | -------------------------------------------------------------------------------- /test/distributed2b.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | for i in process1b process2b message1b register1b monitor1b link1b trapexit1b \ 4 | node1b cookie1b cookie2b 5 | do 6 | sleep 5 7 | echo running $i.lua 8 | lua distributed/$i.lua 9 | echo 10 | done 11 | -------------------------------------------------------------------------------- /test/distributed3a.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | for i in register2a monitor2a link2a trapexit2a; do 4 | echo running $i.lua 5 | lua distributed/$i.lua 6 | echo 7 | done 8 | -------------------------------------------------------------------------------- /test/distributed3b.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | for i in register2b monitor2b link2b trapexit2b; do 4 | sleep 2 5 | echo running $i.lua 6 | lua distributed/$i.lua 7 | echo 8 | done 9 | -------------------------------------------------------------------------------- /test/distributed3c.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | for i in register2c monitor2c link2c trapexit2c; do 4 | sleep 4 5 | echo running $i.lua 6 | lua distributed/$i.lua 7 | echo 8 | done 9 | --------------------------------------------------------------------------------
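
Usage sketch

The scheduler primitives in src/concurrent/scheduler.lua (loop, interrupt,
sleep, wait) are normally reached through the top-level concurrent module, as
the test scripts above do. The following minimal sketch, written in the same
style as the tests, uses only calls that appear in them (spawn, receive with a
timeout, loop); the receive() timeout presumably rides on the same
time.time()-based deadline bookkeeping that scheduler.sleep() uses:

    concurrent = require 'concurrent'

    function clock(n)
        for i = 1, n do
            -- a receive() timeout suspends this process until the deadline
            -- expires; control then returns here with msg == nil
            local msg = concurrent.receive(100)
            print('tick ' .. i)
        end
    end

    concurrent.spawn(clock, 3)

    concurrent.loop()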