├── .gitignore ├── .travis.yml ├── LICENSE ├── README.md ├── bench ├── group_by_bench.exs ├── group_by_one_bench.exs ├── partitioner_bench.exs ├── partitioner_one_bench.exs ├── send_bench.exs ├── send_one_bench.exs └── send_vs_encode_bench.exs ├── lib ├── manifold.ex └── manifold │ ├── partitioner.ex │ ├── sender.ex │ ├── utils.ex │ └── worker.ex ├── mix.exs ├── mix.lock ├── priv └── packets.png └── test ├── manifold_test.exs ├── support ├── child_node.ex └── receiver.ex └── test_helper.exs /.gitignore: -------------------------------------------------------------------------------- 1 | /_build 2 | /bench/snapshots 3 | /cover 4 | /deps 5 | erl_crash.dump 6 | *.ez 7 | .DS_Store -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | sudo: false 2 | language: elixir 3 | git: 4 | depth: 3 5 | elixir: 6 | - 1.5.2 7 | otp_release: 8 | - 20.1 9 | env: 10 | - MIX_ENV=test 11 | script: 12 | - mix test -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2017 Discord 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Manifold 2 | 3 | [![Master](https://travis-ci.org/discordapp/manifold.svg?branch=master)](https://travis-ci.org/discordapp/manifold) 4 | [![Hex.pm Version](http://img.shields.io/hexpm/v/manifold.svg?style=flat)](https://hex.pm/packages/manifold) 5 | 6 | Erlang and Elixir make it very easy to send messages between processes, even across the network, but there are a few pitfalls. 7 | 8 | - Sending a message to many PIDs across the network also copies the message across the network that many times. 9 | - Send calls cost about 70 µs/op, so doing them in a loop eventually gets too expensive. 10 | 11 | [Discord](https://discord.com) runs a single `GenServer` per Discord server, and some of these have ~100,000 PIDs connected to them from many different Erlang nodes. Increasingly, we noticed some of them falling behind on processing their message queues, and the culprit was the cost of 70 µs per `send/2` call multiplied by the number of connected sessions. How could we solve this?
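To make the cost concrete, the naive fan-out looks like the sketch below (our illustration, not library code; `pids` and `message` are placeholders). Every iteration pays the per-call `send/2` cost, and every remote PID receives its own copy of the message over distribution.

```elixir
# Naive fan-out: one send/2 per PID. Both the ~70 µs/op cost and the
# number of copies shipped over the network scale with length(pids).
Enum.each(pids, fn pid -> send(pid, message) end)
```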
12 | 13 | Inspired by a [blog post](http://www.ostinelli.net/boost-message-passing-between-erlang-nodes/) about boosting performance of message passing between nodes, Manifold was born. Manifold distributes the work of sending messages to the remote nodes of the PIDs, which guarantees that the sending process calls `send/2` at most once per involved remote node. Manifold does this by first grouping PIDs by their remote node and then sending to `Manifold.Partitioner` on each of those nodes. The partitioner then consistently hashes the PIDs using `:erlang.phash2/2`, groups them by the number of cores, sends each group to a child worker, and finally those workers send to the actual PIDs. This ensures the partitioner does not get overloaded and still provides the linearizability guaranteed by `send/2`. 14 | 15 | The results were great! We observed packets/sec drop by half immediately after deploying. The Discord servers in question were also finally able to keep up with their message queues. 16 | 17 | ![Packets Out Reduction](priv/packets.png) 18 | 19 | ## Usage 20 | 21 | Add it to `mix.exs` 22 | 23 | ```elixir 24 | defp deps do 25 | [{:manifold, "~> 1.0"}] 26 | end 27 | ``` 28 | 29 | Then just use it like the normal `send/2`, except it can also take a list of PIDs. 30 | 31 | ```elixir 32 | Manifold.send(self(), :hello) 33 | Manifold.send([self(), self()], :hello) 34 | ``` 35 | 36 | ### Options 37 | 38 | When using Manifold, there are two performance options you can choose to enable. 39 | 40 | #### pack_mode 41 | 42 | By default, Manifold sends the message using vanilla [External Term Format](https://www.erlang.org/doc/apps/erts/erl_ext_dist.html). If `pack_mode` is not specified, or is set to `nil` or `:etf`, then this mode is used. 43 | 44 | When messages are very large, `pack_mode` can be set to `:binary`, which causes Manifold to ensure that the term is only converted into [External Term Format](https://www.erlang.org/doc/apps/erts/erl_ext_dist.html) once. The encoding happens in the sending process (either the calling process or a `Manifold.Sender`, see `send_mode` for additional details), and the resulting binary is unpacked back into a term by the workers on the receiving node before delivery. 45 | 46 | The `:binary` optimization works best with large and deeply nested messages that are being sent to PIDs across multiple nodes. Without the optimization, a message sent to N nodes has to be translated into [External Term Format](https://www.erlang.org/doc/apps/erts/erl_ext_dist.html) N times. With large data structures, this operation can be expensive. With the optimization enabled, the encoding into binary happens once, and each sending process only needs to transmit the already-encoded binary over distribution. 47 | 48 | To send using the binary pack mode, just add the `pack_mode` argument: 49 | 50 | ```elixir 51 | Manifold.send(self(), :hello, pack_mode: :binary) 52 | ``` 53 | 54 | #### send_mode 55 | 56 | By default, Manifold sends the message over the network from the caller process. This is the behavior used when `send_mode` is not specified. 57 | 58 | When messages are very large, `send_mode` can be set to `:offload`, which offloads the send from the calling process to a pool of `Manifold.Sender` processes. To maintain the linearizability guaranteed by `send/2`, the same calling process 59 | always offloads the work to the same `Manifold.Sender` process. The size of the `Manifold.Sender` pool is configurable. 60 | 61 | This send mode is optional because its benefits are workload dependent.
The optimization works best when the cost of sending a message to a local process is less than the cost of sending a message over distribution. This is most common for messages that are very large in size. For some workloads, it might degrade overall performance. Use with caution. 62 | 63 | Caution: To maintain the linearizability guaranteed by `send/2`, do not mix calls to Manifold with and without offloading. Mixed use of the two different send modes to the same set of receiving nodes would break the linearizability guarantee. 64 | 65 | To use the `:offload` send mode, make sure the `Manifold.Sender` pool size is appropriate for the workload (the value below is only an example; the pool is capped at 128): 66 | 67 | ```elixir 68 | config :manifold, senders: 8 69 | ``` 70 | 71 | Then: 72 | 73 | ```elixir 74 | Manifold.send(self(), :hello, send_mode: :offload) 75 | ``` 76 | 77 | ### Configuration 78 | 79 | Manifold also takes a `gen_module` configuration option, which sets the module it dispatches to in order to actually call send. The default 80 | is `GenServer`. To set this variable, add the following to your `config.exs`: 81 | 82 | ```elixir 83 | config :manifold, gen_module: MyGenModule 84 | ``` 85 | 86 | In the above instance, `MyGenModule` must define a `cast/2` function that matches the types of `GenServer.cast/2`. 87 | 88 | 89 | ## License 90 | 91 | Manifold is released under [the MIT License](LICENSE). 92 | Check the [LICENSE](LICENSE) file for more information. 93 | -------------------------------------------------------------------------------- /bench/group_by_bench.exs: -------------------------------------------------------------------------------- 1 | defmodule GroupByBench do 2 | use Benchfella 3 | 4 | alias Manifold.Utils 5 | 6 | setup_all do 7 | pids = for _ <- 0..5000, do: spawn_link &loop/0 8 | {:ok, pids} 9 | end 10 | 11 | defp loop() do 12 | receive do 13 | _ -> loop() 14 | end 15 | end 16 | 17 | bench "group by 48" do 18 | bench_context 19 | |> Utils.group_by(&:erlang.phash2(&1, 48)) 20 | end 21 | 22 | bench "partition_pids 48" do 23 | bench_context 24 | |> Utils.partition_pids(48) 25 | end 26 | 27 | bench "group by 24" do 28 | bench_context 29 | |> Utils.group_by(&:erlang.phash2(&1, 24)) 30 | end 31 | 32 | bench "partition_pids 24" do 33 | bench_context 34 | |> Utils.partition_pids(24) 35 | end 36 | 37 | bench "group by 8" do 38 | bench_context 39 | |> Utils.group_by(&:erlang.phash2(&1, 8)) 40 | end 41 | 42 | bench "partition_pids 8" do 43 | bench_context 44 | |> Utils.partition_pids(8) 45 | end 46 | 47 | 48 | end -------------------------------------------------------------------------------- /bench/group_by_one_bench.exs: -------------------------------------------------------------------------------- 1 | defmodule GroupByOneBench do 2 | use Benchfella 3 | 4 | alias Manifold.Utils 5 | 6 | setup_all do 7 | pids = [spawn_link &loop/0] 8 | 9 | {:ok, pids} 10 | end 11 | 12 | defp loop() do 13 | receive do 14 | _ -> loop() 15 | end 16 | end 17 | 18 | 19 | bench "group by 48" do 20 | bench_context 21 | |> Utils.group_by(&:erlang.phash2(&1, 48)) 22 | end 23 | 24 | bench "partition_pids 48" do 25 | bench_context 26 | |> Utils.partition_pids(48) 27 | end 28 | 29 | bench "group by 24" do 30 | bench_context 31 | |> Utils.group_by(&:erlang.phash2(&1, 24)) 32 | end 33 | 34 | bench "partition_pids 24" do 35 | bench_context 36 | |> Utils.partition_pids(24) 37 | end 38 | 39 | bench "group by 8" do 40 | bench_context 41 | |> Utils.group_by(&:erlang.phash2(&1, 8)) 42 | end 43 | 44 | bench "partition_pids 8" do 45 | bench_context 46 | |> Utils.partition_pids(8) 47 | end 48 | end
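# What the two benches above compare: Utils.group_by/2 builds a map
# keyed here by :erlang.phash2(pid, n), while Utils.partition_pids/2
# fills a preallocated n-slot tuple of lists. Both produce the same
# bucketing; this single-PID variant isolates the fixed per-call
# overhead, which dominates when Manifold.send/2 targets one process.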
-------------------------------------------------------------------------------- /bench/partitioner_bench.exs: -------------------------------------------------------------------------------- 1 | defmodule WorkerSendBenches do 2 | use Benchfella 3 | 4 | alias Manifold.Utils 5 | 6 | defmodule Worker do 7 | use GenServer 8 | 9 | ## Client 10 | @spec start_link :: GenServer.on_start 11 | def start_link, do: GenServer.start_link(__MODULE__, []) 12 | 13 | @spec send(pid, [pid], term) :: :ok 14 | def send(pid, pids, message), do: GenServer.cast(pid, {:send, pids, message}) 15 | 16 | ## Server Callbacks 17 | @spec init([]) :: {:ok, nil} 18 | def init([]), do: {:ok, nil} 19 | 20 | def handle_cast({:send, _pids, _message}, nil) do 21 | {:noreply, nil} 22 | end 23 | 24 | def handle_cast(_message, nil), do: {:noreply, nil} 25 | end 26 | 27 | 28 | setup_all do 29 | workers = (for _ <- 0..15, do: Worker.start_link() |> elem(1)) |> List.to_tuple 30 | pids = for _ <- 0..200, do: spawn_link &loop/0 31 | 32 | pids_by_partition = Utils.partition_pids(pids, tuple_size(workers)) 33 | pids_by_partition_map = Utils.group_by(pids, &Utils.partition_for(&1, tuple_size(workers))) 34 | 35 | {:ok, {workers, pids_by_partition, pids_by_partition_map}} 36 | end 37 | 38 | defp loop() do 39 | receive do 40 | _ -> loop() 41 | end 42 | end 43 | 44 | bench "enum reduce send" do 45 | {workers, _, pids_by_partition_map} = bench_context 46 | Enum.reduce(pids_by_partition_map, workers, fn ({partition, pids}, state) -> 47 | {worker_pid, state} = get_worker_pid(partition, state) 48 | Worker.send(worker_pid, pids, :hi) 49 | state 50 | end) 51 | end 52 | 53 | bench "do_send send" do 54 | {workers, pids_by_partition, _} = bench_context 55 | do_send(:hi, pids_by_partition, workers, 0, tuple_size(pids_by_partition)) 56 | end 57 | 58 | 59 | defp get_worker_pid(partition, state) do 60 | case elem(state, partition) do 61 | nil -> 62 | {:ok, pid} = Worker.start_link() 63 | {pid, put_elem(state, partition, pid)} 64 | pid -> 65 | {pid, state} 66 | end 67 | end 68 | 69 | defp do_send(_message, _pids_by_partition, _workers, partitions, partitions), do: :ok 70 | defp do_send(message, pids_by_partition, workers, partition, partitions) do 71 | pids = elem(pids_by_partition, partition) 72 | if pids != [] do 73 | Worker.send(elem(workers, partition), pids, message) 74 | end 75 | do_send(message, pids_by_partition, workers, partition + 1, partitions) 76 | end 77 | end -------------------------------------------------------------------------------- /bench/partitioner_one_bench.exs: -------------------------------------------------------------------------------- 1 | defmodule WorkerSendOneBenches do 2 | use Benchfella 3 | 4 | alias Manifold.Utils 5 | 6 | defmodule Worker do 7 | use GenServer 8 | 9 | ## Client 10 | @spec start_link :: GenServer.on_start 11 | def start_link, do: GenServer.start_link(__MODULE__, []) 12 | 13 | @spec send(pid, [pid], term) :: :ok 14 | def send(pid, pids, message), do: GenServer.cast(pid, {:send, pids, message}) 15 | 16 | ## Server Callbacks 17 | @spec init([]) :: {:ok, nil} 18 | def init([]), do: {:ok, nil} 19 | 20 | def handle_cast({:send, _pids, _message}, nil) do 21 | {:noreply, nil} 22 | end 23 | 24 | def handle_cast(_message, nil), do: {:noreply, nil} 25 | end 26 | 27 | 28 | setup_all do 29 | workers = (for _ <- 0..47, do: Worker.start_link() |> elem(1)) |> List.to_tuple 30 | pids = [spawn_link &loop/0] 31 | 32 | pids_by_partition = Utils.partition_pids(pids, tuple_size(workers)) 33 | pids_by_partition_map = 
Utils.group_by(pids, &Utils.partition_for(&1, tuple_size(workers))) 34 | 35 | {:ok, {workers, pids_by_partition, pids_by_partition_map}} 36 | end 37 | 38 | defp loop() do 39 | receive do 40 | _ -> loop() 41 | end 42 | end 43 | 44 | bench "enum reduce send" do 45 | {workers, _, pids_by_partition_map} = bench_context 46 | Enum.reduce(pids_by_partition_map, workers, fn ({partition, pids}, state) -> 47 | {worker_pid, state} = get_worker_pid(partition, state) 48 | Worker.send(worker_pid, pids, :hi) 49 | state 50 | end) 51 | end 52 | 53 | bench "do_send send" do 54 | {workers, pids_by_partition, _} = bench_context 55 | do_send(:hi, pids_by_partition, workers, 0, tuple_size(pids_by_partition)) 56 | end 57 | 58 | 59 | defp get_worker_pid(partition, state) do 60 | case elem(state, partition) do 61 | nil -> 62 | {:ok, pid} = Worker.start_link() 63 | {pid, put_elem(state, partition, pid)} 64 | pid -> 65 | {pid, state} 66 | end 67 | end 68 | 69 | defp do_send(_message, _pids_by_partition, _workers, partitions, partitions), do: :ok 70 | defp do_send(message, pids_by_partition, workers, partition, partitions) do 71 | pids = elem(pids_by_partition, partition) 72 | if pids != [] do 73 | Worker.send(elem(workers, partition), pids, message) 74 | end 75 | do_send(message, pids_by_partition, workers, partition + 1, partitions) 76 | end 77 | end -------------------------------------------------------------------------------- /bench/send_bench.exs: -------------------------------------------------------------------------------- 1 | defmodule SendBench do 2 | use Benchfella 3 | 4 | alias Manifold.Utils 5 | 6 | setup_all do 7 | pids = for _ <- 0..200, do: spawn_link &loop/0 8 | 9 | {:ok, pids} 10 | end 11 | 12 | defp loop() do 13 | receive do 14 | _ -> loop() 15 | end 16 | end 17 | 18 | bench "send enum each" do 19 | bench_context |> Enum.each(&send(&1, :hi)) 20 | end 21 | 22 | bench "send list comp" do 23 | for pid <- bench_context, do: send(pid, :hi) 24 | end 25 | 26 | bench "send fast reducer" do 27 | send_r(bench_context, :hi) 28 | end 29 | 30 | defp send_r([], _msg), do: :ok 31 | defp send_r([pid | pids], msg) do 32 | send(pid, msg) 33 | send_r(pids, msg) 34 | end 35 | 36 | end -------------------------------------------------------------------------------- /bench/send_one_bench.exs: -------------------------------------------------------------------------------- 1 | defmodule SendBenchOne do 2 | use Benchfella 3 | 4 | setup_all do 5 | pid = spawn_link &loop/0 6 | {:ok, pid} 7 | end 8 | 9 | defp loop() do 10 | receive do 11 | _ -> loop() 12 | end 13 | end 14 | 15 | bench "send enum each" do 16 | [bench_context] |> Enum.each(&send(&1, :hi)) 17 | end 18 | 19 | bench "send list comp" do 20 | for pid <- [bench_context], do: send(pid, :hi) 21 | end 22 | 23 | bench "send one" do 24 | send(bench_context, :hi) 25 | end 26 | 27 | bench "send fast reducer" do 28 | send_r([bench_context], :hi) 29 | end 30 | 31 | defp send_r([], _msg), do: :ok 32 | defp send_r([pid | pids], msg) do 33 | send(pid, msg) 34 | send_r(pids, msg) 35 | end 36 | 37 | end -------------------------------------------------------------------------------- /bench/send_vs_encode_bench.exs: -------------------------------------------------------------------------------- 1 | defmodule SendVsEncodeBench do 2 | use Benchfella 3 | 4 | defmodule Receiver do 5 | def loop do 6 | receive do 7 | _ -> 8 | loop() 9 | end 10 | end 11 | end 12 | 13 | setup_all do 14 | pid = spawn(Receiver, :loop, []) 15 | {:ok, pid} 16 | end 17 | 18 | bench "sending message", 
[message: gen_message()] do 19 | send(bench_context, message) 20 | end 21 | 22 | bench "encoding message", [message: gen_message()] do 23 | :erlang.term_to_binary(message) 24 | end 25 | 26 | defp gen_message() do 27 | Map.new(1..1_000_000, fn item -> {item, :erlang.unique_integer()} end) 28 | end 29 | end 30 | -------------------------------------------------------------------------------- /lib/manifold.ex: -------------------------------------------------------------------------------- 1 | defmodule Manifold do 2 | use Application 3 | 4 | alias Manifold.Partitioner 5 | alias Manifold.Sender 6 | alias Manifold.Utils 7 | 8 | @type pack_mode :: :binary | :etf | nil 9 | 10 | @type pack_mode_option :: {:pack_mode, pack_mode()} 11 | @type send_mode_option :: {:send_mode, :offload} 12 | @type option :: pack_mode_option() | send_mode_option() 13 | 14 | @max_partitioners 32 15 | @partitioners min(Application.get_env(:manifold, :partitioners, 1), @max_partitioners) 16 | @workers_per_partitioner Application.get_env(:manifold, :workers_per_partitioner, System.schedulers_online) 17 | 18 | @max_senders 128 19 | @senders min(Application.get_env(:manifold, :senders, System.schedulers_online), @max_senders) 20 | 21 | ## OTP 22 | 23 | def start(_type, _args) do 24 | import Supervisor.Spec, warn: false 25 | 26 | partitioners = 27 | for partitioner_id <- 0..(@partitioners - 1) do 28 | Partitioner.child_spec(@workers_per_partitioner, name: partitioner_for(partitioner_id)) 29 | end 30 | 31 | senders = 32 | for sender_id <- 0..(@senders - 1) do 33 | Sender.child_spec(name: sender_for(sender_id)) 34 | end 35 | 36 | Supervisor.start_link(partitioners ++ senders, 37 | strategy: :one_for_one, 38 | max_restarts: 10, 39 | name: __MODULE__.Supervisor 40 | ) 41 | end 42 | 43 | ## Client 44 | 45 | @spec valid_send_options?(Keyword.t()) :: boolean() 46 | def valid_send_options?(options) when is_list(options) do 47 | valid_options = [ 48 | {:pack_mode, :binary}, 49 | {:pack_mode, :etf}, 50 | {:send_mode, :offload}, 51 | ] 52 | 53 | # Keywords could have duplicate keys, in which case the first key wins. 54 | Keyword.keys(options) 55 | |> Enum.dedup() 56 | |> Enum.reduce(true, fn key, acc -> acc and {key, options[key]} in valid_options end) 57 | end 58 | 59 | def valid_send_options?(_options) do 60 | false 61 | end 62 | 63 | @spec send([pid() | nil] | pid() | nil, message :: term(), options :: [option()]) :: :ok 64 | def send(pid, message, options \\ []) 65 | def send([pid], message, options), do: __MODULE__.send(pid, message, options) 66 | 67 | def send(pids, message, options) when is_list(pids) do 68 | case options[:send_mode] do 69 | :offload -> 70 | Sender.send(current_sender(), current_partitioner(), pids, message, options[:pack_mode]) 71 | 72 | nil -> 73 | message = Utils.pack_message(options[:pack_mode], message) 74 | 75 | partitioner_name = current_partitioner() 76 | 77 | grouped_by = 78 | Utils.group_by(pids, fn 79 | nil -> nil 80 | pid -> node(pid) 81 | end) 82 | 83 | for {node, pids} <- grouped_by, 84 | node != nil, 85 | do: Partitioner.send({partitioner_name, node}, pids, message) 86 | 87 | :ok 88 | end 89 | end 90 | 91 | def send(pid, message, options) when is_pid(pid) do 92 | case options[:send_mode] do 93 | :offload -> 94 | # To maintain linearizability guaranteed by send/2, we have to send 95 | # it to the sender process, even for a single receiving pid. 
96 | # 97 | # Since we know we are only sending to a single pid, there's no 98 | # performance benefit to packing the message, so we will always send as 99 | # raw etf. 100 | Sender.send(current_sender(), current_partitioner(), [pid], message, :etf) 101 | 102 | nil -> 103 | Partitioner.send({current_partitioner(), node(pid)}, [pid], message) 104 | end 105 | end 106 | 107 | def send(nil, _message, _options), do: :ok 108 | 109 | def set_partitioner_key(key) do 110 | partitioner = key 111 | |> Utils.hash() 112 | |> rem(@partitioners) 113 | |> partitioner_for() 114 | 115 | Process.put(:manifold_partitioner, partitioner) 116 | end 117 | 118 | def current_partitioner() do 119 | case Process.get(:manifold_partitioner) do 120 | nil -> 121 | partitioner_for(self()) 122 | partitioner -> 123 | partitioner 124 | end 125 | end 126 | 127 | def partitioner_for(pid) when is_pid(pid) do 128 | pid 129 | |> Utils.partition_for(@partitioners) 130 | |> partitioner_for 131 | end 132 | 133 | # The 0th partitioner does not have a number in its process name for backwards compatibility 134 | # purposes. 135 | def partitioner_for(0), do: Manifold.Partitioner 136 | for partitioner_id <- 1..(@max_partitioners - 1) do 137 | def partitioner_for(unquote(partitioner_id)) do 138 | unquote(:"Manifold.Partitioner_#{partitioner_id}") 139 | end 140 | end 141 | 142 | def set_sender_key(key) do 143 | sender = 144 | key 145 | |> Utils.hash() 146 | |> rem(@senders) 147 | |> sender_for() 148 | 149 | Process.put(:manifold_sender, sender) 150 | end 151 | 152 | def current_sender() do 153 | case Process.get(:manifold_sender) do 154 | nil -> 155 | sender_for(self()) 156 | 157 | sender -> 158 | sender 159 | end 160 | end 161 | 162 | def sender_for(pid) when is_pid(pid) do 163 | pid 164 | |> Utils.partition_for(@senders) 165 | |> sender_for 166 | end 167 | 168 | for sender_id <- 0..(@max_senders - 1) do 169 | def sender_for(unquote(sender_id)) do 170 | unquote(:"Manifold.Sender_#{sender_id}") 171 | end 172 | end 173 | end 174 | -------------------------------------------------------------------------------- /lib/manifold/partitioner.ex: -------------------------------------------------------------------------------- 1 | defmodule Manifold.Partitioner do 2 | use GenServer 3 | 4 | require Logger 5 | 6 | alias Manifold.{Worker, Utils} 7 | 8 | @gen_module Application.get_env(:manifold, :gen_module, GenServer) 9 | 10 | ## Client 11 | 12 | @spec child_spec(non_neg_integer, Keyword.t) :: tuple 13 | def child_spec(partitions, opts \\ []) do 14 | import Supervisor.Spec, warn: false 15 | supervisor(__MODULE__, [partitions, opts], id: Keyword.get(opts, :name, __MODULE__)) 16 | end 17 | 18 | @spec start_link(non_neg_integer, Keyword.t) :: GenServer.on_start 19 | def start_link(partitions, opts \\ []) do 20 | GenServer.start_link(__MODULE__, partitions, opts) 21 | end 22 | 23 | @spec send(partitioner :: GenServer.server(), pids :: [pid()], message :: term()) :: :ok 24 | def send(partitioner, pids, message) do 25 | @gen_module.cast(partitioner, {:send, pids, message}) 26 | end 27 | 28 | ## Server Callbacks 29 | 30 | def init(partitions) do 31 | # Set optimal process flags 32 | Process.flag(:trap_exit, true) 33 | Process.flag(:message_queue_data, :off_heap) 34 | workers = for _ <- 1..partitions do 35 | {:ok, pid} = Worker.start_link() 36 | pid 37 | end 38 | schedule_next_hibernate() 39 | {:ok, List.to_tuple(workers)} 40 | end 41 | 42 | def terminate(_reason, _state), do: :ok 43 | 44 | def handle_call(:which_children, _from, state) do 45 | children = for pid <-
Tuple.to_list(state), is_pid(pid) do 46 | {:undefined, pid, :worker, [Worker]} 47 | end 48 | {:reply, children, state} 49 | end 50 | 51 | def handle_call(:count_children, _from, state) do 52 | {:reply, [ 53 | specs: 1, 54 | active: tuple_size(state), 55 | supervisors: 0, 56 | workers: tuple_size(state) 57 | ], state} 58 | end 59 | 60 | def handle_call(_message, _from, state) do 61 | {:reply, :error, state} 62 | end 63 | 64 | # Specialized handling for a cast to a single pid. 65 | def handle_cast({:send, [pid], message}, state) do 66 | partition = Utils.partition_for(pid, tuple_size(state)) 67 | Worker.send(elem(state, partition), [pid], message) 68 | {:noreply, state} 69 | end 70 | 71 | def handle_cast({:send, pids, message}, state) do 72 | partitions = tuple_size(state) 73 | pids_by_partition = Utils.partition_pids(pids, partitions) 74 | do_send(message, pids_by_partition, state, 0, partitions) 75 | {:noreply, state} 76 | end 77 | 78 | def handle_cast(_message, state) do 79 | {:noreply, state} 80 | end 81 | 82 | def handle_info({:EXIT, pid, reason}, state) do 83 | Logger.warn "manifold worker exited: #{inspect reason}" 84 | 85 | # Replace the dead worker with a freshly started one, keeping its slot. 86 | state = state 87 | |> Tuple.to_list 88 | |> Enum.map(fn 89 | ^pid -> Worker.start_link() |> elem(1) 90 | pid -> pid 91 | end) 92 | |> List.to_tuple 93 | 94 | {:noreply, state} 95 | end 96 | 97 | def handle_info(:hibernate, state) do 98 | schedule_next_hibernate() 99 | {:noreply, state, :hibernate} 100 | end 101 | 102 | def handle_info(_message, state) do 103 | {:noreply, state} 104 | end 105 | 106 | defp do_send(_message, _pids_by_partition, _workers, partitions, partitions), do: :ok 107 | defp do_send(message, pids_by_partition, workers, partition, partitions) do 108 | pids = elem(pids_by_partition, partition) 109 | if pids != [] do 110 | Worker.send(elem(workers, partition), pids, message) 111 | end 112 | do_send(message, pids_by_partition, workers, partition + 1, partitions) 113 | end 114 | 115 | defp schedule_next_hibernate() do 116 | Process.send_after(self(), :hibernate, Utils.next_hibernate_delay()) 117 | end 118 | end 119 | -------------------------------------------------------------------------------- /lib/manifold/sender.ex: -------------------------------------------------------------------------------- 1 | defmodule Manifold.Sender do 2 | use GenServer 3 | 4 | alias Manifold.Utils 5 | 6 | @gen_module Application.get_env(:manifold, :gen_module, GenServer) 7 | 8 | ## Client 9 | 10 | @spec child_spec(Keyword.t()) :: tuple 11 | def child_spec(opts \\ []) do 12 | import Supervisor.Spec, warn: false 13 | supervisor(__MODULE__, [:ok, opts], id: Keyword.get(opts, :name, __MODULE__)) 14 | end 15 | 16 | @spec start_link(:ok, Keyword.t()) :: GenServer.on_start() 17 | def start_link(:ok, opts \\ []) do 18 | GenServer.start_link(__MODULE__, :ok, opts) 19 | end 20 | 21 | @spec send(sender :: GenServer.server(), partitioner :: GenServer.server(), pids :: [pid()], message :: term(), pack_mode :: Manifold.pack_mode()) :: :ok 22 | def send(sender, partitioner, pids, message, pack_mode) do 23 | @gen_module.cast(sender, {:send, partitioner, pids, message, pack_mode}) 24 | end 25 | 26 | ## Server Callbacks 27 | 28 | def init(:ok) do 29 | # Set optimal process flags 30 | Process.flag(:message_queue_data, :off_heap) 31 | schedule_next_hibernate() 32 | {:ok, nil} 33 | end 34 | 35 | def handle_cast({:send, partitioner, pids, message, pack_mode}, nil) do 36 | message = Utils.pack_message(pack_mode, message) 37 | 38 | grouped_by = 39 | Utils.group_by(pids, fn 40 | nil -> nil 41 | pid -> node(pid)
42 | end) 43 | 44 | for {node, pids} <- grouped_by, node != nil do 45 | Manifold.Partitioner.send({partitioner, node}, pids, message) 46 | end 47 | 48 | {:noreply, nil} 49 | end 50 | 51 | def handle_cast(_message, nil) do 52 | {:noreply, nil} 53 | end 54 | 55 | def handle_info(:hibernate, nil) do 56 | schedule_next_hibernate() 57 | {:noreply, nil, :hibernate} 58 | end 59 | 60 | def handle_info(_message, nil) do 61 | {:noreply, nil} 62 | end 63 | 64 | defp schedule_next_hibernate() do 65 | Process.send_after(self(), :hibernate, Utils.next_hibernate_delay()) 66 | end 67 | end 68 | -------------------------------------------------------------------------------- /lib/manifold/utils.ex: -------------------------------------------------------------------------------- 1 | defmodule Manifold.Utils do 2 | @type groups :: %{any => [pid]} 3 | @type key_fun :: (any -> any) 4 | 5 | @doc """ 6 | A faster version of Enum.group_by with fewer bells and whistles. 7 | """ 8 | @spec group_by([pid], key_fun) :: groups 9 | def group_by(pids, key_fun), do: group_by(pids, key_fun, %{}) 10 | 11 | @spec group_by([pid], key_fun, groups) :: groups 12 | defp group_by([pid | pids], key_fun, groups) do 13 | key = key_fun.(pid) 14 | group = Map.get(groups, key, []) 15 | group_by(pids, key_fun, Map.put(groups, key, [pid | group])) 16 | end 17 | defp group_by([], _key_fun, groups), do: groups 18 | 19 | @doc """ 20 | Partitions a list of pids into a tuple of lists of pids, grouped by the result of :erlang.phash2/2 21 | """ 22 | @spec partition_pids([pid], integer) :: tuple 23 | def partition_pids(pids, partitions) do 24 | do_partition_pids(pids, partitions, Tuple.duplicate([], partitions)) 25 | end 26 | 27 | defp do_partition_pids([pid | pids], partitions, pids_by_partition) do 28 | partition = partition_for(pid, partitions) 29 | pids_in_partition = elem(pids_by_partition, partition) 30 | do_partition_pids(pids, partitions, put_elem(pids_by_partition, partition, [pid | pids_in_partition])) 31 | end 32 | defp do_partition_pids([], _partitions, pids_by_partition), do: pids_by_partition 33 | 34 | @doc """ 35 | Computes the partition for a given pid using :erlang.phash2/2 36 | """ 37 | @spec partition_for(pid, integer) :: integer 38 | def partition_for(pid, partitions) do 39 | :erlang.phash2(pid, partitions) 40 | end 41 | 42 | @spec hash(atom | binary | integer) :: integer 43 | def hash(key) when is_binary(key) do 44 | <<_ :: binary-size(8), value :: unsigned-little-integer-size(64)>> = :erlang.md5(key) 45 | value 46 | end 47 | def hash(key), do: hash("#{key}") 48 | 49 | @doc """ 50 | Gets the next delay at which we should attempt to hibernate a worker or partitioner process.
51 | """ 52 | @spec next_hibernate_delay() :: integer 53 | def next_hibernate_delay() do 54 | hibernate_delay = Application.get_env(:manifold, :hibernate_delay, 60_000) 55 | hibernate_jitter = Application.get_env(:manifold, :hibernate_jitter, 30_000) 56 | 57 | hibernate_delay + :rand.uniform(hibernate_jitter) 58 | end 59 | 60 | @spec pack_message(mode :: Manifold.pack_mode(), message :: term()) :: term() 61 | def pack_message(:binary, message), do: {:manifold_binary, :erlang.term_to_binary(message)} 62 | def pack_message(_mode, message), do: message 63 | 64 | @spec unpack_message(message :: term()) :: term() 65 | def unpack_message({:manifold_binary, binary}), do: :erlang.binary_to_term(binary) 66 | def unpack_message(message), do: message 67 | end 68 | -------------------------------------------------------------------------------- /lib/manifold/worker.ex: -------------------------------------------------------------------------------- 1 | defmodule Manifold.Worker do 2 | use GenServer 3 | alias Manifold.Utils 4 | 5 | ## Client 6 | @spec start_link :: GenServer.on_start 7 | def start_link, do: GenServer.start_link(__MODULE__, []) 8 | 9 | @spec send(pid, [pid], term) :: :ok 10 | def send(pid, pids, message), do: GenServer.cast(pid, {:send, pids, message}) 11 | 12 | ## Server Callbacks 13 | @spec init([]) :: {:ok, nil} 14 | def init([]) do 15 | schedule_next_hibernate() 16 | {:ok, nil} 17 | end 18 | 19 | def handle_cast({:send, [pid], message}, nil) do 20 | message = Utils.unpack_message(message) 21 | send(pid, message) 22 | {:noreply, nil} 23 | end 24 | 25 | def handle_cast({:send, pids, message}, nil) do 26 | message = Utils.unpack_message(message) 27 | for pid <- pids, do: send(pid, message) 28 | {:noreply, nil} 29 | end 30 | 31 | def handle_cast(_message, nil), do: {:noreply, nil} 32 | 33 | def handle_info(:hibernate, nil) do 34 | schedule_next_hibernate() 35 | {:noreply, nil, :hibernate} 36 | end 37 | 38 | defp schedule_next_hibernate() do 39 | Process.send_after(self(), :hibernate, Utils.next_hibernate_delay()) 40 | end 41 | end 42 | -------------------------------------------------------------------------------- /mix.exs: -------------------------------------------------------------------------------- 1 | defmodule Manifold.Mixfile do 2 | use Mix.Project 3 | 4 | def project do 5 | [ 6 | app: :manifold, 7 | version: "1.6.0", 8 | elixir: "~> 1.5", 9 | build_embedded: Mix.env == :prod, 10 | start_permanent: Mix.env == :prod, 11 | deps: deps(), 12 | package: package(), 13 | elixirc_paths: elixirc_paths(Mix.env()) 14 | ] 15 | end 16 | 17 | def application do 18 | [ 19 | applications: [:logger], 20 | mod: {Manifold, []}, 21 | ] 22 | end 23 | 24 | defp deps do 25 | [ 26 | {:benchfella, "~> 0.3.0", only: [:dev, :test], runtime: false}, 27 | ] 28 | end 29 | 30 | defp elixirc_paths(:test) do 31 | elixirc_paths(:any) ++ ["test/support"] 32 | end 33 | 34 | defp elixirc_paths(_) do 35 | ["lib"] 36 | end 37 | 38 | def package do 39 | [ 40 | name: :manifold, 41 | description: "Fast batch message passing between nodes for Erlang/Elixir.", 42 | maintainers: [], 43 | licenses: ["MIT"], 44 | files: ["lib/*", "mix.exs", "README*", "LICENSE*"], 45 | links: %{ 46 | "GitHub" => "https://github.com/discordapp/manifold", 47 | }, 48 | ] 49 | end 50 | end 51 | -------------------------------------------------------------------------------- /mix.lock: -------------------------------------------------------------------------------- 1 | %{ 2 | "benchfella": {:hex, :benchfella, "0.3.5", 
"b2122c234117b3f91ed7b43b6e915e19e1ab216971154acd0a80ce0e9b8c05f5", [:mix], [], "hexpm", "23f27cbc482cbac03fc8926441eb60a5e111759c17642bac005c3225f5eb809d"}, 3 | } 4 | -------------------------------------------------------------------------------- /priv/packets.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/discord/manifold/ca14074ff6fc79adb91345099f2d27cb246e3708/priv/packets.png -------------------------------------------------------------------------------- /test/manifold_test.exs: -------------------------------------------------------------------------------- 1 | defmodule ManifoldTest do 2 | use ExUnit.Case 3 | doctest Manifold 4 | 5 | test "valid_send_options?" do 6 | assert Manifold.valid_send_options?([]) 7 | assert Manifold.valid_send_options?(send_mode: :offload) 8 | assert Manifold.valid_send_options?(send_mode: :offload, send_mode: :bad) 9 | 10 | refute Manifold.valid_send_options?(send_mode: :bad, send_mode: :offload) 11 | refute Manifold.valid_send_options?(unknown: :bad) 12 | refute Manifold.valid_send_options?(:junk) 13 | refute Manifold.valid_send_options?({:junk, :junk}) 14 | end 15 | 16 | test "many pids" do 17 | me = self() 18 | message = :hello 19 | pids = for _ <- 0..10000 do 20 | spawn_link fn -> 21 | receive do 22 | message -> send(me, {self(), message}) 23 | end 24 | end 25 | end 26 | Manifold.send(pids, message) 27 | for pid <- pids do 28 | assert_receive {^pid, ^message}, 1000 29 | end 30 | end 31 | 32 | test "pack_mode option" do 33 | me = self() 34 | message = :hello 35 | pids = for _ <- 0..10000 do 36 | spawn_link fn -> 37 | receive do 38 | message -> send(me, {self(), message}) 39 | end 40 | end 41 | end 42 | Manifold.send(pids, message, pack_mode: :binary) 43 | for pid <- pids do 44 | assert_receive {^pid, ^message}, 1000 45 | end 46 | end 47 | 48 | test "send to list of one" do 49 | me = self() 50 | message = :hello 51 | pid = spawn_link fn -> 52 | receive do 53 | message -> send(me, message) 54 | end 55 | end 56 | Manifold.send([pid], message) 57 | assert_receive ^message 58 | end 59 | 60 | test "send to one" do 61 | me = self() 62 | message = :hello 63 | pid = spawn_link fn -> 64 | receive do 65 | message -> send(me, message) 66 | end 67 | end 68 | Manifold.send(pid, message) 69 | assert_receive ^message 70 | end 71 | 72 | test "send to nil" do 73 | assert Manifold.send([nil], :hi) == :ok 74 | assert Manifold.send(nil, :hi) == :ok 75 | end 76 | 77 | test "send with nil in list wont blow up" do 78 | me = self() 79 | message = :hello 80 | pid = spawn_link fn -> 81 | receive do 82 | message -> send(me, message) 83 | end 84 | end 85 | Manifold.send([nil, pid, nil], message) 86 | assert_receive ^message 87 | end 88 | 89 | test "send with pinned process" do 90 | me = self() 91 | message = :hello 92 | pid = spawn_link fn -> 93 | receive do 94 | message -> send(me, message) 95 | end 96 | receive do 97 | message -> send(me, message) 98 | end 99 | end 100 | assert Process.get(:manifold_partitioner) == nil 101 | Manifold.set_partitioner_key("hello") 102 | assert Process.get(:manifold_partitioner) == Manifold.Partitioner 103 | 104 | Manifold.send([nil, pid, nil], message) 105 | Manifold.send(pid, message) 106 | assert_receive ^message 107 | assert_receive ^message 108 | end 109 | 110 | test "many pids using :offload" do 111 | {:ok, child, _} = ChildNode.start_link(:manifold, :child) 112 | 113 | me = self() 114 | message = {:hello, me} 115 | 116 | pids = 117 | for _ <- 0..10000 do 118 | 
Node.spawn_link(child, Receiver, :hello_handler, []) 119 | end 120 | 121 | Manifold.send(pids, message, send_mode: :offload) 122 | 123 | for pid <- pids do 124 | assert_receive {^pid, ^message}, 1000 125 | end 126 | end 127 | 128 | defmacro assert_next_receive(pattern, timeout \\ 100) do 129 | quote do 130 | receive do 131 | message -> 132 | assert unquote(pattern) = message 133 | after 134 | unquote(timeout) -> 135 | raise "timeout" 136 | end 137 | end 138 | end 139 | 140 | test "send/2 linearization guarantees with :offload" do 141 | {:ok, child, _} = ChildNode.start_link(:manifold, :child) 142 | 143 | # Set up several receiving pids, but only the first pid echoes 144 | # the message back to the sender... 145 | pids = 146 | for n <- 0..2 do 147 | if n == 0 do 148 | Node.spawn_link(child, Receiver, :hello_reply_loop, []) 149 | else 150 | Node.spawn_link(child, Receiver, :hello_noop_loop, []) 151 | end 152 | end 153 | 154 | me = self() 155 | [pid | _] = pids 156 | 157 | # Fire off a bunch of messages, with some sent only to the 158 | # first receiving pid, while others are sent to all pids. 159 | for n <- 0..1000 do 160 | message = {:hello, me, n} 161 | 162 | if rem(n, 2) == 0 do 163 | Manifold.send(pid, message, send_mode: :offload) 164 | else 165 | Manifold.send(pids, message, send_mode: :offload) 166 | end 167 | end 168 | 169 | # Expect the messages to be echoed back from the first 170 | # receiving pid in order. 171 | for n <- 0..1000 do 172 | message = {:hello, me, n} 173 | assert_next_receive({^pid, ^message}, 1000) 174 | end 175 | end 176 | end 177 | -------------------------------------------------------------------------------- /test/support/child_node.ex: -------------------------------------------------------------------------------- 1 | defmodule ChildNode do 2 | @moduledoc """ 3 | ChildNode provides facilities for starting another Erlang node on the current machine. 4 | 5 | This module enhances and abstracts the Erlang `:slave` module. After calling `:slave.start` to 6 | make sure the child node is running, it ensures that Elixir is started, after which it will run 7 | any function passed in as the `:on_start` param. This function must be compiled and loaded on 8 | both nodes. 9 | 10 | After that, control is handed back to the caller, who can use the `:rpc` module to invoke 11 | functions remotely. 12 | 13 | The child node's process is linked to the caller's process, so if the caller dies, so will the 14 | child node. 15 | 16 | If additional logging is required, set the `enable_sasl` option to `true`. 17 | """ 18 | 19 | @type param :: {:enable_sasl, boolean} | {:on_start, (() -> any)} 20 | @type params :: [param] 21 | 22 | defmodule Runner do 23 | @moduledoc """ 24 | When the new node starts up, we often want to set up a supervision tree by calling 25 | a function with `:rpc.call`. However, when the call ends, all the linked processes 26 | in the rpc call will die. This runner encapsulates them and doesn't link to its caller, 27 | so that any processes started by `Runner` will continue to live after the `:rpc` call.
28 | """ 29 | use GenServer 30 | 31 | def start(mod, fun, args) do 32 | GenServer.start(__MODULE__, [mod, fun, args]) 33 | end 34 | 35 | def start(init_fn) when is_function(init_fn) do 36 | GenServer.start(__MODULE__, [init_fn]) 37 | end 38 | 39 | def init([mod, fun, args]) do 40 | rv = apply(mod, fun, args) 41 | {:ok, rv} 42 | end 43 | 44 | def init([init_fn]) do 45 | {:ok, init_fn} 46 | end 47 | 48 | def get(runner_pid) do 49 | GenServer.call(runner_pid, :get) 50 | end 51 | 52 | def do_init(runner_pid, args) do 53 | GenServer.call(runner_pid, {:do_init, args}) 54 | end 55 | 56 | def handle_call({:do_init, args}, _from, init_fn) do 57 | {:reply, init_fn.(args), init_fn} 58 | end 59 | 60 | def handle_call(:get, _from, v) do 61 | {:reply, v, v} 62 | end 63 | end 64 | 65 | @spec start_link(Application.t(), atom, params) :: {:ok, pid} | {:error, any} 66 | def start_link(app_to_start, node_name, params \\ [], timeout \\ 5_000) do 67 | unless Node.alive?() do 68 | {:ok, _} = Node.start(:"local@0.0.0.0") 69 | end 70 | 71 | code_paths = Enum.join(:code.get_path(), " ") 72 | 73 | default_node_start_args = [ 74 | "-setcookie #{Node.get_cookie()}", 75 | "-pa #{code_paths}", 76 | "-connect_all false" 77 | ] 78 | 79 | node_start_args = 80 | if params[:enable_sasl] do 81 | default_node_start_args ++ ["-logger handle_sasl_reports true"] 82 | else 83 | default_node_start_args 84 | end 85 | |> Enum.join(" ") 86 | |> String.to_charlist() 87 | 88 | node_name = to_node_name(node_name) 89 | {:ok, node_name} = :slave.start_link('0.0.0.0', node_name, node_start_args) 90 | {:ok, _} = :rpc.call(node_name, :application, :ensure_all_started, [:elixir]) 91 | 92 | on_start = params[:on_start] 93 | rpc_args = [node_name, app_to_start, on_start, self()] 94 | 95 | case :rpc.call(node_name, __MODULE__, :on_start, rpc_args, timeout) do 96 | {:ok, start_fn_results} -> 97 | {:ok, node_name, start_fn_results} 98 | 99 | {:badrpc, :timeout} -> 100 | {:error, :timeout} 101 | end 102 | end 103 | 104 | def on_start(node_name, app_to_start, start_callback, _caller) do 105 | case app_to_start do 106 | apps when is_list(apps) -> 107 | for app <- apps do 108 | {:ok, _} = Application.ensure_all_started(app) 109 | end 110 | 111 | app when is_atom(app) -> 112 | {:ok, _started_apps} = Application.ensure_all_started(app) 113 | end 114 | 115 | start_fn_results = 116 | case start_callback do 117 | callback when is_function(callback) -> 118 | {:ok, runner_pid} = Runner.start(callback) 119 | Runner.do_init(runner_pid, node_name) 120 | 121 | {m, f, a} -> 122 | {:ok, runner_pid} = Runner.start(m, f, a) 123 | Runner.get(runner_pid) 124 | 125 | nil -> 126 | nil 127 | end 128 | 129 | {:ok, start_fn_results} 130 | end 131 | 132 | def run(node, func) do 133 | {:ok, runner_pid} = :rpc.call(node, Runner, :start, [func]) 134 | :rpc.call(node, Runner, :get, [runner_pid]) 135 | end 136 | 137 | @doc "Runs the MFA in a process on the remote node" 138 | @spec run(node, module(), atom(), [any]) :: any 139 | def run(node, m, f, a) do 140 | {:ok, runner_pid} = :rpc.call(node, Runner, :start, [m, f, a]) 141 | :rpc.call(node, Runner, :get, [runner_pid]) 142 | end 143 | 144 | defp to_node_name(node_name) when is_atom(node_name) do 145 | node_name 146 | |> Atom.to_string() 147 | |> String.split(".") 148 | |> sanitize_node_name 149 | end 150 | 151 | defp sanitize_node_name([node_name]) do 152 | String.to_atom(node_name) 153 | end 154 | 155 | defp sanitize_node_name(node_name) when is_list(node_name) do 156 | node_name 157 | |> List.last() 158 | |> 
Macro.underscore() 159 | |> String.downcase() 160 | |> String.to_atom() 161 | end 162 | end 163 | -------------------------------------------------------------------------------- /test/support/receiver.ex: -------------------------------------------------------------------------------- 1 | defmodule Receiver do 2 | def hello_handler do 3 | receive do 4 | {:hello, sender} = message -> 5 | send(sender, {self(), message}) 6 | end 7 | end 8 | 9 | def hello_reply_loop do 10 | receive do 11 | {:hello, sender, _n} = message -> 12 | send(sender, {self(), message}) 13 | end 14 | 15 | hello_reply_loop() 16 | end 17 | 18 | def hello_noop_loop do 19 | receive do 20 | {:hello, _, _} -> 21 | :ok 22 | end 23 | 24 | hello_noop_loop() 25 | end 26 | end 27 | -------------------------------------------------------------------------------- /test/test_helper.exs: -------------------------------------------------------------------------------- 1 | Application.ensure_all_started(:manifold) 2 | 3 | ExUnit.start() 4 | --------------------------------------------------------------------------------