├── .gitignore
├── .travis.yml
├── LICENSE
├── README.md
├── assets
│   └── architecture
│       ├── architecture.dot
│       └── architecture.svg
├── config
│   └── config.exs
├── lib
│   ├── mldht.ex
│   └── mldht
│       ├── registry.ex
│       ├── routing_table
│       │   ├── bucket.ex
│       │   ├── distance.ex
│       │   ├── node.ex
│       │   ├── supervisor.ex
│       │   └── worker.ex
│       ├── search
│       │   ├── node.ex
│       │   ├── supervisor.ex
│       │   └── worker.ex
│       ├── server
│       │   ├── storage.ex
│       │   ├── utils.ex
│       │   └── worker.ex
│       └── supervisor.ex
├── mix.exs
└── test
    ├── mldht_routing_table_bucket_test.exs
    ├── mldht_routing_table_distance_test.exs
    ├── mldht_routing_table_node_test.exs
    ├── mldht_routing_table_worker_test.exs
    ├── mldht_search_worker_test.exs
    ├── mldht_server_storage_test.exs
    ├── mldht_server_utils_test.exs
    ├── mldht_server_worker_test.exs
    ├── mldht_test.exs
    └── test_helper.exs
/.gitignore:
--------------------------------------------------------------------------------
1 | /_build
2 | /deps
3 | erl_crash.dump
4 | *.ez
5 | mix.lock
6 |
--------------------------------------------------------------------------------
/.travis.yml:
--------------------------------------------------------------------------------
1 | language: elixir
2 | elixir:
3 | - 1.8
4 | otp_release:
5 | - 20.0
6 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | The MIT License (MIT)
2 |
3 | Copyright (c) 2015 Florian Adamsky
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
23 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # MlDHT - Mainline Distributed Hash Table
2 | [](https://travis-ci.org/cit/MlDHT)
3 |
4 | A Distributed Hash Table (DHT) is a storage and lookup system that is based on a peer-to-peer (P2P) system. The file sharing protocol BitTorrent makes use of a DHT to find new peers without using a central tracker. There are three popular DHT-based protocols: [KAD](https://en.wikipedia.org/wiki/Kad_network), [Vuze DHT](http://wiki.vuze.com/w/Distributed_hash_table) and Mainline DHT. All protocols are based on [Kademlia](https://en.wikipedia.org/wiki/Kademlia) but are not compatible with each other. The mainline DHT is by far the biggest overlay network with around 15-27 million users per day.
5 |
6 | MlDHT, in particular, is an [Elixir](http://elixir-lang.org/) package that provides a Mainline DHT implementation according to [BEP 05](http://www.bittorrent.org/beps/bep_0005.html). It is built on the following modules:
7 |
8 | * `DHTServer` - main interface, receives all incoming messages;
9 | * `RoutingTable` - maintains contact information of close nodes.
10 |
11 | ## Getting Started
12 |
13 | Learn how to add MlDHT to your Elixir project and start using it.
14 |
15 | ### Adding MlDHT To Your Project
16 |
17 | To use MlDHT in your project, edit your `mix.exs` file and add it as a dependency:
18 |
19 | ```elixir
20 | defp application do
21 | [applications: [:mldht]]
22 | end
23 |
24 | defp deps do
25 | [{:mldht, "~> 0.0.3"}]
26 | end
27 | ```
28 |
29 | ### Basic Usage
30 |
31 | When the application is loaded, it automatically bootstraps itself into the overlay network. It does this by starting a `find_node` search for a node that belongs to the same bucket as our own node id. In `mix.exs` you will find the bootstrapping nodes that are used for that first search. This way, we quickly collect nodes that are close to us.
32 |
33 | You can use the following function to find nodes for a specific BitTorrent infohash (e.g. Ubuntu 19.04):
34 |
35 | ```elixir
36 | iex> "D540FC48EB12F2833163EED6421D449DD8F1CE1F"
37 | |> Base.decode16!
38 | |> MlDHT.search(fn(node) -> IO.puts "#{inspect node}" end)
39 | ```
40 |
41 | If you would like to search for nodes and announce yourself to the DHT network use the following function:
42 |
43 | ```elixir
44 | iex> "D540FC48EB12F2833163EED6421D449DD8F1CE1F"
45 | |> Base.decode16!
46 | |> MlDHT.search_announce(fn(node) -> IO.puts "#{inspect node}" end, 6881)
47 | ```
48 |
49 | It is also possible to search and announce yourself to the DHT network without a TCP port. In that case, the source port of the UDP packet is used instead.
50 |
51 | ```elixir
52 | iex> "D540FC48EB12F2833163EED6421D449DD8F1CE1F"
53 | |> Base.decode16!
54 | |> MlDHT.search_announce(fn(node) -> IO.puts "#{inspect node}" end)
55 | ```
56 |
57 | ## License
58 |
59 | MlDHT source code is released under the MIT License.
60 | Check the LICENSE file for more information.
--------------------------------------------------------------------------------
/assets/architecture/architecture.dot:
--------------------------------------------------------------------------------
1 | // Command line to generate the architecture diagram:
2 | // dot architecture.dot -T svg -o architecture.svg
3 |
4 | digraph G {
5 | rankdir = BT;
6 | splines = ortho;
7 | compound = true;
8 | fontname = "Consolas";
9 | node [shape=box, style="rounded, filled", fontname="Consolas"]
10 |
11 | subgraph cluster1 {
12 | labelloc = "b";
13 | label = "DHTServer";
14 |
15 | ds_sv [label = "SuperVisor", fillcolor="#8ae234"]
16 | ds_worker [label = "Worker", fillcolor="#729fcf"]
17 | ds_storage [label = "Storage", fillcolor="#729fcf"]
18 | ds_utils [label = "Utils", fillcolor="#fcaf3e"]
19 |
20 | ds_sv -> ds_storage;
21 | ds_sv -> ds_worker;
22 | ds_worker -> ds_utils;
23 | }
24 |
25 | subgraph cluster2 {
26 | labelloc = "b";
27 | label = "RoutingTable";
28 |
29 | rt_sv [label = "SuperVisor", fillcolor="#8ae234"]
30 | rt_worker [label = "Worker", fillcolor="#729fcf"]
31 | rt_bucket [label = "Bucket", fillcolor="#fcaf3e"]
32 | rt_node [label = "Node", fillcolor="#729fcf"]
33 | rt_distance [label = "Distance", fillcolor="#fcaf3e"]
34 | rt_search [label = "Search", fillcolor="#729fcf"]
35 | rt_snode [label = "Node", fillcolor="#fcaf3e"]
36 |
37 | rt_sv -> rt_worker;
38 | rt_worker -> rt_bucket;
39 | rt_bucket -> rt_node;
40 | rt_worker -> rt_distance;
41 | rt_search -> rt_distance;
42 | rt_search -> rt_snode;
43 | }
44 |
45 | subgraph cluster3 {
46 | labelloc = "b";
47 | label = "KRPCProtocol";
48 |
49 | kp_decoder [label = "Decoder", fillcolor="#fcaf3e"]
50 | kp_encoder [label = "Encoder", fillcolor="#fcaf3e"]
51 | }
52 |
53 | ds_worker -> rt_search [ltail=cluster1,lhead=cluster2];
54 | ds_worker -> kp_decoder [ltail=cluster1,lhead=cluster3];
55 | rt_snode -> kp_encoder [ltail=cluster2,lhead=cluster3];
56 | }
57 |
--------------------------------------------------------------------------------
/assets/architecture/architecture.svg:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/config/config.exs:
--------------------------------------------------------------------------------
1 | # This file is responsible for configuring your application
2 | # and its dependencies with the aid of the Mix.Config module.
3 | use Mix.Config
4 |
5 | # This configuration is loaded before any dependency and is restricted
6 | # to this project. If another project depends on this project, this
7 | # file won't be loaded nor affect the parent project. For this reason,
8 | # if you want to provide default values for your application for third-
9 | # party users, it should be done in your mix.exs file.
10 |
11 | # Sample configuration:
12 | #
13 | # config :logger, :console,
14 | # level: :info,
15 | # format: "$date $time [$level] $metadata$message\n",
16 | # metadata: [:user_id]
17 |
18 | # It is also possible to import configuration files, relative to this
19 | # directory. For example, you can emulate configuration per environment
20 | # by uncommenting the line below and defining dev.exs, test.exs and such.
21 | # Configuration from the imported file will override the ones defined
22 | # here (which is why it is important to import them last).
23 | #
24 | # import_config "#{Mix.env}.exs"
28 |
29 | # The configuration defined here will only affect the dependencies
30 | # in the apps directory when commands are executed from the umbrella
31 | # project. For this reason, it is preferred to configure each child
32 | # application directly and import its configuration, as done below.
33 |
34 | # Sample configuration (overrides the imported configuration above):
35 | #
36 | # config :logger, :console,
37 | #       level: :debug,
38 | # format: "$date $time [$level] $metadata$message\n",
39 | # metadata: [:user_id]
40 |
--------------------------------------------------------------------------------
/lib/mldht.ex:
--------------------------------------------------------------------------------
1 | defmodule MlDHT do
2 | use Application
3 |
4 | require Logger
5 |
6 | alias MlDHT.Server.Utils, as: Utils
7 |
8 | @moduledoc ~S"""
9 | MlDHT is an Elixir package that provides a Kademlia Distributed Hash Table
10 | (DHT) implementation according to [BitTorrent Enhancement Proposals (BEP)
11 | 05](http://www.bittorrent.org/beps/bep_0005.html). This specific
12 | implementation is known as the "mainline" variant.
13 |
14 | """
15 |
16 | ## Constants
17 |
18 | @node_id Utils.gen_node_id()
19 | @node_id_enc Base.encode16(@node_id)
20 |
21 | ## Types
22 |
23 | @typedoc """
24 | A binary which contains the infohash of a torrent. An infohash is a
25 | 20-byte SHA-1 hash that identifies a torrent.
26 | """
27 | @type infohash :: binary
28 |
29 | @typedoc """
30 | A non-negative integer (0..65535) which represents a TCP port number.
31 | """
32 | @type tcp_port :: 0..65_535
33 |
34 | @typedoc """
35 | A 160-bit (20-byte) binary which identifies a node in the DHT overlay.
36 | """
37 | @type node_id :: <<_::160>>
38 |
39 | @typedoc """
40 | A node id, hex-encoded as a 40-character String.
41 | """
42 | @type node_id_enc :: String.t()
43 |
44 | @doc false
45 | def start(_type, _args) do
46 | MlDHT.Registry.start()
47 |
48 | ## Log the generated node ID
49 | Logger.debug "Node-ID: #{@node_id_enc}"
50 |
51 | ## Start the main supervisor
52 | MlDHT.Supervisor.start_link(
53 | node_id: @node_id,
54 | name: MlDHT.Registry.via(@node_id_enc, MlDHT.Supervisor)
55 | )
56 | end
57 |
58 | @doc ~S"""
59 | This function returns the generated node_id as a bitstring.
60 | """
61 | @spec node_id() :: node_id
62 | def node_id, do: @node_id
63 |
64 | @doc ~S"""
65 | This function returns the generated node_id encoded as a String (40
66 | characters).
67 | """
68 | @spec node_id_enc() :: node_id_enc
69 | def node_id_enc, do: @node_id_enc
70 |
71 | @doc ~S"""
72 | This function takes an infohash (as a binary) and a callback function as
73 | parameters. It uses its own routing table as a starting point to start a
74 | get_peers search for the given infohash.
75 |
76 | ## Example
77 | iex> "3F19B149F53A50E14FC0B79926A391896EABAB6F"
78 | |> Base.decode16!
79 | |> MlDHT.search(fn(node) ->
80 | {ip, port} = node
81 | IO.puts "ip: #{inspect ip} port: #{port}"
82 | end)
83 | """
84 | @spec search(infohash, fun) :: atom
85 | def search(infohash, callback) do
86 | pid = @node_id_enc |> MlDHT.Registry.get_pid(MlDHT.Server.Worker)
87 | MlDHT.Server.Worker.search(pid, infohash, callback)
88 | end
89 |
90 | @doc ~S"""
91 | This function takes an infohash (as a binary) and a callback function as
92 | parameters. It does the same thing as the search/2 function, except that
93 | it sends an announce message to the found peers. It does not need a TCP
94 | port, which means the announce message sets `:implied_port` to true.
95 |
96 | ## Example
97 | iex> "3F19B149F53A50E14FC0B79926A391896EABAB6F"
98 | |> Base.decode16!
99 | |> MlDHT.search_announce(fn(node) ->
100 | {ip, port} = node
101 | IO.puts "ip: #{inspect ip} port: #{port}"
102 | end)
103 | """
104 | @spec search_announce(infohash, fun) :: atom
105 | def search_announce(infohash, callback) do
106 | pid = @node_id_enc |> MlDHT.Registry.get_pid(MlDHT.Server.Worker)
107 | MlDHT.Server.Worker.search_announce(pid, infohash, callback)
108 | end
109 |
110 | @doc ~S"""
111 | This function takes an infohash (as a binary), a callback function, and a
112 | TCP port (as an integer) as parameters. It does the same thing as the
113 | search/2 function, except that it sends an announce message to the found peers.
114 |
115 | ## Example
116 | iex> "3F19B149F53A50E14FC0B79926A391896EABAB6F" ## Ubuntu 15.04
117 | |> Base.decode16!
118 | |> MlDHT.search_announce(fn(node) ->
119 | {ip, port} = node
120 | IO.puts "ip: #{inspect ip} port: #{port}"
121 | end, 6881)
122 | """
123 | @spec search_announce(infohash, fun, tcp_port) :: atom
124 | def search_announce(infohash, callback, port) do
125 | pid = @node_id_enc |> MlDHT.Registry.get_pid(MlDHT.Server.Worker)
126 | MlDHT.Server.Worker.search_announce(pid, infohash, callback, port)
127 | end
128 |
129 | end
130 |
--------------------------------------------------------------------------------
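The `infohash` and `node_id` types defined in `lib/mldht.ex` are both 20-byte binaries. A standalone sketch of the round-trip between the raw binary and its 40-character hex encoding (the hex string is taken from the doc examples above):

```elixir
# An infohash / node id is a 20-byte (160-bit) binary; hex-encoded it
# becomes a 40-character string, as returned by node_id_enc/0.
infohash = Base.decode16!("3F19B149F53A50E14FC0B79926A391896EABAB6F")

20 = byte_size(infohash)                      # raw binary size
40 = String.length(Base.encode16(infohash))   # hex-encoded length
```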
/lib/mldht/registry.ex:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.Registry do
2 | require Logger
3 |
4 | @name __MODULE__
5 |
6 | @moduledoc ~S"""
7 | This module encapsulates helper functions that reduce boilerplate when
8 | using the MlDHT Registry. (They are not callbacks.)
9 | """
10 |
11 | def start, do: Registry.start_link(keys: :unique, name: @name)
12 |
13 | def unregister(name), do: Registry.unregister(@name, name)
14 |
15 | def lookup(name), do: Registry.lookup(@name, name)
16 |
17 | def via(name), do: {:via, Registry, {@name, name}}
18 | def via(node_id_enc, module), do: id(node_id_enc, module) |> via()
19 | def via(node_id_enc, module, id), do: id(node_id_enc, module, id) |> via()
20 |
21 | def get_pid(name) do
22 | case Registry.lookup(@name, name) do
23 | [{pid, _}] -> pid
24 | _e ->
25 | Logger.debug "Could not find Process with name #{name} in MlDHT.Registry"
26 | nil
27 | end
28 | end
29 | def get_pid(node_id_enc, module), do: id(node_id_enc, module) |> get_pid()
30 | def get_pid(node_id_enc, module, id), do: id(node_id_enc, module, id) |> get_pid()
31 |
32 | defp id(node_id_enc, module) do
33 | node_id_enc |> Kernel.<>("_" <> Atom.to_string(module))
34 | end
35 | defp id(node_id_enc, module, id) when is_atom(id) do
36 | id(node_id_enc, module, to_string(id))
37 | end
38 | defp id(node_id_enc, module, id) do
39 | id(node_id_enc, module) <> "_" <> id
40 | end
41 |
42 | end
43 |
--------------------------------------------------------------------------------
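The `id/2` and `id/3` helpers above compose registry keys by concatenating the encoded node id with the module name. A standalone sketch of the scheme (`MyWorker` is a hypothetical module name used only for illustration):

```elixir
# Registry keys have the form "<node_id_enc>_<Module>[_<id>]".
node_id_enc = "AA11"
key = node_id_enc <> "_" <> Atom.to_string(MyWorker)
"AA11_Elixir.MyWorker" = key

# Wrapping the key in a via-tuple lets a GenServer register under it
# and be addressed by name instead of pid:
{:via, Registry, {MlDHT.Registry, key}}
```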
/lib/mldht/routing_table/bucket.ex:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.RoutingTable.Bucket do
2 | @moduledoc false
3 |
4 | alias MlDHT.RoutingTable.Bucket
5 | alias MlDHT.RoutingTable.Node
6 |
7 | require Logger
8 |
9 | defstruct index: 0, last_update: 0, nodes: []
10 |
11 | @k_bucket 8
12 |
13 | def new(index) do
14 | %Bucket{index: index, last_update: :os.system_time(:millisecond)}
15 | end
16 |
17 | def size(bucket) do
18 | Enum.count(bucket.nodes)
19 | end
20 |
21 | def age(bucket) do
22 | :os.system_time(:millisecond) - bucket.last_update
23 | end
24 |
25 | def is_full?(bucket) do
26 | Enum.count(bucket.nodes) == @k_bucket
27 | end
28 |
29 | def has_space?(bucket) do
30 | Enum.count(bucket.nodes) < @k_bucket
31 | end
32 |
33 | def add(bucket, element) when is_list(element) do
34 | %{Bucket.new(bucket.index) | nodes: bucket.nodes ++ List.flatten(element)}
35 | end
36 |
37 | def add(bucket, element) do
38 | %{Bucket.new(bucket.index) | nodes: bucket.nodes ++ [element]}
39 | end
40 |
41 | def update(bucket) do
42 | %{Bucket.new(bucket.index) | nodes: bucket.nodes}
43 | end
44 |
45 | def filter(bucket, func) do
46 | %{bucket | nodes: Enum.filter(bucket.nodes, func)}
47 | end
48 |
49 | def get(bucket, node_id) do
50 | Enum.find(bucket.nodes, fn(node_pid) -> Node.id(node_pid) == node_id end)
51 | end
52 |
53 | def del(bucket, node_id) do
54 | nodes = Enum.filter(bucket.nodes, fn(pid) -> Node.id(pid) != node_id end)
55 | %{bucket | nodes: nodes}
56 | end
57 |
58 | defimpl Inspect, for: Bucket do
59 | def inspect(bucket, _) do
60 | size = Bucket.size(bucket)
61 | age = Bucket.age(bucket)
62 |
63 | if size == 0 do
64 | "#Bucket<index: #{bucket.index}, size: 0, age: #{age} ms>"
65 | else
66 | nodes = Enum.map(bucket.nodes, fn(x) ->
67 | " " <> Node.to_string(x) <> "\n"
68 | end)
69 | """
70 | #Bucket<index: #{bucket.index}, size: #{size}, age: #{age} ms>
71 | #{nodes}
72 | """
73 | end
74 | end
75 | end
76 |
77 | end
78 |
--------------------------------------------------------------------------------
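The `@k_bucket 8` constant above comes from Kademlia's bucket size. A minimal, self-contained sketch of the invariant (plain maps stand in for the `Bucket` struct, so this is illustrative rather than the module's actual API):

```elixir
# A k-bucket holds at most k = 8 nodes; every add refreshes last_update.
k = 8
bucket = %{nodes: Enum.to_list(1..7), last_update: 0}

add = fn b, node ->
  %{b | nodes: b.nodes ++ [node], last_update: :os.system_time(:millisecond)}
end

true = length(bucket.nodes) < k   # has_space?/1 would return true
bucket = add.(bucket, 8)
true = length(bucket.nodes) == k  # is_full?/1 would return true
true = bucket.last_update > 0     # timestamp was refreshed by the add
```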
/lib/mldht/routing_table/distance.ex:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.RoutingTable.Distance do
2 | @moduledoc false
3 |
4 | require Bitwise
5 |
6 | @doc """
7 | Returns the `n` closest nodes to the target according to the XOR metric.
8 | """
9 | def closest_nodes(nodes, target, n) do
10 | closest_nodes(nodes, target)
11 | |> Enum.take(n)
12 | end
13 |
14 | def closest_nodes(nodes, target) do
15 | Enum.sort(nodes, fn(x, y) ->
16 | xor_cmp(x.id, y.id, target, &(&1 < &2))
17 | end)
18 | end
19 |
20 | @doc """
21 | This function gets two node ids, a target node id and a lambda function as an
22 | argument. It compares the two node ids according to the XOR metric which is
23 | closer to the target.
24 |
25 | ## Example
26 |
27 | iex> Distance.xor_cmp("A", "a", "F", &(&1 > &2))
28 | false
29 | """
30 | def xor_cmp("", "", "", func), do: func.(0, 0)
31 | def xor_cmp(node_id_a, node_id_b, target, func) do
32 | << byte_a :: 8, rest_a :: bitstring >> = node_id_a
33 | << byte_b :: 8, rest_b :: bitstring >> = node_id_b
34 | << byte_target :: 8, rest_target :: bitstring >> = target
35 |
36 | if byte_a == byte_b do
37 | xor_cmp(rest_a, rest_b, rest_target, func)
38 | else
39 | xor_a = Bitwise.bxor(byte_a, byte_target)
40 | xor_b = Bitwise.bxor(byte_b, byte_target)
41 |
42 | func.(xor_a, xor_b)
43 | end
44 | end
45 |
46 | @doc """
47 | This function takes two node ids as binaries and returns the index of
48 | the bucket to which the remote node id belongs. The index is the
49 | number of leading bits the two ids have in common.
50 |
51 | ## Example
52 |
53 | iex> Distance.find_bucket(<<0b11110000>>, <<0b11111111>>)
54 | 4
55 | """
56 | def find_bucket(node_id_a, node_id_b), do: find_bucket(node_id_a, node_id_b, 0)
57 | def find_bucket("", "", bucket), do: bucket
58 | def find_bucket(node_id_a, node_id_b, bucket) do
59 | << bit_a :: 1, rest_a :: bitstring >> = node_id_a
60 | << bit_b :: 1, rest_b :: bitstring >> = node_id_b
61 |
62 | if bit_a == bit_b do
63 | find_bucket(rest_a, rest_b, (bucket + 1))
64 | else
65 | bucket
66 | end
67 | end
68 |
69 | @doc """
70 | This function takes a number of bits and a node id as arguments and
71 | generates a new node id: the first `nr_of_bits` bits are copied from
72 | the given node id and the remaining bits are generated randomly.
73 | """
74 | def gen_node_id(nr_of_bits, node_id) do
75 | nr_rest_bits = 160 - nr_of_bits
76 | << bits :: size(nr_of_bits), _ :: size(nr_rest_bits) >> = node_id
77 | << rest :: size(nr_rest_bits), _ :: size(nr_of_bits) >> = :crypto.strong_rand_bytes(20)
78 |
79 | << bits :: size(nr_of_bits), rest :: size(nr_rest_bits)>>
80 | end
81 |
82 | end
83 |
--------------------------------------------------------------------------------
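The XOR metric that `xor_cmp/4` and `find_bucket/3` implement can be illustrated on single-byte ids (a standalone sketch, not part of the module):

```elixir
import Bitwise

# XOR metric: the distance between two ids is their bitwise XOR,
# compared as an unsigned integer.
target = 0b11110000
a = 0b11111111
b = 0b00001111

15 = bxor(a, target)                       # 0b00001111
255 = bxor(b, target)                      # 0b11111111
true = bxor(a, target) < bxor(b, target)   # a is closer to the target

# find_bucket counts shared leading bits: 0b11110000 and 0b11111111
# agree on the first 4 bits, so the remote node lands in bucket 4.
```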
/lib/mldht/routing_table/node.ex:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.RoutingTable.Node do
2 | @moduledoc false
3 |
4 | use GenServer, restart: :temporary
5 |
6 | require Logger
7 |
8 | def start_link(opts) do
9 | GenServer.start_link(__MODULE__, [opts])
10 | end
11 |
12 | @doc """
13 | Stops the node process.
14 | """
15 | def stop(node_id) do
16 | GenServer.call(node_id, :stop)
17 | end
18 |
19 | def id(pid) do
20 | GenServer.call(pid, :id)
21 | end
22 |
23 | def socket(pid) do
24 | GenServer.call(pid, :socket)
25 | end
26 |
27 | def bucket_index(pid), do: GenServer.call(pid, :bucket_index)
28 | def bucket_index(pid, new_index) do
29 | GenServer.cast(pid, {:bucket_index, new_index})
30 | end
31 |
32 | def goodness(pid) do
33 | GenServer.call(pid, :goodness)
34 | end
35 |
36 | def goodness(pid, goodness) do
37 | GenServer.call(pid, {:goodness, goodness})
38 | end
39 |
40 | def is_good?(pid) do
41 | GenServer.call(pid, :is_good?)
42 | end
43 |
44 | def is_questionable?(pid) do
45 | GenServer.call(pid, :is_questionable?)
46 | end
47 |
48 | def send_find_node(node_id, target) do
49 | GenServer.cast(node_id, {:send_find_node, target})
50 | end
51 |
52 | def send_ping(pid) do
53 | GenServer.cast(pid, :send_ping)
54 | end
55 |
56 | def send_find_node_reply(pid, tid, nodes) do
57 | GenServer.cast(pid, {:send_find_node_reply, tid, nodes})
58 | end
59 |
60 | def send_get_peers_reply(pid, tid, nodes, token) do
61 | GenServer.cast(pid, {:send_get_peers_reply, tid, nodes, token})
62 | end
63 |
64 | def response_received(pid) do
65 | GenServer.cast(pid, {:response_received})
66 | end
67 |
68 | def query_received(pid) do
69 | GenServer.cast(pid, {:query_received})
70 | end
71 |
72 | def last_time_responded(pid) do
73 | GenServer.call(pid, :last_time_responded)
74 | end
75 |
76 | def last_time_queried(pid) do
77 | GenServer.call(pid, :last_time_queried)
78 | end
79 |
80 | def to_tuple(pid) do
81 | GenServer.call(pid, :to_tuple)
82 | end
83 |
84 | def to_string(pid) do
85 | GenServer.call(pid, :to_string)
86 | end
87 |
88 | ###
89 | ## GenServer API
90 | ###
91 |
92 | def init([opts]) do
93 | {node_id, {ip, port}, socket} = opts[:node_tuple]
94 |
95 | {:ok,
96 | %{
97 | :own_node_id => opts[:own_node_id],
98 | :bucket_index => opts[:bucket_index],
99 | :node_id => node_id,
100 | :ip => ip,
101 | :port => port,
102 | :socket => socket,
103 | :goodness => :good,
104 |
105 | ## Timer
106 | :last_response_rcv => :os.system_time(:millisecond),
107 | :last_query_rcv => 0,
108 | :last_query_snd => 0
109 | }
110 | }
111 | end
112 |
113 | def handle_call(:stop, _from, state) do
114 | {:stop, :normal, :ok, state}
115 | end
116 |
117 | def handle_call(:id, _from, state) do
118 | {:reply, state.node_id, state}
119 | end
120 |
121 | def handle_call(:bucket_index, _from, state) do
122 | {:reply, state.bucket_index, state}
123 | end
124 |
125 | def handle_call(:socket, _from, state) do
126 | {:reply, state.socket, state}
127 | end
128 |
129 | def handle_call(:goodness, _from, state) do
130 | {:reply, state.goodness, state}
131 | end
132 |
133 | def handle_call(:is_good?, _from, state) do
134 | {:reply, state.goodness == :good, state}
135 | end
136 |
137 | def handle_call(:is_questionable?, _from, state) do
138 | {:reply, state.goodness == :questionable, state}
139 | end
140 |
141 | def handle_call({:goodness, goodness}, _from, state) do
142 | {:reply, :ok, %{state | :goodness => goodness}}
143 | end
144 |
145 | def handle_call(:last_time_responded, _from, state) do
146 | {:reply, :os.system_time(:millisecond) - state.last_response_rcv, state}
147 | end
148 |
149 | def handle_call(:last_time_queried, _from, state) do
150 | {:reply, state.last_query_snd, state}
151 | end
152 |
153 | def handle_call(:to_tuple, _from, state) do
154 | {:reply, {state.node_id, state.ip, state.port}, state}
155 | end
156 |
157 | def handle_call(:to_string, _from, state) do
158 | node_id = Base.encode16(state.node_id)
159 | str = "#Node<id: #{node_id}>"
160 |
161 | {:reply, str, state}
162 | end
163 |
164 | # If we receive a response to our query and the goodness value is
165 | # :questionable, we set it back to :good
166 | def handle_cast({:response_received}, state) do
167 | {:noreply, %{state | :last_response_rcv => :os.system_time(:millisecond),
168 | :goodness => :good}}
169 | end
170 |
171 | def handle_cast({:query_received}, state) do
172 | {:noreply, %{state | :last_query_rcv => :os.system_time(:millisecond)}}
173 | end
174 |
175 | def handle_cast({:bucket_index, new_index}, state) do
176 | {:noreply, %{state | :bucket_index => new_index}}
177 | end
178 |
179 | ###########
180 | # Queries #
181 | ###########
182 |
183 | def handle_cast(:send_ping, state) do
184 | Logger.debug("[#{Base.encode16(state.node_id)}] << ping")
185 |
186 | payload = KRPCProtocol.encode(:ping, node_id: state.own_node_id)
187 | :gen_udp.send(state.socket, state.ip, state.port, payload)
188 |
189 | {:noreply, %{state | :last_query_snd => :os.system_time(:millisecond)}}
190 | end
191 |
192 | def handle_cast({:send_find_node, target}, state) do
193 | Logger.debug("[#{Base.encode16(state.node_id)}] << find_node")
194 |
195 | payload = KRPCProtocol.encode(:find_node, node_id: state.own_node_id,
196 | target: target)
197 | :gen_udp.send(state.socket, state.ip, state.port, payload)
198 |
199 | {:noreply, %{state | :last_query_snd => :os.system_time(:millisecond)}}
200 | end
201 |
202 | ###########
203 | # Replies #
204 | ###########
205 |
206 | def handle_cast({:send_find_node_reply, tid, nodes}, state) do
207 | Logger.debug("[#{Base.encode16(state.node_id)}] << find_node_reply")
208 |
209 | payload = KRPCProtocol.encode(:find_node_reply, node_id:
210 | state.own_node_id, nodes: nodes, tid: tid)
211 | :gen_udp.send(state.socket, state.ip, state.port, payload)
212 |
213 | {:noreply, state}
214 | end
215 |
216 | def handle_cast({:send_get_peers_reply, tid, nodes, token}, state) do
217 | Logger.debug("[#{Base.encode16(state.node_id)}] << get_peers_reply (#{inspect token})")
218 |
219 | payload = KRPCProtocol.encode(:get_peers_reply, node_id:
220 | state.own_node_id, nodes: nodes, tid: tid, token: token)
221 | :gen_udp.send(state.socket, state.ip, state.port, payload)
222 |
223 | {:noreply, state}
224 | end
225 |
226 | end
227 |
--------------------------------------------------------------------------------
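The goodness lifecycle handled above follows BEP 05: a node is `:good` while it responds to queries and becomes `:questionable` after 15 minutes of inactivity; a received response flips it back to `:good`. A standalone sketch of that classification (the 15-minute threshold is from BEP 05; the helper itself is hypothetical, not part of the module):

```elixir
# Hypothetical classifier: goodness as a function of the time (in ms)
# since the node last responded. BEP 05 uses a 15-minute threshold.
fifteen_min_ms = 15 * 60 * 1000

classify = fn ms_since_response ->
  if ms_since_response >= fifteen_min_ms, do: :questionable, else: :good
end

:good = classify.(60_000)                  # responded a minute ago
:questionable = classify.(16 * 60 * 1000)  # silent for 16 minutes
```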
/lib/mldht/routing_table/supervisor.ex:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.RoutingTable.Supervisor do
2 | use Supervisor
3 |
4 | require Logger
5 |
6 | @moduledoc ~S"""
7 | Supervises a RoutingTable.Worker together with a DynamicSupervisor
8 | that owns the node processes of this routing table.
9 | """
9 |
10 | def start_link(opts) do
11 | name = opts[:node_id_enc]
12 | |> MlDHT.Registry.via(MlDHT.RoutingTable.Supervisor, opts[:rt_name])
13 |
14 | Supervisor.start_link(__MODULE__, opts, name: name)
15 | end
16 |
17 | @impl true
18 | def init(args) do
19 | node_id = args[:node_id]
20 | node_id_enc = args[:node_id_enc]
21 | rt_name = args[:rt_name]
22 |
23 | children = [
24 | {MlDHT.RoutingTable.Worker,
25 | rt_name: rt_name,
26 | node_id: node_id,
27 | name: MlDHT.Registry.via(node_id_enc, MlDHT.RoutingTable.Worker, rt_name)},
28 |
29 | {DynamicSupervisor,
30 | name: MlDHT.Registry.via(node_id_enc, MlDHT.RoutingTable.NodeSupervisor, rt_name),
31 | strategy: :one_for_one}
32 | ]
33 | Logger.debug("RoutingTable.Supervisor children #{inspect(children)}")
34 | Supervisor.init(children, strategy: :one_for_one)
35 | end
36 |
37 | end
38 |
--------------------------------------------------------------------------------
/lib/mldht/routing_table/worker.ex:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.RoutingTable.Worker do
2 | @moduledoc false
3 |
4 | use GenServer
5 |
6 | require Logger
7 | require Bitwise
8 |
9 | alias MlDHT.RoutingTable.Node
10 | alias MlDHT.RoutingTable.Bucket
11 | alias MlDHT.RoutingTable.Distance
12 |
13 | alias MlDHT.Search.Worker, as: Search
14 |
15 | #############
16 | # Constants #
17 | #############
18 |
19 | ## One minute in milliseconds
20 | @min_in_ms 60 * 1000
21 |
22 | ## 5 Minutes
23 | @review_time 5 * @min_in_ms
24 |
25 | ## 15 minutes
26 | @response_time 15 * @min_in_ms
27 |
28 | ## 5 minutes
29 | @neighbourhood_maintenance_time 5 * @min_in_ms
30 |
31 | ## 3 minutes
32 | @bucket_maintenance_time 3 * @min_in_ms
33 |
34 | ## 15 minutes
35 | @bucket_max_idle_time 15 * @min_in_ms
36 |
37 | ##############
38 | # Public API #
39 | ##############
40 |
41 | def start_link(opts) do
42 | Logger.debug "Starting RoutingTable worker: #{inspect(opts)}"
43 | init_args = [node_id: opts[:node_id], rt_name: opts[:rt_name]]
44 | GenServer.start_link(__MODULE__, init_args, opts)
45 | end
46 |
47 | def add(name, remote_node_id, address, socket) do
48 | GenServer.cast(name, {:add, remote_node_id, address, socket})
49 | end
50 |
51 | def size(name) do
52 | GenServer.call(name, :size)
53 | end
54 |
55 | def cache_size(name) do
56 | GenServer.call(name, :cache_size)
57 | end
58 |
59 | def update_bucket(name, bucket_index) do
60 | GenServer.cast(name, {:update_bucket, bucket_index})
61 | end
62 |
63 | def print(name) do
64 | GenServer.cast(name, :print)
65 | end
66 |
67 | def get(name, node_id) do
68 | GenServer.call(name, {:get, node_id})
69 | end
70 |
71 | def closest_nodes(name, target, remote_node_id) do
72 | GenServer.call(name, {:closest_nodes, target, remote_node_id})
73 | end
74 |
75 | def closest_nodes(name, target) do
76 | GenServer.call(name, {:closest_nodes, target, nil})
77 | end
78 |
79 | def del(name, node_id) do
80 | GenServer.call(name, {:del, node_id})
81 | end
82 |
83 | #################
84 | # GenServer API #
85 | #################
86 |
87 | def init(node_id: node_id, rt_name: rt_name) do
88 | ## Start timer for peer review
89 | Process.send_after(self(), :review, @review_time)
90 |
91 | ## Start timer for neighbourhood maintenance
92 | Process.send_after(self(), :neighbourhood_maintenance,
93 | @neighbourhood_maintenance_time)
94 |
95 | ## Start timer for bucket maintenance
96 | Process.send_after(self(), :bucket_maintenance, @bucket_maintenance_time)
97 |
98 | ## Generate name of the ets cache table from the node_id as an atom
99 | ets_name = node_id |> Base.encode16() |> String.to_atom()
100 |
101 | {:ok, %{
102 | node_id: node_id,
103 | node_id_enc: Base.encode16(node_id),
104 | rt_name: rt_name,
105 | buckets: [Bucket.new(0)],
106 | cache: :ets.new(ets_name, [:set, :protected]),
107 | }}
108 | end
109 |
110 | @doc """
111 | This function gets called by an external timer. It checks when each node
112 | last responded to one of our requests and filters the buckets accordingly.
113 | """
114 | def handle_info(:review, state) do
115 | new_buckets = Enum.map(state.buckets, fn(bucket) ->
116 | Bucket.filter(bucket, fn(pid) ->
117 | Node.last_time_responded(pid)
118 | |> evaluate_node(state.cache, pid)
119 | end)
120 | end)
121 |
122 | ## Restart the Timer
123 | Process.send_after(self(), :review, @review_time)
124 |
125 | {:noreply, %{state | :buckets => new_buckets}}
126 | end
127 |
128 | @doc """
129 | This function gets called by an external timer. It takes a random node
130 | from a random bucket and runs a find_node query with our own node_id as
131 | the target. This way, we gradually find more and more nodes that are in
132 | our neighbourhood.
133 | """
134 | def handle_info(:neighbourhood_maintenance, state) do
135 | Distance.gen_node_id(152, state.node_id)
136 | |> find_node_on_random_node(state)
137 |
138 | ## Restart the Timer
139 | Process.send_after(self(), :neighbourhood_maintenance, @neighbourhood_maintenance_time)
140 |
141 | {:noreply, state}
142 | end
143 |
144 | @doc """
145 | This function gets called by an external timer. It iterates through all
146 | buckets and checks if a bucket has fewer than 6 nodes or was not updated
147 | during the last 15 minutes. If so, we generate a random node id in the
148 | range of that bucket and start a find_node query with it.
149 |
150 | Excerpt from BEP 0005: "Buckets that have not been changed in 15 minutes
151 | should be "refreshed." This is done by picking a random ID in the range of the
152 | bucket and performing a find_nodes search on it."
153 | """
154 | def handle_info(:bucket_maintenance, state) do
155 | state.buckets
156 | |> Stream.with_index
157 | |> Enum.each(fn({bucket, index}) ->
158 | if Bucket.age(bucket) >= @bucket_max_idle_time or Bucket.size(bucket) < 6 do
159 |         Logger.info "Starting find_node search on bucket #{index}"
160 |
161 | Distance.gen_node_id(index, state.node_id)
162 | |> find_node_on_random_node(state)
163 | end
164 | end)
165 |
166 | Process.send_after(self(), :bucket_maintenance, @bucket_maintenance_time)
167 |
168 | {:noreply, state}
169 | end
170 |
171 | @doc """
172 | This function returns the 8 closest nodes in our routing table to a specific
173 | target.
174 | """
175 |   def handle_call({:closest_nodes, target, remote_node_id}, _from, state) do
176 | list = state.cache
177 | |> :ets.tab2list()
178 | |> Enum.filter(&(elem(&1, 0) != remote_node_id))
179 | |> Enum.sort(fn(x, y) ->
180 | Distance.xor_cmp(elem(x, 0), elem(y, 0), target, &(&1 < &2))
181 | end)
182 | |> Enum.map(fn(x) -> elem(x, 1) end)
183 | |> Enum.slice(0..7)
184 |
185 | {:reply, list, state}
186 | end
187 |
188 |   @doc """
189 |   This function returns the pid for a specific node id. If the node does
190 |   not exist in our routing table, this function returns nil instead of a
191 |   pid.
192 |   """
193 | def handle_call({:get, node_id}, _from, state) do
194 | {:reply, get_node(state.cache, node_id), state}
195 | end
196 |
197 | @doc """
198 | This function returns the number of nodes in our routing table as an integer.
199 | """
200 | def handle_call(:size, _from, state) do
201 | size = state.buckets
202 | |> Enum.map(fn(b) -> Bucket.size(b) end)
203 | |> Enum.reduce(fn(x, acc) -> x + acc end)
204 |
205 | {:reply, size, state}
206 | end
207 |
208 | @doc """
209 | This function returns the number of nodes from the cache as an integer.
210 | """
211 | def handle_call(:cache_size, _from, state) do
212 | {:reply, :ets.tab2list(state.cache) |> Enum.count(), state}
213 | end
214 |
215 |   @doc """
216 |   Without an argument this call returns our own node id. If it gets a node
217 |   id as an argument, it sets this as our node id.
218 |   """
219 | def handle_call(:node_id, _from, state) do
220 | {:reply, state.node_id, state}
221 | end
222 |
223 | def handle_call({:node_id, node_id}, _from, state) do
224 | ## Generate new name of the ets cache table and rename it
225 | ets_name = node_id |> Base.encode16() |> String.to_atom()
226 | new_cache = :ets.rename(state.cache, ets_name)
227 |
228 | {:reply, :ok, %{state | :node_id => node_id, :cache => new_cache}}
229 | end
230 |
231 | @doc """
232 | This function deletes a node according to its node id.
233 | """
234 | def handle_call({:del, node_id}, _from, state) do
235 | new_bucket = del_node(state.cache, state.buckets, node_id)
236 | {:reply, :ok, %{state | :buckets => new_bucket}}
237 | end
238 |
239 |   @doc """
240 |   This function updates the last_update time value of the given bucket.
241 |   """
242 | def handle_cast({:update_bucket, bucket_index}, state) do
243 | new_bucket = state.buckets
244 | |> Enum.at(bucket_index)
245 | |> Bucket.update()
246 |
247 | new_buckets = state.buckets
248 | |> List.replace_at(bucket_index, new_bucket)
249 |
250 | {:noreply, %{state | :buckets => new_buckets}}
251 | end
252 |
253 |   @doc """
254 |   This function tries to add a new node to our routing table. It does not
255 |   add the node if it is already in the routing table or if its node_id is
256 |   equal to our own node_id; in those cases the state is left unchanged.
257 |   Otherwise, the node is added to our routing table and started as a child
258 |   process.
259 |   """
260 | def handle_cast({:add, node_id, address, socket}, state) do
261 | cond do
262 | # This is our own node id
263 | node_id == state.node_id ->
264 | {:noreply, state}
265 | # We have this node already in our table
266 | node_exists?(state.cache, node_id) ->
267 | {:noreply, state}
268 | true ->
269 | {:noreply, add_node(state, {node_id, address, socket})}
270 | end
271 | end
272 |
273 | @doc """
274 |   This function is for debugging purposes only. It prints out the complete
275 | routing table.
276 | """
277 | def handle_cast(:print, state) do
278 | state.buckets
279 | |> Enum.each(fn (bucket) ->
280 | Logger.debug inspect(bucket)
281 | end)
282 |
283 | {:noreply, state}
284 | end
285 |
286 | #####################
287 | # Private Functions #
288 | #####################
289 |
290 |   # @spec evaluate_node(number, cache, pid) :: boolean
291 | def evaluate_node(time, cache, pid) do
292 | cond do
293 | time < @response_time ->
294 | Node.send_ping(pid)
295 | true
296 |
297 | time >= @response_time and Node.is_good?(pid) ->
298 | Node.goodness(pid, :questionable)
299 | Node.send_ping(pid)
300 | true
301 |
302 | time >= @response_time and Node.is_questionable?(pid) ->
303 | Logger.debug "[#{Base.encode16 Node.id(pid)}] Deleted"
304 | :ets.delete(cache, Node.id(pid))
305 | Node.stop(pid)
306 | false
307 | end
308 | end
309 |
310 | def find_node_on_random_node(target, state) do
311 | case random_node(state.cache) do
312 | node_pid when is_pid(node_pid) ->
313 | node = Node.to_tuple(node_pid)
314 | socket = Node.socket(node_pid)
315 |
316 | ## Start find_node search
317 | state.node_id_enc
318 | |> MlDHT.Registry.get_pid(MlDHT.Search.Supervisor)
319 | |> MlDHT.Search.Supervisor.start_child(:find_node, socket, state.node_id)
320 | |> Search.find_node(target: target, start_nodes: [node])
321 |
322 | nil ->
323 | Logger.warn "No nodes in our routing table."
324 | end
325 | end
326 |
327 |
328 | @doc """
329 | This function adds a new node to our routing table.
330 | """
331 | def add_node(state, node_tuple) do
332 | {node_id, _ip_port, _socket} = node_tuple
333 |
334 | my_node_id = state.node_id
335 | buckets = state.buckets
336 | index = find_bucket_index(buckets, my_node_id, node_id)
337 | bucket = Enum.at(buckets, index)
338 |
339 | cond do
340 | ## If the bucket has still some space left, we can just add the node to
341 | ## the bucket. Easy Peasy
342 | Bucket.has_space?(bucket) ->
343 | node_child = {Node, own_node_id: my_node_id, node_tuple: node_tuple,
344 | bucket_index: index}
345 |
346 | {:ok, pid} = my_node_id
347 | |> Base.encode16()
348 | |> MlDHT.Registry.get_pid(MlDHT.RoutingTable.NodeSupervisor, state.rt_name)
349 | |> DynamicSupervisor.start_child(node_child)
350 |
351 | new_bucket = Bucket.add(bucket, pid)
352 |
353 | :ets.insert(state.cache, {node_id, pid})
354 | state |> Map.put(:buckets, List.replace_at(buckets, index, new_bucket))
355 |
356 | ## If the bucket is full and the node would belong to a bucket that is far
357 | ## away from us, we will just drop that node. Go away you filthy node!
358 | Bucket.is_full?(bucket) and index != index_last_bucket(buckets) ->
359 | Logger.debug "Bucket #{index} is full -> drop #{Base.encode16(node_id)}"
360 | state
361 |
362 | ## If the bucket is full but the node is closer to us, we will reorganize
363 | ## the nodes in the buckets and try again to add it to our bucket list.
364 | true ->
365 | buckets = reorganize(bucket.nodes, buckets ++ [Bucket.new(index + 1)], my_node_id)
366 | add_node(%{state | :buckets => buckets}, node_tuple)
367 | end
368 | end
369 |
370 |   @doc """
371 |   Moves the given nodes into the buckets they now belong to after a new
372 |   bucket has been appended to the bucket list.
373 |   """
373 | def reorganize([], buckets, _self_node_id), do: buckets
374 | def reorganize([node | rest], buckets, my_node_id) do
375 | current_index = length(buckets) - 2
376 | index = find_bucket_index(buckets, my_node_id, Node.id(node))
377 |
378 | new_buckets = if current_index != index do
379 | current_bucket = Enum.at(buckets, current_index)
380 | new_bucket = Enum.at(buckets, index)
381 |
382 | ## Remove the node from the current bucket
383 | filtered_bucket = Bucket.del(current_bucket, Node.id(node))
384 |
385 | ## Change bucket index in the Node to the new one
386 | Node.bucket_index(node, index)
387 |
388 | ## Then add it to the new_bucket
389 | List.replace_at(buckets, current_index, filtered_bucket)
390 | |> List.replace_at(index, Bucket.add(new_bucket, node))
391 | else
392 | buckets
393 | end
394 |
395 | reorganize(rest, new_buckets, my_node_id)
396 | end
397 |
398 | @doc """
399 | This function returns a random node pid. If the routing table is empty it
400 | returns nil.
401 | """
402 | def random_node(cache) do
403 | cache |> :ets.tab2list() |> Enum.random() |> elem(1)
404 | rescue
405 | _e in Enum.EmptyError -> nil
406 | end
407 |
408 | @doc """
409 | Returns the index of the last bucket as integer.
410 | """
411 | def index_last_bucket(buckets) do
412 | Enum.count(buckets) - 1
413 | end
414 |
415 |   @doc """
416 |   Returns the bucket index for remote_node_id, capped at the last bucket.
417 |   """
418 | def find_bucket_index(buckets, self_node_id, remote_node_id) do
419 | unless byte_size(self_node_id) == byte_size(remote_node_id) do
420 | Logger.error "self_node_id: #{byte_size(self_node_id)}
421 | remote_node_id: #{byte_size(remote_node_id)}"
422 |
423 | raise ArgumentError, message: "Different length of self_node_id and remote_node_id"
424 | end
425 | bucket_index = Distance.find_bucket(self_node_id, remote_node_id)
426 |
427 | min(bucket_index, index_last_bucket(buckets))
428 | end
429 |
430 |   @doc """
431 |   Returns the pid if the node exists in the cache, otherwise nil.
432 |   """
433 | def node_exists?(cache, node_id), do: get_node(cache, node_id)
434 |
435 |   @doc """
436 |   Deletes a node from the bucket list and the cache and stops its process.
437 |   """
438 | def del_node(cache, buckets, node_id) do
439 | {_id, node_pid} = :ets.lookup(cache, node_id) |> Enum.at(0)
440 |
441 | ## Delete node from the bucket list
442 | new_buckets = Enum.map(buckets, fn(bucket) ->
443 | Bucket.del(bucket, node_id)
444 | end)
445 |
446 | ## Stop the node
447 | Node.stop(node_pid)
448 |
449 | ## Delete node from the ETS cache
450 | :ets.delete(cache, node_id)
451 |
452 | new_buckets
453 | end
454 |
455 |   @doc """
456 |   Returns the pid of the node with the given node_id, or nil.
457 |   """
458 | def get_node(cache, node_id) do
459 | case :ets.lookup(cache, node_id) do
460 | [{_node_id, pid}] -> pid
461 |       [] -> nil
462 | end
463 | end
464 |
465 | end
466 |
--------------------------------------------------------------------------------
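The bucket-index logic in `find_bucket_index/3` above follows the BEP 5 XOR metric: a node lands in the bucket whose index equals the number of leading bits its id shares with ours, capped at the last existing bucket. Since `Distance.find_bucket/2` is not shown in this excerpt, here is a minimal, self-contained sketch of that idea (`PrefixSketch` is a hypothetical module, not part of the library):

```elixir
defmodule PrefixSketch do
  # Counts the leading bits two same-length ids share; BEP 5 places a node
  # into the bucket with this index (capped at the last existing bucket).
  def shared_prefix_bits(<<a::1, rest_a::bitstring>>, <<b::1, rest_b::bitstring>>)
      when a == b,
      do: 1 + shared_prefix_bits(rest_a, rest_b)

  def shared_prefix_bits(_, _), do: 0
end

PrefixSketch.shared_prefix_bits(<<0b1010::4>>, <<0b1011::4>>) # => 3
```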
/lib/mldht/search/node.ex:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.Search.Node do
2 | @moduledoc false
3 |
4 | defstruct id: nil, ip: nil, port: nil, token: nil, requested: 0,
5 | request_sent: 0, responded: false
6 |
7 | def last_time_requested(node) do
8 | :os.system_time(:millisecond) - node.request_sent
9 | end
10 | end
11 |
--------------------------------------------------------------------------------
/lib/mldht/search/supervisor.ex:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.Search.Supervisor do
2 | use DynamicSupervisor
3 |
4 | require Logger
5 |
6 | def start_link(opts) do
7 | DynamicSupervisor.start_link(__MODULE__, {:ok, opts[:strategy]}, name: opts[:name])
8 | end
9 |
10 | def init({:ok, strategy}) do
11 | DynamicSupervisor.init(strategy: strategy)
12 | end
13 |
14 | def start_child(pid, type, socket, node_id) do
15 | node_id_enc = Base.encode16(node_id)
16 | tid = KRPCProtocol.gen_tid()
17 | tid_str = Base.encode16(tid)
18 |
19 | ## If a Search already exist with this tid, generate a new TID by starting
20 | ## the function again
21 | if MlDHT.Registry.get_pid(node_id_enc, MlDHT.Search.Worker, tid_str) do
22 | Logger.error "SAME TID!!!! #{tid_str}"
23 | start_child(pid, type, socket, node_id)
24 | else
25 | {:ok, search_pid} = DynamicSupervisor.start_child(pid,
26 | {MlDHT.Search.Worker,
27 | name: MlDHT.Registry.via(node_id_enc, MlDHT.Search.Worker, tid_str),
28 | type: type,
29 | socket: socket,
30 | node_id: node_id,
31 | tid: tid})
32 |
33 | search_pid
34 | end
35 | end
36 |
37 | end
38 |
--------------------------------------------------------------------------------
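The collision check in `start_child/4` above retries with a fresh transaction id whenever the generated tid is already registered under this node. The retry-until-unique idea can be sketched in isolation; here `gen` and `taken?` are hypothetical stand-ins for `KRPCProtocol.gen_tid/0` and the registry lookup:

```elixir
# Hypothetical stand-ins: `gen` produces a candidate tid, `taken?` checks
# whether a search with that tid already exists.
gen_unique = fn gen, taken? ->
  Stream.repeatedly(gen)
  |> Enum.find(fn tid -> not taken?.(tid) end)
end

taken = MapSet.new([<<0, 1>>])
gen_unique.(fn -> <<0, 2>> end, &MapSet.member?(taken, &1)) # => <<0, 2>>
```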
/lib/mldht/search/worker.ex:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.Search.Worker do
2 | @moduledoc false
3 |
4 | @typedoc """
5 |   A transaction_id (tid) is a two-byte binary.
6 | """
7 | @type transaction_id :: <<_::16>>
8 |
9 | @typedoc """
10 |   A DHT search is either a :get_peers or a :find_node search.
11 | """
12 | @type search_type :: :get_peers | :find_node
13 |
14 | use GenServer, restart: :temporary
15 |
16 | require Logger
17 |
18 | alias MlDHT.RoutingTable.Distance
19 | alias MlDHT.Search.Node
20 |
21 | ##############
22 | # Client API #
23 | ##############
24 |
25 | # @spec start_link() :: atom
26 | def start_link(opts) do
27 | args = [opts[:socket], opts[:node_id], opts[:type], opts[:tid], opts[:name]]
28 | GenServer.start_link(__MODULE__, args, name: opts[:name])
29 | end
30 |
31 | def get_peers(pid, args), do: GenServer.cast(pid, {:get_peers, args})
32 |
33 | def find_node(pid, args), do: GenServer.cast(pid, {:find_node, args})
34 |
35 | @doc """
36 | Stops a search process.
37 | """
38 | @spec stop(pid) :: :ok
39 | def stop(pid), do: GenServer.call(pid, :stop)
40 |
41 | @doc """
42 | Returns the type of the search process.
43 | """
44 | @spec type(pid) :: search_type
45 | def type(pid), do: GenServer.call(pid, :type)
46 |
47 | def tid(pid), do: GenServer.call(pid, :tid)
48 |
49 | # @spec handle_reply(pid, foo, list) :: :ok
50 | def handle_reply(pid, remote, nodes) do
51 | GenServer.cast(pid, {:handle_reply, remote, nodes})
52 | end
53 |
54 | ####################
55 | # Server Callbacks #
56 | ####################
57 |
58 | def init([socket, node_id, type, tid, name]) do
59 | ## Extract the id from the via string
60 | {_, _, {_, id}} = name
61 |
62 | {:ok, %{
63 | :socket => socket,
64 | :node_id => node_id,
65 | :type => type,
66 | :tid => tid,
67 | :name => id
68 | }}
69 | end
70 |
71 | def handle_info(:search_iterate, state) do
72 | if search_completed?(state.nodes, state.target) do
73 | Logger.debug "Search is complete"
74 |
75 |       ## If the search is complete and it was a get_peers search, then we
76 |       ## will send the closest peers an announce_peer message.
77 | if Map.has_key?(state, :announce) and state.announce == true do
78 | send_announce_msg(state)
79 | end
80 |
81 | MlDHT.Registry.unregister(state.name)
82 |
83 | {:stop, :normal, state}
84 | else
85 | ## Send queries to the 3 closest nodes
86 | new_state = state.nodes
87 | |> Distance.closest_nodes(state.target)
88 | |> Enum.filter(fn(x) ->
89 | x.responded == false and
90 | x.requested < 3 and
91 | Node.last_time_requested(x) > 5
92 | end)
93 | |> Enum.slice(0..2)
94 | |> nodesinspector()
95 | |> send_queries(state)
96 |
97 | ## Restart Timer
98 | Process.send_after(self(), :search_iterate, 1_000)
99 |
100 | {:noreply, new_state}
101 | end
102 | end
103 |
104 | def nodesinspector(nodes) do
105 | # Logger.error "#{inspect nodes}"
106 | nodes
107 | end
108 |
109 | def handle_call(:stop, _from, state) do
110 | MlDHT.Registry.unregister(state.name)
111 |
112 | {:stop, :normal, :ok, state}
113 | end
114 |
115 | def handle_call(:type, _from, state) do
116 | {:reply, state.type, state}
117 | end
118 |
119 | def handle_call(:tid, _from, state) do
120 | {:reply, state.tid, state}
121 | end
122 |
123 | def handle_cast({:get_peers, args}, state) do
124 | new_state = start_search_closure(:get_peers, args, state).()
125 |
126 | {:noreply, new_state}
127 | end
128 |
129 | def handle_cast({:find_node, args}, state) do
130 | new_state = start_search_closure(:find_node, args, state).()
131 |
132 | {:noreply, new_state}
133 | end
134 |
135 | def handle_cast({:handle_reply, remote, nil}, state) do
136 | old_nodes = update_responded_node(state.nodes, remote)
137 |
138 |     ## If the reply contains values, we inform the user by calling the
139 |     ## callback function for each value.
140 | if remote.values, do: Enum.each(remote.values, state.callback)
141 |
142 | {:noreply, %{state | nodes: old_nodes}}
143 | end
144 |
145 | def handle_cast({:handle_reply, remote, nodes}, state) do
146 | old_nodes = update_responded_node(state.nodes, remote)
147 |
148 | new_nodes = Enum.map(nodes, fn(node) ->
149 | {id, {ip, port}} = node
150 | unless Enum.find(state.nodes, fn(x) -> x.id == id end) do
151 | %Node{id: id, ip: ip, port: port}
152 | end
153 | end)
154 | |> Enum.filter(fn(x) -> x != nil end)
155 |
156 | {:noreply, %{state | nodes: old_nodes ++ new_nodes}}
157 | end
158 |
159 | #####################
160 | # Private Functions #
161 | #####################
162 |
163 | def send_announce_msg(state) do
164 | state.nodes
165 | |> Distance.closest_nodes(state.target, 7)
166 | |> Enum.filter(fn(node) -> node.responded == true end)
167 | |> Enum.each(fn(node) ->
168 | Logger.debug "[#{Base.encode16 node.id}] << announce_peer"
169 |
170 | args = [node_id: state.node_id, info_hash: state.target,
171 | token: node.token, port: node.port]
172 | args = if state.port == 0, do: args ++ [implied_port: true], else: args
173 |
174 | payload = KRPCProtocol.encode(:announce_peer, args)
175 | :gen_udp.send(state.socket, node.ip, node.port, payload)
176 | end)
177 | end
178 |
179 | ## This function merges args (keyword list) with the state map and returns a
180 | ## function depending on the type (:get_peers, :find_node).
181 | defp start_search_closure(type, args, state) do
182 | fn() ->
183 | Process.send_after(self(), :search_iterate, 500)
184 |
185 | ## Convert the keyword list to a map and merge it with state.
186 | args
187 | |> Enum.into(%{})
188 | |> Map.merge(state)
189 | |> Map.put(:type, type)
190 | |> Map.put(:nodes, nodes_to_search_nodes(args[:start_nodes]))
191 | end
192 | end
193 |
194 | defp send_queries([], state), do: state
195 | defp send_queries([node | rest], state) do
196 | node_id_enc = node.id |> Base.encode16()
197 | Logger.debug "[#{node_id_enc}] << #{state.type}"
198 |
199 | payload = gen_request_msg(state.type, state)
200 | :gen_udp.send(state.socket, node.ip, node.port, payload)
201 |
202 | new_nodes = state.nodes
203 | |> update_nodes(node.id, :requested, &(&1.requested + 1))
204 | |> update_nodes(node.id, :request_sent, fn(_) -> :os.system_time(:millisecond) end)
205 |
206 | send_queries(rest, %{state | nodes: new_nodes})
207 | end
208 |
209 | defp nodes_to_search_nodes(nodes) do
210 | Enum.map(nodes, fn(node) ->
211 | {id, ip, port} = extract_node_infos(node)
212 | %Node{id: id, ip: ip, port: port}
213 | end)
214 | end
215 |
216 | defp gen_request_msg(:find_node, state) do
217 | args = [tid: state.tid, node_id: state.node_id, target: state.target]
218 | KRPCProtocol.encode(:find_node, args)
219 | end
220 |
221 | defp gen_request_msg(:get_peers, state) do
222 | args = [tid: state.tid, node_id: state.node_id, info_hash: state.target]
223 | KRPCProtocol.encode(:get_peers, args)
224 | end
225 |
226 |   ## We need to know which node in our node list has responded. This
227 |   ## function goes through the node list and sets :responded of the
228 |   ## responding node to true. If the reply from the remote node also
229 |   ## contains a token, this function updates that too.
230 | defp update_responded_node(nodes, remote) do
231 | node_list = update_nodes(nodes, remote.node_id, :responded)
232 |
233 | if Map.has_key?(remote, :token) do
234 | update_nodes(node_list, remote.node_id, :token, fn(_) -> remote.token end)
235 | else
236 | node_list
237 | end
238 | end
239 |
240 | ## This function is a helper function to update the node list easily.
241 | defp update_nodes(nodes, node_id, key) do
242 | update_nodes(nodes, node_id, key, fn(_) -> true end)
243 | end
244 |
245 | defp update_nodes(nodes, node_id, key, func) do
246 | Enum.map(nodes, fn(node) ->
247 | if node.id == node_id do
248 | Map.put(node, key, func.(node))
249 | else
250 | node
251 | end
252 | end)
253 | end
254 |
255 | defp extract_node_infos(node) when is_tuple(node), do: node
256 | defp extract_node_infos(node) when is_pid(node) do
257 | MlDHT.RoutingTable.Node.to_tuple(node)
258 | end
259 |
260 | ## This function contains the condition when a search is completed.
261 | defp search_completed?(nodes, target) do
262 | nodes
263 | |> Distance.closest_nodes(target, 7)
264 | |> Enum.all?(fn(node) ->
265 | node.responded == true or node.requested >= 3
266 | end)
267 | end
268 |
269 | end
270 |
--------------------------------------------------------------------------------
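The stop condition in `search_completed?/2` above can be exercised on its own: a search is complete once each of the closest candidates has either responded or been queried three times. A sketch with plain maps standing in for `%MlDHT.Search.Node{}` structs:

```elixir
# Simplified node shape: only the :responded and :requested fields matter here.
completed? = fn nodes ->
  Enum.all?(nodes, fn n -> n.responded or n.requested >= 3 end)
end

completed?.([%{responded: true, requested: 1},
             %{responded: false, requested: 3}]) # => true
completed?.([%{responded: false, requested: 2}]) # => false
```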
/lib/mldht/server/storage.ex:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.Server.Storage do
2 | @moduledoc false
3 |
4 | use GenServer
5 |
6 | require Logger
7 |
8 | #############
9 | # Constants #
10 | #############
11 |
12 | ## One minute in milliseconds
13 | @min_in_ms 60 * 1000
14 |
15 | ## 5 Minutes
16 | @review_time 5 * @min_in_ms
17 |
18 | ## 30 Minutes
19 | @node_expired 30 * @min_in_ms
20 |
21 | def start_link(opts) do
22 | GenServer.start_link(__MODULE__, [], opts)
23 | end
24 |
25 | def init([]) do
26 | Process.send_after(self(), :review_storage, @review_time)
27 | {:ok, %{}}
28 | end
29 |
30 | def put(pid, infohash, ip, port) do
31 | GenServer.cast(pid, {:put, infohash, ip, port})
32 | end
33 |
34 | def print(pid) do
35 | GenServer.cast(pid, :print)
36 | end
37 |
38 | def has_nodes_for_infohash?(pid, infohash) do
39 | GenServer.call(pid, {:has_nodes_for_infohash?, infohash})
40 | end
41 |
42 | def get_nodes(pid, infohash) do
43 | GenServer.call(pid, {:get_nodes, infohash})
44 | end
45 |
46 | def handle_info(:review_storage, state) do
47 | Logger.debug "Review storage"
48 |
49 | ## Restart review timer
50 | Process.send_after(self(), :review_storage, @review_time)
51 |
52 | {:noreply, review(Map.keys(state), state)}
53 | end
54 |
55 | def handle_call({:has_nodes_for_infohash?, infohash}, _from, state) do
56 | has_keys = Map.has_key?(state, infohash)
57 | result = if has_keys, do: Map.get(state, infohash) != [], else: has_keys
58 |
59 | {:reply, result, state}
60 | end
61 |
62 | def handle_call({:get_nodes, infohash}, _from, state) do
63 | nodes = state
64 | |> Map.get(infohash)
65 | |> Enum.map(fn(x) -> Tuple.delete_at(x, 2) end)
66 | |> Enum.slice(0..99)
67 |
68 | {:reply, nodes, state}
69 | end
70 |
71 | def handle_cast({:put, infohash, ip, port}, state) do
72 | item = {ip, port, :os.system_time(:millisecond)}
73 |
74 | new_state =
75 | if Map.has_key?(state, infohash) do
76 | index = state
77 | |> Map.get(infohash)
78 | |> Enum.find_index(fn(node_tuple) ->
79 | Tuple.delete_at(node_tuple, 2) == {ip, port}
80 | end)
81 |
82 | if index do
83 | Map.update!(state, infohash, fn(x) ->
84 | List.replace_at(x, index, item)
85 | end)
86 | else
87 | Map.update!(state, infohash, fn(x) ->
88 | x ++ [item]
89 | end)
90 | end
91 |
92 | else
93 | Map.put(state, infohash, [item])
94 | end
95 |
96 | {:noreply, new_state}
97 | end
98 |
99 | def handle_cast(:print, state) do
100 | Enum.each(Map.keys(state), fn(infohash) ->
101 | Logger.debug "#{Base.encode16 infohash}"
102 | Enum.each(Map.get(state, infohash), fn(x) ->
103 | Logger.debug " #{inspect x}"
104 | end)
105 | end)
106 |
107 | {:noreply, state}
108 | end
109 |
110 | def review([], result), do: result
111 | def review([head | tail], result) do
112 | new = delete_old_nodes(result, head)
113 | review(tail, new)
114 | end
115 |
116 | def delete_old_nodes(state, infohash) do
117 | Map.update!(state, infohash, fn(list) ->
118 | Enum.filter(list, fn(x) ->
119 | (:os.system_time(:millisecond) - elem(x, 2)) <= @node_expired
120 | end)
121 | end)
122 | end
123 |
124 | end
125 |
--------------------------------------------------------------------------------
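The review cycle above walks every infohash and drops peers that have not announced within `@node_expired` (30 minutes). The filtering step in `delete_old_nodes/2` boils down to a timestamp comparison, sketched here with plain tuples of the shape stored by `handle_cast({:put, ...})`:

```elixir
node_expired = 30 * 60 * 1000
now = :os.system_time(:millisecond)

peers = [
  {{127, 0, 0, 1}, 6881, now - 5 * 60 * 1000},  # announced 5 min ago -> kept
  {{10, 0, 0, 2}, 6882, now - 45 * 60 * 1000}   # announced 45 min ago -> dropped
]

Enum.filter(peers, fn {_ip, _port, ts} -> now - ts <= node_expired end)
# keeps only the first entry
```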
/lib/mldht/server/utils.ex:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.Server.Utils do
2 | @moduledoc false
3 |
4 | @doc ~S"""
5 |   This function takes an IP address as a tuple and a port, and returns a
6 | string which contains the IPv4 or IPv6 address and port in the following
7 | format: "127.0.0.1:6881".
8 |
9 | ## Example
10 | iex> MlDHT.Server.Utils.tuple_to_ipstr({127, 0, 0, 1}, 6881)
11 | "127.0.0.1:6881"
12 | """
13 | def tuple_to_ipstr({oct1, oct2, oct3, oct4}, port) do
14 | "#{oct1}.#{oct2}.#{oct3}.#{oct4}:#{port}"
15 | end
16 |
17 | def tuple_to_ipstr(ipv6_addr, port) when tuple_size(ipv6_addr) == 8 do
18 | ip_str =
19 | String.duplicate("~4.16.0B:", 8)
20 | |> String.slice(0..-2) ## remove last ":" of the string
21 | |> :io_lib.format(Tuple.to_list(ipv6_addr))
22 | |> List.to_string
23 |
24 | "[#{ip_str}]:#{port}"
25 | end
26 |
27 | @doc ~S"""
28 | This function generates a 160 bit (20 byte) random node id as a
29 | binary.
30 | """
31 | @spec gen_node_id :: Types.node_id
32 | def gen_node_id do
33 | :rand.seed(:exs64, :os.timestamp)
34 |
35 | Stream.repeatedly(fn -> :rand.uniform 255 end)
36 | |> Enum.take(20)
37 | |> :binary.list_to_bin
38 | end
39 |
40 |   @doc """
41 |   Generates a secret for the announce token as a random 20-byte binary.
42 |   """
43 | def gen_secret, do: gen_node_id()
44 |
45 | end
46 |
--------------------------------------------------------------------------------
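`gen_node_id/0` above reseeds `:rand` with `:os.timestamp/0` on every call, which makes consecutive ids more predictable than necessary. If the `crypto` application is available, an equivalent 20-byte id can be generated in one call (an alternative sketch, not the module's current behaviour):

```elixir
# :crypto.strong_rand_bytes/1 returns cryptographically strong random bytes.
node_id = :crypto.strong_rand_bytes(20)
byte_size(node_id) # => 20
```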
/lib/mldht/server/worker.ex:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.Server.Worker do
2 | @moduledoc false
3 |
4 | use GenServer
5 |
6 | require Logger
7 |
8 | alias MlDHT.Server.Utils
9 | alias MlDHT.Server.Storage
10 | alias MlDHT.Registry
11 |
12 | alias MlDHT.RoutingTable.Node
13 | alias MlDHT.Search.Worker, as: Search
14 |
15 | @type ip_vers :: :ipv4 | :ipv6
16 |
17 | # Time after the secret changes
18 | @time_change_secret 60 * 1000 * 5
19 |
20 | def start_link(opts) do
21 |     GenServer.start_link(__MODULE__, opts[:node_id], opts)
22 | end
23 |
24 | @doc """
25 | This function takes the bootstrapping nodes from the config and starts a
26 |   find_node search for our own node id. By doing this, we quickly collect
27 |   nodes that are close to us and save them to our own routing table.
28 |
29 | ## Example
30 |   iex> MlDHT.Server.Worker.bootstrap(pid)
31 | """
32 | def bootstrap(pid) do
33 | GenServer.cast(pid, :bootstrap)
34 | end
35 |
36 | @doc ~S"""
37 | This function needs an infohash as binary, and a callback function as
38 | parameter. This function uses its own routing table as a starting point to
39 | start a get_peers search for the given infohash.
40 |
41 | ## Example
42 | iex> infohash = "3F19..." |> Base.decode16!
43 |   iex> MlDHT.Server.Worker.search(pid, infohash, fn(node) ->
44 | {ip, port} = node
45 | IO.puts "ip: #{ip} port: #{port}"
46 | end)
47 | """
48 | def search(pid, infohash, callback) do
49 | GenServer.cast(pid, {:search, infohash, callback})
50 | end
51 |
52 | def search_announce(pid, infohash, callback) do
53 | GenServer.cast(pid, {:search_announce, infohash, callback})
54 | end
55 |
56 | def search_announce(pid, infohash, port, callback) do
57 | GenServer.cast(pid, {:search_announce, infohash, port, callback})
58 | end
59 |
60 | def create_udp_socket(port, ip_vers) do
61 | ip_addr = ip_vers |> to_string() |> Kernel.<>("_addr") |> String.to_atom()
62 | options = ip_vers |> inet_option() |> maybe_put(:ip, config(ip_addr))
63 |
64 | case :gen_udp.open(port, options ++ [{:active, true}]) do
65 | {:ok, socket} ->
66 | Logger.debug "Init DHT Node (#{ip_vers})"
67 |
68 |         opts = :inet.getopts(socket, [:ipv6_v6only])
69 |         Logger.debug "Options: #{inspect opts}"
70 |
71 | socket
72 | {:error, reason} ->
73 | {:stop, reason}
74 | end
75 | end
76 |
77 | def init(node_id) do
78 | ## Returns false in case the option is not set in the environment (setting
79 | ## the option to false or not setting the option at all has the same effect
80 | ## in this case)
81 | cfg_ipv6_is_enabled? = config(:ipv6, false)
82 | cfg_ipv4_is_enabled? = config(:ipv4, false)
83 |
84 | unless cfg_ipv4_is_enabled? or cfg_ipv6_is_enabled? do
85 | raise "Configuration failure: Either ipv4 or ipv6 has to be set to true."
86 | end
87 |
88 | cfg_port = config(:port)
89 | socket = if cfg_ipv4_is_enabled?, do: create_udp_socket(cfg_port, :ipv4), else: nil
90 | socket6 = if cfg_ipv6_is_enabled?, do: create_udp_socket(cfg_port, :ipv6), else: nil
91 |
92 | ## Change secret of the token every 5 minutes
93 | Process.send_after(self(), :change_secret, @time_change_secret)
94 |
95 | state = %{node_id: node_id, node_id_enc: Base.encode16(node_id),
96 | socket: socket, socket6: socket6, old_secret: nil, secret: Utils.gen_secret}
97 |
98 | # INFO Setup routingtable for IPv4
99 | if cfg_ipv4_is_enabled? do
100 | start_rtable(node_id, :ipv4)
101 |
102 | if config(:bootstrap_nodes) do
103 | bootstrap(state, {socket, :inet})
104 | end
105 | end
106 |
107 | # INFO Setup routingtable for IPv6
108 | if cfg_ipv6_is_enabled? do
109 | start_rtable(node_id, :ipv6)
110 |
111 | if config(:bootstrap_nodes) do
112 | bootstrap(state, {socket6, :inet6})
113 | end
114 | end
115 |
116 | {:ok, state}
117 | end
118 |
119 | defp start_rtable(node_id, rt_name) do
120 | node_id_enc = node_id |> Base.encode16()
121 | rt_name = to_string(rt_name)
122 |
123 | ## Allows giving atoms as rt_name to this function, e.g. :ipv4
124 | {:ok, _pid} = node_id_enc
125 | |> MlDHT.Registry.get_pid(MlDHT.RoutingTable.Supervisor)
126 | |> DynamicSupervisor.start_child({
127 | MlDHT.RoutingTable.Supervisor,
128 | node_id: node_id,
129 | node_id_enc: node_id_enc,
130 | rt_name: rt_name})
131 |
132 | node_id |> get_rtable(rt_name)
133 | end
134 |
135 | defp get_rtable(node_id, rt_name) do
136 | node_id
137 | |> Base.encode16()
138 | |> MlDHT.Registry.get_pid(MlDHT.RoutingTable.Worker, rt_name)
139 | end
140 |
141 | def handle_cast({:bootstrap, socket_tuple}, state) do
142 | bootstrap(state, socket_tuple)
143 | {:noreply, state}
144 | end
145 |
146 | def handle_cast({:search_announce, infohash, callback}, state) do
147 | # TODO What about ipv6?
148 | nodes = state.node_id
149 | |> get_rtable(:ipv4)
150 | |> MlDHT.RoutingTable.Worker.closest_nodes(infohash)
151 |
152 | state.node_id_enc
153 | |> MlDHT.Registry.get_pid(MlDHT.Search.Supervisor)
154 | |> MlDHT.Search.Supervisor.start_child(:get_peers, state.socket, state.node_id)
155 | |> Search.get_peers(target: infohash, start_nodes: nodes,
156 | callback: callback, port: 0, announce: true)
157 |
158 | {:noreply, state}
159 | end
160 |
161 |   def handle_cast({:search_announce, infohash, port, callback}, state) do
162 | nodes = state.node_id
163 | |> get_rtable(:ipv4)
164 | |> MlDHT.RoutingTable.Worker.closest_nodes(infohash)
165 |
166 | state.node_id_enc
167 | |> MlDHT.Registry.get_pid(MlDHT.Search.Supervisor)
168 | |> MlDHT.Search.Supervisor.start_child(:get_peers, state.socket, state.node_id)
169 | |> Search.get_peers(target: infohash, start_nodes: nodes,
170 | callback: callback, port: port, announce: true)
171 |
172 | {:noreply, state}
173 | end
174 |
175 | def handle_cast({:search, infohash, callback}, state) do
176 | nodes = state.node_id
177 | |> get_rtable(:ipv4)
178 | |> MlDHT.RoutingTable.Worker.closest_nodes(infohash)
179 |
180 | state.node_id_enc
181 | |> MlDHT.Registry.get_pid(MlDHT.Search.Supervisor)
182 | |> MlDHT.Search.Supervisor.start_child(:get_peers, state.socket, state.node_id)
183 | |> Search.get_peers(target: infohash, start_nodes: nodes, port: 0,
184 | callback: callback, announce: false)
185 |
186 | {:noreply, state}
187 | end
188 |
189 | def handle_info(:change_secret, state) do
190 | Logger.debug "Change Secret"
191 | Process.send_after(self(), :change_secret, @time_change_secret)
192 |
193 | {:noreply, %{state | old_secret: state.secret, secret: Utils.gen_secret()}}
194 | end
195 |
196 | def handle_info({:udp, socket, ip, port, raw_data}, state) do
197 | # if Mix.env == :dev do
198 | # Logger.debug "[#{Utils.tuple_to_ipstr(ip, port)}]\n"
199 | # <> PrettyHex.pretty_hex(to_string(raw_data))
200 | # Logger.debug "#{inspect raw_data, limit: 1000}"
201 | # end
202 |
203 | raw_data
204 | |> :binary.list_to_bin()
205 | |> String.trim_trailing("\n")
206 | |> KRPCProtocol.decode()
207 | |> handle_message({socket, get_ip_vers(socket)}, ip, port, state)
208 | end
209 |
210 | #########
211 | # Error #
212 | #########
213 |
214 | def handle_message({:error, error}, _socket, ip, port, state) do
215 | args = [code: error.code, msg: error.msg, tid: error.tid]
216 | payload = KRPCProtocol.encode(:error, args)
217 | :gen_udp.send(state.socket, ip, port, payload)
218 |
219 | {:noreply, state}
220 | end
221 |
222 | def handle_message({:invalid, msg}, _socket, _ip, _port, state) do
223 | Logger.error "Ignoring unknown or corrupted message: #{inspect msg, limit: 5000}"
224 | ## Maybe we should blacklist this misbehaving peer?
225 |
226 | {:noreply, state}
227 | end
228 |
229 | ########################
230 | # Incoming DHT Queries #
231 | ########################
232 |
233 | def handle_message({:ping, remote}, {socket, ip_vers}, ip, port, state) do
234 | Logger.debug "[#{Base.encode16(remote.node_id)}] >> ping"
235 | query_received(remote.node_id, state.node_id, {ip, port}, {socket, ip_vers})
236 |
237 | send_ping_reply(state.node_id, remote.tid, ip, port, socket)
238 |
239 | {:noreply, state}
240 | end
241 |
242 | def handle_message({:find_node, remote}, {socket, ip_vers}, ip, port, state) do
243 | Logger.debug "[#{Base.encode16(remote.node_id)}] >> find_node"
244 | query_received(remote.node_id, state.node_id, {ip, port}, {socket, ip_vers})
245 |
246 | ## Get closest nodes for the requested target from the routing table
247 | nodes = state.node_id
248 | |> get_rtable(ip_vers)
249 | |> MlDHT.RoutingTable.Worker.closest_nodes(remote.target, remote.node_id)
250 | |> Enum.map(fn(pid) ->
251 | ## A node process may have terminated while we iterate,
252 | ## so skip dead pids instead of rescuing an error that
253 | ## Node.to_tuple/1 would never raise
254 | if Process.alive?(pid) do
255 | Node.to_tuple(pid)
256 | end
257 | end)
258 | |> Enum.reject(&is_nil/1)
259 |
260 | if nodes != [] do
261 | Logger.debug("[#{Base.encode16(remote.node_id)}] << find_node_reply")
262 |
263 | nodes_args = if ip_vers == :ipv4, do: [nodes: nodes], else: [nodes6: nodes]
264 | args = [node_id: state.node_id] ++ nodes_args ++ [tid: remote.tid]
265 | Logger.debug "NODES ARGS: #{inspect args}"
266 | payload = KRPCProtocol.encode(:find_node_reply, args)
267 |
268 | # Logger.debug(PrettyHex.pretty_hex(to_string(payload)))
269 |
270 | :gen_udp.send(socket, ip, port, payload)
271 | end
272 |
273 | {:noreply, state}
274 | end
275 |
276 | ## Get_peers
277 | def handle_message({:get_peers, remote}, {socket, ip_vers}, ip, port, state) do
278 | Logger.debug "[#{Base.encode16(remote.node_id)}] >> get_peers"
279 | query_received(remote.node_id, state.node_id, {ip, port}, {socket, ip_vers})
280 |
281 | ## Generate a token for the requesting node
282 | token = :crypto.hash(:sha, Utils.tuple_to_ipstr(ip, port) <> state.secret)
283 |
284 | ## Get pid of the storage genserver
285 | storage_pid = state.node_id |> Base.encode16() |> Registry.get_pid(Storage)
286 |
287 | args =
288 | if Storage.has_nodes_for_infohash?(storage_pid, remote.info_hash) do
289 | values = Storage.get_nodes(storage_pid, remote.info_hash)
290 | [node_id: state.node_id, values: values, tid: remote.tid, token: token]
291 | else
292 | ## Get the closest nodes for the requested info_hash
293 | rtable = state.node_id |> get_rtable(ip_vers)
294 | nodes = Enum.map(MlDHT.RoutingTable.Worker.closest_nodes(rtable, remote.info_hash), fn(pid) ->
295 | Node.to_tuple(pid)
296 | end)
297 |
298 | Logger.debug("[#{Base.encode16(remote.node_id)}] << get_peers_reply (nodes)")
299 | [node_id: state.node_id, nodes: nodes, tid: remote.tid, token: token]
300 | end
301 |
302 | payload = KRPCProtocol.encode(:get_peers_reply, args)
303 | :gen_udp.send(socket, ip, port, payload)
304 |
305 | {:noreply, state}
306 | end
307 |
308 | ## Announce_peer
309 | def handle_message({:announce_peer, remote}, {socket, ip_vers}, ip, port, state) do
310 | Logger.debug "[#{Base.encode16(remote.node_id)}] >> announce_peer"
311 | query_received(remote.node_id, state.node_id, {ip, port}, {socket, ip_vers})
312 |
313 | if token_match(remote.token, ip, port, state.secret, state.old_secret) do
314 | Logger.debug "Valid Token"
315 | Logger.debug "#{inspect remote}"
316 |
317 | port = if Map.has_key?(remote, :implied_port), do: port, else: remote.port
318 |
319 | ## Get pid of the storage genserver
320 | storage_pid = state.node_id |> Base.encode16() |> Registry.get_pid(Storage)
321 |
322 | Storage.put(storage_pid, remote.info_hash, ip, port)
323 |
324 | ## Sending a ping_reply back as an acknowledgement
325 | send_ping_reply(state.node_id, remote.tid, ip, port, socket)
326 |
327 | {:noreply, state}
328 | else
329 | Logger.debug("[#{Base.encode16(remote.node_id)}] << error (invalid token)")
330 |
331 | args = [code: 203, msg: "Announce_peer with wrong token", tid: remote.tid]
332 | payload = KRPCProtocol.encode(:error, args)
333 | :gen_udp.send(socket, ip, port, payload)
334 |
335 | {:noreply, state}
336 | end
337 | end
338 |
339 | ########################
340 | # Incoming DHT Replies #
341 | ########################
342 |
343 | def handle_message({:error_reply, error}, _socket, ip, port, state) do
344 | ip_port_str = Utils.tuple_to_ipstr(ip, port)
345 | Logger.error "[#{ip_port_str}] >> error (#{error.code}: #{error.msg})"
346 |
347 | {:noreply, state}
348 | end
349 |
350 | def handle_message({:find_node_reply, remote}, {socket, ip_vers}, ip, port, state) do
351 | Logger.debug "[#{Base.encode16(remote.node_id)}] >> find_node_reply"
352 | response_received(remote.node_id, state.node_id, {ip, port}, {socket, ip_vers})
353 | tid_enc = Base.encode16(remote.tid)
354 |
355 | case MlDHT.Registry.get_pid(state.node_id_enc, Search, tid_enc) do
356 | nil -> Logger.debug "[#{Base.encode16(remote.node_id)}] ignore unknown tid: #{tid_enc}"
357 | pid -> Search.handle_reply(pid, remote, remote.nodes)
358 | end
359 |
360 | ## Ping all nodes
361 | payload = KRPCProtocol.encode(:ping, node_id: state.node_id)
362 | Enum.each(remote.nodes, fn(node_tuple) ->
363 | {_id, {ip, port}} = node_tuple
364 | :gen_udp.send(socket, ip, port, payload)
365 | end)
366 |
367 | {:noreply, state}
368 | end
369 |
370 | def handle_message({:get_peer_reply, remote}, {socket, ip_vers}, ip, port, state) do
371 | Logger.debug "[#{Base.encode16(remote.node_id)}] >> get_peer_reply"
372 | response_received(remote.node_id, state.node_id, {ip, port}, {socket, ip_vers})
373 | tid_enc = Base.encode16(remote.tid)
374 |
375 | case MlDHT.Registry.get_pid(state.node_id_enc, Search, tid_enc) do
376 | nil -> Logger.debug "[#{Base.encode16(remote.node_id)}] ignore unknown tid: #{tid_enc}"
377 | pid -> Search.handle_reply(pid, remote, remote.nodes)
378 | end
379 |
380 | {:noreply, state}
381 | end
382 |
383 | def handle_message({:ping_reply, remote}, {socket, ip_vers}, ip, port, state) do
384 | Logger.debug "[#{Base.encode16(remote.node_id)}] >> ping_reply"
385 | response_received(remote.node_id, state.node_id, {ip, port}, {socket, ip_vers})
386 |
387 | {:noreply, state}
388 | end
389 |
390 | #####################
391 | # Private Functions #
392 | #####################
393 |
394 | defp inet_option(:ipv4), do: [:inet]
395 | defp inet_option(:ipv6), do: [:inet6, {:ipv6_v6only, true}]
396 |
397 | defp maybe_put(list, _name, nil), do: list
398 | defp maybe_put(list, name, value), do: list ++ [{name, value}]
399 |
400 | defp config(value, ret \\ nil), do: Application.get_env(:mldht, value, ret)
401 |
402 | ## This function starts a search with the bootstrapping nodes.
403 | defp bootstrap(state, {socket, inet}) do
404 |
405 | ## Get the nodes which are defined as bootstrapping nodes in the config
406 | nodes = config(:bootstrap_nodes)
407 | |> resolve_hostnames(inet)
408 |
409 | Logger.debug "nodes: #{inspect nodes}"
410 |
411 | ## Start a find_node search to collect neighbors for our routing table
412 | state.node_id_enc
413 | |> MlDHT.Registry.get_pid(MlDHT.Search.Supervisor)
414 | |> MlDHT.Search.Supervisor.start_child(:find_node, socket, state.node_id)
415 | |> Search.find_node(target: state.node_id, start_nodes: nodes)
416 |
417 | end
418 |
419 | ## This function iterates over a list of bootstrapping nodes and tries to
420 | ## resolve the hostname of each node. If a hostname is not resolvable, the
421 | ## function removes the node; otherwise it replaces the hostname with the IP address.
422 | defp resolve_hostnames(list, inet), do: resolve_hostnames(list, inet, [])
423 | defp resolve_hostnames([], _inet, result), do: result
424 | defp resolve_hostnames([{id, host, port} | tail], inet, result) when is_tuple(host) do
425 | resolve_hostnames(tail, inet, result ++ [{id, host, port}])
426 | end
427 | defp resolve_hostnames([{id, host, port} | tail], inet, result) when is_binary(host) do
428 | case :inet.getaddr(String.to_charlist(host), inet) do
429 | {:ok, ip_addr} ->
430 | resolve_hostnames(tail, inet, result ++ [{id, ip_addr, port}])
431 | {:error, code} ->
432 | Logger.error "Couldn't resolve the hostname: #{host} (reason: #{code})"
433 | resolve_hostnames(tail, inet, result)
434 | end
435 | end
436 |
437 | ## Takes a socket as an argument and returns the IP version (:ipv4 or
438 | ## :ipv6) to which the socket belongs.
439 | @spec get_ip_vers(port) :: ip_vers
440 | defp get_ip_vers(socket) when is_port(socket) do
441 | case :inet.getopts(socket, [:ipv6_v6only]) do
442 | {:ok, [ipv6_v6only: true]} -> :ipv6
443 | {:ok, []} -> :ipv4
444 | end
445 | end
446 |
447 | defp send_ping_reply(node_id, tid, ip, port, socket) do
448 | Logger.debug("[#{Base.encode16(node_id)}] << ping_reply")
449 |
450 | payload = KRPCProtocol.encode(:ping_reply, tid: tid, node_id: node_id)
451 | :gen_udp.send(socket, ip, port, payload)
452 | end
453 |
454 | # TODO query_received and response_received are nearly identical
455 |
456 | defp query_received(remote_node_id, node_id, ip_port, {socket, ip_vers}) do
457 | rtable = node_id |> get_rtable(ip_vers)
458 |
459 | if node_pid = MlDHT.RoutingTable.Worker.get(rtable, remote_node_id) do
460 | Node.query_received(node_pid)
461 | index = Node.bucket_index(node_pid)
462 | MlDHT.RoutingTable.Worker.update_bucket(rtable, index)
463 | else
464 | MlDHT.RoutingTable.Worker.add(rtable, remote_node_id, ip_port, socket)
465 | end
466 | end
467 |
468 | defp response_received(remote_node_id, node_id, ip_port, {socket, ip_vers}) do
469 | rtable = node_id |> get_rtable(ip_vers)
470 |
471 | if node_pid = MlDHT.RoutingTable.Worker.get(rtable, remote_node_id) do
472 | Node.response_received(node_pid)
473 | index = Node.bucket_index(node_pid)
474 | MlDHT.RoutingTable.Worker.update_bucket(rtable, index)
475 | else
476 | MlDHT.RoutingTable.Worker.add(rtable, remote_node_id, ip_port, socket)
477 | end
478 | end
479 |
480 | defp token_match(tok, ip, port, secret, nil) do
481 | new_str = Utils.tuple_to_ipstr(ip, port) <> secret
482 | new_tok = :crypto.hash(:sha, new_str)
483 |
484 | tok == new_tok
485 | end
486 |
487 | defp token_match(tok, ip, port, secret, old_secret) do
488 | token_match(tok, ip, port, secret, nil) or
489 | token_match(tok, ip, port, old_secret, nil)
490 | end
491 |
492 | end
493 |
--------------------------------------------------------------------------------
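Editor's note: the announce token scheme in `Server.Worker` above hashes the requester's `ip:port` string together with a rotating secret and accepts tokens minted under either the current or the previous secret. A minimal standalone sketch of that scheme (the `TokenSketch` module and the secret values are illustrative, not part of MlDHT):

```elixir
# Sketch of the announce token scheme: token = SHA-1(ip_port_str <> secret),
# validated against the current secret or the previous one after rotation.
defmodule TokenSketch do
  def gen(ip_port_str, secret), do: :crypto.hash(:sha, ip_port_str <> secret)

  # A token is valid if it matches the current or the old secret.
  def valid?(token, ip_port_str, secret, old_secret) do
    token == gen(ip_port_str, secret) or
      (old_secret != nil and token == gen(ip_port_str, old_secret))
  end
end

token = TokenSketch.gen("127.0.0.1:6881", "secret-1")

# Still valid after one rotation ("secret-1" became the old secret)...
true = TokenSketch.valid?(token, "127.0.0.1:6881", "secret-2", "secret-1")
# ...but not after two rotations.
false = TokenSketch.valid?(token, "127.0.0.1:6881", "secret-3", "secret-2")
```

Because `:change_secret` fires periodically and exactly one old secret is kept, a peer that received a token from `get_peers` has at least one rotation interval in which its `announce_peer` will still be accepted.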
/lib/mldht/supervisor.ex:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.Supervisor do
2 | use Supervisor
3 |
4 | @moduledoc ~S"""
5 | Root Supervisor for MlDHT
6 |
7 | """
8 |
9 | @doc false
10 | # TODO: use Keyword.fetch!/2 to enforce the :node_id option
11 | def start_link(opts) do
12 | Supervisor.start_link(__MODULE__, opts[:node_id], opts)
13 | end
14 |
15 | @impl true
16 | def init(node_id) do
17 | node_id_enc = node_id |> Base.encode16()
18 |
19 | children = [
20 | {DynamicSupervisor,
21 | name: MlDHT.Registry.via(node_id_enc, MlDHT.RoutingTable.Supervisor),
22 | strategy: :one_for_one},
23 |
24 | {MlDHT.Search.Supervisor,
25 | name: MlDHT.Registry.via(node_id_enc, MlDHT.Search.Supervisor),
26 | strategy: :one_for_one},
27 |
28 | {MlDHT.Server.Worker,
29 | node_id: node_id,
30 | name: MlDHT.Registry.via(node_id_enc, MlDHT.Server.Worker)},
31 |
32 | {MlDHT.Server.Storage,
33 | name: MlDHT.Registry.via(node_id_enc, MlDHT.Server.Storage)},
34 | ]
35 |
36 | Supervisor.init(children, strategy: :one_for_one)
37 | end
38 |
39 | end
40 |
--------------------------------------------------------------------------------
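Editor's note: the supervisor above names every child through `MlDHT.Registry.via/2,3`, which, judging from its use, wraps Elixir's `Registry` via-tuples so that several DHT nodes can run side by side without name clashes. The underlying mechanism, sketched with the stdlib `Registry` directly (`Demo.Registry` and the composite key are illustrative names):

```elixir
# Via-tuples let processes register under composite keys in a Registry,
# so one module can run once per node_id without global name clashes.
{:ok, _} = Registry.start_link(keys: :unique, name: Demo.Registry)

via = {:via, Registry, {Demo.Registry, {"NODE_ID_ENC", :storage}}}
{:ok, pid} = Agent.start_link(fn -> %{} end, name: via)

# Looking up the same composite key returns the registered pid
# (via-registered processes are stored with a nil value).
[{^pid, nil}] = Registry.lookup(Demo.Registry, {"NODE_ID_ENC", :storage})
```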
/mix.exs:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.Mixfile do
2 | use Mix.Project
3 |
4 | def project do
5 | [app: :mldht,
6 | version: "0.0.3",
7 | elixir: "~> 1.6",
8 | build_embedded: Mix.env == :prod,
9 | start_permanent: Mix.env == :prod,
10 | description: description(),
11 | package: package(),
12 | deps: deps()]
13 | end
14 |
15 | def application do
16 | [mod: {MlDHT, []},
17 | env: [
18 | port: 6881,
19 | ipv4: true,
20 | ipv6: true,
21 | bootstrap_nodes: [
22 | {"32F54E697351FF4AEC29CDBAABF2FBE3467CC267", "router.bittorrent.com", 6881},
23 | {"EBFF36697351FF4AEC29CDBAABF2FBE3467CC267", "router.utorrent.com", 6881},
24 | {"9F08E1074F1679137561BAFE2CF62A73A8AFADC7", "dht.transmissionbt.com", 6881},
25 | ],
26 | k_bucket_size: 8,
27 | ],
28 | extra_applications: [:logger]]
29 | end
30 |
31 | defp deps do
32 | [{:bencodex, "~> 1.0.0"},
33 | {:krpc_protocol, "~> 0.0.5"},
34 | {:ex_doc, "~> 0.19", only: :dev},
35 | {:pretty_hex, "~> 0.0.1", only: :dev},
36 | {:dialyxir, "~> 0.5.1", only: [:dev, :test]}
37 | ]
38 | end
39 |
40 | defp description do
41 | """
42 | A Distributed Hash Table (DHT) is a storage and lookup system based on a peer-to-peer (P2P) network. The BitTorrent file-sharing protocol uses a DHT to find new peers. MlDHT is an Elixir package that provides a Mainline DHT implementation according to BEP 05.
43 | """
44 | end
45 |
46 | defp package do
47 | [name: :mldht,
48 | files: ["lib", "mix.exs", "README*", "LICENSE*"],
49 | maintainers: ["Florian Adamsky"],
50 | licenses: ["MIT"],
51 | links: %{"GitHub" => "https://github.com/cit/MLDHT"}]
52 | end
53 | end
54 |
--------------------------------------------------------------------------------
/test/mldht_routing_table_bucket_test.exs:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.RoutingTable.Bucket.Test do
2 | use ExUnit.Case
3 |
4 | require Logger
5 |
6 | alias MlDHT.RoutingTable.Bucket
7 |
8 | test "size function" do
9 | bucket = Bucket.new(0)
10 | assert Bucket.size(bucket) == 0
11 |
12 | bucket =
13 | Bucket.add(bucket, "elem1")
14 | |> Bucket.add("elem2")
15 | |> Bucket.add("elem3")
16 |
17 | assert Bucket.size(bucket) == 3
18 | end
19 |
20 | test "if is_full?/1 returns false when only 6 elements are added" do
21 | list_half_full = 1..6 |> Enum.to_list()
22 | assert Bucket.new(0) |> Bucket.add(list_half_full) |> Bucket.is_full? == false
23 | end
24 |
25 | test "if is_full?/1 returns true when 8 elements are added" do
26 | list_full = 1..8 |> Enum.to_list()
27 | assert Bucket.new(0) |> Bucket.add(list_full) |> Bucket.is_full? == true
28 | end
29 |
30 | test "if has_space?/1 returns true when only 6 elements are added" do
31 | list_half_full = 1..6 |> Enum.to_list()
32 | assert Bucket.new(0) |> Bucket.add(list_half_full) |> Bucket.has_space? == true
33 | end
34 |
35 | test "if has_space?/1 returns false when 8 elements are added" do
36 | list_full = 1..8 |> Enum.to_list()
37 | assert Bucket.new(0) |> Bucket.add(list_full) |> Bucket.has_space? == false
38 | end
39 |
40 | test "if different value for k_bucket_size" do
41 | default_k = Application.get_env(:mldht, :k_bucket_size)
42 | new_k = 20
43 |
44 | # Set a new value of k
45 | Application.put_env(:mldht, :k_bucket_size, new_k)
46 |
47 | # Test that the bucket has no space left if we add as many nodes as the new bucket size
48 | list_full = 1..new_k |> Enum.to_list()
49 | assert Bucket.new(0) |> Bucket.add(list_full) |> Bucket.has_space? == false
50 |
51 | # Set previous default value of k for the other tests
52 | Application.put_env(:mldht, :k_bucket_size, default_k)
53 | end
54 |
55 | test "if age/1 works correctly with waiting one second" do
56 | bucket = Bucket.new(0)
57 | :timer.sleep(1000)
58 |
59 | assert Bucket.age(bucket) >= 1
60 | end
61 |
62 | test "if age/1 works correctly without waiting" do
63 | bucket = Bucket.new(0)
64 | assert Bucket.age(bucket) < 1
65 | end
66 |
67 | test "if age/1 works correctly when adding a new node" do
68 | bucket = Bucket.new(0)
69 | :timer.sleep(1000)
70 | new_bucket = Bucket.add(bucket, "elem")
71 | assert Bucket.age(new_bucket) < 1
72 | end
73 |
74 | test "if update/1 creates a new Bucket" do
75 | bucket = Bucket.new(0)
76 | :timer.sleep(1000)
77 | new_bucket = Bucket.update(bucket)
78 | assert Bucket.age(new_bucket) < 1
79 | end
80 |
81 | test "if update/1 still has the same nodes" do
82 | bucket = Bucket.new(0) |> Bucket.add([1, 2, 3, 4, 5])
83 | new_bucket = Bucket.update(bucket)
84 | assert Bucket.size(new_bucket) == 5
85 | end
86 |
87 | end
88 |
--------------------------------------------------------------------------------
/test/mldht_routing_table_distance_test.exs:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.RoutingTable.Distance.Test do
2 | use ExUnit.Case
3 |
4 | alias MlDHT.RoutingTable.Distance
5 |
6 | test "gen_node_id" do
7 | node_id = String.duplicate("A", 20)
8 | result = Distance.gen_node_id(8, node_id)
9 |
10 | assert byte_size(result) == 20
11 | assert String.first(result) == String.first(node_id)
12 |
13 | result = Distance.gen_node_id(152, node_id)
14 | assert byte_size(result) == 20
15 | assert String.starts_with?(result, String.duplicate("A", 19))
16 |
17 | result = Distance.gen_node_id(2, <<85, 65, 186, 187, 3, 126, 81, 52, 54, 56, 37, 227, 187, 54, 221, 198, 79, 194, 129, 1>>)
18 | assert byte_size(result) == 20
19 | end
20 |
21 | test "xor_cmp" do
22 | assert Distance.xor_cmp("A", "a", "F", &(&1 > &2)) == false
23 | assert Distance.xor_cmp("a", "B", "F", &(&1 > &2)) == true
24 | end
25 |
26 | test "If the function find_bucket works correctly" do
27 | assert Distance.find_bucket("abc", "bca") == 6
28 | assert Distance.find_bucket("bca", "abc") == 6
29 |
30 | assert Distance.find_bucket("AA", "aa") == 2
31 | assert Distance.find_bucket("aa", "AA") == 2
32 |
33 | assert Distance.find_bucket(<<0b00000010>>, <<0b00000010>>) == 8
34 | assert Distance.find_bucket(<<0b10000010>>, <<0b00000010>>) == 0
35 | assert Distance.find_bucket(<<0b11110000>>, <<0b11111111>>) == 4
36 | end
37 |
38 | end
39 |
--------------------------------------------------------------------------------
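Editor's note: the `find_bucket` assertions above follow from the Kademlia XOR metric — the bucket index is the number of leading bits two IDs have in common, i.e. the number of leading zero bits in their XOR. A minimal sketch of that rule, independent of `MlDHT.RoutingTable.Distance` (the `BucketSketch` module is illustrative):

```elixir
# Bucket index = number of leading zero bits in id_a XOR id_b;
# identical one-byte IDs therefore land in bucket 8.
defmodule BucketSketch do
  def find_bucket(id_a, id_b), do: leading_zeros(:crypto.exor(id_a, id_b), 0)

  defp leading_zeros(<<0::1, rest::bitstring>>, acc), do: leading_zeros(rest, acc + 1)
  defp leading_zeros(_, acc), do: acc
end

BucketSketch.find_bucket(<<0b11110000>>, <<0b11111111>>)  # => 4
BucketSketch.find_bucket(<<0b00000010>>, <<0b00000010>>)  # => 8
```

This reproduces the expectations in the test file: `"abc"` and `"bca"` first differ in bit 6 of the leading byte, so they share 6 leading bits.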
/test/mldht_routing_table_node_test.exs:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.RoutingTable.Node.Test do
2 | use ExUnit.Case
3 |
4 | alias MlDHT.RoutingTable.Node
5 |
6 | setup do
7 | rt_name = "rt_test"
8 | node_id = String.duplicate("A", 20)
9 | node_id_enc = Base.encode16(node_id)
10 | node_tuple = {node_id, {{127, 0, 0, 1}, 2323}, nil}
11 | node_child = {Node, own_node_id: node_id, node_tuple: node_tuple, bucket_index: 23}
12 |
13 | start_supervised!({
14 | DynamicSupervisor,
15 | name: MlDHT.Registry.via(node_id_enc, MlDHT.RoutingTable.NodeSupervisor, rt_name),
16 | strategy: :one_for_one})
17 |
18 | sup_pid = MlDHT.Registry.get_pid(node_id_enc, MlDHT.RoutingTable.NodeSupervisor, rt_name)
19 | {:ok, pid} = DynamicSupervisor.start_child(sup_pid, node_child)
20 |
21 | [pid: pid,
22 | node_id: node_id,
23 | node_id_enc: node_id_enc
24 | ]
25 | end
26 |
27 | test "if RoutingTable.Node stops correctly ", state do
28 | Node.stop(state.pid)
29 | assert Process.alive?(state.pid) == false
30 | end
31 |
32 | test "if RoutingTable.Node returns socket correctly ", state do
33 | assert Node.socket(state.pid) == nil
34 | end
35 |
36 | test "if RoutingTable.Node returns node_id correctly ", state do
37 | assert Node.id(state.pid) == state.node_id
38 | end
39 |
40 | test "if RoutingTable.Node returns goodness correctly ", state do
41 | assert Node.goodness(state.pid) == :good
42 | end
43 |
44 | test "if is_good?/1 returns true after the start", state do
45 | assert Node.is_good?(state.pid) == true
46 | end
47 |
48 | test "if is_questionable?/1 returns false after the start", state do
49 | assert Node.is_questionable?(state.pid) == false
50 | end
51 |
52 | test "if is_questionable?/1 returns true after change", state do
53 | Node.goodness(state.pid, :questionable)
54 | assert Node.is_questionable?(state.pid) == true
55 | end
56 |
57 | test "if goodness()/1 returns :questionable after change", state do
58 | Node.goodness(state.pid, :questionable)
59 | assert Node.goodness(state.pid) == :questionable
60 | end
61 |
62 | test "if RoutingTable.Node returns bucket_index correctly ", state do
63 | assert Node.bucket_index(state.pid) == 23
64 | end
65 |
66 | test "if RoutingTable.Node returns node_tuple correctly ", state do
67 | assert Node.to_tuple(state.pid) == {"AAAAAAAAAAAAAAAAAAAA", {127, 0, 0, 1}, 2323}
68 | end
69 |
70 | test "if RoutingTable.Node returns bucket_index correctly after change ", state do
71 | Node.bucket_index(state.pid, 42)
72 | assert Node.bucket_index(state.pid) == 42
73 | end
74 |
75 | test "if to_string/0 returns correct string ", state do
76 | str = "#Node<" <> state.node_id_enc <> ", goodness: questionable>"
77 | Node.goodness(state.pid, :questionable)
78 | assert Node.to_string(state.pid) == str
79 | end
80 |
81 | test "if response_received/1 sets goodness to :good again", state do
82 | Node.goodness(state.pid, :questionable)
83 | Node.response_received(state.pid)
84 |
85 | assert Node.goodness(state.pid) == :good
86 | end
87 |
88 | end
89 |
--------------------------------------------------------------------------------
/test/mldht_routing_table_worker_test.exs:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.RoutingTable.Worker.Test do
2 | use ExUnit.Case
3 |
4 | @name :test
5 |
6 | setup do
7 | rt_name = "test_rt"
8 | node_id = "AAAAAAAAAAAAAAAAAAAB"
9 | node_id_enc = Base.encode16(node_id)
10 |
11 | start_supervised!({
12 | DynamicSupervisor,
13 | name: MlDHT.Registry.via(node_id_enc, MlDHT.RoutingTable.NodeSupervisor, rt_name),
14 | strategy: :one_for_one})
15 |
16 | start_supervised!({
17 | MlDHT.RoutingTable.Worker,
18 | name: @name,
19 | node_id: node_id,
20 | rt_name: rt_name})
21 |
22 | [node_id: node_id, node_id_enc: node_id_enc, rt_name: rt_name]
23 | end
24 |
25 | test "If the size of the table is 0 if we add and delete a node" do
26 | MlDHT.RoutingTable.Worker.add(@name, "BBBBBBBBBBBBBBBBBBBB", {{127, 0, 0, 1}, 6881}, 23)
27 | assert MlDHT.RoutingTable.Worker.size(@name) == 1
28 |
29 | MlDHT.RoutingTable.Worker.del(@name, "BBBBBBBBBBBBBBBBBBBB")
30 | assert MlDHT.RoutingTable.Worker.size(@name) == 0
31 | end
32 |
33 | test "get_node" do
34 | assert :ok == MlDHT.RoutingTable.Worker.add(@name, "BBBBBBBBBBBBBBBBBBBB", {{127, 0, 0, 1}, 6881}, 23)
35 |
36 | assert MlDHT.RoutingTable.Worker.get(@name, "BBBBBBBBBBBBBBBBBBBB") |> Kernel.is_pid == true
37 | assert MlDHT.RoutingTable.Worker.get(@name, "CCCCCCCCCCCCCCCCCCCC") == nil
38 |
39 | MlDHT.RoutingTable.Worker.del(@name, "BBBBBBBBBBBBBBBBBBBB")
40 | end
41 |
42 | test "Double entries" do
43 | MlDHT.RoutingTable.Worker.add(@name, "BBBBBBBBBBBBBBBBBBBB", {{127, 0, 0, 1}, 6881}, 23)
44 | MlDHT.RoutingTable.Worker.add(@name, "BBBBBBBBBBBBBBBBBBBB", {{127, 0, 0, 1}, 6881}, 23)
45 |
46 | assert MlDHT.RoutingTable.Worker.size(@name) == 1
47 | MlDHT.RoutingTable.Worker.del(@name, "BBBBBBBBBBBBBBBBBBBB")
48 | end
49 |
50 | test "if del() really deletes the node from the routing table" do
51 | MlDHT.RoutingTable.Worker.add(@name, "BBBBBBBBBBBBBBBBBBBB", {{127, 0, 0, 1}, 6881}, 23)
52 | node_pid = MlDHT.RoutingTable.Worker.get(@name, "BBBBBBBBBBBBBBBBBBBB")
53 |
54 | assert Process.alive?(node_pid) == true
55 | MlDHT.RoutingTable.Worker.del(@name, "BBBBBBBBBBBBBBBBBBBB")
56 | assert Process.alive?(node_pid) == false
57 | end
58 |
59 | test "if routing table size and cache size are equal with two elements" do
60 | MlDHT.RoutingTable.Worker.add(@name, "BBBBBBBBBBBBBBBBBBBB", {{127, 0, 0, 1}, 6881}, 23)
61 | MlDHT.RoutingTable.Worker.add(@name, "CCCCCCCCCCCCCCCCCCCC", {{127, 0, 0, 1}, 6881}, 23)
62 |
63 | assert MlDHT.RoutingTable.Worker.size(@name) == MlDHT.RoutingTable.Worker.cache_size(@name)
64 | end
65 |
66 | test "if routing table size and cache size are equal after adding and deleting elements" do
67 | Enum.map(?B .. ?Z, fn(x) -> String.duplicate(<<x>>, 20) end)
68 | |> Enum.each(fn(node_id) ->
69 | MlDHT.RoutingTable.Worker.add(@name, node_id, {{127, 0, 0, 1}, 6881}, 23)
70 | end)
71 |
72 | MlDHT.RoutingTable.Worker.del(@name, "BBBBBBBBBBBBBBBBBBBB")
73 | MlDHT.RoutingTable.Worker.del(@name, "CCCCCCCCCCCCCCCCCCCC")
74 | MlDHT.RoutingTable.Worker.del(@name, "DDDDDDDDDDDDDDDDDDDD")
75 |
76 | assert MlDHT.RoutingTable.Worker.size(@name) == MlDHT.RoutingTable.Worker.cache_size(@name)
77 | end
78 |
79 | test "if closest_nodes() returns only the closest nodes", test_worker_context do
80 | node_id = test_worker_context.node_id
81 |
82 | ## Generate close node_ids
83 | close_nodes = 1 .. 16
84 | |> Enum.map(fn(x) -> MlDHT.RoutingTable.Distance.gen_node_id(160 - x, node_id) end)
85 | |> Enum.filter(fn(x) -> x != node_id end)
86 | |> Enum.uniq()
87 | |> Enum.slice(0 .. 7)
88 | |> Enum.sort()
89 |
90 | ## Add the close nodes to the RoutingTable
91 | Enum.each(close_nodes, fn(node) ->
92 | MlDHT.RoutingTable.Worker.add(@name, node, {{127, 0, 0, 1}, 6881}, nil)
93 | end)
94 |
95 | assert MlDHT.RoutingTable.Worker.size(@name) == 8
96 |
97 | ## Generate and add distant nodes
98 | Enum.map(?B .. ?I, fn(x) -> String.duplicate(<<x>>, 20) end)
99 | |> Enum.each(fn(node_id) ->
100 | MlDHT.RoutingTable.Worker.add(@name, node_id, {{127, 0, 0, 1}, 6881}, 23)
101 | end)
102 |
103 | assert MlDHT.RoutingTable.Worker.size(@name) == 16
104 |
105 | list = MlDHT.RoutingTable.Worker.closest_nodes(@name, node_id)
106 | |> Enum.map(fn(x) -> MlDHT.RoutingTable.Node.id(x) end)
107 | |> Enum.sort()
108 |
109 | ## list and close_nodes must be equal
110 | assert list == close_nodes
111 | end
112 |
113 | test "if routing table closest_nodes filters the source" do
114 | MlDHT.RoutingTable.Worker.add(@name, "BBBBBBBBBBBBBBBBBBBB", {{127, 0, 0, 1}, 6881}, 23)
115 | MlDHT.RoutingTable.Worker.add(@name, "CCCCCCCCCCCCCCCCCCCC", {{127, 0, 0, 1}, 6881}, 23)
116 | MlDHT.RoutingTable.Worker.add(@name, "DDDDDDDDDDDDDDDDDDDD", {{127, 0, 0, 1}, 6881}, 23)
117 |
118 | node_id = "AAAAAAAAAAAAAAAAAAAB"
119 | source = "CCCCCCCCCCCCCCCCCCCC"
120 |
121 | list = MlDHT.RoutingTable.Worker.closest_nodes(@name, node_id, source)
122 | assert length(list) == 2
123 | end
124 |
125 | test "if routing table closest_nodes does not filter the source" do
126 | MlDHT.RoutingTable.Worker.add(@name, "BBBBBBBBBBBBBBBBBBBB", {{127, 0, 0, 1}, 6881}, 23)
127 | MlDHT.RoutingTable.Worker.add(@name, "CCCCCCCCCCCCCCCCCCCC", {{127, 0, 0, 1}, 6881}, 23)
128 | MlDHT.RoutingTable.Worker.add(@name, "DDDDDDDDDDDDDDDDDDDD", {{127, 0, 0, 1}, 6881}, 23)
129 |
130 | node_id = "AAAAAAAAAAAAAAAAAAAB"
131 |
132 | list = MlDHT.RoutingTable.Worker.closest_nodes(@name, node_id)
133 | assert length(list) == 3
134 | end
135 |
136 | test "if routing table ignores its own node_id", test_worker_context do
137 | node_id = test_worker_context.node_id
138 | MlDHT.RoutingTable.Worker.add(@name, node_id, {{127, 0, 0, 1}, 6881}, 23)
139 | MlDHT.RoutingTable.Worker.add(@name, "CCCCCCCCCCCCCCCCCCCC", {{127, 0, 0, 1}, 6881}, 23)
140 | MlDHT.RoutingTable.Worker.add(@name, "DDDDDDDDDDDDDDDDDDDD", {{127, 0, 0, 1}, 6881}, 23)
141 |
142 | assert MlDHT.RoutingTable.Worker.size(@name) == 2
143 | end
144 |
145 | end
146 |
--------------------------------------------------------------------------------
/test/mldht_search_worker_test.exs:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.Search.Worker.Test do
2 | use ExUnit.Case
3 |
4 | alias MlDHT.Search.Worker, as: Search
5 |
6 | setup do
7 | node_id = String.duplicate("A", 20)
8 | node_id_enc = Base.encode16(node_id)
9 |
10 | start_supervised!(
11 | {MlDHT.Search.Supervisor,
12 | name: MlDHT.Registry.via(node_id_enc, MlDHT.Search.Supervisor),
13 | strategy: :one_for_one})
14 |
15 | [pid: MlDHT.Registry.get_pid(node_id_enc, MlDHT.Search.Supervisor),
16 | node_id: node_id,
17 | node_id_enc: node_id_enc
18 | ]
19 | end
20 |
21 | test "get_peers", state do
22 | search_pid = state.pid
23 | |> MlDHT.Search.Supervisor.start_child(:get_peers, nil, state.node_id)
24 |
25 | Search.get_peers(search_pid, target: state.node_id, start_nodes: [])
26 | assert Search.type(search_pid) == :get_peers
27 | end
28 |
29 | test "find_node", state do
30 | search_pid = state.pid
31 | |> MlDHT.Search.Supervisor.start_child(:find_node, nil, state.node_id)
32 |
33 | Search.find_node(search_pid, target: state.node_id, start_nodes: [])
34 | assert Search.type(search_pid) == :find_node
35 | end
36 |
37 | test "find_node shutdown", state do
38 | search_pid = state.pid
39 | |> MlDHT.Search.Supervisor.start_child(:find_node, nil, state.node_id)
40 |
41 | Search.find_node(search_pid, target: state.node_id, start_nodes: [])
42 | Search.stop(search_pid)
43 |
44 | assert Process.alive?(search_pid) == false
45 | end
46 |
47 | test "if search exits normally and does not restart", state do
48 | search_pid = state.pid
49 | |> MlDHT.Search.Supervisor.start_child(:get_peers, nil, state.node_id)
50 |
51 | tid_enc = search_pid
52 | |> Search.tid()
53 | |> Base.encode16()
54 |
55 | Search.stop(search_pid)
56 | assert MlDHT.Registry.get_pid(state.node_id_enc, Search, tid_enc) == nil
57 | end
58 |
59 | end
60 |
--------------------------------------------------------------------------------
/test/mldht_server_storage_test.exs:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.Server.Storage.Test do
2 | use ExUnit.Case
3 | require Logger
4 |
5 | alias MlDHT.Server.Storage
6 | alias MlDHT.Registry
7 |
8 | setup do
9 | node_id_enc = String.duplicate("A", 20) |> Base.encode16()
10 | rt_name = "test_rt"
11 |
12 | start_supervised!({
13 | DynamicSupervisor,
14 | name: Registry.via(node_id_enc, MlDHT.RoutingTable.NodeSupervisor, rt_name),
15 | strategy: :one_for_one})
16 |
17 | start_supervised!({Storage, name: Registry.via(node_id_enc, Storage)})
18 |
19 | [pid: MlDHT.Registry.get_pid(node_id_enc, Storage)]
20 | end
21 |
22 | test "has_nodes_for_infohash?", test_context do
23 | pid = test_context.pid
24 | Storage.put(pid, "aaaa", {127, 0, 0, 1}, 6881)
25 |
26 | assert Storage.has_nodes_for_infohash?(pid, "bbbb") == false
27 | assert Storage.has_nodes_for_infohash?(pid, "aaaa") == true
28 | end
29 |
30 | test "get_nodes", test_context do
31 | pid = test_context.pid
32 | ip1 = {127, 0, 0, 1}
33 | ip2 = {127, 0, 0, 2}
34 |
35 | Storage.put(pid, "aaaa", ip1, 6881)
36 | Storage.put(pid, "aaaa", ip1, 6881)
37 | Storage.put(pid, "aaaa", ip2, 6882)
38 |
39 | Storage.print(pid)
40 |
41 | assert Storage.get_nodes(pid, "aaaa") == [{ip1, 6881}, {ip2, 6882}]
42 | end
43 |
44 | end
45 |
--------------------------------------------------------------------------------
/test/mldht_server_utils_test.exs:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.Server.Utils.Test do
2 | use ExUnit.Case
3 |
4 | alias MlDHT.Server.Utils, as: Utils
5 |
6 | test "IPv4 address with tuple_to_ipstr/2" do
7 | assert Utils.tuple_to_ipstr({127, 0, 0, 1}, 6881) == "127.0.0.1:6881"
8 | end
9 |
10 | test "IPv6 address with tuple_to_ipstr/2" do
11 | ip_str = "[2001:41D0:000C:05AC:0005:0000:0000:0001]:6881"
12 | assert Utils.tuple_to_ipstr({8193, 16_848, 12, 1452, 5, 0, 0, 1}, 6881) == ip_str
13 | end
14 |
15 | end
16 |
--------------------------------------------------------------------------------
/test/mldht_server_worker_test.exs:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.Server.Worker.Test do
2 | use ExUnit.Case
3 |
4 | alias MlDHT.Server.Utils, as: Utils
5 |
6 | test "if handle_info(:change_secret) changes the secret" do
7 | secret = Utils.gen_secret()
8 | state = %{old_secret: nil, secret: secret}
9 | {:noreply, new_state} = MlDHT.Server.Worker.handle_info(:change_secret, state)
10 |
11 | assert new_state.secret != secret
12 | end
13 |
14 | test "if handle_info(:change_secret) saves the old secret" do
15 | secret = Utils.gen_secret()
16 | state = %{old_secret: nil, secret: secret}
17 | {:noreply, new_state} = MlDHT.Server.Worker.handle_info(:change_secret, state)
18 |
19 | assert new_state.old_secret == secret
20 | end
21 |
22 | end
23 |
--------------------------------------------------------------------------------
/test/mldht_test.exs:
--------------------------------------------------------------------------------
1 | defmodule MlDHT.Test do
2 | use ExUnit.Case
3 |
4 |   test "if node_id() returns a binary with a size of 20 bytes" do
5 | node_id = MlDHT.node_id()
6 | assert byte_size(node_id) == 20
7 | end
8 |
9 | test "if node_id_enc() returns a String that has a length of 40 characters" do
10 | node_id_enc = MlDHT.node_id_enc()
11 | assert String.length(node_id_enc) == 40
12 | end
13 |
14 | test "if MlDHT.search returns nodes for ubuntu-19.04.iso.torrent" do
15 |     Process.register(self(), :mldht_test_search)
16 |
17 | ## Wait 3 seconds to ensure that the bootstrapping process has collected
18 | ## enough nodes
19 | :timer.sleep(3000)
20 |
21 |     ## The MlDHT search finds multiple nodes and therefore calls the
22 |     ## anonymous function multiple times. Once the test has passed, the
23 |     ## process :mldht_test_search no longer exists, but the function may
24 |     ## still get called. To avoid errors, the send call is wrapped in a
25 |     ## try/rescue block.
26 | "D540FC48EB12F2833163EED6421D449DD8F1CE1F"
27 |     |> Base.decode16!()
28 | |> MlDHT.search(fn (_node) ->
29 | try do
30 | send :mldht_test_search, {:called_back, :pong}
31 | rescue
32 |         # Ignore sends to a process that no longer exists
33 |         _ -> :ok
33 | end
34 | end)
35 |
36 | assert_receive {:called_back, :pong}, 40_000
37 | end
38 |
39 | test "if node returns correct ping_reply message when receiving a ping message" do
40 | rcv_node_id = MlDHT.node_id()
41 | snd_node_id = String.duplicate("B", 20)
42 | host = {127, 0, 0, 1}
43 | port = Application.get_env(:mldht, :port, nil)
44 |
45 | {:ok, socket} = :gen_udp.open(0, [:binary, {:active, false}])
46 |
47 | payload = KRPCProtocol.encode(:ping, tid: "AA", node_id: snd_node_id)
48 | :gen_udp.send(socket, host, port, payload)
49 |
50 | {:ok, {_ip, _port, reply}} = :gen_udp.recv(socket, 0)
51 | {:ping_reply, %{node_id: node_id, tid: tid}} = KRPCProtocol.decode(reply)
52 |
53 | assert node_id == rcv_node_id
54 | assert tid == "AA"
55 | end
56 |
57 | end
58 |
--------------------------------------------------------------------------------
/test/test_helper.exs:
--------------------------------------------------------------------------------
1 | ExUnit.start()
2 |
--------------------------------------------------------------------------------