├── .gitignore
├── LICENSE
├── README.md
├── config
│   ├── config.exs
│   ├── dev.exs
│   ├── prod.exs
│   └── test.exs
├── lib
│   ├── adapter.ex
│   ├── container.ex
│   ├── dispatcher.ex
│   ├── extreme
│   │   ├── adapter.ex
│   │   ├── mapper.ex
│   │   ├── router.ex
│   │   ├── serialization.ex
│   │   └── supervisor.ex
│   ├── persistence.ex
│   ├── repository.ex
│   ├── serialization.ex
│   ├── storage.ex
│   ├── supervisor.ex
│   └── workflow.ex
├── mix.exs
└── test
    ├── domain
    │   ├── account
    │   │   └── account.ex
    │   └── conference
    │       └── commands.ex
    ├── persistence_test.exs
    ├── repository_test.exs
    ├── storage_test.exs
    └── test_helper.exs

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
# The directory Mix will write compiled artifacts to.
/_build

# If you run "mix test --cover", coverage assets end up here.
/cover

# The directory Mix downloads your dependencies sources to.
/deps

# Where 3rd-party dependencies like ExDoc output generated docs.
/doc

# If the VM crashes, it generates a dump, let's ignore it too.
erl_crash.dump

# Also ignore archive artifacts (built via "mix archive.build").
*.ez

selenium*
nohup.out
.idea
*.dump
*.log
.iml
.rebar3
_*
.eunit
*.o
*.beam
*.plt
*.swp
*.swo
.erlang.cookie
ebin
log
.rebar
_rel
_rel-dev
_deps
_plugins
_tdeps
logs
MnesiaCore*
Mnesia.*
tempfile*
mnesia_test_case_info
test_log*
deps
bank.d
relx
.erlang.mk
data
*.aof
dump.*
dump.rdb
event_sourcing_appendonly.aof
/docs/**/*.pdf
*.d
*.iml
src/counter_aggregate.erl
mix.lock

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
The MIT License (MIT)

Copyright (c) 2016 Work Capital - Henry Hazan - henry@work.capital

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
CQRS Eventsourcing Workflow Engine
==================================

[![Join the chat at https://gitter.im/cqrs-engine/Lobby](https://badges.gitter.im/cqrs-engine/Lobby.svg)](https://gitter.im/cqrs-engine/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

### IMPORTANT
Only the data structures are working, for testing and collecting feedback.
We expect the framework to be usable by the beginning of December 2016.


### Pure functional data structures
In the folder engine > types you will find the data structures, so you can write
your pure functions over them and keep the "side effects" dimension separate.

1. A pure function is given one or more input parameters.
2. Its result is based solely on those parameters and its algorithm. The algorithm is not based on any hidden state in the class or object the function is contained in.
3. It won't mutate the parameters it's given.
4. It won't mutate the state of its class or object.
5. It doesn't perform any I/O operations, such as reading from or writing to disk, prompting for input, or reading input.


### Motivation

Aggregates listen for commands and emit events; process managers listen for events (and sometimes commands too) and dispatch commands.

* pure functional data structures for aggregates and process managers
* use monads (monadex) to simulate different business scenarios
* one abstraction to implement side effects
* multiple data stores
* pluggable message queue for publishing events
* one gen_server implementation for aggregates and process managers
* automatic process-manager creation based on correlation ids (as suggested by Greg Young)
* easy use of FSMs in process managers

### Develop

```
mix test.watch
```

Send events from the prompt:

```
iex -S mix
TODO: add example

```


### Eventstore
Run a [docker](https://github.com/EventStore/eventstore-docker) instance on your machine. If you are on a Mac, ask your sys-admin to start it on a Linux server on your LAN or WAN. Access the web GUI at http://localhost:2113 (user: admin, pass: changeit).


```
docker run --name eventstore-node -it -p 2113:2113 -p 1113:1113 eventstore/eventstore
```

#### Resources
Below are several resources I researched before writing this lib.
Special thanks to Ben Smith; many ideas were taken from his
[commanded](https://github.com/slashdotdash/commanded) library.

* [burmajam](https://github.com/burmajam) for sharing the very
well written extreme driver for connecting to Eventstore.
* [slashdotdash](https://github.com/slashdotdash/commanded) for sharing his CQRS
framework, from which many parts of the code here are derived.
* [cqrs-erlang](https://github.com/bryanhunter/cqrs-with-erlang) - A memory-model
CQRS example in Erlang using standard spawn functions.
* [gen-aggregate](https://github.com/burmajam/gen_aggregate/) - Macro for the
aggregate structure, using buffers.
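
### Example (sketch)
A minimal sketch of the intended usage, based on the aggregates under `test/` (the `Counter` module and the stream name are illustrative, not part of the library):

```elixir
defmodule Counter do
  defstruct counter: 0

  defmodule Add,   do: defstruct quantity: 0
  defmodule Added, do: defstruct counter: 0

  # pure decision function: state + command -> event(s)
  def handle(%Counter{counter: c}, %Add{quantity: q}), do: %Added{counter: c + q}

  # pure state mutator: state + event -> new state
  def apply(%Counter{} = state, %Added{counter: c}), do: %Counter{state | counter: c}
end

# side-effectful execution through a container process:
container = Workflow.Repository.start_container(Counter, "counter-" <> UUID.uuid4)
Workflow.Container.process_message(container, %Counter.Add{quantity: 7})
Workflow.Container.get_data(container) # => %Counter{counter: 7}
```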
--------------------------------------------------------------------------------
/config/config.exs:
--------------------------------------------------------------------------------
use Mix.Config
import_config "#{Mix.env}.exs"

--------------------------------------------------------------------------------
/config/dev.exs:
--------------------------------------------------------------------------------
use Mix.Config

#-------------------------------
# WORKFLOW
#-------------------------------


config :workflow,
  adapter: Workflow.Extreme.Adapter

#-------------------------------
# EXTREME [Eventstore Driver]
#-------------------------------


config :extreme, :event_store,
  db_type: :node,
  host: "localhost",
  port: 1113,
  username: "admin",
  password: "changeit",
  reconnect_delay: 2_000,
  max_attempts: :infinity

#-------------------------------
# LOGGER
#-------------------------------


config :logger,
  backends: [{LoggerFileBackend, :log_info},
             {LoggerFileBackend, :log_error},
             {LoggerFileBackend, :log_debug},
             :console]
  # handle_otp_reports: true,
  # handle_sasl_reports: true

config :logger, :console,
  format: "\n$time $metadata[$level] $levelpad$message\n",
  colors: [info: :magenta],
  metadata: [:module, :function, :id, :uuid]

config :logger, :log_debug,
  path: "./logs/debug.log",
  metadata: [:pid, :application, :module, :file, :function, :line, :id, :uuid],
  format: "$dateT$time $node $metadata[$level] $levelpad$message\n",
  level: :debug

config :logger, :log_info,
  metadata: [:module, :pid, :id, :uuid],
  format: "$dateT$time $node $metadata[$level] $levelpad$message\n",
  path: "./logs/info.log",
  level: :info

config :logger, :log_error,
  path: "./logs/error.log",
  metadata: [:pid, :application, :module, :file, :line, :id, :uuid],
  format: "$dateT$time $node $metadata[$level] $levelpad$message\n",
  level: :error

--------------------------------------------------------------------------------
/config/prod.exs:
--------------------------------------------------------------------------------
use Mix.Config


#-------------------------------
# LOGGER
#-------------------------------

--------------------------------------------------------------------------------
/config/test.exs:
--------------------------------------------------------------------------------
use Mix.Config

#-------------------------------
# WORKFLOW
#-------------------------------


config :workflow,
  adapter: Workflow.Extreme.Adapter

#-------------------------------
# EXTREME [Eventstore Driver]
#-------------------------------


config :extreme, :event_store,
  db_type: :node,
  host: "localhost",
  port: 1113,
  username: "admin",
  password: "changeit",
  reconnect_delay: 2_000,
  max_attempts: :infinity


#-------------------------------
# LOGGER
#-------------------------------

config :logger,
  backends: [{LoggerFileBackend, :log_info},
             {LoggerFileBackend, :log_error},
             {LoggerFileBackend, :log_debug},
             :console]
  # handle_otp_reports: true,
  # handle_sasl_reports: true
config :logger, :console,
  format: "\n$time $metadata[$level] $levelpad$message\n",
  colors: [info: :magenta],
  metadata: [:module, :function, :id, :uuid],
  level: :info

config :logger, :log_debug,
  path: "./logs/debug.log",
  metadata: [:pid, :application, :module, :file, :function, :line, :id, :uuid],
  format: "$dateT$time $node $metadata[$level] $levelpad$message\n",
  level: :debug

config :logger, :log_info,
  metadata: [:module, :pid, :id, :uuid],
  format: "$dateT$time $node $metadata[$level] $levelpad$message\n",
  path: "./logs/info.log",
  level: :info

config :logger, :log_error,
  path: "./logs/error.log",
  metadata: [:pid, :application, :module, :file, :line, :id, :uuid],
  format: "$dateT$time $node $metadata[$level] $levelpad$message\n",
  level: :error

--------------------------------------------------------------------------------
/lib/adapter.ex:
--------------------------------------------------------------------------------
defmodule Workflow.Adapter do
  @moduledoc """
  Implement the callbacks below to add a new data storage for your events and snapshots.
  The persistence modules will retrieve a plain list of events, ready for replay, instead
  of dealing with storage-specific messages. It is the adapter's responsibility to filter
  and return only the data the engine needs.
  """

  @type stream_id :: String.t
  @type start_version :: number
  @type read_event_batch_size :: number
  @type batch :: [struct()]
  @type stream :: String.t # the stream ID
  @type reason :: atom
  @type expected_version :: number
  @type event_data :: [struct()]



  @doc "Load a batch of events from storage"
  @callback read_stream_forward(stream_id, start_version, read_event_batch_size) ::
    {:ok, batch} | {:error, reason}

  @doc "Append a list of events to the stream at a specific expected version"
  @callback append_to_stream(stream_id, expected_version, event_data) ::
    :ok | {:error, reason}



end
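
These two callbacks are all a storage backend has to provide. A minimal in-memory sketch (a hypothetical `Workflow.InMemory.Adapter`, assuming versions simply count appended events and that reads are 1-based, as `Workflow.Persistence` expects):

```elixir
defmodule Workflow.InMemory.Adapter do
  @moduledoc "Illustrative in-memory adapter; keeps every stream in an Agent."
  @behaviour Workflow.Adapter

  def start_link, do: Agent.start_link(fn -> %{} end, name: __MODULE__)

  # succeeds only when the stream currently holds exactly `expected_version` events
  def append_to_stream(stream_id, expected_version, events) do
    Agent.get_and_update(__MODULE__, fn streams ->
      stream = Map.get(streams, stream_id, [])
      if length(stream) == expected_version do
        {:ok, Map.put(streams, stream_id, stream ++ events)}
      else
        {{:error, :wrong_expected_version}, streams}
      end
    end)
  end

  # start_version is 1-based, matching Workflow.Persistence.rebuild_from_events/2
  def read_stream_forward(stream_id, start_version, batch_size) do
    case Agent.get(__MODULE__, &Map.get(&1, stream_id)) do
      nil    -> {:error, :stream_not_found}
      events -> {:ok, events |> Enum.drop(start_version - 1) |> Enum.take(batch_size)}
    end
  end
end
```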
--------------------------------------------------------------------------------
/lib/container.ex:
--------------------------------------------------------------------------------
defmodule Workflow.Container do
  @moduledoc """
  GenServer to hold aggregates or process managers
  """
  use GenServer
  require Logger

  alias Workflow.Container
  alias Workflow.Persistence

  defstruct [
    module: nil,
    uuid: nil,
    data: nil,
    version: nil
  ]

  ## API

  def start_link(module, uuid) do
    GenServer.start_link(__MODULE__, %Container{
      module: module,
      uuid: uuid
    })
  end

  def get_data(container), do:
    GenServer.call(container, {:data})

  def get_state(container), do:
    GenServer.call(container, {:state})

  def process_message(container, message), do:
    GenServer.call(container, {:process_message, message})


  ## CALLBACKS

  def init(%Container{} = state) do
    GenServer.cast(self, {:restore})
    {:ok, state}
  end

  def handle_call({:data}, _from, %Container{data: data} = state), do:
    {:reply, data, state}

  def handle_call({:state}, _from, %Container{} = state), do:
    {:reply, state, state}

  # replay the events from the eventstore db
  def handle_cast({:restore}, %Container{module: module} = state) do
    state = Persistence.rebuild_from_events(%Container{state |
      version: 0,
      data: struct(module) # empty data structure to be filled
    })
    {:noreply, state}
  end

  # handle a command (for an aggregate) or an event (for a process manager)
  def handle_call({:process_message, message}, _from, %Container{} = state) do
    {reply, state} = process(message, state)
    {:reply, reply, state}
  end

  ## INTERNALS

  defp process(message,
    %Container{uuid: uuid, version: expected_version, data: data, module: module} = state) do
    event = module.handle(data, message) # process message for an aggregate or process manager
    wrapped_events = List.wrap(event)

    new_data = Persistence.apply_events(module, data, wrapped_events)
    Persistence.persist_events(wrapped_events, uuid, expected_version)
    state = %Container{state |
      data: new_data,
      version: expected_version + length(wrapped_events)
    }
    {:ok, state}
  end

  # update the process instance's state by applying the event
  def mutate_state(module, data, event), do:
    module.apply(data, event)


end

--------------------------------------------------------------------------------
/lib/dispatcher.ex:
--------------------------------------------------------------------------------
defmodule Workflow.Dispatcher do
  @moduledoc"""
  Dispatch commands and events (messages) to aggregates and process managers
  """
  require Logger

  alias Workflow.Repository
  alias Workflow.Container

  def dispatch(message, module, uuid, _timeout) do
    Logger.debug(fn -> "attempting to dispatch message: #{inspect message}, to module: #{inspect module}" end)
    container = Repository.start_container(module, uuid)
    Container.process_message(container, message) # TODO: thread the timeout through the GenServer call
  end


end
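
A sketch of a dispatch call, using the `Account` domain from `test/` (the uuid is illustrative); the dispatcher resolves the container and forwards the command:

```elixir
alias Workflow.Domain.Account
alias Workflow.Domain.Account.Commands.OpenAccount

Workflow.Dispatcher.dispatch(
  %OpenAccount{account_number: "acc-123", initial_balance: 100},
  Account,           # module implementing handle/2 and apply/2
  "account-acc-123", # stream / container uuid
  5_000              # timeout, not yet threaded through (see TODO above)
)
```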
--------------------------------------------------------------------------------
/lib/extreme/adapter.ex:
--------------------------------------------------------------------------------
defmodule Workflow.Extreme.Adapter do
  require Logger
  @moduledoc """
  Interface with the Extreme driver to save events to and read events from Eventstore.
  Note that the engine supervisor starts the driver, naming it Workflow.Extreme.
  """
  alias Workflow.Extreme.Mapper
  alias Extreme.Messages.WriteEventsCompleted

  @behaviour Workflow.Adapter
  @extreme Workflow.Extreme # the registered name of the driver process

  @type aggregate_uuid :: String.t
  @type start_version :: integer()
  @type batch_size :: integer()
  @type batch :: list()
  @type reason :: atom()
  @type read_event_batch_size :: integer()


  @doc "Save a list of events to the stream."
  def append_to_stream(stream_id, expected_version, pending_events) do
    message = Mapper.map_write_events(stream_id, pending_events)
    version = expected_version
    # attention, Erlang-style pattern matching (^): the write must land
    # exactly at the version we expected, otherwise this match fails
    {:ok, %WriteEventsCompleted{first_event_number: ^version}} =
      Extreme.execute(@extreme, message)
    :ok
  end


  @doc "Read a stream, transforming messages into an event list ready for replay"
  def read_stream_forward(stream_id, start_version, read_event_batch_size) do
    message = Mapper.map_read_stream(stream_id, start_version, read_event_batch_size)
    case Extreme.execute(@extreme, message) do
      {:ok, events} -> Mapper.extract_events({:ok, events})
      {:error, reason, _} -> {:error, reason}
    end
  end

end

--------------------------------------------------------------------------------
/lib/extreme/mapper.ex:
--------------------------------------------------------------------------------
defmodule Workflow.Extreme.Mapper do
  @moduledoc """
  Map raw events to event data structs ready to be persisted to the event store.
  """
  # serialization alias
  alias Workflow.Extreme.Serialization

  # extreme aliases
  alias Extreme.Messages.ReadStreamEvents
  alias Extreme.Messages.WriteEvents
  alias Extreme.Messages.NewEvent

  def map_to_event_data(events, correlation_id) when is_list(events) do
    Enum.map(events, &map_to_event_data(&1, correlation_id))
  end


  def extract_events({:ok, response}), do: {:ok, Enum.map(response.events, &extract_data/1)}
  def extract_events({:error, _}), do: {:error, :not_found}
  def extract_events({:error, _, _}), do: {:error, :not_found}

  # rebuild the struct from the string stored in the eventstore
  def extract_data(message) do
    st = message.event.event_type |> make_alias |> struct
    message.event.data |> deserialize(st)
  end

  # transforms an "Elixir.Jim" string into the Jim module alias
  def make_alias(name) do
    name_s = String.to_atom(name)
    ast = {:__aliases__, [alias: false], [name_s]}
    {result, _} = Code.eval_quoted(ast)
    result
  end

  defp deserialize(data, struct \\ nil),
    do: Serialization.decode(data, struct)

  @doc "create a read stream message"
  def map_read_stream(stream_id, from_event_number, max_count) do
    %ReadStreamEvents{
      event_stream_id: stream_id,
      from_event_number: from_event_number,
      max_count: max_count,
      resolve_link_tos: true,
      require_master: false
    }
  end

  @doc "create a write message for a list of events"
  def map_write_events(stream, events) do
    proto_events = Enum.map(events, &create_event/1) # map the list of structs to event messages
    WriteEvents.new(
      event_stream_id: stream,
      expected_version: -2, # -2 means "any version" in the Eventstore protocol
      events: proto_events,
      require_master: false
    )
  end

  # create one event message based on a struct
  defp create_event(event) do
    NewEvent.new(
      event_id: Extreme.Tools.gen_uuid(),
      event_type: to_string(event.__struct__),
      data_content_type: 0,
      metadata_content_type: 0,
      data: Serialization.encode(event),
      meta: ""
    )
  end

end
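
How a struct survives the round trip: `create_event/1` stores the struct's module name as `event_type` and its encoded fields as `data`; `extract_data/1` later rebuilds an empty struct from that name and lets the decoder fill it. A condensed illustration with a hypothetical `Deposited` event:

```elixir
defmodule Deposited, do: defstruct amount: 0

alias Workflow.Extreme.{Mapper, Serialization}

event = %Deposited{amount: 42}
type  = to_string(event.__struct__) # "Elixir.Deposited"
data  = Serialization.encode(event) # ~s({"amount":42})

# the reverse direction, as extract_data/1 does it:
empty   = type |> Mapper.make_alias() |> struct()
rebuilt = Serialization.decode(data, empty)
# => %Deposited{amount: 42}
```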
--------------------------------------------------------------------------------
/lib/extreme/router.ex:
--------------------------------------------------------------------------------
defmodule Workflow.Extreme.Router do
  require Logger
  use GenServer

  # def start_link(extreme, last_processed_event), do:
  #   GenServer.start_link(__MODULE__, {extreme, last_processed_event})

  def start_link(:ok, extreme), do:
    GenServer.start_link(__MODULE__, {:ok, extreme}, [name: Workflow.Router])

  def init({:ok, extreme}) do
    stream = "storage-test-02-660c072d-c726-4a25-8e09-26b628bfb7af"
    _stream2 = "$cd-persistence"
    state = %{event_store: extreme, stream: stream, last_event: 5}
    GenServer.cast(self, :subscribe)
    {:ok, state}
  end

  # def init({extreme, last_processed_event}) do
  #   stream = "people"
  #   state = %{ event_store: extreme, stream: stream, last_event: last_processed_event }
  #   GenServer.cast self, :subscribe
  #   {:ok, state}
  # end

  def handle_cast(:subscribe, state) do
    Logger.debug "subscribing to stream"
    # read only unprocessed events and stay subscribed
    # {:ok, subscription} =
    #   Extreme.read_and_stay_subscribed(state.event_store, self, state.stream, state.last_event + 1)
    {:ok, _subscription} =
      Extreme.read_and_stay_subscribed(state.event_store, self, "$ce-persistence", 0)
    # we want to monitor when the subscription crashes so we can resubscribe
    {:noreply, state}
    # ref = Process.monitor subscription
    # {:noreply, %{state | subscription_ref: ref}}
  end

  # def handle_info({:DOWN, ref, :process, _pid, _reason}, %{subscription_ref: ref} = state) do
  #   GenServer.cast(self, :subscribe)
  #   {:noreply, state}
  # end

  def handle_info({:on_event, push}, state) do
    push.event.data
    |> process_event
    event_number = push.link.event_number
    :ok = update_last_event(state.stream, event_number)
    {:noreply, %{state | last_event: event_number}}
  end

  def handle_info(:caught_up, state) do
    Logger.debug "We are up to date!"
    {:noreply, state}
  end
  def handle_info(_msg, state), do: {:noreply, state}

  defp process_event(event), do: IO.puts("Do something with #{inspect event}")
  defp update_last_event(_stream, _event_number), do:
    IO.puts("Persist last processed event_number for stream")
end



# defmodule Workflow.Router do
#   use Extreme.Listener
#
#   # returns last processed event by MyListener on stream_name, -1 if none has been processed so far
#   defp get_last_event(stream_name), do:
#     -1
#     #IO.inspect(stream_name)
#     #DB.get_last_event MyListener, stream_name
#
#   defp process_push(push, stream_name) do
#     # for an indexed stream we need to follow push.link.event_number, otherwise push.event.event_number
#     # event_number = push.link.event_number
#     # DB.in_transaction fn ->
#     #   Logger.info "Do some processing of event #{inspect push.event.event_type}"
#     #   :ok = push.event.data
#     #   |> :erlang.binary_to_term
#     #   |> process_event(push.event.event_type)
#     #   DB.ack_event(MyListener, stream_name, event_number)
#     # end
#     {:ok, 3}
#   end
#
#   # This override is optional
#   defp caught_up, do: Logger.debug("We are up to date. YEEEY!!!")
YEEEY!!!") 107 | # 108 | # def process_event(data, "Elixir.MyApp.Events.PersonCreated") do 109 | # Logger.debug "Doing something with #{inspect data}" 110 | # :ok 111 | # end 112 | # def process_event(_, _), do: :ok # Just acknowledge events we are not interested in 113 | # end 114 | # 115 | -------------------------------------------------------------------------------- /lib/extreme/serialization.ex: -------------------------------------------------------------------------------- 1 | defmodule Workflow.Extreme.Serialization do 2 | @protocol Json # -> choose here your protocol TODO: move it to config 3 | 4 | def encode(data), 5 | do: internal_encode(@protocol, data) 6 | def decode(data, struct \\ nil), # to decode with Poison, we need the Struct 7 | do: internal_decode(@protocol, data, struct) 8 | 9 | 10 | # JSON 11 | defp internal_encode(Json, data), 12 | do: Poison.encode!(data) 13 | defp internal_decode(Json, data, struct), 14 | do: Poison.decode!(data, as: struct) 15 | 16 | # BINARY 17 | defp internal_encode(Binary, data), 18 | do: :erlang.term_to_binary(data) 19 | defp internal_decode(Binary, data, struct), 20 | do: :erlang.binary_to_term(data) 21 | 22 | end 23 | -------------------------------------------------------------------------------- /lib/extreme/supervisor.ex: -------------------------------------------------------------------------------- 1 | defmodule Workflow.Extreme.Supervisor do 2 | use Supervisor 3 | @extreme Workflow.Extreme 4 | @module __MODULE__ 5 | 6 | def start_link, do: 7 | Supervisor.start_link(@module, :ok, name: @module) 8 | 9 | 10 | def init(:ok) do 11 | event_store_settings = Application.get_env :extreme, :event_store 12 | 13 | children = [ 14 | worker(Extreme, [event_store_settings, [name: @extreme]], restart: :permanent), 15 | worker(Workflow.Extreme.Router, [:ok, @extreme]) 16 | # worker(Workflow.Router, [@extreme, -1, [name: Router]]) 17 | ] 18 | #TODO: one_for_all or one_for_one ? 19 | supervise(children, strategy: :one_for_all) # if we lost connection, we restart the router also 20 | end 21 | 22 | 23 | end 24 | -------------------------------------------------------------------------------- /lib/persistence.ex: -------------------------------------------------------------------------------- 1 | defmodule Workflow.Persistence do 2 | @read_event_batch_size 100 3 | @moduledoc """ 4 | Database side effects from Aggregates and Process Managers servers. 
--------------------------------------------------------------------------------
/lib/persistence.ex:
--------------------------------------------------------------------------------
defmodule Workflow.Persistence do
  @read_event_batch_size 100
  @moduledoc """
  Database side effects for aggregate and process-manager servers. Keeping them in a
  segregated file helps to test, debug and share the common code between them
  """

  alias Workflow.Container
  alias Workflow.Storage
  require Logger




  @typedoc "the aggregate or process manager data structure"
  @type state :: struct()
  @type events :: [struct()]
  @type uuid :: String.t
  @type reason :: atom
  @type stream :: String.t



  @doc "Rebuild if events are found; if not, return the container state with an empty data structure"
  def rebuild_from_events(%Container{} = state), do: rebuild_from_events(state, 1)
  def rebuild_from_events(%Container{uuid: uuid, module: module, data: data} = state, start_version) do
    case Storage.read_stream_forward(uuid, start_version, @read_event_batch_size) do
      {:ok, batch} ->
        batch_size = length(batch)

        # rebuild the aggregate's state from the batch of events
        data = apply_events(module, data, batch)

        state = %Container{state |
          version: start_version - 1 + batch_size,
          data: data
        }

        case batch_size < @read_event_batch_size do
          true ->
            # end of event stream for the aggregate, so return its state
            state

          false ->
            # fetch the next batch of events to apply to the updated aggregate state
            rebuild_from_events(state, start_version + @read_event_batch_size)
        end

      {:error, _} ->
        # the stream does not exist yet, so return the empty state
        state
    end
  end

  def persist_events([], _aggregate_uuid, _expected_version), do: :ok
  def persist_events(pending_events, uuid, expected_version) do
    :ok = Storage.append_to_stream(uuid, expected_version, pending_events)
  end


  @doc "Receive a module that implements the apply function, and rebuild the state from events"
  def apply_events(module, state, events), do:
    Enum.reduce(events, state, &module.apply(&2, &1))

end
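
Replay is tail-recursive over fixed-size batches: with `@read_event_batch_size 100`, a stream holding 250 events is read in three rounds, and the recursion stops as soon as a batch comes back short. A worked trace (stream id hypothetical):

```elixir
# rebuild_from_events(%Container{uuid: "acc-1", version: 0, ...}, 1)
#   read_stream_forward("acc-1", 1,   100) -> 100 events -> version 100, recurse from 101
#   read_stream_forward("acc-1", 101, 100) -> 100 events -> version 200, recurse from 201
#   read_stream_forward("acc-1", 201, 100) ->  50 events -> version 250, 50 < 100: done
```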
--------------------------------------------------------------------------------
/lib/repository.ex:
--------------------------------------------------------------------------------
defmodule Workflow.Repository do
  @moduledoc """
  Gives you a container pid. If the container is already in memory you get it from there;
  if not, all its events are replayed to reach the last state, which is cached in memory
  and handed to you. :)
  """

  def start_container(module, uuid) do
    {:ok, container} = Workflow.Supervisor.start_container(module, uuid)
    Process.monitor(container)
    container
  end

end

--------------------------------------------------------------------------------
/lib/serialization.ex:
--------------------------------------------------------------------------------
defmodule Workflow.Serialization do
  @protocol Json # -> choose your protocol here. TODO: move it to config

  def encode(data),
    do: internal_encode(@protocol, data)
  def decode(data, struct \\ nil), # to decode with Poison, we need the struct
    do: internal_decode(@protocol, data, struct)


  # JSON
  defp internal_encode(Json, data),
    do: Poison.encode!(data)
  defp internal_decode(Json, data, struct),
    do: Poison.decode!(data, as: struct)

  # BINARY
  defp internal_encode(Binary, data),
    do: :erlang.term_to_binary(data)
  defp internal_decode(Binary, data, _struct),
    do: :erlang.binary_to_term(data)

end
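
`Workflow.Storage` (next file) looks the adapter up from config on every call, so swapping storage backends is a per-environment, one-line change; for example (the in-memory module is the hypothetical sketch from `lib/adapter.ex` above):

```elixir
# config/dev.exs -- the real Eventstore driver
config :workflow,
  adapter: Workflow.Extreme.Adapter

# config/test.exs -- could point at the illustrative in-memory adapter instead
config :workflow,
  adapter: Workflow.InMemory.Adapter
```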
--------------------------------------------------------------------------------
/lib/storage.ex:
--------------------------------------------------------------------------------
defmodule Workflow.Storage do
  require Logger
  @moduledoc """
  Proxy API layer providing a facade over different data storages, with optimization
  logic such as snapshots, batch reading, etc.
  The idea is to read the chosen storage from the config file
  and route the call to the specific implementation.
  http://elixir-lang.org/docs/stable/elixir/typespecs
  """
  # defaults
  @default_adapter Workflow.Extreme.Adapter
  @read_event_batch_size 100

  # types
  @type position :: integer
  @type result :: {position, String.t}
  @type stream :: String.t
  @type event :: struct()
  @type events :: [struct()]


  @doc "Receive internal event data to append. Message building is an adapter task."
  def append_to_stream(stream_id, expected_version, events), do:
    adapter().append_to_stream(stream_id, expected_version, events)


  @doc "Read pure events from a stream"
  def read_stream_forward(stream_id, start_version, read_event_batch_size \\ @read_event_batch_size), do:
    adapter().read_stream_forward(stream_id, start_version, read_event_batch_size)


  # get the chosen db adapter from the config files
  defp adapter(), do:
    Application.get_env(:workflow, :adapter, @default_adapter)


end

--------------------------------------------------------------------------------
/lib/supervisor.ex:
--------------------------------------------------------------------------------
defmodule Workflow.Supervisor do
  @module __MODULE__
  @moduledoc """
  Supervise zero, one or more containers
  """
  use Supervisor
  require Logger

  def start_link, do:
    Supervisor.start_link(__MODULE__, :ok, name: @module)

  def start_container(module, uuid) do
    Logger.debug(fn -> "starting process for `#{module}` with uuid #{uuid}" end)
    Supervisor.start_child(Workflow.Supervisor, [module, uuid])
  end

  def init(:ok) do
    children = [
      worker(Workflow.Container, [], restart: :permanent),
    ]

    supervise(children, strategy: :simple_one_for_one)
  end
end

--------------------------------------------------------------------------------
/lib/workflow.ex:
--------------------------------------------------------------------------------
defmodule Workflow do
  use Application
  require Logger

  @doc "Start the supervisor and activate its handlers"
  def start(_type, _args) do
    import Supervisor.Spec, warn: false
    children = [
      supervisor(Workflow.Extreme.Supervisor, [], restart: :permanent),
      supervisor(Workflow.Supervisor, [], restart: :permanent),
    ]
    opts = [strategy: :one_for_one, name: Workflow.Application]
    Supervisor.start_link(children, opts)
  end


end

--------------------------------------------------------------------------------
/mix.exs:
--------------------------------------------------------------------------------
defmodule Engine.Mixfile do
  use Mix.Project

  def project do
    [app: :workflow,
     version: "0.2.0",
     elixir: "~> 1.3",
     description: description(),
     package: package(),
     build_embedded: Mix.env == :prod,
     start_permanent: Mix.env == :prod,
     deps: deps()]
  end


  def application do
    [applications: [:logger],
     mod: {Workflow, []}]
  end

  defp deps do
    [
      {:extreme, "~> 0.7.1"}, # eventstore driver
      # utils
      {:uuid, "~> 1.1.4"},
      {:logger_file_backend, "~> 0.0.9"}, # save logs to file [remember to create a ./logs directory!]
      # DEVs
      # {:dogma, "~> 0.1.7", only: [:dev]}, # code linter
      {:dialyxir, "~> 0.3.5", only: [:dev]}, # simplify dialyzer; type: mix dialyzer.plt first
      {:mix_test_watch, "~> 0.2", only: :dev}, # use mix test.watch for TDD development
      {:ex_doc, ">= 0.0.0", only: :dev}
    ]
  end


  defp package do
    [# These are the default files included in the package
     name: :workflow,
     files: ["lib", "test", "config", "mix.exs", "README*", "LICENSE*"],
     maintainers: ["Henry Hazan", "Shmuel Kalmus"],
     licenses: ["MIT"],
     links: %{"GitHub" => "https://github.com/work-capital/workflow"}]
  end


  defp description do
    """
    Building Blocks to write CQRS Event Sourcing apps in Elixir
    """
  end


end
--------------------------------------------------------------------------------
/test/domain/account/account.ex:
--------------------------------------------------------------------------------
defmodule Workflow.Domain.Account do
  defstruct [
    account_number: nil,
    balance: 0,
    state: nil,
  ]

  alias Workflow.Domain.Account

  defmodule Commands do
    defmodule OpenAccount, do: defstruct [:account_number, :initial_balance]
    defmodule DepositMoney, do: defstruct [:account_number, :transfer_uuid, :amount]
    defmodule WithdrawMoney, do: defstruct [:account_number, :transfer_uuid, :amount]
    defmodule CloseAccount, do: defstruct [:account_number]
  end

  defmodule Events do
    defmodule AccountOpened, do: defstruct [:account_number, :initial_balance]
    defmodule MoneyDeposited, do: defstruct [:account_number, :transfer_uuid, :amount, :balance]
    defmodule MoneyWithdrawn, do: defstruct [:account_number, :transfer_uuid, :amount, :balance]
    defmodule AccountOverdrawn, do: defstruct [:account_number, :balance]
    defmodule AccountClosed, do: defstruct [:account_number]
  end

  alias Commands.{OpenAccount,DepositMoney,WithdrawMoney,CloseAccount}
  alias Events.{AccountOpened,MoneyDeposited,MoneyWithdrawn,AccountOverdrawn,AccountClosed}

  def handle(%Account{state: nil},
    %OpenAccount{account_number: account_number, initial_balance: initial_balance})
    when is_number(initial_balance) and initial_balance > 0 do
    %AccountOpened{account_number: account_number, initial_balance: initial_balance}
  end

  def handle(%Account{state: :active, balance: balance},
    %DepositMoney{account_number: account_number, transfer_uuid: transfer_uuid, amount: amount})
    when is_number(amount) and amount > 0 do
    balance = balance + amount
    %MoneyDeposited{account_number: account_number, transfer_uuid: transfer_uuid, amount: amount, balance: balance}
  end

  def handle(%Account{state: :active, balance: balance},
    %WithdrawMoney{account_number: account_number, transfer_uuid: transfer_uuid, amount: amount})
    when is_number(amount) and amount > 0 do
    case balance - amount do
      balance when balance < 0 ->
        [
          %MoneyWithdrawn{account_number: account_number, transfer_uuid: transfer_uuid, amount: amount, balance: balance},
          %AccountOverdrawn{account_number: account_number, balance: balance},
        ]
      balance ->
        %MoneyWithdrawn{account_number: account_number, transfer_uuid: transfer_uuid, amount: amount, balance: balance}
    end
  end

  def handle(%Account{state: :active},
    %CloseAccount{account_number: account_number}), do: %AccountClosed{account_number: account_number}

  # state mutators

  def apply(%Account{} = state, %AccountOpened{account_number: account_number, initial_balance: initial_balance}) do
    %Account{state |
      account_number: account_number,
      balance: initial_balance,
      state: :active,
    }
  end

  def apply(%Account{} = state, %MoneyDeposited{balance: balance}), do: %Account{state | balance: balance}
  def apply(%Account{} = state, %MoneyWithdrawn{balance: balance}), do: %Account{state | balance: balance}
  def apply(%Account{} = state, %AccountOverdrawn{}), do: state
  def apply(%Account{} = state, %AccountClosed{}) do
    %Account{state |
      state: :closed,
    }
  end
end
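
Note that `handle/2` may return a single event or a list: overdrawing yields two events, which `Workflow.Container` wraps with `List.wrap/1` and folds over `apply/2`. A small simulation (values illustrative):

```elixir
alias Workflow.Domain.Account
alias Workflow.Domain.Account.Commands.WithdrawMoney

state  = %Account{state: :active, balance: 50}
events = Account.handle(state, %WithdrawMoney{account_number: "acc-1", transfer_uuid: "t-1", amount: 80})
# => [%MoneyWithdrawn{balance: -30, ...}, %AccountOverdrawn{balance: -30, ...}]

new_state = Enum.reduce(List.wrap(events), state, &Account.apply(&2, &1))
# => %Account{balance: -30, state: :active, ...}
```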
--------------------------------------------------------------------------------
/test/domain/conference/commands.ex:
--------------------------------------------------------------------------------
defmodule Workflow.Domain.Conference.Commands do
  defmodule MakeSeatReservation,
    do: defstruct [uuid: nil]
  defmodule CommitSeatReservation,
    do: defstruct [uuid: nil]
  defmodule CancelSeatReservation,
    do: defstruct [uuid: nil]
end

--------------------------------------------------------------------------------
/test/persistence_test.exs:
--------------------------------------------------------------------------------
defmodule Workflow.PersistenceTest do
  use ExUnit.Case

  defmodule ExampleAggregate do
    defstruct [
      items: [],
      last_index: 0,
    ]

    # command & event
    defmodule Commands, do: defmodule AppendItems, do: defstruct [count: 0]
    defmodule Events, do: defmodule ItemAppended, do: defstruct [index: nil]

    alias Commands.{AppendItems}
    alias Events.{ItemAppended}

    def append_items(%ExampleAggregate{last_index: last_index}, count) do
      Enum.map(1..count, fn index ->
        %ItemAppended{index: last_index + index}
      end)
    end

    def append_item(%ExampleAggregate{last_index: last_index}, %AppendItems{count: _count}) do
      %ItemAppended{index: last_index + 1}
    end

    # state mutators
    def apply(%ExampleAggregate{items: items} = state, %ItemAppended{index: index}) do
      %ExampleAggregate{state |
        items: items ++ [index],
        last_index: index,
      }
    end
  end

  alias Workflow.Persistence



  test "Apply events for a data structure" do

    stream_id = "persistence-test-01-" <> UUID.uuid4
    aggregate = %ExampleAggregate{}

    events = ExampleAggregate.append_items(aggregate, 6)
    :ok = Persistence.persist_events(events, stream_id, 0)
    state = Persistence.apply_events(ExampleAggregate, aggregate, events)

    last_state = %ExampleAggregate{items: [1, 2, 3, 4, 5, 6], last_index: 6}
    assert state == last_state

    events2 = ExampleAggregate.append_items(aggregate, 2)
    :ok = Persistence.persist_events(events2, stream_id, 6)
    state2 = Persistence.apply_events(ExampleAggregate, aggregate, events2)

    last_state2 = %ExampleAggregate{items: [1, 2], last_index: 2}
    assert state2 == last_state2

  end
end
--------------------------------------------------------------------------------
/test/repository_test.exs:
--------------------------------------------------------------------------------
defmodule RepositoryTest do
  use ExUnit.Case

  alias Workflow.Container
  alias Workflow.Repository

  defmodule CounterAggregate do
    defstruct [
      counter: 0
    ]

    # commands & events
    defmodule Commands do
      defmodule Add, do: defstruct [quantity: nil]
      defmodule Remove, do: defstruct [quantity: nil]
    end

    defmodule Events do
      defmodule Added, do: defstruct [counter: nil]
      defmodule Removed, do: defstruct [counter: nil]
    end

    # aliases
    alias Commands.{Add, Remove}
    alias Events.{Added, Removed}

    # handlers
    def handle(%CounterAggregate{counter: counter}, %Add{quantity: quantity}) do
      new_counter = counter + quantity
      %Added{counter: new_counter}
    end

    def handle(%CounterAggregate{counter: counter}, %Remove{quantity: quantity}) do
      new_counter = counter - quantity
      %Removed{counter: new_counter}
    end

    # state mutators
    def apply(%CounterAggregate{} = state, %Added{counter: counter}), do:
      %CounterAggregate{state | counter: counter}

    def apply(%CounterAggregate{} = state, %Removed{counter: counter}), do:
      %CounterAggregate{state | counter: counter}
  end

  alias CounterAggregate.Commands.{Add, Remove}


  test "simulation using pure functional data structures" do
    # generate event
    ev1 = %CounterAggregate{}
    |> CounterAggregate.handle(%Add{quantity: 7})

    # apply
    c = %CounterAggregate{}
    |> CounterAggregate.apply(ev1)

    # generate event over the last state
    ev2 = c |> CounterAggregate.handle(%Remove{quantity: 3})

    # apply
    c2 = %CounterAggregate{}
    |> CounterAggregate.apply(ev2)

    assert c2 == %CounterAggregate{counter: 4}
  end


  test "simulate using side effects" do
    stream_id = "repository-test-01-" <> UUID.uuid4
    container = Repository.start_container(CounterAggregate, stream_id)
    # process two commands
    Container.process_message(container, %Add{quantity: 7})
    Container.process_message(container, %Remove{quantity: 3})
    # get state data
    data = Container.get_data(container)
    _state = Container.get_state(container)

    assert data == %CounterAggregate{counter: 4} # 7 - 3 = 4
  end


end
--------------------------------------------------------------------------------
/test/storage_test.exs:
--------------------------------------------------------------------------------
defmodule Workflow.StorageTest do
  use ExUnit.Case

  #import Commanded.Enumerable, only: [pluck: 2]
  #alias Commanded.Aggregates.{Registry,Aggregate}
  alias Workflow.Storage

  defmodule ExampleAggregate do
    defstruct [
      items: [],
      last_index: 0,
    ]

    # command & event
    defmodule Commands, do: defmodule AppendItems, do: defstruct [count: 0]
    defmodule Events, do: defmodule ItemAppended, do: defstruct [index: nil]

    alias Events.{ItemAppended}

    def append_items(%ExampleAggregate{last_index: last_index}, count), do:
      Enum.map(1..count, fn index -> %ItemAppended{index: last_index + index} end)

    # state mutators
    def apply(%ExampleAggregate{items: items} = state, %ItemAppended{index: index}) do
      %ExampleAggregate{state |
        items: items ++ [index],
        last_index: index,
      }
    end
  end


  test "should append events to stream" do
    stream_id = "storage-test-01-" <> UUID.uuid4
    evts = ExampleAggregate.append_items(%ExampleAggregate{last_index: 0}, 9)
    res = Storage.append_to_stream(stream_id, 0, evts)
    assert res == :ok
    # again
    evts2 = ExampleAggregate.append_items(%ExampleAggregate{last_index: 9}, 3)
    res2 = Storage.append_to_stream(stream_id, 9, evts2)
    assert res2 == :ok
  end

  test "read stream forward" do
    stream_id = "storage-test-02-" <> UUID.uuid4
    evts = ExampleAggregate.append_items(%ExampleAggregate{last_index: 0}, 9)
    :ok = Storage.append_to_stream(stream_id, 0, evts)
    res2 = Storage.read_stream_forward(stream_id, 3, 2)
    expected_res = {:ok,
      [%Workflow.StorageTest.ExampleAggregate.Events.ItemAppended{index: 4},
       %Workflow.StorageTest.ExampleAggregate.Events.ItemAppended{index: 5}]}
    assert res2 == expected_res
  end

  test "read stream forward for a non-existing stream, and generate error" do
    stream_id = UUID.uuid4
    res = Storage.read_stream_forward(stream_id, 0, 2)
    {error, _reason} = res
    assert error == :error
  end

end

--------------------------------------------------------------------------------
/test/test_helper.exs:
--------------------------------------------------------------------------------
ExUnit.start()
Code.load_file("test/domain/account/account.ex")
--------------------------------------------------------------------------------